Search results for: hyperspectral image classification using tree search algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9314

44 Influence of Ride Control Systems on the Motions Response and Passenger Comfort of High-Speed Catamarans in Irregular Waves

Authors: Ehsan Javanmardemamgheisi, Javad Mehr, Jason Ali-Lavroff, Damien Holloway, Michael Davis

Abstract:

During the last decades, a growing interest in faster and more efficient waterborne transportation has led to the development of high-speed vessels for both commercial and military applications. To satisfy this global demand, a wide variety of arrangements of high-speed craft have been proposed by designers. Among them, high-speed catamarans have proven themselves to be a suitable Roll-on/Roll-off configuration for carrying passengers and cargo due to widely spaced demihulls, a wide deck zone, and a high ratio of deadweight to displacement. To improve passenger comfort and crew workability and enhance the operability and performance of high-speed catamarans, mitigating the severity of motions and structural loads using Ride Control Systems (RCS) is essential. In this paper, a set of towing tank tests was conducted on a 2.5 m scaled model of a 112 m Incat Tasmania high-speed catamaran in irregular head seas to investigate the effect of different ride control algorithms, including linear and nonlinear versions of heave control, pitch control, and local control, on the motion responses and passenger comfort of the full-scale ship. The RCS comprised a centre-bow-fitted T-Foil and two transom-mounted stern tabs. All the experiments were conducted at the Australian Maritime College (AMC) towing tank at a model speed of 2.89 m/s (37 knots full scale), a modal period of 1.5 s (10 s full scale), and two significant wave heights of 60 mm and 90 mm, representing full-scale wave heights of 2.7 m and 4 m, respectively. Spectral analyses were performed using Welch's power spectral density method on the vertical motion time records of the catamaran model to calculate heave and pitch Response Amplitude Operators (RAOs).
Then, noting that passenger discomfort arises from vertical accelerations, and that the vertical accelerations vary at different longitudinal locations within the passenger cabin due to variations in the amplitude and relative phase of the pitch and heave motions, the vertical accelerations were calculated at three longitudinal locations (LCG, T-Foil, and stern tabs). Finally, frequency-weighted Root Mean Square (RMS) vertical accelerations were calculated to estimate the Motion Sickness Dose Value (MSDV) of the ship based on ISO 2631 recommendations. It was demonstrated that in small seas, implementing a nonlinear pitch control algorithm reduces the peak pitch motions by 41%, the vertical accelerations at the forward location by 46%, and motion sickness at the forward position by around 20%, which provides great potential for further improvement in passenger comfort, crew workability, and operability of high-speed catamarans.
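The spectral-analysis step (Welch power spectral densities of the wave and motion records, then the RAO as a spectral ratio) can be sketched as follows; the sampling rate, signal frequencies, and amplitudes are hypothetical stand-ins for the towing-tank records, not the experimental values.

```python
import numpy as np
from scipy.signal import welch

# Synthetic excitation and response records (illustrative only)
fs = 100.0                                   # sampling rate, Hz
t = np.arange(0.0, 60.0, 1.0 / fs)
rng = np.random.default_rng(0)
wave = np.sin(2*np.pi*0.67*t) + 0.1*rng.standard_normal(t.size)         # ~1.5 s modal period
heave = 0.8*np.sin(2*np.pi*0.67*t + 0.3) + 0.1*rng.standard_normal(t.size)

# Welch power spectral densities of excitation and response
f, S_wave = welch(wave, fs=fs, nperseg=2048)
_, S_heave = welch(heave, fs=fs, nperseg=2048)

# RAO magnitude as the square root of the spectral ratio, evaluated only
# where the wave spectrum carries meaningful energy
mask = S_wave > 1e-3 * S_wave.max()
rao = np.sqrt(S_heave[mask] / S_wave[mask])
```

With these synthetic signals, the RAO recovered over the energetic band is close to the imposed heave/wave amplitude ratio of 0.8.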

Keywords: high-speed catamarans, ride control system, response amplitude operators, vertical accelerations, motion sickness, irregular waves, towing tank tests.

Procedia PDF Downloads 56
43 A Case Study of Brownfield Revitalization in Taiwan

Authors: Jen Wang, Wei-Chia Hsu, Zih-Sin Wang, Ching-Ping Chu, Bo-Shiou Guo

Abstract:

In the late 19th century, the Jinguashi ore deposit in northern Taiwan was discovered, accompanied by flourishing mining activities. However, tons of contaminants, including heavy metals, sulfur dioxide, and total petroleum hydrocarbons (TPH), were released into the surroundings and caused environmental problems. Site T was a copper smelter located on the coastal hill near the Jinguashi ore deposit. Over more than ten years of operation, a variety of contaminants were emitted, polluting the surrounding soil and groundwater. To vent the fumes produced by the smelting process, three stacks were built along the hill behind the factory. The sediment inside the stacks contains high concentrations of heavy metals such as arsenic, lead, and copper. Moreover, the soil around the discarded stacks suffered serious contamination as deposits leached from ruptures in the stacks. Consequently, Site T (including the factory and its surroundings) was declared a pollution remediation site, and visits and land-use activities on it are forbidden. However, the natural landscape and cultural attractions of Site T are spectacular and attract many visitors annually. Moreover, land resources are extremely precious in Taiwan, and the Taiwan Environmental Protection Administration (EPA) is actively promoting its contaminated-land revitalization policy. Therefore, this study took Site T as a case study in brownfield revitalization planning, aiming to remediate the site and reactivate its resources. Land-use suitability analysis and risk mapping were applied to develop appropriate risk management measures and a redevelopment plan for the site.
In the land-use suitability analysis, surrounding factors such as environmentally sensitive areas, biological resources, land use, contamination, culture, and landscapes were collected to assess the development potential of each area; health risk mapping was introduced to visualize the risk assessment results based on the site contamination investigation. According to the land-use suitability analysis, the site was divided into four zones: a priority area (for high-efficiency development), a secondary area (for co-development with the priority area), a conditional area (for reusing existing buildings), and a limited area (for eco-tourism and education). According to the investigation, polychlorinated biphenyls (PCBs), heavy metals, and TPH were considered the target contaminants, while oral ingestion, inhalation, and dermal contact were the major exposure pathways in the health risk assessment. According to the health risk map, the highest risk was found on the southwest and eastern sides. Based on these results, the development plan focused on zoning and land use. Site T was recommended to be divided into a public facility zone, a public architectonic art zone, a viewing zone, an existing building preservation zone, a historic building zone, and a cultural landscape zone for various purposes. In addition, risk management measures, including sustained remediation, elimination of exposure pathways, and administrative management, were applied to ensure that particular places are suitable for visiting and to protect visitors' health. The consolidated results were corroborated by analyzing aspects of law, land acquisition methods, maintenance and management, and public participation. Therefore, this study has reference value for promoting the contaminated-land revitalization policy in Taiwan.
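The zoning step can be sketched as a simple weighted-overlay suitability analysis; the raster values, factor selection, and weights below are hypothetical illustrations, not the study's GIS data.

```python
import numpy as np

# Hypothetical 5x5 raster scores (0-1) for three of the factors named in the
# abstract; the weights are illustrative, not taken from the study.
rng = np.random.default_rng(1)
contamination_risk = rng.random((5, 5))   # higher = worse
landscape_value    = rng.random((5, 5))   # higher = better
accessibility      = rng.random((5, 5))   # higher = better

# Weighted overlay: invert the risk layer so higher suitability is better
suitability = 0.5*(1 - contamination_risk) + 0.3*landscape_value + 0.2*accessibility

# Split cells into the four zones used in the study via score quartiles:
# 0 = limited, 1 = conditional, 2 = secondary, 3 = priority
q = np.quantile(suitability, [0.25, 0.5, 0.75])
zone = np.digitize(suitability, q)
```

In practice the thresholds would come from planning criteria rather than quartiles, but the overlay-then-classify structure is the same.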

Keywords: brownfield revitalization, land-use suitability analysis, health risk map, risk management

Procedia PDF Downloads 147
42 Nonlinear Homogenized Continuum Approach for Determining Peak Horizontal Floor Acceleration of Old Masonry Buildings

Authors: Andreas Rudisch, Ralf Lampert, Andreas Kolbitsch

Abstract:

It is a well-known fact among the engineering community that earthquakes with comparatively low magnitudes can cause serious damage to nonstructural components (NSCs) of buildings, even when the supporting structure performs relatively well. Past research works focused mainly on NSCs of nuclear power plants and industrial plants. Particular attention should also be given to architectural façade elements of old masonry buildings (e.g. ornamental figures, balustrades, vases), which are very vulnerable under seismic excitation. Large numbers of these historical nonstructural components (HiNSCs) can be found in highly frequented historical city centers, and in the event of failure, they pose a significant danger to persons. In order to estimate the vulnerability of acceleration-sensitive HiNSCs, the peak horizontal floor acceleration (PHFA) is used. The PHFA depends on the dynamic characteristics of the building, the ground excitation, and induced nonlinearities. Consequently, the PHFA cannot be generalized as a simple function of height. In the present research work, an extensive case study was conducted to investigate the influence of induced nonlinearity on the PHFA for old masonry buildings. Probabilistic nonlinear FE time-history analyses considering three different hazard levels were performed. A set of eighteen synthetically generated ground motions was used as input to the structure models. An elastoplastic macro-model (multiPlas) for nonlinear homogenized continuum FE calculation was calibrated at multiple scales and applied, taking specific failure mechanisms of masonry into account. The macro-model was calibrated according to the results of specific laboratory and cyclic in situ shear tests. The nonlinear macro-model is based on the concept of multi-surface rate-independent plasticity. Material damage or crack formation is detected by reducing the initial strength after failure due to shear or tensile stress.
As a result, shear forces can only be transmitted to a limited extent by friction once cracking begins. The tensile strength is reduced to zero. The first goal of the calibration was consistency of the load-displacement curves between experiment and simulation. The calibrated macro-model matches well with regard to the initial stiffness and the maximum horizontal load. Another goal was the correct reproduction of the observed crack pattern and the plastic strain activities. Again, the macro-model proved to work well in this case and shows very good correlation. The results of the case study show that there is significant scatter in the absolute distribution of the PHFA between the applied ground excitations. An absolute distribution along the normalized building height was determined in the framework of probability theory. It can be observed that the extent of nonlinear behavior varies for the three hazard levels. Due to the detailed scope of the present research work, a robust comparison with code recommendations and simplified PHFA distributions is possible. The chosen methodology offers a chance to determine the distribution of PHFA along the building height of old masonry structures. This permits a proper hazard assessment of HiNSCs under seismic loads.
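The normalization underlying a PHFA profile can be sketched as follows; the floor records and per-floor amplification factors are purely illustrative stand-ins for the probabilistic FE time-history output, not its results.

```python
import numpy as np

# Illustrative PHFA profile: peak absolute acceleration at each floor,
# normalized by the peak ground acceleration (PGA).
rng = np.random.default_rng(2)
ground = rng.standard_normal(2000)          # synthetic ground acceleration record
pga = np.max(np.abs(ground))

n_floors = 5
# Each floor record here is simply the ground motion scaled by an assumed
# amplification factor; a real analysis would use the FE response histories.
floors = [ground * (1 + 0.3 * k) for k in range(n_floors + 1)]  # k = 0 is ground level

phfa = np.array([np.max(np.abs(a)) for a in floors])
amplification = phfa / pga                  # PHFA/PGA along normalized height
```

It is this PHFA/PGA profile along the normalized height that is compared against simplified code-recommended distributions.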

Keywords: nonlinear macro-model, nonstructural components, time-history analysis, unreinforced masonry

Procedia PDF Downloads 145
41 Optimal-Based Structural Vibration Attenuation Using Nonlinear Tuned Vibration Absorbers

Authors: Pawel Martynowicz

Abstract:

Vibrations are a crucial problem for slender structures such as towers, masts, chimneys, wind turbines, bridges, high buildings, etc., which is why most of them are equipped with vibration attenuation or fatigue reduction solutions. In this work, a slender structure (i.e., a wind turbine tower-nacelle model) equipped with nonlinear, semiactive tuned vibration absorber(s) is analyzed. For the purposes of this study, magnetorheological (MR) dampers are used as semiactive actuators. Several optimal-based approaches to structural vibration attenuation are investigated against the standard ‘ground-hook’ law and passive tuned vibration absorber(s) implementations. The common approach to optimal control of nonlinear systems is offline computation of the optimal solution; however, the open-loop control so determined suffers from a lack of robustness to uncertainties (e.g., unmodelled dynamics, perturbations of external forces or initial conditions), and thus perturbation control techniques are often used. However, proper linearization may be an issue for highly nonlinear systems with implicit relations between state, co-state, and control. The main contribution of the author is the development as well as numerical and experimental verification of Pontryagin maximum-principle-based vibration control concepts that produce the actuator control input directly (not the demanded force); thus, the force-tracking algorithm, a source of control inaccuracy, is entirely omitted. These concepts, including one-step optimal control, quasi-optimal control, and the optimal-based modified ‘ground-hook’ law, can be directly implemented in online and real-time feedback control for periodic (or semi-periodic) disturbances with invariant or time-varying parameters, as well as for non-periodic, transient or random disturbances, which is a limitation of some other known solutions.
No offline calculation, excitation/disturbance assumption or vibration frequency determination is necessary; moreover, all of the nonlinear actuator (MR damper) force constraints, i.e., no active forces, lower and upper saturation limits, hysteresis-type dynamics, etc., are embedded in the control technique, thus the solution is optimal or suboptimal for the assumed actuator, respecting its limitations. Depending on the selected method variant, a moderate or decisive reduction in the computational load is possible compared to other methods of nonlinear optimal control, while assuring the quality and robustness of the vibration reduction system, as well as considering multi-pronged operational aspects, such as possible minimization of the amplitude of the deflection and acceleration of the vibrating structure, its potential and/or kinetic energy, the required actuator force, the control input (e.g., electric current in the MR damper coil) and/or stroke amplitude. The developed solutions are characterized by high vibration reduction efficiency – the obtained maximum values of the dynamic amplification factor are close to 2.0, while for the best of the passive systems, these values exceed 3.5.
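The standard ‘ground-hook’ baseline referenced above can be sketched as a minimal on-off law; the current limits are hypothetical, and the abstract's optimal-based variants modify, rather than reproduce, this logic.

```python
# Minimal sketch of the baseline on-off 'ground-hook' law: command high
# damping only when the structure's absolute velocity and the damper's
# relative velocity have the same sign. i_max/i_min are hypothetical limits.
def groundhook_current(v_struct, v_rel, i_max=2.0, i_min=0.0):
    """Return the MR damper coil current command (A)."""
    return i_max if v_struct * v_rel > 0.0 else i_min

# e.g., structure moving up while the damper extends -> full current
demand = groundhook_current(0.3, 0.1)
```

Note that the command is the coil current itself, which is the same actuator-level variable the optimal-based concepts in the abstract produce directly.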

Keywords: magnetorheological damper, nonlinear tuned vibration absorber, optimal control, real-time structural vibration attenuation, wind turbines

Procedia PDF Downloads 102
40 Cicadas: A Clinician-assisted, Closed-loop Technology, Mobile App for Adolescents with Autism Spectrum Disorders

Authors: Bruno Biagianti, Angela Tseng, Kathy Wannaviroj, Allison Corlett, Megan DuBois, Kyu Lee, Suma Jacob

Abstract:

Background: ASD is characterized by pervasive Sensory Processing Abnormalities (SPA) and social cognitive deficits that persist throughout the course of the illness and have been linked to functional abnormalities in specific neural systems that underlie the perception, processing, and representation of sensory information. SPA and social cognitive deficits are associated with difficulties in interpersonal relationships, poor development of social skills, reduced social interactions, and lower academic performance. Importantly, they can hamper the effects of established evidence-based psychological treatments—including PEERS (Program for the Education and Enrichment of Relationship Skills), a parent/caregiver-assisted, 16-week social skills intervention—which nonetheless requires a functional brain capable of assimilating and retaining information and skills. As a matter of fact, some adolescents benefit from PEERS more than others, calling for strategies to increase treatment response rates. Objective: We will present interim data on CICADAS (Care Improving Cognition for ADolescents on the Autism Spectrum)—a clinician-assisted, closed-loop technology mobile application for adolescents with ASD. Via ten mobile assessments, CICADAS captures data on sensory processing abnormalities and associated cognitive deficits. These data populate a machine learning algorithm that tailors the delivery of ten neuroplasticity-based social cognitive training (NB-SCT) exercises targeting sensory processing abnormalities. Methods: In collaboration with the Autism Spectrum and Neurodevelopmental Disorders Clinic at the University of Minnesota, we conducted a fully remote, three-arm, randomized crossover trial with adolescents with ASD to document the acceptability of CICADAS and evaluate its potential as a stand-alone treatment or as a treatment enhancer of PEERS. Twenty-four adolescents with ASD (ages 11-18) were initially randomized to 16 weeks of PEERS + CICADAS (Arm A) vs.
16 weeks of PEERS + computer games (Arm B) vs. 16 weeks of CICADAS alone (Arm C). After 16 weeks, the full battery of assessments was remotely administered. Results: We have evaluated the acceptability of CICADAS by examining adherence rates, engagement patterns, and exit survey data. We found that: 1) CICADAS is able to serve as a treatment enhancer for PEERS, inducing greater improvements in sensory processing, cognition, symptom reduction, social skills and behaviors, as well as quality of life, compared to computer games; 2) the concurrent delivery of PEERS and CICADAS induces greater improvements in study outcomes compared to CICADAS only. Conclusion: While preliminary, our results indicate that the individualized assessment and treatment approach designed in CICADAS seems effective in inducing adaptive long-term learning about social-emotional events. The CICADAS-induced enhancement of processing and cognition facilitates the application of PEERS skills in the environment of adolescents with ASD, thus improving their real-world functioning.

Keywords: ASD, social skills, cognitive training, mobile app

Procedia PDF Downloads 182
39 Modeling the Demand for the Healthcare Services Using Data Analysis Techniques

Authors: Elizaveta S. Prokofyeva, Svetlana V. Maltseva, Roman D. Zaitsev

Abstract:

Rapidly evolving modern data analysis technologies in healthcare play a large role in understanding the operation of the system and its characteristics. Nowadays, one of the key tasks in urban healthcare is to optimize resource allocation. Thus, the application of data analysis in medical institutions to solve optimization problems determines the significance of this study. The purpose of this research was to establish the dependence between the indicators of the effectiveness of a medical institution and its resources. Hospital discharges by diagnosis, hospital days of in-patients, and in-patient average length of stay were selected as the performance indicators representing the demand on a medical facility. Hospital beds by type of care, medical technology (magnetic resonance tomographs, gamma cameras, angiographic complexes, and lithotripters), and physicians characterized the resource provision of medical institutions in the developed models. The data source for the research was the open database of the statistical service Eurostat. This source was chosen because its databases contain complete and open information necessary for research tasks in the field of public health. In addition, the statistical database has a user-friendly interface that allows analytical reports to be built quickly. The study covers 28 European countries for the period from 2007 to 2016. Predictive models based on historical panel data were developed for all countries included in the study with the most accurate and complete data for the period under review. An attempt to improve the quality and the interpretability of the models was made by cluster analysis of the investigated set of countries. The main idea was to assess the similarity of the joint behavior of the variables throughout the time period under consideration, to identify groups of similar countries, and to construct separate regression models for them.
Therefore, the original time series were used as the objects of clustering. The k-medoids clustering algorithm was used, in which sampled objects themselves serve as the centers of the clusters obtained, since determining a centroid when working with time series involves additional difficulties. The number of clusters was selected using the silhouette coefficient. After the cluster analysis, it was possible to significantly improve the predictive power of the models: for example, in one of the clusters, the MAPE error was only 0.82%, which makes it possible to conclude that this forecast is highly reliable in the short term. The obtained predicted values of the developed models have a relatively low level of error and can be used to make decisions on the resource provision of the hospital with medical personnel. The research displays the strong dependencies between the demand for medical services and the modern medical equipment variable, which highlights the importance of the technological component for the successful development of a medical facility. Currently, data analysis has huge potential, which allows health services to be significantly improved. Medical institutions that are the first to introduce these technologies will certainly have a competitive advantage.
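The clustering step, with the sampled series themselves serving as cluster centers, can be sketched as a toy k-medoids run; the data, distance choice, initialization, and cluster count below are illustrative, not the study's.

```python
import numpy as np

# Two synthetic groups of short "demand" series: flat profiles and rising ones
rng = np.random.default_rng(0)
flat = rng.normal(0.0, 0.1, (5, 12))
trend = np.linspace(0, 1, 12) + rng.normal(0.0, 0.1, (5, 12))
series = np.vstack([flat, trend])

# Pairwise Euclidean distances between whole series
D = np.linalg.norm(series[:, None, :] - series[None, :, :], axis=2)

def kmedoids(D, medoids, n_iter=50):
    """Alternate assignment and medoid update; medoids are data indices."""
    medoids = np.asarray(medoids)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        new = []
        for j in range(len(medoids)):
            members = np.where(labels == j)[0]
            # new medoid: the member minimizing total distance within the cluster
            new.append(members[np.argmin(D[np.ix_(members, members)].sum(axis=0))])
        new = np.array(new)
        if np.array_equal(new, medoids):
            break
        medoids = new
    return labels, medoids

# Initialize with two far-apart series so each group gets its own medoid
labels, medoids = kmedoids(D, [0, int(np.argmax(D[0]))])
```

In the study's setting, the silhouette coefficient would then be computed for several candidate cluster counts to pick the best one.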

Keywords: data analysis, demand modeling, healthcare, medical facilities

Procedia PDF Downloads 115
38 Times2D: A Time-Frequency Method for Time Series Forecasting

Authors: Reza Nematirad, Anil Pahwa, Balasubramaniam Natarajan

Abstract:

Time series data consist of successive data points collected over a period of time. Accurate prediction of future values is essential for informed decision-making in several real-world applications, including electricity load demand forecasting, lifetime estimation of industrial machinery, traffic planning, weather prediction, and the stock market. Due to their critical relevance and wide application, there has been considerable interest in time series forecasting in recent years. However, the proliferation of sensors and IoT devices, real-time monitoring systems, and high-frequency trading data introduces significant intricate temporal variations, rapid changes, noise, and non-linearities, making time series forecasting more challenging. Classical methods such as Autoregressive Integrated Moving Average (ARIMA) and Exponential Smoothing aim to extract pre-defined temporal variations, such as trends and seasonality. While these methods are effective for capturing well-defined seasonal patterns and trends, they often struggle with the more complex, non-linear patterns present in real-world time series data. In recent years, deep learning has made significant contributions to time series forecasting. Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks, have been widely adopted for modeling sequential data. However, they often suffer from locality, making it difficult to capture local trends and rapid fluctuations. Convolutional Neural Networks (CNNs), particularly Temporal Convolutional Networks (TCNs), leverage convolutional layers to capture temporal dependencies by applying convolutional filters along the temporal dimension. Despite their advantages, TCNs struggle with capturing relationships between distant time points due to the locality of one-dimensional convolution kernels.
Transformers have revolutionized time series forecasting with their powerful attention mechanisms, effectively capturing long-term dependencies and relationships between distant time points. However, the attention mechanism may struggle to discern dependencies directly from scattered time points due to intricate temporal patterns. Lastly, Multi-Layer Perceptrons (MLPs) have also been employed, with models like N-BEATS and LightTS demonstrating success. Despite this, MLPs often face high volatility and computational complexity challenges in long-horizon forecasting. To address intricate temporal variations in time series data, this study introduces Times2D, a novel framework that integrates 2D spectrogram and derivative heatmap techniques in parallel. The spectrogram focuses on the frequency domain, capturing periodicity, while the derivative patterns emphasize the time domain, highlighting sharp fluctuations and turning points. This 2D transformation enables the utilization of powerful computer vision techniques to capture various intricate temporal variations. To evaluate the performance of Times2D, extensive experiments were conducted on standard time series datasets and compared with various state-of-the-art algorithms, including DLinear (2023), TimesNet (2023), Non-stationary Transformer (2022), PatchTST (2023), N-HiTS (2023), Crossformer (2023), MICN (2023), LightTS (2022), FEDformer (2022), FiLM (2022), SCINet (2022a), Autoformer (2021), and Informer (2021), under the same modeling conditions. The initial results demonstrated that Times2D achieves consistent state-of-the-art performance in both short-term and long-term forecasting tasks. Furthermore, the generality of the Times2D framework allows it to be applied to various tasks such as time series imputation, clustering, classification, and anomaly detection, offering potential benefits in any domain that involves sequential data analysis.
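The two parallel 2D views named above can be sketched as follows; the signal, sampling rate, and window size are illustrative, and this is only the representation step, not the full Times2D model.

```python
import numpy as np
from scipy.signal import spectrogram

# A toy series with two periodic components (frequencies are illustrative)
fs = 100.0
t = np.arange(0, 10, 1/fs)
x = np.sin(2*np.pi*1.5*t) + 0.5*np.sin(2*np.pi*6.0*t)

# View 1: spectrogram, a 2D time-frequency image capturing periodicity
f, tt, Sxx = spectrogram(x, fs=fs, nperseg=128)

# View 2: derivative "heatmap" emphasizing the time domain
d1 = np.gradient(x, 1/fs)       # first derivative: sharp fluctuations
d2 = np.gradient(d1, 1/fs)      # second derivative: turning points
deriv_map = np.stack([d1, d2])  # 2-row heatmap over time
```

Both arrays are 2D, so image-style (computer vision) backbones can consume them directly, which is the point of the transformation.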

Keywords: derivative patterns, spectrogram, time series forecasting, times2D, 2D representation

Procedia PDF Downloads 16
37 Generative Design of Acoustical Diffuser and Absorber Elements Using Large-Scale Additive Manufacturing

Authors: Saqib Aziz, Brad Alexander, Christoph Gengnagel, Stefan Weinzierl

Abstract:

This paper explores a generative design, simulation, and optimization workflow for the integration of acoustical diffuser and/or absorber geometry with embedded coupled Helmholtz resonators for full-scale 3D printed building components. Large-scale additive manufacturing in conjunction with algorithmic CAD design tools enables a vast amount of control when creating geometry. This is advantageous regarding the increasing demands of comfort standards for indoor spaces and the use of more resourceful and sustainable construction methods and materials. The presented methodology highlights these new technological advancements and offers a multimodal and integrative design solution with the potential for immediate application in the AEC industry. In principle, the methodology can be applied to a wide range of structural elements that can be manufactured by additive manufacturing processes. The current paper focuses on a case study of an application for a biaxial load-bearing beam grillage made of reinforced concrete, which allows for a variety of applications through the combination of additively prefabricated semi-finished parts and in-situ concrete supplementation. The semi-prefabricated parts or formwork bodies form the basic framework of the supporting structure and at the same time have acoustic absorption and diffusion properties that are precisely acoustically programmed for the space underneath the structure. To this end, a hybrid validation strategy is being explored using a digital and cross-platform simulation environment, verified with physical prototyping. The iterative workflow starts with the generation of a parametric design model for the acoustical geometry using the algorithmic visual scripting editor Grasshopper3D inside the building information modeling (BIM) software Revit.
Various geometric attributes (i.e., bottleneck and cavity dimensions) of the resonator are parameterized and fed to a numerical optimization algorithm, which can modify the geometry with the goal of increasing absorption at resonance and increasing the bandwidth of the effective absorption range. Using Rhino.Inside and LiveLink for Revit, the generative model was imported directly into the COMSOL Multiphysics simulation environment. The geometry was further modified and prepared for simulation in a semi-automated process. The incident and scattered pressure fields were simulated, from which the surface normal absorption coefficients were calculated. This reciprocal process was repeated to further optimize the geometric parameters. Subsequently, the numerical models were compared to a set of 3D concrete printed physical twin models, which were tested in a 0.25 m × 0.25 m impedance tube. The empirical results served to improve the starting parameter settings of the initial numerical model. The geometry resulting from the numerical optimization was finally returned to Grasshopper for further implementation in an interdisciplinary study.
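The resonance being tuned by those bottleneck and cavity parameters follows the classical Helmholtz relation; a quick numeric check under hypothetical neck and cavity dimensions (not the study's printed geometry) might look like:

```python
import numpy as np

# Classical Helmholtz resonator frequency: f0 = (c / 2*pi) * sqrt(A / (V * L_eff))
# All dimensions below are hypothetical placeholders.
c = 343.0                      # speed of sound in air, m/s
neck_d = 0.02                  # bottleneck diameter, m
A = np.pi * (neck_d / 2)**2    # neck cross-sectional area, m^2
L = 0.05                       # neck length, m
V = 1.0e-3                     # cavity volume, m^3 (1 litre)

# Approximate end correction (~0.85 * diameter for the two openings combined)
L_eff = L + 0.85 * neck_d
f0 = (c / (2*np.pi)) * np.sqrt(A / (V * L_eff))
```

This is the kind of relationship the optimizer exploits: enlarging the cavity or narrowing the neck lowers the resonance, shifting where peak absorption occurs.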

Keywords: acoustical design, additive manufacturing, computational design, multimodal optimization

Procedia PDF Downloads 136
36 Biomedical Application of Green Biosynthesis Magnetic Iron Oxide (Fe3O4) Nanoparticles Using Seaweed (Sargassum muticum) Aqueous Extract

Authors: Farideh Namvar, Rosfarizan Mohamed

Abstract:

In the field of nanotechnology, the use of various biological units instead of toxic chemicals for the reduction and stabilization of nanoparticles has received extensive attention. This use of biological entities to create nanoparticles has been designated as “green” synthesis, and it is considered to be far more beneficial for being economical, eco-friendly, and applicable to large-scale synthesis, as it operates at low pressure and low temperature with less energy input. The lack of toxic byproducts and the consequent decrease in degradation of the product make this technique preferable to physical and classical chemical methods. The variety of biomass with reducing properties makes it an ideal candidate for nanoparticle fabrication. Metal oxide nanoparticles have been said to represent a “fundamental cornerstone of nanoscience and nanotechnology” due to their variety of properties and potential applications. However, this also reflects the fact that metal oxides include many diverse types of nanoparticles with large differences in chemical composition and behaviour. In this study, iron oxide nanoparticles (Fe3O4-NPs) were synthesized using a rapid, single-step, and completely green biosynthetic method by reduction of ferric chloride solution with brown seaweed (Sargassum muticum) water extract, whose polysaccharides act as the main reducing agent and an efficient stabilizer. Antimicrobial activity against six microorganisms was tested using the well diffusion method. The resulting S-IONPs are crystalline in nature, with a cubic shape. The average particle diameter, as determined by TEM, was found to be 18.01 nm. The S-IONPs efficiently inhibited the growth of Listeria monocytogenes, Escherichia coli, and Candida species. Our favorable results suggest that S-IONPs could be a promising candidate for the development of future antimicrobial therapies.
The nature of the biosynthesis and the therapeutic potential of S-IONPs could pave the way for further research on the design of green-synthesized therapeutic agents, particularly in nanomedicine, to deal with the treatment of infections. Further studies are needed to fully characterize the toxicity and the mechanisms involved in the antimicrobial activity of these particles. The antioxidant activity of S-IONPs synthesized by the green method was measured by the ABTS (2,2'-azino-bis(3-ethylbenzothiazoline-6-sulphonic acid)) radical scavenging assay (IC50 = 1000 µg). Also, catalase gene expression, normalized to the control gene GAPDH, increased with increasing S-IONP concentration. For the anti-angiogenesis study, fertilized Ross eggs were divided into four groups: the control and three experimental groups. Gelatin sponges containing albumin were placed on the chorioallantoic membrane and soaked with different concentrations of S-IONPs. All cases were photographed using a photo stereomicroscope. The number and lengths of the vessels were measured using ImageJ software. The crown-rump length (CR) and weight of the embryo were also recorded. According to the data analysis, the number and length of the blood vessels, as well as the CR and weight of the embryos, decreased significantly compared to the control (p < 0.05) in a dose-dependent manner. Total hemoglobin, quantified as an indicator of blood vessel formation, decreased in the treated samples, showing an inhibitory effect of S-IONPs on angiogenesis.

Keywords: anti-angiogenesis, antimicrobial, antioxidant, biosynthesis, iron oxide (fe3o4) nanoparticles, sargassum muticum, seaweed

Procedia PDF Downloads 295
35 Experimental-Numerical Inverse Approaches in the Characterization and Damage Detection of Soft Viscoelastic Layers from Vibration Test Data

Authors: Alaa Fezai, Anuj Sharma, Wolfgang Mueller-Hirsch, André Zimmermann

Abstract:

Viscoelastic materials have been widely used in the automotive industry over the last few decades with different functionalities. Besides their main application as a simple and efficient surface damping treatment, they may ensure optimal operating conditions for on-board electronics as thermal interface or sealing layers. The dynamic behavior of viscoelastic materials is generally dependent on many environmental factors, the most important being temperature and strain rate or frequency. Prior to the reliability analysis of systems including viscoelastic layers, it is, therefore, crucial to accurately predict the dynamic and lifetime behavior of these materials. This includes the identification of the dynamic material parameters under critical temperature and frequency conditions, along with a precise damage localization and identification methodology. The goal of this work is twofold. The first part aims at applying an inverse viscoelastic material-characterization approach over a wide frequency range and under different temperature conditions. To this end, dynamic measurements are carried out on a single lap joint specimen using an electrodynamic shaker and an environmental chamber. The specimen consists of aluminum beams assembled to adapter plates through a viscoelastic adhesive layer. The experimental setup is reproduced in finite element (FE) simulations, and frequency response functions (FRF) are calculated. The parameters of both the generalized Maxwell model and the fractional derivatives model are identified through an optimization algorithm minimizing the difference between the simulated and the measured FRFs. The second goal of the current work is to guarantee on-line detection of damage, i.e., delamination in the viscoelastic bonding of the described specimen, during frequency-monitored end-of-life testing. 
For this purpose, an inverse technique, which determines the damage location and size based on the modal frequency shift and on the change of the mode shapes, is presented. This includes a preliminary FE model-based study correlating the delamination location and size to the change in the modal parameters, and a subsequent experimental validation achieved through dynamic measurements of specimens with different pre-generated crack scenarios, compared to the virgin specimen. The main advantage of the inverse characterization approach presented in the first part resides in its ability to adequately identify the damping and stiffness behavior of soft viscoelastic materials over a wide frequency range and under critical temperature conditions. Classic forward characterization techniques such as dynamic mechanical analysis are usually subject to limitations under critical temperature and frequency conditions due to the material behavior of soft viscoelastic materials. Furthermore, the inverse damage detection described in the second part guarantees an accurate prediction of not only the damage size but also its location using a simple test setup, and therefore outlines the significance of inverse numerical-experimental approaches in predicting the dynamic behavior of soft bonding layers applied in automotive electronics.
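The parameter-identification step described above can be illustrated with a minimal sketch: a generalized Maxwell (Prony series) complex modulus is fitted to "measured" FRF points by minimizing the squared mismatch. The model form is standard, but the single branch, the random-search minimizer, and all numerical values here are illustrative assumptions, not the optimizer or data used in the paper.

```python
import cmath
import random

def maxwell_modulus(omega, g_inf, branches):
    """Complex modulus of a generalized Maxwell model:
    G*(w) = G_inf + sum_i G_i * (i*w*tau_i) / (1 + i*w*tau_i)."""
    total = complex(g_inf)
    for g_i, tau_i in branches:
        iwt = 1j * omega * tau_i
        total += g_i * iwt / (1.0 + iwt)
    return total

def frf_mismatch(params, omegas, measured):
    """Sum of squared complex errors between model and measured FRF."""
    g_inf, g1, tau1 = params
    return sum(abs(maxwell_modulus(w, g_inf, [(g1, tau1)]) - m) ** 2
               for w, m in zip(omegas, measured))

# Synthetic "measured" FRF generated from known parameters.
true_params = (1.0, 2.0, 0.01)
omegas = [2 * cmath.pi.real * f for f in (10, 30, 100, 300, 1000)]
measured = [maxwell_modulus(w, true_params[0], [(true_params[1], true_params[2])])
            for w in omegas]

# Crude random-search minimization (a stand-in for the paper's optimizer).
random.seed(0)
best = (0.5, 0.5, 0.001)
best_err = frf_mismatch(best, omegas, measured)
for _ in range(20000):
    candidate = tuple(p * (1.0 + random.uniform(-0.1, 0.1)) for p in best)
    err = frf_mismatch(candidate, omegas, measured)
    if err < best_err:
        best, best_err = candidate, err
```

In the paper the "measured" values come from shaker tests and the model response from FE simulation; the structure of the fit, however, is the same: minimize the FRF mismatch over the material parameters.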

Keywords: damage detection, dynamic characterization, inverse approaches, vibration testing, viscoelastic layers

Procedia PDF Downloads 179
34 Artificial Intelligence for Traffic Signal Control and Data Collection

Authors: Reggie Chandra

Abstract:

Traffic accidents and traffic signal optimization are correlated. However, 70-90% of the traffic signals across the USA are not synchronized. The reason behind that is insufficient resources to create and implement timing plans. In this work, we will discuss the use of a breakthrough Artificial Intelligence (AI) technology to optimize traffic flow and collect accurate 24/7/365 traffic data using a vehicle detection system. We will discuss recent advances in Artificial Intelligence technology, how AI works in vehicle, pedestrian, and bike data collection and in creating timing plans, and the best workflow for doing so. Apart from that, this paper will showcase how Artificial Intelligence makes signal timing affordable. We will introduce a technology that uses Convolutional Neural Networks (CNN) and deep learning algorithms to detect, collect data, develop timing plans, and deploy them in the field. Convolutional Neural Networks are a class of deep learning networks inspired by the biological processes in the visual cortex. A neural net is modeled after the human brain. It consists of millions of densely connected processing nodes. It is a form of machine learning where the neural net learns to recognize vehicles through training, which is called deep learning. The well-trained algorithm overcomes most of the issues faced by other detection methods and provides nearly 100% traffic data accuracy. Through this continuous learning-based method, we can constantly update traffic patterns, generate an unlimited number of timing plans, and thus improve vehicle flow. Convolutional Neural Networks not only outperform other detection algorithms but also, in cases such as classifying objects into fine-grained categories, outperform humans. Safety is of primary importance to traffic professionals, but they don't have the studies or data to support their decisions. Currently, one-third of transportation agencies do not collect pedestrian and bike data. 
We will discuss how the use of Artificial Intelligence for data collection can help reduce pedestrian fatalities and enhance the safety of all vulnerable road users. Moreover, it provides traffic engineers with tools that allow them to unleash their potential, instead of dealing with constant complaints, snapshots of limited handpicked data, and multiple systems requiring additional adaptation work. The methodologies used and proposed in the research contain a camera model identification method based on deep Convolutional Neural Networks. The proposed application was evaluated on our data sets, acquired under a variety of daily real-world road conditions, and compared with the performance of the commonly used methods, which require collecting data by counting, evaluating and adapting it, running it through well-established algorithms, and then deploying it to the field. This work explores themes such as how technologies powered by Artificial Intelligence can benefit your community and how to translate the complex and often overwhelming benefits into a language accessible to elected officials, community leaders, and the public. Exploring such topics empowers citizens with insider knowledge about the potential of better traffic technology to save lives and improve communities. The synergies that Artificial Intelligence brings to traffic signal control and data collection are unsurpassed.
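The convolution operation at the heart of the CNN-based detection described above can be shown in miniature: a small kernel slides over an image and responds strongly where a local pattern (here, a vertical edge) is present. The tiny frame and kernel below are invented for illustration; a real detector stacks many learned kernels and layers.

```python
def conv2d(image, kernel):
    """Valid 2-D cross-correlation: the core operation a CNN layer
    applies to detect local patterns such as vehicle edges."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = sum(image[i + di][j + dj] * kernel[di][dj]
                      for di in range(kh) for dj in range(kw))
            row.append(acc)
        out.append(row)
    return out

def relu(fmap):
    """Non-linearity applied after each convolution."""
    return [[max(0.0, v) for v in row] for row in fmap]

# A vertical-edge detector applied to a tiny synthetic frame:
# the dark-to-bright transition in the middle lights up the feature map.
frame = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge_kernel = [[-1, 1],
               [-1, 1]]
features = relu(conv2d(frame, edge_kernel))
```

Training adjusts the kernel weights from labeled examples; here the weights are fixed by hand only to make the mechanism visible.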

Keywords: artificial intelligence, convolutional neural networks, data collection, signal control, traffic signal

Procedia PDF Downloads 131
33 Medical Examiner Collection of Comprehensive, Objective Medical Evidence for Conducted Electrical Weapons and Their Temporal Relationship to Sudden Arrest

Authors: Michael Brave, Mark Kroll, Steven Karch, Charles Wetli, Michael Graham, Sebastian Kunz, Dorin Panescu

Abstract:

Background: Conducted electrical weapons (CEW) are now used in 107 countries and are a common law enforcement less-lethal force practice in the United Kingdom (UK), United States of America (USA), Canada, Australia, New Zealand, and others. Use of these devices is rarely temporally associated with the occurrence of sudden arrest-related deaths (ARD). Because such deaths are uncommon, few Medical Examiners (MEs) ever encounter one, and even fewer offices have established comprehensive investigative protocols. Without sufficient scientific data, the role, if any, played by a CEW in a given case is largely supplanted by conjecture, often defaulting to a CEW-induced fatal cardiac arrhythmia. In addition to the difficulty in investigating individual deaths, the lack of information also detrimentally affects being able to define and evaluate the ARD cohort generally. More comprehensive, better information leads to better interpretation in individual cases and also to better research. The purpose of this presentation is to provide MEs with a comprehensive evidence-based checklist to assist in the assessment of CEW-ARD cases. Methods: PubMed and sociology/criminology databases were queried to find all medical, scientific, electrical, modeling, engineering, and sociology/criminology peer-reviewed literature for mentions of CEW or synonymous terms. Each paper was then individually reviewed to identify those that discussed possible bioelectrical mechanisms relating CEW to ARD. A Naranjo-type pharmacovigilance algorithm was also employed, when relevant, to identify and quantify possible direct CEW electrical myocardial stimulation. Additionally, CEW operational manuals and training materials were reviewed to allow incorporation of CEW-specific technical parameters. Results: Total relevant PubMed citations of CEWs numbered fewer than 250, and reports of death were extremely rare. Much relevant information was available from sociology/criminology databases. 
Once the relevant published papers were identified and reviewed, we compiled an annotated checklist of data that we consider critical to a thorough CEW-involved ARD investigation. Conclusion: We have developed an evidence-based checklist that can be used by MEs and their staffs to assist them in identifying, collecting, documenting, maintaining, and objectively analyzing the role, if any, played by a CEW in any specific case of sudden death temporally associated with the use of a CEW. Even in cases where the collected information is deemed by the ME as insufficient for formulating an opinion or diagnosis to a reasonable degree of medical certainty, information collected as per the checklist will often be adequate for other stakeholders to use as a basis for informed decisions. Having reviewed the appropriate materials, in a significant number of cases careful examination of the heart and brain is likely adequate. Channelopathy testing should be considered in some cases; however, it may be considered cost-prohibitive (approx. $3,000). Law enforcement agencies may want to consider establishing a reserve fund to help manage such rare cases. The expense may forestall the enormous costs associated with incident-precipitated litigation.
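The Naranjo-type scoring mentioned in the methods maps a summed questionnaire score to a causality category. The standard published Naranjo cut-offs are shown in this minimal sketch; the CEW-specific checklist items that would produce the score are the authors' own and are not reproduced here.

```python
def naranjo_category(total_score):
    """Map a summed Naranjo questionnaire score to an adverse-event
    causality category using the standard published cut-offs:
    >= 9 definite, 5-8 probable, 1-4 possible, <= 0 doubtful."""
    if total_score >= 9:
        return "definite"
    if total_score >= 5:
        return "probable"
    if total_score >= 1:
        return "possible"
    return "doubtful"
```

In practice each questionnaire item contributes a small positive or negative score (e.g., for temporal relationship or dechallenge/rechallenge), and only the sum is categorized.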

Keywords: ARD, CEW, police, TASER

Procedia PDF Downloads 328
32 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection

Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy

Abstract:

Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform by automatically extracting the features needed for the detection of facial expressions and emotions. However, deep networks require large training datasets to extract automatic features effectively. In this work, we propose an efficient emotion detection algorithm using face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by utilizing several parallel modules between the input and output of the network, each focusing on the extraction of different types of coarse features with fine-grained details to break the symmetry of the produced information. In effect, we leverage long-range dependencies, addressing one of the main drawbacks of CNNs. We extend this work by introducing a Dynamic Soft-Margin SoftMax. The conventional SoftMax suffers from reaching the gold labels too soon, which drives the model to over-fitting, because it is not able to determine adequately discriminative feature vectors for some variant class labels. We reduced the risk of over-fitting by using a dynamic rather than a static input tensor shape in the SoftMax layer, specifying a desired soft margin. In effect, this acts as a controller of how hard the model should work to push dissimilar embedding vectors apart. The proposed categorical loss has the objective of compacting same-class labels and separating different-class labels in the normalized log domain. We select a penalty for those predictions with high divergence from the ground-truth labels; that is, we shorten correct feature vectors and enlarge false prediction tensors, assigning more weight to those classes in conjunction with each other (namely, "hard labels to learn"). 
By doing this, we constrain the model to generate more discriminative feature vectors for variant class labels. Finally, for the proposed optimizer, our focus is on solving the weak convergence of the Adam optimizer on a non-convex problem. Our optimizer works by an alternative gradient-update procedure with an exponentially weighted moving average function for faster convergence, and exploits a weight-decay method to help drastically reduce the learning rate near optima and reach the dominant local minimum. We demonstrate the superiority of the proposed work by surpassing the first rank on three widely used facial expression recognition datasets: 93.30% on FER-2013; a 16% improvement over the first rank after 10 years, reaching 90.73% on RAF-DB; and 100% k-fold average accuracy on the CK+ dataset, providing top performance relative to other networks, which require much larger training datasets.
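One plausible reading of the soft-margin mechanism, sketched below under our own assumptions (the abstract does not give the exact formulation), is an additive margin subtracted from the target-class logit before the softmax, so the correct class must stay a margin ahead of the others before the loss vanishes. The dynamic aspect described in the paper would then amount to varying this margin during training.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    mx = max(logits)
    exps = [math.exp(z - mx) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def margin_softmax_loss(logits, target, margin=0.35):
    """Cross-entropy with an additive margin subtracted from the
    target-class logit: the model is penalized unless the correct
    class leads the others by at least `margin`."""
    adjusted = list(logits)
    adjusted[target] -= margin
    probs = softmax(adjusted)
    return -math.log(probs[target])
```

With margin = 0 this reduces to the conventional softmax cross-entropy; a positive margin always yields a strictly larger loss on the same prediction, which is what keeps the model from "reaching the gold labels too soon".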

Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks

Procedia PDF Downloads 53
31 Optical Coherence Tomography in Differentiation of Acute and Non-Healing Wounds

Authors: Ananya Barui, Provas Banerjee, Jyotirmoy Chatterjee

Abstract:

Application of optical technology in medicine and biology has a long track record. In this endeavor, OCT has attracted both engineers and biologists to work together in the field of photonics to establish a striking non-invasive imaging technology. In contrast to other in vivo imaging modalities such as Raman imaging, confocal imaging, and two-photon microscopy, which can image up to only 100-200 microns in depth due to limitations in numerical aperture or scattering, OCT can achieve high-resolution imaging up to a few millimeters into tissue structures, depending on their refractive index at different anatomical locations. This tomographic system depends on the interference of two light waves in an interferometer to produce a depth profile of the specimen. In wound healing, frequent collection of biopsies for follow-up of the repair process could be avoided by such an imaging technique. Real-time skin OCT (the "optical biopsy") can illuminate cutaneous tissue deeper and faster to acquire high-resolution cross-sectional images of its internal micro-structure. Swept-Source OCT (SS-OCT), a novel imaging technique, can generate a high-speed depth profile (~2 mm) of a wound at the sweeping rate of the laser, with micron-level resolution and an optimum coherence length of 5-6 mm. Multi-layered skin tissue exhibits different optical properties, with variations in thickness, refractive index, and composition (i.e., keratin layer, water, fat, etc.) according to anatomical location. For instance, the stratum corneum, the uppermost and relatively dehydrated layer of the epidermis, reflects more light and produces a sharper, more lucid demarcation line with the rest of the hydrated epidermal region. During wound healing or regeneration, the optical properties of cutaneous tissue are continuously altered as the wound bed matures. 
More mature and less hydrated tissue components reflect more light and appear as brighter areas, in comparison to immature regions, whose higher water or fat content appears as darker areas in the OCT image. Non-healing wounds exhibit prolonged inflammation, which inhibits the nascent proliferative stage. Accumulation of necrotic tissue also prevents the repair of non-healing wounds. Due to its high resolution and its ability to reflect the compositional aspects of tissues in terms of their optical properties, this tomographic method may help differentiate non-healing from acute wounds in addition to clinical observations. Non-invasive OCT offers better insight regarding the specific biological status of tissue in health and pathological conditions, and OCT images can be associated with the histo-pathological "gold standard". Correlated SS-OCT and microscopic evaluation of the wound edges can provide information regarding progressive healing and maturation of the epithelial components. In the context of searching for an analogy between two different imaging modalities, their relative performances in imaging of the healing bed were estimated to probe an alternative approach. The present study validated the utility of SS-OCT in revealing micro-anatomic structure in the healing bed with new information. Exploring the precise correspondence of OCT image features with histo-chemical findings related to the epithelial integrity of the regenerated tissue could have great implications: it could establish the "optical biopsy" as a potent non-invasive diagnostic tool for cutaneous pathology.
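The SS-OCT depth-profiling principle described above can be sketched numerically: the spectral interference fringes recorded while the laser sweeps are Fourier-transformed, and a reflector at a given optical depth appears as a peak at the corresponding frequency bin. The single-reflector fringe model and all numbers below are illustrative assumptions, not data from the study.

```python
import cmath
import math

def dft_magnitudes(samples):
    """Magnitudes of the discrete Fourier transform (naive O(N^2) form)."""
    n = len(samples)
    return [abs(sum(samples[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                    for k in range(n)))
            for j in range(n)]

# Spectral fringes from a single reflector: a cosine whose frequency
# encodes the reflector's optical depth (here, bin 5 of 64).
n, depth_bin = 64, 5
fringes = [math.cos(2 * math.pi * depth_bin * k / n) for k in range(n)]

# The "A-scan": the peak of the one-sided spectrum marks the reflector depth.
spectrum = dft_magnitudes(fringes)
a_scan = spectrum[1:n // 2]      # drop the DC term and the mirrored half
peak_bin = a_scan.index(max(a_scan)) + 1
```

A real wound A-scan superposes many such reflectors with different strengths, which is why mature (more reflective) tissue appears brighter in the reconstructed image.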

Keywords: histo-pathology, non invasive imaging, OCT, wound healing

Procedia PDF Downloads 257
30 A Hardware-in-the-loop Simulation for the Development of Advanced Control System Design for a Spinal Joint Wear Simulator

Authors: Kaushikk Iyer, Richard M Hall, David Keeling

Abstract:

Hardware-in-the-loop (HIL) simulation is an advanced technique for developing and testing complex real-time control systems. This paper presents the benefits of HIL simulation and how it can be implemented and used effectively to develop, test, and validate advanced control algorithms used in a spinal joint wear simulator for the tribological testing of spinal disc prostheses. The spinal wear simulator is technologically the most advanced machine currently employed for the in-vitro testing of newly developed spinal disc implants. However, the existing control techniques, such as simple position control, do not allow the simulator to test non-sinusoidal waveforms. Thus, there is a need for better, advanced control methods that can be developed and tested rigorously but safely before deployment in the real simulator. A benchtop HIL setup was created for experimentation, controller verification, and validation purposes, allowing different control strategies to be tested rapidly in a safe environment. The HIL simulation aspect of this setup attempts to replicate similar spinal motion and loading conditions. The spinal joint wear simulator contains a four-bar linkage powered by electromechanical actuators. LabVIEW software is used to design a kinematic model of the spinal wear simulator to validate how each link contributes towards the final motion of the implant under test. As a result, the implant articulates with an angular motion specified in the international standard ISO 18192-1, which defines fixed, simplified, sinusoidal motion and load profiles for wear testing of cervical disc implants. Using a PID controller, a velocity-based position control algorithm was developed to interface with the benchtop setup that performs the HIL simulation. In addition to PID, a fuzzy logic controller (FLC) was also developed that acts as a supervisory controller. 
The FLC provides intelligence to the PID controller by automatically tuning it for profiles that vary in amplitude, shape, and frequency. This combination of a fuzzy-PID controller is novel in the wear-testing application for spinal simulators and demonstrated superior performance against PID alone when tested over a spectrum of frequencies. Results obtained are successfully validated against the load and motion tolerances specified by the ISO 18192-1 standard and fall within limits, that is, ±0.5° at the maxima and minima of the motion and ±2% of the complete cycle for phasing. The simulation results prove the efficacy of the test setup using HIL simulation to verify and validate the accuracy and robustness of the prospective controller before its deployment into the spinal wear simulator. This method of testing controllers enables a wide range of possibilities to test advanced control algorithms that can potentially reproduce even the profiles of patients performing various daily living activities.
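The PID loop at the core of the position control scheme can be sketched as follows. The discrete PID form is standard; the toy first-order plant, the gains, and the 10-degree set-point are illustrative assumptions, and the fuzzy supervisory layer of the paper is only indicated by a comment.

```python
class PID:
    """Discrete PID controller. A supervisory layer (e.g. the fuzzy
    rule base described in the paper) could retune kp/ki/kd online
    as the motion profile's amplitude, shape, or frequency changes."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        deriv = (0.0 if self.prev_error is None
                 else (error - self.prev_error) / self.dt)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Drive a toy first-order plant toward a 10-degree set-point.
pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.01)
angle = 0.0
for _ in range(2000):
    u = pid.step(10.0, angle)
    angle += (u - 0.5 * angle) * 0.01   # illustrative plant dynamics
```

The integral term is what removes the steady-state error against the plant's drag; in the HIL setup the same structure tracks the ISO 18192-1 angular profiles instead of a constant set-point.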

Keywords: Fuzzy-PID controller, hardware-in-the-loop (HIL), real-time simulation, spinal wear simulator

Procedia PDF Downloads 150
29 Towards Automatic Calibration of In-Line Machine Processes

Authors: David F. Nettleton, Elodie Bugnicourt, Christian Wasiak, Alejandro Rosales

Abstract:

In this presentation, preliminary results are given for the modeling and calibration of two different industrial winding MIMO (Multiple Input Multiple Output) processes using machine learning techniques. In contrast to previous approaches, which have typically used "black-box" linear statistical methods together with a definition of the mechanical behavior of the process, we use non-linear machine learning algorithms together with a "white-box" rule induction technique to create a supervised model of the fitting error between the expected and real force measures. The final objective is to build a precise model of the winding process in order to control the tension of the material being wound, in the first case, and the friction of the material passing through the die, in the second case. Case 1, tension control of a winding process: a plastic web is unwound from a first reel, passes over a traction reel, and is rewound onto a third reel. The objectives are: (i) to train a model to predict the web tension and (ii) to calibrate, i.e., find the input values which result in a given tension. Case 2, friction force control of a micro-pullwinding process: a core plus resin passes through a first die, two winding units wind an outer layer around the core, and the material makes a final pass through a second die. The objectives are: (i) to train a model to predict the friction on die 2 and (ii) to calibrate, i.e., find the input values which result in a given friction on die 2. Different machine learning approaches are tested to build the models: Kernel Ridge Regression, Support Vector Regression (with a Radial Basis Function kernel), and MPART (rule induction with a continuous-valued output). As a preliminary step, the MPART rule induction algorithm was used to build an explicative model of the error (the difference between expected and real friction on die 2). The modeling of the error behavior using explicative rules is used to help improve the overall process model. 
Once the models are built, the inputs are calibrated by generating Gaussian random numbers for each input (taking into account its mean and standard deviation) and comparing the output to a target (desired) output until the closest fit is found. The results of empirical testing show that high precision is obtained for both the trained models and the calibration process. The learning step is the slowest part of the process (max. 5 minutes for this data), but this can be done offline just once. The calibration step is much faster, obtaining in under one minute a precision error of less than 1×10⁻³ for both outputs. To summarize, in the present work two processes have been modeled and calibrated. Fast processing times and high precision have been achieved, which can be further improved by using heuristics to guide the Gaussian calibration. Error behavior has been modeled to help improve the overall process understanding. This has relevance for the quick, optimal set-up of many different industrial processes which use a pull-winding type process to manufacture fibre-reinforced plastic parts. Acknowledgements to the Openmind project, which is funded by Horizon 2020 European Union funding for Research & Innovation, Grant Agreement number 680820.
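The Gaussian calibration loop described above can be sketched with a toy surrogate standing in for the trained model. The `plant` function, the input means, and the standard deviations below are invented for illustration; in the paper the predictions would come from the fitted KRR/SVR/MPART model instead.

```python
import random

def plant(x1, x2):
    """Hypothetical surrogate for the trained process model
    (e.g. predicted friction on die 2 from two process inputs)."""
    return 3.0 * x1 - 1.5 * x2 + 0.2 * x1 * x2

def calibrate(target, means, stds, trials=50000, seed=1):
    """Sample candidate inputs from per-input Gaussians (using each
    input's mean and standard deviation) and keep the candidate whose
    predicted output is closest to the target."""
    rng = random.Random(seed)
    best_inputs, best_gap = None, float("inf")
    for _ in range(trials):
        candidate = [rng.gauss(m, s) for m, s in zip(means, stds)]
        gap = abs(plant(*candidate) - target)
        if gap < best_gap:
            best_inputs, best_gap = candidate, gap
    return best_inputs, best_gap

inputs, gap = calibrate(target=4.2, means=[1.0, 0.5], stds=[0.5, 0.3])
```

This brute-force search is where the heuristics mentioned in the abstract would help: biasing the sampling toward promising regions reduces the number of trials needed to reach a given precision.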

Keywords: data model, machine learning, industrial winding, calibration

Procedia PDF Downloads 215
28 Mobi-DiQ: A Pervasive Sensing System for Delirium Risk Assessment in Intensive Care Unit

Authors: Subhash Nerella, Ziyuan Guan, Azra Bihorac, Parisa Rashidi

Abstract:

Intensive care units (ICUs) provide care to critically ill patients in severe and life-threatening conditions. However, patient monitoring in the ICU is limited by the time and resource constraints imposed on healthcare providers. Many critical care indices, such as mobility, are still manually assessed, which can be subjective, prone to human error, and lacking in granularity. Other important aspects, such as environmental factors, are not monitored at all. For example, critically ill patients often experience circadian disruptions due to the absence of effective environmental "timekeepers" such as the light/dark cycle, and due to the systemic effect of acute illness on chronobiologic markers. Although the occurrence of delirium is associated with circadian disruption risk factors, these factors are not routinely monitored in the ICU. Hence, there is a critical unmet need to develop systems for precise and real-time assessment through novel enabling technologies. We have developed the mobility and circadian disruption quantification system (Mobi-DiQ) by augmenting biomarker and clinical data with pervasive sensing data to generate cues related to mobility, nightly disruptions, and light and noise exposure. We hypothesize that Mobi-DiQ can provide accurate mobility and circadian cues that correlate with bedside clinical mobility assessments and circadian biomarkers, ultimately important for delirium risk assessment and prevention. The collected multimodal dataset consists of depth images, electromyography (EMG) data, patient extremity movement captured by accelerometers, ambient light levels, Sound Pressure Level (SPL), and indoor air quality measured by volatile organic compounds and equivalent CO₂ concentration. 
For delirium risk assessment, the system recognizes mobility cues (axial body movement features and body key points) and circadian cues, including nightly disruptions, ambient SPL, and light intensity, as well as other environmental factors such as indoor air quality. The Mobi-DiQ system consists of three major components: the pervasive sensing system, a data storage and analysis server, and a data annotation system. For data collection, six local pervasive sensing systems were deployed, each including a local computer and sensors. A video recording tool with a graphical user interface (GUI), developed in Python, was used to capture depth image frames for analyzing patient mobility. All sensor data is encrypted and then automatically uploaded to the Mobi-DiQ server through a secured VPN connection. Several data pipelines were developed to automate the data transfer, curation, and preparation for annotation and model training. The data curation and post-processing are performed on the server. A custom secure annotation tool with a GUI was developed to annotate depth activity data. The annotation tool is linked to a MongoDB database to record the annotations and to provide summarization. Docker containers are also utilized to manage the services and pipelines running on the server in an isolated manner. The processed clinical data and annotations are used to train and develop real-time pervasive sensing systems to augment clinical decision-making and promote targeted interventions. In the future, we intend to evaluate our system in a clinical implementation trial, as well as to refine and validate it using other data sources, including neurological data obtained through continuous electroencephalography (EEG).
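As a concrete instance of one environmental cue above, ambient noise is summarized as a Sound Pressure Level. A minimal sketch of the standard conversion from pressure samples to dB SPL follows; the synthetic tone and its amplitude are invented for illustration (Mobi-DiQ's actual sensor calibration is not described here).

```python
import math

def sound_pressure_level(samples, p_ref=20e-6):
    """SPL in dB: 20*log10(p_rms / p_ref), using the conventional
    20 µPa reference pressure for sound in air."""
    p_rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(p_rms / p_ref)

# A 1 Pa RMS tone corresponds to ~94 dB SPL, a common calibrator level.
tone = [math.sqrt(2.0) * math.sin(2 * math.pi * k / 100) for k in range(100)]
spl = sound_pressure_level(tone)
```

In a deployment, `samples` would be a short window of calibrated microphone pressure readings, and the per-window SPL values would form the nightly noise-exposure time series.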

Keywords: deep learning, delirium, healthcare, pervasive sensing

Procedia PDF Downloads 69
27 Generative Syntaxes: Macro-Heterophony and the Form of ‘Synchrony’

Authors: Luminiţa Duţică, Gheorghe Duţică

Abstract:

One of the most powerful language innovations in twentieth-century music was heterophony, a hypostasis of vertical syntax that entered the sphere of interest of many composers, such as George Enescu, Pierre Boulez, Mauricio Kagel, György Ligeti, and others. The heterophonic syntax has a history of growth, that is, a succession of different concepts and writing techniques. The trajectory along which this phenomenon settled does not necessarily follow chronology: there are highly complex primary stages and advanced stages of returning to simple forms of writing. In folklore, plurimelodic simultaneities are free or random and originate from (unintentional) differences or "deviations" from the state of unison, through a variety of ornaments, melismas, imitations, elongations, and abbreviations, all within a flexible, non-periodic, immeasurable rhythmic framework proper to parlando-rubato rhythmics. Within the general framework of multivocal organization, the heterophonic syntax in its elaborate (academic) version imposed itself relatively late compared with polyphony and homophony. Of course, the explanation is simple if we consider the causal relationship between the elements of the sound vocabulary, in this case modalism, and the typologies of vertical organization appropriate to it. Therefore, extending the "classic" pathway of writing typologies (monody, polyphony, homophony), heterophony, applied equally to structures of modal, serial, or synthesis vocabulary, necessarily claims a macrotemporal form of its own, in the sense of the analogies enshrined by the evolution of musical styles and languages: polyphony→fugue, homophony→sonata. 
Concerned with the prospect of edifying a new musical ontology, the composer Ştefan Niculescu explored, along with the mathematical organization of heterophony according to his own original methods, the possibility of extrapolating this phenomenon to the macrostructural plane, thereby arriving at the unique form of "synchrony". Founded on the principle of coincidentia oppositorum (involving the "one-multiple" binomial), the sound architecture imagined by Ştefan Niculescu consists of one (temporal) model/algorithm articulating two sound states: 1. the monovocality state (principle of identity) and 2. the multivocality state (principle of difference). In this context, heterophony becomes an (auto)generative mechanism of macrotemporal amplitude, a strategy the composer would develop practically throughout his creative output (see the works Ison I, Ison II, Unisonos I, Unisonos II, Duplum, Triplum, Psalmus, Héterophonies pour Montreux (homages to Enescu and Bartók), etc.). For the present demonstration, we selected one of the most edifying works of Ştefan Niculescu, Symphony II, Opus dacicum, where the form of (heterophony-)synchrony acquires monumental-symphonic features, representing an emblematic case of the level of complexity achieved by this type of vertical syntax in twentieth-century music.

Keywords: heterophony, modalism, serialism, synchrony, syntax

Procedia PDF Downloads 319
26 Transcriptional Differences in B cell Subpopulations over the Course of Preclinical Autoimmunity Development

Authors: Aleksandra Bylinska, Samantha Slight-Webb, Kevin Thomas, Miles Smith, Susan Macwana, Nicolas Dominguez, Eliza Chakravarty, Joan T. Merrill, Judith A. James, Joel M. Guthridge

Abstract:

Background: Systemic Lupus Erythematosus (SLE) is an interferon-related autoimmune disease characterized by B cell dysfunction. One of its main hallmarks is a loss of tolerance to self-antigens, leading to increased levels of autoantibodies against nuclear components (ANAs). However, up to 20% of healthy ANA+ individuals will not develop clinical illness. SLE is more prevalent among women and minority populations (African American, Asian American, and Hispanic). Moreover, African Americans have a stronger interferon (IFN) signature and develop more severe symptoms. The exact mechanisms involved in ethnicity-dependent B cell dysregulation and the progression from ANA+ healthy status to clinical disease remain unclear. Methods: Peripheral blood mononuclear cells (PBMCs) from African American (AA) and European American (EA) ANA- (n=12), ANA+ (n=12) and SLE (n=12) individuals were assessed by multimodal scRNA-Seq/CITE-Seq methods to examine differential gene signatures in specific B cell subsets. Library preparation was done with a 10X Genomics Chromium according to established protocols and sequenced on an Illumina NextSeq. The data were further analyzed for distinct cluster identification and differential gene signatures using the Seurat package in R, and pathway analysis was performed using Ingenuity Pathway Analysis (IPA). Results: Comparing all subjects, 14 distinct B cell clusters were identified using a community detection algorithm and visualized with Uniform Manifold Approximation and Projection (UMAP). The proportion of each of these clusters varied by disease status and ethnicity. Transitional B cells trended higher in ANA+ healthy individuals, especially in AA. A ribonucleoprotein-high population (elevated HNRNPH1, heterogeneous nuclear ribonucleoprotein; RNP-Hi) of proliferating naïve B cells was more prevalent in SLE patients, specifically in EA. An interferon-induced-protein-high population (IFIT-Hi) of naïve B cells was increased in EA ANA- individuals.
The proportions of memory B cell and plasma cell clusters tended to be expanded in SLE patients. As anticipated, we observed a stronger signature of cytokine-related pathways, especially interferon, in SLE individuals. Pathway analysis among AA individuals revealed an NRF2-mediated oxidative stress response signature in the transitional B cell cluster, not seen in EA individuals. TNFR1/2 and Sirtuin signaling pathway genes were higher in AA IFIT-Hi naïve B cells, whereas they were not detected in EA individuals. Interferon signaling was observed in B cells of both ethnicities. Oxidative phosphorylation was found in age-related B cells (ABCs) for both ethnicities, whereas Death Receptor signaling was found in these cells only in EA patients. Interferon-related transcription factors were elevated in ABCs and IFIT-Hi naïve B cells in SLE subjects of both ethnicities. Conclusions: ANA+ healthy individuals have altered gene expression pathways in B cells that might drive apoptosis and subsequent clinical autoimmune pathogenesis. Increases in certain regulatory pathways may delay progression to SLE. Further, AA individuals have more elevated activation pathways that may make them more susceptible to SLE.
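The cluster identification described above relies on graph-based community detection over a cell-similarity graph (Seurat's clustering operates on a shared-nearest-neighbor graph in R). As a conceptual illustration of the idea only, not the study's actual pipeline, a minimal label-propagation community detection can be sketched in Python:

```python
# Minimal label-propagation community detection (illustrative sketch only;
# the study used Seurat's graph-based clustering in R, not this code).

def label_propagation(adj, n_iter=50):
    """adj: dict mapping node -> list of neighbours. Returns node -> community."""
    labels = {n: n for n in adj}  # every node starts in its own community
    for _ in range(n_iter):
        changed = False
        for node in sorted(adj):
            counts = {}
            for nb in adj[node]:
                counts[labels[nb]] = counts.get(labels[nb], 0) + 1
            top = max(counts.values())
            best = {lab for lab, c in counts.items() if c == top}
            if labels[node] in best:
                continue              # keep the current label on ties
            labels[node] = max(best)  # deterministic tie-break otherwise
            changed = True
        if not changed:
            break
    return labels

# Two 4-cliques joined by a single bridge edge (3-4): two communities expected
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
       4: [3, 5, 6, 7], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6]}
communities = label_propagation(adj)
```

On this toy graph the sweep converges in two passes, assigning one shared label per clique despite the bridge edge.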

Keywords:

Procedia PDF Downloads 153
25 Kinematic Gait Analysis Is a Non-Invasive, More Objective and Earlier Measurement of Impairment in the Mdx Mouse Model of Duchenne Muscular Dystrophy

Authors: P. J. Sweeney, T. Ahtoniemi, J. Puoliväli, T. Laitinen, K. Lehtimäki, A. Nurmi, D. Wells

Abstract:

Duchenne muscular dystrophy (DMD) is caused by an X-linked mutation in the dystrophin gene; lack of dystrophin causes a progressive muscle necrosis which leads to a progressive decrease in mobility in those suffering from the disease. The MDX mouse, a mutant mouse model which displays a frank dystrophinopathy, is currently widely employed in preclinical efficacy models for treatments and therapies aimed at DMD. In general, the end-points examined within this model have been based on invasive histopathology of muscles and serum biochemical measures such as serum creatine kinase (sCK). It is established that a "critical period" exists between 4 and 6 weeks in the MDX mouse, when there is extensive muscle damage that is largely subclinical but evident with sCK measurements and histopathological staining. However, a full characterization of the MDX model remains largely incomplete, especially with respect to the ability to aggravate the muscle damage beyond the critical period. The purpose of this study was to attempt to aggravate the muscle damage in the MDX mouse and to create a wider, more readily translatable and discernible therapeutic window for the testing of potential therapies for DMD. The study consisted of subjecting 15 male mutant MDX mice and 15 male wild-type mice to an intense chronic exercise regime of bi-weekly (twice per week) treadmill sessions over a 12-month period. Each session was 30 minutes in duration, and the treadmill speed was gradually built up to 14 m/min for the entire session. Baseline plasma creatine kinase (pCK), treadmill training performance and locomotor activity were measured after the "critical period" at around 10 weeks of age, and again at 14 weeks, 6 months, 9 months and 12 months of age.
In addition, kinematic gait analysis was employed, using a novel analysis algorithm, in order to compare changes in gait and fine motor skills in diseased exercised MDX mice relative to exercised wild-type mice and non-exercised MDX mice. A morphological and metabolic profile (including lipid profile) of the most severely affected muscles, the gastrocnemius and the tibialis anterior, was also measured at the same time intervals. Results indicate that by aggravating or exacerbating the underlying muscle damage in the MDX mouse by exercise, a more pronounced and severe phenotype comes to light, and this can be picked up earlier by kinematic gait analysis. A reduction in mobility as measured by open field is not apparent at younger ages nor during the critical period, but changes in gait are apparent in the mutant MDX mice. These gait changes coincide with pronounced morphological and metabolic changes, detected by non-invasive anatomical MRI and proton spectroscopy (1H-MRS), that we have reported elsewhere. Evidence of a progressive asymmetric pathology was found in the imaging parameters as well as in the kinematic gait analysis. Taken together, the data show that a chronic exercise regime exacerbates the muscle damage beyond the critical period and that the ability to measure disease progression through non-invasive means is an important factor to consider when performing preclinical efficacy studies in the MDX mouse.
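Since the abstract reports a progressive asymmetric pathology picked up by gait analysis, a commonly used gait asymmetry metric (not necessarily part of the study's proprietary algorithm) is the symmetry index, which scores left/right imbalance of any spatiotemporal gait parameter. A minimal sketch with hypothetical stride lengths:

```python
# Symmetry index: 0% = perfect left/right symmetry. Illustrative only;
# the study's kinematic analysis algorithm is proprietary and unpublished here.

def symmetry_index(left, right):
    """Percentage asymmetry between a left- and a right-side gait parameter."""
    return 100.0 * abs(left - right) / (0.5 * (left + right))

# Hypothetical stride lengths (cm) for one exercised MDX mouse
si = symmetry_index(5.8, 5.0)  # ~14.8% asymmetry
```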

Keywords: gait, kinematic analysis, muscular dystrophy, neuromuscular disease

Procedia PDF Downloads 259
24 Human Bone Marrow Stem Cell Behavior on 3D Printed Scaffolds as Trabecular Bone Grafts

Authors: Zeynep Busra Velioglu, Deniz Pulat, Beril Demirbakan, Burak Ozcan, Ece Bayrak, Cevat Erisken

Abstract:

Bone tissue performs a wide array of functions, including providing posture, load-bearing capacity, and protection for the internal organs, initiating hematopoiesis, and maintaining the homeostasis of key electrolytes via calcium/phosphate ion storage. The most common cause of bone defects is extensive trauma and subsequent infection. Bone tissue is capable of self-healing without scar tissue formation for the majority of injuries; however, some may result in delayed union or fracture non-union. Such cases include the reconstruction of large bone defects and cases of compromised regenerative processes as a result of avascular necrosis and osteoporosis. Several surgical methods exist to treat bone defects, including the Ilizarov method, the Masquelet technique, growth factor stimulation, and bone replacement. Unfortunately, these are technically demanding and come with noteworthy disadvantages such as lengthy treatment duration, adverse effects on the patient's psychology, repeated surgical procedures, and often long hospitalization times. These limitations make bone substitutes an attractive alternative. Here, it was hypothesized that a 3D printed scaffold would mimic trabecular bone in terms of biomechanical properties and that such scaffolds would support cell attachment and survival. To test this hypothesis, this study aimed at fabricating poly(lactic acid) (PLA) structures for trabecular bone defects using 3D printing technology, characterizing the scaffolds, and comparing them with bovine trabecular bone. The capacity of the scaffolds to support human bone marrow stem cell (hBMSC) attachment and survival was also evaluated. Cubes with a volume of 1 cm³ and pore sizes of 0.50, 1.00 and 1.25 mm were printed. The scaffolds/grafts were characterized in terms of porosity, contact angle and compressive mechanical properties, as well as cell response. Porosities of the 3D printed scaffolds were calculated based on apparent densities.
For contact angles, 50 µl of distilled water was dropped onto the surface of the scaffolds, and contact angles were measured using the ImageJ software. Mechanical characterization under compression was performed on scaffold and native trabecular bone specimens (bovine, 15 months) using a universal testing machine at a rate of 0.5 mm/min. hBMSCs were seeded onto the 3D printed scaffolds. After 3 days of incubation with fully supplemented Dulbecco's modified Eagle's medium, the cells were fixed using a 2% formaldehyde and glutaraldehyde mixture. The specimens were then imaged under scanning electron microscopy. Cell proliferation was determined using the EZQuant dsDNA Quantitation kit. Fluorescence was measured using a Spectramax M2 microplate reader at excitation and emission wavelengths of 485 nm and 535 nm, respectively. Findings suggested that the porosity of the scaffolds was not affected by pore size (0.5, 1.0 and 1.25 mm), while contact angle and compressive modulus decreased with increasing pore size. Biomechanical characterization of trabecular bone yielded higher modulus values than the scaffolds at all pore sizes studied. Cells attached and survived on all surfaces, demonstrating higher proliferation on scaffolds with 1.25 mm pores than on those with 1 mm pores. Collectively, given the lower mechanical properties of the scaffolds compared to native bone, and the biocompatibility of the scaffolds, the 3D printed PLA scaffolds of this study appear to be candidate substitutes for bone repair and regeneration.
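The porosity calculation from apparent density mentioned above can be sketched as follows; the PLA bulk density and the sample mass are nominal values assumed for illustration, not figures from the study:

```python
# Porosity from apparent density: phi = 1 - rho_apparent / rho_material.
# The material density below is a nominal PLA value, not taken from the study.

PLA_DENSITY = 1.24  # g/cm^3 (typical bulk density of PLA)

def porosity(mass_g, volume_cm3, material_density=PLA_DENSITY):
    """Porosity fraction of a printed scaffold from its mass and bulk volume."""
    apparent_density = mass_g / volume_cm3
    return 1.0 - apparent_density / material_density

# A hypothetical 1 cm^3 printed cube weighing 0.62 g
phi = porosity(0.62, 1.0)  # 0.5, i.e. 50% porous
```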

Keywords: 3D printing, biomechanics, bone repair, stem cell

Procedia PDF Downloads 155
23 Results Concerning the University-Industry Partnership for a Research Project Implementation (MUROS) in the Romanian Program STAR

Authors: Loretta Ichim, Dan Popescu, Grigore Stamatescu

Abstract:

The paper reports the collaboration between a top university from Romania and three companies for the implementation of a research project in a multidisciplinary domain, focusing on the impact and benefits both for education and for industry. The joint activities were developed under the Space Technology and Advanced Research Program (STAR), funded by the Romanian Space Agency (ROSA) for university-industry partnerships. The context was defined by linking the European Space Agency optional programs with the development and promotion of national research and with the educational and industrial capabilities in aeronautics, security and related areas, by increasing the collaboration between academic and industrial entities as well as by realizing high-level scientific production. The project, Multisensory Robotic System for Aerial Monitoring of Critical Infrastructure Systems (MUROS), was carried out during 2013-2016. It included the University POLITEHNICA of Bucharest (coordinator) and three companies that manufacture and market unmanned aerial systems. The project's main objective was the development of an integrated system combining ground wireless sensor networks and UAV monitoring in various application scenarios for critical infrastructure surveillance. This included specific activities related to fundamental and applied research, technology transfer, prototype implementation and result dissemination. The core contributions lay in distributed data processing and communication mechanisms, advanced image processing and embedded system development. Special focus is given in the paper to analyzing the impact of the project implementation on the educational process, directly or indirectly, through the faculty members (professors and students) involved in the research team.
Three main directions are discussed: a) enabling students to carry out internships at the partner companies, b) handling advanced topics and industry requirements at the master's level, and c) experiments and concept validation for doctoral theses. The impact of the research work developed by the faculty members (as the educational component) on improving the companies' products is highlighted. The collaboration between university and companies was well balanced in both contributions and results. The paper also presents the outcomes of the project, which reveal an efficient collaboration between higher education and industry: master's theses, doctoral theses, conference papers, journal papers, technical documentation for technology transfer, a prototype, and a patent. The experience can provide useful practices for blending research and education within an academia-industry cooperation framework, while the lessons learned represent a starting point in debating the new role of companies performing advanced research and development in association with higher education. This partnership, promoted at EU level, has a broad impact beyond the constrained scope of a single project and can develop into a long-lasting collaboration benefiting all stakeholders: students, universities and the surrounding knowledge-based economic and industrial ecosystem. Due to the exchange of experiences between the university (UPB) and the manufacturing company (AFT Design), a new project, SIMUL, under the Bridge Grant Program (of the Romanian executive agency UEFISCDI), was started (2016 - 2017). This project will continue the educational research for innovation in master's and doctoral studies on the MUROS thematic (a collaborative multi-UAV application for flood detection).

Keywords: education process, multisensory robotic system, research and innovation project, technology transfer, university-industry partnership

Procedia PDF Downloads 211
22 Learning Curve Effect on Materials Procurement Schedule of Multiple Sister Ships

Authors: Vijaya Dixit, Aasheesh Dixit

Abstract:

The shipbuilding industry operates in an Engineer-Procure-Construct (EPC) context. The product mix of a shipyard comprises various types of ships, such as bulk carriers, tankers, barges, coast guard vessels, submarines, etc. Each order is unique, based on the type of ship and customized requirements, which are engineered into the product right from the design stage. Thus, to execute every new project, a shipyard needs to upgrade its production expertise. As a result, over the long run, holistic learning occurs across different types of projects, which contributes to the knowledge base of the shipyard. Simultaneously, in the short term, during the execution of a project comprising multiple sister ships, the repetition of similar tasks leads to learning at the activity level. This research aims to capture both kinds of learning in a shipyard and to incorporate the learning curve effect into project scheduling and materials procurement to improve project performance. Extant literature supports the existence of such learning in organizations. In shipbuilding, there are sequences of similar activities which are expected to exhibit learning curve behavior, for example, the nearly identical structural sub-blocks which are successively fabricated, erected, and outfitted with piping and electrical systems. A learning curve representation can model not only a decrease in the mean completion time of an activity but also a decrease in the uncertainty of activity duration. Sister ships have similar material requirements, and the same supplier base supplies materials for all the sister ships within a project. On one hand, this provides an opportunity to reduce transportation cost by batching the order quantities of multiple ships. On the other hand, it increases the inventory holding cost at the shipyard and the risk of obsolescence. Further, due to the learning curve effect, the production schedule of each subsequent ship gets compressed.
Thus, the material requirement schedule of each subsequent ship differs from that of the previous ship. As more and more ships get constructed, compressed production schedules increase the possibility of batching the orders of sister ships. This work aims at integrating materials management with the project scheduling of long-duration projects for the manufacturing of multiple sister ships. It incorporates the learning curve effect on progressively compressed material requirement schedules and addresses the above trade-off between transportation cost and inventory holding and shortage costs, while satisfying the budget constraints of the various stages of the project. The activity durations and lead times of items are not crisp and are available in the form of probability distributions. A Stochastic Mixed Integer Programming (SMIP) model is formulated and solved using an evolutionary algorithm. Its output provides the ordering dates of items and the degree of order batching for all types of items. Sensitivity analysis determines the threshold number of sister ships required in a project to leverage the advantage of the learning curve effect in materials management decisions. This analysis will help materials managers gain insights into when and to what degree it is beneficial to treat a multiple-ship project as an integrated one by batching the order quantities, and when and to what degree to practice distinctive procurement for individual ships.
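The abstract does not specify the learning curve it uses, but the classical Wright model, under which unit time falls by a fixed fraction each time cumulative output doubles, illustrates how activity durations compress from one sister ship to the next (the 90% rate and the times below are assumptions for illustration, not the paper's data):

```python
import math

# Wright's learning curve: t_n = t_1 * n**b, with b = log2(learning_rate).
# A 90% learning rate means each doubling of output cuts unit time to 90%.
# Rate and durations below are illustrative assumptions, not the paper's data.

def unit_time(t1, n, learning_rate=0.9):
    """Duration of the n-th repetition of an activity."""
    b = math.log(learning_rate, 2)  # negative exponent for rates below 1
    return t1 * n ** b

# Fabrication time (days, hypothetical) of the same sub-block on 4 sister ships
times = [unit_time(100.0, n) for n in range(1, 5)]  # [100.0, 90.0, ~84.6, 81.0]
```

The monotone compression of these durations is what shifts, and potentially aligns, the material requirement dates of successive ships.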

Keywords: learning curve, materials management, shipbuilding, sister ships

Procedia PDF Downloads 476
21 Urban Heat Islands Analysis of Matera, Italy Based on the Change of Land Cover Using Satellite Landsat Images from 2000 to 2017

Authors: Giuseppina Anna Giorgio, Angela Lorusso, Maria Ragosta, Vito Telesca

Abstract:

Climate change is a major public health threat due to the effects of extreme weather events on human health and on quality of life in general. In this context, mean temperatures are increasing, and in particular extreme temperatures, with heat waves becoming more frequent, more intense and longer lasting. In many cities, extreme heat waves have drastically increased, giving rise to the so-called Urban Heat Island (UHI) phenomenon. In an urban centre, maximum temperatures may be up to 10 °C warmer due to different local atmospheric conditions. UHI occurs in metropolitan areas as a function of the population size and density of a city and consists of a significant difference in temperature compared to rural/suburban areas. Increasing industrialization and urbanization have intensified this phenomenon, and it has recently also been detected in small cities. Weather conditions and land use are among the key parameters in the formation of UHI. In particular, the surface urban heat island is directly related to temperature, land surface type and surface modification. The present study concerns a UHI analysis of the city of Matera (Italy) based on the analysis of temperature and of changes in land use and land cover, using Corine Land Cover maps and satellite Landsat images. Matera, located in Southern Italy, has a typical Mediterranean climate with mild winters and hot, humid summers. Moreover, Matera has been awarded the international title of 2019 European Capital of Culture and represents a significant example of vernacular architecture: the structure of the city is articulated by a vertical succession of dug layers, sometimes excavated or partly excavated and partly built, according to the original shape and height of the calcarenitic slope. In this study, two meteorological stations were selected: MTA (MaTera Alsia, in the industrial zone) and MTCP (MaTera Civil Protection, in a suburban area located in a green zone).
In order to evaluate the increase in temperatures (in terms of UHI occurrences) over time and the effect of land use on weather conditions, the climate variability of temperatures at both stations was explored. Results show that the UHI phenomenon is growing in the city of Matera, with an increase of maximum temperature values at the local scale. Subsequently, a spatial analysis was conducted using Landsat satellite images. Four summer dates were selected (27/08/2000, 27/07/2006, 11/07/2012, 02/08/2017), using Landsat 7 ETM+ for 2000, 2006 and 2012, and Landsat 8 OLI/TIRS for 2017. In order to estimate the Land Surface Temperature (LST), the Mono-Window Algorithm was applied. The increasing trend of LST values at the spatial scale was thus verified, in accordance with the results obtained at the local scale. Finally, the analysis of the land use maps over the years, together with the LST and/or the maximum temperatures measured, shows that the development of the industrialized area produces a corresponding increase in temperatures and, consequently, a growth of the UHI.
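The Mono-Window Algorithm itself requires emissivity and atmospheric parameters, but its starting point, converting top-of-atmosphere thermal radiance to at-sensor brightness temperature, can be sketched as follows. The K1/K2 calibration constants are the values published by USGS for Landsat 8 TIRS band 10 and should be verified against each scene's MTL metadata file:

```python
import math

# At-sensor brightness temperature from Landsat 8 TIRS band 10 TOA radiance.
# This is only the first step of LST retrieval; the Mono-Window Algorithm then
# corrects for surface emissivity and atmospheric effects on top of it.
K1 = 774.8853   # W/(m^2 sr um), band 10 calibration constant (per USGS)
K2 = 1321.0789  # Kelvin, band 10 calibration constant (per USGS)

def brightness_temperature(radiance):
    """TOA spectral radiance in W/(m^2 sr um) -> brightness temperature (K)."""
    return K2 / math.log(K1 / radiance + 1.0)

# A hypothetical summer pixel
t_kelvin = brightness_temperature(10.5)  # roughly 306 K (about 33 degrees C)
```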

Keywords: climate variability, land surface temperature, LANDSAT images, urban heat island

Procedia PDF Downloads 98
20 Unidentified Remains with Extensive Bone Disease without a Clear Diagnosis

Authors: Patricia Shirley Almeida Prado, Selma Paixão Argollo, Maria De Fátima Teixeira Guimarães, Leticia Matos Sobrinho

Abstract:

Skeletal differential diagnosis is essential in forensic anthropology in order to distinguish skeletal trauma from normal osseous variation and pathological processes. Thus, part of the forensic anthropologist's task is to differentiate criminal skeletal injuries from normal skeletal variation (bone fusion or nonunion, transitional vertebrae and other non-metric traits) and to distinguish non-traumatic skeletal pathology (myositis ossificans, arthritis, bone metastasis, osteomyelitis) from traumatic skeletal pathology (myositis ossificans traumatica), avoiding misdiagnosis. This case shows the importance of an effective pathological diagnosis in accelerating the identification of skeletonized human remains. The case: the skeletal remains of an unidentified young adult male (estimated 29 to 40 years) at the Medico-Legal Institute Nina Rodrigues, Salvador, show a massive heterotopic ossification of the right tibia at the upper epiphysis and the adjacent femoral articular surface; an extensive ossification of the right clavicle (at the sternal extremity); and heterotopic ossifications of the right scapula (upper third of the lateral margin and infraglenoid tubercle) and of the head of the right humerus at the shoulder joint. Curiously, this case also shows an unusual porosity in the bodies of certain vertebrae and in some tarsal and carpal bones. Likewise, the fifth metacarpal bones (right and left) showed healed fractures that left both bones distorted. Based on the literature and protocols for the identification of pathological conditions in human skeletal remains, these alterations can be misdiagnosed, and this skeleton may present more than one pathological process. The forensic anthropology laboratory at the Medico-Legal Institute Nina Rodrigues in Salvador (Brazil) adopts international protocols for ancestry, sex, age and stature estimation and has implemented well-established conventions to identify pathological diseases and skeletal alterations.
The most compatible diagnosis for this case is hematogenous osteomyelitis, due to the following findings: 1) the healed fracture pattern at the clavicle shows a cloaca, which is pathognomonic for osteomyelitis; 2) the healed metacarpal fractures do not present a cloaca, although they developed a periosteal formation; 3) the superior articular surface of the right tibia shows an extensive inflammatory healing process that extends to the adjacent femoral articular surface, with some cloacae in the tibial lesion; 4) the uncommon porosities may result from a hematogenous infectious process. The fractures probably occurred at different moments, based on the healing process: the tibia injury is more extensive and has not been remodelled, while the metacarpal and clavicle fractures are properly healed. We suggest that the clavicle and tibia fractures were infected by an existing infectious disease (syphilis, tuberculosis, brucellosis) or an existing syndrome (Gorham's disease), which led to the development of osteomyelitis. This hypothesis is supported by the fact that different bones are affected to different degrees, like the metacarpals, which do not show a cloaca but rather periosteal new bone formation. Furthermore, the unusual porosities do not show the classical findings of an osteoarthritic process, such as marginal osteophytes, pitting and new bone formation; they show only an erosive process without bone formation or osteophytes. To confirm our hypothesis, we are working on different clinical approaches, such as DNA analysis, histopathology and imaging exams, in order to reach the correct diagnosis.

Keywords: bone disease, forensic anthropology, hematogenous osteomyelitis, human identification, human remains

Procedia PDF Downloads 304
19 An Elasto-Viscoplastic Constitutive Model for Unsaturated Soils: Numerical Implementation and Validation

Authors: Maria Lazari, Lorenzo Sanavia

Abstract:

The mechanics of unsaturated soils has been an active field of research in recent decades. Efficient constitutive models that take into account the partial saturation of soil are necessary to solve a number of engineering problems, e.g. the instability of slopes and cuts due to heavy rainfall. A large number of constitutive models can now be found in the literature that consider fundamental issues associated with unsaturated soil behaviour, such as the volume change and shear strength behaviour with suction or saturation changes. Partially saturated soils may either expand or collapse upon wetting depending on the stress level, and a soil might even experience a reversal in volumetric behaviour during wetting. The shear strength of soils also changes dramatically with the degree of saturation; a related engineering problem is slope failure caused by rainfall. Several state-of-the-art reviews of the topic have appeared over the last years, usually providing a thorough discussion of the stress state, the advantages and disadvantages of specific constitutive models, and the latest developments in the area of unsaturated soil modelling. However, only a few studies have focused on the coupling between partial saturation states and time effects on the behaviour of geomaterials. Rate dependency is experimentally observed in the mechanical response of granular materials, and a viscoplastic constitutive model is capable of reproducing creep and relaxation processes. Therefore, in this work an elasto-viscoplastic constitutive model for unsaturated soils is proposed and validated on the basis of experimental data. The model constitutes an extension of an existing elastoplastic strain-hardening constitutive model capable of capturing the behaviour of variably saturated soils, based on energy-conjugated stress variables in the framework of superposed continua.
The purpose was to develop a model able to deal with possible mechanical instabilities within a consistent energy framework. The model shares the conceptual structure of the elastoplastic laws proposed for bonded geomaterials subject to weathering or diagenesis and is capable of modelling several kinds of instabilities induced by the loss of hydraulic bonding contributions. The novelty of the proposed formulation lies in the incorporation of density-dependent stiffness and hardening coefficients, in order to allow the modelling of the pycnotropic behaviour of granular materials with a single set of material constants. The model has been implemented in the commercial FE platform PLAXIS, widely used in Europe for advanced geotechnical design. The algorithmic strategies adopted for the stress-point algorithm had to be revised to take into account the different approach adopted by the PLAXIS developers in the solution of the discrete non-linear equilibrium equations. An extensive comparison of the model with a series of experimental data reported by different authors is presented to validate it and to illustrate its capabilities. After the validation, the effectiveness of the viscoplastic model is demonstrated by numerical simulations of a partially saturated slope failure at the laboratory scale, and the effect of viscosity and degree of saturation on slope stability is discussed.
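The full model is multi-axial, but the rate dependency that viscoplasticity contributes can be illustrated with a one-dimensional Perzyna-type overstress law (a generic textbook sketch with made-up parameters, not the constitutive law of the paper): stress is allowed to exceed the yield limit, and the viscoplastic strain rate grows with the overstress, so faster loading sustains a larger overstress.

```python
# 1-D Perzyna viscoplasticity: d(eps_vp)/dt = <(sigma - sigma_y)/sigma_y>**N / eta.
# Generic illustrative sketch with made-up parameters, not the paper's model.

def perzyna_1d(strain_rate, t_end, dt=1e-3, E=100.0, sigma_y=1.0, eta=10.0, N=1.0):
    """Strain-driven explicit integration; returns the final stress."""
    eps_vp, sigma, t = 0.0, 0.0, 0.0
    while t < t_end:
        t += dt
        sigma = E * (strain_rate * t - eps_vp)      # elastic trial stress
        over = max(sigma - sigma_y, 0.0) / sigma_y  # Macaulay bracket <f>
        eps_vp += dt * over ** N / eta              # viscoplastic flow rule
    return sigma

# Same final strain (0.1) reached at two rates: the faster test ends at a
# higher stress because the overstress has less time to relax (rate dependency)
s_fast = perzyna_1d(1e-2, t_end=10.0)   # ~1.1 x yield stress
s_slow = perzyna_1d(1e-3, t_end=100.0)  # ~1.01 x yield stress
```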

Keywords: PLAXIS software, slope, unsaturated soils, viscoplasticity

Procedia PDF Downloads 198
18 Influence of the Local External Pressure on Measured Parameters of Cutaneous Microcirculation

Authors: Irina Mizeva, Elena Potapova, Viktor Dremin, Mikhail Mezentsev, Valeri Shupletsov

Abstract:

Local tissue perfusion is regulated by the microvascular tone, which is under the control of a number of physiological mechanisms. Laser Doppler flowmetry (LDF) together with wavelet analysis is the most commonly used technique to study the regulatory mechanisms of cutaneous microcirculation. External factors such as temperature, local pressure of the probe on the skin, etc., influence the blood flow characteristics and are used as physiological tests to evaluate microvascular regulatory mechanisms. Local probe pressure influences the microcirculation parameters measured by optical methods: diffuse reflectance spectroscopy, fluorescence spectroscopy, and LDF. Therefore, further study of probe pressure effects can be useful to improve the reliability of optical measurements. During pressure tests, the variation of the mean perfusion measured by LDF is usually estimated. Additional information concerning the physiological mechanisms of the vascular tone regulation system in response to local pressure can be obtained using spectral analysis of LDF samples. The aim of the present work was to develop a protocol and a data processing algorithm appropriate for studying the physiological response to a local pressure test. Involving six subjects (20±2 years) and performing five measurements for every subject, we estimated the inter-subject and inter-group variability of the response of both the averaged and the oscillating parts of the LDF signal to external surface pressure. The final purpose of the work was to find specific features that can further be used in wider clinical studies. The cutaneous perfusion measurements were carried out with a LAKK-02 device (SPE LAZMA Ltd., Russia); the skin loading was provided by an originally designed device which distributes the pressure around the LDF probe. The probe was installed on the dorsal part of the distal phalanx of the index finger.
We collected measurements continuously for one hour and varied the loading from 0 to 180 mmHg stepwise, with a step duration of 10 minutes. We then post-processed the samples using the wavelet transform and traced the energy of oscillations in five frequency bands over time. Weak loading leads to pressure-induced vasodilation (PIV), so one should take into account that the perfusion measured under pressure conditions will be overestimated. On the other hand, we revealed a decrease in endothelium-associated fluctuations. Further loading (88 mmHg) induces an amplification of pulsations in all frequency bands. We assume that such loading leads to a higher number of closed capillaries and a larger contribution of arterioles to the LDF signal and, as a consequence, to more vivid oscillations, which are mainly formed in arterioles. External pressure higher than 144 mmHg leads to a decrease of the oscillating components; after removing the loading, a very rapid restoration of tissue perfusion takes place. In this work, we have demonstrated that local skin loading influences the microcirculation parameters measured by optical techniques; this should be taken into account while developing portable electronic devices. The proposed protocol of local loading allows one to evaluate PIV as well as to trace the dynamics of blood flow oscillations. This study was supported by the Russian Science Foundation under project N 18-15-00201.
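The band-energy tracing described above rests on the continuous wavelet transform; a minimal self-contained sketch with a complex Morlet wavelet on a synthetic signal follows (the sampling rate, test frequencies and normalization are illustrative assumptions; real analyses would use a dedicated wavelet package and the study's own processing chain):

```python
import cmath
import math

# Energy of Morlet-wavelet coefficients at one analysis frequency (simplified,
# un-normalized sketch; not the processing chain of the study).

def morlet_band_energy(signal, fs, freq, w0=6.0):
    """Mean |W|^2 of the wavelet transform of `signal` at frequency `freq` (Hz)."""
    scale = w0 * fs / (2.0 * math.pi * freq)      # wavelet scale in samples
    half = int(4 * scale)                         # truncate support at ~4 scales
    wav = [cmath.exp(1j * w0 * k / scale) * math.exp(-0.5 * (k / scale) ** 2)
           / math.sqrt(scale) for k in range(-half, half + 1)]
    energy, count = 0.0, 0
    for i in range(half, len(signal) - half):     # skip edges (no padding)
        coeff = sum(signal[i + k - half] * wav[k].conjugate()
                    for k in range(len(wav)))
        energy += abs(coeff) ** 2
        count += 1
    return energy / max(count, 1)

fs = 20.0  # Hz (assumed sampling rate, for illustration only)
sig = [math.sin(2.0 * math.pi * 1.0 * i / fs) for i in range(1000)]  # 1 Hz tone
e_cardiac = morlet_band_energy(sig, fs, 1.0)  # near the tone: large energy
e_resp = morlet_band_energy(sig, fs, 0.3)     # far from the tone: small energy
```

Scanning a grid of analysis frequencies in this way yields the time-averaged energy per band that the protocol traces under each loading step.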

Keywords: blood microcirculation, laser Doppler flowmetry, pressure-induced vasodilation, wavelet analyses

Procedia PDF Downloads 124
17 Mapping of Urban Micro-Climate in Lyon (France) by Integrating Complementary Predictors at Different Scales into Multiple Linear Regression Models

Authors: Lucille Alonso, Florent Renard

Abstract:

The characterization of urban heat islands (UHI) and their interactions with climate change and urban climates is a major research and public health issue, given the increasing urbanization of the population. Addressing it requires a better knowledge of the UHI and the micro-climate in urban areas, combining measurements and modelling. This study contributes to this topic by evaluating microclimatic conditions in dense urban areas of the Lyon Metropolitan Area (France) using a combination of traditionally used data, such as topography, together with LiDAR (Light Detection And Ranging) data, Landsat 8 and Sentinel satellite observations, and ground measurements by bicycle. These bicycle-borne weather data collections are used to build the database of the variable to be modelled, the air temperature, over Lyon's hyper-centre. This study aims to model the air temperature, measured during 6 mobile campaigns in Lyon in clear weather, using multiple linear regressions based on 33 explanatory variables. These fall into various categories, such as meteorological parameters from remote sensing, topographic variables, vegetation indices, the presence of water, humidity, bare soil, buildings, radiation, urban morphology, and proximity and density with respect to various land uses (water surfaces, vegetation, bare soil, etc.). The acquisition sources are multiple: the Landsat 8 and Sentinel satellites, LiDAR points, and cartographic products downloaded from the Greater Lyon open data platform. For the presence of low, medium, and high vegetation, buildings, and ground, several buffers around these factors were tested (5, 10, 20, 25, 50, 100, 200 and 500 m). The buffers with the best linear correlations with air temperature are 5 m around the measurement points for ground and for low and medium vegetation, 50 m for buildings, and 100 m for high vegetation.
The explanatory model of the dependent variable is obtained by multiple linear regression on the remaining explanatory variables (Pearson correlation |r| < 0.7 and VIF < 5) combined with a stepwise selection algorithm. Moreover, holdout cross-validation (80% training, 20% testing) is performed, owing to its ability to detect over-fitting of multiple regression, even though multiple regression provides internal validation and randomization. Multiple linear regression explained, on average, 72% of the variance for the study days, with an average RMSE of only 0.20°C. Surface temperature is the most important variable in the estimation of air temperature. Other recurrent variables include distance to subway stations, distance to water areas, NDVI, the digital elevation model, the sky view factor, average vegetation density, and building density. Changing urban morphology influences the city's thermal patterns. The thermal atmosphere in dense urban areas can only be analysed at the microscale in order to consider the local impact of trees, streets, and buildings. There is currently no sufficiently dense network of fixed weather stations in central Lyon or in most major urban areas; it is therefore necessary to use mobile measurements, followed by modelling, to characterize the city's multiple thermal environments.
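The screening and holdout procedure described above (pairwise |r| < 0.7, VIF < 5, 80/20 split) can be sketched as follows. This is a minimal sketch on synthetic stand-in predictors (surface temperature, NDVI, and a deliberately near-duplicate column), not the study's 33-variable pipeline, and it omits the stepwise selection step.

```python
import numpy as np

def vif_filter(X, r_max=0.7, vif_max=5.0):
    """Screen predictors: drop one variable of any pair with |r| >= r_max,
    then drop the worst predictor while any VIF >= vif_max."""
    keep = list(range(X.shape[1]))
    while True:
        Xk = X[:, keep]
        r = np.corrcoef(Xk, rowvar=False)
        np.fill_diagonal(r, 0.0)
        i, j = np.unravel_index(np.abs(r).argmax(), r.shape)
        if abs(r[i, j]) >= r_max:
            keep.pop(max(i, j))
            continue
        # VIF of each remaining predictor: regress it on all the others
        vifs = []
        for c in range(len(keep)):
            yv = Xk[:, c]
            A = np.column_stack([np.delete(Xk, c, axis=1), np.ones(len(yv))])
            beta, *_ = np.linalg.lstsq(A, yv, rcond=None)
            r2 = 1.0 - np.var(yv - A @ beta) / np.var(yv)
            vifs.append(1.0 / max(1.0 - r2, 1e-12))
        if max(vifs) >= vif_max:
            keep.pop(int(np.argmax(vifs)))
            continue
        return keep

# Synthetic stand-in predictors and "air temperature" response
rng = np.random.default_rng(0)
n = 200
t_surf = rng.normal(size=n)
ndvi = rng.normal(size=n)
ndvi_dup = ndvi + 0.01 * rng.normal(size=n)      # nearly collinear with NDVI
X = np.column_stack([t_surf, ndvi, ndvi_dup])
y = 2.0 * t_surf - 1.0 * ndvi + 0.2 * rng.normal(size=n)

keep = vif_filter(X)                  # the near-duplicate column is removed
split = int(0.8 * n)                  # 80% training / 20% testing holdout
A_tr = np.column_stack([X[:split, keep], np.ones(split)])
beta, *_ = np.linalg.lstsq(A_tr, y[:split], rcond=None)
A_te = np.column_stack([X[split:, keep], np.ones(n - split)])
rmse = float(np.sqrt(np.mean((A_te @ beta - y[split:]) ** 2)))
```

The holdout RMSE on unseen data is the out-of-sample error that the abstract reports as 0.20°C on average for the real campaigns.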

Keywords: air temperature, LIDAR, multiple linear regression, surface temperature, urban heat island

Procedia PDF Downloads 109
16 Developing a Machine Learning-based Cost Prediction Model for Construction Projects using Particle Swarm Optimization

Authors: Soheila Sadeghi

Abstract:

Accurate cost prediction is essential for effective project management and decision-making in the construction industry. This study aims to develop a cost prediction model for construction projects using Machine Learning techniques and Particle Swarm Optimization (PSO). The research utilizes a comprehensive dataset containing project cost estimates, actual costs, resource details, and project performance metrics from a road reconstruction project. The methodology involves data preprocessing, feature selection, and the development of an Artificial Neural Network (ANN) model optimized using PSO. The study investigates the impact of various input features, including cost estimates, resource allocation, and project progress, on the accuracy of cost predictions. The performance of the optimized ANN model is evaluated using metrics such as Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared. The results demonstrate the effectiveness of the proposed approach in predicting project costs, outperforming traditional benchmark models. The feature selection process identifies the most influential variables contributing to cost variations, providing valuable insights for project managers. However, this study has several limitations. Firstly, the model's performance may be influenced by the quality and quantity of the dataset used. A larger and more diverse dataset covering different types of construction projects would enhance the model's generalizability. Secondly, the study focuses on a specific optimization technique (PSO) and a single Machine Learning algorithm (ANN). Exploring other optimization methods and comparing the performance of various ML algorithms could provide a more comprehensive understanding of the cost prediction problem. Future research should focus on several key areas. 
Firstly, expanding the dataset to include a wider range of construction projects, such as residential buildings, commercial complexes, and infrastructure projects, would improve the model's applicability. Secondly, investigating the integration of additional data sources, such as economic indicators, weather data, and supplier information, could enhance the predictive power of the model. Thirdly, exploring the potential of ensemble learning techniques, which combine multiple ML algorithms, may further improve cost prediction accuracy. Additionally, developing user-friendly interfaces and tools to facilitate the adoption of the proposed cost prediction model in real-world construction projects would be a valuable contribution to the industry. The findings of this study have significant implications for construction project management, enabling proactive cost estimation, resource allocation, budget planning, and risk assessment, ultimately leading to improved project performance and cost control. This research contributes to the advancement of cost prediction techniques in the construction industry and highlights the potential of Machine Learning and PSO in addressing this critical challenge. However, further research is needed to address the limitations and explore the identified future research directions to fully realize the potential of ML-based cost prediction models in the construction domain.
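The core idea of the study, training an ANN whose parameters are tuned by Particle Swarm Optimization, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' model: the data are synthetic stand-ins for project features, the network size and the PSO coefficients (inertia 0.72, acceleration 1.49) are assumed, and here PSO optimizes the network weights directly rather than hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for project features: cost estimate, resource hours, progress
n = 300
X = rng.uniform(0, 1, size=(n, 3))
y = 1.5 * X[:, 0] + 0.8 * X[:, 1] * X[:, 2] + 0.05 * rng.normal(size=n)

N_IN, N_HID = 3, 5
DIM = N_IN * N_HID + N_HID + N_HID + 1   # all weights and biases, flattened

def unpack(theta):
    """Split a flat parameter vector into the weights of a 1-hidden-layer ANN."""
    i = 0
    W1 = theta[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = theta[i:i + N_HID]; i += N_HID
    W2 = theta[i:i + N_HID]; i += N_HID
    return W1, b1, W2, theta[i]

def mse(theta):
    W1, b1, W2, b2 = unpack(theta)
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return float(np.mean((pred - y) ** 2))

# Plain global-best PSO over the flattened network parameters
n_particles, iters = 40, 300
w, c1, c2 = 0.72, 1.49, 1.49             # common PSO coefficients (assumed)
pos = rng.normal(scale=0.5, size=(n_particles, DIM))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
gbest_f = pbest_f.min()

for _ in range(iters):
    r1 = rng.uniform(size=(n_particles, DIM))
    r2 = rng.uniform(size=(n_particles, DIM))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    if f.min() < gbest_f:
        gbest, gbest_f = pos[f.argmin()].copy(), f.min()

baseline = float(np.var(y))   # MSE of always predicting the mean cost
```

In practice PSO is often applied to the ANN's hyperparameters (layer sizes, learning rate) with gradient training inside the loop; optimizing the weights directly, as here, keeps the sketch self-contained.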

Keywords: cost prediction, construction projects, machine learning, artificial neural networks, particle swarm optimization, project management, feature selection, road reconstruction

Procedia PDF Downloads 12
15 Identifying the Conservation Gaps in Poorly Studied Protected Area in the Philippines: A Study Case of Sibuyan Island

Authors: Roven Tumaneng, Angelica Kristina Monzon, Ralph Sedricke Lapuz, Jose Don De Alban, Jennica Paula Masigan, Joanne Rae Pales, Laila Monera Pornel, Dennis Tablazon, Rizza Karen Veridiano, Jackie Lou Wenceslao, Edmund Leo Rico, Neil Aldrin Mallari

Abstract:

Most protected area management plans in the Philippines, particularly for the smaller and more remote islands, suffer from insufficient baseline data, which should provide the basis for formulating measurable conservation targets and appropriate management interventions for these protected areas. Attempts to synthesize available data, particularly on the cultural and socio-economic characteristics of local peoples within and outside protected areas, also suffer from the lack of comprehensive and detailed inventories, which should be considered in designing adaptive management interventions for those protected areas. Mt Guiting-guiting Natural Park (MGGNP), located on Sibuyan Island, is one of the poorly studied protected areas in the Philippines. In this study, we determined the areas of highest biological importance in the protected area using the Maximum Entropy approach (MaxEnt), based on environmental predictors (i.e., topographic, bioclimatic, land cover, and soil image layers) derived from global remotely sensed data and on point occurrence data of bird and tree species recorded during field surveys on the island. A total of 23 trigger species of birds and trees were modeled and stacked to generate species richness maps for biological high conservation value areas (HCVAs). Forest habitat change was delineated using dual-polarised L-band ALOS-PALSAR mosaic data at 25 m spatial resolution, acquired in 2007 and 2009, to provide information on forest cover and habitat change on the island between those two years.
The livelihood guilds were determined using data gathered from 171 household interviews, from which demographic and livelihood variables were extracted (i.e., age, gender, number of household members, educational attainment, years of residency, distance from the forest edge, main occupation, alternative sources of food and resources during scarcity months, and the sources of these alternative resources). Using Principal Component Analysis (PCA) and the Kruskal-Wallis test, the diversity and patterns of forest resource use by people on the island were determined, with a particular focus on the economic activities that directly and indirectly affect the populations of key species, as well as on the levels of forest resource use by people in different areas of the park. Results showed that there are gaps in the area occupied by the natural park, as evidenced by the mismatch between the proposed HCVAs and the existing perimeter of the park. Of the eight livelihood guilds identified in the park, subsistence forest gathering emerged as the likely main driver of forest degradation. Determining the high conservation value areas and identifying the anthropogenic factors that influence the species richness and abundance of key species in the different management zones of MGGNP provides guidance for the design of a protected area management plan and future monitoring programs. Through intensive communication and consultation with government stakeholders and local communities, our results led to the setting of conservation targets in local development plans and serve as a basis for the repositioning of the boundaries and the reconfiguration of the management zones of MGGNP.
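The PCA and Kruskal-Wallis step of the household analysis can be sketched as follows. This is a minimal illustration on synthetic stand-in data, not the study's survey: the variables, the assumed distance unit, and the three hypothetical guild labels are placeholders for the eight guilds actually identified.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(2)

# Synthetic stand-in for the survey: rows = 171 households,
# columns = numeric demographic/livelihood variables
n = 171
age = rng.normal(40, 10, n)
members = rng.poisson(5, n).astype(float)
dist_forest = rng.uniform(0.1, 5.0, n)   # distance from forest edge (unit assumed)
years_res = rng.uniform(1, 40, n)
X = np.column_stack([age, members, dist_forest, years_res])

# PCA via SVD on standardized variables
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ Vt.T                        # household scores on each component
explained = S ** 2 / np.sum(S ** 2)      # proportion of variance per component

# Kruskal-Wallis: does the first PC differ across (hypothetical) livelihood
# guilds, e.g. farming / fishing / subsistence forest gathering?
guild = rng.integers(0, 3, n)
groups = [scores[guild == g, 0] for g in range(3)]
H, p = kruskal(*groups)
```

The Kruskal-Wallis test is used here, as in the abstract, because guild-level comparisons of survey-derived scores cannot be assumed normally distributed.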

Keywords: conservation gaps, livelihood guilds, MaxEnt, protected area

Procedia PDF Downloads 385