Search results for: sensor calibration
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1818

48 Radish Sprout Growth Dependency on LED Color in Plant Factory Experiment

Authors: Tatsuya Kasuga, Hidehisa Shimada, Kimio Oguchi

Abstract:

Recent rapid progress in ICT (Information and Communication Technology) has advanced the penetration of sensor networks (SNs) and their attractive applications. Agriculture is one of the fields well placed to benefit from ICT. Plant factories, which use computers and AI (Artificial Intelligence) to control several parameters related to plant growth in closed areas, such as air temperature, humidity, water, culture-medium concentration, and artificial lighting, are being researched in order to obtain stable and safe production of vegetables and medicinal plants all year, anywhere, and to attain self-sufficiency in food. By providing isolation from the natural environment, a plant factory can achieve higher productivity and safer products. However, the biggest issue with plant factories is the return on investment: profits are tenuous because of the large initial investments and running costs, i.e., electric power, incurred. At present, LED (Light Emitting Diode) lights are being adopted because they are more energy-efficient and encourage photosynthesis better than the fluorescent lamps used in the past. However, further cost reduction is essential. This paper introduces experiments that reveal which color of LED lighting best enhances the growth of cultured radish sprouts. Radish sprouts were cultivated in an experimental environment formed by a hydroponics kit with three cultivation shelves (28 samples per shelf), each with an artificial lighting rack. Seven LED arrays of different colors (white, blue, yellow-green, green, yellow, orange, and red) were compared with a fluorescent lamp as the control. Lighting duration was set to 12 hours a day. Normal water with no fertilizer was circulated. Seven days after germination, the length, weight, and leaf area of each sample were measured. Electrical power consumption for all lighting arrangements was also measured. Results and discussion: As to average sample length, no clear difference was observed between colors. As regards weight, the orange LED was less effective, and the difference was significant (p < 0.05). As to leaf area, the blue, yellow, and orange LEDs were significantly less effective. However, all LEDs offered higher productivity per watt consumed than the fluorescent lamp. Of the LEDs, the blue LED array attained the best results in terms of length, weight, and leaf area per watt consumed. Conclusion and future work: An experiment on radish sprout cultivation under seven different colored LED arrays showed no clear difference in terms of sample size. However, if electrical power consumption is considered, LEDs offered about twice the growth rate of the fluorescent lamp, and among them, blue LEDs showed the best performance. Further cost reduction, e.g., low-power lighting, remains a major issue for actual system deployment. An automatic plant monitoring system with sensors is another study target.
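
The per-watt comparison above amounts to dividing each growth metric by the measured power of the lighting array. A minimal sketch in Python (all values and names below are illustrative assumptions, not data from the paper):

```python
# Hypothetical illustration: normalize mean growth metrics by measured
# electrical power to compare lighting arrays, as the abstract describes.
lighting = {
    # array: (mean_length_mm, mean_weight_mg, power_w) -- assumed values
    "fluorescent": (80.0, 450.0, 40.0),
    "blue_led":    (82.0, 460.0, 18.0),
    "orange_led":  (78.0, 400.0, 18.0),
}

for name, (length_mm, weight_mg, power_w) in lighting.items():
    print(f"{name:12s} length/W = {length_mm / power_w:5.2f} mm/W, "
          f"weight/W = {weight_mg / power_w:6.2f} mg/W")
```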

Keywords: electric power consumption, LED color, LED lighting, plant factory

Procedia PDF Downloads 188
47 Photophysics and Torsional Dynamics of Thioflavin T in Deep Eutectic Solvents

Authors: Rajesh Kumar Gautam, Debabrata Seth

Abstract:

Thioflavin-T (ThT) plays a key role as a biologically important fluorescent sensor for amyloid fibrils. ThT-based methods have been developed to detect and analyze different types of disease, such as neurodegenerative disorders, Alzheimer's, Parkinson's, and type II diabetes. ThT is used as a fluorescent marker to detect the formation of amyloid fibrils: in the presence of amyloid fibrils, ThT becomes highly fluorescent. ThT undergoes a twisting motion around the C-C bond between its two adjacent benzothiazole and dimethylaniline aromatic rings, which is predominantly affected by the micro-viscosity of the local environment. The present study describes the photophysics and torsional dynamics of the biologically active molecule ThT in deep eutectic solvents (DESs). DESs are environment-friendly, low-cost, and biodegradable alternatives to ionic liquids. A DES resembles an ionic liquid, but the constituents of a DES include a hydrogen-bond donor and acceptor species in addition to ions. Due to the presence of the H-bonding network within a DES, it exhibits structural heterogeneity. Herein, we prepared two different DESs by mixing urea with choline chloride and with N,N-diethyl ethanol ammonium chloride at ~340 K; it has been reported that the deep eutectic mixture of choline chloride with urea gives a liquid with a freezing point of 12°C. We experimented with two different concentrations of ThT and observed that at the higher concentration (50 µM), ThT forms aggregates in the DES. The photophysics of ThT as a function of temperature was explored using steady-state and picosecond time-resolved fluorescence emission spectroscopic techniques. From the spectroscopic analysis, we observed that with rising temperature the fluorescence quantum yield and lifetime values of ThT gradually decrease; this is the cumulative effect of thermal quenching and an increase in the torsional rate constant. The fluorescence quantum yield and fluorescence lifetime values were always higher for DES-II (urea and N,N-diethyl ethanol ammonium chloride) than for DES-I (urea and choline chloride), mainly due to the structural heterogeneity of the medium. This was further confirmed by comparing the activation energy of viscous flow with the activation energy of non-radiative decay. In less viscous media, ThT undergoes a very fast twisting process that leads to deactivation from the photoexcited state; in this system, the torsional motion increases with increasing temperature. We conclude that besides the bulk viscosity of the medium, its structural heterogeneity plays a crucial role in guiding the photophysics of ThT in DESs. The analysis of the experimental data was carried out in the temperature range 288 K ≤ T ≤ 333 K. The aim of the present work is to obtain insight into DESs as media for studying the various photophysical processes of ThT, an amyloid-fibril-sensing molecule.
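
The comparison between the activation energies of non-radiative decay and of viscous flow can be written with standard photophysics relations (a sketch using textbook formulas; the paper's exact fitting procedure is not reproduced here):

```latex
% Non-radiative (torsional) rate from quantum yield and lifetime:
\[
  k_{\mathrm{nr}}(T) = \frac{1 - \Phi_f(T)}{\tau_f(T)}
\]
% Arrhenius analysis: the slope of ln k_nr vs 1/T gives the activation
% energy of non-radiative decay, compared with that of viscous flow:
\[
  \ln k_{\mathrm{nr}} = \ln A - \frac{E_a^{\mathrm{nr}}}{RT},
  \qquad
  \eta(T) = \eta_0 \, e^{E_a^{\eta}/RT}
\]
```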

Keywords: deep eutectic solvent, photophysics, Thioflavin T, torsional rate constant

Procedia PDF Downloads 162
46 Carbon Nanotubes (CNTs) as Multiplex Surface Enhanced Raman Scattering Sensing Platforms

Authors: Pola Goldberg Oppenheimer, Stephan Hofmann, Sumeet Mahajan

Abstract:

Owing to its fingerprint molecular specificity and high sensitivity, surface-enhanced Raman scattering (SERS) is an established analytical tool for chemical and biological sensing capable of single-molecule detection. A strong Raman signal can be generated from SERS-active platforms provided the analyte is within the enhanced plasmon field generated near a noble-metal nanostructured substrate. The key requirement for generating the strong plasmon resonances that provide this electromagnetic enhancement is an appropriate metal surface roughness. Controlling nanoscale features to generate these regions of high electromagnetic enhancement, the so-called SERS 'hot-spots', is still a challenge. Significant advances have been made in SERS research, with wide-ranging techniques to generate substrates with tunable size and shape of the nanoscale roughness features. Nevertheless, the development and application of SERS has been inhibited by the irreproducibility and complexity of fabrication routes. The ability to generate straightforward, cost-effective, multiplex-able, and addressable SERS substrates with high enhancements is of profound interest for miniaturised sensing devices. Carbon nanotubes (CNTs) have concurrently been a topic of extensive research; however, their application to plasmonics has only recently begun to gain interest. CNTs can provide low-cost, large-active-area patternable substrates which, coupled with appropriate functionalization, can provide advanced SERS platforms. Herein, advanced methods to generate CNT-based SERS-active detection platforms will be discussed. First, a novel electrohydrodynamic (EHD) lithographic technique will be introduced for patterning CNT-polymer composites, providing a straightforward, single-step approach for generating high-fidelity sub-micron-sized nanocomposite structures within which anisotropic CNTs are vertically aligned. The created structures are readily fine-tuned, which is an important requirement for optimizing SERS to obtain the highest enhancements, with each of the EHD-CNT structural units functioning as an isolated sensor. Further, gold-functionalized vertically aligned carbon nanotube forests (VACNTFs) are fabricated as SERS micro-platforms. The VACNTs' diameters and density play an important role in the Raman signal strength, highlighting the importance of structural parameters previously overlooked in designing and fabricating optimized CNT-based SERS nanoprobes. VACNT forests patterned into predesigned pillar structures are further utilized for multiplex detection of bio-analytes. Since CNTs exhibit electrical conductivity and unique adsorption properties, these are further harnessed in the development of novel chemical and bio-sensing platforms.
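
For context, substrates like these are usually benchmarked with the standard textbook SERS enhancement-factor definition (not a formula quoted from this paper), comparing the per-molecule signal on the substrate against a non-enhanced reference:

```latex
\[
  \mathrm{EF} = \frac{I_{\mathrm{SERS}} / N_{\mathrm{surf}}}
                     {I_{\mathrm{ref}}  / N_{\mathrm{vol}}}
\]
% I: measured Raman intensities; N_surf and N_vol: numbers of molecules
% probed on the SERS substrate and in the reference sample, respectively.
```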

Keywords: carbon nanotubes (CNTs), EHD patterning, SERS, vertically aligned carbon nanotube forests (VACNTF)

Procedia PDF Downloads 331
45 Physical Activity Based on Daily Step-Count in Inpatient Setting in Stroke and Traumatic Brain Injury Patients in Subacute Stage Follow Up: A Cross-Sectional Observational Study

Authors: Brigitte Mischler, Marget Hund, Hilfiker Roger, Clare Maguire

Abstract:

Background: Brain injury is one of the main causes of permanent physical disability, and improving walking ability is one of the most important goals for patients. After inpatient rehabilitation, most do not receive long-term rehabilitation services. Physical activity is important for the health of the musculoskeletal system, the circulatory system, and the psyche. Objective: This follow-up study measured physical activity in subacute patients after traumatic brain injury and stroke. The number of steps in the inpatient setting was compared to the number of steps one year after the event in the outpatient setting. Methods: This follow-up study is a cross-sectional observational study with 29 participants. Daily step count over a seven-day period one year after the event was measured with the StepWatch™ ankle sensor. The number of steps taken one year after the event in the outpatient setting was compared with the number of steps taken during the inpatient stay, and it was evaluated whether participants reached the recommended target value. Correlations between step count and exit domain, FAC level, walking speed, light touch, joint position sense, cognition, and fear of falling were calculated. Results: The median (IQR) daily step count of all patients was 2512 (568.5, 4070.5). At follow-up, the number of steps had improved to 3656 (1710, 5900); the average difference was 1159 (-2825, 6840) steps per day. Participants who were unable to walk independently (FAC 1) improved from 336 (5, 705) to 1808 (92, 5354) steps per day. Participants able to walk with assistance (FAC 2-3) walked 700 (31, 3080) steps and, at follow-up, 3528 (243, 6871). Independent walkers (FAC 4-5) walked 4093 (2327, 5868) steps and achieved 3878 (777, 7418) daily steps at follow-up. This value is significantly below the recommended guideline. Step count at follow-up showed moderate to high and statistically significant correlations: positive for FAC score, FIM total score, and walking speed, and negative for fear of falling. Conclusions: Only 17% of all participants achieved the recommended daily step count one year after the event. We need better inpatient and outpatient strategies to improve physical activity. In everyday clinical practice, pedometers and diaries with objectives should be used. A concrete weekly schedule should be drawn up together with the patient, relatives, or nursing staff after discharge. This should include the daily self-training that was instructed during the inpatient stay. A good connection to social life (a professional connection or a daily task/activity) can be an important part of improving daily activity. Further research should evaluate strategies to increase daily step counts in inpatient as well as outpatient settings.
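
A minimal sketch of the summary statistics used above: median (IQR) step counts and a correlation against a clinical score. All values below are invented for illustration, not study data, and the choice of correlation coefficient is an assumption:

```python
import numpy as np

steps = np.array([336, 700, 1808, 2512, 3528, 3656, 4093])  # assumed values
fac   = np.array([1,   2,   1,    3,    2,    4,    5])     # assumed values

# Median (IQR), as reported in the abstract:
q1, med, q3 = np.percentile(steps, [25, 50, 75])
print(f"median (IQR): {med:.0f} ({q1:.0f}, {q3:.0f})")

# Correlation between step count and FAC level:
r = np.corrcoef(fac, steps)[0, 1]
print(f"r = {r:.2f}")
```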

Keywords: neurorehabilitation, stroke, traumatic brain injury, steps, step count

Procedia PDF Downloads 14
44 Fault Diagnosis and Fault-Tolerant Control of Bilinear-Systems: Application to Heating, Ventilation, and Air Conditioning Systems in Multi-Zone Buildings

Authors: Abderrhamane Jarou, Dominique Sauter, Christophe Aubrun

Abstract:

Over the past decade, the growing demand for energy efficiency in buildings has attracted the attention of the control community. Failures in HVAC (heating, ventilation, and air conditioning) systems in buildings can have a significant impact on the desired and expected energy performance of buildings, as well as on user comfort. Fault-Tolerant Control (FTC) is a recent technology area that studies the adaptation of control algorithms to faulty operating conditions of a system, and its application to HVAC systems has gained attention in the last two decades. The objective is to keep the variations in system performance due to faults within an acceptable range with respect to the desired nominal behavior. This paper considers the so-called active approach, which is based on a fault detection and identification scheme combined with a control reconfiguration algorithm that determines a new set of control parameters so that the reconfigured performance is "as close as possible," in some sense, to the nominal performance. Thermal models of buildings and their HVAC systems are described by non-linear (usually bilinear) equations. Most of the work carried out so far in FDI (fault detection and isolation) or FTC considers a linearized model of the studied system; however, such a model is only valid in a reduced range of variation. This study presents a new fault diagnosis (FD) algorithm based on a bilinear observer for the detection and accurate estimation of the magnitude of an HVAC system failure. The main contribution of the proposed FD algorithm is that, instead of using specific linearized models, it inherits the structure of the actual bilinear model of the building thermal dynamics. As an immediate consequence, the algorithm is applicable to a wide range of unpredictable operating conditions, e.g., weather dynamics, outdoor air temperature, and zone occupancy profile. A bilinear fault detection observer is proposed for a bilinear system with unknown inputs. The residual vector in the observer design is decoupled from the unknown inputs and, under certain conditions, is made sensitive to all faults. Sufficient conditions are given for the existence of the observer, and results are given for the explicit computation of the observer design matrices. Dedicated observer schemes (DOS) are considered for sensor FDI, while unknown-input bilinear observers are considered for actuator or system-component FDI. The proposed FTC strategy works as follows: at the first level, FDI algorithms are implemented, which also makes it possible to estimate the magnitude of the fault; once the fault is detected, the fault estimate is used to feed the second level and reconfigure the control law so that the expected performance is recovered. This paper is organized as follows. A general structure for fault-tolerant control of buildings is first presented, and the building model under consideration is introduced. Then, the observer-based design for fault diagnosis of bilinear systems is studied. The FTC approach is developed in Section IV. Finally, a simulation example is given in Section V to illustrate the proposed method.
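
As a sketch of the model class involved, a standard bilinear state-space form with unknown inputs from the FDI literature reads as follows (the paper's exact matrices and existence conditions are not reproduced here):

```latex
\[
  \dot{x}(t) = A\,x(t) + \sum_{i=1}^{m} u_i(t)\,N_i\,x(t) + B\,u(t)
             + E\,d(t) + F\,f(t),
  \qquad y(t) = C\,x(t)
\]
% x: states (e.g., zone temperatures); u: control inputs, entering
% multiplicatively through the N_i terms (the bilinearity); d: unknown
% inputs (e.g., weather), decoupled from the residual; f: faults.
% The residual r(t) = y(t) - C\hat{x}(t) is designed to be insensitive
% to d but sensitive to f.
```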

Keywords: bilinear systems, fault diagnosis, fault-tolerant control, multi-zone buildings

Procedia PDF Downloads 172
43 Exploring the Application of IoT Technology in Lower Limb Assistive Devices for Rehabilitation during the Golden Period of Stroke Patients with Hemiplegia

Authors: Ching-Yu Liao, Ju-Joan Wong

Abstract:

Recent years have shown a trend toward younger stroke patients and an increase in ischemic strokes, alongside a rise in stroke incidence. This has led to a growing demand for telemedicine, which the COVID-19 pandemic made even more urgent. This shift in healthcare is also closely related to advancements in Internet of Things (IoT) technology. Stroke-induced hemiparesis is a significant issue for patients. The medical community holds that if intervention occurs within three to six months of stroke onset, 80% of the residual effects can be restored to normal; this period, during which patients undergo treatment and rehabilitation and neural plasticity is at its best, is known as the stroke golden period. Lower limb rehabilitation for stroke generally includes exercises such as supported standing and walking posture, typically involving the healthy limb guiding the affected limb to achieve rehabilitation goals. Existing gait training aids in hospitals usually cover balance gait, sitting posture training, and precise muscle control, effectively addressing issues of poor gait, insufficient muscle activity, and the inability to train independently during recovery. However, home training aids, such as braced and wheeled devices, often rely on the healthy limb to pull the affected limb, leading to lower usage of the affected limb and worsening circular (circumduction) walking and compensatory movement issues. IoT technology connects devices via the internet to record and receive data, provide feedback, and adjust equipment for intelligent effects. Therefore, this study aims to explore how IoT can be integrated into existing gait training aids to monitor and sense home rehabilitation movements, improve compensatory issues in gait training through real-time feedback, and enable healthcare professionals to quickly understand patient conditions and enhance medical communication. To understand the needs of hemiparetic patients, a review of relevant literature from the past decade will be conducted. From the perspective of user experience, participant observation will be used to explore the use of home training aids by stroke patients and therapists, and interviews with physical therapists will be conducted to obtain professional opinions and practical experience. Design specifications for home training aids for hemiparetic patients will then be summarized. Applying IoT technology to lower limb training aids for stroke hemiparesis can help promote the recovery of walking function in hemiparetic patients, reduce muscle atrophy, and allow healthcare professionals to immediately grasp patient conditions and adjust gait training plans based on collected and analyzed information. Exploring these potential development directions provides a valuable reference for the further application of IoT technology in the field of medical rehabilitation.

Keywords: stroke, hemiplegia, rehabilitation, gait training, internet of things technology

Procedia PDF Downloads 29
42 European Electromagnetic Compatibility Directive Applied to Astronomical Observatories

Authors: Oibar Martinez, Clara Oliver

Abstract:

The Cherenkov Telescope Array (CTA) project aims to build two observatories of Cherenkov telescopes, located at Cerro Paranal, Chile, and La Palma, Spain. These facilities are used in this paper as a case study to investigate how to apply standard directives on electromagnetic compatibility to astronomical observatories. Cherenkov telescopes are able to provide valuable information on both Galactic and extragalactic sources by measuring Cherenkov radiation, which is produced by particles that travel faster than the speed of light in the atmosphere. The construction requirements demand compliance with the European Electromagnetic Compatibility Directive. The largest telescopes of these observatories, called Large-Sized Telescopes (LSTs), are high-precision instruments with advanced photomultipliers able to detect the faint sub-nanosecond blue light pulses produced by Cherenkov radiation. They have a 23-meter parabolic reflective surface, which focuses the radiation on a camera composed of an array of high-speed photosensors that are highly sensitive to radio-spectrum pollution. The camera has a field of view of about 4.5 degrees and has been designed for maximum compactness and the lowest weight, cost, and power consumption. Each pixel incorporates a photosensor able to discriminate single photons, together with the corresponding readout electronics. The first LST is already commissioned and is intended to be operated as a service to the scientific community. Because of this, it must comply with a series of reliability and functional requirements and must carry a Conformité Européenne (CE) marking, which demands compliance with Directive 2014/30/EU on electromagnetic compatibility. The main difficulty in accomplishing this goal resides in the fact that CE marking setups and procedures were implemented for industrial products, whereas no clear protocols have been defined for scientific installations. In this paper, we aim to answer the question of how the directive should be applied to our installation to guarantee the fulfillment of all the requirements and the proper functioning of the telescope itself. Experts in optics and electromagnetism were both needed to make these kinds of decisions and to adapt tests that were designed for equipment of limited dimensions to large scientific plants. An analysis of the elements and configurations most likely to be affected by external interference, and of those most likely to cause the greatest disturbances, was also performed. Obtaining the CE mark requires knowing what the harmonized standards are and how the elaboration of the specific requirements is defined. For this type of large installation, one needs to adapt and develop the tests to be carried out. In addition, throughout this process, certification entities and notified bodies play a key role in preparing and agreeing on the required technical documentation. We have focused our attention mostly on the technical aspects of each point. We believe that this contribution will be of interest to other scientists involved in applying industrial quality assurance standards to large scientific plants.

Keywords: CE marking, electromagnetic compatibility, european directive, scientific installations

Procedia PDF Downloads 110
41 Advancing Agriculture through Technology: An Abstract of Research Findings

Authors: Eugene Aninagyei-Bonsu

Abstract:

Introduction: Agriculture has been a cornerstone of human civilization, ensuring food security and livelihoods for billions of people worldwide. In recent decades, rapid advancements in technology have revolutionized the agricultural sector, offering innovative solutions to enhance productivity, sustainability, and efficiency. This abstract summarizes key findings from a research study that explores the impacts of technology in modern agriculture and its implications for future food production systems. Methodologies: The research study employed a mixed-methods approach, combining quantitative data analysis with qualitative interviews and surveys to gain a comprehensive understanding of the role of technology in agriculture. Data was collected from various stakeholders, including farmers, agricultural technicians, and industry experts, to capture diverse perspectives on the adoption and utilization of agricultural technologies. The study also utilized case studies and literature reviews to contextualize the findings within the broader agricultural landscape. Major Findings: The research findings reveal that technology plays a pivotal role in transforming traditional farming practices and driving innovation in agriculture. Advanced technologies such as precision agriculture, drone technology, genetic engineering, and smart irrigation systems have significantly improved crop yields, reduced environmental impact, and optimized resource utilization. Farmers who have embraced these technologies have reported increased productivity, enhanced profitability, and improved resilience to environmental challenges. Furthermore, the study highlights the importance of accessible and affordable technology solutions for smallholder farmers in developing countries. Mobile applications, sensor technologies, and digital platforms have enabled small-scale farmers to access market information, weather forecasts, and agricultural best practices, empowering them to make informed decisions and improve their livelihoods. The research emphasizes the need for targeted policies and investments to bridge the digital divide and promote equitable technology adoption in agriculture. Conclusion: In conclusion, this research underscores the transformative potential of technology in agriculture and its critical role in advancing sustainable food production systems. The findings suggest that harnessing technology can address key challenges facing the agricultural sector, including climate change, resource scarcity, and food insecurity. By embracing innovation and leveraging technology, farmers can enhance their productivity, profitability, and resilience in a rapidly evolving global food system. Moving forward, policymakers, researchers, and industry stakeholders must collaborate to facilitate the adoption of appropriate technologies, support capacity building, and promote sustainable agricultural practices for a more resilient and food-secure future.

Keywords: technology development in modern agriculture, the influence of information technology access in agriculture, analyzing agricultural technology development, analyzing the frontier technology of agricultural IoT

Procedia PDF Downloads 35
40 Comparison of Bioelectric and Biomechanical Electromyography Normalization Techniques in Disparate Populations

Authors: Drew Commandeur, Ryan Brodie, Sandra Hundza, Marc Klimstra

Abstract:

The amplitude of raw electromyography (EMG) is affected by recording conditions and often requires normalization to make meaningful comparisons. Bioelectric methods normalize with an EMG signal recorded during a standardized task or from the experimental protocol itself, while biomechanical methods often involve measurements with an additional sensor such as a force transducer. Common bioelectric normalization techniques for treadmill walking include maximum voluntary isometric contraction (MVIC), dynamic EMG peak (EMGPeak), or dynamic EMG mean (EMGMean). There are several concerns with using MVICs to normalize EMG, including poor reliability and potential discomfort. A limitation of bioelectric normalization techniques is that they could misrepresent the absolute magnitude of force generated by the muscle and thus impact the interpretation of EMG between functionally disparate groups. Additionally, methods that normalize to EMG recorded during the task itself may eliminate some real inter-individual variability due to biological variation. This study compared biomechanical and bioelectric EMG normalization techniques during treadmill walking to assess the impact of the normalization method on the functional interpretation of EMG data. For the biomechanical method, we normalized EMG to a target torque (EMGTS); the bioelectric methods were normalization to the mean and to the peak of the signal during the walking task (EMGMean and EMGPeak). The effects of normalization on muscle activation pattern, EMG amplitude, and inter-individual variability were compared between disparate cohorts of OLD (76.6 yrs, N=11) and YOUNG (26.6 yrs, N=11) adults. Participants walked on a treadmill at a self-selected pace while EMG was recorded from the right lower limb. EMG data from the soleus (SOL), medial gastrocnemius (MG), tibialis anterior (TA), vastus lateralis (VL), and biceps femoris (BF) were phase-averaged into 16 bins (phases) representing the gait cycle, with bins 1-10 associated with right stance and bins 11-16 with right swing. Pearson's correlations showed that activation patterns across the gait cycle were similar between all methods, ranging from r=0.86 to r=1.00 with p<0.05. This indicates that each method can characterize the muscle activation pattern during walking. Repeated-measures ANOVA showed a main effect of age in MG for EMGPeak, but no other main effects were observed. Age-by-phase interactions in EMG amplitude between YOUNG and OLD led to different statistical interpretations depending on the method: EMGTS normalization identified the fewest differences (four phases across all five muscles), while EMGMean (11 phases) and EMGPeak (19 phases) showed considerably more differences between cohorts. The second notable finding was that the coefficient of variation, representing inter-individual variability, was greatest for EMGTS and lowest for EMGMean, with EMGPeak slightly higher than EMGMean for all muscles. This finding supports our expectation that EMGTS normalization retains inter-individual variability, which may be desirable; however, it also suggests that even when large differences are expected, a larger sample size may be required to observe them. Our findings clearly indicate that the interpretation of EMG is highly dependent on the normalization method used, and it is essential to consider the strengths and limitations of each method when drawing conclusions.
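
A minimal sketch of the three normalization schemes and the variability measure compared above (array shapes and all values are invented for illustration, not study data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Phase-averaged EMG envelope: 16 gait-cycle bins (1-10 stance, 11-16 swing).
emg = np.abs(rng.normal(1.0, 0.4, 16))
emg_target_torque = 1.8   # EMG recorded at the target-torque task (assumed)

emg_peak = emg / emg.max()            # EMGPeak: normalize to dynamic peak
emg_mean = emg / emg.mean()           # EMGMean: normalize to dynamic mean
emg_ts   = emg / emg_target_torque    # EMGTS: biomechanical normalization

# Coefficient of variation across a cohort (N=11) for one gait-cycle bin:
cohort_bin = np.abs(rng.normal(0.6, 0.2, 11))
cv = cohort_bin.std(ddof=1) / cohort_bin.mean()
print(f"CV = {cv:.2f}")
```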

Keywords: electromyography, EMG normalization, functional EMG, older adults

Procedia PDF Downloads 91
39 Physiological Effects on Scientist Astronaut Candidates: Hypobaric Training Assessment

Authors: Pedro Llanos, Diego García

Abstract:

This paper aims to expand our understanding of the effects of hypoxia training on our bodies in order to better model its dynamics and leverage some of its implications and effects on human health. Hypoxia training is a recommended practice for military and civilian pilots, allowing them to recognize their early hypoxia signs and symptoms; here it was applied to Scientist Astronaut Candidates (SACs), who underwent hypobaric hypoxia (HH) exposure as part of a training activity for prospective suborbital flight applications. This observational-analytical study describes the physiologic responses and symptoms experienced by a group of SACs before, during, and after HH exposure, and proposes a model for assessing predicted versus observed physiological responses. A group of individuals with diverse Science, Technology, Engineering, and Mathematics (STEM) backgrounds underwent a hypobaric training session at an altitude of up to 22,000 ft (FL220), or 6,705 meters, during which heart rate (HR), breathing rate (BR), and core temperature (Tc) were monitored with a chest-strap sensor before and after HH exposure. A pulse oximeter registered oxygen saturation (SpO2) levels and the number and duration of desaturations during the HH chamber flight. Hypoxia symptoms described by the SACs during the HH training session were also registered. These data allowed us to generate a preliminary predictive model of the oxygen desaturation and O2 pressure curve for each subject, consisting of a sixth-order polynomial fit during exposure and a fifth- or fourth-order polynomial fit during recovery. Data analysis showed no significant differences in HR and BR between pre- and post-HH exposure in most of the SACs, while Tc measures showed slight but consistent decreases. All subjects registered SpO2 greater than 94% for the majority of their individual HH exposures, but all of them presented at least one clinically significant desaturation (SpO2 < 85% for more than 5 seconds), and half of the individuals showed SpO2 below 87% for at least 30% of their HH exposure time. Finally, real-time collection of HH symptoms identified temperature somatosensory perceptions (SP) in 65% of individuals and task-focus issues in 52.5% of individuals as the most common HH indications. 95% of the subjects experienced HH onset symptoms below FL180, and all participants achieved full recovery from HH symptoms within 1 minute of donning their O2 mask. The current HH study performed on this group of individuals suggests a rapid and fully reversible physiologic response after HH exposure, as expected and as obtained in previous studies. Our data showed consistent agreement between predicted and observed SpO2 curves during HH, suggesting a mathematical function that may be used to model HH performance deficiencies. During the HH study, real-time HH symptoms were registered, providing evidence of SP and task focusing as the earliest and most common indicators. Finally, an assessment of HH signs and symptoms in a heterogeneous group of non-pilot individuals showed results similar to previous studies in homogeneous populations of pilots.
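
A sketch of the kind of polynomial modelling described above, using NumPy with synthetic data (the exposure profile and all values are assumptions, not study measurements):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic SpO2 trace during a chamber exposure (minutes at altitude).
t = np.linspace(0.0, 20.0, 40)
spo2 = 97.0 - 10.0 * (1.0 - np.exp(-t / 6.0)) + rng.normal(0.0, 0.3, t.size)

# Sixth-order fit during exposure, as in the study; recovery would use a
# separate fifth- or fourth-order fit over the post-exposure segment.
coeffs = np.polyfit(t, spo2, deg=6)
model = np.poly1d(coeffs)
print(f"predicted SpO2 at t=10 min: {model(10.0):.1f}%")
```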

Keywords: slow onset hypoxia, hypobaric chamber training, altitude sickness, symptoms and altitude, pressure cabin

Procedia PDF Downloads 116
38 Reducing Flood Risk in a Megacity: Using Mobile Application and Value Capture for Flood Risk Prevention and Risk Reduction Financing

Authors: Dedjo Yao Simon, Takahiro Saito, Norikazu Inuzuka, Ikuo Sugiyama

Abstract:

The megacity of Abidjan is a coastal urban area where the number of reported floods and their associated impacts are increasing rapidly due to climate change, uncontrolled urbanization, rapid population growth, and a lack of flood-disaster mitigation and citizen awareness. The objective of this research is to reduce, in both the short and the long term, the human and socio-economic impacts of flooding. Hydrological simulation is applied to free-of-charge global spatial data (digital elevation model, satellite-based rainfall estimates, land use) to identify flood-prone areas and to map flood risk. Direct interviews with a sample of residents are used to validate the simulation results. A mobile application (Flood Locator) is then prototyped to disseminate the risk information to citizens. In addition, a value-capture strategy is proposed to mobilize financial resources for a disaster risk reduction fund (DRRf) to reduce the impact of flooding. The town of Cocody in Abidjan is selected as the case study area for this research. The mapping of flood risk reveals that the population living in the study area is highly vulnerable. For a 5-year flood, more than 60% of the floodplain is affected by a water depth of at least 0.5 meters, and more than 1000 ha with at least 5000 buildings are directly exposed. The risk becomes higher for 50- and 100-year floods. The interviews also reveal that the majority of citizens are not aware of the risk and severity of flooding in their community. This shortage of information is overcome by Flood Locator and by an urban flood database we prototyped to accumulate flood data. The Flood Locator app allows users to view the floodplain and water depth on a digital map; users can activate the GPS sensor of their mobile device to visualize their location on the map. Additional features allow citizen users to capture flood event and damage information and send it remotely to the database. Furthermore, disclosure of the risk information could result in a decrease (-14%) in the value of properties located inside the floodplain and an increase (+19%) in the value of properties in the suburban area. The tax increment due to the higher property values in the safer area should be captured to constitute the DRRf, and the fund should be allocated to flood risk reduction for the benefit of people living in flood-prone areas. The flood prevention system discussed in this research will minimize, in the short and long term, the direct damages in risky areas through effective citizen awareness and the availability of the DRRf. It will also contribute to the growth of the urban area in safer zones and reduce human settlement in risky areas in the long term. Data accumulated in the urban flood database through the warning app will contribute to regenerating Abidjan into a more resilient city by means of risk-avoiding land use in the master plan.
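
A toy illustration of the value-capture arithmetic sketched above, with all figures (base value, tax rate) assumed for the example rather than taken from the study:

```python
# Value-capture sketch: the tax increment from rising property values in
# the safer zone is earmarked for the disaster risk reduction fund (DRRf).
base_value = 100_000          # pre-disclosure property value (assumed units)
tax_rate = 0.01               # annual property tax rate (assumed)

value_safe_after = base_value * 1.19     # +19% in the safer area
value_flood_after = base_value * 0.86    # -14% inside the floodplain

tax_increment = (value_safe_after - base_value) * tax_rate
print(f"DRRf capture per safe-zone property per year: {tax_increment:.0f}")
```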

Keywords: abidjan, database, flood, geospatial techniques, risk communication, smartphone, value capture

Procedia PDF Downloads 290
37 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models

Authors: V. Mantey, N. Findlay, I. Maddox

Abstract:

The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves become more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of data available is impossible to examine manually. The question therefore becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery, with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources to solve this problem, and models have been developed to detect buildings in optical satellite imagery. However, by and large, most models focus on affluent regions where buildings are generally larger and constructed further apart. This work is focused on the more difficult problem of detection in densely populated regions. The primary challenge in detecting small buildings in densely populated regions is both the spatial and spectral resolution of the optical sensor. Densely packed buildings with similar construction materials are difficult to separate due to their similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that models trained until they overfit the input sample can perform better in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires less input data. The model developed for this study has also been fine-tuned using existing, open-source building vector datasets. This is particularly valuable in the context of disaster relief, where information is required in a very short time span: leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest. The training period itself is also shorter for smaller datasets. Requiring less data means that only a few quality areas are necessary, so any weaknesses or underpopulated regions in the data can be skipped over in favor of areas with higher-quality vectors. In this study, a landcover classification model was developed in conjunction with the building detection tool to provide a secondary source for quality-checking the detected buildings, which has greatly reduced the false-positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and have been employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are being used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data is scarce, overfitted models can be judiciously applied to solve time-sensitive problems.
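
The keywords name Mask R-CNN; below is a minimal sketch of fine-tuning a pre-trained torchvision Mask R-CNN on a small, region-specific building dataset until it overfits it. This is a generic recipe under stated assumptions, not the authors' code, and the dataset/loop wiring is omitted:

```python
# Sketch: fine-tune a generalized pre-trained Mask R-CNN on a small set of
# open-source building vectors for one region, deliberately overfitting it.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

num_classes = 2  # background + building
in_feats = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, num_classes)
in_feats_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feats_mask, 256, num_classes)

# No early stopping and minimal augmentation: the goal here is a model
# specialized (overfitted) to the target region, not a generalized one.
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
```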

Keywords: building detection, disaster relief, mask-RCNN, satellite mapping

Procedia PDF Downloads 169
36 Tele-Rehabilitation for Multiple Sclerosis: A Case Study

Authors: Sharon Harel, Rachel Kizony, Yoram Feldman, Gabi Zeilig, Mordechai Shani

Abstract:

Multiple Sclerosis (MS) is a neurological disease that may restrict participation in the daily activities of young adults. Main symptoms include fatigue, weakness, and cognitive decline; the appearance of symptoms, their severity, and the rate of deterioration vary between patients. The challenge for health services is to provide long-term rehabilitation services to people with MS. The objective of this presentation is to describe the course of a tele-rehabilitation service for a woman with MS. Methods: R. is a 48-year-old woman, diagnosed with MS when she was 22. She started to suffer from weakness of her non-dominant left upper extremity about ten years after the diagnosis. She was referred to the tele-rehabilitation service by her rehabilitation team 16 years after diagnosis. Her goals were to improve her ability to use her affected upper extremity in daily activities. On admission, her score on the Mini-Mental State Exam was 30/30. Her Fugl-Meyer Assessment (FMA) score for the left upper extremity was 48/60, indicating mild weakness, and she had limited shoulder abduction (90 degrees). In addition, she reported little use of her arm in daily activities, as shown by her responses to the Motor Activity Log (MAL), which were 1.25/5 for amount and 1.37 for quality of use. R. received two 30-minute online sessions per week in the tele-rehabilitation service with the CogniMotion system, complemented by self-practice with the system. The CogniMotion system provides a hybrid (synchronous-asynchronous), home-based tele-rehabilitation program to improve the motor, cognitive, and functional status of people with neurological deficits. The system consists of a computer, a large monitor, and Microsoft's Kinect 3D sensor. This equipment is located in the client's home and connected via WiFi to a clinician's computer setup in a remote clinic. The client sits in front of the monitor and uses body movements to interact with games and tasks presented on the monitor. The system provides feedback in the form of 'knowledge of results' (e.g., the success of a game) and 'knowledge of performance' (e.g., alerts for compensatory movements) to enhance motor learning. The games and tasks were adapted to R.'s motor abilities, and the level of difficulty was gradually increased accordingly. Her second assessment (after 35 online sessions) showed improvement in her FMA score to 52 and in shoulder abduction to 140 degrees. Moreover, her responses to the MAL indicated an increased amount (2.4) and quality (2.2) of use of her left upper extremity in daily activities. She reported a high level of enjoyment of the treatments (5/5), specifically the combination of cognitive challenges while moving her body. In addition, she found the system easy to use, as reflected by her responses to the System Usability Scale (85/100). To date, R. continues to receive treatments in the tele-rehabilitation service. To conclude, this case report shows the potential of tele-rehabilitation for people with MS, both for providing strategies to enhance the use of the upper extremity in daily activities and for maintaining motor function.

Keywords: motor function, multiple-sclerosis, tele-rehabilitation, daily activities

Procedia PDF Downloads 180
35 Efficacy of Deep Learning for Below-Canopy Reconstruction of Satellite and Aerial Sensing Point Clouds through Fractal Tree Symmetry

Authors: Dhanuj M. Gandikota

Abstract:

Sensor-derived three-dimensional (3D) point clouds of trees are invaluable in remote sensing analysis for the accurate measurement of key structural metrics, bio-inventory values, spatial planning/visualization, and ecological modeling. Machine learning (ML) holds potential for addressing the restrictive tradeoffs in cost, spatial coverage, resolution, and information gain that exist among current point cloud sensing methods. Terrestrial laser scanning (TLS) remains the highest-fidelity source of both canopy and below-canopy structural features, but its usage is limited in both coverage and cost, requiring manual deployment to map out large forested areas. While aerial laser scanning (ALS) remains a reliable avenue of active LIDAR remote sensing, it is also cost-restrictive in its deployment methods. Space-borne photogrammetry from high-resolution satellite constellations is an avenue of passive remote sensing with promising viability for the accurate construction of vegetation 3D point clouds: it provides both the lowest comparative cost and the largest spatial coverage across remote sensing methods. However, both space-borne photogrammetry and ALS have technical limitations in capturing valuable below-canopy point cloud data. Looking to minimize these tradeoffs, we explored a class of powerful ML algorithms called deep learning (DL) that has shown promise in recent research on 3D point cloud reconstruction and interpolation. Our research details the efficacy of applying these DL techniques to reconstruct accurate below-canopy point clouds from space-borne and aerial remote sensing through learned patterns of tree-species fractal symmetry properties and the supplementation of locally sourced bio-inventory metrics. From our dataset, consisting of tree point clouds obtained from TLS, we deconstructed the point cloud of each tree into those that would be obtained through ALS and satellite photogrammetry of varying resolutions. We fed this ALS/satellite point cloud dataset, along with the simulated local bio-inventory metrics, into DL point cloud reconstruction architectures to generate the full 3D tree point clouds (ground truth being the full TLS tree point clouds containing the below-canopy information). Point cloud reconstruction accuracy was validated both through the measurement of error against the original TLS point clouds and through the error in extracting key structural metrics, such as crown base height, diameter above root crown, and leaf/wood volume. The results of this research additionally demonstrate the supplemental performance gain from using minimal locally sourced bio-inventory metric information as an input to ML systems in order to reach specified accuracy thresholds of tree point cloud reconstruction. This research provides insight into methods for the rapid, cost-effective, and accurate construction of below-canopy tree 3D point clouds, as well as the supported potential of ML and DL to learn complex, unmodeled patterns of fractal tree growth symmetry.
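
One simple way to deconstruct a TLS cloud into an ALS-like one, as described above, is to keep only points near the top surface within each horizontal grid cell, mimicking the above-canopy viewpoint. A sketch with invented data and parameters, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)
tls = rng.random((100_000, 3)) * np.array([10.0, 10.0, 25.0])  # x, y, z (m)

cell = 0.5                                       # horizontal grid size (m)
ij = np.floor(tls[:, :2] / cell).astype(int)
keys = ij[:, 0] * 1_000_003 + ij[:, 1]           # one index per grid cell

als_mask = np.zeros(len(tls), dtype=bool)
for k in np.unique(keys):
    idx = np.where(keys == k)[0]
    zmax = tls[idx, 2].max()
    als_mask[idx[tls[idx, 2] >= zmax - 1.0]] = True  # keep top ~1 m per cell

als_like = tls[als_mask]                         # input to the DL model
print(als_like.shape)
```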

Keywords: deep learning, machine learning, satellite photogrammetry, aerial laser scanning, terrestrial laser scanning, point cloud, fractal symmetry

Procedia PDF Downloads 102
34 Automated Adaptations of Semantic User- and Service Profile Representations by Learning the User Context

Authors: Nicole Merkle, Stefan Zander

Abstract:

Ambient Assisted Living (AAL) describes a technological and methodological stack (e.g., formal model-theoretic semantics, rule-based reasoning, and machine learning) for addressing different aspects of the behavior, activities, and characteristics of humans. Hence, a semantic representation of the user environment and its relevant elements is required in order to allow assistive agents to recognize situations and deduce appropriate actions. Furthermore, the user and his/her characteristics (e.g., physical, cognitive, preferences) need to be represented with a high degree of expressiveness in order to allow software agents a precise evaluation of the user's context model. The correct interpretation of these context models depends highly on temporal and spatial circumstances as well as on individual user preferences. In most AAL approaches, model representations of real-world situations capture the current state of a universe of discourse at a given point in time while neglecting transitions between states; the AAL domain currently lacks approaches that address the dynamic adaptation of context-related representations. Semantic representations of relevant real-world excerpts (e.g., user activities) help cognitive, rule-based agents to reason and make decisions in order to assist users in appropriate tasks and situations. However, rules and reasoning on semantic models are not sufficient for handling uncertainty and fuzzy situations. A given situation can require different (re-)actions in order to achieve the best result with respect to the user and his/her needs. But what is the best result? To answer this question, we need to consider that every smart agent has an objective to achieve, but this objective is mostly defined by domain experts, who can also fail in their estimation of what is desired by the user and what is not. Hence, a smart agent has to be able to learn from context history data and estimate or predict what is most likely in certain contexts. Furthermore, different agents with contrary objectives can cause collisions, as their actions influence the user's context and its constituting conditions in unintended or uncontrolled ways. We present an approach for dynamically updating a semantic model with respect to the current user context that allows flexibility of the software agents and enhances their conformance in order to improve the user experience. The presented approach adapts rules by learning from sensor evidence and user actions, using probabilistic reasoning based on given expert knowledge. The semantic domain model consists basically of device, service, and user profile representations. In this paper, we present how this semantic domain model can be used to compute the probability of matching rules and actions. We apply this probability estimation to compare the current domain model representation with the computed one in order to adapt the formal semantic representation. Our approach aims at minimizing the likelihood of unintended interferences in order to eliminate conflicts and unpredictable side-effects by updating pre-defined expert knowledge according to the most probable context representation. This enables agents to adapt to dynamic changes in the environment, which enhances the provision of adequate assistance and positively affects user satisfaction.
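
A minimal sketch of the kind of probabilistic rule adaptation described above: a rule's weight is re-estimated from how often its proposed action matched the user's subsequent behavior in the context history. All names and counts below are invented for illustration:

```python
# Context history: (rule fired, user accepted/confirmed the proposed action)
history = [(True, True), (True, False), (True, True), (True, True)]

# Laplace-smoothed estimate of P(action matches | rule fires); the neutral
# prior stands in for the expert-defined initial rule weight.
fired = sum(1 for f, _ in history if f)
matched = sum(1 for f, ok in history if f and ok)
p_match = (matched + 1) / (fired + 2)

print(f"adapted rule weight: {p_match:.2f}")  # fed back into the model
```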

Keywords: ambient intelligence, machine learning, semantic web, software agents

Procedia PDF Downloads 281
33 Characterizing the Rectification Process for Designing Scoliosis Braces: Towards Digital Brace Design

Authors: Inigo Sanz-Pena, Shanika Arachchi, Dilani Dhammika, Sanjaya Mallikarachchi, Jeewantha S. Bandula, Alison H. McGregor, Nicolas Newell

Abstract:

The use of orthotic braces for adolescent idiopathic scoliosis (AIS) patients is the most common non-surgical treatment to prevent deformity progression. The traditional method of creating an orthotic brace involves casting the patient's torso to obtain a representative geometry, which is then rectified by an orthotist to the desired geometry of the brace. Recent improvements in 3D scanning technologies, rectification software, CNC, and additive manufacturing processes have made it possible to complement, or in some cases replace, manual methods with digital approaches. However, the rectification process remains dependent on the orthotist's skills and therefore needs to be carefully characterized to ensure that braces designed through a digital workflow are as effective as those created using a manual process. The aim of this study is to compare 3D scans of patients with AIS against 3D scans of both pre- and post-rectification casts that have been manually shaped by an orthotist. Six AIS patients were recruited from the Ragama Rehabilitation Clinic, Colombo, Sri Lanka. All patients were between 10 and 15 years old, were skeletally immature (Risser grade 0-3), and had Cobb angles of 20-45°. Seven spherical markers were placed at key anatomical locations on each patient's torso and on the pre- and post-rectification molds so that distances could be reliably measured. 3D scans were obtained of 1) the patient's torso and pelvis, 2) the patient's pre-rectification plaster mold, and 3) the patient's post-rectification plaster mold, using a Structure Sensor Mark II 3D scanner (Occipital Inc., USA). 3D stick-body models were created for each scan to represent the distances between anatomical landmarks; these models were used to analyze the changes in position and orientation of the anatomical landmarks between scans in the Blender open-source software. 3D surface deviation maps, generated with the CloudCompare open-source software, represented volume differences between the scans. The 3D stick-body models showed changes in the position and orientation of thorax anatomical landmarks between the patient scan and the post-rectification scan for all patients. Anatomical landmark position and volume differences were seen between the 3D scans of the patients' torsos and the pre-rectification molds. Between the pre- and post-rectification molds, material removal was consistently seen on the anterior side of the thorax and in the lateral areas below the ribcage. Volume differences were seen in areas where the orthotist planned to place pressure pads: usually at the trochanter on the side to which the lumbar curve was tilted (trochanter pad), at the lumbar apical vertebra (lumbar pad), on the rib connected to the apical vertebra at the mid-axillary line (thoracic pad), and on the ribs corresponding to the upper thoracic vertebra (axillary extension pad). The rectification process requires the skill and experience of an orthotist; however, this study demonstrates that the brace shape, location, and volume of material removed from the pre-rectification mold can be characterized and quantified. Results from this study can be fed into software that can accelerate the brace design process, taking steps towards an automated digital rectification process.
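
A small sketch of the stick-model comparison: distances between labelled landmarks are computed per scan and differenced across scans. The coordinates and landmark names below are placeholders, not study data:

```python
import numpy as np

# Landmark coordinates (metres) from two scans -- placeholder values.
torso = {"C7": np.array([0.00, 0.00, 0.50]),
         "ASIS_L": np.array([0.12, 0.05, 0.00])}
pre_mold = {"C7": np.array([0.00, 0.01, 0.51]),
            "ASIS_L": np.array([0.13, 0.05, 0.00])}

def landmark_distance(scan, a, b):
    """Euclidean distance between two labelled landmarks in one scan."""
    return float(np.linalg.norm(scan[a] - scan[b]))

d_torso = landmark_distance(torso, "C7", "ASIS_L")
d_mold = landmark_distance(pre_mold, "C7", "ASIS_L")
print(f"change from torso to pre-rectification mold: {d_mold - d_torso:+.3f} m")
```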

Keywords: additive manufacturing, orthotics, scoliosis brace design, sculpting software, spinal deformity

Procedia PDF Downloads 144
32 Assessment and Characterization of Dual-Hardening Adhesion Promoter for Self-Healing Mechanisms in Metal-Plastic Hybrid System

Authors: Anas Hallak, Latifa Seblini, Juergen Wilde

Abstract:

In mechatronics and sensor technology, plastic housings are used to protect sensitive components from harmful environmental influences, such as moisture, media, or reactive substances. Connections through the housing wall, preferably in the form of metallic lead-frame structures, are required for electrical supply or control. In such systems, an insufficient bond between the plastic component, e.g., Polyamide 66, and the metal surface, e.g., copper, due to their incompatibility is the dominant problem; as a result, leakage paths can occur along the plastic-metal interface. Since adhesive bonding has been established as one of the most important joining processes, and its use has expanded significantly, driven by the development of improved high-performance adhesives and bonding techniques, this technology has been applied to metal-plastic hybrid structures. In this study, an epoxy bonding agent from DELO (DUALBOND LT2266) has been used to improve the mechanical and chemical bonding between the metal and the polymer. It is an adhesion promoter with two reaction stages: the first stage provides fixation to the lead frame directly after the coating step, achieved by UV exposure for a few seconds, while in the second stage the material is thermally hardened during injection molding. To analyze the two reaction stages of the primer, dynamic DSC experiments were carried out and correlated with Fourier-transform infrared spectroscopy measurements. Furthermore, the number of crosslinking bonds formed in the system in each reaction stage was estimated by rheological characterization. These investigations were performed with different UV exposure times (12 and 96 s) and over an industrially preferred temperature range from -20 to 175°C. The shear viscosity of the primer was measured as a function of temperature and exposure time; for further interpretation, storage modulus values were calculated and the so-called Booij-Palmen plot was sketched. The next part of this study concerns the self-healing mechanisms in the hybrid system, in which the primer should flow into micro-damage, such as cracks at the interface, inhibit it from growing, and close it. The ability of the primer to flow into and penetrate defined capillaries made in Ultramid was investigated: holes with a diameter of 0.3 mm were produced in injection-molded A3EG7 plates of 4 mm thickness, and a copper substrate coated with the DUALBOND was placed on the A3EG7 plate and pressed with a defined force. Metallographic analyses verified the degree of filling, showing that almost 95% of the capillary volume was filled. Finally, to assess the self-healing mechanism in metal-plastic hybrid systems, characterizations were carried out on a simple geometry with a metal inlay developed by the Institute of Polymer Technology at Friedrich-Alexander-University. The specimens were modified with a tungsten wire that was pulled out after injection molding to create a micro-hole in the specimen at the interface between the primer and the polymer. The capability of the primer to heal these micro-cracks upon heating, pressing, and thermal aging was characterized through metallographic analyses.

Keywords: hybrid structures, self-healing, thermoplastic housing, adhesive

Procedia PDF Downloads 193
31 Quantifying Impairments in Whiplash-Associated Disorders and Association with Patient-Reported Outcomes

Authors: Harpa Ragnarsdóttir, Magnús Kjartan Gíslason, Kristín Briem, Guðný Lilja Oddsdóttir

Abstract:

Introduction: Whiplash-Associated Disorder (WAD) is a health problem characterized by motor, neurological, and psychosocial symptoms, stressing the need for a multimodal treatment approach. To achieve an individualized multimodal approach, prognostic factors need to be identified early using validated patient-reported and objective outcome measures. The aim of this study is to demonstrate the degree of association between patient-reported and clinical outcome measures of WAD patients in the subacute phase. Methods: Individuals (n=41) with subacute (≥1, ≤3 months) WAD (I-II), medium to high-risk symptoms, or neck pain rating ≥ 4/10 on the Visual Analog Scale (VAS) were examined. Outcome measures included measurements of movement control (Butterfly test) and cervical active range of motion (cAROM) using the NeckSmart system, which uses an inertial measurement unit (IMU) connected to a computer. The IMU sensor is placed on the participant's head, and the participant receives visual feedback about the movement of the head. Patient-reported neck disability, pain intensity, general health, self-perceived handicap, central sensitization, and difficulties due to dizziness were measured using questionnaires. Excel and R statistical software were used for statistical analyses. Results: Forty-one participants, 15 males (37%) and 26 females (63%), mean (SD) age 36.8 (±12.7), underwent data collection. Mean amplitude accuracy (AA) (SD) in the Butterfly test for the easy, medium, and difficult paths was 2.4 mm (0.9), 4.4 mm (1.8), and 6.8 mm (2.7), respectively. Mean cAROM (SD) for flexion, extension, left, and right rotation was 46.3° (18.5), 48.8° (17.8), 58.2° (14.3), and 58.9° (15.0), respectively. Mean scores on the Neck Disability Index (NDI), VAS, Dizziness Handicap Inventory (DHI), Central Sensitization Inventory (CSI), and 36-Item Short Form Survey RAND version (RAND) were 43% (17.4), 7 (1.7), 37 (25.4), 51 (17.5), and 39.2 (17.7), respectively. Females showed significantly greater deviation in AA compared to males for the easy and medium Butterfly paths (p<0.05). A statistically significant moderate to strong positive correlation was found between the DHI and the easy (r=0.6, p=0.05), medium (r=0.5, p=0.05), and difficult (r=0.5, p<0.05) Butterfly paths, between the total RAND score and all cAROMs (r between 0.4-0.7, p≤0.05) except flexion (r=0.4, p=0.7), and between the NDI score and the CSI (r=0.7, p<0.01), VAS (r=0.5, p<0.01), and DHI (r=0.7, p<0.01) scores, respectively. Discussion: All patient-reported and objective measures were found to be outside the reference range. The results suggest that females have worse movement control of the neck in the subacute WAD phase; however, no sex-based difference was found in the patient-reported measures, suggesting that females might have worse movement control than males in general in this phase. The correlation found between the DHI and the Butterfly test can be explained by the fact that the DHI measures proprioceptive symptoms, such as dizziness and eye movement disorders, that can affect the outcome of movement control tests. A correlation was found between the total RAND score and cAROM, suggesting that a reduced range of motion affects quality of life. Significance: The NeckSmart system can detect abnormalities in cAROM, fine movement control, and kinesthesia of the neck. The results suggest that females have worse movement control than males and show a moderate to high correlation between several patient-reported and objective measurements.
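
A minimal sketch of the kind of correlation analysis reported here, written in Python with synthetic placeholder data (the study itself used Excel and R); the variable names and values are hypothetical.

```python
# Minimal sketch: Pearson correlation between a patient-reported score and
# an objective Butterfly-test measure, on synthetic stand-in data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 41  # participants, matching the study size
# Placeholder data standing in for the real measurements
dhi = rng.normal(37, 25.4, n)                               # DHI scores
butterfly_easy = 2.4 + 0.02 * dhi + rng.normal(0, 0.5, n)   # AA, mm

r, p = stats.pearsonr(dhi, butterfly_easy)
print(f"DHI vs. easy Butterfly path: r = {r:.2f}, p = {p:.3f}")
```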

Keywords: whiplash associated disorders, car-collision, neck, trauma, subacute

Procedia PDF Downloads 70
30 Hyperspectral Imagery for Tree Speciation and Carbon Mass Estimates

Authors: Jennifer Buz, Alvin Spivey

Abstract:

The most common greenhouse gas emitted through human activities, carbon dioxide (CO2), is naturally consumed by plants during photosynthesis. This process is actively being monetized by companies wishing to offset their carbon dioxide emissions. For example, companies are now able to purchase protections for vegetated land due to be clear-cut, or purchase barren land for reforestation. Therefore, by actively preventing the destruction/decay of plant matter or by introducing more plant matter (reforestation), a company can theoretically offset some of its emissions. One of the biggest issues in the carbon credit market is validating and verifying carbon offsets. There is a need for a system that can accurately and frequently ensure that the areas sold for carbon credits have the vegetation mass (and therefore the carbon offset capability) they claim. Traditional techniques for measuring vegetation mass and determining health are costly and require many person-hours. Orbital Sidekick offers an alternative approach that accurately quantifies carbon mass and assesses vegetation health through satellite hyperspectral imagery, a technique which enables us to remotely identify material composition (including plant species) and condition (e.g., health and growth stage). How much carbon a plant is capable of storing is ultimately tied to many factors, including material density (primarily species-dependent), plant size, and health (trees that are actively decaying are not effectively storing carbon). All of these factors can be observed through satellite hyperspectral imagery. This abstract focuses on speciation. To build a species classification model, we matched pixels in our remote sensing imagery to plants on the ground for which the species is known. To accomplish this, we collaborated with the researchers at the Teakettle Experimental Forest. Our remote sensing data come from our airborne “Kato” sensor, which flew over the study area and acquired hyperspectral imagery (400-2500 nm, 472 bands) at ~0.5 m/pixel resolution. Coverage of the entire Teakettle Experimental Forest required capturing dozens of individual hyperspectral images. In order to combine these images into a mosaic, we accounted for potential variations in atmospheric conditions throughout the data collection. To do this, we ran an open-source atmospheric correction routine called ISOFIT (Imaging Spectrometer Optimal FITting), which converted all of our remote sensing data from radiance to reflectance. A database of reflectance spectra for each of the tree species within the study area was acquired using the Teakettle stem map and the geo-referenced hyperspectral images. We found that a wide variety of machine learning classifiers were able to identify the species within our images with high (>95%) accuracy. For the most robust quantification of carbon mass and the best assessment of the health of a vegetated area, speciation is critical. Through the use of high-resolution hyperspectral data, ground-truth databases, and complex analytical techniques, we are able to determine the species present within a pixel to a high degree of accuracy. These species identifications will feed directly into our carbon mass model.
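
A minimal sketch of the pixel-wise classification step, assuming a labeled pixels-by-bands reflectance matrix built from the stem map; random arrays stand in for the Kato mosaic here, so the printed accuracy is near chance, and the random forest is just one of the "wide variety" of classifiers the abstract mentions.

```python
# Minimal sketch: species classification from per-pixel reflectance spectra.
# Synthetic random data replace the real 472-band mosaic; with real spectra,
# species are separable, whereas random labels give near-chance accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_pixels, n_bands, n_species = 2000, 472, 5
X = rng.random((n_pixels, n_bands))        # placeholder reflectance spectra
y = rng.integers(0, n_species, n_pixels)   # placeholder stem-map labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2%}")
```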

Keywords: hyperspectral, satellite, carbon, imagery, python, machine learning, speciation

Procedia PDF Downloads 126
29 Coil-Over Shock Absorbers Compared to Inherent Material Damping

Authors: Carina Emminger, Umut D. Cakmak, Evrim Burkut, Rene Preuer, Ingrid Graz, Zoltan Major

Abstract:

Damping accompanies us daily in everyday life and is used to protect (e.g., in shoes) and to make our lives more comfortable (damping of unwanted motion) and calm (noise reduction). In general, damping is the absorption of energy, which is either stored in the material (vibration isolation systems) or converted into heat (vibration absorbers). In the latter case, the damping mechanism can be classified as active, passive, or semi-active (a combination of active and passive). Active damping is required to enable almost perfect damping over the whole application range and is used, for instance, in sports cars. In contrast, passive damping is a response of the material to external loading. Consequently, the material composition has a huge influence on the damping behavior. For elastomers, the material behavior is inherently viscoelastic and both temperature- and frequency-dependent. However, passive damping is not adjustable during application. Therefore, it is important to understand the fundamental viscoelastic behavior and the dissipation capability under external loading. The objective of this work is to assess the limitations and applicability of viscoelastic material damping for applications in which coil-over shock absorbers are currently utilized. Coil-over shock absorbers are usually made of various mechanical parts and incorporate fluids within the damper. These shock absorbers are well known and studied in industry, and, when needed, they can easily be adjusted during their product lifetime. In contrast, dampers made of – ideally – a single material are more resource-efficient, easier to service, and easier to manufacture. However, they lack adaptability and adjustability in service. Therefore, a case study with a remote-controlled sports car was conducted. The original shock absorbers were redesigned, and the spring-dashpot system was replaced by an elastomer and a thermoplastic elastomer, respectively. Five different formulations of elastomers were used, including a pure and an iron-particle-filled thermoplastic poly(urethane) (TPU) and blends of two different poly(dimethyl siloxane)s (PDMS). In addition, the TPUs were investigated as full and hollow dampers to examine the difference between solid and structured material. To obtain comparative results, each material formulation was comprehensively characterized by monotonic uniaxial compression tests, dynamic thermomechanical analysis (DTMA), and rebound resilience. Moreover, the new material-based shock absorbers were compared with spring-dashpot shock absorbers. The shock absorbers were analyzed under monotonic and cyclic loading. In addition, an impact load was applied to the remote-controlled car to measure the damping properties in operation. A servo-hydraulic high-speed linear actuator was utilized to apply the loads. The acceleration of the car and the displacement of specific measurement points were recorded during testing by a sensor and a high-speed camera, respectively. The results show that elastomers are suitable for damping applications, but they are temperature- and frequency-dependent. This limits the applicability of viscous material dampers. Feasible fields of application may be in micromobility, such as bicycles, e-scooters, and e-skateboards. Furthermore, viscous material damping could be used to increase the inherent damping of a whole structure, e.g., in bicycle frames.
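
For a concrete baseline, the sketch below simulates the classical spring-dashpot quarter model that the material dampers are meant to replace; all parameter values are illustrative assumptions, not the study's measurements.

```python
# Minimal sketch: free response of a spring-dashpot (coil-over style)
# quarter model, m*x'' + c*x' + k*x = 0, with illustrative parameters.
import numpy as np
from scipy.integrate import solve_ivp

m, k = 0.5, 2000.0   # sprung mass (kg), spring stiffness (N/m), placeholders
c = 15.0             # dashpot coefficient (N*s/m), placeholder

def quarter_model(t, y):
    x, v = y
    return [v, -(c * v + k * x) / m]

sol = solve_ivp(quarter_model, (0.0, 1.0), [0.01, 0.0], max_step=1e-3)
zeta = c / (2 * np.sqrt(k * m))  # damping ratio of the spring-dashpot system
print(f"damping ratio = {zeta:.3f}, final displacement = {sol.y[0, -1]:.2e} m")
```

For an elastomer damper, the constant coefficient c would become a temperature- and frequency-dependent loss term, which is precisely the limitation discussed above.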

Keywords: damper structures, material damping, PDMS, TPU

Procedia PDF Downloads 114
28 Sentinel-2 Based Burn Area Severity Assessment Tool in Google Earth Engine

Authors: D. Madhushanka, Y. Liu, H. C. Fernando

Abstract:

Fires are one of the foremost factors of land surface disturbance in diverse ecosystems, causing soil erosion, land-cover changes, and atmospheric effects that affect people's lives and properties. Generally, fire severity is quantified with the Normalized Burn Ratio (NBR) index. This is conventionally performed manually by comparing two images obtained before and after the fire: the dNBR is calculated as the bitemporal difference of the preprocessed satellite images. The burnt area is then classified as either unburnt (dNBR < 0.1) or burnt (dNBR ≥ 0.1). Furthermore, Wildfire Severity Assessment (WSA) classifies burnt and unburnt areas using classification levels proposed by the USGS, comprising seven classes. This procedure generates a burn severity report for an area chosen manually by the user. This study was carried out with the objective of producing an automated tool for the above-mentioned process, namely the World Wildfire Severity Assessment Tool (WWSAT). It is implemented in Google Earth Engine (GEE), a free cloud-computing platform for satellite data processing, with several data catalogs at different resolutions (notably Landsat, Sentinel-2, and MODIS) and planetary-scale analysis capabilities. Sentinel-2 MSI was chosen to support regular burnt-area severity mapping with a medium-spatial-resolution sensor (10-20 m). The tool uses machine learning classification techniques to identify burnt areas using the NBR and to classify their severity over the user-selected extent and period automatically. Cloud coverage is one of the biggest concerns when fire severity mapping is performed. In WWSAT, based on GEE, we present a fully automatic workflow to aggregate cloud-free Sentinel-2 images for both pre-fire and post-fire image compositing. The parallel processing capabilities and preloaded geospatial datasets of GEE facilitated the production of this tool. The tool includes a Graphical User Interface (GUI) to make it user-friendly. Its advantage is the ability to assess burn area severity over large extents and extended temporal periods. Two case studies were carried out to demonstrate the performance of this tool. The Blue Mountains National Park forest, affected by the Australian fire season of 2019-2020, is used to describe the workflow of the WWSAT. At this site, more than 7809 km² of burnt area was detected using Sentinel-2 data, with an error below 6.5% compared with the area detected in the field. Furthermore, 86.77% of the detected area was recognized as fully burnt out, comprising high severity (17.29%), moderate-high severity (19.63%), moderate-low severity (22.35%), and low severity (27.51%). The Arapaho and Roosevelt National Forests, Colorado, USA, affected by the Cameron Peak fire in 2020, were chosen for the second case study. It was found that around 983 km² had burnt out, comprising high severity (2.73%), moderate-high severity (1.57%), moderate-low severity (1.18%), and low severity (5.45%). These spots can also be detected through the visual inspection made possible by the cloud-free images generated by WWSAT. This tool is cost-effective for calculating burnt area, since satellite images are free and the cost of field surveys is avoided.
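
A minimal numpy sketch of the NBR/dNBR logic automated by WWSAT, with random arrays standing in for the pre- and post-fire NIR and SWIR bands (Sentinel-2 B8A and B12); the severity bin edges follow the commonly cited USGS dNBR proposal and are given here for illustration only.

```python
# Minimal sketch: NBR and dNBR computation with a burnt/unburnt threshold
# at 0.1 and illustrative USGS-style severity bins. Random reflectance
# arrays stand in for real pre-/post-fire Sentinel-2 composites.
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from NIR and SWIR reflectance."""
    return (nir - swir) / (nir + swir + 1e-9)

rng = np.random.default_rng(2)
shape = (100, 100)
pre_nir, pre_swir = rng.uniform(0.3, 0.6, shape), rng.uniform(0.1, 0.3, shape)
post_nir, post_swir = rng.uniform(0.1, 0.4, shape), rng.uniform(0.2, 0.5, shape)

dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)
burnt = dnbr >= 0.1                    # unburnt below 0.1, burnt at/above

# Commonly cited USGS dNBR edges: low / moderate-low / moderate-high / high
bins = [0.1, 0.27, 0.44, 0.66]
severity = np.digitize(dnbr, bins)     # 0 = unburnt ... 4 = high severity
print(f"burnt fraction: {burnt.mean():.1%}")
```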

Keywords: burnt area, burnt severity, fires, google earth engine (GEE), sentinel-2

Procedia PDF Downloads 234
27 The Healing 'Touch' of Music: A Neuro-Acoustics Approach to Understand Its Therapeutic Effect

Authors: Jagmeet S. Kanwal, Julia F. Langley

Abstract:

Music can heal the body, but a mechanistic understanding of this phenomenon is lacking. This study explores the effects of music presentation on neurologic and physiologic responses leading to metabolic changes in the human body. The mind and body co-exist in a corporeal entity, and within this framework, sickness ensues when the mind-body balance goes awry. It is further hypothesized that music has the capacity to directly reset this balance. Two lines of inquiry, taken together, can provide a mechanistic understanding of this phenomenon: 1) empirical evidence for a sound-sensitive pressure-sensor system in the body, and 2) the notion of a “healing center” within the brain that is activated by specific patterns of sounds. From an acoustics perspective, music is spatially distributed as pressure waves ranging from a few centimeters to several meters in wavelength. These waves interact and propagate in three dimensions in unique ways, depending on the wavelength. Furthermore, music creates dynamically changing wave-fronts. Frequencies between 200 Hz and 1 kHz generate wavelengths that range from 5'6" to 1 foot. These dimensions are in the range of the body size of most people, making it plausible that these pressure waves can geometrically interact with the body surface and create distinct patterns of pressure stimulation across the skin surface. For humans, short-wavelength, high-frequency (> 200 Hz) sounds are best received via cochlear receptors. For low-frequency (< 200 Hz), long-wavelength sound vibrations, however, the whole body may act as an ideal receiver. A vast array of highly sensitive pressure receptors (Pacinian corpuscles) is present just beneath the skin surface, as well as in the tendons, bones, several organs in the abdomen, and the sexual organs. Per the available empirical evidence, these receptors contribute to music perception by allowing the whole body to function as a sound receiver, and knowledge of how they function is essential to fully understanding the therapeutic effect of music. Neuroscientific studies have established that music stimulates the limbic system, which can trigger states of anxiety, arousal, fear, and other emotions. These emotional states of brain activity play a crucial role in filtering top-down feedback from thoughts and bottom-up sensory inputs to the autonomic system, which automatically regulates bodily functions. Music likely exerts its pleasurable and healing effects by enhancing functional and effective connectivity and feedback mechanisms between brain regions that mediate reward, autonomic, and cognitive processing. Stimulation of pressure receptors under the skin by low-frequency music-induced sensations can activate multiple centers in the brain, including the amygdala, the cingulate cortex, and the nucleus accumbens. Melodies in the low (< 600 Hz) frequency range may augment auditory inputs after convergence of the pressure-sensitive inputs from the vagus nerve onto emotive processing regions within the limbic system. The integration of music-generated auditory and somato-visceral inputs may lead to a synergistic input to the brain that promotes healing. Thus, music can literally heal humans through “touch” as it energizes the brain's autonomic system for restoring homeostasis.
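
The body-scale wavelength claim above follows directly from the relation wavelength = speed / frequency; the short sketch below checks it, assuming a speed of sound in air of 343 m/s at room temperature.

```python
# Minimal check of the wavelength claims: lambda = v / f for sound in air.
SPEED_OF_SOUND = 343.0  # m/s, room-temperature air

for f in (200.0, 600.0, 1000.0):
    wavelength_m = SPEED_OF_SOUND / f
    print(f"{f:6.0f} Hz -> wavelength = {wavelength_m:.2f} m "
          f"({wavelength_m * 3.281:.1f} ft)")
# 200 Hz gives ~1.7 m (~5'7") and 1 kHz gives ~0.34 m (~1.1 ft),
# matching the body-scale range discussed in the abstract.
```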

Keywords: acoustics, brain, music healing, pressure receptors

Procedia PDF Downloads 166
26 Intelligent Materials and Functional Aspects of Shape Memory Alloys

Authors: Osman Adiguzel

Abstract:

Shape-memory alloys are a class of functional materials with a peculiar property known as the shape memory effect: these alloys return to a previously defined shape on heating after deformation in the low-temperature product phase region. The origin of this phenomenon lies in the fact that the material changes its internal crystalline structure with changing temperature. The shape memory effect is based on martensitic transitions, which govern the remarkable changes in the internal crystalline structure of materials. Martensitic transformation, a solid-state phase transformation, occurs thermally in the material on cooling from the high-temperature parent phase region and is governed by changes in the crystalline structure of the material. Shape memory alloys cycle between original and deformed shapes at the bulk level on heating and cooling, and due to this property they can be used as thermal actuators or temperature-sensitive elements. Martensitic transformations usually occur through the cooperative movement of atoms by means of lattice-invariant shears. In the thermally induced case, the ordered parent phase structures turn into twinned structures through this movement in a crystallographic manner. The twinned martensite turns into detwinned or oriented martensite when the material is stressed in the low-temperature martensitic phase condition. The detwinned martensite turns into the parent phase structure on first heating (the first cycle), and the parent phase structures turn into the twinned and detwinned structures, respectively, in the irreversible and reversible memory cases. On the other hand, shape memory materials are very important and useful in many interdisciplinary fields such as medicine, pharmacy, bioengineering, metallurgy, and many fields of engineering. The choice of material, as well as of the actuator and sensor combined with the host structure, is essential for developing such materials and structures. Copper-based alloys exhibit this property in the metastable beta-phase region, which has bcc-based structures in the high-temperature parent phase field; on cooling, these structures martensitically turn into layered complex structures with lattice twinning, following two ordered reactions. The martensitic transition occurs as self-accommodated martensite with inhomogeneous shears, i.e., lattice-invariant shears occurring in two opposite <110>-type directions on the {110}-type plane of the austenite matrix, which is the basal plane of martensite. This kind of shear can be called a {110}<110>-type mode and gives rise to the formation of layered structures, such as 3R, 9R, or 18R, depending on the stacking sequences on the close-packed planes of the ordered lattice. In the present contribution, X-ray diffraction and transmission electron microscopy (TEM) studies were carried out on two copper-based alloys with the chemical compositions (in weight) Cu-26.1%Zn-4%Al and Cu-11%Al-6%Mn. X-ray diffraction profiles and electron diffraction patterns reveal that both alloys exhibit superlattice reflections inherited from the parent phase due to the displacive character of the martensitic transformation. X-ray diffractograms taken over a long time interval show that the locations and intensities of the diffraction peaks change with aging time at room temperature. In particular, some of the successive peak pairs providing a special relation between Miller indices come close to each other.

Keywords: shape memory effect, martensite, twinning, detwinning, self-accommodation, layered structures

Procedia PDF Downloads 426
25 Multifield Problems in 3D Structural Analysis of Advanced Composite Plates and Shells

Authors: Salvatore Brischetto, Domenico Cesare

Abstract:

Major improvements in future aircraft and spacecraft could depend on an increasing use of conventional and unconventional multilayered structures embedding composite materials, functionally graded materials, piezoelectric or piezomagnetic materials, and soft foam or honeycomb cores. Layers made of such materials can be combined in different ways to obtain structures that are able to fulfill several structural requirements. The next generation of aircraft and spacecraft will be manufactured as multilayered structures under the action of a combination of two or more physical fields. In multifield problems for multilayered structures, several physical fields (thermal, hygroscopic, electric, and magnetic) interact with each other with different levels of influence and importance. An exact 3D shell model is proposed here for these types of analyses. This model is based on a coupled system including the 3D equilibrium equations, the 3D Fourier heat conduction equation, the 3D Fick diffusion equation, and the electric and magnetic divergence equations. The set of partial differential equations, of second order in z, is written using a mixed curvilinear orthogonal reference system valid for spherical and cylindrical shell panels, cylinders, and plates. The system of second-order partial differential equations is reduced to first order by doubling the number of variables. The solution in the thickness direction z is obtained by means of the exponential matrix method and the correct imposition of interlaminar continuity conditions in terms of displacements, transverse stresses, electric and magnetic potentials, temperature, moisture content, and transverse normal multifield fluxes. The investigated structures have simply supported sides in order to obtain a closed-form solution in the in-plane directions. Moreover, a layerwise approach is proposed which allows a correct 3D description of multilayered anisotropic structures subjected to field loads. Several results will be presented in tabular and graphical form to evaluate displacements, stresses, and strains when mechanical loads, temperature gradients, moisture content gradients, electric potentials, and magnetic potentials are applied at the external surfaces of the structures in steady-state conditions. When piezoelectric and piezomagnetic layers are included in the multilayered structures, so-called smart structures are obtained. In this case, a free vibration analysis in open- and closed-circuit configurations and a static analysis for sensor and actuator applications will be proposed. The proposed results will be useful for better understanding the physical and structural behaviour of multilayered advanced composite structures in the case of multifield interactions. Moreover, these analytical results could be used as reference solutions by scientists interested in the development of 3D and 2D numerical shell/plate models based, for example, on the finite element approach or on the differential quadrature methodology. The correct imposition of geometrical and load boundary conditions, interlaminar continuity conditions, and the description of the zigzag behaviour due to transverse anisotropy will also be discussed and verified.
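
The variable-doubling and exponential-matrix steps described above can be sketched on a toy system: a second-order problem u'' = Bu in z is rewritten as a first-order system V' = AV with V = [u, u'], then propagated exactly across a layer with the matrix exponential. The 2x2 matrix B below is a stand-in chosen for illustration, not the paper's actual multifield operator.

```python
# Minimal sketch of the exponential matrix method: reduce a second-order
# system in z to first order by doubling the variables, then propagate
# through the thickness with expm.
import numpy as np
from scipy.linalg import expm

# Toy second-order ODE in z:  u'' = B u, with B a constant matrix
B = np.array([[2.0, -1.0], [-1.0, 2.0]])
n = B.shape[0]

# Doubling the variables: V = [u, u']  ->  V' = A V
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [B, np.zeros((n, n))]])

V0 = np.array([1.0, 0.0, 0.0, 0.0])   # values at the bottom surface z = 0
h = 0.1                                # layer thickness
V_top = expm(A * h) @ V0               # exact propagation across the layer
print(V_top)
```

In the multilayer case, one such propagation is performed per layer, with interlaminar continuity conditions linking the state vectors at each interface.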

Keywords: composite structures, 3D shell model, stress analysis, multifield loads, exponential matrix method, layerwise approach

Procedia PDF Downloads 67
24 Thermally Stable Crystalline Triazine-Based Organic Polymeric Nanodendrites for Mercury(2+) Ion Sensing

Authors: Dimitra Das, Anuradha Mitra, Kalyan Kumar Chattopadhyay

Abstract:

Organic polymers, constructed from light elements such as carbon, hydrogen, nitrogen, oxygen, sulphur, and boron atoms, are an emergent class of non-toxic, metal-free, environmentally benign advanced materials. Covalent triazine-based polymers with a functional triazine group are a significant class of organic materials due to their remarkable stability arising from strong covalent bonds. They can conventionally form hydrogen bonds, favour π–π contacts, and were recently revealed to be involved in interesting anion–π interactions. The present work mainly focuses on the development of a single-crystalline, highly cross-linked, triazine-based, nitrogen-rich organic polymer with nanodendritic morphology and significant thermal stability. The polymer has been synthesized through hydrothermal treatment of melamine and ethylene glycol, resulting in cross-polymerization via a condensation-polymerization reaction. The crystal structure of the polymer has been evaluated by employing the Rietveld whole-profile fitting method. The polymer was found to be composed of monoclinic melamine with space group P21/a. A detailed insight into the chemical structure of the as-synthesized polymer has been elucidated by Fourier-transform infrared (FTIR) and Raman spectroscopic analysis. X-ray photoelectron spectroscopic (XPS) analysis has also been carried out for further understanding of the different types of linkages that create the backbone of the polymer. The unique rod-like morphology of the triazine-based polymer is revealed in images obtained from field emission scanning electron microscopy (FESEM) and transmission electron microscopy (TEM). Interestingly, this polymer has been found to selectively detect mercury (Hg²⁺) ions at extremely low concentrations through fluorescence quenching, with a detection limit as low as 0.03 ppb. The high toxicity of mercury ions (Hg²⁺) arises from their strong affinity towards the sulphur atoms of biological building blocks; even a trace quantity of this metal is dangerous for human health. Furthermore, owing to its small ionic radius and high solvation energy, the Hg²⁺ ion remains encapsulated by water molecules, making its detection a challenging task. There are some existing reports on fluorescence-based heavy metal ion sensors using covalent organic frameworks (COFs), but reports on mercury sensing using triazine-based polymers remain rather undeveloped. Thus, ultra-trace detection of Hg²⁺ ions with a high level of selectivity and sensitivity is of contemporary significance. A plausible sensing mechanism is proposed to explain the applicability of the material as a potential sensor. To our knowledge, the sensitivity of the polymer sample towards Hg²⁺ is the first report of mercury ion detection through the photoluminescence quenching technique in the field of highly crystalline triazine-based polymers (without the introduction of any sulphur groups or functionalization). This crystalline, metal-free organic polymer, being cheap, non-toxic, and scalable, has current relevance and could be a promising candidate for Hg²⁺ ion sensing at the commercial level.
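
Quenching-based calibrations of this kind are commonly analyzed with the Stern–Volmer relation F0/F = 1 + Ksv[Q]; the abstract does not name the model, so the sketch below is an illustrative assumption with synthetic intensities rather than the reported data.

```python
# Minimal sketch: Stern-Volmer analysis of fluorescence quenching,
# F0/F = 1 + Ksv*[Q], with synthetic placeholder intensities.
import numpy as np

conc_ppb = np.array([0.0, 0.05, 0.1, 0.5, 1.0, 5.0])  # Hg2+ concentration
F0 = 1000.0                                           # unquenched intensity
Ksv = 0.8                                             # per ppb, placeholder
F = F0 / (1.0 + Ksv * conc_ppb)                       # synthetic response

# A linear fit of F0/F - 1 against [Q] recovers the quenching constant
slope = np.polyfit(conc_ppb, F0 / F - 1.0, 1)[0]
print(f"fitted Ksv ~ {slope:.2f} per ppb")
```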

Keywords: fluorescence quenching, mercury ion sensing, single-crystalline, triazine-based polymer

Procedia PDF Downloads 136
23 Physiological Effects during Aerobatic Flights on Science Astronaut Candidates

Authors: Pedro Llanos, Diego García

Abstract:

Spaceflight is considered the last frontier in terms of science, technology, and engineering. But it is also the next frontier in terms of human physiology and performance. After more than 200,000 years of evolution under Earth's gravity and atmospheric conditions, humans face in spaceflight environmental stresses for which their physiology is not adapted. Hypoxia, accelerations, and radiation are among such stressors; our research involves suborbital flights aiming to develop effective countermeasures in order to assure a sustainable human presence in space. The physiologic baseline of spaceflight participants is subject to great variability, driven by age, gender, fitness, and metabolic reserve. The objective of the present study is to characterize different physiologic variables in a population of STEM practitioners during aerobatic flight. Cardiovascular and pulmonary responses were determined in Science Astronaut Candidates (SACs) during unusual-attitude aerobatic flight indoctrination. Physiologic data recordings from 20 subjects participating in high-G flight training were analyzed. These recordings were registered by a wearable sensor vest that monitored electrocardiographic tracings (ECGs) and signs of dysrhythmias or other electrical disturbances throughout the flight. The same cardiovascular parameters were also collected approximately 10 min pre-flight, during each high-G/unusual-attitude maneuver, and 10 min after the flights. The ratio (pre-flight/in-flight/post-flight) of the cardiovascular responses was calculated for comparison of inter-individual differences. The resulting tracings depicting the cardiovascular responses of the subjects were compared against the G-loads (Gs) during the aerobatic flights to analyze cardiovascular variability and fluid/pressure shifts due to the high Gs. In-flight ECG revealed cardiac variability patterns associated with rapid G onset, in terms of reduced heart rate (HR) and some scattered dysrhythmic patterns (15% premature ventricular contraction type); some of these were considered triggered physiological responses to high-G/unusual-attitude training, and some were considered instrument artifacts. Variation events were observed in subjects during the +Gz and -Gz maneuvers, and these may be due to sudden shifts in preload and afterload. Our data reveal that aerobatic flight influenced the breathing rate of the subjects, due in part to varying levels of energy expenditure caused by the increased muscle work during these aerobatic maneuvers. Noteworthy was the high heterogeneity of the different physiological responses among a relatively small group of SACs exposed to similar aerobatic flights with similar G exposures. The cardiovascular responses clearly demonstrated that SACs were subjected to significant flight stress. Routine ECG monitoring during high-G/unusual-attitude flight training is recommended to capture the pathology underlying dangerous dysrhythmias and to support suborbital flight safety. More research is currently being conducted to further facilitate the development of robust medical screening, medical risk assessment approaches, and suborbital flight training in the context of the evolving commercial human suborbital spaceflight industry. A more mature and integrative medical assessment method is required to understand the physiologic state and response variability among highly diverse populations of prospective suborbital flight participants.
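
A minimal sketch of the pre-/in-/post-flight ratio computation described above, using synthetic RR intervals in place of the vest ECG; the values are placeholders chosen to mimic the reduced in-flight heart rate noted in the abstract.

```python
# Minimal sketch: mean heart rate per flight phase from RR intervals, and
# the in-flight/pre-flight ratio used for inter-individual comparison.
import numpy as np

def mean_hr(rr_intervals_s):
    """Mean heart rate (beats per minute) from RR intervals in seconds."""
    return 60.0 / np.mean(rr_intervals_s)

rng = np.random.default_rng(3)
phases = {
    "pre-flight": rng.normal(0.85, 0.05, 600),   # ~70 bpm baseline
    "in-flight": rng.normal(0.95, 0.08, 600),    # reduced HR with rapid G onset
    "post-flight": rng.normal(0.82, 0.05, 600),
}
hr = {phase: mean_hr(rr) for phase, rr in phases.items()}
print({phase: round(v, 1) for phase, v in hr.items()})
print(f"in/pre ratio: {hr['in-flight'] / hr['pre-flight']:.2f}")
```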

Keywords: g force, aerobatic maneuvers, suborbital flight, hypoxia, commercial astronauts

Procedia PDF Downloads 129
22 Electroactive Ferrocenyl Dendrimers as Transducers for Fabrication of Label-Free Electrochemical Immunosensor

Authors: Sudeshna Chandra, Christian Gäbler, Christian Schliebe, Heinrich Lang

Abstract:

Highly branched dendrimers provide structural homogeneity, controlled composition, sizes comparable to biomolecules, internal porosity, and multiple functional groups for conjugation reactions. Electroactive dendrimers containing multiple redox units have generated great interest for use as electrode modifiers in the development of biosensors. The electron transfer between redox-active dendrimers and biomolecules plays a key role in developing a biosensor. Ferrocenes have multiple, electrochemically equivalent redox units that can act as an electron “pool” in a system. A ferrocenyl-terminated polyamidoamine dendrimer is capable of transferring multiple electrons under the same applied potential. Therefore, such dendrimers can be used for dual purposes: building a film over the electrode for the immunosensor, and immobilizing biomolecules for sensing. Electrochemical immunosensors thus developed offer fast and sensitive analysis, are inexpensive, and involve no prior sample pre-treatment. Electrochemical amperometric immunosensors are even more promising because they can achieve a very low detection limit with high sensitivity. Detection of cancer biomarkers at an early stage can provide crucial information for foundational life-science research, clinical diagnosis, and disease prevention. An elevated concentration of biomarkers in body fluid is an early indication of some types of cancerous disease, and among all the biomarkers, IgG is the most common and most extensively used clinical cancer biomarker. We present an IgG (immunoglobulin) electrochemical immunosensor using a newly synthesized redox-active ferrocenyl dendrimer of generation 2 (G2Fc) as a glassy carbon electrode (GCE) modifier for immobilizing the antibody. The electrochemical performance of the modified electrodes was assessed in both aqueous and non-aqueous media using varying scan rates to elucidate the reaction mechanism. The potential shift was found to be higher in the aqueous electrolyte due to the presence of more hydrogen bonding, which reduced the electrostatic attraction within the amido groups of the dendrimers. Cyclic voltammetric studies of the G2Fc-modified GCE in 0.1 M PBS solution at pH 7.2 showed a pair of well-defined redox peaks. The peak current decreased significantly upon immobilization of the anti-goat IgG. After the immunosensor was blocked with BSA (bovine serum albumin), a further decrease in the peak current was observed due to the attachment of the protein to the immunosensor. A significant decrease in the current signal of the BSA/anti-IgG/G2Fc/GCE was observed upon immobilizing IgG, which may be due to the formation of immuno-conjugates that hinder mass and electron transfer. The current signal was found to be directly related to the amount of IgG captured on the electrode surface: as the IgG concentration increases, an increasing amount of immuno-conjugates forms, decreasing the peak current. The incubation time and concentration of the antibody were optimized for better analytical performance of the immunosensor. The developed amperometric immunosensor is sensitive to IgG concentrations as low as 2 ng/mL. Tailoring of redox-active dendrimers provides enhanced electroactivity to the system and enlarges the sensor surface for binding the antibodies. It may be assumed that both electron transfer and diffusion contribute to the signal transformation between the dendrimers and the antibody.
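
A minimal sketch of how an amperometric calibration and detection limit might be derived from peak-current data; the 3.3·sd/slope criterion and all current values are illustrative assumptions, not the reported G2Fc/GCE measurements.

```python
# Minimal sketch: linear calibration of peak-current change vs. IgG
# concentration and an IUPAC-style detection limit, LOD = 3.3*sd/slope.
# All numbers are synthetic placeholders.
import numpy as np

conc = np.array([2.0, 5.0, 10.0, 20.0, 50.0, 100.0])  # IgG, ng/mL
signal = 0.05 * conc + 0.2                             # peak-current drop, uA
signal += np.random.default_rng(4).normal(0, 0.02, conc.size)

slope, intercept = np.polyfit(conc, signal, 1)
blank_sd = 0.02                                        # sd of blank signal, uA
lod = 3.3 * blank_sd / slope
print(f"sensitivity = {slope:.3f} uA per ng/mL, LOD ~ {lod:.1f} ng/mL")
```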

Keywords: ferrocenyl dendrimers, electrochemical immunosensors, immunoglobulin, amperometry

Procedia PDF Downloads 337
21 Design Aspects for Developing a Microfluidics Diagnostics Device Used for Low-Cost Water Quality Monitoring

Authors: Wenyu Guo, Malachy O’Rourke, Mark Bowkett, Michael Gilchrist

Abstract:

Many devices for real-time monitoring of surface water have been developed in the past few years to provide early warning of pollution and thus efficiently decrease the risk of environmental contamination. One of the most common detection methodologies is a colorimetric process, in which a container of fixed volume is filled with target ions and reagents that combine to form a colorimetric dye. The colored product sensitively absorbs radiation at a specific wavelength, and its absorbance is proportional to the concentration of the fully developed product, indicating the concentration of target nutrients in the pre-mixed water samples. In order to achieve precise and rapid detection, channels with dimensions on the order of micrometers, i.e., microfluidic systems, have been developed and introduced into these diagnostic studies. Microfluidic technology greatly increases the surface-to-volume ratio and decreases sample and reagent consumption significantly. However, species transport in such miniaturized channels is limited by the low Reynolds numbers in these regimes: the flow is strongly laminar, and diffusion is the dominant mass transport process throughout the microfluidic channels. The objective of the present work has been to analyse the mixing and chemical kinetics in a stop-flow microfluidic device measuring nitrite concentrations in fresh water samples. In order to improve the temporal resolution of the nitrite microfluidic sensor, we have used computational fluid dynamics to investigate the influence that the effectiveness of the mixing process between the sample and reagent within a microfluidic device exerts on the time to completion of the resulting chemical reaction. This computational approach has been complemented by physical experiments. The kinetics of the Griess reaction, involving the conversion of sulphanilic acid to a diazonium salt by reaction with nitrite in acidic solution, is set as a laminar finite-rate chemical reaction in the model. Initially, a methodology was developed to assess the degree of mixing of the sample and reagent within the device. This enabled different designs of the mixing channel to be compared, such as straight, square-wave, and serpentine geometries. Thereafter, the time to completion of the Griess reaction within a straight-mixing-channel device was modeled, and the reaction time was validated with experimental data. Further simulations compared the reaction time and effective mixing within straight, square-wave, and serpentine geometries. The results show that square-wave channels significantly improve mixing and yield low standard deviations of the nitrite and reagent concentrations, while for straight-channel microfluidic patterns the corresponding values are 2-3 orders of magnitude greater and the streams are consequently less efficiently mixed. This has allowed us to design novel micro-mixer channel patterns with more effective mixing that can be used to detect and monitor levels of nutrients present in water samples, in particular nitrite. Future generations of water quality monitoring and diagnostic devices will readily exploit this technology.
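
The low-Reynolds-number, diffusion-dominated regime described above can be made concrete with the usual dimensionless numbers; the channel dimensions and flow speed below are illustrative assumptions, not the device's actual geometry.

```python
# Minimal sketch: Reynolds and Peclet numbers for a microchannel, plus the
# transverse diffusive mixing time, with illustrative parameter values.
import numpy as np

rho, mu = 1000.0, 1.0e-3   # water density (kg/m^3), viscosity (Pa*s)
D = 1.0e-9                 # small-molecule diffusivity (m^2/s)
w = 200e-6                 # channel width (m), placeholder
u = 5e-3                   # mean flow speed (m/s), placeholder

Re = rho * u * w / mu      # Reynolds number: ~1 here -> strongly laminar
Pe = u * w / D             # Peclet number: advection vs. diffusion
t_diff = w**2 / (2 * D)    # time to diffuse across the channel width
print(f"Re = {Re:.2f}, Pe = {Pe:.0f}, diffusive mixing time ~ {t_diff:.0f} s")
```

Because the diffusion time scales with the square of the transverse distance, geometries that repeatedly thin and fold the sample and reagent streams, as the square-wave design does, shorten the time to a well-mixed state far more than any feasible increase in channel length.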

Keywords: nitrite detection, computational fluid dynamics, chemical kinetics, mixing effect

Procedia PDF Downloads 202
20 Optimizing Machine Learning Algorithms for Defect Characterization and Elimination in Liquids Manufacturing

Authors: Tolulope Aremu

Abstract:

The key process steps in producing liquid detergent products, such as formulation, mixing, filling, and packaging, can introduce defects that compromise product quality, consumer safety, and operational efficiency. Real-time identification and characterization of such defects are of prime importance for maintaining high standards and reducing waste and costs. Usually, defect detection is performed by human inspection or rule-based systems, which are time-consuming, inconsistent, and error-prone. The present study overcomes these limitations by optimizing defect characterization in liquid detergent manufacturing with machine learning algorithms. Performance testing of various machine learning models was carried out, including Support Vector Machines, Decision Trees, Random Forests, and Convolutional Neural Networks, for detecting and classifying defects such as incorrect viscosity, color deviations, improper bottle filling, and packaging anomalies. These algorithms benefited significantly from a variety of optimization techniques, including hyperparameter tuning and ensemble learning, greatly improving detection accuracy while minimizing false positives. The study draws on a rich dataset of defect types and production parameters consisting of more than 100,000 samples, combining real-time sensor data, imaging technologies, and historic production records. The results show that optimized machine learning models significantly improve defect detection compared to traditional methods. The CNNs, for instance, achieved 98% and 96% accuracy in detecting packaging anomalies and bottle-filling inconsistencies, respectively, after fine-tuning with real-time imaging data, which reduced false positives by about 30%. The optimized SVM model achieved 94% accuracy in detecting formulation defects such as viscosity and color variations. These performance metrics represent a substantial leap in defect detection accuracy compared to the roughly 80% level achieved so far by rule-based systems. Moreover, the optimized models hasten defect characterization: with real-time data processing, detection time falls below 15 seconds, from an average of 3 minutes for manual inspection. This time saving is combined with a 25% reduction in production downtime due to proactive defect identification, which can save millions annually in recall and rework costs. Integrating real-time machine-learning-driven monitoring also drives predictive maintenance and corrective measures, yielding a 20% improvement in overall production efficiency. Therefore, optimizing machine learning algorithms for defect characterization gives liquid detergent companies scalability and efficiency, raising operational performance to higher levels of product quality. In general, this method could be applied across several industries within the fast-moving consumer goods sector, leading to improved quality control processes.
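
A minimal sketch of the hyperparameter-tuning step described above, using scikit-learn's grid search over an SVM on synthetic placeholder features; the feature set, grid values, and data are hypothetical, not the study's 100,000-sample production dataset.

```python
# Minimal sketch: hyperparameter tuning of an SVM defect classifier with
# cross-validated grid search, on synthetic stand-in process features.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.normal(size=(1000, 3))                   # e.g. viscosity, color, fill
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)  # synthetic defect label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
grid = GridSearchCV(
    make_pipeline(StandardScaler(), SVC()),
    param_grid={"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.1]},
    cv=5,
)
grid.fit(X_tr, y_tr)
print(f"best params: {grid.best_params_}, "
      f"test accuracy: {grid.score(X_te, y_te):.2%}")
```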

Keywords: liquid detergent manufacturing, defect detection, machine learning, support vector machines, convolutional neural networks, defect characterization, predictive maintenance, quality control, fast-moving consumer goods

Procedia PDF Downloads 18
19 Relevance of Dosing Time for Everolimus Toxicity in Respect to the Circadian P-Glycoprotein Expression in Mdr1a::Luc Mice

Authors: Narin Ozturk, Xiao-Mei Li, Sylvie Giachetti, Francis Levi, Alper Okyar

Abstract:

P-glycoprotein (P-gp, MDR1, ABCB1) is a transmembrane protein acting as an ATP-dependent efflux pump; it functions as a biological barrier by extruding drugs and xenobiotics out of cells in healthy tissues, especially the intestines, liver, and brain, as well as in tumor cells. The circadian timing system controls a variety of biological functions in mammals, including xenobiotic metabolism and detoxification and proliferation and cell cycle events, and may affect the pharmacokinetics, toxicity, and efficacy of drugs. The selective mTOR (mammalian target of rapamycin) inhibitor everolimus is an immunosuppressant and anticancer drug that is active against many cancers, and its pharmacokinetics depend on P-gp. The aim of this study was to investigate the dosing-time-dependent toxicity of everolimus with respect to intestinal P-gp expression rhythms in mdr1a::Luc mice using the Real Time-Biolumicorder (RT-BIO) System. Mdr1a::Luc male mice were synchronized with 12 h of light and 12 h of dark (LD12:12, with Zeitgeber Time 0 – ZT0 – corresponding to light onset). After 1 week of baseline recordings, everolimus (5 mg/kg/day x 14 days) was administered orally at ZT1 (resting period) and ZT13 (activity period) to mdr1a::Luc mice singly housed in an innovative monitoring device, the Real Time-Biolumicorder unit, which allows real-time, long-term monitoring of gene expression in freely moving mice. D-luciferin (1.5 mg/mL) was dissolved in the drinking water. The mouse intestinal mdr1a::Luc oscillation profile, reflecting P-gp gene expression, and the locomotor activity pattern were recorded every minute with a photomultiplier tube and an infrared sensor, respectively. General behavior and clinical signs were monitored, and body weight was measured every day as an index of toxicity. Drug-induced body weight change was expressed relative to body weight on the initial treatment day. The statistical significance of differences between groups was validated with ANOVA, and circadian rhythms were validated with cosinor analysis. Everolimus toxicity changed as a function of drug timing and was least following dosing at ZT13, near the onset of the activity span in male mice. Mean body weight loss was nearly twice as large in mice treated with 5 mg/kg everolimus at ZT1 as compared to ZT13 (8.9% vs. 5.4%; ANOVA, p < 0.001). Based on body weight loss and clinical signs upon everolimus treatment, tolerability for the drug was best following dosing at ZT13. Both rest-activity and mdr1a::Luc expression displayed stable 24-h periodic rhythms before everolimus administration and in vehicle-treated controls. The real-time bioluminescence pattern of mdr1a revealed a circadian rhythm with a 24-h period and an acrophase at ZT16 (cosinor, p < 0.001). Mdr1a expression remained rhythmic in everolimus-treated mice, whereas down-regulation of P-gp expression was observed in 2 of 4 mice. The study identified the circadian pattern of intestinal P-gp expression with unprecedented precision. Dosing time relative to the P-gp expression rhythm may play a crucial role in the tolerability/toxicity of everolimus. The circadian changes in mdr1a expression deserve further study regarding their relevance for the in vitro and in vivo chronotolerance of mdr1a-transported anticancer drugs. Chronotherapy with P-gp-effluxed anticancer drugs could then be applied according to their rhythmic patterns in host and tumor to jointly maximize treatment efficacy and minimize toxicity.
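
A minimal sketch of a single-component cosinor fit with a fixed 24-h period, of the kind used to validate the mdr1a::Luc rhythm; the bioluminescence trace below is synthetic, constructed with an acrophase at ZT16 to mirror the reported result.

```python
# Minimal sketch: single-component cosinor fit with a fixed 24-h period,
# y = M + a*cos(wt) + b*sin(wt), solved by linear least squares on a
# synthetic bioluminescence trace.
import numpy as np

t = np.arange(0, 72, 0.5)  # hours (ZT), three simulated days
rng = np.random.default_rng(6)
y = 100 + 30 * np.cos(2 * np.pi * (t - 16) / 24) + rng.normal(0, 5, t.size)

w = 2 * np.pi / 24
X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
M, a, b = np.linalg.lstsq(X, y, rcond=None)[0]

amplitude = np.hypot(a, b)
acrophase_h = (np.arctan2(b, a) / w) % 24  # peak time in ZT hours
print(f"MESOR = {M:.1f}, amplitude = {amplitude:.1f}, "
      f"acrophase ~ ZT{acrophase_h:.1f}")
```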

Keywords: circadian rhythm, chronotoxicity, everolimus, mdr1a::Luc mice, p-glycoprotein

Procedia PDF Downloads 342