Search results for: UWB sensor
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1408

58 Virtual Platform for Joint Amplitude Measurement Based on MEMS

Authors: Mauro Callejas-Cuervo, Andrea C. Alarcon-Aldana, Andres F. Ruiz-Olaya, Juan C. Alvarez

Abstract:

Motion capture (MC) is the construction of a precise and accurate digital representation of a real motion. In recent years, such systems have been used in a wide range of applications, from film special effects and animation, interactive entertainment, and medicine to competitive sport, where maximum performance and low injury risk during training and competition are sought. This paper presents a technological platform based on inertial and magnetic sensors, intended for joint amplitude monitoring and telerehabilitation processes, with an efficient compromise between cost and technical performance. The particularities of our platform offer high social impact by making telerehabilitation accessible to large populations in marginal socio-economic sectors, especially in underdeveloped countries where, in contrast to developed countries, specialists are scarce and advanced technology is unavailable or nonexistent. The platform integrates high-resolution, low-cost inertial and magnetic sensors with adequate user interfaces and communication protocols to deliver a diagnosis service over the web or other available communication networks. The amplitude information generated by the sensors is transferred to a computing device with interfaces that make it accessible to inexperienced personnel, providing high social value. Amplitude measurements from the virtual platform showed a good fit to the respective reference system. Analyzing the robotic arm results (estimation errors RMSE 1 = 2.12° and RMSE 2 = 2.28°), it can be observed that during arm motion in either direction the estimation error is negligible; error appears only during direction reversal, which is easily explained by the nature of inertial sensors and their relation to acceleration. Inertial sensors present a time-constant delay that acts as a first-order filter, attenuating signals at large acceleration values, as occurs during a reversal of motion. A damped response of the virtual platform can be seen in other images, where error analysis shows that amplitude is underestimated at maximum amplitude and overestimated at minimum amplitude. This work presents and describes the virtual platform as a motion capture system suitable for telerehabilitation, with the cost-quality and precision-accessibility trade-offs optimized. These characteristics, achieved by efficiently using state-of-the-art, accessible generic sensor and hardware technology together with adequate software for capture, transmission, analysis, and visualization, provide the capacity to offer good telerehabilitation services, reaching large and often marginalized populations where technologies and specialists are unavailable but which can be reached through basic communication networks.
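A minimal illustration of the first-order filtering behaviour the abstract attributes to inertial sensors is the standard complementary filter used to fuse MEMS gyroscope and accelerometer data into a joint angle. The sketch below is our own illustrative assumption, not the authors' algorithm; the gain alpha and the sample values are invented.

```python
import math

def complementary_filter(angle, gyro_rate, ax, az, dt, alpha=0.98):
    """One update step: blend the integrated gyro rate with the
    accelerometer tilt angle (the low-passed, gravity-derived term)."""
    accel_angle = math.degrees(math.atan2(ax, az))  # tilt from gravity
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Usage with hypothetical samples at 100 Hz (deg/s and g units assumed):
angle = 0.0
for gyro_rate, ax, az in [(10.0, 0.17, 0.98), (12.0, 0.20, 0.97)]:
    angle = complementary_filter(angle, gyro_rate, ax, az, dt=0.01)
```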

Keywords: inertial sensors, joint amplitude measurement, MEMS, telerehabilitation

Procedia PDF Downloads 259
57 Application of IoT-Based Multi-Level Air Quality Sensing for Advancing Environmental Monitoring in Pingtung County

Authors: Men An Pan, Hong Ren Chen, Chih Heng Shih, Hsing Yuan Yen

Abstract:

Pingtung County is located in the southernmost region of Taiwan. During the winter season, insufficient dispersion of pollutants caused by the downwash of the northeast monsoon leads to poor air quality in the County. The County has implemented various control measures, including air pollution permits, air pollution fees, control of oil fumes from the catering sector, smoke testing of diesel vehicles, regular inspection of motorcycles, and subsidies for low-polluting vehicles. To further mitigate air pollution, additional control strategies have also been carried out, such as construction site control, prohibition of open-air agricultural waste burning, improvement of river dust, and strengthened road cleaning operations. These combined efforts have significantly reduced air pollutants in the County. To monitor ambient air quality effectively and promptly, the County subsequently deployed a total of 400 IoT (Internet of Things) micro-sensors for PM2.5 and VOC detection, complementing 3 air quality monitoring stations of the Environmental Protection Agency (EPA) and covering the County's 33 townships. The covered area contains more than 1,300 listed factories and 5 major industrial parks, forming an IoT-based multi-level air quality monitoring system. The IoT multi-level sensors were combined with other strategies such as "sand and gravel dredging area technology monitoring", "banning open burning", "intelligent management of construction sites", "real-time notification of activation response", "nighthawk early bird plan with micro-sensors", "unmanned aerial vehicles (UAVs) for combined land-and-air monitoring of abnormal emissions", and "animal husbandry odour detection service". In a 2021 public survey, satisfaction with air pollution control reached 81%, an increase of 46% compared to 2018. Air pollution complaints totaled 4,213 in 2021, down from 7,088 in 2020, a reduction of almost 41%. Owing to the spatial-temporal resolution of the micro-sensor IoT system, it strengthens the effectiveness of the EPA's existing air quality monitoring network and provides real-time control of air quality, so hot spots and potential pollution locations can be determined in time for law enforcement. Remarkable results were thus obtained over the two years: both a reduction in public complaints and better air quality were achieved through the implementation of the present IoT system for real-time air quality monitoring throughout Pingtung County.

Keywords: IoT, PM, air quality sensor, air pollution, environmental monitoring

Procedia PDF Downloads 73
56 Employing Remotely Sensed Soil and Vegetation Indices with Long Short-Term Memory Prediction for Irrigation Scheduling Analysis

Authors: Elham Koohikerade, Silvio Jose Gumiere

Abstract:

In this research, irrigation is highlighted as crucial for improving both the yield and quality of potatoes due to their high sensitivity to soil moisture changes. The study presents a hybrid Long Short-Term Memory (LSTM) model aimed at optimizing irrigation scheduling in potato fields in Quebec City, Canada. This model integrates model-based and satellite-derived datasets to simulate soil moisture content, addressing the limitations of field data. Developed under the guidance of the Food and Agriculture Organization (FAO), the simulation approach compensates for the lack of direct soil sensor data, enhancing the LSTM model's predictions. The model was calibrated using indices such as Surface Soil Moisture (SSM), the Normalized Difference Vegetation Index (NDVI), the Enhanced Vegetation Index (EVI), and the Normalized Multi-band Drought Index (NMDI) to effectively forecast soil moisture reductions. Understanding soil moisture and plant development is crucial for assessing drought conditions and determining irrigation needs. This study validated the spectral characteristics of vegetation and soil using ECMWF Reanalysis v5 (ERA5) and Moderate Resolution Imaging Spectroradiometer (MODIS) data from 2019 to 2023, collected from agricultural areas in Dolbeau and Peribonka, Quebec. Parameters such as surface volumetric soil moisture (0-7 cm), NDVI, EVI, and NMDI were extracted from these images. A regional four-year dataset of soil and vegetation moisture was developed using a machine learning approach combining model-based and satellite-based datasets. The LSTM model predicts soil moisture dynamics hourly across different locations and times, with its accuracy verified through cross-validation and comparison with existing soil moisture datasets. The model effectively captures temporal dynamics, making it valuable for applications requiring soil moisture monitoring over time, such as anomaly detection and memory analysis. By identifying typical peak soil moisture values and observing distribution shapes, irrigation can be scheduled to maintain soil moisture within volumetric soil moisture (VSM) values of 0.25 to 0.30 m³/m³, avoiding under- and over-watering. The strong correlations between parcels suggest that a uniform irrigation strategy might be effective across multiple parcels, with adjustments based on specific parcel characteristics and historical data trends. The application of the LSTM model to predict soil moisture and vegetation indices yielded mixed results: while the model effectively captures the central tendency and temporal dynamics of soil moisture, it struggles to accurately predict EVI, NDVI, and NMDI.
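As a rough sketch of the kind of hybrid LSTM predictor described (not the authors' code), the following maps a window of hourly index values (SSM, NDVI, EVI, NMDI) to the next soil moisture value. The window length, layer size, and training arrays are illustrative assumptions.

```python
# A sketch, not the paper's model: LSTM regression from remote-sensing
# indices to next-hour volumetric soil moisture. Sizes are assumptions.
import numpy as np
import tensorflow as tf

WINDOW, FEATURES = 24, 4      # 24 hourly samples of SSM, NDVI, EVI, NMDI

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(WINDOW, FEATURES)),
    tf.keras.layers.Dense(1)  # predicted volumetric soil moisture (m^3/m^3)
])
model.compile(optimizer="adam", loss="mse")

# Hypothetical training arrays standing in for the ERA5/MODIS dataset.
X = np.random.rand(256, WINDOW, FEATURES).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```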

Keywords: irrigation scheduling, LSTM neural network, remotely sensed indices, soil and vegetation monitoring

Procedia PDF Downloads 41
55 The Study of Mirror Self-Recognition in Wildlife

Authors: Azwan Hamdan, Mohd Qayyum Ab Latip, Hasliza Abu Hassim, Tengku Rinalfi Putra Tengku Azizan, Hafandi Ahmad

Abstract:

Animal cognition provides some evidence for self-recognition, described as the ability to recognize oneself as an individual separate from the environment and from other individuals. The mirror self-recognition (MSR) or mark test is a behavioral technique to determine whether an animal has the ability of self-recognition or self-awareness in front of a mirror. It also describes the capability of an animal to be aware of and make judgments about its new environment. The objectives of this study were to measure and compare the ability of wild and captive wildlife in mirror self-recognition. Wild animals in the Royal Belum Rainforest, Malaysia, were identified based on animal trails and salt lick grounds. Acrylic mirrors with wooden frames (200 x 250 cm) were placed near animal trails. Camera traps (Bushnell, UK) with motion-detection infrared sensors were placed near the animal trails or hiding spots. For captive wildlife, animals such as the Malayan sun bear (Helarctos malayanus) and chimpanzee (Pan troglodytes) were selected from Zoo Negara Malaysia. The captive animals were also marked with odorless, non-toxic white paint on the forehead. An acrylic mirror with a wooden frame (200 x 250 cm) and a video camera were placed near the cage. The behavioral data were analyzed using an ethogram and classified into four stages of MSR: social responses, physical inspection, repetitive mirror-testing behavior, and realization of seeing themselves. Results showed that wild animals such as the barking deer (Muntiacus muntjak) and long-tailed macaque (Macaca fascicularis) increased their physical inspection (e.g., inspecting the reflected image) and repetitive mirror-testing behavior (e.g., rhythmic head and leg movement). This suggests that the ability to use a mirror is most likely related to the learning process and cognitive evolution in wild animals. However, the sun bears' behaviors were inconsistent and did not clearly pass through the four stages of MSR, suggesting that keeping Malayan sun bears in captivity may promote communication and familiarity between conspecifics. Interestingly, the chimpanzees showed positive social responses (e.g., manipulating lips) and physical inspection (e.g., using a hand to inspect part of the face) when facing the mirror. However, both animals showed no response to the mark itself, apparently having lost interest in it and realized that the mark was inconsequential. Overall, the results suggest that the capacity for MSR is the beginning of a developmental process of self-awareness and mental state attribution. In addition, our findings show that self-recognition may depend on different levels of neurological complexity and encephalization among animals. Research on self-recognition in animals will thus have profound implications for understanding the cognitive abilities of animals and for efforts to help them, such as enhanced management, the design of enclosures and exhibits for captive individuals, and programs to re-establish populations of endangered or threatened species.

Keywords: mirror self-recognition (MSR), self-recognition, self-awareness, wildlife

Procedia PDF Downloads 272
54 Engineering Topology of Photonic Systems for Sustainable Molecular Structure: Autopoiesis Systems

Authors: Moustafa Osman Mohammed

Abstract:

This paper introduces topological order in described social systems, starting from the original concept of autopoiesis developed by biologists and scientists, including the modification of general systems based on socialized medicine. Topological order is important in describing physical systems for exploiting optical systems and improving photonic devices. The states of topological order have interesting properties of topological degeneracy and fractional statistics that reveal the entanglement origin of topological order. Topological ideas in photonics form exciting developments in solid-state materials, which are insulating in the bulk yet conduct electricity on their surface without dissipation or back-scattering, even in the presence of large impurities. A specific type of autopoiesis system is interrelated with the main categories among existing groups of ecological phenomena at the interaction of the social and medical sciences. The hypothesis, nevertheless, has a nonlinear interaction with its natural environment, an 'interactional cycle' for exchanging photon energy with molecules without changes in topology. The engineering topology of a biosensor is based on the excitation boundary of surface electromagnetic waves in photonic band gap multilayer films. The device operation is similar to surface plasmon biosensors, in which a photonic band gap film replaces the metal film as the medium in which surface electromagnetic waves are excited. The use of a photonic band gap film offers sharper surface wave resonance, leading to potentially greatly enhanced sensitivity. The properties of the photonic band gap material can thus be engineered to operate a sensor at any wavelength and to support a surface wave resonance ranging up to 470 nm, a wavelength not generally accessible with surface plasmon sensing. Lastly, photonic band gap films have robust mechanical properties that offer new substrates for surface chemistry, making it possible to understand molecular design structure and to create sensing-chip surfaces with different concentrations of DNA sequences in solution, observing and tracking the surface mode resonance under the influence of processes that take place in the spectroscopic environment. These processes have led to the development of several advanced analytical technologies that are automated, real-time, reliable, reproducible, and cost-effective, resulting in faster and more accurate monitoring and detection of biomolecules based on refractive index sensing, antibody-antigen reactions, and DNA or protein binding. Ultimately, the frictional properties of molecules are adjusted to each other to form the unique spatial structure and dynamics of biological molecules, providing an environment for investigating changes due to the pathogenic archival architecture of cell clusters.

Keywords: autopoiesis, photonics systems, quantum topology, molecular structure, biosensing

Procedia PDF Downloads 94
53 Correlation between Defect Suppression and Biosensing Capability of Hydrothermally Grown ZnO Nanorods

Authors: Mayoorika Shukla, Pramila Jakhar, Tejendra Dixit, I. A. Palani, Vipul Singh

Abstract:

Biosensors are analytical devices with a wide range of applications in biological, chemical, environmental, and clinical analysis. A biosensor comprises a bio-recognition layer, over which biomolecules (enzymes, antibodies, DNA, etc.) are immobilized for detection of the analyte, and a transducer, which converts the biological signal into an electrical signal. The performance of a biosensor depends primarily on the bio-recognition layer, which therefore has to be chosen wisely. In this regard, nanostructures of metal oxides such as ZnO, SnO2, V2O5, and TiO2 have been explored extensively as bio-recognition layers. Recently, ZnO has attracted the attention of researchers due to its unique properties, such as a high isoelectric point, biocompatibility, stability, high electron mobility, and high exciton binding energy. Although there have been many reports on the use of ZnO as a bio-recognition layer, to the authors' knowledge none has examined the correlation between optical properties, such as defect suppression, and the biosensing capability of the sensor. Here, ZnO nanorods (ZNR) have been synthesized by a low-cost, simple, low-temperature hydrothermal growth process over a platinum (Pt)-coated glass substrate. The ZNR were synthesized in two steps: first, a seed layer was coated over the substrate (Pt-coated glass), which was then immersed in a nutrient solution of zinc nitrate and hexamethylenetetramine (HMTA) with in situ addition of KMnO4. The addition of KMnO4 was observed to have a profound effect on the growth rate anisotropy of the ZnO nanostructures: without it, clustered and powdery ZnO growth was observed, whereas its addition during growth yielded uniform, crystalline ZNR over the substrate. Moreover, it resulted in the suppression of defects, as observed in the normalized photoluminescence (PL) spectra, since KMnO4 is a strong oxidizing agent that provides an oxygen-rich growth environment. Further, to explore the correlation between defect suppression and the biosensing capability of the ZNR, glucose oxidase (GOx) was immobilized over them using a physical adsorption technique followed by drop casting of Nafion. Since the main objective of the work was to analyze the effect of defect suppression on biosensing capability, GOx was chosen as the model enzyme, and electrochemical amperometric glucose detection was performed. The incorporation of KMnO4 during growth varied the optical and charge-transfer properties of the ZNR, which in turn had a strong impact on the biosensor figures of merit: the sensitivity of the biosensor was found to increase 12-18 times due to the variations introduced by the addition of KMnO4 during growth. Amperometric detection of glucose was performed in a continuously stirred buffer solution. Interestingly, defect suppression was observed to contribute to the improvement of biosensor performance. The detailed growth mechanism of the ZNR, along with the overall influence of defect suppression on the sensing capabilities of the resulting enzymatic electrochemical biosensor (Glass/Pt/ZNR/GOx/Nafion) and its different figures of merit, will be discussed during the conference.
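For context, amperometric biosensor sensitivity of the kind compared here is conventionally reported as the slope of the steady-state current versus glucose concentration, normalized by electrode area. The numbers below are invented for illustration; the abstract does not give the calibration data.

```python
import numpy as np

conc = np.array([1.0, 2.0, 4.0, 6.0, 8.0])      # mM glucose (hypothetical)
current = np.array([0.8, 1.7, 3.3, 5.1, 6.6])   # µA steady-state (hypothetical)
area = 0.2                                      # cm^2 electrode area (assumed)

slope, intercept = np.polyfit(conc, current, 1)  # calibration slope, µA/mM
sensitivity = slope / area                       # µA mM^-1 cm^-2
print(f"sensitivity ≈ {sensitivity:.2f} µA mM^-1 cm^-2")
```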

Keywords: biosensors, defects, KMnO4, ZnO nanorods

Procedia PDF Downloads 282
52 Sensitivity Improvement of an Optical Ring Resonator for Strain Analysis with the Possibility of Strain Direction Recognition

Authors: Tayebeh Sahraeibelverdi, Ahmad Shirazi Hadi Veladi, Mazdak Radmalekshah

Abstract:

Optical sensors have become attractive due to their precision, low power consumption, and intrinsic immunity to electromagnetic interference. Among waveguide optical sensors, cavity-based ones have attracted attention for their high Q-factor. Micro-ring resonators have been investigated as a potential platform for various applications, from biosensors to pressure sensors, thanks to their sensitive ring structure, which responds to any small change in refractive index. Furthermore, these micron-sized structures can be arranged in arrays, making it possible to assign each resonance to a specific wavelength and address it in this way. Another exciting application is applying strain to the ring, turning it into an optical strain gauge; traditional strain gauges are based on piezoelectric materials, need electrical wiring when made into arrays, and are about fifty times larger. Any physical effect that changes the waveguide cross-section, the waveguide's elasto-optic properties, or the ring circumference can play a role; of these, a change in ring size has the largest effect. Here, an engineered ring structure is investigated to study the effect of strain on the ring's resonance wavelength shift and its potential for more sensitive strain devices, which can measure strain when mounted on the surface of interest. The idea is to change the 'O'-shaped ring to a 'C'-shaped ring with a small opening, starting from 2π/360, i.e., one degree. We used Lumerical MODE Solutions software to investigate the effect of changing the ring opening and the shift induced by applied strain. The designed ring has a three-micron radius on silicon-on-insulator and can be fabricated by standard complementary metal-oxide-semiconductor (CMOS) micromachining. The wavelength shifts for ring openings from 1 degree to 6 degrees were investigated. Opening the ring by 1 degree reduces the ring's quality factor from 3000 to 300, an order-of-magnitude reduction. Assuming a strain widens the ring opening from 1 degree to 6 degrees, our simulation results show a negligible further Q-factor reduction, from 300 to 280; given that a ring resonator quality factor can reach up to 10⁸, an order-of-magnitude reduction is negligible. The resonance showed a blue shift, with wavelengths of 1581, 1579, 1578, and 1575 nm for 1-, 2-, 4-, and 6-degree ring openings, respectively. By placing the opening on different parts of the ring, this design can determine the direction of the strain; moreover, by addressing the specified wavelength, the direction can be found precisely. This opens a significant opportunity to locate cracks and characterize surface mechanical properties very specifically and precisely. The idea can also be implemented in polymer ring resonators, which can come on a flexible substrate and be very sensitive to any strain that moves the two ends of the ring at the slit closer together or further apart.
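Reading off the reported simulation points, the tuning slope of the resonance with ring opening can be estimated with a simple linear fit; this is our own post-processing of the quoted numbers, not the authors' analysis.

```python
import numpy as np

# Resonance wavelengths quoted in the abstract for each ring opening.
opening_deg = np.array([1, 2, 4, 6])
resonance_nm = np.array([1581, 1579, 1578, 1575])

slope, intercept = np.polyfit(opening_deg, resonance_nm, 1)
print(f"blue shift ≈ {abs(slope):.2f} nm per degree of opening")  # ≈ 1.1 nm/deg
```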

Keywords: optical ring resonator, strain gauge, strain sensor, surface mechanical property analysis

Procedia PDF Downloads 126
51 Spectral Responses of the Laser Generated Coal Aerosol

Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Tomi Smausz, Zoltán Kónya, Béla Hopp, Gábor Szabó, Zoltán Bozóki

Abstract:

Characterization of the spectral responses of light-absorbing carbonaceous particulate matter (LAC) is of great importance both in modelling its climate effect and in interpreting remote sensing measurement data. Residential or domestic combustion of coal is one of the dominant LAC sources; according to some assessments, residential coal burning accounts for roughly half of the anthropogenic BC emitted from fossil fuel burning. Despite its significance for climate, comprehensive investigation of the optical properties of residential coal aerosol is very limited in the literature. There are many reasons for this, ranging from the difficulties associated with controlled burning conditions of the fuel, through the lack of detailed supplementary proximate and ultimate chemical analyses and the interpretation of the measured optical data, to the many analytical and methodological difficulties of in-situ measurement of coal aerosol spectral responses. Since the ambient gas matrix can significantly mask the physicochemical characteristics of the generated coal aerosol, the accurate and controlled generation of residential coal particulates is one of the most pressing issues in this research area. Most laboratory imitations of residential coal combustion are simply based on burning coal in a stove with ambient air support, allowing one to measure only the apparent spectral features of the particulates. However, a recently introduced methodology based on laser ablation of a solid coal target opens up novel possibilities to model the real combustion procedure under well-controlled laboratory conditions and makes the investigation of the inherent optical properties possible. Most methodologies for the spectral characterization of LAC are based on transmission measurements of filter-accumulated aerosol or are deduced indirectly from parallel measurements of the scattering and extinction coefficients using free-floating sampling; in the former, accuracy, and in the latter, sensitivity, limit the applicability of the approach. Although the scientific community agrees that aerosol-phase photoacoustic spectroscopy (PAS) is the only method for precise and accurate determination of light absorption by LAC, PAS-based instrumentation for the spectral characterization of absorption has only recently been introduced. In this study, the inherent spectral features of laser-generated and chemically characterized residential coal aerosols are investigated. The experimental set-up and its characteristics for residential coal aerosol generation are introduced. The optical absorption and scattering coefficients, as well as their wavelength dependencies, are determined by our state-of-the-art multi-wavelength PAS instrument (4λ-PAS) and a multi-wavelength integrating nephelometer (Aurora 3000). The wavelength dependencies, quantified as the absorption and scattering Ångström exponents (AAE and SAE), are deduced from the measured data. Finally, correlations between the proximate and ultimate chemical parameters and the measured or deduced optical parameters are also revealed.
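The Ångström exponents mentioned (AAE, SAE) quantify the power-law wavelength dependence, e.g. α(λ) ∝ λ^(-AAE); a two-wavelength estimate is shown below. The wavelength pair and coefficient values are illustrative assumptions, not measured values from the study.

```python
import math

def angstrom_exponent(coef1, coef2, lam1, lam2):
    """AAE (or SAE) from absorption (or scattering) coefficients
    measured at two wavelengths lam1 and lam2."""
    return -math.log(coef1 / coef2) / math.log(lam1 / lam2)

# Hypothetical absorption coefficients (Mm^-1) at two assumed PAS lines.
aae = angstrom_exponent(coef1=24.0, coef2=9.5, lam1=266e-9, lam2=1064e-9)
print(f"AAE ≈ {aae:.2f}")
```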

Keywords: absorption, scattering, residential coal, aerosol generation by laser ablation

Procedia PDF Downloads 361
50 Radish Sprout Growth Dependency on LED Color in Plant Factory Experiment

Authors: Tatsuya Kasuga, Hidehisa Shimada, Kimio Oguchi

Abstract:

Recent rapid progress in ICT (Information and Communication Technology) has advanced the penetration of sensor networks (SNs) and their attractive applications. Agriculture is one of the fields well able to benefit from ICT. Plant factories control several parameters related to plant growth in closed areas, such as air temperature, humidity, water, culture medium concentration, and artificial lighting, by using computers; the use of AI (Artificial Intelligence) is also being researched. The aim is stable and safe production of vegetables and medicinal plants all year round, anywhere, and self-sufficiency in food. By providing isolation from the natural environment, a plant factory can achieve higher productivity and safer products. However, the biggest issue with plant factories is the return on investment: profits are tenuous because of the large initial investment and running costs, i.e., electric power. At present, LED (Light Emitting Diode) lights are being adopted because they are more energy-efficient and encourage photosynthesis better than the fluorescent lamps used in the past. However, further cost reduction is essential. This paper introduces experiments that reveal which color of LED lighting best enhances the growth of cultured radish sprouts. Radish sprouts were cultivated in an experimental environment formed by a hydroponics kit with three cultivation shelves (28 samples per shelf), each with an artificial lighting rack. Seven LED arrays of different colors (white, blue, yellow-green, green, yellow, orange, and red) were compared with a fluorescent lamp as the control. Lighting duration was set to 12 hours a day. Normal water with no fertilizer was circulated. Seven days after germination, the length, weight, and leaf area of each sample were measured. Electrical power consumption for all lighting arrangements was also measured. Results and discussion: As to average sample length, no clear difference was observed between colors. As regards weight, the orange LED was less effective, and the difference was significant (p < 0.05). As to leaf area, the blue, yellow, and orange LEDs were significantly less effective. However, all LEDs offered higher productivity per watt consumed than the fluorescent lamp. Of the LEDs, the blue LED array attained the best results in terms of length, weight, and leaf area per watt consumed. Conclusion and future work: An experiment on radish sprout cultivation under 7 different-color LED arrays showed no clear difference in sample size. However, when electrical power consumption is considered, LEDs offered about twice the growth rate of the fluorescent lamp, with blue LEDs showing the best performance. Further cost reduction, e.g., low-power lighting, remains a big issue for actual system deployment. An automatic plant monitoring system with sensors is another study target.
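The power-normalized comparison in the results amounts to dividing the growth metric by the electrical power each lighting arrangement consumed. All numbers below are invented placeholders, since the abstract reports only relative outcomes.

```python
# Hypothetical leaf areas and power draws; only the ratio matters.
leaf_area_cm2 = {"fluorescent": 5.2, "blue_led": 5.0}
power_w = {"fluorescent": 36.0, "blue_led": 18.0}

for light in leaf_area_cm2:
    per_watt = leaf_area_cm2[light] / power_w[light]
    print(f"{light}: {per_watt:.3f} cm^2 per W")
```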

Keywords: electric power consumption, LED color, LED lighting, plant factory

Procedia PDF Downloads 188
49 Photophysics and Torsional Dynamics of Thioflavin T in Deep Eutectic Solvents

Authors: Rajesh Kumar Gautam, Debabrata Seth

Abstract:

Thioflavin-T (ThT) plays a key role as an important biologically active fluorescent sensor for amyloid fibrils. ThT has been used to detect and analyze different types of diseases, such as neurodegenerative disorders, Alzheimer's, Parkinson's, and type II diabetes. ThT is used as a fluorescent marker to detect the formation of amyloid fibrils: in the presence of amyloid fibrils, ThT becomes highly fluorescent. ThT undergoes a twisting motion around the C-C bond between the two adjacent benzothiazole and dimethylaniline aromatic rings, which is predominantly affected by the micro-viscosity of the local environment. The present study examines the photophysics and torsional dynamics of the biologically active molecule ThT in deep eutectic solvents (DESs). DESs are environmentally friendly, low-cost, and biodegradable alternatives to ionic liquids. A DES resembles an ionic liquid, but its constituents include hydrogen bond donor and acceptor species in addition to ions; due to this H-bonding network, a DES exhibits structural heterogeneity. Herein, we prepared two different DESs by mixing urea with choline chloride and with N,N-diethyl ethanol ammonium chloride at ~340 K. It has been reported that the deep eutectic mixture of choline chloride with urea gives a liquid with a freezing point of 12°C. We experimented with two different concentrations of ThT and observed that at the higher concentration (50 µM) ThT forms aggregates in the DES. The photophysics of ThT as a function of temperature was explored using steady-state and picosecond time-resolved fluorescence emission spectroscopic techniques. The spectroscopic analysis shows that with rising temperature the fluorescence quantum yield and lifetime of ThT gradually decrease; this is the cumulative effect of thermal quenching and an increase in the torsional rate constant. The fluorescence quantum yield and lifetime values were always higher for DES-II (urea and N,N-diethyl ethanol ammonium chloride) than for DES-I (urea and choline chloride), mainly due to the structural heterogeneity of the medium. This was further confirmed by comparing the activation energy of viscous flow with the activation energy of non-radiative decay. In less viscous media, ThT undergoes a very fast twisting process that leads to deactivation from the photoexcited state; in this system, the torsional motion increases with increasing temperature. We conclude that besides the bulk viscosity of the medium, its structural heterogeneity plays a crucial role in guiding the photophysics of ThT in DESs. The analysis of the experimental data was carried out over the temperature range 288 K ≤ T ≤ 333 K. The present work aims to provide insight into DESs as media for studying the photophysical processes of ThT, an amyloid-fibril-sensing molecule.
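The comparison of activation energies mentioned above is conventionally done with an Arrhenius fit, ln k = ln A - Ea/RT. A sketch with invented rate constants (the paper's values are not given in the abstract) is:

```python
import numpy as np

T = np.array([288.0, 298.0, 313.0, 333.0])      # K, the reported range
k_nr = np.array([1.2e9, 2.0e9, 3.9e9, 8.1e9])   # s^-1, hypothetical values

# Linear fit of ln(k) against 1/T; the slope equals -Ea/R.
slope, ln_A = np.polyfit(1.0 / T, np.log(k_nr), 1)
Ea = -slope * 8.314                              # J/mol
print(f"Ea ≈ {Ea / 1000:.1f} kJ/mol")
```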

Keywords: deep eutectic solvent, photophysics, Thioflavin T, torsional rate constant

Procedia PDF Downloads 162
48 Carbon Nanotubes (CNTs) as Multiplex Surface Enhanced Raman Scattering Sensing Platforms

Authors: Pola Goldberg Oppenheimer, Stephan Hofmann, Sumeet Mahajan

Abstract:

Owing to its fingerprint molecular specificity and high sensitivity, surface-enhanced Raman scattering (SERS) is an established analytical tool for chemical and biological sensing, capable of single-molecule detection. A strong Raman signal can be generated from a SERS-active platform provided the analyte sits within the enhanced plasmon field generated near a noble-metal nanostructured substrate. The key requirement for generating the strong plasmon resonances that provide this electromagnetic enhancement is an appropriate metal surface roughness. Controlling the nanoscale features that generate these regions of high electromagnetic enhancement, the so-called SERS 'hot-spots', is still a challenge. Significant advances have been made in SERS research, with wide-ranging techniques to generate substrates with tunable size and shape of the nanoscale roughness features. Nevertheless, the development and application of SERS have been inhibited by the irreproducibility and complexity of fabrication routes. The ability to generate straightforward, cost-effective, multiplexable, and addressable SERS substrates with high enhancements is of profound interest for miniaturized sensing devices. Carbon nanotubes (CNTs) have concurrently been a topic of extensive research; however, their application to plasmonics has only recently begun to gain interest. CNTs can provide low-cost, large-active-area patternable substrates which, coupled with appropriate functionalization, can provide advanced SERS platforms. Herein, advanced methods to generate CNT-based SERS-active detection platforms are discussed. First, a novel electrohydrodynamic (EHD) lithographic technique is introduced for patterning CNT-polymer composites, providing a straightforward, single-step approach to generating high-fidelity sub-micron nanocomposite structures within which anisotropic CNTs are vertically aligned. The created structures are readily fine-tuned, an important requirement for optimizing SERS to obtain the highest enhancements, with each of the EHD-CNT structural units functioning as an isolated sensor. Further, gold-functionalized vertically aligned CNT forests (VACNTFs) are fabricated as SERS micro-platforms. The VACNTs' diameters and density play an important role in the Raman signal strength, highlighting the importance of structural parameters previously overlooked in designing and fabricating optimized CNT-based SERS nanoprobes. VACNT forests patterned into predesigned pillar structures are further utilized for multiplexed detection of bio-analytes. Since CNTs exhibit electrical conductivity and unique adsorption properties, these are further harnessed in the development of novel chemical and biosensing platforms.
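As background (the abstract reports no numbers), the standard figure of merit for comparing such platforms is the analytical SERS enhancement factor, which ratios SERS and normal-Raman intensities at matched numbers of probed molecules; all values below are invented.

```python
def enhancement_factor(i_sers, n_sers, i_ref, n_ref):
    """EF = (I_SERS / N_SERS) / (I_ref / N_ref)."""
    return (i_sers / n_sers) / (i_ref / n_ref)

# Hypothetical intensities (counts) and probed molecule numbers.
ef = enhancement_factor(i_sers=5.0e4, n_sers=1.0e6, i_ref=2.0e3, n_ref=1.0e10)
print(f"EF ≈ {ef:.1e}")  # ≈ 2.5e5 with these placeholder values
```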

Keywords: carbon nanotubes (CNTs), EHD patterning, SERS, vertically aligned carbon nanotube forests (VACNTF)

Procedia PDF Downloads 331
47 Physical Activity Based on Daily Step-Count in Inpatient Setting in Stroke and Traumatic Brain Injury Patients in Subacute Stage Follow Up: A Cross-Sectional Observational Study

Authors: Brigitte Mischler, Marget Hund, Hilfiker Roger, Clare Maguire

Abstract:

Background: Brain injury is one of the main causes of permanent physical disability, and improving walking ability is one of the most important goals for patients. After inpatient rehabilitation, most do not receive long-term rehabilitation services. Physical activity is important for the health of the musculoskeletal system, the circulatory system, and the psyche. Objective: This follow-up study measured physical activity in subacute patients after traumatic brain injury and stroke, comparing the daily step count in the inpatient setting to the step count one year after the event in the outpatient setting. Methods: This follow-up study is a cross-sectional observational study with 29 participants. Daily step count over a seven-day period one year after the event was measured with the StepWatch™ ankle sensor. The step count one year after the event in the outpatient setting was compared with the step count during the inpatient stay and evaluated against the recommended target value. Correlations between step count and the domains of FAC level, walking speed, light touch, joint position sense, cognition, and fear of falling were calculated. Results: The median (IQR) daily step count of all patients was 2512 (568.5, 4070.5). At follow-up, the number of steps improved to 3656 (1710, 5900); the average difference was 1159 (-2825, 6840) steps per day. Participants who were unable to walk independently (FAC 1) improved from 336 (5, 705) to 1808 (92, 5354) steps per day. Participants able to walk with assistance (FAC 2-3) walked 700 (31, 3080) steps and, at follow-up, 3528 (243, 6871). Independent walkers (FAC 4-5) walked 4093 (2327, 5868) steps and achieved 3878 (777, 7418) daily steps at follow-up. These values are significantly below the recommended guideline. Step count at follow-up showed moderate to high, statistically significant correlations: positive with FAC score, FIM total score, and walking speed, and negative with fear of falling. Conclusions: Only 17% of all participants achieved the recommended daily step count one year after the event. We need better inpatient and outpatient strategies to improve physical activity. In everyday clinical practice, pedometers and diaries with objectives should be used, and a concrete weekly schedule should be drawn up together with the patient, relatives, or nursing staff after discharge. This should include daily self-training as instructed during the inpatient stay. A good connection to social life (professional engagement or a daily task/activity) can be an important part of improving daily activity. Further research should evaluate strategies to increase daily step counts in both inpatient and outpatient settings.
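The descriptive statistics used here, median with interquartile range plus correlations with functional measures, can be sketched as follows; the arrays are invented placeholders, not the study data.

```python
import numpy as np
from scipy import stats

steps = np.array([336, 700, 1808, 2512, 3528, 3656, 4093, 5900])  # hypothetical
speed = np.array([0.2, 0.3, 0.5, 0.7, 0.8, 0.9, 1.1, 1.3])        # m/s, hypothetical

median = np.median(steps)
q1, q3 = np.percentile(steps, [25, 75])          # interquartile range
r, p = stats.pearsonr(steps, speed)              # step count vs walking speed
print(f"median {median:.0f} (IQR {q1:.0f}-{q3:.0f}), r = {r:.2f}, p = {p:.4f}")
```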

Keywords: neurorehabilitation, stroke, traumatic brain injury, steps, stepcount

Procedia PDF Downloads 15
46 Fault Diagnosis and Fault-Tolerant Control of Bilinear Systems: Application to Heating, Ventilation, and Air Conditioning Systems in Multi-Zone Buildings

Authors: Abderrhamane Jarou, Dominique Sauter, Christophe Aubrun

Abstract:

Over the past decade, the growing demand for energy efficiency in buildings has attracted the attention of the control community. Failures in HVAC (heating, ventilation, and air conditioning) systems can have a significant impact on the desired and expected energy performance of buildings, as well as on user comfort. Fault-Tolerant Control (FTC) is a recent technology area that studies the adaptation of control algorithms to faulty operating conditions of a system; its application to HVAC systems has gained attention over the last two decades. The objective is to keep the variations in system performance due to faults within an acceptable range with respect to the desired nominal behavior. This paper considers the so-called active approach, which is based on a fault detection and identification scheme combined with a control reconfiguration algorithm that determines a new set of control parameters so that the reconfigured performance is "as close as possible", in some sense, to the nominal performance. Thermal models of buildings and their HVAC systems are described by non-linear (usually bilinear) equations. Most of the work carried out so far in FDI (fault detection and isolation) or FTC considers a linearized model of the studied system; however, such a model is only valid in a reduced range of variation. This study presents a new fault diagnosis (FD) algorithm based on a bilinear observer for the detection and accurate estimation of the magnitude of an HVAC system failure. The main contribution of the proposed FD algorithm is that, instead of using specific linearized models, it inherits the structure of the actual bilinear model of the building's thermal dynamics. As an immediate consequence, the algorithm is applicable to a wide range of unpredictable operating conditions, e.g., weather dynamics, outdoor air temperature, and zone occupancy profile. A bilinear fault detection observer is proposed for a bilinear system with unknown inputs. The residual vector in the observer design is decoupled from the unknown inputs and, under certain conditions, is made sensitive to all faults. Sufficient conditions are given for the existence of the observer, and results are given for the explicit computation of the observer design matrices. Dedicated observer schemes (DOS) are considered for sensor FDI, while unknown-input bilinear observers are considered for actuator or system-component FDI. The proposed FTC strategy works as follows: at the first level, FDI algorithms are implemented, which also makes it possible to estimate the magnitude of the fault; once a fault is detected, the fault estimate feeds the second level, which reconfigures the control law so that the expected performance is recovered. The paper is organized as follows: a general structure for fault-tolerant control of buildings is first presented, and the building model under consideration is introduced; then the observer-based design for fault diagnosis of bilinear systems is studied; the FTC approach is developed in Section IV; finally, a simulation example is given in Section V to illustrate the proposed method.
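To make the observer idea concrete, here is a minimal simulation of residual generation for a single-input bilinear model, x' = Ax + u·Nx + Bu, with a Luenberger-type bilinear observer. All matrices, the gain L, and the fault profile are invented for illustration and are not the paper's design; in particular, the unknown-input decoupling described above is omitted.

```python
import numpy as np

A = np.array([[-0.5, 0.1], [0.0, -0.3]])   # illustrative dynamics
N = np.array([[0.0, 0.05], [0.02, 0.0]])   # bilinear coupling (assumed)
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
L = np.array([[1.5], [0.8]])               # observer gain (assumed)

dt = 0.01
x = np.zeros((2, 1))                       # plant state
xh = np.zeros((2, 1))                      # observer state
for k in range(1000):
    t = k * dt
    u = 1.0
    fault = 0.4 if t > 5.0 else 0.0        # additive actuator fault at t = 5 s
    x = x + dt * (A @ x + u * (N @ x) + B * (u + fault))
    y = C @ x
    # Observer copies the bilinear structure and corrects with the output error.
    xh = xh + dt * (A @ xh + u * (N @ xh) + B * u + L @ (y - C @ xh))
    residual = (y - C @ xh).item()         # grows once the fault is active
print(f"final residual = {residual:.4f}")
```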

Keywords: bilinear systems, fault diagnosis, fault-tolerant control, multi-zone buildings

Procedia PDF Downloads 172
45 High Speed Motion Tracking with Magnetometer in Nonuniform Magnetic Field

Authors: Jeronimo Cox, Tomonari Furukawa

Abstract:

Magnetometers have become more popular in inertial measurement units (IMUs) for their ability to correct estimates using the earth's magnetic field, whereas accelerometer- and gyroscope-based packages suffer from dead-reckoning errors that accumulate over time. Localization with magnetometer-inclusive IMUs has become popular in robotics as a way to track the odometry of slower-speed robots. With high-speed motions, the accumulated error grows over shorter periods, making such motions difficult to track with an IMU, especially under limited observability: visual obstruction leaves motion-tracking cameras unusable, and when motions are too dynamic for estimation techniques that rely on the observability of the gravity vector, the use of magnetometers is further justified. As available magnetometer calibration methods assume a uniform background magnetic field, estimation in nonuniform magnetic fields is problematic. Hard iron distortion is a distortion of the magnetic field by other objects that produce magnetic fields; it is typically observed as an offset of the center of the data points from the origin when a magnetometer is rotated, and its magnitude depends on proximity to the distortion sources. Soft iron distortion relates instead to the scaling of the axes of the magnetometer sensors. Hard iron distortion is the larger contributor to attitude estimation error with magnetometers. Indoor environments or spaces inside ferrite-based structures, such as building reinforcements or a vehicle, often cause proximity-dependent distortions. As positions correlate with areas of distortion, magnetometer localization methods include producing spatial maps of the magnetic field and collecting distortion signatures to aid location tracking. The goal of this paper is to compare magnetometer methods that do not need pre-produced magnetic field maps, since mapping the magnetic field of a space can be costly and inefficient. Dynamic measurement fusion is used to track the motion of a multi-link system. Conventional calibration by collecting data while rotating at a static point, real-time estimation of the calibration parameters at each time step, and the use of two magnetometers to determine local hard iron distortion are compared to confirm the robustness and accuracy of each technique. With opposite-facing magnetometers, hard iron distortion can be accounted for regardless of position, rather than assumed constant under positional change. The measured motion is a repeatable planar motion of a two-link system connected by revolute joints; the links are translated on a moving base to induce rotation of the links. The joints are equipped with absolute encoders, and the motion is recorded with cameras, to enable ground-truth comparison with each of the magnetometer methods. While the two-magnetometer method accounts for local hard iron distortion, it fails where the magnetic field direction in space is inconsistent.
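The conventional static-rotation calibration mentioned above amounts to estimating the hard-iron offset as the center of a sphere fitted to the magnetometer samples; expanding ||m - c||^2 = r^2 gives a linear least-squares problem. This sketch is a generic version of that step, not the authors' implementation.

```python
import numpy as np

def hard_iron_offset(samples):
    """Least-squares sphere fit: samples is an (N, 3) array of readings."""
    M = np.hstack([2 * samples, np.ones((samples.shape[0], 1))])
    b = np.sum(samples ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(M, b, rcond=None)
    return sol[:3]                    # offset c; sol[3] equals r^2 - ||c||^2

# Synthetic readings: a sphere of radius 50 offset by (12, -7, 3).
rng = np.random.default_rng(0)
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
readings = 50 * dirs + np.array([12.0, -7.0, 3.0])
print(hard_iron_offset(readings))     # ≈ [12, -7, 3]
```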

Keywords: motion tracking, sensor fusion, magnetometer, state estimation

Procedia PDF Downloads 84
44 Exploring the Application of IoT Technology in Lower Limb Assistive Devices for Rehabilitation during the Golden Period of Stroke Patients with Hemiplegia

Authors: Ching-Yu Liao, Ju-Joan Wong

Abstract:

Recent years have shown a trend toward younger stroke patients and an increase in ischemic strokes along with the rise in stroke incidence. This has led to a growing demand for telemedicine, which the COVID-19 pandemic made even more urgent; this shift in healthcare is also closely related to advancements in Internet of Things (IoT) technology. Stroke-induced hemiparesis is a significant issue for patients. The medical community believes that if intervention occurs within three to six months of stroke onset, 80% of the residual effects can be restored to normal, a period known as the stroke golden period; during this time, patients undergo treatment and rehabilitation, and neural plasticity is at its best. Lower limb rehabilitation after stroke generally includes exercises such as supported standing and walking posture, typically using the healthy limb to guide the affected limb toward the rehabilitation goals. Existing gait training aids in hospitals usually cover balanced gait, sitting posture training, and precise muscle control, effectively addressing poor gait, insufficient muscle activity, and the inability to train independently during recovery. However, home training aids, such as braced and wheeled devices, often rely on the healthy limb to pull the affected limb, leading to low use of the affected limb, worsening circumduction gait, and compensatory movement issues. IoT technology connects devices via the internet to record and receive data, provide feedback, and adjust equipment intelligently. This study therefore explores how IoT can be integrated into existing gait training aids to monitor and sense home rehabilitation movements, improve compensatory issues in gait training through real-time feedback, and enable healthcare professionals to quickly understand patient conditions and enhance medical communication. To understand the needs of hemiparetic patients, relevant literature from the past decade is reviewed. From the perspective of user experience, participant observation is used to explore the use of home training aids by stroke patients and therapists, and physical therapists are interviewed to obtain professional opinions and practical experience. Design specifications for home training aids for hemiparetic patients are then summarized. Applying IoT technology to lower limb training aids for stroke hemiparesis can help promote recovery of walking function, reduce muscle atrophy, and allow healthcare professionals to immediately grasp patient conditions and adjust gait training plans based on the collected and analyzed information. Exploring these potential development directions provides a valuable reference for the further application of IoT technology in medical rehabilitation.

Keywords: stroke, hemiplegia, rehabilitation, gait training, internet of things technology

Procedia PDF Downloads 29
43 European Electromagnetic Compatibility Directive Applied to Astronomical Observatories

Authors: Oibar Martinez, Clara Oliver

Abstract:

The Cherenkov Telescope Array (CTA) project aims to build two observatories of Cherenkov telescopes, located at Cerro Paranal, Chile, and La Palma, Spain. These facilities are used in this paper as a case study of how to apply the standard Directive on Electromagnetic Compatibility to astronomical observatories. Cherenkov telescopes provide valuable information on both galactic and extragalactic sources by measuring Cherenkov radiation, which is produced by particles that travel faster than the speed of light in the atmosphere. The construction requirements demand compliance with the European Electromagnetic Compatibility Directive. The largest telescopes of these observatories, the Large-Sized Telescopes (LSTs), are high-precision instruments with advanced photomultipliers able to detect the faint, sub-nanosecond blue light pulses produced by Cherenkov radiation. They have a 23-meter parabolic reflective surface, which focuses the radiation onto a camera composed of an array of high-speed photosensors that are highly sensitive to radio spectrum pollution. The camera has a field of view of about 4.5 degrees and has been designed for maximum compactness and the lowest weight, cost, and power consumption. Each pixel incorporates a photosensor able to discriminate single photons, together with the corresponding readout electronics. The first LST is already commissioned and is intended to operate as a service to the scientific community. Because of this, it must comply with a series of reliability and functional requirements and must carry Conformité Européenne (CE) marking, which demands compliance with Directive 2014/30/EU on electromagnetic compatibility. The main difficulty in accomplishing this goal is that CE marking setups and procedures were devised for industrial products, whereas no clear protocols have been defined for scientific installations. In this paper, we aim to answer the question of how the directive should be applied to our installation to guarantee the fulfillment of all requirements and the proper functioning of the telescope itself. Experts in both optics and electromagnetism were needed to make these decisions and to adapt tests that were designed for equipment of limited dimensions to large scientific plants. An analysis of the elements and configurations most likely to be affected by external interference, and of those most likely to cause the greatest disturbances, was also performed. Obtaining the CE mark requires knowing which harmonized standards apply and how the specific requirements are elaborated; for large installations of this type, the tests to be carried out must be adapted and developed. Throughout this process, certification entities and notified bodies play a key role in preparing and agreeing on the required technical documentation. We have focused our attention mostly on the technical aspects of each point. We believe this contribution will be of interest to other scientists involved in applying industrial quality assurance standards to large scientific plants.

Keywords: CE marking, electromagnetic compatibility, european directive, scientific installations

Procedia PDF Downloads 110
42 Advancing Agriculture through Technology: An Abstract of Research Findings

Authors: Eugene Aninagyei-Bonsu

Abstract:

Introduction: Agriculture has been a cornerstone of human civilization, ensuring food security and livelihoods for billions of people worldwide. In recent decades, rapid advancements in technology have revolutionized the agricultural sector, offering innovative solutions to enhance productivity, sustainability, and efficiency. This abstract summarizes key findings from a research study that explores the impacts of technology in modern agriculture and its implications for future food production systems. Methodologies: The research study employed a mixed-methods approach, combining quantitative data analysis with qualitative interviews and surveys to gain a comprehensive understanding of the role of technology in agriculture. Data was collected from various stakeholders, including farmers, agricultural technicians, and industry experts, to capture diverse perspectives on the adoption and utilization of agricultural technologies. The study also utilized case studies and literature reviews to contextualize the findings within the broader agricultural landscape. Major Findings: The research findings reveal that technology plays a pivotal role in transforming traditional farming practices and driving innovation in agriculture. Advanced technologies such as precision agriculture, drone technology, genetic engineering, and smart irrigation systems have significantly improved crop yields, reduced environmental impact, and optimized resource utilization. Farmers who have embraced these technologies have reported increased productivity, enhanced profitability, and improved resilience to environmental challenges. Furthermore, the study highlights the importance of accessible and affordable technology solutions for smallholder farmers in developing countries. Mobile applications, sensor technologies, and digital platforms have enabled small-scale farmers to access market information, weather forecasts, and agricultural best practices, empowering them to make informed decisions and improve their livelihoods. The research emphasizes the need for targeted policies and investments to bridge the digital divide and promote equitable technology adoption in agriculture. Conclusion: In conclusion, this research underscores the transformative potential of technology in agriculture and its critical role in advancing sustainable food production systems. The findings suggest that harnessing technology can address key challenges facing the agricultural sector, including climate change, resource scarcity, and food insecurity. By embracing innovation and leveraging technology, farmers can enhance their productivity, profitability, and resilience in a rapidly evolving global food system. Moving forward, policymakers, researchers, and industry stakeholders must collaborate to facilitate the adoption of appropriate technologies, support capacity building, and promote sustainable agricultural practices for a more resilient and food-secure future.

Keywords: technology development in modern agriculture, the influence of information technology access in agriculture, analyzing agricultural technology development, analyzing the frontier technology of agricultural IoT

Procedia PDF Downloads 35
41 Comparison of Bioelectric and Biomechanical Electromyography Normalization Techniques in Disparate Populations

Authors: Drew Commandeur, Ryan Brodie, Sandra Hundza, Marc Klimstra

Abstract:

The amplitude of raw electromyography (EMG) is affected by recording conditions and often requires normalization to allow meaningful comparisons. Bioelectric methods normalize with an EMG signal recorded during a standardized task or from the experimental protocol itself, while biomechanical methods often involve measurements with an additional sensor such as a force transducer. Common bioelectric normalization techniques for treadmill walking include the maximum voluntary isometric contraction (MVIC), the dynamic EMG peak (EMGPeak), and the dynamic EMG mean (EMGMean). There are several concerns with using MVICs to normalize EMG, including poor reliability and potential discomfort. A limitation of bioelectric normalization techniques is that they can misrepresent the absolute magnitude of force generated by the muscle and so affect the interpretation of EMG between functionally disparate groups; additionally, methods that normalize to EMG recorded during the task itself may eliminate some real inter-individual variability due to biological variation. This study compared biomechanical and bioelectric EMG normalization techniques during treadmill walking to assess the impact of the normalization method on the functional interpretation of EMG data. For the biomechanical method, we normalized EMG to a target torque (EMGTS); the bioelectric methods normalized to the mean and peak of the signal during the walking task (EMGMean and EMGPeak). The effects of normalization on muscle activation pattern, EMG amplitude, and inter-individual variability were compared between disparate cohorts of OLD (76.6 yrs, N=11) and YOUNG (26.6 yrs, N=11) adults. Participants walked on a treadmill at a self-selected pace while EMG was recorded from the right lower limb. EMG data from the soleus (SOL), medial gastrocnemius (MG), tibialis anterior (TA), vastus lateralis (VL), and biceps femoris (BF) were phase-averaged into 16 bins (phases) representing the gait cycle, with bins 1-10 associated with right stance and bins 11-16 with right swing. Pearson's correlations showed that activation patterns across the gait cycle were similar between all methods, ranging from r = 0.86 to r = 1.00 with p < 0.05, indicating that each method can characterize the muscle activation pattern during walking. Repeated-measures ANOVA showed a main effect of age in MG for EMGPeak, but no other main effects were observed. Age-by-phase interactions in EMG amplitude between YOUNG and OLD differed with each method, leading to different statistical interpretations between methods: EMGTS normalization characterized the fewest differences (four phases across all 5 muscles), while EMGMean (11 phases) and EMGPeak (19 phases) showed considerably more differences between cohorts. The second notable finding was that the coefficient of variation, representing inter-individual variability, was greatest for EMGTS and lowest for EMGMean, with EMGPeak slightly higher than EMGMean for all muscles. This supports our expectation that EMGTS normalization retains inter-individual variability, which may be desirable; however, it also suggests that even when large differences are expected, a larger sample size may be required to observe them. Our findings clearly indicate that the interpretation of EMG is highly dependent on the normalization method used, and it is essential to consider the strengths and limitations of each method when drawing conclusions.
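A sketch of the binning and bioelectric normalizations described follows; the signal and the heel-strike indices are invented placeholders, not the study data.

```python
import numpy as np

def phase_average(emg, cycle_starts, n_bins=16):
    """Average rectified EMG into n_bins per gait cycle, then across cycles."""
    cycles = []
    for s, e in zip(cycle_starts[:-1], cycle_starts[1:]):
        rectified = np.abs(emg[s:e])
        bins = np.array_split(rectified, n_bins)
        cycles.append([b.mean() for b in bins])
    return np.mean(cycles, axis=0)

emg = np.random.randn(5000)               # hypothetical raw EMG samples
heel_strikes = np.arange(0, 5001, 1000)   # hypothetical cycle boundaries
profile = phase_average(emg, heel_strikes)
emg_mean_norm = profile / profile.mean()  # EMGMean normalization
emg_peak_norm = profile / profile.max()   # EMGPeak normalization
```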

Keywords: electromyography, EMG normalization, functional EMG, older adults

Procedia PDF Downloads 91
40 Physiological Effects on Scientist Astronaut Candidates: Hypobaric Training Assessment

Authors: Pedro Llanos, Diego García

Abstract:

This paper aims to expand our understanding of the effects of hypoxia training on our bodies, in order to better model its dynamics and leverage some of its implications and effects on human health. Hypoxia training is a recommended practice for military and civilian pilots, as it allows them to recognize their early hypoxia signs and symptoms; here it was extended to Scientist Astronaut Candidates (SACs), who underwent hypobaric hypoxia (HH) exposure as part of a training activity for prospective suborbital flight applications. This observational-analytical study describes physiologic responses and symptoms experienced by a SAC group before, during, and after HH exposure and proposes a model for assessing predicted versus observed physiological responses. A group of individuals with diverse Science, Technology, Engineering, and Mathematics (STEM) backgrounds completed a hypobaric training session at altitudes up to 22,000 ft (FL220), or 6,705 meters, during which heart rate (HR), breathing rate (BR), and core temperature (Tc) were monitored with a chest-strap sensor pre- and post-HH exposure. A pulse oximeter registered oxygen saturation (SpO2) levels and the number and duration of desaturations during the HH chamber flight. Hypoxia symptoms described by the SACs during the HH training session were also registered. These data allowed us to generate a preliminary predictive model of the oxygen desaturation and O2 pressure curve for each subject, consisting of a sixth-order polynomial fit during exposure and a fifth- or fourth-order polynomial fit during recovery. Data analysis showed no significant differences in HR and BR between pre- and post-HH exposure in most of the SACs, while Tc measures showed slight but consistent decreases. All subjects registered SpO2 greater than 94% for the majority of their individual HH exposures, but all of them presented at least one clinically significant desaturation (SpO2 < 85% for more than 5 seconds), and half of the individuals showed SpO2 below 87% for at least 30% of their HH exposure time. Finally, real-time collection of HH symptoms identified temperature-related somatosensory perceptions (SP) in 65% of individuals and task-focus issues in 52.5% of individuals as the most common HH indications. 95% of the subjects experienced HH onset symptoms below FL180; all participants achieved full recovery from HH symptoms within 1 minute of donning their O2 mask. The current HH study performed on this group of individuals suggests a rapid and fully reversible physiologic response after HH exposure, as expected and as obtained in previous studies. Our data showed consistent agreement between predicted and observed SpO2 curves during HH, suggesting a mathematical function that may be used to model HH performance deficiencies. During the HH study, real-time HH symptoms were registered, providing evidence of SP and task focus as the earliest and most common indicators. Finally, an assessment of HH signs and symptoms in a heterogeneous group of non-pilot individuals showed results similar to previous studies in homogeneous populations of pilots.
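The predictive model described above can be reproduced in outline with numpy's polynomial fitting; the sketch below uses synthetic SpO2 traces, with only the polynomial orders taken from the abstract, so all data values are placeholders.

    import numpy as np

    rng = np.random.default_rng(0)
    t_exp = np.linspace(0, 10, 60)                   # minutes of HH exposure
    spo2_exp = 98 - 1.2 * t_exp + 0.4 * rng.standard_normal(60)
    t_rec = np.linspace(0, 2, 20)                    # minutes after donning the O2 mask
    spo2_rec = 85 + 6.5 * t_rec + 0.4 * rng.standard_normal(20)

    exp_fit = np.polyfit(t_exp, spo2_exp, 6)         # sixth-order fit during exposure
    rec_fit = np.polyfit(t_rec, spo2_rec, 5)         # fifth-order fit during recovery

    predicted = np.polyval(exp_fit, t_exp)           # predicted vs observed comparison
    rmse = np.sqrt(np.mean((predicted - spo2_exp) ** 2))
    print(f"exposure-fit RMSE: {rmse:.2f} %SpO2")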

Keywords: slow onset hypoxia, hypobaric chamber training, altitude sickness, symptoms and altitude, pressure cabin

Procedia PDF Downloads 116
39 Reducing Flood Risk in a Megacity: Using Mobile Application and Value Capture for Flood Risk Prevention and Risk Reduction Financing

Authors: Dedjo Yao Simon, Takahiro Saito, Norikazu Inuzuka, Ikuo Sugiyama

Abstract:

The megacity of Abidjan is a coastal urban area where the number of reported floods and their associated impacts are increasing rapidly due to climate change, uncontrolled urbanization, rapid population growth, a lack of flood disaster mitigation, and low citizen awareness. The objective of this research is to reduce, over the short and long term, the human and socio-economic impact of flooding. Hydrological simulation is applied to freely available global spatial data (digital elevation model, satellite-based rainfall estimates, land use) to identify flood-prone areas and to map flood risk. Direct interviews with a sample of residents are used to validate the simulation results. A mobile application (Flood Locator) is then prototyped to disseminate the risk information to citizens. In addition, a value capture strategy is proposed to mobilize financial resources for a disaster risk reduction fund (DRRf) to reduce the impact of flooding. The town of Cocody in Abidjan is selected as a case study area to implement this research. The mapping of flood risk reveals that the population living in the study area is highly vulnerable. For a 5-year flood, more than 60% of the floodplain is affected by a water depth of at least 0.5 meters, and more than 1000 ha with at least 5000 buildings are directly exposed. The risk becomes higher for 50- and 100-year floods. The interviews also reveal that the majority of citizens are not aware of the risk and severity of flooding in their community. This shortage of information is overcome by the Flood Locator and by an urban flood database we prototyped to accumulate flood data. Flood Locator allows users to view floodplain extent and water depth on a digital map; the user can activate the GPS sensor of the mobile device to visualize his or her location on the map. Additional features allow citizen users to capture flood event and damage information and send it remotely to the database. Furthermore, the disclosure of risk information could result in a decrease (-14%) in the value of properties located inside the floodplain and an increase (+19%) in the value of properties in safer suburban areas. The tax increment generated by the higher property values in the safer area should be captured to constitute the DRRf. The fund should be allocated to flood risk reduction for the benefit of people living in flood-prone areas. The flood prevention system discussed in this research will minimize, in the short and long term, direct damages in at-risk areas through effective citizen awareness and the availability of the DRRf. It will also contribute to the growth of the urban area in safer zones and reduce human settlement in risky areas in the long term. Data accumulated in the urban flood database through the warning app will help regenerate Abidjan as a more resilient city by means of risk-aware land use in the master plan.
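The value capture mechanism can be illustrated with a toy calculation; the tax rate and assessed property values below are hypothetical, with only the -14%/+19% value shifts taken from the abstract.

    def tax_increment(base_value, value_change, tax_rate=0.015):
        """Annual tax increment arising from a change in assessed property value."""
        return base_value * value_change * tax_rate

    # hypothetical total assessed property values (USD) for the two zones
    floodplain_shift = tax_increment(500_000_000, -0.14)   # -14% inside the floodplain
    suburb_shift = tax_increment(800_000_000, +0.19)       # +19% in the safer suburb

    # only the positive increment is captured to constitute the DRRf
    drrf = max(suburb_shift, 0.0)
    print(f"Annual DRRf contribution: {drrf:,.0f} USD")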

Keywords: abidjan, database, flood, geospatial techniques, risk communication, smartphone, value capture

Procedia PDF Downloads 290
38 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models

Authors: V. Mantey, N. Findlay, I. Maddox

Abstract:

The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves become more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of data available is impossible to examine manually. The question therefore becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery, with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources to solve this problem, and models have been developed to detect buildings in optical satellite imagery. By and large, however, most models focus on affluent regions where buildings are generally larger and constructed further apart. This work focuses on the more difficult problem of detection in densely populated regions. The primary challenge with detecting small buildings in such regions is both the spatial and spectral resolution of the optical sensor. Densely packed buildings with similar construction materials are difficult to separate due to their similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that training models until they overfit the input sample can perform better in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires less input data. The model developed for this study has also been fine-tuned using existing, open-source building vector datasets. This is particularly valuable in the context of disaster relief, where information is required within a very short time span. Leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest. The training period itself is also shorter for smaller datasets. Requiring less data means that only a few quality areas are necessary, so any weak or underpopulated regions in the data can be skipped in favor of areas with higher-quality vectors. In this study, a landcover classification model was developed in conjunction with the building detection tool to provide a secondary source for quality-checking the detected buildings, which has greatly reduced the false positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and have been employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data is scarce, overfitted models can be judiciously applied to solve time-sensitive problems.
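As a rough illustration of the deliberate-overfitting strategy (not the authors' actual production code), the following sketch fine-tunes torchvision's pre-trained Mask R-CNN on a single synthetic training chip for many iterations; the dataset, class count, and training schedule are assumptions.

    import torch
    from torchvision.models.detection import maskrcnn_resnet50_fpn, MaskRCNN_ResNet50_FPN_Weights
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
    from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

    # start from a generalized pre-trained model, then swap heads for 2 classes
    model = maskrcnn_resnet50_fpn(weights=MaskRCNN_ResNet50_FPN_Weights.DEFAULT)
    in_box = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_box, num_classes=2)
    in_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_mask, 256, num_classes=2)

    # one synthetic chip stands in for a small, open-source footprint dataset
    image = torch.rand(3, 256, 256)
    target = {
        "boxes": torch.tensor([[30.0, 40.0, 90.0, 110.0]]),   # one building box
        "labels": torch.tensor([1]),
        "masks": torch.zeros(1, 256, 256, dtype=torch.uint8),
    }
    target["masks"][0, 40:110, 30:90] = 1

    optimizer = torch.optim.SGD(model.parameters(), lr=5e-3, momentum=0.9)
    model.train()
    for step in range(100):   # deliberately long for the sample size: train into overfitting
        losses = model([image], [target])
        loss = sum(losses.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()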

Keywords: building detection, disaster relief, mask-RCNN, satellite mapping

Procedia PDF Downloads 169
37 Tele-Rehabilitation for Multiple Sclerosis: A Case Study

Authors: Sharon Harel, Rachel Kizony, Yoram Feldman, Gabi Zeilig, Mordechai Shani

Abstract:

Multiple Sclerosis (MS) is a neurological disease that may restrict young adults' participation in daily activities. Main symptoms include fatigue, weakness, and cognitive decline; the appearance of symptoms, their severity, and their rate of deterioration vary between patients. The challenge for health services is to provide long-term rehabilitation to people with MS. The objective of this presentation is to describe the course of a tele-rehabilitation service for a woman with MS. Methods: R. is a 48-year-old woman, diagnosed with MS at the age of 22. She started to suffer from weakness of her non-dominant left upper extremity about ten years after the diagnosis. She was referred to the tele-rehabilitation service by her rehabilitation team 16 years after diagnosis. Her goals were to improve her ability to use her affected upper extremity in daily activities. On admission, her score on the Mini-Mental State Exam was 30/30. Her Fugl-Meyer Assessment (FMA) score for the left upper extremity was 48/60, indicating mild weakness, and her shoulder abduction was limited (90 degrees). In addition, she reported little use of her arm in daily activities, as shown by her responses to the Motor Activity Log (MAL), which were 1.25/5 for amount of use and 1.37 for quality of use. R. received two 30-minute online sessions per week in the tele-rehabilitation service with the CogniMotion system, complemented by self-practice with the system. The CogniMotion system provides a hybrid (synchronous-asynchronous), home-based tele-rehabilitation program to improve the motor, cognitive, and functional status of people with neurological deficits. The system consists of a computer, a large monitor, and Microsoft's Kinect 3D sensor. This equipment is located in the client's home and connected via WiFi to a clinician's computer in a remote clinic. The client sits in front of the monitor and uses his or her body movements to interact with games and tasks presented on the monitor. The system provides feedback in the form of 'knowledge of results' (e.g., success in a game) and 'knowledge of performance' (e.g., alerts for compensatory movements) to enhance motor learning. The games and tasks were adapted to R.'s motor abilities, and the level of difficulty was gradually increased accordingly. Her second assessment (after 35 online sessions) showed improvement in her FMA score to 52 and shoulder abduction to 140 degrees. Moreover, her responses to the MAL indicated increased amount (2.4) and quality (2.2) of use of her left upper extremity in daily activities. She reported a high level of enjoyment of the treatments (5/5), particularly the combination of cognitive challenges with body movement. In addition, she found the system easy to use, as reflected in her responses to the System Usability Scale (85/100). To date, R. continues to receive treatments in the tele-rehabilitation service. To conclude, this case report shows the potential of tele-rehabilitation for people with MS, both as a strategy to enhance the use of the upper extremity in daily activities and as a means of maintaining motor function.
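The 'knowledge of performance' feedback can be illustrated with a toy compensatory-movement check on Kinect-style skeleton joints; the CogniMotion system's actual rules are not published, so the joint coordinates and the 10-degree threshold below are purely hypothetical.

    import numpy as np

    def trunk_lean_deg(shoulder_center, hip_center):
        """Angle between the trunk vector and vertical, in degrees."""
        trunk = np.asarray(shoulder_center) - np.asarray(hip_center)
        vertical = np.array([0.0, 1.0, 0.0])
        cos = trunk @ vertical / np.linalg.norm(trunk)
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    # hypothetical skeleton joint positions (meters) for one frame
    lean = trunk_lean_deg([0.05, 1.45, 2.0], [0.0, 1.0, 2.0])
    if lean > 10.0:   # illustrative threshold, not the system's actual rule
        print(f"Compensatory trunk lean detected: {lean:.1f} deg")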

Keywords: motor function, multiple-sclerosis, tele-rehabilitation, daily activities

Procedia PDF Downloads 180
36 Efficacy of Deep Learning for Below-Canopy Reconstruction of Satellite and Aerial Sensing Point Clouds through Fractal Tree Symmetry

Authors: Dhanuj M. Gandikota

Abstract:

Sensor-derived three-dimensional (3D) point clouds of trees are invaluable in remote sensing analysis for the accurate measurement of key structural metrics, bio-inventory values, spatial planning/visualization, and ecological modeling. Machine learning (ML) holds potential for addressing the restrictive tradeoffs in cost, spatial coverage, resolution, and information gain that exist among current point cloud sensing methods. Terrestrial laser scanning (TLS) remains the highest-fidelity source of both canopy and below-canopy structural features, but its usage is limited in both coverage and cost, requiring manual deployment to map out large forested areas. While aerial laser scanning (ALS) remains a reliable avenue of active LIDAR remote sensing, it is also cost-restrictive in its deployment methods. Space-borne photogrammetry from high-resolution satellite constellations is an avenue of passive remote sensing with promising viability for the accurate construction of vegetation 3D point clouds. It provides both the lowest comparative cost and the largest spatial coverage across remote sensing methods. However, both space-borne photogrammetry and ALS have technical limitations in capturing valuable below-canopy point cloud data. Looking to minimize these tradeoffs, we explored a class of powerful ML algorithms called deep learning (DL) that shows promise in recent research on 3D point cloud reconstruction and interpolation. Our research details the efficacy of applying these DL techniques to reconstruct accurate below-canopy point clouds from space-borne and aerial remote sensing through learned patterns of tree-species fractal symmetry properties and the supplementation of locally sourced bio-inventory metrics. From our dataset, consisting of tree point clouds obtained from TLS, we deconstructed the point clouds of each tree into those that would be obtained through ALS and satellite photogrammetry of varying resolutions. We fed this ALS/satellite point cloud dataset, along with the simulated local bio-inventory metrics, into the DL point cloud reconstruction architectures to generate the full 3D tree point clouds (with the full TLS tree point clouds containing the below-canopy information serving as ground truth). Point cloud reconstruction accuracy was validated both through the measurement of error from the original TLS point clouds and through the error in extracting key structural metrics, such as crown base height, diameter above root crown, and leaf/wood volume. The results of this research additionally demonstrate the supplemental performance gain of using minimal locally sourced bio-inventory metric information as an input to ML systems to reach specified accuracy thresholds of tree point cloud reconstruction. This research provides insight into methods for the rapid, cost-effective, and accurate construction of below-canopy tree 3D point clouds, as well as the potential of ML and DL to learn complex, unmodeled patterns of fractal tree growth symmetry.
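One common way to score such reconstructions against the TLS ground truth is a symmetric nearest-neighbour (Chamfer) distance; the abstract does not name its exact error metric, so the sketch below, using synthetic clouds, is an assumed, illustrative choice.

    import numpy as np
    from scipy.spatial import cKDTree

    def chamfer_distance(reconstructed, reference):
        """Symmetric mean nearest-neighbour distance between two (N, 3) point clouds."""
        d_rec, _ = cKDTree(reference).query(reconstructed)   # reconstructed -> reference
        d_ref, _ = cKDTree(reconstructed).query(reference)   # reference -> reconstructed
        return d_rec.mean() + d_ref.mean()

    rng = np.random.default_rng(1)
    tls_cloud = rng.random((2000, 3))                        # stand-in full TLS tree cloud
    reconstructed = tls_cloud + 0.01 * rng.standard_normal((2000, 3))
    print(f"Chamfer distance: {chamfer_distance(reconstructed, tls_cloud):.4f}")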

Keywords: deep learning, machine learning, satellite, photogrammetry, aerial laser scanning, terrestrial laser scanning, point cloud, fractal symmetry

Procedia PDF Downloads 102
35 Automated Adaptations of Semantic User- and Service-Profile Representations by Learning the User Context

Authors: Nicole Merkle, Stefan Zander

Abstract:

Ambient Assisted Living (AAL) describes a technological and methodological stack (e.g., formal model-theoretic semantics, rule-based reasoning, and machine learning) for modeling different aspects of human behavior, activities, and characteristics. Hence, a semantic representation of the user environment and its relevant elements is required in order to allow assistive agents to recognize situations and deduce appropriate actions. Furthermore, the user and his or her characteristics (e.g., physical, cognitive, preferences) need to be represented with a high degree of expressiveness in order to allow software agents a precise evaluation of the user's context models. The correct interpretation of these context models depends highly on temporal and spatial circumstances as well as individual user preferences. In most AAL approaches, model representations of real-world situations capture the current state of a universe of discourse at a given point in time while neglecting transitions between states. The AAL domain currently lacks approaches that address the dynamic adaptation of context-related representations. Semantic representations of relevant real-world excerpts (e.g., user activities) help cognitive, rule-based agents to reason and make decisions in order to assist users in appropriate tasks and situations. However, rules and reasoning on semantic models alone are not sufficient for handling uncertainty and fuzzy situations. A given situation can require different (re-)actions in order to achieve the best result with respect to the user and his or her needs. But what is the best result? To answer this question, we need to consider that every smart agent is required to achieve an objective, but this objective is mostly defined by domain experts, who can also fail in their estimation of what is and is not desired by the user. Hence, a smart agent has to be able to learn from context history data and estimate or predict what is most likely in certain contexts. Furthermore, different agents with contrary objectives can cause collisions, as their actions influence the user's context and its constituting conditions in unintended or uncontrolled ways. We present an approach for dynamically updating a semantic model with respect to the current user context that allows flexibility of the software agents and enhances their conformance in order to improve the user experience. The presented approach adapts rules by learning from sensor evidence and user actions, using probabilistic reasoning approaches based on given expert knowledge. The semantic domain model consists basically of device-, service-, and user-profile representations. In this paper, we present how this semantic domain model can be used to compute the probability of matching rules and actions. We apply this probability estimation to compare the current domain model representation with the computed one in order to adapt the formal semantic representation. Our approach aims at minimizing the likelihood of unintended interferences in order to eliminate conflicts and unpredictable side effects by updating pre-defined expert knowledge according to the most probable context representation. This enables agents to adapt to dynamic changes in the environment, which enhances the provision of adequate assistance and positively affects user satisfaction.
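The probability estimation over context representations can be illustrated with a toy Bayesian update that combines expert-defined priors with incoming sensor evidence; the context and evidence names below are hypothetical, and the sketch does not claim to reproduce the authors' formalism.

    # toy Bayesian update: P(context | evidence) is proportional to
    # P(evidence | context) * P(context)
    priors = {"user_resting": 0.6, "user_active": 0.4}            # expert-defined priors
    likelihoods = {                                                # P(evidence | context)
        "user_resting": {"low_motion": 0.9, "high_motion": 0.1},
        "user_active":  {"low_motion": 0.2, "high_motion": 0.8},
    }

    def update(priors, likelihoods, evidence):
        unnorm = {c: priors[c] * likelihoods[c][evidence] for c in priors}
        z = sum(unnorm.values())
        return {c: p / z for c, p in unnorm.items()}

    posterior = update(priors, likelihoods, "low_motion")
    most_probable = max(posterior, key=posterior.get)   # adapt rules to this context
    print(posterior, most_probable)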

Keywords: ambient intelligence, machine learning, semantic web, software agents

Procedia PDF Downloads 281
34 Characterizing the Rectification Process for Designing Scoliosis Braces: Towards Digital Brace Design

Authors: Inigo Sanz-Pena, Shanika Arachchi, Dilani Dhammika, Sanjaya Mallikarachchi, Jeewantha S. Bandula, Alison H. McGregor, Nicolas Newell

Abstract:

The use of orthotic braces for adolescent idiopathic scoliosis (AIS) patients is the most common non-surgical treatment to prevent deformity progression. The traditional method of creating an orthotic brace involves casting the patient's torso to obtain a representative geometry, which is then rectified by an orthotist to the desired geometry of the brace. Recent improvements in 3D scanning technologies, rectification software, CNC, and additive manufacturing processes have made it possible to complement, or in some cases replace, manual methods with digital approaches. However, the rectification process remains dependent on the orthotist's skills. The rectification process therefore needs to be carefully characterized to ensure that braces designed through a digital workflow are as efficient as those created using a manual process. The aim of this study is to compare 3D scans of patients with AIS against 3D scans of both pre- and post-rectified casts that have been manually shaped by an orthotist. Six AIS patients were recruited from the Ragama Rehabilitation Clinic, Colombo, Sri Lanka. All patients were between 10 and 15 years old, were skeletally immature (Risser grade 0-3), and had Cobb angles between 20-45°. Seven spherical markers were placed at key anatomical locations on each patient's torso and on the pre- and post-rectified molds so that distances could be reliably measured. 3D scans were obtained of 1) the patient's torso and pelvis, 2) the patient's pre-rectification plaster mold, and 3) the patient's post-rectification plaster mold, using a Structure Sensor Mark II 3D scanner (Occipital Inc., USA). 3D stick-body models were created for each scan to represent the distances between anatomical landmarks, and these were used to analyze the changes in position and orientation of the anatomical landmarks between scans using Blender open-source software. 3D surface deviation maps, generated using CloudCompare open-source software, represented volume differences between the scans. The 3D stick-body models showed changes in the position and orientation of thorax anatomical landmarks between the patient and post-rectification scans for all patients. Anatomical landmark position and volume differences were seen between 3D scans of the patients' torsos and the pre-rectified molds. Between the pre- and post-rectified molds, material removal was consistently seen on the anterior side of the thorax and the lateral areas below the ribcage. Volume differences were seen in areas where the orthotist planned to place pressure pads (usually at the trochanter on the side to which the lumbar curve was tilted (trochanter pad), at the lumbar apical vertebra (lumbar pad), on the rib connected to the apical vertebra at the mid-axillary line (thoracic pad), and on the ribs corresponding to the upper thoracic vertebrae (axillary extension pad)). The rectification process requires the skill and experience of an orthotist; however, this study demonstrates that the brace shape and the location and volume of material removed from the pre-rectification mold can be characterized and quantified. Results from this study can be fed into software to accelerate the brace design process and take steps towards an automated digital rectification process.
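The stick-model analysis of landmark orientation changes can be sketched as the angle between corresponding marker-pair vectors in two scans; the marker coordinates below are hypothetical stand-ins for digitized marker positions.

    import numpy as np

    def segment_angle_deg(a_top, a_bottom, b_top, b_bottom):
        """Angle between the same stick segment measured in two different scans."""
        u = np.asarray(a_top) - np.asarray(a_bottom)
        v = np.asarray(b_top) - np.asarray(b_bottom)
        cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    # hypothetical marker coordinates (mm): torso scan vs post-rectified mold
    angle = segment_angle_deg([12, 412, 30], [8, 190, 25],    # torso: upper/lower thorax
                              [15, 410, 28], [6, 192, 24])    # mold: same two markers
    print(f"Thorax segment re-orientation: {angle:.1f} deg")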

Keywords: additive manufacturing, orthotics, scoliosis brace design, sculpting software, spinal deformity

Procedia PDF Downloads 145
33 Assessment and Characterization of Dual-Hardening Adhesion Promoter for Self-Healing Mechanisms in Metal-Plastic Hybrid System

Authors: Anas Hallak, Latifa Seblini, Juergen Wilde

Abstract:

In mechatronics and sensor technology, plastic housings are used to protect sensitive components from harmful environmental influences, such as moisture, media, or reactive substances. Connections, preferably in the form of metallic lead-frame structures, through the housing wall are required for their electrical supply or control. In such systems, an insufficient bond between the plastic component, e.g., Polyamide 66, and the metal surface, e.g., copper, due to material incompatibility is the dominant weakness. As a result, leakage paths can occur along the plastic-metal interface. Since adhesive bonding has become established as one of the most important joining processes, and its use has expanded significantly, driven by the development of improved high-performance adhesives and bonding techniques, this technology has also been applied to metal-plastic hybrid structures. In this study, an epoxy bonding agent from DELO (DUALBOND LT2266) was used to improve the mechanical and chemical bonding between the metal and the polymer. It is an adhesion promoter with two reaction stages. In the first stage, fixation to the lead frame is achieved directly after the coating step by UV exposure for a few seconds. In the second stage, the material is thermally hardened during injection molding. To analyze the two reaction stages of the primer, dynamic DSC experiments were carried out and correlated with Fourier-transform infrared spectroscopy measurements. Furthermore, the number of crosslinking bonds formed in the system in each reaction stage was estimated by rheological characterization. These investigations were performed with different UV exposure times (12 and 96 s) over an industrially relevant temperature range from -20 to 175°C. The shear viscosity of the primer was measured as a function of temperature and exposure time. For further interpretation, storage modulus values were calculated, and the so-called Booij-Palmen plot was sketched. The next aspect of this study is the self-healing mechanism in the hybrid system, in which the primer should flow into micro-damage such as interfacial cracks, inhibit their growth, and close them. The ability of the primer to flow into and penetrate defined capillaries made in Ultramid was investigated. Holes with a diameter of 0.3 mm were produced in injection-molded A3EG7 plates of 4 mm thickness. A copper substrate coated with the DUALBOND was placed on the A3EG7 plate and pressed with a defined force. Metallographic analyses were carried out to verify the filling grade, showing an almost 95% filling ratio of the capillaries. Finally, to assess the self-healing mechanism in metal-plastic hybrid systems, characterizations were performed on a simple geometry with a metal inlay developed by the Institute of Polymer Technology at Friedrich-Alexander-University. The specimens were modified with a tungsten wire that was pulled out after injection molding to create a micro-hole in the specimen at the interface between the primer and the polymer. The capability of the primer to heal these micro-cracks upon heating, pressing, and thermal aging was characterized through metallographic analyses.
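The quantities behind the Booij-Palmen plot follow directly from oscillatory-shear data: the phase angle delta is plotted against the magnitude of the complex modulus |G*|. The sketch below assumes hypothetical G'/G'' values purely for illustration; it is not the study's measured data.

    import numpy as np

    # hypothetical oscillatory-shear data: storage (G') and loss (G'') moduli in Pa
    g_storage = np.array([1.0e3, 5.0e3, 2.0e4, 8.0e4])
    g_loss = np.array([2.0e3, 6.0e3, 1.5e4, 3.0e4])

    g_complex = np.sqrt(g_storage**2 + g_loss**2)        # |G*|
    delta = np.degrees(np.arctan2(g_loss, g_storage))    # phase angle in degrees

    # the Booij-Palmen representation is delta versus log10(|G*|)
    for gc, d in zip(g_complex, delta):
        print(f"|G*| = {gc:9.1f} Pa, delta = {d:5.1f} deg")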

Keywords: hybrid structures, self-healing, thermoplastic housing, adhesive

Procedia PDF Downloads 193
32 Quantifying Impairments in Whiplash-Associated Disorders and Association with Patient-Reported Outcomes

Authors: Harpa Ragnarsdóttir, Magnús Kjartan Gíslason, Kristín Briem, Guðný Lilja Oddsdóttir

Abstract:

Introduction: Whiplash-Associated Disorder (WAD) is a health problem characterized by motor, neurological, and psychosocial symptoms, stressing the need for a multimodal treatment approach. To achieve an individualized multimodal approach, prognostic factors need to be identified early using validated patient-reported and objective outcome measures. The aim of this study is to demonstrate the degree of association between patient-reported and clinical outcome measures of WAD patients in the subacute phase. Methods: Individuals (n=41) with subacute (≥1, ≤3 months) WAD (I-II), medium- to high-risk symptoms, or neck pain rated ≥4/10 on the Visual Analog Scale (VAS) were examined. Outcome measures included movement control (Butterfly test) and cervical active range of motion (cAROM), measured with the NeckSmart system, which uses an inertial measurement unit (IMU) connected to a computer. The IMU sensor is placed on the participant's head, and the participant receives visual feedback about head movement. Patient-reported neck disability, pain intensity, general health, self-perceived handicap, central sensitization, and difficulties due to dizziness were measured using questionnaires. Excel and R statistical software were used for the statistical analyses. Results: Forty-one participants, 15 males (37%) and 26 females (63%), mean (SD) age 36.8 (±12.7), underwent data collection. Mean (SD) amplitude accuracy (AA) in the Butterfly test for the easy, medium, and difficult paths was 2.4 mm (0.9), 4.4 mm (1.8), and 6.8 mm (2.7), respectively. Mean (SD) cAROM for flexion, extension, left rotation, and right rotation was 46.3° (18.5), 48.8° (17.8), 58.2° (14.3), and 58.9° (15.0), respectively. Mean scores on the Neck Disability Index (NDI), VAS, Dizziness Handicap Inventory (DHI), Central Sensitization Inventory (CSI), and 36-Item Short Form Survey RAND version (RAND) were 43% (17.4), 7 (1.7), 37 (25.4), 51 (17.5), and 39.2 (17.7), respectively. Females showed significantly greater deviation in AA than males for the easy and medium Butterfly paths (p<0.05). Statistically significant moderate to strong positive correlations were found between the DHI and the easy (r=0.6, p=0.05), medium (r=0.5, p=0.05), and difficult (r=0.5, p<0.05) Butterfly paths; between the total RAND score and all cAROMs (r=0.4-0.7, p≤0.05) except flexion (r=0.4, p=0.7); and between the NDI score and the CSI (r=0.7, p<0.01), VAS (r=0.5, p<0.01), and DHI (r=0.7, p<0.01) scores, respectively. Discussion: All patient-reported and objective measures were found to be outside the reference range. The results suggest that females have worse neck movement control in the subacute WAD phase; however, since no statistical difference based on gender was found in the patient-reported measures, females might simply have worse movement control than males in general in this phase. The correlation found between the DHI and the Butterfly test can be explained by the fact that the DHI measures proprioceptive symptoms, such as dizziness and eye-movement disorders, that can affect the outcome of movement-control tests. A correlation was found between the total RAND score and cAROM, suggesting that a reduced range of motion affects quality of life. Significance: The NeckSmart system can detect abnormalities in cAROM, fine movement control, and kinesthesia of the neck. The results suggest females have worse movement control than males and show moderate to high correlations between several patient-reported and objective measurements.
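The reported associations are plain Pearson correlations; the study used Excel and R, so the Python sketch below, with made-up paired scores, is only an illustration of the computation.

    import numpy as np
    from scipy.stats import pearsonr

    # hypothetical paired scores: DHI questionnaire vs Butterfly amplitude accuracy (mm)
    dhi = np.array([12, 58, 30, 44, 70, 25, 66, 38])
    aa_medium = np.array([2.9, 5.6, 3.8, 4.9, 6.8, 3.1, 6.0, 4.4])

    r, p = pearsonr(dhi, aa_medium)   # correlation coefficient and two-sided p-value
    print(f"r = {r:.2f}, p = {p:.3f}")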

Keywords: whiplash associated disorders, car-collision, neck, trauma, subacute

Procedia PDF Downloads 70
31 Hyperspectral Imagery for Tree Speciation and Carbon Mass Estimates

Authors: Jennifer Buz, Alvin Spivey

Abstract:

The most common greenhouse gas emitted through human activities, carbon dioxide (CO2), is naturally consumed by plants during photosynthesis. This process is actively being monetized by companies wishing to offset their carbon dioxide emissions. For example, companies are now able to purchase protections for vegetated land due to be clear-cut or purchase barren land for reforestation. Therefore, by actively preventing the destruction or decay of plant matter, or by introducing more plant matter (reforestation), a company can theoretically offset some of its emissions. One of the biggest issues in the carbon credit market is validating and verifying carbon offsets. There is a need for a system that can accurately and frequently ensure that the areas sold for carbon credits have the vegetation mass (and therefore the carbon offset capability) they claim. Traditional techniques for measuring vegetation mass and determining health are costly and require many person-hours. Orbital Sidekick offers an alternative approach that accurately quantifies carbon mass and assesses vegetation health through satellite hyperspectral imagery, a technique that enables us to remotely identify material composition (including plant species) and condition (e.g., health and growth stage). How much carbon a plant is capable of storing is ultimately tied to many factors, including material density (primarily species-dependent), plant size, and health (trees that are actively decaying are not effectively storing carbon). All of these factors can be observed through satellite hyperspectral imagery. This abstract focuses on speciation. To build a species classification model, we matched pixels in our remote sensing imagery to plants on the ground for which the species is known. To accomplish this, we collaborated with the researchers at the Teakettle Experimental Forest. Our remote sensing data comes from our airborne “Kato” sensor, which flew over the study area and acquired hyperspectral imagery (400-2500 nm, 472 bands) at ~0.5 m/pixel resolution. Coverage of the entire Teakettle Experimental Forest required capturing dozens of individual hyperspectral images. To combine these images into a mosaic, we accounted for potential variations in atmospheric conditions throughout the data collection. To do this, we ran an open-source atmospheric correction routine called ISOFIT (Imaging Spectrometer Optimal FITting), which converted all of our remote sensing data from radiance to reflectance. A database of reflectance spectra for each of the tree species within the study area was acquired using the Teakettle stem map and the geo-referenced hyperspectral images. We found that a wide variety of machine learning classifiers were able to identify the species within our images with high (>95%) accuracy. For the most robust quantification of carbon mass and the best assessment of the health of a vegetated area, speciation is critical. Through the use of high-resolution hyperspectral data, ground-truth databases, and complex analytical techniques, we are able to determine the species present within a pixel to a high degree of accuracy. These species identifications feed directly into our carbon mass model.
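The abstract reports that many classifier families reach >95% accuracy; a random forest is one plausible choice, sketched below on stand-in spectra (in practice, the labeled training pairs would come from the stem map and the geo-referenced reflectance mosaic).

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    X = rng.random((1200, 472))       # stand-in 472-band reflectance spectra per pixel
    y = rng.integers(0, 5, 1200)      # stand-in species labels from a stem map

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    print(f"held-out accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")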

Keywords: hyperspectral, satellite, carbon, imagery, python, machine learning, speciation

Procedia PDF Downloads 128
30 Coil-Over Shock Absorbers Compared to Inherent Material Damping

Authors: Carina Emminger, Umut D. Cakmak, Evrim Burkut, Rene Preuer, Ingrid Graz, Zoltan Major

Abstract:

Damping accompanies us daily in everyday life and is used for protection (e.g., in shoes) and to make our lives more comfortable (damping of unwanted motion) and calm (noise reduction). In general, damping is the absorption of energy, which is either stored in the material (vibration isolation systems) or converted into heat (vibration absorbers). In the latter case, the damping mechanism can be split into active, passive, and semi-active (a combination of active and passive). Active damping is required to enable almost perfect damping over the whole application range and is used, for instance, in sports cars. In contrast, passive damping is a response of the material to external loading. Consequently, the material composition has a huge influence on the damping behavior. For elastomers, the material behavior is inherently viscoelastic and temperature- and frequency-dependent; however, passive damping is not adjustable during application. It is therefore important to understand the fundamental viscoelastic behavior and the dissipation capability under external loading. The objective of this work is to assess the limitations and applicability of viscoelastic material damping for applications in which coil-over shock absorbers are currently utilized. Coil-over shock absorbers are usually made of various mechanical parts and incorporate fluids within the damper. These shock absorbers are well known and studied in industry, and when needed, they can be easily adjusted during their product lifetime. In contrast, dampers made of, ideally, a single material are more resource-efficient, easier to service, and easier to manufacture; however, they lack adaptability and adjustability in service. Therefore, a case study with a remote-controlled sports car was conducted. The original shock absorbers were redesigned, and the spring-dashpot system was replaced by an elastomer and a thermoplastic elastomer, respectively. Five different elastomer formulations were used, including a pure and an iron-particle-filled thermoplastic poly(urethane) (TPU) and blends of two different poly(dimethyl siloxane) (PDMS) materials. In addition, the TPUs were investigated as full and hollow dampers to examine the difference between solid and structured material. To obtain comparable results, each material formulation was comprehensively characterized by monotonic uniaxial compression tests, dynamic thermomechanical analysis (DTMA), and rebound resilience. Moreover, the new material-based shock absorbers were compared with spring-dashpot shock absorbers. The shock absorbers were analyzed under monotonic and cyclic loading. In addition, an impact loading was applied to the remote-controlled car to measure the damping properties in operation. A servo-hydraulic high-speed linear actuator was utilized to apply the loads. The acceleration of the car and the displacement of specific measurement points were recorded during testing by a sensor and a high-speed camera, respectively. The results show that elastomers are suitable for damping applications, but they are temperature- and frequency-dependent, which limits the applicability of viscous material dampers. Feasible fields of application may be in micromobility, such as bicycles, e-scooters, and e-skateboards. Furthermore, viscous material damping could be used to increase the inherent damping of a whole structure, e.g., in bicycle frames.
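Rebound resilience and viscoelastic loss are linked by a common first-order approximation for a single impact, R ≈ exp(-π·tanδ); the sketch below inverts it for hypothetical resilience values and is an assumption for illustration, not the authors' analysis.

    import numpy as np

    def loss_factor_from_resilience(rebound):
        """First-order estimate of tan(delta) from rebound resilience R = exp(-pi*tan(delta))."""
        return -np.log(rebound) / np.pi

    for rebound in (0.65, 0.45, 0.25):   # hypothetical rebound-resilience values
        tan_delta = loss_factor_from_resilience(rebound)
        print(f"R = {rebound:.2f} -> tan(delta) ~ {tan_delta:.3f}")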

Keywords: damper structures, material damping, PDMS, TPU

Procedia PDF Downloads 114
29 Sentinel-2 Based Burn Area Severity Assessment Tool in Google Earth Engine

Authors: D. Madhushanka, Y. Liu, H. C. Fernando

Abstract:

Fires are one of the foremost factors of land surface disturbance in diverse ecosystems, causing soil erosion, land-cover changes, and atmospheric effects that affect people's lives and properties. Generally, the severity of a fire is calculated from the Normalized Burn Ratio (NBR) index. This is performed manually by comparing two images obtained before and after the fire: the dNBR is calculated as the bitemporal difference of the preprocessed satellite images, and the area is then classified as either unburnt (dNBR<0.1) or burnt (dNBR>=0.1). Furthermore, Wildfire Severity Assessment (WSA) classifies burnt and unburnt areas using classification levels proposed by the USGS, comprising seven classes. This procedure generates a burn severity report for an area chosen manually by the user. This study is carried out with the objective of producing an automated tool for the above-mentioned process, namely the World Wildfire Severity Assessment Tool (WWSAT). It is implemented in Google Earth Engine (GEE), a free cloud-computing platform for satellite data processing with several data catalogs at different resolutions (notably Landsat, Sentinel-2, and MODIS) and planetary-scale analysis capabilities. Sentinel-2 MSI is chosen for regular burnt-area severity mapping using a medium-spatial-resolution sensor (15 m). The tool uses machine learning classification techniques to identify burnt areas using the NBR and to classify their severity over the user-selected extent and period automatically. Cloud coverage is one of the biggest concerns in fire severity mapping. In the GEE-based WWSAT, we present a fully automatic workflow to aggregate cloud-free Sentinel-2 images for both pre-fire and post-fire image compositing. The parallel-processing capabilities and preloaded geospatial datasets of GEE facilitated the production of this tool, which includes a Graphical User Interface (GUI) to make it user-friendly. The advantage of this tool is the ability to obtain burn area severity over large extents and extended temporal periods. Two case studies were carried out to demonstrate the performance of the tool. The Blue Mountains National Park forest, affected by the 2019-2020 Australian fire season, is used to describe the workflow of WWSAT. At this site, more than 7809 km2 of burnt area was detected using Sentinel-2 data, with an error below 6.5% when compared with the area detected in the field. Furthermore, 86.77% of the detected area was recognized as fully burnt out, comprising high severity (17.29%), moderate-high severity (19.63%), moderate-low severity (22.35%), and low severity (27.51%). The Arapaho and Roosevelt National Forests, Colorado, USA, affected by the Cameron Peak fire in 2020, were chosen for the second case study. It was found that around 983 km2 had burned, comprising high severity (2.73%), moderate-high severity (1.57%), moderate-low severity (1.18%), and low severity (5.45%). These spots can also be detected through visual inspection, made possible by the cloud-free images generated by WWSAT. This tool is cost-effective for calculating burnt area since satellite images are free and the cost of field surveys is avoided.
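The core NBR/dNBR computation that WWSAT automates can be sketched with the GEE Python API; the date windows, cloud filter, and area of interest below are hypothetical, and only the 0.1 burnt/unburnt threshold comes from the abstract.

    import ee
    ee.Initialize()

    def cloud_filtered_median(start, end, region):
        """Median Sentinel-2 surface-reflectance composite with a basic cloud filter."""
        return (ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
                .filterBounds(region)
                .filterDate(start, end)
                .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
                .median())

    region = ee.Geometry.Rectangle([150.0, -34.0, 150.9, -33.3])   # hypothetical AOI
    pre = cloud_filtered_median("2019-09-01", "2019-10-31", region)
    post = cloud_filtered_median("2020-02-01", "2020-03-31", region)

    # NBR = (NIR - SWIR2) / (NIR + SWIR2); Sentinel-2 bands B8 and B12
    nbr_pre = pre.normalizedDifference(["B8", "B12"])
    nbr_post = post.normalizedDifference(["B8", "B12"])
    dnbr = nbr_pre.subtract(nbr_post)
    burnt = dnbr.gte(0.1)   # burnt/unburnt threshold quoted in the abstract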

Keywords: burnt area, burnt severity, fires, google earth engine (GEE), sentinel-2

Procedia PDF Downloads 235