Search results for: sensor calibration
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1773

153 Comparative Analysis of Reinforcement Learning Algorithms for Autonomous Driving

Authors: Migena Mana, Ahmed Khalid Syed, Abdul Malik, Nikhil Cherian

Abstract:

In recent years, advancements in deep learning have enabled researchers to tackle the problem of self-driving cars. Car companies use huge datasets to train their deep learning models to make autonomous cars a reality. However, this approach has a drawback: the state space of possible actions for a car is so large that no dataset can cover every possible road scenario. To overcome this problem, the concept of reinforcement learning (RL) is investigated in this research. Since the problem of autonomous driving can be modeled in a simulation, it lends itself naturally to the domain of reinforcement learning. The advantage of this approach is that different and complex road scenarios can be modeled in a simulation without deploying in the real world. The autonomous agent learns to drive by finding the optimal policy, and the learned model can then be deployed in a real-world setting. In this project, we focus on three RL algorithms: Q-learning, Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO). To model the environment, we have used TORCS (The Open Racing Car Simulator), which provides a strong foundation for testing our models. The inputs to the algorithms are the sensor data provided by the simulator, such as velocity and distance from the side pavement. The outcome of this research project is a comparative analysis of these algorithms. Based on the comparison, the PPO algorithm gives the best results: the reward is greater, and the acceleration, steering angle, and braking are more stable than with the other algorithms, which means that the agent learns to drive in a better and more efficient way. Additionally, we have compiled a dataset from the training of the agent with the DDPG and PPO algorithms. It contains all the steps of the agent during one full training run in the form (all input values, acceleration, steering angle, brake, loss, reward).
This study can serve as a base for further complex road scenarios. Furthermore, it can be extended to the field of computer vision, using images to find the best policy.
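Although the study's DDPG and PPO agents rely on deep function approximators and the TORCS sensor suite, the core RL loop can be illustrated with the simplest of the three algorithms. Below is a minimal tabular Q-learning sketch on a hypothetical one-dimensional lane-keeping task; the states, actions, and reward are illustrative assumptions, not the TORCS setup:

```python
import random

# Toy lane-keeping MDP: states are lateral offsets -2..2 from the lane
# center, actions steer left / keep straight / steer right.
STATES = [-2, -1, 0, 1, 2]
ACTIONS = [-1, 0, 1]

def step(s, a):
    s2 = max(-2, min(2, s + a))
    reward = -abs(s2)          # reward peaks when centered in the lane
    return s2, reward

def q_learning(episodes=500, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        for _ in range(20):
            # epsilon-greedy exploration, then standard Q-learning update
            a = rng.choice(ACTIONS) if rng.random() < eps else max(ACTIONS, key=lambda x: Q[(s, x)])
            s2, r = step(s, a)
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, x)] for x in ACTIONS) - Q[(s, a)])
            s = s2
    return Q

Q = q_learning()
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
```

After training, the greedy policy steers back toward the lane center from either side, which is the kind of stable steering behavior the abstract reports for the better-performing algorithms.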

Keywords: autonomous driving, DDPG (deep deterministic policy gradient), PPO (proximal policy optimization), reinforcement learning

Procedia PDF Downloads 120
152 Alternative Epinephrine Injector to Combat Allergy Induced Anaphylaxis

Authors: Jeremy Bost, Matthew Brett, Jacob Flynn, Weihui Li

Abstract:

One response during anaphylaxis is reduced blood pressure caused by blood vessels relaxing and dilating. Epinephrine constricts the blood vessels, which raises blood pressure to counteract the symptoms. During an allergic reaction, an epinephrine injector is used to administer a shot of epinephrine intramuscularly. Epinephrine injectors have become an integral part of day-to-day life for people with allergies. Current epinephrine injectors (EpiPen) are completely mechanical and have no sensors to monitor the vital signs of patients or suggest the optimal time for the shot. The EpiPens are also large and inconvenient to carry daily. The current price of an EpiPen is roughly $600 for a pack of two. This makes carrying an EpiPen very expensive, especially since the injectors must be replaced when the epinephrine expires. This new design takes the form of a bracelet with the ability to inject epinephrine. The bracelet will be equipped with vital signs monitors that can help the patient sense the allergic reaction. The vital signs of interest are blood pressure, heart rate, and electrodermal activity (EDA). The heart rate of the patient will be tracked by a photoplethysmograph (PPG) incorporated into the sensors; the heart rate is expected to increase during anaphylaxis. Blood pressure will be monitored through a radar sensor, which measures the phase changes in electromagnetic waves as they reflect off the blood vessel. EDA is under autonomic control: allergen-induced anaphylaxis is caused by a release of chemical mediators from mast cells and basophils, which changes the autonomic activity of the patient. By measuring EDA, the device can alert the wearer to how their autonomic nervous system is reacting. After the vital signs are collected, they will be sent to a smartphone application for analysis, which can then alert an emergency contact if the epinephrine injector on the bracelet is activated.
Overall, this design creates a safer system by helping the user keep track of their epinephrine injector while making it easier to track their vital signs. It is also more affordable and more convenient to maintain: only the needle and drug are replaced, rather than the entire device.
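The decision logic the smartphone application might apply to the three monitored vital signs can be sketched as follows. The thresholds, units, and the two-of-three rule are illustrative assumptions for the sketch, not clinically validated values:

```python
def anaphylaxis_alert(heart_rate, systolic_bp, eda_delta,
                      hr_limit=120, bp_limit=90, eda_limit=2.0):
    """Return True if the vital-sign pattern described in the abstract
    (rising heart rate, falling blood pressure, shifting EDA) suggests
    an allergic reaction. All thresholds are illustrative assumptions."""
    flags = [heart_rate > hr_limit,    # tachycardia during anaphylaxis
             systolic_bp < bp_limit,   # hypotension from vasodilation
             eda_delta > eda_limit]    # autonomic shift seen in EDA
    return sum(flags) >= 2             # require two concurrent signs
```

Requiring two concurrent signs is one plausible way to reduce false alarms from a single noisy sensor; the real application would need a validated decision rule.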

Keywords: allergy, anaphylaxis, epinephrine, injector, vital signs monitor

Procedia PDF Downloads 231
151 Cost-Effective Mechatronic Gaming Device for Post-Stroke Hand Rehabilitation

Authors: A. Raj Kumar, S. Bilaloglu

Abstract:

Stroke is a leading cause of adult disability worldwide. We depend on our hands for our activities of daily living (ADL). Although many patients regain the ability to walk, they continue to experience long-term hand motor impairments. As the number of individuals with young stroke increases, there is a critical need for effective approaches to rehabilitation of hand function post-stroke. Motor relearning for dexterity requires task-specific kinesthetic, tactile, and visual feedback. However, when a stroke results in both sensory and motor impairment, it becomes difficult to ascertain when and what type of sensory substitutions can facilitate motor relearning. Ideally, real-time task-specific data on the ability to learn, together with data-driven feedback to assist such learning, would greatly aid rehabilitation for dexterity. We have found that kinesthetic and tactile information from the unaffected hand can help patients relearn the use of optimal fingertip forces during a grasp-and-lift task. Measurement of fingertip grip force (GF), load force (LF), their corresponding rates (GFR and LFR), and other metrics can be used to gauge the impairment level and progress during learning. Currently, ATI mini force/torque sensors are used in research settings to measure and compute the LF, GF, and their rates while grasping objects of different weights and textures. The cost of the ATI sensor is prohibitive for deployment in clinical or at-home rehabilitation. A cost-effective mechatronic device is developed to quantify GF, LF, and their rates for stroke rehabilitation purposes using off-the-shelf components such as load cells, flexi-force sensors, and an Arduino UNO microcontroller. A salient feature of the device is its integration with an interactive gaming environment to render a highly engaging user experience.
This paper elaborates the integration of kinesthetic and tactile sensing through computation of LF, GF, and their corresponding rates in real time, information processing, and interactive interfacing through augmented reality for visual feedback.
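On a low-cost microcontroller, computing GFR and LFR from sampled forces reduces to differencing successive samples. A minimal sketch, with sample data and sampling interval invented for illustration:

```python
def force_rates(samples, dt):
    """Given (GF, LF) force samples in newtons taken at interval dt seconds,
    return the grip force rate (GFR) and load force rate (LFR) by first-order
    finite differences - a low-cost stand-in for the real-time rate
    computation described in the abstract."""
    gfr = [(samples[i + 1][0] - samples[i][0]) / dt for i in range(len(samples) - 1)]
    lfr = [(samples[i + 1][1] - samples[i][1]) / dt for i in range(len(samples) - 1)]
    return gfr, lfr

# e.g. three samples taken 0.5 s apart during a grasp-and-lift
gfr, lfr = force_rates([(0.0, 0.0), (1.0, 2.0), (3.0, 3.0)], 0.5)
```

In practice, a short moving-average filter before differencing would tame load-cell noise; the bare difference is shown here for clarity.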

Keywords: feedback, gaming, kinesthetic, rehabilitation, tactile

Procedia PDF Downloads 222
150 Double Functionalization of Magnetic Colloids with Electroactive Molecules and Antibody for Platelet Detection and Separation

Authors: Feixiong Chen, Naoufel Haddour, Marie Frenea-Robin, Yves MéRieux, Yann Chevolot, Virginie Monnier

Abstract:

Neonatal thrombopenia occurs when the mother generates antibodies against her baby’s platelet antigens. It is particularly critical for newborns because it can cause coagulation troubles leading to intracranial hemorrhage. In this case, diagnosis must be made quickly so that platelet transfusion can follow immediately after birth. Before transfusion, platelet antigens must be tested carefully to avoid rejection. The majority of thrombopenia cases (95%) are caused by antibodies directed against Human Platelet Antigen 1a (HPA-1a) or 5b (HPA-5b). The common method for platelet antigen detection is the polymerase chain reaction, which allows identification of the gene sequence. However, it is expensive, time-consuming, and requires a significant blood volume, which is not suitable for newborns. We propose to develop a point-of-care device based on magnetic colloids doubly functionalized with 1) antibodies specific to platelet antigens and 2) highly sensitive electroactive molecules detectable by an electrochemical microsensor. These magnetic colloids will be used first to isolate platelets from other blood components, then to specifically capture platelets bearing HPA-1a and HPA-5b antigens, and finally to attract them close to the sensor working electrode for an improved electrochemical signal. The expected advantages are an assay time under 20 min starting from a blood volume smaller than 100 µL. Our functionalization procedure, based on amine dendrimers and NHS-ester modification of the initial carboxyl colloids, will be presented. Functionalization efficiency was evaluated by colorimetric titration of surface chemical groups, zeta potential measurements, infrared spectroscopy, fluorescence scanning, and cyclic voltammetry. Our results showed that electroactive molecules and antibodies can be successfully immobilized onto magnetic colloids. Application of a magnetic field to the working electrode increased the detected electrochemical signal.
Magnetic colloids were able to capture specific purified antigens extracted from platelets.

Keywords: magnetic nanoparticles, electroactive molecules, antibody, platelet

Procedia PDF Downloads 243
149 A Xenon Mass Gauging through Heat Transfer Modeling for Electric Propulsion Thrusters

Authors: A. Soria-Salinas, M.-P. Zorzano, J. Martín-Torres, J. Sánchez-García-Casarrubios, J.-L. Pérez-Díaz, A. Vakkada-Ramachandran

Abstract:

The current state-of-the-art methods of mass gauging of Electric Propulsion (EP) propellants in microgravity conditions rely on external measurements taken at the surface of the tank. The tanks are operated under a constant thermal duty cycle to store the propellant within a pre-defined temperature and pressure range. We demonstrate using computational fluid dynamics (CFD) simulations that the heat transfer within the pressurized propellant generates temperature and density anisotropies. This challenges the standard mass gauging methods that rely on time-changing skin temperatures and pressures. We observe that the domes of the tanks are prone to overheating, and that a long time after the heaters of the thermal cycle are switched off, the system reaches a quasi-equilibrium state with a more uniform density. We propose a new gauging method, which we call the improved PVT method, based on universal physics and thermodynamics principles, existing TRL-9 technology, and telemetry data. This method only uses as inputs the temperature and pressure readings of sensors externally attached to the tank; these sensors can operate during the nominal thermal duty cycle. The improved PVT method shows little sensitivity to the pressure sensor drifts, which are critical towards the end of life of the missions, as well as little sensitivity to systematic temperature errors. The retrieval method has been validated experimentally with CO₂ in gas and fluid states in a chamber that operates up to 82 bar within a nominal thermal cycle of 38 °C to 42 °C. The mass gauging error is shown to be lower than 1% of the mass at the beginning of life, assuming an initial tank load at 100 bar. In particular, for a pressure of about 70 bar, just below the critical pressure of CO₂, the error of the mass gauging in the gas phase goes down to 0.1%, and for 77 bar, just above the critical point, the error of the mass gauging of the liquid phase is 0.6% of the initial tank load.
This gauging method improves by a factor of 8 the accuracy of standard PVT retrievals using look-up tables with tabulated data from the National Institute of Standards and Technology.
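The retrieval underlying any PVT-type method is the real-gas equation of state solved for mass. A sketch for xenon, where the tank volume and compressibility factor below are illustrative assumptions; in practice Z comes from tabulated real-gas data (e.g. NIST) at the measured pressure and temperature:

```python
R = 8.314462618    # J/(mol*K), universal gas constant
M_XE = 0.131293    # kg/mol, molar mass of xenon

def pvt_mass(pressure_pa, temperature_k, volume_m3, z):
    """PVT propellant mass estimate from the real-gas law p*V = Z*n*R*T,
    solved for mass: m = p*V*M / (Z*R*T). The compressibility factor z
    passed here is an assumption for illustration; a real retrieval
    interpolates it from tabulated data at the measured p and T."""
    return pressure_pa * volume_m3 * M_XE / (z * R * temperature_k)

# e.g. a hypothetical 100 L tank at 100 bar and 40 degrees C, ideal-gas z = 1
m = pvt_mass(100e5, 313.15, 0.1, 1.0)
```

The improved method described in the abstract keeps this same relation but chooses when and where to read p and T so that the anisotropies of the thermal duty cycle bias the estimate as little as possible.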

Keywords: electric propulsion, mass gauging, propellant, PVT, xenon

Procedia PDF Downloads 323
148 Analysis on the Feasibility of Landsat 8 Imagery for Water Quality Parameters Assessment in an Oligotrophic Mediterranean Lake

Authors: V. Markogianni, D. Kalivas, G. Petropoulos, E. Dimitriou

Abstract:

Lake water quality monitoring in combination with earth observation products constitutes a major component of many water quality monitoring programs. Landsat 8 images of Trichonis Lake (Greece) acquired on 30/10/2013 and 30/08/2014 were used to explore the ability of Landsat 8 to estimate water quality parameters, particularly CDOM absorption at specific wavelengths, chlorophyll-a, and nutrient concentrations, in this oligotrophic freshwater body, characterized by negligible quantitative, temporal, and spatial variability. Water samples were collected at 22 different stations in late August 2014, and the satellite image of the same date was used to statistically correlate the in-situ measurements with various combinations of Landsat 8 bands in order to develop algorithms that best describe those relationships and accurately calculate the aforementioned water quality components. The optimal models were applied to the image of late October 2013, and the results were validated through comparison with the respective available in-situ data of 2013. Initial results indicated the limited ability of the Landsat 8 sensor to accurately estimate water quality components in an oligotrophic waterbody. In the validation, ammonium concentration proved to be the most accurately estimated component (R = 0.7), followed by chl-a concentration (R = 0.5) and CDOM absorption at 420 nm (R = 0.3). In-situ nitrate, nitrite, phosphate, and total nitrogen concentrations of 2014 were below the detection limit of the instrument used, hence no statistical elaboration was conducted. On the other hand, multiple linear regression between reflectance measures and total phosphorus concentrations resulted in low and statistically insignificant correlations.
Our results are concurrent with other studies in the international literature, indicating that estimations for eutrophic and mesotrophic lakes are more accurate than for oligotrophic ones, owing to the lack of suspended particles detectable by satellite sensors. Nevertheless, although the predictive models developed and applied to the oligotrophic Trichonis Lake are less accurate, they may still be useful indicators of its water quality deterioration.
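The band-to-parameter algorithms described above are, at their core, regressions of in-situ concentrations on band reflectances. A minimal pure-Python sketch of such a fit and of the correlation coefficient R used in the validation; the sample data in the test are invented for illustration:

```python
def fit_linear(x, y):
    """Ordinary least-squares fit y = a + b*x, e.g. relating a Landsat 8
    band (or band ratio) reflectance x to an in-situ concentration y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b          # intercept a, slope b

def pearson_r(x, y):
    """Pearson correlation coefficient, the R reported when validating
    model predictions against the in-situ measurements."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = (sum((xi - mx) ** 2 for xi in x)
           * sum((yi - my) ** 2 for yi in y)) ** 0.5
    return num / den
```

A multi-band model, as used for total phosphorus in the study, is the multivariate extension of the same least-squares idea.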

Keywords: landsat 8, oligotrophic lake, remote sensing, water quality

Procedia PDF Downloads 373
147 Uncertainty Evaluation of Erosion Volume Measurement Using Coordinate Measuring Machine

Authors: Mohamed Dhouibi, Bogdan Stirbu, Chabotier André, Marc Pirlot

Abstract:

Internal barrel wear is a major factor affecting the performance of small caliber guns throughout their life phases. Wear analysis is, therefore, a very important process for understanding how wear occurs, where it takes place, and how it spreads, with the aim of improving the accuracy and effectiveness of small caliber weapons. This paper discusses the measurement and analysis of combustion chamber wear for a small-caliber gun using a Coordinate Measuring Machine (CMM). Two different NATO small caliber guns are considered: 5.56x45mm and 7.62x51mm. A Zeiss Micura CMM equipped with the VAST XTR gold high-end sensor is used to measure the inner profile of the two guns every 300-shot cycle. The CMM parameters, such as (i) the measuring force, (ii) the measured points, (iii) the time of masking, and (iv) the scanning velocity, are investigated. To ensure minimum measurement error, a statistical analysis is adopted to select a reliable combination of CMM parameters. Next, two measurement strategies are developed to capture the shape and the volume of each gun chamber, and a task-specific measurement uncertainty (TSMU) analysis is carried out for each measurement plan. Different approaches to TSMU evaluation have been proposed in the literature; this paper discusses two. The first is the substitution method described in ISO 15530 part 3, based on the use of calibrated workpieces with a shape and size similar to the measured part. The second is the Monte Carlo simulation method presented in ISO 15530 part 4, in which uncertainty evaluation software (UES), also known as the Virtual Coordinate Measuring Machine (VCMM), performs a point-by-point simulation of the measurements. To conclude, a comparison between both approaches is performed.
Finally, the results of the measurements are verified with calibrated gauges of several dimensions specially designed for the two barrels. On this basis, an experimental database is developed for further analysis aiming to quantify the relationship between the volume of wear and the muzzle velocity of small caliber guns.
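The ISO 15530 part 4 idea of simulating the measurement can be sketched in miniature: perturb the measured quantities with the machine's error model and read the spread of the derived result. The cylinder-shaped chamber and the i.i.d. Gaussian error model below are simplifying assumptions for the sketch, not the VCMM's actual error model:

```python
import random
import statistics

def monte_carlo_uncertainty(nominal_diameter, nominal_length, sigma,
                            runs=5000, seed=1):
    """Monte Carlo estimate of a task-specific measurement uncertainty:
    repeatedly perturb the measured dimensions of an idealized cylindrical
    chamber with Gaussian noise of standard deviation sigma (the assumed
    per-dimension CMM error) and report mean and spread of the volume."""
    rng = random.Random(seed)
    volumes = []
    for _ in range(runs):
        d = rng.gauss(nominal_diameter, sigma)   # simulated diameter reading
        l = rng.gauss(nominal_length, sigma)     # simulated length reading
        volumes.append(3.141592653589793 * (d / 2) ** 2 * l)
    return statistics.mean(volumes), statistics.stdev(volumes)

# e.g. a 10 mm x 40 mm chamber measured with 10 um per-dimension noise
mean_v, std_v = monte_carlo_uncertainty(10.0, 40.0, 0.01)
```

The standard deviation of the simulated volumes plays the role of the task-specific uncertainty; a real VCMM run simulates every probed point with the machine's full geometric and probing error model instead of two lumped dimensions.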

Keywords: coordinate measuring machine, measurement uncertainty, erosion and wear volume, small caliber guns

Procedia PDF Downloads 122
146 A Computational Framework for Load Mediated Patellar Ligaments Damage at the Tropocollagen Level

Authors: Fadi Al Khatib, Raouf Mbarki, Malek Adouni

Abstract:

In various sport and recreational activities, the patellofemoral joint undergoes large forces and moments while accommodating the significant movement of the knee joint. In doing so, this joint is commonly the source of anterior knee pain related to instability in normal patellar tracking and excessive pressure syndrome. One well-observed explanation of this instability is damage to the patellofemoral ligaments and patellar tendon. Improved knowledge of the damage mechanism mediating ligament and tendon injuries can be a great help not only in rehabilitation and prevention procedures but also in the design of better reconstruction systems for the management of knee joint disorders. This damage mechanism, specifically under excessive mechanical loading, has been linked to the micro level of the fibred structure, precisely to the tropocollagen molecules and their cross-link density. We argue that defining a clear framework from the bottom (micro level) to the top (macro level) of the soft tissue hierarchy may elucidate the essential underpinnings of the state of ligament damage. To do so, in this study a multiscale fibril-reinforced hyper-elastoplastic finite element model that accounts for the synergy between molecular and continuum syntheses was developed to determine the short-term stress/strain response of the patellofemoral ligaments and tendon. The plasticity of the proposed model is associated only with the uniaxial deformation of the collagen fibril. The yield strength of the fibril is a function of the cross-link density between tropocollagen molecules, defined here by a density function. This function was obtained through a coarse-graining procedure linking nanoscale collagen features to tissue-level material properties using molecular dynamics simulations. The hierarchies of the soft tissues were implemented using the rule of mixtures. Thereafter, the model was calibrated using a statistical calibration procedure.
The model was then implemented in a real structure of the patellofemoral ligaments and patellar tendon (OpenKnee) and simulated under realistic loading conditions. With the calibrated material parameters, the calculated axial stress agrees well with the experimental measurements, with a coefficient of determination (R²) equal to 0.91 and 0.92 for the patellofemoral ligaments and the patellar tendon, respectively. The best prediction of the yield strength and strain, compared with the reported experimental data, was obtained when the cross-link density between the tropocollagen molecules of the fibril equaled 5.5 ± 0.5 (patellofemoral ligaments) and 12 (patellar tendon). Damage initiation in the patellofemoral ligaments was located at the femoral insertions, while damage to the patellar tendon occurred in the middle of the structure. These predictions showed a meaningful correlation between the cross-link density of the tropocollagen molecules and the stiffness of the connective tissues of the extensor mechanism. Damage initiation and propagation were also documented with this model, in satisfactory agreement with earlier observations. To the best of our knowledge, this is the first attempt to model ligaments from the bottom up, with damage predicted as a function of the tropocollagen cross-link density. This approach appears more meaningful for a realistic simulation of a damage process or repair attempt than certain published studies.

Keywords: tropocollagen, multiscale model, fibrils, knee ligaments

Procedia PDF Downloads 104
145 Method for Improving ICESAT-2 ATL13 Altimetry Data Utility on Rivers

Authors: Yun Chen, Qihang Liu, Catherine Ticehurst, Chandrama Sarker, Fazlul Karim, Dave Penton, Ashmita Sengupta

Abstract:

The application of ICESAT-2 altimetry data in river hydrology critically depends on the accuracy of the mean water surface elevation (WSE) at a virtual station (VS), where satellite observations intersect with water. An ICESAT-2 track generates multiple VSs as it crosses different water bodies. The difficulties are particularly pronounced in large river basins, where many tributaries and meanders are often adjacent to each other. One challenge is to split photon segments along a beam so as to partition them accurately and extract only the true representative water height for individual river elements. As far as we can establish, there is no automated procedure for making this distinction; earlier studies have relied on human intervention or river masks. Both approaches are unsatisfactory where the number of intersections is large and river width/extent changes over time. We describe here an automated approach called 'auto-segmentation'. The accuracy of our method was assessed by comparison with river water level observations at 10 different stations on 37 different dates along the Lower Murray River, Australia. The congruence is very high and without detectable bias. In addition, we compared different outlier removal methods for the mean WSE calculation at VSs after the auto-segmentation process. All four outlier removal methods perform almost equally well, with the same R² value (0.998) and only subtle variations in RMSE (0.181–0.189 m) and MAE (0.130–0.142 m). Overall, the auto-segmentation method developed here is an effective and efficient approach to deriving accurate mean WSE at river VSs, and it facilitates the application of ICESAT-2 ATL13 altimetry to rivers much better than previously reported approaches.
Therefore, the findings of our study will make a significant contribution towards the retrieval of hydraulic parameters, such as the water surface slope along the river, water depth at cross sections, and river channel bathymetry, for calculating flow velocity and discharge from remotely sensed imagery at large spatial scales.
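The abstract does not spell out the four outlier removal methods it compares, but a typical robust filter applied before averaging photon heights at a VS looks like the following; the median-absolute-deviation rule and the factor k are illustrative choices, not necessarily those tested in the study:

```python
import statistics

def mean_wse(heights, k=3.0):
    """Mean water surface elevation at a virtual station after a simple
    robust outlier filter: drop photon heights more than k median
    absolute deviations (MAD) from the median, then average the rest."""
    med = statistics.median(heights)
    mad = statistics.median([abs(h - med) for h in heights]) or 1e-9
    kept = [h for h in heights if abs(h - med) <= k * mad]
    return sum(kept) / len(kept)
```

The point of any such filter is the same as in the study: stray land or canopy photons must not bias the VS mean, since downstream slope and discharge retrievals inherit that error.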

Keywords: lidar sensor, virtual station, cross section, mean water surface elevation, beam/track segmentation

Procedia PDF Downloads 35
144 Determination of Bromides, Chlorides and Fluorides in Case of Their Joint Presence in Ion-Conducting Electrolyte

Authors: V. Golubeva, O. Vakhnina, I. Konopkina, N. Gerasimova, N. Taturina, K. Zhogova

Abstract:

To improve chemical current sources, ion-conducting electrolytes based on Li halides (LiCl-KCl, LiCl-LiBr-KBr, LiCl-LiBr-LiF) are being developed. Chemical analytical methods for the determination of halides are necessary to control the electrolyte technology. The methods of classical analytical chemistry are of interest, as they are characterized by high accuracy, but using them is a difficult task because the halides have similar chemical properties. The objective of this work is to develop a titrimetric method for determining the content of bromides, chlorides, and fluorides in their joint presence in an ion-conducting electrolyte. In the developed method, to determine fluorides, an electrolyte sample is dissolved in diluted HCl acid and the fluorides are titrated with a La(NO₃)₃ solution with potentiometric indication of the equivalence point, using a fluoride ion-selective electrode as the sensor. Chlorides and bromides do not form a hardly soluble compound with La and do not interfere with the analysis. To determine the bromides, the sample is dissolved in diluted H₂SO₄ acid. The bromides are oxidized with a solution of KIO₃ to Br₂, which is removed from the reaction zone by boiling, and the excess of KIO₃ is titrated by the iodometric method. The content of bromides is calculated from the amount of KIO₃ spent on Br₂ oxidation. Chlorides and fluorides are not oxidized by KIO₃ and do not interfere with the analysis. To determine the chlorides, the sample is dissolved in diluted HNO₃ acid and the total content of chlorides and bromides is determined by visual mercurometric titration with a diphenylcarbazone indicator. Fluorides do not form a hardly soluble compound with mercury and do not interfere with the determination. The content of chlorides is calculated taking into account the content of bromides in the electrolyte sample.
The developed analytical method was validated by analyzing an internal reference material with known chlorides, bromides, and fluorides content. The method allows the determination of chlorides, bromides, and fluorides in their joint presence in an ion-conducting electrolyte within the following ranges and with relative total error (δ): for bromides from 60.0 to 65.0%, δ = ±2.1%; for chlorides from 8.0 to 15.0%, δ = ±3.6%; for fluorides from 5.0 to 8.0%, δ = ±1.5%. The method can be applied to electrolytes and mixtures that contain chlorides, bromides, and fluorides of alkali metals (K, Na, Li) and their mixtures.

Keywords: bromides, chlorides, fluorides, ion-conducting electrolyte

Procedia PDF Downloads 105
143 Robots for City Life: Design Guidelines and Strategy Recommendations for Introducing Robots in Cities

Authors: Akshay Rege, Lara Gomaa, Maneesh Kumar Verma, Sem Carree

Abstract:

The aim of this paper is to articulate design strategies and recommendations for introducing robots into people's city life, based on experiments conducted with robots and semi-autonomous systems in three cities in the Netherlands. This research was carried out by the Spot robotics team of Impact Lab, housed within YES!Delft, a start-up accelerator located in Delft, The Netherlands. The premise of this research is to inform the development of the 'region of the future' by the Rotterdam-The Hague Metropolitan Region (MRDH). The paper starts by reporting the desktop research carried out to find and develop multiple use cases in which robots support humans in various activities. It then reports the user research carried out by crowdsourcing responses in public spaces of the Rotterdam-Den Haag region and on the internet. Building on this initial research, practical experiments were carried out with robots and semi-autonomous systems to test and validate our findings. These experiments were conducted in three cities in the Netherlands: Rotterdam, The Hague, and Delft. A custom sensor box, a drone, and Boston Dynamics' Spot robot were used. Out of thirty use cases, five were tested in experiments: skyscraper emergency evacuation, human transportation and security, bike lane delivery, mobility tracking, and robot drama. The learnings from these experiments provided insights into human-robot interaction and symbiosis in cities, which can be used to introduce robots that support human activities, ultimately enabling the transition from a human-only city life towards a blended one in which robots play a role. Based on these understandings, we formulated design guidelines and strategy recommendations for incorporating robots into the Rotterdam-Den Haag region of the future.
Lastly, we discuss how our insights from the Rotterdam-Den Haag region can inspire and inform the incorporation of robots in other cities of the world.

Keywords: city life, design guidelines, human-robot interaction, robot use cases, robotic experiments, strategy recommendations, user research

Procedia PDF Downloads 67
142 A Good Start for Digital Transformation of the Companies: A Literature and Experience-Based Predefined Roadmap

Authors: Batuhan Kocaoglu

Abstract:

Nowadays, digital transformation is a hot topic in both service and production businesses. Companies that want to stay alive in the coming years must change how they do business. Industry leaders have started to extend ERP (Enterprise Resource Planning)-like backbone technologies with digital advances such as analytics, mobility, sensor-embedded smart devices, AI (Artificial Intelligence), and more. Selecting the appropriate technology for a given business problem is also a hot topic. Moreover, operating in the modern environment and fulfilling rapidly changing customer expectations requires a digital transformation that changes the way the business runs. Although the term digital transformation is trendy, the literature is limited and covers the philosophy rather than a solid implementation plan. Current studies urge firms to start their digital transformation, but few tell us how to do it, and the huge investments involved, together with blurred definitions and concepts, scare companies. The aim of this paper is to solidify the steps of digital transformation and offer a roadmap for companies and academicians. The proposed roadmap is developed from insights gained through a literature review, semi-structured interviews, and expert views to explore and identify the crucial steps. We introduce our roadmap in the form of 8 main steps: Awareness; Planning; Operations; Implementation; Go-live; Optimization; Autonomation; Business Transformation; with a total of 11 sub-steps illustrated with examples. This study also emphasizes four dimensions of digital transformation: readiness assessment; building organizational infrastructure; building technical infrastructure; and maturity assessment. Finally, the roadmap maps its steps onto three main terms used in the digital transformation literature: Digitization, Digitalization, and Digital Transformation.
The resulting model shows that 'business process' and 'organizational issues' should be resolved before technology decisions and 'digitization'. Companies can start their journey with these solid steps, using the proposed roadmap to increase the success of their project implementations. The roadmap is also adaptable to related Industry 4.0 and enterprise application projects, and it will be useful for companies in persuading their top management to invest. Our results can serve as a baseline for further research on readiness assessment and maturity assessment.

Keywords: digital transformation, digital business, ERP, roadmap

Procedia PDF Downloads 136
141 Flexible Integration of Airbag Weakening Lines in Interior Components: Airbag Weakening with Jenoptik Laser Technology

Authors: Markus Remm, Sebastian Dienert

Abstract:

Vehicle interiors are changing not only in terms of design and functionality but also due to new driving situations in which, for example, autonomous operating modes are possible. Flexible seating positions are changing the requirements for the behavior and location of passive safety systems in the interior of a vehicle. With fully autonomous driving, the driver can, for example, leave the position behind the steering wheel and take a seat facing backward. Since autonomous and non-autonomous vehicles will share the same road network for the foreseeable future, accidents cannot be avoided, which makes the use of passive safety systems indispensable. JENOPTIK-VOTAN® A technology enables the trend towards flexible predetermined airbag weakening lines. With the help of laser beams, the predetermined weakening lines are introduced from the back side of the components so that they are completely invisible. The machining process is sensor-controlled and guarantees that a small residual wall thickness remains, for the best quality and reliability of the airbag weakening lines. Thanks to the wide processing range of the laser, almost all materials can be processed. A CO₂ laser is used for many plastics, natural fiber materials, foams, foils, and material composites. A femtosecond laser is used for natural materials and textiles that are very heat-sensitive; this laser type delivers extremely short laser pulses with very high energy densities. Supported by a high-precision and fast movement of the laser beam by a laser scanner system, it enables so-called cold ablation, which removes material along the predetermined weakening lines layer by layer until the desired residual wall thickness remains. In this way, for example, genuine leather can be processed in a material-friendly and process-reliable manner without design implications on the component's A-side.
Passive safety in the vehicle is increased through the interaction of modern airbag technology and high-precision laser airbag weakening. The JENOPTIK-VOTAN® A product family has represented this for more than 25 years and points the way to the future with new and innovative technologies.

Keywords: design freedom, interior material processing, laser technology, passive safety

Procedia PDF Downloads 85
140 Comparison of On-Site Stormwater Detention Real Performance and Theoretical Simulations

Authors: Pedro P. Drumond, Priscilla M. Moura, Marcia M. L. P. Coelho

Abstract:

The purpose of an On-site Stormwater Detention (OSD) system is to detain the additional stormwater runoff caused by impervious areas, in order to keep the peak flow the same as under the pre-urbanization condition. In recent decades, these systems have been built in many cities around the world. However, their real efficiency remains unknown due to the lack of research, especially with regard to monitoring their real performance. Thus, this study aims to compare the water level monitoring data of an OSD built in Belo Horizonte, Brazil, with the results of the theoretical-method simulations usually adopted in OSD design. Two theoretical simulations were made, one using the Rational Method with the Modified Puls method and another using the Soil Conservation Service (SCS) method with the Modified Puls method. The monitoring data were obtained with a water level sensor installed inside the reservoir and connected to a data logger. The comparison of OSD performance was made for 48 rainfall events recorded from April 2015 to March 2017. The comparison of maximum water levels in the OSD showed that the results of the simulations with the Rational/Puls and SCS/Puls methods were, on average, 33% and 73% lower, respectively, than those monitored. The Rational/Puls results were significantly higher than the SCS/Puls results only in the more frequent events. In the events with average recurrence intervals of 5, 10 and 200 years, the maximum water heights were similar in both simulations. Also, the results showed that the duration of the rainfall events was close to the duration of the monitored hydrographs. The rising and recession times of the hydrographs calculated with the Rational Method represented the monitored hydrographs better than those from the SCS Method. The comparison indicates that the real discharge coefficient value could be higher than the 0.61 adopted in the Puls simulations. Further research evaluating real OSD performance should be developed.
In order to verify the peak-flow damping efficiency and the value of the discharge coefficient, it is necessary to monitor the inflow and outflow of an OSD, in addition to monitoring the water level inside it.
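The reservoir routing underlying both simulations can be illustrated by a simplified level-pool (Puls-type) routing sketch. This is a minimal illustration only: the prismatic reservoir area, orifice area, and time stepping below are hypothetical, not the monitored OSD's actual geometry, and a full Modified Puls implementation would use the storage-indication form rather than this explicit mass balance.

```python
import math

def puls_routing(inflow, dt, area=100.0, cd=0.61, a_orifice=0.05):
    """Route an inflow hydrograph through a hypothetical prismatic OSD
    reservoir with a single orifice outlet.
    inflow: inflow rates (m^3/s) at each time step; dt: step (s).
    Storage S = area * H; outflow O = cd * a * sqrt(2 * g * H)."""
    g = 9.81
    h = 0.0                      # water level in the reservoir (m)
    out, levels = [], []
    for i in range(len(inflow) - 1):
        o1 = cd * a_orifice * math.sqrt(2.0 * g * h)
        # explicit mass balance over one step: dS = (I_avg - O) * dt
        i_avg = 0.5 * (inflow[i] + inflow[i + 1])
        h = max(0.0, h + (i_avg - o1) * dt / area)
        out.append(cd * a_orifice * math.sqrt(2.0 * g * h))
        levels.append(h)
    return out, levels
```

Routing a triangular inflow hydrograph through this sketch shows the expected behavior: the outflow peak is lower than the inflow peak, which is exactly the damping the monitored water levels are used to verify.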

Keywords: best management practices, on-site stormwater detention, source control, urban drainage

Procedia PDF Downloads 164
139 Structural Monitoring of Externally Confined RC Columns with Inadequate Lap-Splices, Using Fibre-Bragg-Grating Sensors

Authors: Petros M. Chronopoulos, Evangelos Z. Astreinidis

Abstract:

A major issue of the structural assessment and rehabilitation of existing RC structures is the inadequate lap-splicing of the longitudinal reinforcement. Although prohibited by modern Design Codes, the practice of arranging lap-splices inside the critical regions of RC elements was commonly applied in the past. Today this practice is still the rule, at least for conventional new buildings. Therefore, a lot of relevant research is ongoing in many earthquake-prone countries. The rehabilitation of deficient lap-splices of RC elements by means of external confinement is widely accepted as the most efficient technique. If correctly applied, this versatile technique offers a limited increase of flexural capacity and a considerable increase of local ductility and of axial and shear capacities. Moreover, this intervention does not affect the stiffness of the elements and does not affect the dynamic characteristics of the structure. This technique has been extensively discussed and researched, contributing to a vast accumulation of technical and scientific knowledge that has been reported in relevant books, reports and papers, and included in recent Design Codes and Guides. These references mostly deal with modeling and redesign, covering both the enhanced (axial and) shear capacity (due to the additional external closed hoops or jackets) and the increased ductility (due to the confining action, preventing the unzipping of lap-splices and the buckling of continuous reinforcement). An analytical and experimental program devoted to RC members with lap-splices has been completed in the Laboratory of Reinforced Concrete, NTU Athens, Greece. This program aims at the proposal of a rational and safe theoretical model and the calibration of the relevant Design Codes' provisions. Tests on forty-two (42) full-scale specimens, covering mostly beams and columns (not walls), strengthened or not, with adequate or inadequate lap-splices, have already been performed and evaluated.
In this paper, the results of twelve (12) specimens under fully reversed cyclic actions are presented and discussed. In eight (8) specimens the lap-splices were inadequate (splicing length of 20 or 30 bar diameters), and they were retrofitted before testing by means of additional external confinement. The two (2) most commonly applied confining materials were used in this study, namely steel and FRPs. More specifically, jackets made of CFRP wraps or light cages made of mild steel were applied. The main parameters of these tests were (i) the degree of confinement (internal and external), and (ii) the length of lap-splices, equal to 20, 30 or 45 bar diameters. These tests were thoroughly instrumented and monitored by means of conventional (LVDTs, strain gages, etc.) and innovative (optic fibre-Bragg-grating) sensors. This allowed for a thorough investigation of the most influential design parameter, namely the hoop stress developed in the confining material. Based on these test results and on comparisons with the provisions of modern Design Codes, it could be argued that shorter (than the normative) lap-splices, commonly found in old structures, could still be effective and safe (at least for lengths above an absolute minimum), depending on the required ductility, if a properly arranged and adequately detailed external confinement is applied.
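The hoop strain monitored by the fibre-Bragg-grating sensors follows from the measured Bragg wavelength shift. A minimal sketch of that conversion, assuming a typical photo-elastic coefficient of about 0.22 and a 1550 nm grating (illustrative values, not the ones used in these tests):

```python
def fbg_strain(lambda_b_nm, d_lambda_nm, p_e=0.22):
    """Strain from a Bragg wavelength shift, using
    d_lambda / lambda_B = (1 - p_e) * strain,
    where p_e is the effective photo-elastic coefficient of the fiber."""
    return d_lambda_nm / (lambda_b_nm * (1.0 - p_e))

def hoop_stress_mpa(strain, e_modulus_mpa):
    """Hooke's law (uniaxial approximation) for the confining material."""
    return e_modulus_mpa * strain

# illustrative: a 1.2 nm shift on a 1550 nm grating bonded to a steel hoop
strain = fbg_strain(1550.0, 1.2)            # ~1.0e-3 (about 1000 microstrain)
stress = hoop_stress_mpa(strain, 200000.0)  # E_steel ~ 200 GPa
```

This is how a wavelength readout from the optic sensors maps to the hoop stress identified above as the most influential design parameter.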

Keywords: concrete, fibre-Bragg-grating sensors, lap-splices, retrofitting / rehabilitation

Procedia PDF Downloads 228
138 Nanobiosensor System for Aptamer Based Pathogen Detection in Environmental Waters

Authors: Nimet Yildirim Tirgil, Ahmed Busnaina, April Z. Gu

Abstract:

Environmental waters are monitored worldwide to protect people from infectious diseases primarily caused by enteric pathogens. Escherichia coli (E. coli) has long been a good indicator of potential enteric pathogens in waters. Thus, a rapid and simple detection method for E. coli is very important to predict pathogen contamination. In this study, to the best of our knowledge, we developed for the first time a rapid, direct and reusable SWCNT (single-walled carbon nanotube) based biosensor system for sensitive and selective E. coli detection in water samples. We used a newly developed flexible biosensor device fabricated by a high-rate nanoscale offset printing process using directed assembly and transfer of SWCNTs. By simple directed assembly and non-covalent functionalization, an aptamer-based SWCNT biosensor system was designed, where the aptamer is a biorecognition element that specifically distinguishes the E. coli O157:H7 strain from other pathogens, and was further evaluated for environmental applications with simple and cost-effective steps. The two gold electrode terminals and the SWCNT bridge between them allow continuous resistance-response monitoring for E. coli detection. The detection procedure is based on a competitive detection mode. A known concentration of aptamer and E. coli cells were mixed and, after a certain time, filtered. The remaining free aptamers were then injected into the system. Through hybridization of the free aptamers with the probe DNA immobilized on the SWCNT surface (complementary DNA for the E. coli aptamer), we can monitor the resistance difference, which is proportional to the amount of E. coli. Thus, we can detect E. coli without injecting it directly onto the sensing surface, protecting the electrode surface from the aggregation of target bacteria or other pollutants that may come from real wastewater samples. After optimization experiments, the linear detection range was determined to be from 2 cfu/ml to 10⁵ cfu/ml, with an R² value higher than 0.98.
The system was regenerated successfully with 5% SDS solution over 100 times without any significant deterioration of the sensor performance. The developed system had high specificity towards E. coli (less than 20% signal with other pathogens), and it could be applied to real water samples with 86 to 101% recovery and 3 to 18% CV values (n=3).
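A linear detection range with an R² value, as reported above, corresponds to a calibration of the resistance response against the logarithm of the cell concentration. A sketch of such a fit; the signal values below are hypothetical placeholders for illustration, not the paper's measured data:

```python
import math

def linear_fit(x, y):
    """Ordinary least-squares fit y = a*x + b, returning (a, b, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# hypothetical calibration points spanning the reported range (2 to 1e5 cfu/ml)
conc = [2, 10, 100, 1000, 10000, 100000]
signal = [0.05, 0.11, 0.20, 0.31, 0.39, 0.50]   # normalized resistance change
slope, intercept, r2 = linear_fit([math.log10(c) for c in conc], signal)
```

With a calibration of this shape, an unknown sample's resistance change maps back to a concentration via the inverse of the fitted line.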

Keywords: aptamer, E. coli, environmental detection, nanobiosensor, SWCTs

Procedia PDF Downloads 168
137 A Smartphone-Based Real-Time Activity Recognition and Fall Detection System

Authors: Manutchanok Jongprasithporn, Rawiphorn Srivilai, Paweena Pongsopha

Abstract:

Falls are the most serious accidents, leading to increased unintentional injuries and mortality. Falls are not only a cause of suffering and functional impairment for individuals, but also a cause of increasing medical costs and days away from work. Early detection of falls could help reduce fall-related injuries and the consequences of falls. Smartphones, with embedded accelerometers, have become common devices in everyday life due to decreasing technology costs. This paper explores a physical activity monitoring and fall detection application for smartphones, which serves as a non-invasive biomedical device to determine physical activities and fall events. The combination of the application and the sensors can act as a biomedical sensor to monitor physical activities and recognize a fall. We chose an Android-based smartphone in this study since the Android operating system is open-source and free of cost. Moreover, Android phone users constitute the majority of Thai smartphone users. We developed Thai 3 Axis (TH3AX) as a physical activity and fall detection application which includes commands, a manual, and results in the Thai language. The smartphone was attached to the right hip of 10 young, healthy adult subjects (5 males, 5 females; aged < 35 y) to collect accelerometer and gyroscope data while performing physical activities (e.g., walking, running, sitting, and lying down) and falling, in order to determine a threshold for each activity. Dependent variables include accelerometer data (acceleration, peak acceleration, average resultant acceleration, and time between peak accelerations). A repeated-measures ANOVA was performed to test whether there were any differences between the DVs' means. Statistical analyses were considered significant at p<0.05. After finding the thresholds, the results were used as training data for a predictive model of activity recognition. In the future, the accuracy of activity recognition will be evaluated to assess the overall performance of the classifier.
Moreover, to help improve quality of life, our system will be implemented with patients and elderly people who need intensive care in hospitals and nursing homes in Thailand.
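Threshold-based detection on the resultant acceleration, as described above, can be sketched as follows. The thresholds and search window here are hypothetical; the study determines its thresholds experimentally per activity. The sketch encodes the common pattern of a brief free-fall dip followed shortly by an impact spike:

```python
import math

def resultant(ax, ay, az):
    """Resultant (vector-magnitude) acceleration of a tri-axial sample, in g."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def detect_fall(samples, lower=0.4, upper=2.5, window=20):
    """Flag a fall when the resultant drops below `lower` (free-fall dip)
    and then exceeds `upper` (impact spike) within `window` samples.
    The thresholds (in g) are illustrative, not the study's values."""
    mags = [resultant(*s) for s in samples]
    for i, m in enumerate(mags):
        if m < lower and any(m2 > upper for m2 in mags[i:i + window]):
            return True
    return False
```

In practice the thresholds and the post-dip search window would be tuned from the collected training data for each activity class.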

Keywords: activity recognition, accelerometer, fall, gyroscope, smartphone

Procedia PDF Downloads 666
136 A Study on Adsorption Ability of MnO2 Nanoparticles to Remove Methyl Violet Dye from Aqueous Solution

Authors: Zh. Saffari, A. Naeimi, M. S. Ekrami-Kakhki, Kh. Khandan-Barani

Abstract:

The textile industries are becoming a major source of environmental contamination because an alarming amount of dye pollutants is generated during the dyeing processes. Organic dyes are among the largest pollutants released into wastewater from textile and other industrial processes, and they have shown severe impacts on human physiology. Nano-structured compounds have gained importance in this category due to their anticipated high surface area and improved reactive sites. In recent years, several novel adsorbents have been reported to possess great adsorption potential due to their enhanced adsorptive capacity. Nano-MnO2 has great potential applications in the field of environmental protection because it exhibits a wide variety of structures with large surface areas. The diverse structures and chemical properties of manganese oxides are exploited in potential applications such as adsorbents and sensor catalysis, and they are also used in a wide range of catalytic applications, such as the degradation of dyes. In this study, the adsorption of Methyl Violet (MV) dye from aqueous solutions onto MnO2 nanoparticles (MNP) has been investigated. The surface characterization of these nanoparticles was examined by particle size analysis, Scanning Electron Microscopy (SEM), Fourier Transform Infrared (FTIR) spectroscopy and X-Ray Diffraction (XRD). The effects of process parameters such as initial concentration, pH, temperature and contact duration on the adsorption capacities have been evaluated, among which pH was found to be the most effective parameter. The data were analyzed using the Langmuir and Freundlich models to explain the equilibrium characteristics of adsorption, and kinetic models such as the pseudo-first-order and pseudo-second-order models and the Elovich equation were utilized to describe the kinetic data. The experimental data were well fitted by the Langmuir adsorption isotherm model and the pseudo-second-order kinetic model.
The thermodynamic parameters, such as the free energy of adsorption (ΔG°), enthalpy change (ΔH°) and entropy change (ΔS°), were also determined and evaluated.
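The Langmuir fit mentioned above is commonly done on the linearized form Ce/qe = Ce/qmax + 1/(KL·qmax). A self-checking sketch with synthetic equilibrium data; the qmax and KL values are hypothetical, not the study's fitted constants:

```python
def langmuir_fit(ce, qe):
    """Fit the linearized Langmuir isotherm  Ce/qe = Ce/qmax + 1/(KL*qmax)
    by ordinary least squares; returns (qmax, KL)."""
    x = ce
    y = [c / q for c, q in zip(ce, qe)]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    intercept = my - slope * mx
    qmax = 1.0 / slope          # mg/g, from the slope
    kl = slope / intercept      # L/mg, from slope / intercept
    return qmax, kl

# synthetic equilibrium data from qmax = 50 mg/g, KL = 0.2 L/mg (hypothetical)
ce = [1.0, 5.0, 10.0, 20.0, 50.0]
qe = [50.0 * 0.2 * c / (1.0 + 0.2 * c) for c in ce]
qmax, kl = langmuir_fit(ce, qe)
```

Because the synthetic data lie exactly on a Langmuir isotherm, the fit recovers the generating parameters, which is the standard sanity check before applying the same regression to measured Ce/qe pairs.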

Keywords: MnO2 nanoparticles, adsorption, methyl violet, isotherm models, kinetic models, surface chemistry

Procedia PDF Downloads 236
135 Effective Use of X-Box Kinect in Rehabilitation Centers of Riyadh

Authors: Reem Alshiha, Tanzila Saba

Abstract:

Physical rehabilitation is the process of helping people recover and become able to return to former activities that have been interrupted by external factors such as car accidents, old age and strokes (chronic diseases and accidents, and injuries related to sport activities). The cost of hiring a personal nurse or driving the patient to and from the hospital can be high, and doing so is time-consuming. There are also other factors to take into account, such as forgetfulness, boredom and lack of motivation. To address this dilemma, researchers have developed rehabilitation software to be used with the Microsoft Kinect to help patients and their families with in-home rehabilitation. In-home rehabilitation software is becoming more and more popular, since it is more convenient for all parties involved with the patient. In contrast to other costly market-based systems that have no portability, Microsoft's Kinect is a portable motion sensor that reads and interprets body movements. New software development has made rehabilitation games available for use at home for the convenience of the patient. The games benefit their users (rehabilitation patients) by saving time and money. Many software packages can be used with the Kinect for rehabilitation, but the software chosen in this research is Kinectotherapy. The Kinectotherapy software was used with rehabilitation patients in Riyadh clinics to test its acceptance by patients and their physicians. In this study, we used the Kinect because it is affordable, portable and easy to access, in contrast to expensive market-based motion sensors. This paper explores the importance of in-home rehabilitation using the Kinect with the Kinectotherapy software. The software targets both upper and lower limbs, but in this research, the main focus is on upper-limb functionality.
However, in-home rehabilitation is not applicable to all patients with motor disability, since the patient must have some self-reliance. The targeted subjects are patients with minor motor impairment who are somewhat independent in their mobility. The presented work is the first to consider the implementation of in-home rehabilitation with real-time feedback to the patient and physician. This research proposes the implementation of in-home rehabilitation in Riyadh, Saudi Arabia. The findings show that most of the patients are interested in and motivated to use the in-home rehabilitation system in the future. The main value of the software application lies in these factors: it improves patient engagement through stimulating rehabilitation, it is a low-cost rehabilitation tool, and it reduces the need for expensive one-to-one clinical contact. Rehabilitation is a crucial treatment that can improve the quality of life and the confidence and self-esteem of the patient.

Keywords: x-box, rehabilitation, physical therapy, rehabilitation software, kinect

Procedia PDF Downloads 318
134 Digital Phase Shifting Holography in a Non-Linear Interferometer using Undetected Photons

Authors: Sebastian Töpfer, Marta Gilaberte Basset, Jorge Fuenzalida, Fabian Steinlechner, Juan P. Torres, Markus Gräfe

Abstract:

This work introduces a combination of digital phase-shifting holography with a non-linear interferometer using undetected photons. Non-linear interferometers can be used in combination with a measurement scheme called quantum imaging with undetected photons, which allows for the separation of the wavelengths used for sampling an object and detecting it at the imaging sensor. This method has recently attracted increasing attention, as it allows the use of exotic wavelengths (e.g., mid-infrared, ultraviolet) for object interaction while keeping the detection in spectral regions with highly developed, comparatively low-cost imaging sensors. The object information, including its transmission and phase influence, is recorded in the form of an interferometric pattern. To collect these patterns, this work combines the method of quantum imaging with undetected photons with digital phase-shifting holography using a minimal sampling of the interference. With this, the quantum imaging scheme is extended in its measurement capabilities, bringing it one step closer to application. Quantum imaging with undetected photons uses correlated photons generated by spontaneous parametric down-conversion in a non-linear interferometer to create indistinguishable photon pairs, which leads to an effect called induced coherence without induced emission. Placing an object inside the interferometer changes the interferometric pattern depending on the object's properties. Digital phase-shifting holography records multiple images of the interference at predetermined phase shifts to reconstruct the complete interference shape, which can afterwards be used to analyze the changes introduced by the object and infer its properties. An extensive characterization of this method was performed using a proof-of-principle setup. The measured spatial resolution, phase accuracy, and transmission accuracy are compared for different combinations of camera exposure times and numbers of interference sampling steps.
The current limits of this method are identified, pointing to further improvements. To summarize, this work presents an alternative holographic measurement method using non-linear interferometers in combination with quantum imaging, enabling new ways of measuring and motivating continued research.
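Phase retrieval from phase-shifted frames can be illustrated with the classic four-step formula for a single pixel. This is a textbook sketch; the paper's actual minimal-sampling scheme may use a different number or spacing of steps:

```python
import math

def four_step_phase(i1, i2, i3, i4):
    """Recover the interference phase from four frames recorded with
    phase shifts 0, pi/2, pi, 3*pi/2, where I(d) = A + B*cos(phi + d):
        phi = atan2(I4 - I2, I1 - I3)."""
    return math.atan2(i4 - i2, i1 - i3)

def frames(a, b, phi):
    """Synthesize the four phase-shifted intensities for one pixel with
    background a, fringe amplitude b, and object phase phi."""
    return [a + b * math.cos(phi + d)
            for d in (0.0, math.pi / 2, math.pi, 3 * math.pi / 2)]
```

Applying the recovery formula to synthesized frames returns the object phase exactly, independent of the background level and fringe amplitude, which is why phase shifting separates the object's phase influence from its transmission.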

Keywords: digital holography, quantum imaging, quantum holography, quantum metrology

Procedia PDF Downloads 71
133 Shaped Crystal Growth of Fe-Ga and Fe-Al Alloy Plates by the Micro Pulling down Method

Authors: Kei Kamada, Rikito Murakami, Masahiko Ito, Mototaka Arakawa, Yasuhiro Shoji, Toshiyuki Ueno, Masao Yoshino, Akihiro Yamaji, Shunsuke Kurosawa, Yuui Yokota, Yuji Ohashi, Akira Yoshikawa

Abstract:

Energy harvesting techniques have been widely developed in recent years, due to high demand for power supplies for 'Internet of Things' devices such as wireless sensor nodes. In these applications, techniques for converting mechanical vibration energy into electrical energy using magnetostrictive materials have attracted attention. Among the magnetostrictive materials, Fe-Ga and Fe-Al alloys are attractive due to figures of merit such as price, mechanical strength, and high magnetostrictive constant. Up to now, bulk crystals of these alloys have been produced by the Bridgman-Stockbarger method or the Czochralski method. Using these methods, large bulk crystals up to 2-3 inches in diameter can be grown. However, non-uniformity of the chemical composition along the crystal growth direction cannot be avoided, which results in non-uniformity of the magnetostriction constant and a reduction of the production yield. The micro-pulling-down (μ-PD) method has been developed as a shaped crystal growth technique. Our group has reported shaped crystal growth of oxide and fluoride single crystals with different shapes such as rods, plates, tubes, thin fibers, etc. The advantages of this method are low segregation, due to the high growth rate and small diffusion of the melt at the solid-liquid interface, and small kerf loss, due to the near-net-shape crystal. In this presentation, we report the growth of shaped long plate crystals of Fe-Ga and Fe-Al alloys using the μ-PD method. Alloy crystals were grown by the μ-PD method using a calcium oxide crucible and an induction heating system under a nitrogen atmosphere. The bottom hole of the crucibles was 5 x 1 mm² in size. A <100>-oriented iron-based alloy was used as a seed crystal. Alloy crystal plates of 5 x 1 x 320 mm³ were successfully grown. The results of crystal growth, chemical composition analysis, magnetostrictive properties and a prototype vibration energy harvester are reported.
Furthermore, continuous crystal growth using a powder supply system will be reported, which minimizes the chemical composition non-uniformity along the growth direction.

Keywords: crystal growth, micro-pulling-down method, Fe-Ga, Fe-Al

Procedia PDF Downloads 308
132 Regression-Based Approach for Development of a Cuff-Less Non-Intrusive Cardiovascular Health Monitor

Authors: Pranav Gulati, Isha Sharma

Abstract:

Hypertension and hypotension are known to have repercussions on the health of an individual, with hypertension contributing to an increased risk of cardiovascular diseases and hypotension resulting in syncope. This prompts the development of a non-invasive, non-intrusive, continuous and cuff-less blood pressure monitoring system to detect blood pressure variations and to identify individuals with acute and chronic heart ailments; however, due to the unavailability of such devices for practical daily use, it is difficult to screen and subsequently regulate blood pressure. The complexities that hamper steady monitoring of blood pressure comprise the variations in physical characteristics from individual to individual and the postural differences at the site of monitoring. We propose to develop a continuous, comprehensive cardio-analysis tool based on reflective photoplethysmography (PPG). The proposed device, in the form of eyewear, captures the PPG signal and estimates the systolic and diastolic blood pressure using a sensor positioned near the temporal artery. This system relies on regression models based on the extraction of key points from a pair of PPG wavelets. The proposed system has an edge over existing wearables in that it allows uniform contact and pressure with the temporal site, in addition to minimal disturbance by movement. Additionally, the feature extraction algorithms enhance the integrity and quality of the extracted features by discarding unreliable data sets. We tested the system with 12 subjects, of whom 6 served as the training dataset. For this, we measured blood pressure using a cuff-based BP monitor (Omron HEM-8712) and at the same time recorded the PPG signal from our cardio-analysis tool. The complete test was conducted using the cuff-based blood pressure monitor on the left arm while the PPG signal was acquired from the temporal site on the left side of the head.
This acquisition served as the training input for the regression model on the selected features. The other 6 subjects were used to validate the model by conducting the same test on them. The results show that the developed prototype can robustly acquire the PPG signal and can therefore be used to reliably predict blood pressure levels.
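The training step can be sketched as an ordinary least-squares fit of the cuff readings against a PPG key-point feature over the six training subjects. All numbers below are hypothetical placeholders for illustration, and the single feature stands in for the paper's multi-feature regression:

```python
def fit_line(x, y):
    """Least-squares fit BP = a*feature + b over the training subjects."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# hypothetical training data: one PPG key-point feature per subject
# (e.g., a normalized systolic-peak timing) vs cuff systolic BP (mmHg)
feature = [0.18, 0.20, 0.22, 0.25, 0.27, 0.30]
systolic = [132, 128, 124, 118, 114, 108]
a, b = fit_line(feature, systolic)

def predict(f):
    """Estimate systolic BP from a validation subject's PPG feature."""
    return a * f + b
```

The remaining six subjects then supply (feature, cuff BP) pairs that are compared against `predict` to validate the model.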

Keywords: blood pressure, photoplethysmograph, eyewear, physiological monitoring

Procedia PDF Downloads 249
131 Tactile Sensory Digit Feedback for Cochlear Implant Electrode Insertion

Authors: Yusuf Bulale, Mark Prince, Geoff Tansley, Peter Brett

Abstract:

Cochlear implantation (CI), which has become a routine procedure over the last decades, provides a sense of sound to patients who are severely or profoundly deaf via an implanted electronic device. Today, cochlear implantation technology uses an electrode array (EA) implanted manually into the cochlea. The success of the implantation depends on the electrode technology and on deep insertion techniques. However, this manual insertion procedure may cause mechanical trauma, which can lead to severe destruction of the delicate intracochlear structure. Accordingly, future improvement of cochlear electrode insertion requires a reduction of the excessive force applied during implantation, which causes tissue damage and trauma. This study examined the tool-tissue interaction of a large-scale prototype digit, embedded with a distributive tactile sensor and based on a cochlear electrode, within a large-scale cochlea phantom simulating the human cochlea, which could inform small-scale digit requirements. The digit, with distributive tactile sensors embedded on a silicon substrate, was inserted into the cochlea phantom to measure the digit/phantom interaction and the position of the digit, in order to minimize tissue damage and trauma during electrode insertion. The digit provided tactile information from the digit-phantom insertion interaction, such as contact status, tip penetration, obstacles, relative shape and location, contact orientation and multiple contacts. The tests demonstrated that even devices of such relatively simple design and low cost have the potential to improve cochlear implant surgery and other lumen mapping applications by providing tactile sensory feedback and thereby controlling the insertion through sensing and control of the tip of the implant during insertion.
With this approach, the surgeon could minimize tissue damage and potential damage to the delicate structures within the cochlea caused by current manual electrode insertion. The approach can also be applied to other minimally invasive surgery applications, as well as to diagnosis and path navigation procedures.

Keywords: cochlear electrode insertion, distributive tactile sensory feedback information, flexible digit, minimally invasive surgery, tool/tissue interaction

Procedia PDF Downloads 364
130 GPU-Based Back-Projection of Synthetic Aperture Radar (SAR) Data onto 3D Reference Voxels

Authors: Joshua Buli, David Pietrowski, Samuel Britton

Abstract:

Processing SAR data usually requires constraints on extent in the Fourier domain, as well as approximations and interpolations onto a planar surface, to form an exploitable image. This results in a potential loss of data, requires several interpolative techniques, and restricts visualization to two-dimensional planar imagery. The data can be interpolated into a ground-plane projection, with or without terrain as a component, all to better view SAR data in an image domain comparable to what a human would see, easing interpretation. An alternate but computationally heavy method that makes use of more of the data is the basis of this research. Pre-processing of the SAR data is completed first (matched filtering, motion compensation, etc.), the data are then range-compressed, and lastly, the contribution from each pulse is determined for each specific point in space by searching the time-history data for the reflectivity values of each pulse, summed over the entire collection. This yields a per-3D-point reflectivity using the entire collection domain. New advances in GPU processing now allow this rapid projection of acquired SAR data onto any desired reference surface (called backprojection). Mathematically, the computations are fast and easy to implement, despite limitations in SAR phase-history data size and 3D point cloud size. Backprojection algorithms are embarrassingly parallel, since each 3D point in the scene has the same reflectivity calculation applied for all pulses, independently of all other 3D points and pulse data under consideration. Therefore, given the simplicity of the single backprojection calculation, the work can be spread across thousands of GPU threads, allowing for an accurate reflectivity representation of a scene.
Furthermore, because reflectivity values are associated with individual three-dimensional points, a plane is no longer the sole permissible mapping base; a digital elevation model or even a cloud of points (collected from any sensor capable of measuring ground topography) can be used as a basis for the backprojection technique. This technique minimizes any interpolations and modifications of the raw data, maintaining maximum data integrity. This innovative processing will allow for SAR data to be rapidly brought into a common reference frame for immediate exploitation and data fusion with other three-dimensional data and representations.
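The per-voxel sum described above can be sketched serially as follows. The pulse interface (an antenna position plus a callable range profile) and the carrier frequency are hypothetical conveniences for this sketch; a GPU version would map the outer loop over voxels to threads:

```python
import cmath
import math

def backproject(voxels, pulses, c=3.0e8, fc=1.0e10):
    """Naive backprojection: for every 3-D voxel, coherently sum each
    pulse's range-compressed return at the voxel's slant range, after
    removing the expected two-way carrier phase 4*pi*fc*R/c.
    `pulses` is a list of (antenna_position, range_profile) pairs, where
    range_profile(r) returns the complex sample at slant range r.
    The per-voxel sums are independent -- embarrassingly parallel."""
    image = []
    for v in voxels:
        acc = 0j
        for pos, profile in pulses:
            r = math.dist(v, pos)          # slant range antenna -> voxel
            acc += profile(r) * cmath.exp(-1j * 4.0 * math.pi * fc * r / c)
        image.append(abs(acc))             # reflectivity magnitude
    return image
```

For a synthetic point target, the voxel at the target accumulates all pulses coherently while off-target voxels receive nothing, which is the per-3D-point reflectivity behavior the text describes.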

Keywords: backprojection, data fusion, exploitation, three-dimensional, visualization

Procedia PDF Downloads 46
129 A Long Range Wide Area Network-Based Smart Pest Monitoring System

Authors: Yun-Chung Yu, Yan-Wen Wang, Min-Sheng Liao, Joe-Air Jiang, Yuen-Chung Lee

Abstract:

This paper proposes to use a Long Range Wide Area Network (LoRaWAN) for a smart pest monitoring system which aims at the oriental fruit fly (Bactrocera dorsalis) to improve the communication efficiency of the system. The oriental fruit fly is one of the main pests in Southeast Asia and the Pacific Rim. Different smart pest monitoring systems based on the Internet of Things (IoT) architecture have been developed to solve problems of employing manual measurement. These systems often use Octopus II, a communication module following the 2.4GHz IEEE 802.15.4 ZigBee specification, as sensor nodes. The Octopus II is commonly used in low-power and short-distance communication. However, the energy consumption increase as the logical topology becomes more complicate to have enough coverage in the large area. By comparison, LoRaWAN follows the Low Power Wide Area Network (LPWAN) specification, which targets the key requirements of the IoT technology, such as secure bi-directional communication, mobility, and localization services. The LoRaWAN network has advantages of long range communication, high stability, and low energy consumption. The 433MHz LoRaWAN model has two superiorities over the 2.4GHz ZigBee model: greater diffraction and less interference. In this paper, The Octopus II module is replaced by a LoRa model to increase the coverage of the monitoring system, improve the communication performance, and prolong the network lifetime. The performance of the LoRa-based system is compared with a ZigBee-based system using three indexes: the packet receiving rate, delay time, and energy consumption, and the experiments are done in different settings (e.g. distances and environmental conditions). In the distance experiment, a pest monitoring system using the two communication specifications is deployed in an area with various obstacles, such as buildings and living creatures, and the performance of employing the two communication specifications is examined. 
The experiment results show that the packet receiving rate of the LoRa-based system is 96%, which is much higher than that of the ZigBee-based system when the distance between any two modules is about 500 m. These results indicate the capability of a LoRaWAN-based monitoring system for long-range transmission and confirm the stability of the system.
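The packet receiving rate used to compare the two systems is simply the fraction of transmitted packets that reach the gateway. A minimal sketch of this index (the packet counts below are illustrative; the abstract reports only the 96% figure):

```python
def packet_receiving_rate(received: int, sent: int) -> float:
    """Fraction of transmitted packets that arrived at the gateway."""
    if sent == 0:
        raise ValueError("no packets were sent")
    return received / sent

# Illustrative counts only; the per-trial packet totals are not given
# in the abstract.
lora_prr = packet_receiving_rate(received=960, sent=1000)
print(f"LoRa PRR: {lora_prr:.0%}")  # 96%
```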

Keywords: LoRaWAN, oriental fruit fly, IoT, Octopus II

Procedia PDF Downloads 326
128 Remote Sensing Application in Environmental Researches: Case Study of Iran Mangrove Forests Quantitative Assessment

Authors: Neda Orak, Mostafa Zarei

Abstract:

Environmental assessment is an important step in environmental management, and various methods and techniques have been produced and implemented for it. Remote sensing (RS) is widely used in many scientific and research fields such as geology, cartography, geography, agriculture, forestry, land use planning, and environment. It can reveal cyclical changes of earth-surface objects and delineate the extent of earth phenomena on the basis of recorded changes and deviations in electromagnetic reflectance. This research assessed mangrove forests by RS techniques, with the aim of a quantitative analysis of the mangroves in the Basatin and Bidkhoon estuaries. It was carried out with Landsat satellite images from 1975 to 2013 matched to ground control points. These mangroves are the last (northernmost) distribution of the ecosystem in the Northern Hemisphere, so the study can provide a good basis for better management of this important ecosystem. Landsat has provided researchers with valuable images for detecting earth changes. This research used the MSS, TM, ETM+, and OLI sensors from 1975, 1990, 2000, and 2003-2013. Changes were studied by maximum likelihood supervised classification and the IPVI index, after essential corrections such as error fixing, band combination, and georeferencing to the 2012 image as the base image. A 2004 Google Earth image and GPS ground points (2010-2012) were used to verify the changes obtained from the satellite images. Results showed that the mangrove area in Bidkhoon in 2012 was 1119072 m² by GPS, 1231200 m² by maximum likelihood supervised classification, and 1317600 m² by IPVI. The corresponding areas in Basatin were 466644 m², 88200 m², and 63000 m². The final results show that the forests have declined, partly naturally and, in Basatin, due to human activities. The loss was offset by planting over many years, although the trend has turned downward again in recent years. Overall, satellite images show a high capability for estimating such environmental processes.
This research also showed a high correlation between indexes derived from the images, such as IPVI and NDVI, and the ground control points.
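The two indexes used for the classification are straightforward band ratios: NDVI = (NIR − Red)/(NIR + Red) and IPVI = NIR/(NIR + Red), which is equivalent to (NDVI + 1)/2 and stays in [0, 1]. A minimal sketch on illustrative reflectance values (not the paper's Landsat data):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index, in [-1, 1]."""
    return (nir - red) / (nir + red)

def ipvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Infrared Percentage Vegetation Index, in [0, 1]; equals (NDVI + 1) / 2."""
    return nir / (nir + red)

# Illustrative reflectances: a vegetated pixel and a sparse one
nir = np.array([0.45, 0.30])
red = np.array([0.10, 0.25])
print(ndvi(nir, red))  # vegetated pixels give values closer to 1
print(ipvi(nir, red))
```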

Keywords: IPVI index, Landsat sensor, maximum likelihood supervised classification, Nayband National Park

Procedia PDF Downloads 269
127 Finite Element Simulation of Four Point Bending of Laminated Veneer Lumber (LVL) Arch

Authors: Eliska Smidova, Petr Kabele

Abstract:

This paper describes a non-linear finite element simulation of laminated veneer lumber (LVL) under tensile and shear loads that induce cracking along fibers. For this purpose, we use a 2D homogeneous orthotropic constitutive model of tensile and shear fracture in timber that has been recently developed and implemented into the ATENA® finite element software by the authors. The model captures (i) material orthotropy for small deformations in both the linear and non-linear range, (ii) elastic behavior until an anisotropic failure criterion is fulfilled, (iii) inelastic behavior after the failure criterion is satisfied, (iv) different post-failure responses for cracks along and across the grain, and (v) unloading/reloading behavior. The post-cracking response is treated by a fixed smeared crack model where the Reinhardt-Hordijk function is used. The model requires in total 14 input parameters that can be obtained from standard tests, off-axis test results, and iterative numerical simulation of a compact tension (CT) or compact tension-shear (CTS) test. New engineered timber composites, such as laminated veneer lumber (LVL), offer improved structural parameters compared to sawn timber. LVL is manufactured by laminating 3 mm thick wood veneers aligned in one direction using water-resistant adhesives (e.g. polyurethane). Thus, 3 main grain directions, namely longitudinal (L), tangential (T), and radial (R), are observed within the layered LVL product. The core of this work consists of 3 numerical simulations of experiments involving Radiata Pine LVL and Yellow Poplar LVL. The first analysis deals with calibration and validation of the proposed model through an off-axis tensile test (at load-grain angles of 0°, 10°, 45°, and 90°) and a CTS test (at load-grain angles of 30°, 60°, and 90°), both of which were conducted for Radiata Pine LVL.
The second finite element simulation reproduces the load-CMOD curve of a compact tension (CT) test of Yellow Poplar LVL with the aim of obtaining cohesive law parameters to be used as input in the third finite element analysis, a four-point bending test of a small-size arch of 780 mm span made of Yellow Poplar LVL. The arch is designed with a through crack between the two middle layers in the crown. In the crown area, curved laminated beams are exposed to high radial tensile stress compared to the timber strength in radial tension; let us note that in this case the latter parameter stands for the tensile strength in the direction perpendicular to the grain. Standard tests deliver most of the relevant input data, whereas the traction-separation law for a crack along the grain can be obtained partly by inverse analysis of a compact tension (CT) or compact tension-shear (CTS) test. The initial crack was modeled as a narrow gap separating two layers in the middle of the arch crown. The calculated load-deflection curve is in good agreement with the experimental ones. Furthermore, the crack pattern given by the numerical simulation coincides with the most important observed crack paths.
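The Reinhardt-Hordijk function mentioned above gives the traction-separation (softening) law of the smeared crack: an exponential decay of the transferred stress from the tensile strength ft at zero opening to zero at the critical opening wc. A minimal sketch of the standard curve (the ft and wc values are illustrative, not the calibrated LVL parameters):

```python
import math

def hordijk_traction(w: float, ft: float, wc: float,
                     c1: float = 3.0, c2: float = 6.93) -> float:
    """Reinhardt-Hordijk exponential softening: crack-normal traction as a
    function of crack opening w, with ft at w = 0 and zero at w = wc."""
    if w >= wc:
        return 0.0
    r = w / wc
    return ft * ((1.0 + (c1 * r) ** 3) * math.exp(-c2 * r)
                 - r * (1.0 + c1 ** 3) * math.exp(-c2))

# Illustrative values only (MPa, mm); the paper's cohesive law parameters
# come from inverse analysis of the CT test.
ft, wc = 4.0, 0.5
print(hordijk_traction(0.0, ft, wc))   # equals ft at zero opening
print(hordijk_traction(wc, ft, wc))    # zero at full opening
```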

Keywords: compact tension (CT) test, compact tension shear (CTS) test, fixed smeared crack model, four point bending test, laminated arch, laminated veneer lumber LVL, off-axis test, orthotropic elasticity, orthotropic fracture criterion, Radiata Pine LVL, traction-separation law, yellow poplar LVL, 2D constitutive model

Procedia PDF Downloads 258
126 Assessment of Occupational Exposure and Individual Radio-Sensitivity in People Subjected to Ionizing Radiation

Authors: Oksana G. Cherednichenko, Anastasia L. Pilyugina, Sergey N. Lukashenko, Elena G. Gubitskaya

Abstract:

The estimation of accumulated radiation doses in people professionally exposed to ionizing radiation was performed using methods of biological (chromosomal aberration frequency in lymphocytes) and physical (radionuclide analysis in urine, whole-body radiation meter, individual thermoluminescent dosimeters) dosimetry. A group of 84 category "A" employees was investigated after their work in the territory of the former Semipalatinsk test site (Kazakhstan). The dose rate in some funnels exceeds 40 μSv/h. After radionuclide determination in urine using radiochemical and WBC methods, it was shown that the total effective dose of internal exposure of the personnel did not exceed 0.2 mSv/year, while the acceptable dose limit for staff is 20 mSv/year. The range of external radiation doses measured with individual thermoluminescent dosimeters was 0.3-1.406 µSv. The cytogenetic examination showed that the chromosomal aberration frequency in the staff was 4.27±0.22%, which is significantly higher than in people from the unpolluted settlement of Tausugur (0.87±0.1%) (p ≤ 0.01) and citizens of Almaty (1.6±0.12%) (p ≤ 0.01). Chromosomal-type aberrations accounted for 2.32±0.16%, of which 0.27±0.06% were dicentrics and centric rings. Cytogenetic analysis of group radiosensitivity among the «professionals» by different criteria (age, sex, ethnic group, epidemiological data) revealed no significant differences between the compared values. Using various techniques based on the frequency of dicentrics and centric rings, the average cumulative radiation dose for the group was calculated to be 0.084-0.143 Gy. To perform comparative individual dosimetry using physical and biological methods of dose assessment, calibration curves (including our own) and regression equations based on the general frequency of chromosomal aberrations, obtained after irradiation of blood samples by gamma radiation at a dose rate of 0.1 Gy/min, were used.
Herewith, on the assumption of an individual variation of chromosomal aberration frequency of 1-10%, the accumulated dose of radiation varied from 0 to 0.3 Gy. The main problem in the interpretation of individual dosimetry results comes down to the different reactions of individuals to irradiation, i.e., radiosensitivity, which dictates the need for a quantitative definition of this individual reaction and its consideration in the calculation of the received radiation dose. The entire examined contingent was divided into groups based on the received dose and the detected cytogenetic aberrations. Radiosensitive individuals, at the lowest received dose in a year, showed the highest frequency of chromosomal aberrations (5.72%). In contrast, radioresistant individuals showed the lowest frequency of chromosomal aberrations (2.8%). The cohort in our research was distributed according to the criterion of radiosensitivity as follows: radiosensitive (26.2%), medium radiosensitivity (57.1%), radioresistant (16.7%). The dispersion for radioresistant individuals is 2.3; for the group with medium radiosensitivity, 3.3; and for the radiosensitive group, 9. These data indicate the highest variation of the characteristic (reaction to radiation) in the group of radiosensitive individuals. People with medium radiosensitivity show a significant long-term correlation (0.66; n=48, β ≥ 0.999) between the dose values determined from the results of cytogenetic analysis and the external radiation dose obtained with thermoluminescent dosimeters. Mathematical models relating the received radiation dose to the professionals' radiosensitivity level were proposed.
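Biological dose reconstruction from dicentric and centric ring frequencies, as described above, typically inverts a linear-quadratic calibration curve Y = c + αD + βD² fitted to irradiated blood samples. A minimal sketch of that inversion (the coefficient values are assumed for illustration, not the authors' calibration):

```python
import math

def dose_from_dicentrics(y: float, c: float, alpha: float, beta: float) -> float:
    """Invert the linear-quadratic calibration curve Y = c + alpha*D + beta*D**2
    (aberrations per cell) to estimate the absorbed dose D in Gy."""
    disc = alpha ** 2 + 4.0 * beta * (y - c)
    if disc < 0:
        raise ValueError("observed yield is below the curve's background")
    return (-alpha + math.sqrt(disc)) / (2.0 * beta)

# Assumed coefficients for acute gamma irradiation (illustrative only):
# background c, linear term alpha [1/Gy], quadratic term beta [1/Gy^2]
d = dose_from_dicentrics(y=0.012, c=0.001, alpha=0.03, beta=0.06)
print(f"estimated dose: {d:.3f} Gy")
```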

Keywords: biodosimetry, chromosomal aberrations, ionizing radiation, radiosensitivity

Procedia PDF Downloads 155
125 Geomatic Techniques to Filter Vegetation from Point Clouds

Authors: M. Amparo Núñez-Andrés, Felipe Buill, Albert Prades

Abstract:

More and more frequently, geomatic techniques such as terrestrial laser scanning or digital photogrammetry, either terrestrial or from drones, are being used to obtain digital terrain models (DTM) for the monitoring of geological phenomena that cause natural disasters, such as landslides, rockfalls, and debris flows. One of the main multitemporal analyses developed from these models is the quantification of volume changes in slopes and hillsides, whether caused by erosion, fall, or land movement in the source area or by sedimentation in the deposition zone. To carry out this task, it is necessary to filter from the point clouds all those elements that do not belong to the slopes. Among these elements, vegetation stands out, as it is the one with the greatest presence and constant change, both seasonal and daily, since it is affected by factors such as wind. One of the best-known indexes to detect vegetation in an image is the NDVI (Normalized Difference Vegetation Index), which is obtained from the combination of the infrared and red channels and therefore requires a multispectral camera. These cameras are generally of lower resolution than conventional RGB cameras, while their cost is much higher. Therefore, we have to look for alternative indices based on RGB. In this communication, we present the results obtained in the Georisk project (PID2019‐103974RB‐I00/MCIN/AEI/10.13039/501100011033) using the GLI (Green Leaf Index) and ExG (Excess Green) indices, as well as the change to the Hue-Saturation-Value (HSV) color space, in which the H coordinate is the one that gives us the most information for vegetation filtering. These filters are applied both to the images, creating binary masks to be used when applying the SfM algorithms, and to the point cloud obtained directly by the photogrammetric process without any previous filter, or to the one obtained by TLS (Terrestrial Laser Scanning).
In this last case, we have also worked with a Riegl VZ-400i sensor that allows the reception, as in aerial LiDAR, of several returns of the signal, information that can be used for classification of the point cloud. After applying all the techniques in different locations, the results show that the color-based filters allow correct filtering in those areas where the presence of shadows is not excessive and there is a contrast between the color of the slope lithology and the vegetation. As noted above, in the case of the HSV color space it is the H coordinate that responds best for this filtering. Finally, the use of the various returns of the TLS signal allows filtering, with some limitations.

Keywords: RGB index, TLS, photogrammetry, multispectral camera, point cloud

Procedia PDF Downloads 105
124 The Second Generation of Tyrosine Kinase Inhibitor Afatinib Controls Inflammation by Regulating NLRP3 Inflammasome Activation

Authors: Shujun Xie, Shirong Zhang, Shenglin Ma

Abstract:

Background: Chronic inflammation can lead to many malignancies, and its inadequate resolution could play a crucial role in tumor invasion, progression, and metastasis. A randomised, double-blind, placebo-controlled trial showed that IL-1β inhibition with canakinumab could reduce incident lung cancer and lung cancer mortality in patients with atherosclerosis. The processing and secretion of the proinflammatory cytokine IL-1β are controlled by the inflammasome. Here we show the connection between the innate immune system and afatinib, a tyrosine kinase inhibitor targeting the epidermal growth factor receptor (EGFR) in non-small cell lung cancer. Methods: Murine bone marrow-derived macrophages (BMDMs), peritoneal macrophages (PMs), and THP-1 cells were used to check the effect of afatinib on the activation of the NLRP3 inflammasome. The assembly of the NLRP3 inflammasome was checked by co-immunoprecipitation of NLRP3 and the apoptosis-associated speck-like protein containing a CARD (ASC), and by disuccinimidyl suberate (DSS) cross-linking of ASC. Lipopolysaccharide (LPS)-induced sepsis and alum-induced peritonitis models were used to confirm that afatinib could inhibit the activation of NLRP3 in vivo. Peripheral blood mononuclear cells (PBMCs) from non-small cell lung cancer (NSCLC) patients before and after taking afatinib were used to check that afatinib inhibits inflammation in NSCLC therapy. Results: Our data showed that afatinib could inhibit the secretion of IL-1β in a dose-dependent manner in macrophages. Moreover, afatinib could inhibit the maturation of IL-1β and caspase-1 without affecting their precursors. Next, we found that afatinib could block the assembly of the NLRP3 inflammasome and the ASC speck by blocking the interaction of the sensor protein NLRP3 and the adaptor protein ASC. We also found that afatinib was able to alleviate LPS-induced sepsis in vivo.
Conclusion: Our study found that afatinib could inhibit the activation of the NLRP3 inflammasome in macrophages, providing new evidence that afatinib could target the innate immune system to control chronic inflammation. These investigations provide significant experimental evidence for afatinib as a therapeutic drug for non-small cell lung cancer, other tumors, and NLRP3-related diseases, and explore new targets for afatinib.

Keywords: inflammasome, afatinib, inflammation, tyrosine kinase inhibitor

Procedia PDF Downloads 96