Search results for: breath monitoring using pressure sensors
5266 Thermodynamic Modeling and Exergoeconomic Analysis of an Isobaric Adiabatic Compressed Air Energy Storage System
Authors: Youssef Mazloum, Haytham Sayah, Maroun Nemer
Abstract:
The penetration of renewable energy sources into the electric grid is increasing significantly. However, the intermittency of these sources disturbs the balance between electricity supply and demand, hence the importance of energy storage technologies, which restore that balance and mitigate the drawbacks of renewable intermittency. This paper discusses the modeling and cost-effectiveness of an isobaric adiabatic compressed air energy storage (IA-CAES) system. The proposed system combines a compressed air energy storage (CAES) system with pumped hydro storage and thermal energy storage. The aim of this combination is to overcome the disadvantages of the conventional CAES system, such as the losses due to storage pressure variation, the loss of the compression heat, and the reliance on fossil fuel sources. A steady-state model is developed to perform energy and exergy analyses of the IA-CAES system and to calculate the distribution of the exergy losses within it. A sensitivity analysis is also carried out to estimate the effects of key parameters on the system’s efficiency, such as the pinch of the heat exchangers, the isentropic efficiency of the rotating machinery, and the pressure losses. The sensitivity analysis is a local analysis, since the sensitivity of each parameter changes with the variation of the other parameters. An exergoeconomic study and a cost optimization are therefore performed in order to reduce the cost of the electricity generated during the production phase. The optimizer used is OmOptim, a genetic-algorithm-based optimizer. Keywords: cost-effectiveness, exergoeconomic analysis, isobaric adiabatic compressed air energy storage (IA-CAES) system, thermodynamic modeling
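The abstract's sensitivity analysis links the isentropic efficiency of the rotating machinery to overall system efficiency. As a minimal sketch of that dependence (not the authors' OmOptim model), the specific work of a single ideal-gas compression stage can be swept over isentropic efficiency; the inlet temperature, pressure ratio, and gas properties below are assumed placeholder values.

```python
# Minimal sketch: effect of compressor isentropic efficiency on specific work
# for one adiabatic ideal-gas compression stage (placeholder values, not the
# IA-CAES plant model).

def compressor_specific_work(t_in_k, p_ratio, eta_is, cp=1005.0, gamma=1.4):
    """Actual specific work (J/kg) of one adiabatic compression stage."""
    ideal = cp * t_in_k * (p_ratio ** ((gamma - 1.0) / gamma) - 1.0)
    return ideal / eta_is

for eta in (0.80, 0.85, 0.90):
    w = compressor_specific_work(t_in_k=293.15, p_ratio=10.0, eta_is=eta)
    print(f"eta_is = {eta:.2f} -> specific work ~ {w / 1000:.1f} kJ/kg")
```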
Procedia PDF Downloads 247
5265 Geostatistical Models to Correct Salinity of Soils from Landsat Satellite Sensor: Application to the Oran Region, Algeria
Authors: Dehni Abdellatif, Lounis Mourad
Abstract:
The new approach of applied spatial geostatistics in materials science, agricultural accuracy, and agricultural statistics permits the management and monitoring of water and groundwater quality in relation to salt-affected soil. Previous experience with data acquisition and spatial preparation of optical and multispectral data has facilitated the integration of correction models of electrical conductivity related to soil temperature (soil horizons). This physical parameter has been extracted by calibrating the thermal band (LANDSAT ETM+ band 6) with a radiometric correction. Our study area is the Oran region (north-western Algeria). Different spectral indices are determined, such as the salinity and sodicity index, the Combined Spectral Reflectance Index (CSRI), the Normalized Difference Vegetation Index (NDVI), emissivity, albedo, and the Sodium Adsorption Ratio (SAR). The geostatistical modeling of electrical conductivity (salinity) appears to be a useful decision support system for estimating corrected electrical resistivity related to surface soil temperature, according to conversion models by substitution with the reference temperature of 25°C (at which the hydrochemical data are collected). The brightness temperatures extracted from satellite reflectance (LANDSAT ETM+) are used in consistency models to estimate electrical resistivity. The confusion arising from the effects of salt stress and water stress is removed, followed by a seasonal application of geostatistical analysis in a Geographic Information System (GIS) to investigate and monitor the variation of electrical conductivity in the alluvial aquifer of Es-Sénia for the salt-affected soil. Keywords: geostatistical modelling, landsat, brightness temperature, conductivity
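Several of the spectral indices named above are simple band combinations. As a brief sketch (the band mapping and reflectance values are assumptions, and the salinity-index formula shown is only one common variant in the literature, not necessarily the one used by the authors):

```python
import numpy as np

# Minimal sketch of band-ratio indices computed on Landsat surface-reflectance
# arrays (placeholder inputs; band mapping assumed).

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + 1e-10)

def salinity_index(green, red):
    """One common salinity-index variant, SI = sqrt(green * red)."""
    return np.sqrt(green * red)

# Toy 2x2 reflectance patches standing in for ETM+ bands.
red = np.array([[0.12, 0.15], [0.20, 0.18]])
nir = np.array([[0.35, 0.30], [0.22, 0.25]])
green = np.array([[0.10, 0.11], [0.14, 0.13]])

print("NDVI:\n", ndvi(nir, red))
print("SI:\n", salinity_index(green, red))
```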
Procedia PDF Downloads 442
5264 Cross-Comparison between Land Surface Temperature from Polar and Geostationary Satellite over Heterogenous Landscape: A Case Study in Hong Kong
Authors: Ibrahim A. Adeniran, Rui F. Zhu, Man S. Wong
Abstract:
Owing to the insufficiency in the spatial representativeness and continuity of in situ temperature measurements from weather stations (WS), the use of temperature measurement from WS for large-range diurnal analysis in heterogenous landscapes has been limited. This has made the accurate estimation of land surface temperature (LST) from remotely sensed data more crucial. Moreover, the study of dynamic interaction between the atmosphere and the physical surface of the Earth could be enhanced at both annual and diurnal scales by using optimal LST data derived from satellite sensors. The tradeoff between the spatial and temporal resolution of LSTs from satellite’s thermal infrared sensors (TIRS) has, however, been a major challenge, especially when high spatiotemporal LST data are recommended. It is well-known from existing literature that polar satellites have the advantage of high spatial resolution, while geostationary satellites have a high temporal resolution. Hence, this study is aimed at designing a framework for the cross-comparison of LST data from polar and geostationary satellites in a heterogeneous landscape. This could help to understand the relationship between the LST estimates from the two satellites and, consequently, their integration in diurnal LST analysis. Landsat-8 satellite data will be used as the representative of the polar satellite due to the availability of its long-term series, while the Himawari-8 satellite will be used as the data source for the geostationary satellite because of its improved TIRS. For the study area, Hong Kong Special Administrative Region (HK SAR) will be selected; this is due to the heterogeneity in the landscape of the region. LST data will be retrieved from both satellites using the Split window algorithm (SWA), and the resulting data will be validated by comparing satellite-derived LST data with temperature data from automatic WS in HK SAR. The LST data from the satellite data will then be separated based on the land use classification in HK SAR using the Global Land Cover by National Mapping Organization version3 (GLCNMO 2013) data. The relationship between LST data from Landsat-8 and Himawari-8 will then be investigated based on the land-use class and over different seasons of the year in order to account for seasonal variation in their relationship. The resulting relationship will be spatially and statistically analyzed and graphically visualized for detailed interpretation. Findings from this study will reveal the relationship between the two satellite data based on the land use classification within the study area and the seasons of the year. While the information provided by this study will help in the optimal combination of LST data from Polar (Landsat-8) and geostationary (Himawari-8) satellites, it will also serve as a roadmap in the annual and diurnal urban heat (UHI) analysis in Hong Kong SAR.Keywords: automatic weather station, Himawari-8, Landsat-8, land surface temperature, land use classification, split window algorithm, urban heat island
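The split window algorithm (SWA) mentioned above combines two thermal brightness temperatures with emissivity to estimate LST. A simplified sketch of that structure is shown below; the coefficients are placeholders (operational SWA coefficients are fitted per sensor and atmospheric conditions), and this is not the exact formulation used in the study.

```python
# Minimal sketch of a generic split-window LST estimate from two thermal
# brightness temperatures. Coefficients c0..c3 are placeholder values.

def split_window_lst(t_i, t_j, emissivity, c0=0.0, c1=1.0, c2=0.3, c3=45.0):
    """Simplified split-window form:
    LST = t_i + c1*(t_i - t_j) + c2*(t_i - t_j)**2 + c0 + c3*(1 - emissivity)
    """
    dt = t_i - t_j
    return t_i + c1 * dt + c2 * dt**2 + c0 + c3 * (1.0 - emissivity)

# Brightness temperatures (K) for two thermal channels and a surface emissivity.
print(split_window_lst(t_i=301.2, t_j=299.8, emissivity=0.97))
```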
Procedia PDF Downloads 75
5263 Assessing Solid Waste Management Practices in Port Harcourt City, Nigeria
Authors: Perpetual Onyejelem, Kenichi Matsui
Abstract:
Solid waste management is one essential area for urban administration to achieve environmental sustainability. Proper solid waste management (SWM) improves the environment by reducing diseases and increasing public health. On the other way, improper SWM practices negatively impact public health and environmental sustainability. This article evaluates SWM in Port Harcourt, Nigeria, with the goal of determining the current solid waste management practices and their health implications. This study used secondary data, which relies on existing published literature and official documents. The Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) statement and its four-stage inclusion/exclusion criteria were utilized as part of a systematic literature review technique to locate the literature that concerns SWM practices and the implementation of solid waste management policies between 2014-2023 in PortHarcourt and its health effects from specific databases (Scopus and Google Scholar). The results found that despite the existence and implementation of the Rivers State Waste Management Policy and the formulation of the National Policy on Solid Waste Management in Port Harcourt, residents continued to dump waste in drainages. They were unaware of waste sorting and dumped waste haphazardly. This trend has persisted due to a lack of political commitment to the effective implementation and monitoring of policies and strategies and a lack of training provided to waste collectors regarding the SWM approach, which involves sorting and separating waste. In addition, inadequate remuneration for waste collectors, the absence of community participation in policy formulation, and insufficient awareness among residents regarding the 3R approach are also contributory factors. This caused the emergence of vector-borne diseases such as malaria, lassa fever, and cholera in Port Harcourt, increasing the expense of healthcare for locals, particularly low-income households. The study urges the government to prioritize protecting the health of its citizens by studying the methods other nations have taken to address the problem of solid waste management and adopting those that work best for their region. The bottom-up strategy should be used to include locals in developing solutions. However, citizens who are always the most impacted by this issue should launch initiatives to address it and put pressure on the government to assist them when they have limitations.Keywords: health effects, solid waste management practices, environmental pollution, Port-Harcourt
Procedia PDF Downloads 61
5262 The Impact of Intelligent Control Systems on Biomedical Engineering and Research
Authors: Melkamu Tadesse Getachew
Abstract:
Intelligent control systems have revolutionized biomedical engineering, advancing research and enhancing medical practice. This review paper examines the impact of intelligent control on various aspects of biomedical engineering. It analyzes how these systems enhance precision and accuracy in biomedical instrumentation, improving diagnostics, monitoring, and treatment. Integration challenges are addressed, and potential solutions are proposed. The paper also investigates the optimization of drug delivery systems through intelligent control, exploring how intelligent systems contribute to precise dosing, targeted drug release, and personalized medicine. Challenges related to controlled drug release and patient variability are discussed, along with potential avenues for overcoming them. The algorithms used in intelligent control systems for biomedical control are also compared. The implications of intelligent control in computational and systems biology are explored, showcasing how these systems enable enhanced analysis and prediction of complex biological processes. Challenges such as interpretability, human-machine interaction, and machine reliability are examined, along with potential solutions. Intelligent control in biomedical engineering also plays a crucial role in risk management during surgical operations: intelligent systems improve patient safety and surgical outcomes when integrated into surgical robots, augmented reality, and preoperative planning, and the challenges associated with these implementations and potential solutions are discussed in detail. In summary, this review comprehensively explores the widespread impact of intelligent control on biomedical engineering and the promising future it offers for human health. It discusses application areas, challenges, and potential solutions, highlighting the transformative potential of these systems in advancing research and improving medical practice. Keywords: intelligent control systems, biomedical instrumentation, drug delivery systems, robotic surgical instruments, computational monitoring and modeling
Procedia PDF Downloads 47
5261 Non-Contact Digital Music Instrument Using Light Sensing Technology
Authors: Aishwarya Ravichandra, Kirtana Kirtivasan, Adithi Mahesh, Ashwini S.Savanth
Abstract:
A Non-Contact Digital Music System has been conceptualized and implemented to create a new era of digital music. This system replaces the strings of a traditional stringed instrument with laser beams to avoid bruising of the user’s hand. The system consists of seven laser modules, detector modules, and distance sensors that form the basic hardware blocks of this instrument. An Arduino ATmega2560 microcontroller is used as the primary interface between the hardware and the software. MIDI (Musical Instrument Digital Interface) is used as the protocol to establish communication between the instrument and the virtual synthesizer software. Keywords: Arduino, detector, laser, MIDI, note on, note off, pitch bend, Sharp IR distance sensor
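As a hypothetical sketch of the MIDI messaging such an instrument relies on (not the authors' firmware), a laser-beam interruption can be mapped to standard MIDI note-on/note-off messages; the status bytes are defined by the MIDI specification, while the beam-to-note mapping and velocity below are assumptions.

```python
# Hypothetical sketch: map a laser-beam interruption to MIDI note-on/note-off
# bytes. Status bytes 0x90 (note on) and 0x80 (note off) are standard MIDI;
# the note numbers and beam indices below are illustrative assumptions.

BEAM_TO_NOTE = {0: 60, 1: 62, 2: 64, 3: 65, 4: 67, 5: 69, 6: 71}  # C major scale

def midi_note_on(note, velocity=100, channel=0):
    return bytes([0x90 | channel, note & 0x7F, velocity & 0x7F])

def midi_note_off(note, channel=0):
    return bytes([0x80 | channel, note & 0x7F, 0])

def on_beam_event(beam_index, interrupted):
    """Return the MIDI message for a beam becoming blocked or released."""
    note = BEAM_TO_NOTE[beam_index]
    return midi_note_on(note) if interrupted else midi_note_off(note)

print(on_beam_event(0, True).hex())   # 903c64 -> note on, middle C
print(on_beam_event(0, False).hex())  # 803c00 -> note off
```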
Procedia PDF Downloads 408
5260 Theoretical Analysis of the Existing Sheet Thickness in the Calendering of Pseudoplastic Material
Authors: Muhammad Zahid
Abstract:
The mechanical process of smoothing and compressing a molten material by passing it through a number of pairs of heated rolls in order to produce a sheet of desired thickness is called calendering. The rolls that are in combination are called calenders, a term derived from kylindros the Greek word for the cylinder. It infects the finishing process used on cloth, paper, textiles, leather cloth, or plastic film and so on. It is a mechanism which is used to strengthen surface properties, minimize sheet thickness, and yield special effects such as a glaze or polish. It has a wide variety of applications in industries in the manufacturing of textile fabrics, coated fabrics, and plastic sheeting to provide the desired surface finish and texture. An analysis has been presented for the calendering of Pseudoplastic material. The lubrication approximation theory (LAT) has been used to simplify the equations of motion. For the investigation of the nature of the steady solutions that exist, we make use of the combination of exact solution and numerical methods. The expressions for the velocity profile, rate of volumetric flow and pressure gradient are found in the form of exact solutions. Furthermore, the quantities of interest by engineering point of view, such as pressure distribution, roll-separating force, and power transmitted to the fluid by the rolls are also computed. Some results are shown graphically while others are given in the tabulated form. It is found that the non-Newtonian parameter and Reynolds number serve as the controlling parameters for the calendering process.Keywords: calendering, exact solutions, lubrication approximation theory, numerical solutions, pseudoplastic material
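Pseudoplastic (shear-thinning) behaviour of the kind analysed above is commonly described by a power-law constitutive model. As a brief sketch under assumed parameter values (not the authors' lubrication-approximation solution), the apparent viscosity falls as the shear rate rises when the flow-behaviour index n is below 1:

```python
# Sketch of a power-law (pseudoplastic) constitutive model with assumed
# consistency index K and flow-behaviour index n < 1 (shear-thinning).

def apparent_viscosity(shear_rate, K=10.0, n=0.6):
    """eta = K * gamma_dot**(n - 1); shear-thinning when n < 1."""
    return K * shear_rate ** (n - 1.0)

for gd in (0.1, 1.0, 10.0, 100.0):
    print(f"shear rate {gd:7.1f} 1/s -> apparent viscosity {apparent_viscosity(gd):8.2f} Pa.s")
```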
Procedia PDF Downloads 149
5259 Electrochemical Radiofrequency Scanning Tunneling Microscopy Measurements for Fingerprinting Single Electron Transfer Processes
Authors: Abhishek Kumar, Mohamed Awadein, Georg Gramse, Luyang Song, He Sun, Wolfgang Schofberger, Stefan Müllegger
Abstract:
Electron transfer is a crucial part of the chemical reactions that drive everyday processes. With the help of an electrochemical radio-frequency scanning tunneling microscopy (EC-RF-STM) setup, we observe single-electron-mediated oxidation-reduction processes in molecules such as ferrocene and transition-metal corroles. Combining the techniques of scanning microwave microscopy and cyclic voltammetry allows us to monitor such processes with attoampere sensitivity. A systematic study of such phenomena would be critical to understanding the nanoscale behavior of catalysts, molecular sensors, and batteries relevant to the development of novel materials and energy applications. Keywords: radiofrequency, STM, cyclic voltammetry, ferrocene
Procedia PDF Downloads 482
5258 Predicting Aggregation Propensity from Low-Temperature Conformational Fluctuations
Authors: Hamza Javar Magnier, Robin Curtis
Abstract:
There have been rapid advances in the upstream processing of protein therapeutics, which has shifted the bottleneck to downstream purification and formulation. Finding liquid formulations with shelf lives of up to two years is increasingly difficult for some of the newer therapeutics, which have been engineered for activity, but their formulations are often viscous, can phase separate, and have a high propensity for irreversible aggregation1. We explore means to develop improved predictive ability from a better understanding of how protein-protein interactions on formulation conditions (pH, ionic strength, buffer type, presence of excipients) and how these impact upon the initial steps in protein self-association and aggregation. In this work, we study the initial steps in the aggregation pathways using a minimal protein model based on square-well potentials and discontinuous molecular dynamics. The effect of model parameters, including range of interaction, stiffness, chain length, and chain sequence, implies that protein models fold according to various pathways. By reducing the range of interactions, the folding- and collapse- transition come together, and follow a single-step folding pathway from the denatured to the native state2. After parameterizing the model interaction-parameters, we developed an understanding of low-temperature conformational properties and fluctuations, and the correlation to the folding transition of proteins in isolation. The model fluctuations increase with temperature. We observe a low-temperature point, below which large fluctuations are frozen out. This implies that fluctuations at low-temperature can be correlated to the folding transition at the melting temperature. Because proteins “breath” at low temperatures, defining a native-state as a single structure with conserved contacts and a fixed three-dimensional structure is misleading. Rather, we introduce a new definition of a native-state ensemble based on our understanding of the core conservation, which takes into account the native fluctuations at low temperatures. This approach permits the study of a large range of length and time scales needed to link the molecular interactions to the macroscopically observed behaviour. In addition, these models studied are parameterized by fitting to experimentally observed protein-protein interactions characterized in terms of osmotic second virial coefficients.Keywords: protein folding, native-ensemble, conformational fluctuation, aggregation
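The minimal protein model described above is built on square-well pair potentials evolved with discontinuous molecular dynamics. A short sketch of such a pair potential is given below; the hard-core diameter, well width, and well depth are illustrative assumptions, not the paper's fitted parameters.

```python
import math

# Sketch of a square-well pair potential used in minimal protein models:
# infinite repulsion inside the hard core, a constant attractive well of
# depth epsilon out to lambda*sigma, and zero beyond.

def square_well(r, sigma=1.0, lam=1.5, epsilon=1.0):
    """Pair energy between two beads separated by distance r."""
    if r < sigma:
        return math.inf          # hard-core overlap
    if r < lam * sigma:
        return -epsilon          # inside the attractive well
    return 0.0                   # beyond the interaction range

for r in (0.9, 1.2, 1.6):
    print(f"r = {r:.1f} sigma -> u(r) = {square_well(r)}")
```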
Procedia PDF Downloads 363
5257 Standardization of a Methodology for Quantification of Antimicrobials Used for the Treatment of Multi-Resistant Bacteria Using Two Types of Biosensors and Production of Anti-Antimicrobial Antibodies
Authors: Garzon V., Bustos R., Salvador J. P., Marco M. P., Pinacho D. G.
Abstract:
Bacterial resistance to antimicrobial treatment has increased significantly in recent years, making it a public health problem. Large numbers of bacteria are resistant to all or nearly all known antimicrobials, creating the need for the development of new types of antimicrobials or the use of “last line” antimicrobial drug therapies for the treatment of multi-resistant bacteria. Some of the chemical groups of antimicrobials most used for the treatment of infections caused by multiresistant bacteria in the clinic are Glycopeptide (Vancomycin), Polymyxin (Colistin), Lipopeptide (Daptomycin) and Carbapenem (Meropenem). Molecules that require therapeutic drug monitoring (TDM). Due to the above, a methodology based on nanobiotechnology based on an optical and electrochemical biosensor is being developed, which allows the evaluation of the plasmatic levels of some antimicrobials such as glycopeptide, polymyxin, lipopeptide and carbapenem quickly, at a low cost, with a high specificity and sensitivity and that can be implemented in the future in public and private health hospitals. For this, the project was divided into five steps i) Design of specific anti-drug antibodies, produced in rabbits for each of the types of antimicrobials, evaluating the results by means of an immunoassay analysis (ELISA); ii) quantification by means of an electrochemical biosensor that allows quantification with high sensitivity and selectivity of the reference antimicrobials; iii) Comparison of antimicrobial quantification with an optical type biosensor; iv) Validation of the methodologies used with biosensor by means of an immunoassay. Finding as a result that it is possible to quantify antibiotics by means of the optical and electrochemical biosensor at concentrations on average of 1,000ng/mL, the antibodies being sensitive and specific for each of the antibiotic molecules, results that were compared with immunoassays and HPLC chromatography. Thus, contributing to the safe use of these drugs commonly used in clinical practice and new antimicrobial drugs.Keywords: antibiotics, electrochemical biosensor, optical biosensor, therapeutic drug monitoring
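Immunoassays such as the ELISAs used above for validation are typically calibrated with a four-parameter logistic (4PL) curve. The sketch below fits such a curve to invented absorbance-versus-concentration data (not the project's measurements) and inverts it to quantify an unknown sample; all numbers and the use of the 4PL form here are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: four-parameter logistic (4PL) calibration relating immunoassay
# signal to antimicrobial concentration. Data are invented placeholders.

def four_pl(x, a, b, c, d):
    """a: max signal, d: min signal, c: inflection (EC50), b: slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

conc = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0, 1000.0])   # ng/mL
signal = np.array([1.95, 1.80, 1.45, 1.00, 0.60, 0.35, 0.22])   # absorbance

popt, _ = curve_fit(four_pl, conc, signal, p0=[2.0, 1.0, 30.0, 0.2])
a, b, c, d = popt

def signal_to_conc(y):
    """Invert the fitted 4PL to estimate concentration from a signal."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

print(f"Estimated concentration for signal 0.8: {signal_to_conc(0.8):.1f} ng/mL")
```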
Procedia PDF Downloads 85
5256 An Analysis of Relation Between Soil Radon Anomalies and Geological Environment Change
Authors: Mengdi Zhang, Xufeng Liu, Zhenji Gao, Ying Li, Zhu Rao, Yi Huang
Abstract:
As an open system, the earth is constantly undergoing the transformation and release of matter and energy. Fault zones are relatively discontinuous and fragile geological structures, and the release of material and energy inside the Earth is strongest in relatively weak fault zones. Earthquake events frequently occur in fault zones and are closely related to tectonic activity in these zones. In earthquake precursor observation, monitoring the spatiotemporal changes in the release of related gases near fault zones (such as radon gas, hydrogen, carbon dioxide, helium), and analyzing earthquake precursor anomalies, can be effective means to forecast the occurrence of earthquake events. Radon gas, as an inert radioactive gas generated during the decay of uranium and thorium, is not only a indicator for monitoring tectonic and seismic activity, but also an important topic for ecological and environmental health, playing a crucial role in uranium exploration. At present, research on soil radon gas mainly focuses on the measurement of soil gas concentration and flux in fault zone profiles, while research on the correlation between spatiotemporal concentration changes in the same region and its geological background is relatively little. In this paper, Tangshan area in north China is chosen as research area. An analysis was conducted on the seismic geological background of Tangshan area firstly. Then based on quantitative analysis and comparison of measurement radon concentrations of 2023 and 2010, combined with the study of seismic activity and environmental changes during the time period, the spatiotemporal distribution characteristics and influencing factors were explored, in order to analyze the gas emission characteristics of the Tangshan fault zone and its relationship with fault activity, which aimed to be useful for the future work in earthquake monitor of Tangshan area.Keywords: radon, Northern China, soil gas, earthquake
Procedia PDF Downloads 84
5255 BIM4Cult Leveraging BIM and IoT for Enhancing Fire Safety in Historical Buildings
Authors: Anastasios Manos, Despina Elisabeth Filippidou
Abstract:
Introduction: Historical buildings are an inte-gral part of the cultural heritage of every place, and beyond the obvious need for protection against risks, they have specific requirements regarding the handling of hazards and disasters such as fire, floods, earthquakes, etc. Ensuring high levels of protection and safety for these buildings is impera-tive for two distinct but interconnected reasons: a) they themselves constitute cultural heritage, and b) they are often used as museums/cultural spaces, necessitating the protection of both human life (vis-itors and workers) and the cultural treasures they house. However, these buildings present serious constraints in implementing the necessary measures to protect them from destruction due to their unique architecture, construction methods, and/or the structural materials used in the past, which have created an existing condition that is sometimes challenging to reshape and operate within the framework of modern regulations and protection measures. One of the most devastating risks that threaten historical buildings is fire. Catastrophic fires demonstrate the need for timely evaluation of fire safety measures in historical buildings. Recog-nizing the criticality of protecting historical build-ings from the risk of fire, the Confederation of Fire Protection Associations in Europe (CFPA E) issued specific guidelines in 2013 (CFPA-E Guideline No 30:2013 F) for the fire protection of historical buildings at the European level. However, until now, few actions have been implemented towards leveraging modern technologies in the field of con-struction and maintenance of buildings, such as Building Information Modeling (BIM) and the Inter-net of Things (IoT), for the protection of historical buildings from risks like fires, floods, etc. The pro-ject BIM4Cult has bee developed in order to fill this gap. It is a tool for timely assessing and monitoring of the fire safety level of historical buildings using BIM and IoT technologies in an integrated manner. The tool serves as a decision support expert system for improving the fire safety of historical buildings by continuously monitoring, controlling and as-sessing critical risk factors for fire.Keywords: Iot, fire, BIM, expert system
Procedia PDF Downloads 71
5254 Measurements and Predictions of Hydrates of CO₂-rich Gas Mixture in Equilibrium with Multicomponent Salt Solutions
Authors: Abdullahi Jibril, Rod Burgass, Antonin Chapoy
Abstract:
Carbon dioxide (CO₂) is widely used in reservoirs to enhance oil and gas production, mixing with natural gas and other impurities in the process. However, hydrate formation frequently hinders the efficiency of CO₂-based enhanced oil recovery, causing pipeline blockages and pressure build-ups. Current hydrate prediction methods are primarily designed for gas mixtures with low CO₂ content and struggle to accurately predict hydrate formation in CO₂-rich streams in equilibrium with salt solutions. Given that oil and gas reservoirs are saline, experimental data for CO₂-rich streams in equilibrium with salt solutions are essential to improve these predictive models. This study investigates the inhibition of hydrate formation in a CO₂-rich gas mixture (CO₂, CH₄, N₂, H₂ at 84.73/15/0.19/0.08 mol.%) using multicomponent salt solutions at concentrations of 2.4 wt.%, 13.65 wt.%, and 27.3 wt.%. The setup, test fluids, methodology, and results for hydrates formed in equilibrium with varying salt solution concentrations are presented. Measurements were conducted using an isochoric pressure-search method at pressures up to 45 MPa. Experimental data were compared with predictions from a thermodynamic model based on the Cubic-Plus-Association equation of state (EoS), while hydrate-forming conditions were modeled using the van der Waals and Platteeuw solid solution theory. Water activity was evaluated based on hydrate suppression temperature to assess consistency in the inhibited systems. Results indicate that hydrate stability is significantly influenced by inhibitor concentration, offering valuable guidelines for the design and operation of pipeline systems involved in offshore gas transport of CO₂-rich streams.Keywords: CO₂-rich streams, hydrates, monoethylene glycol, phase equilibria
Procedia PDF Downloads 23
5253 Event Data Representation Based on Time Stamp for Pedestrian Detection
Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita
Abstract:
In association with the wave of electric vehicles (EV), low energy consumption systems have become more and more important. One of the key technologies to realize low energy consumption is a dynamic vision sensor (DVS), or we can call it an event sensor, neuromorphic vision sensor and so on. This sensor has several features, such as high temporal resolution, which can achieve 1 Mframe/s, and a high dynamic range (120 DB). However, the point that can contribute to low energy consumption the most is its sparsity; to be more specific, this sensor only captures the pixels that have intensity change. In other words, there is no signal in the area that does not have any intensity change. That is to say, this sensor is more energy efficient than conventional sensors such as RGB cameras because we can remove redundant data. On the other side of the advantages, it is difficult to handle the data because the data format is completely different from RGB image; for example, acquired signals are asynchronous and sparse, and each signal is composed of x-y coordinate, polarity (two values: +1 or -1) and time stamp, it does not include intensity such as RGB values. Therefore, as we cannot use existing algorithms straightforwardly, we have to design a new processing algorithm to cope with DVS data. In order to solve difficulties caused by data format differences, most of the prior arts make a frame data and feed it to deep learning such as Convolutional Neural Networks (CNN) for object detection and recognition purposes. However, even though we can feed the data, it is still difficult to achieve good performance due to a lack of intensity information. Although polarity is often used as intensity instead of RGB pixel value, it is apparent that polarity information is not rich enough. Considering this context, we proposed to use the timestamp information as a data representation that is fed to deep learning. Concretely, at first, we also make frame data divided by a certain time period, then give intensity value in response to the timestamp in each frame; for example, a high value is given on a recent signal. We expected that this data representation could capture the features, especially of moving objects, because timestamp represents the movement direction and speed. By using this proposal method, we made our own dataset by DVS fixed on a parked car to develop an application for a surveillance system that can detect persons around the car. We think DVS is one of the ideal sensors for surveillance purposes because this sensor can run for a long time with low energy consumption in a NOT dynamic situation. For comparison purposes, we reproduced state of the art method as a benchmark, which makes frames the same as us and feeds polarity information to CNN. Then, we measured the object detection performances of the benchmark and ours on the same dataset. As a result, our method achieved a maximum of 7 points greater than the benchmark in the F1 score.Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption
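The proposed representation gives each pixel a value that grows with how recent its last event is inside the accumulation window. A brief sketch of that idea follows; the sensor resolution, window length, and linear decay are assumptions, not the authors' exact implementation.

```python
import numpy as np

# Sketch of a timestamp-based frame for event data: within each accumulation
# window, every pixel stores a value that is larger for more recent events.

HEIGHT, WIDTH = 260, 346          # assumed sensor resolution
WINDOW_US = 10_000                # assumed 10 ms accumulation window

def events_to_timestamp_frame(events, t_end):
    """events: iterable of (x, y, polarity, timestamp_us); returns float frame."""
    frame = np.zeros((HEIGHT, WIDTH), dtype=np.float32)
    t_start = t_end - WINDOW_US
    for x, y, _pol, ts in events:
        if t_start <= ts <= t_end:
            # Newer events get values closer to 1.0, older ones closer to 0.0.
            frame[y, x] = max(frame[y, x], (ts - t_start) / WINDOW_US)
    return frame

events = [(10, 20, +1, 995_000), (10, 21, -1, 999_500), (100, 50, +1, 980_000)]
frame = events_to_timestamp_frame(events, t_end=1_000_000)
print(frame[20, 10], frame[21, 10], frame[50, 100])
```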
Procedia PDF Downloads 101
5252 Vibration-Based Structural Health Monitoring of a 21-Story Building with Tuned Mass Damper in Seismic Zone
Authors: David Ugalde, Arturo Castillo, Leopoldo Breschi
Abstract:
The Tuned Mass Dampers (TMDs) are an effective system for mitigating vibrations in building structures. These dampers have traditionally focused on the protection of high-rise buildings against earthquakes and wind loads. The Camara Chilena de la Construction (CChC) building, built in 2018 in Santiago, Chile, is a 21-story RC wall building equipped with a 150-ton TMD and instrumented with six permanent accelerometers, offering an opportunity to monitor the dynamic response of this damped structure. This paper presents the system identification of the CChC building using power spectral density plots of ambient vibration and two seismic events (5.5 Mw and 6.7 Mw). Linear models of the building with and without the TMD are used to compute the theoretical natural periods through modal analysis and simulate the response of the building through response history analysis. Results show that natural periods obtained from both ambient vibrations and earthquake records are quite similar to the theoretical periods given by the modal analysis of the building model. Some of the experimental periods are noticeable by simple inspection of the earthquake records. The accelerometers in the first story better captured the modes related to the building podium while the upper accelerometers clearly captured the modes related to the tower. The earthquake simulation showed smaller accelerations in the model with TMD that are similar to that measured by the accelerometers. It is concluded that the system identification through power spectral density shows consistency with the expected dynamic properties. The structural health monitoring of the CChC building confirms the advantages of seismic protection technologies such as TMDs in seismic prone areas.Keywords: system identification, tuned mass damper, wall buildings, seismic protection
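System identification in this study relies on locating peaks in power spectral density plots of the recorded accelerations. A small sketch of that workflow on a synthetic two-mode signal is given below; the sampling rate, mode frequencies, and noise level are assumptions standing in for the CChC accelerometer records.

```python
import numpy as np
from scipy.signal import welch, find_peaks

# Sketch: estimate natural frequencies by peak-picking a Welch PSD of an
# acceleration record (synthetic signal, assumed parameters).

fs = 100.0                                     # sampling rate, Hz
t = np.arange(0, 600, 1 / fs)                  # 10 minutes of "ambient" response
signal = (np.sin(2 * np.pi * 0.8 * t)          # fundamental mode ~0.8 Hz
          + 0.4 * np.sin(2 * np.pi * 2.6 * t)  # higher mode ~2.6 Hz
          + 0.5 * np.random.randn(t.size))

freqs, psd = welch(signal, fs=fs, nperseg=4096)
peaks, _ = find_peaks(psd, height=0.1 * psd.max())
print("Identified frequencies (Hz):", np.round(freqs[peaks], 2))
print("Corresponding periods (s):", np.round(1.0 / freqs[peaks], 2))
```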
Procedia PDF Downloads 126
5251 Neuron-Based Control Mechanisms for a Robotic Arm and Hand
Authors: Nishant Singh, Christian Huyck, Vaibhav Gandhi, Alexander Jones
Abstract:
A robotic arm and hand controlled by simulated neurons is presented. The robot makes use of a biological neuron simulator using a point neural model. The neurons and synapses are organised to create a finite state automaton including neural inputs from sensors, and outputs to effectors. The robot performs a simple pick-and-place task. This work is a proof of concept study for a longer term approach. It is hoped that further work will lead to more effective and flexible robots. As another benefit, it is hoped that further work will also lead to a better understanding of human and other animal neural processing, particularly for physical motion. This is a multidisciplinary approach combining cognitive neuroscience, robotics, and psychology.Keywords: cell assembly, force sensitive resistor, robot, spiking neuron
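The robot above is driven by point-model spiking neurons wired into a finite state automaton. As a tiny sketch of the kind of point neuron involved (a leaky integrate-and-fire unit with assumed constants, not the cited simulator's model), consider:

```python
# Sketch of a leaky integrate-and-fire point neuron: the membrane potential
# leaks toward rest, integrates input current, and emits a spike on crossing
# a threshold. Constants are illustrative assumptions.

def lif_run(input_current, dt=1.0, tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    v, spikes = v_rest, []
    for step, i_in in enumerate(input_current):
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:
            spikes.append(step)
            v = v_reset
    return spikes

# Constant drive produces a regular spike train; spike times could gate
# finite-state transitions that switch effectors (e.g. grip / release).
print(lif_run([1.5] * 100))
```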
Procedia PDF Downloads 349
5250 Management Tools for Assessment of Adverse Reactions Caused by Contrast Media at the Hospital
Authors: Pranee Suecharoen, Ratchadaporn Soontornpas, Jaturat Kanpittaya
Abstract:
Background: Contrast media has an important role for disease diagnosis through detection of pathologies. Contrast media can, however, cause adverse reactions after administration of its agents. Although non-ionic contrast media are commonly used, the incidence of adverse events is relatively low. The most common reactions found (10.5%) were mild and manageable and/or preventable. Pharmacists can play an important role in evaluating adverse reactions, including awareness of the specific preparation and the type of adverse reaction. As most common types of adverse reactions are idiosyncratic or pseudo-allergic reactions, common standards need to be established to prevent and control adverse reactions promptly and effectively. Objective: To measure the effect of using tools for symptom evaluation in order to reduce the severity, or prevent the occurrence, of adverse reactions from contrast media. Methods: Retrospective review descriptive research with data collected on adverse reactions assessment and Naranjo’s algorithm between June 2015 and May 2016. Results: 158 patients (10.53%) had adverse reactions. Of the 1,500 participants with an adverse event evaluation, 137 (9.13%) had a mild adverse reaction, including hives, nausea, vomiting, dizziness, and headache. These types of symptoms can be treated (i.e., with antihistamines, anti-emetics) and the patient recovers completely within one day. The group with moderate adverse reactions, numbering 18 cases (1.2%), had hypertension or hypotension, and shortness of breath. Severe adverse reactions numbered 3 cases (0.2%) and included swelling of the larynx, cardiac arrest, and loss of consciousness, requiring immediate treatment. No other complications under close medical supervision were recorded (i.e., corticosteroids use, epinephrine, dopamine, atropine, or life-saving devices). Using the guideline, therapies are divided into general and specific and are performed according to the severity, risk factors and ingestion of contrast media agents. Patients who have high-risk factors were screened and treated (i.e., prophylactic premedication) for prevention of severe adverse reactions, especially those with renal failure. Thus, awareness for the need for prescreening of different risk factors is necessary for early recognition and prompt treatment. Conclusion: Studying adverse reactions can be used to develop a model for reducing the level of severity and setting a guideline for a standardized, multidisciplinary approach to adverse reactions.Keywords: role of pharmacist, management of adverse reactions, guideline for contrast media, non-ionic contrast media
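Causality in adverse-reaction assessments with Naranjo's algorithm is conventionally graded by mapping the total score to categories (≥9 definite, 5–8 probable, 1–4 possible, ≤0 doubtful). A small sketch of that mapping:

```python
# Sketch: map a Naranjo algorithm total score to the conventional causality
# category used when assessing adverse drug (contrast media) reactions.

def naranjo_category(score):
    if score >= 9:
        return "definite"
    if score >= 5:
        return "probable"
    if score >= 1:
        return "possible"
    return "doubtful"

for s in (10, 6, 3, 0):
    print(s, "->", naranjo_category(s))
```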
Procedia PDF Downloads 303
5249 Creation of Ultrafast Ultra-Broadband High Energy Laser Pulses
Authors: Walid Tawfik
Abstract:
The interaction of high-intensity ultrashort laser pulses with plasma enables many significant applications, including soft X-ray lasers, time-resolved laser-induced plasma spectroscopy (LIPS), and laser-driven accelerators. The development of femtosecond down to ten-femtosecond optical pulses has provided scientists with a vital tool for a variety of ultrashort phenomena, such as high-field physics, femtochemistry, and high harmonic generation (HHG). In this research, we generate two-octave-wide ultrashort supercontinuum pulses with an optical spectrum extending from 3.5 eV (ultraviolet) to 1.3 eV (near-infrared) using a capillary fiber filled with neon gas. These pulses are formed by nonlinear self-phase modulation, with the neon gas acting as the nonlinear medium. The generated pulses were investigated using spectral phase interferometry for direct electric-field reconstruction (SPIDER), and a complete description of the output pulses was obtained. The characterization of the produced pulses includes the beam profile, the pulse width, and the spectral bandwidth. After reaching optimized conditions, the intensity autocorrelation function of the reconstructed pulse was used to determine the shortest pulse duration, yielding transform-limited ultrashort pulses with durations below 6 fs and energies up to 600 μJ. Moreover, the effect of neon pressure variation on the pulse width was examined; the nonlinear self-phase modulation was found to increase with the neon gas pressure. The observed results may lead to an advanced method to control and monitor ultrashort transient interactions in femtochemistry. Keywords: supercontinuum, ultrafast, SPIDER, ultra-broadband
Procedia PDF Downloads 224
5248 A Study on the Establishment of a 4-Joint Based Motion Capture System and Data Acquisition
Authors: Kyeong-Ri Ko, Seong Bong Bae, Jang Sik Choi, Sung Bum Pan
Abstract:
A simple method for testing postural imbalance of the human body is to check for differences in the bilateral shoulder and pelvic heights of the subject. In this paper, to screen for spinal disorders, the authors study how to establish a motion capture system that obtains and expresses the motions of four joints, and how to acquire data based on this system. The four sensors are attached to both shoulders and to the pelvis. To verify the established system, the normal and abnormal postures of subjects listening to a lecture were recorded using the established 4-joint based motion capture system. The results confirmed that the motions performed by the subjects were identical to the 3-dimensional simulation. Keywords: inertial sensor, motion capture, motion data acquisition, posture imbalance
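As a minimal sketch of the imbalance check described above (the coordinate frame, sensor positions, and flag threshold are assumptions, not the authors' software), the bilateral height differences can be computed directly from the four sensor positions:

```python
# Sketch: quantify posture imbalance as bilateral height differences between
# the shoulder pair and the pelvis pair. Coordinates are (x, y, z) in metres
# with z vertical; positions and threshold are assumed.

def height_asymmetry(left, right):
    """Signed vertical difference (left minus right), in metres."""
    return left[2] - right[2]

sensors = {
    "shoulder_left":  (0.18, 0.00, 1.42),
    "shoulder_right": (-0.18, 0.00, 1.39),
    "pelvis_left":    (0.14, 0.00, 0.98),
    "pelvis_right":   (-0.14, 0.00, 0.97),
}

shoulder_diff = height_asymmetry(sensors["shoulder_left"], sensors["shoulder_right"])
pelvis_diff = height_asymmetry(sensors["pelvis_left"], sensors["pelvis_right"])

THRESHOLD_M = 0.02  # assumed 2 cm flag level
print(f"shoulder asymmetry: {shoulder_diff * 100:.1f} cm")
print(f"pelvis asymmetry:   {pelvis_diff * 100:.1f} cm")
print("possible imbalance" if max(abs(shoulder_diff), abs(pelvis_diff)) > THRESHOLD_M else "within threshold")
```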
Procedia PDF Downloads 515
5247 Cognitive Theory and the Design of Integrate Curriculum
Authors: Bijan Gillani, Roya Gillani
Abstract:
The purpose of this paper is to propose a pedagogical model where engineering provides the interconnection to integrate the other topics of science, technology, engineering, and mathematics. The author(s) will first present a brief discussion of cognitive theory and then derive an integrated pedagogy to use engineering and technology, such as drones, sensors, camera, iPhone, radio waves as the nexus to an integrated curriculum development for the other topics of STEM. Based on this pedagogy, one example developed by the author(s) called “Drones and Environmental Science,” will be presented that uses a drone and related technology as an appropriate instructional delivery medium to apply Piaget’s cognitive theory to create environments that promote the integration of different STEM subjects that relate to environmental science.Keywords: cogntive theories, drone, environmental science, pedagogy
Procedia PDF Downloads 576
5246 Mineralogy and Thermobarometry of Xenoliths in Basalt from the Chanthaburi-Trat Gem Fields, Thailand
Authors: Apichet Boonsoong
Abstract:
In the Chanthaburi-Trat basalts, xenoliths are composed of essentially ultramafic xenoliths (particularly spinel lherzolite) with a few of an aggregate of feldspar. Some 19 ultramafic xenoliths were collected from 13 different locations. They range in size from 3.5 to 60mm across. Most are weathered and oxidized on the surface but fresh samples are obtained from cut surfaces. Chemical analyses were performed on carbon-coated polished thin sections using a fully automated CAMECA SX-50 electron microprobe (EMPA) in wavelength-dispersive mode. In thin section, they are seen to consist of variable amounts of olivine, clinopyroxene, orthopyroxene with minor spinel and plagioclase, and are classed as lherzolite. Modal compositions of the ultramafic nodules vary with olivine (60-75%), clinopyroxene (20-30%), orthopyroxene (0-15%), minor spinel (1-3%) and plagioclase (<1%). The essential minerals form an equigranular, medium- to coarse-grained, granoblastic texture, and all are in mutual contact indicating attainment of equilibrium. Reaction rims are common along the nodule margins and in some are also present along grain boundaries. Zoning occurs in clinopyroxene, and to a lesser extent in orthopyroxene. The homogeneity of mineral compositions in lherzolite xenoliths suggests the attainment of equilibrium. The equilibration temperatures of these xenoliths are estimated to be in the range of 973 to 1063°C. Pressure estimates are not so easily obtained because no suitable barometer exists for garnet-free lherzolites and so an indirect method was used. The general mineral assemblage of the lherzolite xenoliths and the absence of garnet indicate a pressure range of approximately 12–19kbar, which is equivalent to depths approximately of 38 to 60km.Keywords: chanthaburi-trat basalts, spinel lherzolite, xenoliths, 973 to 1063°C, 38 to 60km
Procedia PDF Downloads 120
5245 Specific Earthquake Ground Motion Levels That Would Affect Medium-To-High Rise Buildings
Authors: Rhommel Grutas, Ishmael Narag, Harley Lacbawan
Abstract:
Construction of high-rise buildings is a means to address the increasing population in Metro Manila, Philippines. The existence of the Valley Fault System within the metropolis and other nearby active faults poses threats to a densely populated city. The distant, shallow and large magnitude earthquakes have the potential to generate slow and long-period vibrations that would affect medium-to-high rise buildings. Heavy damage and building collapse are consequences of prolonged shaking of the structure. If the ground and the building have almost the same period, there would be a resonance effect which would cause the prolonged shaking of the building. Microzoning the long-period ground response would aid in the seismic design of medium to high-rise structures. The shear-wave velocity structure of the subsurface is an important parameter in order to evaluate ground response. Borehole drilling is one of the conventional methods of determining shear-wave velocity structure however, it is an expensive approach. As an alternative geophysical exploration, microtremor array measurements can be used to infer the structure of the subsurface. Microtremor array measurement system was used to survey fifty sites around Metro Manila including some municipalities of Rizal and Cavite. Measurements were carried out during the day under good weather conditions. The team was composed of six persons for the deployment and simultaneous recording of the microtremor array sensors. The instruments were laid down on the ground away from sewage systems and leveled using the adjustment legs and bubble level. A total of four sensors were deployed for each site, three at the vertices of an equilateral triangle with one sensor at the centre. The circular arrays were set up with a maximum side length of approximately four kilometers and the shortest side length for the smallest array is approximately at 700 meters. Each recording lasted twenty to sixty minutes. From the recorded data, f-k analysis was applied to obtain phase velocity curves. Inversion technique is applied to construct the shear-wave velocity structure. This project provided a microzonation map of the metropolis and a profile showing the long-period response of the deep sedimentary basin underlying Metro Manila which would be suitable for local administrators in their land use planning and earthquake resistant design of medium to high-rise buildings.Keywords: earthquake, ground motion, microtremor, seismic microzonation
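Once a shear-wave velocity structure has been inverted from the microtremor arrays, a first-order estimate of a site's fundamental period for a single soft layer over stiffer material comes from the quarter-wavelength relation T₀ = 4H/Vs. The sketch below uses assumed layer values, not the Metro Manila inversion results, and illustrates how deeper, softer columns shift the response toward longer periods.

```python
# Sketch: fundamental site period of a single sediment layer over bedrock,
# T0 = 4 * H / Vs (quarter-wavelength approximation). Thickness and
# shear-wave velocity below are assumed illustrative values.

def fundamental_period(thickness_m, vs_m_per_s):
    return 4.0 * thickness_m / vs_m_per_s

for h, vs in [(100.0, 400.0), (500.0, 600.0), (1500.0, 800.0)]:
    print(f"H = {h:6.0f} m, Vs = {vs:4.0f} m/s -> T0 = {fundamental_period(h, vs):5.2f} s")
```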
Procedia PDF Downloads 468
5244 Coordinative Remote Sensing Observation Technology for a High Altitude Barrier Lake
Authors: Zhang Xin
Abstract:
Barrier lakes are lakes formed by storing water in valleys, river valleys or riverbeds after being blocked by landslide, earthquake, debris flow, and other factors. They have great potential safety hazards. When the water is stored to a certain extent, it may burst in case of strong earthquake or rainstorm, and the lake water overflows, resulting in large-scale flood disasters. In order to ensure the safety of people's lives and property in the downstream, it is very necessary to monitor the barrier lake. However, it is very difficult and time-consuming to manually monitor the barrier lake in high altitude areas due to the harsh climate and steep terrain. With the development of earth observation technology, remote sensing monitoring has become one of the main ways to obtain observation data. Compared with a single satellite, multi-satellite remote sensing cooperative observation has more advantages; its spatial coverage is extensive, observation time is continuous, imaging types and bands are abundant, it can monitor and respond quickly to emergencies, and complete complex monitoring tasks. Monitoring with multi-temporal and multi-platform remote sensing satellites can obtain a variety of observation data in time, acquire key information such as water level and water storage capacity of the barrier lake, scientifically judge the situation of the barrier lake and reasonably predict its future development trend. In this study, The Sarez Lake, which formed on February 18, 1911, in the central part of the Pamir as a result of blockage of the Murgab River valley by a landslide triggered by a strong earthquake with magnitude of 7.4 and intensity of 9, is selected as the research area. Since the formation of Lake Sarez, it has aroused widespread international concern about its safety. At present, the use of mechanical methods in the international analysis of the safety of Lake Sarez is more common, and remote sensing methods are seldom used. This study combines remote sensing data with field observation data, and uses the 'space-air-ground' joint observation technology to study the changes in water level and water storage capacity of Lake Sarez in recent decades, and evaluate its safety. The situation of the collapse is simulated, and the future development trend of Lake Sarez is predicted. The results show that: 1) in recent decades, the water level of Lake Sarez has not changed much and remained at a stable level; 2) unless there is a strong earthquake or heavy rain, it is less likely that the Lake Sarez will be broken under normal conditions, 3) lake Sarez will remain stable in the future, but it is necessary to establish an early warning system in the Lake Sarez area for remote sensing of the area, 4) the coordinative remote sensing observation technology is feasible for the high altitude barrier lake of Sarez.Keywords: coordinative observation, disaster, remote sensing, geographic information system, GIS
Procedia PDF Downloads 128
5243 Response Surface Methodology Approach to Defining Ultrafiltration of Steepwater from Corn Starch Industry
Authors: Zita I. Šereš, Ljubica P. Dokić, Dragana M. Šoronja Simović, Cecilia Hodur, Zsuzsanna Laszlo, Ivana Nikolić, Nikola Maravić
Abstract:
In this work the concentration of steep-water from corn starch industry is monitored using ultrafiltration membrane. The aim was to examine the conditions of ultrafiltration of steep-water by applying the membrane of 2.5nm. The parameters that vary during the course of ultrafiltration, were the transmembrane pressure, flow rate, while the permeate flux and the dry matter content of permeate and retentive were the dependent parameter constantly monitored during the process. Experiments of ultrafiltration are conducted on the samples of steep-water, which were obtained from the starch wet milling plant Jabuka, Pancevo. The procedure of ultrafiltration on a single-channel 250mm length, with inner diameter of 6.8mm and outer diameter of 10mm membrane were carried on. The membrane is made of a-Al2O3 with TiO2 layer obtained from GEA (Germany). The experiments are carried out at a flow rate ranging from 100 to 200lh-1 and transmembrane pressure of 1-3 bars. During the experiments of steep-water ultrafiltration, the change of permeate flux, dry matter content of permeate and retentive, as well as the absorbance changes of the permeate and retentive were monitored. The experimental results showed that the maximum flux reaches about 40lm-2h-1. For responses obtained after experiments, a polynomial model of the second degree is established to evaluate and quantify the influence of the variables. The quadratic equitation fits with the experimental values, where the coefficient of determination for flux is 0.96. The dry matter content of the retentive is increased for about 6%, while the dry matter content of permeate was reduced for about 35-40%, respectively. During steep-water ultrafiltration in permeate stays 40% less dry matter compared to the feed.Keywords: ultrafiltration, steep-water, starch industry, ceramic membrane
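The second-degree polynomial (response-surface) model relating flux to transmembrane pressure and flow rate can be fitted by ordinary least squares. The sketch below does this on invented data points (the real design points and coefficients are in the study), assuming a full quadratic in the two factors:

```python
import numpy as np

# Sketch: fit a second-degree response-surface model
#   flux = b0 + b1*P + b2*Q + b3*P^2 + b4*Q^2 + b5*P*Q
# where P is transmembrane pressure (bar) and Q is flow rate (L/h).
# The data points below are invented placeholders, not measured fluxes.

P = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 2.0, 1.5, 2.5])
Q = np.array([100., 200., 100., 200., 100., 200., 150., 150., 150.])
flux = np.array([18., 22., 27., 33., 31., 40., 30., 25., 34.])

X = np.column_stack([np.ones_like(P), P, Q, P**2, Q**2, P * Q])
coeffs, *_ = np.linalg.lstsq(X, flux, rcond=None)
pred = X @ coeffs
r2 = 1.0 - np.sum((flux - pred) ** 2) / np.sum((flux - flux.mean()) ** 2)

print("coefficients:", np.round(coeffs, 4))
print("R^2 =", round(r2, 3))
```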
Procedia PDF Downloads 284
5242 Numerical Solution of Momentum Equations Using Finite Difference Method for Newtonian Flows in Two-Dimensional Cartesian Coordinate System
Authors: Ali Ateş, Ansar B. Mwimbo, Ali H. Abdulkarim
Abstract:
General transport equation has a wide range of application in Fluid Mechanics and Heat Transfer problems. In this equation, generally when φ variable which represents a flow property is used to represent fluid velocity component, general transport equation turns into momentum equations or with its well known name Navier-Stokes equations. In these non-linear differential equations instead of seeking for analytic solutions, preferring numerical solutions is a more frequently used procedure. Finite difference method is a commonly used numerical solution method. In these equations using velocity and pressure gradients instead of stress tensors decreases the number of unknowns. Also, continuity equation, by integrating the system, number of equations is obtained as number of unknowns. In this situation, velocity and pressure components emerge as two important parameters. In the solution of differential equation system, velocities and pressures must be solved together. However, in the considered grid system, when pressure and velocity values are jointly solved for the same nodal points some problems confront us. To overcome this problem, using staggered grid system is a referred solution method. For the computerized solutions of the staggered grid system various algorithms were developed. From these, two most commonly used are SIMPLE and SIMPLER algorithms. In this study Navier-Stokes equations were numerically solved for Newtonian flow, whose mass or gravitational forces were neglected, for incompressible and laminar fluid, as a hydro dynamically fully developed region and in two dimensional cartesian coordinate system. Finite difference method was chosen as the solution method. This is a parametric study in which varying values of velocity components, pressure and Reynolds numbers were used. Differential equations were discritized using central difference and hybrid scheme. The discritized equation system was solved by Gauss-Siedel iteration method. SIMPLE and SIMPLER were used as solution algorithms. The obtained results, were compared for central difference and hybrid as discritization methods. Also, as solution algorithm, SIMPLE algorithm and SIMPLER algorithm were compared to each other. As a result, it was observed that hybrid discritization method gave better results over a larger area. Furthermore, as computer solution algorithm, besides some disadvantages, it can be said that SIMPLER algorithm is more practical and gave result in short time. For this study, a code was developed in DELPHI programming language. The values obtained in a computer program were converted into graphs and discussed. During sketching, the quality of the graph was increased by adding intermediate values to the obtained result values using Lagrange interpolation formula. For the solution of the system, number of grid and node was found as an estimated. At the same time, to indicate that the obtained results are satisfactory enough, by doing independent analysis from the grid (GCI analysis) for coarse, medium and fine grid system solution domain was obtained. It was observed that when graphs and program outputs were compared with similar studies highly satisfactory results were achieved.Keywords: finite difference method, GCI analysis, numerical solution of the Navier-Stokes equations, SIMPLE and SIMPLER algoritms
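The discretized equation system above is solved by Gauss-Seidel iteration. The sketch below applies the same iteration to a small 2D Laplace problem on a uniform grid, as a stand-in for one inner solve of a pressure-correction-type equation; the grid size, boundary values, and tolerance are assumed, and this is not the full SIMPLE/SIMPLER loop.

```python
import numpy as np

# Sketch: Gauss-Seidel iteration for a 2D Laplace equation on a uniform grid,
# the kind of inner solve used within SIMPLE-type algorithms.

n = 21
phi = np.zeros((n, n))
phi[0, :] = 1.0          # fixed value on one boundary, zero elsewhere

for iteration in range(10_000):
    max_change = 0.0
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new = 0.25 * (phi[i + 1, j] + phi[i - 1, j] + phi[i, j + 1] + phi[i, j - 1])
            max_change = max(max_change, abs(new - phi[i, j]))
            phi[i, j] = new
    if max_change < 1e-5:
        break

print(f"converged after {iteration + 1} sweeps; centre value = {phi[n // 2, n // 2]:.4f}")
```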
Procedia PDF Downloads 392
5241 Soil Quality State and Trends in New Zealand’s Largest City after Fifteen Years
Authors: Fiona Curran-Cournane
Abstract:
Soil quality monitoring is a science-based soil management tool that assesses soil ecosystem health. A soil monitoring program in Auckland, New Zealand’s largest city, extends from 1995 to the present. The objective of this study was to firstly determine changes in soil parameters (basic soil properties and heavy metals) that were assessed from rural land in 1995-2000 and repeated in 2008-2012. The second objective was to determine differences in soil parameters across various land uses including native bush, rural (horticulture, pasture and plantation forestry) and urban land uses using soil data collected in more recent years (2009-2013). Across rural land, mean concentrations of Olsen P had significantly increased in the second sampling period and was identified as the indicator of most concern, followed by soil macroporosity, particularly for horticultural and pastoral land. Mean concentrations of Cd were also greatest for pastoral and horticultural land and a positive correlation existed between these two parameters, which highlights the importance of analysing basic soil parameters in conjunction with heavy metals. In contrast, mean concentrations of As, Cr, Pb, Ni and Zn were greatest for urban sites. Native bush sites had the lowest concentrations of heavy metals and were used to calculate a ‘pollution index’ (PI). The mean PI was classified as high (PI > 3) for Cd and Ni and moderate for Pb, Zn, Cr, Cu, As, and Hg, indicating high levels of heavy metal pollution across both rural and urban soils. From a land use perspective, the mean ‘integrated pollution index’ was highest for urban sites at 2.9 followed by pasture, horticulture and plantation forests at 2.7, 2.6, and 0.9, respectively. It is recommended that soil sampling continues over time because a longer spanning record will allow further identification of where soil problems exist and where resources need to be targeted in the future. Findings from this study will also inform policy and science direction in regional councils.Keywords: heavy metals, pollution index, rural and urban land use, soil quality
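A pollution index of the kind used above is typically the ratio of a measured metal concentration to its background (here, native-bush) level, with an integrated index aggregating the single-metal values. The sketch below uses invented concentrations, and the arithmetic-mean aggregation is an assumption (other integrated indices, e.g. Nemerow, are also common).

```python
# Sketch: single-metal pollution index PI = C_site / C_background (background
# taken from native-bush sites) and one simple aggregate across metals.
# Concentrations are invented placeholders.

background = {"Cd": 0.10, "Pb": 15.0, "Zn": 40.0, "Ni": 8.0}      # mg/kg
urban_site = {"Cd": 0.32, "Pb": 55.0, "Zn": 120.0, "Ni": 20.0}    # mg/kg

pi = {metal: urban_site[metal] / background[metal] for metal in background}
integrated_pi = sum(pi.values()) / len(pi)

for metal, value in pi.items():
    print(f"PI({metal}) = {value:.1f}")
print(f"integrated PI = {integrated_pi:.1f}")
```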
Procedia PDF Downloads 378
5240 Smart Technology for Hygrothermal Performance of Low Carbon Material Using an Artificial Neural Network Model
Authors: Manal Bouasria, Mohammed-Hichem Benzaama, Valérie Pralong, Yassine El Mendili
Abstract:
Reducing the quantity of cement in cementitious composites can help to reduce the environmental impact of construction materials. By-products such as ferronickel slag (FNS), fly ash (FA), and Crepidula fornicata (CR) are promising options for cement replacement. In this work, we investigated the effect of substituting cement with FNS-CR and FA-CR on the mechanical properties of mortar and on the thermal properties of concrete. For ages ranging from 2 to 28 days, the mechanical properties were obtained by 3-point bending and compression tests. The chosen mix was used to construct a prototype in order to study the material’s hygrothermal performance, and the data collected by the sensors placed on the prototype were used to build an artificial neural network. Keywords: artificial neural network, cement, circular economy, concrete, by-products
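A compact sketch of the kind of artificial neural network that can be trained on the prototype's sensor records is given below: a small multilayer perceptron regressing indoor temperature on outdoor conditions. The synthetic data, input/output choice, and network size are assumptions, not the project's dataset or model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Sketch: small multilayer perceptron predicting indoor temperature from
# outdoor temperature and relative humidity (synthetic stand-in data).

rng = np.random.default_rng(0)
outdoor_t = rng.uniform(0, 30, 500)           # deg C
humidity = rng.uniform(30, 95, 500)           # % RH
indoor_t = 0.6 * outdoor_t - 0.02 * humidity + 12 + rng.normal(0, 0.3, 500)

X = np.column_stack([outdoor_t, humidity])
X_train, X_test, y_train, y_test = train_test_split(X, indoor_t, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))
```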
Procedia PDF Downloads 114
5239 Cortex-M3 Based Virtual Platform Implementation for Software Development
Authors: Jun Young Moon, Hyeonggeon Lee, Jong Tae Kim
Abstract:
In this paper, we present a Cortex-M3 based virtual platform that can virtualize a wearable hardware platform and evaluate hardware performance. The Cortex-M3 is a very popular microcontroller in wearable devices, hardware sensors, and display devices. The platform can be used to implement the software layer for a specific hardware architecture. By using the proposed platform, the software development process can be parallelized with the hardware development process. We present the internal mechanism used to implement the proposed virtual platform and describe how to use it to develop software through a case study of a low-cost wearable device that uses a Cortex-M3. Keywords: electronic system level design, software development, virtual platform, wearable device
Procedia PDF Downloads 375
5238 Reducing Defects through Organizational Learning within a Housing Association Environment
Authors: T. Hopkin, S. Lu, P. Rogers, M. Sexton
Abstract:
Housing Associations (HAs) contribute circa 20% of the UK’s housing supply. HAs are however under increasing pressure as a result of funding cuts and rent reductions. Due to the increased pressure, a number of processes are currently being reviewed by HAs, especially how they manage and learn from defects. Learning from defects is considered a useful approach to achieving defect reduction within the UK housebuilding industry. This paper contributes to our understanding of how HAs learn from defects by undertaking an initial round table discussion with key HA stakeholders as part of an ongoing collaborative research project with the National House Building Council (NHBC) to better understand how house builders and HAs learn from defects to reduce their prevalence. The initial discussion shows that defect information runs through a number of groups, both internal and external of a HA during both the defects management process and organizational learning (OL) process. Furthermore, HAs are reliant on capturing and recording defect data as the foundation for the OL process. During the OL process defect data analysis is the primary enabler to recognizing a need for a change to organizational routines. When a need for change has been recognized, new options are typically pursued to design out defects via updates to a HAs Employer’s Requirements. Proposed solutions are selected by a review board and committed to organizational routine. After implementing a change, both structured and unstructured feedback is sought to establish the change’s success. The findings from the HA discussion demonstrates that OL can achieve defect reduction within the house building sector in the UK. The paper concludes by outlining a potential ‘learning from defects model’ for the housebuilding industry as well as describing future work.Keywords: defects, new homes, housing association, organizational learning
Procedia PDF Downloads 316
5237 Integration of EEG and Motion Tracking Sensors for Objective Measure of Attention-Deficit Hyperactivity Disorder in Pre-Schoolers
Authors: Neha Bhattacharyya, Soumendra Singh, Amrita Banerjee, Ria Ghosh, Oindrila Sinha, Nairit Das, Rajkumar Gayen, Somya Subhra Pal, Sahely Ganguly, Tanmoy Dasgupta, Tanusree Dasgupta, Pulak Mondal, Aniruddha Adhikari, Sharmila Sarkar, Debasish Bhattacharyya, Asim Kumar Mallick, Om Prakash Singh, Samir Kumar Pal
Abstract:
Background: We aim to develop an integrated device comprised of single-probe EEG and CCD-based motion sensors for a more objective measure of Attention-deficit Hyperactivity Disorder (ADHD). While the integrated device (MAHD) relies on the EEG signal (spectral density of beta wave) for the assessment of attention during a given structured task (painting three segments of a circle using three different colors, namely red, green and blue), the CCD sensor depicts movement pattern of the subjects engaged in a continuous performance task (CPT). A statistical analysis of the attention and movement patterns was performed, and the accuracy of the completed tasks was analysed using indigenously developed software. The device with the embedded software, called MAHD, is intended to improve certainty with criterion E (i.e. whether symptoms are better explained by another condition). Methods: We have used the EEG signal from a single-channel dry sensor placed on the frontal lobe of the head of the subjects (3-5 years old pre-schoolers). During the painting of three segments of a circle using three distinct colors (red, green, and blue), absolute power for delta and beta EEG waves from the subjects are found to be correlated with relaxation and attention/cognitive load conditions. While the relaxation condition of the subject hints at hyperactivity, a more direct CCD-based motion sensor is used to track the physical movement of the subject engaged in a continuous performance task (CPT) i.e., separation of the various colored balls from one table to another. We have used our indigenously developed software for the statistical analysis to derive a scale for the objective assessment of ADHD. We have also compared our scale with clinical ADHD evaluation. Results: In a limited clinical trial with preliminary statistical analysis, we have found a significant correlation between the objective assessment of the ADHD subjects with that of the clinician’s conventional evaluation. Conclusion: MAHD, the integrated device, is supposed to be an auxiliary tool to improve the accuracy of ADHD diagnosis by supporting greater criterion E certainty.Keywords: ADHD, CPT, EEG signal, motion sensor, psychometric test
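Attention in the MAHD device is inferred from the spectral density of the beta wave of the single-channel EEG. A brief sketch of a relative beta-band power estimate on a synthetic signal follows; the sampling rate, the 13-30 Hz band limits, and the signal itself are assumptions, not the device's actual processing.

```python
import numpy as np
from scipy.signal import welch

# Sketch: relative beta-band power from a single EEG channel using Welch's
# method (synthetic signal, assumed parameters).

fs = 256.0
t = np.arange(0, 30, 1 / fs)
eeg = (20 * np.sin(2 * np.pi * 10 * t)      # alpha component
       + 8 * np.sin(2 * np.pi * 20 * t)     # beta component
       + 5 * np.random.randn(t.size))       # broadband noise (uV)

freqs, psd = welch(eeg, fs=fs, nperseg=1024)

def band_power(freqs, psd, lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

beta = band_power(freqs, psd, 13.0, 30.0)
total = band_power(freqs, psd, 1.0, 45.0)
print(f"relative beta power: {beta / total:.2f}")
```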
Procedia PDF Downloads 99