Search results for: sound interface
52 Business Intelligence Dashboard Solutions for Improving Decision Making Process: A Focus on Prostate Cancer
Authors: Mona Isazad Mashinchi, Davood Roshan Sangachin, Francis J. Sullivan, Dietrich Rebholz-Schuhmann
Abstract:
Background: Decision-making processes are nowadays driven by data, data analytics, and Business Intelligence (BI). As a software platform, BI can provide a wide variety of capabilities such as organizational memory, information integration, insight creation, and presentation. Visualizing data through dashboards is one of the BI solutions (applicable in a variety of areas) that helps managers in the decision-making process by presenting the most informative information at a glance. In the healthcare domain to date, dashboards are more frequently used to track performance-related metrics and less frequently used to monitor those quality parameters which relate directly to patient outcomes. Providing effective and timely care for patients and improving health outcomes are highly dependent on presenting and visualizing data and information. Objective: This research focuses on the presentation capabilities of BI to design a dashboard for prostate cancer (PC) data that supports better decision making for patients, the hospital, and the healthcare system. The aim is to present a retrospective PC dataset in a dashboard interface to give a better understanding of the data in the categories (risk factors, treatment approaches, disease control, and side effects) which matter most to patients as well as other stakeholders. By presenting the outcomes in the dashboard, we address one of the major targets of a value-based healthcare (VBHC) delivery model, which is measuring value and presenting outcomes to different actors in the healthcare industry (such as patients and doctors) for better decision making. Method: To visualize the stored data for users, three interactive dashboards based on the PC dataset were developed (using Tableau software) to provide better views of the risk factors, treatment approaches, and side effects. Results: The interactive graphs and tables in the dashboards made it easy to identify patients at risk, to better understand the relationship between a patient's status before and after treatment, and to choose treatments with fewer side effects given the patient's status. Conclusions: Building a well-designed and informative dashboard depends on three important factors: the users, the goals, and the data types. Dashboard hierarchies, drilling, and graphical features can guide doctors to navigate the information more effectively. The features of the interactive PC dashboard not only let doctors ask specific questions and filter the results based on key performance indicators (KPIs) such as Gleason grade and the patient's age and status, but may also help patients to better understand different treatment outcomes, such as side effects over time, and to take an active role in their treatment decisions. Currently, we are extending the results to a real-time interactive dashboard in which users (patients or doctors) can easily explore the data by choosing preferred attributes to make better near real-time decisions.
Keywords: business intelligence, dashboard, decision making, healthcare, prostate cancer, value-based healthcare
Procedia PDF Downloads 143
51 Experimental-Numerical Inverse Approaches in the Characterization and Damage Detection of Soft Viscoelastic Layers from Vibration Test Data
Authors: Alaa Fezai, Anuj Sharma, Wolfgang Mueller-Hirsch, André Zimmermann
Abstract:
Viscoelastic materials have been widely used in the automotive industry over the last few decades with different functionalities. Besides their main application as a simple and efficient surface damping treatment, they may ensure optimal operating conditions for on-board electronics as thermal interface or sealing layers. The dynamic behavior of viscoelastic materials is generally dependent on many environmental factors, the most important being temperature and strain rate or frequency. Prior to the reliability analysis of systems including viscoelastic layers, it is, therefore, crucial to accurately predict the dynamic and lifetime behavior of these materials. This includes the identification of the dynamic material parameters under critical temperature and frequency conditions along with a precise damage localization and identification methodology. The goal of this work is twofold. The first part aims at applying an inverse viscoelastic material-characterization approach over a wide frequency range and under different temperature conditions. To this end, dynamic measurements are carried out on a single lap joint specimen using an electrodynamic shaker and an environmental chamber. The specimen consists of aluminum beams assembled to adapter plates through a viscoelastic adhesive layer. The experimental setup is reproduced in finite element (FE) simulations, and frequency response functions (FRF) are calculated. The parameters of both the generalized Maxwell model and the fractional derivatives model are identified through an optimization algorithm minimizing the difference between the simulated and the measured FRFs. The second goal of the current work is to enable on-line detection of damage, i.e., delamination in the viscoelastic bonding of the described specimen, during frequency-monitored end-of-life testing. For this purpose, an inverse technique, which determines the damage location and size based on the modal frequency shift and on the change of the mode shapes, is presented. This includes a preliminary FE model-based study correlating the delamination location and size to the change in the modal parameters and a subsequent experimental validation achieved through dynamic measurements of specimens with different, pre-generated crack scenarios and comparison against the virgin specimen. The main advantage of the inverse characterization approach presented in the first part resides in its ability to adequately identify the damping and stiffness behavior of soft viscoelastic materials over a wide frequency range and under critical temperature conditions. Classic forward characterization techniques such as dynamic mechanical analysis are usually subject to limitations under critical temperature and frequency conditions due to the material behavior of soft viscoelastic materials. Furthermore, the inverse damage detection described in the second part provides an accurate prediction of not only the damage size but also its location using a simple test setup, and therefore outlines the significance of inverse numerical-experimental approaches in predicting the dynamic behavior of soft bonding layers applied in automotive electronics.
Keywords: damage detection, dynamic characterization, inverse approaches, vibration testing, viscoelastic layers
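As an illustration of the inverse identification idea described in this abstract, the following sketch (not the authors' code; the single-DOF model, parameter values, and the synthetic "measured" FRF are assumptions) fits generalized Maxwell parameters to a frequency response function with SciPy:

```python
# Minimal sketch: inverse identification of generalized Maxwell parameters from a measured
# FRF, assuming a single-DOF idealization of the lap-joint specimen. All numbers are placeholders.
import numpy as np
from scipy.optimize import least_squares

def complex_stiffness(omega, k_inf, k_branches, taus):
    """Generalized Maxwell complex stiffness k*(w) = k_inf + sum_i k_i*(i w t_i)/(1 + i w t_i)."""
    s = 1j * omega[:, None] * np.asarray(taus)[None, :]
    return k_inf + (np.asarray(k_branches)[None, :] * s / (1.0 + s)).sum(axis=1)

def frf_model(omega, params, mass=0.1):
    """Receptance FRF of a mass on the viscoelastic joint: H(w) = 1 / (k*(w) - m w^2)."""
    k_inf, k1, k2, tau1, tau2 = params
    k_star = complex_stiffness(omega, k_inf, [k1, k2], [tau1, tau2])
    return 1.0 / (k_star - mass * omega**2)

def residuals(params, omega, h_meas):
    diff = frf_model(omega, params) - h_meas
    return np.concatenate([diff.real, diff.imag])

# Synthetic "measurement" standing in for the shaker test data
omega = 2 * np.pi * np.linspace(50, 2000, 400)
true = np.array([2.0e6, 8.0e5, 4.0e5, 1.0e-3, 1.0e-4])
h_meas = frf_model(omega, true) * (1 + 0.01 * np.random.randn(omega.size))

fit = least_squares(residuals, x0=[1e6, 1e5, 1e5, 1e-3, 1e-4],
                    args=(omega, h_meas), bounds=(0, np.inf))
print("identified parameters:", fit.x)
```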
Procedia PDF Downloads 206
50 Urban Slum Communities Engage in the Fight Against TB in Karnataka, South India
Authors: N. Rambabu, H. Gururaj, Reynold Washington, Oommen George
Abstract:
Motivation: Under the USAID Strengthening Health Outcomes through Private Sector (SHOPS-TB) initiative, Karnataka Health Promotion Trust (KHPT) with technical support of Abt associates is implementing a TB prevention and care model in Karnataka State, South India. KHPT is the interface agency between the public and private sectors, and providers and the target community facilitating early TB case detection and enhancing treatment compliance through private health care providers (pHCP) engagement in RNTCP. The project coverage is 0.84 million urban poor from 663 slums in 12 districts of Karnataka. Problem Statement: India with the highest burden of global TB (26%) and two million cases annually, accounts for approximately one fifth of the global incidence. WHO estimates 300,000 people die from TB annually in India. India expanded the coverage of Directly Observed Treatment, Short-course chemotherapy (DOTS) to the entire country as early as 2006. However, the performance of RNTCP has not been uniform across states. While the national annual new smear-positive (NSP) case notification rate is 53, it is much lower at 47 in Karnataka. A third of TB patients in India reside in urban slums. Approach: Under SHOPS, KHPT actively engages with communities through key opinion leaders and community structures. Interpersonal communication, by Outreach workers through house-to-house visits and at aggregation points, is the primary method used for communication about TB and its management and to increase demand for sputum examination and DOTS. pHCP are mapped, trained and mentored by KHPT. ORWs also provide patient and family counseling on TB treatment, side effects and adherence, screen close contacts of index patients especially children under 6 years of age and screen co-morbidities including HIV, diabetes and malnutrition and risk factors including alcoholism, tobacco use, occupational hazards making appropriate accompanied or documented referrals. A treatment ‘buddy’ system for the patients involving close friends or family members, ICT-based support, DOTS Prerana (inspiration) groups of TB patients, family members and community, DOTS Mitra (friend) helpline services are also used for care and support services. Results: The intervention educated 39988 slum dwellers, referred 1731 chest symptomatics, tested 1061 patients and initiated 248 patients on anti-TB treatment within three months of intervention through continuous community engagement. Conclusions: The intervention’s potential to increase access to preferred health care providers, reduce patient and health system delays in diagnosis and initiation of treatment, improve health seeking behaviour and enhance compliance of pHCPs to standard treatment protocols is being monitored. Initial results are promising.Keywords: DOTS, KHPT, health outcomes, public and private sector
Procedia PDF Downloads 317
49 Climate Indices: A Key Element for Climate Change Adaptation and Ecosystem Forecasting - A Case Study for Alberta, Canada
Authors: Stefan W. Kienzle
Abstract:
The increasing number of occurrences of extreme weather and climate events has significant impacts on society and is the cause of continued and increasing loss of human and animal lives, loss of or damage to property (houses, cars), and associated stresses on the public in coping with a changing climate. A climate index breaks down daily climate time series into meaningful derivatives, such as the annual number of frost days. Climate indices allow for the spatially consistent analysis of a wide range of climate-dependent variables, which enables the quantification and mapping of historical and future climate change across regions. As trends of phenomena such as the length of the growing season change differently in different hydro-climatological regions, mapping needs to be carried out at a high spatial resolution, such as the 10 km by 10 km Canadian Climate Grid, which has interpolated daily values from 1950 to 2017 for minimum and maximum temperature and precipitation. Climate indices form the basis for the analysis and comparison of means, extremes, and trends, the quantification of changes, and their respective confidence levels. A total of 39 temperature indices and 16 precipitation indices were computed for the period 1951 to 2017 for the Province of Alberta. Temperature indices include the annual number of days with temperatures above or below certain threshold temperatures (0, ±10, ±20, +25, +30°C), frost days and the timing of frost days, freeze-thaw days, growing degree days, and energy demands for air conditioning and heating. Precipitation indices include daily and accumulated 3- and 5-day extremes, days with precipitation, periods of days without precipitation, and snow and potential evapotranspiration. The rank-based nonparametric Mann-Kendall statistical test was used to determine the existence and significance levels of all associated trends. The slope of the trends was determined using the non-parametric Sen's slope test. A Google mapping interface was developed to create the website albertaclimaterecords.com, from which each of the 55 climate indices can be queried for any of the 6833 grid cells that make up Alberta. In addition to the climate indices, climate normals were calculated and mapped for four historical 30-year periods and one future period (1951-1980, 1961-1990, 1971-2000, 1981-2017, 2041-2070). While winters have warmed since the 1950s by between 4-5°C in the south and 6-7°C in the north, summers show the weakest warming during the same period, ranging from about 0.5-1.5°C. New agricultural opportunities exist in central regions where the number of heat units and growing degree days is increasing and the number of frost days is decreasing. While the number of days below -20°C has roughly halved across Alberta, the growing season has expanded by between two and five weeks since the 1950s. Interestingly, the numbers of days with heat waves and with cold spells have both increased two- to four-fold during the same period. This research demonstrates the enormous potential of using climate indices at the best regional spatial resolution possible to enable society to understand the historical and future climate changes of their region.
Keywords: climate change, climate indices, habitat risk, regional, mapping, extremes
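A minimal sketch of the index-and-trend workflow described above (not the albertaclimaterecords.com code; the data layout and the synthetic daily series are assumptions): it computes one temperature index, annual frost days, and tests its trend with the Mann-Kendall statistic and Sen's slope.

```python
import numpy as np

def annual_frost_days(tmin_daily, years):
    """Count of days per year with Tmin < 0 degC; tmin_daily and years are 1-D arrays of equal length."""
    yrs = np.unique(years)
    return yrs, np.array([(tmin_daily[years == y] < 0.0).sum() for y in yrs])

def mann_kendall(x):
    """Mann-Kendall S statistic and normal-approximation Z (no tie correction)."""
    x = np.asarray(x, float)
    n = x.size
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return s, z

def sens_slope(t, x):
    """Median of pairwise slopes (units of x per unit of t)."""
    slopes = [(x[j] - x[i]) / (t[j] - t[i])
              for i in range(len(x) - 1) for j in range(i + 1, len(x))]
    return np.median(slopes)

# Example with synthetic daily Tmin for 1951-2017 (stand-in for one 10 km x 10 km grid cell)
rng = np.random.default_rng(0)
years = np.repeat(np.arange(1951, 2018), 365)
tmin = rng.normal(-2, 10, years.size) + 0.03 * (years - 1951)   # slight warming signal
yrs, frost = annual_frost_days(tmin, years)
s, z = mann_kendall(frost)
print("frost-day trend: Z =", round(float(z), 2),
      ", Sen slope =", round(float(sens_slope(yrs, frost)), 2), "days/yr")
```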
Procedia PDF Downloads 93
48 Examining Influence of The Ultrasonic Power and Frequency on Microbubbles Dynamics Using Real-Time Visualization of Synchrotron X-Ray Imaging: Application to Membrane Fouling Control
Authors: Masoume Ehsani, Ning Zhu, Huu Doan, Ali Lohi, Amira Abdelrasoul
Abstract:
Membrane fouling poses severe challenges in membrane-based wastewater treatment applications. Ultrasound (US) has been considered an effective fouling remediation technique in filtration processes. Bubble cavitation in the liquid medium results from the alternating rarefaction and compression cycles during the US irradiation at sufficiently high acoustic pressure. Cavitation microbubbles generated under US irradiation can cause eddy current and turbulent flow within the medium by either oscillating or discharging energy to the system through microbubble explosion. Turbulent flow regime and shear forces created close to the membrane surface cause disturbing the cake layer and dislodging the foulants, which in turn improve the cleaning efficiency and filtration performance. Therefore, the number, size, velocity, and oscillation pattern of the microbubbles created in the liquid medium play a crucial role in foulant detachment and permeate flux recovery. The goal of the current study is to gain in depth understanding of the influence of the US power intensity and frequency on the microbubble dynamics and its characteristics generated under US irradiation. In comparison with other imaging techniques, the synchrotron in-line Phase Contrast Imaging technique at the Canadian Light Source (CLS) allows in-situ observation and real-time visualization of microbubble dynamics. At CLS biomedical imaging and therapy (BMIT) polychromatic beamline, the effective parameters were optimized to enhance the contrast gas/liquid interface for the accuracy of the qualitative and quantitative analysis of bubble cavitation within the system. With the high flux of photons and the high-speed camera, a typical high projection speed was achieved; and each projection of microbubbles in water was captured in 0.5 ms. ImageJ software was used for post-processing the raw images for the detailed quantitative analyses of microbubbles. The imaging has been performed under the US power intensity levels of 50 W, 60 W, and 100 W, in addition to the US frequency levels of 20 kHz, 28 kHz, and 40 kHz. For the duration of 2 seconds of imaging, the effect of the US power and frequency on the average number, size, and fraction of the area occupied by bubbles were analyzed. Microbubbles’ dynamics in terms of their velocity in water was also investigated. For the US power increase of 50 W to 100 W, the average bubble number and the average bubble diameter were increased from 746 to 880 and from 36.7 µm to 48.4 µm, respectively. In terms of the influence of US frequency, a fewer number of bubbles were created at 20 kHz (average of 176 bubbles rather than 808 bubbles at 40 kHz), while the average bubble size was significantly larger than that of 40 kHz (almost seven times). The majority of bubbles were captured close to the membrane surface in the filtration unit. According to the study observations, membrane cleaning efficiency is expected to be improved at higher US power and lower US frequency due to the higher energy release to the system by increasing the number of bubbles or growing their size during oscillation (optimum condition is expected to be at 20 kHz and 100 W).Keywords: bubble dynamics, cavitational bubbles, membrane fouling, ultrasonic cleaning
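The quantitative bubble analysis described above was done in ImageJ; the sketch below is an illustrative alternative in Python (not the authors' pipeline) that counts microbubbles in one frame and estimates their diameters, assuming bubbles appear darker than the background and the pixel size is known. The image and all numbers are synthetic placeholders.

```python
import numpy as np
from skimage import filters, measure, morphology

PIXEL_SIZE_UM = 1.3   # assumed effective pixel size, microns

def analyse_frame(frame):
    """frame: 2-D grayscale array. Returns bubble count, mean equivalent diameter (um), area fraction."""
    thresh = filters.threshold_otsu(frame)
    bubbles = frame < thresh                                  # dark objects on bright background
    bubbles = morphology.remove_small_objects(bubbles, min_size=9)
    props = measure.regionprops(measure.label(bubbles))
    diameters = [p.equivalent_diameter * PIXEL_SIZE_UM for p in props]
    return len(props), (np.mean(diameters) if diameters else 0.0), bubbles.sum() / bubbles.size

# Synthetic frame standing in for one 0.5 ms projection
rng = np.random.default_rng(1)
frame = rng.normal(200, 3, (512, 512))
yy, xx = np.mgrid[:512, :512]
for cx, cy, r in [(100, 120, 18), (300, 250, 24), (400, 60, 12)]:
    frame[(xx - cx) ** 2 + (yy - cy) ** 2 < r ** 2] -= 100    # three "bubbles"
n, d_mean, frac = analyse_frame(frame)
print(f"{n} bubbles, mean diameter {d_mean:.1f} um, area fraction {frac:.4f}")
```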
Procedia PDF Downloads 151
47 Improved Morphology in Sequential Deposition of the Inverted Type Planar Heterojunction Solar Cells Using Cheap Additive (DI-H₂O)
Authors: Asmat Nawaz, Ceylan Zafer, Ali K. Erdinc, Kaiying Wang, M. Nadeem Akram
Abstract:
Hybrid halide perovskites with the general formula ABX₃, where X = Cl, Br, or I, are considered ideal candidates for the preparation of photovoltaic devices. The most commonly and successfully used hybrid halide perovskite for photovoltaic applications is CH₃NH₃PbI₃ and its analogue prepared from lead chloride, commonly denoted CH₃NH₃PbI₃₋ₓClₓ. Some research groups use lead-free (Sn replacing Pb) and mixed halide perovskites for device fabrication. Both mesoporous and planar structures have been developed. Compared with the mesoporous structure, in which the perovskite material infiltrates a mesoporous metal oxide scaffold, the planar architecture is much simpler and easier to fabricate. In a typical perovskite solar cell, a perovskite absorber layer is sandwiched between the hole and electron transport layers. Upon irradiation, carriers are created in the absorber layer and travel through the hole and electron transport layers and the interfaces in between. We fabricated an inverted planar heterojunction solar cell with the structure ITO/PEDOT/perovskite/PCBM/Al via a two-step spin-coating method, also called the sequential deposition method. A small amount of the cheap additive H₂O was added to PbI₂/DMF to obtain a homogeneous solution. We prepared four different solutions (without H₂O, 1% H₂O, 2% H₂O, 3% H₂O). After preparation, overnight stirring at 60°C is essential to obtain homogeneous precursor solutions. We observed that the solution with 1% H₂O was much more homogeneous at room temperature than the others, whereas the solution with 3% H₂O precipitated immediately at room temperature. Four different PbI₂ films were formed on PEDOT substrates by spin coating, and immediately afterwards (before the PbI₂ dried) the substrates were immersed in a methylammonium iodide solution (prepared in isopropanol) to complete the desired perovskite film. After obtaining the films, the substrates were rinsed with isopropanol to remove the excess methylammonium iodide and finally dried on a hot plate for only 1-2 minutes. In this study, we added H₂O to the PbI₂/DMF precursor solution. The concept of additives is widely used in bulk-heterojunction solar cells to manipulate the surface morphology, leading to enhanced photovoltaic performance. There are two important criteria for the selection of additives: (a) a higher boiling point than the host material and (b) good interaction with the precursor materials. We observed that the film morphology improved, and we achieved denser, more uniform films with fewer cavities and almost full surface coverage, but only with the precursor solution containing 1% H₂O. Therefore, we fabricated the complete perovskite solar cell by the sequential deposition technique with the precursor solution containing 1% H₂O. We conclude that by adding additives to the precursor solutions, the morphology of the perovskite film can easily be manipulated. In the sequential deposition method, the thickness of the perovskite film is on the micrometre scale, while the charge diffusion length in PbI₂ is on the nanometre scale. Therefore, by controlling the thickness using other deposition methods for the fabrication of solar cells, better efficiency can be achieved.
Keywords: methylammonium lead iodide, perovskite solar cell, precursor composition, sequential deposition
Procedia PDF Downloads 246
46 Intelligent Crop Circle: A Blockchain-Driven, IoT-Based, AI-Powered Sustainable Agriculture System
Authors: Mishak Rahul, Naveen Kumar, Bharath Kumar
Abstract:
Conceived as a high-end engine to revolutionise sustainable agri-food production, the intelligent crop circle (ICC) aims to incorporate the Internet of Things (IoT), blockchain technology and artificial intelligence (AI) to bolster resource efficiency and prevent waste, increase the volume of production and bring about sustainable solutions with long-term ecosystem conservation as the guiding principle. The operating principle of the ICC relies on bringing together multidisciplinary bottom-up collaborations between producers, researchers and consumers. Key elements of the framework include IoT-based smart sensors for sensing soil moisture, temperature, humidity, nutrient and air quality, which provide short-interval and timely data; blockchain technology for data storage on a private chain, which maintains data integrity, traceability and transparency; and AI-based predictive analysis, which actively predicts resource utilisation, plant growth and environment. This data and AI insights are built into the ICC platform, which uses the resulting DSS (Decision Support System) outlined as help in decision making, delivered through an easy-touse mobile app or web-based interface. Farmers are assumed to use such a decision-making aid behind the power of the logic informed by the data pool. Building on existing data available in the farm management systems, the ICC platform is easily interoperable with other IoT devices. ICC facilitates connections and information sharing in real-time between users, including farmers, researchers and industrial partners, enabling them to cooperate in farming innovation and knowledge exchange. Moreover, ICC supports sustainable practice in agriculture by integrating gamification techniques to stimulate farm adopters, deploying VR technologies to model and visualise 3D farm environments and farm conditions, framing the field scenarios using VR headsets and Real-Time 3D engines, and leveraging edge technologies to facilitate secure and fast communication and collaboration between users involved. And through allowing blockchain-based marketplaces, ICC offers traceability from farm to fork – that is: from producer to consumer. It empowers informed decision-making through tailor-made recommendations generated by means of AI-driven analysis and technology democratisation, enabling small-scale and resource-limited farmers to get their voice heard. It connects with traditional knowledge, brings together multi-stakeholder interactions as well as establishes a participatory ecosystem to incentivise continuous growth and development towards more sustainable agro-ecological food systems. This integrated approach leverages the power of emerging technologies to provide sustainable solutions for a resilient food system, ensuring sustainable agriculture worldwide.Keywords: blockchain, internet of things, artificial intelligence, decision support system, virtual reality, gamification, traceability, sustainable agriculture
Procedia PDF Downloads 45
45 A Hardware-in-the-loop Simulation for the Development of Advanced Control System Design for a Spinal Joint Wear Simulator
Authors: Kaushikk Iyer, Richard M Hall, David Keeling
Abstract:
Hardware-in-the-loop (HIL) simulation is an advanced technique for developing and testing complex real-time control systems. This paper presents the benefits of HIL simulation and how it can be implemented and used effectively to develop, test, and validate advanced control algorithms used in a spinal joint wear simulator for the tribological testing of spinal disc prostheses. The spinal wear simulator is technologically the most advanced machine currently employed for the in-vitro testing of newly developed spinal disc implants. However, the existing control techniques, such as simple position control, do not allow the simulator to test non-sinusoidal waveforms. Thus, there is a need for better and more advanced control methods that can be developed and tested rigorously but safely before being deployed in the real simulator. A benchtop HIL setup was created for experimentation, controller verification, and validation purposes, allowing different control strategies to be tested rapidly in a safe environment. The HIL simulation aspect of this setup attempts to replicate similar spinal motion and loading conditions. The spinal joint wear simulator contains a four-bar linkage powered by electromechanical actuators. LabVIEW software is used to design a kinematic model of the spinal wear simulator to validate how each link contributes towards the final motion of the implant under test. As a result, the implant articulates with an angular motion specified in the international standard ISO 18192-1, which defines fixed, simplified, and sinusoidal motion and load profiles for the wear testing of cervical disc implants. Using a PID controller, a velocity-based position control algorithm was developed to interface with the benchtop setup that performs the HIL simulation. In addition to the PID, a fuzzy logic controller (FLC) was developed that acts as a supervisory controller. The FLC provides intelligence to the PID controller by automatically tuning it for profiles that vary in amplitude, shape, and frequency. This combination of a fuzzy-PID controller is novel to the wear-testing application for spinal simulators and demonstrated superior performance against the PID when tested over a spectrum of frequencies. The results obtained are successfully validated against the load and motion tolerances specified by the ISO 18192-1 standard and fall within the limits, that is, ±0.5° at the maxima and minima of the motion and ±2% of the complete cycle for phasing. The simulation results prove the efficacy of the test setup using HIL simulation to verify and validate the accuracy and robustness of the prospective controller before its deployment into the spinal wear simulator. This method of testing controllers enables a wide range of possibilities to test advanced control algorithms that can potentially test even profiles of patients performing various daily living activities.
Keywords: Fuzzy-PID controller, hardware-in-the-loop (HIL), real-time simulation, spinal wear simulator
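As a rough illustration of the velocity-based PID position control mentioned above (not the authors' LabVIEW implementation; the gains, sample time, actuator lag model, and motion amplitude are illustrative assumptions), the following sketch tracks a sinusoidal angular profile of the kind specified in ISO 18192-1:

```python
import numpy as np

class VelocityPID:
    """Discrete PID acting on position error and outputting a velocity command."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

dt = 1e-3
t = np.arange(0, 2.0, dt)
setpoint = 7.5 * np.sin(2 * np.pi * 1.0 * t)      # +/-7.5 deg flexion-extension at 1 Hz (assumed)
pid = VelocityPID(kp=40.0, ki=5.0, kd=0.02, dt=dt)

angle, velocity, log = 0.0, 0.0, []
for sp in setpoint:
    cmd = pid.update(sp, angle)
    velocity += (cmd - velocity) * dt / 0.01      # actuator modelled as a 10 ms first-order lag
    angle += velocity * dt
    log.append(angle)

print("peak tracking error:", float(np.max(np.abs(np.array(log) - setpoint))), "deg")
```

A supervisory fuzzy layer, as in the abstract, would adjust kp, ki, and kd between cycles based on the observed amplitude and phasing error.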
Procedia PDF Downloads 172
44 Thermodynamic Modeling of Cryogenic Fuel Tanks with a Model-Based Inverse Method
Authors: Pedro A. Marques, Francisco Monteiro, Alessandra Zumbo, Alessia Simonini, Miguel A. Mendez
Abstract:
Cryogenic fuels such as Liquid Hydrogen (LH₂) must be transported and stored at extremely low temperatures. Without expensive active cooling solutions, preventing fuel boil-off over time is impossible. Hence, one must resort to venting systems at the cost of significant energy and fuel mass loss. These losses increase significantly in propellant tanks installed on vehicles, as the presence of external accelerations induces sloshing. Sloshing increases heat and mass transfer rates and leads to significant pressure oscillations, which might further trigger propellant venting. To make LH₂ economically viable, it is essential to minimize these factors by using advanced control techniques. However, these require accurate modelling and a full understanding of the tank's thermodynamics. The present research aims to implement a simple thermodynamic model capable of predicting the state of a cryogenic fuel tank under different operating conditions (i.e., filling, pressurization, fuel extraction, long-term storage, and sloshing). Since this model relies on a set of closure parameters to drive the system's transient response, it must be calibrated using experimental or numerical data. This work focuses on the former approach, wherein the model is calibrated through an experimental campaign carried out on a reduced-scale model of a cryogenic tank. The thermodynamic model of the system is composed of three control volumes: the ullage, the liquid, and the insulating walls. Under this lumped formulation, the governing equations are derived from energy and mass balances in each region, with mass-averaged properties assigned to each of them. The gas-liquid interface is treated as an infinitesimally thin region across which both phases can exchange mass and heat. This results in a coupled system of ordinary differential equations, which must be closed with heat and mass transfer coefficients between each control volume. These parameters are linked to the system evolution via empirical relations derived from different operating regimes of the tank. The derivation of these relations is carried out using an inverse method to find the optimal relations that allow the model to reproduce the available data. This approach extends classic system identification methods beyond linear dynamical systems via a nonlinear optimization step. Thanks to the data-driven assimilation of the closure problem, the resulting model accurately predicts the evolution of the tank's thermodynamics at a negligible computational cost. The lumped model can thus be easily integrated with other submodels to perform complete system simulations in real time. Moreover, by setting the model in a dimensionless form, a scaling analysis allowed us to relate the tested configurations to a representative full-size tank for naval applications. It was thus possible to compare the relative importance of different transport phenomena between the laboratory model and the full-size prototype among the different operating regimes.Keywords: destratification, hydrogen, modeling, pressure-drop, pressurization, sloshing, thermodynamics
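A much-simplified sketch of the lumped-model-plus-inverse-calibration idea described above (not the authors' three-control-volume model): a single ullage energy balance with one closure parameter, the wall-to-ullage heat transfer coefficient h, identified by fitting a pressure history. Geometry, gas properties, and the "measured" data are placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

R, CV = 4124.0, 10184.0           # J/(kg K): gas constant and cv of gaseous hydrogen (approx.)
V_ULLAGE, A_WALL = 0.02, 0.3      # m^3, m^2 (assumed reduced-scale tank)
M_GAS, T_WALL = 0.005, 80.0       # kg of ullage gas, wall temperature in K (assumed)

def ullage_rhs(t, y, h):
    """y = [T_ullage]; dT/dt from heat leak through the wall only (no phase change in this sketch)."""
    q_wall = h * A_WALL * (T_WALL - y[0])
    return [q_wall / (M_GAS * CV)]

def pressure_history(h, times, T0=25.0):
    sol = solve_ivp(ullage_rhs, (times[0], times[-1]), [T0], args=(h,), t_eval=times)
    return M_GAS * R * sol.y[0] / V_ULLAGE      # ideal-gas ullage pressure, Pa

times = np.linspace(0, 600, 50)                 # 10 minutes of storage
p_meas = pressure_history(3.5, times) * (1 + 0.005 * np.random.randn(times.size))  # synthetic data

fit = least_squares(lambda h: pressure_history(h[0], times) - p_meas, x0=[1.0], bounds=(0, 100))
print("identified heat transfer coefficient h =", float(fit.x[0]), "W/(m^2 K)")
```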
Procedia PDF Downloads 95
43 A Bioinspired Anti-Fouling Coating for Implantable Medical Devices
Authors: Natalie Riley, Anita Quigley, Robert M. I. Kapsa, George W. Greene
Abstract:
As the fields of medicine and bionics grow rapidly in technological advancement, the future and success of it depends on the ability to effectively interface between the artificial and the biological worlds. The biggest obstacle when it comes to implantable, electronic medical devices, is maintaining a ‘clean’, low noise electrical connection that allows for efficient sharing of electrical information between the artificial and biological systems. Implant fouling occurs with the adhesion and accumulation of proteins and various cell types as a result of the immune response to protect itself from the foreign object, essentially forming an electrical insulation barrier that often leads to implant failure over time. Lubricin (LUB) functions as a major boundary lubricant in articular joints, a unique glycoprotein with impressive anti-adhesive properties that self-assembles to virtually any substrate to form a highly ordered, ‘telechelic’ polymer brush. LUB does not passivate electroactive surfaces which makes it ideal, along with its innate biocompatibility, as a coating for implantable bionic electrodes. It is the aim of the study to investigate LUB’s anti-fouling properties and its potential as a safe, bioinspired material for coating applications to enhance the performance and longevity of implantable medical devices as well as reducing the frequency of implant replacement surgeries. Native, bovine-derived LUB (N-LUB) and recombinant LUB (R-LUB) were applied to gold-coated mylar surfaces. Fibroblast, chondrocyte and neural cell types were cultured and grown on the coatings under both passive and electrically stimulated conditions to test the stability and anti-adhesive property of the LUB coating in the presence of an electric field. Lactate dehydrogenase (LDH) assays were conducted as a directly proportional cell population count on each surface along with immunofluorescent microscopy to visualize cells. One-way analysis of variance (ANOVA) with post-hoc Tukey’s test was used to test for statistical significance. Under both passive and electrically stimulated conditions, LUB significantly reduced cell attachment compared to bare gold. Comparing the two coating types, R-LUB reduced cell attachment significantly compared to its native counterpart. Immunofluorescent micrographs visually confirmed LUB’s antiadhesive property, R-LUB consistently demonstrating significantly less attached cells for both fibroblasts and chondrocytes. Preliminary results investigating neural cells have so far demonstrated that R-LUB has little effect on reducing neural cell attachment; the study is ongoing. Recombinant LUB coatings demonstrated impressive anti-adhesive properties, reducing cell attachment in fibroblasts and chondrocytes. These findings and the availability of recombinant LUB brings into question the results of previous experiments conducted using native-derived LUB, its potential not adequately represented nor realized due to unknown factors and impurities that warrant further study. R-LUB is stable and maintains its anti-fouling property under electrical stimulation, making it suitable for electroactive surfaces.Keywords: anti-fouling, bioinspired, cell attachment, lubricin
Procedia PDF Downloads 124
42 Decoding Kinematic Characteristics of Finger Movement from Electrocorticography Using Classical Methods and Deep Convolutional Neural Networks
Authors: Ksenia Volkova, Artur Petrosyan, Ignatii Dubyshkin, Alexei Ossadtchi
Abstract:
Brain-computer interfaces are a growing research field producing many implementations that find use in different fields and are used for research and practical purposes. Despite the popularity of the implementations using non-invasive neuroimaging methods, radical improvement of the state channel bandwidth and, thus, decoding accuracy is only possible by using invasive techniques. Electrocorticography (ECoG) is a minimally invasive neuroimaging method that provides highly informative brain activity signals, effective analysis of which requires the use of machine learning methods that are able to learn representations of complex patterns. Deep learning is a family of machine learning algorithms that allow learning representations of data with multiple levels of abstraction. This study explores the potential of deep learning approaches for ECoG processing, decoding movement intentions and the perception of proprioceptive information. To obtain synchronous recording of kinematic movement characteristics and corresponding electrical brain activity, a series of experiments were carried out, during which subjects performed finger movements at their own pace. Finger movements were recorded with a three-axis accelerometer, while ECoG was synchronously registered from the electrode strips that were implanted over the contralateral sensorimotor cortex. Then, multichannel ECoG signals were used to track finger movement trajectory characterized by accelerometer signal. This process was carried out both causally and non-causally, using different position of the ECoG data segment with respect to the accelerometer data stream. The recorded data was split into training and testing sets, containing continuous non-overlapping fragments of the multichannel ECoG. A deep convolutional neural network was implemented and trained, using 1-second segments of ECoG data from the training dataset as input. To assess the decoding accuracy, correlation coefficient r between the output of the model and the accelerometer readings was computed. After optimization of hyperparameters and training, the deep learning model allowed reasonably accurate causal decoding of finger movement with correlation coefficient r = 0.8. In contrast, the classical Wiener-filter like approach was able to achieve only 0.56 in the causal decoding mode. In the noncausal case, the traditional approach reached the accuracy of r = 0.69, which may be due to the presence of additional proprioceptive information. This result demonstrates that the deep neural network was able to effectively find a representation of the complex top-down information related to the actual movement rather than proprioception. The sensitivity analysis shows physiologically plausible pictures of the extent to which individual features (channel, wavelet subband) are utilized during the decoding procedure. In conclusion, the results of this study have demonstrated that a combination of a minimally invasive neuroimaging technique such as ECoG and advanced machine learning approaches allows decoding motion with high accuracy. Such setup provides means for control of devices with a large number of degrees of freedom as well as exploratory studies of the complex neural processes underlying movement execution.Keywords: brain-computer interface, deep learning, ECoG, movement decoding, sensorimotor cortex
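The sketch below illustrates the decoding setup described above in miniature (not the authors' architecture): a small 1-D convolutional network maps a 1-second multichannel ECoG segment to one accelerometer sample, trained with MSE loss and evaluated with the correlation coefficient r. The channel count, sampling rate, layer sizes, and the synthetic data are assumptions.

```python
import torch
import torch.nn as nn

N_CHANNELS, FS = 32, 512            # assumed ECoG montage and sampling rate (1-s windows)

class ECoGDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 32, kernel_size=15, stride=4), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=9, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, 1),
        )

    def forward(self, x):               # x: (batch, channels, samples)
        return self.net(x).squeeze(-1)

# Synthetic stand-ins for ECoG windows and the accelerometer target
x = torch.randn(256, N_CHANNELS, FS)
y = x[:, 0, :].mean(dim=1)              # toy target correlated with channel 0

model = ECoGDecoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(20):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    pred = model(x)
    r = torch.corrcoef(torch.stack([pred, y]))[0, 1]
print("training-set correlation r =", float(r))
```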
Procedia PDF Downloads 178
41 Diffusion MRI: Clinical Application in Radiotherapy Planning of Intracranial Pathology
Authors: Pomozova Kseniia, Gorlachev Gennadiy, Chernyaev Aleksandr, Golanov Andrey
Abstract:
In clinical practice, and especially in stereotactic radiosurgery planning, the significance of diffusion-weighted imaging (DWI) is growing. This makes software that can quickly process and reliably visualize diffusion data, and that is equipped with tools for analyzing these data for different tasks, increasingly important. We are developing the «MRDiffusionImaging» software in standard C++. The domain-specific part has been moved to separate class libraries and can be used on various platforms. The user interface is built with Windows WPF (Windows Presentation Foundation), a technology for managing Windows applications with access to all components of the .NET 5 or .NET Framework platform ecosystem. One of its important features is the use of a declarative markup language, XAML (eXtensible Application Markup Language), with which one can conveniently create, initialize, and set the properties of objects with hierarchical relationships. Graphics are generated using the DirectX environment. The MRDiffusionImaging software package has been implemented for processing diffusion magnetic resonance imaging (dMRI), which allows loading and viewing images sorted by series. An algorithm for "masking" dMRI series based on T2-weighted images was developed using a deformable surface model to exclude tissues that are not related to the area of interest from the analysis. An algorithm for distortion correction using deformable image registration based on autocorrelation of local structure has been developed. The maximum voxel dimension was 1.03 ± 0.12 mm. In an elementary volume of the brain, the diffusion tensor is geometrically interpreted using an ellipsoid, which is an isosurface of the probability density of a molecule's diffusion. For the first time, non-parametric intensity distributions, neighborhood correlations, and inhomogeneities are combined in a single segmentation algorithm for white matter (WM), grey matter (GM), and cerebrospinal fluid (CSF). A tool for calculating the average diffusion coefficient and fractional anisotropy has been created, on the basis of which quantitative maps can be built for solving various clinical problems. Functionality has been created that allows clustering and segmenting images to individualize the clinical volume of radiation treatment and further assess the response (median Dice score = 0.963 ± 0.137). White matter tracts of the brain were visualized using two algorithms: a deterministic one (fiber assignment by continuous tracking) and a probabilistic one based on the Hough transform. The proposed algorithms test candidate curves in each voxel, assigning to each one a score computed from the diffusion data, and then select the curves with the highest scores as the potential anatomical connections. White matter fibers were visualized using the Hough transform tractography algorithm. In the context of functional radiosurgery, it is possible to reduce the irradiation volume of the internal capsule receiving 12 Gy from 0.402 cc to 0.254 cc. The «MRDiffusionImaging» software will improve the efficiency and accuracy of diagnostics and stereotactic radiotherapy of intracranial pathology. We are developing software with integrated, intuitive support for processing, analysis, and inclusion in the radiotherapy planning process and the evaluation of its results.
Keywords: diffusion-weighted imaging, medical imaging, stereotactic radiosurgery, tractography
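The quantitative maps mentioned above are built from the mean (average) diffusivity and fractional anisotropy of the fitted tensor; the short sketch below (not the MRDiffusionImaging code; the example tensor values are arbitrary) shows the standard voxel-wise computation from the tensor eigenvalues.

```python
import numpy as np

def md_fa(tensor):
    """tensor: (..., 3, 3) symmetric diffusion tensor(s). Returns (MD, FA)."""
    evals = np.linalg.eigvalsh(tensor)                       # eigenvalues, ascending
    md = evals.mean(axis=-1)
    num = np.sqrt(((evals - md[..., None]) ** 2).sum(axis=-1))
    den = np.sqrt((evals ** 2).sum(axis=-1))
    fa = np.sqrt(1.5) * np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    return md, fa

# Example: a strongly anisotropic, white-matter-like tensor in mm^2/s
D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
md, fa = md_fa(D)
print(f"MD = {float(md):.2e} mm^2/s, FA = {float(fa):.2f}")
```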
Procedia PDF Downloads 85
40 Embedded Test Framework: A Solution Accelerator for Embedded Hardware Testing
Authors: Arjun Kumar Rath, Titus Dhanasingh
Abstract:
Embedded product development requires software to test hardware functionality during development and to find issues during manufacturing in larger quantities. As the components become integrated, the devices are tested for their full functionality using advanced software tools. Benchmarking tools are used to measure and compare the performance of product features. At present, these tests are based on a variety of methods involving varying hardware and software platforms. Typically, these tests are custom built for every product and remain unusable for other variants. A majority of the tests go undocumented, are not updated, and become unusable when the product is released. To bridge this gap, a solution accelerator in the form of a framework can address these issues by running all these tests from one place, using an off-the-shelf test library in a continuous integration environment. There are many open-source test frameworks and tools (Fuego, LAVA, Autotest, KernelCI, etc.) designed for testing embedded system devices, each with several unique strengths, but no single tool or framework satisfies all of the testing needs of embedded systems; hence the need for an extensible framework that can host a multitude of tools. Embedded product testing includes board bring-up testing, testing during manufacturing, firmware testing, application testing, and assembly testing. Traditional test methods involve developing test libraries and support components for every new hardware platform that belongs to the same domain with identical hardware architecture. This approach has drawbacks such as non-reusability, where platform-specific libraries cannot be reused, the need to maintain source infrastructure for individual hardware platforms, and, most importantly, the time taken to re-develop test cases for new hardware platforms. These limitations create challenges in environment setup for testing, scalability, and maintenance. A desirable strategy is certainly one that focuses on maximizing reusability and continuous integration and on leveraging artifacts across the complete development cycle, during all phases of testing and across a family of products. To overcome the stated challenges of the conventional method and deliver the benefits of embedded testing, an embedded test framework (ETF), a solution accelerator, has been designed that can be deployed in embedded-system products with minimal customization and maintenance to accelerate hardware testing. The embedded test framework supports testing of different hardware, including microprocessors and microcontrollers. It offers benefits such as (1) time-to-market: it accelerates board bring-up with prepackaged test suites supporting all necessary peripherals, which speeds up the design and development stages (board bring-up, manufacturing, and device drivers); (2) reusability: framework components isolated from the platform-specific hardware initialization and configuration make adapting test cases across various platforms quick and simple; (3) an effective build and test infrastructure with multiple test interface options, pre-integrated with the Fuego framework; (4) continuous integration: pre-integration with Jenkins enables continuous testing and an automated software update feature. Applying the embedded test framework accelerator throughout the design and development phase enables the development of well-tested systems before functional verification and improves time to market to a large extent.
Keywords: board diagnostics software, embedded system, hardware testing, test frameworks
Procedia PDF Downloads 147
39 Finite Element Analysis of Mini-Plate Stabilization of Mandible Fracture
Authors: Piotr Wadolowski, Grzegorz Krzesinski, Piotr Gutowski
Abstract:
The aim of the presented investigation is to recognize the possible mechanical issues of the mini-plate connection used to treat mandible fractures and to check the impact of different factors on the stresses and displacements within the bone-stabilizer system. The mini-plate osteosynthesis technique is a common type of internal fixation using metal plates connected to the fractured bone parts by a set of screws. Two types of plate application methodology used by maxillofacial surgeons were investigated in this work; these patterns differ in the location and number of plates. The bone geometry was modeled on the basis of computed tomography scans of a hospitalized patient taken just after mini-plate application. The solid volume geometry consisting of cortical and cancellous bone was created based on the obtained point cloud. The temporomandibular joint and muscle system were simulated to imitate the behavior of the real masticatory system. The finite element mesh and analysis were prepared in ANSYS software. To simulate realistic connection behavior, nonlinear contact conditions were used between the connecting elements and the bones. The influence of an initial compression of the connected bone parts, or of a gap between them, was analyzed. Nonlinear material properties of the bone tissues and an elastic-plastic model of the titanium alloy were used. Three loading cases, assuming a force of magnitude 100 N acting on the left molars, the right molars, and the incisors, were investigated. The stress distribution within the connecting plate shows that compression of the bone parts in the connection results in high stress concentration in the plate and the screws; however, the maximum stress levels do not exceed the yield limit of the material (titanium). There are no significant differences between the negative-offset (gap) and no-offset conditions. The location of the external force influences the magnitude of the stresses around both the plate and the bone parts. The two-plate system generally gives lower von Mises stress under the same loading than the one-plate approach. The von Mises stress distribution within the cortical bone shows a reduction of the high-stress field for the cases without compression (neutral initial contact). For initial prestressing, there is a visible, significant stress increase around the fixing holes of the bottom mini-plate due to the assembly stress. This local stress concentration may be the reason for bone destruction in those regions. The performed calculations prove that the bone-mini-plate system is able to properly stabilize the fractured mandible bone. There is a visible, strong dependency between the mini-plate location and the stress distribution within the stabilizer structure and the surrounding bone tissue. The results (stresses within the bone tissues and within the devices, relative displacements of the bone parts at the interface) corresponding to different models of the connection provide a basis for the mechanical optimization of mini-plate connections. The results of the performed numerical simulations were compared to clinical observation. They provide information helpful for better understanding of the load transfer in the mandible with the stabilizer and for improving stabilization techniques.
Keywords: finite element modeling, mandible fracture, mini-plate connection, osteosynthesis
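For reference, the von Mises equivalent stress used above to compare the one-plate and two-plate fixations is computed from the six Cauchy stress components as in the short sketch below; the sample values are hypothetical, not results from the reported model.

```python
import numpy as np

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Von Mises equivalent stress from the six Cauchy stress components (same units as inputs)."""
    return np.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                   + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

# Hypothetical stress state in MPa at a screw hole
print("sigma_vM =", round(float(von_mises(120.0, 40.0, 10.0, 35.0, 5.0, 0.0)), 1), "MPa")
```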
Procedia PDF Downloads 247
38 Sentinel-2 Based Burn Area Severity Assessment Tool in Google Earth Engine
Authors: D. Madhushanka, Y. Liu, H. C. Fernando
Abstract:
Fires are one of the foremost factors of land surface disturbance in diverse ecosystems, causing soil erosion, land-cover changes, and atmospheric effects that affect people's lives and property. Generally, fire severity is assessed using the Normalized Burn Ratio (NBR) index. This is usually done manually by comparing pre-fire and post-fire images: the dNBR is calculated as the bitemporal difference of the preprocessed satellite images. The burnt area is then classified as either unburnt (dNBR < 0.1) or burnt (dNBR >= 0.1). Furthermore, Wildfire Severity Assessment (WSA) classifies burnt and unburnt areas using classification levels proposed by the USGS and comprises seven classes. This study was carried out with the objective of producing an automated tool for the above-mentioned process, namely the World Wildfire Severity Assessment Tool (WWSAT). It is implemented in Google Earth Engine (GEE), a free cloud-computing platform for satellite data processing with several data catalogs at different resolutions (notably Landsat, Sentinel-2, and MODIS) and planetary-scale analysis capabilities. Sentinel-2 MSI was chosen to obtain regular burnt-area severity mapping from a medium-spatial-resolution sensor (15 m). This tool uses machine learning classification techniques to identify burnt areas using NBR and to classify their severity over the user-selected extent and period automatically. Cloud coverage is one of the biggest concerns when fire severity mapping is performed. In WWSAT, based on GEE, we present a fully automatic workflow to aggregate cloud-free Sentinel-2 images for both pre-fire and post-fire image compositing. The parallel processing capabilities and preloaded geospatial datasets of GEE facilitated the production of this tool. The tool includes a Graphical User Interface (GUI) to make it user-friendly. Its advantage is the ability to obtain burn area severity over a large extent and over extended temporal periods. Two case studies were carried out to demonstrate its performance. The Blue Mountains National Park forest affected by the Australian fire season of 2019-2020 is used to describe the workflow of WWSAT. More than 7809 km2 of burnt area was detected at this site using Sentinel-2 data, with an error below 6.5% compared with the area detected in the field. Furthermore, 86.77% of the detected area was recognized as fully burnt out, comprising high severity (17.29%), moderate-high severity (19.63%), moderate-low severity (22.35%), and low severity (27.51%). The Arapaho and Roosevelt National Forests, Colorado, USA, affected by the Cameron Peak fire in 2020, were chosen for the second case study. It was found that around 983 km2 had burned out, comprising high severity (2.73%), moderate-high severity (1.57%), moderate-low severity (1.18%), and low severity (5.45%). These spots can also be detected through the visual inspection made possible by the cloud-free images generated by WWSAT. This tool is cost-effective in calculating the burnt area since satellite images are free and the cost of field surveys is avoided.
Keywords: burnt area, burnt severity, fires, google earth engine (GEE), sentinel-2
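A minimal sketch of the dNBR computation and severity classification described above, done in plain NumPy outside Google Earth Engine (not the WWSAT code). The band arrays are synthetic stand-ins for NIR (B8) and SWIR (B12) reflectances, and the class thresholds are the commonly used dNBR breaks, assumed here rather than taken from the tool.

```python
import numpy as np

def nbr(nir, swir):
    return (nir - swir) / (nir + swir + 1e-10)

def classify_dnbr(dnbr):
    """Severity classes: 0 = unburnt, 1 = low, 2 = moderate-low, 3 = moderate-high, 4 = high."""
    bins = [0.1, 0.27, 0.44, 0.66]        # commonly used dNBR thresholds (assumed)
    return np.digitize(dnbr, bins)

# Synthetic pre- and post-fire reflectances
rng = np.random.default_rng(2)
nir_pre, swir_pre = rng.uniform(0.3, 0.5, (100, 100)), rng.uniform(0.1, 0.2, (100, 100))
nir_post, swir_post = nir_pre * 0.6, swir_pre * 1.8          # burnt signal: NIR drops, SWIR rises

dnbr = nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
classes, counts = np.unique(classify_dnbr(dnbr), return_counts=True)
print(dict(zip(classes.tolist(), counts.tolist())))
```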
Procedia PDF Downloads 238
37 Suitability Assessment of Water Harvesting and Land Restoration in Catchment Comprising Abandoned Quarry Site in Addis Ababa, Ethiopia
Authors: Rahel Birhanu Kassaye, Ralf Otterpohl, Kumelachew Yeshitila
Abstract:
Water resource management and land degradation are among the critical issues threatening the livability of many cities in developing countries such as Ethiopia. The rapid expansion of urban areas and a fast-growing population have increased the pressure on water security. At the same time, the large-scale transformation of natural green cover and the loss of agricultural land to settlement and industrial activities such as quarrying are contributing to environmental concerns. Integrated water harvesting is considered to play a crucial role in providing an alternative water source to ensure water security and in helping to improve soil conditions, agricultural productivity, and ecosystem regeneration. Moreover, it helps to control stormwater runoff, thus reducing flood risks and pollution and thereby improving the quality of receiving water bodies and the health of inhabitants. The aim of this research was to investigate the potential of applying integrated water harvesting approaches as a provision of water supply and a means of land restoration in the Jemo river catchment, which contains an abandoned quarry site adjacent to a settlement area facing serious water shortage in the western hilly part of Addis Ababa, Ethiopia. The abandoned quarry site, apart from its contribution to the loss of aesthetics, has resulted in poor water infiltration and an increase in stormwater runoff, leading to land degradation and flooding downstream. GIS and multi-criteria analysis were used to assess potential water harvesting technologies, considering the technology features and the site characteristics of the case study area. Biophysical parameters, including precipitation, surrounding land use, surface gradient, soil characteristics, and geological aspects, were used as site-characteristic indicators, and water harvesting technologies including retention ponds, check dams, and agro-forestation employing a contour trench system were considered for evaluation, with technical and socio-economic factors used as additional assessment parameters. The assessment results indicate differing suitability among the analyzed water harvesting and restoration techniques with respect to the characteristics of the abandoned quarry site. Agro-forestation with a contour trench system and revegetation with indigenous plants was found to be the most suitable option for reclamation and restoration of the quarry site. Successful application of the selected technologies and strategies for water harvesting and restoration is expected to play a significant role in providing an additional water source, maintaining good water quality, increasing agricultural productivity at the urban-peri-urban interface scale, and improving biodiversity in the catchment. The results of the study provide a guideline for decision makers and contribute to the integration of decentralized water harvesting and restoration techniques in the water management and planning of the case study area.
Keywords: abandoned quarry site, land reclamation and restoration, multi-criteria assessment, water harvesting
Procedia PDF Downloads 217
36 Rapid, Direct, Real-Time Method for Bacteria Detection on Surfaces
Authors: Evgenia Iakovleva, Juha Koivisto, Pasi Karppinen, J. Inkinen, Mikko Alava
Abstract:
Preventing the spread of infectious diseases throughout the world is one of the most important tasks of modern health care. Infectious diseases not only account for one fifth of deaths worldwide but also cause many pathological complications for human health. Touch surfaces pose an important vector for the spread of infections by various microorganisms, including antimicrobial-resistant organisms. Furthermore, antimicrobial resistance is the response of bacteria to the overuse or inappropriate use of antibiotics everywhere. The biggest challenges in bacterial detection by existing methods are indirect determination, long analysis times, sample preparation, the use of chemicals and expensive equipment, and the need for qualified specialists. Therefore, high-performance, rapid, real-time detection is demanded for practical bacterial detection and for controlling epidemiological hazards. Among the known methods for determining bacteria on surfaces, hyperspectral methods can be used as direct and rapid methods for microorganism detection on different kinds of surfaces based on fluorescence, without sampling, sample preparation, or chemicals. The aim of this study was to assess the relevance of such systems for the remote sensing of surfaces for microorganism detection to prevent the global spread of infectious diseases. Bacillus subtilis and Escherichia coli at different concentrations (from 0 to 10⁸ cells/100 µL) were detected with a hyperspectral camera using different filters, providing visible visualization of bacteria and background spots on a steel plate. A method of internal standards was applied to monitor the correctness of the analysis results. The distances from the sample to the hyperspectral camera and to the light source are 25 cm and 40 cm, respectively. Each sample is optically imaged from the surface by the hyperspectral imaging system, utilizing a JAI CM-140GE-UV camera. The light source is a BeamZ FLATPAR DMX tri-light with 3 W tri-colour LEDs (red, blue, and green). Light colors are changed through a DMX USB Pro interface. The developed system was calibrated following a standard procedure of setting the exposure and was focused for light with λ = 525 nm. The filter is a ThorLabs KURIOS™ hyperspectral filter controller with wavelengths from 420 to 720 nm. All data collection, pre-processing, and multivariate analysis were performed using LabVIEW and Python software. In the clustering analysis, the studied bacterial stains, both visible and invisible to the human eye, clustered apart from the reference steel material under different light sources and filter wavelengths. The calculation of the random and systematic errors of the analysis results proved the applicability of the method in real conditions. Validation experiments have been carried out with photometry and an ATP swab test. The lower detection limit of the developed method is several orders of magnitude below that of both validation methods. All parameters of the experiments were the same, except for the light. The hyperspectral imaging method allows separation not only of bacteria and surfaces but also of different types of bacteria, such as Gram-negative Escherichia coli and Gram-positive Bacillus subtilis. Unlike other microbiological methods, the developed method allows skipping sample preparation and the use of chemicals. The analysis time with the novel hyperspectral system is a few seconds, which is innovative in the field of microbiological testing.
Keywords: Escherichia coli, Bacillus subtilis, hyperspectral imaging, microorganisms detection
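To illustrate the kind of clustering analysis mentioned above (not the LabVIEW/Python pipeline used at the beamline; the cube size, band count, and synthetic spectra are assumptions), the sketch below clusters per-pixel spectra from a hyperspectral cube to separate a bacterial spot from the clean steel background.

```python
import numpy as np
from sklearn.cluster import KMeans

H, W, BANDS = 64, 64, 30                      # assumed cube size: 30 wavelength bands (420-720 nm sweep)
rng = np.random.default_rng(3)
cube = rng.normal(0.5, 0.02, (H, W, BANDS))   # steel background
cube[20:30, 20:30] += np.linspace(0.0, 0.15, BANDS)   # a "fluorescent" bacterial spot

pixels = cube.reshape(-1, BANDS)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
label_map = labels.reshape(H, W)

# The minority cluster is the candidate bacterial stain
minority = int(np.argmin(np.bincount(labels)))
print("candidate stain pixels:", int((label_map == minority).sum()), "of", H * W)
```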
Procedia PDF Downloads 22935 Digitization and Morphometric Characterization of Botanical Collection of Indian Arid Zones as Informatics Initiatives Addressing Conservation Issues in Climate Change Scenario
Authors: Dipankar Saha, J. P. Singh, C. B. Pandey
Abstract:
The Indian Thar desert, the seventh largest in the world and the main hot sand desert, occupies nearly 385,000 km² and about 9% of the area of the country, and harbours a rich flora of 682 species (63 introduced species) belonging to 352 genera and 87 families. The degree of endemism of plant species in the Thar desert is 6.4 percent, which is relatively higher than that of the Sahara desert and very significant for conservationists to envisage. The advent and development of computer technology for digitization and database management, coupled with the rapidly increasing importance of biodiversity conservation, resulted in the emergence of biodiversity informatics as a discipline of basic science with multiple applications. Aichi Target 19, as an outcome of the Convention on Biological Diversity (CBD), specifically mandates the development of an advanced and shared biodiversity knowledge base. Information on species distributions in space is the crux of effective management of biodiversity in a rapidly changing world. The efficiency of biodiversity management is being increased rapidly by various stakeholders, such as researchers, policymakers, and funding agencies, through the knowledge and application of biodiversity informatics. Herbarium specimens are a vital repository for biodiversity conservation, especially in a climate change scenario; the digitization process usually aims to improve access and to preserve delicate specimens, and in doing so creates large sets of images as part of the existing repository, an arid plant information facility, for long-term future use. Leaf characters are important for describing taxa and distinguishing between them, and they can be measured from herbarium specimens as well. As part of this activity, laminar characterization (leaves being among the most important characters for assessing climate change impact) initially resulted in the classification of more than a thousand collections belonging to ten families, including Acanthaceae, Aizoaceae, Amaranthaceae, Asclepiadaceae, Anacardiaceae, Apocynaceae, Asteraceae, Aristolochiaceae, Burseraceae and Bignoniaceae. Taxonomic diversity indices have also been worked out, this being one of the important domains of biodiversity informatics approaches. The digitization process also encompasses workflows that incorporate automated systems, enabling the digitization effort to be expanded and sped up. The digitization workflows are based on a modular system with the potential to be scaled up; they are being developed with a geo-referencing tool and additional quality control elements, finally placing specimen images and data into a fully searchable, web-accessible database. Our effort in this paper is to elucidate the role of biodiversity informatics and to present the ongoing development of a database for the existing botanical collection in the institute repository. This effort is expected to form part of various global initiatives towards an effective biodiversity information facility. It will enable access to plant biodiversity data that are fit for use by scientists and decision makers working on biodiversity conservation and sustainable development in the region and in iso-climatic situations elsewhere in the world.Keywords: biodiversity informatics, climate change, digitization, herbarium, laminar characters, web accessible interface
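As a simple illustration of the diversity-index step mentioned above, the following Python sketch computes a Shannon diversity index from hypothetical specimen counts per family; the counts are placeholders and Shannon is used only as a stand-in, since the abstract does not state which taxonomic diversity index was applied:

    # Shannon diversity index H' from digitized specimen counts per family.
    import math

    specimen_counts = {"Acanthaceae": 120, "Aizoaceae": 45, "Amaranthaceae": 210,
                       "Asclepiadaceae": 60, "Asteraceae": 310}   # hypothetical counts
    total = sum(specimen_counts.values())
    shannon = -sum((n / total) * math.log(n / total) for n in specimen_counts.values())
    print(f"Shannon diversity index H' = {shannon:.3f}")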
Procedia PDF Downloads 23134 Digital Twin for a Floating Solar Energy System with Experimental Data Mining and AI Modelling
Authors: Danlei Yang, Luofeng Huang
Abstract:
The integration of digital twin technology with renewable energy systems offers an innovative approach to predicting and optimising performance throughout the entire lifecycle. A digital twin is a continuously updated virtual replica of a real-world entity, synchronised with data from its physical counterpart and environment. Many digital twin companies today claim to have mature digital twin products, but their focus is primarily on equipment visualisation. However, the core of a digital twin should be its model, which can mirror, shadow, and thread with the real-world entity, and this remains underdeveloped. For a floating solar energy system, a digital twin model can be defined in three aspects: (a) the physical floating solar energy system along with environmental factors such as solar irradiance and wave dynamics, (b) a digital model powered by artificial intelligence (AI) algorithms, and (c) the integration of real system data with the AI-driven model and a user interface. The experimental setup for the floating solar energy system is designed to replicate real-ocean conditions of floating solar installations within a controlled laboratory environment. The system consists of a water tank that simulates an aquatic surface, where a floating catamaran structure supports a solar panel. The solar simulator is set up in three positions: one directly above and two inclined at a 45° angle in front of and behind the solar panel. This arrangement allows the simulation of different sun angles, such as sunrise, midday, and sunset. The solar simulator is positioned 400 mm away from the solar panel to maintain consistent solar irradiance on its surface. Stability for the floating structure is achieved through ropes attached to anchors at the bottom of the tank, which simulate the mooring systems used in real-world floating solar applications. The floating solar energy system's sensor setup includes various devices to monitor environmental and operational parameters. An irradiance sensor measures solar irradiance on the photovoltaic (PV) panel. Temperature sensors monitor ambient air and water temperatures, as well as the PV panel temperature. Wave gauges measure wave height, while load cells capture mooring force. Inclinometers and ultrasonic sensors record heave and pitch amplitudes of the floating system's motions. An electric load measures the voltage and current output from the solar panel. All sensors collect data simultaneously. Artificial neural network (ANN) algorithms are central to developing the digital model, which processes historical and real-time data, identifies patterns, and predicts the system's performance in real time. The data collected from the various sensors are partly used to train the digital model, with the remaining data reserved for validation and testing. The digital twin model combines the experimental setup with the ANN model, enabling monitoring, analysis, and prediction of the floating solar energy system's operation. The digital model mirrors the functionality of the physical setup, running in sync with the experiment to provide real-time insights and predictions. It provides useful industrial benefits, such as informing maintenance plans as well as design and control strategies for optimal energy efficiency. In the long term, this digital twin will help improve overall solar energy yield whilst minimising operational costs and risks.Keywords: digital twin, floating solar energy system, experiment setup, artificial intelligence
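For illustration, a minimal Python sketch of the ANN surrogate described above, trained on synthetic sensor readings; the feature set, data, and network size are assumptions rather than the authors' model:

    # ANN surrogate mapping sensor readings to panel power output,
    # the predictive core of the digital twin's digital model.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n = 2000
    irradiance = rng.uniform(100, 1000, n)                        # W/m^2
    panel_temp = 25 + 0.03 * irradiance + rng.normal(0, 1, n)     # deg C
    wave_height = rng.uniform(0, 0.05, n)                         # m
    pitch = rng.normal(0, 2, n)                                   # degrees
    # synthetic power output with temperature derating and motion losses
    power = 0.2 * irradiance * (1 - 0.004 * (panel_temp - 25)) - 5 * wave_height - 0.1 * np.abs(pitch)

    X = np.column_stack([irradiance, panel_temp, wave_height, pitch])
    X_train, X_test, y_train, y_test = train_test_split(X, power, test_size=0.3, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
    model.fit(X_train, y_train)
    print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))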
Procedia PDF Downloads 1433 Generating Individualized Wildfire Risk Assessments Utilizing Multispectral Imagery and Geospatial Artificial Intelligence
Authors: Gus Calderon, Richard McCreight, Tammy Schwartz
Abstract:
Forensic analysis of community wildfire destruction in California has shown that reducing or removing flammable vegetation in proximity to buildings and structures is one of the most important wildfire defenses available to homeowners. State laws specify the requirements for homeowners to create and maintain defensible space around all structures. Unfortunately, this decades-long effort has had limited success due to noncompliance and minimal enforcement. As a result, vulnerable communities continue to experience escalating human and economic costs along the wildland-urban interface (WUI). Quantifying vegetative fuels at both the community and parcel scale requires detailed imaging from an aircraft with remote sensing technology to reduce uncertainty. FireWatch has been delivering high spatial resolution (5" ground sample distance) wildfire hazard maps annually to the community of Rancho Santa Fe, CA, since 2019. FireWatch uses a multispectral imaging system mounted onboard an aircraft to create georeferenced orthomosaics and spectral vegetation index maps. Using proprietary algorithms, the vegetation type, condition, and proximity to structures are determined for 1,851 properties in the community. Secondary data processing uses object-based classification of vegetative fuels, assisted by machine learning, to prioritize mitigation strategies within the community. The remote sensing data for the 10 sq. mi. community is divided into parcels and sent to all homeowners in the form of defensible space maps and reports. Follow-up aerial surveys are performed annually using repeat station imaging of fixed GPS locations to address changes in defensible space, vegetation fuel cover, and condition over time. These maps and reports have increased wildfire awareness and mitigation efforts from 40% to over 85% among homeowners in Rancho Santa Fe. To assist homeowners fighting increasing insurance premiums and non-renewals, FireWatch has partnered with Black Swan Analytics, LLC, to leverage the multispectral imagery and increase homeowners' understanding of wildfire risk drivers. For this study, a subsample of 100 parcels was selected to gain a comprehensive understanding of wildfire risk and the elements which can be mitigated. Geospatial data from FireWatch's defensible space maps was combined with Black Swan's patented approach using 39 other risk characteristics into a 4score Report. The 4score Report helps property owners understand risk sources and potential mitigation opportunities by assessing four categories of risk: fuel sources, ignition sources, susceptibility to loss, and hazards to fire protection efforts (FISH). This study has shown that susceptibility to loss is the category on which residents and property owners must focus their efforts. The 4score Report also provides a tool to measure the impact of homeowner actions on risk levels over time. Resiliency is the only solution to breaking the cycle of community wildfire destruction, and it starts with high-quality data and education.Keywords: defensible space, geospatial data, multispectral imaging, Rancho Santa Fe, susceptibility to loss, wildfire risk
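As an illustration of the spectral vegetation index mapping mentioned above, a minimal Python sketch computes NDVI from red and near-infrared bands; the reflectance values and threshold are placeholders, and this is not FireWatch's proprietary algorithm:

    # NDVI from red and near-infrared reflectance, then a simple threshold
    # to flag vegetated pixels for defensible-space assessment.
    import numpy as np

    red = np.array([[0.10, 0.30], [0.25, 0.05]])   # placeholder reflectance values
    nir = np.array([[0.60, 0.35], [0.30, 0.55]])

    ndvi = (nir - red) / (nir + red + 1e-9)        # small epsilon avoids division by zero
    vegetated = ndvi > 0.3                         # assumed threshold for live vegetation
    print(ndvi.round(2))
    print("vegetated pixels:", int(vegetated.sum()))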
Procedia PDF Downloads 10832 Fueling Efficient Reporting And Decision-Making In Public Health With Large Data Automation In Remote Areas, Neno Malawi
Authors: Wiseman Emmanuel Nkhomah, Chiyembekezo Kachimanga, Julia Huggins, Fabien Munyaneza
Abstract:
Background: Partners In Health – Malawi introduced an operational research study called the Primary Health Care (PHC) Surveys in 2020, which seeks to assess the progress of care delivery in the district. The study consists of 5 long surveys, namely: Facility Assessment, General Patient, Provider, Sick Child, and Antenatal Care (ANC), primarily conducted in 4 health facilities in Neno district. These facilities include Neno district hospital, Dambe health centre, Chifunga and Matope. Usually, these annual surveys are conducted from January, and the target is to present the final report by June. Once data is collected and analyzed, a series of reviews takes place before reaching the final report. Initially, the manual process took over 9 months to produce the final report, and initial findings showed that only about 76.9% of the data added up when cross-checked with paper-based sources. Purpose: The aim of this approach is to move away from manually pulling data, redoing the analysis, and reporting, which are often associated not only with delays and inconsistencies in reporting but also with poor data quality if not done carefully. This automation approach was meant to utilize features of new technologies to create visualizations, reports, and dashboards in Power BI that are fed directly from the data source – CommCare – and hence require only a single click of a 'refresh' button to populate the visualizations, reports, and dashboards with updated information at once. Methodology: We transformed paper-based questionnaires into electronic forms using the CommCare mobile application. We further connected CommCare directly to Power BI using an Application Programming Interface (API) connection as the data pipeline. This provided the opportunity to create visualizations, reports, and dashboards in Power BI. Instead of manually collecting data in paper-based questionnaires, entering them in ordinary spreadsheets, and conducting the analysis anew each time a report was prepared, the team utilized CommCare and Microsoft Power BI technologies. We utilized validations and logic in CommCare to capture data with fewer errors. We utilized Power BI features to host the reports online by publishing them through a cloud-computing process. We switched from sharing ordinary report files to sharing a link with potential recipients, giving them the freedom to dig deeper into additional findings within the Power BI dashboards and the freedom to export to any format of their choice. Results: This data automation approach reduced research timelines from the initial 9 months to 5. It also improved the quality of the data findings from the original 76.9% to 98.9%. This brought confidence to draw conclusions from the findings that help in decision-making and opened opportunities for further research. Conclusion: These results suggest that automating the research data process has the potential to reduce the overall amount of time spent and to improve the quality of the data. On this basis, the concept of data automation should be taken into serious consideration when conducting operational research for efficiency and decision-making.Keywords: reporting, decision-making, Power BI, CommCare, data automation, visualizations, dashboards
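A minimal Python sketch of the kind of API pull behind such a pipeline follows; the endpoint path, authentication header, project space, and field handling are assumptions to be checked against current CommCare documentation, not the project's actual code:

    # Pull submitted CommCare form data into a staging file that a
    # Power BI dataset can refresh from.
    import requests
    import pandas as pd

    DOMAIN = "example-project"                                     # hypothetical project space
    URL = f"https://www.commcarehq.org/a/{DOMAIN}/api/v0.5/form/"  # assumed List Forms endpoint
    HEADERS = {"Authorization": "ApiKey user@example.org:YOUR_API_KEY"}  # placeholder credentials

    def fetch_forms(limit=100):
        """Fetch one page of form submissions and flatten them into a DataFrame."""
        resp = requests.get(URL, headers=HEADERS, params={"limit": limit}, timeout=30)
        resp.raise_for_status()
        return pd.json_normalize(resp.json().get("objects", []))

    if __name__ == "__main__":
        fetch_forms().to_csv("phc_survey_forms.csv", index=False)  # staging file for Power BI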
Procedia PDF Downloads 11831 Geographic Information System Based Multi-Criteria Subsea Pipeline Route Optimisation
Authors: James Brown, Stella Kortekaas, Ian Finnie, George Zhang, Christine Devine, Neil Healy
Abstract:
The use of GIS as an analysis tool for engineering decision making is now best practice in the offshore industry. GIS enables multidisciplinary data integration, analysis and visualisation, which allows the presentation of large and intricate datasets in a simple map interface accessible to all project stakeholders. Presenting integrated geoscience and geotechnical data in GIS enables decision makers to be well-informed. This paper is a successful case study of how GIS spatial analysis techniques were applied to help select the most favourable pipeline route. Routing a pipeline through any natural environment has numerous obstacles, whether they be topographical, geological, engineering or financial. Where the pipeline is subjected to external hydrostatic water pressure and is carrying pressurised hydrocarbons, the requirement to safely route the pipeline through hazardous terrain becomes absolutely paramount. This study illustrates how the application of modern, GIS-based pipeline routing techniques enabled the identification of a single most favourable pipeline route crossing of a challenging seabed terrain. Conventional approaches to pipeline route determination focus on manual avoidance of primary constraints whilst endeavouring to minimise route length. Such an approach is qualitative, subjective, and liable to bias towards the discipline and expertise involved in the routing process. For very short routes traversing benign seabed topography in shallow water this approach may be sufficient, but for deepwater geohazardous sites, the need for an automated, multi-criteria, and quantitative approach is essential. This study combined multiple routing constraints using modern least-cost-routing algorithms deployed in GIS, hitherto unachievable with conventional approaches. The least-cost-routing procedure begins with the assignment of geocost across the study area. Geocost is defined as a numerical penalty score representing the hazard posed by each routing constraint (e.g. slope angle, rugosity, vulnerability to debris flows) to the pipeline. All geocosted routing constraints are combined to generate a composite geocost map that is used to compute the least geocost route between two defined terminals. The analyses were applied to select the most favourable pipeline route for a potential gas development in deep water. The study area is geologically complex, with a series of incised, potentially active canyons carved into a steep escarpment and evidence of extensive debris flows. A similar debris flow in the future could cause significant damage to a poorly placed pipeline. Protruding inter-canyon spurs offer lower-gradient options for ascending an escarpment, but the vulnerability of these spurs to periodic failure is not well understood. Close collaboration between geoscientists, pipeline engineers, geotechnical engineers and, of course, the gas export pipeline operator guided the analyses and assignment of geocosts. Shorter route length, less severe slope angles, and geohazard avoidance were the primary drivers in identifying the most favourable route.Keywords: geocost, geohazard, pipeline route determination, pipeline route optimisation, spatial analysis
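For illustration, a minimal Python sketch of the least-cost-routing idea described above; the constraint rasters, weights, and terminals are synthetic placeholders, not the study's geocost scheme:

    # Combine constraint rasters into a composite geocost surface and
    # compute a least-cost pipeline route across it.
    import numpy as np
    from skimage.graph import route_through_array

    rng = np.random.default_rng(2)
    slope = rng.uniform(0, 30, (50, 50))            # slope angle raster (degrees)
    rugosity = rng.uniform(0, 1, (50, 50))          # seabed rugosity raster
    debris_risk = np.zeros((50, 50))
    debris_risk[20:30, :] = 5.0                     # canyon corridor vulnerable to debris flows

    # composite geocost: weighted sum of normalised penalties (weights are assumed)
    geocost = 1.0 * (slope / 30) + 0.5 * rugosity + 2.0 * debris_risk + 0.01

    start, end = (0, 0), (49, 49)                   # pipeline terminals (illustrative)
    path, total_cost = route_through_array(geocost, start, end,
                                           fully_connected=True, geometric=True)
    print("route cells:", len(path), "total geocost:", round(total_cost, 1))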
Procedia PDF Downloads 40630 Improving the Utility of Social Media in Pharmacovigilance: A Mixed Methods Study
Authors: Amber Dhoot, Tarush Gupta, Andrea Gurr, William Jenkins, Sandro Pietrunti, Alexis Tang
Abstract:
Background: The COVID-19 pandemic has driven pharmacovigilance towards a new paradigm. Nowadays, more people than ever before are recognising and reporting adverse reactions from medications, treatments, and vaccines. In the modern era, with over 3.8 billion users, social media has become the most accessible medium for people to voice their opinions and so provides an opportunity to engage with more patient-centric and accessible pharmacovigilance. However, the pharmaceutical industry has been slow to incorporate social media into its modern pharmacovigilance strategy. This project aims to make social media a more effective tool in pharmacovigilance, and so reduce drug costs, improve drug safety and improve patient outcomes. This will be achieved by firstly uncovering and categorising the barriers facing the widespread adoption of social media in pharmacovigilance. Following this, the potential opportunities of social media will be explored. We will then propose realistic, practical recommendations to make social media a more effective tool for pharmacovigilance. Methodology: A comprehensive systematic literature review was conducted to produce a categorised summary of these barriers. This was followed by conducting 11 semi-structured interviews with pharmacovigilance experts to confirm the literature review findings whilst also exploring the unpublished and real-life challenges faced by those in the pharmaceutical industry. Finally, a survey of the general public (n = 112) ascertained public knowledge, perception, and opinion regarding the use of their social media data for pharmacovigilance purposes. This project stands out by offering perspectives from the public and the pharmaceutical industry that fill the research gaps identified in the literature review. Results: Our results gave rise to several key analysis points. Firstly, inadequacies of current Natural Language Processing algorithms hinder effective pharmacovigilance data extraction from social media, and where data extraction is possible, there are significant questions over its quality. Social media also contains a variety of biases towards common drugs, mild adverse drug reactions, and the younger generation. Additionally, outdated regulations for social media pharmacovigilance do not align with the modern General Data Protection Regulation (GDPR), creating ethical ambiguity about data privacy and level of access. This leads to an underlying mindset of avoidance within the pharmaceutical industry, as firms are disincentivised by the legal, financial, and reputational risks associated with breaking ambiguous regulations. Conclusion: Our project uncovered several barriers that prevent effective pharmacovigilance on social media. As such, social media should be used to complement traditional sources of pharmacovigilance rather than as a sole source of pharmacovigilance data. However, this project adds further value by proposing five practical recommendations that improve the effectiveness of social media pharmacovigilance. These include: prioritising health-orientated social media; improving technical capabilities through investment and strategic partnerships; setting clear regulatory guidelines using multi-stakeholder processes; creating an adverse drug reaction reporting interface inbuilt into social media platforms; and, finally, developing educational campaigns to raise awareness of the use of social media in pharmacovigilance. Implementation of these recommendations would speed up the efficient, ethical, and systematic adoption of social media in pharmacovigilance.Keywords: adverse drug reaction, drug safety, pharmacovigilance, social media
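To make the NLP barrier concrete, a toy Python sketch of naive lexicon matching for adverse drug reaction (ADR) mentions follows; the posts and term list are invented, and the point is only that slang, negation, and context defeat such simple approaches:

    # Naive keyword scan for ADR mentions in social media posts.
    ADR_TERMS = {"headache", "nausea", "rash", "dizziness", "fatigue"}

    posts = [
        "Got my jab yesterday, woke up with a pounding headache and nausea",
        "No side effects at all, feeling great",
        "felt super woozy after the new tablets",   # slang: missed by the lexicon
    ]

    for post in posts:
        hits = {t for t in ADR_TERMS if t in post.lower()}
        print(f"{post!r} -> possible ADR terms: {sorted(hits) or 'none detected'}")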
Procedia PDF Downloads 8329 Simulation and Analysis of Mems-Based Flexible Capacitive Pressure Sensors with COMSOL
Authors: Ding Liangxiao
Abstract:
The technological advancements in Micro-Electro-Mechanical Systems (MEMS) have significantly contributed to the development of new, flexible capacitive pressure sensors, which are pivotal in transforming wearable and medical device technologies. This study employs the sophisticated simulation tools available in COMSOL Multiphysics® to develop and analyze a MEMS-based sensor with a tri-layered design. This sensor comprises top and bottom electrodes made from gold (Au), noted for their excellent conductivity, a middle dielectric layer made from a composite of Silver Nanowires (AgNWs) embedded in Thermoplastic Polyurethane (TPU), and a flexible, durable substrate of Polydimethylsiloxane (PDMS). This research was directed towards understanding how changes in the physical characteristics of the AgNWs/TPU dielectric layer, specifically its thickness and surface area, impact the sensor's operational efficacy. We assessed several key electrical properties: capacitance, electric potential, and membrane displacement under varied pressure conditions. These investigations are crucial for enhancing the sensor's sensitivity and ensuring its adaptability across diverse applications, including health monitoring systems and dynamic user interface technologies. To ensure the reliability of our simulations, we applied the Effective Medium Theory to calculate the dielectric constant of the AgNWs/TPU composite accurately. This approach is essential for predicting how the composite material will perform under different environmental and operational stresses, thus facilitating the optimization of the sensor design for enhanced performance and longevity. Moreover, we explored the potential benefits of innovative three-dimensional structures for the dielectric layer compared to traditional flat designs. Our hypothesis was that 3D configurations might improve the stress distribution and optimize the electrical field interactions within the sensor, thereby boosting its sensitivity and accuracy. Our simulation protocol includes comprehensive performance testing under simulated environmental conditions, such as temperature fluctuations and mechanical pressures, which mirror the actual operational conditions. These tests are crucial for assessing the sensor's robustness and its ability to function reliably over extended periods, ensuring high reliability and accuracy in complex real-world environments. In our current research, although a full dynamic simulation analysis of the three-dimensional structures has not yet been conducted, preliminary explorations through three-dimensional modeling have indicated the potential for mechanical and electrical performance improvements over traditional planar designs. These initial observations emphasize the potential advantages and importance of incorporating advanced three-dimensional modeling techniques in the development of Micro-Electro-Mechanical Systems (MEMS) sensors, offering new directions for the design and functional optimization of future sensors. Overall, this study not only highlights the powerful capabilities of COMSOL Multiphysics® for modeling sophisticated electronic devices but also underscores the potential of innovative MEMS technology in advancing the development of more effective, reliable, and adaptable sensor solutions for a broad spectrum of technological applications.Keywords: MEMS, flexible sensors, COMSOL Multiphysics, AgNWs/TPU, PDMS, 3D modeling, sensor durability
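For illustration, a minimal Python sketch of the effective-medium and capacitance estimate discussed above; Maxwell-Garnett is used here as one common effective medium formulation, and the permittivities, nanowire loading, and geometry are illustrative assumptions rather than the paper's values:

    # Effective permittivity of an AgNWs/TPU composite (Maxwell-Garnett form)
    # and the resulting parallel-plate capacitance.
    EPS0 = 8.854e-12          # vacuum permittivity, F/m

    def maxwell_garnett(eps_matrix, eps_inclusion, fill_fraction):
        """Effective permittivity of dilute spherical inclusions in a host matrix."""
        num = eps_inclusion + 2 * eps_matrix + 2 * fill_fraction * (eps_inclusion - eps_matrix)
        den = eps_inclusion + 2 * eps_matrix - fill_fraction * (eps_inclusion - eps_matrix)
        return eps_matrix * num / den

    eps_tpu, eps_agnw, f = 3.5, 1e4, 0.02      # assumed relative permittivities and loading
    eps_eff = maxwell_garnett(eps_tpu, eps_agnw, f)

    area = (2e-3) ** 2                         # 2 mm x 2 mm electrode (assumed)
    thickness = 50e-6                          # 50 um dielectric layer (assumed)
    capacitance = EPS0 * eps_eff * area / thickness
    print(f"eps_eff = {eps_eff:.2f}, C = {capacitance * 1e12:.2f} pF")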
Procedia PDF Downloads 4728 Methodology for Temporary Analysis of Production and Logistic Systems on the Basis of Distance Data
Authors: M. Mueller, M. Kuehn, M. Voelker
Abstract:
In small and medium-sized enterprises (SMEs), the challenge is to create a well-grounded and reliable basis for process analysis, optimization and planning due to a lack of data. SMEs have limited access to methods with which they can effectively and efficiently analyse processes and identify cause-and-effect relationships in order to generate the necessary database and derive optimization potential from it. The implementation of digitalization within the framework of Industry 4.0 thus becomes a particular necessity for SMEs. For these reasons, this abstract presents an analysis methodology whose objective is to provide an SME-appropriate approach for efficient, temporarily feasible data collection and evaluation in flexible production and logistics systems as a basis for process analysis and optimization. The overall methodology focuses on retrospective, event-based tracing and analysis of material flow objects. The technological basis consists of Bluetooth Low Energy (BLE)-based transmitters, so-called beacons, and smart mobile devices (SMDs), e.g. smartphones, as receivers, between which distance data can be measured and motion profiles derived. The distance is determined using the Received Signal Strength Indicator (RSSI), which is a measure of the signal field strength between transmitter and receiver. The focus is the development of a software-based methodology for the interpretation of relative movements of transmitters and receivers based on distance data. The main research concerns the selection and implementation of pattern recognition methods for automatic process recognition as well as methods for the visualization of relative distance data. Because the database is already categorized with respect to process types, classification methods (e.g. Support Vector Machine) from the field of supervised learning are used. Achieving the necessary data quality requires the selection of suitable methods as well as filters for smoothing the signal variations that occur in the RSSI, the integration of methods for determining correction factors depending on possible sources of signal interference (columns, pallets), and the configuration of the technology used. The parameter settings on which the respective algorithms are based have a further significant influence on the result quality of the classification methods, correction models and methods used for visualizing the position profiles. The accuracy of classification algorithms can be improved by up to 30% through selected parameter variation; this has already been proven in studies. Similar potential can be observed with parameter variation of the methods and filters for signal smoothing. Thus, there is increased interest in obtaining detailed results on the influence of parameter and factor combinations on data quality in this area. The overall methodology is realized with a modular software architecture consisting of independent modules for data acquisition, data preparation and data storage. The demonstrator for initialization and data acquisition is available as a mobile Java-based application. The data preparation, including methods for signal smoothing, is Python-based, with the possibility of varying parameter settings and storing them in the database (SQLite).
The evaluation is divided into two separate software modules with database connections: one for the automated assignment of defined process classes to distance data using selected classification algorithms, and one for visualization and reporting in the form of a graphical user interface (GUI).Keywords: event-based tracing, machine learning, process classification, parameter settings, RSSI, signal smoothing
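For illustration, a minimal Python sketch of the RSSI-to-distance conversion, smoothing, and SVM classification chain described above; the path-loss exponent, reference power, window size, and features are assumptions, not the project's calibrated values:

    # RSSI -> distance (log-distance path-loss model), moving-average smoothing,
    # and SVM classification of process segments from distance statistics.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
        """Distance in metres from RSSI in dBm."""
        return 10 ** ((tx_power - rssi) / (10 * n))

    def smooth(values, window=5):
        """Moving-average filter to damp RSSI fluctuations."""
        return np.convolve(values, np.ones(window) / window, mode="same")

    rng = np.random.default_rng(3)
    X, y = [], []
    for label, base_distance in [(0, 1.0), (1, 4.0)]:   # e.g. "at workstation" vs "in transit"
        for _ in range(100):
            rssi = -59 - 20 * np.log10(base_distance) + rng.normal(0, 4, 50)
            d = smooth(rssi_to_distance(rssi))
            X.append([d.mean(), d.std()])                # simple per-window features
            y.append(label)

    X_train, X_test, y_train, y_test = train_test_split(np.array(X), np.array(y),
                                                        test_size=0.3, random_state=0)
    clf = SVC(kernel="rbf").fit(X_train, y_train)
    print("classification accuracy:", round(clf.score(X_test, y_test), 2))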
Procedia PDF Downloads 13427 Exploring the Dose-Response Association of Lifestyle Behaviors and Mental Health among High School Students in the US: A Secondary Analysis of 2021 Adolescent Behaviors and Experiences Survey Data
Authors: Layla Haidar, Shari Esquenazi-Karonika
Abstract:
Introduction: Mental health includes one's emotional, psychological, and interpersonal well-being; it ranges from "good" to "poor" on a continuum. At the individual level, it affects how a person thinks, feels, and acts. Moreover, it determines how they cope with stress, relate to others, and interface with their surroundings. Research has shown that mental health is directly related to short- and long-term physical health (including chronic disease), health risk behaviors, education level, employment, and social relationships. As is the case with physical conditions like diabetes, heart disease, and cancer, mitigating the behavioral and genetic risks of debilitating mental health conditions like anxiety and depression can nurture a healthier quality of mental health throughout one's life. In order to maximize the benefits of prevention, it is important to identify modifiable risks and develop protective habits earlier in life. Methods: The Adolescent Behaviors and Experiences Survey (ABES) dataset was used for this study. The ABES survey was administered to high school students (9th-12th grade) from January 2021 to June 2021 by the Centers for Disease Control and Prevention (CDC). The data was analyzed to identify any associations between feelings of sadness, hopelessness, or increased suicidality among high school students and their participation on one or more sports teams and their average daily screen time. Data was analyzed using descriptive and multivariable analytic techniques. A multinomial logistic regression of each variable was conducted to examine whether there was an association, while controlling for grade level, sex, and race. Results: The findings from this study are insightful for administrators and policymakers who wish to address mounting concerns related to student mental health. The study revealed that, compared to students who participated on zero sports teams, students who participated on 1 or more sports teams showed a significantly increased risk of depression (p<0.05). Conversely, the rate of depression was significantly lower in students who consumed 5 or more hours of screen time per day, compared to those who consumed less than 1 hour per day of screen time (p<0.05). Conclusion: These findings are informative and highlight the importance of understanding the nuances of student participation on sports teams (e.g., physical exertion, the social dynamics of the team, and the level of competitiveness within the sport). Likewise, the context of an individual's screen time (e.g., social media, engaging in team-based video games, or watching television) can inform parental or school-based policies about screen time activity. Although physical activity has been proven to be important for the emotional and physical well-being of youth, playing on multiple teams could have negative consequences on the emotional state of high school students, potentially due to fatigue, overtraining, and injuries. Existing literature has highlighted the negative effects of screen time; however, further research needs to consider the type of screen-based consumption to better understand its effects on mental health.Keywords: behavioral science, mental health, adolescents, prevention
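For illustration, a minimal Python sketch of a multinomial logistic regression of this kind on synthetic data; the variable coding is simplified, and the real ABES analysis would use the CDC dataset with its own coding and survey weights:

    # Multinomial logistic regression of a three-level depression indicator on
    # sports-team participation and screen time, controlling for grade, sex, and race.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 1000
    df = pd.DataFrame({
        "sports_teams": rng.integers(0, 4, n),        # 0-3 teams
        "screen_hours": rng.integers(0, 6, n),        # banded daily screen time
        "grade": rng.integers(9, 13, n),
        "sex": rng.integers(0, 2, n),
        "race": rng.integers(0, 5, n),
    })
    # synthetic 3-level outcome (0 = none, 1 = sadness/hopelessness, 2 = suicidality)
    latent = 0.2 * df["sports_teams"] - 0.1 * df["screen_hours"] + rng.normal(0, 1, n)
    df["outcome"] = pd.cut(latent, bins=[-np.inf, -0.5, 0.5, np.inf], labels=[0, 1, 2]).astype(int)

    X = sm.add_constant(df[["sports_teams", "screen_hours", "grade", "sex", "race"]])
    model = sm.MNLogit(df["outcome"], X).fit(disp=False)
    print(np.exp(model.params))                       # relative risk ratios per outcome level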
Procedia PDF Downloads 10826 Membrane Permeability of Middle Molecules: A Computational Chemistry Approach
Authors: Sundaram Arulmozhiraja, Kanade Shimizu, Yuta Yamamoto, Satoshi Ichikawa, Maenaka Katsumi, Hiroaki Tokiwa
Abstract:
Drug discovery is shifting from small-molecule-based drugs targeting a local active site to middle molecules (MMs) targeting large, flat, and groove-shaped binding sites, for example, protein-protein interfaces, because at least half of all targets assumed to be involved in human disease have been classified as "difficult to drug" with traditional small molecules. Hence, MMs such as peptides, natural products, glycans, and nucleic acids with various highly potent bioactivities have become important targets for drug discovery programs in recent years, as they could be used for "undruggable" intracellular targets. Cell membrane permeability is one of the key properties of pharmacodynamically active MM drug compounds, and so evaluating this property for potential MMs is crucial. Computational prediction of the cell membrane permeability of molecules is very challenging; however, recent advancements in molecular dynamics simulations help to solve this issue partially. It is expected that MMs with high membrane permeability will enable drug discovery research to expand its borders towards intracellular targets. Further, to understand the chemistry behind the permeability of MMs, it is necessary to investigate their conformational changes during permeation through the membrane, and for that their interactions with the membrane field should be studied reliably, because these interactions involve various non-bonding interactions such as hydrogen bonding, π-stacking, charge transfer, polarization, dispersion, and non-classical weak hydrogen bonding. Therefore, parameter-based classical mechanics calculations are hardly sufficient to investigate these interactions; rather, quantum mechanical (QM) calculations are essential. The fragment molecular orbital (FMO) method can be used for this purpose, as it performs ab initio QM calculations by dividing the system into fragments. The present work aims to study the cell permeability of middle molecules using molecular dynamics simulations and FMO-QM calculations. For this purpose, the natural compound syringolin and its analogues were considered in this study. Molecular simulations were performed using the NAMD and Gromacs programs with the CHARMM force field. FMO calculations were performed using the PAICS program at the correlated Resolution-of-Identity second-order Moller-Plesset (RI-MP2) level with the cc-pVDZ basis set. The simulations clearly show that while syringolin could not permeate the membrane, selected analogues of it pass through the medium on the nanosecond scale. This correlates well with the existing experimental evidence that these syringolin analogues are membrane-permeable compounds. Further analyses indicate that intramolecular π-stacking interactions in the syringolin analogues influenced their permeability positively. These intramolecular interactions reduce the polarity of the analogues so that they can permeate the lipophilic cell membrane. Conclusively, the cell membrane permeability of various middle molecules with potent bioactivities is efficiently studied using molecular dynamics simulations. Insight into this behavior is thoroughly investigated using FMO-QM calculations. Results obtained in the present study indicate that non-bonding intramolecular interactions such as hydrogen bonding and π-stacking, along with the conformational flexibility of MMs, are essential for amicable membrane permeation. These results are interesting and serve as a nice example of how this theoretical calculation approach could be used to study the permeability of other middle molecules. This work was supported by the Japan Agency for Medical Research and Development (AMED) under Grant Number 18ae0101047.Keywords: fragment molecular orbital theory, membrane permeability, middle molecules, molecular dynamics simulation
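As a simplified illustration of the permeation observable behind the nanosecond-scale crossings reported above, a Python sketch follows; the z-coordinates are synthetic stand-ins for values that would normally be extracted from the NAMD/Gromacs trajectories, and the membrane bounds are assumed:

    # Detect whether a permeant's centre of mass fully crosses the membrane slab.
    import numpy as np

    membrane_upper, membrane_lower = 20.0, -20.0     # slab boundaries in Angstrom (assumed)

    rng = np.random.default_rng(5)
    frames = 5000
    z = np.linspace(35.0, -35.0, frames) + rng.normal(0, 2.0, frames)   # drifting permeant

    entered = np.flatnonzero(z < membrane_upper)
    exited = np.flatnonzero(z < membrane_lower)
    if entered.size and exited.size and exited[0] > entered[0]:
        print(f"full crossing: entered at frame {entered[0]}, exited at frame {exited[0]}")
    else:
        print("no complete membrane crossing detected in this trajectory")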
Procedia PDF Downloads 18925 IoT Continuous Monitoring Biochemical Oxygen Demand Wastewater Effluent Quality: Machine Learning Algorithms
Authors: Sergio Celaschi, Henrique Canavarro de Alencar, Claaudecir Biazoli
Abstract:
Effluent quality is of the highest priority for compliance with the permit limits of environmental protection agencies and ensures the protection of the local water system. Of the pollutants monitored, biochemical oxygen demand (BOD) poses one of the greatest challenges. This work presents a solution that improves a wastewater treatment plant's (WWTP's) ability to react to different situations and meet treatment goals: delayed BOD5 results from the lab, which take 7 to 8 analysis days, hinder that ability, and reducing BOD turnaround time from days to hours is our quest. The solution is based on a system of two BOD bioreactors associated with Digital Twin (DT) and Machine Learning (ML) methodologies via an Internet of Things (IoT) platform to monitor and control a WWTP and support decision making. A DT is a virtual and dynamic replica of a production process. A DT requires the ability to collect and store real-time sensor data related to the operating environment. Furthermore, it integrates and organizes the data on a digital platform and applies analytical models, allowing a deeper understanding of the real process so that anomalies are caught sooner. In our system for continuous-time monitoring of the BOD suppressed by the effluent treatment process, the DT algorithm for analyzing the data uses ML on a parameterized chemical kinetic model. The continuous BOD monitoring system, capable of providing results in a fraction of the time required by BOD5 analysis, is composed of two thermally isolated batch bioreactors. Each bioreactor contains input/output access for the wastewater sample (influent and effluent), hydraulic conduction tubes, pumps and valves for the batch sample and dilution water, an air supply for dissolved oxygen (DO) saturation, a cooler/heater for sample thermal stability, an optical ODO sensor based on fluorescence quenching, pH, ORP, temperature, and atmospheric pressure sensors, and a local PLC/CPU with a TCP/IP data transmission interface. The dynamic BOD system monitoring range covers 2 mg/L < BOD < 2,000 mg/L. In addition to the BOD monitoring system, there are many other operational WWTP sensors. The CPU data is transmitted to and received from the digital platform, which in turn performs analyses at periodic intervals, aiming to feed the learning process. BOD bulletins and their credibility intervals are made available at 12-hour intervals to web users. The chemical kinetics ML algorithm is composed of a coupled system of four first-order ordinary differential equations for the molar masses of DO, the organic material present in the sample, biomass, and the products (CO₂ and H₂O) of the reaction. This system is solved numerically subject to its initial conditions: DO (saturated) and the initial products of the kinetic oxidation process, CO₂ = H₂O = 0. The initial values for organic matter and biomass are estimated by minimizing the mean square deviations. A real case of continuous monitoring of BOD wastewater effluent quality is being conducted by deploying an IoT application on a large wastewater purification system located in S. Paulo, Brazil.Keywords: effluent treatment, biochemical oxygen demand, continuous monitoring, IoT, machine learning
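For illustration, a minimal Python sketch of a four-species first-order kinetic model with a least-squares estimate of the initial organic load from the DO decay; the rate constant, yield coefficients, and data are illustrative assumptions, not the calibrated digital-twin parameters:

    # Coupled first-order kinetics for DO, organic matter, biomass and products,
    # solved with solve_ivp; initial organic load fitted to measured DO by least squares.
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    def kinetics(t, y, k=0.15, y_o=1.0, y_x=0.4, y_p=0.6):
        do, s, x, p = y                      # DO, organic matter, biomass, products
        rate = k * s
        return [-y_o * rate, -rate, y_x * rate, y_p * rate]

    def simulate_do(s0, t_eval, do0=8.0):
        sol = solve_ivp(kinetics, (t_eval[0], t_eval[-1]), [do0, s0, 0.5, 0.0], t_eval=t_eval)
        return sol.y[0]

    t = np.linspace(0, 24, 25)               # hours
    # synthetic DO measurements standing in for bioreactor sensor data
    do_measured = simulate_do(5.0, t) + np.random.default_rng(6).normal(0, 0.05, t.size)

    fit = least_squares(lambda s0: simulate_do(s0[0], t) - do_measured, x0=[2.0])
    s0_est = fit.x[0]
    bod_24h = 8.0 - simulate_do(s0_est, t)[-1]   # oxygen consumed over the run
    print(f"estimated initial organic load: {s0_est:.2f} mg/L, BOD over 24 h: {bod_24h:.2f} mg/L")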
Procedia PDF Downloads 7424 Geovisualization of Human Mobility Patterns in Los Angeles Using Twitter Data
Authors: Linna Li
Abstract:
The capability to move around is doubtless very important for individuals to maintain good health and social functions. People's activities in space and time have long been a research topic in behavioral and socio-economic studies, particularly those focusing on the highly dynamic urban environment. By analyzing groups of people who share similar activity patterns, many socio-economic and socio-demographic problems and their relationships with individual behavior preferences can be revealed. Los Angeles, known for its large population, ethnic diversity, cultural mixing, and entertainment industry, faces great transportation challenges such as traffic congestion, parking difficulties, and long commutes. Understanding people's travel behavior and movement patterns in this metropolis sheds light on potential solutions to complex problems regarding urban mobility. This project visualizes people's trajectories in the Greater Los Angeles (L.A.) Area over a period of two months using Twitter data. A Python script was used to collect georeferenced tweets within the Greater L.A. Area, including Ventura, San Bernardino, Riverside, Los Angeles, and Orange counties. Information associated with tweets includes text, time, location, and user ID. Information associated with users includes name, number of followers, etc. Both aggregated and individual activity patterns are demonstrated using various geovisualization techniques. Locations of individual Twitter users were aggregated to create a surface of activity hot spots at different time instants using kernel density estimation, which shows the dynamic flow of people's movement throughout the metropolis in a twenty-four-hour cycle. In the 3D geovisualization interface, the z-axis indicates time, covering 24 hours, and the x-y plane shows the geographic space of the city. Any two points on the z-axis can be selected for displaying the activity density surface within a particular time period. In addition, daily trajectories of Twitter users were created using space-time paths that show the continuous movement of individuals throughout the day. When a personal trajectory is overlaid on top of ancillary layers, including land use and road networks, in 3D visualization, the vivid representation of a realistic view of the urban environment boosts the situational awareness of the map reader. A comparison of the same individual's paths on different days shows some regular patterns on weekdays for some Twitter users, but for other users, the daily trajectories are more irregular and sporadic. This research makes contributions in two major areas: geovisualization of spatial footprints to understand travel behavior using a big data approach, and dynamic representation of activity space in the Greater Los Angeles Area. Unlike traditional travel surveys, social media (e.g., Twitter) provides an inexpensive way of collecting data on spatio-temporal footprints. The visualization techniques used in this project are also valuable for analyzing other spatio-temporal data in the exploratory stage, thus leading to informed decisions about generating and testing hypotheses for further investigation. The next step of this research is to separate users into different groups based on gender/ethnic origin and compare their daily trajectory patterns.Keywords: geovisualization, human mobility pattern, Los Angeles, social media
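For illustration, a minimal Python sketch of the kernel density estimation step behind the activity hot-spot surfaces described above; the coordinates are synthetic placeholders for georeferenced tweet locations, and the grid is illustrative:

    # Kernel density estimate of tweet locations within one time slice.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(7)
    # synthetic lon/lat points clustered around two activity centres in the L.A. area
    lon = np.concatenate([rng.normal(-118.24, 0.05, 300), rng.normal(-117.90, 0.05, 200)])
    lat = np.concatenate([rng.normal(34.05, 0.05, 300), rng.normal(33.80, 0.05, 200)])

    kde = gaussian_kde(np.vstack([lon, lat]))
    gx, gy = np.meshgrid(np.linspace(lon.min(), lon.max(), 100),
                         np.linspace(lat.min(), lat.max(), 100))
    density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)   # hot-spot surface
    print("peak density value:", round(float(density.max()), 2))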
Procedia PDF Downloads 12123 Culture and Health Equity: Unpacking the Sociocultural Determinants of Eye Health for Indigenous Australian Diabetics
Authors: Aryati Yashadhana, Ted Fields Jnr., Wendy Fernando, Kelvin Brown, Godfrey Blitner, Francis Hayes, Ruby Stanley, Brian Donnelly, Bridgette Jerrard, Anthea Burnett, Anthony B. Zwi
Abstract:
Indigenous Australians experience some of the worst health outcomes globally, with life expectancy significantly poorer than that of non-Indigenous Australians. This is largely attributed to preventable diseases such as diabetes (prevalence 39% in Indigenous Australian adults > 55 years), which carries a raised risk of diabetic visual impairment and cataract among Indigenous adults. Our study aims to explore the interface between structural and sociocultural determinants and human agency, in order to understand how they impact (1) the accessibility of eye health and chronic disease services and (2) the potential for Indigenous patients to achieve positive clinical eye health outcomes. We used Participatory Action Research methods and aimed to privilege the voices of Indigenous people through community collaboration. Semi-structured interviews (n=82) and patient focus groups (n=8) were conducted by Indigenous Community-Based Researchers (CBRs) with diabetic Indigenous adults (> 40 years) in four remote communities in Australia. Interviews (n=25) and focus groups (n=4) with primary health care clinicians in each community were also conducted. Data were audio recorded, transcribed verbatim, and analysed thematically using grounded theory, comparative analysis and NVivo 10. Preliminary analysis occurred in tandem with data collection to determine theoretical saturation. The principal investigator (AY) led analysis sessions with CBRs, fostering cultural and contextual appropriateness in interpreting responses, knowledge exchange and capacity building. Identified themes were conceptualised into three spheres of influence: structural (health services, government), sociocultural (Indigenous cultural values, distrust of the health system, ongoing effects of colonialism and dispossession) and individual (health beliefs/perceptions, patient phenomenology). Permeating these spheres of influence were three core determinants: economic disadvantage, health literacy/education, and cultural marginalisation. These core determinants affected the accessibility of services and the potential for patients to achieve positive clinical outcomes at every level of care (primary, secondary, tertiary). Our findings highlight the clinical realities of institutionalised and structural inequities, illustrated through the lived experiences of Indigenous patients and primary care clinicians in the four sampled communities. The complex determinants surrounding health inequity for Indigenous Australians are entrenched through a longstanding experience of cultural discrimination and ostracism. Secure and long-term funding of Aboriginal Community Controlled Health Services will be valuable but is insufficient to address issues of inequity. Rather, working collaboratively with communities to build trust and identify needs and solutions at the grassroots level, while leveraging community voices to drive change at the systemic/policy level, is recommended.Keywords: indigenous, Australia, culture, public health, eye health, diabetes, social determinants of health, sociology, anthropology, health equity, aboriginal and Torres strait islander, primary care
Procedia PDF Downloads 303