Search results for: station’s accessibility reliability
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3258

258 Validation of a Placebo Method with Potential for Blinding in Ultrasound-Guided Dry Needling

Authors: Johnson C. Y. Pang, Bo Peng, Kara K. L. Reeves, Allan C. L. Fu

Abstract:

Objective: Dry needling (DN) has long been used as a treatment for various musculoskeletal pain conditions. However, the evidence level of existing studies is low due to methodological limitations: lack of randomization and inappropriate blinding are potentially the main sources of bias. A method that can differentiate clinical results of the targeted experimental procedure from its placebo effect is needed to enhance the validity of trials. Therefore, this study aimed to validate a placebo ultrasound (US)-guided DN method for patients with knee osteoarthritis (KOA). Design: This is a randomized controlled trial (RCT). Ninety subjects (25 males and 65 females) aged between 51 and 80 (61.26±5.57) with radiological KOA were recruited and randomly assigned into three groups with a computer program. Group 1 (G1) received real US-guided DN, Group 2 (G2) received placebo US-guided DN, and Group 3 (G3) was the control group. Both G1 and G2 subjects underwent the same US-guided DN procedure, except that the US monitor was turned off in G2, blinding the G2 subjects to the faux US guidance. This arrangement created the placebo condition needed to compare their results with those of subjects who received actual US-guided DN. Outcome measures, including the visual analog scale (VAS) and the Knee injury and Osteoarthritis Outcome Score (KOOS) subscales of pain, symptoms and quality of life (QOL), were analyzed by repeated-measures analysis of covariance (ANCOVA) for time effects and group effects. Data regarding the perception of receiving real or placebo US-guided DN were analyzed by the chi-squared test. Missing data were to be analyzed with the intention-to-treat (ITT) approach if more than 5% of the data were missing. Results: The placebo US-guided DN (G2) subjects perceived the needle advancement as genuinely US-guided at the same rate as G1 subjects (p=0.128). 
G1 had significantly higher pain reduction (VAS and KOOS-pain) than G2 and G3 at 8 weeks only (both p<0.05). There was no significant difference between G2 and G3 at 8 weeks (both p>0.05). Conclusion: Turning off the US monitor during the application of DN is a credible method for blinding participants and allows researchers to incorporate faux US guidance. The validated placebo US-guided DN technique can aid investigations of the short-term pain-reduction effects of US-guided DN for patients with KOA. Acknowledgment: This work was supported by the Caritas Institute of Higher Education [grant number IDG200101].
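The blinding check described in the abstract can be sketched with a chi-squared test on a perception contingency table. The counts below are hypothetical (the abstract does not report them), so this is illustrative only:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = group (real vs. placebo US guidance),
# columns = participants who believed the guidance was real vs. not.
perceived = [[26, 4],   # G1: real US-guided DN
             [23, 7]]   # G2: placebo US-guided DN (monitor off)

chi2, p, dof, expected = chi2_contingency(perceived)

# A non-significant p-value (p > 0.05) supports successful blinding:
# placebo participants could not distinguish faux from real guidance.
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")
```

Here a *non-significant* result is the desired outcome, since it indicates the placebo group could not tell that the monitor was off.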

Keywords: reliability, jumping, 3D motion analysis, anterior cruciate ligament reconstruction

Procedia PDF Downloads 97
257 Impact of Lack of Testing on Patient Recovery in the Early Phase of COVID-19: Narratively Collected Perspectives from a Remote Monitoring Program

Authors: Nicki Mohammadi, Emma Reford, Natalia Romano Spica, Laura Tabacof, Jenna Tosto-Mancuso, David Putrino, Christopher P. Kellner

Abstract:

Introduction: The onset of the COVID-19 pandemic created an unprecedented need for the rapid development, dispersal, and application of infection testing. However, despite the impressive mobilization of resources, individuals were severely limited in their access to tests, particularly during the initial months of the pandemic (March-April 2020) in New York City (NYC). Access to COVID-19 testing is crucial in understanding patients’ illness experiences and integral to the development of COVID-19 standard-of-care protocols, especially in the context of overall access to healthcare resources. Methods: Eighteen patients in a COVID-19 remote patient monitoring program (Precision Recovery within the Mount Sinai Health System) were interviewed regarding their experience with COVID-19 during the first wave (March-May 2020) of the COVID-19 pandemic in New York City. Patients were asked about their experiences navigating COVID-19 diagnoses, the health care system, and their recovery process. Transcribed interviews were analyzed for thematic codes, using grounded theory to guide the identification of emergent themes and codebook development through an iterative process. Data coding was performed using NVivo12. References for the domain “testing” were then extracted and analyzed for themes and statistical patterns. Major findings: 100% of participants (18/18) referenced COVID-19 testing in their interviews, with a total of 79 references across the 18 transcripts (average: 4.4 references/interview; 2.7% interview coverage). 89% of participants (16/18) discussed the difficulty of access to testing, including denial of testing without high severity of symptoms, geographical distance to the testing site, and lack of testing resources at healthcare centers. Participants shared varying perspectives on how the lack of certainty regarding their COVID-19 status affected their course of recovery. 
One participant shared that, because she never tested positive, she was shielded from her anxiety and fear, given the death toll in NYC. Another group of participants shared that not having a concrete status to share with family, friends, and professionals affected how seriously onlookers took their symptoms. Furthermore, the absence of a positive test barred some individuals from access to treatment programs and employment support. Conclusion: Lack of access to COVID-19 testing in the first wave of the pandemic in NYC was a prominent element of patients’ illness experience, particularly during their recovery phase. While for some the lack of concrete results was protective, most emphasized the invalidating effect this had on the perception of illness for both self and others. COVID-19 testing is now widely accessible; however, those who are unable to demonstrate a positive test result but who are still presumed to have had COVID-19 in the first wave must continue to adapt to and live with the effects of this gap in knowledge and care on their recovery. Future efforts are required to ensure that patients do not face barriers to care due to the lack of testing and are reassured regarding their access to healthcare. Affiliations: 1. Department of Neurosurgery, Icahn School of Medicine at Mount Sinai, New York, NY; 2. Abilities Research Center, Department of Rehabilitation and Human Performance, Icahn School of Medicine at Mount Sinai, New York, NY

Keywords: accessibility, COVID-19, recovery, testing

Procedia PDF Downloads 174
256 The Influence of Cognitive Load in the Acquisition of Words through Sentence or Essay Writing

Authors: Breno Barreto Silva, Agnieszka Otwinowska, Katarzyna Kutylowska

Abstract:

Research comparing lexical learning following the writing of sentences and longer texts with keywords is limited and contradictory. One possibility is that the recursivity of writing may enhance processing and increase lexical learning; another is that the higher cognitive load of complex-text writing (e.g., essays), at least when timed, may hinder the learning of words. In our study, we selected two sets of 10 academic keywords matched for part of speech, length (number of characters), frequency (SUBTLEXus), and concreteness, and we asked 90 L1-Polish advanced-level English majors to use the keywords when writing sentences, timed essays (60 minutes), or untimed essays. First, all participants wrote a timed Control essay (60 minutes) without keywords. Then different groups produced Timed essays (60 minutes; n=33), Untimed essays (n=24), or Sentences (n=33) using the two sets of glossed keywords (counterbalanced). The comparability of the participants in the three groups was ensured by matching them for proficiency in English (LexTALE) and for several measures derived from the Control essay: VocD (assessing productive lexical diversity), normed errors (assessing productive accuracy), words per minute (assessing productive written fluency), and holistic scores (assessing overall quality of production). We measured lexical learning (depth and breadth) via an adapted Vocabulary Knowledge Scale (VKS) and a free association test. Cognitive load was measured in the three essay conditions (Control, Timed, Untimed) using the normed number of errors and holistic scores (TOEFL criteria). The number of errors and essay scores were obtained from two raters (interrater reliability Pearson’s r=.78-.91). Generalized linear mixed models showed no difference in the breadth and depth of keyword knowledge after writing Sentences, Timed essays, and Untimed essays. 
The task-based measurements showed that Control and Timed essays had similar holistic scores, but that Untimed essays had better quality than Timed essays. Also, Untimed essays were the most accurate, and Timed essays the most error-prone. In conclusion, using keywords in Timed, but not Untimed, essays increased cognitive load, leading to more errors and lower quality. Still, writing sentences and essays yielded similar lexical learning, and differences in cognitive load between Timed and Untimed essays did not affect lexical acquisition.
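Inter-rater agreement of the kind reported above (Pearson's r = .78-.91) can be computed directly from paired scores. The two raters' scores below are invented for illustration:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical holistic essay scores from two raters (TOEFL-style 0-5 scale).
rater1 = np.array([3.0, 4.5, 2.5, 5.0, 3.5, 4.0, 2.0, 4.5, 3.0, 4.0])
rater2 = np.array([3.5, 4.0, 2.5, 5.0, 3.0, 4.5, 2.5, 4.0, 3.5, 4.0])

# Pearson's r quantifies how consistently the two raters rank the essays.
r, p = pearsonr(rater1, rater2)
print(f"inter-rater r = {r:.2f}")
```

In practice such a check would be run on all rated essays, and a low r would trigger rater recalibration before averaging the scores.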

Keywords: learning academic words, writing essays, cognitive load, English as an L2

Procedia PDF Downloads 52
255 Production Optimization under Geological Uncertainty Using Distance-Based Clustering

Authors: Byeongcheol Kang, Junyi Kim, Hyungsik Jung, Hyungjun Yang, Jaewoo An, Jonggeun Choe

Abstract:

It is important to characterize reservoir properties for better production management. Due to limited information, there are geological uncertainties in very heterogeneous or channelized reservoirs. One solution is to generate multiple equi-probable realizations using geostatistical methods. However, some models have wrong properties and need to be excluded for simulation efficiency and reliability. We propose a novel model selection scheme based on distance-based clustering for reliable application of a production optimization algorithm. Distance is defined as a degree of dissimilarity between the data. We calculate the Hausdorff distance to classify the models based on their similarity, as it is useful for shape matching of the reservoir models. We use multi-dimensional scaling (MDS) to describe the models in a two-dimensional space and group them by K-means clustering. Rather than simulating all models, we choose one representative model from each cluster and find the best model, which has production rates most similar to the true values. From this process, we can select good reservoir models near the best model with high confidence. We generate 100 channel reservoir models using single normal equation simulation (SNESIM). Since oil and gas prefer to flow through the sand facies, it is critical to characterize the pattern and connectivity of the channels in the reservoir. After calculating Hausdorff distances and projecting the models by MDS, we can see that the models assemble depending on their channel patterns. These channel distributions affect the operation controls of each production well, so the model selection scheme improves the management optimization process. For production optimization, we use particle swarm optimization (PSO), a widely used global search algorithm. PSO is good at finding the global optimum of an objective function, but it takes much time due to its use of many particles and iterations. 
In addition, if we use multiple reservoir models, the simulation time for PSO soars. By using the proposed method, we can select good and reliable models that already match the production data. Considering the geological uncertainty of the reservoir, we can obtain well-optimized production controls for maximum net present value. The proposed method offers a novel solution for selecting good cases among the various possibilities. The model selection scheme can be applied not only to production optimization but also to history matching or other ensemble-based methods for efficient simulations.
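The selection pipeline described above (pairwise Hausdorff distances, MDS projection, K-means grouping, one representative per cluster) can be sketched as follows. The synthetic point sets stand in for SNESIM channel realizations, and the model count and cluster count are assumptions:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical stand-in for SNESIM realizations: each "model" is the set of
# (x, y) coordinates of its sand-facies (channel) cells.
models = [rng.uniform(0, 50, size=(200, 2)) + rng.integers(0, 3) * 10.0
          for _ in range(30)]

# Symmetric Hausdorff distance matrix between all pairs of channel point sets.
n = len(models)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        d = max(directed_hausdorff(models[i], models[j])[0],
                directed_hausdorff(models[j], models[i])[0])
        D[i, j] = D[j, i] = d

# Project the precomputed dissimilarity matrix onto 2-D with MDS.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)

# Group realizations by channel-pattern similarity.
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(coords)

# One representative per cluster: the model nearest its cluster centre,
# so only 5 flow simulations are needed instead of 30.
reps = [int(np.argmin(np.linalg.norm(coords - c, axis=1)))
        for c in km.cluster_centers_]
print("representative models:", reps)
```

Only the representatives are then run through the flow simulator, and the cluster whose representative best matches observed production rates is retained for optimization.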

Keywords: distance-based clustering, geological uncertainty, particle swarm optimization (PSO), production optimization

Procedia PDF Downloads 122
254 Experimental Study on Heat and Mass Transfer of Humidifier for Fuel Cell

Authors: You-Kai Jhang, Yang-Cheng Lu

Abstract:

Major contributions of this study are threefold: designing a new model of planar-membrane humidifier for the Proton Exchange Membrane Fuel Cell (PEMFC), an index to measure the effectiveness (εT) of that humidifier, and an air compressor system to replicate related planar-membrane humidifier experiments. The PEMFC, as a kind of renewable energy technology, has become more and more important in recent years due to its reliability and durability. To maintain the efficiency of the fuel cell, the membrane of the PEMFC needs to be kept in a good hydration condition. How to maintain proper membrane humidity is one of the key issues in optimizing the PEMFC. We developed a new humidifier to recycle water vapor from the cathode air outlet so as to keep up the moisture content of the cathode air inlet in a PEMFC. By measuring parameters such as dry-side air outlet dew point temperature, dry-side air inlet temperature and humidity, wet-side air inlet temperature and humidity, and the differential pressure between the dry side and wet side, we calculated the following indices: dew point approach temperature (DPAT), water flux (J), water recovery ratio (WRR), effectiveness (εT), and differential pressure (ΔP). We discuss six topics using these indices: sealing effect, flow rate effect, flow direction effect, channel effect, temperature effect, and humidity effect. Gas cylinders are used as air supply sources in many studies of humidifiers. A gas cylinder depletes quickly during experiments at a 1 kW air flow rate, which makes replication difficult. In order to ensure highly stable air quality and better replication of experimental data, this study designs an air supply system to overcome this difficulty. The experimental results show that the best rate of pressure loss of the humidifier is 0.133×10³ Pa(g)/min at a torque of 25 N·m. The best humidifier performance ranges from 30-40 LPM of air flow rate. 
The counter-flow configured humidifier moisturizes the dry-side inlet air more effectively than the parallel-flow humidifier. From the performance measurements of the channel plates with various rib widths studied here, it is found that the narrower the rib width, the more the humidifier performance improves. Raising the channel width at the same hydraulic diameter (Dh) yields higher εT and lower ΔP. Moreover, increasing the dry-side air inlet temperature or humidity leads to lower εT. In addition, when the dry-side air inlet temperature exceeds 50°C, the effect becomes even more obvious.
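For readers unfamiliar with the performance indices named above, a minimal sketch of two of them is given below, using common textbook definitions. The paper's exact formulations and all numbers here are assumptions:

```python
# Hedged sketch of two humidifier performance indices named in the abstract.
# These are common textbook definitions; the paper's exact formulations may differ.

def dew_point_approach_temp(t_dp_wet_in_c, t_dp_dry_out_c):
    """DPAT: how closely the dry-side outlet dew point approaches the
    wet-side inlet dew point (smaller is better)."""
    return t_dp_wet_in_c - t_dp_dry_out_c

def water_recovery_ratio(w_dry_out, w_dry_in, w_wet_in):
    """WRR: fraction of the wet-side inlet moisture transferred to the dry
    side. w_* are humidity ratios in g water / kg dry air (hypothetical units)."""
    return (w_dry_out - w_dry_in) / w_wet_in

# Illustrative (made-up) operating point
dpat = dew_point_approach_temp(t_dp_wet_in_c=55.0, t_dp_dry_out_c=48.0)
wrr = water_recovery_ratio(w_dry_out=14.0, w_dry_in=5.0, w_wet_in=20.0)
print(f"DPAT = {dpat:.1f} K, WRR = {wrr:.2f}")
```

A lower DPAT and a higher WRR both indicate that more of the cathode exhaust moisture is being recycled to the inlet stream.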

Keywords: PEM fuel cell, water management, membrane humidifier, heat and mass transfer, humidifier performance

Procedia PDF Downloads 149
253 Mobile and Hot Spot Measurement with Optical Particle Counting Based Dust Monitor EDM264

Authors: V. Ziegler, F. Schneider, M. Pesch

Abstract:

With the EDM264, GRIMM offers a solution for mobile short- and long-term measurements in outdoor areas and at production sites, for research as well as permanent areal observations at near-reference quality. The model EDM264 features a powerful and robust measuring cell based on the optical particle counting (OPC) principle, with all the advantages that users of GRIMM's portable aerosol spectrometers are used to. The system is embedded in a compact weather-protection housing with all-weather sampling, a heated inlet system, a data logger, and a meteorological sensor. With TSP, PM10, PM4, PM2.5, PM1, and PMcoarse, the EDM264 provides all fine dust fractions in real time, valid for outdoor applications and calculated with the proven GRIMM enviro-algorithm, as well as six additional dust mass fractions (pm10, pm2.5, pm1, inhalable, thoracic, and respirable) for IAQ and workplace measurements. This highly versatile instrument performs real-time monitoring of particle number and particle size, and provides information on particle surface distribution as well as dust mass distribution. GRIMM's EDM264 has 31 equidistant size channels, which are PSL traceable. A high-end data logger enables data acquisition and wireless communication via LTE or WLAN, or wired communication via Ethernet. Backup copies of the measurement data are stored directly in the device. The rinsing air function, which protects the laser and detector in the optical cell, further increases the reliability and long-term stability of the EDM264 under different environmental and climatic conditions. The entire sample volume flow of 1.2 L/min is analyzed 100% in the optical cell, which assures excellent counting efficiency at low and high concentrations and complies with the ISO 21501-1 standard for OPCs. With all these features, the EDM264 is a world-leading dust monitor for precise monitoring of particulate matter and particle number concentration. 
This highly reliable instrument is an indispensable tool for many users who need to measure aerosol levels and air quality outdoors, on construction sites, or at production facilities.

Keywords: aerosol research, aerial observation, fence line monitoring, wildfire detection

Procedia PDF Downloads 126
252 An Autonomous Passive Acoustic System for Detection, Tracking and Classification of Motorboats in Portofino Sea

Authors: A. Casale, J. Alessi, C. N. Bianchi, G. Bozzini, M. Brunoldi, V. Cappanera, P. Corvisiero, G. Fanciulli, D. Grosso, N. Magnoli, A. Mandich, C. Melchiorre, C. Morri, P. Povero, N. Stasi, M. Taiuti, G. Viano, M. Wurtz

Abstract:

This work describes a real-time algorithm for detecting, tracking, and classifying single motorboats, developed using acoustic data recorded by a hydrophone array within the framework of the EU LIFE+ project ARION (LIFE09NAT/IT/000190). The project aims to improve the conservation status of bottlenose dolphins through real-time simultaneous monitoring of their population and surface ship traffic. A Passive Acoustic Monitoring (PAM) system is installed on two autonomous permanent marine buoys, located close to the boundaries of the Marine Protected Area (MPA) of Portofino (Ligurian Sea, Italy). Detecting surface ships is also a necessity in many other sensitive areas, such as wind farms, oil platforms, and harbours. A PAM system could be an effective alternative to the usual monitoring systems, such as radar or active sonar, for localizing unauthorized ship presence or illegal activities, with the advantage of not revealing its own presence. Each ARION buoy consists of a particular type of structure, named meda elastica (elastic beacon), composed of a main pole about 30 m long, emerging 7 m above the surface, anchored to a 30-ton mooring at 90 m depth by an anti-twist steel wire. Each buoy is equipped with a floating element and a hydrophone tetrahedron array, whose raw data are sent via a Wi-Fi bridge to a ground station where real-time analysis is performed. The bottlenose dolphin detection algorithm and the ship monitoring algorithm operate in parallel and in real time. Three modules were developed and commissioned for ship monitoring. The first is the detection algorithm, based on Time Difference Of Arrival (TDOA) measurements, i.e., the evaluation of the angular direction of the target with respect to each buoy and triangulation to obtain the target position. The second is the tracking algorithm, based on a Kalman filter, i.e., the estimation of the real course and speed of the target through a predictor filter. 
Finally, the classification algorithm is based on the DEMON method, i.e., the extraction of the acoustic signature of single vessels. The following results were obtained: the detection algorithm succeeded in evaluating the bearing angle with respect to each buoy and the position of the target, with an uncertainty of 2 degrees and a maximum range of 2.5 km. The tracking algorithm succeeded in reconstructing the real vessel courses and estimating the speed with an accuracy of 20% with respect to the Automatic Identification System (AIS) signals. The classification algorithm succeeded in isolating the acoustic signature of single vessels, demonstrating its temporal stability and the consistency of both buoys' results. As a reference, the results were compared with the Hilbert transform of single-channel signals. The algorithm for tracking multiple targets is ready to be developed, thanks to the modularity of the single-ship algorithm: the classification module will enumerate and identify all targets present in the study area; for each of them, the detection module and the tracking module will be applied to monitor their course.
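The detection module's final step, a position fix from the two buoys' bearing angles, can be sketched as a two-line intersection. The buoy geometry and target position below are hypothetical:

```python
import numpy as np

def intersect_bearings(p1, theta1, p2, theta2):
    """Position fix from two bearing measurements (radians from the +x axis)
    taken at buoy positions p1 and p2.

    Solves p1 + t1*u1 = p2 + t2*u2 for the crossing point of the two rays."""
    u1 = np.array([np.cos(theta1), np.sin(theta1)])
    u2 = np.array([np.cos(theta2), np.sin(theta2)])
    A = np.column_stack([u1, -u2])
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * u1

# Hypothetical geometry: two buoys 2 km apart, target north of the baseline.
buoy1, buoy2 = np.array([0.0, 0.0]), np.array([2000.0, 0.0])
target = np.array([800.0, 1500.0])

theta1 = np.arctan2(target[1] - buoy1[1], target[0] - buoy1[0])  # bearing at buoy 1
theta2 = np.arctan2(target[1] - buoy2[1], target[0] - buoy2[0])  # bearing at buoy 2

fix = intersect_bearings(buoy1, theta1, buoy2, theta2)
print(fix)  # recovers the target position
```

In the real system each bearing would come from TDOA processing across the tetrahedron array and carry the reported ~2-degree uncertainty, so the fix would be an estimate rather than an exact intersection.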

Keywords: acoustic noise, bottlenose dolphin, hydrophone, motorboat

Procedia PDF Downloads 149
251 Variable Renewable Energy Droughts in the Power Sector – A Model-based Analysis and Implications in the European Context

Authors: Martin Kittel, Alexander Roth

Abstract:

The continuous integration of variable renewable energy sources (VRE) in the power sector is required for decarbonizing the European economy. Power sectors become increasingly exposed to weather variability, as the availability of VRE, i.e., mainly wind and solar photovoltaic, is not persistent. Extreme events, e.g., long-lasting periods of scarce VRE availability (‘VRE droughts’), challenge the reliability of supply. Properly accounting for the severity of VRE droughts is crucial for designing a resilient renewable European power sector, and energy system modeling is used to identify such a design. Our analysis reveals the sensitivity of the optimal design of the European power sector to VRE droughts. We analyze how VRE droughts impact optimal power sector investments, especially in generation and flexibility capacity. We draw upon work that systematically identifies VRE drought patterns in Europe in terms of frequency, duration, and seasonality, as well as the cross-regional and cross-technological correlation of the most extreme drought periods. Based on that analysis, the authors provide a selection of relevant historical weather years representing different grades of VRE drought severity. These weather years serve as input for the capacity expansion model for the European power sector used in this analysis (DIETER). We additionally conduct robustness checks varying policy-relevant assumptions on capacity expansion limits, interconnections, and the level of sector coupling. Preliminary results illustrate how an imprudent selection of weather years may lead to underestimating the severity of VRE droughts, flawing modeling insights concerning the need for flexibility. Sub-optimal European power sector designs vulnerable to extreme weather can result. Using relevant weather years that appropriately represent extreme weather events, our analysis identifies a resilient design of the European power sector. 
Although the scope of this work is limited to the European power sector, we are confident that our insights apply to other regions of the world with similar weather patterns. Many energy system studies still rely on one or a limited number of sometimes arbitrarily chosen weather years. We argue that the deliberate selection of relevant weather years is imperative for robust modeling results.
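A minimal sketch of the kind of VRE-drought screening underlying the weather-year selection is shown below, assuming a simple capacity-factor threshold definition; the cited work's actual drought definition may differ:

```python
import numpy as np

def longest_drought(capacity_factor, threshold=0.1):
    """Length (in time steps) of the longest consecutive run in which the
    VRE capacity factor stays below `threshold` -- a simple proxy for the
    'VRE drought' screening described in the abstract."""
    longest = current = 0
    for cf in capacity_factor:
        current = current + 1 if cf < threshold else 0
        longest = max(longest, current)
    return longest

# Hypothetical hourly combined wind+solar capacity factors for one week.
rng = np.random.default_rng(1)
cf = rng.uniform(0.0, 0.6, size=168)
cf[40:70] = 0.03  # embed a 30-hour scarcity period

print(longest_drought(cf))
```

Ranking candidate weather years by a statistic like this (per region and per technology) is one way to ensure the years fed into a capacity expansion model actually contain the severe droughts that drive flexibility needs.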

Keywords: energy systems, numerical optimization, variable renewable energy sources, energy drought, flexibility

Procedia PDF Downloads 53
250 Laser-Dicing Modeling: Implementation of a High Accuracy Tool for Laser-Grooving and Cutting Application

Authors: Jeff Moussodji, Dominique Drouin

Abstract:

The highly complex technology requirements of today’s integrated circuits (ICs) lead to the increased use of several material types, such as metal structures and brittle, porous low-k materials, which are used in both front-end-of-line (FEOL) and back-end-of-line (BEOL) processes for wafer manufacturing. In order to singulate chips from the wafer, a critical laser-grooving process, prior to blade dicing, is used to remove these layers of material from the dicing street. The combination of laser-grooving and blade dicing reduces the potential risk of induced mechanical defects, such as micro-cracks and chipping, on the wafer top surface where circuitry is located. It seems, therefore, essential to have a fundamental understanding of the physics involved in laser-dicing in order to maximize control of these critical processes and reduce their undesirable effects on process efficiency, quality, and reliability. In this paper, the study was based on the convergence of two approaches, numerical and experimental, which allowed us to investigate the interaction of a nanosecond pulsed laser and BEOL wafer materials. To evaluate this interaction, several laser-grooved samples were compared with finite element modeling, in which three different aspects were considered: phase change, thermo-mechanical behavior, and optically sensitive parameters. The mathematical model makes it possible to predict the groove profile (depth, width, etc.) of a single pulse or multiple pulses on BEOL wafer material. Moreover, the heat-affected zone and thermo-mechanical stress can also be predicted as a function of laser operating parameters (power, frequency, spot size, defocus, speed, etc.). After modeling validation and calibration, a satisfying correlation between experimental and modeling results has been observed in terms of groove depth, width, and heat-affected zone. 
The study proposed in this work is a first step toward implementing a quick assessment tool for the design and debugging of multiple laser-grooving conditions with limited experiments on hardware in industrial applications. More correlations and validation tests are in progress and will be included in the full paper.

Keywords: laser-dicing, nano-second pulsed laser, wafer multi-stack, multiphysics modeling

Procedia PDF Downloads 188
249 Design Evaluation Tool for Small Wind Turbine Systems Based on the Simple Load Model

Authors: Jihane Bouabid

Abstract:

The urgency of transitioning towards sustainable energy sources has become imperative. Today, in the 21st century, society demands technological advancements and improvements, and anticipates expeditious outcomes as an integral component of its relentless pursuit of an elevated standard of living. As a part of empowering human development, driving economic growth, and meeting social needs, access to energy services has become a necessity. As a part of these improvements, we introduce the project "Mywindturbine" - an interactive web user interface for design and analysis in the field of wind energy, with particular adherence to the IEC (International Electrotechnical Commission) standard 61400-2 "Wind turbines – Part 2: Design requirements for small wind turbines". Wind turbines play a pivotal role in Morocco's renewable energy strategy, leveraging the nation's abundant wind resources. The IEC 61400-2 standard ensures the safety and design integrity of small wind turbines deployed in Morocco, providing guidelines for performance and safety protocols. Conformity with this standard ensures turbine reliability, facilitates standards alignment, and accelerates the integration of wind energy into Morocco's energy landscape. The GUI (Graphical User Interface) is aimed at engineers and professionals in the field of wind energy systems who would like to design a small wind turbine system following the safety requirements of the international standard IEC 61400-2. The interface provides an easy way to analyze the structure of the turbine machine under normal and extreme load conditions based on the specific inputs provided by the user. The platform introduces an overview of sustainability and renewable energy, with a focus on wind turbines. 
It features a cross-examination of the input parameters provided by the user for the SLM (Simple Load Model) of small wind turbines, and produces an analysis according to the IEC 61400-2 standard. The analysis of the simple load model encompasses calculations for fatigue loads on the blades and rotor shaft, yaw error load on the blades, etc., for small wind turbine performance. Through its structured framework and adherence to the IEC standard, "Mywindturbine" aims to empower professionals, engineers, and intellectuals with the knowledge and tools necessary to contribute towards a sustainable energy future.

Keywords: small wind turbine, IEC 61400-2 standard, user interface, simple load model

Procedia PDF Downloads 36
248 An Optimal Control Method for Reconstruction of Topography in Dam-Break Flows

Authors: Alia Alghosoun, Nabil El Moçayd, Mohammed Seaid

Abstract:

Modeling dam-break flows over non-flat beds requires an accurate representation of the topography, which is the main source of uncertainty in the model. Therefore, developing robust and accurate techniques for reconstructing topography in this class of problems would reduce the uncertainty in the flow system. In many hydraulic applications, experimental techniques have been widely used to measure the bed topography. In practice, experimental work in hydraulics may be very demanding in both time and cost. Meanwhile, computational hydraulics has served as an alternative to laboratory and field experiments. Unlike the forward problem, the inverse problem is used to identify the bed parameters from given experimental data. In this case, the shallow water equations used for modeling the hydraulics need to be rearranged in a way that the model parameters can be evaluated from measured data. However, this approach is not always possible, and it suffers from stability restrictions. In the present work, we propose an adaptive optimal control technique to numerically identify the underlying bed topography from a given set of free-surface observation data. In this approach, a minimization function is defined to iteratively determine the model parameters. The proposed technique can be interpreted as a fractional-stage scheme. In the first stage, the forward problem is solved to determine the measurable parameters from known data. In the second stage, an adaptive-control Ensemble Kalman Filter is implemented to assimilate the observation data and obtain an accurate estimate of the topography. The main features of this method are, on the one hand, the ability to solve for different complex geometries with no need to rearrange the original model into an explicit form and, on the other hand, its strong stability for simulations of flows in different regimes containing shocks or discontinuities over any geometry. 
Numerical results are presented for a dam-break flow problem over a non-flat bed using different solvers for the shallow water equations. The robustness of the proposed method is investigated using different numbers of loops, sensitivity parameters, initial samples, and locations of observations. The obtained results demonstrate the high reliability and accuracy of the proposed techniques.
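A generic stochastic Ensemble Kalman Filter analysis step, of the kind referenced for the second stage, can be sketched as follows. This is a textbook formulation applied to a toy identity problem, not the paper's actual implementation (which couples the filter to a shallow-water solver):

```python
import numpy as np

def enkf_update(ensemble, observations, obs_operator, obs_noise_std):
    """One stochastic EnKF analysis step (a generic sketch).

    ensemble:      (n_params, n_members) prior samples of the bed topography
    observations:  (n_obs,) measured free-surface data
    obs_operator:  maps a parameter vector to predicted observations
    """
    n_par, n_mem = ensemble.shape
    # Predicted observations for each ensemble member.
    Hx = np.column_stack([obs_operator(ensemble[:, k]) for k in range(n_mem)])

    # Anomalies (deviations from the ensemble means).
    A = ensemble - ensemble.mean(axis=1, keepdims=True)
    HA = Hx - Hx.mean(axis=1, keepdims=True)

    # Sample covariances and Kalman gain.
    P_xy = A @ HA.T / (n_mem - 1)
    P_yy = HA @ HA.T / (n_mem - 1) + obs_noise_std**2 * np.eye(len(observations))
    K = P_xy @ np.linalg.inv(P_yy)

    # Perturbed observations (stochastic EnKF variant).
    rng = np.random.default_rng(0)
    perturbed = observations[:, None] + rng.normal(
        0.0, obs_noise_std, size=(len(observations), n_mem))
    return ensemble + K @ (perturbed - Hx)

# Toy identity problem: the "free surface" directly reflects the bed.
true_bed = np.array([1.0, 2.0, 3.0])
obs = true_bed.copy()
rng = np.random.default_rng(2)
prior = true_bed[:, None] + rng.normal(0.0, 1.0, size=(3, 50))
posterior = enkf_update(prior, obs, lambda x: x, obs_noise_std=0.1)
print(np.abs(posterior.mean(axis=1) - true_bed))  # error shrinks vs. prior
```

In the paper's setting the observation operator would be a shallow-water forward solve, and the update would be iterated within the optimal-control loop.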

Keywords: erodible beds, finite element method, finite volume method, nonlinear elasticity, shallow water equations, stresses in soil

Procedia PDF Downloads 113
247 Parents and Stakeholders’ Perspectives on Early Reading Intervention Implemented as a Curriculum for Children with Learning Disabilities

Authors: Bander Mohayya Alotaibi

Abstract:

Valuable partnerships between parents and teachers may develop positive and effective interactions between home and school. These help stakeholders share information and resources regarding student academics during ongoing interactions. Thus, partnerships build a solid foundation for both families and schools to help children succeed in school. Parental involvement can be seen as an effective tool that can change homes and communities, not just school systems. Seeking parents' and stakeholders' attitudes toward learning and learners can help schools design a curriculum. Subsequently, this information can be used to find ways to help improve the academic performance of students, especially in low-performing schools. There may be some conflicts when designing a curriculum; in addition, designing a curriculum might bring more educational expectations to all sides. There is a lack of research that targets parents' specific attitudes toward specific concepts in curriculum content. More research is needed to study the perspectives that parents of children with learning disabilities (LD) have regarding an early reading curriculum. Parents' and stakeholders' perspectives on early reading intervention implemented as a curriculum for children with LD were studied through advanced quantitative research. This study seeks to understand stakeholders' and parents' perspectives on key concepts and essential early reading skills that impact the design of a curriculum that will serve as an intervention for early struggling readers who have LD. Those concepts or stages include phonics, phonological awareness, and reading fluency, as well as strategies used at home by parents. A survey instrument was used to gather the data. Participants were recruited through 29 schools and districts in the metropolitan area of the northern part of Saudi Arabia. Participants were stakeholders, including parents of children with learning disabilities. 
Data were collected by distributing a paper-and-pen survey to schools. The psychometric properties of the instrument were evaluated for validity and reliability: face validity, content validity, and construct validity, including an exploratory factor analysis, were used to shape and re-evaluate the structure of the instrument. Multivariate analysis of variance (MANOVA) was used to test for differences between the variables. The study reports the perspectives of stakeholders toward reading strategies, phonics, phonological awareness, and reading fluency; suggestions and limitations are also discussed.
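As an illustrative sketch of the instrument-evaluation step described above, an exploratory factor analysis can be run in a few lines. The data here are synthetic stand-ins for survey responses, and the three-factor structure is an assumption for demonstration only, not the structure reported by the study:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Synthetic Likert-style survey data: 200 respondents x 12 items,
# generated from 3 latent factors plus noise (illustrative only).
latent = rng.normal(size=(200, 3))
loadings = rng.normal(size=(3, 12))
X = latent @ loadings + rng.normal(scale=0.5, size=(200, 12))

fa = FactorAnalysis(n_components=3, rotation="varimax")
scores = fa.fit_transform(X)   # factor scores per respondent, shape (200, 3)
print(fa.components_.shape)    # item loadings per factor, shape (3, 12)
```

Inspecting which items load on which factor is what lets researchers reshape or drop survey items when re-evaluating an instrument's structure.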

Keywords: stakeholders, learning disability, early reading, perspectives, parents, intervention, curriculum

Procedia PDF Downloads 133
246 Sustainable Housing and Urban Development: A Study on the Soon-To-Be-Old Population's Impetus to Migrate

Authors: Tristance Kee

Abstract:

With the unprecedented increase in the elderly population globally, it is critical to search for new sustainable housing and urban development alternatives to traditional housing options. This research examines patterns of elderly migration from the high-density city of Hong Kong to Mainland China. The research objectives are to: 1) explore the relationships between soon-to-be-old adults' demographic characteristics and their intentions to move to the Mainland upon retirement; and 2) identify the amenities, locational factors and activities that the soon-to-be-old generation expects in its retirement housing environment. Primary data were collected through a questionnaire survey using a random sampling method, with respondents aged between 45 and 64. The face-to-face survey was completed by 500 respondents and was divided into four sections. The first section covered demographic information such as gender, age, educational attainment, monthly income, housing tenure type and visits to Mainland China. The second section covered retirement plans in terms of intended retirement age, prospective retirement funding and retirement housing options. The third section covered attitudes toward retiring in the Mainland, asking about intentions to migrate there upon retirement and incentives to retire in Hong Kong. The fourth section covered the ideal housing environment, including preferred housing amenities, desired living environment and retirement activities. The dependent variable in this study was 'respondent's consideration to move to Mainland China upon retirement'. Eight primary independent variables were integrated into the study to identify their correlations with the retirement migration plan.
The independent variables were: gender, age, marital status, monthly income, present housing tenure type, property ownership in Hong Kong, relationship with the Mainland and the frequency of visiting Mainland China. In addition, respondents were asked to indicate their retirement plans (retirement age, funding sources and retirement housing options), incentives to migrate for retirement (property ownership, family relations, cost of living, living environment, medical facilities, government welfare benefits, etc.), and perceived ideal retirement life qualities, including desired amenities (sports, medical and leisure facilities, etc.), desired locational qualities (green open space, convenient transport options, accessibility to urban settings, etc.) and desired retirement activities (home-based leisure, elderly-friendly sports, cultural activities, child care, social activities, etc.). The findings show correlations between the independent variables and the consideration to migrate for housing options; the two independent variables that indicated a possible correlation were gender and the present frequency of visiting the Mainland. Given rising property prices across the border and strong social relationships, potential retirement migration is a highly subjective decision that varies from person to person. This research adds knowledge to housing research and migration studies. Although the research focuses on migration to the Mainland, most of the characteristics identified, including better medical services, government welfare and sound urban amenities, are qualities shared by all sustainable urban development and housing strategies.

Keywords: elderly migration, housing alternative, soon-to-be-old, sustainable environment

Procedia PDF Downloads 190
245 Implementation of Correlation-Based Data Analysis as a Preliminary Stage for the Prediction of Geometric Dimensions Using Machine Learning in the Forming of Car Seat Rails

Authors: Housein Deli, Loui Al-Shrouf, Mohieddine Jelali

Abstract:

When forming metallic materials, fluctuations in material properties, process conditions and wear lead to deviations in component geometry. Several hundred features sometimes need to be measured, especially for functional and safety-relevant components; because of the large number of features and the accuracy requirements, these can only be measured offline. The statistical evaluation of process capability and control measurements minimizes, but does not eliminate, the risk of producing components outside the tolerances. The inspection intervals are based on the acceptable risk, come at the expense of productivity, and remain reactive and, in some cases, considerably delayed. Given the considerable progress in condition monitoring and measurement technology, permanently installed sensor systems in combination with machine learning and artificial intelligence offer the potential to derive forecasts for component geometry independently and thus eliminate the risk of defective products actively and preventively. The reliability of forecasts depends on the quality, completeness and timeliness of the data. Measuring all geometric characteristics is neither sensible nor technically possible. This paper therefore uses the example of car seat rail production to discuss the necessary first step of feature selection and reduction by correlation analysis; otherwise, it would not be possible to forecast components inline and in real time. Four different car seat rails with an average of 130 features were selected and measured using a coordinate measuring machine (CMM). Running such a measuring program alone takes up to 20 minutes. In practice, this creates the risk of faulty production of at least 2,000 components that have to be sorted or scrapped if the measurement results are negative. Over a period of 2 months, all measurement data (>200 measurements per variant) were collected and evaluated using correlation analysis.
As part of this study, the number of characteristics to be measured for all 6 car seat rail variants was reduced by over 80%. Specifically, of an average of 125 characteristics across the 4 products, direct correlations were proven for almost 100. A further 10 features correlate via indirect relationships, so the number of features required for a prediction could be reduced to fewer than 20. A correlation coefficient >0.8 was required for all correlations.
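Correlation-based feature reduction of this kind can be sketched in a few lines. The data, feature names and the greedy keep/drop rule below are illustrative assumptions rather than the authors' pipeline, but the 0.8 threshold matches the one stated in the abstract:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# Illustrative stand-in for CMM data: 200 parts x 6 geometric features,
# where some features are near-duplicates of others.
base = rng.normal(size=(200, 3))
data = pd.DataFrame({
    "f1": base[:, 0],
    "f2": base[:, 0] * 1.01 + rng.normal(scale=0.01, size=200),  # ~ f1
    "f3": base[:, 1],
    "f4": base[:, 1] + rng.normal(scale=0.02, size=200),         # ~ f3
    "f5": base[:, 2],
    "f6": rng.normal(size=200),                                  # independent
})

corr = data.corr().abs()
# Greedily keep a feature only if it is not strongly correlated (>0.8)
# with any feature kept earlier; correlated ones can be inferred instead.
kept = []
for col in corr.columns:
    if all(corr.loc[col, k] <= 0.8 for k in kept):
        kept.append(col)
print(kept)  # the reduced measurement set
```

Here f2 and f4 are dropped because they track f1 and f3 almost perfectly, mirroring how directly correlated characteristics need not be measured inline.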

Keywords: long-term SHM, condition monitoring, machine learning, correlation analysis, component prediction, wear prediction, regressions analysis

Procedia PDF Downloads 12
244 Estimation of Scour Using a Coupled Computational Fluid Dynamics and Discrete Element Model

Authors: Zeinab Yazdanfar, Dilan Robert, Daniel Lester, S. Setunge

Abstract:

Scour has been identified as the most common threat to bridge stability worldwide. Traditionally, scour around bridge piers is calculated using empirical approaches that have considerable limitations and are difficult to generalize. The multi-physics nature of scouring, which involves turbulent flow, soil mechanics and solid-fluid interactions, cannot be captured by simple empirical equations developed from limited laboratory data. These limitations can be overcome by direct numerical modeling of the coupled hydro-mechanical scour process, which provides a robust prediction of bridge scour and valuable insights into the scour process. Several numerical models have been proposed in the literature for bridge scour estimation, including Eulerian flow models and coupled Euler-Lagrange models incorporating an empirical sediment transport description. However, the contact forces between particles and the flow-particle interaction have not been taken into consideration. Incorporating collisional and frictional forces between soil particles, as well as the effect of flow-driven forces on particles, facilitates accurate modeling of the complex nature of scour. In this study, a coupled Computational Fluid Dynamics and Discrete Element Model (CFD-DEM) has been developed to simulate the scour process that directly models the hydro-mechanical interactions between the sediment particles and the flowing water. This approach obviates the need for an empirical description, as the fundamental fluid-particle and particle-particle interactions are fully resolved. The sediment bed is simulated as a dense pack of particles, and the frictional and collisional forces between particles are calculated, whilst the turbulent fluid flow is modeled using a Reynolds-Averaged Navier-Stokes (RANS) approach. The CFD-DEM model is validated against experimental data in order to assess its reliability.
The modeling results reveal the criticality of particle impact in the assessment of scour depth, which, to the authors' best knowledge, has not been considered in previous studies. The results of this study open new perspectives on the assessment of scour depth and its evolution over time, which is key to managing the failure risk of bridge infrastructure.
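The particle-side forces of such a coupled model can be hinted at with a minimal sketch. The linear spring-dashpot contact law and Stokes drag below are common textbook simplifications, not the actual force models used in this study:

```python
import numpy as np

def contact_force(x_i, x_j, v_i, v_j, r_i, r_j, k=1e4, c=5.0):
    """Normal contact force on particle i from particle j using a linear
    spring-dashpot law (a common textbook simplification in DEM)."""
    d = x_i - x_j
    dist = np.linalg.norm(d)
    overlap = (r_i + r_j) - dist
    if overlap <= 0:
        return np.zeros(3)              # particles are not touching
    n = d / dist                        # unit normal pointing from j to i
    v_n = np.dot(v_i - v_j, n)          # relative velocity along the normal
    return (k * overlap - c * v_n) * n  # elastic repulsion + viscous damping

def drag_force(u_fluid, v_p, d_p, mu=1e-3):
    """Stokes drag on a sphere, the simplest flow-particle coupling term."""
    return 3.0 * np.pi * mu * d_p * (u_fluid - v_p)

# Two 5 mm-radius grains overlapping by 1 mm: the force pushes i away from j.
f = contact_force(np.array([0.0, 0.0, 0.0]), np.array([0.009, 0.0, 0.0]),
                  np.zeros(3), np.zeros(3), 0.005, 0.005)
print(f)
```

In a full CFD-DEM solver, forces of this kind are summed over all particle contacts and coupled to the RANS flow field at every time step.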

Keywords: bridge scour, discrete element method, CFD-DEM model, multi-phase model

Procedia PDF Downloads 111
243 Improving the Dielectric Strength of Transformer Oil for High Health Index: An FEM Based Approach Using Nanofluids

Authors: Fatima Khurshid, Noor Ul Ain, Syed Abdul Rehman Kashif, Zainab Riaz, Abdullah Usman Khan, Muhammad Imran

Abstract:

As the world moves towards extra-high voltage (EHV) and ultra-high voltage (UHV) power systems, the performance requirements of power transformers are becoming crucial to system reliability and security. With transformers being an essential component of a power system, a low transformer health index poses greater risks for safe and reliable operation. Therefore, to meet the rising demands on power system and transformer performance, researchers are prompted to provide solutions for enhanced thermal and electrical properties of transformers. This paper proposes an approach to improve the health index of a transformer by using nanotechnology in conjunction with biodegradable oils. Vegetable oils can serve as potential dielectric fluid alternatives to conventional mineral oils, owing to their numerous inherent benefits, namely, higher fire and flash points and an environment-friendly nature. Moreover, the addition of nanoparticles to the dielectric fluid further serves to improve the dielectric strength of the insulation medium. In this research, using the finite element method (FEM) in the COMSOL Multiphysics environment with a 2D space dimension, three different oil samples have been modelled, and the electric field distribution is computed for each sample at various electric potentials, i.e., 90 kV, 100 kV, 150 kV, and 200 kV. Furthermore, each sample has been modified with the addition of nanoparticles of different radii (50 nm and 100 nm) and at different interparticle distances (5 mm and 10 mm), considering an instant of time. The nanoparticles used are non-conductive and have been modelled as alumina (Al₂O₃). The geometry has been modelled according to IEC standard 60897, with a standard electrode gap distance of 25 mm.
For an input supply voltage of 100 kV, the maximum electric field stresses obtained for the samples of synthetic vegetable oil, olive oil, and mineral oil are 5.08×10⁶ V/m, 5.11×10⁶ V/m and 5.62×10⁶ V/m, respectively. It is observed that, for the unmodified samples, the vegetable oils have greater dielectric strength than the conventionally used mineral oil because of their higher flash points and higher relative permittivity. Also, for the modified samples, the addition of nanoparticles inhibits streamer propagation inside the dielectric medium and hence serves to improve the dielectric properties of the medium.
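As a rough plausibility check on the reported peak stresses (not part of the study, and ignoring the non-uniform field of the IEC 60897 electrode geometry), one can compare them with the nominal uniform-gap field V/d:

```python
# Nominal uniform-gap field for the 100 kV case over the 25 mm gap.
V = 100e3          # applied potential, V
d = 25e-3          # electrode gap, m
E_uniform = V / d  # 4.0e6 V/m

# Peak stresses reported by the FEM models, V/m
samples = {
    "synthetic vegetable oil": 5.08e6,
    "olive oil": 5.11e6,
    "mineral oil": 5.62e6,
}
enhancement = {name: e / E_uniform for name, e in samples.items()}
for name, factor in enhancement.items():
    print(f"{name}: peak field is {factor:.2f}x the uniform-gap value")
```

The reported maxima thus sit roughly 27-41% above the uniform-gap estimate, consistent with local field concentration near the electrodes.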

Keywords: dielectric strength, finite element method, health index, nanotechnology, streamer propagation

Procedia PDF Downloads 123
242 Structural Equation Modeling Exploration for the Multiple College Admission Criteria in Taiwan

Authors: Tzu-Ling Hsieh

Abstract:

When the Taiwan Ministry of Education implemented a new university multiple entrance policy in 2002, most colleges and universities still used testing scores as their main admission criteria. With the forthcoming 12-year basic education curriculum, the Ministry of Education has issued a new college admission policy to be implemented in 2021. The new policy highlights the importance of holistic education by placing more emphasis on the learning process in senior high school rather than solely on academic testing outcomes. However, the development of college admission criteria has not followed a thoughtful process, and universities and colleges have little guidance on how to construct suitable multiple admission criteria. Although there are many studies from other countries that have implemented multiple college admission criteria for years, these studies cannot represent Taiwanese students, and they are limited by the lack of comparison between two different academic fields. Therefore, this study investigated multiple admission criteria and their relationship with college success. This study analyzed the Taiwan Higher Education Database, with 12,747 samples from 156 universities, and tested a conceptual framework examining these factors with structural equation modeling (SEM). The conceptual framework was adapted from Pascarella's general causal model and focused on how different admission criteria predict students' college success. It examines the relationship between admission criteria and college success, as well as how motivation (one of the admission criteria) influences college success through the engagement behaviors of student effort and interactions with agents of socialization. After handling missing values and conducting reliability and validity analyses, the study found three indicators that significantly predict students' college success, defined as the average grade of the last semester.
These three indicators are the Chinese language score on the college entrance exam, high school class rank, and the quality of student academic engagement. In addition, motivation significantly predicts the quality of student academic engagement and interactions with agents of socialization. However, the multi-group SEM analysis showed no difference in predicting college success between students from the liberal arts and the sciences. Finally, this study provides suggestions for universities and colleges developing multiple admission criteria, based on this empirical research on Taiwanese higher education students.
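A full SEM is normally fitted with specialized software. As a minimal illustration of the structural (regression) layer only, the sketch below recovers known weights for three synthetic stand-ins for the study's indicators; the data and coefficients are invented for demonstration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
# Synthetic standardized stand-ins for the study's three predictors.
exam = rng.normal(size=n)        # entrance-exam Chinese language score
rank = rng.normal(size=n)        # high school class rank
engagement = rng.normal(size=n)  # quality of academic engagement
# "College success" generated with known weights plus noise.
gpa = 0.3 * exam + 0.2 * rank + 0.4 * engagement + rng.normal(scale=0.5, size=n)

# Ordinary least squares recovers the path weights; a full SEM would also
# fit the latent measurement model and allow multi-group comparisons.
X = np.column_stack([np.ones(n), exam, rank, engagement])
beta, *_ = np.linalg.lstsq(X, gpa, rcond=None)
print(np.round(beta, 2))
```

With 500 observations the estimated coefficients land close to the generating weights (0.3, 0.2, 0.4), which is the sense in which such indicators "significantly predict" the outcome.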

Keywords: college admission, admission criteria, structural equation modeling, higher education, education policy

Procedia PDF Downloads 159
241 Self-Medication with Antibiotics, Evidence of Factors Influencing the Practice in Low and Middle-Income Countries: A Systematic Scoping Review

Authors: Neusa Fernanda Torres, Buyisile Chibi, Lyn E. Middleton, Vernon P. Solomon, Tivani P. Mashamba-Thompson

Abstract:

Background: Self-medication with antibiotics (SMA) is a global concern, with a higher incidence in low- and middle-income countries (LMICs). Despite intense worldwide efforts to control and promote the rational use of antibiotics, the continuing practice of SMA systematically exposes individuals and communities to the risk of antibiotic resistance and other undesirable antibiotic side effects. Moreover, it increases health-system costs, since more powerful antibiotics must be acquired to treat the resistant infections. This review thus maps evidence on the factors influencing self-medication with antibiotics in these settings. Methods: The search strategy for this review involved electronic databases including PubMed, Web of Knowledge, Science Direct, EBSCOhost (PubMed, CINAHL with Full Text, Health Source - Consumer Edition, MEDLINE), Google Scholar, BioMed Central and the World Health Organization library, using the search terms 'self-medication', 'antibiotics', 'factors' and 'reasons'. Our search included studies published from 2007 to 2017. Thematic analysis was performed to identify the patterns of evidence on SMA in LMICs. The mixed methods appraisal tool (MMAT), version 2011, was employed to assess the quality of the included primary studies. Results: Fifteen studies met the inclusion criteria. The studies included populations from rural (46.4%), urban (33.6%) and combined (20%) settings of the following LMICs: Guatemala (2 studies), India (2), Indonesia (2), Kenya (1), Laos (1), Nepal (1), Nigeria (2), Pakistan (2), Sri Lanka (1), and Yemen (1). The total sample size of all 15 included studies was 7,676 participants. The findings of the review show a high prevalence of SMA, ranging from 8.1% to 93%. Accessibility, affordability, conditions of health facilities (long waiting times, quality of services and workers), as well as poor health-seeking behavior and lack of information, are factors that influence SMA in LMICs.
Antibiotics such as amoxicillin, metronidazole, amoxicillin/clavulanic acid, ampicillin, ciprofloxacin, azithromycin, penicillin, and tetracycline were the most frequently used for SMA. The major sources of antibiotics included pharmacies, drug stores, leftover drugs, family/friends and old prescriptions. Sore throat, common cold, cough with mucus, headache, toothache, flu-like symptoms, pain relief, fever, runny nose, upper respiratory tract infections, urinary symptoms and urinary tract infections were the common complaints managed with SMA. Conclusion: Although the information on factors influencing SMA in LMICs is unevenly distributed, the available information revealed the existence of research evidence on antibiotic self-medication in some LMICs. SMA practices are influenced by social-cultural determinants of health and are frequently associated with poor dispensing and prescribing practices, deficient health-seeking behavior and, consequently, inappropriate drug use. Therefore, there is still a need for further studies (qualitative, quantitative and randomized controlled trials) on the factors and reasons for SMA in order to properly address this public health problem in LMICs.

Keywords: antibiotics, factors, reasons, self-medication, low and middle-income countries (LMICs)

Procedia PDF Downloads 194
240 Investigation of Chemical Effects on the Lγ2,3 and Lγ4 X-ray Production Cross Sections for Some Compounds of 66dy at Photon Energies Close to L1 Absorption-edge Energy

Authors: Anil Kumar, Rajnish Kaur, Mateusz Czyzycki, Alessandro Migilori, Andreas Germanos Karydas, Sanjiv Puri

Abstract:

The radiative decay of Li (i=1-3) sub-shell vacancies produced through photoionization results in the characteristic emission spectrum comprising several X-ray lines, whereas non-radiative vacancy decay results in the Auger electron spectrum. Accurate and reliable data on the Li (i=1-3) sub-shell X-ray production (XRP) cross sections are of considerable importance for the investigation of atomic inner-shell ionization processes as well as for quantitative elemental analysis of different types of samples employing the energy-dispersive X-ray fluorescence (EDXRF) technique. At incident photon energies in the vicinity of the absorption edge energies of an element, many-body effects, including electron correlation, core relaxation, inter-channel coupling and post-collision interactions, become significant in the photoionization of atomic inner shells. Further, in the case of compounds, the characteristic emission spectrum of a specific element is expected to be influenced by the chemical environment (coordination number, oxidation state, nature of the ligands/functional groups attached to the central atom, etc.). These chemical effects on L X-ray fluorescence parameters have previously been investigated by performing measurements at incident photon energies much higher than the Li (i=1-3) sub-shell absorption edge energies using EDXRF spectrometers. In the present work, the cross sections for production of the Lk (k = γ2,3, γ4) X-rays have been measured for some compounds of 66Dy, namely, Dy2O3, Dy2(CO3)3, Dy2(SO4)3·8H2O, DyI2 and Dy metal, by tuning the incident photon energy a few eV above the L1 absorption-edge energy in order to investigate the influence of chemical effects on these cross sections in the presence of the many-body effects that become significant at photon energies close to the absorption-edge energies.
The present measurements were performed under vacuum at the IAEA end-station of the X-ray fluorescence beamline (10.1L) of the ELETTRA synchrotron radiation facility (Trieste, Italy) using self-supporting pressed-pellet targets (1.3 cm diameter, nominal thickness ~176 mg/cm²) of the 66Dy compounds (procured from Sigma Aldrich) and a metallic foil of 66Dy (nominal thickness ~3.9 mg/cm², procured from Goodfellow, UK). The measured cross sections have been compared with theoretical values calculated using the Dirac-Hartree-Slater (DHS) model based fluorescence and Coster-Kronig yields, Dirac-Fock (DF) model based X-ray emission rates, and two sets of L1 sub-shell photoionization cross sections: one based on the non-relativistic Hartree-Fock-Slater (HFS) model and one deduced from the self-consistent Dirac-Hartree-Fock (DHF) model based total photoionization cross sections. The measured XRP cross sections for 66Dy as well as for its compounds, for the Lγ2,3 and Lγ4 X-rays, are found to be higher by ~14-36% than the two calculated sets of values. It is worth mentioning that the Lγ2,3 and Lγ4 X-ray lines originate from the filling of L1 sub-shell vacancies by outer sub-shell (N2,3 and O2,3) electrons, which are much more sensitive to the chemical environment around the central atom. The observed differences between measured and theoretical values are attributed to the combined influence of the many-body effects and the chemical effects.

Keywords: chemical effects, L X-ray production cross sections, many-body effects, synchrotron radiation

Procedia PDF Downloads 112
239 Generating Individualized Wildfire Risk Assessments Utilizing Multispectral Imagery and Geospatial Artificial Intelligence

Authors: Gus Calderon, Richard McCreight, Tammy Schwartz

Abstract:

Forensic analysis of community wildfire destruction in California has shown that reducing or removing flammable vegetation in proximity to buildings and structures is one of the most important wildfire defenses available to homeowners. State laws specify the requirements for homeowners to create and maintain defensible space around all structures. Unfortunately, this decades-long effort had limited success due to noncompliance and minimal enforcement. As a result, vulnerable communities continue to experience escalating human and economic costs along the wildland-urban interface (WUI). Quantifying vegetative fuels at both the community and parcel scale requires detailed imaging from an aircraft with remote sensing technology to reduce uncertainty. FireWatch has been delivering high spatial resolution (5” ground sample distance) wildfire hazard maps annually to the community of Rancho Santa Fe, CA, since 2019. FireWatch uses a multispectral imaging system mounted onboard an aircraft to create georeferenced orthomosaics and spectral vegetation index maps. Using proprietary algorithms, the vegetation type, condition, and proximity to structures are determined for 1,851 properties in the community. Secondary data processing combines object-based classification of vegetative fuels, assisted by machine learning, to prioritize mitigation strategies within the community. The remote sensing data for the 10 sq. mi. community is divided into parcels and sent to all homeowners in the form of defensible space maps and reports. Follow-up aerial surveys are performed annually using repeat station imaging of fixed GPS locations to address changes in defensible space, vegetation fuel cover, and condition over time. These maps and reports have increased wildfire awareness and mitigation efforts from 40% to over 85% among homeowners in Rancho Santa Fe. 
To assist homeowners fighting increasing insurance premiums and non-renewals, FireWatch has partnered with Black Swan Analytics, LLC, to leverage the multispectral imagery and increase homeowners' understanding of wildfire risk drivers. For this study, a subsample of 100 parcels was selected to gain a comprehensive understanding of wildfire risk and the elements that can be mitigated. Geospatial data from FireWatch's defensible space maps were combined with Black Swan's patented approach using 39 other risk characteristics into a 4score Report. The 4score Report helps property owners understand risk sources and potential mitigation opportunities by assessing four categories of risk: fuel sources, ignition sources, susceptibility to loss, and hazards to fire protection efforts (FISH). This study has shown that susceptibility to loss is the category on which residents and property owners must focus their efforts. The 4score Report also provides a tool to measure the impact of homeowner actions on risk levels over time. Resiliency is the only solution to breaking the cycle of community wildfire destruction, and it starts with high-quality data and education.
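As a small sketch of how a spectral vegetation index map is derived from multispectral bands (the authors' algorithms are proprietary; NDVI and the 0.3 threshold here are generic, illustrative choices):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index, a classic spectral
    vegetation index computed from near-infrared and red reflectance."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 reflectance tiles: healthy vegetation reflects strongly in NIR.
nir = np.array([[0.6, 0.5], [0.1, 0.4]])
red = np.array([[0.1, 0.1], [0.1, 0.3]])
veg_mask = ndvi(nir, red) > 0.3   # crude "vegetated pixel" threshold
print(veg_mask)
```

Masks of this kind, combined with parcel boundaries and distance-to-structure layers, are the raw material for object-based classification of vegetative fuels.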

Keywords: defensible space, geospatial data, multispectral imaging, Rancho Santa Fe, susceptibility to loss, wildfire risk

Procedia PDF Downloads 88
238 Design and Validation of the 'Teachers' Resilience Scale' for Assessing Protective Factors

Authors: Athena Daniilidou, Maria Platsidou

Abstract:

Resilience is considered to greatly affect the personal and occupational wellbeing and efficacy of individuals; therefore, it has been widely studied in the social and behavioral sciences. Given its significance, several scales have been created to assess the resilience of children and adults. However, most of these scales focus on examining only the internal protective or risk factors that affect levels of resilience. The aim of the present study is to create a reliable scale that assesses both the internal and the external protective factors affecting Greek teachers' levels of resilience. Participants were 136 secondary school teachers (89 females, 47 males) from urban areas of Greece. The Connor-Davidson Resilience Scale (CD-RISC) and the Resilience Scale for Adults (RSA) were used to collect the data. First, exploratory factor analysis was employed to investigate the inner structure of each scale. For both scales, the analyses revealed a factor solution different from the ones proposed by their creators. This prompted us to create a scale combining the best-fitting subscales of the CD-RISC and the RSA. To this end, the items of the four factors with the best fit and highest reliability were used to create the 'Teachers' Resilience Scale'. Exploratory factor analysis revealed that the scale assesses the following protective/risk factors: Personal Competence and Strength (9 items, α=.83), Family Cohesion (7 items, α=.80), Social Competence and Peers Support (7 items, α=.78) and Spiritual Influence (3 items, α=.58). This four-factor model explained 49.50% of the total variance. In the next step, a confirmatory factor analysis was performed on the 26 items of the derived scale to test the above factor solution.
The fit of the model to the data was good (χ²/df = 1.245 with df = 292, CFI = .921, GFI = .829, SRMR = .074, RMSEA = .043, 90% CI [.026, .056]), indicating that the proposed scale can validly measure the aforementioned four aspects of teachers' resilience, thus confirming its factorial validity. Finally, analyses of variance were performed to check for individual differences in the levels of teachers' resilience in relation to gender, age, marital status, level of studies, and teaching specialty. The results were consistent with previous findings, thus providing an indication of the discriminant validity of the instrument. This scale has the advantage of assessing both the internal and the external protective factors of resilience in a brief yet comprehensive way, since it consists of 26 items instead of the combined 58 of the CD-RISC and RSA scales. Its inner factorial structure is supported by the relevant literature on resilience, as it captures the major protective factors of resilience identified in previous studies.
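The internal-consistency coefficients (Cronbach's α) reported for each subscale can be illustrated with a short sketch; the data here are synthetic, single-factor stand-ins for scale items, not the study's responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(3)
latent = rng.normal(size=(300, 1))
# 9 items all driven by one latent trait plus noise -> an internally
# consistent subscale, analogous to a 9-item factor with high alpha.
item_scores = latent + rng.normal(scale=0.8, size=(300, 9))
print(round(cronbach_alpha(item_scores), 2))
```

Alpha rises with the number of items and with inter-item correlation, which is why the 3-item Spiritual Influence factor shows a markedly lower α (.58) than the 9-item factor (.83).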

Keywords: protective factors, resilience, scale development, teachers

Procedia PDF Downloads 281
237 Integrating the Modbus SCADA Communication Protocol with Elliptic Curve Cryptography

Authors: Despoina Chochtoula, Aristidis Ilias, Yannis Stamatiou

Abstract:

Modbus is a protocol that enables communication among devices connected to the same network. It is often deployed to connect sensor and monitoring units to central supervisory servers in Supervisory Control and Data Acquisition (SCADA) systems. These systems monitor critical infrastructures, such as factories, power generation stations, and nuclear power reactors, in order to detect malfunctions and trigger alerts and corrective actions. However, due to their criticality, SCADA systems are vulnerable to attacks that range from simple eavesdropping on operation parameters, exchanged messages, and valuable infrastructure information to malicious modification of vital infrastructure data aimed at inflicting damage. Thus, the SCADA research community has been active in strengthening SCADA systems with suitable data protection mechanisms based, to a large extent, on cryptographic methods for data encryption, device authentication, and message integrity protection. However, due to the limited computational power of many SCADA sensors and embedded devices, the usual public key cryptographic methods are not appropriate because of their high computational requirements. As an alternative, Elliptic Curve Cryptography has been proposed, which requires smaller key sizes and, thus, less demanding cryptographic operations. Until now, however, no such implementation has been proposed in the SCADA literature, to the best of our knowledge. To fill this gap, our methodology focused on integrating Modbus, a frequently used SCADA communication protocol, with Elliptic Curve based cryptography, and on developing a server/client application as a proof of concept. For the implementation we deployed two C language libraries, which were suitably modified in order to be successfully integrated: libmodbus (https://github.com/stephane/libmodbus) and ecc-lib (https://www.ceid.upatras.gr/webpages/faculty/zaro/software/ecc-lib/).
The first library provides a C implementation of the Modbus/TCP protocol, while the second offers the functionality to develop cryptographic protocols based on Elliptic Curve Cryptography. These two libraries were combined, after suitable modifications and enhancements, to give a modified version of the Modbus/TCP protocol focused on securing the data exchanged between the devices and the supervisory servers. The mechanisms we implemented include key generation, key exchange/sharing, message authentication, data integrity checking, and encryption/decryption of data. The key generation and key exchange protocols were implemented using Elliptic Curve Cryptography primitives. The keys established by each device are saved in its local memory, retained for the whole communication session, and used to encrypt and decrypt exchanged messages as well as to authenticate entities and verify the integrity of the messages. Finally, the modified library was compiled for the Android environment in order to run the server application as an Android app, while the client program runs on a regular computer. The communication between these two entities demonstrates the successful establishment of an Elliptic Curve Cryptography based, secure Modbus wireless communication session between a portable device acting as a supervisory station and a monitoring computer. Our first performance measurements are also very promising and demonstrate the feasibility of embedding Elliptic Curve Cryptography into SCADA systems, filling a gap in the relevant scientific literature.
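The key-establishment step can be illustrated with a toy elliptic-curve Diffie-Hellman exchange. This pure-Python sketch uses a small textbook curve for clarity; it is not the authors' C implementation, which builds on ecc-lib with realistic parameters:

```python
# Toy ECDH over the textbook curve y^2 = x^3 + 2x + 2 (mod 17),
# base point G = (5, 1) of order 19. Illustrative only: real deployments
# use standardized curves (e.g. P-256) and far larger scalars.
P, A = 17, 2
G = (5, 1)
O = None  # point at infinity (group identity)

def add(p1, p2):
    """Elliptic-curve point addition over GF(P)."""
    if p1 is O: return p2
    if p2 is O: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return O                                  # p2 is the inverse of p1
    if p1 == p2:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def mul(k, pt):
    """Scalar multiplication k*pt by double-and-add."""
    result, addend = O, pt
    while k:
        if k & 1:
            result = add(result, addend)
        addend = add(addend, addend)
        k >>= 1
    return result

# Each side picks a private scalar, exchanges its public point, and derives
# the same shared point -- the basis for the session keys described above.
a, b = 3, 7                    # toy private keys
shared_a = mul(a, mul(b, G))   # side A combines its key with B's public point
shared_b = mul(b, mul(a, G))   # side B does the symmetric computation
assert shared_a == shared_b
print(shared_a)
```

The shared point would then be hashed into symmetric session keys for encrypting and authenticating the Modbus payloads.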

Keywords: elliptic curve cryptography, ICT security, modbus protocol, SCADA, TCP/IP protocol

Procedia PDF Downloads 237
236 Chemicals to Remove and Prevent Biofilm

Authors: Cynthia K. Burzell

Abstract:

Aequor's founder, Cynthia Burzell, PhD, is a marine and medical microbiologist who studies natural mechanisms that inhibit biofilm formation on surfaces in contact with water. In 2002, she discovered a new genus and several new species of marine microbes that produce novel, non-toxic small molecules which remove biofilm in minutes and prevent its formation for days. These chemicals, together with the over 70 synthesized analogs that Aequor has developed, can replace thousands of toxic biocides used in consumer and industrial products and, as new drug candidates, can kill biofilm-forming bacteria and fungi, including the antimicrobial-resistant (AMR) Superbugs for which there is no cure. Today, Aequor's chemicals are divided into three categories: (1) Novel natural chemicals. Lonza validated that the primary natural chemical removed biofilm in minutes and stated: "Nothing else known can do this at non-toxic doses." (2) Specialty chemicals. Twenty-five of these structural analogs are already approved under the U.S. Environmental Protection Agency (EPA)'s Toxic Substances Control Act, certified as "green," and available for immediate sale. These have been validated for the following agro-industrial verticals: (a) Surface cleaners: The U.S.
Department of Agriculture validated that low concentrations of Aequor's formulations provide deep cleaning of inert, nano, and organic surfaces and materials. (b) Water treatments: NASA validated that one dose of Aequor's treatment in the International Space Station's water reuse/recycling system lasted 15 months without replenishment. The U.S. Department of Energy (DOE) validated that the treatments lower energy consumption by over 10% in buildings and industrial processes. Future validations include pilot projects with the EPA to test efficacy in hospital plumbing systems. (c) Algae cultivation and yeast fermentation: DOE validated that Aequor's treatment boosted the biomass of renewable feedstocks by 40% in half the time, increasing the profitability of biofuels and biobased co-products. DOE also validated increased yields and crop protection of algae under cultivation in open ponds. A private oil and gas company validated decontamination of oilfield water. (3) New structural analogs. These kill Gram-negative and Gram-positive bacteria and fungi alone, in combination with each other, and in combination with low doses of existing, ineffective antibiotics (including penicillin), "potentiating" them to kill AMR pathogens at doses too low to trigger resistance. Both the U.S. National Institutes of Health (NIH) and the Department of Defense (DOD) have executed contracts with Aequor to provide the pre-clinical trials needed for these new drug candidates to enter the regulatory approval pipelines. Aequor seeks partners/licensees to commercialize its specialty chemicals, as well as support to evaluate the optimal methods to scale up several new structural analogs via activity-guided fractionation and/or biosynthesis in order to initiate the NIH and DOD pre-clinical trials.

Keywords: biofilm, potentiation, prevention, removal

Procedia PDF Downloads 77
235 An Integrated Solid Waste Management Strategy for Semi-Urban and Rural Areas of Pakistan

Authors: Z. Zaman Asam, M. Ajmal, R. Saeed, H. Miraj, M. Muhammad Ahtisham, B. Hameed, A. -Sattar Nizami

Abstract:

In Pakistan, environmental degradation and the consequent deterioration of human health have rapidly accelerated in the past decade due to solid waste mismanagement. As the situation worsens with time, the establishment of proper waste management practices is urgently needed, especially in semi-urban and rural areas of Pakistan. This study uses the concept of a Waste Bank, which involves a transfer station for the collection of sorted waste fractions and their delivery to targeted markets such as recycling industries, biogas plants, and composting facilities. The management efficiency and effectiveness of a Waste Bank depend strongly on proficient sorting and collection of solid waste fractions at the household level. However, the social attitude towards such a solution in semi-urban/rural areas of Pakistan demands certain prerequisites to make it workable. Considering these factors, the objectives of this study are to: [A] obtain reliable data about the quantity and characteristics of generated waste to define the feasibility of the business and the design factors, such as the required storage area, retention time, and transportation frequency of the system; [B] analyze the effects of various social factors on waste generation to foresee future projections; and [C] quantify the improvement in waste sorting efficiency after an awareness campaign. We selected Gujrat city in the Central Punjab province of Pakistan, as it is semi-urban and adjoined by rural areas. A total of 60 houses (20 from each of the three selected colonies), belonging to different social strata, were selected. Awareness sessions about waste segregation were given through brochures and individual lectures in each selected household. Sampling of the waste that households had attempted to sort was then carried out using the three colored bags provided as part of the awareness campaign. Finally, refined waste sorting, weighing of the various fractions, and measurement of dry mass were performed in an environmental laboratory using standard methods.
It was calculated that the sorting efficiency of waste improved from 0 to 52% as a result of the awareness campaign. The average generation of waste (dry mass basis) from one household was 460 kg/year, whereas per capita generation was 68 kg/year. Extrapolating these values to Gujrat Tehsil, the total waste generation per year is calculated to be 101921 tons dry mass (DM). The characteristics found in the waste were (i) organic decomposables (29.2%, 29710 tons/year DM), (ii) recyclables (37.0%, 37726 tons/year DM), which included plastic, paper, metal, and glass, and (iii) trash (33.8%, 34485 tons/year DM), which mainly comprised polythene bags, medicine packaging, diapers, and wrappers. Waste generation was higher in colonies with comparatively higher income and better living standards. In the future, data collection for all four seasons and the improvements due to expanding the awareness campaign to educational institutes will be quantified. This waste management system can potentially contribute to vital sustainable development goals (e.g., clean water and sanitation), reduce the need to harvest fresh resources from the ecosystem, create business and job opportunities, and consequently address one of the most pressing environmental issues of the country.
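The reported composition figures are internally consistent, which can be cross-checked directly. The short sketch below uses only the numbers quoted in the abstract; the implied household size at the end is a derived figure, not one the study reports.

```python
# Cross-check of the reported waste composition for Gujrat Tehsil
# (all tonnages and percentages taken directly from the abstract, dry-mass basis).
total_dm = 101921                     # tons/year, total dry mass
fractions = {
    "organic decomposable": 29710,    # tons/year DM
    "recyclables":          37726,
    "trash":                34485,
}

# The three fractions account exactly for the reported total...
assert sum(fractions.values()) == total_dm

# ...and their shares match the quoted percentages to rounding precision.
quoted = {"organic decomposable": 29.2, "recyclables": 37.0, "trash": 33.8}
for name, pct in quoted.items():
    assert abs(100 * fractions[name] / total_dm - pct) < 0.1

# Per-household vs. per-capita generation implies an average household size
# (a derived figure, not stated in the study):
household_kg, capita_kg = 460, 68     # kg/year dry mass
print(f"implied household size: {household_kg / capita_kg:.1f} persons")
```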

Keywords: integrated solid waste management, waste segregation, waste bank, community development

Procedia PDF Downloads 124
234 Engineering Topology of Ecological Model for Orientation Impact of Sustainability Urban Environments: The Spatial-Economic Modeling

Authors: Moustafa Osman Mohammed

Abstract:

The modeling of a spatial-economic database is crucial in relating economic network structure to social development. Sustainability within the spatial-economic model gives attention to green businesses that comply with Earth's systems. The natural exchange patterns of ecosystems have consistent and periodic cycles that preserve energy and material flows in systems ecology. When network topology influences formal and informal communication in systems ecology, ecosystems are postulated to govern the basic level of spatial sustainable outcome (i.e., project compatibility success). These instrumentalities impact various aspects of the second level of spatial sustainable outcomes (i.e., participant social security satisfaction). The sustainability outcomes are modeled as a composite structure based on a network analysis model to calculate the prosperity of panel databases for efficiency value from 2005 to 2025. The database models a spatial structure to represent state-of-the-art value-orientation impact and the corresponding complexity of sustainability issues (e.g., build a consistent database necessary to approach spatial structure; construct the spatial-economic-ecological model; develop a set of sustainability indicators associated with the model; allow quantification of social, economic, and environmental impact; use the value orientation as a set of important sustainability policy measures) and to demonstrate spatial structure reliability. The structure of the spatial-ecological model is established for management schemes from the perspective of pollutants from multiple sources through input-output criteria. These criteria evaluate the spillover effect by conducting Monte Carlo simulations and sensitivity analysis in a unique spatial structure. The balance within "equilibrium patterns," such as collective biosphere features, has a composite index of many distributed feedback flows.
These have a dynamic structure related to physical and chemical properties that gradually extends to incremental patterns. While these spatial structures argue from ecological modeling toward resource savings, static loads are not decisive from an artistic/architectural perspective. The model attempts to unify analytic and analogical spatial structure for the development of urban environments in a relational database setting, using optimization software to integrate spatial structure in a process based on the engineering topology of systems ecology.
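The Monte Carlo and sensitivity-analysis step applied to input-output criteria can be sketched numerically. The example below is hypothetical and not from the paper: it samples uncertain technical coefficients of a tiny two-sector input-output (Leontief) model, recomputes total output x = (I - A)^(-1) d on each draw, and reports the spread of the results as a crude sensitivity measure. All coefficients and demands are illustrative.

```python
import random

def leontief_output(A, d):
    """Solve x = (I - A)^-1 d for a 2-sector economy (closed-form 2x2 inverse)."""
    (a11, a12), (a21, a22) = A
    m11, m12, m21, m22 = 1 - a11, -a12, -a21, 1 - a22
    det = m11 * m22 - m12 * m21
    return [( m22 * d[0] - m12 * d[1]) / det,
            (-m21 * d[0] + m11 * d[1]) / det]

random.seed(0)
demand = [100.0, 50.0]                 # final demand per sector (illustrative)
draws = []
for _ in range(5000):
    # technical coefficients drawn around nominal values with ~10% uncertainty
    A = [[random.uniform(0.18, 0.22), random.uniform(0.09, 0.11)],
         [random.uniform(0.27, 0.33), random.uniform(0.13, 0.17)]]
    draws.append(leontief_output(A, demand))

mean_x1 = sum(x[0] for x in draws) / len(draws)
spread_x1 = max(x[0] for x in draws) - min(x[0] for x in draws)
print(f"sector-1 output: mean {mean_x1:.1f}, Monte Carlo spread {spread_x1:.1f}")
```

The spread of each output relative to the sampled input ranges is what a sensitivity analysis over such criteria would report.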

Keywords: ecological modeling, spatial structure, orientation impact, composite index, industrial ecology

Procedia PDF Downloads 37
233 DC Bus Voltage Ripple Control of Photo Voltaic Inverter in Low Voltage Ride-Trough Operation

Authors: Afshin Kadri

Abstract:

The use of Renewable Energy Resources (RES) as a type of distributed generation (DG) unit is growing in distribution systems. The connection of these generation units to existing AC distribution systems changes the structure and some of the operational aspects of these grids. Most RES require power-electronic-based interfaces for connection to AC systems, and these interfaces consist of at least one DC/AC conversion unit. Nowadays, grid-connected inverters must be able to support the grid under voltage sag conditions. Two curves define these conditions: one gives the magnitude of the reactive current component as a function of the voltage drop value, and the other gives the minimum time for which the inverter must remain connected to the grid. This feature is named low voltage ride-through (LVRT). Implementing this feature causes problems in the operation of the inverter, among them an increased amplitude of the high-frequency components of the injected current and, for inverters connected to photovoltaic panels, operation away from the maximum power point. An important phenomenon in these conditions is ripple in the DC bus voltage, which affects the operation of the inverter both directly and indirectly. The losses in the DC bus capacitors, which are electrolytic capacitors, raise their temperature and shorten their lifespan. In addition, if the inverter is connected directly to the photovoltaic panels and has the duty of maximum power point tracking, these ripples cause oscillations around the operating point and reduce the generated energy. The traditional method of eliminating these ripples is a bidirectional converter in the DC bus, which works as a buck-boost converter and transfers the ripples to its own DC bus. Despite eliminating the ripples in the DC bus, this method cannot solve the reliability problem because it still uses an electrolytic capacitor in its DC bus.
In this work, a control method is proposed that uses the bidirectional converter as the fourth leg of the inverter and eliminates the DC bus ripples by injecting unbalanced currents into the grid. The proposed method works on the basis of constant power control: in addition to supporting the amplitude of the grid voltage, it stabilizes its frequency by injecting active power. The proposed method can also eliminate the DC bus ripples during deep voltage drops, which would otherwise push the amplitude of the reference current above the nominal current of the inverter. In these conditions, the amplitude of the injected current for the faulty phases is kept at the nominal value, and its phase, together with the phases and amplitudes of the other phases, is adjusted so that the ripples in the DC bus are eliminated, although the generated power decreases.
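The origin of the ripple the controller must suppress can be shown numerically: with balanced injected currents, the instantaneous three-phase power p(t) = sum of v_k(t) i_k(t) is constant for a balanced grid but oscillates at twice the grid frequency when one phase sags, and that double-frequency power swing appears as DC bus voltage ripple. The sketch below uses illustrative per-unit amplitudes and unity power factor per phase; it demonstrates the problem, not the paper's fourth-leg control law.

```python
import math

f = 50.0                               # grid frequency, Hz
def phase(k): return -2 * math.pi * k / 3   # phases a, b, c at 0, -120, -240 deg

def inst_power(t, v_amps, i_amps):
    """Instantaneous 3-phase power with each current in phase with its voltage."""
    return sum(v_amps[k] * math.cos(2 * math.pi * f * t + phase(k)) *
               i_amps[k] * math.cos(2 * math.pi * f * t + phase(k))
               for k in range(3))

ts = [n / (f * 200) for n in range(400)]   # two grid cycles, 200 samples/cycle
balanced = [inst_power(t, [1, 1, 1], [1, 1, 1]) for t in ts]
sagged   = [inst_power(t, [1, 1, 0.4], [1, 1, 1]) for t in ts]  # 60% sag, phase c

ripple = lambda p: max(p) - min(p)     # peak-to-peak power oscillation
print(f"power ripple, balanced grid: {ripple(balanced):.3f} pu")
print(f"power ripple, 60% sag:       {ripple(sagged):.3f} pu")
```

Analytically, the sag leaves an uncancelled double-frequency term of amplitude (1 - 0.4)/2 = 0.3 pu, so the peak-to-peak power ripple is about 0.6 pu; rebalancing the injected current phases and amplitudes, as the proposed method does, is what removes this term.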

Keywords: renewable energy resources, voltage drop value, DC bus ripples, bidirectional converter

Procedia PDF Downloads 52
232 Enhancing Fault Detection in Rotating Machinery Using Wiener-CNN Method

Authors: Mohamad R. Moshtagh, Ahmad Bagheri

Abstract:

Accurate fault detection in rotating machinery is of utmost importance to ensure optimal performance and prevent costly downtime in industrial applications. This study presents a robust fault detection system based on vibration data collected from rotating gears under various operating conditions. The considered scenarios are: (1) both gears healthy, (2) one healthy gear and one faulty gear, and (3) an imbalance introduced on a healthy gear. Vibration data were acquired using a Hentek 1008 device and stored in a CSV file. Python code implemented in the Spyder environment was used for data preprocessing and analysis. Features were extracted using the Wiener feature selection method. These features were then employed in multiple machine learning algorithms, including Convolutional Neural Networks (CNN), Multilayer Perceptron (MLP), K-Nearest Neighbors (KNN), and Random Forest, to evaluate their performance in detecting and classifying faults on both the training and validation datasets. The comparative analysis revealed the superior performance of the Wiener-CNN approach, which achieved a remarkable accuracy of 100% for both the two-class (healthy gear and faulty gear) and three-class (healthy gear, faulty gear, and imbalance) scenarios on the training and validation datasets. In contrast, the other methods exhibited varying levels of accuracy. The Wiener-MLP method attained 100% accuracy on both the two-class training and validation datasets; for the three-class scenario, it demonstrated 100% accuracy on the training dataset and 95.3% on the validation dataset. The Wiener-KNN method yielded 96.3% accuracy on the two-class training dataset and 94.5% on the validation dataset; in the three-class scenario, it achieved 85.3% accuracy on the training dataset and 77.2% on the validation dataset.
The Wiener-Random Forest method achieved 100% accuracy on the two-class training dataset and 85% on the validation dataset, while for the three-class scenario it attained 100% accuracy on the training dataset and 90.8% on the validation dataset. The exceptional accuracy demonstrated by the Wiener-CNN method underscores its effectiveness in accurately identifying and classifying fault conditions in rotating machinery. The proposed fault detection system uses vibration data analysis and advanced machine learning techniques to improve operational reliability and productivity. By adopting the Wiener-CNN method, industrial systems can benefit from enhanced fault detection capabilities, facilitating proactive maintenance and reducing equipment downtime.
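The overall features-then-classifier structure of such a pipeline can be sketched on synthetic data. This stand-in does not reproduce the paper's Wiener feature selection or CNN: it fabricates vibration windows for the three conditions (base noise for healthy, periodic impacts for a faulty gear, a once-per-revolution sinusoid for imbalance), extracts two hand-picked features (RMS and crest factor), and classifies with a nearest-centroid rule.

```python
import math, random

random.seed(1)
N = 1024                                                      # samples per window

def window(condition):
    """Synthetic vibration window for one of the three conditions."""
    sig = [random.gauss(0, 0.1) for _ in range(N)]            # base noise
    if condition == "imbalance":                              # 1x shaft sinusoid
        sig = [s + 0.5 * math.sin(2 * math.pi * 8 * n / N) for n, s in enumerate(sig)]
    if condition == "faulty":                                 # periodic tooth impacts
        for n in range(0, N, 128):
            sig[n] += 2.0
    return sig

def features(sig):
    """Two simple condition indicators: RMS level and crest factor."""
    rms = math.sqrt(sum(s * s for s in sig) / len(sig))
    crest = max(abs(s) for s in sig) / rms
    return (rms, crest)

labels = ["healthy", "faulty", "imbalance"]
train = {c: [features(window(c)) for _ in range(20)] for c in labels}
centroid = {c: tuple(sum(f[i] for f in fs) / len(fs) for i in range(2))
            for c, fs in train.items()}

def classify(sig):
    fx = features(sig)
    return min(labels, key=lambda c: sum((fx[i] - centroid[c][i]) ** 2 for i in range(2)))

correct = sum(classify(window(c)) == c for c in labels for _ in range(10))
print(f"accuracy on fresh synthetic windows: {correct}/30")
```

On real gearbox data the separable structure is far weaker, which is why the study's learned features and CNN classifier outperform simple hand-picked indicators.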

Keywords: fault detection, gearbox, machine learning, wiener method

Procedia PDF Downloads 57
231 Commercial Winding for Superconducting Cables and Magnets

Authors: Glenn Auld Knierim

Abstract:

Automated robotic winding of high-temperature superconductors (HTS) addresses the precision, efficiency, and reliability critical to the commercialization of products. Today's HTS materials are mature and commercially promising but require manufacturing attention. Particularly given the exaggerated rectangular cross-section (very thin by very wide), winding precision is critical to manage the stress that can crack the fragile ceramic superconductor (SC) layer and destroy the SC properties. Damage potential is highest during peak operation, where winding stress magnifies operational stress. Another challenge is that operational parameters, such as magnetic field alignment, affect design performance. Winding process performance, including precision, capability for geometric complexity, and efficient repeatability, is required for commercial production of current HTS. Due to winding limitations, current HTS magnets focus on simple pancake configurations; HTS motors, generators, MRI/NMR, fusion, and other projects are awaiting robotically wound solenoid, planar, and spherical magnet configurations. As with conventional power cables, full transposition winding is required for long-length alternating current (AC) and pulsed power cables. Robotic production is required for transposition: periodically swapping the cable conductors and placing them into precise positions, which yields the minimized reactance that power utilities require. A full-transposition SC cable, in theory, has no transmission length limits for AC and variable transient operation, owing to no resistance (a problem with conventional cables), negligible reactance (a problem for helically wound HTS cables), and no long-length manufacturing issues (a problem with both stamped and twisted stacked HTS cables). The Infinity Physics team is solving these manufacturing problems by developing automated manufacturing to produce the first reliable, utility-grade commercial SC cables and magnets.
Robotic winding machines combine mechanical and process design, specialized sensing and observers, and state-of-the-art optimization and control sequencing to carefully manipulate individual fragile SCs, especially HTS, into previously unattainable, complex geometries with electrical geometry equivalent to commercially available conventional conductor devices.
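The full-transposition idea referenced above can be sketched as a position schedule: along the cable, conductor positions are periodically swapped so that every strand occupies every position for an equal share of the length, which is what cancels the position-dependent flux linkage (and hence the reactance imbalance) between strands. The cyclic rotation below is one simple possible transposition pattern, not the specific scheme the robotic winders implement.

```python
def transposition_schedule(n_conductors, n_sections):
    """Position of each conductor in each cable section (simple cyclic rotation)."""
    return [[(c + s) % n_conductors for c in range(n_conductors)]
            for s in range(n_sections)]

n = 6                                   # illustrative strand count
sched = transposition_schedule(n, n)    # one full transposition cycle

# Every conductor visits every position exactly once per cycle, so each
# strand's average position (and flux linkage) over the cycle is identical.
for c in range(n):
    visited = sorted(section[c] for section in sched)
    assert visited == list(range(n))
```

The robotic challenge is executing such swaps physically, at precise axial intervals, without over-stressing the thin ceramic SC layer.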

Keywords: automated winding manufacturing, high temperature superconductor, magnet, power cable

Procedia PDF Downloads 121
230 Profile of the Renal Failure Patients under Haemodialysis at B. P. Koirala Institute of Health Sciences Nepal

Authors: Ram Sharan Mehta, Sanjeev Sharma

Abstract:

Introduction: Haemodialysis (HD) is a mechanical process of removing waste products from the blood and replacing essential substances in patients with renal failure. The first artificial kidney was developed in the Netherlands in 1943; the first successful treatment of chronic renal failure (CRF) was reported in 1960, and life-saving treatment for CRF began in 1972. In 1973, Medicare took over financial responsibility for many clients, after which the method became popular. B. P. Koirala Institute of Health Sciences (BPKIHS) is the only centre outside Kathmandu where an HD service is available. At BPKIHS, peritoneal dialysis (PD) started in January 1998, and HD started in August 2002; by September 2003, about 278 patients had received HD. The number of HD patients at BPKIHS is increasing day by day with institutional growth. No such study has been conducted in the past, so valid and reliable baseline data are lacking. Hence, the investigators were interested in conducting the study "Profile of the Renal Failure Patients under Haemodialysis at B. P. Koirala Institute of Health Sciences Nepal". Objectives: The objectives of the study were to find out the socio-demographic characteristics of the patients, to explore the knowledge of the patients regarding the disease process and haemodialysis, and to identify the problems encountered by the patients. Methods: This is a hospital-based exploratory study. The population of the study was clients under HD, and the sampling method was purposive. Fifty-four patients who had undergone HD during the complete year from 17 July 2012 to 16 July 2013 were included in the study. A structured interview schedule, whose validity and reliability had been established, was used to collect the data. Results: A total of 54 subjects had undergone HD, with an age range of 5-75 years; the majority were male (74%) and Hindu (93%). Thirty-one percent were illiterate, 28% had agriculture as their occupation, 80% were from very poor communities, and about 30% of subjects were unaware of the disease they were suffering from.
The majority of subjects reported no complications during dialysis (61%), whereas 20% reported nausea and vomiting, 9% hypotension, 4% headache, and 2% chest pain during dialysis. Conclusions: CRF leading to HD is a long battle for patients, requiring major and continuous adjustment, both physiological and psychological. The study suggests that non-compliance with the HD regimen was common. The socio-demographic and knowledge profile will help in the management and early prevention of the disease, will highlight aspects that influence care, and will help patients properly select a mode of treatment for themselves.

Keywords: profile, haemodialysis, Nepal, patients, treatment

Procedia PDF Downloads 360
229 BIM Modeling of Site and Existing Buildings: Case Study of ESTP Paris Campus

Authors: Rita Sassine, Yassine Hassani, Mohamad Al Omari, Stéphanie Guibert

Abstract:

Building Information Modelling (BIM) is the process of creating, managing, and centralizing information during the building lifecycle. BIM can be used throughout a construction project, from the initiation phase to the planning and execution phases and on to the maintenance and lifecycle management phase. For existing buildings, BIM can be used for specific applications such as lifecycle management. However, most existing buildings do not have a BIM model, and creating a compatible BIM for an existing building is very challenging: it requires special equipment for data capture and effort to convert these data into a BIM model. The main difficulties in such projects are defining the data needed, the level of development (LOD), and the methodology to be adopted. In addition to managing information for an existing building, studying the impact of the built environment is a challenging topic, so integrating the existing terrain that surrounds the buildings into the digital model is essential in order to run simulations such as flood simulation, energy simulation, etc. Replicating the physical model and updating its information in real time to create its Digital Twin (DT) is very important. The Digital Terrain Model (DTM) represents the ground surface of the terrain by a set of discrete points with unique height values over 2D points, based on a reference surface (e.g., mean sea level, geoid, or ellipsoid). In addition, information related to the type of pavement materials, the types and heights of vegetation, and damaged surfaces can be integrated. Our aim in this study is to define the methodology to be used to provide a 3D BIM model of the site and the existing buildings, based on the case study of the "Ecole Spéciale des Travaux Publiques (ESTP Paris)" school of engineering campus. The property is located on a hilly site of 5 hectares and is composed of more than 20 buildings with a total area of 32,000 square meters and heights between 50 and 68 meters.
In this work, the campus's precise levelling grid is computed according to the NGF-IGN69 altimetric system, and the grid control points are computed according to the RGF93 (Réseau Géodésique Français) – Lambert 93 French system, using different methods: (i) land topographic surveying with a robotic total station; (ii) a GNSS (Global Navigation Satellite System) levelling grid in NRTK (Network Real Time Kinematic) mode; (iii) point clouds generated by laser scanning. These technologies allow the computation of multiple building parameters, such as the boundary limits, the number of floors, the georeferencing of the floors, the georeferencing of the four base corners of each building, etc. Once the input data are identified, the digital model of each building is created, and the DTM is also modeled. The process of altimetric determination is complex and requires effort to collect and analyze data in multiple formats. Since many technologies can be used to produce the digital models, different file formats such as DraWinG (DWG), LASer (LAS), Comma-Separated Values (CSV), Industry Foundation Classes (IFC), and ReViT (RVT) are generated, and checking the interoperability between the BIM models is very important. In this work, all models are linked together and shared on the 3DEXPERIENCE collaborative platform.
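A DTM of the kind described, a set of discrete (x, y, z) points over a reference surface, can be queried for ground height at an arbitrary 2D location. The sketch below uses simple inverse-distance weighting (IDW), one common interpolation choice; the coordinates are illustrative, not actual ESTP campus survey data.

```python
def idw_height(points, x, y, power=2.0):
    """Interpolate z at (x, y) from (xi, yi, zi) survey points by inverse-distance weighting."""
    num = den = 0.0
    for xi, yi, zi in points:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0:
            return zi                  # query coincides with a survey point
        w = 1.0 / d2 ** (power / 2)    # weight falls off as 1 / distance^power
        num += w * zi
        den += w
    return num / den

# Four levelling points on a gentle slope (metres; heights assumed on an
# NGF-style reference surface, values made up for illustration)
survey = [(0, 0, 50.0), (100, 0, 52.0), (0, 100, 54.0), (100, 100, 56.0)]
print(f"height at grid centre: {idw_height(survey, 50, 50):.2f} m")
```

At the grid centre all four points are equidistant, so the interpolated height is simply their mean; denser levelling grids and higher `power` values localize the estimate around the nearest survey points.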

Keywords: building information modeling, digital terrain model, existing buildings, interoperability

Procedia PDF Downloads 83