Search results for: multiple variations
1834 Study of the Influence of Non-Genetic Factors Affecting Overnutrition among Students in Ayutthaya Province, Thailand
Authors: Thananyada Buapian
Abstract:
Overnutrition is emerging as a morbid condition in developing and Westernized countries. Because of its comorbidities, it is cost-effective to prevent and manage the condition early. In Thailand, this alarming condition has long been studied, yet its prevalence remains higher than in the past. Physicians should recognize it well and have a definite strategy to confront and combat it. The rapid growth in the number of overnourished students indicates that genetic factors are not the primary determinants, since human genes have remained essentially unchanged for a century. This study aims to assess the prevalence of overnutrition among students and to investigate the non-genetic factors affecting it. A cross-sectional school-based survey was conducted using two-stage sampling. Respondents were 1,850 students in grades 4 to 6 in Ayutthaya Province. Anthropometric measurements were taken and a questionnaire was developed. Childhood overnutrition was defined as a weight-for-height Z-score above +2 SD of the NCHS/WHO reference. About thirty-three percent of the children in Ayutthaya Province were overnourished. Stepwise multiple logistic regression analysis showed that 8 statistically significant non-genetic factors explained 18 percent of the variation in childhood overnutrition. Sex was the prime factor explaining the variation, followed by duration of light physical activity, duration of moderate physical activity, having been breastfed, the presence of a healthy role model in the caregiver, number of siblings, birth order, and occupation of the caregiver, respectively. Non-genetic factors, especially the subjects' demographics and physical activity, as well as the caregivers' background and family environment, should be considered in a viable approach to remedying this health imbalance in children.
Keywords: non-genetic factors, overnutrition, overnutrition students
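The stepwise logistic model described above can be sketched in code. The following is a minimal illustration on synthetic data, not the study's actual specification: the predictor names, the likelihood-ratio entry criterion, and the use of McFadden's pseudo-R² as the "percent of variation explained" are all assumptions made for the example.

```python
import numpy as np

def fit_logistic(X, y, iters=2000, lr=0.5):
    """Plain gradient-ascent logistic regression; returns (weights, log-likelihood)."""
    Xb = np.hstack([np.ones((len(y), 1)), X])      # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    p = np.clip(1.0 / (1.0 + np.exp(-Xb @ w)), 1e-9, 1 - 1e-9)
    return w, float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

def forward_stepwise(X, y, names, entry=1.92):
    """Greedy forward selection: repeatedly add the predictor giving the
    largest log-likelihood gain, while the gain exceeds ~chi2(1)/2 at p<0.05."""
    null_ll = fit_logistic(np.empty((len(y), 0)), y)[1]
    selected, order, ll = [], [], null_ll
    remaining = list(range(X.shape[1]))
    while remaining:
        best = max(remaining, key=lambda j: fit_logistic(X[:, selected + [j]], y)[1])
        new_ll = fit_logistic(X[:, selected + [best]], y)[1]
        if new_ll - ll <= entry:
            break
        selected.append(best)
        remaining.remove(best)
        order.append(names[best])
        ll = new_ll
    mcfadden_r2 = 1.0 - ll / null_ll   # rough "share of variation explained"
    return order, mcfadden_r2

# Synthetic cohort: sex and light activity truly drive the outcome, noise does not.
rng = np.random.default_rng(0)
n = 600
sex = rng.integers(0, 2, n).astype(float)
light_activity = rng.normal(0.0, 1.0, n)
noise = rng.normal(0.0, 1.0, n)                    # irrelevant predictor
logit = 1.5 * sex - 1.0 * light_activity
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)
X = np.column_stack([sex, light_activity, noise])
order, r2 = forward_stepwise(X, y, ["sex", "light_activity", "noise"])
```

On data like this, the two informative predictors enter first and the pseudo-R² lands in the same modest range the study reports, illustrating how a small set of factors can be "significant" yet explain only a fraction of the variation.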
Procedia PDF Downloads 274
1833 Psychiatric Risk Assessment in the Emergency Department: The Impact of NEAT on the Management of Mental Health Patients
Authors: Euan Donley
Abstract:
Emergency Departments (EDs) are heavily burdened as presentation rates continue to rise. To improve patient flow, National Emergency Access Targets (NEAT) were introduced. NEAT imposes timelines on ED presentations, such as discharging patients within four hours of arrival. Mental health patients use EDs more than the general population and generally present with more complex needs. The aim of this study is to examine the impact of NEAT on psychiatric risk assessment of mental health patients in the ED. Seventy-eight mental health clinicians from seven hospital EDs in Victoria, Australia, participated in a mixed-method analysis via an anonymous online survey. NEAT was considered helpful in that mental health patients were seen more quickly, were less likely to abscond, teamwork amongst ED staff could improve, and in some cases administrative processes were better streamlined. However, clinicians felt that NEAT also meant less time with patients and relatives, resulted in rushed assessments, placed undue pressure on mental health clinicians, was not conducive to training, and made time the wrong focus of patient treatment. The profile of a patient typically treated within NEAT timelines showed a perfect storm of luck and compliance: if a patient was sober, medically stable, referred early, did not require much collateral information, and did not have distressed relatives, NEAT was more likely to be met. Organisationally, participants reported no organisational change or training to meet NEAT. Poor mental health staffing, multiple ED presentations, and a shortage of mental health beds also hamper meeting NEAT. Findings suggest participants were supportive of NEAT in principle, but a demanding workload and organisational barriers meant NEAT had an overall negative effect on psychiatric risk assessment of mental health patients in the ED.
Keywords: assessment, emergency, risk, psychiatric
Procedia PDF Downloads 517
1832 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface
Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto
Abstract:
Motor imagery (MI) based brain-computer interfaces (BCI) use event-related (de)synchronization (ERD/ERS), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and measurement noise in EEG signals, methods based on band-pass filters defined over a specific frequency band (e.g., 8-30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques, such as Common Spatial Patterns (CSP), are also used to estimate the variance of the filtered signal and extract features that characterize the imagined movement. CSP effectiveness depends on the subject's discriminative frequencies, and approaches based on decomposing the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCIs that implement SBCSP. The goal is to apply the FFT to reduce the computational cost of the processing step of these systems and to make them more efficient without compromising classification accuracy. The proposal is based on representing EEG signals as a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The SBCSP structure divides the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, which are processed in parallel, each by a CSP filter and an LDA classifier. A Bayesian meta-classifier then represents the LDA outputs of each sub-band as scores, organizes them into a single vector, and uses that vector to train a global SVM classifier.
Initially, the public EEG data set IIa of the BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact (its dimension is 68% smaller than that of the original signal), the resulting FFT matrix retains the signal information relevant to class discrimination. In addition, the results showed an average reduction of 31.6% in computational cost relative to filtering methods based on IIR filters, suggesting FFT efficiency when applied in the filtering step. Finally, the frequency decomposition approach significantly improves the overall system classification rate compared to the commonly used filtering, going from 73.7% using IIR to 84.2% using FFT. The accuracy improvement above 10% and the reduced computational cost denote the potential of the FFT for EEG signal filtering in MI-based BCIs implementing SBCSP. Tests with other data sets are currently being performed to reinforce these conclusions.
Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns
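The frequency-decomposition step can be illustrated with a minimal sketch: one rFFT per epoch replaces 33 IIR filters, and each sub-band becomes a column slice of the coefficient matrix. The sampling rate, band width, and overlap below are illustrative assumptions, and the per-band CSP/LDA stages that would follow are omitted.

```python
import numpy as np

def fft_subbands(epoch, fs=250.0, fmax=40.0, n_bands=33, bandwidth=4.0):
    """Decompose one EEG epoch (channels x samples) into sub-band powers.

    A single rFFT replaces per-band IIR filtering: each of the n_bands
    overlapping sub-bands is just a column slice of the coefficient
    matrix, and its power follows from Parseval's relation.
    Returns an (n_bands, channels) array of band powers.
    """
    n = epoch.shape[1]
    coeffs = np.fft.rfft(epoch, axis=1)             # channels x (n//2 + 1)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    starts = np.linspace(0.0, fmax - bandwidth, n_bands)
    powers = np.empty((n_bands, epoch.shape[0]))
    for b, f0 in enumerate(starts):
        mask = (freqs >= f0) & (freqs < f0 + bandwidth)
        powers[b] = np.sum(np.abs(coeffs[:, mask]) ** 2, axis=1) / n
    return powers

# Toy two-channel epoch: a 10 Hz rhythm on channel 0 only.
fs = 250.0
t = np.arange(500) / fs                             # 2 s of data
epoch = np.vstack([np.sin(2 * np.pi * 10.0 * t),
                   0.01 * np.cos(2 * np.pi * 35.0 * t)])
powers = fft_subbands(epoch, fs=fs)
peak_band = int(np.argmax(powers[:, 0]))
```

In the full pipeline, each row of `powers` (or, more precisely, the per-band coefficient slices) would feed one CSP filter and LDA classifier in parallel, exactly as the abstract describes.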
Procedia PDF Downloads 129
1831 Role of Baseline Measurements in Assessing Air Quality Impact of Shale Gas Operations
Authors: Paula Costa, Ana Picado, Filomena Pinto, Justina Catarino
Abstract:
Environmental impact associated with large-scale shale gas development is of major concern to the public, policy makers, and other stakeholders. To assess this impact on the atmosphere, it is important to monitor ambient air quality prior to and during all stages of shale gas operation. Baseline observations can provide a record of the pre-development state of the environment. The lack of baseline concentrations has been identified as an important knowledge gap in assessing the impact of air emissions from shale gas operations. In fact, baseline monitoring of air quality is missing in several regions where future shale gas exploration is a strong possibility. This makes it difficult to properly identify, quantify, and characterize the environmental impacts that may be associated with shale gas development. Implementing a baseline air monitoring program is imperative for assessing the total emissions related to shale gas operations; indeed, any monitoring programme should be designed to provide indicative information on background levels. A baseline air monitoring program should identify and characterize targeted air pollutants, both those most frequently reported from monitoring and emission measurements and those expected from hydraulic fracturing activities, and establish ambient air conditions prior to start-up of potential emission sources from shale gas operations. The program has to run for at least one year to account for ambient variations. In the literature, in addition to greenhouse gas emissions of CH4, CO2, and nitrogen oxides (NOx), fugitive emissions from shale gas production can release volatile organic compounds (VOCs), aldehydes (formaldehyde, acetaldehyde), and hazardous air pollutants (HAPs). The VOCs include, among others, benzene, toluene, ethylbenzene, xylenes, hexanes, 2,2,4-trimethylpentane, and styrene.
The concentrations of six air pollutants (ozone, particulate matter (PM), carbon monoxide (CO), nitrogen oxides (NOx), sulphur oxides (SOx), and lead) whose regional ambient levels are regulated by the Environmental Protection Agency (EPA) are often discussed. However, the main concern in air emissions associated with shale gas operations seems to be the leakage of methane, a compound of major concern due to its strong global warming potential. Identifying methane leakage from shale gas activities is complex owing to the existence of several other CH4 sources (e.g., landfills, agricultural activity, or gas pipelines/compressor stations). An integrated monitoring study of methane emissions may be a suitable means of distinguishing the contributions of different methane sources to ambient levels. All data need to be interpreted carefully, also taking into account the meteorological conditions of the site; this may require a more intensive monitoring programme. It is therefore essential to develop a low-cost sampling strategy suitable for establishing pre-operations baseline data, together with an integrated monitoring program to assess the emissions from shale gas operation sites. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 640715.
Keywords: air emissions, baseline, greenhouse gases, shale gas
Procedia PDF Downloads 332
1830 Non-Invasive Data Extraction from Machine Display Units Using Video Analytics
Authors: Ravneet Kaur, Joydeep Acharya, Sudhanshu Gaur
Abstract:
Artificial Intelligence (AI) has the potential to transform manufacturing by improving shop floor processes such as production, maintenance, and quality. However, industrial datasets are notoriously difficult to extract in a real-time, streaming fashion, negating potential AI benefits. A prime example is specialized industrial controllers operated by custom software, which complicates connecting them to an Information Technology (IT) based data acquisition network. Security concerns may also limit direct physical access to these controllers for data acquisition. To connect the Operational Technology (OT) data stored in these controllers to an AI application in a secure, reliable, and available way, we propose a novel Industrial IoT (IIoT) solution in this paper. In this solution, we demonstrate how video cameras can be installed on a factory shop floor to continuously capture images of the controllers' Human-Machine Interfaces (HMIs). We propose image pre-processing to segment the HMI into regions of streaming data and regions of fixed meta-data. We then evaluate the performance of multiple Optical Character Recognition (OCR) technologies, such as Tesseract and Google Vision, in recognizing the streaming data, and test them on typical factory HMIs under realistic lighting conditions. Finally, we use the meta-data to match the OCR output with the temporal, domain-dependent context of the data to improve the accuracy of the output. Our IIoT solution enables reliable and efficient data extraction, which will improve the performance of subsequent AI applications.
Keywords: human machine interface, industrial internet of things, internet of things, optical character recognition, video analytics
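One simple way to realize the pre-processing idea above (separating streaming-data regions from fixed meta-data regions) is temporal pixel variance across consecutive captures: live process values change between frames, while labels and units stay constant. The threshold value and grayscale-stack input format below are assumptions for illustration; OCR (e.g., Tesseract) would then be run only on the streaming regions.

```python
import numpy as np

def split_hmi_regions(frames, var_threshold=1.0):
    """Split HMI pixels into streaming-data and fixed meta-data regions.

    frames: (t, h, w) stack of grayscale captures of the same display.
    Live process values change between captures (high temporal variance);
    labels, units, and other meta-data stay constant (near-zero variance).
    Returns (streaming_mask, fixed_mask) as boolean (h, w) arrays.
    """
    variance = np.var(frames.astype(float), axis=0)
    streaming = variance > var_threshold
    return streaming, ~streaming

# Toy display: left half is a constant label area, right half shows live values.
rng = np.random.default_rng(1)
frames = np.full((10, 8, 8), 100.0)
frames[:, :, 4:] = rng.uniform(0.0, 255.0, (10, 8, 4))
streaming, fixed = split_hmi_regions(frames)
```

A connected-components pass over `streaming` would then yield the rectangular crops handed to the OCR engine.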
Procedia PDF Downloads 112
1829 Understanding the Prevalence and Expression of Virulence Factors Harbored by Enterotoxigenic Escherichia coli
Authors: Debjyoti Bhakat, Indranil Mondal, Asish K. Mukhopadayay, Nabendu S. Chatterjee
Abstract:
Enterotoxigenic Escherichia coli (ETEC) is one of the leading causes of diarrhea in infants and travelers in developing countries. Colonization factors play an important role in pathogenesis and are one of the main targets for ETEC vaccine development. However, ETEC vaccines have performed poorly in the past, as the prevalence of colonization factors is region-dependent. More than 25 classical colonization factors are presently known to be expressed by ETEC, although not all are expressed together, and multiple non-classical virulence factors have also been identified. Here, the presence and expression of common classical and non-classical virulence factors were studied, followed by studies on the expression of the prevalent colonization factors in different strains. Prevalence was determined by multiplex polymerase chain reaction (PCR) and confirmed by simplex PCR. Quantitative RT-PCR was used to study the RNA expression of these virulence factors, and strains negative for colonization factor expression were confirmed by SDS-PAGE. Among the clinical isolates, the most prevalent toxin genotype was est+elt, followed by est and elt, while the pattern was reversed in the control strains. 29% of clinical and 40% of control strains were negative for any classical colonization factor (CF) or non-classical virulence factor (NCVF). Among CF-positive ETEC strains, CS6 and CS21 were the prevalent ones in the clinical strains, whereas in the control strains CS6 was predominant. Among NCVF genes, eatA was the most prevalent in the clinical isolates and etpA in the controls. CS6 was the most expressed CF, and eatA the predominantly expressed NCVF, for both clinical and control ETEC isolates. CS6 expression was higher in strains carrying CS6 alone, and different strains expressed CS6 at different levels. Not all strains expressed their respective virulence factors.
Understanding the prevalent colonization factor, CS6, and the nature of its expression will contribute to designing an effective vaccine against ETEC in this region of the globe. The expression pattern of CS6 will also help in examining the relatedness between ETEC subtypes.
Keywords: classical virulence factors, CS6, diarrhea, enterotoxigenic Escherichia coli, expression, non-classical virulence factors
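The abstract does not state how the qRT-PCR expression levels were computed; a common convention for comparing expression between strains is the 2^-ΔΔCt (Livak) method, sketched below with invented Ct values purely for illustration.

```python
def fold_change(ct_target_test, ct_ref_test, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt (Livak) method.

    Each Ct is first normalized against a reference (housekeeping) gene
    run in the same sample (dCt); the test strain's dCt is then compared
    to the control strain's dCt (ddCt), and 2**-ddCt gives fold change.
    """
    d_ct_test = ct_target_test - ct_ref_test       # dCt, test strain
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl       # dCt, control strain
    return 2.0 ** -(d_ct_test - d_ct_ctrl)         # fold change vs control

# Invented example: CS6 amplifying 4 cycles earlier (relative to the
# reference gene) in a clinical isolate than in a control strain.
ratio = fold_change(20.0, 15.0, 24.0, 15.0)        # ddCt = -4
```

Since each PCR cycle roughly doubles the product, a ddCt of -4 corresponds to a 16-fold higher CS6 transcript level in the clinical isolate.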
Procedia PDF Downloads 160
1828 Evaluation of the Discoloration of Methyl Orange Using Black Sand as Semiconductor through Photocatalytic Oxidation and Reduction
Authors: P. Acosta-Santamaría, A. Ibatá-Soto, A. López-Vásquez
Abstract:
Organic compounds in wastewaters from the textile and pharmaceutical industries generate multiple harmful effects on the environment and human health. One of them is methyl orange (MeO), an azo dye considered a recalcitrant compound. Heterogeneous photocatalysis emerges as an alternative for treating this type of hazardous compound through the generation of OH radicals using radiation and a semiconductor oxide. To the authors' knowledge, catalysts such as metal-doped TiO2 show high efficiency in degrading MeO; however, this presents economic limitations at industrial scale. Black sand can be considered a naturally doped catalyst because its structure commonly contains compounds such as titanium, iron, and aluminum oxides, as well as zircon and elements such as cadmium and manganese. This study reports the photocatalytic activity of mineral black sand used as semiconductor in the discoloration of MeO by both photocatalytic oxidation and reduction. For this, magnetic composites (RM, M1, M2, and NM) were prepared from the mineral and their activity was tested through MeO discoloration, with TiO2 used as reference. The fractions were characterized chemically, morphologically, and structurally using Scanning Electron Microscopy with Energy-Dispersive X-Ray spectroscopy (SEM-EDX), X-Ray Diffraction (XRD), and X-Ray Fluorescence (XRF). The M2 fraction showed the highest MeO discoloration (93%) under oxidation conditions at pH 2, which could be due to the presence of ferric oxides. The best result for the reduction process was obtained with the M1 fraction (20%) at pH 2, which contains a higher titanium percentage. In the oxidation process, hydrogen peroxide (H2O2) was used as electron-donor agent. According to the results, black sand mineral can be used as a natural semiconductor in photocatalytic processes.
It could be considered a photocatalyst precursor in such processes due to its low cost and easy access.
Keywords: black sand mineral, methyl orange, oxidation, photocatalysis, reduction
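Discoloration figures such as the 93% reported above are conventionally computed from absorbance readings at the dye's absorption maximum. A minimal sketch follows; the absorbance values and wavelength are invented for illustration, not measurements from this study.

```python
def discoloration_pct(a_initial, a_final):
    """Percent discoloration from absorbance readings at the dye's
    lambda-max; by Beer-Lambert, absorbance is proportional to dye
    concentration, so this is also the fractional dye removal."""
    return 100.0 * (a_initial - a_final) / a_initial

# Illustrative MeO absorbance (around its ~464 nm maximum) before and
# after irradiation over the M2 fraction.
pct = discoloration_pct(1.00, 0.07)
```

Here `pct` comes out at approximately 93, matching the form of the efficiency reported for the M2 fraction.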
Procedia PDF Downloads 383
1827 Evaluation of the Grammar Questions at the Undergraduate Level
Authors: Preeti Gacche
Abstract:
A considerable part of undergraduate-level English examination papers is devoted to grammar. Hence, the grammar questions in the question papers are evaluated, and the opinions of both students and teachers about them are obtained and analyzed. A grammar test of 100 marks was administered to 43 students to check their performance, and the question papers were evaluated by 10 different teachers and their scores compared. The analysis of 38 university question papers reveals that, on average, 20 percent of marks are allotted to grammar. Almost all grammar topics are tested, with abundant use of grammatical terminology in the questions. Decontextualization, repetition, the possibility of multiple correct answers, and grammatical errors in framing the questions have been observed. Opinions of teachers and students about grammar questions vary in many respects. The students' responses are analyzed by medium of instruction and by sex; the medium at school level and the sex of the students play no role as far as interest in the study of grammar is concerned. English-medium students solve grammar questions intuitively, whereas non-English-medium students have to recollect the rules of grammar. Prepositions, verbs, articles, and modal auxiliaries are easy topics for most students, whereas the use of conjunctions is the most difficult. Out-of-context grammar items are more difficult to answer than contextualized ones; hence, contextualized texts for testing grammar items are desirable. No formal training in setting questions is imparted to teachers by competent authorities such as the university, and teachers need to be trained in testing. Statistically, there is no significant change in score with a change in rater in the testing of grammar items. There is scope for future improvement.
The question papers need to be evaluated, and feedback needs to be obtained from students and teachers, for future improvement.
Keywords: context, evaluation, grammar, tests
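The rater-effect claim above (no significant change in score when the rater changes) is the kind of finding a one-way ANOVA across raters can check. The sketch below uses invented marks; the study does not state which test was used, so this is only an illustration of the idea.

```python
def one_way_anova_f(groups):
    """F-statistic for a one-way ANOVA: variance between raters' mean
    scores relative to variance within each rater's scores. A small F
    supports the finding that scores do not change significantly with
    the rater. groups: one list of marks per rater."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented marks awarded by two raters to the same three scripts:
f_same = one_way_anova_f([[14.0, 16.0, 18.0], [14.5, 15.5, 18.0]])  # agree
f_diff = one_way_anova_f([[14.0, 16.0, 18.0], [4.0, 6.0, 8.0]])     # disagree
```

Comparing the resulting F against the critical value for (k-1, n-k) degrees of freedom gives the significance decision the abstract refers to.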
Procedia PDF Downloads 357
1826 Germplasm Collections and Morphological Studies of Andropogon gayanus-Andropogon tectorum Complex in Southwestern Nigeria
Authors: Ojo F. M., Nwekeocha C. C., Faluyi J. O.
Abstract:
Morphological studies were carried out on the Andropogon gayanus-Andropogon tectorum complex collected in Southwestern Nigeria to provide a full characterization of the two Andropogon species and elucidate their population dynamics. Morphological data from selected accessions of A. gayanus and A. tectorum from different parts of Southwestern Nigeria were collected and characterized using an adaptation of the Descriptors for Wild and Cultivated Rice (Oryza spp.). Preliminary morphological descriptions were carried out at the points of collection. Garden populations were raised from the vegetative parts of some accessions, and hybrids were maintained in the Botanical Garden of the Obafemi Awolowo University, Ile-Ife. The data obtained were subjected to inferential tests and Duncan's multiple range test. This study revealed the distribution pattern of the two species in the area of study, which suggests a southward migration of Andropogon gayanus from the northern vegetational zones of Nigeria to the southern ecological zones. Igbeti, where A. gayanus occurs alongside occasional A. tectorum on the roadsides without any distinct phenotypic hybrid, and Budo-Ode in Oyo State have been established as the southern limit of the spread of A. gayanus; the migration of A. gayanus to the south is not an invasion but a slow process. A. gayanus was not encountered in Osun, Ondo, Ekiti, and Ogun States. Andropogon gayanus and Andropogon tectorum not only emerge rapidly from rootstocks but can also produce independent propagules by rooting at some nodes. The plants can spread by means of these propagules even without producing sexual or apomictic seeds. This potential for vegetative propagation, in addition to the perennial habit, confers a considerable advantage for colonization by the Andropogon gayanus-Andropogon tectorum complex.
Keywords: accessions, distribution, migration, propagation
Procedia PDF Downloads 117
1825 Structural Model on Organizational Climate, Leadership Behavior and Organizational Commitment: Work Engagement of Private Secondary School Teachers in Davao City
Authors: Genevaive Melendres
Abstract:
School administrators face the reality of teachers losing their engagement, or schools losing their teachers. This study was conducted to identify the structural model that best predicts work engagement of private secondary teachers in Davao City. Ninety-three teachers from four sectarian schools and 56 teachers from four non-sectarian schools completed four survey instruments: the Organizational Climate Questionnaire, the Leader Behavior Descriptive Questionnaire, the Organizational Commitment Scales, and the Utrecht Work Engagement Scale. Data were analyzed using frequency distribution, mean, standard deviation, t-test for independent samples, Pearson r, stepwise multiple regression analysis, and structural equation modeling. Results show that schools have a high level of organizational climate dimensions; leaders oftentimes show work-oriented and people-oriented behavior; teachers have high normative commitment and are very often engaged in their work. Teachers from non-sectarian schools have higher organizational commitment than those from sectarian schools. Organizational climate and leadership behavior are positively related to, and predict, work engagement, whereas commitment did not show any relationship. This study underscores the relative effects of three variables on the work engagement of teachers. After testing the network of relationships and evaluating several models, a best-fitting model was found between leadership behavior and work engagement. These noteworthy findings suggest that principals pay attention to and consistently evaluate their behavior, as this best predicts the work engagement of teachers. The study provides value to administrators who take decisions and create conditions in which teachers derive fulfillment.
Keywords: leadership behavior, organizational climate, organizational commitment, private secondary school teachers, structural model on work engagement
Procedia PDF Downloads 275
1824 Establishing a Computational Screening Framework to Identify Environmental Exposures Using Untargeted Gas-Chromatography High-Resolution Mass Spectrometry
Authors: Juni C. Kim, Anna R. Robuck, Douglas I. Walker
Abstract:
The human exposome, which includes chemical exposures over the lifetime and their effects, is now recognized as an important measure for understanding human health; however, the complexity of the data makes the identification of environmental chemicals challenging. The goal of our project was to establish a computational workflow for the improved identification of environmental pollutants containing chlorine or bromine. Using the "pattern.search" function available in the R package NonTarget, we wrote a multifunctional script that searches mass spectral clusters from untargeted gas-chromatography high-resolution mass spectrometry (GC-HRMS) for spectra consistent with chlorine- and bromine-containing organic compounds. The "pattern.search" function was incorporated into a new function that allows the evaluation of clusters containing multiple analyte fragments, has multi-core support, and provides a simplified output listing compounds containing chlorine and/or bromine. The new function processed 46,000 spectral clusters in under 8 seconds and identified over 150 potential halogenated spectra. We next applied our function to a deidentified dataset from patients diagnosed with primary biliary cholangitis (PBC), primary sclerosing cholangitis (PSC), and healthy controls. Twenty-two spectra corresponded to potential halogenated compounds in the PSC and PBC dataset, including six significantly different in PBC patients and four that differed in PSC patients. We have developed an improved algorithm for detecting halogenated compounds in GC-HRMS data, providing a strategy for prioritizing exposures in the study of human disease.
Keywords: exposome, metabolome, computational metabolomics, high-resolution mass spectrometry, exposure, pollutants
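The isotope-pattern logic behind this kind of halogen screen can be sketched in a few lines: chlorine produces M/M+2 peak pairs roughly 1.997 Da apart with a ~3.1:1 intensity ratio (from 35Cl/37Cl natural abundance), while bromine gives a ~1:1 pair. The Python below is an illustration of the screening idea only, not the NonTarget code, and the tolerances are assumptions.

```python
def screen_halogen(cluster, mz_tol=0.01, ratio_tol=0.35):
    """Flag a spectral cluster as chlorine- or bromine-bearing.

    cluster: list of (mz, intensity) peaks from one GC-HRMS cluster.
    Looks for an M/M+2 pair ~1.997 Da apart whose intensity ratio is
    near 3.1 (one Cl, from 35Cl/37Cl abundances) or near 1.0 (one Br).
    """
    SPACING = 1.9970    # approx. mass gap for both Cl and Br isotope pairs
    hits = []
    for mz1, i1 in cluster:
        for mz2, i2 in cluster:
            if abs((mz2 - mz1) - SPACING) > mz_tol or i2 <= 0:
                continue
            ratio = i1 / i2
            if abs(ratio - 3.1) / 3.1 < ratio_tol:
                hits.append(("Cl", mz1))
            elif abs(ratio - 1.0) < ratio_tol:
                hits.append(("Br", mz1))
    return hits

# Synthetic clusters: a one-Cl pattern and a one-Br pattern.
cl_hits = screen_halogen([(200.000, 100.0), (201.997, 32.0)])
br_hits = screen_halogen([(250.000, 50.0), (251.998, 49.0)])
```

A production screen (as in NonTarget) additionally models multi-halogen patterns (Cl2, ClBr, etc.), whose M+2/M+4 ratios follow the binomial expansion of the isotope abundances.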
Procedia PDF Downloads 140
1823 Stressors Faced by Border Security Officers: The Singapore Experience
Authors: Jansen Ang, Andrew Neo, Dawn Chia
Abstract:
Border security is unlike mainstream policing in that officers are essentially in static deployment, working round the clock every day of the year looking for illegitimate entry of persons and goods. In Singapore, border security officers perform multiple functions to ensure the nation's safety and security. They are responsible for safeguarding the borders of Singapore to prevent threats from entering the country. As the first line of defence in ensuring the nation's border security, officers are entrusted with the responsibility of screening travellers inbound and outbound of Singapore daily. They examined 99 million arrivals and departures at the various checkpoints in 2014, a considerable volume compared to most immigration agencies. The officers' work scope also includes cargo clearance and the protective and security functions of checkpoints. The officers work in very demanding environments, ranging from the smog at the land checkpoints to the harshness of the ports at the sea checkpoints. In addition, all immigration checkpoints are located at the boundaries, posing commuting challenges for officers. At the land checkpoints, festive seasons and school breaks are peak periods, given the surge of inbound and outbound travellers. Such work provides unique challenges in comparison to other law enforcement duties. This paper assesses the current stressors faced by officers of a border security agency through ground observations and a perceived stress survey, and offers recommendations for combating the stressors border security officers face. The findings from the field observations and surveys indicate organisational and operational stressors that are unique to border security, and interventions for managing these stressors are recommended.
Understanding these stressors would better inform border security agencies on the interventions needed to enhance the resilience of border security officers.
Keywords: border security, Singapore, stress, operations
Procedia PDF Downloads 327
1822 Safety Testing of Commercial Lithium-Ion Batteries and Failure Modes Analysis
Authors: Romeo Malik, Yashraj Tripathy, Anup Barai
Abstract:
Transportation safety is a major concern for large-scale vehicle electrification. The failure cost of lithium-ion batteries is substantial, driven by high liability and replacement costs. With continuous advancement on the materials front towards higher energy density, upgrading safety characteristics is becoming more crucial for broader integration of lithium-ion batteries. Understanding and impeding thermal runaway is the prime issue for battery safety researchers. In this study, a comprehensive comparison of thermal runaway mechanisms for two different cathode types, Li(Ni₀.₃Co₀.₃Mn₀.₃)O₂ and Li(Ni₀.₈Co₀.₁₅Al₀.₀₅)O₂, is explored. Both chemistries were studied at different states of charge, and the various abuse scenarios that lead to thermal runaway were investigated, including mechanical, electrical, and thermal abuse. Batteries undergo thermal runaway due to a series of combustible reactions taking place internally, observed as multiple jets of flame reaching temperatures of the order of 1000 °C. Physicochemical characterisation was performed on cells prior to and after abuse. The battery's state of charge and chemistry have a significant effect on the flame temperature profiles, otherwise quantified as heat released. The majority of failures during transportation are due to external short circuits. Finally, a mitigation approach is proposed to impede the thermal runaway hazard: transporting lithium-ion batteries at low states of charge. Batteries at low states of charge demonstrated minimal heat release under thermal runaway, reducing the risk of secondary hazards such as thermal runaway propagation.
Keywords: battery reliability, lithium-ion batteries, thermal runaway characterisation, tomography
Procedia PDF Downloads 124
1821 Fault Diagnosis and Fault-Tolerant Control of Bilinear-Systems: Application to Heating, Ventilation, and Air Conditioning Systems in Multi-Zone Buildings
Authors: Abderrhamane Jarou, Dominique Sauter, Christophe Aubrun
Abstract:
Over the past decade, the growing demand for energy efficiency in buildings has attracted the attention of the control community. Failures in HVAC (heating, ventilation, and air conditioning) systems can have a significant impact on the desired and expected energy performance of buildings and on user comfort as well. Fault-Tolerant Control (FTC) is a recent technology area that studies the adaptation of control algorithms to faulty operating conditions of a system, and its application to HVAC systems has gained attention in the last two decades. The objective is to keep the variations in system performance due to faults within an acceptable range with respect to the desired nominal behavior. This paper considers the so-called active approach, which is based on a fault detection and identification scheme combined with a control reconfiguration algorithm that determines a new set of control parameters so that the reconfigured performance is "as close as possible", in some sense, to the nominal performance. Thermal models of buildings and their HVAC systems are described by non-linear (usually bilinear) equations. Most of the work carried out so far in FDI (fault detection and isolation) or FTC considers a linearized model of the studied system; however, such a model is only valid in a reduced range of variation. This study presents a new fault diagnosis (FD) algorithm based on a bilinear observer for the detection and accurate estimation of the magnitude of HVAC system failures. The main contribution of the proposed FD algorithm is that, instead of using specific linearized models, it inherits the structure of the actual bilinear model of the building's thermal dynamics. As an immediate consequence, the algorithm is applicable to a wide range of unpredictable operating conditions, i.e., weather dynamics, outdoor air temperature, and zone occupancy profile.
A bilinear fault detection observer is proposed for a bilinear system with unknown inputs. The residual vector in the observer design is decoupled from the unknown inputs and, under certain conditions, is made sensitive to all faults. Sufficient conditions are given for the existence of the observer, and results are given for the explicit computation of the observer design matrices. Dedicated observer schemes (DOS) are considered for sensor FDI, while unknown-input bilinear observers are considered for actuator or system-component FDI. The proposed FTC strategy works as follows. At the first level, FDI algorithms are implemented, which also make it possible to estimate the magnitude of the fault. Once a fault is detected, the fault estimate is used to feed the second level and reconfigure the control law so that the expected performance is recovered. This paper is organized as follows. A general structure for fault-tolerant control of buildings is first presented, and the building model under consideration is introduced. Then, the observer-based design for fault diagnosis of bilinear systems is studied. The FTC approach is developed in Section IV. Finally, a simulation example is given in Section V to illustrate the proposed method.
Keywords: bilinear systems, fault diagnosis, fault-tolerant control, multi-zone buildings
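The residual logic described in this abstract can be illustrated with a small numerical sketch. This is not the authors' observer design: the matrices, the observer gain, and the fault size are invented toy values, and the model is a generic discrete-time bilinear system x_{k+1} = A x_k + u_k N x_k + B u_k with output y = C x.

```python
import numpy as np

# Toy sketch: a Luenberger-style observer for a discrete-time bilinear system.
# The residual r = y - C x_hat decays in the fault-free case and settles at a
# clearly non-zero level after an additive actuator fault is injected.
A = np.array([[0.9, 0.05], [0.0, 0.95]])   # linear dynamics (invented values)
N = np.array([[0.01, 0.0], [0.0, 0.02]])   # bilinear coupling (invented values)
B = np.array([[0.1], [0.05]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5], [0.1]])               # observer gain, chosen by hand

def run(fault_at=None, steps=100):
    x = np.array([[1.0], [0.5]])           # true state
    xh = np.zeros((2, 1))                  # observer starts with no knowledge
    residuals = []
    for k in range(steps):
        u = 1.0 + 0.2 * np.sin(0.1 * k)   # known input (e.g. a valve command)
        f = 0.3 if (fault_at is not None and k >= fault_at) else 0.0
        r = float((C @ x - C @ xh)[0, 0])  # output residual
        residuals.append(abs(r))
        x = A @ x + u * (N @ x) + B * (u + f)       # plant, actuator fault f
        xh = A @ xh + u * (N @ xh) + B * u + L * r  # observer copies the model
    return residuals

healthy = run()
faulty = run(fault_at=50)
```

In the fault-free run the residual decays as the observer converges; after the fault at step 50 it settles at a persistent non-zero level, which is the signature an FDI layer would threshold and then use to estimate the fault magnitude.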
Procedia PDF Downloads 174
1820 Airborne SAR Data Analysis for Impact of Doppler Centroid on Image Quality and Registration Accuracy
Authors: Chhabi Nigam, S. Ramakrishnan
Abstract:
This paper presents an analysis of airborne Synthetic Aperture Radar (SAR) data to study the impact of the Doppler centroid on image quality and geocoding accuracy, from the perspective of the Stripmap mode of data acquisition. Although in Stripmap mode the radar beam points at 90 degrees broadside (side-looking), a shift in the Doppler centroid is inevitable due to platform motion. Inaccurate estimation of the Doppler centroid leads to poor image quality and image misregistration. The effect of the Doppler centroid is analyzed in this paper using multiple data sets collected from an airborne platform. Occurrences of ghost (ambiguous) targets and their power levels have been analyzed, which informs the appropriate choice of PRF. The effect of aircraft attitudes (roll, pitch and yaw) on the Doppler centroid is also analyzed with the collected data sets. The various stages of the Range Doppler Algorithm (RDA) used for image formation in Stripmap mode, namely range compression, Doppler centroid estimation, azimuth compression, and range cell migration correction, are analyzed to find the performance limits and the dependence of the imaging geometry on the final image. The ability of Doppler centroid estimation to enhance the imaging accuracy for registration is also illustrated. The paper also discusses the processing of low-squint SAR data, and the challenges and performance limits imposed by the imaging geometry and the platform dynamics on the final image quality metrics. Finally, the effect on various terrain types, including land, water and bright scatterers, is also presented.
Keywords: ambiguous target, Doppler centroid, image registration, airborne SAR
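As a companion sketch for the Doppler-centroid estimation stage mentioned above, the snippet below implements the classical pulse-pair (lag-one autocorrelation) estimator on a simulated azimuth signal. The PRF, centroid value, and noise level are invented; this is a generic illustration of the estimator family, not the processing chain used in the paper.

```python
import numpy as np

prf = 1000.0          # pulse repetition frequency, Hz (assumed)
f_dc_true = 73.0      # simulated Doppler centroid, Hz
n = 4096
t = np.arange(n) / prf
rng = np.random.default_rng(0)

# Azimuth signal: complex exponential at the centroid frequency plus noise.
sig = (np.exp(2j * np.pi * f_dc_true * t)
       + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))

# Phase of the averaged lag-1 autocorrelation gives the centroid; note the
# estimate is unambiguous only within +/- PRF/2, which is one reason the
# choice of PRF matters for ambiguity suppression.
acc = np.sum(np.conj(sig[:-1]) * sig[1:])
f_dc_est = np.angle(acc) * prf / (2 * np.pi)
```

An inaccurate `f_dc_est` fed into azimuth compression defocuses the image and shifts targets in azimuth, which is the misregistration mechanism the abstract analyzes.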
Procedia PDF Downloads 218
1819 Seven Brothers and Sisters of Severely Disabled Children Speak up about Their Everyday Challenges and Needs: A Multiple Case Study
Authors: Myriam Castonguay, Florence Vinit
Abstract:
This study aims to gain a better understanding of the lived experience of seven children growing up in a family where another child is severely disabled, informed by family systems theory and the socio-ecological model of development. In-depth semi-structured interviews were conducted with seven children who described their everyday life since their brother's or sister's diagnosis. Thematic analysis revealed four themes: struggling with loneliness inside the family, supporting the disabled child through its journey, accommodating a changing routine, and keeping a "bubble" for oneself. Brothers and sisters depict a family life characterized by much loneliness, with severe disabilities requiring ongoing care and prolonged hospitalizations. In the midst of adversity, siblings describe themselves as highly committed to supporting the disabled child and to preserving family cohesion, even if that means being exposed to emotionally challenging situations and adjusting their daily routine frequently. Children recount that keeping up with schoolwork and leisure activities of their own is central to their well-being. Having a space where one can reconnect with one's ordinary life as a kid is also deemed very important. This study reminds us that more needs to be done to counteract the loneliness experienced by siblings through the family experience of disability. Family members and clinicians need to be extra vigilant to ensure siblings' needs don't go unnoticed or dismissed, as it may be difficult for this population of children to voice their own experience and needs. Family, school and other actors in the community may help brothers and sisters pursue their personal dreams, goals and projects, and continue experiencing well-being despite adverse life circumstances.
Keywords: siblings' lived experience of disability, siblings' needs at various levels of the ecosystem, family adjustment to the disability experience, supporting family wellness through the disability experience
Procedia PDF Downloads 117
1818 Comparison of Safety and Efficacy between Thulium Fibre Laser and Holmium YAG Laser for Retrograde Intrarenal Surgery
Authors: Sujeet Poudyal
Abstract:
Introduction: After the Holmium:yttrium-aluminum-garnet (Ho:YAG) laser revolutionized the management of urolithiasis, the introduction of the Thulium fibre laser (TFL) has challenged the Ho:YAG laser due to its multiple commendable properties. Nevertheless, there are only a few studies comparing TFL and holmium laser in Retrograde Intrarenal Surgery (RIRS). Therefore, this study was carried out to compare the efficacy and safety of the thulium fibre laser and the holmium laser in RIRS. Methods: This prospective comparative study, which included all patients undergoing laser lithotripsy (RIRS) for proximal ureteric calculus and nephrolithiasis from March 2022 to March 2023, consisted of 63 patients in the Ho:YAG laser group and 65 patients in the TFL group. Stone-free rate, operative time, laser utilization time, energy used, and complications were analysed between the two groups. Results: Mean stone size was comparable between the TFL (14.23±4.1 mm) and Ho:YAG (13.88±3.28 mm) groups, p=0.48. Similarly, mean stone density in the TFL group (1269±262 HU) was comparable to Ho:YAG (1189±212 HU), p=0.48. There was a significant difference in lasing time between TFL (12.69±7.41 mins) and Ho:YAG (20.44±14 mins), p=0.012. The TFL group had an operative time of 43.47±16.8 mins, which was shorter than that of the Ho:YAG group (58±26.3 mins), p=0.005. Both groups had comparable total energy used (11.4±6.2 vs 12±8, respectively, p=0.758). The stone-free rate was 87% for TFL, whereas it was 79.5% for Ho:YAG, p=0.25. Two cases of sepsis and one ureteric stricture were encountered with TFL, whereas three cases of sepsis apart from one ureteric stricture occurred in the Ho:YAG group, p=0.62. Conclusion: The Thulium Fibre Laser has efficacy similar to that of the Holmium:YAG laser in terms of safety and stone-free rate.
However, owing to its better stone ablation rate, TFL may become a game changer in the management of urolithiasis in the coming days.
Keywords: retrograde intrarenal surgery, thulium fibre laser, holmium:yttrium-aluminum-garnet (Ho:YAG) laser, nephrolithiasis
Procedia PDF Downloads 81
1817 An Evaluation of Education Provision for Students with Autism Spectrum Disorder in Ireland: The Role of the Special Needs Assistant
Authors: Claire P. Griffin
Abstract:
The education provision for students with special educational needs, including students with Autism Spectrum Disorder (ASD), has undergone significant national and international change in recent years. In particular, an increase in resource-based provision has occurred across educational settings in an effort to support inclusive practices. This paper seeks to explore the role of the Special Needs Assistant (SNA) in supporting children with ASD in Irish schools. This research stems from the second national evaluation of 'Education Provision for Students with Autism Spectrum Disorder in Ireland' (NCSE, 2016). The research was commissioned by the National Council for Special Education (NCSE) in Ireland and conducted by a team of researchers from Mary Immaculate College, Limerick, from February to July 2014. The study involved a multiple case study research strategy across 24 educational sites, selected through a stratified sampling process. Research strategies included semi-structured interviews, classroom observations, documentary review and child conversations. Data analysis was conducted electronically using NVivo software, with the use of an additional quantitative recording mechanism based on scaled weighting criteria for the collected data. Based on this information, key findings from the NCSE national evaluation are presented and critically reviewed, with particular reference to the role of the SNA in supporting pupils with ASD. Examples of positive practice inherent in the SNA role are outlined and contrasted with discrete areas for development. Based on these findings, recommendations for the evolving role of the SNA are presented, with the aim of informing both policy and best practice within the field.
Keywords: autism spectrum disorder, inclusive education, paraprofessional, special needs assistant
Procedia PDF Downloads 281
1816 The Richtmyer-Meshkov Instability Impacted by the Interface with Different Components Distribution
Authors: Sheng-Bo Zhang, Huan-Hao Zhang, Zhi-Hua Chen, Chun Zheng
Abstract:
In this paper, the Richtmyer-Meshkov instability caused by the interaction between a shock wave and a helium circular light gas cylinder with different component distributions has been studied numerically, using a high-resolution Roe scheme based on the two-dimensional unsteady Euler equations. The numerical results further discuss the deformation process of the gas cylinder and the wave structure of the flow field, and quantitatively analyze the characteristic dimensions (length, height, and central axial width) of the gas cylinder and the volume compression ratio of the cylinder over time. In addition, the flow mechanism of shock-driven interface gas mixing is analyzed from multiple perspectives by combining it with the flow-field pressure, velocity, circulation, and gas mixing rate. The effects of different initial component distribution conditions on interface instability are then investigated. The results show that as the diffuse interface transitions to a sharp interface, the reflection coefficient gradually increases on both sides of the interface. When the incident shock wave interacts with the cylinder, the transmission of the shock wave transitions from conventional to unconventional transmission. At the same time, the reflected shock wave is gradually strengthened and the transmitted shock wave is gradually weakened, which leads to an increase in the Richtmyer-Meshkov instability. Moreover, the Atwood number on both sides of the interface also increases as the diffuse interface transitions to a sharp interface, which leads to an increase in the Rayleigh-Taylor and Kelvin-Helmholtz instabilities. The increased instability therefore leads to an increase in the circulation, resulting in an increase in the growth rate of the gas mixing rate.
Keywords: shock wave, He light cylinder, Richtmyer-Meshkov instability, Gaussian distribution
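The role of the Atwood number A = (rho2 - rho1)/(rho2 + rho1) in the argument above can be made concrete with a small sketch. The densities are standard ambient values for air and helium; the smoothed interface profile (a tanh blend of width sigma) is an illustrative stand-in for the paper's Gaussian component distribution, not its actual initial condition.

```python
import numpy as np

# Densities (kg/m^3) of ambient air and helium; sigma is an illustrative
# interface "diffusion width", standing in for the Gaussian distribution.
rho_air, rho_he = 1.204, 0.1664

def atwood(rho_heavy, rho_light):
    # Atwood number A = (rho2 - rho1) / (rho2 + rho1)
    return (rho_heavy - rho_light) / (rho_heavy + rho_light)

def density_profile(r, radius=1.0, sigma=0.3):
    # Helium fraction decays smoothly across the interface (tanh blend);
    # sigma -> 0 recovers a sharp discontinuity at r = radius.
    x_he = 0.5 * (1.0 - np.tanh((r - radius) / (2.0 * sigma)))
    return x_he * rho_he + (1.0 - x_he) * rho_air

r = np.linspace(0.0, 3.0, 601)
diffuse = density_profile(r, sigma=0.3)
sharp = density_profile(r, sigma=0.001)

# Effective Atwood number between the far field (air) and the cylinder centre:
a_diffuse = atwood(diffuse[-1], diffuse[0])
a_sharp = atwood(sharp[-1], sharp[0])
```

The sharp interface yields the full air/helium Atwood number (about 0.76), while the diffuse profile reduces it, consistent with the trend the abstract describes: sharpening the interface raises the Atwood number and hence the instability growth.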
Procedia PDF Downloads 81
1815 Optimization of the Performance of a Solar Concentrator System with a Cavity Receiver Using the Genetic Algorithm
Authors: Foozhan Gharehkhani
Abstract:
The use of solar energy as a sustainable renewable energy source has gained significant attention in recent years. Concentrating solar power (CSP) systems, which direct solar radiation onto a receiver, are an effective means of producing high-temperature thermal energy. Cavity receivers, known for their high thermal efficiency and reduced heat losses, are particularly noteworthy in these systems, and optimizing their design can enhance energy efficiency and reduce costs. This study leverages the genetic algorithm, a powerful optimization tool inspired by natural evolution, to optimize the performance of a solar concentrator system with a cavity receiver, aiming for a more efficient and cost-effective design. The system analyzed consists of a solar concentrator and a cavity receiver. The concentrator was designed as a parabolic dish, and the receiver had a cylindrical cavity with a helical structure. The primary parameters were defined as the cavity diameter (D), the receiver height (h), and the helical pipe diameter (d). Initially, the system was optimized to achieve the maximum heat flux, and the optimal parameter values along with the maximum heat flux were obtained. Subsequently, a multi-objective optimization approach was applied, aiming to maximize the heat flux while minimizing the system construction cost. The optimization process was conducted using the genetic algorithm implemented in MATLAB. The results revealed that the optimal dimensions of the receiver, namely the cavity diameter (D), receiver height (h), and helical pipe diameter (d), were 0.142 m, 0.1385 m, and 0.011 m, respectively. This optimization resulted in improvements of 3% in the cavity diameter, 8% in the height, and 5% in the helical pipe diameter. Furthermore, the results indicated that the primary focus of this research was the accurate thermal modeling of the solar collection system.
The simulations and the obtained results demonstrated that the optimization applied to this system maximized its thermal performance and raised its energy efficiency to a desirable level. Moreover, this study successfully modeled and controlled effective temperature variations at different angles of solar irradiation, highlighting significant improvements in system efficiency. The significance of this research lies in leveraging solar energy as one of the prominent renewable energy sources, playing a key role in replacing fossil fuels. Considering the environmental and economic challenges associated with the excessive use of fossil resources, such as increased greenhouse gas emissions, environmental degradation, and the depletion of fossil energy reserves, developing technologies related to renewable energy has become a vital priority. Among these, solar concentrating systems, capable of achieving high temperatures, are particularly important for industrial and heating applications. This research aims to optimize the performance of such systems through precise design and simulation, making a significant contribution to the advancement of advanced technologies and the efficient utilization of solar energy in Iran, thereby addressing the country's future energy needs effectively.
Keywords: cavity receiver, genetic algorithm, optimization, solar concentrator system performance
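To make the optimization step concrete, here is a minimal genetic-algorithm sketch over the three parameters named above (D, h, d). Everything here is hedged: the objective is an invented smooth surrogate whose maximum is deliberately placed at the reported optimum (0.142, 0.1385, 0.011 m), whereas the paper's actual objective comes from its thermal model and is optimized in MATLAB.

```python
import random

# Parameter bounds (metres) are assumed for illustration.
BOUNDS = [(0.05, 0.30), (0.05, 0.30), (0.005, 0.020)]  # D, h, d
TARGET = (0.142, 0.1385, 0.011)                        # reported optimum

def heat_flux(p):
    # Invented surrogate objective: peaks at TARGET, normalized per dimension.
    return -sum(((x - t) / (hi - lo)) ** 2
                for x, t, (lo, hi) in zip(p, TARGET, BOUNDS))

def ga(pop_size=60, generations=120, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=heat_flux, reverse=True)
        elite = pop[: pop_size // 4]                    # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # blend crossover
            for i, (lo, hi) in enumerate(BOUNDS):        # Gaussian mutation
                child[i] = min(hi, max(lo,
                               child[i] + rng.gauss(0, 0.02 * (hi - lo))))
            children.append(child)
        pop = elite + children                           # elitism
    return max(pop, key=heat_flux)

best = ga()
```

The multi-objective variant in the paper would replace the scalar `heat_flux` with a flux/cost trade-off (e.g. a weighted sum or Pareto ranking) while keeping the same selection/crossover/mutation loop.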
Procedia PDF Downloads 11
1814 Hybrid Data-Driven Drilling Rate of Penetration Optimization Scheme Guided by Geological Formation and Historical Data
Authors: Ammar Alali, Mahmoud Abughaban, William Contreras Otalvora
Abstract:
Optimizing the drilling process for cost and efficiency requires the optimization of the rate of penetration (ROP). ROP is the measurement of the speed at which the wellbore is created, in units of feet per hour, and it is the primary indicator of drilling efficiency. Maximization of the ROP can indicate fast and cost-efficient drilling operations; however, high ROPs may induce unintended events, which may lead to nonproductive time (NPT) and higher net costs. The proposed ROP optimization solution is a hybrid, data-driven system that aims to improve the drilling process, maximize the ROP, and minimize NPT. The system consists of two phases: (1) utilizing existing geological and drilling data to train the model prior, and (2) real-time adjustment of the controllable dynamic drilling parameters [weight on bit (WOB), rotary speed (RPM), and pump flow rate (GPM)] that directly influence the ROP. During the first phase, geological and historical drilling data are aggregated. Then, the top-rated wells, in terms of high-ROP instances, are distinguished. Those wells are filtered based on NPT incidents, and a cross-plot is generated for the controllable dynamic drilling parameters per ROP value. Subsequently, the parameter values (WOB, GPM, RPM) are calculated as a conditioned mean based on physical distance, following the Inverse Distance Weighting (IDW) interpolation methodology. The first phase concludes by producing a model of drilling best practices from the offset wells, prioritizing the optimum ROP value. This phase is performed before drilling commences. Starting with the model produced in phase one, the second phase runs an automated drill-off test, delivering adjustments in real time. Those adjustments are made by directing the driller to deviate two of the controllable parameters (WOB and RPM) by a small percentage (0-5%), following the Constrained Random Search (CRS) methodology.
These minor incremental variations reveal new drilling conditions not explored before through the offset wells. The data is then consolidated into a heat map as a function of ROP. A more optimal ROP performance is identified through the heat map and amended in the model. The validation process involved the selection of a planned well in an onshore oil field with hundreds of offset wells. The first-phase model was built by utilizing the data points from the top-performing historical wells (20 wells). The model allows drillers to enhance decision-making by leveraging existing data and blending it with live data in real time. An empirical relationship between the controllable dynamic parameters and ROP was derived using Artificial Neural Networks (ANN). The adjustments resulted in improved ROP efficiency by over 20%, translating to at least 10% savings in drilling costs. The novelty of the proposed system lies in its ability to integrate historical data, calibrate based on geological formations, and run real-time global optimization through CRS. Those factors position the system to work for any newly drilled well in a developing field environment.
Keywords: drilling optimization, geological formations, machine learning, rate of penetration
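The IDW step in phase one can be sketched as follows. The offset-well coordinates and parameter values below are invented for illustration; the real system conditions the mean on many more wells and on additional filters (NPT incidents, formation).

```python
import math

# Offset wells: (x_km, y_km, recorded drilling parameters).
# Units: WOB in klbf, RPM in rev/min, GPM in gal/min (values invented).
offset_wells = [
    (0.0, 0.0, {"WOB": 25.0, "RPM": 120.0, "GPM": 650.0}),
    (1.5, 0.5, {"WOB": 28.0, "RPM": 110.0, "GPM": 700.0}),
    (0.5, 2.0, {"WOB": 22.0, "RPM": 130.0, "GPM": 620.0}),
]

def idw(x, y, key, power=2.0):
    """Inverse-distance-weighted mean of parameter `key` at location (x, y)."""
    num = den = 0.0
    for wx, wy, params in offset_wells:
        d = math.hypot(x - wx, y - wy)
        if d == 0.0:
            return params[key]          # exactly on an offset well
        w = 1.0 / d ** power            # closer wells weigh more
        num += w * params[key]
        den += w
    return num / den

# Recommended starting WOB for a planned well at (0.75, 0.75) km:
wob = idw(0.75, 0.75, "WOB")
```

The weighted mean always lies between the minimum and maximum of the offset-well values, so the phase-one recommendation never extrapolates beyond what the historical wells actually used.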
Procedia PDF Downloads 133
1813 TAXAPRO, A Streamlined Pipeline to Analyze Shotgun Metagenomes
Authors: Sofia Sehli, Zainab El Ouafi, Casey Eddington, Soumaya Jbara, Kasambula Arthur Shem, Islam El Jaddaoui, Ayorinde Afolayan, Olaitan I. Awe, Allissa Dillman, Hassan Ghazal
Abstract:
The ability to promptly sequence whole genomes at a relatively low cost has revolutionized the way we study the microbiome. Microbiologists are no longer limited to studying what can be grown in a laboratory; instead, they have the opportunity to rapidly identify the makeup of microbial communities in a wide variety of environments. Analyzing whole-genome sequencing (WGS) data is a complex process that involves multiple moving parts and can be rather unintuitive for scientists who don't typically work with this type of data. Thus, to help lower the barrier for less computationally inclined individuals, TAXAPRO was developed at the first Omics Codeathon, held virtually by the African Society for Bioinformatics and Computational Biology (ASBCB) in June 2021. TAXAPRO is an advanced metagenomics pipeline that accurately assembles organelle genomes from whole-genome sequencing data. TAXAPRO seamlessly combines WGS analysis tools to create a pipeline that automatically processes raw WGS data and presents organism abundance information in both tabular and graphical formats. TAXAPRO was evaluated using COVID-19 patient gut microbiome data. Analysis performed by TAXAPRO demonstrated a high abundance of Clostridia and Bacteroidia and a low abundance of Proteobacteria relative to other taxa in the gut microbiome of patients hospitalized with COVID-19, consistent with the original findings derived using a different analysis methodology. This provides crucial evidence that the TAXAPRO workflow delivers reliable organism abundance information overnight, without the hassle of performing the analysis manually.
Keywords: metagenomics, shotgun metagenomic sequence analysis, COVID-19, pipeline, bioinformatics
Procedia PDF Downloads 225
1812 Ontology-Based Fault Detection and Diagnosis System: Querying and Reasoning Examples
Authors: Marko Batic, Nikola Tomasevic, Sanja Vranes
Abstract:
One of the strongholds of the ubiquitous efforts related to energy conservation and energy efficiency improvement is the retrofit of high energy consumers in buildings. In general, HVAC systems represent the highest energy consumers in buildings. However, they usually suffer from mal-operation and/or malfunction, causing even higher energy consumption than necessary. Various Fault Detection and Diagnosis (FDD) systems can be successfully employed for this purpose, especially when it comes to application at the single device/unit level. In the case of more complex systems, where multiple devices operate in the context of the same building, significant energy efficiency improvements can only be achieved through the application of comprehensive FDD systems relying on additional higher-level knowledge, such as the devices' geographical location, served area, and their intra- and inter-system dependencies. This paper presents a comprehensive FDD system that relies on the utilization of a common knowledge repository that stores all critical information. The discussed system is deployed as a test-bed platform at the Fiumicino and Malpensa airports in Italy. This paper presents the advantages of implementing the knowledge base through the utilization of an ontology, and illustrates the improved functionality of such a system through examples of typical queries and reasoning that enable the derivation of high-level energy conservation measures (ECM). Therefore, key SPARQL queries and SWRL rules, based on the two instantiated airport ontologies, are elaborated. The detection of high-level irregularities in the operation of the airport heating/cooling plants is discussed, and an estimation of energy savings is reported.
Keywords: airport ontology, knowledge management, ontology modeling, reasoning
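A flavour of the rule-based reasoning described above can be given with a toy in-memory triple store. The real system uses SPARQL queries and SWRL rules over the instantiated airport ontologies; the entities, predicates, and temperatures below are invented for illustration only.

```python
# Toy knowledge base of (subject, predicate, object) triples: which air
# handling unit serves which zone, zone setpoints, and measured supply
# temperatures. All names and values are invented.
triples = {
    ("ahu1", "serves", "zoneA"), ("ahu2", "serves", "zoneB"),
    ("zoneA", "setpoint", 21.0), ("zoneB", "setpoint", 21.0),
    ("ahu1", "supplyTemp", 21.5), ("ahu2", "supplyTemp", 27.0),
}

def objects(subject, predicate):
    # Analogue of a simple SPARQL triple pattern: ?o where (s, p, ?o).
    return [o for s, p, o in triples if s == subject and p == predicate]

def faulty_units(tolerance=2.0):
    # SWRL-style rule: serves(u, z) & setpoint(z, sp) & supplyTemp(u, t)
    #                  & |t - sp| > tolerance  ->  fault(u)
    faults = []
    for s, p, o in triples:
        if p != "serves":
            continue
        unit, zone = s, o
        for sp in objects(zone, "setpoint"):
            for t in objects(unit, "supplyTemp"):
                if abs(t - sp) > tolerance:
                    faults.append(unit)
    return sorted(set(faults))
```

The same join-then-filter pattern is what a SPARQL query with a `FILTER` clause expresses declaratively; the higher-level knowledge (which unit serves which zone) is exactly what makes the cross-device diagnosis possible.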
Procedia PDF Downloads 541
1811 A Methodology to Integrate Data in the Company Based on the Semantic Standard in the Context of Industry 4.0
Authors: Chang Qin, Daham Mustafa, Abderrahmane Khiat, Pierre Bienert, Paulo Zanini
Abstract:
Nowadays, companies face many challenges in the process of digital transformation, which can be a complex and costly undertaking. Digital transformation involves the collection and analysis of large amounts of data, which can create challenges around data management and governance. Furthermore, it is also challenging to integrate data from multiple systems and technologies. Despite these pains, companies are still pursuing digitalization because, by embracing advanced technologies, they can improve efficiency, quality, decision-making, and customer experience, while also creating different business models and revenue streams. This paper focuses on the issue that data is stored in data silos with different schemas and structures. Conventional approaches to addressing this issue involve utilizing data warehousing, data integration tools, data standardization, and business intelligence tools. However, these approaches primarily focus on the grammar and structure of the data and neglect the importance of semantic modeling and semantic standardization, which are essential for achieving data interoperability. In this work, the challenge of data silos in Industry 4.0 is addressed by developing a semantic modeling approach compliant with Asset Administration Shell (AAS) models as an efficient standard for communication in Industry 4.0. The paper highlights how this approach can facilitate the data mapping process and semantic lifting according to existing industry standards such as ECLASS and other industrial dictionaries. It also incorporates the Asset Administration Shell technology to model and map the company's data, and utilizes a knowledge graph for data storage and exploration.
Keywords: data interoperability in Industry 4.0, digital integration, industrial dictionary, semantic modeling
Procedia PDF Downloads 96
1810 The Preparation and Training of Expert Studio Reviewers
Authors: Diane M. Bender
Abstract:
In design education, professional education is delivered in a studio, where students learn and come to understand their discipline. This learning methodology culminates in a final review, where students present their work before instructors and invited reviewers, known as jurors. These jurors are recognized experts who add a wide diversity of opinions in their feedback to students. This feedback can be provided in multiple formats, mainly a verbal critique of the work. To better understand how these expert reviewers prepare for a studio review, a survey was distributed to reviewers at a multi-disciplinary design school within the United States. Five design disciplines are involved in this case study: architecture, graphic design, industrial design, interior design, and landscape architecture. Respondents (n=122) provided information about whether and how they received training on how to critique and participate in a final review. Common forms of training included mentorship, behavior modeled by other designers or past professors, workshops on critique run by the instructing faculty prior to the crit session, and experience as a practicing design professional. Respondents also indicated the extent to which the instructor provided course materials prior to the review in order to better prepare for student interaction. Finally, respondents indicated whether they had interaction with students prior to the final review, and in what format. Typical responses included participation in studio desk crits, serving as a midterm jury member, meetings with students, and email or social media correspondence. While the focus of this study is the studio review, the findings are equally applicable to other disciplines. Suggestions are provided on how to improve the preparation of guests in the learning process and how their interaction can positively influence student engagement.
Keywords: critique, design, education, evaluation, juror
Procedia PDF Downloads 83
1809 Flexible and Color Tunable Inorganic Light Emitting Diode Array for High Resolution Optogenetic Devices
Authors: Keundong Lee, Dongha Yoo, Youngbin Tchoe, Gyu-Chul Yi
Abstract:
A light emitting diode (LED) array is an ideal optical stimulation tool for optogenetics, which controls the inhibition and excitation of specific neurons with light-sensitive ion channels or pumps. Although a fiber-optic cable with an external light source, either a laser or an LED mechanically connected to the end of the cable, has widely been used for illumination of neural tissue, a new approach using micro LEDs (µLEDs) has recently been demonstrated. The LEDs can be placed directly either on the cortical surface or within the deep brain using a penetrating depth probe. Accordingly, this method would not need a permanent opening in the skull if the LEDs were integrated with a miniature electrical power source and wireless communication. In addition, generating multiple colors from a single µLED cell would enable the excitation and/or inhibition of neurons in localized regions. Here, we demonstrate flexible and color-tunable µLEDs for optogenetic device applications. The flexible and color-tunable LEDs were fabricated using multifaceted gallium nitride (GaN) nanorod arrays, with InxGa1−xN/GaN single quantum well structures (SQW) anisotropically formed on the nanorod tips and sidewalls. To obtain various electroluminescence (EL) colors, current injection paths were controlled through a continuous p-GaN layer depending on the applied bias voltage. The electric current was injected through regions of different thickness and composition, thus changing the color of the light that the LED emits from red to blue. We believe that flexible and color-tunable µLEDs will enable control of neuronal activity by emitting various colors from a single µLED cell.
Keywords: light emitting diode, optogenetics, graphene, flexible optoelectronics
Procedia PDF Downloads 211
1808 Contribution to the Understanding of the Hydrodynamic Behaviour of Aquifers of the Taoudéni Sedimentary Basin (South-eastern Part, Burkina Faso)
Authors: Kutangila Malundama Succes, Koita Mahamadou
Abstract:
In the context of climate change and demographic pressure, groundwater has emerged as an essential and strategic resource whose sustainability relies on good management. The accuracy and relevance of decisions made in managing these resources depend on the availability and quality of the scientific information they must rely on. It is, therefore, increasingly urgent to improve the state of knowledge on groundwater to ensure sustainable management. This study addresses the particular case of the aquifers of the transboundary sedimentary basin of Taoudéni in its Burkinabe part. Indeed, Burkina Faso (and the Sahel region in general), marked by low rainfall, has experienced episodes of severe drought, which have justified the use of groundwater as the primary source of water supply. This study aims to improve knowledge of the hydrogeology of this area in order to achieve sustainable management of transboundary groundwater resources. The methodological approach first describes the lithological units in terms of the extension and succession of the different layers. Secondly, the hydrodynamic behavior of these units was studied through the analysis of spatio-temporal variations in piezometry. The data consist of 692 static level measurement points and 8 observation wells distributed across the area, capturing five of the identified geological formations. Monthly piezometric level records are available for each observation well and cover the period from 1989 to 2020. The temporal analysis of piezometry, carried out in comparison with rainfall records, revealed a general upward trend in piezometric levels throughout the basin. The reaction of the groundwater generally occurs with a delay of 1 to 2 months relative to the rainfall of the rainy season. Indeed, the peaks of the piezometric level generally occur between September and October, in reaction to the rainfall peaks between July and August. Low groundwater levels are observed between May and July.
This relatively slow reaction of the aquifer is observed in all wells, and the influence of the geological structure and hydrodynamic properties of the layers was deduced from it. The spatial analysis reveals that piezometric contours vary between 166 and 633 m, with a trend indicating flow that generally runs from southwest to northeast, with the recharge areas located towards the southwest and northwest. There is a quasi-concordance between the hydrogeological basins and the overlying hydrological basins, as well as a bimodal flow, with one component following the topography and another significant, deeper component controlled by the regional SW-NE gradient. This latter component may carry flows directed from the high reliefs towards the springs of Nasso. In the spring area (Kou basin), the maximum average storage variation, calculated by the Water Table Fluctuation (WTF) method, varies between 35 and 48.70 mm per year for 2012-2014.
Keywords: hydrodynamic behaviour, Taoudéni basin, piezometry, water table fluctuation
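For reference, the WTF estimate mentioned above follows delta_S = Sy * delta_h, the specific yield times the water-table rise. The sketch below uses an invented monthly head series and an assumed specific yield of 0.10, not the paper's data.

```python
# Water Table Fluctuation (WTF) method, minimal sketch:
# storage change (mm) = specific yield * water-table rise (m) * 1000.
def storage_change_mm(head_series_m, specific_yield):
    rise_m = max(head_series_m) - min(head_series_m)   # seasonal fluctuation
    return specific_yield * rise_m * 1000.0            # metres -> millimetres

# Monthly piezometric heads over one hydrological year (metres, invented):
heads = [301.10, 301.08, 301.05, 301.02, 301.00, 301.04,
         301.15, 301.30, 301.42, 301.38, 301.28, 301.18]

delta_s = storage_change_mm(heads, specific_yield=0.10)
```

With these invented numbers the method gives a storage change of about 42 mm/year, the same order of magnitude as the 35-48.70 mm/year range reported above; the real estimate also depends on how the recession is extrapolated to isolate recharge-driven rises.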
Procedia PDF Downloads 671807 Effectiveness of Dry Needling with and without Ultrasound Guidance in Patients with Knee Osteoarthritis and Patellofemoral Pain Syndrome: A Systematic Review and Meta-Analysis
Authors: Johnson C. Y. Pang, Amy S. N. Fu, Ryan K. L. Lee, Allan C. L. Fu
Abstract:
Dry needling (DN) is a puncturing method that involves the insertion of needles into tender spots of the human body without the injection of any substance. DN has long been used to treat patients with knee pain caused by knee osteoarthritis (KOA) and patellofemoral pain syndrome (PFPS), but the evidence for its effectiveness remains inconsistent. This study aimed to conduct a systematic review and meta-analysis to assess the intervention methods and effects of DN, with and without ultrasound guidance, for treating pain and dysfunction in people with KOA and PFPS. Design: This systematic review adhered to the PRISMA reporting guidelines. The registration number of the study protocol published in the PROSPERO database was CRD42021221419. Six electronic databases were searched in November 2020: CINAHL Complete (1976-2020), Cochrane Library (1996-2020), EMBASE (1947-2020), Medline (1946-2020), PubMed (1966-2020), and PsycINFO (1806-2020). Randomized controlled trials (RCTs) and controlled clinical trials examining the effects of DN on knee pain, including KOA and PFPS, were included. The key search concepts were DN, acupuncture, ultrasound guidance, KOA, and PFPS. Risk of bias assessment and qualitative analysis were conducted by two independent reviewers using the PEDro score. Results: Fourteen articles met the inclusion criteria, eight of which were high-quality papers according to the PEDro score. The DN techniques varied in the direction and depth of insertion, number of needles, duration of stay, needle manipulation, and number of treatment sessions. Meta-analysis was conducted on eight articles. The DN group showed positive short-term effects (from immediately after DN to less than 3 months) on pain reduction for both KOA and PFPS, with an overall standardized mean difference (SMD) of -1.549 (95% CI = -2.511 to -0.588) and high heterogeneity (p = 0.002, I² = 96.3%).
In subgroup analysis, DN demonstrated a significant effect on pain reduction in PFPS (p < 0.001) that was not found in subjects with KOA (p = 0.302). At 3-month post-intervention, DN also induced significant pain reduction in both subjects with KOA and PFPS, with an overall SMD of -0.916 (95% CI = -1.699 to -0.133) and high heterogeneity (p = 0.022, I² = 95.63%). In addition, DN induced significant short-term improvement in function when the analysis was conducted on both KOA and PFPS groups, with an overall SMD of 6.069 (95% CI = 3.544 to 8.595) and high heterogeneity (p < 0.001, I² = 98.56%). In subgroup analysis, only PFPS showed a positive result (SMD = 6.089, p < 0.001), while the short-term effect in KOA was not statistically significant (p = 0.198). Similarly, at 3-month post-intervention, significant improvement in function after DN was found when the analysis was conducted on both groups, with an overall SMD of 5.840 (95% CI = 2.428 to 9.252) and high heterogeneity (p < 0.001, I² = 99.1%), but only PFPS showed significant improvement in subgroup analysis (p = 0.002, I² = 99.1%). Conclusions: The application of DN in KOA and PFPS patients varies among practitioners. DN is effective in reducing pain and dysfunction in the short term and at 3-month post-intervention in individuals with PFPS. To the best of our knowledge, no study has reported the effects of DN with ultrasound guidance on KOA and PFPS. The longer-term effects of DN on KOA and PFPS await further study. Keywords: dry needling, knee osteoarthritis, patellofemoral pain syndrome, ultrasound guidance
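Pooled SMDs and I² heterogeneity statistics of the kind reported above are typically obtained with a random-effects model. The sketch below implements the standard DerSimonian-Laird estimator; the per-study effect sizes and variances are hypothetical placeholders, not the review's data.

```python
import math

# DerSimonian-Laird random-effects meta-analysis: pooled effect, 95% CI,
# and the I^2 heterogeneity statistic derived from Cochran's Q.
def dersimonian_laird(effects, variances):
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical per-study SMDs and variances (placeholders, not the review's data).
smd, ci, i2 = dersimonian_laird([-1.8, -0.4, -2.2], [0.05, 0.04, 0.06])
print(f"SMD={smd:.3f}, 95% CI=({ci[0]:.3f}, {ci[1]:.3f}), I2={i2:.1f}%")
```

With widely dispersed study effects, I² is large, which is the pattern flagged as "high heterogeneity" in the results above.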
Procedia PDF Downloads 1371806 Empirical Decomposition of Time Series of Power Consumption
Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats
Abstract:
Load monitoring is the process of monitoring energy consumption with a view to energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is a load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances from an analysis of the whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, then event detection and feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features, which are required for the accurate identification of household devices. In this research work, we aim to develop a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used to tune general appliance models in the identification and classification steps, using unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on power demand, and then on detecting the times at which each selected appliance changes state. To fit the capabilities of existing smart meters, we work with low-frequency data sampled at 1/60 Hz (one sample per minute). The data are simulated with the Load Profile Generator (LPG) software, which has not previously been considered for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of the specific features used for general appliance modeling.
In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-frequency data, in contrast to the many supervised techniques used for such cases. We extract the power interval within which the operation of the selected appliance falls, along with a time vector delimiting the state transitions of the appliance. Appliance signatures are then formed from the extracted power, geometrical, and statistical features, and these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and real data from the Reference Energy Disaggregation Dataset (REDD). We compute confusion-matrix-based performance metrics: accuracy, precision, recall, and error rate. The performance of our methodology is then compared with detection techniques previously used in the literature that are based on statistical variations and abrupt changes (variance sliding window and cumulative sum). Keywords: general appliance model, non-intrusive load monitoring, event detection, unsupervised techniques
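The DTW matching mentioned above can be sketched as the classic dynamic program below, where a small distance indicates similar appliance signatures even when the activations are shifted in time. The appliance profiles are hypothetical 1/60 Hz power readings, not LPG or REDD data.

```python
# Classic O(n*m) Dynamic Time Warping between two 1-D power profiles;
# small distances mean similar appliance signatures despite time shifts.
def dtw_distance(a, b):
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# Hypothetical per-minute power readings (W): a fridge-like cycle and a
# time-shifted copy of it, versus a flat (appliance-off) profile.
fridge = [0, 0, 120, 125, 122, 0, 0]
shifted = [0, 120, 125, 122, 0, 0, 0]
print(dtw_distance(fridge, shifted))  # 0.0: the warp absorbs the time shift
print(dtw_distance(fridge, [0] * 7))  # 367.0: clearly a different pattern
```

This insensitivity to shifts is what makes DTW attractive for unsupervised matching of low-frequency signatures, where the same appliance cycle rarely starts at the same sample index twice.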
Procedia PDF Downloads 831805 Development of New Localized Surface Plasmon Resonance Interfaces Based on ITO Au NPs/ Polymer for Nickel Detection
Authors: F. Z. Tighilt, N. Belhaneche-Bensemra, S. Belhousse, S. Sam, K. Lasmi, N. Gabouze
Abstract:
Recently, gold nanoparticles (Au NPs) have become an active multidisciplinary research topic. First, Au thin films fabricated from alkylthiol-functionalized Au NPs were found to have vapor-sensitive conductivities; they were hence widely investigated as electrical chemiresistors for sensing different vapor analytes and even organic molecules in aqueous solutions. Second, Au thin films were demonstrated to exhibit localized surface plasmon resonances (LSPR), so that highly ordered 2D Au superlattices showed strong collective LSPR bands due to the near-field coupling of adjacent nanoparticles and were employed to detect biomolecular binding. In particular, when the alkylthiol ligands were replaced by thiol-terminated polymers, the resulting polymer-modified Au NPs could be readily assembled into 2D nanostructures on solid substrates. Monolayers of polystyrene-coated Au NPs showed the typical dipolar near-field interparticle plasmon coupling of LSPR. Such polymer-modified Au nanoparticle films have the advantage that the polymer thickness can be feasibly controlled by changing the polymer molecular weight. In this article, the effect of tin-doped indium oxide (ITO) coatings on the plasmonic properties of ITO interfaces modified with gold nanostructures (Au NSs) is investigated. The interest in developing ITO overlayers is manifold. The presence of a conducting ITO overlayer creates an LSPR-active interface, which can simultaneously serve as a working electrode in an electrochemical setup. The surface of ITO/Au NPs contains hydroxyl groups that can be used to link functional groups to the interface. Here, covalently linked nickel/Au NSs/ITO hybrid LSPR platforms will be presented. Keywords: conducting polymer, metal nanoparticles (NPs), LSPR, poly (3-(pyrrolyl)-carboxylic acid), polypyrrole
Procedia PDF Downloads 268