Search results for: robust switching vector
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2771


161 Electrical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one method of load monitoring used for disaggregation purposes. NILM is a technique for identifying individual appliances based on analysis of the whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of the appliance features required for accurate identification of household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used to tune general appliance models for the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on its power demand, and then detecting the times at which each selected appliance changes state. In order to fit the practical capabilities of existing smart meters, we work on low-frequency data sampled at 1/60 Hz. The data is simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of people inside the house in order to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of specific features used for general appliance modeling. 
In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-frequency data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting the state transitions of the appliance. Appliance signatures are then formed from the extracted power, geometrical and statistical features. Afterwards, those signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For that, we compute confusion-matrix-based performance metrics, considering accuracy, precision, recall and error rate. The performance of our methodology is then compared with detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
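As a rough illustration of the DTW step used for appliance matching above, the following minimal sketch computes a DTW distance between two low-frequency power profiles. The function name and the plain absolute-difference local cost are illustrative assumptions, not the authors' implementation:

```python
def dtw_distance(a, b):
    """Classic dynamic-programming DTW distance between two power sequences."""
    n, m = len(a), len(b)
    inf = float("inf")
    # D[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])     # local mismatch cost
            D[i][j] = cost + min(D[i - 1][j],       # insertion
                                 D[i][j - 1],       # deletion
                                 D[i - 1][j - 1])   # match
    return D[n][m]
```

A measured appliance cycle could then be assigned to the general appliance model with the smallest DTW distance, which tolerates the time stretching typical of appliance state sequences.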

Keywords: electrical disaggregation, DTW, general appliance modeling, event detection

Procedia PDF Downloads 49
160 Simulation of Hydraulic Fracturing Fluid Cleanup for Partially Degraded Fracturing Fluids in Unconventional Gas Reservoirs

Authors: Regina A. Tayong, Reza Barati

Abstract:

A stable, fast and robust three-phase, 2D IMPES simulator has been developed for assessing the influence of breaker concentration on the yield stress of the filter cake and on broken-gel viscosity; of varying polymer concentration/yield stress along the fracture face; and of fracture conductivity, fracture length, capillary-pressure changes and formation damage on fracturing-fluid cleanup in tight gas reservoirs. The model has been validated against field data reported in the literature for the same reservoir. A 2D, two-phase (gas/water) fracture propagation model is used to model the invasion zone and create the initial conditions for the cleanup model by distributing 200 bbls of water around the fracture. A 2D, three-phase IMPES simulator incorporating a yield-power-law rheology has been developed in MATLAB to characterize fluid flow through a hydraulically fractured grid. The variation in polymer concentration along the fracture is computed from a material balance equation relating the initial polymer concentration to the total volume of injected fluid and the fracture volume. All governing equations and the methods employed have been reported in sufficient detail to permit easy replication of the results. Increasing capillary pressure in the formation simulated in this study resulted in a 10.4% decrease in cumulative production after 100 days of fluid recovery. Increasing the breaker concentration from 5 to 15 gal/Mgal, through its effect on the yield stress and fluid viscosity of a 200 lb/Mgal guar fluid, resulted in a 10.83% increase in cumulative gas production. For tight gas formations (k = 0.05 md), fluid recovery increases with increasing shut-in time, fracture conductivity and fracture length, irrespective of the yield stress of the fracturing fluid. Mechanically induced formation damage combined with hydraulic damage tends to be the most significant. 
Several correlations have been developed relating the pressure distribution and polymer concentration to distance along the fracture face, and the average polymer concentration to injection time. The gradient of the yield stress distribution along the fracture face becomes steeper with increasing polymer concentration. The rate at which the yield stress (τ_o) increases is found to be proportional to the square of the volume of fluid lost to the formation. Finally, an improvement on previous results was achieved by simulating the yield stress variation along the fracture face rather than assuming constant values, because fluid loss to the formation and the polymer concentration distribution along the fracture face decrease with distance from the injection well. The novelty of this three-phase flow model lies in its ability to (i) simulate yield stress variation with fluid-loss volume along the fracture face for different initial guar concentrations, (ii) simulate the effect of increasing breaker activity on yield stress and broken-gel viscosity, and quantify the effect of (i) and (ii) on cumulative gas production within reasonable computational time.
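The yield-power-law (Herschel-Bulkley) rheology mentioned above relates shear stress to shear rate as τ = τ_y + K·γ̇ⁿ, so the apparent viscosity is τ/γ̇. A minimal sketch follows; the function signature and any numeric values are illustrative assumptions, not the paper's fitted parameters:

```python
def apparent_viscosity(shear_rate, tau_y, K, n):
    """Apparent viscosity mu = tau / shear_rate for a yield-power-law fluid:
    tau = tau_y + K * shear_rate**n
    (tau_y: yield stress, K: consistency index, n: flow-behavior index)."""
    if shear_rate <= 0:
        raise ValueError("shear rate must be positive")
    return tau_y / shear_rate + K * shear_rate ** (n - 1)

# A breaker degrades the gel, lowering tau_y and K and hence the apparent
# viscosity at every shear rate, which is what eases fluid cleanup.
```

For n = 1 and τ_y = 0 this degenerates to a Newtonian fluid with viscosity K, a quick sanity check on the formula.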

Keywords: formation damage, hydraulic fracturing, polymer cleanup, multiphase flow numerical simulation

Procedia PDF Downloads 104
159 Artificial Neural Network and Satellite Derived Chlorophyll Indices for Estimation of Wheat Chlorophyll Content under Rainfed Condition

Authors: Muhammad Naveed Tahir, Wang Yingkuan, Huang Wenjiang, Raheel Osman

Abstract:

Numerous models are used in prediction and decision-making, but most of them are linear, and linear models reach their limitations when the data are non-linear, making accurate estimation difficult. Artificial Neural Networks (ANNs) have found extensive acceptance for modeling the complex, non-linear real world, since they have more general and flexible functional forms than traditional statistical methods. The link between information technology and agriculture will become firmer in the near future. Monitoring crop biophysical properties non-destructively can provide a rapid and accurate understanding of a crop's response to various environmental influences. Crop chlorophyll content is an important indicator of crop health and therefore of crop yield. In recent years, remote sensing has been accepted as a robust tool for site-specific management, detecting crop parameters at both local and large scales. The present research combined an ANN model with satellite-derived chlorophyll indices from LANDSAT 8 imagery for real-time estimation of wheat chlorophyll. Cloud-free LANDSAT 8 scenes were acquired (Feb-March 2016-17) at the same time as a ground-truthing campaign in which chlorophyll was estimated using a SPAD-502 meter. Vegetation indices were derived from the LANDSAT 8 imagery using ERDAS Imagine (v. 2014) software, including the Normalized Difference Vegetation Index (NDVI), Green Normalized Difference Vegetation Index (GNDVI), Chlorophyll Absorption Ratio Index (CARI), Modified Chlorophyll Absorption Ratio Index (MCARI) and Transformed Chlorophyll Absorption Ratio Index (TCARI). For ANN modeling, MATLAB and SPSS (ANN) tools were used; the Multilayer Perceptron (MLP) in MATLAB provided very satisfactory results. 
Of the data, 61.7% was used for training the MLP, 28.3% for validation, and the remaining 10% for evaluating the ANN model results. For error evaluation, the sum-of-squares error and relative error were used; the model summary showed a sum-of-squares error of 10.786 and an average overall relative error of 0.099. MCARI and NDVI were revealed to be the most sensitive indices for assessing wheat chlorophyll content, with the highest coefficients of determination (R² = 0.93 and 0.90, respectively). The results suggest that the retrieval of crop chlorophyll content from high-spatial-resolution satellite imagery using an ANN model provides an accurate, reliable assessment of crop health status at a large scale, which can help in managing crop nutrition requirements in real time.
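Two of the band-ratio indices listed above have simple closed forms; a short sketch using hypothetical surface reflectances (the NIR/red/green band assignments follow the usual convention and are our assumption, not taken from the paper):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def gndvi(nir, green):
    """Green NDVI: substitutes the green band for the red band."""
    return (nir - green) / (nir + green)

# Healthy vegetation reflects strongly in the near-infrared and absorbs
# red light, so NDVI approaches 1 for a dense green canopy.
```

Per-pixel index values like these would form the input vector to the MLP, with SPAD-502 readings as the regression target.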

Keywords: ANN, chlorophyll content, chlorophyll indices, satellite images, wheat

Procedia PDF Downloads 122
158 Comparative Chromatographic Profiling of Wild and Cultivated Macrocybe Gigantea (Massee) Pegler & Lodge

Authors: Gagan Brar, Munruchi Kaur

Abstract:

Macrocybe gigantea was collected from the wild, growing as pure white, fleshy, robust fruit bodies in caespitose clusters. Local women collecting these fruiting bodies for cooking first indicated their edibility, which was later confirmed through classical and molecular taxonomy. A culture of this potentially valuable wild edible taxon was raised with the aim of domesticating it. Various solid and liquid media were evaluated for vegetative growth; Malt Extract Agar was found to be the best solid medium and Glucose Peptone the best liquid medium. The effect of different temperatures and pH values on the vegetative growth of M. gigantea was also evaluated, and maximum growth was found at 30 °C and pH 5. For spawn preparation, various grains, viz. wheat, jowar, bajra and maize, were evaluated, and wheat grains boiled for 30 minutes gave the maximum mycelial growth; mother spawn was therefore prepared on wheat grains boiled for 30 minutes. For raising fruiting bodies, different locally available agro-wastes were tried, and paddy straw gave the best growth. Both wild and cultivated M. gigantea were compared through HPLC to evaluate their nutritional and nutraceutical values. For the evaluation of sugars in wild and cultivated M. gigantea, 15 sugars were taken for analysis. Of these, Melezitose, Trehalose, Glucose, Xylose and Mannitol were found in the wild collection of M. gigantea, while Melezitose, Trehalose, Xylose and Dulcitol were detected in the cultivated sample. Of 20 different amino acids, 18 were found in both wild and cultivated samples, the exceptions being Asparagine and Glutamine. 
Among the 37 tested fatty acids, only 6, namely Palmitic acid, Stearic acid, cis-9 Oleic acid, Linoleic acid, Gamma-Linolenic acid and Tricosanoic acid, were found in both wild and cultivated samples, although the concentrations of these fatty acids were higher in the cultivated sample. Of the vitamins tested, Vitamins C, D and E were present in both wild and cultivated samples. Both samples were also evaluated for phenols; eleven phenols were taken as standards in the HPLC analysis, and Gallic acid, Resorcinol, Ferulic acid and Pyrogallol were present in the wild mushroom sample, whereas Ferulic acid, Caffeic acid, Vanillic acid and Vanillin were present in the cultivated sample. The flavonoid analysis revealed the presence of Rutin, Naringin and Quercetin in wild M. gigantea, while Naringin, Catechol, Myricetin, Gossypin and Quercetin were found in the cultivated one. From this comparative chromatographic profiling of wild and cultivated M. gigantea, it is concluded that no nutrient loss occurred during cultivation, and that the percentage of secondary metabolites (i.e., phenols and flavonoids) was higher in the cultivated than in the wild M. gigantea. Thus, cultivated M. gigantea can be recommended for commercial use as a good food supplement.

Keywords: culture, edible, fruit bodies, wild

Procedia PDF Downloads 40
157 Machine Learning Approaches Based on Recency, Frequency, Monetary (RFM) and K-Means for Predicting Electrical Failures and Voltage Reliability in Smart Cities

Authors: Panaya Sudta, Wanchalerm Patanacharoenwong, Prachya Bumrungkun

Abstract:

With the evolution of smart grids, ensuring the reliability and efficiency of electrical systems in smart cities has become crucial. This paper proposes a distinct approach that combines advanced machine learning techniques to accurately predict electrical failures and address voltage reliability issues, aiming to improve the accuracy and efficiency of reliability evaluations in smart cities. The aim of this research is to develop a comprehensive predictive model that accurately predicts electrical failures and voltage reliability in smart cities. This model integrates RFM analysis, K-means clustering, and LSTM networks. The research utilizes RFM analysis, traditionally used in customer value assessment, to categorize and analyze electrical components based on their failure recency, frequency, and monetary impact. K-means clustering is employed to segment electrical components into distinct groups with similar characteristics and failure patterns. LSTM networks are used to capture the temporal dependencies and patterns in the data. This integration of RFM, K-means, and LSTM results in a robust predictive tool for electrical failures and voltage reliability. The proposed model has been tested and validated on diverse electrical utility datasets. The results show a significant improvement in prediction accuracy and reliability compared to traditional methods, achieving an accuracy of 92.78% and an F1-score of 0.83. This research contributes to the proactive maintenance and optimization of electrical infrastructures in smart cities, and enhances overall energy management and sustainability. The integration of advanced machine learning techniques in the predictive model demonstrates the potential for transforming the landscape of electrical system management within smart cities. 
RFM analysis, K-means clustering, and LSTM networks are applied to these datasets to analyze and predict electrical failures and voltage reliability. The research addresses the question of how accurately electrical failures and voltage reliability can be predicted in smart cities. It also investigates the effectiveness of integrating RFM analysis, K-means clustering, and LSTM networks in achieving this goal. The proposed approach presents a distinct, efficient, and effective solution for predicting and mitigating electrical failures and voltage issues in smart cities. It significantly improves prediction accuracy and reliability compared to traditional methods. This advancement contributes to the proactive maintenance and optimization of electrical infrastructures, overall energy management, and sustainability in smart cities.
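The RFM step described above can be sketched for a single electrical component's failure history; the function name, field choices and toy values are illustrative, not drawn from the paper's utility datasets:

```python
from datetime import date

def rfm_features(failure_dates, repair_costs, today):
    """Recency (days since last failure), Frequency (failure count), and
    Monetary (total repair cost) for one electrical component."""
    recency = (today - max(failure_dates)).days
    frequency = len(failure_dates)
    monetary = sum(repair_costs)
    return recency, frequency, monetary
```

One (R, F, M) vector per component would then be standardized and passed to K-means to form the failure-pattern clusters, with LSTMs modeling the temporal dynamics within each cluster.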

Keywords: electrical state prediction, smart grids, data-driven method, long short-term memory, RFM, k-means, machine learning

Procedia PDF Downloads 27
156 Fault Diagnosis and Fault-Tolerant Control of Bilinear-Systems: Application to Heating, Ventilation, and Air Conditioning Systems in Multi-Zone Buildings

Authors: Abderrhamane Jarou, Dominique Sauter, Christophe Aubrun

Abstract:

Over the past decade, the growing demand for energy efficiency in buildings has attracted the attention of the control community. Failures in HVAC (heating, ventilation and air conditioning) systems in buildings can have a significant impact on the desired and expected energy performance of buildings, as well as on user comfort. Fault-Tolerant Control (FTC) is a recent technology area that studies the adaptation of control algorithms to faulty operating conditions of a system, and its application to HVAC systems has gained attention in the last two decades. The objective is to keep the variations in system performance due to faults within an acceptable range with respect to the desired nominal behavior. This paper considers the so-called active approach, which is based on a fault detection and identification scheme combined with a control reconfiguration algorithm that determines a new set of control parameters so that the reconfigured performance is "as close as possible," in some sense, to the nominal performance. Thermal models of buildings and their HVAC systems are described by non-linear (usually bilinear) equations. Most of the work carried out so far in FDI (fault diagnosis and isolation) or FTC considers a linearized model of the studied system; however, such a model is only valid within a reduced range of variation. This study presents a new fault diagnosis (FD) algorithm based on a bilinear observer for the detection and accurate estimation of the magnitude of an HVAC system failure. The main contribution of the proposed FD algorithm is that, instead of using specific linearized models, the algorithm inherits the structure of the actual bilinear model of the building thermal dynamics. As an immediate consequence, the algorithm is applicable to a wide range of unpredictable operating conditions, e.g., weather dynamics, outdoor air temperature, and zone occupancy profile. 
A bilinear fault detection observer is proposed for a bilinear system with unknown inputs. The residual vector in the observer design is decoupled from the unknown inputs and, under certain conditions, is made sensitive to all faults. Sufficient conditions are given for the existence of the observer, and results are given for the explicit computation of the observer design matrices. Dedicated observer schemes (DOS) are considered for sensor FDI, while unknown-input bilinear observers are considered for actuator and system-component FDI. The proposed strategy for FTC works as follows: at the first level, FDI algorithms are implemented, which also make it possible to estimate the magnitude of the fault. Once a fault is detected, the fault estimate feeds the second level, which reconfigures the control law so that the expected performance is recovered. This paper is organized as follows. A general structure for fault-tolerant control of buildings is first presented, and the building model under consideration is introduced. Then, the observer-based design for fault diagnosis of bilinear systems is studied. The FTC approach is developed in Section IV. Finally, a simulation example is given in Section V to illustrate the proposed method.
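To illustrate the residual-based detection idea in the simplest possible setting, the scalar sketch below simulates a discrete-time bilinear plant alongside a Luenberger-style observer. The scalar form, the gains and the additive fault signal are our own illustrative assumptions, not the paper's multi-zone building model:

```python
def simulate_residuals(a, n, b, c, L, inputs, faults, x0=0.0):
    """Scalar bilinear plant   x+  = a*x  + n*x*u  + b*u + f
    and observer               xh+ = a*xh + n*xh*u + b*u + L*r,
    with output y = c*x and residual r = y - c*xh."""
    x, xh = x0, 0.0
    residuals = []
    for u, f in zip(inputs, faults):
        r = c * x - c * xh                          # residual: zero while fault-free
        residuals.append(r)
        x = a * x + n * x * u + b * u + f           # plant step (fault enters additively)
        xh = a * xh + n * xh * u + b * u + L * r    # observer step, driven by residual
    return residuals
```

With matched initial states the residual stays at zero until the fault is injected, after which it deviates and can be thresholded for detection; fault-magnitude estimation would then follow from the residual dynamics.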

Keywords: bilinear systems, fault diagnosis, fault-tolerant control, multi-zones building

Procedia PDF Downloads 149
155 Evaluation of the Role of Advocacy and the Quality of Care in Reducing Health Inequalities for People with Autism, Intellectual and Developmental Disabilities at Sheffield Teaching Hospitals

Authors: Jonathan Sahu, Jill Aylott

Abstract:

Individuals with Autism, Intellectual and Developmental Disabilities (AIDD) are one of the most vulnerable groups in society, hampered not only by their own limitations in understanding and interacting with the wider society, but also by societal limitations in perception and understanding. Communication to express their needs and wishes is fundamental to enabling such individuals to live and prosper in society. This research project was designed as an organisational case study, in a large secondary care hospital within the National Health Service (NHS), to assess the quality of care provided to people with AIDD and to review the role of advocacy in reducing health inequalities in these individuals. Methods: The research methodology adopted was that of an "insider researcher". Data collection included both quantitative and qualitative data, i.e., a mixed-methods approach. A semi-structured interview schedule was designed and used to obtain qualitative and quantitative primary data from a wide range of interdisciplinary frontline health care workers, to assess their understanding and awareness of the systems, processes and evidence-based practice needed to offer a quality service to people with AIDD. Secondary data were obtained from sources within the organisation, in keeping with "case study" as a primary method, and organisational performance data were then compared against national benchmarking standards. Further data sources were accessed to help evaluate the effectiveness of the different types of advocacy present in the organisation, gauged by measures of user and carer experience in the form of retrospective survey analysis, incidents and complaints. Results: Secondary data demonstrate near compliance of the organisation with the current national benchmarking standard (Monitor Compliance Framework). 
However, primary data demonstrate poor knowledge of the Mental Capacity Act 2005 and poor knowledge of the organisational systems, processes and evidence-based practice applied to people with AIDD. In addition, frontline health care workers had poor knowledge and awareness of advocacy and of advocacy schemes for this group. Conclusions: A significant amount of work needs to be undertaken to improve the quality of care delivered to individuals with AIDD. An operational strategy promoting the widespread dissemination of information may not be the best approach to delivering quality care, optimal patient experience and patient advocacy. In addition, a more robust set of standards, with appropriate metrics, needs to be developed to assess organisational performance in a way that will stand the test of professional and public scrutiny.

Keywords: advocacy, autism, health inequalities, intellectual developmental disabilities, quality of care

Procedia PDF Downloads 194
154 Border Security: Implementing the “Memory Effect” Theory in Irregular Migration

Authors: Iliuta Cumpanasu, Veronica Oana Cumpanasu

Abstract:

This paper studies the conjunction between the newly emerged theory of the "Memory Effect" in irregular migration and related criminality and the notion of securitization, and its impact on border management. It advances the field by identifying, for the first time, the patterns corresponding to the linkage of the two concepts, and by developing a theoretical explanation of the effects of non-military threats on border security. Over recent years, irregular migration has increased significantly worldwide. The U.N.'s refugee agency reports that the number of displaced people is at its highest ever, surpassing even the post-World War II numbers, when the world was struggling to come to terms with the most devastating event in history. This is also the present reality along the core studied coordinate, the Balkan Route of irregular migration, which starts from Asia and Africa, continues through Turkey, Greece, North Macedonia or Bulgaria, and Serbia, and ends in Romania, where thousands of migrants find themselves in an irregular situation concerning their entry to the European Union, with important consequences for related criminality. Data from the past six years were collected through semi-structured interviews with experts in the field of migration and through desk research within organisations involved in border security, with the aim of gathering genuine insights from the field. These data were continually checked against the existing literature and subsequently subjected to mixed methods of analysis, including a Vector Auto-Regression estimation model. 
Thereafter, the analysis of the data followed the processes and outcomes of Grounded Theory, and a new substantive theory emerged, explaining how the phenomena of irregular migration and cross-border criminality are the decisive impetus for implementing the concept of securitization in border management using the proposed pattern. The findings of the study capture an area that has not yet benefitted from a comprehensive approach in the scientific community, such as the seasonality, stationarity, dynamics, predictions, and pull and push factors of irregular migration, and also highlight how the recent pandemic interfered with border security. The research uses an inductive, revelatory theoretical approach that offers a new theory to explain the phenomenon, providing a practical contribution for the scientific community, research institutes and academia, as well as for organizational practitioners in the field, among them the UN, IOM, UNHCR, Frontex, Interpol, Europol, and national agencies specialized in border security. The scientific outcomes of this study were validated on June 30, 2021, when the author defended his dissertation for the European Joint Master's in Strategic Border Management, a prestigious two-year programme supported by the European Commission, the Frontex Agency and a consortium of six European universities; the work is currently one of the research objectives of his pending PhD research at the West University of Timisoara.

Keywords: migration, border, security, memory effect

Procedia PDF Downloads 57
153 Migrant Women English Instructors' Transformative Workplace Learning Experiences in Post-Secondary English Language Programs in Ontario, Canada

Authors: Justine Jun

Abstract:

This study aims to reveal migrant women English instructors' workplace learning experiences in Canadian post-secondary institutions in Ontario. Although many scholars have conducted research on internationally educated teachers and their professional and employment challenges, few studies have recorded migrant women English language instructors' professional learning and support experiences in post-secondary English language programs in Canada. This study employs a qualitative research paradigm, with Mezirow's Transformative Learning Theory as the essential lens through which the researcher explains, analyzes, and interprets the research data. It is a collaborative research project: the researcher and participants cooperatively create photographic or other artwork data responding to the research questions, with photovoice and arts-informed approaches as the main data collection methods. Research participants engage in the study as co-researchers and inquire into their own workplace learning experiences, actively utilizing their critical self-reflective and dialogic skills. Co-researchers individually select the forms of artwork they prefer to use to represent the transformative workplace learning experiences of Canadian workplace cultures that they underwent while working with colleagues and administrators. Once the co-researchers generate their cultural artifacts as research data, they collaboratively interpret their artworks with the researcher and other volunteer co-researchers. Co-researchers jointly investigate the themes emerging from the artworks, and interpret the meanings of their own and others' workplace learning experiences embedded in the artworks through interactive one-on-one or group interviews. 
The following are the research questions that the migrant women English instructor participants examine and answer: (1) What have they learned about their workplace culture, and how do they explain their learning experiences? (2) How transformative have their learning experiences been at work? (3) How have their colleagues and administrators influenced their transformative learning? (4) What kind of support have they received, what supports have been valuable to them, and what changes would they like to see? (5) What have their learning experiences transformed? (6) What has this arts-informed research process transformed? The study findings have implications for the English language instructor support currently practiced in post-secondary English language programs in Ontario, Canada, especially for migrant women English instructors. This research is a doctoral empirical study in progress. It addresses the urgent research problem that few studies have investigated migrant English instructors' professional learning and support issues in the workplace, specifically those of English instructors working with adult learners in Canada. While appropriate social and professional support for migrant English instructors is required throughout the country, the present workplace realities in Ontario's English language programs need to be heard soon. For that purpose, the conceptualization of this study is crucial: it makes the investigation of these under-represented instructors' under-researched social phenomena, workplace learning and support, viable and rigorous. This paper demonstrates the robust theorization of English instructors' workplace experiences using Mezirow's Transformative Learning Theory in the English language teacher education field.

Keywords: English teacher education, professional learning, transformative learning theory, workplace learning

Procedia PDF Downloads 108
152 Empirical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one method of load monitoring used for disaggregation purposes. NILM is a technique for identifying individual appliances based on analysis of the whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of the appliance features required for accurate identification of household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used to tune general appliance models for the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on its power demand, and then detecting the times at which each selected appliance changes state. In order to fit the practical capabilities of existing smart meters, we work on low-frequency data sampled at 1/60 Hz. The data is simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of people inside the house in order to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of specific features used for general appliance modeling. 
In addition, the identification process uses unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been applied to low-sampling-rate data, in contrast to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector delimiting its state transitions. Appliance signatures are then formed from the extracted power, geometrical, and statistical features, and these signatures are used to tune general model types for appliance identification with unsupervised algorithms. The method is evaluated on both data simulated with LPG and real measurements from the Reference Energy Disaggregation Dataset (REDD). Performance is assessed with confusion-matrix-based metrics: accuracy, precision, recall, and error rate. The performance of our methodology is then compared with detection techniques previously reported in the literature, such as those based on statistical variation and abrupt change (variance sliding window and cumulative sum, CUSUM).
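The DTW matching at the core of the identification step can be sketched as follows; this is a minimal illustration with a hypothetical function name and toy power sequences, not the authors' implementation. DTW aligns two sequences of power samples so that the same state change shifted in time still yields a small distance:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D power sequences."""
    n, m = len(a), len(b)
    # D[i, j] = cost of the best alignment of a[:i] with b[:j]
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # step choices: match, insertion, deletion
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

A detected event segment would then be assigned the appliance model whose signature minimizes this distance; unlike a plain Euclidean comparison, a one-sample time shift of the same on/off transition still matches well.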

Keywords: general appliance model, non-intrusive load monitoring, event detection, unsupervised techniques

Procedia PDF Downloads 51
151 Impact of Ethiopia's Productive Safety Net Program on Household Dietary Diversity and Child Nutrition in Rural Ethiopia

Authors: Tagel Gebrehiwot, Carolina Castilla

Abstract:

Food insecurity and child malnutrition are among the most critical issues in Ethiopia. Accordingly, different reform programs have been carried out to improve household food security. The Food Security Program (FSP), among others, was introduced to combat the persistent food insecurity problem in the country. The FSP includes a safety net component, the Productive Safety Net Program (PSNP), launched in 2005. The goal of the PSNP is to offer multi-annual transfers, such as food, cash, or a combination of both, to chronically food-insecure households in order to break the cycle of food aid. Food or cash transfers are the main elements of the PSNP. The case for cash transfers builds on Sen's analysis of 'entitlement to food', in which he argues that restoring access to food by improving demand is a more effective and sustainable response to food insecurity than food aid. Cash-based schemes offer greater choice in the use of the transfer and can allow a greater diversity of food choices. Dietary diversity has been shown to be positively associated with the key pillars of food security, and it is thus considered a measure of a household's capacity to access a variety of food groups. Studies of dietary diversity among rural Ethiopian households are rare, and there is still a dearth of evidence on the impact of the PSNP on household dietary diversity. In this paper, we examine the impact of Ethiopia's PSNP on household dietary diversity and child nutrition using panel household surveys, employing several identification strategies. We exploit the exogenous increase in kebeles' PSNP budget to identify the effect of the change in the amount households received in transfers between 2012 and 2014 on the change in dietary diversity, using three different approaches: two-stage least squares, reduced-form IV, and generalized propensity score matching with a continuous treatment. 
The results indicate that the increase in PSNP transfers between 2012 and 2014 had no effect on household dietary diversity. Estimates for different household dietary indicators reveal that the effect of the change in the cash transfer received by the household is statistically and economically insignificant. This finding is robust to different identification strategies and to the inclusion of control variables that determine eligibility to become a PSNP beneficiary. To identify the effect of PSNP participation on children's height-for-age and stunting, we use a difference-in-differences approach. We use children between 2 and 5 years old in 2012 as a baseline, because any long-term growth faltering is largely established by that age. The treatment group comprises children ages 2 to 5 in 2014 in PSNP participant households. While changes in height-for-age take time, two years of additional transfers among children who were not yet born or were under the age of 2-3 in 2012 have the potential to make a considerable impact on reducing the prevalence of stunting. The results indicate that participation in the PSNP had no effect on child nutrition measured as height-for-age or the probability of being stunted, suggesting that the PSNP should be designed in a more nutrition-sensitive way.
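The difference-in-differences logic used here can be illustrated with a minimal sketch; the function name and the toy outcome values are hypothetical, and a real analysis would run a regression with controls rather than compare simple group means:

```python
import numpy as np

def did_estimate(y_treat_pre, y_treat_post, y_ctrl_pre, y_ctrl_post):
    """Difference-in-differences: (treated change) minus (control change).

    The control group's change over time proxies for what would have
    happened to the treated group in the absence of the program."""
    treated_change = np.mean(y_treat_post) - np.mean(y_treat_pre)
    control_change = np.mean(y_ctrl_post) - np.mean(y_ctrl_pre)
    return treated_change - control_change
```

With height-for-age z-scores as the outcome, an estimate near zero, as reported here, indicates no detectable program effect.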

Keywords: continuous treatment, dietary diversity, impact, nutrition security

Procedia PDF Downloads 305
150 Design of a Human-in-the-Loop Aircraft Taxiing Optimisation System Using Autonomous Tow Trucks

Authors: Stefano Zaninotto, Geoffrey Farrugia, Johan Debattista, Jason Gauci

Abstract:

The need to reduce fuel consumption and noise during taxi operations at airports, in a scenario of constantly increasing air traffic, has led the aerospace industry to move towards electric taxiing. This is one of the problems currently being addressed by the SESAR JU, and two main solutions are being proposed. In the first, electric motors are installed in the main (or nose) landing gear of the aircraft. In the second, manned or unmanned electric tow trucks tow aircraft from the gate to the runway (or vice-versa). The presence of the tow trucks increases vehicle traffic inside the airport. It is therefore important to design the system so that the workload of Air Traffic Control (ATC) is not increased and the system assists ATC in managing all ground operations. The aim of this work is to develop an electric taxiing system, based on autonomous tow trucks, which optimizes aircraft ground operations while keeping ATC in the loop. The system consists of two components: an optimization tool and a Graphical User Interface (GUI). The optimization tool is responsible for determining the optimal path for arriving and departing aircraft; allocating a tow truck to each taxiing aircraft; detecting conflicts between aircraft and/or tow trucks; and proposing solutions to resolve any conflicts. Two main optimization strategies are proposed in the literature. With centralized optimization, a central authority coordinates and makes the decisions for all ground movements in order to find a global optimum. With the second strategy, called decentralized optimization or a multi-agent system, decision authority is distributed among several agents, which could be the aircraft, the tow trucks, and taxiway or runway intersections. 
This approach finds local optima; however, it scales better with the number of ground movements and is more robust to external disturbances (such as taxi delays or unscheduled events). The strategy proposed in this work is a hybrid system combining aspects of these two approaches. The GUI will provide information on the movement and status of each aircraft and tow truck, and alert ATC about any impending conflicts. It will also enable ATC to give taxi clearances and to modify the routes proposed by the system. The complete system will be tested via computer simulation of various taxi scenarios at multiple airports, including Malta International Airport, a major international airport, and a fictitious airport. These tests will involve actual Air Traffic Controllers in order to evaluate the GUI and assess the impact of the system on ATC workload and situation awareness. It is expected that the proposed system will increase the efficiency of taxi operations while reducing their environmental impact. Furthermore, it is envisaged that the system will facilitate various controller tasks and improve ATC situation awareness.
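The path-determination step of such an optimization tool can be illustrated with a standard shortest-path search; this is a minimal sketch on a hypothetical taxiway graph (node names and taxi times invented for illustration), not the authors' system:

```python
import heapq

def shortest_taxi_route(graph, start, goal):
    """Dijkstra's shortest path on a taxiway graph.

    graph: {node: [(neighbor, taxi_time), ...]}
    Returns (route, total_taxi_time)."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    visited = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            break
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    # walk predecessors back from the goal to reconstruct the route
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]
```

The route returned is the sequence of taxiway nodes with minimum total taxi time; tow-truck allocation and conflict detection would be layered on top of a search like this, whether by a central authority or by individual agents.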

Keywords: air traffic control, electric taxiing, autonomous tow trucks, graphical user interface, ground operations, multi-agent, route optimization

Procedia PDF Downloads 104
149 Web-Based Instructional Program to Improve Professional Development: Recommendations and Standards for Radioactive Facilities in Brazil

Authors: Denise Levy, Gian M. A. A. Sordi

Abstract:

This web-based project focuses on continuing corporate education and on improving workers' skills in Brazilian radioactive facilities throughout the country. Information and Communication Technologies (ICTs) can contribute to improving communication across this very large country, where ensuring that high-quality professional information reaches as many people as possible is a major challenge. The main objective of the system is to provide Brazilian radioactive facilities with a complete web-based repository, in Portuguese, for research, consultation, and information, offering conditions for learning and for improving professional and personal skills. UNIPRORAD is a web-based system offering unified programs and interrelated information about radiological protection programs. The content covers best practices for radioactive facilities that meet both national standards and international recommendations published by different organizations over the past decades: the International Commission on Radiological Protection (ICRP), the International Atomic Energy Agency (IAEA), and the National Nuclear Energy Commission (CNEN). The website includes concepts, definitions, and theory on optimization and on ionizing radiation monitoring procedures. Moreover, the content discusses some national and international recommendations in greater depth, such as potential exposure, currently one of the most important research fields in radiological protection. Only two ICRP publications develop the issue in depth, and knowledge of failure probabilities is still lacking: there remain uncertainties about how to quantify probabilistically the occurrence of potential exposures and the probability of reaching a certain dose level. To respond to this challenge, this project discusses and introduces potential exposures in a more quantitative way than current national and international recommendations. 
Drawing together valid ICRP and IAEA recommendations and official reports, in addition to scientific papers published at major international congresses, the website discusses and suggests a number of effective actions towards safety that can be incorporated into work practice. The web platform was created according to the needs of its corporate public, with a robust but flexible design that can easily be adapted to future demands. ICTs provide a vast array of new communication capabilities and make it possible to spread information to as many people as possible at low cost and with high communication quality. This initiative provides opportunities for employees to increase their professional skills, stimulating development in a large country where it is an enormous challenge to deliver effective and up-to-date information to geographically distant facilities while minimizing costs and optimizing results.

Keywords: distance learning, information and communication technology, nuclear science, radioactive facilities

Procedia PDF Downloads 173
148 Electromagnetic Simulation Based on Drift and Diffusion Currents for Real-Time Systems

Authors: Alexander Norbach

Abstract:

This paper describes an advanced simulation environment for electronic systems (microcontrollers, operational amplifiers, and FPGAs). The simulation can be applied to any dynamic system that also exhibits diffusion and ionisation behaviour. With an additionally required observer structure, the system performs parallel real-time simulation based on a diffusion model together with a state-space representation of the remaining dynamics. The proposed model captures electrodynamic effects, including ionising effects and eddy-current distributions. With the proposed method, the spatial distribution of the electromagnetic fields can be calculated in real time; the spatial temperature distribution can be obtained as well. The system can also determine uncertainties, unknown initial states, and disturbances. This provides more precise estimates of the required system states and, additionally, estimates of the ionising disturbances that occur due to radiation effects. The results show that such a system can also be developed and adapted specifically for space systems, with real-time calculation of the radiation effects alone. Electronic systems can be damaged by impacts with charged-particle flux in space or in a radiation environment. In order to react to these processes, the presence of ionising radiation and the accumulated dose must be determined within a short time. All available sensors should be used to observe the spatial distributions; from the measured values and the known locations of the sensors, the entire distribution can be reconstructed retroactively or more accurately. Once the type of ionisation and its direct effect on the system are determined, preventive measures can be triggered, up to and including shutdown. 
The results show that higher-quality and faster simulations are possible, independent of the kind of system, including space systems and radiation environments. The paper additionally gives an overview of diffusion effects and their mechanisms. For the modelling and derivation of the equations, the extended current equation is used; the quantity K represents the proposed charge-density drift vector. The extended diffusion equation was derived; it exhibits a quantising character and obeys a law similar to the Klein-Gordon equation. Such partial differential equations (PDEs) are analytically solvable given an initial distribution (Cauchy problem) and boundary conditions (Dirichlet conditions). For a simpler structure, transfer functions for the B- and E-fields were calculated analytically. With the known discretised responses g₁(k·Ts) and g₂(k·Ts), the electric current or voltage can be calculated by convolution, where g₁ is the direct part and g₂ a recursive part. The analytical results are accurate enough for the calculation of fields with diffusion effects. Within the scope of this work, a model accounting for the electromagnetic diffusion effects of arbitrary current waveforms has been developed. The advantage of the proposed diffusion calculation is its real-time capability, which is not feasible with the FEM programs available today. Further research should apply these methods and investigate them thoroughly.
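The direct-plus-recursive evaluation with g₁ and g₂ can be sketched in a few lines. The exact recursion is not specified in the abstract, so the update below, a direct convolution of the input with g₁ plus feedback of past outputs weighted by g₂, is an assumed form for illustration only, with hypothetical coefficient values:

```python
import numpy as np

def simulate_response(u, g1, g2):
    """Assumed discrete-time evaluation at sample times k*Ts:
    direct convolution of input u with g1, plus a recursive part
    that feeds back previous outputs weighted by g2 (IIR-style)."""
    y = np.zeros(len(u))
    for k in range(len(u)):
        # direct part: sum_n g1[n] * u[k - n]
        acc = sum(g1[n] * u[k - n] for n in range(min(k + 1, len(g1))))
        # recursive part: sum_n g2[n] * y[k - 1 - n]
        acc += sum(g2[n] * y[k - 1 - n] for n in range(min(k, len(g2))))
        y[k] = acc
    return y
```

Because each output sample needs only a short sum over stored coefficients, rather than a full FEM solve, this kind of evaluation is what makes the real-time capability plausible.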

Keywords: advanced observer, electrodynamics, systems, diffusion, partial differential equations, solver

Procedia PDF Downloads 105
147 Improving Binding Selectivity in Molecularly Imprinted Polymers from Templates of Higher Biomolecular Weight: An Application in Cancer Targeting and Drug Delivery

Authors: Ben Otange, Wolfgang Parak, Florian Schulz, Michael Alexander Rubhausen

Abstract:

This research demonstrates the feasibility of extending the molecular imprinting technique to complex biomolecules. The technique is promising for diverse applications in areas such as drug delivery, disease diagnosis, catalysis, and impurity detection, as well as the treatment of various complications. While molecularly imprinted polymers (MIPs) are a robust route to materials with remarkable binding sites of high affinity for specific molecules of interest, extending their use to complex biomolecules has remained futile. This work reports the successful synthesis of MIPs from complex proteins: BSA, transferrin, and MUC1. We show that, despite the heterogeneous binding sites and higher conformational flexibility of the chosen proteins, relying on their respective epitopes and motifs rather than on the whole template produces MIPs that are highly sensitive and selective for specific molecular binding. Introduction: Proteins are vital in most biological processes, ranging from cell structure and structural integrity to complex functions such as transport and immunity in biological systems. Unlike other imprinting templates, proteins have heterogeneous binding sites in their complex long-chain structures, which makes their imprinting challenging. In addressing this challenge, we focus on targeted delivery, using molecular imprinting on the particle surface so that the particles recognize proteins overexpressed on the target cells. Our goal is thus to make nanoparticle surfaces that bind specifically to the target cells. Results and Discussion: Using epitopes of the BSA and MUC1 proteins, and motifs with conserved receptors of transferrin, as the respective templates, we observed a significant improvement in MIP sensitivity in the binding of complex protein templates. 
Fluorescence correlation spectroscopy (FCS) measurements of the protein corona size after incubation of the synthesized nanoparticles with proteins showed a high binding affinity of the MIPs for their respective complex proteins. In addition, quantitative analysis of the hard corona using SDS-PAGE showed that only the specific protein was strongly bound to the respective MIP when incubated with similar concentrations of a protein mixture. Conclusion: Our findings show that the merits of MIPs can be extended to complex molecules of higher biomolecular mass. As such, the unique advantages of the technique, including high sensitivity and selectivity, relative ease of synthesis, production of materials with high physical robustness, and high stability, can be extended to templates that were previously not considered suitable candidates despite their abundance and roles within the body.

Keywords: molecularly imprinted polymers, specific binding, drug delivery, high-biomolecular-mass templates

Procedia PDF Downloads 27
146 High Speed Motion Tracking with Magnetometer in Nonuniform Magnetic Field

Authors: Jeronimo Cox, Tomonari Furukawa

Abstract:

Magnetometers have become more popular in inertial measurement units (IMUs) for their ability to correct estimates using the Earth's magnetic field. Accelerometer- and gyroscope-only packages fail as dead-reckoning errors accumulate over time. Localization with magnetometer-inclusive IMUs has become popular in robotics as a way to track the odometry of slower-speed robots. With high-speed motions, the error accumulates over shorter periods of time, making such motions difficult to track with an IMU. Tracking a high-speed motion is especially difficult with limited observability: visual obstruction of the motion leaves motion-tracking cameras unusable, and when motions are too dynamic for estimation techniques reliant on the observability of the gravity vector, the use of magnetometers is further justified. However, as available magnetometer calibration methods rest on the assumption that the background magnetic field is uniform, estimation in nonuniform magnetic fields is problematic. Hard-iron distortion is a distortion of the magnetic field caused by other objects that produce magnetic fields; it is often observed as the offset from the origin of the center of the data points when a magnetometer is rotated, and its magnitude depends on proximity to the distortion sources. Soft-iron distortion relates more to the scaling of the axes of the magnetometer sensors; hard-iron distortion is the larger contributor to attitude estimation error with magnetometers. Indoor environments, or spaces inside ferrite-based structures such as building reinforcements or a vehicle, often cause distortions that vary with proximity. As positions correlate with areas of distortion, methods of magnetometer localization include producing a spatial map of the magnetic field and collecting distortion signatures to aid location tracking. 
The goal of this paper is to compare magnetometer methods that do not require a pre-built map of the magnetic field, since mapping the field in some spaces can be costly and inefficient. Dynamic measurement fusion is used to track the motion of a multi-link system. Three techniques are compared to assess their robustness and accuracy: conventional calibration from data collected while rotating at a static point, real-time estimation of the calibration parameters at each time step, and the use of two magnetometers to determine the local hard-iron distortion. With opposite-facing magnetometers, hard-iron distortion can be accounted for regardless of position, rather than assumed constant under positional change. The measured motion is a repeatable planar motion of a two-link system connected by revolute joints; the links are translated on a moving base to induce rotation of the links. The joints are equipped with absolute encoders, and the motion is recorded with cameras, to provide a ground truth against which each magnetometer method is compared. While the two-magnetometer method accounts for local hard-iron distortion, it fails where the direction of the magnetic field in space is inconsistent.
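The conventional static-point calibration mentioned above estimates the hard-iron offset as the center of the rotated data points; a standard way to do this is a linear least-squares sphere fit. This is a generic calibration sketch with a hypothetical function name and synthetic samples, not the paper's two-magnetometer method:

```python
import numpy as np

def hard_iron_offset(samples):
    """Least-squares sphere fit: the center of readings collected while
    rotating the magnetometer estimates the hard-iron offset.

    samples: (N, 3) array of magnetometer readings.
    Uses |p - c|^2 = r^2  =>  2 p.c + (r^2 - |c|^2) = |p|^2, linear in c."""
    p = np.asarray(samples, dtype=float)
    A = np.hstack([2.0 * p, np.ones((len(p), 1))])
    b = np.sum(p ** 2, axis=1)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w[:3]  # estimated offset; w[3] encodes the sphere radius term
```

Subtracting the estimated offset recenters the data on the origin; the limitation the paper targets is that this offset is only valid near where the calibration data were collected.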

Keywords: motion tracking, sensor fusion, magnetometer, state estimation

Procedia PDF Downloads 56
145 Categorical Metadata Encoding Schemes for Arteriovenous Fistula Blood Flow Sound Classification: Scaling Numerical Representations Leads to Improved Performance

Authors: George Zhou, Yunchan Chen, Candace Chien

Abstract:

Kidney replacement therapy is the current standard of care for end-stage renal disease. In-center or home hemodialysis remains an integral component of the therapeutic regimen. Arteriovenous fistulas (AVFs) make up the vascular circuit through which blood is filtered and returned. Naturally, AVF patency determines whether adequate clearance and filtration can be achieved, and it directly influences clinical outcomes. Our aim was to build a deep learning model for automated AVF stenosis screening based on the sound of blood flow through the AVF. A total of 311 patients with AVFs were enrolled in this study. Blood flow sounds were collected using a digital stethoscope at 6 different locations along each patient's AVF: artery, anastomosis, distal vein, middle vein, proximal vein, and venous arch. A total of 1866 sounds were collected. The blood flow sounds were labeled as "patent" (normal) or "stenotic" (abnormal), with labels validated against concurrent ultrasound. Our dataset included 1527 "patent" and 339 "stenotic" sounds. We show that blood flow sounds vary significantly along the AVF; for example, the blood flow sound is loudest at the anastomosis site and softest at the venous arch. Contextualizing the sound with location metadata significantly improves classification performance. How to encode and incorporate categorical metadata is an active area of research. Herein, we study ordinal (i.e., integer) encoding schemes, in which the numerical representation is concatenated to the flattened feature vector. We train a vision transformer (ViT) on spectrogram image representations of the sounds and demonstrate that using scalar multiples of our integer encodings improves classification performance. Models are evaluated using a 10-fold cross-validation procedure. The baseline ViT without any location metadata achieves an AuROC and AuPRC of 0.68 ± 0.05 and 0.28 ± 0.09, respectively. 
Using the encodings Artery: 0; Arch: 1; Proximal: 2; Middle: 3; Distal: 4; Anastomosis: 5, the ViT achieves an AuROC and AuPRC of 0.69 ± 0.06 and 0.30 ± 0.10, respectively. Using the encodings Artery: 0; Arch: 10; Proximal: 20; Middle: 30; Distal: 40; Anastomosis: 50, the ViT achieves an AuROC and AuPRC of 0.74 ± 0.06 and 0.38 ± 0.10, respectively. Using the encodings Artery: 0; Arch: 100; Proximal: 200; Middle: 300; Distal: 400; Anastomosis: 500, the ViT achieves an AuROC and AuPRC of 0.78 ± 0.06 and 0.43 ± 0.11, respectively. Interestingly, using increasing scalar multiples of the integer encoding scheme (i.e., encoding "venous arch" as 1, 10, or 100) results in progressively improved performance. In theory, the integer values should not matter, since we are optimizing the same loss function; the model can learn to increase or decrease the weights associated with the location encodings and converge on the same solution. However, in a setting of limited data and computational resources, increasing the importance of the metadata at initialization either leads to faster convergence or helps the model escape a local minimum.
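The encoding step itself is simply the concatenation of a scaled integer code to the flattened feature vector. The sketch below illustrates this with a hypothetical helper; the dictionary follows the paper's Artery-to-Anastomosis ordering, but the function and variable names are ours:

```python
import numpy as np

# Ordinal codes for the six AVF recording sites, per the paper's scheme.
LOCATION_CODE = {"artery": 0, "arch": 1, "proximal": 2,
                 "middle": 3, "distal": 4, "anastomosis": 5}

def append_location(features, location, scale=100):
    """Concatenate a scaled integer location code to a flattened feature vector.
    Larger `scale` values correspond to the paper's x10 / x100 encodings."""
    code = LOCATION_CODE[location] * scale
    return np.concatenate([np.ravel(features), [float(code)]])
```

The resulting vector is what would be fed to the classification head; only `scale` changes between the three encoding schemes compared in the abstract.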

Keywords: arteriovenous fistula, blood flow sounds, metadata encoding, deep learning

Procedia PDF Downloads 58
144 Engineering Topology of Photonic Systems for Sustainable Molecular Structure: Autopoiesis Systems

Authors: Moustafa Osman Mohammed

Abstract:

This paper introduces topological order in described social systems, starting from the original concept of autopoiesis due to biologists and scientists, including the modification of general systems based on socialized medicine. Topological order is important in describing physical systems, for exploiting optical systems, and for improving photonic devices. States with topological order have some interesting properties, such as topological degeneracy and fractional statistics, that reveal the entanglement origin of topological order. Topological ideas in photonics build on exciting developments in solid-state materials, which can be insulating in the bulk yet conduct electricity on their surface without dissipation or back-scattering, even in the presence of large impurities. A specific type of autopoiesis system is interrelated with the main categories among existing groups of ecological phenomena at the interaction of the social and medical sciences. The hypothesis, nevertheless, involves a nonlinear interaction with the natural environment, an 'interactional cycle' that exchanges photon energy with molecules without changes in topology. The engineering topology of the biosensor is based on the excitation boundary of surface electromagnetic waves in photonic band gap multilayer films. The device operates similarly to surface-plasmon biosensors, with a photonic band gap film replacing the metal film as the medium in which surface electromagnetic waves are excited. The photonic band gap film offers a sharper surface wave resonance, with the potential for greatly enhanced sensitivity. The properties of the photonic band gap material can therefore be engineered to operate the sensor at any wavelength and to support a surface wave resonance extending to 470 nm, a wavelength not generally accessible with surface-plasmon sensing. 
Lastly, the photonic band gap films are mechanically robust, offering new substrates for surface chemistry: they can be used to study molecular design and structure, and to create sensing-chip surfaces exposed to different concentrations of DNA sequences in solution, so that the surface-mode resonance can be observed and tracked under the influence of processes taking place in the spectroscopic environment. These processes have led to the development of several advanced analytical technologies that are automated, real-time, reliable, reproducible, and cost-effective, resulting in faster and more accurate refractive-index-based monitoring and detection of biomolecules, antibody-antigen reactions, and DNA or protein binding. Ultimately, molecular frictional properties are adjusted to one another so as to form the unique spatial structure and dynamics of biological molecules, providing an environment for investigating changes due to the pathogenic archival architecture of cell clusters.

Keywords: autopoiesis, photonic systems, quantum topology, molecular structure, biosensing

Procedia PDF Downloads 65
143 The Roles of Mandarin and Local Dialect in the Acquisition of L2 English Consonants Among Chinese Learners of English: Evidence From Suzhou Dialect Areas

Authors: Weijing Zhou, Yuting Lei, Francis Nolan

Abstract:

In the domain of second language acquisition, whenever pronunciation errors or acquisition difficulties are found, researchers habitually attribute them to negative transfer from the native language or local dialect. But to what extent do Mandarin and local dialects actually affect English phonological acquisition by Chinese learners of English as a foreign language (EFL)? Little empirical evidence has been gathered in China. To address this core issue, the present study conducted phonetic experiments to explore the roles of the local dialect and Mandarin in Chinese EFL learners' acquisition of L2 English consonants. Besides Mandarin, the sole national language of China, Suzhou Dialect was selected as the target local dialect because of its phonology, which is distinct from that of Mandarin. The experimental group consisted of 30 junior English majors at Yangzhou University who were born and raised in Suzhou, had acquired Suzhou Dialect in early childhood, and could communicate freely and fluently in Suzhou Dialect, Mandarin, and English. The consonantal target segments were all the consonants of English, Mandarin, and Suzhou Dialect in typical carrier words embedded in the carrier sentence 'Say ___ again'. The control group consisted of two Suzhou Dialect experts, two Mandarin radio broadcasters, and two British RP phoneticians, who served as the standard speakers of the three languages. The reading corpus was recorded and sampled in the phonetics laboratories at Yangzhou University, Soochow University, and Cambridge University, respectively, then transcribed, segmented, and analyzed acoustically in Praat, and finally analyzed statistically in Excel and SPSS. 
The main findings are as follows. First, in terms of correct acquisition rates (CARs) over all consonants, Mandarin ranked top (92.83%), English second (74.81%), and Suzhou Dialect last (70.35%); significant differences were found only between the CARs of Mandarin and English and between those of Mandarin and Suzhou Dialect, demonstrating that Mandarin was overwhelmingly more robust than English or Suzhou Dialect in the subjects' multilingual phonological ecology. Second, in terms of typical acoustic features, the average durations of all the consonants, and the voice onset times (VOTs) of plosives, fricatives, and affricates, were much longer in all three languages than those of the standard speakers; the intensities of English fricatives and affricates were higher than those of the RP speakers but lower than those of the Mandarin and Suzhou Dialect standard speakers; and the formants of English nasals and approximants differed significantly from those of Mandarin and Suzhou Dialect, illustrating inconsistent acoustic variation across the three languages. Third, in terms of typical pronunciation variations and errors, there were significant interlingual interactions among the three consonant systems, in which Mandarin consonants were absolutely dominant, pointing to strong transfer from L1 Mandarin to L2 English rather than from the earlier-acquired L1 local dialect to L2 English. This is largely because the subjects had been knowingly exposed to Mandarin from nursery school onward and were strictly required to speak Mandarin throughout formal education, from primary school to university.

Keywords: acquisition of L2 English consonants, role of Mandarin, role of local dialect, Chinese EFL learners from Suzhou Dialect areas

Procedia PDF Downloads 69
142 Identification of Hub Genes in the Development of Atherosclerosis

Authors: Jie Lin, Yiwen Pan, Li Zhang, Zhangyong Xia

Abstract:

Atherosclerosis is a chronic inflammatory disease characterized by the accumulation of lipids, immune cells, and extracellular matrix in the arterial walls. This pathological process can lead to the formation of plaques that obstruct blood flow and trigger various cardiovascular diseases such as heart attack and stroke. The underlying molecular mechanisms remain unclear, although many studies have revealed the dysfunction of endothelial cells, the recruitment and activation of monocytes and macrophages, and the production of pro-inflammatory cytokines and chemokines in atherosclerosis. This study aimed to identify hub genes involved in the progression of atherosclerosis and to analyze their biological function in silico, thereby enhancing our understanding of the disease’s molecular mechanisms. Through the analysis of microarray data, we examined gene expression in media and neo-intima from plaques, as well as distant macroscopically intact tissue, across a cohort of 32 hypertensive patients. Initially, 112 differentially expressed genes (DEGs) were identified. Subsequent immune infiltration analysis indicated a predominant presence of 27 immune cell types in the atherosclerosis group, particularly noting an increase in monocytes and macrophages. In the weighted gene co-expression network analysis (WGCNA), 10 modules with a minimum of 30 genes were defined as key modules, with the blue, dark olive green, and sky-blue modules being the most significant. These modules corresponded respectively to monocyte, activated B cell, and activated CD4 T cell gene patterns, revealing a strong morphological-genetic correlation. From these three gene patterns (module morphologies), a total of 2509 key genes (gene significance > 0.2, module membership > 0.8) were extracted. Six hub genes (CD36, DPP4, HMOX1, PLA2G7, PLN2, and ACADL) were then identified by intersecting the 2509 key genes and 102 DEGs with lipid-related genes from the GeneCards database.
The discriminative power of the six hub genes was estimated with a robust classifier that achieved an area under the curve (AUC) of 0.873 in the ROC plot, indicating good efficacy in differentiating between the disease and control groups. Moreover, PCA visualization demonstrated clear separation between the groups based on these six hub genes, suggesting their potential utility as classification features in predictive models. Protein-protein interaction (PPI) analysis highlighted DPP4 as the most interconnected gene. Within the constructed key gene-drug network, 462 drugs were predicted, with ursodeoxycholic acid (UDCA) identified as a potential therapeutic agent for modulating DPP4 expression. In summary, our study identified critical hub genes implicated in the progression of atherosclerosis through comprehensive bioinformatic analyses. These findings not only advance our understanding of the disease but also pave the way for applying similar analytical frameworks and predictive models to other diseases, thereby broadening the potential for clinical applications and therapeutic discoveries.
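An AUC like the one reported here is, by definition, the rank-based (Mann–Whitney) probability that a diseased sample scores above a control. A minimal sketch of that computation follows; the scores and labels are toy data, not the study's cohort, and the six-gene score itself is only assumed for illustration.

```python
def auc_from_scores(scores, labels):
    """Rank-based AUC: Mann-Whitney U / (n_pos * n_neg), ties rank-averaged."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):                       # assign average ranks to tie groups
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1                   # average 1-based rank in the group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    pos = [r for r, y in zip(ranks, labels) if y == 1]
    n_pos, n_neg = len(pos), len(labels) - len(pos)
    return (sum(pos) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Toy risk scores (e.g., a combined six-gene expression score); label 1 = disease.
scores = [0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1,   1,   0,    1,   0,    0,   1,   0]
auc = auc_from_scores(scores, labels)  # 12 of 16 disease/control pairs ranked correctly
```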

Keywords: atherosclerosis, hub genes, drug prediction, bioinformatics

Procedia PDF Downloads 36
141 Performance of the Abbott RealTime High Risk HPV Assay with SurePath Liquid Based Cytology Specimens from Women with Low Grade Cytological Abnormalities

Authors: Alexandra Sargent, Sarah Ferris, Ioannis Theofanous

Abstract:

The Abbott RealTime High Risk HPV test (RealTime HPV) is one of five assays clinically validated and approved by the English NHS Cervical Screening Programme (CSP) for HPV triage of low grade dyskaryosis and test-of-cure of treated Cervical Intraepithelial Neoplasia. The assay is a highly automated multiplex real-time PCR test for detecting 14 high risk (hr) HPV types, with simultaneous differentiation of HPV 16 and HPV 18 versus non-HPV 16/18 hrHPV. An endogenous internal control ensures sample cellularity and controls for extraction efficiency and PCR inhibition. The original cervical specimen collected in SurePath (SP) liquid-based cytology (LBC) medium (BD Diagnostics) and the SP post-gradient cell pellets (SPG) after cytological processing are both CE marked for testing with the RealTime HPV test. During the 2011 NHSCSP validation of new tests, only the original aliquot of SP LBC medium was investigated. The residual sample volume left after cytology slide preparation is low and may not always be sufficient for repeat HPV testing or for testing of other biomarkers that may be implemented in testing algorithms in the future. The SPG samples, however, have sufficient volumes to carry out additional testing and the necessary laboratory validation procedures. This study investigates the concordance of RealTime HPV results between matched pairs of original SP LBC medium and SP post-gradient cell pellets (SPG) from women with low grade cytological abnormalities. Matched pairs of SP and SPG samples from 750 women with borderline (N = 392) and mild (N = 351) cytology were available for this study. Both specimen types were processed and tested in parallel for the presence of hrHPV with RealTime HPV according to the manufacturer's instructions. HrHPV detection rates and concordance between test results from matched SP and SPG pairs were calculated.
A total of 743 matched pairs with valid test results on both sample types were available for analysis. An overall agreement of hrHPV test results of 97.5% (k: 0.95) was found for matched SP/SPG pairs; slightly lower concordance (96.9%; k: 0.94) was observed on the 392 pairs from women with borderline cytology compared to the 351 pairs from women with mild cytology (98.0%; k: 0.95). Partial typing results were highly concordant in matched SP/SPG pairs for HPV 16 (99.1%), HPV 18 (99.7%) and non-HPV16/18 hrHPV (97.0%), respectively. Nineteen matched pairs were found with discrepant results: 9 from women with borderline cytology and 4 from women with mild cytology were negative on SPG and positive on SP; 3 from women with borderline cytology and 3 from women with mild cytology were negative on SP and positive on SPG. Excellent correlation of hrHPV DNA test results was found between matched pairs of SP original fluid and post-gradient cell pellets from women with low grade cytological abnormalities tested with the Abbott RealTime High Risk HPV assay, demonstrating robust performance of the test with both specimen types and supporting the utility of the assay for cytology triage with both specimen types.
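The agreement statistics above follow the standard Cohen's kappa computation on a 2×2 table. The abstract gives 743 valid pairs and the 13/6 split of the 19 discrepancies, but not the positive/negative split of the 724 concordant pairs, so the 420/304 split below is an assumption for illustration only.

```python
def cohens_kappa_2x2(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table:
    a = both positive, b = SP+/SPG-, c = SP-/SPG+, d = both negative."""
    n = a + b + c + d
    po = (a + d) / n                        # observed agreement
    p_pos = ((a + b) / n) * ((a + c) / n)   # chance agreement on positive calls
    p_neg = ((c + d) / n) * ((b + d) / n)   # chance agreement on negative calls
    pe = p_pos + p_neg
    return (po - pe) / (1 - pe)

# 420 + 13 + 6 + 304 = 743 pairs; only the 13/6 discrepancy counts are
# from the abstract, the concordant split is hypothetical.
kappa = cohens_kappa_2x2(420, 13, 6, 304)
```

With any plausible concordant split, these discrepancy counts give a kappa near the reported 0.95.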

Keywords: Abbott realtime test, HPV, SurePath liquid based cytology, surepath post-gradient cell pellet

Procedia PDF Downloads 229
140 LaeA/1-Velvet Interplay in Aspergillus and Trichoderma: Regulation of Secondary Metabolites and Cellulases

Authors: Razieh Karimi Aghcheh, Christian Kubicek, Joseph Strauss, Gerhard Braus

Abstract:

Filamentous fungi are of considerable economic and social significance for human health, nutrition and white biotechnology. These organisms are dominant producers of a range of primary metabolites such as citric acid, microbial lipids (biodiesel) and highly unsaturated fatty acids (HUFAs). In particular, they also produce important but structurally complex secondary metabolites with enormous therapeutic applications in the pharmaceutical industry, for example cephalosporin, penicillin, taxol, zeranol and ergot alkaloids. Fungal secondary metabolites that are significantly relevant to human health include not only antibiotics but also, e.g., lovastatin, a well-known antihypercholesterolemic agent produced by Aspergillus terreus, or aflatoxin, a carcinogen produced by A. flavus. In addition to their roles in human health and agriculture, some fungi are industrially and commercially important: species of the ascomycete genus Hypocrea (teleomorph of Trichoderma) have been demonstrated to be efficient producers of highly active cellulolytic enzymes. This trait makes them effective in disrupting and depolymerizing lignocellulosic materials and thus applicable tools in a number of biotechnological areas as diverse as clothes-washing detergent, animal feed, and pulp and fuel production. Fungal LaeA/LAE1 (Loss of aflR Expression A) homologs and their gene products act at the interface between secondary metabolism, cellulase production and development. Lack of the corresponding genes results in significant physiological changes, including loss of secondary metabolite and lignocellulose-degrading enzyme production. At the molecular level, the encoded proteins are presumably methyltransferases or demethylases which act directly or indirectly at heterochromatin and interact with velvet domain proteins. Velvet proteins bind to DNA and affect expression of secondary metabolite (SM) genes and cellulases.
The dynamic interplay between LaeA/LAE1, velvet proteins and additional interaction partners is the key for an understanding of the coordination of metabolic and morphological functions of fungi and is required for a biotechnological control of the formation of desired bioactive products. Aspergilli and Trichoderma represent different biotechnologically significant species with significant differences in the LaeA/LAE1-Velvet protein machinery and their target proteins. We, therefore, performed a comparative study of the interaction partners of this machinery and the dynamics of the various protein-protein interactions using our robust proteomic and mass spectrometry techniques. This enhances our knowledge about the fungal coordination of secondary metabolism, cellulase production and development and thereby will certainly improve recombinant fungal strain construction for the production of industrial secondary metabolite or lignocellulose hydrolytic enzymes.

Keywords: cellulases, LaeA/1, proteomics, secondary metabolites

Procedia PDF Downloads 246
139 Interferon-Induced Transmembrane Protein-3 rs12252-CC Associated with the Progress of Hepatocellular Carcinoma by Up-Regulating the Expression of Interferon-Induced Transmembrane Protein 3

Authors: Yuli Hou, Jianping Sun, Mengdan Gao, Hui Liu, Ling Qin, Ang Li, Dongfu Li, Yonghong Zhang, Yan Zhao

Abstract:

Background and Aims: Interferon-induced transmembrane protein 3 (IFITM3) is a component of the interferon-stimulated gene (ISG) family. IFITM3 has been recognized as a key signaling molecule regulating cell growth in some tumors. However, the function of the IFITM3 rs12252-CC genotype in hepatocellular carcinoma (HCC) remains unknown to the authors' best knowledge. A cohort study was employed to clarify the relationship between the IFITM3 rs12252-CC genotype and HCC progression, and cellular experiments were used to investigate the correlation between the function of IFITM3 and the progression of HCC. Methods: 336 candidates were enrolled in the study, including 156 with HBV-related HCC and 180 with chronic hepatitis B infection or liver cirrhosis. Polymerase chain reaction (PCR) was employed to determine the gene polymorphism of IFITM3. The functions of IFITM3 were examined in PLC/PRF/5 cells with different treatments: LV-IFITM3, transfected with lentivirus to knock down the expression of IFITM3, and LV-NC, transfected with empty lentivirus as negative control. IFITM3 expression, proliferation and migration were detected by quantitative reverse transcription polymerase chain reaction (qRT-PCR), QuantiGene Plex 2.0 assay, western blotting, immunohistochemistry, Cell Counting Kit (CCK)-8 and wound healing, respectively. Six samples of PLC/PRF/5 (three infected with empty lentivirus as control; three infected with the LV-IFITM3 vector lentivirus as the experimental group) were sequenced at BGI (Beijing Genomics Institute, Shenzhen, China) using RNA-seq technology to identify IFITM3-related signaling pathways, and the PI3K/AKT pathway was chosen as the related signaling pathway to verify. Results: The patients with HCC had a significantly higher proportion of IFITM3 rs12252-CC compared with the patients with chronic HBV infection or liver cirrhosis. The distribution of the CC genotype in HCC patients with low differentiation was significantly higher than in those with high differentiation.
Patients with the CC genotype had larger tumor size, a higher percentage of vascular thrombosis, a higher distribution of low differentiation and a higher 5-year relapse rate than those with CT/TT genotypes. The expression of IFITM3 was higher in HCC tissues than in adjacent normal tissues, and the level of IFITM3 was higher in HCC tissues with low differentiation and metastasis than in those with high/medium differentiation and without metastasis. A higher RNA level of IFITM3 was found in the CC genotype than in the TT genotype. In PLC/PRF/5 cells with IFITM3 knockdown, cell proliferation and migration were inhibited. Analysis of the RNA sequencing data and verification by RT-PCR showed that the phosphatidylinositol 3-kinase/protein kinase B/mammalian target of rapamycin (PI3K/AKT/mTOR) pathway was associated with IFITM3 knockdown. With the inhibition of IFITM3, the PI3K/AKT/mTOR signaling pathway was blocked and the expression of vimentin was decreased. Conclusions: IFITM3 rs12252-CC, with its higher expression, plays a vital role in the progression of HCC by regulating HCC cell proliferation and migration. These effects are associated with the PI3K/AKT/mTOR signaling pathway.

Keywords: IFITM3, interferon-induced transmembrane protein 3, HCC, hepatocellular carcinoma, PI3K/ AKT/mTOR, phosphatidylinositol 3-kinase/protein kinase B/mammalian target of rapamycin

Procedia PDF Downloads 106
138 Investigating the Nature of Transactions Behind Violations Along Bangalore’s Lakes

Authors: Sakshi Saxena

Abstract:

Bangalore is an IT-industry-based metropolitan city in the state of Karnataka in India. It has experienced tremendous urbanization at the expense of the environment. Questions about development over and near ecologically sensitive areas have been raised by several instances of disappearing lakes. Lakes in Bangalore can be considered commons on both a local and a regional scale, and these water bodies are becoming less interconnected because of encroachment in the catchment area. Other sociocultural and environmental risks that have led to social issues are now a source of concern. They serve as an example of the transformation of commons, a dilemma that arises as land is transformed from rural to urban use, as well as of the complicated institutional issues associated with governance. According to some scholarly work and ecologists, a nexus of public and commercial institutions is primarily responsible for the depletion of water tanks and the inefficiency of the planning process. It is said that Bangalore's growth as an urban centre, together with the demands it created, particularly on land and water, resulted in the emergence of a middle and upper class that was demanding and self-assured. This report seeks to understand the issues and problems that led to these encroachments and to capture any violations around these lakes and tanks that arose during these decades. To claim watersheds and lake edges as properties, institutional arrangements (organizations, laws, and policies) intersect with planning authorities. Because of unregulated or indiscriminate forms of urbanization, it is claimed that the engagement of actors and the negotiation of the process, including government ignorance, are allowing this problem to flourish. In general, the governance of natural resources in India is largely state-based.
This is due to the constitutional scheme, which since the Government of India Act, 1935 has in principle given the power to the states to legislate in this area. Thus, states have the exclusive power to regulate water supplies, irrigation and canals, drainage and embankments, water storage, hydropower, and fisheries. The main aim, then, is to understand the institutional arrangements and the master planning processes behind these arrangements. To illustrate the ambiguity with an example: custodianship alone is a role divided between two state-level and two city-level bodies. This creates regulatory ambiguity, and the environmental effects include changes in city temperature, urban flooding, etc. As established, the main kinds of issues around lakes/tanks in Bangalore are encroachment and depletion. This study will further be enhanced by a physical survey of three of these lakes, focusing on the Bellandur site and the stakeholders involved. According to the study's findings thus far, corrupt politicians and dubious land transaction tools are involved in the real estate industry. It appears that some destruction could have been stopped, or at least mitigated, if there had been a robust system of urban planning processes along with strong institutional arrangements to protect lakes.

Keywords: wetlands, lakes, urbanization, bangalore, politics, reservoirs, municipal jurisdiction, lake connections, institutions

Procedia PDF Downloads 60
137 Comparison and Validation of a dsDNA Biomimetic Quality Control Reference for NGS-Based BRCA CNV Analysis versus MLPA

Authors: A. Delimitsou, C. Gouedard, E. Konstanta, A. Koletis, S. Patera, E. Manou, K. Spaho, S. Murray

Abstract:

Background: There remains a lack of international standard control reference materials for next generation sequencing-based approaches or device calibration. We have designed and validated dsDNA biomimetic reference materials for such targeted approaches, incorporating proprietary motifs (patent pending) for device/test calibration. They enable internal single-sample calibration, alleviating the need to compare samples to pooled historical population-based data assemblies or statistical modelling approaches. We have validated such an approach for BRCA copy number variation analytics using iQRS™-CNVSUITE versus multiplex ligation-dependent probe amplification (MLPA). Methods: Standard BRCA copy number variation analysis was compared between multiplex ligation-dependent probe amplification and next generation sequencing using a cohort of 198 breast/ovarian cancer patients. Next generation sequencing-based copy number variation analysis of samples spiked with iQRS™ dsDNA biomimetics was performed using the proprietary CNVSUITE software. Multiplex ligation-dependent probe amplification analyses were performed on an ABI-3130 sequencer and analysed with the Coffalyser software. Results: Concordance of BRCA copy number variation events between multiplex ligation-dependent probe amplification and CNVSUITE indicated an overall sensitivity of 99.88% and specificity of 100% for iQRS™-CNVSUITE. The negative predictive value of iQRS™-CNVSUITE for BRCA was 100%, allowing for accurate exclusion of any event. The positive predictive value was 99.88%, with no discrepancy between multiplex ligation-dependent probe amplification and iQRS™-CNVSUITE. For device calibration purposes, precision was 100%; spiking of patient DNA demonstrated linearity to 1% (±2.5%) and range from 100 copies. Traditional training was supplemented by predefining the calibrator-to-sample cut-off (lock-down) for amplicon gain or loss, based upon a relative ratio threshold, following training of iQRS™-CNVSUITE using spiked iQRS™ calibrator and control mocks.
BRCA copy number variation analysis using iQRS™-CNVSUITE was successfully validated and ISO 15189 accredited, and now enters CE-IVD performance evaluation. Conclusions: The inclusion of a reference control competitor (the iQRS™ dsDNA mimetic) in next generation sequencing-based testing offers a more robust, sample-independent approach for the assessment of copy number variation events compared to multiplex ligation-dependent probe amplification. The approach simplifies data analysis, improves independent sample data analysis, and allows for direct comparison to an internal reference control for sample-specific quantification. Our iQRS™ biomimetic reference materials allow for single-sample copy number variation analytics and further decentralisation of diagnostics to single patient sample assessment.
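The performance figures quoted above follow the standard confusion-matrix definitions, which can be sketched as follows. The counts are hypothetical, since the abstract reports only the resulting percentages, not the underlying event tallies.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # true positives among all real events
        "specificity": tn / (tn + fp),  # true negatives among all non-events
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Illustrative counts only: one missed event and no false positives
# roughly reproduce the reported ~99.88% sensitivity / 100% specificity.
m = diagnostic_metrics(tp=812, fp=0, fn=1, tn=1500)
```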

Keywords: validation, diagnostics, oncology, copy number variation, reference material, calibration

Procedia PDF Downloads 50
136 Swedish–Nigerian Extrusion Research: Channel for Traditional Grain Value Addition

Authors: Kalep Filli, Sophia Wassén, Annika Krona, Mats Stading

Abstract:

The food security challenge posed by the growing population in Sub-Saharan Africa centers on its agricultural transformation, where about 70% of the population is directly involved in farming. Research input can create economic opportunities, reduce malnutrition and poverty, and generate faster, fairer growth. Africa is discarding $4 billion worth of grain annually due to pre- and post-harvest losses. Grains and tubers play a central role in food supply in the region, but their production has generally lagged behind because there has been no robust scientific input to meet the challenge. African grains remain chronically underutilized to the detriment of the well-being of the people of Africa and elsewhere. The major reason for their underutilization is that they are under-researched. Any commitment by the scientific community to intervene needs creative solutions focused on innovative approaches that will support economic growth. In order to overcome this hurdle, co-creation activities and initiatives are necessary. One such initiative has been launched by Modibbo Adama University of Technology, Yola, Nigeria and RISE (the Research Institutes of Sweden), Gothenburg, Sweden. An exchange of expertise in research activities, as a channel for adding value to agricultural commodities in the region under the ´Traditional Grain Network´ programme, is in place. Process technologies such as extrusion offer the possibility of creating products in the food and feed sectors with better storage stability, added value, lower transportation cost and new markets. The Swedish–Nigerian initiative has focused on the development of high-protein pasta. Dry microscopy of the pasta samples shows a continuous structural framework of proteins and starch matrix. The water absorption index (WAI) results showed that water was absorbed steadily and followed the master curve pattern. The WAI values ranged between 250 and 300%.
In all aspects, the water absorption history was within a narrow range for all eight samples. The total cooking time for all eight samples in our study ranged between 5 and 6 minutes, with their respective dry sample diameters ranging between 1.26 and 1.35 mm. The water solubility index (WSI) ranged from 6.03 to 6.50%, which is a narrow range; cooking loss, of which WSI is a measure, is considered one of the main parameters taken into account in the assessment of pasta quality. The protein contents of the samples ranged between 17.33 and 18.60%. The cooked pasta firmness ranged from 0.28 to 0.86 N. The results show that increasing the ratio of cowpea flour and the level of pregelatinized cowpea tends to increase the firmness of the pasta. The breaking strength, an index of the toughness of the dry pasta, ranged from 12.9 to 16.5 MPa.
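WAI and WSI are commonly computed from centrifugation weighings, which can be sketched as below. The abstract does not state its exact protocol, so both the definitions (the common centrifugation method) and the weighings are assumptions for illustration only.

```python
def wai_wsi(dry_sample_g, sediment_gel_g, dissolved_solids_g):
    """Water absorption index and water solubility index, both as percentages,
    per the common centrifugation method: WAI = sediment gel weight over dry
    sample weight; WSI = dried supernatant solids over dry sample weight."""
    wai = 100.0 * sediment_gel_g / dry_sample_g
    wsi = 100.0 * dissolved_solids_g / dry_sample_g
    return wai, wsi

# Hypothetical weighings chosen to fall inside the reported ranges
# (WAI 250-300%, WSI 6.03-6.50%).
wai, wsi = wai_wsi(dry_sample_g=2.0, sediment_gel_g=5.6, dissolved_solids_g=0.125)
```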

Keywords: cowpea, extrusion, gluten free, high protein, pasta, sorghum

Procedia PDF Downloads 157
135 Soft Pneumatic Actuators Fabricated Using Soluble Polymer Inserts and a Single-Pour System for Improved Durability

Authors: Alexander Harrison Greer, Edward King, Elijah Lee, Safa Obuz, Ruhao Sun, Aditya Sardesai, Toby Ma, Daniel Chow, Bryce Broadus, Calvin Costner, Troy Barnes, Biagio DeSimone, Yeshwin Sankuratri, Yiheng Chen, Holly Golecki

Abstract:

Although a relatively new field, soft robotics is experiencing a rise in applicability in the secondary school setting through The Soft Robotics Toolkit, shared fabrication resources and a design competition. Exposing students outside of university research groups to this rapidly growing field allows for development of the soft robotics industry in new and imaginative ways. Soft robotic actuators have remained difficult to implement in classrooms because of their relative cost or difficulty of fabrication. Traditionally, a two-part molding system is used; however, this configuration often results in delamination. In an effort to make soft robotics more accessible to young students, we aim to develop a simple, single-mold method of fabricating soft robotic actuators from common household materials. These actuators are made by embedding a soluble polymer insert into silicone. These inserts can be made from hand-cut polystyrene, 3D-printed polyvinyl alcohol (PVA) or acrylonitrile butadiene styrene (ABS), or molded sugar. The insert is then dissolved using an appropriate solvent such as water or acetone, leaving behind a negative form which can be pneumatically actuated. The resulting actuators are seamless, eliminating the instability of adhering multiple layers together. The benefit of this approach is twofold: it simplifies the process of creating a soft robotic actuator, and in turn, increases its effectiveness and durability. To quantify the increased durability of the single-mold actuator, it was tested against the traditional two-part mold. The single-mold actuator could withstand actuation at 20psi for 20 times the duration when compared to the traditional method. The ease of fabrication of these actuators makes them more accessible to hobbyists and students in classrooms. 
After developing these actuators, they were applied, in collaboration with a ceramics teacher at our school, to a glove used to transfer the nuanced hand motions of throwing pottery from an expert artist to a novice. We quantified the improvement in the users’ pottery-making skill when wearing the glove using image analysis software. The seamless actuators proved to be robust in this dynamic environment. Seamless soft robotic actuators created by high school students show the applicability of the Soft Robotics Toolkit for secondary STEM education and outreach. Making students aware of what is possible through projects like this will inspire the next generation of innovators in materials science and robotics.

Keywords: pneumatic actuator fabrication, soft robotic glove, soluble polymers, STEM outreach

Procedia PDF Downloads 103
134 Biophysical and Structural Characterization of Transcription Factor Rv0047c of Mycobacterium Tuberculosis H37Rv

Authors: Md. Samsuddin Ansari, Ashish Arora

Abstract:

Every year, 10 million people fall ill with tuberculosis, one of the oldest known diseases, caused by Mycobacterium tuberculosis. The success of M. tuberculosis as a pathogen stems from its ability to persist in host tissues. Cases of multidrug-resistant (MDR) mycobacteria increase every day, and this resistance is associated with efflux pumps controlled at the level of transcription. The transcription regulators of MDR transporters in bacteria belong to one of the following four regulatory protein families: AraC, MarR, MerR, and TetR. The phenolic acid decarboxylase repressor (PadR)-like family of transcription regulators is closely related to the MarR family. PadR was first identified as a transcription factor involved in the regulation of the phenolic acid stress response in various microorganisms, including Mycobacterium tuberculosis H37Rv. Recent research has shown that the PadR family transcription factors are global, multifunctional transcription regulators. Rv0047c is a PadR subfamily-1 protein. We are exploring the biophysical and structural characterization of Rv0047c. The gene encoding Rv0047c was amplified by PCR using primers containing EcoRI and HindIII restriction enzyme sites, cloned into the pET-NH6 vector, and overexpressed in DH5α and BL21 (λDE3) cells of E. coli, followed by purification with a Ni2+-NTA column and size exclusion chromatography. Differential scanning calorimetry (DSC) was performed to determine the thermal stability: the Tm (transition temperature) of the protein is 55.29 ºC, with a ΔH (enthalpy change) of 6.92 kcal/mol. Circular dichroism was used to examine the secondary structure and conformation, and fluorescence spectroscopy to study the tertiary structure of the protein. To understand the effect of pH on the structure, function, and stability of Rv0047c, we employed spectroscopic techniques such as circular dichroism, fluorescence, and absorbance measurements over a wide range of pH (from pH 2.0 to pH 12.0).
At low and high pH, drastic changes in the secondary and tertiary structure of the protein were observed. EMSA studies showed the specific binding of Rv0047c to its own 30-bp promoter region. To determine the effect of complex formation on the secondary structure of Rv0047c, we examined the CD spectra of the complex of Rv0047c with the promoter DNA of rv0047. The functional role of Rv0047c was characterized by over-expressing the Rv0047c gene under the control of the hsp60 promoter in Mycobacterium tuberculosis H37Rv. We have predicted the three-dimensional structure of Rv0047c using SWISS-MODEL and Modeller, with validity checked by the Ramachandran plot. We performed molecular docking of Rv0047c with DnaA through PatchDock, followed by refinement through FireDock. Through this, it is possible to easily identify the binding hot-spots of the receptor molecule with the ligand, the nature of the interface itself, and the conformational changes undergone by the protein. We are using X-ray crystallography to resolve the structure of Rv0047c. Overall, the studies show that Rv0047c may have a role in transcription regulation, provide an insight into the activity of Rv0047c across the pH range of the subcellular environment, and help in understanding protein-protein interactions, a novel target to kill dormant bacteria and a potential strategy for tuberculosis control.

Keywords: mycobacterium tuberculosis, phenolic acid decarboxylase repressor, Rv0047c, Circular dichroism, fluorescence spectroscopy, docking, protein-protein interaction

Procedia PDF Downloads 78
133 Aerobic Biodegradation of a Chlorinated Hydrocarbon by Bacillus Cereus 2479

Authors: Srijata Mitra, Mobina Parveen, Pranab Roy, Narayan Chandra Chattopadhyay

Abstract:

Chlorinated hydrocarbons can be a major pollution problem in groundwater as well as soil. Many people come into contact with these chemicals daily, either accidentally or professionally in the laboratory. One of the most common sources of chlorinated hydrocarbon contamination of soil and groundwater is industrial effluent. The wide use and discharge of trichloroethylene (TCE), a volatile chlorohydrocarbon from the chemical industry, has led to major water pollution in rural areas. TCE is mainly used as an industrial metal degreaser. Biotransformation of TCE to the potent carcinogen vinyl chloride (VC) by consortia of anaerobic bacteria has been reported. For these reasons, the aim of the current study was to isolate and characterize the genes involved in TCE metabolism and to investigate those genes in silico. To our knowledge, only one aromatic dioxygenase system, the toluene dioxygenase of Pseudomonas putida F1, has been shown to be involved in TCE degradation. This is the first instance of a member of the Bacillus cereus group being used in the biodegradation of trichloroethylene. A novel bacterial strain, 2479, was isolated from an oil depot site at Rajbandh, Durgapur (West Bengal, India) by the enrichment culture technique. It was identified based on a polyphasic approach and ribotyping. The bacterium was gram positive, rod shaped, endospore forming and capable of degrading trichloroethylene as the sole carbon source. On the basis of phylogenetic data and fatty acid methyl ester analysis, strain 2479 should be placed within the genus Bacillus and the species cereus. However, the present isolate (strain 2479) is unique and sharply different from the usual Bacillus strains in its biodegrading nature. The Fujiwara test confirmed that strain 2479 could degrade TCE efficiently. The gene for TCE biodegradation was PCR-amplified from genomic DNA of Bacillus cereus 2479 using todC1 gene-specific primers.
The 600 bp amplicon was cloned into the expression vector pUC18 in the E. coli host XL1-Blue, expressed under the control of the lac promoter, and its nucleotide sequence was determined. The sequence was deposited at NCBI under accession no. GU183105. The in silico approach involved predicting the physico-chemical properties of the deduced Tce1 protein using the ProtParam tool. The tce1 gene contains a 342 bp ORF encoding 114 amino acids with a predicted molecular weight of 12.6 kDa; the theoretical pI of the polypeptide is 5.17, with molecular formula C559H886N152O165S8, total number of atoms 1770, aliphatic index 101.93, instability index 28.60, and grand average of hydropathicity (GRAVY) 0.152. Three differentially expressed proteins (97.1, 40, and 30 kDa) directly involved in TCE biodegradation were found to react immunologically with antibodies raised against TCE-inducible proteins in Western blot analysis. The present study suggests that the cloned gene product (TCE1) is capable of degrading TCE, as verified chemically.
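Two of the ProtParam metrics quoted above, GRAVY and the aliphatic index, are straightforward to compute from a protein sequence. The sketch below is illustrative only (the actual Tce1 sequence is not reproduced in the abstract, so the example sequence is a placeholder): GRAVY is the mean Kyte-Doolittle hydropathy over all residues, and the aliphatic index is the mole-percent formula of Ikai.

```python
# Kyte-Doolittle hydropathy values for the 20 standard amino acids
KD = {
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5,
    'Q': -3.5, 'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5,
    'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8, 'P': -1.6,
    'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2,
}

def gravy(seq):
    """Grand average of hydropathicity: mean hydropathy per residue."""
    return sum(KD[aa] for aa in seq) / len(seq)

def aliphatic_index(seq):
    """Ikai's aliphatic index from mole percent of A, V, I, and L."""
    n = len(seq)
    mole = lambda aa: 100.0 * seq.count(aa) / n
    return mole('A') + 2.9 * mole('V') + 3.9 * (mole('I') + mole('L'))

# Placeholder sequence, not the deduced Tce1 polypeptide:
example = "MAILVKDERT"
print(round(gravy(example), 3), round(aliphatic_index(example), 2))
```

With the real 114-residue Tce1 sequence, these functions would reproduce the reported GRAVY of 0.152 and aliphatic index of 101.93.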

Keywords: cloning, Bacillus cereus, in silico analysis, TCE

Procedia PDF Downloads 372
132 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland

Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski

Abstract:

PM10 is suspended dust that primarily has a negative effect on the respiratory system. PM10 is responsible for attacks of coughing and wheezing, asthma, or acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular due to large PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to address the problem of predicting suspended particulate matter concentration. Due to the very complicated nature of this issue, a machine learning approach was used. For this purpose, convolutional neural networks (CNNs) were adopted, these currently being among the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN network parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is carried out for the following day, hour by hour. The evaluation of the learning process for the investigated models was based mostly on the mean square error criterion; however, during model validation, a number of other methods of quantitative evaluation were taken into account. The presented pollution prediction model has been verified using real weather and air pollution data taken from the Airly sensor network. The dense and distributed network of Airly measurement devices enables access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5, and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10, temperature and wind information, as well as external forecasts of temperature and wind for the next 24 hours, served as input data.
Due to the specificity of the CNN-type network, this data is transformed into tensors and then processed. The network consists of an input layer, an output layer, and many hidden layers. In the hidden layers, convolutional and pooling operations are performed. The output of the system is a vector of 24 elements containing the predicted PM10 concentration for the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study. Several models that gave the best results were selected, and a comparison was then made with models based on linear regression. The numerical tests, carried out on real 'big' data, fully confirmed the positive properties of the presented method. Models based on the CNN technique allow prediction of PM10 dust concentration with a much smaller mean square error than currently used methods based on linear regression. Moreover, the use of neural networks increased the R² coefficient by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hour of prediction, respectively.
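The convolution-pooling-dense pipeline described above can be sketched in miniature. This is not the authors' Airly model (their architectures, kernel sizes, and weights are not given in the abstract); it is a minimal pure-Python illustration of how a 24-hour input series flows through a convolutional layer, a pooling layer, and a fully connected output layer to yield a 24-element prediction vector.

```python
def conv1d(series, kernel, bias=0.0):
    """Valid 1-D convolution (cross-correlation) over a time series."""
    k = len(kernel)
    return [sum(series[i + j] * kernel[j] for j in range(k)) + bias
            for i in range(len(series) - k + 1)]

def relu(xs):
    """Elementwise rectified linear activation."""
    return [max(0.0, x) for x in xs]

def max_pool(xs, size=2):
    """Non-overlapping max pooling with the given window size."""
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, size)]

def dense(xs, weights, biases):
    """Fully connected layer: one output per weight row."""
    return [sum(x * w for x, w in zip(xs, row)) + b
            for row, b in zip(weights, biases)]

def predict_pm10(series_24h, kernel, w_out, b_out):
    """24 hourly inputs -> conv -> ReLU -> pool -> dense -> 24 predictions."""
    hidden = max_pool(relu(conv1d(series_24h, kernel)))
    return dense(hidden, w_out, b_out)

# Hypothetical weights for shape illustration: kernel of length 3 maps 24
# inputs to 22 features; pooling halves that to 11; the dense layer maps
# 11 features to a 24-element hourly forecast vector.
series = [float(h) for h in range(24)]
kernel = [0.5, 0.25, 0.25]
w_out = [[0.1] * 11 for _ in range(24)]
b_out = [0.0] * 24
forecast = predict_pm10(series, kernel, w_out, b_out)
print(len(forecast))  # 24 hourly PM10 values
```

A trained model would of course learn the kernel and dense weights from the sensor data rather than use fixed values, and the real system stacks many such layers over multi-channel tensors (PM2.5, PM10, temperature, wind).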

Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks

Procedia PDF Downloads 113