Search results for: fault monitoring and detection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6271

301 The Effects of Goal Setting and Feedback on Inhibitory Performance

Authors: Mami Miyasaka, Kaichi Yanaoka

Abstract:

Attention Deficit/Hyperactivity Disorder (ADHD) is a neurodevelopmental disorder characterized by inattention, hyperactivity, and impulsivity; symptoms often manifest during childhood. In children with ADHD, the development of inhibitory processes is impaired. Inhibitory control allows people to avoid processing unnecessary stimuli and to behave appropriately in various situations; thus, people with ADHD require interventions to improve inhibitory control. Positive or negative reinforcements (i.e., reward or punishment) help improve the performance of children with such difficulties. However, in order to optimize impact, reward and punishment must be presented immediately following the relevant behavior. In regular elementary school classrooms, such supports are uncommon; hence, an alternative practical intervention method is required. One potential intervention involves setting goals to keep children motivated to perform tasks. This study examined whether goal setting improved inhibitory performance, especially for children with severe ADHD-related symptoms. We also focused on giving feedback on children's task performance. We expected that giving children feedback would help them set reasonable goals and monitor their performance. Feedback can be especially effective for children with severe ADHD-related symptoms because they have difficulty monitoring their own performance, perceiving their errors, and correcting their behavior. Our prediction was that goal setting by itself would be effective for children with mild ADHD-related symptoms, and goal setting based on feedback would be effective for children with severe ADHD-related symptoms. Japanese elementary school children and their parents were the sample for this study. Children performed two kinds of go/no-go tasks, and parents completed a checklist about their children's ADHD symptoms, the ADHD Rating Scale-IV, and the Conners 3rd edition. The go/no-go task is a cognitive task to measure inhibitory performance. Children were asked to press a key on the keyboard when a particular symbol appeared on the screen (go stimulus) and to refrain from doing so when another symbol was displayed (no-go stimulus). Errors in response to a no-go stimulus indicated inhibitory impairment. To examine the effect of goal setting on inhibitory control, 37 children (mean age = 9.49 ± 0.51 years) were required to set a performance goal, and 34 children (mean age = 9.44 ± 0.50 years) were not. Further, to manipulate the presence of feedback, no information about children's scores was provided in one go/no-go task, whereas scores were revealed in the other. The results revealed a significant interaction between goal setting and feedback. However, the three-way interaction between ADHD-related inattention, feedback, and goal setting was not significant. These results indicated that goal setting was effective for improving go/no-go task performance only with feedback, regardless of ADHD severity. Furthermore, we found an interaction between ADHD-related inattention and feedback, indicating that informing inattentive children of their scores made them unexpectedly more impulsive. Taken together, giving feedback alone was too demanding for children with severe ADHD-related symptoms, but the combination of goal setting with feedback was effective for improving their inhibitory control. We discuss effective interventions for children with ADHD from the perspective of goal setting and feedback.
This work was supported by the 14th Hakuho Research Grant for Child Education of the Hakuho Foundation.
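The interaction test at the core of this design can be illustrated with a short analysis sketch. The file and column names below (no-go error counts, goal-setting and feedback factors) are hypothetical placeholders, not the authors' data or script; the sketch only assumes a long-format table of task scores.

```python
# Minimal sketch of a goal-setting x feedback interaction test, assuming a
# long-format table with hypothetical columns: 'nogo_errors' (commission errors
# on no-go trials), 'goal' (set / not set) and 'feedback' (shown / hidden).
# Not the authors' actual analysis script.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("gonogo_scores.csv")  # hypothetical file

# Two-way ANOVA including the goal x feedback interaction term
model = smf.ols("nogo_errors ~ C(goal) * C(feedback)", data=df).fit()
print(anova_lm(model, typ=2))
```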

Keywords: attention deficit disorder with hyperactivity, feedback, goal-setting, go/no-go task, inhibitory control

Procedia PDF Downloads 83
300 Decarbonising Urban Building Heating: A Case Study on the Benefits and Challenges of Fifth-Generation District Heating Networks

Authors: Mazarine Roquet, Pierre Dewallef

Abstract:

The building sector, both residential and tertiary, accounts for a significant share of greenhouse gas emissions. In Belgium, partly due to poor insulation of the building stock, but mainly because of the massive use of fossil fuels for heating buildings, this share reaches almost 30%. To reduce carbon emissions from urban building heating, district heating networks emerge as a promising solution, as they offer various assets such as improving the load factor, integrating combined heat and power systems, and enabling energy source diversification, including renewable sources and waste heat recovery. However, mainly for the sake of simple operation, most existing district heating networks still operate at high or medium temperatures ranging between 120°C and 60°C (the so-called second- and third-generation district heating networks). Although these district heating networks offer energy savings in comparison with individual boilers, such temperature levels generally require the use of fossil fuels (mainly natural gas) with combined heat and power. Fourth-generation district heating networks improve the transport and energy conversion efficiency by decreasing the operating temperature to between 50°C and 30°C. Yet, to decarbonise building heating, one must increase waste heat recovery and use mainly wind, solar, or geothermal sources for the remaining heat supply. Fifth-generation networks operating between 35°C and 15°C offer the possibility to decrease the transport losses even further, to increase the share of waste heat recovery, and to use electricity from renewable resources through heat pumps to generate low-temperature heat. The main objective of this contribution is to exhibit, on a real-life test case, the benefits of replacing an existing third-generation network with a fifth-generation one in order to decarbonise the heat supply of the building stock. The second objective of the study is to highlight the difficulties resulting from the use of a fifth-generation, low-temperature, district heating network. To do so, a simulation model of the district heating network, including its regulation, is implemented in the modelling language Modelica. This model is applied to the test case of the heating network on the University of Liège's Sart Tilman campus, consisting of around sixty buildings. The model is validated with monitoring data and then adapted for low-temperature networks. A comparison of primary energy consumption as well as CO2 emissions is made between the two cases to underline the benefits in terms of energy independence and GHG emissions. To highlight the complexity of operating a low-temperature network, the difficulty of adapting the mass flow rate to the heat demand is considered. This shows the difficult balance between thermal comfort and the electrical consumption of the circulation pumps. Several control strategies are considered and compared with respect to the global energy savings. The developed model can be used to assess the potential for energy and CO2 emissions savings when retrofitting an existing network or when designing a new one.
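As an illustration of the trade-off described above between thermal comfort and circulation-pump consumption, the following sketch relates heat demand, supply/return temperature difference, and pump power. It is not the authors' Modelica model; the numbers and the cubic affinity-law scaling are illustrative assumptions.

```python
# Illustrative sketch (not the authors' Modelica model): how the supply/return
# temperature difference of a heating network sets the required mass flow, and
# how pump power grows with that flow. Numbers are hypothetical.
CP_WATER = 4186.0  # specific heat of water, J/(kg*K)

def mass_flow(heat_demand_w: float, delta_t_k: float) -> float:
    """Mass flow (kg/s) needed to deliver a heat demand at a given dT."""
    return heat_demand_w / (CP_WATER * delta_t_k)

def pump_power(m_dot: float, m_dot_ref: float, p_ref_w: float) -> float:
    """Pump power scaled with the cube of the flow ratio (affinity-law assumption)."""
    return p_ref_w * (m_dot / m_dot_ref) ** 3

demand = 2.0e6  # 2 MW heat demand, hypothetical
m_ref = mass_flow(demand, 30.0)  # reference flow at a third-generation dT of 30 K
for dt in (30.0, 10.0):          # wide vs narrow (low-temperature) spread
    m = mass_flow(demand, dt)
    print(f"dT = {dt:4.0f} K -> m_dot = {m:5.1f} kg/s, "
          f"pump power ~ {pump_power(m, m_ref, 20e3) / 1e3:6.1f} kW")
```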

Keywords: building simulation, fifth-generation district heating network, low-temperature district heating network, urban building heating

Procedia PDF Downloads 51
299 Inflation and Deflation of Aircraft's Tire with Intelligent Tire Pressure Regulation System

Authors: Masoud Mirzaee, Ghobad Behzadi Pour

Abstract:

An aircraft tire is designed to tolerate extremely heavy loads for a short duration. The number of tires increases with the weight of the aircraft, as the load needs to be distributed more evenly. Generally, aircraft tires work at high pressure, up to 200 psi (14 bar; 1,400 kPa) for airliners and higher for business jets. Tire assemblies for most aircraft categories rely on a recommended charge of compressed nitrogen that supports the aircraft's weight on the ground, provides a means of controlling the aircraft during taxi, takeoff, and landing, and supplies traction for braking. Accurate tire pressure is a key factor that enables tire assemblies to perform reliably under high static and dynamic loads. Concerning ambient temperature change, in the case where the temperature differs between the origin and destination airports, tire pressure should be adjusted and inflated to the specified operating pressure at the colder airport. This adjustment, which supersedes the normal over-inflation limit of 5 percent at constant ambient temperature, is required so that the inflation pressure remains sufficient to support the load of a specified aircraft configuration. Without this adjustment, a tire assembly would be significantly under- or over-inflated at the destination. Due to the increase of human errors in the aviation industry, exorbitant costs are imposed on airlines for providing consumable parts such as aircraft tires. The existence of an intelligent system to adjust the aircraft tire pressure based on weight, load, temperature, and weather conditions of the origin and destination airports could have a significant effect on reducing aircraft maintenance costs and fuel consumption, and on mitigating the environmental issues related to air pollution. An intelligent tire pressure regulation system (ITPRS) contains a processing computer, a nitrogen bottle at 1,800 psi, and distribution lines. The nitrogen bottle's inlet and outlet valves are installed in the main landing gear area and are connected through nitrogen lines to the main wheel and nose wheel assemblies. Control and monitoring of the nitrogen are performed by the computer, which adjusts the pressure according to calculations based on the received parameters, including the temperature of the origin and destination airports, the weight of cargo and passengers, fuel quantity, and wind direction. Correct tire inflation and deflation are essential in assuring that tires can withstand the centrifugal forces and heat of normal operations, with an adequate margin of safety for unusual operating conditions such as rejected takeoffs and hard landings. ITPRS will increase the performance of the aircraft in all phases of takeoff, landing, and taxi. Moreover, this system will reduce human errors, the consumption of materials, and the stresses imposed on the aircraft body.
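The temperature correction discussed above can be illustrated with a back-of-envelope calculation based on the isochoric ideal-gas relation (absolute pressure scaling with absolute temperature at constant volume). This is a simplified sketch, not the actual ITPRS control logic, and the numbers are hypothetical.

```python
# Simplified sketch of the temperature correction an ITPRS computer would have
# to account for (not the actual system logic): at constant tire volume,
# absolute pressure scales with absolute temperature, so a tire serviced at a
# cold origin airport reads differently at a warmer destination.
def pressure_at_destination(p_origin_psi: float, t_origin_c: float, t_dest_c: float) -> float:
    """Gauge pressure expected at destination, assuming constant tire volume."""
    ATM_PSI = 14.7
    p_abs = p_origin_psi + ATM_PSI
    return p_abs * (t_dest_c + 273.15) / (t_origin_c + 273.15) - ATM_PSI

# Tire serviced to 200 psi at -10 degC; reading at a 35 degC destination:
p_dest = pressure_at_destination(200.0, -10.0, 35.0)
print(f"Pressure at destination: {p_dest:.1f} psi "
      f"({(p_dest - 200.0) / 200.0 * 100:+.1f}% vs the 5% service tolerance)")
```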

Keywords: avionic system, improve efficiency, ITPRS, human error, reduced cost, tire pressure

Procedia PDF Downloads 216
298 Fabrication of SnO₂ Nanotube Arrays for Enhanced Gas Sensing Properties

Authors: Hsyi-En Cheng, Ying-Yi Liou

Abstract:

Metal-oxide semiconductor (MOS) gas sensors are widely used in the gas-detection market due to their high sensitivity, fast response, and simple device structures. However, the high working temperature of MOS gas sensors makes them difficult to integrate with appliances or consumer goods. One-dimensional (1-D) nanostructures are considered to have the potential to lower the working temperature due to their large surface-to-volume ratio, confined electrical conduction channels, and small feature sizes. Unfortunately, the difficulty of fabricating 1-D nanostructure electrodes has hindered the development of low-temperature MOS gas sensors. In this work, we propose a method to fabricate nanotube arrays, and SnO₂ nanotube-array sensors with different wall thicknesses were successfully prepared and examined. The fabrication of SnO₂ nanotube arrays combines the techniques of a barrier-free anodic aluminum oxide (AAO) template and atomic layer deposition (ALD) of SnO₂. First, a 1.0 µm Al film was deposited on an ITO glass substrate by electron beam evaporation and then anodically oxidized in a 5 wt% phosphoric acid solution at 5°C under a constant voltage of 100 V to form porous aluminum oxide. Once the Al film was fully oxidized, a 15 min over-anodization and a 30 min post chemical dissolution were used to remove the barrier oxide at the bottom end of the pores to generate a barrier-free AAO template. ALD using TiCl4 and H₂O as reactants then followed to grow a thin layer of SnO₂ on the template to form SnO₂ nanotube arrays. After removing the surface layer of SnO₂ by H₂ plasma and dissolving the template in 5 wt% phosphoric acid solution at 50°C, upright-standing SnO₂ nanotube arrays on ITO glass were produced. Finally, a Ag top electrode with a line width of 5 μm was printed on the nanotube arrays to form the SnO₂ nanotube-array sensor. Two SnO₂ nanotube arrays with wall thicknesses of 30 and 60 nm were produced in this experiment for the evaluation of gas sensing ability. Flat SnO₂ films with thicknesses of 30 and 60 nm were also examined for comparison. The results show that the properties of the ALD SnO₂ films were related to the deposition temperature. The films grown at 350°C had a low electrical resistivity of 3.6×10⁻³ Ω·cm and were, therefore, used for the nanotube-array sensors. The carrier concentration and mobility of the SnO₂ films were characterized by an Ecopia HMS-3000 Hall-effect measurement system and were 1.1×10²⁰ cm⁻³ and 16 cm²/V·s, respectively. The electrical resistance of the SnO₂ film and nanotube-array sensors in air and in a 5% H₂–95% N₂ mixture gas was monitored by a Picotest M3510A 6 1/2 digit multimeter. It was found that, at 200°C, the 30-nm-wall SnO₂ nanotube-array sensor shows the highest response to 5% H₂, followed by the 30-nm SnO₂ film sensor, the 60-nm SnO₂ film sensor, and the 60-nm-wall SnO₂ nanotube-array sensor. However, at temperatures below 100°C, all the samples were insensitive to the 5% H₂ gas. Further investigation of sensors with thinner SnO₂ is necessary to improve the sensing ability at temperatures below 100°C.
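As a quick cross-check of the reported Hall parameters, the resistivity implied by ρ = 1/(q·n·μ) can be computed from the stated carrier concentration and mobility; this back-of-envelope check is not part of the original work.

```python
# Back-of-envelope consistency check (not from the paper): film resistivity
# rho = 1 / (q * n * mu) implied by the reported Hall-effect parameters.
Q_E = 1.602e-19      # elementary charge, C
n = 1.1e20           # carrier concentration, cm^-3 (reported)
mu = 16.0            # electron mobility, cm^2/(V*s) (reported)

rho = 1.0 / (Q_E * n * mu)        # Ohm*cm
print(f"rho = {rho:.1e} Ohm*cm")  # ~3.5e-3 Ohm*cm, close to the reported 3.6e-3
```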

Keywords: atomic layer deposition, nanotube arrays, gas sensor, tin dioxide

Procedia PDF Downloads 220
297 Analysis of Differentially Expressed Genes in Spontaneously Occurring Canine Melanoma

Authors: Simona Perga, Chiara Beltramo, Floriana Fruscione, Isabella Martini, Federica Cavallo, Federica Riccardo, Paolo Buracco, Selina Iussich, Elisabetta Razzuoli, Katia Varello, Lorella Maniscalco, Elena Bozzetta, Angelo Ferrari, Paola Modesto

Abstract:

Introduction: Human and canine melanoma share common clinical and histologic characteristics, making dogs a good model for comparative oncology. The identification of specific genes and a better understanding of the genetic landscape, signaling pathways, and tumor–microenvironment interactions involved in cancer onset and progression are essential for the development of therapeutic strategies against this tumor in both species. In the present study, the differential expression of genes in spontaneously occurring canine melanoma and in paired normal tissue was investigated by targeted RNAseq. Material and Methods: Total RNA was extracted from 17 canine malignant melanoma (CMM) samples and from five paired normal tissues stored in RNAlater. In order to capture greater genetic variability, gene expression analysis was carried out using two panels (Qiagen), Human Immuno-Oncology (HIO) and Mouse Immuno-Oncology (MIO), and the MiSeq platform (Illumina). These kits allow the detection of the expression profile of 990 genes involved in the immune response against tumors in humans and mice. The data were analyzed with the CLC Genomics Workbench (Qiagen) software using the Canis lupus familiaris genome as a reference. Data analyses were carried out both comparing the biological groups (tumoral vs. healthy tissues) and comparing each neoplastic tissue vs. its paired healthy tissue; a fold change greater than two and a p-value less than 0.05 were set as the thresholds to select genes of interest. Results and Discussion: Using HIO, 63 down-regulated genes were detected; 13 of those were also down-regulated when comparing neoplastic samples vs. paired healthy tissues. Eighteen genes were up-regulated, and 14 of those were also up-regulated when comparing neoplastic samples vs. paired healthy tissues. Using MIO, 35 down-regulated genes were detected; only four of these were also down-regulated when comparing neoplastic samples vs. paired healthy tissues. Twelve genes were up-regulated in both types of analysis. Considering the two kits, the greatest variation in fold change was found in the up-regulated genes. Dogs displayed a greater genetic homology with humans than mice; moreover, the results have shown that the two kits are able to detect different genes. Most of these genes have specific cellular functions or belong to particular enzymatic categories; some have already been described as correlated to human melanoma, confirming the validity of the dog as a model for the study of the molecular aspects of human melanoma.
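The gene-selection rule described above (fold change greater than two and p-value below 0.05) can be expressed as a short filtering step. The file name and column names are hypothetical placeholders for a differential-expression export, not the study's actual data.

```python
# Minimal sketch of the selection thresholds described above (|FC| > 2, p < 0.05),
# assuming a hypothetical results table with columns 'gene', 'fold_change'
# (signed, negative for down-regulation) and 'p_value'.
import pandas as pd

results = pd.read_csv("cmm_vs_healthy.csv")  # hypothetical export

up   = results[(results.fold_change >=  2.0) & (results.p_value < 0.05)]
down = results[(results.fold_change <= -2.0) & (results.p_value < 0.05)]
print(f"{len(up)} up-regulated and {len(down)} down-regulated genes pass the thresholds")
```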

Keywords: animal model, canine melanoma, gene expression, spontaneous tumors, targeted RNAseq

Procedia PDF Downloads 173
296 Relationship between Glycated Hemoglobin in Adolescents with Type 1 Diabetes Mellitus and Parental Anxiety and Depression

Authors: Evija Silina, Maris Taube, Maksims Zolovs

Abstract:

Background: Type 1 diabetes mellitus (T1D) is the most common chronic endocrine pathology in children. The management of type 1 diabetes requires a strict diet, physical activity, lifelong insulin therapy, and proper self-monitoring of blood glucose; it is usually complicated and, therefore, may result in a variety of psychosocial problems for children, adolescents, and their families. Metabolic control of the disease is determined by glycated haemoglobin (HbA1c), the main criterion for diabetes compensation. A correlation between anxiety and depression levels and glycaemic control was observed in many previous studies. It is assumed that anxiety and depression symptoms negatively affect glycaemic control. Parental psychological distress was associated with higher child self-reported stress and depressive symptoms, and it had negative effects on diabetes management. Objective: The main objective of this paper is to evaluate the relationship between parental mental health conditions (depression and anxiety) and metabolic control in their adolescents with T1D. Methods: This cross-sectional study recruited adolescents with T1D (N=251) and their parents (N=251). The respondents completed questionnaires: the 7-item Generalized Anxiety Disorder (GAD-7) scale measured anxiety level, and the Patient Health Questionnaire-9 (PHQ-9) measured depressive symptoms. Glycaemic control of patients was assessed using the last glycated haemoglobin (HbA1c) values. GLM mediation analysis was performed to determine the potential mediating effect of the parents' mental health conditions (depression and anxiety) on the relationship between the mental health conditions (depression and anxiety) of the child and the level of glycated haemoglobin (HbA1c). To test the significance of the mediated effect (ME) for non-normally distributed data, bootstrapping procedures (10,000 bootstrapped samples) were used. Results: 502 respondents were eligible for screening to detect anxiety and depression symptoms. Mediation analysis was performed to assess the mediating role of parent GAD-7 in the linkage between the dependent variable (HbA1c) and the independent variables (child GAD-7 and child PHQ-9). The results revealed that the total effect of child GAD-7 (B = 0.479, z = 4.30, p < 0.001) on HbA1c was significant, but the total effect of child PHQ-9 (B = 0.166, z = 1.49, p = 0.135) was not. With the inclusion of the mediating variable (parent GAD-7), the impact of child GAD-7 on HbA1c was found to be insignificant (B = 0.113, z = 0.98, p = 0.326), and the impact of child PHQ-9 on HbA1c was also insignificant (B = 0.068, z = 0.74, p = 0.458). The indirect effect of child GAD-7 on HbA1c through parent GAD-7 was significant (B = 0.366, z = 4.31, p < 0.001), and the indirect effect of child PHQ-9 on HbA1c through parent GAD-7 was also significant (B = 0.098, z = 2.56, p = 0.010). This indicates that the relationship between the dependent variable (HbA1c) and the independent variables (child GAD-7 and child PHQ-9) is fully mediated by parent GAD-7. Conclusion: The main result suggests that glycated haemoglobin in adolescents with type 1 diabetes is related to adolescents' mental health via parents' anxiety. That is, parents' anxiety plays a more significant role in the level of glycated haemoglobin in adolescents than the depression and anxiety of the adolescents themselves.
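A minimal sketch of the bootstrapped indirect-effect estimate described above (child GAD-7 acting on HbA1c through parent GAD-7) is shown below. The file and column names are hypothetical, and this is not the authors' analysis pipeline; it only illustrates the product-of-coefficients approach with percentile bootstrap intervals.

```python
# Sketch of a bootstrapped indirect (mediated) effect:
# child GAD-7 -> parent GAD-7 -> HbA1c. Column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def indirect_effect(d: pd.DataFrame) -> float:
    # a-path: mediator regressed on predictor; b-path: outcome on mediator + predictor
    a = smf.ols("parent_gad7 ~ child_gad7", data=d).fit().params["child_gad7"]
    b = smf.ols("hba1c ~ parent_gad7 + child_gad7", data=d).fit().params["parent_gad7"]
    return a * b

df = pd.read_csv("t1d_survey.csv")  # hypothetical file
rng = np.random.default_rng(0)
boot = [indirect_effect(df.sample(frac=1.0, replace=True, random_state=rng))
        for _ in range(10_000)]     # 10,000 bootstrap resamples, as in the study
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Indirect effect: {indirect_effect(df):.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```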

Keywords: type 1 diabetes, adolescents, parental diabetes-specific mental health conditions, glycated haemoglobin, anxiety, depression

Procedia PDF Downloads 57
295 Saco Sweet Cherry: Phenolic Profile and Biological Activity of Coloured and Non-Coloured Fractions

Authors: Catarina Bento, Ana Carolina Gonçalves, Fábio Jesus, Luís Rodrigues Silva

Abstract:

Increasing evidence suggests that a diet rich in fruits and vegetables plays an important role in the prevention of chronic diseases, such as heart disease, cancer, stroke, diabetes, and Alzheimer's disease, among others. Fruits and vegetables have gained prominence due to their richness in bioactive compounds, and are the focus of many studies on the biological properties through which they act as health promoters. Prunus avium Linnaeus (L.), commonly known as sweet cherry, has attracted attention due to its health benefits and has been extensively studied. In Portugal, most of the cherry production comes from the Fundão region. Saco is one of the most important cultivars produced in this region and holds geographical protection. In this work, we prepared three extracts through solid-phase extraction (SPE): a whole extract, fraction I (non-coloured phenolics), and fraction II (coloured phenolics). The three extracts were used to determine the phenolic profile of the Saco cultivar by liquid chromatography with diode array detection (LC-DAD). This was followed by the evaluation of their biological potential, testing the extracts' capacity to scavenge free radicals (DPPH•, nitric oxide (•NO), and superoxide radical (O2•−)) and to inhibit the α-glucosidase enzyme. Additionally, we evaluated, for the first time, the protective effects against peroxyl radical (ROO•)-induced hemoglobin oxidation and hemolysis in human erythrocytes. A total of 16 non-coloured phenolics were detected, with 3-O-caffeoylquinic and ρ-coumaroylquinic acids as the main ones, and 6 anthocyanins were found, among which cyanidin-3-O-rutinoside represented the majority. With respect to antioxidant activity, Saco showed great antioxidant potential in a concentration-dependent manner, demonstrated through the DPPH•, •NO, and O2•− radicals, and a greater ability to inhibit the α-glucosidase enzyme than acarbose, the drug commonly used to treat diabetes. Additionally, Saco proved effective in protecting erythrocytes against oxidative damage in a concentration-dependent manner, both against hemoglobin oxidation and hemolysis. Our work demonstrated that the Saco cultivar is an excellent source of phenolic compounds, which are natural antioxidants that readily capture reactive species, such as ROO•, before they can attack the erythrocytes' membrane. In general, the whole extract showed the best efficiency, most likely due to a synergistic interaction between the different compounds. Finally, comparing the two separate fractions, the coloured fraction showed the highest activity in all the assays, proving to be the biggest contributor to the biological activity of Saco cherries.

Keywords: biological potential, coloured phenolics, non-coloured phenolics, sweet cherry

Procedia PDF Downloads 225
294 Machine Learning for Disease Prediction Using Symptoms and X-Ray Images

Authors: Ravija Gunawardana, Banuka Athuraliya

Abstract:

Machine learning has emerged as a powerful tool for disease diagnosis and prediction. The use of machine learning algorithms has the potential to improve the accuracy of disease prediction, thereby enabling medical professionals to provide more effective and personalized treatments. This study focuses on developing a machine-learning model for disease prediction using symptoms and X-ray images. The importance of this study lies in its potential to assist medical professionals in accurately diagnosing diseases, thereby improving patient outcomes. Respiratory diseases are a significant cause of morbidity and mortality worldwide, and chest X-rays are commonly used in the diagnosis of these diseases. However, accurately interpreting X-ray images requires significant expertise and can be time-consuming, making it difficult to diagnose respiratory diseases in a timely manner. By incorporating machine learning algorithms, we can significantly enhance disease prediction accuracy, ultimately leading to better patient care. The study utilized the Mask R-CNN algorithm, which is a state-of-the-art method for object detection and segmentation in images, to process chest X-ray images. The model was trained and tested on a large dataset of patient information, which included both symptom data and X-ray images. The performance of the model was evaluated using a range of metrics, including accuracy, precision, recall, and F1-score. The results showed that the model achieved an accuracy rate of over 90%, indicating that it was able to accurately detect and segment regions of interest in the X-ray images. In addition to X-ray images, the study also incorporated symptoms as input data for disease prediction. The study used three different classifiers, namely Random Forest, K-Nearest Neighbor and Support Vector Machine, to predict diseases based on symptoms. These classifiers were trained and tested using the same dataset of patient information as the X-ray model. The results showed promising accuracy rates for predicting diseases using symptoms, with the ensemble learning techniques significantly improving the accuracy of disease prediction. The study's findings indicate that the use of machine learning algorithms can significantly enhance disease prediction accuracy, ultimately leading to better patient care. The model developed in this study has the potential to assist medical professionals in diagnosing respiratory diseases more accurately and efficiently. However, it is important to note that the accuracy of the model can be affected by several factors, including the quality of the X-ray images, the size of the dataset used for training, and the complexity of the disease being diagnosed. In conclusion, the study demonstrated the potential of machine learning algorithms for disease prediction using symptoms and X-ray images. The use of these algorithms can improve the accuracy of disease diagnosis, ultimately leading to better patient care. Further research is needed to validate the model's accuracy and effectiveness in a clinical setting and to expand its application to other diseases.
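The symptom-based part of the pipeline can be sketched with the three classifiers named above combined in a voting ensemble. The dataset path and column layout are hypothetical, and this is a sketch of the general approach rather than the study's implementation.

```python
# Sketch of symptom-based prediction with Random Forest, K-Nearest Neighbors
# and SVM combined in a soft-voting ensemble. File and columns are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.read_csv("symptoms.csv")                  # hypothetical file
X, y = df.drop(columns=["disease"]), df["disease"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=300, random_state=42)),
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=7))),
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True, random_state=42))),
    ],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
print(classification_report(y_te, ensemble.predict(X_te)))  # accuracy, precision, recall, F1
```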

Keywords: K-nearest neighbor, mask R-CNN, random forest, support vector machine

Procedia PDF Downloads 112
293 Aquatic Sediment and Honey of Apis mellifera as Bioindicators of Pesticide Residues

Authors: Luana Guerra, Silvio C. Sampaio, Vladimir Pavan Margarido, Ralpho R. Reis

Abstract:

Brazil is the world's largest consumer of pesticides. The excessive use of these compounds has negative impacts on animal and human life, the environment, and food security. Bees, crucial for pollination, are exposed to pesticides during the collection of nectar and pollen, posing risks to their health and the food chain, including honey contamination. Aquatic sediments are also affected, impacting water quality and the microbiota. Therefore, the analysis of aquatic sediments and bee honey is essential to identify environmental contamination and monitor ecosystems. The aim of this study was to use samples of honey from honeybees (Apis mellifera) and aquatic sediment as bioindicators of environmental contamination by pesticides and to relate them to agricultural use in the surrounding areas. The collection of sediment and honey samples was carried out in two stages. The first stage was conducted in the Bituruna municipality region in the second half of 2022, and the second stage took place in the regions of Laranjeiras do Sul, Quedas do Iguaçu, and Nova Laranjeiras in the first half of 2023. In total, 10 collection points were selected, 5 in the first stage and 5 in the second stage, with one sediment sample and one honey sample collected at each point, totaling 20 samples. The honey and sediment samples were analyzed at the Laboratory of the Paraná Institute of Technology, with ten samples of honey and ten samples of sediment. The selected extraction method was QuEChERS, and the analysis of the components present in the samples was performed using liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS). The pesticides azoxystrobin, epoxiconazole, boscalid, carbendazim, haloxyfop, fomesafen, fipronil, chlorantraniliprole, imidacloprid, and bifenthrin were detected in the sediment samples from the study area in Laranjeiras do Sul, Paraná, with carbendazim being the compound with the highest concentration (0.47 mg/kg). The honey samples obtained from the apiaries showed satisfactory results, as none of the analyzed pesticides were detected or quantified, except at point 9, where the fungicide tebuconazole was found, but at a concentration below the quantification limit.

Keywords: contamination, water research, agrochemicals, beekeeping activity

Procedia PDF Downloads 19
292 Combating the Practice of Open Defecation through Appropriate Communication Strategies in Rural India

Authors: Santiagomani Alex Parimalam

Abstract:

Lack of awareness of the consequences of open defecation, together with myths and misconceptions related to the use of toilets, has led to the continued practice of open defecation in India. The Government of India initiated a multi-pronged intensive communication campaign against the practice of open defecation in the last few years. The primary vision of this communication campaign was to increase demand for toilets and to ensure that all have access to safe sanitation. The campaign strategy included the use of mass media, group and folk media, and interpersonal communication to expedite achieving its objectives. The campaign included the use of various media such as posters, wall writings, slides in cinema theatres, kiosks, pamphlets, newsletters, flip charts, and folk media to bring about behavioural changes in the communities. The author carried out concurrent monitoring and process documentation of the campaigns initiated by the state of Tamilnadu, India, between 2013 and 2016, commissioned by UNICEF India. The study was carried out to assess the effectiveness of the communication campaigns in combating the practice of open defecation and promoting the construction of toilets in the state of Tamilnadu, India. Initial findings revealed gaps in understanding the audience and in the use of appropriate media. The first phase of the communication campaign, named Chi Chi Chollapa (a shame-based concept), also revealed that the use of interpersonal communication and group and community media was the most effective strategy for reaching the rural masses. The failure of various other media, especially the print media (posters, handbills, newsletters, kiosks), provides insights as to where the government needs to invest its resources to bring about health-seeking behaviour in the community. The findings shared with the government made it possible to strengthen the campaign, resulting in an improved response. Taking cues from the study, the government recognised the potency of women, school children, youth, and community leaders as effective carriers of the message. The government narrowed its focus and invested in voluntary workers in the community (village poverty reduction committee workers, VPRCs). The effectiveness of interpersonal communication and peer education by credible community workers threw light on the need for localising the content and the communicator. From this study, we could derive that only community and group media are preferred by people in the rural community. Children, youth, women, and credible local leaders proved to be ambassadors in behaviour change communication. This study discloses the lacunae in the communication campaign and points out that the state should have carried out a proper communication needs analysis and piloting. The study used a survey method with random sampling and employed both quantitative and qualitative tools, such as interview schedules, in-depth interviews, and focus group discussions, in rural areas of Tamilnadu in phases. The findings of the study should provide direction to any future campaign concerning health and rural development.

Keywords: appropriate, communication, combating, open defecation

Procedia PDF Downloads 103
291 Artificial Intelligence in Management Simulators

Authors: Nuno Biga

Abstract:

Artificial Intelligence (AI) allows machines to interpret information and learn from context analysis, giving them the ability to make predictions adjusted to each specific situation. In addition to learning by performing deterministic and probabilistic calculations, the 'artificial brain' also learns through information and data provided by those who train it, namely its users. The "Assisted-BIGAMES" version of the Accident & Emergency (A&E) simulator introduces the concept of a "Virtual Assistant" (VA) that provides users with useful suggestions, namely to pursue the following operations: a) relocate workstations in order to shorten travelled distances and minimize the stress of those involved; b) identify in real time the bottleneck(s) in the operations system so that it is possible to act upon them quickly; c) identify resources that should be polyvalent so that the system can be more efficient; d) identify the specific processes in which it may be advantageous to establish partnerships with other teams; and e) assess possible solutions based on the suggested KPIs, allowing action monitoring to guide the (re)definition of future strategies. This paper is built on the BIGAMES© simulator and presents the conceptual AI model developed in a pilot project. Each Virtual Assisted BIGAME is a management simulator developed by the author that guides operational and strategic decision making, providing users with useful information in the form of management recommendations that make it possible to predict the actual outcome of different alternative strategic management actions. The pilot project incorporates results from 12 editions of the BIGAME A&E that took place between 2017 and 2022 at AESE Business School, based on the compilation of data that allows establishing causal relationships between decisions taken and results obtained. The systemic analysis and interpretation of this information is materialised in Assisted-BIGAMES through a computer application called "BIGAMES Virtual Assistant" that players can use during the game. Each participant in the Virtual Assisted-BIGAMES continually asks himself which decisions he should make during the game in order to win the competition. To this end, the role of each team's VA consists of guiding the players to be more effective in their decision making by presenting recommendations based on AI methods. It is important to note that the VA's suggestions for action can be accepted or rejected by the managers of each team, and as the participants gain a better understanding of the game, they will more easily dispense with the VA's recommendations and rely more on their own experience, capability, and knowledge to support their own decisions. Preliminary results show that the introduction of the VA provides faster learning of the decision-making process. The facilitator (Serious Game Controller) is responsible for supporting the players with further analysis, and the recommended action may or may not be aligned with the previous recommendations of the VA. All the information should be jointly analysed and assessed by each player, who is expected to add "Emotional Intelligence", a component absent from the machine learning process.

Keywords: artificial intelligence (AI), gamification, key performance indicators (KPI), machine learning, management simulators, serious games, virtual assistant

Procedia PDF Downloads 77
290 ReactorDesign App: An Interactive Software for Self-Directed Explorative Learning

Authors: Chia Wei Lim, Ning Yan

Abstract:

The subject of reactor design, dealing with the transformation of chemical feedstocks into more valuable products, constitutes the central idea of chemical engineering. Despite its importance, the way it is taught to chemical engineering undergraduates has stayed virtually the same over the past several decades, even as the chemical industry increasingly leans towards the use of software for the design and daily monitoring of chemical plants. As such, there has been a widening learning gap as chemical engineering graduates transition from university to the industry since they are not exposed to effective platforms that relate the fundamental concepts taught during lectures to industrial applications. While the success of technology enhanced learning (TEL) has been demonstrated in various chemical engineering subjects, TELs in the teaching of reactor design appears to focus on the simulation of reactor processes, as opposed to arguably more important ideas such as the selection and optimization of reactor configuration for different types of reactions. This presents an opportunity for us to utilize the readily available easy-to-use MATLAB App platform to create an educational tool to aid the learning of fundamental concepts of reactor design and to link these concepts to the industrial context. Here, interactive software for the learning of reactor design has been developed to narrow the learning gap experienced by chemical engineering undergraduates. Dubbed the ReactorDesign App, it enables students to design reactors involving complex design equations for industrial applications without being overly focused on the tedious mathematical steps. With the aid of extensive visualization features, the concepts covered during lectures are explicitly utilized, allowing students to understand how these fundamental concepts are applied in the industrial context and equipping them for their careers. In addition, the software leverages the easily accessible MATLAB App platform to encourage self-directed learning. It is useful for reinforcing concepts taught, complementing homework assignments, and aiding exam revision. Accordingly, students are able to identify any lapses in understanding and clarify them accordingly. In terms of the topics covered, the app incorporates the design of different types of isothermal and non-isothermal reactors, in line with the lecture content and industrial relevance. The main features include the design of single reactors, such as batch reactors (BR), continuously stirred tank reactors (CSTR), plug flow reactors (PFR), and recycle reactors (RR), as well as multiple reactors consisting of any combination of ideal reactors. A version of the app, together with some guiding questions to aid explorative learning, was released to the undergraduates taking the reactor design module. A survey was conducted to assess its effectiveness, and an overwhelmingly positive response was received, with 89% of the respondents agreeing or strongly agreeing that the app has “helped [them] with understanding the unit” and 87% of the respondents agreeing or strongly agreeing that the app “offers learning flexibility”, compared to the conventional lecture-tutorial learning framework. In conclusion, the interactive ReactorDesign App has been developed to encourage self-directed explorative learning of the subject and demonstrate the industrial applications of the taught design concepts.
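A typical sizing comparison that such an app automates is the CSTR-versus-PFR volume for an isothermal, liquid-phase, first-order reaction at a target conversion. The sketch below uses illustrative numbers and the textbook design equations; it is not taken from the ReactorDesign App itself.

```python
# Sketch of a textbook sizing comparison: CSTR vs PFR volume for an isothermal,
# liquid-phase, first-order reaction A -> B at a target conversion X.
# Values are illustrative, not from the app.
import math

def cstr_volume(v0: float, k: float, x: float) -> float:
    """CSTR design equation for first order: V = v0*X / (k*(1-X))."""
    return v0 * x / (k * (1.0 - x))

def pfr_volume(v0: float, k: float, x: float) -> float:
    """PFR design equation for first order: V = (v0/k) * ln(1/(1-X))."""
    return (v0 / k) * math.log(1.0 / (1.0 - x))

v0, k, x = 0.05, 0.2, 0.9   # volumetric flow m^3/s, rate constant 1/s, conversion
print(f"CSTR: {cstr_volume(v0, k, x):.2f} m^3, PFR: {pfr_volume(v0, k, x):.2f} m^3")
# The PFR requires a much smaller volume at high conversion for the same feed.
```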

Keywords: explorative learning, reactor design, self-directed learning, technology enhanced learning

Procedia PDF Downloads 72
289 Development and Total Error Concept Validation of Common Analytical Method for Quantification of All Residual Solvents Present in Amino Acids by Gas Chromatography-Head Space

Authors: A. Ramachandra Reddy, V. Murugan, Prema Kumari

Abstract:

Residual solvents in pharmaceutical samples are monitored using gas chromatography with headspace (GC-HS). Based on current regulatory and compendial requirements, measuring residual solvents is mandatory for all release testing of active pharmaceutical ingredients (API). Generally, isopropyl alcohol is used as the residual solvent in proline and tryptophan; methanol in cysteine monohydrate hydrochloride, glycine, methionine, and serine; ethanol in glycine and lysine monohydrate; and acetic acid in methionine. In order to have a single method for determining these residual solvents (isopropyl alcohol, ethanol, methanol, and acetic acid) in all these 7 amino acids, a sensitive and simple method was developed using the gas chromatography headspace technique with flame ionization detection. During development, poor reproducibility, retention time variation, and bad peak shape were identified for the acetic acid peaks, due to the reaction of acetic acid with the stationary phase (cyanopropyl dimethyl polysiloxane) of the column and the dissociation of acetic acid in water (if used as the diluent) while applying the temperature gradient. Therefore, dimethyl sulfoxide was used as the diluent to avoid these issues. Most of the methods published for acetic acid quantification by GC-HS use a derivatisation technique to protect acetic acid. As per the compendia, a risk-based approach was selected as appropriate to determine the degree and extent of the validation process to assure the fitness of the procedure. Therefore, the total error concept was selected to validate the analytical procedure. An accuracy profile of ±40% was selected for the lower level (quantitation limit level) and ±30% for the other levels, with a 95% confidence interval (risk profile 5%). The method was developed using a DB-WAXetr column manufactured by Agilent, with an internal diameter of 530 µm, a film thickness of 2.0 µm, and a length of 30 m. A constant flow of 6.0 mL/min of helium carrier gas in constant makeup mode was selected. The present method is simple, rapid, and accurate, and is suitable for the rapid analysis of isopropyl alcohol, ethanol, methanol, and acetic acid in amino acids. The range of the method is 50 ppm to 200 ppm for isopropyl alcohol, 50 ppm to 3000 ppm for ethanol, 50 ppm to 400 ppm for methanol, and 100 ppm to 400 ppm for acetic acid, which covers the specification limits provided in the European Pharmacopoeia. The accuracy profile and risk profile generated as part of the validation were found to be satisfactory. Therefore, this method can be used for the testing of residual solvents in amino acid drug substances.
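The acceptance decision behind such an accuracy profile can be sketched as follows: at each concentration level the relative bias and an interval built from the observed precision are compared against the ±40% (quantitation limit) or ±30% acceptance limits. The recoveries below are hypothetical, and the interval is a simplified tolerance-style construction rather than the full β-expectation calculation used in formal total-error validation.

```python
# Simplified sketch of a total-error style accuracy-profile check at one level:
# relative bias plus a precision-based interval vs the acceptance limits.
# Recovery values are hypothetical, not validation data.
import numpy as np
from scipy import stats

def accuracy_profile(recoveries_pct: np.ndarray, limit_pct: float, alpha: float = 0.05):
    bias = recoveries_pct.mean() - 100.0                       # relative bias, %
    half = stats.t.ppf(1 - alpha / 2, len(recoveries_pct) - 1) * recoveries_pct.std(ddof=1)
    lo, hi = bias - half, bias + half                          # simplified interval, %
    return bias, (lo, hi), (-limit_pct <= lo) and (hi <= limit_pct)

measured = np.array([96.0, 102.0, 98.5, 104.0, 95.0, 101.5])   # % recovery at LOQ level
bias, (lo, hi), ok = accuracy_profile(measured, limit_pct=40.0)
print(f"bias = {bias:+.1f}%, interval = [{lo:+.1f}%, {hi:+.1f}%], within limits: {ok}")
```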

Keywords: amino acid, head space, gas chromatography, total error

Procedia PDF Downloads 122
288 The Solid-Phase Sensor Systems for Fluorescent and SERS-Recognition of Neurotransmitters for Their Visualization and Determination in Biomaterials

Authors: Irina Veselova, Maria Makedonskaya, Olga Eremina, Alexandr Sidorov, Eugene Goodilin, Tatyana Shekhovtsova

Abstract:

Catecholamines such as dopamine, norepinephrine, and epinephrine are the principal neurotransmitters in the sympathetic nervous system. Catecholamines and their metabolites are considered to be important markers of socially significant diseases such as atherosclerosis, diabetes, coronary heart disease, carcinogenesis, Alzheimer's, and Parkinson's diseases. Currently, neurotransmitters can be studied via electrochemical and chromatographic techniques that allow their characterization and quantification, although these techniques can only provide crude spatial information. Besides, the difficulty of catecholamine determination in biological materials is associated with their low normal concentrations (~1 nM) in biomaterials, which may fall by a further order of magnitude in some disorders. In addition, in blood they are rapidly oxidized by monoamine oxidases from thrombocytes; for this reason, the determination of neurotransmitter metabolism indicators in an organism should be very rapid (15–30 min), especially in critical states. Unfortunately, modern instrumental analysis does not offer a complete solution to this problem: despite its high sensitivity and selectivity, HPLC-MS cannot provide sufficiently rapid analysis, while enzymatic biosensors and immunoassays for the determination of the considered analytes lack sufficient sensitivity and reproducibility. Fluorescent and SERS sensors remain a compelling technology for approaching the general problem of selective neurotransmitter detection. In recent years, a number of catecholamine sensors have been reported, including RNA aptamers, fluorescent ribonucleopeptide (RNP) complexes, and boronic acid based synthetic receptors, with the sensors operating in a turn-off mode. In this work we present fluorescent and SERS turn-on sensor systems based on bio- or chemorecognizing nanostructured films {chitosan/collagen-Tb/Eu/Cu-nanoparticles-indicator reagents} that provide the selective recognition, visualization, and sensing of the above-mentioned catecholamines at the level of nanomolar concentrations in biomaterials (cell cultures, tissue, etc.). We have (1) developed optically transparent porous films and gels of chitosan/collagen; (2) functionalized the surface with recognition molecules (by impregnation and immobilization of components of the indicator systems: biorecognizing and auxiliary reagents); and (3) performed computer simulations for the theoretical prediction and interpretation of some properties of the developed materials and of the analytical signals obtained in biomaterials. We are grateful for the financial support of this research from the Russian Foundation for Basic Research (grants no. 15-03-05064 a and 15-29-01330 ofi_m).

Keywords: biomaterials, fluorescent and SERS-recognition, neurotransmitters, solid-phase turn-on sensor system

Procedia PDF Downloads 378
287 Numerical Simulation of Hydraulic Fracture Propagation in Marine-continental Transitional Tight Sandstone Reservoirs by Boundary Element Method: A Case Study of Shanxi Formation in China

Authors: Jiujie Cai, Fengxia LI, Haibo Wang

Abstract:

After years of research, offshore oil and gas development has now shifted to unconventional reservoirs, where multi-stage hydraulic fracturing technology has been widely used. However, the simulation of complex hydraulic fractures in tight reservoirs faces geological and engineering difficulties, such as large burial depths, sand-shale interbeds, and complex stress barriers. The objective of this work is to simulate hydraulic fracture propagation in the tight sandstone matrix of marine-continental transitional reservoirs, with the Shanxi Formation in the Tianhuan syncline of the Dongsheng gas field used as the research target. The characteristic parameters of the vertical rock samples with rich bedding were clarified through rock mechanics experiments. The influence of rock mechanical parameters, the vertical stress difference between the pay zone and the bedding layer, and fracturing parameters (such as injection rate, fracturing fluid viscosity, and the number of perforation clusters within a single stage) on fracture initiation and propagation was investigated. In this paper, a 3-D fracture propagation model was built to investigate the complex fracture propagation morphology by the boundary element method, considering the strength of the bonding surface between layers, the vertical stress difference, and the fracturing parameters (such as injection rate, fluid volume, and viscosity). The research results indicate that, at a vertical stress difference of 3 MPa, the fracture height can break through and enter the upper interlayer when the thickness of the overlying bedding layer is 6-9 m, considering the effect of the weak bonding surface between layers. The fracture propagates within the pay zone when the overlying interlayer is thicker than 13 m. The difference in fluid volume distribution between clusters can exceed 20% when the stress difference between the clusters in a stage exceeds 2 MPa. Fracture clusters in high-stress zones cannot initiate when the stress difference in the stage exceeds 5 MPa. The simulated fracture heights are much higher if the effect of the weak bonding surfaces between layers is not included. Increasing the injection rate, increasing the fracturing fluid viscosity, and reducing the number of clusters within a single stage can promote fracture height propagation through layers. Optimizing the perforation positions and reducing the number of perforations can promote the uniform expansion of fractures. Typical curves for fracture height estimation were established for the tight sandstone of the Lower Permian Shanxi Formation. The model results show good consistency with the micro-seismic monitoring results of hydraulic fracturing in Well 1HF.

Keywords: fracture propagation, boundary element method, fracture height, offshore oil and gas, marine-continental transitional reservoirs, rock mechanics experiment

Procedia PDF Downloads 97
286 Influence of a High-Resolution Land Cover Classification on Air Quality Modelling

Authors: C. Silveira, A. Ascenso, J. Ferreira, A. I. Miranda, P. Tuccella, G. Curci

Abstract:

Poor air quality is one of the main environmental causes of premature deaths worldwide, mainly in cities, where the majority of the population lives. It is a consequence of successive land cover (LC) and land use changes resulting from the intensification of human activities. Knowing these landscape modifications in a comprehensive spatiotemporal dimension is, therefore, essential for understanding variations in air pollutant concentrations. In this sense, air quality models are very useful to simulate the physical and chemical processes that affect the dispersion and reaction of chemical species in the atmosphere. However, the modelling performance should always be evaluated, since the resolution of the input datasets largely dictates the reliability of the air quality outcomes. Among these data, an updated LC is an important parameter to be considered in atmospheric models, since it takes into account the Earth's surface changes due to natural and anthropic actions and regulates the exchanges of fluxes (emissions, heat, moisture, etc.) between the soil and the air. This work aims to evaluate the performance of the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) when different LC classifications are used as input. The influence of two LC classifications was tested: i) the 24-class USGS (United States Geological Survey) LC database included by default in the model, and ii) CLC (Corine Land Cover) and specific high-resolution LC data for Portugal, reclassified according to the new USGS nomenclature (33 classes). Two distinct WRF-Chem simulations were carried out to assess the influence of the LC on air quality over Europe and Portugal, as a case study, for the year 2015, using the nesting technique over three simulation domains (25 km², 5 km², and 1 km² horizontal resolution). Based on the 33-class LC approach, particular emphasis was placed on Portugal, given the detail and higher LC spatial resolution (100 m x 100 m) compared with the CLC data (5000 m x 5000 m). As regards air quality, only the LC impacts on tropospheric ozone concentrations were evaluated, because ozone pollution episodes typically occur in Portugal, in particular during spring/summer, and there are few research works relating this pollutant to LC changes. The WRF-Chem results were validated by season and station typology using background measurements from the Portuguese air quality monitoring network. As expected, a better model performance was achieved at rural stations: moderate correlation (0.4–0.7), BIAS (10–21 µg·m⁻³), and RMSE (20–30 µg·m⁻³), where higher average ozone concentrations were also estimated. Comparing both simulations, small differences linked to the Leaf Area Index and air temperature values were found, although the high-resolution LC approach shows a slight enhancement in the model evaluation. This highlights the role of the LC in the exchange of atmospheric fluxes and stresses the need to consider a high-resolution LC characterization combined with other detailed model inputs, such as the emission inventory, to improve air quality assessment.
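The validation statistics quoted above (correlation, BIAS, RMSE) follow directly from paired model-observation series; the sketch below uses placeholder arrays standing in for WRF-Chem output and monitoring-network ozone measurements.

```python
# Sketch of the model-evaluation statistics used above (correlation, BIAS, RMSE)
# between modelled and observed ozone; arrays are hypothetical placeholders.
import numpy as np

def validate(model: np.ndarray, obs: np.ndarray):
    bias = np.mean(model - obs)                   # ug/m3
    rmse = np.sqrt(np.mean((model - obs) ** 2))   # ug/m3
    r = np.corrcoef(model, obs)[0, 1]             # Pearson correlation
    return r, bias, rmse

model = np.array([68.0, 75.0, 90.0, 55.0, 81.0])  # hypothetical hourly O3, ug/m3
obs   = np.array([60.0, 70.0, 95.0, 50.0, 72.0])
r, bias, rmse = validate(model, obs)
print(f"r = {r:.2f}, BIAS = {bias:+.1f} ug/m3, RMSE = {rmse:.1f} ug/m3")
```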

Keywords: land use, spatial resolution, WRF-Chem, air quality assessment

Procedia PDF Downloads 135
285 Investigation of Cavitation in a Centrifugal Pump Using Synchronized Pump Head Measurements, Vibration Measurements and High-Speed Image Recording

Authors: Simon Caba, Raja Abou Ackl, Svend Rasmussen, Nicholas E. Pedersen

Abstract:

It is a challenge to directly monitor cavitation in a pump application during operation because of a lack of visual access to validate the presence of cavitation and its form of appearance. In this work, experimental investigations are carried out in an inline single-stage centrifugal pump with optical access. Hence, it gives the opportunity to enhance the value of CFD tools and standard cavitation measurements. Experiments are conducted using two impellers running in the same volute at 3000 rpm and the same flow rate. One of the impellers used is optimized for lower NPSH₃% by its blade design, whereas the other one is manufactured using a standard casting method. The cavitation is detected by pump performance measurements, vibration measurements and high-speed image recordings. The head drop and the pump casing vibration caused by cavitation are correlated with the visual appearance of the cavitation. The vibration data is recorded in an axial direction of the impeller using accelerometers recording at a sample rate of 131 kHz. The vibration frequency domain data (up to 20 kHz) and the time domain data are analyzed as well as the root mean square values. The high-speed recordings, focusing on the impeller suction side, are taken at 10,240 fps to provide insight into the flow patterns and the cavitation behavior in the rotating impeller. The videos are synchronized with the vibration time signals by a trigger signal. A clear correlation between cloud collapses and abrupt peaks in the vibration signal can be observed. The vibration peaks clearly indicate cavitation, especially at higher NPSHA values where the hydraulic performance is not affected. It is also observed that below a certain NPSHA value, the cavitation started in the inlet bend of the pump. Above this value, cavitation occurs exclusively on the impeller blades. The impeller optimized for NPSH₃% does show a lower NPSH₃% than the standard impeller, but the head drop starts at a higher NPSHA value and is more gradual. Instabilities in the head drop curve of the optimized impeller were observed in addition to a higher vibration level. Furthermore, the cavitation clouds on the suction side appear more unsteady when using the optimized impeller. The shape and location of the cavitation are compared to 3D fluid flow simulations. The simulation results are in good agreement with the experimental investigations. In conclusion, these investigations attempt to give a more holistic view on the appearance of cavitation by comparing the head drop, vibration spectral data, vibration time signals, image recordings and simulation results. Data indicates that a criterion for cavitation detection could be derived from the vibration time-domain measurements, which requires further investigation. Usually, spectral data is used to analyze cavitation, but these investigations indicate that the time domain could be more appropriate for some applications.
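The vibration post-processing described above (an overall RMS value plus an amplitude spectrum limited to 20 kHz from the 131 kHz accelerometer records) can be sketched as follows; the signal is synthetic and stands in for a measured record.

```python
# Sketch of the vibration post-processing described above: RMS of the time
# signal and an amplitude spectrum up to 20 kHz. The signal is synthetic.
import numpy as np

fs = 131_000                                   # sample rate, Hz (as in the setup)
t = np.arange(0, 0.5, 1 / fs)
signal = 0.2 * np.sin(2 * np.pi * 3000 * t) + 0.05 * np.random.default_rng(0).normal(size=t.size)

rms = np.sqrt(np.mean(signal ** 2))            # overall vibration level
spectrum = np.abs(np.fft.rfft(signal)) * 2 / signal.size
freqs = np.fft.rfftfreq(signal.size, 1 / fs)
band = freqs <= 20_000                         # keep the analysed 0-20 kHz band
print(f"RMS = {rms:.3f}, dominant peak at {freqs[band][np.argmax(spectrum[band])]:.0f} Hz")
```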

Keywords: cavitation, centrifugal pump, head drop, high-speed image recordings, pump vibration

Procedia PDF Downloads 159
284 Comparative Vector Susceptibility for Dengue Virus and Their Co-Infection in A. aegypti and A. albopictus

Authors: Monika Soni, Chandra Bhattacharya, Siraj Ahmed Ahmed, Prafulla Dutta

Abstract:

Dengue is now a globally important arboviral disease. Extensive vector surveillance has already established A. aegypti as a primary vector, but A. albopictus is now aggravating the situation through gradual adaptation to human surroundings. Global destabilization and a gradual climatic shift with rising temperatures have significantly expanded the geographic range of these species. These versatile vectors also host chikungunya, Zika, and yellow fever viruses. The biggest challenge faced by endemic countries now is the upsurge in co-infections reported with multiple serotypes and virus co-circulation. To foster vector control interventions and mitigate the disease burden, there is a pressing need for knowledge on vector susceptibility and viral tolerance in response to multiple infections. To address our understanding of transmission dynamics and reproductive fitness, both vectors were exposed to single and dual combinations of all four dengue serotypes by artificial feeding and followed up to the third generation. Artificial feeding revealed a significant difference in feeding rate between the two species, with A. albopictus being a poor artificial feeder (35-50%) compared to A. aegypti (95-97%). Robust sequential screening of viral antigen in mosquitoes was performed by dengue NS1 ELISA, RT-PCR, and quantitative PCR. To observe viral dissemination in different mosquito tissues, an indirect immunofluorescence assay was performed. The results showed that both vectors were initially infected with all dengue (1-4) serotypes and their co-infection combinations (D1 and D2, D1 and D3, D1 and D4, D2 and D4). In the case of DENV-2, there was a significant difference in the peak titre observed at 16 days post infection. When exposed to dual infections, A. aegypti supported all combinations of the virus, whereas A. albopictus continued only single infections in successive days. There was a significant negative effect on the fecundity and fertility of both vectors compared to the control (P(ANOVA) < 0.001). In the case of dengue 2-infected mosquitoes, fecundity in the parent generation was significantly higher (P(Bonferroni) < 0.001) for A. albopictus compared to A. aegypti, but there was a complete loss of fecundity from the second to the third generation for A. albopictus. It was observed that A. aegypti became infected with multiple serotypes frequently, even at low viral titres, compared to A. albopictus. Possible reasons for this could be the presence of Wolbachia infection in A. albopictus, the mosquito innate immune response, small RNA interference, etc. Based on these observations, it can be anticipated that transovarial transmission may not be an important phenomenon for clinical disease outcome, due to the absence of viral positivity by the third generation. Also, dengue NS1 ELISA can be used for preliminary viral detection in mosquitoes, as more than 90% of the samples were found positive compared to RT-PCR and viral load estimation.

Keywords: co-infection, dengue, reproductive fitness, viral quantification

Procedia PDF Downloads 178
283 Seafloor and Sea Surface Modelling in the East Coast Region of North America

Authors: Magdalena Idzikowska, Katarzyna Pająk, Kamil Kowalczyk

Abstract:

Seafloor topography is a fundamental issue in geological, geophysical, and oceanographic studies. Single-beam or multibeam sonars attached to the hulls of ships emit a hydroacoustic signal from transducers and reproduce the topography of the seabed. This solution provides relevant accuracy and spatial resolution. Bathymetric data from ship surveys are provided by the National Centers for Environmental Information of the National Oceanic and Atmospheric Administration. Unfortunately, most of the seabed is still unidentified, as there are many gaps to be explored between ship survey tracks. Moreover, such measurements are very expensive and time-consuming. One solution is the raster bathymetric models shared by the General Bathymetric Chart of the Oceans. The offered products are a compilation of different sets of data, raw or processed. Measurements of gravity anomalies also serve as indirect data for the development of bathymetric models. Some forms of seafloor relief (e.g., seamounts) increase the force of the Earth's pull, leading to changes in the sea surface. Based on satellite altimetry data, sea surface height and marine gravity anomalies can be estimated, and based on the anomalies, it is possible to infer the structure of the seabed. The main goal of the work is to create regional bathymetric models and models of the sea surface in the area of the east coast of North America, a region of seamounts and undulating seafloor. The research includes an analysis of the methods and techniques used, an evaluation of the interpolation algorithms applied, densification of the models, and the creation of grid models. The data used are raster bathymetric models in NetCDF format, survey data from multibeam soundings in MB-System format, and satellite altimetry data from the Copernicus Marine Environment Monitoring Service. The methodology includes data extraction, processing, mapping, and spatial analysis. Visualization of the obtained results was carried out with Geographic Information System tools. The result is an extension of the state of knowledge of the quality and usefulness of the data used for seabed and sea surface modeling, and of the accuracy of the generated models. Sea level is averaged over time and space (excluding waves, tides, etc.); its changes, together with knowledge of the topography of the ocean floor, inform us indirectly about the volume of the whole ocean. The true shape of the ocean surface is further varied by such phenomena as tides, differences in atmospheric pressure, wind systems, thermal expansion of water, or phases of ocean circulation. The greater the depth at a given location, the weaker the trend of sea level change. Studies show that combining data sets from different sources, with different accuracies, can affect the quality of sea surface and seafloor topography models.
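
To illustrate the gridding step described above, the following sketch interpolates scattered depth soundings onto a regular grid; the coordinates, depths and grid spacing are hypothetical, and the real workflow relies on GEBCO/NOAA/MB-System/CMEMS data processed with GIS tools.

```python
# Minimal gridding sketch: scattered "survey" depths interpolated onto a
# regular grid, with gaps outside the convex hull filled by nearest values.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
# Hypothetical scattered soundings: lon, lat and depth (negative = below sea level).
lon = rng.uniform(-70.0, -60.0, 500)
lat = rng.uniform(35.0, 45.0, 500)
depth = -4000 + 1500 * np.exp(-((lon + 65) ** 2 + (lat - 40) ** 2))  # a seamount

# Regular target grid at 0.1 degree spacing.
glon, glat = np.meshgrid(np.arange(-70, -60, 0.1), np.arange(35, 45, 0.1))

linear = griddata((lon, lat), depth, (glon, glat), method="linear")
nearest = griddata((lon, lat), depth, (glon, glat), method="nearest")
grid = np.where(np.isnan(linear), nearest, linear)  # fill hull gaps

print("grid shape:", grid.shape, "deepest point:", grid.min().round(1), "m")
```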

Keywords: seafloor, sea surface height, bathymetry, satellite altimetry

Procedia PDF Downloads 56
282 An Approach on Intelligent Tolerancing of Car Body Parts Based on Historical Measurement Data

Authors: Kai Warsoenke, Maik Mackiewicz

Abstract:

To achieve a high quality of assembled car body structures, tolerancing is used to ensure the geometric accuracy of the single car body parts. There are two main techniques to determine the required tolerances. The first is tolerance analysis, which describes the influence of individually toleranced input values on a required target value. The second is tolerance synthesis, which determines the allocation of individual tolerances to achieve a target value. Both techniques are based on classical statistical methods, which assume certain probability distributions. To ensure competitiveness in both saturated and dynamic markets, production processes in vehicle manufacturing must be flexible and efficient. The dimensional specifications selected for the individual body components and the resulting assemblies have a major influence on the quality of the process, for example in the manufacturing of forming tools as operating equipment or at the higher level of car body assembly. As part of the metrological process monitoring, manufactured individual parts and assemblies are recorded, and the measurement results are stored in databases. They serve as information for the temporary adjustment of the production processes and are interpreted by experts in order to derive suitable adjustment measures. In the production of forming tools, this means that time-consuming and costly changes of the tool surface have to be made, while in the body shop, uncertainties that are difficult to control result in cost-intensive rework. The stored measurement results are not yet used to intelligently design tolerances in future processes or to support temporary decisions based on real-world geometric data; they offer potential to extend the tolerancing methods through data analysis and machine learning models. The purpose of this paper is to examine real-world measurement data from individual car body components, as well as assemblies, in order to develop an approach for using the data in short-term actions and future projects. For this reason, the measurement data are first analyzed descriptively in order to characterize their behavior and to determine possible correlations. Subsequently, a database is created that is suitable for developing machine learning models. The objective is to create an intelligent way to determine the position and number of measurement points as well as the local tolerance range. For this, a number of different model types are compared and evaluated. The models with the best results are used to optimize equally distributed measuring points on unknown car body part geometries and to assign tolerance ranges to them. This investigation is still in progress. However, there are areas of the car body parts which behave more sensitively than the overall part, indicating that intelligent tolerancing is useful here in order to design and control preceding and succeeding processes more efficiently.
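
As an illustration of the modelling idea, a hedged sketch follows in which a regression model learns a local tolerance range from historical per-point measurement statistics; the features, synthetic data and model choice are assumptions for demonstration only, whereas the study compares several model types on real measurement databases.

```python
# Hypothetical sketch: predict a local tolerance range for a measurement point
# from its position and the scatter observed in historical builds.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
n = 2000
# Assumed features per measurement point: position on the part (x, y, z),
# historical standard deviation and mean deviation from nominal.
X = np.column_stack([
    rng.uniform(0, 1500, n),   # x position [mm]
    rng.uniform(0, 800, n),    # y position [mm]
    rng.uniform(0, 400, n),    # z position [mm]
    rng.gamma(2.0, 0.05, n),   # historical std dev [mm]
    rng.normal(0, 0.2, n),     # mean deviation from nominal [mm]
])
# Toy target: tolerance range driven mainly by local scatter.
y = 6 * X[:, 3] + 0.1 * np.abs(X[:, 4]) + rng.normal(0, 0.02, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print("MAE [mm]:", round(mean_absolute_error(y_te, model.predict(X_te)), 4))
```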

Keywords: automotive production, machine learning, process optimization, smart tolerancing

Procedia PDF Downloads 91
281 The Administration of Infectious Diseases During the COVID-19 Pandemic and the Role of Differential Diagnosis with the Biomarker VB10

Authors: Sofia Papadimitriou

Abstract:

INTRODUCTION: The differential diagnosis between acute viral and bacterial infections is an important cost-effectiveness parameter at the treatment stage: it allows the maximum therapeutic benefit to be obtained at minimum cost and ensures the proper use of antibiotics. The discovery of sensitive and robust molecular diagnostic tests based on the host response to infection has enhanced the accurate diagnosis and differentiation of infections. METHOD: The study used six independent blood sample sets (total = 756) associated with human protein-protein interactions, each of which, at the transcription stage, expresses a different host network response to viral and bacterial infections. The individual blood samples were subjected to a sequence of computational filters that identify a gene panel corresponding to a standalone diagnostic score. The data set and the corresponding gene panel constitute a new Bangalore-Viral Bacterial (BL-VB) diagnostic. FINDINGS: We used a blood-based biomarker of 10 genes (Panel-VB) with significant prognostic value for distinguishing viral from bacterial infections, with a weighted mean AUROC of 0.97 (95% CI: 0.96-0.99) in eleven independent sample sets (n = 898). We derived a patient score (VB10) based on the panel, which shows significant diagnostic value with a weighted mean AUROC of 0.94 (95% CI: 0.91-0.98) in 2,996 patient samples from 56 public datasets from 19 different countries. We also evaluated VB10 in a new South Indian cohort (BL-VB, n = 56) and found 97% accuracy in confirmed cases of viral and bacterial infections. We found that VB10 (a) accurately identifies the type of infection even in unspecified, culture-negative cases, (b) reflects the patient's clinical recovery, and (c) applies to all age groups, covering a wide range of acute bacterial and viral infections, including those caused by non-specific pathogens. We applied the VB10 score to publicly available COVID-19 data and found that it correctly identified viral infection in patient samples. RESULTS: The results of the study demonstrate the diagnostic power of the VB10 biomarker as a test for the accurate diagnosis of acute infections and for monitoring recovery. We anticipate that it will support clinical decisions on antibiotic prescribing and can be integrated into antibiotic stewardship policies. CONCLUSIONS: Overall, we have developed a new RNA-based biomarker and a new blood test to differentiate between viral and bacterial infections, assisting physicians in designing the optimal treatment regimen, contributing to the proper use of antibiotics, and reducing the burden of antimicrobial resistance (AMR).
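
The sketch below shows, in general terms, how a blood gene-panel score can be evaluated as a viral-versus-bacterial classifier using AUROC; the gene names, weights and simulated expression values are placeholders and do not reproduce the published Panel-VB/VB10 signature.

```python
# Illustrative evaluation of a toy 10-gene panel score with AUROC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n_viral, n_bact = 150, 150
genes = ["IFI27", "ISG15", "OAS1", "RSAD2", "IFI44L",      # placeholder panel:
         "HK3", "LCN2", "PGLYRP1", "MMP9", "DEFA4"]        # not the real signature

# Simulated log-expression: interferon-stimulated genes higher in viral cases,
# neutrophil-associated genes higher in bacterial cases (synthetic data).
viral = np.hstack([rng.normal(2, 1, (n_viral, 5)), rng.normal(0, 1, (n_viral, 5))])
bact = np.hstack([rng.normal(0, 1, (n_bact, 5)), rng.normal(2, 1, (n_bact, 5))])
X = np.vstack([viral, bact])
y = np.array([1] * n_viral + [0] * n_bact)   # 1 = viral, 0 = bacterial

weights = np.array([+1] * 5 + [-1] * 5)      # toy up/down weighting
score = X @ weights                          # higher score -> more "viral"
print("AUROC:", round(roc_auc_score(y, score), 3))
```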

Keywords: acute infections, antimicrobial resistance, biomarker, blood transcriptome, systems biology, classifier diagnostic score

Procedia PDF Downloads 131
280 Evaluation of Magnificent Event of India with Special Reference to Maha Kumbha Mela (Fair) 2013-A Congregation of Millions

Authors: Sharad Kumar Kulshreshtha

Abstract:

India is a great land of cultural and traditional diversity, whose spectrum creates a unique ambiance across the country. Fairs and festivals, in particular, are ancient phenomena in Indian culture, and thousands of religious, spiritual, and cultural fairs are organized on auspicious occasions. These fairs reflect the effective and efficient role of social governance and responsibility in Indian society. In this context, a mega event known as the 'Kumbha Mela' (literally 'Kumbha Fair') is organized every twelve years at Prayag (Allahabad), an ancient city of India, now in the state of Uttar Pradesh. The Kumbh Mela is one of the largest human congregations on Earth, and the Mela held at Allahabad is considered the largest and holiest among the four cities where the Kumbha fair is organized. According to Hindu religious scripture, pilgrims take a dip at the holy confluence known as Triveni Sangam, the meeting point of three sacred rivers of India: the Ganges, the Yamuna, and the (mythical) Saraswati. During the Kumbha fair, the River Ganges is believed to turn to nectar, bringing great blessing to everyone who bathes in it. Other activities include religious discussions, devotional singing, and the mass feeding of pilgrims and the poor. The venue of the Kumbh Mela depends on the positions the Sun, the Moon, and Jupiter hold in different zodiac signs during that period. More than 120 million (12 crore) people visited the Kumbha Fair 2013 in Allahabad. A temporary tented city was set up for the pilgrims over an area of 2 hectares of land along the Ganges. As many as 5 power substations, temporary police stations, hospitals, bus terminals, and stalls were set up to provide various facilities to the visitors, and thousands of volunteers participated in assisting with this event. The fair administration made every effort to provide facilities to visitors, such as security, sanitation, medical care, and continuous water and power supply. The efficient and timely arrangements at the Kumbha Mela attracted the attention of many governments and institutions; Harvard University (USA) conducted research to find out how it was made possible. This paper focuses on the effective and efficient planning and preparation of the Kumbha Fair, including the facilitation process, the role of various coordinating agencies, risk and crisis management strategies based on the Prevention, Preparedness, Response, and Recovery (PPRR) approach, the emergency response plan (ERP), safety and security issues, various environmental aspects along with health hazards and hygiene, crowd management, evacuation, monitoring, control, and evaluation.

Keywords: event planning and facility arrangement, risk management, crowd management, India

Procedia PDF Downloads 281
279 An Integrated HCV Testing Model as a Method to Improve Identification and Linkage to Care in a Network of Community Health Centers in Philadelphia, PA

Authors: Catelyn Coyle, Helena Kwakwa

Abstract:

Objective: As novel and better tolerated therapies become available, effective HCV testing and care models become increasingly necessary, not only to identify individuals with active infection but also to link them to HCV providers for medical evaluation and treatment. Our aim is to describe an effective HCV testing and linkage-to-care model piloted in a network of five community health centers located in Philadelphia, PA. Methods: In October 2012, the National Nursing Centers Consortium piloted a routine opt-out HCV testing model in a network of community health centers, one of which treats HCV, HIV, and co-infected patients. Key aspects of the model were medical-assistant-initiated testing, the use of laboratory-based reflex testing technology, and electronic medical record modifications to prompt, track, report and facilitate payment of test costs. Universal testing of all adult patients was implemented at health centers serving patients at high risk for HCV. The other sites integrated risk-based testing, where patients meeting one or more of the CDC testing recommendation risk factors, or with a history of homelessness, were eligible for HCV testing. Mid-course adjustments included the integration of dual HIV testing, the development of a linkage-to-care coordinator position to facilitate the transition of HIV- and/or HCV-positive patients from primary to specialist care, and the transition to universal HCV testing across all testing sites. Results: From October 2012 to June 2015, the health centers performed 7,730 HCV tests and identified 886 (11.5%) patients with a positive HCV-antibody test. Of those with positive HCV-antibody tests, 838 (94.6%) had an HCV-RNA confirmatory test and 590 (70.4%) progressed to current HCV infection (overall prevalence = 7.6%); 524 (88.8%) received their RNA-positive test result; 429 (72.7%) were referred to an HCV care specialist and 271 (45.9%) were seen by the HCV care specialist. The best linkage-to-care results were seen at the test-and-treat site, where, of the 333 patients with current HCV infection, 175 (52.6%) were seen by an HCV care specialist. Of the patients with active HCV infection, 349 (59.2%) were unaware of their HCV-positive status at the time of diagnosis. Since the integration of dual HCV/HIV testing in September 2013, 9,506 HIV tests were performed, 85 (0.9%) patients had positive HIV tests, 81 (95.3%) received their confirmed HIV test result and 77 (90.6%) were linked to HIV care. Dual HCV/HIV testing increased the number of HCV tests performed by 362 between the 9 months preceding dual testing and the first 9 months after dual testing integration, representing a 23.7% increase. Conclusion: Our HCV testing model shows that integrated routine testing and linkage to care is feasible and improved detection and linkage to care in a primary care setting. We found that the prevalence of current HCV infection was higher than that seen locally in Philadelphia and nationwide. Intensive linkage services can increase the number of patients who successfully navigate the HCV treatment cascade. The linkage-to-care coordinator is an important position that acts as a trusted intermediary for patients being linked to care.
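
For reference, the care-cascade proportions quoted above can be recomputed directly from the reported counts (denominators follow the text: positivity relative to all tests, RNA confirmation relative to antibody-positives, and the downstream steps relative to the 590 patients with current infection):

```python
# Recomputation of the reported HCV care-cascade percentages from raw counts.
tests, ab_pos, rna_done, current = 7730, 886, 838, 590
received, referred, seen = 524, 429, 271

print(f"antibody positive:   {ab_pos / tests:.1%}")     # 11.5%
print(f"RNA test completed:  {rna_done / ab_pos:.1%}")  # 94.6%
print(f"current infection:   {current / rna_done:.1%}") # 70.4%
print(f"overall prevalence:  {current / tests:.1%}")    # 7.6%
print(f"received result:     {received / current:.1%}") # 88.8%
print(f"referred to care:    {referred / current:.1%}") # 72.7%
print(f"seen by specialist:  {seen / current:.1%}")     # 45.9%
```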

Keywords: HCV, routine testing, linkage to care, community health centers

Procedia PDF Downloads 336
278 Online-Scaffolding-Learning Tools to Improve First-Year Undergraduate Engineering Students’ Self-Regulated Learning Abilities

Authors: Chen Wang, Gerard Rowe

Abstract:

The number of undergraduate engineering students enrolled in university has been increasing rapidly in recent years, leading to challenges associated with increased student-instructor ratios and increased diversity in the academic preparedness of the entrants. An increased student-instructor ratio makes the interaction between teachers and students more difficult, with the resulting student 'anonymity' known to be a risk to academic success. With increasing student numbers, there is also an increasing diversity in the academic preparedness of the students at entry to university. Conceptual understanding of the entrants has been quantified via diagnostic testing, with the results for the first-year course in electrical engineering showing significant conceptual misunderstandings amongst the entry cohort. The solution is clearly multi-faceted, but part of it likely involves greater demands being placed on students to be masters of their own learning. In consequence, it is highly desirable that instructors help students to develop better self-regulated learning skills. A self-regulated learner is one who is capable of setting up their own learning goals, monitoring their study processes, adopting and adjusting learning strategies, and reflecting on their own study achievements. The methods by which instructors might cultivate students' self-regulated learning abilities are receiving increasing attention from instructors and researchers. The aim of this study was to help students understand fully their self-regulated learning skill levels and to provide targeted instruction to help them improve particular learning abilities in order to meet the curriculum requirements. As a survey tool, this research applied the Motivated Strategies for Learning Questionnaire (MSLQ) to collect first-year engineering students' self-reported data on their cognitive abilities, motivational orientations and learning strategies. The MSLQ is a widely used questionnaire for assessing university students' self-regulated learning skills. The questionnaire was offered online as part of the online-scaffolding-learning tools to develop student understanding of self-regulated learning theories and learning strategies. The online tools, which have been under development since 2015, are designed to help first-year students understand their self-regulated learning skill levels by providing prompt feedback after they complete the questionnaire. In addition, the online tool also supplies corresponding learning strategies to students if they want to improve specific learning skills. A total of 866 first-year engineering students enrolled in the first-year electrical engineering course were invited to participate in this research project. By the end of the course, 857 students had responded and 738 of their questionnaires were considered valid. Analysis of these surveys showed that 66% of the students thought the online-scaffolding-learning tools helped significantly to improve their self-regulated learning abilities. It was particularly pleasing that 16.4% of the respondents thought the online-scaffolding-learning tools were extremely effective. A current thrust of our research is to investigate the relationships between students' self-regulated learning abilities and their academic performance. Our results are being used by the course instructors as they revise the curriculum and pedagogy for this fundamental first-year engineering course, but the general principles we have identified are applicable to most first-year STEM courses.
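
As a rough sketch of how MSLQ responses are typically scored, each subscale is the mean of its 7-point Likert items, with reversed items recoded as 8 minus the response; the item-to-subscale mapping below is a small hypothetical subset used only for illustration of the feedback computation.

```python
# Illustrative MSLQ-style subscale scoring on toy responses (1-7 Likert scale).
import pandas as pd

# Example responses: rows = students, columns = questionnaire items.
responses = pd.DataFrame({
    "q33": [5, 3, 6], "q36": [4, 2, 7], "q41": [6, 3, 5],   # hypothetical items
    "q52": [3, 6, 2], "q57": [2, 5, 1],                     # treated as reversed
})

subscales = {"metacognitive_self_regulation": ["q33", "q36", "q41", "q52", "q57"]}
reversed_items = ["q52", "q57"]

scored = responses.copy()
scored[reversed_items] = 8 - scored[reversed_items]   # reverse-code on a 7-point scale

for name, items in subscales.items():
    print(name, scored[items].mean(axis=1).round(2).tolist())
```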

Keywords: academic preparedness, online-scaffolding-learning tool, self-regulated learning, STEM education

Procedia PDF Downloads 90
277 Physicochemical Properties and Toxicity Studies on a Lectin from the Bulb of Dioscorea bulbifera

Authors: Uchenna Nkiruka Umeononihu, Adenike Kuku, Oludele Odekanyin, Olubunmi Babalola, Femi Agboola, Rapheal Okonji

Abstract:

In this study, a lectin from the bulb of Dioscorea bulbifera was purified and characterised, and its acute and sub-acute toxicity was investigated with a view to evaluating its toxic effects in mice. The protein was extracted by homogenising 50 g of the bulb in 500 ml of phosphate-buffered saline (0.025 M, pH 7.2), stirring for 3 hr, and centrifuging at 3000 rpm. Blood group and sugar specificity assays of the crude extract were carried out. The lectin was purified in a two-step procedure: gel filtration on Sephadex G-75 and affinity chromatography on Sepharose 4B-arabinose. The purity of the lectin was ascertained by SDS-polyacrylamide gel electrophoresis. Detection of covalently bound carbohydrate was carried out with the Periodic Acid-Schiff (PAS) reagent staining technique. The effects of temperature, pH, and EDTA on the lectin were investigated using standard methods. This was followed by acute toxicity studies via oral and subcutaneous routes in mice; the animals were monitored for mortality and signs of toxicity. The sub-acute toxicity studies were carried out in rats: different concentrations of the lectin were administered twice daily for 5 days via the subcutaneous route. The animals were sacrificed on the sixth day, and blood samples and liver tissues were collected. Biochemical assays (determination of total protein, direct bilirubin, alanine aminotransferase (ALT), aspartate aminotransferase (AST), catalase (CAT), and superoxide dismutase (SOD)) were carried out on the serum and liver homogenates. The collected organs (heart, liver, kidney, and spleen) were subjected to histopathological analysis. The results showed that the lectin from the bulbs of Dioscorea bulbifera agglutinated non-specifically the erythrocytes of the human ABO system as well as rabbit erythrocytes. The haemagglutinating activity was strongly inhibited by arabinose and dulcitol, with minimum inhibitory concentrations of 0.781 and 6.25, respectively. The lectin was purified to homogeneity with native and subunit molecular weights of 56,273 and 29,373 Daltons, respectively. It was thermostable up to 30 °C and lost 25%, 33.3%, and 100% of its haemagglutinating activity at 40 °C, 50 °C, and 60 °C, respectively. The lectin was maximally active at pH 4 and 5 but lost all activity at pH 8, while EDTA (10 mM) had no effect on its haemagglutinating activity. PAS staining showed that the lectin was not a glycoprotein. The sub-acute studies in rats showed elevated levels of ALT, AST, serum bilirubin, and total protein in serum and liver homogenates, suggesting damage to the liver and spleen. The study concluded that the lectin from the aerial bulb of D. bulbifera was non-specific in its haemagglutinating activity and dimeric in structure. It shared some physicochemical characteristics with lectins from other Dioscorea species and was moderately toxic to the liver and spleen of treated animals.

Keywords: Dioscorea bulbifera, haemagglutinin, lectin, toxicity

Procedia PDF Downloads 103
276 Clinical Characteristics of Autistic Children Receiving Care in Rehabilitation Centers in Sana'a City, Yemen

Authors: Hamdan Hamood Aldumaini, Amjad Hussein Meqdam, Shamsaldeen kassim Ali, Hamed Mohammed Al-Yousefi, Haron Ahmed Al-Badawi

Abstract:

Background: Autism Spectrum Disorder (ASD) is a complex developmental challenge characterized by significant impairments in social interaction, communication, and behavioral patterns. Diagnosing ASD is challenging due to the lack of definitive medical tests, making early identification crucial. Therefore, increasing people's awareness of autism leads to earlier diagnosis and a better prognosis. Objective: Our study aims to identify the initial symptoms prompting families to seek medical advice, determine the timeline between symptom onset and formal diagnosis, and explore methods for assessing the severity of ASD. Subjects and Methods: A descriptive cross-sectional design was employed, which was suitable for the nature of the research. Data collection took place from March 5, 2022, to April 5, 2022, in autism rehabilitation centers in Sana'a, Yemen. The study population consisted of all children who were diagnosed with autism and visited autism rehabilitation centers in Sana'a city. The sample size was determined using Epi Info version 7; the total population of autistic children attending treatment was 587, but only 250 children were included in this study (176 male vs. 74 female). Results: In terms of sociability problems, it was found that a significant proportion of Yemeni children with autism experienced difficulties in this area. Specifically, 39.6% were classified as having severe sociability problems, while 28.4% were classified as having moderate problems. Sensory-cognitive awareness problems were also prevalent among the respondents, with 29.6% exhibiting severe difficulties in this domain. Health and physical problems were identified as significant concerns for Yemeni children with autism: 38.4% of the participants experienced severe health and physical issues. Identifying the first symptoms of autism is crucial for early detection and intervention. According to the study, speech delay was the most commonly observed first abnormality, reported by 71.3% of parents. Communication difficulties with others were the second most noticed abnormality, reported by 54.9% of parents. Repetitive movements were the third most commonly observed abnormality, reported by 18% of parents. Regarding awareness among parents of ASD, our study showed that a significant portion (62%) of parents lack awareness about Autism Spectrum Disorder (ASD) and its causes. Surprisingly, a majority of these parents (over 80%) believe that autism is a curable condition. Additionally, more than half (51.2%) of the parents surveyed reported insufficient knowledge about medication options available to support therapy and rehabilitation for their autistic children.

Keywords: autism characteristics, rehabilitation centres, Yemen, children

Procedia PDF Downloads 12
275 Self-Organizing Maps for Exploration of Partially Observed Data and Imputation of Missing Values in the Context of the Manufacture of Aircraft Engines

Authors: Sara Rejeb, Catherine Duveau, Tabea Rebafka

Abstract:

To monitor the production process of turbofan aircraft engines, multiple measurements of various geometrical parameters are systematically recorded on manufactured parts. Engine parts are subject to extremely high standards as they can impact the performance of the engine. Therefore, it is essential to analyze these databases to better understand the influence of the different parameters on the engine's performance. Self-organizing maps are unsupervised neural networks which achieve two tasks simultaneously: they visualize high-dimensional data by projection onto a 2-dimensional map and provide clustering of the data. This technique has become very popular for data exploration since it provides easily interpretable results and a meaningful global view of the data. As such, self-organizing maps are usually applied to aircraft engine condition monitoring. As databases in this field are huge and complex, they naturally contain multiple missing entries for various reasons. The classical Kohonen algorithm to compute self-organizing maps is conceived for complete data only. A naive approach to deal with partially observed data consists in deleting items or variables with missing entries. However, this requires a sufficient number of complete individuals to be fairly representative of the population; otherwise, deletion leads to a considerable loss of information. Moreover, deletion can also induce bias in the analysis results. Alternatively, one can first apply a common imputation method to create a complete dataset and then apply the Kohonen algorithm. However, the choice of the imputation method may have a strong impact on the resulting self-organizing map. Our approach is to address simultaneously the two problems of computing a self-organizing map and imputing missing values, as these tasks are not independent. In this work, we propose an extension of self-organizing maps for partially observed data, referred to as missSOM. First, we introduce a criterion to be optimized, that aims at defining simultaneously the best self-organizing map and the best imputations for the missing entries. As such, missSOM is also an imputation method for missing values. To minimize the criterion, we propose an iterative algorithm that alternates the learning of a self-organizing map and the imputation of missing values. Moreover, we develop an accelerated version of the algorithm by entwining the iterations of the Kohonen algorithm with the updates of the imputed values. This method is efficiently implemented in R and will soon be released on CRAN. Compared to the standard Kohonen algorithm, it does not come with any additional cost in terms of computing time. Numerical experiments illustrate that missSOM performs well in terms of both clustering and imputation compared to the state of the art. In particular, it turns out that missSOM is robust to the missingness mechanism, which is in contrast to many imputation methods that are appropriate for only a single mechanism. This is an important property of missSOM as, in practice, the missingness mechanism is often unknown. An application to measurements on one type of part is also provided and shows the practical interest of missSOM.
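
A compact illustration of the alternating idea (not the authors' missSOM code or criterion) is sketched below: a small Kohonen map is fitted on the currently completed data, each missing entry is re-imputed from its best matching unit's prototype, and the two steps are repeated; the grid size, neighbourhood schedule and synthetic data are arbitrary choices.

```python
# Toy alternation of SOM fitting and imputation for partially observed data.
import numpy as np

rng = np.random.default_rng(0)

def fit_som_with_imputation(X, grid=(5, 5), n_iter=20, sigma0=1.5):
    n, d = X.shape
    mask = np.isnan(X)
    Xi = np.where(mask, np.nanmean(X, axis=0), X)        # start: column means
    gy, gx = np.meshgrid(range(grid[0]), range(grid[1]), indexing="ij")
    pos = np.column_stack([gy.ravel(), gx.ravel()]).astype(float)
    W = Xi[rng.choice(n, grid[0] * grid[1], replace=True)].copy()  # codebooks

    for it in range(n_iter):
        sigma = sigma0 * (0.1 / sigma0) ** (it / max(1, n_iter - 1))
        # Batch Kohonen update on the current completed data.
        d2 = ((Xi[:, None, :] - W[None, :, :]) ** 2).sum(-1)        # (n, units)
        bmu = d2.argmin(axis=1)
        h = np.exp(-((pos[bmu][:, None, :] - pos[None, :, :]) ** 2).sum(-1)
                   / (2 * sigma ** 2))
        W = (h.T @ Xi) / np.maximum(h.sum(axis=0)[:, None], 1e-12)
        # Re-impute missing entries from each sample's BMU prototype.
        Xi[mask] = W[bmu][mask]
    return W, Xi, bmu

# Synthetic data: 3 clusters in 4 dimensions with 15% of entries removed.
X = np.vstack([rng.normal(m, 0.3, (100, 4)) for m in (0.0, 2.0, 4.0)])
holes = rng.random(X.shape) < 0.15
X_missing = np.where(holes, np.nan, X)

W, X_imputed, bmu = fit_som_with_imputation(X_missing)
rmse = np.sqrt(np.mean((X_imputed[holes] - X[holes]) ** 2))
print("imputation RMSE on held-out entries:", round(rmse, 3))
```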

Keywords: imputation method of missing data, partially observed data, robustness to missingness mechanism, self-organizing maps

Procedia PDF Downloads 128
274 A Computational Approach to Screen Antagonist’s Molecule against Mycobacterium tuberculosis Lipoprotein LprG (Rv1411c)

Authors: Syed Asif Hassan, Tabrej Khan

Abstract:

Tuberculosis (TB), caused by the bacillus Mycobacterium tuberculosis (Mtb), continues to take a disturbing toll on human life and healthcare facilities worldwide. The global burden of TB remains enormous. The alarming rise of multi-drug resistant strains of Mycobacterium tuberculosis calls for an increase in research efforts towards the development of new target-specific therapeutics against diverse strains of M. tuberculosis. Therefore, the discovery of new molecular scaffolds targeting new drug sites should be a priority in a workable plan for fighting resistance in Mycobacterium tuberculosis (Mtb). The Mtb non-acylated lipoprotein LprG (Rv1411c) has Toll-like receptor 2 (TLR2) agonist actions that depend on its association with triacylated glycolipids, which bind specifically to the hydrophobic pocket of the Mtb LprG lipoprotein. The detection of a glycolipid carrier function has important implications for the role of LprG in mycobacterial physiology and virulence. Therefore, considering the pivotal role of glycolipids in mycobacterial physiology and host-pathogen interactions, designing competitive antagonist (chemotherapeutic) ligands that bind competitively to the glycolipid-binding domain of the LprG lipoprotein should lead to inhibition of tuberculosis infection in humans. In this study, a unified approach was implemented involving a ligand-based virtual screening protocol with the USRCAT (Ultrafast Shape Recognition with CREDO Atom Types) software and molecular docking studies with AutoDock Vina 1.1.2 using the X-ray crystal structure of the Mtb LprG protein. The docking results were further confirmed by DSX (DrugScore eXtended), a robust program to evaluate the binding energy of ligands bound to the ligand-binding domain of the Mtb LprG lipoprotein. A ligand with a higher predicted affinity has a more negative score. Based on the USRCAT results, Lipinski's values and the molecular docking results, [(2R)-2,3-di(hexadecanoyl oxy)propyl][(2S,3S,5S,6R)-3,4,5-trihydroxy-2,6-bis[[(2R,3S,4S,5R,6S)-3,4,5-trihydroxy-6 (hydroxymethyl)tetrahydropyran-2-yl]oxy]cyclohexyl] phosphate (XPX) was confirmed as a promising drug-like lead compound (antagonist) binding specifically to the hydrophobic domain of the LprG protein with an affinity greater than that of PIM2 (an agonist of the LprG protein), with a free binding energy of -9.98e+006 Kcal/mol and a binding affinity of -132 Kcal/mol, respectively. Further in vitro assays of this compound are required to establish its potency in inhibiting the molecular evasion mechanisms of Mtb within infected host macrophages. These results will certainly be helpful in future anti-TB drug discovery efforts against multidrug-resistant tuberculosis (MDR-TB).
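
As a hedged sketch of the drug-likeness filtering step only, the snippet below applies Lipinski's rule of five with RDKit; it does not reproduce the USRCAT shape screening or the AutoDock Vina/DSX scoring used in the study, and the SMILES strings are arbitrary examples rather than the screened library.

```python
# Rule-of-five filter with RDKit on example molecules (illustration only).
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

candidates = {
    "aspirin": "CC(=O)Oc1ccccc1C(=O)O",
    "lipid_like": "CCCCCCCCCCCCCCCCCC(=O)OC(COC(=O)CCCCCCCCCCCCCCCCC)CO",
}

def lipinski_pass(mol, max_violations=1):
    """Count rule-of-five violations; <=1 violation is commonly accepted."""
    violations = sum([
        Descriptors.MolWt(mol) > 500,
        Descriptors.MolLogP(mol) > 5,
        Lipinski.NumHDonors(mol) > 5,
        Lipinski.NumHAcceptors(mol) > 10,
    ])
    return violations <= max_violations, violations

for name, smi in candidates.items():
    mol = Chem.MolFromSmiles(smi)
    ok, v = lipinski_pass(mol)
    print(f"{name}: {'pass' if ok else 'fail'} ({v} violation(s))")
```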

Keywords: antagonist, agonist, binding affinity, chemotherapeutics, drug-like, multi drug resistance tuberculosis (MDR-TB), RV1411c protein, toll-like receptor (TLR2)

Procedia PDF Downloads 249
273 Design and Integration of an Energy Harvesting Vibration Absorber for Rotating System

Authors: F. Infante, W. Kaal, S. Perfetto, S. Herold

Abstract:

In the last decade, the demand for wireless sensors and low-power electric devices for condition monitoring of mechanical structures has increased strongly. Networks of wireless sensors can potentially be applied in a huge variety of applications. Due to the reduction of both size and power consumption of the electric components and the increasing complexity of mechanical systems, the interest in creating dense sensor-node networks has grown considerably. Nevertheless, with the development of large sensor networks with numerous nodes, the critical problem of powering them is drawing more and more attention. Batteries are not a viable option considering their lifetime, size, and the effort of replacing them. Among possible durable power sources usable in mechanical components, vibrations represent a suitable source for the amount of power required to feed a wireless sensor network. For this purpose, energy harvesting from structural vibrations has received much attention in the past few years. Suitable vibrations can be found in numerous mechanical environments, including automotive moving structures and household applications, but also civil engineering structures like buildings and bridges. Similarly, the dynamic vibration absorber (DVA) is one of the most used devices to mitigate unwanted vibration of structures. This device transfers the primary structural vibration to an auxiliary system, so that the related energy is effectively localized in the secondary, less sensitive structure. The additional benefit of harvesting part of this energy can then be obtained by implementing dedicated components. This paper describes the design process of an energy harvesting tuned vibration absorber (EHTVA) for rotating systems using piezoelectric elements, in which the energy of the vibration is converted into electricity rather than dissipated. The device proposed is designed to mitigate torsional vibrations as with a conventional rotational TVA, while harvesting energy as a power source for immediate use or storage. The resultant rotational multi-degree-of-freedom (MDOF) system is first reduced to an equivalent single-degree-of-freedom (SDOF) system. Den Hartog's theory is used for evaluating the optimal mechanical parameters of the initial DVA for the SDOF system defined. The performance of the TVA is assessed operationally, and the vibration reduction at the original resonance frequency is measured. The design is then modified for the integration of active piezoelectric patches without detuning the TVA. In order to estimate the real power generated, a complex storage circuit is implemented: a DC-DC step-down converter is connected to the device through a rectifier to return a fixed output voltage, and, with a large capacitor introduced, the stored energy is measured at different frequencies. Finally, the electromechanical prototype is tested and validated, achieving the reduction and harvesting functions simultaneously.
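
A minimal sketch of the Den Hartog tuning step for the equivalent SDOF system is given below: for a mass ratio mu, the classical optima are a frequency ratio of 1/(1+mu) and an absorber damping ratio of sqrt(3*mu/(8*(1+mu)**3)); the numerical inputs (mass ratio, primary frequency) are assumptions, not the prototype's values.

```python
# Classical Den Hartog optimum for a dynamic vibration absorber.
import math

def den_hartog_optimum(mu):
    """Optimal tuning and damping ratio for a given absorber mass ratio mu."""
    f_opt = 1.0 / (1.0 + mu)
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    return f_opt, zeta_opt

mu = 0.05                 # assumed absorber-to-primary (inertia) mass ratio
f_primary_hz = 45.0       # assumed resonance of the equivalent SDOF model

f_opt, zeta_opt = den_hartog_optimum(mu)
print(f"tuning ratio f_opt    = {f_opt:.4f}")
print(f"absorber frequency    = {f_opt * f_primary_hz:.2f} Hz")
print(f"optimal damping ratio = {zeta_opt:.4f}")
```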

Keywords: energy harvesting, piezoelectricity, torsional vibration, vibration absorber

Procedia PDF Downloads 118
272 Offshore Facilities Load Out: Case Study of Jacket Superstructure Loadout by Strand Jacking Skidding Method

Authors: A. Rahim Baharudin, Nor Arinee binti Mat Saaud, Muhammad Afiq Azman, Farah Adiba A. Sani

Abstract:

Objectives: This paper shares a case study on the engineering analysis, data analysis, and real-time data comparison used to qualify the strand wires' minimum breaking load (MBL) and safe working load (SWL) for a loadout operation in a new project, while at the same time eliminating the risk arising from discrepancies and misalignment between COMPANY Technical Standards and Industry Standards and Practices. The paper demonstrates 'Lean Construction' for the COMPANY's project by sustaining fit-for-purpose technical requirements for the loadout strand wire factor of safety (F.S). The case study utilizes historical engineering data from several loadout operations by the skidding method from different projects. It also demonstrates and qualifies the skidding wires' minimum breaking load and safe working load for future loadout operations of substructures and other facilities. Methods: Engineering analysis and comparison of data were carried out with reference to international standards and internal COMPANY standard requirements. Data were taken from nine (9) previous projects for both topside and jacket facilities executed at several local fabrication yards, where loadout was conducted by three (3) different service providers, with emphasis on four (4) basic elements: i) Industry standards for loadout engineering and operation: the COMPANY internal standard refers to the superseded documents DNV-OS-H201 and DNV/GL 0013/ND; DNV/GL 0013/ND and DNVGL-ST-N001 do not mention any requirement for a strand wire F.S of 4.0 for skidding/pulling operations. ii) Reference to past loadout engineering and execution packages: reference was made to projects delivered by three (3) major offshore facilities operators, where the observed strand wire F.S ranges from 2.0 MBL (min) to 2.5 MBL (max); no loadout operation using a requirement of 4.0 MBL was sighted in the references. iii) Strand jack equipment manufacturer datasheets: in the strand jack equipment datasheets from different loadout service providers, the designed F.S of the equipment also ranges between 2.0 and 2.5; eight (8) strand jack datasheet models were reviewed, ranging from 15 t to 850 t capacity, and no designed F.S of 4.0 was observed. iv) Site monitoring of actual loadout data and parameters: the maximum load on a strand wire was captured during the 2nd breakout, i.e., during the static condition, at 12.9 t per strand wire (67.9% utilization); the maximum load on a strand wire under dynamic conditions, during Step 8 and Step 12, was 9.4 t per strand wire (49.5% utilization). Conclusion: This analysis and study demonstrated that the strand wires supplied by the service provider were technically sufficient in terms of strength, and, via the engineering analysis conducted, the minimum breaking load and safe working load utilized and calculated for the projects were satisfactory and the operations were carried out safely. It is recommended from this study that the COMPANY's technical requirements be revised for use in future projects.
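
The strand-wire checks discussed above reduce to simple arithmetic, sketched below with hypothetical inputs: the safe working load follows from the minimum breaking load divided by the chosen factor of safety, and utilization is the measured line load divided by that capacity.

```python
# Back-of-envelope strand-wire check (all numerical inputs are examples only).
def safe_working_load(mbl_t, factor_of_safety):
    return mbl_t / factor_of_safety

def utilization(measured_load_t, capacity_t):
    return measured_load_t / capacity_t

mbl = 38.0        # assumed minimum breaking load per strand wire [t]
measured = 12.9   # example peak static load per strand wire [t]

for fs in (2.0, 2.5, 4.0):
    swl = safe_working_load(mbl, fs)
    print(f"F.S {fs:>3}: SWL = {swl:5.1f} t, "
          f"utilization at {measured} t = {utilization(measured, swl):.1%}")
```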

Keywords: construction, load out, minimum breaking load, safe working load, strand jacking, skidding

Procedia PDF Downloads 82