Search results for: critical applied linguistics
1816 A Machine Learning Model for Dynamic Prediction of Chronic Kidney Disease Risk Using Laboratory Data, Non-Laboratory Data, and Metabolic Indices
Authors: Amadou Wurry Jallow, Adama N. S. Bah, Karamo Bah, Shih-Ye Wang, Kuo-Chung Chu, Chien-Yeh Hsu
Abstract:
Chronic kidney disease (CKD) is a major public health challenge with high prevalence, rising incidence, and serious adverse consequences. Developing effective risk prediction models is a cost-effective approach to predicting and preventing CKD complications. This study aimed to develop an accurate machine learning model that can dynamically identify individuals at risk of CKD using various kinds of diagnostic data, with or without laboratory data, at different follow-up points. Creatinine is a key component used to predict CKD. These models will enable affordable and effective screening for CKD even with incomplete patient data, such as the absence of creatinine testing. This retrospective cohort study included data on 19,429 adults provided by a private research institute and screening laboratory in Taiwan, gathered between 2001 and 2015. Univariate Cox proportional hazard regression analyses were performed to determine the variables with high prognostic value for predicting CKD. We then identified interacting variables and grouped them according to diagnostic data categories. Our models used three types of data gathered at three points in time: non-laboratory, laboratory, and metabolic indices data. Next, we used subgroups of variables within each category to train two machine learning models (Random Forest and XGBoost). Our machine learning models can dynamically discriminate individuals at risk of developing CKD. All the models performed well using all three kinds of data, with or without laboratory data. Using only non-laboratory data (such as age, sex, body mass index (BMI), and waist circumference), both models predicted chronic kidney disease as accurately as models using laboratory and metabolic indices data. Our machine learning models have demonstrated the use of different categories of diagnostic data for CKD prediction, with or without laboratory data.
The machine learning models are simple to use and flexible because they work even with incomplete data and can be applied in any clinical setting, including settings where laboratory data is difficult to obtain.
Keywords: chronic kidney disease, glomerular filtration rate, creatinine, novel metabolic indices, machine learning, risk prediction
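The modelling step described above can be sketched in a few lines. This is an illustrative sketch only, not the authors' code: the synthetic data, the non-laboratory feature set (age, sex, BMI, waist circumference), and the hyperparameters are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
# Hypothetical non-laboratory predictors: age, sex, BMI, waist circumference
X = np.column_stack([
    rng.uniform(20, 80, n),   # age (years)
    rng.integers(0, 2, n),    # sex (0 = female, 1 = male)
    rng.uniform(18, 35, n),   # BMI (kg/m^2)
    rng.uniform(60, 120, n),  # waist circumference (cm)
])
y = rng.integers(0, 2, n)     # synthetic CKD outcome labels (illustrative only)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)  # held-out accuracy
```

In the study itself, XGBoost was trained the same way on the same feature subsets, and the comparison was repeated with laboratory and metabolic-index features added.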
Procedia PDF Downloads 105
1815 The Role and Challenges of Media in the Transformation of Contemporary Nigeria Democracies
Authors: Henry Okechukwu Onyeiwu
Abstract:
The role of media in the transformation of contemporary Nigeria's democracies is multifaceted and profoundly impactful. As Nigeria navigates its complex socio-political landscape, media serves as both a catalyst for democratic engagement and a platform for public discourse. This paper explores the various dimensions through which media influences democracy in Nigeria, including its role in informing citizens, shaping public opinion, and providing a forum for diverse voices. The increasing penetration of social media has revolutionized the political sphere, empowering citizens to participate in governance and hold leaders accountable. However, challenges such as misinformation, censorship, and media bias continue to pose significant threats to democratic integrity. This study critically analyzes the interplay between traditional and new media, highlighting their contributions to electoral processes, civic education, and advocacy for human rights. Ultimately, the findings illustrate that while media is a crucial agent for democratic transformation, its potential can only be realized through a commitment to journalistic integrity and the promotion of media literacy among the Nigerian populace. The media plays a critical role in shaping public democracies in Nigeria, yet it faces a myriad of challenges that hinder its effectiveness. This paper examines the various obstacles confronting media broadcasting in Nigeria, which range from political interference and censorship to issues of professionalism and the proliferation of fake news. Political interference is particularly pronounced, as government entities and political actors often attempt to control narratives, compromising the independence of media outlets. This control often manifests in the form of censorship, where journalists face threats and harassment for reporting on sensitive topics related to governance, corruption, and human rights abuses. 
Moreover, the rapid rise of social media has introduced a dual challenge: while it offers a platform for citizen engagement and diverse viewpoints, it also facilitates the spread of misinformation and propaganda. The lack of media literacy among the populace exacerbates this issue, as citizens often struggle to discern credible information from false narratives. Additionally, economic constraints deeply affect the sustainability and independence of many broadcasting organizations. Advertisers may unduly influence content, leading to sensationalism over substantive reporting. This paper argues that for media to effectively contribute to Nigerian public democracies, there needs to be a concerted effort to address these challenges. Strengthening journalistic ethics, enhancing regulatory frameworks, and promoting media literacy among citizens are essential steps in fostering a more vibrant and accountable media landscape. Ultimately, this research underscores the necessity of a resilient media ecosystem that can truly support democratic processes, empower citizens, and hold power to account in contemporary Nigeria.
Keywords: media, democracy, socio-political, governance
Procedia PDF Downloads 21
1814 Influence Study of the Molar Ratio between Solvent and Initiator on the Reaction Rate of Polyether Polyols Synthesis
Authors: María José Carrero, Ana M. Borreguero, Juan F. Rodríguez, María M. Velencoso, Ángel Serrano, María Jesús Ramos
Abstract:
Flame retardants are incorporated in different materials in order to reduce the risk of fire, either by providing increased resistance to ignition or by acting to slow down combustion and thereby delay the spread of flames. In this work, polyether polyols with fire retardant properties were synthesized due to their wide application in polyurethane formulations. The combustion of polyurethanes is primarily dependent on the thermal properties of the polymer, the presence of impurities and formulation residue in the polymer, as well as the supply of oxygen. There are many types of flame retardants, most of them phosphorus compounds of different nature and functionality. The addition of these compounds is the most common method for the incorporation of flame retardant properties. The employment of glycerol phosphate sodium salt as the initiator for the polyol synthesis allows obtaining polyols with phosphate groups in their structure. However, some of the critical points of the use of glycerol phosphate salt are the lower reactivity of the salt and the necessity of a solvent (dimethyl sulfoxide, DMSO). Thus, the main aim of the present work was to determine the amount of solvent needed to achieve good solubility of the initiator salt. Although the anionic polymerization mechanism of polyether formation is well known, it seems convenient to clarify the role that DMSO plays at the starting point of the polymerization process. The catalyst deprotonates the hydroxyl groups of the initiator, forming two water molecules and the glycerol phosphate alkoxide. This alkoxide, together with DMSO, has to form a homogeneous mixture in which the initiator (solid) and the propylene oxide (PO) are soluble enough to mutually interact. The addition rate of PO increased when the solvent/initiator ratios studied were increased, which also made the initiation step shorter.
Furthermore, the molecular weight of the polyol decreased when higher solvent/initiator ratios were used, which revealed that a larger amount of salt was activated, initiating more chains of lower length but allowing more phosphate molecules to react and increasing the percentage of phosphorus in the final polyol. However, the final phosphorus content was lower than the theoretical one because only a percentage of the salt was activated. On the other hand, glycerol phosphate disodium salt was still partially insoluble in the DMSO proportions studied; thus, the recovery and reuse of this part of the salt for the synthesis of new flame retardant polyols was evaluated. With the recovered salt, the rate of addition of PO remained the same as with the commercial salt, but a shorter induction period was observed because the recovered salt presents a higher amount of deprotonated hydroxyl groups. Besides, according to molecular weight, polydispersity index, FT-IR spectrum, and thermal stability, there were no differences between the two synthesized polyols. Thus, it is possible to use the recovered glycerol phosphate disodium salt in the same way as the commercial one.
Keywords: DMSO, fire retardants, glycerol phosphate disodium salt, recovered initiator, solvent
Procedia PDF Downloads 278
1813 Positive Incentives to Reduce Private Car Use: A Theory-Based Critical Analysis
Authors: Rafael Alexandre Dos Reis
Abstract:
Research has shown a substantial increase in the participation of Conventionally Fuelled Vehicles (CFVs) in the urban transport modal split. The reasons for this unsustainable reality are multiple, from economic interventions to individual behaviour. The development and delivery of positive incentives for the adoption of more environmentally friendly modes of transport is an emerging strategy to help in tackling the problem of excessive use of conventionally fuelled vehicles. The efficiency of this approach, like other information-based schemes, can benefit from knowledge of their potential impacts on the theoretical constructs of multiple behaviour change theories. The goal of this research is to critically analyse theories of behaviour that are relevant to transport research and the impacts of positive incentives on the theoretical determinants of behaviour, strengthening the current body of evidence about the benefits of this approach. The main method to investigate this will involve a literature review on two main topics: the current theories of behaviour that have empirical support in transport research and the past or ongoing positive incentive programs that had an impact on car use reduction. The reviewed programs of positive incentives were the following: The TravelSmart®; Spitsmijden®; Incentives for Singapore Commuters® (INSINC); COMMUTEGREENER®; MOVESMARTER®; STREETLIFE®; SUPERHUB®; SUNSET® and the EMPOWER® project. The theories analysed were the Theory of Planned Behaviour (TPB); The Norm Activation Theory (NAM); Social Learning Theory (SLT); The Theory of Interpersonal Behaviour (TIB); The Goal-Setting Theory (GST) and The Value-Belief-Norm Theory (VBN).
After reviewing the theoretical constructs of each of the theories and their influence on car use, it can be concluded that positive incentive schemes impact behaviour change in the following ways:
- Changing individuals' attitudes through informational incentives;
- Increasing feelings of moral obligation to reduce the use of CFVs;
- Increasing the perceived social pressure to engage in more sustainable mobility behaviours, for example through comparison mechanisms in social media;
- Increasing the perceived control of behaviour through informational incentives and training incentives;
- Increasing personal norms with reinforcing information;
- Providing tools for self-monitoring and self-evaluation;
- Providing real experiences in alternative modes to the car;
- Making the observation of others' car use reduction possible;
- Informing about the consequences of behaviour and emphasizing the individual's responsibility to society and the environment;
- Increasing the perception of the consequences of car use for an individual's valued objects;
- Increasing the perceived ability to reduce threats to the environment;
- Helping establish goals to reduce car use;
- Giving personalized feedback on the goal;
- Increasing feelings of commitment to the goal;
- Reducing the perceived complexity of the use of alternatives to the car.
It is notable that the emerging technique of delivering positive incentives is systematically connected to causal determinants of travel behaviour. The preliminary results of the reviewed programs evidence how positive incentives might strengthen these determinants and help in the process of behaviour change.
Keywords: positive incentives, private car use reduction, sustainable behaviour, voluntary travel behaviour change
Procedia PDF Downloads 339
1812 ‘Call Before, Save Lives’: Reducing Emergency Department Visits through Effective Communication
Authors: Sandra Cardoso, Gaspar Pais, Judite Neves, Sandra Cavaca, Fernando Araújo
Abstract:
In 2021, Portugal had 63 emergency department (ED) visits per 100 people annually, the highest number in Europe. While EDs provide a critical service, high use is indicative of inappropriate and inefficient healthcare. In Portugal, all EDs use the Manchester Triage System (MTS), a clinical risk management tool to ensure that patients are seen in order of clinical priority. In 2023, more than 40% of ED visits were for non-urgent conditions (blue and green) that could be better managed in primary health care (PHC), indicating misuse of resources and a lack of health literacy. Since 2017, the country has had a phone line, SNS24 (Contact Centre of the National Health Service), for triage, counseling, and referral, 24 hours a day, 7 days a week. The pilot project ‘Call before, save lives’ was implemented in the municipalities of Póvoa de Varzim and Vila do Conde (around 150,000 residents) in May 2023 by the executive board of the Portuguese Health Service, with the support of the Shared Services of the Ministry of Health and local authorities. This geographical area has short travel times, 99% of the population has a family doctor, and the region is organized as a health local unit (HLU), integrating PHC and the local hospital. The purposes of this project included raising awareness to contact SNS24 before going to an ED and directing non-urgent conditions to a family doctor, reducing ED visits. The implementation of the project involved two phases, beginning with: i) development of campaigns using local influencers (fishmonger, model, fireman) through local institutions and media; ii) provision of telephones installed on site to contact SNS24; iii) establishment of open consultations in PHC; iv) promotion of the use of SNS24; v) creation of acute consultations at the hospital for complex chronic patients; and vi) direct referral for home hospitalization by PHC.
The results of this project showed an excellent level of access to SNS24 and an increase in the number of users referred to the ED, with great satisfaction among users and professionals. In the second phase, initiated in January 2024, prior referral was established as an admission rule for access to the ED, except in certain situations, such as trauma patients. If the patient refuses, their registration in the ED and subsequent screening in accordance with the MTS must be ensured. When the patient is non-urgent, they shall not be observed in the ED, provided that, according to their clinical condition, referral to PHC or to a consultation/day hospital is guaranteed through effective scheduling of an appointment for the same or the following day. In terms of results, eight weeks after the beginning of phase 2, we observed a decrease in self-referred ED patients from 59% to 15% and a reduction of around 7% in ED visits. The key to this success was an effective public campaign that increased knowledge of the right use of the health system and was capable of changing behaviors.
Keywords: contact centre of the national health service, emergency department visits, public campaign, health literacy, SNS24
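The phase-2 admission rule described above (non-urgent MTS colours redirected to primary health care unless an exception such as trauma applies) can be sketched as a small routing function. The colour-to-routing mapping and the destination strings are our illustrative reading of the abstract, not an official SNS24 or MTS specification.

```python
def route_patient(mts_colour: str, trauma: bool = False) -> str:
    """Sketch of the phase-2 referral rule (illustrative, not official)."""
    urgent = {"red", "orange", "yellow"}          # MTS urgent priorities
    if trauma or mts_colour in urgent:
        return "emergency department"
    if mts_colour in {"green", "blue"}:           # non-urgent priorities
        # Redirected to PHC with an appointment for the same or next day
        return "primary health care appointment (same or next day)"
    return "emergency department"                 # unknown colour: default to ED
```

For example, `route_patient("blue")` returns the PHC routing, while `route_patient("green", trauma=True)` falls under the trauma exception and is admitted to the ED.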
Procedia PDF Downloads 67
1811 Ligandless Extraction and Determination of Trace Amounts of Lead in Pomegranate, Zucchini and Lettuce Samples after Dispersive Liquid-Liquid Microextraction with Ultrasonic Bath and Optimization of Extraction Condition with RSM Design
Authors: Fariba Tadayon, Elmira Hassanlou, Hasan Bagheri, Mostafa Jafarian
Abstract:
Heavy metals are released into water, plants, soil, and food by natural and human activities. Lead plays a toxic role in the human body and may cause serious problems even at low concentrations, since it has several adverse effects on humans. Therefore, the determination of lead in different samples is an important procedure in studies of environmental pollution. In this work, an ultrasonic-assisted ionic-liquid-based dispersive liquid-liquid microextraction (UA-IL-DLLME) procedure for the determination of lead in zucchini, pomegranate, and lettuce has been established and developed using a flame atomic absorption spectrometer (FAAS). For the UA-IL-DLLME procedure, 10 mL of the sample solution containing Pb2+ was adjusted to pH=5 in a glass test tube with a conical bottom; then, 120 μL of 1-hexyl-3-methylimidazolium hexafluorophosphate (CMIM)(PF6) was rapidly injected into the sample solution with a microsyringe. After that, the resulting cloudy mixture was treated ultrasonically for 5 min; the separation of the two phases was then obtained by centrifugation for 5 min at 3000 rpm, the IL phase was diluted with 1 mL of ethanol, and the analytes were determined by FAAS. The effects of different experimental parameters in the extraction step, including ionic liquid volume, sonication time, and pH, were studied and optimized simultaneously by using Response Surface Methodology (RSM) employing a central composite design (CCD). The optimal conditions were determined to be an ionic liquid volume of 120 μL, a sonication time of 5 min, and pH=5. The linear range of the calibration curve for the determination of lead by FAAS was 0.1-4 ppm with R2=0.992. Under optimized conditions, the limit of detection (LOD) for lead was 0.062 μg.mL-1, the enrichment factor (EF) was 93, and the relative standard deviation (RSD) for lead was calculated as 2.29%. The levels of lead for pomegranate, zucchini, and lettuce were calculated as 2.88 μg.g-1, 1.54 μg.g-1, and 2.18 μg.g-1, respectively.
Therefore, this method has been successfully applied for the analysis of the content of lead in different food samples by FAAS.
Keywords: dispersive liquid-liquid microextraction, central composite design, food samples, flame atomic absorption spectrometry
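The calibration arithmetic behind the reported figures (linear range, R², LOD) can be illustrated with a short sketch. The calibration points and blank standard deviation below are invented for illustration; the formula LOD = 3σ_blank/slope is the conventional definition used with calibration-curve methods.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares line fit; returns slope, intercept, R^2."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1 - ss_res / ss_tot

# Hypothetical calibration standards across the stated 0.1-4 ppm range
conc = [0.1, 0.5, 1.0, 2.0, 4.0]                 # ppm
absorbance = [0.012, 0.051, 0.098, 0.20, 0.41]   # invented readings

slope, intercept, r2 = linear_fit(conc, absorbance)
sigma_blank = 0.002                  # hypothetical blank standard deviation
lod = 3 * sigma_blank / slope        # limit of detection, same units as conc
```

With real FAAS data, `r2` corresponds to the reported 0.992 and `lod` to the 0.062 μg/mL figure.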
Procedia PDF Downloads 283
1810 Synthesis and Characterization of Sulfonated Aromatic Hydrocarbon Polymers Containing Trifluoromethylphenyl Side Chain for Proton Exchange Membrane Fuel Cell
Authors: Yi-Chiang Huang, Hsu-Feng Lee, Yu-Chao Tseng, Wen-Yao Huang
Abstract:
Proton exchange membranes, as a key component in fuel cells, have been widely studied over the past few decades. Proton exchange membranes should have certain key characteristics, such as good mechanical properties, high oxidative stability, and high proton conductivity. In this work, trifluoromethyl groups were introduced on the polymer backbone and phenyl side chain, which provides densely located sulfonic acid group substitution and also promotes solubility and thermal and oxidative stability. Herein, a series of novel sulfonated aromatic hydrocarbon polyelectrolytes was synthesized by polycondensation of 4,4''''-difluoro-3,3''''-bis(trifluoromethyl)-2'',3''-bis(3-(trifluoromethyl)phenyl)-1,1':4',1'':4'',1''':4''',1''''-quinquephenyl with 2'',3''',5'',6''-tetraphenyl-[1,1':4',1'':4'',1''':4''',1''''-quinquephenyl]-4,4''''-diol, and post-sulfonation was carried out with chlorosulfonic acid to give sulfonated polymers (SFC3-X) possessing ion exchange capacities of 1.93, 1.91, and 2.53 mmol/g. ¹H NMR and FT-IR spectroscopy were applied to confirm the structure and composition of the sulfonated polymers. The membranes exhibited considerable dimensional stability (10-27.8% in length change; 24-56.5% in thickness change) and excellent oxidative stability (weight remaining higher than 97%). The mechanical properties of the membranes demonstrated good tensile strength on account of the highly rigid multi-phenylated backbone. Young's moduli ranged from 0.65 to 0.77 GPa, much larger than that of Nafion 211 (0.10 GPa). Proton conductivities of the membranes ranged from 130 to 240 mS/cm at 80 °C under fully humidified conditions, comparable to or higher than that of Nafion 211 (150 mS/cm). The morphology of the membranes was investigated by transmission electron microscopy, which demonstrated a clear hydrophilic/hydrophobic phase separation with spherical ionic clusters in the size range of 5-20 nm.
The SFC3-1.97 single fuel cell demonstrated a maximum power density of 1.08 W/cm², with Nafion 211 at 1.24 W/cm² as a reference in this work. The results indicate that the SFC3-X polymers are good candidates for proton exchange membranes in fuel cell applications. Fuel cell testing of the other membranes is underway.
Keywords: fuel cells, polyelectrolyte, proton exchange membrane, sulfonated polymers
Procedia PDF Downloads 453
1809 Evaluation of the Effectiveness of Barriers for the Control of Rats in Rice Plantation Field
Authors: Melina, Jumardi Jumardi, Erwin Erwin, Sri Nuraminah, Andi Nasruddin
Abstract:
The rice field rat (Rattus argentiventer Robinson and Kloss) is the pest causing the greatest yield loss of rice plants, especially in lowland agroecosystems with intensive cropping patterns (2-3 plantings per year). Field rats damage rice plants at all stages of growth, from seedling to harvest, and even in storage warehouses. Severe damage, with yield loss of up to 100%, occurs if rats attack rice at the generative stage because the plants are no longer able to recover by forming new tillers. Farmers mainly use rodenticides in the form of poisoned baits or as fumigants applied to rat burrow holes. This practice is generally less effective because rats are able to avoid the poison or become resistant after several exposures to it. In addition, excessive use of rodenticides can have negative impacts on the environment and non-target organisms. For this reason, this research was conducted to evaluate the effectiveness of fences as an environmentally friendly mechanical control method for reducing rice yield losses due to rat attacks. This study used a factorial randomized block design. The first factor was the fence material, namely galvanized zinc plate and plastic. The second factor was the height of the fence, namely 25, 50, 75, and 100 cm from ground level. Each treatment combination was replicated five times. The data show that zinc fences with a height of 75 and 100 cm were able to provide full protection to plants from rat infestations throughout the planting season. However, zinc fences with a height of 25 and 50 cm failed to prevent rat attacks. Plastic fences with a height of 25 and 50 cm failed to prevent rat attacks during the planting season, whereas those of 75 and 100 cm were able to prevent rat attacks until all the crops outside the fence had been eaten by rats. The rats managed to get inside by gnawing through the plastic fence close to the ground.
Thus, the research results show that fences made of zinc plate with a height of at least 75 cm from the ground surface are effective in preventing plant damage caused by rats. To our knowledge, this research is the first to quantify the effectiveness of fences for the control of field rodents.
Keywords: rice field rat, Rattus argentiventer, fence, rice
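The factorial randomized block design above (2 materials × 4 heights × 5 blocks) can be summarized by computing the mean response per treatment combination before any formal analysis of variance. The damage percentages below are invented for illustration; they merely mimic the qualitative pattern the abstract reports (full protection only for 75 and 100 cm zinc fences).

```python
# Hypothetical observations: (material, height_cm) -> plant damage % per block
damage = {
    ("zinc", 25): [60, 55, 65, 58, 62],    ("zinc", 50): [30, 35, 28, 32, 31],
    ("zinc", 75): [0, 0, 0, 0, 0],         ("zinc", 100): [0, 0, 0, 0, 0],
    ("plastic", 25): [70, 68, 72, 69, 71], ("plastic", 50): [50, 48, 52, 49, 51],
    ("plastic", 75): [20, 18, 22, 19, 21], ("plastic", 100): [15, 14, 16, 13, 17],
}

# Mean damage for each material x height combination, averaged over blocks
means = {combo: sum(vals) / len(vals) for combo, vals in damage.items()}
```

In a full analysis, these treatment means would feed a two-way ANOVA with blocks, testing the material, height, and interaction effects.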
Procedia PDF Downloads 40
1808 Developing Manufacturing Process for the Graphene Sensors
Authors: Abdullah Faqihi, John Hedley
Abstract:
Biosensors play a significant role in the healthcare sector and in scientific and technological progress. Developing electrodes that are easy to manufacture and deliver better electrochemical performance is advantageous for diagnostics and biosensing. They can be implemented extensively in various analytical tasks such as drug discovery, food safety, medical diagnostics, process control, security and defence, in addition to environmental monitoring. The development of biosensors aims to create high-performance electrochemical electrodes for diagnostics and biosensing. A biosensor is a device that inspects the biological and chemical reactions generated by a biological sample. A biosensor carries out biological detection via a linked transducer and converts the biological response into an electrical signal; stability, selectivity, and sensitivity are the dynamic and static characteristics that affect and dictate the quality and performance of biosensors. In this research, an experimental study of the laser scribing technique for processing graphene oxide inside a vacuum chamber is presented. The processing of graphene oxide (GO) was achieved using the laser scribing technique. The effect of laser scribing on the reduction of GO was investigated under two conditions: atmosphere and vacuum. A GO solution was coated onto a LightScribe DVD. The laser scribing technique was applied to reduce GO layers and generate rGO. The micro-details of the morphological structures of rGO and GO were visualised and examined using scanning electron microscopy (SEM) and Raman spectroscopy. The first electrode was a traditional graphene-based electrode model, made under normal atmospheric conditions, whereas the second was a graphene electrode fabricated under vacuum using a vacuum chamber. The purpose was to control the vacuum conditions, such as the air pressure and the temperature, during the fabrication process.
The parameters assessed include the layer thickness and the processing environment. The results presented show high accuracy and repeatability, achieved at low production cost.
Keywords: laser scribing, LightScribe DVD, graphene oxide, scanning electron microscopy
Procedia PDF Downloads 120
1807 Project-Based Learning Application: Applying Systems Thinking Concepts to Assure Continuous Improvement
Authors: Kimberley Kennedy
Abstract:
The major findings of this study concern the importance of understanding and applying systems thinking concepts to ensure an effective Project-Based Learning environment. A pilot study of a major pedagogical change was conducted over a five-year period with the goal of giving students real-world, hands-on learning experiences and the opportunity to apply what they had learned over the past two years of their business program. The first two weeks of the fifteen-week semester utilized teaching methods of lectures, guest speakers, and design thinking workshops to prepare students for the project work. For the remaining thirteen weeks of the semester, the students worked with actual business owners and clients on projects and challenges. The first three years of the five-year study focused on student feedback to ensure a quality learning experience, and a continuous improvement process was developed. In the final two years of the study, the conceptual understanding and perception of learning and teaching by faculty using Project-Based Learning pedagogy, as compared to lectures and more traditional teaching methods, were examined. Relevant literature was reviewed, and data were collected from program faculty participants who completed pre- and post-semester interviews and surveys over a two-year period. Systems thinking concepts were applied to better understand the challenges for faculty using Project-Based Learning pedagogy as compared to more traditional teaching methods. Factors such as instructor and student fatigue, motivation, quality of work, and enthusiasm were explored to better understand how to provide faculty with effective support and resources when using Project-Based Learning pedagogy as the main teaching method.
This study provides value by presenting generalizable, foundational knowledge and by offering suggestions for practical solutions to assure student and teacher engagement in Project-Based Learning courses.
Keywords: continuous improvement, project-based learning, systems thinking, teacher engagement
Procedia PDF Downloads 119
1806 The Amount of Conformity of Persian Subject Headlines with Users' Social Tagging
Authors: Amir Reza Asnafi, Masoumeh Kazemizadeh, Najmeh Salemi
Abstract:
Due to the diversity of information resources in the Web 2.0 environment, which is continually increasing in number, the social tagging system should be used to describe Internet resources. Studying the relevance of social tags to subject headings can help enrich resources and make them more accessible to users. The present research is applied-theoretical in type, with content analysis as its research method. In this study, using the listing method and content analysis, the levels of exact, approximate, relative, and non-conformity of the social tags of books in the field of information science and bibliography available on the Kitabrah website with Persian subject headings were determined. Exact matches between subject headings and social tags averaged 22 items, approximate matches averaged 36 items, relative matches averaged 36 items, and non-matches averaged 116 items. According to the findings, exact matching of subject headings with social tags is the lowest category and non-matching the highest. This study showed that the average non-conformity of subject headings with social tags is even higher than the sum of the three types of exact, relative, and approximate matching. As a result, the relevance of subject headings to social tags is low. This is because subject headings take the form of static text with which users cannot interact and into which they cannot insert newly selected words and topics, whereas websites based on Web 2.0 and the social classification system make this possible for users. An important point of the present study, and of studies that have examined the syntactic and semantic matching of social tags with subject headings, is that the degree of conformity of subject headings with social tags is low.
Therefore, these two methods can complement each other and create a hybrid cataloging that includes subject headings and social tags. The low level of conformity of subject headings with social tags confirms the results of previous studies that have compared the social tags of books with the subject headings of the Library of Congress. Matching social tags with subject headings is not enough on its own; the two methods should be treated as complementary.
Keywords: Web 2.0, social tags, subject headings, hybrid cataloging
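The exact-match step of the content analysis described above can be sketched as a set intersection over normalized strings. The example tags and headings are invented; the study's approximate and relative match categories required human judgment and are not captured by this sketch.

```python
def exact_matches(tags, headings):
    """Exact (case- and whitespace-insensitive) matches between social tags
    and subject headings, as a set intersection. Illustrative sketch only."""
    normalize = lambda s: s.strip().lower()
    return {normalize(t) for t in tags} & {normalize(h) for h in headings}

# Invented example: one book's social tags vs. its subject headings
tags = ["Information Science", "cataloging", "web 2.0"]
headings = ["information science", "Knowledge organization"]
matched = exact_matches(tags, headings)
```

Repeating this per book and averaging the counts would yield figures analogous to the study's per-category averages (22 exact matches, 116 non-matches, and so on).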
Procedia PDF Downloads 159
1805 Structural and Functional Comparison of Untagged and Tagged EmrE Protein
Authors: S. Junaid S. Qazi, Denice C. Bay, Raymond Chew, Raymond J. Turner
Abstract:
EmrE, a member of the small multidrug resistance protein family in bacteria, is considered the archetypical member of its family. It confers host resistance to a wide variety of quaternary cation compounds (QCCs), driven by the proton motive force. Generally, purification yield is a challenge for all membrane proteins because of the difficulties in their expression, isolation, and solubilization; EmrE is extremely hydrophobic, which makes its purification yield especially challenging. We have purified EmrE protein using two different approaches: organic solvent membrane extraction and hexahistidine (his6)-tagged Ni-affinity chromatography. We have characterized the differences in ligand affinity between untagged and his6-tagged EmrE proteins in similar membrane-mimetic environments using biophysical experimental techniques. Purified proteins were solubilized in a buffer containing n-dodecyl-β-D-maltopyranoside (DDM), and the conformations of the proteins were explored in the presence of four QCCs: methyl viologen (MV), ethidium bromide (EB), cetylpyridinium chloride (CTP), and tetraphenylphosphonium (TPP). SDS-Tricine PAGE and dynamic light scattering (DLS) analysis revealed that the addition of QCCs did not induce higher multimeric forms of either protein at any of the QCC:EmrE molar ratios examined under the solubilization conditions applied. QCC binding curves obtained from the Trp fluorescence quenching spectra gave the values of the dissociation constant (Kd) and maximum specific one-site binding (Bmax). Lower Bmax values for QCC binding to his6-tagged EmrE show that binding sites remained unoccupied. This lower saturation suggests that the his6-tagged versions adopt a conformation that prevents saturated binding.
Our data demonstrate that tagging an integral membrane protein can significantly influence the protein.
Keywords: small multidrug resistance (SMR) protein, EmrE, integral membrane protein folding, quaternary ammonium compounds (QAC), quaternary cation compounds (QCC), nickel affinity chromatography, hexahistidine (His6) tag
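The Kd and Bmax values mentioned in the abstract are the parameters of a one-site binding model, B = Bmax·[L]/(Kd + [L]). As a minimal illustration of how such parameters can be recovered from a binding curve (a hedged sketch with synthetic data and a simple grid search, not the study's actual fitting procedure):

```python
# Sketch: fit a one-site binding model B = Bmax*L/(Kd + L) to synthetic
# data via grid search (illustrative only; real analyses typically use
# nonlinear least squares on measured fluorescence-quenching data).

def binding(L, bmax, kd):
    return bmax * L / (kd + L)

# Synthetic "measured" points generated from known parameters.
true_bmax, true_kd = 10.0, 2.0
ligand = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]            # ligand concentrations
observed = [binding(L, true_bmax, true_kd) for L in ligand]

def sse(bmax, kd):
    """Sum of squared errors between model and observations."""
    return sum((binding(L, bmax, kd) - B) ** 2 for L, B in zip(ligand, observed))

# Coarse grid search over plausible parameter ranges.
grid = [(8.0 + 0.5 * i, 1.0 + 0.25 * j) for i in range(9) for j in range(9)]
best_bmax, best_kd = min(grid, key=lambda p: sse(*p))
print(best_bmax, best_kd)  # recovers 10.0, 2.0
```

A lower fitted Bmax for the tagged protein, at comparable Kd, is what the abstract interprets as binding sites left unoccupied.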
Procedia PDF Downloads 379
1804 Application of Groundwater Level Data Mining in Aquifer Identification
Authors: Liang Cheng Chang, Wei Ju Huang, You Cheng Chen
Abstract:
Investigation and research are key to the conjunctive use of surface water and groundwater resources. The hydrogeological structure is an important basis for groundwater analysis and simulation. Traditionally, the hydrogeological structure is determined manually based on geological drill logs, the structure of wells, groundwater levels, and so on. In Taiwan, a groundwater observation network has been built, and a large amount of groundwater-level observation data is available. The groundwater level is the state variable of the groundwater system, reflecting the system response that combines hydrogeological structure, groundwater injection, and extraction. This study applies analytical tools to the observation database to develop a methodology for identifying confined and unconfined aquifers. These tools include frequency analysis, cross-correlation analysis between rainfall and groundwater level, groundwater regression curve analysis, and a decision tree. The developed methodology is then applied to groundwater layer identification in two groundwater systems: the Zhuoshui River alluvial fan and the Pingtung Plain. The frequency analysis uses the Fourier transform to process time-series groundwater-level observations and analyze the daily-frequency amplitude of the groundwater level caused by artificial groundwater extraction. The cross-correlation analysis between rainfall and groundwater level is used to obtain the groundwater replenishment time between infiltration and the peak groundwater level during wet seasons. The groundwater regression curve, the average rate of groundwater regression, is used to analyze the internal flux in the groundwater system and the flux caused by artificial behaviors. The decision tree combines the information obtained from the above analytical tools to produce the best estimate of the hydrogeological structure.
The developed method reaches a training accuracy of 92.31% and a verification accuracy of 93.75% on the Zhuoshui River alluvial fan, and a training accuracy of 95.55% and a verification accuracy of 100% on the Pingtung Plain. This high accuracy indicates that the developed methodology is an effective tool for identifying hydrogeological structures.
Keywords: aquifer identification, decision tree, groundwater, Fourier transform
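The daily-frequency amplitude used in the frequency analysis above can be read off the discrete Fourier transform of a groundwater-level series. A minimal sketch with synthetic hourly data (the sampling rate, signal model, and amplitudes are illustrative assumptions, not values from the study):

```python
import numpy as np

# Sketch: detect the daily pumping signature in a groundwater-level series.
# Synthetic hourly record over 30 days: a slow trend plus a 1 cycle/day
# oscillation mimicking daily artificial extraction.
hours_per_day, days = 24, 30
t = np.arange(hours_per_day * days) / hours_per_day      # time in days
level = 10.0 + 0.01 * t + 0.5 * np.sin(2 * np.pi * 1.0 * t)

# Remove the mean, transform, and read off the dominant frequency.
spectrum = np.abs(np.fft.rfft(level - level.mean()))
freqs = np.fft.rfftfreq(level.size, d=1.0 / hours_per_day)  # cycles/day
peak = freqs[np.argmax(spectrum)]
print(peak)  # 1.0 cycle/day, consistent with daily pumping
```

A strong peak at one cycle per day in a well's spectrum is the kind of evidence the methodology uses to flag artificial extraction in that layer.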
Procedia PDF Downloads 157
1803 Laser-Ultrasonic Method for the Measurement of Residual Stresses in Metals
Authors: Alexander A. Karabutov, Natalia B. Podymova, Elena B. Cherepetskaya
Abstract:
A theoretical analysis is carried out to obtain the relation between the ultrasonic wave velocity and the value of residual stresses. A laser-ultrasonic method is developed to evaluate residual stresses and subsurface defects in metals. The method is based on laser thermooptical excitation of longitudinal ultrasonic waves and their detection by a broadband piezoelectric detector. A laser pulse with a duration of 8 ns (full width at half maximum) and an energy of 300 µJ is absorbed in a thin layer of a special generator that is inclined relative to the object under study. The non-uniform heating of the generator causes the formation of a powerful broadband pulse of longitudinal ultrasonic waves. It is shown that the temporal profile of this pulse is the convolution of the temporal envelope of the laser pulse and the profile of the in-depth distribution of the heat sources. The ultrasonic waves reach the surface of the object through a prism that serves as an acoustic duct. At the 'laser-ultrasonic transducer-object' interface, most of the longitudinal wave energy is converted into shear, subsurface longitudinal, and Rayleigh waves. These spread within the subsurface layer of the studied object and are detected by the piezoelectric detector. The electrical signal corresponding to the detected acoustic signal is acquired by an analog-to-digital converter and is then mathematically processed and visualized with a personal computer. The distance between the generator and the piezodetector, as well as the propagation times of acoustic waves in the acoustic ducts, are the characteristic parameters of the laser-ultrasonic transducer and are determined using calibration samples. The relative precision of the measurement of the velocity of longitudinal ultrasonic waves is 0.05%, which corresponds to approximately ±3 m/s for steels of conventional quality.
This precision allows one to determine the mechanical stress in steel samples with a minimal detection threshold of approximately 22.7 MPa. Results are presented for the measured dependence of the velocity of longitudinal ultrasonic waves in the samples on the applied compressive stress in the range of 20-100 MPa.
Keywords: laser-ultrasonic method, longitudinal ultrasonic waves, metals, residual stresses
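The quoted figures imply an approximate stress sensitivity: if a velocity resolution of about ±3 m/s corresponds to a detection threshold of about 22.7 MPa, the effective sensitivity is roughly 3 / 22.7 ≈ 0.13 (m/s)/MPa. A back-of-the-envelope sketch of this check (the linear velocity-stress model and the nominal base velocity for steel are illustrative assumptions, not values given in the abstract):

```python
# Back-of-the-envelope check of the abstract's figures, assuming a
# linear acoustoelastic relation v(sigma) = v0 + K * sigma.
v0 = 5900.0                      # assumed nominal longitudinal velocity in steel, m/s
rel_precision = 0.0005           # 0.05% relative precision quoted in the abstract
dv_min = rel_precision * v0      # smallest resolvable velocity change, m/s
sigma_min = 22.7                 # quoted minimal detection threshold, MPa

# Implied sensitivity (velocity change per unit stress).
K = dv_min / sigma_min           # (m/s)/MPa
print(round(dv_min, 2), round(K, 3))  # ~2.95 m/s and ~0.13 (m/s)/MPa
```

This is consistent with the abstract's "approximately ±3 m/s" figure and shows why sub-0.1% velocity precision is needed to resolve stresses in the 20-100 MPa range.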
Procedia PDF Downloads 325
1802 Measurement and Modelling of HIV Epidemic among High Risk Groups and Migrants in Two Districts of Maharashtra, India: An Application of Forecasting Software-Spectrum
Authors: Sukhvinder Kaur, Ashok Agarwal
Abstract:
Background: In 2009, for the first time, India was able to generate estimates of HIV incidence (the number of new HIV infections per year). Analysis of epidemic projections revealed that the number of new annual HIV infections in India had declined by more than 50% during the last decade (GOI Ministry of Health and Family Welfare, 2010). The National AIDS Control Organisation (NACO) then planned to scale up its efforts in generating projections through epidemiological analysis and modelling, drawing on recently available sources of evidence such as HIV Sentinel Surveillance (HSS), India Census data, and other critical data sets. Recently, NACO generated the current round of HIV estimates (2012) with the globally recommended Spectrum software and produced estimates of adult HIV prevalence, annual new infections, the number of people living with HIV, AIDS-related deaths, and treatment needs. The state-level prevalence and incidence projections were used to project the consequences of the epidemic in Spectrum. Building on the state-level estimates generated by NACO, the USAID-funded PIPPSE project, under the leadership of NACO, undertook estimations and projections at the district level using the same Spectrum software. In 2011, adult HIV prevalence in Maharashtra, one of the high-prevalence states, was 0.42%, ahead of the national average of 0.27%. Considering the heterogeneity of the HIV epidemic between districts, two districts of Maharashtra, Thane and Mumbai, were selected to estimate and project the number of people living with HIV/AIDS (PLHIV), HIV prevalence among adults, and annual new HIV infections up to 2017.
Methodology: Inputs into Spectrum included demographic data from the Census of India since 1980 and the Sample Registration System; programmatic data on 'Alive and on ART (adults and children)', 'Mother-Baby pairs under PPTCT', and 'High Risk Group (HRG) size mapping estimates'; and surveillance data from various rounds of HSS, the National Family Health Survey-III, the Integrated Biological and Behavioural Assessment, and the Behavioural Sentinel Surveillance. Major Findings: Assuming current programmatic interventions in these districts, an estimated decrease of 12 percentage points in Thane and 31 percentage points in Mumbai in new infections among HRGs and migrants is observed between 2011 and 2017. Conclusions: The project also validated the decrease in new HIV infections among one of the high-risk groups, female sex workers (FSWs), using programme cohort data from 2012 to 2016. Although HIV prevalence and new infections are decreasing in Thane and Mumbai, a further decrease is possible if appropriate programme responses, strategies, and interventions are designed for specific target groups based on this evidence. Moreover, the evidence needs to be validated by other estimation/modelling techniques, and evidence can be generated for other districts of the state, where HIV prevalence is high and reliable data sources are available, to understand the epidemic within the local context.
Keywords: HIV sentinel surveillance, high risk groups, projections, new infections
Procedia PDF Downloads 211
1801 Corpus-Based Neural Machine Translation: Empirical Study Multilingual Corpus for Machine Translation of Opaque Idioms - Cloud AutoML Platform
Authors: Khadija Refouh
Abstract:
Culture-bound expressions have been a bottleneck for natural language processing (NLP) and comprehension, especially in the case of machine translation (MT). In the last decade, the field of machine translation has greatly advanced. Neural machine translation (NMT) has recently achieved considerable improvement in translation quality, outperforming previous traditional translation systems in many language pairs. NMT applies artificial intelligence (AI) and deep neural networks to language processing. Despite this progress, serious challenges remain when NMT translates culture-bound expressions, especially for low-resource language pairs such as Arabic-English and Arabic-French; this is less of a problem for well-established language pairs such as English-French. Machine translation of opaque idioms from English into French is likely to be more accurate than translating them from English into Arabic. For example, the Google Translate application translated the sentence “What a bad weather! It rains cats and dogs.” into Arabic as “يا له من طقس سيء! تمطر القطط والكلاب”, an inaccurate literal translation. The translation of the same sentence into French was “Quel mauvais temps! Il pleut des cordes.”, where the Google Translate application used the accurate corresponding French idiom. This paper aims to perform NMT experiments towards better translation of opaque idioms using a high-quality, clean multilingual corpus. This corpus will be collected analytically from human-generated idiom translations. AutoML Translation, a Google neural machine translation platform, is used as a custom translation model to improve the translation of opaque idioms. The automatic evaluation of the custom model will be compared to Google NMT using the Bilingual Evaluation Understudy (BLEU) score.
BLEU is an algorithm for evaluating the quality of text that has been machine-translated from one natural language to another. Human evaluation is integrated to test the reliability of the BLEU score. The researcher will examine syntactic, lexical, and semantic features using Halliday's functional theory.
Keywords: multilingual corpora, natural language processing (NLP), neural machine translation (NMT), opaque idioms
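At its core, BLEU combines clipped n-gram precisions (typically up to 4-grams) with a brevity penalty. The following is a minimal single-reference, sentence-level sketch (real evaluations use corpus-level BLEU with smoothing, e.g. via sacrebleu; the whitespace tokenization and example sentences here are illustrative):

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(hypothesis, reference, max_n=4):
    """Minimal single-reference BLEU: clipped n-gram precision + brevity penalty."""
    hyp, ref = hypothesis.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams, ref_ngrams = ngram_counts(hyp, n), ngram_counts(ref, n)
        clipped = sum((hyp_ngrams & ref_ngrams).values())  # clip counts by reference
        total = max(sum(hyp_ngrams.values()), 1)
        if clipped == 0:
            return 0.0      # unsmoothed: any empty precision zeroes the score
        log_precisions.append(math.log(clipped / total))
    # Brevity penalty punishes hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return bp * math.exp(sum(log_precisions) / max_n)

print(sentence_bleu("il pleut des cordes", "il pleut des cordes"))    # 1.0
print(sentence_bleu("il pleut chats et chiens", "il pleut des cordes"))  # 0.0, no 3-gram overlap
```

The zero score for the literal rendering illustrates why human evaluation is used alongside BLEU: an unsmoothed n-gram metric is harsh on idiom paraphrases even when they are intelligible.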
Procedia PDF Downloads 149
1800 Neighborhood-Scape as a Methodology for Enhancing Gulf Region Cities' Quality of Life: Case of Doha, Qatar
Authors: Eman AbdelSabour
Abstract:
Sustainability is increasingly considered a critical aspect in shaping the urban environment and serves as a foundation for innovation in global urban development. Currently, different models and structures influence how the criteria defining a sustainable city are interpreted. There is a collective need to shift the growth path onto a more durable one by presenting suggestions for multi-scale initiatives. The global rise in urbanization has led to increased demand and pressure for better urban planning choices and scenarios for more sustainable urban alternatives. The trend toward increasingly sustainable urban development (SUD) has prompted the need for an assessment tool at the urban scale. The neighborhood scale is being addressed by a growing research community, since it is a pertinent scale at which economic, environmental, and social impacts can be examined. Although neighborhood design is a comparatively old practice, it was only in the initial years of the 21st century that environmentalists and planners started developing sustainability assessment at the neighborhood level. At this scale, urban reality can be considered broadly enough to address themes beyond the size of a single building, while staying small enough that concrete measures can be analyzed. Neighborhood assessment tools play a crucial role in operationalizing neighborhood sustainability through a set of themes and criteria; they are also known as neighborhood assessment tools, district assessment tools, and sustainable community rating tools. The primary focus of research has been on sustainability from the economic and environmental aspects, whereas socio-cultural issues are rarely the focus. Therefore, this research is based on Doha, Qatar, and discusses the current urban conditions of its neighborhoods.
The research problem focuses on spatial features in relation to socio-cultural aspects. This study is outlined in three parts: the first section reviews the latest use of well-being assessment methods to enhance the decision process for retrofitting the physical features of the neighborhood. The second section discusses urban settlement development, regulations, and the decision-making process; it includes an analysis of urban development policy with reference to neighborhood development and a historical review of the urban growth of neighborhoods as atoms of the city system in Doha. The last part involves developing quantified indicators of subjective well-being through a participatory approach. Additionally, GIS will be applied as a visualization tool for the quality of life (QoL) that needs to be developed in the neighborhood area as an assessment approach. Envisaging the present QoL situation in Doha's neighborhoods is a step toward improving current conditions; neighborhood function involves many day-to-day activities of the residents, which makes these areas dynamic.
Keywords: neighborhood, subjective wellbeing, decision support tools, Doha, retrofitting
Procedia PDF Downloads 138
1799 Comparison of Patient Satisfaction and Observer Rating of Outpatient Care among Public Hospitals in Shanghai
Authors: Tian Yi Du, Guan Rong Fan, Dong Dong Zou, Di Xue
Abstract:
Background: Patient satisfaction surveys are becoming increasingly important for hospitals and other providers seeking more reimbursement and/or more governmental subsidies. However, comparing the results of patient satisfaction surveys among medical institutions raises some concerns. The primary objectives of this study were to evaluate patient satisfaction in tertiary hospitals of Shanghai and to compare satisfaction ratings of physician services between patients and observers. Methods: Two hundred outpatients were randomly selected for the patient satisfaction survey in each of 28 public tertiary hospitals of Shanghai. Four or five volunteers per hospital were selected to observe the practice of 5 physicians and rate it. The outpatients whose physician visits the volunteers observed also filled in the satisfaction questionnaires. The rating scale for both the outpatient survey and the volunteers' observation ranged from 1 (very dissatisfied) to 6 (very satisfied); a rating of 5 or greater was considered satisfied. The validity and reliability of the measure were assessed. Multivariate regressions were used for each of the 4 dimensions of patient satisfaction and for overall satisfaction. Paired t-tests were applied to analyze rating agreement on physician services between outpatients and volunteers. Results: Overall, 90% of surveyed outpatients were satisfied with outpatient care in the tertiary public hospitals of Shanghai. The three lowest satisfaction rates were for the items 'Restrooms were sanitary and not crowded' (81%), 'It was convenient for the patient to pay medical bills' (82%), and 'Medical cost in the hospital was reasonable' (84%). After adjusting for patient characteristics, patient satisfaction in general hospitals was higher than in specialty hospitals.
In addition, after controlling for patient characteristics and the number of hospital visits, hospitals with higher outpatient cost per visit had lower patient satisfaction. Paired t-tests showed that the ratings on 6 of the 14 items in the physician services dimension differed significantly between outpatients and observers; on 5 of these, observers rated lower than outpatients. Conclusions: Hospital managers and physicians should use patient satisfaction and observers' evaluations to detect room for improvement in areas such as social skills, cost control, and medical ethics.
Keywords: patient satisfaction, observation, quality, hospital
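The paired t-test used above compares two ratings of the same encounter by testing whether the mean of the per-pair differences is zero: t = d̄ / (s_d / √n). A minimal sketch with made-up ratings on the study's 1-6 scale (the data are illustrative, not the study's):

```python
import math

def paired_t(x, y):
    """Paired t statistic for two equal-length samples rating the same items."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((di - mean_d) ** 2 for di in d) / (n - 1)   # sample variance
    return mean_d / math.sqrt(var_d / n)

# Hypothetical 1-6 ratings of five physician-service items:
patient_ratings  = [6, 5, 6, 5, 6]
observer_ratings = [5, 3, 3, 3, 4]
t = paired_t(patient_ratings, observer_ratings)
print(round(t, 3))  # 6.325: patients rate consistently higher here
```

A large positive t, as in this toy example, is the pattern the study reports: observers rated physician services lower than the outpatients did.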
Procedia PDF Downloads 324
1798 The Basin Management Methodology for Integrated Water Resources Management and Development
Authors: Julio Jesus Salazar, Max Jesus De Lama
Abstract:
The challenges of water management are aggravated by global change, which implies high complexity and associated uncertainty; water management is difficult because water networks cross domains (natural, societal, and political), scales (space, time, jurisdictional, institutional, knowledge, etc.), and levels (area: patches to global; knowledge: a specific case to generalized principles). In this context, we need to apply natural and non-natural measures to manage water and soil. The Basin Management Methodology considers multifunctional measures of natural water retention, erosion control, and soil formation to protect water resources and address the challenges related to the recovery or conservation of the ecosystem, as well as the natural characteristics of water bodies, in order to improve the quantitative status of water bodies and reduce vulnerability to floods and droughts. This method of water management focuses on positive impacts on the chemical and ecological status of water bodies and on restoring the functioning of the ecosystem and its natural services, thus contributing to both adaptation and mitigation of climate change. This methodology was applied in seven interventions in the sub-basin of the Shullcas River in Huancayo, Junín, Peru, obtaining great benefits within a framework of participating alliances of actors and integrated planning scenarios. To implement the methodology in the sub-basin of the Shullcas River, a process called Climate Smart Territories (CST) was used, with which the variables were characterized in a highly complex space. The diagnosis was then developed using risk management and adaptation to climate change, and the process concluded with the selection of alternatives and projects of this type.
Therefore, the CST approach and process face the challenges of climate change through integrated, systematic, interdisciplinary, and collective responses at different scales that fit the needs of ecosystems and the services they provide, which are vital to human well-being. This methodology is now being replicated at the level of the Mantaro river basin, improved by other initiatives that lead toward the model of a resilient basin.
Keywords: climate-smart territories, climate change, ecosystem services, natural measures, Climate Smart Territories (CST) approach
Procedia PDF Downloads 151
1797 Using Optical Character Recognition to Manage the Unstructured Disaster Data into Smart Disaster Management System
Authors: Dong Seop Lee, Byung Sik Kim
Abstract:
In the Fourth Industrial Revolution, various intelligent technologies have been developed in many fields. These artificial intelligence technologies are applied in various services, including disaster management. Disaster information management does not just support disaster work; it is also the foundation of smart disaster management, and it extracts historical disaster information using artificial intelligence technology. Disaster information is one of the important elements of the entire disaster cycle. Disaster information management refers to the act of managing and processing electronic data about the disaster cycle from occurrence through progress, response, and planning. However, information on status control, response, and recovery from natural and social disaster events is mainly managed as structured and unstructured reports, which exist as handouts or hard copies. Such unstructured data is often lost or destroyed due to inefficient management, so managing unstructured data is necessary for disaster information. In this paper, an Optical Character Recognition (OCR) approach is used to convert handouts, hard copies, images, and reports, whether printed or generated by scanners, into electronic documents. The converted disaster data is then organized into a disaster code system as disaster information, and the data are stored in a disaster database system. Gathering and creating disaster information from unstructured data via OCR is an important element of smart disaster management. In this work, Korean character recognition was improved to over 90% by using an upgraded OCR. The recognition rate depends on the font, size, and special symbols of the characters; we improved it through a machine learning algorithm.
The converted structured data is managed in a standardized disaster information form connected with the disaster code system. The disaster code system covers storage and retrieval of structured information across the entire disaster cycle, including historical disaster progress, damages, response, and recovery. The expected outcome of this research is applicability to smart disaster management and decision making by combining artificial intelligence technologies and historical big data.
Keywords: disaster information management, unstructured data, optical character recognition, machine learning
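A character recognition rate like the 90% figure above is typically computed by aligning OCR output against ground truth, for example with edit distance (1 minus the character error rate). A minimal sketch (the strings are illustrative; real pipelines use an OCR engine such as Tesseract plus a standard CER evaluation):

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def recognition_rate(ocr_output, ground_truth):
    """1 - character error rate, clipped at 0."""
    cer = edit_distance(ocr_output, ground_truth) / len(ground_truth)
    return max(0.0, 1.0 - cer)

# Hypothetical OCR result with one wrong character out of thirteen:
rate = recognition_rate("disaster0data", "disaster data")
print(rate)  # ~0.923, i.e. a 92.3% recognition rate
```

Edit distance (rather than position-by-position comparison) is the usual choice because OCR errors include insertions and deletions, not just substitutions.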
Procedia PDF Downloads 129
1796 The Impact of Developing an Educational Unit in the Light of Twenty-First Century Skills in Developing Language Skills for Non-Arabic Speakers: A Proposed Program for Application to Students of Educational Series in Regular Schools
Authors: Erfan Abdeldaim Mohamed Ahmed Abdalla
Abstract:
The era of the knowledge explosion in which we live requires us to develop educational curricula quantitatively and qualitatively to adapt to twenty-first-century skills: critical thinking, problem-solving, communication, cooperation, creativity, and innovation. The process of developing a curriculum is as significant as building it; in fact, developing curricula may be more difficult than building them. Curriculum development includes analyzing needs, setting goals, designing the content and educational materials, creating language programmes, developing teachers, applying programmes in schools, monitoring and feedback, and then evaluating the language programme resulting from these processes. Looking back at the history of language teaching during the twentieth century, we find that developing the delivery method has been the most crucial aspect of change in language teaching doctrines. A delivery method in teaching is a systematic set of teaching practices based on a specific theory of language acquisition. This is a key consideration, as the development process must include all the curriculum elements in their comprehensive sense: linguistic and non-linguistic. The various Arabic curricula provide the student with a set of units, each consisting of a set of linguistic elements. These elements are often not logically arranged and, more importantly, neglect essential points while highlighting other, less important ones. Moreover, the educational curricula entail a great deal of monotony in the presentation of content, which makes it hard for the teacher to select adequate content; the teacher often navigates among diverse references to prepare a lesson and hardly finds a suitable one. Similarly, the student often gets bored when learning the Arabic language and fails to make considerable progress in it.
Therefore, the problem is not a lack of curricula; the problem is developing the curriculum, with all its linguistic and non-linguistic elements, in accordance with contemporary challenges and standards for teaching foreign languages. The Arabic library suffers from a lack of references on curriculum development. In this paper, the researcher investigates the elements of development, such as the teacher, content, methods, objectives, evaluation, and activities, and reaches a set of general guidelines in the field of educational development. The paper highlights the need to identify weaknesses in educational curricula, decide which twenty-first-century skills must be employed in Arabic education curricula, and employ foreign language teaching standards in current Arabic curricula. The researcher assumes that the series for teaching Arabic to speakers of other languages in regular schools do not address twenty-first-century skills, which is what the researcher tries to apply in the proposed unit. This study uses the experimental method, based on two groups: experimental and control. The development of an educational unit will help build suitable educational series for students of the Arabic language in regular schools, in which twenty-first-century skills and standards for teaching foreign languages will be addressed, making the series more useful and attractive to students.
Keywords: curriculum, development, Arabic language, non-native, skills
Procedia PDF Downloads 84
1795 Heteroatom Doped Binary Metal Oxide Modified Carbon as a Bifunctional Electrocatalysts for all Vanadium Redox Flow Battery
Authors: Anteneh Wodaje Bayeh, Daniel Manaye Kabtamu, Chen-Hao Wang
Abstract:
As one of the most promising electrochemical energy storage systems, vanadium redox flow batteries (VRFBs) have received increasing attention owing to their attractive features for large-scale storage applications. However, their high production cost and relatively low energy efficiency still limit their feasibility. For practical implementation, it is of great interest to improve their efficiency and reduce their cost. One of the key components of VRFBs that can greatly influence efficiency and final cost is the electrode, which provides the reaction sites for the redox couples (VO²⁺/VO₂⁺ and V²⁺/V³⁺). Carbon-based materials are considered the most feasible electrode materials in VRFBs because of their excellent potential in terms of operating range, good permeability, large surface area, and reasonable cost. However, owing to their limited electrochemical activity and reversibility and their poor wettability caused by hydrophobic surfaces, the performance of cells employing carbon-based electrodes has remained limited. To address these challenges, we synthesized a heteroatom-doped bimetallic oxide grown on the surface of carbon through a one-step approach. When applied to VRFBs, the prepared electrode exhibits a significant electrocatalytic effect toward the VO²⁺/VO₂⁺ and V³⁺/V²⁺ redox reactions compared with pristine carbon. It is found that the presence of heteroatoms on the metal oxide promotes the adsorption of vanadium ions. The controlled morphology of the bimetallic oxide also exposes more active sites for the redox reactions of vanadium ions. Hence, the prepared electrode displays the best electrochemical performance, with energy and voltage efficiencies of 74.8% and 78.9%, respectively, much higher than the 59.8% and 63.2% obtained with pristine carbon at high current density.
Moreover, the electrode exhibits durability and stability in an acidic electrolyte during long-term operation over 1000 cycles at the higher current density.
Keywords: VRFB, VO²⁺/VO₂⁺ and V³⁺/V²⁺ redox couples, graphite felt, heteroatom-doping
Procedia PDF Downloads 98
1794 Framework Proposal on How to Use Game-Based Learning, Collaboration and Design Challenges to Teach Mechatronics
Authors: Michael Wendland
Abstract:
This paper presents a framework for teaching a methodical design approach through a mixture of game-based learning, design challenges, and competitions as forms of direct assessment. In today's world, developing products is more complex than ever. Conflicting goals of product cost and quality under limited time, as well as post-pandemic part shortages, increase the difficulty. Common design approaches for mechatronic products mitigate some of these effects by giving users a methodical framework. Due to the inherent complexity of these products, the number of involved resources, and the comprehensive design processes, students very rarely have enough time or motivation to experience a complete approach in a one-semester course. But for students to be successful in the industrial world, it is crucial to know these methodical frameworks and to gain first-hand experience. Therefore, it is necessary to teach these design approaches in a real-world setting, keep motivation high, and teach students to manage upcoming problems. This is achieved by using a game-based approach and a set of design challenges given to the students. To mimic industrial collaboration, they work in teams of up to six participants and are given the main development target of designing a remote-controlled robot that can manipulate a specified object. By setting this clear goal without a given solution path, a constrained time frame, and a limited maximum cost, the students are subjected to boundary conditions similar to those in the real world. They must follow the steps of the methodical approach by specifying requirements, conceptualizing their ideas, drafting, designing, manufacturing, and building a prototype using rapid prototyping. At the end of the course, the prototypes are entered into a contest against the other teams.
The complete design process is accompanied by theoretical input via lectures, which the students immediately transfer to their own design problem in practical sessions. To increase motivation in these sessions, a playful learning approach has been chosen, i.e., designing the first concepts is supported by Lego construction kits. After each challenge, mandatory online quizzes help to deepen the students' acquired knowledge, and badges are awarded to those who complete a quiz, resulting in higher motivation and a level-up on a fictional leaderboard. The final contest is held in person and involves all teams with their functional prototypes, which now compete against each other. Prizes are awarded for the best mechanical design, the most innovative approach, and the winner of the robotic contest. Each robot design is evaluated with regard to the specified requirements, and partial grades are derived from the results. This paper concludes with a critical review of the proposed framework, the game-based approach for the designed prototypes, the realism of the boundary conditions, the problems that occurred during the design and manufacturing process, the experiences and feedback of the students, and the effectiveness of their collaboration, as well as a discussion of the potential transfer to other educational areas.
Keywords: design challenges, game-based learning, playful learning, methodical framework, mechatronics, student assessment, constructive alignment
Procedia PDF Downloads 67
1793 Acceleration Techniques of DEM Simulation for Dynamics of Particle Damping
Authors: Masato Saeki
Abstract:
Presented herein is a novel algorithm for calculating the damping performance of particle dampers. The particle damper is a passive vibration control technique with many practical applications due to its simple design. It consists of granular materials constrained to move between two ends in the cavity of a primary vibrating system. The damping effect results from the exchange of momentum during the impact of the granular materials against the wall of the cavity. This damping has the advantage of being independent of the environment; therefore, particle damping can be applied in extreme temperature environments where most conventional dampers would fail. Many experimental studies have shown that the efficiency of particle dampers is high in the case of resonant vibration. In order to use particle dampers effectively, it is necessary to solve the equations of motion for each particle, taking the granularity into account. The discrete element method (DEM) has been found to be effective for revealing the dynamics of particle damping. In this method, individual particles are assumed to be rigid bodies, and interparticle collisions are modeled by mechanical elements such as springs and dashpots. However, the computational cost is significant, since the equation of motion for each particle must be solved at each time step. In order to improve the computational efficiency of the DEM, new algorithms are needed. In this study, new algorithms are proposed for implementing a high-performance DEM. On the assumption that the granular particles behave identically in each divided area of the damper container, the contact force of the primary system with all particles can be taken as the product of the number of divided areas and the contact force of the primary system with the granular materials in one divided area. This makes it possible to reduce the calculation time considerably.
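The scaling idea above can be sketched in a few lines. This is a minimal illustration, assuming a linear spring-dashpot contact law and purely illustrative parameter values, not the paper's actual DEM implementation:

```python
def contact_force(overlap, rel_vel, k=1.0e4, c=5.0):
    """Linear spring-dashpot contact force in N; zero when not in contact.

    k: contact stiffness (N/m), c: damping coefficient (N*s/m) -
    both illustrative values, not taken from the paper.
    """
    if overlap <= 0.0:
        return 0.0
    return k * overlap + c * rel_vel

# Contact force of the primary system with the granular materials
# in one divided area of the damper container.
f_per_area = contact_force(overlap=1e-3, rel_vel=0.02)

# Acceleration trick: since identical behavior is assumed in each of
# the n_areas divisions, the total force is simply scaled, so only one
# area's particles need to be simulated explicitly.
n_areas = 8
f_total = n_areas * f_per_area
```

Only one division's particles enter the time-stepping loop; the remaining divisions contribute through the multiplication, which is where the reduction in calculation time comes from.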
The validity of this calculation method was investigated, and the calculated results were compared with experimental ones. This paper also presents the results of experimental studies of the performance of particle dampers. It is shown that the particle radius affects the noise level, and that the particle size and the particle material influence the damper performance.
Keywords: particle damping, discrete element method (DEM), granular materials, numerical analysis, equivalent noise level
Procedia PDF Downloads 453
1792 Numerical Model of Low Cost Rubber Isolators for Masonry Housing in High Seismic Regions
Authors: Ahmad B. Habieb, Gabriele Milani, Tavio Tavio, Federico Milani
Abstract:
Housing in developing countries often has inadequate seismic protection, particularly masonry housing. People choose this type of structure since the cost and application are relatively cheap. Seismic protection of masonry remains an interesting issue among researchers. In this study, we develop a low-cost seismic isolation system for masonry using fiber-reinforced elastomeric isolators. The proposed elastomer consists of a few layers of rubber pads and fiber laminae, making it lower in cost compared to conventional isolators. We present a finite element (FE) analysis to predict the behavior of the low-cost rubber isolators undergoing moderate deformations. The FE model of the elastomer involves a hyperelastic material property for the rubber pad. We adopt a Yeoh hyperelasticity model and estimate its coefficients from the available experimental data. Having characterized the shear behavior of the elastomers, we apply the isolation system to a small masonry house. To attach the isolators to the building, we model the shear behavior of the isolation system by means of a damped nonlinear spring model. By this approach, the FE analysis becomes computationally inexpensive. Several ground motion records are applied to observe its sensitivity. Roof acceleration and tensile damage of the walls are the parameters used to evaluate the performance of the isolators. In this study, a concrete damage plasticity model is used to model the masonry in the nonlinear range. This tool is available in the standard package of the Abaqus FE software. Finally, the results show that the proposed low-cost isolators are capable of reducing the roof acceleration and the damage level of masonry housing. Through this study, we are also able to monitor the shear deformation of the isolators during seismic motion, which is useful for determining whether an isolator is applicable.
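The coefficient estimation step can be sketched as follows. For the Yeoh model, W = C1(I1-3) + C2(I1-3)^2 + C3(I1-3)^3, and in simple shear with shear strain g one has I1 - 3 = g^2, so the shear stress is linear in the unknown coefficients and a linear least-squares fit suffices. This is a minimal sketch with synthetic data and illustrative coefficient values, not the paper's measured test data:

```python
import numpy as np

def fit_yeoh_shear(gamma, tau):
    """Fit Yeoh coefficients (C1, C2, C3) to simple-shear data.

    tau = dW/dgamma = 2*g*C1 + 4*g**3*C2 + 6*g**5*C3,
    which is linear in (C1, C2, C3).
    """
    A = np.column_stack([2 * gamma, 4 * gamma**3, 6 * gamma**5])
    coeffs, *_ = np.linalg.lstsq(A, tau, rcond=None)
    return coeffs

# Synthetic "experimental" shear data generated from known
# coefficients (illustrative values, in MPa).
C_true = np.array([0.5, -0.05, 0.01])
gamma = np.linspace(0.05, 1.5, 30)
tau = 2 * gamma * C_true[0] + 4 * gamma**3 * C_true[1] + 6 * gamma**5 * C_true[2]

C_fit = fit_yeoh_shear(gamma, tau)
```

The fitted shear-stress curve can then be used to calibrate the damped nonlinear spring that replaces the full isolator model in the building-level analysis.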
According to the results, the deformations of the isolators on the benchmark one-story building are relatively small.
Keywords: masonry, low cost elastomeric isolator, finite element analysis, hyperelasticity, damped non-linear spring, concrete damage plasticity
Procedia PDF Downloads 286
1791 Modeling Karachi Dengue Outbreak and Exploration of Climate Structure
Authors: Syed Afrozuddin Ahmed, Junaid Saghir Siddiqi, Sabah Quaiser
Abstract:
Various studies have reported that global warming causes unstable climate and many serious impacts on the physical environment and public health. The increasing incidence of dengue fever is now a priority health issue and has become a health burden for Pakistan. This study investigates whether the spatial pattern of the environment drives the emergence or increasing rate of dengue fever incidence, affecting the population and its health. The climatic (environmental structure) data and the dengue fever (DF) data were processed by coding, editing, tabulating, recoding, and restructuring (re-tabulating), and finally different statistical methods, techniques, and procedures were applied for the evaluation. The five climatic variables studied are precipitation (P), maximum temperature (Mx), minimum temperature (Mn), humidity (H), and wind speed (W), collected from 1980 to 2012. The dengue cases in Karachi from 2010 to 2012 are reported on a weekly basis. Principal component analysis is applied to explore the climatic variables and/or the climatic structure that may influence the increase or decrease in the number of dengue fever cases in Karachi. PC1 for the full period represents the general atmospheric condition. PC2 for the dengue period is a contrast between precipitation and wind speed. PC3 is the weighted difference between maximum temperature and wind speed. PC4 for the dengue period is a contrast between maximum temperature and wind speed. Negative binomial and Poisson regression models are used to relate the dengue fever incidence to the climatic variables and principal component scores. Relative humidity is estimated to increase the chances of dengue occurrence by 1.71%. Maximum temperature increases the chances of dengue occurrence by 19.48%, and minimum temperature by 11.51%.
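The PCA step can be sketched with standard linear algebra. This is a minimal sketch on synthetic standardized data; the study's actual inputs are the weekly Karachi climate series (P, Mx, Mn, H, W):

```python
import numpy as np

# Synthetic stand-in for weekly climate observations:
# 160 weeks x 5 climate variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(160, 5))

# Standardize so the PCA operates on the correlation structure.
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Principal components via SVD of the standardized data matrix.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained_var = s**2 / np.sum(s**2)  # variance fraction per component
scores = X @ Vt.T                    # PC scores, usable as regressors
loadings = Vt                        # rows: how each PC weights the variables
```

The signs and magnitudes of each row of `loadings` give interpretations like "contrast between precipitation and wind speed", and the `scores` can then enter a Poisson or negative binomial regression of the weekly dengue counts.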
Wind speed negatively affects the weekly occurrence of dengue fever by 7.41%.
Keywords: principal component analysis, dengue fever, negative binomial regression model, Poisson regression model
Procedia PDF Downloads 445
1790 Train Timetable Rescheduling Using Sensitivity Analysis: Application of Sobol, Based on Dynamic Multiphysics Simulation of Railway Systems
Authors: Soha Saad, Jean Bigeon, Florence Ossart, Etienne Sourdille
Abstract:
Developing better solutions for train rescheduling problems has drawn the attention of researchers for decades. Most research in this field deals with minor incidents that affect a large number of trains due to cascading effects; it focuses on timetables, rolling stock, and crew duties, but does not take infrastructure limits into account. The present work addresses electric infrastructure incidents that limit the power available for train traction, and hence the transportation capacity of the railway system. Rescheduling is needed in order to share the available power optimally among the different trains. We propose a rescheduling process based on dynamic multiphysics railway simulations that include the mechanical and electrical properties of all the system components and calculate physical quantities such as train speed profiles, voltage along the catenary lines, temperatures, etc. The optimization problem to solve has a large number of continuous and discrete variables, several output constraints due to physical limitations of the system, and a high computational cost. Our approach includes a sensitivity analysis phase to analyze the behavior of the system and support the decision-making process and/or a more precise optimization. This is a quantitative method based on simulation statistics of the dynamic railway system, considering a predefined range of variation of the input parameters. Three important settings are defined. Factor prioritization detects the input variables that contribute most to the variation of the outputs. Factor fixing then allows calibrating the input variables which do not influence the outputs. Lastly, factor mapping is used to study which ranges of input values lead to model realizations that correspond to feasible solutions according to defined criteria or objectives. Generalized Sobol indices are used for factor prioritization and factor fixing.
The approach is tested on a simple railway system, with nominal traffic running on a single-track line. The considered incident is the loss of a feeding power substation, which limits the available power and the train speed. Rescheduling is needed, and the variables to be adjusted are the train departure times, the train speed reduction at a given position, and the number of trains (with cancellation of some trains if needed). The results show that the spacing between train departure times is the most critical variable, contributing more than 50% of the variation of the model outputs. In addition, we identify the reduced range of variation of this variable which guarantees that the output constraints are respected. Optimal solutions are extracted according to different potential objectives: minimizing the traveling time, the train delays, the traction energy, etc. A Pareto front is also built.
Keywords: optimization, rescheduling, railway system, sensitivity analysis, train timetable
Procedia PDF Downloads 399
1789 Paradox of Growing Adaptive Capacities for Sustainability Transformation in Urban Water Management in Bangladesh
Authors: T. Yasmin, M. A. Farrelly, B. C. Rogers
Abstract:
Urban water governance in developing countries faces numerous challenges arising from uncontrolled urban population expansion, water pollution, a strong economic push and, more recently, climate change impacts, while transitioning towards a sustainable system. Sustainability transition requires developing the adaptive capacities of the socio-ecological and socio-technical system so that it can deal with complexity. Adaptive capacities deliver strategies to connect individuals, organizations, agencies, and institutions at multiple levels for dealing with such complexity. Understanding the level of adaptive capacities for sustainability transformation has thus gained significant research attention in developed countries, but much less so in developing countries. Filling this gap, this article develops a conceptual framework for analysing the level of adaptive capacities (if any) within a developing-country context. This framework is then applied to the chronological development of urban water governance strategies in Bangladesh over almost two centuries. The chronological analysis of governance interventions reveals that crises (public health, food, and natural hazards) became opportunities and thus opened windows for experimentation and learning to occur as deviations from traditional practices. Self-organization and networks thus created the platform for developments or disruptions that create change. Leadership (internal or external) is important for nurturing and upscaling these developments or disruptions towards guiding policy visions and targets, as well as for championing implementation on the ground. In the case of Bangladesh, leadership from international and national aid organizations and their targets has always led the development, whereas social capital factors (trust, power relations, cultural norms) have more often acted as disruptions. Historically, this has been evident in the development pathways of urban water governance in Bangladesh.
Overall, this research has shown that some adaptive capacity is growing for sustainable urban growth in big cities; however, it remains unclear whether such growth is occurring in the context of medium and small cities.
Keywords: adaptive capacity, Bangladesh, sustainability transformation, water governance
Procedia PDF Downloads 393
1788 Simulation and Thermal Evaluation of Containers Using PCM in Different Weather Conditions of Chile: Energy Savings in Lightweight Constructions
Authors: Paula Marín, Mohammad Saffari, Alvaro de Gracia, Luisa F. Cabeza, Svetlana Ushak
Abstract:
Climate control represents an important issue when considering the energy consumption of buildings and the associated expenses, both during installation and operation. The climate control of a building relies on several factors, among them localization, orientation, architectural elements, and the sources of energy used. In order to study the thermal behaviour of a building set-up, the present study proposes the use of the energy simulation program EnergyPlus. In recent years, energy simulation programs have become important tools for evaluating the thermal/energy performance of buildings and facilities. Moreover, finding new forms of passive conditioning in buildings for energy saving is a critical need. The use of phase change materials (PCMs) for heat storage applications has grown in importance due to their high efficiency. The climatic conditions of northern Chile, namely high solar radiation, extreme temperature fluctuations ranging from -10°C to 30°C (city of Calama), and a low number of cloudy days during the year, are appropriate for taking advantage of solar energy and using passive systems in buildings. In addition, the extensive mining activities in northern Chile encourage the use of large numbers of containers to house workers during shifts. These containers are built with lightweight construction systems, requiring heating during the night and cooling during the day, which increases HVAC electricity consumption. The use of PCM can improve thermal comfort and reduce energy consumption. The objective of this study was to evaluate the thermal and energy performance of containers of 2.5×2.5×2.5 m³ located in four cities of Chile: Antofagasta, Calama, Santiago, and Concepción.
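The damping effect of a PCM envelope can be illustrated with a lumped effective-heat-capacity model: inside its melting range, the PCM adds latent storage on top of the sensible capacity, flattening the indoor temperature swing. This is a minimal single-node sketch with illustrative parameter values, not the EnergyPlus model or the study's inputs:

```python
import math

UA = 50.0            # overall envelope conductance, W/K (illustrative)
C_SENSIBLE = 1.0e6   # sensible heat capacity of the container, J/K
T_MELT_LO, T_MELT_HI = 19.0, 23.0   # assumed PCM melting range, degC
LATENT = 5.0e6       # total latent heat of the PCM layer, J

def simulate(with_pcm, hours=96, dt=60.0):
    """Explicit time-stepping of one lumped indoor node; returns swing (K)."""
    T = 20.0
    lo, hi = math.inf, -math.inf
    for k in range(int(hours * 3600 / dt)):
        t = k * dt
        T_out = 20.0 + 10.0 * math.sin(2 * math.pi * t / 86400.0)
        C = C_SENSIBLE
        if with_pcm and T_MELT_LO <= T <= T_MELT_HI:
            # effective capacity: spread latent heat over the melting range
            C += LATENT / (T_MELT_HI - T_MELT_LO)
        T += dt * UA * (T_out - T) / C
        if t >= 48 * 3600:           # measure swing after the start-up days
            lo, hi = min(lo, T), max(hi, T)
    return hi - lo

swing_ref = simulate(with_pcm=False)   # container without PCM
swing_pcm = simulate(with_pcm=True)    # PCM-enhanced container
```

A smaller indoor swing means less heating at night and less cooling during the day, which is the mechanism behind the HVAC savings evaluated in the study.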
Lightweight envelopes, typically used in these building prototypes, were evaluated considering a container without PCM as the reference building and a container with PCM-enhanced envelopes as the test case; both have a door and a window in the same wall, oriented in one of two directions, north or south. To capture the thermal response of these containers in different seasons, the simulations covered a period of one year. The results show that, for the four cities studied, the higher energy savings are obtained when the door and window of the container face north, because of the higher incidence of solar radiation. The comparison of HVAC consumption and energy savings in % for the north orientation of the door and window is summarised. The simulation results show that in the city of Antofagasta 47% of the heating energy could be saved, and that in the cities of Calama and Concepción the biggest savings in terms of cooling could be achieved, since the PCM removes almost all of the cooling demand. Currently, based on the simulation results, four containers have been constructed with the same structural characteristics and dimensions as in the simulations, that is, containers with/without PCM, with a door and a window in one wall. Two of these containers will be placed in Antofagasta and two in a copper mine near Calama; all of them will be monitored for a period of one year. The simulation results will be validated against the experimental measurements and will be reported in the future.
Keywords: energy saving, lightweight construction, PCM, simulation
Procedia PDF Downloads 286
1787 The Effect of Transactional Analysis Group Training on Self-Knowledge and Its Ego States (The Child, Parent, and Adult): A Quasi-Experimental Study Applied to Counselors of Tehran
Authors: Mehravar Javid, Sadrieh Khajavi Mazanderani, Kelly Gleischman, Zoe Andris
Abstract:
The present study investigates the effectiveness of transactional analysis (TA) group training on self-knowledge and its dimensions (parent, adult, and child) in counselors working in public and private high schools in Tehran. Counseling has become an important profession for society, and organizations need consultants. Providing better and more efficient counseling is one of the goals of the education system, and the personal characteristics of counselors are important for the success of therapy. In TA, humans have three ego states, named parent, adult, and child; the central concept in transactional analysis is the ego state, meaning a stable pattern of feeling and thinking tied to behavioral patterns. Self-knowledge, considered a prerequisite for effective communication, fosters psychological growth; recognizing it is pivotal for emotional development and leads to profound insights. The research sample included 30 working counselors (22 women and 8 men) in the 2019-2020 academic year who achieved the lowest scores on the self-knowledge questionnaire. The research method was quasi-experimental with a control group (15 people in the experimental group and 15 in the control group). The research tool was a self-awareness questionnaire with 29 questions and three subscales (child, parent, and adult ego states). The experimental group received transactional analysis training in 10 weekly 2-hour sessions; the questionnaire was then administered to both groups (post-test). Multivariate analysis of covariance was used to analyze the data. The data showed that the level of self-awareness of counselors who received transactional analysis training is higher than that of counselors who did not receive any training (p<0.01).
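The covariance-adjusted comparison underlying such a pre-test/post-test design can be sketched as a regression of post-test scores on group membership while controlling for the pre-test covariate. This is a minimal sketch on synthetic, illustrative data (the effect size and noise level are made up, not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(1)
n_per_group = 15                      # matches the 15 + 15 design
pre = rng.normal(50, 8, size=2 * n_per_group)       # pre-test scores
group = np.repeat([0, 1], n_per_group)               # 0=control, 1=training
true_effect = 5.0                                    # assumed for illustration
post = 10 + 0.8 * pre + true_effect * group + rng.normal(0, 2, size=2 * n_per_group)

# Design matrix: intercept, pre-test covariate, group indicator.
X = np.column_stack([np.ones_like(pre), pre, group])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)
adjusted_group_effect = beta[2]   # group difference adjusted for pre-test
```

Adjusting for the pre-test removes baseline differences between the groups, so `adjusted_group_effect` isolates the contribution of the training itself; the study's multivariate analysis extends this idea to the three ego-state subscales jointly.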
The results of this analysis show that transactional analysis training is an effective intervention for enhancing self-knowledge and its subscales (adult, parent, and child ego states). Teaching transactional analysis increases self-knowledge and self-realization, and helps people achieve independence, overcome irresponsibility, and improve intrapersonal and interpersonal relationships.
Keywords: ego state, group, transactional analysis, self-knowledge
Procedia PDF Downloads 76