Search results for: experimental simulation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11313


933 Mechanical Behavior of Hybrid Hemp/Jute Fibers Reinforced Polymer Composites at Liquid Nitrogen Temperature

Authors: B. Vinod, L. Jsudev

Abstract:

Natural fibers as reinforcement in polymer matrix materials have gained considerable attention in recent years, as they are light in weight, low in cost, and an ecologically sound alternative to glass and carbon fibers in composites. Natural fibers such as jute, sisal, coir, hemp, and banana have attracted substantial interest as potential structural materials because of their attractive features and good mechanical properties. Cryogenic applications of natural fiber reinforced polymer composites, such as cryogenic wind tunnels, cryogenic transport vessels, and support structures in space shuttles and rockets, are gaining importance. In these unique cryogenic applications, the requirements on polymer composites are extremely severe and complicated. These materials need to possess good mechanical and physical properties at cryogenic temperatures, such as liquid helium (4.2 K), liquid hydrogen (20 K), liquid nitrogen (77 K), and liquid oxygen (90 K) temperatures, to meet the high demands of cryogenic engineering applications. The objective of this work is to investigate the mechanical behavior of a hybrid hemp/jute fiber reinforced epoxy composite at liquid nitrogen temperature. Hemp and jute fibers are used as reinforcement because they have high specific strength and stiffness and good adhesion, and thus have the potential to replace synthetic fibers. The hybrid hemp/jute fiber reinforced polymer composite is prepared by the hand lay-up method, and test specimens are cut according to ASTM standards. These test specimens are dipped in liquid nitrogen for different time durations. The tensile properties, flexural properties, and impact strength of the specimens are tested immediately after the specimens are removed from the liquid nitrogen container. The experimental results indicate that the cryogenic treatment of the polymer composite has a significant effect on the mechanical properties of this material. The tensile and flexural properties of the hybrid hemp/jute fiber epoxy composite at liquid nitrogen temperature are higher than at room temperature, whereas the impact strength of the material decreased after subjecting it to liquid nitrogen temperature.

Keywords: liquid nitrogen temperature, polymer composite, tensile properties, flexural properties

Procedia PDF Downloads 335
932 Synthesis and Characterization of Mixed Ligand Complexes of Bipyridyl and Glycine with Different Counter Anions as Functional Antioxidant Enzyme Mimics

Authors: Mohamed M. Ibrahim, Gaber A. M. Mersal, Salih Al-Juaid, Samir A. El-Shazly

Abstract:

A series of mixed ligand complexes, viz. [Cu(BPy)(Gly)X]Y {X = Cl (1), Y = 0; X = 0, Y = ClO4- (2); X = H2O, Y = NO3- (3); X = H2O, Y = CH3COO- (4); and [Cu(BPy)(Gly)(H2O)]2(SO4) (5)}, have been synthesized. Their structures and properties were characterized by elemental analysis, thermal analysis, IR, UV-vis, and ESR spectroscopy, as well as electrochemical measurements including cyclic voltammetry, electrical molar conductivity, and magnetic moment measurements. Complexes 1 and 2 form slightly distorted square-pyramidal coordination geometries of CuN3OCl and CuN3O2, respectively, in which the N,O-donor glycine and N,N-donor bipyridyl bind at the basal plane with a chloride ion or water as the axial ligand. Complex 3 shows a square planar CuN3O coordination geometry, which exhibits chemically significant hydrogen bonding interactions besides showing coordination polymer formation. The superoxide dismutase- and catalase-like activities of all complexes were tested; the complexes were found to be promising, durable electron-transfer catalysts, approaching the efficiency of the mimicked enzymes that display catalase or tyrosinase activity, and can thus serve for complete reactive oxygen species (ROS) detoxification with respect to both superoxide radicals and related peroxides. The DNA binding interaction with supercoiled pGEM-T plasmid DNA was investigated by using spectral (absorption and emission) titration and electrochemical techniques. The results revealed that complexes 1 and 2 intercalate with DNA through the groove binding mode. The calculated intrinsic binding constants (Kb) of 1 and 2 were 4.71 × 10⁵ and 2.429 × 10⁵ M⁻¹, respectively. A gel electrophoresis study reveals that both complexes cleave supercoiled pGEM-T plasmid DNA to nicked and linear forms in the absence of any additives. Upon interaction of both complexes with DNA, the quasi-reversible Cu(II)/Cu(I) redox couple shows slightly improved reversibility with a considerable decrease in current intensity. All the experimental results indicate that the bipyridyl mixed copper(II) complex (1) intercalates more effectively into the DNA base pairs.

Keywords: enzyme mimics, mixed ligand complexes, X-ray structures, antioxidant, DNA-binding, DNA cleavage

Procedia PDF Downloads 541
931 Affective Robots: Evaluation of Automatic Emotion Recognition Approaches on a Humanoid Robot towards Emotionally Intelligent Machines

Authors: Silvia Santano Guillén, Luigi Lo Iacono, Christian Meder

Abstract:

One of the main aims of current social robotics research is to improve the robots' ability to interact with humans. In order to achieve an interaction similar to that among humans, robots should be able to communicate in an intuitive and natural way and appropriately interpret human affects during social interactions. Similarly to how humans are able to recognize emotions in other humans, machines are capable of extracting information from the various ways humans convey emotions, including facial expression, speech, gesture, or text, and using this information for improved human-computer interaction. This can be described as Affective Computing, an interdisciplinary field that expands into otherwise unrelated fields like psychology and cognitive science and involves the research and development of systems that can recognize and interpret human affects. Leveraging these emotional capabilities by embedding them in humanoid robots is the foundation of the concept of Affective Robots, whose objective is to make robots capable of sensing the user's current mood and personality traits and of adapting their behavior in the most appropriate manner based on that. In this paper, the emotion recognition capabilities of the humanoid robot Pepper are experimentally explored, based on the facial expressions for the so-called basic emotions, as well as how it performs in contrast to other state-of-the-art approaches, using both expression databases compiled in academic environments and real subjects showing posed expressions as well as spontaneous emotional reactions. The experiments' results show that the detection accuracy amongst the evaluated approaches differs substantially. The introduced experiments offer a general structure and approach for conducting such experimental evaluations. The paper further suggests that the most meaningful results are obtained by conducting experiments with real subjects expressing the emotions as spontaneous reactions.
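As an aside, the core of such an evaluation reduces to per-approach accuracy over a labeled test set; a minimal sketch in Python follows, where the approach names, labels, and predictions are hypothetical placeholders rather than the study's data:

```python
import numpy as np

# Hypothetical ground-truth labels (basic emotions) and per-approach
# predictions on the same set of test images.
labels = np.array(["joy", "anger", "sadness", "surprise", "joy", "fear"])
predictions = {
    "pepper_builtin": np.array(["joy", "anger", "joy", "surprise", "joy", "anger"]),
    "cnn_baseline": np.array(["joy", "anger", "sadness", "surprise", "neutral", "fear"]),
}

for name, pred in predictions.items():
    accuracy = np.mean(pred == labels)  # fraction of correctly classified expressions
    print(f"{name}: {accuracy:.2%}")
```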

Keywords: affective computing, emotion recognition, humanoid robot, human-robot-interaction (HRI), social robots

Procedia PDF Downloads 221
930 Anticoccidial Effects of the Herbal Mixture in Broilers after Eimeria spp. Infection

Authors: Yang-Ho Jang, Soon-Ok Jee, Hae-Chul Park, Jeong-Woo Kang, Byung-Jae So, Sung-Shik Shin, Kyu-Sung Ahn, Kwang-Jick Lee

Abstract:

Introduction: Antibiotics have been used as feed additives for growth promotion and performance in food-producing animals. However, the possibility of selecting for antimicrobial resistance and concerns about residues in animal products led to a ban on the use of antibiotics in farm animals in Korea in 2011. This strategy will soon be extended to anticoccidial drugs as well, although these are still allowed for the time being in diets for the treatment and control of enteric necrosis in poultry. Therefore, substantial focus has been given to finding alternatives to antimicrobial agents. Several phytogenic materials have been reported to have positive effects on coccidiosis. This study evaluated the anticoccidial effect of an oregano oil-based herb mixture against Eimeria spp. in poultry. Materials and Methods: One-day-old broiler chickens, divided into six groups (30 chickens per group), were used in this study. The herbal mixture was administered freely in drinking water as follows: two groups served as controls without herbal mixture, one infected with Eimeria spp. and the other uninfected; the remaining treatment groups received 0.2 ml/L of oregano oil; 0.2 ml/L of oregano oil plus Sanguisorbae radix; or 0.2 ml/L of Sanguisorbae radix; the last group was fed a diclazuril diet as a positive control. Chickens were infected with sporulated Eimeria spp. at 14 days of age. Following infection, survival rate, bloody diarrhea, OPG (oocysts per gram), and feed conversion ratios were determined. The experimental period lasted 4 weeks. Results: The herbal mixture feeding groups (groups 3, 4, and 5) showed lower feed conversion ratios than the negative control. The oregano oil group and the positive control group recorded the highest survival rates. Bloody diarrhea was graded on a scale of 0 to 5: the herbal mixture feeding groups scored 2, 3, and 1, respectively, whereas group 2 (infected, untreated) scored 4. OPG results in the herbal mixture feeding groups were 3 to 4 times higher than in the diclazuril diet feeding group. Conclusions: These results show that the oregano oil and Sanguisorbae radix mixture may have an anticoccidial effect and may also affect chick performance.
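For reference, the feed conversion ratio reported above is conventionally computed as feed intake divided by body weight gain; a minimal sketch follows, with hypothetical group totals rather than the study's data:

```python
def feed_conversion_ratio(feed_intake_kg: float, weight_gain_kg: float) -> float:
    """FCR: feed consumed per unit of body weight gained (lower is better)."""
    return feed_intake_kg / weight_gain_kg

# Hypothetical per-group totals over the 4-week trial.
groups = {"oregano_oil": (55.0, 33.1), "infected_untreated": (58.0, 27.5)}
for name, (feed_kg, gain_kg) in groups.items():
    print(f"{name}: FCR = {feed_conversion_ratio(feed_kg, gain_kg):.2f}")
```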

Keywords: anticoccidial effects, oregano oil-based herb mixture, herbal mixture, antibiotics

Procedia PDF Downloads 550
929 Optimization and Evaluation of Different Pathways to Produce Biofuel from Biomass

Authors: Xiang Zheng, Zhaoping Zhong

Abstract:

In this study, Aspen Plus was used to simulate the whole process of biomass conversion to liquid fuel along different routes, and the main results for material and energy flows were obtained. Process optimization and evaluation were carried out for four routes: cellulosic biomass pyrolysis/gasification to low-carbon olefins followed by olefin oligomerization; biomass hydrothermal depolymerization and polymerization to jet fuel; biomass fermentation to ethanol; and biomass pyrolysis to liquid fuel. The environmental impacts of three biomass feedstocks (poplar wood, corn stover, and rice husk) were compared for the gasification-synthesis pathway. The global warming potential, acidification potential, and eutrophication potential of the three biomasses followed the same order: rice husk > poplar wood > corn stover. In terms of human health hazard potential and solid waste potential, the order was poplar wood > rice husk > corn stover. In the poplar pathway, 100 kg of poplar biomass was input to obtain 11.9 kg of aviation kerosene fraction and 6.3 kg of gasoline fraction. The energy conversion rate of the system was 31.6% when the output product energy included only the aviation kerosene product. In the basic hydrothermal depolymerization process, 14.41 kg of aviation kerosene was produced per 100 kg of biomass. The energy conversion rate of the basic process was 33.09%, which can be increased to 38.47% after the optimal utilization of lignin gasification and steam reforming for hydrogen production. The total exergy efficiency of the system increased from 30.48% to 34.43% after optimization, and the exergy loss mainly came from the concentration of the dilute precursor solution. The global warming potential in the environmental impact assessment is mostly affected by the production process. Poplar wood was used as the raw material in the process of ethanol production from cellulosic biomass. The simulation results showed that 827.4 kg of pretreatment mixture, 450.6 kg of fermentation broth, and 24.8 kg of ethanol were produced per 100 kg of biomass. The power output of boiler combustion reached 94.1 MJ, the unit power consumption in the process was 174.9 MJ, and the energy conversion rate was 33.5%. The environmental impact was mainly concentrated in the production process and agricultural processes. Building on the original biomass pyrolysis-to-liquid-fuel route, the enzymatic hydrolysis lignin residue left over from cellulose fermentation to ethanol was used as the pyrolysis raw material, thereby coupling the fermentation and pyrolysis processes. In the coupled process, 24.8 kg of ethanol and 4.78 kg of upgraded liquid fuel were produced per 100 kg of biomass, with an energy conversion rate of 35.13%.
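A minimal sketch of the energy-conversion-rate bookkeeping used throughout; the heating values are assumed round numbers for illustration, not the study's inputs:

```python
# Energy conversion rate = energy in counted products / energy in feedstock.
LHV_MJ_PER_KG = {"biomass": 18.0, "jet_fuel": 43.0, "gasoline": 44.0}  # assumed values

def energy_conversion_rate(feed_kg, products):
    """products: {name: mass_kg}; ratio of product energy to feedstock energy."""
    product_energy = sum(LHV_MJ_PER_KG[name] * kg for name, kg in products.items())
    return product_energy / (LHV_MJ_PER_KG["biomass"] * feed_kg)

# Counting only the jet-fuel cut versus both liquid cuts, per 100 kg of biomass.
print(energy_conversion_rate(100, {"jet_fuel": 11.9}))
print(energy_conversion_rate(100, {"jet_fuel": 11.9, "gasoline": 6.3}))
```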

Keywords: biomass conversion, biofuel, process optimization, life cycle assessment

Procedia PDF Downloads 66
928 Theoretical Study of Gas Adsorption in Zirconium Clusters

Authors: Rasha Al-Saedi, Anthony Meijer

Abstract:

The development of new porous materials has accelerated rapidly over the past decade for use in applications such as catalysis, gas storage, and removal of environmentally unfriendly species, owing to their high surface area and high thermal stability. In this work, a theoretical study of zirconium-based metal-organic framework (MOF) clusters was carried out in order to determine their potential for adsorption of various guest molecules: CO2, N2, CH4, and H2. The zirconium cluster consists of an inner Zr6O4(OH)4 core in which the triangular faces of the Zr6 octahedron are alternately capped by O and OH groups, bound to nine formate groups and three benzoate linkers. The general formula is [Zr(μ-O)4(μ-OH)4(HCOO)9((phyO2C)3X)], where X = CH2OH, CH2NH2, CH2CONH2, n(NH2); (n = 1-3). Three types of adsorption sites on the Zr metal center have been studied, named according to the capping chemical groups: the '−O site', in which the H of a (μ-OH) site is removed and added to a (μ-O) site; the '−OH site', in which a (μ-OH) site is removed; and the 'void site', in which an H2O molecule is removed, i.e., a (μ-OH) from one site and an H from another (μ-OH) site; in addition to the no-defect versions. A series of investigations has been performed to address this important issue. First, the density functional theory method DFT-B3LYP with the 6-311G(d,p) basis set was employed, using the Gaussian 09 package, in order to evaluate the gas adsorption performance of missing-linker defects in the zirconium cluster. Next, the gas adsorption behaviour on differently functionalised zirconium clusters was studied; the functional groups, as mentioned above, include amines, alcohol, and amide, in comparison with non-substituted clusters. Then, dispersion-corrected density functional theory (DFT-D) calculations were performed to further understand the enhanced gas binding on zirconium clusters. Finally, the effect of water on CO2 and N2 adsorption was studied. The small functionalized Zr clusters were found to show good CO2 adsorption over N2, CH4, and H2, due to the quadrupole moment of CO2, while N2, CH4, and H2 are weakly polar or non-polar. The adsorption efficiency was determined using the dispersion-corrected method, where the adsorption binding improved because interactions such as van der Waals interactions are missing in the conventional DFT method. The calculated gas binding strengths on the no-defect site are higher than those on the −O site, the −OH site, and the void site; this difference is especially notable for CO2. The enhanced affinity for CO2 of the no-defect versions is most likely due to the electrostatic interactions between the negatively charged O of CO2 and the positively charged H of the (μ-OH) metal site. Gas uptake is not enhanced in the presence of water, as the latter binds to the Zr clusters more strongly than the gas species, which is attributed to competition for adsorption sites.
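Underlying such comparisons is a simple binding-energy bookkeeping; a minimal sketch follows, where the total energies in hartree are hypothetical placeholders rather than values computed in this work:

```python
HARTREE_TO_KJ_MOL = 2625.4996  # unit conversion

def binding_energy_kj_mol(e_complex, e_cluster, e_gas):
    """E_bind = E(cluster+gas complex) - E(cluster) - E(gas); negative = favourable."""
    return (e_complex - e_cluster - e_gas) * HARTREE_TO_KJ_MOL

# Hypothetical total electronic energies (hartree) for a cluster, CO2, and their complex.
print(f"{binding_energy_kj_mol(-1234.5821, -1046.0132, -188.5601):.1f} kJ/mol")
```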

Keywords: density functional theory, gas adsorption, metal-organic frameworks, molecular simulation, porous materials, theoretical chemistry

Procedia PDF Downloads 179
927 Peer Group Approach: An Oral Health Intervention from Children for Children at Primary School in Klungkung, Bali, Indonesia

Authors: Regina Tedjasulaksana, Maria Martina Nahak, A. A. Gede Agung, Ni Made Widhiasti

Abstract:

A strategic effort to realize community empowerment in schools is the peer group approach, in which selected students are trained as 'little dentists' so that they gain the knowledge and skills to participate in the school dental health effort (UKGS) program, for instance by providing oral health education to the other students. Aim: To assess the effectiveness of the peer group approach in enhancing the oral health knowledge level of schoolchildren at a primary school in Klungkung, Bali. Methods: An experimental study using a pre-post test design without a control group. The differences in knowledge level, tooth brushing behavior, and oral hygiene status (using the PHP-M index) of 10 students before and after training as little dentists were analyzed using paired t-tests. The correlations between knowledge level and tooth brushing behavior, and between tooth brushing behavior and oral hygiene, before and after training were analyzed using Spearman's correlation. Furthermore, the trained little dentists provided oral health education to 102 students of grades 1 to 5 at their school once a week for 3 months. The students' knowledge level scores for each grade were taken every 21 days, three times in total, and the differences were analyzed using repeated measures analysis. Result: Among the 10 trainees, the mean scores before and after training were 63.05 ± 5.62 and 85.00 ± 7.81 for knowledge level, 31.00 ± 14.49 and 100.00 ± 0.00 for tooth brushing behavior, and 32.80 ± 10.17 and 11.40 ± 8.01 for oral hygiene status (PHP-M index). Knowledge level, tooth brushing behavior, and oral hygiene status before and after training differed significantly (p<0.05). Both before and after training, there were significant correlations between knowledge level and tooth brushing behavior (p<0.05) and between tooth brushing behavior and oral hygiene (p<0.05). The mean knowledge scores among all students before (pre-test) and after (post-tests 1, 2, and 3) receiving oral health education from the little dentists were, for grade 1: 40.00 ± 17.97, 67.85 ± 18.88, 81.72 ± 26.48, and 70.00 ± 22.87; grade 2: 40.00 ± 17.97, 67.85 ± 18.88, 81.72 ± 26.48, and 70.00 ± 22.87; grade 3: 65.83 ± 23.94, 72.50 ± 26.08, 80.41 ± 24.93, and 83.75 ± 19.74; grade 4: 88.57 ± 12.92, 90.71 ± 9.97, 92.85 ± 10.69, and 93.57 ± 6.33; and grade 5: 86.66 ± 13.40, 93.33 ± 9.16, 94.16 ± 10.17, and 98.33 ± 4.81. The knowledge levels of grades 1, 2, and 3 before and after the oral health education differed significantly (p<0.05), whereas there was no significant difference for grades 4 and 5 (p>0.05), although the mean scores increased. Conclusion: The peer group approach can be used to enhance the oral health knowledge level of schoolchildren at a primary school in Klungkung, Bali.
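A minimal sketch of the paired t-test used for the before/after comparisons; the scores below are hypothetical, not the study's data:

```python
from scipy import stats

# Hypothetical knowledge scores of the 10 trainees before and after training.
before = [60, 65, 58, 70, 62, 66, 55, 68, 63, 64]
after = [82, 88, 79, 95, 85, 84, 76, 90, 86, 85]

t_stat, p_value = stats.ttest_rel(before, after)  # paired (dependent-samples) t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```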

Keywords: little dentists, oral health, peer group approach, school children

Procedia PDF Downloads 421
926 Computational Study on Traumatic Brain Injury Using Magnetic Resonance Imaging-Based 3D Viscoelastic Model

Authors: Tanu Khanuja, Harikrishnan N. Unni

Abstract:

The head is the most vulnerable part of the human body, and head impacts may cause severe, life-threatening injuries. As the in vivo brain response cannot be recorded during injury, computational investigation of a head model can be very helpful for understanding the injury mechanism. The majority of the physical damage to living tissue is caused by relative motion within the tissue due to tensile and shearing structural failures. The present finite element study focuses on investigating intracranial pressure and stress/strain distributions resulting from impact loads on various sites of the human head. This is performed by developing a 3D model of a human head with major segments, such as the cerebrum, cerebellum, brain stem, CSF (cerebrospinal fluid), and skull, from patient-specific MRI (magnetic resonance imaging). Semi-automatic segmentation of the head is performed using the AMIRA software to extract the finer grooves of the brain. Maintaining high accuracy requires a large number of mesh elements and hence long computational times; therefore, mesh optimization has been performed using tetrahedral elements. In addition, the model is validated against the experimental literature. Hard tissue such as the skull is modeled as elastic, whereas soft tissue such as the brain is modeled with a viscoelastic Prony series material model. This paper intends to obtain insights into the severity of brain injury by analyzing impacts on the frontal, top, back, and temporal sites of the head. Yield stress (based on the von Mises stress criterion for tissues) and the intracranial pressure distribution due to impact on the different sites (frontal, parietal, etc.) are compared, and the extent of damage to cerebral tissue is discussed in detail. The paper finds that back impact is more injurious to the head overall than impacts at the other sites. The present work should help in understanding the mechanism of traumatic brain injury more effectively.
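For context, a Prony series expresses the relaxation modulus as a decaying sum of exponentials; a minimal sketch with assumed illustrative coefficients (not the fitted values of this study):

```python
import numpy as np

def prony_shear_modulus(t, g_inf, terms):
    """G(t) = G_inf + sum_i G_i * exp(-t / tau_i), with terms = [(G_i, tau_i), ...]."""
    return g_inf + sum(g_i * np.exp(-t / tau_i) for g_i, tau_i in terms)

t = np.linspace(0.0, 0.1, 5)  # time points in seconds
# Assumed illustrative brain-like values: moduli in kPa, time constants in s.
print(prony_shear_modulus(t, g_inf=2.0, terms=[(5.0, 0.01), (3.0, 0.05)]))
```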

Keywords: dynamic impact analysis, finite element analysis, intracranial pressure, MRI, traumatic brain injury, von Mises stress

Procedia PDF Downloads 153
925 Sustainable Wood Harvesting from Juniperus procera Trees Managed under a Participatory Forest Management Scheme in Ethiopia

Authors: Mindaye Teshome, Evaldo Muñoz Braz, Carlos M. M. Eleto Torres, Patricia Mattos

Abstract:

Sustainable forest management planning requires up-to-date information on the structure, standing volume, biomass, and growth rate of trees in a given forest. This kind of information is lacking for many forests in Ethiopia. The objective of this study was to quantify the population structure, diameter growth rate, and standing volume of wood of Juniperus procera trees in the Chilimo forest. A total of 163 sample plots were set up in the forest to collect the relevant vegetation data. Growth ring measurements were conducted on stem disc samples collected from 12 J. procera trees. Diameter and height measurements were recorded for a total of 1399 individual trees with dbh ≥ 2 cm. The growth rate, maximum current and mean annual increments, minimum logging diameter, and cutting cycle were estimated, and alternative cutting cycles were established. Using these data, the harvestable volume of wood was projected by combining four minimum logging diameters with five cutting cycles, following the stand table projection method. The results show that J. procera trees have an average density of 183 stems ha⁻¹, a total basal area of 12.1 m² ha⁻¹, and a standing volume of 98.9 m³ ha⁻¹. The mean annual diameter growth ranges between 0.50 and 0.65 cm year⁻¹, with an overall mean of 0.59 cm year⁻¹. The population of J. procera trees follows a reverse J-shaped diameter distribution. The maximum current annual increment in volume (CAI) occurs at around 49 years, when trees reach 30 cm in diameter. Trees show the maximum mean annual increment in volume (MAI) at around 91 years, at a diameter of 50 cm. The simulation analysis revealed that a 40 cm minimum logging diameter (MLD) and a 15-year cutting cycle are the best combination: it showed the largest harvestable volume of wood, the largest volume increments, and a 35% recovery of the initially harvested volume. It is concluded that the forest is well stocked and holds a large harvestable volume of wood from J. procera trees. This will enable the country to partly meet the national wood demand through domestic wood production. The use of the current population structure and diameter growth data from tree ring analysis enables accurate prediction of the harvestable volume of wood. The developed model provides insight into the productivity of the J. procera tree population and enables policymakers to develop specific management criteria for wood harvesting.
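A minimal sketch of a stand-table projection of the kind described, using the 0.59 cm year⁻¹ mean increment from the text; the diameter classes, stocking, and per-tree volumes are hypothetical illustration values, not the study's inventory:

```python
# Move stems up through diameter classes at the mean diameter increment, then
# sum the volume of stems at or above the minimum logging diameter (MLD).
class_width_cm = 5.0
growth_cm_per_year = 0.59       # mean annual diameter increment (from the text)
cutting_cycle_years = 15
classes_moved = round(growth_cm_per_year * cutting_cycle_years / class_width_cm)

# Hypothetical stems/ha per class midpoint and per-tree volumes (m3).
stems = {12.5: 60, 17.5: 40, 22.5: 25, 27.5: 15, 32.5: 8, 37.5: 4, 42.5: 2}
tree_volume = {22.5: 0.24, 27.5: 0.42, 32.5: 0.66, 37.5: 0.97, 42.5: 1.35, 47.5: 1.8, 52.5: 2.3}

projected = {mid + classes_moved * class_width_cm: n for mid, n in stems.items()}
mld_cm = 40.0
harvestable = sum(n * tree_volume[mid] for mid, n in projected.items() if mid >= mld_cm)
print(f"harvestable volume after {cutting_cycle_years} years ≈ {harvestable:.1f} m3/ha")
```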

Keywords: logging, growth model, cutting cycle, minimum logging diameter

Procedia PDF Downloads 83
924 Microfiber Release During Laundry Under Different Rinsing Parameters

Authors: Fulya Asena Uluç, Ehsan Tuzcuoğlu, Songül Bayraktar, Burak Koca, Alper Gürarslan

Abstract:

Microplastics are contaminants that are widely distributed in the environment and have a detrimental ecological effect. Moreover, recent research has demonstrated the existence of microplastics in human blood and organs. Microplastics in the environment can be divided into two main categories: primary and secondary microplastics. Primary microplastics are plastics that are released into the environment as microscopic particles. Secondary microplastics, on the other hand, are the smaller particles shed through the use of synthetic materials in textile and other products. Textiles are the main source of microplastic contamination in aquatic ecosystems: laundry of synthetic textiles (34.8%) accounts for an average annual discharge of 3.2 million tons of primary microplastics into the environment. Recently, research on microfiber shedding during laundry has gained traction. However, no comprehensive study has analyzed microfiber shedding from the standpoint of the rinsing parameters of the laundry cycle. The purpose of the present study is to quantify microfiber shedding from fabric under different rinsing conditions and to determine the rinsing parameters that affect microfiber release in a laundry environment. In this regard, a parametric study is carried out to investigate the key factors affecting microfiber release from a front-load washing machine. These parameters are the amount of water used during the rinsing step and the spinning speed at the end of the washing cycle. The Minitab statistical program is used to create a design of experiments (DOE) and to analyze the experimental results. Tests are repeated twice, and apart from the controlled parameters, the other washing parameters are kept constant in the washing algorithm. At the end of each cycle, the released microfibers are collected via a custom-made filtration system and weighed with a precision balance. The results show that increasing the amount of water during the rinsing step drastically increases the amount of microplastic released from the washing machine. The parametric study also reveals that increasing the spinning speed increases microfiber release from textiles.
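A minimal sketch of the two-factor analysis such a DOE implies, here with statsmodels instead of Minitab; the masses are hypothetical, not measured values:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical released-microfiber mass (mg) for each factor combination, 2 replicates.
data = pd.DataFrame({
    "water": ["low"] * 4 + ["high"] * 4,
    "spin": ["800", "1400"] * 4,
    "mass_mg": [12.1, 15.3, 12.8, 15.9, 18.4, 22.6, 19.0, 23.1],
})

model = smf.ols("mass_mg ~ C(water) * C(spin)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction
```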

Keywords: front load, laundry, microfiber, microfiber release, microfiber shedding, microplastic, pollution, rinsing parameters, sustainability, washing parameters, washing machine

Procedia PDF Downloads 90
923 Effectiveness of Self-Learning Module on the Academic Performance of Students in Statistics and Probability

Authors: Aneia Rajiel Busmente, Renato Gunio Jr., Jazin Mautante, Denise Joy Mendoza, Raymond Benedict Tagorio, Gabriel Uy, Natalie Quinn Valenzuela, Ma. Elayza Villa, Francine Yezha Vizcarra, Sofia Madelle Yapan, Eugene Kurt Yboa

Abstract:

COVID-19's rapid spread caused dramatic changes in the nation, especially in the educational system. The Department of Education was forced to adopt a practical learning platform that did not neglect health: printed modular distance learning. The Philippines' K-12 curriculum includes Statistics and Probability as one of its key courses, as it gives students the knowledge to evaluate and comprehend data. Students, however, have difficulty understanding the concepts of the normal distribution, and the Self-Learning Module in Statistics and Probability on the normal distribution created by the Department of Education has several problems, including too many activities, unclear illustrations, and insufficient examples of concepts, which make it difficult for learners to accomplish the module. The purpose of this study is to determine the effectiveness of a self-learning module on the academic performance of students in Statistics and Probability; it will also explore students' perception of the quality of the created self-learning module. Despite the availability of self-learning modules in Statistics and Probability in the Philippines, there is still little literature discussing their effectiveness in improving the performance of senior high school students in Statistics and Probability. In this study, a self-learning module on the normal distribution is evaluated using a quasi-experimental design. Grade 11 STEM students from National University's Nazareth School will be the study's participants, chosen by purposive sampling. Google Forms will be utilized to find at least 100 Grade 11 STEM students. The research instruments consist of a 20-item pre- and post-test to assess participants' knowledge and performance regarding the normal distribution, and a Likert scale survey to evaluate how the students perceived the self-learning module. The pre-test, post-test, and Likert scale survey will be utilized to gather data, with Jeffreys' Amazing Statistics Program (JASP) software being used for analysis.

Keywords: self-learning module, academic performance, statistics and probability, normal distribution

Procedia PDF Downloads 99
922 Hot Carrier Photocurrent as a Candidate for an Intrinsic Loss in a Single Junction Solar Cell

Authors: Jonas Gradauskas, Oleksandr Masalskyi, Ihor Zharchenko

Abstract:

Progress in improving the efficiency of conventional solar cells toward the Shockley-Queisser limit seems to be slowing down or reaching a point of saturation. The challenges hindering the reduction of this efficiency gap can be categorized into extrinsic and intrinsic losses, with the former being theoretically avoidable. Among the five intrinsic losses, two of them, the below-Eg loss (resulting from the non-absorption of photons with energy below the semiconductor bandgap) and the thermalization loss, contribute approximately 55% of the overall lost fraction of solar radiation at energy bandgap values corresponding to silicon and gallium arsenide. Efforts to minimize the disparity between theoretically predicted and experimentally achieved efficiencies in solar cells necessitate the integration of innovative physical concepts. Hot carriers (HC) present a contemporary approach to addressing this challenge. The significance of hot carriers in photovoltaics is not fully understood. Although their excess energy is thought to impact a cell's performance only indirectly through thermalization loss, in which the excess energy heats the lattice and causes an efficiency loss, evidence suggests the presence of hot carriers in solar cells. Despite their exceptionally brief lifespan, tangible effects arise from their existence. The study highlights direct experimental evidence of the hot carrier effect induced by both below- and above-bandgap radiation in a single-junction solar cell. The photocurrent flowing across silicon and GaAs p-n junctions is analyzed. The photoresponse consists, on the whole, of three components, caused by electron-hole pair generation, hot carriers, and lattice heating. The last two components counteract the conventional electron-hole-generation current required for successful solar cell operation. In addition, a model based on the temperature coefficient of the voltage of the current-voltage characteristic is used to obtain the hot carrier temperature. The distribution of cold and hot carriers is analyzed with regard to the potential barrier height of the p-n junction. These findings contribute to a better understanding of hot carrier phenomena in photovoltaic devices and are likely to prompt a reevaluation of intrinsic losses in solar cells.
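A minimal sketch of the kind of estimate a temperature-coefficient model permits; the linear relation and all numbers below are illustrative assumptions, not the authors' model or data:

```python
# If the junction voltage shifts by a known temperature coefficient dV/dT (V/K),
# a radiation-induced voltage change dV can be read back as an effective
# carrier-temperature rise: dT = dV / (dV/dT).
def effective_carrier_temperature(t_lattice_k, delta_v, dv_dt):
    return t_lattice_k + delta_v / dv_dt

# Assumed values: 300 K lattice, -2 mV induced shift, -2 mV/K coefficient.
print(effective_carrier_temperature(300.0, -0.002, -0.002))  # -> 301.0 K
```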

Keywords: solar cell, hot carriers, intrinsic losses, efficiency, photocurrent

Procedia PDF Downloads 60
921 Effects of Exercise in the Cold on Glycolipid Metabolism and Insulin Sensitivity in Obese Rats

Authors: Chaoge Wang, Xiquan Weng, Yan Meng, Wentao Lin

Abstract:

Objective: Cold exposure and exercise serve as two physiological stimuli to glycolipid metabolism and insulin sensitivity. So far, it remains to be elucidated whether exercise plus cold exposure can produce an additive effect in promoting glycolipid metabolism and insulin sensitivity. Methods: 64 SD rats were subjected to high-fat and high-sugar diets for 9 weeks to successfully establish an obesity model. They were randomly divided into 8 groups: normal control group (NC), normal exercise group (NE), continuous cold control group (CC), continuous cold exercise group (CE), acute cold control group (AC), acute cold exercise group (AE), intermittent cold control group (IC), and intermittent cold exercise group (IE). For continuous cold exposure, the rats stayed in a cold environment all day; for acute cold exposure, the rats were exposed to cold for only 4 h before the end of the experiment; for intermittent cold exposure, the rats were exposed to cold for 4 h per day. The treadmill running protocol was as follows: 25 m/min speed, 0° slope, 30 min per bout with a 10-min interval between two bouts, twice every two days, lasting for 5 weeks. Sampling was conducted at the end of the 5th week. Blood lipids, free fatty acids, blood glucose (FBG), and serum insulin (FINS) were examined, and the insulin resistance index (HOMA-IR = FBG (mmol/L) × FINS (mIU/L) / 22.5) was calculated. SPSS 22.0 was used for statistical analysis of the experimental results, and ANOVA was performed between groups (p < 0.05 considered significant). Results: (1) Compared with the NC group, FBG was significantly decreased in the NE, CE, AC, AE, and IE groups (p < 0.05), FINS was significantly decreased in the AE group (p < 0.05), and HOMA-IR was significantly decreased in the NE, CE, AC, AE, and IE groups (p < 0.05). Compared with the NE group, FBG was significantly decreased in the CE, AE, and IE groups (p < 0.05), and FINS and HOMA-IR were significantly decreased in the AE group (p < 0.05). (2) Compared with the NC group, CHO, TG, LDL-C, and FFA were significantly decreased in the CE and IE groups (p < 0.05), and HDL-C was significantly higher in the NE, CC, CE, AE, and IE groups (p < 0.05). Compared with the NE group, HDL-C was significantly higher in the CE and IE groups (p < 0.05). Conclusions: Sedentary exposure to, or exercise in, acute cold makes little sense for the treatment of type 2 diabetes, as it leads only to a one-off increase in the body's insulin sensitivity. Exercise in continuous or intermittent cold can effectively decrease FBG, TC, TG, LDL-C, and FFA levels and increase the HDL-C level and insulin sensitivity in obese rats. These results can inform the prevention and treatment of type 2 diabetes.
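The HOMA-IR formula quoted above is straightforward to compute; a minimal sketch (the example values are hypothetical):

```python
def homa_ir(fbg_mmol_per_l: float, fins_miu_per_l: float) -> float:
    """HOMA-IR = FBG (mmol/L) x FINS (mIU/L) / 22.5, as defined in the text."""
    return fbg_mmol_per_l * fins_miu_per_l / 22.5

print(round(homa_ir(6.8, 14.2), 2))  # hypothetical obese-rat values -> 4.29
```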

Keywords: cold, exercise, insulin sensitivity, obesity

Procedia PDF Downloads 136
920 Dy3+ Ions Doped Single and Mixed Alkali Fluoro Tungstate Tellurite Glasses for Laser and White LED Applications

Authors: Allam Srinivasa Rao, Ch. Annapurna Devi, G. Vijaya Prakash

Abstract:

A novel series of white-light-emitting single-alkali and mixed-alkali fluoro tungstate tellurite glasses doped with 1 mol% of Dy3+ ions has been prepared using the melt quenching technique, and their spectroscopic behaviour was investigated through XRD, optical absorption, photoluminescence, and lifetime measurements. The bonding parameter studies reveal the ionic nature of the Dy-O bond in the present glasses. From the absorption spectra, the Judd-Ofelt (J-O) intensity parameters have been determined, which are used to explore the nature of bonding and the symmetry orientation of the Dy-ligand field environment. The evaluated J-O parameters follow the same trend (Ω_4 > Ω_2 > Ω_6) for all the glasses. The photoluminescence spectra of all the glasses exhibit two intense peaks in the blue and yellow regions, corresponding to the transitions 4F9/2→6H15/2 (483 nm) and 4F9/2→6H13/2 (575 nm), respectively. From the photoluminescence spectra, it is observed that the luminescence intensity is maximum for the potassium-containing fluoro tungstate tellurite glass doped with Dy3+ ions (TeWK:1Dy). The J-O intensity parameters have been used to determine the various radiative properties for the different emission transitions from the 4F9/2 fluorescent level. The high emission cross-section and branching ratio values observed for the 4F9/2→6H15/2 and 4F9/2→6H13/2 transitions suggest possible laser action in the visible region from these glasses. Using the experimental lifetimes (τ_exp) measured from the decay spectral features and the radiative lifetimes (τ_R), the quantum efficiencies (η) of all the glasses have been evaluated. Among all the glasses, the potassium-combined fluoro tungstate tellurite glass (TeWK:1Dy) has the highest quantum efficiency (94.6%). The CIE colour chromaticity coordinates (x, y) and (u, v), the correlated colour temperature (CCT), and the Y/B ratio were also estimated from the photoluminescence spectra for the different glass compositions. The (x, y) and (u, v) chromaticity coordinates fall within the white light region, and the white light can be tuned by varying the composition of the glass. From all these studies, we suggest that the TeWK glass doped with 1 mol% of Dy3+ ions is well suited for lasing and white-LED applications.
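A minimal sketch of the yellow-to-blue (Y/B) intensity ratio estimation mentioned above; the synthetic two-peak spectrum stands in for measured photoluminescence data:

```python
import numpy as np

wl = np.linspace(400.0, 700.0, 601)  # wavelength grid (nm)
# Synthetic emission: blue band near 483 nm, yellow band near 575 nm.
spectrum = np.exp(-((wl - 483) / 10) ** 2) + 1.6 * np.exp(-((wl - 575) / 12) ** 2)

blue_band = (wl > 460) & (wl < 510)
yellow_band = (wl > 550) & (wl < 600)
y_over_b = np.trapz(spectrum[yellow_band], wl[yellow_band]) / np.trapz(spectrum[blue_band], wl[blue_band])
print(f"Y/B ratio = {y_over_b:.2f}")
```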

Keywords: dysprosium, Judd-Ofelt parameters, photoluminescence, tellurite glasses

Procedia PDF Downloads 218
919 Seismic Assessment of Non-Structural Component Using Floor Design Spectrum

Authors: Amin Asgarian, Ghyslaine McClure

Abstract:

Experience from past earthquakes has clearly demonstrated the necessity of seismic design and assessment of Non-Structural Components (NSCs), particularly in post-disaster structures such as hospitals and power plants, which have to remain permanently functional and operational. Meeting this objective is contingent upon proper seismic performance of both structural and non-structural components. Proper seismic design, analysis, and assessment of NSCs can be attained through the generation of a Floor Design Spectrum (FDS), in a similar fashion to the target spectrum for structural components. This paper presents a methodology developed to generate the FDS directly from the corresponding Uniform Hazard Spectrum (UHS) (i.e., the design spectrum for structural components). The methodology is based on experimental and numerical analysis of a database of 27 real Reinforced Concrete (RC) buildings located in Montreal, Canada. The buildings were tested by Ambient Vibration Measurements (AVM), and their dynamic properties were extracted and used as part of the approach. The database comprises 12 low-rise, 10 medium-rise, and 5 high-rise buildings, mostly designated as post-disaster/emergency shelters by the city of Montreal. The buildings are subjected to 20 seismic records compatible with the UHS of Montreal, and Floor Response Spectra (FRS) are developed for every floor in two horizontal directions, considering four different damping ratios of NSCs (2, 5, 10, and 20% viscous damping). The generated FRS (approximately 132,000 curves) are statistically studied, and a methodology is proposed to generate the FDS directly from the corresponding UHS. The approach is capable of generating the FDS for any selected floor level and damping ratio of NSCs. It captures the effects of the dynamic interaction between the primary (structural) and secondary (NSC) systems and of the higher and torsional modes of the primary structure. These are important improvements of this approach compared to conventional methods and code recommendations. The application of the proposed approach is illustrated through two real case-study buildings: one low-rise and one medium-rise. The proposed approach can be used as a practical and robust tool for the seismic assessment and design of NSCs, especially in existing post-disaster structures.
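For reference, an individual floor response spectrum is built by sweeping single-degree-of-freedom oscillators over the floor's acceleration history and recording their peak responses; a minimal sketch follows, where the floor motion is synthetic, not one of the study's records:

```python
import numpy as np
from scipy import signal

def floor_response_spectrum(floor_acc, dt, periods, zeta=0.05):
    """Peak absolute acceleration of SDOF oscillators mounted on a floor whose
    acceleration history is floor_acc (m/s^2)."""
    t = np.arange(len(floor_acc)) * dt
    peaks = []
    for period in periods:
        w = 2.0 * np.pi / period
        # State-space SDOF: relative motion driven by -floor_acc;
        # output is absolute acceleration = -(2*zeta*w*v + w^2*u).
        A = [[0.0, 1.0], [-w**2, -2.0 * zeta * w]]
        B = [[0.0], [-1.0]]
        C = [[-w**2, -2.0 * zeta * w]]
        D = [[0.0]]
        _, abs_acc, _ = signal.lsim((A, B, C, D), floor_acc, t)
        peaks.append(np.max(np.abs(abs_acc)))
    return np.array(peaks)

dt = 0.01
t = np.arange(0.0, 10.0, dt)
floor_acc = np.sin(2 * np.pi * 2.0 * t) * np.exp(-0.3 * t)  # synthetic floor motion
print(floor_response_spectrum(floor_acc, dt, periods=[0.2, 0.5, 1.0], zeta=0.05))
```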

Keywords: earthquake engineering, operational and functional components, operational modal analysis, seismic assessment and design

Procedia PDF Downloads 206
918 Pose-Dependency of Machine Tool Structures: Appearance, Consequences, and Challenges for Lightweight Large-Scale Machines

Authors: S. Apprich, F. Wulle, A. Lechler, A. Pott, A. Verl

Abstract:

Large-scale machine tools for the manufacturing of large workpieces, e.g. blades, casings, or gears for wind turbines, feature pose-dependent dynamic behavior. Small structural damping coefficients lead to long decay times for structural vibrations that have negative impacts on the production process. Typically, these vibrations are handled by increasing the stiffness of the structure by adding mass. That is counterproductive to the needs of sustainable manufacturing, as it leads to higher resource consumption both in material and in energy. Recent research activities have achieved higher resource efficiency through radical mass reduction that relies on control-integrated active vibration avoidance and damping methods. These control methods depend on information describing the dynamic behavior of the controlled machine tools in order to tune the avoidance or reduction method parameters according to the current state of the machine. This paper presents the appearance, consequences, and challenges of the pose-dependent dynamic behavior of lightweight large-scale machine tool structures in production. The paper starts with a theoretical introduction to the challenges of lightweight machine tool structures resulting from reduced stiffness. The statement of pose-dependent dynamic behavior is corroborated by the results of an experimental modal analysis of a lightweight test structure. Afterwards, the consequences of the pose-dependent dynamic behavior of lightweight machine tool structures for the use of active control and vibration reduction methods are explained. Based on the state of the art of pose-dependent dynamic machine tool models and a modal investigation of an FE model of the lightweight test structure, the criteria for a pose-dependent model for use in vibration reduction are derived. The paper closes with an outlook on an approach for a general pose-dependent model of the dynamic behavior of large lightweight machine tools that provides the necessary input to the aforementioned vibration avoidance and reduction methods to properly tackle machine vibrations.

Keywords: dynamic behavior, lightweight, machine tool, pose-dependency

Procedia PDF Downloads 454
917 Mathematical Model to Simulate Liquid Metal and Slag Accumulation, Drainage and Heat Transfer in Blast Furnace Hearth

Authors: Hemant Upadhyay, Tarun Kumar Kundu

Abstract:

It is of utmost importance for a blast furnace operator to understand the mechanisms governing liquid flow, accumulation, drainage, and heat transfer between the various phases in the blast furnace hearth for stable and efficient blast furnace operation. Abnormal drainage behavior may lead to a high liquid build-up in the hearth. Operational problems such as pressurization, low wind intake, and lower material descent rates are normally encountered if the liquid levels in the hearth exceed a critical limit at which the hearth coke and deadman start to float. Similarly, the hot metal temperature is an important parameter to be controlled in BF operation; it should be kept at an optimal level to obtain the desired product quality and stable BF performance. Direct measurement of the above is not possible due to the hostile conditions in the hearth, with chemically aggressive hot liquids. The objective here is to develop a mathematical model to simulate the variation in hot metal/slag accumulation and temperature during tapping of the blast furnace, based on the computed drainage rate, production rate, mass balance, and heat transfer between metal and slag, metal and solids, and slag and solids, as well as among the various zones of metal and slag themselves. For modeling purposes, the BF hearth is considered a pressurized vessel filled with solid coke particles. Liquids trickle down into the hearth from the top and accumulate in the voids between the coke particles, which are assumed to be thermally saturated. A set of generic mass balance equations gives the amounts of metal and slag entering the hearth. A small drainage opening (tap hole) is situated at the bottom of the hearth, and the flow rate of liquids from the tap hole is computed taking into account the amounts and levels of both accumulated phases in the hearth, the pressure from gases in the furnace, and the erosion behavior of the tap hole itself. Heat transfer equations describe the exchange of heat between the various layers of liquid metal and slag and the heat loss to the cooling system through the refractories. Based on all this information, a dynamic simulation is carried out that provides real-time information on liquid accumulation in the hearth before and during tapping and on the drainage rate and its variation, predicts critical event timings during tapping, and gives the expected tapping temperatures of metal and slag at preset time intervals. The model is in use at JSPL India BF-II, and its output is regularly cross-checked with actual tapping data, with which it is in good agreement.
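A minimal sketch of the accumulation/drainage bookkeeping such a model builds on; the geometry, inflow, and orifice coefficients are assumed illustrative values, not plant data:

```python
import math

# Toy mass balance: d(level)/dt = (inflow - outflow) / (hearth area * voidage),
# with tap-hole outflow modelled as orifice flow driven by liquid head plus
# furnace gas pressure (all values illustrative).
area_m2, voidage = 80.0, 0.35
inflow_m3_per_min = 0.7            # assumed hot metal production rate
cd, hole_area_m2 = 0.6, 0.003      # discharge coefficient, tap-hole area
rho, g = 7000.0, 9.81              # hot metal density (kg/m3), gravity (m/s2)
gas_pressure_pa = 1.5e5

level_m, dt_min = 0.5, 1.0
for _ in range(60):                # one hour of tapping, 1-minute steps
    head_pa = rho * g * level_m + gas_pressure_pa
    outflow = cd * hole_area_m2 * math.sqrt(2.0 * head_pa / rho) * 60.0  # m3/min
    level_m += (inflow_m3_per_min - outflow) * dt_min / (area_m2 * voidage)
print(f"liquid level after 1 h of tapping ≈ {level_m:.2f} m")
```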

Keywords: blast furnace, hearth, deadman, hot metal

Procedia PDF Downloads 181
916 Mature Field Rejuvenation Using Hydraulic Fracturing: A Case Study of Tight Mature Oilfield with Reveal Simulator

Authors: Amir Gharavi, Mohamed Hassan, Amjad Shah

Abstract:

The main characteristics of unconventional reservoirs are low to ultra-low permeability and low-to-moderate porosity. As a result, hydrocarbon production from these reservoirs requires different extraction technologies than conventional resources. An unconventional reservoir must be stimulated to produce hydrocarbons at an acceptable flow rate in order to recover commercial quantities. Permeability in unconventional reservoirs is mostly below 0.1 mD, and reservoirs with permeability above 0.1 mD are generally considered conventional. The hydrocarbons held in these formations will not naturally move towards producing wells at economic rates without the aid of hydraulic fracturing, which is the only technique for accessing production from such tight reservoirs. Horizontal wells with multi-stage fracking are the key technique to maximize the stimulated reservoir volume and achieve commercial production. The main objective of this research paper is to investigate development options for a tight mature oilfield, including multistage hydraulic fracturing and frac spacing, by building reservoir models in the Reveal simulator to model potential development options based on sidetracking the existing vertical well. To simulate the potential options, reservoir models were built in Reveal; an existing Petrel geological model was used to build the static parts of these models. An FBHP limit of 40 bar was assumed to account for pump operating limits and to maintain the reservoir pressure above the bubble point. Wells of 300 m, 600 m, and 900 m lateral length were modelled, in conjunction with 4, 6, and 8 frac stages. Simulation results indicate that higher initial recoveries and peak oil rates are obtained with longer well lengths as well as with more fracs and wider spacing. For a 25-year forecast, the ultimate recovery ranges from 0.4% to 2.56% for the 300 m and 1000 m laterals, respectively. The 900 m lateral with 8 fracs at 100 m spacing gave the highest peak rate of 120 m³/day, with the 600 m and 300 m cases giving initial peak rates of 110 m³/day. Similarly, the recovery factor for the 900 m lateral with 8 fracs and 100 m spacing was the highest, at 2.65% after 25 years; the corresponding values for the 300 m and 600 m laterals were 2.37% and 2.42%. The study therefore suggests that longer laterals with 8 fracs and 100 m spacing provide the optimal recovery, and this design is recommended as the basis for further study.

Keywords: unconventional resource, hydraulic fracturing

Procedia PDF Downloads 294
915 Pressure-Robust Approximation for the Rotational Fluid Flow Problems

Authors: Medine Demir, Volker John

Abstract:

Fluid equations in a rotating frame of reference have a broad class of important applications in meteorology and oceanography, especially for the large-scale flows considered in the ocean and atmosphere, as well as in many physical and industrial applications. The Coriolis and centripetal forces, resulting from the rotation of the earth, play a crucial role in such systems. For such applications, it may be required to solve the system in complex three-dimensional geometries. In recent years, the Navier-Stokes equations in a rotating frame have been investigated in a number of papers using classical inf-sup stable mixed methods, like Taylor-Hood pairs, to contribute to the analysis and to accurate and efficient numerical simulation. Numerical analysis reveals that these classical methods introduce a pressure-dependent contribution in the velocity error bounds that is proportional to some inverse power of the viscosity. Hence, these methods are optimally convergent, but small velocity errors might not be achieved for complicated pressures and small viscosity coefficients. Several approaches have been proposed for improving the pressure-robustness of pairs of finite element spaces. In this contribution, a pressure-robust space discretization of the incompressible Navier-Stokes equations in a rotating frame of reference is considered. The discretization employs divergence-free, $H^1$-conforming mixed finite element methods like Scott-Vogelius pairs. This approach might, however, require a modification of the meshes, like the use of barycentric-refined grids in the case of Scott-Vogelius pairs. Such a strategy requires the finite element code to have control over the mesh generator, which is not realistic in many engineering applications and might also be in conflict with the solver for the linear system. An error estimate for the velocity is derived that tracks the dependency of the error bound on the coefficients of the problem, in particular on the angular velocity. Numerical examples illustrate the theoretical results. The idea of pressure-robust methods could be carried over to different types of flow problems, which will be considered in future studies. As another future research direction, to avoid a modification of the mesh, one may use a very simple parameter-dependent modification of the Scott-Vogelius element, the pressure-wired Stokes element, such that the inf-sup constant is independent of nearly-singular vertices.
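Schematically, the pressure-dependent contribution mentioned above enters the classical velocity error bound for inf-sup stable pairs in the following well-known generic form, with $C$ a generic constant, $\nu$ the viscosity, and $V_h$, $Q_h$ the discrete velocity and pressure spaces:

```latex
\|\nabla(u - u_h)\|_{L^2} \;\le\; C \Big( \inf_{v_h \in V_h} \|\nabla(u - v_h)\|_{L^2}
\;+\; \frac{1}{\nu}\, \inf_{q_h \in Q_h} \|p - q_h\|_{L^2} \Big)
```

Divergence-free pairs such as Scott-Vogelius eliminate the $\nu^{-1}$ pressure term, which is precisely what makes the discretization pressure-robust.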

Keywords: navier-stokes equations in a rotating frame of reference, coriolis force, pressure-robust error estimate, scott-vogelius pairs of finite element spaces

Procedia PDF Downloads 52
914 Prediction of Sound Transmission Through Framed Façade Systems

Authors: Fangliang Chen, Yihe Huang, Tejav Deganyar, Anselm Boehm, Hamid Batoul

Abstract:

With growing population density and further urbanization, the average noise level in cities is increasing. Excessive noise is not only annoying but also has a negative impact on human health. To deal with increasing city noise, environmental regulations set higher standards for acoustic comfort in buildings by requiring mitigation of noise transmission from the building envelope exterior to the interior. Framed window, door, and façade systems are the leading choice in modern fenestration construction, providing proven weathering reliability, environmental efficiency, and ease of installation. The overall sound insulation of such systems depends on both the glass and the frames; glass usually covers the majority of the exposed surface and is thus the main path of sound energy transmission. Frames in modern façade systems, meanwhile, are becoming slimmer for aesthetic reasons and contribute only a minimal percentage of the exposed surface. Nevertheless, frames may provide substantial transmission paths for sound because much less mass crosses those paths, and they can therefore become the limiting factor for the acoustic performance of the whole system. There are various methodologies and numerical programs that can accurately predict the acoustic performance of either glass or frames. However, due to the vast difference in size and dimensions between frame and glass in the same system, there is no satisfactory theoretical approach or affordable simulation tool in current practice to assess the overall acoustic performance of a whole façade system. For this reason, laboratory testing turns out to be the only reliable source. However, laboratory testing is very time-consuming and costly; moreover, different labs might provide slightly different test results because of variations in test chambers, sample mounting, and test operations, which significantly constrains the early-phase design of framed façade systems. To address this dilemma, this study provides an effective analytical methodology to predict the acoustic performance of framed façade systems, based on a vast amount of acoustic test results on glass, frames, and whole façade systems consisting of both. Further test results validate that the current model is able to accurately predict the overall sound transmission loss of a framed system as long as the acoustic behavior of the frame is available. Though the presented methodology was mainly developed from façade systems with aluminum frames, it can easily be extended to systems with frames of other materials such as steel, PVC, or wood.
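One standard way to combine the contributions of glazing and frame, plausibly related to the method described, is area-weighted averaging of the transmission coefficients; a minimal sketch with hypothetical areas and element STL values, not the authors' exact formulation or data:

```python
import math

def composite_stl(elements):
    """elements: [(area_m2, stl_db), ...]; combine via area-weighted
    transmission coefficients tau = 10**(-STL/10)."""
    total_area = sum(area for area, _ in elements)
    tau = sum(area * 10.0 ** (-stl / 10.0) for area, stl in elements) / total_area
    return -10.0 * math.log10(tau)

# Hypothetical façade: 8 m2 of glazing at 38 dB, 1 m2 of frame at 32 dB.
print(f"overall STL ≈ {composite_stl([(8.0, 38.0), (1.0, 32.0)]):.1f} dB")
```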

Keywords: city noise, building facades, sound mitigation, sound transmission loss, framed façade system

Procedia PDF Downloads 51
913 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements

Authors: Alexander Buhr, Klaus Ehrenfried

Abstract:

Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers, it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving model experiments is lower compared to full scale. In particular, the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up into characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to that of a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving model rig. Small rectangular roughness shapes are used to obtain a sufficient effect on the boundary layer while keeping the elements robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, High-Speed Particle Image Velocimetry (HS-PIV) measurements on an ICE3 train model have been carried out in the moving model rig of the DLR in Göttingen, the so-called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground. The height of the plane corresponds to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness, as well as the momentum thickness and the form factor, are calculated along the train model. Conditional sampling is used to analyse the size and dynamics of the flow structures at the time of maximum velocity in the wake behind the train. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness, and momentum thickness are increased by larger roughness elements, especially when these are applied at heights close to the measuring plane. The roughness elements also cause strong fluctuations in the form factors of the boundary layer. Behind the roughness elements, the form factors rapidly approach constant values. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.
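For reference, the integral boundary layer quantities named above follow directly from a measured velocity profile; a minimal sketch, where the 1/7th-power-law profile is a synthetic stand-in for PIV data:

```python
import numpy as np

def boundary_layer_integrals(y, u, u_edge):
    """Displacement thickness, momentum thickness, and form factor H = delta*/theta
    from a wall-normal velocity profile u(y)."""
    ratio = u / u_edge
    delta_star = np.trapz(1.0 - ratio, y)       # displacement thickness
    theta = np.trapz(ratio * (1.0 - ratio), y)  # momentum thickness
    return delta_star, theta, delta_star / theta

y = np.linspace(0.0, 0.05, 200)        # wall-normal positions (m)
u = 30.0 * (y / 0.05) ** (1.0 / 7.0)   # synthetic profile, edge velocity 30 m/s
print(boundary_layer_integrals(y, u, 30.0))  # H ≈ 1.29 for a 1/7th power law
```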

Keywords: boundary layer, high-speed PIV, ICE3, moving train model, roughness elements

Procedia PDF Downloads 301
912 Change through Stillness: Mindfulness Meditation as an Intervention for Men with Self-Perceived Problematic Pornography Use

Authors: Luke Sniewski, Pante Farvid, Phil Carter, Rita Csako

Abstract:

Background and Aims: Self-perceived problematic pornography use (SPPPU) refers to individuals who identify as, or perceive themselves to be, addicted to pornography. These individuals feel unable to regulate their pornography consumption and experience adverse consequences of their use in everyday life. To the authors' best knowledge, this research represents the first study to address pornography use with a mindfulness meditation intervention, and it aims to investigate the experiences and challenges of men with SPPPU as they engage in such an intervention. As meditation is commonly characterized by sitting and observing one's internal experience with non-reaction and acceptance, the study's principal hypothesis was that consistent meditation practice would develop the participants' capacity to respond to cravings, urges, and unwanted thoughts in less reactive, more productive ways. Method: This 12-week mixed-methods study used a single-case experimental design (SCED) with a standard AB structure. Each participant was randomly assigned an initial baseline period of 2 to 5 weeks before learning the meditation technique and practicing it for the remainder of the 12-week study. The pilot study included 3 participants, and the intervention study included 12. The meditation technique consisted of a 15-minute guided breathing exercise in the morning and a 15-minute guided concentration meditation in the evening. Results: At the time of submission, only pilot study results were available. They indicate an improved capacity for self-awareness of the uncomfortable mental and emotional states that drove the participants' pornography use. Statistically significant reductions were also observed in daily pornography use and total weekly time spent viewing pornography, as well as in Pornography Craving Questionnaire (PCQ) and Problematic Pornography Use Scale (PPUS) scores. Conclusion: The pilot results suggest that meditation could serve as a complementary tool that health professionals provide to clients in conjunction with therapeutic interventions. Study limitations, directions for future research, and clinical implications are also discussed.

Keywords: meditation, behavioural change, pornography, mindfulness

Procedia PDF Downloads 144
911 Interaction Between Task Complexity and Collaborative Learning on Virtual Patient Design: The Effects on Students’ Performance, Cognitive Load, and Task Time

Authors: Fatemeh Jannesarvatan, Ghazaal Parastooei, Jimmy Frerejan, Saedeh Mokhtari, Peter Van Rosmalen

Abstract:

Medical and dental education increasingly emphasizes the acquisition, integration, and coordination of complex knowledge, skills, and attitudes that can be applied in practical situations. Instructional design approaches have focused on using real-life tasks to facilitate complex learning in both real and simulated environments. The four-component instructional design (4C/ID) model has become a useful guideline for designing instructional materials that improve learning transfer, especially in health professions education. The objective of this study was to apply the 4C/ID model to the creation of virtual patients (VPs) that dental students can use to practice their clinical management and clinical reasoning skills. The study first explored the context and concept of complicating factors and common errors for novices and how they can affect the design of a virtual patient program. It then selected key dental information and considered the content needs of dental students. The design of the virtual patients was based on the fundamental principles of the 4C/ID model: designing learning tasks that reflect real patient scenarios, with different levels of task complexity that challenge students to apply their knowledge and skills in different contexts; creating varied supportive learning materials that are closely integrated with the learning tasks and the students' curricula, with cognitive feedback provided at different levels of the program; and providing procedural information through which students follow a step-by-step process from history taking to writing a comprehensive treatment plan. Four virtual patients were designed using these principles, and an experimental design was used to test their effectiveness in achieving the intended educational outcomes. The 4C/ID model provides an effective framework for designing engaging and successful virtual patients that support the transfer of knowledge and skills for dental students. However, there are challenges and pitfalls that instructional designers should take into account when developing such educational tools.

Keywords: 4C/ID model, virtual patients, education, dental, instructional design

Procedia PDF Downloads 76
910 The Use of Information and Communication Technology within and between Emergency Medical Teams during a Disaster: A Qualitative Study

Authors: Badryah Alshehri, Kevin Gormley, Gillian Prue, Karen McCutcheon

Abstract:

In a disaster event, sharing patient information between pre-hospital Emergency Medical Services (EMS) and hospital Emergency Departments (EDs) is a complex process during which important information may be altered or lost due to poor communication. The aim of this study was to critically discuss the current evidence base on communication between pre-hospital EMS and ED professionals through the use of information and communication technology (ICT). The study followed a systematic approach: six electronic databases (CINAHL, Medline, Embase, PubMed, Web of Science, and IEEE Xplore Digital Library) were comprehensively searched in January 2018, and a second search was completed in April 2020 to capture more recent publications. The study selection process was undertaken independently by the study authors. Both qualitative and quantitative studies were chosen that focused on factors positively or negatively associated with coordinated communication between pre-hospital EMS and ED teams in a disaster event. These studies were assessed for quality, and the data were analyzed according to the key themes that emerged from the literature search. Twenty-two studies were included: eleven employed quantitative methods, seven used qualitative methods, and four used mixed methods. Four themes emerged on communication between EMTs (pre-hospital EMS and ED staff) in a disaster event using ICT. (1) Disaster preparedness plans and coordination: disaster plans are in place in hospitals, and in some cases there are interagency agreements with pre-hospital services and relevant stakeholders; however, the plans highlighted in these studies lacked information on coordinated communication within and between the pre-hospital and hospital settings. (2) Communication systems used in disasters: although various communication systems are used within and between hospitals and pre-hospital services, technical issues have hindered communication between teams during disasters. (3) Integrated information management systems: the studies suggest the need for an integrated health information system that can help pre-hospital and hospital staff to record patient data and ensure that the data are shared. (4) Disaster training and drills: while some studies analyzed disaster drills and training, the majority focused on hospital departments other than EMTs; these studies suggest the need for simulation-based disaster training and drills that include EMTs. This review demonstrates that considerable gaps remain in the understanding of communication between EMS and ED hospital staff during disaster response. It shows that although different types of ICT are used, various issues remain that affect coordinated communication among the relevant professionals.

Keywords: emergency medical teams, communication, information and communication technologies, disaster

Procedia PDF Downloads 121
909 The Application of Transcranial Direct Current Stimulation (tDCS) Combined with Traditional Physical Therapy to Address Upper Limb Function in Chronic Stroke: A Case Study

Authors: Najmeh Hoseini

Abstract:

Stroke recovery happens through neuroplasticity, which is highly influenced by the environment, including neuro-rehabilitation. Transcranial direct current stimulation (tDCS) may enhance recovery by modulating neuroplasticity. With tDCS, weak direct currents are applied noninvasively to modify excitability in the cortical areas under its electrodes. Combined with functional activities, this may facilitate motor recovery in neurologic disorders such as stroke. The purpose of this case study was to examine the effect of tDCS combined with 30 minutes of traditional physical therapy (PT) on arm function following a stroke. A 29-year-old male with chronic stroke involving the left middle cerebral artery territory went through the treatment protocol. The design included 5 weeks of treatment: 1 week of traditional PT, 2 weeks of sham tDCS combined with traditional PT, and 2 weeks of tDCS combined with traditional PT. PT included functional electrical stimulation (FES) of the wrist extensors followed by task-specific functional training. Dual-hemispheric tDCS at 1 mA intensity was applied over the sensorimotor cortices for the first 20 minutes of the treatment, combined with FES. Assessments before and after each treatment block included the Modified Ashworth Scale, the Chedoke-McMaster Arm and Hand Inventory, the Action Research Arm Test (ARAT), and the Box and Blocks Test. Results showed reduced spasticity in the elbow and wrist flexors only after the tDCS combination weeks (+1 to 0). The patient demonstrated clinically meaningful improvements in gross and fine motor control over the duration of the study; however, the components of the ARAT that require fine motor control improved the most during the experimental block. The average time improvement compared to baseline was 26.29 s for the tDCS combination weeks, 18.48 s for the sham tDCS weeks, and 6.83 s for the standard-of-care PT weeks. Combining dual-hemispheric tDCS with standard-of-care PT produced improvements in hand dexterity greater than PT alone in this patient case.

Keywords: tDCS, stroke, case study, physical therapy

Procedia PDF Downloads 88
908 Discharge Estimation in a Two Flow Braided Channel Based on Energy Concept

Authors: Amiya Kumar Pati, Spandan Sahu, Kishanjit Kumar Khatua

Abstract:

Rivers are our main source of water; river flow is a form of open-channel flow, and flow in open channels presents many complex phenomena that need to be tackled, such as critical flow conditions, boundary shear stress, and depth-averaged velocity. The development of society depends, more or less, on the flow of rivers, which are major sources of sediments and specific ingredients essential to human beings. A river flow consisting of small, shallow channels may divide and recombine numerous times because of slow water flow or built-up sediments; the pattern formed during this process resembles the strands of a braid. Braided streams form where the sediment load is so heavy that some of the sediments are deposited as shifting islands. Braided rivers often occur near mountainous regions and typically carry coarse-grained, heterogeneous sediments down a fairly steep gradient. In this paper, the apparent shear stress formulae are suitably modified, and the Energy Concept Method (ECM) is applied to predict discharges at the junction of a two-flow braided compound channel; the ECM has not previously been applied to estimate discharge in braided channels. The energy loss in the channels is analyzed on the basis of mechanical analysis. The cross-section of the channel is divided into two sub-areas, namely the main channel below the bank-full level and the region above the bank-full level, for estimating the total discharge. The experimental data are compared with a wide range of theoretical data available in the published literature to verify the model. The accuracy of the approach is also compared with the Divided Channel Method (DCM). Error analysis shows that the relative error is smaller for data sets with smooth floodplains than for those with rough floodplains. Comparisons with other models indicate that the present method has reasonable accuracy for engineering purposes.
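For context, the baseline against which the authors compare their method, the Divided Channel Method, can be sketched in a few lines: the compound section is split at the bank-full level and the discharge of each sub-section is computed independently, here with Manning's equation. The geometry, roughness, and slope values below are illustrative assumptions, not the authors' experimental channel:

```python
import math

def manning_discharge(area, wetted_perimeter, n, slope):
    """Discharge (m^3/s) of one sub-section via Manning's equation (SI)."""
    r = area / wetted_perimeter                 # hydraulic radius
    return (1.0 / n) * area * r ** (2.0 / 3.0) * math.sqrt(slope)

# Divided Channel Method: split the compound section at the bank-full
# level and sum the sub-section discharges. All values illustrative.
subsections = [
    # (flow area m^2, wetted perimeter m, Manning's n)
    (1.20, 2.6, 0.010),   # main channel below the bank-full level
    (0.45, 3.0, 0.014),   # region above the bank-full level
]
bed_slope = 1.0e-3
total_q = sum(manning_discharge(a, p, n, bed_slope)
              for a, p, n in subsections)
print(f"total discharge ~ {total_q:.3f} m^3/s")
```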

Keywords: critical flow, energy concept, open channel flow, sediment, two-flow braided compound channel

Procedia PDF Downloads 122
907 Evidence of the Effect of the Structure of Social Representations on Group Identification

Authors: Eric Bonetto, Anthony Piermatteo, Fabien Girandola, Gregory Lo Monaco

Abstract:

The present contribution focuses on the effect of the structure of social representations on group identification. A social representation (SR) is defined as an organized and structured set of cognitions, produced and shared by members of the same group about the same social object. Within this framework, the central core theory establishes a structural distinction between central cognitions – the 'core' – and peripheral ones: the former are theoretically considered more connected than the latter to group members' social identity and may play a greater role in the ability of SRs to support group identification through a common vision of the object of representation. Indeed, the central core provides a reference point for the in-group, as it constitutes a consensual vision that gives meaning to a social object particularly important to individuals and to the group. However, while numerous contributions clearly refer to the underlying role of SRs in group identification, there is little empirical evidence for this aspect. We therefore hypothesize an effect of the structure of SRs on group identification; more precisely, central cognitions (vs. peripheral ones) will lead to stronger group identification. In addition, we hypothesize that the refutation of a cognition will lead to stronger group identification than its activation. The SR mobilized here is that of 'studying' among a population of first-year undergraduate psychology students. A pretest (N = 82), using an attribute-challenge technique, was designed to identify the central and peripheral cognitions to use in the primes of the main study; its results are in line with previous studies. The main study (online; N = 184), using a social priming methodology, was based on a 2 (structural status of the cognitions in the prime: central vs. peripheral) x 2 (type of prime: activation vs. refutation) experimental design to test our hypotheses. Results revealed, as expected, a main effect of the structure of the SR on group identification: central cognitions trigger a higher level of identification than peripheral ones. However, we observe neither an effect of the type of prime nor an interaction effect. These results provide the first experimental demonstration of the effect of the structure of SRs on group identification and indicate that central cognitions are more connected than peripheral ones to group members' social identity. They will be discussed with respect to the importance of understanding identity as a function of SRs and to their potential to address the lack of attention to the definition of the group in social representations theory.

Keywords: group identification, social identity, social representations, structural approach

Procedia PDF Downloads 187
906 A Statistical-Algorithmic Approach for the Design and Evaluation of a Fresnel Solar Concentrator-Receiver System

Authors: Hassan Qandil

Abstract:

Using a statistical algorithm implemented in MATLAB, four types of non-imaging Fresnel lenses are designed: spot-flat, linear-flat, dome-shaped, and semi-cylindrical. The optimization employs a statistical ray-tracing methodology for the incident light, mainly considering the effects of chromatic aberration, varying focal lengths, solar inclination and azimuth angles, lens and receiver apertures, and the optimum number of prism grooves. Adopting an equal-groove-width assumption for the poly(methyl methacrylate) (PMMA) prisms, the main target is to maximize the ray intensity on the receiver's aperture and thereby achieve higher heat flux. The algorithm outputs prism angles and 2D sketches. 3D drawings are then generated in AutoCAD and linked to the COMSOL Multiphysics software to simulate the lenses under solar ray conditions, which provides optical and thermal analysis at both the lens and receiver apertures, with conditions set according to Dallas, TX weather data. Once the lens characterization is finalized, receivers are designed based on the optimized aperture size. Several cavity shapes, including triangular, arc-shaped, and trapezoidal, are tested in combination with a variety of receiver materials, working fluids, heat transfer mechanisms, and enclosure designs. A vacuum-reflective enclosure is also simulated for enhanced thermal absorption efficiency. Each receiver type is simulated in COMSOL coupled with the optimized lens. A lab-scale prototype of the optimum lens-receiver configuration is then fabricated for experimental evaluation. Application-based testing is also performed for the selected configuration, including a photovoltaic-thermal cogeneration system and a solar furnace system. Finally, future research directions are pointed out, including coupling the collector-receiver system with an end-user power generator and using a multi-layered genetic algorithm for comparative studies.
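The core geometric step in any such ray trace is refraction through a single prism groove. The sketch below is a minimal Python stand-in for that step, not the authors' MATLAB routine: it computes the angular deviation of one ray via Snell's law, and the prism angle, PMMA index, and Gaussian sun-angle sampling are illustrative assumptions:

```python
import math
import random

def prism_deviation(incidence_deg, prism_angle_deg, n_index=1.49):
    """Angular deviation (deg) of a ray through one prism groove.

    Applies Snell's law at a flat entry face and an exit face inclined
    at the prism angle. n_index is the PMMA refractive index; chromatic
    aberration enters through its wavelength dependence.
    """
    i1 = math.radians(incidence_deg)
    a = math.radians(prism_angle_deg)
    r1 = math.asin(math.sin(i1) / n_index)   # refraction at the entry face
    r2 = a - r1                              # internal angle at the exit face
    if abs(n_index * math.sin(r2)) >= 1.0:
        return None                          # total internal reflection
    i2 = math.asin(n_index * math.sin(r2))   # refraction at the exit face
    return math.degrees(i1 + i2 - a)         # deviation D = i1 + i2 - A

# Statistical flavour: sample incidence angles over the solar disc
# (~0.27 deg half-angle) for an assumed 25-degree groove.
random.seed(0)
deviations = [d for d in (prism_deviation(random.gauss(0.0, 0.27), 25.0)
                          for _ in range(1000)) if d is not None]
print(sum(deviations) / len(deviations))
```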

Keywords: COMSOL, concentrator, energy, fresnel, optics, renewable, solar

Procedia PDF Downloads 148
905 An Evolutionary Approach for QAOA for Max-Cut

Authors: Francesca Schiavello

Abstract:

This work aims to create a hybrid algorithm combining the Quantum Approximate Optimization Algorithm (QAOA) with an evolutionary algorithm (EA) in place of traditional gradient-based optimization. QAOA was first introduced in 2014, when the algorithm performed better than the best known classical algorithm for Max-Cut on certain graphs. While classical algorithms have since improved and returned to being faster and more efficient, this was a major milestone for quantum computing, and the original work is often used as a benchmark and as a foundation for exploring QAOA variants. Alongside other famous algorithms such as Grover's or Shor's, it highlights the potential that quantum computing holds and points to a real quantum advantage which, if the hardware continues to improve, could usher in a revolutionary era. Since the hardware is not there yet, many scientists are working on the software side in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they introduce into solutions, the barren plateaus that hinder the optimization search in the latent space, and the limited number of available qubits, which restricts the scale of problems that can be solved. These three issues are intertwined and motivate the use of EAs in this work. First, EAs do not rely on gradient-based or linear optimization methods for the search in the latent space, and because of this freedom from gradients they should suffer less from barren plateaus. Second, since the algorithm searches the solution space through a population of solutions, it can be parallelized to speed up the search; the evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOA with respect to purely classical algorithms. Third, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA using a COBYLA optimizer, a linear-approximation-based method, and in some instances it can even find a better max cut. While the final objective of this work is an algorithm that consistently beats the original QAOA or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization part of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
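To make the hybrid loop concrete, the sketch below shows a gradient-free (mu + lambda)-style EA evolving the 2p QAOA angles. The qaoa_expectation function is a hypothetical stub standing in for a circuit evaluation; it is not the authors' code or any quantum library's API, so only the evolutionary logic is meant literally:

```python
import math
import random

def qaoa_expectation(params):
    """Hypothetical stub for the QAOA Max-Cut expectation <C>(gammas, betas).

    In a real run this would execute the p-layer circuit on a simulator
    or device and return the expected cut value; a smooth toy landscape
    is used here so the evolutionary loop is runnable on its own.
    """
    gammas, betas = params[0::2], params[1::2]
    return sum(math.sin(g) * math.cos(b) for g, b in zip(gammas, betas))

def evolve_angles(layers=2, pop_size=20, generations=50, sigma=0.1):
    """(mu + lambda)-style evolution of the 2p QAOA angles, no gradients."""
    dim = 2 * layers
    population = [[random.uniform(0.0, math.pi) for _ in range(dim)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Each parent spawns one Gaussian-mutated child; keep the best half.
        # The fitness calls are independent, so this step parallelizes well.
        children = [[g + random.gauss(0.0, sigma) for g in ind]
                    for ind in population]
        population = sorted(population + children,
                            key=qaoa_expectation, reverse=True)[:pop_size]
    return population[0]

best = evolve_angles()
print(qaoa_expectation(best))
```

Because selection uses only ranked fitness values, no gradient of the quantum circuit is ever required, which is the property the abstract leans on to sidestep barren plateaus.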

Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization

Procedia PDF Downloads 55
904 The Effect of Affirmative Action in Private Schools on Education Expenditure in India: A Quasi-Experimental Approach

Authors: Athira Vinod

Abstract:

Under the Right to Education Act (2009), the Indian government introduced an affirmative action policy that reserved seats in private schools at the entry level and provided free primary education for children from lower socio-economic backgrounds. Using exogenous variation in membership of a lower social category (disadvantaged groups) and in the year of starting school, this study investigates the effect of exposure to the policy on expenditure on private education. It employs a difference-in-differences strategy with the help of repeated cross-sectional household data from the National Sample Survey (NSS) of India, and it exploits regional variation in exposure by combining the household data with administrative data on schools from the District Information System for Education (DISE). The study compares the outcome across two age cohorts of disadvantaged groups that started school at different times, that is, before and after the policy. Regional variation in exposure is proxied by the district-level enrolment rate under the policy. The study finds that exposure to the policy led to an average reduction in annual private school fees of ₹223. Similarly, a 5% increase in a district's enrolment rate under the policy was associated with a reduction in annual private school fees of ₹240. Furthermore, the effect was larger among households with a higher demand for private education. However, the effect is not due to fees waived through direct enrolment under the policy but rather to an increase in the supply of low-fee private schools in India. The study finds that after the policy, 79,870 additional private schools entered the market in response to increased demand for private education. The new schools, on average, charged lower fees than existing schools and had a higher enrolment of children exposed to the policy. Additionally, the district-level variation in enrolment under the policy was strongly correlated with the entry of new schools, which not only charged low fees but also had higher enrolment under the policy. These results suggest that few disadvantaged children were admitted directly under the policy, but many were attending private schools, largely low-fee ones. This implies that disadvantaged households were willing to pay a lower fee to secure a place in a private school even if they did not receive a free place under the policy.
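The identification strategy reduces to a standard difference-in-differences contrast across group and cohort. The sketch below computes that contrast on synthetic data; the variable names and the simulated ₹223 effect are illustrative assumptions, not the NSS data or the authors' estimation code:

```python
import numpy as np

def did_estimate(outcome, treated, post):
    """Difference-in-differences estimate of a policy effect.

    outcome: e.g., annual private school fees; treated: 1 for
    disadvantaged-group households, 0 otherwise; post: 1 for cohorts
    starting school after the policy. Equals the interaction coefficient
    in the regression outcome ~ treated + post + treated:post.
    """
    outcome, treated, post = map(np.asarray, (outcome, treated, post))
    cell = lambda t, p: outcome[(treated == t) & (post == p)].mean()
    return (cell(1, 1) - cell(1, 0)) - (cell(0, 1) - cell(0, 0))

# Synthetic illustration only: a built-in effect of -223 on fees.
rng = np.random.default_rng(0)
n = 4000
treated = rng.integers(0, 2, n)
post = rng.integers(0, 2, n)
fees = 1500.0 - 223.0 * treated * post + rng.normal(0.0, 100.0, n)
print(did_estimate(fees, treated, post))  # close to -223
```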

Keywords: affirmative action, disadvantaged groups, private schools, right to education act, school fees

Procedia PDF Downloads 110