Search results for: inquiry-based instruction
3403 The Diurnal and Seasonal Relationships of Pedestrian Injuries Secondary to Motor Vehicles in Young People
Authors: Amina Akhtar, Rory O'Connor
Abstract:
Introduction: There remains significant morbidity and mortality among young pedestrians struck by motor vehicles, even in the era of pedestrian crossings and speed limits. The aim of this study was to compare the incidence and injury severity of motor vehicle-related pedestrian trauma by time of day and season in a young population, based on the supposition that injuries would be more prevalent at dusk and dawn and during autumn and winter. Methods: Data were retrieved from the National Trauma Audit and Research Network (TARN) database for patients aged 10-25 years who had been involved as pedestrians in motor vehicle accidents between 2015 and 2020. The incidence of injuries, their severity (using the Injury Severity Score [ISS]), hospital transfer time, and mortality were analysed according to hours of daylight and darkness and by season. Results: The study identified a seasonal pattern: autumn was the predominant season, accounting for 34.9% of injuries, with a further 25.4% in winter, compared with 21.4% in spring and 18.3% in summer. However, visibility alone was not a sufficient factor, as 49.5% of injuries occurred during darkness and 50.5% during daylight. Importantly, the greatest injury rate (number of injuries per hour) occurred between 1500 and 1630, corresponding to school pick-up times. A further significant relationship between ISS and daylight was demonstrated (p = 0.0124), with moderate injuries (ISS 9-14) occurring most commonly during the day (72.7%) and more severe injuries (ISS > 15) most commonly at night (55.8%). Conclusion: We have identified a relationship between time of day and the frequency and severity of pedestrian trauma in young people. In addition, particular time groupings correspond to the greatest injury rate, suggesting that reduced visibility coupled with school pick-up times may play a significant role.
This could be addressed through a targeted public health approach. We recommend road safety measures that focus on these times, increase the visibility of children, and educate drivers.
Keywords: major trauma, paediatric trauma, road traffic accidents, diurnal pattern
Procedia PDF Downloads 101

3402 A Geosynchronous Orbit Synthetic Aperture Radar Simulator for Moving Ship Targets
Authors: Linjie Zhang, Baifen Ren, Xi Zhang, Genwang Liu
Abstract:
Ship detection is of great significance for both military and civilian applications. Synthetic aperture radar (SAR), with its all-day, all-weather, ultra-long-range characteristics, has been used widely. In view of the low temporal resolution of low-orbit SAR and the need for high-temporal-resolution SAR data, geosynchronous orbit (GEO) SAR is attracting increasing attention. Since GEO SAR has a short revisit period and a large coverage area, it is expected to be well suited to monitoring marine ship targets. However, the height of the orbit increases the integration time by almost two orders of magnitude, so for moving marine vessels the utility and efficacy of GEO SAR remain uncertain. This paper investigates the feasibility of GEO SAR by presenting a GEO SAR simulator for moving ships. The simulator is a geometry-based radar imaging simulator that focuses on geometric quality rather than radiometric accuracy. Its inputs are a 3D ship model (.obj format, produced by most 3D design software, such as 3ds Max), the ship's velocity, and the parameters of the satellite orbit and SAR platform. Its outputs are simulated GEO SAR raw signal data and a SAR image. The simulation proceeds in four steps. (1) Read the 3D model, including the ship's rotation (pitch, yaw, and roll) and velocity (speed and direction) parameters, and extract the primitives (triangles) that are visible from the SAR platform. (2) Compute the radar scattering from the ship with the physical optics (PO) method. In this step, the vessel is sliced into many small rectangular primitives along the azimuth, and the radiometric calculation for each primitive is carried out separately. Since the simulator focuses only on the complex structure of ships, only single-bounce and double-bounce reflections are considered. (3) Generate the raw data with GEO SAR signal modeling.
Since the normal 'stop-and-go' model is not valid for GEO SAR, the range model must be reconsidered. (4) Finally, generate the GEO SAR image with an improved range-Doppler method. Numerical simulations of a fishing boat and a cargo ship are given, and GEO SAR images for different ship postures, velocities, satellite orbits, and SAR platforms are simulated. By analyzing these simulated results, the effectiveness of GEO SAR for the detection of moving marine vessels is evaluated.
Keywords: GEO SAR, radar, simulation, ship
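Step (1) of the simulation pipeline amounts to a visibility test on the triangulated mesh. As a minimal sketch (the function name and the simple back-face criterion are our own illustration of the idea, not the authors' code; occlusion between facets is ignored):

```python
import numpy as np

# Keep only the facets of a triangulated ship model that face the radar.
# A triangle is treated as visible when its outward normal has a positive
# component toward the sensor (plain back-face culling).
def visible_facets(triangles, look_dir):
    """triangles: (N, 3, 3) array of vertex coordinates.
    look_dir: unit vector pointing from the ship toward the radar."""
    v0, v1, v2 = triangles[:, 0], triangles[:, 1], triangles[:, 2]
    normals = np.cross(v1 - v0, v2 - v0)   # facet normals (unnormalized)
    facing = normals @ look_dir > 0.0      # positive -> faces the sensor
    return triangles[facing]

# Two facets of a unit cube: one facing +z (toward the radar), one facing -z.
tris = np.array([
    [[0, 0, 1], [1, 0, 1], [0, 1, 1]],   # normal ~ +z, visible
    [[0, 0, 0], [0, 1, 0], [1, 0, 0]],   # normal ~ -z, culled
], dtype=float)
radar_dir = np.array([0.0, 0.0, 1.0])
kept = visible_facets(tris, radar_dir)
```

In a full simulator the look direction would be recomputed per pulse from the satellite ephemeris and the ship's attitude (pitch, yaw, roll).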
Procedia PDF Downloads 177

3401 Thermoelectric Blanket for Aiding the Treatment of Cerebral Hypoxia and Other Related Conditions
Authors: Sarayu Vanga, Jorge Galeano-Cabral, Kaya Wei
Abstract:
Cerebral hypoxia is a condition in which the oxygen supply to the brain is decreased. Patients suffering from this condition experience a decrease in body temperature. While there is currently no cure for cerebral hypoxia, certain procedures are used to aid in the treatment of the condition; regulating body temperature is one of them. Hypoxia is well known to reduce the body temperature of mammals, although the neural origins of this response remain uncertain. To speed recovery, it is necessary to maintain a stable body temperature. In this study, we present an approach to regulating body temperature for patients who suffer from cerebral hypoxia or similar conditions. After a thorough literature study, we propose the use of thermoelectric blankets: temperature-controlled thermal blankets based on thermoelectric devices. These blankets can heat or cool the patient to stabilize body temperature. This is possible through the reversible effect that thermoelectric devices offer while also acting as thermal sensors, and it is an effective way to stabilize temperature. Thermoelectricity is the direct conversion of thermal to electrical energy and vice versa; the conversion of a temperature difference into a voltage is known as the Seebeck effect and is characterized by the Seebeck coefficient. In such a configuration, the device has a cooling side and a heating side whose roles can be interchanged simply by switching the direction of the current input to the system. The design integrates various aspects, including a humidifier, a ventilation machine, IV-administered medication, air conditioning, a circulation device, and a body temperature regulation system. The proposed design includes thermocouples that trigger the blanket to raise or lower the set temperature via a medical temperature sensor.
Additionally, the proposed design provides an efficient way to control fluctuations in body temperature while remaining cost-friendly, with an expected cost of 150 dollars. We are currently developing a prototype to collect thermal and electrical data under different conditions, and we intend to perform an optimization analysis to improve the design further. While this proposal was developed for treating cerebral hypoxia, it can also aid in the treatment of other related conditions, since fluctuations in body temperature are a common symptom of many illnesses.
Keywords: body temperature regulation, cerebral hypoxia, thermoelectric, blanket design
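The regulation scheme described above can be pictured as a simple feedback loop in which the sign of the drive current selects heating or cooling via the device's reversibility. A purely illustrative sketch (the setpoint, deadband, and toy thermal model are our assumptions, not design values from the study):

```python
# Illustrative bang-bang controller for a thermoelectric blanket: the sign
# of the drive current selects heating (+1) or cooling (-1). All numbers
# are hypothetical, for illustration only.
SETPOINT_C = 37.0   # target body temperature [deg C]
DEADBAND_C = 0.2    # hysteresis band to avoid rapid switching

def drive_current(measured_c, setpoint_c=SETPOINT_C, deadband_c=DEADBAND_C):
    """Return +1 (heat), -1 (cool), or 0 (idle) from the sensor reading."""
    if measured_c < setpoint_c - deadband_c:
        return +1
    if measured_c > setpoint_c + deadband_c:
        return -1
    return 0

# Toy simulation: a hypothermic patient's temperature relaxes toward the
# setpoint under the control action (gain in deg C per control step).
temp_c, gain = 35.0, 0.3
for _ in range(20):
    temp_c += gain * drive_current(temp_c)
```

A real device would replace the bang-bang rule with a smoother control law and add safety limits, but the current-direction switching is the essential thermoelectric feature.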
Procedia PDF Downloads 161

3400 Effect of Dietary Sour Lemon Peel Essential Oil on Serum Parameters in Rainbow Trout (Oncorhynchus mykiss) Fingerlings against Deltamethrin Stress
Authors: Maryam Amiri Resketi, Sakineh Yeganeh, Khosro Jani Khalili
Abstract:
The aim of this study was to investigate the effect of dietary lemon peel essential oil (Citrus limon) on serum parameters and liver enzyme activity of rainbow trout (Oncorhynchus mykiss) exposed to deltamethrin. The 96-hour lethal concentration of the toxin for rainbow trout was determined under static conditions according to standard OECD procedures. The 96-hour LC50, obtained by probit analysis, was 0.0082 mg/l. The maximum allowable concentration of deltamethrin in the natural environment was calculated as 0.00082 mg/l and was used for this experiment. Eight treatments were designed based on three levels of lemon essential oil (200, 400, and 600 mg/kg) and two levels of deltamethrin (0 and 0.00082 mg/l). Rainbow trout with an average weight of 95.14 ± 3.8 g were distributed in 300-liter tanks and reared for eight weeks. Fish were fed at 2% of body weight. Water was changed daily (90 percent of the tank volume); for the tanks containing deltamethrin, the appropriate concentration of toxin was added back to the water after each exchange. At the end of the trial, serum biochemical parameters (total protein, albumin, glucose, cholesterol, and triglycerides) and liver enzymes (ALP, AST, ALT, and LDH) were evaluated. In treatments both with and without the toxin, 400 mg/kg of the essential oil increased total protein and albumin levels, and lower cholesterol and triglyceride levels were observed (p < 0.05). In the pesticide-containing treatments, raising the lemon peel essential oil to 400 mg/kg reduced ALP, ALT, and LDH activities compared with the toxin-exposed treatment without essential oil (p < 0.05).
The results showed that the use of lemon peel essential oil in the fish diet can enhance immune parameters and, through its strong antioxidant activity, reduce the effect of deltamethrin on the fish immune system. At an effective dose, it can prevent the adverse effects of the toxin, which otherwise weakens the fish immune system when the pollutant enters fish farms.
Keywords: deltamethrin, Oncorhynchus mykiss, 96-h LC50, lemon peel (Citrus limon) essential oil, serum parameters, liver enzymes
Procedia PDF Downloads 201

3399 Interactions of Socioeconomic Status, Age at Menarche, Body Composition and Bone Mineral Density in Healthy Turkish Female University Students
Authors: Betül Ersoy, Deniz Özalp Kizilay, Gül Gümüşer, Fatma Taneli
Abstract:
Introduction: Peak bone mass is reached in late adolescence in females. Age at menarche influences estrogen exposure, which plays a vital role in bone metabolism. The relationship between age at menarche and bone mineral density (BMD) is still controversial. In this study, we investigated the relationship between age at menarche, BMD, socioeconomic status (SES), and body composition in female university students. Participants and methods: A total of 138 healthy girls in late adolescence (mean age 20.13 ± 0.93 years, range 18-22) were included in this university school-based cross-sectional study in an urban area of the western region of Turkey. Participants were randomly selected to reflect the university students studying in all faculties. We asked the students relevant questions about socioeconomic status and age at menarche. Students were grouped into three SES levels (lower, middle, and higher) according to the educational and occupational levels of their parents using the Hollingshead index. Height and weight were measured, and body mass index (BMI) (kg/m²) was calculated. Dual-energy X-ray absorptiometry (DXA) was performed using the Lunar DPX series, and BMD and body composition were evaluated. Results: The mean age at menarche of the students was 13.09 ± 1.3 years. There was no significant difference between the three socioeconomic groups in height, body weight, age at menarche, BMD [BMD (g/cm²) (L2-L4) and BMD (g/cm²) (total body)], or body composition (lean tissue, fat tissue, total fat, and body fat) (p > 0.05). While no correlation was found between age at menarche and any parameter (p > 0.05), a significant positive correlation was found between lean tissue and BMD L2-L4 (r = 0.286, p = 0.01).
When the relationships were evaluated separately by socioeconomic status, there was a significant correlation between BMD L2-L4 and lean tissue in females with low SES (r = 0.431, p = 0.005), while this relationship disappeared in females with middle and high SES. Conclusion: Age at menarche did not differ by socioeconomic status, nor did BMD and body composition, in females in late adolescence. No relationship was found between age at menarche and BMD or body composition determined by DXA in female university students who were close to reaching peak bone mass. The results suggest that BMD L2-L4 in particular may increase as lean tissue increases.
Keywords: bone, osteoporosis, menarche, DXA
Procedia PDF Downloads 75

3398 Students' Learning Effects in Physical Education between Sport Education Model with TPSR and Traditional Teaching Model with TPSR
Authors: Yi-Hsiang Pan, Chen-Hui Huang, Ching-Hsiang Chen, Wei-Ting Hsu
Abstract:
The purpose of this study was to explore students' learning effects in physical education curricula that merge Teaching Personal and Social Responsibility (TPSR) with either the sport education model or the traditional teaching model; these learning effects included sport self-efficacy, sport enthusiasm, group cohesion, responsibility, and game performance. The participants included 3 high school physical education teachers and 6 physical education classes, 133 participants in total, with 75 students in the experimental group and 58 in the control group; each teacher taught one experimental class and one control class for 16 weeks. Research methods included questionnaires, interviews, and focus group meetings. The research instruments included a personal and social responsibility questionnaire, a sport enthusiasm scale, a group cohesion scale, a sport self-efficacy scale, and a game performance assessment instrument. Multivariate analysis of covariance and repeated-measures ANOVA were used to test the difference in students' learning effects between merging TPSR with the sport education model and with the traditional teaching model. The findings were: 1) The sport education model with TPSR improved students' learning effects, including sport self-efficacy, game performance, sport enthusiasm, group cohesion, and responsibility. 2) The traditional teaching model with TPSR improved students' learning effects, including sport self-efficacy, responsibility, and game performance. 3) The sport education model with TPSR improved more learning effects than the traditional teaching model with TPSR, including sport self-efficacy, sport enthusiasm, responsibility, and game performance. 4) Based on qualitative data about the learning experiences of teachers and students, the sport education model with TPSR significantly improved learning motivation, group interaction, and game sense.
The conclusions indicated that the sport education model with TPSR improves more learning effects in the physical education curriculum. On the other hand, the curricular projects of the hybrid TPSR-sport education model and the TPSR-traditional teaching model are both good curricular projects for moral character education and may be applied in school physical education.
Keywords: character education, sport season, game performance, sport competence
Procedia PDF Downloads 452

3397 Sublethal Effects of Clothianidin and Summer Oil on the Demographic Parameters and Population Projection of Brevicoryne brassicae (Hemiptera: Aphididae)
Authors: Mehdi Piri Ouchtapeh, Fariba Mehrkhou, Maryam Fourouzan
Abstract:
The cabbage aphid, Brevicoryne brassicae (Hemiptera: Aphididae), is an economically important, oligophagous pest of various cole crops. The polyvoltine character of B. brassicae has resulted in resistance to insecticides. For this reason, in this study the sublethal concentrations (LC25) of two insecticides, clothianidin and summer oil, were studied for their effects on the life table parameters and population projection of the cabbage aphid under controlled conditions (20 ± 1 °C, 60 ± 5% R.H., and a photoperiod of 16:8 h (L:D)). The dipping method was used in the bioassay and life table studies: briefly, cabbage leaves holding 15 same-aged (24 h) adult aphids (four replicates) were dipped into the relevant insecticide concentrations for 10 s. The sublethal concentrations (LC25) obtained and used were 5.822 and 108.741 ppm for clothianidin and summer oil, respectively. The biological and life table studies used at least 100, 93, and 82 same-aged eggs for the control, summer oil, and clothianidin treatments, respectively. The life history data of the cabbage aphid cohorts exposed to sublethal concentrations of the aforementioned insecticides were analyzed with the computer program TWOSEX-MSChart, based on the age-stage, two-sex life table theory. The results showed that the insecticides affected the developmental time, survival rate, adult longevity, and fecundity of the F1 generation. The developmental time in the control, clothianidin, and summer oil treatments was 5.91 ± 0.10 days, 7.64 ± 0.12 days, and 6.66 ± 0.10 days, respectively. The sublethal concentration of clothianidin also reduced the adult longevity (8.63 ± 0.30 days), fecundity (14.14 ± 87 nymphs), survival rate (71%), and life expectancy (10.26 days) of B. brassicae. Additionally, applying the LC25 of the insecticides reduced the net reproductive rate (R0) of the cabbage aphid compared with the summer oil and control treatments.
The intrinsic rate of increase (r) (day⁻¹) also decreased in F1 adults of the cabbage aphid compared with the other treatments, and the population projection results were in accordance with the population growth rates. Therefore, the findings of this research showed that, although both insecticides were effective against the cabbage aphid population, clothianidin was more effective and could be considered in the management of this pest.
Keywords: cabbage aphid, sublethal effects, survival rate, population projection, life expectancy
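The intrinsic rate of increase reported by life table software such as TWOSEX-MSChart is the root of the Euler-Lotka equation (in the age-stage, two-sex convention, sum of exp(-r(x+1))·lx·mx = 1). As a purely illustrative sketch with made-up survival (lx) and fecundity (mx) schedules (not data from this study), r can be found by bisection:

```python
import math

# Hypothetical daily age-specific survival (lx) and fecundity (mx)
# schedules; these numbers are invented for illustration only.
lx = [1.00, 0.95, 0.90, 0.80, 0.60, 0.30]
mx = [0.0,  0.0,  2.0,  3.0,  2.0,  0.5]

def euler_lotka_residual(r):
    # Residual of the Euler-Lotka equation, age-stage two-sex convention.
    return sum(math.exp(-r * (x + 1)) * l * m
               for x, (l, m) in enumerate(zip(lx, mx))) - 1.0

def intrinsic_rate(lo=0.0, hi=2.0, tol=1e-10):
    """Bisection: the residual is strictly decreasing in r."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if euler_lotka_residual(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

r = intrinsic_rate()
R0 = sum(l * m for l, m in zip(lx, mx))   # net reproductive rate
```

A sublethal treatment that lowers lx or mx shifts the residual curve down and therefore lowers the root r, which is the pattern the abstract reports for clothianidin.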
Procedia PDF Downloads 79

3396 The Conservation of the Roman Mosaics in the Museum of Sousse, Tunisia: Between Doctrines and Practices
Authors: Zeineb Yousse, Fakher Kharrat
Abstract:
Mosaic is part of a broad, universal cultural heritage; at times it represents an essential source for research on the everyday life of earlier civilizations. Tunisia has one of the finest and largest collections of mosaics in the world, exhibited chiefly in the museums of Bardo and Sousse. Restored and reconstituted, they bear witness to hard work. Our paper deals with the discipline of conservation of Roman mosaics, based on the proceedings of the workshop of the Museum of Sousse. We highlight two main objectives. First, we reveal the techniques adopted by professionals to handle mosaics and identify the school of conservation to which these techniques belong. Second, we interpret the work undertaken to preserve the archaeological heritage in order to protect it in the present and transmit it to future generations. To this end, we examined four Roman mosaics currently exhibited in the Museum of Sousse. These mosaics show various voids or gaps on their surfaces, and the methods used to fill these gaps are worth analyzing. They are known as: Orpheus Charming the Animals, Gladiator and Bears, Stud Farm of Sorothus, and Head of Medusa. The study of their conservation passes through two linked phases. We start with a short historical overview to gather information on each mosaic's original location, the date of its composition, and a description of its image. Afterward, the intervention process is analyzed through three complementary elements: diagnosis of the existing state, study of the treatment of the medium (support), and study of the treatment of the tessellatum surface, which carries the pictorial composition of the mosaic. Furthermore, we have implemented an evaluation matrix of operating principles that allows the appropriateness of each intervention to be assessed.
These principles are: minimal intervention, reversibility, compatibility, visibility, durability, authenticity, and enhancement. The various accumulated outcomes point out the techniques used to fill the gaps as well as their level of compliance with the principles of conservation. Accordingly, the conservation of mosaics in Tunisia is a practice that combines various techniques without really arguing for the choice of one particular theory.
Keywords: conservation, evaluation matrix, Museum of Sousse, operating principles, Roman mosaics
Procedia PDF Downloads 329

3395 Determination of Genotypic Relationship among 12 Sugarcane (Saccharum officinarum) Varieties
Authors: Faith Eweluegim Enahoro-Ofagbe, Alika Eke Joseph
Abstract:
Information on genetic variation within a population is crucial for exploiting heterozygosity in breeding programs that aim to improve crop species. This study was conducted to ascertain the genotypic similarities among twelve sugarcane (Saccharum officinarum) varieties in order to group them for hybridization aimed at cane yield improvement. The experiment was conducted at the Teaching and Research Farm of the Faculty of Agriculture, University of Benin, Benin City. Twelve sugarcane varieties obtained from the National Cereals Research Institute, Badeggi, Niger State, Nigeria, were planted in three replications in a randomized complete block design; each variety was planted on a five-row plot 5.0 m in length. Data were collected on 12 agronomic traits, including the number of millable canes, cane girth, internode length, number of male and female flowers (fuzz), days to flag leaf, days to flowering, brix %, and cane yield. According to the findings, there were significant differences among the twelve genotypes in the number of days to flag leaf, the number of male and female flowers (fuzz), and cane yield. The relationships among the twelve sugarcane varieties were expressed using hierarchical cluster analysis, which grouped the genotypes into three major clusters: cluster I had five genotypes, cluster II four, and cluster III three. Cluster III was dominated by varieties characterized by higher cane yield, number of leaves, internode length, brix %, number of millable stalks, stalks per stool, cane girth, and cane length. Cluster II contained genotypes with early-maturity characteristics, such as early flowering, early flag leaf development, growth rate, and the number of female and male flowers (fuzz). The maximum inter-cluster distance, between clusters III and I, indicated higher genetic diversity between the two groups.
Hybridization between the two groups could produce transgressive recombinants for agronomically important traits.
Keywords: sugarcane, Saccharum officinarum, genotype, cluster analysis, principal components analysis
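The grouping described above can in principle be reproduced with standard hierarchical clustering tools. The sketch below uses made-up trait values and Ward linkage as an assumed method (the abstract does not state which linkage or distance was used) to show the general workflow:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical standardized trait matrix: 12 varieties x 3 traits
# (e.g. cane yield, brix %, days to flowering). Values are invented so
# that three well-separated groups exist, mirroring clusters I-III.
rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0, 0.0],
                    [5.0, 5.0, 0.0],
                    [0.0, 5.0, 5.0]])
sizes = [5, 4, 3]                      # cluster sizes reported in the study
traits = np.vstack([c + 0.3 * rng.standard_normal((n, 3))
                    for c, n in zip(centers, sizes)])

# Ward linkage on Euclidean distances, cut into three clusters.
Z = linkage(traits, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")
```

In practice one would standardize each measured trait before clustering so that traits on large scales (e.g., cane yield) do not dominate the distance.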
Procedia PDF Downloads 80

3394 Two-Dimensional Analysis and Numerical Simulation of the Navier-Stokes Equations for Principles of Turbulence around Isothermal Bodies Immersed in Incompressible Newtonian Fluids
Authors: Romulo D. C. Santos, Silvio M. A. Gama, Ramiro G. R. Camacho
Abstract:
In this paper, the thermo-fluid dynamics of mixed convection (combined natural and forced convection) and the principles of turbulent flow around complex geometries are studied. In these applications, it is necessary to analyze the interaction between the flow field and a heated immersed body with constant surface temperature. The paper presents a study of a Newtonian, incompressible, two-dimensional fluid around an isothermal geometry using the immersed boundary method (IBM) with the virtual physical model (VPM). The numerical code proposed for all simulations handles the calculation of temperature with Dirichlet boundary conditions. Important quantities are computed, including the Strouhal number, which is calculated using the fast Fourier transform (FFT), the Nusselt number, drag and lift coefficients, velocity, and pressure. Streamlines and isotherms are presented for each simulation, showing the flow dynamics and patterns. The Navier-Stokes and energy equations for mixed convection were discretized using the finite difference method in space, with second-order Adams-Bashforth and fourth-order Runge-Kutta methods in time, using the fractional-step method to couple the calculation of pressure, velocity, and temperature. For the simulation of turbulence, this work used the Smagorinsky and Spalart-Allmaras models. The first is based on the local equilibrium hypothesis for small scales and the Boussinesq hypothesis, such that the energy injected into the turbulence spectrum equals the energy dissipated by convective effects. The Spalart-Allmaras model uses a single transport equation for the turbulent viscosity. The results were compared with numerical data, validating the treatment of heat transfer together with the turbulence models. The IBM/VPM is a powerful tool to simulate flow around complex geometries.
The results showed good numerical convergence with respect to the reference data adopted.
Keywords: immersed boundary method, mixed convection, turbulence methods, virtual physical model
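Extracting the Strouhal number from a time signal with the FFT, as mentioned above, can be sketched as follows. The synthetic lift history, shedding frequency, body size, and velocity are invented values for illustration, not results from the paper:

```python
import numpy as np

# Synthetic "lift coefficient" history oscillating at the vortex-shedding
# frequency, with a little noise added on top.
f_shed = 2.0          # shedding frequency [Hz]
U, D = 1.0, 0.1       # free-stream velocity [m/s] and body size [m]
dt, n = 0.01, 4096    # sampling step [s] and number of samples
t = dt * np.arange(n)
cl = (0.5 * np.sin(2 * np.pi * f_shed * t)
      + 0.01 * np.random.default_rng(1).standard_normal(n))

# FFT of the fluctuating part; the dominant peak is the shedding frequency.
spectrum = np.abs(np.fft.rfft(cl - cl.mean()))
freqs = np.fft.rfftfreq(n, d=dt)
f_peak = freqs[np.argmax(spectrum)]

strouhal = f_peak * D / U     # St = f D / U
```

In an actual simulation the input would be the computed lift (or transverse velocity) history after the initial transient has been discarded.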
Procedia PDF Downloads 115

3393 Smart Defect Detection in XLPE Cables Using Convolutional Neural Networks
Authors: Tesfaye Mengistu
Abstract:
Power cables play a crucial role in the transmission and distribution of electrical energy. As electricity generation, transmission, distribution, and storage systems become smarter, there is a growing emphasis on incorporating intelligent approaches to ensure the reliability of power cables. Various types of electrical cables are employed for transmitting and distributing electrical energy, with cross-linked polyethylene (XLPE) cables widely utilized due to their exceptional electrical and mechanical properties. However, insulation defects can occur in XLPE cables due to subpar manufacturing techniques during production and cable joint installation. To address this issue, experts have proposed different methods for monitoring XLPE cables: some suggest the use of interdigital capacitive (IDC) technology for online monitoring, while others propose employing continuous-wave (CW) terahertz (THz) imaging systems to detect internal defects in the XLPE plates used for power cable insulation. In this study, we developed models that employ a locally collected custom dataset to classify the physical safety status of individual power cables. Our models aim to replace physical inspections with computer vision and image processing techniques that distinguish defective power cables from non-defective ones. The implementation used the Python programming language with the TensorFlow package and a convolutional neural network (CNN); a CNN-based algorithm was specifically chosen for power cable defect classification. The results of our project demonstrate the effectiveness of CNNs in accurately classifying power cable defects. We recommend the use of similar or additional datasets to further enhance and refine our models, and we believe our models could be used to develop methodologies for detecting power cable defects from live video feeds.
We firmly believe that our work makes a significant contribution to the field of power cable inspection and maintenance. Our models offer a more efficient and cost-effective approach to detecting power cable defects, thereby improving the reliability and safety of power grids.
Keywords: artificial intelligence, computer vision, defect detection, convolutional neural network
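The core operation a CNN applies to cable images is the 2D convolution. A minimal NumPy sketch of a single 'valid' convolution with an edge-detecting kernel (purely illustrative; the actual network architecture and learned weights from the study are not reproduced here):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Plain 2D cross-correlation ('valid' padding), as used inside a CNN layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 'cable image': dark left half, bright right half -> a vertical edge,
# loosely standing in for an insulation defect boundary.
img = np.zeros((6, 6))
img[:, 3:] = 1.0
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
feature_map = conv2d_valid(img, sobel_x)   # responds strongly at the edge
```

In the trained CNN, many such kernels are learned from the labeled dataset rather than fixed, and their stacked responses feed the final defective/non-defective classification.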
Procedia PDF Downloads 112

3392 Conflation Methodology Applied to Flood Recovery
Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong
Abstract:
Current flood risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being from nuisance flooding, and its long-term effects on communities, are not typically included in risk assessments. An approach was developed that combines the probability of recovering from a severe flooding event with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (&FR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time both from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The &FR model assesses the variation contributed by each independent input and generates a weighted output that favors the distribution with minimum variation; this approach is especially useful when the input distributions have dissimilar variances. The conflation is defined as the single distribution resulting from the normalized product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The &FR model is more accurate than averaging individual observations before calculating the mean and variance, or than averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly the average of the input distributions' means, without the additional information provided by each distribution's variance.
When dealing with exponential distributions, such as resilience to severe flooding events and to nuisance flooding events, conflation results are equivalent to the weighted least squares method or to best linear unbiased estimation. Combining severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from the two different sources, severe flooding events and nuisance flooding events.
Keywords: community resilience, conflation, flood risk, nuisance flooding
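For two normal parent distributions, the conflation (the normalized product of the densities) is again normal with an inverse-variance-weighted mean, which is why it favors the lower-variance input. A small numerical sketch, with invented recovery-time parameters chosen only for illustration:

```python
import numpy as np

# Parent recovery-time distributions (made-up parameters): a wide one for
# severe events and a narrow one for nuisance events. Both are normal here
# so the conflation has a closed form we can check against numerically.
mu1, s1 = 30.0, 10.0     # severe-event recovery time [days]
mu2, s2 = 12.0, 3.0      # nuisance-event recovery time [days]

x = np.linspace(-50.0, 100.0, 200001)
dx = x[1] - x[0]
pdf1 = np.exp(-0.5 * ((x - mu1) / s1) ** 2) / (s1 * np.sqrt(2 * np.pi))
pdf2 = np.exp(-0.5 * ((x - mu2) / s2) ** 2) / (s2 * np.sqrt(2 * np.pi))

# Conflation: normalized pointwise product of the parent densities.
prod = pdf1 * pdf2
conflated = prod / (prod.sum() * dx)
mean_numeric = (x * conflated).sum() * dx

# Closed form for normal parents: inverse-variance-weighted mean.
w1, w2 = 1.0 / s1**2, 1.0 / s2**2
mean_exact = (w1 * mu1 + w2 * mu2) / (w1 + w2)
```

The conflated mean lands between the two parent means but much closer to the narrow (low-variance) distribution, illustrating the "favors minimum variation" behavior described above.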
Procedia PDF Downloads 103

3391 1D/3D Modeling of a Liquid-Liquid Two-Phase Flow in a Milli-Structured Heat Exchanger/Reactor
Authors: Antoinette Maarawi, Zoe Anxionnaz-Minvielle, Pierre Coste, Nathalie Di Miceli Raimondi, Michel Cabassud
Abstract:
Milli-structured heat exchanger/reactors have recently been widely used, especially in the chemical industry, due to their enhanced heat and mass transfer performance compared to conventional apparatus. In our work, the 'DeanHex' heat exchanger/reactor with a 2D-meandering channel is investigated both experimentally and numerically. The square-cross-section channel has a hydraulic diameter of 2 mm. The aim of our study is to model local physico-chemical phenomena (heat and mass transfer, axial dispersion, etc.) for a liquid-liquid two-phase flow in our lab-scale meandering channel, which represents the central part of the heat exchanger/reactor design. The numerical approach is based on a 1D model for the flow channel encapsulated in a 3D model for the surrounding solid, using COMSOL Multiphysics V5.5. Using a 1D approach to model the milli-channel significantly reduces the calculation time compared to 3D approaches, which are generally focused on local effects. Our 1D/3D approach intends to bridge the gap between simulation at the small scale and simulation at the reactor scale at a reasonable CPU cost. The heat transfer process between the 1D milli-channel and its 3D surroundings is modeled. The feasibility of this 1D/3D coupling was verified by comparing simulation results to experimental ones originating from two previous works; the temperature profiles along the channel axis obtained by simulation fit the experimental profiles in both cases. The next step is to integrate the liquid-liquid mass transfer model and validate it against our experimental results. The hydrodynamics of the liquid-liquid two-phase system is modeled using the mixture-model approach, and the mass transfer behavior is represented by an overall volumetric mass transfer coefficient (kLa) correlation obtained from our experimental results in the millimetric meandering channel.
The present work is a first step towards the scale-up of our ‘DeanHex’ reactor, with a view to future industrialization of such equipment. Therefore, a generalized scaled-up model of the reactor comprising all the transfer processes will be built in order to predict the performance of the reactor in terms of conversion rate and energy efficiency at an industrial scale.
Keywords: liquid-liquid mass transfer, milli-structured reactor, 1D/3D model, process intensification
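For pure heat transfer, the 1D channel model described above amounts to a plug-flow energy balance along the channel axis. A minimal sketch of that idea (not the authors' COMSOL model; all parameter values below are hypothetical, water-like illustrations) integrates the axial temperature profile explicitly:

```python
import numpy as np

def channel_temperature_profile(T_in, T_wall, h, d_h, u, rho, cp, L, n=200):
    """Explicit-Euler integration of the 1D plug-flow energy balance
    dT/dx = -(4*h) / (rho * cp * u * d_h) * (T - T_wall)
    for a square channel of hydraulic diameter d_h (perimeter/area = 4/d_h)."""
    x = np.linspace(0.0, L, n)
    dx = x[1] - x[0]
    k = 4.0 * h / (rho * cp * u * d_h)  # inverse thermal length [1/m]
    T = np.empty(n)
    T[0] = T_in
    for i in range(1, n):
        T[i] = T[i - 1] - k * (T[i - 1] - T_wall) * dx
    return x, T

# Illustrative (hypothetical) values: water-like fluid in a 2 mm channel
x, T = channel_temperature_profile(T_in=80.0, T_wall=25.0, h=2000.0,
                                   d_h=2e-3, u=0.1, rho=1000.0,
                                   cp=4180.0, L=0.5)
```

In a full 1D/3D coupling the wall temperature would itself come from the 3D solid-domain solution rather than being a constant.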
Procedia PDF Downloads 130
3390 Superordinated Control for Increasing Feed-in Capacity and Improving Power Quality in Low Voltage Distribution Grids
Authors: Markus Meyer, Bastian Maucher, Rolf Witzmann
Abstract:
The ever-increasing amount of distributed generation in low voltage distribution grids (mainly PV and micro-CHP) can lead to reverse load flows from low to medium/high voltage levels at times of high feed-in. Reverse load flow leads to rising voltages that may even exceed the limits specified in the grid codes. Furthermore, the share of electrical loads connected to low voltage distribution grids via switched-mode power supplies continuously increases. In combination with inverter-based feed-in, this results in high harmonic levels, reducing overall power quality. Especially high levels of third-order harmonic currents can lead to neutral conductor overload, which is even more critical if lines with reduced neutral conductor cross-sections are used. This paper illustrates a possible concept for smart grids in order to increase the feed-in capacity, improve power quality and ensure safe operation of low voltage distribution grids at all times. The key feature of the concept is a hierarchically structured control strategy that is run on a superordinated controller, which is connected to several distributed grid analyzers and inverters via broadband powerline (BPL). The strategy is devised to ensure both quick response time as well as the technically and economically reasonable use of the available inverters in the grid (PV inverters, batteries, stepless line voltage regulators). These inverters are provided with standard features for voltage control, e.g., voltage-dependent reactive power control. In addition, they can receive reactive power set points transmitted by the superordinated controller. To further improve power quality, the inverters are capable of active harmonic filtering as well as voltage balancing, whereas the latter is primarily done by the stepless line voltage regulators. 
By additionally connecting the superordinated controller to the control center of the grid operator, supervisory control and data acquisition capabilities for the low voltage distribution grid are enabled, which allows easy monitoring and manual input. Such a low voltage distribution grid can also be used as a virtual power plant.
Keywords: distributed generation, distribution grid, power quality, smart grid, virtual power plant, voltage control
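The voltage-dependent reactive power control mentioned above is commonly realised as a piecewise-linear Q(U) droop curve. A minimal sketch follows; the deadband and droop parameters are illustrative assumptions, not values from the paper:

```python
def q_of_u(u_pu, q_max=0.44, deadband=0.02, slope_end=0.08):
    """Piecewise-linear Q(U) droop for an inverter.
    Returns a reactive power set point in per unit of rated apparent power:
    zero inside a deadband around 1.0 p.u. voltage, ramping linearly to
    +/- q_max at a voltage deviation of slope_end."""
    dev = u_pu - 1.0
    if abs(dev) <= deadband:
        return 0.0
    ramp = (abs(dev) - deadband) / (slope_end - deadband)
    ramp = min(ramp, 1.0)
    # over-voltage -> absorb reactive power (negative Q, under-excited);
    # under-voltage -> inject reactive power (positive Q, over-excited)
    return -q_max * ramp if dev > 0 else q_max * ramp
```

A superordinated controller could shift or override this local curve by transmitting explicit set points, as described in the concept.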
Procedia PDF Downloads 267
3389 A Literature Review and a Proposed Conceptual Framework for Learning Activities in Business Process Management
Authors: Carin Lindskog
Abstract:
Introduction: Long-term success requires an organizational balance between continuity (exploitation) and change (exploration). The problem of balancing exploitation and exploration is a common issue in studies of organizational learning. In order to better face tough competition in the face of change, organizations need to exploit their current business and explore new business fields by developing new capabilities. The purpose of this work in progress is to develop a conceptual framework to shed light on the relevance of 'learning activities', i.e., exploitation and exploration, on different levels. The research questions that will be addressed are as follows: What sort of learning activities are found in the Business Process Management (BPM) field? How can these activities be linked to the individual, group, and organizational levels? In the work, a literature review will first be conducted. This review will explore the status of learning activities in the BPM field. An outcome from the literature review will be a conceptual framework of learning activities based on the included publications. The learning activities will be categorized according to whether they focus on exploitation, exploration or both, and according to the individual, group, and organizational levels. The proposed conceptual framework will be a valuable tool for analyzing the research field as well as for identifying future research directions. Related Work: BPM has increased in popularity as a way of working to strengthen the quality of work and meet demands for efficiency. Despite this popularity, more and more organizations are reporting BPM failures. One reason for this is a lack of knowledge about the extended scope of BPM to other business contexts that include, for example, more creative business fields. Yet another reason for the failures is employees’ resistance to change. 
The learning process in an organization is an ongoing cycle of reflection and action and is a process that can be initiated, developed and practiced. Furthermore, organizational learning is multilevel; therefore the theory of organizational learning needs to consider the individual, the group, and the organization level. Learning happens over time and across levels, but it also creates a tension between incorporating new learning (feed-forward) and exploiting or using what has already been learned (feedback). Through feed-forward processes, new ideas and actions move from the individual to the group to the organization level. At the same time, what has already been learned feeds back from the organization to a group to an individual and has an impact on how people act and think.
Keywords: business process management, exploitation, exploration, learning activities
Procedia PDF Downloads 124
3388 Practice on Design Knowledge Management and Transfer across the Life Cycle of a New-Built Nuclear Power Plant in China
Authors: Danying Gu, Xiaoyan Li, Yuanlei He
Abstract:
As a knowledge-intensive industry, the nuclear industry highly values safety and quality. The life cycle of a nuclear power plant (NPP) can last 100 years, from initial research and design to decommissioning. How to implement high-quality knowledge management, and thereby contribute to a safer, more advanced and more economic NPP, is the most important issue and responsibility for knowledge management. As the lead of the nuclear industry, the nuclear research and design institute has competitive advantages in its advanced technology, knowledge and information; design knowledge management (DKM) at the nuclear research and design institute is therefore the core of knowledge management in the whole nuclear industry. In this paper, the study and practice of DKM and knowledge transfer across the life cycle of a new-built NPP in China is introduced. For this digital intelligent NPP, the whole design process is based on a digital design platform which includes an NPP engineering and design dynamic analyzer, a visualization engineering verification platform, a digital operation maintenance support platform and a digital equipment design and manufacture integrated collaborative platform. In order to transfer all design data and information across design, construction, commissioning and operation, the overall architecture of the new-built digital NPP should become a modern knowledge management system. A digital information transfer model across the NPP life cycle is therefore proposed in this paper. The challenges related to design knowledge transfer are also discussed, such as digital information handover, data center and data sorting, and a unified data coding system. 
On the other hand, effective delivery of design information during the construction and operation phases will contribute to a comprehensive understanding of design ideas, components and systems by the construction contractor and the operating unit, largely increasing safety, quality and economic benefits over the life cycle. The operation and maintenance records generated during NPP operation are of great significance for maintaining the operating state of the NPP, especially regarding the comprehensiveness, validity and traceability of the records. The requirements for an online monitoring and smart diagnosis system for the NPP are therefore also proposed, to help utility owners improve safety and efficiency.
Keywords: design knowledge management, digital nuclear power plant, knowledge transfer, life cycle
Procedia PDF Downloads 273
3387 Estimation of Forces Applied to Forearm Using EMG Signal Features to Control of Powered Human Arm Prostheses
Authors: Faruk Ortes, Derya Karabulut, Yunus Ziya Arslan
Abstract:
According to recent experimental research, myoelectric features gathered from the muscular environment are preferred for perceiving muscle activation and controlling human arm prostheses. EMG (electromyography) signal-based human arm prostheses have shown promising performance in recent years in terms of providing the basic functional requirements of motion for amputated people. However, these assistive devices for neurorehabilitation still have important limitations in enabling amputated people to perform more sophisticated or functional movements. The surface electromyogram (EMG) is used as the control signal to command such devices. This kind of control consists of activating a motion in the prosthetic arm using the muscle activation for the same particular motion. Extraction of clear and certain neural information from EMG signals plays a major role, especially in the fine control of hand prosthesis movements. Many signal processing methods have been utilized for feature extraction from EMG signals. The specific objective of this study was to compare widely used time-domain features of the EMG signal, including integrated EMG (IEMG), root mean square (RMS) and waveform length (WL), for the prediction of forces externally applied to human hands. The obtained features were classified using artificial neural networks (ANN) to predict the forces. The EMG signals supplied to the process were recorded during isometric and isotonic muscle contractions. Experiments were performed by three healthy, right-handed subjects aged 25-35 years. EMG signals were collected from muscles of the proximal part of the upper body: biceps brachii, triceps brachii, pectoralis major and trapezius. The force prediction results obtained from the ANN were statistically analyzed, and the merits and pitfalls of the extracted features are discussed in detail. 
The obtained results are anticipated to contribute to the classification of EMG signals and to the motion control of powered human arm prostheses.
Keywords: assistive devices for neurorehabilitation, electromyography, feature extraction, force estimation, human arm prosthesis
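The three time-domain features compared in the study have simple closed forms. A minimal sketch of their computation over one analysis window is shown below (illustrative only; windowing, normalisation and the ANN stage are omitted):

```python
import numpy as np

def emg_time_domain_features(x):
    """Compute the three time-domain EMG features compared in the study:
    integrated EMG (IEMG), root mean square (RMS) and waveform length (WL)."""
    x = np.asarray(x, dtype=float)
    iemg = np.sum(np.abs(x))            # total rectified activity
    rms = np.sqrt(np.mean(x ** 2))      # proxy for signal power
    wl = np.sum(np.abs(np.diff(x)))     # cumulative waveform complexity
    return {"IEMG": iemg, "RMS": rms, "WL": wl}
```

In a force-prediction pipeline, one such feature vector per channel and window would form the input to the ANN regressor.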
Procedia PDF Downloads 367
3386 Extracting Opinions from Big Data of Indonesian Customer Reviews Using Hadoop MapReduce
Authors: Veronica S. Moertini, Vinsensius Kevin, Gede Karya
Abstract:
Customer reviews are collected by many kinds of e-commerce websites selling products, services, hotel rooms, tickets and so on. Each website collects its own customer reviews. The reviews can be crawled from those websites and stored as big data. Text analysis techniques can then be used to analyze the data and produce summarized information, such as customer opinions. These opinions can be published by independent service-provider websites and used to help customers choose the most suitable products or services. As the opinions are analyzed from big data of reviews originating from many websites, the results are expected to be more trusted and accurate. Indonesian customers write reviews in the Indonesian language, which has its own structures and uniqueness. We found that most of the reviews are expressed in 'daily language', which is informal, does not follow correct grammar, and contains many abbreviations, slang and non-formal words. Hadoop is an emerging platform aimed at storing and analyzing big data in distributed systems. A Hadoop cluster consists of master and slave nodes/computers operated in a network. Hadoop comes with a distributed file system (HDFS) and the MapReduce framework for supporting parallel computation. However, MapReduce is inefficient for iterative computations; specifically, the cost of reading/writing data (I/O cost) is high. Given this fact, we conclude that MapReduce is best adapted to 'one-pass' computation. In this research, we develop an efficient technique for extracting or mining opinions from big data of Indonesian reviews, based on MapReduce with one-pass computation. In designing the algorithm, we avoid iterative computation and instead adopt a 'look-up table' technique. 
The stages of the proposed technique are: (1) crawling the review data from websites; (2) cleaning the raw reviews and finding root words; (3) computing the frequency of the meaningful opinion words; (4) analyzing customers' sentiments towards defined objects. The experiments for evaluating the performance of the technique were conducted on a Hadoop cluster with 14 slave nodes. The results show that the proposed technique (stages 2 to 4) discovers useful opinions, is capable of processing big data efficiently, and is scalable.
Keywords: big data analysis, Hadoop MapReduce, analyzing text data, mining Indonesian reviews
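The look-up table idea behind stages 2 and 3 can be illustrated in plain Python: a map phase normalises each token through a dictionary (standing in for the root-word/slang tables) and emits (word, 1) pairs in a single pass, and a reduce phase aggregates the counts. The example tokens and tables below are hypothetical stand-ins, not the authors' dictionaries:

```python
from collections import Counter

# Hypothetical look-up tables; a real system would load large dictionaries
# of Indonesian root words, abbreviations and slang normalisations.
NORMALISE = {"bgs": "bagus", "bagus": "bagus", "mantap": "mantap", "jelek": "jelek"}
SENTIMENT = {"bagus": "positive", "mantap": "positive", "jelek": "negative"}

def map_phase(review):
    """One map call per review: normalise tokens via the look-up table and
    emit (opinion_word, 1) pairs -- a single pass, no iteration."""
    for token in review.lower().split():
        root = NORMALISE.get(token)
        if root in SENTIMENT:
            yield (root, 1)

def reduce_phase(pairs):
    """Sum counts per opinion word (what Hadoop's reducer would do)."""
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

reviews = ["barang bgs mantap", "pengiriman jelek", "produk bagus"]
pairs = [p for r in reviews for p in map_phase(r)]
opinions = reduce_phase(pairs)
```

Because the look-up table is broadcast to every mapper, no second MapReduce pass is needed, which is the essence of the one-pass design.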
Procedia PDF Downloads 201
3385 The Human Process of Trust in Automated Decisions and Algorithmic Explainability as a Fundamental Right in the Exercise of Brazilian Citizenship
Authors: Paloma Mendes Saldanha
Abstract:
Access to information is a prerequisite for democracy while also guiding the material construction of fundamental rights. The exercise of citizenship requires knowing, understanding, questioning, advocating for, and securing rights and responsibilities. In other words, it goes beyond mere active electoral participation and materializes through awareness and the struggle for rights and responsibilities in the various spaces occupied by the population in their daily lives. In times of hyper-cultural connectivity, active citizenship is shaped through ethical trust processes, most often established between humans and algorithms. Automated decisions, so prevalent in various everyday situations, such as purchase preference predictions, virtual voice assistants, reduction of accidents in autonomous vehicles, content removal, resume selection, etc., have already found their place as a normalized discourse that sometimes does not reveal or make clear what violations of fundamental rights may occur when algorithmic explainability is lacking. In other words, technological and market development promotes a normalization for the use of automated decisions while silencing possible restrictions and/or breaches of rights through a culturally modeled, unethical, and unexplained trust process, which hinders the possibility of the right to a healthy, transparent, and complete exercise of citizenship. In this context, the article aims to identify the violations caused by the absence of algorithmic explainability in the exercise of citizenship through the construction of an unethical and silent trust process between humans and algorithms in automated decisions. 
As a result, it is expected to find violations of constitutionally protected rights such as privacy, data protection, and transparency, as well as the stipulation of algorithmic explainability as a fundamental right in the exercise of Brazilian citizenship in the era of virtualization, facing a threefold foundation called trust: culture, rules, and systems. To do so, the author will use a bibliographic review in the legal and information technology fields, as well as the analysis of legal and official documents, including national documents such as the Brazilian Federal Constitution, as well as international guidelines and resolutions that address the topic in a specific and necessary manner for appropriate regulation based on a sustainable trust process for a hyperconnected world.
Keywords: artificial intelligence, ethics, citizenship, trust
Procedia PDF Downloads 64
3384 Macroscopic Support Structure Design for the Tool-Free Support Removal of Laser Powder Bed Fusion-Manufactured Parts Made of AlSi10Mg
Authors: Tobias Schmithuesen, Johannes Henrich Schleifenbaum
Abstract:
The additive manufacturing process laser powder bed fusion (LPBF) offers many advantages over conventional manufacturing processes. For example, almost any complex part can be produced, such as topologically optimized lightweight parts, which would be inconceivable with conventional manufacturing processes. A major challenge posed by the LPBF process, however, is in most cases the need to use and remove support structures on critically inclined part surfaces (α < 45° relative to the substrate plate). These are mainly used for the dimensionally accurate mapping of part contours and to reduce distortion by absorbing process-related internal stresses. Furthermore, they serve to transfer the process heat to the substrate plate and are therefore indispensable for the LPBF process. A major challenge for the economical use of the LPBF process in industrial process chains is currently still the high manual effort involved in removing support structures. According to the state of the art (SoA), the parts are usually treated with simple hand tools (e.g., pliers, chisels) or by machining (e.g., milling, turning). New automatable approaches are the removal of support structures by means of wet chemical ablation and thermal deburring. According to the SoA, support structures are essentially adapted to the LPBF process and not to potential post-processing steps. The aim of this study is the determination of support structure designs that are adapted to the mentioned post-processing approaches. In the first step, the essential boundary conditions for complete removal by means of the respective approaches are identified. Afterward, a representative demonstrator part with various macroscopic support structure designs will be LPBF-manufactured and tested with regard to complete powder and support removability. Finally, based on the results, potentially suitable support structure designs for the respective approaches will be derived. 
The investigations are carried out on the example of the aluminum alloy AlSi10Mg.
Keywords: additive manufacturing, laser powder bed fusion, laser beam melting, selective laser melting, post processing, tool-free, wet chemical ablation, thermal deburring, aluminum alloy, AlSi10Mg
Procedia PDF Downloads 91
3383 Construction and Demolition Waste Management in Indian Cities
Authors: Vaibhav Rathi, Soumen Maity, Achu R. Sekhar, Abhijit Banerjee
Abstract:
The construction sector in India is extremely resource- and carbon-intensive. It contributes significantly to national greenhouse gas emissions. At the resource end, the industry consumes significant portions of the output from mining. Resources such as sand and soil are the most exploited, and their rampant extraction is a constant source of impact on the environment and society. Cement is another resource used in abundance in building and construction and has a direct impact on limestone resources. Though India is rich in cement-grade limestone, efforts have to be made towards sustainable consumption of this resource to ensure future availability. The high-volume use of these resources in India is a result of rapid urbanization. More cities have grown to a population of over a million in the last decade, and these million-plus cities are growing further. To cater to the needs of the growing urban population, construction activities are inevitable in the coming future, thereby increasing material consumption. Increased construction will also lead to a substantial increase in end-of-life waste generation from construction and demolition (C&D). Proper management of C&D waste therefore has the potential to reduce environmental pollution as well as contribute to resource efficiency in the construction sector. The present study deals with the estimation, characterisation and documentation of current management practices for C&D waste in 10 Indian cities of different geographies and classes. Based on primary data, the study draws conclusions on the potential of C&D waste to be used as an alternative to primary raw materials. The estimation results show that India generates 716 million tons of C&D waste annually, making the country the second largest C&D waste generator in the world after China. The study also aimed at the utilization of C&D waste in building materials. 
The waste samples collected from various cities have been used to replace 100% of stone aggregates in paver blocks without any decrease in strength. However, management practices for C&D waste in cities remain poor despite the notification of rules and regulations for C&D waste management. Only a few cities have managed to install processing plants and set up management systems for C&D waste. Therefore, there is immense opportunity for the management and reuse of C&D waste in Indian cities.
Keywords: building materials, construction and demolition waste, cities, environmental pollution, resource efficiency
Procedia PDF Downloads 304
3382 Partial Least Square Regression for High-Dimensional and High-Correlated Data
Authors: Mohammed Abdullah Alshahrani
Abstract:
The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. 
Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.
Keywords: partial least square regression, genetics data, negative filter factors, high dimensional data, high correlated data
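As a sketch of how PLS constructs latent components for p >> n data, a minimal NIPALS-style PLS1 implementation is shown below. This is the textbook algorithm, not the authors' sparse-penalty method, and the simulated data are purely illustrative:

```python
import numpy as np

def pls1(X, y, n_components):
    """Minimal NIPALS PLS1: extract latent components of X that have maximal
    covariance with y, then form the implied regression coefficients."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xk, yk = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        w = w / np.linalg.norm(w)        # weight vector
        t = Xk @ w                       # score (latent variable)
        p = Xk.T @ t / (t @ t)           # X loading
        qa = (yk @ t) / (t @ t)          # y loading
        Xk = Xk - np.outer(t, p)         # deflate X and y
        yk = yk - qa * t
        W.append(w); P.append(p); q.append(qa)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)  # coefficients on centered X
    return B, x_mean, y_mean

# Illustrative high-dimensional, highly correlated data (p >> n):
# 100 predictors driven by only 2 latent factors
rng = np.random.default_rng(0)
n, p = 20, 100
latent = rng.normal(size=(n, 2))
X = latent @ rng.normal(size=(2, p)) + 0.01 * rng.normal(size=(n, p))
y = latent[:, 0] - latent[:, 1]
B, x_mean, y_mean = pls1(X, y, n_components=2)
y_hat = (X - x_mean) @ B + y_mean
```

Two components suffice here because the predictors share a two-dimensional latent structure, which is exactly the regime where OLS fails and PLS shines.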
Procedia PDF Downloads 49
3381 Emerging Technologies for Learning: In Need of a Pro-Active Educational Strategy
Authors: Pieter De Vries, Renate Klaassen, Maria Ioannides
Abstract:
This paper reports on explorative research into the use of emerging technologies for teaching and learning in higher engineering education. The assumption is that these technologies and applications, which are not yet widely adopted, will help to improve education and, as such, help address the skills mismatch troubling our industries. Technologies such as 3D printing, the Internet of Things, Virtual Reality, and others are in a dynamic state of development, which makes it difficult to grasp their value for education. Also, the instruments in current educational research seem inappropriate for assessing the value of such technologies. This explorative research aims to foster an approach that better deals with this new complexity. The need to find out is urgent because these technologies will be dominantly present in the near future in all aspects of life, including education. The methodology used in this research comprised an inventory of emerging technologies and tools that potentially give way to innovation and are used, or are about to be used, in technical universities. The inventory was based on a literature review as well as a review of reports and web resources such as blogs, and included a series of interviews with stakeholders in engineering education and at representative industries. In addition, a number of small experiments were executed with the aim of analyzing the requirements for the use of, in this case, Virtual Reality and the Internet of Things, to better understand the opportunities and limitations in the day-to-day learning environment. The major findings indicate that it is rather difficult to decide about the value of these technologies for education due to their dynamic state of change, and therefore unpredictability, and the lack of a coherent policy at the institutions. 
Most decisions are being made by teachers on an individual basis, who in their micro-environment are not equipped to select, test and ultimately decide about the use of these technologies. Most experience is being gained in industry, where the skills to handle these technologies are in high demand. The industry, though, is worried about the inclination and capability of education to help bridge the skills gap related to the emergence of new technologies. Due to the complexity, diversity, speed of development and decay, education is challenged to develop an approach that can make these technologies work in an integrated fashion. For education to fully profit from the opportunities these technologies offer, it is imperative to develop a pro-active strategy and a sustainable approach to frame emerging technologies development.
Keywords: emerging technologies, internet of things, pro-active strategy, virtual reality
Procedia PDF Downloads 191
3380 A PHREEQC Reactive Transport Simulation for Simply Determining Scaling during Desalination
Authors: Andrew Freiburger, Sergi Molins
Abstract:
Freshwater is a vital resource; yet the supply of clean freshwater is diminishing as a consequence of melting snow and ice from global warming, pollution from industry, and increasing demand from human population growth. This unsustainable trajectory of diminishing water resources is projected to jeopardize water security for billions of people in the 21st century. Membrane desalination technologies may resolve the growing discrepancy between supply and demand by filtering arbitrary feed water into a fraction of renewable, clean water and a fraction of highly concentrated brine. The leading hindrance to membrane desalination is fouling, whereby the highly concentrated brine solution encourages micro-organismal colonization and/or the precipitation of occlusive minerals (i.e., scale) upon the membrane surface. Thus, an understanding of brine formation is necessary to mitigate membrane fouling and to develop efficacious desalination technologies that can bolster the supply of available freshwater. This study presents a reactive transport simulation of brine formation and scale deposition during reverse osmosis (RO) desalination. The simulation conceptually represents the RO module as a one-dimensional domain, where feed water enters the domain with a prescribed fluid velocity and is iteratively concentrated in the immobile layer of a dual porosity model. The geochemical code PHREEQC numerically evaluated the conceptual model with parameters for the BW30-400 RO module and for real feed water sources, e.g., the Red and Mediterranean seas and produced waters from American oil wells, based upon peer-reviewed data. The presented simulation is computationally simpler, and hence less resource-intensive, than existing and more rigorous simulations of desalination phenomena, such as TOUGHREACT. The end-user may readily prepare input files and execute simulations on a personal computer with open-source software. 
The graphical results of fouling potential and brine characteristics may therefore be particularly useful as an initial tool for screening candidate feed water sources and/or informing the selection of an RO module.
Keywords: desalination, PHREEQC, reactive transport, scaling
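The iterative concentration of feed water into brine can be sketched independently of PHREEQC: at water recovery r, conserved ions are concentrated by a factor CF = 1/(1 - r), and a mineral's ion product can be compared against its solubility product. The sketch below flags the recovery at which gypsum (CaSO4·2H2O) scale becomes thermodynamically favourable. The feed composition is a hypothetical brackish-water-like example, and activity corrections, which PHREEQC computes rigorously, are deliberately neglected here:

```python
def brine_concentration(feed_molal, recovery_steps, ksp_gypsum=10 ** -4.58):
    """Concentrate feed ions as water recovery r increases (CF = 1/(1 - r))
    and flag gypsum scaling when the Ca*SO4 ion product exceeds Ksp.
    Simplified: molalities are used in place of activities."""
    results = []
    for r in recovery_steps:
        cf = 1.0 / (1.0 - r)                  # concentration factor
        ca = feed_molal["Ca"] * cf
        so4 = feed_molal["SO4"] * cf
        results.append({"recovery": r, "CF": cf,
                        "ion_product": ca * so4,
                        "gypsum_scaling": ca * so4 > ksp_gypsum})
    return results

# Illustrative (hypothetical) brackish feed, molal units
feed = {"Ca": 0.002, "SO4": 0.002}
profile = brine_concentration(feed, recovery_steps=[0.0, 0.3, 0.5, 0.7, 0.9])
```

In the full simulation, the dual-porosity immobile layer plays the role of this progressive concentration, and PHREEQC's saturation indices replace the crude ion-product check.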
Procedia PDF Downloads 136
3379 Practice of Developing EFL Coursebooks at Mongolian National University of Education
Authors: Nyamsuren Baljinnyam, Narmandakh Khaltar, Otgonbaatar Olzkhuu
Abstract:
Undergraduate students take English I (elective) and English II (compulsory), which are included among the general foundation courses in the Teacher Education Curriculum Framework at the Mongolian National University of Education (MNUE). Teachers at the English Department have designed and developed two levels of English coursebooks (from pre-intermediate to upper-intermediate) since 2016 and published the second editions of each in 2018 and 2019. Developing coursebooks based on students' needs and their satisfaction or dissatisfaction with these instructional materials is an essential phenomenon in delivering English teaching at the tertiary level. Thus, this study presents findings on students' views of the English coursebooks, which are studied mostly in the first and second semesters of the undergraduate academic program. The purpose of this research project was to determine the overall pedagogical value and suitability of the books to students' needs and 21st-century teacher education concepts. We designed a coursebook evaluation checklist with 28 questionnaire items, drawing on Morris's English as a foreign language coursebook evaluation checklist (2017). The study is a two-phase descriptive survey covering 572 and 519 undergraduate students who studied in the spring term of the 2021-2022 academic year and the fall term of the 2022-2023 academic year at 7 branch schools of MNUE. The data analysis consists of student responses to each item. The coursebook evaluation data are classified into 3 main categories: 'general attributes', 'learning content' and 'task evaluation'. 
Some results of the study indicate the following findings: 97 percent of the survey participants (1091 in total) responded positively that the coursebooks are fully aimed at developing the students' language skills of reading, writing, listening, and speaking; 78 percent responded that the coursebooks were different from the English textbooks they learned from in secondary school; and 91 percent answered that the English coursebooks could motivate students in their self-study.
Keywords: coursebook evaluation, improving English, student satisfaction and dissatisfaction with coursebooks, language learning materials, language tasks, students’ needs
Procedia PDF Downloads 9
3378 A Simulated Evaluation of Model Predictive Control
Authors: Ahmed AlNouss, Salim Ahmed
Abstract:
Process control refers to the techniques used to control the variables in a process in order to maintain them at their desired values. Advanced process control (APC) is a broad term within the domain of control that refers to different kinds of process control and control-related tools, for example, model predictive control (MPC), statistical process control (SPC), fault detection and classification (FDC) and performance assessment. APC is often used for solving multivariable control problems, and model predictive control (MPC) is one of only a few advanced control methods used successfully in industrial control applications. Advanced control is expected to bring many benefits to plant operation; however, the extent of the benefits is plant-specific, and the application needs a large investment. This requires an analysis of the expected benefits before the implementation of the control. In a real plant, simulation studies are carried out along with some experimentation to determine the improvement in the performance of the plant due to advanced control. In this research, such an exercise is undertaken to assess the need for APC application. The main objectives of the paper are as follows: (1) to apply MPC to a number of simulation setups, and to demonstrate the need for MPC by comparing its performance with that of proportional-integral-derivative (PID) controllers; (2) to study the effect of controller parameters on control performance; (3) to develop appropriate performance indices (PI) to compare the performance of different controllers, and to develop a novel way to present a tuning map of a controller. These objectives were achieved by applying a PID controller and a special type of MPC, namely dynamic matrix control (DMC), to the multi-tank process simulated in Loop-Pro. The controller performance was then evaluated by changing the controller parameters. 
The evaluation was based on indices derived from the difference between set point and process variable, allowing the two controllers to be compared. The same principle was applied to continuous stirred tank heater (CSTH) and continuous stirred tank reactor (CSTR) processes simulated in MATLAB; for these processes, dedicated programs were written to evaluate the performance of the PID and MPC controllers. Finally, the performance indices, together with the corresponding controller parameters, were plotted using SigmaPlot. As a result, the improvement in the performance of the control loops was quantified using relevant indices to justify the need for and importance of advanced process control. It was also shown that, with appropriate indices, a predictive controller can improve control-loop performance significantly.
Keywords: advanced process control (APC), control loop, model predictive control (MPC), proportional-integral-derivative (PID), performance indices (PI)
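The abstract does not define the indices explicitly; a minimal sketch, assuming classical error-integral indices (IAE, ISE, ITAE) computed from a recorded closed-loop response, is shown below. All function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def _trapezoid(y, x):
    """Trapezoidal integration of samples y over points x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def performance_indices(t, pv, sp):
    """Error-integral performance indices for a control loop.

    t  : sample times
    pv : process-variable trajectory
    sp : set point (scalar or trajectory)
    Returns (IAE, ISE, ITAE); smaller values mean better tracking.
    """
    e = np.asarray(sp, dtype=float) - np.asarray(pv, dtype=float)
    iae = _trapezoid(np.abs(e), t)        # integral of absolute error
    ise = _trapezoid(e ** 2, t)           # integral of squared error
    itae = _trapezoid(t * np.abs(e), t)   # time-weighted absolute error
    return iae, ise, itae

# Toy first-order responses to a unit set-point step: a sluggish loop
# accumulates more error than a fast one.
t = np.linspace(0.0, 20.0, 2001)
fast = 1.0 - np.exp(-t / 1.0)   # time constant 1
slow = 1.0 - np.exp(-t / 4.0)   # time constant 4
iae_fast = performance_indices(t, fast, 1.0)[0]   # close to 1.0
iae_slow = performance_indices(t, slow, 1.0)[0]   # close to 4.0
```

An index of this kind collapses an entire response into a single number, which is what makes it possible to plot index values against controller parameters and read the result as a tuning map.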
Procedia PDF Downloads 407
3377 Advances in Health Risk Assessment of Mycotoxins in Africa
Authors: Wilfred A. Abiaa, Chibundu N. Ezekiel, Benedikt Warth, Michael Sulyok, Paul C. Turner, Rudolf Krska, Paul F. Moundipa
Abstract:
Mycotoxins are a wide range of toxic secondary metabolites of fungi that contaminate various food commodities worldwide, especially in sub-Saharan Africa (SSA). Such contamination seriously compromises food safety and quality, posing a serious problem for human health as well as for trade and the economy. Mycotoxin concentrations depend on various factors, such as the commodity itself, climatic conditions, storage conditions, seasonal variation, and processing methods. When humans consume foods contaminated by mycotoxins, the toxins exert effects on their health through various modes of action. Rural populations in sub-Saharan Africa are exposed to dietary mycotoxins, but exposure levels and the associated health risks are presumed to vary between SSA countries. Dietary exposure and health risk assessment studies have been limited by a lack of equipment for properly assessing the health implications for populations consuming contaminated agricultural products. As such, mycotoxin research is still at an early stage in several SSA nations, and evaluation of products for mycotoxin loads below/above legislative limits is inadequate. Few nations have health risk assessment reports, and these are mainly based on direct quantification of the toxins in foods ('external exposure'), linking food levels with data from food frequency questionnaires. Nonetheless, assessing exposure and health risk from mycotoxins requires more than these traditional approaches. Only a fraction of the mycotoxins in contaminated foods reaches the bloodstream and exerts toxicity ('internal exposure'). Moreover, internal exposure is usually smaller than external exposure, so relying on external exposure alone may introduce confounding into risk assessment.
Earlier studies from SSA focused on biomarker analysis, mainly of aflatoxins, while a few recent studies have concentrated on multi-biomarker analysis of exposures in urine, providing probable associations between observed disease occurrence and dietary mycotoxin levels. As a result, techniques that can assess exposure levels directly in body tissue or fluid, and possibly link them to an individual's disease state, have become an urgent need.
Keywords: mycotoxins, biomarkers, exposure assessment, health risk assessment, sub-Saharan Africa
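The 'external exposure' approach mentioned above reduces to a simple calculation: a probable daily intake (PDI) estimated from food contamination and consumption data, then compared against a tolerable daily intake (TDI). A minimal sketch follows; the contamination level, consumption figure, and 2 ug/kg bw/day reference value are hypothetical illustrations, not data from the paper.

```python
def probable_daily_intake(conc_ug_per_kg_food, intake_g_per_day, body_weight_kg):
    """External-exposure estimate: probable daily intake (PDI) of a
    mycotoxin, in ug per kg body weight per day."""
    intake_kg_food = intake_g_per_day / 1000.0
    return conc_ug_per_kg_food * intake_kg_food / body_weight_kg

# Hypothetical scenario: maize contaminated at 20 ug/kg of a mycotoxin,
# with 400 g of maize consumed daily by a 60 kg adult.
pdi = probable_daily_intake(20.0, 400.0, 60.0)   # ~0.133 ug/kg bw/day
hazard_quotient = pdi / 2.0   # against an assumed TDI of 2 ug/kg bw/day
```

A hazard quotient above 1 would flag the exposure as exceeding the tolerable intake; biomarker ('internal exposure') studies refine such estimates by measuring what actually reaches body fluids.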
Procedia PDF Downloads 574
3376 Engagement as a Predictor of Student Flourishing in the Online Classroom
Authors: Theresa Veach, Erin Crisp
Abstract:
It has been shown that traditional students flourish as a function of several factors, including level of academic challenge, student/faculty interactions, active/collaborative learning, enriching educational experiences, and a supportive campus environment. With the increase in demand for remote or online courses, the factors behind academic flourishing in the virtual classroom have become more crucial to understand than ever before. This study seeks to give insight into the factors that affect student learning, overall student wellbeing, and flourishing among college students enrolled in an online program. A total of 4,160 unique students completed an End of Course (EOC) survey before final grades were released. Quantitative results from the survey are used by program directors as a measure of student satisfaction with both the curriculum and the faculty. Students also submitted narrative comments in an open comment field; no prompts were given for this field. The purpose of this analysis was to examine the qualitative data with the goal of gaining insight into what matters to students. Survey results from July 1, 2016 to December 1, 2016 were compiled into spreadsheet data sets. The analysis combined keyword and phrase searches with close reading of the results to identify patterns in responses and tally their frequency. In total, just over 25,000 comments were included in the analysis. Preliminary results indicate that it is the professor-student relationship, the frequency of feedback, and the overall engagement of both instructors and students that are indicators of flourishing in college programs offered online. This qualitative study supports the notion that college students flourish with regard to 1) education, 2) overall student well-being, and 3) program satisfaction when the overall engagement of both the instructor and the student is high.
Ways to increase engagement in the online college environment were also explored. These include 1) increasing student participation by providing more project-based assignments, 2) interacting with students in meaningful ways that are both high in frequency and in personal content, and 3) allowing students to apply newly acquired knowledge in ways that are meaningful to current life circumstances and future goals.
Keywords: college, engagement, flourishing, online
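The keyword-and-phrase tallying described above can be sketched as a case-insensitive frequency count over free-text comments. The phrase list and sample comments below are invented for illustration; the study's actual coding scheme is not given in the abstract.

```python
import re
from collections import Counter

def tally_phrases(comments, phrases):
    """Count how many comments mention each phrase (case-insensitive,
    whole-word match)."""
    counts = Counter()
    for comment in comments:
        text = comment.lower()
        for phrase in phrases:
            if re.search(r"\b" + re.escape(phrase) + r"\b", text):
                counts[phrase] += 1
    return counts

# Invented sample comments standing in for the ~25,000 real ones.
comments = [
    "My professor gave fast feedback on every assignment.",
    "Feedback was slow, but the professor was engaged.",
    "I never heard back from anyone.",
]
counts = tally_phrases(comments, ["professor", "feedback"])
# counts["professor"] == 2, counts["feedback"] == 2
```

Counting comments rather than raw occurrences keeps one enthusiastic commenter from dominating a pattern's frequency, which matters when tallies feed a qualitative analysis like this one.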
Procedia PDF Downloads 272
3375 Development of Positron Emission Tomography (PET) Tracers for the in-Vivo Imaging of α-Synuclein Aggregates in α-Synucleinopathies
Authors: Bright Chukwunwike Uzuegbunam, Wojciech Paslawski, Hans Agren, Christer Halldin, Wolfgang Weber, Markus Luster, Thomas Arzberger, Behrooz Hooshyar Yousefi
Abstract:
There is a need for a PET tracer that enables diagnosing and tracking the progression of alpha-synucleinopathies (Parkinson's disease [PD], dementia with Lewy bodies [DLB], multiple system atrophy [MSA]) in living subjects over time. Alpha-synuclein aggregates (a-syn), which are present at all stages of disease progression, for instance in PD, are a suitable target for in vivo PET imaging. For this reason, we have developed promising a-syn tracers based on a diarylbisthiazole (DABTA) scaffold. The precursors were synthesized via a modified Hantzsch thiazole synthesis and then radiolabeled via one- or two-step radiofluorination methods. The ligands were initially screened using a combination of molecular dynamics and quantum/molecular mechanics approaches to calculate their binding affinity to a-syn (in silico binding experiments). Experimental in vitro binding assays were also performed. The ligands were further screened in other experiments, such as log D, in vitro plasma protein binding and plasma stability, and biodistribution and brain metabolite analyses in healthy mice. Radiochemical yields ranged from 30% to 72%. Molecular docking revealed possible binding sites in a-syn and the free energy of binding to those sites (-28.9 to -66.9 kcal/mol), which correlated with the high binding affinity of the DABTAs to a-syn (Ki as low as 0.5 nM) and their selectivity (>100-fold) over Aβ and tau, which usually co-exist with a-syn in some pathologies. The log D values range from 2.34 to 2.88, which correlated with free (protein-unbound) fractions of 0.28% to 0.5%. Biodistribution experiments revealed that the tracers are taken up in the brain (5.6-7.3 %ID/g) at 5 min post-injection (p.i.) and cleared out (values as low as 0.39 %ID/g at 120 min p.i.). Analyses of mouse brains 20 min p.i. revealed almost no radiometabolites in the brain in most cases.
It can be concluded that the in silico approach presents a new avenue for the rational development of radioligands with suitable features. The results obtained so far are promising and encourage us to further validate the DABTAs in autoradiography, immunohistochemistry, and in vivo imaging in non-human primates and humans.
Keywords: alpha-synuclein aggregates, alpha-synucleinopathies, PET imaging, tracer development
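For context on the reported affinities: an experimental Ki can be converted to a standard binding free energy via dG = RT ln(Ki). A minimal sketch of that conversion follows; note that the docking energies quoted in the abstract are force-field interaction scores and are not directly comparable to this thermodynamic value.

```python
import math

R = 1.987e-3   # gas constant in kcal/(mol*K)
T = 298.15     # temperature in K

def binding_free_energy(ki_molar):
    """Standard binding free energy (kcal/mol) from an inhibition
    constant, with Ki expressed relative to a 1 M standard state."""
    return R * T * math.log(ki_molar)

dg = binding_free_energy(0.5e-9)   # the 0.5 nM Ki reported above
# dg is approximately -12.7 kcal/mol
```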
Procedia PDF Downloads 235
3374 Economic Policy to Stimulate Industrial Development in Georgia
Authors: Gulnaz Erkomaishvili
Abstract:
The article analyzes the current level of industrial production in Georgia, reviews the export-import of industrial products, and evaluates the results of the activities of institutions implementing industrial policy. The research showed that the level of industrial development in the country and its export potential are quite low. The article concludes that, at the present stage of industrial development, the country should choose a model focused on technological development and maximum growth of export potential. Objectives: The aim of the research is to develop an economic policy that promotes the development of industry and to look for ways to implement it effectively. Methodology: This paper uses general and specific methods, in particular analysis, synthesis, induction, deduction, scientific abstraction, comparative and statistical methods, as well as expert evaluation. In-depth interviews with experts were conducted to determine quantitative and qualitative indicators; publications of the National Statistics Office of Georgia are used to relate analytical and statistical estimates. Theoretical and applied research by international organizations and academic economists is also used. Contributions: Based on the challenges identified in the area of industry, recommendations were developed for the implementation of an active industrial policy in the short and long term.
In particular: making industrial development a government priority; paying special attention to the processing-industry sectors in which Georgia has production potential; supporting the development of scientific fields; defining specific benefits for investors who invest in industrial production; building state partnership with the private sector, manifested in fighting bureaucracy, corruption and crime and in creating favorable business conditions for entrepreneurs; and coordinating education, science and production nationwide. Much attention should be paid to basic scientific research, which does not require purely commercial returns in the short term, so that science becomes a real productive force. Special importance should be given to creating an environment that supports the expansion of export-oriented production and to overcoming barriers to entry into export markets.
Keywords: industry, sectoral structure of industry, export-import of industrial products, industrial policy
Procedia PDF Downloads 106