Search results for: required operational characteristics (ROC)
1533 Simulation Research of the Aerodynamic Drag of 3D Structures for Individual Transport Vehicle
Authors: Pawel Magryta, Mateusz Paszko
Abstract:
In today's world, individual mobility is a major problem, especially in large urban areas. Commonly used mass transport modes such as buses, trains or cars do not fulfill their tasks, i.e., they are not able to meet the increasing mobility needs of the growing urban population. In addition, there are limitations on the construction of civil infrastructure in cities. The most common idea nowadays is to transfer part of urban transport to the level of air transport. To do this, however, an individual flying transport vehicle needs to be developed. The biggest problem in this concept is the type of propulsion system from which the vehicle will obtain a lifting force. Standard propeller drives appear to be too noisy. One of the ideas is to provide the required take-off and flight power using an innovative ejector system. This kind of system will be designed through a suitable choice of three-dimensional geometric structure with a specially shaped nozzle in order to generate overpressure. The authors' idea is to make a device that would accumulate the overpressure using a five-sided geometrical structure bounded on one side by the blowing air jet. In order to test this hypothesis, a computer simulation study of the aerodynamic drag of such 3D structures was carried out. Based on the results of these studies, tests on a real model were also performed. The final stage of work was a comparative analysis of the results of the simulation and the real tests. The CFD simulation studies of air flow were conducted using the Star CD - Star Pro 3.2 software. The virtual model was designed using the Catia v5 software. Apart from the objective of obtaining an advanced aviation propulsion system, all of the tests and modifications of the 3D structures were also aimed at achieving high efficiency of the device while maintaining the ability to generate high overpressure values. This was possible only in the case of a large mass flow rate of air. All these aspects could be verified using CFD methods by observing the flow of the working medium in the tested model. During the simulation tests, the distribution and magnitude of pressure and velocity vectors were analyzed. Simulations were made with different boundary conditions (supply air pressure) but with fixed external conditions (ambient temperature, ambient pressure, etc.). The maximum overpressure obtained was 2 kPa. This value is too low to exploit the power of this device for an individual transport vehicle. Both the simulation model and the real object show a linear dependence of the obtained overpressure values on the geometrical parameters of the three-dimensional structures. The application of computational software greatly simplifies and streamlines the design and simulation capabilities. This work has been financed by the Polish Ministry of Science and Higher Education.
Keywords: aviation propulsion, CFD, 3D structure, aerodynamic drag
Procedia PDF Downloads 311
1532 Decomposition of the Discount Function Into Impatience and Uncertainty Aversion. How Neurofinance Can Help to Understand Behavioral Anomalies
Authors: Roberta Martino, Viviana Ventre
Abstract:
Intertemporal choices are choices under conditions of uncertainty in which the consequences are distributed over time. The Discounted Utility Model is the essential reference for describing the individual in the context of intertemporal choice. The model is based on the idea that the individual selects the alternative with the highest utility, which is calculated by multiplying the cardinal utility of the outcome, as if its reception were instantaneous, by the discount function, which decreases the utility value according to how far the actual reception of the outcome is from the moment the choice is made. Initially, the discount function was assumed to have an exponential form, whose rate of decrease over time is constant, in line with the profile of a rational investor described by classical economics. Empirical evidence, however, called for the formulation of alternative, hyperbolic models that better represented the actual actions of investors. Attitudes that do not comply with the principles of classical rationality are termed anomalous, i.e., difficult to rationalize and describe through normative models. The development of behavioral finance, which describes investor behavior through cognitive psychology, has shown that deviations from rationality are due to the bounded rationality of human beings. This means that when a choice is made in a very difficult and information-rich environment, the brain strikes a compromise between the cognitive effort required and the selection of an alternative. Moreover, the evaluation and selection of the alternative, and the collection and processing of information, are dynamics conditioned by systematic distortions of the decision-making process: the behavioral biases involving the individual's emotional and cognitive system. In this paper, we present an original decomposition of the discount function to investigate the psychological principles of hyperbolic discounting. The curve can be decomposed into two components: the first is responsible for the smaller decrease in the outcome as time increases and is related to the individual's impatience; the second relates to the change in the direction of the tangent vector to the curve and indicates how much the individual perceives the indeterminacy of the future, indicating his or her aversion to uncertainty. This decomposition allows interesting conclusions to be drawn with respect to the concept of impatience and the emotional drives involved in decision-making. The contribution that neuroscience can make to decision theory and intertemporal choice theory is vast, as it would allow the decision-making process to be described as the relationship between the individual's emotional and cognitive factors. Neurofinance is a discipline that uses a multidisciplinary approach to investigate how the brain influences decision-making. Indeed, considering that the decision-making process is linked to the activity of the prefrontal cortex and amygdala, neurofinance can help determine the extent to which anomalous attitudes respect the principles of rationality.
Keywords: impatience, intertemporal choice, neurofinance, rationality, uncertainty
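To make the decomposition concrete, a schematic formulation can be written in the generalized hyperbolic family; this is an illustrative sketch under that assumption, since the abstract does not give the authors' exact construction:

```latex
D(t) = (1 + \alpha t)^{-\beta/\alpha}, \qquad \alpha, \beta > 0
```

Under this form, the initial slope D'(0) = -β captures the impatience component (how steeply utility is discounted near the present), while α governs the curvature, i.e., how quickly the direction of the tangent vector to the curve changes, and can be read as the uncertainty-aversion component; as α → 0 the curve reduces to exponential discounting, D(t) = e^(-βt).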
Procedia PDF Downloads 129
1531 Designing Offshore Pipelines Facing the Geohazard of Active Seismic Faults
Authors: Maria Trimintziou, Michael Sakellariou, Prodromos Psarropoulos
Abstract:
Nowadays, the exploitation of hydrocarbon reserves in deep seas and oceans, in combination with the need to transport hydrocarbons among countries, has made the design, construction and operation of offshore pipelines very significant. Under this perspective, it is evident that many more offshore pipelines are expected to be constructed in the near future. Since offshore pipelines usually cross extended areas, they may face a variety of geohazards that impose substantial permanent ground deformations (PGDs) on the pipeline and potentially threaten its integrity. In the case of a geohazard area, there are three options. The first is to avoid the problematic area through rerouting, which is usually regarded as an unfavorable solution due to its high cost. The second is to apply (if possible) mitigation/protection measures in order to eliminate the geohazard itself. Finally, the last appealing option is to allow the pipeline to cross the geohazard area, provided that the pipeline has been verified against the expected PGDs. In areas with moderate or high seismicity, the design of an offshore pipeline is more demanding due to earthquake-related geohazards, such as landslides, soil liquefaction phenomena, and active faults. It is worth mentioning that although there is great worldwide experience in offshore geotechnics and pipeline design, experience in the seismic design of offshore pipelines is rather limited, since most pipelines have been constructed in non-seismic regions (e.g., the North Sea, Western Australia, the Gulf of Mexico, etc.). The current study focuses on the seismic design of offshore pipelines against active faults. After an extensive literature review of the provisions of seismic norms worldwide and of the available analytical methods, the study numerically simulates (through finite-element modeling and strain-based criteria) the distress of offshore pipelines subjected to PGDs induced by active seismic faults at the seabed. Factors such as the geometrical properties of the fault, the mechanical properties of the ruptured soil formations, and the pipeline characteristics are examined. After some interesting conclusions regarding the seismic vulnerability of offshore pipelines, potentially cost-effective mitigation measures are proposed, taking constructability issues into account.
Keywords: offshore pipelines, seismic design, active faults, permanent ground deformations (PGDs)
Procedia PDF Downloads 588
1530 Use of Socially Assistive Robots in Early Rehabilitation to Promote Mobility for Infants with Motor Delays
Authors: Elena Kokkoni, Prasanna Kannappan, Ashkan Zehfroosh, Effrosyni Mavroudi, Kristina Strother-Garcia, James C. Galloway, Jeffrey Heinz, Rene Vidal, Herbert G. Tanner
Abstract:
Early immobility affects motor, cognitive, and social development. Current pediatric rehabilitation lacks the technology that would provide the dosage needed to promote mobility for young children at risk. The addition of socially assistive robots in early interventions may help increase the mobility dosage. The aim of this study is to examine the feasibility of an early intervention paradigm in which non-walking infants experience independent mobility while socially interacting with robots. A dynamic environment was developed where both the child and the robot interact and learn from each other. The environment involves: 1) a range of physical activities that are goal-oriented, age-appropriate, and ability-matched for the child to perform, 2) automatic functions that perceive the child’s actions through novel activity recognition algorithms and decide appropriate actions for the robot, and 3) a networked visual data acquisition system that enables real-time assessment and provides the means to connect child behavior with robot decision-making in real time. The environment was tested with a two-year-old boy with Down syndrome over eight sessions. The child presented delays throughout his motor development, the current one being the acquisition of walking. During the sessions, the child performed physical activities that required complex motor actions (e.g., climbing an inclined platform and/or a staircase). During these activities, a (wheeled or humanoid) robot was either performing the action or was at its end point 'signaling' for interaction. From these sessions, information was gathered to develop algorithms to automate the perception of activities on which the robot bases its actions. A Markov Decision Process (MDP) is used to model the intentions of the child. A 'smoothing' technique is used to help identify the model’s parameters, which is a critical step when dealing with small data sets such as in this paradigm. The child engaged in all activities and socially interacted with the robot across sessions. Over time, the child’s mobility increased, and the frequency and duration of complex and independent motor actions also increased (e.g., taking independent steps). Simulation results on the combination of the MDP and smoothing support the use of this model in human-robot interaction. Smoothing facilitates learning MDP parameters from small data sets. This paradigm is feasible and provides insight into how social interaction may elicit mobility actions, suggesting a new early intervention paradigm for very young children with motor disabilities. Acknowledgment: This work has been supported by NIH under grant #5R01HD87133.
Keywords: activity recognition, human-robot interaction, machine learning, pediatric rehabilitation
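As an illustration of the 'smoothing' step named above, the sketch below shows additive (Laplace) smoothing of MDP transition probabilities estimated from sparse counts; the states, actions, and counts are invented for illustration and are not from the study.

```python
import numpy as np

# Hypothetical child-behavior states and robot actions; the real study's
# state/action sets are not specified in the abstract.
states = ["idle", "looking_at_robot", "moving_toward_goal"]
actions = ["signal", "demonstrate"]

# Sparse transition counts collected over a few sessions (toy numbers).
counts = np.zeros((len(actions), len(states), len(states)))
counts[0, 0, 1] = 3   # after "signal" in "idle", child looked at robot 3 times
counts[0, 1, 2] = 2
counts[1, 0, 2] = 1

def smoothed_transitions(counts, alpha=1.0):
    """Additive (Laplace) smoothing of MDP transition probabilities,
    which keeps estimates well-defined when many (s, a) pairs are unseen."""
    c = counts + alpha
    return c / c.sum(axis=2, keepdims=True)

P = smoothed_transitions(counts)
print(P[0, 0])  # P(next state | state="idle", action="signal")
```

With zero smoothing, unseen state-action pairs would yield undefined (0/0) probabilities; the additive prior keeps every transition estimate usable even from a handful of sessions.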
Procedia PDF Downloads 292
1529 Aromatic Medicinal Plant Classification Using Deep Learning
Authors: Tsega Asresa Mengistu, Getahun Tigistu
Abstract:
Computer vision is an artificial intelligence subfield that allows computers and systems to retrieve meaning from digital images. It is applied in various fields such as self-driving cars, video surveillance, agriculture, quality control, health care, construction, the military, and everyday life. Aromatic and medicinal plants are botanical raw materials used in cosmetics, medicines, health foods, and other natural health products for therapeutic and aromatic culinary purposes. Herbal industries depend on these special plants. These plants and their products not only serve as a valuable source of income for farmers and entrepreneurs but also provide industrial raw materials for export and thus valuable foreign exchange. There is a lack of technologies for the classification and identification of aromatic and medicinal plants in Ethiopia. Manual identification of plants is a tedious, time-consuming, labor-intensive, and lengthy process. For farmers, industry personnel, academics, and pharmacists, it is still difficult to identify plant parts and usages before ingredient extraction. In order to solve this problem, the researcher uses a deep learning approach for the efficient identification of aromatic and medicinal plants by using a convolutional neural network. The objective of the proposed study is to identify aromatic and medicinal plant parts and usages using computer vision technology. Therefore, this research initiated a model for the automatic classification of aromatic and medicinal plants by exploring computer vision technology. Morphological characteristics are still the most important tools for the identification of plants. Leaves are the most widely used parts of plants, besides the root, flower, fruit, latex, and bark. The study was conducted on aromatic and medicinal plants available at the Ethiopian Institute of Agricultural Research center. An experimental research design is proposed for this study, conducted with convolutional neural networks and transfer learning. The researcher employs sigmoid activation in the last layer and rectified linear units (ReLU) in the hidden layers. Finally, the researcher obtained a classification accuracy of 66.4% with a convolutional neural network, 67.3% with MobileNet, and 64% with the Visual Geometry Group (VGG) network.
Keywords: aromatic and medicinal plants, computer vision, deep convolutional neural network
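A minimal sketch of the kind of network the abstract describes (ReLU in the hidden layers and a sigmoid output layer) is shown below; the input size, class count, and layer widths are assumptions, not values from the study.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed class count and input resolution; the study's values are not given.
NUM_CLASSES = 10

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, 3, activation="relu"),   # ReLU hidden layers,
    layers.MaxPooling2D(),                     # as stated in the abstract
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="sigmoid"),  # sigmoid output layer
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```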
Procedia PDF Downloads 438
1528 Methodology to Assess the Circularity of Industrial Processes
Authors: Bruna F. Oliveira, Teresa I. Gonçalves, Marcelo M. Sousa, Sandra M. Pimenta, Octávio F. Ramalho, José B. Cruz, Flávia V. Barbosa
Abstract:
The EU Circular Economy Action Plan, launched in 2020, is one of the major initiatives to promote the transition to a more sustainable industry. The circular economy is a popular concept used by many companies nowadays. Some industries are better prepared for this reality than others, and the tannery industry is a sector that needs more attention due to its strong environmental impact, caused by its dimension, intensive resource consumption, the lack of recyclability and second use of its products, and the industrial effluents generated by its manufacturing processes. For these reasons, the zero-waste goal and the European objectives are far from being achieved. In this context, a need arises for an effective methodology to determine the level of circularity of tannery companies. Given the complexity of the circular economy concept, few factories have a specialist in sustainability to assess the company’s circularity or have the ability to implement circular strategies that could benefit the manufacturing processes. Although there are several methodologies to assess circularity in specific industrial sectors, there is no easy go-to methodology applied in factories aiming for cleaner production. Therefore, a straightforward methodology to assess the level of circularity, in this case of a tannery industry, is presented and discussed in this work, allowing any company to measure the impact of its activities. The methodology developed consists of calculating the Overall Circular Index (OCI) by evaluating the circularity of four key areas (energy, material, economy, and social) in a specific factory. The index is a value between 0 and 1, where 0 means a linear economy and 1 a complete circular economy. Each key area has a sub-index, obtained through key performance indicators (KPIs) regarding each theme, and the OCI is the average of the four sub-indexes. Some fieldwork in the appointed company was required in order to obtain all the necessary data. By having separate sub-indexes, one can observe which areas are more linear than others. Thus, it is possible to work on the most critical areas by implementing strategies to increase the OCI. After these strategies are implemented, the OCI is recalculated to check the improvements made and any other changes in the remaining sub-indexes. As such, the methodology works through continuous improvement, constantly re-evaluating and improving the circularity of the factory. The methodology is also flexible enough to be implemented in any industrial sector by adapting the KPIs. This methodology was implemented in a selected Portuguese small and medium-sized enterprise (SME) in the tannery industry and proved to be a relevant tool to measure the circularity level of the factory. It was observed that it makes it easier for non-specialists to evaluate circularity and identify possible solutions to increase its value, as well as to learn how one action can impact their environment. In the end, energy and environmental inefficiencies were identified and corrected, increasing the sustainability and circularity of the company. Through this work, important contributions were provided, helping Portuguese SMEs to achieve the European and UN 2030 sustainable development goals.
Keywords: circular economy, circularity index, sustainability, tannery industry, zero-waste
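A minimal sketch of the OCI computation described above: each key area receives a sub-index averaged from its KPIs (all in [0, 1]), and the OCI is the mean of the four sub-indexes. The KPI names and values are invented placeholders, not the methodology's actual indicators.

```python
# Hypothetical KPI values per key area, each already normalized to [0, 1].
kpis = {
    "energy":   {"renewable_share": 0.35, "energy_recovered": 0.20},
    "material": {"recycled_input": 0.15, "waste_reused": 0.40},
    "economy":  {"circular_revenue": 0.10},
    "social":   {"training_coverage": 0.60},
}

# Sub-index per area = mean of that area's KPIs.
sub_indexes = {area: sum(v.values()) / len(v) for area, v in kpis.items()}

# Overall Circular Index = mean of the four sub-indexes.
oci = sum(sub_indexes.values()) / len(sub_indexes)

print(sub_indexes)         # reveals which areas are the most "linear"
print(f"OCI = {oci:.2f}")  # 0 = fully linear economy, 1 = fully circular
```

Because the sub-indexes are kept separate, the most linear areas stand out immediately, which is exactly where the methodology directs the improvement strategies before the OCI is recalculated.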
Procedia PDF Downloads 68
1527 Feasibility Study of Particle Image Velocimetry in the Muzzle Flow Fields during the Intermediate Ballistic Phase
Authors: Moumen Abdelhafidh, Stribu Bogdan, Laboureur Delphine, Gallant Johan, Hendrick Patrick
Abstract:
This study is part of an ongoing effort to improve the understanding of phenomena occurring during the intermediate ballistic phase, such as muzzle flows. A thorough comprehension of muzzle flow fields is essential for optimizing muzzle device and projectile design. This flow characterization has heretofore been almost entirely limited to local and intrusive measurement techniques, such as pressure measurements using pencil probes. Consequently, the body of quantitative experimental data is limited, as is the number of numerical codes validated in this field. The objective of the work presented here is to demonstrate the applicability of the Particle Image Velocimetry (PIV) technique in the challenging environment of the propellant flow of a .300 Blackout weapon to provide accurate velocity measurements. The key points of a successful PIV measurement are the selection of the particle tracer, the seeding technique, and the tracking characteristics. We experimentally investigated these points by evaluating the resistance, gas dispersion, laser light reflection, and response to a step change across the Mach disk for five different solid tracers using two seeding methods. To this end, an experimental setup was built, consisting of a PIV system, combustion chamber pressure measurement, classical high-speed schlieren visualization, and an aerosol spectrometer. The latter is used to determine the particle size distribution in the muzzle flow. The experimental results demonstrated the ability of PIV to accurately resolve the salient features of the propellant flow, such as the underexpanded jet and vortex rings, as well as the instantaneous velocity field, with maximum centreline velocities of more than 1000 m/s. Furthermore, unburned particles naturally present in the gas, and solid ZrO₂ particles with a nominal size of 100 nm coated on the propellant powder, are suitable as tracers. However, the TiO₂ particles intended to act as a tracer surprisingly not only melted but also functioned as a combustion accelerator and decreased the number of particles in the propellant gas.
Keywords: intermediate ballistics, muzzle flow fields, particle image velocimetry, propellant gas, particle size distribution, underexpanded jet, solid particle tracers
Procedia PDF Downloads 161
1526 The Impact of Inconclusive Results of Thin Layer Chromatography for Marijuana Analysis and Its Implication on Forensic Laboratory Backlog
Authors: Ana Flavia Belchior De Andrade
Abstract:
Forensic laboratories all over the world face a great challenge to overcome waiting times and backlogs in many different areas. Many aspects contribute to this situation, such as an increase in drug complexity, an increment in the number of exams requested, and cuts in funding limiting laboratories' hiring capacity. Altogether, these facts pose an essential challenge for forensic chemistry laboratories to keep both quality and response time within an acceptable range. In this paper, we analyze how the backlog affects test results and, in the end, the whole judicial system. In this study, data from marijuana samples seized by the Federal District Civil Police in Brazil between the years 2013 and 2017 were tabulated and the results analyzed and discussed. In the last five years, the number of petitioned exams increased from 822 in February 2013 to 1358 in March 2018, representing an increase of 32% in 5 years, a rise of more than 6% per year. Meanwhile, our data show that the number of performed exams did not grow at the same rate. Output has stagnated because, under the current technology and analysis routine, the laboratory is running at full capacity. Marijuana detection is the most prevalent exam requested, representing almost 70% of all exams. In this study, data from 7,110 (seven thousand one hundred and ten) marijuana samples were analyzed. Regarding waiting time, most of the exams were performed no later than 60 days after receipt (77%), although some samples waited up to 30 months before being examined (0.65%). When a marijuana exam is delayed, we notice an enlargement of inconclusive results using thin-layer chromatography (TLC). Our data show that if a marijuana sample is stored for more than 18 months, inconclusive results rise from 2% to 7%, and if storage exceeds 30 months, inconclusive rates increase to 13%. This is probably because cannabis plants and preparations undergo oxidation under storage, resulting in a decrease in the content of Δ9-tetrahydrocannabinol (Δ9-THC). An inconclusive result triggers other procedures that require at least two more working hours of our analysts (e.g., GC/MS analysis), and the report is delayed at least one day. These additional procedures considerably increase the running cost of a forensic drug laboratory, especially when the backlog is significant, as inconclusive results tend to increase with waiting time. Financial aspects are not the only ones to be observed regarding backlog cases; there are also social issues, as legal procedures can be delayed and prosecution of serious crimes can be unsuccessful. Delays may slow investigations and endanger public safety by giving criminals more time on the street to re-offend. This situation also implies a considerable cost to society since, if the exam takes a long time to be performed, an inconclusive can turn into a negative result and a criminal can be absolved on the basis of flawed expert evidence.
Keywords: backlog, forensic laboratory, quality management, accreditation
Procedia PDF Downloads 122
1525 Detailed Investigation of Thermal Degradation Mechanism and Product Characterization of Co-Pyrolysis of Indian Oil Shale with Rubber Seed Shell
Authors: Bhargav Baruah, Ali Shemsedin Reshad, Pankaj Tiwari
Abstract:
This work presents a detailed study of the thermal degradation kinetics of the co-pyrolysis of oil shale from Upper Assam, India, with rubber seed shell, together with lab-scale pyrolysis to investigate the influence of pyrolysis parameters on product yield and composition. The physicochemical characteristics of the oil shale and rubber seed shell were studied by proximate analysis, elemental analysis, Fourier transform infrared spectroscopy, and X-ray diffraction. The physicochemical study showed the mixture to be of low moisture, high ash, siliceous, and sour, with the presence of aliphatic, aromatic, and phenolic compounds. The thermal decomposition of the oil shale with rubber seed shell was studied using thermogravimetric analysis at heating rates of 5, 10, 20, 30, and 50 °C/min. The kinetic study of the pyrolysis process was performed on the thermogravimetric (TGA) data using three model-free isoconversional methods, viz. Friedman, Flynn-Wall-Ozawa (FWO), and Kissinger-Akahira-Sunose (KAS). The reaction mechanisms were determined using the Criado master plot. Understanding the composition of Indian oil shale and rubber seed shell and the pyrolysis process kinetics can help to establish the experimental parameters for the extraction of valuable products from the mixture. Response surface methodology (RSM) with a central composite design (CCD) was employed to set up the lab-scale experiments using the TGA data and to optimize the process parameters, viz. heating rate, temperature, and particle size. The samples were pre-dried at 115 °C for 24 hours prior to pyrolysis. The pyrolysis temperatures were set from 450 to 650 °C, at heating rates of 2 to 20 °C/min. The retention time was set between 2 and 8 hours. The optimum oil yield was observed at 5 °C/min and 550 °C with a retention time of 5 hours. The pyrolytic oil and gas obtained at optimum conditions were characterized using Fourier transform infrared spectroscopy (FT-IR), gas chromatography-mass spectrometry (GC-MS), and nuclear magnetic resonance (NMR) spectroscopy.
Keywords: Indian oil shale, rubber seed shell, co-pyrolysis, isoconversional methods, gas chromatography, nuclear magnetic resonance, Fourier transform infrared spectroscopy
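As an illustration of the KAS method named above, the sketch below regresses ln(β/T²) against 1/T at a fixed conversion across heating rates; the slope gives -Ea/R. The temperatures are invented, not the paper's TGA data.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol·K)

# Heating rates used in the study (°C/min) and hypothetical temperatures (K)
# at which conversion α = 0.5 is reached at each heating rate.
betas = np.array([5, 10, 20, 30, 50])
T_alpha = np.array([650, 662, 675, 683, 694])  # assumed, for illustration only

# KAS: ln(β/T²) is linear in 1/T with slope -Ea/R at fixed conversion.
x = 1.0 / T_alpha
y = np.log(betas / T_alpha**2)
slope, intercept = np.polyfit(x, y, 1)

Ea = -slope * R  # apparent activation energy at this conversion, J/mol
print(f"Ea(α = 0.5) ≈ {Ea / 1000:.1f} kJ/mol")
```

Repeating the fit over a grid of conversions (α = 0.1, 0.2, ...) yields the activation-energy profile that isoconversional analyses report.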
Procedia PDF Downloads 146
1524 Precise Determination of the Residual Stress Gradient in Composite Laminates Using a Configurable Numerical-Experimental Coupling Based on the Incremental Hole Drilling Method
Authors: A. S. Ibrahim Mamane, S. Giljean, M.-J. Pac, G. L’Hostis
Abstract:
Fiber-reinforced composite laminates are particularly subject to residual stresses due to their heterogeneity and the complex chemical, mechanical and thermal mechanisms that occur during their processing. Residual stresses are now well known to cause damage accumulation, shape instability, and behavior disturbance in composite parts. Many works exist in the literature on techniques for minimizing residual stresses, mainly in thermosetting and thermoplastic composites. To study in depth the influence of processing mechanisms on the formation of residual stresses and to minimize them by establishing a reliable correlation, it is essential to be able to measure the residual stress profile in the composite very precisely. Residual stresses are important data to consider when sizing composite parts and predicting their behavior. Incremental hole drilling is very effective in measuring the gradient of residual stresses in composite laminates. This method is semi-destructive and consists of incrementally drilling a hole through the thickness of the material and measuring relaxation strains around the hole for each increment using three strain gauges. These strains are then converted into residual stresses using a matrix of coefficients. These coefficients, called calibration coefficients, depend on the diameter of the hole and the dimensions of the gauges used. The reliability of incremental hole drilling depends on the accuracy with which the calibration coefficients are determined. These coefficients are calculated using a finite element model. The samples’ features and the experimental conditions must be considered in the simulation. Any mismatch can lead to inadequate calibration coefficients, thus introducing errors into the residual stresses. Several calibration coefficient correction methods exist for isotropic materials, but there is a lack of information on this subject concerning composite laminates. In this work, a Python program was developed to automatically generate an adequate finite element model. This model allowed us to perform a parametric study to assess the influence of experimental errors on the calibration coefficients. The results highlighted the sensitivity of the calibration coefficients to the considered errors and gave an order of magnitude of the precision required of the experimental device to obtain reliable measurements. On the basis of these results, improvements to the experimental device were proposed. Furthermore, a numerical method was proposed to correct the calibration coefficients for different types of materials, including thick composite parts for which the analytical approach is too complex. This method consists of taking the experimental errors into account in the simulation. Accurate measurement of the experimental errors (such as eccentricity of the hole, angular deviation of the gauges from their theoretical position, or errors in increment depth) is therefore necessary. The aim is to determine the residual stresses more precisely and to expand the validity domain of the incremental hole drilling technique.
Keywords: fiber reinforced composites, finite element simulation, incremental hole drilling method, numerical correction of the calibration coefficients, residual stresses
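A minimal sketch of the strain-to-stress conversion step described above: measured relaxation strains are mapped to residual stresses through the calibration-coefficient matrix. The matrix and strain values below are invented for illustration; in practice, the coefficients come from the finite element model of the actual sample and gauges.

```python
import numpy as np

# Hypothetical calibration-coefficient matrix (strain per MPa) for a single
# drilling increment, as would be computed by the FE model; values assumed.
C = np.array([[-0.35, -0.05, -0.02],
              [-0.05, -0.35, -0.02],
              [-0.02, -0.02, -0.60]]) * 1e-6

# Relaxation strains measured by the three gauges at this increment (assumed).
relaxation_strains = np.array([-120e-6, -80e-6, -40e-6])

# Solve C @ sigma = strains for the residual stress components (MPa).
sigma = np.linalg.solve(C, relaxation_strains)
print(sigma)
```

Any error in C, e.g., from hole eccentricity or gauge misplacement not captured by the simulation, propagates directly through this inversion, which is why the parametric study of calibration-coefficient sensitivity matters.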
Procedia PDF Downloads 132
1523 Using Scilab® as New Introductory Method in Numerical Calculations and Programming for Computational Fluid Dynamics (CFD)
Authors: Nicoly Coelho, Eduardo Vieira Vilas Boas, Paulo Orestes Formigoni
Abstract:
Faced with the remarkable developments in the various segments of modern engineering brought about by increasing technological development, professionals of all educational areas need to overcome the difficulties faced by those who are starting their academic journey. Aiming to overcome these difficulties, this article offers an introduction to the basic study of numerical methods applied to fluid mechanics and thermodynamics, demonstrating modeling and simulation in substance, with a detailed explanation of the fundamental numerical solution using the finite difference method in SCILAB, a free and easily accessible software that can be used by any research center or university, anywhere, in both developed and developing countries. It is known that Computational Fluid Dynamics (CFD) is a necessary tool for engineers and professionals who study fluid mechanics; however, the teaching of this area of knowledge in undergraduate programs has faced difficulties due to software costs and the degree of difficulty of the mathematical problems involved, so the matter is often treated only in postgraduate courses. This work aims to bring low-cost CFD into the undergraduate teaching of transport phenomena by analyzing a small classic case of fundamental thermodynamics with the Scilab® program. The study starts from the basic theory: the partial differential equation governing the heat transfer problem, which students need to master; the discretization process based on the principles of Taylor series expansion, responsible for generating the system of equations; the convergence check of that system using the Sassenfeld criterion; and finally its solution by the Gauss-Seidel method. In this work, we demonstrated both simple problems solved manually and more complex problems that required computer implementation, for which we use a small algorithm of fewer than 200 lines in Scilab® to study heat transfer in a rectangular plate with a different temperature on each of its four sides, producing a two-dimensional simulation with color graphics. With the spread of computer technology, numerous programs have emerged that demand great programming skill from researchers. Considering that this ability to program CFD is the main problem to be overcome, both by students and by researchers, we present in this article a suggestion for the use of programs with a less complex interface, thus making it easier to produce graphical modeling and simulation for CFD while extending the programming experience of undergraduates.
Keywords: numerical methods, finite difference method, heat transfer, Scilab
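A minimal Python sketch (not the authors' Scilab® code) of the problem described: steady two-dimensional heat conduction on a rectangular plate with a fixed temperature on each side, discretized by finite differences and solved by Gauss-Seidel iteration. The grid size, boundary temperatures, and tolerance are assumptions for illustration.

```python
import numpy as np

nx, ny = 40, 40
T = np.zeros((ny, nx))
# Assumed boundary temperatures, one per side of the plate.
T[0, :], T[-1, :], T[:, 0], T[:, -1] = 100.0, 0.0, 75.0, 50.0

for sweep in range(5000):                 # Gauss-Seidel sweeps
    max_change = 0.0
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            # Five-point finite-difference stencil for the Laplace equation;
            # updated values are reused within the same sweep (Gauss-Seidel).
            new = 0.25 * (T[i + 1, j] + T[i - 1, j] + T[i, j + 1] + T[i, j - 1])
            max_change = max(max_change, abs(new - T[i, j]))
            T[i, j] = new
    if max_change < 1e-4:                 # convergence tolerance (assumed)
        break

print(f"converged after {sweep + 1} sweeps")
print(f"plate-center temperature: {T[ny // 2, nx // 2]:.2f}")
```

The interior update reuses freshly computed neighbors within the same sweep, which is what distinguishes Gauss-Seidel from the Jacobi method and speeds up convergence; the Sassenfeld criterion mentioned in the abstract guarantees that this iteration converges for the discretized system.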
Procedia PDF Downloads 387
1522 Contribution of Word Decoding and Reading Fluency on Reading Comprehension in Young Typical Readers of Kannada Language
Authors: Vangmayee V. Subban, Suzan Deelan. Pinto, Somashekara Haralakatta Shivananjappa, Shwetha Prabhu, Jayashree S. Bhat
Abstract:
Introduction and Need: During the early years of schooling, instruction in schools mainly focuses on children’s word decoding abilities. However, skilled readers must master all the components of reading, such as word decoding, reading fluency, and comprehension. Nevertheless, the relationship between these components during the process of learning to read is less clear. Studies conducted in alphabetic languages hold mixed opinions on the relative contribution of word decoding and reading fluency to reading comprehension, and the scenario in alphasyllabary languages is unexplored. Aim and Objectives: The aim of the study was to explore the role of word decoding and reading fluency in the reading comprehension abilities of children learning to read Kannada between the ages of 5.6 and 8.6 years. Method: In this cross-sectional study, a total of 60 typically developing children, 20 each from Grade I, Grade II, and Grade III, maintaining an equal gender ratio, in the age ranges of 5.6 to 6.6 years, 6.7 to 7.6 years, and 7.7 to 8.6 years, respectively, were selected from Kannada-medium schools. The reading fluency and reading comprehension abilities of the children were assessed using grade-level passages selected from the Kannada textbook of the children's core curriculum. Each passage is accompanied by five questions to assess reading comprehension. Pseudoword decoding skills were assessed using 40 pseudowords of varying syllable length and Akshara composition. Pseudowords were formed by interchanging the syllables within a meaningful word while maintaining the phonotactic constraints of the Kannada language. The assessment material was subjected to content validation and reliability measures before data were collected from the study samples. The data were collected individually; reading fluency was scored as words correctly read per minute, and pseudoword decoding was scored for reading accuracy. Results: Descriptive statistics indicated that mean pseudoword reading, reading comprehension, and words accurately read per minute increased with grade. The performance of Grade III children was found to be highest and Grade I lowest, with Grade II intermediate between the two, a trend indicating that reading skills gradually improve across grades. Pearson’s correlation coefficient showed moderate, highly significant (p=0.00) positive correlations between the variables, indicating the interdependency of all three components required for reading. The hierarchical regression analysis revealed that 37% of the variance in reading comprehension was explained by pseudoword decoding, a highly significant contribution. On subsequent entry of the reading fluency measure, there was no significant change in R-square, the change being only 3%. Therefore, pseudoword decoding emerged as the single most significant predictor of reading comprehension during the early grades of reading acquisition. Conclusion: The present study concludes that pseudoword decoding skills contribute more significantly than reading fluency to reading comprehension during the initial years of schooling in children learning to read the Kannada language.
Keywords: alphasyllabary, pseudo-word decoding, reading comprehension, reading fluency
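A minimal sketch of the hierarchical regression reported above, with pseudoword decoding entered first and reading fluency second, inspecting the change in R²; the data below are simulated, not the study's measurements.

```python
import numpy as np
import statsmodels.api as sm

# Simulated scores for 60 children; correlations are invented to mimic the
# reported pattern (decoding is the dominant predictor).
rng = np.random.default_rng(0)
n = 60
decoding = rng.normal(size=n)
fluency = 0.6 * decoding + rng.normal(scale=0.8, size=n)
comprehension = 0.6 * decoding + 0.1 * fluency + rng.normal(scale=0.8, size=n)

# Step 1: pseudoword decoding only.
step1 = sm.OLS(comprehension, sm.add_constant(decoding)).fit()

# Step 2: add reading fluency and inspect the change in R-square.
X2 = sm.add_constant(np.column_stack([decoding, fluency]))
step2 = sm.OLS(comprehension, X2).fit()

print(f"R² step 1 (decoding only): {step1.rsquared:.2f}")
print(f"ΔR² after adding fluency:  {step2.rsquared - step1.rsquared:.2f}")
```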
Procedia PDF Downloads 262
1521 Development and Effects of Transtheoretical Model Exercise Program for Elderly Women with Chronic Back Pain
Authors: Hyun-Ju Oh, Soon-Rim Suh, Mihan Kim
Abstract:
The steady and rapid increase of the older population is a global phenomenon, and chronic diseases and disabilities increase with aging. In general, exercise has been known to be most effective in preventing and managing chronic back pain. However, it is hard for older women to initiate and maintain exercise. The transtheoretical model (TTM) is one of the theories that explain behavioral changes such as exercise. Applying a program that considers the stage of behavior change is effective in helping elderly women start and maintain exercise. The purpose of this study was to develop a TTM-based exercise program and examine its effect on elderly women with chronic back pain. For the program evaluation, a non-equivalent control group pretest-posttest design was applied. The independent variable of this study is the exercise intervention program. The contents of the program were constructed considering the characteristics of elderly women with chronic low back pain, focusing on the processes of change and the stages of change identified in previous studies. The developed exercise program was applied to elderly women with chronic low back pain in the planning and preparation stages. The subjects were 50 women over 65 years of age with chronic back pain who did not practice regular exercise. The experimental group (n=25) received the 8-week TTM-based exercise program. The control group received a book on low back pain management. Data were collected at three time points: before the exercise intervention, right after the intervention, and 4 weeks after the intervention. The dependent variables were the processes of change, decisional balance, exercise self-efficacy, back pain, depression, and muscle strength. The results of this study were as follows. Processes of change (p<.001), pros of decisional balance (p<.001), exercise self-efficacy (p<.001), back pain (p<.001), depression (p<.001), and muscle strength (p<.001) were better in the experimental group than in the control group right after the program and 4 weeks after the program. These results show that applying the TTM-based exercise program increases the use of the processes of change, raises exercise self-efficacy, advances the stage of exercise behavior change, and strengthens muscle power while lowering the degree of pain and depression. The significance of the study is that it confirms the effect of continuous exercise, achieved by maintaining regular exercise habits, through the application of a transtheoretical-model exercise program to elderly women with chronic low back pain who intend to exercise.
Keywords: chronic back pain, elderly, exercise, women
Procedia PDF Downloads 252
1520 The Influence of Alvar Aalto on the Early Work of Álvaro Siza
Authors: Eduardo Jorge Cabral dos Santos Fernandes
Abstract:
The expression ‘Porto School’, usually associated with an educational institution, the School of Fine Arts of Porto, was first applied in the sense of an architectural trend by Nuno Portas in a text published in 1983. The expression is used to characterize a set of works by Porto architects in which common elements are found, namely the desire to reuse the languages and forms of German and Dutch rationalism of the twenties, using the work of Alvar Aalto as a mediation for the reinterpretation of these models. In the same year, in a text published in Jornal de Letras, Artes e Ideias, Álvaro Siza described the Finnish architect as an agent of miscegenation who transforms experienced models and introduces them to different realities. The influence of foreign models and their adaptation to the context has been a recurrent theme in Portuguese architecture, one that finds important contributions in the writings of Alexandre Alves Costa at this time. However, the identification of these characteristics in Siza’s work is not limited to Portuguese theoretical production: it is the recognition of this attitude towards the context that leads Kenneth Frampton to include Siza in the restricted group of architects who embody Critical Regionalism (in his book Modern Architecture: A Critical History). For Frampton, Siza's work focuses on the territory and on the consequences of intervention in the context, viewing architecture as a tectonic fact rather than a series of scenographic episodes and emphasizing site-specific aspects (topography, light, climate). The motto of this paper is therefore the dichotomous opposition between foreign influences and adaptation to the context in the early work of Álvaro Siza (designed in the sixties), in which the theoretical, methodological, and formal influence of Alvar Aalto manifests itself in form and language: the pool at Quinta da Conceição, the Seaside Pools and the Tea House (three works in Leça da Palmeira), and the Lordelo Cooperative (in Porto). This work is part of a more comprehensive project, which considers several case studies, built in Portugal and abroad, throughout the Portuguese architect's vast career, in order to obtain a holistic view.
Keywords: Alvar Aalto, Álvaro Siza, foreign influences, adaptation to the context
Procedia PDF Downloads 31
1519 Status of Vocational Education and Training in India: Policies and Practices
Authors: Vineeta Sirohi
Abstract:
The development of critical skills and competencies is imperative for young people to cope with the unpredictable challenges of the time and to prepare for work and life. Recognizing that education has a critical role in reaching sustainability goals, as emphasized by the 2030 Agenda for Sustainable Development, educating youth in global competence, meta-cognitive competencies, and skills from the initial stages of formal education is vital. Further, educating for global competence would help develop work readiness and boost employability. Vocational education and training in India, as envisaged in various policy documents, remains marginalized in practice compared to general education. The country is still far from the national policy goal of tracking 25% of secondary students in grades eleven and twelve into the vocational stream. In recent years, the importance of skill development has been recognized in the present context of globalization and change in the demographic structure of the Indian population. As a result, it has become a national policy priority and has been taken up with renewed focus by the government, which has set the target of skilling 500 million people by 2022. This paper provides an overview of the policies, practices, and current status of vocational education and training in India, supported by statistics from the National Sample Survey, the official statistics of India. The national policy documents and annual reports of the organizations actively involved in vocational education and training have also been examined to capture relevant data and information. Major initiatives taken by the government to promote skill development are also highlighted. The data indicate that in the age group 15-59 years, only 2.2 percent reported having received formal vocational training and 8.6 percent non-formal vocational training, whereas 88.3 percent did not receive any vocational training. At present, the coverage of vocational education is abysmal, as less than 5 percent of students are covered by the vocational education programme. Besides launching various schemes to address the mismatch of skills supply and demand, the government, through its National Policy on Skill Development and Entrepreneurship 2015, proposes to bring about inclusivity by bridging the gender, social, and sectoral divides, ensuring that the skilling needs of socially disadvantaged and marginalized groups are appropriately addressed. It is fundamental that the curriculum be aligned with the demands of the labor market and incorporate more entrepreneurial skills. Creating non-farm employment opportunities for educated youth will be a challenge for the country in the near future. Hence, there is a need to formulate specific skill development programs for this sector, as well as programs for upgrading skills to enhance employability. There is also a need to promote female participation in work and in non-traditional courses. Moreover, rigorous research and the development of a robust information base for skills are required to inform policy decisions on vocational education and training.
Keywords: policy, skill, training, vocational education
Procedia PDF Downloads 153
1518 Active Development of Tacit Knowledge: Knowledge Management, High Impact Practices and Experiential Learning
Authors: John Zanetich
Abstract:
Due to their positive associations with student learning and retention, certain undergraduate opportunities are designated ‘high-impact.’ High-Impact Practices (HIPs) such as learning communities, community-based projects, research, internships, study abroad, and culminating senior experiences share several traits in common: they demand considerable time and effort, learning occurs outside of the classroom, they require meaningful interactions between faculty and students, they encourage collaboration with diverse others, and they provide frequent and substantive feedback. As a result of the experiential learning in these practices, participation can be life-changing. High-impact learning helps individuals locate tacit knowledge and build mental models that support the accumulation of knowledge. Ongoing learning from experience and knowledge conversion provides the individual with a way to implicitly organize knowledge and share knowledge over a lifetime. Knowledge conversion is a knowledge management component that focuses on the explication of the tacit knowledge that exists in the minds of students and of the knowledge embedded in the processes and relationships of the classroom educational experience. Knowledge conversion is required when working with tacit knowledge and when a learner must align deeply held beliefs with the cognitive dissonance created by new information. Knowledge conversion and tacit knowledge result from the fact that an individual's way of knowing, that is, their core belief structure, is generalized and tacit rather than explicit and specific. The development of knowledge-related capabilities such as the Active Development of Tacit Knowledge (ADTK) can be used in experiential educational programs to enhance knowledge, foster behavioral change, and improve decision-making and overall performance. ADTK allows the student in HIPs to use their existing knowledge in a way that allows them to evaluate and make any necessary modifications to their core construct of reality in order to amalgamate new information. Based on the Lewin/Schein change theory, the learner reaches for tacit knowledge as a stabilizing mechanism when challenged by new information that puts them slightly off balance. As in word-association drills, the important concept is the first thought. The reactive response to an experience draws on the programmed, tacit memory and knowledge of the learner's core belief structure. ADTK is a way to help teachers design their own methods and activities to unfreeze, create new learning, and then refreeze the core constructs upon which future learning in a subject area is built. This paper explores the use of ADTK as a technique for knowledge conversion in the classroom in general and in HIP programs specifically. It focuses on knowledge conversion in curriculum development and proposes the use of one-time educational experiences, multi-session experiences, and sequential program experiences focusing on tacit knowledge in educational programs.
Keywords: tacit knowledge, knowledge management, college programs, experiential learning
Procedia PDF Downloads 262
1517 Predicting Loss of Containment in Surface Pipeline using Computational Fluid Dynamics and Supervised Machine Learning Model to Improve Process Safety in Oil and Gas Operations
Authors: Muhammmad Riandhy Anindika Yudhy, Harry Patria, Ramadhani Santoso
Abstract:
Loss of containment is the primary hazard with which process safety management is concerned in the oil and gas industry. Escalation to more serious consequences begins with loss of containment: oil and gas released through leakage or spillage from primary containment can result in pool fires, jet fires, and even explosions when it meets the various ignition sources present in operations. Therefore, the heart of process safety management is avoiding loss of containment and mitigating its impact through the implementation of safeguards. The most effective safeguard in this case is an early detection system that alerts Operations to take action before a potential loss of containment. The value of a detection system increases when it is applied to a long surface pipeline, which is naturally difficult to monitor at all times and is exposed to multiple causes of loss of containment, from natural corrosion to illegal tapping. Based on prior research and studies, detecting loss of containment accurately in a surface pipeline is difficult. The trade-off between cost-effectiveness and high accuracy has been the main issue when selecting a traditional detection method. The current best-performing method, the Real-Time Transient Model (RTTM), requires analysis of closely positioned pressure, flow, and temperature (PVT) points in the pipeline to be accurate. Having multiple adjacent PVT sensors along the pipeline is expensive and hence generally not a viable alternative from an economic standpoint. A conceptual approach combining mathematical modeling using computational fluid dynamics with a supervised machine learning model has shown promising results for predicting leakage in pipelines. Mathematical modeling is used to generate simulation data, and these data are used to train the leak detection and localization models. Mathematical models and simulation software have also been shown to provide results comparable with experimental data at very high levels of accuracy. While a supervised machine learning model requires a large training dataset for the development of accurate models, mathematical modeling has been shown to be able to generate the required datasets, justifying the application of data analytics to the development of model-based leak detection systems for petroleum pipelines. This paper presents a review of key leak detection strategies for oil and gas pipelines, with a specific focus on crude oil applications, and presents the opportunities for the use of data analytics tools and mathematical modeling in the development of a robust real-time leak detection and localization system for surface pipelines. A case study is also presented.
Keywords: pipeline, leakage, detection, AI
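A minimal sketch of the model-based idea described above: a classifier is trained on simulated pressure/flow/temperature (PVT) features labeled leak/no-leak. Here random synthetic data stand in for CFD output, and the feature values and leak signature are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 2000

# Synthetic PVT features: [pressure (bar), flow (m3/h), temperature (°C)].
# In the approach described, these would come from CFD simulations.
X_normal = rng.normal([50.0, 300.0, 25.0], [1.0, 5.0, 0.5], size=(n, 3))
X_leak = X_normal + [-2.5, -15.0, -0.3]   # assumed leak signature: pressure
                                          # and flow drop slightly
X = np.vstack([X_normal, X_leak])
y = np.r_[np.zeros(n), np.ones(n)]        # 0 = intact, 1 = leak

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"holdout accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```

Replacing the synthetic arrays with labeled CFD runs (and adding a location label per pipeline segment) would turn this detection sketch into the detection-plus-localization scheme the paper discusses.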
Procedia PDF Downloads 191
1516 Cytotoxicity and Genotoxicity of Glyphosate and Its Two Impurities in Human Peripheral Blood Mononuclear Cells
Authors: Marta Kwiatkowska, Paweł Jarosiewicz, Bożena Bukowska
Abstract:
Glyphosate (N-phosphonomethylglycine) is a non-selective, broad-spectrum ingredient in the herbicide (Roundup) used for over 35 years for the protection of agricultural and horticultural crops. Glyphosate was believed to be environmentally friendly, but recently a large body of evidence has revealed that glyphosate can negatively affect the environment and humans. It has been found that glyphosate is present in soil and groundwater. It can also enter the human body, where it occurs in blood at low concentrations of 73.6 ± 28.2 ng/ml. Research on potential genotoxicity and cytotoxicity can be an important element in determining the toxic effects of glyphosate. Under European Parliament Regulation 1107/2009, it is important to assess genotoxicity and cytotoxicity not only for the parent substance but also for its impurities, which are formed at different stages of the production of the major substance, glyphosate; moreover, verifying which of these compounds is more toxic is required. Understanding the molecular pathways of action is extremely important in the context of environmental risk assessment. In 2002, the European Union decided that glyphosate is not genotoxic. Unfortunately, studies recently performed around the world have achieved results that contest the decision taken by the committee of the European Union. In March 2015, the World Health Organization (WHO) decided to change the classification of glyphosate to category 2A, which means that the compound is considered "probably carcinogenic to humans". This category relates to compounds for which there is limited evidence of carcinogenicity in humans and sufficient evidence of carcinogenicity in experimental animals. That is why we have investigated the genotoxic and cytotoxic effects of the most commonly used pesticide, glyphosate, and its impurities N-(phosphonomethyl)iminodiacetic acid (PMIDA) and bis-(phosphonomethyl)amine on human peripheral blood mononuclear cells (PBMCs), mostly lymphocytes. DNA damage (analysis of DNA strand breaks) using single-cell gel electrophoresis (comet assay) and ATP level were assessed. Cells were incubated with glyphosate and its impurities PMIDA and bis-(phosphonomethyl)amine at concentrations from 0.01 to 10 mM for 24 hours. Evaluation of genotoxicity using the comet assay showed a concentration-dependent increase in DNA damage for all compounds studied. ATP level decreased to zero at the highest concentration of the two investigated impurities, bis-(phosphonomethyl)amine and PMIDA. Changes were observed at the highest concentration, to which a person can be exposed as a result of acute intoxication. Our study leads to the conclusion that the investigated compounds exhibit genotoxic and cytotoxic potential, but only at high concentrations to which people are not exposed environmentally. Acknowledgments: This work was supported by the Polish National Science Centre (Contract-2013/11/N/NZ7/00371), MSc Marta Kwiatkowska, project manager.
Keywords: cell viability, DNA damage, glyphosate, impurities, peripheral blood mononuclear cells
Procedia PDF Downloads 482
1515 Analysis of the Relationship between Micro-Regional Human Development and Brazil's Greenhouse Gases Emission
Authors: Geanderson Eduardo Ambrósio, Dênis Antônio Da Cunha, Marcel Viana Pires
Abstract:
Historically, human development has been based on economic gains associated with energy-intensive activities, which are often heavy emitters of greenhouse gases (GHGs). This requires the establishment of GHG mitigation targets in order to dissociate human development from emissions and prevent further climate change. Brazil is one of the largest GHG emitters, and it is of critical importance to discuss such reductions within an intra-national framework, with the objective of distributional equity, to explore its full mitigation potential without compromising the development of less developed societies. This research presents some initial considerations about which of Brazil’s micro-regions should reduce emissions, when the reductions should be initiated, and what their magnitude should be. We started with the methodological assumption that human development and GHG emissions will evolve in the future as their behavior was observed in the past. Furthermore, we assume that once a micro-region becomes developed, it is able to maintain gains in human development without needing to keep growing its GHG emission rates. The human development index and carbon dioxide equivalent emissions (CO2e) were extrapolated to the year 2050, which allowed us to calculate when the micro-regions will become developed and the mass of GHGs emitted. The results indicate that Brazil will emit 300 Gt CO2e into the atmosphere between 2011 and 2050, of which only 50 Gt will be emitted by micro-regions before they become developed and 250 Gt after development. We also determined national mitigation targets and structured reduction schemes in which only the developed micro-regions would be required to reduce. The micro-region of São Paulo, the most developed in the country, should also be the one that reduces emissions the most, emitting in 2050 90% less than the value observed in 2010. On the other hand, less developed micro-regions will be responsible for less impactful reductions; e.g., Vale do Ipanema will emit in 2050 only 10% below the value observed in 2010. This methodological assumption would lead the country to emit, in 2050, 56.5% less than observed in 2010, so that cumulative emissions between 2011 and 2050 would be reduced by 130 Gt CO2e relative to the initial projection. Associating the magnitude of the reductions with the level of human development of the micro-regions encourages the adoption of policies that favor both variables, as the governmental planner will have to deal both with the increasing demand for higher standards of living and with the increasing magnitude of emission reductions. However, if economic agents do not act proactively at the local and national levels, the country is closer to the scenario in which it emits more than to the one in which it mitigates emissions. The research highlighted the importance of considering heterogeneity in determining individual mitigation targets and also confirmed the theoretical and methodological feasibility of allocating a larger share of the contribution to those who have historically emitted more. It is understood that the proposals and discussions presented should be considered in mitigation policy formulation in Brazil regardless of the adopted reduction target.
Keywords: greenhouse gases, human development, mitigation, intensive energy activities
1514 Effects of Vegetable Oils Supplementation on in Vitro Rumen Fermentation and Methane Production in Buffaloes
Authors: Avijit Dey, Shyam S. Paul, Satbir S. Dahiya, Balbir S. Punia, Luciano A. Gonzalez
Abstract:
Methane emitted from ruminant livestock not only reduces the efficiency of feed energy utilization but also contributes to global warming. Vegetable oils, a source of polyunsaturated fatty acids, have the potential to reduce methane production and increase conjugated linoleic acid in the rumen. However, the characteristics of the oils, their level of inclusion, and the composition of the basal diet influence their efficacy. Therefore, this study aimed to investigate the effects of sunflower (SFL) and cottonseed (CSL) oils on methanogenesis, volatile fatty acid composition, and feed fermentation pattern by the in vitro gas production (IVGP) test. Four concentrations (0, 0.1, 0.2 and 0.4 ml/30 ml buffered rumen fluid) of each oil were used. Fresh rumen fluid was collected before morning feeding from two rumen-cannulated buffalo steers fed a mixed ration. In vitro incubation was carried out with sorghum hay (200 ± 5 mg) as substrate in 100 ml calibrated glass syringes following the standard IVGP protocol. After 24 h of incubation, gas production was recorded by displacement of the piston. Methane in the gas phase and volatile fatty acids in the fermentation medium were estimated by gas chromatography. Addition of oils resulted in an increase (p<0.05) in total gas production and a decrease (p<0.05) in methane production, irrespective of type and concentration. Although the increase in gas production was similar, methane production (ml/g DM) and its concentration (%) in the headspace gas were lower (p<0.01) with CSL than with SFL at corresponding doses. A linear decrease (p<0.001) in degradability of DM was evident with increasing doses of oils (0.2 ml onwards). However, these effects were more pronounced with SFL. Acetate production tended to decrease, but propionate and butyrate production increased (p<0.05) with the addition of oils, irrespective of type and dose. The acetate-to-propionate ratio was reduced (p<0.01) with the addition of oils, but no difference between the oils was noted. It is concluded that both oils can reduce methane production. However, feed degradability was also affected at higher doses. Cottonseed oil at a small dose (0.1 ml/30 ml buffered rumen fluid) exerted a greater inhibitory effect on methane production without impeding dry matter degradability. Further in vivo studies need to be carried out for practical application in animal rations.
Keywords: buffalo, methanogenesis, rumen fermentation, vegetable oils
Procedia PDF Downloads 406
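As a rough illustration of the dose-response reading of these results, the sketch below fits a linear trend to methane outputs at the four oil doses; the values are hypothetical placeholders, not the study's measurements.

```python
import numpy as np

# Hypothetical methane output (ml/g DM) at each oil dose (ml per 30 ml
# buffered rumen fluid), loosely shaped like a decreasing dose series.
doses = np.array([0.0, 0.1, 0.2, 0.4])
ch4_ml_per_g = np.array([32.0, 27.5, 24.0, 19.0])

slope, intercept = np.polyfit(doses, ch4_ml_per_g, 1)   # linear dose response
reduction = 100 * (ch4_ml_per_g[0] - ch4_ml_per_g) / ch4_ml_per_g[0]

print(f"linear trend: {slope:.1f} ml/g DM per ml of oil")
for d, r in zip(doses, reduction):
    print(f"dose {d:.1f} ml -> {r:.0f}% methane reduction vs. control")
```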
1513 DEMs: A Multivariate Comparison Approach
Authors: Juan Francisco Reinoso Gordo, Francisco Javier Ariza-López, José Rodríguez Avi, Domingo Barrera Rosillo
Abstract:
The evaluation of the quality of a data product is based on the comparison of the product with a reference of greater accuracy. In the case of DEM data products, quality assessment usually focuses on positional accuracy, and few studies consider other terrain characteristics, such as slope and orientation. The proposal made here consists of evaluating the similarity of two DEMs (a product and a reference) through the joint analysis of the distribution functions of the variables of interest, for example, elevations, slopes, and orientations. This is a multivariate approach that focuses on distribution functions, not on single parameters such as mean values or dispersions (e.g. root mean squared error or variance), and is considered to be a more holistic approach. The use of the Kolmogorov-Smirnov test is proposed due to its non-parametric nature, since the distributions of the variables of interest cannot always be adequately modeled by parametric models (e.g. the Normal distribution model). In addition, its application to the multivariate case is carried out jointly by means of a single test on the convolution of the distribution functions of the variables considered, which avoids the use of corrections such as Bonferroni when several statistical hypothesis tests are carried out together. In this work, two DEM products have been considered: DEM02, with a resolution of 2x2 meters, and DEM05, with a resolution of 5x5 meters, both generated by the National Geographic Institute of Spain. DEM02 is considered the reference and DEM05 the product to be evaluated. In addition, the slope and aspect derived models have been calculated by GIS operations on the two DEM datasets. Through sample simulation processes, the adequate behavior of the Kolmogorov-Smirnov statistical test has been verified when the null hypothesis is true, which allows calibrating the value of the statistic for the desired significance value (e.g. 5%). Once the process has been calibrated, it can be applied to compare the similarity of different DEM data sets (e.g. DEM05 versus DEM02). In summary, an innovative alternative for the comparison of DEM data sets, based on a multivariate non-parametric perspective, has been proposed by means of a single Kolmogorov-Smirnov test. This new approach could be extended to other DEM features of interest (e.g. curvature) and to more than three variables.
Keywords: data quality, DEM, Kolmogorov-Smirnov test, multivariate DEM comparison
Procedia PDF Downloads 115
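A compact sketch of this comparison is given below, with random arrays standing in for elevation and slope rasters assumed to be resampled to a common grid. The single joint test follows the convolution idea, since the distribution of a sum of independent variables is the convolution of their individual distributions.

```python
import numpy as np
from scipy import stats

# Minimal sketch of the proposed comparison; the random arrays below are
# stand-ins for real elevation and slope values sampled from two DEMs.
rng = np.random.default_rng(0)
ref = {"elev": rng.normal(900.0, 150.0, 10_000),
       "slope": rng.gamma(2.0, 4.0, 10_000)}      # reference (e.g. DEM02)
prod = {"elev": rng.normal(905.0, 150.0, 10_000),
        "slope": rng.gamma(2.1, 4.0, 10_000)}     # product (e.g. DEM05)

# Per-variable two-sample Kolmogorov-Smirnov tests (would need Bonferroni)...
for var in ref:
    res = stats.ks_2samp(ref[var], prod[var])
    print(f"{var}: D = {res.statistic:.4f}, p = {res.pvalue:.4g}")

# ...versus a single joint test in the spirit of the paper: one KS test on
# sums of the variables, each standardised against the reference statistics
# so that genuine product-vs-reference differences are preserved.
def zsum(data, baseline):
    return sum((data[k] - baseline[k].mean()) / baseline[k].std() for k in baseline)

res = stats.ks_2samp(zsum(ref, ref), zsum(prod, ref))
print(f"joint: D = {res.statistic:.4f}, p = {res.pvalue:.4g}")
```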
1512 Project Work with Design Thinking and Blended Learning: A Practical Report from Teaching in Higher Education
Authors: C. Vogeler
Abstract:
Change processes such as individualization and digitalization have an impact on higher education. Graduates are expected to cooperate in creative work processes in their professional life, and during their studies they need to be prepared accordingly. This includes modern learning scenarios that integrate the benefits of digital media. Therefore, design thinking and blended learning have been combined in the project-based seminar conception introduced here. The presented seminar conception has been realized and evaluated with students of information sciences since September 2017. Within the seminar, the students learn to work on a project and apply the methods in a problem-based learning scenario. The task of the case study is to arrange a conference on the topic of gaming in libraries. In order to collaboratively develop creative possibilities of realization within the group of students, the design thinking method was chosen. Design thinking is a method used to create user-centric, problem-solving, and need-driven innovation through creative collaboration in multidisciplinary teams. Central characteristics are the openness of this approach to work results and the visualization of ideas. This approach is now also accepted in the field of higher education. Especially in problem-based learning scenarios, the method offers clearly defined process steps for creative ideas and their realization. The creative process can be supported by digital media, such as search engines and tools for the documentation of brainstorming, creation of mind maps, project management, etc. Because the students have to do two-thirds of the workload as private study, design thinking has been combined with a blended learning approach. This supports students’ preparation and follow-up of the joint work in workshops (flipped classroom scenario) as well as communication and collaboration during the entire project work phase. For this purpose, learning materials are provided on a Moodle-based learning platform, as well as various tools that support the design thinking process as described above. In this paper, the seminar conception combining design thinking and blended learning is described, and the potentials and limitations of the chosen strategy for the development of a course with a multimedia approach in higher education are reflected upon.
Keywords: blended learning, design thinking, digital media tools and methods, flipped classroom
Procedia PDF Downloads 197
1511 Collaborative Procurement in the Pursuit of Net-Zero: A Converging Journey
Authors: Bagireanu Astrid, Bros-Williamson Julio, Duncheva Mila, Currie John
Abstract:
The Architecture, Engineering, and Construction (AEC) sector plays a critical role in the global transition toward sustainable and net-zero built environments. However, the industry faces unique challenges in planning for net-zero while struggling with low productivity, cost overruns, and overall resistance to change. Traditional practices fall short due to their inability to meet the requirements for systemic change, especially as governments increasingly demand transformative approaches. Working in silos and rigid hierarchies, together with a short-term, client-centric approach that prioritises immediate gains over long-term benefits, stands in stark contrast to the fundamental requirements for the realisation of net-zero objectives. These practices have limited capacity to effectively integrate AEC stakeholders and promote the essential knowledge sharing required to address the multifaceted challenges of achieving net-zero. In the context of the built environment, procurement may be described as the method by which a project proceeds from inception to completion. Collaborative procurement methods under the Integrated Practices (IP) umbrella have the potential to align more closely with net-zero objectives. This paper explores the synergies between collaborative procurement principles and the pursuit of net-zero in the AEC sector, drawing upon the shared values of cross-disciplinary collaboration, Early Supply Chain Involvement (ESI), use of standards and frameworks, digital information management, strategic performance measurement, integrated decision-making principles, and contractual alliancing. To investigate the role of collaborative procurement in advancing net-zero objectives, a structured research methodology was employed. First, the study presents a systematic review of the application of collaborative procurement principles in the AEC sphere. Next, a comprehensive analysis is conducted to identify common clusters of these principles across multiple procurement methods. An evaluative comparison between traditional procurement methods and collaborative procurement for achieving net-zero objectives is then presented, and the study identifies the intersection between collaborative procurement principles and net-zero requirements. Lastly, key insights for AEC stakeholders are explored, focusing on the implications and practical applications of these findings, and directions for future development of this research are recommended. Adopting collaborative procurement principles can serve as a strategic framework for guiding the AEC sector towards realising net-zero. Synergising these approaches overcomes fragmentation, fosters knowledge sharing, and establishes a net-zero-centred ecosystem. In the context of ongoing efforts to amplify project efficiency within the built environment, a critical realisation of their central role becomes imperative for AEC stakeholders. When effectively leveraged, collaborative procurement emerges as a powerful tool to surmount existing challenges in attaining net-zero objectives.
Keywords: collaborative procurement, net-zero, knowledge sharing, architecture, built environment
Procedia PDF Downloads 73
1510 Investigation of Wind Farm Interaction with Ethiopian Electric Power's Grid: A Case Study at Ashegoda Wind Farm
Authors: Fikremariam Beyene, Getachew Bekele
Abstract:
Ethiopia is currently on the move with various projects to raise the amount of power generated in the country. The progress observed in recent years indicates this fact clearly and indisputably. The rural electrification program, the modernization of the power transmission system, and the development of wind farms are some of the main accomplishments worth mentioning. As is well known, wind power is currently embraced globally as one of the most important sources of energy, mainly for its environmentally friendly characteristics, and because, once installed, it is a source available free of charge. However, the integration of a wind power plant with an existing network has many challenges that need to be given serious attention. In Ethiopia, a number of wind farms are either installed or under construction, and a series of wind farms is planned to be installed in the near future. Ashegoda Wind Farm (13.2°, 39.6°), which is the subject of this study, is the first large-scale wind farm under construction, with a capacity of 120 MW. The first phase (30 MW) of the 120 MW project has been completed and is expected to be connected to the grid soon. This paper is concerned with the investigation of the wind farm's interaction with the national grid under transient operating conditions. The main concern is the fault ride-through (FRT) capability of the system when the grid voltage drops to exceedingly low values because of a short circuit fault, and also the active and reactive power behavior of the wind turbines after the fault is cleared. On the wind turbine side, detailed dynamic modelling of a variable speed wind turbine of 1 MW capacity, running with a squirrel cage induction generator and full-scale power electronic converters, is done and analyzed using the simulation software DIgSILENT PowerFactory. On the Ethiopian Electric Power Corporation side, after having collected sufficient data for the analysis, the grid network is modeled. In the model, the fault ride-through (FRT) capability of the plant is studied by applying a 3-phase short circuit on the grid terminal near the wind farm. The results show that the Ashegoda Wind Farm can ride through a deep voltage dip within a short time, and the active and reactive power performance of the wind farm is also promising.
Keywords: squirrel cage induction generator, active and reactive power, DIgSILENT PowerFactory, fault ride-through capability, 3-phase short circuit
Procedia PDF Downloads 173
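The pass/fail logic of such an FRT study can be sketched by checking the simulated voltage at the connection point against a low-voltage ride-through envelope. Both the envelope shape and the voltage trace below are generic illustrations, not Ethiopia's actual grid code or the study's results.

```python
import numpy as np

# Generic LVRT envelope (an illustrative grid-code shape): the turbine must
# stay connected at 0.15 pu for 0.625 s after the fault, after which the
# limit recovers linearly to 0.90 pu by 3 s.
def lvrt_limit(t_after_fault):
    if t_after_fault <= 0.625:
        return 0.15
    if t_after_fault <= 3.0:
        return 0.15 + (0.90 - 0.15) * (t_after_fault - 0.625) / (3.0 - 0.625)
    return 0.90

# Hypothetical per-unit voltage trace at the point of connection, as would be
# exported from a dynamic simulation: fault cleared at 0.15 s, then recovery.
t = np.linspace(0.0, 4.0, 401)
v = np.where(t < 0.15, 0.20, np.minimum(1.0, 0.20 + 0.9 * (t - 0.15)))

rides_through = all(vi >= lvrt_limit(ti) for ti, vi in zip(t, v))
print("FRT requirement met" if rides_through else "wind farm would trip")
```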
1509 Allelopathic Action of Different Sorghum bicolor [L.] Moench Fractions on Ipomoea grandifolia [Dammer] O'Donell
Authors: Mateus L. O. Freitas, Flávia H. de M. Libório, Letycia L. Ricardo, Patrícia da C. Zonetti, Graciene de S. Bido
Abstract:
Weeds compete with agricultural crops for resources such as light, water, and nutrients. This competition can cause significant damage to agricultural producers, and, currently, the use of agrochemicals is the most effective method for controlling these undesirable plants. Morning glory (Ipomoea grandifolia [Dammer] O'Donell) is an aggressive weed that significantly reduces agricultural productivity and makes harvesting, especially mechanical harvesting, difficult. The biggest challenge in modern agriculture is to preserve high productivity while reducing environmental damage and maintaining soil characteristics. No-till is a sustainable practice that can reduce the use of agrochemicals and environmental impacts due to the presence of plant residues in the soil, which release allelopathic compounds and reduce the incidence, or alter the growth and development, of crops and weeds. Sorghum (Sorghum bicolor [L.] Moench) is a forage with proven allelopathic activity, mainly through the production of sorgoleone. In this context, this research aimed to evaluate the allelopathic action of sorghum fractions obtained with hexane, dichloromethane, butanol, and ethyl acetate on the germination and initial growth of morning glory. The parameters analyzed were the percentage of germination, speed of germination, seedling length, and biomass weight (fresh and dry). The bioassays were performed in Petri dishes kept in an incubation chamber for 7 days at 25 °C with a 12 h photoperiod. The experimental design was completely randomized, with five replicates of each treatment. The data were evaluated by analysis of variance, and the treatment averages were compared using the Scott-Knott test at a 5% significance level. The results indicated that the dichloromethane and ethyl acetate fractions showed bioherbicidal effects, promoting effective reductions in the germination and initial growth of morning glory. It was concluded that allelochemicals were probably extracted in these fractions. These secondary metabolites can reduce the use of agrochemicals and environmental impact, making agricultural production systems more sustainable.
Keywords: allelochemicals, secondary metabolism, sorgoleone, weeds
Procedia PDF Downloads 148
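The first stage of the statistical analysis described above can be illustrated with the snippet below. The germination percentages are invented for the example, and since the Scott-Knott grouping procedure has no standard SciPy implementation, a plain one-way ANOVA stands in for the initial test of treatment differences.

```python
from scipy import stats

# Hypothetical germination percentages, five replicates per treatment.
control = [92, 90, 94, 91, 93]
dichloromethane = [55, 60, 52, 58, 57]
ethyl_acetate = [62, 65, 60, 64, 61]

f, p = stats.f_oneway(control, dichloromethane, ethyl_acetate)
print(f"one-way ANOVA: F = {f:.1f}, p = {p:.4g}")
if p < 0.05:
    print("treatment means differ at the 5% level; proceed to mean grouping")
```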
1508 Impact of Material Chemistry and Morphology on Attrition Behavior of Excipients during Blending
Authors: Sri Sharath Kulkarni, Pauline Janssen, Alberto Berardi, Bastiaan Dickhoff, Sander van Gessel
Abstract:
Blending is a common process in the production of pharmaceutical dosage forms, where high shear is used to obtain a homogeneous dosage. The shear required can lead to uncontrolled attrition of excipients and affect APIs. This has an impact on the performance of the formulation, as it can alter the structure of the mixture. Therefore, it is important to understand the driving mechanisms of attrition. The aim of this study was to increase the fundamental understanding of the attrition behavior of excipients. Attrition behavior was evaluated using a high shear blender (Procept Form-8, Zele, Belgium). Twelve pure excipients were tested, with morphologies varying from crystalline (sieved) and granulated to spray dried (round to fibrous). The materials included lactose, microcrystalline cellulose (MCC), di-calcium phosphate (DCP), and mannitol. The rotational speed of the blender was set at 1370 rpm to obtain the highest shear, with a Froude (Fr) number of 9. Blending times of 2-10 min were used. After blending, the excipients were analyzed for changes in particle size distribution (PSD), determined (n = 3) by dry laser diffraction (Helos/KR, Sympatec, Germany). Attrition was found to be a surface phenomenon which occurs in the first minutes of the high shear blending process; increasing the blending time beyond 2 min showed no further change in particle size distribution. Material chemistry was identified as a key driver for differences in attrition behavior between excipients. This is mainly related to the proneness to fragmentation, which is known to be higher for materials such as DCP and mannitol than for lactose and MCC. Secondly, morphology was also identified as a driver of the degree of attrition. Granular products consisting of irregular surfaces showed the highest reduction in particle size, due to the weak solid bonds created between the primary particles during the granulation process. Granular DCP and mannitol show a reduction of 80-90% in x10 (µm), compared to a 20-30% drop for granular lactose (monohydrate and anhydrous). Apart from granular lactose, all the remaining morphologies of lactose (spray dried-round, sieved-tomahawk, milled) show little change in particle size. Similar observations were made for spray-dried fibrous MCC. All these morphologies have few irregular or sharp surfaces and are thereby less prone to fragmentation. Therefore, products containing brittle materials such as mannitol and DCP are more prone to fragmentation when exposed to shear. Granular products with irregular surfaces lead to increased attrition, while spherical, crystalline, or fibrous morphologies show reduced impact during high shear blending. These changes in size will affect functionality attributes of the formulation, such as flow, API homogeneity, tableting, and formation of dust. Hence it is important for formulators to fully understand the excipients in order to make the right choices.
Keywords: attrition, blending, continuous manufacturing, excipients, lactose, microcrystalline cellulose, shear
Procedia PDF Downloads 111
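The x10 metric quoted above is the particle size below which 10% of the cumulative volume distribution lies. A minimal sketch of extracting it from laser-diffraction-style data, using made-up distributions rather than the measured ones, is:

```python
import numpy as np

# Hypothetical cumulative volume distributions before and after blending.
sizes_um = np.array([1, 5, 10, 20, 50, 100, 200, 400])
cum_before = np.array([1, 4, 8, 15, 35, 60, 85, 100]) / 100
cum_after = np.array([5, 15, 25, 40, 65, 85, 96, 100]) / 100

def x10(cumulative, sizes):
    """Interpolate the size at which the cumulative distribution reaches 10%."""
    return float(np.interp(0.10, cumulative, sizes))

before, after = x10(cum_before, sizes_um), x10(cum_after, sizes_um)
drop = 100 * (before - after) / before
print(f"x10: {before:.1f} um -> {after:.1f} um ({drop:.0f}% reduction)")
```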
1507 Evaluation of Coupled CFD-FEA Simulation for Fire Determination
Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Ella Quigley, Kevin Tinkham
Abstract:
Fire performance is a crucial aspect to consider when designing cladding products, and testing this performance is extremely expensive. Appropriate use of numerical simulation of fire performance has the potential to reduce the total number of fire tests required when designing a product, by eliminating poor-performing design ideas early in the design phase. Due to the complexity of fire and the large spectrum of failures it can cause, multi-disciplinary models are needed to capture the complex fire behavior and its structural effects on the surroundings. Working alongside Tata Steel U.K., the authors have focused on completing a coupled CFD-FEA simulation model suited to testing Polyisocyanurate (PIR) based sandwich panel products, to gain confidence before costly experimental standards testing. The sandwich panels are part of a thermally insulating façade system, primarily for large non-domestic buildings. The work presented in this paper compares two coupling methodologies in a replication of the physical experimental standards test LPS 1181-1, carried out by Tata Steel U.K. The two coupling methodologies considered within this research are one-way and two-way. A one-way coupled analysis consists of importing thermal data from the CFD solver into the FEA solver. A two-way coupled analysis consists of continuously importing the updated thermal data, reflecting the fire's behavior, into the FEA solver throughout the simulation; likewise, the mechanical changes are passed back to the CFD solver so that geometric changes are included in the solution. For the CFD calculations, the solver Fire Dynamics Simulator (FDS) has been chosen due to its numerical scheme, adapted to focus solely on fire problems; validation of FDS's applicability has been achieved in past benchmark cases. In addition, the FEA solver ABAQUS has been chosen to model the structural response to the fire, due to its crushable foam plasticity model, which can accurately capture the compressibility of PIR foam. An open-source code called FDS-2-ABAQUS is used to couple the two solvers, using several Python modules to complete the process, including failure checks. The coupling methodologies and the experimental data acquired from Tata Steel U.K. are compared using several variables, including gas temperatures, surface temperatures, and mechanical deformation of the panels. Conclusions are drawn, noting improvements to be made to the current open-source coupling code FDS-2-ABAQUS to make it more applicable to Tata Steel U.K. sandwich panel products. Future directions for reducing the computational cost of the simulation are also considered.
Keywords: fire engineering, numerical coupling, sandwich panels, thermo fluids
Procedia PDF Downloads 90
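The control flow of the two-way scheme can be sketched as below. The function names and physics are trivial stubs, not the real FDS-2-ABAQUS API; only the loop structure, with solver increments alternating with data exchange and a failure check, mirrors the method described. A one-way analysis is the same loop without the geometry feedback: the CFD run completes first, and its stored thermal history is then replayed in the FEA solver.

```python
# Schematic two-way coupling cycle with placeholder physics.

def run_fds_increment(t0, t1, geometry):
    """Stub CFD step: surface temperature rising with time (placeholder)."""
    return 20.0 + 300.0 * t1                     # deg C

def run_abaqus_increment(temperature, geometry):
    """Stub FEA step: deflection growing with temperature (placeholder)."""
    return 0.05 * temperature                    # mid-span deflection, mm

def two_way_coupling(t_end=10.0, dt_exchange=1.0, limit_mm=120.0):
    t, geometry = 0.0, {"joint_gap_mm": 0.0}
    while t < t_end:
        temp = run_fds_increment(t, t + dt_exchange, geometry)  # CFD -> FEA
        defl = run_abaqus_increment(temp, geometry)
        if defl > limit_mm:                      # failure check each exchange
            return "failed", t + dt_exchange
        geometry["joint_gap_mm"] = 0.1 * defl    # FEA -> CFD geometry update
        t += dt_exchange
    return "survived", t

print(two_way_coupling())
```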
1506 The Effect of the Variety and Harvesting Date on Polyphenol Composition of Haskap (Lonicera caerulea L.) and Anti-diabetic Properties of Haskap Polyphenols
Authors: Aruma Baduge Kithma De Silva
Abstract:
Haskap (Lonicera caerulea L.), also known as blue honeysuckle, is a newly commercialized berry crop in Canada. Haskap berries are rich in polyphenols, including anthocyanins, which are known for potential health-promoting properties. Cyanidin-3-O-glucoside (C3G) is the most abundant anthocyanin of haskap berries. The compound C3G has the ability to reduce the risk of type 2 diabetes (T2D), which has become an increasingly common health issue around the world. T2D is characterized as a metabolic disorder of hyperglycemia and insulin resistance. It has been demonstrated that C3G has anti-diabetic effects in several ways, including inhibition of dipeptidyl peptidase-4 (DPP-4), reduction of gluconeogenesis, improvement in insulin sensitivity, and inhibition of the activities of carbohydrate-hydrolyzing enzymes, including α-amylase and α-glucosidase. The goal of this study was to investigate the influence of variety and harvest maturity of haskap on C3G, other fruit quality characteristics, and the anti-diabetic activities of haskap berries using in vitro studies. The polyphenols present in four commercially grown haskap cultivars, Aurora, Rebecca, Larissa, and Evie, harvested at five harvesting dates (H1-H5) 2-3 days apart, were extracted separately. High-performance liquid chromatography electrospray ionization mass spectrometry (HPLC-ESI-MS) analyses of the polyphenols revealed that haskap berries contain predominantly anthocyanins, flavonols, flavan-3-ols, and phenolic acids. The compound C3G was the most prominent anthocyanin, accounting for approximately 79% of the total anthocyanins in the four cultivars. Larissa at H5 contained the highest C3G content. The antioxidant capacity of Evie at H5 was greater than that of the other cultivars. Furthermore, Larissa at H5 showed the greatest inhibition of carbohydrate-hydrolyzing enzymes, including alpha-glucosidase and alpha-amylase. In conclusion, haskap variety and harvesting date influenced the polyphenol composition and biological properties. The variety Larissa, at the H5 harvesting date, contained the highest polyphenol content and the greatest ability to inhibit the carbohydrate-hydrolyzing enzymes as well as the DPP-4 enzyme, in order to reduce type 2 diabetes.
Keywords: anthocyanin, haskap, type 2 diabetes, polyphenol
Procedia PDF Downloads 142
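The enzyme-inhibition readout behind such assays reduces to a percentage computed from assay absorbances. The helper below is a hypothetical sketch with invented values, not the study's protocol or measurements.

```python
# Percent inhibition of e.g. alpha-glucosidase or DPP-4 activity from
# absorbance readings (hypothetical helper and values).

def percent_inhibition(abs_control, abs_sample, abs_blank=0.0):
    return 100 * (1 - (abs_sample - abs_blank) / (abs_control - abs_blank))

control_abs = 0.82                               # uninhibited reaction
extracts = {"Larissa H5": 0.31, "Aurora H5": 0.44, "Evie H5": 0.40}
for name, a in extracts.items():
    print(f"{name}: {percent_inhibition(control_abs, a):.0f}% inhibition")
```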
1505 Dynamics Pattern of Land Use and Land Cover Change and Its Driving Factors Based on a Cellular Automata Markov Model: A Case Study at Ibb Governorate, Yemen
Authors: Abdulkarem Qasem Dammag, Basema Qasim Dammag, Jian Dai
Abstract:
Change in land use and land cover (LU/LC) has a profound impact on an area's natural, economic, and ecological development, and the search for the drivers of land cover change is one of the fundamental issues in LU/LC change studies. The study aimed to assess the temporal and spatio-temporal dynamics of LU/LC in the past and to predict the future using Landsat images, by exploring the characteristics of different LU/LC types. Spatio-temporal patterns of LU/LC change in Ibb Governorate, Yemen, were analyzed based on RS and GIS for 1990, 2005, and 2020. A socioeconomic survey and key informant interviews were used to assess potential drivers of LU/LC change. The results showed that from 1990 to 2020, the total area of vegetation land decreased by 5.3%, while the areas of barren land, grassland, built-up area, and waterbody increased by 2.7%, 1.6%, 1.04%, and 0.06%, respectively. Based on the socio-economic surveys and key informant interviews, natural factors had a significant and long-term impact on land change, whereas site construction and socio-economic factors were the main driving forces affecting land change on short time scales. The analysis results were linked to a CA-Markov land use simulation and forecasting model for the years 2035 and 2050. The simulation results revealed, for the period 2020 to 2050, the trend of dynamic changes in land use: the total area of barren land decreases by 7.0% and grassland by 0.2%, while vegetation land, built-up area, and waterbody increase by 4.6%, 2.6%, and 0.1%, respectively. Overall, these findings document LU/LC's past and future trends and identify the drivers, which can play an important role in sustainable land use planning and management by balancing and coordinating urban growth and land use, and can also serve as a reference at different regional levels. In addition, the results provide scientific guidance to government departments and local decision-makers for future land-use planning through dynamic monitoring of LU/LC change.
Keywords: LU/LC change, CA-Markov model, driving forces, change detection, LU/LC change simulation
Procedia PDF Downloads 64
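The Markov half of a CA-Markov projection reduces to repeated multiplication of a class-share vector by a transition matrix estimated from two dated maps. The matrix and 2020 shares below are illustrative, not the study's calibrated values, and the cellular-automata step that allocates the projected amounts in space is omitted.

```python
import numpy as np

classes = ["vegetation", "barren", "grassland", "built-up", "water"]
P = np.array([  # illustrative 15-year transition probabilities (rows sum to 1)
    [0.88, 0.05, 0.03, 0.04, 0.00],
    [0.10, 0.80, 0.05, 0.05, 0.00],
    [0.05, 0.05, 0.85, 0.05, 0.00],
    [0.00, 0.00, 0.00, 1.00, 0.00],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])
shares_2020 = np.array([0.40, 0.35, 0.14, 0.10, 0.01])  # hypothetical shares

shares_2035 = shares_2020 @ P        # one 15-year Markov step
shares_2050 = shares_2035 @ P        # a second step reaches 2050
for c, s in zip(classes, shares_2050):
    print(f"{c}: {s:.1%} of the study area in 2050")
```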
1504 The Impact of Social Support on Anxiety and Depression under the Context of COVID-19 Pandemic: A Scoping Review and Meta-Analysis
Authors: Meng Wu, Atif Rahman, Eng Gee Lim, Jeong Jin Yu, Rong Yan
Abstract:
Context: The COVID-19 pandemic has had a profound impact on mental health, with increased rates of anxiety and depression observed. Social support, a critical factor in mental well-being, has also undergone significant changes during the pandemic. This study explores the relationship between social support, anxiety, and depression during COVID-19, taking into account various demographic and contextual factors. Research Aim: The main objective of this study is to conduct a comprehensive systematic review and meta-analysis examining the impact of social support on anxiety and depression during the COVID-19 pandemic, and to determine the consistency of these relationships across different age groups, occupations, regions, and research paradigms. Methodology: A scoping review and meta-analytic approach were employed. A search was conducted across six databases from 2020 to 2022 to identify relevant studies. The selected studies were then subjected to random effects models, with pooled correlations (r and ρ) estimated. Homogeneity was assessed using Q and I² tests, and subgroup analyses were conducted to explore variations across demographic and contextual factors. Findings: The meta-analysis of both cross-sectional and longitudinal studies revealed significant correlations between social support, anxiety, and depression during COVID-19. The pooled correlations (ρ) indicated a negative relationship between social support and anxiety (ρ = -0.30, 95% CI = [-0.333, -0.255]) as well as depression (ρ = -0.27, 95% CI = [-0.370, -0.281]). However, further investigation is required to validate these results across different age groups, occupations, and regions. Theoretical Importance: This study emphasizes the multifaceted role of social support in mental health during the COVID-19 pandemic and highlights the need to reevaluate and expand our understanding of social support's impact on anxiety and depression. The findings contribute to the existing literature by shedding light on the associations and complexities involved in these relationships. Conclusion: The findings highlight the significant association between social support, anxiety, and depression during the COVID-19 pandemic, while further research is needed to validate them across different age groups, occupations, and regions. The study emphasizes the need for a comprehensive understanding of social support's multifaceted role in mental health and the importance of considering various contextual and demographic factors in future investigations.
Keywords: social support, anxiety, depression, COVID-19, meta-analysis
Procedia PDF Downloads 62
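As an illustration of the pooling procedure named in this last abstract, here is a minimal DerSimonian-Laird random-effects sketch on Fisher z-transformed correlations; the per-study correlations and sample sizes are invented, not the reviewed data.

```python
import numpy as np

# Hypothetical per-study correlations (social support vs. anxiety) and ns.
r = np.array([-0.25, -0.33, -0.28, -0.35, -0.22])
n = np.array([420, 310, 150, 510, 260])

z = np.arctanh(r)                    # Fisher z transform
v = 1.0 / (n - 3)                    # within-study variance of z
w = 1.0 / v

# Heterogeneity: Cochran's Q and I^2 from the fixed-effect fit
z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)
df = len(r) - 1
I2 = max(0.0, (Q - df) / Q) * 100

# DerSimonian-Laird between-study variance, then random-effects pooling
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1.0 / (v + tau2)
z_re = np.sum(w_re * z) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
lo, hi = np.tanh(z_re - 1.96 * se), np.tanh(z_re + 1.96 * se)

print(f"pooled rho = {np.tanh(z_re):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
print(f"Q = {Q:.2f} (df = {df}), I^2 = {I2:.0f}%")
```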