Search results for: heat input

400 Predicting Blockchain Technology Installation Cost in Supply Chain System through Supervised Learning

Authors: Hossein Havaeji, Tony Wong, Thien-My Dao

Abstract:

1. Research Problems and Research Objectives: The Blockchain Technology-enabled Supply Chain System (BT-enabled SCS) is a system that uses BT to drive SCS transparency, security, durability, and process integrity, as SCS data is not always visible, available, or trusted. The costs of operating BT in the SCS are a common problem in several organizations. These costs must be estimated because they can impact existing cost control strategies. To account for system and deployment costs, the following hurdle must be overcome: in most cases, the costs of developing and running a BT in SCS are not yet clear. Many industries aiming to use BT pay special attention to BT installation cost, which has a direct impact on the total costs of the SCS. Predicting BT installation cost in SCS may help managers decide whether BT offers an economic advantage. The purpose of the research is to identify the main BT installation cost components in SCS needed for deeper cost analysis. We then identify and categorize the main groups of cost components in more detail so that they can be used in the prediction process. The second objective is to determine a suitable Supervised Learning technique for predicting the costs of developing and running BT in SCS in a particular case study. The last aim is to investigate how the running BT cost can be incorporated into the total cost of the SCS. 2. Work Performed: Applied successfully in various fields, Supervised Learning is a method in which the data are framed, pre-processed, and used to train the chosen model. It is a learning model aimed at predicting an outcome measurement from previously unseen input data. The following steps are conducted to pursue these objectives. The first step is a literature review to identify the different cost components of BT installation in SCS. Based on the literature review, we then choose Supervised Learning methods suitable for BT installation cost prediction in SCS. According to the literature review, Supervised Learning algorithms that provide a powerful tool to classify BT installation components and predict BT installation cost include the Support Vector Regression (SVR) algorithm, the Back Propagation (BP) neural network, and the Artificial Neural Network (ANN). The third step is choosing a case study to feed data into the models. Finally, we will identify the best predictive performance to find the minimum BT installation costs in SCS. 3. Expected Results and Conclusion: This study aims to propose a cost prediction of BT installation in SCS with the help of Supervised Learning algorithms. We will first select a case study in the field of BT-enabled SCS and then use several Supervised Learning algorithms to predict BT installation cost in SCS. We will continue by identifying the best predictive performance for developing and running BT in SCS. Finally, the paper will be presented at the conference.
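A minimal sketch of the kind of supervised-learning setup described above, using Support Vector Regression from scikit-learn; the feature names, cost figures, and hyperparameters are illustrative assumptions only, not data or choices from the study.

```python
# Hypothetical sketch: predicting BT installation cost with Support Vector Regression.
# Feature names and values are illustrative assumptions, not data from the study.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Each row: [number of supply-chain nodes, transactions per day, integration effort (person-days)]
X = np.array([[12, 500, 40], [35, 2000, 120], [8, 150, 25], [50, 5000, 300],
              [20, 900, 60], [45, 3500, 220], [15, 700, 55], [30, 1800, 100]])
y = np.array([80_000, 260_000, 45_000, 650_000, 120_000, 480_000, 95_000, 210_000])  # installation cost

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Scale the features, then fit an RBF-kernel SVR as one candidate supervised-learning model
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1e5, epsilon=1000))
model.fit(X_train, y_train)

print("MAE on held-out data:", mean_absolute_error(y_test, model.predict(X_test)))
```

The same pipeline could be swapped for an ANN or BP network to compare predictive performance before committing to one model.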

Keywords: blockchain technology, blockchain technology-enabled supply chain system, installation cost, supervised learning

Procedia PDF Downloads 109
399 Harvesting Value-added Products Through Anodic Electrocatalytic Upgrading of Intermediate Compounds Utilizing Biomass to Accelerate Hydrogen Evolution

Authors: Mehran Nozari-Asbemarz, Italo Pisano, Simin Arshi, Edmond Magner, James J. Leahy

Abstract:

Integrating electrolytic synthesis with renewable energy makes it feasible to address urgent environmental and energy challenges. Conventional water electrolyzers produce H₂ and O₂ simultaneously, demanding additional gas separation steps to prevent contamination of H₂ with O₂. Moreover, the oxygen evolution reaction (OER), which is sluggish and has a low overall energy conversion efficiency, does not deliver a product of significant value at the electrode surface. Compared to conventional water electrolysis, integrating electrolytic hydrogen generation from water with thermodynamically more advantageous aqueous organic oxidation processes can increase energy conversion efficiency and create value-added compounds instead of oxygen at the anode. One strategy is to use renewable and sustainable carbon sources from biomass, which has a large annual production capacity and presents a significant opportunity to supplement carbon sourced from fossil fuels. Numerous catalytic techniques have been researched to utilize biomass economically. Because of its safe operating conditions, excellent energy efficiency, and reasonable control over production rate and selectivity through electrochemical parameters, electrocatalytic upgrading stands out as an appealing choice among the numerous biomass refinery technologies. Therefore, we propose a broad framework for coupling H₂ generation from water splitting with oxidative biomass upgrading processes. Four representative biomass targets were considered for oxidative upgrading using a hierarchically porous CoFe-MOF/LDH @ Graphite Paper bifunctional electrocatalyst, including glucose, ethanol, benzyl, furfural, and 5-hydroxymethylfurfural (HMF). The potential required to sustain 50 mA cm⁻² is considerably lower (by ~380 mV) than the potential required for the OER. All four compounds can be oxidized to yield liquid byproducts of economic benefit. The electrocatalytic oxidation of glucose to the value-added products gluconic acid, glucuronic acid, and glucaric acid was examined in detail. The cell potential for combined H₂ production and glucose oxidation was substantially lower than for water splitting (1.44 V(RHE) vs. 1.82 V(RHE) at 50 mA cm⁻²), while the oxidation byproduct at the anode was significantly more valuable than O₂, taking advantage of the more favourable glucose oxidation compared with the OER. Overall, such a combination of the HER and oxidative biomass valorization using electrocatalysts prevents the production of potentially explosive H₂/O₂ mixtures and produces high-value products at both electrodes with a lower voltage input, thereby increasing the efficiency and activity of electrocatalytic conversion.
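As a back-of-the-envelope illustration of the saving implied by the reported cell voltages (1.44 V(RHE) for coupled glucose oxidation and H₂ production versus 1.82 V(RHE) for water splitting at 50 mA cm⁻²), the electrical energy per unit of H₂ can be estimated from Faraday's law; the sketch below assumes ideal Faradaic efficiency and is not a calculation from the paper.

```python
# Electrical energy per mole (and per kg) of H2 at the two reported cell voltages,
# assuming 100% Faradaic efficiency (illustrative estimate only).
F = 96485.0          # Faraday constant, C/mol of electrons
n = 2                # electrons transferred per H2 molecule
M_H2 = 2.016e-3      # kg/mol

for label, U in [("glucose oxidation + HER", 1.44), ("conventional water splitting", 1.82)]:
    e_mol = n * F * U            # J per mol of H2
    e_kg = e_mol / M_H2 / 3.6e6  # kWh per kg of H2
    print(f"{label}: {e_mol/1000:.0f} kJ/mol H2  ({e_kg:.1f} kWh/kg H2)")

# The ~0.38 V reduction translates into roughly a 21% lower electrical energy demand per unit of H2.
```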

Keywords: biomass, electrocatalytic, glucose oxidation, hydrogen evolution

Procedia PDF Downloads 80
398 Investigating the English Speech Processing System of EFL Japanese Older Children

Authors: Hiromi Kawai

Abstract:

This study investigates the nature of EFL older children’s L2 perceptive and productive abilities using classroom data, in order to find a pedagogical solution to the teaching of L2 sounds at an early stage of learning in a formal school setting. It remains inconclusive whether older children with only EFL formal school instruction at the initial stage of L2 learning are able to attain native-like perception and production in English within the very limited amount of exposure to the target language available. Given the lack of studies on EFL Japanese children’s acquisition of English segments, the researcher uses a model of L1 speech processing which was developed for investigating L1 English children’s speech and literacy difficulties within a psycholinguistic framework. The model is composed of an input channel, an output channel, and lexical representation, and examines how a child receives information from spoken or written language, remembers and stores it within the lexical representations, and how the child selects and produces spoken or written words. Concerning language universality and language specificity in the acquisition process, the model’s aim of identifying sound errors in L1 English children is consistent with the author’s intention to examine the English sound abilities of older Japanese children at a novice level of English in an EFL setting. 104 students in Grade 5 (aged 10 to 11 years) of an elementary school in Tokyo participated in this study. Four tests to measure their perceptive ability and three oral repetition tests to measure their productive ability were conducted with/without reference to lexical representation. All the test items were analyzed to calculate item facility (IF) indices, and correlational analyses and Structural Equation Modeling (SEM) were conducted to examine the relationship between receptive ability and productive ability. The IF analysis showed that (1) the participants were better at perceiving a segment than producing a segment, (2) they had difficulty in auditory discrimination of paired consonants when one of them does not exist in the Japanese inventory, (3) they had difficulty in both perceiving and producing English vowels, and (4) their L1 loan word knowledge had an influence on their ability to perceive and produce L2 sounds. The result of the Multiple Regression Modeling showed that the two production tests could predict the participants’ auditory ability for real words in English. The result of the SEM supported the hypothesis that perceptive ability affects productive ability. Based on these findings, the author discusses possible explicit methods of teaching English segments to EFL older children in a formal school setting.

Keywords: EFL older children, english segments, perception, production, speech processing system

Procedia PDF Downloads 227
397 Climate Variations and Fishers

Authors: S. Surapa Raju

Abstract:

In Andhra Pradesh, the symptoms of climate variations in coastal villages can be observed from various studies. The Andhra Pradesh coast is known for its frequent tropical cyclones and the associated floods and tidal surges that cause loss of life and property in the region. In the last decade alone, the state experienced 18 devastating storms causing huge losses to coastal people. The year 2007 was the fourth warmest year on record since 1901, and 2009 witnessed heat wave conditions prevailing over coastal Andhra Pradesh. With regard to sea level rise (SLR), 43 percent of the coastal areas are considered to be at high risk. The main objectives of the study are to understand the perceptions of fisher people on climate variations and to find out their awareness of climate variations and their effects on the village and on fishing households. Altogether 150 households were chosen purposively for this study, and information was collected from the households using a semi-structured schedule. The present field-based study observed that most of the fisher people have experienced changes in climate in their villages. First-generation fisher people stated that at least half a kilometre of sea erosion has taken place over the last 20 years and that most of them have been displaced. With regard to fishing activities, first-generation fisher people revealed that 20 years back they were fishing in near-shore areas, but the availability of fish near the shore has now decreased to a large extent. The present study observed considerable variation in the growth of species in the marine districts of Andhra Pradesh over the years 2005-2010. Some species like Silver pomfret, Sole (flat fish), Chriocentrus, Thrisocies, Stakes, Rays, etc. are in decline. The results of the study indicate that huge variation was observed in the growth rates of fish species. Small and traditional fishers are more drastically affected in El Niño years than in normal years, as they do not own suitable equipment such as crafts and nets. The study found that many changes have taken place in fishing activities: fishers go long distances for fishing, which increases the cost of fishing operations, and fish catches have decreased. In-depth studies need to be taken up in the marine villages, and the situation should be tackled by creating more awareness about the negative effects of climate variations among fishing households. Suitable fishing craft technology should be supplied, and more employment opportunities should be created for the fishers outside the fishery.

Keywords: climate, Andhra Pradesh, El Niño years, India

Procedia PDF Downloads 408
396 Processes and Application of Casting Simulation and Its Software

Authors: Surinder Pal, Ajay Gupta, Johny Khajuria

Abstract:

Casting simulation helps visualize mold filling and casting solidification; predict related defects like cold shuts, shrinkage porosity and hard spots; and optimize the casting design to achieve the desired quality with high yield. Flow and solidification of molten metals are, however, very complex phenomena that are difficult to simulate correctly by conventional computational techniques, especially when the part geometry is intricate and the required inputs (like thermo-physical properties and heat transfer coefficients) are not available. Simulation software is based on the process of modeling a real phenomenon with a set of mathematical formulas. It is, essentially, a program that allows the user to observe an operation through simulation without actually performing that operation. Simulation software is used widely to design equipment so that the final product will be as close to the design specifications as possible without expensive in-process modification. Simulation software with real-time response is often used in gaming, but it also has important industrial applications. When the penalty for improper operation is costly, as for airplane pilots, nuclear power plant operators, or chemical plant operators, a mockup of the actual control panel is connected to a real-time simulation of the physical response, giving valuable training experience without fear of a disastrous outcome. Each casting simulation package has its own strengths and requirements; Magma Cast, for example, is best suited for crack simulation. The latest-generation software Auto CAST, developed at IIT Bombay, provides a host of functions to support method engineers, including part thickness visualization, core design, multi-cavity mold design with common gating and feeding, application of various feed aids (feeder sleeves, chills, padding, etc.), simulation of mold filling and casting solidification, automatic optimization of feeders and gating driven by the desired quality level, and what-if cost analysis. IIT Bombay has developed a set of applications for the foundry industry to improve casting yield and quality. Casting simulation is a fast and efficient process solution and an advanced tool that is the result of more than 20 years of collaboration with major industrial partners and academic institutions around the world. In this paper, the process of casting simulation is studied.
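To illustrate the kind of solidification estimate such software automates, the sketch below applies Chvorinov's rule (solidification time proportional to the square of the volume-to-surface-area modulus) to check that a feeder solidifies after the casting it feeds; the mould constant and the part and feeder dimensions are illustrative assumptions, not values from the paper.

```python
# Illustrative solidification-time estimate via Chvorinov's rule: t = B * (V/A)**n
# B (mould constant) and the casting/feeder dimensions are assumed values for demonstration.
import math

def chvorinov_time(volume_cm3, area_cm2, B=2.0, n=2):
    """Solidification time in minutes for a given casting modulus V/A (cm)."""
    modulus = volume_cm3 / area_cm2
    return B * modulus ** n

# Plate casting 20 x 10 x 2 cm
V_cast = 20 * 10 * 2
A_cast = 2 * (20 * 10 + 20 * 2 + 10 * 2)

# Cylindrical feeder, 6 cm diameter x 9 cm high (open top face not counted as cooling surface)
r, h = 3.0, 9.0
V_feed = math.pi * r**2 * h
A_feed = 2 * math.pi * r * h + math.pi * r**2

t_cast = chvorinov_time(V_cast, A_cast)
t_feed = chvorinov_time(V_feed, A_feed)
print(f"casting: {t_cast:.1f} min, feeder: {t_feed:.1f} min, feeder solidifies later: {t_feed > t_cast}")
```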

Keywords: casting simulation software, simulation techniques, casting simulation, processes

Procedia PDF Downloads 464
395 The Impact of Roof Thermal Performance on the Indoor Thermal Comfort in a Naturally Ventilated Building Envelope in Hot Climates

Authors: J. Iwaro, A. Mwasha, K. Ramsubhag

Abstract:

Global warming has become a threat of our time. It poses challenges to the existence of beings on earth, the built environment and the natural environment, and has had a clear impact on levels of energy and water consumption. An increase in ambient temperature raises the indoor and outdoor temperature of buildings, which brings about greater use of energy and mechanical air conditioning systems. In addition, in view of the increased modernization and economic growth in developing countries, a significant amount of energy is being used, especially in those with hot climatic conditions. Since modernization in developing countries is rising rapidly, more pressure is being placed on buildings and energy resources to satisfy indoor comfort requirements. This paper presents a sustainable passive roof solution as a means of reducing energy cooling loads for satisfying human comfort requirements in a hot climate. Based on field study data, the study discusses indoor thermal roof design strategies for a hot climate by investigating the impacts of roof thermal performance on indoor thermal comfort in naturally ventilated, small-scale building envelope structures. In this respect, a traditional concrete flat roof, a corrugated galvanised iron roof and a pre-painted standing seam roof were used. The experiment made use of three identical small-scale physical models constructed and sited on the roof of a building at the University of the West Indies. The results show that the use of insulation in traditional roofing systems significantly reduces heat transfer between the internal and ambient environments, thus reducing the energy demand of the structure and its relative carbon footprint per unit area over its lifetime. The flat slab concrete roofing system also showed the best performance compared with the metal roof sheeting alternatives. In addition, this study has shown experimentally that a sustainable passive roof solution, such as an insulated flat concrete roof in a hot dry climate, has a better cooling capacity that can provide building occupants with better thermal comfort, conducive indoor conditions and energy efficiency.
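A simple hedged sketch of the steady-state conduction calculation behind the insulation argument: adding an insulation layer to a concrete roof slab lowers the overall U-value and thus the heat gain driving the cooling load; the layer thicknesses, conductivities and surface resistances below are generic textbook-style assumptions, not measurements from the study.

```python
# Steady-state heat gain through a roof: Q = U * A * dT, with U = 1 / sum(R_i)
# Layer data are generic assumed values, not the experimental roofs of the study.
def u_value(layers, r_si=0.10, r_se=0.04):
    """layers: list of (thickness_m, conductivity_W_per_mK); returns U in W/m2K."""
    r_total = r_si + r_se + sum(t / k for t, k in layers)
    return 1.0 / r_total

A = 50.0      # roof area, m2
dT = 15.0     # outdoor-indoor temperature difference, K

bare_concrete = [(0.15, 1.4)]                   # 150 mm concrete slab
insulated     = [(0.15, 1.4), (0.05, 0.035)]    # same slab + 50 mm insulation board

for name, layers in [("bare concrete slab", bare_concrete), ("insulated slab", insulated)]:
    U = u_value(layers)
    print(f"{name}: U = {U:.2f} W/m2K, heat gain = {U * A * dT:.0f} W")
```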

Keywords: building envelope, roof, energy consumption, thermal comfort

Procedia PDF Downloads 255
394 A Computational Model of the Thermal Grill Illusion: Simulating the Perceived Pain Using Neuronal Activity in Pain-Sensitive Nerve Fibers

Authors: Subhankar Karmakar, Madhan Kumar Vasudevan, Manivannan Muniyandi

Abstract:

The Thermal Grill Illusion (TGI) elicits a strong and often painful burning sensation when interlacing warm and cold stimuli, which are individually non-painful, excite thermoreceptors beneath the skin. Among several theories of TGI, the “disinhibition” theory is the most widely accepted in the literature. According to this theory, TGI is the result of the disinhibition or unmasking of the pain-sensitive HPC (Heat-Pinch-Cold) nerve fibers caused by the inhibition of the cold-sensitive nerve fibers that normally mask the HPC nerve fibers. Although researchers have focused on understanding TGI through experiments and models, none of them has investigated the prediction of TGI pain intensity through a computational model. Furthermore, the comparison of psychophysically perceived TGI intensity with neurophysiological models has not yet been studied. The prediction of pain intensity through a computational model of TGI can help in optimizing thermal displays and in understanding pathological conditions related to temperature perception. The current study focuses on developing a computational model to predict the intensity of TGI pain and on experimentally observing the perceived TGI pain. The computational model is developed based on the disinhibition theory and by utilizing existing popular models of warm and cold receptors in the skin. The model aims to predict the neuronal activity of the HPC nerve fibers. With a temperature-controlled thermal grill setup, fifteen participants (ten males and five females) were presented with five temperature differences between warm and cold grills (each repeated three times). All the participants rated the perceived TGI pain sensation on a scale of one to ten. For the range of temperature differences, the experimentally observed perceived intensity of TGI is compared with the neuronal activity of pain-sensitive HPC nerve fibers. The simulation results show a monotonically increasing relationship between the temperature differences and the neuronal activity of the HPC nerve fibers. Moreover, a similar monotonically increasing relationship is experimentally observed between temperature differences and the perceived TGI intensity. This shows the potential for comparing TGI pain intensity observed in the experimental study with the neuronal activity predicted by the model. The proposed model intends to bridge the theoretical understanding of the TGI and the experimental results obtained through psychophysics. Further studies on pain perception are needed to develop a more accurate version of the current model.
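The toy firing-rate sketch below illustrates the disinhibition idea in general terms only: net HPC output is unmasked as cold-fiber activity drops with an increasing warm-cold temperature difference. The response functions and constants are arbitrary assumptions and do not reproduce the authors' receptor models or their predicted activities.

```python
# Toy illustration of the disinhibition idea: HPC drive unmasked as cold-fiber inhibition weakens.
# All functions and constants are arbitrary assumptions, not the authors' receptor models.
import numpy as np

def cold_fiber_activity(dT):
    # Assumed: cold-fiber firing decreases as the warm bars raise the effective skin temperature
    return np.exp(-dT / 8.0)

def hpc_drive(dT):
    # Assumed: combined warm/cold (HPC) afferent drive grows with the temperature difference
    return 1.0 - np.exp(-dT / 10.0)

def perceived_tgi(dT, inhibition_gain=0.9):
    # Disinhibition: net HPC output = drive minus inhibition proportional to cold-fiber activity
    return np.maximum(0.0, hpc_drive(dT) - inhibition_gain * cold_fiber_activity(dT))

for dT in [0, 5, 10, 15, 20]:   # warm-cold temperature difference, degrees C
    print(f"dT = {dT:2d} C -> relative TGI intensity {perceived_tgi(dT):.2f}")
```

With these assumed parameters the output increases monotonically with the temperature difference, which is the qualitative relationship reported in the abstract.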

Keywords: thermal grill illusion, computational modelling, simulation, psychophysics, haptics

Procedia PDF Downloads 150
393 Architectural Wind Data Maps Using an Array of Wireless Connected Anemometers

Authors: D. Serero, L. Couton, J. D. Parisse, R. Leroy

Abstract:

In urban planning, an increasing number of cities require wind analyses to verify the comfort of public spaces and around buildings. These studies are made using computational fluid dynamics (CFD) simulation. However, this technique is often based on wind information taken from meteorological stations located several kilometers from the site of analysis. The approximate input data on the project surroundings produce imprecise results for this type of analysis. They can only be used to obtain the general behavior of wind in a zone, not to evaluate precise wind speeds. This paper presents another approach to this problem, based on collecting wind data and generating an urban wind cartography using connected ultrasonic anemometers. These are wireless devices that send immediate wind data to a remote server. Assembled in an array, these devices generate geo-localized wind data such as speed, temperature and pressure, and allow us to compare wind behavior on a specific site or building. These Netatmo-type anemometers communicate by wifi with central equipment, which shares data acquired by a wide variety of devices, such as wind speed, indoor and outdoor temperature, rainfall, and sunshine. Besides its precision, this method extracts geo-localized data on any type of site, which can be fed back into the architectural design of a building or a public place. Furthermore, this method allows a precise calibration of a virtual wind tunnel using numerical aeraulic simulations (with software such as STAR-CCM+) and then the development of a complete volumetric model of wind behavior over a roof area or an entire city block. The paper showcases connected ultrasonic anemometers deployed for an 18-month survey on four study sites in the Grand Paris region. This case study focuses on Paris as an urban environment with multiple historical layers, whose diversity of typologies and buildings allows different ways of capturing wind energy to be considered. The objective of this approach is to categorize the different types of wind in urban areas. This, and particularly the identification of the minimum and maximum wind spectrum, helps define the choice and performance of the wind energy capturing devices that could be installed there: the location on the roof of a building, the type of wind, the altimetry of the device in relation to the levels of the roofs, and the potential nuisances generated. The method allows the characteristics of wind turbines to be identified in order to maximize their performance in an urban site with turbulent wind.
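As a sketch of the data-handling side only (not the authors' pipeline), the snippet below aggregates timestamped, geo-localized anemometer readings into per-site wind statistics of the kind used to build such maps or to calibrate a CFD model; the column names and sample records are assumptions.

```python
# Sketch: aggregating geo-localized readings from connected anemometers into per-site wind statistics.
# Column names and sample records are assumptions for illustration.
import pandas as pd

records = pd.DataFrame([
    {"site": "roof_A", "lat": 48.853, "lon": 2.349, "wind_ms": 3.2, "temp_C": 14.1},
    {"site": "roof_A", "lat": 48.853, "lon": 2.349, "wind_ms": 5.8, "temp_C": 13.9},
    {"site": "roof_B", "lat": 48.861, "lon": 2.336, "wind_ms": 1.1, "temp_C": 15.0},
    {"site": "roof_B", "lat": 48.861, "lon": 2.336, "wind_ms": 2.4, "temp_C": 14.8},
])

stats = (records
         .groupby(["site", "lat", "lon"])["wind_ms"]
         .agg(mean_speed="mean", min_speed="min", max_speed="max")
         .reset_index())
print(stats)   # per-site speed spectrum, usable to size wind-energy devices or feed a CFD calibration
```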

Keywords: computer fluid dynamic simulation in urban environment, wind energy harvesting devices, net-zero energy building, urban wind behavior simulation, advanced building skin design methodology

Procedia PDF Downloads 84
392 Determinants of Profit Efficiency among Poultry Egg Farmers in Ondo State, Nigeria: A Stochastic Profit Function Approach

Authors: Olufunke Olufunmilayo Ilemobayo, Barakat O. Abdulazeez

Abstract:

Profit making among poultry egg farmers has been a challenge to the efficient distribution of scarce farm resources over the years, due mainly to a low capital base, inefficient management, technical inefficiency and economic inefficiency; thus, poultry egg production has moved into an underperforming situation characterised by low profit margins. Previous studies focus mainly on broiler production and its efficiency; however, there is a paucity of information on profit efficiency in the study area. Hence, the determinants of profit efficiency among poultry egg farmers in Ondo State, Nigeria were investigated. A purposive sampling technique was used to obtain primary data from poultry egg farmers in the Owo and Akure local government areas of Ondo State through a well-structured questionnaire. Socio-economic characteristics such as age, gender, educational level, marital status, household size, access to credit and extension contact, together with input and output data such as flock size, cost of feeders and drinkers, cost of feed, cost of labour, cost of drugs and medications, cost of energy, price of a crate of table eggs and price of spent layers, were the variables used in the study. Data were analysed using descriptive statistics, budgeting analysis, and a stochastic profit function/inefficiency model. The descriptive statistics show that 52 per cent of the poultry farmers were between 31 and 40 years old, 62 per cent were male, 90 per cent had tertiary education, 66 per cent were primarily poultry farmers, 78 per cent were original poultry farm owners and 55 per cent had more than 5 years’ work experience. Descriptive statistics on costs and returns indicated that 64 per cent of the returns were from sales of eggs, while the remaining 36 per cent was from sales of spent layers. The cost of feeding took the highest proportion of the cost of production (69 per cent) and the cost of medication the lowest (7 per cent). A positive gross margin of ₦5,518,869.76, a net farm income of ₦5,500,446.82 and a net return on investment of 0.28 indicated that poultry egg production is profitable. Equipment cost (22.757), feeding cost (18.3437), labour cost (136.698), flock size (16.209) and drug and medication cost (4.509) were factors affecting profit efficiency, while education (-2.3143), household size (-18.4291), access to credit (-16.027), and experience (-7.277) were determinants of profit efficiency. Education, household size, access to credit and experience in poultry production were the main determinants of the profit efficiency of poultry egg production in Ondo State. Other factors that affect profit efficiency were the cost of feeding, cost of labour, flock size, and cost of drugs and medication; they positively and significantly influenced profit efficiency in Ondo State, Nigeria.
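To make the estimation idea concrete, the sketch below fits a Cobb-Douglas profit function by ordinary least squares as a simplified stand-in for the stochastic profit frontier (the frontier model adds a one-sided inefficiency error term and is usually estimated by maximum likelihood); all figures are synthetic assumptions, not the survey data.

```python
# Sketch: a Cobb-Douglas profit function estimated by OLS as a simplified stand-in for the
# stochastic profit frontier used in the study. Figures are synthetic assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120
feed_cost   = rng.uniform(2e5, 1e6, n)      # naira per production cycle (assumed ranges)
labour_cost = rng.uniform(5e4, 3e5, n)
flock_size  = rng.integers(200, 3000, n)
profit = 0.8 * feed_cost**0.4 * labour_cost**0.2 * flock_size**0.6 * rng.lognormal(0, 0.15, n)

X = sm.add_constant(np.column_stack([np.log(feed_cost), np.log(labour_cost), np.log(flock_size)]))
ols = sm.OLS(np.log(profit), X).fit()
print(ols.params)   # intercept followed by the elasticities of profit with respect to each input
```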

Keywords: cost and returns, economic inefficiency, profit margin, technical inefficiency

Procedia PDF Downloads 114
391 Working Memory and Phonological Short-Term Memory in the Acquisition of Academic Formulaic Language

Authors: Zhicheng Han

Abstract:

This study examines the correlation between knowledge of formulaic language, working memory (WM), and phonological short-term memory (PSTM) in Chinese L2 learners of English. It investigates whether WM and PSTM correlate differently with the acquisition of formulaic language, which may be relevant to the discourse around the conceptualization of formulas. Connectionist approaches have led scholars to argue that formulas are form-meaning connections stored whole, making PSTM significant in the acquisition process as it pertains to the storage and retrieval of chunk information. Generativist scholars, on the other hand, have argued for the active participation of interlanguage grammar in the acquisition and use of formulaic language, where formulas are represented in the mind but retain an internal structure built around a lexical core. This would make WM, especially the processing component of WM, an important cognitive factor, since it plays a role in processing and holding information for further analysis and manipulation. The current study asked L1 Chinese learners of English enrolled in graduate programs in China to complete a preference ranking task in which they ranked their preference for formulas, grammatical non-formulaic expressions, and ungrammatical phrases with and without the lexical core in academic contexts. Participants were asked to rank the options in order of how likely they would be to encounter these phrases in the test sentences within academic contexts. Participants’ syntactic proficiency was controlled with a cloze test and a grammar test. Regression analysis found a significant relationship between the processing component of WM and the preference for formulaic expressions in the preference ranking task, while no significant correlation was found for PSTM or syntactic proficiency. The correlational analysis found that WM, PSTM, and the two proficiency test scores covary significantly. However, WM and PSTM have different predictor values for participants’ preference for formulaic language. Both the storage and processing components of WM are significantly correlated with the preference for formulaic expressions, while PSTM is not. These findings favour the role of interlanguage grammar and syntactic knowledge in the acquisition of formulaic expressions. The differing effects of WM and PSTM suggest that selective attention to, and processing of, the input beyond simple retention play a key role in successfully acquiring formulaic language. Similar correlational patterns were found for preferring the ungrammatical phrase with the lexical core of the formula over the ones without the lexical core, attesting to learners’ awareness of the lexical core around which formulas are constructed. These findings support the view that formulaic phrases retain internal syntactic structures that are recognized and processed by the learners.

Keywords: formulaic language, working memory, phonological short-term memory, academic language

Procedia PDF Downloads 42
390 The Use of Stroke Journey Map in Improving Patients' Perceived Knowledge in Acute Stroke Unit

Authors: C. S. Chen, F. Y. Hui, B. S. Farhana, J. De Leon

Abstract:

Introduction: Stroke can lead to long-term disability, affecting one’s quality of life. Providing stroke education to patients and family members is essential to optimize stroke recovery and prevent recurrent stroke. Currently, nurses conduct stroke education by handing out pamphlets and explaining their contents to patients. However, this is not always effective, as nurses have varying levels of knowledge and the depth of content discussed with the patient may not be consistent. With the advancement of information technology, health education is increasingly being disseminated via electronic software, and studies have shown this to benefit patients. Hence, a multi-disciplinary team consisting of doctors, nurses and allied health professionals was formed to create the stroke journey map software to deliver consistent and concise stroke education. Research Objectives: To evaluate the effectiveness of using stroke journey map software in improving patients’ perceived knowledge in the acute stroke unit during hospitalization. Methods: Patients admitted to the acute stroke unit were given the stroke journey map software during patient education. The software consists of 31 brightly coloured interactive slides and 4 videos, based on input provided by the multi-disciplinary team. Participants were then assessed with pre- and post-survey questionnaires before and after viewing the software. The questionnaire consists of 10 questions with a 5-point Likert scale, which sums to a total score of 50. The inclusion criteria were patients diagnosed with ischemic stroke who were cognitively alert and oriented. This study was conducted between May 2017 and October 2017. Participation was voluntary. Results: A total of 33 participants took part in the study. The results demonstrated that the use of a stroke journey map as a stroke education medium was effective in improving patients’ perceived knowledge. A comparison of pre- and post-implementation data for the stroke journey map revealed an overall mean increase in patients’ perceived knowledge from 24.06 to 40.06. The data were further broken down to evaluate patients’ perceived knowledge in 3 domains: (1) understanding of the disease process; (2) management and treatment plans; (3) post-discharge care. Each domain saw an increase in mean score, from 10.7 to 16.2, 6.9 to 11.9 and 6.6 to 11.7, respectively. Project Impact: The implementation of the stroke journey map has a positive impact in terms of (1) increasing patients’ perceived knowledge, which could contribute to greater empowerment over their health; (2) reducing the need for printed stroke education material, making it environmentally friendly; (3) decreasing the time nurses spend on giving education, resulting in more time to attend to patients’ needs. Conclusion: This study has demonstrated the benefit of using the stroke journey map as a platform for stroke education. Overall, it has increased patients’ perceived knowledge of their disease process, management and treatment plans, as well as the discharge process.

Keywords: acute stroke, education, ischemic stroke, knowledge, stroke

Procedia PDF Downloads 147
389 Reorientation of Sustainable Livestock Management: A Case Study Applied to Wastes Management in Faculty of Animal Husbandry, Padjadjaran University, Indonesia

Authors: Raka Rahmatulloh, Mohammad Ilham Nugraha, Muhammad Ifan Fathurrahman

Abstract:

The agricultural sector covers a wide area, one part of which is the livestock subsector that supplies the need for food sources of animal protein. Animal protein is provided by the main livestock products such as meat, milk and eggs. Besides these main products, livestock produce metabolic residues, so-called livestock wastes. Livestock wastes can be solid (feces), liquid (urine) or gaseous (methane), and they turn out to be useful and to have economic value when well processed and well controlled. Nowadays, these livestock wastes are considered a source of pollutants, especially of water pollution. If this source of pollutants is used in an integrated way, it will have a positive impact on organic farming and a healthy environment. Management of livestock wastes can be integrated with the farming sector, whose planting and crop care rely on fertilizers. Most Indonesian farmers still use chemical fertilizers, whose long-term use will disturb the ecological balance of the environment. One of the main efforts, conducted by the Faculty of Animal Husbandry, Padjadjaran University, is to use organic fertilizers instead of chemical fertilizers. The method converts solid livestock waste and agricultural wastes into liquid organic fertilizer, feed additive, biogas and vermicompost through decomposition. The decomposition takes as long as 14 days, including aeration and an extraction process that uses water as a solvent medium for the nutrients contained in the decomposed material, and disinfection of the decomposed material against pathogenic microorganisms. Liquid organic fertilizer is highly efficient for farmers, having a carbon/nitrogen (C/N) ratio of 25/1 to 30/1 and a neutral pH (6.5-7.5), which is good for plant growth. The feed additive may be given to improve the digestibility of feed so that substances can be easily absorbed by the body for production. Biogas contains methane (CH₄), which has a high enough heating value to produce electricity. Vermicompost is reworked organic waste material that has excellent structure, porosity, aeration, drainage, and moisture-holding capacity. Based on the case study above, an integrated livestock waste management program strongly supports the Indonesian government in the achievement of sustainable livestock development.
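A small sketch of the blending arithmetic behind the 25/1 to 30/1 C/N target: the C/N ratio of a mix is the ratio of the total carbon to the total nitrogen contributed by each ingredient. The ingredient compositions below are typical assumed values for illustration, not measurements from the faculty's process.

```python
# C/N ratio of a waste blend = total carbon mass / total nitrogen mass.
# Ingredient compositions are typical assumed values for illustration only.
def blend_cn(ingredients):
    """ingredients: list of (mass_kg, carbon_fraction, nitrogen_fraction)."""
    carbon = sum(m * c for m, c, _ in ingredients)
    nitrogen = sum(m * n for m, _, n in ingredients)
    return carbon / nitrogen

mix = [
    (100, 0.35, 0.018),   # cattle manure: ~35% C, ~1.8% N on a dry basis (assumed)
    (60,  0.45, 0.006),   # rice straw:    ~45% C, ~0.6% N (assumed)
]
ratio = blend_cn(mix)
print(f"Blend C/N ratio: {ratio:.1f}  (target 25-30 for composting / liquid organic fertilizer)")
```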

Keywords: integrated, livestock wastes, organic fertilizer, sustainable livestock development

Procedia PDF Downloads 417
388 Molecular Modeling and Prediction of the Physicochemical Properties of Polyols in Aqueous Solution

Authors: Maria Fontenele, Claude-Gilles Dussap, Vincent Dumouilla, Baptiste Boit

Abstract:

Roquette Frères is a producer of plant-based ingredients that employs many processes to extract relevant molecules and often transforms them through chemical and physical processes to create desired ingredients with specific functionalities. In this context, Roquette encounters numerous multi-component complex systems in their processes, including fibers, proteins, and carbohydrates, in an aqueous environment. To develop, control, and optimize both new and old processes, Roquette aims to develop new in silico tools. Currently, Roquette uses process modelling tools which include specific thermodynamic models and is willing to develop computational methodologies such as molecular dynamics simulations to gain insights into the complex interactions in such complex media, and especially hydrogen bonding interactions. The issue at hand concerns aqueous mixtures of polyols with high dry matter content. The polyols mannitol and sorbitol molecules are diastereoisomers that have nearly identical chemical structures but very different physicochemical properties: for example, the solubility of sorbitol in water is 2.5 kg/kg of water, while mannitol has a solubility of 0.25 kg/kg of water at 25°C. Therefore, predicting liquid-solid equilibrium properties in this case requires sophisticated solution models that cannot be based solely on chemical group contributions, knowing that for mannitol and sorbitol, the chemical constitutive groups are the same. Recognizing the significance of solvation phenomena in polyols, the GePEB (Chemical Engineering, Applied Thermodynamics, and Biosystems) team at Institut Pascal has developed the COSMO-UCA model, which has the structural advantage of using quantum mechanics tools to predict formation and phase equilibrium properties. In this work, we use molecular dynamics simulations to elucidate the behavior of polyols in aqueous solution. Specifically, we employ simulations to compute essential metrics such as radial distribution functions and hydrogen bond autocorrelation functions. Our findings illuminate a fundamental contrast: sorbitol and mannitol exhibit disparate hydrogen bond lifetimes within aqueous environments. This observation serves as a cornerstone in elucidating the divergent physicochemical properties inherent to each compound, shedding light on the nuanced interplay between their molecular structures and water interactions. We also present a methodology to predict the physicochemical properties of complex solutions, taking as sole input the three-dimensional structure of the molecules in the medium. Finally, by developing knowledge models, we represent some physicochemical properties of aqueous solutions of sorbitol and mannitol.
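A minimal sketch of the hydrogen-bond lifetime analysis mentioned above: given a boolean time series h(t) marking whether a donor-acceptor pair satisfies a hydrogen-bond criterion in each frame, the intermittent autocorrelation C(t) = <h(0)h(t)>/<h> measures how long bonding persists. The trajectory below is synthetic and its persistence parameter is an assumption, not simulation output for sorbitol or mannitol.

```python
# Intermittent hydrogen-bond autocorrelation C(t) = <h(0) h(t)> / <h> from a boolean trajectory.
# The trajectory is synthetic; in practice h would come from an MD geometric HB criterion.
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_pairs = 2000, 50
h = np.zeros((n_pairs, n_frames), dtype=float)
h[:, 0] = rng.random(n_pairs) < 0.5
for t in range(1, n_frames):
    stay = rng.random(n_pairs) < 0.9           # assumed 90% chance of keeping the previous state
    h[:, t] = np.where(stay, h[:, t - 1], 1 - h[:, t - 1])

def hb_autocorrelation(h, max_lag):
    mean_h = h.mean()
    return np.array([(h[:, : h.shape[1] - lag] * h[:, lag:]).mean() / mean_h
                     for lag in range(max_lag)])

c = hb_autocorrelation(h, max_lag=100)
lifetime = c.sum()   # crude integrated relaxation time, in frames (dt = 1 frame)
print(f"C(0) = {c[0]:.2f}, C(50) = {c[50]:.2f}, integrated lifetime ~ {lifetime:.1f} frames")
```

Comparing this integrated lifetime between the two diastereoisomers is one way to quantify the disparate hydrogen-bond persistence the abstract refers to.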

Keywords: COSMO models, hydrogen bond, molecular dynamics, thermodynamics

Procedia PDF Downloads 22
387 Acute Neurophysiological Responses to Resistance Training: Evidence of a Shortened Super Compensation Cycle and Early Neural Adaptations

Authors: Christopher Latella, Ashlee M. Hendy, Dan Vander Westhuizen, Wei-Peng Teo

Abstract:

Introduction: Neural adaptations following resistance training interventions have been widely investigated; however, the evidence regarding the mechanisms of early adaptation is less clear. Understanding the neural responses to an acute resistance training session is pivotal in the prescription of frequency, intensity and volume in applied strength and conditioning practice. Therefore, the primary aim of this study was to investigate the time course of neurophysiological mechanisms post training against current super compensation theory, and secondly, to examine whether these responses reflect the neural adaptations observed with resistance training interventions. Methods: Participants (N=14) completed a randomised, counterbalanced crossover study comparing control, strength and hypertrophy conditions. The strength condition involved 3 x 5RM leg extensions with 3 min recovery, while the hypertrophy condition involved 3 x 12RM with 60 s recovery. Transcranial magnetic stimulation (TMS) and peripheral nerve stimulation were used to measure the excitability of the central and peripheral neural pathways, and maximal voluntary contraction (MVC) to quantify strength changes. Measures were taken pre, immediately post, 10, 20 and 30 mins and 1, 2, 6, 24, 48, 72 and 96 hrs following training. Results: Significant decreases were observed at post, 10, 20 and 30 min and at 1 and 2 hrs for both training groups compared to the control group for force (p < .05), maximal compound wave (p < .005) and silent period (p < .05). A significant increase in corticospinal excitability (p < .005) was observed for both groups. The difference in corticospinal excitability between the strength and hypertrophy groups was near significance, with a large effect (η² = .202). All measures returned to baseline within 6 hrs post training. Discussion: Neurophysiological mechanisms appear to be significantly altered in the period up to 2 hrs post training, returning to homeostasis by 6 hrs. The evidence suggests that the time course of neural recovery post resistance training is 18-40 hours shorter than previous super compensation models suggest. The strength and hypertrophy protocols showed similar response profiles, with the current findings suggesting greater post-training corticospinal drive from hypertrophy training, despite previous evidence that strength training requires greater neural input. The increase in corticospinal drive and decrease in inhibition appear to be a compensatory mechanism for decreases in peripheral nerve excitability and maximal voluntary force output. The changes in corticospinal excitability and inhibition are akin to adaptive processes observed with training interventions of 4 wks or longer. It appears that the 2 hr recovery period post training is the most influential for priming further neural adaptations with resistance training. Secondly, prescribed resistance sessions can be scheduled closer together than previous super compensation theory suggests for optimal strength gains.

Keywords: neural responses, resistance training, super compensation, transcranial magnetic stimulation

Procedia PDF Downloads 265
386 Techno-Economic Analysis for Solar PV and Hydro Power for Kafue Gorge Power Station

Authors: Elvis Nyirenda

Abstract:

This research work was done to evaluate and propose an optimum measure to enhance the uptake of clean energy technologies such as solar photovoltaics. The study also aims at enhancing the country’s energy mix away from its overdependence on hydropower, which is susceptible to droughts and climate change challenges. In the years 2015-2016 and 2018-2019, the country received below-average rainfall due to climate change and a shift in the weather pattern; this resulted in prolonged power outages and load shedding of more than 10 hours per day. ZESCO Limited, the state-owned utility company that owns the electricity generation, transmission and distribution infrastructure, is seeking alternative sources of energy in order to reduce the overdependence on hydropower stations. One of these alternative sources is solar energy. However, solar power is intermittent in nature, and to smooth the load curve, investment in robust energy storage facilities is of great importance to enhance the security and reliability of the electricity supply in the country. The methodology of the study examined the historical performance of the Kafue Gorge Upper power station and utilised the hourly generation figures as input data for generation modelling in the Homer software. The average yearly demand was derived from the data available on the system SCADA. The two dams were modelled as a natural battery, with the state of charging and discharging determined by the available water resource and the peak electricity demand. The Homer Energy System software is used to simulate the scheme, incorporating a pumped storage facility and solar photovoltaic systems. The pumped hydro scheme works like a natural battery for the conservation of water, with the only losses being evaporation and water leakage from the dams and the turbines. To address the problems of the intermittency of the solar resource and the non-availability of water for hydropower generation, the study concluded that utilising the existing hydropower stations, Kafue Gorge Upper and Kafue Gorge Lower, conjunctively with solar energy will reduce power deficits and increase the security of supply for the country. An optimum capacity of 350 MW of solar PV can be integrated while operating the Kafue Gorge power station in both generating and pumping modes to enable efficient utilisation of water at the Kafue Gorge Upper dam and the Kafue Gorge Lower dam.
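A toy hourly dispatch sketch of the "natural battery" idea described above: solar PV serves the load first, surpluses pump water into the upper reservoir, and deficits are met by hydro generation subject to reservoir limits. All capacities, efficiencies and profiles are assumed round numbers, not ZESCO or Homer data.

```python
# Toy hourly dispatch of solar PV + pumped hydro acting as a natural battery.
# Capacities, efficiencies and profiles are assumed round numbers for illustration only.
solar_cap_mw, hydro_cap_mw = 350.0, 900.0
reservoir_mwh, reservoir_max = 5000.0, 10000.0
pump_eff, turbine_eff = 0.85, 0.90

# Simple 24-hour profiles (fractions of PV capacity and of a 1000 MW peak demand)
solar_profile = [0,0,0,0,0,0.05,0.2,0.45,0.7,0.85,0.95,1.0,1.0,0.95,0.8,0.6,0.35,0.1,0,0,0,0,0,0]
demand_profile = [0.6,0.55,0.5,0.5,0.55,0.65,0.75,0.85,0.9,0.9,0.85,0.8,0.8,0.8,0.85,0.9,0.95,1.0,1.0,0.95,0.85,0.75,0.7,0.65]

unserved = 0.0
for hour in range(24):
    solar = solar_cap_mw * solar_profile[hour]
    demand = 1000.0 * demand_profile[hour]
    balance = solar - demand
    if balance >= 0:                                  # surplus: pump water uphill
        reservoir_mwh = min(reservoir_max, reservoir_mwh + balance * pump_eff)
    else:                                             # deficit: generate from hydro
        need = -balance
        hydro = min(need, hydro_cap_mw, reservoir_mwh * turbine_eff)
        reservoir_mwh -= hydro / turbine_eff
        unserved += need - hydro

print(f"End-of-day reservoir energy: {reservoir_mwh:.0f} MWh, unserved energy: {unserved:.0f} MWh")
```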

Keywords: hydropower, solar power systems, energy storage, photovoltaics, solar irradiation, pumped hydro storage system, supervisory control and data acquisition, Homer energy

Procedia PDF Downloads 99
385 Coulomb-Explosion Driven Proton Focusing in an Arched CH Target

Authors: W. Q. Wang, Y. Yin, D. B. Zou, T. P. Yu, J. M. Ouyang, F. Q. Shao

Abstract:

The high-energy-density state, i.e., matter and radiation at energy densities in excess of 10^11 J/m^3, is relevant to materials science, nuclear physics, astrophysics, and geophysics. Laser-driven particle beams are better suited to heating matter as a trigger due to their unique properties of ultrashort duration and low emittance. Compared to X-ray and electron sources, it is easier to generate uniformly heated large-volume material with proton and ion beams because of their highly localized energy deposition. With the construction of state-of-the-art high-power laser facilities, creating extreme conditions of high temperature and high density in the laboratory becomes possible. It has been demonstrated that, on a picosecond time scale, solid-density material can be isochorically heated to over 20 eV by the ultrafast proton beams generated from spherically shaped targets. For this technique, the proton energy density plays a crucial role in the formation of warm dense matter states. Recently, several methods have been devoted to realizing the focusing of the accelerated protons, involving externally applied static fields or specially designed targets interacting with single or multiple laser pulses. In previous works, two co-propagating or counter-propagating laser pulses were employed to strike a submicron plasma shell. However, ultra-high pulse intensities, accurate temporal synchronization and undesirable transverse instabilities over long times remain intractable for current experimental implementations. Here, a mechanism for the focusing of laser-driven proton beams from two-ion-species arched targets is investigated by multi-dimensional particle-in-cell simulations. When an intense linearly polarized laser pulse impinges on the thin arched target, all electrons are completely evacuated, leading to a Coulomb-explosive electric field originating mostly from the heavier carbon ions. The lighter protons, in the reference frame moving at the ion sound speed, are accelerated and effectively focused by this radially isotropic field. At a laser intensity of 2.42×10^21 W/cm^2, a ballistic proton bunch with an energy density as high as 2.15×10^17 J/m^3 is produced, and the highest proton energy and the focusing position agree well with theory.

Keywords: Coulomb explosion, focusing, high-energy-density, ion acceleration

Procedia PDF Downloads 316
384 Mapping Actors in Sao Paulo's Urban Development Policies: Interests at Stake in the Challenge to Sustainability

Authors: A. G. Back

Abstract:

In the context of global climate change, extreme weather events are increasingly intense and frequent, challenging the adaptability of urban space. In this sense, urban planning is a relevant instrument for addressing, in a systemic manner, the various sectoral policies capable of linking the urban agenda to the reduction of social and environmental risks. The Master Plan of the Municipality of Sao Paulo, 2014, presents innovations capable of promoting the transition to sustainability in the urban space. Among such innovations, the following stand out: i) promotion of density along the axes of mass transport, involving a mixture of commercial, residential, services, and leisure uses (principles related to the compact city); ii) vulnerability reduction based on housing policies, including regular sources of funds for social housing and land reserves in urbanized areas; iii) the reservation of green areas in the city to create parks and environmental regulations for new buildings focused on reducing heat island effects and improving urban drainage. However, long-term implementation involves distributive conflicts and may change in different political, economic, and social contexts over time. Thus, the central objective of this paper is to identify which factors limit or support the implementation of these policies; that is, to map the challenges and interests of converging and/or diverging urban actors in the sustainable urban development agenda and the resources they mobilize to support or limit these actions in the city of Sao Paulo. Recent proposals to amend the urban zoning law undermine the implementation of the Master Plan guidelines. In this context, three interest groups with different views of the city come into dispute: the real estate market, upper-middle-class neighborhood associations ('not in my backyard' movements), and social housing rights movements. This paper surveys the different interests and visions of these groups, taking into account their convergence, or not, with the principles of sustainable urban development. This approach seeks to fill a gap in the international literature on the causes that underpin or hinder the continued implementation of policies aimed at the transition to urban sustainability in the medium and long term.

Keywords: adaptation, ecosystem-based adaptation, interest groups, urban planning, urban transition to sustainability

Procedia PDF Downloads 102
383 Tuberculosis Outpatient Treatment in the Context of Reformation of the Health Care System

Authors: Danylo Brindak, Viktor Liashko, Olexander Chepurniy

Abstract:

Despite considerable experience in implementing the best international approaches and services in response to the epidemic of multi-drug resistant tuberculosis, the results of situation analysis indicate the presence of faults in this area. In 2014, Ukraine was, for the first time, included among the world’s five countries with the highest level of drug-resistant tuberculosis. The effectiveness of its treatment is only 35% in the country. In this context, the increase in the allocation of funds to control the epidemic of multidrug-resistant tuberculosis does not produce perceptible positive results. During 2001-2016, the Global Fund to Fight AIDS, Tuberculosis and Malaria alone allocated more than USD 521.3 million to Ukraine for tuberculosis and HIV/AIDS control programs. However, current conditions in the post-Semashko system create little motivation for the rational use of resources or cost control at inpatient TB facilities. There is no motivation to reduce overlong hospitalization or to target resources to priority areas of modern tuberculosis control, including a patient-centred model of care. With line-item budgets at medical institutions, based on input factors such as the ratios of beds and staff, budgetary funds are disposed of passively by health care institutions and their employees, who have no motivation to improve the quality and efficiency of service provision. Outpatient treatment of tuberculosis has been implemented in Ukraine since 2011 and carries many risks, namely the creation of parallel systems, low sustainability due to dependence on project funding, a reduced role for the family doctor, fragmentation of financing, etc. As part of the reform of approaches to health system financing, which began in Ukraine in late 2016, the NGO Infection Control in Ukraine piloted a new, motivating method of remuneration of employees in primary health care. The innovative aspect of this funding mechanism is payment according to treatment results. The existing payment method based on a per-inhabitant standard (per capita ratio) was supplemented with motivating payments according to the results of work. The effectiveness of such treatment of TB patients at the outpatient stage is 90%, while under the current system as a whole, the effectiveness of treatment of newly diagnosed smear-positive pulmonary TB is around 60% in the country, even though Ukraine has 5.24 TB beds per 10,000 citizens. The implemented pilot model of ambulatory treatment will be used to create a payment system based on the results of activities, to integrate TB, primary health and social services with a focus on achieving results, and to reduce inpatient treatment of tuberculosis.

Keywords: health care reform, multi-drug resistant tuberculosis, outpatient treatment efficiency, tuberculosis

Procedia PDF Downloads 133
382 The Environmental Impact of Sustainability Dispersion of Chlorine Releases in Coastal Zone of Alexandra: Spatial-Ecological Modeling

Authors: Mohammed El Raey, Moustafa Osman Mohammed

Abstract:

Spatial-ecological modeling relates sustainable dispersion to social development. Sustainability in the spatial-ecological model gives attention to urban environments in design review management to comply with the Earth system. Natural exchange patterns of ecosystems have consistent and periodic cycles that preserve energy and material flows in the Earth system. The probabilistic risk assessment (PRA) technique is utilized to assess the safety of the industrial complex. The other analytical approach is Failure Mode and Effects Analysis (FMEA) for critical components. The plant safety parameters are identified for the engineering topology employed in the safety assessment of industrial ecology. In particular, the most severe accidental release of hazardous gas is postulated, analyzed and assessed in the industrial region. The IAEA safety assessment procedure is used to account for the duration and rate of discharge of liquid chlorine. The ecological model of plume dispersion width and concentration of chlorine gas in the downwind direction is determined using the Gaussian plume model in urban and rural areas and presented with SURFER®. The prediction of accident consequences is traced in risk contour concentration lines. The local greenhouse effect is predicted, with relevant conclusions. The spatial-ecological model also predicts the distribution schemes from the perspective of pollutants, considering multiple factors in a multi-criteria analysis. The data extend input-output analysis to evaluate the spillover effect, and Monte Carlo simulations and sensitivity analysis were conducted. The unique structures are balanced within “equilibrium patterns”, such as the biosphere, and collectively form a composite index of many distributed feedback flows. These dynamic structures are related to their physical and chemical properties and enable a gradual and prolonged incremental pattern. While this spatial model structure argues from ecology, resource savings, static load design, financial and other pragmatic reasons, the outcomes are not decisive from an artistic/architectural perspective. The hypothesis is an attempt to unify analytic and analogical spatial structure for the development of urban environments using optimization software, applied as an example of an integrated industrial structure where the process is based on engineering topology as an optimization approach to systems ecology.
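A hedged sketch of the Gaussian plume calculation referred to above, giving the ground-level centreline concentration downwind of an elevated continuous release; the source term, wind speed, release height and stability-class dispersion coefficients are illustrative assumptions, not the study's accident scenario.

```python
# Ground-level Gaussian plume concentration downwind of an elevated continuous release:
# C(x, y, 0) = Q / (pi * u * sy * sz) * exp(-y^2 / (2 sy^2)) * exp(-H^2 / (2 sz^2))
# Source term, wind speed, release height and dispersion coefficients are assumed values.
import math

def sigma_y(x): return 0.08 * x * (1 + 0.0001 * x) ** -0.5   # Briggs open-country, class D (m)
def sigma_z(x): return 0.06 * x * (1 + 0.0015 * x) ** -0.5

def ground_conc(Q, u, H, x, y=0.0):
    sy, sz = sigma_y(x), sigma_z(x)
    return (Q / (math.pi * u * sy * sz)
            * math.exp(-y**2 / (2 * sy**2))
            * math.exp(-H**2 / (2 * sz**2)))

Q = 5.0      # chlorine release rate, kg/s (assumed)
u = 3.0      # wind speed, m/s (assumed)
H = 10.0     # effective release height, m (assumed)

for x in [200, 500, 1000, 2000, 5000]:     # downwind distance, m
    c = ground_conc(Q, u, H, x) * 1e6      # kg/m3 -> mg/m3
    print(f"x = {x:5d} m : centreline ground-level concentration ~ {c:.1f} mg/m3")
```

Evaluating the same expression off-axis (y != 0) over a grid gives the concentration field that the risk contour lines summarise.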

Keywords: spatial-ecological modeling, spatial structure orientation impact, composite structure, industrial ecology

Procedia PDF Downloads 59
381 Uniform and Controlled Cooling of a Steel Block by Multiple Jet Impingement and Airflow

Authors: E. K. K. Agyeman, P. Mousseau, A. Sarda, D. Edelin

Abstract:

During the cooling of hot metals by the circulation of water in canals formed by boring holes in the metal, the rapid phase change of the water due to the high initial temperature of the metal leads to a non-homogeneous distribution of the phases within the canals. The liquid phase dominates towards the entrance of the canal, while the gaseous phase dominates towards the exit. As a result of the different thermal properties of the two phases, the metal is not uniformly cooled. This poses a problem during the cooling of moulds, where a uniform temperature distribution is needed in order to ensure the integrity of the part being formed. In this study, the simultaneous use of multiple water jets and an airflow for the uniform and controlled cooling of a steel block is investigated. A circular hole is bored at the centre of the steel block along its length, and a perforated steel pipe is inserted along the central axis of the hole. Water jets that impact the internal surface of the steel block are generated from the perforations in the steel pipe when the water within it is put under pressure. These jets are oriented in the direction opposite to gravity. An intermittent airflow is imposed in the annular space between the steel pipe and the surface of the hole bored in the steel block. The evolution with time of the temperature of the external surface of the block is measured with the help of thermocouples and an infrared camera. Due to the high initial temperature of the steel block (350 °C), the water changes phase when it impacts the internal surface of the block. This leads to high heat fluxes. The strategy used to control the cooling speed of the block is the intermittent impingement of its internal surface by the jets. The intervals of impingement and of non-impingement are varied in order to achieve the desired result. An airflow is used during the non-impingement periods as an additional regulator of the cooling speed and to improve the temperature homogeneity of the impinged surface. After testing different jet positions, jet speeds and impingement intervals, it is observed that the external surface of the steel block has a uniform temperature distribution along its length. However, the temperature distribution along its width is not uniform, with the maximum temperature difference occurring between the centre of the block and its edge. Changing the positions of the jets has no significant effect on the temperature distribution on the external surface of the steel block. It is also observed that reducing the jet impingement interval and increasing the non-impingement interval slows down the cooling of the block and improves the temperature homogeneity of its external surface, while increasing the duration of jet impingement speeds up the cooling process.
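A lumped-capacitance sketch of the intermittent cooling strategy described above: the block temperature is stepped in time with a high heat-transfer coefficient during jet impingement and a much lower one during the airflow-only intervals. Geometry, coefficients and interval durations are assumed values, not the experimental ones.

```python
# Lumped-capacitance model of intermittent cooling: dT/dt = -h(t) * A * (T - T_inf) / (m * c)
# Heat-transfer coefficients, geometry and intervals are assumed values for illustration.
m, c, A = 40.0, 490.0, 0.05          # steel block: mass (kg), specific heat (J/kg.K), wetted area (m2)
T, T_inf = 350.0, 20.0               # initial block and coolant/ambient temperature (C)
h_jet, h_air = 5000.0, 80.0          # W/m2.K during impingement / airflow-only periods (assumed)
t_jet, t_air = 5.0, 15.0             # impingement and non-impingement intervals (s)
dt, t_end = 0.1, 600.0

t = 0.0
while t < t_end:
    in_cycle = t % (t_jet + t_air)
    h = h_jet if in_cycle < t_jet else h_air
    T += dt * (-h * A * (T - T_inf) / (m * c))   # explicit Euler step
    t += dt

print(f"Block temperature after {t_end:.0f} s: {T:.1f} C")
# Lengthening t_air (or shortening t_jet) slows the cooling rate, mirroring the experimental trend.
```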

Keywords: cooling speed, homogenous cooling, jet impingement, phase change

Procedia PDF Downloads 113
380 Plasma Arc Burner for Pulverized Coal Combustion

Authors: Gela Gelashvili, David Gelenidze, Sulkhan Nanobashvili, Irakli Nanobashvili, George Tavkhelidze, Tsiuri Sitchinava

Abstract:

The development of a new, highly efficient plasma arc combustion system for pulverized coal is presented. As is well known, coal is one of the main energy carriers by means of which electric and heat energy is produced in thermal power stations. The quality of the extracted coal decreases very rapidly. Therefore, difficulties associated with its firing and complete combustion arise, and thermo-chemical preparation of the pulverized coal becomes necessary. Usually, other organic fuels (mazut fuel oil or natural gas) are added to low-quality coal for this purpose. The fraction of additional organic fuels varies within the 35-40% range. This dramatically decreases the economic efficiency of such systems. At the same time, the emission of noxious substances into the environment increases. Because of all this, intense development of plasma combustion systems for pulverized coal is taking place around the world. These systems are equipped with non-transferred plasma arc torches. They allow practically complete combustion of pulverized coal (without organic additives) in boilers and increase energy and economic efficiency. At the same time, the emission of noxious substances into the environment decreases dramatically. However, non-transferred plasma torches have numerous drawbacks, e.g., complicated construction, low service life (especially at high power), instability of the plasma arc and, most importantly, up to 30% energy loss due to anode cooling. For these reasons, intense development of new plasma technologies free from these shortcomings is taking place. In our proposed system, the pulverized coal-air mixture passes through the plasma arc area that burns between two carbon electrodes directly in the pulverized coal muffle burner. Consumption of the carbon electrodes is low and no cooling system is needed, but the main advantage of this method is that the radiation of the plasma arc directly impacts the coal-air mixture, which accelerates the thermo-chemical preparation of the coal for combustion. To ensure the stability of the plasma arc in such difficult conditions, we have developed a power source that maintains a fixed current, with fluctuations in the arc resistance automatically compensated by voltage changes, and that allows regulation of the plasma arc length over a wide range. Our combustion system, where the plasma arc acts directly on the pulverized coal-air mixture, is simple. This should allow a significant improvement of pulverized coal combustion (especially for low-quality coal) and of its economic efficiency. Preliminary experiments demonstrated the successful functioning of the system.
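
The idea of a power source that holds the arc current fixed while the arc resistance fluctuates can be sketched as a simple feedback loop, as in the illustrative Python snippet below; the current setpoint, controller gain, and resistance range are assumptions for illustration, not the actual power source design.

import random

# Sketch of constant-current regulation for a plasma arc whose resistance drifts.
# Setpoint, gain and resistance range are illustrative values only.
I_set = 200.0        # target arc current (A)
V = 100.0            # initial source voltage (V)
Kp = 0.5             # proportional gain (V per A of current error)

random.seed(1)
for step in range(20):
    R_arc = 0.4 + 0.2 * random.random()    # fluctuating arc resistance (ohm)
    I = V / R_arc                          # resulting arc current
    V += Kp * (I_set - I)                  # raise/lower voltage to hold the current
    print(f"step {step:2d}: R={R_arc:.2f} ohm, I={I:6.1f} A, V={V:6.1f} V")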

Keywords: coal combustion, plasma arc, plasma torches, pulverized coal

Procedia PDF Downloads 146
379 A Comparison of Direct Water Injection with Membrane Humidifier for Proton Exchange Membrane Fuel Cells Humidification

Authors: Flavien Marteau, Pedro Affonso Nóbrega, Pascal Biwole, Nicolas Autrusson, Iona De Bievre, Christian Beauger

Abstract:

Effective water management is essential for the optimal performance of fuel cells. For this reason, many vehicle systems use a membrane humidifier, a passive device that humidifies the air before the cathode inlet. Although they offer good performance, humidifiers are voluminous, costly, and fragile, hence the desire to find an alternative. Direct water injection could be an option, although this method lacks maturity. It consists of injecting liquid water as a spray into the dry heated air coming out of the compressor. This work focuses on the evaluation of direct water injection and its performance compared to a membrane humidifier selected as a reference. Two architectures were experimentally tested to humidify an industrial 2 kW short stack made up of 20 cells of 150 cm² each. For the reference architecture, the inlet air is humidified with a commercial membrane humidifier. For the direct water injection (DWI) architecture, a pneumatic nozzle was selected to generate a fine spray in the air flow with a Sauter mean diameter of about 20 μm. Initial performance was compared over the entire range of current based on polarisation curves. Then, the influence of various parameters impacting water management was studied, such as the temperature, the gas stoichiometry, and the water injection flow rate. The experimental results obtained confirm the possibility of humidifying the fuel cell using direct water injection. This study, however, shows the limits of this humidification method, the mean cell voltage being significantly lower in some operating conditions with direct water injection than with the membrane humidifier. The voltage drop reaches 30 mV per cell (4%) at 1 A/cm² (1.8 bara, 80 °C) and increases in more demanding humidification conditions. It is noteworthy that the heat of compression available is not enough to evaporate all the injected liquid water in the case of DWI, resulting in a mix of liquid and vapour water entering the fuel cell, whereas only vapour is present with the humidifier. Varying the injection flow rate shows that part of the injected water does not contribute to humidification and seems to cross the channels without reaching the membrane. The stack was successfully humidified by direct water injection. Nevertheless, our work shows that its implementation requires substantial adaptations and may reduce the fuel cell stack performance compared to conventional membrane humidifiers, although opportunities for optimisation have been identified.
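
For orientation, the liquid water flow needed to bring dry cathode air to a target relative humidity can be estimated from the air stoichiometry and the water saturation pressure, as in the hedged sketch below; the stack size matches the 2 kW short stack described above, while the stoichiometry, pressure, temperature, and target humidity are illustrative operating assumptions.

F = 96485.0                      # Faraday constant (C/mol)

def p_sat_bar(T_c):
    """Water saturation pressure (bar) from the Antoine equation (valid roughly 1-100 deg C)."""
    p_mmhg = 10 ** (8.07131 - 1730.63 / (233.426 + T_c))
    return p_mmhg * 1.01325 / 760.0

def water_injection_rate(i_a_cm2=1.0, area_cm2=150.0, n_cells=20,
                         stoich=2.0, T_c=80.0, P_bar=1.8, rh_target=0.8):
    """Liquid water flow (g/s) needed to humidify dry cathode air to rh_target.

    Stoichiometry, pressure, temperature and target RH are assumed operating values.
    """
    I = i_a_cm2 * area_cm2                       # current per cell (A)
    n_o2 = I * n_cells / (4 * F)                 # O2 consumption for the stack (mol/s)
    n_air = stoich * n_o2 / 0.21                 # dry air feed (mol/s)
    p_w = rh_target * p_sat_bar(T_c)             # target water partial pressure (bar)
    n_w = n_air * p_w / (P_bar - p_w)            # water vapour required (mol/s)
    return n_w * 18.015                          # g/s

print(f"{water_injection_rate():.2f} g/s of injected water")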

Keywords: cathode humidification, direct water injection, membrane humidifier, proton exchange membrane fuel cell

Procedia PDF Downloads 22
378 Discovering the Effects of Meteorological Variables on the Air Quality of Bogota, Colombia, by Data Mining Techniques

Authors: Fabiana Franceschi, Martha Cobo, Manuel Figueredo

Abstract:

Bogotá, the capital of Colombia, is its largest city and one of the most polluted in Latin America due to the fast economic growth of the last ten years. Bogotá has been affected by high pollution events which led to high concentrations of PM10 and NO2, exceeding the local 24-hour legal limits (100 and 150 µg/m³, respectively). The most important pollutants in the city are PM10 and PM2.5 (which are associated with respiratory and cardiovascular problems), and it is known that their concentrations in the atmosphere depend on the local meteorological factors. Therefore, it is necessary to establish a relationship between the meteorological variables and the concentrations of atmospheric pollutants such as PM10, PM2.5, CO, SO2, NO2 and O3. This study aims to determine the interrelations between meteorological variables and air pollutants in Bogotá using data mining techniques. Data from 13 monitoring stations were collected from the Bogotá Air Quality Monitoring Network within the period 2010-2015. The Principal Component Analysis (PCA) algorithm was applied to obtain preliminary relations between all the parameters, and afterwards, the K-means clustering technique was implemented to corroborate those relations and to find patterns in the data. PCA was also used on a per-shift basis (morning, afternoon, night and early morning) to validate possible variation of the previous trends, and on a per-year basis to verify that the identified trends remained throughout the study period. Results demonstrated that wind speed, wind direction, temperature, and NO2 are the factors with the most influence on PM10 concentrations. Furthermore, it was confirmed that high humidity episodes increased PM2.5 levels. It was also found that there are directly proportional relationships between O3 levels and wind speed and radiation, while there is an inverse relationship between O3 levels and humidity. Concentrations of SO2 increase with the presence of PM10 and decrease with the wind speed and wind direction. The results also showed a decreasing trend of pollutant concentrations over the last five years, and that in rainy periods (March-June and September-December) some precipitation-related trends were stronger. Results obtained with K-means demonstrated that it was possible to find patterns in the data, and they also showed similar conditions and data distributions among the Carvajal, Tunal and Puente Aranda stations, and between Parque Simon Bolivar and Las Ferias. It was verified that the aforementioned trends prevailed during the study period by applying the same technique per year. It was concluded that the PCA algorithm is useful to establish preliminary relationships among variables, and K-means clustering to find patterns in the data and understand their distribution. The discovery of patterns in the data allows using these clusters as an input to an Artificial Neural Network prediction model.
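
A minimal Python sketch of the PCA-plus-K-means workflow described above is given below using scikit-learn; the random matrix stands in for the hourly monitoring-network records, and the numbers of components and clusters are illustrative choices rather than those of the study.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Illustrative stand-in for the monitoring records
# (columns might be PM10, PM2.5, NO2, O3, wind speed, temperature, humidity).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 7))

X_std = StandardScaler().fit_transform(X)        # put variables on a common scale
pca = PCA(n_components=3)
scores = pca.fit_transform(X_std)                # principal components for preliminary relations
print("explained variance ratio:", pca.explained_variance_ratio_)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = kmeans.fit_predict(X_std)               # station/period patterns as clusters
print("cluster sizes:", np.bincount(labels))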

Keywords: air pollution, air quality modelling, data mining, particulate matter

Procedia PDF Downloads 241
377 Effect of Laser Ablation OTR Films on the Storability of Endive and Pak Choi Baby Vegetables in Modified Atmosphere Condition

Authors: In-Lee Choi, Min Jae Jeong, Jun Pill Baek, Ho-Min Kang

Abstract:

As consumption trends for vegetables differ from the past, vegetables are increasingly used in more convenient forms, such as fresh-cut vegetables, sprouts, and baby vegetables, rather than as whole pieces. The selected baby vegetables have various functional compounds but a short shelf life. This study was conducted to improve storability by using suitable laser ablation OTR (oxygen transmission rate) films. The baby vegetables used in this research, endive (Cichorium endivia L.) and pak choi (Brassica rapa chinensis), around 10 cm in height, were cultivated in a glass greenhouse for 3 weeks. Harvested endive and pak choi were stored at 8 ℃ for 5 days, packed in PP (polypropylene) containers and covered with different types of laser ablation OTR films (DaeRyung Co., Ltd.), namely 1,300 cc, 10,000 cc, 20,000 cc, and 40,000 cc/m²·day·atm, and a control (perforated film), sealed with a heat sealing machine (SC200-IP, Kumkang, Korea). All samples were replicated 5 times. Statistical analysis was carried out using the Microsoft Excel 2010 program, and results were expressed with standard deviations. The maximum fresh weight loss rate of both baby vegetables was less than 0.3% in the treated films. In contrast, the control showed a weight loss rate of around 3.0% on the final storage day, accompanied by a decline in quantity. Endive showed maximum carbon dioxide contents of less than 2.0% in the 20,000 cc and 40,000 cc treatments. Oxygen content was maintained between 17 and 20% in endive and between 19 and 20% in pak choi. The ethylene concentration of both vegetables remained slightly lower in the 20,000 cc treatment than in the others on the final storage day, without statistical significance. In the case of hardness, the 40,000 cc film showed slightly higher values for both baby vegetables, without statistical significance. Visual quality was good at 10,000 cc and 20,000 cc in endive and pak choi, and no off-flavor appeared in either vegetable. The chlorophyll (SPAD-502, Minolta, Japan) value of endive remained similar to the initial value in all treatments except 20,000 cc, which was slightly lower, while the chlorophyll value of pak choi decreased in all treatments compared with the initial value but did not differ significantly among treatments. Leaf color (CR-400, Minolta, Japan) changed significantly in the 40,000 cc treatment for endive. In the case of pak choi, all treatments started yellowing, as shown by an increasing Hunter b value, with the control increasing substantially. Based on these results, the 10,000 cc film was the most suitable packaging film for storing endive and the 20,000 cc film for pak choi with good quality.
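
The way a film's oxygen transmission rate and the produce respiration rate together set the package atmosphere can be pictured with a simple oxygen mass balance, sketched below in Python; the respiration rate, film area, produce weight, and headspace volume are assumed values chosen only so that the steady oxygen level falls near the 17-20% range reported above.

# Oxygen mass balance in a modified-atmosphere package (illustrative values only).
# OTR is in cc/(m^2.day.atm), as for the laser-ablated films above; the respiration
# rate, film area, produce weight and headspace are assumptions, not measured data.
otr = 10000.0          # film oxygen transmission rate, cc/(m^2.day.atm)
area = 0.02            # film area (m^2)
headspace = 500.0      # package free volume (cc)
weight = 0.05          # produce weight (kg)
r_o2 = 5.0             # respiration rate, cc O2/(kg.h), assumed for baby leaves

y = 0.21               # initial O2 fraction inside the package
dt_h = 0.1             # time step (h)
for step in range(int(120 / dt_h)):          # 5 days of storage
    influx = otr / 24.0 * area * (0.21 - y)  # cc O2/h entering through the film
    uptake = r_o2 * weight                   # cc O2/h consumed by respiration
    y += (influx - uptake) / headspace * dt_h
print(f"O2 fraction after 5 days: {y:.3f}")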

Keywords: carbon dioxide, shelf-life, visual quality, pak choi

Procedia PDF Downloads 774
376 Machine Learning for Disease Prediction Using Symptoms and X-Ray Images

Authors: Ravija Gunawardana, Banuka Athuraliya

Abstract:

Machine learning has emerged as a powerful tool for disease diagnosis and prediction. The use of machine learning algorithms has the potential to improve the accuracy of disease prediction, thereby enabling medical professionals to provide more effective and personalized treatments. This study focuses on developing a machine-learning model for disease prediction using symptoms and X-ray images. The importance of this study lies in its potential to assist medical professionals in accurately diagnosing diseases, thereby improving patient outcomes. Respiratory diseases are a significant cause of morbidity and mortality worldwide, and chest X-rays are commonly used in the diagnosis of these diseases. However, accurately interpreting X-ray images requires significant expertise and can be time-consuming, making it difficult to diagnose respiratory diseases in a timely manner. By incorporating machine learning algorithms, we can significantly enhance disease prediction accuracy, ultimately leading to better patient care. The study utilized the Mask R-CNN algorithm, which is a state-of-the-art method for object detection and segmentation in images, to process chest X-ray images. The model was trained and tested on a large dataset of patient information, which included both symptom data and X-ray images. The performance of the model was evaluated using a range of metrics, including accuracy, precision, recall, and F1-score. The results showed that the model achieved an accuracy rate of over 90%, indicating that it was able to accurately detect and segment regions of interest in the X-ray images. In addition to X-ray images, the study also incorporated symptoms as input data for disease prediction. The study used three different classifiers, namely Random Forest, K-Nearest Neighbor and Support Vector Machine, to predict diseases based on symptoms. These classifiers were trained and tested using the same dataset of patient information as the X-ray model. The results showed promising accuracy rates for predicting diseases using symptoms, with the ensemble learning techniques significantly improving the accuracy of disease prediction. The study's findings indicate that the use of machine learning algorithms can significantly enhance disease prediction accuracy, ultimately leading to better patient care. The model developed in this study has the potential to assist medical professionals in diagnosing respiratory diseases more accurately and efficiently. However, it is important to note that the accuracy of the model can be affected by several factors, including the quality of the X-ray images, the size of the dataset used for training, and the complexity of the disease being diagnosed. In conclusion, the study demonstrated the potential of machine learning algorithms for disease prediction using symptoms and X-ray images. The use of these algorithms can improve the accuracy of disease diagnosis, ultimately leading to better patient care. Further research is needed to validate the model's accuracy and effectiveness in a clinical setting and to expand its application to other diseases.
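
A minimal sketch of the symptom-based branch is shown below: the three classifiers named above are trained on a synthetic symptom matrix and combined by soft voting as one possible ensemble strategy; the data and the label rule are placeholders, not the patient dataset used in the study.

import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic binary symptom matrix (rows: patients, columns: symptoms) as a stand-in.
rng = np.random.default_rng(42)
X = rng.integers(0, 2, size=(500, 30))
y = (X[:, :5].sum(axis=1) > 2).astype(int)       # toy label rule for illustration

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

ensemble = VotingClassifier(estimators=[
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=7)),
    ("svm", SVC(kernel="rbf", probability=True, random_state=0)),
], voting="soft")                                 # combine the three classifiers by probability

ensemble.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, ensemble.predict(X_te)))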

Keywords: K-nearest neighbor, mask R-CNN, random forest, support vector machine

Procedia PDF Downloads 120
375 Improving Teaching in English-Medium Instruction Classes at Japanese Universities through Needs-Based Professional Development Workshops

Authors: Todd Enslen

Abstract:

In order to attract more international students to study for undergraduate degrees in Japan, many universities have been developing English-Medium Instruction degree programs. This means that many faculty members must now teach their courses in English, which raises a number of concerns. A common misconception of English-Medium Instruction (EMI) is that teaching in English is simply a matter of translating materials. Since much of the teaching in Japan still relies on a more traditional, teacher-centered approach, continuing with this style in an EMI environment that targets international students can cause a clash between what is happening and what students expect in the classroom, not to mention what the Scholarship of Teaching and Learning (SoTL) has shown is effective teaching. A variety of considerations need to be taken into account in EMI classrooms, such as the varying English abilities of the students, modifying input material, and assuring comprehension through interactional checks. This paper analyzes the effectiveness of the EMI undergraduate degree programs in engineering, agriculture, and science at a large research university in Japan by presenting the results from student surveys regarding the areas where perceived improvements need to be made. The students were the most dissatisfied with communication with their teachers in English, communication with Japanese students in English, adherence to only English being used in the classes, and the quality of the education they received. In addition, the results of a needs analysis survey of Japanese teachers having to teach in English showed that they believed they were most in need of English vocabulary and expressions to use in the classroom and teaching methods for teaching in English. The results from the student survey and the faculty survey show similar concerns in the two groups. By helping teachers understand student-centered teaching and the benefits for learning that it provides, they may begin to incorporate more student-centered approaches that in turn help to alleviate the dissatisfaction students are currently experiencing. Through analyzing the current environment in Japanese higher education against established best practices in teaching and EMI, three areas that need to be addressed in professional development workshops were identified. These were “culture” as it relates to the English language, “classroom management techniques” and ways to incorporate them into classes, and “language” issues. Materials used to help faculty better understand best practices as they relate to these specific areas will be provided to help practitioners begin the process of helping EMI faculty build awareness of better teaching practices. Finally, the results from surveys of faculty development workshop participants will show the impact that these workshops can have. Almost all of the participants indicated that they learned something new and would like to incorporate the ideas from the workshop into their teaching. In addition, the vast majority of the participants felt the workshop provided them with new information, and they would like more workshops like these.

Keywords: English-medium instruction, materials development, professional development, teaching effectiveness

Procedia PDF Downloads 71
374 Characterization of Aerosol Droplet in Absorption Columns to Avoid Amine Emissions

Authors: Hammad Majeed, Hanna Knuutila, Magne Hilestad, Hallvard Svendsen

Abstract:

Formation of aerosols can cause serious complications in industrial exhaust gas CO2 capture processes. SO3 present in the flue gas can cause aerosol formation in an absorption-based capture process. Small mist droplets and fog formed can normally not be removed in conventional demisting equipment because their submicron size allows the particles or droplets to follow the gas flow. As a consequence, aerosol-based emissions in the order of grams per Nm3 have been identified from PCCC plants. In absorption processes, aerosols are generated by spontaneous condensation or desublimation processes in supersaturated gas phases. Undesired aerosol development may lead to amine emissions many times larger than what would be encountered in a mist-free gas phase in PCCC development. It is thus of crucial importance to understand the formation and build-up of these aerosols in order to mitigate the problem. Rigorous modelling of aerosol dynamics leads to a system of partial differential equations. In order to understand the mechanics of a particle entering an absorber, an implementation of the model was created in Matlab. The model predicts the droplet size, the droplet internal variable profiles, and the mass transfer fluxes as functions of position in the absorber. The Matlab model is based on a method of weighted residuals for boundary value problems, the orthogonal collocation method. The model comprises a set of mass transfer equations for the transferring components and the essential diffusion-reaction equations to describe the droplet internal profiles for all relevant constituents. Also included is heat transfer across the interface and inside the droplet. This paper presents results describing the basic simulation tool for the characterization of aerosols formed in CO2 absorption columns and gives examples of how various entering droplets grow or shrink through an absorber and how their composition changes with respect to time. Some preliminary simulation results for an aerosol droplet's composition and temperature profiles are given below. Results: as an example, a droplet with an initial size of 3 microns, initially containing a 5 M MEA solution, is exposed to an atmosphere free of MEA. The composition of the gas phase and the temperature change with respect to time throughout the absorber.
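
As a much-reduced illustration of how an entering droplet grows or shrinks, the Python sketch below integrates the diameter of a single droplet driven by a water vapour flux; the mass transfer coefficient and the supersaturation profile are assumed, and the full model additionally solves the internal diffusion-reaction profiles by orthogonal collocation, which is not reproduced here.

import numpy as np
from scipy.integrate import solve_ivp

rho_w = 1000.0        # droplet density (kg/m^3), water-like
M_w = 0.018           # molar mass of water (kg/mol)
k_g = 0.05            # gas-side mass transfer coefficient (m/s), assumed

def dddt(t, d):
    """d(diameter)/dt driven by a condensing water flux k_g*(c_bulk - c_surface)."""
    supersat = 0.2 * np.exp(-t / 0.5)      # decaying water supersaturation (mol/m^3), assumed
    flux = k_g * supersat                  # mol/(m^2.s) towards the droplet surface
    return [2.0 * flux * M_w / rho_w]      # growth rate is independent of the current diameter

sol = solve_ivp(dddt, (0.0, 2.0), [3.0e-6], max_step=0.01)   # 3-micron droplet, 2 s residence time
print(f"final droplet diameter: {sol.y[0, -1] * 1e6:.2f} microns")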

Keywords: amine solvents, emissions, global climate change, simulation and modelling, aerosol generation

Procedia PDF Downloads 248
373 Solar Electric Propulsion: The Future of Deep Space Exploration

Authors: Abhishek Sharma, Arnab Banerjee

Abstract:

The research is intended to study solar electric propulsion (SEP) technology for planetary missions. The main benefits of using solar electric propulsion for such missions are shorter flight times, more frequent target accessibility, and the use of a smaller launch vehicle than that required by a comparable chemical propulsion mission. Energized by electric power from on-board solar arrays, the electrically propelled system uses 10 times less propellant than a conventional chemical propulsion system, yet the reduced fuel mass can provide enough power to propel robotic and crewed missions beyond Low Earth Orbit (LEO). The thrusters used in SEP are gridded ion thrusters and Hall effect thrusters. This research is aimed at studying ion thrusters, investigating their complications, and determining what can be done to overcome them. Ion thrusters are used because they have a lower total propellant requirement and substantially longer operating times. In an ion thruster, the anode pushes or directs the incoming electrons from the cathode, but the anode is not maintained at a very high potential, which leads to divergence. Divergence causes the charges to interact with the surface of the thruster. Just as the charges ionize the xenon gas, they are capable of ionizing the surfaces and, over time, destroying and contaminating them. Hence the lifetime of the thruster is limited. One solution to this problem is to use substances that are not easy to ionize as the surface material. Another approach is to increase the potential of the anode so that the electrons do not deviate as much, or to reduce the length of the thruster so that the positive anode is more effective. The aim is to work on these aspects and constrain the deviation of the charges while keeping the input power constant, and hence increase the lifetime of the thruster. Ring cusp magnets are predominantly used in ion thrusters. However, the study also intends to observe the effect of using a solenoid to produce a micro-solenoidal magnetic field, in addition to the ring cusp magnetic field used in the discharge chamber to prevent the interaction of electrons with the ionization walls. Another foremost area of interest is the ways in which power can be provided to the solar electric propulsion vehicle for lowering and boosting the orbit of the spacecraft, while also providing a substantial amount of power to the solenoid for producing stronger magnetic fields. This can be achieved by using an electrodynamic tether, which will serve as a power source for both the vehicle and the solenoids in the ion thruster, hence eliminating the need for carrying extra propellant on the spacecraft, which will reduce the weight and hence the cost of space propulsion.
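
The propellant advantage quoted above follows from the rocket equation and the much higher specific impulse of ion thrusters; the short Python sketch below compares propellant mass fractions for an illustrative delta-v and typical specific impulse values, which are assumptions rather than figures from this study.

import math

def propellant_fraction(delta_v, isp, g0=9.81):
    """Propellant mass fraction from the Tsiolkovsky rocket equation."""
    return 1.0 - math.exp(-delta_v / (isp * g0))

dv = 4500.0                          # illustrative deep-space delta-v (m/s)
for label, isp in [("chemical (Isp ~ 320 s)", 320.0),
                   ("ion thruster (Isp ~ 3000 s)", 3000.0)]:
    print(f"{label}: {propellant_fraction(dv, isp) * 100:.1f}% of the mass is propellant")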

Keywords: electro-dynamic tether, ion thruster, lifetime of thruster, solar electric propulsion vehicle

Procedia PDF Downloads 196
372 Developing a Leukemia Diagnostic System Based on Hybrid Deep Learning Architectures in Actual Clinical Environments

Authors: Skyler Kim

Abstract:

An early diagnosis of leukemia has always been a challenge to doctors and hematologists. On a worldwide basis, it was reported that there were approximately 350,000 new cases in 2012, and diagnosing leukemia was time-consuming and inefficient because of an endemic shortage of flow cytometry equipment in current clinical practice. As the number of medical diagnosis tools increased and a large volume of high-quality data was produced, there was an urgent need for more advanced data analysis methods. One of these methods was the AI approach. This approach has become a major trend in recent years, and several research groups have been working on developing these diagnostic models. However, designing and implementing a leukemia diagnostic system in real clinical environments based on a deep learning approach with larger sets remains complex. Leukemia is a major hematological malignancy that results in mortality and morbidity throughout different ages. We decided to select acute lymphocytic leukemia to develop our diagnostic system since acute lymphocytic leukemia is the most common type of leukemia, accounting for 74% of all children diagnosed with leukemia. The results from this development work can be applied to all other types of leukemia. To develop our model, the Kaggle dataset was used, which consists of 15135 total images, 8491 of these are images of abnormal cells, and 5398 images are normal. In this paper, we design and implement a leukemia diagnostic system in a real clinical environment based on deep learning approaches with larger sets. The proposed diagnostic system has the function of detecting and classifying leukemia. Different from other AI approaches, we explore hybrid architectures to improve the current performance. First, we developed two independent convolutional neural network models: VGG19 and ResNet50. Then, using both VGG19 and ResNet50, we developed a hybrid deep learning architecture employing transfer learning techniques to extract features from each input image. In our approach, fusing the features from specific abstraction layers can be deemed as auxiliary features and lead to further improvement of the classification accuracy. In this approach, features extracted from the lower levels are combined into higher dimension feature maps to help improve the discriminative capability of intermediate features and also overcome the problem of network gradient vanishing or exploding. By comparing VGG19 and ResNet50 and the proposed hybrid model, we concluded that the hybrid model had a significant advantage in accuracy. The detailed results of each model’s performance and their pros and cons will be presented in the conference.
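
A minimal Keras sketch of the hybrid idea is given below: frozen VGG19 and ResNet50 backbones extract features from the same image, and the pooled features are concatenated before a small classification head; the input size, head width, and optimizer settings are illustrative assumptions, not the exact architecture evaluated in the study.

from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG19, ResNet50

# Hybrid transfer-learning sketch: fuse VGG19 and ResNet50 features for one image.
inputs = layers.Input(shape=(224, 224, 3))

vgg = VGG19(weights="imagenet", include_top=False, pooling="avg")
res = ResNet50(weights="imagenet", include_top=False, pooling="avg")
vgg.trainable = False          # transfer learning: freeze both backbones
res.trainable = False

features = layers.Concatenate()([vgg(inputs), res(inputs)])   # fused feature vector
x = layers.Dense(256, activation="relu")(features)            # illustrative head width
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)             # normal vs. abnormal cell

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()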

Keywords: acute lymphoblastic leukemia, hybrid model, leukemia diagnostic system, machine learning

Procedia PDF Downloads 175
371 Rainfall and Flood Forecast Models for Better Flood Relief Plan of the Mae Sot Municipality

Authors: S. Chuenchooklin, S. Taweepong, U. Pangnakorn

Abstract:

This research was conducted in the Mae Sot Watershed, which is located in the Moei River Basin within the Upper Salween River Basin in Tak Province, Thailand. The Mae Sot Municipality is the largest urbanized area in Tak Province and is situated in the midstream of the Mae Sot Watershed. It usually faces flash flood problems after heavy rain; poor flood management has been reported since the economy began to grow rapidly in recent years. Its catchment can be classified as an ungauged basin, with a lack of rainfall data and no stream gauging stations. It was affected by its most severe flood event in 2013, the worst case studied for all communities in this municipality. Moreover, other problems are also faced in this watershed, such as water supply shortages for domestic consumption and agriculture, deterioration of water quality, and landslides. The research aimed to increase capacity building and strengthen the participation of local community leaders and related agencies in better water management in the urban area. It started with data collection and the demonstration of an appropriate short-period rainfall forecasting model, with the aim of better flood relief planning and management through the hydrologic modeling system and river analysis system programs. The authors applied global rainfall data via the Integrated Data Viewer (IDV) program from Unidata, with the aim of forecasting rainfall 7-10 days in advance during the rainy season instead of relying on real-time records. The IDV product, which presents forecast rainfall at a time step of 3-6 hours, was introduced to the communities. The result can be used as input to either the Hydrologic Modeling System (HEC-HMS) or the Soil and Water Assessment Tool (SWAT) for synthesizing flood hydrographs and for flood forecasting. The authors applied the River Analysis System (HEC-RAS) to present flood flow behaviors in the reach of the Mae Sot stream through downtown Mae Sot City, expressed as flood extents and water surface levels at every cross-sectional profile of the stream. Both HEC-HMS and HEC-RAS were tested with the 2013 observed rainfall and inflow-outflow data from the Mae Sot Dam. The HEC-HMS result fit the observed data at the dam and was applied as the upstream boundary discharge to HEC-RAS in order to simulate flood extents; the simulation was verified in the field and the result was found satisfactory. The IDV rainfall forecast data were compared to observed data and found to be fair. Nevertheless, IDV is an appropriate tool to use in this ungauged catchment together with flood hydrograph and river analysis models for efficient future flood relief planning and management.
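
The rainfall-to-runoff step that HEC-HMS performs for each sub-basin can be illustrated with the SCS curve number method, sketched below in Python; the curve number, storm depth, and catchment area are assumed values for an ungauged catchment, not calibrated Mae Sot parameters.

def scs_runoff_mm(rain_mm, cn=80.0):
    """Direct runoff depth (mm) from the SCS curve number relation.

    cn = 80 is an assumed curve number for illustration; HEC-HMS applies this
    kind of loss relation per sub-basin and per time step.
    """
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = 0.2 * s                      # initial abstraction (mm)
    if rain_mm <= ia:
        return 0.0
    return (rain_mm - ia) ** 2 / (rain_mm - ia + s)

# Example: a forecast 90 mm storm over an assumed 100 km^2 catchment
runoff = scs_runoff_mm(90.0)
volume_m3 = runoff / 1000.0 * 100.0e6
print(f"runoff depth: {runoff:.1f} mm, direct runoff volume: {volume_m3 / 1e6:.1f} million m^3")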

Keywords: global rainfall, flood forecast, hydrologic modeling system, river analysis system

Procedia PDF Downloads 338