Search results for: large-scale assembly units
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1962

102 A 500 MWₑ Coal-Fired Power Plant Operated under Partial Oxy-Combustion: Methodology and Economic Evaluation

Authors: Fernando Vega, Esmeralda Portillo, Sara Camino, Benito Navarrete, Elena Montavez

Abstract:

The European Union aims to strongly reduce its CO₂ emissions from the energy and industrial sectors by 2030. The energy sector accounts for more than two-thirds of the CO₂ emissions derived from anthropogenic activities. Although efforts are mainly focused on the use of renewables by the energy production sector, carbon capture and storage (CCS) remains a frontline option for reducing CO₂ emissions from industrial processes, particularly from fossil-fuel power plants and cement production. Among the most feasible and near-to-market CCS technologies, namely post-combustion and oxy-combustion, partial oxy-combustion is a novel concept that can potentially reduce the overall energy requirements of the CO₂ capture process. This technology consists of using a higher oxygen content in the oxidizer, which increases the CO₂ concentration of the flue gas once the fuel is burnt. The CO₂ is then separated from the flue gas downstream by means of a conventional CO₂ chemical absorption process. The production of a more CO₂-concentrated flue gas should enhance CO₂ absorption into the solvent, leading to further reductions in solvent flow-rate, equipment size, and the energy penalty related to solvent regeneration. This work evaluates a portfolio of CCS technologies applied to fossil-fuel power plants. For this purpose, a detailed economic evaluation methodology was developed to determine the main economic parameters of CO₂ emission removal, such as the levelized cost of electricity (LCOE) and the CO₂ captured and avoided costs. ASPEN Plus™ software was used to simulate the main units of the power plant and solve the energy and mass balances. Capital and investment costs were determined from the purchased cost of equipment, together with engineering costs and project and process contingencies. The annual capital cost and the operating and maintenance costs were then obtained.
A complete energy balance was performed to determine the net power produced in each case. The baseline case is a supercritical 500 MWe coal-fired power plant using anthracite as a fuel, without any CO₂ capture system. Four capture cases were proposed: conventional post-combustion capture, oxy-combustion, and partial oxy-combustion using two levels of oxygen-enriched air (40% v/v and 75% v/v). A CO₂ chemical absorption process using monoethanolamine (MEA) served as the CO₂ separation process, whereas the O₂ requirement was met using a conventional air separation unit (ASU) based on Linde's cryogenic process. Results showed a 15% reduction in the total investment cost of the CO₂ separation process when partial oxy-combustion was used. Oxygen-enriched air production also almost halved the investment cost required for the ASU in comparison with the oxy-combustion case. Partial oxy-combustion has a significant impact on the performance of both CO₂ separation and O₂ production technologies, and it can lead to further energy reductions through new developments in both CO₂ and O₂ separation processes.
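The two headline economic parameters named above, the LCOE and the cost of CO₂ avoided, have standard definitions; a minimal sketch follows, in which every numeric input is a hypothetical illustration rather than a result from this study.

```python
# Minimal sketch of the two standard CCS economic parameters; all plant
# figures below are hypothetical illustration values.

def lcoe(annual_capital, annual_om, annual_fuel, net_mwh):
    """Levelized cost of electricity in $/MWh."""
    return (annual_capital + annual_om + annual_fuel) / net_mwh

def co2_avoided_cost(lcoe_capture, lcoe_ref, emit_ref, emit_capture):
    """Cost of CO2 avoided in $/tCO2; emissions given in tCO2/MWh."""
    return (lcoe_capture - lcoe_ref) / (emit_ref - emit_capture)

# Hypothetical reference plant vs. capture-equipped plant
ref = lcoe(150e6, 60e6, 90e6, 3.5e6)   # ~85.7 $/MWh
cap = lcoe(230e6, 90e6, 90e6, 2.9e6)   # ~141.4 $/MWh
print(round(co2_avoided_cost(cap, ref, 0.80, 0.11), 1))  # ~80.7 $/tCO2 avoided
```

The avoided-cost denominator uses emissions per net MWh, so the energy penalty of capture is counted automatically through the lower net output.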

Keywords: carbon capture, cost methodology, economic evaluation, partial oxy-combustion

Procedia PDF Downloads 125
101 Treatment of Wastewater by Constructed Wetland Eco-Technology: Plant Species Alters the Performance and the Enrichment of Bacteria

Authors: Kraiem Khadija, Hamadi Kallali, Naceur Jedidi

Abstract:

Constructed wetlands are an eco-technology recognized as an environmentally friendly and emerging innovative remediation solution, as these systems are cost-effective and sustainable wastewater treatment systems. The performance of these biological systems is affected by various factors such as plant, substrate, wastewater type, hydraulic loading rate, hydraulic retention time, water depth, and operation mode. The objective of this study was to assess the effect of plant species on pollutant reduction and on the enrichment of anammox, nitrifying, and denitrifying bacteria in a modified vertical flow constructed wetland (VFCW). The tests were carried out using three modified vertical constructed wetlands, each with a surface area of 0.23 m² and a depth of 80 cm, saturated at the bottom. The saturation zone is maintained by a siphon structure at the outlet. The VFCW(1) system was unplanted, VFCW(2) was planted with Typha angustifolia, and VFCW(3) with Phragmites australis. The experimental units were fed with domestic wastewater and operated in batch mode for 8 months at an average hydraulic loading rate of around 20 cm day⁻¹. The operation cycle was two days of feeding and five days of rest. Results indicated that the presence of plants improved the removal efficiency; the removal rates of organic matter (85.1–90.9% for COD and 81.8–88.9% for BOD5) and nitrogen (54.2–73% for NTK and 66–77% for NH₄-N) were higher by 10.7–30.1% compared to the unplanted vertical constructed wetland. On the other hand, the plant species had no significant effect on the removal efficiency of COD: removal was similar in VFCW(2) and VFCW(3) (p > 0.05), attaining average removal efficiencies of 88.7% and 85.2%, respectively. In contrast, plant species had a significant effect on NTK removal (p < 0.05), with average removal rates of 72% versus 51% for VFCW(2) and VFCW(3), respectively.
Among the three sets of vertical flow constructed wetlands, VFCW(2) removed the highest percentages of total streptococci, fecal streptococci, total coliforms, fecal coliforms, and E. coli: 59, 62, 52, 63, and 58%, respectively. Both the presence of plants and the plant species altered the community composition and abundance of the bacteria. The abundance of bacteria in the planted wetlands was much higher than in the unplanted one. VFCW(3) had the highest relative abundance of nitrifying bacteria, such as Nitrosospira (18%), Nitrosospira (12%), and Nitrobacter (8%), whereas the vertical constructed wetland planted with Typha had a larger number of denitrifying species, with relative abundances of Aeromonas (13%), Paracoccus (11%), Thauera (7%), and Thiobacillus (6%). However, the abundance of nitrifying bacteria was much lower in this system than in VFCW(3). Interestingly, the presence of Typha angustifolia favored the enrichment of anammox bacteria compared to the unplanted system and the system planted with Phragmites australis. The results showed that the middle layer had the greatest accumulation of anammox bacteria, where anaerobic conditions are better and the root density is moderate. Vegetation has several characteristics that make it an essential component of wetlands, but its exact effects are complex and debated.
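The percentage removal rates reported above follow from influent and effluent concentrations in the usual way; a small sketch, with hypothetical COD concentrations chosen only to illustrate the arithmetic.

```python
def removal_efficiency(influent_mg_l, effluent_mg_l):
    """Percent pollutant removal from influent/effluent concentrations (mg/L)."""
    return 100.0 * (influent_mg_l - effluent_mg_l) / influent_mg_l

# Hypothetical COD concentrations for one planted unit
print(round(removal_efficiency(520.0, 58.8), 1))  # -> 88.7
```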

Keywords: wastewater, constructed wetland, anammox, removal

Procedia PDF Downloads 74
100 Investigating the Online Effect of Language on Gesture in Advanced Bilinguals of Two Structurally Different Languages in Comparison to Native Speakers of the L2

Authors: Armita Ghobadi, Samantha Emerson, Seyda Ozcaliskan

Abstract:

Being bilingual involves mastery of both speech and gesture patterns in a second language (L2). We know from earlier work in first language (L1) production contexts that speech and co-speech gesture form a tightly integrated system: co-speech gesture mirrors the patterns observed in speech, suggesting an online effect of language on the nonverbal representation of events in gesture during the act of speaking (i.e., “thinking for speaking”). Relatively less is known about the online effect of language on gesture in bilinguals speaking structurally different languages. The few existing studies, mostly with small sample sizes, suggest inconclusive findings: some show greater attainment of L2 patterns in gesture with more advanced L2 speech production, while others show preferences for L1 gesture patterns even in advanced bilinguals. In this study, we focus on advanced bilingual speakers of two structurally different languages (Spanish L1 with English L2) in comparison to L1 English speakers. We ask whether bilingual speakers will follow target L2 patterns not only in speech but also in gesture, or alternatively, follow L2 patterns in speech but resort to L1 patterns in gesture. We examined this question by studying speech and gestures produced by 23 advanced adult Spanish (L1)-English (L2) bilinguals (Mage=22; SD=7) and 23 monolingual English speakers (Mage=20; SD=2). Participants were shown 16 animated motion event scenes that included distinct manner and path components (e.g., "run over the bridge"). We recorded and transcribed all participant responses for speech and segmented them into sentence units that included at least one motion verb and its associated arguments. We also coded all gestures that accompanied each sentence unit. We focused on motion event descriptions as they show strong crosslinguistic differences in the packaging of motion elements in speech and co-speech gesture in first language production contexts.
English speakers synthesize manner and path into a single clause or gesture (he runs over the bridge; running fingers forward), while Spanish speakers express each component separately (manner-only: él corre = he is running, circling arms next to the body conveying running; path-only: él cruza el puente = he crosses the bridge, tracing a finger forward conveying trajectory). We tallied all responses by group and packaging type, separately for speech and co-speech gesture. Our preliminary results (n=4/group) showed that productions in English L1 and Spanish L1 differed, with a greater preference for conflated packaging in L1 English and separated packaging in L1 Spanish, a pattern that was also largely evident in co-speech gesture. Bilinguals’ production in L2 English, however, followed the patterns of the target language in speech, with a greater preference for conflated packaging, but not in gesture. Bilinguals used separated and conflated strategies in gesture at roughly similar rates in their L2 English, showing an effect of both L1 and L2 on co-speech gesture. Our results suggest that online production of L2 language has more limited effects on L2 gestures and that mastery of native-like patterns in L2 gesture might take longer than native-like L2 speech patterns.
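The tallying step described above, counting responses by modality and packaging type, can be sketched as follows; the coding labels and the miniature data set are hypothetical, not the study's data.

```python
from collections import Counter

# Each response is coded for modality and packaging type (hypothetical data).
responses = [
    ("speech", "conflated"), ("speech", "conflated"), ("speech", "separated"),
    ("gesture", "conflated"), ("gesture", "separated"), ("gesture", "separated"),
]

def tally(responses, modality):
    """Count packaging strategies within one modality."""
    return Counter(code for mod, code in responses if mod == modality)

print(tally(responses, "speech"))   # speech favors conflated packaging here
print(tally(responses, "gesture"))  # gesture splits between the two strategies
```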

Keywords: bilingualism, cross-linguistic variation, gesture, second language acquisition, thinking for speaking hypothesis

Procedia PDF Downloads 49
99 Review of Carbon Materials: Application in Alternative Energy Sources and Catalysis

Authors: Marita Pigłowska, Beata Kurc, Maciej Galiński

Abstract:

The application of carbon materials in branches of the electrochemical industry shows an increasing tendency each year due to the many interesting properties they possess. These are, among others, a well-developed specific surface area, porosity, high sorption capacity, good adsorption properties, low bulk density, electrical conductivity, and chemical resistance. All these properties allow for their effective use, among others, in supercapacitors, which can reach capacitances of the order of 100 F thanks to carbon electrodes constituting the capacitor plates. Carbon materials (including expanded graphite, carbon black, graphite carbon fibers, and activated carbon) are commonly used in electrochemical methods of removing oil-derived pollutants, e.g., phenols and their derivatives, from water after tanker disasters by electrochemical anodic oxidation. Phenol can occupy practically the entire surface of the carbon material and leave the water free of hydrophobic impurities. Regeneration of such electrodes is also not complicated: it is carried out by electrochemical methods consisting of unblocking the pores and reducing resistances, thus reactivating the electrodes for subsequent adsorption processes. Graphite is commonly used as an anode material in lithium-ion cells, but due to the limited capacity it offers (372 mAh g⁻¹), new solutions are sought that meet capacitive, efficiency, and economic criteria. Increasingly, biodegradable materials, green materials, biomass, and waste (including agricultural waste) are used in order to reuse them, reduce greenhouse effects and, above all, meet the biodegradability criterion necessary for the production of lithium-ion cells as chemical power sources. The most common of these materials are cellulose, starch, and wheat, rice, and corn waste, e.g., from agricultural, paper, and pharmaceutical production. Such products are subjected to appropriate treatments depending on the desired application (including chemical, thermal, and electrochemical).
Starch is a biodegradable polysaccharide that consists of polymeric units, amylose and amylopectin, which build the ordered (linear) and amorphous (branched) structure of the polymer. Carbon is also used as a catalyst. Elemental carbon has become available in many nano-structured forms representing the hybridization combinations found in the primary carbon allotropes, and the materials can be enriched with a large number of surface functional groups. There are many examples of catalytic applications of carbon in the literature, but the development of this field has been hampered by the lack of a conceptual approach combining structure and function and a lack of understanding of material synthesis. In the context of catalytic applications, carbon properties and parameters such as electrical conductivity and bonding configuration should be characterized. Such data, along with surface area and textural information, can form the basis for establishing structure–function relationships.
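The 372 mAh g⁻¹ limit quoted above for graphite follows from Faraday's law applied to the fully lithiated stoichiometry LiC₆ (one Li⁺ stored per six carbon atoms); the short calculation below reproduces it.

```python
# Theoretical gravimetric capacity of graphite from the LiC6 stoichiometry.
F = 96485.0            # Faraday constant, C/mol
M_C6 = 6 * 12.011      # molar mass of the C6 host, g/mol
capacity_mAh_g = F / (3.6 * M_C6)   # 1 mAh = 3.6 C
print(round(capacity_mAh_g))  # -> 372
```

The mass in the denominator is that of the carbon host only, which is the convention behind the commonly quoted figure.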

Keywords: carbon materials, catalysis, BET, capacitors, lithium ion cell

Procedia PDF Downloads 143
98 Histogenesis of the Stomach of Pre-Hatching Quail: A Light and Electron Microscopic Study

Authors: Soha A Soliman, Yasser A Ahmed, Mohamed A Khalaf

Abstract:

Despite the enormous literature describing the histology of the stomach of different avian species during post-hatching development, the available literature on the pre-hatching development of the quail stomach is scanty. Thus, the current study was undertaken to provide a careful description of the main histological events during the embryonic development of the quail stomach. To achieve this aim, daily histological specimens from the stomach of quail, from day 4 post-incubation until day 17 (a few hours before hatching), were examined with light microscopy. The current study showed that the primitive gut tube of the embryonic quail appeared on the 4th day post-incubation, and both parts of the stomach (proventriculus and gizzard) were similar in structure, composed of endodermal epithelium of pseudostratified type surrounded by undifferentiated mesenchymal tissue. The sequence of developmental events in the gut tube proceeded in a cranio-caudal pattern. By the 5th day, the endodermal covering of the primitive proventriculus gave rise to sac-like invaginations. The primitive gizzard was distinguished into thick-walled bodies and thin-walled sacs. On the 6th day, the prospective proventricular glandular epithelium became canalized and the muscular layer developed in the cranial part of the proventriculus, whereas the primitive muscular coat of the gizzard was represented by a layer of condensed mesenchyme. On the 7th day, the proventricular glandular epithelial invaginations increased in depth and number, while the muscularis mucosa and the muscular layer began to be distinguishable. On the 8th day, the myoblasts differentiated into spindle-shaped smooth muscle fibers. On the 10th day, branching of the proventricular glands began and continued thereafter. The surface and glandular epithelium were transformed into simple columnar type on the 12th day.
The epithelial covering of the gizzard gave rise to tubular invaginations lined by simple cuboidal epithelium, and the surface epithelium became simple columnar. Canalization of the tubular glands was recognized on the 14th day. On the 15th day, the proventricular surface epithelium invaginated in a concentric manner around a central cavity to form immature secretory units. The central cavity was lined by eosinophilic cells, which form the ductal epithelium. The peripheral lamellae were lined by basophilic cells, the undifferentiated oxyntico-peptic cells. Entero-endocrine cells stained positive with silver impregnation in the proventricular glands. The mucosal folding in the gizzard appeared on the 15th day to form the plicae and the sulci. By the 17th day, the walls of the proventriculus and gizzard had acquired the main histological features of post-hatching birds, but neither the surface nor the ductal epithelium had differentiated into mucous-producing cells. The current results should be considered in molecular developmental studies.

Keywords: quail, proventriculus, gizzard, pre-hatching, histology

Procedia PDF Downloads 592
97 Implementation of Language Policy in a Swedish Multicultural Early Childhood School: A Development Project

Authors: Carina Hermansson

Abstract:

This presentation focuses on a development project aimed at developing and documenting the steps taken at a multilingual, multicultural K-5 school to improve the achievement levels of the pupils by focusing on language and literacy development across the timetable in a digital classroom, and in all units of the school. This pre-formulated aim may thus be said to adhere to neoliberal educational and accountability policies in terms of its focus on digital learning, learning results, and national curriculum standards. In particular, the project aimed at improving the collaboration between the teachers, the leisure-time unit, the librarians, the mother tongue teachers, and the bilingual study counselors. This is a school environment characterized by cultural, ethnic, linguistic, and professional pluralization. The overarching aims of the research project were to scrutinize and analyze the factors enabling and obstructing the implementation of the language policy in a digital classroom. Theoretical framework: We apply multi-level perspectives in the analyses, inspired by Uljens’ ideas about interactive and interpersonal first-order (teacher/students) and second-order (principal/teachers and other staff) educational leadership as described within the framework of discursive institutionalism, when we try to relate the language policy, educational policy, and curriculum to the administrative processes. Methodology/research design: The development project is based on recurring research circles where teachers, leisure-time assistants, mother tongue teachers, and study counselors speaking the mother tongues of the pupils, together with two researchers, discuss their digital literacy practices in the classroom. The researchers have, in collaboration with the principal, developed guidelines for the work, expressed in a Language Policy document.
In our understanding, however, the document is only a part of the concept; the actions of the personnel and their reflections on the practice constitute the major part of the development project. One and a half years out of three have now passed, and the project has met with a row of difficulties which shed light on factors of importance for its progress. Field notes and recordings from the research circles, a survey of the personnel, and recorded group interviews provide data on the progress of the project. Expected conclusions: The problems experienced deal with leadership, curriculum, the interplay between aims, technology, contents and methods, parents as customers taking their children to other schools, conflicting values, and interactional difficulties; that is, phenomena on different levels, ranging from the school to the societal level, for example teachers being substituted as a result of the marketization of schools. Underlying assumptions from actors at different levels also create obstacles. We find this study and the problems we are facing utterly important to share and discuss in an era with a steady flow of refugees arriving in the Nordic countries.

Keywords: early childhood education, language policy, multicultural school, school development project

Procedia PDF Downloads 120
96 Technology Optimization of Compressed Natural Gas Home Fast Refueling Units

Authors: Szymon Kuczynski, Krystian Liszka, Mariusz Laciak, Andrii Oliinyk, Robert Strods, Adam Szurlej

Abstract:

Despite all global economic shifts and the fact that natural gas is recognized worldwide as the main and leading alternative to oil products in the transportation sector, there is a huge barrier to switching the passenger vehicle segment to natural gas: the lack of refueling infrastructure for natural gas vehicles. While investments in public gas stations require an established NGV market in order to be cost-effective, the market is not there due to the lack of refueling stations. The key to solving that problem and providing a barrier-breaking refueling infrastructure solution for natural gas vehicles (NGVs) is home fast refueling units. These operate using natural gas (methane), provided through gas pipelines at the client's home, and an electricity connection point. They enable environmentally friendly home refueling of NGVs in just minutes. The underlying technology is a patented one-stage hydraulic compressor (instead of the multistage mechanical compressor technology available on the market now), which provides the possibility of compressing low-pressure gas from the residential gas grid to 200 bar for its further use as a fuel for NGVs in the most economically efficient and customer-convenient way. Description of the working algorithm: Two high-pressure cylinders with upper necks connected to a low-pressure gas source are placed vertically. Initially, one of them is filled with liquid and the other with low-pressure gas. During the working process, liquid is transferred by means of a hydraulic pump from one cylinder to the other and back. The working liquid plays the role of a piston inside each cylinder. Movement of the working liquid inside the cylinders provides simultaneous suction of a portion of low-pressure gas into one cylinder (where the liquid moves down) and forcing out of gas at higher pressure from the other cylinder (where the liquid moves up) into the fuel tank of the vehicle or a storage tank.
Each cycle of forcing gas out of the two cylinders raises the gas pressure in the vehicle's fuel tank. The process is repeated until the pressure of gas in the fuel tank reaches 200 bar. Mobility has become a necessity in people's everyday life, which has led to oil dependence. CNG home fast refueling units can become a part of the existing natural gas pipeline infrastructure and form the largest vehicle refueling infrastructure. Home fast refueling unit owners will enjoy day-to-day time savings and convenience (home car refueling in minutes), month-to-month fuel cost economy, year-to-year incentives and tax deductibles on NG refueling systems depending on the country, and reduced local CO₂ emissions, saving costs and money.
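Under an isothermal ideal-gas assumption, the stroke-by-stroke cycle described above can be simulated in a few lines; the tank and cylinder volumes, inlet pressure, and loss-free behavior below are all simplifying assumptions for illustration, not the unit's actual specifications.

```python
def refuel_cycles(tank_volume_l, cylinder_volume_l, inlet_bar, target_bar):
    """Count liquid-transfer strokes needed to fill the tank to the target
    pressure. Each stroke pushes one cylinder volume of inlet-pressure gas
    into the tank; isothermal ideal gas, no losses (simplified model)."""
    tank_bar = inlet_bar  # tank starts at line pressure
    cycles = 0
    while tank_bar < target_bar:
        # At fixed temperature, moles scale with p*V, so one stroke raises
        # the tank pressure by inlet_bar * V_cylinder / V_tank.
        tank_bar += inlet_bar * cylinder_volume_l / tank_volume_l
        cycles += 1
    return cycles

print(refuel_cycles(tank_volume_l=60, cylinder_volume_l=50, inlet_bar=1.0,
                    target_bar=200))  # strokes needed in this toy scenario
```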

Keywords: CNG (compressed natural gas), CNG stations, NGVs (natural gas vehicles), natural gas

Procedia PDF Downloads 183
95 Culturally Relevant Education Challenges and Threats in the US Secondary Classroom

Authors: Owen Cegielski, Kristi Maida, Danny Morales, Sylvia L. Mendez

Abstract:

This study explores the challenges and threats US secondary educators experience in incorporating culturally relevant education (CRE) practices in their classrooms. CRE is a social justice pedagogical practice used to connect students' cultural references to academic skills and content, to promote critical reflection, to facilitate cultural competence, and to critique discourses of power and oppression. Empirical evidence on CRE demonstrates positive student educational outcomes in terms of achievement, engagement, and motivation. Additionally, due to the direct focus on uplifting diverse cultures through the curriculum, students experience greater feelings of belonging, increased interest in the subject matter, and stronger racial/ethnic identities. When these teaching practices are in place, educators develop deeper relationships with their students and appreciate the multitude of gifts they (and their families) bring to the classroom environment. Yet, educators regularly report being unprepared to incorporate CRE in their daily teaching practice and identify substantive gaps in their knowledge and skills in this area. Often, they were not exposed to CRE in their educator preparation program, nor do they receive adequate support through school- or district-wide professional development programming. Through a descriptive phenomenological research design, 20 interviews were conducted with a diverse set of secondary school educators to explore the challenges and threats they experience in incorporating CRE practices in their classrooms. The guiding research question for this study is: What challenges and threats do US secondary educators face when seeking to incorporate CRE practices in their classrooms? Interviews were grounded in the theory of challenge and threat states, which highlights the ways in which challenges and threats are appraised and how resources factor into emotional valence and perception, as well as the potential to meet the task at hand.
Descriptive phenomenological data analysis strategies were utilized to develop an essential structure of the educators' views of challenges and threats with regard to incorporating CRE practices in their secondary classrooms. The attitude of the phenomenological reduction method was adopted, and the data were analyzed through five steps: sense of the whole, meaning units, transformation, structure, and essential structure. The essential structure that emerged was that while secondary educators display genuine interest in learning how to successfully incorporate CRE practices, they perceive doing so as a challenge (and not a threat) due to a lack of exposure, which diminishes educator capacity, comfort, and confidence in employing CRE practices. These findings reveal the value of attending to emotional valence and perception of CRE in promoting this social justice pedagogical practice. The findings also reveal the importance of appropriately resourcing educators with CRE support to ensure they develop and utilize this practice.

Keywords: culturally relevant education, descriptive phenomenology, social justice practice, US secondary education

Procedia PDF Downloads 159
94 Carbon Footprint Assessment and Application in Urban Planning and Geography

Authors: Hyunjoo Park, Taehyun Kim, Taehyun Kim

Abstract:

Human life, activity, and culture depend on the wider environment. Cities offer economic opportunities for goods and services but cannot exist in environments without food, energy, and water supply. Technological innovation in energy supply and transport speeds up the expansion of urban areas and their physical separation from agricultural land. As a result, the separation of urban and agricultural areas creates more energy demand for transporting food and goods between regions. As energy resources are depleted all over the world, the environmental impact crossing the boundaries of cities is also growing. While advances in energy and other technologies can reduce the environmental impact of consumption, there is still a gap between energy supply and demand with current technology, even in technically advanced countries. Therefore, reducing energy demand is more realistic than relying solely on the development of technology for sustainable development. The purpose of this study is to introduce the application of carbon footprint assessment in the fields of urban planning and geography. In urban studies, the carbon footprint has been assessed at different geographical scales, such as nation, city, region, household, and individual. Carbon footprint assessment for a nation or a city is possible using national or city-level statistics on energy consumption categories. By means of carbon footprint calculation, it is possible to compare the ecological capacity and deficit among nations and cities. The carbon footprint also offers great insight into the geographical distribution of carbon intensity at the regional level in the agricultural field. The study presents the background of carbon footprint applications in urban planning and geography through case studies, such as identifying sustainable land-use measures. At the micro level, a footprint quiz or survey can be adapted to measure household and individual carbon footprints.
For example, the first case study collected carbon footprint data from a survey measuring home energy use and travel behavior of 2,064 households in eight cities in Gyeonggi-do, Korea. The second case study analyzed the effects of net and gross population densities on the carbon footprint of residents at an intra-urban scale in Seoul, the capital city of Korea. In this study, the individual carbon footprint of residents was calculated by converting respondents' home and travel fossil fuel use to metric tons of carbon dioxide (tCO₂), multiplying by conversion factors equivalent to the carbon intensities of each energy source, such as electricity, natural gas, and gasoline. The carbon footprint is an important concept not only for mitigating climate change but also for sustainable development. As seen in the case studies, the carbon footprint may be measured and applied in various spatial units, including but not limited to countries and regions. These examples may provide new perspectives on carbon footprint application in planning and geography. In addition, consumption of food, goods, and services can also be included in carbon footprint calculations in the areas of urban planning and geography.
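The conversion step described above, multiplying each energy use by a carbon-intensity factor and summing to tCO₂, can be sketched as follows; the factor values and usage figures are hypothetical placeholders, not those used in the study.

```python
# Hypothetical carbon-intensity factors, tCO2 per unit of use.
FACTORS = {
    "electricity_kwh": 0.0004,
    "natural_gas_m3": 0.0019,
    "gasoline_l": 0.0023,
}

def household_footprint(use):
    """Sum tCO2 across energy sources for one household survey response."""
    return sum(FACTORS[source] * amount for source, amount in use.items())

print(round(household_footprint(
    {"electricity_kwh": 3600, "natural_gas_m3": 700, "gasoline_l": 900}), 2))
```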

Keywords: carbon footprint, case study, geography, urban planning

Procedia PDF Downloads 269
93 Inclusion Body Refolding at High Concentration for Large-Scale Applications

Authors: J. Gabrielczyk, J. Kluitmann, T. Dammeyer, H. J. Jördening

Abstract:

High-level expression of proteins in bacteria often causes production of insoluble protein aggregates, called inclusion bodies (IBs). They contain mainly one type of protein and offer an easy and efficient way to obtain purified protein. On the other hand, proteins in IBs are normally devoid of function and therefore need special treatment to become active. Most refolding techniques aim at diluting the solubilizing chaotropic agents. Unfortunately, optimal refolding conditions have to be found empirically for every protein. For large-scale applications, a simple refolding process with high yields and high final enzyme concentrations is still missing. The constructed plasmid pASK-IBA63b containing the sequence of fructosyltransferase (FTF, EC 2.4.1.162) from Bacillus subtilis NCIMB 11871 was transformed into E. coli BL21 (DE3) Rosetta. The bacterium was cultivated in a fed-batch bioreactor. The FTF produced was obtained mainly as IBs. For refolding experiments, five different amounts of IBs were solubilized in urea buffer at protein concentrations of 0.2-8.5 g/L. Solubilizates were refolded by batch or continuous dialysis. The refolding yield was determined by measuring the protein concentration of the clear supernatant before and after dialysis. Particle size was measured by dynamic light scattering. We tested the solubilization properties of the fructosyltransferase IBs. The particle size measurements revealed that solubilization of the aggregates is achieved at urea concentrations of 5 M or higher, as confirmed by absorption spectroscopy. All results confirm previous investigations showing that refolding yields depend on the initial protein concentration. Yields dropped from 67% to 12% in batch dialysis and from 72% to 19% in continuous dialysis as initial concentrations increased from 0.2 to 8.5 g/L. Frequently used additives such as sucrose and glycerol had no effect on refolding yields.
Buffer screening indicated a significant increase in both the activity and the temperature stability of FTF with citrate/phosphate buffer. By adding citrate to the dialysis buffer, we were able to increase the refolding yields to 82-47% in batch and 90-74% in the continuous process. Further experiments showed that, in general, higher ionic strength of the buffer had a major impact on refolding yields; doubling the buffer concentration increased the yields up to threefold. Finally, we achieved correspondingly high refolding yields while reducing the chamber volume, and thus the amount of buffer needed, by 75%. The refolded enzyme had an optimal activity of 12.5 ± 0.3 × 10⁴ units/g. However, detailed experiments with native FTF revealed a reaggregation of the molecules and a loss in specific activity depending on the enzyme concentration and particle size. For that reason, we are currently focusing on developing a process of simultaneous enzyme refolding and immobilization. The results of this study show a new approach to finding optimal refolding conditions for inclusion bodies at high concentrations. Straightforward buffer screening and an increase of the ionic strength can improve the refolding yield of the target protein by 400%. Gentle removal of the chaotrope with continuous dialysis increases the yields by an additional 65%, independent of the refolding buffer applied. In general, time is the crucial parameter for successful refolding of solubilized proteins.
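The yield bookkeeping and the gentle chaotrope removal described above can be sketched numerically. This is an illustrative sketch only: the function names are mine, the 6 M starting urea concentration is hypothetical, and the exponential wash-out expression assumes an idealized well-mixed dialysis chamber rather than the authors' actual apparatus.

```python
import math

def refolding_yield(c_before, c_after):
    """Refolding yield (%) from clear-supernatant protein concentrations (g/L)
    measured before and after dialysis, as described in the abstract."""
    return 100.0 * c_after / c_before

def urea_after_continuous_dialysis(c0_molar, chamber_volumes_exchanged):
    """Residual chaotrope after continuous dialysis, assuming an ideal
    well-mixed chamber: C = C0 * exp(-V_exchanged / V_chamber)."""
    return c0_molar * math.exp(-chamber_volumes_exchanged)

# e.g. starting from a hypothetical 6 M urea solubilizate, exchanging five
# chamber volumes of buffer leaves roughly 40 mM urea
print(round(urea_after_continuous_dialysis(6.0, 5), 3))  # → 0.04
```

A batch dilution, by contrast, drops the chaotrope concentration in one step, which is why the continuous (gradual) removal is described as gentler.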

Keywords: dialysis, inclusion body, refolding, solubilization

Procedia PDF Downloads 275
92 Implementation of a Multidisciplinary Weekly Safety Briefing in a Tertiary Paediatric Cardiothoracic Transplant Unit

Authors: Lauren Dhugga, Meena Parameswaran, David Blundell, Abbas Khushnood

Abstract:

Context: A multidisciplinary weekly safety briefing was implemented at the Paediatric Cardiothoracic Unit at the Freeman Hospital in Newcastle-upon-Tyne. It is a tertiary referral centre with a quaternary cardiac paediatric intensive care unit and provides complex care including heart and lung transplants, mechanical support, and advanced heart failure assessment. Aim: The aim of this briefing is to provide a structured platform of communication, in an effort to improve efficiency, safety, and patient care. Problem: The paediatric cardiothoracic unit is made up of a vast multidisciplinary team including doctors, intensivists, anaesthetists, surgeons, specialist nurses, echocardiogram technicians, physiotherapists, psychologists, dentists, and dietitians. It provides care for children with congenital and acquired cardiac disease and is one of only two units in the UK to offer paediatric heart transplant. The complexity of cases means that there can be many teams involved in providing care to each patient, and frequent movement of children between ward, high dependency, and intensive care areas. Currently, there is no structured forum for communicating important information across the department, for example, staffing shortages, prescribing errors and significant events. Strategy: An initial survey questioning the need for better communication found 90% of respondents agreed that they could think of an incident that had occurred due to ineffective communication, and 85% felt that incident could have been avoided had there been a better form of communication. Lastly, 80% of respondents felt that a weekly 60-second safety briefing would be beneficial to improve communication within our multidisciplinary team. Based on those promising results, a weekly 60-second safety briefing was implemented, to be conducted on a Monday morning. The safety briefing covered four key areas (SAFE): staffing, awareness, fix and events. 
This was to highlight any staffing gaps, any incident reports to be learned from, any issues that required fixing, and any events, including teachings, for the week ahead. The teams were encouraged to email suggestions or issues to be raised for the week or to approach in person with information to add. The safety briefing was implemented using change theory. Effect: The safety briefing has been trialled over 6 weeks and has received good buy-in from staff across specialties. The aim is to embed this safety briefing into a weekly meeting using the PDSA cycle. There will be a second survey in one month to assess the efficacy of the safety briefing and to continue to improve the delivery of information. The project will be presented at the next clinical governance briefing to attract wider feedback and input from across the trust. Lessons: The briefing displays promise as a tool to improve vigilance and communication in a busy multidisciplinary unit. We have learned how to implement quality improvement and about the culture of our hospital - how hierarchy influences change. We demonstrate how to implement change through a grassroots process, using a junior-led briefing to improve the efficiency, safety, and communication in the workplace.

Keywords: briefing, communication, safety, team

Procedia PDF Downloads 115
91 Assessing the Environmental Efficiency of China’s Power System: A Spatial Network Data Envelopment Analysis Approach

Authors: Jianli Jiang, Bai-Chen Xie

Abstract:

The climate issue has aroused global concern. Achieving sustainable development is a good path for countries to mitigate environmental and climatic pressures, although there are many difficulties. The first step towards sustainable development is to evaluate the environmental efficiency of the energy industry with proper methods. The power sector is a major source of CO₂, SO₂, and NOₓ emissions. Evaluating the environmental efficiency (EE) of power systems is a prerequisite for alleviating energy and environmental pressures. Data Envelopment Analysis (DEA) has been widely used in efficiency studies. However, measuring the efficiency of a system (be it a nation, region, sector, or business) is a challenging task. The classic DEA takes the decision-making units (DMUs) as independent, which neglects the interactions between DMUs. Ignoring these inter-regional links may introduce a systematic bias into the efficiency analysis; for instance, the renewable power generated in a certain region may benefit adjacent regions, while SO₂ and CO₂ emissions act in the opposite way. This study proposes a spatial network DEA (SNDEA) with a slack measure that can capture the spatial spillover effects of inputs/outputs among DMUs to measure efficiency. This approach is used to study the EE of China's power system, which consists of generation, transmission, and distribution departments, using a panel dataset from 2014 to 2020. In the empirical example, the energy and patent inputs, the undesirable CO₂ output, and the renewable energy (RE) power variables are tested for significant spatial spillover effects. Compared with the classic network DEA, the SNDEA result shows an obvious difference, as tested by the global Moran's I index. From a dynamic perspective, the EE of the power system experiences a visible surge from 2015, then a sharp downtrend from 2019, mirroring the trend of the power transmission department. 
This phenomenon benefits from the market-oriented reform of the Chinese power grid enacted in 2015. The rapid decline in the environmental efficiency of the transmission department in 2020 was mainly due to the Covid-19 epidemic, which seriously hindered economic development. The EE of the power generation department shows a declining trend overall; this is reasonable once RE power is taken into consideration. The installed capacity of RE power in 2020 is 4.40 times that in 2014, while the power generation is 3.97 times; in other words, the power generation per unit of installed capacity shrank. In addition, the consumption cost of renewable power increases rapidly with the growth of RE power generation. These two aspects give the EE of the power generation department its declining trend. By incorporating the interactions among inputs/outputs into the DEA model, this paper proposes an efficiency evaluation method within the DEA framework that sheds some light on efficiency evaluation in regional studies. Furthermore, the SNDEA model and the spatial DEA concept can be extended to other fields, such as industry- and country-level analyses.
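For readers unfamiliar with the optimization underneath DEA, a classic (non-spatial, non-network) input-oriented CCR efficiency score can be computed as a small linear program. This sketch uses toy data and the textbook CCR envelopment form, not the authors' SNDEA model or the Chinese power-sector dataset.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR DEA efficiency of DMU k.
    X: (m, n) input matrix, Y: (s, n) output matrix for n DMUs.
    Solves: min theta  s.t.  X @ lam <= theta * X[:, k],  Y @ lam >= Y[:, k],  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                         # decision vector [theta, lam_1..lam_n]; minimize theta
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, k]             # sum(lam * x) - theta * x_k <= 0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y                  # -sum(lam * y) <= -y_k
    b_ub[m:] = -Y[:, k]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# toy example: two inputs, one output, three DMUs
X = np.array([[2.0, 4.0, 8.0],
              [3.0, 2.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.0]])
print([round(ccr_efficiency(X, Y, k), 3) for k in range(3)])  # → [1.0, 1.0, 0.5]
```

The spatial network extension in the paper adds inter-DMU spillover links and a slack-based measure on top of this basic envelopment structure.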

Keywords: spatial network DEA, environmental efficiency, sustainable development, power system

Procedia PDF Downloads 77
90 Carbon Nanotubes (CNTs) as Multiplex Surface Enhanced Raman Scattering Sensing Platforms

Authors: Pola Goldberg Oppenheimer, Stephan Hofmann, Sumeet Mahajan

Abstract:

Owing to its fingerprint molecular specificity and high sensitivity, surface-enhanced Raman scattering (SERS) is an established analytical tool for chemical and biological sensing capable of single-molecule detection. A strong Raman signal can be generated from SERS-active platforms provided the analyte is within the enhanced plasmon field generated near a noble-metal nanostructured substrate. The key requirement for generating strong plasmon resonances to provide this electromagnetic enhancement is an appropriate metal surface roughness. Controlling nanoscale features for generating these regions of high electromagnetic enhancement, the so-called SERS ‘hot-spots’, is still a challenge. Significant advances have been made in SERS research, with wide-ranging techniques to generate substrates with tunable size and shape of the nanoscale roughness features. Nevertheless, the development and application of SERS has been inhibited by the irreproducibility and complexity of fabrication routes. The ability to generate straightforward, cost-effective, multiplexable and addressable SERS substrates with high enhancements is of profound interest for miniaturised sensing devices. Carbon nanotubes (CNTs) have concurrently been a topic of extensive research; however, their application to plasmonics has only recently begun to gain interest. CNTs can provide low-cost, large-active-area patternable substrates which, coupled with appropriate functionalization, can provide advanced SERS platforms. Herein, advanced methods to generate CNT-based SERS-active detection platforms will be discussed. First, a novel electrohydrodynamic (EHD) lithographic technique will be introduced for patterning CNT-polymer composites, providing a straightforward, single-step approach for generating high-fidelity sub-micron-sized nanocomposite structures within which anisotropic CNTs are vertically aligned. 
The created structures are readily fine-tuned, which is an important requirement for optimizing SERS to obtain the highest enhancements, with each of the EHD-CNT individual structural units functioning as an isolated sensor. Further, gold-functionalized vertically aligned CNT forests (VACNTFs) are fabricated as SERS micro-platforms. The VACNTs' diameter and density play an important role in the Raman signal strength, thus highlighting the importance of structural parameters previously overlooked in designing and fabricating optimized CNT-based SERS nanoprobes. VACNT forests patterned into predesigned pillar structures are further utilized for multiplex detection of bio-analytes. Since CNTs exhibit electrical conductivity and unique adsorption properties, these are further harnessed in the development of novel chemical and bio-sensing platforms.

Keywords: carbon nanotubes (CNTs), EHD patterning, SERS, vertically aligned carbon nanotube forests (VACNTF)

Procedia PDF Downloads 303
89 User-Centered Design in the Development of Patient Decision Aids

Authors: Ariane Plaisance, Holly O. Witteman, Patrick Michel Archambault

Abstract:

Upon admission to an intensive care unit (ICU), all patients should discuss their wishes concerning life-sustaining interventions (e.g., cardiopulmonary resuscitation (CPR)). Without such discussions, interventions that prolong life at the cost of decreasing its quality may be used without appropriate guidance from patients. We employed user-centered design to adapt an existing decision aid (DA) about CPR to create a novel wiki-based DA adapted to the context of a single ICU and tailored to individual patients’ risk factors. During Phase 1, we conducted three weeks of ethnography of the decision-making context in our ICU to identify clinician and patient needs for a decision aid. During this time, we observed five dyads of intensivists and patients discussing their wishes concerning life-sustaining interventions. We also conducted semi-structured interviews with the attending intensivists in this ICU. During Phase 2, we conducted three rounds of rapid prototyping involving 15 patients and 11 other allied health professionals. We recorded discussions between intensivists and patients and used a standardized observation grid to collect patients’ comments and sociodemographic data. We applied content analysis to field notes, verbatim transcripts and the completed observation grids. Each round of observations and rapid prototyping iteratively informed the design of the next prototype. We also used the programming architecture of a wiki platform to embed the GO-FAR prediction rule programming code, which we linked to risk graphics software to better illustrate the calculated outcome risks. During Phase 1, we identified the need to add a section in our DA concerning invasive mechanical ventilation in addition to CPR because both life-sustaining interventions were often discussed together by physicians. 
During Phase 2, we produced a context-adapted decision aid about CPR and mechanical ventilation that includes a values clarification section, questions about the patient’s functional autonomy prior to admission to the ICU and the functional decline that they would judge acceptable upon hospital discharge, risks and benefits of CPR and invasive mechanical ventilation, population-level statistics about CPR, a synthesis section to help patients come to a final decision, and an online calculator based on the GO-FAR prediction rule. Even though the three rounds of rapid prototyping led to simplifying the information in our DA, 60% (n = 3/5) of the patients involved in the last cycle still did not understand the purpose of the DA. We also identified gaps in the discussion and documentation of patients’ preferences concerning life-sustaining interventions (e.g., CPR, invasive mechanical ventilation). The final version of our DA and our online wiki-based GO-FAR risk calculator using the IconArray.com risk graphics software are available online at www.wikidecision.org and are ready to be adapted to other contexts. Our results inform producers of decision aids on the use of wikis and user-centered design to develop DAs that are better adapted to users’ needs. Further work is needed on the creation of a video version of our DA. Physicians will also need training to use our DA and to develop shared decision-making skills about goals of care.
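As a rough illustration of how a wiki-embedded calculator of this kind can map an additive prediction score to risk bands: everything in this sketch is hypothetical. The variable names, point values, and cut-offs below are placeholders invented for illustration and are NOT the published GO-FAR coefficients.

```python
# Hypothetical, simplified additive pre-arrest score. The flags, points, and
# band cut-offs are placeholders, not the published GO-FAR rule.
HYPOTHETICAL_POINTS = {
    "age_over_70": 2,
    "admitted_from_skilled_nursing": 3,
    "major_trauma": 3,
    "neurologically_intact_at_admission": -4,
}

RISK_BANDS = [  # (upper score cut-off, label) -- placeholder cut-offs
    (-2, "above-average chance of good outcome"),
    (5, "average chance"),
    (float("inf"), "low chance"),
]

def band(patient_flags):
    """Sum the points for the flags that apply, then return (score, band label)."""
    score = sum(pts for flag, pts in HYPOTHETICAL_POINTS.items()
                if patient_flags.get(flag))
    for cutoff, label in RISK_BANDS:
        if score <= cutoff:
            return score, label

print(band({"age_over_70": True, "neurologically_intact_at_admission": True}))
```

In the actual DA, the abstract notes, the computed risk was fed to icon-array graphics (IconArray.com) rather than reported as a bare label.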

Keywords: ethnography, intensive care units, life-sustaining therapies, user-centered design

Procedia PDF Downloads 325
88 The Effect of Online Analyzer Malfunction on the Performance of Sulfur Recovery Unit and Providing a Temporary Solution to Reduce the Emission Rate

Authors: Hamid Reza Mahdipoor, Mehdi Bahrami, Mohammad Bodaghi, Seyed Ali Akbar Mansoori

Abstract:

Nowadays, with stricter limitations to reduce emissions, considerable penalties are imposed if pollution limits are exceeded. Therefore, refineries, while focusing on improving the quality of their products, also aim to produce them with the least environmental impact. The duty of the sulfur recovery unit (SRU) is to convert H₂S gas coming from the upstream units to elemental sulfur and minimize the burning of sulfur compounds to SO₂. The Claus process is a common process for converting H₂S to sulfur, including a reaction furnace followed by catalytic reactors and sulfur condensers. In addition to a Claus section, SRUs usually consist of a tail gas treatment (TGT) section to decrease the concentration of SO₂ in the flue gas below the emission limits. To operate an SRU properly, the flow rate of combustion air to the reaction furnace must be adjusted so that the Claus reaction is performed according to stoichiometry. Accurate control of the air demand leads to an optimum recovery of sulfur during flow and composition fluctuations in the acid gas feed. Therefore, the major control system in the SRU is the air demand control loop, which includes a feed-forward control system based on predetermined feed flow rates and a feedback control system based on the signal from the tail gas online analyzer. The use of online analyzers requires compliance with the installation and operation instructions. Unfortunately, most of these analyzers in Iran are out of service for various reasons, such as the low priority given to environmental issues and a lack of access to after-sales services due to sanctions. In this paper, an SRU in Iran was simulated and calibrated using industrial experimental data. Afterward, the effect of the malfunction of the online analyzer on the performance of the SRU was investigated using the calibrated simulation. 
The results showed that an increase in the SO₂ concentration in the tail gas led to an increase in the temperature of the reduction reactor in the TGT section. This increase in temperature caused the failure of the TGT section and increased the concentration of SO₂ from 750 ppm to 35,000 ppm. In addition, the lack of a control system for the adjustment of the combustion air caused further increases in SO₂ emissions. In some processes, the major variable cannot be controlled directly due to difficulty in measurement or a long delay in the sampling system. In these cases, a secondary variable, which can be measured more easily, is controlled instead. With the correct selection of this variable, the main variable is also controlled along with the secondary variable. This strategy for controlling a process system is referred to as "inferential control" and is the approach considered in this paper. Therefore, a sensitivity analysis was performed to investigate the sensitivity of other measurable parameters to input disturbances. The results revealed that the outlet temperature of the first Claus reactor could be used for inferential control of the combustion air. Applying this method to the operation led to maximizing the sulfur recovery in the Claus section.
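A minimal sketch of the inferential-control idea: a PI loop holds the measurable secondary variable (the first Claus reactor outlet temperature) at a setpoint by trimming the combustion-air flow. The gains, setpoint, sign convention, and temperature trajectory below are illustrative placeholders, not values from the simulated plant.

```python
def pi_trim(setpoint_c, temps_c, kp=0.05, ki=0.01, dt=1.0):
    """PI controller for inferential air-demand control (illustrative).
    setpoint_c: target first-reactor outlet temperature (deg C).
    temps_c: sampled outlet temperatures (deg C), one per dt (minutes).
    Returns the air-flow trim (fraction of design flow) at each sample."""
    integral = 0.0
    trims = []
    for t in temps_c:
        error = setpoint_c - t          # positive: reactor running cool
        integral += error * dt          # accumulate for the integral term
        trims.append(kp * error + ki * integral)
    return trims

# hypothetical trajectory approaching a 305 deg C setpoint
trims = pi_trim(305.0, [300.0, 302.0, 304.0])
```

In practice, this trim would bias the feed-forward air demand computed from the acid-gas flow, standing in for the out-of-service analyzer's feedback signal.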

Keywords: sulfur recovery, online analyzer, inferential control, SO₂ emission

Procedia PDF Downloads 47
87 Design and Application of a Model Eliciting Activity with Civil Engineering Students on Binomial Distribution to Solve a Decision Problem Based on Samples Data Involving Aspects of Randomness and Proportionality

Authors: Martha E. Aguiar-Barrera, Humberto Gutierrez-Pulido, Veronica Vargas-Alejo

Abstract:

Identifying and modeling random phenomena is a fundamental cognitive process to understand and transform reality. Recognizing situations governed by chance and giving them a scientific interpretation, without being carried away by beliefs or intuitions, is basic training for citizens. Hence the importance of generating teaching-learning processes, supported by technology, that pay attention to model creation rather than only executing mathematical calculations. In order to develop students' knowledge of basic probability distributions and decision making, a model eliciting activity (MEA) is reported in this work. The intention was to apply the Models and Modeling Perspective to design an activity related to civil engineering that would be understandable for students while involving them in its solution. Furthermore, the activity should pose a decision-making challenge based on sample data and consider the use of the computer. The activity was designed considering the six design principles for MEAs proposed by Lesh and collaborators: model construction, reality, self-evaluation, model documentation, shareable and reusable, and prototype. The application and refinement of the activity were carried out during three school cycles in the Probability and Statistics class for Civil Engineering students at the University of Guadalajara. The analysis of the way in which the students sought to solve the activity was carried out using audio and video recordings, as well as the individual and team reports of the students. The information obtained was categorized according to the activity phase (individual or team) and the category of analysis (sample, linearity, probability, distributions, mechanization, and decision-making). 
With the results obtained through the MEA, four obstacles to understanding and applying the binomial distribution were identified: first, students' resistance to moving from the linear to the probabilistic model; second, the difficulty of visualizing (inferring) the behavior of the population through the sample data; third, viewing the sample as an isolated event and not as part of a random process that must be viewed in the context of a probability distribution; and fourth, the difficulty of decision-making with the support of probabilistic calculations. These obstacles have also been identified in the literature on the teaching of probability and statistics. Recognizing these concepts as obstacles to understanding probability distributions, and that they do not disappear after a single intervention, allows the interventions and the MEA to be modified so that students may themselves identify erroneous solutions while carrying out the MEA. The MEA also proved to be democratic, since several students who had participated little and obtained low grades in the first units improved their participation. Regarding the use of the computer, the RStudio software was useful for several tasks, for example, plotting the probability distributions and exploring different sample sizes. In conclusion, with the models created to solve the MEA, the Civil Engineering students improved their probabilistic knowledge and understanding of fundamental concepts such as sample, population, and probability distribution.
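The kind of sample-based decision such an MEA poses can be worked directly from the binomial CDF. The students used RStudio; the sketch below shows the same computation in Python with only the standard library, and the acceptance-sampling numbers (n = 20, threshold of 2 defectives, 10% defect rate) are hypothetical, not taken from the classroom activity.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), summed term by term from the pmf."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Hypothetical decision problem in the spirit of the MEA: a batch of concrete
# blocks is accepted only if a random sample of n = 20 contains at most 2
# defectives. If the true defect rate is 10%, how often is the batch accepted?
p_accept = binom_cdf(2, 20, 0.10)
print(round(p_accept, 4))  # → 0.6769
```

Varying n here makes the sample-versus-population obstacle concrete: larger samples make the acceptance decision track the true defect rate more reliably.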

Keywords: linear model, models and modeling, probability, randomness, sample

Procedia PDF Downloads 90
86 Hybrid Data-Driven Drilling Rate of Penetration Optimization Scheme Guided by Geological Formation and Historical Data

Authors: Ammar Alali, Mahmoud Abughaban, William Contreras Otalvora

Abstract:

Optimizing the drilling process for cost and efficiency requires the optimization of the rate of penetration (ROP). ROP is the measurement of the speed at which the wellbore is created, in units of feet per hour. It is the primary indicator of drilling efficiency. Maximization of the ROP can indicate fast and cost-efficient drilling operations; however, high ROPs may induce unintended events, which may lead to nonproductive time (NPT) and higher net costs. The proposed ROP optimization solution is a hybrid, data-driven system that aims to improve the drilling process, maximize the ROP, and minimize NPT. The system consists of two phases: (1) utilizing existing geological and drilling data to train the model in advance, and (2) real-time adjustment of the controllable dynamic drilling parameters [weight on bit (WOB), rotary speed (RPM), and pump flow rate (GPM)] that directly influence the ROP. During the first phase of the system, geological and historical drilling data are aggregated. Then, the top-rated wells, ranked by achieved ROP, are distinguished. Those wells are filtered based on NPT incidents, and a cross-plot is generated for the controllable dynamic drilling parameters per ROP value. Subsequently, the parameter values (WOB, GPM, RPM) are calculated as a conditioned mean based on physical distance, following the Inverse Distance Weighting (IDW) interpolation methodology. The first phase is concluded by producing a model of drilling best practices from the offset wells, prioritizing the optimum ROP value. This phase is performed before drilling commences. Starting with the model produced in phase one, the second phase runs an automated drill-off test, delivering live adjustments in real time. Those adjustments are made by directing the driller to deviate two of the controllable parameters (WOB and RPM) by a small percentage (0-5%), following the Constrained Random Search (CRS) methodology. 
These minor incremental variations will reveal new drilling conditions, not explored before through offset wells. The data is then consolidated into a heat-map as a function of ROP. A more optimal ROP performance is identified through the heat-map and amended in the model. The validation process involved the selection of a planned well in an onshore oil field with hundreds of offset wells. The first-phase model was built by utilizing the data points from the top-performing historical wells (20 wells). The model allows drillers to enhance decision-making by leveraging existing data and blending it with live data in real time. An empirical relationship between the controllable dynamic parameters and ROP was derived using Artificial Neural Networks (ANN). The adjustments resulted in improved ROP efficiency by over 20%, translating to at least 10% savings in drilling costs. The novelty of the proposed system lies in its ability to integrate historical data, calibrate based on geological formations, and run real-time global optimization through CRS. Those factors position the system to work for any newly drilled well in a developing field.
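The IDW step in phase one, where parameter values are taken as a distance-weighted mean over offset wells, can be sketched as follows. The well coordinates and WOB values are hypothetical, and the inverse-square weighting is the common IDW default rather than a choice stated by the authors.

```python
import math

def idw(target, neighbors, power=2):
    """Inverse Distance Weighting: estimate a drilling parameter (e.g. WOB,
    RPM, or GPM) at `target` = (x, y) as a distance-weighted mean over offset
    wells, each given as ((x, y), value)."""
    num = den = 0.0
    for (x, y), value in neighbors:
        d = math.hypot(x - target[0], y - target[1])
        if d == 0:
            return value               # coincident well: use its value directly
        w = 1.0 / d**power             # closer wells get larger weights
        num += w * value
        den += w
    return num / den

# hypothetical offset-well WOB values (klbf) around a planned well at (0, 0)
wob = idw((0.0, 0.0), [((1.0, 0.0), 30.0), ((0.0, 2.0), 40.0)])
```

With inverse-square weights the well at distance 1 counts four times as much as the well at distance 2, so the estimate lands much closer to 30 than to 40.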

Keywords: drilling optimization, geological formations, machine learning, rate of penetration

Procedia PDF Downloads 101
85 The Impact of Developing an Educational Unit in the Light of Twenty-First Century Skills in Developing Language Skills for Non-Arabic Speakers: A Proposed Program for Application to Students of Educational Series in Regular Schools

Authors: Erfan Abdeldaim Mohamed Ahmed Abdalla

Abstract:

The era of the knowledge explosion in which we live requires us to develop educational curricula quantitatively and qualitatively to adapt to the twenty-first-century skills of critical thinking, problem-solving, communication, cooperation, creativity, and innovation. The process of developing a curriculum is as significant as building it; in fact, the development of curricula may be more difficult than building them. Curriculum development includes analyzing needs, setting goals, designing the content and educational materials, creating language programmes, developing teachers, applying the programmes in schools, monitoring and feedback, and then evaluating the language programme resulting from these processes. When we look back at the history of language teaching during the twentieth century, we find that the development of the delivery method is the most crucial aspect of change in language teaching doctrines. The concept of a delivery method in teaching is a systematic set of teaching practices based on a specific theory of language acquisition. This is a key consideration, as the process of development must include all the curriculum elements in its comprehensive sense, both linguistic and non-linguistic. The various Arabic curricula provide the student with a set of units, each unit consisting of a set of linguistic elements. These elements are often not logically arranged, and more importantly, they neglect essential points and highlight other less important ones. Moreover, the educational curricula entail a great deal of monotony in the presentation of content, which makes it hard for the teacher to select adequate content, so the teacher often navigates among diverse references to prepare a lesson and hardly finds a suitable one. Similarly, the student often gets bored when learning the Arabic language and fails to make considerable progress in it. 
Therefore, the problem is not a lack of curricula; the problem is developing the curriculum, with all its linguistic and non-linguistic elements, in accordance with contemporary challenges and standards for teaching foreign languages. The Arabic library suffers from a lack of references on curriculum development. In this paper, the researcher investigates the elements of development, such as the teacher, content, methods, objectives, evaluation, and activities. Hence, a set of general guidelines in the field of educational development was reached. The paper highlights the need to identify weaknesses in educational curricula, determine the twenty-first-century skills that must be employed in Arabic education curricula, and employ foreign language teaching standards in current Arabic curricula. The researcher assumes that the series for teaching Arabic to speakers of other languages in regular schools do not address the skills of the twenty-first century, which is what the researcher tries to address in the proposed unit. This study uses the experimental method, based on two groups: experimental and control. The development of an educational unit will help build suitable educational series for students of the Arabic language in regular schools, in which twenty-first-century skills and standards for teaching foreign languages are addressed, making the series more useful and attractive to students.

Keywords: curriculum, development, Arabic language, non-native, skills

Procedia PDF Downloads 48
84 Defense Priming from Egg to Larvae in Litopenaeus vannamei with Non-Pathogenic and Pathogenic Bacteria Strains

Authors: Angelica Alvarez-Lee, Sergio Martinez-Diaz, Jose Luis Garcia-Corona, Humberto Lanz-Mendoza

Abstract:

World aquaculture is always looking for improvements to achieve production with high yields while avoiding infection by pathogenic agents. The best way to achieve this is to know the biological model in order to create alternative treatments that could be applied in the hatcheries, which results in greater economic gains and improvements in human public health. In the last decade, immunomodulation in shrimp culture with probiotics, organic acids, and different carbon sources has gained great interest, mainly in larval and juvenile stages. Immune priming is associated with a strong protective effect against a later pathogen challenge. This work provides another perspective on immunostimulation, from spawning until hatching. The stimulation happens during embryonic development and generates resistance to infection by pathogenic bacteria. Massive spawnings of the white shrimp L. vannamei were obtained and placed in experimental units with 700 mL of sterile seawater at 30 °C, a salinity of 28 ppm, and continuous aeration, at a density of 8 embryos·mL⁻¹. The immunostimulating effect of three dead strains of non-pathogenic bacteria (Escherichia coli, Staphylococcus aureus, and Bacillus subtilis) and a dead strain pathogenic to white shrimp (Vibrio parahaemolyticus) was evaluated. The heat-killed strains were adjusted to an OD₆₀₀ of 0.5 and added directly to the seawater of each unit at a ratio of 1/100 (v/v). A control group of embryos without an inoculum of dead bacteria was kept under the same physicochemical conditions as the rest of the treatments throughout the experiment and used as reference. The duration of the stimulus was 12 hours; then, the larvae that hatched were collected, counted, and transferred to a new experimental unit (same physicochemical conditions, including a salinity of 28 ppm) to carry out an infection challenge against the pathogen V. parahaemolyticus, adding directly to the seawater 1/100 (v/v) of the live strain adjusted to an OD₆₀₀ of 0.5. 
Subsequently, 24 h after infection, nauplii survival was evaluated. The results of this work show that, after 24 h, the hatching rates of shrimp embryos immunostimulated with the dead strains of B. subtilis and V. parahaemolyticus are significantly higher compared to the rest of the treatments and the control. Furthermore, the survival of L. vannamei after a 24-h infection challenge against the live strain of V. parahaemolyticus is greater (P < 0.05) in the larvae immunostimulated during embryonic development with the dead strains of B. subtilis and V. parahaemolyticus, followed by those treated with E. coli. In summary, surface antigens can stimulate the developing cells to promote hatching while allowing normal development, in agreement with the optical observations; in addition, a differential response effect exists between treatments post-infection. This research provides evidence of the immunostimulant effect of dead pathogenic and non-pathogenic bacterial strains on the hatching rate and survival of the shrimp L. vannamei during embryonic and larval development. This research continues by evaluating the effect of these dead strains on the expression of genes related to defense priming in larvae of L. vannamei that come from massive spawnings in hatcheries, before and after the infection challenge against V. parahaemolyticus.

Keywords: immunostimulation, L. vannamei, hatching, survival

Procedia PDF Downloads 118
83 A Hybrid of BioWin and Computational Fluid Dynamics Based Modeling of Biological Wastewater Treatment Plants for Model-Based Control

Authors: Komal Rathore, Kiesha Pierre, Kyle Cogswell, Aaron Driscoll, Andres Tejada Martinez, Gita Iranipour, Luke Mulford, Aydin Sunol

Abstract:

Modeling of biological wastewater treatment plants requires several parameters for kinetic rate expressions, thermo-physical properties, and hydrodynamic behavior. The kinetics and associated mechanisms become complex because several biological processes take place in wastewater treatment plants at varying time and spatial scales. A dynamic process model incorporating a complex activated sludge kinetics model was developed using the BioWin software platform for an advanced wastewater treatment plant in Valrico, Florida. Due to the extensive number of tunable parameters, an experimental design was employed for judicious selection of the most influential parameter sets and their bounds. The model was tuned using both influent and effluent plant data to reconcile and rectify the forecasted results from the BioWin model; the amount of mixed liquor suspended solids in the oxidation ditch, aeration rates and recycle rates were adjusted accordingly. Experimental analysis and plant SCADA data were used to predict influent wastewater rates and composition profiles as a function of time over extended periods. The lumped dynamic model development was coupled with computational fluid dynamics (CFD) modeling of key units such as the oxidation ditches. Several CFD models that incorporate nitrification-denitrification kinetics as well as hydrodynamics were developed and tested using the ANSYS Fluent software platform. These realistic, verified BioWin and ANSYS models were used to plan operating policies and control strategies for the biological wastewater plant in advance, enabling regulatory compliance at minimum operational cost. With modest re-tuning, these models can also be used for other biological wastewater treatment plants.
The BioWin model mimics the existing performance of the Valrico plant, which allowed the operators and engineers to predict effluent behavior and take control actions to meet the plant's discharge limits. The model also identified the kinetic and stoichiometric parameters that matter most when modeling biological wastewater treatment plants. Another important finding was the effect of mixed liquor suspended solids and recycle ratios on the effluent concentrations of parameters such as total nitrogen, ammonia, nitrate, and nitrite. The ANSYS model revealed, for example, that dead-zone formation increases along the length of the oxidation ditches compared to the regions near the aerators. These profiles were also very useful for studying mixing patterns, the effect of aerator speed, and the use of baffles, which in turn helps optimize plant performance.
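Nitrification-denitrification kinetics of the kind embedded in these CFD models are typically built from double-Monod switching functions of the activated sludge model (ASM1) family. The sketch below illustrates only the standard rate form; the parameter values are generic defaults for illustration, not the calibrated Valrico values, and the function name is ours.

```python
def nitrification_rate(s_nh, s_o, mu_max=0.8, k_nh=1.0, k_o=0.4,
                       x_aut=50.0, y_a=0.24):
    """Double-Monod ammonia-uptake rate for autotrophic nitrifiers (ASM1-style).

    s_nh  : ammonia nitrogen concentration (mg N/L)
    s_o   : dissolved oxygen concentration (mg O2/L)
    Rate = (mu_max / Y_A) * X_AUT * Monod(NH4) * Monod(O2);
    each Monod term switches the rate off as its substrate is depleted.
    """
    monod = (s_nh / (k_nh + s_nh)) * (s_o / (k_o + s_o))
    return (mu_max / y_a) * x_aut * monod
```

The two switching functions are what make the spatial (CFD) coupling matter: in poorly aerated dead zones the oxygen term drives the local rate toward zero even when ammonia is plentiful.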

Keywords: computational fluid dynamics, flow-sheet simulation, kinetic modeling, process dynamics

Procedia PDF Downloads 177
82 High Speed Motion Tracking with Magnetometer in Nonuniform Magnetic Field

Authors: Jeronimo Cox, Tomonari Furukawa

Abstract:

Magnetometers have become more popular in inertial measurement units (IMUs) for their ability to correct estimates using the earth's magnetic field. Accelerometer- and gyroscope-based packages fail as dead-reckoning errors accumulate over time. Localization with magnetometer-inclusive IMUs has become popular in robotics as a way to track the odometry of slower-speed robots. With high-speed motions, the error accumulates over shorter periods of time, making such motions difficult to track with an IMU. Tracking a high-speed motion is especially difficult with limited observability: visual obstruction leaves motion-tracking cameras unusable, and when motions are too dynamic for estimation techniques that rely on observability of the gravity vector, the use of magnetometers is further justified. However, available magnetometer calibration methods are limited by the assumption that the background magnetic field is uniform, so estimation in nonuniform magnetic fields is problematic. Hard iron distortion is an offset in the measured field caused by nearby objects that produce their own magnetic fields; it is often observed as a displacement of the center of the data points from the origin when a magnetometer is rotated, and its magnitude depends on proximity to the distortion source. Soft iron distortion relates instead to the scaling of the magnetometer's sensor axes. Of the two, hard iron distortion is the larger contributor to attitude-estimation error. Indoor environments, or spaces inside ferrite-based structures such as building reinforcements or a vehicle, often cause proximity-dependent distortion. Because position correlates with areas of distortion, magnetometer localization methods include producing a spatial map of the magnetic field and collecting distortion signatures to better aid location tracking.
The goal of this paper is to compare magnetometer methods that do not need a pre-produced magnetic field map, since mapping the magnetic field of a space can be costly and inefficient. Dynamic measurement fusion is used to track the motion of a multi-link system. Three techniques are compared to assess their robustness and accuracy: conventional calibration from data collected while rotating at a static point, real-time estimation of the calibration parameters at each time step, and using two magnetometers to determine local hard iron distortion. With opposite-facing magnetometers, hard iron distortion can be accounted for regardless of position, rather than assumed constant across positional changes. The motion measured is a repeatable planar motion of a two-link system connected by revolute joints; the links are translated on a moving base to induce rotation of the links. The joints are equipped with absolute encoders, and the motion is recorded with cameras, to enable ground-truth comparison against each of the magnetometer methods. While the two-magnetometer method accounts for local hard iron distortion, it fails where the magnetic field direction in space is inconsistent.
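The conventional static calibration mentioned above treats the hard iron offset as the displaced center of the roughly spherical cloud of measurements collected while the sensor is rotated in place. A minimal sketch of that idea, assuming a noiseless uniform background field (an illustration of the principle, not the paper's estimator):

```python
import numpy as np

def hard_iron_offset(samples):
    """Estimate the hard-iron offset (sphere center) from magnetometer
    samples collected while rotating the sensor at a fixed position.

    Uses the linear least-squares sphere fit: |m - c|^2 = r^2 rearranges to
    2 m.c + (r^2 - |c|^2) = |m|^2, which is linear in the unknowns.
    Returns (center, radius); the center is the hard-iron offset to subtract.
    """
    m = np.asarray(samples, dtype=float)
    A = np.hstack([2.0 * m, np.ones((len(m), 1))])   # unknowns: cx, cy, cz, d
    y = np.sum(m * m, axis=1)
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius
```

The limitation the paper targets is visible in the model itself: the fitted center is only valid near the position where the data were collected, which is exactly why a position-independent (two-magnetometer) scheme is attractive in nonuniform fields.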

Keywords: motion tracking, sensor fusion, magnetometer, state estimation

Procedia PDF Downloads 56
81 Bridging the Gap between Teaching and Learning: A 3-S (Strength, Stamina, Speed) Model for Medical Education

Authors: Mangala Sadasivan, Mary Hughes, Bryan Kelly

Abstract:

Medical education must focus on bridging the gap between teaching and learning when training pre-clinical students in the skills needed to keep up with medical knowledge and meet the demands of future health care. The authors were interested in showing that a 3-S Model (building strength, developing stamina, and increasing speed) using a bridged curriculum design helps connect teaching and learning and improves students' retention of basic science and clinical knowledge. The authors designed three learning modules using the 3-S Model within a systems course in a pre-clerkship medical curriculum. Each module focused on a bridge (concept map) designed by the instructor for specific course content. This within-subjects design study included 304 registered MSU osteopathic medical students (3 campuses) ranked by quintile based on previous coursework. The instructors used the bridge to create self-directed learning exercises (building strength) to help students master basic science content. Students were video-coached on how to complete assignments and given pre-tests and post-tests designed to give them control over assessing and identifying gaps in learning and strengthening connections. The instructor who designed the modules also used video lectures to help students master clinical concepts and link them (building stamina) to previously learned material connected to the bridge. Board-style practice questions relevant to the modules were used to help students improve access (increasing speed) to stored content. Unit examinations covering the module content, along with material covered by other instructors within the units, served as outcome measures in this study. These data were then compared to each student's performance on a final comprehensive exam and on the COMLEX medical board examination taken some time after the course.
The authors used mean comparisons to evaluate students' performance on module items (using the 3-S Model) versus non-module items on the unit exams, the final course exam, and the COMLEX medical board examination. The data show that, on average, students performed significantly better on module items than on non-module items on exams 1 and 2. The module 3 exam was canceled due to a university shutdown. The difference in mean scores between module and non-module items disappeared on the final comprehensive exam, which was rescheduled once the university resumed session. By quintile designation, mean scores were higher for module items than non-module items; the difference for Quintiles 1 and 2 was significant on exam 1, the gap widened for all quintile groups on exam 2, and it disappeared on exam 3. Based on COMLEX performance, all students on average, whether they passed or failed, performed better on module items than non-module items on all three exams. The gap in module-item scores between students who passed the COMLEX and those who failed was greater on exam 1 (14.3) than on exam 2 (7.5) and exam 3 (10.2). The data show that the 3-S Model using a bridge effectively connects teaching and learning.

Keywords: bridging gap, medical education, teaching and learning, model of learning

Procedia PDF Downloads 28
80 Cobb Angle Measurement from Coronal X-Rays Using Artificial Neural Networks

Authors: Andrew N. Saylor, James R. Peters

Abstract:

Scoliosis is a complex 3D deformity of the thoracic and lumbar spines, clinically diagnosed by measurement of a Cobb angle of 10 degrees or more on a coronal X-ray. The Cobb angle is the angle made by the lines drawn along the proximal and distal endplates of the respective proximal and distal vertebrae comprising the curve. Traditionally, Cobb angles are measured manually using either a marker, straight edge, and protractor or image measurement software. The task of measuring the Cobb angle can also be represented by a function taking the spine geometry rendered using X-ray imaging as input and returning the approximate angle. Although the form of such a function may be unknown, it can be approximated using artificial neural networks (ANNs). The performance of ANNs is affected by many factors, including the choice of activation function and network architecture; however, the effects of these parameters on the accuracy of scoliotic deformity measurements are poorly understood. Therefore, the objective of this study was to systematically investigate the effect of ANN architecture and activation function on Cobb angle measurement from the coronal X-rays of scoliotic subjects. The data set for this study consisted of 609 coronal chest X-rays of scoliotic subjects divided into 481 training images and 128 test images. These data, which included labeled Cobb angle measurements, were obtained from the SpineWeb online database. In order to normalize the input data, each image was resized using bi-linear interpolation to a size of 500 × 187 pixels, and the pixel intensities were scaled to be between 0 and 1. A fully connected (dense) ANN with a fixed cost function (mean squared error), batch size (10), and learning rate (0.01) was developed using Python Version 3.7.3 and TensorFlow 1.13.1. 
The activation functions (sigmoid, hyperbolic tangent [tanh], or rectified linear units [ReLU]), number of hidden layers (1, 3, 5, or 10), and number of neurons per layer (10, 100, or 1000) were varied systematically to generate a total of 36 network conditions. Stochastic gradient descent with early stopping was used to train each network. Three trials were run per condition, and the final mean squared errors and mean absolute errors were averaged to quantify the network response for each condition. The best-performing network used ReLU neurons, three hidden layers, and 100 neurons per layer; its average mean squared error was 222.28 ± 30 degrees², and its average mean absolute error was 11.96 ± 0.64 degrees. Notably, while most of the networks performed similarly, the network using ReLU neurons with 10 hidden layers and 1000 neurons per layer, and the one using tanh neurons with one hidden layer and 10 neurons per layer, performed markedly worse, with average mean squared errors greater than 400 degrees² and average mean absolute errors greater than 16 degrees. From the results of this study, it can be seen that the choice of ANN architecture and activation function has a clear impact on Cobb angle inference from coronal X-rays of scoliotic subjects.
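The 36 conditions above are simply the Cartesian product of the three hyperparameter axes. The sketch below enumerates that grid; the parameter-count helper is our own illustration (the flattened 500 × 187 input size is taken from the abstract, the single-output regression head is an assumption), useful for seeing why the 10-layer, 1000-neuron condition is so much larger than the rest.

```python
from itertools import product

# Sweep axes as described in the abstract: 3 * 4 * 3 = 36 conditions.
ACTIVATIONS = ["sigmoid", "tanh", "relu"]
HIDDEN_LAYERS = [1, 3, 5, 10]
NEURONS_PER_LAYER = [10, 100, 1000]

def sweep_conditions():
    """Enumerate every (activation, depth, width) network condition."""
    return [
        {"activation": act, "hidden_layers": depth, "neurons": width}
        for act, depth, width in product(ACTIVATIONS, HIDDEN_LAYERS,
                                         NEURONS_PER_LAYER)
    ]

def dense_param_count(cond, n_inputs=500 * 187, n_outputs=1):
    """Weights + biases of a fully connected regression net for one condition,
    taking the flattened, normalized 500x187 X-ray as input (illustrative)."""
    sizes = [n_inputs] + [cond["neurons"]] * cond["hidden_layers"] + [n_outputs]
    return sum(a * b + b for a, b in zip(sizes, sizes[1:]))
```

Running three trials over this list, with a fixed cost function, batch size, and learning rate as in the study, isolates architecture and activation as the only varying factors.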

Keywords: scoliosis, artificial neural networks, cobb angle, medical imaging

Procedia PDF Downloads 102
79 Adaptation of Retrofit Strategies for the Housing Sector in Northern Cyprus

Authors: B. Ozarisoy, E. Ampatzi, G. Z. Lancaster

Abstract:

This research project is undertaken in the Turkish Republic of Northern Cyprus (T.R.N.C.). The study focuses on identifying refurbishment activities capable of diagnosing and detecting the underlying problems and the challenges posed by the buildings' typology, and on identifying the correct construction materials for the refurbishment process so that the expected energy savings can be maximised. Attention is drawn to the level of awareness and understanding of refurbishment activity that needs to be raised in the current construction process, alongside factors that include positive environmental impact and energy savings. The approach here is to look at buildings built by private construction companies and already refurbished by their occupants, and to suggest additional control mechanisms for retrofitting that can further enhance the renewal process. The objective of the research is to investigate the occupants' behaviour and role in refurbishment activity; to explore how and why occupants decide to change building components; and to understand why and how occupants consider using energy-efficient materials. The present work is based on data from this researcher's first-hand experience and incorporates preliminary data on recent housing-sector statistics, including the year in which housing estates were built, the characteristics that define the construction industry in the T.R.N.C., building typology, and the demographic structure of house owners. The housing estates are chosen from 16 different projects in four different regions of the T.R.N.C., including urban and suburban areas. There is, therefore, a broad representation of the common drivers in the property market, each with different levels of refurbishment activity, coupled with samples from different climatic regions within the T.R.N.C.
The study is conducted through semi-structured interviews to identify occupants' behaviour associated with refurbishment activity. The interviews capture the occupants' demographic information, needs and intentions as they relate to various aspects of the refurbishment process. This paper presents the results of semi-structured interviews with 70 homeowners in a selected group of 16 housing estates in five different parts of the T.R.N.C. The interviewees are all residents of single- or multi-family housing units. Alongside the construction process and its environmental impact, the results point to the need for control mechanisms in the housing sector to promote and support the adoption of retrofit strategies and to minimise uncontrolled refurbishment activity, in line with diagnostic information on the selected buildings. The expected solutions should be effective, environmentally acceptable and feasible for the type of housing projects under review, with due regard for their location, the climatic conditions in which they were undertaken, the socio-economic standing and attitudes of the house owners, local resources and legislative constraints. Furthermore, the study emphasises the practical and long-term economic benefits of refurbishment under the proper conditions, and why these should be fully understood by householders.

Keywords: construction process, energy-efficiency, refurbishment activity, retrofitting

Procedia PDF Downloads 295
78 Design and Synthesis of an Organic Material with High Open Circuit Voltage of 1.0 V

Authors: Javed Iqbal

Abstract:

The growing energy needs of human society and the depletion of conventional energy sources demand a renewable, safe, abundant, low-cost and omnipresent energy source. One of the most suitable ways to address the foreseeable world energy crisis is to use the power of the sun. Photovoltaic devices are of especially wide interest as they convert solar energy into electricity. The best-performing solar cells today are silicon-based; however, silicon cells are expensive and rigid in structure, and have a long payback timeline for cost and energy. Organic photovoltaic cells are cheap and flexible and can be manufactured in a continuous process, which makes them an extremely favorable alternative. Organic photovoltaic cells convert sunlight into electricity by using conductive polymers or small molecules to separate electrons and electron holes. A major challenge for these new cells is efficiency, which is low compared with traditional silicon solar cells. To overcome this challenge, two straightforward strategies are usually considered: (1) reducing the band gap of molecular donors to broaden the absorption range, which results in a higher short-circuit current density (JSC) of devices; and (2) lowering the highest occupied molecular orbital (HOMO) energy of molecular donors so as to increase the open-circuit voltage (VOC) of devices. Given the cost of chemicals, it is impractical to test many materials experimentally; the better way is to identify suitable candidates beforehand. For this purpose, we use a computational approach to design molecules based on our knowledge of organic chemistry and to determine their physical and electronic properties.
In this study, we performed DFT calculations with different options to target a high open-circuit voltage and, after obtaining suitable calculated data, synthesized a novel D–π–A–π–D type low-band-gap small-molecule donor material (ZOPTAN-TPA). The arylene-vinylene-based bis(arylhalide) unit containing a cyanostilbene unit acts as the low-band-gap electron-accepting block and is coupled with triphenylamine electron-donating end groups. Triphenylamine (TPA) was chosen as the capping donor for its important role in stabilizing the hole separated from an exciton, thereby improving the material's hole-transporting properties. A π-bridge (thiophene) is inserted between the donor and acceptor units to reduce the steric hindrance between them and to improve the planarity of the molecule. The ZOPTAN-TPA molecule features a low-lying HOMO level of 5.2 eV and an optical energy gap of 2.1 eV. Champion OSCs based on a solution-processed, non-annealed active-material blend of [6,6]-phenyl-C61-butyric acid methyl ester (PCBM) and ZOPTAN-TPA in a 2:1 mass ratio exhibit a power conversion efficiency of 1.9% and a high open-circuit voltage of over 1.0 V.
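The link between a deep donor HOMO and a high VOC can be checked back-of-the-envelope with the empirical Scharber relation for donor/PCBM bulk heterojunctions, VOC ≈ (|E_HOMO(donor)| − |E_LUMO(PCBM)|)/e − 0.3 V. This is not the paper's calculation: the 0.3 V empirical loss term is the standard Scharber value, and the PCBM LUMO magnitude is an assumption (reported values span roughly 3.7–4.3 eV; 3.9 eV is used purely for illustration).

```python
def scharber_voc(homo_donor_ev, lumo_acceptor_ev=3.9, empirical_loss_v=0.3):
    """Empirical Scharber estimate of open-circuit voltage (volts).

    homo_donor_ev    : magnitude of the donor HOMO energy (eV)
    lumo_acceptor_ev : magnitude of the acceptor (PCBM) LUMO energy (eV);
                       assumed value, literature reports vary widely
    """
    return homo_donor_ev - lumo_acceptor_ev - empirical_loss_v

voc_estimate = scharber_voc(5.2)  # donor HOMO magnitude from the abstract
```

With the assumed LUMO, the 5.2 eV HOMO of ZOPTAN-TPA lands near the 1.0 V reported, which is consistent with strategy (2) above: deepening the donor HOMO raises VOC.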

Keywords: high open circuit voltage, donor, triphenylamine, organic solar cells

Procedia PDF Downloads 221
77 Prevalence and Factors Associated with Concurrent Use of Herbal Medicine and Anti-Retroviral Therapy among HIV/AIDS Patients Attending Selected HIV Clinics in Wakiso District

Authors: Nanteza Rachel

Abstract:

Background: Worldwide, there were 36.7 million people living with the Human Immunodeficiency Virus (HIV) in 2015, up from 35 million at the end of 2013. Wakiso district is one of the hotspots for HIV/Acquired Immune Deficiency Syndrome (AIDS) infection in Uganda, with a prevalence of 8.1%. Herbal medicine has gained popularity among HIV/AIDS patients as an adjuvant therapy to reduce the adverse effects of ART. Regardless of the subsidized cost and physical availability of anti-retroviral therapy (ART), the majority of Africans living with HIV/AIDS resort to supplementing their ART with traditional medicine. A pilot observation by the PI indicated that 13 of 30 People Living with AIDS (PLWA) attending HIV clinics in Wakiso district reported using herbal preparations despite taking anti-retrovirals (ARVs); this prompted the present study. Purpose of the study: To determine the prevalence and factors associated with concurrent use of herbal medicine and anti-retroviral therapy among HIV/AIDS patients attending selected HIV clinics in Wakiso district. Methodology: This was a cross-sectional study with both quantitative data collection (a questionnaire) and qualitative data collection (key informant interviews). A mixed sampling method was used, combining purposive and random sampling. Purposive sampling, based on location in the district, was used to select 7 health facilities corresponding to the 7 health sub-districts of Wakiso; simple random sampling was then used to select one HIV clinic from each sub-district. The study units were enrolled as they entered the HIV clinics, and 105 respondents were interviewed.
Both manual methods and a computer package (SPSS) were used to analyze the data. Results: The prevalence of concurrent use of herbal medicine and ART was 38 (36.2%). The HIV symptoms most commonly treated with herbs were fever, 27 (71.1%); diarrhea, 3 (7.9%); and cough, 2 (5.3%). Commonly used herbs for fever were Omululuza (Vernonia amygdalina), Ekigagi (Aloe sp.) and Nalongo (Justicia betonica Linn), while for diarrhea it was Ntwatwa. Reported side effects included severe pain, itching, anaemia, feeling sick, loss or gain of appetite, joint pain and bad dreams; the herb used to soothe anaemia was avocado leaves (Persea americana Mill.). The significant factors associated with concurrent use of herbal medicine were familiarity with herbs and the expense of conventional medicine for managing HIV symptoms. Another significant factor was hostility toward patients exhibited by health personnel providing HIV care. Conclusion: Herbal medicine is widely used by clients in HIV/AIDS care. Familiarity with herbs and the expense of conventional medicine were associated with concurrent use of herbal medicine and ART among HIV/AIDS patients, as was hostility toward such patients by health care providers.

Keywords: HIV patients, herbal medicine, antiretroviral therapy, factors associated

Procedia PDF Downloads 68
76 One Species into Five: Nucleo-Mito Barcoding Reveals Cryptic Species in 'Frankliniella Schultzei Complex': Vector for Tospoviruses

Authors: Vikas Kumar, Kailash Chandra, Kaomud Tyagi

Abstract:

The insect order Thysanoptera comprises small insects commonly called thrips. Among insect vectors, only thrips are capable of transmitting tospoviruses (genus Tospovirus, family Bunyaviridae), which affect various crops. Currently, fifteen species of the subfamily Thripinae (Thripidae) have been reported as vectors of tospoviruses. Frankliniella schultzei, reported to act as a vector for at least five tospoviruses, has been suspected to be a species complex comprising more than one species. It is a historically unresolved issue: two species, F. schultzei Trybom and F. sulphurea Schmutz, were described from South Africa and Sri Lanka, respectively. These two species were considered valid until 1968, when sulphurea was treated as a colour morph (pale form) and synonymised under schultzei (dark form); nevertheless, some thrips workers have continued to treat them as valid species. Parallel studies indicated that the brown form of schultzei is a tospovirus vector while the yellow form is not; however, recent studies have documented yellow populations as vectors too. In view of these facts, it is highly important to establish whether these colour forms represent true species or merely different populations with different vector capacities, and whether there is hidden diversity in the 'Frankliniella schultzei species complex'. In this study, we examine the complex through a molecular lens, with DNA data from India, Australia and Africa. A total of fifty-five specimens were collected from diverse locations in India and Australia. We generated molecular data using partial fragments of the mitochondrial cytochrome c oxidase I gene (mtCOI) and the 28S rRNA gene. The COI dataset comprised seventy-four sequences, of which fifty-five were generated in the current study and the rest retrieved from NCBI.
All four tree-construction methods (neighbor-joining, maximum parsimony, maximum likelihood and Bayesian analysis) yielded the same tree topology and recovered five cryptic species with high genetic divergence. For rDNA, there were forty-five sequences, of which thirty-nine were generated in the current study and the rest retrieved from NCBI. The four tree-building methods yielded four cryptic species with high bootstrap support / posterior probability; the one cryptic species from South Africa could not be recovered here because we did not generate rDNA data from South Africa, and no rDNA sequences from the African region were available in the database. The results of multiple species delimitation methods (barcode index numbers, automatic barcode gap discovery, the general mixed Yule-coalescent model, and Poisson tree processes) also supported the phylogenetic data, producing 5 and 4 Molecular Operational Taxonomic Units (MOTUs) for the mtCOI and 28S datasets, respectively. These results indicate that F. sulphurea may be a valid species; however, more morphological and molecular data are required on specimens from the type localities of the two species, together with comparison against the type specimens.
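Distance-based delimitation methods such as automatic barcode gap discovery rest on a simple idea: pairwise genetic distances within a species are small, distances between species are large, and a "barcode gap" separates the two distributions. The toy sketch below shows the underlying computation (an uncorrected p-distance and a widest-gap heuristic); it is an illustration of the principle, not the ABGD program's actual recursive algorithm.

```python
def p_distance(seq_a, seq_b):
    """Uncorrected pairwise p-distance: the fraction of differing aligned
    sites, ignoring positions with gaps or ambiguity ('-' or 'N')."""
    valid = [(a, b) for a, b in zip(seq_a.upper(), seq_b.upper())
             if a not in "-N" and b not in "-N"]
    if not valid:
        return 0.0
    return sum(a != b for a, b in valid) / len(valid)

def barcode_gap(distances):
    """Locate the widest gap in the sorted pairwise-distance distribution.

    Returns (threshold, width): distances below the threshold are treated as
    intraspecific, those above as interspecific.
    """
    d = sorted(distances)
    gaps = [(d[i + 1] - d[i], (d[i] + d[i + 1]) / 2) for i in range(len(d) - 1)]
    width, threshold = max(gaps)
    return threshold, width
```

In a real analysis the input would be all pairwise mtCOI distances across the fifty-five specimens; concordance between this distance-based partition and the tree-based methods (GMYC, PTP) is what gives the five-MOTU result its weight.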

Keywords: DNA barcoding, species complex, thrips, species delimitation

Procedia PDF Downloads 110
75 Development of a Systematic Design for Evaluating Force-on-Force Security Exercises at Nuclear Power Plants

Authors: Seungsik Yu, Minho Kang

Abstract:

As the threat of terrorism to nuclear facilities has increased globally since the attacks of September 11, efforts are being made to reassess physical protection systems and strengthen emergency response systems. Since 2015, Korea has implemented physical protection security exercises for nuclear facilities. The exercises should be carried out with full cooperation between the operator and the response forces. Performance testing of the physical protection system should include appropriate exercises, for example force-on-force exercises, to determine whether the response forces can provide an effective and timely response to prevent sabotage. Significant deficiencies, and the actions taken, should be reported as stipulated by the competent authority. The IAEA (International Atomic Energy Agency) is also preparing force-on-force exercise program documents to support the exercises of member states. Currently, the ROK (Republic of Korea) conducts exercises using a force-on-force exercise evaluation system that it developed itself for nuclear power plants, and it is necessary to establish exercise procedures that account for the use of this system. The purpose of this study is to establish the working procedures of the three major organizations involved in force-on-force exercises at nuclear power plants in the ROK that use the evaluation system: the licensee, KINAC (Korea Institute of Nuclear Nonproliferation and Control), and the NSSC (Nuclear Safety and Security Commission). The major activities are as follows. First, the licensee establishes and conducts an exercise plan and, when recommendations are derived from the exercise results, prepares and carries out a force-on-force result report that includes a plan for implementing the recommendations.
Other detailed tasks include consultation with surrounding units regarding the adversary force, interviews with exercise participants, support for document evaluation, and self-training to improve familiarity with MILES (Multiple Integrated Laser Engagement System). Second, KINAC prepares a review report on the force-on-force exercise plan established by the licensee, evaluates the exercise using the evaluation system, and prepares an exercise evaluation report. Its other detailed tasks include MILES training, adversary consultation, management of the evaluation system, and analysis of the evaluation results. Finally, the NSSC decides whether to approve the force-on-force exercise and issues correction requests to the nuclear facility based on the exercise results. The most important part of the ROK's force-on-force exercise system is the analysis performed by KINAC through the evaluation system after the exercise; the analysis proceeds by first collecting data from the evaluation system and then analyzing the collected data. The application process for the exercise evaluation system, introduced in the ROK in 2016, will be concretely set up, and a system will be established to provide objective and consistent conclusions across exercise sessions. Based on the conclusions drawn, the ultimate goal is to complement the licensee's physical protection system so that the licensee can respond effectively and in a timely manner to sabotage or the unauthorized removal of nuclear materials.

Keywords: Force-on-Force exercise, nuclear power plant, physical protection, sabotage, unauthorized removal

Procedia PDF Downloads 120
74 Compromising Quality of Life in Low-Income Settlements: The Case of Ashrayan Prakalpa, Khulna

Authors: Salma Akter, Md. Kamal Uddin

Abstract:

This study aims to demonstrate how a top-down shelter policy and its resultant dwelling environment lead to 'everyday compromise' by the grassroots, according to subjective (satisfaction) and objective (physical design elements and physical environmental elements) indicators measured across three levels of settlement: macro (community), meso (neighborhood, or shelter/built environment) and micro (family). Ashrayan Prakalpa is a resettlement/housing project of the Government of Bangladesh that provides shelter and human resources development activities, such as education, microcredit and training programmes, to landless, homeless and rootless people. Despite the integrated nature of the shelter policies (comprising poverty alleviation, employment opportunity, secure tenure and livelihood training), the 'quality of life' at the different levels of settlement remains questionable. As the dwellers of the shelter units (formally termed 'barracks' rather than shelter or housing) remain on the receiving end of the government's resettlement policies, they often engage in spatial-physical and socio-economic negotiation and adopt curious forms of spatial practice that frequently contradict policy planning. Policy-based shelter thus forces dwellers to compromise persistently, both overtly and covertly, with the built environments provided to them. Their negotiation over the quality of allocated space, built form and infrastructure manifests, in turn, as a lower quality of life. The top-down shelter project studied here, Dakshin Chandani Mahal Ashrayan Prakalpa at Dighalia Upazila, located on the eastern fringe of Khulna, Bangladesh, is still in progress, resettling internally displaced and homeless people.
In terms of methodology, this research is primarily exploratory and adopts a case study method; an analytical framework for evaluating quality of life is developed through a deductive approach. Secondary data were obtained from housing policy analysis and a review of the relevant literature, while key informant interviews, focus group discussions, necessary drawings and photographs, and participant observation across the dwelling, neighborhood, and community levels served as the primary data collection methods. Findings reveal that various shortages, inadequacies, and the negligence of policymakers force dwellers to compromise with the allocated designed space, physical infrastructure, and economic opportunities at the dwelling, neighborhood, and, mostly, community level. The outcome of this study can thus contribute to a global-level understanding of the compromising of ‘quality of life’ under top-down shelter policy. Locally, in the context of Bangladesh, it can help policymakers and concerned authorities formulate shelter policies and take initiatives to improve the well-being of the marginalized.

Keywords: Ashrayan Prakalpa, compromise, displaced people, quality of life

Procedia PDF Downloads 126
73 Statistical Models and Time Series Forecasting on Crime Data in Nepal

Authors: Dila Ram Bhandari

Abstract:

Throughout the 20th century, new governments were created in which identities such as ethnicity, religion, language, caste, community, and tribe played a part in the development of constitutions and of the legal systems of victim and criminal justice. South Asian nations have recently been plagued by acute problems of extremism, poverty, environmental degradation, cybercrime, human rights violations, and crimes against, and victimization of, both individuals and groups. Every day a massive number of crimes are committed, and these frequent crimes have made the lives of ordinary citizens restless. Crime is one of the major threats to society and to civilization, and a source of contention that can create societal disturbance. Traditional crime-solving practices are unable to keep pace with the current crime situation. Crime analysis is one of the most important activities of most intelligence and law enforcement organizations all over the world. Unlike the Central Asia and Asia-Pacific regions, South Asia lacks a regional coordination mechanism to facilitate criminal intelligence sharing and operational coordination against organized crime, including illicit drug trafficking and money laundering. In recent years there have been numerous discussions about using data mining technology to combat crime and terrorism; the Data Detective program from the software company Sentient, for example, uses data mining techniques to support the police (Sentient, 2017). The goals of this work are to test several predictive modelling solutions and choose the most effective and promising one. First, extensive literature reviews on data mining, crime analysis, and crime data mining were conducted. Sentient provided a 7-year archive of crime statistics, aggregated daily to produce a univariate dataset. In addition, aggregation by daily incidence type was performed to produce a multivariate dataset. Each solution's forecast period lasted seven days.
The experiments were split into two main groups: statistical models and neural network models. For the crime data, the neural networks fared better than the statistical models. This study gives a general review of the applied statistical and neural network models; a comparative analysis of all models on a comparable dataset provides a detailed picture of each model's performance on the available data and its generalizability. The experiments demonstrated that, in comparison with the other models, Gated Recurrent Units (GRU) produced more accurate predictions. The crime records for 2005-2019 were collected from the Nepal Police headquarters and analysed in R. In conclusion, a gated recurrent unit implementation could benefit the police in predicting crime; hence, time series analysis using a GRU could be a prospective additional feature in Data Detective.
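The GRU-based one-step-ahead forecasting described above can be sketched as follows. This is an illustrative NumPy implementation of a single GRU cell encoding a seven-day window of toy daily crime counts, with untrained random weights and a linear readout; the actual model in the study was trained on Nepal Police data, and all weights and values here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h, p):
    """One GRU step over input x and previous hidden state h."""
    Wz, Uz, Wr, Ur, Wh, Uh = p
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1.0 - z) * h + z * h_cand         # interpolate old and new state

HIDDEN = 8
# Untrained random weights: (HIDDEN x 1) for the input, (HIDDEN x HIDDEN) for the state.
params = [rng.standard_normal((HIDDEN, 1)) * 0.1 if i % 2 == 0
          else rng.standard_normal((HIDDEN, HIDDEN)) * 0.1
          for i in range(6)]

# Toy univariate series: one week of daily crime counts (scaled to [0, 1]).
window = np.array([12., 9., 15., 11., 14., 10., 13.]) / 15.0

h = np.zeros(HIDDEN)
for x_t in window:                 # encode the 7-day window step by step
    h = gru_step(np.array([x_t]), h, params)

w_out = rng.standard_normal(HIDDEN) * 0.1
next_day = float(w_out @ h)        # linear readout: one-step-ahead forecast
```

In a trained model the gate weights and readout would be fitted by backpropagation through time; the sketch only shows how the gating equations turn a window of past counts into a hidden state from which the next day's value is read out.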

Keywords: time series analysis, forecasting, ARIMA, machine learning

Procedia PDF Downloads 140