Search results for: emission scenarios
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2688

48 An Efficient Algorithm for Solving the Transmission Network Expansion Planning Problem Integrating Machine Learning with Mathematical Decomposition

Authors: Pablo Oteiza, Ricardo Alvarez, Mehrdad Pirnia, Fuat Can

Abstract:

To effectively combat climate change, many countries around the world have committed to a decarbonisation of their electricity, along with promoting a large-scale integration of renewable energy sources (RES). While this trend represents a unique opportunity to effectively combat climate change, achieving a sound and cost-efficient energy transition towards low-carbon power systems poses significant challenges for the multi-year Transmission Network Expansion Planning (TNEP) problem. The objective of the multi-year TNEP is to determine the necessary network infrastructure to supply the projected demand in a cost-efficient way, considering the evolution of the new generation mix, including the integration of RES. The rapid integration of large-scale RES increases the variability and uncertainty in the power system operation, which in turn increases short-term flexibility requirements. To meet these requirements, flexible generating technologies such as energy storage systems must be considered within the TNEP as well, along with proper models for capturing the operational challenges of future power systems. As a consequence, TNEP formulations are becoming more complex and difficult to solve, especially for its application in realistic-sized power system models. To meet these challenges, there is an increasing need for developing efficient algorithms capable of solving the TNEP problem with reasonable computational time and resources. In this regard, a promising research area is the use of artificial intelligence (AI) techniques for solving large-scale mixed-integer optimization problems, such as the TNEP. In particular, the use of AI along with mathematical optimization strategies based on decomposition has shown great potential. In this context, this paper presents an efficient algorithm for solving the multi-year TNEP problem. The algorithm combines AI techniques with Column Generation, a traditional decomposition-based mathematical optimization method. 
One of the challenges of using Column Generation for solving the TNEP problem is that the subproblems are of mixed-integer nature, and solving them therefore requires significant amounts of time and resources. Hence, in this proposal we solve a linearly relaxed version of the subproblems and train a binary classifier that determines the values of the binary variables, based on the results obtained from the linearized version. A key feature of the proposal is that we integrate the binary classifier into the optimization algorithm in such a way that the optimality of the solution can be guaranteed. The results of a case study based on the HRP 38-bus test system show that the binary classifier has an accuracy above 97% for estimating the values of the binary variables. Since the linearly relaxed version of the subproblems can be solved in significantly less time than its integer programming counterpart, integrating the binary classifier into the Column Generation algorithm allowed us to reduce the computational time required for solving the problem by 50%. The final version of this paper will contain a detailed description of the proposed algorithm, the AI-based binary classifier technique and its integration into the CG algorithm. To demonstrate the capabilities of the proposal, we evaluate the algorithm in case studies with different scenarios, as well as in other power system models.
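The pricing scheme the abstract describes can be sketched in a few lines. The sketch below is illustrative only: the toy pricing problem, the relaxation rule, and the threshold "classifier" are all assumptions standing in for the paper's LP subproblem and trained model. What it does preserve is the key design point, namely that the classifier's column is accepted only when it has negative reduced cost, with a fall-back to the exact solve, so the optimality guarantee of Column Generation is retained.

```python
# Hedged sketch: LP-relaxed pricing + binary classifier inside Column
# Generation, with an exact fall-back that preserves optimality.
# The toy problem and all parameter values are illustrative, not from the paper.
from itertools import product

def solve_relaxed(costs, duals):
    # Stand-in for the LP relaxation of a 0/1 pricing subproblem: each
    # candidate line i gets a fractional value based on (dual - cost),
    # clipped to [0, 1].
    return [max(0.0, min(1.0, duals[i] - costs[i] + 0.5)) for i in range(len(costs))]

def classify(relaxed, threshold=0.5):
    # Stand-in for the trained binary classifier: threshold the relaxed values.
    return [1 if x >= threshold else 0 for x in relaxed]

def reduced_cost(binaries, costs, duals):
    return sum((costs[i] - duals[i]) * b for i, b in enumerate(binaries))

def exact_pricing(costs, duals):
    # Exact MIP solve by enumeration (fine for a toy instance).
    best = min(product([0, 1], repeat=len(costs)),
               key=lambda b: reduced_cost(list(b), costs, duals))
    return list(best)

def price_column(costs, duals):
    relaxed = solve_relaxed(costs, duals)
    guess = classify(relaxed)
    # Accept the classifier's column only if it actually prices out
    # (negative reduced cost); otherwise solve exactly, so the CG loop
    # never terminates on a wrong classification.
    if reduced_cost(guess, costs, duals) < 0:
        return guess, "classifier"
    return exact_pricing(costs, duals), "exact"

print(price_column([1.0, 2.0], [1.6, 1.5]))
```

The fall-back is what makes the 97% classifier accuracy reported above sufficient: misclassified subproblems cost one extra exact solve rather than a suboptimal plan.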

Keywords: integer optimization, machine learning, mathematical decomposition, transmission planning

Procedia PDF Downloads 43
47 Provotyping Futures Through Design

Authors: Elisabetta Cianfanelli, Maria Claudia Coppola, Margherita Tufarelli

Abstract:

Design practices throughout history return a critical understanding of society since they always conveyed values and meanings aimed at (re)framing reality by acting in everyday life: here, design gains cultural and normative character, since its artifacts, services, and environments hold the power to intercept, influence and inspire thoughts, behaviors, and relationships. In this sense, design can be persuasive, engaging in the production of worlds and, as such, acting in the space between poietics and politics so that chasing preferable futures and their aesthetic strategies becomes a matter full of political responsibility. This resonates with contemporary landscapes of radical interdependencies challenging designers to focus on complex socio-technical systems and to better support values such as equality and justice for both humans and nonhumans. In fact, it is in times of crisis and structural uncertainty that designers turn into visionaries at the service of society, envisioning scenarios and dwelling in the territories of imagination to conceive new fictions and frictions to be added to the thickness of the real. Here, design’s main tasks are to develop options, to increase the variety of choices, to cultivate its role as scout, jester, agent provocateur for the public, so that design for transformation emerges, making an explicit commitment to society, furthering structural change in a proactive and synergic manner. However, the exploration of possible futures is both a trap and a trampoline because, although it embodies a radical research tool, it raises various challenges when the design process goes further in the translation of such vision into an artefact - whether tangible or intangible -, through which it should deliver that bit of future into everyday experience. 
Today designers are devising new tools and practices to tackle current wicked challenges, combining their approaches with other disciplinary domains: futuring through design thus rises from research strands like speculative design, design fiction, and critical design, where the blending of design approaches and futures thinking brings an action-oriented and product-based approach to strategic insights. The contribution positions itself at the intersection of those approaches, aiming to discuss design’s tools of inquiry through which it is possible to grasp the agency of imagined futures in the present. Since futures are not remote, they actively participate in creating path-dependent decisions, crystallized into designed artifacts par excellence, prototypes, and their conceptual other, provotypes: with both being unfinished and multifaceted, the first are effective in reiterating solutions to problems already framed, while the second prove useful when the goal is to explore and break boundaries, bringing preferable futures closer. By focusing on some provotypes throughout history which challenged markets and, above all, social and cultural structures, the contribution’s final aim is to understand the knowledge produced by provotypes, understood as design spaces where design’s humanistic side might help develop a deeper sensibility about uncertainty and, most of all, the unfinished character of societal artifacts, whose experimentation would leave marks and traces to build up f(r)ictions as vital sparks of plurality and collective life.

Keywords: speculative design, provotypes, design knowledge, political theory

Procedia PDF Downloads 106
46 Geochemical Evaluation of Metal Content and Fluorescent Characterization of Dissolved Organic Matter in Lake Sediments

Authors: Fani Sakellariadou, Danae Antivachis

Abstract:

The purpose of this paper is to evaluate the environmental status of a coastal Mediterranean lake, named Koumoundourou, located on the northeastern coast of Elefsis Bay, in the western region of Attiki in Greece, 15 km from Athens. It has been preserved since ancient times and is of considerable archaeological interest. Koumoundourou lake is also considered a valuable wetland accommodating abundant flora and fauna, with a variety of bird species, including a few of the world’s threatened ones. Furthermore, it is a heavily modified lake, affected by various anthropogenic pollution sources which contribute industrial, urban and agricultural contaminants. The adjacent oil refineries and the military depot are the major pollution sources, contributing crude oil spills and leaks. Moreover, the lake receives a quantity of groundwater leachates from the major landfill of Athens. The environmental status of the lake results from the intensive land uses combined with the permeable lithology of the surrounding area and the existence of karstic springs which drain the calcareous mountains. Sediment samples were collected along the shoreline of the lake using a Van Veen stainless steel grab sampler. They were studied for the determination of the total metal content and the metal fractionation in geochemical phases, as well as the characterization of the dissolved organic matter (DOM). These constituents play a significant role in the ecological assessment of the lake. Metals may be responsible for harmful environmental impacts. Metal partitioning offers comprehensive information on the origin, mode of occurrence, biological and physicochemical availability, mobilization and transport of metals. Moreover, DOM has multifunctional importance, interacting with inorganic and organic contaminants with biogeochemical and ecological effects. The samples were digested using microwave heating with a suitable laboratory microwave unit.
For the total metal content, the samples were treated with a mixture of strong acids. A sequential extraction procedure was then applied to separate the exchangeable, carbonate-hosted, reducible, organic/sulphide and residual fractions. Metal content was determined by ICP-MS (Perkin Elmer NexION 350D ICP mass spectrometer). Furthermore, the DOM was removed via a gentle extraction procedure and then characterized by fluorescence spectroscopy using a Perkin-Elmer LS 55 luminescence spectrophotometer equipped with the WinLab 4.00.02 software for data processing. Mono-dimensional emission, excitation, synchronous-scan excitation and total luminescence spectra were recorded for the classification of the chromophoric units present in the aqueous extracts. Total metal concentrations were determined and compared with those of the Elefsis gulf sediments. Element partitioning revealed the anthropogenic sources and the contaminant bioavailability. All fluorescence spectra, as well as humification indices, were evaluated in detail to determine the nature and origin of the DOM. All the results were compared and interpreted to evaluate the environmental quality of Koumoundourou lake and the need for environmental management and protection.

Keywords: anthropogenic contaminant, dissolved organic matter, lake, metal, pollution

Procedia PDF Downloads 127
45 Synthesis and Properties of Poly(N-(sulfophenyl)aniline) Nanoflowers and Poly(N-(sulfophenyl)aniline) Nanofibers/Titanium Dioxide Nanoparticles by Solid-Phase Mechanochemical Reaction and Their Application in Hybrid Solar Cells

Authors: Mazaher Yarmohamadi-Vasel, Ali Reza Modarresi-Alama, Sahar Shabzendedara

Abstract:

Purpose/Objectives: The first purpose was to synthesize Poly(N-(sulfophenyl)aniline) nanoflowers (PSANFLs) and Poly(N-(sulfophenyl)aniline) nanofibers/titanium dioxide nanoparticles (PSANFs/TiO2NPs) by a solid-state mechanochemical, template-free method and to use them in hybrid solar cells. Our second aim was to increase the solubility and processability of conjugated nanomaterials in water through polar functionalized materials; poly[N-(4-sulfophenyl)aniline] is easily soluble in water because of the polar sulfonic acid groups in the polymer chain. Materials/Methods: Iron(III) chloride hexahydrate (FeCl3∙6H2O) was bought from Merck Millipore. Titanium dioxide nanoparticles (TiO2, <20 nm, anatase) and sodium diphenylamine-4-sulfonate (99%) were bought from Sigma-Aldrich. Titanium dioxide nanoparticle paste (PST-20T) was obtained from Sharifsolar Co., and conductive glasses coated with indium tin oxide (ITO) were bought from Xinyan Technology Co. (China). For the first time, we used the solid-state mechanochemical, template-free method to synthesize Poly(N-(sulfophenyl)aniline) nanoflowers; likewise, the nanocomposite of Poly(N-(sulfophenyl)aniline) nanofibers and titanium dioxide nanoparticles (PSANFs/TiO2NPs) was synthesized for the first time by the same technique. The energy gaps obtained from electrochemical calculations on CV curves and from UV-vis spectra demonstrate that the PSANFs/TiO2NPs nanocomposite is a p-n type material that can be used in photovoltaic cells. The doctor blade method was used to create films for three kinds of hybrid solar cells with patterns such as ITO│TiO2NPs│semiconductor sample│Al. Hybrid photovoltaic cells in bilayer and bulk heterojunction structures were then fabricated as ITO│TiO2NPs│PSANFLs│Al and ITO│TiO2NPs│PSANFs/TiO2NPs│Al, respectively.
Fourier-transform infrared spectra, field emission scanning electron microscopy (FE-SEM), ultraviolet-visible spectra, cyclic voltammetry (CV) and electrical conductivity were the analyses used to characterize the synthesized samples. Results and Conclusions: FE-SEM images clearly demonstrate that the morphology of the synthesized samples is nanostructured (nanoflowers and nanofibers). Electrochemical calculations of the band gap from CV curves showed that the forbidden band gaps of the PSANFLs and the PSANFs/TiO2NPs nanocomposite are 2.95 and 2.23 eV, respectively. The I-V characteristics of the hybrid solar cells and their power conversion efficiency (PCE) under 100 mW cm−2 irradiation (AM 1.5 global conditions) were measured; the PCE values of the samples were 0.30% and 0.62%, respectively. Finally, all the results of the solar cell analyses are discussed. To sum up, PSANFLs and PSANFs/TiO2NPs were successfully synthesized by an affordable and straightforward solid-state mechanochemical reaction under green conditions. The solubility and processability of the synthesized compounds were improved compared to previous work. We successfully fabricated hybrid photovoltaic cells of the synthesized semiconductor nanostructured polymers and TiO2NPs in different architectures. We believe that the synthesized compounds can open inventive pathways for the development of other Poly(N-(sulfophenyl)aniline)-based hybrid materials (nanocomposites) suitable for preparing new-generation solar cells.
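For readers unfamiliar with how PCE figures like the 0.30% and 0.62% above are obtained from an I-V measurement, the conventional calculation is PCE = (Voc × Jsc × FF) / Pin. The helper below is a generic sketch of that formula; the sample values in the usage line are illustrative and not taken from the abstract.

```python
# Power conversion efficiency from standard I-V curve parameters.
# Voc: open-circuit voltage (V), Jsc: short-circuit current density (mA/cm2),
# FF: fill factor (dimensionless), Pin: incident power (mW/cm2, AM 1.5 -> 100).
def pce(voc_v, jsc_ma_cm2, ff, pin_mw_cm2=100.0):
    # Pout = Voc * Jsc * FF; divide by incident power and express as percent.
    return 100.0 * (voc_v * jsc_ma_cm2 * ff) / pin_mw_cm2

# Illustrative values only: a 0.6 V, 10 mA/cm2, FF 0.5 cell under AM 1.5.
print(pce(0.6, 10.0, 0.5))  # -> 3.0 (%)
```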

Keywords: mechanochemical synthesis, PSANFLs, PSANFs/TiO2NPs, solar cell

Procedia PDF Downloads 40
44 Particle Size Characteristics of Aerosol Jets Produced by a Low-Powered E-Cigarette

Authors: Mohammad Shajid Rahman, Tarik Kaya, Edgar Matida

Abstract:

Electronic cigarettes, also known as e-cigarettes, may have become a tool to improve smoking cessation due to their ability to provide nicotine at a selected rate. Unlike traditional cigarettes, which produce toxic elements from tobacco combustion, e-cigarettes generate aerosols by heating a liquid solution (commonly a mixture of propylene glycol, vegetable glycerin, nicotine and some flavoring agents). However, caution is still needed when using e-cigarettes due to the presence of addictive nicotine and some harmful substances produced by the heating process. The particle size distribution (PSD) and associated velocities generated by e-cigarettes have a significant influence on aerosol deposition in different regions of the human respiratory tract. Moreover, low actuation power is beneficial in aerosol-generating devices since it results in reduced emission of toxic chemicals. For e-cigarettes, heating powers below 10 W can be considered low compared to the wide range of powers (0.6 to 70.0 W) studied in the literature. Given its importance for inhalation risk reduction, a deeper understanding of the particle size characteristics of e-cigarettes demands thorough investigation. However, a comprehensive study of the PSDs and velocities of e-cigarettes under standard testing conditions at relatively low heating powers is still lacking. The present study aims to measure the particle number count and size distribution of the undiluted aerosols of a recent fourth-generation e-cigarette at low powers, up to 6.5 W, using a real-time particle counter (time-of-flight method). In addition, the temporal and spatial evolution of the particle size and velocity distributions of the aerosol jets are examined using the phase Doppler anemometry (PDA) technique. To the authors’ best knowledge, the application of PDA to e-cigarette aerosol measurement is rarely reported.
In the present study, preliminary results on the particle number count of undiluted aerosols measured by the time-of-flight method showed that an increase of heating power from 3.5 W to 6.5 W resulted in an enhanced asymmetry in the PSD, deviating from a log-normal distribution. This can be considered an artifact of the rapid vaporization, condensation and coagulation processes acting on the aerosols at higher heating power. A novel mathematical expression, combining exponential, Gaussian and polynomial (EGP) distributions, was proposed and successfully describes the asymmetric PSD. The count median aerodynamic diameter and geometric standard deviation lay within ranges of about 0.67 μm to 0.73 μm and 1.32 to 1.43, respectively, while the power varied from 3.5 W to 6.5 W. Laser Doppler velocimetry (LDV) and PDA measurements suggested a typical decay of the centerline streamwise mean velocity of the aerosol jet along with a reduction of particle sizes. The final submission will provide a thorough literature review, a detailed description of the experimental procedure and a discussion of the results. Particle size and turbulence characteristics of the aerosol jets will be further examined by analyzing the arithmetic mean diameter, volumetric mean diameter, volume-based mean diameter, streamwise mean velocity and turbulence intensity. The present study has potential implications for PSD simulation and the validation of aerosol dosimetry models, leading to improvements in related aerosol generating devices.
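The abstract does not give the functional form of the proposed EGP expression, so the sketch below is only a plausible illustration of the idea: a weighted mixture of an exponential tail, a Gaussian mode and a polynomial correction, from which a count median diameter is recovered numerically. All parameter values and the quadratic term are assumptions, not the authors' fit.

```python
# Hedged sketch of an exponential + Gaussian + polynomial (EGP) composite PSD.
# The exact EGP form in the paper is not given; this mixture and its
# parameters are assumed for illustration only.
import math

def egp_pdf(d, lam=2.0, mu=0.7, sigma=0.1, a=0.05, w=(0.2, 0.75, 0.05)):
    # Weighted mix: exponential tail near zero, Gaussian mode at mu,
    # and a small quadratic bump on [0, 2] um as the polynomial term.
    expo = lam * math.exp(-lam * d)
    gauss = math.exp(-0.5 * ((d - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    poly = max(0.0, a * d * (2.0 - d))
    return w[0] * expo + w[1] * gauss + w[2] * poly

def count_median(pdf, lo=0.0, hi=2.0, n=20000):
    # Numerically integrate the pdf on [lo, hi] and return the diameter
    # at which half of the total (truncated) number count is reached.
    step = (hi - lo) / n
    xs = [lo + (i + 0.5) * step for i in range(n)]
    masses = [pdf(x) * step for x in xs]
    total, acc = sum(masses), 0.0
    for x, m in zip(xs, masses):
        acc += m
        if acc >= total / 2:
            return x
    return hi

print(round(count_median(egp_pdf), 3))
```

With these assumed weights the exponential term skews the distribution toward small diameters, so the recovered median sits slightly below the Gaussian mode, mirroring the asymmetry the abstract attributes to higher heating powers.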

Keywords: e-cigarette aerosol, laser Doppler velocimetry, particle size distribution, particle velocity, phase Doppler anemometry

Procedia PDF Downloads 12
43 Blood Lipid Management: Combined Treatment with Hydrotherapy and Ozone Bubbles Bursting in Water

Authors: M. M. Wickramasinghe

Abstract:

Cholesterol and triglycerides are lipids that are essential mainly to maintain the cellular structure of the human body. Cholesterol is also important for hormone production, vitamin D production, proper digestive function, and strengthening the immune system. Excess fat in the blood circulation, known as hyperlipidemia, becomes harmful, leading to arterial clogging and causing atherosclerosis. The aim of this research is to develop a treatment protocol to efficiently break down and maintain circulatory lipids by improving blood circulation, without strenuous physical exercise, while the patient is immersed in a tub of water. To achieve a strong exercise effect, this method involves generating powerful ozone bubbles that spin, collide, and burst in the water. Powerful emission of air into water is capable of transferring the locked energy of the water molecules and releasing energy. This method involves a water- and air-based impact generated by pumping ozone at a rate of 46 L/s with a concentration of 0.03-0.05 ppt, according to the safety standards of The Federal Institute for Drugs and Medical Devices (BfArM), Germany. The direct impact of the ozone bubbles on the muscular system and skin becomes the main target and is capable of increasing the heart rate while immersed in water. A total duration of 20 minutes is adequate to exert a strong exercise effect, improve blood circulation, and stimulate the nervous and endocrine systems. Unstable ozone breaks down into oxygen released at the surface of the water, giving additional benefits and supplying high-quality air rich in the oxygen required to maintain efficient metabolic functions. A breathing technique was introduced to improve the efficiency of lung function and benefit the air exchange mechanism. The temperature of the water is maintained at 39 °C to 40 °C to support arterial dilation and enzyme functions and to efficiently improve blood circulation to the vital organs.
The buoyancy of water and natural hydrostatic pressure release the tension of the body weight and relax the mind and body. Sufficient hydration (3 L of water per day) is an essential requirement to transport nutrients and remove waste byproducts for processing through the liver, kidneys, and skin. Proper nutritional intake is an added advantage that optimizes the efficiency of this method and aids a fast recovery. Within 20-30 days of daily treatment, reductions in triglycerides, low-density lipoproteins (LDL), and total cholesterol were observed in patients with abnormal lipid profiles. Borderline patients were cleared within 10–15 days of treatment. This is a highly efficient system that provides many benefits and is able to achieve a successful reduction of triglycerides, LDL, and total cholesterol within a short period of time. Supported by proper hydration and nutritional balance, this system of natural treatment maintains healthy levels of lipids in the blood and reduces the risk of cerebral stroke, high blood pressure, and heart attacks.

Keywords: atherosclerosis, cholesterol, hydrotherapy, hyperlipidemia, lipid management, ozone therapy, triglycerides

Procedia PDF Downloads 63
42 An Integrated Real-Time Hydrodynamic and Coastal Risk Assessment Model

Authors: M. Reza Hashemi, Chris Small, Scott Hayward

Abstract:

The Northeast Coast of the US faces the damaging effects of coastal flooding and winds from Atlantic tropical and extratropical storms each year. Historically, several large storm events have produced substantial damage in the region, the most notable of which were the Great Atlantic Hurricane of 1938, Hurricane Carol, Hurricane Bob, and, more recently, Hurricane Sandy (2012). The objective of this study was to develop an integrated modeling system that could be used as a forecasting/hindcasting tool to evaluate and communicate the risk coastal communities face from these storms. This modeling system utilizes the ADvanced CIRCulation (ADCIRC) model for storm surge predictions and the Simulating WAves Nearshore (SWAN) model for the wave environment. These models were coupled, passing information to each other and computing over the same unstructured domain, allowing for the most accurate representation of the physical storm processes. The coupled SWAN-ADCIRC model was validated and has been set up to perform real-time forecast simulations (as well as hindcasts). The modeled storm parameters were then passed to a coastal risk assessment tool. This tool, which is generic and universally applicable, generates spatial structural damage estimate maps on an individual-structure basis for an area of interest. The required inputs for the coastal risk model included detailed information about the individual structures, inundation levels, and wave heights for the selected region. Additionally, the calculation of wind damage to structures was incorporated. The integrated coastal risk assessment system was then tested and applied to Charlestown, a small, vulnerable coastal town along the southern shore of Rhode Island. The modeling system was applied to Hurricane Sandy and a synthetic storm. In both storm cases, the effect of natural dunes on coastal risk was investigated.
The resulting damage maps for Charlestown clearly showed that the dune-eroded scenarios affected more structures and increased the estimated damage. The system was also tested in forecast mode for a large Nor’easter, Stella (March 2017). The results showed good performance of the coupled model in forecast mode when compared with observations. Finally, the nearshore model XBeach was nested within the regional (ADCIRC-SWAN) grid to simulate nearshore sediment transport processes and coastal erosion. Hurricane Irene (2011) was used to validate XBeach on the basis of a unique beach profile dataset for the region. XBeach showed relatively good performance, being able to estimate eroded volumes along the beach transects with a mean error of 16%. The validated model was then used to analyze the effectiveness of several erosion mitigation methods recommended in a recent study of coastal erosion in New England: beach nourishment, coastal banks (engineered core), and submerged breakwaters, as well as artificial surfing reefs. It was shown that beach nourishment and coastal banks perform better in mitigating shoreline retreat and coastal erosion.

Keywords: ADCIRC, coastal flooding, storm surge, coastal risk assessment, living shorelines

Procedia PDF Downloads 81
41 High Purity Lignin for Asphalt Applications: Using the Dawn Technology™ Wood Fractionation Process

Authors: Ed de Jong

Abstract:

Avantium is a leading technology development company and a frontrunner in renewable chemistry. Avantium develops disruptive technologies that enable the production of sustainable high-value products from renewable materials, and it actively seeks out collaborations and partnerships with like-minded companies and academic institutions globally to speed up the introduction of chemical innovations to the marketplace. In addition, Avantium helps companies accelerate their catalysis R&D to improve efficiencies and deliver increased sustainability, growth, and profits, by providing proprietary systems and services to this end. Many chemical building blocks and materials can be produced from biomass, nowadays mainly from first-generation carbohydrates, but the potential for competition with the human food chain leads brand owners to look for strategies to transition from first- to second-generation feedstocks. The use of non-edible lignocellulosic feedstock is an equally attractive route to produce chemical intermediates and an important part of the solution to these global issues (the Paris targets). Avantium’s Dawn Technology™ separates the glucose, mixed sugars, and lignin available in non-food agricultural and forestry residues such as wood chips, wheat straw, bagasse, empty fruit bunches or corn stover. The resulting very pure lignin is dense in energy and can be used for energy generation. However, such a material might preferably be deployed in higher added-value applications. Bitumen, which is fossil based, is mostly used for paving applications. Traditional hot mix asphalt emits large quantities of the greenhouse gases CO₂, CH₄, and N₂O, which is unfavorable for obvious environmental reasons. Another challenge for the bitumen industry is that the petrochemical industry is becoming more and more efficient at breaking down higher-chain hydrocarbons into lower-chain hydrocarbons with higher added value than bitumen. This has a negative effect on the availability of bitumen.
The asphalt market, as well as governments, is looking for alternatives with higher sustainability in terms of GHG emissions. The use of alternative sustainable binders, which can (partly) replace the bitumen, helps reduce GHG emissions and at the same time broadens the availability of binders. As lignin is a major component (around 25-30%) of lignocellulosic material, which includes terrestrial plants (e.g., trees, bushes, and grass) and agricultural residues (e.g., empty fruit bunches, corn stover, sugarcane bagasse, straw, etc.), it is highly available globally. Its chemical structure resembles that of bitumen, and it could therefore be used as an alternative to bitumen in applications like roofing or asphalt. Applications such as the use of lignin in asphalt require both fundamental research and practical proof under relevant use conditions. From a fundamental point of view, rheological aspects, as well as mixing, are key criteria. From a practical point of view, behavior under real road conditions is key (how easily can the asphalt be prepared, how easily can it be applied on the road, what is its durability, etc.). The paper will discuss the fundamentals of the use of lignin as a bitumen replacement as well as the status of the various demonstration projects in Europe using lignin as a partial bitumen replacement in asphalt, and it will in particular present the results of using Dawn Technology™ lignin as a partial replacement of bitumen.

Keywords: biorefinery, wood fractionation, lignin, asphalt, bitumen, sustainability

Procedia PDF Downloads 127
40 Encapsulated Bioflavonoids: Nanotechnology Driven Food Waste Utilization

Authors: Niharika Kaushal, Minni Singh

Abstract:

Citrus fruits fall into the category of commercially grown fruits that constitute an excellent repository of phytochemicals with health-promoting properties. When fruits belonging to the citrus family are processed by industry, tons of agricultural by-products are produced in the form of peels, pulp, and seeds, which normally have no further use and are commonly discarded. Nevertheless, such residues are of paramount importance due to their richness in valuable compounds; therefore, agro-waste is considered a valuable bioresource for various purposes in the food sector. A range of biological properties, including anti-oxidative, anti-cancer, anti-inflammatory, anti-allergenic, and anti-aging activity, has been reported for these bioactive compounds. Taking advantage of these inexpensive residual sources requires special attention to the extraction of bioactive compounds. Mandarin (Citrus nobilis X Citrus deliciosa) is a potential source of bioflavonoids with antioxidant properties, and it is increasingly regarded as a functional food. Despite these benefits, flavonoids face the barrier of pre-systemic metabolism in gastric fluid, which impedes their effectiveness; colloidal delivery systems can completely overcome this barrier. This study involved the extraction and identification of key flavonoids from mandarin biomass. Using a green chemistry approach, supercritical fluid extraction at 330 bar, a temperature of 40 °C, and 10% ethanol as co-solvent was employed for extraction, and the flavonoids were identified by mass spectrometry. To address the limitation mentioned above, the obtained extract was encapsulated in a poly(lactic-co-glycolic acid) (PLGA) matrix using a solvent evaporation method. Additionally, the antioxidant potential was evaluated by the 2,2-diphenylpicrylhydrazyl (DPPH) assay. The release pattern of the flavonoids was observed over time using simulated gastrointestinal fluids.
From the results, the total flavonoids extracted from the mandarin biomass were estimated at 47.3 ± 1.06 mg/ml rutin equivalents. Notably, the polymethoxyflavones (PMFs) tangeretin and nobiletin were identified in the extract, followed by hesperetin and naringin. The designed flavonoid-PLGA nanoparticles exhibited particle sizes between 200 and 250 nm. In addition, the bioengineered nanoparticles had a high entrapment efficiency of nearly 80.0% and maintained stability for more than a year. The flavonoid nanoparticles showed excellent antioxidant activity, with an IC50 of 0.55 μg/ml. Morphological studies revealed the smooth and spherical shape of the nanoparticles, as visualized by field emission scanning electron microscopy (FE-SEM). Simulated gastrointestinal studies of the free extract and the nanoencapsulated form revealed the degradation of nearly half of the flavonoids under harsh acidic conditions in the case of the free extract. After encapsulation, the flavonoids exhibited sustained-release properties, suggesting that polymeric encapsulates are efficient carriers of flavonoids. Thus, such technology-driven and biomass-derived products form the basis for the development of functional foods with improved therapeutic potential and antioxidant properties. As a result, citrus processing waste can be considered a new, high-value resource whose utilization should be promoted.

Keywords: citrus, agrowaste, flavonoids, nanoparticles

Procedia PDF Downloads 70
39 Extension of Moral Agency to Artificial Agents

Authors: Sofia Quaglia, Carmine Di Martino, Brendan Tierney

Abstract:

Artificial Intelligence (A.I.) permeates many aspects of modern life, from the machine learning algorithms predicting stocks on Wall Street to the killing of belligerents and innocents alike on the battlefield. Moreover, the end goal is to create autonomous A.I., meaning that humans will be absent from the decision-making process. The question arises naturally: when an A.I. does something wrong, when its behavior is harmful to the community and its actions go against the law, who is to be held responsible? This research in A.I. and robot ethics focuses mainly on robot rights, and its ultimate objective is to answer the following questions: (i) What is the function of rights? (ii) Who is a right holder, what is personhood, and what are the requirements needed to be a moral agent (and therefore accountable)? (iii) Can an A.I. be a moral agent (ontological requirements)? And finally, (iv) ought it to be one (ethical implications)? To answer these questions, the project was carried out as a collaboration between the School of Computer Science at the Technological University Dublin, which oversaw the technical aspects of the work, and the Department of Philosophy at the University of Milan, which supervised the philosophical framework and argumentation. Firstly, it was found that all rights are positive and based on consensus; they change with time and circumstances. Their function is to protect the social fabric and avoid dangerous situations. The same goes for the requirements considered necessary for moral agency: they are not absolute; in fact, they are constantly redesigned. Hence, the next logical step was to identify which requirements are regarded as fundamental in real-world judicial systems, comparing them to those used in philosophy. 
Autonomy, free will, intentionality, consciousness, and responsibility were identified as the requirements for being considered a moral agent. The work went on to build a symmetrical comparison between personhood and A.I. to let the ontological differences between the two emerge. Each requirement is introduced, explained through the most relevant theories of contemporary philosophy, and observed in its manifestation in A.I. Finally, after completing the philosophical and technical analysis, conclusions were drawn. As underlined in the research questions, there are two issues regarding the assignment of moral agency to artificial agents: first, whether all the ontological requirements are present, and second, present or not, whether an A.I. ought to be considered an artificial moral agent. From an ontological point of view, it is very hard to prove that an A.I. could be autonomous, free, intentional, conscious, and responsible. The philosophical accounts are often very theoretical and inconclusive, making it difficult to fully detect these requirements at an experimental level of demonstration. However, from an ethical point of view, it makes sense to consider some A.I. as artificial moral agents, hence responsible for their own actions. When artificial agents are considered responsible, norms already existing in our judicial system can be applied to them, such as removing them from society and re-educating them in order to re-introduce them to society. This is in line with how the highest-profile correctional facilities ought to work. Noticeably, this is a provisional conclusion, and research must continue further. Nevertheless, the strength of the presented argument lies in its immediate applicability to real-world scenarios. To refer to the aforementioned incidents involving the killing of innocents: when this thesis is applied, it is possible to hold an A.I. accountable and responsible for its actions.
This entails removing it from society by virtue of its unusability, re-programming it and, only when it is properly functioning, successfully re-introducing it.

Keywords: artificial agency, correctional system, ethics, natural agency, responsibility

Procedia PDF Downloads 156
38 Residential Building Facade Retrofit

Authors: Galit Shiff, Yael Gilad

Abstract:

The need to retrofit old buildings lies in the fact that buildings are responsible for the bulk of energy use and CO₂ emissions, and existing old structures are more dominant in their effect than new energy-efficient buildings. Nevertheless, not every case of urban renewal that aims to replace old buildings with new neighbourhoods necessarily has a financial or sustainability justification. Façade design plays a vital role in a building's energy performance and the units' comfort conditions. A residential façade-retrofit methodology and feasibility study has been carried out over the past four years, with two projects already fully renovated. The intention of this study is to serve as a case study for limited-budget façade retrofits in Mediterranean-climate urban areas. The two case-study buildings are both in Israel, but under different local climatic conditions: one is in Sderot, in the south of the country, and one is in Migdal HaEmek, in the north. The building typology is similar. The budget of the projects is around $14,000 per unit and includes interventions on the buildings' envelopes while the tenants remain in residence. Extensive research and analysis of the existing conditions were carried out. The buildings' components, materials, and envelope sections were mapped, examined, and compared to relevant updated standards. Solar radiation simulations of the buildings in their surroundings during winter and summer days were performed. The energy rating of each unit, as well as of the building as a whole, was calculated according to the Israeli Energy Code. The buildings' façades were documented with a thermal camera at different hours of the day, and this information was superimposed with data on electricity use and thermal comfort collected from the residential units. 
Later in the process, similar tools were used to compare the effectiveness of different design options and to evaluate the chosen solutions. Both projects showed that the most problematic units were those below the roof and those on top of the elevated entrance floor (pilotis): old buildings tend to have poor insulation on those two horizontal surfaces, which therefore require treatment. Different radiation levels and wall sections in the two projects influenced the design strategies. In the southern project, there was an extreme difference in solar radiation levels between the main façade and the back elevation; eventually, it was decided to invest in insulating the main south-west façade and the side façades, leaving the back north-east façade almost untouched. Lower radiation levels in the northern project led to a different tactic: a combination of basic insulation on all façades together with intensive treatment of areas with problematic thermal behavior. While poor execution of construction details and bad installation of the windows in the northern project required replacing them all, in the southern project it was found more essential to shade the windows than to replace them. Although the buildings and the construction typology chosen for this study are similar, the research shows large differences due to the different climatic zones and variations in local conditions. Therefore, in order to reach a systematic and cost-effective method of work, a more extensive catalogue database is needed. Such a catalogue would enable public housing companies in the Mediterranean climate to promote massive projects for renovating existing old buildings with minimal analysis and planning processes.

Keywords: facade, low budget, residential, retrofit

Procedia PDF Downloads 174
37 Improvements and Implementation Solutions to Reduce the Computational Load for Traffic Situational Awareness with Alerts (TSAA)

Authors: Salvatore Luongo, Carlo Luongo

Abstract:

This paper discusses implementation solutions to reduce the computational load of the Traffic Situational Awareness with Alerts (TSAA) application, which is based on Automatic Dependent Surveillance-Broadcast (ADS-B) technology. In 2008, there were 23 mid-air collisions involving general aviation fixed-wing aircraft, 6 of which were fatal, leading to 21 fatalities. These collisions occurred during visual meteorological conditions, indicating the limitations of the see-and-avoid concept for mid-air collision avoidance as defined by the Federal Aviation Administration (FAA). Commercial air transport aircraft are already equipped with a collision avoidance system, TCAS, based on classic transponder technology; this system has dramatically reduced the number of mid-air collisions involving air transport aircraft. In general aviation, the same reduction has not occurred, and achieving it is the main objective of the TSAA application. The major difference between the original conflict detection application and TSAA is that conflict detection focuses on preventing loss of separation in en-route environments, whereas TSAA is devoted to reducing the probability of mid-air collision in all phases of flight. TSAA increases the flight crew's traffic situational awareness by providing alerts for traffic detected in conflict with ownship, in support of the see-and-avoid responsibility. Considerable effort was spent in the design process and code generation in order to maximize efficiency and performance in terms of computational load and memory consumption. The TSAA architecture is divided into two high-level systems: the “Threats database” and the “Conflict detector”. The first receives traffic data from the ADS-B device and stores each target's data history. 
The conflict detector module estimates ownship and target trajectories in order to detect possible future losses of separation between ownship and each target. Finally, the alerts are checked by additional conflict verification logic in order to prevent undesirable behavior of the alert flag. To reduce the computational load, a pre-check evaluation module is used. This pre-check is purely a computational optimization, so the performance of the conflict detector is not modified in terms of the number of alerts detected. The pre-check module uses analytical trajectory propagation for both target and ownship; this provides greater accuracy and avoids step-by-step propagation, which demands a higher computational load. Furthermore, the pre-check makes it possible to exclude targets that are certainly not threats, using an analytical and efficient geometrical approach, thereby decreasing the computational load for the subsequent modules. This software improvement is not suggested by FAA documents, and it is therefore the main innovation of this work. The efficiency and efficacy of the enhancement are verified using fast-time and real-time simulations and by execution on a real device in several FAA scenarios. The final implementation also permits FAA software certification in compliance with the DO-178B standard. The computational load reduction allows the TSAA application to be installed on devices hosting multiple applications and/or with limited memory and computational capabilities.
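The analytical pre-check described above can be illustrated with a constant-velocity closest-point-of-approach (CPA) screen: a target whose minimum future distance stays above the separation threshold is discarded before the expensive step-by-step detector runs. The sketch below is a hypothetical 2-D simplification (flat-earth coordinates, illustrative thresholds), not the certified TSAA code:

```python
import math

def cpa_precheck(own_pos, own_vel, tgt_pos, tgt_vel, horizon_s, sep_m):
    """Analytically screen a target: return True if it could violate
    the separation threshold within the look-ahead horizon, assuming
    straight-line (constant-velocity) motion for both aircraft.
    Positions in metres (local plane), velocities in m/s."""
    # Work in the relative frame: the problem reduces to one moving point.
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]
    v2 = vx * vx + vy * vy
    if v2 == 0.0:                            # no relative motion
        t_cpa = 0.0
    else:
        t_cpa = -(rx * vx + ry * vy) / v2    # time of closest approach
    t_cpa = min(max(t_cpa, 0.0), horizon_s)  # clamp to [now, horizon]
    dx, dy = rx + vx * t_cpa, ry + vy * t_cpa
    return math.hypot(dx, dy) < sep_m        # True -> keep for full detector
```

A head-on target passes the screen and is handed to the full detector, while a target already diverging is excluded with a handful of arithmetic operations per time step.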

Keywords: traffic situation awareness, general aviation, aircraft conflict detection, computational load reduction, implementation solutions, software certification

Procedia PDF Downloads 251
36 Ethnic Andean Concepts of Health and Illness in the Post-Colombian World and Its Relevance Today

Authors: Elizabeth J. Currie, Fernando Ortega Perez

Abstract:

‘MEDICINE’ is a new project funded under the EC Horizon 2020 Marie Skłodowska-Curie Actions to determine concepts of health and healing from a culturally specific indigenous context, using a framework of interdisciplinary methods that integrates archaeological-historical, ethnographic, and modern health sciences approaches. The study will generate new theoretical and methodological approaches to model how peoples survive and adapt their traditional belief systems in a context of alien cultural impacts. In the immediate wake of the conquest of Peru by the invading Spanish armies and their ideology, native Andeans responded by forming the Taki Onkoy millenarian movement, which rejected European philosophical and ontological teachings, claiming “you make us sick”. The study explores how people’s experience of their world, and of their health beliefs within it, is fundamentally shaped by their inherent beliefs about the nature of being and identity in relation to the wider cosmos. Cultural and health belief systems and the related rituals or behaviors sustain a people’s sense of identity, wellbeing, and integrity. In the event of dislocation and persecution, these may change into devolved forms, which eventually inter-relate with ‘modern’ biomedical systems of health in as yet unidentified ways. The development of new conceptual frameworks that model this process will greatly expand our understanding of how people survive and adapt in response to cultural trauma. It will also demonstrate the continuing role, relevance, and use of traditional medicine (TM) in present-day indigenous communities. Studies will first be made of relevant pre-Columbian material culture, and then of early colonial-period ethnohistorical texts documenting the health beliefs and ritual practices still employed by indigenous Andean societies at the advent of the 17th-century Jesuit campaigns of persecution, the ‘Extirpación de las Idolatrías’. 
Core beliefs drawn from these baseline studies will then be used to construct a questionnaire about current health beliefs and practices, to be administered to the study population of indigenous Quechua peoples in the northern Andean region of Ecuador. Their current systems of knowledge and medicine have evolved into new forms within complex historical contexts: conquest by the invading Inca armies in the late 15th century, followed a generation later by Spain. A new model of contemporary Andean concepts of health, illness, and healing will be developed, demonstrating how these have changed through time. With this, a ‘policy tool’ will be constructed as a bridging facility into contemporary global scenarios relevant to other indigenous, First Nations, and migrant peoples, providing a means through which their traditional health beliefs and current needs may be more appropriately understood and met. This paper presents findings from the first analytical phases of the work, based upon the study of the literature and the archaeological record. The study offers a novel perspective and novel methods for the development of policies sensitive to indigenous and minority peoples’ health needs.

Keywords: Andean ethnomedicine, Andean health beliefs, health beliefs models, traditional medicine

Procedia PDF Downloads 322
35 Meta-Analysis of Previously Unsolved Cases of Aviation Mishaps Employing Molecular Pathology

Authors: Michael Josef Schwerer

Abstract:

Background: Analyzing any aircraft accident is mandatory under the regulations of the International Civil Aviation Organization and the respective country’s criminal prosecution authorities. Legal medicine investigations are unavoidable when fatalities involve the flight crew or when doubts arise concerning the pilot’s aeromedical health status before the event. As a result of the frequently tremendous blunt and sharp force trauma accompanying the aircraft’s impact with the ground, consecutive blast or fire exposure of the occupants, or putrefaction of the bodies in cases of delayed recovery, relevant findings can be masked or destroyed and are therefore inaccessible in standard pathology practice, which comprises just forensic autopsy and histopathology. Such cases are at considerable risk of remaining unsolved, without legal consequences for those responsible. Further, no lessons can be drawn from these scenarios to improve flight safety and prevent future mishaps. Aims and Methods: To learn from previously unsolved aircraft accidents, re-evaluations of the investigation files and modern molecular pathology studies were performed. Genetic testing involved predominantly PCR-based analysis of gene regulation, studying DNA promoter methylation, RNA transcription, and post-transcriptional regulation. In addition, the presence or absence of infective agents, particularly DNA and RNA viruses, was studied. Technical adjustments to the molecular genetic procedures were necessary when working with archived sample material, and standards for the proper interpretation of the respective findings had to be settled. Results and Discussion: Additional molecular genetic testing significantly contributes to the quality of forensic pathology assessment in aviation mishaps. Previously undetected cardiotropic viruses can explain, e.g., a pilot’s sudden incapacitation resulting from cardiac failure or myocardial arrhythmia. 
In contrast, negative results for infective agents help rule out concerns about an accident pilot’s fitness to fly and support the aeromedical examiner’s precedent decision to issue him or her an aeromedical certificate. Care must be taken in the interpretation of genetic testing for pre-existing diseases such as hypertrophic cardiomyopathy or ischemic heart disease. Molecular markers such as mRNAs or miRNAs, which can establish these diagnoses in clinical patients, might be misleading in flight crew members because of adaptive changes in their tissues resulting, for instance, from repeated mild hypoxia during flight. Military pilots especially demonstrate significant physiological adjustments to their somatic burdens in flight, such as cardiocirculatory stress and air combat maneuvers. Their non-pathogenic alterations in gene regulation and expression will likely be mistaken for genuine disease by inexperienced investigators. Conclusions: The growing influence of molecular pathology on legal medicine practice has found its way into aircraft accident investigation. Provided that appropriate quality standards for laboratory work and data interpretation are in place, forensic genetic testing supports the medico-legal analysis of aviation mishaps and can reduce the number of unsolved events in the future.

Keywords: aviation medicine, aircraft accident investigation, forensic pathology, molecular pathology

Procedia PDF Downloads 17
34 Development and Experimental Validation of Coupled Flow-Aerosol Microphysics Model for Hot Wire Generator

Authors: K. Ghosh, S. N. Tripathi, Manish Joshi, Y. S. Mayya, Arshad Khan, B. K. Sapra

Abstract:

We have developed a CFD-coupled aerosol microphysics model in the context of aerosol generation from a glowing wire. The governing equations are solved implicitly for mass, momentum, and energy transfer along with the aerosol dynamics. The computationally efficient framework can simulate the temporal behavior of the total number concentration and the number size distribution. The formulation uniquely couples a standard k-epsilon scheme and a boundary-layer model with detailed aerosol dynamics through the residence time. The model uses measured temperatures (wire surface and axial/radial surroundings) and wire compositional data, apart from the other usual inputs, for its simulations. The model predictions show that bulk fluid motion and the local heat distribution can significantly affect aerosol behavior when the buoyancy effect in momentum transfer is considered. Buoyancy-generated turbulence was found to affect parameters related to aerosol dynamics and transport as well. The model was validated by comparing simulated predictions with results from six controlled experiments performed with a laboratory-made hot wire nanoparticle generator. A condensation particle counter (CPC) and a scanning mobility particle sizer (SMPS) were used to measure the total number concentration and the number size distribution at the outlet of the reactor cell during these experiments. The model-predicted results were found to be in reasonable agreement with the observed values. The developed model is fast (fully implicit) and numerically stable. It can be used specifically for studying the behavior of aerosol particles generated by the glowing wire technique and, in general, for other similar large-scale domains. Incorporating CFD into an aerosol microphysics framework provides a realistic platform to study natural-convection-driven systems and applications. 
Aerosol dynamics sub-modules (nucleation, coagulation, wall deposition) have been coupled with the Navier-Stokes equations, modified to include a buoyancy-coupled k-epsilon turbulence model. The coupled flow-aerosol dynamics equations were solved numerically with an implicit scheme. Wire composition and temperatures (wire surface and cell domain) were obtained or measured for use as model inputs. Model simulations showed a significant effect of the fluid properties on the dynamics of the aerosol particles. The role of buoyancy was highlighted by the observation and interpretation of nucleation zones in the planes above the wire axis. The model was validated against the measured temporal evolution, total number concentration, and size distribution at the outlet of the hot wire generator cell. Experimentally averaged and simulated total number concentrations were found to match closely, barring the values at initial times. The steady-state number size distribution matched very well for particle diameters below 10 nm, while reasonable differences were noticed for larger sizes. Although tuned specifically for the present context (i.e., aerosol generation from a hot wire generator), the model can also be used for diverse applications, e.g., the emission of particles from hot zones (chimneys, exhausts), fires, and atmospheric cloud dynamics.
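To illustrate why a fully implicit treatment of the aerosol dynamics stays numerically stable, consider the simplest coagulation sub-problem: a monodisperse population with a constant kernel, dN/dt = -0.5 K N², which has the closed form N(t) = N₀ / (1 + 0.5 K N₀ t). The sketch below (all values illustrative, not taken from the paper's model) advances this equation with an implicit Euler step, which requires solving a small quadratic at each step but remains stable even for large time steps:

```python
import numpy as np

def implicit_coagulation_step(n, k, dt):
    """One implicit Euler step of dN/dt = -0.5*k*N^2:
    solve n_new = n - 0.5*k*dt*n_new**2 for the positive root."""
    a = 0.5 * k * dt
    return (-1.0 + np.sqrt(1.0 + 4.0 * a * n)) / (2.0 * a)

# Illustrative values: initial number concentration (m^-3) and
# constant coagulation kernel (m^3 s^-1).
n0, k = 1e14, 1e-15
n = n0
for _ in range(100):                     # integrate to t = 1 s
    n = implicit_coagulation_step(n, k, 0.01)

# Closed-form solution at t = 1 s for comparison.
analytic = n0 / (1.0 + 0.5 * k * n0 * 1.0)
```

The same idea, solving for the new state rather than extrapolating from the old one, is what keeps a fully implicit scheme stable when the full model couples coagulation with nucleation, deposition, and the flow field.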

Keywords: nanoparticles, k-epsilon model, buoyancy, CFD, hot wire generator, aerosol dynamics

Procedia PDF Downloads 112
33 Sensing Study through Resonance Energy and Electron Transfer between Föster Resonance Energy Transfer Pair of Fluorescent Copolymers and Nitro-Compounds

Authors: Vishal Kumar, Soumitra Satapathi

Abstract:

Förster Resonance Energy Transfer (FRET) is a powerful technique used to probe close-range molecular interactions. Physically, the FRET phenomenon manifests as a dipole-dipole interaction between closely juxtaposed fluorescent molecules (10-100 Å). Our effort is to employ this FRET technique to make a prototype device for highly sensitive detection of environmental pollutants. Among the most common environmental pollutants, nitroaromatic compounds (NACs) are of particular interest because of their durability and toxicity. Sensitive and selective detection of small amounts of nitroaromatic explosives, in particular 2,4,6-trinitrophenol (TNP), 2,4-dinitrotoluene (DNT), and 2,4,6-trinitrotoluene (TNT), has therefore been a critical challenge, owing to the increasing threat of explosive-based terrorism and the need for environmental monitoring of drinking and waste water. In addition, the extensive use of TNP in several other areas, such as burn ointments, pesticides, and the glass and leather industries, has resulted in environmental accumulation that eventually contaminates soil and aquatic systems. To date, a large number of elegant methods, including fluorimetry, gas chromatography, and mass, ion-mobility, and Raman spectrometry, have been successfully applied to explosive detection. Among these efforts, fluorescence-quenching methods based on the FRET mechanism show good assembly flexibility and high selectivity and sensitivity. Here, we report a FRET-based sensor system for the highly selective detection of NACs such as TNP, DNT, and TNT. The sensor system is composed of a copolymer, poly[(N,N-dimethylacrylamide)-co-(Boc-Trp-EMA)] (RP), bearing a tryptophan derivative in the side chain as donor, and a dansyl-tagged copolymer, P(MMA-co-Dansyl-Ala-HEMA) (DCP), as acceptor. Initially, the inherent fluorescence of the RP copolymer is quenched by non-radiative energy transfer to DCP, which only happens once the two molecules are within the Förster critical distance (R₀). 
The excellent spectral overlap (Jλ = 6.08×10¹⁴ nm⁴M⁻¹cm⁻¹) between the donor's (RP) emission profile and the acceptor's (DCP) absorption profile makes them an efficient FRET pair, as further confirmed by the high rate of energy transfer from RP to DCP (0.87 ns⁻¹) and by lifetime measurements using time-correlated single photon counting (TCSPC), which validated a FRET efficiency of 64%. This FRET pair exhibited a specific fluorescence response to NACs such as DNT, TNT, and TNP, with limits of detection (LODs) of 5.4, 2.3, and 0.4 µM, respectively. The detection of NACs occurs with high sensitivity through quenching of the FRET photoluminescence signal, induced by photo-induced electron transfer (PET) from the electron-rich FRET pair to the electron-deficient NAC molecules. The estimated Stern-Volmer constants (KSV) for DNT, TNT, and TNP are 6.9 × 10³, 7.0 × 10³, and 1.6 × 10⁴ M⁻¹, respectively. The mechanistic details of the molecular interactions, established by time-resolved fluorescence, steady-state fluorescence, and absorption spectroscopy, confirmed that the sensing process is of mixed type, i.e., both dynamic and static quenching, as the lifetime of the FRET system (0.73 ns) is reduced to 0.55, 0.57, and 0.61 ns by DNT, TNT, and TNP, respectively. In summary, the simplicity and sensitivity of this novel FRET sensor open up the possibility of designing optical sensors for various NACs on a single platform, toward a multimodal sensor for environmental monitoring and future field-based studies.
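The two figures of merit above follow from simple relations: the FRET efficiency from donor lifetimes, E = 1 − τ_DA/τ_D, and the Stern-Volmer constant as the slope of F₀/F versus quencher concentration, F₀/F = 1 + K_SV[Q]. A minimal sketch of both computations, with purely illustrative numbers (not the paper's data):

```python
import numpy as np

# FRET efficiency from donor lifetimes: E = 1 - tau_DA / tau_D,
# where tau_D is the donor-only lifetime and tau_DA the donor lifetime
# in the presence of the acceptor (hypothetical values, in ns).
tau_d, tau_da = 2.0, 0.72
E = 1.0 - tau_da / tau_d          # -> 0.64, i.e. 64% efficiency

# Stern-Volmer constant from a quenching titration:
# F0/F = 1 + Ksv*[Q], so Ksv is the slope of F0/F against [Q].
q = np.array([0.0, 2e-6, 4e-6, 8e-6])      # quencher concentration, M
f = np.array([1.000, 0.972, 0.945, 0.897])  # normalised fluorescence
ksv, intercept = np.polyfit(q, f[0] / f, 1)  # slope = Ksv (M^-1)
```

In a mixed static/dynamic regime like the one reported, a purely lifetime-based Stern-Volmer plot (τ₀/τ versus [Q]) isolates the dynamic contribution, which is how the two mechanisms are usually disentangled.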

Keywords: FRET, nitroaromatics, Stern-Volmer constant, tryptophan- and dansyl-tagged copolymers

Procedia PDF Downloads 101
32 A Modular Solution for Large-Scale Critical Industrial Scheduling Problems with Coupling of Other Optimization Problems

Authors: Ajit Rai, Hamza Deroui, Blandine Vacher, Khwansiri Ninpan, Arthur Aumont, Francesco Vitillo, Robert Plana

Abstract:

Large-scale critical industrial scheduling problems are based on the Resource-Constrained Project Scheduling Problem (RCPSP) and necessitate integration with other optimization problems (e.g., vehicle routing, supply chain, or unique industrial ones), thus requiring practical solutions, i.e., modular and computationally efficient with feasible solutions. To the best of our knowledge, the current industrial state of the art does not address this holistic problem. We propose an original modular solution that answers the issues exhibited by the delivery of complex projects. With three interlinked entities (projects, tasks, resources), each with its own constraints, it uses a greedy heuristic with a dynamic cost function for each task and a situational assessment at each time step. It handles large-scale data and can be easily integrated with other optimization problems, with existing industrial tools, and with unique constraints as required by the use case. The solution has been tested and validated by domain experts on three use cases: outage management in Nuclear Power Plants (NPPs), planning of a future NPP maintenance operation, and an application in the defense industry on supply chain and factory relocation. In the first use case, the solution, in addition to resource availability and the tasks’ logical relationships, also integrates several project-specific constraints for outage management, such as handling resource incompatibilities, updating task priorities, pausing tasks in specific circumstances, and adjusting dynamic units of resources. With more than 20,000 tasks and multiple constraints, the solution provides a feasible schedule within 10-15 minutes on a standard computer. This responsiveness fits the nature of the problem and the requirement to run several scenarios (30-40 simulations) before finalizing the schedules. 
The second use case is a factory relocation project where production lines must be moved to a new site while ensuring continuity of production. This raises the challenge of merging job-shop scheduling with the RCPSP under location constraints. Our solution allows the automation of the production tasks while taking the expected production rate into account; the simulation algorithm manages the use and movement of resources and products so as to respect a given relocation scenario. The last use case models a future maintenance operation in an NPP. The project contains complex, hard constraints, such as Finish-Start precedence relationships (successor tasks have to start immediately after their predecessors while respecting all constraints), shareable co-activity for managing workspaces, and requirements on the state of "cyclic" resources (resources with multiple possible states, only one of which is active at a time) needed to perform tasks (a task can require a unique combination of several cyclic resources). Our solution satisfies the requirement of minimizing the state changes of the cyclic resources coupled with makespan minimization; it solves an instance with 80 cyclic resources and 50 incompatibilities between levels in less than a minute. In conclusion, we propose a fast and feasible modular approach to various industrial scheduling problems, validated by domain experts and compatible with existing industrial tools. This approach can be further enhanced by using machine learning techniques on historically repeated tasks to gain insights for delay-risk mitigation measures.
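The core of the approach, a greedy heuristic that scores each eligible task with a dynamic cost function and checks the resource situation at each step, can be sketched as a serial schedule-generation scheme. The data model, the single renewable resource, and the "longest task first" cost function below are hypothetical simplifications for illustration, not the authors' actual tool:

```python
def greedy_schedule(tasks, capacity):
    """Minimal serial schedule-generation sketch for an RCPSP-like
    problem.  tasks: {name: (duration, resource_demand, predecessors)};
    capacity: units of one renewable resource available per time step."""
    start, finish = {}, {}
    usage = {}                                   # time step -> units in use
    while len(finish) < len(tasks):
        # Eligible = unscheduled tasks whose predecessors are all done.
        eligible = [n for n in tasks if n not in finish
                    and all(p in finish for p in tasks[n][2])]
        # Dynamic cost function: here, simply prefer the longest task.
        name = max(eligible, key=lambda n: tasks[n][0])
        dur, demand, preds = tasks[name]
        t = max((finish[p] for p in preds), default=0)
        # Slide right until the resource profile admits the task.
        while any(usage.get(t + k, 0) + demand > capacity
                  for k in range(dur)):
            t += 1
        for k in range(dur):
            usage[t + k] = usage.get(t + k, 0) + demand
        start[name], finish[name] = t, t + dur
    return start, finish
```

A production tool would replace the scalar cost with a situational score (priorities, incompatibilities, cyclic-resource states) re-evaluated at every step, but the feasibility guarantee comes from the same construction: each task is placed only where precedence and resource constraints hold.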

Keywords: deterministic scheduling, optimization coupling, modular scheduling, RCPSP

Procedia PDF Downloads 155
31 Exploring the Effect of Nursing Students’ Self-Directed Learning and Technology Acceptance through the Use of Digital Game-Based Learning in Medical Terminology Course

Authors: Hsin-Yu Lee, Ming-Zhong Li, Wen-Hsi Chiu, Su-Fen Cheng, Shwu-Wen Lin

Abstract:

Background: The use of medical terminology is essential for professional nurses in clinical practice. However, most nursing students consider traditional lecture-based teaching of medical terminology boring and overly conceptual, and they lack the motivation to learn. How to enhance nursing students’ self-directed learning and improve their learning outcomes in medical terminology is thus an issue worth discussing. Digital game-based learning is a learner-centered way of learning. Past literature shows that the most common game-based learning for language education has involved immersive games and teaching games; this study therefore selected role-playing games (RPGs) and digital puzzle games for observation and comparison. It is interesting to explore whether digital game-based learning has a positive impact on nursing students’ learning of medical terminology and whether students adapt well to this type of learning. The results can serve as a reference for institutes and teachers on teaching medical terminology. Objective: The purpose of this research is to explore the impact of RPGs and puzzle games, respectively, on nursing students’ self-directed learning and technology acceptance. The study further discusses whether different game types have different influences on students’ self-directed learning and technology acceptance. 
Methods: A quasi-experimental design was adopted so that repeated measures between two groups could be conveniently conducted. 103 nursing students from a nursing college in Northern Taiwan participated in the study. During the three-week experiment, the experimental group (n=52) received “traditional teaching + RPG” while the control group (n=51) received “traditional teaching + puzzle games”. Results: 1. On self-directed learning: For each game type, there were significant differences between the delayed tests and the pre- and post-tests of each group. However, there were no significant differences between the two game types. 2. On technology acceptance: For the experimental group, after the intervention of RPG, there were no significant differences in technology acceptance. For the control group, after the intervention of puzzle games, there were significant differences in technology acceptance. Pearson correlation coefficients and path analysis conducted on the results of the two groups revealed that the dimensions were highly correlated and reached statistical significance. Yet, the comparison of technology acceptance between the two game types did not reach statistical significance. Conclusion and Recommendations: This study found that, through using different digital games for learning, nursing students effectively improved their self-directed learning. Students’ technology acceptance was also high for both digital game types, and each dimension was significantly correlated. The results of the experimental group showed that, through the scenarios of the RPG, students gained a deeper understanding of medical terminology, reaching the ‘Understand’ level of Bloom’s taxonomy. The results of the control group indicated that digital puzzle games could help students memorize and review medical terminology, reaching the ‘Remember’ level of Bloom’s taxonomy. 
The findings suggest that teachers of medical terminology could use digital games to assist their teaching according to their cognitive learning goals. Adequate use of such games could help improve students’ self-directed learning and further enhance their learning outcomes in medical terminology.
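The analyses this abstract describes (paired comparisons of pre/post/delayed scores, plus Pearson correlations between technology-acceptance dimensions) can be sketched in Python. All numbers below are invented placeholders, not the study's data; the group size matches the reported n=52 only for illustration.

```python
# Hypothetical sketch of the within-group repeated-measures comparison and
# the correlation between technology-acceptance dimensions. Data invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 52  # size of the hypothetical RPG group

pre = rng.normal(60, 10, n)
post = pre + rng.normal(5, 4, n)      # gain right after the intervention
delayed = pre + rng.normal(8, 4, n)   # gain retained at the delayed test

# Within-group paired comparison (delayed vs. pre), as in the study design
t, p = stats.ttest_rel(delayed, pre)
print(f"delayed vs pre: t={t:.2f}, p={p:.4f}")

# Correlation between two hypothetical technology-acceptance dimensions,
# e.g. perceived usefulness vs. perceived ease of use
usefulness = rng.normal(4.0, 0.5, n)
ease = 0.7 * usefulness + rng.normal(0, 0.3, n)
r, p_r = stats.pearsonr(usefulness, ease)
print(f"Pearson r={r:.2f}, p={p_r:.4f}")
```

A fuller analysis would also check ANOVA assumptions and correct for multiple comparisons, which this sketch omits.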

Keywords: digital game-based learning, medical terminology, nursing education, self-directed learning, technology acceptance model

Procedia PDF Downloads 139
30 A Report on the eLearning Programme of the Irish College of General Practitioners, Which Can Address the Continuing Education Needs of Primary Care Physicians

Authors: Nicholas P. Fenlon, Aisling Lavelle, David McLean, Margaret O'Riordan

Abstract:

Background: The case for continuing professional development has been well made, and it was formalized in Ireland in recent years through the enactment of the Medical Practitioner’s Act, which requires registered medical practitioners to complete a minimum of 50 hours of CPD each year. The ICGP, which has been providing CPD opportunities to its members for many years, has responded to this need by developing a series of evidence-based, high-quality, multimedia modules across a range of clinical and non-clinical areas. (More traditional education opportunities are also still provided by the college.) Overview of Programme: The first module was released in September 2011, since when the eLearning programme has grown steadily; there are currently almost 20 modules available, with a further 5 in production. Each module contains three to six 10-minute video lessons, which use a combination of graphics, images, text, voice-over and clinical clips. These are supported by supplementary videos of expert pieces-to-camera, Q&As with content experts, clinical scenarios, external links, relevant documentation and other resources. Successful completion of MCQs results in a Certificate of Completion, which can be printed or stored in a Professional Competence portfolio. The Medical Practitioner’s Act requires doctors to gather CPD credits across 8 domains of practice, and various eLearning modules have been developed to address each. For instance, modules with strong clinical content include Management of Hypertension, Management of COPD, and Management of Asthma. Other modules focus on health promotion, such as Promoting Smoking Cessation, Promoting Physical Activity, and Addressing Childhood Obesity. Modules where communication skills are key include those on Suicide Prevention and Management of Depression. Other modules, currently in development, cover non-clinical topics around risk management, including Confidentiality, Consent, etc. 
Each module is developed by a core group, which includes, where possible, a GP with a special interest in the area and one or more content experts. The college works closely with a medical education consultant and a production company in developing and producing the modules. Modules can be accessed (with a password) through the ICGP website and are available free to all ICGP members. Summary of Evaluation: There are over 1,700 registered users to date (over 55% of college membership). The programme was evaluated using an online survey in 2013 (N = 144/950 – 12%), and the results were very positive overall but also provided material for the further improvement of the programme. Future Plans: While knowledge can be imparted well through eLearning, skills and attitudes are more difficult to influence through an online environment. The college is now developing a series of linked workshops, which will lead to ICGP Professional Competence Awards. The first pilot workshop is scheduled for February 2015 and is cardiology-themed. Participants will be required to complete the following 4 modules in advance of attending: Management of Hypertension, Management of Heart Failure, Promoting Smoking Cessation, and Promoting Physical Activity. The workshop will be case-based and interactive, addressing ECG interpretation in general practice. Conclusions: The ICGP has responded to members' needs for high-quality, evidence-based education delivered in a way that suits GPs.

Keywords: CPD, eLearning, evidence-based multimedia modules, Medical Practitioner’s Act

Procedia PDF Downloads 573
29 Metal Contamination in an E-Waste Recycling Community in Northeastern Thailand

Authors: Aubrey Langeland, Richard Neitzel, Kowit Nambunmee

Abstract:

Electronic waste, ‘e-waste’, refers generally to discarded electronics and electrical equipment, including products from cell phones and laptops to wires, batteries and appliances. While e-waste represents a transformative source of income in low- and middle-income countries, informal e-waste workers use rudimentary methods to recover materials, simultaneously releasing harmful chemicals into the environment and creating a health hazard for themselves and surrounding communities. Valuable materials such as precious metals, copper, aluminum, ferrous metals, plastic and components are recycled from e-waste. However, persistent organic pollutants such as polychlorinated biphenyls (PCBs) and some polybrominated diphenyl ethers (PBDEs), as well as heavy metals, are toxicants contained within e-waste and are of great concern to human and environmental health. The current study seeks to evaluate the environmental contamination resulting from informal e-waste recycling in a predominantly agricultural community in northeastern Thailand. To accomplish this objective, five types of environmental samples were collected during the period of July 2016 through July 2017 and analyzed for concentrations of eight metals commonly associated with e-waste recycling. Rice samples from the community were collected after harvest and analyzed using inductively coupled plasma mass spectrometry (ICP-MS) and graphite furnace atomic absorption spectroscopy (GF-AAS). Soil samples were collected and analyzed using methods similar to those used for the rice samples. Surface water samples were collected and analyzed using absorption colorimetry for three heavy metals. Environmental air samples were collected using a sampling pump and matched-weight PVC filters, then analyzed using Inductively Coupled Argon Plasma-Atomic Emission Spectroscopy (ICAP-AES). Finally, surface wipe samples were collected from surfaces in homes where e-waste recycling activities occur and were analyzed using ICAP-AES. 
Preliminary results [1] indicate that some rice samples have concentrations of lead and cadmium significantly higher than limits set by the United States Department of Agriculture (USDA) and the World Health Organization (WHO). Similarly, some soil samples show levels of copper, lead and cadmium more than twice the maximum permissible level set by the USDA and WHO, and significantly higher than in other areas of Thailand. Surface water samples indicate that areas near e-waste recycling activities, particularly the burning of e-waste products, show increased levels of cadmium, lead and copper in surface waters. This is of particular concern given that many of the surface waters tested are used in the irrigation of crops. Surface wipe samples measured concentrations of metals commonly associated with e-waste, suggesting a danger of ingestion of metals during cooking and other activities. Of particular concern is the relevance of surface contamination of metals to child health. Finally, air sampling showed that the burning of e-waste presents a serious health hazard to workers and the environment through inhalation and deposition [2]. Our research suggests a need for improved methods of e-waste recycling that allow workers to continue this valuable revenue stream in a sustainable fashion that protects both human and environmental health. [1] Statistical analysis to be finished in October 2017 due to follow-up field studies occurring in July and August 2017. [2] Still awaiting complete analytic results.
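The core comparison in this study, measured concentrations against regulatory limits, can be sketched as a simple screening routine. The limit values and sample readings below are placeholders for illustration only, not the study's measurements or the actual USDA/WHO limits.

```python
# Screen hypothetical rice samples against assumed regulatory limits,
# mirroring the exceedance comparisons described above. All values invented.
ASSUMED_LIMITS_MG_KG = {"lead": 0.2, "cadmium": 0.4}  # placeholder rice limits

samples = [  # hypothetical rice readings in mg/kg
    {"id": "R01", "lead": 0.15, "cadmium": 0.55},
    {"id": "R02", "lead": 0.31, "cadmium": 0.10},
]

def exceedances(sample, limits):
    """Return the list of metals whose concentration exceeds its limit."""
    return [metal for metal, lim in limits.items() if sample[metal] > lim]

for s in samples:
    over = exceedances(s, ASSUMED_LIMITS_MG_KG)
    print(s["id"], "exceeds:", over or "none")
```

In practice such a screen would be followed by the statistical testing the abstract mentions, since single readings near a limit are not conclusive.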

Keywords: e-waste, environmental contamination, informal recycling, metals

Procedia PDF Downloads 339
28 Bridging Minds and Nature: Revolutionizing Elementary Environmental Education Through Artificial Intelligence

Authors: Hoora Beheshti Haradasht, Abooali Golzary

Abstract:

Environmental education plays a pivotal role in shaping the future stewards of our planet. Leveraging the power of artificial intelligence (AI) in this endeavor presents an innovative approach to captivate and educate elementary school children about environmental sustainability. This paper explores the application of AI technologies in designing interactive and personalized learning experiences that foster curiosity, critical thinking, and a deep connection to nature. By harnessing AI-driven tools, virtual simulations, and personalized content delivery, educators can create engaging platforms that empower children to comprehend complex environmental concepts while nurturing a lifelong commitment to protecting the Earth. With the pressing challenges of climate change and biodiversity loss, cultivating an environmentally conscious generation is imperative. Integrating AI in environmental education revolutionizes traditional teaching methods by tailoring content, adapting to individual learning styles, and immersing students in interactive scenarios. This paper delves into the potential of AI technologies to enhance engagement, comprehension, and pro-environmental behaviors among elementary school children. Modern AI technologies, including natural language processing, machine learning, and virtual reality, offer unique tools to craft immersive learning experiences. Adaptive platforms can analyze individual learning patterns and preferences, enabling real-time adjustments in content delivery. Virtual simulations, powered by AI, transport students into dynamic ecosystems, fostering experiential learning that goes beyond textbooks. AI-driven educational platforms provide tailored content, ensuring that environmental lessons resonate with each child's interests and cognitive level. By recognizing patterns in students' interactions, AI algorithms curate customized learning pathways, enhancing comprehension and knowledge retention. 
Utilizing AI, educators can develop virtual field trips and interactive nature explorations. Children can navigate virtual ecosystems, analyze real-time data, and make informed decisions, cultivating an understanding of the delicate balance between human actions and the environment. While AI offers promising educational opportunities, ethical concerns must be addressed. Safeguarding children's data privacy, ensuring content accuracy, and avoiding biases in AI algorithms are paramount to building a trustworthy learning environment. By merging AI with environmental education, educators can empower children not only with knowledge but also with the tools to become advocates for sustainable practices. As children engage in AI-enhanced learning, they develop a sense of agency and responsibility to address environmental challenges. The application of artificial intelligence in elementary environmental education presents a groundbreaking avenue to cultivate environmentally conscious citizens. By embracing AI-driven tools, educators can create transformative learning experiences that empower children to grasp intricate ecological concepts, forge an intimate connection with nature, and develop a strong commitment to safeguarding our planet for generations to come.

Keywords: artificial intelligence, environmental education, elementary children, personalized learning, sustainability

Procedia PDF Downloads 43
27 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques

Authors: Stefan K. Behfar

Abstract:

The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. 
Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application.
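The first two building blocks described above, a transaction graph with temporal edges and probabilistic sampling of a representative subset, can be sketched minimally in Python. The transaction list, edge rule, and sampling rate below are illustrative placeholders, not the authors' actual pipeline.

```python
# Minimal sketch: transactions as graph nodes with temporal edges,
# plus uniform probabilistic sampling of a subset for cheaper analysis.
import random
from collections import defaultdict

# Each transaction is a node; an edge (a -> b) means b follows a in time.
txs = [f"tx{i}" for i in range(1000)]
graph = defaultdict(list)
for a, b in zip(txs, txs[1:]):
    graph[a].append(b)  # temporal successor edge

# Probabilistic sampling: keep each transaction with probability p,
# giving a statistically representative subset of the full graph.
random.seed(42)
p = 0.1
sample = [tx for tx in txs if random.random() < p]
print(f"sampled {len(sample)} of {len(txs)} transactions")
```

A real Ethereum analysis would read blocks from a node RPC endpoint and use weighted rather than uniform sampling where transaction value matters; both are orthogonal to the structure shown here.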

Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing

Procedia PDF Downloads 39
26 Potential Benefits and Adaptation of Climate Smart Practices by Small Farmers under a Three-Crop Rice Production System in Vietnam

Authors: Azeem Tariq, Stephane De Tourdonnet, Lars Stoumann Jensen, Reiner Wassmann, Bjoern Ole Sander, Quynh Duong Vu, Trinh Van Mai, Andreas De Neergaard

Abstract:

Rice growing areas are increasing to meet the food demand of a growing population. Rice is mostly grown on lowland smallholder fields in most parts of the world, and flooded rice is one of the major sources of greenhouse gas (GHG) emissions from agricultural fields. Strategies such as altering water and residue (carbon) management practices are considered essential to mitigate GHG emissions from flooded rice systems. The actual implementation and potential of these measures on small farmers' fields remain challenging. A field study was conducted in the Red River Delta in Northern Vietnam to identify the potential challenges and barriers that small rice farmers face in implementing climate-smart rice practices. The objective of this study was to develop climate-smart rice prototypes and assess their feasibility under actual farmer conditions. A field- and science-oriented framework was used to meet this objective. The methodological framework comprised six steps: i) identification of stakeholders and possible options, ii) assessment of barriers and of the drawbacks/advantages of new technologies, iii) prototype design, iv) assessment of the mitigation potential of each prototype, v) scenario building, and vi) scenario assessment. A farm survey was conducted to identify existing farm practices and the major constraints of small rice farmers. We proposed two water management options (pre-transplant + midseason drainage and early + midseason drainage) and one straw management option (full residue incorporation), keeping in view the farmers' constraints and barriers to implementation. To test the new typologies against existing prototypes (midseason drainage, partial residue incorporation) under local farmer conditions, a participatory field experiment was conducted for two consecutive rice seasons on farmers' fields. 
Following the results of each season, a workshop was conducted with stakeholders (farmers, village leaders, cooperatives, irrigation staff, extensionists, agricultural officers) at the local and district levels to get feedback on the newly tested prototypes and to develop possible scenarios for climate-smart rice production practices. The farm survey showed that the non-availability of cheap labor and the lack of alternatives for straw management lead small farmers to burn residues in the fields, apart from what they use for composting or other purposes. Our field results revealed that early-season drainage significantly mitigates (by 40-60%) the methane emissions from residue incorporation. Early-season drainage was more efficient and easier to control under a cooperatively managed water system than under individually managed systems, and it leads to both economic (9-11% higher rice yield, lower cost of production, reduced nutrient losses) and environmental (mitigated methane emissions) benefits. The participatory field study allowed the assessment of the adaptation potential and possible benefits of climate-smart practices on small farmers' fields. If farmers have no other residue management option, full residue incorporation with early plus midseason drainage is an adaptable and beneficial (both environmentally and economically) management option for small rice farmers.
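The headline mitigation figure above, a 40-60% reduction in methane from residue incorporation, translates into a simple arithmetic range. The baseline emission value in this sketch is a placeholder, not a figure from the study.

```python
# Back-of-envelope sketch of a 40-60% methane reduction from early-season
# drainage. The baseline seasonal emission figure is hypothetical.
baseline_ch4 = 100.0  # placeholder seasonal CH4 from residue incorporation, kg/ha

def mitigated_range(baseline, low=0.40, high=0.60):
    """Remaining emission range after a reduction of `low` to `high`."""
    return baseline * (1 - high), baseline * (1 - low)

lo, hi = mitigated_range(baseline_ch4)
print(f"remaining CH4: {lo:.0f}-{hi:.0f} kg/ha")
```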

Keywords: adaptation, climate smart agriculture, constraints, smallholders

Procedia PDF Downloads 239
25 Understanding the Impact of Spatial Light Distribution on Object Identification in Low Vision: A Pilot Psychophysical Study

Authors: Alexandre Faure, Yoko Mizokami, Éric Dinet

Abstract:

In recent years, different studies have demonstrated the potential of light to assist visually impaired people in their indoor mobility. Implementing smart lighting systems for selective visual enhancement, especially designed for low-vision people, is an approach that breaks with existing visual aids. The appearance of the surface of an object is significantly influenced by the lighting conditions and the constituent materials of the object, so objects may look different from what is expected. Lighting conditions therefore play an important part in accurate material recognition. The main objective of this work was to investigate the effect of the spatial distribution of light on object identification in the context of low vision. The purpose was to determine whether, and which, specific lighting approaches should be preferred for visually impaired people. A psychophysical experiment was designed to study the ability of individuals to identify the smaller cube of a pair under different lighting diffusion conditions. Participants were divided into two distinct groups: a reference group of observers with normal or corrected-to-normal visual acuity and a test group, in which observers were required to wear visual impairment simulation glasses. All participants were presented with pairs of cubes in a "miniature room" and were instructed to estimate the relative size of the two cubes. The miniature room replicates real-life settings, adorned with decorations and separated from external light sources by black curtains. The correlated color temperature was set to 6000 K, and the horizontal illuminance at object level to approximately 240 lux. The objects presented for comparison consisted of 11 white cubes and 11 black cubes of different sizes manufactured with a 3D printer. Participants were seated 60 cm away from the objects. Two different levels of light diffuseness were implemented. 
After receiving instructions, participants were asked to judge whether the two presented cubes were the same size or whether one was smaller. They provided one of five possible answers: "Left one is smaller," "Left one is smaller but unsure," "Same size," "Right one is smaller," or "Right one is smaller but unsure." The method of constant stimuli was used, presenting stimulus pairs in random order to prevent learning and expectation biases. Each pair consisted of a comparison stimulus and a reference cube. A psychometric function was constructed to link stimulus value with the frequency of correct detection, aiming to determine the 50% correct detection threshold. The collected data were analyzed through graphs illustrating participants' responses to stimuli, with accuracy increasing as the size difference between cubes grew. Statistical analyses, including two-way ANOVA tests, showed that light diffuseness had no significant impact on the difference threshold, whereas object color had a significant influence in low-vision scenarios. The first results and trends derived from this pilot experiment strongly suggest that future investigations should explore extreme diffusion conditions to comprehensively assess the impact of diffusion on object identification. For example, the first findings related to light diffuseness may be attributed to the range of manipulation, emphasizing the need to explore how other lighting-related factors interact with diffuseness.
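The threshold estimation described above, fitting a psychometric function to detection frequencies and reading off the 50% point, can be sketched as follows. The stimulus levels, proportions correct, and the choice of a logistic function are illustrative assumptions, not the study's data or exact model.

```python
# Fit a logistic psychometric function to hypothetical detection data and
# estimate the 50%-correct threshold. All data points are invented.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, alpha, beta):
    """Logistic function: alpha is the 50% point, beta the slope."""
    return 1.0 / (1.0 + np.exp(-beta * (x - alpha)))

# Size difference between the two cubes (mm) vs. proportion correct
size_diff = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0])
p_correct = np.array([0.10, 0.25, 0.45, 0.65, 0.90, 0.98])

(alpha, beta), _ = curve_fit(psychometric, size_diff, p_correct, p0=[1.5, 2.0])
print(f"50% threshold ≈ {alpha:.2f} mm (slope {beta:.2f})")
```

Real analyses often use a cumulative Gaussian instead of a logistic and add lapse-rate parameters; the choice here is only for brevity.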

Keywords: lighting, low vision, visual aid, object identification, psychophysical experiment

Procedia PDF Downloads 37
24 A Systematic Review and Comparison of Non-Isolated Bi-Directional Converters

Authors: Rahil Bahrami, Kaveh Ashenayi

Abstract:

This paper presents a systematic classification and comparative analysis of non-isolated bi-directional DC-DC converters (BDCs). The increasing demand for efficient energy conversion in diverse applications has spurred the development of various converter topologies. In this study, we categorize bi-directional converters into three distinct classes: inverting, non-inverting, and interleaved. Each category is characterized by its unique operational characteristics and benefits. Furthermore, a practical comparison is conducted by evaluating the simulation results for each bi-directional converter. BDCs can be classified into isolated and non-isolated topologies. Non-isolated converters share a common ground between input and output, making them suitable for applications with minimal voltage change. They are easy to integrate, lightweight, and cost-effective but have limitations such as limited voltage gain, switching losses, and no protection against high voltages. Isolated converters use transformers to separate input and output, offering safety benefits, high voltage gain, and noise reduction. They are larger and more costly but are essential for automotive designs where safety is crucial. This paper focuses on non-isolated systems. The paper discusses the classification of non-isolated bi-directional converters based on several criteria. Common factors used for classification include topology, voltage conversion, control strategy, power capacity, voltage range, and application. These factors serve as a foundation for categorizing converters, although the specific scheme might vary depending on contextual, application-specific, or system-specific requirements. The paper presents a three-category classification for non-isolated bi-directional DC-DC converters: inverting, non-inverting, and interleaved. 
In the inverting category, converters produce an output voltage with reversed polarity compared to the input voltage, achieved through specific circuit configurations and control strategies. This is valuable in applications such as motor control and grid-tied solar systems. The non-inverting category consists of converters maintaining the same voltage polarity, useful in scenarios like battery equalization. Lastly, the interleaved category employs parallel converter stages to enhance power delivery and reduce current ripple. This classification framework enhances the comprehension and analysis of non-isolated bi-directional DC-DC converters. The findings contribute to a deeper understanding of the trade-offs and merits associated with different converter types. As a result, this work aids researchers, practitioners, and engineers in selecting appropriate bi-directional converter solutions for specific energy conversion requirements. The proposed classification framework and experimental assessment collectively enhance the comprehension of non-isolated bi-directional DC-DC converters, fostering advancements in efficient power management and utilization. The simulation process involves the use of PSIM to model and simulate non-isolated bi-directional converters from both the inverting and non-inverting categories. The aim is to conduct a comprehensive comparative analysis of these converters, considering key performance indicators such as rise time, efficiency, ripple factor, and maximum error. This systematic evaluation provides valuable insights into the dynamic response, energy efficiency, output stability, and overall precision of the converters. The results of this comparison facilitate informed decision-making and potential optimizations, ensuring that the chosen converter configuration aligns effectively with the designated operational criteria and performance goals.
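Two of the performance indicators named above, ripple factor and rise time, can be computed directly from an output-voltage waveform. The synthetic waveform below (first-order rise plus a small switching ripple) is invented for illustration and does not come from the paper's PSIM simulations.

```python
# Compute ripple factor and 10-90% rise time on a synthetic converter
# output waveform. Target voltage, time constant, and ripple are invented.
import numpy as np

t = np.linspace(0, 10e-3, 10_000)            # 10 ms observation window
v_target = 48.0
# first-order rise to the target plus a small 50 kHz switching ripple
v_out = v_target * (1 - np.exp(-t / 1e-3)) + 0.2 * np.sin(2 * np.pi * 50e3 * t)

# Ripple factor over the settled portion (last 20% of the window)
settled = v_out[int(0.8 * len(v_out)):]
ripple_factor = (settled.max() - settled.min()) / settled.mean()

# 10-90% rise time (np.argmax on a boolean array finds the first crossing)
t10 = t[np.argmax(v_out >= 0.1 * v_target)]
t90 = t[np.argmax(v_out >= 0.9 * v_target)]
rise_time = t90 - t10

print(f"ripple factor ≈ {ripple_factor:.4f}, rise time ≈ {rise_time*1e3:.2f} ms")
```

The same two functions applied to exported PSIM waveforms would reproduce the comparison the abstract describes.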

Keywords: bi-directional, DC-DC converter, non-isolated, energy conversion

Procedia PDF Downloads 46
23 Unknown Groundwater Pollution Source Characterization in Contaminated Mine Sites Using Optimal Monitoring Network Design

Authors: H. K. Esfahani, B. Datta

Abstract:

Groundwater is one of the most important natural resources in many parts of the world; however, it is widely polluted due to human activities. Currently, effective and reliable groundwater management and remediation strategies are obtained through the characterization of groundwater pollution sources, where the data measured at monitoring locations are utilized to estimate the unknown pollutant source location and magnitude. However, accurately identifying the characteristics of contaminant sources is a challenging task due to uncertainties in predicting source flux injection, hydro-geological and geo-chemical parameters, and the concentration field measurements. Reactive transport of chemical species in contaminated groundwater systems, especially with multiple species, is a complex and highly non-linear geochemical process. Although sufficient concentration measurement data are essential to accurately identify source characteristics, available data are often sparse and limited in quantity. Therefore, this inverse problem of characterizing unknown groundwater pollution sources is often considered ill-posed, complex and non-unique. Different methods have been utilized to identify pollution sources; however, the linked simulation-optimization approach is one effective method to obtain acceptable results under uncertainties in complex real-life scenarios. With this approach, the numerical flow and contaminant transport simulation models are externally linked to an optimization algorithm, with the objective of minimizing the difference between the measured concentrations and the estimated pollutant concentrations at observation locations. Concentration measurement data are very important to accurately estimate pollution source properties; therefore, optimal design of the monitoring network is essential to gather adequate measured data at the desired times and locations. 
Due to budget and physical restrictions, an efficient and effective approach for groundwater pollutant source characterization is to design an optimal monitoring network, especially when only inadequate and arbitrary concentration measurement data are initially available. In this approach, preliminary concentration observation data are utilized for the preliminary identification of source location, magnitude and duration of source activity, and these results are used for monitoring network design. Further, feedback information from the monitoring network is used as input for sequential monitoring network design, to improve the identification of unknown source characteristics. To design an effective monitoring network of observation wells, optimization and interpolation techniques are used. A simulation model should be utilized to accurately describe the aquifer properties in terms of hydro-geochemical parameters and boundary conditions. However, the simulation of the transport processes becomes complex when the pollutants are chemically reactive. A three-dimensional transient flow and reactive contaminant transport process is considered. The proposed methodology uses HYDROGEOCHEM 5.0 (HGCH) as the simulation model for flow and transport processes with multiple chemically reactive species. Adaptive Simulated Annealing (ASA) is used as the optimization algorithm in the linked simulation-optimization methodology to identify the unknown source characteristics. Therefore, the aim of the present study is to develop a methodology to optimally design an effective monitoring network for pollution source characterization with reactive species in polluted aquifers. The performance of the developed methodology will be evaluated for an illustrative polluted aquifer site, for example an abandoned mine site in Queensland, Australia.
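The linked simulation-optimization idea above can be illustrated with a toy version: a stand-in forward model maps a candidate source (location, magnitude) to concentrations at observation wells, and simulated annealing searches for the source minimizing the data misfit. Everything below is a deliberately simplified placeholder, a 1-D exponential plume and plain annealing, not HYDROGEOCHEM 5.0 or ASA.

```python
# Toy linked simulation-optimization: recover a pollution source's
# location and magnitude from well observations via simulated annealing.
import math
import random

WELLS = [2.0, 5.0, 8.0]  # 1-D observation well positions (assumed)

def forward_model(src_x, magnitude):
    """Stand-in 'simulation': concentration at each well from one plume."""
    return [magnitude * math.exp(-abs(w - src_x)) for w in WELLS]

true_obs = forward_model(4.0, 10.0)  # synthetic "measured" concentrations

def misfit(params):
    """Objective: squared difference between simulated and measured data."""
    sim = forward_model(*params)
    return sum((s - o) ** 2 for s, o in zip(sim, true_obs))

# Plain simulated annealing over (source location, magnitude)
random.seed(1)
x = [1.0, 1.0]
best, best_cost = list(x), misfit(x)
temp = 1.0
for step in range(20_000):
    cand = [x[0] + random.gauss(0, 0.2), x[1] + random.gauss(0, 0.2)]
    d = misfit(cand) - misfit(x)
    if d < 0 or random.random() < math.exp(-d / temp):
        x = cand
        if misfit(x) < best_cost:
            best, best_cost = list(x), misfit(x)
    temp *= 0.9995  # geometric cooling schedule

print(f"recovered source: x≈{best[0]:.2f}, magnitude≈{best[1]:.2f}")
```

In the actual methodology the forward model is a 3-D reactive transport simulation, so each misfit evaluation is expensive, which is exactly why efficient optimizers and well-placed monitoring wells matter.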

Keywords: monitoring network design, source characterization, chemical reactive transport process, contaminated mine site

Procedia PDF Downloads 209
22 Design, Control and Implementation of 3.5 kW Bi-Directional Energy Harvester for Intelligent Green Energy Management System

Authors: P. Ramesh, Aby Joseph, Arya G. Lal, U. S. Aji

Abstract:

Integration of distributed green renewable energy sources together with battery energy storage is an inevitable requirement in a smart grid environment. To achieve this, an Intelligent Green Energy Management System (i-GEMS) needs to be incorporated to ensure coordinated operation between supply and load demand based on the hierarchy of Renewable Energy Sources (RES), battery energy storage and the distribution grid. A bi-directional energy harvester is an integral component of the i-GEMS and must meet the following technical challenges: (1) capability for bi-directional (buck/boost) operation; (2) reduction of circuit parasitics to suppress voltage spikes; (3) the converter startup problem; (4) high-frequency magnetics; (5) higher power density; (6) mode transition issues during battery charging and discharging. This paper addresses the above issues and aims to design, develop and implement a bi-directional energy harvester with galvanic isolation. In this work, the hardware architecture for a bi-directional energy harvester rated at 3.5 kW is developed in both Isolated Full Bridge Boost Converter (IFBBC) and Dual Active Bridge (DAB) converter configurations, using modular power electronics hardware that is identical for both the solar PV array and the battery energy storage. In the IFBBC, the current-fed full bridge is enabled and the voltage-fed full bridge is disabled through Pulse Width Modulation (PWM) pulses for boost mode of operation, and vice versa for buck mode. In the DAB converter, all switches are active so as to adjust the phase-shift angle between the primary and secondary full bridges, which in turn decides the power flow direction depending on the mode (boost/buck) of operation. 
Here, the control algorithm is developed to ensure regulation of the common DC link voltage and maximum power extraction from the renewable energy sources, depending on the selected (buck/boost) mode of operation. The circuit analysis and simulation study are conducted using PSIM 9.0 for three scenarios: (1) IFBBC with passive clamp, (2) IFBBC with active clamp, and (3) the DAB converter. In this work, a common hardware prototype for the bi-directional energy harvester, rated at 3.5 kW, is built for the IFBBC and DAB converter configurations. The power circuit is equipped with a suitable choice of MOSFETs, gate drivers with galvanic isolation, a high-frequency transformer, filter capacitors, and a filter boost inductor. The experiment was conducted for the IFBBC with passive clamp under boost mode, and the prototype confirmed the simulation results, showing a measured efficiency of 88% at 2.5 kW output power. The digital controller hardware platform is developed using the floating-point microcontroller TMS320F2806x from Texas Instruments. The firmware governing the operation of the bi-directional energy harvester is written in C and developed using Code Composer Studio. Comprehensive analyses of the power circuit design, the control strategy for battery charging/discharging under buck/boost modes, and a comparative performance evaluation using simulation and experimental results will be presented.
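The phase-shift control described for the DAB configuration is commonly modelled with the standard single-phase-shift (lossless) power transfer equation; the sketch below illustrates it with invented component values (the 380 V link, 48 V battery, turns ratio, switching frequency and leakage inductance are assumptions, not figures from the paper). The sign of the phase-shift angle sets the power-flow direction, mirroring the buck/boost mode transition described above.

```python
import math

def dab_power(v1, v2, phi, fs, L, n=1.0):
    """Single-phase-shift DAB power transfer (idealised, lossless model):
    P = n*V1*V2*phi*(pi - |phi|) / (2*pi^2*fs*L), with phi in radians.
    A positive phi transfers power from bridge 1 to bridge 2; a negative
    phi reverses the direction (battery charging vs. discharging)."""
    return n * v1 * v2 * phi * (math.pi - abs(phi)) / (2 * math.pi**2 * fs * L)

# Illustrative operating point: 380 V DC link, 48 V battery reflected
# through an assumed n = 8 transformer, 100 kHz switching, 30 uH leakage.
p_fwd = dab_power(380, 48, math.pi / 6, 100e3, 30e-6, n=8)
p_rev = dab_power(380, 48, -math.pi / 6, 100e3, 30e-6, n=8)
print(round(p_fwd), round(p_rev))  # equal magnitude, opposite sign
```

In a practical controller the DC-link voltage loop would set `phi`, so the same modular power stage serves both charging and discharging without reconfiguration.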

Keywords: bi-directional energy harvester, dual active bridge, isolated full bridge boost converter, intelligent green energy management system, maximum power point tracking, renewable energy sources

Procedia PDF Downloads 104
21 High Performance Lithium Ion Capacitors from Biomass Waste-Derived Activated Carbon

Authors: Makhan Maharjan, Mani Ulaganathan, Vanchiappan Aravindan, Srinivasan Madhavi, Jing-Yuan Wang, Tuti Mariana Lim

Abstract:

The ever-increasing energy demand has spurred research into high-performance energy storage systems able to fulfill these energy needs. Supercapacitors have potential applications as portable energy storage devices. In recent years, there has been intense research interest in enhancing the performance of supercapacitors by exploiting novel promising carbon precursors, tailoring the textural properties of carbons, and exploring various electrolytes and device types. In this work, we employed orange peel (a waste material) as the starting material and synthesized activated carbon by pyrolysis of KOH-impregnated orange peel char at 800 °C in an argon atmosphere. The resultant orange peel-derived activated carbon (OP-AC) exhibited a BET surface area of 1,901 m² g⁻¹, the highest surface area so far reported for orange peel-derived carbon. The pore size distribution (PSD) curve exhibits pores centered at 11.26 Å pore width, suggesting dominant microporosity. The high surface area of OP-AC accommodates more ions in the electrodes, and its well-developed porous structure facilitates fast diffusion of ions, which subsequently enhances electrochemical performance. The OP-AC was studied as the positive electrode in combination with different negative electrode materials, such as pre-lithiated graphite (LiC6) and Li4Ti5O12, for making hybrid capacitors. The lithium ion capacitor (LIC) fabricated using OP-AC with pre-lithiated graphite delivered a high energy density of ~106 Wh kg⁻¹. The energy density for the OP-AC||Li4Ti5O12 capacitor was ~35 Wh kg⁻¹. For comparison, OP-AC||OP-AC capacitors were studied in both aqueous (1M H2SO4) and organic (1M LiPF6 in EC-DMC) electrolytes, which delivered energy densities of 8.0 Wh kg⁻¹ and 16.3 Wh kg⁻¹, respectively. 
The cycling retentions obtained at a current density of 1 A g⁻¹ after 2500 cycles were ~85.8%, ~87.0%, ~82.2% and ~58.8% for the OP-AC||OP-AC (aqueous), OP-AC||OP-AC (organic), OP-AC||Li4Ti5O12 and OP-AC||LiC6 configurations, respectively. In addition, characterization studies were performed by elemental and proximate composition analysis, thermogravimetric analysis, field emission-scanning electron microscopy (FE-SEM), Raman spectroscopy, X-ray diffraction (XRD), Fourier transform-infrared (FTIR) spectroscopy, X-ray photoelectron spectroscopy (XPS) and N2 sorption isotherms. The morphological features from FE-SEM exhibited well-developed porous structures. Two typical broad peaks observed in the XRD pattern of the synthesized carbon imply an amorphous graphitic structure. The ID/IG ratio of 0.86 in the Raman spectra indicates a high degree of graphitization in the sample. The C 1s band spectra in XPS display well-resolved peaks related to carbon atoms in various chemical environments. The presence of functional groups is also corroborated by FTIR spectroscopy. These characterization studies revealed the synthesized carbon to be a promising electrode material for energy storage devices. Overall, the intriguing properties of OP-AC make it a promising new alternative electrode material for the development of high-energy lithium ion capacitors from abundant, low-cost, renewable biomass waste. The authors gratefully acknowledge the Agency for Science, Technology and Research (A*STAR)/Singapore International Graduate Award (SINGA) and Nanyang Technological University (NTU), Singapore for funding support.
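As a rough illustration of how specific energy and retention figures like those above are derived from cycling tests, the sketch below uses the common average-voltage approximation for a constant-current discharge. All numbers are hypothetical, not measurements from this work.

```python
def specific_energy_wh_per_kg(current_a, v_max, v_min, t_discharge_s, mass_kg):
    """Specific energy from a constant-current discharge, using the
    average-voltage approximation E = I * (Vmax + Vmin)/2 * t, with the
    result converted from joules to Wh and normalised by electrode mass."""
    v_avg = (v_max + v_min) / 2.0
    return current_a * v_avg * t_discharge_s / 3600.0 / mass_kg

def retention_pct(initial_capacitance, capacitance_after_cycling):
    """Cycling retention as a percentage of the initial value."""
    return 100.0 * capacitance_after_cycling / initial_capacitance

# Hypothetical cell: 0.1 A discharge from 4.0 V to 2.0 V in 120 s,
# with 1 g of total electrode mass.
e = specific_energy_wh_per_kg(0.1, 4.0, 2.0, 120, 0.001)
print(e)  # about 10 Wh/kg for these illustrative numbers
print(retention_pct(20.0, 17.4))  # 87% retention for an invented pair
```

Reported device-level figures additionally depend on whether the mass basis is active material only or the full cell, which is worth checking when comparing energy densities across papers.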

Keywords: energy storage, lithium-ion capacitors, orange peels, porous activated carbon

Procedia PDF Downloads 197
20 Local Energy and Flexibility Markets to Foster Demand Response Services within the Energy Community

Authors: Eduardo Rodrigues, Gisela Mendes, José M. Torres, José E. Sousa

Abstract:

Following the liberalisation of the electricity sector, a progressive engagement of consumers has been considered and targeted by sector regulatory policies. With the objective of promoting market competition while protecting consumers' interests, by transferring some of the upstream benefits to end users while reaching a fair distribution of system costs, different market models to value consumers' demand flexibility at the energy community level are envisioned. Local Energy and Flexibility Markets (LEFM) involve stakeholders interested in providing or procuring local flexibility for community, service and market value. Under the scope of DOMINOES, a European research project supported by Horizon 2020, the local market concept developed is expected to: • Enable consumer/prosumer empowerment, by allowing them to value their demand flexibility and Distributed Energy Resources (DER); • Value liquid local flexibility to support innovative distribution grid management, e.g., local balancing and congestion management, voltage control and grid restoration; • Ease the wholesale market uptake of DER, namely the aggregation of small-scale flexible loads as Virtual Power Plants (VPPs), facilitating Demand Response (DR) service provision; • Optimise the management and local sharing of Renewable Energy Sources (RES) in Medium Voltage (MV) and Low Voltage (LV) grids, through energy transactions within an energy community; • Enhance the development of energy markets through innovative business models, compatible with ongoing policy developments, that promote easy access of retailers and other service providers to the local markets, allowing them to take advantage of communities' flexibility to optimise their portfolios and subsequently their participation in external markets. The general concept proposed foresees a flow of market actions, technical validations, subsequent deliveries of energy and/or flexibility, and balance settlements. 
Since the market operation should be dynamic and capable of addressing different requests, either prioritising balancing and prosumer services or the system's operation, direct procurement of flexibility within the local market must also be considered. This paper aims to highlight the research on the definition of suitable DR models to be used by the Distribution System Operator (DSO), in case of technical needs, and by the retailer, mainly for portfolio optimisation and to resolve imbalances. The models, to be proposed and implemented within relevant smart distribution grid and microgrid validation environments, are focused on day-ahead and intraday operation scenarios, for predictive management and near-real-time control respectively, under the DSO's perspective. At the local level, the DSO will be able to procure flexibility in advance to tackle different grid constraints (e.g., demand peaks, forecasted voltage and current problems, and maintenance works), or during the operating day, to answer unpredictable constraints (e.g., outages, frequency deviations and voltage problems). Due to the inherent risks of their active market participation, retailers may resort to DR models to manage their portfolios, by optimising their market actions and resolving imbalances. The interaction among the market actors involved in DR activation and flexibility exchange is explained by a set of sequence diagrams for the DR modes of use from the DSO and energy provider perspectives: • DR for the DSO's predictive management – before the operating day; • DR for the DSO's real-time control – during the operating day; • DR for the retailer's day-ahead operation; • DR for the retailer's intraday operation.
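One minimal way to picture the direct procurement of flexibility described above is a merit-order clearing of offers against a DSO need. The sketch below is an illustrative simplification, not the DOMINOES market design; the provider names, volumes and prices are invented.

```python
def clear_flexibility(need_kw, offers):
    """Greedy merit-order clearing: accept the cheapest flexibility offers
    until the DSO's requested demand reduction is covered.
    offers: list of (provider, kw_offered, price_per_kwh)."""
    accepted, remaining = [], need_kw
    for provider, kw, price in sorted(offers, key=lambda o: o[2]):
        if remaining <= 0:
            break
        take = min(kw, remaining)       # accept only what is still needed
        accepted.append((provider, take, price))
        remaining -= take
    return accepted, max(remaining, 0)  # any shortfall left uncovered

# Invented offers from an aggregator (VPP), a prosumer and a retailer.
offers = [("VPP-A", 40, 0.12), ("Prosumer-B", 15, 0.08), ("Retailer-C", 30, 0.10)]
accepted, shortfall = clear_flexibility(60, offers)
print(accepted, shortfall)  # cheapest offers clear first; 60 kW fully covered
```

A real local market would add the technical validation step from the concept above, i.e. the DSO checking that the accepted activations do not themselves violate grid constraints before delivery and settlement.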

Keywords: demand response, energy communities, flexible demand, local energy and flexibility markets

Procedia PDF Downloads 74
19 Understanding the Impact of Resilience Training on Cognitive Performance in Military Personnel

Authors: Haji Mohammad Zulfan Farhi Bin Haji Sulaini, Mohammad Azeezudde’en Bin Mohd Ismaon

Abstract:

The demands placed on military athletes extend beyond physical prowess to encompass cognitive resilience in high-stress environments. This study investigates the effects of resilience training on the cognitive performance of military athletes, shedding light on the potential benefits and implications for optimizing their overall readiness. In a rapidly evolving global landscape, armed forces worldwide are recognizing the importance of cognitive resilience alongside physical fitness. The study employs a mixed-methods approach, incorporating quantitative cognitive assessments and qualitative data from military athletes undergoing resilience training programs. Cognitive performance is evaluated through a battery of tests, including measures of memory, attention, decision-making, and reaction time. The participants, drawn from various branches of the military, are divided into experimental and control groups. The experimental group undergoes a comprehensive resilience training program, while the control group receives traditional physical training without a specific focus on resilience. The initial findings indicate a substantial improvement in cognitive performance among military athletes who have undergone resilience training. These improvements are particularly evident in domains such as attention and decision-making. The experimental group demonstrated enhanced situational awareness, quicker problem-solving abilities, and increased adaptability in high-stress scenarios. These results suggest that resilience training not only bolsters mental toughness but also positively impacts cognitive skills critical to military operations. In addition to quantitative assessments, qualitative data is collected through interviews and surveys to gain insights into the subjective experiences of military athletes. 
Preliminary analysis of these narratives reveals that participants in the resilience training program report higher levels of self-confidence, emotional regulation, and an improved ability to manage stress. These psychological attributes contribute to their enhanced cognitive performance and overall readiness. Moreover, this study explores the potential long-term benefits of resilience training. By tracking participants over an extended period, we aim to assess the durability of cognitive improvements and their effects on overall mission success. Early results suggest that resilience training may serve as a protective factor against the detrimental effects of prolonged exposure to stressors, potentially reducing the risk of burnout and psychological trauma among military athletes. This research has significant implications for military organizations seeking to optimize the performance and well-being of their personnel. The findings suggest that integrating resilience training into the training regimen of military athletes can lead to a more resilient and cognitively capable force. This, in turn, may enhance mission success, reduce the risk of injuries, and improve the overall effectiveness of military operations. In conclusion, this study provides compelling evidence that resilience training positively impacts the cognitive performance of military athletes. The preliminary results indicate improvements in attention, decision-making, and adaptability, as well as increased psychological resilience. As the study progresses and incorporates long-term follow-ups, it is expected to provide valuable insights into the enduring effects of resilience training on the cognitive readiness of military athletes, contributing to the ongoing efforts to optimize military personnel's physical and mental capabilities in the face of ever-evolving challenges.
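The quantitative arm of the mixed-methods design above compares experimental and control group scores, which is commonly done with a two-sample test. The sketch below computes Welch's t statistic (which does not assume equal variances) on hypothetical scores; the numbers are invented for illustration and are not data from the study.

```python
import math
import statistics as st

def welch_t(a, b):
    """Welch's t statistic for two independent samples with possibly
    unequal variances: (mean_a - mean_b) / sqrt(var_a/n_a + var_b/n_b)."""
    va, vb = st.variance(a), st.variance(b)  # sample variances (n-1)
    return (st.mean(a) - st.mean(b)) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical attention-task scores (higher = better), not study data.
experimental = [78, 82, 85, 80, 88, 84, 79, 86]  # resilience-trained group
control      = [72, 75, 74, 70, 77, 73, 76, 71]  # traditional training only
t = welch_t(experimental, control)
print(round(t, 2))  # a large positive t favours the resilience-trained group
```

In practice the t statistic would be paired with its Welch–Satterthwaite degrees of freedom to obtain a p-value, and effect sizes would be reported alongside significance.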

Keywords: military athletes, cognitive performance, resilience training, cognitive enhancement program

Procedia PDF Downloads 47