Search results for: traffic flow optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8505

585 Thermodynamic Performance of a Low-Cost House Coated with Transparent Infrared Reflective Paint

Authors: Ochuko K. Overen, Edson L. Meyer

Abstract:

Uncontrolled heat transfer between the inner and outer space of low-cost housing through the thermal envelope results in indoor thermal discomfort. As a result, an excessive amount of energy is consumed for space heating and cooling. Thermo-optical properties determine the ability of paints to reduce the rate of heat transfer through the thermal envelope. The aim of this study is to analyze the thermal performance of a low-cost house whose inner wall surfaces are coated with transparent infrared reflective paint. The thermo-optical properties of the paint were analyzed using Scanning Electron Microscopy/Energy Dispersive X-ray spectroscopy (SEM/EDX), Fourier Transform Infrared spectroscopy (FTIR), and a thermal photographic technique. Indoor and ambient meteorological parameters, such as air temperature, relative humidity, solar radiation, and wind speed and direction, were monitored at a low-cost house in the Golf Course settlement, South Africa. The monitoring covered both winter and summer periods, before and after coating. The thermal performance of the coated walls was evaluated using time lag and decrement factor. The SEM image shows that the coat is transparent to light. The presence of Al as Al2O and other elements was revealed by the EDX spectrum. Before coating, the average decrement factor of the walls in summer was 0.773, with a corresponding time lag of 1.3 hours. In winter, the average decrement factor and corresponding time lag were 0.467 and 1.6 hours, respectively. After coating, the average decrement factor and corresponding time lag in summer were 0.533 and 2.3 hours, respectively. In winter, an average decrement factor of 1.120 and a corresponding time lag of 3 hours were observed. The findings show that the performance of the coats is influenced by the seasons. 
With a 74% reduction in decrement factor and a 1.4-hour increase in time lag in winter, the coatings appear better able to retain heat within the inner space of the house than to prevent heat flow into it. In conclusion, the results show that transparent infrared reflective paint can reduce the propagation of heat flux through building walls. Hence, it can serve as a remedy for the poor thermal performance of low-cost housing in South Africa.
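The decrement factor and time lag reported above can be recovered from paired wall-surface temperature series. A minimal sketch, not the authors' code; synthetic sinusoidal data, hourly sampling, and a single daily peak are assumed:

```python
import numpy as np

def decrement_factor_and_time_lag(t_outer, t_inner, dt_hours=1.0):
    """Decrement factor: ratio of inner to outer surface temperature
    amplitude. Time lag: delay (hours) between the outer and inner
    daily peaks (assumes one peak per series)."""
    f = (t_inner.max() - t_inner.min()) / (t_outer.max() - t_outer.min())
    lag = (np.argmax(t_inner) - np.argmax(t_outer)) * dt_hours
    return f, lag

# Synthetic 24 h cycle: outer surface swings +/-10 degC around 20 degC;
# inner surface swings +/-5 degC and peaks 2 h later.
hours = np.arange(24)
t_out = 20 + 10 * np.sin(2 * np.pi * (hours - 6) / 24)
t_in = 20 + 5 * np.sin(2 * np.pi * (hours - 8) / 24)
f, lag = decrement_factor_and_time_lag(t_out, t_in)
print(f, lag)
```

A lower decrement factor and a longer time lag both indicate a wall that damps and delays the outdoor temperature wave more effectively.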

Keywords: energy efficiency, decrement factor, low-cost housing, paints, rural development, thermal comfort, time lag

Procedia PDF Downloads 273
584 Calibration of Residential Buildings Energy Simulations Using Real Data from an Extensive in situ Sensor Network – A Study of Energy Performance Gap

Authors: Mathieu Bourdeau, Philippe Basset, Julien Waeytens, Elyes Nefzaoui

Abstract:

As residential buildings account for a third of the overall energy consumption and greenhouse gas emissions in Europe, building energy modeling is an essential tool to reach energy efficiency goals. In the energy modeling process, calibration is a mandatory step to obtain accurate and reliable energy simulations. Nevertheless, the comparison between simulation results and the actual building energy behavior often highlights a significant performance gap. The literature discusses different origins of energy performance gaps, from building design to building operation. Building operation description in energy models, especially energy usages and users' behavior, plays an important role in the reliability of simulations but is also the most accessible target for post-occupancy energy management and optimization. Therefore, the present study discusses results on the calibration of residential building energy models using real operation data. Data are collected through a network of more than 180 sensors and advanced energy meters deployed in three collective residential buildings undergoing major retrofit actions. The sensor network is implemented at building scale and in an eight-apartment sample. Data are collected for over one and a half years and cover building energy behavior (thermal and electrical), indoor environment, inhabitants' comfort, occupancy, occupants' behavior and energy uses, and local weather. Building energy simulations are performed using a physics-based building energy modeling software (Pleiades), where the buildings' features are implemented according to the buildings' thermal regulation code compliance study and the retrofit project technical files. Sensitivity analyses are performed to highlight the most energy-driving building features for each end-use. These features are then compared with the collected post-occupancy data. 
Energy-driving features are progressively replaced with field data for a step-by-step calibration of the energy model. Results of this study provide an analysis of the energy performance gap on an existing residential case study under deep retrofit actions. They highlight the impact of the different building features on the energy behavior and the performance gap in this context, such as temperature setpoints, indoor occupancy, and the building envelope properties, but also domestic hot water usage and heat gains from electric appliances. The benefits of inputting field data from an extensive instrumentation campaign instead of standardized scenarios are also described. Finally, the exhaustive instrumentation solution provides useful insights into the needs, advantages, and shortcomings of the implemented sensor network for its replicability on a larger scale and for different use cases.
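The step-by-step replacement of standardized inputs with field data can be sketched as a loop that swaps one input at a time and tracks a calibration metric such as CV(RMSE). This is an illustrative toy, not the Pleiades workflow: the simulate() function, its three inputs, and all numbers are hypothetical stand-ins:

```python
import math

def cvrmse(measured, simulated):
    """Coefficient of variation of the RMSE (%), a common calibration
    metric for building energy models."""
    n = len(measured)
    rmse = math.sqrt(sum((m - s) ** 2 for m, s in zip(measured, simulated)) / n)
    return 100.0 * rmse / (sum(measured) / n)

def simulate(setpoint, occupancy, dhw_use):
    """Toy stand-in for a building model: 12 monthly values driven
    linearly by three operation inputs."""
    return [setpoint * 2.0 + occupancy * 1.5 + dhw_use for _ in range(12)]

measured = simulate(20.5, 3.2, 4.1)  # pretend these are meter readings
inputs = {"setpoint": 19.0, "occupancy": 4.0, "dhw_use": 5.0}  # standard scenario
field = {"setpoint": 20.5, "occupancy": 3.2, "dhw_use": 4.1}   # sensor-derived

# Replace one standardized input at a time with field data; the
# simulation-measurement gap shrinks at each step.
for key in ["setpoint", "occupancy", "dhw_use"]:
    inputs[key] = field[key]
    sim = simulate(**inputs)
    print(key, round(cvrmse(measured, sim), 2))
```

In practice each swap would be ordered by the sensitivity analysis, so the most energy-driving features are corrected first.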

Keywords: calibration, building energy modeling, performance gap, sensor network

Procedia PDF Downloads 145
583 An Object-Oriented Modelica Model of the Water Level Swell during Depressurization of the Reactor Pressure Vessel of the Boiling Water Reactor

Authors: Rafal Bryk, Holger Schmidt, Thomas Mull, Ingo Ganzmann, Oliver Herbst

Abstract:

Prediction of the two-phase water mixture level during fast depressurization of the Reactor Pressure Vessel (RPV) resulting from an accident scenario is an important issue from the viewpoint of reactor safety. Since the level swell may influence the behavior of some passive safety systems, it has been recognized that an assumption which at the beginning may be considered conservative does not necessarily lead to a conservative result. This paper discusses outcomes obtained during simulations of the water dynamics and heat transfer during sudden depressurization of a vessel filled up to a certain level with liquid water under saturation conditions, with the rest of the vessel occupied by saturated steam. When the pressure decreases, e.g. due to a main steam line break, the liquid water evaporates abruptly, thereby causing strong transients in the vessel. These transients, and the sudden emergence of void in the region initially occupied by liquid, cause elevation of the two-phase mixture. In this work, several models calculating the water collapse and swell levels are presented and validated against experimental data. Each of the models uses a different approach to calculate the void fraction. The object-oriented models were developed with the Modelica modelling language and the OpenModelica environment. The models represent the RPV of the Integral Test Facility Karlstein (INKA) – a dedicated test rig for simulation of KERENA, a new Boiling Water Reactor design of Framatome. The models are based on dynamic mass and energy equations. They are divided into several dynamic volumes, in each of which the fluid may be single-phase liquid, steam, or a two-phase mixture. The heat transfer between the wall of the vessel and the fluid is taken into account. An additional heat flow rate may be applied to the first volume of the vessel in order to simulate the decay heat of the reactor core in a similar manner as at INKA. 
The comparison of the simulation results against the reference data shows good agreement.
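As a rough illustration of the collapse/swell relationship discussed above, the mixture level can be related to the collapsed liquid level through an average void fraction. This simple algebraic sketch is not one of the paper's Modelica models, and the density and void-fraction values are assumed for illustration:

```python
def swell_level(m_liquid, rho_liquid, area, void_fraction):
    """Two-phase mixture level from the collapsed liquid level and an
    average void fraction alpha: H_swell = H_collapsed / (1 - alpha)."""
    h_collapsed = m_liquid / (rho_liquid * area)  # liquid-only level, m
    return h_collapsed / (1.0 - void_fraction)

# 5000 kg of saturated water (rho ~ 740 kg/m^3, assumed) in a 2 m^2
# vessel; flashing during depressurization gives an assumed alpha ~ 0.4.
h = swell_level(5000, 740, 2.0, 0.4)
print(round(h, 3))
```

The models in the paper differ precisely in how the void fraction is computed along the vessel height rather than taken as a single average.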

Keywords: boiling water reactor, level swell, Modelica, RPV depressurization, thermal-hydraulics

Procedia PDF Downloads 196
582 Study of Mixing Conditions for Different Endothelial Dysfunction in Arteriosclerosis

Authors: Sara Segura, Diego Nuñez, Miryam Villamil

Abstract:

In this work, we studied the microscale interaction of foreign substances with blood inside an artificial transparent artery system representing medium and small muscular arteries. This artery system had channels ranging from 75 μm to 930 μm and was fabricated from glass and transparent polymer blends, such as phenylbis(2,4,6-trimethylbenzoyl)phosphine oxide, poly(ethylene glycol), and PDMS, so that it could be monitored in real time. The setup comprised a computer-controlled precision micropump and a high-resolution optical microscope capable of tracking fluids at high capture rates. Observation and analysis were performed using real-time software that reconstructs the fluid dynamics, determining the flow velocity, injection dependency, turbulence, and rheology. All experiments were carried out with fully computer-controlled equipment. Interactions between substances such as water, serum (0.9% sodium chloride and electrolyte with a ratio of 4 ppm), and blood cells were studied at resolutions as fine as 400 nm, and the analysis was performed using frame-by-frame observation and HD video capture. These observations lead us to understand the fluid and mixing behavior of the substance of interest in the bloodstream and shed light on the use of implantable devices for drug delivery in arteries with different endothelial dysfunctions. Several substances were tested using the artificial artery system. Initially, Milli-Q water was used as a control substance to study the basic fluid dynamics of the artificial artery system. Then, serum and other low-viscosity substances were pumped into the system in the presence of other liquids to study the mixing profiles and behaviors. Finally, mammal blood was used for the final test while serum was injected. Different flow conditions, pumping rates, and time rates were evaluated to determine the optimal mixing conditions. 
Our results suggest the use of very finely controlled microinjection, at a rate of approximately 135,000 μm³/s, for better mixing profiles in the administration of drugs inside arteries.
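For channels in the 75-930 μm range, a quick Reynolds number check confirms that flow stays deep in the laminar regime, which shapes the mixing behavior studied above. A sketch with assumed whole-blood properties; the property values and flow speed are illustrative, not from the abstract:

```python
def reynolds_number(velocity, diameter, density=1060.0, viscosity=3.5e-3):
    """Channel Reynolds number Re = rho * v * D / mu. Defaults are
    typical whole-blood values (rho ~ 1060 kg/m^3, mu ~ 3.5 mPa*s),
    which are assumptions here."""
    return density * velocity * diameter / viscosity

# Even the widest channel (930 um) at 10 mm/s gives Re << 2300,
# so mixing is diffusion-dominated rather than turbulent.
re = reynolds_number(0.01, 930e-6)
print(round(re, 4))
```

Low Re is why controlled microinjection, rather than turbulence, governs the mixing profiles in such devices.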

Keywords: artificial artery, drug delivery, microfluidics dynamics, arteriosclerosis

Procedia PDF Downloads 272
581 Evaluating and Supporting Student Engagement in Online Learning

Authors: Maria Hopkins

Abstract:

Research on student engagement is founded on a desire to improve the quality of online instruction in both course design and delivery. A high level of student engagement is associated with a wide range of educational practices, including purposeful student-faculty contact, peer-to-peer contact, active and collaborative learning, and positive outcomes such as student satisfaction, persistence, achievement, and learning. By encouraging student engagement, institutions of higher education can have a positive impact on student success that leads to retention and degree completion. The current research presents the results of an online student engagement survey that supports faculty teaching practices to maximize the learning experience for online students. The 'Indicators of Engaged Learning Online' provide a framework that measures the level of student engagement. Social constructivism and collaborative learning form the theoretical basis of the framework. Social constructivist pedagogy acknowledges the social nature of knowledge and its creation in the minds of individual learners. Some important themes that flow from social constructivism involve the importance of collaboration among instructors and students, active learning versus passive consumption of information, a learning environment that is learner- and learning-centered and promotes multiple perspectives, and the use of social tools in the online environment to construct knowledge. The survey results indicated themes that emphasized the importance of: interaction among peers and faculty (collaboration); timely feedback on assignments/assessments; faculty participation and visibility; relevance and real-world application (in terms of assignments, activities, and assessments); and motivation/interest (the need for faculty to motivate students, especially those who may not have an interest in the coursework per se). 
The qualitative aspect of this student engagement study revealed what instructors did well that made students feel engaged in the course, but also what instructors did not do well, which could inform recommendations to faculty when expectations for teaching a course are reviewed. Furthermore, this research provides evidence for the connection between higher student engagement and persistence and retention in online programs, which supports our rationale for encouraging student engagement, especially in the online environment because attrition rates are higher than in the face-to-face environment.

Keywords: instructional design, learning effectiveness, online learning, student engagement

Procedia PDF Downloads 282
580 Limbic Involvement in Visual Processing

Authors: Deborah Zelinsky

Abstract:

The retina filters millions of incoming signals into a smaller number of exiting optic nerve fibers that travel to different portions of the brain. Most of the signals serve eyesight (called "image-forming" signals). However, there are other, faster signals that travel elsewhere and are not directly involved with eyesight (called "non-image-forming" signals). This article centers on the neurons of the optic nerve connecting to parts of the limbic system. Eye care providers currently look at parvocellular and magnocellular processing pathways without realizing that those are part of an enormous "galaxy" of all the body systems. Lenses modify both non-image-forming and image-forming pathways, taking A.M. Skeffington's seminal work one step further. Almost 100 years ago, he described the Where am I (orientation), Where is It (localization), and What is It (identification) pathways. Now, among others, there is a How am I (animation) and a Who am I (inclination, motivation, imagination) pathway. Classic eye testing considers pupils and often assesses posture and motion awareness, but classical prescriptions often overlook limbic involvement in visual processing. The limbic system is composed of the hippocampus, amygdala, hypothalamus, and anterior nuclei of the thalamus. The optic nerve's limbic connections arise from the intrinsically photosensitive retinal ganglion cells (ipRGCs) through the retinohypothalamic tract (RHT). There are two main hypothalamic nuclei with direct photic inputs: the suprachiasmatic nucleus and the paraventricular nucleus. Other hypothalamic nuclei connected with retinal function, including mood regulation, appetite, and glucose regulation, are the supraoptic nucleus and the arcuate nucleus. The retinohypothalamic tract is often overlooked when we prescribe eyeglasses. Each person is different, but the lenses we choose influence this fast processing, which affects each patient's aiming and focusing abilities. 
These signals arise from the ipRGCs, which were discovered only some 20 years ago, and do not account for the campana retinal interneurons, which were discovered only two years ago. As eyecare providers, we are unknowingly altering such factors as lymph flow, glucose metabolism, appetite, and sleep cycles in our patients. It is important to know what we are prescribing as visual processing evaluations expand beyond 20/20 central eyesight.

Keywords: neuromodulation, retinal processing, retinohypothalamic tract, limbic system, visual processing

Procedia PDF Downloads 71
579 Design and 3D Printout of Stack-Corrugate-Sheet Core Sandwiched Decks for Bridging Systems

Authors: K. Kamal

Abstract:

Structural sandwich panels with cores of advanced composite laminates, honeycombs, or PU foams are used in aerospace applications and are now also fabricated for some civil engineering applications. An all-advanced-composites Foot Over Bridge (FOB) system, designed and developed for pedestrian traffic, may be cited as an earlier example of such an application. During development of this FOB, the profile of its decks was formed as a single corrugated sheet core sandwiched between two flat Glass Fibre Reinforced Plastic (GFRP) laminates. Once successfully fabricated and used, these decks also proved suitable for assembly into other structures, such as temporary shelters. Such corrugated-sheet-core sandwich panels were then also attempted in conventional construction materials, but conventional construction methods posed difficulties in achieving the required core profile monolithically within the sandwiched slabs, and that attempt was abandoned. Such monolithic construction was, however, subsequently demonstrated by dispensing a building material mix through a suitably designed multi-dispenser system attached to a 3D printer. This lab-level study was reported earlier; it included the in-house fabrication of a 3D printer, '3DcMP', and its functional operation, with several required sandwich core profiles 3D-printed as panel hardware. Once a number of these single-corrugated-sheet-core sandwich panels had been monolithically printed, they were subjected to load tests in an experimental setup, their structural behavior was studied analytically, and the results were correlated with those reported in the literature. 
To achieve greater depths and create sandwiched decks of stronger structural and mechanical behavior, a more complex core configuration, a stack of corrugated sheets with a flat mid-plane, was considered a better sandwich core. Such a profile results simply from stacking two separately printed monolithic units of the single corrugated sheet core developed earlier and bonding them together, maintaining a different orientation. A sequential understanding of the structural behavior of such complex-core sandwiched decks, with special emphasis on the effect of varying the corrugation orientation in each tier of the core, first calls for an analytical study. Rectangular, simply supported decks have therefore been analyzed using Advanced Composite Technology (ACT); the numerical results, along with some fruitful findings, are presented in this paper. From these numerical results, it has been observed that a flat mid layer, which is created monolithically, in addition to eliminating the bonding step, offers more effective bending resistance in decks subjected to a uniformly distributed load (UDL). This is understood to result from the presence of a shear-resisting layer at the mid-depth of the core in this profile, unlike in other bending elements. As an addendum to the previously published efforts covered above, this unique stack-corrugated-sheet-core sandwiched structural deck, monolithically constructible with ease at the site itself, has been printed out on a 3D printer. 
Employing 3DcMP with innovative building construction materials holds future promise for such research and development work, since it realizes several recognized benefits of 3D printing in construction: reduced construction time and cost-effective solutions with freedom to design complex shapes, which can now be widely adopted by the modern construction industry.

Keywords: advance composite technology(ACT), corrugated laminates, 3DcMP, foot over bridge (FOB), sandwiched deck units

Procedia PDF Downloads 159
578 Development of a Turbulent Boundary Layer Wall-pressure Fluctuations Power Spectrum Model Using a Stepwise Regression Algorithm

Authors: Zachary Huffman, Joana Rocha

Abstract:

Wall-pressure fluctuations induced by the turbulent boundary layer (TBL) developed over aircraft are a significant source of aircraft cabin noise. Since the power spectral density (PSD) of these pressure fluctuations is directly correlated with the amount of sound radiated into the cabin, the development of accurate empirical models that predict the PSD has been an important ongoing research topic. The sound emitted can be represented by the pressure-fluctuation term in the Reynolds-averaged Navier-Stokes (RANS) equations. Therefore, early TBL empirical models (including those of Lowson, Robertson, Chase, and Howe) were primarily derived by simplifying and solving the RANS equations for the pressure fluctuation and adding appropriate scales. Most subsequent models (including the Goody, Efimtsov, Laganelli, Smol'yakov, and Rackl and Weston models) were derived by modifying these early models or from physical principles. Overall, these models have had varying levels of accuracy; in general, they are most accurate at the specific Reynolds and Mach numbers they were developed for, and less accurate under other flow conditions. Despite this, research into alternative methods for deriving such models has been rather limited. Recent studies have demonstrated that an artificial neural network model can be more accurate than traditional models and applied more generally, but the accuracy of other machine-learning techniques has not been explored. In the current study, an original model is derived using a stepwise regression algorithm in the statistical programming language R, together with TBL wall-pressure fluctuation PSD data gathered at the Carleton University wind tunnel. The theoretical advantage of a stepwise regression approach is that it automatically filters out redundant or uncorrelated input variables (through feature selection), and it is computationally cheaper than most machine-learning methods. 
The main disadvantage is the potential risk of overfitting. The accuracy of the developed model is assessed by comparing it to independently sourced datasets.
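The study uses R; purely as an illustration, the feature-selection logic of forward stepwise regression can be sketched in Python. The feature names, stopping tolerance, and synthetic data below are hypothetical, not from the study:

```python
import numpy as np

def forward_stepwise(X, y, names, tol=0.01):
    """Greedy forward selection: repeatedly add the feature that most
    reduces the residual sum of squares (RSS) of an ordinary
    least-squares fit, stopping when no candidate improves RSS by tol."""
    selected, remaining = [], list(range(X.shape[1]))
    best_rss = np.sum((y - y.mean()) ** 2)
    while remaining:
        scores = []
        for j in remaining:
            A = np.column_stack([np.ones(len(y)), X[:, selected + [j]]])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            scores.append((np.sum((y - A @ coef) ** 2), j))
        rss, j = min(scores)
        if best_rss - rss < tol:
            break  # no candidate improves the fit enough
        selected.append(j)
        remaining.remove(j)
        best_rss = rss
    return [names[j] for j in selected]

# Synthetic data: the response depends on two of four candidate inputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + 0.01 * rng.normal(size=200)
selected_names = forward_stepwise(X, y, ["Re", "Mach", "delta", "q"])
print(selected_names)
```

The redundant candidates ("Mach", "q") are filtered out automatically, which is the feature-selection behavior the abstract highlights.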

Keywords: aircraft noise, machine learning, power spectral density models, regression models, turbulent boundary layer wall-pressure fluctuations

Procedia PDF Downloads 127
577 Estimation of Small Hydropower Potential Using Remote Sensing and GIS Techniques in Pakistan

Authors: Malik Abid Hussain Khokhar, Muhammad Naveed Tahir, Muhammad Amin

Abstract:

Energy demand has increased manifold due to a growing population, urban sprawl, and rapid socio-economic development. Low water capacity in dams for continuous hydropower generation, land cover, and land use are key parameters complicating further energy production. The overall installed hydropower capacity of Pakistan is more than 35,000 MW, whereas Pakistan produces up to 17,000 MW against a requirement of more than 22,000 MW, resulting in a shortfall of 5,000-7,000 MW. Therefore, there is a dire need to develop small hydropower to fulfill upcoming requirements. In this regard, abundant rainfall and snow-fed, fast-flowing perennial tributaries and streams in the northern mountain regions of Pakistan offer gigantic hydropower potential throughout the year. Rivers flowing in KP (Khyber Pakhtunkhwa) province, GB (Gilgit-Baltistan), and AJK (Azad Jammu & Kashmir) possess sufficient water availability for rapid energy growth. Against this backdrop, small hydropower plants are considered very suitable measures for a greener environment and a sustainable power option for the development of these regions. The aim of this study is to identify potential sites for small hydropower plants and to map stream distribution according to the stream network available in the basins of the study area. The proposed methodology will focus on selecting sites of maximum hydropower potential for hydroelectric generation using the GIS-based hydrological run-off model SWAT on the Neelum, Kunhar, and Dor River basins. For validation of the results, the NDWI will be computed to show water concentration in the study area, overlaid on a geospatially enhanced DEM. This study will present analyses of basins, watersheds, stream links, and flow directions, with slope and elevation, to assess hydropower potential for meeting the increasing electricity demand by installing small hydropower stations. 
Later on, this study may also benefit adjacent regions in the further selection of sites for the installation of such small power plants.
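Site screening of the kind described above ultimately rests on the hydropower equation P = ρgQHη. A minimal sketch with an assumed turbine efficiency and an illustrative flow and head, not values from the study:

```python
def hydropower_potential_kw(flow_m3s, head_m, efficiency=0.85):
    """Plant output in kW from P = rho * g * Q * H * eta, with
    rho = 1000 kg/m^3 for water and g = 9.81 m/s^2. The default
    efficiency of 0.85 is an assumed overall turbine-generator value."""
    rho, g = 1000.0, 9.81
    return rho * g * flow_m3s * head_m * efficiency / 1000.0

# Illustrative small run-of-river site: 4 m^3/s through a 30 m head.
p = hydropower_potential_kw(4.0, 30.0)
print(round(p, 1))
```

In a GIS workflow, Q would come from the SWAT run-off simulation and H from the DEM-derived slope and elevation at each candidate stream link.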

Keywords: energy, stream network, basins, SWAT, evapotranspiration

Procedia PDF Downloads 211
576 Design of a Plant to Produce 100,000 MTPY of Green Hydrogen from Brine

Authors: Abdulrazak Jinadu Otaru, Ahmed Almulhim, Hassan Alhassan, Mohammed Sabri

Abstract:

Saudi Arabia is host to a state-owned oil and gas corporation, Saudi Aramco, which is responsible for the highest emissions of carbon dioxide (CO₂) due to heavy reliance on fossil fuels as an energy source for various sectors such as transportation, aerospace, manufacturing, and residential use. Unfortunately, the detrimental consequences of CO₂ emissions include escalating temperatures in the Middle East region, posing significant obstacles in terms of food security and water scarcity for the Kingdom of Saudi Arabia. As part of the Saudi Vision 2030 initiative, which aims to reduce the country's reliance on fossil fuels by 50%, this study focuses on designing a plant to produce approximately 100,000 metric tons per year (MTPY) of green hydrogen (H₂) using brine as the primary feedstock. The proposed facility incorporates a double electrolytic technology that first separates brine, a sodium chloride (NaCl) solution, into sodium hydroxide, hydrogen gas, and chlorine gas. The sodium hydroxide is then used as an electrolyte in the splitting of water molecules, through the supply of electrical energy in a second-stage electrolyser, to produce green hydrogen. The study encompasses a comprehensive analysis of process descriptions and flow diagrams, as well as material and energy balances. It also includes equipment design and specification, cost analysis, and considerations of safety and environmental impact. The design capitalizes on the abundant brine supply, a byproduct of the world's largest desalination plant, located in Al Jubail, Saudi Arabia. Additionally, the design incorporates available renewable energy sources, such as solar and wind power, to power the proposed plant. This approach not only helps reduce carbon emissions but also aligns with Saudi Arabia's energy transition policy. 
Furthermore, it supports the United Nations Sustainable Development Goals on Sustainable Cities and Communities (Goal 11) and Climate Action (Goal 13), benefiting not only Saudi Arabia but also other countries in the Middle East.
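The scale of the second-stage water electrolysis can be gauged from stoichiometry: 2 H₂O → 2 H₂ + O₂ implies roughly 9 kg of feed water per kg of H₂. A back-of-envelope sketch; the 52 kWh/kg specific energy consumption is an assumed typical electrolyser figure, not a value from the study:

```python
def electrolysis_requirements(h2_tonnes_per_year, kwh_per_kg=52.0):
    """Feed-water (tonnes/year) and electricity (GWh/year) for a target
    green-H2 output. Water demand follows 2 H2O -> 2 H2 + O2, i.e.
    18.015/2.016 ~ 8.94 kg water per kg H2; kwh_per_kg is an assumed
    practical electrolyser consumption."""
    h2_kg = h2_tonnes_per_year * 1000.0
    water_tonnes = h2_kg * (18.015 / 2.016) / 1000.0
    energy_gwh = h2_kg * kwh_per_kg / 1e6
    return round(water_tonnes), round(energy_gwh, 1)

# The plant target from the abstract: 100,000 MTPY of green hydrogen.
res = electrolysis_requirements(100_000)
print(res)
```

Under these assumptions the plant needs on the order of 0.9 million tonnes of water and several TWh of (renewable) electricity per year, which is why the Al Jubail brine supply and solar/wind siting matter to the design.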

Keywords: plant design, electrolysis, brine, sodium hydroxide, chlorine gas, green hydrogen

Procedia PDF Downloads 33
575 Quantification of Dispersion Effects in Arterial Spin Labelling Perfusion MRI

Authors: Rutej R. Mehta, Michael A. Chappell

Abstract:

Introduction: Arterial spin labelling (ASL) is an increasingly popular perfusion MRI technique in which arterial blood water is magnetically labelled in the neck before flowing into the brain, providing a non-invasive measure of cerebral blood flow (CBF). The accuracy of ASL CBF measurements, however, is hampered by dispersion effects: the distortion of the ASL-labelled bolus during its transit through the vasculature. In spite of this, the currently recommended implementation of ASL – the white paper (Alsop et al., MRM, 73.1 (2015): 102-116) – does not account for dispersion, which introduces errors into CBF. Given that the transport time from the labelling region to the tissue – the arterial transit time (ATT) – depends on the region of the brain and the condition of the patient, it is likely that these errors will also vary with the ATT. In this study, various dispersion models are assessed in comparison with the white paper (WP) formula for CBF quantification, enabling the errors introduced by the WP to be quantified. Additionally, this study examines the relationship between the errors associated with the WP and the ATT, and how this is influenced by dispersion. Methods: Data were simulated using the standard model for pseudo-continuous ASL, along with various dispersion models, and then quantified using the formula in the WP. The ATT was varied from 0.5 s to 1.3 s, and the errors associated with noise artefacts were computed in order to define the concept of significant error. The instantaneous slope of the error was also computed as an indicator of the sensitivity of the error to fluctuations in ATT. Finally, a regression analysis was performed to obtain the mean error against ATT. Results: An error of 20.9% was found to be comparable to that introduced by typical measurement noise. The WP formula was shown to introduce errors exceeding 20.9% for ATTs beyond 1.25 s even when dispersion effects were ignored. 
Using a Gaussian dispersion model, a mean error of 16% was introduced by using the WP, and a dispersion threshold of σ=0.6 was determined, beyond which the error was found to increase considerably with ATT. The mean error ranged from 44.5% to 73.5% when other physiologically plausible dispersion models were implemented, and the instantaneous slope varied from 35 to 75 as dispersion levels were varied. Conclusion: It has been shown that the WP quantification formula holds only within an ATT window of 0.5 to 1.25s, and that this window gets narrower as dispersion occurs. Provided that the dispersion levels fall below the threshold evaluated in this study, however, the WP can measure CBF with reasonable accuracy if dispersion is correctly modelled by the Gaussian model. However, substantial errors were observed with other common models for dispersion with dispersion levels similar to those that have been observed in literature.
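The effect of Gaussian dispersion on a labelled bolus can be illustrated by convolving an ideal rectangular bolus with a Gaussian kernel. This is a simplified sketch, not the study's simulation code; the bolus duration, time grid, and σ value are illustrative:

```python
import numpy as np

def disperse_bolus(t, bolus, sigma):
    """Convolve an ASL bolus with a normalized Gaussian dispersion
    kernel (sigma in seconds), a simple stand-in for the vascular
    dispersion that the white-paper formula ignores."""
    dt = t[1] - t[0]
    k = np.exp(-0.5 * (np.arange(-4 * sigma, 4 * sigma + dt, dt) / sigma) ** 2)
    k /= k.sum()  # preserve the total labelled magnetization
    return np.convolve(bolus, k, mode="same")

t = np.arange(0, 6, 0.05)                         # time grid, s
bolus = ((t >= 1.0) & (t <= 2.8)).astype(float)   # ideal 1.8 s bolus
dispersed = disperse_bolus(t, bolus, sigma=0.6)
print(round(dispersed.max(), 3))
```

Dispersion lowers and broadens the bolus while roughly preserving its area, so a quantification formula that assumes sharp bolus edges misreads the peak, increasingly so at longer ATTs.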

Keywords: arterial spin labelling, dispersion, MRI, perfusion

Procedia PDF Downloads 361
574 Counting Fishes in Aquaculture Ponds: Application of Imaging Sonars

Authors: Juan C. Gutierrez-Estrada, Inmaculada Pulido-Calvo, Ignacio De La Rosa, Antonio Peregrin, Fernando Gomez-Bravo, Samuel Lopez-Dominguez, Alejandro Garrocho-Cruz, Jairo Castro-Gutierrez

Abstract:

Semi-intensive aquaculture in traditional earthen ponds is the main rearing system in southern Spain. These rearing systems account for approximately two thirds of aquatic production in this area, which has made a significant contribution to the regional economy in recent years. In this type of rearing system, a crucial aspect is the correct quantification and control of fish abundance in the ponds, because the fish farmer knows how many fish he puts into the ponds but not how many he will harvest at the end of the rearing period. This is a consequence of mortality induced by different causes, such as pathogenic agents (parasites, viruses, and bacteria) and other factors such as predation by fish-eating birds and poaching. Tracking fish abundance in these installations is very difficult because the ponds usually occupy a large area of land and the management of the water flow is not automated. Therefore, there is a very high degree of uncertainty about fish abundance, which strongly hinders the management and planning of sales. A novel and non-invasive procedure for counting fish in the ponds is by means of imaging sonars, particularly fixed systems and/or systems mounted on aquatic vehicles such as Remotely Operated Vehicles (ROVs). In this work, a method based on census-station procedures is proposed to evaluate the accuracy of fish abundance estimation using images obtained with multibeam sonars. The results indicate that it is possible to obtain a realistic estimate of the number of fish, their sizes, and therefore the biomass contained in the ponds. This research is included in the framework of the KTTSeaDrones Project ('Conocimiento y transferencia de tecnología sobre vehículos aéreos y acuáticos para el desarrollo transfronterizo de ciencias marinas y pesqueras 0622-KTTSEADRONES-5-E') financed by the European Regional Development Fund (ERDF) through the Interreg V-A Spain-Portugal Programme (POCTEP) 2014-2020.
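A census-station estimate of the kind proposed here amounts to scaling the mean fish density observed in the ensonified volumes to the whole pond. A minimal sketch; the counts and volumes are invented, and a uniform fish distribution is assumed:

```python
def estimate_abundance(counts, sampled_volume_m3, pond_volume_m3):
    """Census-station extrapolation: mean density across the stations'
    ensonified sample volumes, scaled to the whole pond (assumes the
    fish are uniformly distributed)."""
    density = sum(counts) / (len(counts) * sampled_volume_m3)
    return density * pond_volume_m3

# Five hypothetical sonar stations, each ensonifying ~120 m^3 of a
# 15,000 m^3 earthen pond.
n = estimate_abundance([38, 41, 35, 44, 40], 120.0, 15000.0)
print(round(n))
```

The between-station spread of the counts also gives a direct handle on the estimation uncertainty the abstract is concerned with.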

Keywords: census station procedure, fish biomass, semi-intensive aquaculture, multibeam sonars

Procedia PDF Downloads 206
573 Flexible Design Solutions for Complex Free-Form Geometries Aimed at Optimizing Performance and Resource Consumption

Authors: Vlad Andrei Raducanu, Mariana Lucia Angelescu, Ion Cinca, Vasile Danut Cojocaru, Doina Raducanu

Abstract:

By using smart digital tools, such as generative design (GD) and digital fabrication (DF), pressing problems of resource optimization (materials, energy, time) can be solved and free-form applications or products can be created. In the new digital technology, materials are active, designed in response to a set of performance requirements, which imposes a total rethinking of old material practices. The article presents the key steps in the design procedure for a free-form architectural object - a column with connections forming an adaptive 3D surface - using the parametric design methodology and exploiting the properties of conventional metallic materials. In parametric design, the form of the created object or space is shaped by varying parameter values, and the relationships between forms are described by mathematical equations. Digital parametric design is based on specific procedures, such as shape grammars, Lindenmayer systems, cellular automata, genetic algorithms or swarm intelligence, each of which has limitations that make it applicable only in certain cases. The paper presents the stages of the design process and a shape-grammar-type algorithm. The generative design process relies on two basic principles: the modeling principle and the generative principle. The generative method is based on a form-finding process that creates many 3D spatial forms, using an algorithm conceived to apply its generating logic to different input geometry. Once the algorithm is realized, it can be applied repeatedly to generate geometry for a number of different input surfaces. The generated configurations are then analyzed against a technical or aesthetic selection criterion, and the optimal solution is selected.
The endless generative capacity of the codes and algorithms used in digital design offers varied conceptual possibilities and optimal solutions for the growing technical and environmental demands of the building industry and architecture. Constructions or spaces generated by parametric design can be specifically tuned to meet certain technical or aesthetic requirements. The proposed approach has direct applicability in sustainable architecture, offering important potential economic advantages, a flexible design (which can be changed until the end of the design process) and unique geometric models of high performance.
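The generate-evaluate-select loop described above can be sketched in a few lines. The parametric "column" model and the scoring rule here are illustrative assumptions, not the authors' shape grammar; the point is only the structure: an algorithm applies its generating logic to many parameter combinations, and a selection criterion picks the optimal candidate.

```python
import math

def generate(params):
    """Toy parametric model: a column profile shaped by two parameters,
    returning (radius, rotation) pairs at 11 levels along its height."""
    taper, twist = params
    return [(1.0 - taper * z, twist * z * 2 * math.pi)
            for z in (i / 10 for i in range(11))]

def score(geometry):
    """Assumed selection criterion: prefer non-degenerate, balanced forms."""
    radii = [r for r, _ in geometry]
    return min(radii) - abs(sum(radii) / len(radii) - 0.7)

# Apply the generating logic over a grid of parameter values, then select.
candidates = [generate((t / 10, w / 10)) for t in range(6) for w in range(6)]
best = max(candidates, key=score)
```

A real pipeline would replace `generate` with the shape-grammar rules and `score` with structural analysis or an aesthetic metric, but the loop is the same.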

Keywords: parametric design, algorithmic procedures, free-form architectural object, sustainable architecture

Procedia PDF Downloads 360
572 A Risk-Based Comprehensive Framework for the Assessment of the Security of Multi-Modal Transport Systems

Authors: Mireille Elhajj, Washington Ochieng, Deeph Chana

Abstract:

The challenges of the rapid growth in the demand for transport have traditionally been seen within the context of congestion, air quality, climate change, safety, and affordability. However, there are increasing threats, including crime-related ones such as cyber-attacks, to the security of the transport of people and goods. To the best of the authors’ knowledge, this paper presents, for the first time, a comprehensive framework for the assessment of the current and future security issues of multi-modal transport systems. The proposed method is based on a structured framework that starts with a detailed specification of the transport asset map (transport system architecture), followed by the identification of vulnerabilities. The asset map and vulnerabilities are used to identify the various ways in which the vulnerabilities could be exploited, leading to the creation of a set of threat scenarios. The threat scenarios are then transformed into risks and risk categories, with insights for their mitigation. The consideration of the mitigation space is holistic and includes the formulation of appropriate policies and tactics and/or technical interventions. The quality of the framework is ensured through a structured and logical process that identifies the stakeholders, reviews the relevant documents (including policies) and identifies gaps, incorporates targeted surveys to augment the reviews, and uses subject matter experts for validation. The approach to categorising security risks extends the methods typically employed: partitioning risks into either physical or cyber categories is too limited for developing mitigation policies and tactics/interventions for transport systems, where an interplay between physical and cyber processes is very often the norm.
This interplay is rapidly gaining significance for security as emerging cyber-physical technologies shape the future of all transport modes. Examples include Connected Autonomous Vehicles (CAVs) in road transport; the European Rail Traffic Management System (ERTMS) in rail transport; the Automatic Identification System (AIS) in maritime transport; advanced Communications, Navigation and Surveillance (CNS) technologies in air transport; and the Internet of Things (IoT). The framework adopts a risk categorisation scheme that considers risks as falling within the following threat→impact relationships: Physical→Physical, Cyber→Cyber, Cyber→Physical, and Physical→Cyber. The framework thus enables a more complete risk picture to be developed for today’s transport systems and, more importantly, is readily extendable to account for emerging trends in the sector that will define future transport systems. The framework facilitates the audit and retro-fitting of mitigations in current transport operations and the analysis of security management options for the next generation of transport, enabling strategic aspirations such as security-by-design and the co-design of safety and security to be achieved. An initial application of the framework has shown that intra-modal consideration of security measures is sub-optimal and that a holistic, multi-modal approach is required, one that also addresses the intersections/transition points of such networks, as their vulnerability is high. This is in line with traveller-centric transport service provision, widely accepted as the future of mobility services. In summary, a risk-based framework is proposed for use by stakeholders to comprehensively and holistically assess the security of transport systems.
It requires a detailed understanding of the transport architecture to enable a detailed vulnerability analysis, creates threat scenarios, and transforms them into risks, which form the basis for the formulation of interventions.
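The four threat→impact categories above can be captured in a tiny data model. The category names come from the abstract; the `Risk` record, the helper, and the ERTMS example are illustrative assumptions, not part of the authors' framework.

```python
from dataclasses import dataclass

DOMAINS = ("physical", "cyber")

@dataclass
class Risk:
    name: str
    threat_domain: str   # where the exploitation originates
    impact_domain: str   # where the consequence is felt

def categorise(risk):
    """Map a risk to one of the four threat->impact categories."""
    assert risk.threat_domain in DOMAINS and risk.impact_domain in DOMAINS
    return f"{risk.threat_domain}->{risk.impact_domain}"

# A hypothetical cyber attack on rail signalling with a physical consequence:
r = Risk("spoofed movement authority", "cyber", "physical")
```

The value of the scheme is that mixed categories like this one would be invisible in a scheme that files every risk as either purely physical or purely cyber.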

Keywords: mitigations, risk, transport, security, vulnerabilities

Procedia PDF Downloads 149
571 Electrical Tortuosity across Electrokinetically Remediated Soils

Authors: Waddah S. Abdullah, Khaled F. Al-Omari

Abstract:

Electrokinetic remediation is one of the most effective methods for decontaminating polluted soils. Electroosmosis and electromigration are the processes by which contaminants are electrochemically extracted from soils, and the driving force behind both is the voltage gradient. The electric field distribution throughout the soil domain is therefore extremely important to investigate, as are the factors that help establish a uniform field, so that the clean-up process works properly and efficiently. In this study, small passive electrodes (made of graphite) were placed at predetermined locations within the soil specimen, and the voltage drop between these electrodes was measured in order to observe the electrical distribution throughout the tested specimens. The electrokinetic test was conducted on two types of soil: a sandy soil and a clayey soil. The electrical distribution throughout the soil domain was measured under different test conditions, and the electric field was mapped in a three-dimensional pattern. The effects of density, applied voltage, and degree of saturation on the electrical distribution within the remediated soil were investigated. The distributions of moisture content, sodium ion concentration, and calcium ion concentration were also established in a three-dimensional scheme. The study has shown that the electrical conductivity within the soil domain depends on the moisture content and the concentration of electrolytes present in the pore fluid. The distribution of the electric field in the saturated soil was found not to be affected by its density. The study has also shown that a high voltage gradient leads to a non-uniform electric field distribution within the electroremediated soil.
Very importantly, it was found that even when the electric field distribution is uniform globally (i.e., between the passive electrodes), local non-uniformity can be established within the remediated soil mass. Cracks or air gaps formed due to temperature rise (because of electric flow in low-conductivity regions) promote electrical tortuosity. Thus, fracturing or cracking in the remediated soil mass disconnects the electric current, and hence no removal of contaminants occurs within these areas.
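The global-versus-local distinction above can be made concrete: voltage readings along a line of passive electrodes yield local gradients, and a tolerance test flags non-uniformity even when the end-to-end gradient looks fine. The electrode spacing, readings, and tolerance below are assumed values for illustration, not data from the study.

```python
# Illustrative sketch: local voltage gradients between adjacent passive
# electrodes, used to detect local non-uniformity of the electric field.

def local_gradients(voltages, spacing_cm):
    """Voltage gradient (V/cm) between each adjacent electrode pair."""
    return [(voltages[i] - voltages[i + 1]) / spacing_cm
            for i in range(len(voltages) - 1)]

def is_uniform(gradients, tolerance=0.2):
    """Call the field uniform if no local gradient strays more than the
    given fraction from the mean gradient (tolerance is an assumption)."""
    mean = sum(gradients) / len(gradients)
    return all(abs(g - mean) <= tolerance * abs(mean) for g in gradients)

# Voltages measured along a line of passive electrodes spaced 5 cm apart;
# the jump between the third and fourth readings suggests a crack or air gap.
grads = local_gradients([30.0, 25.0, 21.0, 10.0, 6.0], 5.0)
```

Here the average gradient is ordinary, but the locally elevated value between the third and fourth electrodes would flag the kind of tortuosity the study describes.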

Keywords: contaminant removal, electrical tortuosity, electromigration, electroosmosis, voltage distribution

Procedia PDF Downloads 412
570 Properties Optimization of Keratin Films Produced by Film Casting and Compression Moulding

Authors: Mahamad Yousif, Eoin Cunningham, Beatrice Smyth

Abstract:

Every year, ~6 million tonnes of feathers are produced globally. Due to feathers’ low density and possible contamination with pathogens, their disposal causes health and environmental problems. The extraction of keratin, which represents >90% of feathers’ dry weight, could offer a solution due to its wide range of applications in the food, medical, cosmetics, and biopolymer industries. One of these applications is the production of biofilms, which can be used for packaging, edible films, drug delivery, wound healing, etc. Several studies over the last two decades have investigated keratin film production and film properties. However, the effects of many parameters on the properties of the films remain to be investigated, including the extraction method, crosslinker type and concentration, and the film production method. These parameters were investigated in this study. Keratin was extracted from chicken feathers using two methods: alkaline extraction with 0.5 M NaOH at 80 °C, or sulphitolysis extraction with 0.5 M sodium sulphite, 8 M urea, and 0.25-1 g sodium dodecyl sulphate (SDS) at 100 °C. The extracted keratin was mixed with different types and concentrations of plasticizers (glycerol and polyethylene glycol) and crosslinkers (formaldehyde (FA), glutaraldehyde, cinnamaldehyde, glyoxal, and 1,4-butanediol diglycidyl ether (BDE)). The mixtures were either cast in a mould or compression moulded to produce films. For casting, keratin powder was first dissolved in water to form a 5% keratin solution, and the mixture was dried in an oven at 60 °C. For compression moulding, 10% water was added, and the compression moulding temperature and pressure were in the ranges of 60-120 °C and 10-30 bar. Finally, the tensile properties, solubility, and transparency of the films were analysed. The films prepared using the sulphitolysis keratin had tensile properties superior to those of the alkaline keratin and formed successfully at lower plasticizer concentrations.
Lowering the SDS concentration from 1 to 0.25 g/g feathers improved all the tensile properties. All the films prepared without crosslinkers were 100% water soluble, but adding crosslinkers reduced solubility to as low as 21%. FA and BDE were found to be the best crosslinkers, increasing the tensile strength and elongation at break of the films. Higher compression moulding temperature and pressure lowered the tensile properties of the films; therefore, 80 °C and 10 bar were considered the optimal compression moulding conditions. The films prepared by casting had higher tensile properties than those prepared by compression moulding, but were less transparent. Two optimal films, prepared by film casting, were identified; their compositions were: (a) sulphitolysis keratin, 20% glycerol, 10% FA, and 10% BDE; (b) sulphitolysis keratin, 20% glycerol, and 10% BDE. Their tensile strength, elongation at break, Young’s modulus, solubility, and transparency were: (a) 4.275±0.467 MPa, 86.12±4.24%, 22.227±2.711 MPa, 21.34±1.11%, and 8.57±0.94, respectively; (b) 3.024±0.231 MPa, 113.65±14.61%, 10±1.948 MPa, 25.03±5.3%, and 4.8±0.15, respectively (a higher transparency value indicates a less transparent film). The extraction method, film composition, and production method had a significant influence on the properties of the keratin films and should therefore be tailored to the desired properties and applications.

Keywords: compression moulding, crosslinker, film casting, keratin, plasticizer, solubility, tensile properties, transparency

Procedia PDF Downloads 12
569 Mechanical, Thermal and Biodegradable Properties of Bioplast-Spruce Green Wood Polymer Composites

Authors: A. Atli, K. Candelier, J. Alteyrac

Abstract:

Environmental and sustainability concerns push industries to manufacture alternative materials with less environmental impact. Wood Plastic Composites (WPCs), produced by blending biopolymers and natural fillers, not only permit tailoring of the desired material properties but also help meet environmental and sustainability requirements. This work presents the elaboration and characterization of fully green WPCs prepared by blending a biopolymer, BIOPLAST® GS 2189, with different amounts of spruce sawdust used as filler. Since both components are bio-based, the resulting material is entirely environmentally friendly. The mechanical, thermal and structural properties of these WPCs were characterized by different analytical methods, including tensile, flexural and impact tests, Thermogravimetric Analysis (TGA), Differential Scanning Calorimetry (DSC) and X-ray Diffraction (XRD). Their water absorption properties and resistance to termite and fungal attack were determined in relation to wood filler content. The tensile and flexural moduli of the WPCs increased with increasing amounts of wood filler in the biopolymer, but the WPCs became more brittle compared to the neat polymer. Incorporation of spruce sawdust modified the thermal properties of the polymer: the degradation, cold crystallization, and melting temperatures shifted to higher values when spruce sawdust was added. The termite, fungal and water absorption resistance of the WPCs decreased with increasing wood content, but the composites remained in durability class 1 (durable) for fungal resistance and were rated 1 (attempted attack) in the visual rating for termite resistance, except for the WPC with the highest wood content (30 wt%), which was rated 2 (slight attack), indicating long-term durability.
All the results show that easily injectable composite materials with adjustable properties can be elaborated by combining BIOPLAST® GS 2189 and spruce sawdust. Such lightweight WPCs make it possible both to recycle wood industry by-products and to produce a fully ecological material.

Keywords: biodegradability, color measurements, durability, mechanical properties, melt flow index, MFI, structural properties, thermal properties, wood-plastic composites, WPCs

Procedia PDF Downloads 128
568 Smart Irrigation System for Applied Irrigation Management in Tomato Seedling Production

Authors: Catariny C. Aleman, Flavio B. Campos, Matheus A. Caliman, Everardo C. Mantovani

Abstract:

The seedling production stage is a critical point in the vegetable production system. Obtaining high-quality seedlings is a prerequisite for subsequent cropping to go well, and productivity optimization is required. Water management is an important step in agricultural production: meeting the water requirement of horticultural seedlings can provide higher quality and increase field production. The practice of irrigation is indispensable and requires a properly adjusted irrigation system, together with a specific water management plan, to meet the water demand of the crop. Irrigation management in seedling production requires a great deal of specific information, especially when it involves inputs such as water-retaining (hydrogel) polymers and automated data acquisition and irrigation technologies. The experiment was conducted in a greenhouse at the Federal University of Viçosa, Viçosa - MG. Tomato seedlings (Lycopersicon esculentum Mill.) were produced in plastic trays of 128 cells, suspended 1.25 m above the ground. The seedlings were irrigated by 4 fixed-jet 360° micro-sprinklers per tray, duly isolated by sideboards, following the methodology developed for this work. During Phase 1, in January/February 2017 (duration of 24 days), the crop coefficient (Kc) of seedlings grown in the presence and absence of hydrogel was evaluated by weighing lysimeter. In Phase 2, in September 2017 (duration of 25 days), the seedlings were submitted to 4 irrigation managements (Kc, timer, 0.50 ETo, and 1.00 ETo), in the presence and absence of hydrogel, and then evaluated with respect to quality parameters.
The microclimate inside the greenhouse was monitored with air temperature, relative humidity and global radiation sensors connected to a microcontroller that performed hourly calculations of reference evapotranspiration by the FAO56 Penman-Monteith standard method, modified for the long-wave balance according to Walker, Aldrich and Short (1983), and that carried out the water balance and irrigation decision-making for each experimental treatment. The Kc of seedlings grown on a substrate with hydrogel (1.55) was higher than the Kc on a pure substrate (1.39). The use of hydrogel was a differential for the production of earlier tomato seedlings, with greater final height, larger collar diameter, greater accumulation of shoot dry mass, larger crown projection area and greater relative growth rate. The 1.00 ETo management promoted the highest relative growth rate.
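The hourly computation the microcontroller performs can be sketched from the standard FAO56 hourly Penman-Monteith form. This is a generic sketch, not the authors' firmware: the long-wave modification after Walker, Aldrich and Short is omitted, net radiation and soil heat flux are taken as given inputs, and the default wind speed and pressure are assumptions.

```python
import math

def eto_hourly(t_air, rh, rn, u2=0.5, g=0.0, pressure_kpa=101.3):
    """Hourly FAO56 reference ET (mm/h).
    t_air in deg C, rh in %, rn and g in MJ m-2 h-1, u2 in m/s."""
    es = 0.6108 * math.exp(17.27 * t_air / (t_air + 237.3))  # sat. vapour pressure, kPa
    ea = es * rh / 100.0                                     # actual vapour pressure, kPa
    delta = 4098.0 * es / (t_air + 237.3) ** 2               # slope of sat. curve, kPa/C
    gamma = 0.000665 * pressure_kpa                          # psychrometric constant
    num = (0.408 * delta * (rn - g)
           + gamma * (37.0 / (t_air + 273.0)) * u2 * (es - ea))  # Cn = 37 (hourly)
    return num / (delta + gamma * (1.0 + 0.34 * u2))

# One warm greenhouse hour (illustrative sensor readings):
eto = eto_hourly(t_air=28.0, rh=65.0, rn=1.8)
```

Multiplying the hourly ETo by the measured Kc (1.55 or 1.39 above) and accumulating it in a water balance is then what drives the irrigation decision for each treatment.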

Keywords: automatic system, efficiency of water use, precision irrigation, micro sprinkler

Procedia PDF Downloads 104
567 Bundling of Transport Flows: Adoption Barriers and Opportunities

Authors: Vandenbroucke Karel, Georges Annabel, Schuurman Dimitri

Abstract:

In recent years, bundling of transport flows, whether or not implemented in an intermodal process, has emerged as a promising concept in the logistics sector. Bundling of transport flows is a process in which two or more shippers decide to combine their shipped goods on a common transport lane. Promoted by the European Commission, several programs have been set up and have shown the benefits. Bundling promises both shippers and logistics service providers economic, societal and ecological advantages. By bundling transport flows and thus reducing the required truck (or other carrier) capacity, the problems of driver shortage, increased fuel prices, mileage charges and restricted hours of service on the road are alleviated. In theory, the advantages of bundled transport exceed the drawbacks; in practice, however, adoption among shippers remains low. In fact, bundling is seen as a disruptive process in the rather traditional logistics sector. In this context, a Belgian company asked iMinds Living Labs to set up a Living Lab research project with the goal of investigating how the uptake of bundling of transport flows can be accelerated and of checking whether an online data sharing platform can overcome the adoption barriers. The Living Lab research was conducted in 2016 and combined quantitative and qualitative end-user and market research. Concretely, extensive desk research was combined with insights from expert interviews with four consultants active in the Belgian logistics sector and in-depth interviews with logistics professionals working for shippers (N=10) and LSPs (N=3). In this article, we present findings showing that several factors slow down the uptake of bundling of transport flows. Shippers are hesitant to change how they currently work, and they are hesitant to work together with other shippers. Moreover, several practical challenges impede shippers from working together.
We also present opportunities that can accelerate the adoption of bundling of transport flows. First, there seems to be insufficient support from governmental and commercial organizations. Second, there is a chicken-and-egg problem: too few interested parties will lead to no, or very few, matching lanes. Shippers are therefore reluctant to take part in these projects because the benefits have not yet been proven. Third, the incentive is not big enough for shippers: road transport organized by the shipper individually is still seen as the easiest and cheapest solution. A solution to the abovementioned challenges might be found in the online data sharing platform of the Belgian company. The added value of this platform lies in showing shippers possible matching lanes, without the shippers having to invest time in negotiating and networking with other shippers while running the risk of not finding a match. The interviewed shippers and experts indicated that the online data sharing platform is a very promising concept that could accelerate the uptake of bundling of transport flows.
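The core matching step such a platform performs can be sketched simply: group shared shipment data by (origin, destination) lane and surface every lane used by two or more shippers. The shipper names and lanes below are invented examples, and a real platform would also match on volumes, schedules and compatibility, which this sketch ignores.

```python
from collections import defaultdict

def matching_lanes(shipments):
    """shipments: iterable of (shipper, origin, destination) records.
    Returns lanes shared by at least two shippers - candidate bundles."""
    lanes = defaultdict(set)
    for shipper, origin, dest in shipments:
        lanes[(origin, dest)].add(shipper)
    return {lane: sorted(s) for lane, s in lanes.items() if len(s) >= 2}

matches = matching_lanes([
    ("ShipperA", "Ghent", "Lyon"),
    ("ShipperB", "Ghent", "Lyon"),
    ("ShipperC", "Antwerp", "Milan"),
])
```

The chicken-and-egg problem noted above is visible even here: ShipperC gets no match until another party sharing the Antwerp-Milan lane joins the platform.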

Keywords: adoption barriers, bundling of transport, shippers, transport optimization

Procedia PDF Downloads 192
566 Development of a Feedback Control System for a Lab-Scale Biomass Combustion System Using Programmable Logic Controller

Authors: Samuel O. Alamu, Seong W. Lee, Blaise Kalmia, Marc J. Louise Caballes, Xuejun Qian

Abstract:

The application of combustion technologies for the thermal conversion of biomass and solid wastes to energy has long been a major solution for the effective handling of wastes. Lab-scale biomass combustion systems have been shown to be economically viable and socially acceptable, but major concerns are the environmental impacts of the process and deviations of the temperature distribution within the combustion chamber. Both high and low combustion chamber temperatures may affect the overall combustion efficiency and gaseous emissions. There is therefore an urgent need for a control system that measures the deviations of chamber temperature from set target values, feeds these deviations (which act as disturbances in the system) back as an input signal, and adjusts the operating conditions to correct the errors. In this research study, the major components of the feedback control system were determined, assembled, and tested. In addition, control algorithms were developed to actuate the operating conditions (e.g., air velocity, fuel feeding rate) using ladder logic functions embedded in a Programmable Logic Controller (PLC). The developed control algorithm, with chamber temperature as the feedback signal, was integrated into the lab-scale swirling fluidized bed combustor (SFBC) to investigate the temperature distribution at different heights of the combustion chamber under various operating conditions. The air blower rates and fuel feeding rates obtained from automatic control operations were correlated with manual inputs. There was no observable difference in the correlated results, indicating that the written PLC program functions were adequate for the experimental study of the lab-scale SFBC. The experimental results were analyzed to study the effect of air velocity in the range 222-273 ft/min and fuel feeding rate in the range 60-90 rpm on the chamber temperature.
The developed temperature-based feedback control system was shown to be adequate for controlling the airflow and the fuel feeding rate in the overall biomass combustion process, as it helps to minimize the steady-state error.
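The feedback idea above can be sketched as a single control step: the deviation of chamber temperature from its target is the feedback signal, and a proportional rule nudges air velocity and fuel feeding rate back within the operating ranges quoted in the abstract. The gains, setpoint, and the sign convention (more air dilutes and cools, more fuel heats) are assumptions for illustration; the actual controller lives in PLC ladder logic, not Python.

```python
def control_step(t_chamber, t_set, air, feed,
                 k_air=2.0, k_feed=0.5,
                 air_range=(222.0, 273.0), feed_range=(60.0, 90.0)):
    """One proportional update: return new (air velocity ft/min, feeder rpm),
    clamped to the operating ranges from the study."""
    error = t_set - t_chamber          # feedback signal: deviation from target
    air = min(max(air - k_air * error, air_range[0]), air_range[1])
    feed = min(max(feed + k_feed * error, feed_range[0]), feed_range[1])
    return air, feed

# Chamber running 20 degrees hot: blow more air, feed less fuel.
air, feed = control_step(t_chamber=820.0, t_set=800.0, air=240.0, feed=75.0)
```

A proportional-only rule like this leaves a residual steady-state error; the abstract's claim that the system "helps to minimize the steady-state error" suggests the PLC logic behaves at least this well, and an integral term would remove the residual entirely.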

Keywords: air flow, biomass combustion, feedback control signal, fuel feeding, ladder logic, programmable logic controller, temperature

Procedia PDF Downloads 120
565 Effects of Drying and Extraction Techniques on the Profile of Volatile Compounds in Banana Pseudostem

Authors: Pantea Salehizadeh, Martin P. Bucknall, Robert Driscoll, Jayashree Arcot, George Srzednicki

Abstract:

Banana is one of the most important crops, produced in large quantities in tropical and sub-tropical countries. Of the total plant material grown, approximately 40% is considered waste and left in the field to decay. This practice allows fungal diseases such as Sigatoka leaf spot to develop, limiting plant growth and spreading spores in the air that can cause respiratory problems in the surrounding population. The pseudostem is considered a waste residue of production (60 to 80 tonnes/ha/year), although it is a good source of dietary fibre and volatile organic compounds (VOCs). Strategies to process banana pseudostem into palatable, nutritious and marketable food materials could provide significant social and economic benefits. Extraction of VOCs with desirable odours from dried and fresh pseudostem could improve the smell of products from the confectionery and bakery industries, and incorporation of banana pseudostem flour into bakery products could provide cost savings and improve nutritional value. The aim of this study was to determine the effects of drying methods and banana species on the profile of volatile aroma compounds in dried banana pseudostem. The banana species analyzed were Musa acuminata and Musa balbisiana. Fresh banana pseudostem samples were processed by either freeze-drying (FD) or heat pump drying (HPD). The extraction of VOCs was performed at ambient temperature using vacuum distillation, and the resulting, mostly aqueous, distillates were analyzed using headspace solid phase microextraction (SPME) gas chromatography - mass spectrometry (GC-MS). Optimal SPME adsorption conditions were 50 °C for 60 min using a Supelco 65 μm PDMS/DVB StableFlex fiber. Compounds were identified by comparison of their electron impact mass spectra with those from the Wiley 9 / NIST 2011 combined mass spectral library. The results showed that the two species have notably different VOC profiles.
Both species contained VOCs established in the literature to have pleasant, appetizing aromas. These included l-menthone, D-limonene, trans-linalool oxide, 1-nonanol, cis-6-nonen-1-ol, 2,6-nonadien-1-ol, 4-methylbenzenemethanol, 3-methyl-1-butanol, hexanal, 2-methyl-1-propanol and 2-methyl-2-butanol. The results show that banana pseudostem VOCs are better preserved by FD than by HPD. This study is still in progress and should lead to the optimization of processing techniques that would promote the utilization of banana pseudostem in the food industry.

Keywords: heat pump drying, freeze drying, SPME, vacuum distillation, VOC analysis

Procedia PDF Downloads 312
564 Total Arterial Coronary Revascularization with Aorto-Bifemoral Bipopliteal Bypass: A Case Report

Authors: Nuruddin Mohammod Zahangir, Syed Tanvir Ahmady, Firoz Ahmed, Mainul Kabir, Tamjid Mohammad Najmus Sakib Khan, Nazmul Hossain, Niaz Ahmed, Madhava Janardhan Naik

Abstract:

The management of combined coronary artery disease and peripheral vascular disease is a challenge and brings with it numerous clinical dilemmas. A 56-year-old gentleman presented to our department with significant triple vessel disease, with occlusion of the lower end of the aorta just before the bifurcation and of the bilateral superficial femoral arteries. The operation was performed on 11.03.14. The Left Internal Mammary Artery (LIMA) and the Right Internal Mammary Artery (RIMA) were harvested in a skeletonized manner. The free RIMA was then anastomosed to the LIMA to make a LIMA-RIMA Y. Cardiopulmonary bypass was then established and the coronary artery bypass grafts performed: the LIMA was anastomosed to the Left Anterior Descending artery, and the RIMA was anastomosed to the Posterior Descending Artery and the 1st and 2nd Obtuse Marginal arteries in a sequential manner. The abdomen was opened by a midline incision. The infrarenal aorta was exposed and found to be severely diseased. A vascular clamp was applied infrarenally, an aortotomy performed, and a limited endarterectomy carried out. An end-to-side anastomosis was made between the upper end of a PTFE synthetic Y-graft (14/7 mm) and the infrarenal aorta, and the clamp was released. Good flow was noted in both limbs of the graft. The patient was then slowly weaned off cardiopulmonary bypass without difficulty. The distal two limbs of the Y-graft were passed to the groin through retroperitoneal tunnels and anastomosed end-to-side to the common femoral arteries. Saphenous vein was interposed between the common femoral and popliteal arteries bilaterally through subfascial tunnels in both thighs. On the 12th postoperative day he was discharged from hospital in good general condition. At follow-up three months after the operation, the patient was doing well, free of chest pain and claudication.

Keywords: total arterial, coronary revascularization, aorto-bifemoral bypass, bifemoro-bipopliteal bypass

Procedia PDF Downloads 457
563 Atmospheric Circulation Types Related to Dust Transport Episodes over Crete in the Eastern Mediterranean

Authors: K. Alafogiannis, E. E. Houssos, E. Anagnostou, G. Kouvarakis, N. Mihalopoulos, A. Fotiadi

Abstract:

The Mediterranean basin is an area where different aerosol types coexist, including urban/industrial, desert dust, biomass burning and marine particles. In particular, mineral dust aerosols, mostly originating from the North African deserts, contribute significantly to the high aerosol loads above the Mediterranean. Dust transport, controlled by the variation of the atmospheric circulation throughout the year, results in strong spatial and temporal variability of aerosol properties. In this study, the synoptic conditions that favour dust transport over the Eastern Mediterranean are thoroughly investigated. For this purpose, three datasets are employed. First, ground-based daily data of aerosol properties, namely Aerosol Optical Thickness (AOT), Ångström exponent (α440-870) and fine fraction from the FORTH-AERONET (Aerosol Robotic Network) station, along with measurements of PM10 concentrations from the Finokalia station for the period 2003-2011, are used to identify days with a high coarse aerosol load (episodes) over Crete. Then, geopotential heights at the 1000, 850 and 700 hPa levels, obtained from the NCEP/NCAR Reanalysis Project, are used to depict the atmospheric circulation during the identified episodes. Additionally, air-mass back trajectories calculated by HYSPLIT are used to verify the origin of the aerosols from neighbouring deserts. For the 227 identified dust episodes, the statistical methods of Factor and Cluster Analysis are applied to the corresponding atmospheric circulation data to reveal the main types of synoptic conditions favouring dust transport towards Crete (Eastern Mediterranean). The 227 cases are classified into 11 distinct types (clusters). Dust episodes in the Eastern Mediterranean are found to be more frequent (52%) in spring, with a secondary maximum in autumn.
The main characteristic of the atmospheric circulation associated with dust episodes is the presence of a surface low-pressure system, either over southwestern Europe or over the western/central Mediterranean, which induces a southerly air flow favouring dust transport from the African deserts. The exact position and intensity of the low-pressure system vary notably among clusters. More rarely, dust may originate from the deserts of the Arabian Peninsula.
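The classification step above (Factor Analysis to compress each episode's circulation field, then Cluster Analysis to group episodes into types) can be sketched with a minimal k-means. The two-dimensional "factor scores" below are invented numbers standing in for the real factor scores of the 1000/850/700 hPa geopotential height fields, and the deterministic initialisation is a simplification; a real analysis would use many episodes, more factors, and a robust clustering setup.

```python
def kmeans(points, k, iters=20):
    """Minimal k-means on tuples. Deterministic init for the sketch:
    the first k points serve as initial cluster centres."""
    centres = list(points[:k])
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # Assign each episode to its nearest centre (squared distance).
            j = min(range(k), key=lambda i: sum((a - b) ** 2
                                                for a, b in zip(p, centres[i])))
            groups[j].append(p)
        # Move each centre to the mean of its group (keep it if the group emptied).
        centres = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centres[i]
                   for i, g in enumerate(groups)]
    return centres, groups

# Six episodes whose factor scores form two obvious circulation "types":
episodes = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),
            (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centres, groups = kmeans(episodes, k=2)
```

In the study the same logic, applied to 227 episodes, yields 11 such types, each corresponding to a distinct position and intensity of the low-pressure system.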

Keywords: aerosols, atmospheric circulation, dust particles, Eastern Mediterranean

Procedia PDF Downloads 218
562 The Use of Food Industry Bio-Products for Sustainable Lactic Acid Bacteria Encapsulation

Authors: Paulina Zavistanaviciute, Vita Krungleviciute, Elena Bartkiene

Abstract:

Lactic acid bacteria (LAB) are microbial supplements that increase the nutritional, therapeutic, and safety value of food and feed. LAB strains are often incubated in an expensive, commercially available de Man-Rogosa-Sharpe (MRS) medium; the cultures are centrifuged, and the cells are washed with sterile water. Potato juice and apple juice industry bio-products are industrial wastes which may constitute a source of digestible nutrients for microorganisms. Due to their low cost and good chemical composition, potato juice and apple juice production bio-products could have a potential application in LAB encapsulation. In this study, pure LAB (P. acidilactici and P. pentosaceus) were multiplied in a medium of crushed potato juice and apple juice industry bio-products. Before use, the bio-products were sterilised and filtered. No additives were added to the mass, except that the apple juice industry bio-products were diluted with sterile water (1/5; v/v). The sterilised mass and a LAB cell suspension (5 mL) containing 8.9 log10 colony-forming units (cfu) per mL of P. acidilactici and P. pentosaceus were used to multiply the LAB for 72 h. The final colony number in the potato juice and apple juice bio-product substrate was on average 9.60 log10 cfu/g. To stabilise the LAB, two methods of dehydration were tested: lyophilisation (Milrock, Kieffer Lane, Kingston, USA) and dehydration in a spray-drying system (SD-06, Keison, Great Britain). The LAB multiplied in the crushed potato juice and apple juice bio-product medium were fed into the spray-drying system by peristaltic pump (inlet temperature +60 °C, inlet air temperature +150 °C, outgoing air temperature +80 °C, air flow 200 m3/h).
After lyophilisation (-48 °C) and spray drying (+150 °C), the viable cell concentration in the fermented potato juice powder was 9.18 ± 0.09 log10 cfu/g and 9.04 ± 0.07 log10 cfu/g, respectively, and in the apple mass powder 8.03 ± 0.04 log10 cfu/g and 7.03 ± 0.03 log10 cfu/g, respectively. Results indicated that after 12 months of storage at room temperature (22 ± 2 °C), the LAB count in the dehydrated products was 5.18 log10 cfu/g and 7.00 log10 cfu/g in the spray-dried and lyophilised potato juice powder, respectively, and 3.05 log10 cfu/g and 4.10 log10 cfu/g in the spray-dried and lyophilised apple juice industry bio-product powder, respectively. According to the obtained results, potato juice could be used as an alternative substrate for P. acidilactici and P. pentosaceus cultivation, and the powders obtained by drying can be used in the food/feed industry as LAB starters. The apple juice industry by-products, however, should be modified before spray drying and lyophilisation (i.e., by using different starches) in order to improve their encapsulation performance.
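The storage losses reported above are plain differences on the log10 cfu/g scale. A short sketch of that bookkeeping, using the figures quoted in the abstract (the helper function name is ours):

```python
# Viability bookkeeping on the log10 cfu/g scale: "log reduction" between
# two time points is simply the difference of the log counts.
def log_reduction(before_log10, after_log10):
    """Viability loss in log10 cfu/g between two time points."""
    return round(before_log10 - after_log10, 2)

# Figures from the abstract (log10 cfu/g): just after drying vs. after
# 12 months of storage at room temperature.
samples = {
    "potato, lyophilised": (9.18, 7.00),
    "potato, spray-dried": (9.04, 5.18),
    "apple, lyophilised":  (8.03, 4.10),
    "apple, spray-dried":  (7.03, 3.05),
}
for name, (t0, t12) in samples.items():
    print(f"{name}: {log_reduction(t0, t12)} log10 cfu/g lost in storage")
```

The comparison makes the abstract's conclusion visible: lyophilised powders lose roughly 2-4 log units in storage, while spray-dried ones lose about 4, with the apple substrate performing worst in both cases.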

Keywords: bio-products, encapsulation, lactic acid bacteria, sustainability

Procedia PDF Downloads 267
561 Impact of the Oxygen Content on the Optoelectronic Properties of the Indium-Tin-Oxide Based Transparent Electrodes for Silicon Heterojunction Solar Cells

Authors: Brahim Aissa

Abstract:

Transparent conductive oxides (TCOs) used as front electrodes in solar cells must simultaneously feature high electrical conductivity, low contact resistance with the adjacent layers, and an appropriate refractive index for maximal light in-coupling into the device. However, these properties may conflict with each other, thereby motivating the search for high-performance TCOs. Additionally, due to the presence of temperature-sensitive layers in many solar cell designs (for example, thin-film silicon and silicon heterojunction (SHJ)), low-temperature deposition processes are more suitable. Several deposition techniques have already been explored to fabricate high-mobility TCOs at low temperatures, including sputter deposition, chemical vapor deposition, and atomic layer deposition. Among this variety of methods, to the best of our knowledge, magnetron sputtering is the most established technique, despite the fact that it can damage underlying layers. Sn-doped In₂O₃ (ITO) is the most commonly used transparent electrode-contact in SHJ technology. In this work, we studied the properties of ITO thin films grown by RF sputtering. Using different oxygen fractions in the argon/oxygen plasma, we prepared ITO films deposited on glass substrates, on the one hand, and on a-Si (p and n-types):H/intrinsic a-Si/glass substrates, on the other hand. Hall effect measurements were systematically conducted, together with total-transmittance (TT) and total-reflectance (TR) spectrometry. The electrical properties were drastically affected, whereas the TT and TR were found to be only slightly impacted by the oxygen variation. Furthermore, time-of-flight secondary ion mass spectrometry (TOF-SIMS) was used to determine the distribution of various species throughout the thickness of the ITO and at the various interfaces.
The depth profiling of indium, oxygen, tin, silicon, phosphorous, boron and hydrogen was investigated throughout the various thicknesses and interfaces, and the obtained results are discussed accordingly. Finally, the extreme conditions were selected to fabricate rear emitter SHJ devices, and the photovoltaic performance was evaluated; the lower oxygen flow ratio was found to yield the best performance, attributed to a lower series resistance.
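For context, the Hall effect measurements mentioned above relate sheet resistance, carrier density and mobility through the standard single-carrier relations. A minimal sketch with hypothetical film values (none of the numbers below come from this study):

```python
# Standard single-carrier Hall analysis for a thin film: from the measured
# sheet resistance and Hall sheet resistance, recover resistivity, carrier
# density and mobility. All input values are hypothetical illustrations.
q = 1.602176634e-19          # elementary charge, C

def hall_parameters(R_sheet, R_hall_sheet, thickness_m):
    """Resistivity (ohm*m), carrier density (m^-3), mobility (m^2/Vs)."""
    rho = R_sheet * thickness_m                    # bulk resistivity
    n = 1.0 / (q * R_hall_sheet * thickness_m)     # carrier density
    mu = 1.0 / (q * n * rho)                       # Hall mobility
    return rho, n, mu

# Hypothetical 100 nm ITO film:
rho, n, mu = hall_parameters(R_sheet=50.0, R_hall_sheet=0.03, thickness_m=100e-9)
print(f"rho = {rho:.2e} ohm*m, n = {n:.2e} m^-3, mu = {mu * 1e4:.1f} cm^2/Vs")
```

Raising the oxygen fraction in the plasma typically trades carrier density against mobility in ITO, which is why the electrical properties respond so strongly while transmittance changes little.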

Keywords: solar cell, silicon heterojunction, oxygen content, optoelectronic properties

Procedia PDF Downloads 143
560 Collaborative and Experimental Cultures in Virtual Reality Journalism: From the Perspective of Content Creators

Authors: Radwa Mabrook

Abstract:

Virtual Reality (VR) content creation is a complex and expensive process, which requires multi-disciplinary teams of content creators. Grant schemes from technology companies help media organisations to explore VR's potential in journalism and factual storytelling. Media organisations try to do as much as they can in-house, but they may outsource due to time constraints and skill availability. Journalists, game developers, sound designers and creative artists work together and bring in new cultures of work. This study explores the collaborative, experimental nature of VR content creation by tracing every actor involved in the process and examining their perceptions of the VR work. The study builds on Actor Network Theory (ANT), which decomposes phenomena into their basic elements and traces the interrelations among them. Accordingly, the researcher conducted 22 semi-structured interviews with VR content creators between November 2017 and April 2018. Purposive and snowball sampling techniques allowed the researcher to recruit fact-based VR content creators from production studios and media organisations, as well as freelancers. Interviews lasted up to three hours and were a mix of Skype calls and in-person interviews. Participants consented to their interviews being recorded and to their names being revealed in the study. The researcher coded the interview transcripts in NVivo software, looking for key themes corresponding to the research questions. The study revealed that VR content creators must be adaptive to change, open to learning and comfortable with mistakes. The VR content creation process is highly iterative because VR has no established workflow or visual grammar. Multi-disciplinary VR team members often speak different professional languages, making it hard to communicate. However, adaptive content creators perceive VR work as a fun experience and an opportunity to learn.
The traditional sense of competition and the drive for information exclusivity are now replaced by a strong drive for knowledge sharing. VR content creators are open to sharing their methods of work and their experiences. They aim to build a collaborative network that harnesses VR technology for journalism and factual storytelling. Indeed, VR is instilling collaborative and experimental cultures in journalism.

Keywords: collaborative culture, content creation, experimental culture, virtual reality

Procedia PDF Downloads 116
559 Controllable Modification of Glass-Crystal Composites with Ion-Exchange Technique

Authors: Andrey A. Lipovskii, Alexey V. Redkov, Vyacheslav V. Rusan, Dmitry K. Tagantsev, Valentina V. Zhurikhina

Abstract:

The presented research relates to the development of a recently proposed technique for the formation of composite materials, such as optical glass-ceramics, with a predetermined structure and properties of the crystalline component. The technique is based on controlling the size and concentration of the crystalline grains using the phenomenon of glass-ceramics decrystallization (vitrification) induced by ion exchange. This phenomenon was discovered and explained in the early 2000s, while a related theoretical description was only given in 2016. In general, the developed theory enables one to model the process and optimize the conditions of ion-exchange processing of glass-ceramics so as to obtain given properties of the crystalline component, in particular, a given profile of the average size of the crystalline grains. The optimization is possible if one knows two dimensionless parameters of the theoretical model. One of them (β) is directly related to the solubility of the crystalline component of the glass-ceramics in the glass matrix, and the other (γ) is equal to the ratio of the characteristic times of ion-exchange diffusion and crystalline grain dissolution. The presented study is dedicated to the development of an experimental technique and simulations that allow these parameters to be determined. It is shown that the parameters can be deduced from data on the spatial distributions of diffusant concentration and average crystalline grain size in glass-ceramics samples subjected to ion-exchange treatment. Measurements at no fewer than two temperatures, with two processing times at each temperature, are necessary. The composite material used was a silica-based glass-ceramics with crystalline grains of Li2O·SiO2. Cubic samples of the glass-ceramics (6x6x6 mm3) underwent the ion exchange process in a NaNO3 salt melt at 520 °C (for 16 and 48 h), 540 °C (for 8 and 24 h), 560 °C (for 4 and 12 h), and 580 °C (for 2 and 8 h).
The ion exchange processing resulted in vitrification of the glass-ceramics in the subsurface layers where ion-exchange diffusion took place. Slabs about 1 mm thick were cut from the central part of the samples and their large facets were polished. These slabs were used to find the profiles of diffusant concentration and average crystalline grain size. The concentration profiles were determined from refractive index profiles measured with a Mach-Zehnder interferometer, and the profiles of average crystalline grain size were determined with micro-Raman spectroscopy. Numerical simulations were based on the developed theoretical model of glass-ceramics decrystallization induced by ion exchange. The simulations were carried out for different values of the β and γ parameters under all of the above-mentioned ion exchange conditions. As a result, the temperature dependences of the parameters which provided reliable agreement between simulated and experimental data were found. This ensured adequate modeling of the glass-ceramics decrystallization process in the 520-580 °C temperature interval. The developed approach provides a powerful tool for fine-tuning the glass-ceramics structure, namely, the concentration and average size of the crystalline grains.
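The coupled diffusion-dissolution picture can be illustrated with a toy 1-D finite-difference model. The functional forms below are our own simplification, not the published equations; β and γ only play the roles assigned to them above (a solubility parameter and a ratio of diffusion to dissolution time scales):

```python
import numpy as np

# Toy 1-D sketch: ion-exchange diffusion of the incoming ion into the
# subsurface layer drives dissolution of crystalline grains there.
# Dimensionless units; the dissolution law r' = -gamma*beta*c is an
# illustrative simplification of the described model.
nx, nt = 100, 20000
dx = 1.0 / nx
dt = 0.2 * dx**2                  # satisfies explicit-scheme stability (dt/dx^2 < 0.5)
beta, gamma = 0.8, 5.0            # hypothetical values of the two parameters

c = np.zeros(nx)                  # normalised diffusant concentration
c[0] = 1.0                        # surface held at the salt-melt concentration
r = np.ones(nx)                   # normalised average grain radius

for _ in range(nt):
    # explicit diffusion step
    c[1:-1] += dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c[0], c[-1] = 1.0, c[-2]      # fixed surface value, zero-flux deep boundary
    # grains shrink where the diffusant is present, until fully dissolved
    r -= dt * gamma * beta * c * (r > 0)
    np.clip(r, 0.0, None, out=r)

# Near the surface the grains dissolve (vitrified layer); deep grains survive.
print(f"r(surface) = {r[0]:.2f}, r(bulk) = {r[-1]:.2f}")
```

Fitting simulated grain-size profiles of this kind to the measured micro-Raman profiles at several temperatures and times is, in spirit, how the temperature dependences of β and γ are extracted.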

Keywords: diffusion, glass-ceramics, ion exchange, vitrification

Procedia PDF Downloads 260
558 Digital Transformation of Lean Production: Systematic Approach for the Determination of Digitally Pervasive Value Chains

Authors: Peter Burggräf, Matthias Dannapfel, Hanno Voet, Patrick-Benjamin Bök, Jérôme Uelpenich, Julian Hoppe

Abstract:

The increasing digitalization of value chains can help companies to handle the rising complexity in their processes and thereby reduce the steadily increasing planning and control effort in order to raise performance limits. Due to technological advances, companies face the challenge of smart value chains for the purpose of improving productivity, handling increasing time and cost pressure, and meeting the need for individualized production. Therefore, companies need to ensure quick and flexible decisions to create self-optimizing processes and, consequently, to make their production more efficient. Lean production, the most commonly used paradigm for complexity reduction, reaches its limits when it comes to variant-flexible production and constantly changing market and environmental conditions. To lift the performance limits inbuilt in current value chains, new methods and tools must be applied. Digitalization provides the potential to derive these new methods and tools. However, companies lack the experience to harmonize different digital technologies, and there is no practicable framework that guides the transformation of current value chains into digitally pervasive value chains. Current research shows that a connection between lean production and digitalization exists, based on factors such as people, technology and organization. The method for the determination of digitally pervasive value chains introduced in this paper takes the factors people, technology and organization into account and extends existing approaches by a new dimension. It is the first systematic approach for the digital transformation of lean production and consists of four steps: The first step, ‘target definition’, describes the target situation and defines the depth of the analysis with regard to the inspection area and the level of detail.
The second step, ‘analysis of the value chain’, verifies the lean-ability of processes and places a special focus on the integration capacity of digital technologies in order to raise the limits of lean production. The third step, the ‘digital evaluation process’, ensures the usefulness of digital adaptations regarding their practicability and their integrability into the existing production system. In the final step, the method defines the actions to be performed based on the evaluation process and in accordance with the target situation. The validation and optimization of the proposed method in a German company from the electronics industry shows that the digital transformation of current value chains based on lean production raises their inbuilt performance limits.

Keywords: digitalization, digital transformation, Industrie 4.0, lean production, value chain

Procedia PDF Downloads 299
557 Decision Support System for Hospital Selection in Emergency Medical Services: A Discrete Event Simulation Approach

Authors: D. Tedesco, G. Feletti, P. Trucco

Abstract:

The present study aims to develop a Decision Support System (DSS) to support the operational decisions of Emergency Medical Services (EMS) regarding the assignment of medical emergency requests to Emergency Departments (EDs). In the literature, this problem is known as “hospital selection” and concerns the definition of policies for selecting the ED to which patients who require further treatment are transported by ambulance. The research methodology began with a review of the technical-scientific literature on DSSs supporting EMS management and, in particular, the hospital selection decision. The literature analysis showed that current studies focus mainly on the EMS phases related to the ambulance service and consider a process that ends when the ambulance becomes available after completing a request. All ED-related issues are therefore excluded and treated as part of a separate process. Indeed, the most studied hospital selection policy turned out to be proximity, which minimizes transport time and releases the ambulance as quickly as possible. The purpose of the present study is to develop an optimization model for assigning medical emergency requests to EDs that considers information on the subsequent phases of the process, such as the case-mix, the expected service throughput times, and the operational capacity of the different EDs. To this end, a Discrete Event Simulation (DES) model was created to evaluate different hospital selection policies. The next steps of the research consisted of developing a general simulation architecture, implementing it in the AnyLogic software and validating it on a realistic dataset.
The hospital selection policy that produced the best results was the minimization of the Time To Provider (TTP), defined as the time from the start of the ambulance journey to the ED until the beginning of the clinical evaluation by the doctor. Finally, two approaches were compared: a static approach, based on a retrospective estimate of the TTP, and a dynamic approach, based on a predictive estimate of the TTP determined with a constantly updated Winters forecasting model. Findings reveal that adopting the minimization of TTP as the hospital selection policy brings several benefits. It significantly reduces service throughput times in the ED with only a minimal increase in travel time. Furthermore, it provides an immediate view of the saturation state of each ED and accounts for the case-mix present in the ED structures (i.e., the different triage codes), as different severity codes correspond to different service throughput times. Moreover, the predictive approach is more reliable for TTP estimation than the retrospective one, but is more difficult to apply. These considerations can support decision-makers in introducing different hospital selection policies to enhance EMS performance.
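The contrast between the proximity and minimum-TTP policies can be sketched with a few lines of simulation. Everything here is hypothetical (three synthetic EDs, fixed travel times, exponentially distributed patient workloads); the study itself uses a full DES in AnyLogic:

```python
import random

# Two dispatch policies: send to the nearest ED ("proximity") vs. the ED
# minimising predicted TTP = travel time + current expected wait.
EDS = ["ED_A", "ED_B", "ED_C"]
travel = {"ED_A": 8.0, "ED_B": 12.0, "ED_C": 15.0}     # minutes, hypothetical
service = {"ED_A": 30.0, "ED_B": 25.0, "ED_C": 20.0}   # mean workload per patient

def run(policy, n=200, interarrival=10.0, seed=1):
    rng = random.Random(seed)
    queue = {ed: 0.0 for ed in EDS}     # backlog, in minutes of work
    ttps = []
    for _ in range(n):
        for ed in EDS:                  # queues drain between arrivals
            queue[ed] = max(0.0, queue[ed] - interarrival)
        if policy == "proximity":
            ed = min(EDS, key=lambda e: travel[e])
        else:                           # minimise predicted TTP
            ed = min(EDS, key=lambda e: travel[e] + queue[e])
        ttps.append(travel[ed] + queue[ed])
        queue[ed] += rng.expovariate(1.0 / service[ed])  # this patient's workload
    return sum(ttps) / n

for policy in ("proximity", "min_ttp"):
    print(f"{policy}: mean TTP = {run(policy):.1f} min")
```

Even this crude sketch reproduces the qualitative finding: always choosing the nearest ED saturates it and inflates waiting, while the TTP-minimising policy trades a few minutes of travel for a much shorter door-to-doctor time.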

Keywords: discrete event simulation, emergency medical services, forecast model, hospital selection

Procedia PDF Downloads 81
556 Computer-Assisted Management of Building Climate and Microgrid with Model Predictive Control

Authors: Vinko Lešić, Mario Vašak, Anita Martinčević, Marko Gulin, Antonio Starčić, Hrvoje Novak

Abstract:

Accounting for 40% of total world energy consumption, building systems are developing into technically complex, large energy consumers suited to sophisticated power management approaches that can greatly increase energy efficiency and even turn buildings into active energy market participants. A centralized control system for building heating and cooling, managed by economically optimal model predictive control, shows promising results, with an estimated 30% increase in energy efficiency. The research focuses on implementing such a method in a case study performed on two floors of our faculty building, with wireless sensor data acquisition, remotely controlled heating/cooling units and a central climate controller. The building walls are mathematically modeled with the corresponding material types, surface shapes and sizes. The models are then exploited to predict thermal characteristics and changes in different building zones. Exterior influences such as environmental conditions and weather forecasts, occupant behavior and comfort demands are all taken into account in deriving price-optimal climate control. Finally, a DC microgrid with photovoltaics, a wind turbine, a supercapacitor, batteries and fuel cell stacks is added to make the building a unit capable of active participation in a price-varying energy market. The computational burden of applying model predictive control to such a complex system is relaxed through a hierarchical decomposition of the microgrid and climate control: the former is designed as the higher hierarchical level with pre-calculated price-optimal power flow control, while the latter is designed as the lower-level control responsible for ensuring thermal comfort and exploiting the optimal supply conditions enabled by microgrid energy flow management. This approach is expected to enable the inclusion of more complex building subsystems in order to further increase energy efficiency.
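The climate-control layer can be illustrated with a toy one-zone sketch: a simple RC-type thermal model is rolled out over a short horizon, and the cheapest on/off heating schedule that respects a comfort band is found by exhaustive search (a stand-in for the economic MPC solver; all model parameters and prices are hypothetical):

```python
import itertools

# One-zone discrete thermal model: T[k+1] = T_out + a*(T[k] - T_out) + b*u[k],
# where u is the on/off heating input. Parameters and prices are made up.
a = 0.9                # thermal inertia: fraction of (T - T_out) retained per step
b = 2.0                # temperature gain per step at full heating (degC)
T_out = 5.0            # outdoor temperature (degC)
T0 = 20.0              # initial indoor temperature
comfort = (19.0, 23.0) # comfort band (degC)
price = [0.05, 0.05, 0.20, 0.20, 0.20, 0.05, 0.05, 0.05]  # energy price per step

def rollout(schedule, T=T0):
    """Simulate a heating schedule; return temperatures and energy cost."""
    temps, cost = [], 0.0
    for u, p in zip(schedule, price):
        T = T_out + a * (T - T_out) + b * u
        temps.append(T)
        cost += p * u
    return temps, cost

# Economic MPC stand-in: enumerate all 2^8 schedules, keep the cheapest
# one that stays inside the comfort band (pre-heats before expensive hours).
best = None
for schedule in itertools.product([0, 1], repeat=len(price)):
    temps, cost = rollout(schedule)
    if all(comfort[0] <= t <= comfort[1] for t in temps):
        if best is None or cost < best[1]:
            best = (schedule, cost)

print("schedule:", best[0], "cost:", round(best[1], 2))
```

A real implementation would replace the enumeration with a proper optimization solver over continuous inputs and many zones, and the hierarchical scheme described above would feed the price signal from the microgrid-level power flow optimization.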

Keywords: price-optimal building climate control, microgrid power flow optimisation, hierarchical model predictive control, energy-efficient buildings, energy market participation

Procedia PDF Downloads 452