Search results for: emission scenarios
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2773

403 The Grammar of the Content Plane as a Style Marker in Forensic Authorship Attribution

Authors: Dayane de Almeida

Abstract:

This work presents a study demonstrating the usability of categories of analysis from Discourse Semiotics (also known as Greimassian Semiotics) in authorship cases in forensic contexts. It is necessary to know whether the categories examined in semiotic analysis (the ‘grammar’ of the content plane) can distinguish authors. Thus, a study was performed with 4 sets of texts from a corpus of ‘not on demand’ written samples (texts that differ in degree of formality, purpose, addressees, themes, etc.). Each author contributed 20 texts, separated into 2 groups of 10 (Author1A, Author1B, and so on). The hypothesis was that texts from a single author are semiotically more similar to each other than texts from different authors. The assumptions and issues that led to this idea are as follows:
- The features analyzed in authorship studies mostly relate to the expression plane: they are manifested on the ‘surface’ of texts. If language is both expression and content, content would also have to be considered for more accurate results. Style is present in both planes.
- Semiotics postulates that the content plane is structured in a ‘grammar’ that underlies expression and presents different levels of abstraction. This ‘grammar’ would be a style marker.
- Sociolinguistics demonstrates intra-speaker variation: an individual employs different linguistic uses in different situations. How, then, can one determine whether someone is the author of several texts of distinct nature (as is the case in most forensic sets), when intra-speaker variation is known to depend on so many factors?
- The idea is that the more abstract the level in the content plane, the lower the intra-speaker variation, because there is a greater chance for the author to choose the same thing. If two authors recurrently choose the same options, differently from one another, each one’s option has discriminatory power.
- Size is another issue for various attribution methods. Since most texts in real forensic settings are short, methods relying only on the expression plane tend to fail. The analysis of the content plane as proposed by Greimassian semiotics would be less size-dependent.
The semiotic analysis was performed using the software Corpus Tool, generating tags to allow the counting of data. Similarities and differences were then measured quantitatively through the application of the Jaccard coefficient (a statistical measure that compares the similarities and differences between samples). The results confirmed the hypothesis; hence, the grammatical categories of the content plane may successfully be used in questioned authorship scenarios.
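
As a toy illustration of the similarity measure used above: a minimal sketch of the Jaccard coefficient in Python, assuming each text has already been reduced to a set of content-plane category tags (the category names below are hypothetical, not taken from the study).

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard coefficient |A intersect B| / |A union B|, ranging from 0 to 1."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical content-plane category sets for two texts by the same author.
text_1a = {"euphoria", "conjunction", "high_intensity", "individual_actor"}
text_1b = {"euphoria", "conjunction", "low_intensity", "individual_actor"}
print(jaccard(text_1a, text_1b))  # 0.6; higher values = semiotically more similar
```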

Keywords: authorship attribution, content plane, forensic linguistics, Greimassian semiotics, intra-speaker variation, style

Procedia PDF Downloads 239
402 Detailed Analysis of Multi-Mode Optical Fiber Infrastructures for Data Centers

Authors: Matej Komanec, Jan Bohata, Stanislav Zvanovec, Tomas Nemecek, Jan Broucek, Josef Beran

Abstract:

With the exponential growth of social networks, video streaming and increasing demands on data rates, the number of newly built data centers rises proportionately. The data centers, however, have to adjust to the rapidly increased amount of data that has to be processed. For this purpose, multi-mode (MM) fiber based infrastructures are often employed. This stems from the fact that connections in data centers are typically realized over short distances, where the application of MM fibers and components considerably reduces costs. On the other hand, the use of MM components brings specific requirements for installation and service conditions. Moreover, it has to be taken into account that MM fiber components have higher production tolerances for parameters like core and cladding diameters, eccentricity, etc. Due to the high demands on the reliability of data center components, the determination of a properly excited optical field inside the MM fiber core is among the key parameters when designing such an MM optical system architecture. An appropriately excited mode field of the MM fiber provides an optimal power budget in connections, leads to a decrease of insertion losses (IL) and achieves effective modal bandwidth (EMB). The main parameter, in this case, is the encircled flux (EF), which should be properly defined for variable optical sources and the consequent different mode-field distributions. In this paper, we present a detailed investigation and measurements of the mode-field distribution for short MM links intended in particular for data centers, with an emphasis on reliability and safety. These measurements are essential for large MM network design. Various scenarios, containing different fibers and connectors, were tested in terms of IL and mode-field distribution to reveal potential challenges. Furthermore, particular defects and errors that can realistically occur, such as eccentricity, connector shift or dust, were simulated and measured, and their influence on EF statistics and on the functionality of the data center infrastructure was evaluated. The experimental tests were performed at the two wavelengths commonly used in MM networks, 850 nm and 1310 nm, to verify the EF statistics. Finally, we provide recommendations for data center systems and networks using OM3 and OM4 MM fiber connections.
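
For readers unfamiliar with the metric, encircled flux EF(r) is the fraction of the total near-field power contained within radius r of the fiber core center. A minimal numerical sketch, assuming a radially symmetric intensity profile; the Gaussian-like profile below is illustrative, not measured data.

```python
import numpy as np

def encircled_flux(rho, intensity, r):
    """Fraction of total near-field power within radius r for a radially
    symmetric profile I(rho); annular power density is I(rho) * 2*pi*rho."""
    w = intensity * 2.0 * np.pi * rho
    seg = 0.5 * (w[1:] + w[:-1]) * np.diff(rho)   # trapezoidal integration segments
    return seg[rho[1:] <= r].sum() / seg.sum()

rho = np.linspace(0.0, 25.0, 500)        # radius grid in um (50 um core fiber)
intensity = np.exp(-(rho / 12.0) ** 2)   # illustrative Gaussian-like near field
# IEC 61280-4-1 templates bound EF at fixed radii (commonly cited: 4.5 and 19 um).
print(encircled_flux(rho, intensity, 19.0))
```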

Keywords: optical fiber, multi-mode, data centers, encircled flux

Procedia PDF Downloads 374
401 Using Nature-Based Solutions to Decarbonize Buildings in Canadian Cities

Authors: Zahra Jandaghian, Mehdi Ghobadi, Michal Bartko, Alex Hayes, Marianne Armstrong, Alexandra Thompson, Michael Lacasse

Abstract:

The Intergovernmental Panel on Climate Change (IPCC) report stated the urgent need to cut greenhouse gas emissions to avoid the adverse impacts of climatic changes. The United Nations has forecast that nearly 70 percent of people will live in urban areas by 2050, resulting in a doubling of the global building stock. Given that buildings are currently recognised as emitting 40 percent of global carbon emissions, there is an urgent incentive to decarbonize existing buildings and to build net-zero carbon buildings. Attaining net-zero carbon emissions in communities in the future requires action in two directions: (i) reduction of emissions; and (ii) removal of on-going emissions from the atmosphere once de-carbonization measures have been implemented. Nature-based solutions (NBS) have a significant role to play in achieving net-zero carbon communities, spanning both emission reductions and removal of on-going emissions. NBS for the decarbonisation of buildings can be implemented by using green roofs and green walls – increasing vertical and horizontal vegetation on building envelopes – and by using nature-based materials that either emit less heat to the atmosphere, thus decreasing photochemical reaction rates, or store a substantial amount of carbon within their structure over the whole building service life. The NBS approach can also mitigate urban flooding and overheating, improve urban climate and air quality, and provide better living conditions for the urban population. For existing buildings, de-carbonization mostly requires retrofitting existing envelopes efficiently to use NBS techniques, whereas for future construction, de-carbonization involves designing new buildings with low-carbon materials as well as with the integrity and system capacity to employ NBS effectively. This paper presents the opportunities and challenges in the de-carbonization of buildings using NBS for both building retrofits and new construction. This review documents the effectiveness of NBS in de-carbonizing Canadian buildings, identifies the missing links for implementing these techniques in cold climatic conditions, and determines a road map and immediate approaches to mitigate the adverse impacts of climate change such as urban heat islands. Recommendations are drafted for possible inclusion in the Canadian building and energy codes.

Keywords: decarbonization, nature-based solutions, GHG emissions, greenery enhancement, buildings

Procedia PDF Downloads 91
400 Changing the Biopower Hierarchy between Women’s Bodily Knowledge and the Medical Knowledge about the Body: The Case of Female Ejaculation and #Notpee

Authors: Lior B. Navon

Abstract:

The objective of this study is to investigate how technology, such as social media, can influence the biopower hierarchy between the medical knowledge about the body and women’s bodily knowledge, through the case study of the hashtag 'notpee'. In January 2015, the hashtag #notpee, relating to a feminine physiological phenomenon called female ejaculation (FE) or squirting (SQ), started circulating on Twitter. This hashtag, born as a reaction to a medical study claiming that SQ is essentially an involuntary emission of urine during sexual activity, sparked an unusual public discourse about FE, a phenomenon that is usually not discussed or referred to in socio-legitimate public spheres. This backlash drew the attention of women’s magazines and blogs, as well as larger and more mainstream outlets such as The Guardian and CNN. Both the tweets on Twitter and the media coverage of them were mainly aimed at rejecting the research’s findings. While not offering an alternative and choosing to define the phenomenon by negation, women argued that the fluid extracted was not pee, based on their personal experiences. Based on a critical discourse analysis of 742 tweets with the hashtag 'notpee' between January 2015 and January 2016, and of 15 articles covering the backlash, this study suggests that the #notpee backlash challenged the power balance between the medical knowledge about the feminine body and feminine bodily knowledge through two different, yet related, forms of resistance to biopower. The first is resistance to the authority over knowledge production: who has the power to produce 'true' statements when it comes to the body? Is it the women who experience the phenomenon, or is it the medical institution? The second resistance to biopower has to do with what we regard as facts or veracity. The critical discourse analysis reveals that while both the scientific field and the women arguing against its findings use empirical information, they nevertheless rely on two dichotomous bases of evidence: the scientific research relies on samples from the 'dead-like body', whereas these women rely on their lived subjective senses as a source of fact-making. Nevertheless, while #notpee asks to change the power relations between the feminine subjective bodily knowledge and the seemingly objective masculine medical knowledge about the body, it by no means dismisses the latter. These women are essentially asking the medical institution to take the subjective body into consideration as well as the objective one, while acknowledging and accepting the power of the latter over knowledge production.

Keywords: biopower, female ejaculation, new media, bodily knowledge

Procedia PDF Downloads 155
399 Architectural Identity in Manifestation of Tall-buildings' Design

Authors: Huda Arshadlamphon

Abstract:

Advancing frontiers of technology and industry are moving rapidly, influenced by economic and political phenomena. One vital phenomenon, which has consolidated the world into a single village, is globalization. In response, architecture and the built environment have faced numerous changes, adjustments, and developments. Tall buildings, as a product of globalization, represent prestigious icons, symbols, and landmarks for economically advanced countries. Nevertheless, this trend has been encountering several design challenges in incorporating architectural identity, traditions, and characteristics that enhance the built environment’s sociocultural values and traditions. These values and traditions are necessary to form a self-standing identity, leading to visual and spatial creativity, independence, and individuality; in other words, they maintain the inherited identity and avoid replication in all means and aspects. This paper, firstly, defines the globalization phenomenon, architectural identity, and the concerns of sociocultural values in relation to the traditional characteristics of the built environment. Secondly, through three case studies of tall buildings located in Jeddah city, Saudi Arabia – the Queen's Building, the National Commercial Bank Building (NCB), and the Islamic Development Bank Building – design strategies and methodologies for acclimating architectural identity and characteristics in tall buildings are discussed. The case studies highlight the buildings' sites and surroundings, concepts and inspirations, design elements, architectural forms and compositions, characteristics, issues, barriers, and trammels facing the design decisions, representation of facades, and selection of materials and colors. Furthermore, the research briefly elucidates the dominant factors that shape the architectural identity of Jeddah city. In conclusion, the study sets out a guideline of four design standards for preserving and developing architectural identity in tall buildings in Jeddah city: the scale of the urban and natural environment, the scale of architectural design elements, the integration of visual images, and the creation of spatial scenes and scenarios. The proposed guideline will encourage the development of an architectural identity aligned with zeitgeist demands and requirements, support the contemporary architectural movement toward tall buildings, and shore up self-standing identity in representing the sociocultural values and traditions of the built environment.

Keywords: architectural identity, built-environment, globalization, sociocultural values and traditions, tall-buildings

Procedia PDF Downloads 159
398 Grain Structure Evolution during Friction-Stir Welding of 6061-T6 Aluminum Alloy

Authors: Aleksandr Kalinenko, Igor Vysotskiy, Sergey Malopheyev, Sergey Mironov, Rustam Kaibyshev

Abstract:

From a thermo-mechanical standpoint, friction-stir welding (FSW) represents a unique combination of very large strains, high temperature and relatively high strain rate. Material behavior under such extreme deformation conditions is not well studied, and thus microstructural examinations of friction-stir welded materials are of essential academic interest. Moreover, a clear understanding of the microstructural mechanisms operating during FSW should improve our understanding of the microstructure-properties relationship in FSWed materials and thus enable us to optimize their service characteristics. Despite extensive research in this field, the microstructural behavior of some important structural materials remains incompletely understood. To contribute to this important work, the present study was undertaken to examine grain structure evolution during FSW of 6061-T6 aluminum alloy. To provide an in-depth insight into this process, the electron backscatter diffraction (EBSD) technique was employed. Microstructural observations were conducted using an FEI Quanta 450 Nova field-emission-gun scanning electron microscope equipped with TSL OIM™ software. A suitable surface finish for EBSD was obtained by electro-polishing in a solution of 25% nitric acid in methanol. A 15° criterion was employed to differentiate low-angle boundaries (LABs) from high-angle boundaries (HABs). In the entire range of the studied FSW regimes, the grain structure evolved in the stir zone was found to be dominated by nearly equiaxed grains with a relatively high fraction of low-angle boundaries and a moderate-strength B/-B {112}<110> simple-shear texture. In all cases, grain-structure development was dictated by an extensive formation of deformation-induced boundaries and their gradual transformation into high-angle grain boundaries. Accordingly, grain subdivision was concluded to be the key microstructural mechanism. Remarkably, a gradual suppression of this mechanism was observed at relatively high welding temperatures. This surprising result has been attributed to the reduction of dislocation density due to annihilation phenomena.
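
The 15° LAB/HAB criterion amounts to a simple threshold on misorientation angle. A minimal sketch of how boundary fractions would be computed from a list of misorientation angles (the values below are illustrative, not EBSD data):

```python
import numpy as np

angles = np.array([2.3, 7.8, 14.9, 15.1, 38.0, 55.4])  # misorientations, degrees
THRESHOLD = 15.0                                        # the 15-degree criterion

lab_fraction = np.mean(angles < THRESHOLD)   # low-angle boundaries (LABs)
hab_fraction = np.mean(angles >= THRESHOLD)  # high-angle boundaries (HABs)
print(f"LAB fraction: {lab_fraction:.2f}, HAB fraction: {hab_fraction:.2f}")
```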

Keywords: electron backscatter diffraction, friction-stir welding, heat-treatable aluminum alloys, microstructure

Procedia PDF Downloads 233
397 Factory Communication System for Customer-Based Production Execution: An Empirical Study on the Manufacturing System Entropy

Authors: Nyashadzashe Chiraga, Anthony Walker, Glen Bright

Abstract:

The manufacturing industry is currently experiencing a paradigm shift into the Fourth Industrial Revolution, in which customers are increasingly at the epicentre of production. The high degree of production customization and personalization requires a flexible manufacturing system that rapidly responds to the dynamic and volatile changes driven by the market. There is a gap in technology that allows for the optimal flow of information and optimal manufacturing operations on the shop floor regardless of rapid changes in fixture and part demands. Information is the reduction of uncertainty; it gives meaning and context to the state of each cell. The amount of information needed to describe a cellular manufacturing system is investigated by two measures: structural entropy and operational entropy. Structural entropy is the expected amount of information needed to describe the scheduled states of a manufacturing system, while operational entropy is the amount of information that describes the states which actually occur during manufacturing operation. Using the AnyLogic simulator, a typical manufacturing job shop was set up with a cellular manufacturing configuration. The cellular make-up of the configuration included a material handling cell, a 3D printer cell, an assembly cell, a manufacturing cell and a quality control cell. The factory shop provides manufactured parts to a number of clients; there are substantial variations in part configurations, and new part designs are continually being introduced to the system. Based on the normal expected production schedule, schedule adherence was calculated from the structural entropy and operational entropy while varying the amount of information communicated in simulated runs. The structural entropy denotes a system that is in control; the necessary real-time information is readily available to the decision maker at any point in time. For contrastive analysis, different out-of-control scenarios were run, in which changes in the manufacturing environment were not effectively communicated, resulting in deviations from the original predetermined schedule. The operational entropy was calculated from the actual operations. The results of the empirical study show that increasing the efficiency of a factory communication system increases the degree of adherence of a job to the expected schedule. The performance of downstream production flow, fed from the parallel upstream flow of information on the factory state, was increased.
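
Both entropy measures reduce to Shannon entropy over a distribution of system states, differing only in whether the probabilities come from the planned schedule or from observed operation. A minimal sketch, with illustrative probabilities that are not the study's data:

```python
import numpy as np

def shannon_entropy(p):
    """H = -sum(p * log2 p) in bits; zero-probability states contribute nothing."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Illustrative state probabilities for one cell, e.g. {idle, working, blocked, starved}.
scheduled = [0.10, 0.80, 0.05, 0.05]   # from the planned schedule -> structural entropy
observed  = [0.20, 0.55, 0.15, 0.10]   # from the simulated run    -> operational entropy

print(shannon_entropy(scheduled))  # ~1.02 bits
print(shannon_entropy(observed))   # ~1.68 bits: more information needed off-schedule
```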

Keywords: information entropy, communication in manufacturing, mass customisation, scheduling

Procedia PDF Downloads 241
396 Beyond the Flipped Classroom: A Tool to Promote Autonomy, Cooperation, Differentiation and the Pleasure of Learning

Authors: Gabriel Michel

Abstract:

The aim of our research is to find solutions for adapting university teaching to today's students and companies. To achieve this, we have tried to change the posture and behavior of those involved in the learning situation by promoting other skills. There is a gap between the expectations and functioning of students and university teaching. At the same time, the business world needs employees who are obviously competent and proficient in technology, but who are also imaginative, flexible, able to communicate, learn on their own and work in groups. These skills are rarely developed as a goal at university. The flipped classroom has been one solution, thanks to digital tools such as Moodle, for example, but the model behind them is still centered on teachers and classic learning scenarios: it makes course materials available without really involving students and encouraging them to cooperate. It is against this backdrop that we have conducted action research to explore the possibility of changing the way we learn (rather than teach) by changing the posture of both the classic student and the teacher. We hypothesized that a tool we developed would encourage autonomy, the possibility of progressing at one's own pace, collaboration, and learning using all available resources (other students, course materials, those on the web, and the teacher/facilitator). Experimentation with this tool was carried out with around thirty German and French first-year students at the Université de Lorraine in Metz (France). The projected changes in the groups' learning situations were as follows:
- use the flipped classroom approach, but with a few traditional presentations by the teacher (materials having been put on a server) and many collective case-solving sessions;
- engage students in their learning by inviting them to set themselves a primary objective from the outset, e.g. 'assimilate 90% of the course', and secondary objectives (like a to-do list) such as 'create a new case study for Tuesday';
- encourage students to take control of their learning (knowing at all times where they stand and how far they still have to go);
- develop cooperation: the tool should encourage group work, the search for common solutions and the exchange of the best solutions with other groups. Those who have advanced much faster than the others, or who already have expertise in a subject, can become tutors for the others. A student can also present a case study he or she has developed, for example, or share materials found on the web or produced by the group, as well as evaluate the productions of others;
- etc.
A questionnaire and analysis of assessment results showed that the test group made considerable progress compared with a similar control group. These results confirmed our hypotheses. Obviously, this tool is only effective if the organization of teaching is adapted and if teachers are willing to change the way they work.

Keywords: pedagogy, cooperation, university, learning environment

Procedia PDF Downloads 10
395 Renewable Energy Storage Capacity Rating: A Forecast of Selected Load and Resource Scenario in Nigeria

Authors: Yakubu Adamu, Baba Alfa, Salahudeen Adamu Gene

Abstract:

As the drive towards clean, renewable and sustainable energy generation is gradually reshaped by renewable penetration over time, energy storage has become an optimal solution for utilities looking to reduce transmission and capacity costs. Capacity resources therefore need to be adjusted accordingly, so that renewable energy storage may have the opportunity to substitute for retiring conventional energy systems with higher capacity factors. Considering the Nigerian scenario, where over 80% of current primary energy consumption is met by petroleum, electricity demand is set to more than double by mid-century, relative to 2025 levels. With renewable energy penetration rapidly increasing – in particular biomass, hydro power, solar and wind energy – renewables are expected to account for the largest share of power output in the coming decades. Despite this rapid growth, the imbalance between load and resources has hindered the development of energy storage capacity; forecasting energy storage capacity will therefore play an important role in maintaining the balance between load and resources, including supply and demand. The degree to which this might occur, its timing and, more importantly, its sustainability are the subject matter of the current research. Here, we forecast the future energy storage capacity rating and evaluate the load and resource scenario in Nigeria. In doing so, we used the scenario-based International Energy Agency models, and the projected energy demand and supply structure of the country through 2030 is presented and analysed. Overall, this shows that in high renewable (solar) penetration scenarios in Nigeria, energy storage with 4-6 h duration can obtain over 86% capacity rating, with storage comprising about 24% of peak load capacity. The general takeaway from the current study is that most power systems currently in use have the potential to support fairly large penetrations of 4-6 hour storage as capacity resources prior to a substantial reduction in capacity ratings. The data presented in this paper are a crucial eye-opener for relevant government agencies towards developing these energy resources to tackle the present energy crisis in Nigeria. However, if the transformation of the Nigerian power system continues primarily through expansion of renewable generation, then longer-duration energy storage will be needed to qualify as a capacity resource. Hence, the analytical task of the current survey will help to determine whether and when long-duration storage becomes an integral component of the capacity mix expected in Nigeria by 2030.

Keywords: capacity, energy, power system, storage

Procedia PDF Downloads 31
394 Design and Test a Robust Bearing-Only Target Motion Analysis Algorithm Based on Modified Gain Extended Kalman Filter

Authors: Mohammad Tarek Al Muallim, Ozhan Duzenli, Ceyhun Ilguy

Abstract:

Passive sonar is a method for detecting acoustic signals in the ocean; it detects acoustic signals emanating from external sources. With passive sonar, we can determine only the bearing of the target, with no information about its range. Target Motion Analysis (TMA) is a process for estimating the position and speed of a target using passive sonar information. Since bearing is the only available information, the technique is called bearing-only TMA. Many TMA techniques have been developed; however, until now, there has been no very effective method that can always track an unknown target and extract its moving trace. In this work, an effective bearing-only TMA algorithm is designed. The measured bearing angles are very noisy; moreover, for multi-beam sonar, the measurements are quantized due to the sonar beam width. To deal with this, a modified gain extended Kalman filter algorithm is used. The algorithm is fine-tuned, and many modules are added to improve the performance. A special validation gate module is used to ensure the stability of the algorithm. Several indicators of performance and confidence level measurement are designed and tested. A new method to detect whether the target is maneuvering is proposed. Moreover, a reactive optimal observer maneuver based on bearing measurements is proposed, which ensures convergence to the correct solution every time. To test the performance of the proposed TMA algorithm, a simulation was carried out with a MATLAB program. The simulator models a discrete scenario for an observer and a target, taking into consideration all the practical aspects of the problem, such as smooth transitions in speed, circular turns of the ship, noisy measurements, and the quantized bearing measurements that come from multi-beam sonar. Tests were run for a large set of scenarios. In all tests, full tracking was achieved within 10 minutes with very little error: the range estimation error was less than 5%, the speed error less than 5%, and the heading error less than 2 degrees. The online performance estimator was mostly aligned with the real performance, and the range estimation confidence level gives a value of 90% when the range error is less than 10%. The experiments show that the proposed TMA algorithm is very robust and has low estimation error; however, the convergence time of the algorithm still needs to be improved.
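
As background, a bearing-only tracker couples a constant-velocity prediction with a bearing measurement update. The sketch below is a plain extended Kalman filter with a simple 3-sigma validation gate; the paper's modified-gain EKF alters how the gain is formed and is not reproduced here, and the noise values are illustrative assumptions.

```python
import numpy as np

def predict(x, P, dt, q=1e-4):
    """Constant-velocity prediction for the relative state x = [x, y, vx, vy]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)
    return F @ x, F @ P @ F.T + q * np.eye(4)

def update(x, P, z_bearing, r_var=np.deg2rad(1.0) ** 2):
    """Bearing update (bearing measured CCW from the +x axis) with a 3-sigma gate."""
    h = np.arctan2(x[1], x[0])                                     # predicted bearing
    H = np.array([[-x[1], x[0], 0.0, 0.0]]) / (x[0]**2 + x[1]**2)  # Jacobian of atan2
    nu = np.arctan2(np.sin(z_bearing - h), np.cos(z_bearing - h))  # wrapped residual
    S = (H @ P @ H.T).item() + r_var
    if nu * nu > 9.0 * S:            # validation gate: reject 3-sigma outliers
        return x, P
    K = (P @ H.T) / S                # (4, 1) Kalman gain
    return x + (K * nu).ravel(), (np.eye(4) - K @ H) @ P
```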

Keywords: target motion analysis, Kalman filter, passive sonar, bearing-only tracking

Procedia PDF Downloads 393
393 Soil Quality Response to Long-Term Intensive Resources Management and Soil Texture

Authors: Dalia Feiziene, Virginijus Feiza, Agne Putramentaite, Jonas Volungevicius, Kristina Amaleviciute, Sarunas Antanaitis

Abstract:

Investigations of soil conservation are among the most important topics in modern agronomy. Soil management practices have a great influence on soil physico-chemical quality and GHG emission. Research objective: to reveal the sensitivity and vitality of soils with different textures to long-term anthropogenisation on Cambisol in Central Lithuania, and to compare them with non-anthropogenised soil resources. Methods: two long-term field experiments (loam on loam; sandy loam on loam) with different management intensities were evaluated. Disturbed and undisturbed soil samples were collected from the 5-10, 15-20 and 30-35 cm depths. Soil available P and K contents were determined by ammonium lactate extraction, total N by the dry combustion method, SOC content by the Tyurin titrimetric (classical) method, and texture by the pipette method. In undisturbed core samples, the soil pore volume distribution and plant-available water (PAW) content were determined. A closed chamber method was applied to quantify soil respiration (SR). Results: long-term resource management changed soil quality. In soil with loam texture, within the 0-10, 10-20 and 30-35 cm soil layers, significantly higher PAW, SOC and mesoporosity (MsP) were found under no-tillage (NT) than under conventional tillage (CT). However, total porosity (TP) under NT was significantly higher only in the 0-10 cm layer. MsP acted as the dominant factor for N, P and K accumulation in the corresponding layers. P content in all soil layers was higher under NT than under CT; N and K contents were significantly higher than under CT only in the 0-10 cm layer. In soil with sandy loam texture, significant increases in SOC, PAW, MsP, N, P and K under NT occurred only in the 0-10 cm layer. TP under NT was significantly lower in all layers. PAW acted as the strong dominant factor for N, P and K accumulation: the higher the PAW, the higher the NPK contents determined. NT did not secure chemical quality in the deeper layers any better than CT. Long-term application of mineral fertilisers significantly increased SOC and soil NPK contents, primarily in the top-soil. Enlarged fertilisation led to significantly higher leaching of nutrients to deeper soil layers (CT) and increased the hazard of top-soil pollution. Straw returning significantly increased SOC and NPK accumulation in the top-soil. SR on sandy loam was significantly higher than on loam. Under dry weather conditions, on loam SR was higher under NT than under CT; on sandy loam, SR was higher under CT than under NT. NPK fertilisers promoted significantly higher SR in both the dry and the wet year, but suppressed SR on sandy loam during the usual year. Non-anthropogenised soil had a similar SOC and NPK distribution within the 0-35 cm layer, which depended on the genesis of the soil profile horizons.

Keywords: fertilizers, long-term experiments, soil texture, soil tillage, straw

Procedia PDF Downloads 297
392 Maintenance Optimization for a Multi-Component System Using Factored Partially Observable Markov Decision Processes

Authors: Ipek Kivanc, Demet Ozgur-Unluakin

Abstract:

Over the past years, technological innovations and advancements have played an important role in the industrial world. Due to technological improvements, the degree of complexity of systems has increased; hence, systems exhibit more uncertainty, which emerges from this increased complexity and results in higher costs. Coping with this situation is challenging, so efficient planning of maintenance activities in such systems is becoming more essential. Partially Observable Markov Decision Processes (POMDPs) are powerful tools for stochastic sequential decision problems under uncertainty. Although maintenance optimization in a dynamic environment can be modeled as such a sequential decision problem, POMDPs are not widely used for tackling maintenance problems; however, they can be well-suited frameworks for obtaining optimal maintenance policies. In the classical representation of the POMDP framework, the system is denoted by a single node which has multiple states. The main drawback of this classical approach is that the state space grows exponentially with the number of state variables. The factored representation of POMDPs, on the other hand, makes it possible to simplify the complexity of the states by taking advantage of the factored structure already present in the nature of the problem. The main idea of factored POMDPs is that they can be compactly modeled through dynamic Bayesian networks (DBNs), which are graphical representations of stochastic processes, by exploiting the structure of this representation. This study aims to demonstrate how maintenance planning of dynamic systems can be modeled with factored POMDPs. An empirical maintenance planning problem for a dynamic system consisting of four partially observable components deteriorating in time is designed. To solve the empirical model, we resort to the Symbolic Perseus solver, one of the state-of-the-art factored POMDP solvers enabling approximate solutions. We also generate predefined policies based on corrective or proactive maintenance strategies. We execute the policies on the empirical problem for many replications and compare their performances under various scenarios. The results show that the policies computed from the POMDP model are superior to the others. Acknowledgment: This work is supported by the Scientific and Technological Research Council of Turkey (TÜBİTAK) under grant no: 117M587.
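
The storage saving from factoring is easy to see in code. A minimal sketch in which the four components are assumed to deteriorate independently; real DBN models encode sparse dependencies between components rather than full independence, and the matrix values below are illustrative.

```python
import numpy as np
from functools import reduce

# Four components, each with states {good, worn, failed}.
# Factored storage: four 3x3 matrices = 36 numbers.
# Flat storage: one joint 81x81 matrix = 6561 numbers.
T = np.array([[0.90, 0.09, 0.01],
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])   # per-component deterioration transitions
factors = [T] * 4

joint = reduce(np.kron, factors)    # full joint transition, only if ever needed
print(joint.shape)                  # (81, 81)
```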

Keywords: factored representation, maintenance, multi-component system, partially observable Markov decision processes

Procedia PDF Downloads 133
391 Unified Coordinate System Approach for Swarm Search Algorithms in Global Information Deficit Environments

Authors: Rohit Dey, Sailendra Karra

Abstract:

This paper aims at solving the problem of multi-target searching in a Global Positioning System (GPS) denied environment using swarm robots with limited sensing and communication abilities. Typically, existing swarm-based search algorithms rely on the presence of a global coordinate system (vis-à-vis, GPS) that is shared by the entire swarm, which, in turn, limits their application in real-world scenarios. This can be attributed to the fact that robots in a swarm need to share information among themselves regarding their location and signals from targets in order to decide their future course of action, but this information is only meaningful when they all share the same coordinate frame. The paper addresses this very issue by eliminating any dependency of a search algorithm on a predetermined global coordinate frame, through the unification of the relative coordinates of individual robots when they are within communication range, therefore making the system more robust in real scenarios. Our algorithm assumes that all the robots in the swarm are equipped with range and bearing sensors and have limited sensing range and communication abilities. Initially, every robot maintains its own relative coordinate frame and follows Lévy walk random exploration until it comes into range with other robots. When two or more robots are within communication range, they share sensor information and their locations w.r.t. their coordinate frames, based on which we unify their coordinate frames. They can then share information about the areas that were already explored, information about the surroundings, and target signals from their locations, to make decisions about their future movement based on the search algorithm. During exploration, there can be several small groups of robots, each with its own coordinate system, but eventually all the robots are expected to come under one global coordinate frame in which they can communicate information on the exploration area following swarm search techniques. Using the proposed method, swarm-based search algorithms can work in a real-world scenario without GPS and without any initial information about the size and shape of the environment. Initial simulation results show that, running our modified Particle Swarm Optimization (PSO) without global information, we can still achieve the desired results, comparable to basic PSO working with GPS. In the full paper, we plan to compare different strategies for unifying the coordinate system and to implement them on other bio-inspired algorithms, to work in GPS-denied environments.
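
The frame unification step can be sketched for the noise-free two-robot case: if robot A sees robot B at range r and bearing theta (in A's frame), and B sees A at bearing phi (in B's frame), the rigid transform mapping B's coordinates into A's frame follows directly from the geometry. The function below is an illustrative assumption of that geometry, not the paper's implementation; with noisy sensors, several mutual sightings would be fused, e.g. by least squares.

```python
import numpy as np

def unify_frames(r, theta, phi):
    """Return (R, t) mapping coordinates in B's frame into A's frame.
    A measures B at range r, bearing theta (A's frame); B measures A at
    bearing phi (B's frame); bearings CCW from each robot's +x axis."""
    alpha = theta - phi + np.pi          # orientation of B's frame within A's frame
    c, s = np.cos(alpha), np.sin(alpha)
    R = np.array([[c, -s], [s, c]])
    t = r * np.array([np.cos(theta), np.sin(theta)])   # B's origin in A's frame
    return R, t

# A point B explored at (2, 1) in its own frame, expressed in A's frame:
R, t = unify_frames(r=5.0, theta=np.deg2rad(30), phi=np.deg2rad(-120))
print(R @ np.array([2.0, 1.0]) + t)
```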

Keywords: bio-inspired search algorithms, decentralized control, GPS denied environment, swarm robotics, target searching, unifying coordinate systems

Procedia PDF Downloads 133
390 Sertraline Chronic Exposure: Impact on Reproduction and Behavior on the Key Benthic Invertebrate Capitella teleta

Authors: Martina Santobuono, Wing Sze Chan, Elettra D'Amico, Henriette Selck

Abstract:

Chemicals in modern society are fundamental to many aspects of daily human life. We use a wide range of substances, including polychlorinated compounds, pesticides, plasticizers, and pharmaceuticals, to name a few. These compounds are produced in excess, which has led to their introduction into the environment and food resources. Municipal and industrial effluents, landfills, and agricultural runoff are a few examples of sources of chemical pollution. Many of these compounds, such as pharmaceuticals, have been proven to mimic or alter the performance of the hormone system, thus disrupting its normal function and altering the behavior and reproductive capability of non-target organisms. Antidepressants are pharmaceuticals commonly detected in the environment, usually in the range of ng L⁻¹ to µg L⁻¹. Since they are designed to have a biological effect at low concentrations, they might pose a risk to native species, especially if exposure lasts for long periods. Hydrophobic antidepressants, like the selective serotonin reuptake inhibitor (SSRI) sertraline, can sorb to particles in the water column and eventually accumulate in the sediment compartment; thus, deposit-feeding organisms may be at particular risk of exposure. The polychaete Capitella teleta is widespread in estuarine organically enriched sediments and is a key deposit-feeder involved in the geochemical processes occurring in sediments. Since antidepressants are neurotoxic chemicals and endocrine disruptors, the aim of this work was to test whether sediment-associated sertraline impacts burrowing and feeding behavior as well as reproductive capability in Capitella teleta in a chronic exposure set-up, which better mimics what happens in the environment. Seven-day-old juveniles were selected and exposed to different concentrations of sertraline for an entire generation, until the mature stage was reached. This work showed that some concentrations of sertraline altered growth and the time of first reproduction in Capitella teleta juveniles, potentially disrupting the population's capacity for survival. Acknowledgments: This Ph.D. position is part of the CHRONIC project 'Chronic exposure scenarios driving environmental risks of Chemicals', an Innovative Training Network (ITN) funded by the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie Actions (MSCA).

Keywords: antidepressants, Capitella teleta, chronic exposure, endocrine disruption, sublethal endpoints, neurotoxicity

Procedia PDF Downloads 92
389 Ultra-Wideband Antennas for Ultra-Wideband Communication and Sensing Systems

Authors: Meng Miao, Jeongwoo Han, Cam Nguyen

Abstract:

Ultra-wideband (UWB) time-domain impulse communication and radar systems use ultra-short pulses in the sub-nanosecond regime, instead of continuous sinusoidal waves, to transmit information. The pulse directly generates a very wide-band instantaneous signal with various duty cycles depending on the specific usage. In UWB systems, the total transmitted power is spread over an extremely wide range of frequencies, so the power spectral density is extremely low. This effectively results in extremely small interference to other radio signals while maintaining excellent immunity to interference from those signals. UWB devices can therefore work within frequencies already allocated for other radio services, helping to maximize this dwindling resource. The impulse UWB technique is therefore attractive for realizing high-data-rate, short-range communications, ground penetrating radar (GPR), and military radar with relatively low emission power levels. UWB antennas are the key element dictating the transmitted and received pulse shape and amplitude in both the time and frequency domains; they should have a good impulse response with minimal distortion. To facilitate integration with transmitters and receivers employing microwave integrated circuits, UWB antennas enabling direct integration are preferred. We present the development of two UWB antennas, operating from 3.1 to 10.6 GHz and from 0.3 to 6 GHz, for UWB systems that provide direct integration with microwave integrated circuits. The operation of these antennas is based on the principle of wave propagation on a non-uniform transmission line. Time-domain EM simulation was conducted to optimize the antenna structures and minimize reflections occurring at the open-end transition. Calculated and measured results of these UWB antennas are presented in both the frequency and time domains. The antennas have good time-domain responses: they can transmit and receive pulses effectively with minimum distortion, little ringing, and small reflection, clearly demonstrating the signal fidelity of the antennas in reproducing the waveform of UWB signals, which is critical for UWB sensors and communication systems. Good performance together with seamless microwave integrated-circuit integration makes these antennas good candidates not only for UWB applications but also for integration with printed-circuit UWB transmitters and receivers.

Keywords: antennas, ultra-wideband, UWB, UWB communication systems, UWB radar systems

Procedia PDF Downloads 233
388 Noncovalent Antibody-Nanomaterial Conjugates: A Simple Approach to Produce Targeted Nanomedicines

Authors: Nicholas Fletcher, Zachary Houston, Yongmei Zhao, Christopher Howard, Kristofer Thurecht

Abstract:

One promising approach to enhance nanomedicine therapeutic efficacy is to include a targeting agent, such as an antibody, to increase accumulation at the tumor site. However, the application of such targeted nanomedicines remains limited, in part due to difficulties involved in conjugating biomolecules to synthetic nanomaterials. One approach recently developed to overcome this has been to engineer bispecific antibodies (BsAbs) with dual specificity, whereby one portion binds to methoxy polyethylene glycol (mPEG) epitopes present on synthetic nanomedicines, while the other binds to molecular disease markers of interest. In this way, noncovalent complexes comprising a nanomedicine core – a hyperbranched polymer (HBP) of primarily mPEG – decorated with targeting ligands can be produced by simple mixing. Further work in this area has demonstrated that such complexes targeting the breast cancer marker epidermal growth factor receptor (EGFR) show enhanced binding to tumor cells both in vitro and in vivo. Indeed, the enhanced accumulation at the tumor site resulted in improved therapeutic outcomes compared to untargeted nanomedicines and free chemotherapeutics. The current work on these BsAb-HBP conjugates focuses on further probing antibody-nanomaterial interactions and demonstrating broad applicability to a range of cancer types. Reported herein are BsAb-HBP materials targeted towards prostate-specific membrane antigen (PSMA) and a study of their behavior in vivo using ⁸⁹Zr positron emission tomography (PET) in a dual-tumor prostate cancer xenograft model. In this model, mice bearing both PSMA+ and PSMA- tumors allow PET imaging to discriminate between nonspecific and targeted uptake in tumors, and to better quantify the increased accumulation following BsAb conjugation. Also examined is the potential for forming these targeted complexes in situ following injection of the individual components, the aim being to avoid undesirable clearance of proteinaceous complexes upon injection, which would limit the available therapeutic. Ultimately, these results demonstrate BsAb-functionalized nanomaterials as a powerful and versatile approach for producing targeted nanomedicines for a variety of cancers.

Keywords: bioengineering, cancer, nanomedicine, polymer chemistry

Procedia PDF Downloads 137
387 Morphological and Molecular Evaluation of Dengue Virus Serotype 3 Infection in BALB/c Mice Lungs

Authors: Gabriela C. Caldas, Fernanda C. Jacome, Arthur da C. Rasinhas, Ortrud M. Barth, Flavia B. dos Santos, Priscila C. G. Nunes, Yuli R. M. de Souza, Pedro Paulo de A. Manso, Marcelo P. Machado, Debora F. Barreto-Vieira

Abstract:

The establishment of animal models for studies of DENV infections has been challenging, since circulating epidemic viruses do not naturally infect nonhuman species. Such studies are of great relevance to various areas of dengue research, including immunopathogenesis, drug development and vaccines. In this scenario, the main objective of this study is to verify possible morphological changes, as well as the presence of antigens and viral RNA, in lung samples from BALB/c mice experimentally infected with an epidemic, non-neuroadapted DENV-3 strain. Male BALB/c mice, 2 months old, were inoculated with DENV-3 by the intravenous route. After 72 hours of infection, the animals were euthanized and the lungs were collected. Part of the samples was processed by standard techniques for analysis by light and transmission electron microscopy, and another part was processed for real-time PCR analysis. Morphological analyses of lungs from uninfected mice showed preserved tissue areas. In mice infected with DENV-3, the analyses revealed thickening of the interalveolar septa with the presence of inflammatory infiltrate, foci of alveolar atelectasis and hyperventilation, bleeding foci in the interalveolar septa and bronchioles, peripheral capillary congestion, accumulation of fluid in the blood capillaries, signs of interstitial cell necrosis, and the presence of platelets and mononuclear inflammatory cells circulating in the capillaries and/or adhered to the endothelium. In addition, activation of endothelial cells, platelets, mononuclear inflammatory cells and neutrophil-type polymorphonuclear inflammatory cells, evidenced by the emission of cytoplasmic membrane prolongations, was observed. DENV-like particles were seen in the cytoplasm of endothelial cells. The viral genome was recovered from 3 of 12 lung samples. These results demonstrate that the BALB/c mouse represents a suitable model for the study of the histopathological changes induced by DENV infection in the lung, with tissue alterations similar to those observed in human cases of dengue.

Keywords: BALB/c mice, dengue, histopathology, lung, ultrastructure

Procedia PDF Downloads 252
386 High Physical Properties of Biochar Issued from Cashew Nut Shell to Adsorb Mycotoxins (Aflatoxins and Ochratoxine A) and Its Effects on Toxigenic Molds

Authors: Abderahim Ahmadou, Alfredo Napoli, Noel Durand, Didier Montet

Abstract:

Biochar is a microporous, adsorbent solid carbon product obtained from the pyrolysis of various organic materials (biomass, agricultural waste); it is distinguished from vegetable charcoal by its manufacturing methods. Biochar is used as an amendment in soils to give them favorable characteristics under certain conditions, i.e., absorption of water and its release at low speed. Cashew nut shells from Mali are usually discarded on land by local processors or burnt as a means of waste management. The burning of this biomass poses serious socio-environmental problems, including greenhouse gas emission and the accumulation of tars and soot on houses close to factories, leading to complaints from neighbors. Some mycotoxins, such as aflatoxins, are carcinogenic compounds resulting from the secondary metabolism of molds that develop on plants in the field and during their conservation; they are found at high levels in some seeds and nuts in Africa. Ochratoxin A, another mycotoxin, is produced by various species of Aspergillus and Penicillium. Human exposure to Ochratoxin A can occur through consumption of contaminated food products, particularly contaminated grain, as well as coffee and wine grapes. We showed that cashew shell biochars produced at 400, 600 and 800°C adsorbed aflatoxins (B1, B2, G1, G2) at 100%, by filtration (rapid contact) as well as by stirring (long contact). The average percentage of adsorption of Ochratoxin A was 35% by filtration and 80% by stirring; the duration of the biochar-mycotoxin contact was a significant parameter. The effect of biochar was also tested on two strains of toxigenic molds: Aspergillus parasiticus (a producer of aflatoxins) and Aspergillus carbonarius (a producer of ochratoxins). The growth of the Aspergillus carbonarius strain was inhibited by up to 60% by the biochar produced at 600°C, whereas an opposite effect was observed on Aspergillus parasiticus using the same biochar. In conclusion, we observed that biochar adsorbs the mycotoxins aflatoxins and Ochratoxin A to different degrees: 100% adsorption of aflatoxins under all conditions (filtration and stirring), while the adsorption of Ochratoxin A varied depending on the type of biochar and the experimental conditions (35% by filtration and 85% by stirring). The effects of the biochar produced at 600°C on the toxigenic molds Aspergillus parasiticus and Aspergillus carbonarius varied according to the experimental conditions and the strains: we observed inhibition of Aspergillus carbonarius by up to 60% and stimulated growth of Aspergillus parasiticus.

Keywords: biochar, cashew nut shell, mycotoxins, toxigenic molds

Procedia PDF Downloads 185
385 Modeling the Acquisition of Expertise in a Sequential Decision-Making Task

Authors: Cristóbal Moënne-Loccoz, Rodrigo C. Vergara, Vladimir López, Domingo Mery, Diego Cosmelli

Abstract:

Our daily interaction with computational interfaces is rife with situations in which we go from inexperienced users to experts through self-motivated exploration of the same task. In many of these interactions, we must learn to find our way through a sequence of decisions and actions before obtaining the desired result. For instance, when drawing cash from an ATM, choices are presented in a step-by-step fashion, so that a specific sequence of actions must be performed in order to produce the expected outcome. But, as they become experts in the use of such interfaces, do users adopt specific search and learning strategies? Moreover, if so, can we use this information to follow the process of expertise development and, eventually, predict future actions? This would be a critical step towards building truly adaptive interfaces that can facilitate interaction at different moments of the learning curve. Furthermore, it could provide a window into potential mechanisms underlying decision-making behavior in real-world scenarios. Here we tackle this question using a simple game interface that instantiates a 4-level binary decision tree (BDT) sequential decision-making task. Participants have to explore the interface and discover an underlying concept-icon mapping in order to complete the game. We develop a Hidden Markov Model (HMM)-based approach whereby a set of stereotyped, hierarchically related search behaviors act as hidden states. Using this model, we are able to track the decision-making process as participants explore, learn and develop expertise in the use of the interface. Our results show that partitioning the problem space into such stereotyped strategies is sufficient to capture a host of exploratory and learning behaviors. Moreover, using the modular architecture of stereotyped strategies as a mixture of experts, we can simultaneously ask the experts about the user's most probable future actions. We show that, for those participants who learn the task, it becomes possible to predict their next decision above chance approximately halfway through the game. Our long-term goal is, on the basis of a better understanding of real-world decision-making processes, to inform the construction of interfaces that can establish dynamic conversations with their users in order to facilitate the development of expertise.
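
A minimal sketch of the underlying machinery: HMM forward filtering over hidden strategies, with next-action prediction obtained by pushing the filtered belief through the transition and emission matrices. The matrices below are illustrative placeholders, not the fitted model from the study.

```python
import numpy as np

A = np.array([[0.85, 0.10, 0.05],   # strategy-to-strategy transitions
              [0.05, 0.90, 0.05],
              [0.02, 0.08, 0.90]])
B = np.array([[0.40, 0.40, 0.20],   # P(action | strategy), 3 action types
              [0.10, 0.30, 0.60],
              [0.05, 0.15, 0.80]])
alpha = np.full(3, 1.0 / 3.0)       # belief over hidden strategies (uniform prior)

def filter_step(alpha, obs):
    """Forward-algorithm update after observing action index obs."""
    alpha = (alpha @ A) * B[:, obs]
    return alpha / alpha.sum()

def predict_next_action(alpha):
    """Mixture-of-experts style prediction: distribution over the next action."""
    return (alpha @ A) @ B

for obs in [2, 2, 1, 2]:            # a short illustrative action sequence
    alpha = filter_step(alpha, obs)
print(predict_next_action(alpha))   # probabilities for each of the 3 action types
```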

Keywords: behavioral modeling, expertise acquisition, hidden markov models, sequential decision-making

Procedia PDF Downloads 250
384 Establishing a Sustainable Construction Industry: Review of Barriers That Inhibit Adoption of Lean Construction in Lesotho

Authors: Tsepiso Mofolo, Luna Bergh

Abstract:

The Lesotho construction industry fails to embrace environmental practices, which has led to excessive consumption of resources, land degradation, air and water pollution, loss of habitats, and high energy usage. The industry is highly inefficient, and this undermines its capability to make the optimum contribution to social, economic and environmental development. Sustainable construction is, therefore, imperative to ensure the cultivation of benefits from all these intrinsic themes of sustainable development. The development of a sustainable construction industry requires a holistic approach that takes into consideration the interaction between Lean Construction principles, socio-economic and environmental policies, technological advancement, and the principles of construction or project management. Sustainable construction is a cutting-edge phenomenon, forming a component of a subjectively defined concept called sustainable development. Sustainable development can be defined in terms of attitudes and judgments that help ensure long-term environmental, social and economic growth in society. The key concept of sustainable construction is Lean Construction. Lean Construction emanates from the principles of the Toyota Production System (TPS), namely the application and adaptation of fundamental concepts and principles focused on waste reduction, increased value to the customer, and continuous improvement. The focus is on the reduction of socio-economic waste and the prevention of environmental degradation by reducing the carbon dioxide emission footprint. Lean principles require a fundamental change in the behaviour and attitudes of the parties involved in order to overcome barriers to cooperation. Prevalent barriers to the adoption of Lean Construction in Lesotho are mainly structural, such as unavailability of financing, corruption, operational inefficiency or wastage, lack of skills and training, inefficient construction legislation, and political interference. The consequential effects of these problems trickle down to the quality, cost and time of the project, which then results in an escalation of operational costs due to the cost of rework or material wastage. Factor and correlation analysis of these barriers indicates that they are highly correlated, which poses a potential detriment to the country's welfare, environment and construction safety. It is, therefore, critical for Lesotho's construction industry to develop robust governance through bureaucratic reform and stringent law enforcement.

Keywords: construction industry, sustainable development, sustainable construction industry, lean construction, barriers to sustainable construction

Procedia PDF Downloads 286
383 Performance Management of Tangible Assets within the Balanced Scorecard and Interactive Business Decision Tools

Authors: Raymond K. Jonkers

Abstract:

The present study investigated approaches and techniques to enhance strategic management governance and decision making within the framework of a performance-based balanced scorecard. A review of best practices from strategic, program, process, and systems engineering management provided a holistic approach toward effective outcome-based capability management. One technique, based on factorial experimental design methods, was used to develop an empirical model. This model predicts the degree of capability effectiveness as a function of controlled system input variables and their weightings. These variables represent business performance measures captured within a strategic balanced scorecard, and their weighting enhances the ability to quantify causal relationships within balanced scorecard strategy maps. The focus in this study was on the performance of tangible assets within the scorecard, rather than the traditional approach of assessing the performance of intangible assets such as knowledge and technology. Tangible assets are represented in this study as physical systems, which may be thought of as being aboard a ship or within a production facility. The measures assigned to these systems include project funding for upgrades against demand, system certifications achieved against those required, preventive-to-corrective maintenance ratios, and material support personnel capacity against that required for supporting the respective systems. The resultant scorecard is viewed as complementary to the traditional balanced scorecard for program and performance management. The benefits of these scorecards are realized through the quantified state of operational capabilities or outcomes. These capabilities are also weighted in terms of priority for each distinct system measure, and aggregated and visualized in terms of the overall state of capabilities achieved. This study proposes the use of interactive controls within the scorecard as a technique to enhance the development of alternative solutions in decision making; these interactive controls include those for assigning capability priorities and for adjusting system performance measures, thus providing for what-if scenarios and options in strategic decision-making. In this holistic approach to capability management, several cross-functional processes were highlighted as relevant amongst the different management disciplines. In terms of assessing an organization's ability to adopt this approach, consideration was given to the P3M3 management maturity model.
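
A minimal sketch of the weighted roll-up idea: each system's capability effectiveness as a weighted sum of normalized measures, followed by a priority-weighted aggregate that interactive controls could re-weight for what-if analysis. The measure names and weights are illustrative assumptions, not the study's empirical model.

```python
# Each measure is normalized to [0, 1]; weights per system sum to 1.
measures = {
    "funding_vs_demand": 0.70,
    "certifications_achieved": 0.90,
    "pm_to_cm_ratio": 0.60,
    "support_capacity": 0.80,
}
weights = {
    "funding_vs_demand": 0.35,
    "certifications_achieved": 0.25,
    "pm_to_cm_ratio": 0.20,
    "support_capacity": 0.20,
}

def effectiveness(measures, weights):
    """Weighted capability effectiveness for one system."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * measures[k] for k in measures)

system_scores = {"propulsion": effectiveness(measures, weights), "radar": 0.75}
priorities = {"propulsion": 0.6, "radar": 0.4}   # interactive what-if inputs
overall = sum(priorities[s] * system_scores[s] for s in system_scores)
print(overall)   # aggregate state of capabilities achieved
```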

Keywords: management, systems, performance, scorecard

Procedia PDF Downloads 320
382 A Prediction Method of Pollutants Distribution Pattern: Flare Motion Using Computational Fluid Dynamics (CFD) Fluent Model with Weather Research Forecast Input Model during Transition Season

Authors: Benedictus Asriparusa, Lathifah Al Hakimi, Aulia Husada

Abstract:

A large amount of energy is wasted through the release of natural gas associated with the oil industry. This release harms the environment, particularly atmospheric conditions, and contributes to global warming. This research presents an overview of the methods employed by researchers at PT. Chevron Pacific Indonesia in the Minas area to develop a new prediction method for measuring and reducing gas flaring and its emissions. The method emphasizes advanced research involving analytical studies, numerical studies, modeling, and computer simulations, among other techniques. A flaring system is the controlled burning of natural gas in the course of routine oil and gas production operations. This burning occurs at the end of a flare stack or boom. The combustion process releases emissions of gases such as NO2, CO2, and SO2. This condition affects the chemical composition of the air and the environment around the boundary layer, mainly during the transition season. The transition season in Indonesia is very difficult to predict because of the interaction of two distinct air masses; this research focuses on the 2013 transition season. A simulation is needed to derive the new pattern of pollutant distribution. This paper outlines trends in gas flaring modeling and current developments for predicting the dominant variables in pollutant distribution. A Fluent model is used to simulate the distribution of pollutant gases leaving the stack, while WRF model output is used to overcome the limitations of the available meteorological data and atmospheric conditions in the study area. Based on the model runs, the most influential factor was wind speed. The goal of the simulation is to predict the new distribution pattern based on the times at which the fastest and slowest winds occur. According to the simulation results, the fastest wind (late March) moves pollutants in a horizontal direction, whereas the slowest wind (mid-May) moves pollutants vertically. Moreover, with the flare stack designed in compliance with the EPA Oil and Gas Facility Stack Parameters, pollutant concentrations are likely to remain under the NAAQS (National Ambient Air Quality Standards) threshold.
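The study itself runs a full CFD (Fluent) model driven by WRF output; as a far simpler illustration of how wind speed governs dilution of a stack plume, the sketch below evaluates a textbook Gaussian plume formula. The emission rate, stack height, and the power-law dispersion coefficients are all hypothetical placeholders, not values from the study.

```python
import numpy as np

def gaussian_plume(q, u, x, y, z, h, a=0.08, b=0.9):
    """Ground-reflected Gaussian plume concentration (g/m^3).

    q: emission rate (g/s); u: wind speed (m/s); h: stack height (m);
    sigma_y = sigma_z = a * x**b is a purely illustrative dispersion law.
    """
    sy = sz = a * x ** b
    return (q / (2 * np.pi * u * sy * sz)
            * np.exp(-y**2 / (2 * sy**2))
            * (np.exp(-(z - h)**2 / (2 * sz**2))
               + np.exp(-(z + h)**2 / (2 * sz**2))))

# Concentration scales with 1/u: higher wind speeds dilute the plume.
for u in (1.0, 5.0):  # hypothetical slowest vs. fastest winds (m/s)
    c = gaussian_plume(q=10.0, u=u, x=500.0, y=0.0, z=0.0, h=30.0)
    print(f"u = {u} m/s -> C = {c:.2e} g/m^3")
```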

Keywords: flare motion, new prediction, pollutants distribution, transition season, WRF model

Procedia PDF Downloads 546
381 Association between a Forward Lag of Historical Total Accumulated Gasoline Lead Emissions and Contemporary Autism Prevalence Trends in California, USA

Authors: Mark A. S. Laidlaw, Howard W. Mielke

Abstract:

In California, between the late 1920s and 1986, lead concentrations in urban soils and dust climbed rapidly following the deposition of more than 387,000 tonnes of lead emitted from gasoline. Previous research indicates that when children are exposed to lead, around 90% of it is retained in their bones and teeth due to the substitution of lead for calcium. Lead in children’s bones has been shown to accumulate over time and is highest in inner-city urban areas, lower in suburban areas, and lowest in rural areas. It is also known that women’s bones demineralize during pregnancy due to the foetus's high demand for calcium. Lead accumulates in women’s bones during childhood, and the accumulated lead is subsequently released during pregnancy, a lagged response. As a result, calcium together with lead enters the bloodstream and crosses the placenta, exposing the foetus to lead. In 1970 in the United States, the average age of a first-time mother was about 21; in 2008, the average age was 25.1. In this study, it is demonstrated that in California there is a forward-lagged relationship between the accumulated emissions of lead from vehicle fuel additives and later autism prevalence trends from the 1990s to the current period. Regression analysis between a 24-year forward lag of accumulated lead emissions and autism prevalence trends in California shows a strong association (R² = 0.95, p ≈ 1.27 × 10⁻⁹). It is hypothesized that autism in genetically susceptible children may stem from vehicle fuel lead-emission exposures of their mothers during childhood, and that the release of stored lead during subsequent pregnancy exposed foetuses to lead during a critical developmental period. It is further hypothesized that the 24-year forward lag occurs because this is the average time for women to reach childbearing age. To test the hypothesis that lead in mothers’ bones is associated with autism, it is proposed that retrospective case-control studies would show an association between lead in mothers’ bones and autism. Furthermore, it is hypothesized that the forward-lagged relationship between accumulated historical vehicle fuel lead emissions (or air lead concentrations) and autism prevalence trends will be similar in cities at the national and international scales. If further epidemiological studies indicate a strong relationship between accumulated vehicle fuel lead emissions (or accumulated air lead concentrations), lead in mothers’ bones, and autism rates, then urban areas may require extensive soil intervention to prevent the development of autism in children.
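A minimal sketch of the forward-lagged regression idea is given below, using synthetic data rather than the California series: accumulated emissions in year t are paired with a prevalence series observed 24 years later, and an ordinary least-squares fit reports R² and p. All numbers here are fabricated for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
LAG = 24  # years between maternal childhood exposure and offspring diagnosis

# Hypothetical annual gasoline lead emissions (tonnes), accumulated over time
years = np.arange(1930, 1986)
cum_emissions = np.cumsum(rng.uniform(3000, 9000, years.size))

# Hypothetical autism prevalence (per 10,000) observed LAG years later,
# constructed here to follow the emissions signal plus noise
prevalence = 1.5e-4 * cum_emissions + rng.normal(0, 5, years.size)

# Regress prevalence in year t + LAG on accumulated emissions in year t
res = stats.linregress(cum_emissions, prevalence)
print(f"R^2 = {res.rvalue**2:.3f}, p = {res.pvalue:.2e}")
```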

Keywords: autism, bones, lead, gasoline, petrol, prevalence

Procedia PDF Downloads 291
380 Automatic and Highly Precise Modeling for System Optimization

Authors: Stephanie Chen, Mitja Echim, Christof Büskens

Abstract:

To describe and propagate the behavior of a system, mathematical models are formulated. Parameter identification is used to adapt the coefficients of the underlying laws of science. For complex systems, this approach can be incomplete, and hence imprecise, and moreover too slow to compute efficiently. Such models may therefore be inapplicable to the numerical optimization of real systems, since these techniques require numerous evaluations of the models. Moreover, not all quantities necessary for the identification may be available, so the model must be adapted manually. This paper therefore describes an approach that generates models overcoming the aforementioned limitations by focusing not on physical laws but on measured (sensor) data from real systems. The approach is more general, since it generates models for any system, detached from the scientific background. Additionally, it can be used in a broader sense, since it is able to identify correlations in the data automatically. The method can be classified as a multivariate data regression analysis. In contrast to many other data regression methods, this variant is also able to identify correlations among products of variables, not only among single variables. This enables a far more precise representation of causal correlations. The basis and justification of this method come from an analytical background: the series expansion. Another advantage of this technique is the possibility of real-time adaptation of the generated models during operation. In this way, system changes due to aging, wear, or perturbations from the environment can be taken into account, which is indispensable for realistic scenarios. Since these data-driven models can be evaluated very efficiently and with high precision, they can be used in mathematical optimization algorithms that minimize a cost function, e.g., time, energy consumption, operational costs, or a mixture of them, subject to additional constraints. The proposed method has been tested successfully in several complex applications with strong industrial requirements. The generated models were able to simulate the given systems with an error of less than one percent, and the automatic identification of correlations discovered previously unknown relationships. In summary, the approach is able to compute highly precise, real-time-adaptive, data-based models efficiently in different fields of industry. Combined with an effective mathematical optimization algorithm such as WORHP (We Optimize Really Huge Problems), complex systems can now be represented by a high-precision model and optimized according to the user’s wishes. The proposed methods are illustrated with different examples.
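A minimal sketch of regression over products of variables is shown below: the inputs are expanded with second-order terms (a truncated series expansion) so that a linear fit can recover a hidden product term. The synthetic data and variable names are assumptions for illustration; the paper's own method and software are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))                   # two sensor channels
y = 3 * X[:, 0] - 2 * X[:, 1] + 5 * X[:, 0] * X[:, 1]   # hidden product term

# Expand the inputs with products of variables: a truncated series expansion
expand = PolynomialFeatures(degree=2, include_bias=False)
Xp = expand.fit_transform(X)   # columns: x1, x2, x1^2, x1*x2, x2^2

model = LinearRegression().fit(Xp, y)
names = expand.get_feature_names_out(["x1", "x2"])
print(dict(zip(names, model.coef_.round(2))))  # recovers the x1*x2 coefficient
```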

Keywords: adaptive modeling, automatic identification of correlations, data based modeling, optimization

Procedia PDF Downloads 402
379 Water Balance in the Forest Basins Essential for the Water Supply in Central America

Authors: Elena Listo Ubeda, Miguel Marchamalo Sacristan

Abstract:

The demand for water doubles every twenty years, a rate twice as fast as the world's population growth. Despite its great importance, water is one of the most degraded natural resources in the world, mainly because of the reduction of natural vegetation cover, population growth, contamination, and changes in soil use that reduce its capacity to collect water. This situation is especially serious in Central America, as reflected in the Human Development reports. The objective of this project is to assist in the improvement of water production and quality in Central America. To this end, two watersheds in Costa Rica were selected as experimental sites: that of the Virilla-Durazno River, located at the extreme northeast of the central valley under Atlantic influence, and that of the Jabillo River, which flows directly into the Pacific. The Virilla River watershed lies over andisols and that of the Jabillo River over alfisols; both are of great importance for the water supply to the Greater Metropolitan Area and to future tourist resorts, respectively, as well as for agriculture, livestock, and hydroelectricity. The hydrological response of different soil-cover complexes, ranging from secondary forest to natural vegetation and degraded pasture, was analyzed through the evaluation of soil properties, infiltration, and soil compaction, as well as the effects of the soil-cover complex on erosion, calculated with the C factor of the Revised Universal Soil Loss Equation (RUSLE). A water balance was defined for each watershed, in which the volumes of water entering and leaving were estimated, along with evapotranspiration, runoff, and infiltration. Two future scenarios, representing the implementation of reforestation and deforestation plans, were proposed, and the effects of the soil-cover complex on the water balance were analyzed in each case. The results show an increase in groundwater recharge in the humid forest areas; an extension of the study to the dry areas is proposed, since groundwater recharge there is diminishing. These results are of great significance for planning, for the design of Payment for Environmental Services schemes, and for the improvement of existing water supply systems. In Central America, spatial planning of watersheds is a priority in order to value the water resource socially and economically and to secure its availability for the future.
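As a minimal sketch of the water-balance bookkeeping, the snippet below treats recharge as the residual of precipitation minus evapotranspiration minus runoff for two land covers; the numbers are hypothetical, whereas the study derives each term from field measurements of soil properties, infiltration, and compaction.

```python
def recharge_mm(precipitation, evapotranspiration, runoff):
    """Annual water balance (mm): infiltration/recharge as the residual
    of precipitation minus evapotranspiration minus runoff."""
    return precipitation - evapotranspiration - runoff

# Hypothetical annual values (mm) for two soil-cover complexes
scenarios = {
    "secondary forest": (2500, 1200, 300),
    "degraded pasture": (2500, 1100, 900),
}
for cover, (p, et, r) in scenarios.items():
    print(f"{cover}: recharge = {recharge_mm(p, et, r)} mm/yr")
```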

Keywords: Costa Rica, infiltration, soil, water

Procedia PDF Downloads 383
378 Economic Evaluation of Degradation by Corrosion of an On-Grid Battery Energy Storage System: A Case Study in Algeria Territory

Authors: Fouzia Brihmat

Abstract:

Economic planning models are the current norm for designing microgrids and distributed energy resources (DER). These models often determine both short-term DER dispatch and long-term DER investments. This research investigates the most cost-effective hybrid (photovoltaic-diesel) renewable energy system (HRES), based on Total Net Present Cost (TNPC), in an Algerian Saharan area that has high solar-irradiation potential and a production capacity of 1 GWh. Lead-acid batteries have been around much longer and are easier to understand but have limited storage capacity; lithium-ion batteries last longer and are lighter but are generally more expensive. By combining the advantages of each chemistry, cost-effective high-capacity battery banks can be produced that operate solely on AC coupling. The financial analysis in this research accounts for the corrosion process that occurs at the interface between the active material and the grid material of the positive plate of a lead-acid battery. The cost study for the HRES is carried out with the assistance of the HOMER Pro MATLAB Link, and the system is simulated at each time step over the project's 20-year horizon. The model takes into consideration the decline in solar efficiency, changes in battery storage levels over time, and rises in fuel prices above the rate of inflation; the trade-off is that the model is more precise, but the computation takes longer. The Optimizer was initially used to run the model without the multi-year module in order to discover the best system architecture. The optimal system for the single-year scenario is the Danvest generator with 760 kW, 200 kWh of lead-acid storage, and a somewhat lower cost of energy (COE) of $0.309/kWh. Different scenarios that account for fluctuations in the gasified biomass generator's electricity production have been simulated, and various strategies to guarantee the balance between generation and consumption have been investigated. The technological optimization of the same system has been completed and is reviewed in a recent companion study.
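To illustrate the TNPC criterion, the sketch below discounts a stream of annual costs to present value; it is a simplified stand-in for HOMER's TNPC (which also nets out salvage values and component replacements), and the cost figures, escalation rate, and discount rate are assumptions.

```python
def total_net_present_cost(annual_costs, discount_rate):
    """Discount a stream of annual costs to present value: a simplified
    stand-in for HOMER's TNPC, which also nets out salvage values."""
    return sum(cost / (1 + discount_rate) ** year
               for year, cost in enumerate(annual_costs, start=1))

# Hypothetical: 20 years of O&M at $50k/yr with 3%/yr fuel-price escalation
costs = [50_000 * 1.03 ** year for year in range(20)]
print(f"TNPC ~= ${total_net_present_cost(costs, 0.08):,.0f}")
```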

Keywords: battery, corrosion, diesel, economic planning optimization, hybrid energy system, lead-acid battery, multi-year planning, microgrid, price forecast, PV, total net present cost

Procedia PDF Downloads 83
377 Water Quality Trading with Equitable Total Maximum Daily Loads

Authors: S. Jamshidi, E. Feizi Ashtiani, M. Ardestani, A. Feizi Ashtiani

Abstract:

Waste load allocation (WLA) strategies usually aim to find economical policies for water resource management. Water quality trading (WQT) is an approach that uses a discharge permit market to reduce total environmental protection costs. This primarily requires assigning discharge limits known as total maximum daily loads (TMDLs), which are determined by monitoring organizations with respect to the receiving water quality and remediation capabilities. The purpose of this study is to compare two approaches to TMDL assignment for a WQT policy in the small catchment of the Haraz River in northern Iran. In the first approach, TMDLs are assigned uniformly across all point sources to keep the concentrations of BOD and dissolved oxygen (DO) at the standard level at the checkpoint (terminus point); this was simulated and verified with the Qual2kw software. In the second scenario, TMDLs are assigned using multi-objective particle swarm optimization (MOPSO), in which the environmental violation at the river basin and the total treatment costs are minimized simultaneously. In both scenarios, the equity index and the WLA based on trading discharge permits (TDP) are calculated. The comparative results show that using economically optimized TMDLs (second scenario) yields slightly greater cost savings than the uniform TMDL approach (first scenario): the former costs about 1 M$ annually, while the latter costs 1.15 M$. WQT can decrease these annual costs to 0.9 and 1.1 M$, respectively. In other words, these approaches may save about 35-45% in comparison with a command-and-control policy. This means that a multi-objective decision support system (DSS) may find a more economical WLA, although the improvement is not necessarily significant compared with uniform TMDLs; this may be due to the similar impact factors of the dischargers in small catchments. Conversely, using uniform TMDLs for WQT brings more equity, so that stakeholders are less aggrieved by the difference between the TMDL and WQT allocations. In addition, in this case, uniformly determined TMDLs would be much easier to monitor. Consequently, uniform TMDLs for the TDP market are recommended as a sustainable approach, while economically optimized TMDLs can be used for larger watersheds.
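The arithmetic behind the reported savings can be sketched as follows; the command-and-control baseline is not stated in the abstract, so it is back-calculated here under the illustrative assumption that the ~35% saving applies to the uniform-TMDL WQT scenario.

```python
# Annual treatment costs (M$) reported in the abstract
costs = {
    "uniform TMDL":         1.15,
    "optimized TMDL":       1.00,
    "uniform TMDL + WQT":   1.10,
    "optimized TMDL + WQT": 0.90,
}

# Hypothetical command-and-control baseline, back-calculated by assuming
# the reported ~35% saving applies to the uniform-TMDL WQT scenario
baseline = costs["uniform TMDL + WQT"] / (1 - 0.35)   # ~1.69 M$

for name, cost in costs.items():
    saving = 100 * (1 - cost / baseline)
    print(f"{name}: {saving:.0f}% saving vs. command-and-control")
```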

Keywords: waste load allocation (WLA), water quality trading (WQT), total maximum daily loads (TMDLs), Haraz River, multi objective particle swarm optimization (MOPSO), equity

Procedia PDF Downloads 388
376 Design and Implementation of Generative Models for Odor Classification Using Electronic Nose

Authors: Kumar Shashvat, Amol P. Bhondekar

Abstract:

Among the five senses, smell is the most evocative and least understood; odor testing has remained mysterious, and odor data elusive, to most practitioners. The problem of odor recognition and classification is therefore important to solve: the ability to smell and predict whether a product is still usable or has become unfit for consumption is valuable, and casting this problem into a model is of clear interest. The general industrial standard for such classification is color-based; however, odor can be a better classifier than color and, if incorporated into a machine, would be highly useful. For cataloguing the odor of peas, trees, and cashews, various discriminative approaches have been used. Discriminative approaches offer good predictive performance and have been widely used in many applications, but they are unable to make effective use of unlabeled information. In such scenarios, generative approaches have better applicability, as they can handle settings where the variability in the range of possible input vectors is enormous. Generative models are used in machine learning either to model data directly or as an intermediate step in forming a probability density function. The algorithms used here for classifying the odor of cashews are Linear Discriminant Analysis and the Naive Bayes classifier. Linear Discriminant Analysis is a method used in data classification, pattern recognition, and machine learning to discover a linear combination of features that characterizes or separates two or more classes of objects or events. The Naive Bayes algorithm is a classification approach based on Bayes' rule and a set of conditional independence assumptions. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem. The main advantage of generative models is that they make stronger assumptions about the data, specifically about the distribution of the predictors given the response variables. The instrument used for artificial odor sensing and classification is an electronic nose, a device designed to imitate the human sense of smell by providing an analysis of individual chemicals or chemical mixtures. The experimental results were evaluated using the performance measures of accuracy, precision, and recall, and showed that the overall performance of Linear Discriminant Analysis was better than that of the Naive Bayes classifier on the cashew dataset.
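A minimal sketch of the comparison is given below, training LDA and Gaussian Naive Bayes on synthetic stand-in data and reporting accuracy, precision, and recall; the sensor counts, class structure, and data are assumptions, not the study's electronic-nose dataset.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Hypothetical electronic-nose readings: 8 sensor channels, 2 odor classes
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 8)),
               rng.normal(0.8, 1.0, (100, 8))])
y = np.array([0] * 100 + [1] * 100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for model in (LinearDiscriminantAnalysis(), GaussianNB()):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(type(model).__name__,
          f"acc={accuracy_score(y_te, pred):.2f}",
          f"prec={precision_score(y_te, pred):.2f}",
          f"rec={recall_score(y_te, pred):.2f}")
```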

Keywords: odor classification, generative models, naive bayes, linear discriminant analysis

Procedia PDF Downloads 384
375 Just a Heads Up: Approach to Head Shape Abnormalities

Authors: Noreen Pulte

Abstract:

Prior to the 'Back to Sleep' campaign in 1992, 1 of every 300 infants seen by advanced practice providers had plagiocephaly. Insufficient attention is given to plagiocephaly and brachycephaly diagnoses in practice and in pediatric education. In this talk, nurse practitioners and pediatric providers will learn to: (1) identify red flags associated with head shape abnormalities, (2) apply techniques they can teach parents to prevent head shape abnormalities, and (3) differentiate between plagiocephaly, brachycephaly, and craniosynostosis. The presenter is a Primary Care Pediatric Nurse Practitioner at Ann & Robert H. Lurie Children's Hospital of Chicago and the primary provider for its head shape abnormality clinics. She will help participants translate key information obtained from the birth history, review of systems, and developmental history to understand risk factors for head shape abnormalities and the progression of deformities. Synostotic and non-synostotic head shapes will be explained to help participants differentiate plagiocephaly and brachycephaly from synostotic head shapes. This knowledge is critical for the prompt referral of infants with craniosynostosis for surgical evaluation and correction; rapid referral may allow a minimally invasive surgical procedure instead of a craniectomy. For plagiocephaly and brachycephaly, timely referral can also lead to a physical therapy referral, if needed, to treat torticollis and help improve head shape. A well-timed referral to a head shape clinic may eliminate the need for a helmet or minimize the time spent in one. Practitioners will learn the importance of obtaining head measurements using calipers. The presenter will explain the head calculations and how they are interpreted to determine the severity of head shape abnormalities; severity defines the treatment plan. Participants will learn when to refer patients to a head shape abnormality clinic and which techniques to teach parents while waiting for the referral appointment. The purpose, mechanics, and logistics of helmet therapy, including the optimal time to initiate it, recommended wear time, and tips for compliance, will be described. Case scenarios will be incorporated into the presentation to support learning, and their salient points will be explained and discussed. Practitioners will be able to immediately translate the knowledge and skills gained in this presentation into their clinical practice.
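As a sketch of the kind of head calculations derived from caliper measurements, the snippet below computes the cephalic index and the cranial vault asymmetry index, two indices commonly used to grade brachycephaly and plagiocephaly; the measurements and the severity cut-offs in the comments are illustrative assumptions, as cut-offs vary by clinic.

```python
def cephalic_index(width_mm, length_mm):
    """Cephalic index: head width / head length x 100, from caliper measures."""
    return 100 * width_mm / length_mm

def cvai(diagonal_a_mm, diagonal_b_mm):
    """Cranial vault asymmetry index: percentage difference between the
    two transcranial diagonals, used to grade plagiocephaly severity."""
    longer = max(diagonal_a_mm, diagonal_b_mm)
    shorter = min(diagonal_a_mm, diagonal_b_mm)
    return 100 * (longer - shorter) / longer

# Hypothetical caliper measurements; severity cut-offs vary by clinic.
print(f"CI   = {cephalic_index(135, 150):.1f}")  # above ~90 suggests brachycephaly
print(f"CVAI = {cvai(158, 147):.1f}%")           # above ~3.5% suggests plagiocephaly
```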

Keywords: plagiocephaly, brachycephaly, craniosynostosis, red flags

Procedia PDF Downloads 93
374 System Devices to Reduce Particulate Matter Concentrations in Railway Metro Systems

Authors: Armando Cartenì

Abstract:

Within sustainable transportation engineering, the problem of reducing particulate matter (PM) concentrations in railway metro systems has received little attention. It is well known that PM levels in railway metro systems are produced mainly by mechanical friction at the rail-wheel-brake interfaces and by PM re-suspension caused by the turbulence generated by passing trains, which poses risks to passenger health. Starting from these considerations, the aim of this research was twofold: i) to investigate particulate matter concentrations in a ‘traditional’ railway metro system; ii) to investigate particulate matter concentrations in a ‘high quality’ metro system equipped with design devices useful for reducing PM concentrations: platform screen doors, rubber-tyred rolling stock, and an advanced ventilation system. Two measurement surveys were performed: one in the ‘traditional’ metro system of Naples (Italy) and another in the ‘high quality’ rubber-tyred metro system of Turin (Italy). The experimental results for the ‘traditional’ metro system of Naples show that the average PM10 concentrations measured on the underground station platforms are very high, ranging between 172 and 262 µg/m3, while the average PM2.5 concentrations range between 45 and 60 µg/m3, posing risks to passenger health. By contrast, the measurement results for the ‘high quality’ metro system of Turin show that: i) the average PM10 (PM2.5) concentration measured on the underground station platform is 22.7 µg/m3 (16.0 µg/m3) with a standard deviation of 9.6 µg/m3 (7.6 µg/m3); ii) indoor concentrations (for both PM10 and PM2.5) are statistically lower than those measured outdoors (with a ratio of 0.9-0.8), meaning that indoor air quality is better than that of the ambient urban air; iii) PM concentrations in underground stations are correlated with train passages; iv) concentrations inside the trains (for both PM10 and PM2.5) are statistically lower than those measured at the station platform (with a ratio of 0.7-0.8), suggesting that inside the trains the air conditioning system promotes a circulation that cleans the air. The comparison between the two case studies shows that a metro system designed with PM-reduction devices can reduce PM concentrations by a factor of up to 11 relative to a ‘traditional’ one. From these results, it can be concluded that PM concentrations measured in a ‘high quality’ metro system are significantly lower than those measured in ‘traditional’ railway metro systems. This provides a basis for the design of devices for retrofitting metro systems around the world.
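The reduction factor follows directly from the reported platform concentrations, as the short sketch below verifies using only the figures quoted in the abstract.

```python
# Average platform PM10 concentrations reported in the abstract (ug/m3)
naples_range = (172, 262)   # 'traditional' system, Naples
turin_mean = 22.7           # 'high quality' system, Turin

low, high = (c / turin_mean for c in naples_range)
print(f"Reduction factor: {low:.1f}x to {high:.1f}x")  # ~7.6x to ~11.5x,
# consistent with the 'up to 11 times' figure in the text
```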

Keywords: air quality, pollutant emission, quality in public transport, underground railway, external cost reduction, transportation planning

Procedia PDF Downloads 209