Search results for: fault ride-through capability
110 The Enquiry of Food Culture Products, Practices and Perspectives: An Action Research on Teaching and Learning Food Culture from International Food Documentary Films
Authors: Tsuiping Chen
Abstract:
It has long been an international consensus that food forms a large part of any culture. However, this idea was not concretized globally until UNESCO's 2010 decision to recognize food and cuisine as intangible cultural heritage. That announcement strengthened the value of food culture, which is attracting growing attention in every country. Although Taiwan is not a member of the United Nations, it cannot detach itself from this important global trend, especially given the many culinary students expected to join the world culinary job market. These students should be well educated in world food culture so that they develop the sensibility and perspectives needed to engage with emerging global food issues before entering culinary careers. With this concern as its premise, the researcher, who was also the instructor, conducted action research with one class of students in the 'Food Culture' course, watching, discussing, and analyzing 12 culinary documentary films selected from one decade (2007-2016) of the Berlin Culinary Cinema during one semester of class hours. In addition, after class the students divided themselves into six groups and took part in 12 one-hour focus group discussions on the 12 films conducted by the researcher. Furthermore, during the semester the students submitted reflection reports on each film to the university e-portfolio system. All focus group discussions and reflection reports were recorded and collected for further analysis by the researcher and one invited film researcher. The constant comparison method of Glaser and Strauss' Grounded Theory (1967) was employed to analyze the collected data, and the findings were audited by all participants of the research. Together, the participants and the researchers generated 200 items of food culture products, 74 items of food culture practices, and 50 items of food culture perspectives from the action research journey of watching culinary documentaries. The journey broadened students' points of view on world food culture and enhanced their capability to construct perspectives on food culture. Four significant findings were demonstrated. First, learning food culture by watching the Berlin culinary films helps students connect to current global food issues such as food security, food poverty, and food sovereignty, which leads them to rethink how people should grow, share, and consume food. Second, watching different categories of documentary food films strengthens students' sense of responsibility for ensuring healthy lives and promoting well-being for all people in every corner of the world. Third, watching these documentaries encourages students to consider whether the culinary education they have received on this island is inclusive, and to recognize the importance of quality education in promoting lifelong learning. Last but not least, the journey of watching culinary documentary films in the 'Food Culture' course inspires students to take pride in their profession. It is hoped that this model of teaching food culture with culinary documentary films will inspire more food culture educators, researchers, and culinary curriculum designers.
Keywords: food culture, action research, culinary documentary films, food culture products, practices, perspectives
Procedia PDF Downloads 110
109 Ground Motion Modeling Using the Least Absolute Shrinkage and Selection Operator
Authors: Yildiz Stella Dak, Jale Tezcan
Abstract:
Ground motion models that relate a strong motion parameter of interest to a set of predictive seismological variables describing the earthquake source, the propagation path of the seismic wave, and the local site conditions constitute a critical component of seismic hazard analyses. When a sufficient number of strong motion records are available, ground motion relations are developed using statistical analysis of the recorded ground motion data. In regions lacking a sufficient number of recordings, a synthetic database is developed using stochastic, theoretical, or hybrid approaches. Regardless of how the database was developed, ground motion relations are developed using regression analysis. Development of a ground motion relation is a challenging process that inevitably requires the modeler to make subjective decisions regarding the inclusion criteria for the recordings, the functional form of the model, and the set of seismological variables to be included in the model. Because these decisions are critically important to the validity and applicability of the model, there is continuing interest in procedures that facilitate the development of ground motion models. This paper proposes the use of the Least Absolute Shrinkage and Selection Operator (LASSO) in selecting the set of predictive seismological variables to be used in developing a ground motion relation. The LASSO can be described as a penalized regression technique with a built-in capability for variable selection. Similar to ridge regression, the LASSO is based on the idea of shrinking the regression coefficients to reduce the variance of the model. Unlike ridge regression, where the coefficients are shrunk but never set equal to zero, the LASSO sets some of the coefficients exactly to zero, effectively performing variable selection. Given a set of candidate input variables and the output variable of interest, LASSO allows ranking the input variables in terms of their relative importance, thereby facilitating the selection of the set of variables to be included in the model. Because the risk of overfitting increases as the ratio of the number of predictors to the number of recordings increases, selection of a compact set of variables is important when only a small number of recordings are available. In addition, identification of a small set of variables can improve the interpretability of the resulting model, especially when there is a large number of candidate predictors. A practical application of the proposed approach is presented, using more than 600 recordings from the National Geospatial-Intelligence Agency (NGA) database, where the effect of a set of seismological predictors on the 5% damped maximum direction spectral acceleration is investigated. The candidate predictors considered are magnitude, Rrup, and Vs30. Using LASSO, the relative importance of the candidate predictors has been ranked. Regression models with increasing levels of complexity were constructed using the one, two, three, and four best predictors, and the models' ability to explain the observed variance in the target variable has been compared. The bias-variance trade-off in the context of model selection is discussed.
Keywords: ground motion modeling, least absolute shrinkage and selection operator, penalized regression, variable selection
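As an illustration of the variable-selection behavior described in this abstract, the following Python sketch fits a cross-validated LASSO with scikit-learn and ranks predictors by the magnitude of their standardized coefficients. The data are synthetic stand-ins (randomly generated magnitude, Rrup, and Vs30 columns and a placeholder target); nothing here comes from the study's actual recordings.

```python
# Illustrative only: synthetic stand-in data, not the recordings used in the study.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 600                                # roughly the number of recordings cited
X = np.column_stack([
    rng.uniform(4.0, 8.0, n),          # magnitude
    rng.uniform(1.0, 200.0, n),        # Rrup, closest distance to rupture (km)
    rng.uniform(150.0, 1500.0, n),     # Vs30 (m/s)
])
# Hypothetical target: a noisy linear stand-in for log spectral acceleration.
y = 1.2 * X[:, 0] - 0.01 * X[:, 1] - 0.001 * X[:, 2] + rng.normal(0, 0.3, n)

# Standardize predictors so the L1 penalty treats them on a common scale.
Xs = StandardScaler().fit_transform(X)

# Cross-validated LASSO: some coefficients are driven exactly to zero,
# which is what performs the variable selection.
model = LassoCV(cv=5).fit(Xs, y)

for name, coef in sorted(zip(["magnitude", "Rrup", "Vs30"], model.coef_),
                         key=lambda t: -abs(t[1])):
    print(f"{name:10s} coefficient = {coef:+.3f}")
```

Ranking predictors by the absolute value of the standardized coefficients, and noting which are set exactly to zero, mirrors the relative-importance ranking described in the abstract.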
Procedia PDF Downloads 330
108 Carbon Aerogels with Tailored Porosity as Cathode in Li-Ion Capacitors
Authors: María Canal-Rodríguez, María Arnaiz, Natalia Rey-Raap, Ana Arenillas, Jon Ajuria
Abstract:
The constant demand of electrical energy, as well as the increase in environmental concern, lead to the necessity of investing in clean and eco-friendly energy sources that implies the development of enhanced energy storage devices. Li-ion batteries (LIBs) and Electrical double layer capacitors (EDLCs) are the most widespread energy systems. Batteries are able to storage high energy densities contrary to capacitors, which main strength is the high-power density supply and the long cycle life. The combination of both technologies gave rise to Li-ion capacitors (LICs), which offers all these advantages in a single device. This is achieved combining a capacitive, supercapacitor-like positive electrode with a faradaic, battery-like negative electrode. Due to the abundance and affordability, dual carbon-based LICs are nowadays the common technology. Normally, an Active Carbon (AC) is used as the EDLC like electrode, while graphite is the material commonly employed as anode. LICs are potential systems to be used in applications in which high energy and power densities are required, such us kinetic energy recovery systems. Although these devices are already in the market, some drawbacks like the limited power delivered by graphite or the energy limiting nature of AC must be solved to trigger their used. Focusing on the anode, one possibility could be to replace graphite with Hard Carbon (HC). The better rate capability of the latter increases the power performance of the device. Moreover, the disordered carbonaceous structure of HCs enables storage twice the theoretical capacity of graphite. With respect to the cathode, the ACs are characterized for their high volume of micropores, in which the charge is storage. Nevertheless, they normally do not show mesoporous, which are really important mainly at high C-rates as they act as transport channels for the ions to reach the micropores. Usually, the porosity of ACs cannot be tailored, as it strongly depends on the precursor employed to get the final carbon. Moreover, they are not characterized for having a high electrical conductivity, which is an important characteristic to get a good performance in energy storage applications. A possible candidate to substitute ACs are carbon aerogels (CAs). CAs are materials that combine a high porosity with great electrical conductivity, opposite characteristics in carbon materials. Furthermore, its porous properties can be tailored quite accurately according to with the requirements of the application. In the present study, CAs with controlled porosity were obtained from polymerization of resorcinol and formaldehyde by microwave heating. Varying the synthesis conditions, mainly the amount of precursors and pH of the precursor solution, carbons with different textural properties were obtained. The way the porous characteristics affect the performance of the cathode was studied by means of a half-cell configuration. The material with the best performance was evaluated as cathode in a LIC versus a hard carbon as anode. An analogous full LIC made by a high microporous commercial cathode was also assembled for comparison purposes.Keywords: li-ion capacitors, energy storage, tailored porosity, carbon aerogels
Procedia PDF Downloads 167
107 Regularized Euler Equations for Incompressible Two-Phase Flow Simulations
Authors: Teng Li, Kamran Mohseni
Abstract:
This paper presents an inviscid regularization technique for incompressible two-phase flow simulations. The technique is known as the observable method, based on the notion of observability: any feature smaller than the actual resolution (physical or numerical), i.e., the size of the wire in hotwire anemometry or the grid size in numerical simulations, cannot be captured or observed. Unlike most regularization techniques, which are applied to the numerical discretization, the observable method is employed at the PDE level during the derivation of the equations. Difficulties in the simulation and analysis of realistic fluid flows often result from discontinuities (or near-discontinuities) in the calculated fluid properties or state. Accurately capturing these discontinuities is especially crucial when simulating flows involving shocks, turbulence, or sharp interfaces. Over the past several years, the properties of this regularization technique have been investigated, demonstrating its capability to regularize shocks and turbulence simultaneously. The observable method has been applied to direct numerical simulations of shocks and turbulence, where the discontinuities are successfully regularized and the flow features are well captured. In the current paper, the observable method is extended to two-phase interfacial flows. Multiphase flows share a key feature with shocks and turbulence, namely the nonlinear irregularity caused by the nonlinear terms in the governing equations, here the Euler equations. In direct numerical simulations of two-phase flows, the interface is usually treated as a smooth transition of properties from one fluid phase to the other. However, in high Reynolds number or low viscosity flows, the nonlinear terms generate smaller scales that sharpen the interface, causing discontinuities. Many numerical methods for two-phase flows fail in the high Reynolds number case, while others depend on the numerical diffusion introduced by the spatial discretization. The observable method regularizes this nonlinear mechanism by filtering the convective terms, and the process is inviscid. The filtering effect is controlled by an observable scale, which is usually about a grid length. A single rising bubble and the Rayleigh-Taylor instability are studied, in particular, to examine the performance of the observable method. A pseudo-spectral method, which does not introduce numerical diffusion, is used for spatial discretization, and a Total Variation Diminishing (TVD) Runge-Kutta method is applied for time integration. The observable incompressible Euler equations are solved for these two problems. In the rising bubble problem, the terminal velocity and shape of the bubble are examined and compared with experiments and other numerical results. In the Rayleigh-Taylor instability, the shape of the interface is studied for different observable scales, and the spike and bubble velocities, as well as positions (under a proper observable scale), are compared with other simulation results. The results indicate that this regularization technique can potentially regularize the sharp interface in two-phase flow simulations.
Keywords: Euler equations, incompressible flow simulation, inviscid regularization technique, two-phase flow
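To make the idea of filtering the convective term concrete, the sketch below shows a generic Leray-type regularization of the incompressible Euler equations, in which the advecting velocity is replaced by a low-pass filtered velocity at an observable scale α. This is only an illustration of the general principle; the exact observable formulation used by the authors may differ in the choice of filter and in how the two-phase interface terms are treated.

```latex
% Generic Leray-type filtering of the convective term (illustrative; not
% necessarily the authors' exact observable formulation):
\begin{align}
  \frac{\partial \mathbf{u}}{\partial t}
    + (\bar{\mathbf{u}}\cdot\nabla)\,\mathbf{u} + \nabla p &= 0, &
  \nabla\cdot\mathbf{u} &= 0,\\
  \bar{\mathbf{u}} &= g_{\alpha} * \mathbf{u}, &
  \text{e.g. } (1-\alpha^{2}\Delta)\,\bar{\mathbf{u}} &= \mathbf{u},
\end{align}
% where alpha is the observable scale (of the order of a grid length) and
% g_alpha is the associated low-pass filter kernel.
```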
Procedia PDF Downloads 502
106 Climate Change Law and Transnational Corporations
Authors: Manuel Jose Oyson
Abstract:
The Intergovernmental Panel on Climate Change (IPCC) warned in its most recent report for the entire world “to both mitigate and adapt to climate change if it is to effectively avoid harmful climate impacts.” The IPCC observed “with high confidence” a more rapid rise in total anthropogenic greenhouse gas emissions (GHG) emissions from 2000 to 2010 than in the past three decades that “were the highest in human history”, which if left unchecked will entail a continuing process of global warming and can alter the climate system. Current efforts, however, to respond to the threat of global warming, such as the United Nations Framework Convention on Climate Change and the Kyoto Protocol, have focused on states, and fail to involve Transnational Corporations (TNCs) which are responsible for a vast amount of GHG emissions. Involving TNCs in the search for solutions to climate change is consistent with an acknowledgment by contemporary international law that there is an international role for other international persons, including TNCs, and departs from the traditional “state-centric” response to climate change. Putting the focus of GHG emissions away from states recognises that the activities of TNCs “are not bound by national borders” and that the international movement of goods meets the needs of consumers worldwide. Although there is no legally-binding instrument that covers TNC activities or legal responsibilities generally, TNCs have increasingly been made legally responsible under international law for violations of human rights, exploitation of workers and environmental damage, but not for climate change damage. Imposing on TNCs a legally-binding obligation to reduce their GHG emissions or a legal liability for climate change damage is arguably formidable and unlikely in the absence a recognisable source of obligation in international law or municipal law. Instead a recourse to “soft law” and non-legally binding instruments may be a way forward for TNCs to reduce their GHG emissions and help in addressing climate change. Positive effects have been noted by various studies to voluntary approaches. TNCs have also in recent decades voluntarily committed to “soft law” international agreements. This development reflects a growing recognition among corporations in general and TNCs in particular of their corporate social responsibility (CSR). While CSR used to be the domain of “small, offbeat companies”, it has now become part of mainstream organization. The paper argues that TNCs must voluntarily commit to reducing their GHG emissions and helping address climate change as part of their CSR. One, as a serious “global commons problem”, climate change requires international cooperation from multiple actors, including TNCs. Two, TNCs are not innocent bystanders but are responsible for a large part of GHG emissions across their vast global operations. Three, TNCs have the capability to help solve the problem of climate change. Assuming arguendo that TNCs did not strongly contribute to the problem of climate change, society would have valid expectations for them to use their capabilities, knowledge-base and advanced technologies to help address the problem. It would seem unthinkable for TNCs to do nothing while the global environment fractures.Keywords: climate change law, corporate social responsibility, greenhouse gas emissions, transnational corporations
Procedia PDF Downloads 350
105 Integrated Manufacture of Polymer and Conductive Tracks for Functional Objects Fabrication
Authors: Barbara Urasinska-Wojcik, Neil Chilton, Peter Todd, Christopher Elsworthy, Gregory J. Gibbons
Abstract:
The recent increase in the application of Additive Manufacturing (AM) of products has resulted in new demands on capability. The ability to integrate both form and function within printed objects is the next frontier in the 3D printing area. To move beyond prototyping into low volume production, we demonstrate a UK-designed and built AM hybrid system that combines polymer based structural deposition with digital deposition of electrically conductive elements. This hybrid manufacturing system is based on a multi-planar build approach to improve on many of the limitations associated with AM, such as poor surface finish, low geometric tolerance, and poor robustness. Specifically, the approach involves a multi-planar Material Extrusion (ME) process in which separated build stations with up to 5 axes of motion replace traditional horizontally-sliced layer modeling. The construction of multi-material architectures also involved using multiple print systems in order to combine both ME and digital deposition of conductive material. To demonstrate multi-material 3D printing, three thermoplastics, acrylonitrile butadiene styrene (ABS), polyamide 6,6/6 copolymers (CoPA) and polyamide 12 (PA) were used to print specimens, on top of which our high viscosity Ag-particulate ink was printed in a non-contact process, during which drop characteristics such as shape, velocity, and volume were assessed using a drop watching system. Spectroscopic analysis of these 3D printed materials in the IR region helped to determine the optimum in-situ curing system for implementation into the AM system to achieve improved adhesion and surface refinement. Thermal Analyses were performed to determine the printed materials glass transition temperature (Tg), stability and degradation behavior to find the optimum annealing conditions post printing. Electrical analysis of printed conductive tracks on polymer surfaces during mechanical testing (static tensile and 3-point bending and dynamic fatigue) was performed to assess the robustness of the electrical circuits. The tracks on CoPA, ABS, and PA exhibited low electrical resistance, and in case of PA resistance values of tracks remained unchanged across hundreds of repeated tensile cycles up to 0.5% strain amplitude. Our developed AM printer has the ability to fabricate fully functional objects in one build, including complex electronics. It enables product designers and manufacturers to produce functional saleable electronic products from a small format modular platform. It will make 3D printing better, faster and stronger.Keywords: additive manufacturing, conductive tracks, hybrid 3D printer, integrated manufacture
Procedia PDF Downloads 166
104 Microbial Fuel Cells: Performance and Applications
Authors: Andrea Pietrelli, Vincenzo Ferrara, Bruno Allard, Francois Buret, Irene Bavasso, Nicola Lovecchio, Francesca Costantini, Firas Khaled
Abstract:
This paper aims to show some applications of microbial fuel cells (MFCs), an energy harvesting technique, as a clean power source to supply low power devices, for example the nodes of a wireless sensor network (WSN) for environmental monitoring. Furthermore, an MFC can be used directly as a biosensor to analyse parameters like pH and temperature, or arranged in clusters to serve as a small power plant. An MFC is a bioreactor that converts energy stored in the chemical bonds of organic matter into electrical energy through a series of reactions catalysed by microorganisms. We developed a lab-scale terrestrial microbial fuel cell (TMFC), based on soil that acts as the source of bacteria and the flow of nutrients, and a lab-scale waste water microbial fuel cell (WWMFC), in which waste water provides both the nutrients and the bacteria. We performed a large series of tests to explore their capability as biosensors. The pH value has a strong influence on the open circuit voltage (OCV) delivered by TMFCs. We analyzed three conditions: tests A and B were filled with the same soil but with the pH changed from 6 to 6.63, while test C was prepared using a different soil with a pH value of 6.3. The experimental results clearly show that a higher pH value produces a higher OCV; the reactors are influenced by pH, with the voltage increasing with pH until the optimum value of 7 is reached. The influence of pH on the OCV of lab-scale WWMFCs was analyzed at pH values of 6.5, 7, 7.2, 7.5 and 8. WWMFCs are influenced by temperature more than TMFCs. We tested the power performance of WWMFCs at four imposed ambient temperatures. The results show that the power performance increases proportionally with temperature, doubling the output power from 20 °C to 40 °C. The best power produced by our lab-scale TMFC was 310 μW using peaty soil, at 1 kΩ, corresponding to a current of 0.5 mA. A TMFC can supply adequate energy to the low power devices of a WSN by means of a three-stage energy management system, which adapts the voltage level of the TMFC to that required by a WSN node, such as 3.3 V. Using a commercial DC/DC boost converter that needs an input voltage of 700 mV, the 0.5 mA current source charges a 6.8 mF capacitor until it has accumulated a charge corresponding to 700 mV, in a time of about 10 s. The output stage includes a switch that closes the circuit after 10 s + 1.5 ms, because the converter can boost the voltage from 0.7 V to 3.3 V in 1.5 ms. Furthermore, we tested clusters of up to 20 WWMFCs connected in series, obtaining a high output voltage of around 10 V but a low current. MFCs can therefore be considered a suitable clean energy source to supply low power devices such as a WSN node, or to be used directly as biosensors.
Keywords: energy harvesting, low power electronics, microbial fuel cell, terrestrial microbial fuel cell, waste-water microbial fuel cell, wireless sensor network
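The charging time quoted for the input stage of the energy management system can be checked with a couple of lines of arithmetic. The sketch below is a back-of-the-envelope check using Q = C·V and t = Q/I with the values given in the abstract; it is not code from the authors.

```python
# Back-of-the-envelope check of the energy-management input stage described above.
C = 6.8e-3      # storage capacitor, farads
V = 0.7         # target voltage required by the DC/DC boost converter, volts
I = 0.5e-3      # current delivered by the TMFC, amperes (assumed constant)

Q = C * V       # charge needed to reach 0.7 V
t = Q / I       # charging time at constant current

print(f"charge needed: {Q*1e3:.2f} mC")   # ~4.76 mC
print(f"charging time: {t:.1f} s")        # ~9.5 s, consistent with the ~10 s quoted
```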
Procedia PDF Downloads 207
103 Remote Radiation Mapping Based on UAV Formation
Authors: Martin Arguelles Perez, Woosoon Yim, Alexander Barzilov
Abstract:
High-fidelity radiation monitoring is an essential component in the enhancement of the situational awareness capabilities of the Department of Energy's Office of Environmental Management (DOE-EM) personnel. In this paper, multiple unmanned aerial vehicles (UAVs), each equipped with a cadmium zinc telluride (CZT) gamma-ray sensor, are used for radiation source localization, which can provide vital real-time data for EM tasks. To achieve this goal, a fully autonomous swarm of multicopter-based UAVs in a 3D tetrahedron formation is used for surveying the area of interest and performing radiation source localization. The CZT sensor used in this study is suitable for small multicopter UAVs owing to its small size and ease of interfacing with the UAV's onboard electronics for high-resolution gamma spectroscopy, enabling the characterization of radiation hazards. The multicopter platform with a fully autonomous flight feature is suitable for low-altitude applications such as radiation contamination sites. The conventional approach uses a single UAV mapping along a predefined waypoint path to predict the relative location and strength of the source, which can be time-consuming for radiation localization tasks. The proposed UAV swarm-based approach can significantly improve the ability to search for and track radiation sources. In this paper, two approaches are developed using (a) a 2D planar circular formation (3 UAVs) and (b) a 3D tetrahedron formation (4 UAVs). In both approaches, accurate estimation of the gradient vector is crucial for the heading angle calculation. Each UAV carries a CZT sensor; the real-time radiation data are used to calculate a bulk heading vector so that the swarm achieves source-seeking behavior. A spinning formation is also studied for both cases to improve gradient estimation near a radiation source. In the 3D tetrahedron formation, the UAV located closest to the source is designated as the lead unit to maintain the tetrahedron formation in space. Such a formation demonstrated collective and coordinated movement for estimating the gradient vector of the radiation source and determining an optimal heading direction for the swarm. The proposed radiation localization technique is studied by computer simulation and validated experimentally in an indoor flight testbed using gamma sources. The technology presented in this paper provides the capability to readily add or replace radiation sensors on the UAV platforms in field conditions, enabling extensive condition measurement and greatly improving situational awareness and event management. Furthermore, the proposed radiation localization approach allows long-term measurements to be performed efficiently over wide areas of interest to prevent disasters and reduce dose risks to people and infrastructure.
Keywords: radiation, unmanned aerial system (UAV), source localization, UAV swarm, tetrahedron formation
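A common way to obtain the bulk heading vector described above is to fit a local linear model of the radiation field to the simultaneous sensor readings at the formation vertices and take the fitted gradient as the heading direction. The Python sketch below illustrates that idea for a four-UAV tetrahedron with made-up positions and count rates; it shows the general least-squares approach, not the authors' flight code.

```python
# Illustrative least-squares gradient estimate from four simultaneous readings
# (hypothetical positions and count rates, not data from the experiments).
import numpy as np

# UAV positions (m) at the vertices of a small tetrahedron.
positions = np.array([
    [0.0, 0.0, 2.0],
    [1.0, 0.0, 2.0],
    [0.5, 0.9, 2.0],
    [0.5, 0.3, 2.8],
])
# CZT count rates (counts/s) measured at those positions.
counts = np.array([120.0, 180.0, 150.0, 135.0])

# Fit c(r) ~ c0 + g . (r - r_mean): design-matrix columns are [1, dx, dy, dz].
r_mean = positions.mean(axis=0)
A = np.hstack([np.ones((4, 1)), positions - r_mean])
coef, *_ = np.linalg.lstsq(A, counts, rcond=None)
gradient = coef[1:]

# The swarm heading is the unit vector along the estimated gradient
# (pointing toward increasing count rate, i.e. toward the source).
heading = gradient / np.linalg.norm(gradient)
print("estimated gradient:", gradient)
print("heading direction :", heading)
```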
Procedia PDF Downloads 99
102 Graphene Supported Nano Cerium Oxides Hybrid as an Electrocatalyst for Oxygen Reduction Reactions
Authors: Siba Soren, Purnendu Parhi
Abstract:
Today, the world is facing a severe challenge due to depletion of traditional fossil fuels. Scientists across the globe are working for a solution that involves a dramatic shift to practical and environmentally sustainable energy sources. High-capacity energy systems, such as metal-air batteries, fuel cells, are highly desirable to meet the urgent requirement of sustainable energies. Among the fuel cells, Direct methanol fuel cells (DMFCs) are recognized as an ideal power source for mobile applications and have received considerable attention in recent past. In this advanced electrochemical energy conversion technologies, Oxygen Reduction Reaction (ORR) is of utmost importance. However, the poor kinetics of cathodic ORR in DMFCs significantly hampers their possibilities of commercialization. The oxygen is reduced in alkaline medium either through a 4-electron (equation i) or a 2-electron (equation ii) reduction pathway at the cathode ((i) O₂ + 2H₂O + 4e⁻ → 4OH⁻, (ii) O₂ + H₂O + 2e⁻ → OH⁻ + HO₂⁻ ). Due to sluggish ORR kinetics the ability to control the reduction of molecular oxygen electrocatalytically is still limited. The electrocatalytic ORR starts with adsorption of O₂ on the electrode surface followed by O–O bond activation/cleavage and oxide removal. The reaction further involves transfer of 4 electrons and 4 protons. The sluggish kinetics of ORR, on the one hand, demands high loading of precious metal-containing catalysts (e.g., Pt), which unfavorably increases the cost of these electrochemical energy conversion devices. Therefore, synthesis of active electrocatalyst with an increase in ORR performance is need of the hour. In the recent literature, there are many reports on transition metal oxide (TMO) based ORR catalysts for their high activity TMOs are also having drawbacks like low electrical conductivity, which seriously affects the electron transfer process during ORR. It was found that 2D graphene layer is having high electrical conductivity, large surface area, and excellent chemical stability, appeared to be an ultimate choice as support material to enhance the catalytic performance of bare metal oxide. g-C₃N₄ is also another candidate that has been used by the researcher for improving the ORR performance of metal oxides. This material provides more active reaction sites than other N containing carbon materials. Rare earth oxide like CeO₂ is also a good candidate for studying the ORR activity as the metal oxide not only possess unique electronic properties but also possess catalytically active sites. Here we will discuss the ORR performance (in alkaline medium) of N-rGO/C₃N₄ supported nano Cerium Oxides hybrid synthesized by microwave assisted Solvothermal method. These materials exhibit superior electrochemical stability and methanol tolerance capability to that of commercial Pt/C.Keywords: oxygen reduction reaction, electrocatalyst, cerium oxide, graphene
Procedia PDF Downloads 194
101 Shale Gas and Oil Resource Assessment in Middle and Lower Indus Basin of Pakistan
Authors: Amjad Ali Khan, Muhammad Ishaq Saqi, Kashif Ali
Abstract:
The focus of hydrocarbon exploration in Pakistan has been primarily on conventional hydrocarbon resources. Directorate General Petroleum Concessions (DGPC) has taken the lead on the assessment of indigenous unconventional oil and gas resources, which has resulted in a ‘Shale Oil/Gas Resource Assessment Study’ conducted with the help of USAID. This was critically required in the energy-starved Pakistan, where the gap between indigenous oil & gas production and demand continues to widen for a long time. Exploration & exploitation of indigenous unconventional resources of Pakistan have become vital to meet our energy demand and reduction of oil and gas import bill of the country. This study has attempted to bridge a critical gap in geological information about the potential of shale gas & oil in Pakistan in the four formations, i.e., Sembar, Lower Goru, Ranikot and Ghazij in the Middle and Lower Indus Basins, which were selected for the study as for resource assessment for shale gas & oil. The primary objective of the study was to estimate and establish shale oil/gas resource assessment of the study area by carrying out extensive geological analysis of exploration, appraisal and development wells drilled in the Middle and Lower Indus Basins, along with identification of fairway(s) and sweet spots in the study area. The Study covers the Lower parts of the Middle Indus basins located in Sindh, southern Punjab & eastern parts of the Baluchistan provinces, with a total sedimentary area of 271,795 km2. Initially, 1611 wells were reviewed, including 1324 wells drilled through different shale formations. Based on the availability of required technical data, a detailed petrophysical analysis of 124 wells (21 Confidential & 103 in the public domain) has been conducted for the shale gas/oil potential of the above-referred formations. The core & cuttings samples of 32 wells and 33 geochemical reports of prospective Shale Formations were available, which were analyzed to calibrate the results of petrophysical analysis with petrographic/ laboratory analyses to increase the credibility of the Shale Gas Resource assessment. This study has identified the most prospective intervals, mainly in Sembar and Lower Goru Formations, for shale gas/oil exploration in the Middle and Lower Indus Basins of Pakistan. The study recommends seven (07) sweet spots for undertaking pilot projects, which will enable to evaluate of the actual production capability and production sustainability of shale oil/gas reservoirs of Pakistan for formulating future strategies to explore and exploit shale/oil resources of Pakistan including fiscal incentives required for developing shale oil/gas resources of Pakistan. Some E&P Companies are being persuaded to make a consortium for undertaking pilot projects that have shown their willingness to participate in the pilot project at appropriate times. The location for undertaking the pilot project has been finalized as a result of a series of technical sessions by geoscientists of the potential consortium members after the review and evaluation of available studies.Keywords: conventional resources, petrographic analysis, petrophysical analysis, unconventional resources, shale gas & oil, sweet spots
Procedia PDF Downloads 48
100 Laboratory Assessment of Electrical Vertical Drains in Composite Soils Using Kaolin and Bentonite Clays
Authors: Maher Z. Mohammed, Barry G. Clarke
Abstract:
As an alternative to stone column in fine grained soils, it is possible to create stiffened columns of soils using electroosmosis (electroosmotic piles). This program of this research is to establish the effectiveness and efficiency of the process in different soils. The aim of this study is to assess the capability of electroosmosis treatment in a range of composite soils. The combined electroosmotic and preloading equipment developed by Nizar and Clarke (2013) was used with an octagonal array of anodes surrounding a single cathode in a nominal 250mm diameter 300mm deep cylinder of soil and 80mm anode to cathode distance. Copper coiled springs were used as electrodes to allow the soil to consolidate either due to an external vertical applied load or electroosmosis. The equipment was modified to allow the temperature to be monitored during the test. Electroosmotic tests were performed on China Clay Grade E kaolin and calcium bentonite (Bentonex CB) mixed with sand fraction C (BS 1881 part 131) at different ratios by weight; (0, 23, 33, 50 and 67%) subjected to applied voltages (5, 10, 15 and 20). The soil slurry was prepared by mixing the dry soil with water to 1.5 times the liquid limit of the soil mixture. The mineralogical and geotechnical properties of the tested soils were measured before the electroosmosis treatment began. In the electroosmosis cell tests, the settlement, expelled water, variation of electrical current and applied voltage, and the generated heat was monitored during the test time for 24 osmotic tests. Water content was measured at the end of each test. The electroosmotic tests are divided into three phases. In Phase 1, 15 kPa was applied to simulate a working platform and produce a uniform soil which had been deposited as a slurry. 50 kPa was used in Phase 3 to simulate a surcharge load. The electroosmotic treatment was only performed during Phase 2 where a constant voltage was applied through the electrodes in addition to the 15 kPa pressure. This phase was stopped when no further water was expelled from the cell, indicating the electroosmotic process had stopped due to either the degradation of the anode or the flow due to the hydraulic gradient exactly balanced the electroosmotic flow resulting in no flow. Control tests for each soil mixture were carried out to assess the behaviour of the soil samples subjected to only an increase of vertical pressure, which is 15kPa in Phase 1 and 50kPa in Phase 3. Analysis of the experimental results from this study showed a significant dewatering effect on the soil slurries. The water discharged by the electroosmotic treatment process decreased as the sand content increased. Soil temperature increased significantly when electrical power was applied and drops when applied DC power turned off or when the electrode degraded. The highest increase in temperature was found in pure clays at higher applied voltage after about 8 hours of electroosmosis test.Keywords: electrokinetic treatment, electrical conductivity, electroosmotic consolidation, electroosmosis permeability ratio
Procedia PDF Downloads 166
99 Techno Economic Analysis of CAES Systems Integrated into Gas-Steam Combined Plants
Authors: Coriolano Salvini
Abstract:
The increasing utilization of renewable energy sources for electric power production calls for the introduction of energy storage systems to match the electric demand along the time. Although many countries are pursuing as a final goal a “decarbonized” electrical system, in the next decades the traditional fossil fuel fed power plant still will play a relevant role in fulfilling the electric demand. Presently, such plants provide grid ancillary services (frequency control, grid balance, reserve, etc.) by adapting the output power to the grid requirements. An interesting option is represented by the possibility to use traditional plants to improve the grid storage capabilities. The present paper is addressed to small-medium size systems suited for distributed energy storage. The proposed Energy Storage System (ESS) is based on a Compressed Air Energy Storage (CAES) integrated into a Gas-Steam Combined Cycle (GSCC) or a Gas Turbine based CHP plants. The systems can be incorporated in an ex novo built plant or added to an already existing one. To avoid any geological restriction related to the availability of natural compressed air reservoirs, artificial storage is addressed. During the charging phase, electric power is absorbed from the grid by an electric driven intercooled/aftercooled compressor. In the course of the discharge phase, the compressed stored air is sent to a heat transfer device fed by hot gas taken upstream the Heat Recovery Steam Generator (HRSG) and subsequently expanded for power production. To maximize the output power, a staged reheated expansion process is adopted. The specific power production related to the kilogram per second of exhaust gas used to heat the stored air is two/three times larger than that achieved if the gas were used to produce steam in the HRSG. As a result, a relevant power augmentation is attained with respect to normal GSCC plant operations without additional use of fuel. Therefore, the excess of output power can be considered “fuel free” and the storage system can be compared to “pure” ESSs such as electrochemical, pumped hydro or adiabatic CAES. Representative cases featured by different power absorption, production capability, and storage capacity have been taken into consideration. For each case, a technical optimization aimed at maximizing the storage efficiency has been carried out. On the basis of the resulting storage pressure and volume, number of compression and expansion stages, air heater arrangement and process quantities found for each case, a cost estimation of the storage systems has been performed. Storage efficiencies from 0.6 to 0.7 have been assessed. Capital costs in the range of 400-800 €/kW and 500-1000 €/kWh have been estimated. Such figures are similar or lower to those featuring alternative storage technologies.Keywords: artificial air storage reservoir, compressed air energy storage (CAES), gas steam combined cycle (GSCC), techno-economic analysis
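Since the storage efficiency and specific cost figures quoted above are ratios of a few headline quantities, the following sketch shows how such figures are typically assembled, taking storage efficiency as discharged electricity over charging electricity. All input numbers are hypothetical placeholders chosen only to fall inside the ranges cited in the abstract; they are not results from the paper.

```python
# Hypothetical example of the techno-economic ratios discussed above.
# All inputs are placeholders, not results from the study.
charge_energy_mwh    = 10.0    # electricity absorbed from the grid while charging
discharge_energy_mwh = 6.5     # extra "fuel free" electricity produced on discharge
capex_eur            = 4.0e6   # assumed capital cost of the storage system
rated_power_kw       = 5_000   # discharge power rating
storage_capacity_kwh = 6_500   # usable storage capacity

storage_efficiency = discharge_energy_mwh / charge_energy_mwh
cost_per_kw  = capex_eur / rated_power_kw
cost_per_kwh = capex_eur / storage_capacity_kwh

print(f"storage efficiency: {storage_efficiency:.2f}")    # 0.65, inside the 0.6-0.7 range
print(f"capital cost:       {cost_per_kw:.0f} EUR/kW")    # 800 EUR/kW
print(f"                    {cost_per_kwh:.0f} EUR/kWh")  # ~615 EUR/kWh
```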
Procedia PDF Downloads 214
98 Education Delivery in Youth Justice Centres: Inside-Out Prison Exchange Program Pedagogy in an Australian Context
Authors: Tarmi A'Vard
Abstract:
This paper discusses the transformative learning experience for students participating in the Inside-Out Prison Exchange Program (Inside-out) and explores the value this pedagogical approach may have in youth justice centers. Inside-Out is a semester-long university course which is unique as it takes 15 university students, with their textbook and theory-based knowledge, behind the walls to study alongside 15 incarcerated students, who have the lived experience of the criminal justice system. Inside-out is currently offered in three Victorian prisons, expanding to five in 2020. The Inside-out pedagogy which is based on transformative dialogic learning is reliant upon the participants sharing knowledge and experiences to develop an understanding and appreciation of the diversity and uniqueness of one another. Inside-out offers the class an opportunity to create its own guidelines for dialogue, which can lead to the student’s sense of equality, which is fundamental in the success of this program. Dialogue allows active participation by all parties in reconciling differences, collaborating ideas, critiquing and developing hypotheses and public policies, and encouraging self-reflection and exploration. The structure of the program incorporates the implementation of circular seating (where the students alternate between inside and outside), activities, individual reflective tasks, group work, and theory analysis. In this circle everyone is equal, this includes the educator, who serves as a facilitator more so than the traditional teacher role. A significant function of the circle is to develop a group consciousness, allowing the whole class to see itself as a collective, and no one person holds a superior role. This also encourages participants to be responsible and accountable for their behavior and contributions. Research indicates completing academic courses, like Inside-Out, contributes positively to reducing recidivism. Inside-Out’s benefits and success in many adult correctional institutions have been outlined in evaluation reports and scholarly articles. The key findings incorporate the learning experiences for the students in both an academic capability and professional practice and development. Furthermore, stereotypes and pre-determined ideas are challenged, and there is a promotion of critical thinking and evidence of self-discovery and growth. There is empirical data supporting positive outcomes of education in youth justice centers in reducing recidivism and increasing the likelihood of returning to education upon release. Hence, this research could provide the opportunity to increase young people’s engagement in education which is a known protective factor for assisting young people to move away from criminal behavior. In 2016, Tarmi completed the Inside-Out educator training in Philadelphia, Pennsylvania, and has developed an interest in exploring the pedagogy of Inside-Out, specifically targeting young offenders in a Youth Justice Centre.Keywords: dialogic transformative learning, inside-out prison exchange program, prison education, youth justice
Procedia PDF Downloads 126
97 The Study of Mirror Self-Recognition in Wildlife
Authors: Azwan Hamdan, Mohd Qayyum Ab Latip, Hasliza Abu Hassim, Tengku Rinalfi Putra Tengku Azizan, Hafandi Ahmad
Abstract:
Animal cognition provides some evidence for self-recognition, which is described as the ability to recognize oneself as an individual separate from the environment and other individuals. The mirror self-recognition (MSR) or mark test is a behavioral technique to determine whether an animal have the ability of self-recognition or self-awareness in front of the mirror. It also describes the capability for an animal to be aware of and make judgments about its new environment. Thus, the objectives of this study are to measure and to compare the ability of wild and captive wildlife in mirror self-recognition. Wild animals from the Royal Belum Rainforest Malaysia were identified based on the animal trails and salt lick grounds. Acrylic mirrors with wood frame (200 x 250cm) were located near to animal trails. Camera traps (Bushnell, UK) with motion-detection infrared sensor are placed near the animal trails or hiding spot. For captive wildlife, animals such as Malayan sun bear (Helarctos malayanus) and chimpanzee (Pan troglodytes) were selected from Zoo Negara Malaysia. The captive animals were also marked using odorless and non-toxic white paint on its forehead. An acrylic mirror with wood frame (200 x 250cm) and a video camera were placed near the cage. The behavioral data were analyzed using ethogram and classified through four stages of MSR; social responses, physical inspection, repetitive mirror-testing behavior and realization of seeing themselves. Results showed that wild animals such as barking deer (Muntiacus muntjak) and long-tailed macaque (Macaca fascicularis) increased their physical inspection (e.g inspecting the reflected image) and repetitive mirror-testing behavior (e.g rhythmic head and leg movement). This would suggest that the ability to use a mirror is most likely related to learning process and cognitive evolution in wild animals. However, the sun bear’s behaviors were inconsistent and did not clearly undergo four stages of MSR. This result suggests that when keeping Malayan sun bear in captivity, it may promote communication and familiarity between conspecific. Interestingly, chimp has positive social response (e.g manipulating lips) and physical inspection (e.g using hand to inspect part of the face) when they facing a mirror. However, both animals did not show any sign towards the mark due to lost of interest in the mark and realization that the mark is inconsequential. Overall, the results suggest that the capacity for MSR is the beginning of a developmental process of self-awareness and mental state attribution. In addition, our findings show that self-recognition may be based on different complex neurological and level of encephalization in animals. Thus, research on self-recognition in animals will have profound implications in understanding the cognitive ability of an animal as an effort to help animals, such as enhanced management, design of captive individuals’ enclosures and exhibits, and in programs to re-establish populations of endangered or threatened species.Keywords: mirror self-recognition (MSR), self-recognition, self-awareness, wildlife
Procedia PDF Downloads 272
96 Ensuring Safety in Fire Evacuation by Facilitating Way-Finding in Complex Buildings
Authors: Atefeh Omidkhah, Mohammadreza Bemanian
Abstract:
The issue of way-finding earmarks a wide range of literature in architecture and despite the 50 year background of way-finding studies, it still lacks a comprehensive theory for indoor settings. Way-finding has a notable role in emergency evacuation as well. People in the panic situation of a fire emergency need to find the safe egress route correctly and in as minimum time as possible. In this regard the parameters of an appropriate way-finding are mentioned in the evacuation related researches albeit scattered. This study reviews the fire safety related literature to extract a way-finding related framework for architectural purposes of the design of a safe evacuation route. In this regard a research trend review in addition with applied methodological approaches review is conducted. Then by analyzing eight original researches related to way-finding parameters in fire evacuation, main parameters that affect way-finding in emergency situation of a fire incident are extracted and a framework was developed based on them. Results show that the issues related to exit route and emergency evacuation can be chased in task oriented studies of way-finding. This research trend aims to access a high-level framework and in the best condition a theory that has an explanatory capability to define differences in way-finding in indoor/outdoor settings, complex/simple buildings and different building types or transitional spaces. The methodological advances demonstrate the evacuation way-finding researches in line with three approaches that the latter one is the most up-to-date and precise method to research this subject: real actors and hypothetical stimuli as in evacuation experiments, hypothetical actors and stimuli as in agent-based simulations and real actors and semi-real stimuli as in virtual reality environment by adding multi-sensory simulation. Findings on data-mining of 8 sample of original researches in way-finding in evacuation indicate that emergency way-finding design of a building should consider two level of space cognition problems in the time of emergency and performance consequences of them in the built environment. So four major classes of problems in way-finding which are visual information deficiency, confusing layout configuration, improper navigating signage and demographic issues had been defined and discussed as the main parameters that should be provided with solutions in design and interior of a building. In the design phase of complex buildings, which face more reported problem in way-finding, it is important to consider the interior components regarding to the building type of occupancy and behavior of its occupants and determine components that tend to become landmarks and set the architectural features of egress route in line with the directions that they navigate people. Research on topological cognition of environmental and its effect on way-finding task in emergency evacuation is proposed for future.Keywords: architectural design, egress route, way-finding, fire safety, evacuation
Procedia PDF Downloads 174
95 Development of Three-Dimensional Bio-Reactor Using Magnetic Field Stimulation to Enhance PC12 Cell Axonal Extension
Authors: Eiji Nakamachi, Ryota Sakiyama, Koji Yamamoto, Yusuke Morita, Hidetoshi Sakamoto
Abstract:
The regeneration of injured central nerve network caused by the cerebrovascular accidents is difficult, because of poor regeneration capability of central nerve system composed of the brain and the spinal cord. Recently, new regeneration methods such as transplant of nerve cells and supply of nerve nutritional factor were proposed and examined. However, there still remain many problems with the canceration of engrafted cells and so on and it is strongly required to establish an efficacious treating method of a central nerve system. Blackman proposed the electromagnetic stimulation method to enhance the axonal nerve extension. In this study, we try to design and fabricate a new three-dimensional (3D) bio-reactor, which can load a uniform AC magnetic field stimulation on PC12 cells in the extracellular environment for enhancement of an axonal nerve extension and 3D nerve network generation. Simultaneously, we measure the morphology of PC12 cell bodies, axons, and dendrites by the multiphoton excitation fluorescence microscope (MPM) and evaluate the effectiveness of the uniform AC magnetic stimulation to enhance the axonal nerve extension. Firstly, we designed and fabricated the uniform AC magnetic field stimulation bio-reactor. For the AC magnetic stimulation system, we used the laminated silicon steel sheets for a yoke structure of 3D chamber, which had a high magnetic permeability. Next, we adopted the pole piece structure and installed similar specification coils on both sides of the yoke. We searched an optimum pole piece structure using the magnetic field finite element (FE) analyses and the response surface methodology. We confirmed that the optimum 3D chamber structure showed a uniform magnetic flux density in the PC12 cell culture area by using FE analysis. Then, we fabricated the uniform AC magnetic field stimulation bio-reactor by adopting analytically determined specifications, such as the size of chamber and electromagnetic conditions. We confirmed that measurement results of magnetic field in the chamber showed a good agreement with FE results. Secondly, we fabricated a dish, which set inside the uniform AC magnetic field stimulation of bio-reactor. PC12 cells were disseminated with collagen gel and could be 3D cultured in the dish. The collagen gel were poured in the dish. The collagen gel, which had a disk shape of 6 mm diameter and 3mm height, was set on the membrane filter, which was located at 4 mm height from the bottom of dish. The disk was full filled with the culture medium inside the dish. Finally, we evaluated the effectiveness of the uniform AC magnetic field stimulation to enhance the nurve axonal extension. We confirmed that a 6.8 increase in the average axonal extension length of PC12 under the uniform AC magnetic field stimulation at 7 days culture in our bio-reactor, and a 24.7 increase in the maximum axonal extension length. Further, we confirmed that a 60 increase in the number of dendrites of PC12 under the uniform AC magnetic field stimulation. Finally, we confirm the availability of our uniform AC magnetic stimulation bio-reactor for the nerve axonal extension and the nerve network generation.Keywords: nerve regeneration, axonal extension , PC12 cell, magnetic field, three-dimensional bio-reactor
Procedia PDF Downloads 168
94 Flexural Performance of the Sandwich Structures Having Aluminum Foam Core with Different Thicknesses
Authors: Emre Kara, Ahmet Fatih Geylan, Kadir Koç, Şura Karakuzu, Metehan Demir, Halil Aykul
Abstract:
The structures obtained with the use of sandwich technologies combine low weight with high energy absorbing capacity and load carrying capacity. Hence, there is a growing and markedly interest in the use of sandwiches with aluminium foam core because of very good properties such as flexural rigidity and energy absorption capability. The static (bending and penetration) and dynamic (dynamic bending and low velocity impact) tests were already performed on the aluminum foam cored sandwiches with different types of outer skins by some of the authors. In the current investigation, the static three-point bending tests were carried out on the sandwiches with aluminum foam core and glass fiber reinforced polymer (GFRP) skins at different values of support span distances (L= 55, 70, 80, 125 mm) aiming the analyses of their flexural performance. The influence of the core thickness and the GFRP skin type was reported in terms of peak load, energy absorption capacity and energy efficiency. For this purpose, the skins with two different types of fabrics ([0°/90°] cross ply E-Glass Woven and [0°/90°] cross ply S-Glass Woven which have same thickness value of 1.5 mm) and the aluminum foam core with two different thicknesses (h=10 and 15 mm) were bonded with a commercial polyurethane based flexible adhesive in order to combine the composite sandwich panels. The GFRP skins fabricated via Vacuum Assisted Resin Transfer Molding (VARTM) technique used in the study can be easily bonded to the aluminum foam core and it is possible to configure the base materials (skin, adhesive and core), fiber angle orientation and number of layers for a specific application. The main results of the bending tests are: force-displacement curves, peak force values, absorbed energy, energy efficiency, collapse mechanisms and the effect of the support span length and core thickness. The results of the experimental study showed that the sandwich with the skins made of S-Glass Woven fabrics and with the thicker foam core presented higher mechanical values such as load carrying and energy absorption capacities. The increment of the support span distance generated the decrease of the mechanical values for each type of panels, as expected, because of the inverse proportion between the force and span length. The most common failure types of the sandwiches are debonding of the upper or lower skin and the core shear. The obtained results have particular importance for applications that require lightweight structures with a high capacity of energy dissipation, such as the transport industry (automotive, aerospace, shipbuilding and marine industry), where the problems of collision and crash have increased in the last years.Keywords: aluminum foam, composite panel, flexure, transport application
Procedia PDF Downloads 338
93 A Study of the Carbon Footprint from a Liquid Silicone Rubber Compounding Facility in Malaysia
Authors: Q. R. Cheah, Y. F. Tan
Abstract:
In modern times, the push for a low carbon footprint entails achieving carbon neutrality as a goal for future generations. One possible step towards carbon footprint reduction is the use of more durable materials with longer lifespans, for example silicone data cables, which show at least double the lifespan of similar plastic products. By having greater durability and longer lifespans, silicone data cables can reduce the amount of waste produced compared to plastics. Furthermore, silicone products do not produce micro-contamination harmful to the ocean. Every year the electronics industry produces an estimated 5 billion data cables for USB Type-C and Lightning connections for tablets and mobile phone devices. Material usage for the outer jacketing is 6 to 12 grams per meter. Tests show that, owing to its greater durability, the lifespan of a silicone data cable can be double that of a plastic one. This can save at least 40,000 tonnes of material a year on the outer jacketing of data cables alone. The facility in this study specialises in compounding liquid silicone rubber (LSR) material for the extrusion process that forms the jacketing of silicone data cables. This study analyses the carbon emissions from the facility, which is presently capable of producing more than 1,000 tonnes of LSR annually. The study uses guidelines from the World Business Council for Sustainable Development (WBCSD) and the World Resources Institute (WRI) to define the scope boundaries. The scope of emissions is defined as: 1. emissions from operations owned or controlled by the reporting company; 2. emissions from the generation of purchased or acquired energy, such as electricity, steam, heating, or cooling, consumed by the reporting company; and 3. all other indirect emissions occurring in the value chain of the reporting company, including both upstream and downstream emissions. As the study is limited to the compounding facility, the system boundary according to the GHG Protocol is cradle-to-gate rather than cradle-to-grave. Malaysia's present electricity generation mix, in which natural gas and coal constitute the bulk of emissions, was also used. Calculations show that the LSR produced for silicone data cables with high fire-retardant capability has scope 1 emissions of 0.82 kg CO2/kg, scope 2 emissions of 0.87 kg CO2/kg, and scope 3 emissions of 2.76 kg CO2/kg, giving a total product carbon footprint of 4.45 kg CO2/kg. This cradle-to-gate product carbon footprint is comparable, per tonne of material, to industry figures and to plastic materials. Although the per-tonne emissions are comparable to plastic material, the greater durability and longer lifespan can significantly reduce the amount of LSR material used. Suggestions to reduce the calculated product carbon footprint within the scope of emissions involve: 1. incorporating the recycling of factory silicone waste into operations; 2. using green renewable energy for external electricity sources; and 3. sourcing eco-friendly raw materials with low GHG emissions.
Keywords: carbon footprint, liquid silicone rubber, silicone data cable, Malaysia facility
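The per-kilogram total reported above is a straight sum of the three scopes. The short sketch below, plain arithmetic with the figures taken from the abstract, reproduces that total and scales it to the facility's stated annual output (assumed here, for illustration, to run at exactly 1,000 tonnes of LSR per year).

```python
# Reproducing the total product carbon footprint from the scope figures above.
scope_1 = 0.82   # kg CO2 per kg LSR, direct emissions
scope_2 = 0.87   # kg CO2 per kg LSR, purchased electricity
scope_3 = 2.76   # kg CO2 per kg LSR, upstream/downstream value chain

total_per_kg = scope_1 + scope_2 + scope_3
print(f"total product carbon footprint: {total_per_kg:.2f} kg CO2/kg")  # 4.45

# Scaled to an assumed annual output of 1,000 tonnes of LSR.
annual_output_kg = 1_000 * 1_000
annual_emissions_t = total_per_kg * annual_output_kg / 1000
print(f"annual cradle-to-gate emissions: {annual_emissions_t:.0f} t CO2")  # ~4450 t
```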
Procedia PDF Downloads 96
92 Indoor Air Pollution and Reduced Lung Function in Biomass Exposed Women: A Cross Sectional Study in Pune District, India
Authors: Rasmila Kawan, Sanjay Juvekar, Sandeep Salvi, Gufran Beig, Rainer Sauerborn
Abstract:
Background: Indoor air pollution, especially from the use of biomass fuels, remains a potentially large global health threat. The inefficient use of such fuels in poorly ventilated conditions results in high levels of indoor air pollution, most seriously affecting women and young children. Objectives: The main aim of this study was to measure and compare the lung function of women exposed to biomass fuels and of those using LPG, and to relate it to indoor emissions, measured using a structured questionnaire, a spirometer and filter-based low-volume samplers, respectively. Methodology: This cross-sectional comparative study was conducted among women (aged > 18 years) living in rural villages of Pune district who had not been diagnosed with chronic pulmonary diseases or any other respiratory disease and had been using biomass fuels or LPG for cooking for a minimum period of 5 years. Data collection was done from April to June 2017, in the dry season. Spirometry was performed using the portable, battery-operated ultrasound Easy One spirometer (Spiro bank II, NDD Medical Technologies, Zurich, Switzerland) to determine lung function based on forced expiratory volume. The primary outcome variable was forced expiratory volume in 1 second (FEV1). The secondary outcome was chronic obstructive pulmonary disease (post-bronchodilator FEV1/forced vital capacity (FVC) < 70%) as defined by the Global Initiative for Chronic Obstructive Lung Disease. Potential confounders such as age, height, weight, smoking history, occupation and educational status were considered. Results: Preliminary results showed that the women using biomass fuels (FEV1/FVC = 85% ± 5.13) had comparatively reduced lung function compared with the LPG users (FEV1/FVC = 86.40% ± 5.32). The mean PM 2.5 mass concentration was 274.34 ± 314.90 in the biomass users' kitchens and 85.04 ± 97.82 in the LPG users' kitchens. The black carbon amount was higher for the biomass users (black carbon = 46.71 ± 46.59 µg/m³) than for the LPG users (black carbon = 11.08 ± 22.97 µg/m³). Most of the houses used a separate kitchen. Almost all the houses that used a clean fuel such as LPG had a minimal amount of particulate matter 2.5, which might be attributable to background pollution and cross-ventilation from houses using biomass fuels. Conclusions: There is therefore an urgent need to adopt various strategies to improve indoor air quality. Data on the current state of climate-active pollutant emissions from different stove designs are lacking, and the major deficiencies that need to be tackled must be identified. Moreover, the advancement of research tools, measuring techniques in particular, is critical for researchers in developing countries to improve their capability to study these emissions and address the growing climate change and public health concerns. Keywords: black carbon, biomass fuels, indoor air pollution, lung function, particulate matter
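A minimal sketch of the secondary outcome definition (post-bronchodilator FEV1/FVC below the 70% fixed ratio); the example volumes are invented for illustration, not study data.

```python
def copd_flag(fev1_l, fvc_l, threshold=0.70):
    """Return the FEV1/FVC ratio and whether it falls below the fixed-ratio criterion."""
    ratio = fev1_l / fvc_l
    return ratio, ratio < threshold

ratio, obstructed = copd_flag(fev1_l=2.1, fvc_l=2.6)   # hypothetical litre values
print(f"FEV1/FVC = {ratio:.0%}, airflow obstruction: {obstructed}")
```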
Procedia PDF Downloads 174
91 Clinical and Analytical Performance of Glial Fibrillary Acidic Protein and Ubiquitin C-Terminal Hydrolase L1 Biomarkers for Traumatic Brain Injury in the Alinity Traumatic Brain Injury Test
Authors: Raj Chandran, Saul Datwyler, Jaime Marino, Daniel West, Karla Grasso, Adam Buss, Hina Syed, Zina Al Sahouri, Jennifer Yen, Krista Caudle, Beth McQuiston
Abstract:
The Alinity i TBI test is Therapeutic Goods Administration (TGA) registered and is a panel of in vitro diagnostic chemiluminescent microparticle immunoassays for the measurement of glial fibrillary acidic protein (GFAP) and ubiquitin C-terminal hydrolase L1 (UCH-L1) in plasma and serum. The Alinity i TBI performance was evaluated in a multi-center pivotal study to demonstrate the capability to assist in determining the need for a CT scan of the head in adult subjects (age 18+) presenting with suspected mild TBI (traumatic brain injury) with a Glasgow Coma Scale score of 13 to 15. TBI has been recognized as an important cause of death and disability and is a growing public health problem. An estimated 69 million people globally experience a TBI annually1. Blood-based biomarkers such as glial fibrillary acidic protein (GFAP) and ubiquitin C-terminal hydrolase L1 (UCH-L1) have shown utility to predict acute traumatic intracranial injury on head CT scans after TBI. A pivotal study using prospectively collected archived (frozen) plasma specimens was conducted to establish the clinical performance of the TBI test on the Alinity i system. The specimens were originally collected in a prospective, multi-center clinical study. Testing of the specimens was performed at three clinical sites in the United States. Performance characteristics such as detection limits, imprecision, linearity, measuring interval, expected values, and interferences were established following Clinical and Laboratory Standards Institute (CLSI) guidance. Of the 1899 mild TBI subjects, 120 had positive head CT scan results; 116 of the 120 specimens had a positive TBI interpretation (Sensitivity 96.7%; 95% CI: 91.7%, 98.7%). Of the 1779 subjects with negative CT scan results, 713 had a negative TBI interpretation (Specificity 40.1%; 95% CI: 37.8, 42.4). The negative predictive value (NPV) of the test was 99.4% (713/717, 95% CI: 98.6%, 99.8%). The analytical measuring interval (AMI) extends from the limit of quantitation (LoQ) to the upper LoQ and is determined by the range that demonstrates acceptable performance for linearity, imprecision, and bias. The AMI is 6.1 to 42,000 pg/mL for GFAP and 26.3 to 25,000 pg/mL for UCH-L1. Overall, within-laboratory imprecision (20 day) ranged from 3.7 to 5.9% CV for GFAP and 3.0 to 6.0% CV for UCH-L1, when including lot and instrument variances. The Alinity i TBI clinical performance results demonstrated high sensitivity and high NPV, supporting the utility to assist in determining the need for a head CT scan in subjects presenting to the emergency department with suspected mild TBI. The GFAP and UCH-L1 assays show robust analytical performance across a broad concentration range of GFAP and UCH-L1 and may serve as a valuable tool to help evaluate TBI patients across the spectrum of mild to severe injury.Keywords: biomarker, diagnostic, neurology, TBI
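The reported clinical performance can be recomputed from the published counts; the small helper below is only an illustration, with the counts taken from the abstract.

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and negative predictive value from a 2x2 table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, npv

# CT-positive subjects: 120, of which 116 were TBI-test positive (4 missed).
# CT-negative subjects: 1779, of which 713 were TBI-test negative.
sens, spec, npv = diagnostic_metrics(tp=116, fn=4, tn=713, fp=1779 - 713)
print(f"Sensitivity {sens:.1%}, specificity {spec:.1%}, NPV {npv:.1%}")
```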
Procedia PDF Downloads 66
90 Unveiling Drought Dynamics in the Cuneo District, Italy: A Machine Learning-Enhanced Hydrological Modelling Approach
Authors: Mohammadamin Hashemi, Mohammadreza Kashizadeh
Abstract:
Droughts pose a significant threat to sustainable water resource management, agriculture, and socioeconomic sectors, particularly in the field of climate change. This study investigates drought simulation using rainfall-runoff modelling in the Cuneo district, Italy, over the past 60-year period. The study leverages the TUW model, a lumped conceptual rainfall-runoff model with a semi-distributed operation capability. Similar in structure to the widely used Hydrologiska Byråns Vattenbalansavdelning (HBV) model, the TUW model operates on daily timesteps for input and output data specific to each catchment. It incorporates essential routines for snow accumulation and melting, soil moisture storage, and streamflow generation. Multiple catchments' discharge data within the Cuneo district form the basis for thorough model calibration employing the Kling-Gupta Efficiency (KGE) metric. A crucial metric for reliable drought analysis is one that can accurately represent low-flow events during drought periods. This ensures that the model provides a realistic picture of water availability during these critical times. Subsequent validation of monthly discharge simulations thoroughly evaluates overall model performance. Beyond model development, the investigation delves into drought analysis using the robust Standardized Runoff Index (SRI). This index allows for precise characterization of drought occurrences within the study area. A meticulous comparison of observed and simulated discharge data is conducted, with particular focus on low-flow events that characterize droughts. Additionally, the study explores the complex interplay between land characteristics (e.g., soil type, vegetation cover) and climate variables (e.g., precipitation, temperature) that influence the severity and duration of hydrological droughts. The study's findings demonstrate successful calibration of the TUW model across most catchments, achieving commendable model efficiency. Comparative analysis between simulated and observed discharge data reveals significant agreement, especially during critical low-flow periods. This agreement is further supported by the Pareto coefficient, a statistical measure of goodness-of-fit. The drought analysis provides critical insights into the duration, intensity, and severity of drought events within the Cuneo district. This newfound understanding of spatial and temporal drought dynamics offers valuable information for water resource management strategies and drought mitigation efforts. This research deepens our understanding of drought dynamics in the Cuneo region. Future research directions include refining hydrological modelling techniques and exploring future drought projections under various climate change scenarios.Keywords: hydrologic extremes, hydrological drought, hydrological modelling, machine learning, rainfall-runoff modelling
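A minimal sketch of the Kling-Gupta Efficiency used as the calibration metric, in its common 2009 formulation; this is a generic implementation, not the TUW calibration code.

```python
import numpy as np

def kling_gupta_efficiency(sim, obs):
    """KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2), where r is the
    correlation between simulated and observed discharge, alpha the ratio of
    their standard deviations and beta the ratio of their means."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# Example with made-up monthly discharge series (m3/s)
obs = np.array([12.0, 10.5, 8.2, 5.1, 3.9, 4.4, 6.8, 9.7])
sim = np.array([11.4, 10.9, 7.8, 5.6, 3.5, 4.9, 6.1, 10.2])
print(f"KGE = {kling_gupta_efficiency(sim, obs):.3f}")
```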
Procedia PDF Downloads 41
89 Seismic Perimeter Surveillance System (Virtual Fence) for Threat Detection and Characterization Using Multiple ML Based Trained Models in Weighted Ensemble Voting
Authors: Vivek Mahadev, Manoj Kumar, Neelu Mathur, Brahm Dutt Pandey
Abstract:
Perimeter guarding and protection of critical installations require prompt intrusion detection and assessment to take effective countermeasures. Currently, visual and electronic surveillance are the primary methods used for perimeter guarding. These methods can be costly and complicated, requiring careful planning according to the location and terrain. Moreover, these methods often struggle to detect stealthy and camouflaged insurgents. The object of the present work is to devise a surveillance technique using seismic sensors that overcomes the limitations of existing systems. The aim is to improve intrusion detection, assessment, and characterization by utilizing seismic sensors. Most of the similar systems have only two types of intrusion detection capability viz., human or vehicle. In our work we could even categorize further to identify types of intrusion activity such as walking, running, group walking, fence jumping, tunnel digging and vehicular movements. A virtual fence of 60 meters at GCNEP, Bahadurgarh, Haryana, India, was created by installing four underground geophones at a distance of 15 meters each. The signals received from these geophones are then processed to find unique seismic signatures called features. Various feature optimization and selection methodologies, such as LightGBM, Boruta, Random Forest, Logistics, Recursive Feature Elimination, Chi-2 and Pearson Ratio were used to identify the best features for training the machine learning models. The trained models were developed using algorithms such as supervised support vector machine (SVM) classifier, kNN, Decision Tree, Logistic Regression, Naïve Bayes, and Artificial Neural Networks. These models were then used to predict the category of events, employing weighted ensemble voting to analyze and combine their results. The models were trained with 1940 training events and results were evaluated with 831 test events. It was observed that using the weighted ensemble voting increased the efficiency of predictions. In this study we successfully developed and deployed the virtual fence using geophones. Since these sensors are passive, do not radiate any energy and are installed underground, it is impossible for intruders to locate and nullify them. Their flexibility, quick and easy installation, low costs, hidden deployment and unattended surveillance make such systems especially suitable for critical installations and remote facilities with difficult terrain. This work demonstrates the potential of utilizing seismic sensors for creating better perimeter guarding and protection systems using multiple machine learning models in weighted ensemble voting. In this study the virtual fence achieved an intruder detection efficiency of over 97%.Keywords: geophone, seismic perimeter surveillance, machine learning, weighted ensemble method
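A minimal sketch of weighted soft voting over several of the named classifiers, using scikit-learn; the feature matrix, labels and per-model weights below are placeholders, since the paper derives its own weights and features from the geophone signatures.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# X: seismic features extracted from geophone signals, y: event class labels
# (walking, running, group walking, fence jumping, digging, vehicle).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(1940, 12)), rng.integers(0, 6, size=1940)  # synthetic stand-in data

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),
        ("knn", KNeighborsClassifier()),
        ("tree", DecisionTreeClassifier()),
        ("logreg", LogisticRegression(max_iter=1000)),
        ("nb", GaussianNB()),
    ],
    voting="soft",
    weights=[3, 2, 1, 2, 1],   # assumed per-model weights
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```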
Procedia PDF Downloads 78
88 Superparamagnetic Sensor with Lateral Flow Immunoassays as Platforms for Biomarker Quantification
Authors: M. Salvador, J. C. Martinez-Garcia, A. Moyano, M. C. Blanco-Lopez, M. Rivas
Abstract:
Biosensors play a crucial role in the detection of molecules nowadays due to their advantages of user-friendliness, high selectivity, the analysis in real time and in-situ applications. Among them, Lateral Flow Immunoassays (LFIAs) are presented among technologies for point-of-care bioassays with outstanding characteristics such as affordability, portability and low-cost. They have been widely used for the detection of a vast range of biomarkers, which do not only include proteins but also nucleic acids and even whole cells. Although the LFIA has traditionally been a positive/negative test, tremendous efforts are being done to add to the method the quantifying capability based on the combination of suitable labels and a proper sensor. One of the most successful approaches involves the use of magnetic sensors for detection of magnetic labels. Bringing together the required characteristics mentioned before, our research group has developed a biosensor to detect biomolecules. Superparamagnetic nanoparticles (SPNPs) together with LFIAs play the fundamental roles. SPMNPs are detected by their interaction with a high-frequency current flowing on a printed micro track. By means of the instant and proportional variation of the impedance of this track provoked by the presence of the SPNPs, quantitative and rapid measurement of the number of particles can be obtained. This way of detection requires no external magnetic field application, which reduces the device complexity. On the other hand, the major limitations of LFIAs are that they are only qualitative or semiquantitative when traditional gold or latex nanoparticles are used as color labels. Moreover, the necessity of always-constant ambient conditions to get reproducible results, the exclusive detection of the nanoparticles on the surface of the membrane, and the short durability of the signal are drawbacks that can be advantageously overcome with the design of magnetically labeled LFIAs. The approach followed was to coat the SPIONs with a specific monoclonal antibody which targets the protein under consideration by chemical bonds. Then, a sandwich-type immunoassay was prepared by printing onto the nitrocellulose membrane strip a second antibody against a different epitope of the protein (test line) and an IgG antibody (control line). When the sample flows along the strip, the SPION-labeled proteins are immobilized at the test line, which provides magnetic signal as described before. Preliminary results using this practical combination for the detection and quantification of the Prostatic-Specific Antigen (PSA) shows the validity and consistency of the technique in the clinical range, where a PSA level of 4.0 ng/mL is the established upper normal limit. Moreover, a LOD of 0.25 ng/mL was calculated with a confident level of 3 according to the IUPAC Gold Book definition. Its versatility has also been proved with the detection of other biomolecules such as troponin I (cardiac injury biomarker) or histamine.Keywords: biosensor, lateral flow immunoassays, point-of-care devices, superparamagnetic nanoparticles
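A minimal sketch of the IUPAC-style detection limit (three times the standard deviation of the blank divided by the calibration slope); the blank readings and slope below are invented for illustration only.

```python
import numpy as np

def limit_of_detection(blank_signals, slope):
    """LOD = 3 * standard deviation of blank measurements / calibration slope."""
    return 3.0 * np.std(blank_signals, ddof=1) / slope

blanks = [0.8, 1.1, 0.9, 1.0, 1.2, 0.95]   # hypothetical blank readouts (a.u.)
calibration_slope = 1.7                     # hypothetical signal per ng/mL of PSA
print(f"LOD ~= {limit_of_detection(blanks, calibration_slope):.2f} ng/mL")
```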
Procedia PDF Downloads 231
87 Cognitive Radio in Aeronautic: Comparison of Some Spectrum Sensing Technics
Authors: Abdelkhalek Bouchikhi, Elyes Benmokhtar, Sebastien Saletzki
Abstract:
The aeronautical field is experiencing RF spectrum congestion due to the constant increase in the number of flights, aircraft and on-board telecom systems. In addition, these systems are bulky in size, weight and energy consumption. Cognitive radio helps solve the spectrum congestion issue in particular through its capacity to detect idle frequency channels, allowing opportunistic exploitation of the RF spectrum. The present work aims to propose a new use case for aeronautical spectrum sharing and to study the performance of three different detection techniques within that use case: the energy detector, the matched filter and the cyclostationary detector. The spectrum in the proposed cognitive radio is allocated dynamically, and each cognitive radio follows a cognitive cycle. Spectrum sensing is a crucial step; its goal is to gather data about the surrounding environment. A cognitive radio can use different sensors: antennas, cameras, accelerometers, thermometers, etc. In the IEEE 802.22 standard, for example, a primary user (PU) always has the priority to communicate. When a frequency channel used by the primary user is idle, the secondary user (SU) is allowed to transmit in this channel. The Distance Measuring Equipment (DME) is composed of a UHF transmitter/receiver (interrogator) in the aircraft and a UHF receiver/transmitter on the ground, while the future cognitive radio will be used jointly with it to alleviate the spectrum congestion issue in the aeronautical field. LDACS, for example, is a good candidate; it provides two isolated data links: ground-to-air and air-to-ground. The first contribution of the present work is a strategy for sharing the L-band. The adopted spectrum sharing strategy is as follows: the DME plays the role of the PU, which is the licensed user, and the LDACS1 systems are the SUs. The SUs may use the L-band channels opportunistically as long as they do not cause harmful interference affecting the QoS of the DME system. Spectrum sensing is a key step: it helps detect spectrum holes by determining whether the primary signal is present or not in a given frequency channel. A missed detection of the primary user's presence creates interference between PU and SU and seriously affects the QoS of the legacy radio. In this study, brief definitions, concepts and the state of the art of cognitive radio are first presented. Then, a study of three communication channel detection algorithms in a cognitive radio context is carried out from the point of view of functions, material requirements and signal detection capability in the aeronautical field. The detection problem is modeled with the three methods (energy, matched filter, and cyclostationary), and an algorithmic description of these detectors is given. Finally, the performance of the algorithms is studied and compared. Simulations were carried out using MATLAB software, and the results were analyzed based on ROC curves for SNR between -10 dB and 20 dB. The three detectors were tested with both synthetic and real-world signals. Keywords: aeronautic, communication, navigation, surveillance systems, cognitive radio, spectrum sensing, software defined radio
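As one example of the three detectors compared, a minimal sketch of block energy detection with a threshold fixed by the target false-alarm probability is given below (in Python rather than the MATLAB used in the study); the sample size, SNR and Monte Carlo loop are illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2

def energy_detector(samples, noise_power, pfa=0.1):
    """Block energy detection on complex baseband samples of one channel.

    Under the noise-only hypothesis the scaled energy follows a chi-square
    distribution with 2N degrees of freedom, so the threshold fixes the
    false-alarm probability pfa."""
    n = len(samples)
    test_stat = 2.0 * np.sum(np.abs(samples) ** 2) / noise_power
    threshold = chi2.isf(pfa, df=2 * n)
    return test_stat > threshold

# Illustrative Monte Carlo estimate of one ROC point (synthetic tone in noise)
rng = np.random.default_rng(0)
n, snr_db, noise_power, trials = 256, -5.0, 1.0, 2000
signal_power = noise_power * 10 ** (snr_db / 10)
detections = 0
for _ in range(trials):
    noise = np.sqrt(noise_power / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    tone = np.sqrt(signal_power) * np.exp(2j * np.pi * 0.1 * np.arange(n))
    detections += energy_detector(tone + noise, noise_power)
print("Estimated Pd at Pfa = 0.1, SNR = -5 dB:", detections / trials)
```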
Procedia PDF Downloads 174
86 Adapting an Accurate Reverse-time Migration Method to USCT Imaging
Authors: Brayden Mi
Abstract:
Reverse time migration has been widely used in the petroleum exploration industry to reveal subsurface images and to detect rock and fluid properties since the early 1980s. The seismic technology involves the construction of a velocity model through interpretive model building, seismic tomography, or full waveform inversion, and the reverse-time propagation of the acquired seismic data together with the original wavelet used in the acquisition. The methodology has matured from 2D imaging of simple media to handling present-day full 3D imaging challenges in extremely complex geological conditions. Conventional ultrasound computed tomography (USCT) utilizes travel-time inversion to reconstruct the velocity structure of an organ. With that velocity structure, USCT data can be migrated with the "bent-ray" method; its seismic counterpart is Kirchhoff depth migration, in which the source of reflective energy is traced by ray tracing and summed to produce a subsurface image. It is well known that ray-tracing-based migration has severe limitations in strongly heterogeneous media and irregular acquisition geometries. Reverse time migration (RTM), on the other hand, fully accounts for the wave phenomena, including multiple arrivals and turning rays due to complex velocity structure. It has the capability to fully reconstruct the image detectable within its acquisition aperture. RTM algorithms typically require a rather accurate velocity model and demand high computing power, and may not be applicable to real-time imaging as normally required in day-to-day medical operations. However, with the improvement of computing technology, this computational bottleneck may not present a challenge in the near future. Present-day RTM algorithms are typically implemented from a flat datum for the seismic industry. They can be modified to accommodate any acquisition geometry and aperture, as long as sufficient illumination is provided. This flexibility of RTM can be conveniently exploited for USCT imaging if the spatial coordinates of the transmitters and receivers are known and enough data are collected to provide full illumination. This paper proposes an implementation of a full 3D RTM algorithm for USCT imaging to produce an accurate 3D acoustic image, based on the phase-shift-plus-interpolation (PSPI) method for wavefield extrapolation. In this method, each acquired data set (shot) is propagated back in time, and a known ultrasound wavelet is propagated forward in time, using PSPI wavefield extrapolation and a piecewise-constant velocity model of the organ (breast). The imaging condition is then applied to produce a partial image. Although each image is subject to the limitation of its own illumination aperture, the stack of multiple partial images produces a full image of the organ, with a much-reduced noise level compared with the individual partial images. Keywords: illumination, reverse time migration (RTM), ultrasound computed tomography (USCT), wavefield extrapolation
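A minimal sketch of the zero-lag cross-correlation imaging condition and the stacking of partial shot images described above; the wavefield extrapolation itself (e.g., PSPI) is assumed to have been done elsewhere, and the optional source-illumination normalisation is an added assumption, not necessarily the authors' choice.

```python
import numpy as np

def zero_lag_image(source_wf, receiver_wf, eps=1e-12):
    """Cross-correlation imaging condition for one shot.

    source_wf, receiver_wf: arrays of shape (nt, nz, nx) holding the forward-
    propagated source wavelet and the back-propagated receiver data at matching
    time steps. Division by the source illumination is an optional normalisation."""
    image = np.sum(source_wf * receiver_wf, axis=0)
    illum = np.sum(source_wf ** 2, axis=0) + eps
    return image / illum

def stack_shots(shot_images):
    """Partial images from individual shots are summed to build the full image."""
    return np.sum(shot_images, axis=0)
```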
Procedia PDF Downloads 74
85 An eHealth Intervention Using Accelerometer- Smart Phone-App Technology to Promote Physical Activity and Health among Employees in a Military Setting
Authors: Emilia Pietiläinen, Heikki Kyröläinen, Tommi Vasankari, Matti Santtila, Tiina Luukkaala, Kai Parkkola
Abstract:
Working in the military sets special demands on physical fitness, however, reduced physical activity levels among employees in the Finnish Defence Forces (FDF), a trend also being seen among the working-age population in Finland, is leading to reduced physical fitness levels and increased risk of cardiovascular and metabolic diseases, something which also increases human resource costs. Therefore, the aim of the present study was to develop an eHealth intervention using accelerometer- smartphone app feedback technique, telephone counseling and physical activity recordings to increase physical activity of the personnel and thereby improve their health. Specific aims were to reduce stress, improve quality of sleep and mental and physical performance, ability to work and reduce sick leave absences. Employees from six military brigades around Finland were invited to participate in the study, and finally, 260 voluntary participants were included (66 women, 194 men). The participants were randomized into intervention (156) and control groups (104). The eHealth intervention group used accelerometers measuring daily physical activity and duration and quality of sleep for six months. The accelerometers transmitted the data to smartphone apps while giving feedback about daily physical activity and sleep. The intervention group participants were also encouraged to exercise for two hours a week during working hours, a benefit that was already offered to employees following existing FDF guidelines. To separate the exercise done during working hours from the accelerometer data, the intervention group marked this exercise into an exercise diary. The intervention group also participated in telephone counseling about their physical activity. On the other hand, the control group participants continued with their normal exercise routine without the accelerometer and feedback. They could utilize the benefit of being able to exercise during working hours, but they were not separately encouraged for it, nor was the exercise diary used. The participants were measured at baseline, after the entire intervention period, and six months after the end of the entire intervention. The measurements included accelerometer recordings, biochemical laboratory tests, body composition measurements, physical fitness tests, and a wide questionnaire focusing on sociodemographic factors, physical activity and health. In terms of results, the primary indicators of effectiveness are increased physical activity and fitness, improved health status, and reduced sick leave absences. The evaluation of the present scientific reach is based on the data collected during the baseline measurements. Maintenance of the studied outcomes is assessed by comparing the results of the control group measured at the baseline and a year follow-up. Results of the study are not yet available but will be presented at the conference. The present findings will help to develop an easy and cost-effective model to support the health and working capability of employees in the military and other workplaces.Keywords: accelerometer, health, mobile applications, physical activity, physical performance
Procedia PDF Downloads 196
84 The Origins of Representations: Cognitive and Brain Development
Authors: Athanasios Raftopoulos
Abstract:
In this paper, an attempt is made to explain the evolution or development of human’s representational arsenal from its humble beginnings to its modern abstract symbols. Representations are physical entities that represent something else. To represent a thing (in a general sense of “thing”) means to use in the mind or in an external medium a sign that stands for it. The sign can be used as a proxy of the represented thing when the thing is absent. Representations come in many varieties, from signs that perceptually resemble their representative to abstract symbols that are related to their representata through conventions. Relying the distinction among indices, icons, and symbols, it is explained how symbolic representations gradually emerged from indices and icons. To understand the development or evolution of our representational arsenal, the development of the cognitive capacities that enabled the gradual emergence of representations of increasing complexity and expressive capability should be examined. The examination of these factors should rely on a careful assessment of the available empirical neuroscientific and paleo-anthropological evidence. These pieces of evidence should be synthesized to produce arguments whose conclusions provide clues concerning the developmental process of our representational capabilities. The analysis of the empirical findings in this paper shows that Homo Erectus was able to use both icons and symbols. Icons were used as external representations, while symbols were used in language. The first step in the emergence of representations is that a sensory-motor purely causal schema involved in indices is decoupled from its normal causal sensory-motor functions and serves as a representation of the object that initially called it into play. Sensory-motor schemes are tied to specific contexts of the organism-environment interactions and are activated only within these contexts. For a representation of an object to be possible, this scheme must be de-contextualized so that the same object can be represented in different contexts; a decoupled schema loses its direct ties to reality and becomes mental content. The analysis suggests that symbols emerged due to selection pressures of the social environment. The need to establish and maintain social relationships in ever-enlarging groups that would benefit the group was a sufficient environmental pressure to lead to the appearance of the symbolic capacity. Symbols could serve this need because they can express abstract relationships, such as marriage or monogamy. Icons, by being firmly attached to what can be observed, could not go beyond surface properties to express abstract relations. The cognitive capacities that are required for having iconic and then symbolic representations were present in Homo Erectus, which had a language that started without syntactic rules but was structured so as to mirror the structure of the world. This language became increasingly complex, and grammatical rules started to appear to allow for the construction of more complex expressions required to keep up with the increasing complexity of social niches. This created evolutionary pressures that eventually led to increasing cranial size and restructuring of the brain that allowed more complex representational systems to emerge.Keywords: mental representations, iconic representations, symbols, human evolution
Procedia PDF Downloads 57
83 Geomechanics Properties of Tuzluca (Eastern. Turkey) Bedded Rock Salt and Geotechnical Safety
Authors: Mehmet Salih Bayraktutan
Abstract:
The geomechanical properties of the rock salt deposits in the Tuzluca Salt Mine area (Eastern Turkey) are studied for modeling the operation and excavation strategy. The purpose of this research is to calculate the critical span height that will meet the safety requirements. The mine site, the Tuzluca Hills, consists of alternating parallel beds of salt (NaCl) and gypsum (CaSO4·2H2O). The rock salt beds are more resistant than the narrow gypsum interlayers and form almost 97 percent of the total height of the hill; therefore, the geotechnical safety of the galleries depends on the mechanical criteria of the rock salt cores. The deposition of the Tuzluca Basin was completed by the Tuzluca Evaporites as the uppermost stratigraphic unit. Mining operations are currently run by classic mechanical excavation using the room-and-pillar method, and rooms and pillars are experiencing an initial stage of fracturing in places. The geotechnical safety of the whole mining area is evaluated by Rock Mass Rating (RMR), Rock Quality Designation (RQD), spacing of joints, and the interaction of groundwater with the fracture system. In general, bedded rock salt shows a large lateral deformation capacity, while the deformation modulus stays at relatively small values (here E = 9.86 GPa). In such litho-stratigraphic environments, creep is a critical mechanism in failure. The steady-state creep rate of rock salt is greater than that of the interbedded layers. Under long-lasting compressive stresses, creep may cause shear displacements, partly along bedding planes, and steady-state creep eventually returns to accelerated stages. Uniaxial compression creep tests on specimens were performed to obtain an idea of rock salt strength. To give an idea, on rock salt cores the average axial strength and strain were found to be 18-24 MPa and 0.43-0.45 %, respectively, and the uniaxial compressive strength of bedded rock salt cores is 26-32 MPa. The elastic modulus is comparatively low, but the lateral deformation of the rock salt is high under the uniaxial compression stress state: Poisson ratio = 0.44, break load = 156 kN, cohesion c = 12.8 kg/cm2, specific gravity SG = 2.17 g/cm3. The fracture system, i.e., the spacing of fractures, joints, faults and offsets, is evaluated under the acting geodynamic mechanism. Two sand beds, each 4-6 m thick, exist near the upper level and at the top of the evaporite sequence. They act as aquifers and keep infiltrated water on top for a long duration, which may result in the failure of roofs or pillars. Two major active seismic fault planes (striking N30W and N70E) and parallel fracture strands have seismically triggered a moderate risk of structural deformation of the rock salt bedding sequence. Earthquakes and floods are the two prevailing sources of geohazards in this region; the seismotectonic activity of the mine site is based on the crossing framework of the Kagizman and Igdir Faults. Dominant hazard risk sources include a) the weak mechanical properties and creep of the rock salt, gypsum and anhydrite beds, b) physical discontinuities cutting across the thick parallel layers of the evaporite mass, and c) intercalated beds of weakly cemented or loose sand and clayey-sandy sediments. On the other hand, the absorption of seismic wave amplitudes by the parallel-bedded salt-gypsum deposits has a reducing effect on the rock mass. Keywords: bedded rock salt, creep, failure mechanism, geotechnical safety
Procedia PDF Downloads 190
82 Electromagnetic Simulation Based on Drift and Diffusion Currents for Real-Time Systems
Authors: Alexander Norbach
Abstract:
This paper describes the use of an advanced simulation environment for electronic systems (microcontrollers, operational amplifiers, and FPGAs). The simulation may be used for all dynamic systems, including those with diffusion and ionisation behaviour. With an additionally required observer structure, the system performs parallel real-time simulation based on a diffusion model and on the state-space representation for the remaining dynamics. The proposed model may be used for electrodynamic effects, including ionising effects and eddy current distribution. With the proposed method, it is possible to calculate the spatial distribution of the electromagnetic fields in real time; the spatial temperature distribution may be obtained as well. With this system, uncertainties, unknown initial states and disturbances may be determined. This provides an estimation of more precise system states for the system under consideration and, additionally, an estimation of the ionising disturbances that occur due to radiation effects. The results have shown that a system can also be developed and adapted specifically for space systems, with real-time calculation of the radiation effects alone. Electronic systems can be damaged by impacts with charged particle flux in space or in a radiation environment. In order to react to these processes, the presence of ionising radiation and the dose must be calculated within a short time. All available sensors shall be used to observe the spatial distributions; from the measured values and the known locations of the sensors, the entire distribution can be calculated retroactively or more accurately. With information on the type of ionisation and its direct effect on the systems, possible preventive processes can be activated, up to and including shutdown. The results show the possibility of performing more qualitative and faster simulations independent of the kind of system, including space systems and radiation environments. The paper additionally gives an overview of the diffusion effects and their mechanisms. For the modelling and derivation of the equations, the extended current equation is used; the quantity K represents the proposed charge density drift vector. The extended diffusion equation was derived; it shows a quantising character and has a form similar to the Klein-Gordon equation. These kinds of PDEs (partial differential equations) are analytically solvable given an initial distribution (Cauchy problem) and boundary conditions (Dirichlet boundary condition). For a simpler structure, transfer functions for the B- and E-fields were calculated analytically. With the known discretised responses g₁(k·Ts) and g₂(k·Ts), the electric current or voltage may be calculated using a convolution, where g₁ is the direct function and g₂ is a recursive function. The analytical results are good enough for the calculation of fields with diffusion effects. Within the scope of this work, a model for the consideration of the electromagnetic diffusion effects of arbitrary current waveforms has been developed. The advantage of the proposed calculation of diffusion is its real-time capability, which is not really possible with the FEM programs available today. In the further course of research, it makes sense to use these methods and to investigate them thoroughly. Keywords: advanced observer, electrodynamics, systems, diffusion, partial differential equations, solver
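A minimal sketch of the convolution step with a direct response g₁ and a recursive response g₂, read here as a FIR-plus-IIR difference equation; this reading and the coefficient values are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def field_response(u, g1, g2):
    """Discrete response y[k] of a field quantity (current or voltage) to an
    excitation sequence u[k]: a direct convolution with g1 plus a recursion
    over past outputs weighted by g2."""
    y = np.zeros(len(u))
    for k in range(len(u)):
        direct = sum(g1[i] * u[k - i] for i in range(min(k + 1, len(g1))))
        recursive = sum(g2[j - 1] * y[k - j] for j in range(1, min(k, len(g2)) + 1))
        y[k] = direct + recursive
    return y

# Example: step excitation with short, made-up response sequences
u = np.ones(50)
g1 = [0.20, 0.10, 0.05]      # direct (convolution) part
g2 = [0.60, -0.10]           # recursive part acting on y[k-1], y[k-2]
print(field_response(u, g1, g2)[:10])
```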
Procedia PDF Downloads 130
81 Drivetrain Comparison and Selection Approach for Armored Wheeled Hybrid Vehicles
Authors: Çağrı Bekir Baysal, Göktuğ Burak Çalık
Abstract:
Armored vehicles may have different traction layouts as a result of terrain capabilities and mobility needs. Two main categories of layouts can be separated as wheeled and tracked. Tracked vehicles have superior off-road capabilities but what they gain on terrain performance they lose on mobility front. Wheeled vehicles on the other hand do not have as good terrain capabilities as tracked vehicles but they have superior mobility capabilities such as top speed, range and agility with respect to tracked vehicles. Conventional armored vehicles employ a diesel ICE as main power source. In these vehicles ICE is mechanically connected to the powertrain. This determines the ICE rpm as a result of speed and torque requested by the driver. ICE efficiency changes drastically with torque and speed required and conventional vehicles suffer in terms of fuel consumption because of this. Hybrid electric vehicles employ at least one electric motor in order to improve fuel efficiency. There are different types of hybrid vehicles but main types are Series Hybrid, Parallel Hybrid and Series-Parallel Hybrid. These vehicles introduce an electric motor for traction and also can have a generator electric motor for range extending purposes. Having an electric motor as the traction power source brings the flexibility of either using the ICE as an alternative traction source while it is in efficient range or completely separating the ICE from traction and using it solely considering efficiency. Hybrid configurations have additional advantages for armored vehicles in addition to fuel efficiency. Heat signature, silent operation and prolonged stationary missions can be possible with the help of the high-power battery pack that will be present in the vehicle for hybrid drivetrain. Because of the reasons explained, hybrid armored vehicles are becoming a target area for military and also for vehicle suppliers. In order to have a better idea and starting point when starting a hybrid armored vehicle design, hybrid drivetrain configuration has to be selected after performing a trade-off study. This study has to include vehicle mobility simulations, integration level, vehicle level and performance level criteria. In this study different hybrid traction configurations possible for an 8x8 vehicle is compared using above mentioned criteria set. In order to compare hybrid traction configurations ease of application, cost, weight advantage, reliability, maintainability, redundancy and performance criteria have been used. Performance criteria points have been defined with the help of vehicle simulations and tests. Results of these simulations and tests also help determining required tractive power for an armored vehicle including conditions like trench and obstacle crossing, gradient climb. With the method explained in this study, each configuration is assigned a point for each criterion. This way, correct configuration can be selected objectively for every application. Also, key aspects of armored vehicles, mine protection and ballistic protection will be considered for hybrid configurations. Results are expected to vary for different types of vehicles but it is observed that having longitudinal differential locking capability improves mobility and having high motor count increases complexity in general.Keywords: armored vehicles, electric drivetrain, electric mobility, hybrid vehicles
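A minimal sketch of the weighted criteria scoring described above; all criteria weights and 1-5 scores are hypothetical placeholders, not values from the study.

```python
criteria_weights = {
    "ease_of_application": 0.10, "cost": 0.15, "weight": 0.15, "reliability": 0.15,
    "maintainability": 0.10, "redundancy": 0.10, "performance": 0.25,
}
configurations = {
    "series_hybrid":          {"ease_of_application": 4, "cost": 3, "weight": 3, "reliability": 4,
                               "maintainability": 4, "redundancy": 5, "performance": 4},
    "parallel_hybrid":        {"ease_of_application": 3, "cost": 4, "weight": 4, "reliability": 3,
                               "maintainability": 3, "redundancy": 3, "performance": 3},
    "series_parallel_hybrid": {"ease_of_application": 2, "cost": 2, "weight": 3, "reliability": 3,
                               "maintainability": 2, "redundancy": 4, "performance": 5},
}

def rank_configurations(configs, weights):
    """Weighted sum of per-criterion scores, sorted best first."""
    totals = {name: sum(weights[c] * s for c, s in points.items())
              for name, points in configs.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for name, score in rank_configurations(configurations, criteria_weights):
    print(f"{name}: {score:.2f}")
```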
Procedia PDF Downloads 86