Search results for: GPS based Robotic library


20896 A Spatial Hypergraph Based Semi-Supervised Band Selection Method for Hyperspectral Imagery Semantic Interpretation

Authors: Akrem Sellami, Imed Riadh Farah

Abstract:

Hyperspectral imagery (HSI) typically provides a wealth of information captured over a wide range of the electromagnetic spectrum for each pixel in the image. Hence, a pixel in HSI is a high-dimensional vector of intensities with a large spectral range and a high spectral resolution, and semantic interpretation is therefore a challenging task of HSI analysis. In this paper, we focus on object classification as HSI semantic interpretation. HSI classification still faces several issues, among which are the spatial variability of spectral signatures, the high number of spectral bands, and the high cost of true sample labeling. The high number of spectral bands combined with the low number of training samples poses the problem of the curse of dimensionality. To address this problem, we introduce a dimensionality reduction step aimed at improving the classification of HSI. The presented approach is a semi-supervised band selection method based on a spatial hypergraph embedding model that represents higher-order relationships with different weights for the spatial neighbors of each centroid pixel. This semi-supervised band selection is designed to select bands that are useful for object classification. The approach is evaluated on AVIRIS and ROSIS HSIs and compared to other dimensionality reduction methods. The experimental results demonstrate its efficacy relative to many existing dimensionality reduction methods for HSI classification.

Keywords: dimensionality reduction, hyperspectral image, semantic interpretation, spatial hypergraph

Procedia PDF Downloads 306
20895 Interlayer-Mechanical Working: Effective Strategy to Mitigate Solidification Cracking in Wire-Arc Additive Manufacturing (WAAM) of Fe-based Shape Memory Alloy

Authors: Soumyajit Koley, Kuladeep Rajamudili, Supriyo Ganguly

Abstract:

In recent years, iron-based shape-memory alloys have been emerging as an inexpensive alternative to the costly Ni-Ti alloy and are thus considered suitable for many different applications in civil structures. The Fe-17Mn-10Cr-5Si-4Ni-0.5V-0.5C alloy contains 37 wt.% of total solute elements. Such a complex multi-component metallurgical system often leads to severe solute segregation and solidification cracking. Wire-arc additive manufacturing (WAAM) of the Fe-17Mn-10Cr-5Si-4Ni-0.5V-0.5C alloy was attempted using a cold-wire-fed plasma arc torch attached to a 6-axis robot. Self-standing walls were manufactured; however, multiple vertical cracks were observed after deposition of around 15 layers. Microstructural characterization revealed open dendrite surfaces inside the cracks, confirming them as solidification cracks. A machine hammer peening (MHP) process was adopted on each layer to cold work the newly deposited alloy. The MHP traverse speed was varied systematically to attain an operating window in which cracking was completely suppressed. Microstructural and textural analyses were further carried out to correlate the peening process with the microstructure. MHP helped in several ways. Firstly, a compressive residual stress was induced on each layer, which countered the tensile residual stress evolved from the solidification process, thus reducing the net tensile stress on the wall along its length. Secondly, significant local plastic deformation from MHP, followed by the thermal cycle induced by deposition of the next layer, resulted in a recovered and recrystallized equiaxed microstructure instead of long columnar grains along the vertical direction. This microstructural change increased the total crack propagation length and thus the overall toughness. Thirdly, the inter-layer peening significantly reduced the strong cubic {001} crystallographic texture formed along the build direction. A cubic {001} texture promotes easy separation of planes and easy crack propagation; thus, reducing the cubic texture lowers the likelihood of cracking.

Keywords: iron-based shape-memory alloy, wire-arc additive manufacturing, solidification cracking, inter-layer cold working, machine hammer peening

Procedia PDF Downloads 72
20894 Significant Aspects and Drivers of Germany and Australia's Energy Policy from a Political Economy Perspective

Authors: Sarah Niklas, Lynne Chester, Mark Diesendorf

Abstract:

Geopolitical tensions, climate change and recent movements favouring a transformative shift in institutional power structures have influenced the economics of conventional energy supply for decades. This study takes a multi-dimensional approach to illustrate the potential of renewable energy (RE) technology to provide a pathway to a low-carbon economy driven by ecologically sustainable, independent and socially just energy. This comparative analysis identifies the economic, political and social drivers that shaped the adoption of RE policy in two significantly different economies, Germany and Australia, with strong and weak commitments to RE respectively. Two complementary political-economy theories frame the document-based analysis. Régulation Theory, inspired by Marxist ideas and strongly influenced by contemporary economic problems, provides the background to explore the social relationships contributing to the adoption of RE within the macro-economy. Varieties of Capitalism theory, a more recently developed micro-economic approach, examines the nature of state-firm relationships. Together these approaches provide a comprehensive lens of analysis. Germany’s energy policy transformed substantially over the second half of the last century. The development is characterised by the coordination of societal, environmental and industrial demands throughout the advancement of capitalist regimes. In the Fordist regime, mass production based on coal drove Germany’s astounding economic recovery during the post-war period. Economic depression and the instability of institutional arrangements necessitated the urgent pursuit of national security and energy independence. During the post-war Flexi-Fordist period, quality-based production, innovation and technology-based competition schemes, particularly with regard to political power structures in and across Europe, favoured the adoption of RE. Innovation, knowledge and education were institutionalised, leading to the legislation of environmental concerns. Lastly, the establishment of government-industry coordination programs supported the phase-out of nuclear power and the increased adoption of RE during the last decade. Australia’s energy policy is shaped by the country’s richness in mineral resources. Energy policy has largely served coal mining, historically and currently one of the most capital-intensive industries. Assisted by the macro-economic dimensions of institutional arrangements, social and financial capital is oriented towards the export-led and strongly demand-oriented economy. Here, energy policy serves the maintenance of capital accumulation in the mining sector and the emerging Asian economies. The adoption of supportive renewable energy policy would challenge the distinct role of the mining industry within the (neo)liberal market economy. The state’s protective role towards the mining sector has resulted in weak commitment to RE policy and investment uncertainty in the energy sector. Recent developments, driven by strong public support for RE, emphasise the sense of community in urban and rural areas and the emergence of a bottom-up approach to adopting renewables. Thus, political economy frameworks at both the macro-economic (Régulation Theory) and micro-economic (Varieties of Capitalism theory) scales can together explain the strong commitment to RE in Germany vis-à-vis the weak commitment in Australia.

Keywords: political economy, regulation theory, renewable energy, social relationships, energy transitions

Procedia PDF Downloads 381
20893 Investigating the Impact of Job-Related and Organisational Factors on Employee Engagement: An Emotionally Relevant Approach Based on Psychological Climate and Organisational Emotional Intelligence (OEI)

Authors: Nuno Da Camara, Victor Dulewicz, Malcolm Higgs

Abstract:

Although theorists have described the critical role of emotional cognition of the workplace environment as an antecedent to employee engagement, empirical research on the impact of emotional cognition on employee engagement is limited. Previous researchers have typically provided evidence of the link between emotional cognition of the workplace environment and workplace attitudes such as job satisfaction and organisational commitment. This study therefore aims to investigate the impact of emotional cognition of the job, role, leader and organisation domains of the work environment, as represented by measures of psychological climate and organisational emotional intelligence (OEI), on employee engagement. The research is based on a quantitative cross-sectional survey of employees in a UK charity organisation (n=174). The research instruments applied include the psychological climate scale, the organisational emotional intelligence questionnaire (OEIQ) and the Utrecht Work Engagement Scale (UWES). The data were analysed using hierarchical regression and partial least squares (PLS) analytical techniques. The results show that both psychological climate and OEI, which represent emotional cognition of the job, role, leader and organisation domains in the workplace, are significant drivers of employee engagement. In particular, the study found that a sense of contribution and challenge at work are the strongest drivers of vigour, dedication and absorption, and it highlights the importance of emotionally relevant approaches in furthering our understanding of workplace engagement.

Keywords: employee engagement, organisational emotional intelligence, psychological climate, workplace attitudes

Procedia PDF Downloads 505
20892 The Role of Strategic Alliances, Innovation Capability, Cost Reduction in Enhancing Customer Loyalty and Firm’s Competitive Advantage

Authors: Soebowo Musa

Abstract:

Mining industries are known to be very volatile due to their sensitivity toward changes in the environment, particularly coal mining. Heavy equipment distributors and coal mining contractors are among those most heavily affected by such volatility, and they face increasing uncertainty about the sustainability of the coal mining industry. Strategic alliances and organizational capabilities such as innovation capability have long been seen as ways to stay competitive, with the focus mostly on partner-to-partner strategic alliances in serving their customers. Given today’s rapidly changing environment, shifting consumer behaviors, and the human-centric business approach, this study examines the partner-to-customer strategic alliance relationship through both industrial organization and resource-based theories. The study was conducted with 250 respondents from partner-to-customer strategic alliances between heavy equipment distributors and coal mining contractors in Indonesia. It finds that strategic alliances have the strongest association with cost reduction, a proxy for operational efficiency, followed by their association with innovation capability. Further, strategic alliances and innovation capability have a positive relationship with customer loyalty, while innovation capability and customer loyalty have no significant relationship with the firm’s competitive advantage. The study also indicates that cost reduction is not a precondition for developing customer loyalty in the partner-to-customer strategic alliance relationship. It confirms that strategic alliances are a strategy that creates a firm’s operational efficiency, an innovation capability that develops customer loyalty, and competitive advantage.

Keywords: strategic alliance, innovation capability, cost reduction, customer loyalty, competitive advantage

Procedia PDF Downloads 119
20891 Developing Second Language Learners’ Reading Comprehension through Content and Language Integrated Learning

Authors: Kaine Gulozer

Abstract:

Content and language integrated learning (CLIL), a strong methodological conception in teaching practice, was adopted to boost efficiency in second language (L2) instruction across a range of proficiency levels. This study investigates whether the incorporation of two different mediums of meaningful CLIL reading activities (in-school and out-of-school settings) influences L2 students’ development of comprehension skills differently. A CLIL-based instructional methodology was adopted, and a total of 50 preparatory-year students (N=50, 25 students for each proficiency level) at two distinct language proficiency levels (elementary and intermediate), majoring in engineering faculties, were recruited for the study. Both qualitative and quantitative methods were adopted through a post-test design. Data were collected through a questionnaire, a reading comprehension test and a semi-structured interview addressed to the two proficiency groups. The results show that both settings are beneficial for the development of reading comprehension, whereas, based on the reported interview results, the impact of the reading activities conducted in in-school settings was higher for students at the elementary language level than that of the activities conducted in out-of-school settings. This study suggests that the incorporation of meaningful CLIL reading activities in both settings for both proficiency levels could create students’ self-awareness of their language learning process and a sense of ownership in successfully improving field-specific reading comprehension. Further potential suggestions and implications of the study are discussed.

Keywords: content and language integrated learning, in-school setting, language proficiency, out-of-school setting, reading comprehension

Procedia PDF Downloads 146
20890 Selection of Soil Quality Indicators of Rice Cropping Systems Using Minimum Data Set Influenced by Imbalanced Fertilization

Authors: Theresa K., Shanmugasundaram R., Kennedy J. S.

Abstract:

Nutrient supplements are indispensable for raising crops and reaping the desired productivity. The nutrient imbalance between replenishment and crop uptake is addressed through the input of inorganic fertilizers; however, excessive dumping of inorganic nutrients in soil causes stagnation and decline in yield, and an imbalanced N-P-K ratio in the soil disturbs the soil ecosystem. The study evaluated the fertilization practices of conventional farming (CF), organic farming and the Integrated Nutrient Management system (INM) on soil quality using key indicators and soil quality indices. Twelve rice fields in the Thondamuthur block of Coimbatore district, of which ten were under conventional cultivation and one each under organic and INM-based cultivation in a monocropping sequence, were fixed, and their physical, chemical and biological properties were studied for four cropping seasons to determine the soil quality index (SQI). SQI was computed for the conventional, organic and INM fields. Compared with organic and INM, conventional farming recorded a lower soil quality index, while the organic and INM fields registered higher SQI values of 0.99 and 0.88 respectively. CF₄, which received a super-optimal dose of N (250%), showed a lower SQI value (0.573) as well as a lower yield (3.20 t ha⁻¹), whereas CF₆, which received 125% N, recorded the highest SQI (0.715) and yield (6.20 t ha⁻¹). Likewise, most of the CFs received N beyond the 125% level, except CF₃ and CF₉, and recorded lower yields. CFs which received super-optimal P, in the order CF₆ & CF₇ > CF₁ & CF₁₀, recorded lower yields except for CF₆. Super-optimal K application also resulted in lower yields in CF₄, CF₇ and CF₉.
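
The abstract does not spell out the scoring procedure, so the following sketch only illustrates how a minimum-data-set soil quality index is commonly computed (PCA-based indicator selection, "more is better" scoring, and a weighted additive index); the indicator handling, thresholds, and weights shown are assumptions, not the authors' values.

```python
# Illustrative minimum-data-set (MDS) soil quality index: PCA-based indicator
# selection and a weighted additive index.  Scoring rules are hypothetical.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def soil_quality_index(data: pd.DataFrame) -> pd.Series:
    """data: rows = fields/seasons, columns = measured soil indicators."""
    X = StandardScaler().fit_transform(data.values)
    pca = PCA().fit(X)

    # Keep principal components with eigenvalue > 1 (Kaiser criterion).
    keep = pca.explained_variance_ > 1.0
    loadings = pca.components_[keep]              # (n_kept_pcs, n_indicators)
    var_share = pca.explained_variance_ratio_[keep]

    # MDS: for each retained PC, keep the indicator with the highest |loading|.
    mds_idx = [int(np.argmax(np.abs(pc))) for pc in loadings]
    weights = var_share / var_share.sum()         # PC variance share -> weight

    # "More is better" linear scoring (0-1); real studies also use
    # "less is better" or optimum-range scoring per indicator.
    vals = data.iloc[:, mds_idx].to_numpy(dtype=float)
    scores = vals / vals.max(axis=0)

    return pd.Series(scores @ weights, index=data.index, name="SQI")

# Example call (hypothetical measurements):
# sqi = soil_quality_index(pd.DataFrame({"organic_C": [...], "available_N": [...]}))
```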

Keywords: rice cropping system, soil quality indicators, imbalanced fertilization, yield

Procedia PDF Downloads 157
20889 Determinants of Teenage Pregnancy: The Case of School Adolescents of Arba Minch Town, Southern Ethiopia

Authors: Aleme Mekuria, Samuel Mathewos

Abstract:

Background: Teenage pregnancy has long been a worldwide social, economic and educational concern for developed, developing and underdeveloped countries. Studies on adolescent sexuality and pregnancy are very limited in our country. Therefore, this study aims at assessing the prevalence of teenage pregnancy and its determinants among school adolescents of Arba Minch town. Methods: An institution-based, cross-sectional study was conducted from 20-30 March 2014. A systematic sampling technique was used to select a total of 578 students from four schools of the town. Data were collected by trained data collectors using a pre-tested, self-administered structured questionnaire. The analysis was performed using SPSS version 20.0. Multivariate logistic regression was used to identify the predictors of teenage pregnancy. Results: The prevalence of teenage pregnancy among school adolescents of Arba Minch town was 7.7%. Being a grade 11 (AOR=4.6; 95% CI: 1.4, 9.3) or grade 12 student (AOR=5.8; 95% CI: 1.3, 14.4), not knowing the correct time to take emergency contraceptives (AOR=3.3; 95% CI: 1.4, 7.4), substance use (AOR=3.1; 95% CI: 1.1, 8.8), living with either of the biological parents (AOR=3.3; 95% CI: 1.1, 8.7) and poor parent-daughter interaction (AOR=3.1; 95% CI: 1.1, 8.7) were found to be significant predictors of teenage pregnancy. Conclusion: This study revealed a high level of teenage pregnancy among school adolescents of Arba Minch town. A significant number of adolescent female students were at risk of facing the challenges of teenage pregnancy in the study area. School-based reproductive health education and strong parent-daughter relationships should be strengthened.

Keywords: adolescent, Arba Minch, risk factors, school, southern Ethiopia, teenage pregnancy

Procedia PDF Downloads 349
20888 Mathematical Modelling and AI-Based Degradation Analysis of the Second-Life Lithium-Ion Battery Packs for Stationary Applications

Authors: Farhad Salek, Shahaboddin Resalati

Abstract:

The production of electric vehicles (EVs) featuring lithium-ion battery technology has escalated substantially over the past decade, demonstrating a steady and persistent upward trajectory. The imminent retirement of EV batteries after approximately eight years underscores the critical need for their redirection towards recycling, a task complicated by the current inadequacy of recycling infrastructures globally. A potential solution to such concerns involves extending the operational lifespan of EV batteries by utilizing them in stationary energy storage systems in secondary applications. Such adoption, however, requires addressing the safety concerns associated with the batteries’ knee points and thermal runaway. This paper develops an accurate mathematical model representative of second-life battery packs at a cell-to-pack scale using an equivalent circuit model (ECM) methodology. Neural network algorithms are employed to forecast the degradation parameters based on the EV batteries’ aging history in order to develop a degradation model. The degradation model is integrated with the ECM to reflect the impacts of the cycle-aging mechanism on battery parameters during operation. The developed model is tested under real-life load profiles to evaluate the lifespan of the batteries in various operating conditions. The methodology and the algorithms introduced in this paper can be considered the basis for Battery Management System (BMS) design and techno-economic analysis of such technologies.
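
The paper's model equations are not reproduced in the abstract; the sketch below shows, under stated assumptions, how a first-order (1-RC) equivalent circuit model can be combined with externally supplied degradation factors, of the kind a neural network trained on aging history would predict, to simulate terminal voltage. The OCV curve, parameter values, and degradation factors are placeholders, not values from the paper.

```python
# Minimal 1-RC equivalent circuit model (ECM) sketch for a second-life cell.
# Degradation enters through capacity-fade and resistance-growth factors,
# assumed here to come from a separately trained neural network.
import numpy as np

def simulate_terminal_voltage(current, dt, q_nom_ah, r0, r1, c1, ocv_of_soc,
                              capacity_fade=1.0, resistance_growth=1.0, soc0=1.0):
    """current: discharge-positive current profile [A]; dt: time step [s]."""
    q_aged = q_nom_ah * capacity_fade                 # aged capacity [Ah]
    r0_aged, r1_aged = r0 * resistance_growth, r1 * resistance_growth
    tau = r1_aged * c1                                # RC time constant [s]

    soc, v_rc = soc0, 0.0
    v_t = np.empty_like(current, dtype=float)
    for k, i_k in enumerate(current):
        # Exact discrete RC-branch update and Coulomb-counting SoC update.
        v_rc = v_rc * np.exp(-dt / tau) + r1_aged * (1 - np.exp(-dt / tau)) * i_k
        soc = soc - i_k * dt / (3600.0 * q_aged)
        v_t[k] = ocv_of_soc(soc) - i_k * r0_aged - v_rc
    return v_t

# Hypothetical usage: a linear OCV placeholder and fade factors that a trained
# degradation network would normally predict from the pack's aging history.
ocv = lambda soc: 3.2 + 0.9 * soc                     # placeholder OCV curve [V]
profile = np.full(3600, 2.0)                          # 1 h at 2 A discharge
v = simulate_terminal_voltage(profile, dt=1.0, q_nom_ah=50.0,
                              r0=1.5e-3, r1=1.0e-3, c1=2.0e4, ocv_of_soc=ocv,
                              capacity_fade=0.8, resistance_growth=1.3)
```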

Keywords: second life battery, electric vehicles, degradation, neural network

Procedia PDF Downloads 65
20887 Parameters Identification of Granular Soils around PMT Test by Inverse Analysis

Authors: Younes Abed

Abstract:

The successful application of in-situ testing of soils depends heavily on the development of test interpretation methods. The pressuremeter test simulates the expansion of a cylindrical cavity and, because it has well-defined boundary conditions, it is more amenable to rigorous theoretical analysis (i.e. cavity expansion theory) than most other in-situ tests. In this article, and in order to make the identification process more convenient, we propose a relatively simple procedure involving the numerical identification of some mechanical parameters of a granular soil, notably the elastic modulus and the friction angle, from a pressuremeter curve. The procedure, applied here to identify the parameters of the generalised Prager model associated with the Drucker-Prager criterion from a pressuremeter curve, is based on an inverse analysis approach, which consists of minimising the function representing the difference between the experimental curve and the curve obtained by integrating the model along the loading path of the in-situ test. The numerical process implemented here is based on an established finite element program. We present a validation of the proposed approach against a database of cylindrical cavity expansion tests. This database consists of four types of tests: thick cylinder tests carried out on Hostun RF sand, pressuremeter tests carried out on Hostun sand, in-situ pressuremeter tests carried out at the Fos site with a marine self-boring pressuremeter, and in-situ pressuremeter tests carried out at the Labenne site with a Ménard pressuremeter.
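
As a minimal sketch of the inverse-analysis idea (not the authors' implementation), the snippet below identifies two parameters, the elastic modulus E and the friction angle phi, by least-squares minimisation of the misfit between an experimental pressuremeter curve and a simulated one; `simulate_pressuremeter` is a hypothetical stand-in for the finite element computation described above.

```python
# Inverse-analysis sketch: identify (E, phi) by minimising the misfit between
# the measured pressuremeter curve and a forward model.  The forward model
# here is a hypothetical placeholder for the finite element simulation.
import numpy as np
from scipy.optimize import least_squares

def simulate_pressuremeter(E, phi_deg, cavity_strain):
    """Placeholder forward model: pressure vs. cavity strain for given E, phi."""
    # In the real procedure this would integrate the generalised Prager /
    # Drucker-Prager model along the loading path with a FE program.
    return E * cavity_strain / (1.0 + cavity_strain / np.tan(np.radians(phi_deg)))

def identify_parameters(strain_exp, pressure_exp, x0=(50e3, 35.0)):
    def residuals(x):
        E, phi = x
        return simulate_pressuremeter(E, phi, strain_exp) - pressure_exp
    sol = least_squares(residuals, x0, bounds=([1e3, 20.0], [500e3, 50.0]))
    return sol.x  # identified (E [kPa], phi [deg])

# Usage with measured data (arrays of cavity strain and applied pressure):
# E_hat, phi_hat = identify_parameters(strain_exp, pressure_exp)
```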

Keywords: granular soils, cavity expansion, pressuremeter test, finite element method, identification procedure

Procedia PDF Downloads 292
20886 Protein Extraction by Enzyme-Assisted Extraction followed by Alkaline Extraction from Red Seaweed Eucheuma denticulatum (Spinosum) Used in Carrageenan Production

Authors: Alireza Naseri, Susan L. Holdt, Charlotte Jacobsen

Abstract:

In 2014, global carrageenan production was 60,000 tons with a value of US$ 626 million. From this number, it can be estimated that the total dried seaweed consumption for this production was at least 300,000 tons/year. The protein content of these types of seaweed is 5-25%. If just half of this total amount of protein could be extracted, 18,000 tons/year of a high-value protein product would be obtained. The overall aim of this study was to develop a technology that ensures further utilization of seaweed that at present is used only as a raw material for single-extraction carrageenan production. More specifically, proteins should be extracted from the seaweed either before or after extraction of carrageenan, with a focus on maintaining the quality of carrageenan as the main product. Different mechanical, chemical and enzymatic technologies were evaluated. The optimized process was implemented at lab scale and, based on its results, new experiments were performed at pilot and larger scales. In order to calculate the efficiency of the new upstream multi-extraction process, the protein content was tested before and after extraction. After this step, carrageenan was extracted, and the carrageenan content and the effect of extraction on yield were evaluated. The functionality and quality of the carrageenan were measured based on rheological parameters. The results showed that by using the new multi-extraction process (patent submitted), it is possible to extract almost 50% of the total protein without any negative impact on carrageenan quality. Moreover, compared to the routine carrageenan extraction process, the new multi-extraction process could increase the yield of carrageenan, and rheological properties such as gel strength of the final carrageenan showed promising improvement. The extracted protein has initially been screened as a plant protein source in typical food applications. Further work will be carried out to improve properties such as color, solubility, and taste.

Keywords: carrageenan, extraction, protein, seaweed

Procedia PDF Downloads 284
20885 Analysis of the Level of Production Failures by Implementing New Assembly Line

Authors: Joanna Kochanska, Dagmara Gornicka, Anna Burduk

Abstract:

The article examines the process of implementing a new assembly line in a manufacturing enterprise in the household appliances industry. At the initial stages of the project, a decision was made that one of its foundations should be the concept of lean management; because of that, eliminating as many errors as possible in the first phases of the line's operation was emphasized. During the start-up of the line, all production losses were identified and documented (from serious machine failures, through any unplanned downtime, to micro-stops and quality defects). Over 6 weeks (the line start-up period), all errors resulting from problems in various areas were analyzed. These areas included, among others, production, logistics, quality, and organization. The aim of the work was to analyze the occurrence of production failures during the initial phase of starting up the line and to propose a method for determining their critical level during its full functionality. The repeatability of production losses in various areas and at different levels was examined at this early stage of implementation using statistical process control methods. Pareto analysis was used to identify the weakest points so that improvement actions could be focused on them. The next step was to examine the effectiveness of the actions undertaken to reduce the level of recorded losses. Based on the obtained results, a method for determining the critical failure level in the studied areas was proposed. The developed coefficient can be used as an alarm in case of production imbalance caused by an increased failure level in production and production-support processes during standardized operation of the line.
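
For readers unfamiliar with the Pareto step mentioned above, the short sketch below shows how recorded losses can be ranked and the "vital few" categories covering roughly 80% of the losses extracted; the category names and counts are hypothetical, not data from the study.

```python
# Pareto analysis sketch: rank loss categories and keep the "vital few"
# that account for ~80% of recorded production losses (hypothetical data).
import pandas as pd

losses = pd.DataFrame({
    "area":    ["machine failure", "micro-stops", "quality defects",
                "logistics", "organisation", "unplanned downtime"],
    "minutes": [420, 310, 180, 90, 60, 240],
})

pareto = losses.sort_values("minutes", ascending=False).reset_index(drop=True)
pareto["share"] = pareto["minutes"] / pareto["minutes"].sum()
pareto["cum_share"] = pareto["share"].cumsum()

# Vital few: categories needed to reach 80% of total losses.
vital_few = pareto[pareto["cum_share"].shift(fill_value=0.0) < 0.80]
print(vital_few[["area", "minutes", "cum_share"]])
```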

Keywords: production failures, level of production losses, new production line implementation, assembly line, statistical process control

Procedia PDF Downloads 128
20884 Coupling of Microfluidic Droplet Systems with ESI-MS Detection for Reaction Optimization

Authors: Julia R. Beulig, Stefan Ohla, Detlev Belder

Abstract:

In contrast to off-line analytical methods, lab-on-a-chip technology delivers direct information about the observed reaction. Microfluidic devices therefore make an important scientific contribution, e.g. in the field of synthetic chemistry, where the rapid generation of analytical data can be applied to the optimization of chemical reactions. These microfluidic devices enable a fast change of reaction conditions as well as a resource-saving mode of operation. In the presented work, we focus on the investigation of multiphase regimes, more specifically on biphasic microfluidic droplet systems, in which every single droplet is a reaction container with customized conditions. The biggest challenge is the rapid qualitative and quantitative readout of information, as most detection techniques for droplet systems are non-specific, time-consuming or too slow. An exception is electrospray ionization mass spectrometry (ESI-MS). Combining a reaction screening platform with a rapid and specific detection method is an important step in droplet-based microfluidics. In this work, we present a novel approach for synthesis optimization on the nanoliter scale with direct ESI-MS detection. We show the development of a droplet-based microfluidic device which enables the modification of different parameters while simultaneously monitoring their effect on the reaction within a single run. Using common soft- and photolithographic techniques, a polydimethylsiloxane (PDMS) microfluidic chip with different functionalities is developed. As an interface for MS detection, we use a steel capillary for ESI and improve the spray stability with a Teflon siphon tubing inserted underneath the steel capillary. By optimizing the flow rates, it is possible to screen reaction parameters; this is shown exemplarily for a domino Knoevenagel hetero-Diels-Alder reaction. Different starting materials, catalyst concentrations and solvent compositions are investigated. Due to the high repetition rate of the droplet production, each set of reaction conditions is examined hundreds of times. As a result of the investigation, we obtain suitable reagents, the ideal water-methanol ratio of the solvent and the most effective catalyst concentration. The developed system can help determine important information about the optimal parameters of a reaction within a short time. With this novel tool, we take an important step in the field of combining droplet-based microfluidics with organic reaction screening.

Keywords: droplet, mass spectrometry, microfluidics, organic reaction, screening

Procedia PDF Downloads 301
20883 An Improved Robust Algorithm Based on Cubature Kalman Filter for Single-Frequency Global Navigation Satellite System/Inertial Navigation Tightly Coupled System

Authors: Hao Wang, Shuguo Pan

Abstract:

The Global Navigation Satellite System (GNSS) signal received by a dynamic vehicle in a harsh environment is frequently interfered with and blocked, which generates gross errors affecting the positioning accuracy of GNSS/Inertial Navigation System (INS) integrated navigation. Therefore, this paper puts forward an improved robust cubature Kalman filter (CKF) algorithm for ambiguity resolution in a single-frequency GNSS/INS tightly coupled system. Firstly, the dynamic model and measurement model of the single-frequency GNSS/INS tightly coupled system were established, and the method for INS-aided GNSS integer ambiguity resolution was studied. Then, we analyzed the influence of pseudo-range observations with gross errors on GNSS/INS integrated positioning accuracy. To reduce the influence of outliers, this paper improves the CKF algorithm and realizes an intelligent selection of robust strategies by judging the ill-conditioned matrix. Finally, a field navigation test was performed to demonstrate the effectiveness of the proposed algorithm based on the double-differenced solution mode. The experiment proved that the improved robust algorithm can greatly weaken the influence of isolated, continuous, and hybrid observation anomalies, enhancing the reliability and accuracy of GNSS/INS tightly coupled navigation solutions.
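
As background to the filter described above, the sketch below shows the standard third-degree cubature point generation and an IGG-type robust reweighting of a pseudo-range variance based on its standardized innovation; it illustrates the general robust-CKF idea only and is not the paper's specific strategy-selection scheme.

```python
# Sketch: cubature point generation (third-degree rule) and an IGG-III style
# robust inflation of a measurement variance from its standardised innovation.
import numpy as np

def cubature_points(x, P):
    """Return the 2n cubature points for mean x (n,) and covariance P (n, n)."""
    n = x.size
    S = np.linalg.cholesky(P)                           # matrix square root
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])
    return x[:, None] + S @ xi                          # shape (n, 2n)

def robust_variance(innovation, variance, k0=1.5, k1=3.0):
    """Inflate a pseudo-range variance when its innovation looks like an outlier."""
    t = abs(innovation) / np.sqrt(variance)             # standardised innovation
    if t <= k0:
        w = 1.0                                         # keep observation as-is
    elif t <= k1:
        w = (k0 / t) * ((k1 - t) / (k1 - k0)) ** 2      # down-weight suspect obs.
    else:
        w = 1e-12                                       # effectively reject it
    return variance / w                                 # inflated variance for R

# In a CKF update, the propagated cubature points give the predicted measurement
# and covariances; robust_variance() would be applied per pseudo-range before
# forming the innovation covariance and Kalman gain.
```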

Keywords: GNSS/INS integrated navigation, ambiguity resolution, cubature Kalman filter, robust algorithm

Procedia PDF Downloads 100
20882 Unlocking Green Hydrogen Potential: A Machine Learning-Based Assessment

Authors: Said Alshukri, Mazhar Hussain Malik

Abstract:

Green hydrogen is hydrogen produced using renewable energy sources. In the last few years, Oman has aimed to reduce its dependency on fossil fuels. Recently, the hydrogen economy has become a global trend, and many countries have started to investigate the feasibility of implementing this sector. Oman created an alliance to establish the policy and rules for this sector. With motivation coming from both global and local interest in green hydrogen, this paper investigates the potential of producing hydrogen from wind and solar energy in three different locations in Oman, namely Duqm, Salalah, and Sohar. Using the machine learning-based software WEKA and local meteorological data, the project was designed to determine which location has the highest wind and solar energy potential. First, various supervised models were tested for their prediction accuracy, and it was found that the Random Forest (RF) model has the best prediction performance. The RF model was applied to 2021 meteorological data for each location, and the results indicated that Duqm has the highest wind and solar energy potential. A system of one wind turbine in Duqm can produce 8335 MWh/year, which could be utilized in the water electrolysis process to produce 88,847 kg of hydrogen, while a solar system consisting of 2820 solar cells is estimated to produce 1666.223 MWh/year, capable of producing 177,591 kg of hydrogen.
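
The study used WEKA's Random Forest implementation; purely as an analogous illustration, the sketch below uses scikit-learn's RandomForestRegressor to predict site wind power from meteorological features and converts an annual energy figure into a hydrogen mass using an assumed electrolyser specific energy consumption. The feature names and the 53 kWh/kg figure are assumptions, not values from the study.

```python
# Illustrative stand-in for the WEKA Random Forest workflow: predict wind power
# from meteorological features, then convert annual energy to hydrogen mass.
# Feature names and the electrolyser specific energy are assumed values.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def train_wind_model(df: pd.DataFrame) -> RandomForestRegressor:
    """df columns (assumed): wind_speed, temperature, pressure, power_kw."""
    X, y = df[["wind_speed", "temperature", "pressure"]], df["power_kw"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("R^2 on held-out data:", model.score(X_te, y_te))
    return model

def hydrogen_mass_kg(annual_energy_mwh: float, kwh_per_kg: float = 53.0) -> float:
    """Electrolyser specific energy (kWh per kg of H2) is an assumed figure."""
    return annual_energy_mwh * 1000.0 / kwh_per_kg

# Example: model = train_wind_model(duqm_2021_data)
#          mass = hydrogen_mass_kg(8335.0)   # annual energy from the model
```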

Keywords: green hydrogen, machine learning, wind and solar energies, WEKA, supervised models, random forest

Procedia PDF Downloads 79
20881 Testing the Possibility of Healthy Individuals to Mimic Fatigability in Multiple Sclerotic Patients

Authors: Emmanuel Abban Sagoe

Abstract:

Proper functioning of the central nervous system ensures that we are able to accomplish just about everything we do as human beings, such as walking, breathing, running, etc. Myelinated neurons throughout the body, which transmit signals at high speed, facilitate these actions. In the case of multiple sclerosis (MS), the body's immune system attacks the myelin sheath surrounding the neurons and over time destroys it. Depending upon where the destruction occurs in the brain, symptoms can vary from person to person. Fatigue is, however, the biggest problem encountered by MS sufferers. It is very often described as the bedrock upon which other symptoms of MS, such as challenges in balance and coordination, dizziness, slurred speech, etc., may occur. Classifying and distinguishing between perception-based fatigue and performance-based fatigability is key to identifying appropriate treatment options for patients. Objective methods for assessing motor fatigability are also key to providing clinicians and physiotherapists with critical information on the progression of the symptom. This study tested whether the Fatigue Index Kliniken Schmieder assessment tool can detect fatigability as seen in MS patients when healthy subjects with no known history of neurological pathology mimic abnormal gaits. Thirty-three healthy adults aged 18-58 years volunteered as subjects for the study. The subjects, strapped with RehaWatch sensors on both feet, completed 6 gait protocols of normal and mimicked fatigable gaits for 60 seconds per gait at a treadmill speed of 1.38889 m/s, following clear instructions.

Keywords: attractor attributes, fatigue index Kliniken Schmieder, gait variability, movement pattern

Procedia PDF Downloads 123
20880 Efficient DNN Training on Heterogeneous Clusters with Pipeline Parallelism

Authors: Lizhi Ma, Dan Liu

Abstract:

Pipeline parallelism has been widely used to accelerate distributed deep learning, alleviating GPU memory bottlenecks and ensuring that models can be trained and deployed smoothly under limited graphics memory conditions. However, in highly heterogeneous distributed clusters, traditional model partitioning methods cannot achieve load balancing, and overlapping communication with computation is also a significant challenge. In this paper, HePipe, an efficient pipeline-parallel training method for highly heterogeneous clusters, is proposed. According to the characteristics of the neural network pipeline training task and oriented to a 2-level heterogeneous cluster computing topology, a training method based on 2-level stage division of neural network modeling and partitioning is designed to improve parallelism. Additionally, a multi-forward 1F1B scheduling strategy is designed to accelerate the training time of each stage by executing computation units in advance, maximizing the overlap between forward-propagation communication and backward-propagation computation. Finally, a dynamic recomputation strategy based on task memory requirement prediction is proposed to improve the fit between tasks and memory, which improves the throughput of the cluster and solves the memory shortfall problem caused by memory differences in heterogeneous clusters. The empirical results show that HePipe improves the training speed by 1.6×-2.2× over existing asynchronous pipeline baselines.
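
For reference, the sketch below generates the classic 1F1B (one-forward-one-backward) schedule for a given pipeline stage, i.e. the baseline that HePipe's multi-forward variant builds on; the multi-forward modification and the 2-level partitioning themselves are not reproduced here.

```python
# Baseline 1F1B schedule for one pipeline stage (warm-up forwards, steady-state
# alternating F/B, cool-down backwards).  HePipe's multi-forward variant and its
# 2-level stage partitioning are not reproduced here.
def one_f1b_schedule(stage: int, num_stages: int, num_microbatches: int):
    warmup = min(num_stages - stage - 1, num_microbatches)
    schedule, fwd, bwd = [], 0, 0
    for _ in range(warmup):                       # warm-up: forwards only
        schedule.append(("F", fwd)); fwd += 1
    for _ in range(num_microbatches - warmup):    # steady state: one F, one B
        schedule.append(("F", fwd)); fwd += 1
        schedule.append(("B", bwd)); bwd += 1
    for _ in range(warmup):                       # cool-down: remaining backwards
        schedule.append(("B", bwd)); bwd += 1
    return schedule

# Example: 4 stages, 8 micro-batches; stage 0 does 3 warm-up forwards.
print(one_f1b_schedule(stage=0, num_stages=4, num_microbatches=8))
```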

Keywords: pipeline parallelism, heterogeneous cluster, model training, 2-level stage partitioning

Procedia PDF Downloads 19
20879 Thermal Hysteresis Activity of Ice Binding Proteins during Ice Crystal Growth in Sucrose Solution

Authors: Bercem Kiran-Yildirim, Volker Gaukel

Abstract:

Ice recrystallization (IR), which occurs especially during frozen storage, is an undesired process due to its possible influence on product quality. As a result of recrystallization, the total volume of ice remains constant, but the size, number, and shape of ice crystals change. For instance, as indicated in the literature, the size of ice crystals in ice cream increases due to recrystallization, resulting in texture deterioration. Therefore, the inhibition of ice recrystallization is of great importance, not only for the food industry but also for several other areas where sensitive products are stored frozen, such as pharmaceutical products or organs and blood in medicine. Ice-binding proteins (IBPs) have the unique ability to inhibit ice growth and, in consequence, inhibit recrystallization. This effect is based on their ice-binding affinity. In the presence of IBPs in a solution, ice crystal growth is inhibited during temperature decrease until a certain temperature is reached, whereas melting during temperature increase is not influenced. The gap between the melting and freezing points is known as thermal hysteresis (TH). In the literature, TH activity is usually investigated under laboratory conditions in IBP buffer solutions. In product applications (e.g., food), many other solutes are present which may influence the TH activity. In this study, a subset of IBPs, the so-called antifreeze proteins (AFPs), is used to investigate the influence of sucrose solution concentration on TH activity. For the investigation, a polarization microscope (Nikon Eclipse LV100ND) equipped with a digital camera (Nikon DS-Ri1) and a cold stage (Linkam LTS420) was used. In a first step, the equipment was established and validated concerning the accuracy of TH measurements based on literature data.

Keywords: ice binding proteins, ice crystals, sucrose solution, thermal hysteresis

Procedia PDF Downloads 185
20878 Model-Based Global Maximum Power Point Tracking at Photovoltaic String under Partial Shading Conditions Using Multi-Input Interleaved Boost DC-DC Converter

Authors: Seyed Hossein Hosseini, Seyed Majid Hashemzadeh

Abstract:

Solar energy is one of the remarkable renewable energy sources, with particular characteristics such as being unlimited, non-polluting, and freely accessible. Generally, solar energy can be used in thermal and photovoltaic (PV) forms. The installation cost of a PV system is very high. Additionally, due to dependence on environmental conditions such as solar radiation and ambient temperature, the electrical power generation of such a system is unpredictable, and without power electronics devices there is no guarantee of maximum power delivery at the system output. Maximum power point tracking (MPPT) should be used to achieve the maximum power of a PV string. MPPT is one of the essential parts of a PV system; without it, it would be impossible to reach the maximum PV string power, and high losses would be caused in the PV system. One of the notable challenges in MPPT is partial shading conditions (PSC). Under PSC, the output photocurrent of a PV module under shadow is less than the PV string current. The difference between these currents passes through the module's internal parallel resistance and creates a large negative voltage across the shaded modules. This significant negative voltage damages the PV module under shadow; this condition is called the hot-spot phenomenon. An anti-parallel diode is inserted across the PV module to prevent this phenomenon. This diode is known as the bypass diode. Due to the behavior of the bypass diodes under PSC, the P-V curve of the PV string has several peaks, and the peak that provides the maximum available power is the global peak. Model-based global MPPT (GMPPT) methods can estimate the optimal point faster than other GMPPT approaches. Centralized, modular, and interleaved DC-DC converter topologies are the main structures that can be used for GMPPT of a PV string. There are some problems with the centralized structure, such as current mismatch losses in the PV string, loss of power from shaded modules bypassed by the bypass diodes under PSC, and the need for a series connection of many PV modules to reach the desired voltage level. In the modular structure, each PV module is connected to a DC-DC converter; in this structure, as the power demanded from the PV string increases, the number of DC-DC converters used in the PV system increases, and as a result the cost of the modular structure is very high. We can implement model-based GMPPT through the multi-input interleaved boost DC-DC converter to increase power extraction from the PV string and reduce hot-spot and current mismatch errors in a PV string under different environmental conditions and variable load circumstances. The interleaved boost DC-DC converter has many advantages over the other mentioned structures, such as high reliability and efficiency, better regulation of the DC-link voltage, mitigation of notable errors such as module current mismatch and the hot-spot phenomenon, and reduced voltage stress on the power switches.
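
To make the multi-peak behaviour concrete, the sketch below builds the P-V curve of a small series string with bypass diodes under an assumed shading pattern, using a simplified ideal single-diode model (no series or shunt resistance), and locates the global peak by a scan; model-based GMPPT methods estimate this point faster, but the global maximum they seek is the same.

```python
# P-V curve of a 3-module series string with bypass diodes under assumed partial
# shading, using a simplified ideal single-diode model; the global maximum power
# point is found by scanning the string current.  All parameter values are assumed.
import numpy as np

N_CELLS, N_VT, I0 = 60, 1.3 * 0.02585, 1e-9       # cells/module, n*Vt [V], sat. current [A]
V_BYPASS = 0.7                                    # bypass diode forward drop [V]
IPH = np.array([8.0, 5.0, 2.5])                   # module photocurrents under shading [A]

def module_voltage(i_string, iph):
    """Module voltage at a given string current; negative if the bypass diode conducts."""
    arg = (iph - i_string + I0) / I0
    if arg <= 1.0:                                # module cannot carry the current
        return -V_BYPASS                          # -> bypass diode takes over
    return N_CELLS * N_VT * np.log(arg)

currents = np.linspace(0.0, IPH.max(), 2000)
voltages = np.array([sum(module_voltage(i, iph) for iph in IPH) for i in currents])
powers = currents * np.clip(voltages, 0.0, None)

k = int(np.argmax(powers))                        # global peak among the local peaks
print(f"Global MPP: {powers[k]:.1f} W at {voltages[k]:.1f} V, {currents[k]:.2f} A")
```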

Keywords: solar energy, photovoltaic systems, interleaved boost converter, maximum power point tracking, model-based method, partial shading conditions

Procedia PDF Downloads 130
20877 Agroecology: Rethink the Local in the Global to Promote the Creation of Novelties

Authors: Pauline Cuenin, Marcelo Leles Romarco Oliveira

Abstract:

Based on their localities and following their ecological rationality, family-based farmers have experimented, adapted and innovated to continuously improve their production systems for millennia. With the technological package transfer processes of the so-called Green Revolution, farmers have become increasingly dependent on ready-made "recipes" built from so-called "universal" and global knowledge to face the problems that emerge in the management of local agroecosystems, thus reducing their creative and experiential capacities. However, the production of novelties within farms is fundamental to the transition to more sustainable agro-food systems. As the fruits of local knowledge and/or the contextualization of exogenous knowledge, novelties are seen as seeds of transition. By presenting new techniques, new organizational forms and new epistemological approaches, agroecology has been pointed out as a way to encourage and promote the creative capacity of farmers. From this perspective, this theoretical work aims to analyze how agroecology encourages the innovative capacity of farmers and, in general, the production of novelties. For this, the theoretical and methodological bases of agroecology were analyzed through a literature review, specifically looking at the way in which it articulates the local with the global, complemented by an analysis of Brazilian agroecological experiences. It is emphasized that, based on the peasant way of doing agriculture, that is, on ecological/social co-evolution, also called co-production (the interaction between human beings and living nature), agroecology recognizes and revalues peasant knowledge, which involves the deep interactions of the farmer with his site (bio-physical and social). As a "place science," practice and movement, it specifically takes into consideration the local and empirical knowledge of farmers, which allows questioning and modifying the paradigms that underpin current agriculture and that have disintegrated farmers' creative processes. In addition to revaluing the local, agroecology allows local knowledge to dialogue with global knowledge, which is essential for moving beyond the dominant logic of thought and giving shape to new experiences. In order to reach this articulation, agroecology involves new methodological focuses, seeking participatory methods of study and intervention that express themselves in the form of horizontal spaces of socialization and collective learning involving several actors with different kinds of knowledge. These processes promoted by agroecology favor the production of novelties at local levels for expansion at other levels, such as the global, through translocal agroecological networks.

Keywords: agroecology, creativity, global, local, novelty

Procedia PDF Downloads 223
20876 Using Business Simulations and Game-Based Learning for Enterprise Resource Planning Implementation Training

Authors: Carin Chuang, Kuan-Chou Chen

Abstract:

An Enterprise Resource Planning (ERP) system is an integrated information system that supports the seamless integration of all the business processes of a company. Implementing an ERP system can increase efficiency and decrease costs while helping improve productivity. Many organizations, including large, medium and small-sized companies, have adopted ERP systems over the past decades. Although an ERP system can bring competitive advantages to organizations, the lack of a proper training approach in ERP implementation is still a major concern. Organizations understand the importance of ERP training to adequately prepare managers and users. The low return on investment of ERP training, however, makes it difficult for knowledge workers to transfer what is learned in training to their jobs in the workplace. Inadequate and inefficient ERP training limits the value realization and success of an ERP system; hence the need for profound change and innovation in ERP training, both in the industrial workplace and in Information Systems (IS) education in academia. An innovative ERP training approach can improve users' knowledge of business processes and hands-on skills in mastering an ERP system, and it can also serve as educational material for IS students in universities. The purpose of this study is to examine the use of ERP simulation games via the ERPsim system to train IS students in learning ERP implementation. ERPsim is a business simulation game developed by the ERPsim Lab at HEC Montréal that runs on a real-life SAP (Systems, Applications and Products) ERP system. The training uses the ERPsim system as the tool for Internet-based simulation games and is designed as online student competitions during class. The competitions involve student teams, with the facilitation of the instructor, and put the students' business skills to the test via intensive simulation games on a real-world SAP ERP system. The teams run the full business cycle of a manufacturing company while interacting with suppliers, vendors, and customers through sending and receiving orders, delivering products and completing the entire cash-to-cash cycle. To learn a range of business skills, each student needs to adopt an individual business role and make business decisions around the products and business processes. Based on the training experiences gathered from rounds of business simulations, the findings show that learners can make mistakes at reduced risk, which helps them build self-confidence in problem-solving. In addition, learners' reflections on their mistakes can reveal the root causes of the problems and further improve the efficiency of the training. ERP instructors teaching with the innovative approach report significant improvements in student evaluation, learner motivation, attendance and engagement, as well as increased learner technology competency. The findings of the study can provide ERP instructors with guidelines to create an effective learning environment and can be transferred to a variety of other educational fields in which trainers are migrating towards a more active learning approach.

Keywords: business simulations, ERP implementation training, ERPsim, game-based learning, instructional strategy, training innovation

Procedia PDF Downloads 139
20875 HLB Disease Detection in Omani Lime Trees using Hyperspectral Imaging Based Techniques

Authors: Jacintha Menezes, Ramalingam Dharmalingam, Palaiahnakote Shivakumara

Abstract:

In recent years, Omani acid lime cultivation and production has been affected by citrus greening, or Huanglongbing (HLB), disease. HLB is one of the most destructive citrus diseases, with no remedies or countermeasures to stop it. The currently used polymerase chain reaction (PCR) and enzyme-linked immunosorbent assay (ELISA) HLB detection tests require lengthy and labor-intensive laboratory procedures. Furthermore, the equipment and staff needed to carry out these procedures are frequently specialized, making them a less optimal solution for detection of the disease. The current research uses hyperspectral imaging technology for automatic detection of citrus trees with HLB disease. Omani citrus tree leaf images were captured with a portable Specim IQ hyperspectral camera. The research considered healthy, nutrient-deficient, and HLB-infected leaf samples, classified on the basis of the polymerase chain reaction (PCR) test. The high-resolution image samples were sliced into sub-cubes. The sub-cubes were further processed to obtain RGB images with spatial features. Similarly, RGB spectral slices were obtained through a moving window over the wavelengths. The resized spectral-spatial RGB images were fed to convolutional neural networks for deep feature extraction. The current research was able to classify a given sample into the appropriate class with 92.86% accuracy, indicating the effectiveness of the proposed techniques. The significant bands showing differences among the three leaf types were found to be 560 nm, 678 nm, 726 nm and 750 nm.
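
The paper does not specify the network architecture or framework; the snippet below is a minimal PyTorch sketch of a small convolutional classifier for the three leaf classes operating on the resized spectral-spatial RGB slices, intended only to illustrate the deep-feature-extraction step, not to reproduce the reported configuration.

```python
# Minimal convolutional classifier sketch for the three leaf classes
# (healthy, nutrient-deficient, HLB-infected); architecture and framework
# are assumptions, not the configuration reported in the paper.
import torch
import torch.nn as nn

class LeafCNN(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                    # global average pooling
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):                               # x: (batch, 3, H, W)
        return self.classifier(self.features(x).flatten(1))

# Example forward pass on a batch of resized spectral-spatial RGB slices.
logits = LeafCNN()(torch.randn(8, 3, 64, 64))           # -> (8, 3) class scores
```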

Keywords: huanglongbing (HLB), hyperspectral imaging (HSI), Omani citrus, CNN

Procedia PDF Downloads 80
20874 Educational Impact of Participatory Theatre Based Intervention on Gender Equality Attitudes, Youth in Serbia

Authors: Jasna Milošević Đorđević, Jelisaveta Blagojević, Jovana Timotijević, Alison Mckinley

Abstract:

Young people in Serbia have grown up in turbulent times, during the Balkan wars, in cultural and economic isolation without adequate education on (ethnic, gender, social, ...) equality. They often hold very strong patriarchal gender stereotypes. The perception of gender in Serbia is still heavily influenced by a traditional worldview, and young people have little opportunity in the traditional educational system to challenge it, receiving no formal sex education. Educational policies have addressed achieving gender equality as one of their goals, supporting all young people in gaining better educational opportunities, but there are obvious shortcomings of the official education system in implementing those goals. Therefore, new approaches should be implemented. We evaluate the impact of a non-traditional approach, participatory theatre performance, which has strong transformative potential, especially in relation to gender issues. A theatre-based intervention (TBI) was created to provoke young people to become aware of their gender constructs. Engaging young people in a modern form of education, such as a transformative gender intervention through participatory theatre, could have a positive impact on their sex knowledge and understanding of gender roles. The transformative process in a TBI happens on two levels, the affective and the cognitive. The funding agency of the project and evaluation is IPPF. The most important aim of this survey is the evaluation of the transformative TBI as a new educational approach related to better understanding gender as a social construct. To reach this goal, we measured attitude change on three indicators: (a) gender identity (perception of feminine identity, perception of masculine identity, importance of gender for personal identity), (b) gender roles in the labor market, and (c) gender equality in partnership and sexual behavior. Our main hypothesis is that a participatory theatre-based intervention can have transformational potential in challenging traditional gender knowledge and attitudes among youth in Serbia. To evaluate the impact of the TB intervention, we implemented an online baseline and end-line survey with non-participants of the TBI on a representative sample in the targeted towns (control group). Additionally, we tested the experimental group twice: a pretest at the beginning of each TBI and a post-test of participants after the play. A sample of 500 respondents aged 18-30 years from 9 towns in Serbia responded to the online questionnaire in September 2017 as the baseline survey. Pre- and post-measurement of all tested variables among participants in the nine towns would be performed, and an end-line survey with 500 respondents would be conducted at the end of the project (early 2018). After the first TBI (60 participants), no impact was detected on the measured indicators: perception of desirable characteristics of men, F(1,59)=1.291, p=.260; perception of desirable characteristics of women, F(1,55)=1.386, p=.244; gender identity importance, F(1,63)=.050, p=.824; sex-related behavior, F(1,61)=1.145, p=.289; gender equality in the labor market, F(1,63)=.076, p=.783; gender equality in partnership, F(1,61)=.201, p=.656. However, we hope that the following interventions will bring more data showing that a participatory theatre intervention explaining gender as a social construct could have an additional positive impact within the traditional educational system.

Keywords: educational impact, gender identity, gender role, participatory theatre based intervention

Procedia PDF Downloads 183
20873 The Role of Social Media in the Success or Failure of a Revolution: A Comparative Case Study of 2008/2018 Revolutions in Armenia

Authors: Nane Giloyan

Abstract:

The rapid development of social networks in the 21st century has increased interest in the role and impact of social media on the success or failure of a revolution. Even though studies have investigated the role of social media in the outcome of a revolution, the conclusions on this matter remain ambiguous so far. Hence, this research aims to investigate the role of social media in the success or failure of a revolution and to contribute to filling this gap in the literature. The study examines the research question of whether the use of social media explains the success or failure of the revolutions of 2008 and 2018 in Armenia. The research question is investigated through content analysis of two cases: the failed revolution in 2008 and the successful revolution in 2018 in Armenia. The secondary data analysis was based on information devoted to the two revolutions, drawing on local and major international news articles, journal and critical articles in Armenian, Russian and English, as well as videos, posts and live streams of the revolutionary leaders. There can be many factors explaining the success or failure of a revolution; however, investigating factors other than the use of social media and their role in explaining the outcome of a revolution is beyond the scope of this research. The study holds other variables constant and concludes that in the cases of the 2008 and 2018 revolutions in Armenia, the mobilization of society through social media explains the differences in the outcomes (failed or successful). The results highlight that the use of the Internet, particularly social media and live streams, by the opposition was the essential difference between the two revolutions. Social media platforms, live streams, and communication apps that were absent in the revolutionary situation in 2008 were fundamental to the Armenian Velvet Revolution in 2018. The changes in the situation in favor of the opposition, and thus the outcome of the protests, were mainly based on Internet-based mobilization of society. It is also important to take into consideration that the country experienced a great increase in Internet penetration rates over the decade; the percentage of the population with access to the Internet increased drastically between 2008 and 2018. This fact may help provide a clearer understanding of the use of the Internet and social media by the opposition and the reliance on social media by society. According to the results of the content analysis, the use of social media to direct the protests and mobilize society has a vital role and positive impact on the outcome of a revolution. Thus, the study concludes that it is the use of social media to initiate, organize, and direct the protests that explains the success or failure of the two Armenian revolutions.

Keywords: social media, revolution, Armenia, success, failure

Procedia PDF Downloads 129
20872 Numerical Modelling of Skin Tumor Diagnostics through Dynamic Thermography

Authors: Luiz Carlos Wrobel, Matjaz Hribersek, Jure Marn, Jurij Iljaz

Abstract:

Dynamic thermography has been clinically proven to be a valuable diagnostic technique for skin tumor detection, as well as for other medical applications such as breast cancer diagnostics, diagnostics of vascular diseases, fever screening, and dermatology. Thermography for medical screening can be done in two different ways: observing the temperature response under steady-state conditions (passive or static thermography), or inducing thermal stresses by cooling or heating the observed tissue and measuring the thermal response during the recovery phase (active or dynamic thermography). Numerical modelling of heat transfer phenomena in biological tissue during dynamic thermography can aid the technique by improving process parameters or by estimating unknown tissue parameters from measured data. This paper presents a nonlinear numerical model of multilayer skin tissue containing a skin tumor, together with the thermoregulation response of the tissue during the cooling-rewarming processes of dynamic thermography. The model is based on the Pennes bioheat equation and is solved numerically using a subdomain boundary element method that treats the problem as axisymmetric. The paper includes computational tests and numerical results for Clark II and Clark IV tumors, comparing models using constant and temperature-dependent thermophysical properties; the comparison showed noticeable differences and highlighted the importance of using a local thermoregulation model.
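
For reference, the Pennes bioheat equation on which the model is based can be written in its standard form (generic notation; the paper's specific boundary conditions and temperature-dependent properties are not reproduced here):

```latex
\rho c \frac{\partial T}{\partial t}
  = \nabla \cdot \left( k \nabla T \right)
  + \rho_b c_b \omega_b \left( T_a - T \right)
  + q_m
```

where ρ, c, and k are the tissue density, specific heat, and thermal conductivity; ρ_b, c_b, and ω_b are the blood density, specific heat, and perfusion rate; T_a is the arterial blood temperature; and q_m is the metabolic heat generation.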

Keywords: boundary element method, dynamic thermography, static thermography, skin tumor diagnostic

Procedia PDF Downloads 107
20871 Amperometric Biosensor for Glucose Determination Based on a Recombinant Mn Peroxidase from Corn Cross-linked to a Gold Electrode

Authors: Anahita Izadyar, My Ni Van, Kayleigh Amber Rodriguez, Ilwoo Seok, Elizabeth E. Hood

Abstract:

Using a recombinant enzyme derived from corn and a simple modification, we fabricated a facile, fast, and cost-effective biosensor for measuring glucose. The Nafion/plant-produced Mn peroxidase (PPMP)–glucose oxidase (GOx)–bovine serum albumin (BSA)/Au electrode showed an excellent amperometric response for detecting glucose. This biosensor responds over a wide glucose range of 20.0 µM−15.0 mM and has a limit of detection (LOD) of 2.90 µM. The reproducibility of the response across six electrodes is also substantial, indicating the capability of this biosensor to detect glucose over the wide concentration range of 3.10 ± 0.19 µM to 13.2 ± 1.8 mM. The selectivity of the electrode was investigated in an optimized experimental solution containing 10% diet green tea with citrus, which contains ascorbic acid (AA) and citric acid (CA), over a wide glucose concentration range of 0.02 to 14.0 mM, with an LOD of 3.10 µM. Reproducibility in this sample was also investigated using four electrodes and showed notable results over the wide concentration range of 3.35 ± 0.45 µM to 13.0 ± 0.81 mM. We also evaluated the biosensor with other voltammetric methods: linear sweep voltammetry (LSV) showed a wide range of 0.10−15.0 mM for glucose detection with a detection limit of 19.5 µM. The strengths of this enzyme biosensor are its simplicity, wide linear ranges, sensitivity, selectivity, and low limits of detection. We expect the modified biosensor to have potential for monitoring various biofluids.
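
As an illustrative sketch only (not the authors' procedure), a limit of detection of the kind quoted above is commonly estimated from a linear calibration as LOD = 3.3·σ/S, where σ is the standard deviation of the response and S is the calibration slope; the data below are hypothetical.

```python
# Hypothetical calibration data: glucose concentration (µM) vs. amperometric current (µA).
import numpy as np

conc = np.array([20, 50, 100, 500, 1000, 5000, 10000, 15000], dtype=float)  # µM
current = np.array([0.012, 0.029, 0.058, 0.29, 0.57, 2.8, 5.6, 8.3])        # µA

slope, intercept = np.polyfit(conc, current, 1)                    # linear calibration fit
residual_sd = np.std(current - (slope * conc + intercept), ddof=2) # scatter about the fit

lod = 3.3 * residual_sd / slope   # common convention; an assumption, not the paper's method
print(f"slope = {slope:.3e} µA/µM, LOD ≈ {lod:.1f} µM")
```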

Keywords: plant-produced manganese peroxidase, enzyme-based biosensors, glucose, modified gold electrode, glucose oxidase

Procedia PDF Downloads 140
20870 Application of Improved Semantic Communication Technology in Remote Sensing Data Transmission

Authors: Tingwei Shu, Dong Zhou, Chengjun Guo

Abstract:

Semantic communication is an emerging form of communication that realizes intelligent communication by extracting and transmitting the semantic information of data at the source and recovering the data at the receiving end. It can effectively address data transmission under conditions of large data volumes, low SNR, and restricted bandwidth. With the development of deep learning, semantic communication has matured further and is gradually being applied in fields such as the Internet of Things, Unmanned Aerial Vehicle cluster communication, and remote sensing. We propose an improved semantic communication system for transmitting remote sensing images when data volumes are huge and spectrum resources are limited. At the transmitter, the semantic information of remote sensing images must be extracted, but this raises some problems: a traditional semantic communication system based on a Convolutional Neural Network (CNN) cannot account for both the global and local semantic information of the image, which results in less-than-ideal image recovery at the receiving end. Therefore, we adopt an improved Vision-Transformer-based structure as the semantic encoder, instead of the mainstream CNN-based one, to extract the image semantic features. In this paper, we first perform pre-processing operations on remote sensing images to improve their resolution and thereby obtain images with more semantic information. We use the wavelet transform to decompose the image into high-frequency and low-frequency components, perform bilinear interpolation on the high-frequency components and bicubic interpolation on the low-frequency components, and finally apply the inverse wavelet transform to obtain the preprocessed image. We then adopt the improved Vision-Transformer structure as the semantic coder to extract and transmit the semantic information of remote sensing images. The Vision-Transformer structure trains better on huge data volumes and extracts richer image semantic features, and its multi-layer self-attention mechanism better captures the correlations between semantic features and reduces redundant features. Secondly, to improve coding efficiency, we reduce the quadratic complexity of the self-attention mechanism to linear, improving the model's image data processing speed. We conducted experimental simulations on the RSOD dataset and compared the designed system with a CNN-based semantic communication system and with image coding methods such as BPG and JPEG, verifying that the method can effectively alleviate the problem of excessive data volume and improve the performance of image data communication.
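
The wavelet-based preprocessing step described above can be sketched as follows (a minimal illustration using PyWavelets and SciPy; the wavelet family, interpolation factors, and function names are assumptions, not the authors' implementation):

```python
# Sketch of the resolution-enhancement preprocessing: DWT -> interpolate subbands -> inverse DWT.
import numpy as np
import pywt
from scipy.ndimage import zoom

def preprocess(image: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    """Roughly double the resolution of a single-band remote sensing image."""
    cA, (cH, cV, cD) = pywt.dwt2(image, wavelet)          # low- and high-frequency subbands
    cA_up = zoom(cA, 2, order=3)                          # cubic interpolation (low-frequency)
    cH_up, cV_up, cD_up = (zoom(c, 2, order=1) for c in (cH, cV, cD))  # bilinear (high-frequency)
    return pywt.idwt2((cA_up, (cH_up, cV_up, cD_up)), wavelet)

upscaled = preprocess(np.random.rand(256, 256))           # e.g. a 256x256 band -> ~512x512
print(upscaled.shape)
```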

Keywords: semantic communication, transformer, wavelet transform, data processing

Procedia PDF Downloads 78
20869 Application of Neutron Stimulated Gamma Spectroscopy for Soil Elemental Analysis and Mapping

Authors: Aleksandr Kavetskiy, Galina Yakubova, Nikolay Sargsyan, Stephen A. Prior, H. Allen Torbert

Abstract:

Determining soil elemental content and its distribution (mapping) within a field are key features of modern agricultural practice. While traditional chemical analysis is a time-consuming and labor-intensive multi-step process (e.g., sample collection, transport to the laboratory, physical preparation, and chemical analysis), neutron-gamma soil analysis can be performed in situ. This analysis is based on the registration of gamma rays emitted from nuclei upon interaction with neutrons. Soil elements such as Si, C, Fe, O, Al, K, and H (moisture) can be assessed with this method. Data obtained from the analysis can be used directly to create soil elemental distribution maps (based on ArcGIS software) suitable for agricultural purposes. The neutron-gamma analysis system developed for field application consists of an MP320 Neutron Generator (Thermo Fisher Scientific, Inc.), 3 sodium iodide gamma detectors (SCIONIX, Inc.) with a total volume of 7 liters, 'split electronics' (XIA, LLC), a power system, and an operational computer. Paired with GPS, this system can be used in scanning mode to acquire gamma spectra while traversing a field. From the acquired spectra, soil elemental content can be calculated. These data can be combined with geographic coordinates in a geographic information system (i.e., ArcGIS) to produce elemental distribution maps suitable for agricultural purposes. Special software has been developed that acquires gamma spectra, processes and sorts the data, calculates soil elemental content, and combines these data with measured geographic coordinates to create soil elemental distribution maps. For example, 5.5 hours were needed to acquire the data for a carbon distribution map of an 8.5 ha field. This paper briefly describes the physics behind the neutron-gamma analysis method, the physical construction of the measurement system, and its main characteristics and operating modes during field surveys. Soil elemental distribution maps resulting from field surveys are presented and discussed. These maps were similar to maps created on the basis of chemical analysis and of soil moisture determined by soil electrical conductivity, and they were reproducible as well. Based on these facts, it can be asserted that neutron-stimulated soil gamma spectroscopy paired with a GPS system is fully applicable to agricultural soil elemental field mapping.
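
As a hedged sketch of how georeferenced elemental values from such a scan could be gridded into a distribution map before import into GIS software (the field names, grid size, and choice of linear interpolation are our assumptions, not necessarily the authors' workflow):

```python
# Grid scattered (lon, lat, carbon %) scan points into a raster-like map for GIS import.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
# Hypothetical scan records: longitude, latitude, and calculated carbon content (%) per spectrum.
lon = rng.uniform(-85.50, -85.49, 200)
lat = rng.uniform(32.40, 32.41, 200)
carbon = rng.uniform(0.8, 1.6, 200)

grid_lon, grid_lat = np.meshgrid(np.linspace(lon.min(), lon.max(), 100),
                                 np.linspace(lat.min(), lat.max(), 100))
carbon_map = griddata((lon, lat), carbon, (grid_lon, grid_lat), method="linear")

np.savetxt("carbon_map.csv", carbon_map, delimiter=",")   # exportable grid for mapping software
```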

Keywords: ArcGIS mapping, neutron gamma analysis, soil elemental content, soil gamma spectroscopy

Procedia PDF Downloads 134
20868 Large Eddy Simulation of Hydrogen Deflagration in Open Space and Vented Enclosure

Authors: T. Nozu, K. Hibi, T. Nishiie

Abstract:

This paper discusses the applicability of a numerical model for predicting damage from an accidental hydrogen explosion occurring in a hydrogen facility. The numerical model is based on the unstructured finite volume method (FVM) code “NuFD/FrontFlowRed”. For simulating unsteady turbulent combustion of leaked hydrogen gas, a combination of Large Eddy Simulation (LES) and a combustion model was used. The combustion model is based on a two-scalar flamelet approach, in which a G-equation model and a conserved scalar model describe the propagation of the premixed flame surface and the diffusion combustion process, respectively. To validate this numerical model, we simulated two previous hydrogen explosion tests. The first is an open-space explosion test, in which the source was a prismatic 5.27 m3 volume containing a 30% hydrogen-air mixture; a reinforced concrete wall was set 4 m away from the front surface of the source, which was ignited at the bottom center by a spark. The other is a vented-enclosure explosion test, in which the chamber was 4.6 m × 4.6 m × 3.0 m with a vent opening of 5.4 m2 on one side; the test was performed with ignition at the center of the wall opposite the vent. Hydrogen-air mixtures with hydrogen concentrations close to 18% vol. were used in the tests. The results of the numerical simulations are compared with the previous experimental data to assess the accuracy of the numerical model, and we verified that the simulated overpressures and flame time-of-arrival data were in good agreement with the results of the two explosion tests.
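
For context, the G-equation underlying the premixed part of the flamelet model is usually written in the level-set form below (generic notation; the code's exact formulation, including its flame speed closure, is not reproduced here):

```latex
\frac{\partial G}{\partial t} + \mathbf{u} \cdot \nabla G = S_T \, \lvert \nabla G \rvert
```

where the iso-surface G = G₀ marks the flame front, **u** is the resolved flow velocity, and S_T is the (turbulent) flame propagation speed.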

Keywords: deflagration, large eddy simulation, turbulent combustion, vented enclosure

Procedia PDF Downloads 244
20867 Dynamic Process Model for Designing Smart Spaces Based on Context-Awareness and Computational Methods Principles

Authors: Heba M. Jahin, Ali F. Bakr, Zeyad T. Elsayad

Abstract:

A smart space can be defined as any working environment that integrates embedded computers, information appliances, and multi-modal sensors to support the interaction between users, their activities, and their behavior in the space. A smart space must therefore be aware of its context and automatically adapt to contextual changes by interacting with its physical environment through natural and multimodal interfaces and by serving information proactively. This paper proposes a dynamic framework for the architectural design process of such spaces, based on the principles of computational methods and context-awareness, to help create a field of changes and modifications and to generate possibilities and concerns regarding the physical, structural, and user contexts. The framework comprises five main processes, as sketched in the example below: gathering and analyzing data to generate smart design scenarios, parameters, and attributes; transforming these by coding into four types of models; connecting those models in an interaction model that represents the context-awareness system; transforming that model into a virtual and ambient environment that represents the physical, real environment and acts as a link between users and the activities taking place in the smart space; and, finally, collecting feedback from users of that environment to ensure that the design of the smart space fulfills their needs. The resulting design process will help in designing smart spaces that can be adapted and controlled to meet users' defined goals, needs, and activities.
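
A heavily hedged sketch of how the five processes might be chained programmatically is given below; all class, function, and model names are hypothetical and serve only to illustrate the sequence described above, not the authors' framework.

```python
# Illustrative pipeline of the five processes (hypothetical names, not the authors' system).
from dataclasses import dataclass, field

@dataclass
class DesignScenario:
    parameters: dict = field(default_factory=dict)
    attributes: dict = field(default_factory=dict)

def gather_and_analyze(sensor_data: dict) -> DesignScenario:
    """Process 1: derive smart design scenarios, parameters, and attributes from raw data."""
    return DesignScenario(parameters={"occupancy": sensor_data.get("occupancy", 0)})

def code_models(scenario: DesignScenario) -> list:
    """Process 2: transform the scenario by coding into four model types (names assumed)."""
    return [{"type": t, "source": scenario.parameters}
            for t in ("physical", "structural", "user", "activity")]

def build_interaction_model(models: list) -> dict:
    """Process 3: connect the models into a context-awareness interaction model."""
    return {"context_awareness": models}

def to_ambient_environment(interaction_model: dict) -> dict:
    """Process 4: project the interaction model into a virtual/ambient environment."""
    return {"environment": interaction_model, "adaptive": True}

def collect_feedback(environment: dict, user_goals: list) -> bool:
    """Process 5: check the environment against users' defined goals and needs."""
    return environment["adaptive"] and bool(user_goals)

env = to_ambient_environment(build_interaction_model(
    code_models(gather_and_analyze({"occupancy": 12}))))
print(collect_feedback(env, ["comfort", "energy saving"]))
```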

Keywords: computational methods, context-awareness, design process, smart spaces

Procedia PDF Downloads 331