Search results for: input output linearization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3634

2524 The Proton Flow Battery for Storing Renewable Energy: A Theoretical Model of Electrochemical Hydrogen Storage in an Activated Carbon Electrode

Authors: Sh. Heidari, A. J. Andrews, A. Oberoi

Abstract:

Electrochemical storage of hydrogen in activated carbon electrodes as part of a reversible fuel cell offers a potentially attractive option for storing surplus electrical energy from inherently variable solar and wind energy resources. Such a system – which we have called a proton flow battery – promises to have a round-trip energy efficiency comparable to lithium-ion batteries, while having higher gravimetric and volumetric energy densities. In this paper, a theoretical model is presented of the process of H+ ion (proton) conduction through an acid electrolyte into a highly porous activated carbon electrode, where it is neutralised and adsorbed on the inner surfaces of pores. A Butler-Volmer type equation relates the rate of adsorption to the potential difference between the activated carbon surface and the electrolyte. This model for the hydrogen storage electrode is then incorporated into a more general MATLAB-based computer model of the entire electrochemical cell, including the oxygen electrode. Hence a theoretical voltage-current curve is generated for given input parameters for a particular activated carbon electrode. It is shown that theoretical VI curves produced by the model can be fitted accurately to experimental data from an actual electrochemical cell with the same characteristics. By obtaining the best-fit values of input parameters, such as the exchange current density and charge transfer coefficient for the hydrogen adsorption reaction, an improved understanding of the adsorption reaction is obtained. This new model will assist in designing improved proton flow batteries for storing solar and wind energy.
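For reference, a generic Butler-Volmer expression of the kind described (an illustrative form with assumed symbols, since the abstract does not give the authors' exact equation) relates the adsorption current density to the potential difference between the activated carbon surface and the electrolyte:

j = j_0 \left[ \exp\!\left( \frac{\alpha_a F \eta}{RT} \right) - \exp\!\left( -\frac{\alpha_c F \eta}{RT} \right) \right]

where j_0 is the exchange current density, \alpha_a and \alpha_c are the anodic and cathodic charge transfer coefficients, F is the Faraday constant, R the gas constant, T the temperature, and \eta the surface-electrolyte potential difference. Fitting j_0 and a transfer coefficient to measured voltage-current data is the kind of parameter estimation the abstract describes.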

Keywords: electrochemical hydrogen storage, proton flow battery, butler-volmer equation, activated carbon

Procedia PDF Downloads 495
2523 Design and Development of On-Line, On-Site, In-Situ Induction Motor Performance Analyser

Authors: G. S. Ayyappan, Srinivas Kota, Jaffer R. C. Sheriff, C. Prakash Chandra Joshua

Abstract:

In the present scenario of energy crises, energy conservation in electrical machines is very important in industry. In order to conserve energy, one needs to monitor the performance of an induction motor on-site and in-situ. The instruments available for this purpose are very scarce and very expensive. This paper deals with the design and development of an on-line, on-site, in-situ induction motor performance analyser. The system measures only a few electrical input parameters: input voltage, line current, power factor, frequency, powers, and motor shaft speed. These measured data are combined with nameplate details to compute the operating efficiency of the induction motor. The system employs the method of computing motor losses with the help of equivalent circuit parameters. The equivalent circuit parameters of the motor concerned are estimated using the developed algorithm at any load condition and stored in the system memory. The developed instrument is reliable, accurate, compact, rugged, and cost-effective. This portable instrument can be used as a handy tool to study the performance of both slip-ring and cage induction motors. During the analysis, the data can be stored on an SD memory card, and one can perform various analyses such as load vs. efficiency and torque vs. speed characteristics. With the help of the developed instrument, one can operate the motor around its Best Operating Point (BOP). Continuous monitoring of motor efficiency could lead to Life Cycle Assessment (LCA) of motors. LCA helps in taking decisions on motor replacement, retention, or refurbishment.
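As an illustration of the loss-computation step, the sketch below evaluates a standard per-phase induction motor equivalent circuit at a given operating point and returns the efficiency. It is a minimal sketch, not the authors' algorithm; all parameter values and the friction/windage term are assumptions for illustration.

```python
import math

def motor_efficiency(V_ll, f, n_rpm, poles, R1, X1, R2, X2, Xm, P_fw=0.0):
    """Efficiency from the standard per-phase equivalent circuit (core loss neglected)."""
    n_sync = 120.0 * f / poles                 # synchronous speed, rpm
    s = (n_sync - n_rpm) / n_sync              # slip
    V_ph = V_ll / math.sqrt(3)                 # per-phase voltage (star equivalent)
    Z1 = complex(R1, X1)                       # stator impedance
    Z2 = complex(R2 / s, X2)                   # rotor branch referred to the stator
    Zm = complex(0.0, Xm)                      # magnetising branch
    Z_in = Z1 + (Z2 * Zm) / (Z2 + Zm)
    I1 = V_ph / Z_in                           # stator current phasor
    P_in = 3.0 * (V_ph * I1.conjugate()).real  # electrical input power
    E = V_ph - I1 * Z1                         # air-gap voltage
    I2 = E / Z2                                # rotor branch current
    P_airgap = 3.0 * abs(I2) ** 2 * R2 / s     # power crossing the air gap
    P_mech = (1.0 - s) * P_airgap              # converted mechanical power
    P_shaft = P_mech - P_fw                    # minus friction and windage
    return P_shaft / P_in

# Example with illustrative 400 V, 50 Hz, 4-pole machine parameters (not measured data)
print(motor_efficiency(400, 50, 1450, 4, R1=0.5, X1=1.2, R2=0.4, X2=1.2, Xm=30, P_fw=150))
```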

Keywords: energy conservation, equivalent circuit parameters, induction motor efficiency, life cycle assessment, motor performance analysis

Procedia PDF Downloads 374
2522 A Radiomics Approach to Predict the Evolution of Prostate Imaging Reporting and Data System Score 3/5 Prostate Areas in Multiparametric Magnetic Resonance

Authors: Natascha C. D'Amico, Enzo Grossi, Giovanni Valbusa, Ala Malasevschi, Gianpiero Cardone, Sergio Papa

Abstract:

Purpose: To characterize, through a radiomic approach, the nature of areas classified PI-RADS (Prostate Imaging Reporting and Data System) 3/5, recognized in multiparametric prostate magnetic resonance with T2-weighted (T2w), diffusion and perfusion sequences with paramagnetic contrast. Methods and Materials: 24 cases undergoing multiparametric prostate MR and biopsy were admitted to this pilot study. The clinical outcome of the PI-RADS 3/5 areas was determined through biopsy, which identified 8 malignant tumours. The analysed images were acquired with a Philips Achieva 1.5T machine with a CE T2-weighted sequence in the axial plane. Semi-automatic tumour segmentation was carried out on the MR images using 3DSlicer image analysis software. 45 shape-based, intensity-based and texture-based features were extracted and represented the input for pre-processing. An evolutionary algorithm (a TWIST system based on the KNN algorithm) was used to subdivide the dataset into training and testing sets and to select the features yielding the maximal amount of information. After this pre-processing, 20 input variables were selected, and different machine learning systems were used to develop a predictive model based on a training-testing crossover procedure. Results: The best machine learning system (a three-layer feed-forward neural network) obtained a global accuracy of 90% (80% sensitivity and 100% specificity) with an area under the ROC curve of 0.82. Conclusion: Machine learning systems coupled with radiomics show promising potential in distinguishing benign from malignant tumours in PI-RADS 3/5 areas.

Keywords: machine learning, MR prostate, PI-RADS 3, radiomics

Procedia PDF Downloads 183
2521 A Continuous Real-Time Analytic for Predicting Instability in Acute Care Rapid Response Team Activations

Authors: Ashwin Belle, Bryce Benson, Mark Salamango, Fadi Islim, Rodney Daniels, Kevin Ward

Abstract:

A reliable, real-time, and non-invasive system that can identify patients at risk for hemodynamic instability is needed to aid clinicians in their efforts to anticipate patient deterioration and initiate early interventions. The purpose of this pilot study was to explore the clinical capabilities of a real-time analytic from a single lead of an electrocardiograph to correctly distinguish between rapid response team (RRT) activations due to hemodynamic (H-RRT) and non-hemodynamic (NH-RRT) causes, as well as predict H-RRT cases with actionable lead times. The study consisted of a single-center, retrospective cohort of 21 patients with RRT activations from step-down and telemetry units. Through electronic health record review and blinded to the analytic's output, each patient was categorized by clinicians into H-RRT and NH-RRT cases. The analytic output and the categorization were compared, and the prediction lead time prior to the RRT call was calculated. The analytic correctly distinguished between H-RRT and NH-RRT cases with 100% accuracy, demonstrating 100% positive and negative predictive values, and 100% sensitivity and specificity. In H-RRT cases, the analytic detected hemodynamic deterioration with a median lead time of 9.5 hours prior to the RRT call (range 14 minutes to 52 hours). The study demonstrates that an electrocardiogram (ECG) based analytic has the potential for providing clinical decision and monitoring support for caregivers to identify at-risk patients within a clinically relevant timeframe, allowing for increased vigilance and early interventional support to reduce the chances of continued patient deterioration.
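For clarity on how the reported figures are defined, the short sketch below computes sensitivity, specificity, PPV, NPV, accuracy, and a median lead time from a confusion matrix and a list of per-case lead times. The numbers used are placeholders, not study data.

```python
from statistics import median

def diagnostic_metrics(tp, fp, tn, fn):
    """Standard definitions of the performance measures reported in the study."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Placeholder confusion matrix and lead times (hours); illustrative only
print(diagnostic_metrics(tp=8, fp=0, tn=13, fn=0))
lead_times_h = [0.23, 4.0, 9.5, 12.0, 52.0]
print("median lead time:", median(lead_times_h), "hours")
```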

Keywords: critical care, early warning systems, emergency medicine, heart rate variability, hemodynamic instability, rapid response team

Procedia PDF Downloads 141
2520 Seismic Behaviour of RC Knee Joints in Closing and Opening Actions

Authors: S. Mogili, J. S. Kuang, N. Zhang

Abstract:

Knee joints, the beam-column connections found at the roof level of moment resisting frame buildings, are inherently different from conventional interior and exterior beam-column connections in the way that forces from adjoining members are transferred into the joint and then resisted by the joint. A knee connection has two distinct load resisting mechanisms, one each for closing and opening actions, acting simultaneously under reversed cyclic loading. In spite of many distinct differences in the behaviour of shear resistance in knee joints, there are no special design provisions in the major design codes available across the world, due to a lack of in-depth research on knee connections. To understand the relative importance of opening and closing actions in design, it is imperative to study knee joints under varying shear stresses, especially at higher opening-to-closing shear stress ratios. Three knee joint specimens, under different input shear stresses, were designed to produce a varying ratio of input opening to closing shear stresses. The design was carried out in such a way that the ratio of the flexural strength of beams, with consideration of axial forces, in opening to closing actions is maintained at 0.5, 0.7, and 1.0, thereby resulting in the required variation of opening-to-closing joint shear stress ratios among the specimens. The behaviour of these specimens was then carefully studied in terms of closing and opening capacities, hysteretic behaviour, and envelope curves to understand the differences in joint performance, based on which an attempt to suggest design guidelines for knee joints is made, emphasizing the relative importance of opening and closing actions. Specimens with relatively higher opening stresses were observed to be more vulnerable under the action of seismic loading.

Keywords: knee joints, large-scale testing, opening and closing shear stresses, seismic performance

Procedia PDF Downloads 215
2519 Commissioning, Test and Characterization of Low-Tar Biomass Gasifier for Rural Applications and Small-Scale Plant

Authors: M. Mashiur Rahman, Ulrik Birk Henriksen, Jesper Ahrenfeldt, Maria Puig Arnavat

Abstract:

Using biomass gasification to make producer gas is one of the promising sustainable energy options available for small-scale plants and rural applications for power and electricity. The tar content of producer gas is the main problem if it is used directly as a fuel. A low-tar biomass (LTB) gasifier of approximately 30 kW capacity has been developed to solve this. The basic principle of the LTB gasifier is a moving bed gasifier with internal recirculation of pyrolysis gas. The gasifier is built around the concept of mixing the pyrolysis gases with gasifying air and burning the mixture in a separate combustion chamber. Five tests were carried out with the use of wood pellets and wood chips separately, with moisture contents of 9-34%. The LTB gasifier offers excellent opportunities for achieving extremely low tar levels in the producer gas. The gasifier's producer gas had an extremely low tar content of 21.2 mg/Nm³ (avg.) and an average lower heating value (LHV) of 4.69 MJ/Nm³. Tar contents found in the different tests were in the range of 10.6-29.8 mg/Nm³. This low tar content makes the producer gas suitable for direct use in an internal combustion engine. Using mass and energy balances, the average gasifier capacity and cold gas efficiency (CGE) were found to be 23.1 kW and 82.7% for wood chips, and 33.1 kW and 60.5% for wood pellets, respectively. The average heat loss in terms of higher heating value (HHV) was 3.2% of thermal input for wood chips and 1% for wood pellets, while the heat loss in terms of enthalpy was 1% of thermal input. Thus, the LTB gasifier performs better than typical gasifiers in terms of heat loss. An equivalence ratio (ER) in the range of 0.29 to 0.41 gives better performance in terms of heating value and CGE. The specific gas production yields in the above ER range were 2.1-3.2 Nm³/kg. Heating value and CGE change proportionally with the producer gas yield. The average gas composition (H₂-19%, CO-19%, CO₂-10%, CH₄-0.7% and N₂-51%) obtained for wood chips is richer than the typical producer gas composition. The temperature profile of the LTB gasifier also showed relatively low temperatures compared to a typical moving bed gasifier; the average partial oxidation zone temperature was 970°C for wood chips. The use of a separate combustor in the partial oxidation zone substantially lowers the bed temperature to 750°C. During the test, the engine was started and operated completely on the producer gas. The engine operated well on the produced gas, and no deposits were observed in the engine afterwards. Part of the producer gas flow was used for engine operation; the corresponding electrical power was 1.5 kW continuously, and a maximum power of 2.5 kW was also observed, while the maximum generator capacity is 3 kW. A thermodynamic equilibrium model is in good agreement with the experimental results and correctly predicts the equilibrium bed temperature, gas composition, LHV of the producer gas and ER when a heat loss of 4% of the energy input is considered.
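To make the reported performance figures concrete, the sketch below computes cold gas efficiency and equivalence ratio from the kind of quantities quoted above. The fuel LHV and stoichiometric air demand used in the example are assumed typical values for wood, not measurements from the study.

```python
def cold_gas_efficiency(gas_yield_nm3_per_kg, lhv_gas_mj_per_nm3, lhv_fuel_mj_per_kg):
    """Chemical energy in the cold producer gas divided by the fuel energy input."""
    return gas_yield_nm3_per_kg * lhv_gas_mj_per_nm3 / lhv_fuel_mj_per_kg

def equivalence_ratio(air_supplied_kg_per_kg_fuel, stoich_air_kg_per_kg_fuel):
    """Actual air supply relative to the stoichiometric (complete-combustion) demand."""
    return air_supplied_kg_per_kg_fuel / stoich_air_kg_per_kg_fuel

# Illustrative values: gas yield and LHV taken from the abstract; the wood LHV
# (~18 MJ/kg) and stoichiometric air demand (~6 kg air/kg dry wood) are assumptions.
print("CGE:", cold_gas_efficiency(2.7, 4.69, 18.0))
print("ER :", equivalence_ratio(2.1, 6.0))
```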

Keywords: biomass gasification, low-tar biomass gasifier, tar elimination, engine, deposits, condensate

Procedia PDF Downloads 110
2518 Analysis of the Physical Behavior of Library Users in Reading Rooms through GIS: A Case Study of the Central Library of Tehran University

Authors: Roya Pournaghi

Abstract:

Measuring the extent of daily use of a library's study space is of utmost significance in order to develop, re-organize and maintain the efficiency of the study space. The current study aimed to employ GIS in analyzing the study hall space of the document center and central library of Tehran University and to determine the extent of use of the study chairs and desks by students, the intended users. The study combined survey methods with a descriptive design. The required data were collected, entered into ArcGIS software, analysed there, and the results were displayed on a map of the library floor plan. A spatial database and plan of the study halls of the Central Library of Tehran University were also produced, based on the amount of space used by library members. Results showed that Biruni hall had the highest occupancy rate of tables and chairs compared to the other halls. The Hall of Science and Technology, with an average occupancy rate of 0.39 for tables, had the fewest table users, while Rashid al-Din hall and the Hall of Science and Technology, with an average occupancy rate of 0.40, had the fewest users of seats. In this study, the occupancy of the study halls at different periods (mornings, afternoons, evenings, and over several months) was compared through GIS. The system analyzed the spatial relationships effectively and efficiently. The output of this study can be used by administrators and librarians to determine the exact utilisation of the study halls' equipment, and librarians can use the output maps to design more efficient space in the library.

Keywords: geospatial information system, spatial analysis, reading room, academic libraries, library’s user, central library of Tehran university

Procedia PDF Downloads 223
2517 Fabrication of Glucose/O₂ Microfluidic Biofuel Cell with Double Layer of Electrodes

Authors: Haroon Khan, Chul Min Kim, Sung Yeol Kim, Sanket Goel, Prabhat K. Dwivedi, Ashutosh Sharma, Gyu Man Kim

Abstract:

Enzymatic biofuel cells (EBFCs) have drawn the attention of researchers due to their promising application in medical implants. In EBFCs, electricity is produced with the help of redox enzymes. In this study, we report the fabrication of a membraneless EBFC with a new electrode design to overcome microchannel-related limitations. The device consists of a double layer of electrodes on both sides of a Y-shaped microchannel to reduce the effect of the oxygen depletion layer and the diffusion of fuel and oxidant at the end of the microchannel. Moreover, the length of the microchannel was reduced by half while keeping the same area of multiwalled carbon nanotube (MWCNT) electrodes. Polydimethylsiloxane (PDMS) stencils were used to pattern the MWCNT electrodes on etched Indium Tin Oxide (ITO) glass, and PDMS casting was used to fabricate the microchannel of the device. The anode and cathode were modified with glucose oxidase and laccase, respectively; these enzymes were covalently bound to carboxyl MWCNTs with the help of EDC/NHS. Glucose used as fuel was oxidized by glucose oxidase at the anode, while oxygen was reduced to water at the cathode. The resulting devices were investigated with the help of polarization curves obtained by chronopotentiometry using a potentiostat. From the results, we conclude that the performance of the double layer EBFC is improved by 15% compared to the single layer EBFC, delivering a maximum power density of 71.25 µW cm⁻² at a cell potential of 0.3 V and a current density of 250 µA cm⁻², at a microchannel height of 450 µm and a flow rate of 25 ml hr⁻¹. However, the new device was stable only for three days, after which its power output rapidly dropped by 75%. This work demonstrates that the power output of a membraneless EBFC can be improved with this design, but further efforts are needed to make the device stable over a long period of time.

Keywords: EBFC, glucose, MWCNT, microfluidic

Procedia PDF Downloads 318
2516 A Hybrid Multi-Criteria Hotel Recommender System Using Explicit and Implicit Feedbacks

Authors: Ashkan Ebadi, Adam Krzyzak

Abstract:

Recommender systems, also known as recommender engines, have become an important research area and are now being applied in various fields. In addition, the techniques behind recommender systems have improved over time. In general, such systems help users find the products or services they require (e.g. books, music) by analyzing and aggregating other users' activities and behavior, mainly in the form of reviews, and making the best recommendations. The recommendations can facilitate the user's decision-making process. Despite the wide literature on the topic, using multiple data sources of different types as the input has not been widely studied. Recommender systems can benefit from the high availability of digital data to collect input data of different types which implicitly or explicitly help the system improve its accuracy. Moreover, most of the existing research in this area is based on single rating measures, in which a single rating is used to link users to items. This paper proposes a highly accurate hotel recommender system, implemented in various layers. Using a multi-aspect rating system and benefitting from large-scale data of different types, the recommender system suggests hotels that are personalized and tailored for the given user. The system employs natural language processing and topic modelling techniques to assess the sentiment of the users' reviews and extract implicit features. The entire recommender engine contains multiple sub-systems, namely a user clustering module, a matrix factorization module, and a hybrid recommender system. Each sub-system contributes to the final composite set of recommendations by covering a specific aspect of the problem. The accuracy of the proposed recommender system has been tested intensively, and the results confirm the high performance of the system.
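As an illustration of the matrix factorization sub-system mentioned above, the sketch below learns latent user and item factors from an explicit rating matrix by stochastic gradient descent. It is a minimal, generic example, not the authors' implementation, and all hyperparameters and the toy rating matrix are assumptions.

```python
import numpy as np

def factorize(R, k=10, steps=200, lr=0.01, reg=0.05, seed=0):
    """Learn user/item latent factors P, Q so that P @ Q.T approximates the known ratings.
    R: user x item rating matrix with 0 marking unknown entries."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = rng.normal(scale=0.1, size=(n_users, k))
    Q = rng.normal(scale=0.1, size=(n_items, k))
    rows, cols = R.nonzero()
    for _ in range(steps):
        for u, i in zip(rows, cols):
            err = R[u, i] - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

# Toy 4-user x 5-hotel rating matrix (0 = not rated); P @ Q.T fills in the gaps
R = np.array([[5, 3, 0, 1, 0],
              [4, 0, 0, 1, 1],
              [1, 1, 0, 5, 4],
              [0, 1, 5, 4, 0]], dtype=float)
P, Q = factorize(R)
print(np.round(P @ Q.T, 2))
```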

Keywords: tourism, hotel recommender system, hybrid, implicit features

Procedia PDF Downloads 268
2515 Minimum Wages and Their Impact on Agricultural and Non-Agricultural Sectors with Special Reference to Recent Labour Reforms in India

Authors: Bikash Kumar Malick

Abstract:

Labour reform is a much-celebrated theme for policy makers; at the same time, it is also a misunderstood concept, viewed sceptically even by the educated masses in India. One of the widely discussed topics that needs an in-depth examination is India's labour laws. Making the labour laws simpler and more concise in form and implementation may help identify the exact requirements of labour reform. Guidance for the states in making laws on the subject is also required, as the Indian Constitution itself is federal in form and unitary in spirit. Recently, the Codes of Wages Bill has been introduced in the Indian Parliament, while three other codes are waiting to follow; these codes highlight the simplified features of labour laws intended to enable labour reform in a succinct manner. However, they still cause confusion in the minds of people. This confusion needs to be dispelled by correlating the labour reforms of the centre and the states, which together generate employment and make growth sustainable in India, and by providing a clear public understanding. The time is also ripe to minimise apprehension about the coming labour laws simplified into different codes in India. This article attempts to highlight the need for labour reform and its possible impact. It also examines the higher rates of minimum wages and their links with coverage of the agricultural and non-agricultural sectors (including mines) over time. It further considers the central-sphere and state-sphere minimum wages, which are linked to the Consumer Price Index to take into account the living standard of workers, and examines the cause and effect between minimum wage and output in both the agricultural and non-agricultural sectors with regression analysis. The increase in minimum wage has actually strengthened sustainable output.

Keywords: codes of wages, indian constitution, minimum wage, labour laws, labour reforms

Procedia PDF Downloads 193
2514 Experimental Study of Boost Converter Based PV Energy System

Authors: T. Abdelkrim, K. Ben Seddik, B. Bezza, K. Benamrane, Aeh. Benkhelifa

Abstract:

This paper proposes an implementation of a boost converter for a resistive load using photovoltaic energy as the source. The model of the photovoltaic cell and the operating principle of the boost converter are presented. A PIC microcontroller is used in the closed-loop control to generate the pulses controlling the converter circuit. To evaluate the performance of the boost converter, the output voltage of the PV panel is varied by shading one and two cells.
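For context, the sketch below gives the ideal steady-state boost relation and a simple duty-cycle correction of the kind a closed-loop controller performs. It assumes a lossless converter in continuous conduction mode and is not the authors' PIC firmware, whose control law and gains are not given in the abstract.

```python
def boost_vout(vin, duty):
    """Ideal steady-state output of a boost converter in continuous conduction mode."""
    assert 0.0 <= duty < 1.0
    return vin / (1.0 - duty)

def duty_step(duty, v_ref, v_out, gain=0.001):
    """One proportional correction of the duty cycle towards the voltage set-point."""
    duty += gain * (v_ref - v_out)
    return min(max(duty, 0.0), 0.95)           # clamp to a safe range

# Example: PV voltage sags from 17 V to 14 V (shaded cells); duty ratio rises to hold 24 V
duty, v_ref = 0.3, 24.0
for vin in (17.0, 14.0):
    for _ in range(5000):
        duty = duty_step(duty, v_ref, boost_vout(vin, duty))
    print(f"vin={vin} V -> duty={duty:.2f}, vout={boost_vout(vin, duty):.1f} V")
```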

Keywords: boost converter, microcontroller, photovoltaic power generation, shading cells

Procedia PDF Downloads 869
2513 Comparing Deep Architectures for Selecting Optimal Machine Translation

Authors: Despoina Mouratidis, Katia Lida Kermanidis

Abstract:

Machine translation (MT) is a very important task in Natural Language Processing (NLP). MT evaluation is crucial in MT development, as it constitutes the means to assess the success of an MT system, and also helps improve its performance. Several methods have been proposed for the evaluation of MT systems. Some of the most popular ones in automatic MT evaluation are score-based, such as the BLEU score, while others are based on lexical or syntactic similarity between the MT outputs and the reference, involving higher-level information like part-of-speech (POS) tagging. This paper presents a language-independent machine learning framework for classifying pairwise translations. This framework uses vector representations of two machine-produced translations, one from a statistical machine translation model (SMT) and one from a neural machine translation model (NMT). The vector representations consist of automatically extracted word embeddings and string-like language-independent features. These vector representations are used as input to a multi-layer neural network (NN) that models the similarity between each MT output and the reference, as well as between the two MT outputs. To evaluate the proposed approach, a professional translation and a "ground-truth" annotation are used. The parallel corpora used are English-Greek (EN-GR) and English-Italian (EN-IT), in the educational domain and of informal genres (video lecture subtitles, course forum text, etc.) that are difficult to translate reliably. Three basic deep learning (DL) architectures were tested with this schema: (i) fully-connected dense, (ii) Convolutional Neural Network (CNN), and (iii) Long Short-Term Memory (LSTM). Experiments show that all tested architectures achieved better results than some of the well-known basic approaches, such as Random Forest (RF) and Support Vector Machine (SVM). Better accuracy is obtained when LSTM layers are used in the schema, while in terms of balance between the classes, better results are obtained when dense layers are used, because the model then correctly classifies more sentences of the minority class (SMT). For a more integrated analysis of the accuracy results, a qualitative linguistic analysis was carried out. In this context, problems were identified with some figures of speech, such as metaphors, and with certain linguistic phenomena, such as paronyms. It is quite interesting to find out why all the classifiers led to worse accuracy results in Italian as compared to Greek, taking into account that the linguistic features employed are language-independent.
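A minimal sketch of the pairwise set-up described above is shown below: each training instance concatenates vector representations of the SMT output, the NMT output and the reference, and a small feed-forward network predicts which system produced the better translation. It uses scikit-learn with random placeholder vectors instead of real embeddings and string features, and it is not the authors' dense/CNN/LSTM architectures.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def pair_features(x_smt, x_nmt, x_ref):
    """Concatenate the two system outputs, the reference, and their absolute differences."""
    return np.concatenate([x_smt, x_nmt, x_ref,
                           np.abs(x_smt - x_ref), np.abs(x_nmt - x_ref)])

# Placeholder data: 1000 sentence triples of 50-d vectors; label 1 = NMT judged better
rng = np.random.default_rng(0)
triples = rng.normal(size=(1000, 3, 50))
y = rng.integers(0, 2, size=1000)

X = np.stack([pair_features(s, n, r) for s, n, r in triples])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```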

Keywords: machine learning, machine translation evaluation, neural network architecture, pairwise classification

Procedia PDF Downloads 123
2512 Enhancing Large Language Models' Data Analysis Capability with Planning-and-Execution and Code Generation Agents: A Use Case for Southeast Asia Real Estate Market Analytics

Authors: Kien Vu, Jien Min Soh, Mohamed Jahangir Abubacker, Piyawut Pattamanon, Soojin Lee, Suvro Banerjee

Abstract:

Recent advances in Generative Artificial Intelligence (GenAI), in particular Large Language Models (LLMs), have shown promise to disrupt multiple industries at scale. However, LLMs also present unique challenges, notably the so-called "hallucination" problem, the generation of outputs that are not grounded in the input data, which hinders their adoption into production. A common practice to mitigate the hallucination problem is to use a Retrieval Augmented Generation (RAG) system to ground LLMs' responses in ground truth. RAG converts the grounding documents into embeddings, retrieves the relevant parts using vector similarity between the user's query and the documents, and then generates a response that is based not only on the model's pre-trained knowledge but also on the specific information from the retrieved documents. However, the RAG approach is not suitable for tabular data and subsequent data analysis tasks for multiple reasons, such as information loss, data format, and the retrieval mechanism. In this study, we have explored a novel methodology that combines planning-and-execution and code generation agents to enhance LLMs' data analysis capabilities. The approach enables LLMs to autonomously dissect a complex analytical task into simpler sub-tasks and requirements and then convert them into executable segments of code. In the final step, it generates the complete response from the output of the executed code. When a beta version was deployed on DataSense, the property insight tool of PropertyGuru, the approach yielded promising results, as it was able to meet market insight and data visualization needs with high accuracy and extensive coverage while abstracting the complexities for real-estate agents and developers from non-programming backgrounds. In essence, the methodology not only refines the analytical process but also serves as a strategic tool for real estate professionals, aiding market understanding and enhancement without the need for programming skills. The implications extend beyond immediate analytics, paving the way for a new era in the real estate industry characterized by efficiency and advanced data utilization.
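To make the described flow concrete, below is a minimal sketch of a planning-and-execution loop with code generation. Here `call_llm` is a hypothetical placeholder (the abstract does not name the model or API), the prompt wording is invented for illustration, and sandboxing of the generated code is omitted.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for an LLM completion call; not a real API."""
    raise NotImplementedError("wire this to the LLM service of your choice")

def analyse(question: str, table_schema: str) -> str:
    # 1. Planning: decompose the analytical question into simpler sub-tasks.
    plan = call_llm(f"Break this analysis task into numbered sub-tasks.\n"
                    f"Schema: {table_schema}\nTask: {question}")
    results = []
    for step in plan.splitlines():
        # 2. Code generation: turn each sub-task into an executable segment of code.
        code = call_llm(f"Write Python/pandas code for: {step}\n"
                        f"Store the answer in a variable named result.\nSchema: {table_schema}")
        scope = {}
        exec(code, scope)                     # 3. Execution (no sandboxing in this sketch)
        results.append(scope.get("result"))
    # 4. The final response is composed from the executed outputs, not from memory alone.
    return call_llm(f"Summarise these results for the user.\n"
                    f"Question: {question}\nResults: {results}")
```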

Keywords: large language model, reasoning, planning and execution, code generation, natural language processing, prompt engineering, data analysis, real estate, data sense, PropertyGuru

Procedia PDF Downloads 78
2511 Multivariate Analysis on Water Quality Attributes Using Master-Slave Neural Network Model

Authors: A. Clementking, C. Jothi Venkateswaran

Abstract:

Mathematical and computational functionalities such as descriptive mining, optimization, and prediction are employed to support natural resource planning. Optimization techniques are adopted for water quality prediction and for determining the influence of its attributes. Water properties are tainted when one water resource is merged with another. This work aimed to predict the connectivity of water resource distribution in accordance with water quality and sediment, using an innovative proposed master-slave back-propagation neural network model. The experimental results were obtained by collecting water quality attributes, computing a water quality index, designing and developing a neural network model to determine water quality and sediment, applying the master-slave back-propagation neural network model to determine variations in water quality and sediment attributes between the water resources, and making recommendations for connectivity. Homogeneous and parallel biochemical reactions influence water quality and sediment while water is distributed from one location to another. Therefore, an innovative master-slave neural network model [M(9:9:2)::S(9:9:2)] was designed and developed to predict the attribute variations. The result of the training dataset is given as input to the master model, and its maximum weights are assigned as input to the slave model to predict the water quality. The developed master-slave model predicted physicochemical attribute weight variations for 85% to 90% of water quality as target values. The sediment level variations were also predicted, from 0.01 to 0.05% of each water quality percentage. The model produced significant variations in the physicochemical attribute weights. According to the predicted experimental weight variations on the training dataset, effective recommendations are made to connect the different resources.

Keywords: master-slave back propagation neural network model (MSBPNNM), water quality analysis, multivariate analysis, environmental mining

Procedia PDF Downloads 470
2510 Development of Academic Software for Medial Axis Determination of Porous Media from High-Resolution X-Ray Microtomography Data

Authors: S. Jurado, E. Pazmino

Abstract:

Determination of the medial axis of a porous media sample is a non-trivial problem of interest for several disciplines, e.g., hydrology, fluid dynamics, contaminant transport, filtration, oil extraction, etc. However, the computational tools available to researchers are limited and restricted. The primary aim of this work was to develop a series of algorithms to extract porosity, medial axis structure, and pore-throat size distributions from porous media domains. A complementary objective was to provide the algorithms as free computational software available to the academic community comprising researchers and students interested in 3D data processing. The burn algorithm was tested on porous media data obtained from High-Resolution X-Ray Microtomography (HRXMT) and on idealized computer-generated domains. The real data and idealized domains were discretized into voxel domains of 550³ elements and binarized to denote solid and void regions in order to determine porosity. Subsequently, the algorithm identifies the layer of void voxels next to the solid boundaries. An iterative process then removes or 'burns' void voxels layer by layer until all the void space is characterized. Multiple strategies were tested to optimize the execution time and use of computer memory, i.e., segmentation of the overall domain into subdomains, vectorization of operations, and extraction of single burn-layer data during the iterative process. The medial axis was determined by identifying regions where burnt layers collide. The final medial axis structure was refined to avoid concave-grain effects and used to determine the pore-throat size distribution. Graphical user interface software was developed to encompass all these algorithms, including the generation of idealized porous media domains. The software allows input of HRXMT data to calculate porosity, medial axis, and pore-throat size distribution and provides output in tabular and graphical formats. Preliminary tests of the software developed during this study achieved medial axis, pore-throat size distribution and porosity determination of 100³, 320³ and 550³ voxel porous media domains in 2, 22, and 45 minutes, respectively, on a personal computer (Intel i7 processor, 16 GB RAM). These results indicate that the software is a practical and accessible tool for post-processing HRXMT data for the academic community.
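A minimal sketch of the layer-by-layer burn step on a binary voxel domain is shown below, using SciPy's binary erosion. It illustrates the idea of assigning each void voxel a burn layer (from which porosity and medial-axis candidates can be derived) and is not the optimized implementation described in the abstract; the toy spherical-pore domain is an assumption for illustration.

```python
import numpy as np
from scipy import ndimage

def burn_numbers(void):
    """Assign each void voxel the iteration at which it is 'burned' away from the solid.
    void: 3D boolean array, True = pore space, False = solid."""
    burn = np.zeros(void.shape, dtype=np.int32)
    remaining = void.copy()
    layer = 0
    while remaining.any():
        layer += 1
        eroded = ndimage.binary_erosion(remaining, border_value=0)
        burn[remaining & ~eroded] = layer      # voxels removed in this pass
        remaining = eroded
    return burn

# Toy domain: a spherical pore inside a 40^3 solid block
z, y, x = np.indices((40, 40, 40))
void = (x - 20) ** 2 + (y - 20) ** 2 + (z - 20) ** 2 < 15 ** 2
burn = burn_numbers(void)
print("porosity:", void.mean())
print("max burn layer (medial-axis candidates sit at local maxima):", burn.max())
```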

Keywords: medial axis, pore-throat distribution, porosity, porous media

Procedia PDF Downloads 109
2509 The Relationship Between Car Drivers' Background Information and Risky Events in the i-DREAMS Project

Authors: Dagim Dessalegn Haile

Abstract:

This study investigated the interaction between drivers' socio-demographic background information (age, gender, and driving experience) and the risky event scores in the i-DREAMS platform. Further, the relationship between the participants' background driving behavior and the i-DREAMS platform behavioral output scores for risky events was also investigated. The i-DREAMS acronym stands for Smart Driver and Road Environment Assessment and Monitoring System. It is a European Union Horizon 2020 funded project consisting of 13 partners, researchers, and industry partners from 8 countries. A total of 25 Belgian car drivers (16 male and 9 female) were considered for analysis. Drivers' ages were categorized into the groups 18-25, 26-45, 46-65, and 65 and older. Drivers' driving experience was also categorized into four groups: 1-15, 16-30, 31-45, and 46-60 years. Drivers were classified into two clusters based on the scores recorded during phase 1 (baseline) for the risky events acceleration, deceleration, speeding, tailgating, overtaking, and lane discipline. Agglomerative hierarchical clustering using SPSS shows that Cluster 1 drivers are safer drivers, while Cluster 2 drivers are identified as risky drivers. The analysis indicated no significant relationship between age groups, gender, and experience groups, except for risky events like acceleration, tailgating, and overtaking in a few phases. This is mainly because the small number of participants creates little variability across socio-demographic background groups. Repeated measures ANOVA shows that Cluster 2 drivers improved more than Cluster 1 drivers for tailgating, lane discipline, and speeding events. A positive relationship between drivers' background behavior and the i-DREAMS platform behavioral output scores is observed, implying that car drivers who in the questionnaire data report committing more risky driving behavior also demonstrate more risky driving behavior in the i-DREAMS observed driving data.

Keywords: i-dreams, car drivers, socio-demographic background, risky events

Procedia PDF Downloads 66
2508 Transient Simulation Using SPACE for ATLAS Facility to Investigate the Effect of Heat Loss on Major Parameters

Authors: Suhib A. Abu-Seini, Kyung-Doo Kim

Abstract:

A heat loss model for the ATLAS facility was introduced using SPACE code predefined correlations and various dialing factors. All previous simulations were carried out using a heat-loss-free input: the facility was considered to be completely insulated, and the core power was reduced by the experimentally measured values of heat loss to compensate for the loss of heat. This study, by contrast, considers heat loss throughout the simulation. The new heat loss model will affect the SPACE code simulation, as heat leaking out of the system throughout a transient will alter many parameters corresponding to temperature and temperature difference. For that purpose, a Station Blackout followed by a multiple Steam Generator Tube Rupture accident will be simulated using both the insulated-system approach and the newly introduced heat loss input of the steady state. Major parameters such as system temperatures, pressure values, and flow rates will be compared, and various analyses will be suggested on that basis, as the experimental values will not be the reference used to validate the expected outcome. This study will not only show the significance of considering heat loss in the prevention and mitigation of various incidents, design basis and beyond-design-basis accidents, by giving a detailed view of the behavior of the ATLAS facility during both steady state and a major transient, but will also present a verification of how credible the acquired ATLAS data are, since heat loss values for the steady state were already mismatched between the SPACE simulation results and the ATLAS data acquisition system. Acknowledgement: This work was supported by the Korea Institute of Energy Technology Evaluation and Planning (KETEP) and the Ministry of Trade, Industry & Energy (MOTIE) of the Republic of Korea.

Keywords: ATLAS, heat loss, simulation, SPACE, station blackout, steam generator tube rupture, verification

Procedia PDF Downloads 218
2507 Performance and Voyage Analysis of Marine Gas Turbine Engine, Installed to Power and Propel an Ocean-Going Cruise Ship from Lagos to Jeddah

Authors: Mathias U. Bonet, Pericles Pilidis, Georgios Doulgeris

Abstract:

An aero-derivative marine gas turbine engine model is simulated as the main propulsion prime mover to power a cruise ship designed and routed to transport Muslim pilgrims for the annual hajj pilgrimage from Nigeria to the Islamic port city of Jeddah in Saudi Arabia. A performance assessment of the gas turbine engine has been conducted by examining the effect of the varying aerodynamic and hydrodynamic conditions encountered at various geographical locations along the scheduled transit route during the voyage. The investigation focuses on the overall behavior of the gas turbine engine employed to power and propel the ship as it operates under the ideal and adverse conditions encountered during calm and rough weather, according to the different seasons of the year in which the voyage may be undertaken. The variation of engine performance under varying operating conditions has been considered a very important economic issue, determining the time and speed with which the journey is completed as well as the quantity of fuel required for undertaking the voyage. The assessment also focuses on the increased resistance caused by fouling of the submerged portion of the ship hull surface, with its resultant effect on the power output of the engine as well as on the overall performance of the propulsion system. Daily ambient temperature levels were obtained from the UK Meteorological Office, while the varying degree of turbulence along the transit route, according to the Beaufort scale, was also obtained as a major input variable of the investigation. Assuming the ship to be navigating the Atlantic Ocean and the Mediterranean Sea during the winter, spring and summer seasons, the performance modeling and simulation were accomplished using an integrated gas turbine performance simulation code known as 'Turbomach' along with a Matlab-generated code named 'Poseidon', both of which have been developed at the Power and Propulsion Department of Cranfield University. As a case study, the results of the various assumptions further revealed that the marine gas turbine is a reliable and available alternative to the conventional marine propulsion prime movers that have dominated the maritime industry until now. The techno-economic and environmental assessment of this type of propulsion prime mover has enabled the determination of the effect of changes in weather and sea conditions on the ship speed, as well as on the trip time and the quantity of fuel required to be burned throughout the voyage.

Keywords: ambient temperature, hull fouling, marine gas turbine, performance, propulsion, voyage

Procedia PDF Downloads 181
2506 Hydrogen Storage Systems for Enhanced Grid Balancing Services in Wind Energy Conversion Systems

Authors: Nezmin Kayedpour, Arash E. Samani, Siavash Asiaban, Jeroen M. De Kooning, Lieven Vandevelde, Guillaume Crevecoeur

Abstract:

The growing adoption of renewable energy sources, such as wind power, in electricity generation is a significant step towards a sustainable and decarbonized future. However, the inherent intermittency and uncertainty of wind resources pose challenges to the reliable and stable operation of power grids. To address this, hydrogen storage systems have emerged as a promising and versatile technology to support grid balancing services in wind energy conversion systems. In this study, we propose a supplementary control design that enhances the performance of the hydrogen storage system by integrating wind turbine (WT) pitch and torque control systems. These control strategies aim to optimize the hydrogen production process, ensuring efficient utilization of wind energy while complying with grid requirements. The wind turbine pitch control system plays a crucial role in managing the turbine's aerodynamic performance. By adjusting the blade pitch angle, the turbine's rotational speed and power output can be regulated. Our proposed control design dynamically coordinates the pitch angle to match the wind turbine's power output with the optimal hydrogen production rate. This ensures that the electrolyzer receives a steady and optimal power supply, avoiding unnecessary strain on the system during high wind speeds and maximizing hydrogen production during low wind speeds. Moreover, the wind turbine torque control system is incorporated to facilitate efficient operation at varying wind speeds. The torque control system optimizes the energy capture from the wind while limiting mechanical stress on the turbine components. By harmonizing the torque control with hydrogen production requirements, the system maintains stable wind turbine operation, thereby enhancing the overall energy-to-hydrogen conversion efficiency. To enable grid-friendly operation, we introduce a cascaded controller that regulates the electrolyzer's electrical power-current in accordance with grid requirements. This controller ensures that the hydrogen production rate can be dynamically adjusted based on real-time grid demands, supporting grid balancing services effectively. By maintaining a close relationship between the wind turbine's power output and the electrolyzer's current, the hydrogen storage system can respond rapidly to grid fluctuations and contribute to enhanced grid stability. In this paper, we present a comprehensive analysis of the proposed supplementary control design's impact on the overall performance of the hydrogen storage system in wind energy conversion systems. Through detailed simulations and case studies, we assess the system's ability to provide grid balancing services, maximize wind energy utilization, and reduce greenhouse gas emissions.
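As an illustration of the cascaded control idea described above, the sketch below nests two PI loops: an outer loop tracks the grid-requested electrolyzer power and produces a stack-current reference, and an inner loop tracks that current by adjusting the stack voltage command. The structure, gains, and units are assumptions for illustration, not the controller designed in the paper.

```python
class PI:
    """Minimal discrete PI controller."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0
    def update(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

dt = 0.01
power_loop = PI(kp=0.5, ki=0.1, dt=dt)     # grid power set-point -> current reference
current_loop = PI(kp=2.0, ki=0.5, dt=dt)   # current reference -> stack voltage command

def control_step(p_grid_setpoint, p_measured, i_measured):
    """One step of the cascaded power/current control of the electrolyzer."""
    i_ref = power_loop.update(p_grid_setpoint - p_measured)
    v_cmd = current_loop.update(i_ref - i_measured)
    return i_ref, v_cmd

# Example call with illustrative values (W and A)
print(control_step(100e3, 90e3, 150))
```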

Keywords: active power control, electrolyzer, grid balancing services, wind energy conversion systems

Procedia PDF Downloads 78
2505 Generation of Ultra-Broadband Supercontinuum Ultrashort Laser Pulses with High Energy

Authors: Walid Tawfik

Abstract:

The interaction of intense short nano- and picosecond laser pulses with plasma leads to a rich variety of important applications, including time-resolved laser induced breakdown spectroscopy (LIBS), soft x-ray lasers, and laser-driven accelerators. Progress in generating femtosecond down to sub-10 fs optical pulses has provided scientists with an essential tool for many ultrafast phenomena, such as femtochemistry, high field physics, and high harmonic generation (HHG). The advent of high-energy laser pulses with durations of a few optical cycles has provided scientists with very high electric fields and produces coherent intense UV to NIR radiation with high energy, which allows for the investigation of ultrafast molecular dynamics with femtosecond resolution. In this work, we experimentally achieved the generation of two-octave-wide supercontinuum ultrafast pulses extending from the ultraviolet at 3.5 eV to the near-infrared at 1.3 eV in a neon-filled capillary fiber. These pulses are created by nonlinear self-phase modulation (SPM) in neon as the nonlinear medium. The measurements of the generated pulses were performed using spectral phase interferometry for direct electric-field reconstruction. A full characterization of the output pulses was carried out, including the pulse width, the beam profile, and the spectral bandwidth. Under optimized conditions, the reconstructed pulse intensity autocorrelation function showed the shortest possible pulse duration, achieving transform-limited pulses with energies up to 600 µJ. Furthermore, the effect of the variation of neon pressure on the pulse width was studied; the nonlinear SPM was found to increase with the neon pressure. The obtained results may give an opportunity to monitor and control ultrafast transient interactions in femtosecond chemistry.

Keywords: femtosecond laser, ultrafast, supercontinuum, ultra-broadband

Procedia PDF Downloads 200
2504 Management of Postoperative Pain, Intercultural Differences Among Registered Nurses: Czech Republic and Kingdom of Saudi Arabia

Authors: Denisa Mackova, Andrea Pokorna

Abstract:

The management of postoperative pain is a meaningful part of quality care. The experience and knowledge of registered nurses in postoperative pain management can be influenced by local know-how. Therefore, this research helps to understand the cultural differences between two countries, with the aim of evaluating postoperative pain management among nurses from the Czech Republic and the Kingdom of Saudi Arabia. The two countries have different procedures for managing postoperative pain, and the research will provide an understanding of the advantages and disadvantages of these procedures, as well as highlight the knowledge and experience of registered nurses in both countries. Between the Czech Republic and the Kingdom of Saudi Arabia, differing results are expected in the use of opioid analgesia for patients postoperatively and in the experience of registered nurses with Patient Controlled Analgesia. The aim is to evaluate the knowledge and awareness of registered nurses and to merge these data with postoperative pain management in the early postoperative period in the Czech Republic and the Kingdom of Saudi Arabia, and also to assess the knowledge and experience of registered nurses with Patient Controlled Analgesia and epidural analgesia treatment in the early postoperative period. The inclusion criteria are registered nurses working in surgical settings (standard departments, post-anaesthesia care units, day care surgery or ICUs) caring for patients in the postoperative period. Method: The research is conducted by questionnaires. It is a quantitative, comparative study of registered nurses in the Czech Republic and the Kingdom of Saudi Arabia. The questionnaire surveys were distributed through an electronic Bristol online survey. Results: The collection of data in the Kingdom of Saudi Arabia has been completed successfully with 550 respondents; 77 were excluded, and 473 respondents were included for statistical data analysis. The outcome of the research is expected to highlight differences in treatment through Patient Controlled Analgesia, with more frequent use in the Kingdom of Saudi Arabia; a similar assumption is expected for treatment conducted by epidural analgesia. We predict that opioids will be used more regularly in the Kingdom of Saudi Arabia, whilst therapy with NSAIDs will be the most common approach in the Czech Republic. Discussion/Conclusion: The majority of respondents from the Kingdom of Saudi Arabia were female registered nurses from a multitude of nations. We expect a similar gender split among the Czech Republic respondents; however, there will be a smaller number of nationalities. Relevance for research and practice: Output from the research will assess the knowledge, experience and practice of Patient Controlled Analgesia and epidural analgesia treatment. Acknowledgement: This research was accepted and affiliated to the project: Postoperative pain management, knowledge and experience of registered nurses (Czech Republic and Kingdom of Saudi Arabia) – SGS05/2019-2020.

Keywords: acute postoperative pain, epidural analgesia, nursing care, patient controlled analgesia

Procedia PDF Downloads 175
2504 A Qualitative Assessment of the Internal Communication of the College of Communication: Basis for a Strategic Communication Plan

Authors: Edna T. Bernabe, Joshua Bilolo, Sheila Mae Artillero, Catlicia Joy Caseda, Liezel Once, Donne Ynah Grace Quirante

Abstract:

Internal communication is significant for an organization to function to its full extent. A strategic communication plan builds an organization's structure and makes it more systematic. Information is a vital part of communication inside the organization, as it underlies every possible outcome, be it positive or negative. It is, therefore, imperative to assess the communication structure of a particular organization to secure a better and more harmonious communication environment. Thus, this research was intended to identify the internal communication channels used in the Polytechnic University of the Philippines-College of Communication (PUP-COC) as an organization, to identify the flow of information specifically in downward, upward, and horizontal communication, to assess the accuracy, consistency, and timeliness of its internal communication channels, and to come up with a proposed strategic communication plan for information dissemination to improve the existing communication flow in the college. The researchers formulated a framework from the Input-Throughput-Output-Feedback-Goal elements of General System Theory and gathered data to assess the PUP-COC's internal communication. The communication model links the objectives of the study to the internal organization of the college. A qualitative approach and a case study as the tradition of inquiry were used to gain a deeper understanding of internal organizational communication in PUP-COC, with interviews as the primary method of the study. This was supported by quantitative data gathered through a survey of the students of the college. The researchers interviewed 17 participants: the College dean, the 4 chairpersons of the college departments, 11 faculty members and staff, and the acting Student Council president. An interview guide and a standardized questionnaire were formulated as instruments to generate the data. After a thorough analysis, it was found that a two-way communication flow exists in PUP-COC. The type of communication channel the internal stakeholders use varies depending on whom a particular person is communicating with. The members of the PUP-COC community also use different types of communication channels depending on the flow of communication being used. Moreover, the most common types of internal communication are letters and memoranda for downward communication, while letters, text messages, and interpersonal communication are often used in upward communication. Various forms of social media are used in horizontal communication. Accuracy, consistency, and timeliness play a significant role in information dissemination within the college. However, some problems were also found in the communication system. The most common problems are the delay in the dissemination of memoranda and letters and the uneven distribution of information and instruction to faculty, staff, and students. This led the researchers to formulate a strategic communication plan which aims to propose strategies that will solve the communication problems experienced by the internal stakeholders.

Keywords: communication plan, downward communication, internal communication, upward communication

Procedia PDF Downloads 509
2502 Constitutional Identity: The Connection between National Constitutions and EU Law

Authors: Norbert Tribl

Abstract:

European contemporary scientific public opinion considers the concept of constitutional identity a highlighted issue. Some scholars interpret the matter as the manifestation of a conflict within Europe. Nevertheless, constitutional identity is a bridge between the Member States and the EU rather than a river that will wash away the achievements of the integration. In the author's opinion, the main problem of constitutional identity in Europe is its undetermined nature: the exact concept of constitutional identity has not been defined until now. However, this should be the first step towards understanding and using identity as a legal institution. Having regard to this undetermined nature, the legal-theoretical examination of constitutional identity is the main purpose of this study. The concept of constitutional identity appears in the Anglo-Saxon legal systems with a different meaning than in the supranational system of European integration: there it is understood as the interpretation of legal institutions in conformity with the constitution, whereas the European concept is applied when possible conflicts arise between the legal system of the European supranational space and certain provisions of the national constitutions of the member states. The European concept of constitutional identity thus intends to offer input in determining the nature of the relationship between the constitutional provisions of the member states and the legal acts of the EU integration. In the EU system of multilevel constitutionalism, a long-standing central debate on integration surrounds the conflict between EU legal acts and the constitutional provisions of the member states, in spite of the fact that the Court of Justice of the European Union stated in Costa v. E.N.E.L. that the member states cannot invoke the provisions of their respective national constitutions against the integration. Based on the experience of more than 50 years since that decision, and also in light of the Treaty of Lisbon, we can now clearly see that EU law has itself identified an obligation for the EU to protect the fundamental constitutional features of the Member States under Article 4(2) of the Treaty on European Union, by respecting the national identities of the member states.

Keywords: constitutional identity, EU law, European Integration, supranationalism

Procedia PDF Downloads 143
2501 Grid and Market Integration of Large Scale Wind Farms using Advanced Predictive Data Mining Techniques

Authors: Umit Cali

Abstract:

The integration of intermittent energy sources like wind farms into the electricity grid has become an important challenge for the utilization and control of electric power systems because of the fluctuating behaviour of wind power generation. Wind power predictions improve the economic and technical integration of large amounts of wind energy into the existing electricity grid. Trading, balancing, grid operation, controllability and safety issues increase the importance of predicting power output for wind farm operators. Therefore, wind power forecasting systems have to be integrated into the monitoring and control systems of the transmission system operator (TSO) and of wind farm operators/traders. The wind forecasts are relatively precise only for a time horizon of a few hours and are therefore relevant with regard to the spot and intraday markets. In this work, predictive data mining techniques are applied to identify a statistical and neural network model, or set of models, that can be used to predict the wind power output of large onshore and offshore wind farms. These advanced data analytic methods help us to amalgamate the information in very large meteorological, oceanographic and SCADA data sets into useful information and manageable systems. Accurate wind power forecasts are beneficial for wind plant operators, utility operators, and utility customers. An accurate forecast allows grid operators to schedule economically efficient generation to meet the demand of electrical customers. This study is also dedicated to an in-depth consideration of issues such as the comparison of day-ahead and short-term wind power forecasting results, the determination of the accuracy of the wind power prediction, and the evaluation of the energy-economic and technical benefits of wind power forecasting.
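As a minimal, generic illustration of the data-driven forecasting step (not the models developed in the study), the sketch below trains a random forest on synthetic weather features standing in for NWP/SCADA inputs and evaluates its error on held-out data. In practice, real meteorological forecasts and measured farm output would replace the placeholder arrays.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
# Synthetic stand-in for forecast features: wind speed (m/s), direction (deg), air density
X = rng.uniform([0.0, 0.0, 1.1], [25.0, 360.0, 1.3], size=(5000, 3))
# Toy normalized power curve with noise, standing in for measured farm output
y = np.clip(X[:, 0] ** 3 / 15.0 ** 3, 0.0, 1.0) + rng.normal(0, 0.05, 5000)

# Chronological split (no shuffling), as would be done for time-series forecasts
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE (normalized power):", mean_absolute_error(y_te, model.predict(X_te)))
```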

Keywords: renewable energy sources, wind power, forecasting, data mining, big data, artificial intelligence, energy economics, power trading, power grids

Procedia PDF Downloads 510
2500 Synthesis of Na-LSX Zeolite and Hydrosodalite from Polish Fly Ashes

Authors: Barbara Bialecka, Zdzislaw Adamczyk, Magdalena Cempa

Abstract:

In this work, the results of investigations into the hydrothermal zeolitization of fly ash from hard coal combustion at a Polish power station are presented. The chemical composition of the ash was determined by X-ray fluorescence (XRF), whereas the phases of both the fly ash and the products after synthesis were identified using microscopic observations, X-ray diffraction analysis (XRD), and scanning electron microscopy with measurements of the chemical composition in micro areas (SEM/EDS). The synthesis was carried out with NaOH solutions of various concentrations (3 M, 4 M and 6 M) under the following conditions: synthesis temperature 80 °C, synthesis time 16 hours, volume of NaOH solution 350 mL, fly ash mass 14 g. The main chemical components of the fly ash were SiO₂ and Al₂O₃, whose contents reached 51.62 and 28.14 wt.%, respectively. The input ash contained mainly mullite, quartz, magnetite, and glass. The results indicate that the phase composition of the products after zeolitization was differentiated. The material after synthesis in 3 M NaOH solution contained mullite, quartz, magnetite, and Na-LSX zeolite. The products of synthesis in 4 M NaOH solution were very similar (mullite, quartz, magnetite, Na-LSX zeolite) but additionally contained hydrosodalite. The material after synthesis in 6 M NaOH solution contained mullite, quartz, and magnetite (as in the 3 M and 4 M syntheses) and additionally hydrosodalite. The products of synthesis therefore contain relic components from the input fly ash, in the form of mullite, quartz, and magnetite, as well as new phases, namely Na-LSX zeolite and hydrosodalite. It should be noted that the products of synthesis in 4 M NaOH solution contained both new phases (Na-LSX zeolite and hydrosodalite), while the products at the extreme NaOH concentrations (3 M and 6 M) contained only one of them. Observations in the scanning electron microscope revealed the morphology of the new phases: Na-LSX zeolite formed cubic crystals, whereas hydrosodalite formed characteristic aggregations. The investigation of the chemical composition in the micro areas of phase grains in the products after synthesis revealed certain dependencies, among them a characteristic increase in the sodium content related to the increased concentration of the NaOH solution.
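
The synthesis conditions quoted above (350 mL of NaOH solution, 14 g of fly ash) fix the alkali dose and the liquid-to-solid ratio. The short calculation below is not part of the paper; it simply works out those quantities for the three reported molarities, assuming pure NaOH with a molar mass of about 40 g/mol.

```python
# Alkali dose and liquid-to-solid ratio for the reported synthesis conditions.
M_NAOH = 39.997          # g/mol, molar mass of NaOH
VOLUME_L = 0.350         # 350 mL of solution
ASH_MASS_G = 14.0        # fly ash charge

for molarity in (3, 4, 6):
    naoh_g = molarity * VOLUME_L * M_NAOH
    print(f"{molarity} M solution: {naoh_g:.1f} g NaOH, "
          f"NaOH/ash = {naoh_g / ASH_MASS_G:.2f} g/g")

print(f"Liquid-to-solid ratio: {VOLUME_L * 1000 / ASH_MASS_G:.0f} mL/g")
```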

Keywords: Na-LSX, fly ash, hydrosodalite, zeolite

Procedia PDF Downloads 165
2499 Preparing Data for Calibration of Mechanistic-Empirical Pavement Design Guide in Central Saudi Arabia

Authors: Abdulraaof H. Alqaili, Hamad A. Alsoliman

Abstract:

Progress in pavement design has produced the Mechanistic-Empirical Pavement Design Guide (MEPDG). The road and highway network in Saudi Arabia is currently expanding as a result of increasing traffic volumes, and the MEPDG is being implemented for flexible pavement design by the Saudi Ministry of Transportation. Implementing the MEPDG for local pavement design requires the calibration of its distress models to local conditions (traffic, climate, and materials). This paper aims to prepare the data for calibration of the MEPDG in central Saudi Arabia. The first goal is therefore the collection of flexible pavement design data under the local conditions of the Riyadh region. Since the collected data must be converted into MEPDG input data, the main goal of this paper is the analysis of the collected data. The data analysis covers: truck classification, the traffic growth factor, annual average daily truck traffic (AADTT), monthly adjustment factors (MAFi), the vehicle class distribution (VCD), truck hourly distribution factors, axle load distribution factors (ALDF), the number of axles per type (single, tandem, and tridem) per truck class, the cloud cover percentage, and the road sections selected for the local calibration. Detailed descriptions of the input parameters are given, providing an approach for successful implementation of the MEPDG. Local calibration of the MEPDG to the conditions of the Riyadh region can be performed based on the findings in this paper.
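
A minimal sketch of how two of the traffic inputs listed above (AADTT and the vehicle class distribution) could be derived from classified truck counts. The count table, class labels, growth rate, and design period are hypothetical placeholders, not the Riyadh data.

```python
# Hypothetical classified truck counts (vehicles/day) for FHWA classes 4-13,
# averaged over a count period; used to derive AADTT and VCD inputs.
daily_counts_by_class = {
    "class4": 120, "class5": 480, "class6": 260, "class7": 40,
    "class8": 310, "class9": 950, "class10": 90, "class11": 70,
    "class12": 30, "class13": 20,
}

aadtt = sum(daily_counts_by_class.values())      # annual average daily truck traffic
vcd = {c: 100 * n / aadtt for c, n in daily_counts_by_class.items()}  # percent per class

print(f"AADTT = {aadtt} trucks/day")
for c, pct in vcd.items():
    print(f"{c}: {pct:.1f} %")

# Simple compound growth projection over the design period (rate assumed).
growth_rate = 0.04        # 4 % per year, hypothetical
design_years = 20
aadtt_design = aadtt * (1 + growth_rate) ** design_years
print(f"Projected AADTT after {design_years} years: {aadtt_design:.0f} trucks/day")
```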

Keywords: mechanistic-empirical pavement design guide (MEPDG), traffic characteristics, materials properties, climate, Riyadh

Procedia PDF Downloads 223
2498 Quality Improvement of the Sand Moulding Process in Foundries Using Six Sigma Technique

Authors: Cindy Sithole, Didier Nyembwe, Peter Olubambi

Abstract:

The sand casting process involves pattern making, mould making, metal pouring, and shake-out. Every step in the sand moulding process is critical for the production of good-quality castings. However, waste generated during the sand moulding operation and a lack of quality contribute to performance inefficiencies and a lack of competitiveness in South African foundries. Defects produced during the sand moulding process only become visible in the final product (the casting), which results in increased scrap, reduced sales, and increased costs in the foundry. The purpose of this research is to propose a Six Sigma (DMAIC: Define, Measure, Analyze, Improve, Control) intervention in sand moulding foundries, in order to reduce the variation caused by deficiencies in the sand moulding process in South African foundries. Its objective is to create sustainability and enhance productivity in the South African foundry industry. Six Sigma is a data-driven method of process improvement that aims to eliminate variation in business processes using statistical control methods. Six Sigma focuses on business performance improvement through quality initiatives based on Ishikawa's seven basic tools of quality. The objectives of Six Sigma are to eliminate features that affect productivity, profit, and the meeting of customers' demands, and it has become one of the most important techniques for attaining competitive advantage. For sand casting foundries in South Africa, competitive advantage means improved plant maintenance processes, improved product quality, and proper utilization of resources, especially scarce resources. Defects such as sand inclusions, flashes, and sand burn-on were identified, using the Six Sigma technique, as resulting from inefficiencies in the sand moulding process. The causes were found to be incorrect mould design, due to the pattern used, and poor ramming of the moulding sand in the foundry. Six Sigma tools such as the voice of the customer, the fishbone diagram, the voice of the process, and process mapping were used to define the problem in the foundry and to outline the critical-to-quality elements. The SIPOC (Supplier, Input, Process, Output, Customer) diagram was also employed to ensure that the material and process parameters were achieved, so as to secure quality improvement in the foundry. The process capability of the sand moulding process was measured to understand the current performance and enable improvement. The expected results of this research are reduced sand moulding process variation, increased productivity, and competitive advantage.
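
The abstract mentions measuring the process capability of the sand moulding process. The sketch below shows the standard Cp/Cpk calculation used in the Measure phase of DMAIC; the measured characteristic (a green-sand compactability in percent), the sample values, and the specification limits are all hypothetical.

```python
# Process capability (Cp, Cpk) for a hypothetical moulding-sand characteristic.
import statistics

measurements = [42.1, 44.0, 43.2, 41.8, 45.1, 43.7, 42.9, 44.4, 43.0, 42.5]
LSL, USL = 38.0, 48.0            # hypothetical specification limits

mean = statistics.mean(measurements)
sigma = statistics.stdev(measurements)   # sample standard deviation

cp = (USL - LSL) / (6 * sigma)                       # potential capability
cpk = min(USL - mean, mean - LSL) / (3 * sigma)      # capability accounting for centring

print(f"mean = {mean:.2f}, sigma = {sigma:.2f}")
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")   # >= 1.33 is a common improvement target
```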

Keywords: defects, foundries, quality improvement, sand moulding, six sigma (DMAIC)

Procedia PDF Downloads 188
2497 Marginal Productivity of Small Scale Yam and Cassava Farmers in Kogi State, Nigeria: Data Envelopment Analysis as a Complement

Authors: M. A. Ojo, O. A. Ojo, A. I. Odine, A. Ogaji

Abstract:

The study examined the marginal productivity of small-scale yam and cassava farmers in Kogi State, Nigeria. The data used for the study were obtained from a primary source using a multi-stage sampling technique, with structured questionnaires administered to 150 randomly selected yam and cassava farmers from three Local Government Areas of the State. Descriptive statistics, data envelopment analysis (DEA), and a Cobb-Douglas production function were used to analyze the data. The DEA result on the overall technical efficiency of the farmers showed that 40% of the sampled yam and cassava farmers in the study area were operating on the frontier and at the optimum level of production, with a mean technical efficiency of 1.00. This implies that 60% of the yam and cassava farmers in the study area can still improve their level of efficiency through better utilization of available resources, given the current state of technology. The results of the Cobb-Douglas analysis of factors affecting the output of yam and cassava farmers showed that labour, planting materials, fertilizer, and capital inputs positively and significantly affected output in the study area. The study further revealed that yam and cassava farms in the study area operated under increasing returns to scale. The marginal productivity analysis showed that the relatively efficient farms were more marginally productive in resource utilization. The study also shows that estimating production functions without separating the farms into efficient and inefficient ones biases the parameter values obtained from such production functions. It is therefore recommended that yam and cassava farmers in the study area form cooperative societies so as to gain access to productive inputs that will enable them to expand. Also, since using a single-equation model for the production function produces biased parameter estimates, as confirmed above, farms should be decomposed into efficient and inefficient ones before the production function is estimated.
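
A minimal sketch of the log-linear Cobb-Douglas estimation described above, using simulated farm-level data in place of the survey records; the sum of the estimated input elasticities indicates returns to scale (greater than 1 means increasing returns). The study's additional step of splitting farms into efficient and inefficient groups via DEA before estimation is not reproduced here.

```python
# Log-linear Cobb-Douglas production function:
#   ln(output) = a + b1*ln(labour) + b2*ln(seed) + b3*ln(capital)
import numpy as np

rng = np.random.default_rng(0)
n = 150
labour  = rng.uniform(50, 400, n)     # person-days (hypothetical)
seed    = rng.uniform(10, 100, n)     # planting material, kg (hypothetical)
capital = rng.uniform(5, 50, n)       # capital input (hypothetical units)
output  = 2.0 * labour**0.5 * seed**0.3 * capital**0.3 * rng.lognormal(0, 0.1, n)

X = np.column_stack([np.ones(n), np.log(labour), np.log(seed), np.log(capital)])
y = np.log(output)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)    # ordinary least squares estimates

elasticities = coef[1:]
print("input elasticities:", np.round(elasticities, 3))
print("returns to scale  :", round(elasticities.sum(), 3))   # > 1 => increasing
```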

Keywords: marginal productivity, DEA, production function, Kogi state

Procedia PDF Downloads 477
2496 Influence of Microstructure on Deformation Mechanisms and Mechanical Properties of Additively Manufactured Steel

Authors: Etienne Bonnaud, David Lindell

Abstract:

Correlations between microstructure, deformation mechanisms, and mechanical properties in additively manufactured 316L steel components have been investigated. Mechanical properties in the vertical direction (build direction) and in the horizontal directions (in-plane directions) are markedly different: vertically built specimens show lower yield stress but higher elongation than their horizontally built counterparts. Microscopic observations by electron backscatter diffraction (EBSD) for both build orientations reveal a strong [110] fiber texture in the build direction but different grain morphologies. These microstructures are used as input in subsequent crystal plasticity simulations to understand their influence on the deformation mechanisms and the mechanical properties. Mean-field simulations using a visco-plastic self-consistent (VPSC) model were carried out first but did not give results consistent with the tensile test experiments. A more detailed full-field model based on the visco-plastic fast Fourier transform (VPFFT) method therefore had to be used. A more accurate microstructure description was then input to the simulation model, in which thin vertical regions of smaller grains were also taken into account. It turned out that these small grain clusters were responsible for the discrepancies in yield stress and hardening. Texture and morphology have a strong effect on the mechanical properties. The different mechanical behaviors of vertically and horizontally printed specimens could be explained by means of full-field crystal plasticity simulations, and the presence of thin clusters of smaller grains was shown to play a central role in the deformation mechanisms.
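
The sketch below is not the authors' VPSC/VPFFT workflow; it only illustrates the resolved-shear (Schmid factor) calculation that crystal plasticity models build on, evaluated for the 12 FCC {111}<110> slip systems of a cube-oriented grain under a few uniaxial loading directions. It shows in the simplest possible way how loading direction relative to the crystal lattice changes the driving force for slip.

```python
# Maximum Schmid factor of the 12 FCC {111}<110> slip systems under uniaxial loading.
import itertools
import numpy as np

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

# The 4 distinct {111} plane normals (n and -n describe the same plane).
normals = [unit(p) for p in itertools.product([1, -1], repeat=3)]
normals = [n for n in normals if n[0] > 0]
# The 6 <110> directions (sign ignored); a slip system needs n . s == 0.
directions = [unit(d) for d in
              [(1, -1, 0), (1, 0, -1), (0, 1, -1), (1, 1, 0), (1, 0, 1), (0, 1, 1)]]

def max_schmid(loading):
    l = unit(loading)
    m_max = 0.0
    for n, s in itertools.product(normals, directions):
        if abs(np.dot(n, s)) < 1e-9:          # slip direction must lie in the plane
            m_max = max(m_max, abs(np.dot(n, l) * np.dot(s, l)))
    return m_max

for axis in [(1, 0, 0), (1, 1, 0), (1, 1, 1)]:
    print(axis, "-> max Schmid factor:", round(max_schmid(axis), 3))
```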

Keywords: additive manufacturing, crystal plasticity, full-field simulations, mean-field simulations, texture

Procedia PDF Downloads 68
2495 Definition of Aerodynamic Coefficients for Microgravity Unmanned Aerial System

Authors: Gamaliel Salazar, Adriana Chazaro, Oscar Madrigal

Abstract:

The evolution of Unmanned Aerial Systems (UAS) has made it possible to develop new vehicles capable of performing microgravity experiments which, due to their cost and complexity, were previously beyond the reach of many institutions. In this study, the aerodynamic behaviour of a UAS is studied through its deceleration stage after an initial free-fall phase (during which the microgravity effect is generated) using Computational Fluid Dynamics (CFD). Because the payload is analyzed under a microgravity environment, and because of the nature of the payload itself, the speed of the UAS must be reduced smoothly. Moreover, the terminal speed of the vehicle should be low enough to preserve the integrity of the payload and the vehicle during the landing stage. The UAS model consists of a study pod, control surfaces with fixed and mobile sections, landing gear, and two semicircular wing sections. The speed of the vehicle is decreased by increasing the angle of attack (AoA) of each wing section from 2° (where the S1091 airfoil has its greatest aerodynamic efficiency) to 80°, creating a circular wing geometry. Drag coefficients (Cd) and drag forces (Fd) are obtained from the CFD analysis. A simplified 3D model of the vehicle is analyzed using Ansys Workbench 16. The distance between the object of study and the walls of the control volume is eight times the length of the vehicle. The domain is discretized using an unstructured mesh based on tetrahedral elements. The mesh is refined by defining an element size of 0.004 m on the wing and control surfaces in order to resolve the flow behaviour in the most important zones and to obtain accurate approximations of Cd. The k-epsilon turbulence model is selected to solve the governing equations of the fluid, while monitors are placed on both the wing and the whole vehicle to visualize the variation of the coefficients during the simulation. Employing a response surface methodology based on statistical approximation, the case study is parametrized with the AoA of the wing as the input parameter and Cd and Fd as the output parameters. Based on a Central Composite Design (CCD), the design points (DP) are generated so that Cd and Fd can be estimated for each DP. Applying a second-degree polynomial approximation, the drag coefficient for every AoA is determined. Using these values, the terminal speed at each position is calculated for the corresponding Cd. Additionally, the distance required to reach the terminal velocity at each AoA is calculated, so that the minimum distance for the entire deceleration stage, without compromising the payload, can be determined. The maximum Cd of the vehicle is 1.18, so its maximum drag is almost the same as the drag generated by a parachute. This guarantees that the vehicle can be braked aerodynamically, so it could be used for several missions, allowing repeatability of microgravity experiments.
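
A minimal sketch of the post-processing described above: fitting a second-degree polynomial to Cd versus AoA from a few design points and evaluating the terminal speed v_t = sqrt(2 m g / (rho Cd A)). The Cd samples, vehicle mass, and reference area below are hypothetical placeholders, not the study's CFD results.

```python
# Fit Cd(AoA) with a 2nd-degree polynomial and evaluate terminal speed.
import numpy as np

aoa_deg = np.array([2, 20, 40, 60, 80])              # design points (angles of attack)
cd      = np.array([0.25, 0.55, 0.85, 1.05, 1.18])   # hypothetical drag coefficients

poly = np.polynomial.Polynomial.fit(aoa_deg, cd, deg=2)   # 1-D response surface

mass, area = 4.0, 0.35        # kg, m^2 (assumed vehicle mass and reference area)
rho, g = 1.225, 9.81          # sea-level air density, gravitational acceleration

for angle in (10, 45, 80):
    cd_fit = poly(angle)
    v_t = np.sqrt(2 * mass * g / (rho * cd_fit * area))   # terminal speed
    print(f"AoA = {angle:2d} deg: Cd ~ {cd_fit:.2f}, terminal speed ~ {v_t:.1f} m/s")
```

With representative numbers, the largest fitted Cd gives the lowest terminal speed, which is the quantity that must stay below the payload's landing limit.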

Keywords: microgravity effect, response surface, terminal speed, unmanned system

Procedia PDF Downloads 168