Search results for: industrial wastewater
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4021

421 The Role of Nickel on the High-Temperature Corrosion of Model Alloys (Stainless Steels) before and after Breakaway Corrosion at 600°C: A Microstructural Investigation

Authors: Imran Hanif, Amanda Persdotter, Sedigheh Bigdeli, Jesper Liske, Torbjorn Jonsson

Abstract:

Renewable fuels such as biomass/waste for power production are an attractive alternative to fossil fuels for achieving CO₂-neutral power generation. However, their combustion releases corrosive species, which puts high demands on the corrosion resistance of the alloys used in the boiler. Stainless steels containing nickel and/or nickel-containing coatings are regarded as suitable corrosion-resistant materials, especially in the superheater regions. However, the corrosive environment in the boiler, caused by the presence of water vapour and reactive alkali, very rapidly breaks down the primary protection, i.e., the Cr-rich oxide scale formed on stainless steels. The lifetime of the components therefore relies on the properties of the oxide scale formed after breakaway, i.e., the secondary protection. The aim of the current study is to investigate the role of varying nickel content (0–82%) on the high-temperature corrosion of model alloys with 18% Cr (Fe in balance) under laboratory conditions mimicking industrial conditions at 600°C. The influence of nickel is investigated on both the primary protection and especially the secondary protection, i.e., the scale formed after breakaway, during oxidation/corrosion in dry O₂ (primary protection) and in more aggressive environments containing H₂O, K₂CO₃ and KCl (secondary protection). All investigated alloys experience a very rapid loss of the primary protection, i.e., the Cr-rich (Cr, Fe)₂O₃, and the formation of secondary protection in the aggressive environments. The microstructural investigation showed that the secondary protection of all alloys has a very similar microstructure in all the more aggressive environments, consisting of an outward-growing iron oxide and an inward-growing spinel oxide (Fe, Cr, Ni)₃O₄. The oxidation kinetics revealed that it is possible to influence the protectiveness of the scale formed after breakaway (secondary protection) through the amount of nickel in the alloy. The difference in oxidation kinetics of the secondary protection is linked to the microstructure and chemical composition of the complex spinel oxide. The detailed microstructural investigations were carried out using analytical techniques such as electron backscatter diffraction (EBSD) and energy-dispersive X-ray spectroscopy (EDS) via scanning and transmission electron microscopy, and the results are compared with thermodynamic calculations using the Thermo-Calc software.
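
For orientation, oxide-scale growth kinetics of the kind discussed above are conventionally summarized by the parabolic rate law; the relation below is a standard textbook form, not a result of this study, included to make the notion of scale protectiveness quantitative:

\[
x^2 = k_p\,t \quad\Longrightarrow\quad k_p = \frac{x^2}{t},
\]

where x is the oxide-scale thickness (or mass gain per unit area), t the exposure time and k_p the parabolic rate constant. In these terms, breakaway appears as a jump from the small k_p of the protective Cr-rich scale to the much larger k_p of the secondary protection, and the study's finding is that the alloy's nickel content shifts this post-breakaway k_p.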

Keywords: breakaway corrosion, EBSD, high-temperature oxidation, SEM, TEM

Procedia PDF Downloads 138
420 Erosion Wear of Cast Al-Si Alloys

Authors: Pooja Verma, Rajnesh Tyagi, Sunil Mohan

Abstract:

Al-Si alloys are widely used in components such as linerless engine blocks, pistons, compressor bodies and pumps in the automobile and aerospace industries due to their excellent combination of properties: low thermal expansion coefficient, low density, excellent wear resistance, high corrosion resistance, excellent castability, and high hardness. The low density and high hardness of the primary Si phase result in a significant reduction in density and an improvement in wear resistance of hypereutectic Al-Si alloys. In view of the industrial importance of these alloys, hypereutectic Al-Si alloys containing 14, 16, 18 and 20 wt.% Si were prepared in a resistance furnace using adequate amounts of deoxidizer and degasser, and their erosion behavior was evaluated by conducting tests at impingement angles of 30°, 60°, and 90° with an erodent discharge rate of 7.5 Hz at a pressure of 1 bar using an erosion test rig. Microstructures of the cast alloys were examined using optical microscopy (OM) and scanning electron microscopy (SEM), and the presence of Si particles was confirmed by X-ray diffraction (XRD). The mechanical properties and hardness were measured using uniaxial tension tests at a strain rate of 10⁻³/s and a Vickers hardness tester. Microstructures of the alloys and X-ray examination revealed the presence of primary and eutectic Si particles in the shape of cuboids or polyhedra and finer needles. Yield strength (YS), ultimate tensile strength (UTS), and uniform elongation of the hypereutectic Al-Si alloys were observed to increase with increasing Si content. The best strength and ductility were observed for the Al-20 wt.% Si alloy, significantly higher than for the Al-14 wt.% Si alloy. The increased hardness and strength of the alloys with increasing amount of Si have been attributed to the presence of Si in the solid solution, which creates strain; this strain interacts with dislocations, resulting in solid-solution strengthening. The interactions between distributed primary Si particles and dislocations also provide Orowan strengthening, leading to increased strength. The steady-state erosion rate was found to decrease with increasing angle of impact as well as with Si content for all the alloys, except at 90°, where it was observed to increase with increasing Si content. The minimum erosion rate is observed for the Al-20 wt.% Si alloy at the 30° and 60° impingement angles because of its higher hardness in comparison to the other alloys. At the 90° impingement angle, however, the wear rate for the Al-20 wt.% Si alloy is found to be the maximum, due to deformation and subsequent cracking and chipping off of material.
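
The Orowan contribution mentioned above is commonly estimated with the standard relation (textbook form, not taken from the paper):

\[
\Delta\tau \approx \frac{G\,b}{\lambda},
\]

where G is the shear modulus of the matrix, b the magnitude of the Burgers vector and λ the effective spacing between the dispersed primary Si particles. As the Si content rises and the particle spacing λ shrinks, the stress required for dislocations to bow between particles, and hence the strength, increases.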

Keywords: Al-Si alloy, erosion wear, cast alloys, dislocation, strengthening

Procedia PDF Downloads 65
419 The Development and Change of Settlement in Tainan County (1904-2015) Using Historical Geographic Information System

Authors: Wei Ting Han, Shiann-Far Kung

Abstract:

In early times, most of the arable land in Tainan County was dry-farmed, relying on rainfall as the water source for irrigation. After the Chia-nan Irrigation System (CIS) was completed in 1930, the Chia-nan Plain benefited from a more efficient allocation of limited water sources for irrigation, thanks to its irrigation systems, drainage systems, and land improvement projects. The long-standing problems of drought, flooding and salt damage were also alleviated by the CIS. The canal greatly increased the paddy field area and agricultural output, and Tainan County became one of the important agricultural producing areas in Taiwan. With the development of water conservancy facilities, and influenced by national policies and other factors, many agricultural communities and settlements formed indirectly, which also promoted changes in settlement patterns and internal structures. With the development of historical geographic information systems (HGIS), Academia Sinica developed a WebGIS theme based on the century-old maps of Taiwan, the most complete historical map database in Taiwan. It can be used to overlay historical maps of different periods, present a timeline of settlement change, capture changes in the natural environment as well as in the social sciences and humanities, and visualize settlement changes by area. This study explores the historical development and spatial characteristics of the settlements in various areas of Tainan County, examining settlement changes and spatial patterns across the entire county through their dynamic spatio-temporal evolution from the Japanese colonial period to the present day. Settlements of different periods are digitized and subjected to overlay analysis using the Taiwan historical topographic maps of 1904, 1921, 1956 and 1989. Moreover, document analysis is used to analyze the temporal and spatial changes of the regional environment and settlement structure, and comparative analysis is used to classify the spatial characteristics of, and differences between, the settlements, exploring the influence of external environments in different temporal and spatial contexts, such as government policies, major construction, and industrial development. This paper helps to understand the evolution of settlement space and the internal structural changes in Tainan County.
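
As an illustration of the overlay analysis described above, the following minimal sketch (hypothetical file names and map years; GeoPandas assumed available) intersects digitized settlement polygons from two periods to separate persistent settlement cores from areas of expansion:

```python
import geopandas as gpd

# Hypothetical inputs: settlement polygons digitized from the 1904 and 1989
# historical topographic maps, in a common projected CRS.
s1904 = gpd.read_file("settlements_1904.shp")
s1989 = gpd.read_file("settlements_1989.shp")

# Areas settled in both years (persistent settlement cores).
persistent = gpd.overlay(s1904, s1989, how="intersection")

# Areas settled in 1989 but not in 1904 (settlement expansion).
expansion = gpd.overlay(s1989, s1904, how="difference")

print("persistent area (km^2):", persistent.geometry.area.sum() / 1e6)
print("expansion area (km^2):", expansion.geometry.area.sum() / 1e6)
```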

Keywords: historical geographic information system, overlay analysis, settlement change, Tainan County

Procedia PDF Downloads 128
418 Reverse Engineering Genius: Through the Lens of World Language Collaborations

Authors: Cynthia Briggs, Kimberly Gerardi

Abstract:

Over the past six years, the authors have been working together on World Language Collaborations in the Middle School French Program at St. Luke's School in New Canaan, Connecticut, USA. Author 2 brings design expertise to the projects, and both teachers have utilized the fabrication lab, emerging technologies, and collaboration with students. Each year, author 1 proposes a project scope, and her students are challenged to design and engineer a signature project. Both partners have refined the iterative process to ensure deeper learning and sustained student inquiry. The projects range from a CNC-routed 1:32 scale model of the Eiffel Tower to a fully functional jukebox that plays francophone music, lights up, and can hold up to one thousand songs, powered by a Raspberry Pi. The most recent project is a Fragrance Marketplace, culminating in a pop-up store for the entire community to discover. Each student learns the history of fragrance and the chemistry behind making essential oils. Students then create a unique brand, marketing strategy, and concept for their signature fragrance. They are further tasked to use the industrial design process (bottling, packaging, and creating a brand name) to finalize their product for the public Marketplace. Sometimes, these dynamic projects require maintenance and updates. For example, our wall-mounted, three-foot francophone clock is constantly changing. The most recent iteration uses ChatGPT to program the Arduino to reconcile the real-time clock shield and keep perfect time as each hour passes. The lights, motors, and sounds from the clock are authentic to each region, represented with laser-cut embellishments. Inspired by Michel Parmigiani, the history of Swiss watchmaking, and the precision of time instruments, we aim for perfection with each passing minute. The authors aim to share exemplary work that is possible with students of all ages. We implemented a reverse-engineering process, focused on student outcomes, to refine our collaborative process. The products that our students create are prime examples of how the design engineering process is applicable across disciplines. The authors firmly believe that the past and present of world cultures inspire innovation.

Keywords: collaboration, design thinking, emerging technologies, world language

Procedia PDF Downloads 43
417 Logistical Optimization of Nuclear Waste Flows during Decommissioning

Authors: G. Dottavio, M. F. Andrade, F. Renard, V. Cheutet, A.-L. Ladier, S. Vercraene, P. Hoang, S. Briet, R. Dachicourt, Y. Baizet

Abstract:

Nuclear decommissioning processes require the mobilization of a large amount of technological equipment and of highly skilled workers over long periods of time. The related operations generate complex waste flows and high inventory levels, associated with information flows of heterogeneous types. Considering that more than 10 decommissioning operations are ongoing in France and about 50 are expected by 2025, a major challenge must be addressed today. The management of decommissioning and dismantling of nuclear installations represents an important part of the nuclear-based energy lifecycle, since it has an environmental impact as well as an important influence on the electricity cost and therefore on the price for end-users. Bringing new technologies and new solutions into decommissioning methodologies is thus mandatory to improve the quality, cost and schedule efficiency of these operations. The purpose of our project is to improve decommissioning management efficiency by developing a decision-support framework dedicated to planning nuclear facility decommissioning operations and to optimizing waste evacuation by means of a logistics approach. The target is to create an easy-to-handle tool capable of i) predicting waste flows and proposing the best decommissioning logistics scenario and ii) managing information during all the steps of the process and following the progress: planning, resources, delays, authorizations, saturation zones, waste volumes, etc. In this article, we present our results from the simulation of nuclear waste flows during the decommissioning process, based on discrete-event simulation using the FLEXSIM 3-D software. This approach was successfully tested, and our work confirms its ability to improve this type of industrial process by identifying the critical points of the chain and optimizing it through targeted improvement actions. This type of simulation, executed before the start of the operations on the basis of a first design, allows 'what-if' process evaluation and helps to ensure the quality of the process in an uncertain context. Simulating the nuclear waste flows before evacuation from the site will help reduce the cost and duration of the decommissioning process by optimizing the planning and the use of resources, transitional storage and expensive radioactive waste containers. Additional benefits are expected for the governance of waste evacuation, since it will enable a shared responsibility for the waste flows.
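
FLEXSIM is a commercial tool; as a language-neutral illustration of the discrete-event approach described above, the sketch below (hypothetical rates and capacities, using the open-source SimPy library) models waste packages queuing for a limited fleet of containers before evacuation:

```python
import random
import simpy

SIM_DAYS = 365

def dismantling(env, containers, evacuated):
    """Generate waste packages and route each one through a container slot."""
    i = 0
    while True:
        # Hypothetical rate: one waste package every ~2 days on average.
        yield env.timeout(random.expovariate(1 / 2.0))
        i += 1
        env.process(evacuate(env, f"package-{i}", containers, evacuated))

def evacuate(env, name, containers, evacuated):
    """A package waits for a container, is transported, then the container returns."""
    with containers.request() as slot:
        yield slot                                 # wait for a free container
        yield env.timeout(random.uniform(5, 10))   # hypothetical transport + return time
        evacuated.append((name, env.now))

random.seed(42)
env = simpy.Environment()
containers = simpy.Resource(env, capacity=3)       # hypothetical container fleet size
evacuated = []
env.process(dismantling(env, containers, evacuated))
env.run(until=SIM_DAYS)
print(f"packages evacuated in {SIM_DAYS} days: {len(evacuated)}")
```

Varying the container capacity or transport times in such a model is exactly the kind of 'what-if' evaluation the abstract refers to: bottlenecks show up as growing queues before the resource.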

Keywords: nuclear decommissioning, logistical optimization, decision-support framework, waste management

Procedia PDF Downloads 322
416 Lessons Learned from Interlaboratory Noise Modelling in Scope of Environmental Impact Assessments in Slovenia

Authors: S. Cencek, A. Markun

Abstract:

Noise assessment methods are regularly used in the scope of Environmental Impact Assessments to assess (predict) the expected noise emissions of planned projects. Different noise assessment methods can be used. In recent years, we have had the opportunity to collaborate in noise assessment procedures in which the noise assessments of different laboratories were performed simultaneously, and we identified some significant differences in the results between laboratories in Slovenia. We estimate that, although good georeferenced input data for setting up acoustic models exist in Slovenia, there is no clear consensus on methods of predictive noise modelling for planned projects. We analyzed the input data, methods and results of predictive noise models for two planned industrial projects, each modelled independently by two laboratories. We also analyzed the data, methods and results of two interlaboratory collaborative noise models for two existing noise sources (a railway and a motorway). In the cases of predictive noise modelling, the acoustic models were validated by noise measurements of the surrounding existing noise sources, but over varying durations. The acoustic characteristics of existing buildings were also not described identically, and the planned noise sources were described and digitized differently. Differences in noise assessment results between laboratories ranged up to 10 dBA, which considerably exceeds the acceptable uncertainty of 3 to 6 dBA. Contrary to predictive modelling, in the collaborative noise modelling of the two existing noise sources, the possibility of performing validation noise measurements greatly increased the comparability of the modelling results: in both the motorway and railway cases, the results of the different laboratories were comparable, with differences below 5 dBA, the acceptable uncertainty set by the interlaboratory modelling organizer. The lessons learned from the study were: 1) predictive noise calculation using the formulae of the international standard SIST ISO 9613-2:1997 is not an appropriate method to predict the noise emissions of planned projects, since, due to the complexity of the procedure, the formulae are not applied strictly; 2) noise measurements are an important tool to minimize the noise assessment errors of planned projects and, in cases of predictive noise modelling, should be performed at least for the validation of the acoustic model; 3) national guidelines should be established on the appropriate data, methods, noise source digitization, validation of acoustic models, etc., in order to unify predictive noise models and their results in the scope of Environmental Impact Assessments for planned projects.
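
For reference, the predictive scheme of ISO 9613-2 mentioned in lesson 1) computes the downwind octave-band sound pressure level at a receiver in the general form (reproduced here from the standard's overall scheme; term definitions abridged):

\[
L_{ft}(DW) = L_W + D_C - A, \qquad A = A_{div} + A_{atm} + A_{gr} + A_{bar} + A_{misc},
\]

where L_W is the source sound power level, D_C the directivity correction, and the total attenuation A sums geometrical divergence (A_div = 20 lg(d/d₀) + 11 dB, with d₀ = 1 m), atmospheric absorption, ground effect, barrier screening and miscellaneous terms. The interlaboratory spread reported above arises largely from how differently these individual terms, and the input data feeding them, are evaluated in practice.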

Keywords: environmental noise assessment, predictive noise modelling, spatial planning, noise measurements, national guidelines

Procedia PDF Downloads 233
415 Youth and Employment: An Outlook on Challenges of Demographic Dividend

Authors: Vidya Yadav

Abstract:

India’s youth bulge is now sharpest in the critical 15-24 age group, even as its youngest and oldest age groups begin to narrow. The 'single year age data' of the 2011 Census gives the number of people at each year of age in the population. The data show that India’s working-age population (15-64 years) is now 63.4 percent of the total, as against just short of 60 percent in 2001. The numbers also show that the 'dependency ratio', the ratio of children (0-14) and the elderly (65 and above) to those of working age, has shrunk further to 0.55. Even as the western world is ageing, these new numbers show that India’s population is still very young. As fertility falls faster in urban areas, rural India is younger than urban India: 51.73 percent of rural Indians are under the age of 24, against 45.9 percent of urban Indians. The percentage of the population under the age of 24 has dropped, but many demographers say that this should not be interpreted as a sign that the youth bulge is shrinking. Rather, with declining fertility, the number of infants and children falls first, and this is what we see in the numbers under age 24. Indeed, the figures show that the proportion of children in the 0-4 and 5-9 age groups fell in 2011 compared to 2001, and, for the first time, the percentage of children in the 10-14 age group has also fallen, as the effect of families having fewer children begins to be felt. The key issue examined in the present paper is whether this growing youth bulge has the right skills for the workforce. The study examines the youth population structure and the distribution of employment among youth in India during 2001-2011 across industrial categories. It also analyzes the workforce participation rate of main and marginal workers, both male and female, in rural and urban India, utilizing the abundant census data from 2001-2011. Results show that an unconscionable number of adolescents are working when they should be studying. In rural areas, large numbers of youths work as agricultural labourers, most of them in the 15-19 age group. This is in fact the age of entry into higher education, but economic compulsion forces them to take up jobs, killing their hopes of higher skills or education. Youths are primarily engaged in low-paying irregular jobs, as clearly revealed by the census data on marginal workers, that is, those who get work for less than six months in a year. Large proportions of youths are engaged in cultivation and household industry work.
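
For clarity, the dependency ratio quoted above follows the standard demographic definition (the exact census convention may differ slightly):

\[
DR = \frac{P_{0\text{-}14} + P_{65+}}{P_{15\text{-}64}},
\]

where P denotes the population in the indicated age bracket, so a falling DR means fewer dependents per working-age person, which is the arithmetic core of the demographic dividend discussed here.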

Keywords: main, marginal, youth, work

Procedia PDF Downloads 288
414 Equilibrium, Kinetic and Thermodynamic Studies of the Biosorption of Textile Dye (Yellow Bemacid) onto Brahea edulis

Authors: G. Henini, Y. Laidani, F. Souahi, A. Labbaci, S. Hanini

Abstract:

Environmental contamination is a major problem facing society today. Due to rapid technological development, industrial, agricultural, and domestic wastes are discharged into several receiving bodies, generally the nearest water sources such as rivers, lakes, and seas. While the rates of development and waste production are not likely to diminish, efforts to control and dispose of wastes are appropriately rising. Wastewaters from textile industries represent a serious problem all over the world: they contain different types of synthetic dyes, which are known to be a major source of environmental pollution in terms of both the volume of dye discharged and the effluent composition. From an environmental point of view, the removal of synthetic dyes is of great concern. Among several chemical and physical methods, adsorption is a promising technique for decolorization due to its ease of use and low cost compared to other processes, especially if the adsorbent is inexpensive and readily available. The focus of the present study was to assess the potential of Brahea edulis (BE) for the removal of the synthetic dye Yellow Bemacid (YB) from aqueous solutions; the results obtained here may transfer to other dyes with a similar chemical structure. Biosorption studies were carried out for various parameters such as adsorbent mass, pH, contact time, initial dye concentration, and temperature. The biosorption kinetic data of the material (BE) were tested with the pseudo-first-order and pseudo-second-order kinetic models. Thermodynamic parameters, including the Gibbs free energy ΔG, enthalpy ΔH, and entropy ΔS, revealed that the adsorption of YB on BE is feasible, spontaneous, and endothermic. The equilibrium data were analyzed using the Langmuir, Freundlich, Elovich, and Temkin isotherm models. The experimental results show that the percentage of biosorption increases with an increase in the biosorbent mass (0.25 g: 12 mg/g; 1.5 g: 47.44 mg/g). The maximum biosorption occurred at a pH value of around 2 for YB. The equilibrium uptake increased with an increase in the initial dye concentration in solution (Co = 120 mg/l; q = 35.97 mg/g). The biosorption kinetic data were properly fitted by the pseudo-second-order kinetic model. The best fit was obtained with the Langmuir model, with a high correlation coefficient (R² > 0.998) and a maximum monolayer adsorption capacity of 35.97 mg/g for YB.
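
As an illustration of the isotherm fitting reported above, the following minimal sketch (synthetic data points and initial guesses are hypothetical; SciPy assumed available) fits the Langmuir form qe = qmax·KL·Ce/(1 + KL·Ce) and reports the correlation coefficient:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qmax, kl):
    """Langmuir isotherm: equilibrium uptake qe (mg/g) as a function of
    equilibrium dye concentration ce (mg/L)."""
    return qmax * kl * ce / (1.0 + kl * ce)

# Hypothetical equilibrium data, for illustration only.
ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 120.0])
qe = np.array([7.1, 12.5, 19.8, 27.0, 32.5, 34.8])

(qmax, kl), _ = curve_fit(langmuir, ce, qe, p0=(30.0, 0.05))
residuals = qe - langmuir(ce, qmax, kl)
r2 = 1.0 - np.sum(residuals**2) / np.sum((qe - qe.mean())**2)
print(f"qmax = {qmax:.2f} mg/g, KL = {kl:.4f} L/mg, R^2 = {r2:.4f}")
```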

Keywords: adsorption, Brahea edulis, isotherm, yellow Bemacid

Procedia PDF Downloads 175
413 How to Reach Net Zero Emissions? On the Permissibility of Negative Emission Technologies and the Danger of Moral Hazards

Authors: Hanna Schübel, Ivo Wallimann-Helmer

Abstract:

In order to reach the goal of the Paris Agreement not to overshoot 1.5°C of warming above pre-industrial levels, various countries, including the UK and Switzerland, have committed themselves to net zero emissions by 2050. The employment of negative emission technologies (NETs) is very likely going to be necessary for meeting these national objectives as well as other internationally agreed climate targets. NETs are methods of removing carbon from the atmosphere and are thus a means of addressing climate change. They range from afforestation to technological measures such as direct air capture and carbon storage (DACCS), where CO₂ is captured from the air and stored underground. Like all so-called geoengineering technologies, the development and deployment of NETs are often subject to moral hazard arguments. As these technologies could be perceived as an alternative to mitigation efforts, so the argument goes, they are potentially a dangerous distraction from the main target of mitigating emissions. We think that this is a dangerous argument to make, as it may hinder the development of NETs, which are an essential element of net zero emission targets. In this paper, we argue that the moral hazard argument is only problematic if we do not reflect upon which levels of emissions are at stake in meeting net zero emissions. In response to the moral hazard argument, we develop an account of which levels of emissions in given societies should be mitigated rather than targeted by NETs, and which levels of emissions can legitimately be a target of NETs. For this purpose, we define four different levels of emissions: the current level of individual emissions, the level individuals emit in order to appear in public without shame, the level of a fair share of individual emissions in the global budget, and finally the baseline of net zero emissions. At each level of emissions, different subjects are to be assigned responsibilities if societies and/or individuals are committed to the target of net zero emissions. We argue that emissions within one's fair share do not demand individual mitigation efforts. The same holds with regard to individuals and the baseline level of emissions necessary to appear in public in their societies without shame; individuals are only under a duty to reduce their emissions if they exceed this baseline level. This is different for whole societies. Societies in which appearing in public without shame demands more emissions than the individual fair share are under a duty to foster emission reductions and may not legitimately achieve them by introducing NETs. NETs are legitimate for reducing emissions only below the level of fair shares and for reaching net zero emissions. Since access to NETs on the scale needed for net zero emissions demands technology not affordable to individuals, there are also no full individual responsibilities to achieve net zero emissions; this is mainly a responsibility of societies as a whole.

Keywords: climate change, mitigation, moral hazard, negative emission technologies, responsibility

Procedia PDF Downloads 117
412 Structural Equation Modelling Based Approach to Integrate Customers and Suppliers with Internal Practices for Lean Manufacturing Implementation in the Indian Context

Authors: Protik Basu, Indranil Ghosh, Pranab K. Dan

Abstract:

Lean management is an integrated socio-technical system for bringing about a competitive state in an organization. The purpose of this paper is to explore and integrate the role of customers and suppliers with the internal practices of the Indian manufacturing industries towards successful implementation of lean manufacturing (LM). An extensive literature survey was carried out, and an attempt was made to build an exhaustive list of all the input manifests related to customers, suppliers and internal practices necessary for LM implementation, coupled with a similarly exhaustive list of the benefits accrued from its successful implementation. A structural model was thus conceptualized and empirically validated on data from the Indian manufacturing sector. With the current impetus on developing the industrial sector, the Government of India recently introduced the Lean Manufacturing Competitiveness Scheme, which aims to increase competitiveness with the help of lean concepts. There is huge scope to enrich the Indian industries with lean benefits, the implementation status being quite low, yet hardly any survey-based empirical study in India has integrated customers and suppliers with the internal processes towards successful LM implementation. This empirical research was therefore carried out in the Indian manufacturing industries. The basic steps of the research methodology are the identification of input and output manifest variables and latent constructs, model proposition and hypotheses development, development of the survey instrument, sampling and data collection, and model validation (exploratory factor analysis, confirmatory factor analysis, and structural equation modeling). The analysis reveals six key input constructs and three output constructs, indicating that these constructs should act in unison to maximize the benefits of implementing lean. The structural model presented in this paper may be treated as a guide to integrating customers and suppliers with internal practices to successfully implement lean; such integration into a unified, coherent manufacturing system will lead to an optimum utilization of resources. This work is one of the first studies to offer a survey-based empirical analysis of the role of customers, suppliers and internal practices in an effective lean implementation in the Indian manufacturing sector.
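
For readers unfamiliar with the method, a structural equation model of the kind validated above takes the standard form (generic LISREL-style notation, not the authors' specific model):

\[
x = \Lambda_x\,\xi + \delta, \qquad y = \Lambda_y\,\eta + \varepsilon, \qquad \eta = B\,\eta + \Gamma\,\xi + \zeta,
\]

where x are the input manifest variables (the customer, supplier and internal-practice items) loading on the latent input constructs ξ, y are the output manifests loading on the latent benefit constructs η, and the coefficient matrices Γ and B carry the structural paths whose significance the hypotheses test; here the analysis retained six input constructs and three output constructs.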

Keywords: customer management, internal manufacturing practices, lean benefits, lean implementation, lean manufacturing, structural model, supplier management

Procedia PDF Downloads 176
411 Developing a Maturity Model of Digital Twin Application for Infrastructure Asset Management

Authors: Qingqing Feng, S. Thomas Ng, Frank J. Xu, Jiduo Xing

Abstract:

Faced with unprecedented challenges, including aging assets, lack of maintenance budget, overtaxed and inefficient usage, and an outcry for better service quality from society, today's infrastructure systems have become the main focus of many metropolises pursuing sustainable urban development and improved resilience. Digital twin, one of the most innovative enabling technologies today, may open up new ways of tackling various infrastructure asset management (IAM) problems. A digital twin application for IAM, as its name indicates, is an evolving digital model of the intended infrastructure that provides functions including real-time monitoring; what-if event simulation; and scheduling, maintenance, and management optimization, based on technologies like IoT, big data and AI. There is already a large number of global digital twin initiatives, such as 'Virtual Singapore' and 'Digital Built Britain'. With digital twin technology progressively permeating the IAM field, it is necessary to consider the maturity of the application and how institutional or industrial digital twin application processes will evolve in the future. To address the lack of such a benchmark, a draft maturity model is developed for digital twin application in the IAM field. First, an overview of current smart city maturity models is given, on the basis of which the draft Maturity Model of Digital Twin Application for Infrastructure Asset Management (MM-DTIAM) is developed for multiple stakeholders to evaluate and derive informed decisions. The development follows a systematic approach with four major procedures: scoping, designing, populating and testing. Through in-depth literature review, interviews and focus group meetings, the key domain areas are populated, defined and iteratively tuned. Finally, a case study of several digital twin projects is conducted for self-verification. The findings of the research reveal that: (i) the developed maturity model outlines five maturing levels leading to an optimised digital twin application, covering strategic intent, data, technology, governance, and stakeholder engagement; (ii) based on the case study, levels 1 to 3 are already partially implemented in some initiatives, while level 4 is on the way; and (iii) more practice is still needed to refine the draft so that the key domain areas are mutually exclusive and collectively exhaustive.

Keywords: digital twin, infrastructure asset management, maturity model, smart city

Procedia PDF Downloads 155
410 A Simulation-Based Investigation of the Smooth-Wall, Radial Gravity Problem of Granular Flow through a Wedge-Shaped Hopper

Authors: A. F. Momin, D. V. Khakhar

Abstract:

Granular materials consist of discrete particles, found in nature and in various industries, which under gravity-driven flow behave macroscopically like liquids. A fundamental industrial unit operation is a hopper with inclined walls, or a converging channel, in which material flows downward under gravity and exits the storage bin through the bottom outlet. The simplest form of the flow corresponds to a wedge-shaped, quasi-two-dimensional geometry with smooth walls and a radially directed gravitational force toward the apex of the wedge. Such flows were examined using the Mohr-Coulomb criterion in the classic work of Savage (1965), while Ravi Prakash and Rao (1988) used critical state theory. Here, the smooth-wall radial gravity (SWRG) wedge-shaped hopper is simulated using the discrete element method (DEM) to test the existing theories. DEM simulations involve the solution of Newton's equations, taking particle-particle interactions into account, to compute the stress and velocity fields of the flow in the SWRG system. Our computational results are consistent with the predictions of Savage (1965) and Ravi Prakash and Rao (1988), except for the region near the exit, where both viscous and frictional effects are present. To further comprehend this behaviour, a parametric analysis is carried out on the rheology of wedge-shaped hoppers by varying the orifice diameter, wedge angle, friction coefficient, and stiffness. The conclusion is that velocity increases as the flow rate increases but decreases as the wedge angle and friction coefficient increase; no substantial changes in velocity were observed when varying the stiffness. It is anticipated that the stresses at the exit result from the transfer of momentum during particle collisions; accordingly, relationships between viscosity and shear rate are shown, and all data collapse onto a single curve. In addition, it is demonstrated that viscosity and volume fraction exhibit power-law correlations with the inertial number, with all the data again collapsing onto a single curve. A continuum model for granular flows is presented using these empirical correlations.
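
To make 'solving Newton's equations with particle-particle interactions' concrete, here is a deliberately minimal 1D sketch of DEM time stepping (linear spring-dashpot contacts, hypothetical parameters; a real hopper simulation is 2D/3D with friction and far more particles):

```python
import numpy as np

# Hypothetical parameters for a 1D column of particles settling under gravity.
N, R, M = 20, 0.005, 1e-3        # particle count, radius (m), mass (kg)
K, C, G = 1e4, 0.5, 9.81         # contact stiffness, damping, gravity
DT, STEPS = 1e-5, 50000          # time step (s), number of steps

x = np.arange(N) * 2.1 * R + R   # initial centre heights, slightly separated
v = np.zeros(N)

for _ in range(STEPS):
    f = np.full(N, -M * G)                   # gravity on every particle
    floor = R - x                            # overlap with the floor at x = 0
    hit = floor > 0
    f[hit] += K * floor[hit] - C * v[hit]    # spring-dashpot floor contact
    for i in range(N - 1):                   # pairwise neighbour contacts
        delta = 2 * R - (x[i + 1] - x[i])    # overlap between i and i+1
        if delta > 0:
            fc = K * delta + C * (v[i] - v[i + 1])  # repulsion plus damping
            f[i + 1] += fc                   # pushes the upper particle up
            f[i] -= fc                       # and the lower particle down
    v += f / M * DT                          # semi-implicit Euler update
    x += v * DT

print("settled column height (m):", x.max() + R)
```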

Keywords: discrete element method, gravity flow, smooth-wall, wedge-shaped hoppers

Procedia PDF Downloads 86
409 Assessment of Groundwater Quality in Karakulam Grama Panchayath in Thiruvananthapuram, Kerala State, South India

Authors: D. S. Jaya, G. P. Deepthi

Abstract:

Groundwater is vital to the livelihoods and health of the majority of people, since it provides almost the entire water resource for domestic, agricultural and industrial uses. Groundwater quality comprises physical, chemical, and bacteriological qualities. The present investigation was carried out to determine the physicochemical and bacteriological quality of the groundwater sources in the residential areas of Karakulam Grama Panchayath in Thiruvananthapuram district, Kerala state, India. Karakulam is located in the eastern suburbs of Thiruvananthapuram city, and the major drinking water sources of the residents in the study area are wells. The present study aims to assess the potability and irrigational suitability of the groundwater in the study area. The water samples were collected from randomly selected dug wells and bore wells during the post-monsoon and pre-monsoon seasons of the year 2014, after a preliminary field survey. The physical, chemical and bacteriological parameters of the water samples were analysed following standard procedures. The concentrations of heavy metals (Cd, Pb, and Mn) in the acid-digested water samples were determined using an Atomic Absorption Spectrophotometer. The results showed that the pH of the well water samples ranged from acidic to alkaline. In the majority of the well water samples (> 54%), the iron and magnesium contents were high in both seasons studied, with values above the permissible limits of the WHO drinking water quality standards. Bacteriological analyses showed that 63% of the wells were contaminated with total coliforms in both seasons. The irrigational suitability of the groundwater was assessed by determining chemical indices such as the Sodium Percentage (%Na), Sodium Adsorption Ratio (SAR), Residual Sodium Carbonate (RSC), and Permeability Index (PI), and the results indicate that the well water in the study area is good for irrigation purposes. The study therefore reveals a degradation of the quality of the groundwater sources in Karakulam Grama Panchayath in terms of their chemical and bacteriological characteristics: the water is not potable without proper treatment. More than one-third of the wells tested were positive for total coliforms, and this bacterial contamination may pose threats to public health. The study recommends periodic well water quality monitoring in the study area and awareness programs among the residents.
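
The irrigation indices listed above are standard; with all ionic concentrations expressed in meq/L, they are conventionally defined as (textbook forms, included for clarity):

\[
SAR = \frac{Na^+}{\sqrt{(Ca^{2+} + Mg^{2+})/2}}, \qquad
\%Na = \frac{(Na^+ + K^+)\times 100}{Ca^{2+} + Mg^{2+} + Na^+ + K^+},
\]
\[
RSC = (CO_3^{2-} + HCO_3^-) - (Ca^{2+} + Mg^{2+}), \qquad
PI = \frac{(Na^+ + \sqrt{HCO_3^-})\times 100}{Ca^{2+} + Mg^{2+} + Na^+}.
\]

High SAR or RSC values flag a sodium or carbonate hazard for soils; the reported conclusion is that the sampled wells fall within the ranges considered safe for irrigation.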

Keywords: bacteriological, groundwater, irrigational suitability, physicochemical, potability

Procedia PDF Downloads 263
408 Exploring the Intersection Between the General Data Protection Regulation and the Artificial Intelligence Act

Authors: Maria Jędrzejczak, Patryk Pieniążek

Abstract:

The European legal reality is on the eve of significant change. In European Union law, there is talk of a 'fourth industrial revolution', driven by massive data resources linked to powerful algorithms and powerful computing capacity. The above is closely linked to technological developments in the area of artificial intelligence, which has prompted analyses covering the legal environment as well as the economic and social impact, also from an ethical perspective. The discussion on the regulation of artificial intelligence is one of the most serious and most widely held, at both the European Union and the Member State level. The literature expects legal solutions to guarantee security for fundamental rights, including privacy, in artificial intelligence systems. There is no doubt that personal data have been increasingly processed in recent years, and it would be impossible for artificial intelligence to function without processing large amounts of data, both personal and non-personal. The main driving force behind the current development of artificial intelligence is advances in computing, together with the increasing availability of data. High-quality data are crucial to the effectiveness of many artificial intelligence systems, particularly when using techniques involving model training. The use of computers and artificial intelligence technology allows for an increase in the speed and efficiency of the actions taken, but also creates security risks of an unprecedented magnitude for the processed data. The proposed regulation in the field of artificial intelligence therefore requires analysis in terms of its impact on the regulation of personal data protection. It is necessary to determine the mutual relationship between these regulations and which areas of the personal data protection regulation are particularly important for the processing of personal data in artificial intelligence systems. The adopted axis of consideration is a preliminary assessment of two issues: 1) which principles of data protection should apply when processing personal data in artificial intelligence systems, and 2) how liability for personal data breaches in such systems is regulated. The need to change the regulations regarding the rights and obligations of data subjects and of entities processing personal data cannot be excluded, and changes may be required in the provisions assigning liability for a breach of the protection of personal data processed in artificial intelligence systems. The research process in this case concerns the identification of areas of personal data protection that are particularly important, and may require re-regulation, due to the introduction of the proposed legal regulation of artificial intelligence. The main question the authors want to answer is how the European Union's regulation of data protection breaches in artificial intelligence systems is shaping up. The answer will include examples illustrating the practical implications of these legal regulations.

Keywords: data protection law, personal data, AI law, personal data breach

Procedia PDF Downloads 63
407 Selective Separation of Amino Acids by Reactive Extraction with Di-(2-Ethylhexyl) Phosphoric Acid

Authors: Alexandra C. Blaga, Dan Caşcaval, Alexandra Tucaliuc, Madalina Poştaru, Anca I. Galaction

Abstract:

Amino acids are valuable chemical products used in human foods, in animal feed additives and in the pharmaceutical field. Recently, there has been a noticeable rise in amino acid utilization throughout the world, including their use as raw materials in the production of various industrial chemicals: oil-gelating agents (amino acid-based surfactants) to recover effluent oil in seas and rivers, and poly(amino acids), which are attracting attention for the manufacture of biodegradable plastics. Amino acids can be obtained by biosynthesis or from protein hydrolysis, but their separation from the resulting mixtures can be challenging. In recent decades there has been continuous interest in developing processes that improve the selectivity and yield of downstream processing steps. The liquid-liquid extraction of amino acids (which are dissociated at any pH value of the aqueous solution) is possible only by the reactive extraction technique, mainly with extractants based on organophosphoric acid derivatives, high-molecular-weight amines and crown ethers. The purpose of this study was to analyse the separation of nine amino acids of acidic character (l-aspartic acid, l-glutamic acid), basic character (l-histidine, l-lysine, l-arginine) and neutral character (l-glycine, l-tryptophan, l-cysteine, l-alanine) by reactive extraction with di-(2-ethylhexyl)phosphoric acid (D2EHPA) dissolved in butyl acetate. The results showed that the separation yield is controlled by the pH value of the aqueous phase: reactive extraction of amino acids with D2EHPA is possible only if the amino acids exist in aqueous solution in their cationic forms (pH of the aqueous phase below the isoelectric point). The studies on individual amino acids indicated the possibility of selectively separating groups of amino acids with similar acidic properties as a function of the pH of the aqueous solution: the maximum yields are reached in a pH range of 2–3 and then decrease strongly with increasing pH. Thus, for acidic and neutral amino acids, extraction becomes impossible at the isoelectric point (pHi), and for basic amino acids at a pH value lower than pHi, as a result of the dissociation of the carboxylic group. The results obtained for the separation from the mixture of the nine amino acids at different pH values show that all amino acids are extracted, with different yields, in a pH range of 1.5–3. Above this interval, the extract contains only the amino acids of neutral and basic character; at pH 5–6, only the neutral amino acids are extracted, and at pH > 6 extraction becomes impossible. Using this technique, the total separation of the following amino acid groups has been achieved: neutral amino acids at pH 5–5.5, basic amino acids and l-cysteine at pH 4–4.5, l-histidine at pH 3–3.5 and acidic amino acids at pH 2–2.5.
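
The pH control described above follows directly from amino acid speciation; the fraction of an amino acid present in its cationic form can be estimated from the Henderson-Hasselbalch relation (a standard relation, included for clarity):

\[
\mathrm{pH} = \mathrm{p}K_{a1} + \log\frac{[\mathrm{AA}^{\pm}]}{[\mathrm{AA}^{+}]}
\quad\Longrightarrow\quad
f_{\mathrm{AA}^{+}} = \frac{1}{1 + 10^{\,\mathrm{pH}-\mathrm{p}K_{a1}}},
\]

where pK_a1 refers to the carboxylic group (typically around 2). Below the isoelectric point, the cationic fraction is significant, enabling cation exchange with D2EHPA; above it, the carboxylic group dissociates and extraction ceases, consistent with the pH windows reported above.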

Keywords: amino acids, di-(2-ethylhexyl) phosphoric acid, reactive extraction, selective extraction

Procedia PDF Downloads 429
406 Automatic and Highly Precise Modeling for System Optimization

Authors: Stephanie Chen, Mitja Echim, Christof Büskens

Abstract:

To describe and predict the behavior of a system, mathematical models are formulated, and parameter identification is used to adapt the coefficients of the underlying laws of science. For complex systems, this approach can be incomplete, hence imprecise, and moreover too slow to compute efficiently. Such models may therefore be inapplicable to the numerical optimization of real systems, since optimization techniques require numerous evaluations of the models. Moreover, not all quantities necessary for the identification might be available, so the model must be adapted manually. We therefore describe an approach that generates models which overcome the aforementioned limitations by focusing not on physical laws but on measured (sensor) data of real systems. The approach is more general, since it generates models for any system, detached from the scientific background, and it is able to automatically identify correlations in the data. The method can be classified as a multivariate data regression analysis. In contrast to many other data regression methods, this variant is able to identify correlations not only of single variables but also of products of variables, which enables a far more precise representation of causal correlations. The basis and justification of the method come from an analytical background: the series expansion. Another advantage of this technique is the possibility of adapting the generated models in real time during operation, so that system changes due to aging, wear or perturbations from the environment can be taken into account, which is indispensable for realistic scenarios. Since these data-driven models can be evaluated very efficiently and with high precision, they can be used in mathematical optimization algorithms that minimize a cost function, e.g. time, energy consumption, operational costs or a mixture of them, subject to additional constraints. The proposed method has been successfully tested in several complex applications with strong industrial requirements; the generated models were able to simulate the given systems with an error of less than one percent, and the automatic identification of correlations discovered previously unknown relationships. In summary, the approach efficiently computes precise, real-time-adaptive, data-based models in different fields of industry. Combined with an effective mathematical optimization algorithm like WORHP (We Optimize Really Huge Problems), complex systems can now be represented by a high-precision model and optimized according to the user's wishes. The proposed methods will be illustrated with different examples.
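
A minimal sketch of the core idea, regression on products of variables as a truncated multivariate series expansion (synthetic data, NumPy only; not the authors' implementation), might look as follows:

```python
import numpy as np
from itertools import combinations_with_replacement

def design_matrix(X, degree=2):
    """Build a feature matrix containing 1, all variables, and all products
    of variables up to the given degree (a truncated series expansion)."""
    n, m = X.shape
    cols = [np.ones(n)]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(m), d):
            cols.append(np.prod(X[:, idx], axis=1))
    return np.column_stack(cols)

# Synthetic sensor data: y depends on x0, x1 and on their product x0*x1.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 1.5 * X[:, 0] * X[:, 1] \
    + 0.01 * rng.standard_normal(500)

A = design_matrix(X, degree=2)
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coeffs, 3))  # recovers [0, 3, -2, 0, 1.5, 0] up to noise
```

Because the fit is linear in the coefficients, it can be re-solved cheaply as new sensor data arrive, which is the real-time adaptation property the abstract emphasizes.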

Keywords: adaptive modeling, automatic identification of correlations, data based modeling, optimization

Procedia PDF Downloads 408
405 Global Modeling of Drill String Dragging and Buckling in 3D Curvilinear Bore-Holes

Authors: Valery Gulyayev, Sergey Glazunov, Elena Andrusenko, Nataliya Shlyun

Abstract:

Enhancement of the technology and techniques for drilling deep directed oil and gas bore-wells is of essential industrial significance, because such wells make it possible to increase productivity and output. They are generally used for drilling in hard and shale formations, which is why their drivage is accompanied by emergency and failure effects. As corroborated by practice, the principal drilling drawback occurring in the drivage of long curvilinear bore-wells is the need to overcome essential force hindrances caused by the simultaneous action of the gravity, contact and friction forces. These forces depend primarily on the type of technological regime, the drill string stiffness, and the bore-hole tortuosity and length; they can lead to Eulerian buckling of the drill string and to its sticking. To predict and exclude these states, special mathematical models and methods of computer simulation should play a dominant role. At the same time, these mechanical phenomena are very complex, and only simplified approaches ('soft string drag and torque models') are usually applied in their analysis. Taking into consideration that the cost of directed wells now increases essentially with the complication of their geometry and the enlargement of their lengths, the price of mistakes in simulating drill string behavior with simplified approaches can be very high, so the problem of correct software elaboration is very urgent. This paper deals with the problem of simulating the regimes of drilling deep curvilinear bore-wells with prescribed imperfect geometrical trajectories of their axial lines. On the basis of the theory of curvilinear flexible elastic rods, methods of differential geometry, and methods of numerical analysis, a 3D 'stiff-string drag and torque model' of drill string bending and the appropriate software are elaborated for the simulation of tripping-in, tripping-out and drilling operations. Computer calculations show that the contact and friction forces can be calculated and regulated, providing predesigned trouble-free modes of operation. The elaborated mathematical models and software can be used for prognosticating emergency situations and excluding them at the design and realization stages of the drilling process.
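
For orientation, a common textbook form of the simplified soft-string torque-and-drag balance that the paper's stiff-string model generalizes is (notation varies between authors; this is not the authors' formulation):

\[
\Delta F = W\cos\bar\theta \pm \mu N, \qquad
N = \sqrt{(F\,\Delta\varphi\,\sin\bar\theta)^2 + (F\,\Delta\theta + W\sin\bar\theta)^2}, \qquad
\Delta M = \mu N r,
\]

where F is the axial force in a string element of buoyed weight W, θ̄ its mean inclination, Δθ and Δφ the changes in inclination and azimuth over the element, μ the friction coefficient, r the string radius, and the ± sign distinguishes tripping out from tripping in. The stiff-string approach replaces this element-wise balance with the full bending equations of a curvilinear elastic rod, which is what allows buckling and sticking to be predicted.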

Keywords: curvilinear drilling, drill string tripping in and out, contact forces, resistance forces

Procedia PDF Downloads 145
404 Estimation of Morbidity Level of Industrial Labour Conditions at Zestafoni Ferroalloy Plant

Authors: M. Turmanauli, T. Todua, O. Gvaberidze, R. Javakhadze, N. Chkhaidze, N. Khatiashvili

Abstract:

Background: Mining processes have a significant influence on human health and quality of life. In recent years, events in Georgia have affected industrial working processes; in particular, the minimal requirements of labor safety, the hygiene standards of the workplace and the regime of work and rest are not observed. This situation is often caused by a lack of responsibility, awareness, and knowledge on the part of both workers and employers, and the control and protection of working conditions have worsened in many industries. Materials and Methods: To evaluate the current situation, a prospective epidemiological study using the face-to-face interview method was conducted at the Georgian 'Manganese Zestafoni Ferroalloy Plant' in 2011-2013. 65.7% of employees (1428 bulletins) were surveyed, and the incidence rates of temporary disability days were studied. Results: The average length of a single episode of temporary disability was studied by sex group as well as for the whole cohort. According to the classes of harmfulness, the following results were obtained: class 2.0, 10.3%; class 3.1, 12.4%; class 3.2, 35.1%; class 3.3, 12.1%; class 3.4, 17.6%; class 4.0, 12.5%. Among the employees, 47.5% and 83.1% were tobacco and alcohol consumers, respectively. By age group and years of work, morbidity prevailed in the ≥50 age group and among those with ≥21 years of work, and the data revealed an increased morbidity rate with age and years of work. Diseases of the bone and articular system and connective tissue, aggravation of chronic respiratory diseases, ischemic heart disease, hypertension and cerebral blood discirculation were found to be the leading diseases. A high prevalence of morbidity was observed in workplaces with labor conditions that are unsatisfactory from the hygienic point of view. Conclusion: According to the data received, the causes of morbidity are the following: unsafe labor conditions; incomplete preventive medical examinations (preliminary and periodic); lack of access to appropriate health care services; and deficiencies in the gathering, recording, and analysis of morbidity data. This epidemiological study was conducted at the JSC 'Manganese Ferro Alloy Plant' under the State program 'Prevention of Occupational Diseases' (program code 35 03 02 05).

Keywords: occupational health, mining process, morbidity level, cerebral blood discirculation

Procedia PDF Downloads 427
403 Design and Implementation of Generative Models for Odor Classification Using Electronic Nose

Authors: Kumar Shashvat, Amol P. Bhondekar

Abstract:

Among the five senses, smell is the most evocative and the least understood; odor testing has remained mysterious, and odor data enigmatic, to most practitioners. The problem of odor recognition and classification is therefore important to solve: the ability to smell an artifact and predict whether it is still of use or has become unfit for consumption is well worth capturing in a model, and incorporated into a machine it would be highly constructive. The general industrial standard for this kind of classification is color-based; however, odor can be a better classifier than color. For cataloging the odors of peas, trees and cashews, various discriminative approaches have been used. Discriminative approaches offer good predictive performance and have been widely used in many applications, but they are unable to make effective use of unlabeled information. In such scenarios, generative approaches have better applicability, as they are able to handle problems such as settings where the variability in the range of possible input vectors is enormous. Generative models are used in machine learning either to model data directly or as an intermediate step in forming a probability density function. The models Linear Discriminant Analysis and the Naive Bayes classifier have been used here for the classification of the odor of cashews. Linear Discriminant Analysis is a method used in data classification, pattern recognition, and machine learning to discover a linear combination of features that characterizes or separates two or more classes of objects or events. The Naive Bayes algorithm is a classification approach based on Bayes' rule and a set of conditional independence assumptions; Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem. The main advantage of using generative models is that they make stronger assumptions about the data, specifically about the distribution of the predictors given the response variables. The electronic instrument used for artificial odor sensing and classification is an electronic nose, a device designed to imitate the human sense of smell by providing an analysis of individual chemicals or chemical mixtures. The experimental results have been evaluated in terms of the performance measures accuracy, precision and recall, and they show that the overall performance of Linear Discriminant Analysis was better than that of the Naive Bayes classifier on the cashew dataset.
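
A minimal sketch of the comparison described above (a synthetic sensor matrix stands in for the e-nose data; scikit-learn assumed available):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Synthetic stand-in for e-nose readings: 8 sensor channels, 3 odor classes.
# Real features would come from the electronic nose's sensor array.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(200, 8)) for c in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

for model in (LinearDiscriminantAnalysis(), GaussianNB()):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(type(model).__name__,
          "acc=%.3f" % accuracy_score(y_te, pred),
          "prec=%.3f" % precision_score(y_te, pred, average="macro"),
          "rec=%.3f" % recall_score(y_te, pred, average="macro"))
```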

Keywords: odor classification, generative models, naive bayes, linear discriminant analysis

Procedia PDF Downloads 387
402 Experimental Study Analyzing the Similarity Theory Formulations for the Effect of Aerodynamic Roughness Length on Turbulence Length Scales in the Atmospheric Surface Layer

Authors: Matthew J. Emes, Azadeh Jafari, Maziar Arjomandi

Abstract:

Velocity fluctuations of shear-generated turbulence are largest in the atmospheric surface layer (ASL), of nominal 100 m depth, and can lead to dynamic effects such as galloping and flutter on small physical structures on the ground when the turbulence length scales and the characteristic length of the structure are of the same order of magnitude. Turbulence length scales are a measure of the average sizes of the energy-containing eddies; they are widely estimated using two-point cross-correlation analysis, converting the temporal lag to a separation distance using Taylor's hypothesis that the convection velocity is equal to the mean velocity at the corresponding height. Profiles of turbulence length scales in the neutrally stratified ASL, as predicted by Monin-Obukhov similarity theory in Engineering Sciences Data Unit (ESDU) 85020 for single-point data and ESDU 86010 for two-point correlations, are largely dependent on the aerodynamic roughness length. Field measurements have shown that longitudinal turbulence length scales exhibit significant regional variation, whereas the length scales of the vertical component show consistent Obukhov scaling from site to site because of the absence of low-frequency components. Hence, the objective of this experimental study is to compare the similarity theory relationships between the turbulence length scales and the aerodynamic roughness length with those calculated from the autocorrelations and cross-correlations of field-measured velocity data at two sites: the Surface Layer Turbulence and Environmental Science Test (SLTEST) facility in a desert ASL in Dugway, Utah, USA, and the Commonwealth Scientific and Industrial Research Organisation (CSIRO) wind tower in a rural ASL in Jemalong, NSW, Australia. The results indicate that the longitudinal turbulence length scales increase with increasing aerodynamic roughness length, in contrast to the relationships derived from the similarity theory correlations in the ESDU models. However, the ratios of the lateral and vertical turbulence length scales to the longitudinal length scales are relatively independent of surface roughness, showing consistent inner scaling between the two sites and the ESDU correlations. Further, the diurnal variation of wind velocity due to changes in atmospheric stability conditions has a significant effect on the turbulence structure of the energy-containing eddies in the lower ASL.
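
A minimal sketch of the single-point length-scale estimate described above (a synthetic record stands in for anemometer data; the lag-to-distance conversion uses Taylor's hypothesis):

```python
import numpy as np

def integral_length_scale(u, fs):
    """Longitudinal integral length scale from the autocorrelation of the
    velocity fluctuations, converting time lags to separation distance via
    Taylor's hypothesis (convection velocity = local mean velocity)."""
    up = u - u.mean()
    n = len(up)
    # Autocorrelation for lags 0..n-1, normalized so rho(0) = 1.
    rho = np.correlate(up, up, mode="full")[n - 1:] / (up.var() * n)
    # Integrate rho up to its first zero crossing: the integral time scale.
    zero = np.argmax(rho <= 0) if np.any(rho <= 0) else n
    T = rho[:zero].sum() / fs
    return u.mean() * T

# Synthetic fluctuating velocity record at fs = 20 Hz, illustration only.
rng = np.random.default_rng(2)
fs, n = 20.0, 20000
u = 8.0 + np.convolve(rng.standard_normal(n), np.ones(40) / 40, mode="same")
print("L_ux ~ %.2f m" % integral_length_scale(u, fs))
```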

Keywords: aerodynamic roughness length, atmospheric surface layer, similarity theory, turbulence length scales

Procedia PDF Downloads 123
401 Utilization of Rice Husk Ash with Clay to Produce Lightweight Coarse Aggregates for Concrete

Authors: Shegufta Zahan, Muhammad A. Zahin, Muhammad M. Hossain, Raquib Ahsan

Abstract:

Rice husk ash (RHA) is one of the agricultural waste byproducts widely available in the world and contains a large amount of silica. In Bangladesh, stones cannot be used as coarse aggregate in infrastructure works, as they are not available locally and must be imported. As a result, bricks are mostly used as coarse aggregate in concrete, since they are cheaper and easily produced there. Clay is the raw material for producing brick. Due to rapid urban growth and the industrial revolution, demand for brick is increasing, which leads to depletion of topsoil. This study aims to produce lightweight block aggregates with sufficient strength utilizing RHA at low cost and to use them as an ingredient of concrete. Because of its pozzolanic behavior, RHA can be utilized to produce better-quality block aggregates at lower cost by replacing part of the clay content of the bricks. The study can be divided into three parts. In the first part, characterization tests on RHA and clay were performed to determine their properties. Six different types of RHA from different mills were characterized by XRD and SEM analysis, and their fineness was determined by a fineness test. The XRD results confirmed the amorphous state of the RHA. The characterization tests for the clay identified the sample as "silty clay" with a specific gravity of 2.59 and 14% optimum moisture content. In the second part, blocks were produced with the six types of RHA in different combinations by volume with clay. The mixtures were manually compacted in molds and oven-dried at 120 °C for 7 days, after which the dried blocks were fired in a furnace at 1200 °C to produce the final blocks. Loss-on-ignition, apparent density, crushing strength, efflorescence, and absorption tests were conducted on the blocks to compare their performance with bricks. For 40% RHA, the crushing strength was found to be 60 MPa, whereas that of brick was 48.1 MPa. In the third part, the crushed blocks were used as coarse aggregate in concrete cylinders, which were compared with brick-aggregate concrete cylinders. Specimens were cured for 7 and 28 days. The highest compressive strength of the block cylinders was 26.1 MPa after 7 days of curing and 34 MPa after 28 days, whereas for the brick cylinders the compressive strength was 20 MPa and 30 MPa after 7 and 28 days of curing, respectively. These findings can help ease the increasing demand on topsoil and also turn a waste product into a valuable one.

Keywords: characterization, furnace, pozzolanic behavior, rice husk ash

Procedia PDF Downloads 106
400 A Multi-Scale Study of Potential-Dependent Ammonia Synthesis on IrO₂ (110): DFT, 3D-RISM, and Microkinetic Modeling

Authors: Shih-Huang Pan, Tsuyoshi Miyazaki, Minoru Otani, Santhanamoorthi Nachimuthu, Jyh-Chiang Jiang

Abstract:

Ammonia (NH₃) is crucial in renewable energy and agriculture, yet its traditional production via the Haber-Bosch process faces challenges due to the inherent inertness of nitrogen (N₂) and the need for high temperatures and pressures. The electrocatalytic nitrogen reduction (ENRR) presents a more sustainable option, functioning at ambient conditions. However, its advancement is limited by selectivity and efficiency challenges due to the competing hydrogen evolution reaction (HER). The critical roles of protonation of N-species and HER highlight the necessity of selecting optimal catalysts and solvents to enhance ENRR performance. Notably, transition metal oxides, with their adjustable electronic states and excellent chemical and thermal stability, have shown promising ENRR characteristics. In this study, we use density functional theory (DFT) methods to investigate the ENRR mechanisms on IrO₂ (110), a material known for its tunable electronic properties and exceptional chemical and thermal stability. Employing the constant electrode potential (CEP) model, where the electrode-electrolyte interface is treated as a polarizable continuum with implicit solvation, and adjusting electron counts to equalize work functions in the grand canonical ensemble, we further incorporate the advanced 3D Reference Interaction Site Model (3D-RISM) to accurately determine the ENRR limiting potential across various solvents and pH conditions. Our findings reveal that the limiting potential for ENRR on IrO₂ (110) is significantly more favorable than for HER, highlighting the efficiency of the IrO₂ catalyst for converting N₂ to NH₃. This is supported by the optimal *NH₃ desorption energy on IrO₂, which enhances the overall reaction efficiency. Microkinetic simulations further predict a promising NH₃ production rate, even at the solution's boiling point, reinforcing the catalytic viability of IrO₂ (110). This comprehensive approach provides an atomic-level understanding of the electrode-electrolyte interface in ENRR, demonstrating the practical application of IrO₂ in electrochemical catalysis. The findings provide a foundation for developing more efficient and selective catalytic strategies, potentially revolutionizing industrial NH₃ production.
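
For illustration only, the sketch below shows how a limiting potential is typically extracted once the free-energy change of each proton-electron transfer step is known: each reduction step shifts by eU, so the limiting potential is set by the largest uphill step. The ΔG values are invented placeholders, not the paper's DFT/3D-RISM results.

```python
# Minimal sketch: limiting potential from free-energy steps, in the
# spirit of constant-potential electrode analyses. Each step transfers
# one (H+ + e-), so dG(U) = dG(0) + e*U for a reduction step.
# The dG values below are placeholders, NOT the paper's DFT numbers.
steps = {            # eV at U = 0 V, hypothetical values
    "*N2 -> *NNH":   1.10,
    "*NNH -> *NNH2": 0.45,
    "*NNH2 -> *N":  -0.30,
    "*N -> *NH":     0.20,
    "*NH -> *NH2":   0.35,
    "*NH2 -> *NH3":  0.60,
}
# The potential-determining step is the largest uphill dG; the
# limiting potential makes every step downhill: U_L = -max(dG)/e.
pds, dg_max = max(steps.items(), key=lambda kv: kv[1])
U_L = -dg_max     # volts, since dG is per electron in eV
print(f"Potential-determining step: {pds} (dG = {dg_max:.2f} eV)")
print(f"Limiting potential U_L = {U_L:.2f} V vs RHE")
```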

Keywords: density functional theory, electrocatalyst, nitrogen reduction reaction, electrochemistry

Procedia PDF Downloads 17
399 Analysis in Mexico on Workers Performing Highly Repetitive Movements with Sensory Thermography on the Surface of the Wrists and Elbows

Authors: Sandra K. Enriquez, Claudia Camargo, Jesús E. Olguín, Juan A. López, German Galindo

Abstract:

Companies are currently seeing an increase in cumulative trauma disorders (CTDs), which are rising significantly due to the highly repetitive movements (HRM) performed at workstations and which cause economic losses to businesses through temporary and permanent disabilities of workers. This analysis focuses on preventing disorders caused by repetitiveness, duration, and effort, and on reducing cumulative trauma disorders as occupational diseases by using sensory thermography as a non-invasive method to evaluate the injuries workers could sustain while performing repetitive motions. Objectives: The aim is to define rest periods or job rotations before a CTD develops, using sensory thermography to analyze changes in the temperature patterns on wrists and elbows while the worker performs HRM over a period of 2 hours and 30 minutes. Information was also collected on non-work variables such as wrist and elbow injuries, weight, gender, and age, among others, and on work variables such as workspace temperature, repetitiveness, and duration. Methodology: The analysis was conducted on 4 industrial designers in normal health, 2 men and 2 women, at a company over a period of 12 days, using the following time ranges: on the first day, for every 90 minutes of continuous work the subjects were asked to rest 5 minutes; on the second day, for every 90 minutes of continuous work they were asked to rest 10 minutes; and likewise for 60 and 30 minutes of continuous work. Each worker was tested with the 6 different ranges at least twice. The analysis was performed in a room with controlled temperature between 20 and 25 °C, allowing 20 minutes for the temperature of the wrists and elbows to stabilize at the beginning and end of each session. Results: The maximum temperature (Tmax) in the wrists and elbows was registered in the range of 90 minutes of continuous work with 5 minutes of rest: Tmax was 35.79 °C, with a difference of 2.79 °C between the initial and final temperature of the left elbow, observed in subject 4 at minute 86. Conclusions: With this alternative technology, sensory thermography, it is possible to predict rotation or rest intervals for the prevention of CTDs in HRM work activities, thereby reducing occupational disease and the fees charged by health agencies and increasing workers' quality of life, giving this technology an acceptable cost-benefit ratio for the future.

Keywords: sensory thermography, temperature, cumulative trauma disorder (CTD), highly repetitive movement (HRM)

Procedia PDF Downloads 429
398 The Staphylococcus aureus Exotoxin Recognition Using Nanobiosensor Designed by an Antibody-Attached Nanosilica Method

Authors: Hamed Ahari, Behrouz Akbari Adreghani, Vadood Razavilar, Amirali Anvar, Sima Moradi, Hourieh Shalchi

Abstract:

Considering the ever-increasing population and the industrialization of human life, toxins produced in food products can no longer be detected adequately using traditional techniques: the isolation time for food products is not cost-effective, and in most cases the precision of practical techniques such as bacterial cultivation suffers from operator errors or errors in the reagent mixtures used. Hence, with the advent of nanotechnology, the design of selective, smart sensors is one of the great industrial advances in the quality control of food products, able to identify the concentration and toxicity of bacteria within a few minutes and with very high precision. Methods and Materials: In this technique, a sensor based on attaching a bacterial antibody to nanoparticles was used. As the absorption basis for recognition of the bacterial toxin, medium-sized silica nanoparticles of 10 nm (Notrino brand) in solid powder form were utilized. The suspension produced from the agent-linked nanosilica, conjugated to the bacterial antibody, was then placed near samples of distilled water contaminated with Staphylococcus aureus toxin at a dilution of 10⁻³, so that if any toxin were present in the sample, a bond between the toxin antigen and the antibody would form. Finally, the light absorption associated with the binding of the antigen to the particle-attached antibody was measured by spectrophotometry. The 23S rRNA gene, which is conserved in all Staphylococcus spp., was used as a control. The accuracy of the test was monitored using serial dilutions (10⁻⁶) of an overnight cell culture of Staphylococcus spp. (OD₆₀₀: 0.02 = 10⁷ cells); this showed that the sensitivity of PCR is 10 bacteria per ml within a few hours. Result: The results indicate that the sensor detects down to a dilution of 10⁻⁴. Additionally, the sensitivity of the sensor was examined over 60 days: it gave confirmatory results up to day 56 and began to decline thereafter. Conclusions: The advantages of practical nanobiosensors over conventional methods such as culture and biotechnological methods (for example, the polymerase chain reaction) are accuracy, sensitivity, and specificity; moreover, they reduce the detection time from hours to about 30 minutes.

Keywords: exotoxin, nanobiosensor, recognition, Staphylococcus aureus

Procedia PDF Downloads 385
397 Engineering Topology of Ecological Model for Orientation Impact of Sustainability Urban Environments: The Spatial-Economic Modeling

Authors: Moustafa Osman Mohammed

Abstract:

The modeling of a spatial-economic database is crucial in relating economic network structure to social development. Sustainability within the spatial-economic model gives attention to green businesses that comply with Earth's systems. The natural exchange patterns of ecosystems have consistent, periodic cycles that preserve energy and material flows in systems ecology. When network topology influences formal and informal communication to function in systems ecology, ecosystems are postulated to influence the basic level of spatial sustainability outcomes (i.e., project compatibility success). These instrumentalities in turn affect various aspects of the second level of spatial sustainability outcomes (i.e., participant social security satisfaction). The sustainability outcomes are modeled as a composite structure based on a network analysis model that calculates the prosperity of panel databases for efficiency values from 2005 to 2025. The database models the spatial structure to represent state-of-the-art value-orientation impact and the corresponding complexity of sustainability issues (e.g., build a consistent database necessary to approach spatial structure; construct the spatial-economic-ecological model; develop a set of sustainability indicators associated with the model; allow quantification of social, economic, and environmental impact; use value-orientation as a set of important sustainability policy measures), and demonstrates the spatial structure's reliability. The structure of the spatial-ecological model is established for management schemes that treat pollutants from multiple sources through input-output criteria. These criteria evaluate the spillover effect by conducting Monte Carlo simulations and sensitivity analysis in a unique spatial structure. The balance within "equilibrium patterns," such as collective biosphere features, has a composite index of many distributed feedback flows, which have a dynamic structure related to physical and chemical properties and gradually extend to incremental patterns. While these spatial structures argue from ecological modeling for resource savings, static loads are not decisive from an artistic/architectural perspective. The model attempts to unify analytic and analogical spatial structure for the development of urban environments in a relational database setting, using optimization software to integrate the spatial structure, where the process is based on the engineering topology of systems ecology.
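
Purely as an illustration of the input-output spillover analysis named above, the following sketch solves a tiny three-sector Leontief model and runs a Monte Carlo perturbation of its technical coefficients; every number in it is invented.

```python
# Minimal sketch: Monte Carlo sensitivity of a small input-output
# (Leontief) model, illustrating the kind of spillover analysis the
# abstract describes. All coefficients and intensities are invented.
import numpy as np

rng = np.random.default_rng(42)
A = np.array([[0.10, 0.20, 0.05],    # technical coefficients (hypothetical)
              [0.15, 0.10, 0.10],
              [0.05, 0.25, 0.05]])
d = np.array([100.0, 150.0, 80.0])   # final demand (hypothetical)
e = np.array([0.8, 0.3, 0.5])        # pollutant per unit output (hypothetical)

def total_pollution(A, d, e):
    # Gross output from the Leontief relation x = (I - A)^-1 d
    x = np.linalg.solve(np.eye(3) - A, d)
    return e @ x

base = total_pollution(A, d, e)
# Monte Carlo: perturb the coefficients +/-10% and record the spread
samples = [total_pollution(A * rng.uniform(0.9, 1.1, A.shape), d, e)
           for _ in range(10_000)]
print(f"baseline = {base:.1f}, MC mean = {np.mean(samples):.1f}, "
      f"std = {np.std(samples):.1f}")
```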

Keywords: ecological modeling, spatial structure, orientation impact, composite index, industrial ecology

Procedia PDF Downloads 67
396 Driving Environmental Quality through Fuel Subsidy Reform in Nigeria

Authors: O. E. Akinyemi, P. O. Alege, O. O. Ajayi, L. A. Amaghionyediwe, A. A. Ogundipe

Abstract:

Nigeria, an oil-producing developing country in Africa, is one of the many countries that have been subsidizing the consumption of fossil fuel. Despite the numerous advantages of this policy, ranging from increased energy access, fostering economic and industrial development, and protecting poor households from oil price shocks, to political considerations, fuel subsidies have been found to impose economic costs, to be wasteful and inefficient, to create price distortions, to discourage investment in the energy sector, and to contribute to environmental pollution. These negative consequences, coupled with the fact that the policy has not been very successful at achieving some of its stated objectives, led a number of organisations and countries, such as the Group of 7 (G7), the World Bank, the International Monetary Fund (IMF), the International Energy Agency (IEA), and the Organisation for Economic Co-operation and Development (OECD), to call for a global effort toward reforming fossil fuel subsidies. This call became necessary in view of the need to harmonise existing policies that may, by design, hamper current efforts at tackling environmental concerns such as climate change, and to drive a green growth strategy and low-carbon development in achieving sustainable development, in which the energy sector is identified as playing a vital role. This study thus investigates the prospects of using fuel subsidy reform as a viable tool for driving an economy that de-emphasizes carbon growth in Nigeria. The method used is the Johansen and Engle-Granger two-step cointegration procedure, applied to investigate the existence or otherwise of a long-run equilibrium relationship over the period 1971 to 2011. The theoretical framework is rooted in the Environmental Kuznets Curve (EKC) hypothesis. In developing three scenarios (subsidy payment, no subsidy payment, and effective subsidy), findings from the study supported evidence of a long-run sustainable equilibrium model. Estimation results also reflected that the first and second scenarios do not significantly influence the indicator of environmental quality. The implication is that, in reforming fuel subsidy to drive environmental quality in an economy like Nigeria, a strong and effective regulatory framework (the measure that was interacted with fuel subsidy to yield the effective subsidy) is essential.
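
A brief sketch of the Engle-Granger two-step procedure referred to above, on synthetic series standing in for the 1971-2011 data: regress the levels by OLS, then test the residuals for stationarity. Note that statsmodels also ships a one-call version, statsmodels.tsa.stattools.coint, which applies the proper Engle-Granger critical values.

```python
# Minimal sketch of the Engle-Granger two-step procedure on synthetic
# series standing in for an emissions indicator and an income term.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(7)
n = 41                                             # 1971-2011, annual
income = np.cumsum(rng.normal(0.5, 1.0, n))        # I(1) regressor
emissions = 2.0 + 0.8 * income + rng.normal(0, 0.5, n)  # cointegrated

# Step 1: estimate the long-run (levels) relationship by OLS
ols = sm.OLS(emissions, sm.add_constant(income)).fit()
residuals = ols.resid

# Step 2: ADF test on the residuals; stationary residuals imply
# cointegration (proper critical values are the Engle-Granger ones,
# slightly stricter than plain ADF; coint() handles that).
adf_stat, pvalue = adfuller(residuals, regression="n")[:2]
print(f"ADF on residuals: stat = {adf_stat:.2f}, p = {pvalue:.3f}")
```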

Keywords: environmental quality, fuel subsidy, green growth, low carbon growth strategy

Procedia PDF Downloads 324
395 A Comparison of Direct Water Injection with Membrane Humidifier for Proton Exchange Membrane Fuel Cells Humification

Authors: Flavien Marteau, Pedro Affonso Nóbrega, Pascal Biwole, Nicolas Autrusson, Iona De Bievre, Christian Beauger

Abstract:

Effective water management is essential for the optimal performance of fuel cells. For this reason, many vehicle systems use a membrane humidifier, a passive device that humidifies the air before the cathode inlet. Although they offer good performance, membrane humidifiers are voluminous, costly, and fragile, hence the desire to find an alternative. Direct water injection (DWI) could be an option, although this method lacks maturity: it consists of injecting liquid water as a spray into the dry, heated air coming out of the compressor. This work focuses on the evaluation of direct water injection and its performance compared to a membrane humidifier selected as the reference. Two architectures were experimentally tested to humidify an industrial 2 kW short stack made up of 20 cells of 150 cm² each. In the reference architecture, the inlet air is humidified with a commercial membrane humidifier. In the direct water injection architecture, a pneumatic nozzle was selected to generate a fine spray in the air flow, with a Sauter mean diameter of about 20 μm. Initial performance was compared over the entire current range based on polarisation curves. Then, the influence of various parameters impacting water management was studied, such as the temperature, the gas stoichiometry, and the water injection flow rate. The experimental results obtained confirm the possibility of humidifying the fuel cell using direct water injection. This study, however, shows the limits of this humidification method: the mean cell voltage is significantly lower under some operating conditions with direct water injection than with the membrane humidifier. The voltage drop reaches 30 mV per cell (4%) at 1 A/cm² (1.8 bara, 80 °C) and increases under more demanding humidification conditions. It is noteworthy that the heat of compression available is not enough to evaporate all the injected liquid water in the case of DWI, resulting in a mix of liquid and vapour water entering the fuel cell, whereas only vapour is present with the humidifier. Variation of the injection flow rate shows that part of the injected water is useless for humidification and seems to cross the channels without reaching the membrane. The stack was successfully humidified by direct water injection. Nevertheless, our work shows that its implementation requires substantial adaptations and may reduce fuel cell stack performance compared to conventional membrane humidifiers, though opportunities for optimisation have been identified.
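
The evaporation limit noted above can be bounded with a simple saturation calculation: the humidity ratio at saturation caps how much injected spray can vaporize into the cathode air. The sketch below uses a standard Antoine fit for water; the 1.8 bara and 80 °C point is taken from the abstract, while the dry-air flow is an assumed figure.

```python
# Back-of-envelope sketch: how much water vapour air can take up
# before saturating, which bounds how much of an injected spray can
# evaporate. The dry-air flow is illustrative, not from the paper.
import math

def p_sat_water(T_c):
    """Saturation pressure of water in bar (Antoine fit, ~0-100 C)."""
    A, B, C = 4.6543, 1435.264, -64.848   # Stull constants, P in bar, T in K
    return 10 ** (A - B / (T_c + 273.15 + C))

T = 80.0          # air temperature, deg C (from the abstract)
P = 1.8           # absolute pressure, bar (from the abstract)
m_air = 1.0       # dry-air mass flow, kg/s (assumed)

# Humidity ratio at saturation: w = 0.622 * p_v / (P - p_v)
p_v = p_sat_water(T)
w_sat = 0.622 * p_v / (P - p_v)           # kg water per kg dry air
print(f"p_sat({T:.0f} C) = {p_v:.3f} bar, w_sat = {w_sat:.3f} kg/kg")
print(f"max evaporable spray ~ {w_sat * m_air * 1000:.0f} g/s per kg/s air")
```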

Keywords: cathode humidification, direct water injection, membrane humidifier, proton exchange membrane fuel cell

Procedia PDF Downloads 43
394 A Comparison between TM: TM Co Doped and TM: RE Co Doped ZnO Based Advanced Materials for Spintronics Applications; Structural, Optical and Magnetic Property Analysis

Authors: V. V. Srinivasu, Jayashree Das

Abstract:

Owing to its industrial and technological importance, transition metal (TM) doped ZnO has been widely chosen for many practical applications in electronics and optoelectronics. Besides, though still a controversial issue, the reported room-temperature ferromagnetism in transition metal doped ZnO has added to its importance in current semiconductor research for prospective applications in spintronics. Anticipating non-controversial and improved optical and magnetic properties, we adopted a co-doping method to synthesise polycrystalline Mn:TM (Fe, Ni) and Mn:RE (Gd, Sm) co-doped ZnO samples by a solid-state sintering route, with compositions Zn₁₋ₓ(Mn:Fe/Ni)ₓO and Zn₁₋ₓ(Mn:Gd/Sm)ₓO, sintered at two different temperatures. The structural, compositional, and optical changes induced in ZnO by co-doping and sintering were investigated by XRD, FTIR, UV, PL, and ESR studies. X-ray peak profile analysis (XPPA) and Williamson-Hall analysis show changes in the values of stress, strain, FWHM, and crystallite size in both co-doped systems. FTIR spectra also show the effect of both types of co-doping on the stretching and bending bonds of the ZnO compound. The UV-Vis study demonstrates changes in the absorption band edge as well as a significant change in the optical band gap due to exchange interactions inside the system after co-doping. PL studies reveal the effect of co-doping on the UV and visible emission bands in the co-doped systems at the two sintering temperatures, indicating the existence of defects in the form of oxygen vacancies. While the TM:TM co-doped ZnO samples exhibit ferromagnetism at room temperature, the TM:RE co-doped samples show paramagnetic behaviour. The magnetic behaviours observed are supported by results from the electron spin resonance (ESR) study, which shows sharp resonance peaks with considerable line width (∆H) and g values greater than 2. Such values are usually found in the presence of an internal field inside the system, which shifts the resonance field toward lower fields; g values in this range are assigned to unpaired electrons trapped in oxygen vacancies. The TM:TM co-doped ZnO samples exhibit low-field absorption peaks in their ESR spectra, which is a new and interesting observation. We emphasize that the observations reported in this paper may be considered for improved future applications of ZnO-based materials.
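
The Williamson-Hall step mentioned above separates crystallite-size and microstrain broadening by fitting β·cosθ against 4·sinθ: the intercept gives Kλ/D and the slope gives the strain. Here is a minimal sketch with a placeholder peak list rather than the measured ZnO reflections.

```python
# Minimal sketch of a Williamson-Hall plot: beta*cos(theta) versus
# 4*sin(theta) is fit to a line whose intercept gives the crystallite
# size (K*lambda/D) and whose slope gives the microstrain. The peak
# list below is a placeholder, not data from the paper.
import numpy as np

lam = 1.5406e-10        # Cu K-alpha wavelength, m
K = 0.9                 # Scherrer constant
# Hypothetical reflections: (2-theta in degrees, FWHM in degrees)
peaks = [(31.8, 0.25), (34.4, 0.26), (36.3, 0.28),
         (47.5, 0.33), (56.6, 0.38)]

theta = np.radians([p[0] / 2 for p in peaks])
beta = np.radians([p[1] for p in peaks])     # FWHM in radians

x = 4 * np.sin(theta)
y = beta * np.cos(theta)
slope, intercept = np.polyfit(x, y, 1)

D = K * lam / intercept            # crystallite size, m
strain = slope                     # dimensionless microstrain
print(f"D = {D * 1e9:.1f} nm, strain = {strain:.2e}")
```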

Keywords: co-doping, electron spin resonance, microwave absorption, spintronics

Procedia PDF Downloads 338
393 Characterization of Aerosol Droplet in Absorption Columns to Avoid Amine Emissions

Authors: Hammad Majeed, Hanna Knuutila, Magne Hilestad, Hallvard Svendsen

Abstract:

Formation of aerosols can cause serious complications in industrial exhaust gas CO₂ capture processes. SO₃ present in the flue gas can cause aerosol formation in an absorption-based capture process. Small mist droplets and the fog formed can normally not be removed in conventional demisting equipment, because their submicron size allows the particles or droplets to follow the gas flow. As a consequence, aerosol-based emissions on the order of grams per Nm³ have been identified at PCCC plants. In absorption processes, aerosols are generated by spontaneous condensation or desublimation in supersaturated gas phases. Undesired aerosol development may lead to amine emissions many times larger than would be encountered in a mist-free gas phase in PCCC development. It is thus of crucial importance to understand the formation and build-up of these aerosols in order to mitigate the problem. Rigorous modelling of aerosol dynamics leads to a system of partial differential equations. In order to understand the mechanics of a particle entering an absorber, an implementation of the model was created in MATLAB. The model predicts the droplet size, the droplet-internal variable profiles, and the mass transfer fluxes as functions of position in the absorber. The MATLAB model is based on the orthogonal collocation method, a subclass of the method of weighted residuals for boundary value problems. The model comprises a set of mass transfer equations for the transferring components and the essential diffusion-reaction equations describing the droplet-internal profiles of all relevant constituents. Also included is heat transfer across the interface and inside the droplet. This paper presents results describing this basic simulation tool for the characterization of aerosols formed in CO₂ absorption columns and gives examples of how various entering droplets grow or shrink through an absorber and how their composition changes with time. Some preliminary simulation results for an aerosol droplet's composition and temperature profiles are given. Results: As an example, a droplet with an initial size of 3 microns, initially containing a 5 M MEA solution, is exposed to an atmosphere free of MEA; the composition of the gas phase and the temperature change with time throughout the absorber.
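
The paper's MATLAB implementation is not reproduced here; purely to illustrate the orthogonal collocation idea it relies on, the following Python sketch applies the method to a toy droplet-like boundary value problem (steady diffusion with a first-order reaction) whose exact solution is known.

```python
# Illustrative sketch of orthogonal collocation on a toy boundary
# value problem: c'' = phi^2 * c on [0, 1] with c'(0) = 0, c(1) = 1.
# Exact solution: c(x) = cosh(phi*x) / cosh(phi). This stands in for
# the paper's far richer droplet model, which is not reproduced here.
import numpy as np

def diff_matrix(x):
    """First-derivative (Lagrange) differentiation matrix on nodes x."""
    n = x.size
    w = np.array([1.0 / np.prod(x[j] - np.delete(x, j)) for j in range(n)])
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = (w[j] / w[i]) / (x[i] - x[j])
        D[i, i] = -np.sum(np.delete(D[i], i))   # rows sum to zero
    return D

n, phi = 9, 2.0
# Chebyshev-Lobatto collocation points mapped to [0, 1]
x = 0.5 * (1.0 - np.cos(np.pi * np.arange(n) / (n - 1)))
D = diff_matrix(x)
D2 = D @ D

# Interior rows: c'' - phi^2 c = 0; boundary rows impose the BCs
A = D2 - phi**2 * np.eye(n)
b = np.zeros(n)
A[0, :], b[0] = D[0, :], 0.0     # zero-flux (symmetry) condition at x = 0
A[-1, :] = 0.0
A[-1, -1], b[-1] = 1.0, 1.0      # Dirichlet condition at x = 1
c = np.linalg.solve(A, b)

exact = np.cosh(phi * x) / np.cosh(phi)
print("max abs error vs exact:", np.abs(c - exact).max())
```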

Keywords: amine solvents, emissions, global climate change, simulation and modelling, aerosol generation

Procedia PDF Downloads 262
392 Relationship between Functional Properties and Supramolecular Structure of the Poly(Trimethylene 2,5-Furanoate) Based Multiblock Copolymers with Aliphatic Polyethers or Aliphatic Polyesters

Authors: S. Paszkiewicz, A. Zubkiewicz, A. Szymczyk, D. Pawlikowska, I. Irska, E. Piesowicz, A. Linares, T. A. Ezquerra

Abstract:

Over the last century, the world has become increasingly dependent on oil as its main source of chemicals and energy. Driven largely by the strong economic growth of India and China, demand for oil is expected to increase significantly in the coming years. This growth in demand, combined with diminishing reserves, will require the development of new, sustainable sources of fuels and bulk chemicals. Biomass is an attractive alternative feedstock, as it is a widely available carbon source apart from oil and coal. Nowadays, academic and industrial research in the field of polymer materials is strongly oriented towards bio-based alternatives to petroleum-derived plastics with enhanced properties for advanced applications. In this context, 2,5-furandicarboxylic acid (FDCA), a biomass-based chemical derived from lignocellulose, is one of the highest-potential bio-based building blocks for polymers and the first candidate to replace petro-derived terephthalic acid. FDCA has been identified as one of the top 12 bio-based chemicals of the future, which may be used as a platform chemical for the synthesis of biomass-based polyesters. The aim of this study is to synthesize and characterize multiblock copolymers containing rigid segments of poly(trimethylene 2,5-furanoate) (PTF) and soft segments either of poly(tetramethylene oxide) (PTMO), which has excellent elastic properties, or of the aliphatic polyester polycaprolactone (PCL). Two series of PTF-based copolymers, i.e., PTF-block-PTMO-T and PTF-block-PCL-T, with different contents of flexible segments were synthesized by a two-step melt polycondensation process and characterized by various methods. The rigid PTF segments, as well as the flexible PTMO or PCL ones, were randomly distributed along the chain. On the basis of ¹H NMR, SAXS, WAXS, DSC, and DMTA results, one can conclude that the two phases were thermodynamically immiscible, and the phase transition temperatures varied with copolymer composition. The copolymers containing 25, 35, and 45 wt.% of flexible (PTMO) segments exhibited characteristic elastomeric properties. Moreover, the temperatures corresponding to 5%, 25%, 50%, and 90% mass loss, as well as the tensile modulus, decrease with increasing content of the aliphatic polyether or aliphatic polyester in the composition.

Keywords: furan based polymers, multiblock copolymers, supramolecular structure, functional properties

Procedia PDF Downloads 128