Search results for: student industrial placement
467 Interactive Garments: Flexible Technologies for Textile Integration
Authors: Anupam Bhatia
Abstract:
Upon reviewing the literature and the pragmatic work done in the field of E- textiles, it is observed that the applications of wearable technologies have found a steady growth in the field of military, medical, industrial, sports; whereas fashion is at a loss to know how to treat this technology and bring it to market. The purpose of this paper is to understand the practical issues of integration of electronics in garments; cutting patterns for mass production, maintaining the basic properties of textiles and daily maintenance of garments that hinder the wide adoption of interactive fabric technology within Fashion and leisure wear. To understand the practical hindrances an experimental and laboratory approach is taken. “Techno Meets Fashion” has been an interactive fashion project where sensor technologies have been embedded with textiles that result in set of ensembles that are light emitting garments, sound sensing garments, proximity garments, shape memory garments etc. Smart textiles, especially in the form of textile interfaces, are drastically underused in fashion and other lifestyle product design. Clothing and some other textile products must be washable, which subjects to the interactive elements to water and chemical immersion, physical stress, and extreme temperature. The current state of the art tends to be too fragile for this treatment. The process for mass producing traditional textiles becomes difficult in interactive textiles. As cutting patterns from larger rolls of cloth and sewing them together to make garments breaks and reforms electronic connections in an uncontrolled manner. Because of this, interactive fabric elements are integrated by hand into textiles produced by standard methods. The Arduino has surely made embedding electronics into textiles much easier than before; even then electronics are not integral to the daily wear garments. Soft and flexible interfaces of MEMS (micro sensors and Micro actuators) can be an option to make this possible by blending electronics within E-textiles in a way that’s seamless and still retains functions of the circuits as well as the garment. Smart clothes, which offer simultaneously a challenging design and utility value, can be only mass produced if the demands of the body are taken care of i.e. protection, anthropometry, ergonomics of human movement, thermo- physiological regulation.Keywords: ambient intelligence, proximity sensors, shape memory materials, sound sensing garments, wearable technology
Procedia PDF Downloads 393466 Reduction of Residual Stress by Variothermal Processing and Validation via Birefringence Measurement Technique on Injection Molded Polycarbonate Samples
Authors: Christoph Lohr, Hanna Wund, Peter Elsner, Kay André Weidenmann
Abstract:
Injection molding is one of the most commonly used techniques in the industrial polymer processing. In the conventional process of injection molding, the liquid polymer is injected into the cavity of the mold, where the polymer directly starts hardening at the cooled walls. To compensate the shrinkage, which is caused predominantly by the immediate cooling, holding pressure is applied. Through that whole process, residual stresses are produced by the temperature difference of the polymer melt and the injection mold and the relocation of the polymer chains, which were oriented by the high process pressures and injection speeds. These residual stresses often weaken or change the structural behavior of the parts or lead to deformation of components. One solution to reduce the residual stresses is the use of variothermal processing. Hereby the mold is heated – i.e. near/over the glass transition temperature of the polymer – the polymer is injected and before opening the mold and ejecting the part the mold is cooled. For the next cycle, the mold gets heated again and the procedure repeats. The rapid heating and cooling of the mold are realized indirectly by convection of heated and cooled liquid (here: water) which is pumped through fluid channels underneath the mold surface. In this paper, the influences of variothermal processing on the residual stresses are analyzed with samples in a larger scale (500 mm x 250 mm x 4 mm). In addition, the influence on functional elements, such as abrupt changes in wall thickness, bosses, and ribs, on the residual stress is examined. Therefore the polycarbonate samples are produced by variothermal and isothermal processing. The melt is injected into a heated mold, which has in our case a temperature varying between 70 °C and 160 °C. After the filling of the cavity, the closed mold is cooled down varying from 70 °C to 100 °C. The pressure and temperature inside the mold are monitored and evaluated with cavity sensors. The residual stresses of the produced samples are illustrated by birefringence where the effect on the refractive index on the polymer under stress is used. The colorful spectrum can be uncovered by placing the sample between a polarized light source and a second polarization filter. To show the achievement and processing effects on the reduction of residual stress the birefringence images of the isothermal and variothermal produced samples are compared and evaluated. In this comparison to the variothermal produced samples have a lower amount of maxima of each color spectrum than the isothermal produced samples, which concludes that the residual stress of the variothermal produced samples is lower.Keywords: birefringence, injection molding, polycarbonate, residual stress, variothermal processing
Procedia PDF Downloads 283465 Environmental Accounting: A Conceptual Study of Indian Context
Authors: Pradip Kumar Das
Abstract:
As the entire world continues its rapid move towards industrialization, it has seriously threatened mankind’s ability to maintain an ecological balance. Geographical and natural forces have a significant influence on the location of industries. Industrialization is the foundation stone of the development of any country, while the unplanned industrialization and discharge of waste by industries is the cause of environmental pollution. There is growing degree of awareness and concern globally among nations about environmental degradation or pollution. Environmental resources endowed by the gift of nature and not manmade are invaluable natural resources of a country like India. Any developmental activity is directly related to natural and environmental resources. Economic development without environmental considerations brings about environmental crises and damages the quality of life of present, as well as future generation. As corporate sectors in the global market, especially in India, are becoming anxious about environmental degradation, naturally more and more emphasis will be ascribed to how environment-friendly the outcomes are. Maintaining accounts of such environmental and natural resources in the country has become more urgent. Moreover, international awareness and acceptance of the importance of environmental issues has motivated the development of a branch of accounting called “Environmental Accounting”. Environmental accounting attempts to detect and focus the resources consumed and the costs rendered by an industrial unit to the environment. For the sustainable development of mankind, a healthy environment is indispensable. Gradually, therefore, in many countries including India, environment matters are being given top most priority. Accounting and disclosure of environmental matters have been increasingly manifesting as an important dimension of corporate accounting and reporting practices. But, as conventional accounting deals with mainly non-living things, the formulation of valuation, and measurement and accounting techniques for incorporating environment-related matters in the corporate financial statement sometimes creates problems for the accountant. In the light of this situation, the conceptual analysis of the study is concerned with the rationale of environmental accounting on the economy and society as a whole, and focuses the failures of the traditional accounting system. A modest attempt has been made to throw light on the environmental awareness in developing nations like India and discuss the problems associated with the implementation of environmental accounting. The conceptual study also reflects that despite different anomalies, environmental accounting is becoming an increasing important aspect of the accounting agenda within the corporate sector in India. Lastly, a conclusion, along with recommendations, has been given to overcome the situation.Keywords: environmental accounting, environmental degradation, environmental management, environmental resources
Procedia PDF Downloads 343464 Maintenance Optimization for a Multi-Component System Using Factored Partially Observable Markov Decision Processes
Authors: Ipek Kivanc, Demet Ozgur-Unluakin
Abstract:
Over the past years, technological innovations and advancements have played an important role in the industrial world. Due to technological improvements, the degree of complexity of the systems has increased. Hence, all systems are getting more uncertain that emerges from increased complexity, resulting in more cost. It is challenging to cope with this situation. So, implementing efficient planning of maintenance activities in such systems are getting more essential. Partially Observable Markov Decision Processes (POMDPs) are powerful tools for stochastic sequential decision problems under uncertainty. Although maintenance optimization in a dynamic environment can be modeled as such a sequential decision problem, POMDPs are not widely used for tackling maintenance problems. However, they can be well-suited frameworks for obtaining optimal maintenance policies. In the classical representation of the POMDP framework, the system is denoted by a single node which has multiple states. The main drawback of this classical approach is that the state space grows exponentially with the number of state variables. On the other side, factored representation of POMDPs enables to simplify the complexity of the states by taking advantage of the factored structure already available in the nature of the problem. The main idea of factored POMDPs is that they can be compactly modeled through dynamic Bayesian networks (DBNs), which are graphical representations for stochastic processes, by exploiting the structure of this representation. This study aims to demonstrate how maintenance planning of dynamic systems can be modeled with factored POMDPs. An empirical maintenance planning problem of a dynamic system consisting of four partially observable components deteriorating in time is designed. To solve the empirical model, we resort to Symbolic Perseus solver which is one of the state-of-the-art factored POMDP solvers enabling approximate solutions. We generate some more predefined policies based on corrective or proactive maintenance strategies. We execute the policies on the empirical problem for many replications and compare their performances under various scenarios. The results show that the computed policies from the POMDP model are superior to the others. Acknowledgment: This work is supported by the Scientific and Technological Research Council of Turkey (TÜBİTAK) under grant no: 117M587.Keywords: factored representation, maintenance, multi-component system, partially observable Markov decision processes
Procedia PDF Downloads 134463 Experimental Design in Extraction of Pseudomonas sp. Protease from Fermented Broth by Polyethylene Glycol/Citrate Aqueous Two-Phase System
Authors: Omar Pillaca-Pullo, Arturo Alejandro-Paredes, Carol Flores-Fernandez, Marijuly Sayuri Kina, Amparo Iris Zavaleta
Abstract:
Aqueous two-phase system (ATPS) is an interesting alternative for separating industrial enzymes due to it is easy to scale-up and low cost. Polyethylene glycol (PEG) mixed with potassium phosphate or magnesium sulfate is one of the most frequently polymer/salt ATPS used, but the consequences of its use is a high concentration of phosphates and sulfates in wastewater causing environmental issues. Citrate could replace these inorganic salts due to it is biodegradable and does not produce toxic compounds. On the other hand, statistical design of experiments is widely used for ATPS optimization and it allows to study the effects of the involved variables in the purification, and to estimate their significant effects on selected responses and interactions. The 24 factorial design with four central points (20 experiments) was employed to study the partition and purification of proteases produced by Pseudomonas sp. in PEG/citrate ATPS system. ATPS was prepared with different sodium citrate concentrations [14, 16 and 18% (w/w)], pH values (7, 8 and 9), PEG molecular weight (2,000; 4,000 and 6,000 g/mol) and PEG concentrations [18, 20 and 22 % (w/w)]. All system components were mixed with 15% (w/w) of the fermented broth and deionized water was added to a final weight of 12.5 g. Then, the systems were mixed and kept at room temperature until to reach two-phases separation. Volumes of the top and bottom phases were measured, and aliquots from both phases were collected for subsequent proteolytic activity and total protein determination. Influence of variables such as PEG molar mass (MPEG), PEG concentration (CPEG), citrate concentration (CSal) and pH were evaluated on the following responses: purification factor (PF), activity yield (Y), partition coefficient (K) and selectivity (S). STATISTICA program version 10 was used for the analysis. According to the obtained results, higher levels of CPEG and MPEG had a positive effect on extraction, while pH did not influence on the process. On the other hand, the CSal could be related with low values of Y because of the citrate ions have a negative effect on solubility and enzymatic structure. The optimum values of Y (66.4 %), PF (1.8), K (5.5) and S (4.3) were obtained at CSal (18%), MPEG (6,000 g/mol), CPEG (22%) and pH 9. These results indicated that the PEG/citrate system is accurate to purify these Pseudomonas sp. proteases from fermented broth as a first purification step.Keywords: citrate, polyethylene glycol, protease, Pseudomonas sp
Procedia PDF Downloads 194462 Constitutive Flo1p Expression on Strains Bearing Deletions in Genes Involved in Cell Wall Biogenesis
Authors: Lethukuthula Ngobese, Abin Gupthar, Patrick Govender
Abstract:
The ability of yeast cell wall-derived mannoproteins (glycoproteins) to positively contribute to oenological properties has been a key factor that stimulates research initiatives into these industrially important glycoproteins. In addition, and from a fundamental research perspective, yeast cell wall glycoproteins are involved in a wide range of biological interactions. To date, and to the best of our knowledge, our understanding of the fine molecular structure of these mannoproteins is fairly limited. Generally, the amino acid sequences of their protein moieties have been established from structural and functional analysis of the genomic sequence of these yeasts whilst far less information is available on the glycosyl moieties of these mannoproteins. A novel strategy was devised in this study that entails the genetic engineering of yeast strains that over-express and release cell wall-associated glycoproteins into the liquid growth medium. To this end, the Flo1p mannoprotein was overexpressed in Saccharomyces cerevisiae laboratory strains bearing a specific deletion in KNR4 and GPI7 genes involved in cell wall biosynthesis that have been previously shown to extracellularly hyper-secrete cell wall-associated glycoproteins. A polymerase chain reaction (PCR) -based cloning strategy was employed to generate transgenic yeast strains in which the native cell wall FLO1 glycoprotein-encoding gene is brought under transcriptional control of the constitutive PGK1 promoter. The modified Helm’s flocculation assay was employed to assess flocculation intensities of a Flo1p over-expressing wild type and deletion mutant as an indirect measure of their abilities to release the desired mannoprotein. The flocculation intensities of the transformed strains were assessed and all the strains showed similar intensities (>98% flocculation). To assess if mannoproteins were released into the growth medium, the supernatant of each strain was subjected to the BCA protein assay and the transformed Δknr4 strain showed a considerable increase in protein levels. This study has the potential to produce mannoproteins in sufficient quantities that may be employed in future investigations to understand their molecular structures and mechanisms of interaction to the benefit of both fundamental and industrial applications.Keywords: glycoproteins, genetic engineering, flocculation, over-expression
Procedia PDF Downloads 415461 Sertraline Chronic Exposure: Impact on Reproduction and Behavior on the Key Benthic Invertebrate Capitella teleta
Authors: Martina Santobuono, Wing Sze Chan, Elettra D'Amico, Henriette Selck
Abstract:
Chemicals in modern society are fundamental in many different aspects of daily human life. We use a wide range of substances, including polychlorinated compounds, pesticides, plasticizers, and pharmaceuticals, to name a few. These compounds are excessively produced, and this has led to their introduction to the environment and food resources. Municipal and industrial effluents, landfills, and agricultural runoffs are a few examples of sources of chemical pollution. Many of these compounds, such as pharmaceuticals, have been proven to mimic or alter the performance of the hormone system, thus disrupting its normal function and altering the behavior and reproductive capability of non-target organisms. Antidepressants are pharmaceuticals commonly detected in the environment, usually in the range of ng L⁻¹ and µg L⁻¹. Since they are designed to have a biological effect at low concentrations, they might pose a risk to the native species, especially if exposure lasts for long periods. Hydrophobic antidepressants, like the selective serotonin reuptake inhibitor (SSRI) Sertraline, can sorb to the particles in the water column and eventually accumulate in the sediment compartment. Thus, deposit-feeding organisms may be at particular risk of exposure. The polychaete Capitella teleta is widespread in estuarine organically enriched sediments, being a key deposit-feeder involved in geochemistry processes happening in sediments. Since antidepressants are neurotoxic chemicals and endocrine disruptors, the aim of this work was to test if sediment-associated Sertraline impacts burrowing- and feeding behavior as well as reproduction capability in Capitella teleta in a chronic exposure set-up, which could better mimic what happens in the environment. 7 days old juveniles were selected and exposed to different concentrations of Sertraline for an entire generation until the mature stage was reached. This work was able to show that some concentrations of Sertraline altered growth and the time of first reproduction in Capitella teleta juveniles, potentially disrupting the population’s capability of survival. Acknowledgments: This Ph.D. position is part of the CHRONIC project “Chronic exposure scenarios driving environmental risks of Chemicals”, which is an Innovative Training Network (ITN) funded by the European Union’s Horizon 2020 research and innovation program under the Marie Sklodowska-Curie Actions (MSCA).Keywords: antidepressants, Capitella teleta, chronic exposure, endocrine disruption, sublethal endpoints, neurotoxicity
Procedia PDF Downloads 95460 Laser-Dicing Modeling: Implementation of a High Accuracy Tool for Laser-Grooving and Cutting Application
Authors: Jeff Moussodji, Dominique Drouin
Abstract:
The highly complex technology requirements of today’s integrated circuits (ICs), lead to the increased use of several materials types such as metal structures, brittle and porous low-k materials which are used in both front end of line (FEOL) and back end of line (BEOL) process for wafer manufacturing. In order to singulate chip from wafer, a critical laser-grooving process, prior to blade dicing, is used to remove these layers of materials out of the dicing street. The combination of laser-grooving and blade dicing allows to reduce the potential risk of induced mechanical defects such micro-cracks, chipping, on the wafer top surface where circuitry is located. It seems, therefore, essential to have a fundamental understanding of the physics involving laser-dicing in order to maximize control of these critical process and reduce their undesirable effects on process efficiency, quality, and reliability. In this paper, the study was based on the convergence of two approaches, numerical and experimental studies which allowed us to investigate the interaction of a nanosecond pulsed laser and BEOL wafer materials. To evaluate this interaction, several laser grooved samples were compared with finite element modeling, in which three different aspects; phase change, thermo-mechanical and optic sensitive parameters were considered. The mathematical model makes it possible to highlight a groove profile (depth, width, etc.) of a single pulse or multi-pulses on BEOL wafer material. Moreover, the heat affected zone, and thermo-mechanical stress can be also predicted as a function of laser operating parameters (power, frequency, spot size, defocus, speed, etc.). After modeling validation and calibration, a satisfying correlation between experiment and modeling, results have been observed in terms of groove depth, width and heat affected zone. The study proposed in this work is a first step toward implementing a quick assessment tool for design and debug of multiple laser grooving conditions with limited experiments on hardware in industrial application. More correlations and validation tests are in progress and will be included in the full paper.Keywords: laser-dicing, nano-second pulsed laser, wafer multi-stack, multiphysics modeling
Procedia PDF Downloads 209459 The Role of Nickel on the High-Temperature Corrosion of Modell Alloys (Stainless Steels) before and after Breakaway Corrosion at 600°C: A Microstructural Investigation
Authors: Imran Hanif, Amanda Persdotter, Sedigheh Bigdeli, Jesper Liske, Torbjorn Jonsson
Abstract:
Renewable fuels such as biomass/waste for power production is an attractive alternative to fossil fuels in order to achieve a CO₂ -neutral power generation. However, the combustion results in the release of corrosive species. This puts high demands on the corrosion resistance of the alloys used in the boiler. Stainless steels containing nickel and/or nickel containing coatings are regarded as suitable corrosion resistance material especially in the superheater regions. However, the corrosive environment in the boiler caused by the presence of water vapour and reactive alkali very rapidly breaks down the primary protection, i.e., the Cr-rich oxide scale formed on stainless steels. The lifetime of the components, therefore, relies on the properties of the oxide scale formed after breakaway, i.e., the secondary protection. The aim of the current study is to investigate the role of varying nickel content (0–82%) on the high-temperature corrosion of model alloys with 18% Cr (Fe in balance) in the laboratory mimicking industrial conditions at 600°C. The influence of nickel is investigated on both the primary protection and especially the secondary protection, i.e., the scale formed after breakaway, during the oxidation/corrosion process in the dry O₂ (primary protection) and more aggressive environment such as H₂O, K₂CO₃ and KCl (secondary protection). All investigated alloys experience a very rapid loss of the primary protection, i.e., the Cr-rich (Cr, Fe)₂O₃, and the formation of secondary protection in the aggressive environments. The microstructural investigation showed that secondary protection of all alloys has a very similar microstructure in all more aggressive environments consisting of an outward growing iron oxide and inward growing spinel-oxide (Fe, Cr, Ni)₃O₄. The oxidation kinetics revealed that it is possible to influence the protectiveness of the scale formed after breakaway (secondary protection) through the amount of nickel in the alloy. The difference in oxidation kinetics of the secondary protection is linked to the microstructure and chemical composition of the complex spinel-oxide. The detailed microstructural investigations were carried out using the extensive analytical techniques such as electron back scattered diffraction (EBSD), energy dispersive X-rays spectroscopy (EDS) via the scanning and transmission electron microscopy techniques and results are compared with the thermodynamic calculations using the Thermo-Calc software.Keywords: breakaway corrosion, EBSD, high-temperature oxidation, SEM, TEM
Procedia PDF Downloads 142458 Erosion Wear of Cast Al-Si Alloys
Authors: Pooja Verma, Rajnesh Tyagi, Sunil Mohan
Abstract:
Al-Si alloys are widely used in various components such as liner-less engine blocks, piston, compressor bodies and pumps for automobile sector and aerospace industries due to their excellent combination of properties like low thermal expansion coefficient, low density, excellent wear resistance, high corrosion resistance, excellent cast ability, and high hardness. The low density and high hardness of primary Si phase results in significant reduction in density and improvement in wear resistance of hypereutectic Al-Si alloys. Keeping in view of the industrial importance of the alloys, hypereutectic Al-Si alloys containing 14, 16, 18 and 20 wt. % of Si were prepared in a resistance furnace using adequate amount of deoxidizer and degasser and their erosion behavior was evaluated by conducting tests at impingement angles of 30°, 60°, and 90° with an erodent discharge rate of 7.5 Hz, pressure 1 bar using erosion test rig. Microstructures of the cast alloys were examined using Optical microscopy (OM) and scanning electron microscopy (SEM) and the presence of Si particles was confirmed by x-ray diffractometer (XRD). The mechanical properties and hardness were measured using uniaxial tension tests at a strain rate of 10-3/s and Vickers hardness tester. Microstructures of the alloys and X-ray examination revealed the presence of primary and eutectic Si particles in the shape of cuboids or polyhedral and finer needles. Yield strength (YS), ultimate tensile strength (UTS), and uniform elongation of the hypereutectic Al-Si alloys were observed to increase with increasing content of Si. The optimal strength and ductility was observed for Al-20 wt. % Si alloy which is significantly higher than the Al-14 wt. % Si alloy. The increased hardness and the strength of the alloys with increasing amount of Si has been attributed presence of Si in the solid solution which creates strain, and this strain interacts with dislocations resulting in solid-solution strengthening. The interactions between distributed primary Si particles and dislocations also provide Orowan strengthening leading to increased strength. The steady state erosion rate was found to decrease with increasing angle of impact as well as Si content for all the alloys except at 900 where it was observed to increase with the increase in the Si content. The minimum erosion rate is observed in Al-20 wt. % Si alloy at 300 and 600 impingement angles because of its higher hardness in comparison to other alloys. However, at 90° impingement angle the wear rate for Al-20 wt. % Si alloy is found to be the minimum due to deformation, subsequent cracking and chipping off material.Keywords: Al-Si alloy, erosion wear, cast alloys, dislocation, strengthening
Procedia PDF Downloads 66457 The Development and Change of Settlement in Tainan County (1904-2015) Using Historical Geographic Information System
Authors: Wei Ting Han, Shiann-Far Kung
Abstract:
In the early time, most of the arable land is dry farming and using rainfall as water sources for irrigation in Tainan county. After the Chia-nan Irrigation System (CIS) was completed in 1930, Chia-nan Plain was more efficient allocation of limited water sources or irrigation, because of the benefit from irrigation systems, drainage systems, and land improvement projects. The problem of long-term drought, flood and salt damage in the past were also improved by CIS. The canal greatly improved the paddy field area and agricultural output, Tainan county has become one of the important agricultural producing areas in Taiwan. With the development of water conservancy facilities, affected by national policies and other factors, many agricultural communities and settlements are formed indirectly, also promoted the change of settlement patterns and internal structures. With the development of historical geographic information system (HGIS), Academia Sinica developed the WebGIS theme with the century old maps of Taiwan which is the most complete historical map of database in Taiwan. It can be used to overlay historical figures of different periods, present the timeline of the settlement change, also grasp the changes in the natural environment or social sciences and humanities, and the changes in the settlements presented by the visualized areas. This study will explore the historical development and spatial characteristics of the settlements in various areas of Tainan County. Using of large-scale areas to explore the settlement changes and spatial patterns of the entire county, through the dynamic time and space evolution from Japanese rule to the present day. Then, digitizing the settlement of different periods to perform overlay analysis by using Taiwan historical topographic maps in 1904, 1921, 1956 and 1989. Moreover, using document analysis to analyze the temporal and spatial changes of regional environment and settlement structure. In addition, the comparison analysis method is used to classify the spatial characteristics and differences between the settlements. Exploring the influence of external environments in different time and space backgrounds, such as government policies, major construction, and industrial development. This paper helps to understand the evolution of the settlement space and the internal structural changes in Tainan County.Keywords: historical geographic information system, overlay analysis, settlement change, Tainan County
Procedia PDF Downloads 128456 Logistical Optimization of Nuclear Waste Flows during Decommissioning
Authors: G. Dottavio, M. F. Andrade, F. Renard, V. Cheutet, A.-L. Ladier, S. Vercraene, P. Hoang, S. Briet, R. Dachicourt, Y. Baizet
Abstract:
An important number of technological equipment and high-skilled workers over long periods of time have to be mobilized during nuclear decommissioning processes. The related operations generate complex flows of waste and high inventory levels, associated to information flows of heterogeneous types. Taking into account that more than 10 decommissioning operations are on-going in France and about 50 are expected toward 2025: A big challenge is addressed today. The management of decommissioning and dismantling of nuclear installations represents an important part of the nuclear-based energy lifecycle, since it has an environmental impact as well as an important influence on the electricity cost and therefore the price for end-users. Bringing new technologies and new solutions into decommissioning methodologies is thus mandatory to improve the quality, cost and delay efficiency of these operations. The purpose of our project is to improve decommissioning management efficiency by developing a decision-support framework dedicated to plan nuclear facility decommissioning operations and to optimize waste evacuation by means of a logistic approach. The target is to create an easy-to-handle tool capable of i) predicting waste flows and proposing the best decommissioning logistics scenario and ii) managing information during all the steps of the process and following the progress: planning, resources, delays, authorizations, saturation zones, waste volume, etc. In this article we present our results from waste nuclear flows simulation during decommissioning process, including discrete-event simulation supported by FLEXSIM 3-D software. This approach was successfully tested and our works confirms its ability to improve this type of industrial process by identifying the critical points of the chain and optimizing it by identifying improvement actions. This type of simulation, executed before the start of the process operations on the basis of a first conception, allow ‘what-if’ process evaluation and help to ensure quality of the process in an uncertain context. The simulation of nuclear waste flows before evacuation from the site will help reducing the cost and duration of the decommissioning process by optimizing the planning and the use of resources, transitional storage and expensive radioactive waste containers. Additional benefits are expected for the governance system of the waste evacuation since it will enable a shared responsibility of the waste flows.Keywords: nuclear decommissioning, logistical optimization, decision-support framework, waste management
Procedia PDF Downloads 323455 Synthesis of High-Antifouling Ultrafiltration Polysulfone Membranes Incorporating Low Concentrations of Graphene Oxide
Authors: Abdulqader Alkhouzaam, Hazim Qiblawey, Majeda Khraisheh
Abstract:
Membrane treatment for desalination and wastewater treatment is one of the promising solutions to affordable clean water. It is a developing technology throughout the world and considered as the most effective and economical method available. However, the limitations of membranes’ mechanical and chemical properties restrict their industrial applications. Hence, developing novel membranes was the focus of most studies in the water treatment and desalination sector to find new materials that can improve the separation efficiency while reducing membrane fouling, which is the most important challenge in this field. Graphene oxide (GO) is one of the materials that have been recently investigated in the membrane water treatment sector. In this work, ultrafiltration polysulfone (PSF) membranes with high antifouling properties were synthesized by incorporating different loadings of GO. High-oxidation degree GO had been synthesized using a modified Hummers' method. The synthesized GO was characterized using different analytical techniques including elemental analysis, Fourier transform infrared spectroscopy - universal attenuated total reflectance sensor (FTIR-UATR), Raman spectroscopy, and CHNSO elemental analysis. CHNSO analysis showed a high oxidation degree of GO represented by its oxygen content (50 wt.%). Then, ultrafiltration PSF membranes incorporating GO were fabricated using the phase inversion technique. The prepared membranes were characterized using scanning electron microscopy (SEM) and atomic force microscopy (AFM) and showed a clear effect of GO on PSF physical structure and morphology. The water contact angle of the membranes was measured and showed better hydrophilicity of GO membranes compared to pure PSF caused by the hydrophilic nature of GO. Separation properties of the prepared membranes were investigated using a cross-flow membrane system. Antifouling properties were studied using bovine serum albumin (BSA) and humic acid (HA) as model foulants. It has been found that GO-based membranes exhibit higher antifouling properties compared to pure PSF. When using BSA, the flux recovery ratio (FRR %) increased from 65.4 ± 0.9 % for pure PSF to 84.0 ± 1.0 % with a loading of 0.05 wt.% GO in PSF. When using HA as model foulant, FRR increased from 87.8 ± 0.6 % to 93.1 ± 1.1 % with 0.02 wt.% of GO in PSF. The pure water permeability (PWP) decreased with loadings of GO from 181.7 L.m⁻².h⁻¹.bar⁻¹ of pure PSF to 181.1, and 157.6 L.m⁻².h⁻¹.bar⁻¹ with 0.02 and 0.05 wt.% GO respectively. It can be concluded from the obtained results that incorporating low loading of GO could enhance the antifouling properties of PSF hence improving its lifetime and reuse.Keywords: antifouling properties, GO based membranes, hydrophilicity, polysulfone, ultrafiltration
Procedia PDF Downloads 143454 Lessons Learned from Interlaboratory Noise Modelling in Scope of Environmental Impact Assessments in Slovenia
Abstract:
Noise assessment methods are regularly used in scope of Environmental Impact Assessments for planned projects to assess (predict) the expected noise emissions of these projects. Different noise assessment methods could be used. In recent years, we had an opportunity to collaborate in some noise assessment procedures where noise assessments of different laboratories have been performed simultaneously. We identified some significant differences in noise assessment results between laboratories in Slovenia. We estimate that despite good input Georeferenced Data to set up acoustic model exists in Slovenia; there is no clear consensus on methods for predictive noise methods for planned projects. We analyzed input data, methods and results of predictive noise methods for two planned industrial projects, both were done independently by two laboratories. We also analyzed the data, methods and results of two interlaboratory collaborative noise models for two existing noise sources (railway and motorway). In cases of predictive noise modelling, the validations of acoustic models were performed by noise measurements of surrounding existing noise sources, but in varying durations. The acoustic characteristics of existing buildings were also not described identically. The planned noise sources were described and digitized differently. Differences in noise assessment results between different laboratories have ranged up to 10 dBA, which considerably exceeds the acceptable uncertainty ranged between 3 to 6 dBA. Contrary to predictive noise modelling, in cases of collaborative noise modelling for two existing noise sources the possibility to perform the validation noise measurements of existing noise sources greatly increased the comparability of noise modelling results. In both cases of collaborative noise modelling for existing motorway and railway, the modelling results of different laboratories were comparable. Differences in noise modeling results between different laboratories were below 5 dBA, which was acceptable uncertainty set up by interlaboratory noise modelling organizer. The lessons learned from the study were: 1) Predictive noise calculation using formulae from International standard SIST ISO 9613-2: 1997 is not an appropriate method to predict noise emissions of planned projects since due to complexity of procedure they are not used strictly, 2) The noise measurements are important tools to minimize noise assessment errors of planned projects and should be in cases of predictive noise modelling performed at least for validation of acoustic model, 3) National guidelines should be made on the appropriate data, methods, noise source digitalization, validation of acoustic model etc. in order to unify the predictive noise models and their results in scope of Environmental Impact Assessments for planned projects.Keywords: environmental noise assessment, predictive noise modelling, spatial planning, noise measurements, national guidelines
Procedia PDF Downloads 234453 Small Community’s Proactive Thinking to Move from Zero to 100 Percent Water Reuse
Authors: Raj Chavan
Abstract:
The City of Jal serves a population of approximately 3,500 people, including 2,100 permanent inhabitants and 1,400 oil and gas sector workers and RV park occupants. Over the past three years, Jal's population has increased by about 70 percent, mostly due to the oil and gas industry. The City anticipates that the population will exceed 4,200 by 2020, necessitating the construction of a new wastewater treatment plant (WWTP) because the old plant (aerated lagoon system) cannot accommodate such rapid population expansion without major renovations or replacement. Adhering to discharge permit restrictions has been challenging due to aging infrastructure and equipment replacement needs, as well as increasing nutrient loading to the wastewater collecting system from the additional oil and gas residents' recreational vehicles. The WWTP has not been able to maintain permit discharge standards for total nitrogen of less than 20 mg N/L and other characteristics in recent years. Based on discussions with the state's environmental department, it is likely that the future permit renewal would impose stricter conditions. Given its location in the dry, western part of the country, the City must rely on its meager groundwater supplies and scant annual precipitation. The city's groundwater supplies will be depleted sooner than predicted due to rising demand from the growing population for drinking, leisure, and other industrial uses (fracking). The sole type of reuse the city was engaging in (recreational reuse for a golf course) had to be put on hold because of an effluent water compliance issue. As of right now, all treated effluent is evaporated. The city's long-term goal is to become a zero-waste community that sends all of its treated wastewater effluent either to the golf course, Jal Lake, or the oil and gas industry for reuse. Hydraulic fracturing uses a lot of water, but if the oil and gas industry can use recycled water, it can reduce its impact on freshwater supplies. The City's goal of 100% reuse has been delayed by the difficulties of meeting the constraints of the regular discharge permit due to the large rise in influent loads and the aging infrastructure. The City of Jal plans to build a new WWTP that can keep up with the city's rapid population increase due to the oil and gas industry. Several treatment methods were considered in light of the City's needs and its long-term goals, but MBR was ultimately chosen recommended since it meets all of the permit's requirements while also providing 100 percent beneficial reuse. This talk will lay out the plan for the city to reach its goal of 100 percent reuse, as well as the various avenues for funding the small community that have been considered.Keywords: membrane bioreactor, nitrogent, reuse, small community
Procedia PDF Downloads 87452 Youth and Employment: An Outlook on Challenges of Demographic Dividend
Authors: Vidya Yadav
Abstract:
India’s youth bulge is now sharpest at the critical 15-24 age group, even as its youngest, and oldest age groups begin to narrow. As the ‘single year, age data’ for the 2011 Census releases the data on the number of people at each year of age in the population. The data shows that India’s working age population (15-64 years) is now 63.4 percent of the total, as against just short of 60 percent in 2001. The numbers also show that the ‘dependency ratio’ the ratio of children (0-14) and the elderly (65 above) to those in the working age has shrunk further to 0.55. “Even as the western world is in ageing situation, these new numbers show that India’s population is still very young”. As the fertility falls faster in urban areas, rural India is younger than urban India; while 51.73 percent of rural Indians are under the age of 24 and 45.9 percent of urban Indians are under 24. The percentage of the population under the age of 24 has dropped, but many demographers say that it should not be interpreted as a sign of the youth bulge is shrinking. Rather it is because of “declining fertility, the number of infants and children reduces first, and this is what we see with the number of under age 24. Indeed the figure shows that the proportion of children in the 0-4 and 5-9 age groups has fallen in 2011 compared to 2001. For the first time, the percentage of children in the 10-14 age group has also fallen, as the effect of families reducing the number of children they have begins to be felt. The present paper key issue is to examine that “whether this growing youth bulge has the right skills for the workforce or not”. The study seeks to examine the youth population structure and employment distribution among them in India during 2001-2011 in different industrial category. It also tries to analyze the workforce participation rate as main and marginal workers both for male and female workers in rural and urban India by utilizing an abundant source of census data from 2001-2011. Result shows that an unconscionable number of adolescents are working when they should study. In rural areas, large numbers of youths are working as an agricultural labourer. Study shows that most of the youths working are in the 15-19 age groups. In fact, this is the age of entry into higher education, but due to economic compulsion forces them to take up jobs, killing their dreams of higher skills or education. Youths are primarily engaged in low paying irregular jobs which are clearly revealed by census data on marginal workers. That is those who get work for less than six months in a year. Large proportions of youths are involved in the cultivation and household industries works.Keywords: main, marginal, youth, work
Procedia PDF Downloads 290451 Equilibrium, Kinetic and Thermodynamic Studies of the Biosorption of Textile Dye (Yellow Bemacid) onto Brahea edulis
Authors: G. Henini, Y. Laidani, F. Souahi, A. Labbaci, S. Hanini
Abstract:
Environmental contamination is a major problem being faced by the society today. Industrial, agricultural, and domestic wastes, due to the rapid development in the technology, are discharged in the several receivers. Generally, this discharge is directed to the nearest water sources such as rivers, lakes, and seas. While the rates of development and waste production are not likely to diminish, efforts to control and dispose of wastes are appropriately rising. Wastewaters from textile industries represent a serious problem all over the world. They contain different types of synthetic dyes which are known to be a major source of environmental pollution in terms of both the volume of dye discharged and the effluent composition. From an environmental point of view, the removal of synthetic dyes is of great concern. Among several chemical and physical methods, adsorption is a promising technique due to the ease of use and low cost compared to other applications in the process of discoloration, especially if the adsorbent is inexpensive and readily available. The focus of the present study was to assess the potentiality of Brahea edulis (BE) for the removal of synthetic dye Yellow bemacid (YB) from aqueous solutions. The results obtained here may transfer to other dyes with a similar chemical structure. Biosorption studies were carried out under various parameters such as mass adsorbent particle, pH, contact time, initial dye concentration, and temperature. The biosorption kinetic data of the material (BE) was tested by the pseudo first-order and the pseudo-second-order kinetic models. Thermodynamic parameters including the Gibbs free energy ΔG, enthalpy ΔH, and entropy ΔS have revealed that the adsorption of YB on the BE is feasible, spontaneous, and endothermic. The equilibrium data were analyzed by using Langmuir, Freundlich, Elovich, and Temkin isotherm models. The experimental results show that the percentage of biosorption increases with an increase in the biosorbent mass (0.25 g: 12 mg/g; 1.5 g: 47.44 mg/g). The maximum biosorption occurred at around pH value of 2 for the YB. The equilibrium uptake was increased with an increase in the initial dye concentration in solution (Co = 120 mg/l; q = 35.97 mg/g). Biosorption kinetic data were properly fitted with the pseudo-second-order kinetic model. The best fit was obtained by the Langmuir model with high correlation coefficient (R2 > 0.998) and a maximum monolayer adsorption capacity of 35.97 mg/g for YB.Keywords: adsorption, Brahea edulis, isotherm, yellow Bemacid
Procedia PDF Downloads 177450 How to Reach Net Zero Emissions? On the Permissibility of Negative Emission Technologies and the Danger of Moral Hazards
Authors: Hanna Schübel, Ivo Wallimann-Helmer
Abstract:
In order to reach the goal of the Paris Agreement to not overshoot 1.5°C of warming above pre-industrial levels, various countries including the UK and Switzerland have committed themselves to net zero emissions by 2050. The employment of negative emission technologies (NETs) is very likely going to be necessary for meeting these national objectives as well as other internationally agreed climate targets. NETs are methods of removing carbon from the atmosphere and are thus a means for addressing climate change. They range from afforestation to technological measures such as direct air capture and carbon storage (DACCS), where CO2 is captured from the air and stored underground. As all so-called geoengineering technologies, the development and deployment of NETs are often subject to moral hazard arguments. As these technologies could be perceived as an alternative to mitigation efforts, so the argument goes, they are potentially a dangerous distraction from the main target of mitigating emissions. We think that this is a dangerous argument to make as it may hinder the development of NETs which are an essential element of net zero emission targets. In this paper we argue that the moral hazard argument is only problematic if we do not reflect upon which levels of emissions are at stake in order to meet net zero emissions. In response to the moral hazard argument we develop an account of which levels of emissions in given societies should be mitigated and not be the target of NETs and which levels of emissions can legitimately be a target of NETs. For this purpose, we define four different levels of emissions: the current level of individual emissions, the level individuals emit in order to appear in public without shame, the level of a fair share of individual emissions in the global budget, and finally the baseline of net zero emissions. At each level of emissions there are different subjects to be assigned responsibilities if societies and/or individuals are committed to the target of net zero emissions. We argue that all emissions within one’s fair share do not demand individual mitigation efforts. The same holds with regard to individuals and the baseline level of emissions necessary to appear in public in their societies without shame. Individuals are only under duty to reduce their emissions if they exceed this baseline level. This is different for whole societies. Societies demanding more emissions to appear in public without shame than the individual fair share are under duty to foster emission reductions and are not legitimate to reduce by introducing NETs. NETs are legitimate for reducing emissions only below the level of fair shares and for reaching net zero emissions. Since access to NETs to achieve net zero emissions demands technology not affordable to individuals there are also no full individual responsibilities to achieve net zero emissions. This is mainly a responsibility of societies as a whole.Keywords: climate change, mitigation, moral hazard, negative emission technologies, responsibility
Procedia PDF Downloads 119449 Biodegradation of Triclosan and Tetracycline in Sewage Sludge by Pleurotus Ostreatus Fungal Pellets
Authors: Ayda Maadani Mallak, Amir lakzian, Elham Khodaverdi, Gholam Hossein Haghnia
Abstract:
The use of pharmaceuticals and personal care products such as antibiotics and antibacterials has been increased in recent years. Since the major part of consumed compounds remains unchanged in the wastewater treatment plant, they will easily find their way into the human food chain following the land use of sewage sludge (SS). Biological treatment of SS is one the most effective methods for expunging contaminants. White rot fungi, due to their ligninolytic enzymes, are extensively used to degrade organic compounds. Among all three different morphological forms and growth patterns of filamentous fungi (mycelia, clumps, and pellets), fungal pellet formation has been the subject of interest in industrial bioprocesses. Therefore this study was aimed to investigate the uptake of tetracycline (TC) and triclosan (TCS) by radish plant (Raphanus sativus) from soil amended with untreated and pretreated SS by P. ostreatus fungal pellets under greenhouse conditions. The experimental soil was amended with 1) Contaminated SS with TC at a concentration of 100 mgkg-1 and pretreated by fungal pellets, 2) Contaminated SS with TC at 100 mgkg-1 and untreated with fungal pellets, 3) Contaminated SS with TCS at a concentration of 50 mgkg-1 and pretreated by fungal pellets, 4) contaminated SS with TCS at 50 mgkg-1 and untreated with fungal pellets. An uncontaminated and untreated SS-amended soil also was considered as control treatment. An AB SCIEX 3200 QTRAP LC-MS/MS system was used in order to analyze the concentration of TC and TCS in plant tissues and soil medium. Results of this study revealed that the presence of TC and TCS in SS-amended soil decreased the radish biomass significantly. The reduction effect of TCS on dry biomass of shoot and root was 39 and 45% compared to controls, whereas for TC, the reduction percentage for shoot and root was 27 and 40.6%, respectively. However, fungal treatment of SS by P. ostreatus pellets reduced the negative effect of both compounds on plant biomass remarkably, as no significant difference was observed compared to control treatments. Pretreatment of SS with P. ostreatus also caused a significant reduction in translocation factor (concentration in shoot/root), especially for TC compound up to 32.3%, whereas this reduction for TCS was less (8%) compared to untreated SS. Generally, the results of this study confirmed the positive effect of using fungal pellets in SS amendment to decrease TC and TCS uptake by radish plants. In conclusion, P. ostreatus fungal pellets might provide future insights into bioaugmentation to remove antibiotics from environmental matrices.Keywords: antibiotic, fungal pellet, sewage sludge, white-rot fungi
Procedia PDF Downloads 157448 Structural Equation Modelling Based Approach to Integrate Customers and Suppliers with Internal Practices for Lean Manufacturing Implementation in the Indian Context
Authors: Protik Basu, Indranil Ghosh, Pranab K. Dan
Abstract:
Lean management is an integrated socio-technical system to bring about a competitive state in an organization. The purpose of this paper is to explore and integrate the role of customers and suppliers with the internal practices of the Indian manufacturing industries towards successful implementation of lean manufacturing (LM). An extensive literature survey is carried out. An attempt is made to build an exhaustive list of all the input manifests related to customers, suppliers and internal practices necessary for LM implementation, coupled with a similar exhaustive list of the benefits accrued from its successful implementation. A structural model is thus conceptualized, which is empirically validated based on the data from the Indian manufacturing sector. With the current impetus on developing the industrial sector, the Government of India recently introduced the Lean Manufacturing Competitiveness Scheme that aims to increase competitiveness with the help of lean concepts. There is a huge scope to enrich the Indian industries with the lean benefits, the implementation status being quite low. Hardly any survey-based empirical study in India has been found to integrate customers and suppliers with the internal processes towards successful LM implementation. This empirical research is thus carried out in the Indian manufacturing industries. The basic steps of the research methodology followed in this research are the identification of input and output manifest variables and latent constructs, model proposition and hypotheses development, development of survey instrument, sampling and data collection and model validation (exploratory factor analysis, confirmatory factor analysis, and structural equation modeling). The analysis reveals six key input constructs and three output constructs, indicating that these constructs should act in unison to maximize the benefits of implementing lean. The structural model presented in this paper may be treated as a guide to integrating customers and suppliers with internal practices to successfully implement lean. Integrating customers and suppliers with internal practices into a unified, coherent manufacturing system will lead to an optimum utilization of resources. This work is one of the very first researches to have a survey-based empirical analysis of the role of customers, suppliers and internal practices of the Indian manufacturing sector towards an effective lean implementation.Keywords: customer management, internal manufacturing practices, lean benefits, lean implementation, lean manufacturing, structural model, supplier management
Procedia PDF Downloads 179
447 Developing a Maturity Model of Digital Twin Application for Infrastructure Asset Management
Authors: Qingqing Feng, S. Thomas Ng, Frank J. Xu, Jiduo Xing
Abstract:
Faced with unprecedented challenges including aging assets, a lack of maintenance budget, overtaxed and inefficient usage, and public outcry for better service quality, today's infrastructure systems have become a main focus of many metropolises pursuing sustainable urban development and improved resilience. Digital twin, one of the most innovative enabling technologies available today, may open up new ways of tackling various infrastructure asset management (IAM) problems. A digital twin application for IAM, as its name indicates, is an evolving digital model of the intended infrastructure that provides functions including real-time monitoring; what-if event simulation; and scheduling, maintenance, and management optimization based on technologies such as IoT, big data and AI. There are already numerous global digital twin initiatives such as 'Virtual Singapore' and 'Digital Built Britain'. With digital twin technology progressively permeating the IAM field, it is necessary to consider the maturity of such applications and how institutional or industrial digital twin application processes will evolve in the future. To address the lack of such a benchmark, a draft maturity model is developed for digital twin application in the IAM field. First, an overview of current smart-city maturity models is given, based on which the draft Maturity Model of Digital Twin Application for Infrastructure Asset Management (MM-DTIAM) is developed for multiple stakeholders to evaluate maturity and derive informed decisions. The development process follows a systematic approach with four major procedures, namely scoping, designing, populating and testing. Through in-depth literature review, interviews and focus group meetings, the key domain areas are populated, defined and iteratively tuned. Finally, case studies of several digital twin projects are conducted for verification. The findings of the research reveal that: (i) the developed maturity model outlines five maturing levels leading to an optimised digital twin application, covering strategic intent, data, technology, governance, and stakeholders' engagement; (ii) based on the case studies, levels 1 to 3 are already partially implemented in some initiatives while level 4 is on the way; and (iii) more practice is still needed to refine the draft so that the key domain areas are mutually exclusive and collectively exhaustive. Keywords: digital twin, infrastructure asset management, maturity model, smart city
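A minimal sketch of how the five maturing levels and the five key domain areas named in the abstract could be encoded for a simple self-assessment; the level labels are assumptions for illustration and are not taken from the MM-DTIAM itself.

```python
# Level names below are placeholders, not the paper's terminology.
MATURITY_LEVELS = {1: "Initial", 2: "Managed", 3: "Defined", 4: "Integrated", 5: "Optimised"}

DOMAINS = ["strategic intent", "data", "technology", "governance", "stakeholders' engagement"]

def overall_maturity(scores: dict) -> float:
    """Average the per-domain scores (1-5) into a single indicative maturity value."""
    missing = [d for d in DOMAINS if d not in scores]
    if missing:
        raise ValueError(f"missing domain scores: {missing}")
    return sum(scores[d] for d in DOMAINS) / len(DOMAINS)

example = {"strategic intent": 3, "data": 2, "technology": 3,
           "governance": 2, "stakeholders' engagement": 1}
print(overall_maturity(example))  # 2.2 on the 1-5 scale
```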
Procedia PDF Downloads 157
446 The Impact of Supporting Productive Struggle in Learning Mathematics: A Quasi-Experimental Study in High School Algebra Classes
Authors: Sumeyra Karatas, Veysel Karatas, Reyhan Safak, Gamze Bulut-Ozturk, Ozgul Kartal
Abstract:
Productive struggle entails a student's cognitive exertion to comprehend mathematical concepts and uncover solutions that are not immediately apparent. The significance of productive struggle in learning mathematics is accentuated by influential educational theorists, who emphasize its necessity for learning mathematics with understanding. Consequently, supporting productive struggle in learning mathematics is recognized as a high-leverage and effective mathematics teaching practice. In this study, the investigation into the role of productive struggle in learning mathematics led to the development of a comprehensive rubric for productive struggle pedagogy through an exhaustive literature review. The rubric consists of eight primary criteria and 37 sub-criteria, providing a detailed description of the teacher actions and pedagogical choices that foster students' productive struggle. These criteria encompass various pedagogical aspects, including task design, tool implementation, allowing time for struggle, posing questions, scaffolding, handling mistakes, acknowledging effort, and facilitating discussion/feedback. Utilizing this rubric, a team of researchers and teachers designed eight 90-minute lesson plans employing productive struggle pedagogy for a two-week unit on solving systems of linear equations. Simultaneously, the same team designed another set of eight lesson plans on the same topic, featuring identical content and problems but employing a traditional lecture-and-practice model. The objective was to assess the impact of supporting productive struggle on students' mathematics learning, defined by the strands of mathematical proficiency. This quasi-experimental study compares the control group, which received traditional lecture-and-practice instruction, with the treatment group, which experienced the productive struggle pedagogy. Sixty-six 10th- and 11th-grade students from two algebra classes, taught by the same teacher at a high school, underwent either the productive struggle pedagogy or the lecture-and-practice approach over eight 90-minute class sessions spanning two weeks. To measure students' learning, an assessment was created and validated by a team of researchers and teachers. It comprised seven open-response problems assessing the strands of mathematical proficiency: procedural and conceptual understanding, strategic competence, and adaptive reasoning on the topic. The test was administered at the beginning and end of the two weeks as pre- and post-tests. Students' solutions were scored using an established rubric, subjected to expert validation and an inter-rater reliability process involving multiple criteria for each problem based on steps and procedures. An analysis of covariance (ANCOVA) was conducted to examine the differences between the control group, which received traditional pedagogy, and the treatment group, exposed to the productive struggle pedagogy, on the post-test scores while controlling for the pre-test. The results indicated a significant effect of treatment on post-test scores for procedural understanding (F(2, 63) = 10.47, p < .001), strategic competence (F(2, 63) = 9.92, p < .001), adaptive reasoning (F(2, 63) = 10.69, p < .001), and conceptual understanding (F(2, 63) = 10.06, p < .001), controlling for pre-test scores. This demonstrates the positive impact of supporting productive struggle in learning mathematics. In conclusion, the results revealed the significance of the role of productive struggle in learning mathematics. The study further explored the practical application of productive struggle through the development of a comprehensive rubric describing the pedagogy of supporting productive struggle. Keywords: effective mathematics teaching practice, high school algebra, learning mathematics, productive struggle
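A minimal sketch of the ANCOVA design described above, post-test score modelled on treatment group while controlling for the pre-test, using statsmodels; the column names and the CSV file are assumptions for illustration.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("assessment_scores.csv")  # assumed columns: group, pre, post

model = smf.ols("post ~ pre + C(group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F and p for the group effect, adjusted for pre-test
```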
Procedia PDF Downloads 51
445 A Simulation-Based Investigation of the Smooth-Wall, Radial Gravity Problem of Granular Flow through a Wedge-Shaped Hopper
Authors: A. F. Momin, D. V. Khakhar
Abstract:
Granular materials consist of discrete particles, found in nature and in various industries, that under gravity-driven flow behave macroscopically like liquids. A fundamental industrial unit operation is a hopper with inclined walls, or a converging channel, in which material flows downward under gravity and exits the storage bin through the bottom outlet. The simplest form of the flow corresponds to a wedge-shaped, quasi-two-dimensional geometry with smooth walls and a radially directed gravitational force toward the apex of the wedge. These flows were examined using the Mohr-Coulomb criterion in the classic work of Savage (1965), while Ravi Prakash and Rao (1988) used critical state theory. The smooth-wall radial gravity (SWRG) wedge-shaped hopper is simulated here using the discrete element method (DEM) to test the existing theories. The DEM simulations involve the solution of Newton's equations, taking particle-particle interactions into account, to compute the stress and velocity fields of the flow in the SWRG system. Our computational results are consistent with the predictions of Savage (1965) and Ravi Prakash and Rao (1988), except for the region near the exit, where both viscous and frictional effects are present. To further comprehend this behaviour, a parametric analysis is carried out to examine the rheology of wedge-shaped hoppers by varying the orifice diameter, wedge angle, friction coefficient, and stiffness. The conclusion is that velocity increases as the flow rate increases but decreases as the wedge angle and friction coefficient increase; no substantial changes in velocity were observed when varying the stiffness. It is anticipated that stresses at the exit result from the transfer of momentum during particle collisions; for this reason, relationships between viscosity and shear rate are shown, and all the data collapse onto a single curve. In addition, it is demonstrated that viscosity and volume fraction exhibit power-law correlations with the inertial number, and that all the data again collapse onto a single curve. A continuum model for describing granular flows is presented using these empirical correlations. Keywords: discrete element method, gravity flow, smooth-wall, wedge-shaped hoppers
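A minimal sketch of the standard inertial-number scaling commonly used to collapse granular rheology data, with a power-law fit on log-log axes; the particle properties and the synthetic (I, viscosity) data are illustrative and do not come from the simulations reported here.

```python
import numpy as np

def inertial_number(shear_rate, pressure, d, rho):
    """Dimensionless inertial number I = gamma_dot * d / sqrt(P / rho)."""
    return shear_rate * d / np.sqrt(pressure / rho)

# Illustrative parameters: 2 mm particles of density 2500 kg/m^3 under 1 kPa
I = inertial_number(shear_rate=10.0, pressure=1.0e3, d=2.0e-3, rho=2500.0)
print(f"I = {I:.3e}")

# Power-law collapse eta ~ A * I**n: fit the exponent n from (I, eta) pairs on log-log axes.
I_data = np.array([1e-3, 3e-3, 1e-2, 3e-2, 1e-1])
eta_data = 0.5 * I_data ** -0.8           # synthetic data standing in for DEM output
n, logA = np.polyfit(np.log(I_data), np.log(eta_data), 1)
print(f"fitted exponent n = {n:.2f}")     # ~ -0.8 for this synthetic set
```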
Procedia PDF Downloads 88
444 Assessment of Groundwater Quality in Karakulam Grama Panchayath in Thiruvananthapuram, Kerala State, South India
Authors: D. S. Jaya, G. P. Deepthi
Abstract:
Groundwater is vital to the livelihoods and health of the majority of the people, since it provides almost the entire water resource for domestic, agricultural and industrial uses. Groundwater quality comprises physical, chemical, and bacteriological qualities. The present investigation was carried out to determine the physicochemical and bacteriological quality of the groundwater sources in the residential areas of Karakulam Grama Panchayath in Thiruvananthapuram district, Kerala state, India. Karakulam is located in the eastern suburbs of Thiruvananthapuram city, and wells are the major drinking water source of the residents in the study area. The present study aims to assess the potability and irrigational suitability of groundwater in the study area. The water samples were collected from randomly selected dug wells and bore wells in the study area during the post-monsoon and pre-monsoon seasons of the year 2014, after a preliminary field survey. The physical, chemical and bacteriological parameters of the water samples were analysed following standard procedures. The concentrations of heavy metals (Cd, Pb, and Mn) in the acid-digested water samples were determined using an Atomic Absorption Spectrophotometer. The results showed that the pH of the well water samples ranged from acidic to alkaline. In the majority of well water samples (> 54%), the iron and magnesium contents were high in both seasons studied, and the values were above the permissible limits of the WHO drinking water quality standards. Bacteriological analyses showed that 63% of the wells were contaminated with total coliforms in both seasons. Irrigational suitability of the groundwater was assessed by determining chemical indices such as Sodium Percentage (%Na), Sodium Adsorption Ratio (SAR), Residual Sodium Carbonate (RSC), and Permeability Index (PI), and the results indicate that the well water in the study area is suitable for irrigation purposes. The study therefore reveals degradation of the drinking water quality of groundwater sources in Karakulam Grama Panchayath in terms of chemical and bacteriological characteristics, and the water is not potable without proper treatment. More than one-third of the wells tested were positive for total coliforms, and this bacterial contamination may pose a threat to public health. The study recommends periodic well water quality monitoring in the study area and awareness programs among the residents. Keywords: bacteriological, groundwater, irrigational suitability, physicochemical, potability
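A minimal sketch of the standard irrigation-suitability indices named above, computed from ion concentrations expressed in meq/L; the example composition is illustrative and is not measured data from this study.

```python
import math

def sar(na, ca, mg):
    """Sodium Adsorption Ratio = Na / sqrt((Ca + Mg) / 2)."""
    return na / math.sqrt((ca + mg) / 2.0)

def sodium_percentage(na, k, ca, mg):
    """%Na = 100 * (Na + K) / (Ca + Mg + Na + K)."""
    return 100.0 * (na + k) / (ca + mg + na + k)

def rsc(co3, hco3, ca, mg):
    """Residual Sodium Carbonate = (CO3 + HCO3) - (Ca + Mg)."""
    return (co3 + hco3) - (ca + mg)

def permeability_index(na, hco3, ca, mg):
    """Permeability Index = 100 * (Na + sqrt(HCO3)) / (Ca + Mg + Na)."""
    return 100.0 * (na + math.sqrt(hco3)) / (ca + mg + na)

# Example well-water composition in meq/L (hypothetical)
na, k, ca, mg, co3, hco3 = 2.1, 0.2, 1.8, 1.5, 0.0, 2.4
print(round(sar(na, ca, mg), 2), round(sodium_percentage(na, k, ca, mg), 1),
      round(rsc(co3, hco3, ca, mg), 2), round(permeability_index(na, hco3, ca, mg), 1))
```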
Procedia PDF Downloads 263
443 Exploring the Intersection Between the General Data Protection Regulation and the Artificial Intelligence Act
Authors: Maria Jędrzejczak, Patryk Pieniążek
Abstract:
The European legal reality is on the eve of significant change. In European Union law, there is talk of a "fourth industrial revolution" driven by massive data resources linked to powerful algorithms and computing capacity. This is closely linked to technological developments in the area of artificial intelligence, which have prompted analyses covering the legal environment as well as the economic and social impact, also from an ethical perspective. The discussion on the regulation of artificial intelligence is one of the most serious and most widely held debates at both European Union and Member State level. The literature expects legal solutions to guarantee security for fundamental rights, including privacy, in artificial intelligence systems. There is no doubt that personal data have been processed increasingly in recent years; it would be impossible for artificial intelligence to function without processing large amounts of data (both personal and non-personal). The main driving force behind the current development of artificial intelligence is advances in computing, but also the increasing availability of data. High-quality data are crucial to the effectiveness of many artificial intelligence systems, particularly when techniques involving model training are used. The use of computers and artificial intelligence technology increases the speed and efficiency of the actions taken, but it also creates security risks of unprecedented magnitude for the data processed. The proposed regulation in the field of artificial intelligence therefore requires analysis in terms of its impact on the regulation of personal data protection. It is necessary to determine the mutual relationship between these regulations and which areas of personal data protection regulation are particularly important for processing personal data in artificial intelligence systems. The adopted axis of consideration is a preliminary assessment of two issues: 1) which principles of data protection should be applied when processing personal data in artificial intelligence systems, and 2) how liability for personal data breaches is regulated in such systems. The need to change the regulations regarding the rights and obligations of data subjects and entities processing personal data cannot be excluded. It is possible that changes will be required in the provisions assigning liability for a breach of the protection of personal data processed in artificial intelligence systems. The research process in this case concerns the identification of areas of personal data protection that are particularly important (and may require re-regulation) in light of the proposed legal regulation on artificial intelligence. The main question the authors want to answer is how European Union regulation against data protection breaches in artificial intelligence systems is shaping up. The answer will include examples illustrating the practical implications of these legal regulations. Keywords: data protection law, personal data, AI law, personal data breach
Procedia PDF Downloads 65
442 Selective Separation of Amino Acids by Reactive Extraction with Di-(2-Ethylhexyl) Phosphoric Acid
Authors: Alexandra C. Blaga, Dan Caşcaval, Alexandra Tucaliuc, Madalina Poştaru, Anca I. Galaction
Abstract:
Amino acids are valuable chemical products used in human foods, in animal feed additives and in the pharmaceutical field. Recently, there has been a noticeable rise in amino acid utilization throughout the world, including their use as raw materials in the production of various industrial chemicals: oil-gelating agents (amino acid-based surfactants) to recover effluent oil in seas and rivers, and poly(amino acids), which are attracting attention for the manufacture of biodegradable plastics. Amino acids can be obtained by biosynthesis or from protein hydrolysis, but their separation from the resulting mixtures can be challenging. In recent decades there has been continuous interest in developing processes that improve the selectivity and yield of downstream processing steps. Liquid-liquid extraction of amino acids (which are dissociated at any pH value of the aqueous solution) is possible only by using the reactive extraction technique, mainly with extractants such as organophosphoric acid derivatives, high-molecular-weight amines and crown ethers. The purpose of this study was to analyse the separation of nine amino acids of acidic character (l-aspartic acid, l-glutamic acid), basic character (l-histidine, l-lysine, l-arginine) and neutral character (l-glycine, l-tryptophan, l-cysteine, l-alanine) by reactive extraction with di-(2-ethylhexyl)phosphoric acid (D2EHPA) dissolved in butyl acetate. The results showed that the separation yield is controlled by the pH value of the aqueous phase: the reactive extraction of amino acids with D2EHPA is possible only if the amino acids exist in aqueous solution in their cationic forms (pH of the aqueous phase below the isoelectric point). The studies on individual amino acids indicated the possibility of selectively separating groups of amino acids with similar acidic properties as a function of the aqueous solution pH: the maximum yields are reached in the pH range 2–3 and then decrease strongly as the pH increases. Thus, for acidic and neutral amino acids the extraction becomes impossible at the isoelectric point (pHi), and for basic amino acids at a pH value lower than pHi, as a result of carboxylic group dissociation. The results obtained for separation from the mixture of the nine amino acids at different pH values show that all amino acids are extracted, with different yields, in the pH range 1.5–3. Above this range, the extract contains only the amino acids of neutral and basic character; at pH 5–6 only the neutral amino acids are extracted, and at pH > 6 extraction becomes impossible. Using this technique, the total separation of the following amino acid groups was performed: neutral amino acids at pH 5–5.5, basic amino acids and l-cysteine at pH 4–4.5, l-histidine at pH 3–3.5 and acidic amino acids at pH 2–2.5. Keywords: amino acids, di-(2-ethylhexyl) phosphoric acid, reactive extraction, selective extraction
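A minimal sketch of diprotic amino-acid speciation, illustrating why the cationic fraction, the only form the abstract reports as extractable by D2EHPA, vanishes as the pH approaches the isoelectric point; the pKa values are textbook figures for glycine used purely as an example.

```python
def cation_fraction(pH, pKa1, pKa2):
    """Fraction of the fully protonated (cationic) species of a diprotic amino acid."""
    h = 10.0 ** (-pH)
    ka1, ka2 = 10.0 ** (-pKa1), 10.0 ** (-pKa2)
    return h * h / (h * h + h * ka1 + ka1 * ka2)

for pH in (2.0, 3.0, 4.0, 5.0, 6.0):
    print(pH, round(cation_fraction(pH, pKa1=2.34, pKa2=9.60), 3))
# The cationic fraction drops from ~0.69 at pH 2 to essentially zero by pH 5-6,
# mirroring the reported loss of extraction as the pH rises above ~3.
```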
Procedia PDF Downloads 431
441 Automatic and High Precise Modeling for System Optimization
Authors: Stephanie Chen, Mitja Echim, Christof Büskens
Abstract:
To describe and propagate the behavior of a system, mathematical models are formulated, and parameter identification is used to adapt the coefficients of the underlying laws of science. For complex systems this approach can be incomplete, hence imprecise, and moreover too slow to compute efficiently. Such models may therefore not be applicable to the numerical optimization of real systems, since these techniques require numerous evaluations of the models. Moreover, not all quantities necessary for the identification might be available, and hence the system must be adapted manually. This paper therefore describes an approach that generates models which overcome the aforementioned limitations by focusing not on physical laws but on measured (sensor) data of real systems. The approach is more general, since it generates models for any system regardless of the scientific background, and it can automatically identify correlations in the data. The method can be classified as a multivariate data regression analysis. In contrast to many other data regression methods, this variant is also able to identify correlations of products of variables, not only of single variables, enabling a far more precise representation of causal correlations. The basis and explanation of this method come from an analytical background: the series expansion. Another advantage of this technique is the possibility of real-time adaptation of the generated models during operation. In this way, system changes due to aging, wear or perturbations from the environment can be taken into account, which is indispensable for realistic scenarios. Since these data-driven models can be evaluated very efficiently and with high precision, they can be used in mathematical optimization algorithms that minimize a cost function, e.g. time, energy consumption, operational costs or a mixture of them, subject to additional constraints. The proposed method has been tested successfully in several complex applications with strong industrial requirements. The generated models were able to simulate the given systems with an error of less than one percent, and the automatic identification of correlations discovered previously unknown relationships. In summary, the approach described above is able to compute precise, real-time-adaptive, data-based models efficiently in different fields of industry. Combined with an effective mathematical optimization algorithm such as WORHP (We Optimize Really Huge Problems), complex systems can be represented by a high-precision model and optimized according to the user's wishes. The proposed methods are illustrated with different examples. Keywords: adaptive modeling, automatic identification of correlations, data based modeling, optimization
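A minimal sketch of a data-driven regression that, in the spirit of the approach described, includes products of variables (cross terms) rather than only single variables; the column names and CSV file are assumptions for illustration, and the sketch uses scikit-learn rather than the authors' own algorithm.

```python
import pandas as pd
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

data = pd.read_csv("sensor_log.csv")          # hypothetical sensor measurements
X = data[["x1", "x2", "x3"]].to_numpy()       # input quantities
y = data["target"].to_numpy()                 # quantity to be modelled

# Degree-2 expansion adds x1*x2, x1*x3, x2*x3, x1^2, ... as candidate regressors,
# analogous to keeping cross terms of a truncated series expansion.
poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), y)

print(dict(zip(poly.get_feature_names_out(["x1", "x2", "x3"]), model.coef_.round(4))))
```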
Procedia PDF Downloads 409
440 Global Modeling of Drill String Dragging and Buckling in 3D Curvilinear Bore-Holes
Authors: Valery Gulyayev, Sergey Glazunov, Elena Andrusenko, Nataliya Shlyun
Abstract:
Enhancement of the technology and techniques for drilling deep directed oil and gas bore-wells is of essential industrial significance, because such wells make it possible to increase productivity and output. They are generally used for drilling in hard and shale formations, which is why their drivage is often accompanied by emergency and failure effects. As practice corroborates, the principal drawback in the drivage of long curvilinear bore-wells is the need to overcome large force hindrances caused by the simultaneous action of gravity, contact and friction forces. These forces depend primarily on the type of technological regime, the drill string stiffness, and the bore-hole tortuosity and length; they can lead to Eulerian buckling of the drill string and to its sticking. To predict and exclude these states, appropriate mathematical models and computer simulation methods must play a dominant role. At the same time, these mechanical phenomena are very complex, and only simplified approaches ('soft-string drag and torque models') are commonly used for their analysis. Considering that the cost of directed wells now increases markedly with the complication of their geometry and the enlargement of their lengths, the price of mistakes in drill string behavior simulation based on simplified approaches can be very high, so the problem of elaborating correct software is urgent. This paper deals with the problem of simulating the regimes of drilling deep curvilinear bore-wells with prescribed imperfect geometrical trajectories of their axial lines. On the basis of the theory of curvilinear flexible elastic rods, methods of differential geometry, and numerical analysis methods, a 3D 'stiff-string drag and torque model' of drill string bending and the corresponding software are elaborated for the simulation of tripping-in, tripping-out and drilling operations. Computer calculations show that the contact and friction forces can be calculated and regulated, providing predesigned trouble-free modes of operation. The elaborated mathematical models and software can be used to predict and prevent emergency situations at the design and realization stages of the drilling process. Keywords: curvilinear drilling, drill string tripping in and out, contact forces, resistance forces
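A minimal sketch of the simplified soft-string drag model that the authors contrast with their stiff-string formulation: the axial force is integrated station by station along the well path, with Coulomb friction acting on the side (normal) force. The friction coefficient, unit weight and survey stations are assumed values, and the sketch is not the paper's 3D elastic-rod model.

```python
import math

MU = 0.3          # friction coefficient (assumed)
W = 300.0         # buoyed weight per unit length, N/m (assumed)

def drag_profile(stations, f_bottom=0.0, tripping_out=True):
    """stations: list of (delta_s [m], inclination [rad], delta_azimuth [rad]), bit to surface."""
    f = f_bottom
    sign = 1.0 if tripping_out else -1.0      # friction adds to or subtracts from the weight term
    for ds, inc, dphi in stations:
        dtheta = 0.0                          # constant inclination within a station, for simplicity
        normal = math.hypot(f * dphi * math.sin(inc), f * dtheta + W * ds * math.sin(inc))
        f += W * ds * math.cos(inc) + sign * MU * normal
    return f

# Two straight 500 m stations at 30 degrees inclination, no azimuth change
hookload = drag_profile([(500.0, math.radians(30.0), 0.0)] * 2)
print(round(hookload / 1000.0, 1), "kN")      # ~304.8 kN for these assumed inputs
```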
Procedia PDF Downloads 146
439 Estimation of Morbidity Level of Industrial Labour Conditions at Zestafoni Ferroalloy Plant
Authors: M. Turmanauli, T. Todua, O. Gvaberidze, R. Javakhadze, N. Chkhaidze, N. Khatiashvili
Abstract:
Background: Mining processes have a significant influence on human health and quality of life. In recent years, events in Georgia have been reflected in industrial working processes; in particular, minimal labor safety requirements, workplace hygiene standards and work-rest regimes are not observed. This situation is often caused by a lack of responsibility, awareness and knowledge among both workers and employers, and the control and protection of working conditions have worsened in many industries. Materials and Methods: To evaluate the current situation, a prospective epidemiological study using face-to-face interviews was conducted at the Georgian "Manganese Zestafoni Ferroalloy Plant" in 2011-2013. 65.7% of employees (1,428 bulletins) were surveyed, and the incidence rates of temporary disability days were studied. Results: The average length of a single episode of temporary disability was studied both by sex group and for the whole cohort. According to the classes of harmfulness, the following results were obtained: Class 2.0: 10.3%; Class 3.1: 12.4%; Class 3.2: 35.1%; Class 3.3: 12.1%; Class 3.4: 17.6%; Class 4.0: 12.5%. Among the employees, 47.5% were tobacco consumers and 83.1% were alcohol consumers. By age group and years of work, morbidity was most prevalent among employees aged ≥50 and those with ≥21 years of work, respectively; the data thus revealed increased morbidity rates with age and years of work. Diseases of the bones, joints and connective tissue, aggravation of chronic respiratory diseases, ischemic heart disease, hypertension and cerebral circulatory disorders were the leading conditions. High morbidity prevalence was observed in workplaces with unsatisfactory labor conditions from the hygienic point of view. Conclusion: According to the data obtained, the causes of morbidity are the following: unsafe labor conditions; incomplete preventive medical examinations (preliminary and periodic); lack of access to appropriate health care services; and deficient gathering, recording and analysis of morbidity data. This epidemiological study was conducted at the JSC "Manganese Ferro Alloy Plant" under the State program "Prevention of Occupational Diseases" (program code 35 03 02 05). Keywords: occupational health, mining process, morbidity level, cerebral blood discirculation
Procedia PDF Downloads 428
438 A Review on Stormwater Harvesting and Reuse
Authors: Fatema Akram, Mohammad G. Rasul, M. Masud K. Khan, M. Sharif I. I. Amir
Abstract:
Australia is a country of some 7.7 million square kilometres with a population of about 22.6 million. At present, water security is a major challenge for Australia: in some areas the use of water resources is approaching, and in others exceeding, the limits of sustainability. A focal point of proposed national water conservation programs is the recycling of both urban storm-water and treated wastewater, but recycling is not yet widely practiced in Australia, and storm-water in particular is neglected. In Australia, only 4% of storm-water and rainwater is recycled, whereas less than 1% of reclaimed wastewater is reused within urban areas. Therefore, accurately monitoring, assessing and predicting the availability, quality and use of this precious resource are required for better management. As storm-water is usually of better quality than untreated sewage or industrial discharge, it enjoys better public acceptance for recycling and reuse, particularly for non-potable uses such as irrigation and watering lawns and gardens. Existing storm-water recycling practice lags far behind research, and no robust technologies have been developed for this purpose. There is therefore a clear need for modern technologies to assess the feasibility of storm-water harvesting and reuse, and numerical modelling has in recent times become a popular tool for this task, capturing the complex hydrological and hydraulic processes of the study area. The hydrologic model computes the storm-water quantity needed to design the system components, and the hydraulic model routes the flow through the storm-water infrastructure; nowadays a water quality module is also incorporated into these models. Integration of a Geographic Information System (GIS) with these models provides the additional advantage of managing spatial information. For the overall management of a storm-water harvesting project, however, a Decision Support System (DSS) plays an important role, incorporating a database with the model and GIS for proper management of temporal information; additionally, a DSS includes evaluation tools and a graphical user interface. This research aims to critically review and discuss all aspects of storm-water harvesting and reuse, such as available guidelines, public acceptance of water reuse, and the scope and recommendations for future studies. In addition, this paper identifies, understands and addresses the importance of modern technologies capable of properly managing storm-water harvesting and reuse. Keywords: storm-water management, storm-water harvesting and reuse, numerical modelling, geographic information system, decision support system, database
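A minimal sketch of the Rational Method, a common first estimate of the storm-water quantity that the hydrologic modelling step described above is used to compute; the runoff coefficient, rainfall intensity and catchment area below are illustrative assumptions.

```python
def peak_runoff_m3s(runoff_coefficient, rainfall_intensity_mm_per_hr, area_ha):
    """Rational Method Q = C * i * A, with i in mm/h and A in hectares, returning m^3/s."""
    # 1 mm/h over 1 ha equals 10 m^3/h, i.e. 1/360 m^3/s, hence the 360 divisor.
    return runoff_coefficient * rainfall_intensity_mm_per_hr * area_ha / 360.0

print(round(peak_runoff_m3s(0.8, 50.0, 12.0), 2), "m^3/s")  # ~1.33 m^3/s for these assumed inputs
```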
Procedia PDF Downloads 372