Search results for: reduced order models
21002 Effects of Level Densities and Those of a-Parameter in the Framework of Preequilibrium Model for 63,65Cu(n,xp) Reactions in Neutrons at 9 to 15 MeV
Authors: L. Yettou
Abstract:
In this study, proton emission spectra produced by the 63Cu(n,xp) and 65Cu(n,xp) reactions are calculated in the framework of preequilibrium models using the EMPIRE and TALYS codes. Exciton Model predictions combined with the Kalbach angular-distribution systematics and the Hybrid Monte Carlo Simulation (HMS) were used. The effects of level densities and of the level density a-parameter on the calculations have been investigated. The comparison with experimental data shows clear improvement over the Exciton Model and HMS calculations. Keywords: preequilibrium models, level density, level density a-parameter, EMPIRE code, TALYS code
Procedia PDF Downloads 135
21001 Deep Reinforcement Learning Model Using Parameterised Quantum Circuits
Authors: Lokes Parvatha Kumaran S., Sakthi Jay Mahenthar C., Sathyaprakash P., Jayakumar V., Shobanadevi A.
Abstract:
With the evolution of technology, the need to solve complex computational problems such as machine learning and deep learning has shot up, yet even the most powerful classical supercomputers find it difficult to execute these tasks. With the recent development of quantum computing, researchers and tech giants are striving for new quantum circuits for machine learning tasks, as present work on Quantum Machine Learning (QML) promises lower memory consumption and fewer model parameters. But it is strenuous to simulate classical deep learning models on existing quantum computing platforms due to the inflexibility of deep quantum circuits. As a consequence, it is essential to design viable QML algorithms for noisy intermediate-scale quantum (NISQ) devices. The proposed work explores Variational Quantum Circuits (VQC) for deep reinforcement learning by remodeling the experience replay and target network into a representation of VQC. In addition, to reduce the number of model parameters, quantum information encoding schemes are used to achieve better results than classical neural networks. VQCs are employed to approximate the deep Q-value function for decision-making and policy-selection reinforcement learning with experience replay and a target network. Keywords: quantum computing, quantum machine learning, variational quantum circuit, deep reinforcement learning, quantum information encoding scheme
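The role of a VQC as a Q-value approximator can be illustrated with a toy, classically simulated single-qubit circuit. This is a pure-Python sketch, not the authors' implementation; the function names and circuit layout are illustrative assumptions.

```python
import math

def ry(theta):
    """Single-qubit RY rotation matrix (real entries)."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -s], [s, c]]

def apply(gate, state):
    """Apply a 2x2 gate to a 2-amplitude state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

def expectation_z(state):
    """<Z> = |amp0|^2 - |amp1|^2."""
    return abs(state[0]) ** 2 - abs(state[1]) ** 2

def vqc_q_value(x, params):
    """Encode input x as a rotation, apply trainable rotations,
    and read out a scalar 'Q-value' as the Z expectation."""
    state = [1.0, 0.0]                 # |0>
    state = apply(ry(x), state)        # data-encoding layer
    for theta in params:               # variational layers
        state = apply(ry(theta), state)
    return expectation_z(state)

q = vqc_q_value(0.0, [0.0])            # untouched |0> gives <Z> = 1
```

In a real NISQ workflow the `params` would be optimised against the temporal-difference loss instead of being fixed.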
Procedia PDF Downloads 137
21000 Preparation Static Dissipative Nanocomposites of Alkaline Earth Metal Doped Aluminium Oxide and Methyl Vinyl Silicone Polymer
Authors: Aparna M. Joshi
Abstract:
Methyl vinyl silicone polymer (VMQ) - alkaline earth metal doped aluminium oxide composites are prepared by the conventional two-roll open mill mixing method. Doped aluminium oxides (DAO), using silvery-white alkaline earth metals such as Mg and Ca as dopants at a concentration of 0.4%, are synthesized by the microwave combustion method and referred to as MA (Mg-doped aluminium oxide) and CA (Ca-doped aluminium oxide). The as-synthesized materials are characterized by electrical resistance, X-ray diffraction, FE-SEM, TEM and FTIR. The electrical resistances of the DAOs are observed to be ~8-20 MΩ. This means that the resistance of aluminium oxide (corundum, α-Al2O3), which is ~10¹⁰ Ω, is reduced by a factor of ~10³ to 10⁴ after doping. XRD studies reveal the doping of Mg and Ca in aluminium oxide. The microstructural study using FE-SEM shows flaky clustered structures with flake thicknesses between 10 and 20 nm. TEM images depict a rod-shaped morphology of the particles with diameters of ~50-70 nm. The nanocomposites are synthesized by incorporating the DAOs at a concentration of 75 phr (parts per hundred parts of rubber) into the VMQ polymer. The electrical resistance of the VMQ polymer, which is ~10¹⁵ Ω, drops by a factor of ~10⁸. The nanocomposites retain an electrical resistance of ~30-50 MΩ, which lies in the static dissipative range. In this work, white, electrically conductive VMQ polymer-DAO nanocomposites (MAVMQ for Mg doping and CAVMQ for Ca doping) have been synthesized. The physical and mechanical properties of the composites, such as specific gravity, hardness, tensile strength and rebound resilience, are measured. Hardness and tensile strength are found to increase, with negligible alteration of the other properties. Keywords: doped aluminium oxide, methyl vinyl silicone polymer, microwave synthesis, static dissipation
Procedia PDF Downloads 558
20999 Deep Learning Approaches for Accurate Detection of Epileptic Seizures from Electroencephalogram Data
Authors: Ramzi Rihane, Yassine Benayed
Abstract:
Epilepsy is a chronic neurological disorder characterized by recurrent, unprovoked seizures resulting from abnormal electrical activity in the brain. Timely and accurate detection of these seizures is essential for improving patient care. In this study, we leverage the University of Bonn open-source EEG dataset and employ advanced deep-learning techniques to automate the detection of epileptic seizures. By extracting key features from both the time and frequency domains, as well as spectrogram features, we enhance the performance of various deep learning models. Our investigation includes architectures such as Long Short-Term Memory (LSTM), Bidirectional LSTM (Bi-LSTM), 1D Convolutional Neural Networks (1D-CNN), and hybrid CNN-LSTM and CNN-BiLSTM models. The models achieved impressive accuracies: LSTM (98.52%), Bi-LSTM (98.61%), CNN-LSTM (98.91%), CNN-BiLSTM (98.83%), and CNN (98.73%). Additionally, we utilized the SMOTE oversampling technique for data augmentation, which yielded the following results: CNN (97.36%), LSTM (97.01%), Bi-LSTM (97.23%), CNN-LSTM (97.45%), and CNN-BiLSTM (97.34%). These findings demonstrate the effectiveness of deep learning in capturing complex patterns in EEG signals, providing a reliable and scalable solution for real-time seizure detection in clinical environments. Keywords: electroencephalogram, epileptic seizure, deep learning, LSTM, CNN, Bi-LSTM, seizure detection
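SMOTE, used above for augmentation, synthesizes new minority-class samples by interpolating between a sample and one of its nearest neighbours. A minimal pure-Python sketch of that core idea follows; it is illustrative only, not the study's pipeline, and the parameter names are assumptions.

```python
import random

def smote_sample(minority, k=2, n_new=4, seed=0):
    """Generate synthetic minority-class samples by interpolating
    between a random sample and one of its k nearest neighbours."""
    rng = random.Random(seed)

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: dist(base, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()             # random point on the segment base->nb
        synthetic.append(tuple(b + gap * (n - b) for b, n in zip(base, nb)))
    return synthetic

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # toy minority class
new = smote_sample(pts)
```

Each synthetic point lies on a segment between two real minority samples, so the augmented class stays inside the original convex hull.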
Procedia PDF Downloads 18
20998 Classification of Political Affiliations by Reduced Number of Features
Authors: Vesile Evrim, Aliyu Awwal
Abstract:
With the evolution of technology, the expression of opinions has shifted to the digital world. Politics, one of the hottest topics of opinion-mining research, is here merged with behavior analysis for determining political affiliation in text, which constitutes the subject of this paper. This study aims to classify news/blog text as either Republican or Democrat with the minimum number of features. As an initial set, 68 features, 64 of which are Linguistic Inquiry and Word Count (LIWC) features, are tested against 14 benchmark classification algorithms. In later experiments, the dimensionality of the feature vector is reduced using 7 feature selection algorithms. The results show that the Decision Tree, Rule Induction and M5 Rule classifiers, when used with the SVM and IGR feature selection algorithms, performed best, with up to 82.5% accuracy on the given dataset. Further tests on a single feature and on linguistic-based feature sets showed similar results. The feature "function", an aggregate feature of the linguistic category, is the most discriminative of the 68 features, by itself classifying articles as Republican or Democrat with 81% accuracy. Keywords: feature selection, LIWC, machine learning, politics
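IGR-style feature ranking, as used above, scores each feature by how much a split on it reduces label entropy. A minimal sketch of information gain for a single numeric feature, on toy data rather than the study's corpus:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    if n == 0:
        return 0.0
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels, threshold):
    """Information gain of a binary split on a numeric feature at `threshold`."""
    left = [l for v, l in zip(feature_values, labels) if v <= threshold]
    right = [l for v, l in zip(feature_values, labels) if v > threshold]
    n = len(labels)
    remainder = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - remainder

# Toy data: a hypothetical per-document feature score vs. party label
values = [0.1, 0.2, 0.8, 0.9]
labels = ["D", "D", "R", "R"]
ig = information_gain(values, labels, 0.5)   # perfect split -> gain of 1 bit
```

Gain ratio (IGR) would further divide this gain by the split's intrinsic information to penalise many-valued features.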
Procedia PDF Downloads 383
20997 An IoT-Enabled Crop Recommendation System Utilizing Message Queuing Telemetry Transport (MQTT) for Efficient Data Transmission to AI/ML Models
Authors: Prashansa Singh, Rohit Bajaj, Manjot Kaur
Abstract:
In the modern agricultural landscape, precision farming has emerged as a pivotal strategy for enhancing crop yield and optimizing resource utilization. This paper introduces an innovative Crop Recommendation System (CRS) that leverages Internet of Things (IoT) technology and the Message Queuing Telemetry Transport (MQTT) protocol to collect critical environmental and soil data via sensors deployed across agricultural fields. The system is designed to address the challenges of real-time data acquisition, efficient data transmission, and dynamic crop recommendation through the application of advanced Artificial Intelligence (AI) and Machine Learning (ML) models. The CRS architecture encompasses a network of sensors that continuously monitor environmental parameters such as temperature, humidity, soil moisture, and nutrient levels. This sensor data is transmitted to a central MQTT broker, ensuring reliable and low-latency communication even in the bandwidth-constrained scenarios typical of rural agricultural settings. Upon reaching the server, the data is processed and analyzed by AI/ML models trained to correlate specific environmental conditions with optimal crop choices and cultivation practices. These models consider historical crop performance data, current agricultural research, and real-time field conditions to generate tailored crop recommendations. The implemented models achieve 99% accuracy. Keywords: IoT, MQTT protocol, machine learning, sensor, publish/subscribe, agriculture, humidity
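As a sketch of the telemetry path described above: a field node publishes a JSON payload to an MQTT topic, and the server side subscribes to it. The snippet below builds only the topic and payload with the standard library; the topic layout and field names are illustrative assumptions, not the paper's schema. A real node would then hand these to an MQTT client, e.g. paho-mqtt's `client.publish(topic, payload)`.

```python
import json
import time

def make_payload(field_id, temperature_c, humidity_pct, soil_moisture_pct):
    """Build the topic and JSON payload a field node might publish.
    Topic layout and field names are illustrative assumptions."""
    topic = f"farm/{field_id}/telemetry"
    payload = json.dumps({
        "ts": int(time.time()),               # Unix timestamp of the reading
        "temperature_c": temperature_c,
        "humidity_pct": humidity_pct,
        "soil_moisture_pct": soil_moisture_pct,
    })
    return topic, payload

topic, payload = make_payload("field-07", 24.5, 61.0, 33.2)
```

Keeping one topic per field lets the AI/ML backend subscribe with a wildcard such as `farm/+/telemetry`.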
Procedia PDF Downloads 71
20996 Evaluating Performance of Value at Risk Models for the MENA Islamic Stock Market Portfolios
Authors: Abderrazek Ben Maatoug, Ibrahim Fatnassi, Wassim Ben Ayed
Abstract:
In this paper, we investigate the issue of market risk quantification for Middle East and North Africa (MENA) Islamic equity markets. We use Value-at-Risk (VaR) as a measure of potential risk in Islamic stock markets, for long and short positions, based on the RiskMetrics model and conditional parametric ARCH-class volatility models with normal, Student and skewed Student distributions. The sample consists of daily data for 2006-2014 for 11 Islamic stock market indices. We conduct the Kupiec and the Engle and Manganelli tests to evaluate the performance of each model. The main findings of our empirical results show that (i) VaR models based on the Student and skewed Student distributions perform best at the significance level of α=1%, for all Islamic stock market indices and for both long and short trading positions, and (ii) the RiskMetrics model and the VaR model based on conditional volatility with a normal distribution provide the most accurate VaR estimations for both long and short trading positions at a significance level of α=5%. Keywords: value-at-risk, risk management, Islamic finance, GARCH models
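For reference, the parametric (normal) VaR underlying the RiskMetrics approach is a one-line quantile calculation; the Student and skewed-Student variants replace the normal quantile with the corresponding distribution's quantile. The figures below are illustrative, not taken from the study's sample.

```python
from statistics import NormalDist

def parametric_var(mu, sigma, alpha=0.01):
    """One-day parametric (normal) VaR for a long position: the loss
    threshold exceeded with probability alpha, reported as a positive number."""
    z = NormalDist().inv_cdf(alpha)     # e.g. about -2.326 for alpha = 1%
    return -(mu + sigma * z)

# Illustrative daily return mean and volatility
var_99 = parametric_var(mu=0.0005, sigma=0.012, alpha=0.01)
```

A Kupiec backtest then compares the observed frequency of losses exceeding `var_99` with the nominal `alpha`.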
Procedia PDF Downloads 592
20995 Deep Brain Stimulation and Motor Cortex Stimulation for Post-Stroke Pain: A Systematic Review and Meta-Analysis
Authors: Siddarth Kannan
Abstract:
Objectives: Deep Brain Stimulation (DBS) and Motor Cortex Stimulation (MCS) are innovative interventions to treat various neuropathic pain disorders such as post-stroke pain. While each treatment has a varying degree of success in managing pain, a comparative analysis has not yet been performed, and the success rates of these techniques using validated, objective pain scores have not been synthesised. The aim of this study was to compare the pain relief offered by MCS and DBS in patients with post-stroke pain and to assess whether either procedure offered better results. Methods: A systematic review and meta-analysis were conducted in accordance with PRISMA guidelines (PROSPERO ID CRD42021277542). Three databases were searched, and articles published from 2000 to June 2023 were included (last search date 25 June 2023). Meta-analysis was performed using random effects models. We evaluated the performance of DBS or MCS by assessing studies that reported pain relief using the Visual Analogue Scale (VAS). Data analysis of descriptive statistics was performed using SPSS (Version 27; IBM; Armonk, NY, USA). R (RStudio, Version 4.0.1) was used to perform the meta-analysis. Results: Of the 478 articles identified, 27 were included in the analysis (232 patients: 117 DBS and 115 MCS). The pooled proportion of patients who improved after DBS was 0.68 (95% CI, 0.57-0.77, I2=36%). The pooled proportion of patients who improved after MCS was 0.72 (95% CI, 0.62-0.80, I2=59%). A further sensitivity analysis included only studies with a minimum of 5 patients in order to assess any impact on the overall results. Nine studies each for DBS and MCS met these criteria; there was no significant difference in results. Conclusions: The use of surgical interventions such as DBS and MCS is an emerging field for the treatment of post-stroke pain, with limited studies exploring and comparing these two techniques.
While our study shows that MCS might be a slightly better treatment option, further research is needed to determine the appropriate surgical intervention for post-stroke pain. Keywords: post-stroke pain, deep brain stimulation, motor cortex stimulation, pain relief
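The pooled proportions above come from a random-effects meta-analysis. A minimal sketch of the underlying idea, inverse-variance pooling of logit-transformed proportions, is shown below in its simpler fixed-effect form, with illustrative numbers rather than the study's data:

```python
import math

def pool_proportions(events, totals):
    """Inverse-variance pooling of proportions on the logit scale.
    Fixed-effect sketch; the study itself used random-effects models,
    which add a between-study variance term to each weight."""
    logits, weights = [], []
    for e, n in zip(events, totals):
        p = (e + 0.5) / (n + 1)                    # continuity correction
        logit = math.log(p / (1 - p))
        var = 1 / (e + 0.5) + 1 / (n - e + 0.5)    # approximate logit variance
        logits.append(logit)
        weights.append(1 / var)
    pooled_logit = sum(l * w for l, w in zip(logits, weights)) / sum(weights)
    return 1 / (1 + math.exp(-pooled_logit))       # back-transform to a proportion

# Illustrative data: improved patients / total patients in three small studies
pooled = pool_proportions(events=[7, 10, 5], totals=[10, 14, 8])
```

The back-transformed pooled value is a proportion between the smallest and largest study estimates, weighted toward the more precise studies.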
Procedia PDF Downloads 141
20994 Significant Stressed Zone of Highway Embankment
Authors: Sharifullah Ahmed, P. Eng
Abstract:
The axle pressure and the consolidation pressure decrease with the height of the highway embankment and the depth of subsoil. This reduction of pressure depends on the height and width of the embankment. The significant stressed zone is defined as the depth at which the pressure is reduced to 0.2 of its initial value, i.e., 20%. The axle pressure is reduced to 7% for embankment heights of 1-3 m and to 0.7% for embankment heights of 4-12 m at the bottom level of the highway embankment. This observation implies that the portion of axle pressure transferred to the subsoil underlying the embankment is not significant for an ESAL factor of 4.8. Seventy percent of consolidation is assumed to have occurred after construction of the pavement surface layer; considering this ratio of post-construction settlement, the 70% consolidation pressure (Δσ70) is used in this analysis. The magnitude of the influence depth, or Significant Stressed Zone (Ds), has been obtained for crest widths (at the top level of the embankment) between 5 m and 50 m and embankment heights from 1.0 m to 12.0 m, considering 70% of the consolidation pressure (Δσ70). Significantly stressed zones (Ds) for 70% embankment pressure are found to be 2-6.2He (He being the embankment height) for embankment top widths of 5-50 m. Keywords: consolidation pressure, consolidation settlement, ESAL, highway embankment, HS 20-44, significant stressed zone, stress distribution
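The depth-dependent pressure reduction discussed above can be illustrated with the simplified 2:1 load-spread approximation for a strip load. This is not the method used in the paper, which analyses embankment loading in more detail; it only shows how a "20% stress" depth emerges from a width-dependent decay.

```python
def stress_ratio_2to1(width, depth):
    """Approximate ratio of vertical stress at depth z to surface pressure
    under a strip load of width B, using the 2:1 load-spread method:
    stress spreads over an effective width (B + z)."""
    return width / (width + depth)

def significant_depth(width, limit=0.2, step=0.01):
    """Depth at which the stress ratio first drops to `limit` (e.g. 20%)."""
    z = 0.0
    while stress_ratio_2to1(width, z) > limit:
        z += step
    return z

ds = significant_depth(width=5.0)   # for B = 5 m: 5/(5+z) = 0.2 -> z = 20 m
```

Wider loaded areas push the 20% threshold deeper, mirroring the paper's finding that Ds grows with crest width.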
Procedia PDF Downloads 94
20993 Biogas from Cover Crops and Field Residues: Effects on Soil, Water, Climate and Ecological Footprint
Authors: Manfred Szerencsits, Christine Weinberger, Maximilian Kuderna, Franz Feichtinger, Eva Erhart, Stephan Maier
Abstract:
Cover or catch crops have beneficial effects on soil, water, erosion, etc. If harvested, they also provide feedstock for biogas without competition for arable land in regions where only one main crop can be produced per year. On average, gross energy yields of approx. 1300 m³ methane (CH4) ha⁻¹ can be expected from 4.5 tonnes (t) of cover crop dry matter (DM) in Austria. Considering the total energy invested from cultivation to compression for biofuel use, a net energy yield of about 1000 m³ CH4 ha⁻¹ remains. With the straw of grain maize or Corn Cob Mix (CCM), similar energy yields can be achieved. In comparison to catch crops remaining on the field as green manure, or to complete fallow between main crops, the effects on soil, water and climate can be improved if cover crops are harvested without soil compaction and digestate is returned to the field in an amount equivalent to cover crop removal. In this way, the risk of nitrate leaching can be reduced by approx. 25% in comparison to full fallow. The risk of nitrous oxide emissions may be reduced by up to 50% compared with cover crops serving as green manure. The effects on humus content and erosion are similar to or better than those of cover crops used as green manure when the same amount of biomass is produced. With higher biomass production, the positive effects increase even if cover crops are harvested and only the digestate is brought back to the fields. The ecological footprint of arable farming can be reduced by approx. 50% considering the substitution of natural gas with CH4 produced from cover crops. Keywords: biogas, cover crops, catch crops, land use competition, sustainable agriculture
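A back-of-the-envelope check of the energy figures quoted above. The methane lower heating value of ~9.97 kWh m⁻³ is a standard approximate figure and is an assumption here, not from the text; the 1300 m³ gross / 1000 m³ net split is from the text.

```python
# Approximate lower heating value of methane, kWh per cubic metre (assumption)
CH4_LHV_KWH_PER_M3 = 9.97

def net_energy_kwh(gross_m3_ch4, input_share=300 / 1300):
    """Net per-hectare energy after subtracting the energy invested from
    cultivation to compression (text: ~1300 m3 gross -> ~1000 m3 net)."""
    net_m3 = gross_m3_ch4 * (1 - input_share)
    return net_m3 * CH4_LHV_KWH_PER_M3

energy = net_energy_kwh(1300)   # ~1000 m3 net, roughly 10 MWh per hectare
```

So one hectare of harvested cover crop corresponds to roughly 10 MWh of net methane energy per year under these figures.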
Procedia PDF Downloads 543
20992 An Artificial Intelligence Framework to Forecast Air Quality
Authors: Richard Ren
Abstract:
Air pollution is a serious danger to international well-being and economies - it kills an estimated 7 million people every year and will cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution's detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (e.g., season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework's predictions and real-life observations, with an overall 92% model accuracy. The combined model predicts more accurately than any of the individual models, and it reliably forecasts season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy.
This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures. Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms
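The model-averaging step described in this abstract reduces, in the simplest case, to a mean over the top-performing models' outputs. A minimal sketch with stand-in models follows; the lambdas are placeholders, not trained models.

```python
def ensemble_predict(models, x):
    """Average the outputs of the top-performing models: a simple mean
    smooths out the biases and weaknesses of any single model."""
    preds = [m(x) for m in models]
    return sum(preds) / len(preds)

# Stand-ins for trained logistic-regression / random-forest / neural-net models
logreg = lambda x: 0.70
forest = lambda x: 0.80
neural = lambda x: 0.90

# Probability-style prediction of, e.g., a poor-air-quality day
aqi_prob = ensemble_predict([logreg, forest, neural], x=None)
```

In the full framework, the set of three models fed to `ensemble_predict` would itself be re-selected as new data arrives.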
Procedia PDF Downloads 131
20991 The Relationship of Lean Management Principles with Lean Maturity Levels: Multiple Case Study in Manufacturing Companies
Authors: Alexandre D. Ferraz, Dario H. Alliprandini, Mauro Sampaio
Abstract:
Companies and other institutions constantly seek better organizational performance and greater competitiveness. To fulfill this purpose, there are many tools, methodologies and models for increasing performance. However, the Lean Management approach seems to be the most effective in achieving a significant improvement in productivity relatively quickly. Although Lean tools are relatively easy to understand and implement in different contexts, many organizations are not able to transform themselves into 'Lean companies'. Most implementation efforts have shown only isolated benefits, failing to achieve the desired impact on the performance of the overall enterprise system. There is also a growing perception of the importance of management in Lean transformation, but few studies have empirically investigated and described 'Lean Management'. In order to understand more clearly the ideas that guide Lean Management and its influence on the maturity level of the production system, the objective of this research is to analyze the relationship between Lean Management principles and the Lean maturity level in organizations. The research also analyzes the principles of Lean Management and their relationship with 'Lean culture' and the results obtained. The research was developed using the case study methodology. Three manufacturing units of a German multinational company in the industrial automation segment, located in different countries, were studied in order to better compare practices and levels of maturity in the implementation. The primary source of information was a research questionnaire based on the theoretical review. The research showed that the higher the level of Lean Management principles, the higher the Lean maturity level, the Lean culture level, and the level of Lean results obtained in the organization.
The research also showed that factors such as the time since application of Lean concepts and company size were not determinant for the level of Lean Management principles and, consequently, for the level of Lean maturity in the organization. The characteristics of the production system showed much more influence on the different evaluated aspects. The present research also offers recommendations for the managers of the plants analyzed and suggestions for future research. Keywords: lean management, lean principles, lean maturity level, lean manufacturing
Procedia PDF Downloads 147
20990 Home Made Rice Beer Waste (Choak): A Low Cost Feed for Sustainable Poultry Production
Authors: Vinay Singh, Chandra Deo, Asit Chakrabarti, Lopamudra Sahoo, Mahak Singh, Rakesh Kumar, Dinesh Kumar, H. Bharati, Biswajit Das, V. K. Mishra
Abstract:
The most widely used feed resources in poultry feed, like maize and soybean, are expensive as well as in short supply. Hence, there is a need to utilize non-conventional feed ingredients to cut down feed costs. As an alternative, brewery by-products like brewers' dried grains are potential non-conventional feed resources. North-East India is inhabited by many tribes, and most of these tribes prepare their indigenous local brew, mostly using rice grains as the primary substrate. Choak, a homemade rice beer waste, is an excellent and cheap source of protein and other nutrients. Fresh homemade rice beer waste (rice brewers' grain) was collected locally. Proximate analysis indicated 28.53% crude protein, 92.76% dry matter, 5.02% ether extract, 7.83% crude fibre, 2.85% total ash, 0.67% acid-insoluble ash, 0.91% calcium, and 0.55% total phosphorus. A feeding trial with 5 treatments (incorporating rice beer waste at inclusion levels of 0, 10, 20, 30 and 40%, replacing maize and soybean in the basal diet) was conducted with 25 laying hens per treatment for 16 weeks under a completely randomized design in order to study the production performance, blood biochemical parameters, immunity, egg quality and cost economics of laying hens. The results showed significant differences (P<0.01) in egg production, egg mass, FCR per dozen eggs, FCR per kg egg mass, and net FCR. However, there was no significant difference in body weight, feed intake, or egg weight. Total serum cholesterol was reduced significantly (P<0.01) at 40% inclusion of rice beer waste. Additionally, the egg Haugh unit increased significantly (P<0.01) as the graded levels of rice beer waste increased. The inclusion of 20% rice brewers' dried grain reduced feed cost per kg egg mass and per dozen eggs by Rs. 15.97 and Rs. 9.99, respectively.
Choak (homemade rice beer waste) can thus be safely incorporated into the diet of laying hens at a 20% inclusion level for better production performance and cost-effectiveness. Keywords: choak, rice beer waste, laying hen, production performance, cost economics
Procedia PDF Downloads 62
20989 Analysis of Hard Turning Process of AISI D3-Thermal Aspects
Authors: B. Varaprasad, C. Srinivasa Rao
Abstract:
In the manufacturing sector, hard turning has emerged as a vital machining process for cutting hardened steels. Among its many advantages, hard turning can achieve close tolerances in terms of surface finish, high product quality, reduced machining time, low operating cost and environmentally friendly characteristics. In the present study, a three-dimensional CAE (Computer Aided Engineering) based simulation of hard turning using the commercial software DEFORM 3D is compared to experimental results for stresses, temperatures and tool forces in machining of AISI D3 steel using mixed ceramic inserts (CC6050). In the present analysis, orthogonal cutting models are proposed, considering several processing parameters such as cutting speed, feed, and depth of cut. Exhaustive friction modeling at the tool-work interfaces is carried out. Work material flow around the cutting edge is carefully modeled with adaptive re-meshing simulation capability. In the process simulations, feed rate and cutting speed are held constant (0.075 mm/rev and 155 m/min, respectively), and the analysis is focused on stresses, forces, and temperatures during machining. Close agreement is observed between the CAE simulation and experimental values. Keywords: hard turning, computer aided engineering, computational machining, finite element method
Procedia PDF Downloads 458
20988 Influence of Gum Acacia Karroo on Some Mechanical Properties of Cement Mortars and Concrete
Authors: Mbugua R. N., Salim R. W., Ndambuki J. M.
Abstract:
Natural admixtures provide concrete with enhanced properties, but their processing ends up making them very expensive, increasing the cost of concrete. In this study, the effect of gum from Acacia karroo (GAK) as a set-retarding admixture in cement pastes was studied. The possibility of using GAK as a water-reducing admixture in both cement mortar and concrete was also investigated. Cement pastes with different dosages of GAK were prepared to measure the setting time at each dosage. The compressive strengths of cement mortars with 0.7, 0.8 and 0.9% GAK by weight of cement and a w/c ratio of 0.5 were compared to those with a water-cement (w/c) ratio of 0.44 but the same dosage of GAK. Concrete samples prepared using higher dosages of GAK (1, 2 and 3% by weight of cement) and a water-binder (w/b) ratio of 0.61 were compared to those with the same GAK dosage but a reduced w/b ratio. There was an increase in compressive strength of 9.3% at 28 days for cement mortar samples with a 0.9% dosage of GAK and the reduced w/c ratio. Keywords: compressive strength, Gum Acacia Karroo, retarding admixture, setting time, water-reducing admixture
Procedia PDF Downloads 314
20987 Performance of Reinforced Concrete Beams under Different Fire Durations
Authors: Arifuzzaman Nayeem, Tafannum Torsha, Tanvir Manzur, Shaurav Alam
Abstract:
Performance evaluation of reinforced concrete (RC) beams subjected to accidental fire is significant for post-fire capacity measurement. The mechanical properties of any RC beam degrade under heating, since the strength and modulus of concrete and reinforcement suffer considerable reduction at elevated temperatures. Moreover, fire-induced thermal dilation and shrinkage cause internal stresses within the concrete and eventually result in cracking, spalling, and loss of stiffness, which ultimately leads to lower service life. However, conducting a full-scale comprehensive experimental investigation of RC beams exposed to fire is difficult and cost-intensive, whereas finite element (FE) based numerical study can provide an economical alternative for evaluating the post-fire capacity of RC beams. In this study, an attempt has been made to study the fire behavior of RC beams under different durations of fire using the FE software package ABAQUS. The concrete damaged plasticity model in ABAQUS was used to simulate the behavior of RC beams, and the effect of temperature on the strength and modulus of concrete and steel was simulated following the relevant Eurocodes. Initially, the FE models were validated against several experimental results from available scholarly articles. The response of the developed FE models matched quite well with the experimental outcome for beams not exposed to heat. The FE analysis of beams subjected to fire showed some deviation from the experimental results, particularly in terms of stiffness degradation; however, the ultimate strength and deflection of the FE models were similar to the experimental values. The developed FE models thus exhibited good potential to predict the fire behavior of RC beams. Once validated, the FE models were used to analyze several RC beams of different strengths (ranging between 20 MPa and 50 MPa) exposed to the standard fire curve (ASTM E119) for different durations.
The post-fire performance of RC beams was investigated in terms of load-deflection behavior, flexural strength, and deflection characteristics. Keywords: fire durations, flexural strength, post-fire capacity, reinforced concrete beam, standard fire
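The temperature-dependent strength reduction mentioned above is typically tabulated in the Eurocodes as a factor k_c(θ) multiplying the ambient compressive strength. A small sketch with indicative reduction factors for siliceous-aggregate concrete in the style of the EN 1992-1-2 tables (illustrative values only; consult the Eurocode itself for design work):

```python
from bisect import bisect_right

# Indicative strength-reduction factors k_c(theta) for siliceous-aggregate
# concrete, in the style of EN 1992-1-2 (illustrative, not for design use).
THETA_C = [20, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200]
K_C = [1.00, 1.00, 0.95, 0.85, 0.75, 0.60, 0.45, 0.30, 0.15, 0.08, 0.04, 0.01, 0.00]

def kc_at(theta):
    """Linearly interpolated reduction factor at temperature theta [deg C]."""
    if theta <= THETA_C[0]:
        return K_C[0]
    if theta >= THETA_C[-1]:
        return K_C[-1]
    i = bisect_right(THETA_C, theta) - 1
    t0, t1 = THETA_C[i], THETA_C[i + 1]
    return K_C[i] + (K_C[i + 1] - K_C[i]) * (theta - t0) / (t1 - t0)

def hot_strength(fck_mpa, theta):
    """Compressive strength retained at elevated temperature."""
    return fck_mpa * kc_at(theta)
```

For example, a 30 MPa concrete heated to 600°C retains only about 45% of its ambient strength under these factors.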
Procedia PDF Downloads 144
20986 Variation in Complement Order in English: Implications for Interlanguage Syntax
Authors: Juliet Udoudom
Abstract:
Complement ordering principles for natural language phrases (XPs) stipulate that head terms be consistently placed phrase-initially or phrase-finally, yielding two basic theoretical orders: Head-Complement order or Complement-Head order. This paper examines the principles which determine complement ordering in English V-bar and N-bar structures. The aim is to determine the extent to which complement linearisations in the two phrase types are consistent with the two theoretical orders outlined above, given the flexible and varied nature of natural language structures. The objective is to see whether there are variations in the complement linearisations of the XPs studied and the implications such variations hold for the interlanguage syntax of English and Ibibio. A corpus-based approach was employed in obtaining the English data, and V-bar and N-bar structures containing complements were isolated for analysis. Data were examined from the perspective of the X-bar and Government theories of Chomsky's (1981) Government-Binding framework. Findings from the analysis show that in V-bar structures in English, heads are consistently placed phrase-initially, yielding a Head-Complement order; however, complement linearisation in the N-bar structures studied exhibited parametric variations. Thus, in some N-bar structures in English the nominal head is ordered to the left, whereas in others the head term occurs to the right. It may therefore be concluded that the principles which determine complement ordering are both language-particular and phrase-specific, following insights provided within phrasal syntax. Keywords: complement order, complement-head order, head-complement order, language-particular principles
Procedia PDF Downloads 350
20985 Using Simulation Modeling Approach to Predict USMLE Steps 1 and 2 Performances
Authors: Chau-Kuang Chen, John Hughes, Jr., A. Dexter Samuels
Abstract:
Prediction models for United States Medical Licensing Examination (USMLE) Steps 1 and 2 performances were constructed using a Monte Carlo simulation modeling approach via linear regression. The purpose of this study was to build robust simulation models that accurately identify the most important predictors and yield valid range estimates of Step 1 and Step 2 scores. The simulation modeling approach was deemed an effective way to predict student performances on licensure examinations. Sensitivity (what-if) analysis in the simulation models was used to predict the magnitude of change in Step 1 and Step 2 scores resulting from changes in the National Board of Medical Examiners (NBME) Basic Science Subject Board scores. In addition, the study results indicated that the Medical College Admission Test (MCAT) Verbal Reasoning score and the Step 1 score were significant predictors of Step 2 performance. Hence, institutions could screen qualified applicants for interviews and document the effectiveness of their basic science education programs based on the simulation results. Keywords: prediction model, sensitivity analysis, simulation method, USMLE
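The core idea, propagating uncertain predictor inputs through a fitted regression to obtain range estimates and what-if sensitivities, can be sketched as follows (the coefficients and input distributions below are hypothetical illustrations, not the study's fitted values):

```python
import random
import statistics

random.seed(1)

# Hypothetical fitted regression for a Step 1 score (illustrative
# coefficients, not the study's): score = b0 + b1*MCAT_VR + b2*NBME.
B0, B1, B2 = 120.0, 4.0, 6.0

def simulate(n=10000, mcat_mu=9.5, mcat_sd=1.2, nbme_mu=11.0, nbme_sd=1.5):
    """Propagate predictor uncertainty through the regression and return
    a (2.5th percentile, mean, 97.5th percentile) range estimate."""
    scores = sorted(
        B0 + B1 * random.gauss(mcat_mu, mcat_sd) + B2 * random.gauss(nbme_mu, nbme_sd)
        for _ in range(n)
    )
    return scores[int(0.025 * n)], statistics.fmean(scores), scores[int(0.975 * n)]

lo, mean, hi = simulate()
# What-if (sensitivity) analysis: raise the NBME subject mean by one point
# and observe the shift in the predicted score distribution.
lo2, mean2, hi2 = simulate(nbme_mu=12.0)
```

With these toy coefficients, a one-point shift in the NBME input moves the simulated mean score by roughly the regression coefficient B2.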
Procedia PDF Downloads 341
20984 Thermal Reduction of Perfect Well Identified Hexagonal Graphene Oxide Nano-Sheets for Super-Capacitor Applications
Authors: A. N. Fouda
Abstract:
Novel, well-identified hexagonal graphene oxide (GO) nano-sheets were synthesized using a modified Hummers method. Low-temperature thermal reduction at 350°C in air ambient was performed. After thermal reduction, typical few-layer thermally reduced GO (TRGO) sheets with dimensions of a few hundred nanometers were observed using high resolution transmission electron microscopy (HRTEM). GO has many proposed structural models owing to variations in the preparation process, and determining the atomic structure of GO is essential for a better understanding of its fundamental properties and for realization of future technological applications. Structural characterization was carried out by X-ray diffraction (XRD) and Fourier transform infrared spectroscopy (FTIR) measurements. A comparison between the experimental and theoretical IR spectra was made to confirm the match between the experimentally observed and theoretically proposed GO structures; partial overlap of the experimental IR spectrum with the theoretical one was confirmed. The electrochemical properties of TRGO nano-sheets as electrode materials for supercapacitors were investigated by cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS) measurements. An enhancement in supercapacitance after reduction was confirmed: the area of the CV curve for the TRGO electrode is larger than that for the GO electrode, indicating a higher specific capacitance, which is promising for super-capacitor applications. Keywords: hexagonal graphene oxide, thermal reduction, cyclic voltammetry
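The claim that a larger CV loop area indicates a higher specific capacitance follows from the standard relation Cs = (∮ I dV) / (2 ν m ΔV). A minimal sketch of that calculation (the scan rate, electrode mass, and current data used below are illustrative, not the paper's measurements):

```python
def specific_capacitance(voltage, current, scan_rate, mass):
    """Specific capacitance [F/g] from one full CV cycle:
    Cs = (integral of |I| dV over both sweeps) / (2 * nu * m * dV),
    with nu the scan rate [V/s] and m the electrode mass [g].
    The integral (loop area) is evaluated by the trapezoidal rule."""
    area = 0.0
    for k in range(1, len(voltage)):
        area += 0.5 * (abs(current[k]) + abs(current[k - 1])) * abs(voltage[k] - voltage[k - 1])
    window = max(voltage) - min(voltage)
    return area / (2.0 * scan_rate * mass * window)
```

For an ideal capacitor the current magnitude is constant over the sweep, and the formula recovers the capacitance exactly.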
Procedia PDF Downloads 495
20983 Mathematical Modeling of the Fouling Phenomenon in Ultrafiltration of Latex Effluent
Authors: Amira Abdelrasoul, Huu Doan, Ali Lohi
Abstract:
An efficient and well-planned ultrafiltration process is becoming a necessity for monetary returns in industrial settings. The aim of the present study was to develop a mathematical model for accurate prediction of ultrafiltration membrane fouling by latex effluent, applied to homogeneous and heterogeneous membranes with uniform and non-uniform pore sizes, respectively. Models were also developed for accurate prediction of power consumption suitable for large-scale purposes. The models incorporated the fouling attachments as well as the chemical and physical factors in membrane fouling for accurate prediction and scale-up application. Both polycarbonate and polysulfone flat membranes, with a pore size of 0.05 µm and a molecular weight cut-off of 60,000, respectively, were used under a constant feed flow rate in cross-flow mode in the ultrafiltration of the simulated paint effluent. Furthermore, hydrophilic Ultrafilic and hydrophobic PVDF membranes with an MWCO of 100,000 were used to test the reliability of the models. Monodisperse particles of 50 nm and 100 nm in diameter, and a latex effluent with a wide particle size distribution, were utilized to validate the models. The aggregation and the sphericity of the particles had a significant effect on membrane fouling. Keywords: membrane fouling, mathematical modeling, power consumption, attachments, ultrafiltration
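For constant-pressure operation, flux decline by cake build-up is classically described by the linear law 1/J(t)² = 1/J₀² + Kc·t, one of Hermia's blocking models. The authors' model is more elaborate, so the sketch below is only a generic illustration of fitting such a fouling law to flux-decline data:

```python
def fit_cake_filtration(t, flux):
    """Fit the classical constant-pressure cake-filtration law
    1/J(t)^2 = 1/J0^2 + Kc*t by least squares on y = 1/J^2 versus t.
    Returns (J0, Kc): the clean-membrane flux and the cake constant."""
    y = [1.0 / (j * j) for j in flux]
    n = len(t)
    tbar = sum(t) / n
    ybar = sum(y) / n
    kc = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) / \
         sum((ti - tbar) ** 2 for ti in t)
    intercept = ybar - kc * tbar
    return intercept ** -0.5, kc
```

A straight line in 1/J² versus t is the usual diagnostic that cake filtration, rather than pore blocking, dominates the fouling.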
Procedia PDF Downloads 474
20982 Lessons Learned from Interlaboratory Noise Modelling in Scope of Environmental Impact Assessments in Slovenia
Abstract:
Noise assessment methods are regularly used in the scope of Environmental Impact Assessments for planned projects to assess (predict) the expected noise emissions of these projects, and different noise assessment methods can be used. In recent years, we had the opportunity to collaborate in noise assessment procedures in which noise assessments by different laboratories were performed simultaneously, and we identified some significant differences in noise assessment results between laboratories in Slovenia. We estimate that although good georeferenced input data for setting up acoustic models exist in Slovenia, there is no clear consensus on predictive noise assessment methods for planned projects. We analyzed the input data, methods, and results of predictive noise modelling for two planned industrial projects, each performed independently by two laboratories. We also analyzed the data, methods, and results of two interlaboratory collaborative noise models for two existing noise sources (a railway and a motorway). In the cases of predictive noise modelling, the acoustic models were validated by noise measurements of surrounding existing noise sources, but over varying durations; the acoustic characteristics of existing buildings were also not described identically, and the planned noise sources were described and digitized differently. Differences in noise assessment results between laboratories ranged up to 10 dBA, which considerably exceeds the acceptable uncertainty of 3 to 6 dBA. Contrary to predictive noise modelling, in the cases of collaborative noise modelling of the two existing noise sources, the possibility of performing validation noise measurements greatly increased the comparability of the results. In both cases of collaborative noise modelling for the existing motorway and railway, the modelling results of the different laboratories were comparable.
Differences in noise modelling results between laboratories were below 5 dBA, within the acceptable uncertainty set by the organizer of the interlaboratory noise modelling. The lessons learned from the study were: 1) Predictive noise calculation using the formulae of the international standard SIST ISO 9613-2:1997 is not, on its own, an appropriate method to predict noise emissions of planned projects, since due to the complexity of the procedure the formulae are not applied strictly; 2) Noise measurements are important tools to minimize noise assessment errors for planned projects and, in the case of predictive noise modelling, should be performed at least for validation of the acoustic model; 3) National guidelines should be prepared on the appropriate data, methods, noise source digitalization, validation of the acoustic model, etc., in order to unify predictive noise models and their results in the scope of Environmental Impact Assessments for planned projects. Keywords: environmental noise assessment, predictive noise modelling, spatial planning, noise measurements, national guidelines
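For reference, the standard named in lesson 1 propagates a source sound power level to a receiver by subtracting attenuation terms, the first of which is geometric divergence, A_div = 20 lg(d/d0) + 11 dB with d0 = 1 m. A minimal sketch keeping only the divergence and atmospheric-absorption terms (the absorption coefficient is an illustrative placeholder; the ground, barrier, and directivity terms of the standard are omitted):

```python
import math

def a_div(d_m):
    """Geometric divergence of a point source per ISO 9613-2:
    A_div = 20*lg(d/d0) + 11 dB, with d0 = 1 m."""
    return 20.0 * math.log10(d_m / 1.0) + 11.0

def receiver_level(lw_db, d_m, alpha_db_per_km=3.0):
    """Sound pressure level at distance d from a point source, keeping
    only divergence and atmospheric absorption. alpha is an illustrative
    octave-band absorption coefficient, not a value fixed by the standard."""
    a_atm = alpha_db_per_km * d_m / 1000.0
    return lw_db - a_div(d_m) - a_atm
```

Even this stripped-down form shows why strict application matters: at 100 m, divergence alone removes 51 dB from the source power level.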
Procedia PDF Downloads 234
20981 Effects of Bone Marrow Derived Mesenchymal Stem Cells (MSC) in Acute Respiratory Distress Syndrome (ARDS) Lung Remodeling
Authors: Diana Islam, Juan Fang, Vito Fanelli, Bing Han, Julie Khang, Jianfeng Wu, Arthur S. Slutsky, Haibo Zhang
Abstract:
Introduction: MSC delivery in preclinical models of ARDS has demonstrated significant improvements in lung function and recovery from acute injury. However, the role of MSC delivery in ARDS-associated pulmonary fibrosis is not well understood. Some animal studies using bleomycin-, asbestos-, and silica-induced pulmonary fibrosis show that MSC delivery can suppress fibrosis, while other animal studies using radiation-induced pulmonary fibrosis and liver and kidney fibrosis models show that MSC delivery can contribute to fibrosis. Hypothesis: The beneficial and deleterious effects of MSC in ARDS are modulated by the lung microenvironment at the time of MSC delivery. Methods: To induce ARDS, a two-hit mouse model of hydrochloric acid (HCl) aspiration (day 0) and mechanical ventilation (MV) (day 2) was used; HCl and injurious MV generated fibrosis within 14-28 days. 0.5×10⁶ mouse MSCs were delivered (via both intratracheal and intravenous routes) either in the active inflammatory phase (day 2) or during the remodeling phase (day 14) of ARDS (mouse fibroblasts or PBS were used as controls). Lung injury was assessed using an inflammation score and elastance measurement. Pulmonary fibrosis was assessed using a histological score, tissue collagen level, and collagen expression. In addition, alveolar epithelial (E) and mesenchymal (M) marker expression profiles were measured. All measurements were taken at days 2, 14, and 28. Results: MSC delivery 2 days after HCl exacerbated lung injury and fibrosis compared to HCl alone, while day-14 delivery showed protective effects. However, in the absence of HCl, MSC significantly reduced the injurious MV-induced fibrosis. HCl injury suppressed E markers and up-regulated M markers. MSC delivery 2 days after HCl further amplified M marker expression, indicating a role in myofibroblast proliferation/activation, while with day-14 delivery, E marker up-regulation was observed, indicating a role in epithelial restoration.
Conclusions: Early MSC delivery can be protective against injurious MV. Late MSC delivery during the repair phase may also aid in recovery. However, early MSC delivery during the exudative inflammatory phase of HCl-induced ARDS can result in pro-fibrotic profiles. It is critical to understand the interaction between MSC and the lung microenvironment before MSC-based therapies are utilized for ARDS. Keywords: acute respiratory distress syndrome (ARDS), mesenchymal stem cells (MSC), hydrochloric acid (HCl), mechanical ventilation (MV)
Procedia PDF Downloads 672
20980 Stochastic Richelieu River Flood Modeling and Comparison of Flood Propagation Models: WMS (1D) and SRH (2D)
Authors: Maryam Safrai, Tewfik Mahdi
Abstract:
This article presents the stochastic modeling of the Richelieu River flood in Quebec, Canada, which occurred in the spring of 2011. With the aid of the one-dimensional Watershed Modeling System (WMS, v.10.1) and HEC-RAS (v.4.1) as a flood simulator, the delineation of probabilistic flooded areas was considered. Based on the Monte Carlo method, WMS delineated the probabilistic flooded areas with corresponding occurrence percentages. Furthermore, the results of this one-dimensional model were compared with those of a two-dimensional model (SRH-2D) to evaluate the efficiency and precision of each applied model. Based on this comparison, the computational process of the two-dimensional model is longer and more complicated than the brief one-dimensional one. Although two-dimensional models are more accurate than the one-dimensional method, the delineation of probabilistic flooded areas based on the Monte Carlo method is, according to existing modellers, achievable via a one-dimensional modeler. The software applied in this case study fully served the research objectives. As a result, the flood risk maps of the Richelieu River produced with the two applied models (1D, 2D) could elucidate the flood risk factors in hydrological, hydraulic, and managerial terms. Keywords: flood modeling, HEC-RAS, model comparison, Monte Carlo simulation, probabilistic flooded area, SRH-2D, WMS
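The Monte Carlo delineation described above amounts to sampling uncertain inputs, running the hydraulic model for each sample, and recording, cell by cell, the fraction of runs in which the cell floods. A toy sketch of that occurrence-percentage computation (the rating curve and cell elevations are hypothetical stand-ins for the hydraulic model, not data from the Richelieu study):

```python
import math
import random

random.seed(42)

# Hypothetical floodplain cell ground elevations in metres.
CELL_ELEV = [29.5, 30.0, 30.4, 30.9, 31.5]

def stage(q_m3s):
    """Hypothetical stage-discharge (rating) curve h = h0 + a*Q**b,
    standing in for the hydraulic simulator."""
    return 25.0 + 0.45 * q_m3s ** 0.35

def flood_probability(n=5000, q_median=1200.0, sigma=0.15):
    """Occurrence percentage of flooding per cell: the fraction of sampled
    peak discharges whose water stage exceeds the cell elevation."""
    hits = [0] * len(CELL_ELEV)
    for _ in range(n):
        h = stage(random.lognormvariate(math.log(q_median), sigma))
        for i, z in enumerate(CELL_ELEV):
            if h > z:
                hits[i] += 1
    return [c / n for c in hits]

probs = flood_probability()
```

Low-lying cells approach an occurrence percentage of 100%, while cells above the range of sampled stages approach 0%, which is exactly the gradation a probabilistic flood map displays.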
Procedia PDF Downloads 144
20979 Simulation Research of Diesel Aircraft Engine
Authors: Łukasz Grabowski, Michał Gęca, Mirosław Wendeker
Abstract:
This paper presents the simulation results for a new opposed-piston diesel engine to power a light aircraft. Created in AVL Boost, the model covers the entire charge passage, from the inlet up to the outlet, and represents fuel injection into the cylinders and combustion in the cylinders. The calculation uses the module for two-stroke engines. The model was built from sub-models available in this software, each complemented with parameters in line with the design premise. Since engine weight resulting from the geometric dimensions is fundamental in aircraft engines, two stroke-length configurations were studied. For each value, selected operating conditions defined by crankshaft speed were calculated; the required power was achieved by changing the air-fuel ratio (AFR), and brake specific fuel consumption (BSFC) was also studied. For stroke S1, the BSFC was lowest at all three operating points. The difference is approximately 1-2%, which means a higher overall engine efficiency, but the amount of fuel injected into the cylinders is larger by several mg for S1. The cylinder maximum pressure is lower for S2 because the compressor gear drive remained the same and the boost pressure was identical in both cases. Calculations for various values of boost pressure were the next stage of the study; in each case, the amount of fuel was changed to achieve the required engine power. First, the intake system dimensions were modified: the duct connecting the compressor and the air cooler was given a diameter D = 40 mm, equal to the diameter of the compressor outlet duct. The impact of duct length was also examined in order to reduce the flow pulsation during the operating cycle. For the so-selected geometry of the intake system, calculations were run for various values of boost pressure, which was changed by modifying the gear driving the compressor so as to reach the required cruising power of N = 68 kW. Due to the mechanical power consumed by the compressor, a high pressure ratio results in worsened overall engine efficiency: the change in BSFC from 210 g/kWh to nearly 270 g/kWh shows this correlation, and the overall engine efficiency is reduced by about 8%. Acknowledgement: This work has been realized in cooperation with The Construction Office of WSK "PZL-KALISZ" S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development. Keywords: aircraft, diesel, engine, simulation
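The quoted link between BSFC and overall efficiency follows from eta = 3600 kJ/kWh divided by the fuel energy spent per kWh. A short check (the lower heating value of 42.7 kJ/g assumed below is a typical diesel figure, not stated in the abstract):

```python
DIESEL_LHV_KJ_PER_G = 42.7  # assumed lower heating value of diesel fuel

def overall_efficiency(bsfc_g_per_kwh):
    """Brake thermal efficiency from BSFC:
    eta = 3600 kJ/kWh / (BSFC [g/kWh] * LHV [kJ/g])."""
    return 3600.0 / (bsfc_g_per_kwh * DIESEL_LHV_KJ_PER_G)

eta_best = overall_efficiency(210.0)   # BSFC at the best operating point
eta_worst = overall_efficiency(270.0)  # BSFC at the highest boost pressure
```

Under this assumption the efficiency falls by roughly nine percentage points over the quoted BSFC range, consistent in magnitude with the reduction of about 8% stated above.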
Procedia PDF Downloads 209
20978 Field Application of Reduced Crude Conversion Spent Lime
Authors: Brian H. Marsh, John H. Grove
Abstract:
Gypsum is being applied to ameliorate subsoil acidity and to overcome the problem of very slow lime movement from surface lime applications. Reduced Crude Conversion Spent Lime (RCCSL), which contains anhydrite, was evaluated for use as a liming material, with specific consideration given to the movement of sulfate into the acid subsoil. Agricultural lime and RCCSL were applied at 0, 0.5, 1.0, and 1.5 times the lime requirement of 6.72 Mg ha-1 to an acid Trappist silt loam (Typic Hapludult). Corn [Zea mays (L.)] was grown following lime material application, and soybean [Glycine max (L.) Merr.] was grown in the second year. Soil pH increased rapidly with the addition of the RCCSL material. Over time there was no difference in soil pH between the materials, but pH did increase with application rate. None of the observed changes in plant nutrient concentration had an impact on yield. Grain yield was higher for the RCCSL-amended treatments in the first year but not in the second, and there was a significant increase in soybean grain yield for the full lime requirement treatments over no lime. Keywords: soil acidity, corn, soybean, liming materials
Procedia PDF Downloads 362
20977 Direct Measurement of Pressure and Temperature Variations During High-Speed Friction Experiments
Authors: Simon Guerin-Marthe, Marie Violay
Abstract:
Thermal pressurization (TP) has been proposed as a key mechanism in the weakening of faults during dynamic ruptures. Theoretical and numerical studies clearly show how frictional heating can lead to an increase in pore fluid pressure due to the rapid slip along faults that occurs during earthquakes. In addition, recent laboratory studies have evidenced local pore pressure or local temperature variations during rotary shear tests that are consistent with theoretical and numerical TP models. The aim of this study is to complement previous work by measuring both local pore pressure and local temperature variations in the vicinity of a water-saturated calcite gouge layer subjected to a controlled slip velocity in a direct double-shear configuration. Laboratory investigation of the TP process is crucial to understand the conditions under which it is likely to become a dominant mechanism controlling dynamic friction. It is also important for understanding the timing and magnitude of temperature and pore pressure variations, for recognizing when TP is negligible, and for seeing how it competes with strengthening mechanisms such as dilatancy, which can occur during rock failure. Here we present unique direct measurements of temperature and pressure variations during high-speed friction experiments under various load point velocities and show the timing of these variations relative to the slip event. Keywords: thermal pressurization, double-shear test, high-speed friction, dilatancy
Procedia PDF Downloads 66
20976 Consequences of Some Remediative Techniques Used in Sewaged Soil Bioremediation on Indigenous Microbial Activity
Authors: E. M. Hoballah, M. Saber, A. Turky, N. Awad, A. M. Zaghloul
Abstract:
Remediation of cultivated sewage soils in Egypt has become important in the last decade for producing healthy crops and protecting human health. In this respect, a greenhouse experiment was conducted in which contaminated sewage soil was treated with modified forms of 2% bentonite (T1), 2% kaolinite (T2), 1% bentonite + 1% kaolinite (T3), 2% probentonite (T4), 2% prokaolinite (T5), 1% bentonite + 0.5% kaolinite + 0.5% rock phosphate (RP) (T6), 2% iron oxide (T7), and 1% iron oxide + 1% RP (T8), applied as remediative materials. Untreated soil was used as a control. All soil samples were incubated for 2 months at 25°C and kept at field capacity throughout the experiment. Carbon dioxide (CO2) efflux from both treated and untreated soils, as a biomass indicator, was measured through the incubation time, and the kinetic parameters of the best-fitted models describing the phenomenon were used to evaluate the success of the remediation of the sewaged soils. The results indicated that, according to the kinetic parameters of the models used, CO2 efflux from remediated soils was significantly decreased compared to the control treatment, with rate values varying according to the type of remediation material applied. In addition, the analyzed microbial biomass parameters showed that Ni and Zn were the potentially toxic elements (PTEs) most responsible for the decrease in microbial activity in untreated soil, whereas Ni was the only influential pollutant in treated soils. Although all applied materials significantly decreased the hazards of PTEs in treated soil, modified bentonite was the best treatment compared to the other materials used. This work discusses the different mechanisms taking place between the applied materials and the PTEs found in the studied sewage soil. Keywords: remediation, potential toxic elements, soil biomass, sewage
Procedia PDF Downloads 229
20975 Recovery of Fried Soybean Oil Using Bentonite as an Adsorbent: Optimization, Isotherm and Kinetics Studies
Authors: Prakash Kumar Nayak, Avinash Kumar, Uma Dash, Kalpana Rayaguru
Abstract:
Soybean oil is one of the most widely consumed cooking oils worldwide. Deep-fat frying of foods at higher temperatures adds a unique flavour, golden brown colour, and crispy texture to foods, but it brings various changes to the oil, such as hydrolysis, oxidation, hydrogenation, and thermal alteration. The peroxide value (PV) is one of the most important indicators of the quality of deep-fat fried oil. Using bentonite as an adsorbent, the PV can be reduced, thereby improving the quality of the soybean oil. In this study, operating parameters such as the heating time of the oil (10, 15, 20, 25 and 30 h), contact time (5, 10, 15, 20, 25 h), and adsorbent concentration (0.25, 0.5, 0.75, 1.0 and 1.25 g/100 ml of oil) were optimized by response surface methodology (RSM), with the percentage reduction of PV as the response. Adsorption data were analysed by fitting them to the Langmuir and Freundlich isotherm models; the Langmuir model gave the best fit. The adsorption process was also found to follow a pseudo-second-order kinetic model. Keywords: bentonite, Langmuir isotherm, peroxide value, RSM, soybean oil
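A Langmuir fit of the kind reported above is commonly obtained from the linearised form Ce/q = Ce/qm + 1/(KL·qm). A small sketch of that procedure (the isotherm data used to exercise it below are synthetic, for illustration only):

```python
def fit_langmuir(ce, qe):
    """Fit the Langmuir isotherm q = qm*KL*Ce/(1 + KL*Ce) through its
    linearised form Ce/q = Ce/qm + 1/(KL*qm) by least squares.
    Returns (qm, KL): monolayer capacity and Langmuir constant."""
    y = [c / q for c, q in zip(ce, qe)]
    n = len(ce)
    xbar = sum(ce) / n
    ybar = sum(y) / n
    slope = sum((x - xbar) * (yi - ybar) for x, yi in zip(ce, y)) / \
            sum((x - xbar) ** 2 for x in ce)
    intercept = ybar - slope * xbar
    return 1.0 / slope, slope / intercept  # qm, KL
```

The Freundlich model can be fitted analogously from log q versus log Ce; comparing the two regressions' goodness of fit is the usual way the "best fit" conclusion is reached.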
Procedia PDF Downloads 376
20974 Effects of Plant Growth Promoting Rhizobacteria on the Yield and Nutritive Quality of Tomato Fruits
Authors: Narjes Dashti, Nida Ali, Magdy Montasser, Vineetha Cherian
Abstract:
The influence of two PGPR strains, Pseudomonas aeruginosa and Stenotrophomonas rhizophila, on the fruit yields, pomological traits, and chemical contents of tomato (Solanum lycopersicum) fruits was studied. The study was conducted separately on two different tomato cultivars, Supermarmande and UC82B. The results indicated that the presence of the PGPR almost doubled the average yield per plant. There was a significant improvement in the pomological qualities of the PGPR-treated tomato fruits compared to the corresponding healthy treatments, especially in traits such as average fruit weight, height, and fruit volume. The chemical analysis revealed that the presence of the PGPR increased the total protein, lycopene, alkalinity, and phenol content of the tomato fruits compared to the healthy controls. It had no influence on the reducing sugar, total soluble solids, or titratable acidity content of the fruits; however, its presence reduced the amount of ascorbic acid in the tomato fruits compared to the healthy controls. Keywords: PGPR, tomato, fruit quality
Procedia PDF Downloads 333
20973 Establishment of Air Quality Zones in Italy
Authors: M. G. Dirodi, G. Gugliotta, C. Leonardi
Abstract:
Member states shall establish zones and agglomerations throughout their territory to assess and manage air quality in order to comply with the European directives. In Italy, Decree 155/2010, transposing Directive 2008/50/EC on ambient air quality and cleaner air for Europe, merged into a single act the previous provisions on ambient air quality assessment and management, including those resulting from the implementation of Directive 2004/107/EC relating to arsenic, cadmium, nickel, mercury, and polycyclic aromatic hydrocarbons in ambient air. Decree 155/2010 introduced stricter rules for identifying zones on the basis of the characteristics of the territory rather than pollution levels, as was done in the past. The implementation of these new criteria has reduced the great variability of the previous zoning, leading to a significant reduction in the total number of zones and to a complete and uniform ambient air quality assessment and management throughout the country. The present document describes the definition of the new zones in Italy according to Decree 155/2010; in particular, the paper contains the description and the analysis of the outcome of the zoning and classification. Keywords: zones, agglomerations, air quality assessment, classification
Procedia PDF Downloads 332