Search results for: uplink throughput prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2532

1302 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence

Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang

Abstract:

Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering flows and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. Firstly, we analyze the influence of subfilter-scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress because the SFS dynamics are insufficiently resolved. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving the Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Additionally, the exploration is extended to filter anisotropy to address its impact on the SFS dynamics and LES accuracy. Employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in the LES filters are evaluated. The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions of vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of the DSM and DMM become worse, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. These findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for LES of turbulence.
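
The following is a minimal sketch (not taken from the paper) of the direct-deconvolution idea on a one-dimensional synthetic field: a Gaussian filter is applied in Fourier space and then inverted with a small cutoff, which illustrates why sufficient resolution of the filtered content matters before the SFS information can be recovered. The grid size, filter width, cutoff eps, and test field are illustrative assumptions.

```python
import numpy as np

# Minimal 1-D illustration of direct deconvolution with a Gaussian filter.
# The filtered field is recovered by inverting the filter transfer function
# in Fourier space, with a small cutoff to avoid amplifying unresolved scales.
N = 256                                               # grid points on a periodic domain of length 2*pi
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=2 * np.pi / N) * 2 * np.pi    # angular wavenumbers

u = np.sin(x) + 0.5 * np.sin(8 * x) + 0.1 * np.sin(32 * x)   # synthetic "DNS" field

delta = 4 * (2 * np.pi / N)           # filter width of four grid spacings (illustrative)
G = np.exp(-(k * delta) ** 2 / 24.0)  # Gaussian filter transfer function

u_bar = np.real(np.fft.ifft(G * np.fft.fft(u)))       # filtered field

# Direct deconvolution: divide by G where it is not vanishingly small.
eps = 1e-3
G_inv = np.where(G > eps, 1.0 / G, 0.0)
u_rec = np.real(np.fft.ifft(G_inv * np.fft.fft(u_bar)))

corr = np.corrcoef(u, u_rec)[0, 1]
print(f"correlation between original and deconvolved field: {corr:.4f}")
```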

Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence

Procedia PDF Downloads 58
1301 Soybean Seed Composition Prediction From Standing Crops Using Planet Scope Satellite Imagery and Machine Learning

Authors: Supria Sarkar, Vasit Sagan, Sourav Bhadra, Meghnath Pokharel, Felix B. Fritschi

Abstract:

Soybeans and their derivatives are very important agricultural commodities around the world because of their wide applicability in human food, animal feed, biofuel, and industry. However, the significance of soybean production depends on the quality of the soybean seeds rather than the yield alone. Seed composition is widely dependent on plant physiological properties, aerobic and anaerobic environmental conditions, nutrient content, and plant phenological characteristics, which can be captured by high-temporal-resolution remote sensing datasets. PlanetScope (PS) satellite images, with their frequent revisits throughout the world, have high potential for capturing sequential information on crop growth. In this study, we estimate soybean seed composition while the plants are still in the field by utilizing PlanetScope (PS) satellite images and different machine learning algorithms. Several experimental fields were established with varying genotypes, and different seed compositions were measured from the samples as ground truth data. The PS images were processed to extract 462 hand-crafted vegetative and textural features. Four machine learning algorithms, i.e., partial least squares regression (PLSR), random forest regression (RFR), gradient boosting machine (GBM), and support vector machine (SVM), and two recurrent neural network architectures, i.e., long short-term memory (LSTM) and gated recurrent unit (GRU), were used in this study to predict the oil, protein, sucrose, ash, starch, and fiber content of soybean seed samples. The GRU and LSTM architectures had two separate branches, one for vegetative features and the other for texture features, which were later concatenated to predict seed composition. The results show that sucrose, ash, protein, and oil yielded comparable prediction results. The machine learning algorithms that best predicted the six seed composition traits differed. GRU worked well for oil (R² of 0.53) and protein (R² of 0.36), whereas SVR and PLSR showed the best results for sucrose (R² of 0.74) and ash (R² of 0.60), respectively. Although RFR and GBM provided comparable performance, these models tended to overfit severely. Among the features, the vegetative features were found to be more important than the texture features. It is suggested to utilize many vegetation indices for machine learning training and to select the best ones by using feature selection methods. Overall, the study reveals the feasibility and efficiency of PS images and machine learning for plot-level seed composition estimation. However, special care should be given to designing the plot size in the experiments to avoid mixed-pixel issues.
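
As a rough illustration of the two-branch recurrent architecture described above, the sketch below builds one GRU branch for vegetative features and one for texture features and concatenates them before a six-output regression head, using tf.keras. The sequence length, per-date feature counts, and layer sizes are illustrative assumptions, not values from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Hypothetical dimensions: 12 image dates, 300 vegetative and 162 textural features per date.
T, N_VEG, N_TEX = 12, 300, 162

veg_in = layers.Input(shape=(T, N_VEG), name="vegetative_features")
tex_in = layers.Input(shape=(T, N_TEX), name="texture_features")

veg_branch = layers.GRU(64)(veg_in)    # one recurrent branch per feature family
tex_branch = layers.GRU(32)(tex_in)

merged = layers.Concatenate()([veg_branch, tex_branch])
hidden = layers.Dense(64, activation="relu")(merged)
out = layers.Dense(6, name="seed_composition")(hidden)  # oil, protein, sucrose, ash, starch, fiber

model = Model(inputs=[veg_in, tex_in], outputs=out)
model.compile(optimizer="adam", loss="mse")
model.summary()
```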

Keywords: agriculture, computer vision, data science, geospatial technology

Procedia PDF Downloads 115
1300 Black-Box-Based Generic Perturbation Generation Method under Salient Graphs

Authors: Dingyang Hu, Dan Liu

Abstract:

DNN (deep neural network) deep learning models are widely used in classification, prediction, and other task scenarios. To address the difficulty of generating generic adversarial perturbations for deep learning models under black-box conditions, a generic adversarial perturbation generation method based on a saliency map (CJsp) is proposed, which obtains salient image regions by counting the factors through which the input features of an image influence the output results. The method can be understood as a saliency-map attack algorithm that obtains false classification results by reducing the weights of salient feature points. Experiments also demonstrate that this method achieves a high success rate for transfer attacks and is a batch adversarial sample generation method.
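
The abstract does not spell out the algorithm, so the sketch below shows one plausible black-box variant of the idea: estimate a saliency map by occluding patches and measuring the change in the model's output, then down-weight the most salient patches to perturb the prediction. The predict_fn callable, patch size, and perturbation strength are assumptions for illustration only, not the authors' CJsp procedure.

```python
import numpy as np

def occlusion_saliency(predict_fn, image, true_label, patch=8):
    """Estimate a saliency map under black-box access only: mask patches
    and record the drop in the model's confidence for the true class."""
    h, w = image.shape[:2]
    sal = np.zeros((h // patch, w // patch))
    base = predict_fn(image[None])[0, true_label]
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0.0
            sal[i // patch, j // patch] = base - predict_fn(masked[None])[0, true_label]
    return sal

def perturb_salient(image, sal, eps=0.1, k=10, patch=8):
    """Down-weight the k most salient patches to push the prediction away
    from the true class (illustrative, untargeted)."""
    adv = image.copy()
    idx = np.dstack(np.unravel_index(np.argsort(sal, axis=None)[::-1], sal.shape))[0][:k]
    for (pi, pj) in idx:
        i, j = pi * patch, pj * patch
        adv[i:i + patch, j:j + patch] *= (1.0 - eps)
    return adv
```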

Keywords: adversarial sample, gradient, probability, black box

Procedia PDF Downloads 74
1299 Gas Holdups in a Gas-Liquid Upflow Bubble Column With Internal

Authors: C. Milind Caspar, Valtonia Octavio Massingue, K. Maneesh Reddy, K. V. Ramesh

Abstract:

Gas holdup data were obtained from measured pressure drop values in a gas-liquid upflow bubble column in the presence of a string-of-hemispheres promoter internal. The parameters that influenced the gas holdup are the gas velocity, liquid velocity, promoter rod diameter, pitch, and base diameter of the hemispheres. Tap water was used as the liquid phase and nitrogen as the gas phase. In the present study, an increase of about 26 percent in gas holdup was obtained due to the insertion of the promoter, in comparison with the empty conduit. Pitch and rod diameter did not show any influence on gas holdup, whereas gas holdup was strongly influenced by the gas velocity, liquid velocity, and hemisphere base diameter. A correlation equation for the prediction of gas holdup was obtained by least-squares regression analysis.
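
The abstract does not give the form of the correlation, so the sketch below shows one common way such an equation is obtained: a power-law correlation in the influencing variables, linearised with logarithms and fitted by least squares. The variable values and the power-law form are illustrative assumptions, not the study's data or equation.

```python
import numpy as np

# Hypothetical measurements: gas velocity Ug, liquid velocity Ul, hemisphere
# base diameter db (all SI units) and the measured gas holdup eps_g.
Ug = np.array([0.02, 0.04, 0.06, 0.08, 0.10])
Ul = np.array([0.05, 0.05, 0.10, 0.10, 0.15])
db = np.array([0.01, 0.01, 0.015, 0.015, 0.02])
eps_g = np.array([0.031, 0.052, 0.066, 0.081, 0.094])

# Power-law correlation eps_g = C * Ug^a * Ul^b * db^c, linearised with logs.
X = np.column_stack([np.ones_like(Ug), np.log(Ug), np.log(Ul), np.log(db)])
coef, *_ = np.linalg.lstsq(X, np.log(eps_g), rcond=None)
C, a, b, c = np.exp(coef[0]), coef[1], coef[2], coef[3]
print(f"eps_g = {C:.3f} * Ug^{a:.2f} * Ul^{b:.2f} * db^{c:.2f}")
```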

Keywords: bubble column, gas-holdup, two-phase flow, turbulent promoter

Procedia PDF Downloads 90
1298 Predicting the Exposure Level of Airborne Contaminants in Occupational Settings via the Well-Mixed Room Model

Authors: Alireza Fallahfard, Ludwig Vinches, Stephane Halle

Abstract:

In the workplace, the exposure level of airborne contaminants should be evaluated due to health and safety issues. It can be done by numerical models or experimental measurements, but the numerical approach can be useful when it is challenging to perform experiments. One of the simplest models is the well-mixed room (WMR) model, which has shown its usefulness for predicting inhalation exposure in many situations. However, since the WMR is limited to gases and vapors, it cannot be used to predict exposure to aerosols. The main objective is to modify the WMR model to expand its application to exposure scenarios involving aerosols. To reach this objective, the standard WMR model has been modified to consider the deposition of particles by gravitational settling and by Brownian and turbulent deposition. Three deposition models were implemented in the model. The time-dependent concentrations of airborne particles predicted by the model were compared to experimental results obtained in a 0.512 m³ chamber. Polystyrene particles of 1, 2, and 3 µm aerodynamic diameter were generated with a nebulizer under two air-change-per-hour (ACH) conditions. The well-mixed condition and chamber ACH were determined by the tracer gas decay method. The mean friction velocity on the chamber surfaces, one of the input variables for the deposition models, was determined by computational fluid dynamics (CFD) simulation. For the experimental procedure, the particles were generated until reaching the steady-state condition (emission period). Then generation stopped, and concentration measurements continued until reaching the background concentration (decay period). The results of the tracer gas decay tests revealed that the ACHs of the chamber were 1.4 and 3.0 and that the well-mixed condition was achieved. The CFD results showed that the average mean friction velocities and their standard deviations for the lowest and highest ACH were (8.87 ± 0.36) × 10⁻² m/s and (8.88 ± 0.38) × 10⁻² m/s, respectively. The numerical results indicated that the difference between the deposition rates predicted by the three deposition models was less than 2%. The experimental and numerical aerosol concentrations were compared in the emission period and the decay period. In both periods, the prediction accuracy of the modified model improved in comparison with the classic WMR model. However, there is still a difference between the actual and predicted values. In the emission period, the modified WMR results closely follow the experimental data. However, the model significantly overestimates the experimental results during the decay period. This finding is mainly due to an underestimation of the deposition rate in the model and to uncertainty related to the measurement devices and particle size distribution. Comparing the experimental and numerical deposition rates revealed that the actual particle deposition rate is significant, but the rate given by the deposition mechanisms considered in the model was ten times lower than the experimental value. Thus, particle deposition is significant, will affect the airborne concentration in occupational settings, and should be considered in airborne exposure prediction models. The role of other removal mechanisms should also be investigated.
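
A minimal sketch of the underlying balance (not the authors' implementation): the well-mixed room equation dC/dt = S/V − (λ + β)C, where λ is the ventilation rate and β a particle deposition loss rate, integrated over an emission period followed by a decay period. Only the chamber volume and ACH echo the study; the emission and deposition rates below are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

V = 0.512                   # chamber volume, m^3 (from the study)
ACH = 3.0                   # air changes per hour
lam = ACH / 3600.0          # ventilation rate, 1/s
beta = 2e-4                 # illustrative particle deposition loss rate, 1/s
S = 5e3                     # illustrative emission rate, particles/s

def wmr(t, C, emitting):
    source = S / V if emitting else 0.0
    return source - (lam + beta) * C   # well-mixed balance with deposition loss

t_eval = np.linspace(0, 3600, 200)
emit = solve_ivp(wmr, (0, 3600), [0.0], args=(True,), t_eval=t_eval)
decay = solve_ivp(wmr, (0, 3600), [emit.y[0, -1]], args=(False,), t_eval=t_eval)

print(f"steady-state concentration ~ {S / V / (lam + beta):.0f} particles/m^3")
```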

Keywords: aerosol, CFD, exposure assessment, occupational settings, well-mixed room model, zonal model

Procedia PDF Downloads 87
1297 Diagnosis of Diabetes Using Computer Methods: Soft Computing Methods for Diabetes Detection Using Iris

Authors: Piyush Samant, Ravinder Agarwal

Abstract:

Complementary and alternative medicine (CAM) techniques are quite popular and effective for chronic diseases. Iridology is a more than 150-year-old CAM technique that analyzes the patterns, tissue weakness, color, shape, structure, etc., of the iris for disease diagnosis. The objective of this paper is to validate the use of iridology for the diagnosis of diabetes. The suggested model was applied to a systemic disease with ocular effects. Data from 200 subjects, 100 diabetic and 100 non-diabetic, were evaluated. The complete procedure was kept very simple and free from the involvement of any iridologist. From the normalized iris, the region of interest was cropped. A total of 63 features were extracted using statistical measures, texture analysis, and the two-dimensional discrete wavelet transform. A comparison of the accuracies of six different classifiers is presented. The results show 89.66% accuracy for the random forest classifier.
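
As a rough sketch of the pipeline (statistical and 2-D wavelet features from a cropped region of interest, fed to a classifier), the code below extracts a handful of such features with PyWavelets and scores a random forest by cross-validation. The feature list is far smaller than the paper's 63 features, the texture (e.g., GLCM) features are omitted, and the data are randomly generated placeholders.

```python
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def iris_roi_features(roi):
    """Statistical + single-level 2-D DWT features from a normalized-iris ROI."""
    stats = [roi.mean(), roi.std(), roi.min(), roi.max()]
    cA, (cH, cV, cD) = pywt.dwt2(roi, "db2")
    wavelet = [band.mean() for band in (cA, cH, cV, cD)] + \
              [band.std() for band in (cA, cH, cV, cD)]
    return np.array(stats + wavelet)

# Hypothetical data: 200 ROIs of 64x64 pixels with diabetic/non-diabetic labels.
rng = np.random.default_rng(0)
rois = rng.random((200, 64, 64))
labels = rng.integers(0, 2, 200)

X = np.vstack([iris_roi_features(r) for r in rois])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```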

Keywords: complementary and alternative medicine, classification, iridology, iris, feature extraction, disease prediction

Procedia PDF Downloads 385
1296 Fat-Tail Test of Regulatory DNA Sequences

Authors: Jian-Jun Shu

Abstract:

The statistical properties of CRMs (cis-regulatory modules) are explored by estimating the occurrence distribution of similar-word sets. It is observed that CRMs tend to have a fat-tailed distribution of similar-word set occurrence. Thus, a fat-tail test with two fatness coefficients is proposed to distinguish CRMs from non-CRMs, especially from exons. For the first fatness coefficient, the separation accuracy between CRMs and exons is increased as compared with the existing content-based CRM prediction method, the fluffy-tail test. For the second fatness coefficient, the computing time is reduced as compared with the fluffy-tail test, making it very suitable for long sequences and large-database analysis in the post-genome era. Moreover, these indexes may be used to predict CRMs that have not yet been observed experimentally. This can serve as a valuable filtering process for experiments.

Keywords: statistical approach, transcription factor binding sites, cis-regulatory modules, DNA sequences

Procedia PDF Downloads 273
1295 The Use of Beneficial Microorganisms from Diverse Environments for the Management of Aflatoxin in Maize

Authors: Mathias Twizeyimana, Urmila Adhikari, Julius P. Sserumaga, David Ingham

Abstract:

The management of aflatoxins (naturally occurring toxins produced by certain fungi, most importantly Aspergillus flavus and A. parasiticus) relies mostly on the use of best cultural practices and, in some cases, on biological control, in which atoxigenic strains inhibit the toxigenic strains through competition, resulting in considerable toxin reduction. At AgBiome, we have built a core collection of over 100,000 fully sequenced microbes from diverse environments and employ both the microbes and their sequences in the discovery of new biological products for disease and pest control. The most common approach to finding beneficial microbes consists of isolating microorganisms from samples collected from diverse environments, selecting antagonistic strains through empirical screening, studying modes of action, and stabilizing selected microbial isolates through formulation. A total of 608 diverse bacterial strains were screened using a high-throughput assay (48-well assay) to identify strains that inhibit toxigenic A. flavus growth on maize kernels. Strains active in the 48-well assay had their pathogen-inhibiting activity confirmed using the flask assay and were concurrently tested for their ability to reduce the aflatoxin content in maize grains. Strains with the best growth inhibition and aflatoxin reduction were tested in greenhouse and field trials. In the field trials, three bacterial strains, AFS000009 (Pseudomonas chlororaphis), AFS032321 (Bacillus subtilis), and AFS024683 (Bacillus velezensis), had aflatoxin concentration (ppb) values that were significantly lower than those of the inoculated control. The identification of biological products with high efficacy in inhibiting pathogen growth and ultimately reducing aflatoxin content will provide a valuable alternative to the control strategies currently used in managing aflatoxin contamination.

Keywords: aflatoxin, microorganism bacteria, biocontrol, beneficial microbes

Procedia PDF Downloads 161
1294 A Comparative Study of Force Prediction Models during Static Bending Stage for 3-Roller Cone Frustum Bending

Authors: Mahesh Chudasama, Harit Raval

Abstract:

Conical sections and shells of metal plates manufactured by the 3-roller conical bending process are widely used in industry. The process is completed by first bending the metal plates statically and then roll-bending them dynamically in sequence. An analytical model of the maximum bending force during the static bending stage is required for the optimum design of the machine. Analytical models assuming various stress conditions are considered, and these models are compared with respect to various parameters and reported in this paper. It is concluded from the study that for higher bottom-roller inclinations the shear stress greatly affects the static bending force, whereas for lower bottom-roller inclinations it can be neglected.

Keywords: roller-bending, static-bending, stress-conditions, analytical-modeling

Procedia PDF Downloads 233
1293 The Response of Soil Biodiversity to Agriculture Practice in Rhizosphere

Authors: Yan Wang, Guowei Chen, Gang Wang

Abstract:

Soil microbial diversity is one of the important parameters for assessing soil fertility, soil health, and even the stability of the ecosystem. In this paper, we aim to reveal the differences in soil microbial communities between the rhizosphere and the root zone, and to identify biomarkers influenced by long-term tillage practices, which included four treatments: no-tillage, ridge tillage, continuous cropping with corn, and corn-soybean rotation. High-throughput sequencing was performed to investigate the differences in bacteria between the rhizosphere and the root zone. The results showed a very significant difference in species richness between rhizosphere and root-zone soils under the same crop rotation system (p < 0.01), and significant differences in species richness were also found between continuous corn cropping and corn-soybean rotation in the rhizosphere, and between no-tillage and ridge tillage in root-zone soils. Further beta-diversity analysis implied that both the tillage methods and the crop rotation systems influence soil microbial diversity and community structure to varying degrees. The composition and community structure of microbes in rhizosphere and root-zone soils were clustered distinctly by the beta-diversity analysis (p < 0.05). Linear discriminant analysis effect size (LEfSe) analysis of all taxa in the rhizosphere picked out more than 100 bacterial taxa that were significantly more abundant than in root-zone soils, whereas the number of biomarkers was lower between the continuous corn cropping and crop rotation treatments; the same pattern was found for the no-tillage and ridge tillage treatments. Bacterial communities are greatly influenced by the main environmental factors at large scales, as a result of biological adaptation and acclimation; these findings are therefore beneficial for optimizing agricultural practices.

Keywords: tillage methods, biomarker, biodiversity, rhizosphere

Procedia PDF Downloads 145
1292 Prediction of Index-Mechanical Properties of Pyroclastic Rock Utilizing Electrical Resistivity Method

Authors: İsmail İnce

Abstract:

The aim of this study is to determine the index and mechanical properties of pyroclastic rocks in a practical way by means of the electrical resistivity method. For this purpose, the electrical resistivity, uniaxial compressive strength, point load strength, P-wave velocity, density, and porosity values of 10 different pyroclastic rocks were measured in the laboratory. A simple regression analysis was performed between the index-mechanical properties of the samples and the electrical resistivity values. A strong exponential relation was found between the index-mechanical properties and the electrical resistivity values. The electrical resistivity method can thus be used as a non-destructive method to assess the engineering properties of rocks from which it is difficult to obtain regularly shaped samples.
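
A minimal sketch of the kind of regression reported, i.e., an exponential relation between electrical resistivity and a mechanical property such as uniaxial compressive strength, fitted with scipy.optimize.curve_fit. The numbers are invented placeholders, not the study's data, and the direction of the fitted trend is an assumption.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical paired measurements for 10 pyroclastic samples:
# electrical resistivity (ohm.m) and uniaxial compressive strength (MPa).
resistivity = np.array([12., 18., 25., 31., 40., 52., 63., 75., 88., 100.])
ucs = np.array([6.2, 7.3, 8.4, 9.2, 11.5, 14.0, 17.8, 22.1, 28.4, 36.0])

def exponential(rho, a, b):
    return a * np.exp(b * rho)    # UCS = a * exp(b * resistivity)

(a, b), _ = curve_fit(exponential, resistivity, ucs, p0=(5.0, 0.02))
pred = exponential(resistivity, a, b)
r2 = 1 - np.sum((ucs - pred) ** 2) / np.sum((ucs - ucs.mean()) ** 2)
print(f"UCS = {a:.2f} * exp({b:.4f} * rho),  R^2 = {r2:.3f}")
```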

Keywords: electrical resistivity, index-mechanical properties, pyroclastic rocks, regression analysis

Procedia PDF Downloads 451
1291 Heat Transfer Enhancement by Turbulent Impinging Jet with Jet's Velocity Field Excitations Using OpenFOAM

Authors: Naseem Uddin

Abstract:

Impinging jets are used in a variety of engineering and industrial applications. This paper is based on numerical simulations of heat transfer by a turbulent impinging jet with velocity field excitations, using different Reynolds-averaged Navier-Stokes (RANS) models. Detached eddy simulations are also conducted to investigate the differences in the prediction capabilities of these two simulation approaches. In this paper, the excited jet is simulated in the non-commercial CFD code OpenFOAM with the goal of understanding the influence of the dynamics of the impinging jet on heat transfer. The jet's frequencies are altered keeping in view the preferred mode of the jet. The Reynolds number based on the mean velocity and diameter is 23,000, and the jet's outlet-to-target wall distance is 2. It is found that heat transfer at the target wall can be influenced by a judicious selection of amplitudes and frequencies.

Keywords: excitation, impinging jet, natural frequency, turbulence models

Procedia PDF Downloads 257
1290 Aerodynamic Designing of Supersonic Centrifugal Compressor Stages

Authors: Y. Galerkin, A. Rekstin, K. Soldatova

Abstract:

The universal modeling method, well proven for industrial compressors, was applied to the design of a high-flow-rate supersonic stage. The results were checked by ANSYS CFX and NUMECA FINE/Turbo calculations. The impeller appeared to be very effective at transonic flow velocities. The efficiency of the stator elements is acceptable at the design Mach numbers, too. Their loss coefficient versus inlet flow angle performance correlates well with the universal modeling prediction. The impeller demonstrated the ability to operate satisfactorily at the design flow rate. The supersonic flow behavior in the impeller inducer at the shroud blade-to-blade surface Φdes deserves additional study.

Keywords: centrifugal compressor stage, supersonic impeller, inlet flow angle, loss coefficient, return channel, shock wave, vane diffuser

Procedia PDF Downloads 451
1289 COSMO-RS Prediction for Choline Chloride/Urea Based Deep Eutectic Solvent: Chemical Structure and Application as Agent for Natural Gas Dehydration

Authors: Tayeb Aissaoui, Inas M. AlNashef

Abstract:

In recent years, green solvents named deep eutectic solvents (DESs) have been found to possess significant properties and to be applicable in several technologies. Choline chloride (ChCl) mixed with urea at a ratio of 1:2 at 80 °C was the first DES discovered. In this article, the chemical structure and combination mechanism of the ChCl:urea-based DES were investigated. Moreover, the implementation of this DES for water removal from natural gas is reported. Dehydration of natural gas by ChCl:urea shows significant absorption efficiency compared to triethylene glycol. All of the above computations were carried out with the COSMOthermX software. This article confirms the potential application of DESs in the gas industry.

Keywords: COSMO-RS, deep eutectic solvents, dehydration, natural gas, structure, organic salt

Procedia PDF Downloads 274
1288 Evaluating Service Trustworthiness for Service Selection in Cloud Environment

Authors: Maryam Amiri, Leyli Mohammad-Khanli

Abstract:

Cloud computing is becoming increasingly popular, and more business applications are moving to the cloud. As a result, the number of services that provide similar functional properties is increasing, so the ability to select a service with the best non-functional properties, corresponding to the user's preference, is necessary for the user. This paper presents an Evaluation Framework of Service Trustworthiness (EFST) that evaluates the trustworthiness of equivalent services without the need for additional invocations of them. EFST extracts the user preference automatically. It then assesses the trustworthiness of services along two dimensions, qualitative and quantitative metrics, based on the experience of past usage of the services. Finally, EFST determines the overall trustworthiness of services using a fuzzy inference system (FIS). The results of experiments and simulations show that EFST is able to predict the missing values of Quality of Service (QoS) better than other competing approaches and helps users select the most appropriate services.

Keywords: user preference, cloud service, trustworthiness, QoS metrics, prediction

Procedia PDF Downloads 268
1287 Mathematical Modeling and Optimization of Burnishing Parameters for 15NiCr6 Steel

Authors: Tarek Litim, Ouahiba Taamallah

Abstract:

The present paper is an investigation of the effect of burnishing on the surface integrity of a component made of 15NiCr6 steel. This work presents a statistical study based on regression and Taguchi's design, which allowed the development of mathematical models to predict the output responses as a function of the technological parameters studied. The response surface methodology (RSM) showed the simultaneous influence of the burnishing parameters and was used to identify the optimal processing parameters. ANOVA of the results led to the validation of the prediction models, with coefficients of determination of 90.60% and 92.41% for roughness and hardness, respectively. Furthermore, a multi-objective optimization identified a regime characterized by P = 10 kgf, i = 3 passes, and f = 0.074 mm/rev, which favours minimum roughness and maximum hardness. The result was validated by desirability values of D = 0.99 and 0.95 for roughness and hardness, respectively.
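
A small sketch of the workflow described above, under stated assumptions: a quadratic response-surface model is fitted to hypothetical burnishing runs for roughness and hardness, and a Derringer-Suich-style composite desirability (minimize roughness, maximize hardness) is evaluated at a candidate regime. The run data and desirability targets are illustrative, not the paper's values.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Hypothetical burnishing runs: force P (kgf), passes i, feed f (mm/rev),
# with measured roughness Ra (um) and hardness HV.
X = np.array([[5, 1, 0.05], [5, 3, 0.10], [10, 1, 0.10],
              [10, 3, 0.074], [15, 1, 0.074], [15, 3, 0.05]])
Ra = np.array([0.92, 0.71, 0.66, 0.38, 0.55, 0.47])
HV = np.array([310, 352, 341, 398, 371, 386])

poly = PolynomialFeatures(degree=2, include_bias=False)
Xq = poly.fit_transform(X)                       # quadratic RSM terms
ra_model = LinearRegression().fit(Xq, Ra)
hv_model = LinearRegression().fit(Xq, HV)

def desirability(x):
    """Composite desirability: minimize predicted Ra, maximize predicted HV."""
    xq = poly.transform([x])
    d_ra = np.clip((Ra.max() - ra_model.predict(xq)[0]) / (Ra.max() - Ra.min()), 0, 1)
    d_hv = np.clip((hv_model.predict(xq)[0] - HV.min()) / (HV.max() - HV.min()), 0, 1)
    return np.sqrt(d_ra * d_hv)                  # geometric mean of the two

print("D at P=10, i=3, f=0.074:", round(desirability([10, 3, 0.074]), 3))
```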

Keywords: 15NiCr6 steel, burnishing, surface integrity, Taguchi, RSM, ANOVA

Procedia PDF Downloads 179
1286 An Approach for Thermal Resistance Prediction of Plain Socks in Wet State

Authors: Tariq Mansoor, Lubos Hes, Vladimir Bajzik

Abstract:

Sock comfort has great significance in our daily life. This significance increases during work of low or high activity, which causes the body to sweat at different rates. In this study, plain socks with different fibre compositions were wetted to the saturation level. After successive conditioning intervals, the thermal resistance of these socks was characterized in the dry and wet states. The theoretical thermal resistance is predicted by using combined filling coefficients and the thermal conductivity of the wet polymer instead of the dry polymer (fibre) in different models. With this modification, different mathematical models could predict the thermal resistance at different moisture levels. Furthermore, the thermal resistance predicted by the different models shows a reasonable correlation (0.84-0.98) with the experimental results in both the dry (laboratory moisture conditions) and wet states. This work is supported by the Technical University of Liberec under SGC-2019, project number 21314.
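
A minimal sketch of the modification described, under stated assumptions: a parallel (mixture-rule) effective conductivity in which a wet-polymer conductivity replaces the dry-fibre value, weighted by filling coefficients, with thermal resistance taken as thickness divided by that effective conductivity. The filling coefficients and conductivity values are illustrative, and the models in the paper combine these terms in their own ways.

```python
def sock_thermal_resistance(thickness_m, f_fibre, f_water, f_air,
                            k_fibre_wet=0.35, k_water=0.60, k_air=0.026):
    """Parallel (mixture-rule) estimate of fabric thermal resistance in m^2.K/W.

    f_fibre, f_water, f_air are volume filling coefficients summing to ~1;
    conductivities are in W/(m.K). The wet-polymer conductivity is used in
    place of the dry-fibre value, as described in the abstract.
    """
    k_eff = f_fibre * k_fibre_wet + f_water * k_water + f_air * k_air
    return thickness_m / k_eff

# Example: a 1.5 mm plain sock, partly saturated (illustrative values).
print(sock_thermal_resistance(0.0015, f_fibre=0.25, f_water=0.15, f_air=0.60))
```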

Keywords: thermal resistance, mathematical model, plain socks, moisture loss rate

Procedia PDF Downloads 177
1285 Improving Grade Control Turnaround Times with In-Pit Hyperspectral Assaying

Authors: Gary Pattemore, Michael Edgar, Andrew Job, Marina Auad, Kathryn Job

Abstract:

As critical commodities become more scarce, significant time and resources have been devoted to better understanding complicated ore bodies and extracting their full potential. These challenging ore bodies present several pain points for geologists and engineers to overcome; poor handling of these issues flows downstream to the processing plant, affecting throughput rates and recovery. Many open-cut mines utilise blast hole drilling to extract additional information to feed back into the modelling process. This method requires samples to be collected during or after blast hole drilling. Samples are then sent for assay, with turnaround times varying from 1 to 12 days. This method is time-consuming and costly, requires human exposure on the bench, and collects elemental data only. To address this challenge, research has been undertaken to utilise hyperspectral imaging across a broad spectrum to scan samples and collars or take down-hole measurements of mineralogy, moisture content, and grade abundance. Automation of this process using unmanned vehicles and on-board processing reduces human in-pit exposure, ensuring ongoing safety. On-board processing allows data to be integrated into modelling workflows immediately. The preliminary results demonstrate numerous direct and indirect benefits of this new technology, including rapid and accurate grade estimates, moisture content, and mineralogy. These benefits allow for faster geological model updates, better-informed mine scheduling, and improved downstream blending and processing practices. The paper presents recommendations for implementation of the technology in open-cut mining environments.

Keywords: grade control, hyperspectral scanning, artificial intelligence, autonomous mining, machine learning

Procedia PDF Downloads 91
1284 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e. season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
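
A small sketch of the framework's core idea, averaging the predictions of several classifiers, using scikit-learn's soft-voting ensemble on synthetic data. The feature construction, the specific estimators and their settings, and the synthetic dataset are assumptions for illustration; the actual study trains on Los Angeles weather and EPA pollutant data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical feature set: season, weekday/weekend, weather forecast,
# and lagged pollutant/AQI measurements; target is an AQI category.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

ensemble = VotingClassifier(
    estimators=[("logreg", LogisticRegression(max_iter=1000)),
                ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0))],
    voting="soft")                     # average the predicted class probabilities

ensemble.fit(X_tr, y_tr)
print("held-out accuracy:", ensemble.score(X_te, y_te))
```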

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 103
1283 Polymer Mixing in the Cavity Transfer Mixer

Authors: Giovanna Grosso, Martien A. Hulsen, Arash Sarhangi Fard, Andrew Overend, Patrick D. Anderson

Abstract:

In many industrial applications, and in particular in the polymer industry, the quality of mixing between different materials is fundamental to guarantee the desired properties of the finished products. However, properly modelling and understanding polymer mixing often presents noticeable difficulties because of the variety and complexity of the physical phenomena involved. This is the case for the Cavity Transfer Mixer (CTM), for which a clear understanding of the mixing mechanisms is still missing, as are clear guidelines for system optimization. This device, invented and patented by Gale at Rapra Technology Limited, is an add-on to be mounted downstream of existing extruders in order to improve distributive mixing. It consists of two concentric cylinders, the rotor and the stator, both provided with staggered rows of hemispherical cavities. The inner cylinder (rotor) rotates, while the outer (stator) remains still. At the same time, the pressure load imposed upstream pushes the fluid through the CTM. The mixing process is driven by the flow field generated by the complex interaction between the moving geometry, the imposed pressure load, and the rheology of the fluid. In this context, the present work proposes a complete and accurate three-dimensional model of the CTM and results from a broad range of simulations assessing the impact on mixing of several geometrical and operating parameters. Among them are the number of cavities per row, the number of rows, the size of the mixer, the rheology of the fluid, and the ratio between the rotation speed and the fluid throughput. The model is composed of a flow part and a mixing part: a finite element solver computes the transient velocity field, which is then used in the mapping method implementation in order to simulate the evolution of the concentration field. The results of the simulations are summarized in guidelines for the optimization of the device.

Keywords: mixing, non-Newtonian fluids, polymers, rheology

Procedia PDF Downloads 359
1282 Evaluation of Sustainable Business Model Innovation in Increasing the Penetration of Renewable Energy in the Ghana Power Sector

Authors: Victor Birikorang Danquah

Abstract:

Ghana's primary energy supply is heavily reliant on petroleum, biomass, and hydropower. Currently, Ghana gets its energy from hydropower (Akosombo and Bui), thermal power plants powered by crude oil, natural gas, and diesel, solar power, and imports from La Cote d'Ivoire. Until the early 2000s, large hydroelectric dams dominated Ghana's electricity generation. Due to unreliable weather patterns, Ghana increased its reliance on thermal power. However, thermal power contributes the highest percentage in terms of electricity generation in Ghana and is predominantly supplied by Independent Power Producers (IPPs). Ghana's electricity industry operates the corporate utility model as its business model. This model is typically 'vertically integrated,' with a single corporation selling the majority of power generated by its generation assets to its retail business, which then sells the electricity to retail market consumers. The corporate utility model has a straightforward value proposition that is based on increasing the number of energy units sold. The unit volume business model drives the entire energy value chain to increase throughput, locking system users into unsustainable practices. This report uses the qualitative research approach to explore the electricity industry in Ghana. There is a need for increasing renewable energy, such as wind and solar, in electricity generation. The research recommends two critical business models for the penetration of renewable energy in Ghana's power sector. The first model is the peer-to-peer electricity trading model, which relies on a software platform to connect consumers and generators in order for them to trade energy directly with one another. The second model is about encouraging local energy generation, incentivizing optimal time-of-use behaviour, and allowing any financial gains to be shared among the community members.

Keywords: business model innovation, electricity generation, renewable energy, solar energy, sustainability, wind energy

Procedia PDF Downloads 157
1281 The Implementation of a Numerical Technique to Thermal Design of Fluidized Bed Cooler

Authors: Damiaa Saad Khudor

Abstract:

The paper describes an investigation of the thermal design of a fluidized bed cooler and the prediction of heat transfer rates among the media categories. It is devoted to the thermal design of such equipment and its application in industrial fields. It outlines the strategy for the fluidization heat transfer mode and its implementation in industry. The procedure is used to furnish a complete thermal design for a fluidized bed cooler of sodium bicarbonate. The total thermal load distribution between the air-solid and water-solid sides along the cooler is calculated according to the thermal equilibrium. A step-by-step technique was used to accomplish the thermal design of the fluidized bed cooler. It predicts the load and the air, solid, and water temperatures along the trough. The thermal design of the fluidized bed cooler led to the installation of a heat exchanger consisting of 65 horizontal tubes with a 33.4 mm diameter and 4 m length inside the bed trough.

Keywords: fluidization, powder technology, thermal design, heat exchangers

Procedia PDF Downloads 492
1280 Comparison of ANN and Finite Element Model for the Prediction of Ultimate Load of Thin-Walled Steel Perforated Sections in Compression

Authors: Zhi-Jun Lu, Qi Lu, Meng Wu, Qian Xiang, Jun Gu

Abstract:

The analysis of perforated steel members is a 3D problem in nature; therefore, the traditional analytical expressions for the ultimate load of thin-walled steel sections cannot be used for perforated steel member design. In this study, the finite element method (FEM) and an artificial neural network (ANN) were used to simulate the process of stub column tests based on specific codes. The results show that, compared with those of the FEM model, the ultimate load predictions obtained from the ANN technique were much closer to those obtained from the physical experiments. The ANN model is very promising for solving the hard problem of complex perforated steel sections.
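
As a rough sketch of what an ANN surrogate for ultimate-load prediction might look like (not the authors' network), the code below trains a small scikit-learn multilayer perceptron on hypothetical stub-column records with geometric and material inputs. The input variables, their ranges, and the toy target formula are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Hypothetical stub-column records: section depth, width, thickness, length,
# perforation diameter (mm) and yield strength (MPa) -> ultimate load (kN).
rng = np.random.default_rng(1)
X = rng.uniform([80, 40, 1.0, 300, 10, 250],
                [200, 80, 3.0, 900, 60, 450], size=(120, 6))
# Toy target: load grows with cross-section and strength, drops with hole size.
y = 1e-3 * (X[:, 0] + X[:, 1]) * X[:, 2] * X[:, 5] * (1 - 0.004 * X[:, 4])

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                                   random_state=0))
print("CV R^2:", cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```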

Keywords: artificial neural network (ANN), finite element method (FEM), perforated sections, thin-walled steel, ultimate load

Procedia PDF Downloads 331
1279 Multi-Label Approach to Facilitate Test Automation Based on Historical Data

Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally

Abstract:

The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, call for enhanced quality assurance techniques. Test automation is one option for tackling the prevailing challenges by supporting test engineers with fast, parallel, and repetitive test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which is, in most cases, a limitation given the time constraints provided for quality assurance of complex software systems. Hence, computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning for the generation of automated test cases. It is based on supervised learning to analyze test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For the test case generation, this approach exploits historical data of test automation projects. The identified patterns are the foundation for predicting the implementation of unknown test case specifications. With this support, a test engineer only has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge in the form of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small-sized real-world systems. The most prominent EC is 'Subset Accuracy'. The promising results show an accuracy of at least 86% for test cases where a 1:1 relationship (multi-class) between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still 60%, which is better than the current state-of-the-art results. The prediction quality is expected to increase further for larger systems with corresponding historical data. Consequently, this technique facilitates the time reduction for establishing test automation and is thereby independent of the application domain and project. As work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in case labelled historical data is scarce.
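
A compact sketch of the supervised multi-label setup described above, under stated assumptions: natural-language test steps are vectorized and mapped to one or more automation components, and subset accuracy is computed with scikit-learn (accuracy_score on a multi-label indicator matrix is exactly subset accuracy). The example steps and component names are invented placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

# Hypothetical historical data: natural-language test steps mapped to the
# automation components that implemented them (possibly several per step).
steps = ["open ignition and check dashboard warning lamps",
         "send CAN message and verify ECU response",
         "check dashboard warning lamps after CAN fault injection",
         "verify ECU response time under load"]
components = [["PowerControl", "HmiCheck"], ["CanBus", "EcuMonitor"],
              ["HmiCheck", "CanBus"], ["EcuMonitor"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(components)

clf = make_pipeline(TfidfVectorizer(),
                    OneVsRestClassifier(LogisticRegression(max_iter=1000)))
clf.fit(steps, Y)

pred = clf.predict(["check ECU response after ignition"])
print("suggested components:", mlb.inverse_transform(pred))
print("subset accuracy on training data:", accuracy_score(Y, clf.predict(steps)))
```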

Keywords: machine learning, multi-class, multi-label, supervised learning, test automation

Procedia PDF Downloads 114
1278 A Meso Macro Model Prediction of Laminated Composite Damage Elastic Behaviour

Authors: A. Hocine, A. Ghouaoula, S. M. Medjdoub, M. Cherifi

Abstract:

The present paper proposes a meso-macro model describing the mechanical behaviour of composite laminates with stacking sequence [+θ/-θ]s under tensile loading. The behaviour of a layer is expressed through elasticity coupled with damage. The elastic strain is due to the elasticity of the layer and can be modeled by using classical laminate theory, with the laminate considered an orthotropic material. This means that no coupling effect between strain and curvature is considered. In the present work, the damage is associated with cracking of the matrix parallel to the fibers, and it is taken into account through changes in the stiffness of the layers. The anisotropic damage is completely described by a single scalar variable, and its evolution law is specified from the principle of maximum dissipation. The stress/strain relationship is investigated under plane-stress loading.

Keywords: damage, behavior modeling, meso-macro model, composite laminate, membrane loading

Procedia PDF Downloads 460
1277 Increase of the Nanofiber Degradation Rate Using PCL-PEO and PCL-PVP as a Shell in the Electrospun Core-Shell Nanofibers Using the Needleless Blades

Authors: Matej Buzgo, Erico Himawan, Ksenija Jašina, Aiva Simaite

Abstract:

Electrospinning is a versatile and efficient technology for producing nanofibers for biomedical applications. One of the most common polymers used for the preparation of nanofibers for regenerative medicine and drug delivery applications is polycaprolactone (PCL). PCL is a biocompatible and bioabsorbable material that can be used to stimulate the regeneration of various tissues. It is also a common material used for the development of drug delivery systems by blending the polymer with small active molecules. However, for many drug delivery applications, e.g., cancer immunotherapy, the PCL biodegradation rate, which may exceed 9 months, is too long, and faster nanofiber dissolution is needed. In this paper, we investigate the dissolution and small-molecule release rates of PCL blends with two hydrophilic polymers: polyethylene oxide (PEO) and polyvinylpyrrolidone (PVP). We show that adding a hydrophilic polymer to the PCL reduces the water contact angle, increases the dissolution rate, and strengthens the interactions between the hydrophilic drug and the polymer matrix that further sustain its release. Finally, using this method, we were also able to increase the nanofiber degradation rate when PCL-PEO and PCL-PVP were used as the shell in electrospun core-shell nanofibers and to speed up the release of active proteins from their core. Electrospinning can be used for the preparation of core-shell nanofibers, where active ingredients are encapsulated in the core and their release rate is regulated by the shell. However, such fibers are usually prepared by coaxial electrospinning, which is an extremely low-throughput technique. An alternative is emulsion electrospinning, which could be upscaled using needleless blades. In this work, we investigate the possibility of using emulsion electrospinning for the encapsulation and sustained release of growth factors for the development of organotypic skin models. The core-shell nanofibers were prepared using the optimized formulation, and the release rate of proteins from the fibers was investigated for 2 weeks, i.e., typical cell culture conditions.

Keywords: electrospinning, polycaprolactone (PCL), polyethylene oxide (PEO), polyvinylpyrrolidone (PVP)

Procedia PDF Downloads 263
1276 A Semantic Analysis of Modal Verbs in Barack Obama’s 2012 Presidential Campaign Speech

Authors: Kais A. Kadhim

Abstract:

This paper is a semantic analysis of the English modals in Obama's speeches. The main objective of this study is to analyze selected modal auxiliaries identified in selected speeches from Obama's campaign, based on Coates' (1983) semantic clusters. A total of fifteen speeches from Obama's campaign were selected as the primary data, and the modal auxiliaries selected for analysis include will, would, can, could, should, must, ought, shall, may, and might. All the modal auxiliaries taken from the speeches of Barack Obama were analyzed within the framework of Coates' semantic clusters. This analytical framework was applied to examine how modal auxiliaries are used in the context of persuading people in Obama's campaign speeches. The findings reveal that modals of intention, prediction, and futurity, together with modals of possibility, ability, and permission, are the ones most frequently used in Obama's campaign speeches.

Keywords: modals, meaning, persuasion, speech

Procedia PDF Downloads 385
1275 Mean Velocity Modeling of Open-Channel Flow with Submerged Vegetation

Authors: Mabrouka Morri, Amel Soualmia, Philippe Belleudy

Abstract:

Vegetation affects the mean and turbulent flow structure. It may increase flood risks and sediment transport. Therefore, it is important to develop analytical approaches for the bed shear stress on a vegetated bed in order to predict the resistance caused by vegetation. In recent years, both experimental and numerical models have been developed to model the effects of submerged vegetation on open-channel flow. In this paper, different analytic models are compared and tested using deviation criteria to explore their capacity for predicting the mean velocity, and to select a suitable one to apply to real rivers. The comparison between data measured in a vegetated flume and the simulated mean velocities indicated good performance in the case of rigid vegetation, with the Huthoff model showing the best agreement, with a high coefficient of determination (R² = 80%) and the smallest error in the prediction of the average velocities.

Keywords: analytic models, comparison, mean velocity, vegetation

Procedia PDF Downloads 258
1274 Economic Development Impacts of Connected and Automated Vehicles (CAV)

Authors: Rimon Rafiah

Abstract:

This paper presents a combination of two seemingly unrelated models: one for estimating the economic development impacts of transportation investment and another for increasing CAV penetration in order to reduce congestion. Measuring the economic development impacts resulting from transportation investments is becoming more widely recognized around the world. Examples include the UK's Wider Economic Benefits (WEB) model, Economic Impact Assessments in the USA, various input-output models, and additional models around the world. The economic impact model is based on WEB and rests on the following premise: investments in transportation will reduce the cost of personal travel, enabling firms to be more competitive, creating additional throughput (the same road allows more people to travel), and reducing the cost of workers' travel to a new workplace. This reduction in travel costs was estimated in out-of-pocket terms in a given localized area and was then translated into additional employment based on regional labor supply elasticity. This additional employment was conservatively assumed to be at minimum-wage levels, translated into GDP terms, and from there into direct taxation (i.e., an increase in tax taken by the government). The CAV model is based on economic principles such as CAV usage, supply, and demand. Usage of CAVs can increase capacity through a variety of means: increased automation (known as Level I through Level IV) and also increased penetration and usage, which several forecasts predict will reach 50% by 2030, with possible full conversion by 2045-2050. Several countries have passed policies and/or legislation ending sales of new gasoline-powered vehicles from 2030 onward. Supply was measured via the increased capacity of given infrastructure as a function of both CAV penetration and the implemented technologies. The CAV model, as implemented in the USA, has shown significant savings in travel time and also in vehicle operating costs, which can be translated into economic development impacts in terms of job creation, GDP growth, and salaries as well. The models have policy implications and can be adapted for use in Japan.

Keywords: CAV, economic development, WEB, transport economics

Procedia PDF Downloads 60
1273 BART Matching Method: Using Bayesian Additive Regression Tree for Data Matching

Authors: Gianna Zou

Abstract:

Propensity score matching (PSM), introduced by Paul R. Rosenbaum and Donald Rubin in 1983, is a popular statistical matching technique that tries to estimate treatment effects by taking into account covariates that could impact the efficacy of the study medication in clinical trials. PSM can be used to reduce the bias due to confounding variables. However, PSM assumes that the response values are normally distributed, and in some cases this assumption may not hold. In this paper, a machine learning method, Bayesian Additive Regression Trees (BART), is used as a more robust basis for matching. BART can work well when models are misspecified, since it can be used to model heterogeneous treatment effects. Moreover, it has the capability to handle non-linear main effects and multiway interactions. In this research, a BART Matching Method (BMM) is proposed to provide a more reliable matching method than PSM. A comparison of the analysis results from PSM and BMM shows that BMM performs well and has better prediction capability when the response values are not normally distributed.
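
BART itself is not part of scikit-learn, so the sketch below uses a gradient-boosted tree ensemble as a stand-in flexible regressor purely to illustrate the matching-on-a-predicted-response idea with a skewed (non-normal) outcome; an actual BMM implementation would swap in a BART model (e.g., the R BART package or a Python port). The data and the nearest-neighbour matching step are illustrative assumptions, not the author's procedure.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 5))                          # baseline covariates
treat = rng.integers(0, 2, n)                        # treatment indicator
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + 2.0 * treat + rng.standard_exponential(n)  # skewed outcome

# Stand-in for BART: a flexible tree ensemble fit on controls only, giving
# each unit a predicted "untreated" response used as the matching score.
surrogate = GradientBoostingRegressor(random_state=0).fit(X[treat == 0], y[treat == 0])
score = surrogate.predict(X).reshape(-1, 1)

# 1-nearest-neighbour matching of treated units to controls on that score.
nn = NearestNeighbors(n_neighbors=1).fit(score[treat == 0])
_, idx = nn.kneighbors(score[treat == 1])
matched_controls = np.flatnonzero(treat == 0)[idx.ravel()]

att = (y[treat == 1] - y[matched_controls]).mean()
print(f"matched estimate of the treatment effect: {att:.2f} (true effect = 2.0)")
```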

Keywords: BART, Bayesian, matching, regression

Procedia PDF Downloads 133