Search results for: computational model(s)
7429 Experiments on Weakly-Supervised Learning on Imperfect Data
Authors: Yan Cheng, Yijun Shao, James Rudolph, Charlene R. Weir, Beth Sahlmann, Qing Zeng-Treitler
Abstract:
Supervised predictive models require labeled data for training. Complete and accurate labeled data, i.e., a ‘gold standard’, is not always available, and imperfectly labeled data may need to serve as an alternative. An important question is whether the accuracy of the labeled data creates a performance ceiling for the trained model. In this study, we trained several models to recognize the presence of delirium in clinical documents using data with annotations that are not completely accurate (i.e., weakly-supervised learning). In the external evaluation, the support vector machine model with a linear kernel performed best, achieving an area under the curve of 89.3% and an accuracy of 88%, surpassing the 80% accuracy of the training sample. We then generated a set of simulated data and carried out a series of experiments which demonstrated that models trained on imperfect data can (but do not always) outperform the accuracy of the training data; e.g., the area under the curve for some models is higher than 80% when trained on data with an error rate of 40%. Our experiments also showed that the error resistance of linear modeling is associated with larger sample size, error type, and linearity of the data (all p-values < 0.001). In conclusion, this study sheds light on the usefulness of imperfect data in clinical research via weakly-supervised learning.
Keywords: weakly-supervised learning, support vector machine, prediction, delirium, simulation
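The abstract's central point can be illustrated with a toy sketch (not the paper's SVM pipeline; the data and noise model are invented here): a classifier trained on labels with a 40% symmetric error rate can still exceed the training labels' own accuracy, because symmetric noise shifts both class means toward each other without moving the decision boundary.

```python
# Toy demonstration of weakly-supervised learning on noisy labels.
# A nearest-centroid classifier is "trained" on 40%-corrupted labels,
# yet its test accuracy exceeds the 60% accuracy of its training labels.
import random

random.seed(0)

def make_data(n):
    xs, ys = [], []
    for _ in range(n):
        y = random.randint(0, 1)
        xs.append(random.gauss(-2.0 if y == 0 else 2.0, 1.0))
        ys.append(y)
    return xs, ys

x_train, y_clean = make_data(2000)
# Corrupt 40% of the training labels (weak supervision).
y_noisy = [1 - y if random.random() < 0.4 else y for y in y_clean]

# "Train": class means estimated from the noisy labels.
mean0 = sum(x for x, y in zip(x_train, y_noisy) if y == 0) / y_noisy.count(0)
mean1 = sum(x for x, y in zip(x_train, y_noisy) if y == 1) / y_noisy.count(1)

def predict(x):
    return 0 if abs(x - mean0) < abs(x - mean1) else 1

x_test, y_test = make_data(1000)
test_acc = sum(predict(x) == y for x, y in zip(x_test, y_test)) / len(y_test)
label_acc = sum(a == b for a, b in zip(y_noisy, y_clean)) / len(y_clean)
```

Here the training labels agree with the truth only about 60% of the time, yet the learned boundary classifies held-out data far more accurately, mirroring the abstract's observation.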
Procedia PDF Downloads 197
7428 Development of an Optimised, Automated Multidimensional Model for Supply Chains
Authors: Safaa H. Sindi, Michael Roe
Abstract:
This project divides supply chain (SC) models into seven Eras, according to the evolution of the market’s needs over time. The five earliest Eras describe the emergence of supply chains, while the last two Eras are to be created. Research objectives: The aim is to generate the two latest Eras with their respective models that focus on consumable goods. Era Six contains the Optimal Multidimensional Matrix (OMM) that incorporates most characteristics of the SC and allocates them into four quarters (Agile, Lean, Leagile, and Basic SC). This will help companies, especially SMEs, plan their optimal SC route. Era Seven creates an Automated Multidimensional Model (AMM), which upgrades the matrix of Era Six, as it accounts for all the supply chain factors (e.g., offshoring, sourcing, risk) in an interactive system with heuristic learning that helps larger companies and industries select the best SC model for their market. Methodologies: The data collection is based on a Fuzzy-Delphi study that analyses statements using fuzzy logic. The first round of the Delphi study will contain statements (fuzzy rules) about the matrix of Era Six. The second round of the Delphi study contains the feedback given from the first round, and so on. Preliminary findings: Both models are applicable. The Matrix of Era Six reduces the complexity of choosing the best SC model for SMEs by helping them identify the strategy of Basic SC, Lean, Agile, or Leagile SC that is tailored to their needs. The interactive heuristic learning in the AMM of Era Seven will help mitigate error and aid large companies in identifying and re-strategizing the best SC model and distribution system for their market and commodity, hence increasing efficiency. Potential contributions to the literature: The problematic issue facing many companies is deciding which SC model or strategy to incorporate, given the many models and definitions developed over the years.
This research simplifies this by putting most definitions in a template and most models in the Matrix of Era Six. This research is original in that the division of the SC into Eras and the combination of the Matrix of Era Six (OMM) with Fuzzy-Delphi and heuristic learning in the AMM of Era Seven provide a synergy of tools that have not been combined before in the area of SC. Additionally, the OMM of Era Six is unique as it combines most characteristics of the SC, which is an original concept in itself.
Keywords: Leagile, automation, heuristic learning, supply chain models
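The fuzzy-Delphi aggregation step mentioned above can be sketched as follows; the rating scale, triangular-fuzzy-number construction, and consensus threshold below are common textbook choices and are assumptions here, not taken from the study.

```python
# Illustrative fuzzy-Delphi aggregation of expert ratings (0-10 scale
# and threshold are assumptions). Each statement's ratings form a
# triangular fuzzy number (min, mean, max), which is defuzzified by the
# simple centroid and compared against a consensus threshold.
def fuzzy_delphi(ratings, threshold=6.0):
    l, u = min(ratings), max(ratings)
    m = sum(ratings) / len(ratings)      # triangular midpoint
    score = (l + m + u) / 3.0            # centroid defuzzification
    return score, score >= threshold

# Five experts rate one statement (fuzzy rule) about the Era Six matrix.
score, accepted = fuzzy_delphi([7, 8, 6, 9, 7])
```

Statements whose defuzzified score clears the threshold are retained for the next Delphi round; the rest are revised or dropped.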
Procedia PDF Downloads 388
7427 Numerical Investigation of Two Turbulence Models for Predicting the Temperature Separation in Conical Vortex Tube
Authors: M. Guen
Abstract:
A three-dimensional numerical study is used to analyze the behavior of the flow inside a vortex tube. The vortex tube, or Ranque-Hilsch vortex tube, is a simple device which divides compressed air entering tangentially through the inlet nozzle into two streams with different temperatures, one warm and one cold. This phenomenon is known in the literature as temperature separation. The k-ω SST and k-ε turbulence models are used to predict the turbulent flow behaviour inside the tube. The vortex tube is an Exair 708 slpm (25 scfm) commercial tube. The cold and hot exit areas are 30.2 and 95 mm², respectively. The vortex nozzle consists of 6 straight slots; the height and the width of each slot are 0.97 mm and 1.41 mm. The total area normal to the flow associated with the six nozzles is therefore 8.15 mm². The present study focuses on a comparison between the two turbulence models, k-ω SST and k-ε, using a new configuration of vortex tube (the conical vortex tube). The performance curves of temperature separation versus cold outlet mass fraction were calculated and compared with the experimental and numerical studies of other researchers.
Keywords: conical vortex tube, temperature separation, cold mass fraction, turbulence
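The cold mass fraction that parameterizes the performance curves above follows from a simple adiabatic energy balance; assuming equal specific heats, m_in·T_in = m_c·T_c + m_h·T_h gives y_c = (T_h − T_in)/(T_h − T_c). The temperatures below are illustrative, not the study's values.

```python
# Adiabatic energy balance for a vortex tube with equal specific heats:
# the cold mass fraction y_c follows directly from the three stream
# temperatures (illustrative values, in kelvin).
def cold_mass_fraction(t_in, t_cold, t_hot):
    return (t_hot - t_in) / (t_hot - t_cold)

y_c = cold_mass_fraction(t_in=293.0, t_cold=273.0, t_hot=313.0)
```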
Procedia PDF Downloads 247
7426 Kinetics, Equilibrium and Thermodynamics of the Adsorption of Triphenyltin onto NanoSiO₂/Fly Ash/Activated Carbon Composite
Authors: Olushola S. Ayanda, Olalekan S. Fatoki, Folahan A. Adekola, Bhekumusa J. Ximba, Cecilia O. Akintayo
Abstract:
In the present study, the kinetics, equilibrium and thermodynamics of the adsorption of triphenyltin (TPT) from TPT-contaminated water onto a nanoSiO₂/fly ash/activated carbon composite were investigated in a batch adsorption system. Equilibrium adsorption data were analyzed using the Langmuir, Freundlich, Temkin and Dubinin–Radushkevich (D-R) isotherm models. Pseudo first- and second-order, Elovich and fractional power models were applied to test the kinetic data. In order to understand the mechanism of adsorption, thermodynamic parameters such as ΔG°, ΔS° and ΔH° were also calculated. The results showed very good compliance with the pseudo second-order equation, while the Freundlich and D-R models fit the experimental data. Approximately 99.999% of TPT was removed from an initial concentration of 100 mg/L TPT at 80 °C, a contact time of 60 min, pH 8 and a stirring speed of 200 rpm. Thus, the nanoSiO₂/fly ash/activated carbon composite could be used as an effective adsorbent for the removal of TPT from contaminated water and wastewater.
Keywords: isotherm, kinetics, nanoSiO₂/fly ash/activated carbon composite, triphenyltin
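The pseudo second-order fit reported above is conventionally done on the linearised form t/q_t = 1/(k₂·qe²) + t/qe, so a straight-line regression of t/q_t against t recovers qe from the slope and k₂ from the intercept. A minimal sketch on synthetic data (the rate constants below are illustrative, not the paper's values):

```python
# Pseudo-second-order kinetic fit on synthetic uptake data.
# Model: q_t = k2*qe^2*t / (1 + k2*qe*t); linearised: t/q_t = 1/(k2*qe^2) + t/qe.
qe_true, k2_true = 25.0, 0.01
ts = [5.0, 10.0, 20.0, 30.0, 60.0, 90.0, 120.0]
qts = [k2_true * qe_true ** 2 * t / (1 + k2_true * qe_true * t) for t in ts]

# Ordinary least squares of y = t/q_t against t.
ys = [t / q for t, q in zip(ts, qts)]
n = len(ts)
tbar, ybar = sum(ts) / n, sum(ys) / n
slope = (sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
         / sum((t - tbar) ** 2 for t in ts))
intercept = ybar - slope * tbar

qe_fit = 1.0 / slope             # equilibrium uptake from the slope
k2_fit = slope ** 2 / intercept  # rate constant, since intercept = 1/(k2*qe^2)
```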
Procedia PDF Downloads 291
7425 Experimental and Numerical Determination of the Freeze Point Depression of a Multi-Phase Flow in a Scraped Surface Heat Exchanger
Authors: Carlos A. Acosta, Amar Bhalla, Ruyan Guo
Abstract:
Scraped surface heat exchangers (SSHE) use a rotor shaft assembly with scraping blades to homogenize viscous fluids during the heat transfer process. Obtaining in-situ measurements is difficult because the rotor and scraping blades spin continuously inside the mixing chamber, obstructing the instrumentation pathway. Computational fluid dynamics simulations provide useful insight into the flow behavior around the scraper blades for a variety of fluids and blade geometries. However, numerical solutions often focus on the fluid dynamics and heat transfer phenomena of rotating flow, ignoring the glass-transition temperature and freezing point depression. This research studies the multi-phase fluid dynamics and freezing point depression inside an SSHE under non-isothermal conditions in a time-dependent process, using an aqueous solution that contains 13.5 wt.% high fructose corn syrup and CO₂. The computational results were validated with in-situ pressure, temperature, and optical spectroscopy measurements. Results from the numerical model show good quantitative agreement with experimental values.
Keywords: computational fluid dynamics, freezing point depression, phase-transition temperature, multi-phase flow
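For a rough sense of the magnitudes involved, the ideal dilute-solution (colligative) estimate ΔT_f = K_f·m can be applied to a 13.5 wt.% fructose solution. This back-of-envelope check treats the solute as pure fructose and ignores the dissolved CO₂ present in the paper's system, so it is a bounding sketch rather than the study's model.

```python
# Colligative freezing point depression, dT = Kf * m, for a fructose
# solution (ideal-solution assumption; dissolved CO2 is ignored).
KF_WATER = 1.86       # K*kg/mol, cryoscopic constant of water
M_FRUCTOSE = 180.16   # g/mol, molar mass of fructose

def freeze_depression(wt_pct):
    grams_solute = wt_pct                 # per 100 g of solution
    kg_water = (100.0 - wt_pct) / 1000.0
    molality = (grams_solute / M_FRUCTOSE) / kg_water
    return KF_WATER * molality            # depression in kelvin

dT = freeze_depression(13.5)
```

The estimate comes out around 1.6 K, i.e., the solution freezes a little below 0 °C, which is the kind of shift the CFD model must resolve.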
Procedia PDF Downloads 146
7424 Computational Fluid Dynamics Analysis of Cyclone Separator Performance Using Discrete Phase Model
Authors: Sandeep Mohan Ahuja, Gulshan Kumar Jawa
Abstract:
Cyclone separators are crucial components in various industries, tasked with efficiently separating particulate matter from gas streams. Achieving optimal performance hinges on a deep understanding of the flow dynamics and particle behaviour within these separators. In this investigation, Computational Fluid Dynamics (CFD) simulations are conducted utilizing the Discrete Phase Model (DPM) to dissect the intricate flow patterns, particle trajectories, and separation efficiency within cyclone separators. The study delves into the influence of pivotal parameters such as inlet velocity, particle size distribution, and cyclone geometry on separation efficiency. Through numerical simulations, a comprehensive understanding of fluid-particle interaction phenomena within cyclone separators is attained, allowing for the assessment of solid collection efficiency across diverse operational conditions and geometrical setups. The insights gleaned from this study promise to advance our understanding of the complex interplay between fluid and particles within cyclone separators, thereby enabling optimization across a wide array of industrial applications. By harnessing the power of CFD simulations and the DPM, this research endeavours to furnish valuable insights for designing, operating, and evaluating the performance of cyclone separators, ultimately fostering greater efficiency and environmental sustainability within industrial processes.
Keywords: cyclone separator, computational fluid dynamics, enhancing efficiency, discrete phase model
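A quick analytical cross-check for DPM collection-efficiency results of this kind is the classical Lapple cut-size correlation and grade-efficiency curve; the operating values below are illustrative, not taken from the study, and the correlation is a textbook approximation rather than the paper's CFD model.

```python
import math

# Classical Lapple cyclone model: cut diameter d50 (collected with 50%
# efficiency) and the grade-efficiency curve (illustrative inputs).
def lapple_d50(mu, inlet_width, n_turns, v_inlet, rho_p, rho_g):
    """Cut diameter in metres."""
    return math.sqrt(9 * mu * inlet_width /
                     (2 * math.pi * n_turns * v_inlet * (rho_p - rho_g)))

def lapple_efficiency(d, d50):
    """Fractional collection efficiency for particle diameter d."""
    return 1.0 / (1.0 + (d50 / d) ** 2)

d50 = lapple_d50(mu=1.8e-5,        # gas viscosity, Pa*s
                 inlet_width=0.05,  # m
                 n_turns=6,         # effective vortex turns
                 v_inlet=15.0,      # m/s
                 rho_p=2000.0,      # particle density, kg/m^3
                 rho_g=1.2)         # gas density, kg/m^3
eff_10um = lapple_efficiency(10e-6, d50)
```

Such closed-form estimates are useful sanity checks on simulated grade-efficiency curves before sweeping inlet velocity or geometry in CFD.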
Procedia PDF Downloads 50
7423 Strategy Management of Soybean (Glycine max L.) for Dealing with Extreme Climate through the Use of Cropsyst Model
Authors: Aminah Muchdar, Nuraeni, Eddy
Abstract:
The aims of the research are: (1) to verify the CropSyst crop model against experimental field data for soybean, and (2) to predict the planting time and potential yield of soybean using the CropSyst model. The research is divided into several stages: (1) a calibration stage, conducted in the field from June until September 2015; and (2) a model-application stage, in which the data obtained from the field calibration are fed into the CropSyst model. The required model inputs are climate data, soil data, and crop genetic data. The agreement between the field observations and the CropSyst simulation is indicated by an Efficiency Index (EF) of 0.939, showing that the CropSyst model performs well here. The calculated RRMSE of 1.922% shows that the prediction error of the simulation relative to the field results is about 1.92%. It is concluded that the CropSyst-based prediction of soybean planting time is valid for use, and that the appropriate planting time for soybeans, mainly on rain-fed land, is at the end of the rainy season; in this study, the first planting time (June 2, 2015) gave the highest production, because some rain still fell at that time. The Tanggamus variety is more resistant to delayed planting: its percentage decrease in yield for each decade of delay is lower than the average of all varieties.
Keywords: soybean, CropSyst, calibration, efficiency index, RRMSE
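The two goodness-of-fit statistics quoted above can be computed as follows; EF is the Nash-Sutcliffe modelling efficiency and RRMSE is the root mean square error expressed relative to the observed mean. The paired values in the example are illustrative, not the study's data.

```python
import math

# Efficiency Index (Nash-Sutcliffe EF) and relative RMSE (RRMSE, %)
# for paired observed/simulated values.
def ef(observed, simulated):
    obar = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    sst = sum((o - obar) ** 2 for o in observed)
    return 1.0 - sse / sst           # 1.0 = perfect agreement

def rrmse(observed, simulated):
    mse = sum((o - s) ** 2 for o, s in zip(observed, simulated)) / len(observed)
    return 100.0 * math.sqrt(mse) / (sum(observed) / len(observed))

obs = [2.1, 2.4, 1.9, 2.8, 2.5]   # e.g. observed yields, t/ha (illustrative)
sim = [2.0, 2.5, 2.0, 2.7, 2.6]   # simulated counterparts
```

An EF close to 1 (as the study's 0.939) and a small RRMSE together indicate that the simulated series tracks the observations closely.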
Procedia PDF Downloads 178
7422 A Parallel Approach for 3D-Variational Data Assimilation on GPUs in Ocean Circulation Models
Authors: Rossella Arcucci, Luisa D'Amore, Simone Celestino, Giuseppe Scotti, Giuliano Laccetti
Abstract:
This work is the first step in a rather wide research activity, in collaboration with the Euro Mediterranean Center for Climate Changes, aimed at introducing scalable approaches in Ocean Circulation Models. We discuss the design and implementation of a parallel algorithm for solving the Variational Data Assimilation (DA) problem on Graphics Processing Units (GPUs). The algorithm is based on the fully scalable 3DVar DA model, previously proposed by the authors, which uses a Domain Decomposition approach (we refer to this model as the DD-DA model). We proceed with an incremental porting process consisting of three distinct stages: requirements and source code analysis, incremental development of CUDA kernels, and testing and optimization. Experiments confirm the theoretical performance analysis based on the so-called scale-up factor, demonstrating that the DD-DA model can be suitably mapped on GPU architectures.
Keywords: data assimilation, GPU architectures, ocean models, parallel algorithm
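At its core, the 3DVar analysis underlying the DA model minimises a cost function that blends a background estimate with an observation, weighted inversely to their error variances. A one-variable toy (not the authors' DD-DA implementation) makes the structure explicit: J(x) = (x − x_b)²/B + (x − y)²/R, whose minimiser is a precision-weighted average.

```python
# Scalar 3DVar analysis: minimise J(x) = (x - xb)^2/B + (x - y)^2/R.
# Setting dJ/dx = 0 gives the weighted average below (H = identity).
def threedvar_scalar(xb, y, B, R):
    """xb: background, y: observation, B/R: their error variances."""
    return (xb / B + y / R) / (1.0 / B + 1.0 / R)

# Background 10.0 (variance 4) vs observation 12.0 (variance 1):
# the analysis lands much closer to the more trusted observation.
xa = threedvar_scalar(xb=10.0, y=12.0, B=4.0, R=1.0)
```

The domain-decomposition idea of the DD-DA model amounts to solving many such local minimisations concurrently, one per subdomain, which is what makes the problem map well onto GPU thread blocks.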
Procedia PDF Downloads 410
7421 Kalman Filter for Bilinear Systems with Application
Authors: Abdullah E. Al-Mazrooei
Abstract:
In this paper, we present a new kind of bilinear system in state space form. The evolution of this system depends on the product of the state vector with itself. The well-known Lotka-Volterra and Lorenz models are special cases of this new model. We also present a generalization of the Kalman filter which is suitable for the new bilinear model. An application to real measurements is introduced to illustrate the efficiency of the proposed algorithm.
Keywords: bilinear systems, state space model, Kalman filter, application, models
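The flavour of filtering a state whose evolution involves the product of the state with itself can be sketched with a scalar extended-Kalman-style filter for x_{k+1} = a·x_k + b·x_k² + w. This is an illustrative stand-in, not the paper's generalised filter or its exact equations.

```python
import random

# Scalar EKF-style filter for a quadratic-in-state model
# x_{k+1} = a*x_k + b*x_k^2 + w,   z_k = x_k + v.
random.seed(1)
a, b = 0.9, 0.05
Q, R = 0.01, 0.25            # process / measurement noise variances

def step_filter(x_est, P, z):
    # Predict through the nonlinear model; F = a + 2*b*x is its Jacobian.
    x_pred = a * x_est + b * x_est ** 2
    F = a + 2 * b * x_est
    P_pred = F * P * F + Q
    # Update with measurement z (observation matrix H = 1).
    K = P_pred / (P_pred + R)
    return x_pred + K * (z - x_pred), (1 - K) * P_pred

# Simulate the true state and noisy measurements, then filter them.
x_true, x_est, P = 1.0, 0.0, 1.0
err_meas, err_filt, n = 0.0, 0.0, 200
for _ in range(n):
    x_true = a * x_true + b * x_true ** 2 + random.gauss(0, Q ** 0.5)
    z = x_true + random.gauss(0, R ** 0.5)
    x_est, P = step_filter(x_est, P, z)
    err_meas += (z - x_true) ** 2
    err_filt += (x_est - x_true) ** 2
```

The filtered estimate accumulates a much smaller squared error than the raw measurements, which is the behaviour the paper's application to real data is meant to demonstrate.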
Procedia PDF Downloads 439
7420 Housing Price Prediction Using Machine Learning Algorithms: The Case of Melbourne City, Australia
Authors: The Danh Phan
Abstract:
House price forecasting is a central topic in real estate market research. Effective house price prediction models could not only allow home buyers and real estate agents to make better data-driven decisions but may also be beneficial to the property policymaking process. This study investigates the housing market by using machine learning techniques to analyze real historical house sale transactions in Australia. It seeks useful models which could be deployed as an application for house buyers and sellers. Data analytics show a high discrepancy between house prices in the most expensive and the most affordable suburbs in the city of Melbourne. In addition, experiments demonstrate that the combination of stepwise selection and Support Vector Machine (SVM) regression, judged by the Mean Squared Error (MSE) measurement, consistently outperforms the other models in terms of prediction accuracy.
Keywords: house price prediction, regression trees, neural network, support vector machine, stepwise
Procedia PDF Downloads 228
7419 A Numerical Model Simulation for an Updraft Gasifier Using High-Temperature Steam
Authors: T. M. Ismail, M. A. El-Salam
Abstract:
A mathematical model study was carried out to investigate the gasification of biomass fuels using high-temperature air and steam (up to 1000 °C) as the gasifying agent. In this study, a 2D computational fluid dynamics model was developed to study the gasification process in an updraft gasifier, considering drying, pyrolysis, combustion, and gasification reactions. The gas and solid phases were resolved using a Euler-Euler multiphase approach, with exchange terms for momentum, mass, and energy. The standard k-ε turbulence model was used in the gas phase, and the particle phase was modeled using the kinetic theory of granular flow. The results show that the present model offers a promising approach, in both its capability and its sensitivity to the parameter effects that influence the gasification process.
Keywords: computational fluid dynamics, gasification, biomass fuel, fixed bed gasifier
Procedia PDF Downloads 404
7418 A Computational Cost-Effective Clustering Algorithm in Multidimensional Space Using the Manhattan Metric: Application to the Global Terrorism Database
Authors: Semeh Ben Salem, Sami Naouali, Moetez Sallami
Abstract:
The increasing amount of collected data has limited the performance of current analysis algorithms. Thus, developing new cost-effective algorithms in terms of complexity, scalability, and accuracy has raised significant interest. In this paper, a modified, effective k-means based algorithm is developed and experimented with. The new algorithm aims to reduce the computational load without significantly affecting the quality of the clusterings. The algorithm uses the City Block distance and a new stop criterion to guarantee convergence. Experiments conducted on a real data set show its high performance when compared with the original k-means version.
Keywords: pattern recognition, global terrorism database, Manhattan distance, k-means clustering, terrorism data analysis
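A minimal k-means-style loop using the City Block (Manhattan) metric looks as follows. The stop criterion here is plain assignment stability, an assumption for illustration rather than the authors' new criterion; note that under the L1 metric the distortion-minimising center update is the coordinate-wise median, not the mean.

```python
import random

# k-means variant under the City Block (L1 / Manhattan) metric.
def manhattan(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

def kmeans_l1(points, k, iters=100, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    assign = None
    for _ in range(iters):
        new_assign = [min(range(k), key=lambda c: manhattan(p, centers[c]))
                      for p in points]
        if new_assign == assign:      # stop: labels no longer change
            break
        assign = new_assign
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                # Coordinate-wise median minimises the L1 distortion.
                centers[c] = tuple(sorted(col)[len(col) // 2]
                                   for col in zip(*members))
    return centers, assign

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, labels = kmeans_l1(pts, 2)
```

The L1 distance avoids the squaring and square roots of the Euclidean metric, which is one source of the reduced computational load discussed above.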
Procedia PDF Downloads 385
7417 Time Series Forecasting (TSF) Using Various Deep Learning Models
Authors: Jimeng Shi, Mahek Jain, Giri Narasimhan
Abstract:
Time Series Forecasting (TSF) is used to predict the target variables at a future time point based on learning from previous time points. To keep the problem tractable, learning methods use data from a fixed-length window in the past as an explicit input. In this paper, we study how the performance of predictive models changes as a function of different look-back window sizes and different amounts of time to predict into the future. We also consider the performance of the recent attention-based Transformer models, which have had good success in the image processing and natural language processing domains. In all, we compare four different deep learning methods (RNN, LSTM, GRU, and Transformer) along with a baseline method. The dataset we used is the hourly Beijing Air Quality Dataset from the UCI website, which includes a multivariate time series of many factors measured on an hourly basis for a period of 5 years (2010-14). For each model, we also report on the relationship between the performance and the look-back window sizes and the number of predicted time points into the future. Our experiments suggest that Transformer models have the best performance, with the lowest Mean Absolute Errors (MAE = 14.599, 23.273) and Root Mean Square Errors (RMSE = 23.573, 38.131) for most of our single-step and multi-step predictions. The best size for the look-back window to predict 1 hour into the future appears to be one day, while 2 or 4 days perform best to predict 3 hours into the future.
Keywords: air quality prediction, deep learning algorithms, time series forecasting, look-back window
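The fixed-length look-back windowing described above turns a series into supervised (window → horizon) pairs that any of the compared models can consume; a minimal sketch:

```python
# Turn a 1-D series into (look_back -> horizon) training pairs.
def make_windows(series, look_back, horizon):
    xs, ys = [], []
    for i in range(len(series) - look_back - horizon + 1):
        xs.append(series[i:i + look_back])                       # input window
        ys.append(series[i + look_back:i + look_back + horizon]) # target(s)
    return xs, ys

series = list(range(10))   # stand-in for hourly PM readings
x, y = make_windows(series, look_back=4, horizon=2)
```

Sweeping `look_back` (e.g., 24 hours vs. 48 or 96) and `horizon` (1 hour vs. 3) over such pairs is exactly the experiment grid the paper reports on.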
Procedia PDF Downloads 151
7416 OpenMP Parallelization of Three-Dimensional Magnetohydrodynamic Code FOI-PERFECT
Authors: Jiao F. Huang, Shi Chen, Shu C. Duan, Gang H. Wang
Abstract:
Due to its complex spatial structure as well as its dynamic temporal evolution, an analytic solution of an X-pinch process is out of the question, and numerical simulation becomes an important tool in X-pinch studies. Intrinsically, simulations of X-pinch are three-dimensional (3D) because of the specific structure of its load. Furthermore, in order to resolve both its μm scales and ns durations, fine spatial mesh grids and short time steps are usually adopted. The resulting large computational scales make the parallelization of codes a vital problem to be solved if any practical simulations are to be carried out. In this work, we report the OpenMP parallelization of our 3D magnetohydrodynamic (MHD) code FOI-PERFECT. Results of test runs confirm that computational efficiency has been improved after parallelization, and both the sequential and parallel versions give the same physical results under the same initial conditions.
Keywords: MHD simulation, OpenMP, parallelization, X-pinch
Procedia PDF Downloads 337
7415 Generalized Hyperbolic Functions: Exponential-Type Quantum Interactions
Authors: Jose Juan Peña, J. Morales, J. García-Ravelo
Abstract:
In the search for potential models applied in the theoretical treatment of diatomic molecules, some have been constructed using standard hyperbolic functions as well as the so-called q-deformed hyperbolic functions (q-dhf), which displace and modify the shape of the potential under study. In order to transcend the scope of hyperbolic functions, in this work a kind of generalized q-deformed hyperbolic function (g q-dhf) is presented. It is shown that, by a suitable transformation through the q deformation parameter, these g q-dhf can be expressed in terms of their corresponding standard ones, and that they can be reduced to the standard q-dhf. As a useful application of the proposed approach, and considering a class of exactly solvable multi-parameter exponential-type potentials, some new q-deformed quantum interaction models are presented that can be used as interesting alternatives in quantum physics and quantum states. Furthermore, because the quantum potential models are conditioned on the q-dependence of the parameters that characterize the exponential-type potentials, it is shown that many specific cases of q-deformed potentials are obtained as particular cases of the proposal.
Keywords: diatomic molecules, exponential-type potentials, hyperbolic functions, q-deformed potentials
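For concreteness, the standard q-deformed hyperbolic functions referred to above are sinh_q(x) = (eˣ − q·e⁻ˣ)/2 and cosh_q(x) = (eˣ + q·e⁻ˣ)/2. They reduce to sinh/cosh at q = 1, satisfy cosh_q² − sinh_q² = q, and the deformation is equivalent to a shift, f_q(x) = √q·f(x − ln(q)/2) (which is the kind of transformation the abstract alludes to). A short numerical sketch:

```python
import math

# Standard q-deformed hyperbolic functions (Arai's q-deformation).
def sinh_q(x, q=1.0):
    return (math.exp(x) - q * math.exp(-x)) / 2.0

def cosh_q(x, q=1.0):
    return (math.exp(x) + q * math.exp(-x)) / 2.0

x, q = 0.7, 2.0
identity = cosh_q(x, q) ** 2 - sinh_q(x, q) ** 2   # equals q
shifted = math.sqrt(q) * math.sinh(x - math.log(q) / 2.0)  # equals sinh_q(x, q)
```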
Procedia PDF Downloads 181
7414 Low Complexity Deblocking Algorithm
Authors: Jagroop Singh Sidhu, Buta Singh
Abstract:
A low-complexity deblocking filter including three frequency-related modes (a smooth mode, an intermediate mode, and a non-smooth mode for low-frequency, mid-frequency, and high-frequency regions, respectively) is proposed. The suggested approach requires zero additions, zero subtractions, zero multiplications (for the intermediate region), no divisions (for the non-smooth region) and no comparisons. The suggested method thus keeps the computation low and is suitable for block-based image coding systems. A comparison of the average number of operations for the smooth, non-smooth, and intermediate modes (per pixel vector for each block) between the filter suggested by Chen and the proposed filter shows that the proposed filter keeps the computation lower and is thus suitable for fast processing algorithms.
Keywords: blocking artifacts, computational complexity, non-smooth, intermediate, smooth
Procedia PDF Downloads 460
7413 Model-Based Process Development for the Comparison of a Radial Riveting and Roller Burnishing Process in Mechanical Joining Technology
Authors: Tobias Beyer, Christoph Friedrich
Abstract:
Modern simulation methodology using finite element models is nowadays a recognized tool for product design and optimization. Likewise, manufacturing process design is increasingly becoming a focus of simulation methodology, in order to enable sustainable results based on reduced real-life testing here as well. In this article, two process simulations, radial riveting and roller burnishing, used for the mechanical joining of components are explained. In the first step, the required boundary conditions are developed and implemented in the respective simulation models. This is followed by process space validation. With the help of the validated models, the interdependencies of the input parameters are investigated and evaluated by means of sensitivity analyses. Limit case investigations are carried out and evaluated with the aid of the process simulations. Likewise, a comparison of the two joining methods with each other becomes possible.
Keywords: FEM, model-based process development, process simulation, radial riveting, roller burnishing, sensitivity analysis
Procedia PDF Downloads 106
7412 A Study of Two Disease Models: With and Without Incubation Period
Authors: H. C. Chinwenyi, H. D. Ibrahim, J. O. Adekunle
Abstract:
The incubation period is defined as the time from infection with a microorganism to the development of symptoms. In this research, two disease models, one with an incubation period and one without, were studied. The study involves the use of a mathematical model with a single incubation period. Tests for the existence and stability of the disease-free and endemic equilibrium states of both models were carried out. The fourth-order Runge-Kutta method was used to solve both models numerically. Finally, a computer program in MATLAB was developed to run the numerical experiments. From the results, we show that the endemic equilibrium state of the model with an incubation period is locally asymptotically stable, whereas the endemic equilibrium state of the model without an incubation period is unstable under certain conditions on the given model parameters. It was also established that the disease-free equilibrium states of the models with and without an incubation period are locally asymptotically stable. Furthermore, results from numerical experiments using empirical data obtained from the Nigeria Centre for Disease Control (NCDC) showed that the overall population of infected people for the model with an incubation period is higher than for that without an incubation period. We also established from the results that as the transmission rate from the susceptible to the infected population increases, the peak values of the infected population for the model with an incubation period decrease and are always less than those for the model without an incubation period.
Keywords: asymptotic stability, Hartman-Grobman stability criterion, incubation period, Routh-Hurwitz criterion, Runge-Kutta method
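The qualitative finding about peak infections can be reproduced with a toy fourth-order Runge-Kutta comparison of a compartmental model without an incubation period (SIR) and one with it (SEIR). The parameter values below are illustrative, not the NCDC-fitted ones, and the compartmental structure is a generic stand-in for the paper's specific model.

```python
# RK4 comparison of SIR (no incubation) vs SEIR (with incubation).
def rk4(f, y, t, dt):
    k1 = f(t, y)
    k2 = f(t + dt / 2, [a + dt / 2 * b for a, b in zip(y, k1)])
    k3 = f(t + dt / 2, [a + dt / 2 * b for a, b in zip(y, k2)])
    k4 = f(t + dt, [a + dt * b for a, b in zip(y, k3)])
    return [a + dt / 6 * (b + 2 * c + 2 * d + e)
            for a, b, c, d, e in zip(y, k1, k2, k3, k4)]

beta, gamma, sigma = 0.4, 0.1, 0.2  # transmission, recovery, incubation rates

def sir(t, y):
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

def seir(t, y):
    S, E, I, R = y
    return [-beta * S * I, beta * S * I - sigma * E,
            sigma * E - gamma * I, gamma * I]

def peak_infected(f, y0, days=400, dt=0.5):
    t, y, peak = 0.0, y0, 0.0
    i_idx = len(y0) - 2              # I is the second-to-last compartment
    for _ in range(int(days / dt)):
        y = rk4(f, y, t, dt)
        t += dt
        peak = max(peak, y[i_idx])
    return peak

peak_sir = peak_infected(sir, [0.99, 0.01, 0.0])
peak_seir = peak_infected(seir, [0.99, 0.0, 0.01, 0.0])
```

With the same transmission and recovery rates, the latent (incubation) compartment spreads infectiousness out in time, so the SEIR peak prevalence stays below the SIR peak, consistent with the comparison reported above.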
Procedia PDF Downloads 173
7411 Demand for Domestic Marine and Coastal Tourism and Day Trips on an Island Nation
Authors: John Deely, Stephen Hynes, Mary Cawley, Sarah Hogan
Abstract:
Domestic marine and coastal tourism has increased in importance in recent years due to the impacts of international travel, environmental concerns, associated health benefits, and COVID-19 related travel restrictions. Consequently, this paper conceptualizes domestic marine and coastal tourism within an economic framework. Two logit models examine the factors that influence participation in the coastal day-trip and overnight-stay markets, respectively. Two truncated travel cost models are employed to explore trip duration, one analyzing the number of day trips taken and the other examining the number of nights spent in marine and coastal areas. Although a range of variables predicts participation, no one variable had a significant and consistent effect in every model. A division in access to domestic marine and coastal tourism is also observed, based on variation in household income. The results also indicate a vibrant day-trip market and large consumer surpluses.
Keywords: domestic marine and coastal tourism, day tripper, participation models, truncated travel cost model
Procedia PDF Downloads 131
7410 AI-Driven Forecasting Models for Anticipating Oil Market Trends and Demand
Authors: Gaurav Kumar Sinha
Abstract:
The volatility of the oil market, influenced by geopolitical, economic, and environmental factors, presents significant challenges for stakeholders in predicting trends and demand. This article explores the application of artificial intelligence (AI) in developing robust forecasting models to anticipate changes in the oil market more accurately. We delve into various AI techniques, including machine learning, deep learning, and time series analysis, that have been adapted to analyze historical data and current market conditions to forecast future trends. The study evaluates the effectiveness of these models in capturing complex patterns and dependencies in market data, which traditional forecasting methods often miss. Additionally, the paper discusses the integration of external variables, such as political events, economic policies, and technological advancements, that influence oil prices and demand. By leveraging AI, stakeholders can achieve a more nuanced understanding of market dynamics, enabling better strategic planning and risk management. The article concludes with a discussion of the potential of AI-driven models to enhance the predictive accuracy of oil market forecasts and their implications for global economic planning and strategic resource allocation.
Keywords: AI forecasting, oil market trends, machine learning, deep learning, time series analysis, predictive analytics, economic factors, geopolitical influence, technological advancements, strategic planning
Procedia PDF Downloads 33
7409 Numerical Simulation of Fluid Structure Interaction Using Two-Way Method
Authors: Samira Laidaoui, Mohammed Djermane, Nazihe Terfaya
Abstract:
Fluid-structure coupling is a natural phenomenon in which two continua of different types, a fluid and a structure, act reciprocally on each other; its study involves knowledge of both elasticity and fluid mechanics. The solution of such problems is based on the relations of continuum mechanics and is mostly obtained with numerical methods. Solving such problems is a computational challenge because of the complex geometries, the intricate physics of fluids, and the complicated fluid-structure interactions. The way in which the interaction between fluid and solid is described gives the largest opportunity for reducing the computational effort. In this paper, a fluid-structure interaction problem is investigated with the two-way coupling method. The Arbitrary Lagrangian-Eulerian (ALE) formulation was used, with a dynamic grid, where the solid is described by a Lagrangian formulation and the fluid by an Eulerian formulation. The simulation was carried out in the ANSYS software.
Keywords: ALE, coupling, FEM, fluid-structure, interaction, one-way method, two-way method
Procedia PDF Downloads 676
7408 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland
Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski
Abstract:
PM10 is a suspended dust that primarily has a negative effect on the respiratory system. PM10 is responsible for attacks of coughing and wheezing, asthma or acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular, due to large PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to deal with the problem of prediction of suspended particulate matter concentration. Due to the very complicated nature of this issue, the Machine Learning approach was used. For this purpose, Convolution Neural Network (CNN) neural networks have been adopted, these currently being the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN network parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is carried out for the subsequent day, hour after hour. The evaluation of learning process for the investigated models was mostly based upon the mean square error criterion; however, during the model validation, a number of other methods of quantitative evaluation were taken into account. The presented model of pollution prediction has been verified by way of real weather and air pollution data taken from the Airly sensor network. The dense and distributed network of Airly measurement devices enables access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5, and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5, and PM10, temperature and wind information, as well as external forecasts of temperature and wind for next 24h served as inputted data. 
Due to the specifics of the CNN architecture, these data are transformed into tensors and then processed. The network consists of an input layer, an output layer, and many hidden layers. In the hidden layers, convolutional and pooling operations are performed. The output of the system is a 24-element vector containing the predicted PM10 concentration for each hour of the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study. Several that gave the best results were selected, and a comparison was then made with models based on linear regression. The numerical tests, carried out on real 'big' data, fully confirmed the positive properties of the presented method. Models based on the CNN technique allow prediction of PM10 dust concentration with a much smaller mean square error than currently used methods based on linear regression. What is more, the use of neural networks increased the coefficient of determination (R²) by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hour of prediction, respectively. Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks
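The two evaluation measures mentioned above (mean square error for training, R² for model comparison) can be sketched as plain functions; the sample forecast values below are hypothetical placeholders, not the study's data.

```python
def mse(y_true, y_pred):
    """Mean square error, the training criterion used in the study."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def r_squared(y_true, y_pred):
    """Coefficient of determination (R^2) used for model comparison."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical PM10 measurements vs. forecast (µg/m³)
measured = [42.0, 45.0, 50.0, 48.0]
forecast = [40.0, 46.0, 49.0, 47.0]
print(round(mse(measured, forecast), 3))        # 1.75
print(round(r_squared(measured, forecast), 3))
```

An R² near 1 indicates the forecast tracks the measurements closely; the abstract's reported R² of 0.92 for the first prediction hour would correspond to a very tight fit.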
Procedia PDF Downloads 148
7407 Investigation of Flow Characteristics on Upstream and Downstream of Orifice Using Computational Fluid Dynamics
Authors: War War Min Swe, Aung Myat Thu, Khin Cho Thet, Zaw Moe Htet, Thuzar Mon
Abstract:
The main parameter, the orifice hole diameter, was designed according to the range of throttle diameter ratios that gave the required discharge coefficient. The discharge coefficient was determined for different diameter ratios; a value of 0.958 occurred at a throttle diameter ratio of 0.5, corresponding to a throttle hole diameter of 80 mm. The flow analysis was done numerically using ANSYS 17.0 computational fluid dynamics software. The flow velocity was analyzed upstream and downstream of the orifice meter. The downstream velocity of the non-standard orifice meter is 2.5% greater than that of the standard orifice meter. The differential pressure is 515.379 Pa in the standard orifice. Keywords: CFD-CFX, discharge coefficients, flow characteristics, inclined
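The reported discharge coefficient feeds directly into the standard orifice-plate flow equation. A minimal sketch using the abstract's values follows; the working fluid is not stated, so the density of water is an assumption here.

```python
import math

def orifice_mass_flow(c_d, d_throttle, beta, dp, rho):
    """Mass flow through an orifice plate from the standard equation:
    m_dot = C_d * A * sqrt(2 * rho * dp) / sqrt(1 - beta^4),
    where A is the throttle (hole) area and beta the diameter ratio."""
    area = math.pi * d_throttle ** 2 / 4.0
    return c_d * area / math.sqrt(1.0 - beta ** 4) * math.sqrt(2.0 * rho * dp)

# C_d, throttle diameter, beta, and dp from the abstract; rho (water) is assumed.
m_dot = orifice_mass_flow(c_d=0.958, d_throttle=0.080, beta=0.5,
                          dp=515.379, rho=998.0)
print(round(m_dot, 3))  # kg/s
```

Note how the `1 - beta**4` term captures the velocity-of-approach effect: a larger diameter ratio inflates the flow rate inferred from the same differential pressure.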
Procedia PDF Downloads 142
7406 Kinetic Modeling of Transesterification of Triacetin Using Synthesized Ion Exchange Resin (SIERs)
Authors: Hafizuddin W. Yussof, Syamsutajri S. Bahri, Adam P. Harvey
Abstract:
Strong anion exchange resins with QN+OH- have the potential to be developed and employed as heterogeneous catalysts for transesterification, as they are chemically stable to leaching of the functional group. Nine different SIERs (SIER1-9) with QN+OH- were prepared by suspension polymerization of vinylbenzyl chloride-divinylbenzene (VBC-DVB) copolymers in the presence of n-heptane (a pore-forming agent). The amine group was successfully grafted onto the polymeric resin beads through functionalization with trimethylamine. These SIERs were then used as catalysts for the transesterification of triacetin with methanol. Sets of differential equations representing the Langmuir-Hinshelwood-Hougen-Watson (LHHW) and Eley-Rideal (ER) models for the transesterification reaction were developed and fitted to the experimental data. Overall, the synthesized ion exchange resin-catalyzed reaction was better described by the Eley-Rideal model than by the LHHW model, with sums of squared errors (SSE) of 0.742 and 0.996, respectively. Keywords: anion exchange resin, Eley-Rideal, Langmuir-Hinshelwood-Hougen-Watson, transesterification
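Model discrimination by sum of squared errors, as used above, can be sketched as follows; the conversion data and the two models' predictions below are hypothetical placeholders, not the study's measurements.

```python
def sse(observed, predicted):
    """Sum of squared errors between measured and model-predicted values."""
    return sum((o - p) ** 2 for o, p in zip(observed, predicted))

# Hypothetical triacetin conversions and fitted-model predictions
observed    = [0.10, 0.25, 0.40, 0.55]
eley_rideal = [0.12, 0.24, 0.41, 0.53]
lhhw        = [0.15, 0.30, 0.35, 0.50]

fits = {"Eley-Rideal": sse(observed, eley_rideal),
        "LHHW": sse(observed, lhhw)}
best = min(fits, key=fits.get)  # model with the smallest SSE wins
print(best, round(fits[best], 4))
```

The same comparison logic applies to the abstract's result: the ER model's SSE of 0.742 beats the LHHW model's 0.996, so ER is preferred.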
Procedia PDF Downloads 359
7405 Simulation of the Large Hadrons Collisions Using Monte Carlo Tools
Authors: E. Al Daoud
Abstract:
In many cases, theoretical treatments are available for models for which there is no perfect physical realization. In this situation, the only possible test of an approximate theoretical solution is a comparison with data generated by computer simulation. In this paper, Monte Carlo tools are used to study and compare elementary particle models. All the experiments are implemented using 10,000 events, and the simulated energy is 13 TeV. The mean and the distributions of several variables are calculated for each model using MadAnalysis 5. Anomalies in the results can be seen in the muon masses of the minimal supersymmetric standard model and the two-Higgs-doublet model. Keywords: Feynman rules, hadrons, Lagrangian, Monte Carlo, simulation
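The core Monte Carlo step described above, estimating the mean of an observable by averaging over simulated events, can be sketched in a few lines. The toy Gaussian observable and its parameters below are illustrative assumptions, not an actual MadAnalysis 5 variable.

```python
import random

def monte_carlo_mean(sampler, n_events):
    """Estimate the mean of an observable by averaging over simulated events."""
    return sum(sampler() for _ in range(n_events)) / n_events

random.seed(42)
# Toy observable: a Gaussian-smeared quantity standing in for a kinematic variable,
# sampled over the same event count as the study (10,000 events).
mean_estimate = monte_carlo_mean(lambda: random.gauss(100.0, 15.0), 10_000)
print(round(mean_estimate, 2))
```

The statistical error of such an estimate shrinks as 1/sqrt(N), which is why event counts in the tens of thousands are typical for stable means.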
Procedia PDF Downloads 315
7404 Using Machine Learning to Predict Answers to Big-Five Personality Questions
Authors: Aadityaa Singla
Abstract:
The big five personality traits are openness, conscientiousness, extraversion, agreeableness, and neuroticism. To gain insight into their personality, many people turn to these categories, each of which has a different meaning and set of characteristics. This information is important not only to individuals but also to career professionals and psychologists, who can use it for candidate assessment or job recruitment. The links between AI and psychology have been well studied in cognitive science, but this application is still a rather novel development. It is possible for various AI classification models to accurately predict the answer to a personality question from ten input questions. This contrasts with the hundred questions that people normally have to answer to gain a complete picture of their five personality traits. To approach this problem, various AI classification models were trained on a dataset to predict what a user would answer, and each model's prediction was compared to the user's actual response. There are five answer choices (a 20% chance of a correct guess), and the models exceed that baseline to different degrees, demonstrating their significance. An MLP classifier, a decision tree, a linear model, and K-nearest neighbors obtained test accuracies of 86.643%, 54.625%, 47.875%, and 52.125%, respectively. These approaches show that there is potential for more nuanced predictions to be made regarding personality in the future. Keywords: machine learning, personality, big five personality traits, cognitive science
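The K-nearest-neighbors classifier mentioned above can be sketched from scratch; the respondents, their answer vectors, and the choice of k below are all hypothetical, since the study's features and hyperparameters are not specified.

```python
from collections import Counter

def knn_predict(train_x, train_y, query, k=3):
    """Predict the answer choice (1-5) for a target personality question by
    majority vote among the k nearest respondents in answer-vector space."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), y)
        for x, y in zip(train_x, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical respondents: ten answers (1-5) each; label = answer to the target question
train_x = [[1, 1, 2, 1, 2, 1, 1, 2, 1, 1],
           [5, 4, 5, 5, 4, 5, 5, 4, 5, 5],
           [1, 2, 1, 1, 1, 2, 1, 1, 2, 1],
           [4, 5, 4, 5, 5, 4, 5, 5, 4, 5]]
train_y = [1, 5, 1, 5]
print(knn_predict(train_x, train_y, [1, 1, 1, 2, 1, 1, 2, 1, 1, 1], k=3))  # 1
```

With five answer choices, any accuracy well above the 20% random baseline, as with this toy nearest-neighbor vote, indicates the answer vectors carry real predictive signal.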
Procedia PDF Downloads 144
7403 Automatic Flood Prediction Using Rainfall Runoff Model in Moravian-Silesian Region
Authors: B. Sir, M. Podhoranyi, S. Kuchar, T. Kocyan
Abstract:
Rainfall-runoff models play an important role in hydrological predictions. However, the model is only one part of the process of creating a flood prediction. The aim of this paper is to show the process behind a successful prediction of a flood event (May 15–18, 2014). The prediction was performed by the rainfall-runoff model HEC-HMS, one of the models computed within the Floreon+ system. The paper briefly evaluates the results of automatic hydrologic prediction on the Olše river catchment and its gauges Český Těšín and Věřňovice. Keywords: flood, HEC-HMS, prediction, rainfall, runoff
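A rainfall-runoff model transforms a rainfall series into a runoff hydrograph. The minimal linear-reservoir sketch below illustrates the idea only and is far simpler than HEC-HMS; the storage constant k and the rainfall pulse are assumptions.

```python
def linear_reservoir(rainfall, k=5.0, s0=0.0):
    """Linear-reservoir rainfall-runoff sketch: outflow Q = S / k, with
    storage updated each unit time step by dS = P - Q."""
    storage, runoff = s0, []
    for p in rainfall:
        q = storage / k          # outflow proportional to current storage
        storage += p - q         # mass balance over one time step
        runoff.append(q)
    return runoff

# Hypothetical hourly rainfall pulse (mm) over the catchment
hydrograph = linear_reservoir([0, 10, 20, 5, 0, 0, 0, 0])
print([round(q, 2) for q in hydrograph])
```

Even this toy model reproduces the characteristic behavior a flood forecaster relies on: the runoff peak is attenuated and lags the rainfall peak, then recedes exponentially.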
Procedia PDF Downloads 392
7402 Visualization and Performance Measure to Determine Number of Topics in Twitter Data Clustering Using Hybrid Topic Modeling
Authors: Moulana Mohammed
Abstract:
Topic models have been widely used for building clusters of documents for more than a decade, yet problems remain in choosing the optimal number of topics. The main problem is the lack of a stable metric for the quality of topics obtained during the construction of topic models. Analyzing previous works, the authors found that most of the models used in determining the number of topics are non-parametric, with topic quality determined by perplexity and coherence measures, and concluded that these are not suitable for solving this problem. In this paper, we used a parametric method, an extension of the traditional topic model with visual access tendency, to visualize the number of topics (clusters), complement clustering, and choose the optimal number of topics based on the results of cluster validity indices. The developed hybrid topic models are demonstrated on different Twitter datasets covering various topics, both for obtaining the optimal number of topics and for measuring the quality of clusters. The experimental results showed that the Visual Non-negative Matrix Factorization (VNMF) topic model performs well in determining the optimal number of topics with interactive visualization and in measuring cluster quality with validity indices. Keywords: interactive visualization, visual non-negative matrix factorization model, optimal number of topics, cluster validity indices, Twitter data clustering
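The NMF factorization underlying the VNMF model can be sketched with classic Lee-Seung multiplicative updates; the tiny term-document matrix and the number of topics k below are hypothetical, and this ignores the visual-access-tendency extension entirely.

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, k, iters=100, seed=0):
    """Factor a nonnegative document-term matrix V ~ W @ H via Lee-Seung
    multiplicative updates; rows of H act as topics, rows of W as doc mixtures."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() + 0.01 for _ in range(k)] for _ in range(n)]
    H = [[rng.random() + 0.01 for _ in range(m)] for _ in range(k)]
    eps = 1e-9
    for _ in range(iters):
        WH, Wt = matmul(W, H), transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, WH)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)] for i in range(k)]
        WH, Ht = matmul(W, H), transpose(H)
        num, den = matmul(V, Ht), matmul(WH, Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)] for i in range(n)]
    return W, H

def frobenius_error(V, W, H):
    WH = matmul(W, H)
    return sum((V[i][j] - WH[i][j]) ** 2 for i in range(len(V)) for j in range(len(V[0])))

# Hypothetical 4-document x 5-term count matrix with two latent topics
V = [[3, 2, 0, 0, 1],
     [2, 3, 1, 0, 0],
     [0, 0, 3, 2, 2],
     [0, 1, 2, 3, 2]]
W, H = nmf(V, k=2)
print(round(frobenius_error(V, W, H), 3))
```

Running this for several values of k and scoring the resulting clusters with a validity index is the model-selection loop the abstract describes.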
Procedia PDF Downloads 133
7401 Using Divergent Nozzle with Aerodynamic Lens to Focus Nanoparticles
Authors: Hasan Jumaah Mrayeh, Fue-Sang Lien
Abstract:
ANSYS Fluent is used for Computational Fluid Dynamics (CFD) simulation of an efficient lens and nozzle design, which is explained in this paper. We have designed and characterized an aerodynamic lens and a divergent nozzle for focusing a flow that transmits sub-25 nm particles through the aerodynamic lens. The design of the lens and nozzle has been improved using CFD analysis of particle trajectories. We obtained a case for calculating 25 nm nanoparticles flowing through the aerodynamic lens and divergent nozzle. The nanoparticles are transported by air, which is pumped into the aerodynamic lens through the nozzle at 1 atmosphere. We have also developed a computational methodology that can determine the exact focusing characteristics of aerodynamic lens systems. Particle trajectories were traced using the Lagrange approach. The simulation shows the ability of the aerodynamic lens to focus 25 nm particles after using a divergent nozzle. Keywords: aerodynamic lens, divergent nozzle, ANSYS Fluent, Lagrange approach
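Lagrangian particle tracking integrates each particle's equation of motion through the carrier-gas flow field. The sketch below assumes simple Stokes drag with a hypothetical relaxation time for a 25 nm particle, far simpler than the coupled CFD solution in Fluent.

```python
def track_particle(v0, u_gas, tau, dt=1e-7, steps=200):
    """Lagrangian tracking of a nanoparticle relaxing toward the local gas
    velocity under Stokes drag: dv/dt = (u_gas - v) / tau, explicit Euler."""
    v = v0
    velocities = [v]
    for _ in range(steps):
        v += (u_gas - v) / tau * dt   # drag accelerates particle toward gas speed
        velocities.append(v)
    return velocities

# tau (particle relaxation time) and gas speed are illustrative assumptions
velocities = track_particle(v0=0.0, u_gas=50.0, tau=5e-6)
print(round(velocities[-1], 2))  # m/s, approaching the 50 m/s gas velocity
```

The relaxation time tau is the key focusing parameter: particles with small tau follow the gas streamlines through each lens constriction, while heavier particles lag and drift toward the axis.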
Procedia PDF Downloads 304
7400 A Computational Study of Very High Turbulent Flow and Heat Transfer Characteristics in Circular Duct with Hemispherical Inline Baffles
Authors: Dipak Sen, Rajdeep Ghosh
Abstract:
This paper presents a computational study of steady-state, three-dimensional, very high turbulent flow and heat transfer characteristics in a constant-temperature-surfaced circular duct fitted with 90° hemispherical inline baffles. The computations are based on the realizable k-ɛ model with a standard wall function, using the finite volume method, and the SIMPLE algorithm has been implemented. Computations are carried out for Reynolds numbers, Re, ranging from 80,000 to 120,000, a Prandtl number, Pr, of 0.73, and pitch ratios, PR, of 1, 2, 3, 4, and 5 based on the hydraulic diameter of the channel, the hydrodynamic entry length, the thermal entry length, and the test section. ANSYS Fluent 15.0 software has been used to solve the flow field. The study reveals that a circular pipe with baffles has a higher Nusselt number and friction factor than a smooth circular pipe without baffles. The maximum Nusselt number and friction factor are obtained for PR=5 and PR=1, respectively. The Nusselt number increases with pitch ratio over the range of the study, whereas the friction factor decreases up to PR=3, after which it remains almost constant up to PR=5. The thermal enhancement factor increases with increasing pitch ratio, decreases slightly with Reynolds number over the range of the study, and becomes almost constant at higher Reynolds numbers. The computational results reveal that the optimum thermal enhancement factor of the 90° inline hemispherical baffles is about 1.23, for pitch ratio 5 at Reynolds number 120,000. They also show that the optimum pitch ratio at which the baffles should be installed in such very high turbulent flows is 5. Pitch ratio and Reynolds number play an important role in both the fluid flow and heat transfer characteristics. Keywords: friction factor, heat transfer, turbulent flow, circular duct, baffle, pitch ratio
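The thermal enhancement factor reported above is conventionally computed at equal pumping power as TEF = (Nu/Nu₀)/(f/f₀)^(1/3), comparing the baffled duct against the smooth duct. The sketch below uses hypothetical Nusselt-number and friction-factor values, not the study's results.

```python
def thermal_enhancement_factor(nu, nu0, f, f0):
    """Thermal enhancement factor at equal pumping power:
    TEF = (Nu / Nu0) / (f / f0) ** (1/3),
    where subscript 0 denotes the smooth (baffle-free) duct."""
    return (nu / nu0) / (f / f0) ** (1.0 / 3.0)

# Hypothetical values for a baffled duct vs. a smooth duct
tef = thermal_enhancement_factor(nu=350.0, nu0=180.0, f=0.12, f0=0.03)
print(round(tef, 3))
```

A TEF above 1 means the heat-transfer gain outweighs the friction penalty at equal pumping power, which is the sense in which the abstract's value of about 1.23 at PR=5 identifies the optimum configuration.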
Procedia PDF Downloads 370