Search results for: churn prediction modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5778

3528 Black-Box-Based Generic Perturbation Generation Method under Salient Graphs

Authors: Dingyang Hu, Dan Liu

Abstract:

Deep neural network (DNN) models are widely used in classification, prediction, and other task scenarios. To address the difficulty of generating generic adversarial perturbations for deep learning models under black-box conditions, a generic adversarial perturbation generation method based on a saliency map (CJsp) is proposed, which obtains salient image regions by measuring how strongly an image's input features influence the output results. The method can be understood as a saliency-map attack algorithm that induces false classification results by reducing the weights of salient feature points. Experiments also demonstrate that the method achieves a high success rate in transfer attacks and supports batch generation of adversarial samples.
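The weight-suppression idea can be illustrated with a toy sketch. This is not the paper's CJsp algorithm: the linear "model", its weights, and the damping factor below are assumptions chosen purely for illustration.

```python
# Toy saliency-guided perturbation: for a linear score sum(w_i * x_i),
# the saliency of pixel i is simply |w_i|; we damp the k most salient
# pixels to lower the class score (illustrative stand-in for CJsp).

def saliency_attack(image, weights, k, damping=0.1):
    """Reduce the k most salient pixels to lower the class score."""
    saliency = [abs(w) for w in weights]
    top_k = sorted(range(len(image)), key=lambda i: saliency[i], reverse=True)[:k]
    perturbed = list(image)
    for i in top_k:
        perturbed[i] *= damping  # suppress salient feature points
    return perturbed

def score(image, weights):
    """Class score of the toy linear 'model'."""
    return sum(w * x for w, x in zip(weights, image))
```

With a real classifier the saliency would come from gradients or black-box queries rather than from known weights, but the suppression step is the same.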

Keywords: adversarial sample, gradient, probability, black box

Procedia PDF Downloads 94
3527 Gas Holdups in a Gas-Liquid Upflow Bubble Column with an Internal

Authors: C. Milind Caspar, Valtonia Octavio Massingue, K. Maneesh Reddy, K. V. Ramesh

Abstract:

Gas holdup data were obtained from measured pressure drop values in a gas-liquid upflow bubble column in the presence of a string-of-hemispheres promoter internal. The parameters that influenced the gas holdup are gas velocity, liquid velocity, promoter rod diameter, pitch, and hemisphere base diameter. Tap water was used as the liquid phase and nitrogen as the gas phase. An increase of about 26 percent in gas holdup was obtained in the present study due to the insertion of the promoter, in comparison with the empty conduit. Pitch and rod diameter showed no influence on gas holdup, whereas gas holdup was strongly influenced by gas velocity, liquid velocity, and hemisphere base diameter. A correlation equation for the prediction of gas holdup was obtained by least-squares regression analysis.
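Holdup correlations of this kind are often fitted as power laws on log-transformed data. The sketch below shows a generic least-squares fit of that form; the correlation structure and the gas-velocity/holdup values are hypothetical, not the paper's data.

```python
import math

def fit_power_law(x, y):
    """Least-squares fit of y = a * x**b via linear regression on logs."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx = sum(lx) / n
    my = sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / \
        sum((u - mx) ** 2 for u in lx)
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical data: superficial gas velocity (m/s) vs. gas holdup (-)
ug = [0.02, 0.04, 0.06, 0.08, 0.10]
eg = [0.031, 0.052, 0.069, 0.084, 0.098]
a, b = fit_power_law(ug, eg)
```

A multi-variable correlation (including liquid velocity and hemisphere base diameter) would fit several exponents at once, but the log-linearization step is identical.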

Keywords: bubble column, gas-holdup, two-phase flow, turbulent promoter

Procedia PDF Downloads 102
3526 Multi-Stakeholder Involvement in Construction and Challenges of Building Information Modeling Implementation

Authors: Zeynep Yazicioglu

Abstract:

Project development is a complex process in which many stakeholders work together. Employers and main contractors are the base stakeholders, whereas designers, engineers, sub-contractors, suppliers, supervisors, and consultants are other stakeholders. The combination of a complex building process with a large number of stakeholders often leads to time and cost overruns and irregular resource utilization. Failure to comply with the work schedule and inefficient use of resources in construction processes indicate that it is necessary to accelerate production and increase productivity. The development of computer software called Building Information Modeling, abbreviated as BIM, is a major technological breakthrough in this area. The use of BIM enables architectural, structural, mechanical, and electrical projects to be drawn in coordination. BIM is a tool that should be considered by every stakeholder for the opportunities it offers, such as minimizing construction errors, reducing construction time, forecasting, and determining the final construction cost. Its adoption is a process spreading over years, gradually enabling all stakeholders associated with the project and its construction to use it. The main goal of this paper is to explore the problems associated with the adoption of BIM in multi-stakeholder projects. The paper is a conceptual study summarizing the author's practical experience with design offices and construction firms working with BIM. Three challenges of the transition period to BIM are examined in this paper: 1. the compatibility of supplier companies with BIM; 2. the continued need for two-dimensional drawings; 3. contractual issues related to BIM. The paper reviews the literature on BIM usage and the challenges of the transition stage to BIM. Even on an international scale, suppliers that can work in harmony with BIM are not very common, which indicates that the transition to BIM is still under way.
In parallel, employers, local approval authorities, and material suppliers still need two-dimensional drawings. In the BIM environment, different stakeholders can work on the same project simultaneously, which gives rise to design-ownership issues. Practical applications and the problems encountered are also discussed, and a number of suggestions for the future are provided.

Keywords: BIM opportunities, collaboration, contract issues about BIM, stakeholders of project

Procedia PDF Downloads 101
3525 Predicting the Exposure Level of Airborne Contaminants in Occupational Settings via the Well-Mixed Room Model

Authors: Alireza Fallahfard, Ludwig Vinches, Stephane Halle

Abstract:

In the workplace, the exposure level of airborne contaminants should be evaluated because of health and safety issues. This can be done by numerical models or experimental measurements, but the numerical approach is useful when experiments are challenging to perform. One of the simplest models is the well-mixed room (WMR) model, which has shown its usefulness in predicting inhalation exposure in many situations. However, since the WMR model is limited to gases and vapors, it cannot be used directly to predict exposure to aerosols. The main objective is to modify the WMR model so as to expand its application to exposure scenarios involving aerosols. To reach this objective, the standard WMR model has been modified to consider the deposition of particles by gravitational settling and by Brownian and turbulent deposition; three deposition models were implemented. The time-dependent concentrations of airborne particles predicted by the model were compared to experimental results obtained in a 0.512 m³ chamber. Polystyrene particles of 1, 2, and 3 µm in aerodynamic diameter were generated with a nebulizer under two air change rates (ACH). The well-mixed condition and the chamber ACH were determined by the tracer gas decay method. The mean friction velocity on the chamber surfaces, one of the input variables of the deposition models, was determined by computational fluid dynamics (CFD) simulation. In the experimental procedure, particles were generated until the steady-state condition was reached (emission period); generation was then stopped, and concentration measurements continued until the background concentration was reached (decay period). The tracer gas decay tests revealed that the ACHs of the chamber were 1.4 and 3.0 and that the well-mixed condition was achieved. The CFD results showed that the average mean friction velocities and their standard deviations for the lowest and highest ACH were (8.87 ± 0.36) × 10⁻² m/s and (8.88 ± 0.38) × 10⁻² m/s, respectively.
The numerical results indicated that the difference between the deposition rates predicted by the three deposition models was less than 2%. The experimental and numerical aerosol concentrations were compared for the emission and decay periods. In both periods, the prediction accuracy of the modified model improved in comparison with the classic WMR model, although a difference between the measured and predicted values remains. In the emission period, the modified WMR results closely follow the experimental data; during the decay period, however, the model significantly overestimates the experimental concentrations. This finding is mainly due to an underestimation of the deposition rate in the model and to uncertainty related to the measurement devices and the particle size distribution. Comparing the experimental and numerical deposition rates revealed that the actual particle deposition rate is significant, yet the rate predicted by the deposition mechanisms considered in the model was ten times lower than the experimental value. Thus, particle deposition is significant, affects the airborne concentration in occupational settings, and should be considered in airborne exposure prediction models. The role of other removal mechanisms should be investigated.
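The structure of a WMR balance extended with a deposition loss term can be sketched as a one-line ODE integrated in time. The chamber volume matches the abstract; the emission rate and deposition rate constant below are assumptions for illustration, not the paper's fitted values.

```python
def wmr_concentration(t_end, dt, V, Q, S, beta, C0=0.0, C_in=0.0):
    """Euler integration of the well-mixed room balance with a deposition
    loss rate beta (1/s):  dC/dt = (Q*C_in + S)/V - (Q/V + beta)*C."""
    C = C0
    t = 0.0
    while t < t_end:
        dCdt = (Q * C_in + S) / V - (Q / V + beta) * C
        C += dCdt * dt
        t += dt
    return C

# Chamber from the study: 0.512 m3; ACH = 3.0 gives Q = ACH*V/3600.
V = 0.512
Q = 3.0 * V / 3600.0
S = 1e-6       # particle emission rate, mg/s (assumed)
beta = 5e-4    # lumped deposition rate constant, 1/s (assumed)

C_ss = S / (Q + beta * V)                      # analytic steady state (C_in = 0)
C_num = wmr_concentration(20000.0, 1.0, V, Q, S, beta)
```

Setting `beta = 0` recovers the classic gas-phase WMR model, which is why it overpredicts airborne concentration for aerosols during decay.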

Keywords: aerosol, CFD, exposure assessment, occupational settings, well-mixed room model, zonal model

Procedia PDF Downloads 98
3524 Diagnosis of Diabetes Using Computer Methods: Soft Computing Methods for Diabetes Detection Using Iris

Authors: Piyush Samant, Ravinder Agarwal

Abstract:

Complementary and Alternative Medicine (CAM) techniques are quite popular and effective for chronic diseases. Iridology is a more than 150-year-old CAM technique that analyzes patterns, tissue weakness, color, shape, structure, etc., for disease diagnosis. The objective of this paper is to validate the use of iridology for the diagnosis of diabetes. The suggested model was applied to a systemic disease with ocular effects. Data from 200 subjects, 100 diabetic and 100 non-diabetic, were evaluated. The complete procedure was kept very simple and free from the involvement of any iridologist. The region of interest was cropped from the normalized iris. In total, 63 features were extracted using statistical methods, texture analysis, and the two-dimensional discrete wavelet transform. A comparison of the accuracies of six different classifiers is presented. The results show 89.66% accuracy for the random forest classifier.
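The classifier-comparison step (train on feature vectors, report accuracy per classifier) can be sketched with a deliberately trivial nearest-centroid classifier on synthetic two-class data. This is illustrative only: it is not the paper's random forest, its 63-feature pipeline, or its iris data.

```python
import random
import statistics

def nearest_centroid_fit(X, y):
    """Per-class feature means of training vectors X with labels y."""
    classes = sorted(set(y))
    return {c: [statistics.mean(col)
                for col in zip(*[x for x, t in zip(X, y) if t == c])]
            for c in classes}

def nearest_centroid_predict(centroids, x):
    """Assign x to the class with the closest centroid (squared distance)."""
    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda c: d2(centroids[c], x))

def accuracy(centroids, X, y):
    return sum(nearest_centroid_predict(centroids, x) == t
               for x, t in zip(X, y)) / len(y)

# Synthetic two-class "feature" data (values assumed for illustration)
random.seed(0)
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(100)] + \
    [[random.gauss(3, 1), random.gauss(3, 1)] for _ in range(100)]
y = [0] * 100 + [1] * 100
model = nearest_centroid_fit(X, y)
acc = accuracy(model, X, y)
```

Comparing six classifiers, as in the paper, amounts to repeating the fit/accuracy loop with different `fit`/`predict` pairs on held-out data.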

Keywords: complementary and alternative medicine, classification, iridology, iris, feature extraction, disease prediction

Procedia PDF Downloads 396
3523 Application of Building Information Modeling in Energy Management of Individual Departments Occupying University Facilities

Authors: Kung-Jen Tu, Danny Vernatha

Abstract:

To assist individual departments within universities in their energy management tasks, this study explores the application of Building Information Modeling in establishing a 'BIM-based Energy Management Support System' (BIM-EMSS). The BIM-EMSS consists of six components: (1) sensors installed for each occupant and each piece of equipment; (2) electricity sub-meters (constantly logging the lighting, HVAC, and socket electricity consumption of each room); (3) BIM models of all rooms within individual departments' facilities; (4) a data warehouse (for storing occupancy status and logged electricity consumption data); (5) a building energy management system that provides energy managers with various energy management functions; and (6) an energy simulation tool (such as eQUEST) that generates real-time 'standard energy consumption' data against which 'actual energy consumption' data are compared and energy efficiency is evaluated. Through the building energy management system, the energy manager is able to (a) view a 3D visualization (BIM model) of each room, in which the occupancy and equipment status detected by the sensors and the logged electricity consumption data are displayed constantly; (b) perform real-time energy consumption analysis to compare the actual and standard energy consumption profiles of a space; (c) obtain energy consumption anomaly detection warnings for certain rooms so that corrective energy management actions can be taken (data mining is employed to relate the space occupancy pattern to the current equipment settings and flag anomalies, such as appliances turned on without occupancy); and (d) perform historical energy consumption analysis to review monthly and annual energy consumption profiles and compare them against historical energy profiles.
The BIM-EMSS was further implemented in a research lab in the Department of Architecture of NTUST in Taiwan, and the implementation results are presented to illustrate how it can assist individual departments within universities in their energy management tasks.
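The anomaly rules described in point (c) can be sketched as a minimal check over room records. The record layout, room names, and the 1.25 deviation threshold are assumptions; the paper's system uses data mining over occupancy and equipment patterns rather than a fixed rule.

```python
def detect_anomalies(rooms, ratio_threshold=1.25):
    """Flag rooms whose logged consumption deviates from the simulated
    'standard' profile, or where equipment draws power while unoccupied.
    Each room record: (name, occupied, equipment_on, actual_kwh, standard_kwh)."""
    warnings = []
    for name, occupied, equipment_on, actual, standard in rooms:
        if equipment_on and not occupied:
            warnings.append((name, "equipment on without occupancy"))
        elif standard > 0 and actual / standard > ratio_threshold:
            warnings.append((name, "consumption above standard profile"))
    return warnings

rooms = [
    ("Lab-101", True,  True,  12.0, 11.0),   # normal operation
    ("Lab-102", False, True,   6.0,  1.0),   # appliances on, nobody there
    ("Lab-103", True,  True,  20.0, 12.0),   # well above simulated standard
]
alerts = detect_anomalies(rooms)
```

In the described system the `standard_kwh` values would come from the energy simulation tool and the occupancy flags from the installed sensors.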

Keywords: database, electricity sub-meters, energy anomaly detection, sensor

Procedia PDF Downloads 302
3522 Designing Stochastic Non-Invasively Applied DC Pulses to Suppress Tremors in Multiple Sclerosis by Computational Modeling

Authors: Aamna Lawrence, Ashutosh Mishra

Abstract:

Tremors occur in 60% of patients who have Multiple Sclerosis (MS), the most common demyelinating disease affecting the central and peripheral nervous system and a primary cause of disability in young adults. While pharmacological agents provide minimal benefits, surgical interventions like Deep Brain Stimulation and thalamotomy are riddled with dangerous complications, which makes non-invasive electrical stimulation an appealing treatment of choice for dealing with tremors. Hence, we hypothesized that if the non-invasive electrical stimulation parameters (mainly the frequency) are computed by mathematically modeling the nerve fibre, taking into consideration the minutest details of the axon morphology, tremors due to demyelination can be optimally alleviated. In this computational study, we have modeled the random demyelination pattern that typically manifests in MS using the high-density Hodgkin-Huxley model with suitable modifications to account for the myelin. An internode of the nerve fibre in our model could have up to ten demyelinated regions, each with random length and myelin thickness. The arrival times of action potentials traveling along the demyelinated and the normally myelinated nerve fibres between two fixed points in space were noted, and their relationship with the nerve fibre radius, ranging from 5 µm to 12 µm, was analyzed. Interestingly, there was no overlap between the arrival times of action potentials traversing the demyelinated and the normally myelinated nerve fibres, even when only a single internode was demyelinated. The study gave us an opportunity to design DC pulses whose frequency of application is a function of the random demyelination pattern, so as to block only the delayed, tremor-causing action potentials.
The DC pulses could be delivered to the peripheral nervous system non-invasively by an electrode bracelet that would suppress any shakiness distal to it, thus paving the way for wearable neuro-rehabilitative technologies.
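The no-overlap finding can be illustrated with simple arrival-time arithmetic. The linear velocity-radius scaling and its constants below are coarse assumptions for illustration; the paper computes arrival times from a modified Hodgkin-Huxley model, not from this formula.

```python
# Illustrative check that demyelinated and normal arrival-time ranges do not
# overlap across fibre radii (velocity model and constants are assumptions).

def conduction_velocity(radius_um, myelinated=True):
    """Toy model: velocity proportional to radius, slower when demyelinated."""
    k = 6.0 if myelinated else 2.0   # m/s per um, assumed scaling
    return k * radius_um

radii = [5, 6, 7, 8, 9, 10, 11, 12]   # um, the radius range from the study
distance = 0.5                         # m between two fixed points (assumed)

t_normal = [distance / conduction_velocity(r, True) for r in radii]
t_demyel = [distance / conduction_velocity(r, False) for r in radii]

# No overlap: even the slowest normal fibre beats the fastest demyelinated one
no_overlap = max(t_normal) < min(t_demyel)
```

If `no_overlap` holds, a pulse timed to the delayed window can block only the late, tremor-causing action potentials.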

Keywords: demyelination, Hodgkin-Huxley model, non-invasive electrical stimulation, tremor

Procedia PDF Downloads 122
3521 Fat-Tail Test of Regulatory DNA Sequences

Authors: Jian-Jun Shu

Abstract:

The statistical properties of cis-regulatory modules (CRMs) are explored by estimating the occurrence distribution of similar-word sets. It is observed that CRMs tend to have a fat-tail distribution of similar-word set occurrence. Thus, a fat-tail test with two fatness coefficients is proposed to distinguish CRMs from non-CRMs, especially from exons. With the first fatness coefficient, the separation accuracy between CRMs and exons is increased as compared with the existing content-based CRM prediction method, the fluffy-tail test. With the second fatness coefficient, the computing time is reduced as compared with the fluffy-tail test, making it very suitable for long sequences and large-database analysis in the post-genome era. Moreover, these indexes may be used to predict CRMs that have not yet been observed experimentally, which can serve as a valuable filtering step for experiments.
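The fat-tail idea can be sketched with a stand-in fatness measure. The paper defines its own two fatness coefficients; the proxy below (share of similar-word sets far above the mean occurrence) and the two synthetic count lists are assumptions for illustration.

```python
import statistics

def tail_fatness(counts, z=2.0):
    """Illustrative fatness proxy: fraction of similar-word sets whose
    occurrence count lies more than z standard deviations above the mean."""
    mean = statistics.mean(counts)
    sd = statistics.pstdev(counts)
    cutoff = mean + z * sd
    return sum(1 for c in counts if c > cutoff) / len(counts)

# Hypothetical occurrence counts of similar-word sets
crm_like = [1] * 90 + [8, 10, 15, 22, 40, 45, 51, 60, 75, 90]   # fat tail
exon_like = [1, 2, 3, 2, 1, 2, 3, 2, 1, 2] * 10                 # thin tail
```

A sequence window whose fatness exceeds a calibrated threshold would be classified as CRM-like; a thin-tailed window as exon-like.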

Keywords: statistical approach, transcription factor binding sites, cis-regulatory modules, DNA sequences

Procedia PDF Downloads 285
3520 Simulation of Antimicrobial Resistance Gene Fate in Narrow Grass Hedges

Authors: Marzieh Khedmati, Shannon L. Bartelt-Hunt

Abstract:

Vegetative filter strips (VFS) are used for controlling the volume of runoff and decreasing contaminant concentrations in runoff before it enters water bodies. Many studies have investigated the role of VFS in sediment and nutrient removal, but little is known about their efficiency in removing emerging contaminants such as antimicrobial resistance genes (ARGs). The Vegetative Filter Strip Modeling System (VFSMOD) was used to simulate the efficiency of VFS in this regard. Several studies have demonstrated the ability of VFSMOD to predict reductions in runoff volume and sediment concentration moving through the filters. The objectives of this study were to calibrate VFSMOD with experimental data and to assess the efficiency of the model in simulating the filter behavior in removing ARGs (ermB) and tylosin. The experimental data were obtained from a prior study conducted at the University of Nebraska-Lincoln (UNL) Rogers Memorial Farm. Three treatment factors were tested in the experiments: manure amendment, narrow grass hedges, and rainfall events. The sediment delivery ratio (SDR) was defined as the filter efficiency, and the corresponding experimental and model values were compared to each other. The model generally agreed with the experimental results, and as a result it was used to predict filter efficiencies when runoff data are not available. Narrow grass hedges (NGH) were shown to be effective in reducing tylosin and ARG concentrations. The simulation showed that the filter efficiency in removing ARGs differs for different soil types and filter lengths. There is an optimum length for the filter strip that produces the minimum runoff volume. Based on the model results, increasing the filter length up to 1 m leads to higher efficiency, but lengthening beyond that decreases the efficiency. VFSMOD, which has been shown to estimate VFS trapping efficiency well, produced consistent results for ARG removal.
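The SDR-based efficiency comparison can be sketched in a few lines. The inflow/outflow sediment masses and run labels below are hypothetical, not the UNL field data.

```python
def sediment_delivery_ratio(mass_in_g, mass_out_g):
    """SDR: fraction of incoming sediment that exits the filter strip.
    Trapping efficiency is 1 - SDR."""
    return mass_out_g / mass_in_g

# Hypothetical runs: inflow vs. outflow sediment mass with and without a hedge
runs = {
    "no_hedge": (1000.0, 620.0),
    "ngh_1m":   (1000.0, 280.0),
}
for name, (m_in, m_out) in runs.items():
    sdr = sediment_delivery_ratio(m_in, m_out)
    print(f"{name}: SDR = {sdr:.2f}, trapping efficiency = {1 - sdr:.2f}")
```

The same ratio applied to ARG copy numbers or tylosin mass, instead of sediment mass, gives the removal efficiencies discussed in the abstract.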

Keywords: antimicrobial resistance genes, emerging contaminants, narrow grass hedges, vegetative filter strips, vegetative filter strip modeling system

Procedia PDF Downloads 130
3519 Design and Application of a Model Eliciting Activity with Civil Engineering Students on Binomial Distribution to Solve a Decision Problem Based on Sample Data Involving Aspects of Randomness and Proportionality

Authors: Martha E. Aguiar-Barrera, Humberto Gutierrez-Pulido, Veronica Vargas-Alejo

Abstract:

Identifying and modeling random phenomena is a fundamental cognitive process for understanding and transforming reality. Recognizing situations governed by chance and giving them a scientific interpretation, without being carried away by beliefs or intuitions, is basic training for citizens. Hence the importance of generating teaching-learning processes, supported by technology, that pay attention to model creation rather than only to the execution of mathematical calculations. In order to develop students' knowledge of basic probability distributions and decision making, a model-eliciting activity (MEA) is reported in this work. The intention was to apply the Models and Modeling Perspective to design an activity related to civil engineering that would be understandable to students while involving them in its solution. Furthermore, the activity should pose a decision-making challenge based on sample data and consider the use of the computer. The activity was designed considering the six design principles for MEAs proposed by Lesh and collaborators: model construction, reality, self-evaluation, model documentation, shareable and reusable, and prototype. The application and refinement of the activity were carried out during three school cycles in the Probability and Statistics course for civil engineering students at the University of Guadalajara. The way in which the students sought to solve the activity was analyzed using audio and video recordings, as well as the students' individual and team reports. The information obtained was categorized according to the activity phase (individual or team) and the category of analysis (sample, linearity, probability, distributions, mechanization, and decision making).
With the results obtained through the MEA, four obstacles to understanding and applying the binomial distribution were identified: first, the students' resistance to moving from the linear to the probabilistic model; second, the difficulty of inferring the behavior of the population from the sample data; third, viewing the sample as an isolated event and not as part of a random process that must be seen in the context of a probability distribution; and fourth, the difficulty of making decisions with the support of probabilistic calculations. These obstacles have also been identified in the literature on the teaching of probability and statistics. Recognizing these concepts as obstacles to understanding probability distributions, and that they do not change after a single intervention, allows for the modification of the interventions and of the MEA, so that students may themselves identify erroneous solutions when carrying out the MEA. The MEA also proved to be democratic, since several students who had little participation and low grades in the first units improved their participation. Regarding the use of the computer, the RStudio software was useful in several tasks, for example in plotting the probability distributions and exploring different sample sizes. In conclusion, with the models created to solve the MEA, the civil engineering students improved their probabilistic knowledge and their understanding of fundamental concepts such as sample, population, and probability distribution.
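The kind of binomial, sample-based decision the MEA targets can be sketched as a tail-probability calculation. The scenario, sample numbers, and significance level below are illustrative assumptions, not the activity's actual data.

```python
from math import comb

def binomial_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical acceptance decision: a batch is acceptable if the defect
# rate is p = 0.05; in a sample of n = 20 specimens, x = 4 failed.
# How likely are 4 or more failures if the batch really meets the spec?
p_value = binomial_tail(20, 4, 0.05)
reject = p_value < 0.05   # decision rule (significance level assumed)
```

This is the reasoning step the identified obstacles block: reading the sample as one outcome of a binomial process rather than as an isolated event.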

Keywords: linear model, models and modeling, probability, randomness, sample

Procedia PDF Downloads 115
3518 Prediction of Index-Mechanical Properties of Pyroclastic Rock Utilizing Electrical Resistivity Method

Authors: İsmail İnce

Abstract:

The aim of this study is to determine the index and mechanical properties of pyroclastic rocks in a practical way by means of the electrical resistivity method. For this purpose, the electrical resistivity, uniaxial compressive strength, point load strength, P-wave velocity, density, and porosity values of 10 different pyroclastic rocks were measured in the laboratory. A simple regression analysis was made between the index-mechanical properties of the samples and the corresponding electrical resistivity values. A strong exponential relation was found between the index-mechanical properties and the electrical resistivity values. As a non-destructive method, electrical resistivity can be used to assess the engineering properties of rocks from which it is difficult to obtain regularly shaped samples.
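An exponential regression of the kind reported can be sketched by linearizing on ln(y). The resistivity/UCS pairs below are hypothetical illustration values, not the paper's measurements.

```python
import math

def fit_exponential(x, y):
    """Least-squares fit of y = a * exp(b*x) via linear regression on ln(y)."""
    ly = [math.log(v) for v in y]
    n = len(x)
    mx = sum(x) / n
    my = sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(x, ly)) / \
        sum((u - mx) ** 2 for u in x)
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical pairs: electrical resistivity (ohm.m) vs. UCS (MPa)
resistivity = [20.0, 40.0, 60.0, 80.0, 100.0]
ucs = [12.0, 16.5, 23.0, 31.5, 43.5]
a, b = fit_exponential(resistivity, ucs)
```

The same fit applied to point load strength, P-wave velocity, density, or porosity against resistivity yields the family of correlation equations the study reports.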

Keywords: electrical resistivity, index-mechanical properties, pyroclastic rocks, regression analysis

Procedia PDF Downloads 467
3517 Modeling of Void Formation in 3D Woven Fabric During Resin Transfer Moulding

Authors: Debabrata Adhikari, Mikhail Matveev, Louise Brown, Jan Kočí, Andy Long

Abstract:

Resin transfer molding (RTM) is increasingly used for manufacturing high-quality composite structures due to its advantages over prepreg processing, namely low-cost out-of-autoclave manufacturing. However, to retain these advantages, it is critical to reduce the void content during injection. Reinforcements commonly used in RTM, such as woven fabrics, have dual-scale porosity, with mesoscale pores between the yarns and micro-scale pores within the yarns. Due to the fabric geometry and the nature of the dual-scale flow, the flow front during injection develops a complicated fingering formation that leads to void formation. Analytical modeling of void formation in woven fabrics has been widely studied elsewhere. However, there is scope to improve void-formation predictions for 3D fabrics, in which the in-plane yarn layers are confined by additional through-thickness binder yarns. In the present study, the structural morphology of the tortuous pore spaces in the 3D fabric has been studied and implemented using the open-source software TexGen. An analytical model for void and fingering formation has been implemented based on an idealised unit-cell model of the 3D fabric. Since the pore spaces between the yarns are free domains, this region is treated as flow through connected channels, whereas intra-yarn flow has been modeled using Darcy's law with an additional term to account for capillary pressure. The void fraction has then been characterised using a void-formation criterion that compares the fill times for inter- and intra-yarn flow. Moreover, the dual-scale two-phase flow of resin with air has been simulated in the CFD solvers OpenFOAM and ANSYS to predict the probable locations of voids and validate the analytical model. The use of an idealised unit-cell model gives the insight needed to optimise the mesoscale geometry of the reinforcement and the injection parameters to minimise the void content during the LCM process.
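The fill-time comparison behind the void criterion can be sketched with a simplified 1-D Darcy fill time. All material and process constants below (viscosity, pressures, permeabilities, unit-cell length) are assumptions for illustration, not the paper's values, and the constant-pressure form is a simplification.

```python
def darcy_fill_time(length_m, permeability_m2, viscosity_pa_s, porosity,
                    delta_p_pa, capillary_p_pa=0.0):
    """Simplified 1-D Darcy fill time across a saturating length:
    t = mu * phi * L**2 / (2 * K * (dP + Pc))."""
    return (viscosity_pa_s * porosity * length_m ** 2 /
            (2.0 * permeability_m2 * (delta_p_pa + capillary_p_pa)))

mu = 0.1       # resin viscosity, Pa.s (assumed)
dp = 1.0e5     # injection pressure drop, Pa (assumed)
L = 2.0e-3     # unit-cell length, m (assumed)

t_channel = darcy_fill_time(L, 1e-10, mu, 0.9, dp)                    # inter-yarn gap
t_yarn = darcy_fill_time(L, 1e-13, mu, 0.6, dp, capillary_p_pa=2e4)   # intra-yarn

# If the channel fills much faster than the yarn interior, air can be trapped
# inside the yarn (micro-voids); the fill-time ratio is a simple risk indicator.
void_risk_ratio = t_yarn / t_channel
```

A ratio far above one signals micro-void risk inside the yarns; a ratio below one would instead indicate meso-void risk in the channels.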

Keywords: 3D fiber, void formation, RTM, process modelling

Procedia PDF Downloads 93
3516 Heat Transfer Enhancement by Turbulent Impinging Jet with Jet's Velocity Field Excitations Using OpenFOAM

Authors: Naseem Uddin

Abstract:

Impinging jets are used in a variety of engineering and industrial applications. This paper presents numerical simulations of heat transfer by a turbulent impinging jet with velocity-field excitations, using different Reynolds-Averaged Navier-Stokes (RANS) models. Detached Eddy Simulations are also conducted to investigate the differences in the prediction capabilities of the two simulation approaches. The excited jet is simulated in the non-commercial CFD code OpenFOAM with the goal of understanding the influence of the dynamics of the impinging jet on heat transfer. The jet's excitation frequencies are altered keeping in view the preferred mode of the jet. The Reynolds number based on mean velocity and diameter is 23,000, and the jet outlet-to-target wall distance is 2 diameters. It is found that heat transfer at the target wall can be influenced by judicious selection of the excitation amplitude and frequencies.

Keywords: excitation, impinging jet, natural frequency, turbulence models

Procedia PDF Downloads 268
3515 Surface Water Flow of Urban Areas and Sustainable Urban Planning

Authors: Sheetal Sharma

Abstract:

Urban planning is associated with land transformation from natural areas to modified and developed ones, which leads to modification of the natural environment. Basic knowledge of the relationship between the two should be ascertained before proceeding with the development of natural areas. Changes to the land surface due to built-up pavements, roads, and similar land cover affect surface water flow. There is a gap between urban planning and the basic knowledge of hydrological processes that planners should have. The paper aims to identify the variations in surface flow due to urbanization over a temporal scale of 40 years using the Storm Water Management Model (SWMM), and then to correlate these findings with the urban planning guidelines in the study area, along with the geological background, to find suitable combinations of land cover, soil, and guidelines. For the purpose of identifying the changes in surface flows, 19 catchments were identified with different geology, different growth over the 40 years, and different groundwater level fluctuations. The increasing built-up area and the varying surface runoff were studied using ArcGIS and SWMM modeling, with regression analysis for runoff. The resulting runoff for various land covers and soil groups with varying built-up conditions was observed. The modeling procedure also included observations for varying precipitation with constant built-up area in all catchments. All these observations were combined for each catchment, and a single regression curve was obtained for runoff. It was thus observed that alluvium with suitable land cover was better for infiltration and generated the least runoff, but excess built-up area could not be sustained on alluvial soil. Similarly, basalt had the least recharge and the most runoff, demanding maximum vegetation over it. Sandstone resulted in good recharge if planned with more open spaces and natural soils with intermittent vegetation.
Hence, these observations form a keystone base for planners when planning various land uses on different soils. This paper contributes a solution to the basic knowledge gap that urban planners face during the development of natural surfaces.

Keywords: runoff, built-up, roughness, recharge, temporal changes

Procedia PDF Downloads 274
3514 3D Modeling of Flow and Sediment Transport in Tanks with the Influence of Cavity

Authors: A. Terfous, Y. Liu, A. Ghenaim, P. A. Garambois

Abstract:

With increasing urbanization worldwide, it is crucial to sustainably manage sediment flows in urban networks and especially in stormwater detention basins. One key aspect is to propose optimized designs for detention tanks in order to best reduce flood peak flows and, at the same time, settle particles. It is therefore necessary to understand the complex flow patterns and sediment deposition conditions in stormwater detention basins. The aim of this paper is to study the flow structure and particle deposition pattern for a given tank geometry with a view to controlling and maximizing sediment deposition. Both numerical simulation and experimental work were carried out to investigate the flow and sediment distribution in a storm tank with a cavity. The settling distribution of particles in a rectangular tank is mainly determined by the flow patterns and the bed shear stress. The flow patterns in a rectangular tank vary with the geometry, the inflow rate, and the water depth. As the flow patterns change, the bed shear stress changes accordingly, which in turn influences particle settling. The accumulation of particles on the bed changes the conditions at the bottom; this effect is ignored in most investigations, but it deserves much more attention, since its influence on sedimentation can be important. The approach presented here is based on the resolution of the Reynolds-averaged Navier-Stokes equations to account for turbulent effects, together with a passive particle transport model. An analysis of particle deposition conditions is presented in terms of flow velocities and turbulence patterns. Sediment deposition zones are then identified by modeling with a particle-tracking method. It is shown that two recirculation zones seem to significantly influence sediment deposition.
Due to the possible overestimation of particle trap efficiency with standard wall functions and stick conditions, further investigations of the basal boundary conditions, based on turbulent kinetic energy and shear stress, seem required. These observations are confirmed by experimental investigations carried out in the laboratory.
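One ingredient of the deposition analysis, the terminal settling velocity of small particles, can be sketched with Stokes' law. The particle size and density below are assumed, generic stormwater-sediment values, not the paper's data.

```python
def stokes_settling_velocity(d_m, rho_p, rho_f=1000.0, mu=1.0e-3, g=9.81):
    """Terminal settling velocity of a small sphere (Stokes regime):
    v = (rho_p - rho_f) * g * d**2 / (18 * mu)."""
    return (rho_p - rho_f) * g * d_m ** 2 / (18.0 * mu)

# Hypothetical stormwater sediment: 50 um silica-like particle in water
v = stokes_settling_velocity(50e-6, rho_p=2650.0)
```

In a detention tank, a particle settles out roughly when its settling velocity exceeds the local upward flow and resuspension effects, which is why the recirculation zones matter for the deposition pattern.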

Keywords: storm sewers, sediment deposition, numerical simulation, experimental investigation

Procedia PDF Downloads 319
3513 Quantum Chemical Prediction of Standard Formation Enthalpies of Uranyl Nitrates and Their Degradation Products

Authors: Mohamad Saab, Florent Real, Francois Virot, Laurent Cantrel, Valerie Vallet

Abstract:

All spent nuclear fuel reprocessing plants use the PUREX process (Plutonium Uranium Refining by Extraction), which is a liquid-liquid extraction method. The organic extracting solvent is a mixture of tri-n-butyl phosphate (TBP) and hydrocarbon solvent such as hydrogenated tetra-propylene (TPH). By chemical complexation, uranium and plutonium (from spent fuel dissolved in nitric acid solution), are separated from fission products and minor actinides. During a normal extraction operation, uranium is extracted in the organic phase as the UO₂(NO₃)₂(TBP)₂ complex. The TBP solvent can form an explosive mixture called red oil when it comes in contact with nitric acid. The formation of this unstable organic phase originates from the reaction between TBP and its degradation products on the one hand, and nitric acid, its derivatives and heavy metal nitrate complexes on the other hand. The decomposition of the red oil can lead to violent explosive thermal runaway. These hazards are at the origin of several accidents such as the two in the United States in 1953 and 1975 (Savannah River) and, more recently, the one in Russia in 1993 (Tomsk). This raises the question of the exothermicity of reactions that involve TBP and all other degradation products, and calls for a better knowledge of the underlying chemical phenomena. A simulation tool (Alambic) is currently being developed at IRSN that integrates thermal and kinetic functions related to the deterioration of uranyl nitrates in organic and aqueous phases, but not of the n-butyl phosphate. To include them in the modeling scheme, there is an urgent need to obtain the thermodynamic and kinetic functions governing the deterioration processes in liquid phase. However, little is known about the thermodynamic properties, like standard enthalpies of formation, of the n-butyl phosphate molecules and of the UO₂(NO₃)₂(TBP)₂ UO₂(NO₃)₂(HDBP)(TBP) and UO₂(NO₃)₂(HDBP)₂ complexes. 
In this work, we propose to estimate these thermodynamic properties with quantum methods (QM). Thus, in the first part of our project, we focused on the mono-, di-, and tri-butyl phosphates. Quantum chemical calculations were performed to study several reactions leading to the formation of mono-(H₂MBP), di-(HDBP), and tri-butyl phosphate (TBP) in the gas and liquid phases. In the gas phase, the structures of all species were optimized using the B3LYP density functional with triple-ζ def2-TZVP basis sets for all atoms. All geometries were optimized in the gas phase, and the corresponding harmonic frequencies were used without scaling to compute the vibrational partition functions at 298.15 K and 0.1 MPa. Accurate single-point energies were calculated using the efficient localized LCCSD(T) method extrapolated to the complete basis set limit. Whenever species in the liquid phase are considered, solvent effects are included with the COSMO-RS continuum model. The standard enthalpies of formation of TBP, HDBP, and H₂MBP are finally predicted with an uncertainty of about 15 kJ mol⁻¹. In the second part of this project, we investigated the fundamental properties of the three organic species that contribute most to the thermal runaway: UO₂(NO₃)₂(TBP)₂, UO₂(NO₃)₂(HDBP)(TBP), and UO₂(NO₃)₂(HDBP)₂, using the same quantum chemical methods in both the gas and the liquid phase. We will discuss the structures and thermodynamic properties of all these species.
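The Hess's-law bookkeeping behind such enthalpy-of-formation estimates can be sketched in a few lines. All numerical values below are hypothetical placeholders, not results from this study; only the liquid-water value is a well-established reference number.

```python
# Hedged sketch: combining a computed reaction enthalpy with known
# enthalpies of formation (Hess's law). All reactant values and the
# reaction itself are hypothetical placeholders.

def enthalpy_of_formation(delta_h_rxn, dfh_reactants, dfh_other_products):
    """dfH(target) = dH_rxn + sum(dfH reactants) - sum(dfH other products),
    for a reaction: reactants -> target + other products (kJ/mol)."""
    return delta_h_rxn + sum(dfh_reactants) - sum(dfh_other_products)

# Hypothetical reaction: A + B -> target + H2O
dH_rxn = -50.0                  # computed reaction enthalpy, kJ/mol (placeholder)
dfH_A, dfH_B = -300.0, -250.0   # known reactant values, kJ/mol (placeholders)
dfH_H2O = -285.8                # liquid water, kJ/mol (well established)

dfH_target = enthalpy_of_formation(dH_rxn, [dfH_A, dfH_B], [dfH_H2O])
print(round(dfH_target, 1))  # -314.2
```

The quantum chemical calculations supply the reaction enthalpy; anchoring it to species with known enthalpies of formation then yields the unknown one.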

Keywords: PUREX process, red oils, quantum chemical methods, hydrolysis

Procedia PDF Downloads 185
3512 COSMO-RS Prediction for Choline Chloride/Urea Based Deep Eutectic Solvent: Chemical Structure and Application as Agent for Natural Gas Dehydration

Authors: Tayeb Aissaoui, Inas M. AlNashef

Abstract:

In recent years, green solvents named deep eutectic solvents (DESs) have been found to possess significant properties and to be applicable in several technologies. Choline chloride (ChCl) mixed with urea at a ratio of 1:2 at 80 °C was the first DES discovered. In this article, the chemical structure and combination mechanism of the ChCl:urea DES were investigated. Moreover, the implementation of this DES for water removal from natural gas is reported. Dehydration of natural gas by ChCl:urea shows significant absorption efficiency compared to triethylene glycol. All of the above computations were carried out with the COSMOthermX software. This article confirms the potential application of DESs in the gas industry.

Keywords: COSMO-RS, deep eutectic solvents, dehydration, natural gas, structure, organic salt

Procedia PDF Downloads 286
3511 Evaluating Service Trustworthiness for Service Selection in Cloud Environment

Authors: Maryam Amiri, Leyli Mohammad-Khanli

Abstract:

Cloud computing is becoming increasingly popular, and more business applications are moving to the cloud. In this setting, the number of services providing similar functional properties is increasing, so users need the ability to select the service with the best non-functional properties corresponding to their preferences. This paper presents an Evaluation Framework of Service Trustworthiness (EFST) that evaluates the trustworthiness of functionally equivalent services without requiring additional invocations of them. EFST extracts user preferences automatically. Then, it assesses the trustworthiness of services along two dimensions, qualitative and quantitative metrics, based on past usage experience. Finally, EFST determines the overall trustworthiness of services using a Fuzzy Inference System (FIS). The results of experiments and simulations show that EFST predicts missing Quality of Service (QoS) values better than competing approaches and guides users toward selecting the most appropriate services.
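As a rough illustration of the FIS step (not the EFST rule base itself), a two-input Mamdani-style inference with hand-picked membership functions might look like this; the QoS inputs, membership shapes, rules, and consequent centroids are all assumptions for the sketch.

```python
# A hedged sketch of a Mamdani-style fuzzy inference step for service
# trustworthiness, assuming two QoS inputs (response time, availability),
# hand-picked shoulder membership functions, and two rules.

def fall(x, a, b):
    """Membership that is 1 below a, 0 above b, linear in between."""
    if x <= a:
        return 1.0
    if x >= b:
        return 0.0
    return (b - x) / (b - a)

def rise(x, a, b):
    """Membership that is 0 below a, 1 above b, linear in between."""
    return 1.0 - fall(x, a, b)

def trustworthiness(resp_ms, availability):
    fast, slow = fall(resp_ms, 100, 500), rise(resp_ms, 100, 500)
    high, low = rise(availability, 0.90, 0.999), fall(availability, 0.90, 0.999)
    w_high = min(fast, high)   # rule 1: fast AND high availability -> trustworthy
    w_low = max(slow, low)     # rule 2: slow OR low availability -> untrustworthy
    if w_high + w_low == 0.0:  # defensive; cannot occur with these shapes
        return 0.5
    # defuzzify: weighted average of the rule consequents' centroids
    return (w_high * 0.9 + w_low * 0.2) / (w_high + w_low)

print(round(trustworthiness(150, 0.995), 4))   # good QoS -> high score
print(round(trustworthiness(800, 0.80), 4))    # poor QoS -> low score
```

A real FIS would carry many more rules derived from the qualitative and quantitative metrics, but the fuzzify/combine/defuzzify pipeline is the same.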

Keywords: user preference, cloud service, trustworthiness, QoS metrics, prediction

Procedia PDF Downloads 284
3510 Geomechanical Numerical Modeling of Well Wall in Drilling with Finite Difference Method

Authors: Marzieh Zarei

Abstract:

Well instability is one of the most fundamental challenges faced by the oil and gas industry, and well wall stability analysis remains a gap to be filled. The collection of static data, such as well logs, allows the construction of a geomechanical numerical model that helps assess the probable risks in future drilling. In this paper, a geomechanical model was designed using the finite difference method, and the mechanical properties of the rock were determined at all points of the model. The safe mud window was then determined, with the minimum and maximum mud pressures falling in the ranges of 60-70 MPa and 100-110 MPa, respectively.
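The safe-mud-window idea can be illustrated with a simple hydrostatic check; the 70 MPa and 100 MPa limits below are the conservative ends of the ranges reported above, while the depth and mud density are invented for the example.

```python
# Hedged sketch: check whether the static mud-column pressure at a given
# depth falls inside a safe mud window. Bounds follow the abstract's
# ranges (conservative ends); depth and density are illustrative.

G = 9.81  # gravitational acceleration, m/s^2

def mud_pressure_mpa(density_kg_m3, depth_m):
    """Hydrostatic pressure of the mud column, in MPa."""
    return density_kg_m3 * G * depth_m / 1e6

def in_safe_window(p_mpa, p_min=70.0, p_max=100.0):
    """True if the pressure avoids both collapse (low) and fracture (high)."""
    return p_min <= p_mpa <= p_max

p = mud_pressure_mpa(1800, 4500)   # ~79.5 MPa: inside the window
print(round(p, 2), in_safe_window(p))
```

In practice the window varies with depth, so the check is run along the whole open-hole section rather than at a single point.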

Keywords: geomechanics, numerical model, well stability, in-situ stress, underbalanced drilling

Procedia PDF Downloads 119
3509 An Approach for Thermal Resistance Prediction of Plain Socks in Wet State

Authors: Tariq Mansoor, Lubos Hes, Vladimir Bajzik

Abstract:

Sock comfort has great significance in our daily life, and even more so during low- or high-activity work, which makes the body sweat at different rates. In this study, plain socks of different fibre compositions were wetted to the saturation level. After successive conditioning intervals, the socks were characterized by their thermal resistance in the dry and wet states. Theoretical thermal resistance was predicted by using combined filling coefficients together with the thermal conductivity of the wet polymer, instead of the dry polymer (fibre), in different models. With this modification, the mathematical models could predict thermal resistance at different moisture levels. Furthermore, the thermal resistance predicted by the different models correlates reasonably with the experimental results (correlation coefficients of 0.84-0.98) in both the dry (laboratory-moisture) and wet states. This work was supported by the Technical University of Liberec under SGC-2019, project number 21314.
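A minimal sketch of the modification described above: the dry-fibre conductivity in the resistance model is replaced by an effective wet conductivity built from filling coefficients (volume fractions) of fibre, water, and air. The conductivities are typical handbook values and the fractions and thickness are invented, not the study's measurements, and a simple parallel rule of mixtures stands in for the paper's models.

```python
# Hedged sketch: effective conductivity from filling coefficients, then
# fabric thermal resistance R = h / lambda_eff. All numbers illustrative.

K_FIBRE, K_WATER, K_AIR = 0.20, 0.60, 0.026   # W/(m*K), typical values

def effective_conductivity(fractions, conductivities):
    """Parallel rule-of-mixtures estimate: lambda_eff = sum(f_i * k_i)."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(f * k for f, k in zip(fractions, conductivities))

def thermal_resistance(thickness_m, lam):
    """Conductive resistance of the fabric layer, K*m^2/W."""
    return thickness_m / lam

ks = [K_FIBRE, K_WATER, K_AIR]
# Dry sock: 30% fibre, 70% air. Wet sock: water displaces part of the air.
r_dry = thermal_resistance(0.002, effective_conductivity([0.30, 0.00, 0.70], ks))
r_wet = thermal_resistance(0.002, effective_conductivity([0.30, 0.40, 0.30], ks))
print(round(r_dry, 4), round(r_wet, 4))   # water sharply lowers the resistance
```

Because water conducts heat far better than the air it displaces, the wet resistance drops to a fraction of the dry value, which is exactly why wet socks feel cold.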

Keywords: thermal resistance, mathematical model, plain socks, moisture loss rate

Procedia PDF Downloads 192
3508 Factors Affecting Expectations and Intentions of University Students’ Mobile Phone Use in Educational Contexts

Authors: Davut Disci

Abstract:

Objective: to measure the factors affecting university students' expectations and intentions regarding mobile phone use in educational contexts, using advanced equation and modeling techniques. Design and Methodology: According to the literature, Mobile Addiction, Parental Surveillance-Safety/Security, Social Relations, and Mobile Behavior are the terms most often used to describe people's mobile phone use. These variables were therefore measured to estimate their effects on expectations and intentions of using the mobile phone in an educational context. 421 university students (229 female, 192 male) participated in this study. To examine mobile behavior and educational expectations and intentions, a questionnaire was prepared and administered to the participants, who answered all questions online. Responses to the closed-ended questions were analyzed with the Statistical Package for the Social Sciences (SPSS): reliabilities were measured with Cronbach's alpha, hypotheses were examined with multiple and linear regression analyses, and the model was tested with the Structural Equation Modeling (SEM) technique, which is important for testing the model scientifically. The open-ended responses were also taken into consideration. Results: The analysis of the closed-ended questions shows that Mobile Addiction, Parental Surveillance, Social Relations, and Frequency of Using Mobile Phone Applications affect the participants' mobile behavior to different degrees, helping them to use the mobile phone in an educational context. Moreover, in the open-ended questions, participants stated that they use many mobile applications in their learning environment for contacting friends, watching educational videos, and finding course material on the internet. They also agreed that the mobile phone brings greater flexibility to their lives.
According to the SEM results, the model did not reach an acceptable fit and may need to be improved before it can be confirmed in SEM in addition to multiple regression. Conclusion: This study shows that the specified model can be used by educationalists and school authorities to improve their learning environments.

Keywords: education, mobile behavior, mobile learning, technology, Turkey

Procedia PDF Downloads 417
3506 The Implementation of a Numerical Technique to Thermal Design of Fluidized Bed Cooler

Authors: Damiaa Saad Khudor

Abstract:

The paper describes an investigation into the thermal design of a fluidized bed cooler and the prediction of the heat transfer rates among the media. It is devoted to the thermal design of such equipment and its application in industry, and it outlines the strategy for the fluidization heat transfer mode and its industrial implementation. The procedure is used to furnish a complete design of a fluidized bed cooler for sodium bicarbonate. The total thermal load distribution between air-solid and water-solid along the cooler is calculated according to the thermal equilibrium. A step-by-step technique was used to accomplish the thermal design, predicting the load and the air, solid, and water temperatures along the trough. The resulting design calls for a heat exchanger consisting of 65 horizontal tubes, 33.4 mm in diameter and 4 m long, inside the bed trough.
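The step-by-step idea can be sketched as an explicit march along the trough: each segment transfers a bit of heat from the solids to the coolant and updates the solid temperature. The UA value, flow heat capacity, and temperatures below are illustrative assumptions, not the study's data, and the coolant side is held at a fixed temperature for simplicity.

```python
# Hedged sketch of a stepwise thermal-design march along a cooler trough.
# Each segment removes q = UA * (T_solid - T_cool) from the solids.

def march_solid_temperature(t_solid_in, t_cool, ua_per_step, m_dot_cp, steps):
    """Explicit segment-by-segment energy balance; returns the temperature
    profile of the solids along the trough (degrees C)."""
    t = t_solid_in
    profile = [t]
    for _ in range(steps):
        q = ua_per_step * (t - t_cool)   # W transferred in this segment
        t -= q / m_dot_cp                # dT = q / (m_dot * cp)
        profile.append(t)
    return profile

profile = march_solid_temperature(t_solid_in=140.0, t_cool=25.0,
                                  ua_per_step=50.0, m_dot_cp=2000.0, steps=40)
print(round(profile[0], 1), round(profile[-1], 1))
```

A full design would also update the air and water temperatures in each segment and size the tube bundle from the accumulated duty; this sketch only shows the marching scheme.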

Keywords: fluidization, powder technology, thermal design, heat exchangers

Procedia PDF Downloads 508
3505 Comparison of ANN and Finite Element Model for the Prediction of Ultimate Load of Thin-Walled Steel Perforated Sections in Compression

Authors: Zhi-Jun Lu, Qi Lu, Meng Wu, Qian Xiang, Jun Gu

Abstract:

The analysis of perforated steel members is a 3D problem in nature, so the traditional analytical expressions for the ultimate load of thin-walled steel sections cannot be used for perforated steel member design. In this study, the finite element method (FEM) and an artificial neural network (ANN) were used to simulate stub column tests based on specific codes. Results show that the ultimate load predictions obtained from the ANN were much closer to the physical experiments than those of the FEM model. The ANN approach to the hard problem of complex perforated steel sections is therefore very promising.
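The ANN surrogate idea can be sketched with a tiny one-hidden-layer network trained by stochastic gradient descent on a synthetic "ultimate load" surface of two normalized features (say, slenderness and a hole-area ratio). The data, network size, and training schedule are illustrative assumptions, not the paper's model.

```python
import math
import random

# Hedged sketch: a minimal MLP fit to a synthetic load surface by plain
# per-sample backpropagation. Everything here is a placeholder.

random.seed(0)

def target(x1, x2):                       # synthetic load surface (placeholder)
    return 1.0 - 0.5 * x1 - 0.3 * x2 * x2

H = 6                                     # hidden units
w1 = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    return h, sum(w2[j] * h[j] for j in range(H)) + b2

data = [((a / 4.0, b / 4.0), target(a / 4.0, b / 4.0))
        for a in range(5) for b in range(5)]

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

lr = 0.05
loss_before = mse()
for _ in range(500):
    for x, y in data:
        h, out = forward(x)
        e = out - y
        for j in range(H):
            gh = e * w2[j] * (1.0 - h[j] * h[j])   # gradient through tanh
            w2[j] -= lr * e * h[j]
            w1[j][0] -= lr * gh * x[0]
            w1[j][1] -= lr * gh * x[1]
            b1[j] -= lr * gh
        b2 -= lr * e
loss_after = mse()
print(round(loss_before, 4), round(loss_after, 5))
```

In the study's setting the training targets would come from stub column tests rather than a formula, which is precisely what lets the ANN absorb 3D effects that closed-form expressions miss.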

Keywords: artificial neural network (ANN), finite element method (FEM), perforated sections, thin-walled steel, ultimate load

Procedia PDF Downloads 346
3504 A Damage-Plasticity Concrete Model for Damage Modeling of Reinforced Concrete Structures

Authors: Thanh N. Do

Abstract:

This paper addresses the modeling of two critical behaviors of concrete material in reinforced concrete components: (1) the increase in strength and ductility due to confining stresses from surrounding transverse steel reinforcement, and (2) the progressive deterioration in strength and stiffness due to high strain and/or cyclic loading. To advance the state of the art, the author presents a new 3D constitutive model of concrete material based on plasticity and continuum damage mechanics theory to simulate both the confinement effect and the strength deterioration in reinforced concrete components. The model defines a yield function of the stress invariants and a compressive damage threshold based on the level of confining stresses to automatically capture the increase in strength and ductility under high compressive stresses. The model introduces two damage variables to describe the strength and stiffness deterioration under tensile and compressive stress states. The damage formulation characterizes well the degrading behavior of concrete material, including the nonsymmetric strength softening in tension and compression, as well as the progressive strength and stiffness degradation under primary and follower load cycles. The proposed damage model is implemented in a general-purpose finite element analysis program, allowing an extensive set of numerical simulations to assess its ability to capture the confinement effect and the degradation of the load-carrying capacity and stiffness of structural elements. It is validated against a collection of experimental data on the hysteretic behavior of reinforced concrete columns and shear walls under different load histories. These correlation studies demonstrate the ability of the model to describe vastly different hysteretic behaviors with a relatively consistent set of parameters, with excellent consistency in response determination and very good accuracy.
Its numerical robustness and computational efficiency are also very good and will be further assessed with large-scale simulations of structural systems.
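The damage-variable idea can be reduced to a 1D sketch: the undamaged stress E·ε is degraded by a scalar d that grows with the largest strain seen so far, so unloading and reloading show a reduced stiffness. The linear softening law and all parameters below are illustrative, not the paper's 3D formulation.

```python
# Hedged 1D sketch of scalar damage mechanics: sigma = (1 - d) * E * eps,
# with d driven by a history variable kappa (damage never heals).

E = 30000.0     # MPa, undamaged modulus (illustrative)
EPS0 = 0.0001   # damage threshold strain
EPSF = 0.004    # strain at full damage

def damage(kappa):
    """Linear-softening damage law: 0 below the threshold, 1 at EPSF."""
    if kappa <= EPS0:
        return 0.0
    return min(1.0, (kappa - EPS0) / (EPSF - EPS0))

def stress_history(strains):
    kappa, out = 0.0, []
    for eps in strains:
        kappa = max(kappa, abs(eps))          # irreversible history variable
        out.append((1.0 - damage(kappa)) * E * eps)
    return out

s = stress_history([0.002, 0.001])            # load, then unload
virgin = stress_history([0.001])[0]           # virgin loading to same strain
print(round(s[0], 2), round(s[1], 2), round(virgin, 2))
```

The unloading stress at ε = 0.001 is below the virgin-curve stress at the same strain, which is the stiffness-degradation signature the full model reproduces in 3D with separate tensile and compressive damage variables.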

Keywords: concrete, damage-plasticity, shear wall, confinement

Procedia PDF Downloads 164
3503 Multi-Label Approach to Facilitate Test Automation Based on Historical Data

Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally

Abstract:

The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, call for enhanced quality assurance techniques. Test automation is one option to tackle the prevailing challenges by supporting test engineers with fast, parallel, and repetitive test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, establishing test automation requires a high initial investment of test resources, which in most cases conflicts with the time constraints provided for quality assurance of complex software systems. Hence, computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning to the generation of automated test cases. It is based on supervised learning to analyze test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For the test case generation, this approach exploits historical data from test automation projects. The identified patterns are the foundation for predicting the implementation of unknown test case specifications. With this support, a test engineer only has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge in the form of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small-sized real-world systems.
The most prominent EC is ‘Subset Accuracy’. The promising results show an accuracy of at least 86% for test cases where a 1:1 (multi-class) relationship between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still at 60%, which is better than the current state-of-the-art results. The prediction quality is expected to increase for larger systems with corresponding historical data. Consequently, this technique facilitates the time reduction for establishing test automation and is independent of the application domain and project. As a work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in case labelled historical data is scarce.
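The 'Subset Accuracy' criterion can be made concrete in a few lines: a multi-label prediction counts as correct only if the predicted set of automation components matches the specified set exactly. The component names below are invented examples, and the Jaccard-style score is shown only as a contrast to the strict criterion.

```python
# Hedged sketch of two multi-label evaluation criteria. Labels (automation
# component names) are illustrative placeholders.

def subset_accuracy(y_true, y_pred):
    """Fraction of test steps whose predicted label set is exactly right."""
    hits = sum(1 for t, p in zip(y_true, y_pred) if set(t) == set(p))
    return hits / len(y_true)

def jaccard_score_avg(y_true, y_pred):
    """Milder per-step overlap: |intersection| / |union|, averaged."""
    scores = []
    for t, p in zip(y_true, y_pred):
        t, p = set(t), set(p)
        scores.append(1.0 if not (t | p) else len(t & p) / len(t | p))
    return sum(scores) / len(y_true)

y_true = [["click"], ["type", "wait"], ["assert"]]
y_pred = [["click"], ["type"],         ["assert"]]
print(round(subset_accuracy(y_true, y_pred), 3),
      round(jaccard_score_avg(y_true, y_pred), 3))  # 0.667 0.833
```

The strictness of subset accuracy explains why the multi-label figure (60%) sits so far below the multi-class figure (86%): one missed component in a set counts the whole step as wrong.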

Keywords: machine learning, multi-class, multi-label, supervised learning, test automation

Procedia PDF Downloads 128
3502 Behavioral Patterns of Adopting Digitalized Services (E-Sport versus Sports Spectating) Using Agent-Based Modeling

Authors: Justyna P. Majewska, Szymon M. Truskolaski

Abstract:

The growing importance of digitalized services in the so-called new economy, including the e-sports industry, has been observed recently. Various demographic and technological changes lead consumers to modify their needs, not regarding the services themselves but the method of their application (attracting customers, forms of payment, new content, etc.). In the case of leisure related to competitive spectating activities, there is a growing need to participate in events whose content is not a sports competition but a computer game challenge: e-sport. The literature in this area has so far focused on determining the number of e-sport fans with elements of simple statistical description (mainly demographic characteristics such as age, gender, and place of residence). Meanwhile, the development of the industry is influenced by a combination of many different, intertwined demographic, personality, and psychosocial characteristics of customers, as well as the characteristics of their environment. Therefore, there is a need for a deeper recognition of the determinants of the behavioral patterns behind customers' selection of digitalized services, which, in the absence of available large data sets, can be achieved by econometric simulation: multi-agent modeling. The cognitive aim of the study is to reveal the internal and external determinants of behavioral patterns under various variants of economic development (the pace of digitization and technological development, socio-demographic changes, etc.). In the paper, an agent-based model with heterogeneous agents (characteristics of the customers themselves and of their environment) was developed, which allowed the identification of a three-stage development scenario: i) initial interest, ii) standardization, and iii) full professionalization. The probabilities governing the transition process were estimated using the Method of Simulated Moments.
The estimation of the agent-based model parameters and the sensitivity analysis reveal the crucial factors that have driven the rising trend in e-sport spectating and, in a wider perspective, the development of digitalized services. Among the psychosocial characteristics of customers, these are the level of familiarity with the rules of games and sports disciplines, active and passive participation history, and the individual perception of challenging activities. Environmental factors include the general reception of games, the number and recognition level of community builders, and the level of technological development of streaming and community-building platforms. However, the crucial factor underlying the good predictive power of the model is the level of professionalization. In the initial interest phase, the entry barriers for new customers are high; they decrease during the standardization phase and increase again in the full professionalization phase, when new customers perceive the required participation history as inaccessible. In this case, customers are prone to switch to new methods of service application, which for e-sport versus sports means new content and more modern methods of its delivery. In a wider context, the findings support the idea of a life cycle of services regarding the methods of their application, from ‘traditional’ to digitalized.
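A minimal threshold-adoption sketch of the agent-based idea: each agent starts spectating e-sport once the adopter share among its peers exceeds a personal entry barrier, which the development phase shifts up or down. The phase offsets, threshold distribution, and population size are illustrative assumptions, not the paper's estimated model.

```python
import random

# Hedged sketch of a Granovetter-style threshold cascade with a
# phase-dependent entry barrier. All parameters are placeholders.

PHASE_OFFSET = {"initial": 0.25, "standardization": -0.15,
                "professionalization": 0.10}

def simulate(n_agents=200, steps=30, phase="standardization"):
    offset = PHASE_OFFSET[phase]
    thresholds = [random.random() + offset for _ in range(n_agents)]
    adopted = [t <= 0.0 for t in thresholds]          # innovators, if any
    for _ in range(steps):
        share = sum(adopted) / n_agents
        adopted = [a or t <= share for a, t in zip(adopted, thresholds)]
    return sum(adopted) / n_agents

random.seed(42)
print(simulate(phase="initial"), simulate(phase="standardization"))
```

In this toy, only the standardization phase pushes some thresholds to zero or below, so the cascade ignites there and stalls at zero adoption in the high-barrier phases, roughly mirroring the entry-barrier dynamic described in the abstract.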

Keywords: agent-based modeling, digitalized services, e-sport, spectators motives

Procedia PDF Downloads 169
3501 Modeling and Control of an Acrobot Using MATLAB and Simulink

Authors: Dong Sang Yoo

Abstract:

The problem of finding control laws for underactuated systems has attracted growing attention, since such systems have fewer actuators than degrees of freedom to be controlled. The acrobot, a planar two-link robotic arm in the vertical plane with an actuator at the elbow but none at the shoulder, is a representative underactuated system. In this paper, the dynamic model of the acrobot is implemented using MathWorks’ Simscape, and a sliding mode controller is constructed using MATLAB and Simulink.
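The sliding-mode idea can be illustrated on a simpler, fully actuated cousin: a single pendulum driven to the upright position. The acrobot itself needs a two-link model in Simscape; this Python sketch only shows the sliding surface s = ė + λe and the discontinuous switching law that such designs share, with gains chosen for the example.

```python
import math

# Hedged sketch: sliding-mode control of a pendulum, theta'' = -(g/l)*sin(theta) + u,
# driven to the upright position. Gains and time step are illustrative.

G_OVER_L = 9.81          # g/l for a 1 m pendulum
LAM, K = 2.0, 5.0        # sliding-surface slope and switching gain
DT, STEPS = 0.001, 8000
THETA_D = math.pi        # upright target (theta measured from hanging down)

theta, omega = 0.1, 0.0  # start near the bottom
for _ in range(STEPS):
    e, de = theta - THETA_D, omega
    s = de + LAM * e                          # sliding surface
    # feedback-linearizing term plus discontinuous switching term
    u = G_OVER_L * math.sin(theta) - LAM * de - K * (1 if s > 0 else -1)
    alpha = -G_OVER_L * math.sin(theta) + u   # angular acceleration
    omega += alpha * DT                       # explicit Euler integration
    theta += omega * DT

print(round(abs(theta - THETA_D), 3))
```

On the surface s = 0 the error decays exponentially (ė = -λe), while the switching term drives the state onto the surface in finite time; the residual chattering visible in the final error is the classic side effect of the sign-function control.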

Keywords: acrobot, MATLAB and Simulink, sliding mode control, underactuated system

Procedia PDF Downloads 794
3500 The UAV Feasibility Trajectory Prediction Using Convolution Neural Networks

Authors: Adrien Marque, Daniel Delahaye, Pierre Maréchal, Isabelle Berry

Abstract:

Wind direction and wind uncertainty are crucial to aircraft and unmanned aerial vehicle trajectories. By computing a wind covariance matrix at each spatial grid point, the spatial grid can be treated as an image whose elements are symmetric positive definite matrices. A data pre-processing step and specific convolution, max-pooling, and flatten layers are implemented to process such images. The neural network is then applied to these grids of wind covariance matrices to solve classification problems concerning the feasibility of unmanned aerial vehicle trajectories under the given wind direction and uncertainty.
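The pre-processing idea can be sketched by turning wind samples at one grid point into a 2×2 covariance matrix over the (u, v) components and verifying it is symmetric positive definite, the property the specialized layers are built to respect. The wind samples below are invented for the example.

```python
# Hedged sketch: sample covariance of 2D wind vectors at one grid point,
# plus a 2x2 SPD check via Sylvester's criterion. Samples illustrative.

def covariance_2d(samples):
    """Unbiased sample covariance of (u, v) wind components."""
    n = len(samples)
    mu = [sum(s[i] for s in samples) / n for i in (0, 1)]
    c = [[0.0, 0.0], [0.0, 0.0]]
    for u, v in samples:
        du, dv = u - mu[0], v - mu[1]
        c[0][0] += du * du; c[0][1] += du * dv
        c[1][0] += dv * du; c[1][1] += dv * dv
    return [[x / (n - 1) for x in row] for row in c]

def is_spd(m, tol=1e-12):
    """2x2 SPD test: symmetry plus positive leading principal minors."""
    sym = abs(m[0][1] - m[1][0]) < tol
    return sym and m[0][0] > 0 and (m[0][0] * m[1][1] - m[0][1] * m[1][0]) > 0

winds = [(3.1, -0.4), (2.7, 0.2), (3.5, -0.1), (2.9, 0.3), (3.3, -0.5)]
cov = covariance_2d(winds)
print(cov, is_spd(cov))
```

Each grid point contributes one such matrix, so the "image" fed to the network has SPD-valued pixels rather than scalar intensities, which is what motivates the specialized convolution and pooling layers.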

Keywords: wind direction, uncertainty level, unmanned aerial vehicle, convolution neural network, SPD matrices

Procedia PDF Downloads 31
3499 A Semantic Analysis of Modal Verbs in Barack Obama’s 2012 Presidential Campaign Speech

Authors: Kais A. Kadhim

Abstract:

This paper is a semantic analysis of the English modals in Obama’s speeches. The main objective of this study is to analyze selected modal auxiliaries identified in speeches from Obama’s campaign, based on Coates’ (1983) semantic clusters. A total of fifteen campaign speeches were selected as the primary data, and the modal auxiliaries selected for analysis are will, would, can, could, should, must, ought, shall, may, and might. All modal auxiliaries taken from the speeches were analyzed within the framework of Coates’ semantic clusters, in order to examine how modal auxiliaries are used to persuade in Obama’s campaign speeches. The findings reveal that modals of intention, prediction, and futurity, and modals of possibility, ability, and permission, are the ones most used in Obama’s campaign speeches.
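The counting step behind such an analysis can be sketched in a few lines: tally the ten modal auxiliaries in a passage and map them onto coarse meaning groups. The grouping below is a simplification of Coates' clusters, and the sample sentence is invented, not a quotation from the speeches.

```python
import re
from collections import Counter

# Hedged sketch: tally modal auxiliaries and attach coarse meaning groups.
# The GROUPS mapping simplifies Coates' clusters for illustration.

MODALS = {"will", "would", "can", "could", "should",
          "must", "ought", "shall", "may", "might"}
GROUPS = {"will": "prediction/intention", "shall": "prediction/intention",
          "would": "hypothetical", "can": "ability/possibility",
          "could": "ability/possibility", "may": "possibility/permission",
          "might": "possibility/permission", "should": "obligation",
          "must": "obligation", "ought": "obligation"}

def count_modals(text):
    """Count occurrences of each modal auxiliary in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w in MODALS)

sample = ("Together we can do this, and together we will; "
          "we must not wait, though change may come slowly.")
counts = count_modals(sample)
print(counts)
```

Aggregating the counts by GROUPS then yields the cluster frequencies on which the semantic comparison (e.g., prediction/intention versus possibility/permission) rests.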

Keywords: modals, meaning, persuasion, speech

Procedia PDF Downloads 400