Search results for: sihar model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16850

15080 Using AI for Analysing Political Leaders

Authors: Shuai Zhao, Shalendra D. Sharma, Jin Xu

Abstract:

This research uses advanced machine learning models to test a number of hypotheses regarding political executives. Specifically, it analyses the impact these powerful leaders have on economic growth, using leaders’ data from the Archigos database from 1835 to the end of 2015. The data are processed with AutoGluon, an automated machine learning (AutoML) framework developed by Amazon, which automatically extracts features from the data and then trains multiple classifiers on them. A linear regression model and a classification model are used to establish the relationship between leaders and economic growth (GDP per capita growth), and to clarify the relationship between their characteristics and economic growth from a machine learning perspective. Our work may serve as a model for collaboration between the fields of statistics and artificial intelligence (AI) and light the way for political researchers and economists.

Keywords: comparative politics, political executives, leaders’ characteristics, artificial intelligence

Procedia PDF Downloads 87
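The regression step in the abstract above can be illustrated with a stdlib-only sketch; the study itself uses AutoGluon's AutoML pipeline, and the tenure and growth figures below are hypothetical, not Archigos data.

```python
# Minimal sketch: simple linear regression linking a single leader
# characteristic (e.g., years in office) to GDP per capita growth.
# Stands in for the linear-regression step only; all data are toy values.

def ols_fit(xs, ys):
    """Return (slope, intercept) of the least-squares line."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical toy data: tenure (years) vs. growth (%).
tenure = [2, 4, 6, 8, 10]
growth = [1.0, 1.9, 3.1, 4.0, 5.0]
slope, intercept = ols_fit(tenure, growth)
```

In a real analysis the covariates would be the coded leader characteristics and the fit would be delegated to the AutoML framework.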
15079 Modeling of a Vehicle Wheel System having a Built-in Suspension Structure Consisted of Radially Deployed Colloidal Spokes between Hub and Rim

Authors: Barenten Suciu

Abstract:

In this work, by replacing the traditional solid spokes with colloidal spokes, a vehicle wheel with a built-in suspension structure is proposed. Following the background and description of the wheel system, firstly, a vibration model of the wheel equipped with colloidal spokes is proposed, and based on this model the equivalent damping coefficients and spring constants are identified. Then, a modified model of a quarter-vehicle moving on a rough pavement is proposed in order to estimate the transmissibility of vibration from the road roughness to the vehicle body. Finally, the optimal design of the colloidal spokes and the optimum number of colloidal spokes are determined in order to minimize the transmissibility of vibration, i.e., to maximize the ride comfort of the vehicle.

Keywords: built-in suspension, colloidal spoke, intrinsic spring, vibration analysis, wheel

Procedia PDF Downloads 509
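The transmissibility the abstract above sets out to minimize can be sketched under the simplifying assumption of a base-excited single-degree-of-freedom quarter-vehicle model; the damping ratios used below are illustrative.

```python
import math

# Transmissibility |X/Y| of a base-excited 1-DOF system:
#   T(r, zeta) = sqrt( (1 + (2*zeta*r)^2) / ((1 - r^2)^2 + (2*zeta*r)^2) )
# with frequency ratio r = omega/omega_n and damping ratio zeta.

def transmissibility(r, zeta):
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2
    return math.sqrt(num / den)
```

Note the classical design fact the optimization exploits: all damping curves cross T = 1 at r = sqrt(2), so isolation (T < 1) only occurs above that frequency ratio.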
15078 PitMod: The Lorax Pit Lake Hydrodynamic and Water Quality Model

Authors: Silvano Salvador, Maryam Zarrinderakht, Alan Martin

Abstract:

Open pits, which result from mining, fill with water over time until the water reaches the elevation of the local water table, generating mine pit lakes. There are specific regulations on the water quality of pit lakes, and mining operations must keep the quality of groundwater above pre-defined standards. Therefore, an accurate, accepted numerical model predicting pit lakes’ water balance and water quality is needed in advance of mine excavation. We continue to analyze and develop the model introduced by Crusius, Dunbar, et al. (2002) for pit lakes. This model, called “PitMod”, simulates the physical and geochemical evolution of pit lakes over time scales ranging from a few months up to a century or more. Here, a lake is approximated as one-dimensional, horizontally averaged vertical layers. PitMod calculates the time-dependent vertical distribution of physical and geochemical pit lake properties, such as temperature, salinity, conductivity, pH, trace metals, and dissolved oxygen, within each model layer. The model considers the effects of pit morphology, climate data, multiple surface and subsurface (groundwater) inflows/outflows, precipitation/evaporation, surface ice formation/melting, vertical mixing due to surface wind stress, convection, background turbulence, and equilibrium geochemistry, the last through a link to PHREEQC for the geochemical reactions. PitMod, which has been used and validated in over 50 mine projects since 2002, incorporates physical processes like those found in other lake models such as DYRESM (Imerito 2007). Unlike DYRESM, however, PitMod also includes geochemical processes, pit wall runoff, and other effects. In addition, PitMod is under active development and can be customized as required for a particular site.

Keywords: pit lakes, mining, modeling, hydrology

Procedia PDF Downloads 163
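A toy version of the water-balance bookkeeping behind a pit-lake model such as the one above (filling clamped at the capacity set by the water-table elevation) might look as follows; all rates are illustrative, and PitMod itself adds layered mixing, geochemistry, and ice.

```python
# Toy pit-lake water balance: the pit fills until the lake surface
# reaches the water-table-controlled capacity. Units are arbitrary;
# inflow and evaporation per step are purely illustrative.

def fill_pit(volume, capacity, inflow, evaporation, steps):
    """March the lake volume forward; clamp at the pit capacity."""
    for _ in range(steps):
        volume = min(capacity, volume + inflow - evaporation)
    return volume

final = fill_pit(volume=0.0, capacity=100.0, inflow=2.0,
                 evaporation=0.5, steps=100)
```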
15077 Model Evaluation of Nanosecond, High-Intensity Electric Pulses Induced Cellular Apoptosis

Authors: Jiahui Song, Ravindra Joshi

Abstract:

High-intensity, nanosecond pulsed electric fields have been shown to be useful non-thermal tools capable of producing a variety of specific cellular responses. While reversible and temporary changes are often the desired outcomes of electromanipulation, irreversible effects can also be important objectives; these include the elimination of tumor cells and bacterial decontamination. A simple model-based rate-equation treatment of the various cellular biochemical processes was used to qualitatively predict the pulse-number-dependent caspase activation and cell survival trends. The model incorporated the caspase-8-associated extrinsic pathway, the delay inherent in its activation, cytochrome c release, and the internal feedback mechanism between caspase-3 and Bid. Results were roughly in keeping with the experimental cell-survival data: a pulse-number threshold was predicted, followed by a near-exponential fall-off. The intrinsic pathway was shown to be much weaker than the extrinsic mechanism for electric-pulse-induced cell apoptosis. Delays of about an hour are also predicted for detectable molecular concentration increases following electrical pulsing.

Keywords: apoptosis, cell survival, model, pathway

Procedia PDF Downloads 239
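A minimal sketch of the rate-equation treatment described above, assuming a single pulse-generated signal that activates caspase with first-order kinetics; the rate constants are illustrative, not the paper's fitted values, and integration is forward Euler.

```python
# Toy two-species rate equations: a pulse-generated signal S decays
# while activating caspase C.
#   dS/dt = -k_dec * S
#   dC/dt = +k_act * S
# Analytically C(inf) = (k_act / k_dec) * S0, a useful sanity check.

def integrate(s0, k_act, k_dec, dt, steps):
    s, c = s0, 0.0
    for _ in range(steps):
        ds = -k_dec * s          # signal decays
        dc = k_act * s           # caspase activated by the signal
        s += ds * dt
        c += dc * dt
    return c

c_final = integrate(s0=1.0, k_act=0.5, k_dec=0.5, dt=0.01, steps=2000)
```

With k_act = k_dec, the activated caspase should approach the initial signal amplitude from below.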
15076 Application of the Bionic Wavelet Transform and Psycho-Acoustic Model for Speech Compression

Authors: Chafik Barnoussi, Mourad Talbi, Adnane Cherif

Abstract:

In this paper we propose a new speech compression system based on the application of the Bionic Wavelet Transform (BWT) combined with the psychoacoustic model. This compression system is a modified version of a system using MDCT (Modified Discrete Cosine Transform) filter banks of 32 filters each together with the psychoacoustic model. The modification consists of replacing the MDCT filter-bank outputs with the bionic wavelet coefficients obtained by applying the BWT to the speech signal to be compressed. The two methods are evaluated and compared by computing the number of bits before and after compression. They are tested on different speech signals, and the simulation results show that the proposed technique outperforms the reference technique in terms of compressed file size. In terms of SNR, PSNR, and NRMSE, the output speech signals of the proposed compression system are of acceptable quality. In terms of PESQ and speech intelligibility, the proposed speech compression technique yields reconstructed speech signals of good quality.

Keywords: speech compression, bionic wavelet transform, filterbanks, psychoacoustic model

Procedia PDF Downloads 384
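The quality metrics reported above (SNR, NRMSE) can be sketched on toy signals; the 10% amplitude error below is hypothetical, standing in for reconstruction error after compression.

```python
import math

# Signal-quality metrics for an original vs. reconstructed signal.

def snr_db(orig, recon):
    """Signal-to-noise ratio in dB."""
    sig = sum(x * x for x in orig)
    err = sum((x - y) ** 2 for x, y in zip(orig, recon))
    return 10.0 * math.log10(sig / err)

def nrmse(orig, recon):
    """Root-mean-square error normalized by the signal range."""
    err = sum((x - y) ** 2 for x, y in zip(orig, recon))
    rng = max(orig) - min(orig)
    return math.sqrt(err / len(orig)) / rng

orig = [0.0, 1.0, 0.0, -1.0] * 8       # toy waveform
recon = [x * 0.9 for x in orig]        # hypothetical 10% amplitude error
```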
15075 Characteristics and Item Parameters Fitness on Chemistry Teacher-Made Test Instrument

Authors: Rizki Nor Amelia, Farida A. Setiawati

Abstract:

This study aimed to: (1) describe the characteristics of a teacher-made test instrument used to measure students’ ability in chemistry, and (2) assess the compatibility between the difficulty levels set by the teachers and the empirical difficulty levels. Based on these objectives, this was a descriptive study. The analysis used the Rasch model and Chi-square statistics, applied to the response patterns of high school students to the teacher-made chemistry test instrument for the 2015/2016 academic year in Yogyakarta. The sample comprised 358 students, taken by the cluster random sampling technique. The analysis showed that: (1) the teacher-made test instrument has a medium mean difficulty level and is capable of measuring ability on the interval −0.259 ≤ θ ≤ 0.659 logits, with a maximum test information function of 18.187 at an ability of +0.2 logits; (2) all items categorized as easy or difficult by the Rasch model match the teachers’ judgment, whereas of the 37 remaining items, the teachers categorized 8.10% as easy and 10.81% as difficult, with the rest categorized as medium. Overall, the distribution of difficulty levels formulated by the teachers does not match the difficulty levels based on the empirical results.

Keywords: chemistry, items parameter fitness, Rasch model, teacher-made test

Procedia PDF Downloads 239
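The dichotomous Rasch model underlying the analysis above can be sketched as follows; theta and b are in logits, matching the ability interval quoted in the abstract, and the specific values below are illustrative.

```python
import math

# Dichotomous Rasch model: probability of a correct response for a
# person of ability theta on an item of difficulty b (both in logits).

def rasch_p(theta, b):
    """P(correct) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of one item: p * (1 - p)."""
    p = rasch_p(theta, b)
    return p * (1.0 - p)
```

Item information peaks where ability matches difficulty, which is why a test information function has a maximum at a particular ability value (here, +0.2 logits in the study).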
15074 Bubble Point Pressures of CO2+Ethyl Palmitate by a Cubic Equation of State and the Wong-Sandler Mixing Rule

Authors: M. A. Sedghamiz, S. Raeissi

Abstract:

This study presents three different approaches to estimating bubble point pressures for the binary system of CO2 and the fatty acid ethyl ester ethyl palmitate. The first method involves the Peng-Robinson (PR) Equation of State (EoS) with the conventional Van der Waals mixing rule. The second involves the PR EoS with the Wong-Sandler (WS) mixing rule, coupled with the UNIQUAC GE model; to model the bubble point pressures with this approach, the volume and area parameters for ethyl palmitate were estimated by the Hansen group contribution method. The last method also combines the PR EoS with the Wong-Sandler mixing rule, but uses NRTL as the GE model. Results using the Van der Waals mixing rule clearly indicate that this method has the largest errors of the three, in the range of 3.96–6.22%. The PR-WS-UNIQUAC method exhibits small errors, with average absolute deviations between 0.95 and 1.97 percent. The PR-WS-NRTL method leads to the smallest errors, with average absolute deviations ranging between 0.65 and 1.7%.

Keywords: bubble pressure, Gibbs excess energy model, mixing rule, CO2 solubility, ethyl palmitate

Procedia PDF Downloads 476
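The average absolute deviations quoted above can be illustrated with a small sketch; the bubble-point pressures below are hypothetical, not the paper's data.

```python
# Average absolute deviation (AAD%) between measured and predicted
# bubble point pressures, the error metric used to rank the three
# thermodynamic approaches.

def aad_percent(measured, predicted):
    """Mean of |P_pred - P_exp| / P_exp, in percent."""
    terms = [abs(p - m) / m for m, p in zip(measured, predicted)]
    return 100.0 * sum(terms) / len(terms)

p_exp  = [10.0, 20.0, 40.0]   # hypothetical experimental pressures
p_calc = [10.2, 19.8, 40.4]   # hypothetical model output
```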
15073 Digital Transformation as the Subject of the Knowledge Model of the Discursive Space

Authors: Rafal Maciag

Abstract:

The development of current civilization demands suitable models of its pervasive, large-scale phenomena. One such phenomenon is the digital transformation, which has attracted a substantial number of disciplined, methodical interpretations that together form a diversified body of reflection. This reflection can be understood pragmatically as the current, temporally and locally differentiated state of knowledge. The model of the discursive space is proposed for the analysis and description of this knowledge. Discursive space is understood as an autonomous multidimensional space in which separate discourses traverse specific trajectories, which can be presented in a multidimensional parallel coordinate system. Built on the world of facts, the discursive space preserves the complex character of that world. Digital transformation as a discursive space has a relativistic character: it is created by the dynamic discourses at the same time as those discourses are molded by the shape of the space.

Keywords: complexity, digital transformation, discourse, discursive space, knowledge

Procedia PDF Downloads 192
15072 Assessment of High Frequency Solidly Mounted Resonator as Viscosity Sensor

Authors: Vinita Choudhary

Abstract:

Solidly Mounted Resonators (SMR) based on ZnO piezoelectric material, operating at a frequency of 3.96 GHz with a 6.49% coupling factor, are used to characterize liquids of different viscosities. The behavior of the sensor is analyzed using finite element modeling. The device architecture comprises a bulk acoustic wave resonator on a Mo/SiO₂ Bragg mirror reflector and a silicon substrate. The proposed SMR relies on the mass-loading effect: the resonant frequency of the resonator shifts with the increased density due to the absorbed liquids (water, acetone, olive oil) used in the theoretical calculation. The sensitivity of the sensors ranges from 0.238 MHz/mPa.s to 83.33 MHz/mPa.s, consistent with the Kanazawa model. The obtained results are also compared with previous work on BAW viscosity sensors.

Keywords: solidly mounted resonator, bragg mirror, kanazawa model, finite element model

Procedia PDF Downloads 82
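The Kanazawa model invoked above relates the frequency shift of a liquid-loaded resonator to the liquid's density and viscosity. Below is a sketch using the classic Kanazawa-Gordon form with standard AT-cut quartz constants; the paper's ZnO SMR analysis is analogous, not identical, so treat this as illustrative physics only.

```python
import math

# Kanazawa-Gordon frequency shift for a resonator loaded by a
# Newtonian liquid:
#   df = -f0^(3/2) * sqrt(rho_l * eta_l / (pi * mu_q * rho_q))
# Constants are the standard AT-cut quartz values (an assumption here;
# the paper's device is a ZnO SMR, not a quartz crystal).

MU_Q = 2.947e10    # shear modulus of quartz, Pa
RHO_Q = 2648.0     # density of quartz, kg/m^3

def kanazawa_shift(f0, rho_l, eta_l):
    """Frequency shift in Hz (negative) for liquid density/viscosity."""
    return -(f0 ** 1.5) * math.sqrt(rho_l * eta_l / (math.pi * MU_Q * RHO_Q))

water = kanazawa_shift(5e6, 1000.0, 1.0e-3)   # 5 MHz device in water
```

The f0^(3/2) scaling is what makes GHz-range SMRs far more sensitive to viscosity than MHz-range quartz devices.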
15071 A Comparative Study of Additive and Nonparametric Regression Estimators and Variable Selection Procedures

Authors: Adriano Z. Zambom, Preethi Ravikumar

Abstract:

One of the biggest challenges in nonparametric regression is the curse of dimensionality. Additive models are known to overcome this problem by estimating only the individual additive effect of each covariate. However, if the model is misspecified, the accuracy of the estimator compared to the fully nonparametric one is unknown. In this work the efficiency of completely nonparametric regression estimators such as loess is compared to that of estimators that assume additivity, in several situations including additive and non-additive regression scenarios. The comparison is done by computing the oracle mean square error of the estimators with respect to the true nonparametric regression function. Then, a backward elimination selection procedure based on the Akaike Information Criterion is proposed, computed from either the additive or the nonparametric model. Simulations show that if the additive model is misspecified, the percentage of time it fails to select important variables can be higher than that of the fully nonparametric approach. A dimension reduction step is included when the nonparametric estimator cannot be computed due to the curse of dimensionality. Finally, the Boston housing dataset is analyzed using the proposed backward elimination procedure and the selected variables are identified.

Keywords: additive model, nonparametric regression, variable selection, Akaike Information Criteria

Procedia PDF Downloads 266
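The AIC-based backward elimination proposed above can be sketched with AIC computed from residual sums of squares; the per-model RSS values below are hypothetical, standing in for refits of either the additive or the nonparametric model.

```python
import math

# One backward-elimination step driven by AIC = n*ln(RSS/n) + 2k,
# where k is the number of covariates in the candidate model.

def aic(rss, n, k):
    return n * math.log(rss / n) + 2 * k

def backward_step(rss_full, rss_without, n, k_full):
    """Drop the variable whose removal gives the lowest AIC,
    provided that AIC beats the full model's; return None otherwise."""
    best = aic(rss_full, n, k_full)
    drop = None
    for var, rss in rss_without.items():
        cand = aic(rss, n, k_full - 1)
        if cand < best:
            best, drop = cand, var
    return drop

# Hypothetical: removing x3 barely changes RSS, so x3 gets dropped.
drop = backward_step(rss_full=50.0,
                     rss_without={"x1": 120.0, "x2": 95.0, "x3": 50.5},
                     n=100, k_full=3)
```

Iterating this step until no removal improves AIC yields the selected variable set.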
15070 Simulation of Nonlinear Behavior of Reinforced Concrete Slabs Using Rigid Body-Spring Discrete Element Method

Authors: Felix Jr. Garde, Eric Augustus Tingatinga

Abstract:

Most analysis procedures for reinforced concrete (RC) slabs are based on elastic theory. When subjected to large forces, however, slabs deform beyond the elastic range, and the study of their behavior and performance requires nonlinear analysis. This paper presents a numerical model to simulate the nonlinear behavior of RC slabs using the rigid body-spring discrete element method. The proposed slab model, composed of rigid plate elements and nonlinear springs, is based on the yield line theory, which assumes that the nonlinear behavior of an RC slab subjected to transverse loads is contained in plastic or yield lines. In this model, the displacement of the slab is completely described by the rigid elements, and the deformation energy is concentrated in flexural springs uniformly distributed along the potential yield lines. The spring parameters are determined by comparing the transverse displacements and stresses developed in the slab, obtained using FEM, with those of the proposed model with an assumed homogeneous material. Numerical models of typical RC slabs with varying geometry, reinforcement, support conditions, and loading conditions show reasonable agreement with available experimental data. The model is also shown to be useful for investigating the dynamic behavior of slabs.

Keywords: RC slab, nonlinear behavior, yield line theory, rigid body-spring discrete element method

Procedia PDF Downloads 325
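The flexural springs placed along potential yield lines above can be sketched as a bilinear moment-rotation law; the stiffness and yield-rotation values used below are illustrative, not calibrated parameters.

```python
# Bilinear moment-rotation law for a nonlinear flexural spring:
# elastic stiffness k_el up to yield rotation theta_y, then a
# reduced post-yield stiffness k_pl. Symmetric in rotation sign.

def spring_moment(theta, k_el, theta_y, k_pl):
    sign = 1.0 if theta >= 0 else -1.0
    t = abs(theta)
    if t <= theta_y:
        return sign * k_el * t
    return sign * (k_el * theta_y + k_pl * (t - theta_y))
```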
15069 Improved Structure and Performance by Shape Change of Foam Monitor

Authors: Tae Gwan Kim, Hyun Kyu Cho, Young Hoon Lee, Young Chul Park

Abstract:

Foam monitors are devices installed on cargo tank decks to suppress cargo-area fires in oil tankers or hazardous-chemical cargo ships. In general, the main design parameter of a foam monitor is the projection distance of the discharged foam. In this study, the relationship between flow characteristics and projection distance, depending on the shape of the flow path, was examined, and numerical techniques for the fluid analysis of foam monitors were developed for prediction. Because the flow pattern varies with the shape of the flow path, the flow losses affecting projection distance were calculated through numerical analysis. The basic shape of the foam monitor was an L shape designed by N Company; the modified model lengthens the flow path using an S shape. The calculation results show that with the basic L shape the force is directed to one side, generating vibration and noise. The modified S-shaped model resolves this problem and, as a result, improves the projection distance from the nozzle.

Keywords: CFD, foam monitor, projection distance, moment

Procedia PDF Downloads 345
15068 Application of Model Free Adaptive Control in Main Steam Temperature System of Thermal Power Plant

Authors: Khaing Yadana Swe, Lillie Dewan

Abstract:

At present, cascade PID control is widely used to control the superheating temperature (main steam temperature). Because the main steam temperature exhibits large inertia, long time delays, and time-varying behavior, a conventional PID control strategy cannot achieve good control performance. To overcome these deficiencies of the main steam temperature control system, a Model-Free Adaptive Control (MFAC)-P cascade control system is proposed in this paper. By substituting MFAC for PID in the main control loop of the main steam temperature control, the proposed scheme can cope with time delays, nonlinearity, disturbances, and time variation.

Keywords: model-free adaptive control, cascade control, adaptive control, PID

Procedia PDF Downloads 603
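A compact-form dynamic-linearization MFAC loop of the kind substituted for PID above can be sketched as follows; the first-order plant and all gains below are illustrative, not the paper's superheater model.

```python
# Compact-form MFAC sketch: an online estimate of the pseudo-partial
# derivative (PPD) phi linearizes the unknown plant increment by
# increment, and the control law pushes the output to the setpoint.
# Plant and tuning constants are illustrative assumptions.

def mfac_run(setpoint, steps, eta=0.5, mu=1.0, rho=0.6, lam=1.0):
    y, u, u_prev, phi = 0.0, 0.0, 0.0, 1.0
    for _ in range(steps):
        y_new = 0.6 * y + 0.5 * u      # "unknown" first-order plant
        dy, du = y_new - y, u - u_prev
        # PPD estimation from the observed input/output increments
        phi += eta * du / (mu + du * du) * (dy - phi * du)
        # control update toward the setpoint
        u_prev, u = u, u + rho * phi / (lam + phi * phi) * (setpoint - y_new)
        y = y_new
    return y

y_end = mfac_run(setpoint=1.0, steps=200)
```

The point of the scheme is that only measured increments of u and y are used; no plant model is assumed, which is what lets MFAC tolerate the nonlinearity and time variation cited above.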
15067 Constructing Service Innovation Model for SMEs in Automotive Service Industries: A Case Study of Auto Repair Motorcycle in Makassar City

Authors: Muhammad Farid, Jen Der Day

Abstract:

The purpose of this study is to explore the construction of a service innovation model for small and medium-sized enterprises (SMEs) in the automotive service industries. A case study of motorcycle repair shops in Makassar city illustrates how to measure innovation implementation and the degree of innovation, and how to identify the type of innovation, using the service innovation model for SMEs. In this paper, we interview 10 managers of SMEs and analyze their answers. We find that innovation implementation has been slow, with only 0.62 new service innovations produced per year on average. Incremental innovation is the present choice for these SMEs, because they take the safer road of improving services continuously. To create radical innovation, they must still consider cost, systems, and the readiness of human resources.

Keywords: service innovation, incremental innovation, SMEs, automotive service industries

Procedia PDF Downloads 360
15066 Proposition Model of Micromechanical Damage to Predict Reduction in Stiffness of a Fatigued A-SMC Composite

Authors: Houssem Ayari

Abstract:

Sheet molding compounds (SMC) are high-strength thermoset moulding materials reinforced with glass fibres and processed by thermocompression. SMC composites combine glass fibres with polyester, phenolic, vinyl ester, and unsaturated acrylic resins to produce a high-strength moulding compound, and these materials are usually formulated to meet the performance requirements of the moulded part. In addition, the vinyl ester resins used in the new advanced SMC systems (A-SMC) have many desirable features, including mechanical properties comparable to those of epoxy, excellent chemical and tensile resistance, and cost competitiveness. In this paper, a model is proposed to account for the evolution of the Young's modulus of an A-SMC composite under fatigue tests. The proposed model and approach are in good agreement with the experimental results.

Keywords: composites SFRC, damage, fatigue, Mori-Tanaka

Procedia PDF Downloads 118
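The stiffness-reduction tracking described above can be sketched with the usual scalar damage variable D = 1 - E/E0; the modulus values below are hypothetical, not the paper's measurements.

```python
# Scalar damage variable from fatigue stiffness loss:
#   D = 1 - E / E0
# where E0 is the undamaged Young's modulus and E the current one.

def damage(e0, e_current):
    return 1.0 - e_current / e0

moduli = [12.0, 11.4, 10.8, 10.2]   # GPa, hypothetical per cycle block
d_curve = [damage(moduli[0], e) for e in moduli]
```

A micromechanical model such as the Mori-Tanaka scheme cited in the keywords predicts how D grows from the evolving microcrack content; this sketch only shows how D is read off measured stiffness.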
15065 Human Performance Technology (HPT) as an Entry Point to Achieve Organizational Development in Educational Institutions of the Ministry of Education

Authors: Alkhathlan Mansour

Abstract:

The current research aims at achieving organizational development in the educational institutions of the governorate of Al-Kharj through the Human Performance Technology (HPT) model named “The Intellectual Model to Improve Human Performance”. To achieve this goal, research tools consisting of targeted questionnaires were administered to a sample of 120 participants: department managers at Prince Sattam Bin Abdulaziz University (50), educational supervisors in the Department of Education (40), and school administrators in the governorate (30), supplemented by the views of education experts gathered through personal interviews on the proposal to achieve organizational development through the intellectual model to improve human performance. Among the most important results is that many obstacles prevent organizational development in educational institutions, so the research proposes a model for achieving organizational development through human performance technologies. The researcher also recommends that administrators ensure fairness in the distribution of incentives to employees of educational institutions, that leaders in educational institutions be trained in organizational development strategies, and that organizational development experts be prepared within educational institutions to develop the necessary policies and procedures for each institution.

Keywords: human performance, development, education, organizational

Procedia PDF Downloads 290
15064 In vitro Skin Model for Enhanced Testing of Antimicrobial Textiles

Authors: Steven Arcidiacono, Robert Stote, Erin Anderson, Molly Richards

Abstract:

There are numerous standard test methods for antimicrobial textiles that measure activity against specific microorganisms. However, these results often do not translate to the performance of treated textiles when worn by individuals. Standard test methods apply a single target organism, grown under optimal conditions, to a textile, then recover the organism to quantitate it and determine activity; this does not reflect the actual performance environment, which consists of polymicrobial communities in less-than-optimal conditions interacting with the textile on a skin substrate. Here we propose the development of an in vitro skin model method to bridge the gap between lab testing and wear studies. The model will consist of a defined polymicrobial community of 5-7 commensal microbes simulating the skin microbiome, seeded onto a solid tissue platform representing the skin. The protocol would entail adding a non-commensal test organism of interest to the defined community and applying a textile sample to the solid substrate. Following incubation, the textile would be removed and the organisms recovered and quantitated to determine antimicrobial activity. Important parameters to consider include the identification and assembly of the defined polymicrobial community, the growth conditions that allow the establishment of a stable community, and the choice of skin surrogate. This model could answer the following questions: 1) is the treated textile effective against the target organism? 2) How is the defined community affected? And 3) does the textile cause unwanted effects on the skin simulant? The proposed model would determine activity under conditions comparable to the intended application and provide expanded knowledge relative to current test methods.

Keywords: antimicrobial textiles, defined polymicrobial community, in vitro skin model, skin microbiome

Procedia PDF Downloads 139
15063 Predicting Radioactive Waste Glass Viscosity, Density and Dissolution with Machine Learning

Authors: Joseph Lillington, Tom Gout, Mike Harrison, Ian Farnan

Abstract:

The vitrification of high-level nuclear waste within borosilicate glass and its incorporation within a multi-barrier repository deep underground is widely accepted as the preferred disposal method. However, for this to happen, any safety case will require validation that the initially localized radionuclides will not be considerably released into the near/far-field. Therefore, accurate mechanistic models are necessary to predict glass dissolution, and these should be robust to a variety of incorporated waste species and leaching test conditions, particularly given substantial variations across international waste-streams. Here, machine learning is used to predict glass material properties (viscosity, density) and glass leaching model parameters from large-scale industrial data. A variety of machine learning algorithms have been compared to assess performance. Density was predicted solely from composition, whereas viscosity prediction additionally considered temperature. To predict suitable glass leaching model parameters, a large simulated dataset was created by coupling MATLAB and the chemical reactive-transport code HYTEC, considering the state-of-the-art GRAAL model (glass reactivity in allowance of the alteration layer). The trained models were then applied to the large-scale industrial, experimental data to identify potentially appropriate model parameters. Results indicate that ensemble methods can accurately predict viscosity as a function of temperature and composition across all three industrial datasets. Glass density prediction shows reliable learning performance, with predictions primarily within the experimental uncertainty of the test data. Furthermore, machine learning can predict the behavior of glass dissolution model parameters, demonstrating potential value in GRAAL model development and in assessing suitable model parameters for large-scale industrial glass dissolution data.

Keywords: machine learning, predictive modelling, pattern recognition, radioactive waste glass

Procedia PDF Downloads 117
15062 The Strategy of Teaching Digital Art in Classroom as a Way of Enhancing Pupils’ Artistic Creativity

Authors: Aber Salem Aboalgasm, Rupert Ward

Abstract:

Teaching art by digital means is a big challenge for the majority of teachers of art and artistic design courses in primary education schools. Such courses can clearly identify relationships between art, technology, and creativity in the classroom. The aim of this article is to present a modern way of teaching art, using digital tools in the art classroom, in order to improve creative ability in pupils aged between 9 and 11 years; it also presents a conceptual model for creativity based on digital art. The model could be useful for pupils interested in learning drawing with an e-drawing package, and for teachers who are interested in teaching modern digital art and improving children’s creativity. The model is designed to show the strategy of teaching art through technology so that children learn how to be creative, and to help education providers make suitable choices about which technological approaches to use to teach students and enhance their creative ability. It also aims to define the digital art tools that can help children develop their technical skills. Use of the model is further expected to help develop social interactive qualities that may improve intellectual ability.

Keywords: digital tools, motivation, creative activity, technical skill

Procedia PDF Downloads 463
15061 Statistical Inferences for GQARCH-Itô-Jumps Model Based on The Realized Range Volatility

Authors: Fu Jinyu, Lin Jinguan

Abstract:

This paper introduces a novel approach that unifies two types of models: the continuous-time jump-diffusion used to model high-frequency data, and the discrete-time GQARCH employed to model low-frequency financial data, by embedding the discrete GQARCH structure with jumps in the instantaneous volatility process. This model is named the “GQARCH-Itô-Jumps model”. We adopt realized range-based threshold estimation for the high-frequency financial data rather than realized return-based volatility estimators, which entail the loss of intra-day information on the price movement. Meanwhile, a quasi-likelihood function for the low-frequency GQARCH structure with jumps is developed for the parametric estimates. The asymptotic theory is established for the proposed estimators in the case of finite-activity jumps. Moreover, simulation studies are implemented to check the finite-sample performance of the proposed methodology. Specifically, it is demonstrated how the proposed approaches can be practically applied to financial data.

Keywords: Itô process, GQARCH, leverage effects, threshold, realized range-based volatility estimator, quasi-maximum likelihood estimate

Procedia PDF Downloads 160
15060 Evaluation of Model-Based Code Generation for Embedded Systems–Mature Approach for Development in Evolution

Authors: Nikolay P. Brayanov, Anna V. Stoynova

Abstract:

The model-based development approach is gaining support and acceptance. Its higher abstraction level simplifies system description, allowing domain experts to do their best work without particular knowledge of programming. The different levels of simulation support rapid prototyping, verification, and validation of the product even before it exists physically. Nowadays the model-based approach is beneficial for modelling complex embedded systems as well as for generating code for many different hardware platforms. Moreover, it can be applied in safety-relevant industries like automotive, where it brings extra automation to the expensive device certification process, especially software qualification. Some companies using it report cost savings and quality improvements, but others claim no major changes or even cost increases. This publication examines the level of maturity and autonomy of the model-based approach for code generation. It is based on a real-life automotive seat heater (ASH) module, developed using tools from The MathWorks, Inc. The model, created with Simulink, Stateflow, and MATLAB, is used for automatic generation of C code with Embedded Coder. To prove the maturity of the process, the Code Generation Advisor is used for automatic configuration, and all additional configuration parameters are set to “auto” where applicable, leaving the generation process to run autonomously. As a result of the investigation, the publication compares the quality of the automatically generated embedded code with that of manually developed code. The measurements show that, in general, the generated code is no worse than the manual one. A deeper analysis of the technical parameters enumerates the disadvantages, some of which are identified as topics for future work.

Keywords: embedded code generation, embedded C code quality, embedded systems, model-based development

Procedia PDF Downloads 244
15059 Predictive Semi-Empirical NOx Model for Diesel Engine

Authors: Saurabh Sharma, Yong Sun, Bruce Vernham

Abstract:

Accurate prediction of NOx emission is a continuing challenge in diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been adopted to address this issue. NOx formation depends strongly on the burned-gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against measured NOx, which limits the predictions of purely empirical models to the region in which they were calibrated. As an alternative, this paper focuses on using in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast, predictive NOx model built from physical parameters and empirical correlations. The model is developed from steady-state data collected over the entire operating region of the engine and from a predictive combustion model built in Gamma Technologies (GT)-Power using the Direct Injection (DI)-Pulse combustion object. In this approach, the temperatures of both the burned and unburned zones are considered over the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO), along with the oxygen concentration consumed in the burned zone and the trapped fuel mass. Several statistical methods are used to construct the model, including individual and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported: a substantial number of cases are tested for different engine configurations over a large span of speed and load points. Sweeps of operating conditions such as Exhaust Gas Recirculation (EGR), injection timing, and Variable Valve Timing (VVT) are also included in the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions under different ambient conditions. Its high accuracy and robustness across operating conditions, low computational time, and the small number of data points required for calibration establish a platform on which the model-based approach can be used for engine calibration and development. Moreover, this work aims to establish a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), and the NO2/NOx ratio.
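The dependence of NOx formation on burned-gas temperature and oxygen concentration described in the abstract can be illustrated with a toy thermal-NO (Zeldovich-style) correlation. The functional form and all constants below are illustrative assumptions for the sketch, not the calibrated semi-empirical model from the paper:

```python
import math

def nox_index(t_burn_k, o2_frac, a=38000.0, scale=1.0e8):
    """Toy thermal-NO correlation: rate ~ sqrt([O2]) * exp(-A / T).

    t_burn_k : burned-zone temperature [K]
    o2_frac  : O2 mole fraction in the burned zone [-]
    a, scale : illustrative constants (NOT calibrated values)
    """
    return scale * math.sqrt(max(o2_frac, 0.0)) * math.exp(-a / t_burn_k)

# Hotter burned gas -> exponentially more NO; EGR lowers both T and O2,
# which is why EGR sweeps matter so much for validation.
low = nox_index(2200.0, 0.05)
high = nox_index(2600.0, 0.05)
```

The exponential temperature term is what makes purely empirical fits extrapolate poorly outside their calibration region, motivating the physics-based inputs used here.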

Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical

Procedia PDF Downloads 114
15058 Unsteady Rayleigh-Bénard Convection of Nanoliquids in Enclosures

Authors: P. G. Siddheshwar, B. N. Veena

Abstract:

Rayleigh-Bénard convection of a nanoliquid in shallow, square, and tall enclosures is studied using the Khanafer-Vafai-Lightstone single-phase model. Thermophysical properties of water, copper, copper oxide, alumina, silver, and titania at 300 K under stagnant conditions, collected from the literature, are used to calculate the thermophysical properties of the water-based nanoliquids by means of phenomenological laws and mixture theory. Free-free, rigid-rigid, and rigid-free boundary conditions are considered. For each boundary combination, the intractable Lorenz model is derived and then reduced to the tractable Ginzburg-Landau model; the amplitude thus obtained is used to quantify heat transport in terms of the Nusselt number. The addition of nanoparticles is shown not to alter the influence of the nature of the boundaries on the onset of convection or on heat transport. Among the three enclosures considered, tall and shallow enclosures are found to transport the maximum and minimum energy, respectively. The enhancement of heat transport due to nanoparticles in the three enclosures is in the range 3%-11%. Results for rigid-rigid boundaries agree well with those of an earlier work. A limitation of the study is that the thermophysical properties are calculated using quantities modelled for static conditions.
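The phenomenological laws and mixture theory mentioned above are commonly the volume-fraction mixture rules for density and heat capacity, the Brinkman model for viscosity, and the Maxwell-Garnett model for thermal conductivity. A minimal sketch, using nominal room-temperature values for water and copper rather than the exact data of the study:

```python
def nanoliquid_properties(phi, base, particle):
    """Effective properties of a dilute nanoliquid via mixture theory.

    phi      : nanoparticle volume fraction (dilute, phi << 1)
    base     : dict with rho, cp, k, mu of the base liquid
    particle : dict with rho, cp, k of the nanoparticle material
    """
    rho = (1.0 - phi) * base["rho"] + phi * particle["rho"]
    rho_cp = ((1.0 - phi) * base["rho"] * base["cp"]
              + phi * particle["rho"] * particle["cp"])
    # Brinkman model for effective viscosity
    mu = base["mu"] / (1.0 - phi) ** 2.5
    # Maxwell-Garnett model for effective thermal conductivity
    kb, kp = base["k"], particle["k"]
    k = kb * (kp + 2.0 * kb - 2.0 * phi * (kb - kp)) / (kp + 2.0 * kb + phi * (kb - kp))
    return {"rho": rho, "cp": rho_cp / rho, "k": k, "mu": mu}

# Nominal values at ~300 K (assumed, for illustration)
water = {"rho": 997.0, "cp": 4179.0, "k": 0.613, "mu": 8.9e-4}
copper = {"rho": 8933.0, "cp": 385.0, "k": 401.0}
props = nanoliquid_properties(0.05, water, copper)
```

For a 5% copper loading this yields a conductivity enhancement of roughly 15%, the same order as the 3%-11% heat-transport enhancement reported.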

Keywords: enclosures, free-free, rigid-rigid, rigid-free boundaries, Ginzburg-Landau model, Lorenz model

Procedia PDF Downloads 257
15057 Evaluation of Turbulence Prediction over Washington, D.C.: Comparison of DCNet Observations and North American Mesoscale Model Outputs

Authors: Nebila Lichiheb, LaToya Myles, William Pendergrass, Bruce Hicks, Dawson Cagle

Abstract:

Atmospheric transport of hazardous materials in urban areas is increasingly under investigation due to the potential impact on human health and the environment. In response to health and safety concerns, several dispersion models have been developed to analyze and predict the dispersion of hazardous contaminants. The models of interest usually rely on meteorological information obtained from the meteorological models of NOAA’s National Weather Service (NWS). However, due to the complexity of the urban environment, NWS forecasts provide an inadequate basis for dispersion computation in urban areas. A dense meteorological network in Washington, DC, called DCNet, has been operated by NOAA since 2003 to support the development of urban monitoring methodologies and provide the driving meteorological observations for atmospheric transport and dispersion models. This study focuses on the comparison of wind observations from the DCNet station on the U.S. Department of Commerce Herbert C. Hoover Building against the North American Mesoscale (NAM) model outputs for the period 2017-2019. The goal is to develop a simple methodology for modifying NAM outputs so that the dispersion requirements of the city and its urban area can be satisfied. This methodology will allow us to quantify the prediction errors of the NAM model and propose adjustments of key variables controlling dispersion model calculation.
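Quantifying the prediction errors of the NAM model against DCNet observations amounts to computing error statistics such as bias and RMSE for wind speed, with wind-direction errors wrapped onto (-180°, 180°]. The sketch below is a generic illustration of that comparison; the variable names and sample values are hypothetical, not DCNet or NAM data:

```python
import math

def direction_error(obs_deg, mod_deg):
    """Smallest signed angular difference, wrapped to (-180, 180]."""
    return (mod_deg - obs_deg + 180.0) % 360.0 - 180.0

def bias_rmse(obs, mod):
    """Mean error (bias) and root-mean-square error of model vs observation."""
    errs = [m - o for o, m in zip(obs, mod)]
    bias = sum(errs) / len(errs)
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    return bias, rmse

# Hypothetical hourly wind speeds (m/s): tower observation vs model output
obs_ws = [3.1, 4.2, 5.0, 2.8, 3.9]
nam_ws = [3.6, 4.0, 5.8, 3.5, 4.4]
bias, rmse = bias_rmse(obs_ws, nam_ws)
```

The circular wrapping matters: comparing 350° observed with 10° modeled should give a 20° error, not 340°, or the direction statistics used to adjust the model outputs would be badly inflated.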

Keywords: meteorological data, Washington D.C., DCNet data, NAM model

Procedia PDF Downloads 234
15056 Development on the Modeling Driven Architecture

Authors: Sahar Shahsavaripour Ghazanfarpour

Abstract:

As daily life increasingly depends on the quality of services delivered by software systems and devices in our environment, teaching and modeling software quality becomes correspondingly important. With the continual growth of software systems and their use, evaluating requirements at an early stage of development, particularly at the architecture level, gains importance. Model-driven architecture transforms a platform-independent model into platform-specific models, with the aim of reducing the number of changes to the executable model. The software engineering design process is semi-automated. The quality attributes required of the design, and their representation, reside in the architecture models. The main problem is the relationship between requirements and architectural elements, which in some respects remains implicit in the models and input sources of the process, because there is no means of detecting it. The MARTE profile is used to describe real-time properties and to perform platform modeling.

Keywords: MDA, DW, OMG, UML, AKB, software architecture, ontology, evaluation

Procedia PDF Downloads 496
15055 Simulation Model of Induction Heating in COMSOL Multiphysics

Authors: K. Djellabi, M. E. H. Latreche

Abstract:

The induction heating phenomenon depends on various factors, making the problem highly nonlinear. Mathematical analysis of this problem is in most cases very difficult and is restricted to simple cases. Additional knowledge of induction heating systems is gained by trial and error in production environments, but such procedures are long and expensive. Numerical models of the induction heating problem are an alternative approach that avoids these drawbacks. This paper presents a simulation model of an induction heating system created in COMSOL Multiphysics. We report the results of numerical simulations of the induction heating of cylindrical workpieces in an inductor with four coils. The modelling was performed with COMSOL Multiphysics version 4.2a, and temperature distributions are presented.
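One source of the nonlinearity mentioned above is that the induced currents concentrate in a thin surface layer whose thickness, the skin depth, depends on frequency and on the (temperature-dependent) material properties. A small sketch of the textbook relation, separate from the COMSOL model itself, with nominal copper properties as assumed inputs:

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability [H/m]

def skin_depth(resistivity, mu_r, freq_hz):
    """Electromagnetic skin depth: delta = sqrt(2*rho / (omega * mu0 * mu_r)).

    Most of the induced heating power is deposited within ~delta of the
    workpiece surface, so the supply frequency controls the heated layer.
    """
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 * resistivity / (omega * MU0 * mu_r))

# Copper at room temperature (nominal rho = 1.68e-8 ohm*m, mu_r ~ 1)
d_50hz = skin_depth(1.68e-8, 1.0, 50.0)      # on the order of millimetres
d_10khz = skin_depth(1.68e-8, 1.0, 10.0e3)   # much thinner at high frequency
```

Because resistivity rises and, for ferromagnetic parts, permeability collapses with temperature, the skin depth changes during heating, which is exactly the coupled nonlinearity a finite element simulation resolves.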

Keywords: induction heating, electromagnetic field, inductor, numerical simulation, finite element

Procedia PDF Downloads 316
15054 Comparison of Johnson-Cook and Barlat Material Model for 316L Stainless Steel

Authors: Yiğit Gürler, İbrahim Şimşek, Müge Savaştaer, Ayberk Karakuş, Alper Taşdemirci

Abstract:

316L steel is frequently used in industry because of its easy formability and accessibility in sheet metal forming processes, and numerical and experimental studies of the mechanical behavior of 316L stainless steel during forming are common in the literature. It is the most widely used material in the production of plate heat exchangers, which are manufactured by plastic deformation of stainless steel sheet. The motivation of this study is to determine the appropriate material model for simulating the sheet metal forming process. For this reason, two material models were examined and Ls-Dyna material cards were created from material test data: MAT133_BARLAT_YLD2000 and MAT093_SIMPLIFIED_JOHNSON_COOK. Tensile and hydraulic bulge tests were performed both numerically and experimentally, the results were evaluated comparatively, and the more suitable material model was selected for the forming simulation. In future studies, this material model will be used in the numerical modeling of the sheet metal forming process.
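The simplified Johnson-Cook formulation referenced above models flow stress with strain hardening and strain-rate sensitivity but no thermal softening term. A minimal sketch of that hardening law follows; the constants are illustrative values of a plausible order for an austenitic steel, not the fitted 316L parameters from the study:

```python
import math

def simplified_johnson_cook(eps_p, eps_rate, A, B, n, C, eps_rate0=1.0):
    """Simplified Johnson-Cook flow stress (no thermal softening term):

        sigma = (A + B * eps_p**n) * (1 + C * ln(eps_rate / eps_rate0))

    eps_p    : equivalent plastic strain [-]
    eps_rate : equivalent plastic strain rate [1/s]
    """
    rate_term = 1.0 + C * math.log(max(eps_rate, 1e-12) / eps_rate0)
    return (A + B * eps_p ** n) * rate_term

# Illustrative constants only (assumed, not calibrated to the paper's data)
A, B, n, C = 280.0e6, 1350.0e6, 0.55, 0.02  # Pa, Pa, -, -
s1 = simplified_johnson_cook(0.05, 1.0, A, B, n, C)
s2 = simplified_johnson_cook(0.30, 1.0, A, B, n, C)
```

The isotropic Johnson-Cook hardening contrasts with the Barlat YLD2000 model, whose anisotropic yield surface needs additional tests (different sheet orientations, biaxial bulge) to calibrate, which is why the two are compared against both tensile and bulge data.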

Keywords: 316L, mechanical characterization, metal forming, Ls-Dyna

Procedia PDF Downloads 336
15053 Comparative Analysis of Dissimilarity Detection between Binary Images Based on Equivalency and Non-Equivalency of Image Inversion

Authors: Adnan A. Y. Mustafa

Abstract:

Image matching is a fundamental problem that arises frequently in many aspects of robot and computer vision. It can become time-consuming when matching images against a database of hundreds of images, especially if the images are large. One approach to reducing the time complexity of the matching process is to shrink the search space in a pre-matching stage by quickly discarding dissimilar images. The Probabilistic Matching Model for Binary Images (PMMBI) showed that dissimilarity detection between binary images can be accomplished quickly by random pixel mapping and is size invariant. The model is based on the gamma binary similarity distance, which recognizes an image and its inverse as containing the same scene and hence considers them to be the same image. In many applications, however, an image and its inverse are treated not as the same but as dissimilar. In this paper, we present a comparative analysis of dissimilarity detection between PMMBI based on the gamma binary similarity distance and a modified PMMBI based on a similarity distance that does distinguish an image from its inverse.
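The idea of quick rejection by random pixel mapping can be sketched as follows. This is a simplified illustration, not the published PMMBI algorithm: it estimates the mismatch fraction from a fixed number of randomly sampled pixel positions, and because raw mismatches are counted, an image and its inverse come out maximally dissimilar, as in the inverse-sensitive distance of the modified model:

```python
import random

def mismatch_fraction(img_a, img_b, samples=200, seed=0):
    """Estimate dissimilarity between two equal-size binary images by
    comparing pixels at randomly mapped positions.

    The cost depends on `samples`, not on image size, which is the point
    of sampling-based pre-matching. An image and its inverse score ~1.0
    here (maximally dissimilar) rather than being identified.
    """
    rng = random.Random(seed)
    h, w = len(img_a), len(img_a[0])
    hits = 0
    for _ in range(samples):
        r, c = rng.randrange(h), rng.randrange(w)
        if img_a[r][c] != img_b[r][c]:
            hits += 1
    return hits / samples

img = [[(r + c) % 2 for c in range(8)] for r in range(8)]   # checkerboard
inv = [[1 - v for v in row] for row in img]                  # its inverse
```

A gamma-style, inverse-tolerant distance would instead take the minimum of the mismatch fractions against the image and against its inverse, collapsing the two cases the modified model keeps apart.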

Keywords: binary image, dissimilarity detection, probabilistic matching model for binary images, image mapping

Procedia PDF Downloads 156
15052 Probabilistic Graphical Model for the Web

Authors: M. Nekri, A. Khelladi

Abstract:

The world wide web is a network with a complex topology whose main properties are a power-law degree distribution, a low clustering coefficient, and a short average distance. Modeling the web as a graph allows information to be located quickly and thus helps in the construction of search engines. Here, we present a model based on existing probabilistic graphs that exhibits all of the aforesaid characteristics. This work consists of studying the web in order to understand its structure, which will enable us to model it more easily and to propose a possible algorithm for its exploration.
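The preferential-attachment mechanism named in the keywords is the standard way probabilistic graph models reproduce the web's power-law degree distribution: each new node links to existing nodes with probability proportional to their degree. A minimal Barabási-Albert-style sketch (a generic illustration, not the specific model proposed in the paper):

```python
import random

def barabasi_albert(n, m, seed=0):
    """Grow a graph by preferential attachment: each new node attaches m
    edges, choosing targets with probability proportional to degree.

    Returns a set of undirected edges. Starts from a star on m+1 nodes.
    """
    rng = random.Random(seed)
    edges = set()
    pool = []  # each node repeated once per incident edge (degree-weighted)
    for v in range(1, m + 1):          # seed graph: star centred on node 0
        edges.add((0, v))
        pool += [0, v]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:         # m distinct degree-weighted targets
            chosen.add(rng.choice(pool))
        for t in chosen:
            edges.add((min(new, t), max(new, t)))
            pool += [new, t]
    return edges

g = barabasi_albert(200, 2)
```

The repeated-node pool makes degree-proportional sampling O(1) per draw; early nodes accumulate edges and become the hubs that give the power-law tail.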

Keywords: clustering coefficient, preferential attachment, small world, web community

Procedia PDF Downloads 272
15051 Application of Data Mining Techniques for Tourism Knowledge Discovery

Authors: Teklu Urgessa, Wookjae Maeng, Joong Seek Lee

Abstract:

Five implementations of three data mining classification techniques were applied to extract important insights from tourism data, with the aim of finding the best-performing algorithm for tourism knowledge discovery. The knowledge discovery from data process was used as the process model, and 10-fold cross-validation was used for testing. Various data preprocessing activities were performed to obtain the final dataset for model building, and classification models of the selected algorithms were built under different scenarios on the preprocessed dataset. The best-performing algorithm on the tourism dataset was Random Forest (76%) before information-gain-based attribute selection, and J48 (C4.5) (75%) after selection of the attributes most relevant to the class (target) attribute. In terms of model-building time, attribute selection improved the efficiency of all algorithms; the Artificial Neural Network (multilayer perceptron) showed the highest improvement (90%). The rules extracted from the decision tree model are presented; they reveal intricate, non-trivial knowledge and insights that would otherwise not be discovered by simple statistical analysis, given the mediocre accuracy achieved by the classification algorithms.
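The 10-fold cross-validation protocol used for testing can be sketched without any ML library: shuffle the data, split it into ten folds, and average the held-out accuracy over the folds. A trivial majority-class baseline stands in here for the actual classifiers (Random Forest, J48, multilayer perceptron); the labels are hypothetical:

```python
import random

def k_fold_indices(n, k=10, seed=0):
    """Shuffle indices 0..n-1 and split them into k near-equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(ys, k=10):
    """k-fold CV of a majority-class baseline (a stand-in for a real
    classifier): train on k-1 folds, score accuracy on the held-out fold."""
    folds = k_fold_indices(len(ys), k)
    accs = []
    for test in folds:
        train = [j for f in folds for j in f if f is not test]
        train_labels = [ys[j] for j in train]
        majority = max(set(train_labels), key=train_labels.count)
        correct = sum(1 for j in test if ys[j] == majority)
        accs.append(correct / len(test))
    return sum(accs) / k

ys = ["stay"] * 70 + ["leave"] * 30   # hypothetical tourism labels
acc = cross_validate(ys, k=10)
```

Because every example is held out exactly once, the averaged accuracy is a less optimistic estimate than a single train/test split, which is why the 76% and 75% figures above are cross-validated rather than resubstitution scores.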

Keywords: classification algorithms, data mining, knowledge discovery, tourism

Procedia PDF Downloads 295