Search results for: model qualification
15087 A Comparative Study of Additive and Nonparametric Regression Estimators and Variable Selection Procedures
Authors: Adriano Z. Zambom, Preethi Ravikumar
Abstract:
One of the biggest challenges in nonparametric regression is the curse of dimensionality. Additive models are known to overcome this problem by estimating only the individual additive effects of each covariate. However, if the model is misspecified, the accuracy of the estimator compared to the fully nonparametric one is unknown. In this work the efficiency of completely nonparametric regression estimators such as the Loess is compared to estimators that assume additivity in several situations, including additive and non-additive regression scenarios. The comparison is done by computing the oracle mean square error of the estimators with regard to the true nonparametric regression function. Then, a backward elimination selection procedure based on the Akaike Information Criterion is proposed, which is computed from either the additive or the nonparametric model. Simulations show that if the additive model is misspecified, the percentage of time it fails to select important variables can be higher than that of the fully nonparametric approach. A dimension reduction step is included when the nonparametric estimator cannot be computed due to the curse of dimensionality. Finally, the Boston housing dataset is analyzed using the proposed backward elimination procedure and the selected variables are identified.
Keywords: additive model, nonparametric regression, variable selection, Akaike Information Criterion
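A minimal sketch of the kind of AIC-driven backward elimination described above, assuming a Gaussian AIC computed from the residual sum of squares; the linear fit below is only a placeholder for the additive or loess estimator used in the paper, and the data are synthetic:

```python
# Sketch of backward elimination driven by AIC (illustrative only).
# Gaussian AIC = n*log(RSS/n) + 2*k; a linear fit stands in for the
# additive or loess estimator discussed in the abstract.
import numpy as np
from sklearn.linear_model import LinearRegression

def aic_from_fit(X, y):
    model = LinearRegression().fit(X, y)
    rss = np.sum((y - model.predict(X)) ** 2)
    n, k = X.shape[0], X.shape[1] + 1  # +1 for the intercept
    return n * np.log(rss / n) + 2 * k

def backward_elimination(X, y, names):
    active = list(range(X.shape[1]))
    best_aic = aic_from_fit(X[:, active], y)
    improved = True
    while improved and len(active) > 1:
        improved = False
        for j in list(active):
            trial = [c for c in active if c != j]
            aic = aic_from_fit(X[:, trial], y)
            if aic < best_aic:          # dropping variable j improves AIC
                best_aic, active, improved = aic, trial, True
    return [names[c] for c in active]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2 * X[:, 0] - 3 * X[:, 2] + rng.normal(scale=0.5, size=200)  # x1, x3 relevant
print(backward_elimination(X, y, ["x1", "x2", "x3", "x4", "x5"]))
```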
Procedia PDF Downloads 266
15086 Simulation of Nonlinear Behavior of Reinforced Concrete Slabs Using Rigid Body-Spring Discrete Element Method
Authors: Felix Jr. Garde, Eric Augustus Tingatinga
Abstract:
Most analysis procedures for reinforced concrete (RC) slabs are based on elastic theory. When subjected to large forces, however, slabs deform beyond the elastic range, and the study of their behavior and performance requires nonlinear analysis. This paper presents a numerical model to simulate the nonlinear behavior of RC slabs using the rigid body-spring discrete element method. The proposed slab model, composed of rigid plate elements and nonlinear springs, is based on the yield line theory, which assumes that the nonlinear behavior of an RC slab subjected to transverse loads is contained in plastic or yield lines. In this model, the displacement of the slab is completely described by the rigid elements, and the deformation energy is concentrated in the flexural springs uniformly distributed along the potential yield lines. The spring parameters are determined by comparing the transverse displacements and stresses developed in the slab, obtained using FEM and the proposed model with an assumed homogeneous material. Numerical models of typical RC slabs with varying geometry, reinforcement, support conditions, and loading conditions show reasonable agreement with available experimental data. The model is also shown to be useful in investigating the dynamic behavior of slabs.
Keywords: RC slab, nonlinear behavior, yield line theory, rigid body-spring discrete element method
Procedia PDF Downloads 325
15085 Improved Structure and Performance by Shape Change of Foam Monitor
Authors: Tae Gwan Kim, Hyun Kyu Cho, Young Hoon Lee, Young Chul Park
Abstract:
Foam monitors are devices installed on cargo tank decks to suppress cargo area fires in oil tankers or hazardous chemical cargo ships. In general, the main design parameter of a foam monitor is the projection distance of the foam discharged through it. In this study, the relationship between flow characteristics and projection distance, depending on the shape of the monitor, was examined. Numerical techniques for fluid analysis of foam monitors were developed for this prediction. Since the flow pattern of the fluid varies depending on the shape of the flow path of the foam monitor, the flow losses affecting projection distance were calculated through numerical analysis. The basic shape of the foam monitor was an L shape designed by N Company; the modified model lengthened the flow path and used an S-shaped design. The calculation results show that the basic L shape has the problem that the force is directed to one side, generating vibration and noise. To solve this problem, the modified S-shaped model was used. As a result, the problem is resolved and the projection distance from the nozzle is improved.
Keywords: CFD, foam monitor, projection distance, moment
Procedia PDF Downloads 345
15084 Teachers’ Role and Principal’s Administrative Functions as Correlates of Effective Academic Performance of Public Secondary School Students in Imo State, Nigeria
Authors: Caroline Nnokwe, Iheanyi Eneremadu
Abstract:
Teachers and principals are vital and integral parts of the educational system. For educational objectives to be met, the role of teachers and the functions of principals cannot be overlooked. However, the inability of teachers and principals to carry out their roles effectively has impacted students' performance. This study therefore examined teachers' roles and principals' administrative functions as correlates of effective academic performance of public secondary school students in Imo State, Nigeria. Four research questions and two hypotheses guided the study, which adopted a correlational research design. The sample size was 5,438 respondents, determined via the Yaro-Yamane technique and consisting of 175 teachers, 13 principals, and 5,250 students selected using the proportional stratified random sampling technique. The instruments for data collection were a researcher-made questionnaire titled the Teachers' Role/Principals' Administrative Functions Questionnaire (TRPAFQ), with a Cronbach alpha coefficient of .82, and students' internal results obtained from the school authorities. Data collected were analyzed using the Pearson product-moment correlation coefficient and simple linear regression: research questions were answered using Pearson product-moment correlation statistics, while the hypotheses were tested at the 0.05 level of significance using regression analysis. The findings showed that the educational qualification of teachers, organizing, and planning correlated with students' academic performance to a great extent, while the availability and proper use of instructional materials by teachers correlated with students' academic performance to a very high extent. The findings also revealed a significant relationship between teachers' roles, principals' administrative functions, and students' academic performance in public secondary schools in Imo State. The study recommended, among others, that government, through the ministry of education, and education authorities adequately staff their supervisory departments in order to carry out proper supervision of secondary school teachers, and also provide adequate instructional materials to ensure greater academic performance among secondary school students of Imo State, Nigeria.
Keywords: instructional materials, principals’ administrative functions, students’ academic performance, teacher role
Procedia PDF Downloads 87
15083 Application of Model Free Adaptive Control in Main Steam Temperature System of Thermal Power Plant
Authors: Khaing Yadana Swe, Lillie Dewan
Abstract:
At present, cascade PID control is widely used to control the super-heating temperature (main steam temperature). Because the main steam temperature has the characteristics of large inertia, large time delay, and time variation, a conventional PID control strategy cannot achieve good control performance. In order to overcome the poor performance and deficiencies of the main steam temperature control system, a Model-Free Adaptive Control (MFAC)-P cascade control system is proposed in this paper. By substituting MFAC for the PID in the main control loop of the main steam temperature control, the system can overcome time delays, non-linearity, disturbances, and time variation.
Keywords: model-free adaptive control, cascade control, adaptive control, PID
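A minimal sketch of the compact-form model-free adaptive control law (not the authors' MFAC-P cascade design), applied to an assumed first-order toy plant standing in for the main steam temperature loop; all tuning values are illustrative:

```python
# Compact-form MFAC sketch: the pseudo-partial-derivative phi is estimated
# online from input/output increments and drives the control update.
import numpy as np

eta, mu, rho, lam = 0.5, 1.0, 0.6, 1.0     # tuning parameters (assumed)
y_ref = 1.0                                 # setpoint
N = 200
y = np.zeros(N); u = np.zeros(N); phi = np.ones(N) * 0.5

for k in range(1, N - 1):
    du = u[k - 1] - u[k - 2] if k >= 2 else 0.0
    dy = y[k] - y[k - 1]
    # online estimate of the pseudo partial derivative
    phi[k] = phi[k - 1] + eta * du / (mu + du**2) * (dy - phi[k - 1] * du)
    # model-free control law driven only by measured I/O data
    u[k] = u[k - 1] + rho * phi[k] / (lam + phi[k]**2) * (y_ref - y[k])
    # toy first-order plant with lag (placeholder for the steam temperature loop)
    y[k + 1] = 0.9 * y[k] + 0.2 * u[k]

print("final output:", round(y[-1], 3))
```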
Procedia PDF Downloads 603
15082 Constructing Service Innovation Model for SMEs in Automotive Service Industries: A Case Study of Auto Repair Motorcycle in Makassar City
Authors: Muhammad Farid, Jen Der Day
Abstract:
The purpose of this study is to explore the construction of a service innovation model for small and medium-sized enterprises (SMEs) in the automotive service industries. A case study of motorcycle repair shops in Makassar city illustrates how the service innovation model for SMEs measures innovation implementation and the degree of innovation, and identifies the type of innovation. In this paper, we interview 10 managers of SMEs and analyze their answers. We find that innovation implementation has been slow, producing on average only 0.62 new service innovations per year. Incremental innovation is the present option for SMEs because they choose the safer road of improving service continuously. If they want to create radical innovation, they must still consider cost, systems, and the readiness of human resources.
Keywords: service innovation, incremental innovation, SMEs, automotive service industries
Procedia PDF Downloads 360
15081 Proposition Model of Micromechanical Damage to Predict Reduction in Stiffness of a Fatigued A-SMC Composite
Authors: Houssem Ayari
Abstract:
Sheet molding compounds (SMC) are high-strength thermoset moulding materials reinforced with glass and processed by thermocompression. SMC composites combine fibreglass with polyester/phenolic/vinyl and unsaturated acrylic resins to produce a high-strength moulding compound. These materials are usually formulated to meet the performance requirements of the moulded part. In addition, the vinyl ester resins used in the new advanced SMC systems (A-SMC) have many desirable features, including mechanical properties comparable to epoxy, excellent chemical resistance and tensile strength, and cost competitiveness. In this paper, a model is proposed to account for the evolution of the Young's modulus of an A-SMC composite under fatigue tests. The proposed model and approach are in good agreement with the experimental results.
Keywords: composites SFRC, damage, fatigue, Mori-Tanaka
Procedia PDF Downloads 118
15080 Human Performance Technology (HPT) as an Entry Point to Achieve Organizational Development in Educational Institutions of the Ministry of Education
Authors: Alkhathlan Mansour
Abstract:
The current research aims at achieving organizational development in the educational institutions of the governorate of Al-Kharj through the human performance technology (HPT) model named "The Intellectual Model to Improve Human Performance." To achieve the goal of this research, tools consisting of targeted questionnaires were administered to a research sample of 120: department managers at Prince Sattam Bin Abdulaziz University (50), educational supervisors in the Department of Education (40), and school administrators in the governorate (30), supplemented by the views of education experts, gathered through personal interviews, on the proposal to achieve organizational development through the intellectual model to improve human performance. Among the most important research results is that many obstacles prevent organizational development in educational institutions, so the research suggested a model to achieve organizational development through human performance technologies. The researcher also recommended that administrators take into account fairness in the distribution of incentives to employees of educational institutions, train leaders in educational institutions on organizational development strategies, and work on preparing organizational development experts in educational institutions to develop the necessary policies and procedures for each institution.
Keywords: human performance, development, education, organizational
Procedia PDF Downloads 290
15079 In vitro Skin Model for Enhanced Testing of Antimicrobial Textiles
Authors: Steven Arcidiacono, Robert Stote, Erin Anderson, Molly Richards
Abstract:
There are numerous standard test methods for antimicrobial textiles that measure activity against specific microorganisms. However, these results often do not translate to the performance of treated textiles when worn by individuals. Standard test methods apply a single target organism, grown under optimal conditions, to a textile and then recover the organism to quantitate and determine activity; this does not reflect the actual performance environment, which consists of polymicrobial communities in less-than-optimal conditions, or the interaction of the textile with the skin substrate. Here we propose the development of an in vitro skin model method to bridge the gap between laboratory testing and wear studies. The model will consist of a defined polymicrobial community of 5-7 commensal microbes simulating the skin microbiome, seeded onto a solid tissue platform to represent the skin. The protocol would entail adding a non-commensal test organism of interest to the defined community and applying a textile sample to the solid substrate. Following incubation, the textile would be removed and the organisms recovered and quantitated to determine antimicrobial activity. Important parameters to consider include identification and assembly of the defined polymicrobial community, growth conditions that allow the establishment of a stable community, and choice of skin surrogate. This model could answer the following questions: 1) is the treated textile effective against the target organism? 2) how is the defined community affected? and 3) does the textile cause unwanted effects on the skin simulant? The proposed model would determine activity under conditions comparable to the intended application and provide expanded knowledge relative to current test methods.
Keywords: antimicrobial textiles, defined polymicrobial community, in vitro skin model, skin microbiome
Procedia PDF Downloads 139
15078 Predicting Radioactive Waste Glass Viscosity, Density and Dissolution with Machine Learning
Authors: Joseph Lillington, Tom Gout, Mike Harrison, Ian Farnan
Abstract:
The vitrification of high-level nuclear waste within borosilicate glass and its incorporation within a multi-barrier repository deep underground is widely accepted as the preferred disposal method. However, for this to happen, any safety case will require validation that the initially localized radionuclides will not be considerably released into the near/far-field. Therefore, accurate mechanistic models are necessary to predict glass dissolution, and these should be robust to a variety of incorporated waste species and leaching test conditions, particularly given substantial variations across international waste-streams. Here, machine learning is used to predict glass material properties (viscosity, density) and glass leaching model parameters from large-scale industrial data. A variety of machine learning algorithms have been compared to assess performance. Density was predicted solely from composition, whereas viscosity additionally considered temperature. To predict suitable glass leaching model parameters, a large simulated dataset was created by coupling MATLAB and the chemical reactive-transport code HYTEC, considering the state-of-the-art GRAAL model (glass reactivity in allowance of the alteration layer). The trained models were then applied to the large-scale industrial, experimental data to identify potentially appropriate model parameters. Results indicate that ensemble methods can accurately predict viscosity as a function of temperature and composition across all three industrial datasets. Glass density prediction shows reliable learning performance, with predictions primarily being within the experimental uncertainty of the test data. Furthermore, machine learning can predict the behavior of glass dissolution model parameters, demonstrating potential value in GRAAL model development and in assessing suitable model parameters for large-scale industrial glass dissolution data.
Keywords: machine learning, predictive modelling, pattern recognition, radioactive waste glass
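A minimal sketch of one such ensemble regression, predicting a viscosity-like response from composition and temperature with a random forest; the composition columns and the Arrhenius-like response are synthetic placeholders, not the industrial datasets used in the study:

```python
# Ensemble regression sketch: viscosity from oxide composition + temperature.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 500
comp = rng.dirichlet(alpha=[5, 3, 2, 1], size=n)   # assumed SiO2, B2O3, Na2O, other fractions
temp = rng.uniform(900, 1200, size=n)               # melt temperature, degrees C
# toy Arrhenius-like response standing in for measured log10 viscosity
log_visc = 2.0 + 8.0 * comp[:, 0] - 3.0 * comp[:, 2] + 4000.0 / (temp + 273.15)
log_visc += rng.normal(scale=0.1, size=n)

X = np.column_stack([comp, temp])
X_tr, X_te, y_tr, y_te = train_test_split(X, log_visc, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out data:", round(r2_score(y_te, model.predict(X_te)), 3))
```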
Procedia PDF Downloads 117
15077 The Strategy of Teaching Digital Art in Classroom as a Way of Enhancing Pupils’ Artistic Creativity
Authors: Aber Salem Aboalgasm, Rupert Ward
Abstract:
Teaching art by digital means is a big challenge for the majority of teachers of art and artistic design courses in primary schools. These courses can clearly identify relationships between art, technology, and creativity in the classroom. The aim of this article is to present a modern way of teaching art, using digital tools in the art classroom, in order to improve creative ability in pupils aged between 9 and 11 years; it also presents a conceptual model for creativity based on digital art. The model could be useful for pupils interested in learning drawing and using an e-drawing package, and for teachers who are interested in teaching their students modern digital art and improving children's creativity. The model is designed to show the strategy of teaching art through technology so that children learn how to be creative. It will also help education providers make suitable choices about which technological approaches to adopt in teaching students and enhancing their creative ability, and it identifies the digital art tools that can help children develop their technical skills. It is also expected that use of this model will help to develop social interactive qualities that may improve intellectual ability.
Keywords: digital tools, motivation, creative activity, technical skill
Procedia PDF Downloads 463
15076 Statistical Inferences for GQARCH-Itô-Jumps Model Based on the Realized Range Volatility
Authors: Fu Jinyu, Lin Jinguan
Abstract:
This paper introduces a novel approach that unifies two types of models: the continuous-time jump-diffusion used to model high-frequency data, and the discrete-time GQARCH employed to model low-frequency financial data, by embedding the discrete GQARCH structure with jumps in the instantaneous volatility process. This model is named the "GQARCH-Itô-Jumps model." We adopt realized range-based threshold estimation for the high-frequency financial data rather than realized return-based volatility estimators, which entail the loss of intra-day information on the price movement. Meanwhile, a quasi-likelihood function for the low-frequency GQARCH structure with jumps is developed for parametric estimation. The asymptotic theories are mainly established for the proposed estimators in the case of finite-activity jumps. Moreover, simulation studies are implemented to check the finite-sample performance of the proposed methodology. Specifically, it is demonstrated how the proposed approaches can be practically used on financial data.
Keywords: Itô process, GQARCH, leverage effects, threshold, realized range-based volatility estimator, quasi-maximum likelihood estimate
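A minimal sketch of a realized range-based volatility estimate, assuming the Parkinson scaling factor 4 ln 2 and a simulated diffusion path; the threshold truncation of jumps used in the paper is omitted:

```python
# Realized range sketch: within each intra-day interval the squared log
# high-low range is summed and scaled by 4*ln(2). Prices are simulated.
import numpy as np

rng = np.random.default_rng(2)
n_intervals, steps_per_interval = 78, 60            # ~5-minute bins over one day
sigma = 0.2 / np.sqrt(252)                           # assumed true daily volatility
log_p = np.cumsum(rng.normal(0.0, sigma / np.sqrt(n_intervals * steps_per_interval),
                             size=n_intervals * steps_per_interval))
log_p = log_p.reshape(n_intervals, steps_per_interval)

ranges = log_p.max(axis=1) - log_p.min(axis=1)       # intra-interval log ranges
rrv = np.sum(ranges ** 2) / (4.0 * np.log(2.0))       # realized range variance
print("realized range volatility:", round(float(np.sqrt(rrv)), 5))
print("true daily volatility    :", round(sigma, 5))
```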
Procedia PDF Downloads 160
15075 Predictive Semi-Empirical NOx Model for Diesel Engine
Authors: Saurabh Sharma, Yong Sun, Bruce Vernham
Abstract:
Accurate prediction of NOx emissions is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented to address this issue. NOx formation is highly dependent on the burned gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions with respect to the measured NOx, which limits the prediction of purely empirical models to the region where they have been calibrated. An alternative solution is presented in this paper, which focuses on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast, predictive NOx model built from physical parameters and empirical correlations. The model is developed based on steady-state data collected across the entire operating region of the engine and a predictive combustion model developed in Gamma Technology (GT)-Power using the Direct Injected (DI)-Pulse combustion object. In this approach, the temperature in both the burned and unburned zones is considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered in developing the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. A substantial number of cases are tested for different engine configurations over a large span of speed and load points. Different sweeps of operating conditions, such as Exhaust Gas Recirculation (EGR), injection timing, and Variable Valve Timing (VVT), are also considered for the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions with different ambient conditions. Its advantages, such as high accuracy and robustness at different operating conditions, low computational time, and the smaller number of data points required for calibration, establish a platform on which the model-based approach can be used for the engine calibration and development process. Moreover, this work also aims to establish a framework for future model development for various other targets such as soot, Combustion Noise Level (CNL), and the NO2/NOx ratio.
Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical
Procedia PDF Downloads 114
15074 Unsteady Rayleigh-Bénard Convection of Nanoliquids in Enclosures
Authors: P. G. Siddheshwar, B. N. Veena
Abstract:
Rayleigh-Bénard convection of a nanoliquid in shallow, square, and tall enclosures is studied using the Khanafer-Vafai-Lightstone single-phase model. The thermophysical properties of water, copper, copper oxide, alumina, silver, and titania at 300 K under stagnant conditions, collected from the literature, are used in calculating the thermophysical properties of the water-based nanoliquids via phenomenological laws and mixture theory. Free-free, rigid-rigid, and rigid-free boundary conditions are considered in the study. The intractable Lorenz model for each boundary combination is derived and then reduced to the tractable Ginzburg-Landau model. The amplitude thus obtained is used to quantify the heat transport in terms of the Nusselt number. Addition of nanoparticles is shown not to alter the influence of the nature of the boundaries on the onset of convection or on heat transport. Amongst the three enclosures considered, it is found that the tall and shallow enclosures transport maximum and minimum energy, respectively. Enhancement of heat transport due to nanoparticles in the three enclosures is found to be in the range 3%-11%. Comparison of results in the case of rigid-rigid boundaries is made with those of an earlier work, and good agreement is found. The study has limitations in the sense that the thermophysical properties are calculated using quantities modelled for the static condition.
Keywords: enclosures, free-free, rigid-rigid, rigid-free boundaries, Ginzburg-Landau model, Lorenz model
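A minimal sketch of the classical Lorenz system for Rayleigh-Bénard convection, integrated with a fourth-order Runge-Kutta scheme; the textbook parameter values below are placeholders, and the nanoliquid corrections and Ginzburg-Landau reduction of the paper are not reproduced:

```python
# Classical Lorenz system sketch (free-free boundaries, textbook parameters).
import numpy as np

def lorenz(state, pr=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = state
    return np.array([pr * (y - x), r * x - y - x * z, x * y - b * z])

def rk4(f, state, dt):
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state, dt = np.array([1.0, 1.0, 1.0]), 1e-3
trajectory = []
for _ in range(50000):
    state = rk4(lorenz, state, dt)
    trajectory.append(state.copy())
print("time-averaged |X|, |Y|, |Z|:", np.abs(np.array(trajectory)).mean(axis=0).round(2))
```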
Procedia PDF Downloads 257
15073 Evaluation of Turbulence Prediction over Washington, D.C.: Comparison of DCNet Observations and North American Mesoscale Model Outputs
Authors: Nebila Lichiheb, LaToya Myles, William Pendergrass, Bruce Hicks, Dawson Cagle
Abstract:
Atmospheric transport of hazardous materials in urban areas is increasingly under investigation due to the potential impact on human health and the environment. In response to health and safety concerns, several dispersion models have been developed to analyze and predict the dispersion of hazardous contaminants. The models of interest usually rely on meteorological information obtained from the meteorological models of NOAA's National Weather Service (NWS). However, due to the complexity of the urban environment, NWS forecasts provide an inadequate basis for dispersion computation in urban areas. A dense meteorological network in Washington, DC, called DCNet, has been operated by NOAA since 2003 to support the development of urban monitoring methodologies and provide the driving meteorological observations for atmospheric transport and dispersion models. This study focuses on the comparison of wind observations from the DCNet station on the U.S. Department of Commerce Herbert C. Hoover Building against North American Mesoscale (NAM) model outputs for the period 2017-2019. The goal is to develop a simple methodology for modifying NAM outputs so that the dispersion requirements of the city and its urban area can be satisfied. This methodology will allow us to quantify the prediction errors of the NAM model and propose adjustments to the key variables controlling the dispersion model calculation.
Keywords: meteorological data, Washington D.C., DCNet data, NAM model
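A minimal sketch of the kind of observation-model comparison and adjustment described above: bias, RMSE, and correlation between observed and forecast wind speed, followed by a least-squares correction of the model output; the arrays are synthetic placeholders for DCNet observations and NAM forecasts:

```python
# Observation-model comparison sketch with a simple linear adjustment.
import numpy as np

rng = np.random.default_rng(3)
obs = rng.gamma(shape=2.0, scale=2.0, size=1000)           # observed wind speed (m/s)
nam = 1.3 * obs + 0.8 + rng.normal(scale=1.0, size=1000)    # biased model output

bias = np.mean(nam - obs)
rmse = np.sqrt(np.mean((nam - obs) ** 2))
corr = np.corrcoef(nam, obs)[0, 1]
print(f"bias={bias:.2f} m/s  rmse={rmse:.2f} m/s  r={corr:.2f}")

# least-squares adjustment obs ~ a*nam + b, to be applied to future model output
a, b = np.polyfit(nam, obs, deg=1)
adjusted = a * nam + b
print("rmse after adjustment:", round(float(np.sqrt(np.mean((adjusted - obs) ** 2))), 2))
```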
Procedia PDF Downloads 234
15072 Development on the Modeling Driven Architecture
Authors: Sahar Shahsavaripour Ghazanfarpour
Abstract:
As our daily lives depend on the quality of the services delivered by the systems and devices in our environment, the education and modeling of software quality are becoming increasingly important. With the daily growth of software systems and their heavy use, evaluating the development process and its requirements at the early stages of development, especially at the architecture level, becomes more important. Model-driven architecture transforms an independent model into several specific models, with the purpose of reducing the number of software changes needed to reach an executable model. The software engineering design process is semi-automated. The quality attributes needed in designing the architecture, and the quality attributes in its representation, reside in the architecture models. The main problem is the relationship between the needs and the elements, in some respects, with implicit models and input sources in the process, because there is no detection ability. The MART profile is used to describe real-time properties and perform platform modeling.
Keywords: MDA, DW, OMG, UML, AKB, software architecture, ontology, evaluation
Procedia PDF Downloads 496
15071 Simulation Model of Induction Heating in COMSOL Multiphysics
Authors: K. Djellabi, M. E. H. Latreche
Abstract:
The induction heating phenomenon depends on various factors, making the problem highly nonlinear. The mathematical analysis of this problem is in most cases very difficult and is reduced to simple cases. Other knowledge of induction heating systems is generated in production environments, but these trial-and-error procedures are long and expensive. Numerical models of the induction heating problem are another approach to reduce the above-mentioned drawbacks. This paper deals with a simulation model of the induction heating problem. A simulation model of an induction heating system is created in COMSOL Multiphysics. We present the results of numerical simulations of the induction heating process in workpieces of cylindrical shape, in an inductor with four coils. The induction heating process was modeled with COMSOL Multiphysics Version 4.2a; for the study, we present the temperature charts.
Keywords: induction heating, electromagnetic field, inductor, numerical simulation, finite element
Procedia PDF Downloads 316
15070 Comparison of Johnson-Cook and Barlat Material Model for 316L Stainless Steel
Authors: Yiğit Gürler, İbrahim Şimşek, Müge Savaştaer, Ayberk Karakuş, Alper Taşdemirci
Abstract:
316L steel is frequently used in industry due to its easy formability and accessibility in sheet metal forming processes. Numerical and experimental studies examining the mechanical behavior of 316L stainless steel during the forming process are frequently encountered in the literature. 316L stainless steel is the most common material used in the production of plate heat exchangers, which are produced by plastic deformation of the stainless steel. The motivation of this study is to determine the appropriate material model for the simulation of the sheet metal forming process. For this reason, two different material models were examined and Ls-Dyna material cards were created using material test data: MAT133_BARLAT_YLD2000 and MAT093_SIMPLIFIED_JOHNSON_COOK. Results of the tensile test and the hydraulic bulge test, performed both numerically and experimentally, were compared. The obtained results were evaluated comparatively, and the most suitable material model was selected for the forming simulation. In future studies, this material model will be used in the numerical modeling of the sheet metal forming process.
Keywords: 316L, mechanical characterization, metal forming, Ls-Dyna
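A minimal sketch of the simplified Johnson-Cook flow stress (strain hardening and strain-rate terms only, no thermal softening), the form used by the SIMPLIFIED_JOHNSON_COOK card; the constants below are illustrative placeholders, not calibrated 316L values from the study:

```python
# Simplified Johnson-Cook flow stress: sigma = (A + B*eps^n)*(1 + C*ln(rate/rate_ref)).
import numpy as np

def johnson_cook_simplified(eps_p, eps_rate, A=300.0, B=1000.0, n=0.5,
                            C=0.02, eps_rate_ref=1.0):
    # strain hardening term times strain-rate sensitivity term (MPa)
    rate_term = 1.0 + C * np.log(max(eps_rate / eps_rate_ref, 1e-12))
    return (A + B * eps_p ** n) * rate_term

strain = np.linspace(0.0, 0.4, 5)          # equivalent plastic strain
for rate in (1.0, 100.0):                   # strain rate, 1/s
    print(f"rate {rate:>6.1f} 1/s:", johnson_cook_simplified(strain, rate).round(1))
```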
Procedia PDF Downloads 336
15069 Comparative Analysis of Dissimilarity Detection between Binary Images Based on Equivalency and Non-Equivalency of Image Inversion
Authors: Adnan A. Y. Mustafa
Abstract:
Image matching is a fundamental problem that arises frequently in many aspects of robot and computer vision. It can become a time-consuming process when matching images against a database consisting of hundreds of images, especially if the images are large. One approach to reducing the time complexity of the matching process is to reduce the search space in a pre-matching stage by quickly removing dissimilar images. The Probabilistic Matching Model for Binary Images (PMMBI) showed that dissimilarity detection between binary images can be accomplished quickly by random pixel mapping and is size invariant. The model is based on the gamma binary similarity distance, which recognizes an image and its inverse as containing the same scene and hence considers them to be the same image. However, in many applications, an image and its inverse are not treated as the same but rather as dissimilar. In this paper, we present a comparative analysis of dissimilarity detection between PMMBI, based on the gamma binary similarity distance, and a modified PMMBI model based on a similarity distance that does distinguish an image from its inverse and treats them as dissimilar.
Keywords: binary image, dissimilarity detection, probabilistic matching model for binary images, image mapping
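A minimal sketch of dissimilarity screening by random pixel mapping, with the inverse image treated as dissimilar as in the modified model; the gamma binary similarity distance itself is not reproduced, and the images are synthetic:

```python
# Random-pixel dissimilarity screening sketch for binary images.
import numpy as np

def mismatch_rate(img_a, img_b, n_samples=500, rng=None):
    rng = rng or np.random.default_rng()
    rows = rng.integers(0, img_a.shape[0], n_samples)
    cols = rng.integers(0, img_a.shape[1], n_samples)
    # fraction of sampled pixel positions that disagree between the two images
    return np.mean(img_a[rows, cols] != img_b[rows, cols])

rng = np.random.default_rng(4)
img = rng.integers(0, 2, size=(512, 512), dtype=np.uint8)
noisy = img ^ (rng.random((512, 512)) < 0.02).astype(np.uint8)   # ~2% pixels flipped
inverse = 1 - img

print("vs noisy copy:", mismatch_rate(img, noisy, rng=rng))       # small -> similar
print("vs inverse   :", mismatch_rate(img, inverse, rng=rng))     # ~1.0 -> dissimilar
```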
Procedia PDF Downloads 156
15068 Probabilistic Graphical Model for the Web
Authors: M. Nekri, A. Khelladi
Abstract:
The world wide web is a network with a complex topology, the main properties of which are a power-law degree distribution, a low clustering coefficient, and a small average distance. Modeling the web as a graph allows information to be located quickly and consequently helps in the construction of search engines. Here, we present a model based on already existing probabilistic graphs with all the aforesaid characteristics. This work consists of studying the web in order to understand its structure, which will enable us to model it more easily and to propose a possible algorithm for its exploration.
Keywords: clustering coefficient, preferential attachment, small world, web community
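A minimal sketch of a generic preferential-attachment growth process that produces the power-law degree distribution mentioned above (a Barabási-Albert-style construction, not the authors' exact model):

```python
# Preferential attachment sketch: each new page links to m existing pages
# chosen with probability proportional to their current degree.
import numpy as np
from collections import Counter

def preferential_attachment(n_nodes=5000, m=2, seed=0):
    rng = np.random.default_rng(seed)
    # pool holds one entry per edge endpoint, so sampling from it is
    # degree-weighted; the m seed nodes get one bootstrap entry each
    pool = list(range(m))
    degree = Counter()
    for new in range(m, n_nodes):
        targets = set()
        while len(targets) < m:
            targets.add(pool[rng.integers(len(pool))])
        for t in targets:
            degree[t] += 1
            degree[new] += 1
            pool.extend([t, new])
    return degree

deg = np.array(list(preferential_attachment().values()))
print("max degree:", deg.max(), " mean degree:", round(float(deg.mean()), 2))
```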
Procedia PDF Downloads 272
15067 Application of Data Mining Techniques for Tourism Knowledge Discovery
Authors: Teklu Urgessa, Wookjae Maeng, Joong Seek Lee
Abstract:
Five implementations of three data mining classification techniques were applied to extract important insights from tourism data. The aim was to find the best-performing algorithm among those compared for tourism knowledge discovery. The knowledge discovery from data process was used as the process model, and 10-fold cross-validation was used for testing. Various data preprocessing activities were performed to obtain the final dataset for model building. Classification models of the selected algorithms were built under different scenarios on the preprocessed dataset. The best-performing algorithm on the tourism dataset was Random Forest (76%) before applying information-gain-based attribute selection, and J48 (C4.5) (75%) after selecting the attributes most relevant to the class (target) attribute. In terms of model-building time, attribute selection improves the efficiency of all algorithms; the Artificial Neural Network (multilayer perceptron) showed the highest improvement (90%). The rules extracted from the decision tree model are presented; they reveal intricate, non-trivial knowledge/insights that would otherwise not be discovered by simple statistical analysis, given the mediocre accuracy achieved by the classification algorithms.
Keywords: classification algorithms, data mining, knowledge discovery, tourism
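A minimal sketch of the evaluation protocol described above: 10-fold cross-validation of a decision tree (as a J48/C4.5 stand-in), a random forest, and a multilayer perceptron, with and without information-gain-style attribute selection; the dataset is a synthetic placeholder for the tourism data:

```python
# 10-fold CV comparison sketch, with and without attribute selection.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=20, n_informative=6,
                           random_state=0)
models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "neural net   ": MLPClassifier(max_iter=2000, random_state=0),
}
for name, clf in models.items():
    plain = cross_val_score(clf, X, y, cv=10).mean()
    selected = cross_val_score(
        make_pipeline(SelectKBest(mutual_info_classif, k=8), clf), X, y, cv=10).mean()
    print(f"{name} accuracy: {plain:.2f} (all attributes), {selected:.2f} (top 8)")
```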
Procedia PDF Downloads 295
15066 A Literature Review of the Trend towards Indoor Dynamic Thermal Comfort
Authors: James Katungyi
Abstract:
The steady-state thermal comfort model, which dominates thermal comfort practice and posits ideal thermal conditions within a narrow range, does not deliver the expected comfort levels among occupants. Furthermore, the buildings where this model is applied consume a lot of energy in conditioning. This paper reviews significant literature on thermal comfort in dynamic indoor conditions, including the adaptive thermal comfort model and alliesthesia. A major finding of the paper is that the adaptive thermal comfort model is part of a trend from static to dynamic indoor environments in aspects such as lighting, views, sounds, and ventilation. Alliesthesia, or thermal delight, is consistent with this trend towards dynamic thermal conditions. It is within this trend that the twofold goal of increased thermal comfort and reduced energy consumption lies. At the heart of this trend is a rediscovery of the link between the natural environment and human well-being, a link that was partially severed by over-reliance on mechanically dominated artificial indoor environments. The paper concludes by advocating thermal conditioning solutions that integrate mechanical with natural thermal conditioning in a balanced manner in order to meet occupant thermal needs without endangering the environment.
Keywords: adaptive thermal comfort, alliesthesia, energy, natural environment
Procedia PDF Downloads 220
15065 Stress and Strain Analysis of Notched Bodies Subject to Non-Proportional Loadings
Authors: Ayhan Ince
Abstract:
In this paper, a simplified analytical method for calculating the elasto-plastic stresses and strains of notched bodies subjected to non-proportional loading paths is discussed. The method is based on the Neuber notch correction, which relates the incremental elastic and elastic-plastic strain energy densities at the notch root, and the material constitutive relationship. The validity of the method is demonstrated by comparing computed results of the proposed model against finite element numerical data for a notched shaft. The comparison showed that the model estimates notch-root elasto-plastic stresses and strains with good accuracy from linear-elastic stresses. The proposed model provides a more efficient and simpler analysis method, preferable to expensive experimental component tests and to more complex and time-consuming incremental non-linear FE analysis. The model is particularly suitable for fatigue life and fatigue damage estimates of notched components subjected to non-proportional loading paths.
Keywords: elasto-plastic, stress-strain, notch analysis, non-proportional loadings, cyclic plasticity, fatigue
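A minimal sketch of the Neuber notch correction in its basic uniaxial form, solving sigma·eps = (Kt·S)²/E together with a Ramberg-Osgood curve; the incremental strain-energy-density formulation for non-proportional paths is not reproduced, and the material constants are placeholders:

```python
# Uniaxial Neuber correction sketch: convert elastic notch stress Kt*S into a
# local elasto-plastic stress-strain pair via bisection on the Neuber hyperbola.
E, K_prime, n_prime = 200e3, 1200.0, 0.12     # MPa; cyclic Ramberg-Osgood constants (assumed)
Kt, S = 2.5, 250.0                             # elastic concentration factor, nominal stress (MPa)

def ramberg_osgood_strain(sigma):
    return sigma / E + (sigma / K_prime) ** (1.0 / n_prime)

target = (Kt * S) ** 2 / E                     # Neuber constant: sigma*eps product
lo, hi = 1.0, Kt * S                           # local stress lies below the elastic value
for _ in range(100):                           # bisection on sigma*eps(sigma) - target
    mid = 0.5 * (lo + hi)
    if mid * ramberg_osgood_strain(mid) < target:
        lo = mid
    else:
        hi = mid
sigma_local = 0.5 * (lo + hi)
print(f"notch-root stress ~ {sigma_local:.0f} MPa, "
      f"strain ~ {ramberg_osgood_strain(sigma_local):.4f}")
```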
Procedia PDF Downloads 466
15064 Quantum Decision Making with Small Sample for Network Monitoring and Control
Authors: Tatsuya Otoshi, Masayuki Murata
Abstract:
With the development and diversification of applications on the Internet, applications that require high responsiveness, such as video streaming, are becoming mainstream. Application responsiveness is not only a matter of communication delay but also a matter of the time required to grasp changes in network conditions. The tradeoff between accuracy and measurement time is a challenge in network control. People make countless decisions all the time, and these decisions seem to resolve tradeoffs between time and accuracy: when making decisions, people are known to make appropriate choices based on relatively small samples. Although there have been various studies on models of human decision-making, a model that integrates various cognitive biases, called "quantum decision-making," has recently attracted much attention. However, the modeling of small samples has not been examined much so far. In this paper, we extend the quantum decision-making model to decision-making with a small sample. In the proposed model, the state is updated by value-based probability amplitude amplification. By analytically obtaining a lower bound on the number of samples required for decision-making, we show that decision-making with a small number of samples is feasible.
Keywords: quantum decision making, small sample, MPEG-DASH, Grover's algorithm
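A minimal sketch of Grover-style amplitude amplification, in which the probability of selecting the preferred option after k iterations is sin²((2k+1)θ) with sin θ = √p₀; this is a generic illustration of why few iterations (samples) suffice, not the authors' decision model:

```python
# Amplitude amplification sketch: success probability versus iteration count.
import numpy as np

p0 = 0.1                                        # assumed prior probability of the preferred option
theta = np.arcsin(np.sqrt(p0))
optimal_k = int(np.floor(np.pi / (4 * theta)))  # iterations that roughly maximise success

for k in range(optimal_k + 1):
    p_k = np.sin((2 * k + 1) * theta) ** 2
    print(f"iteration {k}: decision probability {p_k:.3f}")
```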
Procedia PDF Downloads 80
15063 Transfer Function Model-Based Predictive Control for Nuclear Core Power Control in PUSPATI TRIGA Reactor
Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha
Abstract:
The 1 MWth PUSPATI TRIGA Reactor (RTP) at the Malaysian Nuclear Agency has been operating for more than 35 years. The existing core power control uses a conventional controller known as the Feedback Control Algorithm (FCA). It is technically challenging to keep the core power output stable and operating within acceptable error bands to meet the safety demands of the RTP. Currently, the system's power tracking performance could be considered unsatisfactory, and there is still significant room for improvement. Hence, a new core power control design is very important to improve the current performance in tracking and regulating reactor power by controlling the movement of control rods, suiting the demands of highly sensitive nuclear reactor power control. In this paper, a proposed Model Predictive Control (MPC) law is applied to control the core power. The model for core power control is based on mathematical models of the reactor core, MPC, and a control rod selection algorithm. The mathematical models of the reactor core are based on the point kinetics model, thermal-hydraulic models, and reactivity models. The proposed MPC is presented in a transfer function model of the reactor core according to perturbation theory. The transfer function model-based predictive control (TFMPC) is developed to design the core power control with predictions based on a T-filter, towards real-time implementation of MPC on hardware. This paper introduces the sensitivity functions for the TFMPC feedback loop to reduce the impact on the input actuation signal and demonstrates the behaviour of TFMPC in terms of disturbance and noise rejection. Comparisons of both tracking and regulating performance between the conventional controller and TFMPC were made using MATLAB and analysed. In conclusion, the proposed TFMPC has satisfactory performance in tracking and regulating core power for controlling a nuclear reactor with high reliability and safety.
Keywords: core power control, model predictive control, PUSPATI TRIGA reactor, TFMPC
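A minimal sketch of the point kinetics core model referred to above, with one effective delayed-neutron group and a small reactivity step; the parameter values are generic placeholders rather than RTP data, and the TFMPC controller itself is not reproduced:

```python
# One-delayed-group point kinetics sketch, explicit Euler integration.
beta, Lambda, lam = 0.007, 1e-4, 0.08   # delayed fraction, generation time (s), decay constant (1/s)
rho = 0.001                              # small reactivity step (absolute)
dt, t_end = 1e-4, 5.0

n = 1.0                                  # relative neutron population (power)
C = beta / (Lambda * lam)                # equilibrium precursor concentration
for _ in range(int(t_end / dt)):
    dn = ((rho - beta) / Lambda) * n + lam * C
    dC = (beta / Lambda) * n - lam * C
    n += dt * dn
    C += dt * dC
print("relative power after 5 s:", round(n, 2))
```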
Procedia PDF Downloads 245
15062 Exploring a Teaching Model in Cultural Education Using Video-Focused Social Networking Apps: An Example of Chinese Language Teaching for African Students
Authors: Zhao Hong
Abstract:
When international students study Chinese as a foreign or second language, it is important for them to form constructive viewpoints and possess an open mindset toward Chinese culture. This helps them make faster progress in their language acquisition. Observations of African students at the Liaoning Institute of Science and Technology show that by integrating video-focused social networking apps such as TikTok ("Douyin") on a controlled basis, students raise their interest not only in making an effort to learn the Chinese language but also in understanding Chinese culture. During the last twelve months, our research group explored a teaching model using selected content in certain classroom settings, including virtual classrooms during lockdown periods due to the COVID-19 pandemic. Using interviews, a survey was conducted of international students from African countries taking Chinese language courses at the Liaoning Institute of Science and Technology. Based on the results, a teaching model was built for Chinese language acquisition by entering the "mobile Chinese culture."
Keywords: Chinese as a foreign language, cultural education, social networking apps, teaching model
Procedia PDF Downloads 74
15061 Simulation of Growth and Yield of Rice Under Irrigation and Nitrogen Management Using ORYZA2000
Authors: Mojtaba Esmaeilzad Limoudehi
Abstract:
To evaluate the ORYZA2000 model under irrigation and nitrogen fertilization management, a split-plot experiment in a randomized complete block design with three replications was conducted on a hybrid cultivar (spring) at the Rice Research Institute in the 1387-1388 crop year. Irrigation at four levels (permanent flood irrigation and irrigation intervals of around 5, 8, and 11 days) formed the main plots, and four levels of nitrogen fertilizer (0, 90, 120, and 150 kg N/ha) formed the subplots. Simulated and measured values of leaf area index, grain yield, and biological yield were compared using the regression coefficient, the t-test, the root mean square error (RMSE), and the normalized root mean square error (RMSEn). A normalized root mean square error of 10% for grain yield, 9% for biological yield, and 23% for maximum LAI was determined. The simulation results show that the ORYZA2000 model's accuracy for grain yield and biological yield is good, but it does not simulate maximum LAI well. The results show that the model is supported by the test results and can be used under conditions of nitrogen fertilizer and irrigation management.
Keywords: evaluation, rice, nitrogen fertilizer, model ORYZA2000
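A minimal sketch of the evaluation statistics quoted above, RMSE and the normalized RMSE expressed as a percentage of the observed mean; the values are placeholders, not the experimental yields:

```python
# RMSE and normalized RMSE (RMSEn) between simulated and measured values.
import numpy as np

observed = np.array([6.1, 6.8, 7.4, 7.9, 8.3])     # e.g. grain yield, t/ha (assumed)
simulated = np.array([5.8, 7.1, 7.2, 8.4, 8.1])

rmse = np.sqrt(np.mean((simulated - observed) ** 2))
rmse_n = 100.0 * rmse / observed.mean()              # percentage of the observed mean
print(f"RMSE = {rmse:.2f} t/ha, RMSEn = {rmse_n:.1f} %")
```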
Procedia PDF Downloads 70
15060 Reliability Modeling on Drivers’ Decision during Yellow Phase
Authors: Sabyasachi Biswas, Indrajit Ghosh
Abstract:
The random and heterogeneous behavior of vehicles in India poses a great challenge for researchers. Stop-and-go modeling at signalized intersections under heterogeneous traffic conditions has remained one of the most sought-after fields. Vehicles are often caught in the dilemma zone and are unable to make quick decisions on whether to stop or cross the intersection. This hampers traffic movement and may lead to accidents. The purpose of this work is to develop a stop-and-go prediction model that depicts drivers' decisions during the yellow time at signalized intersections. To accomplish this, certain traffic parameters were taken into account to develop a surrogate model. This research investigated the stop-and-go behavior of drivers by collecting data from four signalized intersections located in two major Indian cities. A model was developed to predict drivers' decision-making during the yellow phase of the traffic signal. The parameters used for modeling included distance to the stop line, time to the stop line, speed, and length of the vehicle. A Kriging-based surrogate model was developed to investigate drivers' decision-making behavior in the amber phase. It is observed that the proposed approach yields a highly accurate result (97.4 percent) with the Gaussian function. The accuracy of the crossing probability was 95.45, 90.9, and 86.36 percent, respectively, as predicted by the Kriging models with Gaussian, Exponential, and Linear functions.
Keywords: decision-making, dilemma zone, surrogate model, Kriging
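A minimal sketch of a Kriging-type surrogate for the stop/go decision: a Gaussian-process classifier with an RBF (Gaussian) kernel trained on the four predictors named above; the data are synthetic placeholders, not the intersection observations from the study:

```python
# Gaussian-process (Kriging-type) surrogate sketch for stop/go prediction.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 400
dist = rng.uniform(5, 90, n)        # distance to stop line (m)
ttsl = rng.uniform(0.5, 8, n)       # time to stop line (s)
speed = rng.uniform(5, 20, n)       # approach speed (m/s)
length = rng.uniform(2, 12, n)      # vehicle length (m)
# synthetic rule: vehicles close to the line relative to speed tend to cross (1)
go = (dist / speed < 3.0).astype(int)

X = np.column_stack([dist, ttsl, speed, length])
X_tr, X_te, y_tr, y_te = train_test_split(X, go, test_size=0.25, random_state=0)
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=10.0), random_state=0)
gpc.fit(X_tr, y_tr)
print("hold-out accuracy:", round(gpc.score(X_te, y_te), 3))
```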
Procedia PDF Downloads 309
15059 A Study on Improvement of the Torque Ripple and Demagnetization Characteristics of a PMSM
Authors: Yong Min You
Abstract:
Research on the torque ripple of Permanent Magnet Synchronous Motors (PMSMs) has progressed rapidly, as torque ripple affects the noise and vibration of the electric vehicle. There are several ways to reduce torque ripple, such as increasing the number of slots and poles, notching the rotor and stator teeth, and skewing the rotor and stator. However, these conventional methods have disadvantages in terms of material cost and productivity. An adequate demagnetization characteristic of PMSMs must also be attained for electric vehicle application. Due to the rare-earth supply issue, demand has been increasing for Dy-free permanent magnets, which can be applied to PMSMs for the electric vehicle. Because Dy-free permanent magnets have lower coercivity, the demagnetization characteristic has become more significant. To improve the torque ripple as well as the demagnetization characteristics, which are significant parameters for electric vehicle application, an unequal air-gap model is proposed for a PMSM. A shape optimization is performed to optimize the design variables of the unequal air-gap model; the design variables are the shape of the unequal air-gap and the angle between the V-shaped magnets. The optimization process is performed by Latin Hypercube Sampling (LHS), the Kriging method, and a Genetic Algorithm (GA). Finite element analysis (FEA) is also utilized to analyze the torque and demagnetization characteristics. The torque ripple and the demagnetization temperature of the initial 45 kW PMSM model with an unequal air-gap are 10% and 146.8 degrees, respectively, which are reaching a critical level for electric vehicle application. Therefore, the unequal air-gap model is proposed and an optimization process is conducted. Compared to the initial model, the torque ripple of the optimized unequal air-gap model was reduced by 7.7%. In addition, the demagnetization temperature of the optimized model was increased by 1.8% while maintaining the efficiency. From these results, the shape-optimized unequal air-gap PMSM has shown its usefulness in improving the torque ripple and demagnetization temperature for the electric vehicle.
Keywords: permanent magnet synchronous motor, optimal design, finite element method, torque ripple
Procedia PDF Downloads 276
15058 Using Combination of Sets of Features of Molecules for Aqueous Solubility Prediction: A Random Forest Model
Authors: Muhammet Baldan, Emel Timuçin
Abstract:
Generally, absorption and bioavailability increase if solubility increases; therefore, it is crucial to predict solubility in drug discovery applications. Molecular descriptors and molecular properties are traditionally used for the prediction of water solubility. Various key descriptors are used for this purpose, namely Drogan descriptors, Morgan descriptors, MACCS keys, etc., and each has different prediction capabilities, with varying success across different data sets. Structural features are another source commonly used for the prediction of solubility. However, there are few to no studies that combine three or more sets of properties or descriptors to produce a more powerful prediction model. Unlike available models, we used a combination of those features in a random forest machine learning model for improved solubility prediction, to better predict solubility and thereby contribute to drug discovery systems.
Keywords: solubility, random forest, molecular descriptors, MACCS keys
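A minimal sketch of combining descriptor sets for solubility prediction: Morgan fingerprints, MACCS keys, and a few physical descriptors concatenated into one feature matrix for a random forest; RDKit is assumed as the descriptor toolkit (the paper does not name one), and the SMILES strings and logS values are placeholders, not a real training set:

```python
# Descriptor-set combination sketch: concatenate several feature families per molecule.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem, Descriptors, MACCSkeys
from sklearn.ensemble import RandomForestRegressor

smiles = ["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O", "CCCCCCCCCC", "OCC(O)CO"]
log_s = np.array([0.1, -0.7, -2.1, -5.5, 0.5])        # placeholder solubility labels

def featurize(smi):
    mol = Chem.MolFromSmiles(smi)
    morgan = np.array(list(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=512)))
    maccs = np.array(list(MACCSkeys.GenMACCSKeys(mol)))
    physchem = np.array([Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
                         Descriptors.TPSA(mol)])
    return np.concatenate([morgan, maccs, physchem])    # combined feature vector

X = np.vstack([featurize(s) for s in smiles])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, log_s)
print("in-sample predictions:", model.predict(X).round(2))
```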
Procedia PDF Downloads 49