Search results for: system models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22795

21445 Improved Dynamic Bayesian Networks Applied to Arabic On Line Characters Recognition

Authors: Redouane Tlemsani, Abdelkader Benyettou

Abstract:

This work addresses on-line Arabic character recognition; the principal motivation is to study Arabic handwriting with on-line acquisition technology. The system is Markovian and can be viewed as a Dynamic Bayesian Network (DBN). A major interest of such systems is that the complete model (topology and parameters) can be trained from data. Our approach is based on the DBN formalism, which generalizes Bayesian networks to dynamic processes. One of our objectives is to find better parameters representing the links (dependences) between the dynamic network variables. In pattern-recognition applications, the structure is usually fixed in advance, which forces some strong assumptions (for example, independence between certain variables). The application concerns on-line recognition of isolated Arabic characters using our laboratory database, NOUN. A neural tester is proposed for external optimization of the DBN. The DBN and mixed-DBN scores are 70.24% and 62.50%, respectively, which suggests room for further development; other time-aware approaches were also considered and implemented, reaching a recognition rate of 94.79%.
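
As an illustration of the Markovian scoring idea behind such systems, the sketch below evaluates an observation sequence (e.g., quantized pen-stroke features) under a simple hidden-state model with the forward algorithm; the state count, feature alphabet, and all parameter values are illustrative assumptions, not the NOUN-trained model.

```python
import numpy as np

# Minimal forward-algorithm sketch for a Markovian (DBN-like) character model.
# All sizes and parameters below are illustrative assumptions.
n_states, n_symbols = 4, 8            # hidden stroke states, quantized feature symbols
rng = np.random.default_rng(0)

pi = np.full(n_states, 1.0 / n_states)                # initial state distribution
A = rng.dirichlet(np.ones(n_states), size=n_states)   # state transition matrix
B = rng.dirichlet(np.ones(n_symbols), size=n_states)  # emission probabilities

def log_likelihood(observations):
    """Score an observation sequence with the scaled forward algorithm."""
    alpha = pi * B[:, observations[0]]
    scale = alpha.sum()
    log_prob = np.log(scale)
    alpha /= scale
    for obs in observations[1:]:
        alpha = (alpha @ A) * B[:, obs]
        scale = alpha.sum()
        log_prob += np.log(scale)
        alpha /= scale                                 # rescale to avoid underflow
    return log_prob

# One model per character class; the class with the highest score wins.
sequence = rng.integers(0, n_symbols, size=20)
print(log_likelihood(sequence))
```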

Keywords: Arabic on-line character recognition, dynamic Bayesian network, pattern recognition, computer vision

Procedia PDF Downloads 429
21444 Intelligent Computing with Bayesian Regularization Artificial Neural Networks for a Nonlinear System of COVID-19 Epidemic Model for Future Generation Disease Control

Authors: Tahir Nawaz Cheema, Dumitru Baleanu, Ali Raza

Abstract:

In this work, we design an intelligent computing scheme based on Bayesian regularization artificial neural networks (BRANNs) to solve the mathematical model of the COVID-19 infectious disease. The transmission dynamics arise from interactions between people and are represented mathematically by a system of nonlinear differential equations. The dataset for the COVID-19 model is generated with an explicit Runge-Kutta method for different countries, including India, Pakistan, and Italy. The generated dataset is split into training, testing, and validation sets, which are used at each update of the Bayesian regularization backpropagation to approximate the numerical behavior of the COVID-19 dynamics. The performance and effectiveness of the proposed BRANN methodology are assessed through mean squared error, error histograms, numerical solutions, absolute error, and regression analysis.
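
A rough sketch of this workflow is shown below: an SIR-type system (standing in for the paper's COVID-19 model, whose exact equations are not given here) is integrated with an explicit Runge-Kutta solver to generate data, and a small neural network with an L2 penalty (a stand-in for true Bayesian regularization, which scikit-learn does not provide) is fitted to it. All parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Illustrative SIR-type model; beta and gamma are assumed values, not fitted ones.
beta, gamma, N = 0.3, 0.1, 1.0e6

def sir(t, y):
    S, I, R = y
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    return [dS, dI, dR]

# Generate the reference dataset with an explicit Runge-Kutta method (RK45).
t_eval = np.linspace(0, 160, 400)
sol = solve_ivp(sir, (0, 160), [N - 10, 10, 0], t_eval=t_eval, method="RK45")

X = t_eval.reshape(-1, 1)
y = sol.y.T / N                      # normalized S, I, R trajectories
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

# L2-regularized backpropagation as a rough proxy for Bayesian regularization.
net = MLPRegressor(hidden_layer_sizes=(20, 20), alpha=1e-3, max_iter=5000, random_state=1)
net.fit(X_train, y_train)
print("test MSE:", np.mean((net.predict(X_test) - y_test) ** 2))
```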

Keywords: mathematical models, bayesian regularization, bayesian-regularization backpropagation networks, regression analysis, numerical computing

Procedia PDF Downloads 149
21443 Impact of Artificial Intelligence Technologies on Information-Seeking Behaviors and the Need for a New Information Seeking Model

Authors: Mohammed Nasser Al-Suqri

Abstract:

Most existing information-seeking models were proposed more than two decades ago, prior to the digital information era and Artificial Intelligence (AI) technologies. The lack of current information-seeking models within Library and Information Studies (LIS) has limited advances in teaching students about information-seeking behaviors and in the design of library tools and services. To address these concerns, this study aims to propose a state-of-the-art model focused on the information-seeking behavior of library users in the Sultanate of Oman. The study aims to develop, design, and contextualize a real-time, user-centric information-seeking model capable of better supporting information needs and information usage, while incorporating critical insights for digital library practice. A further aim is to establish a far-sighted, state-of-the-art frame of reference covering AI while synthesizing digital resources and information to optimize information-seeking behavior. The study is designed around a mixed-methods process flow, including technical surveys, in-depth interviews, focus group evaluations, and stakeholder investigations. The data pool consists of users and specialist LIS staff at 4 public libraries and 26 academic libraries in Oman. The designed research model is expected to support LIS by providing multi-dimensional insights, with AI integration, for redefining the information-seeking process and developing a technology-rich model.

Keywords: artificial intelligence, information seeking, information behavior, information seeking models, libraries, Sultanate of Oman

Procedia PDF Downloads 116
21442 Development of Vapor Absorption Refrigeration System for Mini-Bus Car’s Air Conditioning: A Two-Fluid Model

Authors: Yoftahe Nigussie

Abstract:

This research explores the implementation of a vapor absorption refrigeration system (VARS) in mini-bus cars to enhance air conditioning efficiency. The conventional vapor compression refrigeration system (VCRS) in vehicles relies on mechanical work from the engine, leading to increased fuel consumption. The proposed VARS aims to utilize waste heat and exhaust gas from the internal combustion engine to cool the mini-bus cabin, thereby reducing fuel consumption and atmospheric pollution. The project involves two models: Model 1, a two-fluid vapor absorption system (VAS), and Model 2, a three-fluid VAS. Model 1 uses ammonia (NH₃) and water (H₂O) as refrigerants, where water absorbs ammonia rapidly, producing a cooling effect. The absorption cycle operates on the principle that absorbing ammonia in water decreases vapor pressure. The ammonia-water solution undergoes cycles of desorption, condensation, expansion, and absorption, facilitated by a generator, condenser, expansion valve, and absorber. The objectives of this research include reducing atmospheric pollution, minimizing air conditioning maintenance costs, lowering capital costs, enhancing fuel economy, and eliminating the need for a compressor. The comparison between vapor absorption and compression systems reveals advantages such as smoother operation, fewer moving parts, and the ability to work at lower evaporator pressures without affecting the Coefficient of Performance (COP). The proposed VARS demonstrates potential benefits for mini-bus air conditioning systems, providing a sustainable and energy-efficient alternative. By utilizing waste heat and exhaust gas, this system contributes to environmental preservation while addressing economic considerations for vehicle owners. Further research and development in this area could lead to the widespread adoption of vapor absorption technology in automotive air conditioning systems.
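
For orientation, a minimal sketch of the reversible (upper-bound) coefficient of performance of an absorption refrigeration cycle is given below; the generator, condenser/absorber, and evaporator temperatures are illustrative assumptions and are not taken from the proposed mini-bus system.

```python
def ideal_absorption_cop(t_generator, t_condenser, t_evaporator, t_absorber):
    """Reversible COP of an absorption cycle, temperatures in kelvin.

    COP_max = (1 - T_abs / T_gen) * T_evap / (T_cond - T_evap),
    i.e. a Carnot heat engine between generator and absorber driving a
    Carnot refrigerator between evaporator and condenser.
    """
    carnot_engine = 1.0 - t_absorber / t_generator
    carnot_refrigerator = t_evaporator / (t_condenser - t_evaporator)
    return carnot_engine * carnot_refrigerator

# Illustrative temperatures: exhaust-heated generator, ambient-cooled
# condenser/absorber, and the cabin-cooling evaporator.
print(ideal_absorption_cop(t_generator=400.0, t_condenser=320.0,
                           t_evaporator=278.0, t_absorber=320.0))
```

The actual cycle COP is well below this reversible bound, but the expression shows why waste exhaust heat at a higher generator temperature directly improves the achievable cooling performance.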

Keywords: room, zone, space, thermal resistance

Procedia PDF Downloads 72
21441 Restricted Boltzmann Machines and Deep Belief Nets for Market Basket Analysis: Statistical Performance and Managerial Implications

Authors: H. Hruschka

Abstract:

This paper presents the first comparison of the performance of the restricted Boltzmann machine and the deep belief net on binary market basket data relative to binary factor analysis and the two best-known topic models, namely Dirichlet allocation and the correlated topic model. This comparison shows that the restricted Boltzmann machine and the deep belief net are superior to both binary factor analysis and topic models. Managerial implications that differ between the investigated models are treated as well. The restricted Boltzmann machine is defined as joint Boltzmann distribution of hidden variables and observed variables (purchases). It comprises one layer of observed variables and one layer of hidden variables. Note that variables of the same layer are not connected. The comparison also includes deep belief nets with three layers. The first layer is a restricted Boltzmann machine based on category purchases. Hidden variables of the first layer are used as input variables by the second-layer restricted Boltzmann machine which then generates second-layer hidden variables. Finally, in the third layer hidden variables are related to purchases. A public data set is analyzed which contains one month of real-world point-of-sale transactions in a typical local grocery outlet. It consists of 9,835 market baskets referring to 169 product categories. This data set is randomly split into two halves. One half is used for estimation, the other serves as holdout data. Each model is evaluated by the log likelihood for the holdout data. Performance of the topic models is disappointing as the holdout log likelihood of the correlated topic model – which is better than Dirichlet allocation - is lower by more than 25,000 compared to the best binary factor analysis model. On the other hand, binary factor analysis on its own is clearly surpassed by both the restricted Boltzmann machine and the deep belief net whose holdout log likelihoods are higher by more than 23,000. Overall, the deep belief net performs best. We also interpret hidden variables discovered by binary factor analysis, the restricted Boltzmann machine and the deep belief net. Hidden variables characterized by the product categories to which they are related differ strongly between these three models. To derive managerial implications we assess the effect of promoting each category on total basket size, i.e., the number of purchased product categories, due to each category's interdependence with all the other categories. The investigated models lead to very different implications as they disagree about which categories are associated with higher basket size increases due to a promotion. Of course, recommendations based on better performing models should be preferred. The impressive performance advantages of the restricted Boltzmann machine and the deep belief net suggest continuing research by appropriate extensions. To include predictors, especially marketing variables such as price, seems to be an obvious next step. It might also be feasible to take a more detailed perspective by considering purchases of brands instead of purchases of product categories.
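
A minimal sketch of fitting a restricted Boltzmann machine to binary basket data with scikit-learn is shown below; the toy data, the number of hidden variables, and the pseudo-log-likelihood used for holdout scoring (scikit-learn does not compute the exact holdout log likelihood reported in the paper) are all illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.model_selection import train_test_split

# Toy binary basket matrix: rows are baskets, columns are product categories.
rng = np.random.default_rng(0)
baskets = (rng.random((2000, 169)) < 0.05).astype(float)

train, holdout = train_test_split(baskets, test_size=0.5, random_state=0)

# One layer of hidden variables; a deep belief net stacks such layers.
rbm = BernoulliRBM(n_components=20, learning_rate=0.05, n_iter=30, random_state=0)
rbm.fit(train)

# Pseudo-log-likelihood on the holdout half (a proxy, not the exact likelihood).
print("holdout pseudo-log-likelihood:", rbm.score_samples(holdout).sum())

# Hidden-variable activations, which can be inspected or fed to a second RBM.
hidden = rbm.transform(holdout)
print(hidden.shape)
```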

Keywords: binary factor analysis, deep belief net, market basket analysis, restricted Boltzmann machine, topic models

Procedia PDF Downloads 202
21440 Application of Artificial Immune Systems Combined with Collaborative Filtering in Movie Recommendation System

Authors: Pei-Chann Chang, Jhen-Fu Liao, Chin-Hung Teng, Meng-Hui Chen

Abstract:

This research combines artificial immune system with user and item based collaborative filtering to create an efficient and accurate recommendation system. By applying the characteristic of antibodies and antigens in the artificial immune system and using Pearson correlation coefficient as the affinity threshold to cluster the data, our collaborative filtering can effectively find useful users and items for rating prediction. This research uses MovieLens dataset as our testing target to evaluate the effectiveness of the algorithm developed in this study. The experimental results show that the algorithm can effectively and accurately predict the movie ratings. Compared to some state of the art collaborative filtering systems, our system outperforms them in terms of the mean absolute error on the MovieLens dataset.
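
A minimal sketch of the user-based part of such a system is given below: Pearson correlation serves both as the affinity measure for selecting users and as the weight in rating prediction. The toy rating matrix and the affinity threshold value are illustrative assumptions, not the MovieLens setup.

```python
import numpy as np

# Toy user-item rating matrix (0 = unrated); the real experiments used MovieLens.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 0, 4],
              [0, 1, 5, 4]], dtype=float)

def pearson(u, v):
    """Pearson correlation over items rated by both users."""
    mask = (u > 0) & (v > 0)
    if mask.sum() < 2:
        return 0.0
    a, b = u[mask], v[mask]
    if a.std() == 0 or b.std() == 0:
        return 0.0
    return float(np.corrcoef(a, b)[0, 1])

def predict(user, item, affinity_threshold=0.3):
    """Predict a rating from users whose affinity exceeds the threshold."""
    target = R[user]
    num = den = 0.0
    for other in range(R.shape[0]):
        if other == user or R[other, item] == 0:
            continue
        w = pearson(target, R[other])
        if w > affinity_threshold:          # antibody/antigen-style affinity cut-off
            num += w * R[other, item]
            den += abs(w)
    return num / den if den else np.nan

print(predict(user=1, item=1))
```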

Keywords: artificial immune system, collaborative filtering, recommendation system, similarity

Procedia PDF Downloads 536
21439 Elastoplastic and Ductile Damage Model Calibration of Steels for Bolt-Sphere Joints Used in China’s Space Structure Construction

Authors: Huijuan Liu, Fukun Li, Hao Yuan

Abstract:

The bolted spherical node is a common type of joint in space steel structures. The bolt-sphere joint portion almost always controls the bearing capacity of the bolted spherical node. The investigation of the bearing performance and progressive failure in service often requires high-fidelity numerical models. This paper focuses on the constitutive models of bolt steel and sphere steel used in China’s space structure construction. The elastoplastic model is determined by a standard tensile test and calibrated Voce saturated hardening rule. The ductile damage is found dominant based on the fractography analysis. Then Rice-Tracey ductile fracture rule is selected and the model parameters are calibrated based on tensile tests of notched specimens. These calibrated material models can benefit research or engineering work in similar fields.
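
A minimal sketch of calibrating a Voce-type saturated hardening law to tensile-test data is shown below; the data points and the particular parameterization σ = σ_y + Q(1 − e^(−b·ε_p)) are illustrative assumptions, not the values obtained for the bolt or sphere steels.

```python
import numpy as np
from scipy.optimize import curve_fit

def voce(eps_p, sigma_y, Q, b):
    """Voce saturated hardening: flow stress vs. equivalent plastic strain."""
    return sigma_y + Q * (1.0 - np.exp(-b * eps_p))

# Illustrative true-stress / plastic-strain points from a standard tensile test (MPa).
eps_p = np.array([0.0, 0.01, 0.02, 0.05, 0.10, 0.15, 0.20])
sigma = np.array([640.0, 700.0, 740.0, 800.0, 845.0, 860.0, 868.0])

params, _ = curve_fit(voce, eps_p, sigma, p0=(600.0, 250.0, 20.0))
sigma_y, Q, b = params
print(f"sigma_y = {sigma_y:.1f} MPa, Q = {Q:.1f} MPa, b = {b:.1f}")
```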

Keywords: bolt-sphere joint, steel, constitutive model, ductile damage, model calibration

Procedia PDF Downloads 138
21438 Modeling Core Flooding Experiments for Co₂ Geological Storage Applications

Authors: Avinoam Rabinovich

Abstract:

CO₂ geological storage is a proven technology for reducing anthropogenic carbon emissions, which is paramount for achieving the ambitious net zero emissions goal. Core flooding experiments are an important step in any CO₂ storage project, allowing us to gain information on the flow of CO₂ and brine in the porous rock extracted from the reservoir. This information is important for understanding basic mechanisms related to CO₂ geological storage as well as for reservoir modeling, which is an integral part of a field project. In this work, a different method for constructing accurate models of CO₂-brine core flooding will be presented. Results for synthetic cases and real experiments will be shown and compared with numerical models to exhibit their predictive capabilities. Furthermore, the various mechanisms which impact the CO₂ distribution and trapping in the rock samples will be discussed, and examples from models and experiments will be provided. The new method entails solving an inverse problem to obtain a three-dimensional permeability distribution which, along with the relative permeability and capillary pressure functions, constitutes a model of the flow experiments. The model is more accurate when data from a number of experiments are combined to solve the inverse problem. This model can then be used to test various other injection flow rates and fluid fractions which have not been tested in experiments. The models can also be used to bridge the gap between small-scale capillary heterogeneity effects (sub-core and core scale) and large-scale (reservoir scale) effects, known as the upscaling problem.

Keywords: CO₂ geological storage, residual trapping, capillary heterogeneity, core flooding, CO₂-brine flow

Procedia PDF Downloads 71
21437 The Control System Architecture of Space Environment Simulator

Authors: Zhan Haiyang, Gu Miao

Abstract:

This article mainly introduces the control system architecture of a space environment simulator and also briefly reviews industrial process automation control technology and the measurement technology for vacuum and cold-black environments. According to chamber volume, space environment simulators are divided into three types: small, medium and large. According to this classification and the application of the simulators, the control systems are divided into control systems for small, medium and large space environment simulators, and a centralized control system for multiple space environment simulators.

Keywords: space environment simulator, control system, architecture, automation control technology

Procedia PDF Downloads 475
21436 Developing A Third Degree Of Freedom For Opinion Dynamics Models Using Scales

Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle

Abstract:

Opinion dynamics models use an agent-based modeling approach to model people's opinions. A model's properties are usually explored by testing its two 'degrees of freedom': the interaction rule and the network topology. The latter defines the connections, and thus the possible interactions, among agents. The interaction rule, instead, determines how agents select each other and update their own opinions. Here we show the existence of a third degree of freedom. It can be used to turn one model into another, or to change a model's output by up to 100% of its initial value. Opinion dynamics models represent the evolution of real-world opinions parsimoniously. Thus, it is fundamental to know how a real-world opinion (e.g., supporting a candidate) can be turned into a number. Specifically, we want to know whether, by choosing a different opinion-to-number transformation, the model's dynamics would be preserved. This transformation is typically not addressed in the opinion dynamics literature. However, it has already been studied in psychometrics, a branch of psychology. In that field, real-world opinions are converted into numbers using abstract objects called 'scales.' These scales can be converted one into the other, in the same way as we convert meters to feet. Thus, in our work, we analyze how such scale transformations may affect opinion dynamics models. We perform our analysis both with mathematical modeling and by validating it via agent-based simulations. To distinguish between scale transformation and measurement error, we first analyze the case of perfect scales (i.e., no error or noise). Here we show that a scale transformation may change the model's dynamics up to a qualitative level, meaning that a researcher may reach a totally different conclusion, even using the same dataset, just by slightly changing the way the data are pre-processed. Indeed, we quantify that this effect may alter the model's output by 100%. Using two models from the standard literature, we show that a scale transformation can transform one model into the other. This transformation is exact, and it holds for every result. Lastly, we also test the case of real-world data (i.e., finite precision). We perform this test using a 7-point Likert scale, showing how even a small scale change may result in different predictions or a different number of opinion clusters. Because of this, we think that scale transformation should be considered a third degree of freedom for opinion dynamics. Indeed, its properties have a strong impact both on theoretical models and on their application to real-world data.
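
A minimal sketch of the effect discussed above is given below: a bounded-confidence (Deffuant-type) interaction rule is run on the raw opinions and on a monotonically rescaled copy of the same opinions, and the resulting cluster counts can differ. The interaction rule, the rescaling function, and all parameter values are illustrative assumptions, not the specific models analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def deffuant(opinions, epsilon=0.2, mu=0.5, steps=50000):
    """Bounded-confidence averaging: interact only if opinions are within epsilon."""
    x = opinions.copy()
    n = len(x)
    for _ in range(steps):
        i, j = rng.integers(0, n, size=2)
        if abs(x[i] - x[j]) < epsilon:
            shift = mu * (x[j] - x[i])
            x[i] += shift
            x[j] -= shift
    return x

def n_clusters(x, tol=0.05):
    """Count opinion clusters as groups separated by gaps larger than tol."""
    s = np.sort(x)
    return 1 + int(np.sum(np.diff(s) > tol))

opinions = rng.random(200)              # opinions measured on a [0, 1] scale
rescaled = opinions ** 3                # a monotone scale transformation of the same data

print("raw scale clusters:     ", n_clusters(deffuant(opinions)))
print("rescaled scale clusters:", n_clusters(deffuant(rescaled)))
```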

Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics

Procedia PDF Downloads 155
21435 Understanding the Role of Gas Hydrate Morphology on the Producibility of a Hydrate-Bearing Reservoir

Authors: David Lall, Vikram Vishal, P. G. Ranjith

Abstract:

Numerical modeling of gas production from hydrate-bearing reservoirs requires the solution of various thermal, hydrological, chemical, and mechanical phenomena in a coupled manner. Among the various reservoir properties that influence gas production estimates, the distribution of permeability across the domain is one of the most crucial parameters since it determines both heat transfer and mass transfer. The aspect of permeability in hydrate-bearing reservoirs is particularly complex compared to conventional reservoirs since it depends on the saturation of gas hydrates and hence, is dynamic during production. The dependence of permeability on hydrate saturation is mathematically represented using permeability-reduction models, which are specific to the expected morphology of hydrate accumulations (such as grain-coating or pore-filling hydrates). In this study, we demonstrate the impact of various permeability-reduction models, and consequently, different morphologies of hydrate deposits on the estimates of gas production using depressurization at the reservoir scale. We observe significant differences in produced water volumes and cumulative mass of produced gas between the models, thereby highlighting the uncertainty in production behavior arising from the ambiguity in the prevalent gas hydrate morphology.
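
As a concrete example of such permeability-reduction models, the sketch below evaluates a simple Masuda-type power-law reduction with two different exponents standing in for two different hydrate morphologies; the exponent values are illustrative assumptions rather than the specific correlations used in the study.

```python
import numpy as np

def masuda_permeability(k0, hydrate_saturation, N):
    """Masuda-type reduction: k = k0 * (1 - S_h)**N."""
    return k0 * (1.0 - hydrate_saturation) ** N

k0 = 1.0e-13                      # intrinsic permeability of the sediment, m^2
s_h = np.linspace(0.0, 0.8, 9)    # hydrate saturation

# Two illustrative exponents standing in for two different hydrate morphologies;
# which exponent fits which morphology must come from the chosen correlation.
for label, N in [("morphology A (N=3)", 3.0), ("morphology B (N=10)", 10.0)]:
    k = masuda_permeability(k0, s_h, N)
    print(label, np.round(k / k0, 4))
```

Because the effective permeability feeds directly into both heat and mass transfer during depressurization, even modest differences in the assumed exponent propagate into the produced-gas and produced-water estimates, which is the uncertainty highlighted above.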

Keywords: gas hydrate morphology, multi-scale modeling, THMC, fluid flow in porous media

Procedia PDF Downloads 221
21434 Hybrid Direct Numerical Simulation and Large Eddy Simulating Wall Models Approach for the Analysis of Turbulence Entropy

Authors: Samuel Ahamefula

Abstract:

Turbulent motion is a highly nonlinear and complex phenomenon, and its modelling is still very challenging. In this study, we developed a hybrid computational approach to accurately simulate fluid turbulence. The focus is the coupling and transitioning between Direct Numerical Simulation (DNS) and Large Eddy Simulation with Wall Models (LES-WM) regions. In the framework, high-order, high-fidelity fluid dynamics methods are used to simulate the unsteady compressible Navier-Stokes equations in Eulerian form on unstructured moving grids. The coupling and transitioning of DNS and LES-WM are conducted through a linearly staggered Dirichlet-Neumann coupling scheme. The framework is verified and validated on the basis of the ability of DNS to capture the full range of turbulent scales with accurate results, and the efficiency of LES-WM in simulating the near-wall turbulent boundary layer using wall models.

Keywords: computational methods, turbulence modelling, turbulence entropy, navier-stokes equations

Procedia PDF Downloads 101
21433 Unified Public Transportation System for Mumbai Using Radio Frequency Identification

Authors: Saurabh Parkhedkar, Rajanikant Tenguria

Abstract:

The paper proposes revamping the public transportation system in Mumbai with the use of Radio Frequency Identification (RFID) technology in order to provide better integration and compatibility across various modes of transport. In Mumbai, the mass transport system suffers from a poorly inter-compatible ticketing system, subpar fare collection techniques, and a lack of planning for optimum utilization of resources. Development of suburbs and growth in population will result in growing demand for mass transportation networks. Hence, the growing demand on the already overburdened public transportation system is only going to worsen the scenario. Thus, a superior system is essential in order to regulate, manage and supervise future transportation needs. The proposed RFID-based system integrates the Mumbai Suburban Railway, BEST (Brihanmumbai Electric Supply and Transport Undertaking, transport wing) buses, the Mumbai Monorail and the Mumbai Metro into a Unified Public Transportation System (UPTS). The UPTS takes into account various drawbacks of the present-day system and offers a solution suitable for modern-day Mumbai.

Keywords: urbanization, transportation, RFID, Mumbai, public transportation, smart city

Procedia PDF Downloads 414
21432 Comparison of Spiking Neuron Models in Terms of Biological Neuron Behaviours

Authors: Fikret Yalcinkaya, Hamza Unsal

Abstract:

To understand how neurons work, experimental studies in neuroscience must be combined with numerical simulations of neuron models in a computer environment. In this regard, the simplicity and applicability of spiking neuron modeling functions have been of great interest in computational and numerical neuroscience in recent years. Spiking neuron models can be classified by the neuronal behaviors they exhibit, such as spiking and bursting; these classifications are important for researchers working in theoretical neuroscience. In this paper, three different spiking neuron models based on first-order differential equations are discussed and compared: Izhikevich, adaptive exponential integrate-and-fire (AEIF) and Hindmarsh-Rose (HR). First, the physical meanings, derivatives, and differential equations of each model are provided and simulated in the Matlab environment. Then, by selecting appropriate parameters, the models were examined visually in Matlab, with the aim of demonstrating which model can reproduce well-known biological neuron behaviors such as tonic spiking, tonic bursting, mixed-mode firing, spike frequency adaptation, resonator and integrator behavior. As a result, the Izhikevich model was shown to reproduce regular spiking, continuous bursting (chattering), intrinsic bursting, thalamo-cortical, low-threshold spiking and resonator behaviors. The adaptive exponential integrate-and-fire model was able to produce firing patterns such as regular firing, adaptive firing, initial-burst firing, regular bursting, delayed firing, delayed regular bursting, transient firing and irregular firing. The Hindmarsh-Rose model showed three different dynamic neuron behaviors: spiking, bursting and chaotic. From these results, the Izhikevich cell model may be preferred due to its ability to reflect the true behavior of the nerve cell, its ability to produce different types of spikes, and its suitability for use in larger-scale brain models. The most important reason for choosing the adaptive exponential integrate-and-fire model is that it can create rich firing patterns with fewer parameters. The chaotic behaviors of the Hindmarsh-Rose neuron model, like those of some other chaotic systems, are thought to be useful in many scientific and engineering applications such as physics, secure communication and signal processing.
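
A minimal sketch of one of the compared models, the Izhikevich neuron in its tonic-spiking regime, is given below; the paper's simulations were done in Matlab, and this is an equivalent Python rendering using standard textbook parameters.

```python
import numpy as np

# Izhikevich model: v' = 0.04 v^2 + 5 v + 140 - u + I,  u' = a (b v - u),
# with reset v <- c, u <- u + d whenever v >= 30 mV.
a, b, c, d = 0.02, 0.2, -65.0, 6.0      # textbook tonic-spiking parameters
I = 14.0                                 # constant input current
dt, T = 0.25, 500.0                      # time step and duration in ms

v, u = -70.0, b * -70.0
spike_times = []
for step in range(int(T / dt)):
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                        # spike: record and reset
        spike_times.append(step * dt)
        v, u = c, u + d
print(f"{len(spike_times)} spikes in {T:.0f} ms (tonic spiking)")
```

Changing only the four parameters (a, b, c, d) switches the same equations between regimes such as chattering, intrinsic bursting or low-threshold spiking, which is why this model is attractive for large-scale brain simulations.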

Keywords: Izhikevich, adaptive exponential integrate-and-fire, Hindmarsh-Rose, biological neuron behaviours, spiking neuron models

Procedia PDF Downloads 183
21431 Parameter Estimation via Metamodeling

Authors: Sergio Haram Sarmiento, Arcady Ponosov

Abstract:

Based on appropriate multivariate statistical methodology, we suggest a generic framework for efficient parameter estimation for ordinary differential equations and the corresponding nonlinear models. In this framework, classical linear regression strategies are refined into a nonlinear regression by a locally linear modelling technique (known as metamodelling). The approach identifies those latent variables of the given model that accumulate the most information about it among all approximations of the same dimension. The method is applied to several benchmark problems, in particular to the so-called 'power-law systems', nonlinear differential equations typically used in Biochemical Systems Theory.
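
For reference, the 'power-law systems' mentioned above are commonly written in the S-system form of Biochemical Systems Theory, sketched below; here the rate constants α_i, β_i > 0 and the kinetic orders g_ij, h_ij are the quantities that parameter estimation seeks to recover from time-series data (the specific benchmark systems used in the paper are not reproduced here).

```latex
% S-system (power-law) form used in Biochemical Systems Theory: each state
% variable x_i has one aggregate production term and one degradation term.
\frac{dx_i}{dt} = \alpha_i \prod_{j=1}^{n} x_j^{g_{ij}}
                - \beta_i \prod_{j=1}^{n} x_j^{h_{ij}},
\qquad i = 1, \dots, n
```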

Keywords: principal component analysis, generalized law of mass action, parameter estimation, metamodels

Procedia PDF Downloads 518
21430 Aggregate Production Planning Framework in a Multi-Product Factory: A Case Study

Authors: Ignatio Madanhire, Charles Mbohwa

Abstract:

This study looks at the best model of the aggregate planning activity in an industrial entity and uses the trial-and-error method on spreadsheets to solve aggregate production planning problems. A linear programming model is also introduced to optimize the aggregate production planning problem. Application of the models in a furniture production firm is evaluated to demonstrate that practical and beneficial solutions can be obtained from the models. Finally, some benchmarking of other furniture manufacturing industries was undertaken to assess the relevance and level of use of such models in other furniture firms.
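
A minimal sketch of the linear programming formulation is given below using scipy; the demand figures, costs, and capacity are illustrative assumptions for a small furniture-type example, not data from the case-study firm.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative 4-period aggregate plan: choose production P_t and end inventory I_t.
demand = np.array([400, 500, 300, 450])   # units per period (assumed)
prod_cost, hold_cost = 50.0, 4.0          # cost per unit produced / held (assumed)
capacity, initial_inventory = 480, 50

T = len(demand)
# Decision vector x = [P_1..P_T, I_1..I_T]; minimize production + holding cost.
c = np.concatenate([np.full(T, prod_cost), np.full(T, hold_cost)])

# Inventory balance: I_t = I_{t-1} + P_t - D_t  ->  P_t + I_{t-1} - I_t = D_t.
A_eq = np.zeros((T, 2 * T))
b_eq = demand.astype(float).copy()
for t in range(T):
    A_eq[t, t] = 1.0               # P_t
    A_eq[t, T + t] = -1.0          # -I_t
    if t > 0:
        A_eq[t, T + t - 1] = 1.0   # +I_{t-1}
b_eq[0] -= initial_inventory

bounds = [(0, capacity)] * T + [(0, None)] * T
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("production plan:", np.round(res.x[:T], 1))
print("end inventories:", np.round(res.x[T:], 1))
print("total cost:", round(res.fun, 2))
```

The spreadsheet trial-and-error approach explores the same decision variables by hand; the LP simply guarantees the cost-minimizing plan under the stated constraints.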

Keywords: aggregate production planning, trial and error, linear programming, furniture industry

Procedia PDF Downloads 560
21429 Machine Learning Techniques for Estimating Ground Motion Parameters

Authors: Farid Khosravikia, Patricia Clayton

Abstract:

The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site condition. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially subsequent risk assessment of different types of structures. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, such as Artificial Neural Networks, Random Forest, and Support Vector Machines. The algorithms are adjusted to quantify event-to-event and site-to-site variability of the ground motions by implementing them as random effects in the proposed models to reduce the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events with magnitude 3 to 5.8, recorded over the hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. The choice of this database stems from the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for these states. Accuracy of the models in predicting intensity measures, generalization capability of the models for future data, as well as usability of the models are discussed in the evaluation process. The results indicate the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, and particularly, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data are available.
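
A minimal sketch of the non-parametric approach is given below, using a random forest to map magnitude, distance, and a site parameter to a log intensity measure; the synthetic data generator stands in for the Oklahoma/Kansas/Texas records, which are not reproduced here, and the paper's random-effects treatment is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 4000

# Synthetic stand-in for the ground-motion database: magnitude, hypocentral
# distance (km), and a Vs30-like site term; the target is log10(PGA).
mag = rng.uniform(3.0, 5.8, n)
dist = rng.uniform(4.0, 500.0, n)
vs30 = rng.uniform(200.0, 900.0, n)
log_pga = (0.9 * mag - 1.7 * np.log10(dist) - 0.3 * np.log10(vs30 / 760.0)
           - 2.0 + rng.normal(0.0, 0.25, n))          # assumed functional form + noise

X = np.column_stack([mag, dist, vs30])
X_tr, X_te, y_tr, y_te = train_test_split(X, log_pga, test_size=0.25, random_state=7)

model = RandomForestRegressor(n_estimators=300, min_samples_leaf=5, random_state=7)
model.fit(X_tr, y_tr)
print("holdout R^2:", round(model.score(X_te, y_te), 3))
print("feature importances (M, R, Vs30):", np.round(model.feature_importances_, 3))
```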

Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine

Procedia PDF Downloads 123
21428 Photovoltaic Array Cleaning System Design and Evaluation

Authors: Ghoname Abdullah, Hidekazu Nishimura

Abstract:

Dust accumulation on the photovoltaic module's surface results in appreciable losses and negatively affects the generated power. Hence, in this paper, the design of a photovoltaic array cleaning system is presented. The cleaning system utilizes one drive motor, two guide rails, and four sweepers during the cleaning process. The cleaning system was operated experimentally for one month to investigate its effect on PV array energy output. The energy captured over a month by a PV array cleaned using the proposed cleaning system is compared with that captured by a soiled PV array. The results show a 15% increase in energy generation from the PV array with cleaning. Based on these results, investigating the optimal scheduling of PV array cleaning could be an interesting research topic.

Keywords: cleaning system, dust accumulation, PV array, PV module, soiling

Procedia PDF Downloads 130
21427 The Nuclear Power Plant Environment Monitoring System through Mobile Units

Authors: P. Tanuska, A. Elias, P. Vazan, B. Zahradnikova

Abstract:

This article describes the information system for measuring and evaluating the dose rate in the environment of the Mochovce and Bohunice nuclear power plants in Slovakia. The article presents the results achieved in the implementation of the EU project 'Research of monitoring and evaluation of non-standard conditions in the area of nuclear power plants'. The objectives included improving the system for acquiring, measuring and evaluating data with mobile and autonomous units by applying new knowledge from research. The article provides the basic and specific features of the system and, compared to the previous version of the system, its new functions.

Keywords: information system, dose rate, mobile devices, nuclear power plant

Procedia PDF Downloads 377
21426 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models

Authors: I. V. Pinto, M. R. Sooriyarachchi

Abstract:

It can frequently be observed that data arising in our environment have a hierarchical or nested structure. Multilevel modelling is a modern approach to handling this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex in nature, and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood (order 1 and order 2) (MQL1, MQL2) and penalized quasi-likelihood (order 1 and order 2) (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset. Therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to usage, it is equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary-response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v 2.19) with varying numbers of clusters, cluster sizes and intra-cluster correlations. The test maintained the desirable Type-I error for models estimated using PQL2, and it failed for almost all the combinations of MQL. Power of the test was adequate for most of the combinations in all estimation methods except MQL1. Moreover, models were fitted using the four methods to a real-life dataset, and the performance of the test was compared for each model.

Keywords: goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, penalized quasi-likelihood, power, quasi-likelihood, type-I error

Procedia PDF Downloads 143
21425 Cloud Computing in Data Mining: A Technical Survey

Authors: Ghaemi Reza, Abdollahi Hamid, Dashti Elham

Abstract:

Cloud computing poses a diversity of challenges for data mining operations arising from the dynamic structure of data distribution, as opposed to the typical database scenarios of conventional architectures. Due to the immense number of users seeking data on a daily basis, there are serious security concerns for cloud providers as well as for data providers who put their data in the cloud computing environment. Big data analytics uses compute-intensive data mining algorithms (hidden Markov models, MapReduce parallel programming, the Mahout project, the Hadoop distributed file system, k-means and k-medoids, Apriori) that require efficient high-performance processors to produce timely results; these data mining algorithms are used to fit or optimize model parameters. The challenges the operation has to face are establishing successful transactions with the existing virtual machine environment and keeping the databases under control. Several factors have led to a move from normal or centralized mining to distributed data mining. The approach is offered as SaaS, which uses multi-agent systems for implementing the different tasks of the system. There are still some open problems in data mining based on cloud computing, including the design and selection of data mining algorithms.

Keywords: cloud computing, data mining, computing models, cloud services

Procedia PDF Downloads 481
21424 Design and Implementation of an Efficient Solar-Powered Pumping System

Authors: Mennatallah M. Fouad, Omar Hussein, Lamia A. Shihata

Abstract:

The main problem in many rural areas is the absence of electricity and limited access to water. The novelty of this work lies in implementing a small-scale experimental setup for a solar-powered water pumping system with a battery back-up system. Cooling and cleaning of the PV panel are implemented to enhance its overall efficiency and output. Moreover, a simulation of a large-scale solar-powered pumping system is performed using PVSyst software. Results of the experimental setup show that the PV system with a battery backup is a feasible and viable way to operate the water pumping system. Excess water from the pumping system is used to cool and clean the PV panel, achieving an average increase in PV output of 21.8%. Simulation results show that the system provides adequate output to power the pumping system and saves 0.3 tons of CO₂ compared to conventional fossil fuels. This system is recommended for hot countries, as it would help decrease dependence on depleting fossil fuels, provide access to electricity in areas with no supply, provide a source of water for crop growth, and decrease carbon emissions.

Keywords: efficient solar pumping, PV cleaning, PV cooling, PV-operated water pump

Procedia PDF Downloads 136
21423 State Budget Accounting: Factors Affected and Basic Orientation to Vietnamese Public Sector Entities

Authors: Pham Quang Huy

Abstract:

The state budget is considered an effective tool for controlling, adjusting and regulating the market economy of any country. To ensure that the activities of the state in the fields of politics, economy and society are carried out efficiently, substantial budget resources are required. These financial funds are formed from tax revenues and revenues beyond taxes. Therefore, governments need an accounting regime to manage receipts and expenditures that is suitable for recording a full range of items. This can help to increase transparency and accountability in the budget system. One of the main requirements of Vietnamese policy is to improve the accounting system for revenues and expenditures so that it can provide the many reports needed to meet the information requirements of the government and other users, as well as move toward the requirements of international standards. Using quantitative research methods and analytical models to explore the relevant factors, the main purpose of this article is to identify the factors affecting budget accounting and to provide some directions for the Vietnamese public sector in the future. The results indicate that Vietnamese budget accounting is affected by seven factors and should pursue three main orientations in public sector units.

Keywords: state budget, accounting, IPSAS, budget management, government, public sector

Procedia PDF Downloads 272
21422 Business and Psychological Principles Integrated into Automated Capital Investment Systems through Mathematical Algorithms

Authors: Cristian Pauna

Abstract:

With 2020 only a few steps away, investing in financial markets is a common activity nowadays. In the electronic trading environment, automated investment software has become a major part of the business intelligence system of any modern financial company. Investment decisions are assisted and/or made automatically by computers using mathematical algorithms today. The complexity of these algorithms requires computer assistance in the investment process. This paper will present several investment strategies that can be automated with algorithmic trading for the Deutscher Aktienindex DAX30. It was found that, based on several price-action mathematical models used for high-frequency trading, some investment strategies can be optimized and improved for automated investments, with good results. This paper will present the way to automate these investment decisions. Automated signals will be built using all of these strategies. Three major types of investment strategies were found in this study; the types are separated by the target length and by the exit strategy used. The exit decisions will also be automated, and the paper will present the specifics for each investment type. A comparative study is also included in order to reveal the differences between the strategies. Based on these results, the profit and the capital exposure will be compared and analyzed in order to qualify the investment methodologies presented and to compare them with any other investment system. In conclusion, some major investment strategies will be revealed and compared in order to be considered for inclusion in any automated investment system.

Keywords: Algorithmic trading, automated investment systems, limit conditions, trading principles, trading strategies

Procedia PDF Downloads 194
21421 Mixed Mode Fracture Analyses Using Finite Element Method of Edge Cracked Heavy Spinning Annulus Pulley

Authors: Bijit Kalita, K. V. N. Surendra

Abstract:

A rotating disk is one of the most indispensable parts of a rotating machine and has found many applications in diverse fields of science and technology. In this paper, we consider the problem of a heavy spinning disk mounted on a rotor system and acted upon by boundary traction. Finite element modelling is used at various loading conditions to determine the mixed-mode stress intensity factors. The effect of combined shear and normal traction on the boundary is incorporated in the analysis under the action of gravity. The variation near the crack tip is characterized in terms of the stress intensity factor (SIF), with an aim to find the SIF for a wide range of parameters. The results of the finite element analyses carried out on the compressed disk of a belt pulley arrangement using fracture mechanics concepts are shown. A total of one hundred cases of the problem are solved for each of the variations in the loading-arc parameter and crack orientation using finite element models of the disc under compression. All models were prepared and analyzed for the uncracked disk, the disk with a single crack at different orientations emanating from the shaft hole, as well as the disc with a pair of cracks emerging from the same center hole. Curves are plotted for various loading conditions. Finally, crack propagation paths are determined using kink-angle concepts.

Keywords: crack-tip deformations, static loading, stress concentration, stress intensity factor

Procedia PDF Downloads 144
21420 Using Machine Learning to Classify Different Body Parts and Determine Healthiness

Authors: Zachary Pan

Abstract:

Our general mission is to solve the problem of classifying images into different body part types and deciding if each of them is healthy or not. However, for now, we will determine healthiness for only one-sixth of the body parts, specifically the chest. We will detect pneumonia in X-ray scans of those chest images. With this type of AI, doctors can use it as a second opinion when they are taking CT or X-ray scans of their patients. Another advantage of using this machine learning classifier is that it has no human weaknesses like fatigue. The overall approach to this problem is to split the problem into two parts: first, classify the image, then determine if it is healthy. In order to classify the image into a specific body part class, the body parts dataset must be split into test and training sets. We can then use many models, like neural networks or logistic regression models, and fit them using the training set. Now, using the test set, we can obtain a realistic accuracy the models will have on images in the real world since these testing images have never been seen by the models before. In order to increase this testing accuracy, we can also apply many complex algorithms to the models, like multiplicative weight update. For the second part of the problem, to determine if the body part is healthy, we can have another dataset consisting of healthy and non-healthy images of the specific body part and once again split that into the test and training sets. We then use another neural network to train on those training set images and use the testing set to figure out its accuracy. We will do this process only for the chest images. A major conclusion reached is that convolutional neural networks are the most reliable and accurate at image classification. In classifying the images, the logistic regression model, the neural network, neural networks with multiplicative weight update, neural networks with the black box algorithm, and the convolutional neural network achieved 96.83 percent accuracy, 97.33 percent accuracy, 97.83 percent accuracy, 96.67 percent accuracy, and 98.83 percent accuracy, respectively. On the other hand, the overall accuracy of the model that determines if the images are healthy or not is around 78.37 percent accuracy.
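
A minimal sketch of the convolutional classifier described above is given below in PyTorch; the input size, the number of body-part classes, and the random tensors standing in for the image dataset are illustrative assumptions, not the study's data or architecture.

```python
import torch
import torch.nn as nn

# Tiny convolutional classifier for six body-part classes (illustrative sizes).
class BodyPartCNN(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Random tensors stand in for 64x64 grayscale scans and their class labels.
images = torch.randn(32, 1, 64, 64)
labels = torch.randint(0, 6, (32,))

model = BodyPartCNN()
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(3):                       # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

A second network of the same form, trained on healthy versus non-healthy chest images, would cover the pneumonia-detection half of the pipeline.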

Keywords: body part, healthcare, machine learning, neural networks

Procedia PDF Downloads 109
21419 Research on the Ecological Impact Evaluation Index System of Transportation Construction Projects

Authors: Yu Chen, Xiaoguang Yang, Lin Lin

Abstract:

Traffic engineering construction is an important part of the infrastructure for economic and social development. During construction and operation, the ability to correctly evaluate a project's environmental impact is crucial to the rational operation of existing transportation projects, the sound development of transportation engineering construction, and the adoption of appropriate measures to carry out environmental protection work scientifically. Most existing research on ecological and environmental impact assessment is limited to individual aspects of the environment rather than an overall evaluation of the environmental system; in terms of research conclusions, there are more qualitative analyses at the technical and policy levels, and there is a lack of quantitative research results and of quantitative, operable evaluation models. In this paper, a comprehensive analysis of the ecological and environmental impacts of transportation construction projects is conducted, and factors such as the accessibility of data and the reliability of calculation results are considered in order to extract indicators that reflect the essence and characteristics of these impacts. The qualitative evaluation indicators were screened using the expert review method, the qualitative indicators were measured using the fuzzy statistics method, the quantitative indicators were screened using principal component analysis, and the quantitative indicators were measured by both literature search and calculation. An environmental impact evaluation index system with a general objective layer, sub-objective layer and indicator layer was established, dividing the environmental impact of a transportation construction project into two periods: the construction period and the operation period. On the basis of the evaluation index system, the index weights are determined using the analytic hierarchy process, and the individual indicators to be evaluated are made dimensionless, eliminating the influence of the indicators' original backgrounds and meanings. Finally, the thesis uses the above research results, combined with actual engineering practice, to verify the correctness and operability of the evaluation method.

Keywords: transportation construction projects, ecological and environmental impact, analysis and evaluation, indicator evaluation system

Procedia PDF Downloads 108
21418 Collaborative Early Warning System: An Integrated Framework for Mitigating Impacts of Natural Hazards in the UAE

Authors: Abdulla Al Hmoudi

Abstract:

The impacts and costs of natural disasters on people, property and the environment are often severe when they occur on a large scale or when communities are not prepared for them. Factors such as the impacts of climate change, urban growth and poor planning, to mention a few, have continued to significantly increase the frequency and aggravate the impacts of natural hazards across the world, the United Arab Emirates (UAE) included. The lack of a deployed early warning system, low risk and hazard knowledge, and the impacts of natural hazards experienced in some communities in the UAE have emphasised the need for more effective early warning systems. This paper focuses on the collaborative approach taken to instituting and implementing an early warning system. Using mixed methods, 888 people completed the questionnaire and eight people were interviewed in Abu Dhabi. The results indicate that a collaborative approach to the early warning system in the UAE is needed, but the current system lacks essential principles of early warning and is currently underutilised. It is recommended that the collaborative early warning system be applied at every stage of the early warning process, with specific responsibilities assigned to each stakeholder and actor.

Keywords: community, early warning system, emergency management, UAE

Procedia PDF Downloads 144
21417 Review of Hydrologic Applications of Conceptual Models for Precipitation-Runoff Process

Authors: Oluwatosin Olofintoye, Josiah Adeyemo, Gbemileke Shomade

Abstract:

The relationship between rainfall and runoff is an important issue in surface water hydrology; therefore, the understanding and development of accurate rainfall-runoff models and their application in water resources planning, management and operation are of paramount importance in hydrological studies. This paper reviews some of the previous work on rainfall-runoff process modeling. The hydrologic applications of conceptual models and artificial neural networks (ANNs) for precipitation-runoff process modeling are studied. Gradient-based training methods such as error back-propagation (BP), as well as evolutionary algorithms (EAs), are discussed in relation to the training of artificial neural networks, and it is shown that applying EAs to artificial neural network training could be an alternative to other training methods. Therefore, further research to exploit the abundant expert knowledge in the area of artificial intelligence for the solution of hydrologic and water resources planning and management problems is needed.

Keywords: artificial intelligence, artificial neural networks, evolutionary algorithms, gradient training method, rainfall-runoff model

Procedia PDF Downloads 455
21416 Cantilever Secant Pile Constructed in Sand: Numerical Comparative Study and Design Aids – Part II

Authors: Khaled R. Khater

Abstract:

All civil engineering projects include excavation work and therefore need some retaining structures. Cantilever secant pile walls are an economical supporting system up to depths of 5.0 m. The parameters controlling wall tip displacement are the focus of this paper. Two analysis techniques have been investigated and arbitrated: the conventional method and finite element analysis. Accordingly, two computer programs have been used, an Excel sheet and Plaxis-2D. Two soil constitutive models have been used throughout this study: the Mohr-Coulomb model and the isotropic hardening model. Two soil densities have been considered, i.e. loose and dense sand. Ten wall rigidities have been analyzed, covering the range from perfectly flexible to completely rigid walls. Three excavation depths, i.e. 3.0 m, 4.0 m and 5.0 m, were tested to cover the practical range of secant piles. This work provides useful guidance on secant piles to assist designers and specification committees. Finite element analysis with the isotropic hardening model is recommended as the arbiter when two designs conflict. A rational procedure using empirical equations has been suggested to upgrade the conventional method to predict the wall tip displacement 'δ'. A reasonable limit on 'δ' as a function of the excavation depth 'h' has also been suggested. Finally, it has been found that, beyond a certain penetration depth, any further increase does not positively affect the wall tip displacement, i.e. it is over-design and uneconomic.

Keywords: design aids, numerical analysis, secant pile, wall tip displacement

Procedia PDF Downloads 191