Search results for: predicting model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17379

16809 Identification of Dynamic Friction Model for High-Precision Motion Control

Authors: Martin Goubej, Tomas Popule, Alois Krejci

Abstract:

This paper deals with the experimental identification of mechanical systems with nonlinear friction characteristics. The dynamic LuGre friction model is adopted, and a systematic approach to parameter identification of both the linear and nonlinear subsystems is given. The identification procedure consists of three subsequent experiments which deal with the individual parts of the plant dynamics. The proposed method is experimentally verified on an industrial-grade robotic manipulator. Model fidelity is compared with the results achieved with a static friction model.
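
For readers unfamiliar with the LuGre model, the following minimal forward simulation sketches its structure (internal bristle state dynamics plus a Stribeck curve). The parameter values are illustrative assumptions, not the values identified in the paper.

```python
import numpy as np

# Minimal forward simulation of the standard LuGre friction model
# (illustrative parameter values; not the authors' identified plant).
sigma0, sigma1, sigma2 = 1e5, 300.0, 0.4   # bristle stiffness, bristle damping, viscous coefficient
Fc, Fs, vs = 1.0, 1.5, 0.01                # Coulomb level, stiction level, Stribeck velocity

def g(v):
    """Stribeck curve: steady-state friction force level at velocity v."""
    return Fc + (Fs - Fc) * np.exp(-(v / vs) ** 2)

def lugre_friction(v, dt=1e-4):
    """Integrate the bristle state z with forward Euler and return the friction force trace."""
    z, F = 0.0, np.zeros_like(v)
    for k, vk in enumerate(v):
        zdot = vk - sigma0 * abs(vk) / g(vk) * z
        z += zdot * dt
        F[k] = sigma0 * z + sigma1 * zdot + sigma2 * vk
    return F

t = np.arange(0.0, 2.0, 1e-4)
v = 0.02 * np.sin(2 * np.pi * 0.5 * t)      # slow sinusoidal velocity sweep
F = lugre_friction(v)
print(F[:5])
```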

Keywords: mechanical friction, LuGre model, friction identification, motion control

Procedia PDF Downloads 414
16808 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies: it kills an estimated 7 million people every year and is projected to cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (e.g., season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to reduce the inaccuracies, weaknesses, and biases of any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California, using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall model accuracy of 92%. The combined model predicts more accurately than any of the individual models, and it reliably forecasts season-based variations in air quality levels. Top air quality predictor variables were identified by measuring the mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework that leverages multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
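
As an illustration of the averaging step described above, the sketch below trains the three named model families on synthetic data and averages their class probabilities. It is not the authors' pipeline; the feature set and data are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for timing, weather-forecast, and past-pollutant features;
# the real study used Los Angeles weather and EPA measurement data.
X, y = make_classification(n_samples=2000, n_features=12, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=200, random_state=0),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
]
for m in models:
    m.fit(X_tr, y_tr)
    print(type(m).__name__, "accuracy:", accuracy_score(y_te, m.predict(X_te)))

# Ensemble: average the predicted class probabilities of the individual models.
proba = np.mean([m.predict_proba(X_te) for m in models], axis=0)
print("Ensemble accuracy:", accuracy_score(y_te, proba.argmax(axis=1)))
```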

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 130
16807 Genesis of Entrepreneur Business Models in New Ventures

Authors: Arash Najmaei, Jo Rhodes, Peter Lok, Zahra Sadeghinejad

Abstract:

In this article, we explore how a new business model comes into existence in the Australian cloud-computing eco-system. Findings from a multiple case study methodology reveal that, to develop a business model, new ventures adopt a three-phase approach. In the first phase, labelled business model ideation (BMID), various ideas for a viable business model are generated from both the internal and external networks of the entrepreneurial team, and the most viable one is chosen. Strategic consensus and commitment are generated in the second phase, which is a business modelling strategic action phase. We labelled this phase business model strategic commitment (BMSC) because, through commitment and the subsequent actions of executives, resources are pooled, coordinated and allocated to the business model. Three complementary sets of resources shape the business model: managerial (MnRs), marketing (MRs) and technological resources (TRs). The third phase is the market-test phase, where the business model is reified through the delivery of the intended value to customers and the conversion of revenue into profit. We labelled this phase business model actualization (BMAC). Theoretical and managerial implications of these findings are discussed, and several directions for future research are illuminated.

Keywords: entrepreneur business model, high-tech venture, resources, conversion of revenue

Procedia PDF Downloads 448
16806 Parental Drinking and Risky Alcohol Related Behaviors: Predicting Binge Drinking Trajectories and Their Influence on Impaired Driving among College Students

Authors: Shiran Bord, Assaf Oshri, Matthew W. Carlson, Sihong Liu

Abstract:

Background: Alcohol-impaired driving (AID) and binge drinking are major health concerns among college students. Although the link between binge drinking and AID is well established, knowledge regarding binge drinking patterns, the factors influencing binge drinking, and the associations between consumption patterns and alcohol-related risk behaviors is lacking. Aims: To examine heterogeneous trajectories of binge drinking during college and to test factors that might predict class membership as well as class membership outcomes. Methods: Data were obtained from a sample of 1,265 college students (Mage = 18.5, SD = .66) as part of the Longitudinal Study of Violence Against Women (N = 1,265; 59.3% female; 69.2% white). Analyses were completed in three stages. First, a growth curve analysis was conducted to identify trajectories of binge drinking over time. Second, growth curve mixture modeling analyses were conducted to assess unobserved growth trajectories of binge drinking without predictors. Lastly, parental drinking variables were added to the model as predictors of class membership, and AID and being a passenger of a drunk driver were added to the model as outcomes. Results: Three binge drinking trajectories were identified: high-convex, medium-concave and low-increasing. Parental drinking was associated with membership in the high-convex and medium-concave classes. Compared to the low-increasing class, the high-convex and medium-concave classes reported more AID and being a passenger of a drunk driver more frequently. Conclusions: Parental drinking may affect children’s later engagement in AID. Efforts should focus on educating parents about the consequences of parental modeling of alcohol consumption.

Keywords: alcohol impaired driving, alcohol consumption, binge drinking, college students, parental modeling

Procedia PDF Downloads 282
16805 Model of Multi-Criteria Evaluation for Railway Lines

Authors: Juraj Camaj, Martin Kendra, Jaroslav Masek

Abstract:

The paper focuses on the evaluation of railway tracks in Slovakia using a multi-criteria method. The evaluation of railway tracks has important implications for assessing investment in technical equipment and for the allocation of marshalling yards. In the transport model, marshalling yards act as operating centres for their assigned catchment areas. This model is one of the effective ways to meet the development strategy of the European Community's railways. By applying this model in practice, a transport company can guarantee a higher quality of service and can then expect an increase in performance. The model is also applicable to other rail networks. It supplements the theoretical train formation problem with new ways of evaluating the factors that affect the organization of wagon flows.

Keywords: railway track, multi-criteria methods, evaluation, transportation model

Procedia PDF Downloads 470
16804 Research on Coordination Strategies for Coordinating Supply Chain Based on Auction Mechanisms

Authors: Changtong Wang, Lingyun Wei

Abstract:

The combination of auctions and supply chains is of great significance in improving supply chain management systems and enhancing the efficiency of economic and social operations. To address the gap in research on supply chain strategies under auction mechanisms, a 1-N auction model is developed in a complete-information environment, and it is concluded that the two-part contract auction model for retailers can achieve supply chain coordination. The model is validated by applying it to a fresh-cut flower auction scenario with numerical examples, which further supports the validity of the conclusions.

Keywords: auction mechanism, supply chain coordination strategy, fresh cut flowers industry, supply chain management

Procedia PDF Downloads 124
16803 Adaptive Thermal Comfort Model for Air-Conditioned Lecture Halls in Malaysia

Authors: B. T. Chew, S. N. Kazi, A. Amiri

Abstract:

This paper presents an adaptive thermal comfort model study in the tropical country of Malaysia. A number of researchers have been interested in applying the adaptive thermal comfort model to different climates throughout the world, but so far no study has been performed in Malaysia. As part of this research, an adaptive thermal comfort model better suited to hot and humid climates was developed using the results of a large field study covering 178 students in six lecture halls. The relationship between the operative temperature and behavioral adaptations was determined. In the developed adaptive model, the acceptable indoor neutral temperatures lay within the range of 23.9–26.0 °C, with outdoor temperatures ranging between 27.0 and 34.6 °C. The most comfortable temperature for students in the lecture halls was 25.7 °C.
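
The adaptive-comfort relationship described above is essentially a regression of the neutral indoor temperature on the outdoor temperature. A minimal sketch of such a fit is shown below with made-up data points; the study's own measurements and coefficients are not reproduced.

```python
import numpy as np

# Illustrative adaptive-comfort regression: neutral indoor operative temperature
# as a linear function of outdoor temperature, T_neutral = a + b * T_out.
# The data points below are invented for demonstration only.
t_out = np.array([27.0, 28.5, 30.0, 31.2, 32.5, 34.6])      # outdoor temperature, degC
t_neutral = np.array([23.9, 24.3, 24.8, 25.2, 25.6, 26.0])  # observed neutral temperature, degC

b, a = np.polyfit(t_out, t_neutral, 1)                       # slope, intercept
print(f"T_neutral = {a:.2f} + {b:.3f} * T_out")
print("Predicted neutral temperature at 31 degC outdoors:", round(a + b * 31.0, 2))
```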

Keywords: hot and humid, lecture halls, neutral temperature, adaptive thermal comfort model

Procedia PDF Downloads 369
16802 A Model of a Non-expanding Universe

Authors: Yongbai Yin

Abstract:

We propose a non-expanding model of the universe based on the non-changing fine-structure constant and Einstein’s space-time relativity theory. This model consistently explains the redshift, the apparent ‘expansion’, and the age of the universe without introducing the singularity and inflationary issues that arise in the ‘Big Bang’ model. It also offers an interpretation of the unexpected ‘accelerated expansion’ of the universe and of the origin of the mystery of ‘dark matter’. It predicts that the universe began in a ‘cold and peaceful’ rather than an ‘extremely hot’ stage, which is used to explain the microwave background radiation consistently. It predicts mathematically that galaxies could end in black holes, because black holes should have the same environmental conditions as those at the beginning of the universe in this model, paving the way towards a model of cyclic universes that does not violate the first law of thermodynamics.

Keywords: big bang, accelerated expanding universe, dark matter, black holes, microwave background radiation, universe modelling

Procedia PDF Downloads 17
16801 Method of Parameter Calibration for Error Term in Stochastic User Equilibrium Traffic Assignment Model

Authors: Xiang Zhang, David Rey, S. Travis Waller

Abstract:

The Stochastic User Equilibrium (SUE) model is a widely used traffic assignment model in transportation planning and is regarded as more advanced than the Deterministic User Equilibrium (DUE) model. However, the performance of the SUE model depends on its error term parameter. The objective of this paper is to propose a systematic method for determining an appropriate error term parameter value for the SUE model. First, the significance of the parameter is explored through a numerical example. Second, a parameter calibration method is developed based on the logit-based route choice model. The calibration is realized through multiple nonlinear regression, using sequential quadratic programming combined with the least squares method. Finally, a case analysis is conducted to demonstrate the application of the calibration process and to show that the SUE model calibrated by the proposed method performs better than SUE models under other parameter values and than the DUE model.
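
To illustrate the role of the error term (dispersion) parameter in the logit-based route choice model, the sketch below calibrates it by least squares against observed route shares on a single origin-destination pair. This is a toy illustration, not the paper's sequential quadratic programming procedure embedded in a full SUE assignment.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy calibration of the logit dispersion parameter theta so that modelled
# route-choice probabilities match observed route shares (illustrative data only).
costs = np.array([12.0, 14.0, 15.5])            # route travel costs on one OD pair
observed_shares = np.array([0.55, 0.28, 0.17])  # observed share of travellers per route

def logit_shares(theta, c):
    """Multinomial logit route probabilities with dispersion parameter theta."""
    u = np.exp(-theta * c)
    return u / u.sum()

def residuals(theta):
    return logit_shares(theta[0], costs) - observed_shares

fit = least_squares(residuals, x0=[0.1], bounds=(1e-6, 10.0))
print("calibrated theta:", fit.x[0])
print("modelled shares:", logit_shares(fit.x[0], costs))
```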

Keywords: parameter calibration, sequential quadratic programming, stochastic user equilibrium, traffic assignment, transportation planning

Procedia PDF Downloads 302
16800 Model-Based Control for Piezoelectric-Actuated Systems Using Inverse Prandtl-Ishlinskii Model and Particle Swarm Optimization

Authors: Jin-Wei Liang, Hung-Yi Chen, Lung Lin

Abstract:

In this paper, a feedforward controller is designed to eliminate the nonlinear hysteresis behavior of a system driven by a piezoelectric stack actuator (PSA). The control design is based on an inverse Prandtl-Ishlinskii (P-I) hysteresis model identified using the particle swarm optimization (PSO) technique. Based on the identified P-I model, both the inverse P-I hysteresis model and the feedforward controller can be determined. Experimental results obtained using the inverse P-I feedforward control are compared with their counterparts using hysteresis estimates obtained from an identified Bouc-Wen model. The effectiveness of the proposed feedforward control scheme is demonstrated. To improve control performance, feedback compensation using a traditional PID scheme is integrated with the feedforward controller.

Keywords: Bouc-Wen hysteresis model, particle swarm optimization, Prandtl-Ishlinskii model, automation engineering

Procedia PDF Downloads 515
16799 Yang-Lee Edge Singularity of the Infinite-Range Ising Model

Authors: Seung-Yeon Kim

Abstract:

The Ising model, consisting of magnetic spins, is the simplest system showing phase transitions and critical phenomena at finite temperatures, and it has played a central role in our understanding of both. The Ising model also describes gas-liquid phase transitions accurately. However, the Ising model in a nonzero magnetic field has remained one of the most intriguing and outstanding unsolved problems. We study analytically the partition function zeros in the complex magnetic-field plane and the Yang-Lee edge singularity of the infinite-range Ising model in an external magnetic field. In addition, we compare the Yang-Lee edge singularity of the infinite-range Ising model with that of the square-lattice Ising model in an external magnetic field.
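
A small numerical illustration of partition function zeros for the infinite-range (Curie-Weiss) ferromagnet is sketched below: the partition function is written as a polynomial in the fugacity x = exp(2βh), and its roots are located with numpy. This is only an illustrative finite-N computation, not the paper's analytical treatment.

```python
import numpy as np
from scipy.special import comb

# Partition-function zeros in the complex fugacity x = exp(2*beta*h) for the
# infinite-range (Curie-Weiss) Ising ferromagnet with N spins (illustrative values).
N, beta, J = 40, 0.8, 1.0            # above the mean-field critical point beta*J = 1
k = np.arange(N + 1)
m = 2 * k - N                        # total magnetisation when k spins point up
coeffs = comb(N, k) * np.exp(beta * J * m**2 / (2 * N))

# Z(h) is proportional to sum_k coeffs[k] * x**k; numpy.roots wants highest degree first.
zeros = np.roots(coeffs[::-1])
print("max deviation of |x| from the unit circle:", np.abs(np.abs(zeros) - 1).max())

# The Yang-Lee edge is approximated by the zero closest to the positive real axis.
edge = zeros[np.argmin(np.abs(np.angle(zeros)))]
print("edge zero:", edge, "angle (rad):", np.angle(edge))
```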

Keywords: Ising ferromagnet, magnetic field, partition function zeros, Yang-Lee edge singularity

Procedia PDF Downloads 742
16798 Digital Twin for a Floating Solar Energy System with Experimental Data Mining and AI Modelling

Authors: Danlei Yang, Luofeng Huang

Abstract:

The integration of digital twin technology with renewable energy systems offers an innovative approach to predicting and optimising performance throughout the entire lifecycle. A digital twin is a continuously updated virtual replica of a real-world entity, synchronised with data from its physical counterpart and environment. Many digital twin companies today claim to have mature digital twin products, but their focus is primarily on equipment visualisation. However, the core of a digital twin should be its model, which can mirror, shadow, and thread with the real-world entity, and this remains underdeveloped. For a floating solar energy system, a digital twin model can be defined in three aspects: (a) the physical floating solar energy system along with environmental factors such as solar irradiance and wave dynamics, (b) a digital model powered by artificial intelligence (AI) algorithms, and (c) the integration of real system data with the AI-driven model and a user interface. The experimental setup for the floating solar energy system is designed to replicate the real-ocean conditions of floating solar installations within a controlled laboratory environment. The system consists of a water tank that simulates an aquatic surface, where a floating catamaran structure supports a solar panel. The solar simulator is set up in three positions: one directly above and two inclined at a 45° angle in front of and behind the solar panel. This arrangement allows the simulation of different sun angles, such as sunrise, midday, and sunset. The solar simulator is positioned 400 mm away from the solar panel to maintain consistent solar irradiance on its surface. Stability of the floating structure is achieved through ropes attached to anchors at the bottom of the tank, which simulate the mooring systems used in real-world floating solar applications. The floating solar energy system's sensor setup includes various devices to monitor environmental and operational parameters. An irradiance sensor measures solar irradiance on the photovoltaic (PV) panel. Temperature sensors monitor ambient air and water temperatures, as well as the PV panel temperature. Wave gauges measure wave height, while load cells capture mooring force. Inclinometers and ultrasonic sensors record the heave and pitch amplitudes of the floating system’s motions. An electric load measures the voltage and current output from the solar panel. All sensors collect data simultaneously. Artificial neural network (ANN) algorithms are central to developing the digital model, which processes historical and real-time data, identifies patterns, and predicts the system’s performance in real time. The data collected from the various sensors are partly used to train the digital model, with the remaining data reserved for validation and testing. The digital twin model combines the experimental setup with the ANN model, enabling monitoring, analysis, and prediction of the floating solar energy system's operation. The digital model mirrors the functionality of the physical setup, running in sync with the experiment to provide real-time insights and predictions. It provides useful industrial benefits, such as informing maintenance plans as well as design and control strategies for optimal energy efficiency. In the long term, this digital twin will help improve the overall solar energy yield while minimising operational costs and risks.
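
A minimal sketch of the ANN component of such a digital twin is given below: a multilayer perceptron maps a few sensor channels to panel power and is validated on held-out data. The data are synthetic placeholders, not the laboratory measurements described above, and the relationship used to generate them is an assumption for demonstration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

# Synthetic stand-in for sensor data: irradiance, panel temperature, wave height, pitch -> power.
rng = np.random.default_rng(0)
n = 3000
irr = rng.uniform(100, 1000, n)                       # W/m^2
t_panel = 25 + 0.03 * irr + rng.normal(0, 1, n)       # degC
wave_h = rng.uniform(0, 0.1, n)                       # m
pitch = rng.normal(0, 2, n)                           # deg
power = 0.18 * irr * (1 - 0.004 * (t_panel - 25)) * (1 - 0.5 * wave_h) + rng.normal(0, 3, n)

X = np.column_stack([irr, t_panel, wave_h, pitch])
X_tr, X_te, y_tr, y_te = train_test_split(X, power, test_size=0.3, random_state=0)

# ANN surrogate: part of the data trains the model, the rest validates it.
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=3000, random_state=0))
ann.fit(X_tr, y_tr)
print("validation R^2:", r2_score(y_te, ann.predict(X_te)))
```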

Keywords: digital twin, floating solar energy system, experiment setup, artificial intelligence

Procedia PDF Downloads 14
16797 Qualitative Modeling of Transforming Growth Factor Beta-Associated Biological Regulatory Network: Insight into Renal Fibrosis

Authors: Ayesha Waqar Khan, Mariam Altaf, Jamil Ahmad, Shaheen Shahzad

Abstract:

Kidney fibrosis is an anticipated outcome of possibly all types of progressive chronic kidney disease (CKD). The epithelial-mesenchymal transition (EMT) signaling pathway is responsible for the production of matrix-producing fibroblasts and myofibroblasts in the diseased kidney. In this study, a discrete model of TGF-beta (transforming growth factor beta) and CTGF (connective tissue growth factor) was constructed using the Rene Thomas formalism to investigate renal fibrosis turnover. The kinetic logic proposed by Rene Thomas is a renowned approach for the modeling of Biological Regulatory Networks (BRNs). This modeling approach uses a set of constraints which represent the dynamics of the BRN, thus allowing the pathway to be analyzed and the critical trajectories leading to a normal or diseased state to be predicted. The molecular connection between TGF-beta, Smad2/3 (transcription factor) phosphorylation and CTGF is modeled using GenoTech. The order of the BRN is CTGF, TGF-B, and SMAD3, respectively. The predicted cycle depicts activation of TGF-B (TGF-β) via cleavage of its own pro-domain (0,1,0) and presentation to the TGFR-II receptor, phosphorylating SMAD3 (Smad2/3) in the state (0,1,1). Later, TGF-B is turned off (0,0,1), thereby activating SMAD3, which further stimulates the expression of CTGF in the state (1,0,1) and itself turns off in (1,0,0). Elevated CTGF expression reactivates TGF-B (1,1,0), and the cycle continues. The predicted model generated one cycle and two steady states. The cyclic behavior in this study represents the diseased state, in which all three proteins contribute to renal fibrosis. The proposed model is in accordance with experimental findings on the diseased state. An extended cycle results in enhanced CTGF expression through Smad2/3 and Smad4 translocation into the nucleus. The results suggest that the system converges towards organ fibrogenesis if CTGF remains constitutively active along with Smad2/3 and Smad4, which play an important role in kidney fibrosis. Therefore, modeling the regulatory pathways of kidney fibrosis will support the development of therapeutic tools and real-world applications such as predictive and preventive medicine.

Keywords: CTGF, renal fibrosis signaling pathway, system biology, qualitative modeling

Procedia PDF Downloads 181
16796 Validation of a Fluid-Structure Interaction Model of an Aortic Dissection versus a Bench Top Model

Authors: K. Khanafer

Abstract:

The aim of this investigation was to validate a fluid-structure interaction (FSI) model of type B aortic dissection against our experimental results from a bench-top model. Another objective was to study the relationship between the size of a septectomy, which increases the outflow of the false lumen, and its effect on the pressure differential between the true lumen and the false lumen. FSI analysis based on Galerkin’s formulation was used in this investigation to study the flow pattern and hemodynamics within a flexible type B aortic dissection model, using boundary conditions from our experimental data. The numerical results of our model were verified against the experimental data for various tear sizes and locations. Thus, CFD tools have a potential role in evaluating different scenarios and aortic dissection configurations.

Keywords: aortic dissection, fluid-structure interaction, in vitro model, numerical

Procedia PDF Downloads 273
16795 Using Simulation Modeling Approach to Predict USMLE Steps 1 and 2 Performances

Authors: Chau-Kuang Chen, John Hughes, Jr., A. Dexter Samuels

Abstract:

Prediction models for United States Medical Licensing Examination (USMLE) Steps 1 and 2 performance were constructed using a Monte Carlo simulation modeling approach via linear regression. The purpose of this study was to build robust simulation models that accurately identify the most important predictors and yield valid range estimates of Step 1 and Step 2 scores. The simulation modeling approach was deemed an effective way of predicting student performance on licensure examinations. Sensitivity analysis (a.k.a. what-if analysis) in the simulation models was used to predict how the Step 1 and Step 2 scores are affected by changes in the National Board of Medical Examiners (NBME) Basic Science Subject Board scores. In addition, the study results indicated that the Medical College Admission Test (MCAT) Verbal Reasoning score and the Step 1 score were significant predictors of Step 2 performance. Hence, institutions could screen qualified student applicants for interviews and document the effectiveness of their basic science education programs based on the simulation results.
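
The combination of linear regression with Monte Carlo sampling can be illustrated as follows: fit the regression, then sample plausible predictor values and residual noise to obtain a range estimate for the predicted score. All data and coefficients below are synthetic, not the study's fitted model.

```python
import numpy as np

# Illustrative Monte Carlo range estimate around a linear regression prediction:
# simulated NBME subject scores and an MCAT score predict a Step score.
rng = np.random.default_rng(1)
n = 500
nbme = rng.normal(75, 8, n)
mcat_vr = rng.normal(9.5, 1.5, n)
step1 = 60 + 1.6 * nbme + 2.0 * mcat_vr + rng.normal(0, 8, n)   # synthetic "observed" scores

X = np.column_stack([np.ones(n), nbme, mcat_vr])
beta, *_ = np.linalg.lstsq(X, step1, rcond=None)
resid_sd = np.std(step1 - X @ beta, ddof=X.shape[1])

# What-if analysis: sample plausible inputs for one applicant plus residual noise,
# then report a central 95% range for the predicted score.
draws = 100_000
nbme_s = rng.normal(80, 3, draws)      # assumed subject-score scenario
mcat_s = rng.normal(10, 0.5, draws)    # assumed MCAT scenario
pred = beta[0] + beta[1] * nbme_s + beta[2] * mcat_s + rng.normal(0, resid_sd, draws)
print("predicted score range (2.5th-97.5th percentile):", np.percentile(pred, [2.5, 97.5]))
```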

Keywords: prediction model, sensitivity analysis, simulation method, USMLE

Procedia PDF Downloads 340
16794 Oil Reservoir Asphaltene Precipitation Estimation during CO2 Injection

Authors: I. Alhajri, G. Zahedi, R. Alazmi, A. Akbari

Abstract:

In this paper, an artificial neural network (ANN) was developed to predict asphaltene precipitation (AP) during the injection of carbon dioxide into crude oil reservoirs. Experimental data from six different oil fields were collected. Seventy percent of the data was used to develop the ANN model, and different ANN architectures were examined. A network trained with the Levenberg-Marquardt (trainlm) algorithm was found to be the best network for estimating the AP. To check the validity of the proposed model, the model was used to predict the AP for the remaining thirty percent of the data, which had not been used in training. The mean square error (MSE) of the prediction was 0.0018, which confirms the excellent prediction capability of the proposed model. In the second part of this study, the ANN model predictions were compared with modified Hirschberg model predictions. The ANN was found to provide more accurate estimates than the modified Hirschberg model. Finally, the proposed model was employed to examine the effect of different operating parameters on the AP during gas injection. It was found that the AP is most sensitive to the reservoir temperature. Furthermore, increasing the carbon dioxide concentration in the liquid phase increases the AP.

Keywords: artificial neural network, asphaltene, CO2 injection, Hirschberg model, oil reservoirs

Procedia PDF Downloads 367
16793 Toward a Risk Assessment Model Based on Multi-Agent System for Cloud Consumer

Authors: Saadia Drissi

Abstract:

Cloud computing is an innovative paradigm that introduces several technological changes, resulting in new ways for cloud providers to deliver their services to cloud consumers, particularly in terms of security risk assessment. Adapting current risk assessment tools to cloud computing is therefore a very difficult task, because several of its characteristics challenge the effectiveness of existing risk assessment approaches. As a consequence, there is a need for a risk assessment model adapted to cloud computing. This paper proposes a new risk assessment model based on a multi-agent system and the AHP model as fundamental steps towards the development of a flexible risk assessment approach for cloud consumers.
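
The AHP step of such a model can be illustrated with a small priority-vector computation from a pairwise comparison matrix, including Saaty's consistency check. The judgements below are assumed for demonstration; the multi-agent architecture itself is not shown.

```python
import numpy as np

# Minimal AHP step: derive risk-criterion weights from a pairwise comparison matrix
# via its principal eigenvector and check consistency (illustrative judgements only).
A = np.array([
    [1.0, 3.0, 5.0],    # e.g. hypothetical criteria: confidentiality vs availability vs compliance
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                     # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)            # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]             # Saaty's random index
print("criterion weights:", np.round(weights, 3))
print("consistency ratio:", round(ci / ri, 3))  # below 0.1 is conventionally acceptable
```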

Keywords: cloud computing, risk assessment model, multi-agent system, AHP model, cloud consumer

Procedia PDF Downloads 546
16792 Stability Analysis of Rabies Model with Vaccination Effect and Culling in Dogs

Authors: Eti Dwi Wiraningsih, Folashade Agusto, Lina Aryati, Syamsuddin Toaha, Suzanne Lenhart, Widodo, Willy Govaerts

Abstract:

This paper considers a deterministic model for the transmission dynamics of the rabies virus in the wild dog-domestic dog-human zoonotic cycle. The effects of vaccination and culling in dogs are included in the model, and its stability is analysed to obtain the basic reproduction number. We use the next-generation matrix method and the Routh-Hurwitz test to analyse the stability of the disease-free equilibrium and the endemic equilibrium of this model.
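
As a reminder of how the next-generation matrix method yields the basic reproduction number, the sketch below applies it to a simple exposed-infectious sub-model with sympy. The paper's full wild dog-domestic dog-human system is not reproduced here.

```python
import sympy as sp

# Next-generation matrix illustration on a simple sub-model (not the paper's system):
# dE/dt = beta*I - sigma*E,  dI/dt = sigma*E - gamma*I, with the susceptible pool normalised to 1.
beta, sigma, gamma = sp.symbols("beta sigma gamma", positive=True)

F = sp.Matrix([[0, beta], [0, 0]])              # rates of new infections (into E)
V = sp.Matrix([[sigma, 0], [-sigma, gamma]])    # transitions out of / between compartments

K = F * V.inv()                                 # next-generation matrix F * V^{-1}
print("next-generation matrix:", K)
# R0 is the spectral radius of K: its nonzero eigenvalue, here beta/gamma.
print("eigenvalues of K:", list(K.eigenvals().keys()))
```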

Keywords: stability analysis, rabies model, vaccination effect, culling in dogs

Procedia PDF Downloads 631
16791 The DC Behavioural Electrothermal Model of Silicon Carbide Power MOSFETs under SPICE

Authors: Lakrim Abderrazak, Tahri Driss

Abstract:

This paper presents a new behavioural electrothermal model of a power silicon carbide (SiC) MOSFET under SPICE. The model is based on the level-1 MOS model of SPICE, in which phenomena such as the drain leakage current IDSS, the on-state resistance RDSon, the gate threshold voltage VGSth, the transconductance (gfs), the I-V characteristics of the body diode, temperature dependence, and self-heating are included and represented using the behavioural blocks (ABM, Analog Behavioural Models) of the SPICE library. This ultimately makes the model flexible, and it can easily be integrated into the various SPICE-based simulation software packages. The internal junction temperature of the component is calculated from the thermal model, using the electric power dissipated inside the device and its thermal impedance in the form of a localized Foster canonical network. The model parameters are extracted from manufacturers' data (datasheet curves) using polynomial interpolation with the simulated annealing (SA) method and weighted least squares (WLS). The model takes into account the various important phenomena within the transistor. The effectiveness of the presented model has been verified by SPICE simulation results as well as by measured data for the SiC MOS transistor C2M0025120D from CREE (1200 V, 90 A).

Keywords: SiC power MOSFET, DC electro-thermal model, ABM Spice library, SPICE modelling, behavioural model, C2M0025120D CREE

Procedia PDF Downloads 581
16790 A Study for the Effect of Fire Initiated Location on Evacuation Success Rate

Authors: Jin A Ryu, Hee Sun Kim

Abstract:

As the number of fire accidents gradually rises, many studies on evacuation have been reported. Previous studies have mostly focused on evaluating the safety of evacuation and the risk of fire in particular buildings. However, the effects of various parameters on evacuation have rarely been studied. Therefore, this paper aims at observing evacuation time under the effect of the fire initiation location. In this study, evacuation simulations are performed on a 5-floor building located in Seoul, South Korea, using the commercial program Fire Dynamics Simulator with Evacuation (FDS+Evac). Only the fourth and fifth floors are modeled, with the assumption that the fire starts in a room located on the fourth floor. The parameter varied in the evacuation simulations is the location of fire initiation, in order to observe evacuation time and safety. Results show that the closer the fire initiation location is to the exit, the more time is needed to evacuate. The case with the fire initiation location nearest to the exit has the lowest ratio of successful occupants to total occupants. In addition, for safety evaluation, the evacuation time calculated from the computer simulation model is compared with the tolerable evacuation time according to the Japanese code. As a result, all cases are completed within the tolerable evacuation time. This study allows predicting evacuation time under various fire conditions and can be used to evaluate the evacuation appropriateness and fire safety of buildings.

Keywords: fire simulation, evacuation simulation, temperature, evacuation safety

Procedia PDF Downloads 352
16789 On Hyperbolic Gompertz Growth Model (HGGM)

Authors: S. O. Oyamakin, A. U. Chukwu

Abstract:

We propose a hyperbolic Gompertz growth model (HGGM), developed by introducing a stabilizing parameter θ, via the hyperbolic sine function, into the classical Gompertz growth equation. The resulting integral solution, obtained deterministically, was reprogrammed into a statistical model and used in modeling the height and diameter of pines (Pinus caribaea). Its predictive ability was compared with that of the classical Gompertz growth model, using goodness-of-fit tests and model selection criteria; the hyperbolic approach mimics the natural variability of height/diameter increment with respect to age and therefore provides more realistic height/diameter predictions. The Kolmogorov-Smirnov and Shapiro-Wilk tests were used to check the compliance of the error term with the normality assumption, while the independence of the error term was tested using the runs test. The mean function of top height/Dbh over age predicted the observed values of top height/Dbh more closely under the hyperbolic Gompertz growth model than under its source model (the classical Gompertz growth model), and the results for R2, adjusted R2, MSE, and AIC confirmed the predictive power of the hyperbolic Gompertz growth model over its source model.
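
For context, the classical Gompertz curve that serves as the source model can be fitted to height-age data as sketched below. The hyperbolic-sine modification introduced in the paper is not reproduced here, and the data points are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

# Baseline sketch: fit the classical Gompertz growth curve H(t) = a*exp(-b*exp(-c*t))
# to height-age data (invented values; not the Pinus caribaea measurements of the study).
def gompertz(t, a, b, c):
    return a * np.exp(-b * np.exp(-c * t))

age = np.array([2, 4, 6, 8, 10, 12, 15, 20, 25], dtype=float)              # years
height = np.array([1.8, 4.0, 6.5, 9.0, 11.2, 13.0, 15.5, 18.2, 19.6])      # metres

params, _ = curve_fit(gompertz, age, height, p0=[25.0, 3.0, 0.1])
a, b, c = params
pred = gompertz(age, *params)
ss_res = np.sum((height - pred) ** 2)
ss_tot = np.sum((height - height.mean()) ** 2)
print(f"a={a:.2f}, b={b:.2f}, c={c:.3f}, R^2={1 - ss_res / ss_tot:.4f}")
```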

Keywords: height, Dbh, forest, Pinus caribaea, hyperbolic, Gompertz

Procedia PDF Downloads 443
16788 The Role of Trust in Intention to Use Prescribed and Non-prescribed Connected Devices

Authors: Jean-michel Sahut, Lubica Hikkerova, Wissal Ben Arfi

Abstract:

The Internet of Things (IoT) has emerged over the last few decades in many fields, and healthcare can significantly benefit from it. This study aims to examine the factors influencing the adoption of IoT in eHealth. To do so, an innovative framework has been developed which applies both the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT) model and builds on them by analyzing trust and perceived-risk dimensions to predict the intention to use IoT in eHealth. In terms of methodology, partial least squares structural equation modelling (PLS-SEM) was carried out on a sample of 267 French users. The findings of this research support the significant positive effect of the constructs set out in the TAM (perceived ease of use) on predicting behavioral intention, adding to the effects identified for the UTAUT variables. This research also demonstrates that perceived risk and trust are significant factors in models examining behavioral intentions to use IoT: perceived risk, enhanced by trust, has a significant effect on patients’ behavioral intentions. Moreover, the results highlight the key role of prescription as a moderator of IoT adoption in eHealth. Depending on whether or not an individual has a prescription to use connected devices, ease of use has a stronger impact on adoption, while trust has a negative impact on adoption for users without a prescription. In accordance with the empirical results, several practical implications can be proposed. All connected devices applied in a medical context should be divided into groups according to their functionality: whether they are essential for the patient’s health and whether they require a prescription or not. Devices used with a prescription are accepted easily because the intention to use them is moderated by medical trust (discussed above). For users without a prescription, ease of use is a more significant factor than for users who have a prescription. This suggests that connected eHealth devices and online healthcare systems currently have to take this factor into account to better meet the needs and expectations of end-users.

Keywords: Internet of Things, healthcare, trust, consumer acceptance

Procedia PDF Downloads 146
16787 An Automatic Model Transformation Methodology Based on Semantic and Syntactic Comparisons and the Granularity Issue Involved

Authors: Tiexin Wang, Sebastien Truptil, Frederick Benaben

Abstract:

Model transformation, as a pivotal aspect of model-driven engineering, is attracting more and more attention from both researchers and practitioners. Many domains (enterprise engineering, software engineering, knowledge engineering, etc.) use model transformation principles and practices to address their domain-specific problems; furthermore, model transformation can also be used to bridge the gap between different domains by sharing and exchanging knowledge. Since model transformation is widely used, new requirements have emerged: to define the transformation process effectively and efficiently, and to reduce the manual effort involved. This paper presents an automatic model transformation methodology based on semantic and syntactic comparisons and focuses particularly on the granularity issue that exists in the transformation process. Compared with traditional model transformation methodologies, this methodology serves a general, cross-domain purpose. Semantic and syntactic checking measurements are combined into a refined transformation process, which solves the granularity issue. Moreover, the semantic and syntactic comparisons are supported by a software tool, replacing manual effort.

Keywords: automatic model transformation, granularity issue, model-driven engineering, semantic and syntactic comparisons

Procedia PDF Downloads 398
16786 Partial Differential Equation-Based Modeling of Brain Response to Stimuli

Authors: Razieh Khalafi

Abstract:

The brain is the information processing centre of the human body. Stimuli in the form of information are transferred to the brain, and the brain then decides how to respond to them. In this research, we propose a new partial differential equation which analyses EEG signals and establishes a relationship between the incoming stimuli and the brain's response to them. In order to test the proposed model, a set of external stimuli was applied to the model, and the model's outputs were checked against real EEG data. The results show that the model reproduces the EEG signal well. The proposed model is useful not only for modelling the EEG signal in the case of external stimuli but also for modelling the brain response in the case of internal stimuli.

Keywords: brain, stimuli, partial differential equation, response, EEG signal

Procedia PDF Downloads 555
16785 MPC of Single Phase Inverter for PV System

Authors: Irtaza M. Syed, Kaamran Raahemifar

Abstract:

This paper presents model predictive control (MPC) of a utility-interactive (UI) single-phase inverter (SPI) for a photovoltaic (PV) system at the residential/distribution level. The proposed model uses a single-phase phase-locked loop (PLL) to synchronize the SPI with the grid and performs MPC in a dq reference frame. The SPI model consists of a boost converter (BC), maximum power point tracking (MPPT) control, and a full-bridge (FB) voltage source inverter (VSI). No PI regulators need to be tuned, and no carrier or modulating waves are required to produce the switching sequence. Instead, the operational model of the VSI is used to synthesize the sinusoidal current and track the reference. The model is validated using a 3 kW PV system at the input of the UI-SPI in Matlab/Simulink. The implementation and results demonstrate the simplicity, accuracy, and reliability of the model.
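
A stripped-down finite-control-set MPC current loop for a single-phase full-bridge inverter with an L filter is sketched below to illustrate the idea of predicting and selecting switching states without PI regulators or carriers. Parameters are assumed, and the paper's dq-frame formulation with PLL, boost stage and MPPT is not reproduced.

```python
import numpy as np

# Toy finite-control-set MPC current loop for a single-phase full-bridge inverter
# with an L filter (assumed parameters, illustrative only).
Vdc, L, R, Ts = 400.0, 5e-3, 0.1, 50e-6      # DC link, filter inductance/resistance, sampling period
f_grid, I_ref_amp = 50.0, 10.0
levels = np.array([-Vdc, 0.0, Vdc])          # candidate full-bridge output voltages

steps = int(0.04 / Ts)                       # simulate two grid cycles
i, i_log = 0.0, np.zeros(steps)
for k in range(steps):
    t = k * Ts
    v_grid = 325.0 * np.sin(2 * np.pi * f_grid * t)
    i_ref = I_ref_amp * np.sin(2 * np.pi * f_grid * t)   # reference in phase with the grid
    # Predict the next-step current for each candidate voltage (forward Euler model of the L filter)
    i_pred = i + (Ts / L) * (levels - v_grid - R * i)
    # Apply the candidate that minimises the current tracking error
    best = np.argmin(np.abs(i_ref - i_pred))
    i = i_pred[best]
    i_log[k] = i

t_axis = Ts * np.arange(steps)
i_ref_log = I_ref_amp * np.sin(2 * np.pi * f_grid * t_axis)
print("RMS tracking error (A):", np.sqrt(np.mean((i_log - i_ref_log) ** 2)))
```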

Keywords: phase locked loop, voltage source inverter, single phase inverter, model predictive control, Matlab/Simulink

Procedia PDF Downloads 534
16784 Identifying Diabetic Retinopathy Complication by Predictive Techniques in Indian Type 2 Diabetes Mellitus Patients

Authors: Faiz N. K. Yusufi, Aquil Ahmed, Jamal Ahmad

Abstract:

Predicting the risk of diabetic retinopathy (DR) in Indian type 2 diabetes patients is immensely necessary. India is the second largest country after China in terms of the number of diabetic patients, yet to the best of our knowledge not a single risk score for complications has ever been investigated. Diabetic retinopathy is a serious complication and is the leading reason for visual impairment across countries. Any type or form of DR has been taken as the event of interest, be it mild, background, or grade I, II, III, or IV DR. A sample was determined and randomly collected from the Rajiv Gandhi Centre for Diabetes and Endocrinology, J.N.M.C., A.M.U., Aligarh, India. The collected variables include patient data such as sex, age, height, weight, body mass index (BMI), blood sugar fasting (BSF), post prandial sugar (PP), glycosylated haemoglobin (HbA1c), diastolic blood pressure (DBP), systolic blood pressure (SBP), smoking, alcohol habits, total cholesterol (TC), triglycerides (TG), high density lipoprotein (HDL), low density lipoprotein (LDL), very low density lipoprotein (VLDL), physical activity, duration of diabetes, diet control, history of antihypertensive drug treatment, family history of diabetes, waist circumference, hip circumference, medications, central obesity and history of DR. Cox proportional hazards regression is used to design risk scores for the prediction of retinopathy. Model calibration and discrimination are assessed with the Hosmer-Lemeshow test and the area under the receiver operating characteristic (ROC) curve. Overfitting and underfitting of the model are checked by applying regularization techniques, and the best method is selected among ridge, lasso and elastic net regression. The optimal cut-off point is chosen by Youden’s index. The five-year probability of DR is predicted both by the survival function and by a two-state Markov chain model, and the better technique is identified. The risk scores developed can be applied by doctors and by patients themselves for self-evaluation. Furthermore, the five-year probabilities can be applied to forecast and manage the condition of patients. This provides immense benefit in the real-world application of DR prediction in T2DM.
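
Two of the steps described above, fitting a Cox proportional hazards model and choosing a cut-off with Youden's index, can be sketched as follows on synthetic data, assuming the lifelines and scikit-learn packages. The study's actual variables, data and coefficients are not reproduced.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.metrics import roc_curve

# Synthetic stand-in data: three covariates, time to retinopathy, and censoring.
rng = np.random.default_rng(0)
n = 800
df = pd.DataFrame({
    "hba1c": rng.normal(8.0, 1.5, n),
    "duration": rng.normal(9.0, 5.0, n).clip(0),     # years with diabetes
    "sbp": rng.normal(135, 15, n),
})
risk = 0.35 * (df["hba1c"] - 8) + 0.08 * (df["duration"] - 9)
event_time = rng.exponential(8 * np.exp(-risk))      # shorter times for higher risk
censor_time = rng.exponential(10, n)
df["time"] = np.minimum(event_time, censor_time)
df["event"] = (event_time <= censor_time).astype(int)

# Cox proportional hazards fit.
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])

# Youden's index on the risk score against observed DR events (cut-off selection illustration).
score = cph.predict_partial_hazard(df[["hba1c", "duration", "sbp"]])
fpr, tpr, thr = roc_curve(df["event"].values, score)
print("optimal cut-off (Youden):", thr[np.argmax(tpr - fpr)])
```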

Keywords: Cox proportional hazard regression, diabetic retinopathy, ROC curve, type 2 diabetes mellitus

Procedia PDF Downloads 186
16783 Forecast Financial Bubbles: Multidimensional Phenomenon

Authors: Zouari Ezzeddine, Ghraieb Ikram

Abstract:

Drawing on results from the academic literature that point to the limitations of previous studies, this article shows why the prediction of financial bubbles is a multidimensional problem. A new modelling framework for predicting financial bubbles is proposed, linking a set of variables spread over several dimensions, which gives the problem its multidimensional character. It takes into account the preferences of financial actors. A multicriteria anticipation of the appearance of bubbles in international financial markets helps in guarding against a possible crisis.

Keywords: classical measures, predictions, financial bubbles, multidimensional, artificial neural networks

Procedia PDF Downloads 580
16782 Explaining E-Learning Systems Usage in Higher Education Institutions: UTAUT Model

Authors: Muneer Abbad

Abstract:

This research explains e-learning usage at a university in Jordan. The Unified Theory of Acceptance and Use of Technology (UTAUT) model has been used as the base model to explain this usage. UTAUT is a model of individual acceptance compiled mainly from different models of technology acceptance. This research is the initial part of a full explanation of the users' acceptance model, which uses the Structural Equation Modelling (SEM) method to explain users' acceptance of e-learning systems based on the UTAUT model. In this part, data have been collected and prepared for further analysis. The main factors of the UTAUT model have been tested as separate factors using exploratory factor analysis (EFA). The second phase will be confirmatory factor analysis (CFA) and SEM to explain users' acceptance of e-learning systems.

Keywords: e-learning, moodle, adoption, Unified Theory of Acceptance and Use of Technology (UTAUT)

Procedia PDF Downloads 410
16781 Levy Model for Commodity Pricing

Authors: V. Benedico, C. Anacleto, A. Bearzi, L. Brice, V. Delahaye

Abstract:

The aim of the present paper is to construct an affordable and reliable model of commodity prices, based on a recalculation of cost through time, which allows potential risks to be visualized and, thus, more appropriate decisions to be taken regarding forecasts. Attention has been focused on the Lévy model, which is more reliable and realistic than the classical Gaussian random model because it takes into consideration the abrupt jumps observed in cases of sudden price variation. In application to the energy trading sector, where it has never been used before, the equations corresponding to the Lévy model have been written for electricity pricing in the European market. Parameters have been set in order to predict and simulate the price and its evolution through time with remarkable accuracy. As predicted by the Lévy model, the results show significant spikes which reach unconventional levels, contrary to the currently used Brownian model.
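
A minimal jump-diffusion simulation, a simple Lévy-type process, illustrates the price spikes that a pure Brownian model cannot produce. The parameters below are illustrative, not the paper's calibrated electricity-market values, and the mean reversion that electricity-price models typically add is omitted.

```python
import numpy as np

# Jump-diffusion price path: Brownian diffusion plus Poisson-arriving lognormal jumps.
# Parameters are illustrative assumptions only.
rng = np.random.default_rng(42)
S0, mu, sigma = 60.0, 0.0, 0.3               # starting price (EUR/MWh), drift, diffusive volatility
lam, jump_mu, jump_sigma = 12.0, 0.8, 0.3    # ~12 jumps/year, jump sizes in log-returns
T, n = 1.0, 365
dt = T / n

log_s = np.log(S0) + np.zeros(n + 1)
for k in range(n):
    diffusion = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.normal()
    n_jumps = rng.poisson(lam * dt)
    jumps = rng.normal(jump_mu, jump_sigma, n_jumps).sum()   # sum of jump log-returns this step
    log_s[k + 1] = log_s[k] + diffusion + jumps

prices = np.exp(log_s)
print("max/median price ratio:", round(prices.max() / np.median(prices), 2))
```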

Keywords: commodity pricing, Lévy Model, price spikes, electricity market

Procedia PDF Downloads 430
16780 Modelling Impacts of Global Financial Crises on Stock Volatility of Nigeria Banks

Authors: Maruf Ariyo Raheem, Patrick Oseloka Ezepue

Abstract:

This research aimed at determining the most appropriate heteroskedastic model for predicting the volatility of 10 major Nigerian banks: Access, United Bank for Africa (UBA), Guaranty Trust, Skye, Diamond, Fidelity, Sterling, Union, ETI and Zenith, using the daily closing stock prices of each bank from 2004 to 2014. The models employed include ARCH(1), GARCH(1,1), EGARCH(1,1) and TARCH(1,1). The results show that all the banks' returns are highly leptokurtic, significantly skewed and thus non-normal across the four periods, except for Fidelity bank during the financial crisis; these findings are similar to those for other global markets. There is also strong evidence for the presence of heteroscedasticity, and volatility persistence during the crisis is higher than before the crisis across the 10 banks, with UBA taking the lead at about 11 times higher during the crisis. Findings further revealed that asymmetric GARCH models became dominant especially during the financial crisis and after the crisis, when the second round of reforms was introduced into the banking industry by the Central Bank of Nigeria (CBN). Generally, one could say that Nigerian bank returns are volatility persistent during and after the crisis and are characterised by leverage effects of negative and positive shocks during these periods.
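
The model comparison described above can be sketched with the arch package on simulated returns, as shown below; the banks' actual price series are not reproduced, and volatility persistence would be read from the fitted GARCH coefficients (alpha + beta).

```python
import numpy as np
from arch import arch_model

# Sketch of the symmetric vs asymmetric model comparison on simulated heavy-tailed
# daily percentage returns (a stand-in for the banks' return series).
rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=2500)

specs = {
    "GARCH(1,1)":  dict(vol="GARCH", p=1, q=1),
    "EGARCH(1,1)": dict(vol="EGARCH", p=1, q=1),
    "TARCH(1,1)":  dict(vol="GARCH", p=1, o=1, q=1, power=1.0),
}
for label, kwargs in specs.items():
    res = arch_model(returns, mean="Constant", dist="t", **kwargs).fit(disp="off")
    print(label, "AIC:", round(res.aic, 1))
```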

Keywords: global financial crisis, leverage effect, persistence, volatility clustering

Procedia PDF Downloads 528