Search results for: vector error correction model (VECM)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18870

16980 Unified Structured Process for Health Analytics

Authors: Supunmali Ahangama, Danny Chiang Choon Poo

Abstract:

Health analytics (HA) is used in healthcare systems for effective decision-making, management, and planning of healthcare and related activities. However, user resistance, the unique position of medical data content and structure (including heterogeneous and unstructured data), and impromptu HA projects have held up progress in HA applications. Notably, the accuracy of outcomes depends on the skills and the domain knowledge of the data analyst working on the healthcare data. The success of HA depends on having a sound process model, effective project management, and the availability of supporting tools. Thus, to overcome these challenges through an effective process model, we propose an HA process model with features from the rational unified process (RUP) model and agile methodology.

Keywords: agile methodology, health analytics, unified process model, UML

Procedia PDF Downloads 508
16979 Controller Design Using GA for SMC Systems

Authors: Susy Thomas, Sajju Thomas, Varghese Vaidyan

Abstract:

This paper considers SMCs using linear feedback with switched gains and proposes a method which can minimize pole perturbation. The method is able to enhance the robustness property of the controller. A neighborhood of the ‘nominal’ pole positions is pre-assigned, and the system poles are not allowed to stray out of these bounds even when parameter variations/uncertainties act upon the system. A quasi SMM is maintained within the assigned boundaries of the sliding surface.

Keywords: parameter variations, pole perturbation, sliding mode control, switching surface, robust switching vector

Procedia PDF Downloads 368
16978 Genetic Algorithm Based Node Fault Detection and Recovery in Distributed Sensor Networks

Authors: N. Nalini, Lokesh B. Bhajantri

Abstract:

In Distributed Sensor Networks, the sensor nodes are prone to failure due to energy depletion and other reasons. In this regard, fault tolerance of the network is essential in a distributed sensor environment. Energy efficiency, network or topology control, and fault tolerance are the most important issues in the development of next-generation Distributed Sensor Networks (DSNs). This paper proposes node fault detection and recovery using a Genetic Algorithm (GA) in a DSN when some of the sensor nodes are faulty. The main objective of this work is to provide a fault tolerance mechanism that is energy efficient and responsive to the network; the GA detects the faulty nodes in the network based on the energy depletion of nodes and link failures between nodes. The proposed fault detection model is used to detect faults at the node level and network-level faults (link failure and packet error). Finally, the performance parameters for the proposed scheme are evaluated.
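
The abstract does not give the GA encoding, so the following is only an illustrative sketch. It assumes a binary chromosome that marks each node as faulty or healthy and a hypothetical fitness that rewards agreement with observed residual energy and link status; thresholds and data are placeholders, not the authors' implementation.

```python
import random

NUM_NODES = 20
ENERGY_THRESHOLD = 0.2  # assumed normalized residual-energy cutoff

# Hypothetical observations: residual energy per node and per-node link status
residual_energy = [random.random() for _ in range(NUM_NODES)]
link_ok = [random.random() > 0.1 for _ in range(NUM_NODES)]

def fitness(chromosome):
    """Reward chromosomes whose 'faulty' flags match low energy or broken links."""
    score = 0
    for flag, energy, ok in zip(chromosome, residual_energy, link_ok):
        suspect = energy < ENERGY_THRESHOLD or not ok
        score += 1 if flag == int(suspect) else 0
    return score

def evolve(pop_size=30, generations=50, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(NUM_NODES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, NUM_NODES)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

faulty_nodes = [i for i, flag in enumerate(evolve()) if flag == 1]
print("Nodes flagged as faulty:", faulty_nodes)
```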

Keywords: distributed sensor networks, genetic algorithm, fault detection and recovery, information technology

Procedia PDF Downloads 453
16977 Semi-Supervised Learning for Spanish Speech Recognition Using Deep Neural Networks

Authors: B. R. Campomanes-Alvarez, P. Quiros, B. Fernandez

Abstract:

Automatic Speech Recognition (ASR) is a machine-based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker or an audio file, analyzes it using algorithms, and produces an output in the form of a text. Some speech recognition systems use Hidden Markov Models (HMMs) to deal with the temporal variability of speech and Gaussian Mixture Models (GMMs) to determine how well each state of each HMM fits a short window of frames of coefficients that represents the acoustic input. Another way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs) that have many hidden layers and are trained using new methods have been shown to outperform GMMs on a variety of speech recognition systems. Acoustic models for state-of-the-art ASR systems are usually trained on massive amounts of data. However, audio files with their corresponding transcriptions can be difficult to obtain, especially in the Spanish language. Hence, in these low-resource scenarios, building an ASR model is a complex task due to the lack of labeled data, resulting in an under-trained system. Semi-supervised learning approaches arise as a necessary option given the high cost of transcribing audio data. The main goal of this proposal is to develop a procedure based on acoustic semi-supervised learning for Spanish ASR systems by using DNNs. This semi-supervised learning approach consists of: (a) Training a seed ASR model with a DNN using a set of audios and their respective transcriptions. The DNN was initialized as a one-hidden-layer network, and the number of hidden layers was increased during training to five. A refinement of the weight matrices and bias terms, together with Stochastic Gradient Descent (SGD) training, was also performed; the objective function was the cross-entropy criterion. (b) Decoding/testing a set of unlabeled data with the obtained seed model. (c) Selecting a suitable subset of the validated data to retrain the seed model, thereby improving its performance on the target test set. To choose the most precise transcriptions, three confidence scores or metrics based on the lattice concept (the graph cost, the acoustic cost, and a combination of both) were used as the selection technique. The performance of the ASR system is measured by means of the Word Error Rate (WER). The test dataset was renewed in order to exclude the new transcriptions added to the training dataset. Several experiments were carried out in order to select the best ASR results. A comparison between a GMM-based model without retraining and the proposed DNN system was also made under the same conditions. Results showed that the semi-supervised ASR model based on DNNs outperformed the GMM model, in terms of WER, in all tested cases. The best result was an improvement of 6% in relative WER. Hence, these promising results suggest that the proposed technique could be suitable for building ASR models in low-resource environments.
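
As a rough illustration of selection step (c), the sketch below keeps pseudo-labelled utterances whose lattice-based confidence exceeds a threshold. The weighted combination of graph and acoustic costs, the threshold, and the data structures are assumptions for illustration, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class DecodedUtterance:
    audio_id: str
    hypothesis: str       # transcription produced by the seed model
    graph_cost: float     # lattice graph (LM) cost, lower is better
    acoustic_cost: float  # lattice acoustic cost, lower is better

def confidence(utt, alpha=0.5):
    """Hypothetical confidence: inverse of a weighted sum of the two lattice costs."""
    return 1.0 / (1.0 + alpha * utt.graph_cost + (1.0 - alpha) * utt.acoustic_cost)

def select_for_retraining(decoded, threshold=0.6):
    """Keep only pseudo-labels confident enough to be added to the training set."""
    return [(u.audio_id, u.hypothesis) for u in decoded if confidence(u) >= threshold]

decoded = [
    DecodedUtterance("utt001", "hola buenos dias", 0.2, 0.3),
    DecodedUtterance("utt002", "no se entiende", 2.5, 3.1),
]
print(select_for_retraining(decoded))
```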

Keywords: automatic speech recognition, deep neural networks, machine learning, semi-supervised learning

Procedia PDF Downloads 342
16976 Analysis of the Temperature Dependence of Local Avalanche Compact Model for Bipolar Transistors

Authors: Robert Setekera, Ramses van der Toorn

Abstract:

We present an extensive analysis of the temperature dependence of the local avalanche model used in most of the modern compact models for bipolar transistors. This local avalanche model uses Chynoweth's empirical law for the ionization coefficient to define the generation of the avalanche current in terms of the local electric field. We carry out the model analysis using DC measurements taken on both Si and advanced SiGe bipolar transistors. For the advanced industrial SiGe HBTs, we consider both high-speed and high-power devices (both NPN and PNP transistors). The limitations of the local avalanche model in modeling the temperature dependence of the avalanche current, mostly in the weak avalanche region, are demonstrated. In addition, the model avalanche parameters are analyzed to see if they are in agreement with semiconductor device physics.

Keywords: avalanche multiplication, avalanche current, bipolar transistors, compact modeling, electric field, impact ionization, local avalanche

Procedia PDF Downloads 625
16975 Special Case of Trip Distribution Model and Its Use for Estimation of Detailed Transport Demand in the Czech Republic

Authors: Jiri Dufek

Abstract:

The national model of the Czech Republic has been modified in a detailed way to obtain detailed travel demand at the municipality level (cities and villages over 300 inhabitants). As a technique for this detailed modelling, a three-dimensional procedure for calibrating gravity models was used. Besides zone production and attraction, which are usual in gravity models, an additional parameter for trip distribution was introduced; it is usually called the “third dimension”. In the model, this parameter is the demand between regions. The distribution procedure involved calculation of appropriate skim matrices and their multiplication by three coefficients obtained by iterative balancing of production, attraction and the third dimension. This type of trip distribution was processed in the R project, and the results were used in the Czech Republic transport model created in PTV Vision. This process generated more precise results at the local level of the model (towns and villages).
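
A minimal numpy sketch of the three-coefficient balancing idea, under the assumption that trips follow T_ij = a_i * b_j * c_{r(i)r(j)} * f_ij and that the three factors are updated in turn until productions, attractions, and inter-regional demand are matched. The toy zone system, targets, and fixed iteration count are illustrative only, not the Czech national model.

```python
import numpy as np

# Toy problem: 4 zones grouped into 2 regions, with a deterrence (skim-based) matrix f
region = np.array([0, 0, 1, 1])          # region index of each zone
f = np.array([[1.0, 0.8, 0.3, 0.2],
              [0.8, 1.0, 0.4, 0.3],
              [0.3, 0.4, 1.0, 0.9],
              [0.2, 0.3, 0.9, 1.0]])
production = np.array([100., 80., 120., 60.])
attraction = np.array([90., 70., 130., 70.])
region_demand = np.array([[120., 60.],    # target trips between regions ("third dimension")
                          [ 40., 140.]])

a, b, c = np.ones(4), np.ones(4), np.ones((2, 2))
for _ in range(200):
    T = np.outer(a, b) * c[region][:, region] * f
    a *= production / T.sum(axis=1)                       # balance productions
    T = np.outer(a, b) * c[region][:, region] * f
    b *= attraction / T.sum(axis=0)                       # balance attractions
    T = np.outer(a, b) * c[region][:, region] * f
    for r in range(2):                                    # balance inter-regional demand
        for s in range(2):
            mask = np.outer(region == r, region == s)
            c[r, s] *= region_demand[r, s] / T[mask].sum()

T = np.outer(a, b) * c[region][:, region] * f
print(np.round(T, 1))
```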

Keywords: trip distribution, three dimension, transport model, municipalities

Procedia PDF Downloads 133
16974 Forecasting Unemployment Rate in Selected European Countries Using Smoothing Methods

Authors: Ksenija Dumičić, Anita Čeh Časni, Berislav Žmuk

Abstract:

The aim of this paper is to select the most accurate forecasting method for predicting the future values of the unemployment rate in selected European countries. In order to do so, several forecasting techniques adequate for forecasting time series with a trend component were selected, namely: double exponential smoothing (also known as Holt's method) and the Holt-Winters' method, which accounts for trend and seasonality. The results of the empirical analysis showed that the optimal model for forecasting the unemployment rate in Greece was the Holt-Winters' additive method. In the case of Spain, according to MAPE, the optimal model was the double exponential smoothing model. Furthermore, for Croatia and Italy the best forecasting model for the unemployment rate was the Holt-Winters' multiplicative model, whereas in the case of Portugal the best model to forecast the unemployment rate was the double exponential smoothing model. Our findings are in line with European Commission unemployment rate estimates.
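
For readers who want to reproduce the general approach, the sketch below fits Holt's double exponential smoothing and an additive Holt-Winters model with statsmodels and compares them by MAPE on a hold-out year. The series shown is synthetic, not the data used in the paper.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing, Holt

# Synthetic monthly unemployment-rate series (illustrative only)
rng = pd.date_range("2005-01", periods=120, freq="MS")
y = pd.Series(10 + 0.02 * np.arange(120) + np.sin(np.arange(120) * 2 * np.pi / 12),
              index=rng)

train, test = y[:-12], y[-12:]

def mape(actual, forecast):
    return np.mean(np.abs((actual - forecast) / actual)) * 100

holt_fc = Holt(train).fit().forecast(12)                        # trend only
hw_fc = ExponentialSmoothing(train, trend="add",
                             seasonal="add", seasonal_periods=12).fit().forecast(12)

print(f"Holt MAPE:         {mape(test, holt_fc):.2f}%")
print(f"Holt-Winters MAPE: {mape(test, hw_fc):.2f}%")
```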

Keywords: European Union countries, exponential smoothing methods, forecast accuracy, unemployment rate

Procedia PDF Downloads 370
16973 Simulation of Flow Patterns in Vertical Slot Fishway with Cylindrical Obstacles

Authors: Mohsen Solimani Babarsad, Payam Taheri

Abstract:

Numerical results of a study of vertical slot fishways with and without cylinders are presented. The simulated results and the measured data in the fishways are compared to validate the application of the model. This investigation is made using FLUENT V.6.3, a Computational Fluid Dynamics solver. Advantages of using these types of numerical tools are the possibility of avoiding the limitations of the St.-Venant equations, and turbulence can be modeled by means of different models such as the k-ε model. In general, the present study has demonstrated that the CFD model could be useful for analysis and design of vertical slot fishways with cylinders.

Keywords: slot fishway, CFD, k-ε model, St.-Venant equations

Procedia PDF Downloads 365
16972 An Integrated Real-Time Hydrodynamic and Coastal Risk Assessment Model

Authors: M. Reza Hashemi, Chris Small, Scott Hayward

Abstract:

The Northeast Coast of the US faces damaging effects of coastal flooding and winds due to Atlantic tropical and extratropical storms each year. Historically, several large storm events have produced substantial levels of damage to the region; the most notable of these were the Great Atlantic Hurricane of 1938, Hurricane Carol, Hurricane Bob, and recently Hurricane Sandy (2012). The objective of this study was to develop an integrated modeling system that could be used as a forecasting/hindcasting tool to evaluate and communicate the risk coastal communities face from these coastal storms. This modeling system utilizes the ADvanced CIRCulation (ADCIRC) model for storm surge predictions and the Simulating Waves Nearshore (SWAN) model for the wave environment. These models were coupled, passing information to each other and computing over the same unstructured domain, allowing for the most accurate representation of the physical storm processes. The coupled SWAN-ADCIRC model was validated and has been set up to perform real-time forecast simulations (as well as hindcasts). Modeled storm parameters were then passed to a coastal risk assessment tool. This tool, which is generic and universally applicable, generates spatial structural damage estimate maps on an individual structure basis for an area of interest. The required inputs for the coastal risk model included detailed information about the individual structures, inundation levels, and wave heights for the selected region. Additionally, calculation of wind damage to structures was incorporated. The integrated coastal risk assessment system was then tested and applied to Charlestown, a small vulnerable coastal town along the southern shore of Rhode Island. The modeling system was applied to Hurricane Sandy and a synthetic storm. In both storm cases, the effect of natural dunes on coastal risk was investigated. The resulting damage maps for the area (Charlestown) clearly showed that the dune-eroded scenarios affected more structures and increased the estimated damage. The system was also tested in forecast mode for a large Nor'easter: Stella (March 2017). The results showed a good performance of the coupled model in forecast mode when compared to observations. Finally, the nearshore model XBeach was nested within this regional grid (ADCIRC-SWAN) to simulate nearshore sediment transport processes and coastal erosion. Hurricane Irene (2011) was used to validate XBeach, on the basis of a unique beach profile dataset for the region. XBeach showed a relatively good performance, being able to estimate eroded volumes along the beach transects with a mean error of 16%. The validated model was then used to analyze the effectiveness of several erosion mitigation methods that were recommended in a recent study of coastal erosion in New England: beach nourishment, a coastal bank (engineered core), and a submerged breakwater as well as an artificial surfing reef. It was shown that beach nourishment and coastal banks perform better to mitigate shoreline retreat and coastal erosion.

Keywords: ADCIRC, coastal flooding, storm surge, coastal risk assessment, living shorelines

Procedia PDF Downloads 117
16971 Identifying Autism Spectrum Disorder Using Optimization-Based Clustering

Authors: Sharifah Mousli, Sona Taheri, Jiayuan He

Abstract:

Autism spectrum disorder (ASD) is a complex developmental condition involving persistent difficulties with social communication, restricted interests, and repetitive behavior. The challenges associated with ASD can interfere with an affected individual's ability to function in social, academic, and employment settings. Although there is no effective medication known to treat ASD, to the best of our knowledge, early intervention can significantly improve an affected individual's overall development. Hence, an accurate diagnosis of ASD at an early phase is essential. The use of machine learning approaches improves and speeds up the diagnosis of ASD. In this paper, we focus on the application of unsupervised clustering methods in ASD, as a large volume of ASD data generated through hospitals, therapy centers, and mobile applications has no pre-existing labels. We conduct a comparative analysis using seven clustering approaches (K-means, agglomerative hierarchical, model-based, fuzzy C-means, affinity propagation, self-organizing maps, and linear vector quantisation) as well as the recently developed optimization-based clustering (COMSEP-Clust) approach. We evaluate the performance of the clustering methods extensively on real-world ASD datasets encompassing different age groups: toddlers, children, adolescents, and adults. Our experimental results suggest that the COMSEP-Clust approach outperforms the other seven methods in recognizing ASD with well-separated clusters.
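
A condensed illustration, on assumed synthetic features, of how several of the listed clustering methods can be compared with scikit-learn; COMSEP-Clust itself is not publicly packaged here, so only standard baselines appear and the data are a stand-in for an ASD screening feature matrix.

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, AgglomerativeClustering, AffinityPropagation
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

# Synthetic stand-in for an ASD screening feature matrix with known groups
X, y_true = make_blobs(n_samples=300, centers=2, n_features=10, random_state=0)

models = {
    "k-means": KMeans(n_clusters=2, n_init=10, random_state=0),
    "agglomerative": AgglomerativeClustering(n_clusters=2),
    "model-based (GMM)": GaussianMixture(n_components=2, random_state=0),
    "affinity propagation": AffinityPropagation(random_state=0),
}

for name, model in models.items():
    labels = model.fit_predict(X)   # compare recovered clusters against known labels
    print(f"{name:22s} ARI = {adjusted_rand_score(y_true, labels):.3f}")
```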

Keywords: autism spectrum disorder, clustering, optimization, unsupervised machine learning

Procedia PDF Downloads 119
16970 Reasons for the Selection of Information-Processing Framework and the Philosophy of Mind as a General Account for an Error Analysis and Explanation on Mathematics

Authors: Michael Lousis

Abstract:

This research study is concerned with learners' errors in Arithmetic and Algebra. The data resulted from a broader international comparative research program called the Kassel Project. However, its conceptualisation differed from and contrasted with that of the main program, which was mostly based on socio-demographic data. The way in which the research study was conducted was not dependent on the researcher's discretion, but was absolutely dictated by the nature of the problem under investigation. This is because the phenomenon of learners' mathematical errors is due neither to the intentions of learners, nor to institutional processes, rules and norms, nor to the educators' intentions and goals, but rather to the way certain information is presented to learners and how their cognitive apparatus processes this information. Several approaches for the study of learners' errors have been developed since the beginning of the 20th century, encompassing different belief systems. These approaches were based on behaviourist theory, on the Piagetian-constructivist research framework, on the perspective that followed the philosophy of science, and on the information-processing paradigm. The researcher of the present study was forced to disclose the learners' course of thinking that led them to specific observable actions, with the result of showing particular errors in specific problems, rather than analysing scripts with the students' thoughts presented in written form. This, in turn, entailed that the choice of methods would have to be appropriate and conducive to seeing and realising the learners' errors from the perspective of the participants in the investigation. This particular fact determined important decisions to be made concerning the selection of an appropriate framework for analysing the mathematical errors and giving explanations. Thus the belief systems of the behaviourist, Piagetian-constructivist, and philosophy of science perspectives were rejected, and the information-processing paradigm in conjunction with the philosophy of mind was adopted as a general account for the elaboration of data. This paper explains why these decisions were appropriate and beneficial for conducting the present study and for the establishment of the ensuing thesis. Additionally, it explains why the adoption of the information-processing paradigm in conjunction with the philosophy of mind gives sound and legitimate bases for the development of future studies concerning mathematical error analysis.

Keywords: advantages-disadvantages of theoretical prospects, behavioral prospect, critical evaluation of theoretical prospects, error analysis, information-processing paradigm, opting for the appropriate approach, philosophy of science prospect, Piagetian-constructivist research frameworks, review of research in mathematical errors

Procedia PDF Downloads 191
16969 Frequency Selective Filters for Estimating the Equivalent Circuit Parameters of Li-Ion Battery

Authors: Arpita Mondal, Aurobinda Routray, Sreeraj Puravankara, Rajashree Biswas

Abstract:

The most difficult part of designing a battery management system (BMS) is battery modeling. A good battery model can capture the dynamics, which helps in energy management through accurate model-based state estimation algorithms. So far, the most suitable and fruitful model is the equivalent circuit model (ECM). However, in real-time applications, the model parameters are time-varying; they change with current, temperature, state of charge (SOC), and aging of the battery, and this has a great impact on the performance of the model. Therefore, to increase the equivalent circuit model performance, the parameter estimation has been carried out in the frequency domain. The battery is a very complex system, which is associated with various chemical reactions and heat generation. Therefore, it is very difficult to select the optimal model structure. In general, if the model order is increased, the model accuracy improves. However, a higher order model will face the tendency of over-parameterization and unfavorable prediction capability, while the model complexity will increase enormously. In the time domain, it becomes difficult to solve higher order differential equations as the model order increases. This problem can be resolved by frequency domain analysis, where the computational problems due to ill-conditioning are reduced. In the frequency domain, several dominating frequencies can be found in the input as well as the output data. The selective frequency domain estimation has been carried out, first by estimating the frequencies of the input and output by subspace decomposition, and then by choosing specific bands from the most dominating to the least, while carrying out least-squares, recursive least-squares, and Kalman filter based parameter estimation. In this paper, a second order battery model consisting of three resistors, two capacitors, and one SOC-controlled voltage source has been chosen. For model identification and validation, hybrid pulse power characterization (HPPC) tests have been carried out on a 2.6 Ah LiFePO₄ battery.
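
As a simplified time-domain counterpart to the estimators mentioned above, the following recursive least-squares sketch identifies the coefficients of a discretized first-order ECM from current and voltage samples. The regressor construction and the synthetic data are assumptions for illustration, not the authors' frequency-domain procedure.

```python
import numpy as np

def rls(Phi, y, lam=0.99):
    """Recursive least squares with forgetting factor lam for y[k] = Phi[k] @ theta."""
    n = Phi.shape[1]
    theta = np.zeros(n)
    P = np.eye(n) * 1e3
    for phi, yk in zip(Phi, y):
        k = P @ phi / (lam + phi @ P @ phi)       # gain vector
        theta = theta + k * (yk - phi @ theta)    # parameter update
        P = (P - np.outer(k, phi @ P)) / lam      # covariance update
    return theta

# Synthetic first-order ECM in discrete time: v[k] = a*v[k-1] + b0*i[k] + b1*i[k-1]
true_theta = np.array([0.95, 0.02, -0.015])
i = np.random.randn(500)
v = np.zeros(500)
for k in range(1, 500):
    v[k] = true_theta @ np.array([v[k - 1], i[k], i[k - 1]]) + 1e-4 * np.random.randn()

Phi = np.column_stack([v[:-1], i[1:], i[:-1]])
print("Estimated parameters:", np.round(rls(Phi, v[1:]), 4))
```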

Keywords: equivalent circuit model, frequency estimation, parameter estimation, subspace decomposition

Procedia PDF Downloads 151
16968 Designing Equivalent Model of Floating Gate Transistor

Authors: Birinderjit Singh Kalyan, Inderpreet Kaur, Balwinder Singh Sohi

Abstract:

In this paper, an equivalent model for the floating gate transistor has been proposed. Using the floating gate voltage value, capacitive coupling coefficients have been found at different bias conditions. The amount of charge present on the gate has then been calculated using the transient models of hot electron programming and Fowler-Nordheim tunnelling. The proposed model can be extended to transient conditions as well. The SPICE equivalent model is designed, and the current-voltage characteristics and transfer characteristics are comparatively analysed. The DC current-voltage characteristics, as well as the DC transfer characteristics, have been plotted for an FGMOS with W/L = 0.25 μm/0.375 μm and an inter-poly capacitance of 0.8 fF for both programmed and erased states. A comparative analysis has been made between the present model and the capacitive coupling coefficient methods which were already available.

Keywords: FGMOS, floating gate transistor, capacitive coupling coefficient, SPICE model

Procedia PDF Downloads 547
16967 Body Shape Control of Magnetic Soft Continuum Robots with PID Controller

Authors: M. H. Korayem, N. Sangsefidi

Abstract:

Magnetically guided soft robots have emerged as a promising technology in minimally invasive surgery due to their ability to adapt to complex environments. However, one of the main challenges in this field is damage to the vascular structure caused by unwanted stress on the vessel wall and deformation of the vessel due to improper control of the shape of the robot body during surgery. Therefore, this article proposes an approach for controlling the shape of a magnetic soft continuum robot body using a PID controller. The magnetic soft continuum robot is modelled using Cosserat theory in static mode and solved numerically. The designed controller adjusts the position of each part of the robot to match the desired shape. The PID controller is designed to minimize the robot's contact with the vessel wall and prevent unwanted vessel deformation. The simulation results confirmed the accuracy of the numerical solution of the static Cosserat model. They also showed the effectiveness of the proposed shape control method in achieving the desired shape with a maximum error of about 0.3 millimetres.
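
A bare-bones discrete PID loop of the kind described, applied to a single scalar position error; the gains, time step, and toy plant are placeholders, since the paper does not report them, and the real controller acts on every segment of the Cosserat model.

```python
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy plant: position of one body segment responds proportionally to the command
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
position, target = 0.0, 1.0          # millimetres
for step in range(500):
    command = pid.update(target, position)
    position += 0.01 * command       # crude first-order response
print(f"Final position error: {abs(target - position):.4f} mm")
```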

Keywords: PID, magnetic soft continuous robot, soft robot shape control, Cosserat theory, minimally invasive surgery

Procedia PDF Downloads 112
16966 Effects of Merging Personal and Social Responsibility with Sports Education Model on Students' Game Performance and Responsibility

Authors: Yi-Hsiang Pan, Chen-Hui Huang, Wei-Ting Hsu

Abstract:

The purposes of the study were as follows: 1. To explore the effect of merging teaching personal and social responsibility (TPSR) with the sports education model on students' game performance and responsibility. 2. To explore the effect of the sports education model on students' game performance and responsibility. 3. To compare the difference between "merging TPSR with the sports education model" and "the sports education model" on students' game performance and responsibility. The participants included three high school physical education teachers and six physical education classes. Each teacher taught an experimental group and a control group. The participants comprised 121 students, including 65 students in the experimental group and 56 students in the control group. The research methods included game performance assessment, questionnaire investigation, interviews, and focus group meetings. The research instruments included a personal and social responsibility questionnaire and a game performance assessment instrument. Paired t-tests and MANCOVA were used to test the difference between "merging TPSR with the sports education model" and "the sports education model" on students' learning performance. 1) "Merging TPSR with the sports education model" showed significant improvements in students' game performance and in the responsibilities of self-direction, helping others, and cooperation. 2) The "sports education model" also showed significant improvements in students' game performance and in the responsibilities of effort, self-direction, and helping others. 3) There was no significant difference in game performance and responsibilities between "merging TPSR with the sports education model" and "the sports education model". 4) "Merging TPSR with the sports education model" significantly improved the learning atmosphere and peer relationships, so it may be developed in the physical education curriculum. The conclusions were as follows: both "merging TPSR with the sports education model" and the "sports education model" can help improve students' responsibility and game performance. However, "merging TPSR with the sports education model" can reduce the competitive atmosphere in highly intensive games between students. The curricular project of the hybrid TPSR-Sport Education model is a good approach for moral character education.

Keywords: curriculum and teaching model, sports self-efficacy, sport enthusiastic, character education

Procedia PDF Downloads 315
16965 New Estimation in Autoregressive Models with Exponential White Noise by Using Reversible Jump MCMC Algorithm

Authors: Suparman Suparman

Abstract:

White noise in an autoregressive (AR) model is often assumed to be normally distributed. In applications, however, the white noise often does not follow a normal distribution. This paper aims to estimate the parameters of an AR model that has exponential white noise. A Bayesian method is adopted. A prior distribution for the parameters of the AR model is selected, and this prior distribution is then combined with the likelihood function of the data to get a posterior distribution. Based on this posterior distribution, a Bayesian estimator for the parameters of the AR model is obtained. Because the order of the AR model is considered a parameter, this Bayesian estimator cannot be explicitly calculated. To resolve this problem, a reversible jump Markov Chain Monte Carlo (MCMC) method is adopted. As a result, the parameters of the AR model, including its order, can be estimated simultaneously.
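
To make the likelihood concrete, here is a minimal sampler for a fixed-order AR(1) with exponential noise: Metropolis updates for the coefficient and a conjugate Gamma update for the rate. The full reversible-jump move over the model order, which is the paper's contribution, is deliberately omitted, and priors, step sizes, and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate AR(1) data with exponential white noise: y[t] = a*y[t-1] + e[t], e ~ Exp(lam)
a_true, lam_true, n = 0.6, 2.0, 400
y = np.zeros(n)
for t in range(1, n):
    y[t] = a_true * y[t - 1] + rng.exponential(1.0 / lam_true)

def log_like(a, lam):
    e = y[1:] - a * y[:-1]
    if np.any(e < 0):                     # exponential noise must be non-negative
        return -np.inf
    return len(e) * np.log(lam) - lam * e.sum()

# Metropolis for a (uniform prior on (-1, 1)), Gibbs for lam (Gamma(1, 1) prior)
a, lam = 0.0, 1.0
samples = []
for it in range(5000):
    a_prop = a + rng.normal(0, 0.02)
    if -1 < a_prop < 1 and np.log(rng.random()) < log_like(a_prop, lam) - log_like(a, lam):
        a = a_prop
    e = y[1:] - a * y[:-1]
    lam = rng.gamma(1 + len(e), 1.0 / (1 + e.sum()))   # conjugate Gamma update
    if it > 1000:
        samples.append((a, lam))

a_hat, lam_hat = np.mean(samples, axis=0)
print(f"posterior mean a = {a_hat:.3f}, lambda = {lam_hat:.3f}")
```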

Keywords: autoregressive (AR) model, exponential white noise, Bayesian, reversible jump Markov Chain Monte Carlo (MCMC)

Procedia PDF Downloads 358
16964 Numerical Pricing of Financial Options under Irrational Exercise Times and Regime-Switching Models

Authors: Mohammad Saber Rohi, Saghar Heidari

Abstract:

In this paper, we study the pricing problem of American options under a regime-switching model with the possibility of a non-optimal exercise policy (early or late exercise time), which is called an irrational strategy. For this, we consider a Markov-modulated model for the dynamics of the underlying asset as an alternative to the classical Black-Scholes-Merton (BSM) model, and an intensity-based model for the irrational strategy, to provide more realistic results for American option prices under irrational behavior in real financial markets. Applying a partial differential equation (PDE) approach, the pricing problem of American options under regime-switching models can be formulated as coupled PDEs. To solve the resulting systems of PDEs in this model, we apply a finite element method as the numerical solving procedure to the resulting variational inequality. Under some appropriate assumptions, we establish the stability of the method and compare its accuracy to some recent works to illustrate the suitability of the proposed model and the accuracy of the applied numerical method for the pricing problem of American options under the regime-switching model with irrational behaviors.

Keywords: irrational exercise strategy, rationality parameter, regime-switching model, American option, finite element method, variational inequality

Procedia PDF Downloads 74
16963 Computational Fluid Dynamics Modeling of Liquefaction of Wood and It's Model Components Using a Modified Multistage Shrinking-Core Model

Authors: K. G. R. M. Jayathilake, S. Rudra

Abstract:

Wood degradation in hot compressed water is modeled with a Computational Fluid Dynamics (CFD) code using cellulose, xylan, and lignin as model compounds. Model compounds are reacted under catalyst-free conditions in a temperature range from 250 to 370 °C. A simplified reaction scheme is used in which water-soluble products, methanol-soluble products, char-like compounds, and gas are generated through intermediates for each model compound. A modified multistage shrinking-core model is developed to simulate particle degradation. In the modified shrinking-core model, each model compound is hydrolyzed in a separate stage. Cellulose is decomposed to glucose/oligomers before producing degradation products. Xylan is decomposed through xylose and then to degradation products, while lignin is decomposed into soluble products before producing guaiacol, total organic carbon (TOC), and then char and gas. Hydrolysis of each model compound is used as the main reaction of the process. Diffusion of water monomers to the particle surface to initiate hydrolysis, and dissolution of the products in water, are given importance during the modeling process. In the developed model, the temperature variation depends on the Arrhenius relationship. Kinetic parameters from the literature are used for the mathematical model. Meanwhile, limited initial fast-reaction kinetic data limit the development of more accurate CFD models. Liquefaction results of the CFD model are analyzed and validated using the experimental data available in the literature, with which they show reasonable agreement.

Keywords: computational fluid dynamics, liquefaction, shrinking-core, wood

Procedia PDF Downloads 127
16962 Comparative Study Performance of the Induction Motor between SMC and NLC Modes Control

Authors: A. Oukaci, R. Toufouti, D. Dib, l. Atarsia

Abstract:

This article presents alternative techniques to vector control, namely nonlinear control and sliding mode control. Their control laws are implemented and applied to a high-performance induction motor with the objective of improving tracking control, ensuring stability and robustness to parameter variations, and rejecting disturbances. Tests are performed as numerical simulations in the Matlab/Simulink environment; the results demonstrate the efficiency and dynamic performance of the proposed strategies.

Keywords: Induction Motor (IM), Non-linear Control (NLC), Sliding Mode Control (SMC), nonlinear sliding surface

Procedia PDF Downloads 574
16961 Contextual Factors of Innovation for Improving Commercial Banks' Performance in Nigeria

Authors: Tomola Obamuyi

Abstract:

The banking system in Nigeria adopted innovative banking with the aim of enhancing financial inclusion, making financial services readily and cheaply available to the majority of the people, and contributing to the efficiency of the financial system. Some of the innovative services include: Automatic Teller Machines (ATMs), National Electronic Fund Transfer (NEFT), Point of Sale (PoS), internet (Web) banking, Mobile Money payment (MMO), Real-Time Gross Settlement (RTGS), and agent banking, among others. The introduction of these payment systems is expected to increase bank efficiency and customers' satisfaction, culminating in better performance for the commercial banks. However, opinions differ on the possible effects of the various innovative payment systems on the performance of commercial banks in the country. Thus, this study empirically determines how commercial banks use innovation to gain competitive advantage in the specific context of Nigeria's finance and business. The study also analyses the effects of financial innovation on the performance of commercial banks when different periods of analysis are considered. The study employed secondary data from 2009 to 2018, the period that witnessed aggressive innovation in the financial sector of the country. The Vector Autoregression (VAR) estimation technique was used to forecast the relative variance contribution of each random innovation to the variables in the VAR, to examine the effect of a standard deviation shock to one of the innovations on current and future values of the variables (impulse response), and to determine the causal relationship between the variables (VAR Granger causality test). The study also employed Multi-Criteria Decision Making (MCDM) to rank the innovations and the performance criteria of Return on Assets (ROA) and Return on Equity (ROE). The entropy method of MCDM was used to determine which of the performance criteria better reflects the contributions of the various innovations in the banking sector. On the other hand, the Range of Values (ROV) method was used to rank the contributions of the seven innovations to performance. The analysis was done for the medium term (five years) and the long run (ten years) of innovations in the sector. The impulse response function derived from the VAR system indicated that the response of ROA to the values of cheque transactions, the values of NEFT transactions, and the values of POS transactions was positive and significant in the periods of analysis. The paper also confirmed with the entropy and range of values methods that, in the long run, both CHEQUE and MMO performed best, while NEFT was next in performance. The paper concluded that commercial banks would enhance their performance by continuously improving the services provided through cheques, National Electronic Fund Transfer and Point of Sale, since these instruments have long-run effects on their performance. This will increase the confidence of the populace and encourage more usage/patronage of these services. The banking sector will in turn experience better performance, which will improve the economy of the country.
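
A compact statsmodels illustration of the VAR workflow described (fit, impulse responses, variance decomposition, Granger causality). The column names and random data are placeholders standing in for the bank-performance and innovation series, not the study's dataset.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Placeholder series standing in for ROA and two innovation channels
rng = np.random.default_rng(1)
data = pd.DataFrame(rng.normal(size=(40, 3)),
                    columns=["ROA", "NEFT_value", "POS_value"])

model = VAR(data)
results = model.fit(maxlags=4, ic="aic")      # lag order chosen by AIC

irf = results.irf(8)                          # impulse responses up to 8 periods
print(irf.irfs.shape)                         # one response matrix per horizon

results.fevd(8).summary()                     # forecast error variance decomposition

# Does the block of innovation series Granger-cause ROA?
gc = results.test_causality("ROA", ["NEFT_value", "POS_value"], kind="f")
print(gc.summary())
```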

Keywords: bank performance, financial innovation, multi-criteria decision making, vector autoregression

Procedia PDF Downloads 123
16960 Landslide Susceptibility Mapping Using Soft Computing in Amhara Saint

Authors: Semachew M. Kassa, Africa M Geremew, Tezera F. Azmatch, Nandyala Darga Kumar

Abstract:

Frequency ratio (FR) and analytical hierarchy process (AHP) methods have been developed based on past landslide failure points to produce landslide susceptibility maps, because landslides can seriously harm both the environment and society. However, it is still difficult to select the most efficient method and correctly identify the main driving factors for particular regions. In this study, we used fourteen landslide conditioning factors (LCFs) and five soft computing algorithms, including Random Forest (RF), Support Vector Machine (SVM), Logistic Regression (LR), Artificial Neural Network (ANN), and Naïve Bayes (NB), to predict the landslide susceptibility at a 12.5 m spatial scale. The performance of the RF (F1-score: 0.88, AUC: 0.94), ANN (F1-score: 0.85, AUC: 0.92), and SVM (F1-score: 0.82, AUC: 0.86) methods was significantly better than that of the LR (F1-score: 0.75, AUC: 0.76) and NB (F1-score: 0.73, AUC: 0.75) methods, according to the classification results based on inventory landslide points. The findings also showed that around 35% of the study region was made up of places with high and very high landslide risk (susceptibility greater than 0.5). The very high-risk locations were primarily found in the western and southeastern regions, and all five models showed good agreement and similar geographic distribution patterns in landslide susceptibility. The locations with the highest landslide risk include the western part of Amhara Saint Town, the northern part, and the St. Gebreal Church villages, with mean susceptibility values greater than 0.5. Rainfall, distance to road, and slope were typically among the top leading factors for most villages, although the primary contributing factors to landslide vulnerability varied slightly across the five models. Decision-makers and policy planners can use the information from our study to make informed decisions and establish policies. It also suggests that different places should take different safeguards to reduce or prevent serious damage from landslide events.
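
An abbreviated scikit-learn sketch of the model comparison reported above (RF, SVM, LR, ANN, NB scored by F1 and AUC). The feature matrix here is synthetic rather than the 14 landslide conditioning factors, and the hyperparameters are placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import f1_score, roc_auc_score

# Stand-in for 14 landslide conditioning factors with landslide / no-landslide labels
X, y = make_classification(n_samples=1000, n_features=14, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "ANN": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0),
    "NB": GaussianNB(),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    prob = model.predict_proba(X_te)[:, 1]
    print(f"{name:3s}  F1 = {f1_score(y_te, pred):.2f}  AUC = {roc_auc_score(y_te, prob):.2f}")
```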

Keywords: artificial neural network, logistic regression, landslide susceptibility, naïve Bayes, random forest, support vector machine

Procedia PDF Downloads 85
16959 Applying the Crystal Model Approach on Light Nuclei for Calculating Radii and Density Distribution

Authors: A. Amar

Abstract:

A new model, namely the crystal model, has been developed and adapted to calculate the radius and density distribution of light nuclei up to ⁸Be. The crystal model is adapted from solid-state physics and uses the analogy between the nucleon distribution and the distribution of atoms in a crystal. The model has an analytical formulation to calculate the radius, while the density distribution of light nuclei is obtained from the analogy with the crystal lattice. The distribution of nucleons over the crystal has been discussed in a general form. The equation used to calculate the binding energy was taken from the solid-state model of repulsive and attractive forces. The number of protons was taken to control the repulsive force, whereas the atomic number was responsible for the attractive force. The parameter calculated from the crystal model was found to be proportional to the radius of the nucleus. The density distribution of light nuclei was taken as a summation of two cluster distributions, as in the ⁶Li = alpha + deuteron configuration. A test has been done on the data obtained for the radius and density distribution using double folding for d+⁶,⁷Li with the M3Y nucleon-nucleon interaction. Good agreement has been obtained for both the radius and the density distribution of light nuclei. The model failed to calculate the radius of ⁹Be, so modifications should be made to overcome this discrepancy.

Keywords: nuclear physics, nuclear lattice, nucleus as a crystal, light nuclei up to ⁸Be

Procedia PDF Downloads 179
16958 RAPDAC: Role Centric Attribute Based Policy Driven Access Control Model

Authors: Jamil Ahmed

Abstract:

Access control models aim to decide whether a user should be denied or granted access to the user's requested activity. Various access control models have been established and proposed. The most prominent of these include role-based, attribute-based, and policy-based access control models, as well as the role-centric attribute-based access control model. In this paper, a novel access control model called the "Role-centric Attribute-based Policy-Driven Access Control (RAPDAC) model" is presented. RAPDAC incorporates the concept of "policy" into the role-centric attribute-based access control model. It leverages the concept of policy by precisely combining the evaluation of conditions, attributes, permissions, and roles in order to allow authorization access. This approach allows capturing the access control policy of a real-time application in a well-defined manner. The RAPDAC model allows making access decisions at a much finer granularity, as illustrated by the case study of a real-time library information system.

Keywords: authorization, access control model, role based access control, attribute based access control

Procedia PDF Downloads 162
16957 Efficient Bargaining versus Right to Manage in the Era of Liberalization

Authors: Panagiota Koliousi, Natasha Miaouli

Abstract:

We compare product and labour market liberalization under two trade union bargaining models: the Right-to-Manage (RTM) model and the Efficient Bargaining (EB) model. The vehicle is a dynamic general equilibrium (DGE) model that incorporates two types of agents (capitalists and workers) and imperfectly competitive product and labour markets. The model is solved numerically employing common parameter values and data from the euro area. A key message is that product market deregulation is favourable under any labour market structure, while when opting for labour market deregulation one should pay special attention to the structure of the labour market, such as the bargaining system of unions. If the prevailing way of bargaining is the RTM model, then restructuring both markets is beneficial for all agents.

Keywords: market structure, structural reforms, trade unions, unemployment

Procedia PDF Downloads 198
16956 Metric Dimension on Line Graph of Honeycomb Networks

Authors: M. Hussain, Aqsa Farooq

Abstract:

Let G = (V, E) be a connected graph; the distance between any two vertices a and b in G is the length of an a−b geodesic and is denoted by d(a, b). A set of vertices W resolves a graph G if each vertex is uniquely determined by its vector of distances to the vertices in W. The metric dimension of G is the minimum cardinality of a resolving set of G. In this paper, the line graph of the honeycomb network is derived, and then the metric dimension of this line graph is calculated.
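
A brute-force check of these definitions for small graphs, using networkx; the honeycomb network itself is not constructed here, only an assumed small example, since the derivation in the paper is analytical and the exhaustive search below scales poorly.

```python
import itertools
import networkx as nx

def is_resolving(G, W, dist):
    """W resolves G if the vectors of distances to W are distinct for all vertices."""
    vectors = {tuple(dist[v][w] for w in W) for v in G.nodes}
    return len(vectors) == G.number_of_nodes()

def metric_dimension(G):
    dist = dict(nx.all_pairs_shortest_path_length(G))
    nodes = list(G.nodes)
    for k in range(1, len(nodes) + 1):
        for W in itertools.combinations(nodes, k):
            if is_resolving(G, W, dist):
                return k, W
    return len(nodes), tuple(nodes)

# Small illustrative example: the line graph of a 6-cycle (itself a 6-cycle)
G = nx.line_graph(nx.cycle_graph(6))
k, W = metric_dimension(G)
print(f"metric dimension = {k}, resolving set = {W}")
```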

Keywords: resolving set, metric dimension, honeycomb network, line graph

Procedia PDF Downloads 205
16955 Infodemic Detection on Social Media with a Multi-Dimensional Deep Learning Framework

Authors: Raymond Xu, Cindy Jingru Wang

Abstract:

Social media has become a globally connected and influential platform. Social media data, such as tweets, can help predict the spread of pandemics and provide individuals and healthcare providers with early warnings. Public psychological reactions and opinions on the progression of dominant topics on Twitter can be efficiently monitored by AI models. However, statistics show that as the coronavirus spreads, so does an infodemic of misinformation due to pandemic-related factors such as unemployment and lockdowns. Social media algorithms are often biased toward outrage by promoting content that people have an emotional reaction to and are likely to engage with. This can influence users' attitudes and cause confusion. Therefore, social media is a double-edged sword. Combating fake news and biased content has become one of the essential tasks. This research analyzes the variety of methods used for fake news detection, covering random forest, logistic regression, support vector machines, decision tree, naive Bayes, BoW, TF-IDF, LDA, CNN, RNN, LSTM, DeepFake, and hierarchical attention network approaches. The performance of each method is analyzed. Based on these models' achievements and limitations, a multi-dimensional AI framework is proposed to achieve higher accuracy in infodemic detection, especially for pandemic-related news. The model is trained on contextual content, images, and news metadata.
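
One of the simpler baselines surveyed above, a TF-IDF plus logistic regression text classifier, sketched with scikit-learn on placeholder headlines; the multi-dimensional framework combining text, images, and metadata is not reproduced here.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Placeholder labelled headlines (1 = misinformation, 0 = reliable)
texts = [
    "miracle cure stops virus overnight, doctors shocked",
    "health ministry publishes updated vaccination schedule",
    "secret lab memo proves lockdown is a hoax",
    "WHO releases weekly epidemiological report",
]
labels = [1, 0, 1, 0]

# Word and bigram TF-IDF features fed into a linear classifier
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["new report on weekly case counts released",
                   "shocking hidden cure the government bans"]))
```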

Keywords: artificial intelligence, fake news detection, infodemic detection, image recognition, sentiment analysis

Procedia PDF Downloads 263
16954 A Two Stage Stochastic Mathematical Model for the Tramp Ship Routing with Time Windows Problem

Authors: Amin Jamili

Abstract:

Nowadays, the majority of international trade in goods is carried by sea, especially by ships deployed in the industrial and tramp segments. This paper addresses routing tramp ships and determining their schedules, including the arrival times at ports, berthing times at the ports, and the departure times, at an operational planning level. At the operational planning level, the weather can be forecast almost exactly; however, on some routes some uncertainties may remain. In this paper, the voyaging times between some of the ports are considered to be uncertain. To that end, a two-stage stochastic mathematical model is proposed. Moreover, a case study is tested with the presented model. The computational results show that this mathematical model is promising and can produce acceptable solutions.

Keywords: routing, scheduling, tramp ships, two-stage stochastic model, uncertainty

Procedia PDF Downloads 441
16953 Conduction Model Compatible for Multi-Physical Domain Dynamic Investigations: Bond Graph Approach

Authors: A. Zanj, F. He

Abstract:

In the current paper, a domain independent conduction model compatible for multi-physical system dynamic investigations is suggested. By means of a port-based approach, a classical nonlinear conduction model containing physical states is first represented. A compatible discrete configuration of the thermal domain in line with the elastic domain is then generated through the enhancement of the configuration of the conventional thermal element. The presented simulation results of a sample structure indicate that the suggested conductive model can cover a wide range of dynamic behavior of the thermal domain.

Keywords: multi-physical domain, conduction model, port based modeling, dynamic interaction, physical modeling

Procedia PDF Downloads 278
16952 Enhancement of Indexing Model for Heterogeneous Multimedia Documents: User Profile Based Approach

Authors: Aicha Aggoune, Abdelkrim Bouramoul, Mohamed Khiereddine Kholladi

Abstract:

Recent research shows that the user profile, as an important element, can improve heterogeneous information retrieval along with its content. In this context, we present our indexing model for heterogeneous multimedia documents. This model is based on integrating the user profile into the indexing process. The general idea of our proposal is to exploit the concepts common to the representation of a document and the definition of a user through his profile. These two elements are added as additional indexing entities to enrich the indexes of the heterogeneous corpus documents. We have developed the IRONTO domain ontology, which allows annotation of documents. We also present the developed tool that validates the proposed model.

Keywords: indexing model, user profile, multimedia document, heterogeneity of sources, ontology

Procedia PDF Downloads 351
16951 An Improved C-Means Model for MRI Segmentation

Authors: Ying Shen, Weihua Zhu

Abstract:

Medical images are important for helping to identify different diseases; for example, Magnetic Resonance Imaging (MRI) can be used to investigate the brain, spinal cord, bones, joints, breasts, blood vessels, and heart. Image segmentation, in medical image analysis, is usually the first step to find out characteristics with similar color, intensity, or texture so that the diagnosis can be further carried out based on these features. This paper introduces an improved C-means model to segment MRI images. The model uses information entropy to evaluate the segmentation results while achieving global optimization. Several contributions are significant. Firstly, a Genetic Algorithm (GA) is used to achieve global optimization in this model, which the fuzzy C-means clustering algorithm (FCMA) is not capable of. Secondly, the information entropy after segmentation is used to measure the effectiveness of the MRI image processing. Experimental results show that the proposed model outperforms traditional approaches.
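
A small sketch of the entropy criterion mentioned above: given a label map produced by any C-means variant, the information entropy of the region proportions can be computed and used to compare segmentations. The GA search over cluster centres is omitted, and the thresholded synthetic image is only a stand-in for an MRI slice.

```python
import numpy as np

def segmentation_entropy(labels):
    """Shannon entropy of the distribution of pixels over segmented regions."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# Synthetic 64x64 "MRI slice" segmented into 3 regions by simple intensity thresholds
rng = np.random.default_rng(0)
image = rng.normal(loc=np.repeat([0.2, 0.5, 0.8], 64 * 64 // 3 + 1)[: 64 * 64], scale=0.05)
labels = np.digitize(image, bins=[0.35, 0.65]).reshape(64, 64)

print(f"segmentation entropy: {segmentation_entropy(labels):.3f} bits")
```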

Keywords: magnetic resonance image (MRI), c-means model, image segmentation, information entropy

Procedia PDF Downloads 229