Search results for: GARCH-in-Mean models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2523


483 Virtual 3D Environments for Image-Based Navigation Algorithms

Authors: V. B. Bastos, M. P. Lima, P. R. G. Kurka

Abstract:

This paper addresses the creation of virtual 3D environments for the study and development of image-based navigation algorithms and techniques for mobile robots, which need to operate robustly and efficiently. These algorithms can be tested physically, by conducting experiments on a prototype, or by numerical simulation. Current simulation platforms for robotic applications lack flexible and up-to-date models for image rendering and are unable to reproduce complex light effects and materials. It is therefore necessary to create a test platform that integrates sophisticated simulated reproductions of real navigation environments with data and image processing. This work proposes the development of a high-level platform for building 3D model environments and testing image-based navigation algorithms for mobile robots. Texture and lighting techniques were applied so that the rendered images accurately reproduce their real-world counterparts. The application will integrate image processing scripts, trajectory control, dynamic modeling and simulation techniques for physics representation, and picture rendering with the open-source 3D creation suite Blender.

Keywords: Simulation, visual navigation, mobile robot, data visualization.

482 Improving Spatiotemporal Change Detection: A High Level Fusion Approach for Discovering Uncertain Knowledge from Satellite Image Database

Authors: Wadii Boulila, Imed Riadh Farah, Karim Saheb Ettabaa, Basel Solaiman, Henda Ben Ghezala

Abstract:

This paper investigates the problem of tracking spatiotemporal changes in a satellite image through the use of Knowledge Discovery in Databases (KDD). The purpose of this study is to help a given user effectively discover interesting knowledge and then build prediction and decision models. Unfortunately, the KDD process for spatiotemporal data is always marked by several types of imperfection. In our paper, we take these imperfections into consideration in order to provide more accurate decisions. To achieve this objective, different KDD methods are used to discover knowledge in satellite image databases. Each method presents a different point of view on the spatiotemporal evolution of a query model (which represents an object extracted from a satellite image). In order to combine these methods, we use evidence fusion theory, which considerably improves the spatiotemporal knowledge discovery process and increases our belief in the spatiotemporal model change. Experimental results on satellite images of the Auckland region in New Zealand show an improvement in overall change detection compared to classical methods.
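As an illustration of the evidence-fusion step (Dempster-Shafer theory) mentioned above, the following minimal Python sketch combines two hypothetical mass functions over change/no-change hypotheses using Dempster's rule; the hypothesis names and mass values are invented and do not come from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset hypothesis -> mass)
    with Dempster's rule, normalising away the conflict K."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Hypothetical example: two KDD methods give evidence about a query model's change.
CHANGE, NO_CHANGE = frozenset({"change"}), frozenset({"no_change"})
THETA = CHANGE | NO_CHANGE                      # ignorance (whole frame)
m_method1 = {CHANGE: 0.6, THETA: 0.4}
m_method2 = {CHANGE: 0.5, NO_CHANGE: 0.2, THETA: 0.3}
print(dempster_combine(m_method1, m_method2))
```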

Keywords: Knowledge discovery in satellite databases, knowledge fusion, data imperfection, data mining, spatiotemporal change detection.

481 Computing Transition Intensity Using Time-Homogeneous Markov Jump Process: Case of South African HIV/AIDS Disposition

Authors: A. Bayaga

Abstract:

This research provides a technical account of estimating transition probabilities using a time-homogeneous Markov jump process, applied to South African HIV/AIDS data from Statistics South Africa. It employs a maximum likelihood estimation (MLE) model to explore the possible influence of transition probabilities on mortality cases, with the data based on actual Statistics South Africa records. This was conducted via an integrated demographic and epidemiological model of the South African HIV/AIDS epidemic. The model was fitted to age-specific HIV prevalence data and recorded death data using the MLE model. Although previous model results suggest that HIV prevalence and AIDS mortality rates in South Africa declined over 2002-2013, our results differ markedly from the generally accepted HIV models (Spectrum/EPP and ASSA2008) for South Africa. Supplementary research is needed to refine the demographic parameters in the model and to apply it to each of the nine provinces of South Africa.
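For context, the maximum likelihood estimator for a time-homogeneous Markov jump process has a simple closed form: the intensity of an r-to-s transition is the observed number of r-to-s jumps divided by the total time spent in state r. The sketch below illustrates this on an invented three-state layout; the states, counts and exposure times are placeholders, not the Statistics South Africa data.

```python
import numpy as np
from scipy.linalg import expm

# States of a simple illness-death style model (hypothetical labels).
states = ["HIV-", "HIV+", "AIDS death"]

# Hypothetical aggregated observations: n_trans[r][s] = number of observed
# r -> s jumps, time_in_state[r] = total person-years spent in state r.
n_trans = np.array([[0, 40, 5],
                    [0,  0, 25],
                    [0,  0,  0]], dtype=float)
time_in_state = np.array([5000.0, 1200.0, 0.0])

# MLE for a time-homogeneous jump process: q_rs = N_rs / T_r (r != s),
# with the diagonal set so that each row of the generator sums to zero.
Q = np.zeros_like(n_trans)
for r in range(len(states)):
    if time_in_state[r] > 0:
        Q[r] = n_trans[r] / time_in_state[r]
    Q[r, r] = -Q[r].sum()

# Transition probability matrix over t years: P(t) = expm(Q * t).
print(np.round(Q, 5))
print(np.round(expm(Q * 10.0), 4))   # 10-year transition probabilities
```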

Keywords: AIDS mortality rates, Epidemiological model, Time-homogeneous Markov Jump Process, Transition Probability, Statistics South Africa.

480 Business Intelligence for N=1 Analytics using Hybrid Intelligent System Approach

Authors: Rajendra M Sonar

Abstract:

The future of business intelligence (BI) lies in integrating intelligence into operational systems that work in real time, continuously analyzing small chunks of data as required. This moves away from the traditional approach of analyzing huge amounts of data on an ad-hoc or sporadic basis in a passive, off-line mode. Various AI techniques, such as expert systems, case-based reasoning and neural networks, play an important role in building intelligent business systems. Since BI involves various tasks and models various types of problems, hybrid intelligent techniques can be a better choice. Intelligent systems accessible through web services are easier to integrate into existing operational systems, adding intelligence to every business process. They can be built to be invoked in a modular and distributed way and to work in real time. The functionality of such systems can be extended to accept external inputs in formats such as RSS. In this paper, we describe a framework that uses effective combinations of these techniques, is accessible through web services and works in real time. We have successfully developed various prototype systems and completed a few commercial deployments in the area of personalization and recommendation on mobile platforms and websites.

Keywords: Business Intelligence, Customer Relationship Management, Hybrid Intelligent Systems, Personalization and Recommendation (P&R), Recommender Systems.

479 MDA of Hexagonal Honeycomb Plates used for Space Applications

Authors: A. Boudjemai, M.H. Bouanane, Mankour, R. Amri, H. Salem, B. Chouchaoui

Abstract:

The purpose of this paper is to perform a multidisciplinary design and analysis (MDA) of honeycomb panels used in satellite structural design. All the analysis is based on clamped-free boundary conditions. In the present work, detailed finite element models for honeycomb panels are developed and analysed. Experimental tests were carried out on a honeycomb specimen, with the goal of comparing the results against the preceding finite element modal analysis and the existing equivalent approaches. The results show good agreement between the finite element, equivalent-model and test results; the difference is less than 4% for the first two frequencies and less than 10% for the third frequency. The equivalent model presented in this analysis therefore achieves good accuracy. Moreover, the modal analysis of the honeycomb plate is investigated under several aspects, including geometrical variation, by studying the influence of the dimension parameters on the modal frequency, and variation of the honeycomb core and skin materials. The results obtained in this paper are promising and show that the geometry parameters and the type of material affect the value of the honeycomb plate modal frequency.

Keywords: Satellite, honeycomb, finite element method, modal frequency, dynamic.

478 Modeling and Simulating Reaction-Diffusion Systems with State-Dependent Diffusion Coefficients

Authors: Paola Lecca, Lorenzo Dematte, Corrado Priami

Abstract:

Present models and simulation algorithms of intracellular stochastic kinetics are usually based on the premise that diffusion is so fast that the concentrations of all the involved species are homogeneous in space. However, recent experimental measurements of intracellular diffusion constants indicate that the assumption of a homogeneous, well-stirred cytosol is not necessarily valid even for small prokaryotic cells. In this work, a mathematical treatment of diffusion that can be incorporated into a stochastic algorithm simulating the dynamics of a reaction-diffusion system is presented. The movement of a molecule A from a region i to a region j of the space is represented as a first-order reaction A_i -> A_j with rate constant k, where k depends on the diffusion coefficient. The diffusion coefficients are modeled as functions of the local concentration of the solutes, their intrinsic viscosities, their frictional coefficients and the temperature of the system. The stochastic time evolution of the system is given by the occurrence of diffusion events and chemical reaction events. At each time step an event (reaction or diffusion) is selected from a probability distribution of waiting times determined by the intrinsic reaction kinetics and diffusion dynamics. To demonstrate the method, simulation results for the reaction-diffusion system of chaperone-assisted protein folding in the cytoplasm are shown.
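A minimal sketch of the simulation idea, assuming a toy two-compartment system: diffusion of A between regions is handled as a first-order event alongside a dimerization reaction inside each region, and events are drawn Gillespie-style from exponentially distributed waiting times. The rate constants and copy numbers are illustrative, and the paper's concentration-dependent diffusion coefficients are replaced here by a fixed rate.

```python
import random

# Two spatial regions; species A diffuses between them and dimerises: A + A -> B.
state = {"A": [100, 0], "B": [0, 0]}   # copy numbers in regions 0 and 1
k_dim = 0.001                          # dimerisation rate constant (illustrative)
k_diff = 0.05                          # diffusion treated as first-order reaction A_i -> A_j

def make_events(s):
    """List of (propensity, state_update) pairs for the current state."""
    events = []
    for i in (0, 1):
        j = 1 - i
        def diffuse(i=i, j=j):
            s["A"][i] -= 1; s["A"][j] += 1
        def dimerise(i=i):
            s["A"][i] -= 2; s["B"][i] += 1
        events.append((k_diff * s["A"][i], diffuse))
        events.append((k_dim * s["A"][i] * (s["A"][i] - 1) / 2, dimerise))
    return events

t, t_end = 0.0, 50.0
while t < t_end:
    events = make_events(state)
    a0 = sum(a for a, _ in events)     # total propensity
    if a0 == 0:
        break
    t += random.expovariate(a0)        # exponential waiting time to next event
    r, acc = random.uniform(0, a0), 0.0
    for a, update in events:           # choose an event with probability a / a0
        acc += a
        if r <= acc:
            update()
            break

print(round(t, 2), state)
```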

Keywords: Reaction-diffusion systems, diffusion coefficient, stochastic simulation algorithm.

477 Electro-Thermal Imaging of Breast Phantom: An Experimental Study

Authors: H. Feza Carlak, N. G. Gencer

Abstract:

To increase the temperature contrast in thermal images, the characteristics of the electrical conductivity and thermal imaging modalities can be combined. In this experimental study, the objective is to observe whether the temperature contrast created by the tumor tissue can be improved solely by current application within medical safety limits. Various thermal breast phantoms are developed to simulate the female breast tissue. In vitro experiments are implemented using a thermal infrared camera in a controlled manner. Since the experiments are implemented in vitro, there is no metabolic heat generation or blood perfusion; only the effects and results of the electrical stimulation are investigated. The experimental study is implemented with two-dimensional models. Temperature contrasts due to the tumor tissues are obtained. Cancerous tissue is determined using the difference and ratio of healthy and tumor images. A single tumor tissue of 1 cm diameter causes a temperature contrast of almost 40 m°C on the thermal breast phantom. Electrode artifacts are reduced by taking the difference and ratio of background (healthy) and tumor images. The ratio of healthy and tumor images shows that the temperature contrast is increased by the current application.
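The artifact-reduction step (difference and ratio of healthy and tumor frames) can be sketched with NumPy; the arrays below are synthetic stand-ins for the thermal camera images, with an inserted hot spot of roughly 40 m°C.

```python
import numpy as np

# Synthetic stand-ins for thermal camera frames (temperatures in deg C).
rng = np.random.default_rng(0)
healthy = 32.0 + 0.05 * rng.standard_normal((240, 320))    # background breast phantom
tumor = healthy.copy()
yy, xx = np.mgrid[0:240, 0:320]
inclusion = (yy - 120) ** 2 + (xx - 160) ** 2 < 15 ** 2     # ~1 cm tumor region
tumor[inclusion] += 0.040                                   # ~40 m deg C contrast after current injection

# Difference and ratio images suppress common-mode structure such as electrode artifacts.
diff_image = tumor - healthy
ratio_image = tumor / healthy

print("max contrast in difference image: %.3f degC" % diff_image.max())
print("max ratio deviation from 1: %.5f" % abs(ratio_image - 1.0).max())
```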

Keywords: Medical diagnostic imaging, breast phantom, active thermography, breast cancer detection.

476 Impacts of Climate Change on Water Resources of Greater Zab and Lesser Zab Basins, Iraq, Using Soil and Water Assessment Tool Model

Authors: Nahlah Abbas, Saleh A. Wasimi, Nadhir Al-Ansari

Abstract:

The Greater Zab and Lesser Zab are the major tributaries of the Tigris River, contributing the largest flow volumes into the river. The impacts of climate change on water resources in these basins have not been well addressed. To gain a better understanding of the effects of climate change on the water resources of the study area in the near future (2049-2069) as well as in the distant future (2080-2099), the Soil and Water Assessment Tool (SWAT) was applied. The model was first calibrated for the period from 1979 to 2004 to test its suitability for describing the hydrological processes in the basins. The SWAT model showed good performance in simulating streamflow. The calibrated model was then used to evaluate the impacts of climate change on water resources. Six general circulation models (GCMs) from phase five of the Coupled Model Intercomparison Project (CMIP5), under three Representative Concentration Pathways (RCP 2.6, RCP 4.5, and RCP 8.5) and for the periods 2049-2069 and 2080-2099, were used to project the climate change impacts on these basins. The results demonstrated a significant decline in water resources availability in the future.

Keywords: Tigris River, climate change, water resources, SWAT.

475 Analytical Modelling of Surface Roughness during Compacted Graphite Iron Milling Using Ceramic Inserts

Authors: S. Karabulut, A. Güllü, A. Güldas, R. Gürbüz

Abstract:

This study investigates the effects of the lead angle and chip thickness variation on surface roughness during the machining of compacted graphite iron using ceramic cutting tools under dry cutting conditions. Analytical models were developed for predicting the surface roughness values of the specimens after the face milling process. Experimental data was collected and imported to the artificial neural network model. A multilayer perceptron model was used with the back propagation algorithm employing the input parameters of lead angle, cutting speed and feed rate in connection with chip thickness. Furthermore, analysis of variance was employed to determine the effects of the cutting parameters on surface roughness. Artificial neural network and regression analysis were used to predict surface roughness. The values thus predicted were compared with the collected experimental data, and the corresponding percentage error was computed. Analysis results revealed that the lead angle is the dominant factor affecting surface roughness. Experimental results indicated an improvement in the surface roughness value with decreasing lead angle value from 88° to 45°.
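A minimal scikit-learn sketch of the multilayer perceptron approach described above, assuming inputs of lead angle, cutting speed, feed rate and chip thickness and an Ra output; the data are synthetic placeholders, not the paper's measurements, and the network size is arbitrary.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Placeholder dataset: [lead angle (deg), cutting speed (m/min), feed rate (mm/tooth), chip thickness (mm)].
rng = np.random.default_rng(1)
X = np.column_stack([
    rng.uniform(45, 88, 200),      # lead angle
    rng.uniform(300, 700, 200),    # cutting speed
    rng.uniform(0.05, 0.25, 200),  # feed rate
    rng.uniform(0.02, 0.15, 200),  # chip thickness
])
# Synthetic Ra that grows with lead angle and feed, as the experiments suggest.
y = 0.01 * X[:, 0] + 4.0 * X[:, 2] + 2.0 * X[:, 3] + 0.05 * rng.standard_normal(200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Multilayer perceptron trained with a gradient (back-propagation style) solver.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=0))
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("mean absolute percentage error: %.1f%%"
      % (100 * np.mean(np.abs((pred - y_test) / y_test))))
```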

Keywords: CGI, milling, surface roughness, ANN, regression, modeling, analysis.

474 Real-time Network Anomaly Detection Systems Based on Machine-Learning Algorithms

Authors: Zahra Ramezanpanah, Joachim Carvallo, Aurelien Rodriguez

Abstract:

This paper aims to detect anomalies in streaming data using machine learning algorithms. In this regard, we designed two separate pipelines and evaluated the effectiveness of each separately. The first pipeline, based on supervised machine learning methods, consists of two phases. In the first phase, we trained several supervised models using the UNSW-NB15 data set. We measured the efficiency of each using different performance metrics and selected the best model for the second phase. At the beginning of the second phase, we sniffed a local area network using Argus Server. Several types of attacks were simulated, and the sniffed data were then sent to a running algorithm at short intervals. This algorithm uses the trained model to display the result for each received data packet in real time. The second pipeline presented in this paper is based on unsupervised algorithms, in which a Temporal Graph Network (TGN) is used to monitor a local network. The TGN is trained to predict the probability of future states of the network based on its past behavior. Our contribution in this section is introducing an indicator to identify anomalies from these predicted probabilities.
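A compact sketch of the two ideas, under stated assumptions: a supervised classifier trained on placeholder flow features (standing in for the UNSW-NB15 columns), and an indicator that flags a record when the predicted probability of the normal state falls below a threshold, loosely analogous to the indicator built on the TGN's predicted probabilities.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Placeholder flow features (duration, bytes, packets, rate) and labels
# standing in for the UNSW-NB15 columns; 1 = attack, 0 = normal.
rng = np.random.default_rng(42)
normal = rng.normal(loc=[1.0, 500, 10, 50], scale=[0.3, 150, 3, 15], size=(800, 4))
attack = rng.normal(loc=[0.2, 5000, 80, 400], scale=[0.1, 800, 20, 80], size=(200, 4))
X = np.vstack([normal, attack])
y = np.array([0] * 800 + [1] * 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))

# Anomaly indicator for streaming use: flag a record when the predicted
# probability of the "normal" state drops below a threshold.
def anomaly_indicator(model, record, threshold=0.5):
    p_normal = model.predict_proba(record.reshape(1, -1))[0, 0]
    return p_normal < threshold

print(anomaly_indicator(clf, X_test[0]))
```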

Keywords: Cyber-security, Intrusion Detection Systems, Temporal Graph Network, Anomaly Detection.

473 Developing and Validating an Instrument for Measuring Mobile Government Adoption in Saudi Arabia

Authors: Sultan Alotaibi, Dmitri Roussinov

Abstract:

Many governments have recently started to change the way they provide their services, allowing citizens to access services from anywhere without the need to visit the location of the service provider. Mobile government (m-government) is one of the techniques that fulfill that goal, and it has been adopted by many governments. M-government can be defined as an implementation of electronic government (e-government) using mobile technology, with the aim of improving service delivery to citizens, businesses and all government agencies. Several research projects have developed models to understand the behavior of individuals towards the adoption of m-government. This paper proposes a model for the adoption of m-government services in Saudi Arabia by extending the Technology Acceptance Model (TAM) with external factors. The paper also reports on the development of a survey instrument designed to measure user perception of mobile government acceptance. The survey instrument was developed using existing scales from prior instruments, and a pilot study was conducted by distributing the survey to 33 participants. As a result, the survey instrument was refined to retain 43 items. The results also showed that the reliabilities of all the scales in the survey instrument are above the levels acceptable in current academic research; thus, the instrument we developed is capable of analyzing the factors in m-government adoption.
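The scale-reliability check referred to above is commonly Cronbach's alpha; a small NumPy sketch on hypothetical pilot responses (33 participants, one 5-item scale) is shown below. The data and the 0.7 cut-off are illustrative, not taken from the paper.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (respondents x items) matrix of scores for one scale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical pilot responses (33 participants x 5 items) on a 7-point scale.
rng = np.random.default_rng(7)
latent = rng.integers(2, 7, size=(33, 1))
responses = np.clip(latent + rng.integers(-1, 2, size=(33, 5)), 1, 7)

alpha = cronbach_alpha(responses)
print("Cronbach's alpha = %.2f (>= 0.7 is commonly taken as acceptable)" % alpha)
```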

Keywords: TAM, m-government, e-government, model, acceptance.

472 Solubility of Water in CO2 Mixtures at Pipeline Operation Conditions

Authors: Mohammad Ahmad, Sander Gersen, Erwin Wilbers

Abstract:

Carbon capture, transport and underground storage have become a major solution to reduce CO2 emissions from power plants and other large CO2 sources. A large part of this captured CO2 stream is transported at high-pressure, dense-phase conditions and stored in offshore underground depleted oil and gas fields. CO2 is also transported in offshore pipelines to be used for enhanced oil and gas recovery. The captured CO2 stream with impurities may contain water, which causes severe corrosion problems and flow assurance failures and may damage valves and instrumentation. Thus, free water formation should be strictly prevented. The purpose of this work is to study the solubility of water in pure CO2 and in CO2 mixtures under real pipeline pressure (90-150 bar) and temperature (5-35°C) operating conditions. A setup was constructed to generate experimental data. The results show that the solubility of water in CO2 mixtures increases with increasing temperature and/or pressure. A drop in water solubility in CO2 is observed in the presence of impurities. The data generated were then used to assess the capabilities of two mixture models: the GERG-2008 model and the EOS-CG model. By generating these solubility data, this study contributes to determining the maximum allowable water content in CO2 pipelines.

Keywords: Carbon capture and storage, water solubility, equation of states.

471 Current Distribution and Cathode Flooding Prediction in a PEM Fuel Cell

Authors: A. Jamekhorshid, G. Karimi, I. Noshadi, A. Jahangiri

Abstract:

Non-uniform current distribution in polymer electrolyte membrane fuel cells results in local over-heating, accelerated ageing, and lower power output than expected. This issue is critical when the fuel cell experiences water flooding. In this work, the performance of a PEM fuel cell is investigated under cathode flooding conditions. Two-dimensional partially flooded GDL models based on the conservation laws and electrochemical relations are proposed to study local current density distributions along flow fields over a wide range of cell operating conditions. The model results show that increasing the cathode inlet humidity increases the average current density, but the system becomes more sensitive to flooding. The anode inlet relative humidity shows a similar effect. Operating the cell at higher temperatures leads to higher average current densities and reduces the chance of the system being flooded. In addition, higher cathode stoichiometries prevent system flooding, but the average current density remains almost constant. A higher anode stoichiometry leads to a higher average current density and higher sensitivity to cathode flooding.

Keywords: Current distribution, Flooding, Hydrogen energy system, PEM fuel cell.

470 Financial Information and Collective Bargaining: Conflicting or Complementing?

Authors: Humayun Murshed, Shibly Abdullah

Abstract:

The research conducted in the early seventies apparently assumed the existence of a universal decision model for union negotiators and, furthermore, tended to regard financial information as a ‘neutral’ input into a rational decision-making process. However, research in the eighties began to question the neutrality of financial information as an input in collective bargaining, viewing it instead as a potentially effective means of controlling the labour force. This later research also started challenging the simplistic assumptions, relating particularly to union objectives, which had underpinned the earlier search for universal union decision models. Despite these developments, there seems to be a dearth of studies in developing countries concerning the use of financial information in collective bargaining. This paper seeks to begin to remedy this deficiency. Utilising a case study approach based on two enterprises, one in the public sector and the other a multinational, the universal decision model is rejected and it is argued that the decision whether or not to use financial information is a contingent one, the contingency being largely defined by the context and environment in which both union and management negotiators work. An attempt is also made to identify the factors constraining as well as promoting the use of financial information in collective bargaining, these being regarded as unique to the organisations within which the case studies were conducted.

Keywords: Collective Bargaining, Developing Countries, Disclosures, Financial Information.

469 Inversion of Electrical Resistivity Data: A Review

Authors: Shrey Sharma, Gunjan Kumar Verma

Abstract:

High-density electrical prospecting has been widely used in groundwater investigation, civil engineering and environmental surveys. For efficient inversion, the forward modeling routine, sensitivity calculation, and inversion algorithm must all be efficient. This paper attempts to provide a brief summary of the past and ongoing developments of the method. It includes reviews of the procedures used for data acquisition, processing and inversion of electrical resistivity data, based on a compilation of the academic literature. In recent times there has been a significant evolution in field survey designs and data inversion techniques for the resistivity method. In general, 2-D inversion of resistivity data is carried out using the linearized least-squares method with a local optimization technique. Multi-electrode and multi-channel systems have made it possible to conduct large 2-D, 3-D and even 4-D surveys efficiently to resolve complex geological structures that were not possible with traditional 1-D surveys. 3-D surveys play an increasingly important role in very complex areas where 2-D models suffer from artifacts due to off-line structures. Continued developments in computation technology, as well as fast data inversion techniques and software, have made it possible to use optimization techniques to obtain model parameters to a higher accuracy. A brief discussion of the limitations of the electrical resistivity method is also presented.
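The linearized least-squares update at the heart of 2-D resistivity inversion can be illustrated generically: the damped (Marquardt-style) model step is dm = (J'J + lambda*I)^-1 J'(d_obs - f(m)). The sketch below uses a toy exponential forward model in place of a real resistivity forward routine, with a fixed damping factor; in practice lambda would be adapted between iterations.

```python
import numpy as np

def forward(m, x):
    """Toy stand-in for a resistivity forward model: d = m0 * exp(-m1 * x) + m2."""
    return m[0] * np.exp(-m[1] * x) + m[2]

def jacobian(m, x):
    """Analytic sensitivities of the toy forward model."""
    return np.column_stack([np.exp(-m[1] * x),
                            -m[0] * x * np.exp(-m[1] * x),
                            np.ones_like(x)])

# Synthetic "observed" data from a known model plus noise.
x = np.linspace(0.1, 5.0, 40)
m_true = np.array([120.0, 0.8, 30.0])
rng = np.random.default_rng(3)
d_obs = forward(m_true, x) + rng.normal(0, 1.0, x.size)

# Damped (Marquardt) linearized least-squares iterations.
m, lam = np.array([80.0, 0.3, 10.0]), 1.0
for _ in range(20):
    r = d_obs - forward(m, x)                  # data residual
    J = jacobian(m, x)                         # sensitivity matrix
    dm = np.linalg.solve(J.T @ J + lam * np.eye(3), J.T @ r)
    m = m + dm
print("estimated model:", np.round(m, 3))
```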

Keywords: Resistivity, inversion, optimization.

468 Investigation on Toxicity of Manufactured Nanoparticles to Bioluminescence Bacteria Vibrio fischeri

Authors: E. Binaeian, SH. Soroushnia

Abstract:

The acute toxicity of nano SiO2, ZnO, MCM-41 (mesoporous silica), Cu, multi-wall carbon nanotubes (MWCNT), single-wall carbon nanotubes (SWCNT) and coated Fe to the bacterium Vibrio fischeri was evaluated using a homemade luminometer. The nominal effective concentrations (EC) causing 20% and 50% inhibition of bioluminescence were calculated using two mathematical models at exposure times of 5 and 30 minutes. The luminometer was designed with a photomultiplier (PMT) detector. The luminol chemiluminescence reaction was used to obtain the calibration graph. In the linear calibration range, the correlation coefficient and coefficient of variation (CV) were 0.988 and 3.21%, respectively, which demonstrates suitable accuracy and reproducibility of the instrument. An important part of this research was optimizing the conditions for maximum bioluminescence. Vibrio fischeri cultures in liquid media under optimal conditions were stirred at 120 rpm at a temperature of 15°C to 18°C and incubated for 24 to 72 hours, while the solid medium was held at 18°C for 48 hours. After a 30-minute contact time with Vibrio fischeri, the ZnO nanoparticle suspension showed the highest toxicity, while SiO2 nanoparticles showed the lowest toxicity. After a 5-minute exposure time, ZnO was the strongest and MCM-41 the weakest toxicant.
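One common way to obtain EC20/EC50 values is to fit a log-logistic (Hill-type) inhibition curve to the luminescence data; the sketch below uses SciPy's curve_fit on invented inhibition percentages, and the paper's own two mathematical models may differ from this form.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_inhibition(c, ec50, h):
    """Percent inhibition of bioluminescence as a function of concentration c (mg/L)."""
    return 100.0 * c**h / (ec50**h + c**h)

# Hypothetical 30-minute inhibition data for one nanoparticle suspension.
conc = np.array([0.5, 1, 2, 5, 10, 20, 50, 100.0])   # mg/L
inhib = np.array([4, 9, 18, 35, 52, 68, 84, 92.0])   # % inhibition

(ec50, h), _ = curve_fit(hill_inhibition, conc, inhib, p0=[10.0, 1.0])

# EC20 follows from the fitted curve: solve hill_inhibition(c) = 20 for c.
ec20 = ec50 * (20.0 / 80.0) ** (1.0 / h)
print("EC50 = %.1f mg/L, EC20 = %.1f mg/L (h = %.2f)" % (ec50, ec20, h))
```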

Keywords: Bioluminescence, effective concentration, nanomaterials, toxicity, Vibrio fischeri.

467 Face Localization and Recognition in Varied Expressions and Illumination

Authors: Hui-Yu Huang, Shih-Hang Hsu

Abstract:

In this paper, we propose a robust scheme for face alignment and recognition under various influences. For face representation, illumination and variable expressions are the important factors affecting the accuracy of facial localization and face recognition. To address these factors, we propose a robust approach that overcomes these problems. This approach consists of two phases. In the first phase, face images are preprocessed by means of the proposed illumination normalization method, and facial features can be located more efficiently and quickly based on the proposed image blending. In addition, based on template matching, we further improve the active shape model (termed IASM) to locate the face shape more precisely, which raises the recognition rate in the next phase. The second phase performs feature extraction using principal component analysis and face recognition using support vector machine classifiers. The results show that the proposed method achieves good facial localization and face recognition under varied illumination and local distortion.
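The second phase (PCA feature extraction followed by SVM classification) maps directly onto a standard scikit-learn pipeline; the sketch below uses the bundled digits images as a stand-in for aligned, illumination-normalized face crops so that it runs without downloads, and the component count and SVM settings are arbitrary.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Placeholder image data: the bundled digits images stand in for aligned,
# illumination-normalised face crops (one class per person in the real setting).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Phase 2 of the scheme: project onto principal components, then classify with an SVM.
model = make_pipeline(PCA(n_components=40, whiten=True, random_state=0),
                      SVC(kernel="rbf", C=10.0, gamma="scale"))
model.fit(X_train, y_train)
print("recognition accuracy on held-out images: %.3f" % model.score(X_test, y_test))
```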

Keywords: Gabor filter, improved active shape model (IASM), principal component analysis (PCA), face alignment, face recognition, support vector machine (SVM).

466 Simulation Studies of Solid-Particle and Liquid-Drop Erosion of NiAl Alloy

Authors: Rong Liu, Kuiying Chen, Ju Chen, Jingrong Zhao, Ming Liang

Abstract:

This article presents modeling studies of NiAl alloy under solid-particle erosion and liquid-drop erosion. In the solid-particle erosion simulation, attention is paid to the oxide scale thickness variation on the alloy in high-temperature erosion environments. The erosion damage is assumed to be deformation wear and cutting wear mechanisms, incorporating the influence of the oxide scale on the eroded surface; thus the instantaneous oxide thickness is the result of synergetic effect of erosion and oxidation. For liquid-drop erosion, special interest is in investigating the effects of drop velocity and drop size on the damage of the target surface. The models of impact stress wave, mean depth of penetration, and maximum depth of erosion rate (Max DER) are employed to develop various maps for NiAl alloy, including target thickness vs. drop size (diameter), rate of mean depth of penetration (MDRP) vs. drop impact velocity, and damage threshold velocity (DTV) vs. drop size.

Keywords: Liquid-drop erosion, NiAl alloy, oxide scale thickness, solid-particle erosion.

465 Three-Dimensional, Non-Linear Finite Element Analysis of Bullet Penetration through Thin AISI 4340 Steel Target Plate

Authors: Abhishek Soni, A. Kumaraswamy, M. S. Mahesh

Abstract:

Bullet penetration in a steel plate is investigated with the help of three-dimensional, non-linear, transient, dynamic finite element analysis using the explicit time integration code LSDYNA. The effects of large strain, strain rate and temperature in the very high velocity regime were studied through a number of simulations of semi-spherical-nose bullet penetration through a single-layered circular plate of 2 mm thickness at impact velocities of 500, 1000, and 1500 m/s, using the Johnson-Cook material model. The Mie-Gruneisen equation of state is used in conjunction with the Johnson-Cook material model to determine the pressure-volume relationship at various points of interest. The two material models, viz. Plastic-Kinematic and Johnson-Cook, resulted in different deformation patterns in the steel plate. It is observed from the simulation results that the velocity drop and loss of kinetic energy occur very quickly up to perforation of the plate; after that, the changes in velocity and kinetic energy are negligibly small. The physics behind this kind of behaviour is presented in the paper.
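The Johnson-Cook model gives the flow stress as sigma = (A + B*eps_p^n) * (1 + C*ln(eps_dot/eps_dot0)) * (1 - T*^m), with T* = (T - T_room)/(T_melt - T_room). The sketch below evaluates it with parameter values commonly quoted for AISI 4340 steel; the paper's calibration may differ, and the strain, strain rates and temperature are illustrative.

```python
import numpy as np

def johnson_cook_stress(eps_p, eps_dot, T,
                        A=792e6, B=510e6, n=0.26, C=0.014, m=1.03,
                        eps_dot0=1.0, T_room=293.0, T_melt=1793.0):
    """Johnson-Cook flow stress (Pa).
    eps_p: equivalent plastic strain, eps_dot: strain rate (1/s), T: temperature (K).
    Default constants are values commonly quoted for AISI 4340 steel."""
    strain_term = A + B * eps_p**n
    rate_term = 1.0 + C * np.log(np.maximum(eps_dot / eps_dot0, 1.0))
    T_star = np.clip((T - T_room) / (T_melt - T_room), 0.0, 1.0)
    thermal_term = 1.0 - T_star**m
    return strain_term * rate_term * thermal_term

# Flow stress at 30% plastic strain for several high strain rates (illustrative values).
for rate in (1e3, 1e4, 1e5):
    sigma = johnson_cook_stress(eps_p=0.3, eps_dot=rate, T=600.0)
    print("strain rate %.0e 1/s -> flow stress %.0f MPa" % (rate, sigma / 1e6))
```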

Keywords: AISI 4340 steel, ballistic impact simulation, bullet penetration, non-linear FEM.

464 The Role of Business Process Management in Driving Digital Transformation: Insurance Company Case Study

Authors: Dalia Suša Vugec, Ana-Marija Stjepić, Darija Ivandić Vidović

Abstract:

Digital transformation is one of the latest trends on the global market. In order to maintain competitive advantage and sustainability, an increasing number of organizations are conducting digital transformation processes. Those organizations are changing their business processes and creating new business models with the help of digital technologies. In that sense, one should also observe the role of business process management (BPM) and its maturity in driving digital transformation. Therefore, the goal of this paper is to investigate the role of BPM in the digital transformation process within one organization. Since experience from practice shows that organizations from the financial sector can be regarded as leaders in digital transformation, an insurance company was selected to participate in the study. The company was selected due to the high level of its BPM maturity and the fact that it has previously been through a digital transformation process. In order to fulfill the goals of the paper, several interviews, as well as questionnaires, were conducted within the selected company. The results are presented in the form of a case study. The results indicate that the digital transformation process within the observed company has been successful, with special focus on the development of digital strategy, BPM and change management. The role of BPM in the digital transformation of the observed company is further discussed in the paper.

Keywords: Business process management, case study, Croatia, digital transformation, insurance company.

463 Unsupervised Classification of DNA Barcodes Species Using Multi-Library Wavelet Networks

Authors: Abdesselem Dakhli, Wajdi Bellil, Chokri Ben Amar

Abstract:

DNA barcodes provide a good source of the information needed to classify living species. The classification problem has to be supported with reliable methods and algorithms. To analyze species regions or entire genomes, it becomes necessary to use sequence similarity methods. A large set of sequences can be compared simultaneously using Multiple Sequence Alignment, which is known to be NP-complete. However, the methods in use are still computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable. In fact, our method avoids the complex problem of form and structure in different classes of organisms. The empirical data and their classification performance are compared with other methods. In this study, we present our system, which consists of three phases. The first phase, transformation, is composed of three sub-steps: Electron-Ion Interaction Pseudopotential (EIIP) encoding of the DNA barcodes, Fourier transform, and power spectrum signal processing. The second phase is an approximation, performed using Multi-Library Wavelet Neural Networks (MLWNN). Finally, the third phase, the classification of the DNA barcodes, is realized by applying a hierarchical classification algorithm.
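The transformation phase can be sketched with NumPy/SciPy: map each barcode onto its EIIP numerical sequence, take the Fourier power spectrum, and cluster the spectral features hierarchically. The MLWNN approximation step is omitted, the toy sequences are made up, and the EIIP values used are the commonly published ones; all of these are assumptions, not taken from the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Electron-Ion Interaction Pseudopotential values commonly used for the four nucleotides.
EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

def power_spectrum(seq, n_bins=32):
    """EIIP encoding followed by an FFT power spectrum, resampled to a fixed length."""
    signal = np.array([EIIP[b] for b in seq])
    spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    # Resample to a common feature length so barcodes of different sizes are comparable.
    return np.interp(np.linspace(0, 1, n_bins),
                     np.linspace(0, 1, spectrum.size), spectrum)

# Made-up toy barcodes (real COI barcodes are ~650 bp).
barcodes = {
    "sample1": "ATGGCATTTACGCGATTAGCATCG" * 4,
    "sample2": "ATGGCATTTACGCGATTAGCATCG" * 3 + "ATGGCATTAACGCGATTAGCTTCG",
    "sample3": "TTACGGGCGCGCATATATCGCGCG" * 4,
    "sample4": "TTACGGGCGCGCATATATCGCGCA" * 4,
}
features = np.array([power_spectrum(s) for s in barcodes.values()])

# Hierarchical classification of the spectral features.
tree = linkage(features, method="average", metric="euclidean")
labels = fcluster(tree, t=2, criterion="maxclust")
print(dict(zip(barcodes, labels)))
```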

Keywords: DNA Barcode, Electron-Ion Interaction Pseudopotential, Multi Library Wavelet Neural Networks.

462 Mixed Integer Programming for Multi-Tier Rebate with Discontinuous Cost Function

Authors: Y. Long, L. Liu, K. V. Branin

Abstract:

One challenge faced by procurement decision-makers during the acquisition process is how to compare similar products from different suppliers and allocate orders among different products or services. This work focuses on allocating orders among multiple suppliers while considering rebates. The objective is to minimize the total acquisition cost, including the purchasing cost and the rebate benefit. The rebate benefit is complex and difficult to estimate at the ordering step, and rebate rules vary across suppliers and usually change over time. In this work, we developed a system to collect and standardize the rebate policies and built two-stage optimization models for order allocation. Multi-tier rebate policies are considered in the modeling. The discontinuous cost function of the rebate benefit is formulated for different scenarios and approximated by a piecewise linear function. A Mixed Integer Programming (MIP) model is then built for the order allocation problem with multi-tier rebates. A case study is presented, and it shows that our optimization model can reduce the total acquisition cost by taking rebate rules into account.
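A minimal sketch of how binary variables select the active rebate tier in a MIP, assuming an all-units rebate and the PuLP modelling library; the demand, price and tier data are invented, and the two-stage structure of the paper's model is not reproduced.

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, value

demand = 1000            # units to allocate (invented data)
unit_price = 10.0
# All-units rebate tiers: (lower bound, upper bound, rebate rate).
tiers = [(0, 499, 0.00), (500, 899, 0.05), (900, 2000, 0.10)]

prob = LpProblem("multi_tier_rebate", LpMinimize)
z = [LpVariable(f"tier_{k}", cat=LpBinary) for k in range(len(tiers))]   # active tier flags
q = [LpVariable(f"qty_{k}", lowBound=0) for k in range(len(tiers))]      # quantity booked in tier k

prob += lpSum(unit_price * (1 - r) * q[k] for k, (_, _, r) in enumerate(tiers))  # total cost
prob += lpSum(q) == demand                    # meet demand
prob += lpSum(z) == 1                         # exactly one tier is active
for k, (lo, hi, _) in enumerate(tiers):
    prob += q[k] >= lo * z[k]                 # quantity must fall inside the active tier
    prob += q[k] <= hi * z[k]

prob.solve()
print("chosen tier:", [k for k in range(len(tiers)) if value(z[k]) > 0.5])
print("total cost:", value(prob.objective))
```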

Keywords: Discontinuous cost function, mixed integer programming, optimization, procurement, rebate.

461 Hybrid of Hunting Search and Modified Simplex Methods for Grease Position Parameter Design Optimisation

Authors: P. Luangpaiboon, S. Boonhao

Abstract:

This study formulates a multi-response surface optimization problem (MRSOP) for determining the proper choices in a process parameter design (PPD) decision problem for a grease position process, operating in a noisy environment, in the electronics industry. The proposed model attempts to maximize dual process responses, the mean parts between failures on the left and right processes. The conventional modified simplex method and its hybridization with the stochastic operator of the hunting search algorithm are applied to determine the proper levels of the controllable design parameters affecting the quality performance. A numerical example demonstrates the feasibility of applying the proposed model to the PPD problem via the two iterative methods, and their advantages are also discussed. The numerical results demonstrate that the hybridization is superior to the conventional method: the mean parts between failures on the left and right lines improve by approximately 39.51%. All experimental data presented in this research have been normalized to disguise actual performance measures, as the raw data are considered confidential.
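A sketch of the desirability-function approach named in the keywords, with SciPy's Nelder-Mead simplex standing in for the paper's modified simplex and its hunting-search hybrid; the two response surfaces and the desirability bounds are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Invented larger-the-better response surfaces for mean parts between failures
# on the left and right grease position lines, as functions of two coded parameters.
def mpbf_left(x):
    return 120 - 8 * (x[0] - 0.6) ** 2 - 12 * (x[1] + 0.2) ** 2

def mpbf_right(x):
    return 110 - 10 * (x[0] - 0.2) ** 2 - 6 * (x[1] - 0.4) ** 2

def desirability(y, low, target):
    """Derringer-Suich larger-the-better desirability, linear between low and target."""
    return float(np.clip((y - low) / (target - low), 0.0, 1.0))

def overall_desirability(x):
    d1 = desirability(mpbf_left(x), low=80, target=120)
    d2 = desirability(mpbf_right(x), low=80, target=110)
    return (d1 * d2) ** 0.5                      # geometric mean of the two responses

# Simplex (Nelder-Mead) search maximising the overall desirability.
result = minimize(lambda x: -overall_desirability(x), x0=np.array([0.0, 0.0]),
                  method="Nelder-Mead")
print("best coded parameters:", np.round(result.x, 3),
      "overall desirability: %.3f" % overall_desirability(result.x))
```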

Keywords: Grease Position Process, Multi-response Surfaces, Modified Simplex Method, Hunting Search Method, Desirability Function Approach.

460 Fuzzy Inference System for Determining Collision Risk of Ship in Madura Strait Using Automatic Identification System

Authors: Emmy Pratiwi, Ketut B. Artana, A. A. B. Dinariyana

Abstract:

Madura Strait is considered one of the busiest shipping channels in Indonesia. The high vessel traffic density in Madura Strait poses a serious threat to navigational safety in this area, namely ship collision. This study is an attempt to enhance the safety of marine traffic. A fuzzy inference system (FIS) is proposed to calculate the collision risk of ships. Collision risk is evaluated based on the ship domain, the Distance to Closest Point of Approach (DCPA), and the Time to Closest Point of Approach (TCPA). Data were collected using the Automatic Identification System (AIS). This study considers several ship domain models to characterize the marine traffic in the waterway. Each encounter within the ship domain is analyzed to obtain the level of collision risk. The resulting risk levels can be used as guidance to avoid accidents, providing a brief description of traffic safety in Madura Strait and improving navigational safety in the area.
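The DCPA and TCPA inputs to the fuzzy inference system are computed from the relative position and velocity of an encounter; the sketch below shows that computation on illustrative AIS-style values. The FIS itself (membership functions and rules) is not reproduced here.

```python
import math

def dcpa_tcpa(own_pos, own_course_deg, own_speed_kn,
              tgt_pos, tgt_course_deg, tgt_speed_kn):
    """DCPA (nautical miles) and TCPA (hours) from AIS-style kinematics.
    Positions are local (x east, y north) in nautical miles; courses in degrees."""
    def velocity(course_deg, speed_kn):
        rad = math.radians(course_deg)
        return (speed_kn * math.sin(rad), speed_kn * math.cos(rad))   # (vx, vy)

    vx_o, vy_o = velocity(own_course_deg, own_speed_kn)
    vx_t, vy_t = velocity(tgt_course_deg, tgt_speed_kn)
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]          # relative position
    vx, vy = vx_t - vx_o, vy_t - vy_o                                  # relative velocity

    v2 = vx**2 + vy**2
    tcpa = 0.0 if v2 == 0 else -(rx * vx + ry * vy) / v2
    dcpa = math.hypot(rx + vx * tcpa, ry + vy * tcpa)
    return dcpa, tcpa

# Illustrative crossing encounter (invented positions and speeds).
dcpa, tcpa = dcpa_tcpa(own_pos=(0.0, 0.0), own_course_deg=90, own_speed_kn=12,
                       tgt_pos=(4.0, -3.0), tgt_course_deg=0, tgt_speed_kn=10)
print("DCPA = %.2f NM, TCPA = %.2f h" % (dcpa, tcpa))
```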

Keywords: Automatic identification system, collision risk, DCPA, fuzzy inference system, TCPA.

459 Online Battery Equivalent Circuit Model Estimation on Continuous-Time Domain Using Linear Integral Filter Method

Authors: Cheng Zhang, James Marco, Walid Allafi, Truong Q. Dinh, W. D. Widanage

Abstract:

Equivalent circuit models (ECMs) are widely used in battery management systems in electric vehicles and other battery energy storage systems. The battery dynamics and the model parameters vary under different working conditions, such as different temperature and state of charge (SOC) levels, and therefore online parameter identification can improve the modelling accuracy. This paper presents a way of online ECM parameter identification using a continuous time (CT) estimation method. The CT estimation method has several advantages over discrete time (DT) estimation methods for ECM parameter identification due to the widely separated battery dynamic modes and fast sampling. The presented method can be used for online SOC estimation. Test data are collected using a lithium ion cell, and the experimental results show that the presented CT method achieves better modelling accuracy compared with the conventional DT recursive least square method. The effectiveness of the presented method for online SOC estimation is also verified on test data.
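For context, the conventional discrete-time baseline mentioned above can be sketched as follows: simulate a first-order RC (Thevenin) equivalent circuit and estimate its ARX coefficients with recursive least squares. The cell parameters, load profile and noise level are invented, not the paper's lithium-ion test data.

```python
import numpy as np

# --- Simulate a first-order RC (Thevenin) ECM with invented parameters ---
R0, R1, C1, ocv = 0.010, 0.015, 2000.0, 3.7        # ohm, ohm, F, V (illustrative)
Ts, N = 1.0, 3000                                   # sample time (s), samples
a_true = np.exp(-Ts / (R1 * C1))

rng = np.random.default_rng(0)
current = rng.choice([0.0, 5.0, -3.0, 10.0], size=N)        # randomly switching load current (A)
v1 = np.zeros(N)
for k in range(1, N):
    v1[k] = a_true * v1[k - 1] + R1 * (1 - a_true) * current[k - 1]
voltage = ocv - R0 * current - v1 + 1e-4 * rng.standard_normal(N)   # measured terminal voltage

# --- Conventional discrete-time recursive least squares on the ARX form ---
# y[k] = a*y[k-1] + R0*i[k] + (R1*(1-a) - a*R0)*i[k-1], with y = ocv - v.
y = ocv - voltage
theta = np.zeros(3)                 # [a, R0, third coefficient]
P = np.eye(3) * 1e3
lam = 0.999                         # forgetting factor
for k in range(1, N):
    phi = np.array([y[k - 1], current[k], current[k - 1]])
    gain = P @ phi / (lam + phi @ P @ phi)
    theta = theta + gain * (y[k] - phi @ theta)
    P = (P - np.outer(gain, phi) @ P) / lam

a_hat, R0_hat, th3 = theta
R1_hat = (th3 + a_hat * R0_hat) / (1 - a_hat)
print("R0: true %.4f, estimated %.4f ohm" % (R0, R0_hat))
print("R1: true %.4f, estimated %.4f ohm" % (R1, R1_hat))
```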

Keywords: Equivalent circuit model, continuous time domain estimation, linear integral filter method, parameter and SOC estimation, recursive least square.

458 Decision Analysis Module for Excel

Authors: Radomir Perzina, Jaroslav Ramik

Abstract:

The Analytic Hierarchy Process (AHP) is a frequently used approach for solving decision-making problems, and a wide range of software programs utilize it. Their main disadvantages are that they are relatively expensive and do not show intermediate calculations. This work introduces a Microsoft Excel add-in called DAME (Decision Analysis Module for Excel). Compared to other computer programs, DAME is free, can work with scenarios or multiple decision makers, and displays intermediate calculations. Users can structure their decision models into three levels: scenarios/users, criteria and variants. Items on all levels can be evaluated either by weights or by pair-wise comparisons. Three different methods are provided for evaluating the weights of the criteria, the variants and the scenarios: Saaty's method, the geometric mean method and Fuller's triangle method. Multiplicative and additive syntheses are supported. The proposed software package is demonstrated on a couple of illustrative examples of real-life decision problems.
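The geometric mean method mentioned above reduces to taking the row geometric means of the pairwise comparison matrix and normalizing them, with Saaty's consistency ratio as a check; the 3x3 comparison matrix below is an invented example, not one of the paper's case studies.

```python
import numpy as np

def ahp_weights(A):
    """Priority weights from a pairwise comparison matrix via the geometric mean method,
    plus Saaty's consistency ratio (random index table covers n = 3..10)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    w = np.prod(A, axis=1) ** (1.0 / n)          # row geometric means
    w /= w.sum()
    lambda_max = np.mean((A @ w) / w)            # principal eigenvalue estimate
    ci = (lambda_max - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}[n]
    return w, ci / ri

# Invented pairwise comparisons of three criteria (Saaty's 1-9 scale).
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
weights, cr = ahp_weights(A)
print("weights:", np.round(weights, 3), "consistency ratio: %.3f" % cr)
```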

Keywords: Analytic hierarchy process, multi-criteria decision making, pair-wise comparisons, Microsoft Excel, Scenarios.

457 Dynamics of Roe Deer (Capreolus capreolus) Vehicle Collisions in Lithuania: Influence of the Time Factors

Authors: Lina Galinskaitė, Gytautas Ignatavičius

Abstract:

Animal-vehicle collisions (AVCs) affect human safety, cause property damage and harm wildlife welfare. The number of AVCs is increasing, creating serious implications for animal conservation and management. Roe deer (Capreolus capreolus), along with other large ungulates (moose, wild boar, red deer), are the ungulates most frequently involved in vehicle collisions in Europe. Therefore, we analyzed temporal patterns of roe deer-vehicle collisions (RDVC) occurring in Lithuania. Using a comprehensive dataset consisting of 15,891 data points, we examined the influence of different time units (i.e. time of day, day of week, month, and season) on RDVC. We identified accident periods within the analyzed time units. The highest frequencies of RDVC occurred on Fridays, and the highest frequencies of roe deer-vehicle accidents occurred in May, November and December. Regarding diurnal patterns, most RDVC occur after sunset and before sunrise (during dark hours). Since vehicle collisions with animals showed temporal variation, this variation should be taken into consideration when developing statistical models of spatial AVC patterns, and also when planning strategies to reduce accident risk.

Keywords: Animal vehicle collision, diurnal patterns, road safety, roe deer, statistical analysis.

456 A Quantitative Tool to Analyze Process Design

Authors: Andrés Carrión García, Aura López de Murillo, José Jabaloyes Vivas, Angela Grisales del Río

Abstract:

Some quality control tools use non-metric, subjective information from experts, who qualify the intensity of the relationships existing inside processes without quantifying them. In this paper, we develop an analytic quality control tool that measures the impact or strength of the relationships between process operations and product characteristics. The tool includes two models: a qualitative model, allowing the relationships to be described and analyzed, and a formal quantitative model, by means of which the relationships are quantified. In the first, concepts from graph theory were applied to identify those process elements that can be sources of variation, that is, those quality characteristics or operations that have some sort of precedence over the others and that should become control items. The most dependent elements can also be identified, that is, those elements receiving the effects of the elements identified as variation sources. If controls are focused on those dependent elements, the efficiency of control is compromised by the fact that we are controlling effects, not causes. The second model adapts the multivariate statistical technique of covariance structure analysis. This approach allowed us to quantify the relationships. The computer package LISREL was used to obtain statistics and to validate the model.

Keywords: Characteristics matrix, covariance structure analysis, LISREL.

455 Viewers of Advertisements in Television and Cinema in the Shadow of Visuality

Authors: Mete Kazaz

Abstract:

Despite the internet, a mass medium that has become quite common in recent years, the relationship of advertising with Television and Cinema, which have always drawn the attention of researchers as basic media where visuality is in the foreground, has also remained the subject of various studies. Based on the assumption that the known fundamental effects of advertisements on consumers are closely related to the creative process of advertisements as well as to the nature and characteristics of the medium in which they appear, these basic mass media (Television and Cinema) and the consumer motivations behind the advertisements they broadcast have become a focus of study. Given that viewers of the mass media in question have shifted from a passive position to a more active one, especially in recent years, and approach advertisement content, as they do all content, in a more critical and "pitiless" manner, it is possible to say that individuals make more use of advertisements than in the past and combine their individual goals with the goals of the advertisements. This study, which aims to find out what the goals of this new individual advertisement use are, how they are shaped by the distinct characteristics of Television and Cinema as basic mass media where visuality takes precedence, and what kind of place they occupy in the minds of consumers, identified consumers' motivations as: "Entertainment", "Escapism", "Play", "Monitoring/Discovery", "Opposite Sex" and "Aspirations and Role Models". This study intends to reveal the differences or similarities among the needs, and hence the gratifications, of viewers who consume advertisements on Television or at the Cinema, the two basic media where visuality is prioritized.

Keywords: Cinema, Television, Viewers of Advertisements.

454 Optimum Design of an 8x8 Optical Switch with Thermal Compensated Mechanisms

Authors: Tien-Tung Chung, Chin-Te Lin, Chung-Yun Lee, Kuang-Chao Fan, Shou-Heng Chen

Abstract:

This paper studies the optimum design for reducing the optical loss of an 8x8 mechanical optical switch caused by temperature changes. The 8x8 optical switch is composed of a base, 8 input fibers, 8 output fibers, 3 fixed mirrors and 17 movable mirrors. First, an innovative switch configuration with a thermal-compensated design is proposed. Most mechanical optical switches have the disadvantage that their precision and accuracy are influenced by the ambient temperature. The thermal-compensated design deals with this situation by using materials with different thermal expansion coefficients (α). Second, a parametric modeling program is developed to generate solid models for finite element analysis, and the thermal and structural behaviors of the switch are analyzed. Finally, an integrated optimum design program, combining Autodesk Inventor Professional software, finite element analysis software, and genetic algorithms, is developed to improve the thermal behavior so that the optical loss of the switch is reduced. By changing the design parameters of the switch in the integrated design program, the final optimum design that satisfies the design constraints and specifications can be found.
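The thermal-compensation idea of pairing materials with different expansion coefficients can be shown with a simple length balance: two members acting in opposition cancel each other's expansion when L1·α1 = L2·α2. The materials, lengths and temperature rise below are illustrative, not the switch's actual design values.

```python
# Thermal compensation by pairing materials with different expansion coefficients:
# if two members act in opposition, their expansions cancel when L1*a1 == L2*a2.
alpha_steel = 11.7e-6     # 1/K, illustrative CTE of a steel member
alpha_alu = 23.1e-6       # 1/K, illustrative CTE of an aluminium member
L_steel = 40.0            # mm, chosen length of the steel member

# Length of the opposing aluminium member that cancels the steel expansion.
L_alu = L_steel * alpha_steel / alpha_alu

dT = 20.0                 # K temperature rise
net_shift = L_steel * alpha_steel * dT - L_alu * alpha_alu * dT
print("compensating aluminium length: %.2f mm, net shift: %.2e mm" % (L_alu, net_shift))
```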

Keywords: Optical switch, finite element analysis, thermal-compensated design, optimum design.
