Search results for: system dynamics modeling methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 33776

32126 Foundation of the Information Model for Connected-Cars

Authors: Hae-Won Seo, Yong-Gu Lee

Abstract:

Recent progress in next-generation automobile technology is geared towards incorporating information technology into cars. Such vehicles, collectively called smart cars, bring intelligence to driving and provide comfort, convenience, and safety. One branch of smart cars is the connected-car system, whose key concept is the sharing of driving information among cars in a decentralized manner, enabling collective intelligence. This paper proposes a foundation for the information model needed to define driving information for smart cars. Road conditions are modeled through a unique data structure that unambiguously represents time-variant traffic in the streets. The data structure and its usage are then exemplified in a navigational scenario using UML. Optimal driving-route search with the proposed data structure under dynamically changing road conditions is also discussed.
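
The abstract leaves the data structure itself unspecified, but the route-search idea can be illustrated with a minimal sketch: a graph whose edge costs are functions of entry time, searched with an earliest-arrival variant of Dijkstra's algorithm. All names and numbers below are hypothetical, not taken from the paper.

```python
import heapq

def time_dependent_dijkstra(graph, source, target, t0):
    """Earliest-arrival search on a graph whose edge costs vary with time.

    graph: {node: [(neighbor, travel_time_fn), ...]}, where travel_time_fn(t)
    returns the traversal time if the edge is entered at time t.
    """
    best = {source: t0}
    pq = [(t0, source)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == target:
            return t
        if t > best.get(u, float("inf")):
            continue  # stale queue entry
        for v, travel_time in graph[u]:
            arrival = t + travel_time(t)
            if arrival < best.get(v, float("inf")):
                best[v] = arrival
                heapq.heappush(pq, (arrival, v))
    return float("inf")

# Toy network: edge A->B becomes congested after t = 8.0 (e.g., rush hour).
graph = {
    "A": [("B", lambda t: 5 if t < 8 else 12), ("C", lambda t: 9)],
    "B": [("D", lambda t: 4)],
    "C": [("D", lambda t: 6)],
    "D": [],
}
print(time_dependent_dijkstra(graph, "A", "D", t0=7.5))  # arrives at 16.5
```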

Keywords: connected-car, data modeling, route planning, navigation system

Procedia PDF Downloads 373
32125 Feasibility and Obstacles of Air Quality Attainment in Hong Kong from 2019 to 2025

Authors: Xuguo Zhang, Jimmy Fung, Kenneth Leung, Alexis Lau

Abstract:

Fine particulate matter concentrations have been decreasing in the past few years, while ozone concentrations have shown an increasing trend in the Greater Bay Area (GBA) of China. A series of control policies has been released to mitigate country-wide air pollution; however, means to effectively evaluate the exercised control measures and to efficiently reveal potential mitigation pathways are still limited. By refining an enhanced air-quality-modeling system, this study provides air quality assessments from 2019 to 2025 to appraise air quality improvement under designed scenarios and to assess the optimum scope for tightening the Air Quality Objectives (AQOs). The results show that it is feasible to tighten the 24-hour AQO for SO2 from the World Health Organization air quality guidelines Interim Target Level 1 (IT-1, 125 μg/m³) to the IT-2 level (50 μg/m³) while the currently allowed number of exceedances (three) remains unchanged. It is also possible to tighten the annual AQO for PM2.5 from IT-1 (35 μg/m³) to IT-2 (25 μg/m³), and its 24-hour AQO from IT-1 (75 μg/m³) to IT-2 (50 μg/m³), with the allowed number of exceedances increased from the current nine to 35. Owing to the cross-boundary transport characteristics of air pollution, regional cooperation under the development of the GBA still needs to be emphasized and strengthened.

Keywords: air quality attainment, Hong Kong, mitigation policy, chemical transport modeling, sensitivity analysis

Procedia PDF Downloads 81
32124 Determination of Poisson’s Ratio and Elastic Modulus of Compression Textile Materials

Authors: Chongyang Ye, Rong Liu

Abstract:

Compression textiles such as compression stockings (CSs) have been extensively applied for the prevention and treatment of chronic venous insufficiency of the lower extremities. The involvement of multiple mechanical factors, such as interface pressure, frictional force, and elastic material behavior, makes the interaction between the lower limb and CSs complex. Determination of the Poisson's ratio and elastic moduli of CS materials is critical for constructing finite element (FE) models to numerically simulate the complex interactive system of CS and lower limb. In this study, a mixed approach, combining an analytical model based on orthotropic Hooke's law with experimental study (uniaxial tension testing and pure shear testing), is proposed to determine the Young's moduli, Poisson's ratios, and shear modulus of CS fabrics. The results indicated a linear relationship between stress and strain for the studied CS samples under controlled stretch ratios (< 100%). The newly proposed method and the determined key mechanical properties of elastic orthotropic CS fabrics facilitate FE modeling for in-depth analysis of the effects of compression material design on the resultant biomechanical function in compression therapy.
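
For reference, a plane-stress form of the orthotropic Hooke's law the abstract invokes is sketched below; taking directions 1 and 2 as the two principal fabric directions (e.g., wale and course) is an assumption not stated in the abstract.

```latex
\begin{pmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \gamma_{12} \end{pmatrix}
=
\begin{pmatrix}
 1/E_1 & -\nu_{21}/E_2 & 0 \\
 -\nu_{12}/E_1 & 1/E_2 & 0 \\
 0 & 0 & 1/G_{12}
\end{pmatrix}
\begin{pmatrix} \sigma_1 \\ \sigma_2 \\ \tau_{12} \end{pmatrix},
\qquad
\frac{\nu_{12}}{E_1} = \frac{\nu_{21}}{E_2}.
```

Uniaxial tension along each direction yields E1, E2 and the Poisson's ratios, pure shear yields G12, and the symmetry relation provides a consistency check on the measured values.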

Keywords: elastic compression stockings, Young’s modulus, Poisson’s ratio, shear modulus, mechanical analysis

Procedia PDF Downloads 113
32123 Poultry in Motion: Text Mining Social Media Data for Avian Influenza Surveillance in the UK

Authors: Samuel Munaf, Kevin Swingler, Franz Brülisauer, Anthony O’Hare, George Gunn, Aaron Reeves

Abstract:

Background: Avian influenza, commonly known as bird flu, is a viral zoonotic respiratory disease affecting various species of birds, including poultry, pet birds, and migratory birds. Researchers have argued that the accessibility of health information online, in addition to the low-cost data collection methods the internet provides, has revolutionized the way epidemiological and disease surveillance data are utilized. This paper examines the feasibility of using internet data sources, such as Twitter and livestock forums, for the early detection of avian flu outbreaks through text mining algorithms and social network analysis. Methods: Social media mining was conducted on Twitter for the period 01/01/2021 to 31/12/2021 via the Twitter API in Python. The results were filtered first by hashtags (#avianflu, #birdflu) and word occurrences (avian flu, bird flu, H5N1), and then refined by location to include only results from within the UK. Time-series analysis of keyword frequencies and topic modeling were applied to this text to uncover insights prior to a confirmed outbreak. Further analysis examined mentions of clinical signs (e.g., swollen head, blue comb, dullness) within the time series prior to outbreaks confirmed by the Animal and Plant Health Agency (APHA). Results: Increases in Google search volume and avian flu-related tweets correlated in time with the confirmed cases. Topic modeling uncovered clusters of word occurrences relating to livestock biosecurity, disposal of dead birds, and prevention measures. Conclusions: Text mining of social media data can be useful for analysing discussed topics for epidemiological surveillance purposes, especially given the lack of applied research in the veterinary domain. However, the small sample size of tweets in certain weekly periods, together with a great amount of textual noise in the data, makes it difficult to obtain statistically robust results.
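
The concrete pipeline is not given in the abstract; the sketch below shows one plausible shape of the filtering and weekly-frequency steps on already-collected tweets, with all data, column names, and term lists hypothetical.

```python
import pandas as pd

# Hypothetical input: one row per tweet, already collected via the Twitter API.
tweets = pd.DataFrame({
    "created_at": ["2021-11-01", "2021-11-03", "2021-11-08", "2021-11-09"],
    "text": [
        "Reports of #birdflu near the coast",
        "Swollen head and blue comb in two hens, worried about avian flu",
        "H5N1 confirmed at a nearby farm",
        "More bird flu cases this week",
    ],
})
tweets["created_at"] = pd.to_datetime(tweets["created_at"])

KEYWORDS = ["avian flu", "bird flu", "h5n1", "#avianflu", "#birdflu"]
CLINICAL_SIGNS = ["swollen head", "blue comb", "dullness"]

def matches(text, terms):
    text = text.lower()
    return any(term in text for term in terms)

flu = tweets[tweets["text"].apply(lambda t: matches(t, KEYWORDS))]

# Weekly keyword frequencies: the series compared in time with APHA confirmations.
weekly = flu.set_index("created_at").resample("W")["text"].count()
print(weekly)

# Clinical-sign mentions that may precede a confirmed outbreak.
signs = tweets[tweets["text"].apply(lambda t: matches(t, CLINICAL_SIGNS))]
print(signs)
```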

Keywords: veterinary epidemiology, disease surveillance, infodemiology, infoveillance, avian influenza, social media

Procedia PDF Downloads 104
32122 Machine Learning Assisted Selective Emitter Design for Solar Thermophotovoltaic System

Authors: Ambali Alade Odebowale, Andargachew Mekonnen Berhe, Haroldo T. Hattori, Andrey E. Miroshnichenko

Abstract:

Solar thermophotovoltaic (STPV) systems have emerged as a promising solution to overcome the Shockley-Queisser limit, a significant impediment to the direct conversion of solar radiation into electricity using conventional solar cells. An STPV system comprises essential components such as an optical concentrator, a selective emitter, and a thermophotovoltaic (TPV) cell. The pivotal element in achieving high efficiency in an STPV system lies in the design of a spectrally selective emitter or absorber. Traditional methods for designing and optimizing selective emitters are often time-consuming and may not yield highly selective emitters, posing a challenge to overall system performance. In recent years, the application of machine learning techniques in various scientific disciplines has demonstrated significant advantages. This paper proposes a novel nanostructure composed of four layered materials (SiC/W/SiO2/W) to function as a selective emitter in the energy conversion process of an STPV system. Unlike the conventional approaches widely adopted by researchers, this study employs a machine learning-based approach for the design and optimization of the selective emitter: a random forest algorithm (RFA) is employed for the design of the emitter, while the optimization process is executed using genetic algorithms. Machine learning algorithms such as the random forest can analyze complex datasets and identify intricate patterns that may not be apparent through traditional methods, allowing a more comprehensive exploration of the design space and potentially leading to highly efficient emitter configurations. Genetic algorithms, in turn, mimic the principles of natural selection, enabling the exploration of a diverse range of emitter configurations and facilitating the identification of optimal solutions. This not only accelerates the design and optimization process but also increases the likelihood of discovering configurations that exhibit superior performance compared to traditional methods. In conclusion, the integration of machine learning techniques in the design and optimization of a selective emitter for solar thermophotovoltaic systems addresses the limitations of traditional methods and holds the potential to significantly improve the overall performance of STPV systems, paving the way for enhanced solar energy conversion efficiency.
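
The abstract names the two ingredients, a random forest and a genetic algorithm, without implementation detail. The sketch below shows one plausible coupling, in which the forest serves as a surrogate for an electromagnetic figure of merit and the GA searches layer thicknesses; the objective function, bounds, and GA settings are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical design space: layer thicknesses (nm) of the SiC/W/SiO2/W stack.
BOUNDS = np.array([[20, 200], [5, 50], [50, 400], [5, 50]], dtype=float)

def spectral_selectivity(x):
    """Stand-in objective; in practice this would come from EM simulations."""
    center = BOUNDS.mean(axis=1)
    spread = np.ptp(BOUNDS, axis=1)
    return -np.sum((x - center) ** 2 / spread ** 2, axis=-1)

# Train a random-forest surrogate on a batch of "simulated" designs.
X = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(500, 4))
surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X, spectral_selectivity(X))

# Simple genetic algorithm over the surrogate: selection, crossover, mutation.
pop = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(64, 4))
for generation in range(40):
    fitness = surrogate.predict(pop)
    parents = pop[np.argsort(fitness)[-32:]]               # truncation selection
    mask = rng.integers(0, 2, size=(64, 4)).astype(bool)   # uniform crossover
    children = np.where(mask, parents[rng.integers(0, 32, 64)],
                        parents[rng.integers(0, 32, 64)])
    children += rng.normal(0.0, 2.0, size=children.shape)  # Gaussian mutation
    pop = np.clip(children, BOUNDS[:, 0], BOUNDS[:, 1])

best = pop[np.argmax(surrogate.predict(pop))]
print("candidate layer thicknesses (nm):", best.round(1))
```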

Keywords: emitter, genetic algorithm, radiation, random forest, thermophotovoltaic

Procedia PDF Downloads 60
32121 Examining the Structural Model of Mindfulness and Headache Intensity With the Mediation of Resilience and Perfectionism in Migraine Patients

Authors: Alireza Monzavi Chaleshtari, Mahnaz Aliakbari Dehkordi, Nazila Esmaeili, Ahmad Alipour, Amin Asadi Hieh

Abstract:

Headache disorders are among the most common disorders of the nervous system and are associated with suffering, disability, and financial costs for patients. Mindfulness as a lifestyle, in line with human nature, has the ability to affect the emotional system, i.e., people's thoughts, body sensations, raw emotions, and action impulses. The aim of this study was to test the fit of a structural model relating mindfulness to headache severity, mediated by resilience and perfectionism, in patients with migraine. Methods: The statistical population of this study included all patients with migraine referred to neurologists in Tehran in the spring and summer of 1401 (2022). The inclusion criteria were a diagnosis of migraine by a neurologist, absence of mental disorders or other physical diseases, and at least a diploma. Based on the number of research variables, 180 people were selected by convenience sampling and answered online the Ahvaz Perfectionism Questionnaire (AMQ), the Connor-Davidson Resilience Scale (CD-RISC), the Ahvaz Migraine Headache Questionnaire (APS), and the five-factor mindfulness questionnaire (MAAS). Data were analyzed using structural equation modeling and Amos software. Results: The direct path from mindfulness to headache severity was not significant (P > 0.05), but the other direct paths (mindfulness to resilience, mindfulness to perfectionism, resilience to headache severity, and perfectionism to headache severity) were significant (P < 0.01). After modifying and removing the non-significant paths, the final model fit the data. The mediating variables, resilience and perfectionism, mediated all paths from the predictor variable to the criterion. Conclusion: According to the findings of the present study, mindfulness in migraine patients reduces headache severity by promoting resilience and reducing perfectionism.
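
A full SEM reproduction is beyond an abstract, but the mediation logic can be sketched with ordinary regressions and the product-of-coefficients idea for indirect effects. The data below are synthetic and the path coefficients invented, so the sketch only mirrors the structure of the hypothesized model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 180  # same sample size as the study

# Synthetic data mimicking the hypothesized paths (all coefficients invented).
mindfulness = rng.normal(size=n)
resilience = 0.5 * mindfulness + rng.normal(scale=0.8, size=n)
perfectionism = -0.4 * mindfulness + rng.normal(scale=0.9, size=n)
headache = -0.6 * resilience + 0.5 * perfectionism + rng.normal(scale=0.7, size=n)

def ols(y, *xs):
    X = sm.add_constant(np.column_stack(xs))
    return sm.OLS(y, X).fit()

# Path a: mindfulness -> mediator; path b: mediator -> headache (others held).
a_res = ols(resilience, mindfulness).params[1]
b_res = ols(headache, resilience, perfectionism, mindfulness).params[1]
a_perf = ols(perfectionism, mindfulness).params[1]
b_perf = ols(headache, resilience, perfectionism, mindfulness).params[2]

print("indirect effect via resilience:   ", round(a_res * b_res, 3))
print("indirect effect via perfectionism:", round(a_perf * b_perf, 3))
print("direct effect of mindfulness:     ",
      round(ols(headache, mindfulness, resilience, perfectionism).params[1], 3))
```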

Keywords: migraine, headache severity, mindfulness, resilience, perfectionism

Procedia PDF Downloads 79
32120 Optimal Design of Brush Roll for Semiconductor Wafer Using CFD Analysis

Authors: Byeong-Sam Kim, Kyoungwoo Park

Abstract:

This research quantitatively analyzes the structure of flat panel displays (FPDs), such as LCDs, through CFD analysis and design modification, in order to minimize the defect rate and the loss of production caused by damage to large-scale plates in the wafer heating chamber during the semiconductor manufacturing process. In atmospheric-pressure or chemical-vapor-deposition equipment, glass panels and wafers are transported and transferred by robot hands, which must handle panels that are becoming ever longer and wider. A contact handling system suffers from several problems, including an increased potential for fracture or warping, so a non-contact handling system is required; panel and wafer warping also makes conventional contact handling difficult to analyze. We propose a new non-contact transportation system combining air suction and blow-out. Numerical analysis and experiments were therefore performed and their results compared to evaluate the proposed non-contact solution. The proposed non-contact wafer/panel handler shows its strength in maintaining the high cleanliness levels required for semiconductor production processes.

Keywords: flat panel display, non-contact transportation, heat treatment process, CFD analysis

Procedia PDF Downloads 415
32119 Lifespan Assessment of the Fish Crossing System of Itaipu Power Plant (Brazil/Paraguay) Based on the Reaching of Its Sedimentological Equilibrium Computed by 3D Modeling and Churchill Trapping Efficiency

Authors: Anderson Braga Mendes, Wallington Felipe de Almeida, Cicero Medeiros da Silva

Abstract:

This study aimed to assess the lifespan of the fish transposition system of the Itaipu Power Plant (Brazil/Paraguay) by using 3D hydrodynamic modeling and the Churchill trapping efficiency, in order to identify the sedimentological equilibrium configuration of the main pond of the Piracema Channel, which is part of a 10 km hydraulic circuit that enables fish migration from downstream to upstream of the Itaipu Dam (and vice versa), overcoming a 120 m water drop. To that end, bottom data from 2002 (the channel's opening year) and 2015 were collected and analyzed, along with bed material at 12 stations, in order to identify their granulometric profiles. The Shields and Yalin-Karahan diagrams for initiation of motion of bed material were used to determine the critical bed shear stress for the sedimentological equilibrium state, based on the type of sediment (grain size) expected at the bottom once the balance is reached. That granulometry was inferred by analyzing the coarser material (fine and medium sands) which flows into the pond and deposits in its backwater zone, adopting a range of diameters within the upper and lower limits of that sand stratification. The software Delft3D was used to compute the bed shear stress at every station under analysis. By modifying the input bathymetry of the main pond of the Piracema Channel until the computed bed shear stresses at all stations fell simultaneously within the intervals of acceptable critical stresses, it was possible to foresee the bed configuration of the main pond when sedimentological equilibrium is reached. Under that condition, 97% of the whole pond capacity will be silted, and a shallow watercourse with depths ranging from 0.2 m to 1.5 m will be formed; in 2002, depths ranged from 2 m to 10 m. Outside that water path, the new bottom will be practically flat and covered by a layer of water 0.05 m thick. Thus, in the future, the main pond of the Piracema Channel will no longer fulfill its purpose of providing a resting place for migrating fish species, added to the fact that it may become an insurmountable barrier for medium- and large-sized specimens. Everything considered, its lifespan, from the year of its opening to the moment of the sedimentological equilibrium configuration, was estimated at approximately 95 years, almost half of the computed lifespan of the Itaipu Power Plant itself. However, it is worth mentioning that drawbacks concerning the silting of the main pond will start being noticed much earlier than that, owing to the reasons previously mentioned.
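
The equilibrium criterion rests on the standard Shields parameter; in the hedged form below, equilibrium is declared where the computed bed shear stress no longer exceeds the critical value for the grain size expected at the bed (symbols follow common sediment-transport usage, not notation from the paper).

```latex
\theta_c = \frac{\tau_c}{(\rho_s - \rho)\, g\, d}
\quad\Longrightarrow\quad
\tau_c = \theta_c\,(\rho_s - \rho)\, g\, d ,
\qquad
\tau_b \le \tau_c \;\;\text{(no further entrainment: equilibrium bed)} .
```

Here tau_b is the bed shear stress from the 3D model, rho_s and rho are sediment and water densities, g is gravity, and d is the representative grain diameter of the fine-to-medium sands entering the pond.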

Keywords: 3D hydrodynamic modeling, Churchill trapping efficiency, fish crossing system, Itaipu power plant, lifespan, sedimentological equilibrium

Procedia PDF Downloads 232
32118 Comparison of Cyclone Design Methods for Removal of Fine Particles from Plasma Generated Syngas

Authors: Mareli Hattingh, I. Jaco Van der Walt, Frans B. Waanders

Abstract:

A waste-to-energy plasma system was designed by Necsa for commercial use to create electricity from unsorted municipal waste. Fly ash particles must be removed from the syngas stream at operating temperatures of 1000 °C and recycled back into the reactor for complete combustion. A 2D2D high-efficiency cyclone separator was chosen for this purpose. During this study, two cyclone design methods were explored: the Classic Empirical Method (smaller cyclone) and the Flow Characteristics Method (larger cyclone). These designs were optimized with regard to efficiency, so as to remove at minimum 90% of the fly ash particles of average size 10 μm by 50 μm. Wood was used as the feed source at a concentration of 20 g/m³ of syngas. The two designs were then compared at room temperature, using Perspex test units and three feed gases of different densities, namely nitrogen, helium, and air. System conditions were imitated by adapting the gas feed velocity and particle load for each gas respectively. Helium, the least dense of the three gases, simulates higher temperatures, whereas air, the densest gas, simulates lower temperatures. The average cyclone efficiencies ranged between 94.96% and 98.37%, reaching up to 99.89% in individual runs; the lowest efficiency attained was 94.00%. Furthermore, the design of the smaller cyclone proved to be more robust, while the larger cyclone demonstrated a stronger correlation between its separation efficiency and the feed temperature, and can be assumed to achieve slightly higher efficiencies at elevated temperatures. Both design methods, however, led to good designs. At room temperature, the difference in efficiency between the two cyclones was almost negligible; at higher temperatures, these general tendencies are expected to be amplified, so that the difference between the two design methods becomes more obvious. Though the design specifications were met by both designs, the smaller cyclone is recommended as the default particle separator for the plasma system due to its robust nature.
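
The abstract does not spell out the "Classic Empirical Method"; the Lapple cut-size model is one common empirical approach of that family and is sketched below with illustrative numbers only (the gas viscosity, inlet geometry, and densities are assumptions, not the paper's design values).

```python
import math

def lapple_cut_diameter(mu, W, N_e, v_i, rho_p, rho_g):
    """Lapple cut size d50: particle diameter collected with 50% efficiency.

    mu: gas viscosity (Pa*s), W: inlet width (m), N_e: effective turns,
    v_i: inlet velocity (m/s), rho_p/rho_g: particle/gas densities (kg/m3).
    """
    return math.sqrt(9 * mu * W / (2 * math.pi * N_e * v_i * (rho_p - rho_g)))

def lapple_efficiency(d_p, d50):
    """Fractional collection efficiency for a particle of diameter d_p."""
    return 1.0 / (1.0 + (d50 / d_p) ** 2)

# Illustrative hot-syngas numbers, not the paper's design values.
d50 = lapple_cut_diameter(mu=4.2e-5, W=0.05, N_e=6, v_i=18.0,
                          rho_p=2200.0, rho_g=0.35)
print(f"cut size: {d50 * 1e6:.2f} um")
print(f"efficiency at 10 um: {lapple_efficiency(10e-6, d50):.1%}")
```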

Keywords: cyclone, design, plasma, renewable energy, solid separation, waste processing

Procedia PDF Downloads 211
32117 3D Finite Element Analysis of Yoke Hybrid Electromagnet

Authors: Hasan Fatih Ertuğrul, Beytullah Okur, Huseyin Üvet, Kadir Erkan

Abstract:

The objective of this paper is to analyze a 4-pole hybrid magnetic levitation system by using 3D finite element and analytical methods. The magnetostatic analysis of the system is carried out with the ANSYS MAXWELL-3D package. An analytical model is derived by the magnetic equivalent circuit (MEC) method. The purpose of the magnetostatic analysis is to determine the characteristics of the attractive force and rotational torques as functions of air gap clearance, inclination angle, and current excitation. A comparison between the 3D finite element analysis and the analytical results is presented in the remainder of the paper.
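
In a hedged, textbook form (not the paper's derivation), the MEC approach reduces each flux path to a reluctance network; for one pole face of area A across a gap g, the flux and the resulting attractive force follow as:

```latex
\Phi = \frac{N i + F_{pm}}{\mathcal{R}_{gap}(g)}, \qquad
\mathcal{R}_{gap}(g) = \frac{g}{\mu_0 A}, \qquad
F_{att} = \frac{B^2 A}{2\mu_0} = \frac{\Phi^2}{2\mu_0 A},
```

where Ni is the coil magnetomotive force and F_pm the permanent-magnet contribution of the hybrid excitation; evaluating such expressions over varying g and inclination angle yields the force and torque characteristics against which the 3D model is compared.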

Keywords: yoke hybrid electromagnet, 3D finite element analysis, magnetic levitation system, magnetostatic analysis

Procedia PDF Downloads 725
32116 Finite Element Molecular Modeling: A Structural Method for Large Deformations

Authors: A. Rezaei, M. Huisman, W. Van Paepegem

Abstract:

Atomic interactions in molecular systems are mainly studied by particle mechanics. Nevertheless, researchers have also put considerable effort into simulating them using continuum methods. In the early 2000s, simple equivalent finite element models were developed to study the mechanical properties of carbon nanotubes and graphene in composite materials. Afterward, many researchers employed similar structural simulation approaches to obtain the mechanical properties of nanostructured materials, to simplify the interface behavior of fiber-reinforced composites, and to simulate defects in carbon nanotubes or graphene sheets, among other applications. These structural approaches, however, are limited to small deformations due to complicated local rotational coordinates. This article proposes a method for the finite element simulation of molecular mechanics, here called Structural Finite Element Molecular Modeling (SFEMM). SFEMM improves on the available structural approaches for large deformations without using any rotational degrees of freedom. Moreover, the method simulates molecular conformation, which is a big advantage over the previous approaches. Technically, the method uses nonlinear multipoint constraints to simulate the kinematics of the atomic multibody interactions. Only truss elements are employed, and the bond potentials are implemented through constitutive material models. Because the equilibrium bond length, bond angles, and bond-torsion potential energies are intrinsic material parameters, the model is independent of initial strains or stresses. In this paper, the SFEMM method has been implemented in the ABAQUS finite element software. The constraints and material behaviors are modeled through two Fortran subroutines. The method is verified for the bond-stretch, bond-angle, and bond-torsion behavior of carbon atoms. Furthermore, the capability of the method in the conformation simulation of molecular structures is demonstrated via a case study of a graphene sheet. Briefly, SFEMM builds up a framework that offers more flexible features than the conventional molecular finite element models, serving structural relaxation modeling and large deformations without incorporating local rotational degrees of freedom. Potentially, the method is a big step towards comprehensive molecular modeling with the finite element technique, thereby concurrently coupling an atomistic domain to a solid continuum domain within a single finite element platform.
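
The specific force field is not stated in the abstract; typical molecular-mechanics potentials that such constitutive models would encode are, in commonly used harmonic/cosine forms:

```latex
E_{stretch} = \tfrac{1}{2} k_r \,(r - r_0)^2, \qquad
E_{angle}   = \tfrac{1}{2} k_\theta \,(\theta - \theta_0)^2, \qquad
E_{torsion} = \tfrac{1}{2} k_\phi \left[ 1 - \cos n(\phi - \phi_0) \right],
```

with the equilibrium values r0, theta0, phi0 entering as material parameters, consistent with the abstract's remark that the model is independent of initial strains or stresses.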

Keywords: finite element, large deformation, molecular mechanics, structural method

Procedia PDF Downloads 151
32115 Amyloid-β Fibrils Remodeling by an Organic Molecule: Insight from All-Atomic Molecular Dynamics Simulations

Authors: Nikhil Agrawal, Adam A. Skelton

Abstract:

Alzheimer's disease (AD) is one of the most common forms of dementia, caused by the misfolding and aggregation of amyloid beta (Aβ) peptides into amyloid-β fibrils (Aβ fibrils). To disrupt and remodel Aβ fibrils, a number of candidate molecules have been proposed. To study the molecular mechanisms of Aβ fibril remodeling, we performed a series of all-atom molecular dynamics simulations, with a total simulation time of 3 µs, in explicit solvent. Several previously undiscovered binding modes between the candidate molecule and Aβ fibrils are unraveled, one of which shows a direct conformational change of the Aβ fibril. By understanding the physicochemical factors responsible for binding and the subsequent remodeling of Aβ fibrils by the candidate molecule, avenues into structure-based drug design for AD can be opened.

Keywords: alzheimer’s disease, amyloid, MD simulations, misfolded protein

Procedia PDF Downloads 345
32114 Continuous-Time and Discrete-Time Singular Value Decomposition of an Impulse Response Function

Authors: Rogelio Luck, Yucheng Liu

Abstract:

This paper proposes the continuous-time singular value decomposition (SVD) of the impulse response function, a special kind of Green's function e^(-(t-T)), in order to find a set of singular functions and singular values such that the convolutions of that function with the singular functions on a specified domain are the solutions to the inhomogeneous differential equations for those singular functions. A numerical example is presented to verify the proposed method. Besides the continuous-time SVD, a discrete-time SVD is also presented for the impulse response function, which is modeled using a Toeplitz matrix in the discrete system. The proposed method has broad applications in signal processing, dynamic system analysis, acoustic analysis, thermal analysis, as well as macroeconomic modeling.
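
The discrete-time construction can be sketched directly: sampling the kernel e^(-(t-T)) gives a lower-triangular Toeplitz convolution operator whose SVD factors reproduce the response. Grid size and spacing below are arbitrary choices, not values from the paper.

```python
import numpy as np
from scipy.linalg import toeplitz

# Impulse response h(t - tau) = exp(-(t - tau)) for t >= tau, else 0.
n, dt = 200, 0.05
t = np.arange(n) * dt
h = np.exp(-t)

# Discrete convolution operator: lower-triangular Toeplitz matrix H, so that
# y = H @ u approximates y(t) = integral of h(t - tau) u(tau) dtau from 0 to t.
H = toeplitz(h, np.zeros(n)) * dt

U, s, Vt = np.linalg.svd(H)
print("leading singular values:", s[:5].round(4))

# Reconstruction check: response to a step input via the SVD factors.
u = np.ones(n)
y = (U * s) @ (Vt @ u)
print("max error vs direct convolution:", np.abs(y - H @ u).max())
```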

Keywords: singular value decomposition, impulse response function, Green's function, Toeplitz matrix, Hankel matrix

Procedia PDF Downloads 154
32113 The Impact of City Mobility on Propagation of Infectious Diseases: Mathematical Modelling Approach

Authors: Asrat M. Belachew, Tiago Pereira (Institute of Mathematics and Computer Sciences, Avenida Trabalhador São-Carlense, 400, São Carlos, 13566-590, Brazil)

Abstract:

Infectious diseases are among the most prominent threats to human beings. They cause morbidity and mortality to individuals and can collectively collapse the social, economic, and political systems of the whole world. Mathematical models are fundamental tools for comprehensively understanding how infectious diseases spread and for designing control strategies to mitigate them in the host population. Modeling the spread of infectious diseases using a compartmental model with a homogeneous population is attractive in terms of complexity; however, the real world exhibits heterogeneity, such as ages, locations, and contact patterns of the population, which is ignored in a homogeneous setting. In this work, we study how the classical SEIR compartmental model of infectious disease spreading can be extended by incorporating the mobility of the population between heterogeneous cities during an outbreak. We formulate an SEIR multi-city epidemic spreading model using a system of 4k ordinary differential equations to describe the disease transmission dynamics in k cities during day and night. We show that the model is epidemiologically (i.e., variables have biological interpretation) and mathematically (i.e., a unique bounded solution exists for all time) well-posed. We construct the next-generation matrix (NGM) for the model and calculate the basic reproduction number R0 for the SEIR epidemic spreading model with city mobility. R0 depends on the spectral radius of the mobility operator, and it is a threshold between asymptotic stability of the disease-free equilibrium and disease persistence. Using the eigenvalue perturbation theorem, we show that sending a fraction of the population between cities decreases the reproduction number of the disease in the interconnected cities; as a result, disease transmission in the population decreases.
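
The full 4k-equation day/night formulation is not reproduced in the abstract; below is a minimal two-city SEIR with a linear mobility coupling to illustrate the structure (the rates, movement matrix, and populations are invented).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-city SEIR with a linear mobility coupling (illustrative only; the
# paper's day/night formulation uses 4k equations for k cities).
beta, sigma, gamma = 0.4, 1 / 5.0, 1 / 7.0
m = np.array([[0.00, 0.05],    # m[i, j]: per-day movement rate from city i to j
              [0.02, 0.00]])

def rhs(t, y):
    S, E, I, R = y.reshape(4, 2)
    N = S + E + I + R
    move = lambda x: m.T @ x - m.sum(axis=1) * x   # inflow minus outflow
    new_inf = beta * S * I / N
    dS = -new_inf + move(S)
    dE = new_inf - sigma * E + move(E)
    dI = sigma * E - gamma * I + move(I)
    dR = gamma * I + move(R)
    return np.concatenate([dS, dE, dI, dR])

y0 = np.array([99_000.0, 50_000.0,   # S in city 1, city 2
               0.0, 0.0,             # E
               1_000.0, 0.0,         # I: outbreak seeded in city 1 only
               0.0, 0.0])            # R
sol = solve_ivp(rhs, (0, 120), y0, max_step=0.5)
print("peak infections: city 1 = %.0f, city 2 = %.0f"
      % (sol.y[4].max(), sol.y[5].max()))
```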

Keywords: SEIR-model, mathematical model, city mobility, epidemic spreading

Procedia PDF Downloads 108
32112 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults

Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter

Abstract:

Physics-based dynamic rupture modelling is necessary for estimating parameters such as rupture velocity and slip rate function that are important for ground motion simulation but poorly resolved by observations, e.g., by seismic source inversion. In order to generate a large number of physically self-consistent rupture models, whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under the heterogeneous rate-and-state (RS) friction law for a 45° dip-slip fault. We performed a parametrization study by fully dynamic rupture modeling, and then a set of spontaneous source models was generated over a large magnitude range (Mw > 7.0). In order to validate the rupture models, we compare the scaling relations versus seismic moment Mo for the modeled rupture area S, as well as for the average slip Dave and the slip-asperity area Sa, with similar scaling relations from source inversions. Ground motions were also computed from our models; their peak ground velocities (PGV) agree well with the GMPE values. We also obtained good agreement of the permanent surface offset values with empirical relations. From the heterogeneous rupture models, we analyzed the parameters that are critical for ground motion simulations, i.e., the distributions of slip, slip rate, rupture initiation points, rupture velocities, and source time functions. We studied cross-correlations between them and with the friction weakening distance Dc, the only initial heterogeneity parameter in our modeling. The main findings are: (1) high slip-rate areas coincide with, or are located on an outer edge of, the large-slip areas; (2) ruptures have a tendency to initiate in small-Dc areas; and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity, and short rise time.
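
For context, the rate-and-state framework named in the abstract is commonly written with the Dieterich aging law (a standard form, not necessarily the exact variant used here), where Dc is the characteristic weakening distance varied across the fault:

```latex
\mu = \mu_0 + a \,\ln\frac{V}{V_0} + b \,\ln\frac{V_0\,\theta}{D_c},
\qquad
\dot{\theta} = 1 - \frac{V\,\theta}{D_c},
```

with friction coefficient mu, slip rate V, reference values mu0 and V0, state variable theta, and constitutive parameters a and b; velocity-weakening regions (a - b < 0) can nucleate spontaneous ruptures over many cycles.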

Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization

Procedia PDF Downloads 143
32111 Enhancing Organizational Performance through Adaptive Learning: A Case Study of ASML

Authors: Ramin Shadani

Abstract:

This study introduces adaptive performance as a key dimension of organizational performance and explores the relationship between the dimensions of a learning organization and adaptive performance. A survey was conducted using the Dimensions of the Learning Organization Questionnaire (DLOQ), followed by factor analysis and structural equation modeling to investigate the dynamics between learning organization practices and adaptive performance. The results confirm that adaptive performance is indeed an important dimension of organizational performance. The study also shows that perceived knowledge and adaptive performance mediate the positive relationship between learning organization practices and perceived financial performance. We extend existing DLOQ research by demonstrating that adaptive performance, as a nonfinancial organizational learning outcome, has a significant impact on financial performance. Our study also provides additional validation of the DLOQ's performance measures. Organizations need to consider how learning and development activities can deliver better overall improvement in performance, especially in enhancing adaptive capability. The study provides the requisite empirical support that learning and development activities within organizations enable much-improved intangible performance outcomes, especially through adaptive performance.

Keywords: adaptive performance, continuous learning, financial performance, leadership style, organizational learning, organizational performance

Procedia PDF Downloads 25
32110 Amplitude Versus Offset (AVO) Modeling as a Tool for Seismic Reservoir Characterization of the Semliki Basin

Authors: Hillary Mwongyera

Abstract:

The Semliki basin has become a frontier for petroleum exploration in recent years. Exploration efforts have resulted in extensive seismic data acquisition and the drilling of three wells, namely Turaco 1, Turaco 2, and Turaco 3. A petrophysical analysis of the Turaco 1 well was carried out to identify two reservoir zones, on which AVO modeling was performed. A combination of seismic modeling and rock physics modeling was applied during reservoir characterization and monitoring to determine variations of seismic responses with amplitude characteristics. AVO intercept-gradient analysis applied to synthetic AVO CDP gathers classified the anomalies associated with both reservoir zones as Class 1 AVO anomalies. Fluid replacement modeling was carried out on both reservoir zones using homogeneous mixing and patchy saturation patterns to determine the effects of fluid substitution on rock property interactions. For both homogeneous mixing and patchy saturation, density (ρ) showed an increasing trend with increasing brine substitution, while shear-wave velocity (Vs) decreased with increasing brine substitution. The behavior of compressional-wave velocity (Vp) with increasing brine substitution differed between the two patterns: during patchy saturation, Vp increased with increasing brine substitution, whereas during homogeneous mixing, Vp showed a slightly decreasing trend with increasing brine substitution but increased sharply towards and at full brine saturation. A sensitivity analysis showed that density was the rock property most responsive to brine saturation, except at full brine saturation during homogeneous mixing, where Vp showed greater sensitivity to brine saturation. Rock physics modeling was performed to predict diagnostics of reservoir quality using an inverse deterministic approach, which showed low shale content and a high degree of shale stiffness within the reservoir zones.
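
Fluid replacement modeling of this kind is typically built on Gassmann's relation; since the abstract does not state the exact workflow, the sketch below is generic, with illustrative sandstone numbers rather than Turaco 1 values. It also reproduces the reported behavior that Vp rises strongly towards full brine saturation.

```python
def gassmann_ksat(k_dry, k_min, k_fl, phi):
    """Saturated bulk modulus from Gassmann's relation (isotropic assumptions)."""
    b = (1 - k_dry / k_min) ** 2
    return k_dry + b / (phi / k_fl + (1 - phi) / k_min - k_dry / k_min**2)

def velocities(k_sat, mu, rho):
    """P- and S-wave velocities; shear modulus is unaffected by the pore fluid."""
    vp = ((k_sat + 4 * mu / 3) / rho) ** 0.5
    vs = (mu / rho) ** 0.5
    return vp, vs

# Illustrative sandstone numbers in SI units, not Turaco 1 values.
k_dry, k_min, mu, phi = 12e9, 37e9, 9e9, 0.20
for fluid, k_fl, rho_fl in [("gas", 0.05e9, 200.0), ("brine", 2.8e9, 1030.0)]:
    rho = 2650.0 * (1 - phi) + rho_fl * phi       # bulk density, kg/m3
    vp, vs = velocities(gassmann_ksat(k_dry, k_min, k_fl, phi), mu, rho)
    print(f"{fluid:6s} Vp = {vp:.0f} m/s, Vs = {vs:.0f} m/s")
```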

Keywords: Amplitude Versus Offset (AVO), fluid replacement modelling, reservoir characterization, AVO attributes, rock physics modelling, reservoir monitoring

Procedia PDF Downloads 530
32109 Behavioral Patterns of Adopting Digitalized Services (E-Sport versus Sports Spectating) Using Agent-Based Modeling

Authors: Justyna P. Majewska, Szymon M. Truskolaski

Abstract:

The growing importance of digitalized services in the so-called new economy, including the e-sports industry, has become observable in recent years. Various demographic and technological changes lead consumers to modify their needs, not regarding the services themselves but the method of their application (attracting customers, forms of payment, new content, etc.). In the case of leisure related to competitive spectating activities, there is a growing need to participate in events whose content is not sports competition but computer-game challenges: e-sport. The literature in this area has so far focused on determining the number of e-sport fans with elements of simple statistical description (mainly concerning demographic characteristics such as age, gender, and place of residence). Meanwhile, the development of the industry is influenced by a combination of many different, intertwined demographic, personality, and psychosocial characteristics of customers, as well as the characteristics of their environment. There is therefore a need for deeper recognition of the determinants of the behavioral patterns behind customers' selection of digitalized services, which, in the absence of large available data sets, can be achieved by using econometric simulations, i.e., multi-agent modeling. The cognitive aim of the study is to reveal internal and external determinants of customers' behavioral patterns, taking into account various variants of economic development (the pace of digitization and technological development, socio-demographic changes, etc.). In the paper, an agent-based model with heterogeneous agents (characteristics of customers themselves and their environment) was developed, which allowed the identification of a three-stage development scenario: i) initial interest, ii) standardization, and iii) full professionalization. The probabilities governing the transition process were estimated using the Method of Simulated Moments. The estimation of the agent-based model parameters and a sensitivity analysis reveal the crucial factors that have driven the rising trend in e-sport spectating and, in a wider perspective, the development of digitalized services. Among the psychosocial characteristics of customers, these are the level of familiarity with the rules of games as well as sports disciplines, active and passive participation history, and the individual perception of challenging activities. Environmental factors include the general reception of games, the number and level of recognition of community builders, and the level of technological development of streaming and community-building platforms. The crucial factor underlying the good predictive power of the model, however, is the level of professionalization. In the initial-interest phase, the entry barriers for new customers are high; they decrease during the standardization phase and increase again in the phase of full professionalization, when new customers perceive the required participation history as inaccessible. In that case, customers are prone to switch to new methods of service application: in the case of e-sport versus sports, to new content and more modern methods of its delivery. In a wider context, the findings of the paper support the idea of a life cycle of services regarding the methods of their application, from "traditional" to digitalized.
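
The paper's model specification is not given in the abstract; the toy sketch below only illustrates the three-phase mechanism it describes, with agents weighing personal familiarity and a community effect against a phase-dependent entry barrier. All parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical agents: familiarity with games/sports drawn from a skewed prior.
N, STEPS = 10_000, 60
familiarity = rng.beta(2, 5, size=N)
adopted = np.zeros(N, dtype=bool)

def entry_barrier(step):
    # High at initial interest, low under standardization, high again under
    # full professionalization (participation history becomes inaccessible).
    if step < 20:
        return 0.6
    if step < 40:
        return 0.3
    return 0.7

for step in range(STEPS):
    social = adopted.mean()                  # community-building effect
    p_adopt = np.clip(familiarity + 0.5 * social - entry_barrier(step), 0, 1)
    adopted |= rng.random(N) < 0.1 * p_adopt
    if step in (19, 39, 59):
        print(f"step {step + 1:2d}: adoption = {adopted.mean():.1%}")
```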

Keywords: agent-based modeling, digitalized services, e-sport, spectator motives

Procedia PDF Downloads 172
32108 Hydrodynamic Analysis on the Body of a Solar Autonomous Underwater Vehicle by Numerical Method

Authors: Mohammad Moonesun, Ehsan Asadi Asrami, Julia Bodnarchuk

Abstract:

For a solar autonomous underwater vehicle, which uses photovoltaic panels to provide its required power, energy is limited, so accurate estimation of resistance and energy consumption is highly sensitive. In this work, hydrodynamic calculations by numerical methods are studied for a solar autonomous underwater vehicle equipped with two 50 W photovoltaic panels. To evaluate the required power and energy, the hull hydrodynamic resistance at several velocities must be taken into account. For this assessment, ANSYS FLUENT 18 is applied as the Computational Fluid Dynamics (CFD) tool to solve the Reynolds-Averaged Navier-Stokes (RANS) equations around the AUV hull, with k-ω SST as the turbulence model. To validate the solution method and modeling approach, a model of the Myring submarine, for which experimental data were available, is simulated. There is good agreement between the numerical and experimental results. These results also show that the k-ω SST turbulence model is well suited to simulating AUV motion at low velocities.
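
To see why the resistance estimate matters so much against a 2 × 50 W panel budget, a back-of-envelope drag-power check is sketched below; the drag coefficient, frontal area, and propulsive efficiency are invented placeholders, not CFD outputs from the paper.

```python
# Hull resistance R = 0.5 * rho * Cd * A * v^2, shaft power P = R * v / eta.
rho = 1025.0          # seawater density, kg/m3
Cd, A = 0.12, 0.045   # hypothetical drag coefficient and frontal area (m2)
eta = 0.5             # assumed overall propulsive efficiency

for v in (0.5, 1.0, 1.5, 2.0):           # cruise speed, m/s
    R = 0.5 * rho * Cd * A * v**2        # resistance, N
    P = R * v / eta                      # required power, W
    print(f"v = {v:.1f} m/s: R = {R:5.1f} N, P = {P:6.1f} W")
```

With these placeholder numbers, the required power grows with the cube of speed and approaches the full panel budget near 2 m/s, which is why accurate resistance curves drive the feasible cruise speed.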

Keywords: underwater vehicle, hydrodynamic resistance, numerical modelling, CFD, RANS

Procedia PDF Downloads 203
32107 Homeless Population Modeling and Trend Prediction Through Identifying Key Factors and Machine Learning

Authors: Shayla He

Abstract:

Background and Purpose: According to Chamie (2017), it is estimated that no less than 150 million people, or about 2 percent of the world's population, are homeless. The homeless population in the United States has grown rapidly in the past four decades. In New York City, the sheltered homeless population increased from 12,830 in 1983 to 62,679 in 2020. Knowing the trend of the homeless population is crucial for helping states and cities make affordable-housing and other community-service plans ahead of time, to better prepare for the situation. This study utilized data from New York City, examined the key factors associated with homelessness, and developed systematic modeling to predict future homeless populations. Using the best model developed, named HP-RNN, an analysis of the homeless population change during the months of 2020 and 2021, which were impacted by the COVID-19 pandemic, was conducted. Moreover, HP-RNN was tested on data from Seattle. Methods: The methodology involves four phases in developing robust prediction methods. Phase 1 gathered and analyzed raw data on homeless populations and demographic conditions from five urban centers. Phase 2 identified the key factors that contribute to the rate of homelessness. In Phase 3, three models were built using Linear Regression, Random Forest, and a Recurrent Neural Network (RNN), respectively, to predict the future trend of society's homeless population. Each model was trained and tuned on the New York City dataset, with accuracy measured by Mean Squared Error (MSE). In Phase 4, the final phase, the best model from Phase 3 was evaluated using data from Seattle that was not part of the model training and tuning process in Phase 3. Results: Compared to the Linear Regression based model used by HUD et al. (2019), HP-RNN significantly improved the prediction metrics, raising the coefficient of determination (R²) from -11.73 to 0.88 and reducing MSE by 99%. HP-RNN was then validated on the data from Seattle, WA, which showed a peak error of 14.5% between the actual and the predicted counts. Finally, the modeling results were used to predict the trend during the COVID-19 pandemic, showing a good correlation between the actual and the predicted homeless population, with a peak error of less than 8.6%. Conclusions and Implications: This is the first work to apply an RNN to model the time series of homelessness-related data. The model shows a close correlation between the actual and the predicted homeless population. There are two major implications of this result. First, the model can be used to predict the homeless population for the next several years, and the prediction can help states and cities plan ahead on affordable-housing allocation and other community services to better prepare for the future. Moreover, this prediction can serve as a reference to policy makers and legislators as they seek to make changes that may impact the factors closely associated with the future homeless population trend.
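
HP-RNN's architecture is not detailed in the abstract; the sketch below only shows the general shape of such a model, a small recurrent network trained on lagged windows of a monthly count series (synthetic data and invented hyperparameters).

```python
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical monthly sheltered-homeless counts, scaled to [0, 1].
series = np.linspace(0.2, 0.9, 120) + 0.05 * np.sin(np.arange(120) / 6.0)

def windows(x, lag=12):
    """Turn a 1-D series into (samples, lag, 1) inputs and next-step targets."""
    X = np.stack([x[i:i + lag] for i in range(len(x) - lag)])
    y = x[lag:]
    return (torch.tensor(X, dtype=torch.float32).unsqueeze(-1),
            torch.tensor(y, dtype=torch.float32))

X, y = windows(series)

class TinyRNN(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.rnn = nn.RNN(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.rnn(x)                      # hidden states per step
        return self.head(out[:, -1]).squeeze(-1)  # predict from last state

model = TinyRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(300):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print("train MSE:", float(loss))
```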

Keywords: homeless, prediction, model, RNN

Procedia PDF Downloads 119
32106 Fault Diagnosis and Fault-Tolerant Control of Bilinear-Systems: Application to Heating, Ventilation, and Air Conditioning Systems in Multi-Zone Buildings

Authors: Abderrhamane Jarou, Dominique Sauter, Christophe Aubrun

Abstract:

Over the past decade, the growing demand for energy efficiency in buildings has attracted the attention of the control community. Failures in HVAC (heating, ventilation, and air conditioning) systems in buildings can have a significant impact on the desired and expected energy performance of buildings, as well as on user comfort. Fault-Tolerant Control (FTC) is a recent technology area that studies the adaptation of control algorithms to faulty operating conditions of a system, and its application to HVAC systems has gained attention in the last two decades. The objective is to keep the variations in system performance due to faults within an acceptable range with respect to the desired nominal behavior. This paper considers the so-called active approach, which is based on a fault detection and identification scheme combined with a control reconfiguration algorithm that determines a new set of control parameters so that the reconfigured performance is "as close as possible," in some sense, to the nominal performance. Thermal models of buildings and their HVAC systems are described by nonlinear (usually bilinear) equations. Most of the work carried out so far in FDI (fault detection and isolation) or FTC considers a linearized model of the studied system; however, such a model is only valid in a reduced range of variation. This study presents a new fault diagnosis (FD) algorithm based on a bilinear observer for the detection and accurate estimation of the magnitude of an HVAC system failure. The main contribution of the proposed FD algorithm is that, instead of using specific linearized models, the algorithm inherits the structure of the actual bilinear model of the building's thermal dynamics. As an immediate consequence, the algorithm is applicable to a wide range of unpredictable operating conditions, i.e., weather dynamics, outdoor air temperature, and zone occupancy profile. A bilinear fault detection observer is proposed for a bilinear system with unknown inputs. The residual vector in the observer design is decoupled from the unknown inputs and, under certain conditions, is made sensitive to all faults. Sufficient conditions are given for the existence of the observer, and results are given for the explicit computation of the observer design matrices. Dedicated observer schemes (DOS) are considered for sensor FDI, while unknown-input bilinear observers are considered for actuator or system-component FDI. The proposed FTC strategy works as follows: at the first level, FDI algorithms are implemented, which also makes it possible to estimate the magnitude of the fault; once a fault is detected, the fault estimate feeds the second level, which reconfigures the control law so that the expected performance is recovered. This paper is organized as follows. A general structure for fault-tolerant control of buildings is first presented, and the building model under consideration is introduced. Then, the observer-based design for fault diagnosis of bilinear systems is studied. The FTC approach is developed in Section IV. Finally, a simulation example is given in Section V to illustrate the proposed method.
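
The abstract's bilinear setting can be summarized in a generic state-space form (illustrative notation, not the paper's exact model), where the products between states and control inputs are precisely what a linearization would discard:

```latex
\dot{x} = A x + \sum_{i=1}^{m} N_i\, x\, u_i + B u + E d + F f,
\qquad
y = C x,
```

with x the zone temperatures, u the HVAC control inputs, d the unknown inputs (weather, occupancy), and f the fault vector; the bilinear observer is designed so that the residual r = y - y_hat is decoupled from d while remaining sensitive to f.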

Keywords: bilinear systems, fault diagnosis, fault-tolerant control, multi-zones building

Procedia PDF Downloads 171
32105 Real-Time Recognition of Dynamic Hand Postures on a Neuromorphic System

Authors: Qian Liu, Steve Furber

Abstract:

To explore how the brain may recognize objects in its general, accurate, and energy-efficient manner, this paper proposes the use of a neuromorphic hardware system formed from a Dynamic Vision Sensor (DVS) silicon retina in concert with the SpiNNaker real-time Spiking Neural Network (SNN) simulator. As a first step in the exploration of this platform, a recognition system for dynamic hand postures is developed, enabling the study of the methods used in the visual pathways of the brain. Inspired by the behaviour of the primary visual cortex, Convolutional Neural Networks (CNNs) are modeled using both linear perceptrons and spiking Leaky Integrate-and-Fire (LIF) neurons. In this study's largest configuration using these approaches, a network of 74,210 neurons and 15,216,512 synapses is created and operated in real-time using 290 SpiNNaker processor cores in parallel, achieving 93.0% accuracy. A smaller network using only 1/10th of the resources is also created, again operating in real-time, and it is able to recognize the postures with an accuracy of around 86.4%, only 6.6% lower than that of the much larger system. The recognition rate of the smaller network developed on this neuromorphic system is sufficient for a successful hand posture recognition system and demonstrates a much-improved cost-to-performance trade-off in its approach.
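
The LIF neuron named above is the basic computing unit of such SNNs; a minimal discrete-time version is sketched below with illustrative constants (not the SpiNNaker parameterization used in the paper).

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron: the membrane potential decays
# toward rest, integrates input current, fires on crossing threshold, resets.
dt, tau_m = 1.0, 20.0                          # time step and membrane tau, ms
v_rest, v_reset, v_thresh = -65.0, -70.0, -50.0  # mV

def lif(input_current, r_m=10.0):
    """Return spike times (in steps) for a current trace; r_m in megaohms."""
    v, spikes = v_rest, []
    for t, i_t in enumerate(input_current):
        v += dt / tau_m * (-(v - v_rest) + r_m * i_t)
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return spikes

current = np.concatenate([np.zeros(50), 2.0 * np.ones(150)])  # step input, nA
print("spike times (ms):", lif(current))
```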

Keywords: spiking neural network (SNN), convolutional neural network (CNN), posture recognition, neuromorphic system

Procedia PDF Downloads 470
32104 The Effect of Annual Weather and Sowing Date on Germination and Yield of Different Maize (Zea mays L.) Genotypes

Authors: Ákos Tótin

Abstract:

In crop production, the most modern hybrids are available to us; yield and yield stability are therefore determined by the agro-technology. The purpose of the experiment is to adapt modern agro-technology to the new types of hybrids. The long-term experiment was set up in 2015-2016 on chernozem soil in the Hajdúság region (eastern Hungary). The plots were sown at a density of 75 thousand plants ha⁻¹. We examined several hybrids widely grown in Hungary. The studies conducted were: germination dynamics, growing dynamics, and the effect of annual weather on yield. We used three different sowing dates (early, average, and late) and counted how many plants germinated during the germination process. In the experiment, we observed the germination dynamics of 6 hybrids in 4 replications; in each replication, we counted the germinated plants in an area 2 m long and 2 rows wide. Data are shown as the average of the 6 hybrids and 4 replications. Growing dynamics were measured from a plant height of 10 cm (4-6 leaves), recording the height of 10 plants at two-week intervals. The yield was measured by a special plot harvester, the Sampo Rosenlew 2010, which weighed the harvested plot and also took a sample from it. We determined the water content of the samples for the water-release dynamics and then calculated the yield (t/ha) of each plot at 14% moisture content to compare them. We evaluated the data using Microsoft Excel 2015. The annual weather in each crop year defines the maize germination dynamics, because the amount of heat is decisive for the plants; in a cooler crop year, the weather prolongs germination. In the 2015 crop year, the weather was cold at the beginning, which prolonged germination of the first sowing, while the second and third sowings germinated faster. In the 2016 crop year, the weather was much more favorable for the plants, so the first sowing germinated faster than in the previous year; afterwards, the weather cooled down, and therefore the second and third sowings germinated more slowly than in the previous year. Statistical analysis determined that there is a significant difference between the growing dynamics of the early and late sowing dates. In 2015, the first sowing date gave the highest yield, the average sowing date the second highest, and the late sowing date the lowest.

Keywords: germination, maize, sowing date, yield

Procedia PDF Downloads 230
32103 Way to Successful Enterprise Resource Planning System Implementation in Developing Countries: Case of Public Sector Unit

Authors: Suraj Kumar Mukti

Abstract:

Enterprise Resource Planning (ERP) systems are management tools that integrate all departments in an organization. An ERP system integrates business processes, manages resources efficiently, and provides an appropriate decision support system to management. ERP system implementation is typically a time-consuming and costly process. Articles on the key success factors of ERP system implementation are available in the literature, but few authors have focused on a roadmap for successful ERP system implementation. Postponement is preferable if the organization is not ready to implement the ERP system properly; hence, checking an organization's readiness to adopt the new system is an important prerequisite for ensuring the success of ERP system implementation. Then comes the question of what constitutes the success of an ERP system implementation. Benefits achieved by an ERP system may be divided into two categories, viz. tangible and intangible benefits. This research article presents a roadmap to ensure the success of ERP system implementation, with the benefits achieved through the new system serving as a success indicator. A case study is presented to evaluate the success of, and benefits achieved through, the new system. The article gives a comprehensive approach to academicians and a roadmap to organizations seeking to implement an ERP system.

Keywords: ERP system, decision support system, tangible, intangible

Procedia PDF Downloads 331
32102 Establishment of Landslide Warning System Using Surface or Sub-Surface Sensors Data

Authors: Neetu Tyagi, Sumit Sharma

Abstract:

The study illustrates the results of an integrated investigation of the Tangni landslide, located on NH-58 at Chamoli, Uttarakhand. Geological, geomorphological, and geotechnical investigations were carried out to understand the mechanism of the landslide and to plan further investigation and monitoring. The movements were favored by continuous infiltration of rainfall water from the zones where the phyllites/slates and dolomites outcrop. Site investigations, including the monitoring of landslide movements and of water-level fluctuations due to rainfall, give a better understanding of the landslide dynamics that have been causing soil instability at the Tangni landslide site over time. For the Early Warning System (EWS), different types of sensors were installed; all sensors were directly connected to a data logger, and raw data were transferred to the Defence Terrain Research Laboratory (DTRL) server room with the help of the File Transfer Protocol (FTP). Slip surfaces were found at depths ranging from 8 to 10 m by geophysical survey, and sensors were hence installed to a depth of 15 m at various locations on the landslide. Rainfall is the main triggering factor of the landslide, and in this study a model of unsaturated soil-slope stability is developed. Analysis of the sensor data available for one year indicated a sliding surface at depths between 6 and 12 m, with a total displacement of up to 6 cm per year recorded in the body of the landslide. The aim of this study is to set thresholds and generate early warnings, so that local people, who are already alert to the landslide, can benefit from a warning system.

Keywords: early warning system, file transfer protocol, geo-morphological, geotechnical, landslide

Procedia PDF Downloads 155
32101 Speeding up Nonlinear Time History Analysis of Base-Isolated Structures Using a Nonlinear Exponential Model

Authors: Nicolò Vaiana, Giorgio Serino

Abstract:

The nonlinear time history analysis of seismically base-isolated structures can require significant computational effort when the behavior of each seismic isolator is predicted by adopting the widely used Bouc-Wen differential-equation model. In this paper, a nonlinear exponential model, able to simulate the response of seismic isolation bearings within a relatively large displacement range, is described and adopted in order to reduce the numerical computations and speed up the nonlinear dynamic analysis. Compared to the Bouc-Wen model, the proposed one does not require the numerical solution of a nonlinear differential equation for each time step of the analysis. The seismic response of a 3D base-isolated structure with a lead rubber bearing system subjected to harmonic earthquake excitation is simulated by modeling each isolator using the proposed analytical model. The comparison of the numerical results and computational time with those obtained by modeling the lead rubber bearings using the Bouc-Wen model demonstrates the good accuracy of the proposed model and its capability to significantly reduce the computational effort of the analysis.
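
For context, a common single-degree-of-freedom form of the Bouc-Wen isolator model that the exponential model replaces is (standard textbook notation, not the paper's):

```latex
F = \alpha\, k\, x + (1 - \alpha)\, k\, z,
\qquad
\dot{z} = \dot{x}\left\{ A - \left[ \beta\, \operatorname{sgn}(z \dot{x}) + \gamma \right] |z|^{n} \right\},
```

where the hysteretic variable z must be integrated numerically at every time step of the response analysis, which is exactly the cost the proposed exponential model avoids.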

Keywords: base isolation, computational efficiency, nonlinear exponential model, nonlinear time history analysis

Procedia PDF Downloads 382
32100 FlameCens: Visualization of Expressive Deviations in Music Performance

Authors: Y. Trantafyllou, C. Alexandraki

Abstract:

Music interpretation refers to the way musicians shape their performance by deliberately deviating from the composer's intentions, which are commonly communicated via some form of music transcription, such as a music score. For transcribed and non-improvised music, musical expression is manifested by introducing subtle deviations in tempo, dynamics, and articulation during the evolution of a performance. This paper presents an application, named FlameCens, which, given two recordings of the same piece of music, presumably performed by different musicians, allows visualising deviations in tempo and dynamics during playback. The application may also compare a certain performance to the music score of that piece (i.e., a MIDI file), which may be thought of as an expression-neutral representation of the piece, hence depicting the expressive cues employed by certain performers. FlameCens uses the Dynamic Time Warping algorithm to compare two audio sequences, based on CENS (Chroma Energy Normalized Statistics) audio features. Expressive deviations are illustrated by a moving flame, generated by an animation of particles. The length of the flame is mapped to deviations in dynamics, while the slope of the flame is mapped to tempo deviations, so that faster tempo tilts the slope to the right and slower tempo tilts the slope to the left; a constant slope signifies no tempo deviation. The detected deviations in tempo and dynamics can additionally be recorded in a text file, which allows for offline investigation. Moreover, in the case of monophonic music, the color of the particles is used to convey the pitch of the notes during the performance. FlameCens has been implemented in Python and is openly available via GitHub. The application has been experimentally validated for different music genres, including classical, contemporary, jazz, and popular music. These experiments revealed that FlameCens can be a valuable tool for music specialists (i.e., musicians or musicologists) to investigate the expressive performance strategies employed by different musicians, as well as for music audiences to enhance their listening experience.
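
A minimal sketch of the CENS-plus-DTW alignment step is shown below using the librosa library (the file names are placeholders, and the tempo-ratio mapping is a simplification of whatever FlameCens does internally).

```python
import librosa
import numpy as np

# Align two performances of the same piece with DTW over CENS chroma features.
y1, sr1 = librosa.load("performance_a.wav")
y2, sr2 = librosa.load("performance_b.wav")

c1 = librosa.feature.chroma_cens(y=y1, sr=sr1)
c2 = librosa.feature.chroma_cens(y=y2, sr=sr2)

# wp is the optimal warping path: pairs of aligned frame indices.
D, wp = librosa.sequence.dtw(X=c1, Y=c2, metric="cosine")
wp = wp[::-1]  # order the path from start to end

# The local slope of the path approximates relative tempo; deviation from 1
# would drive the flame slope, while loudness ratios could drive its length.
steps = np.diff(wp, axis=0)
valid = steps[:, 0] > 0                # skip vertical steps to avoid div by 0
tempo_ratio = steps[valid, 1] / steps[valid, 0]
print("median relative tempo:", float(np.median(tempo_ratio)))
```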

Keywords: audio synchronization, computational music analysis, expressive music performance, information visualization

Procedia PDF Downloads 128
32099 ESRA: An End-to-End System for Re-identification and Anonymization of Swiss Court Decisions

Authors: Joel Niklaus, Matthias Sturmer

Abstract:

The publication of judicial proceedings is a cornerstone of many democracies. It makes the court system accountable by ensuring that justice is administered in accordance with the law. Equally important is privacy as a fundamental human right (Article 12 of the Universal Declaration of Human Rights). Therefore, it is important that the parties (especially minors, victims, or witnesses) involved in these court decisions be anonymized securely. Today, the anonymization of court decisions in Switzerland is performed either manually or semi-automatically using primitive software. While much research has been conducted on anonymization for tabular data, the literature on anonymization for unstructured text documents is thin and virtually non-existent for court decisions. In 2019, it was shown that manual anonymization is not secure enough: in 21 of 25 attempted Swiss federal court decisions related to pharmaceutical companies, the pharmaceuticals and legal parties involved could be manually re-identified. This was achieved by linking the decisions with external databases using regular expressions. An automated re-identification system serves as an automated test for the safety of existing anonymizations and thus promotes the right to privacy. Manual anonymization is also very expensive (recurring annual costs of over CHF 20M in Switzerland alone, according to one estimate). Consequently, many Swiss courts publish only a fraction of their decisions. An automated anonymization system reduces these costs substantially, freeing capacity to publish court decisions far more comprehensively. For the re-identification system, topic modeling with latent Dirichlet allocation is used to cluster over 500K Swiss court decisions into meaningful related categories. A comprehensive knowledge base with publicly available data (such as social media, newspapers, government documents, geographical information systems, business registers, online address books, obituary portals, web archives, etc.) is constructed to serve as an information hub for re-identifications. For the actual re-identification, a general-purpose language model is fine-tuned on the respective part of the knowledge base for each category of court decisions separately. The input to the model is the court decision to be re-identified, and the output is a probability distribution over named entities constituting possible re-identifications. For the anonymization system, named entity recognition (NER) is used to recognize the tokens that need to be anonymized. Since the focus lies on Swiss court decisions in German, a corpus of Swiss legal texts will be built for training the NER model. The recognized named entities are replaced by the category determined by the NER model plus an identifier to preserve context. This work is part of an ongoing research project conducted by an interdisciplinary research consortium; both a legal analysis and the implementation of the proposed system design will be performed within the next three years. This study introduces the system design of ESRA, an end-to-end system for re-identification and anonymization of Swiss court decisions. Firstly, the re-identification system tests the safety of existing anonymizations and thus promotes privacy. Secondly, the anonymization system substantially reduces the costs of manually anonymizing court decisions and thus enables a more comprehensive publication practice.
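
As an illustration of the clustering step only, here is a minimal sketch with scikit-learn; the corpus, vectorizer settings, and topic count are placeholders, since the paper does not specify its implementation.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Placeholder corpus standing in for the ~500K court decisions.
decisions = ["...text of decision 1...", "...text of decision 2..."]

# Bag-of-words representation of the decisions.
vectorizer = CountVectorizer(max_df=0.9, min_df=1)
X = vectorizer.fit_transform(decisions)

# LDA with an illustrative number of topics; each decision is then
# assigned to its dominant topic, which defines the category used for
# the per-category language-model fine-tuning described above.
lda = LatentDirichletAllocation(n_components=20, random_state=0)
doc_topic = lda.fit_transform(X)
categories = doc_topic.argmax(axis=1)
```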

Keywords: artificial intelligence, courts, legal tech, named entity recognition, natural language processing, privacy, topic modeling

Procedia PDF Downloads 147
32098 Analytical Modeling of Drain Current for DNA Biomolecule Detection in Double-Gate Tunnel Field-Effect Transistor Biosensor

Authors: Ashwani Kumar

Abstract:

This study presents an analytical modeling approach for analyzing the drain-current behavior of Tunnel Field-Effect Transistor (TFET) biosensors used for the detection of DNA biomolecules. The proposed model focuses on elucidating the relationship between the drain current and the presence of DNA biomolecules, taking into account the impact of various device parameters and biomolecule characteristics. Through comprehensive analysis, the model offers insights into the underlying mechanisms governing the sensing performance of TFET biosensors, aiding the optimization of device design and operation. A non-local tunneling model is incorporated together with other essential models so that the simulated data closely track the modeled results. An experimental validation of the model is provided, demonstrating its efficacy in accurately predicting the drain-current response to DNA biomolecule detection. The sensitivity attained from the analytical model is compared and contrasted with that reported in ongoing research in this area.
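
For orientation, the sketch below evaluates Kane's band-to-band tunneling (BTBT) generation rate, the usual core of analytical TFET drain-current models, together with the standard biosensor sensitivity metric. All constants are illustrative textbook-style values, not the paper's calibrated parameters.

```python
import numpy as np

# Kane's BTBT generation rate (illustrative constants):
#   G(E) = A_K * E^2 / sqrt(Eg) * exp(-B_K * Eg^(3/2) / E)
A_K = 4.0e14      # cm^-3 s^-1 (V/cm)^-2, illustrative
B_K = 1.9e7       # V/cm, illustrative
E_G = 1.12        # band gap in eV (silicon)

def kane_generation(E):
    """Local BTBT generation rate for electric field E (V/cm)."""
    E = np.maximum(E, 1e-30)               # guard against division by zero
    return A_K * E**2 / np.sqrt(E_G) * np.exp(-B_K * E_G**1.5 / E)

# The drain current follows from integrating q * G over the tunneling
# volume; biosensor sensitivity is commonly reported as the relative
# current change between a biomolecule-filled and an empty (air) cavity.
def sensitivity(i_bio, i_air):
    return (i_bio - i_air) / i_air
```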

Keywords: biosensor, double-gate TFET, DNA detection, drain current modeling, sensitivity

Procedia PDF Downloads 55
32097 Modelling Spatial Dynamics of Terrorism

Authors: André Python

Abstract:

To this day, terrorism persists as a worldwide threat, exemplified by the deadly attacks of January 2015 in Paris and the ongoing massacres perpetrated by ISIS in Iraq and Syria. In response to this threat, states deploy various counterterrorism measures, the cost of which could be reduced through effective preventive measures. To increase the efficiency of preventive measures, policy-makers may benefit from accurate predictive models able to capture the complex spatial dynamics of terrorism occurring at a local scale. Although empirical research carried out at the country level has confirmed theories explaining the diffusion processes of terrorism across space and time, scholars have not yet assessed these diffusion theories at a local scale. Moreover, because scholars have not made the most of recent statistical modelling approaches, they have been unable to build predictive models accurate in both space and time. In an effort to address these shortcomings, this research proposes a novel approach to systematically assess the theories of terrorism's diffusion at a local scale and provide a predictive model of the local spatial dynamics of terrorism worldwide. With a focus on the lethal terrorist events that occurred after 9/11, this paper addresses the following question: why and how does lethal terrorism diffuse in space and time? Based on geolocalised data on worldwide terrorist attacks and covariates gathered from 2002 to 2013, a binomial spatio-temporal point process is used to model the probability of terrorist attacks on a sphere (the world), the surface of which is discretised into Delaunay triangles and refined in areas of specific interest. Within a Bayesian framework, the model is fitted through an integrated nested Laplace approximation (INLA), a recent fitting approach that computes fast and accurate estimates of posterior marginals. Hence, for each location in the world, the model provides the probability of encountering a lethal terrorist attack together with measures of volatility, which indicate the model's predictability. Diffusion processes are visualised through interactive maps that highlight space-time variations in the probability and volatility of encountering a lethal attack from 2002 to 2013. Based on the previous twelve years of observation, the location and lethality of terrorist events in 2014 are accurately predicted. Throughout the global scope of this research, local diffusion processes such as escalation and relocation are systematically examined: the former describes an expansion from areas with a high concentration of lethal terrorist events (hotspots) to neighbouring areas, while the latter is characterised by changes in the location of hotspots. By controlling for the effect of geographical, economic and demographic variables, the results suggest that the diffusion processes of lethal terrorism are jointly driven by contagious and non-contagious factors operating at a local scale, as predicted by theories of diffusion. Moreover, by providing a quantitative measure of predictability, the model prevents policy-makers from making decisions based on highly uncertain predictions. Ultimately, this research may provide important complementary tools to enhance the efficiency of policies that aim to prevent and combat terrorism.
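
The modeling idea can be sketched as follows. The paper fits its binomial spatio-temporal point process with INLA (typically accessed via R-INLA), so the PyMC/MCMC stand-in below is emphatically not the authors' method, only a structural illustration; all data are synthetic placeholders and the cell-level random effect is a crude surrogate for the smooth spatio-temporal field on the triangulated sphere.

```python
import numpy as np
import pymc as pm

# Synthetic stand-in data: one row per space-time cell of the triangulation.
rng = np.random.default_rng(0)
n_cells = 200
covariates = rng.normal(size=(n_cells, 3))    # geographic/economic/demographic
attacks = rng.binomial(1, 0.1, size=n_cells)  # 1 if a lethal attack occurred

# Binomial (Bernoulli) model with covariate effects plus a cell-level
# random effect; the paper instead uses INLA on a Delaunay triangulation
# of the sphere, which avoids MCMC entirely.
with pm.Model():
    beta = pm.Normal("beta", 0.0, 1.0, shape=3)
    intercept = pm.Normal("intercept", 0.0, 5.0)
    u = pm.Normal("u", 0.0, 1.0, shape=n_cells)
    logit_p = intercept + pm.math.dot(covariates, beta) + u
    pm.Bernoulli("y", logit_p=logit_p, observed=attacks)
    idata = pm.sample(1000, tune=1000, chains=2)
```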

Keywords: diffusion process, terrorism, spatial dynamics, spatio-temporal modeling

Procedia PDF Downloads 349