Search results for: generative models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6850


4480 A Study on Reinforced Concrete Beams Enlarged with Polymer Mortar and UHPFRC

Authors: Ga Ye Kim, Hee Sun Kim, Yeong Soo Shin

Abstract:

Many studies have been conducted on methods for repairing and strengthening concrete structures. The traditional retrofit method attaches fiber sheets such as CFRP (Carbon Fiber Reinforced Polymer), GFRP (Glass Fiber Reinforced Polymer), or AFRP (Aramid Fiber Reinforced Polymer) to the concrete structure. However, this method has significant downsides: a risk of debonding and an increase in displacement caused by an insufficient structural section. Enlarging the structural member with polymer mortar or Ultra-High Performance Fiber Reinforced Concrete (UHPFRC) is therefore an effective means of strengthening a concrete structure. This paper investigates the structural performance of reinforced concrete (RC) beams enlarged with polymer mortar and compares the experimental results with analytical results. Nonlinear finite element analyses were conducted to reproduce the experimental results and to predict the structural behavior of retrofitted RC beams accurately without a costly experimental process. In addition, this study compares a commonly used retrofit material (polymer mortar) with a more recently adopted material (UHPFRC) through nonlinear finite element analyses. In the first part of the study, RC beams with different cover types were fabricated for the experiment; each beam was 250 millimeters deep, 150 millimeters wide, and 2800 millimeters long. To verify the experiment, nonlinear finite element models were generated using the commercial software ABAQUS 6.10-3. Both the experimental and analytical results demonstrated a good strengthening effect on the RC beams and showed similar tendencies, so the proposed analytical method can be used to predict the behavior of strengthened RC beams. In the second part of the study, the main parameter was the type of retrofit material. The same nonlinear finite element models were generated to compare polymer mortar with UHPFRC. The two retrofit materials were evaluated and the retrofit effect was verified by the analytical results.

Keywords: retrofit material, polymer mortar, UHPFRC, nonlinear finite element analysis

Procedia PDF Downloads 418
4479 Visco-Hyperelastic Finite Element Analysis for Diagnosis of Knee Joint Injury Caused by Meniscal Tearing

Authors: Eiji Nakamachi, Tsuyoshi Eguchi, Sayo Yamamoto, Yusuke Morita, H. Sakamoto

Abstract:

In this study, we aim to reveal the relationship between meniscal tearing and articular cartilage injury of the knee joint by using the dynamic explicit finite element (FE) method. Meniscal injuries reduce the meniscus's functional ability and consequently increase the load on the articular cartilage of the knee joint. In order to prevent the induction of osteoarthritis (OA) caused by meniscal injuries, many medical treatment techniques, such as artificial meniscus replacement and meniscal regeneration, have been developed. However, it is reported that these treatments are not comprehensive solutions. In order to reveal the fundamental mechanism of OA induction, the mechanical characterization of the meniscus in normal and injured states is carried out by using FE analyses. First, an FE model of the human knee joint in the normal ('intact') state was constructed by using magnetic resonance (MR) tomography images and the image construction code Materialise Mimics. Next, two types of meniscal injury models, with radial tears of the medial and lateral menisci, were constructed. In the FE analyses, a linear elastic constitutive law was adopted for the femur and tibia bones, a visco-hyperelastic constitutive law for the articular cartilage, and a visco-anisotropic hyperelastic constitutive law for the meniscus. Material properties of the articular cartilage and meniscus were identified using the stress-strain curves obtained from our compressive and tensile tests. The numerical results under the normal walking condition revealed how and where the maximum compressive stress occurred on the articular cartilage. The maximum compressive stress and the point at which it occurred varied between the intact model and the two meniscal tear models. These compressive stress values can be used to establish the threshold value for pathological change for diagnosis. In this study, FE analyses of the knee joint were carried out to reveal the influence of meniscal injuries on cartilage injury. The following conclusions are obtained. 1. A 3D FE model consisting of the femur, tibia, articular cartilage, and menisci was constructed from MR images of the human knee joint, using the image processing code Materialise Mimics and tetrahedral FE elements. 2. A visco-anisotropic hyperelastic constitutive equation was formulated by adopting the generalized Kelvin model; the material properties of the meniscus and articular cartilage were determined by curve fitting to the experimental results. 3. Stresses on the articular cartilage and menisci were obtained for the intact case and for the two radial tears of the medial and lateral menisci. Compared with the intact knee joint, the two tear models show almost the same stress values as each other and higher values than the intact case. Both meniscal tears induce stress localization in the medial and lateral regions. It is confirmed that our newly developed FE analysis code has the potential to become a new diagnostic system for evaluating meniscal damage to the articular cartilage through mechanical functional assessment.

Keywords: finite element analysis, hyperelastic constitutive law, knee joint injury, meniscal tear, stress concentration

Procedia PDF Downloads 246
4478 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference

Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade

Abstract:

In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion whose limit in probability is identical to that of the normalized log-likelihood; this includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverages close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC by invoking Lindeberg-Feller type conditions for triangular arrays and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparative performance metrics hold as in the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics. Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is accounting for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI). We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference on one parameter at a time and a small number of candidate models, this works well, whence the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
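
A rough illustration of the quantile step described above, with plain Monte Carlo standing in for the exact multivariate Gaussian integration the paper performs via the R package "mvtnorm"; the mean vector and covariance matrix for three candidate models are invented placeholders:

```python
import numpy as np

# Hypothetical asymptotic joint distribution of the GIC statistics for three
# candidate models; these numbers are illustrative, not from the paper.
mu = np.array([120.0, 121.5, 123.0])
Sigma = np.array([[4.0, 2.5, 2.0],
                  [2.5, 4.0, 2.5],
                  [2.0, 2.5, 4.0]])

rng = np.random.default_rng(0)
draws = rng.multivariate_normal(mu, Sigma, size=200_000)

# Distribution of the minimum GIC across the candidate set.
min_gic = draws.min(axis=1)

# Upper 95% quantile of the minimum: the top of the uncertainty band.
q95 = np.quantile(min_gic, 0.95)
print(f"95% upper quantile of the minimum GIC: {q95:.2f}")
# Any candidate whose observed GIC falls below this band edge remains a
# plausible "best" model, which is how far up the ranked list one should look.
```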

Keywords: model selection inference, generalized information criteria, post-model selection inference, asymptotic theory

Procedia PDF Downloads 89
4477 Cognitive Models of Health Marketing Communication in the Digital Era: Psychological Factors, Challenges, and Implications

Authors: Panas Gerasimos, Kotidou Varvara, Halkiopoulos Constantinos, Gkintoni Evgenia

Abstract:

As a result of the growth of technology and of information available on the internet, users turn to the internet before seeking the opinion of an expert. In many cases, they take control of their health into their own hands and make decisions without the contribution of a doctor. Accordingly, this essay analyzes users' confidence in searching for health issues on the internet. For this study, a survey was conducted among doctors in order to find out the reasons a patient searches the internet about their health problems and the consequences that searching for health information on the internet can lead to. Specifically, the results regarding users demonstrate that: a) the majority of users search the internet for health issues once or twice a month; b) individuals with chronic diseases search for health information more frequently; c) the most important topics that most users search for are pathological and dietary issues and issues associated with doctors and hospitals, although topic searches vary with users' age; d) the most common source of information remains direct contact with doctors, which the majority of users prefer over electronic sources; and e) there is a large lack of knowledge about e-health services. From the doctors' point of view, the following conclusions emerge: a) almost all doctors use the internet as their main source of information; b) the internet has a great influence on doctors' relationships with their patients; c) in many cases a patient first consults the internet and then visits the doctor; d) the internet has a significant psychological impact on patients when reaching a decision; e) the most important reason users choose the internet instead of a health professional is economic; f) the main negative consequence is inaccurate information; and g) the positive consequences are the possibility of online contact with the doctor and easier comprehension of the doctor's advice. Overall, both sides report intense use of the internet for health issues, which indicates that the new means at doctors' disposal create the conditions for radical changes in the way services are provided and in the doctor-patient relationship.

Keywords: cognitive models, health marketing, e-health, psychological factors, digital marketing, e-health services

Procedia PDF Downloads 206
4476 Modeling the Impact of Time Pressure on Activity-Travel Rescheduling Heuristics

Authors: Jingsi Li, Neil S. Ferguson

Abstract:

Time pressure can influence productivity, the quality of decision making, and the efficiency of problem solving. This finding stems mostly from cognitive research and the psychological literature; discussion in transport-adjacent fields has been scarce. It is conceivable that in many activity-travel contexts time pressure is a potentially important factor, since an excessive amount of decision time may incur the risk of late arrival at the next activity. Activity-travel rescheduling behavior is commonly explained by the costs and benefits of factors such as activity engagements, personal intentions, and social requirements. This paper hypothesizes that an additional factor, perceived time pressure, could affect travelers' rescheduling behavior and thus have an impact on travel demand management. Time pressure may arise in different ways and is assumed here to be essentially incurred when travelers plan their schedules without anticipating unforeseen elements, e.g., transport disruption. In addition to a linear-additive utility-maximization model, less computationally demanding, non-compensatory heuristic models are considered as an alternative for simulating travelers' responses. The paper contributes to travel behavior modeling research by investigating the following questions: How can time pressure be measured properly in an activity-travel day plan context? How do travelers reschedule their plans to cope with time pressure? How does the importance of the activity affect travelers' rescheduling behavior? Which behavioral model best describes the process of making activity-travel rescheduling decisions? How do the identified coping strategies affect the transport network? In this paper, a Mixed Heuristic Model (MHM) is employed to identify the presence of different choice heuristics through a latent class approach. Data about travelers' activity-travel rescheduling behavior are collected via a web-based interactive survey in which a fictitious scenario comprising multiple uncertain events on the activity or travel side is created. The experiments are conducted in order to gain a realistic picture of activity-travel rescheduling under time pressure. The identified behavioral models are then integrated into a multi-agent transport simulation model to investigate the effect of the rescheduling strategy on the transport network. The results show that an increased proportion of travelers use simpler, non-compensatory choice strategies instead of compensatory methods to cope with time pressure. Specifically, satisficing, one of the heuristic decision-making strategies, is commonly adopted, since travelers tend to abandon the less important activities and keep the important ones. Furthermore, the importance of the activity is found to increase the weight of negative information when making trip-related decisions, especially route choices. When the identified non-compensatory decision-making heuristic models are incorporated into the agent-based transport model, the simulation results imply that neglecting the effect of perceived time pressure may produce an inaccurate forecast of choice probability and overestimate responsiveness to policy changes.
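
A toy sketch of the satisficing heuristic identified above; the activity names, importance scores, durations, and time budget are all invented for illustration:

```python
# Toy satisficing rescheduler: keep the most important activities that still
# fit in the time remaining after a disruption. All data are invented.
activities = [
    {"name": "work meeting", "importance": 9, "minutes": 90},
    {"name": "grocery shopping", "importance": 4, "minutes": 30},
    {"name": "gym", "importance": 5, "minutes": 60},
    {"name": "pick up kids", "importance": 10, "minutes": 40},
]

def satisfice(plan, budget_minutes):
    """Greedily keep activities in descending importance until the budget is spent."""
    kept, used = [], 0
    for act in sorted(plan, key=lambda a: a["importance"], reverse=True):
        if used + act["minutes"] <= budget_minutes:
            kept.append(act["name"])
            used += act["minutes"]
    return kept

# A transport disruption leaves only 150 minutes of the original plan available.
print(satisfice(activities, budget_minutes=150))
# -> ['pick up kids', 'work meeting'] under these toy numbers: lower-importance
#    activities are simply abandoned rather than re-optimized, as a
#    compensatory utility-maximizing model would attempt.
```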

Keywords: activity-travel rescheduling, decision making under uncertainty, mixed heuristic model, perceived time pressure, travel demand management

Procedia PDF Downloads 112
4475 Critical Appraisal, Smart City Initiative: China vs. India

Authors: Suneet Jagdev, Siddharth Singhal, Dhrubajyoti Bordoloi, Peesari Vamshidhar Reddy

Abstract:

There is no universally accepted definition of what constitutes a Smart City; it means different things to different people. The definition varies from place to place depending on the level of development and the willingness of people to change and reform, but in general it aims to improve the quality of resource management and service provision for the people living in cities. A smart city is an urban development vision that integrates multiple information and communication technology (ICT) solutions in a secure fashion to manage the assets of a city, yet most of these projects are misinterpreted as being technology projects only. Due to urbanization, many informal as well as government-funded settlements have appeared over the last few decades, increasing the consumption of the limited resources available. The people of each city have their own definition of a Smart City: in the imagination of any city dweller in India, a Smart City contains a wish list of infrastructure and services that describes his or her level of aspiration. The research involved a comparative study of the Smart City models in India and in China. Behavioral changes experienced by the people living in the pilot (first-ever) smart cities were identified and compared. This paper discusses the target quality of life for people in India and in China and how well it could be realized with the facilities included in these Smart City projects. Logical and comparative analyses were performed on important data collected from government sources, government papers, and research papers by various experts on the topic. Existing cities with historically grown infrastructure and administration systems will require a more moderate, step-by-step approach to modernization. The models were compared across many different motivators, with data collected from past journals, interactions with the people involved, videos, and past submissions. In conclusion, we identify how these projects could be combined with ongoing small-scale initiatives by local people or small groups of individuals, and what the outcome might be if these existing practices were implemented on a bigger scale.

Keywords: behavior change, mission monitoring, pilot smart cities, social capital

Procedia PDF Downloads 289
4474 Technical and Practical Aspects of Sizing an Autonomous PV System

Authors: Abdelhak Bouchakour, Mustafa Brahami, Layachi Zaghba

Abstract:

The use of photovoltaic energy offers an inexhaustible, clean, and non-polluting supply of energy, which is a definite advantage. The geographical location of Algeria favors the development of this energy source, given the intensity of the solar radiation received and the duration of sunshine. For this reason, the objective of our work is to develop a software tool for calculating and optimizing the sizing of photovoltaic installations. Our optimization approach is based on mathematical models which, among other things, describe the operation of each part of the installation, the energy production, the energy storage, and the energy consumption.
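
A minimal sketch of the kind of sizing arithmetic such a tool automates; the load, peak-sun-hours, efficiency, and battery figures are illustrative assumptions, not values from the paper:

```python
# Rough stand-alone PV sizing: array peak power and battery capacity.
# All inputs are illustrative assumptions.
daily_load_wh = 3000.0      # daily energy consumption (Wh/day)
peak_sun_hours = 5.5        # site-dependent equivalent full-sun hours (h/day)
system_efficiency = 0.75    # combined losses: inverter, wiring, dust, temperature
autonomy_days = 2           # days the battery must cover without sun
depth_of_discharge = 0.5    # usable fraction of battery capacity
battery_voltage = 24.0      # nominal bus voltage (V)

# Required PV array peak power (Wp): energy demand / (sun hours * efficiency).
array_wp = daily_load_wh / (peak_sun_hours * system_efficiency)

# Required battery capacity (Ah): autonomy energy / (DOD * bus voltage).
battery_ah = (daily_load_wh * autonomy_days) / (depth_of_discharge * battery_voltage)

print(f"PV array: {array_wp:.0f} Wp, battery: {battery_ah:.0f} Ah at {battery_voltage:.0f} V")
```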

Keywords: solar panel, solar radiation, inverter, optimization

Procedia PDF Downloads 608
4473 Multi-Scale Modelling of the Cerebral Lymphatic System and Its Failure

Authors: Alexandra K. Diem, Giles Richardson, Roxana O. Carare, Neil W. Bressloff

Abstract:

Alzheimer's disease (AD) is the most common form of dementia, and although it has been researched for over 100 years, there is still no cure or preventive medication. Its onset and progression are closely related to the accumulation of the neuronal metabolite Aβ. This raises the question of how metabolites and waste products are eliminated from the brain, as the brain does not have a traditional lymphatic system. In recent years, the rapid uptake of Aβ into cerebral artery walls and its clearance along those arteries towards the lymph nodes in the neck has been suggested and confirmed in mouse studies, which has led to the hypothesis that interstitial fluid (ISF) in the basement membranes of the walls of cerebral arteries provides the pathways for the lymphatic drainage of Aβ. This mechanism, however, requires a net flow of ISF inside the blood vessel wall in the direction opposite to the blood flow, and the driving forces for such a mechanism remain unknown. While possible driving mechanisms have been studied using mathematical models in the past, a mechanism for net reverse flow has not yet been discovered. Here, we address the question of the driving force of this reverse lymphatic drainage of Aβ (also called perivascular drainage) by using multi-scale numerical and analytical modelling. The numerical simulation software COMSOL Multiphysics 4.4 is used to develop a fluid-structure interaction model of a cerebral artery, which models blood flow and displacements in the artery wall due to blood pressure changes. An analytical model of a layer of basement membrane inside the wall governs the flow of ISF and, therefore, solute drainage, based on the pressure changes and wall displacements obtained from the cerebral artery model. The findings suggest that the components of the basement membrane play an active role in facilitating a reverse flow and that stiffening of the artery wall with age is a major risk factor for the impairment of brain lymphatics. Additionally, our model supports the hypothesis of a close association between cerebrovascular diseases and the failure of perivascular drainage.

Keywords: Alzheimer's disease, artery wall mechanics, cerebral blood flow, cerebral lymphatics

Procedia PDF Downloads 526
4472 Environmental Conditions Simulation Device for Evaluating Fungal Growth on Wooden Surfaces

Authors: Riccardo Cacciotti, Jiri Frankl, Benjamin Wolf, Michael Machacek

Abstract:

Moisture fluctuations govern the occurrence of fungi-related problems in buildings, which may pose significant health risks for users and even lead to structural failures. Several numerical engineering models attempt to capture the complexity of mold growth on building materials. Real-life observations show that in cases with suppressed daily variations of boundary conditions, e.g., in crawlspaces, mold growth model predictions correspond well with the observed mold growth. On the other hand, in cases with substantial diurnal variations of boundary conditions, e.g., in the ventilated cavity of a cold flat roof, mold growth predicted by the models is significantly overestimated. This study, funded by the Grant Agency of the Czech Republic (GAČR 20-12941S), aims at gaining a better understanding of mold growth behavior on solid wood under varying boundary conditions. In particular, the experimental investigation focuses on the response of mold to changing conditions in the boundary layer and its influence on heat and moisture transfer across the surface. The main results include the design and construction, at the facilities of ITAM (Prague, Czech Republic), of an innovative device allowing the simulation of changing environmental conditions in buildings. It consists of a closed circuit of square cross-section, with overall dimensions of roughly 200 × 180 cm and a cross-section of roughly 30 × 30 cm. The circuit is thermally insulated and equipped with an electric fan to control the air flow inside the tunnel and a heat and humidity exchange unit to control the internal relative humidity and variations in temperature. Several measuring points, including an anemometer, temperature and humidity sensors, and a load cell in the test section for recording mass changes, are provided to monitor the parameters during the experiments. The research is ongoing and is expected to deliver the final results of the experimental investigation at the end of 2022.

Keywords: moisture, mold growth, testing, wood

Procedia PDF Downloads 133
4471 A Survey of Domain Name System Tunneling Attacks: Detection and Prevention

Authors: Lawrence Williams

Abstract:

As the mechanism which converts domain names to internet protocol (IP) addresses, the Domain Name System (DNS) is an essential part of internet usage. It was not designed with security in mind and can be subject to attacks. DNS attacks have become more frequent and sophisticated, and the need to detect and prevent them becomes more important for the modern network. DNS tunneling attacks are one type of attack, primarily used for distributed denial-of-service (DDoS) attacks and data exfiltration. Different techniques to detect and prevent DNS tunneling attacks are discussed, covering the methods, models, experiments, and data for each technique. A proposal regarding their feasibility is made, and future research on these topics is suggested.
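
One frequently cited detection signal for DNS tunneling is the unusually high entropy and length of query labels, since exfiltrated data is typically encoded into subdomains. A minimal sketch of that idea follows; the thresholds are illustrative assumptions:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Shannon entropy of a string in bits per character."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_like_tunneling(qname: str, entropy_threshold: float = 3.8,
                         length_threshold: int = 30) -> bool:
    # Strip the registered domain; inspect the encoded subdomain labels.
    labels = qname.rstrip(".").split(".")
    payload = "".join(labels[:-2])  # everything left of, e.g., example.com
    if not payload:
        return False
    return len(payload) > length_threshold and shannon_entropy(payload) > entropy_threshold

print(looks_like_tunneling("mail.example.com"))                               # False
print(looks_like_tunneling("dGhpcyBpcyBleGZpbA.2f9a1c7b3e8d4f.example.com"))  # likely True
```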

Keywords: DNS, tunneling, exfiltration, botnet

Procedia PDF Downloads 75
4470 Ownership and Shareholder Schemes Effects on Airport Corporate Strategy in Europe

Authors: Dimitrios Dimitriou, Maria Sartzetaki

Abstract:

In the early days of civil aviation, airports were totally state-owned companies under the control of national authorities or regional governmental bodies. Since then the picture has changed completely: airport privatisation and the commercialisation of airport business are key success factors in stimulating air transport demand, generating revenues, and attracting investors, and are linked to the reliability and resilience of the air transport system. Nowadays, an airport's corporate strategy deals with policies and actions that essentially affect its business plans, its financial targets, and its economic footprint in the regional economy it serves. Therefore, exploring airport corporate strategy is essential to support decisions on business planning, management efficiency, sustainable development, and investment attractiveness on the one hand, and to define policies towards traffic development, revenue generation, capacity expansion, cost efficiency, and corporate social responsibility on the other. This paper explores key outputs of airport corporate strategy for different ownership schemes. Airport corporations are grouped into three major schemes: (a) public, in which the public airport operator acts as part of the government administration or as a corporatised public operator; (b) mixed, in which the majority of the shares and the corporate strategy are driven by either the private or the public sector; and (c) private, in which the airport strategy is driven by the key aspects of globalisation and liberalisation of the aviation sector. Through a systemic approach, the key drivers of corporate strategy for modern airport business structures are defined. The key objectives are to identify the main strategic opportunities and challenges and to assess the corporate goals and risks towards sustainable business development for each scheme. The analysis is based on an extensive cross-sectional dataset for a sample of busy European airports and provides results on corporate strategy priorities, risks, and business models. The conclusions highlight key messages to authorities, institutes, and professionals on airport corporate strategy trends and directions.

Keywords: airport corporate strategy, airport ownership, airports business models, corporate risks

Procedia PDF Downloads 304
4469 Exploration of Hydrocarbon Unconventional Accumulations in the Argillaceous Formation of the Autochthonous Miocene Succession in the Carpathian Foredeep

Authors: Wojciech Górecki, Anna Sowiżdżał, Grzegorz Machowski, Tomasz Maćkowski, Bartosz Papiernik, Michał Stefaniuk

Abstract:

This article presents the results of a project that aims at evaluating the possibilities of effective development and exploitation of natural gas from the argillaceous series of the Autochthonous Miocene in the Carpathian Foredeep. To achieve this objective, the research team developed a processing and interpretation methodology, based on world trends but unique, adjusted to the data, local variations, and petroleum characteristics of the area. In order to determine the zones in which maximum volumes of hydrocarbons might have been generated and preserved as shale gas reservoirs, as well as to identify the most preferable well sites where the largest gas accumulations are anticipated, a number of tasks were accomplished. The evaluation of the petrophysical properties and hydrocarbon saturation of the Miocene complex is based on laboratory measurements as well as the interpretation of well logs and archival data. The studies apply mercury intrusion porosimetry (MICP), micro-CT, and nuclear magnetic resonance imaging (using a Rock Core Analyzer). For a prospective location (e.g., the Brzesko-Wojnicz area in the central part of the Carpathian Foredeep), reprocessing and reinterpretation of detailed seismic survey data have been carried out with the use of integrated geophysical investigations. Quantitative, structural, and parametric models for selected areas of the Carpathian Foredeep are constructed on the basis of integrated, detailed 3D computer models. Modeling is carried out with Schlumberger's Petrel software. Finally, prospective zones are spatially contoured in the form of a regional 3D grid, which will be the framework for generation modelling and comprehensive parametric mapping, allowing the spatial identification of the most prospective zones of unconventional gas accumulation in the Carpathian Foredeep. Preliminary results indicate a potentially prospective area for the occurrence of unconventional gas accumulations in the Polish part of the Carpathian Foredeep.

Keywords: autochthonous Miocene, Carpathian foredeep, Poland, shale gas

Procedia PDF Downloads 228
4468 Bayesian Parameter Inference for Continuous Time Markov Chains with Intractable Likelihood

Authors: Randa Alharbi, Vladislav Vyshemirsky

Abstract:

Systems biology is an important field of science which focuses on studying the behaviour of biological systems. Modelling is required to produce a detailed description of the elements of a biological system, their functions, and their interactions. A well-designed model requires selecting a suitable mechanism which can capture the main features of the system, defining the essential components of the system, and representing an appropriate law that can define the interactions between its components. Complex biological systems exhibit stochastic behaviour; thus, probabilistic models are suitable for describing and analysing them. The Continuous-Time Markov Chain (CTMC) is one such probabilistic model: it describes the system as a set of discrete states with continuous-time transitions between them. The system is then characterised by a set of probability distributions that describe the transition from one state to another at a given time. The evolution of these probabilities through time is governed by the chemical master equation, which is analytically intractable but can be simulated. Uncertain parameters of such a model can be inferred using methods of Bayesian inference. Yet inference in such a complex system is challenging, as it requires the evaluation of the likelihood, which is intractable in most cases. There are different statistical methods that allow simulating from the model despite the intractability of the likelihood. Approximate Bayesian computation (ABC) is a common approach to tackling inference, relying on simulation from the model to approximate the intractable likelihood. Particle Markov chain Monte Carlo (PMCMC) is another approach, based on using sequential Monte Carlo to estimate the intractable likelihood. However, both methods are computationally expensive. In this paper we discuss the efficiency and possible practical issues of each method, taking into account their computational time. We demonstrate likelihood-free inference by analysing a model of the Repressilator using both methods. A detailed investigation is performed to quantify the difference between these methods in terms of efficiency and computational cost.
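
A minimal sketch of likelihood-free (ABC rejection) inference for a CTMC, using a simple birth-death process rather than the Repressilator; the prior, tolerance, and summary statistic are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def gillespie_birth_death(birth, death, x0=10, t_end=5.0):
    """Exact stochastic simulation of a linear birth-death CTMC; returns the final population."""
    t, x = 0.0, x0
    while t < t_end and x > 0:
        total = (birth + death) * x           # both propensities scale with x
        t += rng.exponential(1.0 / total)     # time to the next event
        if t >= t_end:
            break
        x += 1 if rng.random() < birth / (birth + death) else -1
    return x

# "Observed" data generated with a known death rate that we then pretend not to know.
true_death = 0.4
observed = np.array([gillespie_birth_death(0.5, true_death) for _ in range(30)])

# ABC rejection: draw the death rate from a uniform prior, simulate, and keep
# draws whose summary statistic (mean final population) is close to the data's.
accepted = []
for _ in range(1000):
    death = rng.uniform(0.1, 1.0)
    sim = np.array([gillespie_birth_death(0.5, death) for _ in range(30)])
    if abs(sim.mean() - observed.mean()) < 1.5:   # tolerance epsilon
        accepted.append(death)

print(f"ABC posterior mean for the death rate: {np.mean(accepted):.3f} (true {true_death})")
```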

Keywords: approximate Bayesian computation (ABC), continuous-time Markov chains, sequential Monte Carlo, particle Markov chain Monte Carlo (PMCMC)

Procedia PDF Downloads 202
4467 The Collaboration between Resident and Non-resident Patent Applicants as a Strategy to Accelerate Technological Advance in Developing Nations

Authors: Hugo Rodríguez

Abstract:

Migration of researchers, scientists, and inventors is a widespread phenomenon in modern times. In some cases, migrants stay linked to research groups in their countries of origin, either out of their own conviction or because of government policies. We examine different linear models of technological development (estimated with the Ordinary Least Squares (OLS) technique) in eight selected countries and find that collaboration between resident and non-resident patent applicants correlates with different levels of performance of technological policies in three different scenarios. Therefore, reinforcing that link must be considered a powerful tool for technological development.
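
A minimal sketch of the kind of OLS specification described; the variable names and the synthetic data are illustrative assumptions, not the paper's dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic country-year data: share of patents filed jointly by resident and
# non-resident applicants, plus a control and a technological-development proxy.
n = 80
df = pd.DataFrame({
    "collab_share": rng.uniform(0, 0.5, n),    # resident/non-resident co-applications
    "rd_spend_gdp": rng.uniform(0.2, 3.0, n),  # control: R&D spending (% of GDP)
})
df["patents_per_capita"] = (
    5 + 12 * df["collab_share"] + 3 * df["rd_spend_gdp"] + rng.normal(0, 2, n)
)

X = sm.add_constant(df[["collab_share", "rd_spend_gdp"]])
model = sm.OLS(df["patents_per_capita"], X).fit()
print(model.summary().tables[1])  # coefficient on collab_share recovers its assumed effect
```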

Keywords: development, collaboration, patents, technology

Procedia PDF Downloads 127
4466 Study on the Model Predicting Post-Construction Settlement of Soft Ground

Authors: Pingshan Chen, Zhiliang Dong

Abstract:

In order to estimate post-construction settlement more objectively, a power-polynomial model is proposed which can reflect the trend of settlement development based on observed settlement data. The model was demonstrated on an actual embankment case history. Compared with three other prediction models, the power-polynomial model estimated the post-construction settlement more accurately and with a simpler calculation.
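
A minimal sketch of fitting a settlement-trend curve to observed data by least squares; the functional form s(t) = a·t^b + c and the monitoring data are illustrative assumptions, since the paper's exact power-polynomial form is not reproduced here:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative monitoring data: settlement (mm) versus time after construction (days).
t = np.array([30, 60, 90, 120, 180, 240, 300], dtype=float)
s = np.array([12.0, 19.5, 24.8, 28.6, 34.1, 37.9, 40.6])

def power_poly(t, a, b, c):
    """Assumed power-polynomial trend: s(t) = a * t**b + c."""
    return a * t**b + c

params, _ = curve_fit(power_poly, t, s, p0=(1.0, 0.5, 0.0))
a, b, c = params
print(f"fit: s(t) = {a:.3f} * t^{b:.3f} + {c:.3f}")
print(f"predicted settlement at 600 days: {power_poly(600.0, *params):.1f} mm")
```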

Keywords: prediction, model, post-construction settlement, soft ground

Procedia PDF Downloads 425
4465 Conceptual Model for Logistics Information System

Authors: Ana María Rojas Chaparro, Cristian Camilo Sarmiento Chaves

Abstract:

Given the growing importance of logistics as a discipline for the efficient management of material and information flows, the adoption of tools that facilitate decision-making based on a global perspective of the system under study has become essential. This article shows how a concept-based model makes it possible to organize and represent reality in an appropriate way, showing accurate and timely information. These features make this kind of model an ideal component to support an information system, recognizing that such information is relevant for establishing the particularities that allow better performance in the evaluated sector.

Keywords: system, information, conceptual model, logistics

Procedia PDF Downloads 496
4464 Electromagnetic Tuned Mass Damper Approach for Regenerative Suspension

Authors: S. Kopylov, C. Z. Bo

Abstract:

This study is aimed at exploring the possibility of energy recovery through the suppression of vibrations. The article describes the design of an electromagnetic dynamic damper. The magnetic part of the device performs the function of a tuned mass damper, thereby providing both energy regeneration and damping to the protected mass. From the theory of the tuned mass damper, the equations of the mathematical model were obtained. Then, for the given properties of the current system, the amplitude-frequency response was investigated. On this basis, the main ideas and methods for further research were defined.
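
A minimal sketch of the classical two-degree-of-freedom tuned-mass-damper frequency response; all masses, stiffnesses, and damping values are illustrative assumptions:

```python
import numpy as np

# Primary mass with an attached tuned mass damper (illustrative parameters).
m1, m2 = 10.0, 1.0          # kg: protected mass, absorber mass
k1, k2 = 4.0e4, 4.0e3       # N/m: chosen so both have the same natural frequency
c2 = 20.0                   # N*s/m: absorber damping (also the harvesting path)
F0 = 1.0                    # N: harmonic force amplitude on the primary mass

M = np.array([[m1, 0.0], [0.0, m2]])
C = np.array([[c2, -c2], [-c2, c2]])
K = np.array([[k1 + k2, -k2], [-k2, k2]])
F = np.array([F0, 0.0])

omegas = np.linspace(1.0, 150.0, 2000)   # rad/s sweep
amp1 = []
for w in omegas:
    # Steady-state response of M x'' + C x' + K x = F e^{i w t}.
    H = -w**2 * M + 1j * w * C + K
    X = np.linalg.solve(H, F)
    amp1.append(abs(X[0]))               # primary-mass amplitude

peak = omegas[int(np.argmax(amp1))]
print(f"primary-mass peak near {peak:.1f} rad/s, amplitude {max(amp1):.2e} m")
```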

Keywords: electromagnetic damper, oscillations with two degrees of freedom, regeneration systems, tuned mass damper

Procedia PDF Downloads 208
4463 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach

Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini

Abstract:

Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment, and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from the whole genome sequence data of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay that exists amongst accuracy, computing resources, and explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and identify phenotype relationships, which is important especially in explaining complex biological mechanisms.
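
A minimal sketch of the k-mer representation plus classifier pipeline described above; the toy sequences and labels are invented stand-ins for the 104 MTB whole genomes:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy DNA sequences and phenotype labels (invented; stand-ins for whole genomes).
sequences = ["ATGCGTACGTTAGC", "ATGCGAACGTTGGC", "TTGCAAACGCTAGC", "TTGCATACGCTTGC"]
labels = [1, 1, 0, 0]   # e.g., drug-resistant vs. susceptible

k = 4  # k-mer size; the study found longer k-mers (up to 10) more discriminative
vectorizer = CountVectorizer(analyzer="char", ngram_range=(k, k), lowercase=False)
X = vectorizer.fit_transform(sequences)   # rows: genomes, columns: k-mer counts

clf = LogisticRegression().fit(X, labels)
print(vectorizer.get_feature_names_out()[:5])                # first few k-mer features
print(clf.predict(vectorizer.transform(["ATGCGTACGTTGGC"])))  # predicted phenotype
```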

Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing

Procedia PDF Downloads 167
4461 Revolutionizing Legal Drafting: Leveraging Artificial Intelligence for Efficient Legal Work

Authors: Shreya Poddar

Abstract:

Legal drafting and revision are recognized as highly demanding tasks for legal professionals. This paper introduces an approach to automating and refining these processes through the use of advanced Artificial Intelligence (AI). The method employs Large Language Models (LLMs), with a specific focus on 'Chain of Thought' (CoT) prompting and knowledge injection via prompt engineering. This approach differs from conventional methods that depend on comprehensive training or fine-tuning of models with extensive legal knowledge bases, which are often expensive and time-consuming. The proposed method incorporates knowledge directly into prompts, thereby enabling the AI to generate more accurate and contextually appropriate legal texts. This substantially decreases the need for thorough model training while preserving high accuracy and relevance in drafting. Additionally, the concept of guardrails is introduced: predefined parameters or rules established within the AI system to ensure that the generated content adheres to legal standards and ethical guidelines. The practical implications of this method for legal work are considerable. It has the potential to markedly reduce the time lawyers allocate to document drafting and revision, freeing them to concentrate on the more intricate and strategic facets of legal work. Furthermore, this method makes high-quality legal drafting more accessible, possibly reducing costs and expanding the availability of legal services. The paper elucidates the methodology, providing specific examples and case studies to demonstrate the effectiveness of Chain of Thought prompting and knowledge injection in legal drafting. The potential challenges and limitations of this approach are also discussed, along with future prospects and enhancements that could further advance legal work. The impact of this research on the legal industry is substantial: the adoption of AI-driven methods by legal professionals can lead to enhanced efficiency, precision, and consistency in legal drafting, thereby altering the landscape of legal work. This research adds to the expanding field of AI in law, introducing a method that could significantly alter the nature of legal drafting and practice.
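
A minimal sketch of knowledge injection and a guardrail check via prompt engineering; the call_llm function, the injected clause library, and the guardrail rule are hypothetical placeholders, not the paper's system:

```python
# Hypothetical sketch: inject retrieved legal knowledge into a CoT-style prompt
# and apply a simple guardrail before accepting the draft. `call_llm` is a
# placeholder for whatever LLM client is actually used.
KNOWLEDGE_BASE = {
    "indemnification": "Indemnity clauses in this jurisdiction must exclude "
                       "liability for gross negligence of the indemnified party.",
}

FORBIDDEN_PHRASES = ["unlimited liability"]   # toy guardrail rule

def build_prompt(task: str, topic: str) -> str:
    knowledge = KNOWLEDGE_BASE.get(topic, "")
    return (
        f"Relevant legal knowledge:\n{knowledge}\n\n"
        f"Task: {task}\n"
        "Think step by step: identify the parties, the obligations, the "
        "jurisdictional constraints, and then draft the clause.\n"
    )

def call_llm(prompt: str) -> str:
    raise NotImplementedError("placeholder for a real LLM API call")

def draft_clause(task: str, topic: str) -> str:
    draft = call_llm(build_prompt(task, topic))
    # Guardrail: reject drafts that violate predefined rules.
    if any(p in draft.lower() for p in FORBIDDEN_PHRASES):
        raise ValueError("draft violates a guardrail; regenerate or escalate to a lawyer")
    return draft
```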

Keywords: AI-driven legal drafting, legal automation, future of legal work, large language models

Procedia PDF Downloads 65
4460 Investigating the Relationship of Moral Hazard and Corporate Governance with Earnings Forecast Quality in the Tehran Stock Exchange

Authors: Fatemeh Rouhi, Hadi Nassiri

Abstract:

Earnings forecasts are a key element in economic decisions, but situations such as conflicts of interest in financial reporting, complexity, and lack of direct access to information have produced information asymmetry between individuals within the organization and external investors and creditors. This gives rise to adverse selection and moral hazard in investors' decisions and makes it difficult for users to assess data directly. In this regard, the role of corporate governance disclosure is crystallized: it comprises controls and procedures that ensure management does not act in its own interest but moves in the direction of maximizing shareholder and company value. Given the importance of earnings forecasts in the capital market and the need to identify the factors influencing them, this study attempts to establish the relationship of moral hazard and corporate governance with the earnings forecast quality of companies operating in the capital market. Drawing on the theoretical basis of the research, two main hypotheses and several sub-hypotheses are presented and examined using available models with the panel-data method, and conclusions are drawn at the 95% confidence level according to the significance of the model and of each independent variable. In examining the models, the Chow test was first used to decide between the panel-data and pooled methods, and the Hausman test was then applied to choose between random effects and fixed effects. The findings show that because most of the variables associated with moral hazard are positively related to earnings forecast quality, earnings forecast quality increases with moral hazard for companies listed on the Tehran Stock Exchange. Among the corporate governance variables, board independence has a significant relationship with earnings forecast accuracy and earnings forecast bias, but the relationship between board size and earnings forecast quality is not statistically significant.
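
A minimal sketch of the Hausman test logic used to choose between fixed and random effects; the coefficient vectors and covariance matrices are invented placeholders standing in for fitted panel-model output:

```python
import numpy as np
from scipy import stats

# Invented fitted estimates: fixed-effects and random-effects coefficients
# for two regressors, with their covariance matrices.
b_fe = np.array([0.82, -0.15])
b_re = np.array([0.74, -0.10])
V_fe = np.array([[0.010, 0.001], [0.001, 0.004]])
V_re = np.array([[0.006, 0.000], [0.000, 0.003]])

diff = b_fe - b_re
# Hausman statistic: diff' (V_fe - V_re)^{-1} diff, chi-square with k dof.
H = diff @ np.linalg.inv(V_fe - V_re) @ diff
p_value = 1 - stats.chi2.cdf(H, df=len(diff))

print(f"Hausman H = {H:.2f}, p = {p_value:.3f}")
print("fixed effects" if p_value < 0.05 else "random effects", "preferred")
```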

Keywords: corporate governance, earning forecast quality, moral hazard, financial sciences

Procedia PDF Downloads 322
4459 Modelling the Effect of Alcohol Consumption on the Accelerating and Braking Behaviour of Drivers

Authors: Ankit Kumar Yadav, Nagendra R. Velaga

Abstract:

Driving under the influence of alcohol impairs driving performance and increases crash risk worldwide. The present study investigated the effect of different blood alcohol concentrations (BAC) on the accelerating and braking behaviour of drivers with the help of driving simulator experiments. Eighty-two licensed Indian drivers drove in a rural road environment designed in the driving simulator at BAC levels of 0.00%, 0.03%, 0.05%, and 0.08%. Driving performance was analysed with the help of vehicle control performance indicators, namely the mean acceleration and the mean brake pedal force of the participants. Preliminary analysis showed an increase in mean acceleration and mean brake pedal force with increasing BAC levels. Generalized linear mixed models were developed to quantify the effect of the different alcohol levels and of explanatory variables such as the driver's age, gender, and other driver characteristics on the driving performance indicators. Alcohol use was a significant factor affecting the accelerating and braking performance of the drivers. The acceleration model results indicated that the mean acceleration of the drivers increased by 0.013 m/s², 0.026 m/s², and 0.027 m/s² at the BAC levels of 0.03%, 0.05%, and 0.08%, respectively. The brake pedal force model showed that the mean brake pedal force of the drivers increased by 1.09 N, 1.32 N, and 1.44 N at the BAC levels of 0.03%, 0.05%, and 0.08%, respectively. Age was a significant factor in both models: a one-year increase in a driver's age resulted in a 0.2% reduction in mean acceleration and a 19% reduction in mean brake pedal force. This suggests that driving experience can compensate for the negative effects of alcohol to some extent. Female drivers were found to accelerate more slowly and brake harder than male drivers, which indicates that female drivers are more conscious of their safety while driving. Drivers who exercised regularly had better control of the accelerator pedal than non-regular exercisers during drunken driving. The findings reveal that drivers tend to be more aggressive and impulsive under the influence of alcohol, which deteriorates their driving performance. The drunk driving state can be differentiated from the sober driving state by observing the accelerating and braking behaviour of the drivers. These conclusions may serve as a reference for countermeasures against drinking and driving and contribute to traffic safety.
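
A minimal sketch of a mixed model of this kind; the synthetic data and the formula are illustrative assumptions, with statsmodels' mixedlm fitting a linear mixed model with a random intercept per driver to capture the repeated measures:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic repeated-measures data: each driver completes drives at four BAC levels.
drivers = np.repeat(np.arange(82), 4)
bac = np.tile([0.00, 0.03, 0.05, 0.08], 82)
age = np.repeat(rng.integers(21, 60, 82), 4)
df = pd.DataFrame({"driver": drivers, "bac": bac, "age": age})
df["mean_accel"] = (
    0.9 + 3.5 * df["bac"] - 0.002 * df["age"] + rng.normal(0, 0.05, len(df))
)

# Random intercept per driver accounts for repeated measures on the same subject.
model = smf.mixedlm("mean_accel ~ bac + age", df, groups=df["driver"]).fit()
print(model.summary())
```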

Keywords: alcohol, acceleration, braking behaviour, driving simulator

Procedia PDF Downloads 146
4458 Performance of Reinforced Concrete Wall with Opening Using Analytical Model

Authors: Alaa Morsy, Youssef Ibrahim

Abstract:

An earthquake is one of the most catastrophic events; it causes enormous harm to property and human lives. As part of a safe building design, reinforced concrete walls are provided in structures to decrease horizontal displacements under seismic load. Shear walls are additionally used to resist the horizontal loads that may be induced by the effect of wind. Reinforced concrete walls in residential buildings may have openings required for windows in outside walls or for doors in inside walls, or openings of other shapes for architectural purposes. The size, position, and area of openings may vary from an engineering perspective. Shear walls can experience damage around the corners of doors and windows due to the development of stress concentrations under vertical or horizontal loads. The openings cause a reduction in shear wall capacity and may have an unfavorable effect on the stiffness of the reinforced concrete wall and on the seismic response of structures. The finite element method, implemented here in the software package ANSYS ver. 12, has become an essential approach for analyzing civil engineering problems numerically; various models with different parameters can be built in a short time with ANSYS instead of experimentally, which consumes a lot of time and money. A finite element modeling approach has been used to study the effect of opening shape, size, and position in RC walls of different thicknesses under axial and lateral static loads. The proposed finite element approach has been verified against an experimental programme conducted by other researchers and validated with their variables. Very good correlation has been observed between the model and the experimental results, including load capacity, failure mode, and lateral displacement. A parametric study was carried out to investigate the effect of opening size, shape, and position for different reinforced concrete wall thicknesses. The results may be useful for improving existing design models and for application in practice, as they satisfy both the architectural and the structural requirements.

Keywords: Ansys, concrete walls, openings, out of plane behavior, seismic, shear wall

Procedia PDF Downloads 168
4457 Capacitance Models of AlGaN/GaN High Electron Mobility Transistors

Authors: A. Douara, N. Kermas, B. Djellouli

Abstract:

In this study, we report calculations of the gate capacitance of AlGaN/GaN HEMTs performed with the nextnano device simulation software. We use a physical gate capacitance model for III-V FETs that incorporates the quantum capacitance and the centroid capacitance of the channel. The simulations explore various device structures with different values of barrier thickness and channel thickness. A detailed understanding of the impact of gate capacitance in HEMTs will allow us to determine their role at the future 10 nm physical-gate-length node.
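
In such models the measurable gate capacitance is commonly treated as a series combination of the barrier (electrostatic) capacitance, the quantum capacitance, and a centroid-related capacitance, so the smallest term dominates. A minimal sketch of that series combination follows; the numerical values are illustrative assumptions:

```python
# Series combination of gate-capacitance contributions (values are illustrative,
# in F/m^2): 1/C_g = 1/C_barrier + 1/C_quantum + 1/C_centroid.
def series_capacitance(*caps):
    return 1.0 / sum(1.0 / c for c in caps)

C_barrier = 3.0e-2   # AlGaN barrier (electrostatic) capacitance per unit area
C_quantum = 5.0e-2   # quantum capacitance, proportional to the density of states
C_centroid = 8.0e-2  # finite charge-centroid spread of the channel electrons

C_gate = series_capacitance(C_barrier, C_quantum, C_centroid)
print(f"effective gate capacitance: {C_gate:.3e} F/m^2")
# Thinning the barrier raises C_barrier, but C_gate saturates once the quantum
# and centroid terms dominate; this is the effect such simulations explore.
```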

Keywords: gate capacitance, AlGaN/GaN, HEMTs, quantum capacitance, centroid capacitance

Procedia PDF Downloads 396
4456 Evaluation of a Staffing to Workload Tool in a Multispecialty Clinic Setting

Authors: Kristin Thooft

Abstract:

Increasing pressure to manage healthcare costs has resulted in shifting care towards ambulatory settings and is driving a focus on cost transparency. Few nurse staffing-to-workload models have been developed for ambulatory settings, and fewer still for multi-specialty clinics; of the existing models, few have been evaluated against outcomes to understand their impact. This evaluation took place after the AWARD model for nurse staffing to workload was implemented in a multi-specialty clinic at a regional healthcare system in the Midwest. The multi-specialty clinic houses 26 medical and surgical specialty practices, and the AWARD model was implemented in two of them in October 2020. Donabedian's Structure-Process-Outcome (SPO) model was used to evaluate outcomes based on changes to the structure and processes of care provided. The AWARD model defined and quantified the processes and recommended changes in the structure of day-to-day nurse staffing. Cost of care per patient visit, total visits, and total nurse-performed visits were used as structural and process measures influencing the outcomes of cost of care and access to care. Independent t-tests were used to compare the differences in variables pre- and post-implementation. The SPO model was useful as an evaluation tool, providing a simple framework that is understood by a diverse care team. No statistically significant changes in the cost of care, total visits, or nurse visits were observed, but there were differences: the cost of care increased and access to care decreased. Two weeks into the post-implementation period, the multi-specialty clinic paused all non-critical patient visits due to a second surge of the COVID-19 pandemic, and clinic nursing staff were re-allocated to support the inpatient areas. This negatively impacted the Nurse Manager's ability to fully utilize the AWARD model to plan daily staffing. The SPO framework could be used for the ongoing assessment of nurse staffing performance, and additional variables could be measured to give a more complete picture of the impact of nurse staffing. Going forward, there must be a continued focus on the outcomes of care and the value of nursing.
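
A minimal sketch of the pre/post comparison with independent t-tests; the cost-per-visit samples are invented placeholders:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented cost-per-visit samples before and after the staffing-model change.
pre_cost = rng.normal(loc=140.0, scale=18.0, size=60)
post_cost = rng.normal(loc=146.0, scale=20.0, size=55)

# Independent two-sample t-test (Welch's variant, not assuming equal variances).
t_stat, p_value = stats.ttest_ind(pre_cost, post_cost, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A p-value at or above 0.05 would mirror the study's finding: a difference in
# means (higher post-implementation cost) that is not statistically significant.
```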

Keywords: ambulatory, clinic, evaluation, outcomes, staffing, staffing model, staffing to workload

Procedia PDF Downloads 173
4455 Detecting Covid-19 Fake News Using Deep Learning Technique

Authors: Anjali A. Prasad

Abstract:

Nowadays, social media play an important role in spreading misinformation and fake news. This study analyzes fake news related to the COVID-19 pandemic spread on social media. The paper aims at evaluating and comparing different approaches used to mitigate this issue, including popular deep learning approaches such as CNN, RNN, LSTM, and the BERT algorithm, for classification. To evaluate the models' performance, we used accuracy, precision, recall, and F1-score as the evaluation metrics and, finally, compared which algorithm shows the better result among the four.
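
A minimal sketch of one of the compared approaches, an LSTM text classifier in Keras; the vocabulary size, sequence length, and architecture are illustrative assumptions, and the inputs are assumed to be already tokenized and padded:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE, MAX_LEN = 20_000, 200   # assumed tokenizer settings

model = keras.Sequential([
    layers.Embedding(VOCAB_SIZE, 64),       # word-index -> dense vector
    layers.LSTM(64),                        # sequence encoder
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # fake (1) vs. real (0)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", keras.metrics.Precision(), keras.metrics.Recall()])

# Dummy tokenized data standing in for the COVID-19 news corpus.
X = np.random.randint(1, VOCAB_SIZE, size=(128, MAX_LEN))
y = np.random.randint(0, 2, size=(128,))
model.fit(X, y, epochs=1, batch_size=32, validation_split=0.2)
```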

Keywords: BERT, CNN, LSTM, RNN

Procedia PDF Downloads 206
4454 Next Generation Radiation Risk Assessment and Prediction Tools Generation Applying AI-Machine (Deep) Learning Algorithms

Authors: Selim M. Khan

Abstract:

Indoor air quality is strongly influenced by the presence of radioactive radon (222Rn) gas. Exposure to high 222Rn concentrations is unequivocally linked to DNA damage and lung cancer and is a worsening issue in North American and European built environments, having increased over time within newer housing stocks as a function of as yet unclear variables. Indoor air radon concentration can be influenced by a wide range of environmental, structural, and behavioral factors. As some of these factors are quantitative while others are qualitative, no single statistical model can determine indoor radon levels precisely while simultaneously considering all these variables across a complex and highly diverse dataset. The ability of AI machine (deep) learning to analyze multiple quantitative and qualitative features simultaneously makes it suitable for predicting radon with a high degree of precision. Using Canadian and Swedish long-term indoor air radon exposure data, we use artificial deep neural network models with random weights, as well as polynomial statistical models, in MATLAB to assess and predict radon health risk to humans as a function of geospatial, human behavioral, and built-environment metrics. Our initial artificial neural network with random weights, run with sigmoid activation, tested different combinations of variables and showed the highest prediction accuracy (>96%) within a reasonable number of iterations. Here, we present details of these emerging methods and discuss their strengths and weaknesses compared to the traditional artificial neural network and statistical methods commonly used to predict indoor air quality in different countries. We propose the artificial deep neural network with random weights as a highly effective method for assessing and predicting indoor radon.
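
A minimal sketch of a small feedforward network with sigmoid (logistic) hidden units on mixed quantitative and qualitative predictors, written in Python rather than the MATLAB environment the abstract mentions; the synthetic features and the target relationship are invented:

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for the mixed quantitative/qualitative radon predictors.
n = 500
df = pd.DataFrame({
    "house_age_years": rng.integers(0, 80, n),               # structural, quantitative
    "basement_time_h": rng.uniform(0, 6, n),                 # behavioral, quantitative
    "foundation": rng.choice(["slab", "crawl", "full"], n),  # structural, qualitative
})
radon_bq_m3 = (
    30 + 0.8 * df["house_age_years"] + 10 * df["basement_time_h"]
    + (df["foundation"] == "full") * 40 + rng.normal(0, 15, n)
)

prep = ColumnTransformer([
    ("num", StandardScaler(), ["house_age_years", "basement_time_h"]),
    ("cat", OneHotEncoder(), ["foundation"]),
])
# Sigmoid ("logistic") hidden units, echoing the activation the abstract mentions.
model = make_pipeline(prep, MLPRegressor(hidden_layer_sizes=(16,),
                                         activation="logistic", max_iter=2000,
                                         random_state=0))
model.fit(df, radon_bq_m3)
print("R^2 on training data:", round(model.score(df, radon_bq_m3), 3))
```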

Keywords: radon, radiation protection, lung cancer, AI-machine deep learning, risk assessment, risk prediction, Europe, North America

Procedia PDF Downloads 96
4453 Finite Element Analysis of Human Tarsals, Metatarsals, and Phalanges for Predicting Probable Location of Fractures

Authors: Irfan Anjum Manarvi, Fawzi Aljassir

Abstract:

Human bones have long been a keen area of research in the field of biomechanical engineering. Medical professionals, as well as engineering academics and researchers, have investigated various bones using medical, mechanical, and materials approaches to build the available body of knowledge. Their major focus has been to establish the properties of these bones and ultimately to develop processes and tools either to prevent fracture or to repair its damage. The literature shows that mechanical professionals conducted a variety of tests for hardness, deformation, and strain field measurement to arrive at their findings. However, they considered the accuracy of these results to be insufficient due to various limitations of tools and test equipment and difficulties in the availability of human bones. They proposed the need for further studies to first overcome inaccuracies in measurement methods, testing machines, and experimental errors, and then to carry out experimental or theoretical studies. Finite element analysis is a technique that was developed for the aerospace industry due to the complexity of its designs and materials, but over time it has found applications in many other industries owing to its accuracy and its flexibility in the selection of materials and the types of loading that can be theoretically applied to an object under study. In the past few decades, the field of biomechanical engineering has also started to see its applicability. However, the work done on the tarsals, metatarsals, and phalanges using this technique is very limited. Therefore, the present research focuses on using this technique for the analysis of these critical bones of the human body. The technique requires a 3-dimensional geometric computer model of the object to be analyzed. In the present research, a 3D laser scanner was used to take accurate geometric scans of individual tarsals, metatarsals, and phalanges from a typical human foot and produce these computer geometric models. These were then imported into finite element analysis software, and a length-refining process was carried out prior to analysis to ensure the computer models were true representatives of the actual bones. This was followed by the analysis of each bone individually. A number of constraints and load conditions were applied to observe the stress and strain distributions in these bones under compressive loads, tensile loads, or their combination. Results were collected for deformations along the various axes, and stress and strain distributions were observed to identify critical locations where fracture could occur. A comparative analysis of the failure properties of all three types of bones was carried out to establish which of these could fail earlier; these findings are presented in this research. The results of this investigation could be used for further experimental studies by academics and researchers, as well as industrial engineers, in the development of foot protection devices or tools for surgical operations and recovery treatment of these bones. Researchers could build on these models to carry out finite element analysis of a complete human foot under various loading conditions such as walking, marching, running, and landing after a jump.
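As an illustration of how critical locations can be flagged from FE output, the sketch below computes the von Mises equivalent stress for a few hypothetical element stress states and picks the maximum; the stress values are invented, and the abstract does not state which failure criterion the authors used.

```python
import numpy as np

# Hypothetical per-element stress components (MPa) exported from an
# FE solver for one metatarsal model: sxx, syy, szz, sxy, syz, szx.
stress = np.array([
    [12.1,  3.4, -1.2, 2.0, 0.5, 1.1],
    [25.6,  8.9,  2.3, 4.7, 1.9, 0.8],
    [ 7.4, -2.1,  0.6, 1.3, 0.2, 0.4],
])

sxx, syy, szz, sxy, syz, szx = stress.T

# Von Mises equivalent stress, a common screen for identifying
# probable fracture-critical locations in bone FE models.
von_mises = np.sqrt(0.5 * ((sxx - syy)**2 + (syy - szz)**2 +
                           (szz - sxx)**2) +
                    3.0 * (sxy**2 + syz**2 + szx**2))

critical = int(np.argmax(von_mises))
print(f"critical element: {critical}, sigma_vm = {von_mises[critical]:.1f} MPa")
```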

Keywords: tarsals, metatarsals, phalanges, 3D scanning, finite element analysis

Procedia PDF Downloads 329
4452 The EU Omnipotence Paradox: Inclusive Cultural Policies and Effects of Exclusion

Authors: Emmanuel Pedler, Elena Raevskikh, Maxime Jaffré

Abstract:

Can the cultural geography of European cities be durably managed by European policies? To answer this question, two hypotheses can be proposed: (1) either European cultural policies are able to erase cultural inequalities between territories through the creation of new areas of cultural attractiveness in each beneficiary neighborhood, city, or country; or (2) each European region, historically rooted in a number of endogenous socio-historical, political, or demographic factors, is not receptive to exogenous political influences, so that the cultural attractiveness of a territory is difficult to measure and to influence through top-down policies in the long term. How do these two logics, European and local, interact and contribute to the emergence of a valued, popular sense of a common European cultural identity? Does this constant interaction between historical backgrounds and new political concepts encourage a positive identification with the European project? European cultural policy programs, such as the ECC (European Capital of Culture), seek to develop new forms of civic cohesion through inclusive and participative cultural events. The cultural assets of a city elected ECC are mobilized to attract a wide range of new audiences, including populations poorly integrated into local cultural life and consequently distant from pre-existing cultural offerings. In the current context of increasingly heterogeneous individual perceptions of Europe, the ECC program aims to promote cultural forms and institutions that should accelerate both territorial and cross-border European cohesion. The new cultural consumption pattern is conceived to stimulate integration and mobility, but also to create a legitimate, transnational ideal of the European citizen. Our comparative research confronts contrasting cases of European Capitals of Culture from the south and the north of Europe: cities recently concerned by the ECC political mechanism and cities that were elected ECC in the past, and multi-centered cultural models versus highly centralized cultural models. We aim to explore the impacts of European policies on urban cultural geography, but also to understand the current obstacles to their efficient implementation.

Keywords: urbanism, cultural policies, cultural institutions, European cultural capitals, heritage industries, exclusion effects

Procedia PDF Downloads 261
4451 The Impact of Geopolitical Risks and the Oil Price Fluctuations on the Kuwaiti Financial Market

Authors: Layal Mansour

Abstract:

The aim of this paper is to identify whether oil price volatility or geopolitical risks can predict future financial stress periods or economic recessions in Kuwait. We construct the first Financial Stress Index for Kuwait (FSIK), which includes informative vulnerability indicators of the main financial sectors: the banking sector, the equities market, and the foreign exchange market. The study covers the period from 2000 to 2020, so it includes the two most devastating recent world economic crises accompanied by oil price fluctuations: the COVID-19 pandemic and the Ukraine-Russia war. All data are taken from the Central Bank of Kuwait, the World Bank, the IMF, DataStream, and the Federal Reserve Bank of St. Louis. The variables are computed as percentage growth rates, then standardized and aggregated into one index using the variance-equal weights method, the one most frequently used in the literature. The graphical FSIK analysis provides detailed, dated information to policymakers on how internal financial stability depends on internal policy and events such as government elections or resignations. It also shows how decisions by monetary authorities or internal policymakers to relieve personal loans or to increase or decrease the public budget can trigger internal financial instability. The empirical analysis under vector autoregression (VAR) models shows the dynamic causal relationship between oil price fluctuations and the Kuwaiti economy, which relies heavily on the oil price. Similarly, VAR models are used to assess the impact of global geopolitical risks on Kuwaiti financial stability, and the results reveal whether Kuwait is confronted with or sheltered from geopolitical risks. The Financial Stress Index serves as a guide for macroprudential regulators to understand the weaknesses of the overall Kuwaiti financial market and economy, regardless of the Kuwaiti dinar’s strength and exchange rate stability. It helps policymakers predict future stress periods and thus prepare alternative cushions against possible future financial threats.
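A minimal sketch of the variance-equal weights aggregation described above: each sub-indicator is standardized so that all contribute equal variance, then averaged into a single index. The series names, frequency, and values are assumptions for illustration, not the FSIK data.

```python
import numpy as np
import pandas as pd

# Hypothetical monthly sub-indicators (percentage growth rates) for
# the three sectors named above; real series would come from the
# sources listed in the abstract.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "banking_sector":  rng.normal(0, 1.5, 240),
    "equities_market": rng.normal(0, 2.5, 240),
    "fx_market":       rng.normal(0, 0.8, 240),
}, index=pd.period_range("2000-01", periods=240, freq="M"))

# Variance-equal weights method: standardize each indicator so all
# contribute the same variance, then aggregate with equal weights.
standardized = (df - df.mean()) / df.std()
fsik = standardized.mean(axis=1)   # the aggregate stress index

print(fsik.describe())
```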

Keywords: Kuwait, financial stress index, causality test, VAR, oil price, geopolitical risks

Procedia PDF Downloads 81