Search results for: ocean models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6800

4370 Technical and Practical Aspects of Sizing an Autonomous PV System

Authors: Abdelhak Bouchakour, Mustafa Brahami, Layachi Zaghba

Abstract:

The use of photovoltaic energy offers an inexhaustible supply of clean, non-polluting energy, which is a definite advantage. The geographical location of Algeria favours the development of this energy source, given the intensity of the radiation received and the duration of sunshine. The objective of our work is therefore to develop a software tool for calculating and optimizing the sizing of photovoltaic installations. Our optimization approach is based on mathematical models that describe, among other things, the operation of each part of the installation and the production, storage and consumption of energy.
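As an illustration of the kind of sizing arithmetic such a tool automates, the sketch below computes an array size and battery capacity from a daily load. The function name, parameters and derating factor are illustrative assumptions, not the authors' software.

```python
# Hedged sketch of a basic stand-alone PV sizing calculation.
# All names and values are illustrative assumptions.
def size_pv_system(daily_load_wh, peak_sun_hours, system_efficiency,
                   autonomy_days, battery_voltage, depth_of_discharge):
    # Required PV array power (W): daily load divided by usable sun hours,
    # derated for wiring/inverter/temperature losses.
    array_w = daily_load_wh / (peak_sun_hours * system_efficiency)
    # Battery capacity (Ah) needed to cover the chosen days of autonomy
    # without discharging below the allowed depth of discharge.
    battery_ah = (daily_load_wh * autonomy_days) / (battery_voltage * depth_of_discharge)
    return array_w, battery_ah
```

For example, a 2.4 kWh/day load with 5.5 peak sun hours, 75% system efficiency, 2 days of autonomy on a 48 V bank at 60% depth of discharge yields roughly a 580 W array and a 167 Ah battery.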

Keywords: solar panel, solar radiation, inverter, optimization

Procedia PDF Downloads 589
4369 Numerical Modelling of the Influence of Meteorological Forcing on Water-Level in the Head Bay of Bengal

Authors: Linta Rose, Prasad K. Bhaskaran

Abstract:

Water-level information along the coast is very important for disaster management, navigation, shoreline management planning, coastal engineering and protection works, port and harbour activities, and for a better understanding of near-shore ocean dynamics. Water-level variation along a coast arises from several factors, including astronomical tides and meteorological and hydrological forcing. The study area is the Head Bay of Bengal, which is highly vulnerable to flooding events caused by monsoons, cyclones and sea-level rise. The study aims to explore the extent to which wind and surface pressure can influence water-level elevation, in view of the low-lying topography of the coastal zones in the region. The ADCIRC hydrodynamic model has been customized for the Head Bay of Bengal, discretized using flexible finite elements and validated against tide gauge observations. Monthly mean climatological wind and mean sea level pressure fields from ERA-Interim reanalysis data were used as input forcing, in addition to tidal forcing, to simulate water-level variation in the Head Bay of Bengal. The output water-level was compared against that produced using tidal forcing alone, so as to quantify the contribution of meteorological forcing to water-level. The average contribution of meteorological fields to water-level in January is 5.5% at a deep-water location and 13.3% at a coastal location. During July, when the monsoon winds are strongest in this region, this increases to 10.7% and 43.1% at the deep-water and coastal locations, respectively. The model was also run with varied meteorological input fields in an attempt to quantify the relative significance of wind speed and wind direction on water-level. Under uniform wind conditions with higher wind speeds, the results showed a larger meteorological contribution for south-west winds than for north-east winds.
A comparison of the spectral characteristics of the output water-level with that generated by tidal forcing alone showed additional modes with seasonal and annual signatures. Moreover, the non-linear monthly mode was weaker than in the tide-only simulation. These results indicate that meteorological fields have little effect on water-level at periods shorter than a day, but that they induce non-linear interactions between existing modes of oscillation. The study demonstrates the role of meteorological forcing under fair weather conditions and points out that a combination of multiple forcing fields, including tides, wind, atmospheric pressure, waves, precipitation and river discharge, is essential for effective forecast modelling, especially during extreme weather events.
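One plausible way to compute such a percentage contribution is to compare the simulated series with and without meteorological forcing; the abstract does not give the exact formula, so the metric below (time-mean absolute residual relative to the total signal) is an assumption for illustration only.

```python
# Hypothetical metric: the meteorological contribution to water level,
# expressed as the residual between a tide+met run and a tide-only run,
# relative to the magnitude of the full simulated signal.
def met_contribution_pct(level_tide_plus_met, level_tide_only):
    residual = [abs(a - b) for a, b in zip(level_tide_plus_met, level_tide_only)]
    total = [abs(a) for a in level_tide_plus_met]
    return 100.0 * sum(residual) / sum(total)
```

Applied to the two model outputs at a station, this returns a percentage comparable in spirit to the 5.5%-43.1% figures quoted above.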

Keywords: ADCIRC, Head Bay of Bengal, mean sea level pressure, meteorological forcing, water-level, wind

Procedia PDF Downloads 205
4368 Multi-Scale Modelling of the Cerebral Lymphatic System and Its Failure

Authors: Alexandra K. Diem, Giles Richardson, Roxana O. Carare, Neil W. Bressloff

Abstract:

Alzheimer's disease (AD) is the most common form of dementia, and although it has been researched for over 100 years, there is still no cure or preventive medication. Its onset and progression are closely related to the accumulation of the neuronal metabolite Aβ. This raises the question of how metabolites and waste products are eliminated from the brain, as the brain does not have a traditional lymphatic system. In recent years, the rapid uptake of Aβ into cerebral artery walls and its clearance along those arteries towards the lymph nodes in the neck has been suggested and confirmed in mouse studies, which has led to the hypothesis that interstitial fluid (ISF) in the basement membranes of the walls of cerebral arteries provides the pathways for the lymphatic drainage of Aβ. This mechanism, however, requires a net flow of ISF inside the blood vessel wall in the direction opposite to the blood flow, and the driving forces for such a mechanism remain unknown. While possible driving mechanisms have been studied using mathematical models in the past, a mechanism for net reverse flow has not yet been discovered. Here, we address the question of the driving force of this reverse lymphatic drainage of Aβ (also called perivascular drainage) using multi-scale numerical and analytical modelling. The numerical simulation software COMSOL Multiphysics 4.4 is used to develop a fluid-structure interaction model of a cerebral artery, which models blood flow and the displacements in the artery wall due to blood pressure changes. An analytical model of a layer of basement membrane inside the wall governs the flow of ISF, and therefore solute drainage, based on the pressure changes and wall displacements obtained from the cerebral artery model. The findings suggest that the components of the basement membrane play an active role in facilitating reverse flow and that stiffening of the artery wall with age is a major risk factor for the impairment of brain lymphatics.
Additionally, our model supports the hypothesis of a close association between cerebrovascular diseases and the failure of perivascular drainage.

Keywords: Alzheimer's disease, artery wall mechanics, cerebral blood flow, cerebral lymphatics

Procedia PDF Downloads 512
4367 Environmental Conditions Simulation Device for Evaluating Fungal Growth on Wooden Surfaces

Authors: Riccardo Cacciotti, Jiri Frankl, Benjamin Wolf, Michael Machacek

Abstract:

Moisture fluctuations govern the occurrence of fungi-related problems in buildings, which may pose significant health risks for users and even lead to structural failures. Several numerical engineering models attempt to capture the complexity of mold growth on building materials. Real-life observations show that in cases with suppressed daily variations of boundary conditions, e.g. in crawlspaces, mold growth model predictions correspond well with the observed mold growth. On the other hand, in cases with substantial diurnal variations of boundary conditions, e.g. in the ventilated cavity of a cold flat roof, the mold growth predicted by the models is significantly overestimated. This study, funded by the Grant Agency of the Czech Republic (GAČR 20-12941S), aims at gaining a better understanding of mold growth behaviour on solid wood under varying boundary conditions. In particular, the experimental investigation focuses on the response of mold to changing conditions in the boundary layer and its influence on heat and moisture transfer across the surface. The main result so far is the design and construction, at the facilities of ITAM (Prague, Czech Republic), of an innovative device allowing for the simulation of changing environmental conditions in buildings. It consists of a closed circuit of square cross section, with overall dimensions of roughly 200 × 180 cm and a cross section of roughly 30 × 30 cm. The circuit is thermally insulated and equipped with an electric fan to control the air flow inside the tunnel and a heat and humidity exchange unit to control the internal relative humidity and temperature variations. Several measuring points, including an anemometer, temperature and humidity sensors, and a load cell in the test section for recording mass changes, are provided to monitor the variations of these parameters during the experiments. The research is ongoing and is expected to deliver the final results of the experimental investigation at the end of 2022.

Keywords: moisture, mold growth, testing, wood

Procedia PDF Downloads 112
4366 A Survey of Domain Name System Tunneling Attacks: Detection and Prevention

Authors: Lawrence Williams

Abstract:

As the mechanism that converts domain names to Internet Protocol (IP) addresses, the Domain Name System (DNS) is an essential part of internet usage. It was not designed with security in mind and can be subject to attacks. DNS attacks have become more frequent and sophisticated, and detecting and preventing them has become more important for the modern network. DNS tunneling attacks are one type of attack, primarily used for distributed denial-of-service (DDoS) attacks and data exfiltration. Different techniques to detect and prevent DNS tunneling attacks are discussed, covering the methods, models, experiments, and data for each technique. The feasibility of each approach is assessed, and future research on these topics is proposed.
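As a concrete illustration of one family of detection techniques such a survey covers, the sketch below flags suspicious query names by label length and Shannon entropy, since tunneled payloads tend to appear as long, high-entropy subdomain labels. The thresholds and function names here are hypothetical, not taken from the paper.

```python
import math

# Shannon entropy (bits per character) of a string: tunneled/encoded data
# typically has much higher entropy than human-chosen hostnames.
def shannon_entropy(s):
    counts = {c: s.count(c) for c in set(s)}
    n = len(s)
    return -sum((k / n) * math.log2(k / n) for k in counts.values())

# Illustrative heuristic: flag a query if its first label is unusually long
# or unusually random-looking. Thresholds are hypothetical.
def looks_like_tunnel(qname, entropy_threshold=4.0, length_threshold=50):
    label = qname.split(".")[0]
    return len(label) > length_threshold or shannon_entropy(label) > entropy_threshold
```

Real detectors combine many such features (query rate, record types, payload sizes) rather than relying on a single heuristic.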

Keywords: DNS, tunneling, exfiltration, botnet

Procedia PDF Downloads 62
4365 Ownership and Shareholder Scheme Effects on Airport Corporate Strategy in Europe

Authors: Dimitrios Dimitriou, Maria Sartzetaki

Abstract:

In the early days of civil aviation, airports were wholly state-owned companies under the control of national authorities or regional governmental bodies. Since then the picture has changed completely: airport privatisation and the commercialisation of the airport business have become key success factors in stimulating air transport demand, generating revenues and attracting investors, linked to the reliability and resilience of the air transport system. Nowadays, an airport's corporate strategy deals with policies and actions that essentially affect the business plans, the financial targets and the economic footprint in the regional economy it serves. Exploring airport corporate strategy is therefore essential to support decisions in business planning, management efficiency, sustainable development and investment attractiveness on the one hand, and to define policies towards traffic development, revenue generation, capacity expansion, cost efficiency and corporate social responsibility on the other. This paper explores key outputs of airport corporate strategy for different ownership schemes. The airport corporations are grouped into three major schemes: (a) public, in which the public airport operator acts as part of the government administration or as a corporatised public operator; (b) mixed, in which the majority of the shares and the corporate strategy are driven by either the private or the public sector; and (c) private, in which the airport strategy is driven by the key aspects of globalisation and liberalisation of the aviation sector. Using a systemic approach, the key drivers of corporate strategy for modern airport business structures are defined. The key objectives are to identify the main strategic opportunities and challenges and to assess the corporate goals and risks towards sustainable business development for each scheme. The analysis is based on an extensive cross-sectional dataset for a sample of busy European airports, providing results on corporate strategy priorities, risks and business models.
The aim is to highlight key messages to authorities, institutes and professionals on airport corporate strategy trends and directions.

Keywords: airport corporate strategy, airport ownership, airports business models, corporate risks

Procedia PDF Downloads 290
4364 Exploration of Hydrocarbon Unconventional Accumulations in the Argillaceous Formation of the Autochthonous Miocene Succession in the Carpathian Foredeep

Authors: Wojciech Górecki, Anna Sowiżdżał, Grzegorz Machowski, Tomasz Maćkowski, Bartosz Papiernik, Michał Stefaniuk

Abstract:

The article presents results of a project that aims at evaluating the possibilities of effective development and exploitation of natural gas from the argillaceous series of the Autochthonous Miocene in the Carpathian Foredeep. To achieve this objective, the research team developed a unique, world-trend-based methodology of processing and interpretation, adjusted to the data, local variations and petroleum characteristics of the area. A number of tasks were accomplished in order to determine the zones in which maximum volumes of hydrocarbons might have been generated and preserved as shale gas reservoirs, and to identify the most favourable well sites where the largest gas accumulations are anticipated. The evaluation of petrophysical properties and hydrocarbon saturation of the Miocene complex is based on laboratory measurements as well as the interpretation of well logs and archival data. The studies apply mercury intrusion capillary pressure porosimetry (MICP), micro-CT and nuclear magnetic resonance imaging (using the Rock Core Analyzer). For a prospective location (e.g. the central part of the Carpathian Foredeep, the Brzesko-Wojnicz area), reprocessing and reinterpretation of detailed seismic survey data with the use of integrated geophysical investigations has been carried out. The construction of quantitative, structural and parametric models for selected areas of the Carpathian Foredeep is performed on the basis of integrated, detailed 3D computer models. The modelling is carried out with Schlumberger's Petrel software. Finally, prospective zones are spatially contoured in the form of a regional 3D grid, which will be the framework for generation modelling and comprehensive parametric mapping, allowing for spatial identification of the most prospective zones of unconventional gas accumulation in the Carpathian Foredeep. Preliminary results of the research indicate a potentially prospective area for the occurrence of unconventional gas accumulations in the Polish part of the Carpathian Foredeep.

Keywords: autochthonous Miocene, Carpathian Foredeep, Poland, shale gas

Procedia PDF Downloads 217
4363 Bayesian Parameter Inference for Continuous Time Markov Chains with Intractable Likelihood

Authors: Randa Alharbi, Vladislav Vyshemirsky

Abstract:

Systems biology is an important field of science that focuses on studying the behaviour of biological systems. Modelling is required to produce a detailed description of the elements of a biological system, their function, and their interactions. A well-designed model requires selecting a suitable mechanism that can capture the main features of the system, defining its essential components, and representing an appropriate law for the interactions between those components. Complex biological systems exhibit stochastic behaviour, so probabilistic models are suitable for describing and analysing them. The Continuous-Time Markov Chain (CTMC) is one such probabilistic model; it describes the system as a set of discrete states with continuous-time transitions between them. The system is then characterised by a set of probability distributions describing the transition from one state to another at a given time. The evolution of these probabilities through time is governed by the chemical master equation, which is analytically intractable but can be simulated. Uncertain parameters of such a model can be inferred using methods of Bayesian inference. Yet inference in such a complex system is challenging, as it requires the evaluation of the likelihood, which is intractable in most cases. There are different statistical methods that allow simulating from the model despite the intractability of the likelihood. Approximate Bayesian computation (ABC) is a common approach for tackling such inference problems; it relies on simulation of the model to approximate the intractable likelihood. Particle Markov chain Monte Carlo (PMCMC) is another approach, based on using sequential Monte Carlo to estimate the intractable likelihood. However, both methods are computationally expensive. In this paper we discuss the efficiency and possible practical issues of each method, taking their computational time into account.
We demonstrate likelihood-free inference by analysing a model of the repressilator using both methods. A detailed investigation is performed to quantify the difference between these methods in terms of efficiency and computational cost.
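A minimal sketch of the first of the two approaches, ABC rejection sampling: draw a parameter from the prior, simulate the model, and accept the draw if the simulated summary lies within a tolerance of the observed summary, so no likelihood is ever evaluated. The toy exponential model below is an illustrative stand-in, not the repressilator system the paper analyses.

```python
import random

# Toy stochastic "model": the mean of n exponential waiting times with
# unknown rate. Stands in for a forward simulator of a CTMC.
def simulate(rate, n=50, rng=random):
    return sum(rng.expovariate(rate) for _ in range(n)) / n

# ABC rejection: keep prior draws whose simulated summary statistic is
# within eps of the observed one. Accepted draws approximate the posterior.
def abc_rejection(observed_mean, prior_draw, eps, n_samples, rng=random):
    accepted = []
    while len(accepted) < n_samples:
        theta = prior_draw(rng)
        if abs(simulate(theta, rng=rng) - observed_mean) < eps:
            accepted.append(theta)
    return accepted
```

With an observed mean of 0.5 (true rate 2) and a uniform prior on [0.1, 10], the accepted samples concentrate around 2; shrinking eps sharpens the approximation at the cost of more rejected simulations, which is exactly the efficiency trade-off the paper discusses.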

Keywords: approximate Bayesian computation (ABC), continuous-time Markov chains, sequential Monte Carlo, particle Markov chain Monte Carlo (PMCMC)

Procedia PDF Downloads 190
4362 The Collaboration between Resident and Non-resident Patent Applicants as a Strategy to Accelerate Technological Advance in Developing Nations

Authors: Hugo Rodríguez

Abstract:

Migration of researchers, scientists, and inventors is a widespread phenomenon in modern times. In some cases, migrants remain linked to research groups in their countries of origin, either out of their own conviction or because of government policies. We examine different linear models of technological development (estimated with the Ordinary Least Squares (OLS) technique) in eight selected countries and find that collaboration between resident and non-resident patent applicants correlates with different levels of performance of technological policy in three different scenarios. The reinforcement of that link should therefore be considered a powerful tool for technological development.
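A minimal sketch of the OLS estimation underlying such linear models, on made-up data; the study's actual variables and dataset are not reproduced here.

```python
# Simple linear regression y = a + b*x fitted by ordinary least squares
# (closed-form solution via centred sums). Data are illustrative only.
def ols_fit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b
```

For perfectly linear data such as x = [1, 2, 3, 4], y = [3, 5, 7, 9], this recovers intercept 1 and slope 2 exactly.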

Keywords: development, collaboration, patents, technology

Procedia PDF Downloads 112
4361 Study on the Model Predicting Post-Construction Settlement of Soft Ground

Authors: Pingshan Chen, Zhiliang Dong

Abstract:

In order to estimate post-construction settlement more objectively, a power-polynomial model is proposed that can reflect the trend of settlement development based on the observed settlement data. It was demonstrated on an actual case history of an embankment. Compared with three other prediction models, the power-polynomial model can estimate post-construction settlement more accurately with a simpler calculation.
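The abstract does not specify the power-polynomial model's form, so as an illustrative stand-in the sketch below fits a simple power-law settlement curve S(t) = a·t^b to observed data by log-log least squares; the real model is richer than this.

```python
import math

# Fit S(t) = a * t**b by linear least squares in log-log space.
# t: times since end of construction; s: observed settlements (both > 0).
def fit_power(t, s):
    x = [math.log(ti) for ti in t]
    y = [math.log(si) for si in s]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = math.exp(my - b * mx)
    return a, b
```

Once a and b are fitted on the monitoring record, extrapolating S(t) to later times gives a post-construction settlement estimate.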

Keywords: prediction, model, post-construction settlement, soft ground

Procedia PDF Downloads 415
4360 Conceptual Model for Logistics Information System

Authors: Ana María Rojas Chaparro, Cristian Camilo Sarmiento Chaves

Abstract:

Given the growing importance of logistics as a discipline for the efficient management of material and information flows, the adoption of tools that facilitate decision-making based on a global perspective of the system under study has become essential. The article shows how a concept-based model makes it possible to organize and represent reality in an appropriate way, presenting accurate and timely information; these features make this kind of model an ideal component to support an information system, recognizing that such information is essential for establishing the particularities that allow the evaluated sector to perform better.

Keywords: system, information, conceptual model, logistics

Procedia PDF Downloads 476
4359 Electromagnetic Tuned Mass Damper Approach for Regenerative Suspension

Authors: S. Kopylov, C. Z. Bo

Abstract:

This study explores the possibility of recovering energy through the suppression of vibrations. The article describes the design of an electromagnetic dynamic damper. The magnetic part of the device performs the function of a tuned mass damper, thereby providing both energy regeneration and damping to the protected mass. Based on tuned-mass-damper theory, the equations of the mathematical model were obtained. The amplitude-frequency response was then investigated for the given properties of the current system, and the main ideas and methods for further research were defined.
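The amplitude-frequency response mentioned above can be sketched with the textbook two-degree-of-freedom formulation: a primary mass (m1, c1, k1) carrying an absorber (m2, c2, k2) under harmonic forcing. This is the generic tuned-mass-damper model, not the authors' specific electromagnetic device.

```python
# Steady-state amplitude of the primary mass of a 2-DOF tuned-mass-damper
# system under harmonic forcing f0*exp(i*omega*t), via complex impedances.
def primary_amplitude(m1, c1, k1, m2, c2, k2, f0, omega):
    # Coupled equations in the frequency domain:
    #   A*X1 + B*X2 = f0,   B*X1 + D*X2 = 0
    A = -m1 * omega**2 + 1j * omega * (c1 + c2) + (k1 + k2)
    B = -(k2 + 1j * omega * c2)
    D = -m2 * omega**2 + 1j * omega * c2 + k2
    x1 = f0 * D / (A * D - B * B)
    return abs(x1)
```

Two sanity checks: at omega = 0 the response reduces to the static deflection f0/k1, and with an undamped absorber tuned so that k2/m2 = omega**2 the primary mass is motionless (the classical antiresonance).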

Keywords: electromagnetic damper, oscillations with two degrees of freedom, regeneration systems, tuned mass damper

Procedia PDF Downloads 196
4358 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach

Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini

Abstract:

Great advances in high-throughput sequencing technologies have made huge amounts of sequencing data available in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods as well as large computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from whole genome sequence data of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes and that the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer tested), with an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction.
The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay amongst accuracy, computing resources and explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and to identify phenotype relationships, which is important especially in explaining complex biological mechanisms.
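A minimal sketch of the k-mer representation on which such classification rests: slide a window of length k over the sequence and count each substring, yielding a feature vector over the k-mer vocabulary. This is an illustration of the general technique, not the authors' pipeline.

```python
from collections import Counter

# Count all overlapping substrings of length k in a DNA sequence.
# The resulting counts serve as features for downstream classifiers.
def kmer_counts(seq, k):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
```

For example, kmer_counts("ATGATG", 3) produces counts for ATG, TGA and GAT; as k grows, the vocabulary (up to 4**k entries) and memory cost grow sharply, which is the accuracy-versus-resources trade-off the abstract describes.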

Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing

Procedia PDF Downloads 150
4357 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach

Authors: Darlington Mapiye, Mpho Mokoatle, James Mashiyane, Stephanie Muller, Gciniwe Dlamini

Abstract:

Great advances in high-throughput sequencing technologies have made huge amounts of sequencing data available in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods as well as large computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from whole genome sequence data of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes and that the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer tested), with an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction.
The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay amongst accuracy, computing resources and explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and to identify phenotype relationships, which is important especially in explaining complex biological mechanisms.

Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing

Procedia PDF Downloads 138
4356 Multi-Criteria Selection and Improvement of Effective Design for Generating Power from Sea Waves

Authors: Khaled M. Khader, Mamdouh I. Elimy, Omayma A. Nada

Abstract:

Sustainable development is the nominal goal of most countries at present. In general, fossil fuels are the mainstay of development in most countries of the world. Regrettably, the rate of fossil fuel consumption is very high, and the world will soon face the problem of conventional fuel depletion. In addition, there are many environmental pollution problems resulting from the emission of harmful gases and vapours during fuel burning. Thus, clean, renewable energy has become the main concern of most countries for filling the gap between available energy resources and their growing needs. There are many renewable energy sources, such as wind, solar and wave energy. Energy can be obtained from the motion of sea waves almost all the time, whereas power generation from solar or wind energy is highly restricted to sunny periods or the availability of suitable wind speeds. Moreover, energy produced from sea wave motion is one of the cheapest types of clean energy, and harvesting it guarantees safe environmental conditions. Cheap electricity can be generated from wave energy using different systems, such as oscillating-body systems, pendulum gate systems, the ocean wave dragon system and the oscillating water column device. In this paper, a multi-criteria model has been developed using the Analytic Hierarchy Process (AHP) to support the decision of selecting the most effective system for generating power from sea waves. This paper provides a comprehensive overview of the different design alternatives for sea wave energy converter systems. The considered design alternatives have been evaluated using the developed AHP model. The multi-criteria assessment reveals that the off-shore Oscillating Water Column (OWC) system is the most appropriate system for generating power from sea waves. The OWC system consists of a suitable hollow chamber at the shore that is completely closed except at its base, which has an open area for gathering the moving sea waves.
The motion of the sea waves pushes the air up and down through a suitable Wells turbine, generating power. Improving the power generation capability of the OWC system is one of the main objectives of this research. After investigating the effect of some design modifications, it has been concluded that selecting appropriate settings for some effective design parameters, such as the number of layers of Wells turbine fans and the intermediate distance between the fans, can result in significant improvements. Moreover, a simple dynamic analysis of the Wells turbine is introduced. Furthermore, this paper compares the theoretical results with experimental results from the built prototype.
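The AHP priority calculation behind such a multi-criteria selection can be sketched with the standard geometric-mean approximation of the principal eigenvector; the pairwise judgments below are purely hypothetical, since the paper's actual criteria and matrix are not given in the abstract.

```python
# AHP priority vector from a pairwise-comparison matrix, using the
# geometric mean of each row (a standard approximation of the principal
# eigenvector). pairwise[i][j] states how much alternative i is preferred
# over alternative j on Saaty's 1-9 scale.
def ahp_priorities(pairwise):
    n = len(pairwise)
    geo = []
    for row in pairwise:
        p = 1.0
        for v in row:
            p *= v
        geo.append(p ** (1.0 / n))
    total = sum(geo)
    return [g / total for g in geo]
```

For a consistent 2x2 matrix [[1, 3], [1/3, 1]] this yields weights [0.75, 0.25]; in a full AHP study the criterion weights and per-criterion alternative scores are combined, and a consistency ratio is checked before accepting the judgments.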

Keywords: renewable energy, oscillating water column, multi-criteria selection, Wells turbine

Procedia PDF Downloads 147
4355 The Epidemiology of Dengue in Taiwan during 2014-15: A Descriptive Analysis of Severe Outbreaks Using Central Surveillance System Data

Authors: Chu-Tzu Chen, Angela S. Huang, Yu-Min Chou, Chin-Hui Yang

Abstract:

Dengue is a major public health concern throughout tropical and sub-tropical regions. Taiwan is located in the Pacific Ocean, spanning the tropical and subtropical zones. The island remains humid throughout the year and receives abundant rainfall, and summer temperatures in southern Taiwan are very hot. These conditions are ideal for the growth of dengue vectors and increase the risk of dengue outbreaks. During the first half of the 20th century, there were three island-wide dengue outbreaks (1915, 1931, and 1942). After almost forty years of dormancy, a DEN-2 outbreak occurred in Liuchiu Township, Pingtung County in 1981. Thereafter, dengue outbreaks of different scales occurred in southern Taiwan. However, there were more than ten thousand dengue cases in each of 2014 and 2015, which not only affected human health but also caused widespread social disruption and economic losses. This study describes the epidemiology of dengue in Taiwan, especially the severe outbreak in 2015, and attempts to identify effective interventions for dengue control, including dengue vaccine development for the elderly. Methods: The study used the Notifiable Diseases Surveillance System database of the Taiwan Centers for Disease Control as its data source. All cases were reported using a uniform case definition and confirmed by NS1 rapid diagnosis or laboratory diagnosis. Results: In 2014, Taiwan experienced a serious DEN-1 outbreak with 15,492 locally acquired cases, including 136 cases of dengue hemorrhagic fever (DHF), which caused 21 deaths. An even more serious DEN-2 outbreak occurred in 2015, with 43,419 locally acquired cases. The epidemic occurred mainly in Tainan City (22,760 cases) and Kaohsiung City (19,723 cases) in southern Taiwan. The cases were mainly adults. There were 228 deaths due to dengue infection, and the case fatality rate was 5.25 ‰.
The average age of the fatal cases was 73.66 years (range 29-96), and 86.84% of them were older than 60 years. Most of them had comorbidities. Reviewing the clinical manifestations of the 228 fatal cases, 38.16% (N=87) were reported as dengue with warning signs, while 51.75% (N=118) were reported without warning signs. Among the 87 fatal cases reported as dengue with warning signs, 89.53% were diagnosed with severe dengue and 84% required intensive care. Conclusion: The year 2015 was characterized by large dengue outbreaks worldwide. The risk of serious dengue outbreaks may increase significantly in the future, and the elderly are the vulnerable group in Taiwan. A dengue vaccine was licensed at the end of 2015 for use in people 9-45 years of age living in endemic settings. In addition to research into new interventions for dengue control, developing a dengue vaccine for the elderly is very important to prevent severe dengue and deaths.
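The reported case-fatality rate can be checked directly from the figures given in the abstract:

```python
# Case-fatality rate of the 2015 DEN-2 outbreak, from the abstract's figures.
deaths, cases = 228, 43419
cfr_permille = 1000 * deaths / cases  # about 5.25 per thousand, as reported
```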

Keywords: case fatality rate, dengue, dengue vaccine, the elderly

Procedia PDF Downloads 269
4354 Revolutionizing Legal Drafting: Leveraging Artificial Intelligence for Efficient Legal Work

Authors: Shreya Poddar

Abstract:

Legal drafting and revising are recognized as highly demanding tasks for legal professionals. This paper introduces an approach to automate and refine these processes through the use of advanced Artificial Intelligence (AI). The method employs Large Language Models (LLMs), with a specific focus on 'Chain of Thoughts' (CoT) and knowledge injection via prompt engineering. This approach differs from conventional methods that depend on comprehensive training or fine-tuning of models with extensive legal knowledge bases, which are often expensive and time-consuming. The proposed method incorporates knowledge injection directly into prompts, thereby enabling the AI to generate more accurate and contextually appropriate legal texts. This approach substantially decreases the necessity for thorough model training while preserving high accuracy and relevance in drafting. Additionally, the concept of guardrails is introduced. These are predefined parameters or rules established within the AI system to ensure that the generated content adheres to legal standards and ethical guidelines. The practical implications of this method for legal work are considerable. It has the potential to markedly lessen the time lawyers allocate to document drafting and revision, freeing them to concentrate on more intricate and strategic facets of legal work. Furthermore, this method makes high-quality legal drafting more accessible, possibly reducing costs and expanding the availability of legal services. This paper will elucidate the methodology, providing specific examples and case studies to demonstrate the effectiveness of 'Chain of Thoughts' and knowledge injection in legal drafting. The potential challenges and limitations of this approach will also be discussed, along with future prospects and enhancements that could further advance legal work. The impact of this research on the legal industry is substantial. 
The adoption of AI-driven methods by legal professionals can lead to enhanced efficiency, precision, and consistency in legal drafting, thereby altering the landscape of legal work. This research adds to the expanding field of AI in law, introducing a method that could significantly alter the nature of legal drafting and practice.
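The knowledge-injection idea described above amounts to a prompt-construction step: relevant clauses or rules are retrieved and embedded directly in the prompt, alongside chain-of-thought instructions and guardrail constraints. A minimal sketch (the template, guardrail rules, and function names are illustrative assumptions, not the paper's implementation, and no particular LLM API is assumed):

```python
# Sketch of knowledge injection and guardrails via prompt engineering.
# All names and rule texts here are hypothetical illustrations.

GUARDRAILS = [
    "Cite the governing clause for every obligation you draft.",
    "Do not invent statutes, case names, or parties.",
    "Flag any term that conflicts with the injected knowledge.",
]

def build_drafting_prompt(task: str, knowledge_snippets: list[str]) -> str:
    """Assemble a chain-of-thought prompt with injected legal knowledge."""
    knowledge = "\n".join(f"- {s}" for s in knowledge_snippets)
    rules = "\n".join(f"{i + 1}. {r}" for i, r in enumerate(GUARDRAILS))
    return (
        "You are assisting with legal drafting.\n"
        f"Relevant knowledge (injected, authoritative for this task):\n{knowledge}\n"
        f"Guardrails:\n{rules}\n"
        "Think step by step: identify the parties, the obligations, "
        "and the governing clauses before writing the final text.\n"
        f"Task: {task}\n"
    )

prompt = build_drafting_prompt(
    "Draft a confidentiality clause for a services agreement.",
    ["Clause 7.2 of the master agreement defines 'Confidential Information'."],
)
```

The point of the sketch is that the knowledge base lives in the prompt rather than in fine-tuned weights, so updating it requires no retraining.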

Keywords: AI-driven legal drafting, legal automation, future of legal work, large language models

Procedia PDF Downloads 41
4353 Investigating the Relationship between Moral Hazard, Corporate Governance, and Earnings Forecast Quality in the Tehran Stock Exchange

Authors: Fatemeh Rouhi, Hadi Nassiri

Abstract:

Earnings forecasts are a key input to economic decisions, but conflicts of interest in financial reporting, the complexity of financial information, and the lack of direct access to it create information asymmetry between individuals within the organization and external investors and creditors. This asymmetry gives rise to adverse selection and moral hazard in investors' decisions and makes it difficult for users to assess reported data directly. Here the role of corporate governance and disclosure becomes clear: it comprises controls and procedures intended to ensure that management does not act in its own interest but moves in the direction of maximizing shareholder and company value. This study therefore examines the relationship between moral hazard and corporate governance, on the one hand, and the quality of earnings forecasts issued by companies operating in the capital market, on the other. Drawing on the theoretical background, two main hypotheses and several sub-hypotheses are formulated and tested using the panel-data method, with conclusions drawn at the 95% confidence level according to the significance of each model and each independent variable. In specifying the models, the Chow test was first used to choose between the panel-data and pooled approaches; the Hausman test was then applied to choose between random effects and fixed effects. The findings show that because most of the moral-hazard variables are positively associated with earnings forecast quality, forecast quality increases with moral hazard among companies listed on the Tehran Stock Exchange. Among the corporate governance variables, board independence has a significant relationship with earnings forecast accuracy and earnings forecast bias, but the relationship between board size and earnings forecast quality is not statistically significant.
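The Hausman test used above compares the fixed-effects and random-effects coefficient estimates; a minimal sketch of the test statistic, using illustrative numbers rather than the study's estimates:

```python
import numpy as np
from scipy import stats

def hausman(b_fe, b_re, v_fe, v_re):
    """Hausman statistic H = (b_FE - b_RE)' (V_FE - V_RE)^-1 (b_FE - b_RE),
    chi-squared with k degrees of freedom under H0 (random effects valid)."""
    diff = np.asarray(b_fe) - np.asarray(b_re)
    v_diff = np.asarray(v_fe) - np.asarray(v_re)
    h = float(diff @ np.linalg.inv(v_diff) @ diff)
    p = stats.chi2.sf(h, df=len(diff))
    return h, p

# Single-coefficient example with made-up values (not the study's data):
h, p = hausman([1.0], [0.8], [[0.05]], [[0.01]])
# H = 0.2**2 / 0.04 = 1.0; at the 95% level H0 would not be rejected,
# favoring the random-effects specification in this toy case.
```

A large statistic (small p-value) rejects the random-effects model in favor of fixed effects.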

Keywords: corporate governance, earning forecast quality, moral hazard, financial sciences

Procedia PDF Downloads 301
4352 Modelling the Effect of Alcohol Consumption on the Accelerating and Braking Behaviour of Drivers

Authors: Ankit Kumar Yadav, Nagendra R. Velaga

Abstract:

Driving under the influence of alcohol impairs driving performance and increases crash risk worldwide. The present study investigated the effect of different Blood Alcohol Concentrations (BAC) on the accelerating and braking behaviour of drivers with the help of driving simulator experiments. Eighty-two licensed Indian drivers drove in a rural road environment designed in the driving simulator at BAC levels of 0.00%, 0.03%, 0.05%, and 0.08%. Driving performance was analysed using vehicle control performance indicators such as the mean acceleration and mean brake pedal force of the participants. Preliminary analysis showed an increase in mean acceleration and mean brake pedal force with increasing BAC levels. Generalized linear mixed models were developed to quantify the effect of the different alcohol levels and of explanatory variables such as driver age, gender, and other driver characteristics on the driving performance indicators. Alcohol use was a significant factor affecting the accelerating and braking performance of the drivers. The acceleration model indicated that mean acceleration increased by 0.013 m/s², 0.026 m/s², and 0.027 m/s² at BAC levels of 0.03%, 0.05%, and 0.08%, respectively. The brake pedal force model showed that mean brake pedal force increased by 1.09 N, 1.32 N, and 1.44 N at the same BAC levels. Age was a significant factor in both models: a one-year increase in driver age was associated with a 0.2% reduction in mean acceleration and a 19% reduction in mean brake pedal force. This suggests that driving experience could compensate for the negative effects of alcohol to some extent. Female drivers were found to accelerate more slowly and brake harder than male drivers, suggesting that female drivers are more conscious of their safety while driving. Drivers who were regular exercisers had better control of the accelerator pedal than non-regular exercisers during drunken driving. The findings reveal that drivers tend to be more aggressive and impulsive under the influence of alcohol, which deteriorates their driving performance. The drunk driving state can be differentiated from the sober state by observing the accelerating and braking behaviour of drivers. These conclusions may serve as a reference for countermeasures against drinking and driving and contribute to traffic safety.
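The direction of the reported effects (higher BAC raising mean acceleration, greater age lowering it) can be illustrated with a much simpler fixed-effects-only regression on synthetic data. The study itself used generalized linear mixed models with driver-level grouping; the sketch below substitutes plain least squares, and all numbers are made up:

```python
import numpy as np

# Synthetic data: mean acceleration rising with BAC, falling slightly with age.
rng = np.random.default_rng(0)
n = 200
bac = rng.choice([0.00, 0.03, 0.05, 0.08], size=n)  # BAC levels from the study
age = rng.uniform(20, 60, size=n)
accel = 0.50 + 0.35 * bac - 0.001 * age             # noiseless: exact recovery

# Ordinary least squares on [intercept, bac, age].
X = np.column_stack([np.ones(n), bac, age])
beta, *_ = np.linalg.lstsq(X, accel, rcond=None)
intercept, bac_effect, age_effect = beta
# bac_effect > 0 and age_effect < 0 reproduce the direction of the reported
# findings: higher BAC -> harder acceleration, older drivers -> gentler.
```

In the study's actual models the random driver effect absorbs between-driver variation, but the sign logic of the fixed effects is the same.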

Keywords: alcohol, acceleration, braking behaviour, driving simulator

Procedia PDF Downloads 133
4351 Performance of Reinforced Concrete Wall with Opening Using Analytical Model

Authors: Alaa Morsy, Youssef Ibrahim

Abstract:

Earthquakes are among the most catastrophic events, causing enormous damage to property and human lives. As part of safe building design, reinforced concrete walls are provided in structures to reduce lateral displacements under seismic loads. Shear walls are also used to resist the lateral loads that may be induced by wind. Reinforced concrete walls in residential buildings often have openings, required for windows in exterior walls or doors in interior walls, or openings of other shapes for architectural purposes. The size, position, and area of openings may vary from an engineering perspective. Shear walls can suffer damage around the corners of door and window openings due to stress concentrations that develop under vertical or lateral loads. Openings reduce shear wall capacity and can adversely affect the stiffness of the reinforced concrete wall and the seismic response of the structure. The Finite Element Method, applied here with the software package ANSYS ver. 12, has become an essential approach for analyzing civil engineering problems numerically: many models with different parameters can be built in a short time, rather than testing each case experimentally, which consumes considerable time and money. A finite element modeling approach has been used to study the effect of opening shape, size, and position in RC walls of different thicknesses under axial and lateral static loads. The proposed finite element approach has been verified against an experimental programme conducted by other researchers and validated using their variables. A very good correlation has been observed between the model and experimental results, including load capacity, failure mode, and lateral displacement. A parametric study is then applied to investigate the effect of opening size, shape, and position for different reinforced concrete wall thicknesses. The results may be useful for improving existing design models and can be applied in practice, as they satisfy both architectural and structural requirements.

Keywords: Ansys, concrete walls, openings, out of plane behavior, seismic, shear wall

Procedia PDF Downloads 152
4350 Capacitance Models of AlGaN/GaN High Electron Mobility Transistors

Authors: A. Douara, N. Kermas, B. Djellouli

Abstract:

In this study, we report calculations of the gate capacitance of AlGaN/GaN HEMTs with the nextnano device simulation software. We use a physical gate capacitance model for III-V FETs that incorporates the quantum capacitance and the centroid capacitance of the channel. The simulations explore various device structures with different values of barrier thickness and channel thickness. A detailed understanding of the impact of gate capacitance in HEMTs will allow us to determine its role at the future 10 nm physical-gate-length node.
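Physical gate-capacitance models of this kind commonly combine the barrier (insulator), quantum, and centroid contributions in series. A minimal numeric sketch of that combination rule; the per-area values below are illustrative placeholders, not nextnano outputs:

```python
def series_capacitance(*caps):
    """Total capacitance of capacitances in series: 1/C = sum(1/C_i)."""
    return 1.0 / sum(1.0 / c for c in caps)

# Illustrative per-area values in F/m^2 (hypothetical, for the sketch only):
c_barrier = 3.0e-2   # AlGaN barrier (insulator) capacitance
c_quantum = 9.0e-2   # quantum capacitance of the 2DEG channel
c_centroid = 1.8e-1  # centroid (charge-position) capacitance

c_gate = series_capacitance(c_barrier, c_quantum, c_centroid)
# The smallest contribution dominates the series total, so c_gate < c_barrier:
# the quantum and centroid terms degrade the effective gate capacitance.
```

This is why thin barriers alone do not guarantee high gate capacitance at aggressively scaled nodes: the quantum and centroid terms set a ceiling.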

Keywords: gate capacitance, AlGaN/GaN, HEMTs, quantum capacitance, centroid capacitance

Procedia PDF Downloads 385
4349 Evaluation of a Staffing to Workload Tool in a Multispecialty Clinic Setting

Authors: Kristin Thooft

Abstract:

Increasing pressure to manage healthcare costs has resulted in shifting care towards ambulatory settings and is driving a focus on cost transparency. Few nurse staffing-to-workload models have been developed for ambulatory settings, and fewer still for multi-specialty clinics. Of the existing models, few have been evaluated against outcomes to understand their impact. This evaluation took place after the AWARD model for nurse staffing to workload was implemented in a multi-specialty clinic at a regional healthcare system in the Midwest. The multi-specialty clinic houses 26 medical and surgical specialty practices. The AWARD model was implemented in two specialty practices in October 2020. Donabedian's Structure-Process-Outcome (SPO) model was used to evaluate outcomes based on changes to the structure and processes of care provided. The AWARD model defined and quantified the processes and recommended changes in the structure of day-to-day nurse staffing. Cost of care per patient visit, total visits, and total nurse-performed visits were used as structure and process measures influencing the outcomes of cost of care and access to care. Independent t-tests were used to compare the difference in variables pre- and post-implementation. The SPO model was useful as an evaluation tool, providing a simple framework that is understood by a diverse care team. No statistically significant changes in the cost of care, total visits, or nurse visits were observed, but there were differences: cost of care increased and access to care decreased. Two weeks into the post-implementation period, the multi-specialty clinic paused all non-critical patient visits due to a second surge of the COVID-19 pandemic, and clinic nursing staff were re-allocated to support the inpatient areas. This negatively impacted the Nurse Manager's ability to fully utilize the AWARD model to plan daily staffing. The SPO framework could be used for ongoing assessment of nurse staffing performance. Additional variables could be measured, giving a more complete picture of the impact of nurse staffing. Going forward, there must be a continued focus on the outcomes of care and the value of nursing.
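The pre- versus post-implementation comparison described above uses independent-samples t-tests, which take only a few lines to reproduce. A sketch with made-up cost-per-visit figures (illustrative values only, not the clinic's data):

```python
from scipy import stats

# Hypothetical cost-per-visit samples (illustrative values only).
pre_implementation = [112.0, 108.5, 115.2, 110.8, 109.9, 113.4]
post_implementation = [118.1, 116.7, 121.3, 117.9, 119.5, 120.2]

# Two-sided independent-samples t-test (equal variances assumed by default).
t_stat, p_value = stats.ttest_ind(pre_implementation, post_implementation)
# A p-value below the chosen alpha would indicate a significant difference in
# means; the study observed differences that did not reach significance.
```

With real clinic data, `equal_var=False` (Welch's test) would be the safer default when group variances differ.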

Keywords: ambulatory, clinic, evaluation, outcomes, staffing, staffing model, staffing to workload

Procedia PDF Downloads 163
4348 Detecting Covid-19 Fake News Using Deep Learning Technique

Authors: Anjali A. Prasad

Abstract:

Social media now plays an important role in spreading misinformation, or fake news. This study analyzes fake news related to the COVID-19 pandemic spread through social media. The paper evaluates and compares different approaches used to mitigate this issue, including popular deep learning approaches such as CNN, RNN, LSTM, and BERT, for classification. To evaluate model performance, accuracy, precision, recall, and F1-score were used as the evaluation metrics, and the four algorithms were then compared to determine which shows the better result.
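The four evaluation metrics named above follow directly from confusion-matrix counts, independently of which classifier produced the predictions. A small self-contained sketch (the toy labels are illustrative):

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for binary labels (1 = fake)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Toy example: 1 = fake news, 0 = genuine.
acc, prec, rec, f1 = classification_metrics([1, 1, 0, 0], [1, 0, 0, 0])
# acc = 0.75, prec = 1.0, rec = 0.5, f1 = 2/3
```

F1 matters here because fake-news datasets are typically imbalanced, where accuracy alone can be misleading.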

Keywords: BERT, CNN, LSTM, RNN

Procedia PDF Downloads 192
4347 Next Generation Radiation Risk Assessment and Prediction Tools Generation Applying AI-Machine (Deep) Learning Algorithms

Authors: Selim M. Khan

Abstract:

Indoor air quality is strongly influenced by the presence of radioactive radon (222Rn) gas. Exposure to high 222Rn concentrations is unequivocally linked to DNA damage and lung cancer and is a worsening issue in North American and European built environments, having increased over time within newer housing stocks as a function of as yet unclear variables. Indoor air radon concentration can be influenced by a wide range of environmental, structural, and behavioral factors. As some of these factors are quantitative while others are qualitative, no single statistical model can determine indoor radon levels precisely while simultaneously considering all these variables across a complex and highly diverse dataset. The ability of AI-machine (deep) learning to simultaneously analyze multiple quantitative and qualitative features makes it suitable for predicting radon with a high degree of precision. Using Canadian and Swedish long-term indoor air radon exposure data, we are using artificial deep neural network models with random weights, and polynomial statistical models in MATLAB, to assess and predict radon health risk to humans as a function of geospatial, human behavioral, and built environmental metrics. Our initial artificial neural network with random weights, run with sigmoid activation, tested different combinations of variables and showed the highest prediction accuracy (>96%) within a reasonable number of iterations. Here, we present details of these emerging methods and discuss their strengths and weaknesses compared to the traditional artificial neural network and statistical methods commonly used to predict indoor air quality in different countries. We propose the artificial deep neural network with random weights as a highly effective method for assessing and predicting indoor radon.
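In a "neural network with random weights," the hidden-layer weights are drawn once at random and never trained; only the output weights are fitted, typically by least squares. A numpy sketch of that scheme on synthetic data (not the Canadian or Swedish radon datasets, and not the authors' MATLAB implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n_samples, n_features, n_hidden = 40, 3, 80

# Synthetic predictors and a made-up nonlinear "radon level" target.
X = rng.uniform(-1, 1, size=(n_samples, n_features))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 - X[:, 2]

# Hidden layer: weights drawn once at random, fixed, never trained.
W = rng.normal(size=(n_features, n_hidden))
b = rng.normal(size=n_hidden)
H = sigmoid(X @ W + b)

# Only the output weights are fitted, by linear least squares.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)
train_mse = float(np.mean((H @ beta - y) ** 2))
# With more hidden units than samples, the training fit is near-exact;
# real use requires a held-out set to measure generalization.
```

The appeal of the approach is that fitting reduces to one linear solve, avoiding iterative backpropagation entirely.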

Keywords: radon, radiation protection, lung cancer, AI-machine deep learning, risk assessment, risk prediction, Europe, North America

Procedia PDF Downloads 83
4346 Finite Element Analysis of Human Tarsals, Metatarsals and Phalanges for Predicting Probable Locations of Fractures

Authors: Irfan Anjum Manarvi, Fawzi Aljassir

Abstract:

Human bones have long been a keen area of research in the field of biomechanical engineering. Medical professionals, as well as engineering academics and researchers, have investigated various bones using medical, mechanical, and materials approaches to build the available body of knowledge. Their major focus has been to establish the properties of these bones and ultimately to develop processes and tools either to prevent fracture or to repair its damage. The literature shows that mechanical researchers conducted a variety of tests for hardness, deformation, and strain field measurement to arrive at their findings. However, they considered the accuracy of these results insufficient due to various limitations of tools and test equipment and the limited availability of human bones. They proposed further studies to first overcome inaccuracies in measurement methods, testing machines, and experimental errors and then carry out experimental or theoretical studies. Finite element analysis is a technique that was developed for the aerospace industry due to the complexity of its designs and materials, but over time it has found applications in many other industries because of its accuracy and its flexibility in the selection of materials and the types of loading that can be applied theoretically to an object under study. In the past few decades, the field of biomechanical engineering has also begun to apply it. However, the work done in the area of tarsals, metatarsals, and phalanges using this technique is very limited. The present research has therefore focused on using this technique to analyze these critical bones of the human body. The technique requires a three-dimensional geometric computer model of the object to be analyzed. In the present research, a 3D laser scanner was used to obtain accurate geometric scans of individual tarsals, metatarsals, and phalanges from a typical human foot, from which the computer geometric models were built. These were then imported into finite element analysis software, and a refinement process was carried out prior to analysis to ensure the computer models were true representatives of the actual bones. Each bone was then analyzed individually. A number of constraints and load conditions were applied to observe the stress and strain distributions in these bones under compressive and tensile loads and their combinations. Results were collected for deformations along various axes, and the stress and strain distributions were examined to identify critical locations where fracture could occur. A comparative analysis of the failure properties of all three types of bones was carried out to establish which of them could fail first, and the outcome is presented in this research. The results of this investigation could be used for further experimental studies by academics and researchers, as well as by industrial engineers, in developing foot protection devices or tools for surgical operations and recovery treatment of these bones. Researchers could build on these models to analyze a complete human foot through finite element analysis under various loading conditions such as walking, marching, running, and landing after a jump.

Keywords: tarsals, metatarsals, phalanges, 3D scanning, finite element analysis

Procedia PDF Downloads 316
4345 The EU Omnipotence Paradox: Inclusive Cultural Policies and Effects of Exclusion

Authors: Emmanuel Pedler, Elena Raevskikh, Maxime Jaffré

Abstract:

Can the cultural geography of European cities be durably managed by European policies? To answer this question, two hypotheses can be proposed. (1) Either European cultural policies are able to erase cultural inequalities between the territories through the creation of new areas of cultural attractiveness in each beneficiary neighborhood, city or country. Or, (2) each European region historically rooted in a number of endogenous socio-historical, political or demographic factors is not receptive to exogenous political influences. Thus, the cultural attractiveness of a territory is difficult to measure and to impact by top-down policies in the long term. How do these two logics - European and local - interact and contribute to the emergence of a valued, popular sense of a common European cultural identity? Does this constant interaction between historical backgrounds and new political concepts encourage a positive identification with the European project? The European cultural policy programs, such as ECC (European Capital of Culture), seek to develop new forms of civic cohesion through inclusive and participative cultural events. The cultural assets of a city elected ‘ECC’ are mobilized to attract a wide range of new audiences, including populations poorly integrated into local cultural life – and consequently distant from pre-existing cultural offers. In the current context of increasingly heterogeneous individual perceptions of Europe, the ECC program aims to promote cultural forms and institutions that should accelerate both territorial and cross-border European cohesion. The new cultural consumption pattern is conceived to stimulate integration and mobility, but also to create a legitimate and transnational ideal European citizen type. 
Our comparative research confronts contrasting cases of European Capitals of Culture from the south and the north of Europe: cities recently involved in the ECC mechanism and cities that were elected ECC in the past, and multi-centered cultural models versus highly centralized cultural models. We aim to explore the impacts of European policies on urban cultural geography, but also to understand the current obstacles to their efficient implementation.

Keywords: urbanism, cultural policies, cultural institutions, European Capitals of Culture, heritage industries, exclusion effects

Procedia PDF Downloads 248
4344 The Impact of Geopolitical Risks and the Oil Price Fluctuations on the Kuwaiti Financial Market

Authors: Layal Mansour

Abstract:

The aim of this paper is to identify whether oil price volatility or geopolitical risks can predict future financial stress periods or economic recessions in Kuwait. We construct the first Financial Stress Index for Kuwait (FSIK), which includes informative vulnerability indicators of the main financial sectors: the banking sector, the equities market, and the foreign exchange market. The study covers the period from 2000 to 2020, so it includes the two recent most devastating world economic crises with oil price fluctuations: the COVID-19 pandemic crisis and the Ukraine-Russia war. All data are taken from the Central Bank of Kuwait, the World Bank, the IMF, DataStream, and the Federal Reserve Bank of St. Louis. The variables are computed as percentage growth rates, then standardized and aggregated into one index using the variance-equal weights method, the most frequently used in the literature. The graphical FSIK analysis provides detailed information (by date) to policymakers on how internal financial stability depends on internal policy and events such as government elections or resignations. It also shows how monetary authorities' or internal policymakers' decisions to relieve personal loans or to increase or decrease the public budget trigger internal financial instability. The empirical analysis under vector autoregression (VAR) models shows the dynamic causal relationship between oil price fluctuations and the Kuwaiti economy, which relies heavily on the oil price. Similarly, using VAR models to assess the impact of global geopolitical risks on Kuwaiti financial stability, the results reveal whether Kuwait is confronted with or sheltered from geopolitical risks. The Financial Stress Index serves as a guide for macroprudential regulators in understanding the weaknesses of the overall Kuwaiti financial market and economy, regardless of the Kuwaiti dinar's strength and exchange rate stability. It helps policymakers predict future stress periods and thus prepare alternative cushions to confront possible future financial threats.
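The aggregation pipeline described above (percentage growth rate, standardization, equal weighting) can be sketched in a few lines; the indicator series below are synthetic random walks, not Kuwaiti data:

```python
import numpy as np

def stress_index(indicators):
    """Aggregate raw indicator levels into a stress index:
    percentage growth rate -> z-score -> equal-weight average."""
    components = []
    for series in indicators:
        series = np.asarray(series, dtype=float)
        growth = np.diff(series) / series[:-1] * 100.0  # % growth rate
        z = (growth - growth.mean()) / growth.std()     # standardize
        components.append(z)
    return np.mean(components, axis=0)                  # variance-equal weights

# Synthetic banking, equity, and FX indicator levels (illustrative only).
rng = np.random.default_rng(1)
levels = [100 * np.cumprod(1 + rng.normal(0, 0.02, 120)) for _ in range(3)]
fsi = stress_index(levels)
# Each standardized component has mean 0 and unit variance, so no single
# sector dominates the index and the aggregate fluctuates around 0.
```

Standardizing before averaging is what makes the weights "variance-equal": each sector contributes the same variance to the composite.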

Keywords: Kuwait, financial stress index, causality test, VAR, oil price, geopolitical risks

Procedia PDF Downloads 65
4343 On the Topological Entropy of Nonlinear Dynamical Systems

Authors: Graziano Chesi

Abstract:

The topological entropy plays a key role in linear dynamical systems, allowing one to establish the existence of stabilizing feedback controllers for linear systems in the presence of communications constraints. This paper addresses the determination of a robust value of the topological entropy in nonlinear dynamical systems, specifically the largest value of the topological entropy over all linearized models in a region of interest of the state space. It is shown that a sufficient condition for establishing upper bounds of the sought robust value of the topological entropy can be given in terms of a semidefinite program (SDP), which belongs to the class of convex optimization problems.
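For a discrete-time linear system x_{k+1} = A x_k, the topological entropy has a closed form: the sum of log|λ| over the unstable eigenvalues of A (base 2 is a common convention in the data-rate literature). The robust value discussed above is the maximum of this quantity over the linearized models in the region of interest. A numeric sketch of the linear building block:

```python
import numpy as np

def topological_entropy(A):
    """Entropy of x_{k+1} = A x_k: sum of log2|lambda| over |lambda| > 1."""
    eigenvalues = np.linalg.eigvals(np.asarray(A, dtype=float))
    return float(sum(np.log2(abs(l)) for l in eigenvalues if abs(l) > 1))

# One unstable and one stable mode: only the eigenvalue 2 contributes.
h = topological_entropy([[2.0, 0.0], [0.0, 0.5]])
# h = log2(2) = 1.0, i.e., one bit per step is the minimum data rate
# needed to stabilize this system over a communication channel.
```

The SDP-based condition in the paper upper-bounds the supremum of this quantity over all linearizations, which a pointwise eigenvalue computation like this cannot certify on its own.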

Keywords: non-linear system, communication constraint, topological entropy

Procedia PDF Downloads 308
4342 An Unexpected Helping Hand: Consequences of Redistribution on Personal Ideology

Authors: Simon B.A. Egli, Katja Rost

Abstract:

Literature on redistributive preferences has proliferated in past decades. A core assumption behind it is that variation in redistributive preferences can explain different levels of redistribution. In contrast, this paper considers the reverse. What if it is redistribution that changes redistributive preferences? The core assumption behind the argument is that if self-interest - which we label concrete preferences - and ideology - which we label abstract preferences - come into conflict, the former will prevail and lead to an adjustment of the latter. To test the hypothesis, data from a survey conducted in Switzerland during the first wave of the COVID-19 crisis is used. A significant portion of the workforce at the time unexpectedly received state money through the short-time working program. Short-time work was used as a proxy for self-interest and was tested (1) on the support given to hypothetical, ailing firms during the crisis and (2) on the prioritization of justice principles guiding state action. In a first step, several models using OLS-regressions on political orientation were estimated to test our hypothesis as well as to check for non-linear effects. We expected support for ailing firms to be the same regardless of ideology but only for people on short-time work. The results both confirm our hypothesis and suggest a non-linear effect. Far-right individuals on short-time work were disproportionally supportive compared to moderate ones. In a second step, ordered logit models were estimated to test the impact of short-time work and political orientation on the rankings of the distributive justice principles need, performance, entitlement, and equality. The results show that being on short-time work significantly alters the prioritization of justice principles. Right-wing individuals are much more likely to prioritize need and equality over performance and entitlement when they receive government assistance. 
No such effect is found among left-wing individuals. In conclusion, we provide moderate to strong evidence that unexpectedly finding oneself at the receiving end changes redistributive preferences if personal ideology is antithetical to redistribution. The implications of our findings on the study of populism, personal ideologies, and political change are discussed.

Keywords: COVID-19, ideology, redistribution, redistributive preferences, self-interest

Procedia PDF Downloads 130
4341 From Cascade to Cluster School Model of Teachers’ Professional Development Training Programme: Nigerian Experience, Ondo State: A Case Study

Authors: Oloruntegbe Kunle Oke, Alake Ese Monica, Odutuyi Olubu Musili

Abstract:

This research explores the differing effectiveness of the cascade and cluster models in professional development programs for educators in Ondo State, Nigeria. The cascade model emphasizes a top-down approach, in which training is cascaded from expert trainers down to lower levels of teachers. In contrast, the cluster model, a bottom-up approach, fosters collaborative learning among teachers within specific clusters. Through a review of the literature and an empirical study of the implementation of the cascade model in two academic sessions, followed by the cluster model in another two, the study examined their effectiveness in terms of teacher development, productivity, and students' achievement. The study also drew a comparative analysis of the strengths and weaknesses associated with each model, considering factors such as scalability, cost-effectiveness, adaptability to various contexts, and sustainability. In the cascade model, 2,500 teachers from Ondo State primary schools received a week of intensive training in five zones in each of two academic sessions. In the cluster model, 1,980 teachers in 52 clusters participated in the first session and 1,663 teachers in 34 clusters in the following session. The cascade program consisted of one week of rigorous training of teachers by facilitators, while the cluster program comprised four components in addition to sensitization: sit-in observation, a need-based assessment workshop, pre-cluster meetings, and the actual cluster meetings, and took place one day a week for ten weeks. A validated Cluster Impact Survey Instrument (CISI) and a Teacher's Assessment Questionnaire (TAQ) were administered to ascertain the effectiveness of the models during and after implementation. The findings from the literature detailed the specific effectiveness, strengths, and limitations of each approach, especially the potential for inconsistencies and resistance to change. The data collected revealed the superiority of the cluster model. Responses to the TAQ showed content knowledge and skill updates under both models, but these were more sustained under the cluster model. Overall, the study contributes to the ongoing discourse on effective strategies for improving teacher training and enhancing student outcomes, offering practical recommendations for the design and implementation of future professional development projects.

Keywords: cascade model, cluster model, teachers’ development, productivity, students’ achievement

Procedia PDF Downloads 20