Search results for: shared parameter model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18854


17504 Jurisdictional Federalism and Formal Federalism: Levels of Political Centralization on American and Brazilian Models

Authors: Henrique Rangel, Alexandre Fadel, Igor De Lazari, Bianca Neri, Carlos Bolonha

Abstract:

This paper offers a comparative analysis of the American and Brazilian models of federalism, taking their levels of political centralization as the main criterion. The central problem addressed is the Brazilian approach to a unitary regime. Despite the hegemony of the federative form after 1989, Brazil has a historical pattern of political centralization that persists under the 1988 constitutional regime. The United States, by contrast, framed a federalism in which the states retain significant authority. The hypothesis holds that the range of alternative criteria of federalization, which can generate political centralization, and the way they are upheld on judicial review are crucial to understanding the levels of political centralization achieved in each model. To test this hypothesis, the research follows a methodology temporally delimited to the 1994-2014 period. Three paradigmatic precedents of the U.S. Supreme Court were selected: United States v. Morrison (2000), on gender-motivated violence; Gonzales v. Raich (2005), on the medical use of marijuana; and United States v. Lopez (1995), on firearm possession in school zones. These cases, the most relevant on federalism in the Supreme Court's recent activity, indicate a determinant parameter of deliberation: the commerce clause. After observing the criterion used to permit or prohibit political centralization in the United States, the Brazilian normative context is presented, making it possible to identify the legal treatment these controversies would likely receive in that country. The decision-making reveals deliberative parameters that characterize each federative model. The precedents of the Rehnquist Court promoted a broad revival of the federalism debate, establishing the commerce clause as a reliable criterion for upholding, or not, the necessity of centralization, even in decisions considered conservative. Brazilian federalism, by contrast, resolves such controversies in a formalist fashion, through numerous, comprehensive, and sometimes casuistic normative devices oriented toward intense centralization. The aim of this work is to indicate how the jurisdictional federalism found in the United States can preserve a consistent model with robustly autonomous states, while Brazil prefers normative mechanisms designed to start from centralization.

Keywords: constitutional design, federalism, U.S. Supreme Court, legislative authority

Procedia PDF Downloads 516
17503 Theoretical Reflections on Metaphor and the Cohesion and Coherence of Face-To-Face Interactions

Authors: Afef Badri

Abstract:

The role of metaphor in creating the coherence and cohesion of discourse in online interactive talk has received almost no attention. This paper provides some theoretical reflections on metaphorical coherence as a jointly constructed process that evolves in online, face-to-face interactions. It suggests that the presence of a global conceptual structure in a conversation makes it conceptually cohesive. Yet coherence remains a process largely determined by other variables (shared goals, communicative intentions, and frameworks of understanding). The metaphorical coherence created by these variables can be useful in detecting bias in media reporting.

Keywords: coherence, cohesion, face-to-face interactions, metaphor

Procedia PDF Downloads 247
17502 Influence of Physicochemical Water Quality Parameters on Abundance of Aquatic Insects in Rivers of Perak, Malaysia

Authors: Nur Atirah Hasmi, Nadia Nisha Musa, Hasnun Nita Ismail, Zulfadli Mahfodz

Abstract:

The effect of water quality parameters on the abundance of aquatic insects has been studied in the Batu Berangkai, Dipang, Kuala Woh, and Lata Kinjang Rivers, Perak, northern peninsular Malaysia. The aims are to compare the abundance of aquatic insects across the sampling areas and to investigate the influence of physical and chemical factors (water temperature, water depth, canopy, water velocity, pH, and dissolved oxygen) on that abundance. Samples and data were collected using an aquatic net and a multi-probe meter. Four physical parameters (water velocity, water temperature, depth, and canopy cover) and two chemical parameters (pH and dissolved oxygen) were measured in situ and recorded. A total of 631 individuals, classified into 6 orders and 18 families of aquatic insects, were identified from the four sampling sites. The largest percentage of samples collected belongs to the order Plecoptera (35.8%), followed by Ephemeroptera (32.6%), Trichoptera (17.0%), Hemiptera (8.1%), Coleoptera (4.8%), and, least of all, Odonata (1.7%). The insects collected from the Dipang River showed the highest abundance, 273 individuals from 6 orders and 13 families, while the fewest were trapped at Lata Kinjang, with only 64 individuals from 5 orders and 6 families. There is a significant association between sampling area and abundance of aquatic insects (p < 0.05). High abundance of aquatic insects was found at higher water temperature, low water velocity, deeper water, low pH, high dissolved oxygen, and in areas not covered by canopy.

Keywords: aquatic insect, physicochemical parameter, river, water quality

Procedia PDF Downloads 216
17501 Application Difference between Cox and Logistic Regression Models

Authors: Idrissa Kayijuka

Abstract:

The logistic regression and Cox regression (proportional hazards) models are currently employed in the analysis of prospective epidemiologic research into risk factors for chronic diseases, and a theoretical relationship between the two models has been studied. By definition, the Cox regression model, also called the Cox proportional hazards model, is a procedure for modeling time-to-event data in the presence of censored cases, whereas the logistic regression model applies where the independent variables may be numerical or nominal and the outcome variable is binary (dichotomous). Many researchers have focused on overviews of the Cox and logistic regression models and their applications in different areas. In this work, the analysis is done on secondary data, SPSS exercise data on breast cancer with a sample size of 1,121 women, with the main objective of showing the application difference between the Cox regression model and the logistic regression model based on factors that cause women to die of breast cancer. Some of the analysis, for example on lymph node status, was done manually, and the SPSS software was used to analyze the remaining data. This study found an application difference between the Cox and logistic regression models: the Cox regression model is used when one wishes to analyze data that include follow-up time, whereas the logistic regression model analyzes data without follow-up time. The two models also have different measures of association: the hazard ratio and the odds ratio for the Cox and logistic regression models, respectively. A similarity between them is that both apply to the prediction of the outcome of a categorical variable, i.e., a variable that can take only a restricted number of categories. In conclusion, the Cox regression model differs from logistic regression by assessing a rate instead of a proportion. Both models are suitable methods for analyzing such data, but the Cox regression model is the more recommended.
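As a minimal illustration of the two measures of association named above, the sketch below computes an odds ratio (logistic regression's measure, which ignores follow-up time) and a constant-hazard hazard ratio (the incidence-rate-ratio approximation used by Cox-type reasoning) from invented two-group follow-up data. The group labels and counts are hypothetical, not taken from the SPSS breast cancer dataset:

```python
# Hypothetical follow-up data for two lymph-node-status groups
# (illustrative numbers only, not the SPSS breast cancer dataset).
exposed   = {"deaths": 30, "total": 100, "person_years": 450.0}
unexposed = {"deaths": 10, "total": 100, "person_years": 490.0}

# Logistic regression's measure of association: the odds ratio,
# which ignores follow-up time entirely.
odds_exposed   = exposed["deaths"] / (exposed["total"] - exposed["deaths"])
odds_unexposed = unexposed["deaths"] / (unexposed["total"] - unexposed["deaths"])
odds_ratio = odds_exposed / odds_unexposed

# Cox regression's measure of association is the hazard ratio; under
# constant hazards it reduces to the ratio of incidence rates, which
# does use the follow-up time (person-years).
rate_exposed   = exposed["deaths"] / exposed["person_years"]
rate_unexposed = unexposed["deaths"] / unexposed["person_years"]
hazard_ratio = rate_exposed / rate_unexposed

print(round(odds_ratio, 2))    # based on 30/70 vs 10/90
print(round(hazard_ratio, 2))  # based on rates per person-year
```

The two measures differ even on the same data, which is the "rate instead of proportion" distinction the abstract draws.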

Keywords: logistic regression model, Cox regression model, survival analysis, hazard ratio

Procedia PDF Downloads 455
17500 Comparison of Wake Oscillator Models to Predict Vortex-Induced Vibration of Tall Chimneys

Authors: Saba Rahman, Arvind K. Jain, S. D. Bharti, T. K. Datta

Abstract:

The present study compares semi-empirical wake-oscillator models used to predict vortex-induced vibration of structures, including those proposed by Facchinetti, by Farshidianfar and Dolatabadi, and by Skop and Griffin. These models combine a wake oscillator resembling the Van der Pol oscillator with a single-degree-of-freedom (SDOF) structural oscillation model. To use these models for estimating the top displacement of chimneys, only the first vibration mode of the chimney is considered; its modal equation constitutes the SDOF model. The equations of the wake oscillator and the SDOF model are solved simultaneously using an iterative procedure, carried out with the ODE solver of MATLAB. The empirical parameters of the wake-oscillator models are estimated using a newly developed approach, and the predicted response compares well with experimental data. For the comparative study, a tall concrete chimney of height 210 m has been chosen, with a base diameter of 28 m, a top diameter of 20 m, and a thickness of 0.3 m. The responses of the chimney are also determined using the linear model proposed by E. Simiu and the deterministic model given in the Eurocode. The comparison shows that the responses predicted by the Facchinetti model and the Skop-Griffin model are nearly the same, while the Farshidianfar-Dolatabadi model predicts a higher response. The linear model, which neglects the aero-elastic phenomenon, gives a lower response than the non-linear models. Further, for large damping, the Eurocode prediction agrees relatively well with those of the non-linear models.
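The coupled system described above can be sketched numerically. The fragment below integrates a Facchinetti-type Van der Pol wake oscillator coupled to an SDOF structural mode with a simple semi-implicit Euler scheme; every parameter value is illustrative (chosen for a stable lock-in demonstration), not calibrated to the 210 m chimney in the study, and the coupling coefficients are assumed:

```python
# Facchinetti-type sketch: SDOF structure + Van der Pol wake oscillator,
# dimensionless form, semi-implicit Euler integration.
zeta, omega_s = 0.05, 1.0   # structural damping ratio and natural frequency
eps, omega_f = 0.3, 1.0     # Van der Pol parameter and shedding frequency (lock-in)
M, A = 0.01, 4.0            # structure-wake coupling coefficients (assumed)

y, ydot = 0.0, 0.0          # modal displacement and velocity
q, qdot = 0.1, 0.0          # wake variable (fluctuating lift coefficient)
dt, steps = 0.001, 100_000

q_amp = y_amp = 0.0
for i in range(steps):
    # SDOF structure forced by the wake variable q
    yddot = -2.0 * zeta * omega_s * ydot - omega_s ** 2 * y + M * omega_f ** 2 * q
    # Van der Pol wake oscillator forced by the structural acceleration
    qddot = -eps * omega_f * (q * q - 1.0) * qdot - omega_f ** 2 * q + A * yddot
    ydot += dt * yddot
    y += dt * ydot
    qdot += dt * qddot
    q += dt * qdot
    if i > steps // 2:      # record amplitudes after transients decay
        q_amp = max(q_amp, abs(q))
        y_amp = max(y_amp, abs(y))

print(round(q_amp, 2), round(y_amp, 2))
```

The wake variable settles near the Van der Pol limit cycle while the structure responds at resonance; the papers compared in the abstract differ mainly in the coupling terms and parameter calibration.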

Keywords: chimney, deterministic model, van der pol, vortex-induced vibration

Procedia PDF Downloads 221
17499 Modelling the Impact of Installation of Heat Cost Allocators in District Heating Systems Using Machine Learning

Authors: Danica Maljkovic, Igor Balen, Bojana Dalbelo Basic

Abstract:

Following the EU Energy Efficiency Directive, specifically Article 9, individual metering in district heating systems had to be introduced by the end of 2016. These provisions have been implemented in member states' legal frameworks, Croatia among them. The directive allows installation of both heat metering devices and heat cost allocators. Largely due to poor communication and public relations, a false public impression arose that heat cost allocators are devices that save energy. Although this notion is wrong, the aim of this work is to develop a model that precisely expresses the influence of installing heat cost allocators on potential energy savings in each unit within multifamily buildings. At the same time, machine learning has in recent years gained wider application in various fields, as it has proven to give good results where large amounts of data must be processed to recognize patterns and correlations among the relevant parameters, and where the problem is too complex for a human to solve. One machine learning method, the decision tree, has achieved an accuracy of over 92% in predicting overall building consumption. In this paper, machine learning algorithms are used to isolate the sole impact of installing heat cost allocators on a single building in multifamily houses connected to district heating systems. Special emphasis is given to regression analysis, logistic regression, support vector machines, decision trees, and the random forest method.

Keywords: district heating, heat cost allocator, energy efficiency, machine learning, decision tree model, regression analysis, logistic regression, support vector machines, random forest

Procedia PDF Downloads 249
17498 A Broadband Tri-Cantilever Vibration Energy Harvester with Magnetic Oscillator

Authors: Xiaobo Rui, Zhoumo Zeng, Yibo Li

Abstract:

A novel tri-cantilever energy harvester with a magnetic oscillator is presented, which converts ambient vibration into electrical energy to power low-power devices such as wireless sensor networks. The most common way to harvest vibration energy is with linear resonant devices such as cantilever beams, since this structure creates the highest strain for a given force. The highest efficiency is achieved when the resonance frequency of the harvester matches the vibration frequency; the limitation of the structure is its narrow effective bandwidth. To overcome this limitation, this article introduces a broadband tri-cantilever harvester with nonlinear stiffness. The energy harvester consists of three thin cantilever beams arranged vertically, each with a neodymium (NdFeB) magnet at its free end and a fixed base at the other end. The three cantilevers are given different resonant frequencies by design, through different thicknesses, so the device gains the same advantage of multiple resonant frequencies as a piezoelectric cantilever array. To achieve broadband energy harvesting, magnetic interaction introduces a nonlinear system stiffness that tunes the resonant frequency to match the excitation. Since all three cantilever tips are free and the magnetic force is distance-dependent, the resonant frequencies change in a complex way as the free ends vibrate vertically. Both a model and an experiment were built. An electromechanically coupled lumped-parameter model is presented, with an electromechanical formulation and analytical expressions for the coupled nonlinear vibration response and the voltage response. The structure was fabricated and mechanically attached to an electromagnetic shaker via the fixed base, so that the shaker's vibrations couple into the cantilevers. The cantilevers are bonded with piezoelectric macro-fiber composite (MFC) patches (model M8514P2).
They measure 120 × 20 mm², with thicknesses of 1 mm, 0.8 mm, and 0.6 mm. The prototype generator delivered a measured effective electrical power of 160.98 mW and a DC output voltage of 7.93 V at an excitation level of 10 m/s², and a 130% increase in operating bandwidth was achieved. This device is promising for supporting low-power devices, peer-to-peer wireless nodes, and small-scale wireless sensor networks in ambient vibration environments.
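The claim that different thicknesses yield different resonant frequencies can be checked with the standard first-mode formula for a clamped-free beam, f1 = (λ1²/2π)·√(EI/(ρAL⁴)), which for a rectangular section reduces to a frequency proportional to t/L². The sketch below assumes a bare steel substrate (E = 200 GPa, ρ = 7850 kg/m³) purely for illustration; the real beams carry bonded MFC patches and tip magnets, which shift these values:

```python
import math

# First-mode frequency of a clamped-free rectangular cantilever.
E, rho = 200e9, 7850.0     # assumed steel substrate properties
L, W = 0.120, 0.020        # beam length and width in metres (width cancels)
lam1_sq = 3.516            # (1.875)^2, first clamped-free mode constant

def first_mode_hz(t):
    # f1 = (lam1^2 / (2*pi)) * sqrt(E*I / (rho*A*L^4)), I = W*t^3/12, A = W*t
    return (lam1_sq / (2 * math.pi)) * (t / L ** 2) * math.sqrt(E / (12 * rho))

freqs = [first_mode_hz(t) for t in (1.0e-3, 0.8e-3, 0.6e-3)]
print([round(f, 1) for f in freqs])  # thicker beam -> higher frequency
```

Since f1 is linear in thickness, the 1 mm, 0.8 mm, and 0.6 mm beams space their resonances in the ratio 1 : 0.8 : 0.6, which is what gives the array its multiple-resonance behavior before the magnetic coupling broadens it further.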

Keywords: tri-cantilever, ambient vibration, energy harvesting, magnetic oscillator

Procedia PDF Downloads 154
17497 Development of a Predictive Model to Prevent Financial Crisis

Authors: Tengqin Han

Abstract:

Delinquency has been a crucial factor in economics throughout the years. Common in credit cards and mortgages, it played a central role in causing the financial crisis of 2008. A delinquency is a sign that the borrower is unable to pay off the debt and may ultimately lose the property, yet individually one case of delinquency seems unimportant compared to the entire credit system. As an emerging economy, China has grown rapidly in national and economic strength, and its gross domestic product (GDP) growth rate has remained as high as 8% over the past decades. However, potential risks lie behind the appearance of prosperity, and among them the credit system is the most significant. Because of the long terms and large balances of mortgages, it is critical to monitor risk during the performance period. In this project, about 300,000 mortgage account records are analyzed in order to develop a model that predicts the probability of delinquency. Through univariate analysis the data are cleaned, and through bivariate analysis the variables with strong predictive power are detected. The project is divided into two parts. In the first part, the 2005 analysis data are split into two portions: 60% for model development and 40% for in-time validation. The KS for model development is 31, and the KS for in-time validation is also 31, indicating that the model is stable. In addition, the model is further checked by out-of-time validation on 40% of the 2006 data, where the KS is 33, indicating that the model remains stable and robust. In the second part, the model is improved by adding macroeconomic indexes, including GDP, the consumer price index, the unemployment rate, and the inflation rate; data from 2005 to 2010 are used for model development and validation.
Compared with the base model (without macroeconomic variables), the KS increases from 41 to 44, indicating that the macroeconomic variables improve the separation power of the model and make its predictions more accurate.
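The KS values quoted above are the Kolmogorov-Smirnov separation statistic standard in credit scoring: the maximum gap between the cumulative score distributions of delinquent and non-delinquent accounts, reported on a 0-100 scale. The sketch below computes it on invented scores (not the project's mortgage data):

```python
# KS separation statistic on made-up model scores.
delinquent     = [0.81, 0.74, 0.66, 0.62, 0.55, 0.49, 0.35]
non_delinquent = [0.52, 0.44, 0.41, 0.33, 0.28, 0.22, 0.15, 0.09]

def ks_statistic(bads, goods):
    cuts = sorted(set(bads) | set(goods))
    best = 0.0
    for c in cuts:
        cdf_bad  = sum(b <= c for b in bads) / len(bads)
        cdf_good = sum(g <= c for g in goods) / len(goods)
        best = max(best, abs(cdf_bad - cdf_good))
    return 100.0 * best   # conventionally reported on a 0-100 scale

ks = ks_statistic(delinquent, non_delinquent)
print(round(ks, 1))
```

A higher KS means the model's scores separate the two populations more cleanly, which is why the jump from 41 to 44 is read as improved separation power.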

Keywords: delinquency, mortgage, model development, model validation

Procedia PDF Downloads 228
17496 Forecasting Age-Specific Mortality Rates and Life Expectancy at Births for Malaysian Sub-Populations

Authors: Syazreen N. Shair, Saiful A. Ishak, Aida Y. Yusof, Azizah Murad

Abstract:

In this paper, we forecast age-specific Malaysian mortality rates and life expectancy at birth by gender and ethnic group, including Malay, Chinese, and Indian. Two mortality forecasting models are adopted: the original Lee-Carter model and its recent modified version, the product-ratio coherent model. While the first forecasts the mortality rates of each subpopulation independently, the latter accounts for the relationships between subpopulations. Both models are evaluated by out-of-sample forecast errors: mean absolute percentage errors (MAPE) for mortality rates and mean forecast errors (MFE) for life expectancy at birth. The best model is then used for long-term forecasts up to 2030, the year when Malaysia is expected to become an aged nation. The results suggest that, in terms of overall accuracy, the product-ratio model performs better than the original Lee-Carter model, and that including the lower-mortality group (Chinese) in the subpopulation model improves the forecasts for the higher-mortality groups (Malay and Indian).
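The Lee-Carter structure underlying both models is log m(x,t) = a_x + b_x·k_t, with a_x the average log mortality at age x and k_t a period index. The sketch below computes the usual first-step estimates (a_x as time-averages and k_t as the column sums of the centred log rates, under the constraint Σ b_x = 1) on an invented toy mortality surface; real fits use SVD on the full Malaysian data:

```python
import math

# Toy mortality surface, rates[x][t]: ages as rows, years as columns.
# All numbers are fabricated for illustration.
rates = {
    "60-64": [0.012, 0.011, 0.010, 0.009],
    "65-69": [0.020, 0.018, 0.017, 0.015],
    "70-74": [0.034, 0.031, 0.029, 0.026],
}
ages = list(rates)
T = len(next(iter(rates.values())))

log_m = {x: [math.log(r) for r in rates[x]] for x in ages}
# a_x: average log rate over time for each age
a = {x: sum(log_m[x]) / T for x in ages}
# k_t: sum over ages of the centred log rates (first-step estimate
# under the identifiability constraint sum_x b_x = 1)
k = [sum(log_m[x][t] - a[x] for x in ages) for t in range(T)]

print([round(v, 3) for v in k])  # declining k_t reflects improving mortality
```

A coherent product-ratio fit would apply this same decomposition to the geometric mean of the subpopulations' rates and to each subpopulation's ratio to that mean, which is what keeps the groups' forecasts from diverging.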

Keywords: coherent forecasts, life expectancy at births, Lee-Carter model, product-ratio model, mortality rates

Procedia PDF Downloads 219
17495 Efficient Sampling of Probabilistic Program for Biological Systems

Authors: Keerthi S. Shetty, Annappa Basava

Abstract:

In recent years, modelling biological systems represented by biochemical reactions has become increasingly important in systems biology. Such systems are highly stochastic in nature, and probabilistic models are often used to describe them. One of the main challenges in systems biology is to combine absolute experimental data into a probabilistic model. This challenge arises because (1) some molecules may be present in relatively small quantities, (2) there is switching between individual elements in the system, and (3) the process is inherently stochastic at the level at which observations are made. In this paper, we describe a novel idea for combining absolute experimental data into a probabilistic model using the tool R2. Through a case study of the transcription process in prokaryotes, we explain how biological systems can be written as probabilistic programs that combine experimental data into the model. The resulting model is then analysed in terms of intrinsic noise and exact sampling of the switching times between individual elements in the system. We mainly concentrate on inferring the number of genes in ON and OFF states from experimental data.
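The "exact sampling of switching times" mentioned above can be illustrated with the simplest such system, a two-state (telegraph) gene that switches between ON and OFF with exponentially distributed dwell times. The rates below are assumed for illustration, not fitted values from the paper:

```python
import random

# Two-state (telegraph) gene model with exact exponential dwell times.
random.seed(42)
k_on, k_off = 0.5, 1.0     # switching rates per unit time (assumed)

def on_fraction(t_end):
    t, state, on_time = 0.0, "OFF", 0.0
    while t < t_end:
        rate = k_on if state == "OFF" else k_off
        dwell = random.expovariate(rate)     # exact exponential dwell time
        dwell = min(dwell, t_end - t)        # clip the final interval
        if state == "ON":
            on_time += dwell
        t += dwell
        state = "ON" if state == "OFF" else "OFF"
    return on_time / t_end

frac = on_fraction(100_000.0)
print(round(frac, 3))  # long-run ON fraction approaches k_on / (k_on + k_off)
```

Inference in a probabilistic program inverts this direction: given observed ON/OFF occupancy data, it infers the switching rates (or the number of genes in each state) that make the simulated trajectories consistent with the measurements.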

Keywords: systems biology, probabilistic model, inference, biology, model

Procedia PDF Downloads 349
17494 Machine Learning Model Applied for SCM Processes to Efficiently Determine Its Impacts on the Environment

Authors: Elena Puica

Abstract:

This paper investigates the impact of Supply Chain Management (SCM) on the environment by applying a machine learning model, while pointing out the efficiency of the technology used. The machine learning model was used to derive the efficiency and optimization of the technology used in SCM and the environmental impact of SCM processes. The model applied is a predictive classification model, trained first to determine which stage of the SCM has the most outputs and second to demonstrate the efficiency of using advanced technology in SCM instead of resorting to traditional SCM. The outputs are the emissions released into the environment, the consumption at different steps of the life cycle, the resulting pollutants and wastes, and all releases to air, land, and water. This manuscript presents an innovative approach to applying advanced technology in SCM and simultaneously studies the efficiency of the technology and SCM's impact on the environment. Identifying the conceptual relationships between SCM practices and their environmental impact is a new contribution to the research, and applying technology in this way is a step forward for recent studies of SCM and its effects on the environment.

Keywords: machine-learning model in SCM, SCM processes, SCM and the environmental impact, technology in SCM

Procedia PDF Downloads 116
17493 The Effect of Action Potential Duration and Conduction Velocity on Cardiac Pumping Efficacy: Simulation Study

Authors: Ana Rahma Yuniarti, Ki Moo Lim

Abstract:

A slowed myocardial conduction velocity (CV) and a shortened action potential duration (APD) are associated with an increased risk of re-entrant excitation, predisposing to cardiac arrhythmia, because both CV reduction and APD shortening shorten the excitation wavelength. In this study, we quantitatively investigated the cardiac mechanical responses under various CV and APD values using a multi-scale computational model of the heart. The model consists of an electrical model coupled with a mechanical contraction model and a lumped model of the circulatory system. The electrical model comprises 149,344 nodes and 183,993 elements on a tetrahedral mesh, whereas the mechanical model comprises 356 nodes and 172 elements on a hexahedral mesh with a Hermite basis. We performed the electrical simulation under two scenarios: (1) varying CV with constant APD, and (2) varying APD with constant CV, and compared the electrical and mechanical responses in both. Our simulations showed that faster CV and longer APD produced the largest resultant wavelength and the best cardiac pumping efficacy, increasing cardiac output while consuming less energy, because the longer wavelength and faster conduction generated a more synchronous contraction of the whole ventricle.
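The wavelength argument can be made concrete with its definition, wavelength = CV × APD: halving either factor halves the wavelength. The values below are generic ventricular figures chosen for illustration, not the parameters used in the simulation study:

```python
# Excitation wavelength = conduction velocity * action potential duration.
# Values are generic illustrative figures, not the study's parameters.
cases = {
    "baseline":      {"cv_mm_ms": 0.6, "apd_ms": 300},
    "slowed CV":     {"cv_mm_ms": 0.3, "apd_ms": 300},
    "shortened APD": {"cv_mm_ms": 0.6, "apd_ms": 150},
}
for name, c in cases.items():
    wavelength_mm = c["cv_mm_ms"] * c["apd_ms"]
    print(f"{name}: {wavelength_mm:.0f} mm")
```

A shorter wavelength lets a re-entrant wavefront fit inside a smaller tissue circuit, which is why both slowed conduction and APD shortening raise arrhythmia risk.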

Keywords: conduction velocity, action potential duration, mechanical contraction model, circulatory model

Procedia PDF Downloads 204
17492 Application of Computational Fluid Dynamics (CFD) Analysis for Surge Inception and Propagation for Low Head Hydropower Projects

Authors: M. Mohsin Munir, Taimoor Ahmad, Javed Munir, Usman Rashid

Abstract:

Determining the maximum elevation of a flowing fluid after a sudden rejection of load in a hydropower facility is of great interest to hydraulic engineers in ensuring the safety of hydraulic structures. Several mathematical models employ one-dimensional modeling to determine surge, but none of them perfectly simulates real-time circumstances. This paper investigates surge inception and propagation for a low-head hydropower project using Computational Fluid Dynamics (CFD) analysis in the FLOW-3D software package, which models the surge by employing the Reynolds-Averaged Navier-Stokes equations (RANSE). The CFD model is designed for a case study of the Taunsa hydropower project in Pakistan. Various scenarios were run through the model, taking the upstream boundary conditions into account, and the prototype results were compared with physical model testing for the same scenarios. The numerical model results cohere closely with the physical model testing and offer insight into phenomena that are not apparent in the physical model; the approach can be adopted for similar future low-head projects, limiting the delays and costs incurred in physical model testing.

Keywords: surge, FLOW-3D, numerical model, Taunsa, RANSE

Procedia PDF Downloads 361
17491 Gene Expression Meta-Analysis of Potential Shared and Unique Pathways Between Autoimmune Diseases Under anti-TNFα Therapy

Authors: Charalabos Antonatos, Mariza Panoutsopoulou, Georgios K. Georgakilas, Evangelos Evangelou, Yiannis Vasilopoulos

Abstract:

The extended tissue damage and severe clinical outcomes of autoimmune diseases, together with their high annual costs to the overall health care system, highlight the need for efficient therapy. Increasing knowledge of the pathophysiology of specific chronic inflammatory diseases, namely psoriasis (PsO), the inflammatory bowel diseases (IBD) comprising Crohn's disease (CD) and ulcerative colitis (UC), and rheumatoid arthritis (RA), has provided insights into the underlying mechanisms that maintain the inflammation, such as Tumor Necrosis Factor alpha (TNF-α); hence, anti-TNFα biological agents pose an ideal therapeutic approach. Despite the efficacy of anti-TNFα agents, several clinical trials have shown that 20-40% of patients do not respond to treatment. High-throughput technologies are now recruited to elucidate the complex interactions in multifactorial phenotypes, the most ubiquitous being transcriptome quantification analyses. In this context, a random-effects meta-analysis of available gene expression cDNA microarray datasets was performed between responders and non-responders to anti-TNFα therapy in patients with IBD, PsO, and RA. Publicly available datasets were systematically searched from inception to 10 November 2020 and selected for further analysis if they assessed the response to anti-TNFα therapy with clinical score indexes from inflamed biopsies. Specifically, 4 IBD (79 responders/72 non-responders), 3 PsO (40 responders/11 non-responders), and 2 RA (16 responders/6 non-responders) datasets were selected. After separate pre-processing of each dataset, 4 meta-analyses were conducted: three disease-specific, and a single combined meta-analysis on the disease-specific results. The MetaVolcano R package (v.1.8.0) was utilized for a random-effects meta-analysis through the Restricted Maximum Likelihood (REML) method.
The top 1% of the most consistently perturbed genes in the included datasets was highlighted through the TopConfects approach while maintaining a 5% False Discovery Rate (FDR). Genes were considered Differentially Expressed Genes (DEGs) if P ≤ 0.05, |log2(FC)| ≥ log2(1.25), and perturbed in at least 75% of the included datasets. Over-representation analysis was performed using Gene Ontology and Reactome pathways for both up- and down-regulated genes in all 4 meta-analyses. Protein-protein interaction networks were also incorporated in the subsequent analyses with STRING v11.5 and Cytoscape v3.9. The disease-specific meta-analyses detected multiple distinct pro-inflammatory and immune-related down-regulated genes for each disease, such as NFKBIA, IL36, and IRAK1, respectively. Pathway analyses revealed unique and shared pathways between the diseases, such as neutrophil degranulation and signaling by interleukins. The combined meta-analysis unveiled 436 DEGs, 86 of which were up-regulated and 350 down-regulated, confirming the aforementioned shared pathways and genes and uncovering genes that participate in anti-inflammatory pathways, namely IL-10 signaling. The identification of key biological pathways and regulatory elements is imperative for the accurate prediction of a patient's response to biological drugs. Meta-analysis of such gene expression data can aid the challenging effort to unravel the complex interactions implicated in the response to anti-TNFα therapy in patients with PsO, IBD, and RA, as well as distinguish gene clusters and pathways that are altered across this heterogeneous phenotype.
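The three DEG criteria stated in the abstract (P ≤ 0.05, |log2(FC)| ≥ log2(1.25), perturbed in at least 75% of datasets) translate directly into a filter. The gene records below are fabricated for illustration; only the NFKBIA gene name comes from the abstract:

```python
import math

# Fabricated per-gene meta-analysis summaries (illustrative only).
genes = [
    {"gene": "NFKBIA", "p": 0.004, "log2fc": -0.80, "n_perturbed": 8, "n_datasets": 9},
    {"gene": "GENE_A", "p": 0.030, "log2fc":  0.20, "n_perturbed": 9, "n_datasets": 9},  # FC too small
    {"gene": "GENE_B", "p": 0.200, "log2fc": -1.10, "n_perturbed": 9, "n_datasets": 9},  # not significant
    {"gene": "GENE_C", "p": 0.010, "log2fc":  0.55, "n_perturbed": 5, "n_datasets": 9},  # too inconsistent
]

FC_CUT = math.log2(1.25)  # the abstract's fold-change threshold

def is_deg(g):
    return (g["p"] <= 0.05
            and abs(g["log2fc"]) >= FC_CUT
            and g["n_perturbed"] / g["n_datasets"] >= 0.75)

degs = [g["gene"] for g in genes if is_deg(g)]
print(degs)  # only the first record passes all three criteria
```

Requiring consistency across at least 75% of the datasets is what distinguishes this meta-analytic filter from a per-study cutoff: a gene significant in a single cohort but absent elsewhere does not survive.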

Keywords: anti-TNFα, autoimmune, meta-analysis, microarrays

Procedia PDF Downloads 181
17490 Multiphase Flow Model for 3D Numerical Model Using ANSYS for Flow over Stepped Cascade with End Sill

Authors: Dheyaa Wajid Abbood, Hanan Hussien Abood

Abstract:

The stepped cascade has been utilized as a hydraulic structure for years and has proven to be the least costly aeration system for replenishing dissolved oxygen. Numerical modeling of a stepped cascade with an end sill is complicated and challenging because of the high roughness and the velocity recirculation regions. The volume of fluid (VOF) multiphase flow model is used, and the realizable k-ε model is chosen to simulate turbulence. The computational results are compared with lab-scale stepped cascade data from a model constructed in the hydraulic laboratory of Al-Mustansiriya University, Iraq. The stepped cascade was 0.23 m wide and consisted of 3 steps, each 0.2 m high and 0.6 m long, with a variable end sill, and the discharge was varied from 1 to 4 l/s. ANSYS was employed to simulate the experiments, and this study shows that it predicts results almost identical to the experimental findings in some regions of the structure.

Keywords: stepped cascade weir, aeration, multiphase flow model, ansys

Procedia PDF Downloads 336
17489 Developing an Integrated Seismic Risk Model for Existing Buildings in Northern Algeria

Authors: R. Monteiro, A. Abarca

Abstract:

Large-scale seismic risk assessment has become increasingly popular for evaluating the physical vulnerability of a region to seismic events by putting together hazard, exposure, and vulnerability components. This study, developed within the scope of the EU-funded project ITERATE (Improved Tools for Disaster Risk Mitigation in Algeria), explains the steps and expected results of developing an integrated seismic risk model to assess the vulnerability of residential buildings in Northern Algeria. The model foresees an updated seismic hazard model as well as ad-hoc exposure and physical vulnerability models for local residential buildings. The first results of this endeavor, namely the hazard model and a specific taxonomy to be used for the exposure and fragility components, are presented, using the province of Blida, Algeria, as a starting point. Specific remarks and conclusions regarding the characteristics of the Northern Algerian building stock are then made based on these results.

Keywords: Northern Algeria, risk, seismic hazard, vulnerability

Procedia PDF Downloads 201
17488 Modelling of Atomic Force Microscopic Nano Robot's Friction Force on Rough Surfaces

Authors: M. Kharazmi, M. Zakeri, M. Packirisamy, J. Faraji

Abstract:

Micro/nanorobotics, or the manipulation of nanoparticles by Atomic Force Microscopy (AFM), is one of the most important approaches to controlling the movement of atoms, particles, and micro/nanometric components and assembling them into micro/nanometer-scale tools. Accurate modelling of manipulation requires identifying the forces and mechanics at the nanoscale, which differ from those of the macro world; owing to the importance of adhesion forces and surface interaction at the nanoscale, several friction models have been presented. In this research, the friction and normal forces applied to the AFM are obtained using the dynamic bending-torsion model of the AFM, based on the Hurtado-Kim friction model (HK), the Johnson-Kendall-Roberts contact model (JKR), and the Greenwood-Williamson roughness model (GW). Finally, the effect of the standard deviation of asperity heights on the normal load, friction force, and friction coefficient is studied.
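The Greenwood-Williamson idea named above, Gaussian asperity heights with a Hertzian load carried by each contacting asperity, can be sketched with a small Monte-Carlo sum. Every parameter value below is illustrative, chosen only to show how the normal load trends with the standard deviation of asperity heights (the quantity the abstract varies):

```python
import random, math

# Monte-Carlo sketch of the Greenwood-Williamson (GW) roughness model:
# Gaussian asperity heights, Hertzian load per asperity in contact.
random.seed(7)
E_star, R = 100e9, 1e-6    # effective modulus (Pa), asperity tip radius (m), assumed
d = 20e-9                  # separation of the flat above the mean height (m)
N = 20_000                 # number of asperities sampled

def gw_load(sigma):
    load = 0.0
    for _ in range(N):
        z = random.gauss(0.0, sigma)
        if z > d:          # only asperities taller than the gap make contact
            # Hertz: F = (4/3) * E* * sqrt(R) * delta^(3/2)
            load += (4.0 / 3.0) * E_star * math.sqrt(R) * (z - d) ** 1.5
    return load

loads = [gw_load(s) for s in (10e-9, 20e-9, 40e-9)]
print(["%.2e" % f for f in loads])  # larger height spread -> larger normal load here
```

At fixed separation, a larger standard deviation puts more (and taller) asperities into contact, so the summed Hertzian load grows; a full GW treatment integrates this over the height distribution analytically rather than sampling.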

Keywords: atomic force microscopy, contact model, friction coefficient, Greenwood-Williamson model

Procedia PDF Downloads 199
17487 Superiority of High Frequency Based Volatility Models: Empirical Evidence from an Emerging Market

Authors: Sibel Celik, Hüseyin Ergin

Abstract:

The paper aims to find the best volatility forecasting model for stock markets in Turkey. For this purpose, we compare the performance of different volatility models, both the traditional GARCH model and high frequency based volatility models, and conclude that in both the pre-crisis and crisis periods, high frequency based volatility models perform better than the traditional GARCH model. The findings of the paper are important for policy makers, financial institutions and investors.
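For context, the core of such a comparison can be sketched in a few lines: daily realized variance is the sum of squared intraday returns, while GARCH(1,1) filters a latent variance from daily returns. The data and parameter values below are simulated and illustrative, not the paper's Turkish market data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 5-minute log-returns: 250 trading days x 78 intraday intervals
intraday = rng.normal(0.0, 0.001, size=(250, 78))

# High-frequency measure: daily realized variance = sum of squared
# intraday returns
rv = (intraday ** 2).sum(axis=1)

# Traditional measure: GARCH(1,1) variance filter on daily returns
# (omega, alpha, beta are illustrative, not estimated)
daily = intraday.sum(axis=1)
omega, alpha, beta = 1e-6, 0.05, 0.90
h = np.empty_like(rv)
h[0] = daily.var()
for t in range(1, len(daily)):
    h[t] = omega + alpha * daily[t - 1] ** 2 + beta * h[t - 1]

print(rv.mean(), h.mean())
```

A forecast comparison of the kind the paper reports would then score each series against a volatility proxy with a loss function such as RMSE.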

Keywords: volatility, GARCH model, realized volatility, high frequency data

Procedia PDF Downloads 486
17486 Application of the Tripartite Model to the Link between Non-Suicidal Self-Injury and Suicidal Risk

Authors: Ashley Wei-Ting Wang, Wen-Yau Hsu

Abstract:

Objectives: The current study applies and expands the Tripartite Model to elaborate the link between non-suicidal self-injury (NSSI) and suicidal behavior. We propose a structural model of NSSI and suicidal risk, in which negative affect (NA) predicts both anxiety and depression, positive affect (PA) predicts depression only, anxiety is linked to NSSI, and depression is linked to suicidal risk. Method: Four hundred and eighty-seven undergraduates participated. Data were collected by administering self-report questionnaires. We performed hierarchical regression and structural equation modeling to test the proposed structural model. Results: The results largely support the proposed structural model, with one exception: anxiety was strongly associated with NSSI and, to a lesser extent, with suicidal risk. Conclusions: We conclude that the co-occurrence of NSSI and suicidal risk is due to NA and anxiety, and that suicidal risk can be differentiated by depression. Further theoretical and practical implications are discussed.

Keywords: non-suicidal self-injury, suicidal risk, anxiety, depression, the tripartite model, hierarchical relationship

Procedia PDF Downloads 470
17485 Magnetic Resonance Imaging for Assessment of the Quadriceps Tendon Cross-Sectional Area as an Adjunctive Diagnostic Parameter in Patients with Patellofemoral Pain Syndrome

Authors: Jae Ni Jang, SoYoon Park, Sukhee Park, Yumin Song, Jae Won Kim, Keum Nae Kang, Young Uk Kim

Abstract:

Objectives: Patellofemoral pain syndrome (PFPS) is a common clinical condition characterized by anterior knee pain. Here, we investigated the quadriceps tendon cross-sectional area (QTCSA) as a novel predictor for the diagnosis of PFPS. By examining the association between the QTCSA and PFPS, we aimed to provide a more valuable diagnostic parameter and a more unequivocal assessment of the diagnostic potential of PFPS by comparing the QTCSA with the quadriceps tendon thickness (QTT), a traditional measure of quadriceps tendon hypertrophy. Patients and Methods: This retrospective study included 30 patients with PFPS and 30 healthy participants who underwent knee magnetic resonance imaging. T1-weighted turbo spin echo transverse magnetic resonance images were obtained. The QTCSA was measured on axial-angled images by drawing outlines, and the QTT was measured at the most hypertrophied part of the quadriceps tendon. Results: The average QTT and QTCSA for patients with PFPS (6.33±0.80 mm and 155.77±36.60 mm², respectively) were significantly greater than those for healthy participants (5.77±0.36 mm and 111.90±24.10 mm², respectively; both P<0.001). We used receiver operating characteristic curves to confirm the sensitivities and specificities of both the QTT and QTCSA as predictors of PFPS. The optimal diagnostic cutoff value for QTT was 5.98 mm, with a sensitivity of 66.7%, a specificity of 70.0%, and an area under the curve of 0.75 (0.62–0.88). The optimal diagnostic cutoff value for QTCSA was 121.04 mm², with a sensitivity of 73.3%, a specificity of 70.0%, and an area under the curve of 0.83 (0.74–0.93). Conclusion: The QTCSA was found to be a more reliable diagnostic indicator for PFPS than the QTT.
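Optimal cutoffs of the kind reported above are commonly chosen by maximizing Youden's J statistic (sensitivity + specificity - 1) along the ROC curve. A minimal sketch, assuming an exhaustive search over the observed measurement values (the abstract does not state which criterion the authors used):

```python
import numpy as np

def youden_cutoff(values, labels):
    """Return (cutoff, sensitivity, specificity) maximizing Youden's J,
    treating label 1 as 'patient' and prediction as value >= cutoff."""
    values = np.asarray(values, float)
    labels = np.asarray(labels, int)
    best = (None, 0.0, 0.0, -1.0)
    for c in np.unique(values):
        pred = values >= c
        sens = (pred & (labels == 1)).sum() / (labels == 1).sum()
        spec = (~pred & (labels == 0)).sum() / (labels == 0).sum()
        j = sens + spec - 1.0
        if j > best[3]:
            best = (c, sens, spec, j)
    return best[:3]

# Toy, perfectly separable data: healthy (0) vs patients (1)
print(youden_cutoff([1, 2, 3, 10, 11, 12], [0, 0, 0, 1, 1, 1]))
```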

Keywords: patellofemoral pain syndrome, quadriceps muscle, hypertrophy, magnetic resonance imaging

Procedia PDF Downloads 50
17484 Influence of a High-Resolution Land Cover Classification on Air Quality Modelling

Authors: C. Silveira, A. Ascenso, J. Ferreira, A. I. Miranda, P. Tuccella, G. Curci

Abstract:

Poor air quality is one of the main environmental causes of premature deaths worldwide, mainly in cities, where the majority of the population lives. It is a consequence of successive land cover (LC) and use changes resulting from the intensification of human activities. Knowing these landscape modifications in a comprehensive spatiotemporal dimension is, therefore, essential for understanding variations in air pollutant concentrations. In this sense, air quality models are very useful to simulate the physical and chemical processes that govern the dispersion and reaction of chemical species in the atmosphere. However, modelling performance should always be evaluated, since the resolution of the input datasets largely dictates the reliability of the air quality outcomes. Among these data, an updated LC is an important parameter to be considered in atmospheric models, since it takes into account the Earth's surface changes due to natural and anthropic actions, and regulates the exchanges of fluxes (emissions, heat, moisture, etc.) between the soil and the air. This work aims to evaluate the performance of the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) when different LC classifications are used as input. The influence of two LC classifications was tested: i) the 24-class USGS (United States Geological Survey) LC database included by default in the model, and ii) the CLC (Corine Land Cover) and specific high-resolution LC data for Portugal, reclassified according to the new USGS nomenclature (33 classes). Two distinct WRF-Chem simulations were carried out to assess the influence of the LC on air quality over Europe and Portugal, as a case study, for the year 2015, using the nesting technique over three simulation domains (25 km², 5 km² and 1 km² horizontal resolution).
Based on the 33-class LC approach, particular emphasis was placed on Portugal, given the detail and higher spatial resolution of the LC data (100 m x 100 m) compared with the CLC data (5000 m x 5000 m). As regards air quality, only the LC impacts on tropospheric ozone concentrations were evaluated, because ozone pollution episodes typically occur in Portugal, particularly during spring/summer, and there are few research works relating this pollutant to LC changes. The WRF-Chem results were validated by season and station typology using background measurements from the Portuguese air quality monitoring network. As expected, a better model performance was achieved at rural stations: moderate correlation (0.4–0.7), bias (10–21 µg.m⁻³) and RMSE (20–30 µg.m⁻³), where the higher average ozone concentrations were estimated. Comparing both simulations, small differences grounded on the Leaf Area Index and air temperature values were found, although the high-resolution LC approach shows a slight enhancement in the model evaluation. This highlights the role of the LC in the exchange of atmospheric fluxes and stresses the need to consider a high-resolution LC characterization combined with other detailed model inputs, such as the emission inventory, to improve air quality assessment.
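The validation statistics quoted (correlation, bias, RMSE) can be computed from paired modelled and observed series as follows; a generic sketch, not the project's evaluation code:

```python
import numpy as np

def validation_stats(modelled, observed):
    """Bias, RMSE and Pearson correlation between modelled and observed
    concentrations (e.g. hourly ozone in ug/m3 at one station)."""
    m = np.asarray(modelled, float)
    o = np.asarray(observed, float)
    bias = (m - o).mean()                      # mean model - observation
    rmse = np.sqrt(((m - o) ** 2).mean())      # root mean square error
    r = np.corrcoef(m, o)[0, 1]                # Pearson correlation
    return bias, rmse, r

# Toy check: model overestimates by a constant 5 ug/m3
print(validation_stats([6, 7, 8, 9], [1, 2, 3, 4]))
```

Grouping such statistics by season and station typology, as the study does, only requires computing them over the corresponding subsets of the paired series.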

Keywords: land use, spatial resolution, WRF-Chem, air quality assessment

Procedia PDF Downloads 158
17483 Valuation of Caps and Floors in a LIBOR Market Model with Markov Jump Risks

Authors: Shih-Kuei Lin

Abstract:

The characterization of the arbitrage-free dynamics of interest rates is developed in this study in the presence of Markov jump risks, when the term structure of interest rates is modeled through simple forward rates. We consider Markov jump risks by allowing randomness in jump sizes and independence between jump sizes and jump times. The Markov jump diffusion model is used to capture empirical phenomena and to accurately describe interest rate jump risks in a financial market. We derive the arbitrage-free model of simple forward rates under the spot measure. Moreover, analytical pricing formulas for a cap and a floor are derived under the forward measure when the jump size follows a lognormal distribution. In our empirical analysis, we find that the LIBOR market model with Markov jump risks better accounts for changes between different states and different rates.
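In the no-jump limit, a lognormal caplet price of the kind extended here reduces to Black's formula under the forward measure. A minimal sketch (F, K, sigma, T, delta are the simple forward rate, strike, volatility, fixing time and accrual fraction; the numbers in the example are illustrative):

```python
from math import log, sqrt
from statistics import NormalDist

def black_caplet(F, K, sigma, T, delta, discount, notional=1.0):
    """Black caplet price: payoff delta * max(F_T - K, 0) paid at
    T + delta, discounted with P(0, T + delta) = discount."""
    d1 = (log(F / K) + 0.5 * sigma ** 2 * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return notional * discount * delta * (F * N(d1) - K * N(d2))

# Deep in-the-money caplet: price approaches the discounted intrinsic value
print(black_caplet(F=0.05, K=0.01, sigma=0.2, T=1.0, delta=0.25,
                   discount=0.97))
```

A floorlet follows by put-call parity; the paper's jump extension adds a compensated jump term to the forward-rate dynamics before pricing.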

Keywords: arbitrage-free, cap and floor, Markov jump diffusion model, simple forward rate model, volatility smile, EM algorithm

Procedia PDF Downloads 421
17482 The Missing Link in Holistic Health Care: Value-Based Medicine in Entrustable Professional Activities for Doctor-Patient Relationship

Authors: Ling-Lang Huang

Abstract:

Background: Holistic health care should ideally cover the physical, mental, spiritual, and social aspects of a patient. With the very constrained time in the current clinical practice system, medical decisions often tip the balance in favor of evidence-based medicine (EBM) over the patient's personal values. Even in the era of competence-based medical education (CBME), when scrutinizing the items of entrustable professional activities (EPAs), we found that the EPAs for establishing the doctor-patient relationship remained incomplete or even missing. This phenomenon prompted us to raise this project advocating value-based medicine (VBM), which emphasizes the importance of the patient's values in medical decisions. A true and effective doctor-patient communication and relationship should be a well-balanced harmony of EBM and VBM. By building VBM into current EPAs, we can further promote genuine shared decision making (SDM) and fix the missing link in holistic health care. Methods: In this project, we are going to find out the EPA elements crucial for establishing an ideal doctor-patient relationship through three distinct pairs of doctor-patient relationships: patients with pulmonary arterial hypertension (relatively young but with grave disease), patients undergoing surgery (facing critical medical decisions), and patients with terminal diseases (facing forthcoming death). We will search for important EPA elements through the following steps: 1. Narrative approach: delineate patients' values in the three distinct groups. 2. Hermeneutics-based interview: semi-structured interviews will be conducted with both patients and physicians, followed by qualitative analysis of the collected information by compiling, disassembling, reassembling, interpreting, and concluding. 3. Preliminarily construct these VBM elements into EPAs for doctor-patient relationships in the three groups.
Expected Outcomes: The results of this project will give us invaluable information regarding the impact of patients' values on the final medical decision when facing different medical situations. The competence to blend and balance patients' values with evidence from the clinical sciences is the missing link in holistic health care and should be established in future EPAs to enable effective SDM.

Keywords: value-based medicine, shared decision making, entrustable professional activities, holistic health care

Procedia PDF Downloads 121
17481 An Adjusted Network Information Criterion for Model Selection in Statistical Neural Network Models

Authors: Christopher Godwin Udomboso, Angela Unna Chukwu, Isaac Kwame Dontwi

Abstract:

In selecting a statistical neural network model, the Network Information Criterion (NIC) has been observed to be sample biased, because it does not account for sample size. The selection of a model from a set of fitted candidate models requires objective, data-driven criteria. In this paper, we derived and investigated the Adjusted Network Information Criterion (ANIC), based on Kullback's symmetric divergence, which has been designed to be an asymptotically unbiased estimator of the expected Kullback-Leibler information of a fitted model. The analyses show that, in general, the ANIC improves model selection over a wider range of sample sizes than does the NIC.
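The NIC is an AIC-type criterion, and the sample-size adjustment described is analogous in spirit to the AICc correction. A schematic sketch only: the paper's ANIC is derived from Kullback's symmetric divergence, whereas the correction term below is the familiar AICc form, used here purely to illustrate how a small-sample penalty vanishes as n grows.

```python
def nic(log_likelihood, k):
    """AIC-style criterion: -2 log L + 2k, penalizing only the number
    of parameters k."""
    return -2.0 * log_likelihood + 2.0 * k

def adjusted_nic(log_likelihood, k, n):
    """Small-sample adjustment (AICc-style, illustrative): the extra
    penalty grows as n shrinks, so small samples favour smaller networks,
    and it vanishes as n -> infinity."""
    return nic(log_likelihood, k) + (2.0 * k * (k + 1)) / (n - k - 1)

# With n = 30 observations the adjusted criterion penalizes a 5-parameter
# network more heavily; with huge n the two criteria coincide.
print(nic(-100.0, 5), adjusted_nic(-100.0, 5, 30), adjusted_nic(-100.0, 5, 10**7))
```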

Keywords: statistical neural network, network information criterion, adjusted network, information criterion, transfer function

Procedia PDF Downloads 566
17480 Privacy Policy Prediction for Uploaded Image on Content Sharing Sites

Authors: Pallavi Mane, Nikita Mankar, Shraddha Mazire, Rasika Pashankar

Abstract:

Content sharing sites are very useful for sharing information and images. However, with the increasing use of content sharing sites, privacy and security concerns have also increased. There is a need to develop a tool for controlling user access to shared content. Therefore, we are developing an Adaptive Privacy Policy Prediction (A3P) system which helps users create privacy settings for their images. We propose a two-level framework which assigns the best available privacy policy to a user's images according to the user's available history on the site.
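A two-level policy predictor of this kind can be sketched as: first prefer the user's own upload history for the image's category, then fall back to a default when no history exists. The category labels, policy names and fallback below are all hypothetical; the actual A3P system also classifies image content and metadata.

```python
from collections import Counter

SITE_DEFAULT = "friends-only"  # hypothetical fallback policy

def predict_policy(history, category):
    """Two-level sketch: level 1 returns the user's most frequent past
    policy for this image category; level 2 falls back to a site-wide
    default policy when the user has no relevant history."""
    past = [policy for cat, policy in history if cat == category]
    if past:
        return Counter(past).most_common(1)[0][0]
    return SITE_DEFAULT

history = [("selfie", "private"), ("selfie", "private"),
           ("landscape", "public")]
print(predict_policy(history, "selfie"))   # history-based prediction
print(predict_policy(history, "pets"))     # no history -> default
```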

Keywords: online information services, prediction, security and protection, web based services

Procedia PDF Downloads 358
17479 Causal Modeling of the Glucose-Insulin System in Type-I Diabetic Patients

Authors: J. Fernandez, N. Aguilar, R. Fernandez de Canete, J. C. Ramos-Diaz

Abstract:

In this paper, a simulation model of the glucose-insulin system for a patient with Type 1 diabetes is developed using a causal modeling approach under system dynamics. The OpenModelica simulation environment has been employed to build the causal model, while the glucose-insulin model parameters were adjusted to fit recorded mean data from a diabetic patient database. Model results under different conditions of three-meal glucose and exogenous insulin ingestion patterns have been obtained. This simulation model can be useful for evaluating glucose-insulin performance in several circumstances, including insulin infusion algorithms in open loop and decision support systems in closed loop.
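As a flavour of what such a simulation computes, Bergman's classic "minimal model" of glucose-insulin kinetics can be integrated in a few lines with forward Euler. This is a generic sketch, not the paper's fitted OpenModelica model: the parameter values and the simple insulin decay term are illustrative.

```python
# Bergman minimal model:
#   dG/dt = -(p1 + X) * G + p1 * Gb      (plasma glucose)
#   dX/dt = -p2 * X + p3 * (I - Ib)      (remote insulin action)
p1, p2, p3 = 0.03, 0.02, 1e-5     # rate constants (1/min), illustrative
Gb, Ib = 90.0, 10.0               # basal glucose (mg/dL), insulin (uU/mL)
dt, steps = 0.1, 3000             # 0.1 min step, 300 min horizon

G, X, I = 250.0, 0.0, 60.0        # elevated post-meal initial state
for _ in range(steps):
    dG = -(p1 + X) * G + p1 * Gb
    dX = -p2 * X + p3 * (I - Ib)
    G, X = G + dt * dG, X + dt * dX
    I = Ib + (I - Ib) * (1 - 0.01 * dt)   # insulin decays toward basal

print(round(G, 1))  # final glucose after 300 min
```

A causal system-dynamics tool such as OpenModelica expresses the same stock-and-flow structure declaratively and handles the integration automatically.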

Keywords: causal modeling, diabetes, glucose-insulin system, OpenModelica software

Procedia PDF Downloads 330
17478 A Mathematical Optimization Model for Locating and Fortifying Capacitated Warehouses under Risk of Failure

Authors: Tareq Oshan

Abstract:

Facility location and size decisions are important to any company because they affect profitability and success. However, warehouses are exposed to various risks of failure that affect their activity. This paper presents a mixed-integer non-linear mathematical model that can be used to determine optimal warehouse locations and sizes, which warehouses to fortify, and which branches should be assigned to specific warehouses when there is a risk of warehouse failure. Every branch is assigned either to a fortified primary warehouse, or to a non-fortified primary warehouse together with a fortified backup warehouse. The standard method and an introduced method based on average probabilities were used to linearize this mathematical model. A Canadian case study was used to demonstrate the developed mathematical model, followed by some sensitivity analysis.
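A deliberately tiny brute-force sketch of the trade-off the model captures (fortification cost versus failure risk): the real formulation is a capacitated mixed-integer model with explicit backup assignments, whereas here failure risk is collapsed into a flat expected penalty and every number is invented.

```python
from itertools import combinations, product

# Toy instance: 2 candidate warehouses, 3 branches.
# cost[w][b] = cost of serving branch b from warehouse w.
cost = [[4, 6, 5], [7, 3, 4]]
fortify_cost = [5, 6]     # one-time cost of fortifying each warehouse
fail_penalty = 10         # expected extra cost of a non-fortified primary

def total_cost(fortified, assign):
    """Fortification cost plus assignment cost; branches served by a
    non-fortified primary incur the expected failure penalty."""
    c = sum(fortify_cost[w] for w in fortified)
    for b, w in enumerate(assign):
        c += cost[w][b]
        if w not in fortified:
            c += fail_penalty
    return c

# Enumerate every fortification subset and every branch assignment
best_cost = min(
    total_cost(f, a)
    for r in range(3)
    for f in combinations(range(2), r)
    for a in product(range(2), repeat=3)
)
print(best_cost)
```

Even this toy shows the structure of the decision: fortifying one warehouse and routing branches to it can beat both fortifying everything and fortifying nothing.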

Keywords: supply chain network design, fortified warehouse, mixed-integer mathematical model, warehouse failure risk

Procedia PDF Downloads 243
17477 A Comparative Study of Environment Risk Assessment Guidelines of Developing and Developed Countries Including Bangladesh

Authors: Syeda Fahria Hoque Mimmi, Aparna Islam

Abstract:

Genetically engineered (GE) plants are a need of the time, given the increased demand for food. A complete set of regulations needs to be followed from the development of a GE plant to its release into the environment. The whole regulation system is categorized into separate stages to maintain proper biosafety. Environmental risk assessment (ERA) is one of the crucial stages in this process. ERA identifies potential risks and their impacts through science-based evaluation conducted on a case-by-case basis. All countries that deal with GE plants follow specific guidelines to conduct a successful ERA. In this study, the ERA guidelines of four developing and four developed countries, including Bangladesh, were compared. The ERA guidelines of India, Canada, Australia, the European Union, Argentina, Brazil, and the US were considered as models for the comparison with Bangladesh. Initially, ten parameters were selected to compare the required data and information among all the guidelines. Surprisingly, a considerable amount of the data and information requirements (e.g., whether the intended modification/new traits of interest have been achieved, the growth habit of GE plants, the consequences of any potential gene flow from the cultivation of GE plants to sexually compatible plant species, potential adverse effects on human health, etc.) matched across all the countries. However, a few differences in data requirements (e.g., agronomic conventions of non-transformed plants, whether applicants clearly describe the experimental procedures followed, etc.) were also observed in the study. Moreover, it was found that only a few countries provide instructions on the quality of the data used for ERA. If these similarities are recognized in a more structured manner, the approval pathway for GE plants could be shared among countries.

Keywords: GE plants, ERA, harmonization, ERA guidelines, Information and data requirements

Procedia PDF Downloads 187
17476 Flow and Heat Transfer Analysis of Copper-Water Nanofluid with Temperature Dependent Viscosity past a Riga Plate

Authors: Fahad Abbasi

Abstract:

The flow of electrically conducting nanofluids is of pivotal importance in countless industrial and medical appliances. Fluctuations in the thermophysical properties of such fluids due to variations in temperature have not received due attention in the available literature. The present investigation aims to fill this void by analyzing the flow of a copper-water nanofluid with temperature dependent viscosity past a Riga plate. Strong wall suction and viscous dissipation have also been taken into account. Numerical solutions for the resulting nonlinear system have been obtained. Results are presented in graphical and tabular format in order to facilitate the physical analysis. Estimated expressions for the skin friction coefficient and the Nusselt number are obtained by performing linear regression on numerical data for the embedded parameters. Results indicate that the temperature dependent viscosity alters the velocity as well as the temperature of the nanofluid and is of considerable importance in processes where high accuracy is desired. The addition of copper nanoparticles makes the momentum boundary layer thinner, whereas the viscosity parameter does not affect the boundary layer thickness. Moreover, the regression expressions indicate that the rate of change in the effective skin friction coefficient and Nusselt number with respect to nanoparticle volume fraction is prominent compared with the rate of change with the variable viscosity parameter and the modified Hartmann number.

Keywords: heat transfer, peristaltic flows, radially varying magnetic field, curved channel

Procedia PDF Downloads 166
17475 A Study for the Effect of Fire Initiated Location on Evacuation Success Rate

Authors: Jin A Ryu, Hee Sun Kim

Abstract:

As the number of fire accidents gradually rises, many studies on evacuation have been reported. Previous studies have mostly focused on evaluating the safety of evacuation and the risk of fire in particular buildings. However, the effects of various parameters on evacuation have rarely been studied. Therefore, this paper aims at observing evacuation time under the effect of the fire initiation location. In this study, evacuation simulations are performed on a 5-floor building located in Seoul, South Korea using the commercial program Fire Dynamics Simulator with Evacuation (FDS+EVAC). Only the fourth and fifth floors are modeled, with the assumption that fire starts in a room located on the fourth floor. The parameter varied in the evacuation simulations is the location of fire initiation, in order to observe evacuation time and safety. Results show that the closer the fire initiation location is to the exit, the more time is taken to evacuate. The case with the fire initiation location nearest to the exit has the lowest ratio of successfully evacuated occupants to total occupants. In addition, for safety evaluation, the evacuation time calculated from the computer simulation model is compared with the tolerable evacuation time according to the code in Japan. As a result, all cases are completed within the tolerable evacuation time. This study allows predicting evacuation time under various fire conditions and can be used to evaluate evacuation appropriateness and the fire safety of buildings.

Keywords: fire simulation, evacuation simulation, temperature, evacuation safety

Procedia PDF Downloads 349