Search results for: threshold STIRPAT model
16699 Meeting the Energy Balancing Needs in a Fully Renewable European Energy System: A Stochastic Portfolio Framework
Authors: Iulia E. Falcan
Abstract:
The transition of the European power sector towards a clean, renewable energy (RE) system faces the challenge of meeting power demand in times of low wind speed and low solar radiation, at a reasonable cost. This is likely to be achieved through a combination of 1) energy storage technologies, 2) development of the cross-border power grid, 3) installed overcapacity of RE and 4) dispatchable power sources, such as biomass. This paper uses NASA-derived hourly data on weather patterns of sixteen European countries for the past twenty-five years, and load data from the European Network of Transmission System Operators-Electricity (ENTSO-E), to develop a stochastic optimization model. This model aims to understand the synergies between the four classes of technologies mentioned above and to determine the optimal configuration of the energy technologies portfolio. While this issue has been addressed before, it was done using deterministic models that extrapolated historic data on weather patterns and power demand, and that ignored the risk of an unbalanced grid, a risk stemming from both the supply and the demand side. This paper aims to explicitly account for the inherent uncertainty in the energy system transition. It articulates two levels of uncertainty: a) the inherent uncertainty in future weather patterns and b) the uncertainty of fully meeting power demand. The first level of uncertainty is addressed by developing probability distributions for future weather data, and thus for expected power output from RE technologies, rather than assuming known future power output. The latter level of uncertainty is operationalized by introducing a Conditional Value at Risk (CVaR) constraint in the portfolio optimization problem. By setting the risk threshold at different levels - 1%, 5% and 10% - important insights are revealed regarding the synergies of the different energy technologies, i.e., the circumstances under which they behave as either complements or substitutes to each other. The paper concludes that allowing for uncertainty in expected power output - rather than extrapolating historic data - paints a more realistic picture and reveals important departures from the results of deterministic models. In addition, explicitly acknowledging the risk of an unbalanced grid - and assigning it different thresholds - reveals non-linearity in the cost functions of different technology portfolio configurations. This finding has significant implications for the design of the European energy mix.
Keywords: cross-border grid extension, energy storage technologies, energy system transition, stochastic portfolio optimization
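As an editorial illustration of the CVaR machinery this abstract relies on, the following is a minimal sketch of a CVaR-constrained capacity portfolio, assuming the standard Rockafellar-Uryasev linearization; the scenario data, technology costs and the risk cap are invented placeholders, not the paper's inputs:

```python
# Sketch: minimize capacity cost subject to CVaR(unmet demand) <= cap,
# linearized per Rockafellar-Uryasev. All numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
S, n = 500, 4                          # weather scenarios, technology classes
R = rng.uniform(0.2, 1.0, (S, n))      # per-unit power output per scenario
d = rng.uniform(1.5, 2.5, S)           # demand per scenario
cost = np.array([1.0, 0.8, 1.2, 1.5])  # per-unit capacity cost
alpha, cvar_cap = 0.95, 0.05           # 5% risk threshold on unmet demand

# Decision vector x = [w (n capacities), t (VaR auxiliary), u (S excess losses)]
c = np.concatenate([cost, [0.0], np.zeros(S)])
# u_s >= d_s - R_s.w - t   ->   -R_s.w - t - u_s <= -d_s
A1 = np.hstack([-R, -np.ones((S, 1)), -np.eye(S)])
# CVaR = t + mean(u) / (1 - alpha) <= cvar_cap
A2 = np.concatenate([np.zeros(n), [1.0], np.full(S, 1.0 / ((1 - alpha) * S))])
A_ub = np.vstack([A1, A2])
b_ub = np.concatenate([-d, [cvar_cap]])
bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * S

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("optimal capacities:", res.x[:n].round(3))
```

The linearization turns the CVaR constraint into ordinary linear inequalities, which is why scenario counts in the hundreds remain tractable.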
Procedia PDF Downloads 170
16698 Moving beyond the Social Model of Disability by Engaging in Anti-Oppressive Social Work Practice
Authors: Irene Carter, Roy Hanes, Judy MacDonald
Abstract:
Considering that disability is universal and people with disabilities are part of all societies; that there is a connection between the disabled individual and the societal; and that it is society and social arrangements that disable people with impairments, contemporary disability discourse emphasizes the social model of disability to counter the medical and rehabilitative models of disability. However, the social model does not go far enough in addressing the issues of oppression and inclusion. The authors indicate that the social model does not specifically or adequately denote the oppression of persons with disabilities, which is a central component of progressive social work practice with people with disabilities. Nor does the social model of disability go far enough in deconstructing disability and offering social workers, as well as people with disabilities, a way of moving forward in terms of practice anchored in individual, familial and societal change. The social model of disability is therefore expanded by incorporating principles of anti-oppressive social work practice. Although the contextual analysis of the social model of disability is an important component, there remains a need for social workers to provide service to individuals and their families, which will be illustrated through anti-oppressive practice (AOP). By applying an anti-oppressive model of practice to the above definitions, the authors not only deconstruct disability paradigms but illustrate how AOP offers a framework for social workers to engage with people with disabilities at the individual, familial and community levels of practice, promoting an emancipatory focus in working with people with disabilities. An anti-social-oppression social work model of disability connects the day-to-day hardships of people with disabilities to the direct consequences of oppression in the form of ableism. AOP theory finds many of its basic concepts within social-oppression theory and the social model of disability. It is often the case that practitioners, including social workers and psychologists, define people with disabilities as having or being a problem, with the focus placed upon adjustment and coping. A case example will be used to illustrate how an AOP paradigm offers social work a more comprehensive and critical analysis and practice model for social work practice with and for people with disabilities than the traditional medical, rehabilitative and social model approaches.
Keywords: anti-oppressive practice, disability, people with disabilities, social model of disability
Procedia PDF Downloads 1083
16697 Evolving Software Assessment and Certification Models Using Ant Colony Optimization Algorithm
Authors: Saad M. Darwish
Abstract:
Recently, software quality issues have come to be seen as an important subject, as we see enormous growth in the number of agencies involved in the software industry. However, these agencies cannot guarantee the quality of their products, thus leaving users in uncertainty. Software certification is the extension of quality assurance, in the sense that quality needs to be measured prior to the certificate-granting process. This research contributes to solving the problem of software assessment by proposing a model for the assessment and certification of software products that uses a fuzzy inference engine to integrate both process-driven and application-driven quality assurance strategies. The key idea of the proposed model is to improve the compactness and the interpretability of the model's fuzzy rules by employing an ant colony optimization algorithm (ACO), which tries to find a good rule description by means of compound rules initially expressed as traditional single rules. The model has been tested on a case study, and the results have demonstrated the feasibility and practicability of the model in a real environment.
Keywords: software quality, quality assurance, software certification model, software assessment
Procedia PDF Downloads 524
16696 Local Image Features Emerging from Brain Inspired Multi-Layer Neural Network
Authors: Hui Wei, Zheng Dong
Abstract:
Object recognition has long been a challenging task in computer vision. Yet the human brain, with the ability to rapidly and accurately recognize visual stimuli, manages this task effortlessly. In the past decades, advances in neuroscience have revealed some neural mechanisms underlying visual processing. In this paper, we present a novel model inspired by the visual pathway in primate brains. This multi-layer neural network model imitates the hierarchical convergent processing mechanism in the visual pathway. We show that local image features generated by this model exhibit robust discrimination and even better generalization ability compared with some existing image descriptors. We also demonstrate the application of this model in an object recognition task on image data sets. The results provide strong support for the potential of this model.
Keywords: biological model, feature extraction, multi-layer neural network, object recognition
Procedia PDF Downloads 542
16695 Modelling of Reactive Methodologies in Auto-Scaling Time-Sensitive Services With a MAPE-K Architecture
Authors: Óscar Muñoz Garrigós, José Manuel Bernabeu Aubán
Abstract:
Time-sensitive services are the base of the cloud services industry. Keeping service saturation low is essential for controlling response time. All auto-scalable services make use of reactive auto-scaling; however, few in-depth studies of reactive auto-scaling exist. This presentation shows a model for reactive auto-scaling methodologies with a MAPE-K architecture. Queuing theory can compute different properties of static services but lacks some parameters related to the transition between models. Our model uses queuing theory parameters to relate the transition between models. It associates the MAPE-K-related times, the sampling frequency, the cooldown period, the number of requests that an instance can handle per unit of time, the number of incoming requests at a time instant, and a function that describes the acceleration in the service's ability to handle more requests. This model is later used as a solution to horizontally auto-scale time-sensitive services composed of microservices, reevaluating the model's parameters periodically to allocate resources. The solution requires limiting the acceleration of the growth in the number of incoming requests to keep a constrained response time; business benefits determine such limits. The solution can add a dynamic number of instances and remains valid under different system sizes. The study includes performance recommendations to improve results according to the incoming load shape and business benefits. The proposed methodology is tested in a simulation. The simulator contains a load generator and a service composed of two microservices, where the frontend microservice depends on a backend microservice with a 1:1 request relation ratio. A common request takes 2.3 seconds to be computed by the service and is discarded if it takes more than 7 seconds. Both microservices contain a load balancer that assigns requests to the least loaded instance and preemptively discards requests if they cannot finish in time, to prevent resource saturation. When load decreases, instances with lower load are kept in a backlog where no more requests are assigned. If the load grows and an instance in the backlog is required, it returns to the running state, but if it finishes the computation of all its requests and is no longer required, it is permanently deallocated. A few load patterns are required to represent the worst-case scenario for reactive systems; the following scenarios test response times, resource consumption and business costs. The first scenario is a burst-load scenario: all methodologies will discard requests if the burst is rapid enough, so this scenario focuses on the number of discarded requests and the variance of the response time. The second scenario contains sudden load drops followed by bursts, to observe how the methodology behaves when releasing resources that are later required. The third scenario contains diverse growth accelerations in the number of incoming requests, to observe how approaches that add a different number of instances can handle the load at a lower business cost. The proposed methodology is compared against a multiple-threshold CPU methodology allocating/deallocating 10 or 20 instances, outperforming the competitor in all studied metrics.
Keywords: reactive auto-scaling, auto-scaling, microservices, cloud computing
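To make the MAPE-K loop concrete, here is a hedged sketch of a reactive scaler, assuming a toy arrival trace, a per-instance service rate `mu`, a cooldown period, and a crude acceleration term standing in for the paper's growth function (all invented placeholders):

```python
# Minimal MAPE-K sketch: Monitor the load, Analyze projected demand,
# Plan a scaling action under a cooldown, Execute it. Values are assumptions.
import math

mu = 10.0          # requests one instance handles per sampling interval
cooldown = 3       # intervals to wait between scaling actions
instances, last_action = 2, -cooldown
arrivals = [15, 18, 25, 40, 70, 90, 60, 30, 20, 15]  # monitored load trace

prev = arrivals[0]
for k, lam in enumerate(arrivals):           # Monitor
    accel = max(0.0, lam - prev)             # Analyze: growth in incoming load
    needed = math.ceil((lam + accel) / mu)   # provision for projected load
    if needed != instances and k - last_action >= cooldown:  # Plan
        instances, last_action = needed, k   # Execute
    print(f"t={k}: load={lam:3d} instances={instances}")
    prev = lam
```

A longer cooldown trades slower reaction to bursts for fewer oscillating scaling actions, which is the tension the three test scenarios above probe.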
Procedia PDF Downloads 93
16694 Simulation of Optimal Runoff Hydrograph Using Ensemble of Radar Rainfall and Blending of Runoffs Model
Authors: Myungjin Lee, Daegun Han, Jongsung Kim, Soojun Kim, Hung Soo Kim
Abstract:
Recently, localized heavy rainfall and typhoons have occurred frequently due to climate change, and the resulting damage is becoming bigger. Therefore, we may need more accurate prediction of rainfall and runoff. However, gauge rainfall has limited accuracy in space. Radar rainfall is better than gauge rainfall for explaining the spatial variability of rainfall, but it is mostly underestimated, with uncertainty involved. Therefore, an ensemble of radar rainfall was simulated using an error structure, together with gauge rainfall, to overcome this uncertainty. The simulated ensemble was used as the input data of the rainfall-runoff models for obtaining the ensemble of runoff hydrographs. Previous studies have discussed the accuracy of rainfall-runoff models: even if the same input data, such as rainfall, are used for runoff analysis in the same basin, the models can produce different results because of the uncertainty involved in the models. Therefore, we used two models, the SSARR model, which is a lumped model, and the Vflo model, which is a distributed model, and tried to simulate the optimum runoff considering the uncertainty of each rainfall-runoff model. The study basin is located in the Han river basin, and we obtained one integrated runoff hydrograph, an optimum runoff hydrograph, using blending methods such as Multi-Model Super Ensemble (MMSE), Simple Model Average (SMA) and Mean Square Error (MSE) weighting. From this study, we could confirm the accuracy of the rainfall and rainfall-runoff models using ensemble scenarios and various rainfall-runoff models, and this result can be used to study flood control measures under climate change. Acknowledgements: This work is supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (Grant 18AWMP-B083066-05).
Keywords: radar rainfall ensemble, rainfall-runoff models, blending method, optimum runoff hydrograph
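One simple reading of the MSE-based blending step is an inverse-MSE weighted average of the two model hydrographs; the sketch below uses synthetic series and is an assumption about the weighting, not the paper's exact MMSE scheme:

```python
# Blend two model hydrographs into one integrated hydrograph with
# inverse-MSE weights. The observed/simulated series are synthetic assumptions.
import numpy as np

obs   = np.array([5., 12., 30., 55., 42., 25., 14., 8.])   # observed runoff
ssarr = np.array([6., 14., 27., 50., 45., 27., 15., 9.])   # lumped model output
vflo  = np.array([4., 11., 33., 58., 40., 23., 13., 7.])   # distributed model output

def mse(sim):
    return np.mean((sim - obs) ** 2)

w = np.array([1 / mse(ssarr), 1 / mse(vflo)])
w /= w.sum()                       # weights favour the lower-MSE model
blended = w[0] * ssarr + w[1] * vflo
print("weights:", w.round(3), "blended MSE:", mse(blended).round(3))
```

The SMA variant mentioned above is the special case of equal weights.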
Procedia PDF Downloads 280
16693 Biological Soil Crust Effects on Dust Control Around the Urmia Lake
Authors: Abbas Ahmadi, Nasser Aliasgharzad, Ali Asghar Jafarzadeh
Abstract:
Nowadays, the drying of Urmia Lake, one of the largest saline lakes in the world, and the emergence of its saline bed from the water have created the risk of salty dust storms, which threaten the health of the human society as well as the plant and animal communities living in the region. Biological soil crusts (BSCs), as dust stabilizers, have attracted the attention of soil conservation experts in recent years. Although the water supplied by the impenetrable lake bed and the endorheic basin can be an advantage for creating BSCs, the extraordinary salinity of the lake bed is a factor preventing their establishment in the region. Therefore, the present research work was carried out to investigate the effects of inoculating cyanobacteria, algae and their combination to create BSCs for dust control. In this study, an alga attributed to Chlamydomonas sp. and a cyanobacterium attributed to Anabaena sp., isolated from the soils of the Urmia Lake margin, were used to create BSCs in four soil samples collected from 0-10 cm of the current margin (A), the previous bed (B), lands affected by the lake (C) and the Quomtappe sand dune (D). The main characteristics of the A, B and C soil samples are their high salinity (ECe of 108, 140 and 118 dS/m, respectively) and sodicity. The texture class of soil A was loamy sand, and the other two soils had clay textures. Soil D was non-saline but sodic, with a sandy texture class. This study was conducted separately on each soil in a completely randomized design under four inoculation treatments, non-inoculated (T0), algae (T1), cyanobacteria (T2) and an equal mixture of algae and cyanobacteria (T3), with three replications. In the experiment, the soil was placed into wind tunnel trays, and a suspension containing the microorganisms was mixed with the surface soil of the trays. During the experiment, water was sprayed onto the trays every morning and evening. After the incubation period (30 days), characteristics of the samples such as pH, EC, cold-water-extractable carbohydrate (CWEC), hot-water-extractable carbohydrate (HWEC), sulfuric-acid-extractable carbohydrate (SAEC), organic matter, crust thickness, penetration resistance, wind erosion threshold velocity and soil loss in the wind tunnel were measured, and the correlations between the measured characteristics were obtained through SPSS software. Analysis of variance and comparisons between treatment means were performed with MSTATC software. In this research, the chlorophyll a content was used as an indicator of the microorganism population in the samples. Based on the results obtained, the amount of chlorophyll a in the T2 treatment of soil A and in all treatments of soil D increased significantly in comparison to the control, and crust thickness increased in all treatments with microorganism inoculation, although the effect of the treatments was significant only in soils A and D. In soil A, inoculation of microorganisms increased the wind erosion threshold velocity by 46%, 34% and 55% in the T1, T2 and T3 treatments, respectively, in comparison to the control, and in soil D all treatments made the wind erosion threshold velocity twice that of the control. However, the reduction in soil loss in the wind tunnel experiments was significant only in the T2 and T3 treatments of these soils, and the T1 treatment had no effect in reducing soil loss. The correlation between chlorophyll a and salinity shows the important role of salinity in preventing microbial growth and the formation of BSCs in the studied samples.
In general, according to the results obtained, it can be concluded that salinity reduces the growth of microorganisms in the saline soils of the region, and in soils with fine textures the role of salinity in preventing microbial growth is clear. Also, using the mixture of algae and cyanobacteria caused their synergistic growth and, consequently, provided better protection of the soil against wind erosion.
Keywords: wind erosion, algae, cyanobacteria, carbohydrate
Procedia PDF Downloads 63
16692 Application Difference between Cox and Logistic Regression Models
Authors: Idrissa Kayijuka
Abstract:
The logistic regression and Cox regression (proportional hazards) models are currently employed in the analysis of prospective epidemiologic research into risk factors for chronic diseases, and a theoretical relationship between the two models has been studied. By definition, the Cox regression model, also called the Cox proportional hazards model, is a procedure used to model data on the time leading up to an event where censored cases exist. The logistic regression model, by contrast, is mostly applicable in cases where the independent variables consist of numerical as well as nominal values, while the resultant variable is binary (dichotomous). Arguments and findings of many researchers have focused on the overview of the Cox and logistic regression models and their different applications in different areas. In this work, the analysis is done on secondary data, sourced from an SPSS exercise data set on breast cancer with a sample size of 1121 women, where the main objective is to show the application difference between the Cox regression model and the logistic regression model based on factors that cause women to die of breast cancer. Some analysis, i.e., on lymph node status, was done manually, and SPSS software helped to analyze the rest of the data. This study found that there is an application difference between the Cox and logistic regression models: the Cox regression model is used to analyze data that also include follow-up time, whereas the logistic regression model analyzes data without follow-up time. Their measures of association also differ: the hazard ratio for the Cox model and the odds ratio for the logistic model. A similarity between the two models is that both are applicable to the prediction of the outcome of a categorical variable, i.e., a variable that can accommodate only a restricted number of categories. In conclusion, the Cox regression model differs from logistic regression by assessing a rate instead of a proportion. Both models are suitable methods for analyzing data in many kinds of research, but the Cox regression model is the more recommended.
Keywords: logistic regression model, Cox regression model, survival analysis, hazard ratio
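The application difference is easy to see side by side. The sketch below fits both models to the same synthetic breast-cancer-style data (the column names and distributions are assumptions), reading a hazard ratio from the Cox fit and an odds ratio from the logistic fit; it assumes the `lifelines` and `statsmodels` packages are installed:

```python
# Cox uses follow-up time and censoring; logistic ignores time entirely.
# Data below are simulated assumptions, not the SPSS breast cancer set.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500
nodes = rng.integers(0, 10, n)                   # lymph node status
time = rng.exponential(60 / (1 + 0.1 * nodes))   # months until death
event = (time < 60).astype(int)                  # death observed within study
df = pd.DataFrame({"nodes": nodes, "time": np.minimum(time, 60.0), "event": event})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.summary["exp(coef)"])                  # hazard ratio per extra node

logit = sm.Logit(df["event"], sm.add_constant(df[["nodes"]])).fit(disp=0)
print(np.exp(logit.params["nodes"]))             # odds ratio per extra node
```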
Procedia PDF Downloads 455
16691 Comparison of Wake Oscillator Models to Predict Vortex-Induced Vibration of Tall Chimneys
Authors: Saba Rahman, Arvind K. Jain, S. D. Bharti, T. K. Datta
Abstract:
The present study compares the semi-empirical wake-oscillator models that are used to predict vortex-induced vibration of structures. These models include those proposed by Facchinetti, by Farshidian and Dolatabadi, and by Skop and Griffin. They combine a wake oscillator model resembling the Van der Pol oscillator with a single-degree-of-freedom oscillation model. In order to use these models for estimating the top displacement of chimneys, only the first vibration mode of the chimney is considered; the modal equation of the chimney constitutes the single-degree-of-freedom (SDOF) model. The equations of the wake oscillator model and the SDOF model are solved simultaneously using an iterative procedure. The empirical parameters used in the wake-oscillator models are estimated using a newly developed approach, and the response is compared with experimental data, with which it appears comparable. For carrying out the iterative solution, the ODE solver of MATLAB is used. To carry out the comparative study, a tall concrete chimney of height 210 m has been chosen, with a base diameter of 28 m, a top diameter of 20 m, and a wall thickness of 0.3 m. The responses of the chimney are also determined using the linear model proposed by E. Simiu and the deterministic model given in the Eurocode. It is observed from the comparative study that the responses predicted by the Facchinetti model and the model proposed by Skop and Griffin are nearly the same, while the model proposed by Farshidian and Dolatabadi predicts a higher response. The linear model, which does not consider the aero-elastic phenomenon, gives a lower response than the non-linear models. Further, for large damping, the Eurocode prediction compares relatively well with those of the non-linear models.
Keywords: chimney, deterministic model, van der pol, vortex-induced vibration
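A hedged sketch of the coupled system these models share, a Van der Pol wake variable driving a first-mode SDOF equation, is shown below using SciPy's ODE solver in place of MATLAB's; all coefficients are illustrative assumptions, not the calibrated empirical parameters:

```python
# Facchinetti-type coupling (one plausible form): wake variable q excites the
# structural mode x, and the structural acceleration feeds back into the wake.
import numpy as np
from scipy.integrate import solve_ivp

zeta, ws = 0.005, 1.0        # structural damping ratio, natural frequency
wf = 1.0                     # vortex-shedding frequency (lock-in assumed)
eps, A, M = 0.3, 12.0, 0.05  # Van der Pol and coupling constants (assumed)

def rhs(t, y):
    x, xd, q, qd = y
    xdd = -2 * zeta * ws * xd - ws**2 * x + M * q            # SDOF forced by wake
    qdd = -eps * wf * (q**2 - 1) * qd - wf**2 * q + A * xdd  # Van der Pol wake
    return [xd, xdd, qd, qdd]

sol = solve_ivp(rhs, (0, 300), [0.0, 0.0, 2.0, 0.0], max_step=0.05)
print("steady-state tip amplitude ~", np.abs(sol.y[0][-2000:]).max())
```

The limit-cycle amplitude of `q` is what makes the structural response self-limited, the aero-elastic effect the linear models above omit.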
Procedia PDF Downloads 221
16690 On Differential Growth Equation to Stochastic Growth Model Using Hyperbolic Sine Function in Height/Diameter Modeling of Pines
Authors: S. O. Oyamakin, A. U. Chukwu
Abstract:
The Richards growth equation, a generalized logistic growth equation, was improved upon by introducing an allometric parameter using the hyperbolic sine function. The integral solution to this was called the hyperbolic Richards growth model, the solution having been transformed from a deterministic to a stochastic growth model. Its predictive ability was compared with that of the classical Richards growth model, an approach which mimics the natural variability of height/diameter increment with respect to age and therefore provides more realistic height/diameter predictions, using the coefficient of determination (R2), Mean Absolute Error (MAE) and Mean Square Error (MSE). The Kolmogorov-Smirnov test and the Shapiro-Wilk test were also used to test the behavior of the error term for possible violations. Under the two models studied, the mean function of top height/Dbh over age predicted the observed values of top height/Dbh more closely with the hyperbolic Richards nonlinear growth model than with the classical Richards growth model.
Keywords: height, Dbh, forest, Pinus caribaea, hyperbolic, Richards, stochastic
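To illustrate the comparison workflow (not the authors' exact functional form, which is an assumption here), the sketch below fits a classical Richards curve and a sinh-modified variant to synthetic height-age data and reports the same R2/MAE/MSE criteria:

```python
# Fit Richards vs. an assumed hyperbolic-sine-modified Richards curve by
# nonlinear least squares; the data and the sinh form are illustrative.
import numpy as np
from scipy.optimize import curve_fit

age = np.arange(1, 26, dtype=float)
height = 28 / (1 + 8 * np.exp(-0.25 * age)) \
    + np.random.default_rng(2).normal(0, 0.5, age.size)

def richards(t, A, b, k, m):
    return A * (1 + b * np.exp(-k * t)) ** (1 / (1 - m))

def hyp_richards(t, A, b, k, m, theta):   # assumed sinh-based allometry
    return A * (1 + b * np.exp(-k * np.sinh(theta * t))) ** (1 / (1 - m))

for name, f, p0 in [("Richards", richards, (30, 5, 0.3, 2)),
                    ("hyperbolic Richards", hyp_richards, (30, 5, 0.3, 2, 0.05))]:
    p, _ = curve_fit(f, age, height, p0=p0, maxfev=20000)
    e = height - f(age, *p)
    r2 = 1 - np.sum(e**2) / np.sum((height - height.mean())**2)
    print(name, "R2=%.3f MAE=%.3f MSE=%.3f" % (r2, np.abs(e).mean(), (e**2).mean()))
```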
Procedia PDF Downloads 480
16689 Development of a Predictive Model to Prevent Financial Crisis
Authors: Tengqin Han
Abstract:
Delinquency has been a crucial factor in economics throughout the years. Commonly seen in credit cards and mortgages, it played one of the crucial roles in causing the most recent financial crisis, in 2008. In each case, a delinquency is a sign that the borrower is unable to pay off the debt, and thus may cause a loss of property in the end. Individually, one case of delinquency seems unimportant compared to the entire credit system. In China, an emerging economic entity, national and economic strength have grown rapidly, and the gross domestic product (GDP) growth rate has remained as high as 8% over the past decades. However, potential risks exist behind the appearance of prosperity. Among these risks, the credit system is the most significant one. Due to the long terms and large balances of mortgages, it is critical to monitor the risk during the performance period. In this project, about 300,000 mortgage account records are analyzed in order to develop a predictive model for the probability of delinquency. Through univariate analysis, the data are cleaned up, and through bivariate analysis, the variables with strong predictive power are detected. The project is divided into two parts. In the first part, the 2005 analysis data are split into two parts: 60% for model development and 40% for in-time model validation. The KS for model development is 31, and the KS for in-time validation is 31, indicating that the model is stable. In addition, the model is further validated by out-of-time validation, which uses 40% of the 2006 data, with a KS of 33. This indicates that the model is still stable and robust. In the second part, the model is improved by the addition of macroeconomic indexes, including GDP, the consumer price index, the unemployment rate, the inflation rate, etc. The data from 2005 to 2010 are used for model development and validation. Compared with the base model (without macroeconomic variables), the KS increases from 41 to 44, indicating that the macroeconomic variables can be used to improve the separation power of the model and make the prediction more accurate.
Keywords: delinquency, mortgage, model development, model validation
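The KS statistic quoted above is the maximum gap between the cumulative score distributions of delinquent and non-delinquent accounts, scaled to 0-100. A minimal sketch, on synthetic scores that are purely assumptions:

```python
# KS separation: max distance between the score CDFs of the two groups.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
scores_good = rng.normal(620, 50, 4000)   # non-delinquent accounts (assumed)
scores_bad = rng.normal(585, 50, 600)     # delinquent accounts (assumed)

ks = ks_2samp(scores_good, scores_bad).statistic * 100
print("KS = %.0f" % ks)   # a KS in the low 30s indicates moderate separation
```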
Procedia PDF Downloads 228
16688 Proactive WPA/WPA2 Security Using DD-WRT Firmware
Authors: Mustafa Kamoona, Mohamed El-Sharkawy
Abstract:
Although the latest Wireless Local Area Network technology, the Wi-Fi 802.11i standard, addresses many of the security weaknesses of the antecedent Wired Equivalent Privacy (WEP) protocol, there are still scenarios where network security is vulnerable. The first security model that 802.11i offers is the Personal model, which is very cheap and simple to install and maintain, yet it uses a Pre-Shared Key (PSK) and thus has a low-to-medium security level. The second model that 802.11i provides is the Enterprise model, which is highly secure but much more expensive and difficult to install and maintain, and requires the installation and maintenance of an authentication server that handles the authentication and key management for the wireless network. A central issue with the Personal model is that the PSK needs to be shared with all the devices that are connected to the specific Wi-Fi network. This pre-shared key, unless changed regularly, can be cracked using offline dictionary attacks within a matter of hours. The key is burdensome to change manually in all the connected devices unless there is some kind of algorithm that coordinates this PSK update. The key idea of this paper is to propose a new algorithm that proactively and effectively coordinates the pre-shared key generation, management, and distribution in the cheap WPA/WPA2 Personal security model using only a DD-WRT router.
Keywords: Wi-Fi, WPS, TLS, DD-WRT
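One hedged reading of such a coordination algorithm is a time-slotted key derived from a shared secret, so every enrolled device can recompute the current PSK independently, with the router updated over SSH. The nvram variable name and service commands below are assumptions about a typical DD-WRT build (verify against your firmware), and `paramiko` is assumed for SSH:

```python
# Sketch only: derive a rotating PSK all enrolled devices can recompute,
# then push it to a DD-WRT router. Command/variable names are assumptions.
import hashlib
import hmac
import time
import paramiko

MASTER_SECRET = b"shared-enrollment-secret"   # distributed once, out of band

def current_psk(period_hours: int = 24) -> str:
    """Time-slotted PSK: same slot index -> same key on every device."""
    slot = int(time.time() // (period_hours * 3600))
    return hmac.new(MASTER_SECRET, str(slot).encode(), hashlib.sha256).hexdigest()[:16]

psk = current_psk()
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("192.168.1.1", username="root", password="admin")
for cmd in (f"nvram set wl0_wpa_psk={psk}", "nvram commit",
            "stopservice nas", "startservice nas"):
    ssh.exec_command(cmd)
ssh.close()
```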
Procedia PDF Downloads 233
16687 Forecasting Age-Specific Mortality Rates and Life Expectancy at Births for Malaysian Sub-Populations
Authors: Syazreen N. Shair, Saiful A. Ishak, Aida Y. Yusof, Azizah Murad
Abstract:
In this paper, we forecast age-specific Malaysian mortality rates and life expectancy at birth by gender and ethnic group, including Malay, Chinese and Indian. Two mortality forecasting models are adopted: the original Lee-Carter model and its recently modified version, the product-ratio coherent model. While the first forecasts the mortality rates for each sub-population independently, the latter accounts for the relationship between sub-populations. The evaluation of both models is performed using out-of-sample forecast errors: mean absolute percentage errors (MAPE) for mortality rates and mean forecast errors (MFE) for life expectancy at birth. The best model is then used to perform long-term forecasts up to the year 2030, the year when Malaysia is expected to become an aged nation. Results suggest that, in terms of overall accuracy, the product-ratio model performs better than the original Lee-Carter model. The association with the lower-mortality group (Chinese) in the sub-population model can improve the forecasts for the higher-mortality groups (Malay and Indian).
Keywords: coherent forecasts, life expectancy at births, Lee-Carter model, product-ratio model, mortality rates
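For reference, the independent Lee-Carter fit (log m_xt = a_x + b_x k_t) reduces to an SVD of the centered log-mortality matrix; the sketch below uses a synthetic matrix and omits the product-ratio coherent extension:

```python
# Lee-Carter via SVD: a_x is the age pattern, (b_x, k_t) the leading
# singular pair. The mortality surface below is a synthetic assumption.
import numpy as np

rng = np.random.default_rng(4)
ages, years = 20, 30
log_m = (np.linspace(-8, -2, ages)[:, None]                      # age effect
         + np.outer(np.linspace(1, 0.5, ages), np.linspace(1, -1, years))
         + rng.normal(0, 0.02, (ages, years)))                   # noise

a = log_m.mean(axis=1)                          # a_x: average log rate by age
U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
b = U[:, 0] / U[:, 0].sum()                     # b_x normalized to sum to 1
k = s[0] * Vt[0] * U[:, 0].sum()                # k_t: mortality index over time
print("fitted k_t trend:", np.polyfit(np.arange(years), k, 1)[0].round(3))
```

Forecasting then amounts to extrapolating k_t (typically as a random walk with drift) and reconstructing rates from a_x and b_x.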
Procedia PDF Downloads 219
16686 Efficient Sampling of Probabilistic Program for Biological Systems
Authors: Keerthi S. Shetty, Annappa Basava
Abstract:
In recent years, the modelling of biological systems represented by biochemical reactions has become increasingly important in Systems Biology. Biological systems represented by biochemical reactions are highly stochastic in nature, and a probabilistic model is often used to describe such systems. One of the main challenges in Systems Biology is to combine absolute experimental data into a probabilistic model. This challenge arises because (1) some molecules may be present in relatively small quantities, (2) there is switching between individual elements present in the system, and (3) the process is inherently stochastic on the level at which observations are made. In this paper, we describe a novel idea for combining absolute experimental data into a probabilistic model using the tool R2. Through a case study of the transcription process in prokaryotes, we explain how biological systems can be written as probabilistic programs to combine experimental data into the model. The model developed is then analysed in terms of intrinsic noise and exact sampling of switching times between individual elements in the system. We have mainly concentrated on inferring the number of genes in ON and OFF states from experimental data.
Keywords: systems biology, probabilistic model, inference, biology, model
Procedia PDF Downloads 349
16685 Machine Learning Model Applied for SCM Processes to Efficiently Determine Its Impacts on the Environment
Authors: Elena Puica
Abstract:
This paper aims to investigate the impact of Supply Chain Management (SCM) on the environment by applying a Machine Learning model, while pointing out the efficiency of the technology used. The Machine Learning model was used to derive the efficiency and optimization of the technology used in SCM and the environmental impact of SCM processes. The model applied is a predictive classification model and was trained, firstly, to determine which stage of the SCM has more outputs and, secondly, to demonstrate the efficiency of using advanced technology in SCM instead of resorting to traditional SCM. The outputs are the emissions generated in the environment, the consumption from different steps in the life cycle, the resulting pollutants/wastes emitted, and all the releases to air, land, and water. This manuscript presents an innovative approach to applying advanced technology in SCM and simultaneously studies the efficiency of the technology and SCM's impact on the environment. Identifying the conceptual relationships between SCM practices and their impact on the environment is a new contribution to the research. The authors take a step forward in advancing recent studies in SCM and its effects on the environment by applying technology.
Keywords: machine-learning model in SCM, SCM processes, SCM and the environmental impact, technology in SCM
Procedia PDF Downloads 116
16684 The Effect of Action Potential Duration and Conduction Velocity on Cardiac Pumping Efficacy: Simulation Study
Authors: Ana Rahma Yuniarti, Ki Moo Lim
Abstract:
Slowed myocardial conduction velocity (CV) and shortened action potential duration (APD) are associated with an increased risk of re-entrant excitation, predisposing to cardiac arrhythmia, because both CV reduction and APD shortening induce a shortening of the wavelength. In this study, we investigated quantitatively the cardiac mechanical responses under various CV and APD values using a multi-scale computational model of the heart. The model consisted of an electrical model coupled with a mechanical contraction model, together with a lumped model of the circulatory system. The electrical model consisted of 149,344 nodes and 183,993 elements of tetrahedral mesh, whereas the mechanical model consisted of 356 nodes and 172 elements of hexahedral mesh with a Hermite basis. We performed the electrical simulation under two scenarios: 1) varying the CV values with constant APD and 2) varying the APD values with constant CV. Then, we compared the electrical and mechanical responses for both scenarios. Our simulation showed that faster CV and longer APD induced the largest resultant wavelength and generated better cardiac pumping efficacy, increasing the cardiac output while consuming less energy. This is because the longer wave propagation and faster conduction generated a more synchronous contraction of the whole ventricle.
Keywords: conduction velocity, action potential duration, mechanical contraction model, circulatory model
Procedia PDF Downloads 204
16683 Application of Computational Flow Dynamics (CFD) Analysis for Surge Inception and Propagation for Low Head Hydropower Projects
Authors: M. Mohsin Munir, Taimoor Ahmad, Javed Munir, Usman Rashid
Abstract:
Determination of the maximum elevation of a flowing fluid due to sudden load rejection in a hydropower facility is of great interest to hydraulic engineers for ensuring the safety of the hydraulic structures. Several mathematical models exist that employ one-dimensional modeling for the determination of surge, but none of these perfectly simulates real-time circumstances. This paper envisages the investigation of surge inception and propagation for a low head hydropower project using Computational Fluid Dynamics (CFD) analysis with the FLOW-3D software package. The fluid dynamic model analyzes surge by employing the Reynolds-Averaged Navier-Stokes Equations (RANSE). The CFD model is designed for a case study at the Taunsa hydropower project in Pakistan. Various scenarios have been run through the model, keeping in view the upstream boundary conditions. The prototype results were then compared with the results of physical model testing for the same scenarios. The results of the numerical model showed quite accurate coherence with the physical model testing, offer insight into phenomena which are not apparent in the physical model, and shall be adopted in the future for similar low head projects, limiting the delays and costs incurred in physical model testing.
Keywords: surge, FLOW-3D, numerical model, Taunsa, RANSE
Procedia PDF Downloads 361
16682 Joint Modeling of Bottle Use, Daily Milk Intake from Bottles, and Daily Energy Intake in Toddlers
Authors: Yungtai Lo
Abstract:
The current study follows an educational intervention on bottle-weaning to simultaneously evaluate the effect of the intervention on reducing bottle use, daily milk intake from bottles, and daily energy intake in toddlers aged 11 to 13 months. A shared parameter model and a random effects model are used to jointly model bottle use, daily milk intake from bottles, and daily energy intake. We show in the two joint models that the bottle-weaning intervention promotes bottle-weaning, reduces daily milk intake from bottles in toddlers not yet off bottles, and reduces daily energy intake. We also show that the odds of drinking from a bottle were positively associated with the amount of milk intake from bottles, and that increased daily milk intake from bottles was associated with increased daily energy intake. The effect of bottle use on daily energy intake thus operates through its effect on increasing daily milk intake from bottles, which in turn increases daily energy intake.
Keywords: two-part model, semi-continuous variable, joint model, gamma regression, shared parameter model, random effects model
Procedia PDF Downloads 287
16681 A Numerical Model Simulation for an Updraft Gasifier Using High-Temperature Steam
Authors: T. M. Ismail, M. A. El-Salam
Abstract:
A mathematical model study was carried out to investigate the gasification of biomass fuels using high-temperature air and steam as the gasifying agent, with air temperatures of up to 1000°C. In this study, a 2D computational fluid dynamics model was developed to study the gasification process in an updraft gasifier, considering drying, pyrolysis, combustion, and gasification reactions. The gas and solid phases were resolved using an Euler-Euler multiphase approach, with exchange terms for momentum, mass, and energy. The standard k-ε turbulence model was used in the gas phase, and the particle phase was modeled using the kinetic theory of granular flow. The results show that the present model offers a promising approach, given its capability and sensitivity to the parameter effects that influence the gasification process.
Keywords: computational fluid dynamics, gasification, biomass fuel, fixed bed gasifier
Procedia PDF Downloads 406
16680 Multiphase Flow Model for 3D Numerical Model Using ANSYS for Flow over Stepped Cascade with End Sill
Authors: Dheyaa Wajid Abbood, Hanan Hussien Abood
Abstract:
The stepped cascade has been utilized as a hydraulic structure for years. It has proven to be the least costly aeration system for replenishing dissolved oxygen. Numerical modeling of a stepped cascade with an end sill is very complicated and challenging because of the high roughness and the velocity recirculation regions. The volume-of-fluid (VOF) multiphase flow model is used, and the realizable k-ε model is chosen to simulate turbulence. The computational results are compared with lab-scale stepped cascade data. The lab-scale model was constructed in the hydraulic laboratory of Al-Mustansiriya University, Iraq. The stepped cascade was 0.23 m wide and consisted of 3 steps, each 0.2 m high and 0.6 m long, with a variable end sill. The discharge was varied from 1 to 4 l/s. ANSYS has been employed to simulate the experimental data and their related results. This study shows that ANSYS is able to predict results almost the same as the experimental findings in some regions of the structure.
Keywords: stepped cascade weir, aeration, multiphase flow model, ansys
Procedia PDF Downloads 336
16679 Developing an Integrated Seismic Risk Model for Existing Buildings in Northern Algeria
Authors: R. Monteiro, A. Abarca
Abstract:
Large-scale seismic risk assessment has become increasingly popular for evaluating the physical vulnerability of a given region to seismic events, by putting together hazard, exposure and vulnerability components. This study, developed within the scope of the EU-funded project ITERATE (Improved Tools for Disaster Risk Mitigation in Algeria), explains the steps and expected results for the development of an integrated seismic risk model for assessing the vulnerability of residential buildings in Northern Algeria. For this purpose, the model foresees the consideration of an updated seismic hazard model, as well as ad-hoc exposure and physical vulnerability models for local residential buildings. The first results of this endeavor, namely the hazard model and a specific taxonomy to be used for the exposure and fragility components of the model, are presented, using as a starting point the province of Blida, in Algeria. Specific remarks and conclusions regarding the characteristics of the Northern Algerian building stock are then made based on these results.
Keywords: Northern Algeria, risk, seismic hazard, vulnerability
Procedia PDF Downloads 201
16678 Modelling of Atomic Force Microscopic Nano Robot's Friction Force on Rough Surfaces
Authors: M. Kharazmi, M. Zakeri, M. Packirisamy, J. Faraji
Abstract:
Micro/nanorobotics, or the manipulation of nanoparticles by Atomic Force Microscopy (AFM), is one of the most important solutions for controlling the movement of atoms, particles and micro/nanometric components and assembling them into micro/nanometer tools. Accurate modelling of manipulation requires the identification of forces and mechanical knowledge at the nanoscale, which differ from those of the macro world. Owing to the importance of adhesion forces and the interaction of surfaces at the nanoscale, several friction models have been presented. In this research, the friction and normal forces applied on the AFM are obtained using the dynamic bending-torsion model of the AFM, based on the Hurtado-Kim friction model (HK), the Johnson-Kendall-Roberts contact model (JKR) and the Greenwood-Williamson roughness model (GW). Finally, the effect of the standard deviation of asperity heights on the normal load, friction force and friction coefficient is studied.
Keywords: atomic force microscopy, contact model, friction coefficient, Greenwood-Williamson model
Procedia PDF Downloads 199
16677 Utilizing Artificial Intelligence to Predict Post Operative Atrial Fibrillation in Non-Cardiac Transplant
Authors: Alexander Heckman, Rohan Goswami, Zachi Attia, Paul Friedman, Peter Noseworthy, Demilade Adedinsewo, Pablo Moreno-Franco, Rickey Carter, Tathagat Narula
Abstract:
Background: Postoperative atrial fibrillation (POAF) is associated with adverse health consequences, higher costs, and longer hospital stays. Utilizing existing predictive models that rely on clinical variables and circulating biomarkers, multiple societies have published recommendations on the treatment and prevention of POAF. Although reasonably practical, there is room for improvement and automation to help individualize treatment strategies and reduce associated complications. Methods and Results: In this retrospective cohort study of solid organ transplant recipients, we evaluated the diagnostic utility of a previously developed AI-based ECG prediction for silent AF on the development of POAF within 30 days of transplant. A total of 2261 non-cardiac transplant patients without a preexisting diagnosis of AF were found to have a 5.8% (133/2261) incidence of POAF. While there were no apparent sex differences in POAF incidence (5.8% males vs. 6.0% females, p=0.80), there were differences by race and ethnicity (p<0.001 and 0.035, respectively). The incidence in white transplanted patients was 7.2% (117/1628), whereas the incidence in black patients was 1.4% (6/430). Lung transplant recipients had the highest incidence of postoperative AF (17.4%, 37/213), followed by liver (5.6%, 56/1002) and kidney (3.6%, 32/895) recipients. The AUROC in the sample was 0.62 (95% CI: 0.58-0.67). The relatively low discrimination may result from undiagnosed AF in the sample. In particular, 1,177 patients had at least one AI-ECG screen for AF pre-transplant above 0.10, a value slightly higher than the published threshold of 0.08. The incidence of POAF in the 1104 patients without an elevated prediction pre-transplant was lower (3.7% vs. 8.0%; p<0.001). While this supported the hypothesis that potentially undiagnosed AF may have contributed to the diagnosis of POAF, the utility of the existing AI-ECG screening algorithm remained modest. When the prediction for POAF was made using the first postoperative ECG in the sample without an elevated screen pre-transplant (n=1084, on account of n=20 missing postoperative ECGs), the AUROC was 0.66 (95% CI: 0.57-0.75). While this discrimination is relatively low, at a threshold of 0.08 the AI-ECG algorithm had a 98% (95% CI: 97-99%) negative predictive value at a sensitivity of 66% (95% CI: 49-80%). Conclusions: This study's principal finding is that the incidence of POAF is rare, and a considerable fraction of the POAF cases may be latent and undiagnosed. The high negative predictive value of AI-ECG screening suggests utility for prioritizing monitoring and evaluation of transplant patients with a positive AI-ECG screen. Further development and refinement of a post-transplant-specific algorithm may be warranted to further enhance the diagnostic yield of ECG-based screening.
Keywords: artificial intelligence, atrial fibrillation, cardiology, transplant, medicine, ECG, machine learning
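The threshold metrics reported above can be reproduced mechanically from predictions and labels; the sketch below uses synthetic values (the simulated incidence and score distribution are assumptions) and the published 0.08 cut-off:

```python
# AUROC, plus NPV and sensitivity at the 0.08 screening threshold,
# computed from synthetic labels/probabilities (assumptions).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
y = rng.binomial(1, 0.06, 2000)                       # POAF at ~6% incidence
p = np.clip(rng.beta(2, 30, 2000) + 0.05 * y, 0, 1)   # AI-ECG probabilities

print("AUROC:", round(roc_auc_score(y, p), 2))
pred = (p >= 0.08).astype(int)                        # published cut-off
tn = ((pred == 0) & (y == 0)).sum()
fn = ((pred == 0) & (y == 1)).sum()
tp = ((pred == 1) & (y == 1)).sum()
print("NPV:", round(tn / (tn + fn), 2), "sensitivity:", round(tp / (tp + fn), 2))
```

With a rare outcome, NPV is high almost by construction, which is why the negative screen is the clinically useful direction here.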
Procedia PDF Downloads 135
16676 Wind Wave Modeling Using MIKE 21 SW Spectral Model
Authors: Pouya Molana, Zeinab Alimohammadi
Abstract:
Determining wind wave characteristics is essential for implementing projects related to coastal and marine engineering, such as designing coastal and marine structures and estimating sediment transport rates and coastal erosion rates. In order to predict significant wave height (H_s), this study applies the third-generation spectral wave model MIKE 21 SW, along with the Coastal Engineering Manual (CEM) method. For SW model calibration and verification, two data sets of meteorology and wave spectroscopy are used. The model was exposed to time-varying wind forcing, and the results showed that the difference ratio mean, the standard deviation of the difference ratio and the correlation coefficient of the SW model for the H_s parameter are 1.102, 0.279 and 0.983, respectively, whereas the difference ratio mean, standard deviation and correlation coefficient of the CEM method for the same parameter are 0.869, 1.317 and 0.8359, respectively. Comparing these results reveals that the CEM method has larger errors than the MIKE 21 SW third-generation spectral wave model, and that a higher correlation coefficient does not necessarily mean higher accuracy.
Keywords: MIKE 21 SW, CEM method, significant wave height, difference ratio
Procedia PDF Downloads 402
16675 Audit Examining Maternity Assessment Suite Triage Compliance with Birmingham Symptom Specific Obstetric Triage System in a London Teaching Hospital
Authors: Sarah Atalla, Shubham Gupta, Kim Alipio, Tanya Maric
Abstract:
Background: Chelsea and Westminster Hospital has introduced the Birmingham Symptom-Specific Obstetric Triage System (BSOTS) for patients who present acutely to the Maternity Assessment Suite (MAS), to prioritise care by urgency. The primary objective was to evaluate whether BSOTS was used appropriately to assess patients (defined as a 90% threshold). The secondary objective was to assess whether patients were seen within their designated triage timeframe (defined as a 90% threshold). Methodology: MAS records were retrospectively reviewed for a randomly selected one-week period of data from 2020 (21/09/2020 - 27/09/2020). 189 patients presented to MAS during this time. Data were collected on the presenting complaint, time of attendance (divided into four time categories), and triage colour code for the urgency of a review by a doctor (red: immediately; orange: within 15 minutes; yellow: within 1 hour; green: within 4 hours). The number of triage waiting times that were breached and the outcome of the attendance were noted. Results: 49% of patients presenting to MAS during this time period were triaged, which therefore did not meet the 90% target. 67% of patients who were triaged were seen within the timeframe allocated by their triage colour code, which therefore did not meet the 90% target. The most frequent reason for patient attendance was reduced fetal movements (30.5% of attendances). The busiest time of day (when most patients presented) was between 06:01 and 12:00, and this was also when the highest number of patients were not triaged (26 patients, or 54% of patients presenting in this time category). The most used triage category (59%) was the green colour code (to be seen by a doctor within 4 hours), followed by orange (24%), yellow (14%), and red (3%). 45% of triaged patients were admitted, whilst 55% were discharged. 62% of patients allocated to the green triage category were discharged, compared with 56% of yellow category patients, 27% of orange category patients, and 50% of red category patients. The time of presentation to the hospital was also associated with the level of urgency and the outcome: patients presenting from 12:01 to 18:00 were more likely to be discharged (72% discharged) than those presenting from 00:01 to 06:00, of whom only 12.5% were discharged. Conclusion: The triage system for assessing the urgency of acutely presenting obstetric patients is only being effectively utilised for 49% of patients. There is potential for enhancing the use of the triage system to improve efficiency and promote patient safety. It is noted that MAS was busiest at 06:01 - 12:00, when there was also the highest number of non-triaged patients. This highlights some areas where we can improve, including higher levels of staffing, better use of BSOTS to triage patients, and patient education.
Keywords: birmingham, BSOTS, maternal, obstetric, pregnancy, specific, symptom, triage
Procedia PDF Downloads 105
16674 Superiority of High Frequency Based Volatility Models: Empirical Evidence from an Emerging Market
Authors: Sibel Celik, Hüseyin Ergin
Abstract:
The paper aims to find the best volatility forecasting model for stock markets in Turkey. For this purpose, we compare the performance of different volatility models, both the traditional GARCH model and high-frequency-based volatility models, and conclude that in both the pre-crisis and crisis periods, high-frequency-based volatility models perform better than the traditional GARCH model. The findings of the paper are important for policy makers, financial institutions and investors.
Keywords: volatility, GARCH model, realized volatility, high frequency data
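As a sketch of the comparison, the code below fits a daily GARCH(1,1) with the `arch` package and builds realized volatility from squared intraday returns, on simulated data that is purely an assumption:

```python
# Daily GARCH(1,1) conditional volatility vs. realized volatility from
# intraday returns. Returns are simulated i.i.d., so any correlation
# printed here is noise; real data would be used in practice.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(6)
days, intraday = 500, 78                      # 78 five-minute bars per day
r_intra = rng.normal(0, 0.001, (days, intraday))
realized_vol = np.sqrt((r_intra ** 2).sum(axis=1))  # daily realized volatility

r_daily = r_intra.sum(axis=1) * 100           # percent daily returns for GARCH
res = arch_model(r_daily, vol="GARCH", p=1, q=1).fit(disp="off")
garch_vol = res.conditional_volatility / 100
print("corr(GARCH, realized):", np.corrcoef(garch_vol, realized_vol)[0, 1].round(3))
```

The realized measure exploits the intraday information the daily GARCH model never sees, which is the mechanism behind its superior forecasts.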
Procedia PDF Downloads 486
16673 Application of the Tripartite Model to the Link between Non-Suicidal Self-Injury and Suicidal Risk
Authors: Ashley Wei-Ting Wang, Wen-Yau Hsu
Abstract:
Objectives: The current study applies and expands the Tripartite Model to elaborate the link between non-suicidal self-injury (NSSI) and suicidal behavior. We propose a structural model of NSSI and suicidal risk in which negative affect (NA) predicts both anxiety and depression, positive affect (PA) predicts depression only, anxiety is linked to NSSI, and depression is linked to suicidal risk. Method: Four hundred and eighty-seven undergraduates participated. Data were collected by administering self-report questionnaires. We performed hierarchical regression and structural equation modeling to test the proposed structural model. Results: The results largely support the proposed structural model, with one exception: anxiety was strongly associated with NSSI and, to a lesser extent, with suicidal risk. Conclusions: We conclude that the co-occurrence of NSSI and suicidal risk is due to NA and anxiety, and that suicidal risk can be differentiated by depression. Further theoretical and practical implications are discussed.
Keywords: non-suicidal self-injury, suicidal risk, anxiety, depression, the tripartite model, hierarchical relationship
Procedia PDF Downloads 470
16672 Valuation of Caps and Floors in a LIBOR Market Model with Markov Jump Risks
Authors: Shih-Kuei Lin
Abstract:
The characterization of the arbitrage-free dynamics of interest rates is developed in this study in the presence of Markov jump risks, when the term structure of interest rates is modeled through simple forward rates. We consider Markov jump risks by allowing randomness in jump sizes and independence between jump sizes and jump times. The Markov jump diffusion model is used to capture empirical phenomena and to accurately describe interest rate jump risks in a financial market. We derive the arbitrage-free model of simple forward rates under the spot measure. Moreover, the analytical pricing formulas for a cap and a floor are derived under the forward measure when the jump size follows a lognormal distribution. In our empirical analysis, we find that the LIBOR market model with Markov jump risk better accounts for changes from/to different states and different rates.
Keywords: arbitrage-free, cap and floor, Markov jump diffusion model, simple forward rate model, volatility smile, EM algorithm
Procedia PDF Downloads 421
16671 An Adjusted Network Information Criterion for Model Selection in Statistical Neural Network Models
Authors: Christopher Godwin Udomboso, Angela Unna Chukwu, Isaac Kwame Dontwi
Abstract:
In selecting a statistical neural network model, the Network Information Criterion (NIC) has been observed to be sample-biased, because it does not account for sample size. The selection of a model from a set of fitted candidate models requires objective, data-driven criteria. In this paper, we derive and investigate the Adjusted Network Information Criterion (ANIC), based on Kullback's symmetric divergence, which has been designed to be an asymptotically unbiased estimator of the expected Kullback-Leibler information of a fitted model. The analyses show that, on the whole, the ANIC improves model selection over more sample sizes than the NIC does.
Keywords: statistical neural network, network information criterion, adjusted network information criterion, transfer function
Procedia PDF Downloads 566
16670 Addressing the Oracle Problem: Decentralized Authentication in Blockchain-Based Green Hydrogen Certification
Authors: Volker Wannack
Abstract:
The aim of this paper is to present a concept for addressing the Oracle Problem in the context of hydrogen production using renewable energy sources. The proposed approach relies on the authentication of the electricity used for hydrogen production by multiple surrounding actors with similar electricity generation facilities, who attest to the authenticity of the electricity production. The concept introduces an Authenticity Score assigned to each certificate, as well as a Trust Score assigned to each witness. Each certificate must be attested by different actors with a sufficient Trust Score to achieve an Authenticity Score above a predefined threshold, thereby demonstrating that the produced hydrogen is indeed "green."
Keywords: hydrogen, blockchain, sustainability, structural change
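The scoring rule lends itself to a compact sketch: each witness carries a Trust Score, a certificate accumulates the Trust Scores of its distinct witnesses as its Authenticity Score, and certification requires clearing a threshold. The scores, witness names and threshold below are illustrative assumptions:

```python
# Certificate attestation: Authenticity Score = sum of distinct witnesses'
# Trust Scores; "green" once it clears a predefined threshold (assumed 1.0).
from dataclasses import dataclass, field

TRUST = {"plantA": 0.6, "plantB": 0.5, "plantC": 0.3}  # per-witness trust scores
THRESHOLD = 1.0

@dataclass
class Certificate:
    cert_id: str
    witnesses: list = field(default_factory=list)

    def attest(self, witness: str) -> None:
        if witness in TRUST and witness not in self.witnesses:
            self.witnesses.append(witness)      # each witness counts once

    @property
    def authenticity_score(self) -> float:
        return sum(TRUST[w] for w in self.witnesses)

    def is_green(self) -> bool:
        return self.authenticity_score >= THRESHOLD

cert = Certificate("H2-0001")
for w in ("plantA", "plantB"):
    cert.attest(w)
print(cert.authenticity_score, cert.is_green())   # 1.1 True
```

On-chain, the same rule would run inside a smart contract, with the Trust Scores themselves maintained by the certification scheme.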
Procedia PDF Downloads 64