Search results for: vector error correction model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18461

16571 Local Image Features Emerging from Brain Inspired Multi-Layer Neural Network

Authors: Hui Wei, Zheng Dong

Abstract:

Object recognition has long been a challenging task in computer vision. Yet the human brain, with the ability to rapidly and accurately recognize visual stimuli, manages this task effortlessly. In the past decades, advances in neuroscience have revealed some neural mechanisms underlying visual processing. In this paper, we present a novel model inspired by the visual pathway in primate brains. This multi-layer neural network model imitates the hierarchical convergent processing mechanism in the visual pathway. We show that local image features generated by this model exhibit robust discrimination and even better generalization ability compared with some existing image descriptors. We also demonstrate the application of this model in an object recognition task on image data sets. The result provides strong support for the potential of this model.
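
A minimal sketch of the hierarchical convergent idea is below: alternating local filtering (simple-cell-like convolutions) with spatial max pooling (complex-cell-like invariance), in the style of HMAX-type models. The toy oriented filters and pooling sizes are assumptions for illustration, not the paper's receptive fields.

```python
import numpy as np
from scipy.signal import convolve2d

def max_pool(x, k=2):
    """Spatial max pooling over k x k blocks (complex-cell-like invariance)."""
    h, w = x.shape[0] // k * k, x.shape[1] // k * k
    return x[:h, :w].reshape(h // k, k, w // k, k).max(axis=(1, 3))

# Toy oriented edge filters standing in for learned receptive fields
filters = [np.array([[1, 0, -1]] * 3, float),       # vertical edges
           np.array([[1, 0, -1]] * 3, float).T]     # horizontal edges

rng = np.random.default_rng(11)
image = rng.random((64, 64))

layer = [convolve2d(image, f, mode="valid") for f in filters]               # S1
layer = [max_pool(np.abs(r)) for r in layer]                                # C1
layer = [max_pool(convolve2d(r, filters[0], mode="valid")) for r in layer]  # S2/C2

feature = np.concatenate([r.ravel() for r in layer])
print("local feature vector length:", feature.size)
```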

Keywords: biological model, feature extraction, multi-layer neural network, object recognition

Procedia PDF Downloads 530
16570 Towards Accurate Velocity Profile Models in Turbulent Open-Channel Flows: Improved Eddy Viscosity Formulation

Authors: W. Meron Mebrahtu, R. Absi

Abstract:

Velocity distribution in turbulent open-channel flows is organized in a complex manner, owing to the large spatial and temporal variability of fluid motion that results from the free-surface turbulent flow condition. The phenomenon is further complicated by the complex geometry of channels and the presence of transported solids. Several efforts have therefore been made to understand the phenomenon and obtain accurate mathematical models suitable for engineering applications. However, predictions remain inaccurate because oversimplified assumptions are involved in modeling this complex phenomenon. The aim of this work is therefore to study velocity distribution profiles and obtain simple, more accurate, and predictive mathematical models. Particular focus is placed on acceptable simplifications of the general transport equations and an accurate representation of the eddy viscosity. A wide rectangular open channel is a suitable starting point; the other assumptions are a smooth wall and sediment-free flow under steady and uniform conditions. These assumptions allow examining the effect of the bottom wall and the free surface only, which is a necessary step before dealing with more complex flow scenarios. For this flow condition, two ordinary differential equations for the velocity profile are obtained: one from the Reynolds-averaged Navier-Stokes (RANS) equation and one from the equilibrium between turbulent kinetic energy (TKE) production and dissipation. Different analytic models for the eddy viscosity, TKE, and mixing length were then assessed. Computed velocity profiles were compared to experimental data for different flow conditions and to the well-known linear, log, and log-wake laws. Results show that the model based on the RANS equation provides more accurate velocity profiles. In the viscous sublayer and buffer layer, the method based on Prandtl's eddy viscosity model and the Van Driest mixing length gives a more precise result. For the log layer and outer region, a mixing length equation derived from Von Karman's similarity hypothesis provides the best agreement with measured data, except near the free surface, where an additional correction based on a damping function for the eddy viscosity is used. This method yields more accurate velocity profiles with the same value of the damping coefficient, which is valid under different flow conditions. This work will continue by investigating narrow channels, complex geometries, and the effect of solids transported in sewers.
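
A minimal numerical sketch of the approach is below: the velocity profile is integrated from the RANS shear-stress balance for steady uniform open-channel flow, with a Van Driest-damped mixing-length eddy viscosity. All numerical values are assumed for illustration, and the free-surface damping correction described in the abstract is omitted.

```python
import numpy as np

# Physical setup (assumed values for illustration)
h = 0.1          # flow depth [m]
u_tau = 0.01     # friction velocity [m/s]
nu = 1e-6        # kinematic viscosity of water [m^2/s]
kappa = 0.41     # Von Karman constant
A_plus = 26.0    # Van Driest damping coefficient

z = np.linspace(1e-6, h, 5000)           # wall-normal coordinate
z_plus = z * u_tau / nu
# Van Driest-damped mixing length: l_m = kappa * z * (1 - exp(-z+/A+))
l_m = kappa * z * (1.0 - np.exp(-z_plus / A_plus))

# Total shear stress in uniform open-channel flow: tau/rho = u_tau^2 (1 - z/h)
rhs = u_tau**2 * (1.0 - z / h)

# (nu + l_m^2 * s) * s = rhs  ->  solve the quadratic for s = du/dz
s = np.where(l_m > 0,
             (-nu + np.sqrt(nu**2 + 4.0 * l_m**2 * rhs)) / (2.0 * l_m**2 + 1e-30),
             rhs / nu)

# Integrate du/dz (trapezoidal rule) to obtain the velocity profile
u = np.concatenate(([0.0], np.cumsum(0.5 * (s[1:] + s[:-1]) * np.diff(z))))

# Compare with the log law u+ = (1/kappa) ln(z+) + 5.0 away from the wall
u_log = u_tau * (np.log(z_plus) / kappa + 5.0)
print(u[-1], u_log[-1])
```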

Keywords: accuracy, eddy viscosity, sewers, velocity profile

Procedia PDF Downloads 100
16569 Assessing the Spatial Distribution of Urban Parks Using Remote Sensing and Geographic Information Systems Techniques

Authors: Hira Jabbar, Tanzeel-Ur Rehman

Abstract:

Urban parks and open spaces play a significant role in improving the physical and mental health of citizens, strengthen societies, and make cities more attractive places to live and work. As the world's cities continue to grow, continuing to value green space in cities is vital but also a challenge, particularly in developing countries where there is pressure on space, resources, and development. Offering equal accessibility to parks is one of the important issues of park distribution; parks should be distributed so that all inhabitants have one in close proximity to their residence. Remote sensing (RS) and geographic information systems (GIS) can provide decision makers with enormous opportunities to improve the planning and management of park facilities. This study exhibits the capability of GIS and RS techniques to provide baseline knowledge about the distribution of parks and their level of accessibility, and to help identify potential areas for such facilities. For this purpose, Landsat OLI imagery for the year 2016 was acquired from the USGS Earth Explorer. Preprocessing models were applied in Erdas Imagine 2014v for atmospheric correction, and an NDVI model was developed and applied to quantify the land use/land cover classes, including built-up land, barren land, water, and vegetation. Parks were selected from the total public green spaces based on their signature in the remote sensing imagery and their distribution. Percentages of total green space and park green space were calculated for each town of Lahore City, and the results were then compared with the recommended standards. The ANGSt model was applied to calculate accessibility to parks, and service area analysis was performed using the Network Analyst tool. The serviceability of these parks was evaluated using statistical indices such as service area, service population, and park area per capita. The findings may help town planners understand the distribution of parks, the demand for new parks, and potential areas that are deprived of parks. The purpose of the present study is to provide the necessary information to planners, policy makers, and scientific researchers in the decision-making process for the management and improvement of urban parks.
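
The NDVI step reduces to a simple band ratio; a minimal sketch is below on toy reflectance arrays. In practice the Landsat 8 OLI red and near-infrared bands would be read from the GeoTIFFs (e.g., with rasterio), and the class thresholds, which are assumed here, would be calibrated to the scene.

```python
import numpy as np

# Hypothetical reflectance arrays standing in for the Landsat 8 OLI
# red (band 4) and near-infrared (band 5) bands.
red = np.array([[0.10, 0.25], [0.08, 0.30]])
nir = np.array([[0.45, 0.30], [0.50, 0.28]])

# NDVI = (NIR - Red) / (NIR + Red), guarding against division by zero
ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)

# Simple thresholding into the four classes used in the study (thresholds assumed)
classes = np.select(
    [ndvi < 0.0, ndvi < 0.2, ndvi < 0.4],
    ["water", "built-up", "barren"],
    default="vegetation")
print(ndvi)
print(classes)
```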

Keywords: accessible natural green space standards (ANGSt), geographic information systems (GIS), remote sensing (RS), United States geological survey (USGS)

Procedia PDF Downloads 319
16568 Lifetime Improvement of a Structural Clamp Using Fatigue Analysis

Authors: Pisut Boonkaew, Jatuporn Thongsri

Abstract:

In the hard disk drive manufacturing industry, removing unnecessary parts and qualifying part quality before assembly are important, so a clamp was designed and fabricated as a holding fixture for the testing process. Improving such a part by trial-and-error testing takes a long time, so simulation was brought in to improve the part and reduce the time taken. The problem is that the present clamp has a low life expectancy because of the critical stress that occurs in it. Hence, simulation was used to study the behavior of stress and compressive force and to improve the clamp's life expectancy over all candidate designs, of which there are 27 after excluding repeated designs. The design combinations were generated following the full factorial rules of the six sigma methodology. The six sigma methodology is a well-structured method for improving quality by detecting and reducing process variability, so that defects decrease while process capability increases. This research focuses on reducing stress and fatigue while the compressive force remains within the acceptable range set by the company. In the simulation, ANSYS models the 3D CAD geometry under the same conditions as the experiment, and the force at each displacement from 0.01 to 0.1 mm is recorded. The ANSYS setup was verified by a mesh convergence study and by comparing the percentage error with the experimental result, which must not exceed the acceptable range. The design improvement then varies the angle, radius, and length to reduce stress while keeping the force acceptable. Fatigue analysis is performed next in ANSYS to guarantee that the lifetime is extended, and the setup is confirmed by comparison with the actual clamp to observe the difference in fatigue between the two designs. This yields a lifetime improvement of up to 57% compared with the actual clamp in manufacturing. The study provides a setting precise and trustworthy enough to serve as a reference methodology for future designs. By combining and adapting the six sigma method with finite element, fatigue, and linear regression analysis, leading to accurate calculation, this project is expected to save up to 60 million dollars annually.
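
The 27 candidates follow from a full factorial over three factors at three levels (3^3 = 27). The sketch below enumerates them; the factor levels are hypothetical, and a stand-in stress function marks where the ANSYS results described above would enter.

```python
from itertools import product

# Full factorial over three factors at three levels: 3**3 = 27 candidate
# designs, matching the abstract. Factor levels here are hypothetical.
degrees = [30.0, 45.0, 60.0]   # clamp angle [deg]
radii   = [0.5, 1.0, 1.5]      # fillet radius [mm]
lengths = [10.0, 12.0, 14.0]   # arm length [mm]

designs = list(product(degrees, radii, lengths))
assert len(designs) == 27

def max_stress(deg, rad, length):
    """Stand-in for an ANSYS run; returns a hypothetical peak stress [MPa]."""
    return 200.0 / rad + 2.0 * deg / length

# Pick the design with the lowest simulated stress; a real screening would
# also reject designs whose compressive force leaves the acceptable range.
best = min(designs, key=lambda d: max_stress(*d))
print("Lowest-stress design (deg, radius, length):", best)
```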

Keywords: clamp, finite element analysis, structural, six sigma, linear regression analysis, fatigue analysis, probability

Procedia PDF Downloads 226
16567 Structural Design Optimization of Reinforced Thin-Walled Vessels under External Pressure Using Simulation and Machine Learning Classification Algorithm

Authors: Lydia Novozhilova, Vladimir Urazhdin

Abstract:

An optimization problem for reinforced thin-walled vessels under uniform external pressure is considered. Conventional approaches to optimization generally start with pre-defined geometric parameters of the vessel and then employ analytic or numeric calculations and/or experimental testing to verify functionality, such as stability under the projected conditions. The proposed approach consists of two steps. First, the feasibility domain is identified in the multidimensional parameter space; every point in this domain defines a design satisfying both geometric and functional constraints. Second, an objective function defined on this domain is formulated and optimized. The broader applicability of the suggested methodology is maximized by implementing the Support Vector Machine (SVM) classification algorithm of machine learning to identify the feasible design region. Training data for the SVM classifier are obtained using the Simulation package of SOLIDWORKS®. Based on these data, the SVM algorithm produces a curvilinear boundary separating admissible and inadmissible sets of design parameters with maximal margins. Optimization of the vessel parameters within the feasibility domain is then performed using standard algorithms for constrained optimization. As an example, optimization of a ring-stiffened closed cylindrical thin-walled vessel with semi-spherical caps under high external pressure is implemented. The von Mises stress criterion is used as the functional constraint, but any other stability constraint admitting mathematical formulation can be incorporated into the proposed approach. The suggested methodology has good potential for reducing the design time needed to find optimal parameters of thin-walled vessels under uniform external pressure.
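
A minimal sketch of the first step is below: an RBF-kernel SVM trained on labelled design points to learn a curvilinear feasibility boundary. Here a toy stress check stands in for the SOLIDWORKS® Simulation runs, and all parameter ranges and thresholds are assumed.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Training data: sampled design points (wall thickness t, ring spacing s),
# labelled feasible (1) or infeasible (0). In the paper the labels come from
# SOLIDWORKS Simulation runs; here a hypothetical von Mises check stands in.
X = rng.uniform([0.005, 0.1], [0.03, 0.5], size=(200, 2))
stress = 80e6 * X[:, 1] / X[:, 0]          # toy stress estimate [Pa]
y = (stress < 2.0e9).astype(int)           # 1 = satisfies the stress criterion

# The RBF kernel gives the curvilinear, maximal-margin feasibility boundary
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)

# Query a new candidate design before running the optimizer inside the domain
candidate = np.array([[0.02, 0.3]])
print("feasible" if clf.predict(candidate)[0] == 1 else "infeasible")
```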

Keywords: design parameters, feasibility domain, von Mises stress criterion, Support Vector Machine (SVM) classifier

Procedia PDF Downloads 313
16566 Development of a Computer Aided Diagnosis Tool for Brain Tumor Extraction and Classification

Authors: Fathi Kallel, Abdulelah Alabd Uljabbar, Abdulrahman Aldukhail, Abdulaziz Alomran

Abstract:

The brain is an important organ since it is responsible for most functions, such as vision and memory. However, diseases such as Alzheimer's disease and tumors can affect the brain and lead to partial or full disorder. Regular diagnosis is necessary as a preventive measure and can help doctors detect possible trouble early and administer the appropriate treatment, especially in the case of brain tumors. Different imaging modalities are proposed for the diagnosis of brain tumors; the most powerful and most widely used is Magnetic Resonance Imaging (MRI). MRI images are analyzed by doctors in order to locate a possible tumor in the brain and prescribe the appropriate treatment. Diverse image processing methods have also been proposed to help doctors identify and analyze tumors; indeed, many Computer Aided Diagnosis (CAD) tools built on image processing algorithms are used by doctors as a second opinion. In this paper, we propose a new advanced CAD for brain tumor identification, classification, and feature extraction, comprising three main parts. First, the brain MRI is loaded. Second, a robust technique for brain tumor extraction is proposed, based on both the Discrete Wavelet Transform (DWT) and Principal Component Analysis (PCA). The DWT is characterized by its multiresolution analytic property, which is why it was applied to the MRI images with different decomposition levels for feature extraction. Nevertheless, this technique suffers from a main drawback: it requires substantial storage and is computationally expensive. To decrease the dimension of the feature vector and the computing time, the PCA technique is applied. In the last stage, according to the extracted features, the brain tumor is classified as either benign or malignant using the Support Vector Machine (SVM) algorithm. A CAD tool for brain tumor detection and classification including all the above-mentioned stages is designed and developed using the MATLAB GUIDE user interface.
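
A minimal sketch of the DWT-PCA-SVM chain is below, written in Python with PyWavelets and scikit-learn rather than the MATLAB implementation of the paper; images and labels are simulated placeholders.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def dwt_features(img, level=2):
    """2-level 2D DWT; the approximation coefficients serve as raw features."""
    coeffs = pywt.wavedec2(img, "haar", level=level)
    return coeffs[0].ravel()

# Hypothetical stand-ins for preprocessed MRI slices and their labels
rng = np.random.default_rng(1)
images = rng.random((40, 64, 64))
labels = rng.integers(0, 2, 40)            # 0 = benign, 1 = malignant

X = np.array([dwt_features(im) for im in images])

# PCA shrinks the large DWT feature vector to curb storage and compute cost
X_reduced = PCA(n_components=10).fit_transform(X)

clf = SVC(kernel="rbf").fit(X_reduced, labels)
print("training accuracy:", clf.score(X_reduced, labels))
```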

Keywords: MRI, brain tumor, CAD, feature extraction, DWT, PCA, classification, SVM

Procedia PDF Downloads 239
16565 Application Difference between Cox and Logistic Regression Models

Authors: Idrissa Kayijuka

Abstract:

Logistic regression and Cox regression (proportional hazards) models are currently employed in the analysis of prospective epidemiologic research into risk factors for chronic diseases, and the theoretical relationship between the two models has been studied. By definition, the Cox regression model, also called the Cox proportional hazards model, is a procedure for modeling data on the time leading up to an event when censored cases exist, whereas the logistic regression model is mostly applicable when the independent variables consist of numerical as well as nominal values and the outcome variable is binary (dichotomous). Many researchers have given overviews of the Cox and logistic regression models and their applications in different areas. In this work, the analysis is done on secondary data, the SPSS exercise data set on breast cancer with a sample of 1121 women, and the main objective is to show the application difference between the Cox and logistic regression models based on factors that cause women to die of breast cancer. Some analysis (e.g., on lymph node status) was done manually, and the SPSS software was used to analyze the remaining data. The study found that there is an application difference between the two models: the Cox regression model is used when one wishes to analyze data that also include the follow-up time, whereas the logistic regression model analyzes data without follow-up time. Their measures of association also differ: the hazard ratio for the Cox model and the odds ratio for the logistic model. A similarity between the two models is that both are applicable to predicting the outcome of a categorical variable, i.e., a variable that can take only a restricted number of categories. In conclusion, the Cox regression model differs from logistic regression by assessing a rate instead of a proportion. Both models are suitable methods for analyzing data in many other studies, but the Cox regression model is the more highly recommended when follow-up time is available.
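
A minimal sketch of the application difference is below, on simulated data: the Cox fit (lifelines) uses the follow-up time and yields a hazard ratio, while the logistic fit (scikit-learn) ignores follow-up time and yields an odds ratio. The variables are illustrative stand-ins for the breast cancer data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

# Hypothetical breast-cancer-style data: follow-up time (months), death
# event indicator, and (log) number of positive lymph nodes.
rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "time": rng.exponential(60, n),
    "died": rng.integers(0, 2, n),
    "ln_nodes": np.log1p(rng.poisson(3, n)),
})

# Cox model: uses the follow-up time; exp(coef) is the hazard ratio
cph = CoxPHFitter().fit(df, duration_col="time", event_col="died")
print("hazard ratio:", np.exp(cph.params_["ln_nodes"]))

# Logistic model: ignores follow-up time; exp(coef) is the odds ratio
logit = LogisticRegression().fit(df[["ln_nodes"]], df["died"])
print("odds ratio:", np.exp(logit.coef_[0][0]))
```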

Keywords: logistic regression model, Cox regression model, survival analysis, hazard ratio

Procedia PDF Downloads 439
16564 Comparison of Wake Oscillator Models to Predict Vortex-Induced Vibration of Tall Chimneys

Authors: Saba Rahman, Arvind K. Jain, S. D. Bharti, T. K. Datta

Abstract:

The present study compares the semi-empirical wake-oscillator models that are used to predict vortex-induced vibration of structures, including those proposed by Facchinetti, by Farshidian and Dolatabadi, and by Skop and Griffin. These models combine a wake oscillator resembling the Van der Pol oscillator with a single-degree-of-freedom (SDOF) structural oscillator. To use these models for estimating the top displacement of chimneys, only the first vibration mode of the chimney is considered; its modal equation constitutes the SDOF model. The equations of the wake oscillator and the SDOF model are solved simultaneously using an iterative procedure, with the ode solver of MATLAB used for the solution. The empirical parameters in the wake-oscillator models are estimated using a newly developed approach, and the response is compared with experimental data, with which it agrees reasonably well. For the comparative study, a 210 m tall concrete chimney with a base diameter of 28 m, a top diameter of 20 m, and a wall thickness of 0.3 m has been chosen. The responses of the chimney are also determined using the linear model proposed by E. Simiu and the deterministic model given in the Eurocode. The comparative study shows that the responses predicted by the Facchinetti model and the Skop and Griffin model are nearly the same, while the Farshidian and Dolatabadi model predicts a higher response. The linear model, which does not consider the aero-elastic phenomenon, predicts a lower response than the non-linear models. Further, for large damping, the Eurocode prediction compares relatively well with those of the non-linear models.
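
A minimal sketch of this class of models is below: a non-dimensional Facchinetti-type Van der Pol wake oscillator coupled to the first structural mode, integrated with SciPy's solve_ivp in place of the MATLAB ode solver mentioned above. The parameter values are typical illustrative choices, not those estimated in the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Non-dimensional coupling of a Van der Pol wake variable q with a 1-DOF
# structural mode y (first mode of the chimney). Parameters are illustrative.
eps, A = 0.3, 12.0        # Van der Pol parameter and coupling gain
zeta = 0.005              # structural damping ratio
delta = 1.0               # structural-to-shedding frequency ratio (lock-in)
M = 0.05                  # wake-to-structure force coupling parameter

def rhs(t, s):
    y, ydot, q, qdot = s
    yddot = -2.0 * zeta * delta * ydot - delta**2 * y + M * q
    qddot = -eps * (q**2 - 1.0) * qdot - q + A * yddot
    return [ydot, yddot, qdot, qddot]

sol = solve_ivp(rhs, (0.0, 500.0), [0.0, 0.0, 0.1, 0.0], max_step=0.05)

# Steady-state tip displacement amplitude (last 20% of the record)
tail = sol.y[0, int(0.8 * sol.y.shape[1]):]
print("non-dimensional tip amplitude:", 0.5 * (tail.max() - tail.min()))
```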

Keywords: chimney, deterministic model, van der pol, vortex-induced vibration

Procedia PDF Downloads 208
16563 Artificial Intelligence in the Design of a Retaining Structure

Authors: Kelvin Lo

Abstract:

Nowadays, numerical modelling in geotechnical engineering is common but sophisticated: many advanced input settings and considerable computational effort are required to optimize a design and reduce the construction cost. Optimizing a design usually requires huge numerical models, and if the optimization is conducted manually there is a potentially dangerous consequence from human error, while the time spent on input and on data extraction from output is significant. This paper presents an automation process introduced to the numerical modelling (Plaxis 2D) of a trench excavation supported by a secant-pile retaining structure for a top-down tunnel project. Python code is adopted to control the process, and numerical modelling is conducted automatically at every 20 m chainage along the 200 m tunnel, with the maximum retained height occurring at the middle chainage. The Python code changes the geological stratum and excavation depth under groundwater flow conditions in each 20 m section and automatically conducts trial and error to determine the required pile length and the use of props needed to achieve the required factor of safety and target displacement. Once the bending moment of the pile exceeds its capacity, the pile size is increased; when the pile embedment reaches the default maximum length, the prop system is turned on. Results showed that the approach saves time, increases efficiency, lowers design costs, and replaces human labor to minimize error.
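
A simplified sketch of the trial-and-error loop is below. The Plaxis 2D scripting calls are replaced by a self-contained toy run_model() stub (the real Plaxis Python API differs), and the bending-moment/pile-size check is omitted for brevity; the thresholds and response values are hypothetical.

```python
MAX_PILE_LENGTH = 30.0     # assumed default maximum embedment [m]

def run_model(chainage, pile_length, props):
    """Stub standing in for a Plaxis 2D run; returns toy response values."""
    depth = 8.0 + 4.0 * (1.0 - abs(chainage - 100.0) / 100.0)   # deepest mid-tunnel
    fos = 1.0 + 0.05 * pile_length + (0.3 if props else 0.0) - 0.05 * depth
    disp = max(0.005, 0.004 * depth - 0.001 * pile_length - (0.01 if props else 0.0))
    return {"fos": fos, "disp": disp}

def design_section(chainage, target_fos=1.4, target_disp=0.025):
    pile_length, props = 12.0, False
    while True:
        r = run_model(chainage, pile_length, props)
        if r["fos"] >= target_fos and r["disp"] <= target_disp:
            return chainage, pile_length, props
        if pile_length < MAX_PILE_LENGTH:
            pile_length += 1.0          # lengthen the secant pile and retry
        else:
            props = True                # embedment maxed out: turn on props

# One model per 20 m chainage along the 200 m tunnel
for section in [design_section(ch) for ch in range(0, 201, 20)]:
    print(section)
```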

Keywords: automation, numerical modelling, Python, retaining structures

Procedia PDF Downloads 42
16562 Development of a Predictive Model to Prevent Financial Crisis

Authors: Tengqin Han

Abstract:

Delinquency has been a crucial factor in economics throughout the years. Commonly seen in credit cards and mortgages, it played one of the crucial roles in causing the most recent financial crisis, in 2008. In each case, a delinquency is a sign that the borrower is unable to pay off the debt, and it may thus end in a loss of property. Individually, one case of delinquency seems unimportant compared to the entire credit system. In China, an emerging economic entity, national and economic strength have grown rapidly, and the gross domestic product (GDP) growth rate has remained as high as 8% over the past decades; however, potential risks exist behind the appearance of prosperity, and among them the credit system is the most significant. Because of the long term and large balance of a mortgage, it is critical to monitor the risk during the performance period. In this project, about 300,000 mortgage account records are analyzed in order to develop a predictive model for the probability of delinquency. Through univariate analysis the data are cleaned, and through bivariate analysis the variables with strong predictive power are detected. The project is divided into two parts. In the first part, the 2005 analysis data are split into two parts: 60% for model development and 40% for in-time model validation. The KS for model development is 31, and the KS for in-time validation is 31, indicating the model is stable. In addition, the model is further validated by out-of-time validation, which uses 40% of the 2006 data, with a KS of 33, indicating the model is still stable and robust. In the second part, the model is improved by the addition of macroeconomic indexes, including GDP, the consumer price index, the unemployment rate, the inflation rate, etc. Data from 2005 to 2010 are used for model development and validation. Compared with the base model (without macroeconomic variables), KS increases from 41 to 44, indicating that the macroeconomic variables can be used to improve the separation power of the model and make the prediction more accurate.
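
The KS statistic quoted above measures the separation between the model-score distributions of delinquent and non-delinquent accounts; a minimal sketch with simulated scores is below.

```python
import numpy as np
from scipy.stats import ks_2samp

# KS separation between model scores of delinquent and current accounts;
# the scores here are simulated for illustration.
rng = np.random.default_rng(3)
scores_bad  = rng.normal(0.60, 0.15, 4000)    # delinquent accounts
scores_good = rng.normal(0.45, 0.15, 16000)   # current accounts

ks = ks_2samp(scores_bad, scores_good).statistic
print(f"KS = {100 * ks:.0f}")   # reported on a 0-100 scale, as in the abstract
```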

Keywords: delinquency, mortgage, model development, model validation

Procedia PDF Downloads 213
16561 Review of Hydrologic Applications of Conceptual Models for Precipitation-Runoff Process

Authors: Oluwatosin Olofintoye, Josiah Adeyemo, Gbemileke Shomade

Abstract:

The relationship between rainfall and runoff is an important issue in surface water hydrology; therefore, the understanding and development of accurate rainfall-runoff models and their applications in water resources planning, management, and operation are of paramount importance in hydrological studies. This paper reviews previous work on modeling the rainfall-runoff process. The hydrologic applications of conceptual models and artificial neural networks (ANNs) for precipitation-runoff modeling were studied. Gradient training methods such as error back-propagation (BP), as well as evolutionary algorithms (EAs), are discussed in relation to the training of artificial neural networks, and it is shown that applying EAs to artificial neural network training could be an alternative to other training methods. Therefore, further research to exploit the abundant expert knowledge in the area of artificial intelligence for the solution of hydrologic and water resources planning and management problems is needed.
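
As a minimal illustration of the EA-as-trainer idea discussed above, the sketch below fits a tiny one-hidden-layer network to a synthetic rainfall-runoff mapping with a simple evolutionary strategy instead of back-propagation; all data and hyperparameters are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)
rain = rng.random((100, 3))                              # last 3 days' rainfall (scaled)
runoff = 0.6 * rain @ np.array([0.5, 0.3, 0.2]) + 0.05   # synthetic target

def predict(w, x):
    """One-hidden-layer network (3-4-1) parameterized by a flat weight vector."""
    W1 = w[:12].reshape(3, 4); b1 = w[12:16]
    W2 = w[16:20]; b2 = w[20]
    return np.tanh(x @ W1 + b1) @ W2 + b2

def mse(w):
    return np.mean((predict(w, rain) - runoff) ** 2)

pop = rng.normal(0, 0.5, (50, 21))            # population of weight vectors
for gen in range(200):
    fitness = np.array([mse(w) for w in pop])
    parents = pop[np.argsort(fitness)[:10]]   # truncation selection
    # each parent spawns 5 mutated children (Gaussian mutation)
    pop = np.repeat(parents, 5, axis=0) + rng.normal(0, 0.05, (50, 21))
print("best MSE:", min(mse(w) for w in pop))
```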

Keywords: artificial intelligence, artificial neural networks, evolutionary algorithms, gradient training method, rainfall-runoff model

Procedia PDF Downloads 434
16560 A Method for Clinical Concept Extraction from Medical Text

Authors: Moshe Wasserblat, Jonathan Mamou, Oren Pereg

Abstract:

Natural Language Processing (NLP) has made a major leap in the last few years in its practical integration into medical solutions, for example, extracting clinical concepts from medical texts such as medical conditions, medications, treatments, and symptoms. However, training and deploying those models in real environments still demands a large amount of annotated data and NLP/Machine Learning (ML) expertise, which makes the process costly and time-consuming. We present a practical and efficient method for clinical concept extraction that requires neither costly labeled data nor ML expertise. The method includes three steps. Step 1: the user injects a large in-domain text corpus (e.g., PubMed); the system then builds, in an unsupervised manner, a contextual model containing vector representations of the concepts in the corpus (e.g., Phrase2Vec). Step 2: the user provides a seed set of terms representing a specific medical concept (e.g., for the concept of symptoms, the user may provide 'dry mouth,' 'itchy skin,' and 'blurred vision'); the system then matches the seed set against the contextual model and extracts the most semantically similar terms (e.g., additional symptoms). The result is a complete set of terms related to the medical concept. Step 3: in production, medical concepts must be extracted from unseen medical text; the system extracts key phrases from the new text, matches them against the complete set of terms from step 2, and annotates the most semantically similar ones with the same medical concept category. As an example, the seed symptom concepts would result in the following annotation: 'The patient complains of fatigue [symptom], dry skin [symptom], and weight loss [symptom], which can be an early sign of diabetes.' Our evaluations show promising results for extracting concepts from medical corpora. The method allows medical analysts to easily and efficiently build taxonomies (in step 2) representing their domain-specific concepts and to automatically annotate a large number of texts (in step 3) for classification/summarization of medical reports.
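
A minimal sketch of steps 1-2 is below, using gensim's Word2Vec on a toy corpus as a stand-in for Phrase2Vec on PubMed; multi-word terms are pre-joined with underscores, and the corpus and seed terms are illustrative.

```python
from gensim.models import Word2Vec

# Tiny stand-in corpus; in the paper this would be a large in-domain corpus
# such as PubMed, with multi-word terms pre-joined (e.g., via gensim Phrases).
corpus = [
    ["patient", "reports", "dry_mouth", "and", "blurred_vision"],
    ["symptoms", "include", "itchy_skin", "fatigue", "and", "weight_loss"],
    ["dry_mouth", "fatigue", "and", "weight_loss", "suggest", "diabetes"],
] * 200

# Step 1: build the contextual model in an unsupervised manner
model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, seed=0)

# Step 2: expand a seed set for the "symptom" concept by embedding similarity
seeds = ["dry_mouth", "itchy_skin", "blurred_vision"]
expanded = model.wv.most_similar(positive=seeds, topn=5)
print(expanded)   # candidate additional symptom terms with similarity scores
```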

Keywords: clinical concepts, concept expansion, medical records annotation, medical records summarization

Procedia PDF Downloads 124
16559 Proactive WPA/WPA2 Security Using DD-WRT Firmware

Authors: Mustafa Kamoona, Mohamed El-Sharkawy

Abstract:

Although the latest Wireless Local Area Network technology, the Wi-Fi 802.11i standard, addresses many of the security weaknesses of the antecedent Wired Equivalent Privacy (WEP) protocol, there are still scenarios where network security is vulnerable. The first security model that 802.11i offers is the Personal model, which is very cheap and simple to install and maintain, yet it uses a Pre-Shared Key (PSK) and thus has a low-to-medium security level. The second model that 802.11i provides is the Enterprise model, which is highly secure but much more expensive and difficult to install and maintain, since it requires the installation and maintenance of an authentication server that handles authentication and key management for the wireless network. A central issue with the Personal model is that the PSK needs to be shared with all the devices connected to the specific Wi-Fi network. This pre-shared key, unless changed regularly, can be cracked using offline dictionary attacks within a matter of hours, yet the key is burdensome to change manually on all connected devices unless some algorithm coordinates the PSK update. The key idea of this paper is to propose a new algorithm that proactively and effectively coordinates pre-shared key generation, management, and distribution in the cheap WPA/WPA2 Personal security model using only a DD-WRT router.
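
A minimal sketch of one way such coordination could work is below; this is an assumption for illustration, not the paper's algorithm. Each device derives the current epoch's passphrase from a shared long-term secret, so the rotated PSK never needs manual redistribution.

```python
import hashlib
import time

# Hypothetical scheme: every station holds a long-term master secret and
# derives the current epoch's WPA2 passphrase from it, so rotation needs no
# manual key distribution. Illustrative only, not the paper's algorithm.
MASTER_SECRET = b"example-long-term-shared-secret"

def psk_for_epoch(epoch: int, length: int = 20) -> str:
    digest = hashlib.sha256(MASTER_SECRET + epoch.to_bytes(8, "big")).hexdigest()
    return digest[:length]          # WPA2 passphrases must be 8-63 characters

epoch = int(time.time() // 86400)   # rotate once per day
print("today's PSK:", psk_for_epoch(epoch))
print("tomorrow's PSK:", psk_for_epoch(epoch + 1))
```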

Keywords: Wi-Fi, WPS, TLS, DD-WRT

Procedia PDF Downloads 219
16558 Forecasting Age-Specific Mortality Rates and Life Expectancy at Birth for Malaysian Sub-Populations

Authors: Syazreen N. Shair, Saiful A. Ishak, Aida Y. Yusof, Azizah Murad

Abstract:

In this paper, we forecast age-specific Malaysian mortality rates and life expectancy at birth by gender and ethnic group, including Malay, Chinese, and Indian. Two mortality forecasting models are adopted: the original Lee-Carter model and its recent modified version, the product-ratio coherent model. While the first forecasts the mortality rates for each sub-population independently, the latter accounts for the relationship between sub-populations. Both models are evaluated using out-of-sample forecast errors: mean absolute percentage errors (MAPE) for mortality rates and mean forecast errors (MFE) for life expectancy at birth. The best model is then used to produce long-term forecasts up to the year 2030, when Malaysia is expected to become an aged nation. The results suggest that, in terms of overall accuracy, the product-ratio model performs better than the original Lee-Carter model, and that associating the lower-mortality group (Chinese) with the others in the sub-population model can improve the forecasts for the higher-mortality groups (Malay and Indian).
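
A minimal sketch of the original Lee-Carter fit is below: SVD estimation of log m(x,t) = a_x + b_x k_t on a simulated mortality surface, with k_t extrapolated as a random walk with drift. The product-ratio coherent variant and the MAPE/MFE evaluation are omitted.

```python
import numpy as np

# Lee-Carter: log m(x,t) = a_x + b_x * k_t + e(x,t), fitted via SVD.
# Hypothetical mortality surface: ages x years.
rng = np.random.default_rng(4)
ages, years = 20, 30
log_m = (np.linspace(-8, -2, ages)[:, None]        # age pattern
         + np.outer(np.linspace(0.1, 0.3, ages),   # b_x-like loadings
                    np.linspace(5, -5, years))     # declining period effect
         + rng.normal(0, 0.02, (ages, years)))

a_x = log_m.mean(axis=1)                           # age-specific average
U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
b_x = U[:, 0] / U[:, 0].sum()                      # normalized so sum(b_x) = 1
k_t = s[0] * Vt[0] * U[:, 0].sum()

# Forecast k_t as a random walk with drift, then rebuild mortality rates
drift = np.diff(k_t).mean()
k_future = k_t[-1] + drift * np.arange(1, 11)
m_forecast = np.exp(a_x[:, None] + np.outer(b_x, k_future))
print(m_forecast.shape)   # 10-year-ahead age-specific mortality rates
```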

Keywords: coherent forecasts, life expectancy at birth, Lee-Carter model, product-ratio model, mortality rates

Procedia PDF Downloads 209
16557 Efficient Sampling of Probabilistic Program for Biological Systems

Authors: Keerthi S. Shetty, Annappa Basava

Abstract:

In recent years, modelling of biological systems represented by biochemical reactions has become increasingly important in systems biology, and such systems are highly stochastic in nature, so probabilistic models are often used to describe them. One of the main challenges in systems biology is to incorporate absolute experimental data into a probabilistic model. This challenge arises because (1) some molecules may be present in relatively small quantities, (2) there is switching between individual elements present in the system, and (3) the process is inherently stochastic at the level at which observations are made. In this paper, we describe a novel idea for incorporating absolute experimental data into a probabilistic model using the tool R2. Through a case study of the transcription process in prokaryotes, we explain how biological systems can be written as probabilistic programs that combine experimental data with the model. The developed model is then analysed in terms of intrinsic noise and exact sampling of the switching times between individual elements in the system. We have mainly concentrated on inferring the number of genes in the ON and OFF states from experimental data.
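
A minimal sketch of exact sampling for the kind of ON/OFF switching process described above is below: a Gillespie simulation of a two-state gene with transcription and mRNA degradation. The rate constants are hypothetical, and the paper's probabilistic-program and inference machinery in R2 is not shown.

```python
import numpy as np

# Gillespie (exact stochastic) sampling of a two-state (ON/OFF) gene switch.
k_on, k_off = 0.1, 0.05      # promoter switching rates [1/s] (assumed)
k_tx, k_deg = 2.0, 0.02      # transcription and mRNA degradation rates (assumed)

rng = np.random.default_rng(5)
t, t_end, state, mrna = 0.0, 1000.0, 0, 0    # state: 0 = OFF, 1 = ON
history = []

while t < t_end:
    rates = np.array([k_on * (state == 0),   # OFF -> ON
                      k_off * (state == 1),  # ON -> OFF
                      k_tx * (state == 1),   # transcribe one mRNA
                      k_deg * mrna])         # degrade one mRNA
    total = rates.sum()
    t += rng.exponential(1.0 / total)        # exact waiting time
    event = rng.choice(4, p=rates / total)   # which reaction fires
    if event == 0: state = 1
    elif event == 1: state = 0
    elif event == 2: mrna += 1
    else: mrna -= 1
    history.append((t, state, mrna))

print("final mRNA copy number:", mrna)
```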

Keywords: systems biology, probabilistic model, inference, biology, model

Procedia PDF Downloads 333
16556 Machine Learning Model Applied for SCM Processes to Efficiently Determine Its Impacts on the Environment

Authors: Elena Puica

Abstract:

This paper aims to investigate the impact of Supply Chain Management (SCM) on the environment by applying a Machine Learning model, while pointing out the efficiency of the technology used. The Machine Learning model was used to derive the efficiency and optimization of the technology used in SCM and the environmental impact of SCM processes. The model applied is a predictive classification model, trained first to determine which stage of the SCM produces more outputs and second to demonstrate the efficiency of using advanced technology in SCM instead of resorting to traditional SCM. The outputs are the emissions generated in the environment, the consumption from different steps in the life cycle, the resulting pollutants/wastes emitted, and all releases to air, land, and water. This manuscript presents an innovative approach to applying advanced technology in SCM while simultaneously studying the efficiency of the technology and SCM's impact on the environment. Identifying the conceptual relationships between SCM practices and their impact on the environment is a new contribution to the research, and by applying technology the authors take a step forward in developing recent studies on SCM and its effects on the environment.

Keywords: machine-learning model in SCM, SCM processes, SCM and the environmental impact, technology in SCM

Procedia PDF Downloads 105
16555 The Effect of Action Potential Duration and Conduction Velocity on Cardiac Pumping Efficacy: Simulation Study

Authors: Ana Rahma Yuniarti, Ki Moo Lim

Abstract:

Slowed myocardial conduction velocity (CV) and shortened action potential duration (APD) are associated with an increased risk of re-entrant excitation, predisposing to cardiac arrhythmia, because both CV reduction and APD shortening shorten the excitation wavelength. In this study, we quantitatively investigated the cardiac mechanical responses under various CV and APD values using a multi-scale computational model of the heart. The model consisted of an electrical model coupled with a mechanical contraction model, together with a lumped model of the circulatory system. The electrical model consisted of a tetrahedral mesh with 149,344 nodes and 183,993 elements, whereas the mechanical model consisted of a hexahedral mesh with 356 nodes and 172 elements with a Hermite basis. We performed the electrical simulation under two scenarios: (1) varying the CV with constant APD and (2) varying the APD with constant CV, and then compared the electrical and mechanical responses for both scenarios. Our simulation showed that faster CV and longer APD induced the largest resulting wavelength and produced better cardiac pumping efficacy, increasing cardiac output while consuming less energy. This is because the longer wave propagation and faster conduction generated a more synchronous contraction of the whole ventricle.

Keywords: conduction velocity, action potential duration, mechanical contraction model, circulatory model

Procedia PDF Downloads 192
16554 Application of Computational Fluid Dynamics (CFD) Analysis for Surge Inception and Propagation for Low Head Hydropower Projects

Authors: M. Mohsin Munir, Taimoor Ahmad, Javed Munir, Usman Rashid

Abstract:

Determining the maximum elevation of a flowing fluid due to sudden load rejection in a hydropower facility is of great interest to hydraulic engineers, to ensure the safety of the hydraulic structures. Several mathematical models employ one-dimensional modeling for the determination of surge, but none of them perfectly simulates real-time circumstances. This paper investigates surge inception and propagation for a low head hydropower project using Computational Fluid Dynamics (CFD) analysis in the FLOW-3D software package. The fluid dynamics model analyzes surge by employing the Reynolds-Averaged Navier-Stokes Equations (RANSE). The CFD model is designed for a case study at the Taunsa hydropower project in Pakistan. Various scenarios were run through the model, keeping in view the upstream boundary conditions, and the prototype results were compared with physical model testing for the same scenarios. The numerical model results showed quite accurate coherence with the physical model testing, offer insight into phenomena that are not apparent in the physical model, and shall be adopted in the future for similar low head projects, limiting the delays and costs incurred in physical model testing.

Keywords: surge, FLOW-3D, numerical model, Taunsa, RANSE

Procedia PDF Downloads 349
16553 Joint Modeling of Bottle Use, Daily Milk Intake from Bottles, and Daily Energy Intake in Toddlers

Authors: Yungtai Lo

Abstract:

The current study follows an educational intervention on bottle-weaning to simultaneously evaluate the effect of the intervention on reducing bottle use, daily milk intake from bottles, and daily energy intake in toddlers aged 11 to 13 months. A shared parameter model and a random effects model are used to jointly model bottle use, daily milk intake from bottles, and daily energy intake. We show in the two joint models that the bottle-weaning intervention promotes bottle-weaning and reduces both daily milk intake from bottles in toddlers not yet off bottles and daily energy intake. We also show that the odds of drinking from a bottle were positively associated with the amount of milk intake from bottles, and that increased daily milk intake from bottles was associated with increased daily energy intake. The effect of bottle use on daily energy intake thus operates through increasing daily milk intake from bottles, which in turn increases daily energy intake.
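
A minimal sketch of the two-part idea behind such joint models is below, fitted on simulated data with statsmodels: a logistic part for any bottle use and a Gamma regression with log link for the positive intake amounts. The full shared-parameter and random-effects machinery of the paper is omitted, and all simulated effects are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Two-part model for a semi-continuous outcome (daily milk intake from
# bottles): part 1 models any bottle use, part 2 models the positive amount.
rng = np.random.default_rng(6)
n = 600
group = rng.integers(0, 2, n)                      # 1 = intervention arm
p_use = 1 / (1 + np.exp(-(0.8 - 0.9 * group)))     # intervention lowers use
use = rng.binomial(1, p_use)
amount = np.where(use == 1, rng.gamma(4, np.exp(5.0 - 0.3 * group) / 4), 0.0)

X = sm.add_constant(pd.DataFrame({"group": group}))

# Part 1: probability of any bottle use (logistic regression)
part1 = sm.GLM(use, X, family=sm.families.Binomial()).fit()

# Part 2: mean intake among toddlers not yet off bottles (Gamma, log link)
mask = use == 1
part2 = sm.GLM(amount[mask], X[mask],
               family=sm.families.Gamma(link=sm.families.links.Log())).fit()

print(part1.params["group"], part2.params["group"])
```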

Keywords: two-part model, semi-continuous variable, joint model, gamma regression, shared parameter model, random effects model

Procedia PDF Downloads 276
16552 A Biologically Inspired Approach to Automatic Classification of Textile Fabric Prints Based On Both Texture and Colour Information

Authors: Babar Khan, Wang Zhijie

Abstract:

Machine vision has been playing a significant role in industrial automation, imitating a wide variety of human functions while providing improved safety, reduced labour cost, the elimination of human error and/or subjective judgments, and the creation of timely statistical product data. Despite intensive research, there have been no attempts to classify fabric prints based on printed texture and colour; most research so far encompasses only black-and-white or greyscale images. We propose a biologically inspired processing architecture to classify fabrics with respect to fabric print texture and colour. We created a texture descriptor based on the HMAX model for machine vision and incorporated a colour descriptor based on opponent colour channels, simulating the single-opponent and double-opponent neuronal functions of the brain. We found that our algorithm not only outperformed the original HMAX algorithm on classification of fabric print texture and colour, but also achieved a recognition accuracy of 85-100% on fabrics of different colours and textures.
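
A sketch of the colour front end is below: the common single-opponent decomposition of an RGB patch into red-green, blue-yellow, and intensity channels. The exact opponent channels and normalization used in the paper may differ.

```python
import numpy as np

def opponent_channels(img):
    """Single-opponent colour channels from an RGB image (values in [0, 1]).
    Standard R-G / B-Y / intensity decomposition; the paper's exact
    channels may differ."""
    R, G, B = img[..., 0], img[..., 1], img[..., 2]
    o1 = (R - G) / np.sqrt(2.0)               # red-green opponency
    o2 = (R + G - 2.0 * B) / np.sqrt(6.0)     # blue-yellow opponency
    o3 = (R + G + B) / np.sqrt(3.0)           # intensity (achromatic)
    return np.stack([o1, o2, o3], axis=-1)

# Hypothetical fabric print patch
rng = np.random.default_rng(7)
patch = rng.random((32, 32, 3))
opp = opponent_channels(patch)
print(opp.shape)   # (32, 32, 3): inputs for the colour descriptor
```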

Keywords: automatic classification, texture descriptor, colour descriptor, opponent colour channel

Procedia PDF Downloads 468
16551 A Numerical Model Simulation for an Updraft Gasifier Using High-Temperature Steam

Authors: T. M. Ismail, M. A. El-Salam

Abstract:

A mathematical model study was carried out to investigate the gasification of biomass fuels using high-temperature air (up to 1000°C) and steam as the gasifying agents. In this study, a 2D computational fluid dynamics model was developed to study the gasification process in an updraft gasifier, considering drying, pyrolysis, combustion, and gasification reactions. The gas and solid phases were resolved using an Euler-Euler multiphase approach, with exchange terms for momentum, mass, and energy. The standard k-ε turbulence model was used in the gas phase, and the particle phase was modeled using the kinetic theory of granular flow. The results show that the present model offers a promising way forward in its capability and its sensitivity to the parameter effects that influence the gasification process.

Keywords: computational fluid dynamics, gasification, biomass fuel, fixed bed gasifier

Procedia PDF Downloads 386
16550 Multiphase Flow Model for 3D Numerical Model Using ANSYS for Flow over Stepped Cascade with End Sill

Authors: Dheyaa Wajid Abbood, Hanan Hussien Abood

Abstract:

The stepped cascade has been utilized as a hydraulic structure for years and has proven to be the least costly aeration system for replenishing dissolved oxygen. Numerical modeling of a stepped cascade with an end sill is complicated and challenging because of the high roughness and the velocity recirculation regions. The volume of fluid (VOF) multiphase flow model is used, and the realizable k-ε model is chosen to simulate turbulence. The computational results are compared with lab-scale stepped cascade data. The lab-scale model was constructed in the hydraulic laboratory of Al-Mustansiriya University, Iraq; the stepped cascade was 0.23 m wide and consisted of 3 steps, each 0.2 m high and 0.6 m long, with a variable end sill. The discharge was varied from 1 to 4 l/s. ANSYS was employed to simulate the experimental data and their related results. This study shows that ANSYS is able to predict results almost the same as the experimental findings in some regions of the structure.

Keywords: stepped cascade weir, aeration, multiphase flow model, ANSYS

Procedia PDF Downloads 324
16549 Developing an Integrated Seismic Risk Model for Existing Buildings in Northern Algeria

Authors: R. Monteiro, A. Abarca

Abstract:

Large-scale seismic risk assessment has become increasingly popular for evaluating the physical vulnerability of a given region to seismic events by putting together hazard, exposure, and vulnerability components. This study, developed within the scope of the EU-funded project ITERATE (Improved Tools for Disaster Risk Mitigation in Algeria), explains the steps and expected results of the development of an integrated seismic risk model for assessing the vulnerability of residential buildings in Northern Algeria. For this purpose, the model foresees the consideration of an updated seismic hazard model, as well as ad-hoc exposure and physical vulnerability models for local residential buildings. The first results of this endeavor, such as the hazard model and a specific taxonomy to be used for the exposure and fragility components of the model, are presented, using the province of Blida, Algeria, as a starting point. Specific remarks and conclusions regarding the characteristics of the Northern Algerian building stock are then made based on these results.

Keywords: Northern Algeria, risk, seismic hazard, vulnerability

Procedia PDF Downloads 190
16548 Modelling of Atomic Force Microscopic Nano Robot's Friction Force on Rough Surfaces

Authors: M. Kharazmi, M. Zakeri, M. Packirisamy, J. Faraji

Abstract:

Micro/nanorobotics, or the manipulation of nanoparticles by Atomic Force Microscopy (AFM), is one of the most important approaches for controlling the movement of atoms, particles, and micro/nanometric components and assembling them into micro/nanometer-scale tools. Accurate modelling of manipulation requires the identification of forces and mechanical knowledge at the nanoscale, which differ from the macro world. Because of the importance of adhesion forces and surface interactions at the nanoscale, several friction models have been presented. In this research, the friction and normal forces applied to the AFM are obtained from the dynamic bending-torsion model of the AFM, based on the Hurtado-Kim (HK) friction model, the Johnson-Kendall-Roberts (JKR) contact model, and the Greenwood-Williamson (GW) roughness model. Finally, the effect of the standard deviation of asperity heights on the normal load, friction force, and friction coefficient is studied.
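
A sketch of the Greenwood-Williamson ingredient is below: the expected normal load for a Gaussian distribution of asperity heights, computed by numerical quadrature. All material and surface parameters are assumed for illustration.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Greenwood-Williamson rough-surface contact: expected normal load as an
# integral over the Gaussian asperity-height distribution.
E_star = 70e9      # reduced elastic modulus [Pa] (assumed)
R = 50e-9          # asperity tip radius [m] (assumed)
eta = 1e14         # areal asperity density [1/m^2] (assumed)
sigma = 1e-9       # std of asperity heights [m] (assumed)
A_nom = 1e-12      # nominal contact area [m^2] (assumed)

def normal_load(d):
    """P(d) = eta*A*(4/3)*E*sqrt(R) * integral_d^inf (z-d)^1.5 phi(z) dz."""
    integrand = lambda z: (z - d) ** 1.5 * norm.pdf(z, scale=sigma)
    val, _ = quad(integrand, d, d + 10 * sigma)
    return eta * A_nom * (4.0 / 3.0) * E_star * np.sqrt(R) * val

for d in (0.5 * sigma, 1.0 * sigma, 2.0 * sigma):
    print(f"separation {d/sigma:.1f} sigma -> load {normal_load(d):.3e} N")
```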

Keywords: atomic force microscopy, contact model, friction coefficient, Greenwood-Williamson model

Procedia PDF Downloads 187
16547 Wind Wave Modeling Using MIKE 21 SW Spectral Model

Authors: Pouya Molana, Zeinab Alimohammadi

Abstract:

Determining wind wave characteristics is essential for coastal and marine engineering projects such as designing coastal and marine structures and estimating sediment transport and coastal erosion rates. In order to predict significant wave height (H_s), this study applies the third-generation spectral wave model MIKE 21 SW along with the CEM method. For SW model calibration and verification, two data sets, meteorological and wave-spectrum, are used. The model was forced with time-varying wind, and the results showed that the difference ratio mean, the standard deviation of the difference ratio, and the correlation coefficient of the SW model for the H_s parameter are 1.102, 0.279, and 0.983, respectively, whereas for the CEM method the same statistics are 0.869, 1.317, and 0.8359, respectively. Comparing these results reveals that the CEM method has larger errors than the MIKE 21 SW third-generation spectral wave model, and that a higher correlation coefficient does not necessarily mean higher accuracy.

Keywords: MIKE 21 SW, CEM method, significant wave height, difference ratio

Procedia PDF Downloads 388
16546 Superiority of High Frequency Based Volatility Models: Empirical Evidence from an Emerging Market

Authors: Sibel Celik, Hüseyin Ergin

Abstract:

This paper aims to find the best volatility forecasting model for stock markets in Turkey. For this purpose, we compare the performance of different volatility models, both the traditional GARCH model and high-frequency-based volatility models, and conclude that in both the pre-crisis and crisis periods, the high-frequency-based volatility models perform better than the traditional GARCH model. The findings of the paper are important for policy makers, financial institutions, and investors.
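
A minimal sketch of the two ingredients being compared is below, using the Python arch package for the GARCH(1,1) fit and the standard realized-variance estimator for the high-frequency benchmark; the return series is simulated.

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(8)

# Simulated 5-minute returns for 250 trading days (78 intervals per day)
intraday = rng.normal(0, 0.001, (250, 78))
daily = intraday.sum(axis=1)

# High-frequency estimator: daily realized variance = sum of squared
# intraday returns; its square root is the realized volatility.
realized_vol = np.sqrt((intraday**2).sum(axis=1))

# Traditional GARCH(1,1) fitted to (percentage) daily returns
res = arch_model(100 * daily, vol="Garch", p=1, q=1).fit(disp="off")
garch_vol = res.conditional_volatility / 100

# A forecast comparison would use losses such as RMSE against realized_vol
print(np.mean((garch_vol - realized_vol) ** 2) ** 0.5)
```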

Keywords: volatility, GARCH model, realized volatility, high frequency data

Procedia PDF Downloads 479
16545 Storms Dynamics in the Black Sea in the Context of the Climate Changes

Authors: Eugen Rusu

Abstract:

The objective of the proposed work is to analyze the wave conditions in the Black Sea basin, focusing especially on the spatial and temporal occurrences and the dynamics of the most extreme storms in the context of climate change. A numerical modelling system based on the spectral phase-averaged wave model SWAN has been implemented and validated against both in situ measurements and remotely sensed data along the entire sea. Moreover, a successive correction method for the assimilation of satellite data, based on optimal interpolation, has been coupled to the wave modelling system. Previous studies show that data assimilation considerably improves the reliability of the results provided by the modelling system, especially in the cases most sensitive to the accuracy of the wave predictions, such as extreme storm situations. It should be highlighted that the results provided by this wave modelling system are generally in line with those of similar wave prediction systems implemented in enclosed or semi-enclosed sea basins. Simulations of the wave modelling system with data assimilation have been performed for the 30-year period 1987-2016. Given this database, the next step was to analyze the intensity and dynamics of the strongest storms encountered in this period. According to the model results, the western side of the sea is considerably more energetic than the rest of the basin. In this western region, regular strong storms usually produce significant wave heights greater than 8 m, which may lead to maximum wave heights even greater than 15 m. Such strong storms may occur several times in one year, usually in wintertime or late autumn, and their frequency has become noticeably higher in the last decade. As for the most extreme storms, significant wave heights greater than 10 m and maximum wave heights close to 20 m (and even greater) may occur. Such extreme storms, which in the past were noticed only once in four or five years, have recently been encountered almost every year in the Black Sea, and this seems to be a consequence of climate change. The analysis also included the dynamics of the monthly and annual significant wave height maxima and the identification of the most probable spatial and temporal occurrences of extreme storm events. In conclusion, the present work provides valuable information on the characteristics of storm conditions and their dynamics in the Black Sea, an environment currently subject to heavy navigation traffic and intense offshore and nearshore activities, where the strong storms that systematically occur may produce accidents with very serious consequences.
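
The assimilation step is based on optimal interpolation; a minimal one-dimensional sketch of such an update is below, with a synthetic background field, synthetic altimeter observations, and assumed error covariances.

```python
import numpy as np

# One optimal-interpolation update: the SWAN background field x_b at grid
# points is corrected toward satellite observations y with gain
# K = B H^T (H B H^T + R)^-1. All values are synthetic.
n_grid, n_obs = 50, 5
rng = np.random.default_rng(9)

x_b = 2.0 + 0.5 * np.sin(np.linspace(0, np.pi, n_grid))  # background Hs [m]
obs_idx = rng.choice(n_grid, n_obs, replace=False)
H = np.zeros((n_obs, n_grid)); H[np.arange(n_obs), obs_idx] = 1.0
y = x_b[obs_idx] + rng.normal(0, 0.2, n_obs)             # altimeter Hs

# Gaussian background error covariance with an assumed correlation length
i = np.arange(n_grid)
B = 0.3**2 * np.exp(-((i[:, None] - i[None, :]) / 5.0) ** 2)
R = 0.2**2 * np.eye(n_obs)                               # observation errors

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_a = x_b + K @ (y - H @ x_b)                            # analysis field
print(np.abs(x_a - x_b).max())
```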

Keywords: Black Sea, extreme storms, SWAN simulations, waves

Procedia PDF Downloads 231
16544 Application of the Tripartite Model to the Link between Non-Suicidal Self-Injury and Suicidal Risk

Authors: Ashley Wei-Ting Wang, Wen-Yau Hsu

Abstract:

Objectives: The current study applies and expands the Tripartite Model to elaborate the link between non-suicidal self-injury (NSSI) and suicidal behavior. We propose a structural model of NSSI and suicidal risk in which negative affect (NA) predicts both anxiety and depression, positive affect (PA) predicts depression only, anxiety is linked to NSSI, and depression is linked to suicidal risk. Method: Four hundred and eighty-seven undergraduates participated. Data were collected by administering self-report questionnaires. We performed hierarchical regression and structural equation modeling to test the proposed structural model. Results: The results largely support the proposed structural model, with one exception: anxiety was strongly associated with NSSI and, to a lesser extent, with suicidal risk. Conclusions: We conclude that the co-occurrence of NSSI and suicidal risk is due to NA and anxiety, and that suicidal risk can be differentiated by depression. Further theoretical and practical implications are discussed.
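
A sketch of how the proposed path structure can be fitted as a structural equation model is below, using the semopy package on simulated data; the variable names, simulated effect sizes, and the observed-variable (path-analysis) simplification are assumptions for illustration.

```python
import numpy as np
import pandas as pd
from semopy import Model

# Simulate data roughly following the hypothesized paths:
# NA -> anxiety and depression, PA -> depression, anxiety -> NSSI,
# depression (and anxiety) -> suicidal risk.
rng = np.random.default_rng(12)
n = 487
NA, PA = rng.normal(size=n), rng.normal(size=n)
anxiety = 0.6 * NA + rng.normal(scale=0.8, size=n)
depression = 0.5 * NA - 0.4 * PA + rng.normal(scale=0.8, size=n)
nssi = 0.5 * anxiety + rng.normal(scale=0.9, size=n)
risk = 0.6 * depression + 0.2 * anxiety + rng.normal(scale=0.9, size=n)
data = pd.DataFrame(dict(NA=NA, PA=PA, anxiety=anxiety,
                         depression=depression, nssi=nssi, risk=risk))

desc = """
anxiety ~ NA
depression ~ NA + PA
nssi ~ anxiety
risk ~ depression + anxiety
"""
model = Model(desc)
model.fit(data)
print(model.inspect())   # path estimates to compare against the hypotheses
```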

Keywords: non-suicidal self-injury, suicidal risk, anxiety, depression, the tripartite model, hierarchical relationship

Procedia PDF Downloads 454
16543 Valuation of Caps and Floors in a LIBOR Market Model with Markov Jump Risks

Authors: Shih-Kuei Lin

Abstract:

The characterization of the arbitrage-free dynamics of interest rates is developed in this study in the presence of Markov jump risks, with the term structure of interest rates modeled through simple forward rates. We consider Markov jump risks by allowing randomness in jump sizes and independence between jump sizes and jump times. The Markov jump diffusion model is used to capture empirical phenomena and to accurately describe interest-rate jump risks in a financial market. We derive the arbitrage-free model of simple forward rates under the spot measure. Moreover, the analytical pricing formulas for a cap and a floor are derived under the forward measure when the jump size follows a lognormal distribution. In our empirical analysis, we find that the LIBOR market model with Markov jump risk better accounts for changes from/to different states and different rates.
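
As a baseline for the pricing formulas mentioned above, the sketch below implements Black's caplet formula under the standard lognormal LIBOR market model without jumps, i.e., the no-jump case that the paper's Markov jump model extends; all inputs are illustrative.

```python
import numpy as np
from scipy.stats import norm

def black_caplet(L0, K, sigma, T, delta, P_Td):
    """Black's caplet price under the standard lognormal LIBOR market model.
    L0: current simple forward rate for [T, T+delta]; K: strike;
    sigma: forward-rate volatility; P_Td: discount bond maturing at T+delta."""
    d1 = (np.log(L0 / K) + 0.5 * sigma**2 * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return delta * P_Td * (L0 * norm.cdf(d1) - K * norm.cdf(d2))

# Example: 1y caplet on the 6-month forward rate (illustrative inputs)
price = black_caplet(L0=0.03, K=0.03, sigma=0.20, T=1.0, delta=0.5, P_Td=0.97)
print(f"caplet price per unit notional: {price:.6f}")
```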

Keywords: arbitrage-free, cap and floor, Markov jump diffusion model, simple forward rate model, volatility smile, EM algorithm

Procedia PDF Downloads 413
16542 An Adjusted Network Information Criterion for Model Selection in Statistical Neural Network Models

Authors: Christopher Godwin Udomboso, Angela Unna Chukwu, Isaac Kwame Dontwi

Abstract:

In selecting a statistical neural network model, the Network Information Criterion (NIC) has been observed to be sample-biased because it does not account for sample size. The selection of a model from a set of fitted candidate models requires objective, data-driven criteria. In this paper, we derive and investigate the Adjusted Network Information Criterion (ANIC), based on Kullback's symmetric divergence, which is designed to be an asymptotically unbiased estimator of the expected Kullback-Leibler information of a fitted model. The analyses show that, on the whole, the ANIC improves model selection over the NIC across more sample sizes.
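
The ANIC formula itself is derived in the paper; as a generic illustration of why a sample-size adjustment matters, the sketch below contrasts a constant AIC-type penalty with the classical small-sample AICc correction, shown purely as an analogy and not as the paper's criterion.

```python
import numpy as np

# An AIC-type penalty 2p is constant in the sample size n, while a
# small-sample-corrected criterion (classical AICc form, used here only as
# an analogy for the ANIC idea) inflates the penalty for small n and
# converges to the uncorrected criterion as n grows.
def aic(loglik, p):
    return -2.0 * loglik + 2.0 * p

def aicc(loglik, p, n):
    return aic(loglik, p) + 2.0 * p * (p + 1) / (n - p - 1)

loglik, p = -120.0, 10   # hypothetical fitted log-likelihood and parameter count
for n in (25, 50, 200, 1000):
    print(n, round(aic(loglik, p), 1), round(aicc(loglik, p, n), 1))
```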

Keywords: statistical neural network, network information criterion, adjusted network, information criterion, transfer function

Procedia PDF Downloads 552