Search results for: applied stochastic model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 23120

22790 Development of a Human Vibration Model Considering Muscles and Stiffness of Intervertebral Discs

Authors: Young Nam Jo, Moon Jeong Kang, Hong Hee Yoo

Abstract:

Most human vibration models are constructed as multibody systems consisting of rigid bodies and spring-dampers. Such models are developed for particular postures and conditions, so they cannot be used for vibration analysis across varying postures and conditions. The purpose of this study is to develop a human vibration model that represents human vibration characteristics under various conditions by employing a musculoskeletal model. To do this, the human vibration model is developed based on biomechanical models, and muscle models are employed instead of spring-dampers. Muscle activations are controlled by a PD controller to maintain body posture while vertical vibration is applied. The gain values of the controller are obtained by minimizing the difference in apparent mass and acceleration transmissibility between experiment and analysis using an optimization method.
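As an illustration of the gain-identification step, the sketch below tunes PD gains by minimizing a sum-of-squares mismatch between a simulated and a "measured" transmissibility curve with SciPy. The response function, frequency grid, and data are hypothetical stand-ins, not the authors' musculoskeletal model.

```python
import numpy as np
from scipy.optimize import minimize

freqs = np.linspace(1.0, 20.0, 40)          # excitation frequencies (Hz), assumed
measured_tr = np.exp(-0.1 * freqs)          # placeholder for measured transmissibility

def simulate_transmissibility(kp, kd, freqs):
    # Stand-in for the musculoskeletal simulation: a damped SDOF response
    # whose stiffness and damping are shaped by the PD gains (illustrative only).
    wn, zeta = 6.0 * 2 * np.pi, 0.05 + 0.01 * kd
    w = 2 * np.pi * freqs
    r = w / (wn * (1 + 0.01 * kp))
    return np.sqrt((1 + (2 * zeta * r) ** 2) / ((1 - r**2) ** 2 + (2 * zeta * r) ** 2))

def objective(gains):
    kp, kd = gains
    return np.sum((simulate_transmissibility(kp, kd, freqs) - measured_tr) ** 2)

res = minimize(objective, x0=[1.0, 1.0], method="Nelder-Mead")
print("optimal gains (kp, kd):", res.x)
```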

Keywords: human vibration analysis, Hill-type muscle model, PD control, whole-body vibration

Procedia PDF Downloads 448
22789 Self-denigration in Doctoral Defense Sessions: Scale Development and Validation

Authors: Alireza Jalilifar, Nadia Mayahi

Abstract:

The dissertation defense, as a complicated and conflict-prone context, entails the adoption of elegant interactional strategies, one of which is self-denigration. This study aimed to develop and validate a self-denigration model that fits the context of doctoral defense sessions in applied linguistics. Two focus group discussions provided the basis for developing this conceptual model, which assumed 10 functions for self-denigration, namely good manners, modesty, affability, altruism, assertiveness, diffidence, coercive self-deprecation, evasion, diplomacy, and flamboyance. These functions were used to design a 40-item questionnaire on the attitudes of applied linguists concerning self-denigration in defense sessions. The confirmatory factor analysis of the questionnaire indicated the predictive ability of the measurement model. The findings of this study suggest that self-denigration in doctoral defense sessions is the social representation of the participants’ values, ideas, and practices, adopted as a negotiation strategy and a conflict management policy for the purpose of establishing harmony and maintaining resilience. This study has implications for doctoral students and academics and points the way to further research on self-denigration in other contexts.

Keywords: academic discourse, politeness, self-denigration, grounded theory, dissertation defense

Procedia PDF Downloads 137
22788 Spatial Time Series Models for Rice and Cassava Yields Based on Bayesian Linear Mixed Models

Authors: Panudet Saengseedam, Nanthachai Kantanantha

Abstract:

This paper proposes a linear mixed model (LMM) with spatial effects to forecast rice and cassava yields in Thailand simultaneously. A multivariate conditional autoregressive (MCAR) model is assumed to represent the spatial effects. A Bayesian method is used for parameter estimation via Gibbs-sampling Markov chain Monte Carlo (MCMC). The model is applied to monthly rice and cassava yield data extracted from the Office of Agricultural Economics, Ministry of Agriculture and Cooperatives of Thailand. The results show that the proposed model performs better in most provinces, in both the fitting and validation parts, compared to the simple exponential smoothing and conditional autoregressive (CAR) models from our previous study.

Keywords: Bayesian method, linear mixed model, multivariate conditional autoregressive model, spatial time series

Procedia PDF Downloads 395
22787 Model of Transhipment and Routing Applied to the Cargo Sector in Small and Medium Enterprises of Bogotá, Colombia

Authors: Oscar Javier Herrera Ochoa, Ivan Dario Romero Fonseca

Abstract:

This paper presents the design of a model for planning the distribution logistics operation. The significance of this work lies in its applicability to the analysis of small and medium enterprises (SMEs) of dry freight in Bogotá. The implementation consists of two stages: in the first, optimal planning is achieved through a hybrid model developed with mixed integer programming, which treats the transshipment operation as a combined load allocation model based on the classic transshipment model; in the second, the specific routing of that operation is obtained through the Clarke and Wright savings heuristic. As a result, an integral model is obtained to carry out the step-by-step planning of the distribution of dry freight for SMEs in Bogotá. In this manner, optimum assignments through transshipment centers are established, and the specific routing is then determined based on the shortest distance traveled.
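For the routing stage, the following is a minimal sketch of the Clarke and Wright savings heuristic on a toy instance; the distance matrix, demands, and vehicle capacity are illustrative, not the Bogotá data.

```python
import itertools

# Symmetric distance matrix: node 0 is the depot, 1..4 are customers (illustrative).
d = [[0, 4, 6, 7, 5],
     [4, 0, 3, 6, 6],
     [6, 3, 0, 4, 7],
     [7, 6, 4, 0, 3],
     [5, 6, 7, 3, 0]]
demand = {1: 2, 2: 3, 3: 2, 4: 1}
capacity = 5

routes = {i: [i] for i in demand}              # start with one route per customer
savings = sorted(((d[0][i] + d[0][j] - d[i][j], i, j)
                  for i, j in itertools.combinations(demand, 2)), reverse=True)

def find_route(node):
    # Classic CW merges only at route endpoints
    for r in routes.values():
        if node in (r[0], r[-1]):
            return r
    return None

for s, i, j in savings:
    ri, rj = find_route(i), find_route(j)
    if ri is None or rj is None or ri is rj:
        continue
    if sum(demand[k] for k in ri + rj) > capacity:
        continue
    if ri[-1] != i: ri.reverse()               # orient routes so i and j touch
    if rj[0] != j: rj.reverse()
    merged = ri + rj
    for key, val in list(routes.items()):
        if val is ri or val is rj:
            del routes[key]
    routes[merged[0]] = merged

print([[0] + r + [0] for r in routes.values()])   # depot-to-depot routes
```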

Keywords: transshipment model, mixed integer programming, saving algorithm, dry freight transportation

Procedia PDF Downloads 229
22786 Fuzzy Inference System for Risk Assessment Evaluation of Wheat Flour Product Manufacturing Systems

Authors: Yas Barzegaar, Atrin Barzegar

Abstract:

The aim of this research is to develop an intelligent system to analyze the risk level of a wheat flour product manufacturing system. The model consists of five Fuzzy Inference Systems arranged in two layers. The first layer comprises four Fuzzy Inference Systems with three criteria each; the outputs of the Physical, Chemical, Biological, and Environmental failure subsystems serve as the inputs of the final system. The proposed model, based on Mamdani Fuzzy Inference Systems, gives a performance ranking of wheat flour product manufacturing systems. The first step is obtaining data to identify the failure modes from experts’ opinions. The second step is the fuzzification process to convert crisp inputs to fuzzy sets; the IF-THEN fuzzy rules are then applied through the inference engine, and in the final step, the defuzzification process converts the fuzzy output into real numbers.
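As a toy illustration of the Mamdani pipeline described above (fuzzification, min implication, max aggregation, centroid defuzzification), the sketch below implements a one-input, two-rule system in plain Python; the membership functions and the crisp input are invented for illustration.

```python
import numpy as np

def tri(x, a, b, c):
    # Triangular membership function on [a, c] with peak at b
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

x_risk = np.linspace(0.0, 10.0, 501)
severity = 6.5                                      # crisp input

# Fuzzification of the input
mu_low, mu_high = tri(severity, 0, 0, 5), tri(severity, 5, 10, 10)

# Rule base: IF severity is low THEN risk is minor; IF high THEN major (min implication)
minor = np.minimum(mu_low, tri(x_risk, 0, 2, 5))
major = np.minimum(mu_high, tri(x_risk, 5, 8, 10))

# Aggregation (max) and centroid defuzzification
agg = np.maximum(minor, major)
risk = np.trapz(agg * x_risk, x_risk) / np.trapz(agg, x_risk)
print(f"crisp risk level: {risk:.2f}")
```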

Keywords: failure modes, fuzzy rules, fuzzy inference system, risk assessment

Procedia PDF Downloads 102
22785 Estimation of Optimum Parameters of Non-Linear Muskingum Model of Routing Using Imperialist Competition Algorithm (ICA)

Authors: Davood Rajabi, Mojgan Yazdani

Abstract:

The non-linear Muskingum model is an efficient method for flood routing; however, its efficiency is influenced by three applied parameters. This study therefore assesses the ability of the Imperialist Competition Algorithm (ICA) to estimate the optimum parameters of the non-linear Muskingum model. In addition to ICA, the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) were used to provide benchmarks against which to judge ICA. ICA was first applied to the Wilson flood routing problem; then, the routing of two flood events of the Doab Samsami River was investigated. For the Wilson flood, the objective function was the sum of squared deviations (SSQ) between observed and calculated discharges. For the two other floods, the sum of absolute deviations (SAD) between observed and calculated discharges was considered in addition to SSQ. For the first flood, GA showed the best performance based on SSQ, whereas ICA ranked first based on SAD. For the second flood, ICA performed better on both objective functions. According to the obtained results, ICA can be considered an appropriate method for estimating the parameters of the non-linear Muskingum model.
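A minimal sketch of the estimation problem follows: it routes an inflow hydrograph with the nonlinear Muskingum storage law S = K[xI + (1-x)O]^m and fits (K, x, m) by minimizing SSQ. SciPy's differential evolution stands in for ICA here, and the hydrograph values are illustrative, not the Wilson data.

```python
import numpy as np
from scipy.optimize import differential_evolution

inflow = np.array([22, 23, 35, 71, 103, 111, 109, 100, 86, 71,
                   59, 47, 39, 32, 28, 24, 22, 21], dtype=float)
observed = np.array([22, 21, 21, 26, 34, 44, 55, 66, 75, 82,
                     85, 84, 80, 73, 64, 54, 44, 36], dtype=float)
dt = 6.0  # routing time step (hours), assumed

def route(params):
    K, x, m = params
    out = np.empty_like(inflow)
    out[0] = inflow[0]
    S = K * (x * inflow[0] + (1 - x) * out[0]) ** m
    for t in range(len(inflow) - 1):
        # dS/dt = I - O, with O recovered from the nonlinear storage law
        O = (np.maximum(S / K, 1e-9) ** (1 / m) - x * inflow[t]) / (1 - x)
        S += dt * (inflow[t] - O)
        out[t + 1] = (np.maximum(S / K, 1e-9) ** (1 / m) - x * inflow[t + 1]) / (1 - x)
    return out

def ssq(params):
    return np.sum((route(params) - observed) ** 2)

res = differential_evolution(ssq, bounds=[(0.01, 1.0), (0.0, 0.5), (1.0, 3.0)], seed=1)
print("K, x, m:", res.x, " SSQ:", res.fun)
```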

Keywords: Doab Samsami River, genetic algorithm, imperialist competition algorithm, metaheuristic algorithms, particle swarm optimization, Wilson flood

Procedia PDF Downloads 504
22784 A Robust and Efficient Segmentation Method Applied for Cardiac Left Ventricle with Abnormal Shapes

Authors: Peifei Zhu, Zisheng Li, Yasuki Kakishita, Mayumi Suzuki, Tomoaki Chono

Abstract:

Segmentation of the left ventricle (LV) from cardiac ultrasound images provides a quantitative functional analysis of the heart to diagnose disease. The Active Shape Model (ASM) is a widely used approach for LV segmentation but suffers from the drawback that the initialization of the shape model is not sufficiently close to the target, especially when dealing with the abnormal shapes that occur in disease. In this work, a two-step framework is proposed to improve the accuracy and speed of model-based segmentation. First, a robust and efficient detector based on Hough forests is proposed to localize cardiac feature points, and these points are used to predict the initial fit of the LV shape model. Second, to achieve more accurate and detailed segmentation, ASM is applied to further fit the LV shape model to the cardiac ultrasound image. The performance of the proposed method is evaluated on a dataset of 800 cardiac ultrasound images, most of which show abnormal shapes. The proposed method is compared to several combinations of ASM and existing initialization methods. The experimental results demonstrate that the accuracy of feature point detection for initialization was improved by 40% compared to existing methods. Moreover, the proposed method significantly reduces the number of necessary ASM fitting loops, thus speeding up the whole segmentation process. The proposed method therefore achieves more accurate and efficient segmentation results and is applicable to unusual heart shapes associated with cardiac diseases, such as left atrial enlargement.

Keywords: Hough forest, active shape model, segmentation, cardiac left ventricle

Procedia PDF Downloads 339
22783 Nonequilibrium Effects in Photoinduced Ultrafast Charge Transfer Reactions

Authors: Valentina A. Mikhailova, Serguei V. Feskov, Anatoly I. Ivanov

Abstract:

In the last decade, nonequilibrium charge transfer has attracted considerable interest from the scientific community. Examples of such processes are the charge recombination in excited donor-acceptor complexes and the intramolecular electron transfer from the second excited electronic state. In these reactions, the charge transfer proceeds predominantly in the nonequilibrium mode. In excited donor-acceptor complexes, the nuclear nonequilibrium is created by the pump pulse; in intramolecular electron transfer from the second excited electronic state, it is created by the forward electron transfer. The kinetics of these nonequilibrium reactions demonstrate a number of peculiar properties, the most important of which are: (i) the absence of the Marcus normal region in the free energy gap law for charge recombination in excited donor-acceptor complexes, (ii) the extremely low quantum yield of the thermalized charge-separated state in ultrafast charge transfer from the second excited state, (iii) the nonexponential charge recombination dynamics in excited donor-acceptor complexes, and (iv) the dependence of the charge transfer rate constant on the excitation pulse frequency. This report shows that most of these kinetic features can be well reproduced in the framework of a stochastic point-transition multichannel model. The model involves an explicit description of the nonequilibrium excited state formation by the pump pulse and accounts for the reorganization of intramolecular high-frequency vibrational modes, for their relaxation, and for the solvent relaxation. The model is able to quantitatively reproduce the complex nonequilibrium charge transfer kinetics observed in modern experiments. Interpreting the nonequilibrium effects from a unified point of view in terms of the multichannel point-transition stochastic model makes it possible to see similarities and differences in the electron transfer mechanism across various molecular donor-acceptor systems and to formulate general regularities inherent in these phenomena. The nonequilibrium effects in photoinduced ultrafast charge transfer studied over the last 10 years are analyzed, and methods of suppressing ultrafast charge recombination are discussed. The extremely low quantum yield of the thermalized charge-separated state observed in ultrafast charge transfer from the second excited state in the complex consisting of 1,2,4-trimethoxybenzene and tetracyanoethylene in acetonitrile solution directly demonstrates that the effectiveness of nonequilibrium recombination can be close to unity. This experimental finding supports the idea that nonequilibrium charge recombination in excited donor-acceptor complexes can also be so effective that the fraction of thermalized complexes is negligible. The regularities inherent in equilibrium and nonequilibrium reactions and their fundamental differences are analyzed, namely the opposite dependencies of the charge transfer rates on the dynamical properties of the solvent: an increase in solvent viscosity decreases the thermal rate but increases the nonequilibrium rate. The dependencies of the rates on the solvent reorganization energy and the free energy gap can also differ considerably. This work was supported by the Russian Science Foundation (Grant No. 16-13-10122).

Keywords: charge recombination, higher excited states, free energy gap law, nonequilibrium

Procedia PDF Downloads 325
22782 The Optimum Mel-Frequency Cepstral Coefficients (MFCCs) Contribution to Iranian Traditional Music Genre Classification by Instrumental Features

Authors: M. Abbasi Layegh, S. Haghipour, K. Athari, R. Khosravi, M. Tafkikialamdari

Abstract:

An approach is proposed to find the optimum mel-frequency cepstral coefficients (MFCCs) for the Radif of Mirzâ Ábdollâh, the principal emblem and heart of Persian music, as performed by the most famous Iranian masters on two Iranian stringed instruments, the ‘Tar’ and the ‘Setar’. While investigating the variance of the MFCCs for each record in the music database of 1500 gushe of the repertoire, belonging to 12 modal systems (dastgâh and âvâz), we applied the Fuzzy C-Means clustering algorithm to each of the 12 coefficients and to different combinations of those coefficients. We repeated the experiment while increasing the number of coefficients, but the clustering accuracy remained the same. Therefore, we conclude that the first 7 MFCCs (V-7MFCC) are sufficient for classification of the Radif of Mirzâ Ábdollâh. Classical machine learning algorithms such as MLP neural networks, K-Nearest Neighbors (KNN), Gaussian Mixture Models (GMM), Hidden Markov Models (HMM), and Support Vector Machines (SVM) were employed. Finally, SVM shows the best performance in this study.
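A sketch of the V-7MFCC feature extraction and SVM classification, assuming librosa and scikit-learn are available; the file paths and dastgâh labels are hypothetical placeholders for the repertoire database.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

files = ["gushe_001.wav", "gushe_002.wav"]      # placeholder paths into the database
labels = ["shur", "mahur"]                      # corresponding modal systems (assumed)

def v7_mfcc(path):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=7)   # keep only the first 7 coefficients
    return mfcc.mean(axis=1)                            # summarize each coefficient over time

X = np.array([v7_mfcc(f) for f in files])
clf = SVC(kernel="rbf", C=10.0).fit(X, labels)
print(clf.predict(X))
```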

Keywords: radif of Mirzâ Ábdollâh, gushe, mel-frequency cepstral coefficients, fuzzy c-means clustering algorithm, k-nearest neighbors (KNN), Gaussian mixture model (GMM), hidden Markov model (HMM), support vector machine (SVM)

Procedia PDF Downloads 446
22781 Regional Advantages Analysis: An Interactive Approach of Comparative and Competitive Advantages

Authors: Abdolrasoul Ghasemi, Ali Arabmazar Yazdi, Yasaman Boroumand, Aliasghar Banouei

Abstract:

In regional studies, choosing an appropriate approach to analyze regional success or failure has always been a challenge. Hence, this study introduces an innovative approach to establish a link between a region's past success or failure and its potential success in the future. The former can be sought in the historical evaluation of comparative advantages, while the latter is portrayed as a competitive advantage analysis with a forward-looking approach. Based on the interaction of comparative and competitive advantages, activities are classified into four groups: activities with no advantage, hidden advantage, fragile advantage, and synergistic advantage. The location quotient method is applied to analyze the comparative advantage of activities, and Porter's diamond model, using the survey method, is applied to analyze their competitive advantage. According to the results, the shares of no-advantage, fragile-advantage, hidden-advantage, and synergistic-advantage activities are 10%, 42%, 16%, and 32%, respectively. To achieve economic development in regional activities, our model also provides priority levels: activities with synergistic advantage should be prioritized first, then those with hidden advantage, and finally activities with fragile advantage.
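The comparative-advantage stage reduces to the location quotient, sketched below with invented figures: LQ > 1 indicates a regional comparative advantage in an activity.

```python
# Location quotient: (regional employment share of an activity) / (national share).
def location_quotient(region_activity, region_total, nation_activity, nation_total):
    return (region_activity / region_total) / (nation_activity / nation_total)

lq = location_quotient(region_activity=1200, region_total=20000,
                       nation_activity=30000, nation_total=1_000_000)
print(f"LQ = {lq:.2f} ->", "comparative advantage" if lq > 1 else "no comparative advantage")
```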

Keywords: regional advantage, comparative advantage, competitive advantage, Porter's diamond model

Procedia PDF Downloads 353
22780 The University of California at Los Angeles-Young Autism Project: A Systematic Review of Replication Studies

Authors: Michael Nicolosi, Karola Dillenburger

Abstract:

The University of California at Los Angeles-Young Autism Project (UCLA-YAP) provides one of the best-known and most researched comprehensive applied behavior analysis-based intervention models for young children on the autism spectrum. This paper reports a systematic literature review of replication studies over more than 30 years. The data show that the relatively high-intensity UCLA-YAP model can be greatly beneficial for children on the autism spectrum, particularly with regard to their cognitive functioning and adaptive behavior. This review concludes that, while more research is always welcome, the impact of the UCLA-YAP model on autism interventions is justified by more than 30 years of outcome evidence.

Keywords: ABA, applied behavior analysis, autism, California at Los Angeles Young Autism project, intervention, Lovaas, UCLA-YAP

Procedia PDF Downloads 103
22779 Modelling of Atomic Force Microscopic Nano Robot's Friction Force on Rough Surfaces

Authors: M. Kharazmi, M. Zakeri, M. Packirisamy, J. Faraji

Abstract:

Micro/nanorobotics, or the manipulation of nanoparticles by Atomic Force Microscopy (AFM), is one of the most important approaches for controlling the movement of atoms, particles, and micro/nanometric components and for assembling them into micro/nanometer-scale tools. Accurate modelling of manipulation requires identification of forces and mechanical knowledge at the nanoscale, which differ from those of the macro world. Owing to the importance of adhesion forces and surface interactions at the nanoscale, several friction models have been presented. In this research, the friction and normal forces applied on the AFM are obtained using the dynamic bending-torsion model of the AFM, based on the Hurtado-Kim (HK) friction model, the Johnson-Kendall-Roberts (JKR) contact model, and the Greenwood-Williamson (GW) roughness model. Finally, the effect of the standard deviation of asperity heights on the normal load, friction force, and friction coefficient is studied.

Keywords: atomic force microscopy, contact model, friction coefficient, Greenwood-Williamson model

Procedia PDF Downloads 199
22778 A Study of Mode Choice Model Improvement Considering Age Grouping

Authors: Young-Hyun Seo, Hyunwoo Park, Dong-Kyu Kim, Seung-Young Kho

Abstract:

The purpose of this study is to provide an improved mode choice model with parameters that account for age grouping of prime-aged and older travelers. In this study, 2010 Household Travel Survey data were used, and improper samples were removed through the analysis. Chosen alternative, date of birth, mode, origin code, destination code, departure time, and arrival time were taken from the Household Travel Survey. By preprocessing the data, travel time, travel cost, mode, and the ratios of people aged 45 to 55 years, 55 to 65 years, and over 65 years were calculated. After this manipulation, the mode choice model was constructed in LIMDEP by maximum likelihood estimation. A significance test was conducted for nine parameters (three age groups for each of three modes). The test was then repeated for the mode choice model with the significant parameters plus the travel cost and travel time variables. The model estimation shows that, as age increases, the preference for the car decreases and the preference for the bus increases. This study is meaningful in that individual and household characteristics are applied to the aggregate model.
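A minimal sketch of a multinomial (conditional) logit with travel time, travel cost, and an age-share effect, estimated by maximum likelihood on synthetic data; the coefficients, modes, and data are illustrative, not the Household Travel Survey.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, J = 200, 3                                   # trips, modes (car / bus / rail), assumed
time = rng.uniform(10, 60, (n, J))              # travel time (minutes)
cost = rng.uniform(1, 10, (n, J))               # travel cost (currency units)
age65 = rng.uniform(0, 1, n)                    # household share aged over 65

true = (-0.05, -0.2, 0.8)                       # b_time, b_cost, age shift on bus utility
util = true[0] * time + true[1] * cost
util[:, 1] += true[2] * age65
choice = np.array([rng.choice(J, p=np.exp(u) / np.exp(u).sum()) for u in util])

def negll(b):
    v = b[0] * time + b[1] * cost
    v[:, 1] = v[:, 1] + b[2] * age65            # age effect on the bus alternative
    v -= v.max(axis=1, keepdims=True)           # numerical stability
    p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(n), choice]).sum()

res = minimize(negll, x0=np.zeros(3), method="BFGS")
print("estimates (b_time, b_cost, b_age65_bus):", res.x)
```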

Keywords: age grouping, aging, mode choice model, multinomial logit model

Procedia PDF Downloads 322
22777 Nonlinear Modeling of the PEMFC Based on NNARX Approach

Authors: Shan-Jen Cheng, Te-Jen Chang, Kuang-Hsiung Tan, Shou-Ling Kuo

Abstract:

The polymer electrolyte membrane fuel cell (PEMFC) is a time-varying nonlinear dynamic system, and traditional linear modeling approaches struggle to estimate the structure of the PEMFC system correctly. For this reason, this paper presents a nonlinear model of the PEMFC using the Neural Network Auto-Regressive model with eXogenous inputs (NNARX) approach. A multilayer perceptron (MLP) network is applied to evaluate the structure of the NNARX model of the PEMFC. The validity and accuracy of the NNARX model are tested by one-step-ahead prediction relating output voltage to input current, using experimental measurements of the PEMFC. The results show that the obtained nonlinear NNARX model can efficiently approximate the dynamics of the PEMFC, with the model output consistently matching the measured system output.
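A sketch of the NNARX structure: an MLP maps lagged voltages and currents to the next voltage, evaluated one step ahead. The synthetic series and lag orders are assumptions, not the experimental PEMFC data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
t = np.arange(1000)
i_cell = 10 + 2 * np.sin(0.02 * t) + 0.1 * rng.standard_normal(1000)   # input current (A)
v_cell = 0.9 - 0.02 * i_cell + 0.05 * np.cos(0.02 * t)                 # output voltage (V)

# NNARX regressors: two output lags and two input lags (na = nb = 2, assumed)
X = np.column_stack([v_cell[2 - k:len(v_cell) - k] for k in (1, 2)] +
                    [i_cell[2 - k:len(i_cell) - k] for k in (1, 2)])
y = v_cell[2:]

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                     random_state=0).fit(X[:800], y[:800])
print("one-step-ahead R^2 on held-out data:", model.score(X[800:], y[800:]))
```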

Keywords: PEMFC, neural network, nonlinear modeling, NNARX

Procedia PDF Downloads 381
22776 Tree-Based Inference for Regionalization: A Comparative Study of Global Topological Perturbation Methods

Authors: Orhun Aydin, Mark V. Janikas, Rodrigo Alves, Renato Assuncao

Abstract:

In this paper, a tree-based perturbation methodology for regionalization inference is presented. Regionalization is a constrained optimization problem that aims to create groups with similar attributes while satisfying spatial contiguity constraints. As in any constrained optimization problem, the spatial constraint may hinder convergence to a global minimum, resulting in spatially contiguous members of a group with dissimilar attributes. This paper presents a general methodology for rigorously perturbing spatial constraints through the use of random spanning trees. The framework can be used to quantify the effect of the spatial constraints on the overall regionalization result. We compare several types of stochastic spanning trees used in inference problems such as fuzzy regionalization and determining the number of regions. The performance of stochastic spanning trees is juxtaposed against the traditional permutation-based hypothesis testing frequently used in spatial statistics. Inference results for fuzzy regionalization and for determining the number of regions are presented using the Local Area Personal Income data for Texas counties provided by the Bureau of Economic Analysis.
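A sketch of one way to draw stochastic spanning trees of a contiguity graph, here by taking minimum spanning trees over i.i.d. random edge weights with networkx and then cutting edges to obtain candidate regions; the grid graph and this particular tree-sampling scheme are illustrative assumptions, not the paper's exact samplers.

```python
import random
import networkx as nx

random.seed(42)
G = nx.grid_2d_graph(4, 4)   # stands in for a county contiguity graph

def random_spanning_tree(G):
    # Fresh random weights each draw, then an MST -> a stochastic spanning tree
    for u, v in G.edges:
        G.edges[u, v]["w"] = random.random()
    return nx.minimum_spanning_tree(G, weight="w")

def cut_into_regions(tree, k):
    # Removing k-1 tree edges yields k spatially contiguous regions
    edges = random.sample(list(tree.edges), k - 1)
    forest = tree.copy()
    forest.remove_edges_from(edges)
    return list(nx.connected_components(forest))

tree = random_spanning_tree(G)
regions = cut_into_regions(tree, k=3)
print([sorted(r) for r in regions])
```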

Keywords: regionalization, constrained clustering, probabilistic inference, fuzzy clustering

Procedia PDF Downloads 228
22775 Energy Performance of Buildings Due to Downscaled Seasonal Models

Authors: Anastasia K. Eleftheriadou, Athanasios Sfetsos, Nikolaos Gounaris

Abstract:

The present work examines the suitability of a seasonal forecasting model, downscaled to a very high spatial resolution, for assessing the energy performance and requirements of buildings. The developed model is applied to Greece with a forecast horizon of five months into the future. Greece, a country in the middle of a financial crisis and facing serious societal challenges, is also very sensitive to climate change. The method commonly used to correlate climate change with building energy consumption is the concept of Degree Days (DD). This method can be applied to heating and cooling systems for better management of the environmental, economic, and energy crisis, and can serve as a medium-term (3-6 month) planning tool for predicting building needs and the country's requirements for residential energy use.
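A minimal sketch of the degree-day computation that links temperature forecasts to heating and cooling demand; the base temperature and the temperature series are illustrative.

```python
import numpy as np

# Mean temperatures (deg C) from the downscaled forecast, illustrative values
t_mean = np.array([4.2, 6.8, 10.1, 15.3, 21.7, 26.4, 28.9, 27.5, 23.0, 16.8, 10.4, 5.9])
base = 18.0                                  # common base temperature (assumed)

hdd = np.maximum(base - t_mean, 0.0).sum()   # heating degree days: heating demand proxy
cdd = np.maximum(t_mean - base, 0.0).sum()   # cooling degree days: cooling demand proxy
print(f"HDD = {hdd:.1f}, CDD = {cdd:.1f}")
```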

Keywords: downscaled seasonal models, degree days, energy performance

Procedia PDF Downloads 453
22774 A Multi-Objective Reliable Location-Inventory Capacitated Disruption Facility Problem with Penalty Cost Solved with Efficient Metaheuristic Algorithms

Authors: Elham Taghizadeh, Mostafa Abedzadeh, Mostafa Setak

Abstract:

A logistics network is expected to have its opened facilities work continuously over a long time horizon without any failure; in real-world problems, however, facilities may face disruptions. This paper studies a reliable joint inventory-location problem that optimizes facility location, customer assignment, and inventory management decisions when facilities face failure risks and may stop working. In our model, we assume that when a facility is out of work, its customers may be reassigned to other operational facilities; otherwise, they must endure high penalty costs associated with losing service. To bring the model closer to real-world problems, it is formulated based on the p-median problem, and the facilities are considered to have limited capacities. We define a new binary variable (Z_is) indicating that a customer is not assigned to any facility. Our problem involves a bi-objective model: the first objective minimizes the sum of facility construction costs and expected inventory holding costs, while the second minimizes the maximum expected customer costs under normal and failure scenarios. To solve this model, the NSGA-II and MOSS algorithms are applied to find the Pareto-archive solutions, and Response Surface Methodology (RSM) is applied to optimize the NSGA-II algorithm parameters. We compare the performance of the two algorithms on three metrics, and the results show that NSGA-II is more suitable for our model.

Keywords: joint inventory-location problem, facility location, NSGA-II, MOSS

Procedia PDF Downloads 525
22773 Logistic Regression Model versus Additive Model for Recurrent Event Data

Authors: Entisar A. Elgmati

Abstract:

Recurrent infant diarrhea is studied using daily data collected in Salvador, Brazil, over one year and three months. A logistic regression model is fitted instead of Aalen's additive model, using the same covariates that were used in the analysis with the additive model. The model gives results reasonably similar to those of the additive regression model. In addition, the problem of the estimated conditional probabilities not being constrained between zero and one in the additive model is solved here. Martingale residuals, which have been used to judge the goodness of fit of the additive model, are also shown to be useful for judging the goodness of fit of the logistic model.

Keywords: additive model, cumulative probabilities, infant diarrhoea, recurrent event

Procedia PDF Downloads 635
22772 Percolation Transition in an Agglomeration of Spherical Particles

Authors: Johannes J. Schneider, Mathias S. Weyland, Peter Eggenberger Hotz, William D. Jamieson, Oliver Castell, Alessia Faggian, Rudolf M. Füchslin

Abstract:

Agglomerations of polydisperse systems of spherical particles are created in computer simulations using a simplified stochastic-hydrodynamic model: Particles sink to the bottom of the cylinder, taking into account gravity reduced by the buoyant force, the Stokes friction force, the added mass effect, and random velocity changes. Two types of particles are considered, with one of them being able to create connections to neighboring particles of the same type, thus forming a network within the agglomeration at the bottom of a cylinder. Decreasing the fraction of these particles, a percolation transition occurs. The critical regime is determined by investigating the maximum cluster size and the percolation susceptibility.
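A simplified sketch of the percolation measurement: random points in a box stand in for the sedimented spheres, a fraction p of particles is of the connecting type, and the largest connected cluster is tracked as p varies. The geometry and contact rule are deliberate simplifications of the stochastic-hydrodynamic model.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n, contact = 2000, 0.035          # particle count and contact distance (illustrative)
pts = rng.random((n, 2))          # particle positions in the unit box

for p in (0.3, 0.5, 0.7):
    connecting = rng.random(n) < p            # mark the network-forming particles
    idx = np.flatnonzero(connecting)
    G = nx.Graph()
    G.add_nodes_from(idx)
    for a in range(len(idx)):                 # link connecting neighbors within range
        for b in range(a + 1, len(idx)):
            if np.linalg.norm(pts[idx[a]] - pts[idx[b]]) < contact:
                G.add_edge(idx[a], idx[b])
    largest = max((len(c) for c in nx.connected_components(G)), default=0)
    print(f"p = {p:.1f}: largest cluster = {largest} of {len(idx)}")
```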

Keywords: binary system, maximum cluster size, percolation, polydisperse

Procedia PDF Downloads 61
22771 Electricity Demand Modeling and Forecasting in Singapore

Authors: Xian Li, Qing-Guo Wang, Jiangshuai Huang, Jidong Liu, Ming Yu, Tan Kok Poh

Abstract:

In the power industry, accurate electricity demand forecasting for a certain lead time is important for system operation and control. In this paper, we investigate the modeling and forecasting of Singapore's electricity demand. Several standard models, such as the HWT exponential smoothing model, the ARMA model, and ANN models, were fitted to historical demand data. We applied them to the Singapore electricity market and proposed three refinements, based on simulation, to improve the modeling accuracy. Compared with the existing models, our refined model produces better forecasting accuracy. The simulation demonstrates that by adding the forecasting error into the forecasting equation, the modeling accuracy can be improved greatly.
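A sketch of the error-feedback refinement described above, on top of simple exponential smoothing: the next forecast is corrected by a term proportional to the previous forecast error. The demand series, smoothing constant, and feedback gain are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
demand = 5000 + 300 * np.sin(np.arange(200) / 24 * 2 * np.pi) + 50 * rng.standard_normal(200)

alpha, gain = 0.3, 0.5                        # smoothing constant and feedback gain, assumed
level, prev_err = demand[0], 0.0
refined = []
for y in demand:
    forecast = level + gain * prev_err        # base forecast + error feedback term
    refined.append(forecast)
    prev_err = y - forecast
    level = alpha * y + (1 - alpha) * level   # simple exponential smoothing update

mae = np.mean(np.abs(demand - np.array(refined)))
print(f"in-sample MAE with error feedback: {mae:.1f}")
```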

Keywords: power industry, electricity demand, modeling, forecasting

Procedia PDF Downloads 640
22770 Urban Big Data: An Experimental Approach to Building-Value Estimation Using Web-Based Data

Authors: Sun-Young Jang, Sung-Ah Kim, Dongyoun Shin

Abstract:

Current real-estate value estimation is difficult for laymen and usually performed by specialists. This paper presents an automated estimation process, based on big data and machine-learning technology, that calculates the influence of building conditions on real-estate price measurement. The present study analyzed actual building sales data for Nonhyeon-dong, Gangnam-gu, Seoul, Korea, measuring the major influencing factors among the various building conditions. Following that analysis, a prediction model was established and applied using RapidMiner Studio, a graphical user interface (GUI)-based tool for the derivation of machine-learning prototypes. The prediction model is formulated by reference to previous examples; when new examples are applied, it analyzes and predicts accordingly. The analysis process discerns the crucial factors affecting price increases by calculating weighted values. The model was verified, and its accuracy determined, by comparing its predicted values with actual price increases.

Keywords: apartment complex, big data, life-cycle building value analysis, machine learning

Procedia PDF Downloads 374
22769 Effects of Using Alternative Energy Sources and Technologies to Reduce Energy Consumption and Expenditure of a Single Detached House

Authors: Gul Nihal Gugul, Merih Aydinalp-Koksal

Abstract:

In this study, an hourly energy consumption model of a single detached house in Ankara, Turkey, is developed using the ESP-r building energy simulation software. Natural gas is used for space heating, cooking, and domestic water heating in this two-story, 4500-square-foot, four-bedroom home. Hourly electricity consumption of the home is monitored by an automated meter-reading system, and daily natural gas consumption was recorded by the owners during 2013. Climate data for the region and building envelope data are used to develop the model. The heating energy consumption of the house estimated by the ESP-r model is then compared with the actual heating demand to determine the performance of the model. Scenarios are applied to the model to determine the reduction in the total energy consumption of the house. The scenarios include using photovoltaic panels to generate electricity, ground source heat pumps for space heating, and solar panels for domestic hot water generation. Alternative scenarios, such as improving wall and roof insulation and window glazing, are also applied. These scenarios are evaluated based on annual energy, associated CO2 emission, and fuel expenditure savings. The payback periods for each scenario are also calculated to determine the best alternative energy source or technology option for this home to reduce annual energy use and CO2 emissions.
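A minimal sketch of the scenario comparison via simple payback period (capital cost divided by annual fuel-cost saving); all figures are illustrative, not the study's results.

```python
# Simple payback period per retrofit scenario (illustrative costs and savings)
scenarios = {
    "PV panels":               {"capital": 12000.0, "annual_saving": 950.0},
    "ground source heat pump": {"capital": 18000.0, "annual_saving": 1400.0},
    "roof insulation":         {"capital": 2500.0,  "annual_saving": 320.0},
}

for name, s in scenarios.items():
    payback = s["capital"] / s["annual_saving"]
    print(f"{name}: simple payback = {payback:.1f} years")
```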

Keywords: ESP-r, building energy simulation, residential energy saving, CO2 reduction

Procedia PDF Downloads 199
22768 A Parallel Cellular Automaton Model of Tumor Growth for Multicore and GPU Programming

Authors: Manuel I. Capel, Antonio Tomeu, Alberto Salguero

Abstract:

Tumor growth, from a transformed cancer cell up to a clinically apparent mass, spans a range of spatial and temporal magnitudes. Through computer simulations, Cellular Automata (CA) can accurately describe the complexity of the development of tumors. Tumor development prognosis can now be made, without making patients undergo annoying medical examinations or painful invasive procedures, if we develop appropriate CA-based software tools. In silico testing mainly refers to Computational Biology research studies applicable to clinical actions in Medicine. Establishing sound computer-based models of cellular behavior certainly reduces costs and saves precious time with respect to carrying out experiments in vitro at labs or in vivo with living cells and organisms. These models aim to produce scientifically relevant results compared to traditional in vitro testing, which is slow, expensive, and does not generally have acceptable reproducibility under the same conditions. For speeding up computer simulations of cellular models, the literature shows recent proposals based on the CA approach that include advanced techniques, such as the clever use of efficient supporting data structures when modeling with deterministic or stochastic cellular automata. Multiparadigm and multiscale simulation of tumor dynamics is just beginning to be developed by the research community concerned. The use of stochastic cellular automata (SCA), whose parallel programming implementations are open to yielding high computational performance, is of much interest to be explored up to its computational limits. There have been some approaches based on optimizations to advance multiparadigm models of tumor growth, which mainly pursue improved performance through guaranteed efficient memory accesses, or by considering the dynamic evolution of the memory space (grids, trees, …) that holds crucial data in simulations. In our opinion, the different optimizations mentioned above are not decisive enough to achieve the high-performance computing power that cell-behavior simulation programs actually need. The possibility of using multicore and GPU parallelism as a promising multiplatform framework to develop new programming techniques to speed up the computation time of simulations has only started to be explored in the last few years. This paper presents a model that incorporates parallel processing, identifying the synchronization necessary for speeding up tumor growth simulations implemented in Java and C++ programming environments. The speed-up obtained from specific parallel syntactic constructs, such as executors (thread pools) in Java, is studied. The new parallel tumor growth model is tested using implementations in the Java and C++ languages on two different platforms: an Intel Core i-X chipset and an HPC cluster of processors at our university. The parallelization of the Poleszczuk and Enderling model (normally used by researchers in mathematical oncology) proposed here is analyzed with respect to performance gain. We intend to apply the model and overall parallelization technique presented here to solid tumors of specific affiliation such as prostate, breast, or colon. Our final objective is to set up a multiparadigm model capable of modelling angiogenesis, the growth inhibition induced by chemotaxis, and the effect of therapies based on the presence of cytotoxic/cytostatic drugs.
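As a language-neutral illustration of a synchronous CA update step (the paper's implementations are in Java and C++), the sketch below grows a toy tumor on a grid with a vectorized stochastic birth rule; the rule and rates are invented and far simpler than the Poleszczuk-Enderling model.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.zeros((64, 64), dtype=np.int8)
grid[32, 32] = 1                      # initial transformed cell
p_div = 0.3                           # division probability per neighbor (illustrative)

for step in range(100):
    # count occupied von Neumann neighbors of every site (periodic boundary)
    occ = grid == 1
    nbrs = (np.roll(occ, 1, 0) + np.roll(occ, -1, 0) +
            np.roll(occ, 1, 1) + np.roll(occ, -1, 1))
    # an empty site becomes occupied with probability 1-(1-p_div)^neighbors;
    # all sites update synchronously, which is what the parallel versions exploit
    birth = (~occ) & (rng.random(grid.shape) < 1 - (1 - p_div) ** nbrs)
    grid[birth] = 1

print("tumor size after 100 steps:", int(grid.sum()))
```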

Keywords: cellular automaton, tumor growth model, simulation, multicore and manycore programming, parallel programming, high performance computing, speed up

Procedia PDF Downloads 244
22767 Calibration and Validation of the AquaCrop Model for Simulating Growth and Yield of Rain-fed Sesame (Sesamum indicum L.) Under Different Soil Fertility Levels in the Semi-arid Areas of Tigray

Authors: Abadi Berhane, Walelign Worku, Berhanu Abrha, Gebre Hadgu

Abstract:

Sesame is an important oilseed crop in Ethiopia, where it is the second most exported agricultural commodity after coffee. However, soil fertility management for the crop is poor, and a research-led farming system is lacking. The AquaCrop model was applied as a decision-support tool; it performs a semi-quantitative approach to simulate crop yield under different soil fertility levels. The objective of this experiment was to calibrate and validate the AquaCrop model for simulating the growth and yield of sesame under different nitrogen fertilizer levels and to test the performance of the model as a decision-support tool for improved sesame cultivation in the study area. The experiment was laid out as a randomized complete block design (RCBD) in a factorial arrangement in the 2016, 2017, and 2018 main cropping seasons. Four nitrogen fertilizer rates (0, 23, 46, and 69 kg/ha) and three improved varieties (Setit-1, Setit-2, and Humera-1) were used. Growth, yield, and yield components of sesame were collected from each treatment. The coefficient of determination (R2), root mean square error (RMSE), normalized root mean square error (N-RMSE), model efficiency (E), and degree of agreement (D) were used to test the performance of the model. The results indicated that the AquaCrop model successfully simulated soil water content, with R2 varying from 0.92 to 0.98, RMSE from 6.5 to 13.9 mm, E from 0.78 to 0.94, and D from 0.95 to 0.99; the corresponding values for aboveground biomass (AB) varied from 0.92 to 0.98, 0.33 to 0.54 tons/ha, 0.74 to 0.93, and 0.9 to 0.98, respectively. The results for canopy cover also showed that the model acceptably simulated canopy cover, with R2 varying from 0.95 to 0.99 and RMSE from 5.3 to 8.6%. The AquaCrop model was appropriately calibrated to simulate soil water content, canopy cover, aboveground biomass, and sesame yield, and the results indicated that the model adequately simulated the growth and yield of sesame under the different nitrogen fertilizer levels. The AquaCrop model may thus be an important tool for improved soil fertility management and yield enhancement strategies and may be applied as a decision-support tool in soil fertility management for sesame production.
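A sketch of the goodness-of-fit statistics used above, computed for invented observed/simulated biomass pairs; E is taken here in its common Nash-Sutcliffe form and D as Willmott's index of agreement, which is an assumption about the exact definitions used in the study.

```python
import numpy as np

obs = np.array([2.1, 2.8, 3.4, 3.9, 4.6, 5.1])   # observed biomass (tons/ha), illustrative
sim = np.array([2.3, 2.6, 3.5, 4.1, 4.4, 5.3])   # simulated biomass (tons/ha), illustrative

r2 = np.corrcoef(obs, sim)[0, 1] ** 2
rmse = np.sqrt(np.mean((sim - obs) ** 2))
nrmse = 100 * rmse / obs.mean()                   # normalized RMSE (% of observed mean)
e = 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)    # Nash-Sutcliffe E
d = 1 - np.sum((sim - obs) ** 2) / np.sum(
    (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)       # Willmott's D
print(f"R2={r2:.3f} RMSE={rmse:.3f} t/ha N-RMSE={nrmse:.1f}% E={e:.3f} D={d:.3f}")
```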

Keywords: aquacrop model, sesame, normalized water productivity, nitrogen fertilizer

Procedia PDF Downloads 75
22766 Multilevel Modeling of the Progression of HIV/AIDS Disease among Patients under HAART Treatment

Authors: Awol Seid Ebrie

Abstract:

HIV leads to an incurable disease, AIDS. After a person is infected with the virus, it gradually destroys the infection-fighting cells called CD4 cells and makes the individual susceptible to opportunistic infections, which cause severe or fatal health problems. Several studies show that the CD4 cell count is the most important indicator of the effectiveness of treatment and of the progression of the disease. The objective of this paper is to investigate the progression of the disease over time among patients under HAART treatment. Two main approaches to generalized multilevel ordinal models, namely the proportional odds model and the non-proportional odds model, have been applied to the HAART data. The multilevel part of both models includes random intercepts and random coefficients. In total, four models are explored in the analysis, and the models are compared using the deviance information criterion (DIC). Of these, the random-coefficients non-proportional odds model is selected as the best model for the HAART data, as it has the smallest DIC value. The selected model shows that the progression of the disease increases as the time under treatment increases. In addition, it reveals that gender, baseline clinical stage, and functional status of the patient have a significant association with the progression of the disease.

Keywords: nonproportional odds model, proportional odds model, random coefficients model, random intercepts model

Procedia PDF Downloads 421
22765 The Volume–Volatility Relationship Conditional to Market Efficiency

Authors: Massimiliano Frezza, Sergio Bianchi, Augusto Pianese

Abstract:

The relation between stock price volatility and trading volume represents a controversial issue which has received remarkable attention over the past decades. An extensive literature shows a positive relation between price volatility and trading volume in financial markets, but the causal relationship that originates this association is an open question, from both a theoretical and an empirical point of view. In this regard, various models, which can be considered complementary rather than competitive, have been introduced to explain the relationship. They include the long-debated Mixture of Distributions Hypothesis (MDH), the Sequential Arrival of Information Hypothesis (SAIH), the Dispersion of Beliefs Hypothesis (DBH), and the Noise Trader Hypothesis (NTH). In this work, we analyze whether stock market efficiency can explain the diversity of results achieved over the years. For this purpose, we propose an alternative measure of market efficiency, based on the pointwise regularity of a stochastic process: the Hurst–Hölder dynamic exponent. In particular, we model the stock market by means of the multifractional Brownian motion (mBm), which displays the property of time-changing regularity. Such models have in common the fact that they locally behave as a fractional Brownian motion, in the sense that their local regularity at time t0, measured by the local Hurst–Hölder exponent in a neighborhood of t0, equals the exponent of a fractional Brownian motion of parameter H(t0). Assuming that the stock price follows an mBm, we introduce and theoretically justify the Hurst–Hölder dynamic exponent as a measure of market efficiency. This makes it possible to measure, at any time t, the market's departures from the martingale property, i.e., from efficiency as stated by the Efficient Market Hypothesis. This approach is applied to financial markets: using data for the S&P 500 index from 1978 to 2017, we find on the one hand that when efficiency is not accounted for, a positive contemporaneous relationship emerges and is stable over time; conversely, it disappears as soon as efficiency is taken into account. In particular, this association is more pronounced during time frames of high volatility and tends to disappear when the market becomes fully efficient.

Keywords: volume–volatility relationship, efficient market hypothesis, martingale model, Hurst–Hölder exponent

Procedia PDF Downloads 78
22764 Evaluation of High Damping Rubber Considering Initial History through Dynamic Loading Test and Program Analysis

Authors: Kyeong Hoon Park, Taiji Mazuda

Abstract:

High damping rubber (HDR) bearings are energy-dissipating devices mainly used in seismic isolation systems and have great damping performance. Although many studies have been conducted on dynamic models of HDR bearings, few models can reflect phenomena such as the dependence on previously experienced shear strain (the initial history). In order to develop a model that can represent this dependence of HDR, caused by the Mullins effect, a dynamic loading test was conducted using an HDR specimen. The reaction of the HDR was measured by applying horizontal vibration with a hybrid actuator under a constant vertical load. Dynamic program analysis was also performed after the dynamic loading test. The dynamic model applied in the program analysis is a bilinear-type double-target model, modified from the typical bilinear model, which can express the nonlinear characteristics related to the initial history of HDR bearings. Based on the dynamic loading test and program analysis results, the equivalent stiffness and equivalent damping ratio were calculated to evaluate the mechanical properties of the HDR, and the feasibility of the bilinear-type double-target model was examined.

Keywords: base-isolation, bilinear model, high damping rubber, loading test

Procedia PDF Downloads 123
22763 Fuzzy Inference System for Risk Assessment Evaluation of Wheat Flour Product Manufacturing Systems

Authors: Atrin Barzegar, Yas Barzegar, Stefano Marrone, Francesco Bellini, Laura Verde

Abstract:

The aim of this research is to develop an intelligent system to analyze the risk level of a wheat flour product manufacturing system. The model consists of five Fuzzy Inference Systems arranged in two layers. The first layer comprises four Fuzzy Inference Systems with three criteria each; the outputs of the Physical, Chemical, Biological, and Environmental failure subsystems serve as the inputs of the final system. The proposed model, based on Mamdani Fuzzy Inference Systems, gives a performance ranking of wheat flour product manufacturing systems. The first step is obtaining data to identify the failure modes from experts’ opinions. The second step is the fuzzification process to convert crisp inputs to fuzzy sets; the IF-THEN fuzzy rules are then applied through the inference engine, and in the final step, the defuzzification process converts the fuzzy output into real numbers.

Keywords: failure modes, fuzzy rules, fuzzy inference system, risk assessment

Procedia PDF Downloads 75
22762 Springback Prediction for Sheet Metal Cold Stamping Using Convolutional Neural Networks

Authors: Lei Zhu, Nan Li

Abstract:

Cold stamping has been widely applied in the automotive industry for the mass production of a great range of automotive panels. Predicting the springback to ensure the dimensional accuracy of the cold-stamped components is a critical step. The main approaches to the prediction and compensation of springback in cold stamping involve running Finite Element (FE) simulations and conducting experiments, which require forming-process expertise and can be time-consuming and expensive for the design of cold stamping tools. Machine learning technologies have been proven and successfully applied in learning complex system behaviours from representative samples, and they exhibit promising potential as supporting design tools for metal forming technologies. This study presents, for the first time, a novel application of a Convolutional Neural Network (CNN) based surrogate model to predict the springback fields for variable U-shape cold bending geometries. A dataset is created from the U-shape cold bending geometries and the corresponding FE simulation results and is then used to train the CNN surrogate model. The results show that the surrogate model can achieve near-indistinguishable full-field predictions in real time when compared with the FE simulation results. The application of CNNs to efficient springback prediction can be adopted in industrial settings to aid both conceptual and final component designs for designers without manufacturing knowledge.
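A minimal sketch of the surrogate idea: a small encoder-decoder CNN mapping a rasterized geometry channel to a full-field springback map, here in Keras with random arrays standing in for the FE-generated dataset; the architecture, shapes, and training setup are illustrative, not the paper's network.

```python
import numpy as np
import tensorflow as tf

H = W = 64
x_train = np.random.rand(32, H, W, 1).astype("float32")   # geometry rasters (placeholder)
y_train = np.random.rand(32, H, W, 1).astype("float32")   # springback fields (placeholder)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(H, W, 1)),
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(),                        # encode
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.UpSampling2D(),                        # decode back to full field
    tf.keras.layers.Conv2D(1, 3, padding="same"),          # predicted springback map
])
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y_train, epochs=2, batch_size=8, verbose=0)

pred = model.predict(x_train[:1])     # near real-time inference per geometry
print(pred.shape)
```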

Keywords: springback, cold stamping, convolutional neural networks, machine learning

Procedia PDF Downloads 149
22761 Development of a Data-Driven Method for Diagnosing the State of Health of Battery Cells, Based on the Use of an Electrochemical Aging Model, with a View to Their Use in Second Life

Authors: Desplanches Maxime

Abstract:

Accurate estimation of the remaining useful life of lithium-ion batteries for electronic devices is crucial. Data-driven methodologies encounter challenges related to data volume and acquisition protocols, particularly in capturing a comprehensive range of aging indicators. To address these limitations, we propose a hybrid approach that integrates an electrochemical model with state-of-the-art data analysis techniques, yielding a comprehensive database. Our methodology involves infusing an aging phenomenon into a Newman model, leading to the creation of an extensive database capturing various aging states based on non-destructive parameters. This database serves as a robust foundation for subsequent analysis. Leveraging advanced data analysis techniques, notably principal component analysis and t-Distributed Stochastic Neighbor Embedding, we extract pivotal information from the data. This information is harnessed to construct a regression function using either random forest or support vector machine algorithms. The resulting predictor demonstrates a 5% error margin in estimating remaining battery life, providing actionable insights for optimizing usage. Furthermore, the database was built from the Newman model calibrated for aging and performance using data from a European project called Teesmat. The model was then initialized numerous times with different aging values, for instance, with varying thicknesses of SEI (Solid Electrolyte Interphase). This comprehensive approach ensures a thorough exploration of battery aging dynamics, enhancing the accuracy and reliability of our predictive model. Of particular importance is our reliance on the database generated through the integration of the electrochemical model. This database serves as a crucial asset in advancing our understanding of aging states. Beyond its capability for precise remaining life predictions, this database-driven approach offers valuable insights for optimizing battery usage and adapting the predictor to various scenarios. This underscores the practical significance of our method in facilitating better decision-making regarding lithium-ion battery management.
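A sketch of the analysis chain on synthetic data: PCA compresses the aging indicators and a random forest regresses the remaining useful life. Feature names, sizes, and the error metric are illustrative assumptions, not the Teesmat-calibrated database.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 12))     # 12 non-destructive aging indicators per cell state
rul = 2000 - 150 * X[:, 0] + 60 * X[:, 1] + 20 * rng.standard_normal(500)   # cycles

X_tr, X_te, y_tr, y_te = train_test_split(X, rul, random_state=0)
pca = PCA(n_components=5).fit(X_tr)                       # compress the indicator space
rf = RandomForestRegressor(n_estimators=200,
                           random_state=0).fit(pca.transform(X_tr), y_tr)

pred = rf.predict(pca.transform(X_te))
rel_err = np.mean(np.abs(pred - y_te) / y_te)
print(f"mean relative error in predicted remaining life: {rel_err:.1%}")
```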

Keywords: Li-ion battery, aging, diagnostics, data analysis, prediction, machine learning, electrochemical model, regression

Procedia PDF Downloads 69