Search results for: Active shape models
Paper Count: 4119

3279 Fuzzy EOQ Models for Deteriorating Items with Stock Dependent Demand and Non-Linear Holding Costs

Authors: G. C. Mahata, A. Goswami

Abstract:

This paper deals with infinite-time-horizon fuzzy Economic Order Quantity (EOQ) models for deteriorating items with a stock-dependent demand rate and non-linear holding costs, taking the deterioration rate θ0 as a triangular fuzzy number (θ0 − δ1, θ0, θ0 + δ2), where 0 < δ1, δ2 < θ0 are fixed real numbers. The traditional parameters such as unit cost and ordering cost have been kept constant, but the holding cost is allowed to vary. Two forms of the holding cost function are used in the models: a non-linear function of the length of time for which the item is held in stock, and a non-linear function of the amount of on-hand inventory. The approximate optimal solution of the fuzzy cost function in each case has been obtained, and the effect of non-linearity in holding costs is studied with the help of a numerical example.
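
For reference, the membership function of the triangular fuzzy number used above follows the standard definition (the paper's full fuzzy cost model is not reproduced here):

$$
\mu_{\tilde{\theta}_0}(x) =
\begin{cases}
\dfrac{x - (\theta_0 - \delta_1)}{\delta_1}, & \theta_0 - \delta_1 \le x \le \theta_0,\\[4pt]
\dfrac{(\theta_0 + \delta_2) - x}{\delta_2}, & \theta_0 \le x \le \theta_0 + \delta_2,\\[4pt]
0, & \text{otherwise},
\end{cases}
$$

and the fuzzy total cost is then obtained by propagating this membership through the crisp cost function via the extension principle named in the keywords.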

Keywords: Inventory Model, Deterioration, Holding Cost, Fuzzy Total Cost, Extension Principle.

3278 Kinetic Studies on Microbial Production of Tannase Using Redgram Husk

Authors: S. K. Mohan, T. Viruthagiri, C. Arunkumar

Abstract:

Tannase (tannin acyl hydrolase, E.C. 3.1.1.20) is an important hydrolytic enzyme with innumerable applications and industrial potential. In the present study, a kinetic model has been developed for the batch fermentation used for the production of tannase by A. flavus MTCC 3783. A maximum tannase activity of 143.30 U/ml was obtained at 96 hours under optimum operating conditions of 35 °C, an initial pH of 5.5 and an inducer (tannic acid) concentration of 3% (w/v), for a fermentation period of 120 hours. The biomass concentration reached a maximum of 6.62 g/l at 96 hours, with no further increase until the end of the fermentation. Various unstructured kinetic models were analyzed to simulate the experimental values of microbial growth, tannase activity and substrate concentration. The logistic model for microbial growth, the Luedeking-Piret model for tannase production and a substrate-utilization kinetic model were capable of predicting the fermentation profile, with high coefficient of determination (R²) values of 0.980, 0.942 and 0.983, respectively. The results indicate that these unstructured models describe the fermentation kinetics effectively.
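
As an illustration of the unstructured models named in this abstract, here is a minimal Python sketch (with made-up parameter values, not the authors' fitted ones) coupling logistic biomass growth, Luedeking-Piret product formation and a substrate-utilization balance:

```python
# Illustrative sketch of the unstructured kinetic models named in the abstract
# (logistic growth, Luedeking-Piret product formation, substrate utilization).
# All parameter values below are hypothetical, not those fitted in the paper.
import numpy as np
from scipy.integrate import solve_ivp

mu_max, X_max = 0.08, 6.6      # 1/h, g/L (assumed)
alpha, beta = 18.0, 0.05       # growth- and non-growth-associated terms (assumed)
Yxs, ms = 0.45, 0.01           # biomass yield, maintenance coefficient (assumed)

def kinetics(t, y):
    X, P, S = y
    dX = mu_max * X * (1.0 - X / X_max)          # logistic growth
    dP = alpha * dX + beta * X                   # Luedeking-Piret
    dS = -dX / Yxs - ms * X                      # substrate utilization
    return [dX, dP, dS]

sol = solve_ivp(kinetics, (0, 120), [0.1, 0.0, 30.0],
                t_eval=np.linspace(0, 120, 25))
for t, X, P, S in zip(sol.t, *sol.y):
    print(f"t={t:6.1f} h  X={X:5.2f} g/L  P={P:6.1f} U/mL  S={S:5.2f} g/L")
```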

Keywords: Aspergillus flavus, Batch fermentation, Kinetic model, Tannase, Unstructured models.

3277 Estimation of Natural Frequency of the Bearing System under Periodic Force Based on Principal of Hydrodynamic Mass of Fluid

Authors: M. H. Pol, A. Bidi, A. V. Hoseini

Abstract:

Estimating the natural frequency of structures is very important, yet it is not always simple and is sometimes complicated; lack of knowledge about it has caused severe damage and hazardous effects. In this paper, using two different models in the FEM framework based on the hydrodynamic mass of fluids, the natural frequency of a particular bearing (Fig. 1) subjected to an electric field (i.e., a periodic force) is calculated for different stiffnesses and geometries. Finally, the results of the two models are compared with the analytical solution.

Keywords: Natural frequency of the bearing, Hydrodynamic mass of fluid method.

3276 Application of Differential Transformation Method for Solving Dynamical Transmission of Lassa Fever Model

Authors: M. A. Omoloye, M. I. Yusuff, O. K. S. Emiola

Abstract:

The use of mathematical models for solving biological problems ranges from simple to complex analyses, depending on the nature of the research problem and the applicability of the models, and is increasingly common. Many complex models become impractical when treated analytically; alternative approaches such as numerical methods can then be employed. The Differential Transformation Method (DTM), which is based on the Taylor series, is well suited to solving both linear and non-linear model equations. Hence this study investigates the application of DTM to solve a dynamic transmission model of Lassa fever in a population. The mathematical model was formulated using first-order differential equations. Firstly, existence and uniqueness of the solution were established to show that the model is mathematically well posed for the application of DTM. Numerical simulations were then conducted to compare the results obtained by DTM with those of the fourth-order Runge-Kutta method. As shown, DTM is very effective in predicting the solution of the Lassa fever epidemic model.
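
To show the mechanics of DTM, a toy Python sketch follows: the transform is applied to a single linear decay equation (not the full Lassa fever system) and compared against a hand-rolled fourth-order Runge-Kutta integration; the parameter values are assumed.

```python
# Toy illustration of the Differential Transformation Method (DTM) on the
# ODE dy/dt = -a*y, y(0) = y0, compared against classical RK4 and the exact
# solution.  The Lassa fever model in the paper is a system of such equations;
# this sketch only shows the transform mechanics, with assumed a=0.5, y0=1.
import math

a, y0, t_end, N = 0.5, 1.0, 2.0, 12   # assumed parameters / series order

# DTM: for y(t) = sum_k Y[k]*t^k, dy/dt transforms to (k+1)*Y[k+1], so the
# equation dy/dt = -a*y gives the recurrence Y[k+1] = -a*Y[k] / (k+1).
Y = [y0]
for k in range(N):
    Y.append(-a * Y[k] / (k + 1))
dtm = lambda t: sum(c * t**k for k, c in enumerate(Y))

# Classical fourth-order Runge-Kutta for the same ODE.
def rk4(f, y, t, h, steps):
    for _ in range(steps):
        k1 = f(t, y); k2 = f(t + h/2, y + h*k1/2)
        k3 = f(t + h/2, y + h*k2/2); k4 = f(t + h, y + h*k3)
        y += h * (k1 + 2*k2 + 2*k3 + k4) / 6
        t += h
    return y

f = lambda t, y: -a * y
print("DTM  :", dtm(t_end))
print("RK4  :", rk4(f, y0, 0.0, 0.01, int(t_end / 0.01)))
print("exact:", y0 * math.exp(-a * t_end))
```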

Keywords: Differential Transform Method, Existence and uniqueness, Lassa fever, Runge-Kutta Method.

3275 Thermodynamic Analyses of Information Dissipation along the Passive Dendritic Trees and Active Action Potential

Authors: Bahar Hazal Yalçınkaya, Bayram Yılmaz, Mustafa Özilgen

Abstract:

Brain information transmission in the neuronal network occurs in the form of electrical signals. Neurons transmit information between themselves or to target cells by moving charged particles in a voltage field; a fraction of the energy utilized in this process is dissipated via entropy generation. Exergy loss and entropy generation models demonstrate the inefficiencies of communication along the dendritic trees. In this study, neurons of four different animals were analyzed with a one-dimensional cable model with N = 6 identical dendritic trees and M = 3 orders of symmetrical branching. Each branch bifurcates symmetrically, in accordance with the 3/2 power law, into an infinitely long cylinder with the usual core-conductor assumptions, where membrane potential is conserved at all branching points. In the model, exergy loss and entropy generation rates are calculated for each branch of equivalent cylinders of electrotonic length (L) ranging from 0.1 to 1.5 for four different dendritic branches: the input branch (BI), the sister branch (BS) and two cousin branches (BC-1 and BC-2). Thermodynamic analysis with data from two different cat motoneuron studies shows that in both experiments nearly the same amount of exergy is lost while nearly the same amount of entropy is generated. The guinea pig vagal motoneuron loses twofold more exergy than the cat models, and the squid exergy loss and entropy generation are nearly tenfold those of the guinea pig vagal motoneuron model. The analysis shows that the energy dissipated in the dendritic trees is directly proportional to the electrotonic length, the exergy loss and the entropy generation. Entropy generation and exergy loss vary not only between vertebrates and invertebrates but also within the same class. Additionally, the Na+ ion load of a single action potential, the metabolic energy utilization and its thermodynamic aspects are considered for the squid giant axon and a mammalian motoneuron model. Energy demand is supplied to the neurons in the form of adenosine triphosphate (ATP). Exergy destruction and entropy generation upon ATP hydrolysis are calculated. ATP utilization, exergy destruction and entropy generation differ in each model depending on the variations in ion transport along the channels.
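
For reference, the 3/2 power law and the electrotonic length invoked above are the standard cable-theory quantities

$$
d_{\text{parent}}^{3/2} = \sum_{j} d_{\text{child},j}^{3/2},
\qquad
L = \frac{\ell}{\lambda},
\qquad
\lambda = \sqrt{\frac{R_m d}{4 R_i}},
$$

where $d$ is the branch diameter, $\ell$ its physical length, $R_m$ the specific membrane resistance and $R_i$ the intracellular (axial) resistivity; a tree obeying the first relation can be collapsed into the equivalent cylinders analyzed in the study.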

Keywords: ATP utilization, entropy generation, exergy loss, neuronal information transmittance.

3274 The Effect of Modification and Initial Concentration on Ammonia Removal from Leachate by Zeolite

Authors: Fulya Aydın, Ayşe Kuleyin

Abstract:

The purpose of this study is to investigate the capacity of a natural Turkish zeolite for NH4-N removal from landfill leachate. The effects of modification and initial concentration on the removal of NH4-N from leachate were also investigated. The kinetics of adsorption of NH4-N are discussed using three kinetic models: the pseudo-second-order model, the Elovich equation and the intraparticle diffusion model. Kinetic parameters and correlation coefficients were determined. Equilibrium isotherms for the adsorption of NH4-N were analyzed by the Langmuir, Freundlich and Temkin isotherm models. The Langmuir isotherm model was found to best represent the data for NH4-N.
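
As a hedged illustration of the isotherm fitting step, the sketch below fits the Langmuir model to made-up equilibrium data with SciPy; it is not the leachate data set analyzed in the paper.

```python
# Illustrative fit of the Langmuir isotherm q_e = q_m*K_L*C_e / (1 + K_L*C_e)
# to equilibrium data.  The concentration/uptake values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

Ce = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])    # mg/L (hypothetical)
qe = np.array([2.1, 3.8, 7.5, 11.2, 14.6, 16.9])        # mg/g (hypothetical)

def langmuir(C, qm, KL):
    return qm * KL * C / (1.0 + KL * C)

(qm, KL), _ = curve_fit(langmuir, Ce, qe, p0=[20.0, 0.01])
resid = qe - langmuir(Ce, qm, KL)
r2 = 1.0 - np.sum(resid**2) / np.sum((qe - qe.mean())**2)
print(f"q_m = {qm:.2f} mg/g, K_L = {KL:.4f} L/mg, R^2 = {r2:.3f}")
```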

Keywords: Leachate, Ammonium, Zeolite.

3273 A Study of the Replacement of Natural Coarse Aggregate by Spherically-Shaped and Crushed Waste Cathode Ray Tube Glass in Concrete

Authors: N. N. M. Pauzi, M. R. Karim, M. Jamil, R. Hamid, M. F. M. Zain

Abstract:

The aim of this study is to conduct an experimental investigation of the influence of complete replacement of natural coarse aggregate with spherically-shaped and crushed waste cathode ray tube (CRT) glass on the workability, density, and compressive strength of concrete. After characterizing the glass, a group of concrete mixes was prepared containing 40% spherical CRT glass and 60% crushed CRT glass as a complete (100%) replacement of natural coarse aggregate. From a total of 16 types of concrete mixes, the optimum proportion was selected based on its best performance. The test results showed that the use of spherical and crushed glass, which possesses a smooth surface, rounded, irregular and elongated shapes, and low water absorption, affects the workability of concrete. Owing to the higher specific gravity of crushed glass, concrete mixes containing CRT glass had a higher density than ordinary concrete. Despite the spherical and crushed CRT glass being stronger than gravel, the results revealed a reduction in the compressive strength of the concrete. However, using a lower water-to-binder (w/b) ratio and a higher superplasticizer (SP) dosage, a compressive strength of 60.97 MPa at 28 days was achieved, which is 13% lower than that of the control specimen. These findings indicate that waste CRT glass in spherical and crushed form could be used as an alternative coarse aggregate, which may pave the way for the disposal of this hazardous e-waste.

Keywords: Cathode ray tube, glass, coarse aggregate, compressive strength.

3272 Classifying and Predicting Efficiencies Using Interval DEA Grid Setting

Authors: Yiannis G. Smirlis

Abstract:

The classification and prediction of efficiencies in Data Envelopment Analysis (DEA) is an important issue, especially in large-scale problems or when new units frequently enter the set under assessment. In this paper, we contribute to the subject by proposing a grid structure based on interval segmentations of the range of values for the inputs and outputs. Such intervals, combined, define hyper-rectangles that partition the space of the problem. This structure, exploited by interval DEA models and a dominance relation, acts as a DEA pre-processor, enabling the classification and prediction of efficiency scores without applying any DEA models.

Keywords: Data envelopment analysis, interval DEA, efficiency classification, efficiency prediction.

3271 Prediction of Computer and Video Game Playing Population: An Age Structured Model

Authors: T. K. Sriram, Joydip Dhar

Abstract:

Models based on stage structure have found varied applications in population modelling. This paper proposes a stage-structured model to study trends in the computer and video game playing population of the US. The game-playing population is divided into three compartments based on age group. After simulating the mathematical model, a forecast of the number of game players in each stage, as well as an approximation of the average age of game players in the future, has been made.
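
A minimal sketch of a three-stage compartment model of this kind is given below; the recruitment, aging and quitting rates, and the initial populations, are hypothetical and are not taken from the paper.

```python
# Generic three-stage (age-structured) compartment model of the kind described
# in the abstract: new players enter stage 1, age into stages 2 and 3, and may
# quit from any stage.  All rates and initial values here are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

recruit = 2.0e6                  # new players per year entering stage 1 (assumed)
a1, a2 = 1/8.0, 1/15.0           # aging rates out of stages 1 and 2 (1/yr, assumed)
q1, q2, q3 = 0.02, 0.04, 0.08    # quitting rates per stage (1/yr, assumed)

def model(t, x):
    x1, x2, x3 = x
    return [recruit - (a1 + q1) * x1,
            a1 * x1 - (a2 + q2) * x2,
            a2 * x2 - q3 * x3]

sol = solve_ivp(model, (0, 20), [30e6, 50e6, 40e6], t_eval=np.arange(0, 21, 5))
for t, x1, x2, x3 in zip(sol.t, *sol.y):
    print(f"year {t:4.0f}: stage1={x1/1e6:6.1f}M  stage2={x2/1e6:6.1f}M  stage3={x3/1e6:6.1f}M")
```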

Keywords: Age structure, Forecasting, Mathematical modeling, Stage structure.

3270 Hybrid Equity Warrants Pricing Formulation under Stochastic Dynamics

Authors: Teh Raihana Nazirah Roslan, Siti Zulaiha Ibrahim, Sharmila Karim

Abstract:

A warrant is a financial contract that confers the right, but not the obligation, to buy or sell a security at a certain price before expiration. The standard procedure of valuing equity warrants using call option pricing models such as the Black-Scholes model has been shown to contain many flaws, such as the assumptions of constant interest rate and constant volatility. In fact, existing alternative models focus more on demonstrating pricing techniques than on empirical testing. Therefore, a mathematical model for pricing and analyzing equity warrants which comprises stochastic interest rates and stochastic volatility is essential to incorporate the dynamic relationships between the identified variables and to reflect the real market. Here, the aim is to develop dynamic pricing formulations for hybrid equity warrants by incorporating stochastic interest rates from the Cox-Ingersoll-Ross (CIR) model along with stochastic volatility from the Heston model. The development of the model involves the derivation of the stochastic differential equations that govern the model dynamics. The resulting equations, which involve a Cauchy problem and heat equations, are then solved using partial differential equation approaches. The analytical pricing formulas obtained in this study comply with the form of analytical expressions embedded in the Black-Scholes model and other existing pricing models for equity warrants. This facilitates the practicality of the proposed formula for comparison purposes and further empirical study.
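
For intuition about the hybrid dynamics (Heston stochastic variance plus a CIR short rate), a Monte Carlo sketch with an Euler scheme and full truncation is shown below; all parameters are assumed, and the paper itself derives closed-form formulas rather than simulating.

```python
# Monte Carlo sketch of the hybrid dynamics named in the abstract: Heston
# stochastic volatility for the underlying plus a CIR stochastic short rate.
# Parameters are illustrative; the paper derives analytical formulas instead.
import numpy as np

rng = np.random.default_rng(0)
S0, v0, r0 = 100.0, 0.04, 0.03
kappa_v, theta_v, sigma_v = 2.0, 0.04, 0.3   # Heston variance parameters (assumed)
kappa_r, theta_r, sigma_r = 1.5, 0.03, 0.1   # CIR short-rate parameters (assumed)
rho = -0.6                                   # spot/variance correlation (assumed)
T, n_steps, n_paths = 1.0, 252, 20000
dt = T / n_steps

S = np.full(n_paths, S0); v = np.full(n_paths, v0); r = np.full(n_paths, r0)
disc = np.zeros(n_paths)                     # integrated short rate for discounting
for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
    z3 = rng.standard_normal(n_paths)
    vp, rp = np.maximum(v, 0.0), np.maximum(r, 0.0)   # full truncation
    S *= np.exp((rp - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
    v += kappa_v * (theta_v - vp) * dt + sigma_v * np.sqrt(vp * dt) * z2
    r += kappa_r * (theta_r - rp) * dt + sigma_r * np.sqrt(rp * dt) * z3
    disc += rp * dt

K = 100.0                                    # strike of a call-type payoff (assumed)
price = np.mean(np.exp(-disc) * np.maximum(S - K, 0.0))
print(f"simulated call-style payoff value: {price:.3f}")
```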

Keywords: Cox-Ingersoll-Ross model, equity warrants, Heston model, hybrid models, stochastic.

3269 Specialized Reduced Models of Dynamic Flows in 2-Stroke Engines

Authors: S. Cagin, X. Fischer, E. Delacourt, N. Bourabaa, C. Morin, D. Coutellier, B. Carré, S. Loumé

Abstract:

The complexity of scavenging by ports and its impact on engine efficiency create the need to understand and to model it as realistically as possible. However, there are few empirical scavenging models and these are highly specialized. In a design optimization process, they appear very restricted and their field of use is limited. This paper presents a comparison of two methods to establish and reduce a model of the scavenging process in 2-stroke diesel engines. To address the lack of scavenging models, a CFD model has been developed and is used as the reference case. However, its large size requires a reduction. Two techniques have been tested depending on their fields of application: the NTF method and neural networks. They both appear highly appropriate, drastically reducing the model's size (over 90% reduction) with a low relative error rate (under 10%). Furthermore, each method produces a reduced model which can be used in distinct specialized fields of application: the distribution of a quantity (mass fraction for example) in the cylinder at each time step (pseudo-dynamic model) or the qualification of scavenging at the end of the process (pseudo-static model).

Keywords: Diesel engine, Design optimization, Model reduction, Neural network, NTF algorithm, Scavenging.

3268 On the Performance of Information Criteria in Latent Segment Models

Authors: Jaime R. S. Fonseca

Abstract:

Notwithstanding the widespread application of finite mixture models in segmentation, finite mixture model selection is still an important issue. In fact, the selection of an adequate number of segments is a key step in deriving latent segment structures, and it is desirable that the selection criteria used for this purpose are effective. In order to select among several information criteria which may support the selection of the correct number of segments, we conduct a simulation study. In particular, this study is intended to determine which information criteria are more appropriate for mixture model selection when considering data sets with only categorical segmentation base variables. The generation of mixtures of multinomial data supports the proposed analysis. As a result, we establish a relationship between the level of measurement of the segmentation variables and the performance of eleven information criteria. The criterion AIC3 shows the best performance (it indicates the correct number of segments of the simulated structure most often) when referring to mixtures of multinomial segmentation base variables.
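
For reference, the criteria central to this comparison take their standard forms

$$
\mathrm{AIC} = -2\ln L + 2k, \qquad
\mathrm{AIC3} = -2\ln L + 3k, \qquad
\mathrm{BIC} = -2\ln L + k\ln n,
$$

where $L$ is the maximized likelihood of the fitted mixture, $k$ the number of free parameters and $n$ the sample size; the number of segments that minimizes the chosen criterion is selected.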

Keywords: Quantitative Methods, Multivariate Data Analysis, Clustering, Finite Mixture Models, Information Theoretical Criteria, Simulation experiments.

3267 Simulation and Statistical Analysis of Motion Behavior of a Single Rockfall

Authors: Iau-Teh Wang, Chin-Yu Lee

Abstract:

The impact force of a rockfall is mainly determined by its motion behavior and velocity, which are contingent on the rock shape, slope gradient, height, and surface roughness of the moving path. It is essential to precisely calculate the moving path of the rockfall in order to effectively minimize and prevent the damage it causes. Applying the Colorado Rockfall Simulation Program (CRSP) as the analysis tool, this research studies the influence of three rock shapes (spherical, cylindrical and discoidal) and of surface roughness on the moving path of a single rockfall. As revealed in the analysis, in addition to the slope gradient, the geometry of the falling rock and the joint roughness coefficient (JRC) of the slope are the main factors affecting the motion behavior of a rockfall. On a single flat slope, both the rock's bounce height and moving velocity increase as the surface gradient increases, with a critical gradient value of 1:m = 1. Bouncing behavior and higher moving velocities occur more easily when the rock geometry is more oval, whereas a flat piece tends to slide and is easily influenced by changes in surface undulation. When JRC < 1.4, the moving velocity decreases and the bounce height increases as JRC increases. At a fixed gradient, a greater JRC gives a higher bounce height, while the moving velocity shows a downward trend. Therefore, the best protection points and facilities can be chosen if the moving paths of rockfalls are precisely estimated.

Keywords: rock shape, surface roughness, moving path.

3266 Mapping Knowledge Model Onto Java Codes

Authors: B. A. Gobin, R. K. Subramanian

Abstract:

This paper gives an overview of the mapping mechanism of SEAM, a methodology for the automatic generation of knowledge models, and of its mapping onto Java code. It discusses the rules that are used to map the different components in the knowledge model automatically onto Java classes, properties and methods. The aim of developing this mechanism is to help in the creation of a prototype which will be used to validate the automatically generated knowledge model. It will also help to link the modeling phase with the implementation phase, as existing knowledge engineering methodologies do not provide proper guidelines for the transition from the knowledge modeling phase to the development phase. This will decrease the development overheads associated with the development of knowledge-based systems.

Keywords: KBS, OWL, ontology, knowledge models

3265 Feature Analysis of Predictive Maintenance Models

Authors: Zhaoan Wang

Abstract:

Research in predictive maintenance modeling has advanced in recent years to predict failures and needed maintenance with high accuracy, saving cost and improving manufacturing efficiency. However, classic prediction models provide little insight into which features contribute most to a failure. By analyzing and quantifying feature importance in predictive maintenance models, cost saving can be optimized based on business goals. First, multiple classifiers are evaluated with cross-validation to predict the multiple classes of failures. Second, predictive performance with features provided by different feature selection algorithms is further analyzed. Third, features selected by different algorithms are ranked and combined based on their predictive power. Finally, the linear explainer SHAP (SHapley Additive exPlanations) is applied to interpret classifier behavior and provide further insight into the specific roles of features in both local predictions and global model behavior. The results of the experiments suggest that certain features play dominant roles in the predictive models while others have significantly less impact on the overall performance. Moreover, for multi-class prediction of machine failures, the most important features vary with the type of machine failure. The results may lead to improved productivity and cost saving by prioritizing sensor deployment, data collection, and data processing of the more important features over the less important ones.
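
A simplified sketch of the cross-validation and feature-ranking steps is given below; synthetic data and scikit-learn permutation importance stand in for the authors' dataset and SHAP analysis.

```python
# Simplified sketch of two steps described in the abstract: cross-validated
# multi-class failure prediction and feature-importance ranking.  Synthetic
# data and permutation importance stand in for the authors' dataset and SHAP.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=2000, n_features=12, n_informative=5,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf.fit(X_tr, y_tr)
imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]
for i in ranking[:5]:
    print(f"feature {i}: importance {imp.importances_mean[i]:.3f}")
```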

Keywords: Automated supply chain, intelligent manufacturing, predictive maintenance, machine learning, feature engineering, model interpretation.

3264 Negative Selection as a Means of Discovering Unknown Temporal Patterns

Authors: Wanli Ma, Dat Tran, Dharmendra Sharma

Abstract:

The temporal nature of negative selection is an under-exploited area. In a negative selection system, newly generated antibodies go through a maturing phase, and the survivors of that phase then wait to be activated by incoming antigens after a certain number of matches. Those without enough matches will age and die, while those with enough matches (i.e., being activated) become active detectors. A currently active detector may also age and die if it cannot find any match in a pre-defined (lengthy) period of time. Therefore, what matters in a negative selection system is the dynamics of the involved parties in the current time window, not the whole time duration, which may be up to eternity. This property has the potential to define the uniqueness of negative selection in comparison with other approaches. On the other hand, a negative selection system is only trained with "normal" data samples. It has to learn and discover unknown "abnormal" data patterns on the fly by itself. Consequently, it is more appropriate to utilize negative selection as a system for pattern discovery and recognition rather than just pattern recognition. In this paper, we study the potential of using negative selection in discovering unknown temporal patterns.
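
A toy real-valued negative-selection sketch follows, showing only the generic censoring of detectors against "self" samples, not the temporal activation and ageing scheme discussed above; all numbers are made up.

```python
# Toy real-valued negative-selection sketch: detectors that match "self"
# (normal) training data during maturation are discarded; the survivors flag
# unseen points as anomalous.  This is the generic algorithm only, not the
# temporal activation/ageing scheme discussed in the paper.
import numpy as np

rng = np.random.default_rng(1)
self_radius, n_detectors = 0.12, 400

# "Normal" training samples clustered in one region of the unit square.
self_set = rng.normal(loc=[0.3, 0.3], scale=0.08, size=(200, 2))

# Maturation: keep only candidate detectors far from every self sample.
candidates = rng.uniform(0, 1, size=(n_detectors, 2))
dist_to_self = np.linalg.norm(candidates[:, None, :] - self_set[None, :, :], axis=2)
detectors = candidates[dist_to_self.min(axis=1) > self_radius]

def is_anomalous(x):
    """A point is flagged if any mature detector covers it."""
    return bool((np.linalg.norm(detectors - x, axis=1) < self_radius).any())

print("mature detectors:", len(detectors))
print("normal-looking point (0.32, 0.28):", is_anomalous(np.array([0.32, 0.28])))
print("unusual point       (0.85, 0.90):", is_anomalous(np.array([0.85, 0.90])))
```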

Keywords: Artificial Immune Systems, Computational Intelligence, Negative Selection, Pattern Discovery.

3263 Identification, Prediction and Detection of the Process Fault in a Cement Rotary Kiln by Locally Linear Neuro-Fuzzy Technique

Authors: Masoud Sadeghian, Alireza Fatehi

Abstract:

In this paper, we use a nonlinear system identification method to predict and detect process faults in a cement rotary kiln. After selecting proper inputs and outputs, an input-output model is identified for the plant. To capture the various operating points of the kiln, a Locally Linear Neuro-Fuzzy (LLNF) model is used. This model is trained by the LOLIMOT algorithm, which is an incremental tree-structure algorithm. Using this method, we obtained three distinct models for the normal and faulty situations in the kiln: one model for the normal condition of the kiln with a 15-minute prediction horizon, and two models for the two faulty situations with a 7-minute prediction horizon. Finally, we detect these faults in validation data. The data collected from the White Saveh Cement Company are used in this study.

Keywords: Cement Rotary Kiln, Fault Detection, Delay Estimation Method, Locally Linear Neuro Fuzzy Model, LOLIMOT.

3262 SIFT Accordion: A Space-Time Descriptor Applied to Human Action Recognition

Authors: Olfa Ben Ahmed, Mahmoud Mejdoub, Chokri Ben Amar

Abstract:

Recognizing human actions from videos is an active field of research in computer vision and pattern recognition. Human activity recognition has many potential applications, such as video surveillance, human-machine interaction, sports video retrieval and robot navigation. Currently, local descriptors and bag-of-visual-words models achieve state-of-the-art performance for human action recognition. The main challenge in feature description is how to represent the local motion information efficiently. Most previous works focus on extending 2D local descriptors to 3D ones to describe the local information around every interest point. In this paper, we propose a new spatio-temporal descriptor based on a space-time description of moving points. Our description is based on an Accordion representation of the video, which is well suited to recognizing human actions from 2D local descriptors without the need for 3D extensions. We use the bag-of-words approach to represent videos, quantizing 2D local descriptors that describe both temporal and spatial features with a good compromise between computational complexity and action recognition rate. We have reached impressive results on publicly available action data sets.
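
The bag-of-words step described above can be sketched as follows; random vectors stand in for the actual SIFT-Accordion descriptors, and the codebook size is assumed.

```python
# Minimal bag-of-visual-words step from the pipeline described above: build a
# k-means codebook over local descriptors and encode each video as a histogram
# of visual words.  Random vectors stand in for the actual SIFT descriptors.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
k = 50                                             # codebook size (assumed)
train_descriptors = rng.normal(size=(5000, 128))   # stand-in for 128-D SIFT

codebook = KMeans(n_clusters=k, n_init=10, random_state=0).fit(train_descriptors)

def encode(descriptors):
    """Quantize a video's descriptors and return a normalized word histogram."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

video_descriptors = rng.normal(size=(300, 128))    # descriptors of one video
print(encode(video_descriptors)[:10])
```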

Keywords: Accordion, Bag of Features, Human action, Motion, Moving point, Space-Time Descriptor, SIFT, Video.

3261 Multi-models Approach for Describing and Verifying Constraints Based Interactive Systems

Authors: Mamoun Sqali, Mohamed Wassim Trojet

Abstract:

Requirements analysis, modeling, and simulation have consistently been among the main challenges during the development of complex systems. Scenarios and state machines are two successful models for describing the behavior of an interactive system. Scenarios represent examples of system execution in the form of sequences of messages exchanged between objects and give a partial view of the system. In contrast, state machines can represent the overall system behavior. Automatically translating scenarios into state machines provides answers to various problems such as system behavior validation and scenario consistency checking. In this paper, we propose a method for translating scenarios into state machines represented by the Discrete Event System Specification (DEVS), together with a procedure to detect implied scenarios. Each induced DEVS model represents the behavior of one object of the system. The global system behavior is described by coupling the atomic DEVS models and validated through simulation. We improve the validation process by integrating formal methods to eliminate logical inconsistencies in the global model. To that end, we use the Z notation.

Keywords: Scenarios, DEVS, synthesis, validation and verification, simulation, formal verification, Z notation.

3260 Hybrid Project Management Model Based on Lean and Agile Approach

Authors: F. Z. Eddoug, J. Benhra, R. Benabbou

Abstract:

Excellence and success are the ultimate goals of any project, and in order to achieve them every project manager looks for convenient tools and methods. This work proposes a framework that seeks efficient management of a general project through a lean and agile approach. To reach this objective, the work was divided into two stages: the first focused on exploring and analyzing existing project management models, and in the second the desired framework was created, beginning by focusing on seven existing models and then proposing, for each phase of the framework, the convenient lean and agile tools.

Keywords: Agility, hybrid project management, lean, scrum.

3259 Voltage Stability Investigation of Grid Connected Wind Farm

Authors: Trinh Trong Chuong

Abstract:

At present, it is very common to find renewable energy resources, especially wind power, connected to distribution systems. The impact of this wind power on voltage levels in distribution systems has been addressed in the literature. The majority of these works deal with determining the maximum active and reactive power that can be connected at a system load bus before the voltage at that bus reaches the voltage collapse point; this is done by the traditional PV-curve methods reported in many references. A theoretical expression for the maximum power transfer through a grid, limited by voltage stability, is formulated using an exact representation of the distribution line with ABCD parameters. The expression is used to plot PV curves at various power factors of a radial system, from which limiting values of reactive power can be obtained. This paper presents a method to study the relationship between the active power and the voltage (PV) at the load bus in order to identify the voltage stability limit. It is a foundation for building a permitted operating region that complies with the voltage stability limit at the point of common coupling (PCC) of a grid-connected wind farm.
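
A hedged two-bus sketch of the PV (nose) curve idea follows; it uses a simple R + jX line and assumed per-unit values instead of the exact ABCD-parameter line representation used in the paper.

```python
# Two-bus PV ("nose") curve sketch: a source E feeds a load P + jQ through a
# line R + jX.  The receiving-end voltage satisfies
#   V^4 + [2(PR + QX) - E^2] V^2 + (P^2 + Q^2)(R^2 + X^2) = 0,
# so sweeping P and solving the quadratic in V^2 traces the curve; the point
# where real solutions vanish is the voltage-stability limit.  All per-unit
# values below are assumed.
import numpy as np

E, R, X = 1.0, 0.05, 0.25           # p.u. (assumed)
power_factor = 0.95                 # lagging (assumed)
tan_phi = np.tan(np.arccos(power_factor))

for P in np.arange(0.0, 2.01, 0.1):
    Q = P * tan_phi
    b = 2 * (P * R + Q * X) - E**2
    c = (P**2 + Q**2) * (R**2 + X**2)
    disc = b**2 - 4 * c
    if disc < 0:
        print(f"P = {P:4.2f} p.u. : beyond the voltage-stability limit")
        break
    V = np.sqrt((-b + np.sqrt(disc)) / 2)   # upper (stable) branch
    print(f"P = {P:4.2f} p.u. : V = {V:5.3f} p.u.")
```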

Keywords: Wind generator, Voltage stability, grid connected

3258 Co-payment Strategies for Chronic Medications: A Qualitative and Comparative Analysis at European Level

Authors: Pedro M. Abreu, Bruno R. Mendes

Abstract:

The management of pharmacotherapy and the process of dispensing medicines are becoming critical in clinical pharmacy due to the increasing incidence and prevalence of chronic diseases, the complexity and customization of therapeutic regimens, the introduction of innovative and more expensive medicines, the unbalanced relation between expenditure and revenue, as well as the lack of rationalization associated with medication use. For these reasons, co-payments emerged in Europe in the 1970s and have been applied over the past few years in healthcare. Co-payments lead to rationing and rationalization of users' access to healthcare services and products and, simultaneously, to a qualification and improvement of the services and products for the end-user. This analysis, of hospital practices in particular and co-payment strategies in general, covered all European regions and identified four reference countries that repeatedly apply this tool, each with a different approach. The structure, content and adaptation of European co-payments were analyzed through 7 qualitative attributes and 19 performance indicators, and the results were expressed in a scorecard, allowing the conclusion that the German models (total scores of 68.2% and 63.6% for the two elected co-payments) achieve more compliance and effectiveness, the English models (total score of 50%) can be more accessible, and the French models (total score of 50%) can be more adequate to the socio-economic and legal framework. Other European models did not show the same quality and/or performance, so they were not taken as a standard for the future design of co-payment strategies. In this sense, co-payments can be seen not only as a strategy to moderate the consumption of healthcare products and services, but especially to improve them, as well as a strategy to increase the value that the end-user assigns to these services and products, such as medicines.

Keywords: Clinical pharmacy, co-payments, healthcare, medicines.

3257 Efficient Tuning Parameter Selection by Cross-Validated Score in High Dimensional Models

Authors: Yoonsuh Jung

Abstract:

As DNA microarray data contain a relatively small sample size compared to the number of genes, high dimensional models are often employed. In high dimensional models, the selection of the tuning parameter (or penalty parameter) is often one of the crucial parts of the modeling. Cross-validation is one of the most common methods for tuning parameter selection: it selects the parameter value with the smallest cross-validated score. However, selecting a single value as the ‘optimal’ value for the parameter can be very unstable due to sampling variation, since the sample sizes of microarray data are often small. Our approach is to choose multiple candidates for the tuning parameter first, and then average the candidates with different weights depending on their performance. The additional step of estimating the weights and averaging the candidates rarely increases the computational cost, while it can considerably improve on traditional cross-validation. We show that the value selected by the suggested methods often leads to stable parameter selection as well as improved detection of significant genetic variables compared to traditional cross-validation, via real and simulated data sets.
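
A sketch of the general idea, weight-averaging several cross-validated penalty candidates instead of picking a single optimum, is shown below for a Lasso on synthetic data; the inverse-MSE weighting of the top candidates is an assumption for illustration, not the authors' exact scheme.

```python
# Sketch of the idea in the abstract: instead of a single CV-optimal penalty,
# take several well-performing candidates and average them with weights based
# on their cross-validated error.  The weighting rule below (inverse MSE over
# the top five candidates) is assumed for illustration only.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=60, n_features=200, n_informative=10,
                       noise=5.0, random_state=0)       # "large p, small n"

lambdas = np.logspace(-2, 2, 30)
cv_mse = np.array([-cross_val_score(Lasso(alpha=lam, max_iter=20000), X, y,
                                    cv=5, scoring="neg_mean_squared_error").mean()
                   for lam in lambdas])

top = np.argsort(cv_mse)[:5]                 # five best candidates
weights = 1.0 / cv_mse[top]
lam_avg = np.sum(weights * lambdas[top]) / weights.sum()

print("single CV-optimal lambda:", lambdas[cv_mse.argmin()].round(4))
print("weighted-average lambda :", lam_avg.round(4))
```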

Keywords: Cross Validation, Parameter Averaging, Parameter Selection, Regularization Parameter Search.

3256 Seismic Vulnerability Assessment of Buildings in Algiers Area

Authors: F. Lazzali, M. Farsi

Abstract:

Several models of vulnerability assessment have been proposed. The selection of one of these models depends on the objectives of the study. The classical methodologies for seismic vulnerability analysis, as part of seismic risk analysis, have been formulated with statistical criteria based on rapid observation, and the information relating to building performance is statistically elaborated. In this paper, we use the European Macroseismic Scale EMS-98 to define the relationship between damage and macroseismic intensity in order to assess seismic vulnerability. Applying this approach to the Algiers area, the first step is to identify building typologies and to assign vulnerability classes. In the second step, damage is investigated according to EMS-98.

Keywords: Damage, EMS-98, inventory building, vulnerability classes

3255 Passenger Seat Vibration Control of Quarter Car System with MR Shock Absorber

Authors: Devdutt, M. L. Aggarwal

Abstract:

Semi-active fuzzy control of a quarter car system having three degrees of freedom and fitted with a magneto-rheological (MR) shock absorber is studied in the present paper. First, experimental work was performed on an MR shock absorber under different excitation conditions to obtain force-displacement and force-velocity curves. Then, for the application of the experimental data to the semi-active quarter car system, a polynomial model was selected. Finally, a fuzzy logic controller combining a forward fuzzy controller and an inverse fuzzy controller was designed for integration into the secondary suspension system of the model. The proposed controlled quarter car model was compared with the uncontrolled system in simulations under a bump-type road excitation. The simulation results show the effectiveness of the fuzzy-controlled suspension system in improving the ride comfort and safety of travelling passengers compared to the uncontrolled suspension system.
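
The "polynomial model" step mentioned above can be sketched as fitting damper force as a polynomial in piston velocity; the sample force-velocity points and the polynomial order below are hypothetical.

```python
# Sketch of the polynomial MR-damper model: fit measured force-velocity points
# with a polynomial and evaluate it at a new velocity.  The data and the
# chosen order (5) are made up for illustration.
import numpy as np

velocity = np.array([-0.4, -0.3, -0.2, -0.1, 0.0, 0.1, 0.2, 0.3, 0.4])   # m/s
force = np.array([-950, -820, -640, -380, 40, 420, 660, 830, 960])       # N

coeffs = np.polyfit(velocity, force, deg=5)
mr_force = np.poly1d(coeffs)
print("predicted force at 0.25 m/s:", round(float(mr_force(0.25)), 1), "N")
```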

Keywords: MR shock absorber, three degrees of freedom, quarter car model, fuzzy controller.

3254 Effects of Level Densities and Those of a-Parameter in the Framework of Preequilibrium Model for 63,65Cu(n,xp) Reactions in Neutrons at 9 to 15 MeV

Authors: L. Yettou

Abstract:

In this study, calculations of the proton emission spectra produced by the 63Cu(n,xp) and 65Cu(n,xp) reactions are carried out in the framework of preequilibrium models using the EMPIRE and TALYS codes. Exciton Model predictions combined with the Kalbach angular distribution systematics and the Hybrid Monte Carlo Simulation (HMS) were used. The effects of level densities and those of the a-parameter have been investigated in our calculations. The comparison with experimental data shows a clear improvement over the Exciton Model and HMS calculations.

Keywords: Preequilibrium models, level density, level density a-parameter, 63Cu(n, xp) and 65Cu(n, xp) reactions.

3253 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models

Authors: I. V. Pinto, M. R. Sooriyarachchi

Abstract:

It is frequently observed that data arising in our environment have a hierarchical or nested structure. Multilevel modelling is a modern approach to handling this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex in nature, and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood of orders 1 and 2 (MQL1, MQL2) and penalized quasi-likelihood of orders 1 and 2 (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset; therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to usage, it is equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v2.19) with varying numbers of clusters, cluster sizes and intra-cluster correlations. The test maintained the desirable Type-I error for models estimated using PQL2, and it failed for almost all combinations of MQL. The power of the test was adequate for most of the combinations under all estimation methods except MQL1. Moreover, models were fitted to a real-life dataset using the four methods, and the performance of the test was compared for each model.
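
For reference, the simplest binary multilevel model of the kind assessed here is the two-level random-intercept logistic model

$$
\operatorname{logit}\,\Pr(y_{ij}=1 \mid u_j) = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + u_j,
\qquad u_j \sim N(0, \sigma_u^2),
$$

where $i$ indexes units within cluster $j$. MQL and PQL differ in how the likelihood of such a model is approximated: MQL expands the link function around zero random effects, PQL around the current random-effect estimates, and the orders 1 and 2 refer to the order of the Taylor expansion used.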

Keywords: Goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, type-I error, penalized quasi-likelihood, power, quasi-likelihood.

3252 Photogrammetry and GIS Integration for Archaeological Documentation of Ahl-Alkahf, Jordan

Authors: Rami Al-Ruzouq, Abdallah Al-Zoubi, Abdel-Rahman Abueladas, Petya Dimitrova

Abstract:

Protection and proper management of archaeological heritage are essential for studying and interpreting it for present and future generations, and are based upon multidisciplinary professional collaboration. This study aims to gather data from different sources, integrating photogrammetry and a Geographic Information System (GIS), for the purpose of documenting one of the significant archaeological sites of Jordan (Ahl-Alkahf). 3D modeling deals with the actual image of the features, shapes and texture, to represent reality as realistically as possible. The 3D coordinates that result from the photogrammetric adjustment procedures are used to create 3D models of the study area, and adding textures to the surfaces of the 3D models gives a 'real world' appearance to the displayed models. The GIS combines all data, including boundary maps indicating the location of archaeological sites, a transportation layer, a digital elevation model and orthoimages. For realistic representation of the study area, a 3D GIS model was prepared, in which efficient generation, management and visualization of such spatial data can be achieved.

Keywords: Archaeology, close range photogrammetry, ortho-photo, 3D-GIS

3251 The Effect of Symmetry on the Perception of Happiness and Boredom in Design Products

Authors: Michele Sinico

Abstract:

The present research investigates the effect of symmetry on the perception of happiness and boredom in design products. Three experiments were carried out in order to measure the degree of these visual expressive values in different models of bookcases, wall clocks, and chairs. Sixty participants directly indicated the degree of happiness and boredom using 7-point rating scales. The findings show that the participants attributed different degrees of expressive quality to the different product models. The results also show that symmetry is not a significant constraint for an emotional design project.

Keywords: Product experience, emotional design, symmetry, expressive qualities.

3250 Hand Gesture Recognition Based on Combined Features Extraction

Authors: Mahmoud Elmezain, Ayoub Al-Hamadi, Bernd Michaelis

Abstract:

Hand gesture recognition is an active area of research in the vision community, mainly for the purposes of sign language recognition and human-computer interaction. In this paper, we propose a system to recognize alphabet characters (A-Z) and numbers (0-9) in real time from stereo color image sequences using Hidden Markov Models (HMMs). Our system is based on three main stages: automatic segmentation and preprocessing of the hand regions, feature extraction, and classification. In the automatic segmentation and preprocessing stage, color and a 3D depth map are used to detect hands, and the hand trajectory is then tracked using the mean-shift algorithm and a Kalman filter. In the feature extraction stage, 3D combined features of location, orientation and velocity with respect to Cartesian coordinate systems are used, and k-means clustering is employed to build the HMM codewords. In the final stage, classification, the Baum-Welch algorithm is used to fully train the HMM parameters, and the gestures of alphabets and numbers are recognized using a Left-Right Banded model in conjunction with the Viterbi algorithm. Experimental results demonstrate that our system can successfully recognize hand gestures with a 98.33% recognition rate.
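
Two of the ingredients named above, the Left-Right Banded transition structure and Viterbi decoding of a discrete codeword sequence, can be sketched in plain NumPy; the matrices and the observation sequence are illustrative only.

```python
# Self-contained sketch of two elements named above: a Left-Right Banded HMM
# transition structure (each state may stay or advance one state) and Viterbi
# decoding of a discrete codeword sequence.  The small matrices and the
# observation sequence are made up for illustration.
import numpy as np

n_states, n_symbols = 4, 6

# Left-right banded transitions: stay or advance by one state.
A = np.zeros((n_states, n_states))
for i in range(n_states):
    A[i, i] = 0.6
    if i + 1 < n_states:
        A[i, i + 1] = 0.4
    else:
        A[i, i] = 1.0
pi = np.array([1.0, 0.0, 0.0, 0.0])          # must start in the first state

rng = np.random.default_rng(0)
B = rng.dirichlet(np.ones(n_symbols), size=n_states)   # emission probabilities

def viterbi(obs):
    """Most likely state path for a sequence of discrete codewords."""
    T = len(obs)
    logd = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    with np.errstate(divide="ignore"):
        logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    logd[0] = logpi + logB[:, obs[0]]
    for t in range(1, T):
        scores = logd[t - 1][:, None] + logA          # (from_state, to_state)
        back[t] = scores.argmax(axis=0)
        logd[t] = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(logd[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

print(viterbi([0, 0, 2, 3, 3, 5, 1, 4]))
```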

Keywords: Gesture Recognition, Computer Vision & Image Processing, Pattern Recognition.
