Search results for: stability functions.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2297

227 An Intelligent Water Drop Algorithm for Solving Economic Load Dispatch Problem

Authors: S. Rao Rayapudi

Abstract:

Economic Load Dispatch (ELD) is a method of determining the most efficient, low-cost and reliable operation of a power system by dispatching available electricity generation resources to supply the load on the system. The primary objective of economic dispatch is to minimize the total cost of generation while honoring the operational constraints of the available generation resources. In this paper, an intelligent water drop (IWD) algorithm is proposed to solve the ELD problem with the objective of minimizing the total cost of generation. The intelligent water drop algorithm is a swarm-based, nature-inspired optimization algorithm modeled on natural rivers. A natural river often finds good paths among the many possible paths on its way from source to destination, finally converging on a nearly optimal path. These ideas are embedded into the proposed algorithm for solving the economic load dispatch problem. The main advantages of the proposed technique are that it is easy to implement and capable of finding a feasible, near-globally-optimal solution with little computational effort. In order to illustrate the effectiveness of the proposed method, it has been tested on 6-unit and 20-unit test systems with incremental fuel cost functions that take into account valve-point loading effects. Numerical results show that the proposed method has good convergence properties and yields better solution quality than other algorithms reported in the recent literature.
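
As a rough illustration of the cost model the abstract refers to, the fuel cost of unit i with valve-point loading is commonly written F_i(P_i) = a_i*P_i^2 + b_i*P_i + c_i + |e_i*sin(f_i*(P_i_min - P_i))|. The sketch below evaluates this objective for hypothetical coefficients; it is not the paper's IWD implementation or its test-system data:

```python
import numpy as np

# Hypothetical coefficients for a 3-unit system: a, b, c give the
# quadratic fuel cost; e, f give the valve-point ripple. Pmin in MW.
a = np.array([0.0016, 0.0019, 0.0015])
b = np.array([7.92, 7.85, 7.97])
c = np.array([561.0, 310.0, 78.0])
e = np.array([300.0, 200.0, 150.0])
f = np.array([0.0315, 0.042, 0.063])
p_min = np.array([100.0, 50.0, 50.0])

def total_cost(p):
    """Fuel cost with valve-point loading: quadratic terms plus the
    rectified-sine ripple |e*sin(f*(Pmin - P))| summed over units."""
    return np.sum(a * p**2 + b * p + c + np.abs(e * np.sin(f * (p_min - p))))

cost = total_cost(np.array([200.0, 150.0, 100.0]))  # $/h for one dispatch
```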

Keywords: Economic load dispatch, Transmission loss, Optimization, Valve point loading, Intelligent Water Drop Algorithm.

226 Action Potential Propagation in Inhomogeneous 2D Mouse Ventricular Tissue Model

Authors:

Abstract:

Heterogeneous repolarization causes dispersion of the T-wave and has been linked to arrhythmogenesis. Such heterogeneities appear due to the differential expression of ionic currents in different regions of the heart, both in healthy and diseased animals and humans. Mice are important animals for the study of heart diseases because of the ability to create transgenic animals. We used our previously reported model of mouse ventricular myocytes to develop a 2D mouse ventricular tissue model consisting of 14,000 cells (apical or septal ventricular myocytes) and to study the stability of action potential propagation and Ca2+ dynamics. The 2D tissue model was implemented as a FORTRAN program code for high-performance multiprocessor computers that runs on 36 processors. Our tissue model is able to simulate heterogeneities not only in action potential repolarization, but also in intracellular Ca2+ transients. The multicellular model reproduced experimentally observed velocities of action potential propagation and demonstrated the importance of incorporating realistic Ca2+ dynamics for action potential propagation. The simulations show that relatively sharp gradients of repolarization are predicted to exist in 2D mouse tissue models, and that they are primarily determined by the cellular properties of ventricular myocytes. Abrupt local gradients of channel expression can cause alternans at longer pacing basic cycle lengths than gradual changes, and the development of alternans depends on the site of stimulation.
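
The authors' ionic model is a detailed FORTRAN code; purely as an illustration of how a 2D tissue simulation with regional heterogeneity is structured, the sketch below integrates a monodomain reaction-diffusion grid with FitzHugh-Nagumo kinetics as a stand-in for the actual mouse myocyte model (all parameters are illustrative, with periodic boundaries for brevity):

```python
import numpy as np

nx = ny = 64                 # small grid; the paper's model uses ~14,000 cells
dt, dx, D = 0.05, 1.0, 0.1   # illustrative time step, spacing, diffusion
v = np.zeros((ny, nx))       # "membrane potential" variable
w = np.zeros((ny, nx))       # recovery variable
eps = np.full((ny, nx), 0.08)
eps[:, nx // 2:] = 0.12      # crude apical/septal-style heterogeneity

v[:, :3] = 1.0               # stimulate the left edge once at t = 0

def laplacian(u):
    # 5-point stencil; np.roll gives periodic boundaries (a simplification)
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / dx**2

for _ in range(2000):
    # FitzHugh-Nagumo kinetics stand in for the detailed ionic model
    dv = v * (1 - v) * (v - 0.1) - w + D * laplacian(v)
    dw = eps * (0.5 * v - w)
    v += dt * dv
    w += dt * dw
```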

Keywords: Mouse, cardiac myocytes, computer simulation, action potential

225 Simplified Stress Gradient Method for Stress-Intensity Factor Determination

Authors: Jeries J. Abou-Hanna

Abstract:

Several techniques exist for determining stress-intensity factors in linear elastic fracture mechanics analysis. These techniques are based on analytical, numerical, and empirical approaches that have been well documented in the literature and in engineering handbooks. However, not all techniques share the same merit. Besides often yielding overly conservative results, numerical methods that require extensive computational effort or copious user parameters hinder practicing engineers from efficiently evaluating stress-intensity factors. This paper investigates the prospects of reducing the complexity and the number of required variables in determining stress-intensity factors through the use of the stress gradient and a weighting function. The heart of this work resides in the understanding that fracture emanating from stress concentration locations cannot be explained by a single maximum stress value, but requires the use of a critical volume in which the crack exists. In order to assess the effectiveness of this technique, this study investigated components of different notch geometries and varying levels of stress gradient. Two forms of weighting functions were employed to determine stress-intensity factors, and the results were compared to exact analytical methods. The results indicated that the “exponential” weighting function was superior to the “absolute” weighting function. An error band of ±10% was met for cases ranging from the steep stress gradient of a sharp V-notch to the less severe stress transitions of a large circular notch. Incorporating the proposed method has been shown to be a worthwhile consideration.
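
The paper's "exponential" and "absolute" weighting functions are not specified in the abstract. As a sketch of the general weight-function idea it builds on, the code below evaluates K_I = integral from 0 to a of sigma(x)*m(x, a) dx for the classic center crack in an infinite plate, where m(x, a) = 2a/(sqrt(pi*a)*sqrt(a^2 - x^2)); the substitution x = a*sin(theta) removes the integrable endpoint singularity:

```python
import numpy as np

def sif_center_crack(sigma, a, n=2000):
    """K_I for a center crack of half-length a under a symmetric stress
    field sigma(x), via the weight-function integral. After x = a*sin(theta),
    K_I = (2a / sqrt(pi*a)) * integral of sigma(a*sin(theta)) d(theta)."""
    theta = np.linspace(0.0, np.pi / 2, n)
    return (2 * a / np.sqrt(np.pi * a)) * np.trapz(sigma(a * np.sin(theta)), theta)

# Sanity check: a uniform remote stress recovers the exact K = sigma*sqrt(pi*a)
K = sif_center_crack(lambda x: 100.0 + 0 * x, a=0.01)   # ~ 100*sqrt(0.01*pi)
```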

Keywords: Fracture mechanics, finite element method, stress intensity factor, stress gradient.

224 Transmission Model for Plasmodium Vivax Malaria: Conditions for Bifurcation

Authors: P. Pongsumpun, I.M. Tang

Abstract:

Plasmodium vivax malaria differs from P. falciparum malaria in that a person suffering from a P. vivax infection can suffer relapses of the disease. This is due to the parasite being able to remain dormant in the liver of the patient, from where it can re-infect the patient after a passage of time. During this stage, the patient is classified as being in the dormant class. The model describing the transmission of P. vivax malaria consists of a human population divided into four classes: the susceptible, the infected, the dormant and the recovered. The effect of a time delay on the transmission of this disease is studied. The time delay is the period in which the P. vivax parasite develops inside the mosquito (vector) before the vector becomes infectious (i.e., able to pass on the infection). We analyze our model using standard dynamical modeling methods. Two stable equilibrium states, a disease-free state E0 and an endemic state E1, are found to be possible. It is found that the E0 state is stable when a newly defined basic reproduction number G is less than one. If G is greater than one, the endemic state E1 is stable. The conditions for the endemic equilibrium state E1 to be a stable spiral node are established. For realistic values of the parameters in the model, it is found that solutions in phase space are trajectories spiraling into the endemic state. It is shown that limit cycle and chaotic behaviors can only be achieved with unrealistic parameter values.
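
A minimal sketch of a susceptible-infected-dormant-recovered system of the kind described, with hypothetical rates and without the vector delay (which the paper treats explicitly), might look like this:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical rates for an S-I-D-R vivax sketch (per day, normalized
# population); the paper's full model also tracks the mosquito vector.
beta, relapse, to_dormant, recover, wane = 0.3, 0.05, 0.2, 0.1, 0.01

def sidr(t, y):
    s, i, d, r = y
    new_inf = beta * s * i                                # mass-action transmission
    return [-new_inf + wane * r,                          # susceptible
            new_inf + relapse * d - (to_dormant + recover) * i,  # infected
            to_dormant * i - relapse * d,                 # dormant (liver stage)
            recover * i - wane * r]                       # recovered

sol = solve_ivp(sidr, (0, 365), [0.99, 0.01, 0.0, 0.0], dense_output=True)
```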

Keywords: Equilibrium states, Hopf bifurcation, limit cycle behavior, local stability, Plasmodium Vivax, time delay.

223 Mathematical Model for the Transmission of P. Falciparum and P. Vivax Malaria along the Thai-Myanmar Border

Authors: Puntani Pongsumpun, I-Ming Tang

Abstract:

Most malaria cases occur along the Thai-Myanmar border. A mathematical model for the transmission of Plasmodium falciparum and Plasmodium vivax malaria in a mixed population of Thais and migrant Burmese living along the Thai-Myanmar border is studied. The population is separated into two groups, Thai and Burmese. Each population is divided into susceptible, infected, dormant and recovered subclasses. The loss of immunity by individuals in the infected class causes them to move back into the susceptible class. A person who is infected with Plasmodium vivax and is a member of the dormant class can relapse back into the infected class. A standard dynamical method is used to analyze the behaviors of the model. Two stable equilibrium states, a disease-free state and an epidemic state, are found to be possible in each population. A disease-free equilibrium state in the Thai population occurs when there are no infected Burmese entering the community. When infected Burmese enter the Thai community, an epidemic state can occur. It is found that the disease-free state is stable when the threshold number is less than one. The epidemic state is stable when a second threshold number is greater than one. Numerical simulations are used to confirm the results of our model.

Keywords: Basic reproduction number, Burmese, local stability, Plasmodium Vivax malaria.

222 Using the Minnesota Multiphasic Personality Inventory-2 and Mini Mental State Examination-2 in Cognitive Behavioral Therapy: Case Studies

Authors: Cornelia-Eugenia Munteanu

Abstract:

From a psychological perspective, psychopathology is the area of clinical psychology that has psychological assessment and psychotherapy at its core. In day-to-day clinical practice, psychodiagnosis and psychotherapy are used independently, according to their intended purpose and their specific methods of application. The paper explores how the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) and Mini Mental State Examination-2 (MMSE-2) psychological tools contribute to enhancing the effectiveness of cognitive behavioral psychotherapy (CBT). This combined approach, psychotherapy in conjunction with the assessment of personality and cognitive functions, is illustrated by two cases: a severe depressive episode with psychotic symptoms, and a mixed anxiety-depressive disorder. The order in which CBT, the MMPI-2, and the MMSE-2 were used in the diagnostic and therapeutic process was determined by the particularities of each case. In the first case, the sequence started with psychotherapy, followed by the administration of the blue form MMSE-2, the MMPI-2, and the red form MMSE-2. In the second case, cognitive screening with the blue form MMSE-2 led to a personality assessment using the MMPI-2, followed by the red form MMSE-2, reapplication of the MMPI-2 due to the invalidation of the first profile, and finally, psychotherapy. The MMPI-2 protocols gathered useful information that directed the steps of the therapeutic intervention: a detailed symptom picture of potentially self-destructive thoughts and behaviors otherwise undetected during the interview. The memory loss and poor concentration were confirmed by MMSE-2 cognitive screening. This combined approach, psychotherapy with psychological assessment, aligns with the trend of adapting psychological services to the everyday life of contemporary people and paves the way for deepening and developing the field.

Keywords: Assessment, cognitive behavioral psychotherapy, MMPI-2, MMSE-2, psychopathology.

221 An Autonomous Collaborative Forecasting System Implementation – The First Step towards Successful CPFR System

Authors: Chi-Fang Huang, Yun-Shiow Chen, Yun-Kung Chung

Abstract:

In the past decade, artificial neural networks (ANNs) have been regarded as an instrument for problem-solving and decision-making; indeed, they have already delivered substantial efficiency and effectiveness improvements in industry and business. In this paper, back-propagation neural networks (BPNs) are modulated to demonstrate the performance of the collaborative forecasting (CF) function of a Collaborative Planning, Forecasting and Replenishment (CPFR®) system. CPFR balances sufficient product supply against necessary customer demand in a Supply and Demand Chain (SDC). Several classical standard BPNs are grouped, collaborated and exploited for the easy implementation of the proposed modular ANN framework based on the topology of an SDC. Each individual BPN is applied as a modular tool to perform the task of forecasting the SKU (stock-keeping unit) levels that are managed and supervised at a POS (point of sale), a wholesaler, and a manufacturer in an SDC. The proposed modular BPN-based CF system is exemplified and experimentally verified using numerous datasets from the simulated SDC. The experimental results showed that a complex CF problem can be divided into a group of simpler sub-problems based on the single independent trading partners distributed over the SDC, and that its SKU forecasting accuracy was satisfactory when the forecast values were compared to the original simulated SDC data. The primary task of implementing autonomous CF involves the study of supervised ANN learning methodology, which aims at making “knowledgeable” decisions for the best SKU sales plan and stock management.
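
As a sketch of one forecasting module, the code below trains a single small back-propagation network on a synthetic SKU demand series; the paper's framework replicates such modules per trading partner, and all data and hyperparameters here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SKU demand series (hypothetical): seasonal pattern plus noise.
t = np.arange(300)
demand = 50 + 10 * np.sin(2 * np.pi * t / 30) + rng.normal(0, 1, t.size)

lag = 8   # forecast the next value from a window of 8 past values
X = np.stack([demand[i:i + lag] for i in range(len(demand) - lag)])
y = demand[lag:].reshape(-1, 1)
X = (X - X.mean()) / X.std()
y = (y - y.mean()) / y.std()

# One hidden layer trained by plain backpropagation (gradient descent).
W1 = rng.normal(0, 0.1, (lag, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 1));   b2 = np.zeros(1)
lr = 0.01
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)            # forward pass
    pred = h @ W2 + b2
    err = pred - y                      # d(loss)/d(pred) for 0.5*MSE
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)      # backpropagate through tanh
    gW1 = X.T @ dh / len(X);  gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
```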

Keywords: CPFR, artificial neural networks, global logistics, supply and demand chain.

220 A Stochastic Diffusion Process Based on the Two-Parameters Weibull Density Function

Authors: Meriem Bahij, Ahmed Nafidi, Boujemâa Achchab, Sílvio M. A. Gama, José A. O. Matos

Abstract:

Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present. The purpose of stochastic modeling is therefore to estimate the probability of outcomes within a forecast, i.e. to be able to predict what conditions or decisions might occur under different situations. In the present study, we present a model of a stochastic diffusion process based on the bi-Weibull distribution function (its trend is proportional to the bi-Weibull probability density function). In general, the Weibull distribution has the ability to assume the characteristics of many different types of distributions. This has made it very popular among engineers and quality practitioners, who consider it the most commonly used distribution for studying problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. In this work, we start by obtaining the probabilistic characteristics of this model, such as the explicit expression of the process, its trends, and its distribution, by transforming the diffusion process into a Wiener process as shown in the Ricciardi theorem. Then, we develop the statistical inference of this model using the maximum likelihood methodology. Finally, we analyse with simulated data the computational problems associated with the parameters, an issue of great importance in applications to real data, using convergence analysis methods. Overall, the use of a stochastic model reflects only a pragmatic decision on the part of the modeler. Given the available data and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.
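
The abstract gives the trend as proportional to the Weibull density. Assuming, for illustration only, a diffusion of the form dX_t = theta*f(t)*X_t dt + sigma*X_t dW_t with f the two-parameter Weibull pdf (this specific form is an assumption, not the paper's derivation), an Euler-Maruyama simulation looks like this:

```python
import numpy as np

def weibull_pdf(t, k, lam):
    """Two-parameter Weibull density with shape k and scale lam."""
    return (k / lam) * (t / lam) ** (k - 1) * np.exp(-(t / lam) ** k)

rng = np.random.default_rng(1)
k, lam, theta, sigma = 2.0, 5.0, 4.0, 0.15   # hypothetical parameters
dt, n = 0.01, 2000
t = dt * np.arange(1, n + 1)
x = np.empty(n); x[0] = 1.0
for i in range(1, n):
    drift = theta * weibull_pdf(t[i - 1], k, lam) * x[i - 1]
    x[i] = x[i - 1] + drift * dt + sigma * x[i - 1] * np.sqrt(dt) * rng.normal()
```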

Keywords: Diffusion process, discrete sampling, likelihood estimation method, simulation, stochastic diffusion equation, trend functions, two-parameter Weibull density function.

219 Banking Risk Management between the Prudential and the Operational Approaches

Authors: Mustapha Achibane, Imane Allam

Abstract:

Since the nineties, all Moroccan banking institutions have had to respect an arsenal of prudential ratios. The respect of these prudential measures aims to ensure the stability of the financial system. To this end, regulatory authorities have tried to reduce the financial and operational risks incurred by banking entities. Meanwhile, regulatory authorities have demanded balance sheet management work from banks. They have also asked them to establish management control systems to manage operational risk, as well as an effort in terms of incurred risk-based commitments. The prudential approach therefore has a macroeconomic nature, and it is presented as a determinant of the operational, microeconomic approach. This operational approach takes the form of a strategy that each banking entity must develop to manage the different banking risks. This study seeks to analyze the problem of risk management between the prudential and the operational approaches. It was processed through a literature review followed by an analysis of the Moroccan banking sector’s performance. We first follow an inductive logic and then an analytical one. The first approach consists of analyzing the phenomenon from a normative and conceptual perspective, while the second consists of considering the Moroccan banking system and analyzing the behavior of Moroccan banking entities in terms of risk management and performance. The results identified favorable growth in terms of performance, despite the huge provisioning effort made to meet international standards and the harmonization of regulations.

Keywords: Banking performance, financial intermediation, operational approach, prudential standards, risk management.

218 Nonlinear Sensitive Control of Centrifugal Compressor

Authors: F. Laaouad, M. Bouguerra, A. Hafaifa, A. Iratni

Abstract:

In this work, we treat control problems arising in chemical and petrochemical plants, taking the centrifugal compressor as an example of a complex process: a system that is very complex in its physical structure as well as in its behaviour (the surge phenomenon). We propose to study the applicability of recent control approaches to the compressor behaviour and, consequently, to evaluate their contribution in the practical and theoretical fields. Facing the complexity of the studied industrial process, we choose to resort to fuzzy logic for the analysis and treatment of its control problem, owing to the fact that these techniques constitute the only framework in which different types of imperfect knowledge (uncertainties, inaccuracies, etc.) can be treated jointly, offering suitable tools to characterise them. In the particular case of the centrifugal compressor, these imperfections take the form of modelling errors, neglected dynamics, non-modelable dynamics and parametric variations. The purpose of this paper is to produce a robust nonlinear controller design method to stabilize the compression process at its optimum steady state by manipulating the gas flow rate. In order to cope with both the parameter uncertainty and the structured nonlinearity of the plant, the proposed method consists of a linear steady-state regulation that ensures robust optimal control and of a nonlinear compensation that achieves exact input/output linearization.

Keywords: Compressor, Fuzzy logic, Surge control, Bilinear controller, Stability analysis, Nonlinear plant.

217 Fluorescence Quenching as an Efficient Tool for Sensing Application: Study on the Fluorescence Quenching of Naphthalimide Dye by Graphene Oxide

Authors: Sanaz Seraj, Shohre Rouhani

Abstract:

Recently, graphene has gained much attention because of its unique optical, mechanical, electrical, and thermal properties. Graphene has been used as a key material in technological applications in various areas such as sensors, drug delivery, supercapacitors, transparent conductors, and solar cells. It has a superior quenching efficiency for various fluorophores. Based on these unique properties, optical sensors with graphene materials as the energy acceptors have demonstrated great success in recent years. During quenching, the emission of a fluorophore is perturbed by a quencher, which can be a substrate or a biomolecule, and owing to this phenomenon, fluorophore-quencher pairs have been used for the selective detection of target molecules. Among fluorescent dyes, 1,8-naphthalimide is well known as a typical intramolecular charge transfer (ICT) and photoinduced electron transfer (PET) fluorophore, with strong absorption and emission in the visible region, high photostability, and a large Stokes shift. Derivatives of 1,8-naphthalimide have found applications in several areas, especially fluorescence sensors. Herein, fluorescence quenching by graphene oxide has been studied using a naphthalimide dye as a fluorescent probe model. The quenching ability of graphene oxide on the naphthalimide dye was studied by UV-VIS and fluorescence spectroscopy. This study showed that graphene is an efficient quencher for fluorescent dyes and can therefore be used as a suitable candidate sensing platform. To the best of our knowledge, studies on the quenching and absorption of naphthalimide dyes by graphene oxide are rare.

Keywords: Fluorescence, graphene oxide, naphthalimide dye, quenching.

216 Study of Rayleigh-Bénard-Brinkman Convection Using LTNE Model and Coupled, Real Ginzburg-Landau Equations

Authors: P. G. Siddheshwar, R. K. Vanishree, C. Kanchana

Abstract:

A local nonlinear stability analysis using an eight-mode expansion is performed to arrive at the coupled amplitude equations for Rayleigh-Bénard-Brinkman convection (RBBC) in the presence of LTNE effects. Streamlines and isotherms are obtained in the two-dimensional unsteady finite-amplitude convection regime. The influence of the parameters on heat transport is found to be more pronounced at short times than at long times. Results for Rayleigh-Bénard convection are obtained as a particular case of the present study. Additional modes are shown not to significantly influence the heat transport, leading us to infer that five minimal modes are sufficient for a study of RBBC. The present problem, which uses rolls as the pattern of manifestation of instability, is a needed first step in the direction of making a very general non-local study of two-dimensional unsteady convection. The results may be useful in determining the preferred range of parameter values when making rheometric measurements in fluids to ascertain fluid properties such as viscosity. The results for LTE are obtained as a limiting case of the LTNE results obtained in the paper.
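
As a limiting-case illustration only, the classical three-mode Lorenz model of Rayleigh-Bénard convection, of which the paper's generalized Lorenz model is an extension, can be integrated as follows (parameter values are the textbook ones, not the paper's):

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, y, Pr=10.0, r=28.0, b=8.0 / 3.0):
    """Classical Lorenz model: A, B, C are the velocity and temperature
    amplitudes of the minimal roll expansion; Pr is the Prandtl number
    and r the scaled Rayleigh number."""
    A, B, C = y
    return [Pr * (B - A), r * A - B - A * C, A * B - b * C]

sol = solve_ivp(lorenz, (0, 50), [1.0, 1.0, 1.0], max_step=0.01)
```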

Keywords: Rayleigh-Bénard convection, heat transport, porous media, generalized Lorenz model, coupled Ginzburg-Landau model.

215 Evaluation of Easy-to-Use Energy Building Design Tools for Solar Access Analysis in Urban Contexts: Comparison of Friendly Simulation Design Tools for Architectural Practice in the Early Design Stage

Authors: M. Iommi, G. Losco

Abstract:

The current building sector is focused on the reduction of energy requirements, on renewable energy generation and on the regeneration of existing urban areas. These targets need to be addressed with a systemic approach, considering several aspects simultaneously such as climate conditions, lighting conditions, solar radiation, PV potential, etc. Solar access analysis is an established method for analyzing solar potential, but in recent years, simulation tools have provided more effective opportunities to perform this type of analysis, particularly in the early design stage. Nowadays, the study of solar access depends on how easily simulation tools can be used, rapidly and simply, during the design process. This study presents a comparison of three simulation tools, from the point of view of the user, with the aim of highlighting differences in their ease of use. Using a real urban context as a case study, three tools, Ecotect, Townscope and Heliodon, are tested: models and simulations are performed, and the capabilities and output results of the solar access analysis are examined. The evaluation of the ease of use of these tools is based on a set of detected parameters and features, such as the types of simulation, the required input data, the types of results, etc. As a result, a framework is provided in which the features and capabilities of each tool are shown. This framework highlights the differences among these tools in functions, features and capabilities. The aim of this study is to support users and to improve the integration of solar access simulation tools into the design process.

Keywords: Solar access analysis, energy building design tools, urban planning, solar potential.

214 Thermal Management of Space Power Electronics using TLM-3D

Authors: R. Hocine, K. Belkacemi, A. Boukortt, A. Boudjemai

Abstract:

When designing satellites, one of the major issues, aside from designing the primary subsystems, is devising the thermal control. The thermal management of satellites requires solving different sets of modelling issues. The main issue of thermal modelling for satellite design is making sure that all points of the satellite remain within the temperature limits for which they are designed. The insertion of power electronics in aerospace technologies is becoming widespread, and the modern electronic systems used in space must be reliable and efficient, with thermal management unaffected by outer space constraints. Many advanced thermal management techniques developed in recent years have applications in high-power electronic systems. This paper presents a three-dimensional Transmission Line Matrix (3D-TLM) implementation of transient heat flow in space power electronics. In such components, heat dissipation and good thermal management are essential. Simulation provides the cheapest tool to investigate all aspects of power handling. The 3D-TLM has been successful in modeling heat diffusion problems and has proven to be efficient in terms of stability and handling of complex geometry. The results show a three-dimensional visualisation of self-heating phenomena in a device affected by outer space constraints, and possible approaches for increasing the heat dissipation capability of the power modules are presented.
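
The paper's method is 3D-TLM. As a much simpler stand-in that illustrates the same transient heat-flow problem, the sketch below advances a 2D explicit finite-difference (FTCS) conduction model with a local heat source; it is not a TLM implementation, and all values are illustrative:

```python
import numpy as np

nx = ny = 50
alpha, dx, dt = 1e-5, 1e-4, 1e-4           # diffusivity (m^2/s), grid, step
assert alpha * dt / dx**2 <= 0.25           # 2D explicit stability limit
T = np.full((ny, nx), 300.0)                # initial temperature (K)
q = np.zeros((ny, nx))
q[20:30, 20:30] = 1e3                       # dissipating device (K/s)

for _ in range(5000):
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx**2
    T += dt * (alpha * lap + q)             # FTCS update with source term
    T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 300.0   # fixed boundaries
```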

Keywords: Thermal management, conduction, heat dissipation, CTE, ceramic, heat spreader, nodes, 3D-TLM.

213 Evaluation of Urban Development Proposals: An ANP Approach

Authors: T. Gómez-Navarro, M. García-Melón, D. Díaz-Martín, S. Acuna-Dutra

Abstract:

In this paper, a new approach to prioritize urban planning projects in an efficient and reliable way is presented. It is based on environmental pressure indices and multicriteria decision methods. The paper introduces a rigorous method, of acceptable complexity, for rank-ordering urban development proposals according to their environmental pressure. The technique combines the use of Environmental Pressure Indicators, the aggregation of the indicators into an Environmental Pressure Index by means of the Analytic Network Process (ANP) method, and the interpretation of the information obtained from the experts during the decision-making process. The ANP method allows the aggregation of the experts' judgments on each of the indicators into one Environmental Pressure Index. In addition, ANP is based on utility ratio functions, which are the most appropriate for the analysis of uncertain data, such as experts' estimations. Finally, unlike other multicriteria techniques, ANP allows the decision problem to be modelled using the relationships among dependent criteria. The method has been applied to the proposal for the urban development of La Carlota airport in Caracas (Venezuela). The Venezuelan Government would like to see a recreational project developed on the abandoned area that would mean a significant improvement for the capital. Three options are currently on the table and under evaluation: a health club, a residential area and a theme park. The participating experts agreed that the method proposed in this paper is useful and an improvement over traditional techniques such as environmental impact studies, life-cycle analysis, etc. They found the results coherent, the process sufficiently rigorous and precise, and the use of resources significantly lower than in other methods.
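
A minimal sketch of the ANP machinery described here: local priorities as the principal eigenvector of a pairwise-comparison matrix, and limit priorities from powers of a column-stochastic supermatrix. Both matrices below are hypothetical, not the paper's expert judgments:

```python
import numpy as np

# Hypothetical pairwise-comparison matrix (Saaty 1-9 scale) for three
# environmental pressure indicators.
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                      # local priority vector

# In ANP, local vectors fill a column-stochastic supermatrix W that
# encodes dependence among criteria; limit priorities come from its
# steady state, here approximated by raising W to a large power.
W = np.array([[0.0, 0.6, 0.3],
              [0.5, 0.0, 0.7],
              [0.5, 0.4, 0.0]])      # hypothetical, columns sum to 1
limit = np.linalg.matrix_power(W, 100)[:, 0]
```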

Keywords: Environmental pressure indicators, multicriteria decision analysis, analytic network process.

212 Effect of Sensory Manipulations on Human Joint Stiffness Strategy and Its Adaptation for Human Dynamic Stability

Authors: Aizreena Azaman, Mai Ishibashi, Masanori Ishizawa, Shin-Ichiroh Yamamoto

Abstract:

Sensory input plays an important role in the human posture control system, initiating strategies to counteract any unbalanced condition and thus prevent falls. In a previous study, joint stiffness was observed to describe certain issues regarding movement performance, but the correlation between balance ability and joint stiffness still remains unknown. In this study, joint stiffening strategies at the ankle and hip were observed under different sensory manipulations, and their correlation with a conventional clinical test of balance ability (the Functional Reach Test) was investigated. In order to create unstable conditions, two different surface perturbations (tilt up-tilt down (TT) and forward-backward (FB)) at four different frequencies (0.2, 0.4, 0.6 and 0.8 Hz) were introduced. Furthermore, four different sensory manipulation conditions (involving vision and the vestibular system) were applied, and the subjects were asked to maintain their position as well as possible. The results suggested that joint stiffness was high during difficult balance situations. Subjects with poorer balance generated higher average joint stiffness than well-balanced subjects. Moreover, the adaptation of the posture control system under repetitive external perturbation appeared weaker during sensory-limited conditions. Overall, the analysis of the joint stiffening response makes it possible to predict unbalanced situations faced by humans.

Keywords: Balance ability, joint stiffness, sensory, adaptation, dynamic.

211 An Algorithm Proposed for FIR Filter Coefficients Representation

Authors: Mohamed Al Mahdi Eshtawie, Masuri Bin Othman

Abstract:

Finite impulse response (FIR) filters have the advantages of linear phase, guaranteed stability, fewer finite precision errors, and efficient implementation. In contrast, they have the major disadvantage of requiring a higher order (more coefficients) than their IIR counterparts for comparable performance. The high order demand imposes more hardware requirements, arithmetic operations, area usage, and power consumption when designing and fabricating the filter. Therefore, minimizing or reducing these parameters is a major goal in the digital filter design task. This paper presents an algorithm for modifying the values and the number of non-zero coefficients used to represent the FIR digital pulse-shaping filter response. With this algorithm, the FIR filter frequency and phase response can be represented with a minimum number of non-zero coefficients, thereby reducing the arithmetic complexity needed to compute the filter output. Consequently, system characteristics such as power consumption, area usage, and processing time are also reduced. The proposed algorithm is more powerful when integrated with multiplierless techniques such as distributed arithmetic (DA) in the design of high-order digital FIR filters. Here, the use of DA eliminates the need for multipliers when implementing the multiply-and-accumulate (MAC) unit, and the proposed algorithm reduces the number of adders and addition operations needed to compute the filter output through the minimization of the non-zero coefficient values.
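
The abstract does not spell out the coefficient-selection rule. As a simple stand-in for the idea of representing the response with fewer non-zero taps, the sketch below zeroes small coefficients of a windowed-sinc FIR design and compares the frequency responses (the threshold is an arbitrary illustrative choice, not the paper's algorithm):

```python
import numpy as np
from scipy.signal import firwin, freqz

# High-order linear-phase lowpass FIR (cutoff normalized to Nyquist).
h = firwin(129, 0.25)

# Zero taps whose magnitude falls below a small fraction of the peak.
thr = 1e-3 * np.abs(h).max()
h_sparse = np.where(np.abs(h) > thr, h, 0.0)
print(f"non-zero taps: {np.count_nonzero(h_sparse)} of {h.size}")

# Worst-case frequency-response deviation introduced by the pruning.
w, H_full = freqz(h)
_, H_sparse = freqz(h_sparse)
err_db = 20 * np.log10(np.max(np.abs(H_full - H_sparse)) + 1e-300)
print(f"max deviation: {err_db:.1f} dB")
```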

Keywords: Pulse shaping Filter, Distributed Arithmetic, Optimization algorithm.

210 Development of Circulating Support Environment of Multilingual Medical Communication using Parallel Texts for Foreign Patients

Authors: Mai Miyabe, Taku Fukushima, Takashi Yoshino, Aguri Shigeno

Abstract:

The need for multilingual communication in Japan has increased due to an increase in the number of foreigners in the country. When people communicate in their non-native language, differences in language prevent mutual understanding among the communicating individuals. In the medical field, communication between hospital staff and patients is a serious problem. Currently, medical translators accompany patients to medical care facilities, and the demand for medical translators is increasing. However, medical translators cannot always provide support, especially in cases in which round-the-clock support is required or in emergencies. The medical field has high expectations of information technology. Hence, a system that supports accurate multilingual communication is required. Despite recent advances in machine translation technology, it is very difficult to obtain highly accurate translations. We have developed a support system called M3 for multilingual medical reception. M3 provides support functions that aid foreign patients in the following respects: conversation, questionnaires, reception procedures, and hospital navigation; it also has a Q&A function. Users can operate M3 using a touch screen and receive text-based support. In addition, M3 uses accurate translation tools called parallel texts to facilitate reliable communication through conversations between the hospital staff and the patients. However, if there is no parallel text that expresses what users want to communicate, the users cannot communicate. In this study, we have developed a circulating support environment for multilingual medical communication using parallel texts. The proposed environment can circulate the necessary parallel texts through the following procedure: (1) a user provides feedback about the necessary parallel texts, following which (2) these parallel texts are created and evaluated.

Keywords: multilingual medical communication, parallel texts.

209 Advanced Stochastic Models for Partially Developed Speckle

Authors: Jihad S. Daba (Jean-Pierre Dubois), Philip Jreije

Abstract:

Speckled images arise when coherent microwave, optical, or acoustic imaging techniques are used to image an object, surface or scene. Examples of coherent imaging systems include synthetic aperture radar, laser imaging systems, imaging sonar systems, and medical ultrasound systems. Speckle noise is a form of object- or target-induced noise that results when the surface of the object is Rayleigh-rough compared to the wavelength of the illuminating radiation. Detection and estimation in images corrupted by speckle noise are complicated by the nature of the noise and are not as straightforward as detection and estimation in additive noise. In this work, we derive stochastic models for speckle noise, with an emphasis on speckle as it arises in medical ultrasound images. The motivation for this work is the problem of segmentation and tissue classification using ultrasound imaging. Modeling of speckle in this context involves a partially developed speckle model in which an underlying Poisson point process modulates a Gram-Charlier series of Laguerre-weighted exponential functions, resulting in a doubly stochastic filtered Poisson point process. The statistical distribution of partially developed speckle is derived in a closed canonical form. It is observed that as the mean number of scatterers in a resolution cell is increased, the probability density function approaches an exponential distribution. This is consistent with fully developed speckle noise, as expected from the central limit theorem.
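
A filtered-Poisson speckle model of this kind can be simulated directly. The sketch below draws a Poisson number of unit scatterers per resolution cell with uniform random phases and shows the intensity tending toward an exponential law as the mean scatterer count grows; all parameters are illustrative, and the Gram-Charlier refinement is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle_intensity(mean_scatterers, n_cells=20_000):
    """Each resolution cell sums the complex returns of a Poisson
    number of unit-amplitude scatterers with uniform random phases;
    the observed intensity is |sum|^2."""
    counts = rng.poisson(mean_scatterers, n_cells)
    total = np.zeros(n_cells, dtype=complex)
    for i, n in enumerate(counts):
        phases = rng.uniform(0, 2 * np.pi, n)
        total[i] = np.exp(1j * phases).sum()
    return np.abs(total) ** 2

I_partial = speckle_intensity(2)    # partially developed speckle
I_full = speckle_intensity(50)      # histogram near exponential (CLT)
```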

Keywords: Doubly stochastic filtered process, Poisson point process, segmentation, speckle, ultrasound

208 The Effects of Increasing Unsaturation in Palm Oil and Incorporation of Carbon Nanotubes on Resinous Properties

Authors: Muhammad R. Islam, Mohammad Dalour H. Beg, Saidatul S. Jamari

Abstract:

Since palm oil is considered a non-drying oil owing to its low iodine value, an attempt was made to increase the unsaturation in the fatty acid chains of palm oil for the preparation of alkyds. To increase the unsaturation, sulphuric acid (SA) and para-toluene sulphonic acid (PTSA) were used for the dehydration process prior to alcoholysis. The iodine number of the oil samples was checked by the Wijs method as a measure of unsaturation. Alkyd resin was prepared from the dehydrated palm oil through alcoholysis and esterification reactions. To improve the film properties, 0.5 wt.% multi-wall carbon nanotubes (MWCNTs) were used in manufacturing the polymeric film. The properties of the resins were characterized by various physico-chemical measures such as density, viscosity, iodine value, and saponification value. Structural elucidation was confirmed by Fourier transform infrared spectroscopy and proton nuclear magnetic resonance; the surfaces of the films were examined by field-emission scanning electron microscopy. In addition, pencil hardness and chemical resistivity were measured using standard methods. The effect of enhancing the unsaturation in the fatty acid chains was found to be significant. The resin prepared with dehydrated palm oil showed improved properties in hardness and chemical resistivity testing, and the incorporation of MWCNTs enhanced the thermal stability and hardness of the films as well.

Keywords: Alkyd resin, nano-coatings, dehydration, palm oil.

207 Numerical Simulations of Acoustic Imaging in Hydrodynamic Tunnel with Model Adaptation and Boundary Layer Noise Reduction

Authors: Sylvain Amailland, Jean-Hugh Thomas, Charles Pézerat, Romuald Boucheron, Jean-Claude Pascal

Abstract:

The noise requirements for naval and research vessels reflect an increasing demand for quieter ships, in order to fulfil current regulations and to reduce the effects on marine life. Hence, new methods dedicated to the characterization of propeller noise, which is the main source of noise in the far field, are needed. The study of cavitating propellers in a closed section is interesting for analyzing hydrodynamic performance, but it involves significant difficulties for hydroacoustic study, especially due to reverberation and boundary layer noise in the tunnel. The aim of this paper is to present a numerical methodology for the identification of hydroacoustic sources on marine propellers using hydrophone arrays in a large hydrodynamic tunnel. The main difficulties are linked to the reverberation of the tunnel and the boundary layer noise, which strongly reduce the signal-to-noise ratio. In this paper, it is proposed to estimate the reflection coefficients using an inverse method and reference transfer functions measured in the tunnel. This approach makes it possible to reduce the uncertainties of the propagation model used in the inverse problem. In order to reduce the boundary layer noise, a cleaning algorithm is presented that takes advantage of the low-rank and sparse structure of the cross-spectrum matrices of the acoustic and boundary layer noise. This approach allows the acoustic signal to be recovered even well below the boundary layer noise. The improvement brought by this method is visible on acoustic maps resulting from beamforming and DAMAS algorithms.
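
The exact cleaning algorithm is not given in the abstract. As a sketch of the low-rank-plus-sparse idea it relies on, the code below alternates a rank projection with a hard threshold, in the spirit of GoDec-style decompositions; the rank and threshold are hypothetical tuning choices, not the paper's method:

```python
import numpy as np

def lowrank_sparse_split(M, rank, thresh, n_iter=50):
    """Crude alternating projection: approximate M (a cross-spectral
    matrix) as L + S, with L of fixed low rank (acoustic part) and S
    sparse (boundary layer noise outliers)."""
    S = np.zeros_like(M)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]       # rank-r projection
        R = M - L
        S = np.where(np.abs(R) > thresh, R, 0.0)       # keep large residuals
    return L, S
```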

Keywords: Acoustic imaging, boundary layer noise denoising, inverse problems, model adaptation.

206 Changes of Poultry Meat Chemical Composition, in Relationship with Lighting Schedule

Authors: P. C. Boisteanu, M. G. Usturoi, Roxana Lazar, B. V. Avarvarei

Abstract:

The paper falls within the framework of a complex research program initiated from the hypothesis that a correlation exists between pineal indolic and peptide hormones and the rhythm of somatic development, thus implicating the epithalamus-epiphysis complex. In birds, the pineal gland contains a circadian oscillator that plays a main role in the temporal organization of cerebral functions. The secretion of pineal indolic hormones is characterized by a highly rhythmic endogenous alternation, modulated by the light/darkness (L/D) succession and by temperature as well. The research was carried out using 100 broiler chickens of the “Ross” commercial hybrid, randomly allocated to two experimental batches: the Lc batch, reared under a 12L/12D lighting schedule, and the Lexp batch, which was photically pinealectomised through continuous exposure to light (150 lux, 24 hours, 56 days). The chemical and physical features of the meat from the breast fillet and thigh muscles were studied, determining the dry matter, protein, fat, collagen and salt contents and the pH value. Besides the variations of meat chemical composition in relation to the lighting schedule, other parameters were studied: live weight dynamics, feed intake and degree of somatic development. The results became significant from 7 days of age onward, when variations of the studied parameters were registered, revealing that the physiological activity of the pineal gland, in relation to the lighting schedule, can be tracked by monitoring the technological parameters of somatic development usually studied in broiler rearing practice.

Keywords: lighting schedule, physico-chemical characteristics of meat, pineal gland in birds.

205 A Comparative Analysis of Multiple Criteria Decision Making Analysis Methods for Strategic, Tactical, and Operational Decisions in Military Fighter Aircraft Selection

Authors: C. Ardil

Abstract:

This paper presents a comparative analysis of multiple criteria decision making analysis methods for strategic, tactical, and operational decisions in military fighter aircraft selection for air force fleet planning. The evaluation criteria governing the decision analysis process are determined from the literature for three existing military combat aircraft. The military fighter aircraft selection problem is structured using the "preference analysis for reference ideal solution (PARIS)" approach in multiple criteria decision making analysis (MCDMA). Systematic comparisons were made with existing MCDMA methods (PARIS and TOPSIS) to verify the stability and accuracy of the results obtained. The proposed integrated MCDMA systematic approach is expected to address the issues encountered in the aircraft selection process. The comparative analysis results show that the proposed method is an effective and accurate tool that can help analysts make better strategic, tactical, and operational decisions.
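
TOPSIS, one of the comparison methods named above, can be sketched in a few lines; the decision matrix and weights below are hypothetical, not the paper's aircraft data:

```python
import numpy as np

def topsis(X, w, benefit):
    """Rank alternatives by relative closeness to the ideal solution.
    X: (m alternatives x n criteria) decision matrix, w: criterion
    weights, benefit: bool per criterion (True = larger is better)."""
    R = X / np.linalg.norm(X, axis=0)            # vector normalization
    V = R * w                                    # weighted normalized matrix
    ideal = np.where(benefit, V.max(0), V.min(0))
    anti = np.where(benefit, V.min(0), V.max(0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)               # higher = better

# Hypothetical scores for three fighters on four criteria
# (the fourth criterion, e.g. cost, is treated as a cost criterion).
X = np.array([[7.5, 8.0, 6.5, 7.0],
              [8.0, 7.0, 7.5, 6.5],
              [7.0, 7.5, 8.0, 7.5]])
w = np.array([0.3, 0.3, 0.2, 0.2])
closeness = topsis(X, w, np.array([True, True, True, False]))
```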

Keywords: aircraft, military fighter aircraft selection, multiple criteria decision making, multiple criteria decision making analysis, mean weight, entropy weight, MCDMA, PARIS, TOPSIS, Saab Gripen, Dassault Rafale, Eurofighter Typhoon

204 Evaluation of Drought Tolerance Indices in Dryland Bread Wheat Genotypes under Post-Anthesis Drought Stress

Authors: Mokhtar Ghobadi, Mohammad-Eghbal Ghobadi, Danial Kahrizi, Alireza Zebarjadi, Mahdi Geravandi

Abstract:

Post-anthesis drought stress is the most important problem affecting wheat production in dryland fields, especially in Mediterranean regions. The main objective of this research was to evaluate drought tolerance indices in dryland wheat genotypes under post-anthesis drought stress. The research included two experiments. In each experiment, twenty dryland bread wheat genotypes were sown in a randomized complete block design (RCBD) with three replications. One experiment was conducted under rain-fed conditions (post-anthesis drought stress), the other under non-stress conditions (with supplemental irrigation). Drought tolerance indices including Stress Tolerance (TOL), Mean Productivity (MP), Geometric Mean Productivity (GMP), Stress Susceptibility Index (SSI), Stress Tolerance Index (STI), Harmonic Mean (HAM), Yield Index (YI) and Yield Stability Index (YSI) were evaluated based on grain yield under rain-fed (Ys) and supplemental irrigation (Yp) environments. G10 and G12 were the most tolerant genotypes based on TOL and SSI, but based on the MP, GMP, STI, HAM and YI indices, G1 and G2 were selected. The STI, GMP and MP indices were highly correlated with grain yield under both rain-fed and supplemental irrigation conditions and were recognized as appropriate indices to identify genotypes with high grain yield and low sensitivity to drought stress environments.
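
These indices have standard published definitions (e.g. TOL = Yp - Ys, GMP = sqrt(Yp*Ys), STI = Yp*Ys/mean(Yp)^2, SSI after Fischer and Maurer). The sketch below computes them for hypothetical yields, not the paper's data:

```python
import numpy as np

def drought_indices(Yp, Ys):
    """Standard drought tolerance indices from yields under irrigated
    (Yp) and rain-fed stress (Ys) conditions, per genotype."""
    SI = 1 - Ys.mean() / Yp.mean()              # stress intensity
    return {
        "TOL": Yp - Ys,
        "MP":  (Yp + Ys) / 2,
        "GMP": np.sqrt(Yp * Ys),
        "SSI": (1 - Ys / Yp) / SI,
        "STI": (Yp * Ys) / Yp.mean() ** 2,
        "HAM": 2 * Yp * Ys / (Yp + Ys),
        "YI":  Ys / Ys.mean(),
        "YSI": Ys / Yp,
    }

# Hypothetical grain yields (t/ha) for four genotypes.
Yp = np.array([4.2, 3.8, 3.5, 4.0])
Ys = np.array([2.9, 2.1, 2.8, 2.4])
idx = drought_indices(Yp, Ys)
```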

Keywords: Dryland wheat, Supplemental irrigation, Tolerance indices

203 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things

Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker

Abstract:

Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect changes in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracies and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties, and sometimes the data are conflicting. In this study, we present a framework that exploits the capabilities of Evidence Theory (a.k.a. the Dempster-Shafer and Dezert-Smarandache Theories) for representing and managing uncertainty and conflict, in order to achieve fast change detection and effectively deal with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Mass functions are then calculated, and the related combination rules are applied to combine the mass values among all sensors. Furthermore, we applied the method to estimate the minimum number of sensors needed for combination, so that computational efficiency could be improved. A cumulative sum test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
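
Two ingredients of the framework, the KL divergence between Gaussian distribution estimates and a CUSUM-type test, can be sketched as follows; the evidence-combination step across sensors is omitted, and all data and thresholds are illustrative:

```python
import numpy as np

def kl_gauss(mu0, s0, mu1, s1):
    """KL divergence D( N(mu0, s0^2) || N(mu1, s1^2) )."""
    return np.log(s1 / s0) + (s0**2 + (mu0 - mu1) ** 2) / (2 * s1**2) - 0.5

def cusum(x, mu0, mu1, sigma, h=5.0):
    """One-sided CUSUM on the log-likelihood ratio for a Gaussian
    mean shift mu0 -> mu1; returns the alarm index or None."""
    llr = (mu1 - mu0) / sigma**2 * (x - (mu0 + mu1) / 2)
    g = 0.0
    for k, z in enumerate(llr):
        g = max(0.0, g + z)          # reset-at-zero recursion
        if g > h:
            return k
    return None

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(1, 1, 200)])
alarm = cusum(x, mu0=0.0, mu1=1.0, sigma=1.0)   # alarm soon after k = 200
```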

Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data.

202 Improving Fault Resilience and Reconstruction of Overlay Multicast Tree Using Leaving Time of Participants

Authors: Bhed Bahadur Bista

Abstract:

Network layer multicast, i.e. IP multicast, even after many years of research, development and standardization, is not deployed on a large scale due to both technical (e.g. upgrading of routers) and political (e.g. policy making and negotiation) issues. Researchers looked for alternatives and proposed application/overlay multicast, where multicast functions are handled by end hosts rather than network layer routers. Member hosts wishing to receive multicast data form a multicast delivery tree. The intermediate hosts in the tree also act as routers, i.e. they forward data to the hosts below them in the tree. Unlike IP multicast, where a router cannot leave the tree until all members below it leave, in overlay multicast any member can leave the tree at any time, thus disjoining the tree and disrupting data dissemination. All the disrupted hosts have to rejoin the tree. This characteristic of overlay multicast makes the multicast tree unstable and causes data loss and rejoin overhead. In this paper, we propose that each node set its leaving time from the tree and send join requests to a number of nodes in the tree. The nodes in the tree reject a request if their leaving time is earlier than that of the requesting node; otherwise they accept the request. The node can then join at one of the accepting nodes. This makes the tree more stable, as the nodes join the tree according to their leaving time, with the earliest-leaving nodes at the leaves of the tree. Some intermediate nodes may not follow their declared leaving time and may leave earlier, thus disrupting the tree. For this, we propose a proactive recovery mechanism so that disrupted nodes can rejoin the tree at predetermined nodes immediately. We have shown by simulation that there is less overhead when joining the multicast tree, and that the recovery time of the disrupted nodes is much lower than in previous works.
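
A minimal sketch of the leaving-time join rule described above (a node rejects a join request when it would leave before the requester; the names and structure are illustrative, not the paper's implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    leave_at: float                      # the node's announced leaving time
    children: list = field(default_factory=list)

    def handle_join(self, requester: "Node") -> bool:
        """Accept the requester as a child only if we leave no earlier
        than it does, so longer-lived nodes sit higher in the tree."""
        if self.leave_at < requester.leave_at:
            return False                 # reject: we would leave first
        self.children.append(requester)
        return True

root = Node("root", leave_at=1000.0)
mid = Node("mid", leave_at=600.0)
root.handle_join(mid)                    # accepted: mid leaves before root
late = Node("late", leave_at=800.0)
# 'late' asks several nodes: mid rejects (600 < 800), root accepts.
mid.handle_join(late), root.handle_join(late)
```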

Keywords: Network layer multicast, Fault Resilience, IP multicast

201 Improved Segmentation of Speckled Images Using an Arithmetic-to-Geometric Mean Ratio Kernel

Authors: J. Daba, J. Dubois

Abstract:

In this work, we improve a previously developed segmentation scheme aimed at extracting edge information from speckled images using a maximum likelihood edge detector. The scheme was based on finding a threshold for the probability density function of a new kernel, defined as the arithmetic mean-to-geometric mean ratio field over a circular neighborhood set, and, in a general context, is founded on a likelihood random field model (LRFM). The segmentation algorithm was applied to discriminated speckle areas obtained using simple elliptic discriminant functions based on measures of the signal-to-noise ratio with fractional order moments. A rigorous stochastic analysis was used to derive an exact expression for the cumulative distribution function corresponding to the probability density function of the random field. Based on this, an accurate probability of error was derived and the performance of the scheme was analysed. The improved segmentation scheme performed well for both simulated and real images and showed results superior to those previously obtained using the original LRFM scheme and standard edge detection methods. In particular, the false alarm probability was markedly lower than that of the original LRFM method, with oversegmentation artifacts virtually eliminated. The importance of this work lies in the development of a stochastic-based segmentation allowing an accurate quantification of the probability of false detection. Non-visual quantification and misclassification in medical ultrasound speckled images is relatively new and is of interest to clinicians.
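
The arithmetic-to-geometric mean ratio field is straightforward to compute. The sketch below uses a square window via scipy.ndimage as a simplification of the paper's circular neighborhood, and the threshold choice (which the paper derives from the kernel's probability density function) is left open:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def am_gm_ratio(img, size=7, eps=1e-12):
    """Arithmetic-to-geometric mean ratio over a local window.
    By the AM-GM inequality the ratio is >= 1 everywhere, and it rises
    where the neighborhood is inhomogeneous, e.g. across edges."""
    img = img.astype(float) + eps                     # guard log(0)
    am = uniform_filter(img, size)                    # local arithmetic mean
    gm = np.exp(uniform_filter(np.log(img), size))    # local geometric mean
    return am / gm

# Thresholding the ratio field yields an edge map:
# edges = am_gm_ratio(speckled_image) > threshold
```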

Keywords: Discriminant function, false alarm, segmentation, signal-to-noise ratio, skewness, speckle.

200 Numerical Study of Bubbling Fluidized Beds Operating at Sub-atmospheric Conditions

Authors: Lanka Dinushke Weerasiri, Subrat Das, Daniel Fabijanic, William Yang

Abstract:

Fluidization at vacuum pressure has been a topic of growing research interest. Several industrial applications (such as drying, extractive metallurgy, and chemical vapor deposition (CVD)) can potentially take advantage of vacuum pressure fluidization. In particular, the fine chemical industry requires processing under safe conditions for thermolabile substances, and reduced-pressure fluidized beds offer an alternative. Fluidized beds under vacuum conditions provide optimal conditions for the treatment of granular materials where the reduced gas pressure maintains an operational environment outside flammability conditions. Fluidization at low pressure is markedly different from the usual gas flow patterns of atmospheric fluidization. The different flow regimes can be characterized by the dimensionless Knudsen number. Nevertheless, the hydrodynamics of bubbling vacuum fluidized beds has, to the authors' best knowledge, not been investigated. In this work, the two-fluid numerical method was used to determine the impact of reduced pressure on the fundamental properties of a fluidized bed. A slip flow model implemented through Ansys Fluent User Defined Functions (UDFs) was used to determine the interphase momentum exchange coefficient. A wide range of operating pressures was investigated (1.01, 0.5, 0.25, 0.1 and 0.03 bar). The gas was supplied by a uniform inlet at 1.5 Umf and 2 Umf. The predicted minimum fluidization velocity (Umf) shows excellent agreement with the experimental data. The results show that the operating pressure has a notable impact on the bed properties and hydrodynamics. Furthermore, they show that the existing Gorosko correlation for predicting bed expansion is not applicable under reduced-pressure conditions.
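
The Knudsen number Kn = lambda/L, with mean free path lambda = k_B*T/(sqrt(2)*pi*d^2*p), makes the pressure dependence explicit. The sketch below evaluates it for an air-like gas and a 100 µm particle; the gas diameter and length scale are illustrative assumptions, not the paper's values:

```python
import numpy as np

def knudsen(T, p, d, L):
    """Kn = lambda / L, with the hard-sphere mean free path
    lambda = k_B * T / (sqrt(2) * pi * d^2 * p)."""
    k_B = 1.380649e-23                  # Boltzmann constant, J/K
    lam = k_B * T / (np.sqrt(2) * np.pi * d**2 * p)
    return lam / L

# Air-like gas (d ~ 3.7e-10 m) around a 100 µm particle: Kn grows as
# the operating pressure drops, moving away from pure continuum flow.
for p_bar in [1.01, 0.5, 0.25, 0.1, 0.03]:
    kn = knudsen(300.0, p_bar * 1e5, 3.7e-10, 1e-4)
    print(f"{p_bar:5.2f} bar -> Kn = {kn:.2e}")
```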

Keywords: Computational fluid dynamics, fluidized bed, gas-solid flow, vacuum pressure, slip flow, minimum fluidization velocity.

199 An Educational Application of Online Games for Learning Difficulties

Authors: M. Margoudi, Z. Smyrnaiou

Abstract:

The current paper presents the results of a case study. During the past few years, the number of children diagnosed with learning difficulties has increased drastically, especially cases of ADHD (Attention Deficit Hyperactivity Disorder). One of the core characteristics of ADHD is a deficit in working memory functions. The review of the literature indicates a plethora of educational software aimed at training and enhancing working memory. Nevertheless, in the current paper, the possibility of using free online games for the same purpose is explored. Another issue of interest is the potential effect of working memory training on the core symptoms of ADHD. In order to explore the abovementioned research questions, three digital tests are employed, all of which were developed on the E-slate platform by the author, to check the levels of ADHD symptoms and to be used as diagnostic tools, both at the beginning and at the end of the case study. The tools used during the main intervention of the research are free online games for the training of working memory. The research and the data analysis focus on the following axes: a) the presence of, and possible change in, two of the core symptoms of ADHD, attention and impulsivity, and b) a possible change in the general cognitive abilities of the individual. The case study was conducted with the participation of a thirteen-year-old female student, diagnosed with ADHD, during after-school hours. The results of the study indicate positive changes both in the levels of attention and impulsivity. Therefore, we conclude that the training of working memory through the use of free online games has a positive impact on the characteristics of ADHD. Finally, concerning the second research question, the change in general cognitive abilities, no significant changes were noted.

Keywords: ADHD, attention, impulsivity, online games.

198 Development of a Double Coating Technique for Recycled Concrete Aggregates Used in Hot-mix Asphalt

Authors: Abbaas I. Kareem, H. Nikraz

Abstract:

The use of recycled concrete aggregates (RCAs) in hot-mix asphalt (HMA) production could ease natural aggregate shortages and maintain sustainability in modern societies. However, it is the attached cement mortar and other impurities that make RCAs behave differently from high-quality aggregates. Therefore, different upgrading treatments have been suggested to enhance RCA properties before use in HMA production. Disappointingly, some of these treatments have caused degradation of some RCA properties. In order to avoid degradation, a coating technique is developed here. This technique is based on the combination of two main treatments, and is therefore named the double coating technique (DCT). Dosages of 0%, 20%, 40% and 60% uncoated RCA, RCA coated with Cement Slag Paste (CSP), and Double Coated Recycled Concrete Aggregates (DCRCAs) in place of granite aggregates were evaluated. The results indicate that the DCT improves the strength and reduces the water absorption of the DCRCAs compared with uncoated RCAs and RCA coated with CSP. In addition, the DCRCA asphalt mixtures exhibit stability values higher than those obtained for mixes made with granite aggregates, uncoated RCAs and RCAs coated with CSP. The DCRCA asphalt mixtures also require less bitumen to achieve the optimum bitumen content (OBC) than those manufactured with uncoated RCA and RCA coated with CSP. Although the results obtained are encouraging, more testing is required in order to examine the effect of the DCT on performance properties of DCRCA asphalt mixtures such as rutting and fatigue.

Keywords: Recycled concrete aggregates, hot mix asphalt, double coating technique, aggregate crushed value, Marshall parameters.
