Search results for: Inverse q-Gaussian distribution
1265 Numerical and Experimental Investigation of Airflow inside a Car Cabin
Authors: Mokhtar Djeddou, Amine Mehel, Georges Fokoua, Anne Tanière, Patrick Chevrier
Abstract:
Commuters’ exposure to air pollution, particularly to particulate matter inside vehicles, is a significant health issue. Assessing particle concentrations and characterizing their distribution is an important first step in understanding and proposing solutions to improve car cabin air quality. It is known that particle dynamics is intimately driven by particle-turbulence interactions. In order to analyze and model pollutant distribution inside car cabins, it is crucial to first examine the single-phase flow topology and its associated turbulence characteristics. Within this context, Computational Fluid Dynamics (CFD) simulations were conducted to model airflow inside a full-scale car cabin using the Reynolds-Averaged Navier-Stokes (RANS) approach combined with the first-order Realizable k-ε model to close the RANS equations. To assess the numerical model, a campaign of velocity field measurements at different locations in the front and back of the car cabin was carried out using the hot-wire anemometry technique. Comparison between numerical and experimental results shows good agreement of the velocity profiles. Additionally, visualization of streamlines shows the formation of a jet flow developing out of the dashboard air vents and the formation of large vortex structures, particularly between the front and back-seat compartments. These vortical structures could play a key role in the accumulation and clustering of particles in a turbulent flow.
Keywords: Car cabin, CFD, hot-wire anemometry, vortical flow.
1264 Information Retrieval: A Comparative Study of Textual Indexing Using an Oriented Object Database (db4o) and the Inverted File
Authors: Mohammed Erritali
Abstract:
The growth in the volume of text data such as books and articles held in libraries over the centuries has made it necessary to establish effective mechanisms to locate them. Early techniques such as abstracting, indexing and the use of classification categories marked the birth of a new field of research called "Information Retrieval". Information Retrieval (IR) can be defined as the task of designing models and systems whose purpose is to facilitate access to a set of documents in electronic form (a corpus), so that a user can find the documents relevant to him, that is to say, the content that matches his information needs. Most information retrieval models use a specific data structure to index a corpus, called the "inverted file" or "inverted index". This inverted file collects information on all terms over the corpus documents, specifying the identifiers of the documents that contain the term in question, the frequency of each term in the documents of the corpus, the positions of the occurrences of the word, and so on. In this paper we use an object-oriented database (db4o) instead of the inverted file; that is to say, instead of searching for a term in the inverted file, we search for it in the db4o database. The purpose of this work is to carry out a comparative study to see whether object-oriented databases can compete with the inverted index in terms of access speed and resource consumption on a large volume of data.
Keywords: Information Retrieval, indexation, oriented object database (db4o), inverted file.
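As a minimal illustration of the inverted-file structure this entry compares against, the sketch below builds a term-to-postings map recording document identifiers, term frequencies and positions; the tiny corpus is invented for illustration.

```python
from collections import defaultdict

def build_inverted_file(corpus):
    """Build a minimal inverted file: term -> {doc_id: [positions]}."""
    index = defaultdict(dict)
    for doc_id, text in corpus.items():
        for pos, term in enumerate(text.lower().split()):
            index[term].setdefault(doc_id, []).append(pos)
    return index

def search(index, term):
    """Return (doc_id, frequency, positions) tuples for a query term."""
    postings = index.get(term.lower(), {})
    return [(doc, len(pos), pos) for doc, pos in postings.items()]

corpus = {1: "information retrieval with inverted files",
          2: "object databases for information retrieval"}
idx = build_inverted_file(corpus)
print(search(idx, "information"))   # [(1, 1, [0]), (2, 1, [3])]
```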
1263 Lower Energy Gait Pattern Generation in 5-Link Biped Robot Using Image Processing
Authors: Byounghyun Kim, Youngjoon Han, Hernsoo Hahn
Abstract:
The purpose of this study is to find a natural gait for a biped robot, similar to that of a human being, by analyzing the COG (Center of Gravity) trajectory of human gait. Human beings naturally walk in a way that maintains stability while using minimum energy. This paper seeks the natural gait pattern of a biped robot that uses minimum energy while maintaining stability, by analyzing the human gait pattern measured from gait images on the sagittal plane and the COG trajectory on the frontal plane. The torques of human articulations cannot be applied directly to a biped robot because the two have different degrees of freedom; nonetheless, humans and 5-link biped robots are kinematically similar. We therefore generate the gait pattern of the 5-link biped robot using a genetic algorithm (GA) that adapts the gait pattern, utilizing the human ZMP (Zero Moment Point) and the torques of all articulations measured from the human gait. The proposed algorithm creates a gait pattern for the biped robot that is as fluent as a human being's and minimizes energy consumption, because the gait pattern of the 5-link biped robot model is derived from the torque of each human articulation on the sagittal plane and the ZMP trajectory on the frontal plane. The paper demonstrates the superiority of the proposed algorithm by evaluating two 5-link biped robots, one using the gait pattern generated in the conventional way via inverse kinematics and one using the proposed pattern, with respect to visual naturalness and efficiency.
Keywords: 5-link biped robot, gait pattern, COG (Center of Gravity), ZMP (Zero Moment Point).
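The GA step can be illustrated schematically. The sketch below evolves a gait-parameter vector by minimizing an illustrative torque-energy cost with a penalty when a simplified ZMP proxy leaves an assumed support region; the cost function, bounds and ZMP proxy are all assumptions for illustration, not the authors' robot model.

```python
import numpy as np

rng = np.random.default_rng(0)

def gait_cost(theta):
    """Illustrative fitness: squared-torque energy plus a penalty when a toy
    ZMP proxy leaves an assumed support interval of +/-0.1 m."""
    energy = np.sum(theta ** 2)              # stand-in for summed joint torques
    zmp = 0.05 * np.sum(np.sin(theta))       # toy ZMP proxy (assumption)
    return energy + 1e3 * max(0.0, abs(zmp) - 0.1)

pop = rng.uniform(-1.0, 1.0, size=(40, 10))  # 40 candidates, 10 gait parameters
for _ in range(100):
    fitness = np.array([gait_cost(p) for p in pop])
    parents = pop[np.argsort(fitness)[:20]]                 # truncation selection
    children = parents[rng.integers(0, 20, 20)] \
               + rng.normal(0.0, 0.05, (20, 10))            # mutation
    pop = np.vstack([parents, children])                    # elitism
best = min(pop, key=gait_cost)
print("best cost:", gait_cost(best))
```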
1262 Optimal Consume of NaOH in Starches Gelatinization for Froth Flotation
Authors: André C. Silva, Débora N. Sousa, Elenice M. S. Silva, Thales P. Fontes, Raphael S. Tomaz
Abstract:
Starches are widely used as depressants in froth flotation operations in Brazil due to their efficiency, increasing the selectivity in the inverse flotation of quartz by depressing the iron ore. The starches market has been growing and improving in recent years, leading to better products that meet the requirements of the mineral industry. The major source of starch used for iron ore is corn starch, which needs to be gelatinized with sodium hydroxide (NaOH) prior to use. This stage has a direct impact on industrial costs, since the lowest consumption of NaOH in gelatinization provides better control of the pH in froth flotation and reduces the amount of electrolytes present in the pulp. In order to evaluate their gelatinization degree, different starches and flours were subjected to NaOH addition and temperature variation experiments. Samples of starch (corn, cassava, HIPIX 100, HIPIX 101 and HIPIX 102, commercialized by Ingredion) and flour (cassava and potato) were tested. The starch samples were characterized through Scanning Electron Microscopy, and the amylose content was determined through spectrometry, swelling and solubility tests. The gelatinization was carried out through titration with NaOH, keeping the solution temperature constant at 40 °C. At the end of the tests, the optimal amount of NaOH consumed to gelatinize the starch or flour from different botanical sources was established, along with a correlation between the amylopectin content of the starch and the starch/NaOH ratio needed for its gelatinization.
Keywords: Froth flotation, gelatinization, sodium hydroxide, starches and flours.
1261 Hematologic Inflammatory Markers and Inflammation-Related Hepatokines in Pediatric Obesity
Authors: Mustafa M. Donma, Orkide Donma
Abstract:
Obesity in children draws particular attention because it may threaten the individual's future life through the many chronic diseases it can lead to. Most of these diseases, including obesity itself, are related to inflammation. For this reason, inflammation-related parameters gain importance. Within this context, complete blood cell counts and the ratios or indices derived from these counts have recently found a platform for use as inflammatory markers. So far, mostly adipokines have been investigated within the field of obesity. Metabolic inflammation is closely associated with cellular dysfunction. In this study, hematologic inflammatory markers and cytokines produced predominantly by the liver (fibroblast growth factor-21 (FGF-21) and fetuin A) were investigated in pediatric obesity. Two groups were constituted from 76 obese children based on World Health Organization criteria. Group 1 was composed of children whose age- and sex-adjusted body mass index (BMI) percentiles were between 95 and 99. Group 2 consisted of children above the 99th percentile. The first and the latter groups were defined as obese (OB) and morbidly obese (MO), respectively. Anthropometric measurements of the children were performed. Informed consent forms and the approval of the institutional ethics committee were obtained. Blood cell counts and ratios were determined by an automated hematology analyzer, and the related ratios and indices were calculated. Statistical evaluation of the data was performed with the SPSS program. There was no statistically significant difference between the groups in terms of the neutrophil-to-lymphocyte ratio, the monocyte-to-high-density-lipoprotein-cholesterol ratio or the platelet-to-lymphocyte ratio. Mean platelet volume and platelet distribution width values were decreased (p < 0.05), while total platelet count, red cell distribution width (RDW) and systemic immune inflammation index values were increased (p < 0.01) in the MO group. Both hepatokines were increased in the same group; however, the increases were not statistically significant. In this group, a strong correlation was also calculated between FGF-21 and RDW when controlled for age, hematocrit, iron and ferritin (r = 0.425; p < 0.01). In conclusion, the association found in the MO group between RDW, a hematologic inflammatory marker, and FGF-21, an inflammation-related hepatokine, is an important finding discriminating between OB and MO children. This association is even more powerful when controlled for age and iron-related parameters.
Keywords: Childhood obesity, fetuin A, fibroblast growth factor-21, hematologic markers, red cell distribution width.
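The correlation "controlled for age, hematocrit, iron and ferritin" is a partial correlation. A minimal sketch of how such a coefficient can be computed (regress out the covariates, then correlate the residuals), with synthetic data standing in for the clinical measurements:

```python
import numpy as np

def partial_corr(x, y, covariates):
    """Correlate the residuals of x and y after regressing out covariates."""
    Z = np.column_stack([np.ones(len(x))] + covariates)
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(1)
n = 40                                    # synthetic stand-ins, not study data
age, hct, iron, ferritin = rng.normal(size=(4, n))
fgf21 = 0.5 * age + rng.normal(size=n)
rdw = 0.4 * age + 0.3 * fgf21 + rng.normal(size=n)
print(partial_corr(fgf21, rdw, [age, hct, iron, ferritin]))
```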
1260 Studying the Value-Added Chain for the Fish Distribution Process at Quang Binh Fishing Port in Vietnam
Authors: Van Chung Nguyen
Abstract:
The purpose of this research is to study the current status of the value chain for fish distribution at Quang Binh Fishing Port, based on 360 research samples whose subjects are fishermen, traders, retailers, and businesses. The research applies the value chain theoretical framework of Kaplinsky and Morris to quantify and describe the market channels and the actors participating in the value chain, and to analyze the value-added process of these actors along the market channels. The analysis shows that fishermen catch fish directly with high economic efficiency, but processing enterprises and especially retailers are the agents that obtain the higher added value. The role of processing enterprises remains unclear due to outdated processing technology; retailers, by contrast, capture the highest added value. This shows that the added value of the fish supply chain at Quang Binh fishing port is still limited, leading to low output quality. As a result, the selling price of fish is still high relative to the abundant fish resources, which lowers consumption and limits exports owing to the quality constraints of the processing enterprises. This in turn reduces demand and fishing capacity, leaving productivity below its potential. To improve the fish value chain at the port, it is necessary to focus on improving product quality, strengthening linkages between actors, and building brands and consumption markets for the products, while at the same time improving the capacity of export processing enterprises.
Keywords: Quang Binh fishing port, value chain, fish market, distribution channels.
1259 Complex Wavelet Transform Based Image Denoising and Zooming under the LMMSE Framework
Authors: T. P. Athira, Gibin Chacko George
Abstract:
This paper proposes a dual-tree complex wavelet transform (DT-CWT) based directional interpolation scheme for noisy images. The problems of denoising and interpolation are modelled as the estimation of the noiseless and missing samples under the same optimal-estimation framework. Initially, the DT-CWT is used to decompose an input low-resolution noisy image into low- and high-frequency subbands. The high-frequency subband images are interpolated by linear minimum mean square error (LMMSE) based interpolation, which preserves the edges of the interpolated images. For each noisy LR image sample, we compute multiple estimates of it along different directions and then fuse those directional estimates for a more accurate denoised LR image. The estimation parameters calculated in the denoising process can be readily reused to interpolate the missing samples. The inverse DT-CWT is applied to the denoised input and the interpolated high-frequency subband images to obtain the high-resolution image. Compared with conventional schemes that perform denoising and interpolation in tandem, the proposed DT-CWT based noisy image interpolation method can reduce many noise-caused interpolation artifacts and preserve the image edge structures well. The visual and quantitative results show that the proposed technique outperforms many of the existing denoising and interpolation methods.
Keywords: Dual-tree complex wavelet transform (DT-CWT), denoising, interpolation, optimal estimation, super resolution.
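The fusion of directional estimates under the LMMSE criterion can be sketched as inverse-variance weighting: each unbiased directional estimate is weighted by the reciprocal of its error variance. A scalar illustration (the variances are assumed known here; in the paper they are estimated from subband statistics):

```python
import numpy as np

def lmmse_fuse(estimates, error_vars):
    """Fuse unbiased directional estimates by inverse-variance (LMMSE) weighting."""
    w = 1.0 / np.asarray(error_vars)
    return np.sum(w * np.asarray(estimates)) / np.sum(w)

# Three directional estimates of one noisy sample with assumed error variances
estimates = [1.08, 0.95, 1.20]
error_vars = [0.01, 0.04, 0.09]
print(lmmse_fuse(estimates, error_vars))  # weights the most reliable direction highest
```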
1258 Roundabout Optimal Entry and Circulating Flow Induced by Road Hump
Authors: Amir Hossein Pakshir, A. Hossein Pour, N. Jahandar, Ali Paydar
Abstract:
Roundabouts work on the principle of circulating and entry flows, where the maximum entry flow rate depends largely on the circulating flow, bearing in mind that entering vehicles must give way to circulating ones. Where an existing roundabout has a road hump installed at the entry arm, it can be hypothesized that the kinematics of vehicles may prevent the entry arm from achieving optimum performance. Road humps are traffic calming devices placed across the road width solely as a speed reduction mechanism. They are the preferred traffic calming option in Malaysia and are often used on single and dual carriageway local routes. The speed limit on local routes is 30 mph (50 km/h). According to the UK Department for Transport, road humps in their various forms achieve the largest mean speed reduction, up to 10 mph (16 km/h) based on a mean speed of 30 mph before traffic calming. The underlying aim of reduced speed should be to achieve a 'safe' distribution of speeds that reflects the function of the road and the impacts on the local community. Constraining the safe distribution of speeds may lead to poor driver timing and delayed reflex reactions that can cause accidents. Previous studies on road hump impacts have focused mainly on speed reduction, traffic volume, noise and vibration, and the discomfort and delay resulting from road humps. This paper addresses the optimal entry and circulating flow induced by road humps. Results show that roundabout entry and circulating flows perform better where there is no road hump at the entrance.
Keywords: Road hump, roundabout, speed reduction.
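Entry capacity is commonly modeled as a decreasing function of circulating flow; a linear relation of the form Qe = F − fc·Qc is typical of UK empirical models. The sketch below uses illustrative coefficients (F and fc are assumptions, not values from the paper):

```python
def entry_capacity(q_circ, F=1200.0, fc=0.6):
    """Linear entry-capacity model: capacity falls as circulating flow rises.
    F  : maximum entry flow at zero circulating flow (veh/h, assumed)
    fc : sensitivity of entry capacity to circulating flow (assumed)"""
    return max(0.0, F - fc * q_circ)

for q in (0, 400, 800, 1200):           # circulating flow in veh/h
    print(q, entry_capacity(q))
```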
1257 Reconstitute Information about Discontinued Water Quality Variables in the Nile Delta Monitoring Network Using Two Record Extension Techniques
Authors: Bahaa Khalil, Taha B. M. J. Ouarda, André St-Hilaire
Abstract:
The world economic crisis and budget constraints have caused authorities, especially in developing countries, to rationalize water quality monitoring activities. Rationalization consists of reducing the number of monitoring sites, the number of samples, and/or the number of water quality variables measured. The reduction in water quality variables is usually based on correlation: if two variables exhibit high correlation, some of the information produced may be redundant, so one variable can be discontinued while the other continues to be measured. Later, the ordinary least squares (OLS) regression technique is employed to reconstitute information about the discontinued variable by using the continuously measured one as an explanatory variable. In this paper, two record extension techniques are employed to reconstitute information about discontinued water quality variables: OLS and the Line of Organic Correlation (LOC). An empirical experiment is conducted using water quality records from the Nile Delta water quality monitoring network in Egypt. The record extension techniques are compared for their ability to predict different statistical parameters of the discontinued variables. Results show that OLS is better at estimating individual water quality records; however, the results indicate an underestimation of the variance in the extended records. The LOC technique is superior in preserving the characteristics of the entire distribution and avoids underestimating the variance. It is concluded that OLS can be used for the substitution of missing values, while LOC is preferable for inferring statements about the probability distribution.
Keywords: Record extension, record augmentation, monitoring networks, water quality indicators.
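The two techniques differ only in the slope they fit: OLS uses b = r·sy/sx, which shrinks the variance of the predictions by a factor of r², while LOC uses b = sign(r)·sy/sx, which preserves the variance of the extended record. A minimal comparison on synthetic records:

```python
import numpy as np

def extend(x, y, x_new, method="OLS"):
    """Extend record y using predictor x; OLS slope r*sy/sx, LOC slope sign(r)*sy/sx."""
    r = np.corrcoef(x, y)[0, 1]
    b = r * y.std() / x.std() if method == "OLS" else np.sign(r) * y.std() / x.std()
    return y.mean() + b * (x_new - x.mean())

rng = np.random.default_rng(2)
x = rng.normal(size=200)                       # continuously measured variable
y = 2 * x + rng.normal(size=200)               # discontinued variable (synthetic)
x_new = rng.normal(size=200)                   # period with y missing
print("OLS var :", extend(x, y, x_new, "OLS").var())  # underestimates Var(y)
print("LOC var :", extend(x, y, x_new, "LOC").var())  # close to Var(y)
print("true var:", y.var())
```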
1256 An Innovative Transient Free Adaptive SVC in Stepless Mode of Control
Authors: U. Gudaru, D. R. Patil
Abstract:
Electrical distribution systems incur large losses because loads are widely spread and reactive power compensation facilities are inadequate and improperly controlled. In this investigative work, a comprehensive static VAR compensator is employed, consisting of a capacitor bank in five binary sequential steps in conjunction with a thyristor controlled reactor (TCR) of the smallest step size. The work covers performance evaluation through analytical studies and practical implementation on an existing system. A fast-acting, error-adaptive controller is developed, suitable for both contactor- and thyristor-switched capacitors. The switching operations achieved are transient free, so there is practically no need to provide inrush current limiting reactors; the TCR size is minimal, producing only small percentages of non-triplen harmonics; and the scheme facilitates stepless variation of reactive power depending on the load requirement, so as to always maintain the power factor near unity. It is an elegant, closed-loop microcontroller system with self-regulation in adaptive mode for automatic adjustment. It has been successfully tested on a three-phase 50 Hz, Dy11, 11 kV/440 V, 125 kVA distribution transformer, and its functional feasibility and technical soundness are established. The controller developed is new, adaptable to both LT and HT systems, and demonstrated in practice to give reliable performance.
Keywords: Binary sequential switched capacitor bank, TCR, non-triplen harmonics, stepless Q control, transient free.
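With five binary-weighted capacitor steps, 31 discrete susceptance levels are available; the TCR then absorbs the residual between the discrete step and the exact demand, giving stepless control. A sketch of the step-selection logic (the step size is an assumed value):

```python
def svc_dispatch(q_demand, q_step=10.0, n_steps=5):
    """Choose the binary capacitor combination just above demand; the TCR
    (rated at one smallest step) absorbs the surplus for stepless control.
    q_step: kVAr of the smallest capacitor step (assumed value)."""
    q_max = q_step * (2 ** n_steps - 1)
    q_demand = min(max(q_demand, 0.0), q_max)
    steps = int(min(2 ** n_steps - 1, -(-q_demand // q_step)))  # ceil division
    code = [bool(steps >> k & 1) for k in range(n_steps)]       # banks switched in
    q_tcr = steps * q_step - q_demand                # inductive VAr the TCR absorbs
    return code, q_tcr

code, q_tcr = svc_dispatch(137.5)
print(code, q_tcr)   # 14 steps -> 140 kVAr capacitive, TCR absorbs 2.5 kVAr
```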
1255 An Inverse Approach for Determining Creep Properties from a Miniature Thin Plate Specimen under Bending
Abstract:
This paper describes a new approach that can be used to convert the experimental creep deformation data obtained from a miniaturized thin plate bending specimen test into the corresponding uniaxial data, based on an inverse application of the reference stress method. The geometry of the thin plate is fully defined by the span of the support, l, the width, b, and the thickness, d. First, analytical solutions for the steady-state, load-line creep deformation rate of the thin plates for a Norton power law were obtained under plane stress (b→0) and plane strain (b→∞) conditions, from which it can be seen that the load-line deformation rate of the thin plate under plane-stress conditions is much higher than that under plane-strain conditions. Since analytical solutions are not available for plates with arbitrary b-values, finite element (FE) analyses are used to obtain them. Based on the FE results obtained for various b/l ratios and creep exponents, n, as well as the analytical solutions under plane stress and plane strain conditions, approximate numerical solutions for the deformation rate are obtained by curve fitting. Using these solutions, the reference stress method is utilised to establish the conversion relationships between the applied load and the equivalent uniaxial stress, and between the creep deformations of the thin plate and the equivalent uniaxial creep strains. Finally, the accuracy of the empirical solution is assessed using a set of "theoretical" experimental data.
Keywords: Bending, creep, miniature specimen, thin plate.
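Norton's power law, ε̇ = B·σⁿ, underlies the analytical solutions; given minimum creep rates measured at several stresses, B and n follow from a straight-line fit in log-log space. A sketch with invented data points:

```python
import numpy as np

# Invented (stress in MPa, steady-state creep rate in 1/h) pairs for illustration
stress = np.array([40.0, 60.0, 80.0, 100.0])
rate = np.array([1.1e-6, 8.9e-6, 4.1e-5, 1.3e-4])

# log(rate) = log(B) + n*log(stress): linear least squares in log space
n, logB = np.polyfit(np.log(stress), np.log(rate), 1)
print(f"Norton exponent n = {n:.2f}, coefficient B = {np.exp(logB):.3e}")
```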
1254 Conventional and PSO Based Approaches for Model Reduction of SISO Discrete Systems
Authors: S. K. Tomar, R. Prasad, S. Panda, C. Ardil
Abstract:
The reduction of Single Input Single Output (SISO) discrete systems to lower-order models, using both a conventional and an evolutionary technique, is presented in this paper. The conventional technique combines the advantages of the Modified Cauer Form (MCF) and differentiation. In this method, the original discrete system is first converted into an equivalent continuous system by applying the bilinear transformation. The denominator of the equivalent continuous system and its reciprocal are differentiated successively, and the reduced denominator of the desired order is obtained by combining the differentiated polynomials. The numerator is obtained by matching the quotients of the MCF. The reduced continuous system is then converted back into a discrete system using the inverse bilinear transformation. In the evolutionary technique, Particle Swarm Optimization (PSO) is employed to reduce the higher-order model. The PSO method is based on minimizing the Integral Squared Error (ISE) between the transient responses of the original higher-order model and the reduced-order model for a unit step input. Both methods are illustrated through a numerical example.
Keywords: Discrete System, Single Input Single Output (SISO), Bilinear Transformation, Reduced Order Model, Modified Cauer Form, Polynomial Differentiation, Particle Swarm Optimization, Integral Squared Error.
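The PSO step can be sketched directly: particles encode the reduced-model coefficients and the fitness is the ISE between the step responses. The sketch below fits a first-order discrete model to an illustrative second-order "original"; both systems are invented, not those of the paper.

```python
import numpy as np

def step_full(n=60):
    """Step response of an illustrative 2nd-order 'original':
    y[k] = 1.1*y[k-1] - 0.3*y[k-2] + 0.2*u[k], u = unit step."""
    y = [0.0, 0.0]
    for _ in range(n):
        y.append(1.1 * y[-1] - 0.3 * y[-2] + 0.2)
    return np.array(y[2:])

def step_reduced(b, a, n=60):
    """Step response of the 1st-order candidate y[k] = -a*y[k-1] + b*u[k]."""
    y, yk = [], 0.0
    for _ in range(n):
        yk = -a * yk + b
        y.append(yk)
    return np.array(y)

target = step_full()
ise = lambda p: np.sum((step_reduced(p[0], p[1]) - target) ** 2)

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, (30, 2))              # 30 particles: (b, a)
v = np.zeros_like(x)
pbest, pcost = x.copy(), np.array([ise(p) for p in x])
for _ in range(200):
    g = pbest[np.argmin(pcost)]              # global best
    v = (0.7 * v + 1.5 * rng.random((30, 1)) * (pbest - x)
               + 1.5 * rng.random((30, 1)) * (g - x))
    x = np.clip(x + v, -2.0, 2.0)            # keep the search bounded
    c = np.array([ise(p) for p in x])
    better = c < pcost
    pbest[better], pcost[better] = x[better], c[better]
print(pbest[np.argmin(pcost)], pcost.min())
```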
1253 Power Production Performance of Different Wave Energy Converters in the Southwestern Black Sea
Authors: Ajab G. Majidi, Bilal Bingölbali, Adem Akpınar
Abstract:
This study investigates the amount of energy (the economic wave energy potential) that can be obtained with existing wave energy converters in the high-wave-energy region of the Black Sea, and their performance at different depths in the region. The data needed for this purpose were obtained using the calibrated, nested-layered SWAN wave modelling program, version 41.01AB, forced with Climate Forecast System Reanalysis (CFSR) winds from 1979 to 2009. A wave dataset at 2-hour intervals was accumulated for a sub-grid domain around Karaburun beach in Arnavutkoy, a district of Istanbul. The annual sea state characteristic matrices were calculated over 31 years for five different depths along a line perpendicular to the coastline. From the power matrices of different wave energy converter systems and the characteristic matrices for each possible installation depth, the joint probability distribution tables of the mean wave period (or wave energy period) and the significant wave height were calculated. Then, by combining these distribution tables with the power matrices, the energy that the wave energy converter systems can produce at each depth was determined for the present wave climate. Thus, the economically feasible potential of the coastal zone was revealed, and the effect of depth on the energy converter systems is presented. The Oceantic at 50, 75 and 100 m depths and the Oyster at 5 and 25 m depths present the best performance. Within the 31-year period, 1998 was the most dynamic year and 1989 the least.
Keywords: Annual power production, Black Sea, efficiency, power production performance, wave energy converter.
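The energy computation couples the sea-state occurrence (scatter) table with the converter's power matrix: annual energy ≈ Σ over (Hs, Te) bins of occurrence fraction × device power × hours per year. A toy-sized sketch with invented 3×3 matrices:

```python
import numpy as np

# Rows: significant wave height bins; columns: energy period bins (toy example)
occurrence = np.array([[0.30, 0.20, 0.05],   # fraction of the year per sea state
                       [0.15, 0.15, 0.05],
                       [0.02, 0.05, 0.03]])  # sums to 1.0
power_kw = np.array([[ 5.0, 10.0, 12.0],     # converter power matrix (assumed)
                     [20.0, 40.0, 45.0],
                     [60.0, 90.0, 95.0]])

hours_per_year = 8766.0
annual_mwh = np.sum(occurrence * power_kw) * hours_per_year / 1000.0
capacity_factor = annual_mwh * 1000.0 / (power_kw.max() * hours_per_year)
print(f"annual energy = {annual_mwh:.0f} MWh, capacity factor = {capacity_factor:.2f}")
```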
1252 Stress Analysis of Non-persistent Rock Joints under Biaxial Loading
Authors: Omer S. Mughieda
Abstract:
A two-dimensional finite element model was created in this work to investigate the stress distribution within rock-like samples with offset open non-persistent joints under biaxial loading. The results of this study explain the fracture mechanisms observed in tests on rock-like material with open non-persistent offset joints [1]. The finite element code SAP2000 was used to study the stress distribution within the specimens. A four-noded isoparametric plane strain element with two degrees of freedom per node and a three-noded constant strain triangular element with two degrees of freedom per node were used in the present study. The results explain the formation of wing cracks at the tips of the joints for low confining stress, as well as the formation of wing cracks at the middle of the joint for higher confining stress. The high shear stresses found at the tips of the joints in the numerical study explain the formation of secondary cracks at the joint tips in the experimental study. The study results coincide with the experimental observations, which showed that for a bridge inclination of 0° coalescence occurred due to shear failure, for a bridge inclination of 90° coalescence occurred due to tensile failure, and for the other bridge inclinations coalescence occurred due to mixed tensile and shear failure.
Keywords: Finite element, open offset rock joint, SAP2000, biaxial loading.
1251 Simulation of Lid Cavity Flow in Rectangular, Half-Circular and Beer Bucket Shapes Using Quasi-Molecular Modeling
Authors: S. Kulsri, M. Jaroensutasinee, K. Jaroensutasinee
Abstract:
We developed a new method based on quasi-molecular modeling to simulate lid-driven cavity flow in three cavity shapes, rectangular, half-circular and beer bucket, in cgs units. Each quasi-molecule was a group of particles that interacted in a fashion entirely analogous to classical Newtonian molecular interactions. When a cavity flow was simulated, the instantaneous velocity vector fields were obtained using an inverse distance weighted interpolation method. In all three cavity shapes, the fluid motion rotated counter-clockwise. The velocity vector fields of the three cavity shapes showed a primary vortex located near the upstream corners at times t ≈ 0.500 s, t ≈ 0.450 s and t ≈ 0.350 s, respectively. The configurational kinetic energy of the cavities increased with time until it reached a maximum at t ≈ 0.02 s, and then decreased as time increased. The rectangular cavity system showed the lowest kinetic energy, while the half-circular cavity system showed the highest. The kinetic energies of the rectangular, beer bucket and half-circular cavities fluctuated about stable average values of 35.62 × 10³, 38.04 × 10³ and 40.80 × 10³ ergs/particle, respectively. This indicates that the half-circular shape is the most suitable for a shrimp pond, because water flows best in it compared with the rectangular and beer bucket shapes.
Keywords: Quasi-molecular modelling, particle modelling, lid-driven cavity flow.
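A minimal 2-D inverse distance weighted interpolator of the kind used to recover the velocity fields from scattered particle data (the weighting power p = 2 is an assumed, common choice):

```python
import numpy as np

def idw(points, values, query, p=2.0, eps=1e-12):
    """Inverse distance weighted interpolation of scattered data at one query point."""
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d < eps):                       # query coincides with a sample
        return values[np.argmin(d)]
    w = 1.0 / d ** p
    return np.sum(w * values) / np.sum(w)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
u = np.array([0.0, 1.0, 1.0, 2.0])            # e.g. x-velocity of quasi-molecules
print(idw(pts, u, np.array([0.5, 0.5])))      # -> 1.0 by symmetry
```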
1250 Experimental and Numerical Study of the Effect of Lateral Wind on the Feeder Airship
Authors: A. Suñol, D. Vucinic, S. Vanlanduit, T. Markova, A. Aksenov, I. Moskalyov
Abstract:
The Feeder is one of the airships of the Multibody Advanced Airship for Transport (MAAT) system, under development within the EU FP7 project. MAAT is based on a modular concept composed of two different parts that can join: the so-called Cruiser and Feeder, both designed on the lighter-than-air principle. The Feeder, also named ATEN (Airship Transport Elevator Network), is the smaller of the two and joins the larger Cruiser, also named PTAH (Photovoltaic modular Transport Airship for High altitude); the joining is envisaged to happen at 15 km altitude. During the MAAT design phase, aerodynamic studies of both airships and their interactions were carried out. The objective of these studies is to understand the aerodynamic behavior of all the preselected configurations, an important element in the overall MAAT system design. Most of these configurations are simulated only by CFD, while the most feasible one is also analyzed experimentally in order to validate and support the CFD predictions. This paper presents the numerical and experimental investigation of the Feeder "conical-like" shape configuration. The experiments focus on the aerodynamic force coefficients and the pressure distribution over the Feeder outer surface, while the numerical simulation also covers the analysis of the velocity and pressure distribution. Finally, the wind tunnel experiment is compared with its CFD model in order to validate these simulations against the respective experiments and to better understand the differences between wind tunnel and in-flight conditions.
Keywords: MAAT project, Feeder, CFD simulations, wind tunnel experiments, lateral wind influence.
1249 FACTS Based Stabilization for Smart Grid Applications
Authors: Adel M. Sharaf, Foad H. Gandoman
Abstract:
Nowadays, photovoltaic (PV) farms/parks and large PV-smart grid interface schemes are emerging and commonly utilized in renewable-energy distributed generation. However, PV-hybrid DC-AC schemes using interfacing power electronic converters usually have a negative impact on power quality and on the stabilization of modern electrical networks under load excursions and network fault conditions in the smart grid. Consequently, robust FACTS-based interface schemes are required to ensure efficient energy utilization and stabilization of bus voltages, as well as to limit switching and fault inrush current conditions. FACTS devices are also used in smart grid battery interface and storage schemes with PV-battery-storage hybrid systems, as an elegant alternative for renewable energy utilization with backup battery storage, supporting utility energy and demand-side management to provide the needed energy and power capacity under heavy load conditions. The paper presents a robust PV-Li-Ion battery storage interface scheme for low-voltage distribution/utilization, using FACTS stabilization enhancement and dynamic maximum PV power tracking controllers. Digital simulation and validation of the proposed scheme are performed in the MATLAB/Simulink software environment for a low-voltage distribution/utilization system feeding hybrid linear, motorized-inrush and nonlinear loads from a DC-AC interface VSC 6-pulse inverter fed from the PV park/farm with a backup Li-Ion storage battery.
Keywords: AC FACTS, Smart grid, Stabilization, PV-Battery Storage, Switched Filter-Compensation (SFC).
1248 Adaptive Kalman Filter for Noise Estimation and Identification with Bayesian Approach
Authors: Farhad Asadi, S. Hossein Sadati
Abstract:
The Bayesian approach can be used for parameter identification and extraction in state space models, and its ability to analyze sequences of data from dynamical systems has been demonstrated in the literature. In this paper, an adaptive Kalman filter with a Bayesian approach for identifying the variance of the measurement noise is developed and applied to the estimation of the dynamical state and the measurement data in a discrete linear dynamical system. At each time step, the algorithm estimates the measurement noise variance and the state of the system with the Kalman filter. An approximation is then designed at each step separately, and consequently the sufficient statistics of the state and noise variances are computed with a fixed-point iteration of the adaptive Kalman filter. Several simulations are presented to show the influence of the measurement noise variance on the algorithm. First, the effect of the noise variance and its distribution on detection and identification performance is simulated with a Kalman filter without the Bayesian formulation. Then, the simulation is applied to the adaptive Kalman filter with noise variance tracking in the measurement data. In these simulations, the influence of the noise distribution of the measurement data at each step is estimated, and the true variance of the data obtained by the algorithm is compared across different scenarios. Afterwards, a typical nonlinear state space model with noisy measurements is simulated with this approach. Finally, the performance and the important limitations of the algorithm in these simulations are discussed.
Keywords: Adaptive filtering, Bayesian approach, Kalman filtering approach, variance tracking.
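A scalar sketch of the core idea: alongside the usual Kalman update, the measurement-noise variance R is re-estimated from the innovation statistics. The simple exponential smoothing rule below stands in for the paper's Bayesian fixed-point iteration; the system and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, true_R, Q = 500, 2.0, 0.01
x_true = np.cumsum(rng.normal(0.0, np.sqrt(Q), n))   # random-walk state
z = x_true + rng.normal(0.0, np.sqrt(true_R), n)     # noisy measurements

x, P, R_hat, alpha = 0.0, 1.0, 1.0, 0.02             # alpha: adaptation rate
for zk in z:
    P_pred = P + Q                    # predict (F = H = 1)
    nu = zk - x                       # innovation
    K = P_pred / (P_pred + R_hat)
    x = x + K * nu                    # state update
    P = (1.0 - K) * P_pred
    # E[nu^2] = P_pred + R, so nu^2 - P_pred is a one-sample estimate of R;
    # smooth it over time to track the measurement noise variance
    R_hat = max(1e-6, (1 - alpha) * R_hat + alpha * (nu ** 2 - P_pred))
print(f"estimated R = {R_hat:.2f} (true {true_R})")
```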
1247 Integrating Geographic Information into Diabetes Disease Management
Authors: Tsu-Yun Chiu, Tsung-Hsueh Lu, Tain-Junn Cheng
Abstract:
Background: Traditional chronic disease management has paid little attention to the effects of geographic factors on compliance with treatment regimes, resulting in geographic inequality in the outcomes of chronic disease management. This study examines the geographic distribution and clustering of quality indicators of diabetes care. Method: We first extracted the addresses, demographic information and quality-of-care indicators (number of visits, complications, prescription and laboratory records) of patients with diabetes for 2014 from the medical information system of a medical center in Tainan City, Taiwan, and transformed the patients' addresses into district- and village-level data. We then compared the geographic distribution and clustering of the quality-of-care indicators between districts and villages. Beyond the descriptive results, rate ratios and their 95% confidence intervals (CI) were estimated for the indices of care in order to compare the quality of diabetes care among different areas. Results: A total of 23,588 patients with diabetes were extracted from the hospital data system, of whom 12,716 patients' information and medical records were included in the analysis. More than half of the subjects were male and between 60 and 79 years old. The quality of diabetes care did indeed vary by geographic level, and the finer level allowed clustered areas to be pinpointed more specifically. Fuguo Village (in Yongkang District) and Zhiyi Village (in Sinhua District) were found to be "hotspots" for nephropathy and cerebrovascular disease, while Wangliau and Erwang Villages (in Yongkang District) were "coldspots" with the lowest proportion of ≥80% compliance with blood lipid examination. Yuping Village (in Anping District) had the lowest proportion of ≥80% compliance with all laboratory examinations. Conclusion: Beyond examining the geographic distribution, calculating rate ratios and their 95% CIs is a useful and consistent method for testing the associations. This information is useful for health planners, diabetes case managers and other affiliated practitioners in directing care resources to the areas most in need.
Keywords: Geocoding, chronic disease management, quality of diabetes care, rate ratio.
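Rate ratios with 95% CIs for area comparisons follow the standard log-transform formula, ln RR ± 1.96·√(1/a + 1/b) for event counts a and b. A sketch with invented counts (not figures from the study):

```python
import math

def rate_ratio_ci(a, n1, b, n2, z=1.96):
    """Rate ratio of area 1 vs area 2 with a Wald 95% CI on the log scale.
    a, b : event counts;  n1, n2 : population (or person-time) denominators."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1.0 / a + 1.0 / b)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Invented example: 30 nephropathy cases per 400 patients vs 18 per 500
rr, lo, hi = rate_ratio_ci(30, 400, 18, 500)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```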
1246 Effect of Scale on Slab Heat Transfer in a Walking Beam Type Reheating Furnace
Authors: Man Young Kim
Abstract:
In this work, the effects of scale on the thermal behavior of a slab in a walking-beam type reheating furnace are studied by considering scale formation and growth in the furnace environment. A mathematical heat transfer model for predicting thermal radiation in a complex-shaped reheating furnace with slabs and skid buttons is developed, combining the non-gray WSGGM with a blocked-off solution procedure. The model can resolve the heat flux distribution within the furnace and the temperature distribution in the slab throughout the reheating process by considering the heat exchange between the slab and its surroundings, including the radiant heat transfer among the slabs, the skids, the hot combustion gases and the furnace wall, as well as gas convective heat transfer in the furnace. Following the mathematical formulation, the present numerical model is validated by calculating two example problems of blocked-off and non-gray gas radiative heat transfer. After discussing the formation and growth of scale on the slab surface, the heating characteristics of the slab with scale are investigated in terms of temperature rise with time.
Keywords: Reheating Furnace, Scale, Steel Slab, Radiative Heat Transfer, WSGGM.
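In a weighted-sum-of-gray-gases model (WSGGM), the total gas emissivity is assembled from a few gray gases plus a clear gas, ε = Σ aᵢ(T)(1 − e^(−kᵢ·pL)). The sketch below evaluates this form; the weight polynomials and absorption coefficients are placeholders, not the coefficients used in the paper.

```python
import numpy as np

def wsggm_emissivity(T, pL, a_coef, k):
    """Total emissivity from a weighted sum of gray gases.
    a_coef[i] : polynomial coefficients of the i-th gray-gas weight a_i(T)
    k[i]      : pressure absorption coefficient of gas i (1/(atm*m))
    pL        : partial pressure * path length (atm*m)"""
    eps = 0.0
    for ai_poly, ki in zip(a_coef, k):
        a_i = np.polyval(ai_poly, T)           # temperature-dependent weight
        eps += a_i * (1.0 - np.exp(-ki * pL))
    return eps

# Placeholder 3-gray-gas data (illustrative only): a_i(T) = c1*T + c0
a_coef = [[1.0e-4, 0.30], [5.0e-5, 0.25], [-2.0e-5, 0.20]]
k = [0.4, 4.0, 40.0]
print(wsggm_emissivity(T=1200.0, pL=0.5, a_coef=a_coef, k=k))
```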
1245 Tool for Analysing the Sensitivity and Tolerance of Mechatronic Systems in Matlab GUI
Authors: Bohuslava Juhasova, Martin Juhas, Renata Masarova, Zuzana Sutova
Abstract:
The article presents a Matlab GUI tool designed to analyse the sensitivity and tolerance of a mechatronic system. In the analysed system, torque is transferred from the drive to the load through a coupling containing flexible elements. Different methods of control system design are used: the classic form of feedback control is designed using the Naslin method, the modulus optimum criterion and the inverse dynamics method, while the cascade form of control is designed by combining the modulus optimum criterion and the symmetric optimum criterion. Sensitivity is analysed via the absolute and relative sensitivity of the system function to a change in the value of a chosen parameter of the mechatronic system or of the control subsystem. Tolerance is analysed by determining the range of allowed relative changes of selected system parameters within the region of system stability. The tool allows analysing the influence of the torsional stiffness, the torsional damping, the moments of inertia of the motor and the load, and the controller parameters. Sensitivity and tolerance are evaluated in terms of the impact of a parameter change on the response, in the form of the system step response and the system frequency-response logarithmic characteristics. The Symbolic Math Toolbox was used to express the final form of the analysed system functions. Sensitivity and tolerance are represented graphically as 2D plots of the sensitivity or tolerance of the system function and as 3D/2D static/interactive plots of the step/frequency responses.
Keywords: Mechatronic systems, Matlab GUI, sensitivity, tolerance.
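The relative sensitivity used here can be written S = (p/G)·∂G/∂p for a system function G and parameter p. A symbolic sketch for an illustrative second-order torsional transfer function, using Python's sympy as a stand-in for the Symbolic Math Toolbox (the plant is an assumption, simpler than the paper's):

```python
import sympy as sp

s, k, b, J = sp.symbols('s k b J', positive=True)  # stiffness, damping, inertia
G = k / (J * s**2 + b * s + k)        # illustrative torsional transfer function

S_k = sp.simplify(k / G * sp.diff(G, k))   # relative sensitivity of G to stiffness k
print(S_k)    # -> s*(J*s + b)/(J*s**2 + b*s + k)
```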
1244 Machine Learning Facing Behavioral Noise Problem in an Imbalanced Data Using One Side Behavioral Noise Reduction: Application to a Fraud Detection
Authors: Salma El Hajjami, Jamal Malki, Alain Bouju, Mohammed Berrada
Abstract:
With the expansion of machine learning and data mining in the context of Big Data analytics, a common problem affecting data is class imbalance: an imbalanced distribution of the instances belonging to each class. This problem is present in many real-world applications such as fraud detection, network intrusion detection, medical diagnostics, etc. In these cases, the instances labeled negatively are significantly more numerous than the instances labeled positively. When this difference is too large, the learning system may struggle with the problem, since it is initially designed to work in relatively balanced class-distribution scenarios. Another important problem, which usually accompanies imbalanced data, is the overlap of instances between the two classes, commonly referred to as noise or overlapping data. In this article, we propose an approach called One Side Behavioral Noise Reduction (OSBNR), which deals with the problem of class imbalance in the presence of a high noise level. OSBNR is based on two steps. First, a cluster analysis is applied to group similar instances of the minority class into several behavior clusters. Second, we select and eliminate the instances of the majority class, considered as behavioral noise, that overlap with the behavior clusters of the minority class. The results of experiments carried out on a representative public dataset confirm that the proposed approach is effective for the treatment of class imbalance in the presence of noise.
Keywords: Machine learning, imbalanced data, data mining, big data.
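A compact sketch of the two OSBNR steps using numpy only: cluster the minority class (plain k-means), then drop majority instances that fall inside any minority cluster's radius. The cluster count and the maximum-member-distance radius rule are assumptions; the paper's exact criteria may differ.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means, enough to form the minority 'behavior clusters'."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[lab == j].mean(0) if np.any(lab == j) else C[j]
                      for j in range(k)])
    return C, lab

def osbnr(X_maj, X_min, k=3):
    """Step 1: cluster the minority class. Step 2: drop majority instances
    lying inside any minority cluster's radius (treated as behavioral noise)."""
    C, lab = kmeans(X_min, k)
    radius = np.array([np.linalg.norm(X_min[lab == j] - C[j], axis=1).max()
                       if np.any(lab == j) else 0.0 for j in range(k)])
    d = np.linalg.norm(X_maj[:, None] - C[None], axis=2)   # majority-to-centroid
    return X_maj[~np.any(d <= radius, axis=1)]

rng = np.random.default_rng(5)
X_maj = rng.normal(0.0, 2.0, (500, 2))   # majority class (e.g. legitimate)
X_min = rng.normal(1.0, 0.3, (30, 2))    # minority class (e.g. fraud)
print(len(osbnr(X_maj, X_min)))          # majority size after noise removal
```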
1243 Methodologies for Crack Initiation in Welded Joints Applied to Inspection Planning
Authors: Guang Zou, Kian Banisoleiman, Arturo González
Abstract:
Crack initiation and propagation threaten the structural integrity of welded joints, and inspections are normally scheduled on the basis of crack propagation models. However, the approach based on crack propagation models may not be applicable to some high-quality welded joints, because their initial flaws may be so small that it takes a long time for them to develop into a detectable size. This raises a concern regarding the inspection planning of high-quality welded joints, as there is no generally accepted approach for modeling the whole fatigue process including the crack initiation period. To address the issue, this paper reviews the treatment of the crack initiation period and the initial crack size in crack propagation models applied to inspection planning. Generally, there are four approaches: 1) neglecting the crack initiation period and fitting a probabilistic distribution for the initial crack size based on statistical data; 2) extrapolating the crack propagation stage to a very small fictitious initial crack size, so that the whole fatigue process can be modeled by crack propagation models; 3) assuming a fixed detectable initial crack size and fitting a probabilistic distribution for the crack initiation time based on specimen tests; and 4) modeling the crack initiation and propagation stages separately, using small-crack growth theories together with the Paris law or similar models. The conclusion is that, in view of the trade-off between accuracy and computational effort, calibration of a small fictitious initial crack size to S-N curves is the most efficient approach.
Keywords: Crack initiation, fatigue reliability, inspection planning, welded joints.
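The propagation stage referred to above is typically modeled by the Paris law, da/dN = C(ΔK)^m with ΔK = Δσ·Y·√(πa). The sketch integrates it numerically from an initial to a critical crack size; the material constants are illustrative, roughly in the range quoted for steels, not values from the paper.

```python
import math

def cycles_to_failure(a0, ac, dsigma, C=1e-11, m=3.0, Y=1.12, da=1e-5):
    """Integrate Paris law da/dN = C*(dK)^m from crack size a0 to ac (meters).
    C, m, Y are illustrative values; dsigma in MPa, dK in MPa*sqrt(m)."""
    a, N = a0, 0.0
    while a < ac:
        dK = dsigma * Y * math.sqrt(math.pi * a)
        N += da / (C * dK ** m)      # cycles spent growing the crack by da
        a += da
    return N

print(f"{cycles_to_failure(a0=1e-4, ac=1e-2, dsigma=80.0):.3g} cycles")
```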
1242 Optimizing the Components of Grid-Independent Microgrids for Rural Electrification Utilizing Solar Panel and Supercapacitor
Authors: Astiaj Khoramshahi, Hossein Ahmadi Danesh Ashtiani, Ahmad Khoshgard, Hamidreza Damghani, Leila Damghani
Abstract:
Rural electrification rates are generally low in Iran and in many parts of the world that lack sustainable renewable energy resources. Many homes rely on polluting solutions such as crude oil and diesel generators for lighting, heating, and charging electrical gadgets. Small-scale portable solar battery packs are accessible to the public; however, they have low capacity and are challenging to distribute in developing countries. For the design of battery-based microgrid power systems, the load profile is one of the key parameters, and the reliability of the system should also be taken into account. A conventional microgrid system can be either AC- or DC-coupled; both AC and DC microgrids have advantages and disadvantages depending on the application, and either can be connected to the main grid or operate independently. This article proposes a tool for the optimal sizing of grid-independent microgrid systems through the corresponding analysis. Such an analysis must consider the type of power generation, the number of panels, the battery capacity, the microgrid size, and the group of consumers served. The optimization of the different design scenarios is therefore based on the number of solar panels and super-storage sources, and on the range of depths of discharge, in order to calculate the size and estimate the overall cost. In general, an inverse relationship is observed between the allowed depth of discharge and the solar microgrid costs.
Keywords: Storage, super-storage, grid-independent, economic factors, microgrid.
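The inverse relation between the allowed depth of discharge (DoD) and cost can be illustrated with a first-pass sizing rule: required capacity = daily load × autonomy days / (DoD × efficiency). All the figures below are assumptions for illustration:

```python
def battery_capacity_kwh(daily_load_kwh, autonomy_days, dod, efficiency=0.9):
    """First-pass storage sizing: deeper allowed discharge -> smaller bank."""
    return daily_load_kwh * autonomy_days / (dod * efficiency)

daily_load, autonomy, price_per_kwh = 8.0, 2, 250.0   # assumed figures
for dod in (0.3, 0.5, 0.8):
    cap = battery_capacity_kwh(daily_load, autonomy, dod)
    print(f"DoD {dod:.0%}: {cap:.1f} kWh, ~${cap * price_per_kwh:,.0f}")
```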
1241 A Novel Neighborhood Defined Feature Selection on Phase Congruency Images for Recognition of Faces with Extreme Variations
Authors: Satyanadh Gundimada, Vijayan K Asari
Abstract:
A novel feature selection strategy is proposed in this paper to improve recognition accuracy on faces affected by non-uniform illumination, partial occlusions and varying expressions. The technique is applicable especially in scenarios where the possibility of obtaining a reliable intra-class probability distribution is minimal due to a small number of training samples. The phase congruency features of an image are defined as the points where the Fourier components of that image are maximally in phase. These features are invariant to the brightness and contrast of the image under consideration, a property that enables lighting-invariant face recognition. Phase congruency maps of the training samples are generated, and a novel modular feature selection strategy is implemented: smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features, which are then arranged in order of increasing distance between the sub-regions involved in the merging. The assumption behind the proposed region merging and arrangement strategy is that local dependencies among the pixels are more important than global dependencies. The obtained feature sets are then arranged in decreasing order of discriminating capability using a criterion function, namely the ratio of the between-class variance to the within-class variance of the sample set in the PCA domain. The results indicate a large improvement in classification performance compared to baseline algorithms.
Keywords: Discriminant analysis, intra-class probability distribution, principal component analysis, phase congruency.
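The discriminating-capability criterion, the ratio of between-class to within-class variance, can be sketched per feature as J = Σ n_c(μ_c − μ)² / Σ n_c σ_c², with features ranked by decreasing J. A minimal version on synthetic data:

```python
import numpy as np

def fisher_ratio(X, y):
    """Per-feature ratio of between-class to within-class variance."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    between = sum((y == c).sum() * (X[y == c].mean(0) - mu) ** 2 for c in classes)
    within = sum((y == c).sum() * X[y == c].var(0) for c in classes)
    return between / within

rng = np.random.default_rng(6)
X = rng.normal(size=(100, 5))
y = rng.integers(0, 2, 100)
X[y == 1, 0] += 2.0                        # make feature 0 discriminative
J = fisher_ratio(X, y)
print(np.argsort(J)[::-1])                 # feature 0 ranked first
```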
1240 Thermo-mechanical Deformation Behavior of Functionally Graded Rectangular Plates Subjected to Various Boundary Conditions and Loadings
Authors: Mohammad Talha, B. N. Singh
Abstract:
This paper deals with the thermo-mechanical deformation behavior of shear-deformable functionally graded ceramic-metal (FGM) plates. The theoretical formulation is based on a higher-order shear deformation theory with a considerable amendment in the transverse displacement, implemented using the finite element method (FEM). The mechanical properties of the plate are assumed to be temperature-dependent and graded in the thickness direction according to a power-law distribution in terms of the volume fractions of the constituents. The temperature field is assumed to be uniform over the plate surface (the XY plane) and to vary in the thickness direction only. The fundamental equations for the FGM plates are obtained using a variational approach by considering traction-free boundary conditions on the top and bottom faces of the plate. A C0-continuous isoparametric Lagrangian finite element with thirteen degrees of freedom per node has been employed to obtain the results. Convergence and comparison studies have been performed to demonstrate the efficiency of the present model. Numerical results are obtained for different thickness ratios, aspect ratios, volume fraction indices and temperature rises, with different loading and boundary conditions, and are provided in dimensionless tabular and graphical forms. The results show that the temperature field and the gradation of the material properties have a significant effect on the thermo-mechanical deformation behavior of the FGM plates.
Keywords: Functionally graded material, higher order shear deformation theory, finite element method, independent field variables.
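The power-law gradation referred to above is commonly written V_c(z) = (z/h + 1/2)^n with P(z) = (P_c − P_m)·V_c(z) + P_m for −h/2 ≤ z ≤ h/2. The sketch evaluates it for an assumed alumina/aluminium pair (typical handbook moduli, not the paper's data):

```python
import numpy as np

def fgm_property(z, h, P_ceramic, P_metal, n):
    """Power-law through-thickness gradation of an FGM plate property."""
    Vc = (z / h + 0.5) ** n            # ceramic volume fraction, z in [-h/2, h/2]
    return (P_ceramic - P_metal) * Vc + P_metal

h = 0.02                                # plate thickness (m), assumed
z = np.linspace(-h / 2, h / 2, 5)
# Young's moduli (GPa): alumina ~380, aluminium ~70 (typical handbook values)
for n in (0.5, 1.0, 2.0):               # volume fraction index
    print(n, np.round(fgm_property(z, h, 380.0, 70.0, n), 1))
```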
1239 Life Estimation of Induction Motor Insulation under Non-Sinusoidal Voltage and Current Waveforms Using Fuzzy Logic
Authors: Triloksingh G. Arora, Mohan V. Aware, Dhananjay R. Tutakne
Abstract:
Thyristor-based, firing-angle-controlled voltage regulators are extensively used for the speed control of single-phase induction motors. This saves power, but the applied voltage and current waveforms become non-sinusoidal. These non-sinusoidal waveforms increase the voltage and thermal stresses, which results in accelerated insulation aging and thus reduces motor life. Life models that can predict the capability of insulation under such multi-stress conditions tend to be very complex and somewhat impractical. This paper applies fuzzy logic to investigate the synergistic effect of voltage and thermal stresses on the intrinsic aging of induction motor insulation. A fuzzy expert system is developed to estimate the life of induction motor insulation under multiple stresses. Three insulation degradation parameters, namely the peak modification factor, the wave shape modification factor and the thermal loss, are obtained experimentally for different firing angles. The fuzzy expert system consists of the fuzzification of the insulation degradation parameters, algorithms based on the inverse power law to estimate the life, and a defuzzification process to output the life. An electro-thermal life model is developed from the results of the fuzzy expert system. This fuzzy-logic-based electro-thermal life model can be used for the life estimation of induction motors operated with non-sinusoidal voltage and current waveforms.
Keywords: Aging, dielectric losses, insulation, life estimation.
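One common variant of the combined electro-thermal inverse power law is L = L0·(V/V0)^(−n)·exp(−B·(1/T0 − 1/T)). The sketch below shows the crisp calculation such a model performs; all constants are made up for illustration and are not the paper's fitted values.

```python
import math

def insulation_life_hours(V, T, V0=230.0, T0=313.0, L0=2e4, n=10.0, B=4500.0):
    """Combined electro-thermal inverse power law (one common variant):
    L = L0 * (V/V0)**(-n) * exp(-B*(1/T0 - 1/T)); constants illustrative."""
    return L0 * (V / V0) ** (-n) * math.exp(-B * (1.0 / T0 - 1.0 / T))

# Higher voltage stress and higher temperature both shorten life
for V, T in ((230.0, 313.0), (253.0, 313.0), (230.0, 333.0)):
    print(V, T, f"{insulation_life_hours(V, T):.3g} h")
```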
1238 Advanced Stochastic Models for Partially Developed Speckle
Authors: Jihad S. Daba (Jean-Pierre Dubois), Philip Jreije
Abstract:
Speckled images arise when coherent microwave, optical or acoustic imaging techniques are used to image an object, surface or scene. Examples of coherent imaging systems include synthetic aperture radar, laser imaging systems, imaging sonar systems, and medical ultrasound systems. Speckle noise is a form of object- or target-induced noise that results when the surface of the object is Rayleigh-rough compared to the wavelength of the illuminating radiation. Detection and estimation in images corrupted by speckle noise are complicated by the nature of the noise and are not as straightforward as detection and estimation in additive noise. In this work, we derive stochastic models for speckle noise, with an emphasis on speckle as it arises in medical ultrasound images. The motivation for this work is the problem of segmentation and tissue classification using ultrasound imaging. Modeling speckle in this context involves a partially developed speckle model in which an underlying Poisson point process modulates a Gram-Charlier series of Laguerre-weighted exponential functions, resulting in a doubly stochastic filtered Poisson point process. The statistical distribution of partially developed speckle is derived in closed canonical form. It is observed that as the mean number of scatterers in a resolution cell increases, the probability density function approaches an exponential distribution, consistent with fully developed speckle noise, as the Central Limit Theorem predicts.
Keywords: Doubly stochastic filtered process, Poisson point process, segmentation, speckle, ultrasound.
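The limiting behavior noted above is easy to reproduce: speckle intensity is the squared magnitude of a sum of N random phasors (scatterers per resolution cell), and as N grows the intensity histogram approaches an exponential distribution (speckle contrast, std/mean, approaches 1). A simulation sketch:

```python
import numpy as np

def speckle_intensity(n_scatterers, n_cells, rng):
    """Intensity of the resultant of n random unit phasors per resolution cell."""
    phases = rng.uniform(0.0, 2 * np.pi, (n_cells, n_scatterers))
    field = np.exp(1j * phases).sum(axis=1) / np.sqrt(n_scatterers)
    return np.abs(field) ** 2

rng = np.random.default_rng(7)
for n in (2, 5, 50):                      # partially -> fully developed speckle
    I = speckle_intensity(n, 100_000, rng)
    print(n, f"contrast = {I.std() / I.mean():.3f}")   # -> 1 for full speckle
```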
1237 Apparent Temperature Distribution on Scaffoldings during Construction Works
Authors: I. Szer, J. Szer, K. Czarnocki, E. Błazik-Borowa
Abstract:
People on construction scaffoldings work in a dynamically changing, often unfavourable climate. Additionally, this work is performed on low-stiffness structures at height, which increases the risk of accidents. It is therefore desirable to identify the parameters of the work environment that contribute to increasing the occupational safety of construction workers. The aim of this article is to present how changes in microclimate parameters on scaffolding can contribute to the development of dangerous situations and accidents. For this purpose, indicators based on the human thermal balance were considered. However, using such a model under construction conditions is often burdened with significant error, or even impossible, due to the lack of precise data. Thus, in the target model a modified parameter was used: the apparent environmental temperature. In the proposed Scaffold Use Risk Assessment Model, the apparent temperature is the perceived outdoor temperature caused by the combined effects of air temperature, radiative temperature, relative humidity and wind speed (wind chill index, heat index). The paper presents correlations between the component factors and the apparent temperature for a facade scaffolding 24.5 m wide and 42.3 m high, located on the south-west side of a building. The distribution of the factors over the scaffolding was used to evaluate the fit of the microclimate model. The results indicate that the observed ranges of apparent temperature on the scaffolds frequently exceed a worker's ability to adapt. This leads to reduced concentration and increased fatigue, adversely affects health, and consequently increases the risk of dangerous situations and accidental injuries.
Keywords: Apparent temperature, health, work safety, scaffoldings.
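The combined indices mentioned above are standard; for cold conditions, the Environment Canada / NWS wind-chill formula is a common apparent-temperature index. A sketch (the scaffold scenario is illustrative):

```python
def wind_chill_c(t_air_c, wind_kmh):
    """Environment Canada / NWS wind chill (deg C, km/h); valid roughly for
    t_air_c <= 10 and wind_kmh > 4.8."""
    v = wind_kmh ** 0.16
    return 13.12 + 0.6215 * t_air_c - 11.37 * v + 0.3965 * t_air_c * v

# The same air temperature feels much colder on an exposed scaffold in wind
for wind in (5, 20, 40):
    print(wind, f"{wind_chill_c(2.0, wind):+.1f} degC")
```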
1236 Detecting Financial Bubbles Using Gap between Common Stocks and Preferred Stocks
Authors: Changju Lee, Seungmo Ku, Sondo Kim, Woojin Chang
Abstract:
How can financial bubbles be detected? Addressing this simple question has been the focus of a vast amount of empirical research spanning almost half a century; yet financial bubbles are hard to observe and vary over time, so more research in this area is needed. In this paper, we use the abnormal difference between common stock prices and the corresponding preferred stock prices to explain financial bubbles. First, we propose the 'W-index', which captures the spread between common stocks and the corresponding preferred stocks in the stock market. Second, to show that the W-index is valid for measuring financial bubbles, we demonstrate an inverse relationship between the W-index and the S&P 500 rate of return. Specifically, our hypothesis is that when the W-index is higher than in other periods, financial bubbles have built up in the stock market, and vice versa; according to this hypothesis, investors who make long-term investments when the W-index is high obtain a negative rate of return, whereas investors who make long-term investments when the W-index is low obtain a positive rate of return. By comparing the correlations and adjusted R-squared values between the W-index and S&P 500 returns, the VIX index and S&P 500 returns, and the TED spread and S&P 500 returns, we show that only the W-index has a significant relationship with the S&P 500 rate of return. In addition, we determine how long investors should hold their positions given the effect of financial bubbles. Using the W-index, investors can measure the financial bubble in the market and invest with lower risk.
Keywords: Financial bubbles, detection, preferred stocks, pairs trading, future return, forecast.
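A sketch of the proposed measure: a W-index built as the average relative spread between paired common and preferred stock prices, which would then be correlated with subsequent index returns. The normalization and the synthetic data below are placeholders; the paper's exact index construction is not reproduced here.

```python
import numpy as np

def w_index(common_prices, preferred_prices):
    """Average relative spread between paired common and preferred prices.
    (Illustrative definition; the paper's exact normalization may differ.)"""
    spread = (common_prices - preferred_prices) / preferred_prices
    return spread.mean(axis=1)                  # one value per date

rng = np.random.default_rng(8)
T, n_pairs = 250, 10                            # one year of daily data, 10 pairs
pref = 50 + rng.normal(0, 1, (T, n_pairs)).cumsum(axis=0) * 0.1
common = pref * (1.2 + 0.1 * rng.normal(size=(T, n_pairs)))
w = w_index(common, pref)

fwd_ret = rng.normal(0, 0.01, T)                # placeholder future index returns
# Near zero on this synthetic data; the hypothesis is a negative
# correlation on real market data when bubbles are present.
print(np.corrcoef(w, fwd_ret)[0, 1])
```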