Search results for: scatterer density estimation
1327 Estimation of Relative Permeabilities and Capillary Pressures in Shale Using Simulation Method
Authors: F. C. Amadi, G. C. Enyi, G. Nasr
Abstract:
Relative permeabilities are practical factors used to correct single-phase Darcy's law for application to multiphase flow. For effective characterisation of large-scale multiphase flow in hydrocarbon recovery, relative permeability and capillary pressures are used. These parameters are acquired via special core flooding experiments, and the special core analysis (SCAL) module of reservoir simulation is applied by engineers for their evaluation. However, core flooding experiments on shale core samples are expensive and time consuming before various flow assumptions, such as Darcy's law, are satisfied. This makes it imperative to apply core flooding simulations, in which analyses of relative permeabilities and capillary pressures of multiphase flow can be carried out efficiently and effectively at a relative pace. This paper presents a Sendra software simulation of core flooding to obtain relative permeabilities and capillary pressures using different correlations. The approach used in this study comprised three steps. First, the basic petrophysical parameters of a Marcellus shale sample, such as porosity, were determined using laboratory techniques. Secondly, core flooding was simulated for a particular injection scenario using different correlations. Thirdly, the best-fit correlations for the estimation of relative permeability and capillary pressure were identified. This research approach saves cost and time and is very reliable for the computation of relative permeability and capillary pressures in steady- or unsteady-state, drainage or imbibition processes in the oil and gas industry when compared to other methods.
Keywords: Special core analysis (SCAL), relative permeability, capillary pressures, drainage, imbibition.
1326 Improved Estimation of Evolutionary Spectrum based on Short Time Fourier Transforms and Modified Magnitude Group Delay by Signal Decomposition
Authors: H K Lakshminarayana, J S Bhat, H M Mahesh
Abstract:
A new estimator for the evolutionary spectrum (ES) based on the short time Fourier transform (STFT) and the modified group delay function (MGDF) by signal decomposition (SD) is proposed. The STFT, due to its built-in averaging, suppresses the cross terms, and the MGDF preserves the frequency resolution of the rectangular window with a reduction in the Gibbs ripple. The present work overcomes the magnitude distortion observed in multi-component non-stationary signals with STFT and MGDF estimation of the ES by using SD. The SD is achieved either through the discrete cosine transform based harmonic wavelet transform (DCTHWT) or through perfect reconstruction filter banks (PRFB). The MGDF also improves the signal-to-noise ratio by removing associated noise. The performance of the present method is illustrated for cross chirp and frequency shift keying (FSK) signals, and indicates that it performs better than STFT-MGDF (STFT-GD) alone. Further, its noise immunity is better than that of the STFT. The SD-based methods, however, cannot bring out the frequency transition path from band to band clearly, as there will be a gap in the contour plot at the transition. The PRFB-based STFT-SD shows better performance than the DCTHWT decomposition method for STFT-GD.
Keywords: Evolutionary Spectrum, Modified Group Delay, Discrete Cosine Transform, Harmonic Wavelet Transform, Perfect Reconstruction Filter Banks, Short Time Fourier Transform.
1325 Electronics Thermal Management Driven Design of an IP65-Rated Motor Inverter
Authors: Sachin Kamble, Raghothama Anekal, Shivakumar Bhavi
Abstract:
Thermal management of electronic components packaged inside an IP65-rated enclosure is of prime importance in industrial applications. The electrical enclosure protects multiple board configurations, such as inverter, power, and controller board components, busbars, and various power-dissipating components, from harsh environments. Industrial environments often experience relatively warm ambient conditions, and the electronic components housed in the enclosure dissipate heat, so the enclosures and components require thermal management as well as a reduction of internal ambient temperatures. A Design of Experiments based thermal simulation approach considering the MOSFET arrangement, heat sink design, enclosure volume, copper and aluminum spreaders, power density, and printed circuit board (PCB) type was used to optimize the air temperature inside the IP65 enclosure and to ensure a conducive operating temperature for the controller board and electronic components through the different modes of heat transfer, viz. conduction, natural convection, and radiation, using Ansys ICEPAK. MOSFETs in a parallel arrangement, an enclosure-molded heat sink with rectangular fins on both enclosures, a specific enclosure volume satisfying the power density, a copper spreader conducting heat to the enclosure, an optimized power density value, and an aluminum-clad PCB that improves heat transfer were the contributors towards achieving a conducive operating temperature inside the IP65-rated motor inverter enclosure. A reduction of 52 ℃ in the internal ambient temperature of the IP65 enclosure was achieved between the baseline and final design parameters, which met the operating temperature requirements of the electronic components inside the IP65-rated motor inverter.
Keywords: Ansys ICEPAK, Aluminum Clad PCB, IP 65 enclosure, motor inverter, thermal simulation.
1324 Statistical Assessment of Models for Determination of Soil-Water Characteristic Curves of Sand Soils
Authors: S. J. Matlan, M. Mukhlisin, M. R. Taha
Abstract:
Characterization of the engineering behavior of unsaturated soil is dependent on the soil-water characteristic curve (SWCC), a graphical representation of the relationship between water content or degree of saturation and soil suction. A reasonable description of the SWCC is thus important for the accurate prediction of unsaturated soil parameters. The measurement procedures for determining the SWCC, however, are difficult, expensive, and time-consuming. During the past few decades, researchers have laid a major focus on developing empirical equations for predicting the SWCC, with a large number of empirical models suggested. One of the most crucial questions is how precisely existing equations can represent the SWCC. As different models have different ranges of capability, it is essential to evaluate the precision of the SWCC models used for each particular soil type for better SWCC estimation. It is expected that better estimation of the SWCC would be achieved via a thorough statistical analysis of its distribution within a particular soil class. With this in view, a statistical analysis was conducted in order to evaluate the reliability of the SWCC prediction models against laboratory measurement. Optimization techniques were used to obtain the best fit of the model parameters in four forms of SWCC equation, using laboratory data for relatively coarse-textured (i.e., sandy) soil. The four most prominent SWCCs were evaluated and computed for each sample. The result shows that the Brooks and Corey model is the most consistent in describing the SWCC for the sand soil type. The Brooks and Corey model predictions were also compatible with samples ranging from low to high soil water content among the samples evaluated in this study.
Keywords: Soil-water characteristic curve (SWCC), statistical analysis, unsaturated soil.
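The Brooks and Corey SWCC takes the form Se = (psi_b/psi)^lambda above the air-entry suction psi_b, so on the desaturation branch its two parameters can be recovered by a straight-line fit in log-log space. A minimal sketch on synthetic data with hypothetical parameter values (not the study's measurements):

```python
import numpy as np

def brooks_corey(psi, psi_b, lam):
    # Effective saturation: Se = 1 for psi <= psi_b, else (psi_b/psi)**lam
    psi = np.asarray(psi, dtype=float)
    return np.where(psi <= psi_b, 1.0, (psi_b / psi) ** lam)

# Synthetic drainage data from known (hypothetical) parameters.
psi = np.linspace(6.0, 100.0, 50)       # suctions above the air-entry value
se = brooks_corey(psi, 5.0, 0.7)        # psi_b = 5, lambda = 0.7

# On the desaturation branch, log Se = lam*log(psi_b) - lam*log(psi),
# so a linear least-squares fit recovers both parameters.
slope, intercept = np.polyfit(np.log(psi), np.log(se), 1)
lam_fit = -slope
psi_b_fit = np.exp(intercept / lam_fit)
```

With noisy laboratory data, the same fit yields the best-fit parameters in the least-squares sense rather than an exact recovery.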
1323 Effects of Coupling Agent on the Properties of Henequen Microfiber (NF) Filled High Density Polyethylene (HDPE) Composites
Authors: Pravin Gaikwad, Prakash Mahanwar
Abstract:
The main objective of incorporating natural fibers such as henequen microfibers (NF) into a high density polyethylene (HDPE) polymer matrix is to reduce cost and to enhance the mechanical as well as other properties. The henequen microfibers were chopped manually to 5-7 mm in length and added into the polymer matrix at the optimized concentration of 8 wt %. In order to facilitate the link between the henequen microfibers (NF) and the HDPE matrix, a coupling agent, glycidoxy (epoxy) functional methoxy silane (GPTS), was added at concentrations of 0.1%, 0.3%, 0.5%, 0.7%, 0.9% and 1% by weight of the total fibers. The tensile strength of the composite increased marginally, while the % elongation at break decreased with increasing silane loading by wt %. Tensile modulus and stiffness were observed to increase at 0.9 wt % GPTS loading. Flexural as well as impact strength of the composite decreased with increasing GPTS loading by weight %. Dielectric strength of the composite was also found to increase marginally up to 0.5 wt % silane loading and thereafter remained constant.
Keywords: Henequen microfibers (NF), polymer composites, HDPE, coupling agent, GPTS
1322 Advanced Micromanufacturing for Ultra Precision Part by Soft Lithography and Nano Powder Injection Molding
Authors: Andy Tirta, Yus Prasetyo, Eung-Ryul Baek, Chul-Jin Choi, Hye-Moon Lee
Abstract:
Recently, advanced technologies that offer high-precision products, relatively easy and economical processing, and rapid production have been needed to meet the high demand for ultra-precision micro parts. In our research, micromanufacturing based on soft lithography and nanopowder injection molding was investigated. A silicon metal pattern with ultra-thick features and a high aspect ratio was successfully used to fabricate a polydimethylsiloxane (PDMS) micro mold. The process was followed by nanopowder injection molding (PIM) using a simple vacuum hot press. The 17-4PH nanopowder, with a diameter of 100 nm, was successfully injected and formed a green microbearing sample with a thickness, microchannel width, and aspect ratio of 700 μm, 60 μm, and 12, respectively. Sintering was done at 1200 °C for 2 hours with a heating rate of 0.83 °C/min. Since a low powder loading (45% PL) was applied to achieve green sample fabrication, ~15% shrinkage occurred, at 86% relative density. Several improvements should be made to produce a high-accuracy and fully dense sintered part.
Keywords: Micromanufacturing, Nano PIM, PDMS micro mould.
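The density figures reported above are related by the usual definitions: relative density is the measured sintered density over the theoretical full density, and linear shrinkage relates green and sintered dimensions. A small worked sketch (the theoretical density is back-computed from the reported figures, and the green dimension is hypothetical):

```python
# Relative density = measured sintered density / theoretical full density.
sintered_density = 7.4                    # g/cm^3, as reported after sintering
relative_density = 0.94                   # 94%, as reported
theoretical_density = sintered_density / relative_density  # implied ~7.87 g/cm^3

# A ~15% linear shrinkage relates green and sintered dimensions:
green_length = 10.0                       # mm, hypothetical green dimension
sintered_length = green_length * (1 - 0.15)
```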
1321 FEM Simulation of HE Blast-Fragmentation Warhead and the Calculation of Lethal Range
Authors: G. Tanapornraweekit, W. Kulsirikasem
Abstract:
This paper presents the simulation of a fragmentation warhead using a hydrocode, Autodyn. The goal of this research is to determine the lethal range of such a warhead. This study investigates the lethal range of warheads with and without steel balls as preformed fragments. The results from the FE simulation, i.e. the initial velocities and ejected spray angles of the fragments, are further processed using an analytical approach so as to determine the fragment hit density and probability of kill of a modelled warhead. Since simulating the large number of preformed fragments inside a warhead requires expensive computational resources, this study models the problem by an alternative approach, considering a mass of preformed fragments equivalent to the mass of the warhead casing. This approach yields approximately 7% and 20% differences in fragment velocities from the analytical results for one and two layers of preformed fragments, respectively. The lethal ranges of the simulated warheads are 42.6 m and 56.5 m for warheads with one and two layers of preformed fragments, respectively, compared to 13.85 m for a warhead without preformed fragments. These lethal ranges are based on the requirement of fragment hit density. The lethal ranges based on the probability of kill are 27.5 m, 61 m and 70 m for warheads with no preformed fragments, one layer, and two layers of preformed fragments, respectively.
Keywords: Lethal Range, Natural Fragment, Preformed Fragment, Warhead.
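The hit-density criterion mentioned above has a simple geometric core: if the fragment spray is idealized as isotropic over a sphere, the hit density falls as N/(4πR²), and the range at which a required density is still met follows directly. A hedged sketch (the isotropy assumption and the numbers below are illustrative, not from the paper, which uses the simulated spray angles):

```python
import math

def hit_density(n_fragments, r):
    """Fragments per unit area at range r, assuming an isotropic spherical spray."""
    return n_fragments / (4.0 * math.pi * r ** 2)

def lethal_range(n_fragments, required_density):
    """Largest range at which the hit density still meets the requirement."""
    return math.sqrt(n_fragments / (4.0 * math.pi * required_density))

# Hypothetical numbers: 5000 fragments, required density of 1 hit per 4 m^2.
r_star = lethal_range(5000, 0.25)
```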
1320 Effects of Four Dietary Oils on Cholesterol and Fatty Acid Composition of Egg Yolk in Layers
Authors: A. F. Agboola, B. R. O. Omidiwura, A. Oyeyemi, E. A. Iyayi, A. S. Adelani
Abstract:
Dietary cholesterol has elicited the most public interest as it relates to coronary heart disease. Thus, humans have been paying more attention to health, thereby reducing the consumption of cholesterol-enriched food. Egg is considered one of the major sources of human dietary cholesterol. However, an alternative way to reduce the potential cholesterolemic effect of eggs is to modify the fatty acid composition of the yolk. The effects of palm oil (PO), soybean oil (SO), sesame seed oil (SSO) and fish oil (FO) supplementation in the diets of layers on egg yolk fatty acids, cholesterol, egg production and egg quality parameters were evaluated in a 42-day feeding trial. One hundred and five Isa Brown laying hens of 34 weeks of age were randomly distributed into seven groups of five replicates, with three birds per replicate, in a completely randomized design. Seven corn-soybean basal diets (BD) were formulated: BD+no oil (T1), BD+1.5% PO (T2), BD+1.5% SO (T3), BD+1.5% SSO (T4), BD+1.5% FO (T5), BD+0.75% SO+0.75% FO (T6) and BD+0.75% SSO+0.75% FO (T7). Five eggs were randomly sampled at day 42 from each replicate to assay the cholesterol and fatty acid profile of the egg yolk and for egg quality assessment. Results showed that there were no significant (P>0.05) differences in production performance, egg cholesterol and egg quality parameters except for yolk height, albumen height, yolk index, egg shape index, Haugh unit, and yolk colour. There were no significant (P>0.05) differences in the total cholesterol, high density lipoprotein and low density lipoprotein levels of egg yolk across the treatments. However, diets had an effect (P<0.05) on the TAG (triacylglycerol) and VLDL (very low density lipoprotein) content of the egg yolk. The highest TAG (603.78 mg/dl) and VLDL (120.76 mg/dl) values were recorded in eggs of hens on T4 (1.5% sesame seed oil) and were similar to those on T3 (1.5% soybean oil), T5 (1.5% fish oil) and T6 (0.75% soybean oil + 0.75% fish oil).
However, results revealed significant (P<0.05) variations in the eggs' total polyunsaturated fatty acid (PUFA) content. In conclusion, it is suggested that dietary oils could be included in layers' diets to produce designer eggs low in cholesterol and high in PUFA, especially omega-3 fatty acids.
Keywords: Dietary oils, Egg cholesterol, Egg fatty acid profile, Egg quality parameters.
1319 Production of Spherical Cementite within Bainitic Matrix Microstructures in High Carbon Powder Metallurgy Steels
Authors: O. Altuntaş, A. Güral
Abstract:
The hardness-microstructure relationships of spherical cementite in a bainitic matrix, obtained by different heat treatment cycles applied to a high carbon powder metallurgy (P/M) steel, were investigated. For this purpose, 1.5 wt.% natural graphite powder was admixed into atomized iron powders, and the mixed powders were compacted under 700 MPa at room temperature and then sintered at 1150 °C under a protective argon gas atmosphere. The densities of the green and sintered samples were measured via the Archimedes method. A density of 7.4 g/cm3, corresponding to a relative density of 94%, was achieved after sintering. The sintered specimens, having primary cementite plus lamellar pearlitic structures, were fully quenched from 950 °C and then over-tempered at 705 °C for 60 minutes to produce fine spherical cementite particles in the ferritic matrix. Following this treatment, the samples were annealed at 735 °C for 3 minutes and then austempered in a 300 °C salt bath for periods of 1 to 5 hours. As a result of this process, spherical cementite particles could be produced in the bainitic matrix. This microstructure was designed to improve the wear resistance and toughness of P/M steels. The microstructures were characterized and analyzed by SEM and by micro- and macro-hardness measurements.
Keywords: Powder metallurgy steel, heat treatment, bainite, spherical cementite.
1318 Estimation and Removal of Chlorophenolic Compounds from Paper Mill Waste Water by Electrochemical Treatment
Authors: R. Sharma, S. Kumar, C. Sharma
Abstract:
A number of toxic chlorophenolic compounds are formed during pulp bleaching. The nature and concentration of these chlorophenolic compounds largely depend upon the amount and nature of the bleaching chemicals used. These compounds are highly recalcitrant and difficult to remove, and are only partially removed by the biochemical treatment processes adopted by the paper industry. Identification and estimation of these chlorophenolic compounds have been carried out in the primary and secondary clarified effluents from the paper mill by GC-MS. Twenty-six chlorophenolic compounds have been identified and estimated in paper mill waste waters. Electrochemical treatment is an efficient method for the oxidation of pollutants and has successfully been used to treat textile and oil waste water. Electrochemical treatment using a less expensive anode material, stainless steel electrodes, has been tried here to study their removal. The electrochemical assembly comprised a DC power supply, a magnetic stirrer and stainless steel (316 L) electrodes. The operating conditions were optimized, and treatment was performed under the optimized conditions. Results indicate that 68.7% and 83.8% of chlorophenolic compounds are removed during 2 h of electrochemical treatment from the primary and secondary clarified effluents, respectively. Further, there is a reduction of 65.1%, 60% and 92.6% in COD, AOX and color, respectively, for the primary clarified effluent, and of 83.8%, 75.9% and 96.8% in COD, AOX and color, respectively, for the secondary clarified effluent. EC treatment has also been found to significantly increase the biodegradability index of the wastewater because of the conversion of the non-biodegradable fraction into a biodegradable fraction. Thus, electrochemical treatment is an efficient method for the degradation of chlorophenolic compounds and the removal of color, AOX and other recalcitrant organic matter present in paper mill waste water.
Keywords: Chlorophenolics, effluent, electrochemical treatment, wastewater.
1317 Drainage Prediction for Dam using Fuzzy Support Vector Regression
Authors: S. Wiriyarattanakun, A. Ruengsiriwatanakun, S. Noimanee
Abstract:
Drainage estimation is an important factor in dam management. In this paper, we use fuzzy support vector regression (FSVR) to predict the drainage of the Sirikit Dam in Uttaradit province, Thailand. The results show that FSVR is a suitable method for drainage estimation.
Keywords: Drainage Estimation, Prediction.
1316 Generalization of Clustering Coefficient on Lattice Networks Applied to Criminal Networks
Authors: Christian H. Sanabria-Montaña, Rodrigo Huerta-Quintanilla
Abstract:
A lattice network is a special type of network in which all nodes have the same number of links and the boundary conditions are periodic. The most basic lattice network is the ring, a one-dimensional network with periodic boundary conditions. More generally, the Cartesian product of d rings forms a d-dimensional lattice network. An analytical expression currently exists for the clustering coefficient in this type of network, but the theoretical value is valid only up to a certain connectivity value; in other words, the analytical expression is incomplete. Here we obtain analytically the clustering coefficient expression in d-dimensional lattice networks for any link density. Our analytical results show that the clustering coefficient of a lattice network with a density of links tending to 1 approaches the clustering coefficient of a fully connected network. We developed a model in criminology in which the generalized clustering coefficient expression is applied. The model states that delinquents learn the know-how of the crime business by sharing knowledge, directly or indirectly, with their friends in the gang. This generalization sheds light on network properties, which is important for developing new models in fields where network structure plays an important role in the system dynamics, such as criminology, evolutionary game theory, and econophysics, among others.
Keywords: Clustering coefficient, criminology, generalized clustering coefficient, d-dimensional regular network.
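For the one-dimensional case, the known ring-lattice result C = 3(k-2)/(4(k-1)) for a ring where each node links to its k nearest neighbours can be checked numerically. A small self-contained sketch (plain Python, no graph library):

```python
import itertools

def ring_lattice(n, k):
    """Adjacency sets of a ring of n nodes, each linked to its k nearest neighbours
    (k/2 on each side, k even)."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k // 2 + 1):
            adj[i].add((i + d) % n)
            adj[i].add((i - d) % n)
    return adj

def clustering_coefficient(adj):
    """Average local clustering coefficient over all nodes."""
    total = 0.0
    for i, nbrs in adj.items():
        k_i = len(nbrs)
        if k_i < 2:
            continue
        # Count links among the neighbours of node i.
        links = sum(1 for u, v in itertools.combinations(nbrs, 2) if v in adj[u])
        total += 2.0 * links / (k_i * (k_i - 1))
    return total / len(adj)

c_numeric = clustering_coefficient(ring_lattice(30, 4))
c_formula = 3 * (4 - 2) / (4 * (4 - 1))   # = 0.5 for k = 4
```

The numeric value matches the closed form exactly for n large enough that neighbourhoods do not wrap around the ring.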
1315 Stature Estimation Using Foot and Shoeprint Length of Malaysian Population
Authors: M. Khairulmazidah, A. B. Nurul Nadiah, A. R. Rumiza
Abstract:
Formulation of the biological profile is one of the modern roles of the forensic anthropologist. The present study was conducted to estimate height using the foot and shoeprint length of the Malaysian population. The present work can provide very useful information for the identification of individuals in forensic cases based on shoeprint evidence. It can help to narrow down suspects and ease police investigation. Besides, stature is an important parameter in determining the partial identity of unidentified and mutilated bodies. Thus, this study can help with the problems encountered in cases of mass disaster, massacre, explosions and assault, where it is very hard to identify body parts when people are dismembered and become unrecognizable. Samples in this research were collected from 200 Malaysian adults (100 males and 100 females) with ages ranging from 20 to 45 years old. In this research, shoeprint length was measured from prints made with flat shoes. Other information such as gender, foot length and height of the subject was also recorded. The data were analyzed using IBM® SPSS Statistics 19 software. Results indicated that foot length has a stronger correlation with stature than shoeprint length for both sides of the feet. However, when gender was treated as unknown, foot length and shoeprint length showed better correlations with stature than when males and females were analyzed separately. In addition, prediction equations were developed to estimate stature using linear regression analysis of foot length and shoeprint length. However, foot length gives a better prediction than shoeprint length.
Keywords: Forensic anthropology, foot length, shoeprints, stature estimation.
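Prediction equations of the kind described are ordinary least-squares lines of the form stature = a + b × foot length, evaluated alongside a Pearson correlation. A minimal sketch on made-up paired measurements (the data and coefficients below are illustrative, not the study's):

```python
import numpy as np

# Hypothetical paired measurements (cm): foot length vs. stature.
foot = np.array([24.1, 25.3, 26.0, 26.8, 27.5, 28.2])
stature = np.array([158.0, 164.5, 168.0, 172.3, 175.9, 179.1])

# Least-squares regression line: stature = b*foot + a.
b, a = np.polyfit(foot, stature, 1)

# Pearson correlation, the statistic used to compare predictors.
r = np.corrcoef(foot, stature)[0, 1]

def predict_stature(foot_length):
    """Apply the fitted prediction equation."""
    return a + b * foot_length
```

The same fit applied to shoeprint length, with its lower correlation, would yield a wider prediction error, which is what makes foot length the better predictor.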
1314 District 10 in Tehran: Urban Transformation and the Survey Evidence of Loss in Place Attachment in High Rises
Authors: Roya Morad, W. Eirik Heintz
Abstract:
The identity of a neighborhood is inevitably shaped by the architecture and the people of that place. Conventionally, the streets within each neighborhood served as a semi-public-private extension of the private living spaces. The street as a design element formed a hybrid condition that was neither totally public nor private, and it encouraged social interactions. Thus, through creating a sense of community, one of the most basic human needs, belonging, was achieved. Like other major global cities, Tehran has undergone serious urbanization. Developing into a capital city of high rises has resulted in an increase in urban density. Although allocating more residential units to each neighborhood was a critical response to the population boom and the limited land area of the city, it also created a crisis in terms of social communication and place attachment. District 10 in Tehran is the neighborhood that has undergone the most urban transformation among the 22 districts of the capital and currently has the highest population density. This paper explores how the active streets in District 10 have changed into their current condition of high rises that lack meaningful social interactions among their inhabitants. A residential building can be thought of as a large group of people. One would think that as the number of people increases, the opportunities for social communication would increase as well. However, according to the survey, there is an indirect relationship between the two. As the number of people in a residential building increases, the quality of each acquaintance is reduced, and the depth of relationships between people tends to decrease. This comes from the anonymity of being part of a crowd and the lack of social spaces characteristic of most high-rise apartment buildings. Without a sense of community, attachment to a neighborhood is decreased.
This paper further explores how the neighborhood can participate in fulfilling one's need for social interaction, and focuses on the qualitative aspects of alternative spaces that can redevelop the sense of place attachment within the community.
Keywords: High density, place attachment, social communication, street life, urban transformation.
1313 Novel Adaptive Channel Equalization Algorithms by Statistical Sampling
Authors: János Levendovszky, András Oláh
Abstract:
In this paper, novel statistical sampling based equalization techniques and CNN based detection are proposed to increase the spectral efficiency of multiuser communication systems over fading channels. Multiuser communication combined with selective fading can result in interferences which severely deteriorate the quality of service in wireless data transmission (e.g. CDMA in mobile communication). The paper introduces new equalization methods to combat interferences by minimizing the Bit Error Rate (BER) as a function of the equalizer coefficients. This provides higher performance than traditional Minimum Mean Square Error equalization. Since the calculation of the BER as a function of the equalizer coefficients is of exponential complexity, statistical sampling methods are proposed to approximate the gradient, which yields fast equalization and superior performance compared to the traditional algorithms. Efficient estimation of the gradient is achieved by using stratified sampling and the Li-Silvester bounds. A simple mechanism is derived to identify the dominant samples in real time, for the sake of efficient estimation. The equalizer weights are adapted recursively by minimizing the estimated BER. The near-optimal performance of the new algorithms is demonstrated by extensive simulations. The paper also develops a Cellular Neural Network (CNN) based approach to detection. In this case, fast quadratic optimization is carried out by the CNN, whereas the task of the equalizer is to ensure the required template structure (sparseness) for the CNN. The performance of the method has also been analyzed by simulations.
Keywords: Cellular Neural Network, channel equalization, communication over fading channels, multiuser communication, spectral efficiency, statistical sampling.
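The motivation for sampling-based BER estimation can be seen in a toy setting: for BPSK over an AWGN channel the exact BER is Q(1/σ), and a Monte Carlo estimate converges to it. A minimal sketch using plain sampling only (not the stratified/Li-Silvester scheme of the paper):

```python
import math
import random

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mc_ber(sigma, n_samples, seed=1):
    """Monte Carlo BER of BPSK (+1 transmitted) over AWGN with noise std sigma:
    count how often the noisy sample crosses the decision boundary at 0."""
    rng = random.Random(seed)
    errors = sum(1 for _ in range(n_samples) if 1.0 + rng.gauss(0.0, sigma) < 0.0)
    return errors / n_samples

sigma = 1.0
exact = q_function(1.0 / sigma)       # analytic BER, about 0.1587
estimate = mc_ber(sigma, 200_000)
```

Plain sampling needs many samples at low BER, which is exactly why variance-reduction schemes such as stratified sampling are attractive for equalizer adaptation.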
1312 Likelihood Estimation for Stochastic Epidemics with Heterogeneous Mixing Populations
Authors: Yilun Shang
Abstract:
We consider a heterogeneously mixing SIR stochastic epidemic process in populations described by a general graph. Likelihood theory is developed to facilitate statistical inference for the parameters of the model under complete observation. We show that these estimators are asymptotically Gaussian unbiased estimates by using a martingale central limit theorem.
Keywords: statistical inference, maximum likelihood, epidemic model, heterogeneous mixing.
1311 Subpixel Detection of Circular Objects Using Geometric Property
Authors: Wen-Yen Wu, Wen-Bin Yu
Abstract:
In this paper, we propose a method for detecting circular shapes with subpixel accuracy. First, the geometric properties of circles are used to find the diameters as well as the circumference pixels. The center and radius are then estimated from the circumference pixels. Both synthetic and real images have been tested with the proposed method. The experimental results show that the new method is efficient.
Keywords: Subpixel, least squares estimation, circle detection, Hough transformation.
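Least-squares estimation of a circle's center and radius from circumference pixels is commonly done with the algebraic (Kåsa) fit, which is linear in the unknowns and so needs no iteration; whether this matches the authors' exact estimator is an assumption, but it illustrates the step. A sketch:

```python
import math
import numpy as np

def fit_circle(xs, ys):
    """Algebraic least-squares circle fit (Kasa method).

    Rewrites (x-a)^2 + (y-b)^2 = r^2 as x^2 + y^2 = 2*a*x + 2*b*y + c
    with c = r^2 - a^2 - b^2, and solves the linear system for (a, b, c)."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    rhs = xs ** 2 + ys ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, math.sqrt(c + a * a + b * b)

# Subpixel-accurate recovery from points on a known circle (center (3,4), r=5).
theta = np.linspace(0, 2 * math.pi, 40, endpoint=False)
cx, cy, r = fit_circle(3.0 + 5.0 * np.cos(theta), 4.0 + 5.0 * np.sin(theta))
```

Because the fit averages over many circumference pixels, the estimated center and radius are not restricted to the integer pixel grid, which is the source of the subpixel accuracy.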
1310 Foreign Direct Investment on Economic Growth by Industries in Central and Eastern European Countries
Authors: Shorena Pharjiani
Abstract:
This empirical paper investigates the relationship between FDI and economic growth in 10 selected industries in 10 Central and Eastern European countries over the period 1995 to 2012. Different estimation approaches were used to explore the connection between FDI and economic growth, for example OLS, RE, and FE with and without time dummies. The empirical results lead to several main conclusions. First, the Central and Eastern European countries (CEEC) attracted foreign direct investment, which raised the productivity of the industries it entered. The linkage between FDI and output growth by industry is positive and significant enough to suggest that foreign firms' participation enhanced the productivity of the industries they occupied. There was an endogeneity problem in the regression, and a fixed effects estimation approach was used, which partially corrected the regression analysis and made the results less biased. Second, the results show that time plays an important role in making FDI operational for enhancing output growth by industry via total factor productivity. Third, R&D positively affected economic growth, and it takes some time for research and development to influence economic growth. Fourth, the general trends masked crucial differences at the country level: over the last 20 years, the analysis of the tables and figures at the country level shows that the main recipients of FDI among the 11 Central and Eastern European countries were Hungary, Poland and the Czech Republic. The main reason was that these countries had more open-door policies for attracting FDI. Fifth, according to the graphical analysis, while Hungary had the highest FDI inflow in this region, it was not reflected in GDP growth as much as in other Central and Eastern European countries.
Keywords: Central and Eastern European countries (CEEC), economic growth, FDI, panel data.
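The fixed-effects (FE) estimator used above to mitigate bias from time-invariant country effects amounts to demeaning each entity's data before OLS (the "within" transformation). A minimal numpy sketch on simulated panel data (the data-generating numbers are hypothetical, chosen only to show that the country effects drop out):

```python
import numpy as np

def within_estimator(y, x, entity):
    """Fixed-effects slope: demean y and x within each entity, then OLS."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    yd = np.empty_like(y)
    xd = np.empty_like(x)
    for e in np.unique(entity):
        m = entity == e
        yd[m] = y[m] - y[m].mean()   # demeaning removes the entity effect
        xd[m] = x[m] - x[m].mean()
    return (xd @ yd) / (xd @ xd)

# Simulated panel: 5 countries x 10 years, true slope 0.4 plus country effects.
rng = np.random.default_rng(0)
entity = np.repeat(np.arange(5), 10)
alpha = np.repeat(rng.normal(0, 5, 5), 10)    # time-invariant country effects
x = rng.normal(0, 1, 50)
y = alpha + 0.4 * x + rng.normal(0, 0.01, 50)

beta_fe = within_estimator(y, x, entity)      # close to the true 0.4
```

A pooled OLS on the same data would be biased whenever the country effects correlate with the regressor, which is the endogeneity concern the FE approach addresses.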
1309 Nonlinear Estimation Model for Rail Track Deterioration
Authors: M. Karimpour, L. Hitihamillage, N. Elkhoury, S. Moridpour, R. Hesami
Abstract:
Rail transport authorities around the world have been facing a significant challenge in predicting rail infrastructure maintenance work over long periods of time. Generally, maintenance monitoring and prediction are conducted manually. With restrictions in the economy, rail transport authorities are in pursuit of improved modern methods that can provide precise predictions of rail maintenance time and location. The expectation from such methods is to develop models that minimize the human error strongly associated with manual prediction. Such models will help authorities understand how track degradation occurs over time under changes in different conditions (e.g. rail load, rail type, rail profile). They need a well-structured technique to identify the precise time at which rail tracks fail in order to minimize the maintenance cost and time and keep vehicles secure. The rail track characteristics that have been collected over the years will be used in developing rail track degradation prediction models. Since these data have been collected in large volumes, both electronically and manually, it is possible for them to contain errors. Sometimes these errors make the data impossible to use in prediction model development. This is one of the major drawbacks in rail track degradation prediction. An accurate model can play a key role in the estimation of the long-term behavior of rail tracks. Accurate models increase track safety and decrease the cost of maintenance in the long term. In this research, a short review of rail track degradation prediction models is given before estimating rail track degradation for the curve sections of the Melbourne tram track system using an Adaptive Network-based Fuzzy Inference System (ANFIS) model.
Keywords: ANFIS, MGT, Prediction modeling, rail track degradation.
1308 Decision Support System for Hospital Selection in Emergency Medical Services: A Discrete Event Simulation Approach
Authors: D. Tedesco, G. Feletti, P. Trucco
Abstract:
The present study aims to develop a Decision Support System (DSS) to support operational decisions in Emergency Medical Service (EMS) systems regarding the assignment of medical emergency requests to Emergency Departments (ED). This problem is called “hospital selection” and concerns the definition of policies for selecting the ED to which patients who require further treatment are transported by ambulance. The research methodology began with a review of the technical-scientific literature on DSSs supporting EMS management and, in particular, the hospital selection decision. The literature analysis showed that current studies focus mainly on the EMS phases related to the ambulance service and consider a process that ends when the ambulance becomes available after completing a mission; all ED-related issues are therefore excluded and treated as part of a separate process. Indeed, the most studied hospital selection policy turned out to be proximity, which minimizes travel time and frees up the ambulance as quickly as possible. The purpose of the present study is to develop an optimization model for assigning medical emergency requests to EDs that also considers the expected time performance in the subsequent phases of the process, such as the case mix, the expected service throughput times, and the operational capacity of the different EDs. To this end, a Discrete Event Simulation (DES) model was created to compare different hospital selection policies. The model was implemented with the AnyLogic software and validated on a realistic case. The hospital selection policy that returned the best results was the minimization of the Time To Provider (TTP), defined as the time from the beginning of the ambulance journey to the start of the clinical evaluation by the doctor in the ED.
Finally, two approaches were compared: a static approach, based on a retrospective estimation of the TTP, and a dynamic approach, based on a predictive estimation of the TTP determined with a continuously updated Winters forecasting model. Findings confirm that minimizing TTP is the best hospital selection policy: it significantly reduces service throughput times in the ED with a negligible increase in travel time. Furthermore, it provides an immediate view of the saturation state of each ED and accounts for the case mix present in the ED structures (i.e., the different triage codes), since different severity codes correspond to different service throughput times. Moreover, the predictive approach is more reliable for TTP estimation than the retrospective one. These considerations can support decision-makers in introducing different hospital selection policies to enhance EMS performance.
Keywords: Emergency medical services, hospital selection, discrete event simulation, forecast model.
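The TTP-minimizing policy described in the abstract can be sketched as a simple selection rule. This is a minimal illustration only: the hospital names, travel times, and expected-wait figures below are invented assumptions, not data or code from the study.

```python
# Hypothetical sketch of hospital-selection policies. All figures are
# illustrative assumptions, not values from the study.
HOSPITALS = {
    "ED_A": {"travel_min": 8.0,  "expected_wait_min": 45.0},
    "ED_B": {"travel_min": 14.0, "expected_wait_min": 12.0},
    "ED_C": {"travel_min": 20.0, "expected_wait_min": 5.0},
}

def select_by_proximity(hospitals):
    """Baseline policy: minimize ambulance travel time only."""
    return min(hospitals, key=lambda h: hospitals[h]["travel_min"])

def select_by_ttp(hospitals):
    """TTP policy: minimize travel time plus expected wait for a provider."""
    return min(hospitals,
               key=lambda h: (hospitals[h]["travel_min"]
                              + hospitals[h]["expected_wait_min"]))

print(select_by_proximity(HOSPITALS))  # ED_A: closest hospital
print(select_by_ttp(HOSPITALS))        # ED_C: shortest time-to-provider
```

With these numbers the proximity policy picks the closest ED even though it is saturated, while the TTP policy trades 12 extra minutes of travel for a much shorter total time to clinical evaluation, mirroring the trade-off the abstract reports.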
1307 Evaluation of Internal Ballistics of Multi-Perforated Grain in a Closed Vessel
Authors: B. A. Parate, C. P. Shetty
Abstract:
This research article describes a methodology for evaluating the internal ballistics of a multi-perforated grain in a closed vessel (CV). Propellant testing in a CV is conducted to characterize propellants and to ascertain the various internal ballistic parameters. The assessment of internal ballistics plays a crucial role in judging a propellant's suitability for a given application, since propellants used in the defense sector must satisfy user requirements as per laid-down specifications. The outputs from the CV evaluation of the multi-perforated grain are a maximum pressure of 226.75 MPa, a rate of pressure rise (dP/dt) of 36.99 MPa/ms, an average vivacity of 9.990×10⁻⁴ (MPa·ms)⁻¹, a force constant of 933.9 J/g, a rise time of 9.85 ms, and a pressure index of 0.878 with a burning coefficient of 0.2919. This paper addresses the internal ballistics of the multi-perforated grain, propellant selection, the relevant calculations, and the evaluation of various parameters in CV testing. For the current analysis, the propellant is evaluated in a 100 cc CV with a propellant mass of 20 g, giving a loading density of 0.2 g/cc. The internal ballistic properties are determined by burning the propellant mass at constant volume.
Keywords: Burning rate, closed vessel, force constant, internal ballistic, loading density, maximum pressure, multi-propellant grain, propellant, rise time, vivacity.
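The force constant quoted above is conventionally obtained from closed-vessel data via the Noble-Abel relation p_max(1/Δ − η) = f, where Δ is the loading density and η the propellant co-volume. The sketch below only illustrates this standard reduction; the co-volume value is an assumed illustrative number, not one reported in the paper.

```python
# Sketch of the Noble-Abel reduction of closed-vessel data:
#   f = p_max * (1/Delta - eta)
# Delta: loading density, eta: propellant co-volume (assumed value below).

def force_constant(p_max_mpa: float, loading_density_g_cc: float,
                   co_volume_cc_g: float) -> float:
    """Return the force constant f in J/g."""
    p_pa = p_max_mpa * 1e6                  # MPa -> Pa
    delta = loading_density_g_cc * 1000.0   # g/cc -> kg/m^3
    eta = co_volume_cc_g * 1e-3             # cc/g -> m^3/kg
    f_j_per_kg = p_pa * (1.0 / delta - eta)
    return f_j_per_kg / 1000.0              # J/kg -> J/g

# Values reported in the abstract: p_max = 226.75 MPa, Delta = 0.2 g/cc.
# A co-volume near 0.88 cc/g (assumed) reproduces the reported force constant.
f = force_constant(226.75, 0.2, co_volume_cc_g=0.88)
print(round(f, 1))  # ≈ 934 J/g, close to the reported 933.9 J/g
```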
1306 Ab initio Study of Co2ZrGe and Co2NbB Full Heusler Compounds
Authors: Abada Ahmed, Hiadsi Said, Ouahrani Tarik, Amrani Bouhalouane, Amara Kadda
Abstract:
Using the first-principles full-potential linearized augmented plane wave plus local orbital (FP-LAPW+lo) method based on density functional theory (DFT), we have investigated the electronic structure and magnetism of the full Heusler alloys Co2ZrGe and Co2NbB. These compounds are predicted to be half-metallic ferromagnets (HMFs) with a total magnetic moment of 2.000 μB per formula unit, consistent with the Slater-Pauling rule. Calculations show that both alloys have an indirect band gap in the minority-spin channel of the density of states (DOS), with values of 0.58 eV and 0.47 eV for Co2ZrGe and Co2NbB, respectively. Analysis of the DOS and magnetic moments indicates that their magnetism is mainly related to the d-d hybridization between the Co and Zr (or Nb) atoms. The half-metallicity is found to be relatively robust against volume changes. In addition, the atoms-in-molecules (AIM) formalism and the electron localization function (ELF) were adopted to study the bonding properties of these compounds, building a bridge between their electronic and bonding behavior. As they show good crystallographic compatibility with the lattices of industrially used semiconductors, together with negative calculated cohesive energies of considerable absolute value, these two alloys could be promising magnetic materials in the spintronic field.
Keywords: Electronic properties, full Heusler alloys, half-metallic ferromagnets, magnetic properties.
1305 A Numerical Study on Electrophoresis of a Soft Particle with Charged Core Coated with Polyelectrolyte Layer
Authors: Partha Sarathi Majee, S. Bhattacharyya
Abstract:
Migration of a core-shell soft particle under the influence of an external electric field in an electrolyte solution is studied numerically. The soft particle is coated with a positively charged polyelectrolyte layer (PEL), and the rigid core carries a uniform surface charge density. The Darcy-Brinkman extended Navier-Stokes equations are solved for the motion of the ionized fluid, the non-linear Nernst-Planck equations for the ion transport, and the Poisson equation for the electric potential. A pressure-correction-based iterative algorithm is adopted for the numerical computations. The effects of convection on double layer polarization (DLP) and diffusion-dominated counter-ion penetration are investigated for a wide range of Debye layer thicknesses, PEL fixed surface charge densities, and PEL permeabilities. Our results show that when the Debye layer is of the order of the particle size, the DLP effect is significant and reduces the electrophoretic mobility. However, the double layer polarization effect is negligible for a thin Debye layer or low-permeability cases. The point of zero mobility and the existence of mobility reversal, depending on the electrolyte concentration, are also presented.
Keywords: Debye length, double layer polarization, electrophoresis, mobility reversal, soft particle.
1304 Forecast of the Small Wind Turbines Sales with Replacement Purchases and with or without Account of Price Changes
Authors: V. Churkin, M. Lopatin
Abstract:
The purpose of the paper is to estimate the market potential of small wind turbines in the US and to forecast their sales. The forecasting method is based on the Bass model and the generalized Bass model of innovation diffusion with replacement purchases. An exponential distribution is used to model replacement purchases; its single parameter is determined by the average lifetime of small wind turbines. The model parameters are identified by nonlinear regression analysis on the annual sales statistics published by the American Wind Energy Association (AWEA) from 2001 to 2012. The estimated US average market potential of small wind turbines (for adoption purchases), without accounting for price changes, is 57,080 (confidence interval 49,294 to 64,866 at P = 0.95) for an average turbine lifetime of 15 years, and 62,402 (confidence interval 54,154 to 70,648 at P = 0.95) for a lifetime of 20 years. In the first case the explained variance is 90.7%, and in the second 91.8%. The effect of turbine price changes on sales was estimated using the generalized Bass model, which required a price forecast; for this, a polynomial regression function based on Berkeley Lab statistics was used. In that case the estimated US average market potential (for adoption purchases) is 42,542 (confidence interval 32,863 to 52,221 at P = 0.95) for a lifetime of 15 years, and 47,426 (confidence interval 36,092 to 58,760 at P = 0.95) for a lifetime of 20 years. In both cases the explained variance is 95.3%.
Keywords: Bass model, generalized Bass model, replacement purchases, sales forecasting of innovations, statistics of sales of small wind turbines in the United States.
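The Bass diffusion model underlying this forecast has a well-known closed form for cumulative adoptions. The sketch below illustrates that standard form; the innovation and imitation coefficients are assumed for illustration (only the market-potential value echoes the abstract's 57,080 estimate), and this is not the paper's fitted model.

```python
import math

def bass_cumulative(t, m, p, q):
    """Cumulative adoptions m*F(t) under the Bass model.
    m: market potential, p: innovation coeff., q: imitation coeff."""
    e = math.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

def bass_annual_sales(years, m, p, q):
    """Adoption sales per year as increments of the cumulative curve."""
    cum = [bass_cumulative(t, m, p, q) for t in range(years + 1)]
    return [cum[t + 1] - cum[t] for t in range(years)]

# Illustrative run over a 12-year horizon (matching 2001-2012); p and q
# are assumed values, m echoes the abstract's ~57,080 estimate.
sales = bass_annual_sales(12, m=57080, p=0.003, q=0.4)
print([round(s) for s in sales[:3]])
```

The generalized Bass model used in the paper additionally multiplies the adoption rate by a price-dependent term, and replacement purchases are layered on top via the exponential lifetime distribution; neither extension is shown here.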
1303 Remaining Useful Life Estimation of Bearings Based on Nonlinear Dimensional Reduction Combined with Timing Signals
Authors: Zhongmin Wang, Wudong Fan, Hengshan Zhang, Yimin Zhou
Abstract:
In data-driven prognostic methods, the accuracy of remaining useful life estimation for bearings depends mainly on the performance of the health indicators, which are usually fused from statistical features extracted from vibration signals. Existing health indicators have two drawbacks: (1) the different ranges of the statistical features contribute differently to the health indicator, so expert knowledge is required for feature extraction; and (2) when convolutional neural networks are used on the time-frequency features of the signals, the time-series nature of the signals is not considered. To overcome these drawbacks, this study proposes a method combining a convolutional neural network with a gated recurrent unit to extract time-frequency image features, which are then used to construct a health indicator and predict the remaining useful life of bearings. First, the original signals are converted into time-frequency images using the continuous wavelet transform to form the original feature sets. Second, the convolutional and pooling layers of the convolutional neural network select the most sensitive time-frequency image features from the original feature sets. Finally, these selected features are fed into the gated recurrent unit to construct the health indicator. The results show that the proposed method outperforms related studies that used the same bearing dataset provided by PRONOSTIA.
Keywords: Continuous wavelet transform, convolution neural network, gated recurrent unit, health indicators, remaining useful life.
1302 Sensing Pressure for Authentication System Using Keystroke Dynamics
Authors: Hidetoshi Nonaka, Masahito Kurihara
Abstract:
In this paper, an authentication system using keystroke dynamics is presented. We introduce pressure sensing to improve measurement accuracy and to increase robustness against intrusions such as key-loggers, although it requires an additional instrument. As a result, we also found that pressure sensing is effective for estimating the true moment of a keystroke.
Keywords: Biometric authentication, Keystroke dynamics, Pressure sensing, Time-frequency analysis.
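A minimal sketch of how pressure can extend a keystroke-dynamics check: each typing sample becomes a vector of per-key dwell times and peak pressures, averaged into an enrollment template and compared by distance at login. The samples, feature layout, and threshold below are invented for illustration and are not the paper's method.

```python
import statistics

# Feature vectors interleave per-key dwell time (ms) and peak pressure
# (arbitrary units). All values and the threshold are assumed examples.

def enroll(samples):
    """Average several typing samples into a template vector."""
    return [statistics.mean(col) for col in zip(*samples)]

def authenticate(template, attempt, threshold=15.0):
    """Accept when the Euclidean distance to the template is small."""
    dist = sum((a - b) ** 2 for a, b in zip(template, attempt)) ** 0.5
    return dist <= threshold

enrolled = enroll([
    [95, 0.42, 110, 0.55],   # dwell, pressure, dwell, pressure
    [101, 0.40, 104, 0.58],
    [98, 0.44, 108, 0.52],
])
print(authenticate(enrolled, [99, 0.43, 107, 0.54]))   # genuine user
print(authenticate(enrolled, [60, 0.90, 150, 0.95]))   # likely impostor
```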
1301 Conflation Methodology Applied to Flood Recovery
Authors: E. L. Suarez, D. E. Meeroff, Y. Yong
Abstract:
Current flood risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being caused by nuisance flooding, and its long-term effects on communities, are not typically included in risk assessments. An approach was developed that combines the probability of recovering from a severe flooding event with the probability of community performance during a nuisance event. The consolidated model, namely the conflation flooding recovery (CFR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The CFR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful when the input distributions have dissimilar variances. The CFR is defined as the single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution lies between the parent distributions and infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The CFR model is more accurate than averaging individual observations before calculating the mean and variance, or than averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency equals the average of the input distributions' means, discarding the additional information provided by each distribution's variance.
When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources, severe flooding events and nuisance flooding events.
Keywords: Community resilience, conflation, flood risk, nuisance flooding.
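The conflation-favors-low-variance property is easiest to see for normal distributions, where the product of the densities is again normal with a precision-weighted mean. This is a textbook illustration of the mechanism, not the paper's model (which uses exponential recovery-time distributions), and the recovery-time figures are assumed examples.

```python
# Sketch of conflation for two normal densities: the product is normal with
# precision-weighted mean, so the low-variance input dominates the result.

def conflate_normals(mu1, var1, mu2, var2):
    """Mean and variance of the conflation (product) of two normal pdfs."""
    w1, w2 = 1.0 / var1, 1.0 / var2   # precisions act as weights
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    return mu, 1.0 / (w1 + w2)

# Assumed example: severe-event recovery estimate of 30 days with large
# spread, nuisance-event estimate of 5 days with a tight spread.
mu, var = conflate_normals(30.0, 100.0, 5.0, 4.0)
print(round(mu, 2), round(var, 2))  # result sits near the tight estimate
```

The conflated mean lands close to the low-variance input and the conflated variance is smaller than either input's, which is exactly the weighting behavior the abstract describes.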
1300 Mathematical Correlation for Brake Thermal Efficiency and NOx Emission of CI Engine using Ester of Vegetable Oils
Authors: Samir J. Deshmukh, Lalit B. Bhuyar, Shashank B. Thakre, Sachin S. Ingole
Abstract:
The aim of this study is to develop mathematical relationships for the performance parameter brake thermal efficiency (BTE) and the emission parameter nitrogen oxides (NOx) for various esters of vegetable oils used as CI engine fuel. BTE is an important performance parameter defining the ability of the engine to utilize the energy supplied and the power developed; it likewise indicates the efficiency of the fuels used. Esters of cottonseed oil, soybean oil, jatropha oil and hingan oil were prepared by transesterification and characterized for their physical and main fuel properties, including viscosity, density, flash point and higher heating value, using standard test methods. These esters were tested as CI engine fuel to analyze the performance and emission parameters in comparison with diesel. The results indicate that the properties of the esters as fuels do not differ greatly from those of diesel. The CI engine performance with esters as fuel is in line with diesel, whereas the emission parameters are reduced with the use of esters. A correlation is developed between BTE and brake power (BP), gross calorific value (CV), air-fuel ratio (A/F), and heat carried away by cooling water (HCW). Another equation is developed between the NOx emission and CO, HC, smoke density (SD), and exhaust gas temperature (EGT). The equations are verified by comparing observed and calculated values, giving coefficients of correlation of 0.99 and 0.96 for the BTE and NOx equations, respectively.
Keywords: Esters, emission, performance, and vegetable oil.
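A correlation of the kind described (BTE against BP, CV, A/F and HCW) can be sketched as an ordinary least-squares fit. The data rows below are entirely synthetic, the linear functional form is an assumption (the paper does not state its equation form), and the resulting correlation coefficient applies only to this toy data.

```python
import numpy as np

# Synthetic illustration of fitting BTE to BP, CV, A/F and HCW by least
# squares. None of these rows are measurements from the study.
X = np.array([
    # BP(kW)  CV(MJ/kg)  A/F   HCW(kW)
    [1.0, 39.5, 22.0, 1.8],
    [2.0, 39.5, 20.0, 2.1],
    [3.0, 39.5, 18.5, 2.5],
    [4.0, 40.1, 17.0, 2.9],
    [5.0, 40.1, 16.0, 3.2],
])
bte = np.array([14.0, 20.5, 25.0, 28.0, 30.0])  # percent

A = np.column_stack([X, np.ones(len(X))])       # add intercept column
coef, *_ = np.linalg.lstsq(A, bte, rcond=None)  # solve min ||A c - bte||
pred = A @ coef
r = np.corrcoef(pred, bte)[0, 1]                # coefficient of correlation
print(round(r, 2))
```

Verifying the fitted equation by comparing observed and calculated values, as the authors do, corresponds here to computing `r` between `pred` and `bte`.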
1299 The Reproducibility and Repeatability of Modified Likelihood Ratio for Forensics Handwriting Examination
Authors: O. Abiodun Adeyinka, B. Adeyemo Adesesan
Abstract:
The forensic use of handwriting depends on the analysis, comparison, and evaluation decisions made by forensic document examiners. When biometric technology is used in forensic applications, it is necessary to compute the Likelihood Ratio (LR) to quantify the strength of evidence under two competing hypotheses, namely the prosecution and defense hypotheses, for which a set of assumptions and methods is adopted for a given dataset. It is therefore important to know how repeatable and reproducible the estimated LR is. This paper evaluates the accuracy and reproducibility of examiners' decisions. Confidence intervals for the estimated LR are presented so as not to obtain an incorrect estimate that could lead to a wrong judgment in a court of law. The LR estimate is fundamentally a Bayesian concept, and two LR estimators are used in this paper: Logistic Regression (LoR) and the Kernel Density Estimator (KDE). The repeatability evaluation was carried out by retesting the initial experiment after an interval of six months to observe whether examiners would repeat their decisions for the estimated LR. The experimental results, based on a handwriting dataset, show that the LR has different confidence intervals, implying that it cannot be estimated with the same certainty everywhere. Although LoR performed better than KDE on the same dataset, both estimators showed a consistent region in which the LR value can be estimated confidently. These two findings advance our understanding of the LR when used to compute the strength of handwriting evidence in forensics.
Keywords: Logistic Regression (LoR), Kernel Density Estimator (KDE), handwriting, confidence interval, repeatability, reproducibility.
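A score-based KDE likelihood ratio of the kind mentioned can be sketched as the ratio of two kernel density estimates, one fitted to same-writer comparison scores and one to different-writer scores. The score distributions, bandwidth, and test points below are synthetic illustrations, not the paper's data or its modified LR.

```python
import numpy as np

# Sketch: LR(s) = p(score s | same writer) / p(score s | different writers),
# each density estimated with a Gaussian KDE. Scores are synthetic.
rng = np.random.default_rng(0)
same_writer_scores = rng.normal(0.8, 0.05, 200)   # high similarity scores
diff_writer_scores = rng.normal(0.4, 0.10, 200)   # low similarity scores

def kde(samples, x, bandwidth=0.05):
    """Gaussian kernel density estimate evaluated at point x."""
    z = (x - samples) / bandwidth
    return np.mean(np.exp(-0.5 * z**2)) / (bandwidth * np.sqrt(2 * np.pi))

def likelihood_ratio(score):
    return kde(same_writer_scores, score) / kde(diff_writer_scores, score)

print(likelihood_ratio(0.75) > 1.0)   # supports the prosecution hypothesis
print(likelihood_ratio(0.45) < 1.0)   # supports the defense hypothesis
```

An LR above 1 means the observed score is more probable under the prosecution (same-writer) hypothesis; below 1, under the defense hypothesis. Repeating this estimation on resampled data is one way to obtain the confidence intervals the paper emphasizes.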
1298 Estimation of Exhaust and Non-Exhaust Particulate Matter Emissions’ Share from On-Road Vehicles in Addis Ababa City
Authors: Solomon Neway Jida, Jean-Francois Hetet, Pascal Chesse
Abstract:
Vehicular emission is the key source of air pollution in the urban environment, including both fine particles (PM2.5) and coarse particulate matter (PM10). Particulate matter emissions from road traffic comprise exhaust tailpipe emissions and non-exhaust emissions arising from wear of vehicle parts such as brakes, tires and clutch, and from the re-suspension of road dust. This study estimates the shares of these two sources of particulate emissions from on-road vehicles in the Addis Ababa municipality, Ethiopia. Exhaust tailpipe emissions were calculated using the European emission inventory Tier 2 method, and non-exhaust emissions (vehicle tire wear, brake wear, and road-surface wear) using the Tier 1 method. The results show that, of the total traffic-related particulate emissions in the city, 63% is emitted from vehicle exhaust and the remaining 37% from non-exhaust sources. Annual road transport exhaust emissions amount to around 2,394 tons of particles across all vehicle categories. Of the total yearly non-exhaust particulate matter emissions, tire and brake wear contribute around 65% and road-surface wear the remaining 35%. Tire and brake wear were responsible for 584.8 tons of coarse particles (PM10) and 314.4 tons of fine particles (PM2.5) annually, whereas surface wear was responsible for around 313.7 tons of PM10 and 169.9 tons of PM2.5. This suggests that non-exhaust sources may be as significant as exhaust sources and contribute considerably to the impact on air quality.
Keywords: Addis Ababa, automotive emission, emission estimation, particulate matters.
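A Tier 1 wear-emission estimate of the kind used above reduces to emission = vehicles × annual mileage × emission factor, summed over vehicle categories. The fleet sizes, mileages, and emission factors below are illustrative assumptions, not the study's Addis Ababa inventory data.

```python
# Sketch of a Tier 1 style wear-emission estimate: E = sum(N * M * EF).
# All fleet and emission-factor values are assumed for illustration.

# (category, vehicles N, annual km per vehicle M, tire+brake PM10 EF mg/km)
FLEET = [
    ("passenger_car", 400_000, 12_000, 9.0),
    ("light_duty",     80_000, 20_000, 14.0),
    ("heavy_duty",     30_000, 40_000, 45.0),
]

def wear_pm10_tonnes(fleet):
    """Total annual tire+brake PM10 emission over the fleet, in tonnes."""
    total_mg = sum(n * km * ef for _, n, km, ef in fleet)
    return total_mg / 1e9   # mg -> tonnes

print(round(wear_pm10_tonnes(FLEET), 1))
```

Running the same sum with PM2.5 emission factors, or with road-surface-wear factors, gives the corresponding splits the abstract reports between particle sizes and non-exhaust sources.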