Search results for: Modified maximum urgency first

1858 Optimizing the Performance of Thermoelectric for Cooling Computer Chips Using Different Types of Electrical Pulses

Authors: Saleh Alshehri

Abstract:

Thermoelectric technology is currently used in many industrial applications for cooling, heating and generating electricity. This research focuses on using thermoelectric modules to cool high-speed computer chips at different operating conditions. A previously developed and validated three-dimensional model for optimizing and assessing the performance of cascaded and non-cascaded thermoelectric modules is used in this study to investigate the possibility of decreasing the hotspot temperature of a computer chip. Additionally, a test assembly is built and tested at steady-state and transient conditions. The optimum thermoelectric current obtained at steady state is then used to conduct a number of pulsed (i.e. transient) tests with different pulse shapes to cool the chip hotspots. The steady-state tests showed that, at a hotspot heat rate of 15.58 W (5.97 W/cm²), a thermoelectric current of 4.5 A decreased the hotspot temperature by 50.1 °C from its open-circuit value of 89.3 °C. The maximum and minimum hotspot temperatures were governed by the ON and OFF durations of the electrical current pulse: a longer OFF period produced the maximum hotspot temperature, while a longer ON period produced the minimum.

Keywords: Thermoelectric generator, thermoelectric cooler, chip hotspots, electronic cooling.

1857 Heavy Metals Transport in the Soil Profiles under the Application of Sludge and Wastewater

Authors: A. Behbahaninia, S. A. Mirbagheri, A. H. Javid

Abstract:

Heavy metal transfer in soil profiles is a major environmental concern because even slow transport through the soil may eventually lead to deterioration of groundwater quality. The use of sewage sludge and effluents from wastewater treatment plants for irrigation of agricultural lands is on the rise, particularly in peri-urban areas of developing countries. In this study, soil profiles under sludge application and wastewater irrigation were sampled from the surface down to a depth of 100 cm. For this purpose, three plots were established at a treatment plant in the south of Tehran, Iran. The first plot was irrigated only with effluent from the wastewater treatment plant, the second with effluent carrying a simulated heavy metal concentration equivalent to 50 years of irrigation, and the third received both sewage sludge and effluent. Trace metal concentrations (Cd, Cu) were determined in the soil samples. The results indicate that metal movement did occur, but the highest concentrations were found in the topsoil samples. The highest cadmium concentration, 4.5 mg/kg, was measured in the topsoil of plot 3, and the greatest cadmium movement was observed in the 0-20 cm layer. The highest copper concentration was 27.76 mg/kg, with maximum percolation also in the 0-20 cm layer. Both metals (Cd, Cu) were measured in the leached water. Preferential flow and metal complexation with soluble organic matter apparently allow leaching of heavy metals.

Keywords: Heavy metal, sludge, soil, transport.

1856 A New Approach for Image Segmentation using Pillar-Kmeans Algorithm

Authors: Ali Ridho Barakbah, Yasushi Kiyoki

Abstract:

This paper presents a new approach for image segmentation based on the Pillar-Kmeans algorithm. The segmentation process includes a new mechanism for clustering the elements of high-resolution images in order to improve precision and reduce computation time. The system applies K-means clustering to image segmentation after the clustering has been optimized by the Pillar algorithm. The Pillar algorithm considers the placement of pillars, which should be located as far as possible from each other to withstand the pressure distribution of a roof, as analogous to the placement of centroids within the data distribution. The algorithm is able to optimize K-means clustering for image segmentation with respect to both precision and computation time. It designates the initial centroid positions by calculating the accumulated distance metric between each data point and all previously selected centroids, and then selects the data point with the maximum accumulated distance as the next initial centroid. In this way, all initial centroids are distributed according to the maximum accumulated distance metric. The proposed approach is evaluated by comparing it with the K-means and Gaussian Mixture Model algorithms over the RGB, HSV, HSL and CIELAB color spaces. The experimental results confirm the effectiveness of the approach in improving segmentation quality in terms of both precision and computational time.
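
As a rough illustration of the far-apart seeding idea described above, the sketch below (Python, illustrative only) picks initial centroids by accumulating distances to the centroids already chosen; the full Pillar algorithm also includes outlier handling and other refinements not shown here.

```python
import numpy as np

def pillar_init(X, k):
    """Seed k centroids spread as far apart as possible: each new centroid is
    the data point with the largest accumulated distance to earlier centroids."""
    grand_mean = X.mean(axis=0)
    first = X[np.argmax(np.linalg.norm(X - grand_mean, axis=1))]
    centroids = [first]
    acc = np.zeros(X.shape[0])
    for _ in range(1, k):
        acc += np.linalg.norm(X - centroids[-1], axis=1)
        centroids.append(X[np.argmax(acc)])
    return np.array(centroids)

# toy usage: treat the pixels of an image as points in RGB space
pixels = np.random.rand(1000, 3)
seeds = pillar_init(pixels, k=4)   # feed these to K-means as initial centroids
```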

Keywords: Image segmentation, K-means clustering, Pillar algorithm, color spaces.

1855 Development of Total Maximum Daily Load Using Water Quality Modelling as an Approach for Watershed Management in Malaysia

Authors: S. A. Che Osmi, W. M. F. Wan Ishak, H. Kim, M. A. Azman, M. A. Ramli

Abstract:

Rivers are an important water source for many activities, including industrial and domestic uses such as daily consumption, transportation, power supply and recreation. However, increasing activity along rivers has multiplied the sources of pollutants entering the water bodies and degraded river water quality. It has therefore become a challenge to develop effective river management that ensures the river's water sources are well managed and regulated. In Malaysia, several approaches to river management have been implemented, such as the Integrated River Basin Management (IRBM) program, led by the Department of Drainage and Irrigation (DID), Malaysia, for coordinating the management of resources in a natural environment on a river-basin basis to ensure their sustainability. Nowadays, the Total Maximum Daily Load (TMDL) is one of the best approaches for river management in Malaysia. TMDL implementation is regulated and implemented in the United States. A study on the development of a TMDL for the Malacca River has been carried out through water quality monitoring, development of a water quality model using the Environmental Fluid Dynamics Code (EFDC), and a TMDL implementation plan. The implementation of the TMDL will help stakeholders and regulators to control and improve the water quality of the river. It is one of the good approaches for river management in Malaysia.

Keywords: EFDC, river management, TMDL, water quality modelling.

1854 Determination of Unsaturated Soil Permeability Based on Geometric Factor Development of Constant Discharge Model

Authors: A. Rifa’i, Y. Takeshita, M. Komatsu

Abstract:

After the 2006 Yogyakarta earthquake, the main problem in the first yard of Prambanan Temple was ponding that occurred after rainfall. To solve this problem, the soil must be characterized, in particular its permeability coefficient (k) in both saturated and unsaturated conditions. A more accurate and efficient field testing procedure is required to obtain permeability data that represent the field condition. One such field permeability test is the constant discharge procedure for determining the permeability coefficient. Necessary adjustments to the constant discharge procedure, especially the value of the geometric factor (F), need to be determined to improve the corresponding value of the permeability coefficient. The value of k is correlated with the volumetric water content (θ) from the unsaturated to the saturated condition. In principle, the constant discharge procedure supplies a constant flow to a permeameter tube that infiltrates into the ground until the water level in the tube becomes constant. The constant water level in the tube is highly dependent on the tube dimensions. Every tube dimension has a shape factor, called the geometric factor, that affects the result of the test; its value is defined by the shape and radius of the tube. This research modified the geometric factor parameters by using an empty tube material method so that the geometric factor changes. The saturation level is monitored using a soil moisture sensor. The field test results were compared with laboratory test results to validate the test. Field and laboratory results for the empty tube material method differ on average by 3.33 × 10⁻⁴ cm/s. The test results showed that the modified geometric factor provides more accurate data, and the improved constant discharge procedure provides more relevant results.
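
For orientation, one common constant-head form ties the measured discharge to k through the geometric factor, roughly k = Q / (F·H); the tiny sketch below (Python) only illustrates that arithmetic with made-up numbers, not the paper's calibration of F.

```python
def permeability_constant_head(Q_cm3_per_s, F_cm, H_cm):
    """Constant-head style estimate of the permeability coefficient:
    k = Q / (F * H), with F the geometric (shape) factor of the tube."""
    return Q_cm3_per_s / (F_cm * H_cm)

# made-up numbers for illustration only (not the paper's data)
k = permeability_constant_head(Q_cm3_per_s=0.05, F_cm=25.0, H_cm=30.0)
print(f"k = {k:.2e} cm/s")
```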

Keywords: Constant discharge, geometric factor, permeability coefficient, unsaturated soils.

1853 Effect of Thistle Ecotype in the Physical-Chemical and Sensorial Properties of Serra da Estrela Cheese

Authors: Raquel P. F. Guiné, Marlene I. C. Tenreiro, Ana C. Correia, Paulo Barracosa, Paula M. R. Correia

Abstract:

The objective of this study was to evaluate the physical and chemical characteristics of Serra da Estrela cheese and compare these results with those of a sensory analysis. Six samples of Serra da Estrela cheese produced with six different ecotypes of thistle were taken from a dairy situated in Penalva do Castelo. The chemical properties evaluated were moisture content, protein, fat, ash, chlorides and pH; the physical properties studied were color and texture; finally, a sensory evaluation was undertaken. The results showed moisture varying in the range 40-48%, protein in the range 15-20%, fat between 41-45%, ash between 3.9-5.0% and chlorides varying from 1.2 to 3.0%. The pH varied from 4.8 to 5.4. The textural properties revealed that the crust hardness is relatively low (maximum 7.3 N), although greater than the flesh firmness (maximum 1.7 N), and that these cheeses are indeed of the soft-paste type, with measurable stickiness and intense adhesiveness. The color analysis showed that the crust is relatively light (L* over 50) and has a predominantly yellow coloration (b* around 20 or over), although with a slight greenish tone (a* negative). The results of the sensory analysis did not show great variability for most of the attributes measured, although some differences were found in attributes such as crust thickness, crust uniformity and flesh creaminess.

Keywords: Chemical composition, color, sensorial analysis, Serra da Estrela cheese, texture.

1852 Design of an Eddy Current Brake System for the Use of Roller Coasters Based on a Human Factors Engineering Approach

Authors: Adam L. Yanagihara, Yong Seok Park

Abstract:

The goal of this paper is to converge upon a design for a brake system that could be used on a roller coaster at an amusement park. It was necessary to find what could be deemed a “comfortable” deceleration, so that passengers do not feel as if they are suddenly jerked and pressed against the restraining harnesses. A human factors engineering approach was taken to determine this deceleration. Based on a previous study that tested the deceleration of transit vehicles, a deceleration of 0.45 G was adopted as the design requirement for the system. An adjustable linear eddy current brake using permanent magnets would be the ideal system for meeting this requirement. Anthropometric data were then used to determine a realistic weight and length for the roller coaster that the brake was being designed for. The weight and length data were then factored into magnetic brake force equations, which were used to determine the brake system and the brake run layout. A final design for the brake was determined, and it was found that a total of 12 brakes would be needed, with a maximum braking distance of 53.6 m, in order to stop a roller coaster travelling at its top speed and loaded to maximum capacity. This design is derived from theoretical calculations, but it is within the realm of feasibility.
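
As a quick sanity check on the kind of braking distance quoted above, constant-deceleration kinematics gives the stopping distance from a given speed; the speed used in the snippet below is an assumption for illustration, not a figure from the paper.

```python
G = 9.81  # m/s^2

def stopping_distance(speed_mps, decel_g=0.45):
    """Distance to stop from a given speed under a constant 'comfortable' deceleration."""
    return speed_mps**2 / (2.0 * decel_g * G)

# assumed top speed of about 22 m/s (~80 km/h); illustrative only
print(f"stopping distance ≈ {stopping_distance(22.0):.1f} m")
```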

Keywords: Eddy current brake, engineering design, human factors engineering.

1851 A Case Study on the Numerical-Probability Approach for Deep Excavation Analysis

Authors: Komeil Valipourian

Abstract:

Urban development and the growing need for infrastructure have increased the importance of deep excavations. In this study, after introducing probability analysis as an important issue, an attempt has been made to apply it to the deep excavation project of Bangkok's Metro as a case study. For this purpose, a numerical probability model has been developed based on the Finite Difference Method and a Monte Carlo sampling approach. The results indicate that disregarding probability in this project would result in an inappropriate design of the retaining structure. Therefore, a probabilistic redesign of the support is proposed and carried out as one application of probability analysis. A 50% reduction in the flexural strength of the structure increases the failure probability by just 8%, keeping it within the allowable range, and helps improve economic conditions while maintaining mechanical efficiency. With regard to the lack of efficient design in most deep excavations, an attempt was made, by considering geometrical and geotechnical variability, to develop an optimum practical design standard for deep excavations based on failure probability. On this basis, a practical relationship is presented for estimating the maximum allowable horizontal displacement, which can help improve design conditions without carrying out the full probability analysis.
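
The Monte Carlo side of the approach can be pictured with the toy loop below (Python): sample the uncertain inputs, push them through a response model, and count how often a displacement limit is exceeded. The placeholder response model and all numbers are illustrative, not the paper's finite difference model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

def failure_probability(n=100_000, limit_mm=50.0):
    """Toy Monte Carlo sketch: sample uncertain soil and wall parameters,
    evaluate a placeholder displacement model, and count exceedances."""
    E_soil = rng.lognormal(mean=np.log(40.0), sigma=0.25, size=n)   # soil stiffness, MPa
    EI_wall = rng.normal(loc=1.0, scale=0.10, size=n)               # relative wall stiffness
    displacement_mm = 1800.0 / (E_soil * EI_wall)                   # placeholder response
    return np.mean(displacement_mm > limit_mm)

print(f"P_f ≈ {failure_probability():.2%}")
```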

Keywords: Numerical probability modeling, deep excavation, allowable maximum displacement, finite difference method, FDM.

1850 Jeffrey's Prior for Unknown Sinusoidal Noise Model via Cramer-Rao Lower Bound

Authors: Samuel A. Phillips, Emmanuel A. Ayanlowo, Rasaki O. Olanrewaju, Olayode Fatoki

Abstract:

This paper employs the Jeffrey's prior technique to estimate the periodograms and frequency of a sinusoidal model for unknown noisy time-varying or oscillating events (data) in a Bayesian setting. The non-informative Jeffrey's prior was adopted for the posterior trigonometric function of the sinusoidal model, and Cramer-Rao Lower Bound (CRLB) inference was used to carve out the minimum variance needed to curb the invariance-structure effect for unknown noisy time observations and repeated circular patterns. An average monthly oscillating temperature series, measured in degrees Celsius (°C) from 1901 to 2014, was subjected to the posterior solution of the unknown noisy events of the sinusoidal model via Markov Chain Monte Carlo (MCMC). It was deduced not only that a two-minute period is required to complete a cycle of changing temperature from one particular degree Celsius to another, but also that the sinusoidal model via the CRLB-Jeffrey's prior for unknown noisy events produced a smaller posterior Maximum A Posteriori (MAP) estimate than that obtained for known noisy events.
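
For contrast with the Bayesian MCMC procedure described above, the classical (non-Bayesian) periodogram locates the dominant frequency of an oscillating series directly from the FFT; the sketch below uses a synthetic monthly series, not the 1901-2014 temperature data.

```python
import numpy as np

def dominant_period(y, dt=1.0):
    """Classical periodogram: return the period of the strongest spectral peak."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    freqs = np.fft.rfftfreq(y.size, d=dt)
    power = np.abs(np.fft.rfft(y))**2
    peak = np.argmax(power[1:]) + 1       # skip the zero-frequency bin
    return 1.0 / freqs[peak]

# synthetic monthly series with an annual cycle (illustrative only)
t = np.arange(240)
series = 20.0 + 5.0*np.sin(2*np.pi*t/12) + np.random.default_rng(1).normal(0, 0.5, t.size)
print(dominant_period(series, dt=1.0))   # ≈ 12 (months)
```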

Keywords: Cramer-Rao Lower Bound (CRLB), Jeffrey's prior, Sinusoidal, Maximum A Posteriori (MAP), Markov Chain Monte Carlo (MCMC), Periodograms.

1849 Thermodynamic Cycle Analysis for Overall Efficiency Improvement and Temperature Reduction in Gas Turbines

Authors: Jeni A. Popescu, Ionut Porumbel, Valeriu A. Vilag, Cleopatra F. Cuciumita

Abstract:

The paper presents a thermodynamic cycle analysis for five turboshaft engine cycles. The first cycle is a Brayton cycle, describing the evolution of a classical turboshaft based on the Klimov TV2 engine. The other four cycles aim at approaching an Ericsson cycle by replacing the Brayton cycle's adiabatic expansion in the turbine with quasi-isothermal expansion. The maximum quasi-Ericsson cycle temperature is set to a lower value than the maximum Brayton cycle temperature, equal to the Brayton cycle power turbine inlet temperature, in order to decrease the engine NOx emissions. The power/expansion ratio distribution over the stages of the gas generator turbine is also kept the same. In two of the considered quasi-Ericsson cycles, the efficiencies of the gas generator turbine, as well as the power/expansion ratio distribution over its stages, are maintained the same as for the reference case, while for the other two cases, the efficiencies are increased in order to obtain the same shaft power as in the reference case. For the two cases respecting the first condition, both the shaft power and the thermodynamic efficiency of the engine decrease, while for the other two, the power and efficiency are maintained, as a result of assuming new, more efficient gas generator turbines.

Keywords: Combustion, Ericsson, thermodynamic analysis, turbine.

1848 Evaluation of Biofertilizer and Manure Effects on Quantitative Yield of Nigella sativa L.

Authors: Mohammad Reza Haj Seyed Hadi, Fereshteh Ghanepasand, Mohammad Taghi Darzi

Abstract:

The main objective of this study was to determine the effects of nitrogen-fixing bacteria and manure application on the seed yield and yield components of black cumin (Nigella sativa L.). The experiment was carried out at the RAN Research Station in Firouzkouh in 2012 as a 4×4 factorial experiment arranged in a randomized complete block design with three replications. Nitrogen-fixing bacteria at four levels (control, Azotobacter, Azospirillum and Azotobacter + Azospirillum) and manure application at four levels (0, 2.5, 5 and 7.5 t ha⁻¹) were used in this investigation. The results show that the greatest plant height, 1000-seed weight, seed number per follicle, follicle yield, seed yield and harvest index were obtained when Azotobacter and Azospirillum were applied together. Manure application affected only follicle yield, with the highest follicle yield obtained at 5 t manure ha⁻¹. The maximum seed yield was obtained when Azotobacter + Azospirillum were inoculated onto the black cumin seeds and 5 t manure ha⁻¹ was applied. According to the results of this investigation, the integrated management of Azotobacter and Azospirillum with manure application is the best treatment for achieving the maximum quantitative characteristics of black cumin.

Keywords: Azotobacter, azospirillum, black cumin, yield, yield components.

1847 Numerical Optimization within Vector of Parameters Estimation in Volatility Models

Authors: J. Arneric, A. Rozga

Abstract:

In this paper, the usefulness of a quasi-Newton iteration procedure for parameter estimation of the conditional variance equation within the BHHH algorithm is presented. Analytical maximization of the likelihood function using first and second derivatives is too complex when the variance is time-varying. The advantage of the BHHH algorithm over other optimization algorithms is that it requires no third derivatives while still ensuring convergence. To simplify the optimization procedure, the BHHH algorithm approximates the matrix of second derivatives using the information identity. However, parameter estimation in symmetric and asymmetric GARCH(1,1) models assuming normally distributed returns is not that simple, i.e. it is difficult to solve analytically. The maximum of the likelihood function can be found by iterating until no further increase is achievable. Because the solutions of the numerical optimization are very sensitive to the initial values, starting parameters for the GARCH(1,1) model are defined; the number of iterations can be reduced by using starting values close to the global maximum. The optimization procedure is illustrated in the framework of modeling daily volatility of the most liquid stocks on the Croatian capital market: Podravka (food industry), Petrokemija (fertilizer industry) and Ericsson Nikola Tesla (information and communications industry).
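
A minimal sketch of the likelihood maximization being described, assuming a symmetric Gaussian GARCH(1,1) and using SciPy's L-BFGS-B quasi-Newton routine as a stand-in for the BHHH update (which approximates the Hessian by the outer product of gradients); the return series and starting values are illustrative, not the Croatian stock data.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_negloglik(params, r):
    """Negative Gaussian log-likelihood of a GARCH(1,1) conditional variance model."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf                      # keep the search in the stationary region
    sigma2 = np.empty_like(r)
    sigma2[0] = np.var(r)
    for t in range(1, r.size):
        sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]
    return 0.5 * np.sum(np.log(2*np.pi) + np.log(sigma2) + r**2 / sigma2)

returns = np.random.default_rng(2).normal(0.0, 0.01, 1000)
fit = minimize(garch11_negloglik, x0=[1e-6, 0.05, 0.90], args=(returns,),
               method="L-BFGS-B", bounds=[(1e-8, None), (0.0, 0.999), (0.0, 0.999)])
print(fit.x)   # omega, alpha, beta
```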

Keywords: Heteroscedasticity, Log-likelihood Maximization, Quasi-Newton iteration procedure, Volatility.

1846 An Evaluation of Solubility of Wax and Asphaltene in Crude Oil for Improved Flow Properties Using a Copolymer Solubilized in Organic Solvent with an Aromatic Hydrocarbon

Authors: S. M. Anisuzzaman, Sariah Abang, Awang Bono, D. Krishnaiah, N. M. Ismail, G. B. Sandrison

Abstract:

Wax and asphaltene are high molecular weight compounds that contribute to the stability of crude oil in a dispersed state. Transportation of crude oil along pipelines from the oil rig to the refineries causes temperature fluctuations which lead to the coagulation of wax and the flocculation of asphaltenes. This paper focuses on preventing the deposition of wax and asphaltene precipitates on the inner surface of pipelines by using a wax inhibitor and an asphaltene dispersant. The novelty of this prevention method is the combination of three substances, a wax inhibitor dissolved in a wax inhibitor solvent and an asphaltene solvent, namely ethylene-vinyl acetate (EVA) copolymer dissolved in methylcyclohexane (MCH) and toluene (TOL), to inhibit the precipitation and deposition of wax and asphaltene. The objective of this paper was to optimize the percentage composition of each component of this inhibitor so as to maximize the viscosity reduction of the crude oil. The optimization was divided into two stages: a laboratory experimental stage, in which the viscosity of crude oil samples containing inhibitors of different component compositions was tested at decreasing temperatures, and a data optimization stage using response surface methodology (RSM) to design an optimizing model. The experimental results showed that the combination of 50% EVA + 25% MCH + 25% TOL gave a maximum viscosity reduction of 67%, while the RSM model indicated that the combination of 57% EVA + 20.5% MCH + 22.5% TOL gave a maximum viscosity reduction of up to 61%.

Keywords: Asphaltene, ethylene-vinyl acetate, methylcyclohexane, toluene, wax.

1845 Investigation of New Method to Achieve Well Dispersed Multiwall Carbon Nanotubes Reinforced Al Matrix Composites

Authors: A. H. Javadi, Sh. Mirdamadi, M. A. Faghisani, S. Shakhesi

Abstract:

Nanostructured materials have attracted many researchers due to their outstanding mechanical and physical properties. For example, carbon nanotubes (CNTs) and carbon nanofibres (CNFs) are considered attractive reinforcement materials for lightweight, high-strength metal matrix composites. These composites are being projected for use in structural applications, for their high specific strength, as well as in functional materials, for their exciting thermal and electrical characteristics. The critical issues of CNT-reinforced MMCs include processing techniques, nanotube dispersion, the interface, strengthening mechanisms and mechanical properties. One of the major obstacles to the effective use of carbon nanotubes as reinforcements in metal matrix composites is their agglomeration and poor distribution/dispersion within the metallic matrix. In order to tap into the advantages of CNT (or CNF) properties in composites, high dispersion of the CNTs (or CNFs) and strong interfacial bonding are the key issues, which remain challenging. Processing techniques used for synthesis of the composites have been studied with the objective of achieving a homogeneous distribution of carbon nanotubes in the matrix. Modified mechanical alloying (ball milling) techniques have emerged as promising routes for the fabrication of carbon nanotube (CNT) reinforced metal matrix composites. In order to obtain a homogeneous product, good control of the milling process, in particular control of the ball movement, is essential. Controlling the ball motion during milling leads to a reduction in grinding energy and a more homogeneous product. The critical inner diameter of the milling container at a particular rotational speed can also be calculated. In the present work, conventional and modified mechanical alloying are used to generate a homogeneous distribution of 2 wt.% CNT within Al powder. Aluminium powder of 99% purity (Acros, 200 mesh) was used along with two types of multiwall carbon nanotube (MWCNT) having different aspect ratios to produce Al-CNT composites. The composite powders were processed into bulk material by compaction and sintering, using a cylindrical compaction setup and a tube furnace. Field emission scanning electron microscopy (FESEM), X-ray diffraction (XRD), Raman spectroscopy and Vickers macro-hardness testing were used to evaluate CNT dispersion, powder morphology, CNT damage, phase composition, mechanical properties and crystallite size. Despite the success of ball milling in dispersing CNTs in Al powder, it is often accompanied by considerable strain hardening of the Al powder, which may have implications for the final properties of the composite. The results show that particle size and morphology vary with milling time. In addition, mixing and sonication before mechanical alloying, together with the modified ball mill, improve the dispersion of the CNTs in the Al matrix.

Keywords: Multiwall carbon nanotube, aluminum matrix composite, dispersion, mechanical alloying, sintering.

1844 Design Approach to Incorporate Unique Performance Characteristics of Special Concrete

Authors: Devendra Kumar Pandey, Debabrata Chakraborty

Abstract:

The advancement of various concrete ingredients such as plasticizers, additives and fibers has enabled concrete technologists to develop many viable varieties of special concrete in recent decades. These varieties offer significant enhancement of both the green and the hardened properties of concrete, and a prudent selection of the appropriate type of concrete can resolve many design and application issues in construction projects. This paper focuses on the use of self-compacting concrete, high early strength concrete, structural lightweight concrete, fiber reinforced concrete, high performance concrete and ultra-high strength concrete in structures. The modified properties of strength at various ages, flowability, porosity, equilibrium density, flexural strength, elasticity, permeability, etc. need to be carefully studied and incorporated into the design of the structures. The paper demonstrates various mixture combinations and the concrete properties that can be leveraged, and proposes selecting such products based on the end use of the structure in order to utilize the modified characteristics of these concrete varieties efficiently. The study involves mapping the characteristics to the benefits and savings for the structure from a design perspective. Self-compacting concrete is characterized by high shuttering loads, better finish and the feasibility of closer reinforcement spacing; the structural design procedures can be modified to specify higher formwork strength, greater height of vertical members, reduced cover and increased ductility, and the transverse reinforcement can be spaced at closer intervals than in regular structural concrete. Structural lightweight concrete allows structures to be designed for reduced dead load and increased insulation; member dimensions and steel requirements can be reduced in proportion to the roughly 25 to 35 percent reduction in dead load due to the self-weight of the concrete. Steel fiber reinforced concrete can be used to design grade slabs without primary reinforcement because of its 70 to 100 percent higher tensile strength, and the design procedures incorporate reductions in thickness and joint spacing. High performance concrete increases the life of structures by improving paste characteristics and durability through supplementary cementitious materials; often, these mixes are also designed for slower heat generation in the initial phase of hydration, so the structural designer can incorporate the slow development of strength in the design and specify 56- or 90-day strength requirements. For designing high-rise building structures, the creep and elasticity properties of such concrete also need to be considered. Lastly, certain structures require performance under loading conditions much earlier than the final maturity of concrete; high early strength concrete has been designed to cater to a variety of uses at ages as early as 8 to 12 hours. Therefore, an understanding of the performance specifications of special concrete is a definite door towards a superior structural design approach.

Keywords: High performance concrete, special concrete, structural design, structural lightweight concrete.

1843 Impact of GCSC on Measured Impedance by Distance Relay in the Presence of Single Phase to Earth Fault

Authors: M. Zellagui, A. Chaghi

Abstract:

This paper studies the impact of GTO Controlled Series Capacitor (GCSC) parameters on the impedance (Zseen) measured by MHO distance relays for a single 220 kV high-voltage transmission line in the presence of a single-phase-to-earth fault with fault resistance (RF). The study deals with a 220 kV transmission line of the Eastern Algerian transmission network of Group Sonelgaz (Algerian Company of Electricity and Gas), compensated by a series Flexible AC Transmission System (FACTS) device, i.e. a GCSC connected at the midpoint of the line. The transmitted active and reactive powers are controlled by three GCSCs. The effects of the maximum reactive power and the maximum voltage injected by the GCSC on the impedance measured by the distance relays are treated. The simulation results investigate the effects of the GCSC injected parameters, variable reactance (XGCSC), variable voltage (VGCSC) and injected reactive power (QGCSC), on the measured resistance and reactance in the presence of an earth fault with a fault resistance varied between 5 and 50 Ω for three case studies.

Keywords: GCSC Parameters, Transmission line, Earth fault, Symmetrical components, Distance protection, Measured impedance.

1842 Microbial Oil Production by Mixed Culture of Microalgae Chlorella sp. KKU-S2 and Yeast Torulaspora maleeae Y30

Authors: Ratanaporn Leesing, Rattanaporn Baojungharn, Thidarat Papone

Abstract:

Compared with oil production from single-species cultures of microorganisms, little work has been performed on mixed cultures of microalgae and yeast. This article aims to show the high oil accumulation potential of a mixed culture of the microalga Chlorella sp. KKU-S2 and the oleaginous yeast Torulaspora maleeae Y30 using sugarcane molasses as substrate. The monoculture of T. maleeae Y30 grew faster than that of the microalga Chlorella sp. KKU-S2. In the yeast monoculture, a biomass of 6.4 g/L with a specific growth rate (μ) of 0.265 d⁻¹ and a lipid yield of 0.466 g/L were obtained, while 2.53 g/L of biomass with μ of 0.133 d⁻¹ and a lipid yield of 0.132 g/L were obtained in the monoculture of Chlorella sp. KKU-S2. The biomass concentration in the mixed culture of T. maleeae Y30 with Chlorella sp. KKU-S2 increased faster and reached higher values than in the monocultures and in the mixed culture of microalgae. In the mixed culture of the microalgae Chlorella sp. KKU-S2 and C. vulgaris TISTR8580, a biomass of 3.47 g/L and a lipid yield of 0.123 g/L were obtained. In the mixed culture of T. maleeae Y30 with Chlorella sp. KKU-S2, a maximum biomass of 7.33 g/L and a lipid yield of 0.808 g/L were obtained. The maximum cell yield coefficient (YX/S, 0.229 g/L), specific lipid yield (YP/X, 0.11 g lipid/g cells) and volumetric lipid production rate (QP, 0.115 g/L/d) were obtained in the mixed culture of yeast and microalgae. Clearly, T. maleeae Y30 and Chlorella sp. KKU-S2 use sugarcane molasses as organic nutrients efficiently in mixed culture under mixotrophic growth. The biomass productivity and lipid yield are notably enhanced in comparison with the monocultures.

Keywords: Microbial oil, Chlorella sp. KKU-S2, Chlorella vulgaris, Torulaspora maleeae Y30, mixed culture, biodiesel.

1841 Wavelet Compression of ECG Signals Using SPIHT Algorithm

Authors: Mohammad Pooyan, Ali Taheri, Morteza Moazami-Goudarzi, Iman Saboori

Abstract:

In this paper we present a novel approach for wavelet compression of electrocardiogram (ECG) signals based on the set partitioning in hierarchical trees (SPIHT) coding algorithm. The SPIHT algorithm has achieved prominent success in image compression; here we use a modified version of SPIHT for one-dimensional signals. We applied the wavelet transform with SPIHT coding to different records of the MIT-BIH database. The results show the high efficiency of this method for ECG compression.
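
A rough sense of the wavelet-domain compression exploited here can be had from the thresholding sketch below (Python with PyWavelets); it simply keeps the largest coefficients and is not the SPIHT bit-plane coder itself, whose set-partitioning logic is considerably more involved. The synthetic signal stands in for an MIT-BIH record.

```python
import numpy as np
import pywt

def wavelet_threshold_compress(ecg, wavelet="db4", level=5, keep=0.10):
    """Keep only the largest `keep` fraction of wavelet coefficients and reconstruct."""
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    flat, slices = pywt.coeffs_to_array(coeffs)
    cutoff = np.quantile(np.abs(flat), 1.0 - keep)
    flat[np.abs(flat) < cutoff] = 0.0                       # discard small coefficients
    kept = pywt.array_to_coeffs(flat, slices, output_format="wavedec")
    return pywt.waverec(kept, wavelet)[:len(ecg)]

t = np.linspace(0, 10, 3600)
fake_ecg = np.sin(2*np.pi*1.2*t) + 0.1*np.random.default_rng(3).normal(size=t.size)
reconstructed = wavelet_threshold_compress(fake_ecg)
```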

Keywords: ECG compression, wavelet, SPIHT.

1840 Modelling Hydrological Time Series Using Wakeby Distribution

Authors: Ilaria Lucrezia Amerise

Abstract:

The statistical modelling of precipitation data for a given portion of territory is fundamental for the monitoring of climatic conditions and for Hydrogeological Management Plans (HMP). This modelling is rendered particularly complex by the changes taking place in the frequency and intensity of precipitation, presumably attributable to global climate change. This paper applies the Wakeby distribution (with 5 parameters) as a theoretical reference model. The number and the quality of its parameters indicate that this distribution may be the appropriate choice for interpolating hydrological variables; moreover, the Wakeby is particularly suitable for describing phenomena that produce heavy tails. The proposed estimation methods for determining the values of the Wakeby parameters are those used for density functions with heavy tails. The commonly used procedure is the classic method of moments weighted with probabilities (probability weighted moments, PWM), although this has often shown difficulty of convergence, or rather convergence to a configuration of inappropriate parameters. In this paper, we analyze the problem of likelihood estimation for a random variable expressed through its quantile function. The method of maximum likelihood is, in this case, more demanding than in more usual estimation settings. The motivation for using it lies in the sampling and asymptotic properties of the maximum likelihood estimators, which improve the estimates obtained by providing indications of their variability and, therefore, of their accuracy and reliability. These features are highly appreciated in contexts where poor decisions, attributable to an inefficient or incomplete information base, can cause serious damage.
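
Since the Wakeby distribution is defined through its quantile function rather than a closed-form density, it is easiest to appreciate (and to simulate from) in that form; the short sketch below writes out the standard five-parameter quantile function and draws a heavy-tailed sample with illustrative parameter values.

```python
import numpy as np

def wakeby_quantile(F, xi, alpha, beta, gamma, delta):
    """Five-parameter Wakeby distribution, defined through its quantile function:
    x(F) = xi + (alpha/beta)*(1 - (1-F)**beta) - (gamma/delta)*(1 - (1-F)**(-delta))."""
    F = np.asarray(F, dtype=float)
    return (xi
            + (alpha / beta) * (1.0 - (1.0 - F)**beta)
            - (gamma / delta) * (1.0 - (1.0 - F)**(-delta)))

# simulate a heavy-tailed "precipitation" sample by feeding uniform variates
# through the quantile function (parameter values are illustrative only)
u = np.random.default_rng(4).uniform(size=1000)
sample = wakeby_quantile(u, xi=0.0, alpha=5.0, beta=0.5, gamma=1.0, delta=0.3)
```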

Keywords: Generalized extreme values (GEV), likelihood estimation, precipitation data, Wakeby distribution.

1839 Efficient Preparation and Characterization of Carbohydrate Based Monomers. D-mannose Derivatives

Authors: L. M. Stefan, A. M. Pana, M. Silion, M. Balan, G. Bandur, L. M. Rusnac

Abstract:

The field of polymeric biomaterials is very important from the socio-economic viewpoint. Synthetic carbohydrate polymers are being increasingly investigated as biodegradable, biocompatible and biorenewable materials. The aim of this study was to synthesize and characterize some derivatives based on D-mannose. D-mannose was chemically modified to obtain 1-O-allyl-2,3:5,6-di-O-isopropylidene-D-mannofuranose and 1-O-(2′,3′-epoxypropyl)-2,3:5,6-di-O-isopropylidene-D-mannofuranose. The chemical structure of the resulting compounds was characterized by FT-IR and NMR spectroscopy, and by HPLC-MS.

Keywords: D-mannose, biopolymers, spectroscopy, synthesis.

1838 On the Optimality Assessment of Nanoparticle Size Spectrometry and Its Association to the Entropy Concept

Authors: A. Shaygani, R. Saifi, M. S. Saidi, M. Sani

Abstract:

Particle size distribution, the most important characteristic of aerosols, is obtained through electrical characterization techniques. The dynamics of charged nanoparticles under the influence of an electric field in an Electrical Mobility Spectrometer (EMS) reveals the size distribution of these particles. The accuracy of this measurement is influenced by the flow conditions, geometry, electric field and particle charging process, and therefore by the transfer function (transfer matrix) of the instrument. In this work, a wire-cylinder corona charger was designed and the combined field-diffusion charging process of injected poly-disperse aerosol particles was numerically simulated as a prerequisite for the study of a multichannel EMS. The result, a cloud of particles with a non-uniform charge distribution, was introduced to the EMS. The flow pattern and electric field in the EMS were simulated using Computational Fluid Dynamics (CFD) to obtain the particle trajectories in the device and therefore to calculate the signal reported by each electrometer. Based on the output signals (which result from the bombardment of particles and the transfer of their charges as currents), we proposed a modification to the size of the detecting rings (which are connected to electrometers) in order to evaluate particle size distributions more accurately. Based on the capability of the system to transfer information about the size distribution of the injected particles, we proposed a benchmark for assessing the optimality of the design. This method applies the concept of Von Neumann entropy and borrows the definition of entropy from information theory (Shannon entropy) to measure optimality. Entropy, in the Shannon sense, is the "average amount of information contained in an event, sample or character extracted from a data stream". Evaluating the responses (signals) obtained with the various configurations of detecting rings, the modified configuration was the one that gave the best predictions of the size distributions of the injected particles; it was also the one with the maximum amount of entropy. A reasonable consistency was also observed between the accuracy of the predictions and the entropy content of each configuration. In this method, the entropy is extracted from the transfer matrix of the instrument for each configuration. Finally, various clouds of particles were introduced to the simulations and the predicted size distributions were compared to the exact size distributions.
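
To make the entropy-as-benchmark idea concrete, the sketch below (Python) computes an average Shannon entropy over the normalized columns of a toy transfer matrix; it is only a loose analogue of the Von Neumann formulation used in the paper, and the matrix values are made up.

```python
import numpy as np

def mean_column_entropy(transfer_matrix):
    """Shannon entropy (bits) of each normalized column of an instrument
    transfer matrix, averaged over columns; higher values mean the channel
    responses spread information more evenly."""
    T = np.asarray(transfer_matrix, dtype=float)
    P = T / T.sum(axis=0, keepdims=True)              # each size bin's response sums to 1
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(P > 0, P * np.log2(P), 0.0)
    return float(-terms.sum(axis=0).mean())

# toy 4-electrometer x 6-size-bin transfer matrix (illustrative numbers only)
T = np.array([[0.7, 0.2, 0.1, 0.0, 0.0, 0.0],
              [0.2, 0.6, 0.3, 0.1, 0.0, 0.0],
              [0.1, 0.2, 0.5, 0.6, 0.3, 0.1],
              [0.0, 0.0, 0.1, 0.3, 0.7, 0.9]])
print(mean_column_entropy(T))
```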

Keywords: Aerosol Nano-Particle, CFD, Electrical Mobility Spectrometer, Von Neumann entropy.

1837 Seamless Flow of Voluminous Data in High Speed Network without Congestion Using Feedback Mechanism

Authors: T. Sheela, J. Raja

Abstract:

The continuously growing needs of Internet applications that transmit massive amounts of data have led to the emergence of high speed networks. Data transfer must take place without congestion, and hence feedback parameters must be conveyed from the receiver to the sender so as to restrict the sending rate and avoid congestion. Even though TCP tries to avoid congestion by restricting the sending rate and window size, it never informs the sender of the capacity of data that can be sent, and it halves the window size at the time of congestion, resulting in decreased throughput, low utilization of the bandwidth and maximum delay. In this paper, the XCP protocol is used and the feedback parameters are calculated from the arrival rate, service rate, traffic rate and queue size; the receiver thus informs the sender about the throughput, the capacity of data that can be sent and the window size adjustment. As a result, there is no drastic decrease in window size and a better increase in sending rate, so data flow continuously without congestion, yielding a maximum increase in throughput, high utilization of the bandwidth and minimum delay. The results of the proposed work are presented as graphs of throughput, delay and window size. Thus, in this paper, the XCP protocol is illustrated and the various parameters are thoroughly analyzed and adequately presented.
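
The flavor of explicit feedback used by XCP can be sketched as an aggregate control law that rewards spare bandwidth and drains the standing queue; the gains below are the ones usually quoted for XCP, the numbers are illustrative, and the per-packet fair division of the feedback described in the protocol is not shown.

```python
def aggregate_feedback(capacity_bps, arrival_rate_bps, queue_bytes, rtt_s,
                       alpha=0.4, beta=0.226):
    """XCP-style aggregate feedback for one control interval (in bytes):
    positive when there is spare capacity, negative when the queue must drain."""
    spare_bps = capacity_bps - arrival_rate_bps
    return alpha * rtt_s * spare_bps / 8.0 - beta * queue_bytes

# illustrative numbers for a 1 Gbps link at 80% load with a 50 kB standing queue
phi = aggregate_feedback(capacity_bps=1e9, arrival_rate_bps=0.8e9,
                         queue_bytes=50_000, rtt_s=0.05)
print(f"aggregate feedback ≈ {phi:,.0f} bytes this interval")
```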

Keywords: Bandwidth-Delay Product, Congestion Control, Congestion Window, TCP/IP

1836 A New Approach to Optimal Control Problem Constrained by Canonical Form

Authors: B. Farhadinia

Abstract:

In this article, a class of optimal control problems constrained by differential and integral constraints, called the canonical form, is considered. A modified measure-theoretical approach is introduced to solve this class of optimal control problems.

Keywords: Control problem, canonical form, measure theory.

1835 Effect of Concrete Strength and Aspect Ratio on Strength and Ductility of Concrete Columns

Authors: Mohamed A. Shanan, Ashraf H. El-Zanaty, Kamal G. Metwally

Abstract:

This paper presents the effect of concrete compressive strength and rectangularity ratio on the strength and ductility of normal and high strength reinforced concrete columns confined with transverse steel under axial compressive loading. Nineteen normal strength rectangular concrete columns with different variables were tested in this research to study the effect of concrete compressive strength and rectangularity ratio on the strength and ductility of columns. The paper also presents a nonlinear finite element analysis, using ANSYS 15 finite element software, of these specimens and of another twenty high strength concrete square columns tested by other researchers. The results indicate that the axial force-axial strain relationship obtained from the analytical model using ANSYS is in good agreement with the experimental data. The comparison shows that ANSYS is capable of modeling and predicting the actual nonlinear behavior of confined normal and high-strength concrete columns under concentric loading. The maximum applied load and the maximum strain were also predicted satisfactorily. Based on this agreement between the experimental and analytical results, a parametric numerical study was conducted with ANSYS 15 to clarify and evaluate the effect of each variable on the strength and ductility of the columns.

Keywords: ANSYS, concrete compressive strength effect, ductility, rectangularity ratio, strength.

1834 A Novel SVM-Based OOK Detector in Low SNR Infrared Channels

Authors: J. P. Dubois, O. M. Abdul-Latif

Abstract:

The Support Vector Machine (SVM) is a recent class of statistical classification and regression techniques that plays an increasing role in detection problems across various engineering fields, notably in statistical signal processing, pattern recognition, image analysis and communication systems. In this paper, SVM is applied to an infrared (IR) binary communication system with different types of channel models, including Ricean multipath fading and a partially developed scattering channel, with additive white Gaussian noise (AWGN) at the receiver. The structure and performance of the SVM, in terms of the bit error rate (BER) metric, are derived and simulated for these stochastic channel models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of the SVM is then compared to classical binary maximum likelihood detection using a matched filter driven by On-Off Keying (OOK) modulation. We found that the performance of the SVM is superior to that of the traditional optimal detection schemes used in statistical communication, especially for very low signal-to-noise ratio (SNR) ranges. For large SNR, the performance of the SVM is similar to that of the classical detectors. The implication of these results is that SVM can prove very beneficial to IR communication systems, which notoriously suffer from low SNR, at the cost of increased computational complexity.
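
The basic comparison being made can be reproduced in miniature: train an SVM on received OOK samples and measure its BER against a simple threshold (matched-filter style) detector. The sketch below uses a plain AWGN channel and made-up SNR, not the Ricean or partially developed scattering channels studied in the paper.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)

# synthetic OOK symbols through an AWGN channel at low SNR (illustrative only)
bits = rng.integers(0, 2, 4000)
amplitude, noise_sigma = 1.0, 1.0                      # roughly 0 dB SNR
received = amplitude * bits + rng.normal(0.0, noise_sigma, bits.size)

X_train, y_train = received[:2000].reshape(-1, 1), bits[:2000]
X_test,  y_test  = received[2000:].reshape(-1, 1), bits[2000:]

svm = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
ber_svm = np.mean(svm.predict(X_test) != y_test)

# classical threshold detector at half the 'on' amplitude, for comparison
ber_threshold = np.mean((X_test.ravel() > amplitude / 2).astype(int) != y_test)
print(f"BER  SVM: {ber_svm:.3f}   threshold: {ber_threshold:.3f}")
```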

Keywords: Least square-support vector machine, on-off keying, matched filter, maximum likelihood detector, wireless infrared communication.

1833 Analysis of Short Bearing in Turbulent Regime Considering Micropolar Lubrication

Authors: S. S. Gautam, S. Samanta

Abstract:

The aim of this work is to investigate and predict the static performance of a journal bearing under turbulent flow conditions, considering micropolar lubrication. The Reynolds equation is modified to account for turbulent micropolar lubrication and is solved for steady-state operation. Constantinescu's turbulence model is adopted, using its coefficients. The analysis assumes a parallel and inertialess flow. Load capacity and friction factor are evaluated for various operating parameters.

Keywords: Hydrodynamic bearing, micropolar lubrication, coupling number, characteristic length, perturbation analysis.

1832 Experimental Investigation of the Effect of Compression Ratio in a Direct Injection Diesel Engine Running on Different Blends of Rice Bran Oil and Ethanol

Authors: Perminderjit Singh, Randeep Singh

Abstract:

The performance, emission and combustion characteristics of a single-cylinder, four-stroke, variable compression ratio, multi-fuel engine fueled with different blends of rice bran oil methyl ester and ethanol are investigated and compared with the results for standard diesel. Biodiesel produced from rice bran oil by a transesterification process has been used in this study. The experiments were conducted at a fixed engine speed of 1500 rpm, 50% load, and compression ratios of 16.5:1, 17:1, 17.5:1 and 18:1. The impact of compression ratio on fuel consumption, brake thermal efficiency and exhaust gas emissions has been investigated and presented, and the optimum compression ratio giving the best performance has been identified. The results indicate a longer ignition delay, a higher maximum rate of pressure rise, a lower heat release rate and a higher mass fraction burnt at higher compression ratios for the methyl ester blends compared with diesel. The brake thermal efficiency at 50% load for the rice bran oil methyl ester blends and diesel has been calculated, and the blend B40 is found to give the maximum thermal efficiency. The blends, when used as fuel, result in reduced carbon monoxide and hydrocarbon emissions and increased nitrogen oxide emissions.
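
Brake thermal efficiency here is the usual ratio of brake power to the rate of fuel energy input; the snippet below (Python) shows that arithmetic with illustrative figures, not the measured values from the paper.

```python
def brake_thermal_efficiency(brake_power_kw, fuel_flow_kg_per_h, cv_mj_per_kg):
    """Brake thermal efficiency = brake power / rate of fuel energy input."""
    fuel_power_kw = fuel_flow_kg_per_h / 3600.0 * cv_mj_per_kg * 1000.0
    return brake_power_kw / fuel_power_kw

# illustrative figures for a small single-cylinder engine at 50% load
# (calorific value typical of biodiesel blends; not the paper's data)
print(f"BTE ≈ {brake_thermal_efficiency(2.2, 0.65, 38.5):.1%}")
```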

Keywords: Biodiesel, Rice bran oil, Transesterification, Ethanol, Compression Ratio.

1831 Asymmetrical Informative Estimation for Macroeconomic Model: Special Case in the Tourism Sector of Thailand

Authors: Chukiat Chaiboonsri, Satawat Wannapan

Abstract:

This paper applies an asymmetric information concept to the estimation of a macroeconomic model of the tourism sector in Thailand. The variables analyzed are Thailand's international and domestic tourism revenues, the expenditures of foreign and domestic tourists, service investments by the private sector, service investments by the government of Thailand, Thailand's service imports and exports, and net service income transfers. All data are time-series indices observed between 2002 and 2015. Empirically, the tourism multiplier and accelerator were estimated by two statistical approaches. The first was the Generalized Method of Moments (GMM) model, based on the assumption that the tourism market in Thailand has perfect information (symmetric data). The second was the Maximum Entropy Bootstrapping approach (MEboot), based on a process that attempts to deal with imperfect information and to reduce uncertainty in the data observations (asymmetric data). In addition, the tourism leakages were investigated with a simple model based on the injections and leakages concept. The empirical findings show that the parameters computed with the MEboot approach differ from those obtained with the GMM method; however, both the MEboot estimation and the GMM model suggest that Thailand's tourism sector is in a period capable of stimulating the economy.

Keywords: Thailand tourism, maximum entropy bootstrapping approach, macroeconomic model, asymmetric information.

1830 A Combined Approach of a Sequential Life Testing and an Accelerated Life Testing Applied to a Low-Alloy High Strength Steel Component

Authors: D. I. De Souza, D. R. Fonseca, G. P. Azevedo

Abstract:

Sometimes the amount of time available for testing can be considerably less than the expected lifetime of the component. To overcome such a problem, there is the accelerated life-testing alternative, aimed at forcing components to fail by testing them at much higher-than-intended application conditions. These models are known as acceleration models. One possible way to translate test results obtained under accelerated conditions to normal use conditions is through the application of the Maxwell Distribution Law. In this paper we apply a combined approach of sequential life testing and accelerated life testing to a low-alloy high-strength steel component used in the construction of overpasses in Brazil. The underlying sampling distribution is the three-parameter Inverse Weibull model. To estimate the three parameters of the Inverse Weibull model we use a maximum likelihood approach for censored failure data, assuming a linear acceleration condition. To evaluate the accuracy (significance) of the parameter values obtained under normal conditions for the underlying Inverse Weibull model, we apply a sequential life test with a truncation mechanism to the expected normal failure times. An example illustrates the application of this procedure.
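
A minimal sketch of the censored maximum likelihood step, assuming a three-parameter inverse Weibull (Fréchet) form and right-censoring; the failure times, starting values and the use of SciPy's Nelder-Mead routine are illustrative, and the sequential/accelerated framework and Maxwell Distribution Law translation of the paper are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def inv_weibull_neg_loglik(params, t, failed):
    """Three-parameter inverse Weibull: F(t) = exp(-((t - mu)/sigma)**(-alpha)), t > mu.
    `failed` is 1 for observed failures and 0 for right-censored times."""
    alpha, sigma, mu = params
    if alpha <= 0 or sigma <= 0 or np.any(t <= mu):
        return np.inf
    z = (t - mu) / sigma
    log_pdf = np.log(alpha / sigma) - (alpha + 1.0) * np.log(z) - z**(-alpha)
    log_surv = np.log1p(-np.exp(-z**(-alpha)))          # log of 1 - F(t) for censored units
    return -np.sum(np.where(failed == 1, log_pdf, log_surv))

# illustrative failure times (hours) with two right-censored units
times  = np.array([180.0, 240.0, 260.0, 310.0, 350.0, 400.0, 400.0])
failed = np.array([1, 1, 1, 1, 1, 0, 0])
fit = minimize(inv_weibull_neg_loglik, x0=[2.0, 250.0, 0.0],
               args=(times, failed), method="Nelder-Mead")
print(fit.x)   # alpha (shape), sigma (scale), mu (location)
```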

Keywords: Sequential Life Testing, Accelerated Life Testing, Underlying Three-Parameter Weibull Model, Maximum Likelihood Approach, Hypothesis Testing.

1829 An Improved Preprocessing for Biosonar Target Classification

Authors: Turgay Temel, John Hallam

Abstract:

An improved processing scheme to be employed in biosonar signal processing with a cochlea model is proposed and examined. It is compared to conventional models using a modified discriminant analysis, and both are tested. Their performances are evaluated with echo data captured from natural targets (trees). The results indicate that the phase characteristics of the low-pass filters employed in the echo processing have a significant effect on class separability for this data.

Keywords: Cochlea model, discriminant analysis, neuro-spike coding, classification.
