Search results for: Single junction AlxGa1-xAs/GaAs solar cell
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2763


213 Accelerating Quantum Chemistry Calculations: Machine Learning for Efficient Evaluation of Electron-Repulsion Integrals

Authors: Nishant Rodrigues, Nicole Spanedda, Chilukuri K. Mohan, Arindam Chakraborty

Abstract:

A crucial objective in quantum chemistry is the computation of the energy levels of chemical systems. This task requires electron-repulsion integrals as inputs, and the steep computational cost of evaluating these integrals poses a major numerical challenge in the efficient implementation of quantum chemical software. This work presents a moment-based machine learning approach for the efficient evaluation of electron-repulsion integrals. These integrals were approximated using linear combinations of a small number of moments. Machine learning algorithms were applied to estimate the coefficients in the linear combination. A random forest with recursive feature elimination was used to identify promising features; it performed best for learning the sign of each coefficient, but not the magnitude. A neural network with two hidden layers was then used to learn the coefficient magnitudes, along with an iterative feature masking approach to perform input vector compression, identifying a small subset of orbitals whose coefficients are sufficient for the quantum state energy computation. Finally, a small ensemble of neural networks (with a median rule for decision fusion) was shown to improve results compared to a single network.
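To make the median-rule decision fusion concrete, here is a minimal, self-contained Python sketch using scikit-learn; the synthetic data, network sizes, and ensemble size are illustrative assumptions, not the authors' actual setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))            # synthetic stand-in for moment-based features
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=500)   # synthetic target coefficients

# Small ensemble of two-hidden-layer networks, each trained from a different seed.
ensemble = [
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=seed).fit(X, y)
    for seed in range(5)
]

# Median-rule decision fusion: the ensemble prediction is the per-sample
# median of the individual network predictions.
preds = np.stack([net.predict(X) for net in ensemble])   # shape (n_nets, n_samples)
fused = np.median(preds, axis=0)
print("fused MAE:", np.abs(fused - y).mean())
```

The median is preferred over the mean here because it is robust to a single badly trained network in the ensemble.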

Keywords: Quantum energy calculations, atomic orbitals, electron-repulsion integrals, ensemble machine learning, random forests, neural networks, feature extraction.

212 Critical Assessment of Scoring Schemes for Protein-Protein Docking Predictions

Authors: Dhananjay C. Joshi, Jung-Hsin Lin

Abstract:

Protein-protein interactions (PPI) play a crucial role in many biological processes such as cell signalling, transcription, translation, replication, signal transduction, and drug targeting. Structural information about protein-protein interactions is essential for understanding the molecular mechanisms of these processes. Structures of protein-protein complexes are still difficult to obtain by biophysical methods such as NMR and X-ray crystallography, and therefore protein-protein docking computation is considered an important approach for understanding protein-protein interactions. However, reliable prediction of protein-protein complexes is still under way. In the past decades, several grid-based docking algorithms based on the Katchalski-Katzir scoring scheme were developed, e.g., FTDock, ZDOCK, HADDOCK, RosettaDock, and HEX. However, the success rate of protein-protein docking prediction is still far from ideal. In this work, we first propose a more practical measure for evaluating the success of protein-protein docking predictions, the rate of first success (RFS), which is similar to the concept of mean first passage time (MFPT). Accordingly, we have assessed the ZDOCK bound and unbound benchmarks 2.0 and 3.0. We also created a new benchmark set for protein-protein docking predictions, in which the complexes have experimentally determined binding affinity data. We performed free energy calculations based on the solution of the non-linear Poisson-Boltzmann equation (nlPBE) to improve the binding mode prediction. We used the well-studied barnase-barstar system to validate the parameters for the free energy calculations. In addition, nlPBE-based free energy calculations were conducted for the cases badly predicted by ZDOCK and ZRANK. We found that direct molecular mechanics energetics cannot be used to discriminate the native binding pose from the decoys. Our results indicate that nlPBE-based calculations appear to be one of the promising approaches for improving the success rate of binding pose predictions.

Keywords: protein-protein docking, protein-protein interaction, molecular mechanics energetics, Poisson-Boltzmann calculations

211 Multi-Response Optimization in Drilling Al6063/SiC/15% Metal Matrix Composite

Authors: Hari Singh, Abhishek Kamboj, Sudhir Kumar

Abstract:

This investigation proposes a grey-based Taguchi method to solve multi-response problems. The grey-based Taguchi method is based on Taguchi's design-of-experiments method and adopts grey relational analysis (GRA) to transform multi-response problems into single-response problems. In this investigation, an attempt has been made to optimize the drilling process parameters considering weighted output response characteristics using grey relational analysis. The output response characteristics considered are surface roughness, burr height, and hole diameter error under the experimental conditions of cutting speed, feed rate, step angle, and cutting environment. The drilling experiments were conducted using an L27 orthogonal array. A combination of orthogonal array, design of experiments, and grey relational analysis was used to ascertain the best possible drilling process parameters that give minimum surface roughness, burr height, and hole diameter error. The results reveal that the combination of Taguchi design of experiments and grey relational analysis improves the surface quality of the drilled hole.
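For readers unfamiliar with GRA, the following minimal Python sketch shows how several smaller-the-better responses are converted into a single grey relational grade; the response values and equal weights are illustrative placeholders, not the paper's measured data.

```python
import numpy as np

# Hypothetical responses for 4 of the L27 runs (columns: surface roughness,
# burr height, hole diameter error) -- illustrative numbers only.
y = np.array([[1.8, 0.12, 0.030],
              [2.4, 0.09, 0.025],
              [1.5, 0.15, 0.040],
              [2.0, 0.10, 0.020]])

# Step 1: smaller-the-better normalization to [0, 1].
x = (y.max(axis=0) - y) / (y.max(axis=0) - y.min(axis=0))

# Step 2: grey relational coefficient against the ideal sequence x0 = 1,
# with the customary distinguishing coefficient zeta = 0.5.
delta = np.abs(1.0 - x)
zeta = 0.5
xi = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Step 3: grey relational grade = average over the responses
# (equal weights here; the paper uses weighted responses).
grade = xi.mean(axis=1)
print("best run:", int(grade.argmax()) + 1, "grades:", grade.round(3))
```

The run with the highest grade is the single-response optimum that stands in for the original multi-response problem.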

Keywords: Metal matrix composite, drilling, optimization, step drill, surface roughness, burr height, hole diameter error.

210 Choosing R-tree or Quadtree Spatial Data Indexing in an Oracle Spatial Database System for Faster Geographical Map Display in Mobile Geographical Information System Technology

Authors: Maruto Masserie Sardadi, Mohd Shafry bin Mohd Rahim, Zahabidin Jupri, Daut bin Daman

Abstract:

The latest Geographic Information System (GIS) technology makes it possible to administer the spatial components of daily "business objects" in the corporate database and to apply suitable geographic analysis efficiently in a desktop-focused application. Wireless internet technology can be used to transfer spatial data from server to client or vice versa. However, wireless internet suffers from system bottlenecks that can make the data transfer process inefficient, owing to the large volume of spatial data. Optimizing the transfer and retrieval of data is therefore an essential issue that must be considered. An appropriate decision in choosing between the R-tree and Quadtree spatial data indexing methods can optimize the process. With the rapid proliferation of these databases in the past decade, extensive research has been conducted on the design of efficient data structures to enable fast spatial searching. Commercial database vendors like Oracle have also started implementing these spatial indexing methods to cater to large and diverse GIS applications. This paper focuses on the decision to choose between R-tree and quadtree spatial indexing using an Oracle spatial database in a mobile GIS application. Under our test conditions, choosing the appropriate Quadtree or R-tree spatial data indexing method in a single spatial database can save up to 42.5% of the retrieval time.

Keywords: Indexing, Mobile GIS, MapViewer, Oracle Spatial Database.

209 Tailoring of ECSS Standard for Space Qualification Test of CubeSat Nano-Satellite

Authors: B. Tiseo, V. Quaranta, G. Bruno, G. Sisinni

Abstract:

There is an increasing demand for nano-satellite development among universities, small companies, and emerging countries. Low cost and fast delivery are the main advantages of this class of satellites, achieved by the extensive use of commercial off-the-shelf components. On the other hand, low reliability and a poor success rate limit the use of nano-satellites to educational and technology demonstration missions rather than commercial purposes. Standardizing nano-satellite environmental testing by tailoring the existing test standards for medium/large satellites is therefore a crucial step for their market growth. Thus, it is fundamental to find the right trade-off between improving reliability and keeping the low-cost/fast-delivery advantages. This is even more essential for satellites of the CubeSat family. Such miniaturized and standardized satellites have a 10 cm cubic form and a mass of no more than 1.33 kilograms per unit (1U). For this class of nano-satellites, the qualification process is mandatory to reduce the risk of failure during a space mission. This paper reports the description and results of the space qualification test campaign performed on EnduroSat's CubeSat nano-satellite and modules. Mechanical and environmental tests have been carried out step by step: from the testing of the single subsystems up to the assembled CubeSat nano-satellite. Functional tests were performed throughout the test campaign to verify the functionality of the systems. The test durations and levels were selected by tailoring the European Space Agency standard ECSS-E-ST-10-03C and GEVS GSFC-STD-7000A.

Keywords: CubeSat, Nano-satellite, shock, testing, vibration.

208 Preparation of Carbon Nanofiber Reinforced HDPE Using Dialkylimidazolium as a Dispersing Agent: Effect on Thermal and Rheological Properties

Authors: J. Samuel, S. Al-Enezi, A. Al-Banna

Abstract:

High-density polyethylene reinforced with carbon nanofibers (HDPE/CNF) has been prepared via melt processing using dialkylimidazolium tetrafluoroborate (an ionic liquid) as a dispersing agent. The prepared samples were characterized by thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC). The samples blended with the imidazolium ionic liquid exhibit higher thermal stability. DSC analysis showed clear miscibility of the ionic liquid in the HDPE matrix, with a single endothermic peak. The melt rheological analysis of the HDPE/CNF composites was performed using an oscillatory rheometer. The influence of CNF and ionic liquid concentration (0, 0.5, and 1 wt%) on the viscoelastic parameters was investigated at 200 °C over an angular frequency range of 0.1 to 100 rad/s. The rheological analysis shows shear-thinning behavior for the composites. An improvement in the viscoelastic properties was observed as the nanofiber concentration increased. The increase in modulus values was attributed to the structural rigidity imparted by the high-aspect-ratio CNF. The modulus values and complex viscosity of the composites increased significantly at low frequencies. Composites blended with the ionic liquid exhibit slightly lower complex viscosity and modulus values than the corresponding HDPE/CNF compositions. This reduction in melt viscosity, a result of the wetting effect of the polymer-ionic liquid combination, is an additional benefit for polymer composite processing.

Keywords: HDPE, carbon nanofiber, ionic liquid, complex viscosity, modulus.

207 Simulation of Lean Principles Impact in a Multi-Product Supply Chain

Authors: M. Rossini, A. Portioli Studacher

Abstract:

Market competition is moving from the single firm to the whole supply chain because of increasing competition and the growing need for operational efficiency and customer orientation. Supply chain management allows companies to look beyond their organizational boundaries to develop and leverage the resources and capabilities of their supply chain partners. This creates competitive advantages in the marketplace, and as a result SCM has acquired strategic importance. The lean approach is a management strategy that focuses on reducing every type of waste present in an organization, and it is becoming more and more popular among supply chain managers. Applications of the lean approach at the supply chain level, however, are not frequent; in particular, the impact of lean principles in a supply chain context is not well studied. In the literature, there are only a few studies aimed at understanding the qualitative impact of the lean approach in supply chains. Therefore, the goal of this research is to study the impact of implementing lean principles along a supply chain. To achieve this, a simulation model of a three-echelon multi-product supply chain has been built. A Kanban system (with several priority policies) and various degrees of setup time reduction are implemented in the lean-configured supply chain to apply the pull and lot-size-reduction principles, respectively. To evaluate the benefits of the lean approach, the lean supply chain is compared with an EOQ-configured supply chain. The simulation results show that the Kanban system and setup-time reduction improve inventory stock levels. They also show that logistics efforts are affected by the degree of lean implementation. The paper concludes by describing the performance of the lean supply chain in different contexts.

Keywords: Inventory policy, Kanban, lean supply chain, simulation study, supply chain management, planning.

206 Response Surface Methodology Approach to Defining Ultrafiltration of Steepwater from Corn Starch Industry

Authors: Zita I. Šereš, Ljubica P. Dokić, Dragana M. Šoronja Simović, Cecilia Hodur, Zsuzsanna Laszlo, Ivana Nikolić, Nikola Maravić

Abstract:

In this work, the concentration of steepwater from the corn starch industry is monitored using an ultrafiltration membrane. The aim was to examine the conditions of steepwater ultrafiltration using a 2.5 nm membrane. The parameters varied during ultrafiltration were the transmembrane pressure and flow rate, while the permeate flux and the dry matter content of the permeate and retentate were the dependent parameters monitored constantly during the process. The ultrafiltration experiments were conducted on samples of steepwater obtained from the starch wet milling plant "Jabuka", Pancevo. Ultrafiltration was carried out on a single-channel membrane 250 mm in length, with an inner diameter of 6.8 mm and an outer diameter of 10 mm. The membrane, obtained from GEA (Germany), is made of α-Al2O3 with a TiO2 layer. The experiments were carried out at flow rates ranging from 100 to 200 l/h and transmembrane pressures of 1-3 bar. During the steepwater ultrafiltration experiments, the change in permeate flux, the dry matter content of the permeate and retentate, and the absorbance changes of the permeate and retentate were monitored. The experimental results showed that the maximum flux reaches about 40 l/m2h. For the responses obtained from the experiments, a second-degree polynomial model was established to evaluate and quantify the influence of the variables. The quadratic equation fits the experimental values, with a coefficient of determination for flux of 0.96. The dry matter content of the retentate increased by about 6%, while the dry matter content of the permeate was reduced by about 35-40%. During steepwater ultrafiltration, the permeate retains about 40% less dry matter than the feed.
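The second-degree polynomial (response surface) fit mentioned above can be reproduced with standard tools; the sketch below fits flux as a quadratic function of transmembrane pressure and flow rate on invented design points, purely to illustrate the method (the data are not the study's measurements).

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Hypothetical design points: transmembrane pressure (bar) and flow rate (l/h),
# with illustrative flux responses (l/m2h) -- not the measured data.
X = np.array([[1, 100], [1, 200], [2, 150], [3, 100], [3, 200],
              [2, 100], [2, 200], [1, 150], [3, 150]])
flux = np.array([22.0, 28.5, 33.0, 30.0, 40.0, 29.0, 37.5, 25.0, 36.0])

# Second-degree polynomial model:
# flux ~ b0 + b1*p + b2*q + b3*p^2 + b4*q^2 + b5*p*q
quad = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(quad.fit_transform(X), flux)

print("R^2 =", round(model.score(quad.transform(X), flux), 3))  # coefficient of determination
print("predicted flux at 2.5 bar, 180 l/h:",
      model.predict(quad.transform([[2.5, 180]])).round(1))
```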

Keywords: Ultrafiltration, steepwater, starch industry, ceramic membrane.

205 Computational Assistance of the Research, Using Dynamic Vector Logistics of Processes for Critical Infrastructure Subjects Continuity

Authors: J. Urbánek Jiří, Krahulec Josef, Johanidesová Jitka, F. Urbánek Jiří

Abstract:

This paper deals with the use of the prevailing MS Office environment (SmartArt, etc.) for mathematical models built with the DYVELOP (Dynamic Vector Logistics of Processes) method. The method serves for the investigation and modelling of crisis situations within critical infrastructure organizations. The first part of the paper introduces the entities, operators, and actors of the DYVELOP method. It uses just three operators of Boolean algebra and four types of entities: the Environments, the Process Systems, the Cases, and the Controlling. The Process Systems (PrS) have five "brothers": Management PrS, Transformation PrS, Logistic PrS, Event PrS, and Operation PrS. The Cases have three "sisters": Process Cell Case, Use Case, and Activity Case. All of them need special Ctrl actors to control their functions, except the Environment (ENV), which can do without Ctrl. The model's maps are named Blazons, and they can express mathematically and graphically the relationships among entities, actors, and processes. In the second part of the paper, the rich Blazons of the DYVELOP method are used to discover and model cycling cases and their phases. The Blazons are best comprehended in a live PowerPoint presentation. The crisis management of an energy critical infrastructure organization must use these cycles to cope successfully with crisis situations. Cycling through these cases several times is a necessary condition for encompassing emergency events and mitigating the organization's damages. An uninterrupted and continuous cycling process is fruitful for crisis management and is a good indicator and controlling actor of organizational continuity and of the advanced possibilities of its sustainable development. Reliable rules are derived for the safe and reliable continuity of an energy critical infrastructure organization in a crisis situation.

Keywords: Blazons, computational assistance, DYVELOP method, critical infrastructure.

204 Modification of Electrical and Switching Characteristics of a Non Punch-Through Insulated Gate Bipolar Transistor by Gamma Irradiation

Authors: Hani Baek, Gwang Min Sun, Chansun Shin, Sung Ho Ahn

Abstract:

Fast neutron irradiation using nuclear reactors is an effective method to improve the switching loss and short-circuit durability of power semiconductors such as insulated gate bipolar transistors (IGBTs) and insulated gate transistors (IGTs). However, not only fast neutrons but also thermal neutrons, epithermal neutrons, and gamma rays exist in a nuclear reactor, and the electrical properties of an IGBT may be degraded by gamma irradiation. Gamma irradiation damage is known to be caused by the Total Ionizing Dose (TID) effect, the Single Event Effect (SEE), and displacement damage. In particular, the TID effect degrades electrical properties such as the leakage current and threshold voltage of a power semiconductor. This work confirms the effect of gamma irradiation on the electrical properties of a 600 V NPT-IGBT. Gamma irradiation forms lattice defects in the gate oxide and at the Si-SiO2 interface of the IGBT. These lattice defects were confirmed to act as trap centers that affect the threshold voltage, shifting it negatively as the TID increases. In addition to the change in carrier mobility, the conductivity modulation in the n-drift region decreases, adversely influencing the forward voltage drop. The turn-off delay time of the device before irradiation was 212 ns; those at 2.5, 10, 30, 70, and 100 kRad(Si) were 225, 258, 311, 328, and 350 ns, respectively. Gamma irradiation thus increased the turn-off delay time of the IGBT by approximately 65%, and the switching characteristics deteriorated.
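As a quick arithmetic check, the ~65% figure follows directly from the turn-off delay times quoted above; this short Python snippet simply recomputes the percentage increase at each dose.

```python
# Turn-off delay times quoted in the abstract: pre-irradiation and at each dose.
pre = 212                               # ns, before irradiation
doses = [2.5, 10, 30, 70, 100]          # kRad(Si)
delays = [225, 258, 311, 328, 350]      # ns

for d, t in zip(doses, delays):
    print(f"{d:>5} kRad(Si): {t} ns  (+{100 * (t - pre) / pre:.1f}%)")
# The 100 kRad(Si) point gives (350 - 212) / 212 = +65.1%, matching the ~65% figure.
```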

Keywords: NPT-IGBT, gamma irradiation, switching, turn-off delay time, recombination, trap center.

203 A Look at the History of Calligraphy in Decoration of Mosques in Iran: 630-1630 AD

Authors: Cengiz Tavşan, Niloufar Akbarzadeh

Abstract:

Architecture in Iran has a continuous history from at least 5000 BC to the present, and numerous pre-Islamic Iranian elements have contributed significantly to the formation of Islamic art. At first, decoration was limited to small objects and containers; it then progressed into the art of plaster and brickwork, and was later applied in architecture as well. The art of gypsum and brickwork, which was prevalent in the form of animal and plant motifs before Islam, was combined after the advent of Islam with the art of calligraphy in decorations. The splendor and beauty of Iranian architecture, especially during the Islamic era, are related to decoration and design. After the Arab invasion of Iran and the introduction of Islam, classical Iranian architecture changed significantly, and Arabic calligraphy appeared in the decoration of mosques in Iran. The principles of aesthetics in the art of calligraphy in Iran are based precisely on the principles of the beauty of ancient Iranian and Islamic art. After Islam, calligraphy became one of the most important sources of Islamic art and one of the important features of Islamic culture. At first, calligraphy had no cultural meaning and served only for decoration and beautification, carrying meaning only in the inscriptions; over time, however, it became meaningful. This article provides a summary of the history of calligraphy in mosques (from the arrival of Islam until the Safavid period), in which the role of calligraphy in decorative ideas cannot be ignored, and of the important role that decorative elements play in creating a public space in terms of social and aesthetic performance. This study was conducted using library and field studies. Its purpose is to show the characteristics of the architecture and decorative arts of Iran, especially in mosque architecture, where they reach their pinnacle. We will see that religious beliefs and artistic practices merge and strive toward a single concept.

Keywords: Islamic art, Islamic architecture, decorations in Iranian mosques, calligraphy.

202 Host Responses in Peri-Implant Tissue in Comparison to Periodontal Tissue

Authors: Raviporn Madarasmi, Anjalee Vacharaksa, Pravej Serichetaphongse

Abstract:

The host response in peri-implant tissue may differ from that in periodontal tissue in a healthy individual. The purpose of this study was to investigate the expression of inflammatory cytokines in peri-implant crevicular fluid (PICF) from single implants with different abutment types, in comparison to healthy periodontal tissue. Nineteen participants with healthy implants and teeth were recruited according to inclusion and exclusion criteria. PICF and gingival crevicular fluid (GCF) were collected using sterile paper points. The expression levels of the inflammatory cytokines IL-1α, IL-1β, TNF-α, IFN-γ, IL-6, and IL-8 were assessed using enzyme-linked immunosorbent assay (ELISA). A paired t-test was used to compare the expression levels of inflammatory cytokines around natural teeth and implants in the GCF and PICF of the same individual. An independent t-test was used to compare the expression levels of inflammatory cytokines in PICF from titanium and UCLA abutments. Expression of IL-6, TNF-α, and IFN-γ in PICF was not statistically different from GCF in either the titanium or the UCLA abutment group. However, the level of IL-1α in PICF from implants with UCLA abutments was significantly higher than in GCF (P=0.030), and the level of IL-1β in PICF from implants with titanium abutments was significantly higher than in GCF (P=0.032). When the abutment types were compared, IL-8 expression in PICF from implants with UCLA abutments was significantly higher than with titanium abutments (P=0.003).
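The statistical comparisons described here are standard paired and independent t-tests; a minimal SciPy sketch follows, with invented cytokine values standing in for the measured ELISA data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical IL-1beta levels (pg/ml) from the same 19 individuals:
# PICF around the implant and GCF around a natural tooth (paired design).
picf = rng.normal(14.0, 3.0, size=19)
gcf = picf - rng.normal(2.0, 2.5, size=19)   # correlated, slightly lower

# Paired t-test: PICF vs GCF within the same individual.
t_paired, p_paired = stats.ttest_rel(picf, gcf)

# Independent t-test: e.g. IL-8 in PICF, titanium vs UCLA abutment groups.
titanium = rng.normal(10.0, 2.0, size=10)
ucla = rng.normal(13.0, 2.0, size=9)
t_ind, p_ind = stats.ttest_ind(titanium, ucla)

print(f"paired: t={t_paired:.2f}, p={p_paired:.3f}")
print(f"independent: t={t_ind:.2f}, p={p_ind:.3f}")
```

The paired test is appropriate within individuals because PICF and GCF samples from the same mouth are correlated; the independent test applies across the two abutment groups.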

Keywords: Abutment, dental implant, gingival crevicular fluid and peri-implant crevicular fluid.

201 A Remote Sensing Approach for Vulnerability and Environmental Change in Apodi Valley Region, Northeast Brazil

Authors: Mukesh Singh Boori, Venerando Eustáquio Amaro

Abstract:

The objective of this study was to improve our understanding of vulnerability and environmental change in the Apodi Valley region: its causes, intensity, distribution, and human-environment effects on the ecosystem. This paper identifies, assesses, and classifies vulnerability and environmental change in the Apodi Valley region using a combined approach of landscape pattern and ecosystem sensitivity. Models were developed using the following five thematic layers: geology, geomorphology, soil, vegetation, and land use/cover, by means of a Geographical Information System (GIS) based on hydro-geophysical parameters. In spite of data problems and shortcomings, using ESRI's ArcGIS 9.3, the vulnerability score, which classifies, weights, and combines 15 separate land cover classes into a single indicator, provides a reliable measure of differences (6 classes) among regions and communities that are exposed to similar ranges of hazards. Indeed, the ongoing and active development of vulnerability concepts and methods has already produced some tools to help overcome common issues, such as acting in a context of high uncertainty, taking into account the dynamics and spatial scale of a social-ecological system, or gathering viewpoints from different sciences to combine human- and impact-based approaches. Based on this assessment, this paper proposes concrete perspectives and possibilities to benefit from existing commonalities in the construction and application of assessment tools.
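A GIS weighted-overlay vulnerability score of this kind can be sketched in a few lines of NumPy; the layer weights and toy rasters below are illustrative assumptions, not the model actually built in ArcGIS 9.3.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (100, 100)   # toy raster grid

# Hypothetical sensitivity scores (1 = low ... 5 = high) rasterized from the
# five thematic layers named in the abstract.
layers = {name: rng.integers(1, 6, size=shape)
          for name in ["geology", "geomorphology", "soil", "vegetation", "land_use"]}

# Hypothetical layer weights (sum to 1); the paper derives its own weighting.
weights = {"geology": 0.15, "geomorphology": 0.20, "soil": 0.20,
           "vegetation": 0.20, "land_use": 0.25}

# Weighted overlay: per-cell vulnerability score, then binned into 6 classes.
score = sum(w * layers[k] for k, w in weights.items())
edges = np.linspace(score.min(), score.max(), 7)[1:-1]   # 5 interior class breaks
classes = np.digitize(score, bins=edges) + 1              # classes 1..6
print("class counts:", np.bincount(classes.ravel())[1:])
```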

Keywords: Vulnerability, land use/cover, ecosystem, remote sensing, GIS.

200 International Comparative Study of International Financial Reporting Standards Adoption and Earnings Quality: Effects of Differences in Accounting Standards, Industry Category, and Country Characteristics

Authors: Ichiro Mukai

Abstract:

The purpose of this study is to investigate whether firms applying International Financial Reporting Standards (IFRS) provide higher-quality and more comparable earnings information, useful for the decision making of information users, than firms applying local Generally Accepted Accounting Principles (GAAP). Focus is placed on the earnings quality of listed firms in several developed countries: Australia, Canada, France, Germany, Japan, the United Kingdom (UK), and the United States (US). Except for Japan and the US, the adoption of IFRS is mandatory for listed firms in these countries. In Japan, the application of IFRS is allowed for specific listed firms. In the US, foreign firms listed on the US securities market are permitted to apply IFRS, but listed domestic firms are prohibited from doing so. In this paper, the differences in earnings quality are compared between firms applying local GAAP and those applying IFRS in each country and industry category, and the reasons for the differences in earnings quality are analyzed using various factors. The results show that, although the earnings quality of firms applying IFRS is higher than that of firms applying local GAAP, this varies with country and industry category. Thus, even if a single set of global accounting standards were used for all listed firms worldwide, it would be difficult to establish comparability of financial information among global firms. These findings imply that the various circumstances surrounding firms, industries, and countries influence business operations and affect the differences in earnings quality.

Keywords: Accruals, earnings quality, IFRS, information comparability.

199 Isolation and Probiotic Characterization of Arsenic-Resistant Lactic Acid Bacteria for Uptaking Arsenic

Authors: Jatindra N. Bhakta, Kouhei Ohnishi, Yukihiro Munekage, Kozo Iwasaki

Abstract:

The growing health hazard of arsenic (As) contamination in the environment is the impetus for the present investigation. The application of lactic acid bacteria (LAB) for the removal of toxic and heavy metals from water has been reported previously. This study was performed in order to isolate and characterize As-resistant LAB from mud and sludge samples for use as efficient As-uptaking probiotics. Isolation of As-resistant LAB colonies was performed by the spread plate technique using bromocresol purple-impregnated MRS (BP-MRS) agar media supplemented with As at 50 μg/ml. The isolated LAB were subjected to a probiotic characterization process comprising acid and bile tolerance, lactic acid production, antibacterial activity, and antibiotic tolerance assays. After As-resistance and As-removal characterization, the LAB were identified using 16S rDNA sequencing. A total of 103 isolates were identified as As-resistant strains of LAB. Six strains (As99-1, As100-2, As101-3, As102-4, As105-7, and As112-9) survived the sequential probiotic characterizations. The resistance pattern showed pronounced hollow zones at As concentrations >2000 μg/ml for the As99-1, As100-2, and As101-3 strains, and at ~1000 μg/ml for the remaining three strains. Among the six strains, the As uptake efficiency of As102-4 (0.006 μg/h/mg wet weight of cells) was 17-209% higher than that of the remaining LAB. The 16S rDNA sequencing data of As99-1, As100-2, and As101-3 and of As102-4, As105-7, and As112-9 showed 97 to 99% homology (340 bp) to Pediococcus dextrinicus and Pediococcus acidilactici, respectively. Although no correlation was found between metal resistance and removal efficiency among the LAB examined, the identified LAB with elevated As removal would probably be potential As-uptaking probiotic agents. Since the present experiment concerned only As removal from pure water, As removal and its mechanism under the natural conditions of the intestinal milieu should be assessed in future studies.

Keywords: Lactic acid bacteria, As-resistant, characterization, Pediococcus sp., As removal probiotic.

198 Life Cycle Assessment Comparison between Methanol and Ethanol Feedstock for the Biodiesel from Soybean Oil

Authors: Pawit Tangviroon, Apichit Svang-Ariyaskul

Abstract:

As the limited availability of petroleum-based fuel has become a major concern, biodiesel is one of the most attractive alternative fuels because it is renewable and has advantages over conventional petroleum-based diesel. At present, biodiesel is generally produced by transesterification of vegetable oils with a low-molecular-weight alcohol, mainly methanol, using chemical catalysts. Methanol is a petrochemical product, which makes biodiesel produced from methanol not a purely renewable energy source. Ethanol, on the other hand, is produced by fermentation processes and appears to be a potential feedstock that makes biodiesel a purely renewable alternative fuel. This research is based on two biodiesel production processes, reacting soybean oil with methanol and with ethanol. Life cycle assessment was carried out in order to evaluate the environmental impacts and to identify the preferable process alternative. Nine mid-point impact categories are investigated. The results indicate better performance on abiotic depletion potential (ADP) and acidification potential (AP) for biodiesel production from methanol compared with biodiesel production from ethanol, due to lower energy consumption during the production process. Except for ADP and AP, using methanol as feedstock does not show any advantage over biodiesel from ethanol. The single-score method is also included in this study in order to identify the better option between the two production processes. Global normalization and weighting factors based on ecotaxes are used, and the results show that producing biodiesel from ethanol has a lower environmental load than biodiesel from methanol.
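To illustrate how a single score is formed from normalized, weighted mid-point categories, here is a minimal sketch; the impact values, normalization references, and ecotax-based weights are invented placeholders, not the study's inventory results.

```python
# Illustrative single-score aggregation for the two alternatives.
impacts = {                      # per functional unit of biodiesel (hypothetical)
    "methanol": {"ADP": 0.012, "AP": 0.008, "GWP": 1.9},
    "ethanol":  {"ADP": 0.015, "AP": 0.010, "GWP": 1.4},
}
normalisation = {"ADP": 0.020, "AP": 0.015, "GWP": 2.5}   # global reference values
weights = {"ADP": 0.3, "AP": 0.3, "GWP": 0.4}             # ecotax-derived weights

for route, cats in impacts.items():
    # normalize each category against the reference, weight it, and sum
    score = sum(weights[c] * cats[c] / normalisation[c] for c in cats)
    print(f"{route}: single score = {score:.3f}")   # lower = less environmental load
```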

Keywords: Biodiesel, Ethanol, Life Cycle Assessment, Methanol, Soybean Oil.

197 Modeling and Simulation of Overcurrent and Earth Fault Relay with Inverse Definite Minimum Time

Authors: Win Win Tun, Han Su Yin, Ohn Zin Lin

Abstract:

Transmission networks are an important part of an electric power system. Transmission lines not only have a high power transmission capacity but are also prone to faults of large magnitude. Different types of faults occur on transmission lines, such as single line-to-ground (L-G) faults, double line-to-ground (L-L-G) faults, line-to-line (L-L) faults, and three-phase (L-L-L) faults. These faults need to be cleared quickly in order to reduce damage to the system, as they have a high impact on the electrical power equipment connected to the transmission line. The most common fault on a transmission line is the L-G fault. Therefore, protection relays are needed to protect the transmission line. The overcurrent and earth fault relay is an important relay used to protect transmission lines, distribution feeders, transformers, bus couplers, etc., and it can serve as either main or backup protection. The modeling of protection relays is important for indicating the effects of network parameters and configurations on the operation of relays. Therefore, the modeling of an overcurrent and earth fault relay is described in this paper. Overcurrent and earth fault relays with a standard inverse definite minimum time characteristic are modeled and simulated using MATLAB/Simulink. The developed model was tested with L-G, L-L-G, L-L, and L-L-L faults at various fault locations and with a fault resistance of 0.001 Ω. The simulation results obtained with MATLAB show the feasibility of analyzing transmission line protection with an overcurrent and earth fault relay.
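For reference, the standard inverse IDMT characteristic is commonly written (per IEC 60255) as t = TMS × 0.14 / (M^0.02 − 1), where M is the ratio of fault current to pickup current and TMS is the time multiplier setting. A minimal Python sketch, with illustrative TMS and pickup values:

```python
def idmt_standard_inverse(i_fault, i_pickup, tms=0.1):
    """Operating time (s) of a standard inverse IDMT relay per IEC 60255:
    t = TMS * 0.14 / (M**0.02 - 1), where M is the plug setting multiplier."""
    m = i_fault / i_pickup
    if m <= 1.0:
        return float("inf")   # below pickup: the relay never operates
    return tms * 0.14 / (m ** 0.02 - 1.0)

# The higher the fault current, the faster the trip:
for m in (2, 5, 10, 20):      # fault current as a multiple of pickup
    print(f"M = {m:>2}: t = {idmt_standard_inverse(m, 1.0, tms=0.1):.3f} s")
```

This inverse characteristic is what allows downstream relays to trip before upstream ones for the same fault, giving time-graded coordination.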

Keywords: Transmission line, overcurrent and earth fault relay, standard inverse definite minimum time, various faults, MATLAB Software.

196 Combination of Tensile Strength and Elongation of Reverse Rolled TaNbHfZrTi Refractory High Entropy Alloy

Authors: M. Veerasham

Abstract:

Refractory high entropy alloys are potential materials for high-temperature applications because of their ability to retain high strength up to 1600°C. However, their practical application has been limited by poor elongation at room temperature. Decreasing the average valence electron concentration (VEC) is therefore an effective design strategy to improve the intrinsic ductility of refractory high entropy alloys. In this work, the high-entropy alloy TaNbHfZrTi was processed at room temperature by stepwise reverse rolling up to a 90% reduction in thickness. The reverse-rolled 90% samples were subsequently annealed at 800°C and 1000°C for 1 h to understand phase stability, microstructure, texture, and mechanical properties. The reverse-rolled 90% condition contains a single body-centered cubic (BCC) phase; upon annealing at 800°C, a secondary BCC-2 phase formed. Partially and completely recrystallized microstructures developed after annealing at 800°C and 1000°C, respectively. The reverse-rolled condition and the 1000°C-annealed condition exhibit extraordinary room-temperature tensile properties, combining high ultimate tensile strength (UTS) with no loss of ductility and thereby overcoming the usual strength-ductility trade-off. The reverse-rolled 90% condition and the sample annealed at 1000°C for 1 h show UTS values of 1430 MPa and 1556 MPa with appreciable elongations of 21% and 20%, respectively. A hierarchical microstructure developed in the sample annealed at 1000°C, which led to the simultaneous increase in tensile strength and elongation.

Keywords: refractory high entropy alloys, reverse rolling, recrystallization, microstructure, tensile properties

195 Genetic Algorithm Based Approach for Actuator Saturation Effect on Nonlinear Controllers

Authors: M. Mohebbi, K. Shakeri

Abstract:

In real applications of active control systems that mitigate the response of structures subjected to severe external excitations, such as earthquakes and wind-induced vibrations, the actuators saturate because their capacity is limited. Hence, in designing controllers for linear and nonlinear structures under severe earthquakes, actuator saturation should be considered as a constraint. In this paper, the optimal design of active controllers for nonlinear structures considering actuator saturation has been studied. To this end, a method has been proposed based on an optimization problem that minimizes the maximum displacement of the structure as the objective, with the limited capacity of the actuator as a constraint. To evaluate the effectiveness of the proposed method, a single degree of freedom (SDF) structure with bilinear hysteretic behavior has been simulated under white noise ground accelerations of different amplitudes. An active tendon control mechanism, comprising pre-stressed tendons and an actuator, and an instantaneous optimal control algorithm based on the extended nonlinear Newmark method have been used as the active control mechanism and algorithm. To enhance the efficiency of the controllers, the weights corresponding to displacement, velocity, acceleration, and control force in the performance index have been found using a Distributed Genetic Algorithm (DGA). According to the results, the proposed method is effective in accounting for actuator saturation when designing optimal controllers for nonlinear frames. It is also shown that the actuator capacity and the average value of the required control force are two important factors in designing nonlinear controllers that account for actuator saturation.
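The weight search can be illustrated with a plain genetic algorithm; the sketch below is a simple (non-distributed) GA with a stand-in fitness function, since the paper's real fitness is the simulated peak displacement of the controlled structure.

```python
import numpy as np

rng = np.random.default_rng(0)

def peak_displacement(weights):
    """Placeholder fitness: in the paper this would be the maximum displacement
    from simulating the controlled bilinear SDF structure with the
    instantaneous optimal control algorithm. Here, a toy quadratic bowl."""
    target = np.array([0.4, 0.1, 0.1, 0.4])     # invented optimum
    return float(np.sum((weights - target) ** 2))

pop = rng.random((40, 4))                       # 40 candidate weight vectors
for generation in range(100):
    fitness = np.array([peak_displacement(w) for w in pop])
    parents = pop[np.argsort(fitness)[:20]]     # truncation selection (keep best half)
    # uniform crossover between randomly paired parents
    a = parents[rng.integers(20, size=40)]
    b = parents[rng.integers(20, size=40)]
    mask = rng.random((40, 4)) < 0.5
    pop = np.where(mask, a, b)
    pop += rng.normal(0, 0.02, size=pop.shape)  # Gaussian mutation
    pop = np.clip(pop, 0, 1)

best = pop[np.argmin([peak_displacement(w) for w in pop])]
print("best weights:", best.round(3))
```

A distributed GA, as used in the paper, would additionally partition the population into subpopulations that evolve independently and occasionally exchange individuals.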

Keywords: Active control, Actuator Saturation, Nonlinear, Optimization.

194 Process Optimization and Automation of Information Technology Services in a Heterogenic Digital Environment

Authors: Tasneem Halawani, Yamen Khateeb

Abstract:

With customers' ever-increasing expectations of fast service provisioning for all their business needs, information technology (IT) organizations, as business partners, have to cope with this demanding environment and deliver their services in the most effective and efficient way. The purpose of this paper is to identify optimization and automation opportunities for the most requested IT services in a heterogenic digital environment with a widely spread customer base. In collaboration with systems, processes, and subject matter experts (SMEs), the processes in scope were approached by analyzing four years of related historical data, identifying and surveying stakeholders, modeling the as-is processes, and studying systems integration/automation capabilities. This effort resulted in identifying several pain areas, including standardization, unnecessary customer and IT involvement, manual steps, systems integration, and performance measurement. These pain areas were addressed by standardizing the top five requested IT services, eliminating or automating 43 steps, and utilizing a single platform for end-to-end process execution. In conclusion, the optimization of IT service request processes in a heterogenic digital environment with a widely spread customer base is challenging, yet achievable without compromising service quality or customers' added value. Further studies can focus on measuring the value of the eliminated/automated process steps to quantify the enhancement impact. Moreover, a similar approach can be utilized to optimize other IT service requests, with a focus on business criticality.

Keywords: Automation, customer value, heterogenic, integration, IT services, optimization, processes.

193 Estimation of Relative Permeabilities and Capillary Pressures in Shale Using Simulation Method

Authors: F. C. Amadi, G. C. Enyi, G. Nasr

Abstract:

Relative permeabilities are practical factors used to correct the single-phase Darcy's law for application to multiphase flow. For effective characterization of large-scale multiphase flow in hydrocarbon recovery, relative permeabilities and capillary pressures are used. These parameters are acquired via special core flooding experiments, and the special core analysis (SCAL) module of reservoir simulation is applied by engineers for their evaluation. However, core flooding experiments on shale core samples are expensive and time-consuming before the various flow assumptions, for instance Darcy's law, are satisfied. This makes it imperative to apply core flooding simulations, in which analyses of the relative permeabilities and capillary pressures of multiphase flow can be carried out efficiently, effectively, and at a relatively fast pace. This paper presents a Sendra software simulation of core flooding to obtain relative permeabilities and capillary pressures using different correlations. The approach used in this study had three steps. First, the basic petrophysical parameters of the Marcellus shale sample, such as porosity, were determined using laboratory techniques. Second, core flooding was simulated for a particular injection scenario using different correlations. Third, the best-fit correlations for the estimation of relative permeability and capillary pressure were obtained. This approach saves cost and time and is very reliable in the computation of relative permeabilities and capillary pressures for steady- or unsteady-state, drainage or imbibition processes in the oil and gas industry, compared with other methods.
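As an example of the kind of correlation such simulators fit to flooding data, the Brooks-Corey family is a common choice; the sketch below uses illustrative endpoint and exponent values, which are not the study's fitted parameters and not necessarily the correlations offered in Sendra.

```python
import numpy as np

def brooks_corey_relperm(sw, swc=0.2, sor=0.25, krw_max=0.3, kro_max=0.8,
                         nw=2.0, no=2.0):
    """Brooks-Corey-type relative permeability curves for a water/oil system.
    swc = connate water saturation, sor = residual oil saturation."""
    swn = np.clip((sw - swc) / (1.0 - swc - sor), 0.0, 1.0)  # normalized saturation
    krw = krw_max * swn ** nw            # wetting phase (water)
    kro = kro_max * (1.0 - swn) ** no    # non-wetting phase (oil)
    return krw, kro

sw = np.linspace(0.2, 0.75, 12)
krw, kro = brooks_corey_relperm(sw)
for s, w, o in zip(sw, krw, kro):
    print(f"Sw={s:.2f}  krw={w:.3f}  kro={o:.3f}")
```

Fitting amounts to adjusting the endpoints and exponents until the simulated pressure drop and production history match the core flood; the best-fit curve set then corrects Darcy's law for each phase.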

Keywords: Special core analysis (SCAL), relative permeability, capillary pressures, drainage, imbibition.

192 A Survey of 2nd Year Students’ Frequent English Writing Errors and the Effects of Participatory Error Correction Process

Authors: Chaiwat Tantarangsee

Abstract:

The purposes of this study were 1) to study the effects of a participatory error correction process and 2) to find out the students' satisfaction with such an error correction process. This study is quasi-experimental research with a single group, in which data were collected five times, preceding and following four experimental applications of the participatory error correction process, which included providing coded indirect corrective feedback on the students' texts together with error treatment activities. The sample comprised 52 second-year English major students of the Faculty of Humanities and Social Sciences, Suan Sunandha Rajabhat University. The tool for the experimental study was the lesson plan of the course Reading and Writing English for Academic Purposes II, and the tools for data collection were five writing tests of short texts and a questionnaire. Based on formative evaluation of the students' writing ability prior to and after each of the four experiments, the research findings disclose higher student scores, with statistical difference at the 0.00 level. Moreover, in terms of the effect size of the process, the means of the students' scores prior to and after the four experiments gave d values of 0.6801, 0.5093, 0.5071, and 0.5296, respectively. It can be concluded that the participatory error correction process enables all of the students to learn equally well and improves their ability to write short texts. Finally, the students' overall satisfaction with the participatory error correction process was at a high level (mean = 4.39, S.D. = 0.76).

Keywords: Coded indirect corrective feedback, participatory error correction process, error treatment.

191 Performance Analysis of HSDPA Systems Using Low-Density Parity-Check (LDPC) Coding as Compared to Turbo Coding

Authors: K. Anitha Sheela, J. Tarun Kumar

Abstract:

HSDPA is a new feature introduced in the Release 5 specifications of the 3GPP WCDMA/UTRA standard to realize higher data rates together with lower round-trip times. Moreover, the HSDPA concept offers an outstanding improvement in packet throughput and also significantly reduces the packet call transfer delay compared to the Release 99 DSCH. So far, the HSDPA system has used turbo coding, a coding technique that approaches the Shannon limit. However, the main drawbacks of turbo coding are high decoding complexity and high latency, which make it unsuitable for some applications like satellite communications, since the transmission distance itself introduces latency due to the limited speed of light. Hence, in this paper it is proposed to use LDPC coding in place of turbo coding for the HSDPA system, which decreases the latency and decoding complexity, although LDPC coding increases the encoding complexity. Though the complexity of the transmitter increases at the NodeB, the end user benefits in terms of receiver complexity and bit error rate (BER). The LDPC encoder is implemented using a sparse parity check matrix H to generate codewords, and the belief propagation algorithm is used for LDPC decoding. Simulation results show that with LDPC coding the BER drops sharply as the number of iterations increases, for a small increase in Eb/No, which is not the case with turbo coding. The same BER was also achieved using fewer iterations, so the latency and receiver complexity decreased with LDPC coding. HSDPA increases the downlink data rate within a cell to a theoretical maximum of 14 Mbps, with 2 Mbps on the uplink. The changes that HSDPA enables include better quality and more reliable and more robust data services. In other words, while realistic data rates are only a few Mbps, the actual quality and number of users achieved will improve significantly.
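The paper decodes with belief propagation on a sparse H; as a simpler illustration of the same iterative parity-check principle, here is a hard-decision bit-flipping decoder on a toy parity-check matrix (real HSDPA-scale LDPC codes use far larger, sparser matrices, and soft-decision BP outperforms bit flipping).

```python
import numpy as np

# Toy parity-check matrix, assumed for illustration only.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(r, H, max_iter=20):
    """Hard-decision bit-flipping: repeatedly flip the bit involved in the
    most unsatisfied parity checks until the syndrome is all-zero."""
    c = r.copy()
    for _ in range(max_iter):
        syndrome = H @ c % 2
        if not syndrome.any():
            return c, True              # valid codeword found
        # count, per bit, how many failed checks it participates in
        fails = H[syndrome == 1].sum(axis=0)
        c[np.argmax(fails)] ^= 1        # flip the most suspicious bit
    return c, False

codeword = np.zeros(7, dtype=int)       # the all-zero word is always a codeword
received = codeword.copy(); received[2] ^= 1   # inject a single bit error
decoded, ok = bit_flip_decode(received, H)
print(decoded, ok)                      # recovers the all-zero codeword
```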

Keywords: AMC, HSDPA, LDPC, WCDMA, 3GPP.

190 Antimicrobial, Antioxidant and Cytotoxic Activities of Cleoma viscosa Linn. Crude Extracts

Authors: Suttijit Sriwatcharakul

Abstract:

Ethanolic crude extracts from the leaf, stem, pod, and root of the wild spider flower, Cleoma viscosa Linn., were analyzed for growth inhibition of six bacterial species: Salmonella typhimurium TISTR 5562, Pseudomonas aeruginosa ATCC 27853, Staphylococcus aureus TISTR 1466, Streptococcus epidermidis ATCC 1228, Escherichia coli DMST 4212, and Bacillus subtilis ATCC 6633, at an initial crude extract concentration of 50 mg/ml. The agar well diffusion results showed that the extracts inhibited only the gram-positive species: S. aureus, S. epidermidis, and B. subtilis. The minimum inhibitory concentration study with the gram-positive strains revealed that the leaf crude extract inhibited the growth of S. aureus, S. epidermidis, and B. subtilis at the lowest concentrations compared with the other plant parts: 0.78, 0.39, and below 0.39 mg/ml, respectively. Determination of the total phenolic compounds in the crude extracts showed the highest phenolic content, 10.41 mg GAE/g dry weight, in the leaf crude extract. The free radical scavenging efficacy analyzed by the DPPH assay gave IC50 values for the leaf, stem, pod, and root crude extracts of 8.32, 12.26, 21.62, and 35.99 mg/ml, respectively. The cytotoxicity of the crude extracts against a human breast adenocarcinoma cell line, studied by MTT assay, showed that the pod extract was the most cytotoxic, with a CC50 of 32.41 µg/ml. Both the antioxidant activity and the cytotoxicity of the crude extracts increased with extract concentration. According to these bioactivity results, the leaf crude extract of Cleoma viscosa Linn. is the most interesting plant part for further work on the beneficial uses of this weed.

Keywords: Antimicrobial, antioxidant activity, Cleoma viscosa Linn., cytotoxicity test, total phenolic compound.

189 Life Estimation of Induction Motor Insulation under Non-Sinusoidal Voltage and Current Waveforms Using Fuzzy Logic

Authors: Triloksingh G. Arora, Mohan V. Aware, Dhananjay R. Tutakne

Abstract:

Thyristor-based, firing-angle-controlled voltage regulators are extensively used for the speed control of single-phase induction motors. This leads to power savings, but the applied voltage and current waveforms become non-sinusoidal. These non-sinusoidal waveforms increase voltage and thermal stresses, which result in accelerated insulation aging, thus reducing motor life. Life models that allow predicting the capability of insulation under such multi-stress situations tend to be very complex and somewhat impractical. This paper presents a fuzzy logic application to investigate the synergistic effect of voltage and thermal stresses on the intrinsic aging of induction motor insulation. A fuzzy expert system is developed to estimate the life of induction motor insulation under multiple stresses. Three insulation degradation parameters, viz. the peak modification factor, the wave shape modification factor, and the thermal loss, are experimentally obtained for different firing angles. The fuzzy expert system consists of fuzzification of the insulation degradation parameters, algorithms based on the inverse power law to estimate the life, and a defuzzification process to output the life. An electro-thermal life model is developed from the results of the fuzzy expert system. This fuzzy-logic-based electro-thermal life model can be used for the life estimation of induction motors operated with non-sinusoidal voltage and current waveforms.
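A generic electro-thermal life model combining the inverse power law (voltage stress) with an Arrhenius term (thermal stress) can be sketched as follows; the functional form is a common one in insulation aging studies, and the constants are illustrative, not the values identified by the fuzzy expert system.

```python
import math

def electrothermal_life(v, t_celsius, l0=20.0, v0=230.0, n=9.0, b=4500.0, t0=80.0):
    """Combined inverse-power-law / Arrhenius life model (illustrative constants):
        L(V, T) = L0 * (V / V0)**(-n) * exp(B * (1/T - 1/T0))
    with temperatures in kelvin. Higher voltage stress or higher temperature
    both shorten the predicted life relative to the rated point (V0, T0)."""
    t_k, t0_k = t_celsius + 273.15, t0 + 273.15
    return l0 * (v / v0) ** (-n) * math.exp(b * (1.0 / t_k - 1.0 / t0_k))

print(f"rated stress : {electrothermal_life(230, 80):.1f} years")
print(f"+10% voltage : {electrothermal_life(253, 80):.1f} years")
print(f"+10 C hotter : {electrothermal_life(230, 90):.1f} years")
```

In the paper, the degradation parameters extracted from the non-sinusoidal waveforms feed the fuzzy rules, which effectively tune the stress terms of a model of this kind.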

Keywords: Aging, Dielectric losses, Insulation and Life Estimation.

188 Effect of Acids with Different Chain Lengths Modified by Methane Sulfonic Acid and Temperature on the Properties of Thermoplastic Starch/Glycerin Blends

Authors: Chi-Yuan Huang, Mei-Chuan Kuo, Ching-Yi Hsiao

Abstract:

In this study, acids with various chain lengths (C6, C8, C10, and C12), modified by methane sulfonic acid (MSA) and temperature, were used to modify tapioca starch (TPS); glycerol (GA) was then added to the modified starch to prepare new blends. The mechanical, thermal, and physical properties of the blends were studied. The investigation was divided into two parts. First, biodegradable materials, starch and glycerol, were used with hexanedioic acid (HA), suberic acid (SBA), sebacic acid (SA), or decanedicarboxylic acid (DA), processed at different temperatures (90, 110, and 130 °C). The solution was then added to the modified starch to prepare the blends using a single-screw extruder. The FT-IR patterns showed the characteristic ester C=O peak at 1730 cm-1, proving that the acids of different chain lengths (C6, C8, C10, and C12) reacted with glycerol by esterification; these esters plasticize the blends during extrusion. In addition, the blends showed improved hydrolysis resistance and thermal stability, and the water contact angle increased from 43.0° to 64.0°. Second, the HA (110 °C), SBA (110 °C), SA (110 °C), and DA (130 °C) blends were used in the study because they possessed good mechanical properties, water resistance, and thermal stability. Various amounts of MSA (0, 0.005, 0.010, and 0.020 g) were also used to modify the mechanical properties of the blends. When MSA was added to the blends, the FT-IR patterns again indicated the ester C=O at 1730 cm-1; for this reason, hydrophobic blends were produced, and the water contact angle of the MSA blends increased from 55.0° to 71.0°. Although the elongation at break of the MSA blends decreased from the original 220% to 128%, the tensile stress increased from 2.5 MPa to 5.1 MPa. Therefore, the optimal blend composition was the DA blend (130 °C) with the addition of 0.005 g of MSA.

Keywords: Chain length acids, methane sulfonic acid, tapioca starch, tensile stress.

187 Time Series Forecasting Using Various Deep Learning Models

Authors: Jimeng Shi, Mahek Jain, Giri Narasimhan

Abstract:

Time Series Forecasting (TSF) is used to predict target variables at a future time point based on learning from previous time points. To keep the problem tractable, learning methods use data from a fixed-length window in the past as an explicit input. In this paper, we study how the performance of predictive models changes as a function of different look-back window sizes and different amounts of time to predict into the future. We also consider the performance of the recent attention-based transformer models, which have had good success in the image processing and natural language processing domains. In all, we compare four different deep learning methods (Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), and Transformer) along with a baseline method. The dataset we used is the hourly Beijing Air Quality Dataset from the University of California, Irvine (UCI) website, which includes a multivariate time series of many factors measured on an hourly basis for a period of 5 years (2010-2014). For each model, we also report on the relationship between performance and the look-back window size and the number of predicted time points into the future. Our experiments suggest that Transformer models have the best performance, with the lowest Mean Absolute Errors (MAE = 14.599, 23.273) and Root Mean Square Errors (RMSE = 23.573, 38.131) for most of our single-step and multi-step predictions. The best look-back window size for predicting 1 hour into the future appears to be one day, while 2 or 4 days perform best for predicting 3 hours into the future.
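The fixed-length look-back window construction described above is easy to make concrete; the sketch below slices an hourly multivariate series into (input window, target) pairs, with a random array standing in for the Beijing dataset and the target column chosen arbitrarily.

```python
import numpy as np

def make_windows(series, look_back, horizon):
    """Slice a (time, features) array into supervised pairs: each sample uses
    `look_back` past steps as input and the value `horizon` steps ahead as target."""
    X, y = [], []
    for t in range(look_back, len(series) - horizon + 1):
        X.append(series[t - look_back:t])       # explicit fixed-length history
        y.append(series[t + horizon - 1, 0])    # e.g. the pollutant value to predict
    return np.array(X), np.array(y)

hourly = np.random.default_rng(0).normal(size=(24 * 365, 7))  # stand-in: 1 year, 7 factors
X, y = make_windows(hourly, look_back=24, horizon=1)   # one-day window, predict 1 h ahead
print(X.shape, y.shape)   # (8736, 24, 7) (8736,)
```

The same pairs feed all five models, so the comparison across look-back sizes and horizons only changes `look_back` and `horizon`, never the model interface.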

Keywords: Air quality prediction, deep learning algorithms, time series forecasting, look-back window.

186 Organoclay of Cetyl Trimethyl Ammonium-Montmorillonite: Preparation and Study in Adsorption of Benzene-Toluene-2-Chlorophenol

Authors: Is Fatimah, Winda Novita, Yopi Andika, Imam Sahroni, Basitoh Djaelani, Yuyun Yunani N.

Abstract:

Contamination of water by aromatic compounds can cause severe, long-lasting effects not only on biotic organisms but also on human health. Several alternative technologies for the remediation of polluted water have been attempted. One of these is the adsorption of aromatic compounds using organically modified clay minerals. The porous structure of clay gives it a potential for molecular adsorptivity, which can be increased by immobilizing hydrophobic moieties that attract organic compounds. In this work, natural montmorillonite was modified with cetyltrimethylammonium (CTMA+) and evaluated for use as an adsorbent of the aromatic compounds benzene, toluene, and 2-chlorophenol in single- and multi-component solutions in an ethanol:water solvent. The preparation of CTMA-montmorillonite was conducted by a simple ion exchange procedure, and characterization was carried out using X-ray diffraction (XRD), Fourier-transform infrared (FTIR) spectroscopy, and gas sorption analysis. The influence of the structural modification of montmorillonite on its adsorption capacity and adsorption affinity for organic compounds was studied. It was shown that the adsorptivity of montmorillonite was increased by the modification, associated with the arrangement of CTMA+ in the structure, even though the specific surface area of the modified montmorillonite was lower than that of the raw montmorillonite. The adsorption rates indicated that the material's affinity follows the order benzene > toluene > 2-chlorophenol. The adsorption isotherms of benzene and toluene showed first-order adsorption kinetics, indicating a partitioning of the compounds between the aqueous phase and the organophilic CTMA-montmorillonite.

Keywords: Adsorption, Desorption, Montmorillonite, Organoclay, Surfactant.

185 Performance Analysis of Chrominance Red and Chrominance Blue in JPEG

Authors: Mamta Garg

Abstract:

While compressing text files is useful, compressing still image files is almost a necessity. A typical image takes up much more storage than a typical text message, and without compression images would be extremely clumsy to store and distribute. The amount of information required to store pictures on modern computers is quite large in relation to the amount of bandwidth commonly available to transmit them over the Internet. Image compression addresses the problem of reducing the amount of data required to represent a digital image. The performance of any image compression method can be evaluated by measuring the root mean square error and the peak signal-to-noise ratio. The method of image compression analyzed in this paper is based on the lossy JPEG image compression technique, the most popular compression technique for color images. JPEG compression is able to greatly reduce file size with minimal image degradation by throwing away the least "important" information. In JPEG, both chroma components are normally downsampled simultaneously, but in this paper we compare the results when compression is done by downsampling a single chroma component. We demonstrate that a higher compression ratio is achieved when chrominance blue is downsampled than when chrominance red is downsampled in JPEG compression, but that the peak signal-to-noise ratio is higher when chrominance red is downsampled than when chrominance blue is downsampled. In particular, we use hats.jpg as a demonstration of JPEG compression using a low-pass filter and show that the image is compressed with barely any visual difference with either method.
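The chroma-downsampling comparison can be sketched end to end with the standard JFIF RGB/YCbCr conversion; the random image below merely stands in for hats.jpg, and the quantization and entropy-coding steps of full JPEG are omitted so only the downsampling effect is measured.

```python
import numpy as np

def rgb_to_ycbcr(img):
    # Standard JFIF (full-range) RGB -> YCbCr conversion.
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def ycbcr_to_rgb(img):
    y, cb, cr = img[..., 0], img[..., 1] - 128, img[..., 2] - 128
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255)

def downsample_channel(ycc, ch):
    """2x2-average one chroma channel, then replicate it back (4:2:0-style),
    acting as a simple low-pass filter on that channel only."""
    out = ycc.copy()
    c = ycc[..., ch]
    small = c.reshape(c.shape[0] // 2, 2, c.shape[1] // 2, 2).mean(axis=(1, 3))
    out[..., ch] = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)
    return out

def psnr(a, b):
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3)).astype(float)  # stand-in for hats.jpg
ycc = rgb_to_ycbcr(img)
for name, ch in (("Cb", 1), ("Cr", 2)):
    rec = ycbcr_to_rgb(downsample_channel(ycc, ch))
    print(f"downsampling {name}: PSNR = {psnr(img, rec):.2f} dB")
```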

Keywords: JPEG, Discrete Cosine Transform, Quantization, Color Space Conversion, Image Compression, Peak Signal to Noise Ratio & Compression Ratio.

184 Combustion Improvements by C4/C5 Bio-Alcohol Isomer Blended Fuels Combined with Supercharging and EGR in a Diesel Engine

Authors: Yasufumi Yoshimoto, Enkhjargal Tserenochir, Eiji Kinoshita, Takeshi Otaka

Abstract:

Next-generation bio-alcohols produced from non-food sources like cellulosic biomass are promising renewable energy sources. The present study investigates the engine performance, combustion characteristics, and emissions of a small single-cylinder direct injection diesel engine fueled by four kinds of next-generation bio-alcohol isomer and diesel fuel blends with a constant blending ratio of 3:7 (by mass). The bio-alcohol isomers tested here are n-butanol and iso-butanol (C4 alcohols), and n-pentanol and iso-pentanol (C5 alcohols). To obtain simultaneous reductions in NOx and smoke emissions, the experiments employed supercharging combined with EGR (Exhaust Gas Recirculation). The boost pressures were fixed at two conditions, 100 kPa (naturally aspirated operation) and 120 kPa (supercharged operation), provided by a Roots blower type supercharger. The EGR rates were varied from 0 to 25% using a cooled EGR technique. The results showed that, both with and without supercharging, all the bio-alcohol blended diesel fuels improved the trade-off relation between NOx and smoke emissions at all EGR rates while maintaining good engine performance, compared with diesel fuel operation. It was also found that, regardless of boost pressure and EGR rate, the ignition delays of the tested bio-alcohol isomer blends are in the order iso-butanol > n-butanol > iso-pentanol > n-pentanol. Overall, it was concluded that, except for the changes in ignition delay, the influence of the bio-alcohol isomer blends on engine performance, combustion characteristics, and emissions is relatively small.

Keywords: Alternative fuel, Butanol, Diesel engine, EGR, Next generation bio-alcohol isomer blended fuel, Pentanol, Supercharging.
