Search results for: optimal parameter settings.
122 An Experimental Study on the Effect of Premixed and Equivalence Ratios on CO and HC Emissions of Dual Fuel HCCI Engine
Authors: M. Ghazikhani, M. R. Kalateh, Y. K. Toroghi, M. Dehnavi
Abstract:
In this study, the effects of premixed and equivalence ratios on CO and HC emissions of a dual fuel HCCI engine are investigated. Tests were conducted on a single-cylinder engine with a compression ratio of 17.5. Premixed gasoline is provided by a carburetor connected to the intake manifold and equipped with a screw to adjust the premixed air-fuel ratio, while diesel fuel is injected directly into the cylinder through an injector at a pressure of 250 bar. A heater placed at the inlet manifold is used to control the intake charge temperature. An optimal intake charge temperature results in better HCCI combustion due to the formation of a homogeneous mixture; therefore, all tests were carried out at the optimum intake temperature of 110-115 ºC. The timing of diesel fuel injection has a great effect on the stratification of the in-cylinder charge and plays an important role in HCCI combustion phasing. Experiments indicated 35º BTDC as the optimum injection timing. Varying the coolant temperature in the range of 40 to 70 ºC, better HCCI combustion was achieved at 50 ºC; therefore, the coolant temperature was maintained at 50 ºC during all tests. Simultaneous investigation of the effective parameters on HCCI combustion was conducted to determine the optimum parameters for a fast transition to HCCI combustion. One advantage of the method studied here is the feasibility of an easy and fast conversion of a typical diesel engine to a dual fuel HCCI engine. Results show that increasing the premixed ratio, while keeping the EGR rate constant, increases unburned hydrocarbon (UHC) emissions due to quenching phenomena and trapping of premixed fuel in crevices, but CO emission decreases due to an increase in CO to CO2 reactions.
Keywords: Dual fuel HCCI engine, premixed ratio, equivalence ratio, CO and UHC emissions.
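The premixed ratio varied above is commonly defined in dual fuel HCCI work on an energy basis, as the fraction of total fuel energy supplied by the premixed (gasoline) charge; the abstract does not state the paper's exact definition, so this is a sketch of the usual convention with illustrative flow rates and handbook heating values, not measurements from the study.

```python
def premixed_ratio(m_gasoline, m_diesel, lhv_gasoline=44.0, lhv_diesel=42.5):
    """Energy-based premixed ratio for a dual fuel HCCI engine.

    m_* are fuel mass flow rates (kg/h); lhv_* are lower heating values
    (MJ/kg, typical handbook figures). Returns the premixed fuel energy
    divided by the total fuel energy."""
    e_premixed = m_gasoline * lhv_gasoline
    e_direct = m_diesel * lhv_diesel
    return e_premixed / (e_premixed + e_direct)

# Illustrative flow rates only (not data from this study):
rp = premixed_ratio(m_gasoline=0.8, m_diesel=1.2)
```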
121 Optimization and Validation for Determination of VOCs from Lime Fruit Citrus aurantifolia (Christm.) with and without California Red Scale Aonidiella aurantii (Maskell) Infested by Using HS-SPME-GC-FID/MS
Authors: K. Mohammed, M. Agarwal, J. Mewman, Y. Ren
Abstract:
An optimal technique has been developed for extracting the volatile organic compounds which contribute to the aroma of lime fruit (Citrus aurantifolia). The volatile organic compounds of healthy lime fruit and of fruit infested with the California red scale Aonidiella aurantii were characterized using headspace solid phase microextraction (HS-SPME) combined with gas chromatography (GC) coupled with flame ionization detection (FID) and gas chromatography with mass spectrometry (GC-MS) as a very simple, efficient and nondestructive extraction method. A three-phase 50/30 μm PDMS/DVB/CAR fibre was used for the extraction process. The optimal sealing and fibre exposure times for volatiles from whole lime fruit to reach equilibrium in the headspace of the chamber were 16 and 4 hours, respectively, and 5 min was selected as the desorption time of the three-phase fibre. Herbivorous activity induces indirect plant defenses, such as the emission of herbivore-induced plant volatiles (HIPVs), which can be used by natural enemies for host location. GC-MS analysis showed qualitative differences between the volatiles emitted by infested and healthy lime fruit. The GC-MS analysis allowed the initial identification of 18 compounds, with similarities higher than 85% according to the NIST mass spectral library. One of these, D-limonene, was increased by A. aurantii infestation, and three, undecane, α-farnesene and 7-epi-α-selinene, were decreased. From an applied point of view, the application of the above-mentioned VOCs may help boost the efficiency of biocontrol programs and natural enemies’ production techniques.
Keywords: Lime fruit, Citrus aurantifolia, California red scale, Aonidiella aurantii, VOCs, HS-SPME/GC-FID-MS.
120 Interpretation of Two Indices for the Prediction of Cardiovascular Risk in Pediatric Obesity
Authors: Mustafa M. Donma, Orkide Donma
Abstract:
Obesity and weight gain are associated with an increased risk of developing cardiovascular diseases and with the progression of liver fibrosis. The aspartate transaminase-to-platelet ratio index (APRI) and the fibrosis-4 index (FIB-4) were primarily considered formulas capable of differentiating hepatitis from cirrhosis; however, to the best of our knowledge, their status in children is not clear. The aim of this study is to determine APRI and FIB-4 status in obese (OB) children and compare them with values found in children with normal body mass index (N-BMI). A total of 68 children examined in the outpatient clinics of the Pediatrics Department of Tekirdag Namik Kemal University Medical Faculty were included in the study. Two groups were constituted. The first group comprised 35 children with N-BMI, whose age- and sex-dependent BMI values were between the 15th and 85th percentiles. The second group comprised 33 OB children whose BMI values were between the 95th and 99th percentiles. Anthropometric measurements and routine biochemical tests were performed, and from these parameters the values of the related indices, BMI, APRI, and FIB-4, were calculated. Appropriate statistical tests were used for the evaluation of the study data, with statistical significance accepted as p < 0.05. In the OB group, the values found for APRI and FIB-4 were higher than those calculated for the N-BMI group; however, the difference between the N-BMI and OB groups in terms of APRI and FIB-4 was not statistically significant. A similar pattern was detected for triglyceride (TRG) values. The correlation coefficient and degree of significance between APRI and FIB-4 were r = 0.336 and p = 0.065 in the N-BMI group. On the other hand, they were r = 0.707 and p = 0.001 in the OB group. Associations of these two indices with TRG showed that this parameter was strongly correlated (p < 0.001) with both APRI and FIB-4 in the OB group, whereas no correlation was found in children with N-BMI.
Elevated TRG are associated with an increased risk of fatty liver, which can progress to severe clinical problems such as steatohepatitis and, ultimately, liver fibrosis; TRG are also an independent risk factor for cardiovascular disease. In conclusion, the lack of correlation between TRG and either APRI or FIB-4 in children with N-BMI, along with the detection of strong correlations of TRG with these indices in OB children, indicates the possible onset of a tendency towards the development of fatty liver in OB children. This finding also points to a potential risk for cardiovascular pathologies in OB children. The nature of the difference between the APRI vs. FIB-4 correlations in the N-BMI and OB groups (no correlation vs. high correlation, respectively) may indicate the importance of including the age and alanine transaminase (ALT) parameters, in addition to AST and PLT, in the formula designed for FIB-4.
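The two indices discussed above have standard published definitions (AST expressed relative to its upper limit of normal for APRI, platelet count in 10^9/L for both). A minimal sketch with illustrative laboratory values, not patient data from this study:

```python
import math

def apri(ast, ast_uln, platelets):
    """APRI = (AST / upper limit of normal) * 100 / platelet count (10^9/L)."""
    return (ast / ast_uln) * 100 / platelets

def fib4(age, ast, alt, platelets):
    """FIB-4 = age * AST / (platelet count (10^9/L) * sqrt(ALT))."""
    return age * ast / (platelets * math.sqrt(alt))

# Illustrative pediatric values only:
a = apri(ast=28, ast_uln=40, platelets=250)      # -> 0.28
f = fib4(age=10, ast=28, alt=25, platelets=250)  # -> 0.224
```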
Keywords: APRI, FIB-4, obesity, triglycerides.
119 Utilizing the Analytic Hierarchy Process in Improving Performances of Blind Judo
Authors: Hyun Chul Cho, Hyunkyoung Oh, Hyun Yoon, Jooyeon Jin, Jae Won Lee
Abstract:
Identifying, structuring, and ranking the most important factors related to improving athletes’ performances could pave the way for an improved training system. The purpose of this study was to identify the relative importance of factors that improve the performance of judo athletes with visual impairments, including blindness, by using the Analytic Hierarchy Process (AHP). After reviewing the literature, factors affecting performance in blind judo were selected. A group of experts reviewed the first draft of the questionnaire, and the finally selected performance factors were classified into the major categories of technique, physical fitness, and psychology. Later, a pre-selected expert group was asked to review the final version of the questionnaire and confirm the priorities of the performance factors. The order of priority was determined by performing pairwise comparisons using Expert Choice 2000. Results indicated that “grappling” (.303) and “throwing” (.234) were the most important lower-hierarchy factors for blind judo skills. In addition, the most important physical factor affecting performance was “muscular strength and endurance” (.238), and among the psychological factors “competitive anxiety” (.393) was the most important factor affecting performance. It is important to offer psychological skills training to reduce the anxiety of judo athletes with visual impairments and blindness, so that they can compete in their optimal states. These findings offer insights into what should be considered when determining factors to improve the performance of judo athletes with visual impairments and blindness.
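The priority weights quoted above come from AHP pairwise comparison matrices. A common way to approximate the priority vector of such a matrix is the geometric-mean method; a minimal sketch with a hypothetical 3x3 comparison matrix, not the study's actual expert judgments:

```python
import math

def ahp_weights(matrix):
    """Approximate the AHP priority vector of a pairwise comparison
    matrix via the geometric-mean (row geometric mean, then normalize)."""
    gm = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical comparisons among three technique factors; entry [i][j]
# says how much more important factor i is than factor j:
pairwise = [
    [1,     2,     3],
    [1 / 2, 1,     2],
    [1 / 3, 1 / 2, 1],
]
w = ahp_weights(pairwise)
```

The weights sum to one and preserve the dominance order of the matrix, which is all the ranking in the abstract requires.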
Keywords: Analytic hierarchy process, blind athlete, judo, sport performance.
118 Genetic Algorithm Application in a Dynamic PCB Assembly with Carryover Sequence-Dependent Setups
Authors: M. T. Yazdani Sabouni, Rasaratnam Logendran
Abstract:
We consider a typical problem in the assembly of printed circuit boards (PCBs) in a two-machine flow shop system, in which the weighted sum of weighted tardiness and weighted flow time is to be minimized. The investigated problem is a group scheduling problem in which PCBs are assembled in groups, and the interest is in finding the best sequence of groups, as well as of the boards within each group, to minimize the objective function value. The setup operation between any two board groups is characterized by carryover sequence-dependent setup times, which exactly matches the real application of this problem. As a technical constraint, all of the boards must be kitted before the assembly operation starts (the kitting operation), normally by kitting staff. The main idea developed in this paper is to completely eliminate the role of the kitting staff by assigning the task of kitting to the machine operator during his idle time, which is referred to as the integration of internal (machine) and external (kitting) setup times. Performing the kitting operation, which prepares the next set of boards while the current boards are being assembled, results in boards continuously entering the system, i.e., having dynamic arrival times. Consequently, a dynamic PCB assembly system is introduced for the first time in the assembly of PCBs, which also has characteristics similar to those of just-in-time manufacturing. The problem investigated is computationally very complex, meaning that finding optimal solutions becomes impractical as the problem size grows. Thus, a heuristic based on a Genetic Algorithm (GA) is employed. An example problem illustrating the application of the developed GA is demonstrated, and numerical results of applying the GA to several instances are also provided.
Keywords: Genetic algorithm, dynamic PCB assembly, carryover sequence-dependent setup times, multi-objective.
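The GA itself is described only at a high level. A minimal sketch of a permutation GA of this general kind, applied here to a simplified single-machine weighted flow time objective rather than the paper's full two-machine group-scheduling model with carryover setups:

```python
import random

def weighted_flowtime(seq, proc, weights):
    """Total weighted flow time of a job sequence on a single machine
    (a simplified stand-in for the paper's two-machine group objective)."""
    t, total = 0, 0
    for job in seq:
        t += proc[job]
        total += weights[job] * t
    return total

def ga_sequence(proc, weights, pop_size=30, generations=100, seed=1):
    """Minimal permutation GA: elitist survival, one-point order
    crossover and swap mutation."""
    rng = random.Random(seed)
    n = len(proc)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: weighted_flowtime(s, proc, weights))
        survivors = pop[:pop_size // 2]          # elitism: keep best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)            # one-point order crossover
            child = a[:cut] + [j for j in b if j not in a[:cut]]
            if rng.random() < 0.2:               # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda s: weighted_flowtime(s, proc, weights))

proc, weights = [4, 2, 7, 1, 5], [1, 3, 1, 5, 2]
best = ga_sequence(proc, weights)
```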
117 Full-Genomic Network Inference for Non-Model Organisms: A Case Study for the Fungal Pathogen Candida albicans
Authors: Jörg Linde, Ekaterina Buyko, Robert Altwasser, Udo Hahn, Reinhard Guthke
Abstract:
Reverse engineering of full-genomic interaction networks based on compendia of expression data has been successfully applied for a number of model organisms. This study adapts these approaches for an important non-model organism: the major human fungal pathogen Candida albicans. During the infection process, the pathogen can adapt to a wide range of environmental niches and reversibly change its growth form. Given the importance of these processes, it is important to know how they are regulated. This study presents a reverse engineering strategy able to infer full-genomic interaction networks for C. albicans based on linear regression, utilizing the sparseness criterion (LASSO). To overcome the limited amount of expression data and the small number of known interactions, we utilize different prior-knowledge sources to guide the network inference towards a knowledge-driven solution. Since no database of known interactions for C. albicans exists, we use a text-mining system which processes full-text research papers to identify known regulatory interactions. By comparing with these known regulatory interactions, we find an optimal value for the global modelling parameters weighting the influence of the sparseness criterion and the prior knowledge. Furthermore, we show that soft integration of prior knowledge additionally improves the performance. Finally, we compare the performance of our approach to state-of-the-art network inference approaches.
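The core inference step, sparse linear regression of each target gene on candidate regulators under the LASSO sparseness criterion, can be sketched with plain coordinate descent and soft-thresholding; the toy expression data below are illustrative, not from the C. albicans compendium, and the prior-knowledge weighting is omitted:

```python
def soft_threshold(z, t):
    """LASSO shrinkage operator: move z towards zero by t, clip at zero."""
    return (z - t) if z > t else (z + t) if z < -t else 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent LASSO: sparse regression of one target gene's
    expression (y) on candidate regulators' expression (columns of X)."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding regulator j
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / z
    return beta

# Toy data: the target gene is driven by regulator 0 only; the LASSO
# penalty should zero out the spurious coefficient for regulator 1.
X = [[1.0, 0.2], [2.0, -0.1], [3.0, 0.3], [4.0, 0.0]]
y = [2.0, 4.1, 5.9, 8.0]
beta = lasso_cd(X, y, lam=1.0)
```

A nonzero coefficient is then read as an inferred regulatory edge, which is how sparsity turns regression into network inference.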
Keywords: Pathogen, network inference, text-mining, Candida albicans, LASSO, mutual information, reverse engineering, linear regression, modelling.
116 A Development of Home Service Robot Using Omni-Wheeled Mobility and Task-Based Manipulation
Authors: Hijun Kim, Jungkeun Sung, Seungwoo Kim
Abstract:
In this paper, a Smart Home Service Robot, McBot II, which performs mess-cleanup and other functions in the house, is designed much more optimally than other service robots. It is newly developed as a much more practical system than McBot I, which we had developed two years ago. One characteristic attribute of mobile platforms equipped with a set of dependent wheels is their omni-directionality and the ability to realize complex translational and rotational trajectories for agile indoor navigation. Accurate coordination of the steering angle and spinning rate of each wheel is necessary for consistent motion. This paper develops a trajectory controller for the 3-wheel omni-directional mobile robot using a fuzzy azimuth estimator. A specialized anthropomorphic robot manipulator which can be attached to the housemaid robot McBot II is also developed. This built-in type manipulator consists of two arms with 3 DOF (degrees of freedom) each and two hands with 3 DOF each. The robotic arm is optimally designed to satisfy both the minimum mechanical size and the maximum workspace. Minimum mass and length are required for the built-in cooperated-arms system, but these make the workspace small. This paper proposes an optimal design method to overcome this problem by using a neck joint to move the arms horizontally forward/backward and a waist joint to move them vertically up/down. The robotic hand, which has two fingers and a thumb, is also optimally designed in a task-based concept. Finally, the good performance of the developed McBot II is confirmed through live tests of the mess-cleanup task.
Keywords: Holonomic omni-wheeled mobile robot, special-purpose, manipulation, home service robot.
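The wheel coordination mentioned above follows the standard inverse kinematics of a three-wheel omni-directional platform. A minimal sketch with assumed mounting angles of 90°, 210° and 330° and an assumed wheel-to-center radius (the paper's actual geometry is not given in the abstract):

```python
import math

def omni3_wheel_speeds(vx, vy, omega, radius=0.2,
                       wheel_angles=(90, 210, 330)):
    """Inverse kinematics of a 3-wheel omni-directional platform: linear
    speed of each wheel from the body velocity (vx, vy) and yaw rate
    omega. Wheel i is mounted at wheel_angles[i] degrees, `radius`
    meters from the platform center."""
    speeds = []
    for deg in wheel_angles:
        th = math.radians(deg)
        speeds.append(-vx * math.sin(th) + vy * math.cos(th) + radius * omega)
    return speeds

# Pure rotation: all three wheels run at the same speed.
s = omni3_wheel_speeds(0.0, 0.0, 1.0)
```

For pure translation the three wheel speeds sum to zero, which is a quick sanity check on the geometry.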
115 Study of Proton-9,11Li Elastic Scattering at 60~75 MeV/Nucleon
Authors: Arafa A. Alholaisi, Jamal H. Madani, M. A. Alvi
Abstract:
The radial form of the nuclear matter distribution, the charge and the shape of nuclei are essential properties of nuclei and hence of great interest for several areas of research in nuclear physics. More than three decades have witnessed a range of experimental means employing leptonic probes (such as muons, electrons, etc.) for exploring nuclear charge distributions, whereas hadronic probes (for example alpha particles, protons, etc.) have been used to investigate nuclear matter distributions. In this paper, p-9,11Li elastic scattering differential cross sections in the energy range of 60 to 75 MeV/nucleon have been studied by means of the Coulomb modified Glauber scattering formalism. By applying the semi-phenomenological Bhagwat-Gambhir-Patil (BGP) nuclear density for the loosely bound neutron-rich 11Li nucleus, the estimated matter radius is found to be 3.446 fm, which is quite large compared to the known experimental value of 3.12 fm. The results of a microscopic optical model based calculation applying the Bethe-Brueckner-Hartree-Fock (BHF) formalism have also been compared. It should be noted that in most of the phenomenological density models used to reproduce the p-11Li differential elastic scattering cross section data, the calculated matter radius lies between 2.964 and 3.55 fm. The calculated results with the phenomenological BGP model density, and with the nucleon density calculated in the relativistic mean-field (RMF) approach, reproduce the p-9Li and p-11Li experimental data quite nicely compared to Gaussian-Gaussian or Gaussian-Oscillator densities at all energies under consideration. In the approach described here, no free/adjustable parameter has been employed to reproduce the elastic scattering data, as against the well-known optical model based studies that involve at least four to six adjustable parameters to match the experimental data.
The calculated reaction cross sections σR for p-11Li at these energies are quite large compared to the estimated values reported by earlier works, though so far no experimental studies have been performed to measure them.
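The matter radius quoted above is the root-mean-square radius of the assumed density distribution. A minimal numerical sketch using a simple Gaussian density with an illustrative range parameter, not the BGP density of the paper:

```python
import math

def rms_radius(density, r_max=20.0, n=4000):
    """RMS matter radius of a spherically symmetric density rho(r):
    <r^2> = integral(rho * r^4 dr) / integral(rho * r^2 dr),
    here evaluated by trapezoidal integration."""
    dr = r_max / n
    num = den = 0.0
    for i in range(n + 1):
        r = i * dr
        w = 0.5 if i in (0, n) else 1.0   # trapezoid end-point weights
        rho = density(r)
        num += w * rho * r ** 4 * dr
        den += w * rho * r ** 2 * dr
    return math.sqrt(num / den)

# Gaussian density exp(-(r/a)^2) with a = 2.5 fm (illustrative only);
# the analytic RMS radius of this density is a * sqrt(1.5).
a = 2.5
r = rms_radius(lambda x: math.exp(-((x / a) ** 2)))
```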
Keywords: Bhagwat-Gambhir-Patil density, coulomb modified Glauber model, halo nucleus, optical limit approximation.
114 Heat-Treated or Raw Sunflower Seeds in Lactating Dairy Cows Diets: Effects on Milk Fatty Acids Profile and Milk Production
Authors: H. Mansoori, A. Aghazadeh, K. Nazeradl
Abstract:
The objective of this study was to investigate the effects of dietary supplementation with raw or heat-treated sunflower oil seed, at two levels of 7.5% or 15%, on the unsaturated fatty acids in milk fat and on the performance of high-yielding lactating cows. Twenty early-lactating Holstein cows were used in a complete randomized design. Treatments included: 1) CON, control (without sunflower oil seed); 2) LS-UT, 7.5% raw sunflower oil seed; 3) LS-HT, 7.5% heat-treated sunflower oil seed; 4) HS-UT, 15% raw sunflower oil seed; 5) HS-HT, 15% heat-treated sunflower oil seed. The experimental period lasted 4 wk, with the first 2 wk used for adaptation to the diets. Supplementation with 7.5% raw sunflower seed (LS-UT) tended to decrease milk yield, at 28.37 kg/d compared with the control (34.75 kg/d). Milk fat percentage was increased by the HS-UT treatment, reaching 3.71% compared with 3.39% for CON, though the difference was not significant. Milk protein percentage was decreased by the high-level (15%) sunflower oil seed treatments to 3.18%, whereas the CON treatment gave 3.40% protein. The cows fed the low level of heat-treated sunflower seed (LS-HT) produced milk with the highest content of total unsaturated fatty acids, at 32.59 g/100 g of milk fat, compared with 23.59 g/100 g of milk fat for HS-UT. The content of C18 unsaturated fatty acids in milk fat increased from 21.68 g/100 g of fat for HS-UT to 22.50, 23.98, 27.39 and 30.30 g/100 g of fat for cows fed the HS-HT, CON, LS-UT and LS-HT treatments, respectively. C18:2 fatty acid isomers in milk were significantly greater with LS-HT supplementation (P < 0.05). The total C18 unsaturated fatty acid content was significantly higher in the milk of animals fed the low level of heat-treated sunflower seed (7.5%) than in those fed the high level.
In all, the results of this study showed that supplementing the cows' diet with sunflower oil seed tended to reduce the milk production of lactating cows but can improve the C18 UFA (unsaturated fatty acid) content of milk fat. The 7.5% level of heat-treated sunflower oil seed appeared to be the optimal source for increasing UFA production.
Keywords: Fatty acid profile, milk production, sunflower seed.
113 Fatigue Analysis of Spread Mooring Line
Authors: Chanhoe Kang, Changhyun Lee, Seock-Hee Jun, Yeong-Tae Oh
Abstract:
An offshore floating structure maintains a fixed position under various environmental conditions by means of its mooring system. Environmental conditions, vessel motions and mooring loads are applied to the mooring lines as dynamic tension. Because the global responses of a mooring system in deep water comprise wave frequency and low frequency responses, they should be calculated from time-domain analysis due to their non-linear dynamic characteristics. To take into account all mooring loads, environmental conditions, and added mass and damping terms at each time step, a great deal of computation time and capacity is required. Thus, under the premise that reliable fatigue damage can be derived through a reasonable analysis method, it is necessary to reduce the number of analysis cases through sensitivity studies and appropriate assumptions. In this paper, fatigue effects are studied for a spread mooring system connected to an oil FPSO positioned in deep water offshore West Africa. The target FPSO, with two Mbbls storage, has 16 spread mooring lines (4 bundles x 4 lines). Various sensitivity studies are performed for environmental loads, type of responses, vessel offsets, mooring position, loading conditions and riser behavior, and each parameter is investigated for its effect on fatigue damage through fatigue analysis. Based on the sensitivity studies, the following results are presented: wave loads are more dominant in terms of fatigue than other environmental conditions; the wave frequency response causes higher fatigue damage than the low frequency response; a larger vessel offset increases the mean tension and thus results in increased fatigue damage; the external line of each bundle shows the highest fatigue damage, governed by the vessel pitch motion due to swell wave conditions; and among the three loading conditions, the ballast condition has the highest fatigue damage due to its higher tension.
The riser damping arising from riser behavior tends to reduce the fatigue damage. The various analysis results obtained from these sensitivity studies can be used as a reference for a simplified fatigue analysis of a spread mooring line.
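Fatigue damage from a histogram of tension (stress) ranges is conventionally accumulated with an S-N curve and the Palmgren-Miner rule; the abstract does not give the design curve used, so the S-N parameters and cycle counts below are purely illustrative:

```python
def miner_damage(range_histogram, a=6.0e11, m=3.0):
    """Palmgren-Miner fatigue damage accumulation.

    range_histogram: list of (stress_range, n_cycles) pairs.
    S-N curve: cycles to failure N = a / S**m, with a and m as
    illustrative curve parameters (not a design standard's values).
    Returns D = sum(n_i / N_i); D >= 1 indicates predicted failure."""
    damage = 0.0
    for s_range, n_cycles in range_histogram:
        n_to_failure = a / s_range ** m
        damage += n_cycles / n_to_failure
    return damage

# Wave-frequency cycles dominate: many small-range cycles, few large ones.
histogram = [(50.0, 2.0e6), (120.0, 2.0e5), (300.0, 5.0e3)]
d = miner_damage(histogram)
```

This shape of calculation is why the counting of wave-frequency versus low-frequency cycles, and the mean-tension shift from vessel offset, feed directly into the damage totals compared in the sensitivity studies.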
Keywords: Mooring system, fatigue analysis, time domain, non-linear dynamic characteristics.
112 Adaptive Design of Large Prefabricated Concrete Panels Collective Housing
Authors: Daniel M. Muntean, Viorel Ungureanu
Abstract:
More than half of the urban population in Romania lives today in residential buildings made of large prefabricated reinforced concrete panels. Since their initial design dates from the 1960s, these housing units are now technically and morally outdated, consuming large amounts of energy for heating, cooling, ventilation and lighting, while failing to meet the needs of the contemporary lifestyle. Due to their widespread use, the design of a system that improves their energy efficiency would have a real impact, not only on the energy consumption of the residential sector, but also on the quality of life they offer. Furthermore, with the transition of today’s existing power grid to a “smart grid”, buildings could become an active element of future electricity networks by contributing to micro-generation and energy storage. One of the most addressed issues today is to find locally adapted strategies that can be applied considering the 20-20-20 EU policy criteria and to offer sustainable and innovative solutions for the cost-optimal energy performance of buildings, adapted to the existing local market. This paper presents a possible adaptive design scenario for the sustainable retrofitting of these housing units. The apartments are transformed in order to meet current living requirements, and additional extensions are placed on top of the building, replacing the unused roof space and acting not only as housing units but as active solar energy collection systems. An adaptive building envelope ensures overall air-tightness, and an elevator system is introduced to facilitate access to the upper levels.
Keywords: Adaptive building, energy efficiency, retrofitting, residential buildings, smart grid.
111 Zinc Sorption by Six Agricultural Soils Amended with Municipal Biosolids
Authors: Antoine Karam, Lotfi Khiari, Bruno Breton, Alfred Jaouich
Abstract:
Anthropogenic sources of zinc (Zn), including industrial emissions and effluents, Zn-rich fertilizer materials and pesticides containing Zn, can contribute to increasing the concentration of soluble Zn to levels toxic to plants in acid sandy soils. The application of municipal sewage sludge or biosolids (MBS), which contain metal-immobilizing agents, to coarse-textured soils could improve the metal sorption capacity of these low-CEC soils. The purpose of this experiment was to evaluate the sorption of Zn in surface samples (0-15 cm) of six Quebec (Canada) soils amended with MBS (pH 6.9) from Val d’Or (Quebec, Canada). Soil samples amended with increasing amounts (0 to 20%) of MBS were equilibrated with various amounts of Zn as ZnCl2 in 0.01 M CaCl2 for 48 hours at room temperature. Sorbed Zn was calculated from the difference between the initial and final Zn concentrations in solution. The Zn sorption data conformed to the linear form of the Freundlich equation. The amount of sorbed Zn increased considerably with increasing MBS rate. Analysis of variance revealed a highly significant effect (p ≤ 0.001) of soil texture and MBS rate on the amount of sorbed Zn. The average Zn-sorption capacities of the MBS-amended coarse-textured soils were lower than those of the MBS-amended fine-textured soils. The two sandy soils (86-99% sand) amended with MBS retained 2- to 5-fold more Zn than those without MBS (control). Significant Pearson correlation coefficients were obtained between the Freundlich sorption coefficient (KF) and commonly measured physical and chemical properties. Among all the soil properties measured, soil pH gave the best significant correlation coefficients (p ≤ 0.001) for soils receiving 0, 5 and 10% MBS. Furthermore, KF values were positively correlated with soil clay content, exchangeable basic cations (Ca, Mg or K), CEC and the clay content to CEC ratio.
From these results, it can be concluded that (i) municipal biosolids provide sorption sites that have a strong affinity for Zn, (ii) both soil texture, especially clay content, and soil pH are the main factors controlling anthropogenic Zn sorption in municipal biosolids-amended soils, and (iii) the effect of municipal biosolids on Zn sorption will be more pronounced for a sandy soil than for a clay soil.
Keywords: Metal, recycling, sewage sludge, trace element.
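The linear form of the Freundlich equation used in this study is log S = log KF + (1/n) log C, fitted by least squares on log-transformed data. A minimal sketch with synthetic sorption data (illustrative values, not the study's measurements):

```python
import math

def freundlich_fit(conc, sorbed):
    """Fit the linearized Freundlich isotherm
    log10(S) = log10(KF) + (1/n) * log10(C)
    by ordinary least squares; returns (KF, 1/n)."""
    xs = [math.log10(c) for c in conc]
    ys = [math.log10(s) for s in sorbed]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
             sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return 10 ** intercept, slope

# Synthetic data generated with KF = 12, 1/n = 0.8 (illustrative only):
conc = [0.5, 1.0, 2.0, 5.0, 10.0]       # equilibrium concentration
sorbed = [12 * c ** 0.8 for c in conc]  # sorbed amount
kf, inv_n = freundlich_fit(conc, sorbed)
```

KF recovered this way is the sorption-capacity parameter that the study correlates with pH, clay content and CEC.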
110 Physicochemical Properties of Microemulsions and Their Uses in Enhanced Oil Recovery
Authors: T. Kumar, Achinta Bera, Ajay Mandal
Abstract:
The use of microemulsions in enhanced oil recovery has become more attractive in recent years because of their high extraction efficiency. Experimental investigations have been made on the characterization of microemulsions of an oil-brine-surfactant/cosurfactant system for use in enhanced oil recovery (EOR). Sodium dodecyl sulfate, propan-1-ol and heptane were selected as the surfactant, cosurfactant and oil, respectively, for the preparation of the microemulsion. The effects of salinity on the relative phase volumes and solubilization parameters have also been studied. As salinity changes from low to high values, a phase transition takes place from Winsor I to Winsor II via Winsor III. A suitable microemulsion composition was selected based on its stability and ability to reduce interfacial tension. A series of flooding experiments was performed using the selected microemulsion. The flooding experiments were performed in a core flooding apparatus using a uniform sand pack. The core holder was tightly packed with uniform sands (60-100 mesh) and saturated with brines of different salinities. It was flooded with the brine at 25 psig, and the absolute permeability was calculated from the flow rate through the sand pack. The sand pack was then flooded with the crude oil at 800 psig to irreducible water saturation; the initial water saturation was determined on the basis of a mass balance. Waterflooding was conducted by placing the core holder horizontally at a constant injection pressure of 200 psig. After water flooding, when the water-cut reached above 95%, around 0.5 pore volume (PV) of the above microemulsion slug was injected, followed by chasing water. The experiments were repeated using different compositions of the microemulsion slug, and the additional recoveries were calculated by material balance. Encouraging results have been observed, with additional recovery of more than 20% of the original oil in place over conventional water flooding.
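The absolute permeability step follows single-phase Darcy flow, k = qμL/(AΔP). A minimal sketch with illustrative sand-pack dimensions and flow rates, not the study's actual apparatus values:

```python
def darcy_permeability(flow_rate, viscosity, length, area, delta_p):
    """Absolute permeability from steady single-phase Darcy flow through
    a sand pack: k = q * mu * L / (A * dP). With consistent SI units the
    result is in m^2 (1 darcy is about 9.869e-13 m^2)."""
    return flow_rate * viscosity * length / (area * delta_p)

# Illustrative sand-pack values only:
k = darcy_permeability(flow_rate=2.0e-7,   # m^3/s
                       viscosity=1.0e-3,   # Pa.s (brine)
                       length=0.3,         # m
                       area=1.1e-3,        # m^2
                       delta_p=1.7e5)      # Pa (roughly 25 psig)
k_darcy = k / 9.869e-13                    # convert m^2 to darcy
```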
Keywords: Microemulsion flooding, enhanced oil recovery, phase behavior, optimal salinity.
109 Low Overhead Dynamic Channel Selection with Cluster-Based Spatial-Temporal Station Reporting in Wireless Networks
Authors: Zeyad Abdelmageid, Xianbin Wang
Abstract:
Choosing the operational channel for a WLAN access point (AP) has traditionally been a static channel assignment process initiated by the user during deployment of the AP, which fails to cope with the dynamic conditions of the assigned channel at the station side afterwards. The dramatically growing number of Wi-Fi APs and stations operating in the unlicensed band has led to dynamic, distributed and often severe interference. This highlights the urgent need for the AP to dynamically select the best overall channel of operation for the basic service set (BSS) by considering the distributed and changing channel conditions at all stations. Consequently, dynamic channel selection algorithms which consider feedback from the station side have been developed. Despite the significant performance improvement, existing channel selection algorithms suffer from very high feedback overhead, and the resulting feedback latency from the STAs can cause the eventually selected channel to no longer be optimal for operation, due to the dynamically shared nature of the unlicensed band. This has inspired us to develop our own dynamic channel selection algorithm with reduced overhead through the proposed low-overhead, cluster-based station reporting mechanism. The main idea behind cluster-based station reporting is the observation that STAs which are very close to each other tend to have very similar channel conditions. Instead of requesting each STA to report on every candidate channel, which causes high overhead, the AP divides the STAs into clusters and then assigns each STA in each cluster one channel to report feedback on. With proper design of the cluster-based reporting, the AP does not lose any information about the channel conditions at the station side while reducing the feedback overhead. The simulation results show equal, and at times better, performance with a fraction of the overhead.
We believe that this algorithm has great potential in designing future dynamic channel selection algorithms with low overhead.
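The keyword list names DBSCAN as the clustering method, and density-based grouping of station positions can be sketched in a few lines. A minimal stand-alone DBSCAN with hypothetical station coordinates (the paper's actual feature space and parameters are not given in the abstract):

```python
import math

def dbscan(points, eps=2.0, min_pts=2):
    """Minimal DBSCAN: group nearby stations so each cluster can be
    assigned a different candidate channel to report on.
    Returns one label per point (-1 marks noise/isolated stations)."""
    labels = [None] * len(points)
    cluster = -1

    def neighbors(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1              # tentatively noise
            continue
        cluster += 1                    # i is a core point: new cluster
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:         # border point, previously noise
                labels[j] = cluster
                continue
            if labels[j] is not None:
                continue
            labels[j] = cluster
            more = neighbors(j)
            if len(more) >= min_pts:    # j is also core: expand cluster
                queue.extend(more)
    return labels

# Two spatial groups of stations plus one isolated station:
stations = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (30, 30)]
labels = dbscan(stations)
```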
Keywords: Channel assignment, Wi-Fi networks, clustering, DBSCAN, overhead.
108 Typical Day Prediction Model for Output Power and Energy Efficiency of a Grid-Connected Solar Photovoltaic System
Authors: Yan Su, L. C. Chan
Abstract:
A novel typical day prediction model has been built and validated with measured data from a grid-connected solar photovoltaic (PV) system in Macau. Unlike the conventional statistical method used in previous PV studies, which obtains results by averaging nearby continuous points, the present typical day statistical method obtains the value at every minute of a typical day by averaging discontinuous points at the same minute on different days. This averaging of discontinuous points makes it possible to obtain Gaussian-shaped dynamical distributions of solar irradiance and output power for a yearly or monthly typical day. Based on the yearly typical day statistical analysis, the maximum possible accumulated output energy in a year under on-site climate conditions and the corresponding optimal PV system running time are obtained. Periodic Gaussian-shaped prediction models for solar irradiance, output energy and system energy efficiency have been built, and their coefficients have been determined from the yearly, maximum and minimum monthly typical day Gaussian distribution parameters, which are obtained by iterating for minimum root mean squared deviation (RMSD). With the present model, the dynamical effects of time of day are retained, while day-to-day uncertainty due to changing weather is smoothed but still included. The periodic Gaussian-shaped correlations for solar irradiance, output power and system energy efficiency compare favorably with data from the PV system in Macau and prove to be an improvement over previous models.
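The discontinuous-point averaging can be sketched as follows. This is our own minimal illustration, not the authors' code; the sparse sample data, the plain per-minute mean, and the Gaussian shape function G(t) = A·exp(−(t − μ)²/2σ²) are assumptions consistent with the description above.

```python
import math

def typical_day(readings):
    """readings: {day: [(minute_of_day, value), ...]}, possibly sparse.
    Averages the discontinuous points recorded at the same minute on
    different days, giving one value per minute of the typical day."""
    sums, counts = {}, {}
    for samples in readings.values():
        for minute, value in samples:
            sums[minute] = sums.get(minute, 0.0) + value
            counts[minute] = counts.get(minute, 0) + 1
    return {m: sums[m] / counts[m] for m in sums}

def gaussian_shape(t, amplitude, mu, sigma):
    """Gaussian-shaped model value at minute t (e.g. irradiance or power)."""
    return amplitude * math.exp(-((t - mu) ** 2) / (2.0 * sigma ** 2))
```

The minute keys need not be present in every day, which is exactly what distinguishes this averaging from nearby-continuous-point averaging.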
Keywords: Grid Connected, RMSD, Solar PV System, Typical Day.
107 Game-Tree Simplification by Pattern Matching and Its Acceleration Approach using an FPGA
Authors: Suguru Ochiai, Toru Yabuki, Yoshiki Yamaguchi, Yuetsu Kodama
Abstract:
In this paper, we propose a Connect6 solver which adopts a hybrid approach based on a tree-search algorithm and image processing techniques. The solver must deal with complicated computation and provide high performance in order to make real-time decisions. The proposed approach enables the solver to be implemented on a single Spartan-6 XC6SLX45 FPGA produced by Xilinx without any external devices. The compact implementation is achieved through image processing techniques that optimize the tree-search algorithm of the Connect6 game. Tree search is widely used in computer games, and an optimal search yields the best move in every turn; thus, many tree-search algorithms, such as the minimax algorithm and artificial intelligence approaches, have been proposed in this field. However, there is one fundamental problem in this area: the computation time increases rapidly with the growth of the game tree. Because of the highly parallel computation characteristics, the larger the game tree, the bigger the circuit. This paper therefore aims to reduce the size of the Connect6 game tree using image processing techniques and the position-symmetric property of the game. The proposed solver is composed of four computational modules: a two-dimensional checkmate strategy checker, a template matching module, a skilful-line predictor, and a next-move selector. These modules work well together in selecting next moves from the candidates, and the total amount of their circuitry is small. The details of the hardware design for the FPGA implementation are described, and the performance of this design is also shown in this paper.
Keywords: Connect6, pattern matching, game-tree reduction, hardware direct computation.
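As background for the tree-search side, generic minimax with alpha-beta pruning can be sketched as below. This is a textbook illustration in Python, not the FPGA design; the toy game (an explicit tree of leaf scores) is an assumption for demonstration only.

```python
def alphabeta(state, depth, alpha, beta, maximizing, moves, step, evaluate):
    """Generic minimax with alpha-beta pruning over an abstract game:
    moves(state) lists legal moves, step(state, m) applies one, and
    evaluate(state) scores a leaf from the maximizing player's view."""
    ms = moves(state)
    if depth == 0 or not ms:
        return evaluate(state)
    if maximizing:
        best = float("-inf")
        for m in ms:
            best = max(best, alphabeta(step(state, m), depth - 1,
                                       alpha, beta, False,
                                       moves, step, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:
                break          # prune: the minimizer will avoid this branch
        return best
    best = float("inf")
    for m in ms:
        best = min(best, alphabeta(step(state, m), depth - 1,
                                   alpha, beta, True,
                                   moves, step, evaluate))
        beta = min(beta, best)
        if alpha >= beta:
            break              # prune: the maximizer will avoid this branch
    return best
```

On a two-level tree of leaf scores [[3, 5], [2, 9]], the maximizer's value is 3, and the second subtree is pruned after its first leaf, which is precisely the kind of tree shrinkage the hardware approach above pushes further via symmetry and pattern matching.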
106 Ultra-Light Overhead Conveyor Systems for Logistics Applications
Authors: Batin Latif Aylak, Bernd Noche
Abstract:
Overhead conveyor systems are attractive for their simple construction, wide application range, and full compatibility with other manufacturing systems designed according to international standards. Ultra-light overhead conveyor systems are rope-based conveying systems with individually driven vehicles. The vehicles can move automatically along the rope, with energy and signals supplied to them; crossings are realized by switches. Overhead conveyor systems are used particularly in the automotive industry, but also at post offices. They must always be integrated into a logistical process in a way that yields cheaper material flow and guarantees precise and fast workflows. With their help, transport can take place without wasting floor space, without excess company capacity, and without lost or damaged products, erroneous deliveries, endless travels, or wasted time. Ultra-light overhead conveyor systems provide optimal material flow, which generates profit and saves time. This article illustrates the advantages of the structure of ultra-light overhead conveyor systems in logistics applications and explains the steps of their system design. Following this illustration, systems currently available on the market are presented by means of their technical characteristics, and the demands placed on an ultra-light overhead conveyor system, given its simple construction, are outlined.
Keywords: Logistics, material flow, overhead conveyor.
105 Assessing the Theoretical Suitability of Sentinel-2 and WorldView-3 Data for Hydrocarbon Mapping of Spill Events, Using HYSS
Authors: K. Tunde Olagunju, C. Scott Allen, F.D. (Freek) van der Meer
Abstract:
Identification of hydrocarbon oil in remote sensing images is often the first step in monitoring oil during spill events. Most remote sensing methods adopt hydrocarbon identification techniques to achieve detection, in order to plan an appropriate cleanup program. Identification on optical sensors allows not only detection but also characterization and quantification. Until recently, quantification and characterization in optical remote sensing were only potentially possible using high-resolution laboratory and airborne imaging spectrometers (hyperspectral data). Unlike multispectral data, hyperspectral data are not freely available, as this data category is at present mainly obtained via airborne survey. In this research, two operational high-resolution multispectral satellites (WorldView-3 and Sentinel-2) are theoretically assessed for their suitability for hydrocarbon characterization, using the Hydrocarbon Spectra Slope model (HYSS). This method utilizes the two most persistent hydrocarbon diagnostic/absorption features, at 1.73 µm and 2.30 µm, for hydrocarbon mapping on multispectral data. Spectral measurements of seven different hydrocarbon oils (crude and refined) taken on 10 different substrates with a laboratory ASD FieldSpec were convolved to Sentinel-2 and WorldView-3 resolution using their full width at half maximum (FWHM) parameters. The resulting hydrocarbon slope values obtained from the studied samples enable clear qualitative discrimination of most hydrocarbons, despite the presence of different background substrates, particularly on WorldView-3. Due to the close conformity of its central wavelengths and narrow bandwidths to the key hydrocarbon bands used in HYSS, the qualitative analysis on WorldView-3 was statistically significant at the 95% confidence level (P-value < 0.01) for all studied hydrocarbon oils except diesel.
Using multivariate analysis of variance (MANOVA), the discriminating power of HYSS is statistically significant for most hydrocarbon-substrate combinations at Sentinel-2 and WorldView-3 FWHM, revealing the potential of these two operational multispectral sensors as rapid-response tools for hydrocarbon mapping. One notable exception is highly transmissive hydrocarbons on Sentinel-2 data, due to the non-conformity of its spectral bands with the key hydrocarbon absorptions and its relatively coarse bandwidth (> 100 nm).
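The band-convolution step can be sketched as follows. This is our own illustration, assuming a Gaussian spectral response function built from each band's centre and FWHM, with a simple reflectance slope between the 1.73 µm and 2.30 µm features standing in for the published HYSS formulation.

```python
import math

def band_response(wavelengths, reflectance, center, fwhm):
    """Convolve a sampled spectrum to one sensor band, assuming a Gaussian
    spectral response defined by the band centre and its FWHM (all in µm)."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    weights = [math.exp(-((w - center) ** 2) / (2.0 * sigma ** 2))
               for w in wavelengths]
    return sum(w * r for w, r in zip(weights, reflectance)) / sum(weights)

def hydrocarbon_slope(r_173, r_230):
    """Illustrative slope between the two diagnostic features (per µm);
    the actual HYSS definition may differ."""
    return (r_230 - r_173) / (2.30 - 1.73)
```

A spectrally flat target convolves to the same value in any band, which is a quick sanity check on the response weighting.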
Keywords: hydrocarbon, oil spill, remote sensing, hyperspectral, multispectral, hydrocarbon–substrate combination, Sentinel-2, WorldView-3.
104 In-Flight Radiometric Performances Analysis of an Airborne Optical Payload
Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yaokai Liu, Xinhong Wang, Yongsheng Zhou
Abstract:
Performance analysis of a remote sensing sensor is required to pursue a range of scientific research and application objectives. Laboratory analysis of any remote sensing instrument is essential but not sufficient to establish valid in-flight performance. In this study, with the aid of in situ measurements and the corresponding image of a three-gray-scale permanent artificial target, the in-flight radiometric performance analyses (in-flight radiometric calibration, dynamic range and response linearity, signal-to-noise ratio (SNR), and radiometric resolution) of a self-developed short-wave infrared (SWIR) camera are performed. To acquire the in-flight calibration coefficients of the SWIR camera, the at-sensor radiances (Li) for the artificial targets are first simulated with in situ measurements (atmospheric parameters and spectral reflectance of the target) and viewing geometries using the MODTRAN model. With these radiances and the corresponding digital numbers (DN) in the image, a straight line of the form L = G × DN + B is fitted by regression, and the fitted coefficients G and B are the in-flight calibration coefficients. The high point (LH) and low point (LL) of the dynamic range are then LH = G × DNH + B and LL = B, respectively, where DNH equals 2^n − 1 (n being the quantization bit depth of the payload). Meanwhile, the sensor's response linearity (δ) is taken as the correlation coefficient of the regressed line. The results show that the calibration coefficients G and B are 0.0083 W·sr−1·m−2·µm−1 and −3.5 W·sr−1·m−2·µm−1; the low point of the dynamic range is −3.5 W·sr−1·m−2·µm−1 and the high point is 30.5 W·sr−1·m−2·µm−1; and the response linearity is approximately 99%. Furthermore, an SNR normalization method is used to assess the sensor's SNR; the normalized SNR is about 59.6 when the mean radiance equals 11.0 W·sr−1·m−2·µm−1, and the radiometric resolution is calculated to be about 0.1845 W·sr−1·m−2·µm−1.
Moreover, to validate the result, the measured radiances over four portable artificial targets with reflectances of 20%, 30%, 40% and 50% are compared with radiative-transfer-code predictions. The relative error of the calibration is within 6.6%.
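The fit and dynamic-range arithmetic above can be sketched as follows. This is a generic least-squares illustration, not the authors' processing chain; the 12-bit quantization used in the check is an assumption that happens to reproduce the reported high point of about 30.5 W·sr−1·m−2·µm−1 from G = 0.0083 and B = −3.5.

```python
def fit_calibration(dn, radiance):
    """Least-squares fit of L = G*DN + B; returns (G, B, r), where the
    correlation coefficient r serves as the response-linearity measure."""
    n = len(dn)
    mx, my = sum(dn) / n, sum(radiance) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(dn, radiance))
    sxx = sum((x - mx) ** 2 for x in dn)
    syy = sum((y - my) ** 2 for y in radiance)
    g = sxy / sxx
    return g, my - g * mx, sxy / (sxx * syy) ** 0.5

def dynamic_range(g, b, n_bits):
    """Low and high points of the dynamic range:
    L_L = B and L_H = G*(2**n_bits - 1) + B."""
    return b, g * (2 ** n_bits - 1) + b
```

On perfectly linear data the fit returns r = 1, matching a response linearity of 100%.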
Keywords: Calibration, dynamic range, radiometric resolution, SNR.
103 Six Sigma Solutions and its Benefit-Cost Ratio for Quality Improvement
Authors: S. Homrossukon, A. Anurathapunt
Abstract:
This applied research presents the improvement of production quality using Six Sigma solutions and an analysis of the benefit-cost ratio. The case of interest is the production of tile-concrete, which faced a high rate of nonconforming products from an inappropriate surface coating and low process capability with respect to tile strength. Surface coating and tile strength are the characteristics most critical to quality for this product. The improvements followed the five stages of the Six Sigma methodology. After the improvement, production yield rose to the 80% target, the rate of defective products from the coating process was remarkably reduced from 29.40% to 4.09%, and process capability based on strength quality increased from 0.87 to 1.08, as customer requirements demanded. The improvement saved materials losses of 3.24 million baht (about 0.11 million US dollars). The benefits were analyzed from (1) the reduction in the number of nonconforming tiles, valued at factory price, for the surface coating improvement and (2) the materials saved through the increase in process capability. The benefit-cost ratio of the overall improvement was as high as 7.03. The investment showed no return during the define, measure and analyze stages and the beginning of the improve stage; thereafter the ratio kept increasing. This is because the define, measure and analyze stages of Six Sigma mainly determine the cause of the problem and its effects rather than improve the process; benefits first appear in the improve stage and grow from there. Within each stage, the individual benefit-cost ratio was much higher than the cumulative one, since costs accumulate from the first stage of Six Sigma onward. Considering the benefit-cost ratio during the improvement project helps in making cost-saving decisions for similar activities during the improvement and for new projects.
In conclusion, determining the behavior of the benefit-cost ratio throughout the Six Sigma implementation period provides useful data for managing quality improvement with optimal effectiveness. This is an additional outcome beyond the regular proceeding of Six Sigma.
Keywords: Six Sigma Solutions, Process Improvement, Quality Management, Benefit-Cost Ratio.
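The stage-by-stage behavior described above can be sketched with a cumulative benefit-cost computation. The stage names follow the DMAIC phases mentioned in the text; the benefit and cost figures in the example are purely hypothetical.

```python
def cumulative_bcr(stages):
    """stages: ordered (name, benefit, cost) tuples for an improvement project.
    Returns the cumulative benefit / cumulative cost after each stage; the
    ratio stays at zero until benefits first appear (the improve stage)."""
    out, total_benefit, total_cost = {}, 0.0, 0.0
    for name, benefit, cost in stages:
        total_benefit += benefit
        total_cost += cost
        out[name] = total_benefit / total_cost
    return out
```

Because costs accumulate from the first stage onward while benefits only start in the improve stage, the cumulative ratio is always below the single-stage ratio, mirroring the observation in the abstract.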
102 Tool Wear of Metal Matrix Composite 10wt% AlN Reinforcement Using TiB2 Cutting Tool
Authors: M. S. Said, J. A. Ghani, Che Hassan C. H., N. N. Wan, M. A. Selamat, R. Othman
Abstract:
Metal matrix composites (MMCs) attract considerable attention for their ability to provide high strength, high modulus, high toughness, good impact properties, improved wear resistance, and good corrosion resistance compared to unreinforced alloys. Aluminium-silicon (Al/Si) alloy MMCs have been widely used in industrial sectors such as transportation, domestic equipment, aerospace, military, and construction. An aluminium-silicon alloy reinforced with aluminium nitride (AlN) particles is a new-generation MMC used in the automotive and aerospace sectors. AlN is an advanced material with bright prospects, offering light weight, high strength, and high hardness and stiffness. However, the high degree of ceramic particle reinforcement and the irregular nature of the particles in the matrix make machining difficult. This paper examines tool wear when milling AlSi/AlN metal matrix composite with a TiB2 (titanium diboride) coated carbide cutting tool. The volume fraction of the AlN reinforcement particles was 10%, and milling was carried out under dry cutting conditions. The TiB2 coated carbide inserts were run at cutting speeds of 230, 300 and 370 m/min, a feed rate of 0.8 mm/tooth, and a depth of cut (DoC) of 0.4 mm. A Sometech SV-35 video microscope system was used to quantify the tool wear. The results show that tool life increased with cutting speed: at 370 m/min, 0.8 mm/tooth and 0.4 mm DoC, the optimum condition, the tool lasted 123.2 min, while at the medium cutting speed of 300 m/min it lasted 119.86 min and at the low cutting speed 119.66 min.
High cutting speed thus gives the best parameters for cutting AlSi/AlN MMC material. These results will help manufacturers in the machining of AlSi/AlN MMC materials.
Keywords: AlSi/AlN Metal Matrix Composite milling process, tool wear, TiB2 coated cemented carbide tool.
101 Thermal Evaluation of Printed Circuit Board Design Options and Voids in Solder Interface by a Simulation Tool
Authors: B. Arzhanov, A. Correia, P. Delgado, J. Meireles
Abstract:
Quad Flat No-Lead (QFN) packages have become very popular for tuners, converters and audio amplifiers, among other applications needing efficient power dissipation in small footprints. Since semiconductor junction temperature (TJ) is a critical parameter for product quality, and to ensure that the die temperature does not exceed the maximum allowable TJ, a thermal analysis conducted in an early development phase is essential to avoid repeated re-design cycles with large losses in cost and time. A simulation tool capable of estimating the die temperature of components with QFN packages was developed. It establishes a non-empirical way to define an acceptance criterion for the amount of voids in the solder interface between the exposed pad and the printed circuit board (PCB), to be applied during the industrialization process, and it evaluates the impact of PCB design parameters. Targeting PCB layout designers as end users, a user-friendly graphical interface (GUI) was implemented, allowing the user to introduce design parameters in a convenient and secure way while hiding all the complexity of the finite element simulation process. This cost-effective tool makes the simulation process transparent and provides useful outputs within acceptable time; it can be adopted by PCB designers to prevent potential risks during the design stage and to make the product economically efficient by not oversizing it. This article gathers relevant information on the design and implementation of the tool and presents a parametric study conducted with it. The simulation tool was experimentally validated using a Thermal Test Chip (TTC) in an open-cavity QFN, measuring junction temperature (TJ) directly on the die under controlled, known conditions. The article also provides a short overview of standard thermal solutions and their impact in exposed-pad packages (i.e., QFN), describes the methods and techniques that a system designer should use to achieve optimum thermal performance, and demonstrates the effect of system-level constraints on the thermal performance of the design.
Keywords: Quad Flat No-Lead packages, exposed pads, junction temperature, thermal management and measurements.
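As a first-order illustration of why solder voids matter, the resistor-network sketch below, our own simplification rather than the finite element tool described above, scales the exposed-pad solder resistance by the remaining contact area; all parameter values are hypothetical.

```python
def junction_temperature(t_ambient, power, theta_jc, theta_solder_full,
                         void_fraction):
    """First-order series thermal path: die -> case -> solder joint -> PCB.
    Voids shrink the effective solder contact area, so the joint resistance
    grows as theta_solder_full / (1 - void_fraction).  Temperatures in degC,
    power in W, resistances in K/W; a crude sketch, not the FEM model."""
    theta_solder = theta_solder_full / (1.0 - void_fraction)
    return t_ambient + power * (theta_jc + theta_solder)
```

Even this crude model shows the trend the tool quantifies properly: a 50% void area doubles the joint resistance and raises TJ accordingly.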
100 Study of Variation of Winds Behavior on Micro Urban Environment with Use of Fuzzy Logic for Wind Power Generation: Case Study in the Cities of Arraial do Cabo and São Pedro da Aldeia, State of Rio de Janeiro, Brazil
Authors: Roberto Rosenhaim, Marcos Antonio Crus Moreira, Robson da Cunha, Gerson Gomes Cunha
Abstract:
This work provides details of wind speed behavior in the cities of Arraial do Cabo and São Pedro da Aldeia, located in the Lakes Region of the State of Rio de Janeiro, Brazil. This region has one of the best potentials for wind power generation. In the interurban layer, wind conditions are very complex and depend on physical geography, the size and orientation of surrounding buildings and constructions, population density, and land use. In the same context, the fundamental surface parameter governing the production of flow turbulence in urban canyons is the surface roughness. Such factors can influence the potential for wind power generation within cities. Moreover, small-scale wind power is not fully exploited because of the complexity of measuring wind flow inside cities; this type of resource is difficult to predict accurately. This study demonstrates how fuzzy logic can facilitate the assessment of the complexity of the wind potential inside cities. It presents a decision support tool and its ability to deal with inaccurate information using linguistic variables created by the heuristic method, relying on previously published studies of the variables that influence wind speed in the urban environment. These variables were turned into verbal expressions used in the computer system, which facilitated the establishment of rules for fuzzy inference and integration with a smartphone application used in the research. The first part of the study describes the challenges of sustainable development, followed by incentive policies for the use of renewable energy in Brazil. The next chapter covers the characteristics of the study area and the concepts of fuzzy logic. Data were collected in a field experiment using qualitative and quantitative assessment methods.
As a result, a map of various points within the studied cities is presented, with their wind viability evaluated by a decision support system using multivariate classification based on fuzzy logic.
Keywords: Behavior of winds, wind power, fuzzy logic, sustainable development.
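The linguistic-variable idea can be sketched with two triangular membership functions and a two-rule min/max inference. This is a generic fuzzy-logic illustration, not the authors' rule base; the speed and roughness breakpoints are invented for the example.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def wind_viability(speed, roughness):
    """Two-rule sketch: (high speed AND low roughness) -> good site;
    (low speed OR high roughness) -> poor site.  Min/max inference with a
    weighted-average defuzzification over the scores good=1, poor=0."""
    high_speed = tri(speed, 3.0, 8.0, 13.0)      # m/s, illustrative
    low_rough = tri(roughness, -0.5, 0.0, 1.0)   # roughness length, illustrative
    good = min(high_speed, low_rough)
    poor = max(1.0 - high_speed, 1.0 - low_rough)
    return good / (good + poor) if good + poor else 0.0
```

A smooth, windy point scores 1.0 and a calm, rough point scores 0.0, with graded values in between, which is the kind of output a map of evaluated points can display.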
99 Estimation of Relative Subsidence of Collapsible Soils Using Electromagnetic Measurements
Authors: Henok Hailemariam, Frank Wuttke
Abstract:
Collapsible soils are weak soils that appear stable in their natural, normally dry state but deform rapidly under saturation (wetting), generating large and unexpected settlements that often have disastrous consequences for structures unwittingly built on such deposits. In this study, a prediction model for the relative subsidence of stressed collapsible soils based on dielectric permittivity measurement is presented. Unlike most existing methods for soil subsidence prediction, this model does not require moisture content as an input parameter, thus providing the opportunity to obtain an accurate estimate of the relative subsidence of collapsible soils using dielectric measurement only. The prediction model is developed from an existing relative subsidence prediction model (which depends on soil moisture condition) and an advanced theoretical frequency- and temperature-dependent electromagnetic mixing equation (which effectively removes the moisture content dependence of the original model). For large-scale sub-surface soil exploration, spatial sub-surface dielectric data over wide areas and great depths of weak (collapsible) soil deposits can be obtained using non-destructive high-frequency electromagnetic (HF-EM) measurement techniques such as ground penetrating radar (GPR). For laboratory or small-scale in-situ measurements, techniques such as an open-ended coaxial line with widely applicable time domain reflectometry (TDR) or vector network analysers (VNAs) are usually employed to obtain the soil dielectric data. By using soil dielectric data obtained from small- or large-scale non-destructive HF-EM investigations, the new model can effectively predict the relative subsidence of weak soils without the need to extract samples for moisture content measurement.
Some of the resulting benefits are the preservation of the undisturbed nature of the soil as well as a reduction in the investigation costs and analysis time in the identification of weak (problematic) soils. The accuracy of prediction of the presented model is assessed by conducting relative subsidence tests on a collapsible soil at various initial soil conditions and a good match between the model prediction and experimental results is obtained.
Keywords: Collapsible soil, relative subsidence, dielectric permittivity, moisture content.
98 Multi-Objective Optimization of Gas Turbine Power Cycle
Authors: Mohsen Nikaein
Abstract:
Because of the importance of energy, optimization of power generation systems is necessary. Gas turbine cycles are well suited to fast power generation, but their efficiency is relatively low. To achieve higher efficiencies, measures such as recovery of heat from the exhaust gases in a regenerator, use of an intercooler in a multistage compressor, and steam injection into the combustion chamber are commonly proposed. Even with these components, however, thermodynamic optimization of the gas turbine cycle remains necessary. In this article, multi-objective genetic algorithms are employed for Pareto-approach optimization of the Regenerative-Intercooling Gas Turbine (RIGT) cycle. In multi-objective optimization, a number of conflicting objective functions are optimized simultaneously. The important objective functions considered for optimization are the entropy generation of the RIGT cycle (Ns), derived using exergy analysis and the Gouy-Stodola theorem, the thermal efficiency, and the net output power of the RIGT cycle; these objectives usually conflict with each other. The design variables consist of thermodynamic parameters such as the compressor pressure ratio (Rp), excess air in combustion (EA), turbine inlet temperature (TIT) and inlet air temperature (T0). Single-objective optimization is investigated first, and the Non-dominated Sorting Genetic Algorithm (NSGA-II) is then used for multi-objective optimization. Optimization is performed for two and three objective functions, and the results are compared for the RIGT cycle. To investigate the optimal thermodynamic behavior of two objectives, different sets, each comprising two of the output objectives, are considered individually, and a Pareto front is depicted for each set. The decision variables selected on the basis of a Pareto front yield the best possible combination of the corresponding objective functions.
No point on a Pareto front is superior to the other points on the front, but all of them are superior to any other point. For the three-objective optimization, the results are given in tables.
Keywords: Exergy, Entropy Generation, Brayton Cycle, Design Parameters, Optimization, Genetic Algorithm, Multi-Objective.
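The idea that front points dominate all off-front points can be sketched with a plain Pareto filter (minimization assumed for every objective). This is a generic illustration, not the NSGA-II implementation used in the study.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset, i.e. the points on the Pareto front."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

No point on the returned front dominates another front point, while every dropped point is dominated by at least one front point, which is exactly the superiority relation stated above.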
97 Statistical Modeling of Constituents in Ash Evolved From Pulverized Coal Combustion
Authors: Esam Jassim
Abstract:
Industries using conventional fossil fuels have an interest in better understanding the mechanism of particulate formation during combustion, since it is responsible for the emission of undesired inorganic elements that directly affect atmospheric pollution levels. Fine and ultrafine particulates tend to escape flue gas cleaning devices into the atmosphere. They also preferentially collect on surfaces in power systems, increasing the tendency to corrosion, degrading heat transfer in the thermal unit, and severely affecting human health. These adverse effects are most evident in the regions of the world where coal is the dominant source of energy. This study highlights the behavior of calcium transformation as mineral grains versus organically associated inorganic components during pulverized coal combustion, as well as the influence of the existing form of calcium on the coarse, fine and ultrafine mode formation mechanisms. The impact of two sub-bituminous coals on particle size and calcium composition evolution during combustion is assessed. Three blends, named Blends 1, 2, and 3, are selected according to the ratio of coal A to coal B by weight; the calcium percentage in the original coal increases from Blend 1 to Blend 3. A mathematical model and a new approach to describing constituent distribution are proposed, and the experimental calcium distribution in ash is modeled using the Poisson distribution. A novel parameter, called the elemental index λ, is introduced as a measure of element distribution. Results show that calcium present in the coal as mineral grains has an index of 17 in the ash, whereas organically associated calcium transformed to fly ash is best described by an elemental index λ of 7. As an alkaline-earth element, calcium is considered the element most responsible for boiler deficiency, since it is the major player in the ash slagging process.
The particle size distribution and mineral species of the ash particles are characterized using CCSEM and size-segregated ash characteristics. Conclusions are drawn from the analysis of pulverized coal ash generated from a utility-scale boiler.
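The Poisson fit can be sketched as follows. The maximum-likelihood estimate of the Poisson parameter is simply the sample mean of the per-sample counts; treating that fitted mean as the elemental index λ is our reading of the text, not a formula given in it.

```python
import math

def poisson_pmf(k, lam):
    """P(K = k) for a Poisson distribution with mean lam."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

def fit_lambda(counts):
    """Maximum-likelihood Poisson parameter: the sample mean of the counts."""
    return sum(counts) / len(counts)
```

Counts clustering around 17 would thus fit λ ≈ 17 (mineral-grain calcium), while counts clustering around 7 would fit λ ≈ 7 (organically associated calcium).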
Keywords: Calcium transformation, Coal Combustion, Inorganic Element, Poisson distribution.
96 Meta Model Based EA for Complex Optimization
Authors: Maumita Bhattacharya
Abstract:
Evolutionary algorithms are population-based, stochastic search techniques widely used as efficient global optimizers. However, many real-life optimization problems require finding optimal solutions to complex, high-dimensional, multimodal problems involving computationally very expensive fitness function evaluations; the use of evolutionary algorithms in such problem domains is thus practically prohibitive. An attractive alternative is to build meta-models, i.e., approximations of the actual fitness functions, which are orders of magnitude cheaper to evaluate than the actual function. Many regression and interpolation tools are available to build such meta-models. This paper briefly discusses the architectures and use of such meta-modeling tools in an evolutionary optimization context, and presents two evolutionary algorithm frameworks that use meta-models for fitness function evaluation. The first framework, the Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model [14], reduces computation time by controlled use of meta-models (in this case an approximate model generated by support vector machine regression) to partially replace actual function evaluations with approximate ones. However, the underlying assumption in DAFHEA is that the training samples for the meta-model are generated from a single uniform model, which does not account for uncertain scenarios involving noisy fitness functions. The second model, DAFHEA-II, an enhanced version of the original framework, incorporates a multiple-model-based learning approach for the support vector machine approximator to handle noisy functions [15]. Empirical results obtained by evaluating the frameworks on several benchmark functions demonstrate their efficiency.
Keywords: Meta model, Evolutionary algorithm, Stochastic technique, Fitness function, Optimization, Support vector machine.
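The controlled-use idea behind such frameworks can be caricatured in a few lines: most generations score offspring with a cheap surrogate built from archived true evaluations, and only every few generations call the expensive function. The sketch below is our own toy (a 1-nearest-neighbour surrogate on a 1-D problem), not the SVM-regression machinery of DAFHEA; all parameters are illustrative.

```python
import random

def surrogate_ea(expensive_f, lo, hi, gens=40, pop=12, true_every=4, seed=1):
    """(mu+lambda)-style EA minimizing expensive_f on [lo, hi].  Generations
    not divisible by true_every score offspring with a cheap 1-NN surrogate
    built from all true evaluations archived so far."""
    rng = random.Random(seed)
    archive = [(x, expensive_f(x))
               for x in (rng.uniform(lo, hi) for _ in range(pop))]
    parents = sorted(archive, key=lambda p: p[1])[:pop]
    for g in range(1, gens + 1):
        kids = [min(max(x + rng.gauss(0.0, 0.2), lo), hi) for x, _ in parents]
        if g % true_every == 0:
            scored = [(x, expensive_f(x)) for x in kids]
            archive += scored            # only true values enter the archive
        else:
            # cheap surrogate: fitness of the nearest truly evaluated point
            scored = [(x, min(archive, key=lambda p: abs(p[0] - x))[1])
                      for x in kids]
        parents = sorted(parents + scored, key=lambda p: p[1])[:pop]
    return min(archive, key=lambda p: p[1])  # best *truly* evaluated point
```

Only a quarter of the generations pay for true evaluations, yet the best archived solution still closes in on the optimum, which is the cost saving meta-model-assisted EAs aim for.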
95 Field Study on Thermal Performance of a Green Office in Bangkok, Thailand: A Possibility of Increasing Temperature Set-Points
Authors: T. Sikram, M. Ichinose, R. Sasaki
Abstract:
In the tropics, the indoor thermal environment is usually maintained in comfort all year by cooling. Its performance sometimes differs from the standard or from the original design because of operation, maintenance, and utilization. Field studies of the thermal environment in green buildings are still limited in this region, even as the number of green buildings continues to increase. This study aims to clarify thermal performance and subjective perception in a green building by testing different temperature set-points. A Thai green office was investigated twice, in October 2018 and May 2019. Indoor environment variables (temperature, relative humidity, and air velocity) were collected continuously. The temperature set-point, normally 23 °C, was changed to 24 °C and 25 °C. The study found that this range of set-points produced average room temperatures from 22.7 to 24.6 °C and average relative humidities from 55% to 62%. The thermal environment shifted slightly out of the ASHRAE comfort zone when the set-point was increased. Based on the thermal sensation votes, the feeling-colder vote decreased by 30% and 18% for changes of +1 °C and +2 °C, respectively. The predicted mean vote (PMV) shows that most of the calculated median values were negative, approaching the optimal neutral value (0) when the set-point was 25 °C. The neutral temperature decreased slightly at the warmer set-points. Building-related symptom reports decreased continuously as the temperature became warmer, and symptoms occurring under the cooler condition received more votes than those under the warmer conditions. In sum, for this green office, there is a possibility of raising the temperature set-point by 1 °C (to 24 °C) to reduce cold sensitivity, discomfort, and symptoms.
All results support the policy of adopting a warmer temperature so that this office becomes "a better green building".
Keywords: Thermal environment, green office, temperature set-point, comfort.
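The PMV values discussed above come from Fanger's comfort model as standardized in ISO 7730. A minimal self-contained Python sketch of that calculation is shown below; the input values (metabolic rate 1.1 met, clothing 0.5 clo, 58% RH) are illustrative assumptions, not the study's measured data:

```python
import math

def pmv(ta, tr, vel, rh, met=1.1, clo=0.5, wme=0.0):
    """Predicted Mean Vote (Fanger / ISO 7730 procedure, sketch).

    ta, tr: air and mean radiant temperature (deg C); vel: air speed (m/s);
    rh: relative humidity (%); met: metabolic rate (met); clo: clothing (clo).
    """
    pa = rh * 10.0 * math.exp(16.6536 - 4030.183 / (ta + 235.0))  # vapour pressure, Pa
    icl = 0.155 * clo          # clothing insulation, m2.K/W
    m = met * 58.15            # metabolic rate, W/m2
    w = wme * 58.15            # external work, W/m2
    mw = m - w
    fcl = 1.05 + 0.645 * icl if icl > 0.078 else 1.0 + 1.29 * icl
    hcf = 12.1 * math.sqrt(vel)               # forced-convection coefficient
    taa, tra = ta + 273.0, tr + 273.0
    # iterate for the clothing surface temperature
    tcla = taa + (35.5 - ta) / (3.5 * icl + 0.1)
    p1 = icl * fcl
    p2 = p1 * 3.96
    p3 = p1 * 100.0
    p4 = p1 * taa
    p5 = 308.7 - 0.028 * mw + p2 * (tra / 100.0) ** 4
    xn = xf = tcla / 100.0
    for _ in range(150):
        xf = (xf + xn) / 2.0
        hcn = 2.38 * abs(100.0 * xf - taa) ** 0.25   # natural convection
        hc = max(hcf, hcn)
        xn = (p5 + p4 * hc - p2 * xf ** 4) / (100.0 + p3 * hc)
        if abs(xn - xf) < 0.00015:
            break
    tcl = 100.0 * xn - 273.0
    # heat losses: skin diffusion, sweating, respiration (latent + dry),
    # radiation, convection
    hl1 = 3.05e-3 * (5733.0 - 6.99 * mw - pa)
    hl2 = 0.42 * (mw - 58.15) if mw > 58.15 else 0.0
    hl3 = 1.7e-5 * m * (5867.0 - pa)
    hl4 = 0.0014 * m * (34.0 - ta)
    hl5 = 3.96 * fcl * (xn ** 4 - (tra / 100.0) ** 4)
    hl6 = fcl * hc * (tcl - ta)
    ts = 0.303 * math.exp(-0.036 * m) + 0.028
    return ts * (mw - hl1 - hl2 - hl3 - hl4 - hl5 - hl6)

# a cooler set-point reads colder (more negative PMV) than a warmer one
print(pmv(23.0, 23.0, 0.1, 58.0), pmv(25.0, 25.0, 0.1, 58.0))
```

The sketch reproduces the qualitative finding above: under the same assumed occupant conditions, raising the room temperature toward 25 °C moves the PMV from a negative (cool-side) value toward neutral (0).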
Seismic Protection of Automated Stocker System by Customized Viscous Fluid Dampers
Authors: Y. P. Wang, J. K. Chen, C. H. Lee, G. H. Huang, M. C. Wang, S. W. Chen, Y. T. Kuan, H. C. Lin, C. Y. Huang, W. H. Liang, W. C. Lin, H. C. Yu
Abstract:
The hi-tech industries in the Science Park in southern Taiwan were heavily damaged by a strong earthquake in early 2016. The financial loss in this event was attributed primarily to the automated stocker systems handling fully processed products, and recovery of these systems from the aftermath accounted for most of the production lead time lost. Development of effective means of protecting stockers against earthquakes has therefore become the highest priority for risk minimization and business continuity. This study proposes to mitigate the seismic response of the stockers by introducing viscous fluid dampers between the ceiling and the tops of the stockers, so that each stocker vibrates less violently under a passive control force at its top. A linear damper is considered in this application, with an optimal damping coefficient determined from a preliminary parametric study. The damper is small compared with those adopted for building or bridge applications. Component tests of the dampers were carried out to verify that they meet the design requirements, and shake-table tests were then conducted to validate the proposed scheme under realistic earthquake conditions. Encouraging results were achieved: seismic responses were reduced by up to 60%, and the FOUPs (front-opening unified pods) were prevented from falling off the shelves, as they would if left unprotected. The effectiveness of seismically controlling a stocker with a viscous fluid damper braced against the ceiling has thus been confirmed. This technique has been adopted by Macronix International Co., Ltd. for seismic retrofit of existing stockers, and demonstration projects for other companies, including in the display industry, are underway.
Keywords: Hi-tech industries, seismic protection, automated stocker system, viscous fluid damper.
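The "optimal damping coefficient determined from a preliminary parametric study" can be illustrated on a single-degree-of-freedom idealization of a stocker under harmonic base excitation. All numbers below (mass, natural frequency, excitation amplitude, candidate coefficients) are hypothetical placeholders, not the actual stocker properties; a real parametric study would also check damper force and stroke demands:

```python
import math

# Hypothetical SDOF idealization of a stocker braced to the ceiling by a damper
M = 2000.0                        # effective mass, kg (placeholder)
FN = 1.5                          # bare natural frequency, Hz (placeholder)
K = M * (2 * math.pi * FN) ** 2   # lateral stiffness, N/m
C0 = 2 * 0.02 * math.sqrt(K * M)  # inherent damping (~2% of critical)
A0 = 2.0                          # harmonic base-acceleration amplitude, m/s^2

def worst_rel_disp(c, f_lo=0.5, f_hi=5.0, n=400):
    """Peak steady-state relative displacement over a band of excitation
    frequencies: X(w) = a0 / sqrt((wn^2 - w^2)^2 + (c*w/m)^2)."""
    wn2 = K / M
    worst = 0.0
    for i in range(n + 1):
        w = 2 * math.pi * (f_lo + (f_hi - f_lo) * i / n)
        x = A0 / math.sqrt((wn2 - w * w) ** 2 + (c * w / M) ** 2)
        worst = max(worst, x)
    return worst

baseline = worst_rel_disp(C0)
# parametric sweep over the added damper coefficient
for c_add in (2e3, 5e3, 1e4, 2e4):
    red = 1.0 - worst_rel_disp(C0 + c_add) / baseline
    print(f"c_add = {c_add:8.0f} N.s/m -> response reduction {red:5.1%}")
```

Even this toy sweep shows response reductions well beyond the 60% level reported above once sufficient supplemental damping is added, because the added damper dominates the lightly damped resonant peak.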
Towards an Enhanced Quality of IPTV Media Server Architecture over Software Defined Networking
Authors: Esmeralda Hysenbelliu
Abstract:
This paper presents an enhanced QoE (Quality of Experience) architecture for an SDN-based IPTV media streaming server, covering configuration, control, management, and provisioning for improved delivery of the IPTV service application with low cost, low bandwidth, and high security. A virtual QoE IPTV SDN topology is also given to provide an improved IPTV service based on QoE control and management of multimedia service functionalities. Two service load-balancing systems are enabled inside the OpenFlow SDN controller with high flexibility and efficiency: one based on a load-balancing module and one based on a GeoIP service. Together they greatly improve the end-user's Quality of Experience through optimal management of resources. Through the key functionalities of the OpenFlow SDN controller, this approach overcomes critical QoE metrics for the IPTV service, such as achieving fast zapping (channel switching) times of under 0.1 seconds. It also enables an easy and powerful transcoding system via the FFmpeg encoder, which can customize streaming dimensions, bitrates, latency management, and maximum transfer rates, ensuring delivery of IPTV streaming services (audio and video) with high flexibility, low bandwidth, and the required performance. Unlike other architectures, this QoE IPTV SDN-based media streaming architecture allows channel exchange between several IPTV service providers all over the world. This new functionality brings many benefits, such as increasing the number of TV channels received by end-users at low cost, decreasing the stream failure time (channel failure time under 0.1 seconds), and improving the quality of streaming services.
Keywords: Improved QoE, OpenFlow SDN controller, IPTV service application, softwarization.
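The GeoIP-based load balancing described above amounts to steering each viewer to a nearby, non-overloaded streaming server. The sketch below illustrates that selection logic only; the server names, coordinates, and load figures are hypothetical, and the paper's actual controller modules are not reproduced here:

```python
import math

# hypothetical edge-server pool: (name, latitude, longitude, current load 0..1)
SERVERS = [
    ("edge-tirana", 41.33, 19.82, 0.40),
    ("edge-milan",  45.46,  9.19, 0.75),
    ("edge-vienna", 48.21, 16.37, 0.20),
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def pick_server(client_lat, client_lon, max_load=0.9):
    """GeoIP-style selection: nearest edge server that is not overloaded."""
    candidates = [s for s in SERVERS if s[3] < max_load]
    return min(candidates,
               key=lambda s: haversine_km(client_lat, client_lon, s[1], s[2]))[0]

print(pick_server(41.0, 20.0))   # a viewer near Tirana gets the Tirana edge
```

In an SDN deployment the controller would then install the corresponding flow rules so the viewer's stream is served from the chosen edge; combining proximity with a load threshold is one simple way to realize the two load-balancing criteria named above.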