Search results for: forecast accuracy unemployment rate.
957 Optimization of Multi-Zone Unconventional (Shale) Gas Reservoir Using Hydraulic Fracturing Technique
Authors: F.C. Amadi, G. C. Enyi, G. G. Nasr
Abstract:
Hydraulic fracturing is one of the most important stimulation techniques available to the petroleum engineer to extract hydrocarbons from tight gas sandstones. It allows more oil and gas production in tight reservoirs compared with conventional means. The main aim of the study is to optimize hydraulic fracturing as a technique; for this purpose, a three-zone multilayer formation is considered and fractured contemporaneously. The three zones are designated Zone 1 (upper zone), Zone 2 (middle zone) and Zone 3 (lower zone), and all occur in shale rock. Simulation was performed with the Mfrac integrated software, which offers a variety of 3D fracture options. The simulation yielded an average fracture efficiency of 93.8% for the three respective zones and an increase in the average permeability of the rock system. An average fracture length of 909 ft with an average net (propped) height of 210 ft was achieved. Optimum fracturing results were also achieved, with a maximum fracture width of 0.379 inches at an injection rate of 13.01 bpm and 17995 Mscf of gas production.
Keywords: Hydraulic fracturing, Mfrac, Optimisation, Tight reservoir.
956 Influence of Initial Surface Roughness on Severe Wear Volume for SUS304 Austenitic Stainless Steels
Authors: A. Kawamura, K. Ishida, K. Okada, T. Sato
Abstract:
Simultaneous measurements of the curves for wear versus distance, wear rate versus distance, and coefficient of friction versus distance were performed in situ to distinguish the transition from severe running-in wear to mild wear. The effects of the initial surface roughness on the severe running-in wear volume were investigated. Disk-on-plate friction and wear tests were carried out with SUS304 austenitic stainless steel in contact with itself under repeated dry sliding conditions at room temperature. The wear volume was dependent on the initial surface roughness. The wear volume when the initial surfaces on the plate and disk had dissimilar roughness was lower than that when these surfaces had similar roughness. For the dissimilar roughness, the wear volume decreased with decreasing initial surface roughness and reached a minimum; it stayed nearly constant as the roughness was less than the mean size of the oxide particles.
Keywords: Austenitic stainless steel, initial surface roughness, running-in, severe wear.
955 Surface Roughness Optimization in End Milling Operation with Damper Inserted End Milling Cutters
Authors: Krishna Mohana Rao, G. Ravi Kumar, P. Sowmya
Abstract:
This paper presents a study of the Taguchi design application to optimize surface quality in a damper-inserted end milling operation. Maintaining good surface quality usually involves additional manufacturing cost or loss of productivity. The Taguchi design is an efficient and effective experimental method in which a response variable can be optimized, given various factors, using fewer resources than a factorial design. This study included spindle speed, feed rate, and depth of cut as control factors; different tools of the same specification were used, which introduced tool-condition and dimensional variability. An L9(3^4) orthogonal array was used; ANOVA was carried out to identify the significant factors affecting surface roughness, and the optimal cutting combination was determined by seeking the best surface roughness (response) and signal-to-noise ratio. Finally, confirmation tests verified that the Taguchi design was successful in optimizing the milling parameters for surface roughness.
Keywords: ANOVA, Damper, End Milling, Optimization, Surface roughness, Taguchi design.
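The abstract optimizes on the signal-to-noise ratio but does not reproduce the computation; the sketch below is a minimal illustration of the smaller-is-better S/N statistic conventionally used for surface roughness in Taguchi designs, evaluated on hypothetical Ra replicates. The run labels and values are assumptions, not the authors' data.

```python
import numpy as np

# Smaller-is-better S/N ratio used for surface roughness in Taguchi designs:
# S/N = -10 * log10(mean(y^2)), computed per experimental run.
def sn_smaller_is_better(replicates):
    y = np.asarray(replicates, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical Ra replicates (micrometres) for three of the nine L9 runs.
runs = {
    "run1": [1.82, 1.76, 1.91],
    "run2": [1.24, 1.31, 1.28],
    "run3": [2.05, 2.12, 1.98],
}
for name, ra in runs.items():
    print(f"{name}: S/N = {sn_smaller_is_better(ra):.2f} dB")
# The factor-level combination with the largest S/N ratio is taken as the optimum.
```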
954 Intelligent Agent Approach to the Control of Critical Infrastructure Networks
Authors: James D. Gadze, Niki Pissinou, Kia Makki
Abstract:
In this paper we propose an intelligent agent approach to control the electric power grid at a smaller granularity in order to give it self-healing capabilities. We develop a method using the influence model to transform transmission substations into information processing, analyzing and decision making (intelligent behavior) units. We also develop a wireless communication method to deliver real-time uncorrupted information to an intelligent controller in a power system environment. A combined networking and information theoretic approach is adopted in meeting both the delay and error probability requirements. We use a mobile agent approach in optimizing the achievable information rate vector and in the distribution of rates to users (sensors). We developed the concept and the quantitative tools required in the creation of cooperating semiautonomous subsystems, which puts the electric grid on the path towards an intelligent and self-healing system.
Keywords: Mobile agent, power system operation and control, real time, wireless communication.
953 Stability Analysis of Impulsive Stochastic Fuzzy Cellular Neural Networks with Time-varying Delays and Reaction-diffusion Terms
Authors: Xinhua Zhang, Kelin Li
Abstract:
In this paper, the problem of stability analysis for a class of impulsive stochastic fuzzy neural networks with time-varying delays and reaction-diffusion is considered. By utilizing a suitable Lyapunov-Krasovskii functional, the inequality technique and the stochastic analysis technique, some sufficient conditions ensuring global exponential stability of the equilibrium point for impulsive stochastic fuzzy cellular neural networks with time-varying delays and diffusion are obtained. In particular, an estimate of the exponential convergence rate is also provided, which depends on the system parameters, the diffusion effect and the impulsive disturbance intensity. It is believed that these results are significant and useful for the design and applications of fuzzy neural networks. An example is given to show the effectiveness of the obtained results.
Keywords: Exponential stability, stochastic fuzzy cellular neural networks, time-varying delays, impulses, reaction-diffusion terms.
952 Composition Dependent Formation of Sputtered Co-Cu Film on Cr Under-Layer
Authors: Watcharee Rattanasakulthong, Pichai Sirisangsawang, Supree Pinitsoontorn
Abstract:
Sputtered CoxCu100-x films with different compositions of x = 57.7, 45.8, 25.5, 13.8, 8.8, 7.5 and 1.8 were deposited on a Cr under-layer by RF sputtering. SEM results reveal that the average thicknesses of the Co-Cu film and the Cr under-layer are 92 nm and 22 nm, respectively. All Co-Cu films are composed of Co (FCC) and Cu (FCC) phases in the (111) direction on BCC-Cr (110) under-layers. The magnetic properties, surface roughness and morphology of the Co-Cu films depend on the film composition. The maximum and minimum surface roughnesses of 3.24 nm and 1.16 nm are observed on the Co7.5Cu92.5 and Co45.8Cu54.2 films, respectively. The variation in surface roughness of the films can be attributed to the difference in the agglomeration rates of Co and Cu atoms on the Cr under-layer. The Co57.5Cu42.3, Co45.8Cu54.2 and Co25.5Cu74.5 films show a ferromagnetic phase, whereas the remaining films exhibit a paramagnetic phase at room temperature. The saturation magnetization, remanent magnetization and coercive field of the Co-Cu films on the Cr under-layer increase slightly with increasing Co composition. It can be concluded that the required magnetic properties and surface roughness of the Co-Cu film can be tailored by adjusting the film composition.
Keywords: Co-Cu films, Under-layers, Sputtering, Surface roughness, Magnetic properties, Atomic force microscopy (AFM).
951 A New Controlling Parameter in Design of Above Knee Prosthesis
Abstract:
In this paper, after reviewing some previous studies, a new controlling parameter, in addition to the inertial properties, is introduced in order to optimize the above-knee prosthesis. This controlling parameter enables the prosthesis to act as a multi-behavior system when the amputee faces different environments. This active prosthesis with the new controlling parameter can simplify the control of the prosthesis and reduce the rate of energy consumption in comparison to a recently presented similar prosthesis, the "agonist-antagonist active knee prosthesis". In this paper three models are generated: a passive, an active, and an optimized active prosthesis. A second-order Taylor series is the numerical method used to solve the model equations, and the optimization procedure is a genetic algorithm. Modeling the prosthesis that comprises this new controlling parameter (SEP) during the swing phase yields acceptable results in comparison to the natural behavior of the shank. The results reported in this paper show a maximum deviation of 3.3 degrees between the model shank angle and the natural pattern. The natural gait pattern corresponds to walking at a speed of 81 m/min.
Keywords: Above knee prosthesis, active controlling parameter, ballistic motion, swing phase.
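The abstract names a second-order Taylor series as the numerical solver for the swing-phase equations but does not reproduce them; the fragment below is a sketch only, showing a generic second-order Taylor step for an angle governed by theta'' = f(theta, theta'), with a hypothetical damped-pendulum f standing in for the prosthesis shank model.

```python
import math

# Generic second-order Taylor step for theta'' = f(theta, dtheta):
#   theta(t+h)  ~ theta + h*dtheta + 0.5*h^2*f(theta, dtheta)
#   dtheta(t+h) ~ dtheta + h*f(theta, dtheta)
def taylor2_step(theta, dtheta, f, h):
    acc = f(theta, dtheta)
    theta_next = theta + h * dtheta + 0.5 * h * h * acc
    dtheta_next = dtheta + h * acc
    return theta_next, dtheta_next

# Hypothetical damped-pendulum dynamics standing in for the shank model.
def f(theta, dtheta, g=9.81, length=0.45, c=0.3):
    return -(g / length) * math.sin(theta) - c * dtheta

theta, dtheta, h = math.radians(30.0), 0.0, 0.005
for _ in range(200):  # simulate 1 s of swing
    theta, dtheta = taylor2_step(theta, dtheta, f, h)
print(f"shank angle after 1 s: {math.degrees(theta):.2f} deg")
```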
950 Integration of Big Data to Predict Transportation for Smart Cities
Authors: Sun-Young Jang, Sung-Ah Kim, Dongyoun Shin
Abstract:
Intelligent transportation systems are essential to building smarter cities. Machine-learning-based transportation prediction could be a highly promising approach by making invisible aspects visible. In this context, this research aims to build a prototype model that predicts the transportation network by using big data and machine learning technology. Among urban transportation systems, this research focuses on the bus system. The research problem is that the existing headway model cannot respond to dynamic transportation conditions; thus, bus delays often occur. To overcome this problem, a prediction model is presented to find patterns of bus delay by using machine learning on the following data sets: traffic, weather, and bus status. This research presents a flexible headway model to predict bus delay and analyzes the result. The prototype model is built from real-time bus data. The data are gathered through public data portals and real-time Application Program Interfaces (APIs) provided by the government. These data are the fundamental resources used to organize interval pattern models of bus operations together with traffic environment factors (road speeds, station conditions, weather, and real-time bus operating information). The prototype model was designed with the machine learning tool RapidMiner Studio and tested for bus delay prediction. This research presents experiments to increase the prediction accuracy of bus headway by analyzing urban big data. Big data analysis is important for predicting the future and finding correlations by processing huge amounts of data. Therefore, based on the analysis method, this research represents an effective use of machine learning and urban big data to understand urban dynamics.
Keywords: Big data, bus headway prediction, machine learning, public transportation.
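The RapidMiner workflow itself is not reproduced in the abstract; as a rough, assumed equivalent, the sketch below trains a regressor on synthetic traffic, weather and bus-status features to predict headway deviation. The feature names, data and the choice of a scikit-learn random forest are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 500
# Hypothetical features: road speed (km/h), rainfall (mm/h), waiting passengers, hour of day.
X = np.column_stack([
    rng.uniform(10, 60, n),    # road_speed
    rng.exponential(2.0, n),   # rainfall
    rng.integers(0, 40, n),    # waiting_passengers
    rng.integers(5, 23, n),    # hour
])
# Synthetic target: headway deviation in minutes (slower roads and rain add delay).
y = 0.15 * (60 - X[:, 0]) + 0.8 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE (min):", round(mean_absolute_error(y_te, model.predict(X_te)), 2))
```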
949 A Damage Level Assessment Model for Extra High Voltage Transmission Towers
Authors: Huan-Chieh Chiu, Hung-Shuo Wu, Chien-Hao Wang, Yu-Cheng Yang, Ching-Ya Tseng, Joe-Air Jiang
Abstract:
Power failure resulting from tower collapse due to violent seismic events might bring enormous and inestimable losses. The Chi-Chi earthquake, for example, strongly struck Taiwan and caused huge damage to the power system on September 21, 1999. Nearly 10% of extra high voltage (EHV) transmission towers were damaged in the earthquake. Therefore, seismic hazards to EHV transmission towers should be monitored and evaluated. The ultimate goal of this study is to establish a damage level assessment model for EHV transmission towers. Earthquake data provided by the Taiwan Central Weather Bureau serve as a reference and lay the foundation for subsequent earthquake simulations and analyses. Some parameters related to the damage level at each point of an EHV tower are simulated and analyzed using the data from monitoring stations once an earthquake occurs. Through the Fourier transform, the seismic wave is then analyzed and decomposed into different frequencies, and the data are shown as a response spectrum. With this method, the seismic frequency which damages EHV towers the most is clearly identified. An estimation model is built to determine the damage level caused by a future seismic event. Finally, instead of relying on visual observation done by inspectors, the proposed model can provide a power company with the damage information of a transmission tower. Using the model, the manpower required for visual observation can be reduced, and the accuracy of the damage level estimation can be substantially improved. Such a model is greatly useful for health and construction monitoring because of the advantages of long-term evaluation of structural characteristics and long-term damage detection.
Keywords: Smart grid, EHV transmission tower, response spectrum, damage level monitoring.
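The abstract describes Fourier-transforming monitored ground motion to identify the most damaging frequency; the snippet below is a minimal sketch of that step on a synthetic accelerogram. The signal, sampling rate and dominant component are illustrative, not Chi-Chi data.

```python
import numpy as np

fs = 100.0                      # sampling rate of the monitoring station (Hz, assumed)
t = np.arange(0, 60, 1 / fs)    # 60 s synthetic accelerogram
# Synthetic ground acceleration: a 2.5 Hz dominant component plus noise.
acc = 0.8 * np.sin(2 * np.pi * 2.5 * t) + 0.2 * np.random.default_rng(1).normal(size=t.size)

# One-sided amplitude spectrum via the FFT.
spectrum = np.abs(np.fft.rfft(acc)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

dominant = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(f"dominant seismic frequency: {dominant:.2f} Hz")
# Comparing this frequency with the tower's natural frequencies indicates
# which mode is most likely to be excited and hence the expected damage level.
```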
948 Low Air Velocity Measurement Characteristics: Variation Due to Flow Regime
Authors: A. Pedišius, V. Janušas, A. Bertašienė
Abstract:
The paper presents the relations between air velocity values reproduced by a laser Doppler anemometer (LDA) and an ultrasonic anemometer (UA) and those calculated from flow rate measurements using a gas meter whose calibration uncertainty is ±(0.15 – 0.30)%. The investigation was performed in a channel installed in the aerodynamic facility used as part of the national standard of air velocity. The relations defined in this research confirm that the LDA and UA are the most advantageous instruments for air velocity reproduction. The results affirm that the ultrasonic anemometer is a reliable and favourable instrument for measuring mean velocity or monitoring velocity stability in the range of 0.05 m/s – 10 (15) m/s when the LDA is used. The main aim of this research is to investigate low-velocity regularities, starting from 0.05 m/s, covering the turbulent, laminar and transitional air flow regions. Theoretical and experimental results and a brief analysis of them are given in the paper. Maximum and mean velocity relations for transitional air flow, which has a unique distribution, are presented. Transitional flow, whose characteristics are distinctive and different from those of laminar and turbulent flow, has not yet been analysed experimentally.
Keywords: Laser Doppler anemometer, ultrasonic anemometer, air flow velocities, transitional flow regime, measurement, uncertainty.
947 A New Rigid Fistulectomy Set for Minimally Invasive “Core-Out” Excision of High Anal Fistulas
Authors: Siamak Najarian, Meysam Esmaeili, Mohsen Towliat Kashani
Abstract:
In this article, we propose a new surgical device for the circumferential excision of high anal fistulas in a minimally invasive manner. The new apparatus works on the basis of axially rotating and moving a tubular blade along a fistulous tract straightened using a rigid straight guidewire. As the blade moves along the tract, its sharp circular cutting edge circumferentially separates a tract of approximately 2.25 mm thickness encircling the rigid guidewire. We used the new set to excise two anal fistulas in a 62-year-old male patient: an extrasphincteric type and a long tract with no internal opening. With regard to the results of this test, the new device can be considered a sphincter-preserving mechanism for the treatment of high anal fistulas. Consequently, a major reduction in the risk of fecal incontinence, recurrence rate, convalescence period and patient morbidity may be achieved using the new device for the treatment of fistula-in-ano.
Keywords: Fecal Incontinence, Fistulectomy, High Anal Fistula, Minimally Invasive.
946 Formation of Chemical Compound Layer at the Interface of Initial Substances A and B with Dominance of Diffusion of the A Atoms
Authors: Pavlo Selyshchev, Samuel Akintunde
Abstract:
A theoretical approach is developed to consider the formation of a chemical compound layer at the interface between initial substances A and B due to interfacial interaction and diffusion. The situation considered is one in which the speed of the interfacial interaction is large enough and the diffusion of A-atoms through the AB-layer is much greater than the diffusion of B-atoms. Atoms from the A-layer diffuse toward the B-atoms and form AB-atoms on the surface of the B-layer. B-atoms are assumed to be immobile. The growth kinetics of the AB-layer is described by two differential equations with non-linear coupling, producing a good fit to the experimental data. It is shown that the growth of the thickness of the AB-layer is determined by the dependence of the chemical reaction rate on the reactant concentrations. In special cases the thickness of the AB-layer can grow linearly or parabolically, depending on which process (the interaction or the diffusion) controls the growth. The thickness of the AB-layer as a function of time is obtained. The moment of time (transition point) at which linear growth changes to parabolic is found.
Keywords: Phase formation, Binary systems, Interfacial Reaction, Diffusion, Compound layers, Growth kinetics.
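The paper's coupled equations are not given in the abstract; the sketch below instead integrates a commonly used mixed-control growth law, dX/dt = k_r k_d / (k_d + k_r X), which is reaction-limited (linear growth) for thin layers and diffusion-limited (parabolic growth) for thick ones. The rate constants are arbitrary illustrative values, not the authors' parameters.

```python
import numpy as np

# Mixed-control growth law: dX/dt = k_r * k_d / (k_d + k_r * X)
# Small X  -> dX/dt ~ k_r      (reaction-controlled, linear growth)
# Large X  -> dX/dt ~ k_d / X  (diffusion-controlled, parabolic growth)
k_r = 1.0e-9   # interfacial reaction constant (m/s), illustrative
k_d = 1.0e-16  # diffusion constant (m^2/s), illustrative

dt, steps = 1.0, 200_000
X = 0.0
for _ in range(steps):                       # explicit Euler integration
    X += dt * k_r * k_d / (k_d + k_r * X)

# The transition point is roughly where the two limiting rates are equal: X* = k_d / k_r.
print(f"transition thickness ~ {k_d / k_r:.2e} m, thickness after {steps*dt:.0f} s: {X:.2e} m")
```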
945 Design Optimization of Cutting Parameters when Turning Inconel 718 with Cermet Inserts
Authors: M. Aruna, V. Dhanalaksmi
Abstract:
Inconel 718, a nickel-based superalloy, is an extensively used alloy, accounting for about 50% by weight of the materials used in an aerospace engine, mainly in the gas turbine compartment. This is owing to its outstanding strength and oxidation resistance at elevated temperatures in excess of 550 °C. Machining is a requisite operation in the aircraft industries for the manufacture of components, especially for gas turbines. This paper is concerned with optimization of the surface roughness when turning Inconel 718 with cermet inserts. Optimization of the turning operation is very useful to reduce the cost and time of machining. The approach is based on the Response Surface Method (RSM). In this work, second-order quadratic models are developed for surface roughness, considering the cutting speed, feed rate and depth of cut as the cutting parameters, using a central composite design. The developed models are used to determine the optimum machining parameters. These optimized machining parameters are validated experimentally, and it is observed that the response values are in reasonable agreement with the predicted values.
Keywords: Inconel 718, Optimization, Response Surface Methodology (RSM), Surface roughness.
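The fitted coefficients are not reported in the abstract; the sketch below shows, with invented data, how a second-order (quadratic) response-surface model for surface roughness in cutting speed, feed rate and depth of cut can be fitted by least squares.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20
speed = rng.uniform(40, 80, n)     # cutting speed (m/min), assumed range
feed = rng.uniform(0.05, 0.15, n)  # feed rate (mm/rev), assumed range
doc = rng.uniform(0.25, 0.75, n)   # depth of cut (mm), assumed range

# Full second-order model: intercept, linear, squared and interaction terms.
def design_matrix(v, f, d):
    return np.column_stack([
        np.ones_like(v), v, f, d,
        v**2, f**2, d**2,
        v*f, v*d, f*d,
    ])

# Synthetic roughness observations standing in for the measured Ra values.
Ra = 1.5 - 0.01*speed + 8.0*feed + 0.4*doc + 0.02*speed*feed + rng.normal(0, 0.05, n)

X = design_matrix(speed, feed, doc)
coeffs, *_ = np.linalg.lstsq(X, Ra, rcond=None)
print("fitted quadratic coefficients:", np.round(coeffs, 4))
# The fitted surface can then be minimised to locate the optimum cutting parameters.
```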
944 Development of High Performance Clarification System for FBR Dissolver Liquor
Authors: M. Takeuchi, T. Kitagaki, Y. Noguchi, T. Washiya
Abstract:
A high performance clarification system is discussed for the advanced aqueous reprocessing of FBR spent fuel. Dissolver residue causes trouble in the operation of a reprocessing plant. In this study, a new clarification system based on a hybrid of centrifugation and filtration was proposed to achieve a high separation capability for all components of the insoluble sludge. Clarification tests with simulated solid species were carried out to evaluate the clarification performance using a small-scale test apparatus comprising a centrifuge and a filter unit. The effect of solid-species density on the collection efficiency was mainly evaluated in the centrifugal clarification test. In the filtration test using a ceramic filter with a pore size of 0.2 μm, on the other hand, the permeability and filtration rate were evaluated in addition to the filtration efficiency. As a result, the collection efficiency of solid species in the new clarification system was estimated to be nearly 100%. In conclusion, high clarification performance for the dissolver liquor can be achieved by the hybrid centrifuge and filtration system.
Keywords: Centrifuge, Clarification, FBR dissolver liquor, Filtration.
943 A Numerical Study on the Effects of N2 Dilution on the Flame Structure and Temperature Distribution of Swirl Diffusion Flames
Authors: Yasaman Tohidi, Shidvash Vakilipour, Saeed Ebadi Tavallaee, Shahin Vakilipoor Takaloo, Hossein Amiri
Abstract:
Numerical modeling is performed to study the effects of N2 addition to the fuel stream on the flame structure and temperature distribution of methane-air swirl diffusion flames with different swirl intensities. The open-source Field Operation and Manipulation (OpenFOAM) package has been utilized as the computational tool. A flamelet approach along with a modified k-ε model is employed to model the flame characteristics. The results indicate that the presence of N2 in the fuel stream leads to a reduction of the flame temperature. With increasing swirl intensity, the flame structure changes significantly. The flame has a conical shape at low swirl intensity; however, it has an hourglass shape with a shorter length at high swirl intensity. N2 dilution decreases the flame length at all swirl intensities; however, the rate of reduction is more noticeable at low swirl intensity.
Keywords: Swirl diffusion flame, N2 dilution, OpenFOAM, Swirl intensity.
942 Construction of Water Electrolyzer for Single Slice O2/H2 Polymer Electrolyte Membrane Fuel Cell
Authors: May Zin Lwin, Mya Mya Oo
Abstract:
In the first part of the research work, an electrolyzer (10.16 cm in diameter and 24.13 cm in height) to produce hydrogen and oxygen was constructed for a single-slice O2/H2 fuel cell using a cation exchange membrane. The electrolyzer performance was tested with 23% NaOH, 30% NaOH, 30% KOH and 35% KOH electrolyte solutions with a current input of 4 A and 2.84 V from the rectifier. The volumetric hydrogen production rates were 0.159 cm3/sec, 0.155 cm3/sec, 0.169 cm3/sec and 0.163 cm3/sec from the 23% NaOH, 30% NaOH, 30% KOH and 35% KOH solutions, respectively. The volumetric oxygen production rates were 0.212 cm3/sec, 0.201 cm3/sec, 0.227 cm3/sec and 0.219 cm3/sec from the 23% NaOH, 30% NaOH, 30% KOH and 35% KOH solutions (1.5 L), respectively. Although increased electrolyte concentrations were tested, the gas rate did not change significantly. Therefore, the inexpensive 23% NaOH solution was chosen as the electrolyte in the electrolyzer. In the second part of the research work, graphite serpentine flow plates, fiberglass end plates, stainless steel screen electrodes and silicone rubbers were made to assemble the single-slice O2/H2 polymer electrolyte membrane fuel cell (PEMFC).
Keywords: electrolyzer, electrolyte solution, fuel cell, rectifier
941 Factors of Effective Business Software Systems Development and Enhancement Projects Work Effort Estimation
Authors: Beata Czarnacka-Chrobot
Abstract:
The majority of Business Software Systems (BSS) Development and Enhancement Projects (D&EP) fail to meet their effectiveness criteria, which leads to considerable financial losses. One of the fundamental reasons for such projects' exceptionally low success rate is improperly derived estimates of their costs and time. In the case of BSS D&EP these attributes are determined by the work effort; meanwhile, reliable and objective effort estimation still appears to be a great challenge to software engineering. Thus this paper is aimed at presenting the most important synthetic conclusions coming from the author's own studies concerning the main factors of effective BSS D&EP work effort estimation. Thanks to rational investment decisions made on the basis of reliable and objective criteria, it is possible to reduce the losses caused not only by abandoned projects but also by large-scale overruns of the time and costs of BSS D&EP execution.
Keywords: Benchmarking data, business software systems development and enhancement projects, effort estimation, software engineering economics, software functional size measurement.
940 Process Optimisation for Internal Cylindrical Rough Turning of Nickel Alloy 625 Weld Overlay
Authors: Lydia Chan, Islam Shyha, Dale Dreyer, John Hamilton, Phil Hackney
Abstract:
Nickel-based superalloys are generally known to be difficult to cut due to their strength, low thermal conductivity, and high work hardening tendency. Superalloys such as alloy 625 are often used in the oil and gas industry as a surfacing material to provide wear and corrosion resistance to components. The material is typically applied onto a metallic substrate through weld overlay cladding, an arc welding technique. Cladded surfaces are always rugged and carry a tough skin; this creates further difficulties for the machining process. The present work utilised design of experiments to optimise internal cylindrical rough turning of weld overlay surfaces. An L27 orthogonal array was used to assess the effects of the four selected key process variables: cutting insert, depth of cut, feed rate, and cutting speed. The optimal cutting conditions were determined based on productivity and the level of tool wear.
Keywords: Cylindrical turning, nickel superalloy, turning of overlay, weld overlay.
939 Control Strategy of Solar Thermal Cooling System under the Indonesia Climate
Authors: Budihardjo Sarwo Sastrosudiro, Arnas Lubis, Muhammad Idrus Alhamid, Nasruddin Jusuf
Abstract:
A solar thermal cooling system was installed in the Mechanical Research Center (MRC) building located at Universitas Indonesia, Depok, Indonesia. It is the first cooling system in Indonesia that utilizes solar energy as an energy input combined with natural gas; therefore, the control system must be appropriate for the local climate. In order to stabilize the cooling capacity and to maximize the use of solar energy, the system applies several controllers. Constant-flow-rate and on/off controllers are applied to the hot water, chilled water and cooling water pumps. The hot water is circulated by its pump when the solar radiation exceeds 400 W/m2, while the chilled water is circulated continually and its temperature is kept constant at 7 °C by the absorption chiller. The cooling water is also circulated continually until the outlet temperature of the cooling tower falls below 27 °C. Furthermore, a three-way valve is used to control the hot water used to generate vapor in the absorption chiller. The system performance obtained with this control scheme is shown in the results of this study.
Keywords: Absorption chiller, control system, solar cooling, solar energy.
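The on/off set points are stated in prose above; the sketch below simply restates that control logic in code form. The function and variable names are hypothetical, not the plant's actual interface.

```python
# Minimal sketch of the on/off control logic described above (interface names are hypothetical).
SOLAR_THRESHOLD_W_M2 = 400.0     # hot-water pump runs above this irradiance
CHILLED_WATER_SETPOINT_C = 7.0   # held constant by the absorption chiller
COOLING_TOWER_LIMIT_C = 27.0     # cooling-water pump runs until outlet drops below this

def control_step(solar_irradiance, chilled_water_temp, cooling_tower_outlet):
    """Return the on/off state of each pump for the current sensor readings."""
    return {
        "hot_water_pump": solar_irradiance > SOLAR_THRESHOLD_W_M2,
        "chilled_water_pump": True,  # circulated continually
        "chiller_setpoint_ok": abs(chilled_water_temp - CHILLED_WATER_SETPOINT_C) < 0.5,
        "cooling_water_pump": cooling_tower_outlet >= COOLING_TOWER_LIMIT_C,
    }

print(control_step(solar_irradiance=520.0, chilled_water_temp=7.2, cooling_tower_outlet=29.0))
```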
938 Ethylene Epoxidation in a Low-Temperature Parallel Plate Dielectric Barrier Discharge System: Effects of Ethylene Feed Position and O2/C2H4 Feed Molar Ratio
Authors: Bunphot Paosombat, Thitiporn Suttikul, Sumaeth Chavadej
Abstract:
The effects of ethylene (C2H4) feed position and O2/C2H4 feed molar ratio on ethylene epoxidation in a parallel dielectric barrier discharge (DBD) were studied. The results showed that an ethylene feed position fraction of 0.5 and an O2/C2H4 feed molar ratio of 0.2:1 gave the highest EO selectivity of 34.3% and the highest EO yield of 5.28%, with low power consumptions of 2.11 × 10^-16 Ws/molecule of ethylene converted and 6.34 × 10^-16 Ws/molecule of EO produced, when the DBD system was operated under the best conditions: an applied voltage of 19 kV, an input frequency of 500 Hz and a total feed flow rate of 50 cm3/min. The separate ethylene feed system provided much higher epoxidation activity compared with the mixed feed system, which gave an EO selectivity of 15.5%, an EO yield of 2.1% and a power consumption per EO molecule produced of 7.7 × 10^-16 Ws/molecule.
Keywords: Dielectric Barrier Discharge, C2H4 Feed Position, Epoxidation, Ethylene Oxide.
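Using the conventional definitions of selectivity, yield and conversion for a plasma reactor (assumed here; the paper's exact definitions are not quoted in the abstract), the reported best-case figures can be related as follows.

```python
# Conventional plasma-reactor relations (assumed definitions, not quoted from the paper):
#   yield = selectivity * conversion
#   energy per EO molecule = energy per C2H4 molecule converted / selectivity
selectivity = 0.343          # reported best-case EO selectivity
eo_yield = 0.0528            # reported best-case EO yield
e_per_converted = 2.11e-16   # Ws per molecule of C2H4 converted (reported)

conversion = eo_yield / selectivity
e_per_eo = e_per_converted / selectivity
print(f"implied C2H4 conversion ~ {conversion:.1%}")          # ~15.4%
print(f"implied energy per EO molecule ~ {e_per_eo:.2e} Ws")  # close to the reported 6.34e-16 Ws
```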
937 Natural Radioactivity Measurements of Basalt Rocks in Sidakan District Northeastern of Kurdistan Region-Iraq
Authors: Ali A. Ahmed, Mohammed I. Hussein
Abstract:
The amounts of radioactivity in igneous rocks have been investigated; samples were collected from a total of eight basalt rock types in the northeastern Kurdistan region of Iraq. The activity concentrations of the 226Ra (238U) series, the 228Ac (232Th) series, 40K and 137Cs were measured using planar HPGe and NaI(Tl) detectors. Across the study area, the radium equivalent activities Raeq of the samples under investigation were found to be in the range of 22.16 to 77.31 Bq/kg with an average value of 44.8 Bq/kg; this value is much below the internationally accepted limit of 370 Bq/kg. To estimate the health effects of this natural radioactive composition, the average values of the absorbed gamma dose rate D (55 nGy/h), the indoor and outdoor annual effective dose rates Eied (0.11 mSv/y) and Eoed (0.03 mSv/y), the external hazard index Hex (0.138), the internal hazard index Hin (0.154), and the representative level index Iγr (0.386) have been calculated and found to be lower than the worldwide average values.
Keywords: Absorbed dose, activity concentration, igneous rocks, HPGe, NaI(Tl), natural radioactivity.
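The abstract quotes results but not formulas; the sketch below uses the widely cited UNSCEAR-style expressions for radium equivalent activity and the hazard indices, which are assumed, not quoted from the paper, together with illustrative activity concentrations.

```python
# Widely used expressions (UNSCEAR-style); assumed, not quoted from the paper.
def radium_equivalent(a_ra, a_th, a_k):
    """Raeq in Bq/kg from 226Ra, 232Th and 40K activity concentrations (Bq/kg)."""
    return a_ra + 1.43 * a_th + 0.077 * a_k

def external_hazard(a_ra, a_th, a_k):
    return a_ra / 370.0 + a_th / 259.0 + a_k / 4810.0

def internal_hazard(a_ra, a_th, a_k):
    return a_ra / 185.0 + a_th / 259.0 + a_k / 4810.0

# Illustrative basalt sample (Bq/kg), not the paper's measured values.
a_ra, a_th, a_k = 12.0, 10.0, 180.0
raeq = radium_equivalent(a_ra, a_th, a_k)
print(f"Raeq = {raeq:.1f} Bq/kg (limit 370), "
      f"Hex = {external_hazard(a_ra, a_th, a_k):.3f}, "
      f"Hin = {internal_hazard(a_ra, a_th, a_k):.3f}")
```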
936 Evaluation Process for the Hardware Safety Integrity Level
Authors: Sung Kyu Kim, Yong Soo Kim
Abstract:
Safety instrumented systems (SISs) are becoming increasingly complex and the proportion of programmable electronic parts is growing. The IEC 61508 global standard was established to ensure the functional safety of SISs, but it was expressed in highly macroscopic terms. This study introduces an evaluation process for hardware safety integrity levels through failure modes, effects, and diagnostic analysis (FMEDA). FMEDA is widely used to evaluate safety levels, and it provides the information on failure rates and failure mode distributions necessary to calculate a diagnostic coverage factor for a given component. In our evaluation process, the components of the SIS subsystem are first defined in terms of failure modes and effects. Then, the failure rate and failure mechanism distribution are assigned to each component. The safety mode and detectability of each failure mode are determined for each component. Finally, the hardware safety integrity level is evaluated based on the calculated results.
Keywords: Safety instrumented system; Safety integrity level; Failure modes, effects, and diagnostic analysis; IEC 61508.
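IEC 61508 defines the safe failure fraction and diagnostic coverage in terms of the failure-rate categories that an FMEDA produces; the sketch below computes these standard quantities for an illustrative set of failure rates. The numbers and the simplified SIL lookup (type-B subsystem, hardware fault tolerance 0) are assumptions for illustration only.

```python
# IEC 61508-style quantities derived from FMEDA failure-rate categories (per hour):
#   l_sd / l_su : safe detected / undetected failure rates
#   l_dd / l_du : dangerous detected / undetected failure rates
def safe_failure_fraction(l_sd, l_su, l_dd, l_du):
    total = l_sd + l_su + l_dd + l_du
    return (l_sd + l_su + l_dd) / total

def diagnostic_coverage(l_dd, l_du):
    return l_dd / (l_dd + l_du)

# Simplified architectural-constraint lookup for a type-B subsystem with
# hardware fault tolerance 0 (assumption for illustration only).
def max_sil_type_b_hft0(sff):
    if sff < 0.60:
        return 0
    if sff < 0.90:
        return 1
    if sff < 0.99:
        return 2
    return 3

# Illustrative FMEDA totals (failures per hour), not from the paper.
l_sd, l_su, l_dd, l_du = 2.0e-7, 1.0e-7, 4.0e-7, 0.5e-7
sff = safe_failure_fraction(l_sd, l_su, l_dd, l_du)
print(f"SFF = {sff:.1%}, DC = {diagnostic_coverage(l_dd, l_du):.1%}, "
      f"max SIL (type B, HFT 0) = {max_sil_type_b_hft0(sff)}")
```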
935 Variable Step-Size Affine Projection Algorithm With a Weighted and Regularized Projection Matrix
Authors: Tao Dai, Andy Adler, Behnam Shahrrava
Abstract:
This paper presents a forgetting factor scheme for variable step-size affine projection algorithms (APA). The proposed scheme uses a forgetting-processed input matrix as the projection matrix of the pseudo-inverse to estimate the system deviation. This method introduces temporal weights into the projection matrix, which is typically a better model of the real error's behavior than homogeneous temporal weights. The regularization overcomes the ill-conditioning introduced by both the forgetting process and the increasing size of the input matrix. The algorithm is tested in independent trials with coloured input signals and various parameter combinations. Results show that the proposed algorithm is superior in terms of convergence rate and misadjustment compared to existing algorithms. As a special case, a variable step-size NLMS with a forgetting factor is also presented in this paper.
Keywords: Adaptive signal processing, affine projection algorithms, variable step-size adaptive algorithms, regularization.
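The authors' exact variable step-size rule is not given in the abstract; the sketch below is only a generic regularized affine projection update with exponentially weighted (forgetting) rows, applied to a toy system-identification problem, to illustrate the kind of update the paper modifies.

```python
import numpy as np

def apa_forgetting(x, d, order=8, proj=4, mu=0.5, lam=0.9, delta=1e-3):
    """Generic regularized APA with exponentially weighted (forgetting) projection rows.
    Illustrative sketch only, not the authors' exact variable step-size scheme."""
    n = len(x)
    w = np.zeros(order)
    weights = lam ** np.arange(proj)          # temporal forgetting weights
    for k in range(order + proj, n):
        # Stack the last `proj` input vectors as rows of the data matrix A.
        A = np.array([x[k - i - np.arange(order)] for i in range(proj)])
        dk = d[k - np.arange(proj)]
        Aw = weights[:, None] * A             # forgetting-processed input matrix
        e = dk - A @ w
        # Regularized pseudo-inverse step: (A_w A_w^T + delta I)^-1
        gain = Aw.T @ np.linalg.solve(Aw @ Aw.T + delta * np.eye(proj), weights * e)
        w = w + mu * gain
    return w

# Identify an unknown FIR system from coloured input (synthetic data).
rng = np.random.default_rng(3)
x = np.convolve(rng.normal(size=5000), [1.0, 0.8, 0.3], mode="same")  # coloured input
h_true = np.array([0.6, -0.4, 0.2, 0.1, 0.05, 0.0, 0.0, 0.0])
d = np.convolve(x, h_true, mode="full")[: len(x)] + 0.01 * rng.normal(size=len(x))
print("estimated taps:", np.round(apa_forgetting(x, d), 3))
```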
934 Improving Fake News Detection Using K-means and Support Vector Machine Approaches
Authors: Kasra Majbouri Yazdi, Adel Majbouri Yazdi, Saeid Khodayi, Jingyu Hou, Wanlei Zhou, Saeed Saedy
Abstract:
Fake news and false information are big challenges for all types of media, especially social media. There is a lot of false information, as well as fake likes, views and duplicated accounts, as big social networks such as Facebook and Twitter have admitted. Most information appearing on social media is doubtful and in some cases misleading. It needs to be detected as soon as possible to avoid a negative impact on society. The dimensions of fake news datasets are growing rapidly, so to obtain better detection of false information with less computation time and complexity, the dimensions need to be reduced. One of the best techniques for reducing data size is the feature selection method. The aim of this technique is to choose a feature subset from the original set to improve the classification performance. In this paper, a feature selection method is proposed with the integration of K-means clustering and Support Vector Machine (SVM) approaches, which works in four steps. First, the similarities between all features are calculated. Then, features are divided into several clusters. Next, the final feature set is selected from all clusters, and finally, fake news is classified based on the final feature subset using the SVM method. The proposed method was evaluated by comparing its performance with other state-of-the-art methods on several specific benchmark datasets, and the outcome showed better classification of false information for our work. The detection performance was improved in two aspects: on the one hand, the detection runtime decreased, and on the other hand, the classification accuracy increased because of the elimination of redundant features and the reduction of dataset dimensions.
Keywords: Fake news detection, feature selection, support vector machine, K-means clustering, machine learning, social media.
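The four steps can be sketched in scikit-learn terms as below; the placeholder dataset, the cluster count and the closest-to-centroid rule for picking a representative feature are assumptions, since the abstract does not specify them.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

# Placeholder data standing in for a fake-news feature matrix (documents x features).
X, y = make_classification(n_samples=600, n_features=60, n_informative=12, random_state=0)

# Steps 1-2: measure feature similarity implicitly by clustering the feature vectors.
k = 15
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X.T)

# Step 3: keep one representative feature per cluster (the one closest to its centroid).
selected = []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(X.T[members] - km.cluster_centers_[c], axis=1)
    selected.append(members[np.argmin(dists)])

# Step 4: classify fake vs. real news with an SVM on the reduced feature set.
X_tr, X_te, y_tr, y_te = train_test_split(X[:, selected], y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"accuracy with {k} of 60 features: {clf.score(X_te, y_te):.3f}")
```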
933 Implementation of Channel Estimation and Timing Synchronization Algorithms for MIMO-OFDM System Using NI USRP 2920
Authors: Ali Beydoun, Hamzé H. Alaeddine
Abstract:
The MIMO-OFDM communication system presents a key solution for the next generation of mobile communication due to its high spectral efficiency, high data rate and robustness against multi-path fading channels. However, a MIMO-OFDM system requires perfect knowledge of the channel state information and good synchronization between the transmitter and the receiver to achieve the expected performance. Recently, we have proposed two algorithms for channel estimation and timing synchronization with good performance and very low implementation complexity compared to those proposed in the literature. In order to validate and evaluate the efficiency of these algorithms in real environments, this paper presents in detail the implementation of a 2 × 2 MIMO-OFDM system based on LabVIEW and the USRP 2920. Implementation results show good agreement with the simulation results under different configuration parameters.
Keywords: MIMO-OFDM system, timing synchronization, channel estimation, STBC, USRP 2920.
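The proposed low-complexity estimator itself is not described in the abstract; as a point of reference only, the snippet below shows the textbook least-squares pilot-based channel estimate for one subcarrier of a 2 × 2 link, with a synthetic channel and orthogonal pilots.

```python
import numpy as np

rng = np.random.default_rng(4)

# One subcarrier of a 2x2 MIMO-OFDM link: Y = H X + N, with X a known 2x2 pilot matrix
# (columns = pilot vectors sent over two OFDM symbols).
H = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)  # true channel
X = np.array([[1, 1], [1, -1]], dtype=complex)                             # orthogonal pilots
N = 0.05 * (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
Y = H @ X + N

# Textbook least-squares estimate: H_hat = Y X^H (X X^H)^-1
H_hat = Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T)
print("estimation error (Frobenius):", np.linalg.norm(H - H_hat))
```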
932 An Adaptive Hand-Talking System for the Hearing Impaired
Authors: Zhou Yu, Jiang Feng
Abstract:
An adaptive Chinese hand-talking system is presented in this paper. By analyzing three data collection strategies for new users, an adaptation framework including supervised and unsupervised adaptation methods is proposed. For supervised adaptation, affinity propagation (AP) is used to extract exemplar subsets, and enhanced maximum a posteriori / vector field smoothing (eMAP/VFS) is proposed to pool the adaptation data among different models. For unsupervised adaptation, polynomial segment models (PSMs) are used to help hidden Markov models (HMMs) accurately label the unlabeled data; then the "labeled" data together with signer-independent models are input to the MAP algorithm to generate signer-adapted models. Experimental results show that the proposed framework can perform both supervised adaptation with a small amount of labeled data and unsupervised adaptation with a large amount of unlabeled data to tailor the original models, and both achieve improvements in recognition rate.
Keywords: sign language recognition, signer adaptation, eMAP/VFS, polynomial segment model.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1759931 High Perfomance Communication Protocol for Wireless Ad-Hoc Sensor Networks
Authors: Toshihiko Sasama, Takahide Yanaka, Kazunori Sugahara, Hiroshi Masuyama
Abstract:
In order to monitor traffic traversal, sensors can be deployed to perform collaborative target detection. Such a sensor network achieves a certain level of detection performance with associated costs of deployment and routing protocol. This paper addresses these two points, sensor deployment and routing algorithm, in the situation where the absolute quantity of sensors or the total energy becomes insufficient. The discussion on the best deployment system concluded that two kinds of deployments, Normal and Power-law distributions, provide coverage durations 6 and 3 times longer, respectively, than a Random distribution. Routing algorithms to achieve good performance in each deployment system were also discussed. This discussion concluded that, in place of the traditional algorithm, a new algorithm can extend the coverage duration by 4 times in a Normal distribution, in the circumstance where every deployed sensor operates as a binary model.
Keywords: binary sensor, coverage rate, power energy consumption, routing algorithm, sensor deployment.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1376930 Phosphine Mortality Estimation for Simulation of Controlling Pest of Stored Grain: Lesser Grain Borer (Rhyzopertha dominica)
Authors: Mingren Shi, Michael Renton
Abstract:
There is a world-wide need for the development of sustainable management strategies to control pest infestation and the development of phosphine (PH3) resistance in the lesser grain borer (Rhyzopertha dominica). Computer simulation models can provide a relatively fast, safe and inexpensive way to weigh the merits of various management options. However, the usefulness of simulation models relies on the accurate estimation of important model parameters, such as mortality. Concentration and time of exposure are both important in determining mortality in response to a toxic agent. Recent research indicated the existence of two resistance phenotypes in R. dominica in Australia, weak and strong, and revealed that the presence of resistance alleles at two loci confers strong resistance, thus motivating the construction of a two-locus model of resistance. Experimental data sets on purified pest strains, each corresponding to a single genotype of our two-locus model, were also available. Hence it became possible to explicitly include the mortalities of the different genotypes in the model. In this paper we describe how we used two generalized linear models (GLMs), probit and logistic models, to fit the available experimental data sets. We used a direct algebraic approach, the generalized inverse matrix technique, rather than the traditional maximum likelihood estimation, to estimate the model parameters. The results show that both probit and logistic models fit the data sets well, but the former is much better in terms of small least-squares (numerical) errors. Meanwhile, the generalized inverse matrix technique achieved accuracy similar to that of maximum likelihood estimation, but is less time-consuming and less computationally demanding.
Keywords: mortality estimation, probit models, logistic model, generalized inverse matrix approach, pest control simulation
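The exact design matrix is not given in the abstract; the sketch below illustrates the direct algebraic idea with a logistic dose-response model solved through the pseudo-inverse (a generalized inverse), using invented concentration-time-mortality data and assumed log covariates.

```python
import numpy as np

# Direct algebraic fit of a logistic dose-response model via the pseudo-inverse
# (generalized inverse); covariates ln(concentration) and ln(time) are assumptions.
conc = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 0.05, 0.1, 0.2, 0.4, 0.8])   # mg/L, illustrative
time = np.array([24, 24, 24, 24, 24, 48, 48, 48, 48, 48])               # hours, illustrative
mortality = np.array([0.05, 0.15, 0.42, 0.71, 0.93, 0.12, 0.33, 0.65, 0.88, 0.98])

# Logit transform of observed mortality (clipped away from 0 and 1).
p = np.clip(mortality, 1e-3, 1 - 1e-3)
logit = np.log(p / (1 - p))

A = np.column_stack([np.ones_like(conc), np.log(conc), np.log(time)])  # design matrix
beta = np.linalg.pinv(A) @ logit          # generalized-inverse (least-squares) solution

def predicted_mortality(c, t):
    eta = beta[0] + beta[1] * np.log(c) + beta[2] * np.log(t)
    return 1.0 / (1.0 + np.exp(-eta))

print("coefficients:", np.round(beta, 3))
print("predicted mortality at 0.3 mg/L, 36 h:", round(float(predicted_mortality(0.3, 36)), 3))
```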
929 Geostatistical Analysis and Mapping of Ground-level Ozone in a Medium Sized Urban Area
Authors: F. J. Moral García, P. Valiente González, F. López Rodríguez
Abstract:
Ground-level tropospheric ozone is one of the air pollutants of most concern. It is mainly produced by photochemical processes involving nitrogen oxides and volatile organic compounds in the lower parts of the atmosphere. Ozone levels become particularly high in regions close to high ozone-precursor emissions and during summer, when stagnant meteorological conditions with high insolation and high temperatures are common. In this work, some results of a study of urban ozone distribution patterns in the city of Badajoz, which is the largest and most industrialized city in the Extremadura region (southwest Spain), are shown. Fourteen sampling campaigns, at least one per month, were carried out to measure ambient air ozone concentrations with an automatic portable analyzer, during periods selected according to conditions favourable to ozone production. The measured ozone data were then analyzed using geostatistical techniques to evaluate the ozone distribution over the city. First, the exploratory analysis revealed that the data were normally distributed, which is a desirable property for the subsequent stages of the geostatistical study. Second, in the structural analysis, theoretical spherical models provided the best fit for all monthly experimental variograms. The parameters of these variograms (sill, range and nugget) revealed that the maximum distance of spatial dependence is between 302 and 790 m and that the variable, air ozone concentration, is not evenly distributed over short distances. Finally, predictive ozone maps were derived for all points of the experimental study area by means of geostatistical algorithms (kriging). High prediction accuracy was obtained in all cases, as cross-validation showed. Useful information for hazard assessment was also provided when probability maps, based on kriging interpolation and the kriging standard deviation, were produced.
Keywords: Kriging, map, tropospheric ozone, variogram.
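The spherical model fitted to the monthly variograms has a standard closed form; the function below evaluates it. The nugget, sill and range values in the example are placeholders chosen within the 302-790 m spread of ranges reported above.

```python
import numpy as np

def spherical_variogram(h, nugget, sill, rng_m):
    """Standard spherical variogram model: gamma(h) for lag distance h (metres)."""
    h = np.asarray(h, dtype=float)
    partial = sill - nugget
    gamma = np.where(
        h <= rng_m,
        nugget + partial * (1.5 * h / rng_m - 0.5 * (h / rng_m) ** 3),
        sill,
    )
    return np.where(h == 0, 0.0, gamma)  # gamma(0) = 0 by convention

# Placeholder parameters within the 302-790 m range of spatial dependence reported above.
lags = np.array([0, 50, 150, 300, 500, 800])
print(np.round(spherical_variogram(lags, nugget=5.0, sill=60.0, rng_m=500.0), 2))
```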
928 Precipitation Intensity-Duration Based Threshold Analysis for Initiation of Landslides in Upper Alaknanda Valley
Authors: Soumiya Bhattacharjee, P. K. Champati Ray, Shovan L. Chattoraj, Mrinmoy Dhara
Abstract:
The entire Himalayan range is globally renowned for rainfall-induced landslides. The prime focus of the study is to determine a rainfall-based threshold for the initiation of landslides that can be used as an important component of an early warning system for alerting stakeholders. This research deals with the temporal dimension of slope failures due to extreme rainfall events along National Highway-58 from Karanprayag to Badrinath in the Garhwal Himalaya, India. Post-processed 3-hourly rainfall intensity data and the corresponding durations, derived from daily rainfall data available from the Tropical Rainfall Measuring Mission (TRMM), were used as the prime source of rainfall data. Landslide event records from the Border Roads Organisation (BRO) and some ancillary landslide inventory data for 2013 and 2014 were used to determine an intensity-duration (ID) based rainfall threshold. The derived governing threshold equation, I = 4.738D^(-0.025), was validated with an accuracy of 70% for landslides during August and September 2014 and was considered for further prediction of landslides in the study region. From the obtained results and validation, it can be inferred that this equation can be used to anticipate the initiation of landslides in the study area as part of an early warning system. The results can improve significantly with ground-based rainfall estimates and a better database of landslide records. Thus, the study has demonstrated a very low-cost method of obtaining first-hand information on the possibility of an impending landslide in any region, thereby providing alerts and better preparedness for landslide disaster mitigation.
Keywords: Landslide, intensity-duration, rainfall threshold, Tropical Rainfall Measuring Mission, slope, inventory, early warning system.
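The governing equation reported above, I = 4.738D^(-0.025), can be applied directly; the helper below flags rainfall events whose mean intensity exceeds the threshold for their duration, assuming I in mm/h and D in hours (the usual convention) and using invented example events.

```python
def threshold_intensity(duration_h):
    """Intensity-duration threshold from the study: I = 4.738 * D^(-0.025), in mm/h (assumed units)."""
    return 4.738 * duration_h ** -0.025

def may_trigger_landslide(mean_intensity_mm_h, duration_h):
    return mean_intensity_mm_h >= threshold_intensity(duration_h)

# Invented rainfall events: (mean intensity in mm/h, duration in hours).
events = [(6.2, 3), (3.1, 12), (5.0, 24), (4.4, 48)]
for intensity, duration in events:
    flag = "ALERT" if may_trigger_landslide(intensity, duration) else "below threshold"
    print(f"I={intensity} mm/h over {duration} h -> threshold "
          f"{threshold_intensity(duration):.2f} mm/h: {flag}")
```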