Search results for: point estimate method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 23853

23223 Liquid Sulphur Storage Tank

Authors: Roya Moradifar, Naser Agharezaee

Abstract:

In this paper, corrosion in the liquid sulphur storage tank at the South Pars Gas Complex, Phases 2 & 3, is presented. These fully hot-insulated, field-erected storage tanks are used for the temporary storage of 1800 m3 of molten sulphur. Severe corrosion inside the tank roof was observed during overhaul inspections, in the direction of the roof gradient. Investigation showed that, unlike other parts of the tank, there was no insulation around the roof manholes. The internal steam coils do not maintain a sufficiently high tank roof temperature in the vapor space. Sulphur, together with the formation of liquid water at the cool metal surface, leads to the formation of iron sulfide. With a distributed external heating system, the temperature at any point of the tank roof should be kept above the ambient dew point and the solidification point of the stored liquid. Other aspects of the construction and operation of the tank are also important. This paper reviews the potential corrosion mechanisms and presents an operational case study that illustrates the importance of heating systems.

Keywords: tank, steam, corrosion, sulphur

Procedia PDF Downloads 569
23222 Non-Pharmacological Approach to the Improvement and Maintenance of the Convergence Parameter

Authors: Andreas Aceranti, Guido Bighiani, Francesca Crotto, Marco Colorato, Stefania Zaghi, Marino Zanetti, Simonetta Vernocchi

Abstract:

The management of eye parameters such as convergence, accommodation, and miosis is very complex; in fact, both the neurovegetative system and the complex oculocephalogyric system come into play. We have found the "high-velocity, low-amplitude" (HVLA) technique directed at C7-T1 (where the ciliospinal center of Budge is located) to be effective in improving the convergence parameter, as measured through the point of maximum convergence. With this research, we set out to investigate whether the improvement obtained through the HVLA maneuver lasts over time, carrying out a pre-manipulation measurement, one immediately after manipulation, and one a month after manipulation. We took a population of 30 subjects with both refractive and non-refractive problems. Of the 30 patients tested, 27 gave a positive result after the HVLA maneuver, showing an improvement in the point of maximum convergence. After a month, we retested all 27 subjects: some further improved the result, others maintained it, and three subjects slightly lost the gain obtained. None of the re-tested patients returned to their pre-manipulation point of maximum convergence. This result opens the door to a multidisciplinary approach between ophthalmologists and osteopaths, with the aim of addressing the oculomotricity and convergence deficits that increasingly afflict our society due to the massive use of devices and to lifestyles spent in closed and restricted environments.

Keywords: point of maximum convergence, HVLA, improvement in PPC, convergence

Procedia PDF Downloads 77
23221 Study of Electron Cyclotron Resonance Acceleration by Cylindrical TE₀₁₁ Mode

Authors: Oswaldo Otero, Eduardo A. Orozco, Ana M. Herrera

Abstract:

In this work, we present results from analytical and numerical studies of electron acceleration by a TE₀₁₁ cylindrical microwave mode in a static homogeneous magnetic field under the electron cyclotron resonance (ECR) condition. The stability of the orbits is analyzed using particle orbit theory. In order to get a better understanding of the wave-particle interaction, we decompose the azimuthal electric field component as the superposition of right- and left-hand circularly polarized standing waves. The trajectory, energy, and phase shift of the electron are found through a numerical solution of the relativistic Newton-Lorentz equation, discretized by finite differences using the Boris method. It is shown that an electron longitudinally injected with an energy of 7 keV at a radial position r = Rc/2, where Rc is the cavity radius, is accelerated up to an energy of 90 keV by an electric field strength of 14 kV/cm at a frequency of 2.45 GHz. This energy can be used to produce X-rays for medical imaging. These results can be used as a starting point for studying the acceleration of electrons in a magnetic field changing slowly in time (GYRAC), which has important applications such as the electron cyclotron resonance ion proton accelerator (ECR-IPAC) for cancer therapy and the control of plasma bunches with relativistic electrons.
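The relativistic Boris update used for the Newton-Lorentz integration can be illustrated with the minimal sketch below. The driving fields are a simple uniform axial B and a circularly polarized E at 2.45 GHz, not the actual TE₀₁₁ cavity fields of the study, and the time step and injection conditions are placeholders chosen only to show the structure of the scheme.

```python
import numpy as np

C = 299_792_458.0        # speed of light, m/s
Q = -1.602176634e-19     # electron charge, C
M = 9.1093837015e-31     # electron mass, kg

def boris_step(x, u, E, B, dt):
    """Advance position x and reduced momentum u = gamma*v by one step dt
    using the relativistic Boris scheme: half electric kick, magnetic
    rotation, half electric kick."""
    qmdt2 = Q * dt / (2.0 * M)
    u_minus = u + qmdt2 * E                                   # first half kick
    gamma_m = np.sqrt(1.0 + np.dot(u_minus, u_minus) / C**2)
    t = qmdt2 * B / gamma_m                                   # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    u_prime = u_minus + np.cross(u_minus, t)
    u_plus = u_minus + np.cross(u_prime, s)
    u_new = u_plus + qmdt2 * E                                # second half kick
    gamma = np.sqrt(1.0 + np.dot(u_new, u_new) / C**2)
    return x + u_new / gamma * dt, u_new

# Illustrative drive: resonant axial B for 2.45 GHz and a rotating E of 14 kV/cm
f = 2.45e9
B0 = 2.0 * np.pi * f * M / abs(Q)          # ~0.0875 T
E0 = 14e5                                  # 14 kV/cm in V/m
dt = 1.0 / (f * 200)                       # 200 steps per RF period
x, u = np.zeros(3), np.zeros(3)            # electron starting at rest
for n in range(20000):
    wt = 2.0 * np.pi * f * n * dt
    E = E0 * np.array([np.cos(wt), np.sin(wt), 0.0])
    B = np.array([0.0, 0.0, B0])
    x, u = boris_step(x, u, E, B, dt)
gamma = np.sqrt(1.0 + np.dot(u, u) / C**2)
print("kinetic energy ≈ %.1f keV" % ((gamma - 1.0) * M * C**2 / 1.602176634e-16))
```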

Keywords: Boris method, electron cyclotron resonance, finite difference method, particle orbit theory, X-ray

Procedia PDF Downloads 159
23220 Decoration of Multi-Walled Carbon Nanotubes by CdS Nanoparticles Using Magnetron Sputtering Method

Authors: Z. Ghorannevis, E. Akbarnejad, B. Aghazadeh, M. Ghoranneviss

Abstract:

Carbon nanotubes (CNTs) modified with semiconductor nanocrystalline particles may find wide applications due to their unique properties. Here, cadmium sulfide (CdS) nanoparticles were successfully grown on multi-walled carbon nanotubes (MWNTs) via a magnetron sputtering method for the first time. The CdS/MWNTs sample was characterized with X-ray diffraction (XRD), field emission scanning and high resolution transmission electron microscopies (FESEM/HRTEM), and a four-point probe. The obtained images clearly show the decoration of the MWNTs by the CdS nanoparticles, and the XRD measurements indicate that the CdS has a hexagonal structure. Moreover, the physical properties of the CdS/MWNTs were compared with those of CdS nanoparticles grown on silicon. Electrical measurements of CdS and CdS/MWNTs reveal that the CdS/MWNTs sample has a lower resistivity than the CdS sample, which may be due to higher carrier concentrations.
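As a rough illustration of how four-point-probe readings translate into the resistivities compared above, the sketch below applies the standard thin-film relation (sheet resistance R_s = (π/ln 2)·V/I, resistivity ρ = R_s·t). The voltage, current, and thickness numbers are placeholders, not data from this study.

```python
import math

def four_point_probe_resistivity(voltage_v, current_a, thickness_m):
    """Resistivity of a thin film from a collinear four-point-probe reading,
    assuming the film is much thinner than the probe spacing and edge
    effects are negligible (ideal geometric correction factor pi/ln 2)."""
    sheet_resistance = (math.pi / math.log(2.0)) * voltage_v / current_a  # ohm/sq
    return sheet_resistance * thickness_m                                 # ohm*m

# Hypothetical readings for a CdS film and a CdS/MWNT film of equal thickness
rho_cds = four_point_probe_resistivity(0.50, 1e-3, 200e-9)
rho_mwnt = four_point_probe_resistivity(0.05, 1e-3, 200e-9)
print(f"CdS: {rho_cds:.2e} ohm*m, CdS/MWNTs: {rho_mwnt:.2e} ohm*m")
```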

Keywords: CdS, MWNTs, HRTEM, magnetron sputtering

Procedia PDF Downloads 405
23219 Quality Fabric Optimization Using Genetic Algorithms

Authors: Halimi Mohamed Taher, Kordoghli Bassem, Ben Hassen Mohamed, Sakli Faouzi

Abstract:

The textile industry has been an important part of the economies of many developing countries, such as Tunisia. This industry is confronted with a challenging and increasingly competitive environment. Good quality management in the production process is the key factor for survival, especially in raw material exploitation. The present work aims to develop an intelligent system for fabric inspection. In the first step, we studied the method used for fabric control, which takes into account the defect length and its localization in the woven fabric. In the second step, we used a method based on fuzzy logic to minimize the demerit point indicator with an appropriate total roll length, so that the quality problem becomes multi-objective. In order to optimize the total fabric quality, we applied a genetic algorithm (GA).
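A minimal sketch of how a genetic algorithm can search for a roll length that minimizes a demerit-point indicator is given below. The objective function is purely illustrative (it is not the fuzzy-logic demerit model of the study), and the population size, mutation rate, and bounds are arbitrary assumptions.

```python
import random

def demerit_points(roll_length_m):
    """Illustrative stand-in for the demerit-point indicator: short rolls
    waste fabric at cuts, long rolls accumulate defects."""
    defects = 0.08 * roll_length_m           # assumed defect accumulation
    cutting_waste = 400.0 / roll_length_m    # assumed end-of-roll losses
    return defects + cutting_waste

def genetic_algorithm(fitness, bounds=(20.0, 200.0), pop_size=30,
                      generations=100, mutation_rate=0.2):
    pop = [random.uniform(*bounds) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                        # minimization
        parents = pop[:pop_size // 2]                # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = 0.5 * (a + b)                    # arithmetic crossover
            if random.random() < mutation_rate:      # Gaussian mutation
                child += random.gauss(0.0, 5.0)
            children.append(min(max(child, bounds[0]), bounds[1]))
        pop = parents + children
    return min(pop, key=fitness)

best = genetic_algorithm(demerit_points)
print(f"best roll length ≈ {best:.1f} m, demerit ≈ {demerit_points(best):.2f}")
```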

Keywords: fabric control, Fuzzy logic, genetic algorithm, quality management

Procedia PDF Downloads 591
23218 Row Detection and Graph-Based Localization in Tree Nurseries Using a 3D LiDAR

Authors: Ionut Vintu, Stefan Laible, Ruth Schulz

Abstract:

Agricultural robotics has been developing steadily over recent years, with the goal of reducing and even eliminating pesticides used in crops and of increasing productivity by taking over human labor. The majority of crops are arranged in rows. The first step towards autonomous robots, capable of driving in fields and performing crop-handling tasks, is for robots to robustly detect the rows of plants. Recent work on autonomous driving between plant rows offers big robotic platforms equipped with various expensive sensors as a solution to this problem. These platforms need to be driven over the rows of plants. This approach lacks flexibility and scalability when it comes to the height of plants or the distance between rows. This paper instead proposes an algorithm that makes use of cheaper sensors and offers greater flexibility. The main application is in tree nurseries. Here, plant height can range from a few centimeters to a few meters. Moreover, trees are often removed, leading to gaps within the plant rows. The core idea is to combine row detection algorithms with graph-based localization methods as they are used in SLAM. Nodes in the graph represent the estimated pose of the robot, and the edges embed constraints between these poses or between the robot and certain landmarks. This setup aims to improve individual plant detection and deal with exception handling, like row gaps, which are falsely detected as an end of rows. Four methods were developed for detecting row structures in the fields, all using a point cloud acquired with a 3D LiDAR as input. Comparing the field coverage and number of damaged plants, the method that uses a local map around the robot proved to perform the best, with 68% covered rows and 25% damaged plants. This method is further used and combined with a graph-based localization algorithm, which uses the local map features to estimate the robot’s position inside the greater field. Testing the upgraded algorithm in a variety of simulated fields shows that the additional information obtained from localization provides a boost in performance over methods that rely purely on perception to navigate. The final algorithm achieved a row coverage of 80% with 27% damaged plants. Future work will focus on achieving a perfect score of 100% covered rows and 0% damaged plants. The main challenges that the algorithm needs to overcome are fields where the height of the plants is too small for the plants to be detected and fields where it is hard to distinguish between individual plants when they are overlapping. The method was also tested on a real robot in a small field with artificial plants. The tests were performed using a small robot platform equipped with wheel encoders, an IMU, and an FX10 3D LiDAR. Over ten runs, the system achieved 100% coverage and 0% damaged plants. The framework built within the scope of this work can be further used to integrate data from additional sensors, with the goal of achieving even better results.
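A minimal sketch of one way to detect a single plant row from a LiDAR scan is shown below: project the 3D points to the ground plane and fit a 2D line with RANSAC. This is only an illustration of the row-detection idea, not the local-map method evaluated in the paper; the synthetic point cloud, inlier threshold, and iteration count are assumptions.

```python
import numpy as np

def ransac_row_line(points_xy, n_iters=200, inlier_tol=0.10,
                    rng=np.random.default_rng(0)):
    """Fit a 2D line (point p0, unit direction d) to noisy row points with RANSAC.
    points_xy: (N, 2) array of LiDAR returns projected onto the ground plane."""
    best_inliers, best = None, None
    for _ in range(n_iters):
        i, j = rng.choice(len(points_xy), size=2, replace=False)
        d = points_xy[j] - points_xy[i]
        norm = np.linalg.norm(d)
        if norm < 1e-6:
            continue
        d = d / norm
        normal = np.array([-d[1], d[0]])
        dist = np.abs((points_xy - points_xy[i]) @ normal)   # point-to-line distance
        inliers = dist < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best = inliers, (points_xy[i], d)
    # refine with a least-squares (PCA) fit on the inlier set
    inl = points_xy[best_inliers]
    p0 = inl.mean(axis=0)
    _, _, vt = np.linalg.svd(inl - p0)
    return p0, vt[0], best_inliers

# Synthetic row: trees every 0.5 m along y = 0.3 x + 1, plus random clutter
rng = np.random.default_rng(1)
xs = np.arange(0.0, 10.0, 0.5)
row = np.c_[xs, 0.3 * xs + 1.0] + rng.normal(0, 0.03, (len(xs), 2))
clutter = rng.uniform([0, 0], [10, 5], size=(30, 2))
p0, d, inliers = ransac_row_line(np.vstack([row, clutter]))
print("row direction:", d, "inliers:", int(inliers.sum()))
```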

Keywords: 3D LiDAR, agricultural robots, graph-based localization, row detection

Procedia PDF Downloads 139
23217 Blueprinting of Normalized Supply Chain Processes: Results in Implementing Normalized Software Systems

Authors: Bassam Istanbouli

Abstract:

With technology evolving every day and with the increase in global competition, industries are always under pressure to be the best. They need to provide good quality products at competitive prices, when and how the customer wants them. In order to achieve this level of service, products and their respective supply chain processes need to be flexible and evolvable; otherwise, changes will be extremely expensive, slow, and accompanied by many combinatorial effects. Those combinatorial effects impact the whole organizational structure from a management, financial, documentation, logistics and, especially, information system Enterprise Resource Planning (ERP) perspective. By applying the normalized systems concept/theory to segments of the supply chain, we believe these effects can be minimized, especially at the time of launching an organization-wide global software project. The purpose of this paper is to point out that if an organization wants to develop software from scratch or implement an existing ERP software for its business needs, and if its business processes are normalized and modular, then most probably this will yield a normalized and modular software system that can be easily modified when the business evolves. Another important goal of this paper is to increase awareness regarding the design of the business processes in a software implementation project. If the blueprints created are normalized, then the software developers and configurators will use those modular blueprints to map them into modular software. This paper only prepares the ground for further studies; the above concept will be supported by going through the steps of developing, configuring and/or implementing a software system for an organization using two methods: the Software Development Lifecycle method (SDLC) and the Accelerated SAP implementation method (ASAP). Both methods start with the customer requirements, then blueprinting of the business processes, and finally mapping those processes into a software system. Since those requirements and processes are the starting point of the implementation process, normalizing those processes will result in normalized software.

Keywords: blueprint, ERP, modular, normalized

Procedia PDF Downloads 139
23216 Weight Estimation Using the K-Means Method in Steelmaking’s Overhead Cranes in Order to Reduce Swing Error

Authors: Seyedamir Makinejadsanij

Abstract:

One of the most important factors in the production of quality steel is knowing the exact weight of steel in the steelmaking area. In this study, a calculation method is presented to estimate the exact weight of the melt as well as of the objects transported by the overhead crane. The steelmaking area of Iran Alloy Steel Company has three 90-ton cranes, which are responsible for transferring the ladles and ladle caps between 34 areas in the melt shop. Each crane is equipped with a Disomat Tersus weighing system that calculates and displays real-time weight. The moving object has a variable apparent weight due to swinging, and the weighing system has an error of about ±5%. This means that when an object weighing about 80 tons is moved by a crane, the device (Disomat Tersus system) reads about 4 tons more or 4 tons less, and this is the biggest problem in calculating the real weight. The k-means algorithm, an unsupervised clustering method, was used here. The best result was obtained by considering 3 centers. Compared to the normal average (one center) or to two, four, five, and six centers, the best answer is with 3 centers, which is logically due to the elimination of noise above and below the real weight. Every day, a standard weight is moved with the working cranes to test and calibrate them. The results show that the error is about 40 kg per 60 tons (standard weight). As a result, with this method, the accuracy of the moving weight estimate is 99.95%. K-means is used to calculate the exact mean weight of the objects. The stopping criterion of the algorithm is either 1000 iterations or no points moving between the clusters. As a result of the implementation of this system, the crane operator does not stop while moving objects and continues his activity regardless of weight calculations. Also, production speed increased, and human error decreased.
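A minimal sketch of the idea: cluster a window of raw load-cell readings (which oscillate because of load swing) into three groups with k-means and keep the middle cluster center, discarding the noise above and below the real weight. The stopping rule mirrors the one described above (1000 iterations or no point changing cluster); the simulated readings are illustrative, not plant data.

```python
import numpy as np

def kmeans_1d(values, k=3, max_iter=1000, rng=np.random.default_rng(0)):
    """Plain 1-D k-means; stops after max_iter iterations or when no
    sample changes cluster between two consecutive assignments."""
    centers = rng.choice(values, size=k, replace=False)
    labels = np.zeros(len(values), dtype=int)
    for _ in range(max_iter):
        new_labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return centers, labels

def swing_filtered_weight(readings):
    centers, _ = kmeans_1d(np.asarray(readings, dtype=float))
    return float(np.sort(centers)[len(centers) // 2])  # middle cluster ~ true weight

# Simulated 80 t load with a decaying +/- 4 t swing oscillation and sensor noise
t = np.linspace(0, 10, 400)
readings = 80.0 + 4.0 * np.sin(2 * np.pi * 0.4 * t) * np.exp(-0.1 * t) \
           + np.random.default_rng(2).normal(0, 0.2, t.size)
print(f"estimated weight ≈ {swing_filtered_weight(readings):.2f} t")
```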

Keywords: k-means, overhead crane, melt weight, weight estimation, swing problem

Procedia PDF Downloads 90
23215 The Effects of Wood Ash on Ignition Point of Wood

Authors: K. A. Ibe, J. I. Mbonu, G. K. Umukoro

Abstract:

The effects of wood ash on the ignition point of five common tropical woods in Nigeria were investigated. The ash and moisture contents of the wood saw dust from Mahogany (Khaya ivorensis), Opepe (Sarcocephalus latifolius), Abura (Hallealedermannii verdc), Rubber (Heavea brasilensis) and Poroporo (Sorghum bicolour) were determined using a furnace (Vecstar furnaces, model ECF2, serial no. f3077) and oven (Genlab laboratory oven, model MINO/040) respectively. The metal contents of the five wood sawdust ash samples were determined using a Perkin Elmer optima 3000 dv atomic absorption spectrometer while the ignition points were determined using Vecstar furnaces model ECF2. Poroporo had the highest ash content, 2.263 g while rubber had the least, 0.710 g. The results for the moisture content range from 2.971 g to 0.903 g. Magnesium metal had the highest concentration of all the metals, in all the wood ash samples; with mahogany ash having the highest concentration, 9.196 ppm while rubber ash had the least concentration of magnesium metal, 2.196 ppm. The ignition point results showed that the wood ashes from mahogany and opepe increased the ignition points of the test wood samples when coated on them while the ashes from poroporo, rubber and abura decreased the ignition points of the test wood samples when coated on them. However, Opepe saw dust ash decreased the ignition point in one of the test wood samples, suggesting that the metal content of the test wood sample was more than that of the Opepe saw dust ash. Therefore, Mahogany and Opepe saw dust ashes could be used in the surface treatment of wood to enhance their fire resistance or retardancy. However, the caution to be exercised in this application is that the metal content of the test wood samples should be evaluated as well.

Keywords: ash, fire, ignition point, retardant, wood saw dust

Procedia PDF Downloads 388
23214 Evaluation of Settlement of Coastal Embankments Using Finite Elements Method

Authors: Sina Fadaie, Seyed Abolhassan Naeini

Abstract:

Coastal embankments play an important role in coastal structures by reducing the effect of wave forces and controlling the movement of sediments. Many coastal areas are underlain by weak and compressible soils. Estimation of the settlement of coastal embankments during construction is highly important in the design and safety control of embankments and appurtenant structures. Accordingly, selecting and establishing an appropriate model with a reasonable level of complication is one of the challenges for engineers. Although there are advanced models in the literature regarding the design of embankments, there is not enough information on the prediction of their associated settlement, particularly in coastal areas having considerable soft soils. Marine engineering studies are important in Iran due to the existence of two important coastal areas located in the northern and southern parts of the country. In the present study, the validity of Terzaghi’s consolidation theory has been investigated. In addition, the settlement of these coastal embankments during construction is predicted using special methods in the PLAXIS software, with the help of appropriate boundary conditions and soil layers. The results indicate that, for the existing soil condition at the site, some parameters are important to be considered in the analysis. Consequently, a model is introduced to estimate the settlement of embankments in such geotechnical conditions.
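As a brief illustration of the classical theory examined in the study, the sketch below computes the primary consolidation settlement of a normally consolidated clay layer and Terzaghi's average degree of consolidation U(Tv). All soil parameters and loads are hypothetical placeholders, not values from the studied site.

```python
import math

def primary_settlement(Cc, e0, H, sigma0, d_sigma):
    """Terzaghi 1-D primary consolidation settlement (m) of a normally
    consolidated layer of thickness H (m)."""
    return Cc / (1.0 + e0) * H * math.log10((sigma0 + d_sigma) / sigma0)

def degree_of_consolidation(Tv, n_terms=50):
    """Average degree of consolidation U for time factor Tv (series solution)."""
    U = 1.0
    for m in range(n_terms):
        M = math.pi / 2.0 * (2 * m + 1)
        U -= 2.0 / M**2 * math.exp(-M**2 * Tv)
    return U

# Hypothetical soft-clay layer under an embankment load
Cc, e0, H = 0.35, 1.1, 6.0        # compression index, void ratio, thickness (m)
sigma0, d_sigma = 60.0, 80.0      # in-situ and added effective stress (kPa)
cv = 2.0e-8                       # coefficient of consolidation (m^2/s)
t = 180 * 24 * 3600.0             # 180 days of construction (s)
Tv = cv * t / (H / 2.0) ** 2      # double drainage -> drainage path H/2
S_final = primary_settlement(Cc, e0, H, sigma0, d_sigma)
print(f"final settlement ≈ {S_final:.2f} m, "
      f"after 180 days ≈ {S_final * degree_of_consolidation(Tv):.2f} m")
```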

Keywords: consolidation, settlement, coastal embankments, numerical methods, finite elements method

Procedia PDF Downloads 156
23213 Models Comparison for Solar Radiation

Authors: Djelloul Benatiallah

Abstract:

Due to current high consumption and recent industrial growth, fossil and natural energy supplies like oil, gas, and uranium are being depleted. Due to pollution and climate change, there needs to be a swift switch to renewable energy sources. Research on renewable energy is being done to meet energy needs. Solar energy is one of the renewable resources that can currently meet all of the world's energy needs. In most parts of the world, solar energy is a free and unlimited resource that can be used in a variety of ways, including photovoltaic systems for the generation of electricity and thermal systems for the generation of heat, for example for the residential sector's production of hot water. In this article, we conduct a comparison. The first step entails identifying two empirical models that enable us to estimate the daily irradiation on a horizontal plane. We then compare them using data obtained from measurements made at the Adrar site over the four distinct seasons. Model 2 provides a better estimate of the global solar components, with a mean absolute error of less than 7% and a correlation coefficient of more than 0.95, as well as a relative bias error of less than 6% in absolute value and a relative RMSE of less than 10%, according to a comparison of the results obtained by simulating the two models.
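The statistical indicators quoted above (mean absolute error, bias error, RMSE and correlation coefficient, expressed relative to the measured mean) can be computed in a few lines; the sketch below uses made-up measured and estimated daily irradiation values purely to show the formulas.

```python
import numpy as np

def comparison_metrics(measured, estimated):
    """Relative MAE, relative MBE, relative RMSE (in % of the measured mean)
    and Pearson correlation coefficient between two series."""
    measured, estimated = np.asarray(measured, float), np.asarray(estimated, float)
    err = estimated - measured
    mean_meas = measured.mean()
    return {
        "rMAE_%": 100.0 * np.mean(np.abs(err)) / mean_meas,
        "rMBE_%": 100.0 * np.mean(err) / mean_meas,
        "rRMSE_%": 100.0 * np.sqrt(np.mean(err ** 2)) / mean_meas,
        "r": np.corrcoef(measured, estimated)[0, 1],
    }

# Hypothetical daily global irradiation on a horizontal plane (kWh/m^2/day)
measured = [5.8, 6.1, 6.4, 7.0, 6.6, 5.9, 6.2]
estimated = [5.6, 6.3, 6.5, 6.8, 6.9, 5.7, 6.4]
print(comparison_metrics(measured, estimated))
```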

Keywords: solar radiation, renewable energy, fossil, photovoltaic systems

Procedia PDF Downloads 79
23212 Copolymers of Epsilon-Caprolactam Received via Anionic Polymerization in the Presence of Polypropylene Glycol Based Polymeric Activators

Authors: Krasimira N. Zhilkova, Mariya K. Kyulavska, Roza P. Mateva

Abstract:

The anionic polymerization of ε-caprolactam (CL) with bifunctional activators has been extensively studied as an effective and beneficial method of improving the chemical and impact resistance, elasticity, and other mechanical properties of polyamide (PA6). In the presence of activators or macroactivators (MAs), also called polymeric activators (PACs), the anionic polymerization of lactams proceeds rapidly in a temperature range of 130-180°C, well below the melting point of PA6 (220°C), thus permitting the direct manufacturing of the copolymer product together with the desired modifications of polyamide properties. Copolymers of PA6 with an elastic polypropylene glycol (PPG) middle block in the main chain were successfully synthesized via activated anionic ring-opening polymerization (ROP) of CL. Using novel PACs based on PPG polyols (with different molecular weights), the anionic ROP of CL was realized and investigated in the presence of a basic initiator, the sodium salt of CL (NaCL). The PACs were synthesized as N-carbamoyllactam derivatives of hydroxyl-terminated PPG functionalized with isophorone diisocyanate [IPh, 5-isocyanato-1-(isocyanatomethyl)-1,3,3-trimethylcyclohexane] and then blocked with CL units via an addition reaction. The block copolymers were analyzed and confirmed with 1H-NMR and FT-IR spectroscopy. The influence of the CL/PACs ratio in the feed, the length of the PPG segments, and the polymerization conditions on the kinetics of the anionic ROP, on the average molecular weight, and on the structure of the obtained block copolymers was investigated. The structure and phase behaviour of the copolymers were explored with differential scanning calorimetry, wide-angle X-ray diffraction, thermogravimetric analysis, and dynamic mechanical thermal analysis. The dependence of crystallinity on the PPG content incorporated into the copolymer main backbone was estimated. Additionally, the mechanical properties of the obtained copolymers were studied by a notched impact test. From the investigation performed in this study, it can be concluded that using PPG-based PACs at the chosen ROP conditions leads to well-defined PA6-b-PPG-b-PA6 copolymers with improved impact resistance.

Keywords: anionic ring opening polymerization, caprolactam, polyamide copolymers, polypropylene glycol

Procedia PDF Downloads 415
23211 Use of Chemical Extractions to Estimate the Metals Availability in Bricks Made of Dredged Sediments

Authors: Fabienne Baraud, Lydia Leleyter, Sandra Poree, Melanie Lemoine

Abstract:

SEDIBRIC (valorisation de SEDIments en BRIQues et tuiles, i.e., recovery of dredged sediments in bricks and tiles) is a French project that aims to replace a part of the natural clays used in the preparation of fired bricks with dredged sediments, in order to propose an alternative solution for the management of harbor dredged sediments. The feasibility of such re-use is explored from a technical, economic, and environmental point of view. The present study focuses on the potential environmental impact of various chemical elements (Al, Ca, Cd, Co, Cr, Cu, Fe, Ni, Mg, Mn, Pb, Ti, and Zn) that are initially present in the dredged sediments. The total content (after acid digestion) and the environmental availability (estimated by single extractions with various extractants) of these elements are determined in the raw sediments and in the obtained fired bricks. The possible influence of some steps of the manufacturing process (sediment pre-treatment, firing) is also explored. The first results show that the pre-treatment step, which uses tap water to desalinate the raw sediment, does not influence the environmental availability of the studied elements. However, the firing process, performed at 900°C, can affect the amount of some elements detected in the bricks, as well as their environmental availability. We note that for Cr or Ni, the HCl or EDTA availability was increased in the brick (compared to the availability in the raw sediment). For Cd, Cu, Pb, and Zn, the HCl and EDTA availability was reduced in the bricks, meaning that these elements were stabilized within the bricks.

Keywords: bricks, chemical extraction, metals, sediment

Procedia PDF Downloads 150
23210 Effect of Non-Genetic Factors and Heritability Estimate of Some Productive and Reproductive Traits of Holstein Cows in Middle of Iraq

Authors: Salim Omar Raoof

Abstract:

This study was conducted at the Al-Salam station for milk production, located in the Al-Latifiya district, Al-Mahmudiyah (25 km south of Baghdad governorate), on a sample of 180 Holstein cows imported from Germany by the Taj Al-Nahrain company, in order to study the effect of parity, season, and year of calving on total milk production (TMP), lactation period (LP), calving interval (CI), and services per conception (SPC), and to estimate the heritability of the studied traits. The results showed that the overall means of TMP and LP were 3172.53 kg and 237.09 days, respectively. The effect of parity on TMP in Holstein cows was highly significant (P≤0.01). Total milk production increased with advancing parity and mostly reached its maximum value in the 4th and 3rd parities, being 3305.87 kg and 3286.35 kg, respectively. Season of calving had a highly significant (P≤0.01) effect on TMP. Cows that calved in spring had higher milk production than those that calved in other seasons. Season of calving also had a highly significant (P≤0.01) effect on services per conception. The results showed that the heritability values for TMP, LP, SPC, and CI were 0.21, 0.08, 0.08, and 0.07, respectively.

Keywords: cows, non genetic, milk production, heritability

Procedia PDF Downloads 79
23209 Delimitation of the Perimeters of Protection of the Wellfield in the City of Adrar, Sahara of Algeria, Using the Wyssling Method

Authors: Ferhati Ahmed, Fillali Ahmed, Oulhadj Younsi

Abstract:

This work deals with the delimitation of the perimeters of protection in the catchment area of the city of Adrar, which are established around the sites for the collection of water intended for human consumption, with the objective of ensuring the preservation of the resource and reducing the risks of point-source and accidental pollution (Continental Intercalaire groundwater of the Northern Sahara of Algeria). This wellfield is located in the northeast of the city of Adrar; it covers an area of 132.56 km2 with 21 drinking water supply (DWS) wells, pumping a total flow of approximately 13 Hm3/year. The choice of this wellfield is based on its favorable hydrodynamic characteristics and its location in relation to the agglomeration. The vulnerability of this aquifer to pollution is very high because it is unconfined and lacks a protective layer. In recent years, several activities that can affect the quality of this precious resource have appeared around the field, including a large domestic waste centre and agricultural and industrial activities. Thus, its sustainability requires the implementation of protection perimeters. The objective of this study is to set up three protection perimeters: immediate, close, and remote. The application of the Wyssling method makes it possible to calculate the transfer time (t) of a drop of groundwater located at any point in the aquifer up to the abstraction point, and thus to define isochrones, which in turn delimit each type of perimeter: 40 days for the closer one and 100 days for the farther one. Special restrictions are imposed on all activities depending on the distance from the catchment. The application of this method to the Adrar city catchment field showed that the close and remote protection perimeters occupy areas of 51.14 km2 and 92.9 km2, respectively. The perimeters are delimited by geolocated markers, 40 and 46 markers, respectively. These results show that the areas defined as the "close protection perimeter" are free from activities likely to present a risk to the quality of the abstracted water. On the other hand, in the areas defined as the "remote protection perimeter," there are some agricultural and industrial activities that may present an imminent risk. Rigorous control of these activities and restrictions on the type of products applied in industry and agriculture are imperative.
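The isochrone calculation can be illustrated with a simplified radial-flow approximation (neglecting the regional hydraulic gradient that the full Wyssling construction also takes into account): the water extracted during time t comes from a cylinder of pore volume π·r²·b·n_e, so the isochrone radius is r = sqrt(Q·t/(π·n_e·b)). All aquifer parameters below are placeholders, not values used in the study.

```python
import math

def isochrone_radius(q_m3_per_day, t_days, thickness_m, effective_porosity):
    """Radius (m) of the t-day isochrone around a pumping well, assuming
    purely radial flow in a homogeneous aquifer: pi * r^2 * b * n_e = Q * t."""
    volume = q_m3_per_day * t_days
    return math.sqrt(volume / (math.pi * effective_porosity * thickness_m))

# Hypothetical well: 1700 m3/day, 100 m saturated thickness, n_e = 0.15
for t in (40, 100):   # near and remote protection isochrones (days)
    r = isochrone_radius(1700.0, t, 100.0, 0.15)
    print(f"{t:>3d}-day isochrone radius ≈ {r:.0f} m")
```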

Keywords: continental intercalaire, drinking water supply, groundwater, perimeter of protection, wyssling method

Procedia PDF Downloads 96
23208 Native Point Defects in ZnO

Authors: A. M. Gsiea, J. P. Goss, P. R. Briddon, Ramadan. M. Al-habashi, K. M. Etmimi, Khaled. A. S. Marghani

Abstract:

Using first-principles methods based on density functional theory and pseudopotentials, we have performed a detailed study of native defects in ZnO. Native point defects are unlikely to be the cause of the unintentional n-type conductivity. Oxygen vacancies, which have most often been invoked as shallow donors, have high formation energies in n-type ZnO and are, in addition, deep donors. Zinc interstitials are shallow donors with high formation energies in n-type ZnO, and thus unlikely to be responsible on their own for unintentional n-type conductivity under equilibrium conditions; the same holds for Zn antisites, which have even higher formation energies than zinc interstitials. Zinc vacancies are deep acceptors with low formation energies under n-type conditions, in which case they will not play a role in the p-type conductivity of ZnO. Oxygen interstitials are stable in the form of electrically inactive split interstitials, as well as deep acceptors at the octahedral interstitial site under n-type conditions. Our results may provide a guide to experimental studies of point defects in ZnO.
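For reference, defect formation energies of the kind discussed above are conventionally obtained from supercell total energies with the standard expression below (chemical potentials μ_i fixed by Zn-rich or O-rich growth conditions, the Fermi level E_F referenced to the valence-band maximum); the finite-size correction term for charged supercells is included schematically and the exact corrections used in this work may differ.

```latex
\[
E^{f}\!\left[X^{q}\right] \;=\; E_{\mathrm{tot}}\!\left[X^{q}\right]
\;-\; E_{\mathrm{tot}}\!\left[\mathrm{ZnO,\ bulk}\right]
\;-\; \sum_{i} n_{i}\,\mu_{i}
\;+\; q\left(E_{F} + \varepsilon_{\mathrm{VBM}}\right)
\;+\; E_{\mathrm{corr}}
\]
```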

Keywords: DFT, native, n-type, ZnO

Procedia PDF Downloads 593
23207 Removal of Perchloroethylene, a Common Pollutant, in Groundwater Using Activated Carbon

Authors: Marianne Miguet, Gaël Plantard, Yves Jaeger, Vincent Goetz

Abstract:

The contamination of groundwater is a major concern. A common pollutant, perchloroethylene, is the target contaminant. Water treatment processes such as granular activated carbon (GAC) are very efficient but require pilot-scale testing to determine the full-scale GAC performance. First, a reliable experimental method to estimate the adsorption capacity for a common volatile compound was established in batch mode. The Langmuir model is acceptable for fitting the isotherms. Dynamic tests were performed with three columns and different operating conditions. A database of concentration profiles and breakthrough curves was obtained. The resolution of the set of differential equations fits the dynamic tests acceptably and could be used for a full-scale adsorber.
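A minimal sketch of fitting the Langmuir model mentioned above to batch isotherm points, q = q_max·K_L·Ce/(1 + K_L·Ce), is shown below; the concentration/uptake data are invented for illustration and are not the measured perchloroethylene isotherms.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, q_max, k_l):
    """Langmuir isotherm: adsorbed amount q (mg/g) vs equilibrium conc. Ce (mg/L)."""
    return q_max * k_l * ce / (1.0 + k_l * ce)

# Hypothetical batch equilibrium data (Ce in mg/L, q in mg/g)
ce = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0, 4.0])
q = np.array([12.0, 21.0, 38.0, 55.0, 72.0, 86.0, 95.0])

(q_max, k_l), _ = curve_fit(langmuir, ce, q, p0=[100.0, 1.0])
print(f"q_max ≈ {q_max:.1f} mg/g, K_L ≈ {k_l:.2f} L/mg")
```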

Keywords: activated carbon, groundwater, perchloroethylene, full-scale

Procedia PDF Downloads 426
23206 Regularizing Software for Aerosol Particles

Authors: Christine Böckmann, Julia Rosemann

Abstract:

We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides us with comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius, volume, and surface-area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. Single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into highly and weakly absorbing aerosols. From a mathematical point of view, the algorithm is based on the concept of using truncated singular value decomposition as the regularization method. This method was adapted to work for the retrieval of the particle size distribution function (PSD) and is called a hybrid regularization technique, since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task because very small measurement errors will most often be amplified hugely during the solution process unless an appropriate regularization method is used. Even using a regularization method is difficult, since appropriate regularization parameters have to be determined. Therefore, in the next stage of our work, we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Padé iteration. Here, the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 nm, which does not only cover the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers a part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%. In more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error, for non- and weakly absorbing particles with real parts 1.5 and 1.6, in all modes the accuracy limit of ±0.03 is achieved. In sum, 70% of all cases stay below ±0.03, which is sufficient for climate change studies.
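The truncated-SVD idea that underlies the retrieval can be shown in a few lines: solve the ill-posed linear system Ax = b while keeping only the k largest singular values. The test matrix and data below are synthetic stand-ins, not lidar kernels; the aim is only to show how the truncation level acts as the regularization parameter.

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD regularized solution of A x = b, keeping the k largest
    singular values; k plays the role of the regularization parameter."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]
    return Vt.T @ (s_inv * (U.T @ b))

# Synthetic ill-posed problem: a smoothing (convolution-like) kernel
n = 60
x_grid = np.linspace(0.0, 1.0, n)
A = np.exp(-((x_grid[:, None] - x_grid[None, :]) ** 2) / (2 * 0.05 ** 2))
x_true = np.exp(-((x_grid - 0.4) ** 2) / (2 * 0.08 ** 2))     # PSD-like bump
b = A @ x_true + np.random.default_rng(0).normal(0, 1e-3, n)  # noisy "optical data"

for k in (5, 15, 60):  # too little, reasonable, and no regularization
    x_k = tsvd_solve(A, b, k)
    rel_err = np.linalg.norm(x_k - x_true) / np.linalg.norm(x_true)
    print(f"k = {k:2d}: relative error = {rel_err:.2g}")
```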

Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization

Procedia PDF Downloads 343
23205 Biomimetics and Additive Manufacturing for Industrial Design Innovation

Authors: Axel Thallemer, Martin Danzer, Dominik Diensthuber, Aleksandar Kostadinov, Bernhard Rogler

Abstract:

Nature has always inspired the creative mind, to a lesser or greater extent. Introduced around the 1950s, biomimetics has served as a systematic method of treating the natural world as a 'pattern book' for technical solutions, with the aim of creating innovative products. Unfortunately, this technique is prone to failure when performed as mere reverse engineering of a natural system or appearance. In contrast, a solution that looks at the principles of a natural design promises a better outcome. One such example is the case study presented here, which shows the design process of three distinctive grippers. The devices have biomimetic properties on two levels. Firstly, they use a kinematic chain found in beaks, and secondly, they have a biomimetic structural geometry, which was realized using additive manufacturing. In a next step, the manufacturing method was evaluated to estimate its efficiency for commercial production. The results show that the fabrication procedure is still in its early stage and thus not yet able to guarantee satisfactory results. To summarize the study, we claim that a novel solution can be derived using principles from nature; however, for the solution to be actualized successfully, there are parameters which are beyond the reach of designers. Nonetheless, industrial designers can contribute to product innovation using biomimetics.

Keywords: biomimetics, innovation, design process, additive manufacturing

Procedia PDF Downloads 191
23204 Modeling of Nanocomposite Films Made of Cloisite 30B-Metal Nanoparticles in Packaging of Soy Burger

Authors: Faranak Beigmohammadi, Seyed Hadi Peighambardoust, Seyed Jamaledin Peighambardoust

Abstract:

This study investigates the ability of different kinds of nanocomposite films made of Cloisite 30B with different percentages of silver and copper oxide nanoparticles, incorporated into a low-density polyethylene (LDPE) polymeric matrix by a melt mixing method, to inhibit the growth of microorganisms in soy burger. The number of surviving cells in the total count decreased by 3.61 log, and molds and yeasts decreased by 2.01 log, after 8 weeks of storage at −18 ± 0.5°C, whilst pure LDPE did not have any antimicrobial effect. A composition of 1.3% Cloisite 30B-Ag and 2.7% Cloisite 30B-CuO for the total count, and 0% Cloisite 30B-Ag and 4% Cloisite 30B-CuO for yeasts and molds, gave optimum points in the combined design test in Design Expert 7.1.5. Suitable microbial models were suggested for retarding the growth of the above microorganisms in soy burger. To validate the optimum point, the difference between the optimum point of the nanocomposite film and its repeat was not significant (p<0.05) by one-way ANOVA analysis using SPSS 17.0 software, while the difference was significant for the pure film. Migration of metallic nanoparticles into a food simulant was within the accepted safe level.

Keywords: modeling, nanocomposite film, packaging, soy burger

Procedia PDF Downloads 302
23203 Effect of Non-Genetic Factors and Heritability Estimate of Some Productive and Reproductive Traits of Holstein Cows in Middle of Iraq

Authors: Salim Omar Raoof

Abstract:

This study was conducted at the Al-Salam station for milk production, located in the Al-Latifiya district, Al-Mahmudiyah (25 km south of Baghdad governorate), on a sample of 180 Holstein cows imported from Germany by the Taj Al-Nahrain company, in order to study the effect of parity, season, and year of calving on total milk production (TMP), lactation period (LP), calving interval (CI), and services per conception (SPC), and to estimate the heritability of the studied traits. The results showed that the overall means of TMP and LP were 3172.53 kg and 237.09 days, respectively. The effect of parity on TMP in Holstein cows was highly significant (P≤0.01). Total milk production increased with advancing parity and mostly reached its maximum value in the 4th and 3rd parities, being 3305.87 kg and 3286.35 kg, respectively. Season of calving had a highly significant (P≤0.01) effect on TMP. Cows that calved in spring had higher milk production than those that calved in other seasons. Season of calving also had a highly significant (P≤0.01) effect on services per conception. The results showed that the heritability values for TMP, LP, SPC, and CI were 0.21, 0.08, 0.08, and 0.07, respectively.

Keywords: Holstein, cows, milk production, non-genetic, heritability

Procedia PDF Downloads 65
23202 Amblyopia and Eccentric Fixation

Authors: Kristine Kalnica-Dorosenko, Aiga Svede

Abstract:

Amblyopia or 'lazy eye' is impaired or dim vision without an obvious defect or change in the eye. It is often associated with abnormal visual experience, most commonly strabismus, anisometropia or both, and form deprivation. The main task of amblyopia treatment is to ameliorate the etiological factors in order to create a clear retinal image and to ensure the participation of the amblyopic eye in the visual process. The treatment of amblyopia and eccentric fixation is usually associated with problems in the therapy. Eccentric fixation is present in around 44% of all patients with amblyopia and in 30% of patients with strabismic amblyopia. In Latvia, amblyopia is carefully treated in various clinics, but eccentric fixation is diagnosed relatively rarely. The conflict that has developed concerning the relationship between the visual disorder and the degree of eccentric fixation in amblyopia should be rethought, because it has an important bearing on the cause and treatment of amblyopia, and on the role of eccentric fixation in this case. Visuoscopy is the most frequently used method for the determination of eccentric fixation. With traditional visuoscopy, a fixation target is projected onto the patient's retina, and the examiner asks the patient to look straight at the center of the target. The optometrist then observes the point on the macula used for fixation. This objective test provides clinicians with direct observation of the fixation point of the eye. It requires patients to voluntarily fixate the target and assumes the foveal reflex accurately demarcates the center of the foveal pit. In the end, by having a very simple method to evaluate fixation, it is possible to indirectly evaluate treatment improvement, as eccentric fixation is always associated with reduced visual acuity. So, one may expect that if eccentric fixation in an amblyopic eye is found with visuoscopy, then visual acuity should be less than 1.0 (in decimal units). With occlusion or another amblyopia therapy, one would expect both visual acuity and fixation to improve simultaneously, that is, fixation would become more central. Consequently, improvement in the fixation pattern with treatment is an indirect measurement of improvement in visual acuity. Evaluation of eccentric fixation may be helpful in identifying amblyopia in children prior to the measurement of visual acuity. This is very important because the earlier amblyopia is diagnosed, the better the chance of improving visual acuity.

Keywords: amblyopia, eccentric fixation, visual acuity, visuoscopy

Procedia PDF Downloads 158
23201 Modeling Standpipe Pressure Using Multivariable Regression Analysis by Combining Drilling Parameters and a Herschel-Bulkley Model

Authors: Seydou Sinde

Abstract:

The aims of this paper are to formulate mathematical expressions that can be used to estimate the standpipe pressure (SPP). The developed formulas take into account the main factors that, directly or indirectly, affect the behavior of SPP values. Fluid rheology and well hydraulics are some of these essential factors. Mud plastic viscosity, yield point, flow behavior index, consistency index, flow rate, and drillstring and annular geometries are represented by the frictional pressure (Pf), which is one of the input independent parameters and is calculated, in this paper, using the Herschel-Bulkley rheological model. Other input independent parameters include the rate of penetration (ROP), applied load or weight on the bit (WOB), bit revolutions per minute (RPM), bit torque (TRQ), and hole inclination and direction coupled in the hole curvature or dogleg (DL). The technique of repeating parameters and the Buckingham Pi theorem are used to reduce the input independent parameters to the dimensionless revolutions per minute (RPMd), the dimensionless torque (TRQd), and the dogleg, which is already in the dimensionless form of radians. Multivariable linear and polynomial regression using PTC Mathcad Prime 4.0 is applied to analyze and determine the exact relationships between the dependent parameter, which is SPP, and the remaining three dimensionless groups. Three models proved sufficiently satisfactory to estimate the standpipe pressure: multivariable linear regression model 1 containing three regression coefficients for vertical wells; multivariable linear regression model 2 containing four regression coefficients for deviated wells; and a multivariable polynomial quadratic regression model containing six regression coefficients for both vertical and deviated wells. Although linear regression model 2 (with four coefficients) is relatively more complex and contains an additional term compared with linear regression model 1 (with three coefficients), the former did not really add significant improvements over the latter except for some minor values. Thus, the effect of the hole curvature or dogleg is insignificant and can be omitted from the input independent parameters without significant loss of accuracy. The polynomial quadratic regression model is considered the most accurate model due to its relatively higher accuracy for most of the cases. Data from nine wells in the Middle East were used to run the developed models, with satisfactory results provided by all of them, even if the multivariable polynomial quadratic regression model gave the best and most accurate results. The development of these models is useful not only to monitor and predict, with accuracy, the values of SPP but also to check early for the integrity of the well hydraulics and to take corrective actions should any unexpected problems appear, such as pipe washouts, jet plugging, excessive mud losses, fluid gains, kicks, etc.
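A minimal sketch of the quadratic multivariable regression step is given below: build a design matrix in the dimensionless groups (RPMd, TRQd, DL) and fit the coefficients by least squares. The synthetic data stand in for field measurements, the dimensionless groups would come from the Buckingham-Pi reduction described above, and the quadratic basis shown (intercept, linear and squared terms) may differ from the exact six-term form retained in the paper.

```python
import numpy as np

def quadratic_design_matrix(rpm_d, trq_d, dl):
    """Columns: intercept, linear and squared terms of the three dimensionless
    groups (an illustrative quadratic basis)."""
    rpm_d, trq_d, dl = (np.asarray(v, float) for v in (rpm_d, trq_d, dl))
    return np.column_stack([np.ones_like(rpm_d),
                            rpm_d, trq_d, dl,
                            rpm_d ** 2, trq_d ** 2, dl ** 2])

# Synthetic "measurements": SPP as a noisy quadratic function of the groups
rng = np.random.default_rng(3)
rpm_d = rng.uniform(0.5, 2.0, 200)
trq_d = rng.uniform(0.1, 1.0, 200)
dl = rng.uniform(0.0, 0.2, 200)
spp = 900 + 350 * rpm_d + 120 * trq_d + 60 * dl + 40 * rpm_d ** 2 \
      + rng.normal(0, 25, 200)

X = quadratic_design_matrix(rpm_d, trq_d, dl)
coef, *_ = np.linalg.lstsq(X, spp, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((spp - pred) ** 2) / np.sum((spp - spp.mean()) ** 2)
print("coefficients:", np.round(coef, 1), " R^2 =", round(r2, 3))
```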

Keywords: standpipe, pressure, hydraulics, nondimensionalization, parameters, regression

Procedia PDF Downloads 84
23200 The Analysis of Thermal Conductivity in Porcine Meat Due to Electricity by Finite Element Method

Authors: Orose Rugchati, Sarawut Wattanawongpitak

Abstract:

This research studied the thermal conductivity and heat transfer in porcine meat due to the electric current flowing between parallel electrode plates. A hot-boned pork sample was prepared as a 2 × 1 × 1 cm cube. The finite element method with the ANSYS Workbench program was applied to simulate this heat transfer problem. In the thermal simulation, the input thermoelectric energy was calculated from the measured current flowing through the pork and the input voltage from the DC voltage source. The comparison of heat transfer in pork for two voltage sources, a DC voltage of 30 volts and a DC pulsed voltage of 60 volts (pulse width 50 milliseconds and 50% duty cycle), is demonstrated. The results show that the thermal conductivity tends to be steady at temperatures of 40°C and 60°C, at around 1.39 W/m·°C and 2.65 W/m·°C for the DC voltage source of 30 volts and the DC pulsed voltage of 60 volts, respectively. When the temperature increased to 50°C at 5 minutes, the color of the porcine meat at the exposure point began to fade. This technique could be used for predicting thermal conductivity caused by some of the meat’s characteristics.

Keywords: thermal conductivity, porcine meat, electricity, finite element method

Procedia PDF Downloads 140
23199 Waters Colloidal Phase Extraction and Preconcentration: Method Comparison

Authors: Emmanuelle Maria, Pierre Crançon, Gaëtane Lespes

Abstract:

Colloids are ubiquitous in the environment and are known to play a major role in enhancing the transport of trace elements, thus being an important vector for contaminant dispersion. The study and characterization of colloids are necessary to improve our understanding of the fate of pollutants in the environment. However, in stream water and groundwater, colloids are often very poorly concentrated. It is therefore necessary to pre-concentrate colloids in order to get enough material for analysis, while preserving their initial structure. Many techniques are used to extract and/or pre-concentrate the colloidal phase from the bulk aqueous phase, but there is neither a reference method nor an estimation of the impact of these different techniques on the colloid structure, nor of the bias introduced by the separation method. In the present work, we have tested and compared several methods of colloidal phase extraction/pre-concentration, and their impact on colloid properties, particularly their size distribution and their elementary composition. Ultrafiltration methods (frontal, tangential, and centrifugal) have been considered, since they are widely used for the extraction of colloids from natural waters. To compare these methods, a ‘synthetic groundwater’ was used as a reference. The size distribution (obtained by Field-Flow Fractionation (FFF)) and the chemical composition of the colloidal phase (obtained by Inductively Coupled Plasma Mass Spectrometry (ICPMS) and Total Organic Carbon analysis (TOC)) were chosen as comparison factors. In this way, it is possible to estimate the impact of pre-concentration on the preservation of the colloidal phase. It appears that some of these methods preserve the colloidal phase composition more efficiently, while others are easier/faster to use. The choice of the extraction/pre-concentration method is therefore a compromise between efficiency (including speed and ease of use) and impact on the structural and chemical composition of the colloidal phase. In perspective, the use of these methods should enhance the consideration of the colloidal phase in the transport of pollutants in environmental assessment studies and forensics.

Keywords: chemical composition, colloids, extraction, preconcentration methods, size distribution

Procedia PDF Downloads 215
23198 Fracture Crack Monitoring Using Digital Image Correlation Technique

Authors: B. G. Patel, A. K. Desai, S. G. Shah

Abstract:

The main objective of this paper is to develop a new measurement technique that does not touch the object. DIC is an advanced measurement technique used to measure the displacement of particles with very high accuracy. This powerful, innovative technique correlates two image segments to determine the similarity between them. For this study, nine geometrically similar beam specimens of different sizes, with fibers (steel fibers and glass fibers) and without fibers, were tested under three-point bending in a closed-loop servo-controlled machine with crack mouth opening displacement control at an opening rate of 0.0005 mm/sec. Digital images were captured before loading (undeformed state) and at different instances of loading, and were analyzed using correlation techniques to compute the surface displacements, crack opening and sliding displacements, load-point displacement, crack length, and crack tip location. It was seen that the CMOD and vertical load-point displacement computed using DIC analysis match well with those measured experimentally.
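A minimal sketch of the correlation step at the heart of DIC: slide a reference subset over a search window in the deformed image and pick the integer-pixel shift that maximizes the zero-normalized cross-correlation (ZNCC). Real DIC codes add sub-pixel interpolation and subset shape functions; the synthetic speckle images and subset/search sizes here are placeholders.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equally sized image subsets."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_subset(ref_img, def_img, top_left, subset=21, search=10):
    """Integer-pixel displacement of a square subset of the reference image
    inside the deformed image, found by exhaustive ZNCC search."""
    r0, c0 = top_left
    ref = ref_img[r0:r0 + subset, c0:c0 + subset]
    best = (-2.0, (0, 0))
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = def_img[r0 + dr:r0 + dr + subset, c0 + dc:c0 + dc + subset]
            if cand.shape == ref.shape:
                best = max(best, (zncc(ref, cand), (dr, dc)))
    return best[1], best[0]

# Synthetic speckle image shifted by a known (3, -2) pixel displacement
rng = np.random.default_rng(4)
img = rng.random((120, 120))
deformed = np.roll(img, shift=(3, -2), axis=(0, 1))
disp, score = match_subset(img, deformed, top_left=(50, 50))
print("recovered displacement (rows, cols):", disp, " ZNCC =", round(score, 3))
```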

Keywords: Digital Image Correlation, fibres, self compacting concrete, size effect

Procedia PDF Downloads 389
23197 ICT-Based Methodologies and Students’ Academic Performance and Retention in Physics: A Case with Newton’s Laws of Motion

Authors: Gabriel Ocheleka Aniedi A. Udo, Patum Wasinda

Abstract:

The study was carried out to appraise the impact of ICT-based teaching methodologies (video-taped instructions and PowerPoint presentations) on the academic performance and retention of secondary school students in Physics, with particular interest in Newton's Laws of Motion. The study was conducted in Cross River State, Nigeria, with a quasi-experimental research design using non-randomised pre-test and post-test control groups. The sample for the study consisted of 176 SS2 students drawn from four intact classes of four secondary schools within the study area. A Physics Achievement Test (PAT), with a reliability coefficient of 0.85, was used for data collection. Means and analysis of covariance (ANCOVA) were used in the treatment of the obtained data. The results of the study showed that there was a significant difference in the academic performance and retention of students taught using video-taped instructions and those taught using PowerPoint presentations. The findings showed that students taught using video-taped instructions had higher academic performance and retention than those taught using PowerPoint presentations. The study concludes that the use of blended ICT-based teaching methods can improve learners' academic performance and retention.

Keywords: video taped instruction (VTI), power point presentation (PPT), academic performance, retention, physics

Procedia PDF Downloads 91
23196 Trajectory Generation Procedure for Unmanned Aerial Vehicles

Authors: Amor Jnifene, Cedric Cocaud

Abstract:

One of the most constraining problems facing the development of autonomous vehicles is the limitation of current technologies. Guidance and navigation controllers need to be faster and more robust. Communication data links need to be more reliable and secure. For an Unmanned Aerial Vehicle (UAV) to be useful and fully autonomous, one important feature that needs to be an integral part of the navigation system is autonomous trajectory planning. The work discussed in this paper presents a method for on-line trajectory planning for UAVs. This method takes into account various constraints of different types, including specific vectors of approach close to target points, multiple objectives, and other constraints related to speed, altitude, and obstacle avoidance. The trajectory produced by the proposed method ensures a smooth transition between different segments, satisfies the minimum curvature imposed by the dynamics of the UAV, and finds the optimum velocity based on available atmospheric conditions. Given a set of objective points and waypoints, a skeleton of the trajectory is constructed first by linking all waypoints with straight segments based on the order in which they are encountered in the path. Secondly, vectors of approach (VoA) are assigned to objective waypoints and their preceding transitional waypoint, if any. Thirdly, the straight segments are replaced by 3D curvilinear trajectories taking into account the aircraft dynamics. In summary, this work presents a method for on-line 3D trajectory generation (TG) for Unmanned Aerial Vehicles (UAVs). The method takes as inputs a series of waypoints and an optional vector of approach for each of the waypoints. Using a dynamic model based on the performance equations of fixed-wing aircraft, the TG computes a set of 3D parametric curves establishing a course between every pair of waypoints, and assembles these sets of curves to construct a complete trajectory. The algorithm ensures geometric continuity at each connection point between two sets of curves. The geometry of the trajectory is optimized according to the dynamic characteristics of the aircraft such that the result translates into a series of dynamically feasible maneuvers.
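A minimal sketch of one way to build a smooth segment between two waypoints that respects a vector of approach at the target: a cubic Hermite curve whose end tangents are aligned with the departure course and the VoA. The waypoints and tangent scaling are arbitrary placeholders, and the paper's TG additionally enforces curvature and velocity limits derived from the aircraft dynamics, which this sketch does not.

```python
import numpy as np

def hermite_segment(p0, p1, t0, t1, n=50):
    """Cubic Hermite curve from p0 to p1 with end tangents t0 and t1
    (3-D points and tangents). Matching t1 of one segment with t0 of the
    next gives geometric (C1) continuity at the connection waypoint."""
    p0, p1, t0, t1 = (np.asarray(v, float) for v in (p0, p1, t0, t1))
    s = np.linspace(0.0, 1.0, n)[:, None]
    h00 = 2 * s**3 - 3 * s**2 + 1
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * p0 + h10 * t0 + h01 * p1 + h11 * t1

# Waypoint at (0, 0, 100) heading east; objective at (400, 300, 120) with a
# required vector of approach from the south (heading north on arrival).
wp, obj = [0.0, 0.0, 100.0], [400.0, 300.0, 120.0]
tangent_scale = 300.0                                # larger values stretch the curve
t_out = tangent_scale * np.array([1.0, 0.0, 0.0])    # leave heading east
voa = tangent_scale * np.array([0.0, 1.0, 0.0])      # arrive heading north
path = hermite_segment(wp, obj, t_out, voa)
print("first point:", path[0], " last point:", path[-1])
```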

Keywords: trajectory planning, unmanned autonomous air vehicle, vector of approach, waypoints

Procedia PDF Downloads 409
23195 Conjugate Heat Transfer Analysis of a Combustion Chamber using ANSYS Computational Fluid Dynamics to Estimate the Thermocouple Positioning in a Chamber Wall

Authors: Muzna Tariq, Ihtzaz Qamar

Abstract:

In most engineering cases, the working temperatures inside a combustion chamber are high enough that they lie beyond the operational range of thermocouples. Furthermore, design and manufacturing limitations restrict the use of internal thermocouples in many applications. Heat transfer inside a combustion chamber is caused by the interaction of the post-combustion hot fluid with the chamber wall. Heat transfer that involves an interaction between a fluid and a solid is categorized as Conjugate Heat Transfer (CHT). Therefore, a CHT analysis is performed using the ANSYS CFD tool to estimate theoretically precise thermocouple positions on the combustion chamber wall where excessive temperatures (beyond the thermocouple range) can be avoided. In accordance with these Computational Fluid Dynamics (CFD) results, a combustion chamber is designed, and a prototype is manufactured with multiple thermocouple ports positioned at the specified distances so that the temperature of the hot gases can be measured on the chamber wall where the temperatures do not exceed the thermocouple working range.

Keywords: computational fluid dynamics, conduction, conjugate heat transfer, convection, fluid flow, thermocouples

Procedia PDF Downloads 147
23194 Relation of Optimal Pilot Offsets in the Shifted Constellation-Based Method for the Detection of Pilot Contamination Attacks

Authors: Dimitriya A. Mihaylova, Zlatka V. Valkova-Jarvis, Georgi L. Iliev

Abstract:

One possible approach to maintaining the security of communication systems relies on physical layer security mechanisms. However, in wireless time division duplex systems, where uplink and downlink channels are reciprocal, the channel estimation procedure is exposed to attacks known as pilot contamination, whose aim is to have an enhanced data signal sent to the malicious user. The Shifted 2-N-PSK method involves two random legitimate pilots in the training phase, each of which belongs to a constellation shifted from the original N-PSK symbols by a certain number of degrees. In this paper, the legitimate pilots' offset values and their influence on the detection capabilities of the Shifted 2-N-PSK method are investigated. As the implementation of the technique depends on the relation between the shift angles rather than on their specific values, the optimal interconnection between the two legitimate constellations is investigated. The results show that no regularity exists in the relation between the pilot contamination attack (PCA) detection probability and the choice of offset values. Therefore, an adversary who aims to obtain the exact offset values can only employ a brute-force attack, but the large number of possible combinations for the shifted constellations makes such an attack difficult to mount successfully. For this reason, the number of optimal shift value pairs is also studied for both 100% and 98% probabilities of detecting pilot contamination attacks. Although the Shifted 2-N-PSK method has been broadly studied in different signal-to-noise ratio scenarios, in multi-cell systems the interference from the signals in other cells should also be taken into account. Therefore, the impact of inter-cell interference on the performance of the method is investigated by means of a large number of simulations. The results show that the detection probability of the Shifted 2-N-PSK decreases inversely with the signal-to-interference-plus-noise ratio.
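A small sketch of the pilot construction described above: take an N-PSK constellation, rotate it by two legitimate offset angles, and draw one random pilot from each shifted constellation. The value of N and the offsets are placeholders, and the detection statistic itself is not reproduced here.

```python
import numpy as np

def shifted_npsk(n_psk, offset_deg):
    """N-PSK constellation rotated by offset_deg degrees."""
    phases = 2 * np.pi * np.arange(n_psk) / n_psk + np.deg2rad(offset_deg)
    return np.exp(1j * phases)

def legitimate_pilots(n_psk, offset1_deg, offset2_deg, rng=np.random.default_rng(0)):
    """Two training pilots, one drawn at random from each shifted constellation."""
    c1 = shifted_npsk(n_psk, offset1_deg)
    c2 = shifted_npsk(n_psk, offset2_deg)
    return rng.choice(c1), rng.choice(c2)

p1, p2 = legitimate_pilots(n_psk=8, offset1_deg=10.0, offset2_deg=25.0)
print("pilot 1:", np.round(p1, 3), " pilot 2:", np.round(p2, 3))
# What matters for detection is the relative shift between the constellations:
print("relative offset:", 25.0 - 10.0, "degrees")
```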

Keywords: channel estimation, inter-cell interference, pilot contamination attacks, wireless communications

Procedia PDF Downloads 217