Search results for: motor parameter estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4580

560 Conflation Methodology Applied to Flood Recovery

Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong

Abstract:

Current flood risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being caused by nuisance flooding, and its long-term effects on communities, are not typically included in risk assessments. An approach was developed that combines the probability of recovering from a severe flooding event with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (CFR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The CFR model assesses the variation contributed by each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful when the input distributions have dissimilar variances. The CFR is defined as a single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The CFR model is more accurate than averaging individual observations before calculating the mean and variance, or than averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly the average of the input distributions' means, without the additional information provided by each individual distribution's variance.
When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to those of the weighted least squares method or best linear unbiased estimation. Combining severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as those in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources: severe flooding events and nuisance flooding events.
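As a numerical sketch of the conflation idea, the normalized product of two probability density functions can be computed on a grid. The recovery-time distributions and parameters below are illustrative assumptions, not values from the study; normal inputs are used (rather than exponential) because the conflated mean then has a closed-form check, namely the inverse-variance weighted mean that favors the distribution with minimum variation:

```python
import numpy as np

# Hypothetical recovery-time distributions (days) -- illustrative values only.
def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(0, 100, 20001)
dx = x[1] - x[0]
severe = normal_pdf(x, 30.0, 10.0)    # recovery from a severe event
nuisance = normal_pdf(x, 10.0, 2.0)   # recovery from a nuisance event

# Conflation: the product of the PDFs, renormalized to a valid density.
conflated = severe * nuisance
conflated /= conflated.sum() * dx

mean_c = (x * conflated).sum() * dx

# For normal inputs the conflated mean equals the precision-weighted mean.
w1, w2 = 1 / 10.0**2, 1 / 2.0**2
expected = (w1 * 30.0 + w2 * 10.0) / (w1 + w2)
print(round(mean_c, 2), round(expected, 2))
```

The conflated mean lands between the parent means, pulled strongly toward the lower-variance input, which is the weighting behavior the abstract describes.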

Keywords: community resilience, conflation, flood risk, nuisance flooding

Procedia PDF Downloads 89
559 Preliminary Evaluation of Decommissioning Wastes for the First Commercial Nuclear Power Reactor in South Korea

Authors: Kyomin Lee, Joohee Kim, Sangho Kang

Abstract:

Kori Unit 1, the first commercial nuclear power reactor in South Korea, a 587 MWe pressurized water reactor that started operation in 1978, was permanently shut down in June 2017 without an additional operating license extension. Kori Unit 1 is thus scheduled to become the first nuclear power unit in the country to enter the decommissioning phase. In this study, a preliminary evaluation of the decommissioning wastes for Kori Unit 1 was performed based on the following series of processes: first, the plant inventory was investigated based on various documents (i.e., equipment/component lists, construction records, and general arrangement drawings). Second, the radiological conditions of systems, structures and components (SSCs) were established to estimate the amount of radioactive waste by waste classification. Third, the waste management strategies for Kori Unit 1, including waste packaging, were established. Fourth, the proper decontamination and dismantling (D&D) technologies were selected considering various factors. Finally, the amount of decommissioning waste by classification for Kori Unit 1 was estimated using the DeCAT program, which was developed by KEPCO-E&C for decommissioning cost estimation. The preliminary evaluation results showed that the expected amounts of decommissioning wastes were less than about 2% and 8% of the total wastes generated (i.e., the sum of clean wastes and radwastes) before and after waste processing, respectively, and that the majority of contaminated material was carbon or alloy steel and stainless steel. In addition, within the limits of available information, the results of the evaluation were compared with results from various decommissioning experience data and international/national decommissioning studies. The comparison showed that the radioactive waste amounts from the Kori Unit 1 decommissioning were much less than those from plants decommissioned in the U.S.
and were comparable to those from plants in Europe. This result stems from the differences in disposal cost and clearance criteria (i.e., free release level) between the U.S. and other countries. The preliminary evaluation performed using the methodology established in this study will provide useful information for establishing the decommissioning plan, covering the decommissioning schedule and the waste management strategy, including the transportation, packaging, handling, and disposal of radioactive wastes.

Keywords: characterization, classification, decommissioning, decontamination and dismantling, Kori 1, radioactive waste

Procedia PDF Downloads 198
558 Statistical Modeling of Constituents in Ash Evolved From Pulverized Coal Combustion

Authors: Esam Jassim

Abstract:

Industries using conventional fossil fuels have an interest in better understanding the mechanism of particulate formation during combustion, since it is responsible for the emission of undesired inorganic elements that directly impact atmospheric pollution levels. Fine and ultrafine particulates tend to escape flue gas cleaning devices into the atmosphere. They also preferentially collect on surfaces in power systems, resulting in increased corrosion and reduced heat transfer in thermal units, and they severely impact human health. This adverse effect manifests particularly in regions of the world where coal is the dominant source of energy for consumption. This study highlights the behavior of calcium transformation as mineral grains versus organically associated inorganic components during pulverized coal combustion. The influence of the form of calcium present on the coarse, fine, and ultrafine mode formation mechanisms is also presented. The impact of two sub-bituminous coals on particle size and calcium composition evolution during combustion is assessed. Three mixed blends, named Blends 1, 2, and 3, are selected according to the ratio of coal A to coal B by weight; the calcium percentage in the original coal increases going from Blend 1 to 3. A mathematical model and a new approach to describing constituent distribution are proposed. The experimental calcium distribution in ash is also modeled using a Poisson distribution, and a novel parameter, called the elemental index λ, is introduced as a measure of element distribution. Results show that calcium originally present in the coal as mineral grains has an index of 17, whereas organically associated calcium transformed to fly ash is best described by an elemental index λ of 7. As an alkaline-earth element, calcium is considered the fundamental element responsible for boiler deficiency, since it is the major player in the mechanism of the ash slagging process.
The mechanism of particle size distribution and the mineral species of ash particles are presented using CCSEM and size-segregated ash characteristics. Conclusions are drawn from the analysis of pulverized coal ash generated from a utility-scale boiler.
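The Poisson modeling step above can be illustrated with a short sketch. The grain counts below are hypothetical stand-ins (not data from the study); the point is that the maximum-likelihood estimate of a Poisson index λ is simply the sample mean of the counts:

```python
import numpy as np
from math import exp, factorial

# Hypothetical counts of calcium-bearing grains observed per ash sample;
# illustrative data only, not measurements from the study.
counts = np.array([15, 18, 16, 17, 19, 17, 16, 18])

# The Poisson MLE for the index lambda is the sample mean.
lam = counts.mean()

def poisson_pmf(k, lam):
    # P(K = k) for a Poisson distribution with rate lambda.
    return lam ** k * exp(-lam) / factorial(k)

# Probability of observing exactly 17 grains under the fitted model.
p17 = poisson_pmf(17, lam)
print(round(lam, 2), round(p17, 4))
```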

Keywords: coal combustion, inorganic element, calcium evolution, fluid dynamics

Procedia PDF Downloads 321
557 Development and Validation of First Derivative Method and Artificial Neural Network for Simultaneous Spectrophotometric Determination of Two Closely Related Antioxidant Nutraceuticals in Their Binary Mixture

Authors: Mohamed Korany, Azza Gazy, Essam Khamis, Marwa Adel, Miranda Fawzy

Abstract:

Background: Two new, simple and specific methods were developed and validated in accordance with ICH guidelines: first, a zero-crossing first-derivative technique, and second, a chemometric-assisted spectrophotometric artificial neural network (ANN). Both methods were used for the simultaneous estimation of two closely related antioxidant nutraceuticals, Coenzyme Q10 (Q), also known as ubidecarenone or ubiquinone-10, and Vitamin E (E), alpha-tocopherol acetate, in their pharmaceutical binary mixture. Results: In the first method, by applying the first derivative, Q and E were alternately determined, each at the zero-crossing of the other. The D1 amplitudes of Q and E, at 285 nm and 235 nm respectively, were recorded and correlated to their concentrations. The calibration curves are linear over the concentration ranges of 10-60 and 5.6-70 μg mL-1 for Q and E, respectively. In the second method, an ANN (as a multivariate calibration method) was developed and applied for the simultaneous determination of both analytes. A training set (or concentration set) of 90 different synthetic mixtures containing Q and E, in wide concentration ranges of 0-100 µg/mL and 0-556 µg/mL respectively, was prepared in ethanol. The absorption spectra of the training set were recorded in the spectral region of 230-300 nm. A gradient-descent back-propagation ANN chemometric calibration was computed by relating the concentration sets (x-block) to their corresponding absorption data (y-block). Another set of 45 synthetic mixtures of the two drugs, in a defined range, was used to validate the proposed network. Neither chemical separation, a preparation stage, nor mathematical graphical treatment was required. Conclusions: The proposed methods were successfully applied to the assay of Q and E in laboratory-prepared mixtures and a combined pharmaceutical tablet, with excellent recoveries.
The ANN method was superior to the derivative technique, as the former determined both drugs under non-linear experimental conditions. It also offers rapidity, high accuracy, and savings in effort and cost, and its application requires no specialist analyst. Although the ANN technique needs a large training set, it is the method of choice for the routine analysis of Q and E tablets. No interference was observed from common pharmaceutical additives. The results of the two methods were compared with each other.
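The zero-crossing principle can be sketched numerically: at the wavelength where one component's first-derivative spectrum crosses zero, the measured D1 amplitude of a mixture depends only on the other component. The Gaussian bands and band centers below are illustrative assumptions, not the published Q and E spectra:

```python
import numpy as np

# Simulated overlapping absorption bands standing in for Q and E.
wl = np.linspace(220, 320, 2001)

def band(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

spec_q = band(275, 15)   # stand-in for coenzyme Q10
spec_e = band(245, 12)   # stand-in for vitamin E

def d1(spectrum):
    # First derivative dA/d(wavelength), computed numerically.
    return np.gradient(spectrum, wl)

# Q's derivative crosses zero at its band maximum (275 here), so Q
# contributes nothing to D1 there and the amplitude tracks E alone.
i_zc = np.argmin(np.abs(wl - 275))
for conc_e in (1.0, 2.0, 3.0):
    mix = 1.0 * spec_q + conc_e * spec_e
    print(conc_e, round(d1(mix)[i_zc], 5))
```

Because differentiation is linear, the printed amplitudes scale in direct proportion to the simulated E concentration, which is exactly what makes the zero-crossing calibration curve linear.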

Keywords: coenzyme Q10, vitamin E, chemometry, quantitative analysis, first derivative spectrophotometry, artificial neural network

Procedia PDF Downloads 433
556 Tool Wear of Metal Matrix Composite with 10 wt% AlN Reinforcement Using TiB2 Cutting Tool

Authors: M. S. Said, J. A. Ghani, C. H. Che Hassan, N. N. Wan, M. A. Selamat, R. Othman

Abstract:

Metal matrix composites (MMCs) have attracted considerable attention as a result of their ability to provide higher strength, modulus, toughness, impact properties, wear resistance and corrosion resistance than unreinforced alloys. Aluminium-silicon (Al/Si) alloy metal matrix composites have been widely used in various industrial sectors such as transportation, domestic equipment, aerospace, military, and construction. The aluminium-silicon alloy studied here is an MMC reinforced with aluminium nitride (AlN) particles, a new-generation material for automotive and aerospace applications. AlN is an advanced material combining light weight with high strength, hardness and stiffness, and it has good future prospects. However, the high proportion of ceramic particle reinforcement, together with the irregular nature of the particles distributed through the matrix, is the main problem leading to machining difficulties. This paper examines tool wear when milling AlSi/AlN metal matrix composite using a TiB2-coated carbide cutting tool. The volume fraction of the AlN reinforcement particles was 10%. The milling process was carried out under dry cutting conditions. Three sets of cutting parameters were used for the TiB2-coated carbide inserts: (i) cutting speed 230 m/min, feed rate 0.4 mm/tooth, depth of cut (DOC) 0.5 mm; (ii) 300 m/min, 0.8 mm/tooth, DOC 0.5 mm; and (iii) 370 m/min, 0.8 mm/tooth, DOC 0.4 mm. A Sometech SV-35 video microscope system was used for the tool wear measurements. The results revealed that tool life increases with cutting speed: the high-speed condition (370 m/min, 0.8 mm/tooth, DOC 0.4 mm) constituted the optimum condition, with the longest tool life of 123.2 min, while the medium-speed condition (300 m/min, 0.8 mm/tooth, DOC 0.5 mm) gave a tool life of 119.86 min and the low-speed condition gave 119.66 min.
The high cutting speed thus gives the best parameters for cutting AlSi/AlN MMC materials. These results will help manufacturers in machining AlSi/AlN MMC materials.

Keywords: AlSi/AlN metal matrix composite, milling process, tool wear, TiB2 coated carbide tool, manufacturing engineering

Procedia PDF Downloads 414
555 Development of a Mechanical Ventilator Using A Manual Artificial Respiration Unit

Authors: Isomar Lima da Silva, Alcilene Batalha Pontes, Aristeu Jonatas Leite de Oliveira, Roberto Maia Augusto

Abstract:

Context: Mechanical ventilators are medical devices that help provide oxygen and ventilation to patients with respiratory difficulties. The equipment consists of a manual breathing unit, which can be operated by a doctor or nurse, and a mechanical ventilator that controls the airflow and pressure in the patient's respiratory system. This type of ventilator is commonly used in emergencies and intensive care units, where it is necessary to provide breathing support to critically ill or injured patients. Objective: In this context, this work aims to develop a reliable and low-cost mechanical ventilator to meet the demand of hospitals in treating people affected by COVID-19 and other severe respiratory diseases, offering a treatment alternative to the mechanical ventilators currently available on the market. Method: The project presents the development of a low-cost auxiliary ventilator with a controlled ventilatory system, assisted by integrated hardware and firmware for respiratory cycle control in non-invasive mechanical ventilation treatments, using a manual artificial respiration unit. The hardware includes pressure sensors capable of identifying positive expiratory pressure, peak inspiratory flow, and injected air volume. The embedded system processes the data sent by the sensors and ensures efficient patient breathing through the operation of the sensors, microcontroller, and actuator, providing patient data to the healthcare professional (system operator) through the graphical interface and enabling clinical parameter adjustments as needed. Results: The test data of the developed mechanical ventilator showed satisfactory results in terms of performance and reliability, indicating that the equipment can be a viable alternative to the commercial mechanical ventilators currently available, offering a low-cost solution to meet the increasing demand for respiratory support equipment.

Keywords: mechanical ventilators, breathing, medical equipment, COVID-19, intensive care units

Procedia PDF Downloads 55
554 Partial Least Squares Regression for High-Dimensional and Highly Correlated Data

Authors: Mohammed Abdullah Alshahrani

Abstract:

The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. 
Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.
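A minimal sketch of PLS in the p ≫ n setting the abstract describes can be given with a pure-NumPy, NIPALS-style PLS1 fit. All data and dimensions below are synthetic assumptions for illustration (a few latent factors driving many correlated predictors, as in NIR spectra or CNA profiles), not the study's data or algorithm variants:

```python
import numpy as np

# Synthetic high-dimensional, highly correlated data: 3 latent factors
# drive 500 predictors observed on only 40 samples.
rng = np.random.default_rng(0)
n, p, k = 40, 500, 3
T_true = rng.standard_normal((n, k))
X = T_true @ rng.standard_normal((k, p)) + 0.1 * rng.standard_normal((n, p))
y = T_true @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(n)

def pls1(X, y, ncomp):
    """NIPALS-style PLS1: extract latent components, return centering + coefficients."""
    x_mean, y_mean = X.mean(0), y.mean()
    Xk, yk = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(ncomp):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)        # weight vector (direction)
        t = Xk @ w                    # latent component (score)
        p_ = Xk.T @ t / (t @ t)       # X loading
        q_ = yk @ t / (t @ t)         # y loading
        Xk = Xk - np.outer(t, p_)     # deflate X
        yk = yk - q_ * t              # deflate y
        W.append(w); P.append(p_); q.append(q_)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    beta = W @ np.linalg.solve(P.T @ W, q)   # regression coefficients
    return x_mean, y_mean, beta

x_mean, y_mean, beta = pls1(X, y, ncomp=3)
y_hat = (X - x_mean) @ beta + y_mean
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 3))   # note: OLS has no unique solution here, since p > n
```

With the number of components matched to the number of underlying factors, the fit recovers the signal well, even though ordinary least squares is not even uniquely defined for p > n.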

Keywords: partial least squares regression, genetics data, negative filter factors, high-dimensional data, highly correlated data

Procedia PDF Downloads 36
553 A Dual-Mode Infinite Horizon Predictive Control Algorithm for Load Tracking in PUSPATI TRIGA Reactor

Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha

Abstract:

The PUSPATI TRIGA Reactor (RTP) in Malaysia reached its first criticality on June 28, 1982, with a thermal power capacity of 1 MW. The Feedback Control Algorithm (FCA), a conventional Proportional-Integral (PI) controller, is the present power control method used to control the fission process in the RTP. It is important to ensure that the core power is always stable and follows load tracking within acceptable steady-state error and with minimum settling time to reach steady-state power. At present, the system's power tracking performance cannot be considered well-posed. However, there is still potential to improve on current performance by developing a novel, next-generation design for nuclear core power control. In this paper, the dual-mode prediction proposed in Optimal Model Predictive Control (OMPC) is presented in a state-space model to control the core power. The model for core power control was based on mathematical models of the reactor core, the OMPC, and a control rod selection algorithm. The mathematical models of the reactor core were based on neutronic, thermal-hydraulic, and reactivity models. The dual-mode prediction in OMPC, covering transient and terminal modes, was based on the implementation of a Linear Quadratic Regulator (LQR) in designing the core power control. The combination of dual-mode prediction with a Lyapunov approach, which deals with summations in the cost function over an infinite horizon, is intended to eliminate some of the fundamental weaknesses of MPC. This paper shows the behaviour of the OMPC in dealing with tracking, the regulation problem, and disturbance rejection, and in catering for parameter uncertainty. The tracking and regulating performance of the conventional controller and the OMPC are compared by numerical simulation. In conclusion, the proposed OMPC has shown significant performance in load tracking and regulating core power for a nuclear reactor, with guaranteed closed-loop stability.
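The terminal (beyond-horizon) mode of such a dual-mode scheme can be sketched by computing an LQR gain from the discrete-time Riccati equation, so that beyond the prediction horizon the input follows u = -Kx. The 2-state model below is a generic illustrative assumption, not the RTP core model:

```python
import numpy as np

# Generic discrete-time 2-state system (illustrative, not the reactor model).
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)          # state weight in the infinite-horizon cost
R = np.array([[1.0]])  # input weight

# Fixed-point iteration of the discrete Riccati recursion:
# P <- Q + A'PA - A'PB (R + B'PB)^(-1) B'PA
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# The terminal controller u = -Kx must stabilize the closed loop:
eigs = np.linalg.eigvals(A - B @ K)
print(np.abs(eigs))    # all closed-loop poles inside the unit circle
```

The converged matrix P is exactly the terminal cost used to replace the infinite tail of the summation in the dual-mode cost function.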

Keywords: core power control, dual-mode prediction, load tracking, optimal model predictive control

Procedia PDF Downloads 150
552 Quantification of the Erosion Effect on Small Caliber Guns: Experimental and Numerical Analysis

Authors: Dhouibi Mohamed, Stirbu Bogdan, Chabotier André, Pirlot Marc

Abstract:

The effects of erosion and wear on the performance of small caliber guns have been analyzed through numerical and experimental studies. Mainly, qualitative observations have been performed; correlations between the volume change of the chamber and the maximum pressure are limited. This paper focuses on the development of a numerical model to predict the evolution of the maximum pressure as the interior shape of the chamber changes over the weapon's life phases. To fulfill this goal, an experimental campaign, followed by a numerical simulation study, was carried out. Two test barrels, 5.56x45mm NATO and 7.62x51mm NATO, were considered. First, a Coordinate Measuring Machine (CMM) with a contact scanning probe was used to measure the interior profile of the barrels after each 300-shot cycle until they were worn out. Simultaneously, the EPVAT (Electronic Pressure Velocity and Action Time) method, with a dedicated WEIBEL radar, was used to measure (i) the chamber pressure, (ii) the action time, and (iii) the bullet velocity in each barrel. Second, a numerical simulation study was carried out: a coupled interior ballistics model was developed using the dynamic finite element program LS-DYNA. In this work, two different models were elaborated: (i) a coupled Eulerian-Lagrangian model using fluid-structure interaction (FSI) techniques, and (ii) a coupled thermo-mechanical finite element model using a lumped parameter model (LPM) as a subroutine. These numerical models were validated against three experimental results: (i) the muzzle velocity, (ii) the chamber pressure, and (iii) the surface morphology of fired projectiles. The results show good agreement between experiments and numerical simulations. Next, a comparison between the two models was conducted; the projectile motions, the dynamic engraving resistances, and the maximum pressures were compared and analyzed.
Finally, using the database thus obtained, a statistical correlation between the muzzle velocity, the maximum pressure, and the chamber volume was established.
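The kind of statistical correlation described can be sketched as a least-squares fit. The pressure-volume data below are synthetic placeholders generated under an assumed linear erosion trend, purely to illustrate the procedure, and are not measurements from the study:

```python
import numpy as np

# Synthetic stand-in data: as erosion enlarges the chamber, the relative
# chamber volume grows and the maximum pressure drops (assumed trend).
rng = np.random.default_rng(1)
volume = np.linspace(1.00, 1.08, 25)                  # relative chamber volume
pressure = 380.0 - 900.0 * (volume - 1.0) + rng.normal(0, 2.0, 25)  # MPa

r = np.corrcoef(volume, pressure)[0, 1]               # Pearson correlation
slope, intercept = np.polyfit(volume, pressure, 1)    # least-squares line
print(round(r, 3), round(slope, 1))
```

A strong negative correlation of this form is what would let chamber-volume measurements from the CMM predict the evolution of maximum pressure over the barrel's life.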

Keywords: engraving process, finite element analysis, gun barrel erosion, interior ballistics, statistical correlation

Procedia PDF Downloads 197
551 Formulation and Evaluation of Curcumin-Zn (II) Microparticulate Drug Delivery System for Antimalarial Activity

Authors: M. R. Aher, R. B. Laware, G. S. Asane, B. S. Kuchekar

Abstract:

Objective: Studies have shown that a new combination therapy of artemisinin derivatives with curcumin is unique, with potential advantages over known ACTs. In the present study, an attempt was made to prepare a microparticulate drug delivery system of the Curcumin-Zn complex and to evaluate it in combination with artemether for antimalarial activity. Material and method: The Curcumin-Zn complex was prepared and encapsulated using sodium alginate. The microparticles thus obtained were further coated with various enteric polymers at different coating thicknesses to control the release. Microparticles were evaluated for encapsulation efficiency, drug loading, and in vitro drug release. Roentgenographic studies were conducted in rabbits with a BaSO4-tagged formulation. The optimized formulation was screened for antimalarial activity using the P. berghei-infected mouse survival test and % parasitemia inhibition, alone (three oral doses of 5 mg/day) and in combination with artemether (i.p. 500, 1000 and 1500 µg). Curcumin-Zn(II) was estimated in serum after oral administration to rats by spectrofluorometry. Result: Microparticles coated with cellulose acetate phthalate showed the most satisfactory and controlled release, requiring 479 min for 60% drug release. X-ray images taken at different time intervals confirmed the retention of the formulation in the GI tract. Estimation of curcumin in serum by spectrofluorometry showed that the drug concentration is maintained in the blood for a longer time, with a tmax of 6 hours. The survival time (40 days post treatment) of mice infected with P. berghei was compared with survival after treatment with the Curcumin-Zn(II) microparticle-artemether combination, the Curcumin-Zn complex, or artemether alone. Oral administration of Curcumin-Zn(II)-artemether prolonged the survival of P. berghei-infected mice. All the mice treated with Curcumin-Zn(II) microparticles (5 mg/day) plus artemether (1000 µg) survived for more than 40 days and recovered with no detectable parasitemia.
Administration of the Curcumin-Zn(II)-artemether combination reduced parasitemia in mice by more than 90% compared with that in control mice for the first 3 days after treatment. Conclusion: The antimalarial activity of the Curcumin-Zn-artemether combination was more pronounced than monotherapy. A single dose of 1000 µg of artemether in the Curcumin-Zn combination gave complete protection in P. berghei-infected mice. This may reduce the chances of drug resistance in malaria management.

Keywords: formulation, microparticulate drug delivery, antimalarial, pharmaceutics

Procedia PDF Downloads 384
550 Compensation of Bulk Charge Carriers in Bismuth Based Topological Insulators via Swift Heavy Ion Irradiation

Authors: Jyoti Yadav, Rini Singh, Anoop M.D, Nisha Yadav, N. Srinivasa Rao, Fouran Singh, Takayuki Ichikawa, Ankur Jain, Kamlendra Awasthi, Manoj Kumar

Abstract:

Nanocrystalline films exhibit defects and strain induced by their grain boundaries. Defects and strain affect the physical as well as the topological insulating properties of Bi2Te3 thin films by changing their electronic structure. In the present studies, the effect of Ni7+ ion irradiation on the physical and electrical properties of Bi2Te3 thin films was studied. The films were irradiated at five different fluences (5x1011, 1x1012, 3x1012, 5x1012, and 1x1013 ions/cm2). The thin films, synthesized using the e-beam technique, possess a rhombohedral crystal structure with the R-3m space group. The average crystallite size, as determined from x-ray diffraction (XRD) peak broadening, was found to be 18.5 ± 5 nm. It was also observed that irradiation increases the induced strain. Raman spectra of the films demonstrate the splitting of the A1u modes originating from vibrations along the c-axis, consistent with the variation in the lattice parameter 'c' observed through XRD. The atomic force microscopy study indicates a decrease in surface roughness up to a fluence of 3x1012 ions/cm2; increasing the fluence further increases the roughness. The decrease in roughness may be due to the growth of smaller nano-crystallites on the surface of the thin films caused by irradiation-induced annealing. X-ray photoelectron spectroscopy studies reveal the composition to be in close agreement with the nominal value, i.e., Bi2Te3. The resistivity versus temperature measurements revealed an increase in resistivity up to a fluence of 3x1012 ions/cm2 and a decrease on further increasing the fluence. The variation in electrical resistivity is corroborated by the change in carrier concentration, as studied through low-temperature Hall measurements. A crossover from n-type to p-type carriers was achieved in the irradiated films. Interestingly, tuning of the Fermi level by compensating the bulk carriers using ion irradiation could be achieved.

Keywords: annealing, irradiation, Fermi level, tuning

Procedia PDF Downloads 130
549 Medical Dressing Induced Digital Ischemia in Patient with Congenital Insensitivity to Pain and Anhidrosis

Authors: Abdulwhab Alotaibi, Abdullah Alzahrani, Ziyad Bokhari, Abdulelah Alghamdi

Abstract:

First described in 1975 by Dr. Miller, medical dressings are an uncommon but possible cause of digital ischemia of the hand due to a tourniquet-like effect. This complication has been reported across a wide range of age groups, yet the pediatric population seems particularly vulnerable. Multiple dressing types have been reported to cause ischemic injury, such as elastic wraps, tubular gauze, and self-adherent dressings. We present a case of medical dressing-induced digital ischemia in a patient with congenital insensitivity to pain and anhidrosis (CIPA), which further challenged the discovery of the condition. An 8-year-old girl, a known case of CIPA, was brought by her mother to the ER after a nail bed injury, which the mother had managed by applying an elastic wrap that was left on for 24 hours. On discovering this, the mother immediately removed the elastic band and noticed that the fingertip was black and cold with tense bullae. By arrival at the ER, the color had changed to dark purple with bluish discoloration at the tip. On examination, there were well-demarcated tense bullae on the distal right fifth finger. The digit was neurovascularly intact, pulse oximetry on the distal digit read 100%, and capillary refill time was delayed. She was seen by plastic surgery, conservative management was recommended, and the patient was discharged with safety netting. Two days later, the patient returned for a follow-up visit, at which her condition demonstrated significant improvement: the bullae had ruptured, leaving behind sloughed skin; capillary refill and pulse oximetry were both within normal limits; and although sensory function could not be assessed, her motor function and range of motion were normal. Topical bacitracin and bandage dressings were applied to the eroded skin, and the patient was scheduled for a follow-up in 2 weeks.
Preventively, it is advisable to avoid the commonly implicated dressings, such as elastic, tubular gauze, or self-adherent wraps, in hand or digital injuries when possible. In cases where the use of these dressings is necessary, appropriate precautions must be taken; Dr. Makarewich proposed the following five measures to help minimize the incidence of injury: (1) unwrapping 12 inches of the dressing before rolling the injured finger; (2) wrapping from distal to proximal with minimal tension to avoid vascular embarrassment; (3) the use of 5-25 inch to overlap the entire wrap; (4) maintaining light pressure over the wrap to allow adherence of the dressing; and (5) minimizing the number of layers used to wrap the affected digit. Assessing capillary refill after application can also help in determining the patency of the supplying blood vessels. It is also important to determine selectively whether the patient is a candidate for conservative management, as a tailored approach can help maximize positive outcomes for patients.

Keywords: congenital insensitivity to pain, digital ischemia, medical dressing, conservative management

Procedia PDF Downloads 57
548 Estimation of Biomedical Waste Generated in a Tertiary Care Hospital in New Delhi

Authors: Priyanka Sharma, Manoj Jais, Poonam Gupta, Suraiya K. Ansari, Ravinder Kaur

Abstract:

Introduction: As much as health care is necessary for the population, so is the management of the biomedical waste produced. Biomedical waste (BMW) is a wide terminology used for the waste material produced during the diagnosis, treatment or immunization of human beings and animals, in research, or in the production or testing of biological products. Biomedical waste management is a chain of processes from the point of generation of biomedical waste to its final disposal in the correct and proper way assigned for that particular type of waste. Any deviation from these processes leads to improper disposal of biomedical waste, which is itself a major health hazard. Proper segregation of biomedical waste is the key to biomedical waste management. Improper disposal of BMW can cause sharps injuries, which may lead to HIV, hepatitis B virus, or hepatitis C virus infections. Therefore, proper disposal of BMW is of utmost importance. Health care establishments segregate biomedical waste and dispose of it as per the biomedical waste management rules in India. Objectives: This study was done to observe the current trends in biomedical waste generated in a tertiary care hospital in Delhi. Methodology: Biomedical waste management rounds were conducted in the hospital wards. Relevant details were collected and analysed, and the sites with maximum biomedical waste generation were identified. All the data were cross-checked with the common collection site. Results: The total amount of waste generated in the hospital from January 2014 to December 2014 was 639,547 kg, of which 70.5% was general (non-hazardous) waste and the remaining 29.5% was BMW, which consisted of highly infectious waste (12.2%), disposable plastic waste (16.3%) and sharps (1%). The sites producing the maximum quantity of biomedical waste were the Obstetrics and Gynaecology wards, with a total biomedical waste production of 45.8%, followed by the Paediatrics, Surgery and Medicine wards with 21.2%, 4.6% and 4.3% respectively.
The maximum average biomedical waste generated was by the Obstetrics and Gynaecology ward, with 0.7 kg/bed/day, followed by the Paediatrics, Surgery and Medicine wards with 0.29, 0.28 and 0.18 kg/bed/day respectively. Conclusions: Hospitals should pay attention to the sites which produce a large amount of BMW, to avoid improper segregation of biomedical waste. Induction and refresher training programs on biomedical waste management should also be conducted to avoid improper management of biomedical waste, and healthcare workers should be made aware of the risks of poor biomedical waste management.

Keywords: biomedical waste, biomedical waste management, hospital-tertiary care, New Delhi

Procedia PDF Downloads 235
547 Extreme Heat and Workforce Health in Southern Nevada

Authors: Erick R. Bandala, Kebret Kebede, Nicole Johnson, Rebecca Murray, Destiny Green, John Mejia, Polioptro Martinez-Austria

Abstract:

Summer temperature data from Clark County were collected and used to estimate two heat-related indexes: the heat index (HI) and the excess heat factor (EHF). These two indexes were used jointly with data on heat-related deaths in Clark County to assess the effect of extreme heat on the exposed population. The trends of the heat indexes were then analyzed for the 2007-2016 decade, and the correlation between heat wave episodes and the number of heat-related deaths in the area was estimated. The HI increased significantly in June, July, and August over the last ten years. The same trend was found for the EHF, which showed a clear increase in the severity and number of these events per year. The number of heat wave episodes increased from 1.4 per year during the 1980-2016 period to 1.66 per year during the 2007-2016 period. However, a different trend was found for heat-wave-event duration, which decreased from an average of 20.4 days during the trans-decadal period (1980-2016) to 18.1 days during the most recent decade (2007-2016). The number of heat-related deaths also increased from 2007 to 2016, with 2016 recording the highest number of heat-related deaths. Both HI and the number of deaths showed a normal-like distribution for June, July, and August, with peak values reached in late July and early August. The average maximum HI values correlated better with the number of deaths registered in Clark County than the EHF did, probably because HI uses the maximum temperature and humidity in its estimation, whereas EHF uses the average temperature. However, the EHF is worth testing in the study zone because it has been reported to fit heat-related morbidity well. Over the whole period, 437 heat-related deaths were registered in Clark County, with 20% of the deaths occurring in June, 52% in July, 18% in August, and the remaining 10% in the other months of the year.
The most vulnerable subpopulation was people over 50 years old, who accounted for 76% of the heat-related deaths; most of these cases were associated with pre-existing heart disease. The second most vulnerable subpopulation was young adults (20-50), who accounted for 23% of the heat-related deaths; these deaths were associated with alcohol or illegal drug intoxication.
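The abstract does not state which HI formulation the authors used; a common choice is the US National Weather Service (Rothfusz) regression, sketched below for illustration. Temperature is in degrees Fahrenheit and relative humidity in percent; the NWS edge-case adjustments for very low or very high humidity are omitted.

```python
def heat_index_f(t_f, rh):
    """NWS Rothfusz regression for the heat index (apparent temperature)
    in deg F, from air temperature t_f (deg F) and relative humidity rh (%).
    Valid roughly for t_f >= 80 F and rh >= 40%; the low- and high-humidity
    adjustment terms are omitted in this sketch."""
    return (-42.379
            + 2.04901523 * t_f
            + 10.14333127 * rh
            - 0.22475541 * t_f * rh
            - 6.83783e-3 * t_f**2
            - 5.481717e-2 * rh**2
            + 1.22874e-3 * t_f**2 * rh
            + 8.5282e-4 * t_f * rh**2
            - 1.99e-6 * t_f**2 * rh**2)

# e.g. 96 F at 65% relative humidity feels like roughly 121 F
```

Applied day by day to station data, such a function yields the maximum-HI series whose correlation with mortality counts the abstract describes.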

Keywords: heat, health, hazards, workforce

Procedia PDF Downloads 98
546 Optimal Tetra-Allele Cross Designs Including Specific Combining Ability Effects

Authors: Mohd Harun, Cini Varghese, Eldho Varghese, Seema Jaggi

Abstract:

Hybridization crosses play a vital role in breeding experiments for evaluating the combining abilities of individual parental lines or crosses in order to create lines with desirable qualities. There are various ways of obtaining progenies and studying the combining ability effects of the lines taken into a breeding programme; some of the most common methods are the diallel or two-way cross, the triallel or three-way cross, and the tetra-allele or four-way cross. These techniques help breeders improve quantitative traits of economic as well as nutritional importance in crops and animals. Among these methods, the tetra-allele cross provides extra information in terms of higher specific combining ability (sca) effects, and the hybrids thus produced exhibit individual as well as population buffering mechanisms because of their broad genetic base. Most common commercial hybrids in corn are either three-way or four-way cross hybrids. The tetra-allele cross has emerged as the most practical and acceptable scheme for the production of slaughter pigs with fast growth rate, good feed efficiency, and carcass quality, and tetra-allele crosses are widely used to exploit heterosis in commercial silkworm production. Experimental designs involving tetra-allele crosses have been studied extensively in the literature, and the optimality of such designs has also been treated as a research issue. In practical situations, it is advisable to include sca effects in the model, as this information is needed by the breeder to improve economically and nutritionally important quantitative traits. Thus, a model that provides information on specific traits by utilizing sca effects along with general combining ability (gca) effects may help breeders deal with various stresses. In this paper, a model for experimental designs involving tetra-allele crosses that incorporates both gca and sca effects is defined.
Optimality aspects of such designs have been discussed, incorporating sca effects in the model. Orthogonality conditions have been derived for block designs, ensuring that contrasts among the gca effects can be estimated independently of the sca effects after eliminating nuisance factors. A user-friendly SAS macro and a web solution (webPTC) have been developed for the generation and analysis of such designs.
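The combinatorics behind four-way crosses can be illustrated with a short enumeration. This is a hypothetical sketch, not the paper's webPTC tool: it lists every unordered (A x B) x (C x D) cross that can be formed from a set of parental lines, each quadruple of lines admitting exactly three distinct pairings.

```python
from itertools import combinations

def tetra_allele_crosses(lines):
    """Enumerate all unordered four-way crosses (A x B) x (C x D):
    pick 4 distinct parental lines, then split them into two unordered
    pairs; each quadruple yields three distinct pairings."""
    crosses = []
    for a, b, c, d in combinations(lines, 4):
        crosses.append(((a, b), (c, d)))
        crosses.append(((a, c), (b, d)))
        crosses.append(((a, d), (b, c)))
    return crosses

# e.g. 5 parental lines give 3 * C(5,4) = 15 distinct four-way crosses
print(len(tetra_allele_crosses(["P1", "P2", "P3", "P4", "P5"])))  # 15
```

A design-generation tool would then select a subset of these crosses into blocks subject to the orthogonality conditions the paper derives.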

Keywords: general combining ability, optimality, specific combining ability, tetra-allele cross, webPTC

Procedia PDF Downloads 124
545 Antioxidant Status in Synovial Fluid from Osteoarthritis Patients: A Pilot Study in Indian Demography

Authors: S. Koppikar, P. Kulkarni, D. Ingale, N. Wagh, S. Deshpande, A. Mahajan, A. Harsulkar

Abstract:

The crucial role of reactive oxygen species (ROS) in the progression of osteoarthritis (OA) has been endorsed several times, though the exact mechanism remains unclear. Oxidative stress is known to instigate classical stress factors such as cytokines, chemokines, and ROS, which hamper the cartilage remodelling process and ultimately worsen the disease. Synovial fluid (SF) is a biological communicator between cartilage and synovium that accumulates redox and biochemical signalling mediators. The present work attempts to measure several oxidative stress markers in synovial fluid obtained from knee OA patients with varying degrees of disease severity. Thirty OA and five meniscal-tear (MT) patients were graded using the Kellgren-Lawrence scale and assessed for nitric oxide (NO), nitrate-nitrite (NN), 2,2-diphenyl-1-picrylhydrazyl (DPPH), ferric reducing antioxidant potential (FRAP), catalase (CAT), superoxide dismutase (SOD), and malondialdehyde (MDA) levels for comparison. Of the various oxidative markers studied, NO and SOD showed significant differences between moderate and severe OA (p = 0.007 and p = 0.08, respectively), whereas CAT demonstrated a significant difference between the MT and mild groups (p = 0.07). Interestingly, NN revealed a statistically positive correlation with OA severity (p = 0.001 and p = 0.003). MDA, a lipid peroxidation by-product, was highest in early OA compared with MT (p = 0.06). However, FRAP did not show any correlation with OA severity or the MT control. NO is a bio-regulatory molecule essential for several physiological processes and inflammatory conditions. Due to its short half-life, exact estimation of NO is difficult; nevertheless, NO and its stable, measurable products are considered important biomarkers of oxidative damage. The levels of NO and nitrate-nitrite in the SF of patients with OA indicated its involvement in disease progression.
When the SF groups were compared, a significant correlation among the moderate, mild, and MT groups was established. To summarize, the present data illustrated higher levels of NO, SOD, CAT, DPPH, and MDA in early OA in comparison with MT as a control group. NN emerged as a prognostic biomarker in knee OA patients, which may serve as a future target in OA treatment.

Keywords: antioxidant, knee osteoarthritis, oxidative stress, synovial fluid

Procedia PDF Downloads 467
544 Geospatial Multi-Criteria Evaluation to Predict Landslide Hazard Potential in the Catchment of Lake Naivasha, Kenya

Authors: Abdel Rahman Khider Hassan

Abstract:

This paper describes a multi-criteria geospatial model for predicting landslide hazard zonation (LHZ) for the Lake Naivasha catchment (Kenya), based on spatial analysis of integrated datasets of intrinsic location parameters (slope stability factors) and external landslide-triggering factors (natural and man-made). The intrinsic dataset included lithology, slope geometry (inclination, aspect, elevation, and curvature), and land use/land cover. The triggering factors included rainfall as the climatic factor, in addition to the destructive effects reflected by the proximity of roads and the drainage network to areas susceptible to landslides. No published study on landslides was found for this area. Digital datasets of the above spatial parameters were therefore acquired, stored, manipulated, and analyzed in a Geographical Information System (GIS) using a multi-criteria grid overlay technique (in the ArcGIS 10.2.2 environment). Landslide hazard zonation was deduced by applying weights based on the relative contribution of each parameter to slope instability; finally, the weighted parameter grids were overlaid to generate a map of the potential landslide hazard zonation (LHZ) for the lake catchment. Of the catchment's total surface of 3200 km², most of the region (78.7%; 2518.4 km²) is susceptible to moderate landslide hazard, whilst about 13% (416 km²) is under high hazard. Only 1.0% (32 km²) of the catchment displays very high landslide hazard, and the remaining area (7.3%; 233.6 km²) displays a low probability of landslide hazard. This result confirms the importance of steep slope angles, lithology, vegetation cover, and slope orientation (aspect) as the major determining factors of slope failure.
The information provided by the resulting landslide hazard zonation (LHZ) map could lay the basis for decision making, as well as for mitigation and for avoiding potential losses caused by landslides in the Lake Naivasha catchment in the Kenya Highlands.
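The weighted grid-overlay step described above can be sketched outside a GIS. The factor grids and expert weights below are hypothetical placeholders (the paper's actual weights and class breaks are not given); each factor raster is assumed to be pre-rescaled to a 0-1 susceptibility score.

```python
import numpy as np

rng = np.random.default_rng(42)
shape = (200, 200)

# hypothetical factor grids, each already rescaled to a 0-1 susceptibility
# score (stand-ins for slope, lithology, land cover, and rainfall rasters)
slope, litho, cover, rain = (rng.random(shape) for _ in range(4))

# hypothetical expert weights reflecting each parameter's relative
# contribution to slope instability; they must sum to 1
w = {"slope": 0.40, "litho": 0.25, "cover": 0.20, "rain": 0.15}
lhz = (w["slope"] * slope + w["litho"] * litho
       + w["cover"] * cover + w["rain"] * rain)

# classify the weighted overlay into four hazard zones by fixed breaks
zones = np.digitize(lhz, [0.25, 0.50, 0.75])  # 0=low ... 3=very high
```

In ArcGIS the same operation is performed cell-by-cell by the Weighted Overlay tool; the fractions of cells in each zone give area percentages like those reported in the abstract.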

Keywords: decision making, geospatial, landslide, multi-criteria, Naivasha

Procedia PDF Downloads 188
543 Prandtl Number Influence Analysis on Droplet Migration in Natural Convection Flow Using the Level Set Method

Authors: Isadora Bugarin, Taygoara F. de Oliveira

Abstract:

Multiphase flows are currently seen as a key enabler of technological advances in energy and thermal sciences. The comprehension of droplet motion and behavior in non-isothermal flows is, however, rather limited. The present work investigates the migration of a 2D droplet under natural convection inside a square enclosure with differentially heated walls. The investigation concerns the effects on drop motion of imposing different combinations of Prandtl and Rayleigh numbers while placing the drop at distinct initial positions. The finite-difference method was used to compute the Navier-Stokes and energy equations for laminar flow, considering the Boussinesq approximation. A high-order level set method was applied to simulate the two-phase flow. A previous analysis by the authors showed that, for fixed Rayleigh and Prandtl values, varying the droplet's initial position produced different patterns of motion; for Ra ≥ 10⁴ the droplet presents two very specific behaviors: it either travels along a helical path towards the center or describes cyclic circular paths, resulting in closed orbits upon reaching the stationary regime. When the Prandtl number is varied across different Rayleigh regimes, this parameter also affects the migration of the droplet, altering the motion patterns as its value increases. At higher Prandtl values, the drop follows wider paths with larger amplitudes, travels closer to the walls, and takes longer to reach the stationary regime. It is important to highlight that no drastic changes in drop behavior were observed in the stationary regime itself, but the path traveled from the beginning of the simulation until the stationary regime was significantly altered, resulting in distinct turnover frequencies.
The flow's unsteady Nusselt number is also recorded for each case studied, enabling a discussion of the overall effects on heat transfer.

Keywords: droplet migration, level set method, multiphase flow, natural convection in enclosure, Prandtl number

Procedia PDF Downloads 110
542 Examining the Effects of Exercise and Healthy Diet on Certain Blood Parameter Levels, Oxidative Stress and Anthropometric Measurements in Slightly Overweight Women

Authors: Nezihe Şengün, Ragip Pala

Abstract:

To prevent overweight and obesity, individuals need to consume food and beverages according to their nutritional needs, engage in regular exercise, and regularly monitor their body weight. This study aimed to examine the effects of exercise, diet, or a combined intervention on blood lipid parameters (total cholesterol, LDL cholesterol, HDL cholesterol, and triglycerides) and on the level of malondialdehyde (MDA), a marker of oxidative stress, against the background of body weight gain caused by poor nutrition and a sedentary lifestyle. The study included 48 female students aged 18-28 years with a BMI between 25.0 and 29.9 kg/m², divided into four groups: control (C), exercise (Ex), diet (D), and exercise+diet (Ex+D). The exercise groups performed aerobic exercise at 60-70% intensity (10 minutes warm-up, 30 minutes running, 10 minutes cool-down), while the diet groups followed a diet program based on energy needs calculated from basal metabolic rate, physical activity level, age, and BMI. The students' body weight, body fat mass, body mass index (BMI), and waist-hip ratios were measured at the beginning (day 1) and end (day 60) of the 8-week intervention period. Their total cholesterol, HDL cholesterol, LDL cholesterol, triglyceride, and MDA levels were evaluated and analyzed at a statistical significance level of p<0.05. As a result, the Ex+D group showed the largest changes in body weight, body fat mass, BMI, and waist-hip ratio, and these differences were statistically significant. In all groups except C, total cholesterol, LDL cholesterol, and triglyceride levels decreased and HDL cholesterol levels increased.
The decrease in total cholesterol, LDL cholesterol, and triglyceride levels was statistically significant in the D group, and the increase in HDL cholesterol was statistically significant in the Ex+D group (p<0.05). MDA levels decreased in all groups except C, with the largest decrease in the Ex group. In conclusion, our study revealed that the most effective way to achieve weight loss is a combination of exercise and diet; the Ex+D intervention appears to balance blood lipid levels and suppress oxidative stress.

Keywords: obesity, exercise, diet, body mass index, blood lipids

Procedia PDF Downloads 66
541 Green Procedure for Energy and Emission Balancing of Alternative Scenario Improvements for Cogeneration System: A Case of Hardwood Lumber Manufacturing Process

Authors: Aldona Kluczek

Abstract:

Energy-efficient processes have become a pressing research field in manufacturing. The case for effective industrial energy efficiency rests on interacting factors: economic and environmental impact, and energy security. Improvements in energy efficiency are most often achieved by implementing a more efficient technology or manufacturing process. Current electricity production processes represent the biggest consumption of energy and the greatest amount of emissions to the environment. The goal of this study is to improve potential energy savings and reduce greenhouse gas emissions for improvement scenarios for the treatment of hardwood lumber produced by an industrial plant operating in the U.S., through the application of a green balancing procedure, in order to find the preferable efficient technology. The green procedure for energy is based on the analysis of energy efficiency data. Three alternative scenarios for the construction of the plant's cogeneration (CHP) system are considered: generation of fresh steam, the purchase of a new boiler operating at 300 pounds per square inch gauge (PSIG), and the installation of a new boiler at 600 PSIG. This paper illustrates the application of bottom-up modelling of energy flow to devise a streamlined Energy and Emission Flow Analysis method for the electricity-producing technology. The method identifies the efficiency or technology to be reached for a given process through the effective use of energy, or energy management. Results show that the third scenario appears to be the most efficient alternative for treating hardwood lumber from environmental and economic standpoints. The energy conservation options could save an estimated 6,215.78 MMBtu per year, which represents 9.5% of the total annual energy usage.
The total annual potential cost savings from all recommendations are $143,523/yr, representing 30.1% of the total annual energy costs. Estimates indicate that energy cost savings of up to 43% (US$143,337.85) are possible, representing 18.6% of the total annual energy costs.

Keywords: alternative scenario improvements, cogeneration system, energy and emission flow analyze, energy balancing, green procedure, hardwood lumber manufacturing process

Procedia PDF Downloads 196
540 Defining Unconventional Hydrocarbon Parameter Using Shale Play Concept

Authors: Rudi Ryacudu, Edi Artono, Gema Wahyudi Purnama

Abstract:

Oil and gas consumption in Indonesia is currently rising with the nation's economic growth. Unfortunately, Indonesia's domestic oil production cannot meet its own consumption, and Indonesia has lost its status as an oil and gas exporter. Even worse, its conventional oil and gas reserves are declining. Unwilling to give up, the government of Indonesia has taken measures to invite investors into domestic oil and gas exploration to find new potential reserves and ultimately increase production, but these efforts have not yet borne fruit. Indonesia is now taking steps to explore new unconventional oil and gas plays, including shale gas, shale oil, and tight sands, to increase domestic production. These new plays require definite parameters to differentiate each concept. The purpose of this paper is to provide ways of defining unconventional hydrocarbon reservoir parameters for shale gas, shale oil, and tight sands. The parameters would serve as an initial baseline for users performing analyses of unconventional hydrocarbon plays. Ongoing concerns and questions regarding unconventional hydrocarbon plays include: 1. What is the TOC value? 2. Has the rock been sufficiently "cooked" to generate hydrocarbons? 3. What are the permeability and porosity values? 4. Does it need stimulation? 5. Does it have pores? 6. Does it have sufficient thickness? In contrast with the common conventional oil and gas play, the shale play concept assumes that hydrocarbon is retained and trapped in areas with very low permeability. In most places in Indonesia, hydrocarbon migrates from source rock to reservoir. From this, we can derive the premise that the kitchen and source rock are located right below the reservoir. This is the starting point for the user or engineer to construct the basin definition in relation to the tectonic play and depositional environment.
The shale play concept requires characterization, description, and reservoir identification to discover reservoirs that are technically and economically feasible to develop. The steps users and engineers have to take to perform a shale play analysis are: a. Calculate TOC and perform mineralogy analysis using water saturation and porosity values. b. Reconstruct the basin that accumulates hydrocarbon. c. Calculate the Brittleness Index from petrophysical data and distribute it based on seismic multi-attributes. d. Perform integrated natural fracture analysis. e. Select the best location to place a well.
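Step (c) above depends on a Brittleness Index. The paper does not state which formulation it uses; one common mineralogical form (Jarvie-type, from XRD weight percents) is sketched below purely for illustration.

```python
def brittleness_index(quartz, carbonate, clay):
    """Mineralogical brittleness index (Jarvie-type): the quartz fraction
    of the quartz + carbonate + clay total, from XRD weight percents.
    Higher values indicate rock more amenable to hydraulic stimulation.
    Illustrative only; other BI formulations (e.g. elastic-moduli-based)
    exist and the paper does not specify its choice."""
    return quartz / float(quartz + carbonate + clay)

# e.g. 50% quartz, 20% carbonate, 30% clay
print(brittleness_index(50, 20, 30))  # 0.5
```

A petrophysically derived BI of this kind can then be distributed across the volume using seismic multi-attributes, as the workflow describes.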

Keywords: unconventional hydrocarbon, shale gas, shale oil tight sand reservoir parameters, shale play

Procedia PDF Downloads 389
539 Study on the Voltage Induced Wrinkling of Elastomer with Different Electrode Areas

Authors: Zhende Hou, Fan Yang, Guoli Zhang

Abstract:

Dielectric elastomers are a promising class of electroactive polymers that deform in response to an applied electric field. Compared with conventional smart materials, dielectric elastomers are more compliant and can achieve higher energy densities, suiting diverse applications such as actuators, artificial muscles, soft robotics, and energy harvesters. The coupling between the electroactive polymer and the electric field arises when the elastomer is sandwiched between two compliant electrodes: when a voltage is applied across the electrodes, the positive and negative charges on the two electrodes compress the polymer, so that it reduces in thickness and expands in area. However, a pre-stretched dielectric elastomer film not only can achieve large electric-field-induced deformation but is also prone to wrinkling under the interaction of its own strain energy and the applied electric field energy. For a uniaxially pre-stretched dielectric elastomer film, the electrode area is an important parameter for the electric-field-induced deformation and may also be a key factor affecting film wrinkling. To determine and quantify this effect experimentally, VHB 9473 tapes were employed, and compliant electrodes of different areas were painted on each of them. Each tape was first stretched uniaxially to a stretch ratio of 8. A DC voltage was then applied to the electrodes and increased gradually until wrinkling occurred in the film. The critical wrinkling voltages of the film with different electrode areas were thus obtained, and the wrinkle wavelengths were measured simultaneously to analyze the wrinkling characteristics. Experimental results indicate that when the electrode area is smaller, the wrinkling voltage is higher; as the electrode area increases, the wrinkling voltage decreases rapidly up to a specific area, beyond which it gradually increases again with area.
Meanwhile, the wrinkle wavelength decreases monotonically with increasing voltage. That is, the relation between the critical wrinkling voltage and the electrode area is U-shaped. The analysis suggests that film wrinkling is a local effect: the interaction and energy transfer between the electrode region and the non-electrode region have a great influence on wrinkling. In the experiments, very thin copper wires that just contact the electrodes were used as electrode leads, so that the stiffness of the leads would not affect the wrinkling.

Keywords: elastomers, uniaxial stretch, electrode area, wrinkling

Procedia PDF Downloads 233
538 Investigation of Turbulent Flow in a Bubble Column Photobioreactor and Consequent Effects on Microalgae Cultivation Using Computational Fluid Dynamic Simulation

Authors: Geetanjali Yadav, Arpit Mishra, Parthsarathi Ghosh, Ramkrishna Sen

Abstract:

The world faces increasing global CO2 emissions, climate change, and a fuel crisis. Therefore, several renewable and sustainable energy alternatives should be investigated to replace non-renewable fuels in the future. Algae present a versatile feedstock for the production of a variety of fuels (biodiesel, bioethanol, bio-hydrogen, etc.) and high-value compounds for food, fodder, cosmetics, and pharmaceuticals. Microalgae are simple microorganisms that require water, light, CO2, and nutrients for growth by photosynthesis; they can grow in extreme environments and utilize waste gas (flue gas) and wastewaters. Mixing, however, is a crucial parameter within the culture system for the uniform distribution of light, nutrients, and gaseous exchange, in addition to preventing settling/sedimentation and the creation of dark zones. The overarching goal of the present study is to improve photobioreactor (PBR) design for enhancing the dissolution of CO2 from ambient air (0.039%, v/v), pure CO2, and coal-fired flue gas (10 ± 2%) into microalgal PBRs. Computational fluid dynamics (CFD), a state-of-the-art technique, was used to solve the partial differential equations with turbulence closure that represent the dynamics of fluid in a photobioreactor. In this paper, the hydrodynamic performance of the PBR has been characterized and compared with that of a conventional bubble column PBR using CFD. Parameters such as flow rate (Q), mean velocity (u), and mean turbulent kinetic energy (TKE) were characterized for each experiment across different aeration schemes. The results showed that the modified PBR design had superior liquid circulation and gas-liquid transfer properties, creating a more uniform environment inside the PBR compared with the conventional bubble column PBR.
CFD has proven promising for this design task and paves the way for future research to develop PBRs that can become commercially available for scaled-up microalgal production.
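One of the reported quantities, the mean turbulent kinetic energy (TKE), is defined from velocity fluctuations about the mean flow; a minimal estimator from velocity samples is sketched below. This is a generic post-processing formula, not the specific routine used in the paper's CFD solver.

```python
import numpy as np

def mean_tke(u, v, w):
    """Mean turbulent kinetic energy per unit mass from velocity samples:
    k = 0.5 * (<u'u'> + <v'v'> + <w'w'>), where primes denote fluctuations
    about the sample means (np.var subtracts the mean automatically)."""
    return 0.5 * (np.var(u) + np.var(v) + np.var(w))

# synthetic samples: mean flow plus known fluctuations
u = np.array([2.0, 0.0])   # u' = +/-1 -> variance 1
v = np.array([3.0, -1.0])  # v' = +/-2 -> variance 4
w = np.array([1.0, 1.0])   # no fluctuation -> variance 0
print(mean_tke(u, v, w))   # 2.5
```

Evaluated over the cells of a simulated PBR, this gives the spatial TKE field used to compare aeration schemes.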

Keywords: computational fluid dynamics, microalgae, bubble column photobioreactor, flue gas, simulation

Procedia PDF Downloads 224
537 Energy Content and Spectral Energy Representation of Wave Propagation in a Granular Chain

Authors: Rohit Shrivastava, Stefan Luding

Abstract:

A mechanical wave is a propagating vibration that transfers energy and momentum. Studying the energy as well as the spectral energy characteristics of a wave propagating through disordered granular media can assist in understanding the overall properties of wave propagation through inhomogeneous materials like soil. The study of these properties is aimed at modeling wave propagation for oil, mineral, or gas exploration (seismic prospecting) and at non-destructive testing of the internal structure of solids. Studying the energy content (kinetic, potential, and total energy) of a pulse propagating through an idealized one-dimensional discrete particle system, such as a mass-disordered granular chain, can clarify energy attenuation due to disorder as a function of propagation distance. Spectral analysis of the energy signal can assist in understanding dispersion as well as attenuation due to scattering at different frequencies (scattering attenuation). The choice of a one-dimensional granular chain also restricts the study to the P-wave attributes of the wave, removing the influence of shear or rotational waves. Granular chains with different mass distributions have been studied by randomly selecting masses from normal, binary, and uniform distributions; the standard deviation of the distribution is taken as the disorder parameter, with higher standard deviation meaning higher disorder. For obtaining macroscopic/continuum properties, ensemble averaging has been used. Interpreting information from a total energy signal turned out to be much easier than from the displacement, velocity, or acceleration signals of the wave, indicating a better analysis method for wave propagation through granular materials.
Increasing disorder leads to faster attenuation of the signal and decreases the energy transmitted at higher frequencies, but at the same time the energy of spatially localized high frequencies increases. An ordered granular chain exhibits ballistic propagation of energy, whereas a disordered granular chain exhibits diffusive-like propagation, which eventually becomes localized at long times.
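The setup above, a pulse launched into a chain of particles with randomly drawn masses and the total energy tracked over time, can be sketched with a minimal linear mass-spring model. This is an illustrative stand-in, not the authors' contact model (granular chains are often Hertzian); the mass disorder and energy bookkeeping are the same in spirit.

```python
import numpy as np

def simulate_chain(masses, k=1.0, v0=1.0, dt=0.005, steps=2000):
    """Velocity-Verlet integration of a 1D mass-spring chain (free ends).
    A velocity pulse v0 on the first particle launches a P-wave pulse;
    the total (kinetic + potential) energy is recorded at every step."""
    n = len(masses)
    x = np.zeros(n)              # displacements from equilibrium
    v = np.zeros(n)
    v[0] = v0
    def accel(x):
        f = np.zeros(n)
        ext = k * np.diff(x)     # spring extensions between neighbours
        f[:-1] += ext
        f[1:] -= ext
        return f / masses
    a = accel(x)
    energy = np.empty(steps)
    for t in range(steps):
        x = x + v * dt + 0.5 * a * dt**2
        a_new = accel(x)
        v = v + 0.5 * (a + a_new) * dt
        a = a_new
        energy[t] = (0.5 * np.sum(masses * v**2)
                     + 0.5 * k * np.sum(np.diff(x)**2))
    return energy

# mass disorder: the standard deviation of the mass distribution
rng = np.random.default_rng(1)
masses = np.abs(rng.normal(1.0, 0.3, 100)) + 0.05  # keep masses positive
E = simulate_chain(masses)
```

Repeating the run over many mass realisations and averaging (the ensemble averaging mentioned above), then taking the spectrum of the transmitted energy signal, yields the attenuation-versus-frequency picture the abstract discusses.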

Keywords: discrete elements, energy attenuation, mass disorder, granular chain, spectral energy, wave propagation

Procedia PDF Downloads 274
536 Density Measurement of Underexpanded Jet Using Stripe Patterned Background Oriented Schlieren Method

Authors: Shinsuke Udagawa, Masato Yamagishi, Masanori Ota

Abstract:

The Schlieren method, conventionally used to visualize high-speed flows, has disadvantages such as a complex experimental setup and the inability to quantitatively analyze the amount of light refraction. The Background Oriented Schlieren (BOS) method proposed by Meier is one measurement method that solves the problems mentioned above. Like the Schlieren method, BOS exploits the refraction of light; it is characterized by the use of a digital camera to capture images of a background behind the observation area. The images are later analyzed by computer to quantitatively detect the shift of the background image. The experimental setup for BOS does not require the concave mirrors, pinholes, or color filters necessary in the conventional Schlieren method, which simplifies the setup. However, the BOS method suffers from defocusing of the observation results: since the camera is focused on the background image, the observed object is out of focus, and this defocusing grows with the distance between the background and the object. On the other hand, a larger distance yields higher sensitivity. Therefore, the distance between the background and the object must be adjusted appropriately for the experiment, considering the trade-off between defocus and sensitivity. The purpose of this study is to experimentally clarify the effect of defocus on density field reconstruction. In this study, visualization experiments on an underexpanded jet were performed using a BOS measurement system we constructed, with a Ronchi ruling as the background. The reservoir pressure of the jet and the distance between the camera and the jet axis were fixed, while the distance between the background and the jet axis was varied as the parameter.
The images were later analyzed on a personal computer to quantitatively detect the shift of the background image by comparing the background pattern with the captured image of the underexpanded jet. The measured shifts were reconstructed into a density field using the Abel transformation and the Gladstone-Dale equation. The experimental results show that the reconstructed density image becomes more blurred, and the noise decreases, as the distance between the background and the axis of the underexpanded jet increases. Consequently, it is clarified that the sensitivity constant should be greater than 20 and the circle-of-confusion diameter should be less than 2.7 mm, at least in this experimental setup.
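The Abel-plus-Gladstone-Dale reconstruction step can be sketched numerically. The authors' exact discretization is not given, so the following uses a simple onion-peeling scheme on a synthetic axisymmetric profile: the Gladstone-Dale relation n - 1 = K·ρ links refractive index to density, and the triangular projection matrix is inverted to recover the radial profile from its line integrals.

```python
import numpy as np

def abel_projection_matrix(r):
    """Onion-peeling weights: path lengths of parallel chords (offsets
    y_i = r[i]) through concentric annuli bounded by the radii in r."""
    n = len(r) - 1                     # r: annulus boundaries, r[0] = 0
    A = np.zeros((n, n))
    y = r[:-1]                         # chord offsets at inner boundaries
    for i in range(n):
        for j in range(i, n):
            A[i, j] = 2.0 * (np.sqrt(r[j + 1]**2 - y[i]**2)
                             - np.sqrt(r[j]**2 - y[i]**2))
    return A

K = 2.26e-4                            # Gladstone-Dale constant of air, m^3/kg (approx.)
r = np.linspace(0.0, 0.01, 51)         # 1 cm radius split into 50 annuli
rho_true = 1.2 * np.exp(-(r[:-1] / 0.004)**2)   # synthetic density profile

n_minus_1 = K * rho_true               # Gladstone-Dale relation
proj = abel_projection_matrix(r) @ n_minus_1    # line-integrated (measured) signal

# inverse step: onion-peeling Abel inversion, then density via Gladstone-Dale
rho_rec = np.linalg.solve(abel_projection_matrix(r), proj) / K
```

In the experiment the "measured" column comes from the background-shift field rather than a synthetic profile, but the inversion chain is the same.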

Keywords: BOS method, underexpanded jet, Abel transformation, density field visualization

Procedia PDF Downloads 62
535 Zinc Sorption by Six Agricultural Soils Amended with Municipal Biosolids

Authors: Antoine Karam, Lotfi Khiari, Bruno Breton, Alfred Jaouich

Abstract:

Anthropogenic sources of zinc (Zn), including industrial emissions and effluents, Zn-rich fertilizer materials and pesticides containing Zn, can raise the concentration of soluble Zn to levels toxic to plants in acid sandy soils. The application of municipal sewage sludge or biosolids (MBS), which contain metal-immobilizing agents, to coarse-textured soils could improve the metal sorption capacity of these low-CEC soils. The purpose of this experiment was to evaluate the sorption of Zn in surface samples (0-15 cm) of six Quebec (Canada) soils amended with MBS (pH 6.9) from Val d’Or (Quebec, Canada). Soil samples amended with increasing amounts (0 to 20%) of MBS were equilibrated with various amounts of Zn as ZnCl2 in 0.01 M CaCl2 for 48 hours at room temperature. Sorbed Zn was calculated from the difference between the initial and final Zn concentrations in solution. The Zn sorption data conformed to the linear form of the Freundlich equation. The amount of sorbed Zn increased considerably with increasing MBS rate. Analysis of variance revealed a highly significant effect (p ≤ 0.001) of soil texture and MBS rate on the amount of sorbed Zn. The average Zn-sorption capacities of the MBS-amended coarse-textured soils were lower than those of the MBS-amended fine-textured soils. The two sandy soils (86-99% sand) amended with MBS retained 2 to 5 times more Zn than those without MBS (control). Significant Pearson correlation coefficients were obtained between the Freundlich sorption isotherm parameter (KF) and commonly measured physical and chemical soil properties. Among all the soil properties measured, soil pH gave the best significant correlation coefficients (p ≤ 0.001) for soils receiving 0, 5 and 10% MBS. Furthermore, KF values were positively correlated with soil clay content, exchangeable basic cations (Ca, Mg or K), CEC, and the clay content to CEC ratio.
From these results, it can be concluded that (i) municipal biosolids provide sorption sites that have a strong affinity for Zn, (ii) both soil texture, especially clay content, and soil pH are the main factors controlling anthropogenic Zn sorption in the municipal biosolids-amended soils, and (iii) the effect of municipal biosolids on Zn sorption will be more pronounced for a sandy soil than for a clay soil.
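The linearized Freundlich fit used above can be sketched as follows. This is a minimal illustration; the function names and the batch values in the example are hypothetical, not the authors' data.

```python
import numpy as np

def sorbed_zn(C0, Ce, volume_L, soil_mass_kg):
    """Sorbed Zn (mg/kg) from the drop in solution concentration (mg/L)."""
    return (C0 - Ce) * volume_L / soil_mass_kg

def fit_freundlich(Ce, S):
    """Linear form of the Freundlich equation:
    log10(S) = log10(KF) + (1/n) * log10(Ce).  Returns (KF, n)."""
    slope, intercept = np.polyfit(np.log10(Ce), np.log10(S), 1)
    return 10.0**intercept, 1.0 / slope
```

Because the fit is done in log-log space, KF is recovered from the intercept and the exponent 1/n from the slope, which is why KF correlates so directly with the soil properties that set the isotherm's height.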

Keywords: metal, recycling, sewage sludge, trace element

Procedia PDF Downloads 268
534 Counting Fishes in Aquaculture Ponds: Application of Imaging Sonars

Authors: Juan C. Gutierrez-Estrada, Inmaculada Pulido-Calvo, Ignacio De La Rosa, Antonio Peregrin, Fernando Gomez-Bravo, Samuel Lopez-Dominguez, Alejandro Garrocho-Cruz, Jairo Castro-Gutierrez

Abstract:

Semi-intensive aquaculture in traditional earthen ponds is the main rearing system in Southern Spain. These fish rearing systems account for approximately two thirds of aquatic production in this area, which has made a significant contribution to the regional economy in recent years. In this type of rearing system, a crucial aspect is the correct quantification and control of fish abundance in the ponds: the fish farmer knows how many fish are stocked in the ponds but not how many will be harvested at the end of the rearing period. This is a consequence of mortality induced by different causes, such as pathogenic agents (parasites, viruses and bacteria), as well as predation by fish-eating birds and poaching. Tracking fish abundance in these installations is very difficult because the ponds usually occupy a large area of land and the management of the water flow is not automated. There is therefore a very high degree of uncertainty about fish abundance, which strongly hinders the management and planning of sales. A novel and non-invasive procedure to count fish in the ponds is by means of imaging sonars, particularly fixed systems and/or systems mounted on aquatic vehicles such as Remotely Operated Vehicles (ROVs). In this work, a method based on census-station procedures is proposed to evaluate the accuracy of fish abundance estimation using images obtained from multibeam sonars. The results indicate that it is possible to obtain a realistic approximation of the number of fish, their sizes and therefore the biomass contained in the ponds. This research is included in the framework of the KTTSeaDrones Project (‘Conocimiento y transferencia de tecnología sobre vehículos aéreos y acuáticos para el desarrollo transfronterizo de ciencias marinas y pesqueras 0622-KTTSEADRONES-5-E’) financed by the European Regional Development Fund (ERDF) through the Interreg V-A Spain-Portugal Programme (POCTEP) 2014-2020.
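A census-station estimate of the kind proposed here can be sketched as a simple scaling of mean station density to the whole pond. This is illustrative only; the counts and areas in the example are invented, and a real sonar survey would also have to correct for detection probability and beam overlap.

```python
import statistics

def pond_abundance(station_counts, station_area_m2, pond_area_m2):
    """Scale the mean fish density over the census stations to the pond.
    Returns (abundance estimate, standard error of the estimate)."""
    densities = [c / station_area_m2 for c in station_counts]
    mean_d = statistics.mean(densities)
    se_d = statistics.stdev(densities) / len(densities) ** 0.5
    return mean_d * pond_area_m2, se_d * pond_area_m2
```

The standard error term is what quantifies the "degree of uncertainty" the abstract refers to: more stations, or less variable counts between stations, tighten the biomass estimate.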

Keywords: census station procedure, fish biomass, semi-intensive aquaculture, multibeam sonars

Procedia PDF Downloads 206
533 A Regression Model for Predicting Sugar Crystal Size in a Fed-Batch Vacuum Evaporative Crystallizer

Authors: Sunday B. Alabi, Edikan P. Felix, Aniediong M. Umo

Abstract:

Crystal size distribution is of great importance in sugar factories: it determines the market value of granulated sugar and also influences the cost of producing sugar crystals. Typically, sugar is produced in a fed-batch vacuum evaporative crystallizer. Crystallization quality is assessed from the crystal size distribution at the end of the process, which is quantified by two parameters: the average crystal size of the distribution, the mean aperture (MA), and the width of the distribution, the coefficient of variation (CV). The lack of real-time measurement of sugar crystal size hinders feedback control and eventual optimisation of the crystallization process. An attractive alternative is to use a soft sensor (model-based method) for online estimation of the sugar crystal size. Unfortunately, the available models of the sugar crystallization process are not suitable for this, as they do not contain variables that can be measured easily online. The main contribution of this paper is the development of a regression model for estimating sugar crystal size as a function of input variables that are easy to measure online, which has the potential to provide real-time crystal size estimates for effective feedback control. Using 7 input variables, namely initial crystal size (L0), temperature (T), vacuum pressure (P), feed flowrate (Ff), steam flowrate (Fs), initial supersaturation (S0) and crystallization time (t), preliminary studies were carried out using Minitab 14 statistical software. Based on the existing sugar crystallizer models and the typical ranges of these 7 input variables, 128 datasets were obtained from a 2-level factorial experimental design. These datasets were used to obtain a simple but online-implementable 6-input crystal size model; the initial crystal size (L0) was found not to play a significant role. The goodness of the resulting regression model was then evaluated.
The coefficient of determination, R², was obtained as 0.994, and the maximum absolute relative error (MARE) as 4.6%. The high R² (~1.0) and the reasonably low MARE indicate that the model predicts sugar crystal size accurately as a function of the 6 easy-to-measure online variables. Thus, the model can serve as a soft sensor providing real-time estimates of sugar crystal size during crystallization in a fed-batch vacuum evaporative crystallizer.
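The fitting and error metrics described above can be sketched on synthetic data. The design coding, coefficients and noise level below are invented for illustration; they are not the paper's dataset or model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-level factorial design over the 6 retained inputs
# (T, P, Ff, Fs, S0, t), coded 0/1: 2^6 = 64 runs.
X = np.array([[(i >> k) & 1 for k in range(6)] for i in range(64)], dtype=float)
beta_true = np.array([0.12, -0.05, 0.08, 0.03, 0.20, 0.15])  # illustrative only
y = 0.5 + X @ beta_true + rng.normal(0.0, 0.005, 64)  # "measured" crystal size, mm

# Ordinary least squares with an intercept
A = np.column_stack([np.ones(len(X)), X])
beta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ beta_hat

# Goodness-of-fit metrics of the kind quoted in the abstract
r2 = 1.0 - np.sum((y - y_hat)**2) / np.sum((y - y.mean())**2)
mare = 100.0 * np.max(np.abs((y - y_hat) / y))  # max absolute relative error, %
```

A coefficient whose estimate is indistinguishable from zero (as L0 apparently was) would simply be dropped from `beta_hat` to obtain the reduced 6-input model.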

Keywords: crystal size, regression model, soft sensor, sugar, vacuum evaporative crystallizer

Procedia PDF Downloads 198
532 Effect of Strength Class of Concrete and Curing Conditions on Capillary Absorption of Self-Compacting and Conventional Concrete

Authors: Emine Ebru Demirci, Remzi Şahin

Abstract:

The purpose of this study is to compare Self-Compacting Concrete (SCC) and Conventional Concrete (CC), which are used in beams with dense reinforcement, in terms of their capillary absorption. In the comparison, the effects of two further factors were also investigated: concrete strength class and curing condition. Both SCC and CC were produced in three strength classes (C25, C50 and C70), and the curing condition was set at two levels: moist and air curing. Beam dimensions were 200 x 250 x 3000 mm. Beam reinforcement was calculated and placed as 2ø12 at the top and 3ø12 at the bottom. Stirrups of 8 mm diameter were used as lateral reinforcement, with stirrup spacing of 10 cm in the confinement zone and 15 cm in the central zone. In this way, the densification of rebars in the lateral cross-sections of the beams and the handling of SCC under real conditions were addressed. The concrete cover of the rebars was 25 mm in all directions. Capillary absorption was measured on core samples taken from the beams. Core samples of ø8 x 16 cm were taken from the beginning (0-100 cm), middle (100-200 cm) and end (200-300 cm) regions of the beams, relative to the casting direction of the SCC; all cores were taken from the lateral surfaces of the beams. Capillary absorption experiments were performed according to the Turkish standard TS EN 13057. For both curing environments and all strength classes, the SCCs had lower capillary absorption values than the CCs. The capillary absorption values of C25-class SCC were 11% and 16% lower than those of C25-class CC for air and moist curing, respectively. For the C50 class these decreases were 6% and 18%, and for the C70 class 16% and 9%, respectively.
It was also found that, for both SCC and CC, the capillary absorption values of moist-cured samples were significantly lower than those of air-cured samples. For CC, the C25, C50 and C70 moist-cured samples had 26%, 12% and 31% lower capillary absorption values, respectively, than the air-cured ones; for SCC, these reductions were 30%, 23% and 24%. In addition, capillary absorption values for both SCC and CC decreased with increasing strength class for both curing environments. For air-cured CC, the C50 and C70 concretes had 39% and 63% lower capillary absorption values than the C25 concrete; for moist-cured CC, these values were 27% and 66%. For SCC, the air-cured C50 and C70 concretes had capillary absorption values 35% and 65% lower than C25, while for the moist-cured samples these reductions were 29% and 63%, respectively. Comparing the standard deviations of the capillary absorption values of core samples taken from the beginning, middle and end of the CC and SCC beams showed that, in all three strength classes, the variation was much smaller for SCC than for CC. This demonstrates that the SCCs were more uniform than the CCs.
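Capillary absorption measurements of this kind reduce to a slope against the square root of time. A minimal sketch follows; the numbers in the example are invented, and TS EN 13057 prescribes specific conditioning, time points and units that are not reproduced here.

```python
import numpy as np

def sorption_coefficient(times_s, mass_gain_g, inflow_area_mm2):
    """Capillary water uptake per unit inflow area is approximately linear
    in sqrt(time); the fitted slope is the sorption coefficient."""
    uptake = np.asarray(mass_gain_g, dtype=float) / inflow_area_mm2
    slope, _ = np.polyfit(np.sqrt(np.asarray(times_s, dtype=float)), uptake, 1)
    return slope  # g / (mm^2 * sqrt(s))
```

The percentage reductions reported in the abstract are then simple ratios of these fitted coefficients between concrete types, strength classes or curing conditions.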

Keywords: self-compacting concrete, reinforced concrete beam, capillary absorption, strength class, curing condition

Procedia PDF Downloads 363
531 Single and Sequential Extraction for Potassium Fractionation and Nano-Clay Flocculation Structure

Authors: Chakkrit Poonpakdee, Jing-Hua Tzen, Ya-Zhen Huang, Yao-Tung Lin

Abstract:

Potassium (K) is a known macronutrient and an essential element for plant growth. Single-leaching and modified sequential extraction schemes have been developed to estimate the relative phase associations of soil samples. The sequential extraction process is a step toward analyzing the partitioning of metals as affected by environmental conditions, but it is not a tool for estimating K bioavailability. The traditional single-leaching method, by contrast, has long been used to classify K speciation; it depends on K availability to plants and is used to set potash fertilizer recommendation rates. Clay minerals in soil are a factor controlling soil fertility. The change in the microstructure of clay minerals in various environments (i.e., swelling or shrinking) was characterized using Transmission X-ray Microscopy (TXM). The objectives of this study were to 1) compare the distribution of K speciation between the single-leaching and sequential extraction processes, and 2) determine the clay particle flocculation structure before and after suspension with K+ using TXM. Four tropical soil samples were selected: farmed without K fertilizer (10 years), farmed with long-term K fertilizer application (10 years; 168-240 kg K2O ha-1 year-1), red soil (450-500 kg K2O ha-1 year-1) and forest soil. The results showed that the amounts of K speciation by the single-leaching method decreased in the order mineral K, HNO3-extractable K, non-exchangeable K, NH4OAc-extractable K, exchangeable K and water-soluble K. The sequential extraction process indicated that most K in the soils was associated with the residual, organic matter, Fe or Mn oxide and exchangeable fractions, while a carbonate-associated K fraction was not detected in the tropical soil samples. The long-term K-fertilized farm soil and the red soil had higher exchangeable K than the unfertilized farm soil and the forest soil.
The results indicate that one way to increase available K (water-soluble K and exchangeable K) is to apply K fertilizer together with organic fertilizer. Two-dimensional TXM images of clay particles suspended with K+ show that the clay minerals aggregate into closed-void cellular networks. The porous cellular structure of soil aggregates in 1 M KCl solution had larger empty voids than in 0.025 M KCl and in deionized water, respectively. TXM nanotomography is a new technique that can be a useful tool in this field for better understanding clay mineral microstructure.
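A sequential extraction scheme ultimately reports each fraction as a share of the summed pseudo-total. A minimal sketch follows; the fraction names and values in the example are hypothetical, not measured data from this study.

```python
def fraction_shares(fractions_mg_kg):
    """Express each sequentially extracted K fraction as a percentage of
    the summed pseudo-total K."""
    total = sum(fractions_mg_kg.values())
    return {name: 100.0 * v / total for name, v in fractions_mg_kg.items()}
```

Comparing these percentage profiles across the four soils is what reveals, for example, that the fertilized and red soils carry a larger exchangeable share than the unfertilized and forest soils.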

Keywords: potassium, sequential extraction process, clay mineral, TXM

Procedia PDF Downloads 273