Search results for: parameter
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2046

576 Construction of Submerged Aquatic Vegetation Index through Global Sensitivity Analysis of Radiative Transfer Model

Authors: Guanhua Zhou, Zhongqi Ma

Abstract:

Submerged aquatic vegetation (SAV) in wetlands can absorb nitrogen and phosphorus effectively and thereby prevent the eutrophication of water. Monitoring the distribution of SAV through remote sensing is feasible, but because the overlying water body attenuates the vegetation signal, traditional terrestrial vegetation indices are not applicable. This paper aims to construct an SAV index that enhances the vegetation signal and distinguishes SAV from open water. The methodology is as follows: (1) select the bands sensitive to vegetation parameters based on a global sensitivity analysis of an SAV canopy radiative transfer model; (2) taking the soil-line concept as a reference, analyze the distribution of SAV and water reflectance, simulated by the SAV canopy model and a semi-analytical water model, in the two-dimensional spaces built from different sensitive bands; (3) select the band combinations with the best separation between SAV and water, and use them to build SAV indices in the form of the normalized difference vegetation index (NDVI); (4) analyze the sensitivity of the indices to water and vegetation parameters, and choose the one more sensitive to vegetation parameters. The index formed from the bands centered at 705 nm and 842 nm proves highly sensitive to leaf chlorophyll content while being little affected by water constituents. Model simulation shows a weak negative correlation of the SAV index with increasing water depth. Moreover, the index separates SAV from water better than NDVI does. The SAV index is expected to be useful for parameter inversion in wetland remote sensing.
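
As a rough illustration of step (3), an index of the NDVI form built from the 842 nm and 705 nm bands can be computed as follows; the reflectance values are hypothetical, purely for demonstration:

```python
def normalized_difference(r_a, r_b):
    """Index of the NDVI form: (r_a - r_b) / (r_a + r_b)."""
    return (r_a - r_b) / (r_a + r_b)

# Hypothetical canopy reflectances at 842 nm (NIR) and 705 nm (red edge)
sav_index = normalized_difference(0.30, 0.10)
print(round(sav_index, 2))  # 0.5
```

The same function applies to any candidate band pair from step (3); only the input reflectances change.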

Keywords: global sensitivity analysis, radiative transfer model, submerged aquatic vegetation, vegetation indices

Procedia PDF Downloads 262
575 Modelling and Simulation Efforts in Scale-Up and Characterization of Semi-Solid Dosage Forms

Authors: Saurav S. Rath, Birendra K. David

Abstract:

The generic pharmaceutical industry operates under strict timelines for product development and scale-up from lab to plant. Hence, detailed product and process understanding and the implementation of appropriate mechanistic modelling and Quality-by-Design (QbD) approaches are imperative across the product life cycle. This work provides example cases of such efforts for topical dosage products. Topical products typically take the form of emulsions, gels, thick suspensions, or simple solutions. The efficacy of such products is determined by characteristics like rheology and morphology. Defining and scaling up the right manufacturing process, with a given set of ingredients, to achieve the right product characteristics is a challenge for the process engineer. For example, the non-Newtonian rheology varies not only with critical process parameters (CPPs) and critical material attributes (CMAs) but is also an implicit function of globule size, a critical quality attribute (CQA). This calls for mechanistic models that help predict product behaviour. This paper focuses on such models obtained from computational fluid dynamics (CFD) coupled with population balance modelling (PBM) and constitutive models (such as shear and energy density). For the specific case of high-shear homogenisers (HSHs) used to manufacture thick emulsions and gels, this work presents findings on (i) a scale-up algorithm for HSHs using shear strain, a novel scale-up parameter for estimating mixing parameters; (ii) the non-linear relationship between viscosity and the shear imparted into the system; and (iii) the effect of hold time on product rheology. Specific examples of how this approach enabled scale-up across the 1 L, 10 L, 200 L, 500 L, and 1000 L scales are discussed.
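
A non-linear viscosity-shear relationship of the kind referred to in finding (ii) is often captured with a power-law (Ostwald-de Waele) constitutive model; the consistency and flow indices below are hypothetical, not fitted values from this work:

```python
def power_law_viscosity(k_index, n_index, shear_rate):
    """Apparent viscosity of a power-law (Ostwald-de Waele) fluid:
    eta = K * gamma_dot**(n - 1); shear-thinning when n < 1."""
    return k_index * shear_rate ** (n_index - 1)

# Hypothetical consistency index K (Pa.s^n) and flow index n for a
# shear-thinning gel; both are illustrative only
for gamma_dot in (1.0, 10.0, 100.0):
    print(gamma_dot, power_law_viscosity(50.0, 0.4, gamma_dot))
```

With n = 0.4 the apparent viscosity falls steeply as the imparted shear rate rises, the qualitative behaviour a homogeniser scale-up rule must account for.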

Keywords: computational fluid dynamics, morphology, quality-by-design, rheology

Procedia PDF Downloads 269
574 The Correlation between Three-Dimensional Implant Positions and Esthetic Outcomes of Single-Tooth Implant Restoration

Authors: Pongsakorn Komutpol, Pravej Serichetaphongse, Soontra Panmekiate, Atiphan Pimkhaokham

Abstract:

Statement of Problem: The important parameters of esthetic assessment for anterior maxillary implants include the pink esthetics of the gingiva and the white esthetics of the restoration, while correct three-dimensional (3D) implant position has recently been regarded as a key to successful implant treatment. To our knowledge, however, no publication has demonstrated the relationship between esthetic outcome and 3D implant position. Objectives: To investigate the correlation between the positional accuracy of single-tooth implant restorations (STIR) in all three dimensions and their esthetic outcomes. Materials and Methods: Data from 17 patients who had an STIR at a central incisor with a pristine contralateral tooth were included in this study. Intraoral photographs, dental models, and cone beam computed tomography (CBCT) images were retrieved. The esthetic outcome was assessed using the pink esthetic score and white esthetic score (PES/WES), while the implant position in each dimension (mesiodistal, labiolingual, apicocoronal) was evaluated from the CBCT data by one investigator and classified as 'right' or 'wrong' according to the ITI consensus conference. The difference in mean score between right and wrong positions in all dimensions was analyzed by the Mann-Whitney U test with a significance level of 0.05. Results: The average PES/WES was 15.88 ± 1.65, which is considered clinically acceptable. The average PES/WES scores for implants positioned correctly in 1, 2, and 3 dimensions were 16.71, 15.75, and 15.17, respectively. No implant was placed wrongly in all three dimensions. A statistically significant difference in PES/WES score was found between implants placed correctly in three dimensions and those correct in only one dimension (p = 0.041). Conclusion: This study supports the principle of 3D implant positioning: the more properly the implant was placed, the higher the esthetic outcome.
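
The Mann-Whitney U statistic used here can be computed from rank sums; the sketch below uses hypothetical PES/WES scores, not the study's data, and omits the p-value machinery:

```python
def mann_whitney_u(x, y):
    """U statistic of sample x via rank sums, with average ranks for ties.
    A minimal sketch of the test statistic only."""
    combined = sorted(x + y)

    def avg_rank(value):
        # average 1-based rank of `value` in the pooled sample
        positions = [i + 1 for i, v in enumerate(combined) if v == value]
        return sum(positions) / len(positions)

    rank_sum_x = sum(avg_rank(v) for v in x)
    return rank_sum_x - len(x) * (len(x) + 1) / 2

# Hypothetical PES/WES scores: implants correct in 3 dimensions vs. only 1
three_dims = [17, 18, 16, 17, 16, 17]
one_dim = [15, 16, 15, 14, 16]
print(mann_whitney_u(three_dims, one_dim))  # 28.0
```

In practice one would use a statistics package (e.g. `scipy.stats.mannwhitneyu`) to obtain the exact or tie-corrected p-value from this statistic.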

Keywords: accuracy, dental implant, esthetic, 3D implant position

Procedia PDF Downloads 179
573 Development of Method for Recovery of Nickel from Aqueous Solution Using 2-Hydroxy-5-Nonylacetophenone Oxime Impregnated on Activated Charcoal

Authors: A. O. Adebayo, G. A. Idowu, F. Odegbemi

Abstract:

Investigations on the recovery of nickel from aqueous solution using 2-hydroxy-5-nonylacetophenone oxime (LIX-84I) impregnated on activated charcoal were carried out. The LIX-84I was impregnated into the pores of dried activated charcoal by the dry method, and the optimum conditions for the equilibrium parameters (pH, adsorbent dosage, extractant concentration, agitation time, and temperature) were determined using a simulated nickel solution. Kinetics and adsorption isotherm studies were also carried out. The recovery efficiency with LIX-84I-impregnated charcoal was dependent on the pH of the aqueous solution: there was little or no recovery below pH 4, while recovery increased with pH and peaked at pH 5.0. Recovery was also found to increase with temperature up to 60 °C. Nickel adsorbed onto the loaded charcoal best at a lower extractant concentration (0.1 M) compared with higher concentrations. Similarly, a moderately low adsorbent dosage (1 g) gave better recovery than larger dosages. Under these optimum conditions, nickel was recovered from the leachate of Ni-MH batteries dissolved in sulphuric acid, and a 99.6% recovery was attained. Adsorption isotherm studies showed that the equilibrium data fitted the Temkin model best, with a negative constant b (-1.017 J/mol) and a high correlation coefficient (R² = 0.9913). Kinetic studies showed that the adsorption followed a pseudo-second-order model. The thermodynamic parameters (∆G⁰, ∆H⁰, and ∆S⁰) showed that the adsorption was endothermic and spontaneous. The impregnated charcoal recovered nickel using a considerably smaller volume of extractant than solvent extraction requires. Desorption studies showed that the loaded charcoal is reusable three times, and so may be economical for nickel recovery from waste batteries.
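
For reference, the pseudo-second-order model that the kinetic data followed predicts the adsorbed amount q_t as a function of time; the equilibrium capacity and rate constant below are hypothetical, not values fitted in this study:

```python
def pseudo_second_order_qt(q_e, k2, t):
    """Amount adsorbed at time t under the pseudo-second-order model:
    q_t = (k2 * q_e**2 * t) / (1 + k2 * q_e * t)."""
    return (k2 * q_e ** 2 * t) / (1 + k2 * q_e * t)

# Hypothetical equilibrium capacity q_e (mg/g) and rate constant k2 (g/(mg*min))
q_e, k2 = 25.0, 0.01
for t in (5, 30, 120):
    print(t, round(pseudo_second_order_qt(q_e, k2, t), 2))
```

The linearized form t/q_t = 1/(k2*q_e**2) + t/q_e is what is usually regressed against experimental data to extract q_e and k2.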

Keywords: charcoal, impregnated, LIX-84I, nickel, recovery

Procedia PDF Downloads 150
572 Research on the Aeration Systems’ Efficiency of a Lab-Scale Wastewater Treatment Plant

Authors: Oliver Marunțălu, Elena Elisabeta Manea, Lăcrămioara Diana Robescu, Mihai Necșoiu, Gheorghe Lăzăroiu, Dana Andreya Bondrea

Abstract:

To obtain efficient pollutant removal in small-scale wastewater treatment plants, uniform water flow has to be achieved. The experimental setup, designed for treating high-load wastewater (leachate), consists of two aerobic biological reactors and a lamellar settler. Both biological tanks were aerated using three different types of aeration systems: perforated pipes, membrane air diffusers, and tubular ceramic diffusers. The ability of each air diffusion system to homogenize the water mass was evaluated comparatively. The oxygen concentration was determined by optical sensors with data logging. The experimental data were analyzed comparatively for all three air dispersion systems, aiming to identify the oxygen concentration variation under different operating conditions. The oxygenation capacity was calculated for each of the three systems and used as the performance and selection parameter. The global mass transfer coefficients were also evaluated, as important tools in designing the aeration system. Even though the tubular porous diffusers lead to a higher oxygen concentration than the perforated pipe system (which produces medium-sized bubbles in the aqueous solution), they do not reach the threshold of 80% oxygen saturation in less than 30 minutes. The study showed that the optimal solution for the studied configuration was the radial air diffusers, which ensure an oxygen saturation of 80% in 20 minutes. The oxygen-transfer values increased when the air flow was increased.
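
The global mass transfer coefficient kLa governs how quickly dissolved oxygen approaches saturation under the standard re-aeration model; with a hypothetical kLa of 0.08 min⁻¹ (not a value measured in this study), reaching 80% saturation takes about 20 minutes, of the same order as the result reported above:

```python
import math

def time_to_saturation(kla_per_min, c0, c_sat, target_fraction):
    """Time for dissolved oxygen to reach target_fraction of saturation,
    from the re-aeration model C(t) = Cs - (Cs - C0) * exp(-kLa * t)."""
    c_target = target_fraction * c_sat
    return math.log((c_sat - c0) / (c_sat - c_target)) / kla_per_min

# Hypothetical kLa of 0.08 min^-1, starting from zero DO, 9 mg/L saturation
print(round(time_to_saturation(0.08, 0.0, 9.0, 0.8), 1))  # ~20 minutes
```

Inverting the same model against logged concentration curves is the usual way the kLa values themselves are estimated.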

Keywords: flow, aeration, bioreactor, oxygen concentration

Procedia PDF Downloads 389
571 Visual Simulation for the Relationship of Urban Fabric

Authors: Ting-Yu Lin, Han-Liang Lin

Abstract:

This article concerns the visualization of urban form using CityEngine. A city is composed of different domains, and each domain has its own fabric arising from its arrangement. For example, a neighborhood unit contains fabrics such as schools, street networks, and residential and commercial spaces. Studying urban morphology can therefore help us understand urban form in the planning process. Streets, plots, and buildings are seen as urban fabrics, and together they configure urban form. Traditionally, urban morphology has usually discussed a single parameter, building type, ignoring other parameters such as streets and plots. However, urban space is three-dimensional rather than two-dimensional, and people perceive it visually. Visualization can therefore fill the gap between two and three dimensions, and the study of urban morphology will strengthen the understanding of the whole appearance of a city. CityEngine is software that can edit, analyze, and monitor data and visualize the results for GIS, a common tool for analyzing data and displaying maps in urban planning and urban design. CityEngine can parameterize the data of streets, plots, and building types and visualize the result in three dimensions. The research reproduces the real urban form through visualization: it tests whether the urban form can be parameterized and whether the parameterized result matches the real urban form, and then visualizes the result in three dimensions to analyze the rules of urban form. The research has three stages. It starts with a field survey of the East District of Tainan, Taiwan, to derive the relationships between the urban fabrics of street networks, plots, and building types. Second, to visualize these relationships, they are turned into rules that CityEngine can read. Last, CityEngine automatically displays the result through visualization.

Keywords: Cityengine, urban fabric, urban morphology, visual simulation

Procedia PDF Downloads 298
570 Body Composition Analyser Parameters and Their Comparison with Manual Measurements

Authors: I. Karagjozova, B. Dejanova, J. Pluncevic, S. Petrovska, V. Antevska, L. Todorovska

Abstract:

Introduction: Medical check-up assessment is important in sports medicine. To follow the health condition of subjects who perform sports, body composition parameters such as intracellular water, extracellular water, protein and mineral content, and muscle and fat mass may be useful. The aim of the study was to present the available parameters and to compare them with manual assessment. Material and methods: Twenty subjects (14 male and 6 female) aged 20±2 years were included in the study; 5 performed sports recreationally, while the others were professional athletes. The mean height was 175±7 cm, the mean weight was 72±9 kg, and the body mass index (BMI) was 23±2 kg/m². The measured compartments were as follows: intracellular water (IW), extracellular water (EW), protein component (PC), mineral component (MC), skeletal muscle mass (SMM), and body fat mass (BFM). Lean balance was examined for the right arm (RA), left arm (LA), trunk (T), right leg (RL), and left leg (LL). The comparison was made between values calculated from manual measurements using the Matejka formula and parameters obtained by a body composition analyzer (BCA), the Inbody 720 (Biospace). The parameters used for the comparison were skeletal muscle mass (SMM) and body fat mass (BFM). Results: The BCA obtained the following values: IW - 22.6±5 L, EW - 13.5±2 L, PC - 9.8±0.9 kg, MC - 3.5±0.3 kg, SMM - 27±3 kg, BFM - 13.8±4 kg. Lean balance showed the following values: RA - 2.45±0.2 kg, LA - 2.37±0.4 kg, T - 20.9±5 kg, RL - 7.43±1 kg, and LL - 7.49±1.5 kg. SMM showed a statistically significant difference between the manually obtained value, 51±1%, and the BCA parameter, 45.5±3% (p<0.001). The manually obtained BFM was lower (17±2%) than the BCA-obtained value of 19.5±5.9% (p<0.02). Discussion: The results showed appropriate values for the examined age for all examined parameters, giving an overview of the body compartments important for sports performance. From the comparison between manual and BCA assessment, we conclude that manual measurements may differ from the analyzer's, as confirmed by the statistical significance.

Keywords: athletes, body composition, bio electrical impedance, sports medicine

Procedia PDF Downloads 477
569 Aerodynamic Modeling Using Flight Data at High Angle of Attack

Authors: Rakesh Kumar, A. K. Ghosh

Abstract:

The paper presents the modeling of linear and nonlinear longitudinal aerodynamics using real flight data of the Hansa-3 aircraft gathered at low and high angles of attack. The Neural-Gauss-Newton (NGN) method has been applied to model the linear and nonlinear longitudinal dynamics and to estimate parameters from flight data. Unsteady aerodynamics due to flow separation at high angles of attack, near stall, is included in the aerodynamic model using Kirchhoff's quasi-steady stall model. The NGN method is an algorithm that utilizes a feed-forward neural network (FFNN) and Gauss-Newton optimization to estimate the parameters; it requires neither an a priori postulation of the mathematical model nor the solving of equations of motion. The NGN method was validated on real flight data generated at moderate angles of attack before being applied to the data at high angles of attack. The estimates obtained from compatible flight data using the NGN method were validated by comparison with wind tunnel values and maximum likelihood estimates. Validation was also carried out by comparing the measured motion variables with the responses generated using the estimates for a different control input. Next, the NGN method was applied to real flight data generated by executing a well-designed quasi-steady stall maneuver. The results obtained, in terms of stall characteristics and aerodynamic parameters, were encouraging and sufficiently accurate to establish NGN as a method for modeling nonlinear aerodynamics from real flight data at high angles of attack.
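
Kirchhoff's quasi-steady stall model represents lift loss through a trailing-edge flow-separation point X; the sketch below uses hypothetical parameters (lift-curve slope, abruptness, and stall-break angle), not the Hansa-3 estimates:

```python
import math

def separation_point(alpha, a1, alpha_star):
    """Kirchhoff quasi-steady flow-separation point X in [0, 1]:
    X = 0.5 * (1 - tanh(a1 * (alpha - alpha_star)))."""
    return 0.5 * (1.0 - math.tanh(a1 * (alpha - alpha_star)))

def lift_coefficient(alpha, cl_alpha, a1, alpha_star):
    """Lift with trailing-edge separation:
    CL = cl_alpha * ((1 + sqrt(X)) / 2)**2 * alpha."""
    x = separation_point(alpha, a1, alpha_star)
    return cl_alpha * ((1.0 + math.sqrt(x)) / 2.0) ** 2 * alpha

# Hypothetical values: per-radian lift slope 5.0, abruptness a1 = 15,
# stall break at 14 degrees; all illustrative only
for alpha_deg in (5.0, 12.0, 20.0):
    alpha = math.radians(alpha_deg)
    print(alpha_deg, round(lift_coefficient(alpha, 5.0, 15.0,
                                            math.radians(14.0)), 3))
```

In parameter estimation, a1 and alpha_star become additional unknowns fitted alongside the linear derivatives.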

Keywords: parameter estimation, NGN method, linear and nonlinear, aerodynamic modeling

Procedia PDF Downloads 445
568 Risk Assessment of Heavy Metals in River Sediments and Suspended Matter in Small Tributaries of Abandoned Mercury Mines in Wanshan, Guizhou

Authors: Guo-Hui Lu, Jing-Yi Cai, Ke-Yan Tan, Xiao-Cai Yin, Yu Zheng, Peng-Wei Shao, Yong-Liang Yang

Abstract:

Soil erosion around abandoned mines is one of the important geological agents by which pollutants diffuse to the lower reaches of the local river basin system. The river loading of pollutants is an important parameter for the remediation of abandoned mines. To obtain information on pollutant transport and diffusion downstream of the mining area, the small tributary system of the Xiaxi River in Wanshan District, Guizhou Province, was selected as the study area. Sediment and suspended matter samples were collected and determined for Pb, As, Hg, Zn, Co, Cd, Cu, Ni, Cr, and Mn by inductively coupled plasma mass spectrometry (ICP-MS) and atomic fluorescence spectrometry (AFS) after wet digestion. The pollution status and spatial distribution characteristics are discussed. The total Hg content in the sediments ranged from 0.45 to 16.0 µg/g (dry weight), with an average of 5.79 µg/g, ten times higher than the Class II soil limit for mercury in the National Soil Environmental Quality Standard. The maximum occurred at the confluence of the Jin River and the Xiaxi River. The potential ecological hazard index (RI) was used to evaluate the ecological risk of heavy metals in the sediments. The average RI value for the whole study area suggests a high potential ecological risk level. High Cd potential ecological risk was found at individual sites.
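
The potential ecological hazard index (RI) follows Hakanson's formulation, summing background-normalized concentrations weighted by metal-specific toxicity factors. In the sketch below only the mean Hg value comes from the text; the other concentrations, the background values, and the site pairing are hypothetical:

```python
def potential_ecological_risk(concentrations, backgrounds, toxicity_factors):
    """Hakanson potential ecological risk index:
    RI = sum over metals of T_i * (C_i / C_background_i)."""
    ri = 0.0
    for metal in concentrations:
        contamination_factor = concentrations[metal] / backgrounds[metal]
        ri += toxicity_factors[metal] * contamination_factor
    return ri

# Mean sediment Hg from the text; Cd and Pb values and all backgrounds
# are hypothetical. Toxicity factors follow Hakanson (Hg=40, Cd=30, Pb=5).
conc = {"Hg": 5.79, "Cd": 0.6, "Pb": 45.0}
background = {"Hg": 0.15, "Cd": 0.2, "Pb": 26.0}
toxicity = {"Hg": 40, "Cd": 30, "Pb": 5}
print(round(potential_ecological_risk(conc, background, toxicity), 1))
```

The mercury term dominates such sums whenever Hg is strongly enriched over background, which is consistent with the high RI level reported for this area.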

Keywords: heavy metal, risk assessment, sediment, suspended matter, Wanshan mercury mine, small tributary system

Procedia PDF Downloads 130
567 Response Surface Methodology Approach to Defining Ultrafiltration of Steepwater from Corn Starch Industry

Authors: Zita I. Šereš, Ljubica P. Dokić, Dragana M. Šoronja Simović, Cecilia Hodur, Zsuzsanna Laszlo, Ivana Nikolić, Nikola Maravić

Abstract:

In this work, the concentration of steep-water from the corn starch industry using an ultrafiltration membrane is monitored. The aim was to examine the conditions of steep-water ultrafiltration with a 2.5 nm membrane. The independent parameters varied during ultrafiltration were the transmembrane pressure and the flow rate, while the permeate flux and the dry matter content of the permeate and retentate were the dependent parameters constantly monitored during the process. The ultrafiltration experiments were conducted on samples of steep-water obtained from the starch wet-milling plant Jabuka, Pancevo. A single-channel membrane of 250 mm length, 6.8 mm inner diameter, and 10 mm outer diameter was used, made of α-Al2O3 with a TiO2 layer, obtained from GEA (Germany). The experiments were carried out at flow rates from 100 to 200 L/h and transmembrane pressures of 1-3 bar. During the steep-water ultrafiltration experiments, the changes in permeate flux, in the dry matter content of the permeate and retentate, and in the absorbance of the permeate and retentate were monitored. The experimental results showed a maximum flux of about 40 L/(m²h). For the responses obtained, a second-degree polynomial model was established to evaluate and quantify the influence of the variables. The quadratic equation fits the experimental values, with a coefficient of determination for flux of 0.96. The dry matter content of the retentate increased by about 6%, while that of the permeate was reduced by about 35-40%; during steep-water ultrafiltration, the permeate thus carries about 40% less dry matter than the feed.
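
The second-degree polynomial response model can be illustrated in one variable: fitting a quadratic through three hypothetical (pressure, flux) points is a minimal stand-in for the full two-factor regression actually used:

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def fit_quadratic(points):
    """Coefficients (a, b, c) of y = a + b*x + c*x**2 through three points,
    via Cramer's rule; a one-variable sketch of the RSM polynomial."""
    (x1, y1), (x2, y2), (x3, y3) = points
    a_mat = [[1.0, x1, x1 * x1], [1.0, x2, x2 * x2], [1.0, x3, x3 * x3]]
    d = det3(a_mat)
    ys = (y1, y2, y3)
    coeffs = []
    for j in range(3):
        m = [row[:] for row in a_mat]
        for i in range(3):
            m[i][j] = ys[i]
        coeffs.append(det3(m) / d)
    return tuple(coeffs)

# Hypothetical permeate flux (L/(m^2 h)) at 1, 2, 3 bar
a, b, c = fit_quadratic([(1.0, 22.0), (2.0, 33.0), (3.0, 40.0)])
print(a, b, c)  # 7.0 17.0 -2.0
```

The negative quadratic term reflects the typical flattening of flux with pressure as fouling effects grow; with two factors and replicates, a least-squares fit replaces this exact interpolation.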

Keywords: ultrafiltration, steep-water, starch industry, ceramic membrane

Procedia PDF Downloads 284
566 Consumers of Counterfeit Goods and the Role of Context: A Behavioral Perspective of the Process

Authors: Carla S. C. da Silva, Cristiano Coelho, Junio Souza

Abstract:

The universe of luxury has charmed and seduced consumers for centuries. Since the Middle Ages, its symbols have been displayed as objects of power and status, arousing desire and provoking social covetousness. The counterfeit market thus grows every day, offering a group of consumers the opportunity to enter a distinct social position, where the beautiful, shiny brand logo signals a passport of inclusion to everything this group wants. This work investigated how context and the social environment can influence consumers to choose products of symbolic brands even when they are not legitimate, and how this behavior is accepted in society. The study proposed to: (a) evaluate measures of knowledge and quality for a set of brands presented under two manipulated contexts (luxury vs. academic), among buyers and non-buyers of counterfeits, both for original products and for their counterfeit counterparts; (b) measure the effect of layout on the verbal responses of buyers and non-buyers regarding their assessment of the behavior of buyers of counterfeits. In addition to measuring the level of knowledge and quality attributed to each brand investigated, the present study also verified consumers' willingness to pay for a counterfeit good of their preferred brands compared with the original. These data can serve as a parameter for luxury brand managers in their anti-counterfeiting strategies. The investigation of purchase frequency showed that those who buy counterfeit goods do so regularly, and there is a propensity to repeat the purchase. A significant majority of buyers of counterfeits are prone to invest in illegality to meet their expectation of conforming to the standards of their interest groups.

Keywords: luxury, consumers, counterfeits, context, behaviorism

Procedia PDF Downloads 301
565 Don't Just Guess and Slip: Estimating Bayesian Knowledge Tracing Parameters When Observations Are Scant

Authors: Michael Smalenberger

Abstract:

Intelligent tutoring systems (ITS) are computer-based platforms which can incorporate artificial intelligence to provide step-by-step guidance as students practice problem-solving skills. ITS can replicate and even exceed some benefits of one-on-one tutoring, foster transactivity in collaborative environments, and lead to substantial learning gains when used to supplement the instruction of a teacher or when used as the sole method of instruction. A common facet of many ITS is their use of Bayesian Knowledge Tracing (BKT) to estimate parameters necessary for the implementation of the artificial intelligence component, and for the probability of mastery of a knowledge component relevant to the ITS. While various techniques exist to estimate these parameters and probability of mastery, none directly and reliably ask the user to self-assess these. In this study, 111 undergraduate students used an ITS in a college-level introductory statistics course for which detailed transaction-level observations were recorded, and users were also routinely asked direct questions that would lead to such a self-assessment. Comparisons were made between these self-assessed values and those obtained using commonly used estimation techniques. Our findings show that such self-assessments are particularly relevant at the early stages of ITS usage while transaction level data are scant. Once a user’s transaction level data become available after sufficient ITS usage, these can replace the self-assessments in order to eliminate the identifiability problem in BKT. We discuss how these findings are relevant to the number of exercises necessary to lead to mastery of a knowledge component, the associated implications on learning curves, and its relevance to instruction time.
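
The standard BKT update behind these parameters (guess, slip, and learning/transition rate) is a two-step Bayes rule; the sketch below uses hypothetical parameter values, with the prior playing the role a self-assessment could fill before transaction-level data accumulate:

```python
def bkt_update(p_know, correct, p_guess, p_slip, p_transit):
    """One Bayesian Knowledge Tracing step: Bayes-update the mastery
    probability on the observed response, then apply the learning rate."""
    if correct:
        posterior = p_know * (1 - p_slip) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        posterior = p_know * p_slip / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess))
    return posterior + (1 - posterior) * p_transit

# Hypothetical parameters; a self-assessed prior could seed p_know early on
p_know = 0.3
for response in (True, True, False, True):
    p_know = bkt_update(p_know, response, p_guess=0.2, p_slip=0.1,
                        p_transit=0.15)
print(round(p_know, 3))
```

Because different (guess, slip) pairs can reproduce the same response sequence, the parameters are not always identifiable from scant data, which is exactly where an external self-assessment can anchor the estimate.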

Keywords: Bayesian Knowledge Tracing, Intelligent Tutoring System, in vivo study, parameter estimation

Procedia PDF Downloads 172
564 Object-Based Image Analysis for Gully-Affected Area Detection in the Hilly Loess Plateau Region of China Using Unmanned Aerial Vehicle

Authors: Hu Ding, Kai Liu, Guoan Tang

Abstract:

The Chinese Loess Plateau suffers from serious gully erosion induced by natural and human causes. Detecting gully features, including the gully-affected area and its planimetric parameters (length, width, area, etc.), is a significant task not only for researchers but also for policy-makers. This study aims at gully-affected area detection in three catchments of the Chinese Loess Plateau, located in Changwu, Ansai, and Suide, using an unmanned aerial vehicle (UAV). The methodology comprises a sequence of UAV data generation, image segmentation, feature calculation and selection, and random forest classification. Two experiments were conducted to investigate the influence of the segmentation strategy and of feature selection. Results showed that the vertical and horizontal root-mean-square errors were below 0.5 m and 0.2 m, respectively, which is ideal for the Loess Plateau region. The segmentation strategy adopted in this paper, which considers topographic information, together with the optimal parameter combination, improves the segmentation results. The overall extraction accuracies achieved in Changwu, Ansai, and Suide were 84.62%, 86.46%, and 93.06%, respectively, indicating that the proposed method for detecting gully-affected areas is more objective and effective than traditional methods. This study demonstrates that UAVs can bridge the gap between field measurement and satellite-based remote sensing, striking a balance between resolution and efficiency for catchment-scale gully erosion research.
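
The reported vertical and horizontal accuracies are root-mean-square errors over check points; a minimal sketch with hypothetical UAV-derived versus ground-truth elevations:

```python
import math

def rmse(predicted, observed):
    """Root-mean-square error between two coordinate lists."""
    n = len(predicted)
    return math.sqrt(sum((p - o) ** 2
                         for p, o in zip(predicted, observed)) / n)

# Hypothetical vertical check: UAV elevations vs. GPS ground truth (m)
uav = [1203.4, 1198.1, 1210.9, 1187.6]
gps = [1203.1, 1198.5, 1210.5, 1187.9]
print(round(rmse(uav, gps), 3))
```

The same function applied to the horizontal residuals gives the planimetric figure; both would be compared against the 0.5 m and 0.2 m thresholds quoted above.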

Keywords: unmanned aerial vehicle (UAV), object-based image analysis, gully erosion, gully-affected area, Loess Plateau, random forest

Procedia PDF Downloads 218
563 Study of the Effect of the Contra-Rotating Component on the Performance of the Centrifugal Compressor

Authors: Van Thang Nguyen, Amelie Danlos, Richard Paridaens, Farid Bakir

Abstract:

This article presents a study of the effect of a contra-rotating component on the efficiency of centrifugal compressors. A contra-rotating centrifugal compressor (CRCC) is constructed from two independent rotors, rotating in opposite directions, which replace the single rotor of a conventional centrifugal compressor (REF). To respect the geometrical parameters of the REF, the two rotors of the CRCC are designed from the single-rotor geometry using the hub and shroud length-ratio parameter of the meridional contour. First, the first rotor is designed by choosing a value of the length ratio; then the second rotor is calculated to match the fluid flow leaving the first rotor, according to aerodynamic principles. In this study, four values of the length ratio, 0.3, 0.4, 0.5, and 0.6, are used to create four configurations, CF1, CF2, CF3, and CF4, respectively. For comparison, the circumferential velocity at the outlet of the REF and of the CRCC is preserved, meaning that the single rotor of the REF and the second rotor of the CRCC rotate at the same speed of 16,000 rpm; the speed of the first rotor is here chosen equal to that of the second rotor. CFD simulations were conducted to compare the performance of the CRCC and the REF under the same boundary conditions. The results show that configurations with a higher length ratio give a higher pressure rise but lower efficiency. An investigation over the entire operating range shows that CF1 is the best configuration in this case. In addition, the CRCC can improve both the pressure rise and the efficiency by changing the speed of each rotor independently. The results of changing the first rotor speed show that, with a 130% speed increase, the pressure ratio rises by 8.7% while the efficiency remains stable at the flow rate of the design operating point.

Keywords: centrifugal compressor, contra-rotating, interaction rotor, vacuum

Procedia PDF Downloads 134
562 Parameter Optimization and Thermal Simulation in Laser Joining of Coach Peel Panels of Dissimilar Materials

Authors: Masoud Mohammadpour, Blair Carlson, Radovan Kovacevic

Abstract:

The quality of laser welded-brazed (LWB) joints is strongly dependent on the main process parameters; therefore, the effects of laser power (3.2-4 kW), welding speed (60-80 mm/s), and wire feed rate (70-90 mm/s) on mechanical strength and surface roughness were investigated in this study. A comprehensive optimization by means of response surface methodology (RSM) and a desirability function was used for multi-criteria optimization. The experiments were planned based on a Box-Behnken design, implementing linear and quadratic polynomial equations for predicting the desired output properties. Finally, validation experiments were conducted at the optimized process condition and exhibited good agreement between the predicted and experimental results. AlSi3Mn1 was selected as the filler material for joining aluminum alloy 6022 and hot-dip galvanized steel in a coach peel configuration. The high scanning speed could keep the intermetallic compound (IMC) layer as thin as 5 µm. Thermal simulations of the joining process were conducted by the finite element method (FEM), and the results were validated against experimental data. The Fe/Al interfacial thermal history showed that the duration of the critical temperature range (700-900 °C) in this high-scanning-speed process was less than 1 s. This short interaction time leads to the formation of a reaction-controlled IMC layer rather than diffusion-controlled mechanisms.
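
The desirability function used for multi-criteria optimization maps each response onto [0, 1] and combines them by a geometric mean (the Derringer-Suich scheme); the responses, bounds, and weights below are hypothetical:

```python
def desirability_larger(y, low, target, weight=1.0):
    """Derringer-Suich desirability for a larger-the-better response."""
    if y <= low:
        return 0.0
    if y >= target:
        return 1.0
    return ((y - low) / (target - low)) ** weight

def overall_desirability(values):
    """Geometric mean of the individual desirabilities."""
    product = 1.0
    for d in values:
        product *= d
    return product ** (1.0 / len(values))

# Hypothetical responses: joint strength (kN, want high) and a roughness
# score already inverted so that higher is better
d_strength = desirability_larger(2.8, low=2.0, target=3.0)
d_roughness = desirability_larger(0.7, low=0.0, target=1.0)
print(round(overall_desirability([d_strength, d_roughness]), 3))  # 0.748
```

The optimizer then searches the fitted Box-Behnken polynomials for the factor settings that maximize this overall desirability.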

Keywords: laser welding-brazing, finite element, response surface methodology (RSM), multi-response optimization, cross-beam laser

Procedia PDF Downloads 352
561 Effect of 3-Dimensional Knitted Spacer Fabrics Characteristics on Its Thermal and Compression Properties

Authors: Veerakumar Arumugam, Rajesh Mishra, Jiri Militky, Jana Salacova

Abstract:

The thermo-physiological comfort and compression properties of knitted spacer fabrics were evaluated by varying the spacer fabric parameters. Air permeability and water vapor transmission of the fabrics were measured using the Textest FX-3300 air permeability tester and the PERMETEST instrument. The thermal behavior of the fabrics was then obtained with a thermal conductivity analyzer, and the overall moisture management capacity was evaluated with a moisture management tester. The compression properties of the spacer fabrics were tested using the Kawabata Evaluation System (KES-FB3). In the KES testing, the compression resilience, work of compression, linearity of compression, and other parameters were calculated from the pressure-thickness curves. Analysis of variance (ANOVA) was performed using the statistical software QC Expert (TriloByte) and Darwin in order to compare the influence of the different fabric parameters on the thermo-physiological and compression behavior of the samples. This study established that the raw material, type of spacer yarn, density, thickness, and tightness of the surface layer have a significant influence on both thermal conductivity and work of compression in spacer fabrics. The parameter that mainly influences the water vapor permeability of these fabrics is the raw material, i.e., the wetting and wicking properties of the fibers. The Pearson correlation between the moisture capacity of the fabrics and their water vapor permeability was also computed with the same software. These findings are important requirements for the further design of clothing for extreme environmental conditions.
Keywords: 3D spacer fabrics, thermal conductivity, moisture management, work of compression (WC), resilience of compression (RC)

Procedia PDF Downloads 542
560 Beam Spatio-Temporal Multiplexing Approach for Improving Control Accuracy of High Contrast Pulse

Authors: Ping Li, Bing Feng, Junpu Zhao, Xudong Xie, Dangpeng Xu, Kuixing Zheng, Qihua Zhu, Xiaofeng Wei

Abstract:

In laser-driven inertial confinement fusion (ICF), control of the temporal shape of the laser pulse is a key point in ensuring an optimal laser-target interaction. One of the main difficulties in controlling the temporal shape is the control accuracy of the foot part of a high-contrast pulse. Based on an analysis of pulse perturbation during amplification and frequency conversion in high-power lasers, an approach of beam spatio-temporal multiplexing is proposed to improve the control precision of high-contrast pulses. In this approach, the foot and peak parts of the high-contrast pulse are controlled independently; they propagate separately in the near field and combine in the far field to form the required pulse shape. For a high-contrast pulse, the beam area ratio of the two parts is optimized, which increases the beam fluence and intensity of the foot part and greatly simplifies pulse control. Meanwhile, the near-field distribution of the two parts is also carefully designed to make sure their F-numbers are the same, which is another important parameter for laser-target interaction. Integrated calculation results show that for a pulse with a contrast of up to 500, the deviation of the foot part can be reduced from 20% to 5% by using the beam spatio-temporal multiplexing approach with a beam area ratio of 1/20, which is almost the same as that of the peak part. The research results are expected to bring a breakthrough in the power balance of high-power laser facilities.
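The benefit of the quoted 1/20 area ratio is simple to verify: carrying the foot of the pulse on a smaller sub-aperture raises its fluence by the inverse of the area ratio. A back-of-envelope sketch, using only the numbers from the abstract:

```python
# Shrinking the foot-beam aperture raises its fluence by 1/area_ratio,
# easing measurement and control of the weak foot of the pulse.
contrast = 500            # peak-to-foot intensity contrast (from the abstract)
area_ratio = 1 / 20       # foot-beam area / peak-beam area (from the abstract)

# Fluence of the foot relative to the peak, before and after multiplexing
foot_fluence_shared = 1 / contrast                 # single shared aperture
foot_fluence_multiplexed = (1 / contrast) / area_ratio

gain = foot_fluence_multiplexed / foot_fluence_shared
```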

Keywords: inertial confinement fusion, laser pulse control, beam spatio-temporal multiplexing, power balance

Procedia PDF Downloads 147
559 Integrating Knowledge Distillation of Multiple Strategies

Authors: Min Jindong, Wang Mingxia

Abstract:

With the widespread use of artificial intelligence in everyday life, computer vision, and especially deep convolutional neural network models, has developed rapidly. As the complexity of real visual target detection tasks and the required recognition accuracy increase, target detection network models have also become very large. Huge deep neural network models are not conducive to deployment on edge devices with limited resources, and the timeliness of network model inference is poor. In this paper, knowledge distillation is used to compress a huge and complex deep neural network model, and the knowledge contained in the complex network model is comprehensively transferred to another, lightweight network model. Different from traditional knowledge distillation methods, we propose a novel knowledge distillation that incorporates multi-faceted features, called M-KD. When training and optimizing the deep neural network model for target detection, the soft-target output of the teacher network, the relationships between the layers of the teacher network, and the feature attention maps of the hidden layers of the teacher network are all transferred to the student network as knowledge. At the same time, we introduce an intermediate transition layer, that is, an intermediate guidance layer, between the teacher network and the student network to make up for the large difference between them. Finally, this paper adds an exploration module to the traditional knowledge distillation teacher-student network model, so that the student network model not only inherits the knowledge of the teacher network but also explores some new knowledge and characteristics.
Comprehensive experiments using different distillation parameter configurations across multiple datasets and convolutional neural network models demonstrate that our proposed network model achieves substantial improvements in both speed and accuracy.
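Of the knowledge sources listed above, the soft-target term is the most standard. Below is a minimal NumPy sketch of that single component (temperature-scaled KL divergence between teacher and student outputs), not the full M-KD loss, whose relation and attention terms are not specified in the abstract.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def soft_target_loss(teacher_logits, student_logits, T=4.0):
    """KL(teacher || student) on softened distributions, scaled by T^2,
    the classic soft-target component of knowledge distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean() * T * T)

teacher = np.array([[5.0, 1.0, 0.5]])
student_off = np.array([[0.5, 1.0, 5.0]])

loss_same = soft_target_loss(teacher, teacher)     # identical outputs -> 0
loss_off = soft_target_loss(teacher, student_off)  # mismatched -> positive
```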

Keywords: object detection, knowledge distillation, convolutional network, model compression

Procedia PDF Downloads 278
558 Effect of Food Supplies Holstein Calves Supplemented with Bacillus Subtilis PB6 in Morbidity and Mortality

Authors: Banca Patricia Pena Revuelta, Ramiro Gonzalez Avalos, Juan Leonardo Rocha Valdez, Jose Gonzalez Avalos, Karla Rodriguez Hernandez

Abstract:

Probiotics are a promising alternative to improve productivity and animal health. In addition, they can be part of the composition of different types of products, including foods (functional foods), medicines, and dietary supplements. The objective of the present work was to evaluate the effect of feeding Holstein calves supplemented with Bacillus subtilis PB6 on morbidity and mortality. Sixty newborn animals were used, randomly assigned to one of three treatments: T1 = control; T2 = 10 g/calf/day, with the first dose given within 20 min after birth; T3 = 10 g/calf/day, with the first dose given between 12 and 24 h after birth. In all treatments, 432 L of pasteurized whole milk, divided into two feedings per day at 07:00 and 15:00, were given over 60 days. The Bacillus subtilis PB6 was added to the milk tub at feeding time. The first colostrum intake (2 L per intake) was given within 2 h after birth, followed by a second one 6 h after the first. The diseases recorded to monitor morbidity and mortality of the calves were diarrhea and pneumonia, registered from birth to 60 days of life. The parameter evaluated was feed consumption. Statistical analysis was performed using analysis of variance, and comparison of means was performed using the Tukey test, with P < 0.05 taken as the threshold for statistical significance. The results show no statistical difference (P > 0.05) between treatments in feed consumption (14.762, 11.698, and 12.403 kg of feed on average, respectively). The group of calves not provided with Bacillus subtilis PB6 obtained the highest feed intake. The addition of Bacillus subtilis PB6 to the feeding of calves does not increase feed intake.

Keywords: feeding, development, milk, probiotic

Procedia PDF Downloads 147
557 Estimation Model for Concrete Slump Recovery by Using Superplasticizer

Authors: Chaiyakrit Raoupatham, Ram Hari Dhakal, Chalermchai Wanichlamlert

Abstract:

This paper introduces a practical solution for concrete slump recovery using a Type F chemical admixture (naphthalene-based superplasticizer), in order to solve the problem of unusable concrete that has lost its slump, especially in tropical countries with faster slump loss rates. On the other hand, adding superplasticizer to concrete indiscriminately can cause the concrete to segregate. Therefore, this paper also develops an estimation model used to calculate the second dose of superplasticizer needed for slump recovery. Fresh properties of ordinary Portland cement concrete with a volumetric ratio of paste to void between aggregate (paste content) of 1.1-1.3, a water-cement ratio of 0.30 to 0.67 and an initial superplasticizer (naphthalene-based) dose of 0.25%-1.6% were tested for initial slump, and for slump loss every 30 minutes over one and a half hours, by the slump cone test. Concretes with slump loss ranging from 10% to 90% were re-dosed and successfully recovered back to their initial slump; the slump after re-dosing was measured by the slump cone test. From the results, it was concluded that slump loss was slower for mixes with a high initial dose of superplasticizer, because the added superplasticizer disturbs cement hydration. The required second dose of superplasticizer was affected by two major parameters, water-cement ratio and paste content: a lower water-cement ratio and a lower paste content increase the required second dose. The second dose is higher as the solid content within the system increases, whether the solids come from cement particles or from aggregate. The data were analyzed to form an equation used to estimate the second dosage of superplasticizer required to recover slump to its original value.
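The estimation model is described as an equation in water-cement ratio and paste content, but its fitted coefficients are not given in the abstract. The sketch below fits a hypothetical linear model to illustrative numbers that merely reproduce the reported trend (lower w/c and lower paste content imply a larger second dose); it is not the authors' equation.

```python
import numpy as np

# Hypothetical calibration data (illustrative only).
wc = np.array([0.30, 0.35, 0.45, 0.55, 0.67, 0.40])       # water-cement ratio
paste = np.array([1.10, 1.30, 1.20, 1.10, 1.30, 1.25])    # paste content
dose2 = np.array([1.02, 0.76, 0.64, 0.52, 0.12, 0.70])    # second dose, %

# Linear estimation model: dose2 = a + b*(w/c) + c*paste
A = np.column_stack([np.ones_like(wc), wc, paste])
coef, *_ = np.linalg.lstsq(A, dose2, rcond=None)

def predict_second_dose(wc_val, paste_val):
    return float(coef[0] + coef[1] * wc_val + coef[2] * paste_val)
```

Negative coefficients on both factors encode the reported behavior: decreasing either w/c ratio or paste content raises the predicted second dose.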

Keywords: estimation model, second superplasticizer dosage, slump loss, slump recovery

Procedia PDF Downloads 199
556 Differential Effects of Parity, Stress and Fluoxetine Treatment on Locomotor Activity and Swimming Behavior in Rats

Authors: Nur Hidayah Kaz Abdul Aziz, Norhalida Hashim, Zurina Hassan

Abstract:

The peripartum period is a time when women are vulnerable to depression, and stress may further increase the risk of its occurrence. Use of selective serotonin reuptake inhibitors (SSRIs) in the treatment of postpartum depression is common practice. Antidepressant treatment, however, has rarely been compared between gestated and nulliparous animals exposed to stress. This study aimed to investigate the effects of parity and stress, as well as of fluoxetine (an SSRI) treatment after stress exposure, on the behavior of rats. Gestating and nulliparous Sprague Dawley rats were either subjected to chronic stressors or left undisturbed throughout the gestation period. After parturition, all stressors were stopped, and some of the stressed rats were treated with fluoxetine (10 mg/kg). The final groups formed were: 1. non-stressed nulliparous rats, 2. non-stressed dams, 3. stressed nulliparous rats, 4. stressed dams, 5. fluoxetine-treated stressed nulliparous rats, and 6. fluoxetine-treated stressed dams. Rats were tested in the open field test (OFT), novel object recognition test (NOR) and forced swim test (FST) after weaning of the pups. Gestational stress significantly reduced locomotor activity in the OFT (p<0.05), while fluoxetine significantly increased activity in nulliparous rats (p<0.001) but not in the dams. While no differences were observed in the NOR, stress and parity inhibited swimming behavior in the FST. Climbing and immobility in the FST showed no significant differences, although there was a tendency toward a treatment effect on the immobility parameter (p=0.06), with fluoxetine-treated stressed dams being the least immobile. In conclusion, the effects of parity and stress, as well as of fluoxetine treatment, depended on the type of behavioral test performed.

Keywords: stress, parity, SSRI, behavioral tests

Procedia PDF Downloads 172
555 Numerical and Sensitivity Analysis of Modeling the Newcastle Disease Dynamics

Authors: Nurudeen Oluwasola Lasisi

Abstract:

Newcastle disease is a highly contagious disease of birds caused by a paramyxovirus. In this paper, we present novel quarantine-adjusted incidence and linear incidence Newcastle disease model equations and consider the dynamics of transmission and control of Newcastle disease. The existence and uniqueness of the solutions were obtained. The existence of disease-free equilibrium points was shown, and the model threshold parameter was examined using the next-generation operator method. A sensitivity analysis was carried out in order to identify the parameters to which disease transmission is most sensitive. It revealed that as the parameters β, ω, and Λ increase while other parameters are kept constant, the effective reproduction number R_ev increases, implying that these parameters increase the endemicity of the infection. Conversely, when the parameters μ, ε, γ, δ₁, and α increase while other parameters are kept constant, R_ev decreases, implying that these parameters, which have negative sensitivity indices, decrease the endemicity of the infection. The analytical results were numerically verified by the Differential Transformation Method (DTM), and quantitative behavior of the model equations was presented. We established that as the contact rate (β) increases, R_ev increases; as the effectiveness of drug usage increases, R_ev decreases; and as the number of quarantined individuals decreases, R_ev decreases. The simulations showed that the number of infected individuals increases as the susceptible population approaches zero, and that the number of vaccinated individuals increases as the number of infected individuals decreases, with a simultaneous increase in recovered individuals.
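The sensitivity analysis described above typically uses the normalized forward sensitivity index S_p = (p / R_ev) * dR_ev/dp. The abstract does not give the closed form of R_ev, so the expression below is a hypothetical stand-in of the usual βΛ / (μ(μ + γ + δ)) shape, used only to illustrate how the indices are computed and why β and Λ come out positive while μ comes out negative.

```python
def r_ev(beta, Lam, mu, gamma, delta):
    """Hypothetical effective reproduction number (stand-in, not the paper's)."""
    return beta * Lam / (mu * (mu + gamma + delta))

def sensitivity_index(param_name, params, h=1e-6):
    """Normalized forward sensitivity S_p = (p / R) * dR/dp, via finite difference."""
    base = r_ev(**params)
    bumped = dict(params)
    bumped[param_name] += h
    deriv = (r_ev(**bumped) - base) / h
    return params[param_name] / base * deriv

params = dict(beta=0.4, Lam=100.0, mu=0.05, gamma=0.1, delta=0.02)
s_beta = sensitivity_index("beta", params)   # +1: R_ev is linear in beta
s_mu = sensitivity_index("mu", params)       # negative: mu lowers endemicity
```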

Keywords: disease-free equilibrium, effective reproduction number, endemicity, Newcastle disease model, numerical, Sensitivity analysis

Procedia PDF Downloads 45
554 Buffer Allocation and Traffic Shaping Policies Implemented in Routers Based on a New Adaptive Intelligent Multi Agent Approach

Authors: M. Taheri Tehrani, H. Ajorloo

Abstract:

In this paper, an intelligent multi-agent framework is developed for each router, in which agents positioned at the router ports have two vital functionalities: traffic shaping and buffer allocation. With the traffic-shaping functionality, agents shape the forwarded traffic by dynamic, real-time allocation of the token generation rate in a token bucket algorithm; with the buffer-allocation functionality, agents share their buffer capacity with each other based on their needs and the conditions of the network. This dynamic and intelligent framework gives some ports the opportunity to perform better under bursty and busier conditions. The agents act intelligently based on a Reinforcement Learning (RL) algorithm and consider the effective parameters in their decision process. As RL is limited in how many parameters it can consider in its decision process, due to the volume of calculations, we utilize our novel method, which applies Principal Component Analysis (PCA) to the RL input and enables the algorithm to consider as many parameters as needed. Compared to our previous work, where traffic shaping was done without any sharing or dynamic allocation of buffer size per port, this implementation achieves lower packet drop across the whole network, especially in the source routers. These methods are implemented in our previously proposed intelligent simulation environment so that the performance metrics can be compared properly. The results obtained from this simulation environment show an efficient and dynamic utilization of resources, in terms of bandwidth and the buffer capacities preallocated to each port.
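The token bucket that the agents tune can be sketched as follows. The rate, bucket depth and packet sizes are illustrative; in the framework above, an agent would adjust `rate` online via RL rather than keep it fixed.

```python
class TokenBucket:
    """Minimal token-bucket traffic shaper."""
    def __init__(self, rate, capacity):
        self.rate = rate            # tokens generated per second
        self.capacity = capacity    # bucket depth (maximum burst size)
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now, packet_size):
        # Refill according to elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size
            return True
        return False                # packet must wait or be dropped

tb = TokenBucket(rate=100.0, capacity=200.0)
# 50-token packets arriving every 10 ms: the initial burst drains the bucket,
# after which the refill rate is too slow to admit further packets.
sent = [tb.allow(t * 0.01, 50.0) for t in range(10)]
```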

Keywords: principal component analysis, reinforcement learning, buffer allocation, multi- agent systems

Procedia PDF Downloads 518
553 Apparent Temperature Distribution on Scaffoldings during Construction Works

Authors: I. Szer, J. Szer, K. Czarnocki, E. Błazik-Borowa

Abstract:

People on construction scaffoldings work in a dynamically changing, often unfavourable climate. Additionally, this kind of work is performed on low-stiffness structures at height, which increases the risk of accidents. It is therefore desirable to define the parameters of the work environment that contribute to increasing the occupational safety level of construction workers. The aim of this article is to present how changes in microclimate parameters on scaffolding can contribute to the development of dangerous situations and accidents. For this purpose, indicators based on the human thermal balance were used. However, use of this model under construction conditions is often burdened with significant errors, or even impossible to implement, due to the lack of precise data. Thus, in the target model, a modified parameter was used: apparent environmental temperature. Apparent temperature in the proposed Scaffold Use Risk Assessment Model is the perceived outdoor temperature, caused by the combined effects of air temperature, radiative temperature, relative humidity and wind speed (wind chill index, heat index). In the paper, correlations between the component factors and apparent temperature are presented for a facade scaffolding with a width of 24.5 m and a height of 42.3 m, located on the south-west side of a building. The distribution of the factors on the scaffolding has been used to evaluate the fit of the microclimate model. The results of the studies indicate that the observed ranges of apparent temperature on the scaffolds frequently result in a worker's inability to adapt. This leads to reduced concentration and increased fatigue, adversely affects health, and consequently increases the risk of dangerous situations and accidental injuries.
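Of the components combined into apparent temperature, the wind chill index has a standard closed form (the JAG/TI formula used by several weather services, with temperature in deg C and wind speed in km/h). The sketch below uses that generic formulation; it is not necessarily the exact variant used in the authors' model.

```python
def wind_chill(temp_c, wind_kmh):
    """Standard JAG/TI wind-chill formula (deg C, wind in km/h).
    Defined for cold, windy conditions (T <= 10 deg C, v >= 4.8 km/h);
    outside that range the air temperature itself is returned."""
    if temp_c > 10 or wind_kmh < 4.8:
        return temp_c
    v = wind_kmh ** 0.16
    return 13.12 + 0.6215 * temp_c - 11.37 * v + 0.3965 * temp_c * v

# 0 deg C with a 30 km/h wind is perceived as roughly -6.5 deg C.
apparent = wind_chill(0.0, 30.0)
```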

Keywords: apparent temperature, health, safety work, scaffoldings

Procedia PDF Downloads 182
552 Fatigue Life Prediction under Variable Loading Based a Non-Linear Energy Model

Authors: Aid Abdelkrim

Abstract:

A method of fatigue damage accumulation based on energy parameters of the fatigue process is proposed in this paper. The model is simple to use: it has no free parameter to be determined and requires only knowledge of the W-N curve (W: strain energy density; N: number of cycles to failure) determined from the experimental Wöhler curve. To examine the performance of the proposed nonlinear model in estimating the fatigue damage and fatigue life of components under random loading, a batch of specimens made of 6082-T6 aluminium alloy was studied, and some of the results are reported in the present paper. The paper describes an algorithm and suggests a fatigue cumulative damage model, especially for random loading. This work contains the results of uniaxial random-load fatigue tests with different mean and amplitude values performed on 6082-T6 aluminium alloy specimens. The proposed model has been formulated to take into account the damage evolution at different load levels, and it allows the effect of the loading sequence to be included by means of a recurrence formula derived for multilevel loading, considering complex load sequences. It is concluded that the 'damaged stress interaction damage rule' proposed here allows better fatigue damage prediction than the widely used Palmgren-Miner rule, and that the formula derived for random fatigue can be used to predict fatigue damage and fatigue lifetime very easily. The results obtained with the model are compared with the experimental results and with those calculated by the most widely used fatigue damage model (Miner's rule). The comparison shows that the proposed model gives a good estimation of the experimental results, with a smaller error than Miner's rule.
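The Palmgren-Miner rule that the proposed model is benchmarked against accumulates damage linearly and ignores load sequence. A minimal sketch of that baseline, with a hypothetical two-level loading history:

```python
def miner_damage(blocks):
    """Palmgren-Miner linear damage: D = sum(n_i / N_i); failure near D = 1.
    Sequence-independent: reordering the blocks leaves D unchanged, which is
    precisely the limitation the nonlinear energy model addresses."""
    return sum(n / N for n, N in blocks)

# (applied cycles, cycles-to-failure at that level) -- hypothetical values
history = [(2.0e4, 1.0e5), (1.0e4, 2.0e4)]
D = miner_damage(history)   # 0.2 + 0.5 = 0.7, i.e. 70% of life consumed
```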

Keywords: damage accumulation, energy model, damage indicator, variable loading, random loading

Procedia PDF Downloads 396
551 The Effect of Grading Characteristics on the Shear Strength and Mechanical Behavior of Granular Classes of Sands

Authors: Salah Brahim Belakhdar, Tari Mohammed Amin, Rafai Abderrahmen, Amalsi Bilal

Abstract:

The shear strength of sandy soils is considered an important parameter in studying the stability of different civil engineering structures subjected to monotonic, cyclic, and earthquake loading conditions. The proposed research investigated the effect of grading characteristics on the shear strength and mechanical behaviour of granular classes of sands mixed with silt in loose and dense states (Dr = 15% and 90%). The laboratory investigation aimed at understanding the degree to which the shear strength of sand-silt mixtures is affected by gradation under static loading conditions. For the purpose of clarifying and evaluating the shear strength characteristics of sandy soils, a series of Casagrande shear box tests was carried out on reconstituted samples of sand-silt mixtures with various gradations. The soil samples were tested under different normal stresses (100, 200, and 300 kPa). The results from this laboratory investigation were used to develop insight into the shear strength response of sand and sand-silt mixtures under monotonic loading conditions. Analysis of the obtained data revealed that the grading characteristics (D10, D50, Cu, ESR, and MGSR) have a significant influence on the shear strength response. It was found that shear strength can be correlated to the grading characteristics of the sand-silt mixture. The effective size ratio (ESR) and mean grain size ratio (MGSR) appear to be pertinent parameters for predicting the shear strength response of the sand-silt mixtures under study.
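Shear-box results at several normal stresses are conventionally reduced to cohesion and friction angle via the Mohr-Coulomb failure envelope, tau = c + sigma_n tan(phi). Only the normal-stress levels below come from the abstract; the peak shear stresses are hypothetical values for illustration.

```python
import numpy as np

# Normal stresses from the abstract; peak shear stresses are hypothetical.
sigma_n = np.array([100.0, 200.0, 300.0])      # normal stress, kPa
tau_f = np.array([68.0, 125.0, 182.0])         # peak shear stress, kPa

# Linear Mohr-Coulomb envelope fit: tau = c + sigma_n * tan(phi)
slope, intercept = np.polyfit(sigma_n, tau_f, 1)
cohesion = intercept                            # c, kPa
friction_angle = np.degrees(np.arctan(slope))   # phi, degrees
```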

Keywords: mechanical behavior, silty sand, friction angle, cohesion, fines content

Procedia PDF Downloads 373
550 Comparison of Various Policies under Different Maintenance Strategies on a Multi-Component System

Authors: Demet Ozgur-Unluakin, Busenur Turkali, Ayse Karacaorenli

Abstract:

Maintenance strategies can be classified into two types, reactive and proactive, with respect to the timing of failure and maintenance. If the maintenance activity is done after a breakdown, it is called reactive maintenance. Proactive maintenance, which is further divided into preventive and predictive, focuses on maintaining components before a failure occurs in order to prevent expensive halts. Recently, the number of interacting components in systems has increased rapidly, and the structure of systems has therefore become more complex. This situation has made it difficult to make the right maintenance decisions, so determining effective decisions has come to play a significant role. In multi-component systems, many methodologies and strategies can be applied, either when a component or a system has already broken down or when the aim is to identify and proactively avoid defects that could lead to future failure. This study focuses on the comparison of various maintenance strategies on a multi-component dynamic system. Components in the system are hidden, although partial observability is available to the decision maker, and they deteriorate in time. Several predefined policies under corrective, preventive and predictive maintenance strategies are considered to minimize the total maintenance cost over a planning horizon. The policies are simulated via Dynamic Bayesian Networks on a multi-component system with different policy parameters and cost scenarios, and their performances are evaluated. Results show that when the difference between corrective and proactive maintenance costs is low, none of the proactive maintenance policies is significantly better than corrective maintenance. However, when the difference increases, at least one policy parameter for each proactive maintenance strategy gives a significantly lower cost than corrective maintenance.

Keywords: decision making, dynamic Bayesian networks, maintenance, multi-component systems, reliability

Procedia PDF Downloads 129
549 Finding a Set of Long Common Substrings with Repeats from m Input Strings

Authors: Tiantian Li, Lusheng Wang, Zhaohui Zhan, Daming Zhu

Abstract:

In this paper, we propose two string problems and study algorithms and the complexity of various versions of these problems. Let S = {s₁, s₂, . . . , sₘ} be a set of m strings. A common substring of S is a substring appearing in every string in S. Given a set of m strings S = {s₁, s₂, . . . , sₘ} and a positive integer k, we want to find a set C of k common substrings of S such that the k common substrings in C appear in the same order and have no overlap among the m input strings in S, and the total length of the k common substrings in C is maximized. This problem is referred to as the longest total length of k common substrings from m input strings (LCSS(k, m) for short). The other problem we study is called the longest total length of a set of common substrings of length more than l from m input strings (LSCSS(l, m) for short). Given a set of m strings S = {s₁, s₂, . . . , sₘ} and a positive integer l, for LSCSS(l, m) we want to find a set of common substrings of S, each of length more than l, such that the total length of all the common substrings is maximized. We show that both problems are NP-hard when k and m are variables. We propose dynamic programming algorithms with time complexity O(kn₁n₂) and O(n₁n₂) to solve LCSS(k, 2) and LSCSS(l, 2), respectively, where n₁ and n₂ are the lengths of the two input strings. We then design an algorithm for LSCSS(l, m) for the case where every common substring of length > l appears once in each of the m − 1 input strings; the running time is O(n₁²m), where n₁ is the length of the input string with no restriction on common substrings of length > l. Finally, we propose a fixed-parameter algorithm for LSCSS(l, m) for the case where each common substring of length > l appears m − 1 + c times among the m − 1 input strings other than s₁; in other words, each such common substring may repeatedly appear at most c times among the m − 1 input strings {s₂, s₃, . . . , sₘ}.
The running time of the proposed algorithm is O((n₁2ᶜ)²m), where n₁ is the length of the input string with no restriction on repeats. LSCSS(l, m) is proposed to handle whole-chromosome sequence alignment for different strains of the same species, where more than 98% of letters in core regions are identical.
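The O(n₁n₂) two-string algorithms build on the classic longest-common-substring recurrence dp[i][j] = dp[i-1][j-1] + 1 on character matches. Below is a sketch of that k = 1 building block only, not the paper's full LCSS(k, 2) or LSCSS(l, 2) procedures, which additionally enforce ordering, non-overlap and length constraints.

```python
def longest_common_substring(s1, s2):
    """O(n1*n2) dynamic program for the single longest common substring.
    dp[i][j] = length of the longest common suffix of s1[:i] and s2[:j]."""
    n1, n2 = len(s1), len(s2)
    dp = [[0] * (n2 + 1) for _ in range(n1 + 1)]
    best, end = 0, 0
    for i in range(1, n1 + 1):
        for j in range(1, n2 + 1):
            if s1[i - 1] == s2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
                if dp[i][j] > best:
                    best, end = dp[i][j], i   # record where the best match ends
    return s1[end - best:end]

match = longest_common_substring("ACGTACGT", "TTACGTA")
```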

Keywords: dynamic programming, algorithm, common substrings, string

Procedia PDF Downloads 13
548 Effect of Drag Coefficient Models concerning Global Air-Sea Momentum Flux in Broad Wind Range including Extreme Wind Speeds

Authors: Takeshi Takemoto, Naoya Suzuki, Naohisa Takagaki, Satoru Komori, Masako Terui, George Truscott

Abstract:

The drag coefficient is an important parameter for correctly estimating the air-sea momentum flux. However, its parameterization has not been established, due to the variation in the field data. Instead, a number of drag coefficient model formulae have been proposed, although almost none of these models address the extreme wind speed range; for such models, it is unclear how the drag coefficient changes as the wind speed increases into the extreme range. In this study, we investigated the effect of drag coefficient models on the air-sea momentum flux in the extreme wind range on a global scale, comparing two different drag coefficient models: one that does not consider the extreme wind speed range and one that does. We found that the difference between the models in the annual global air-sea momentum flux was small, because the occurrence frequency of strong wind (20 m/s or more) was approximately 1%. However, we also found that the models differ in the middle latitudes, where the annual mean air-sea momentum flux is large and the occurrence frequency of strong wind is high. In addition, the estimated data showed that the difference between the models in the drag coefficient is large in the extreme wind speed range, with the largest difference reaching 23% at wind speeds of 35 m/s or more. These results clearly show that the difference between the two drag coefficient models has a significant impact on the estimation of regional air-sea momentum flux in an extreme wind speed range, such as that seen in a tropical cyclone environment. Furthermore, we estimated the air-sea momentum flux using several kinds of drag coefficient models. We will also provide data from an observation tower and results from Computational Fluid Dynamics (CFD) concerning the influence of wind flow at and around the observation site.
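The model comparison reduces to evaluating the bulk momentum flux tau = rho_a * Cd * U10^2 under each parameterization. The two Cd laws below are generic stand-ins (the abstract does not name the models compared): one grows linearly with wind speed without limit, the other saturates in the extreme-wind regime, so the two agree at moderate winds and diverge above the cap.

```python
RHO_AIR = 1.2  # air density, kg/m^3

def cd_linear(u10):
    """Cd grows linearly with 10 m wind speed; no high-wind saturation."""
    return (0.8 + 0.065 * u10) * 1e-3

def cd_saturating(u10):
    """Same law, but Cd held constant above 33 m/s (extreme-wind regime)."""
    return cd_linear(min(u10, 33.0))

def momentum_flux(u10, cd):
    """Bulk air-sea momentum flux tau = rho_a * Cd * U10^2, in N/m^2."""
    return RHO_AIR * cd(u10) * u10 ** 2

# At 20 m/s the two models agree exactly; at 45 m/s they diverge.
diff_20 = momentum_flux(20.0, cd_linear) - momentum_flux(20.0, cd_saturating)
diff_45 = momentum_flux(45.0, cd_linear) - momentum_flux(45.0, cd_saturating)
```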

Keywords: air-sea interaction, drag coefficient, air-sea momentum flux, CFD (Computational Fluid Dynamics)

Procedia PDF Downloads 371
547 Impact of Unusual Dust Event on Regional Climate in India

Authors: Kanika Taneja, V. K. Soni, Kafeel Ahmad, Shamshad Ahmad

Abstract:

A severe dust storm generated by a western disturbance over northern Pakistan and adjoining Afghanistan affected the north-west region of India between May 28 and 31, 2014, resulting in significant reductions in air quality and visibility. The air quality of the affected region degraded drastically: the PM10 concentration peaked at a very high value of around 1018 μg m⁻³ during the dust storm hours of May 30, 2014 in New Delhi. The present study describes the aerosol optical properties monitored during the dust days using a ground-based multi-wavelength sky radiometer over the National Capital Region of India. A high aerosol optical depth (AOD) at 500 nm of 1.356 ± 0.19 was observed at New Delhi, while the Angstrom exponent (alpha) dropped to 0.287 on May 30, 2014. The variations in the single scattering albedo (SSA) and in the real n(λ) and imaginary k(λ) parts of the refractive index indicated that the dust event made the aerosol optical state more absorbing. The single scattering albedo, refractive index, volume size distribution and asymmetry parameter (ASY) values suggested that dust aerosols predominated over anthropogenic aerosols in the urban environment of New Delhi. The large reduction in radiative flux at the surface level caused significant cooling at the surface. Direct Aerosol Radiative Forcing (DARF) was calculated using a radiative transfer model for the dust period. A consistent increase in surface cooling was evident, ranging from -31 W m⁻² to -82 W m⁻², together with an increase in atmospheric heating from 15 W m⁻² to 92 W m⁻² and a forcing of -2 W m⁻² to 10 W m⁻² at the top of the atmosphere.
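The reported drop in the Angstrom exponent (a signature of coarse dust particles) follows directly from AOD measured at two wavelengths. In the sketch below, the 500 nm AOD and alpha come from the abstract, while the 870 nm AOD is a hypothetical value chosen to be consistent with them.

```python
import math

def angstrom_exponent(aod1, aod2, wl1_nm, wl2_nm):
    """Angstrom exponent from AOD at two wavelengths:
    alpha = -ln(aod1/aod2) / ln(wl1/wl2).
    Low alpha (< ~0.5) indicates coarse-mode particles such as dust."""
    return -math.log(aod1 / aod2) / math.log(wl1_nm / wl2_nm)

# AOD(500 nm) = 1.356 from the abstract; AOD(870 nm) = 1.157 is hypothetical,
# chosen so that alpha reproduces the reported dust-day value of about 0.287.
alpha = angstrom_exponent(1.356, 1.157, 500.0, 870.0)
```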

Keywords: aerosol optical properties, dust storm, radiative transfer model, sky radiometer

Procedia PDF Downloads 377