Search results for: maximum residue limit (MRL)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5727

4287 Square Wave Anodic Stripping Voltammetry of Copper (II) at the Tetracarbonylmolybdenum(0) MWCNT Paste Electrode

Authors: Illyas Isa, Mohamad Idris Saidin, Mustaffa Ahmad, Norhayati Hashim

Abstract:

A highly selective and sensitive electrode for the determination of trace amounts of Cu(II) by square wave anodic stripping voltammetry (SWASV) is proposed. The electrode was made of a paste of multiwall carbon nanotubes (MWCNT) and 2,6–diacetylpyridine-di-(1R)–(-)–fenchone diazine tetracarbonylmolybdenum(0) at 100:5 (w/w). Under optimal conditions, the electrode showed a linear response over the concentration range 1.0 × 10⁻¹⁰ to 1.0 × 10⁻⁶ M Cu(II) and a limit of detection of 8.0 × 10⁻¹¹ M Cu(II). The relative standard deviation (n = 5) of the response to 1.0 × 10⁻⁶ M Cu(II) was 0.036. Interference from cations such as Ni(II), Mg(II), Cd(II), Co(II), Hg(II), and Zn(II) (at 10- and 100-fold concentrations) was negligible, except for Pb(II). Electrochemical impedance spectroscopy (EIS) showed that charge transfer at the electrode-solution interface was favorable. Results of Cu(II) analysis in several water samples agreed well with those obtained by inductively coupled plasma-optical emission spectrometry (ICP-OES). The proposed electrode is therefore recommended as an alternative to spectroscopic techniques for the analysis of Cu(II).
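
As a rough illustration of how such a voltammetric calibration and detection limit can be worked out, the sketch below fits a least-squares calibration line to hypothetical peak-current data and applies the common 3σ/slope convention for the limit of detection. The concentrations, currents, and blank noise are invented for illustration and are not the authors' measurements.

```python
import numpy as np

# Hypothetical SWASV calibration data (not the authors' measurements):
# Cu(II) standard concentrations (M) and stripping peak currents (nA).
conc = np.array([2e-8, 4e-8, 6e-8, 8e-8, 1e-7])
peak = np.array([15.8, 32.1, 47.5, 64.2, 79.9])

slope, intercept = np.polyfit(conc, peak, 1)   # least-squares calibration line

# One common convention: limit of detection = 3 * sigma(blank) / slope.
sigma_blank = 0.02                             # assumed blank noise, nA
lod = 3 * sigma_blank / slope
print(f"sensitivity = {slope:.3e} nA/M, LOD = {lod:.2e} M")
```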

Keywords: chemically modified electrode, Cu(II), square wave anodic stripping voltammetry, tetracarbonylmolybdenum(0)

Procedia PDF Downloads 249
4286 On Stochastic Models for Fine-Scale Rainfall Based on Doubly Stochastic Poisson Processes

Authors: Nadarajah I. Ramesh

Abstract:

Much of the research on stochastic point process models for rainfall has focused on Poisson cluster models constructed from either the Neyman-Scott or Bartlett-Lewis processes. The doubly stochastic Poisson process provides a rich class of point process models, especially for fine-scale rainfall modelling. This paper provides an account of recent developments on this topic and presents results based on some of the fine-scale rainfall models constructed from this class of stochastic point processes. Amongst the literature on stochastic models for rainfall, greater emphasis has been placed on modelling rainfall data recorded at hourly or daily aggregation levels. Stochastic models for sub-hourly rainfall are equally important, as there is a need to reproduce rainfall time series at fine temporal resolutions in some hydrological applications. For example, the study of climate change impacts on hydrology and water management initiatives requires the availability of data at fine temporal resolutions. One approach to generating such rainfall data relies on the combination of an hourly stochastic rainfall simulator with a disaggregator making use of downscaling techniques. Recent work on this topic adopted a different approach by developing specialist stochastic point process models for fine-scale rainfall, aimed at generating synthetic precipitation time series directly from the proposed stochastic model. One strand of this approach focused on developing a class of doubly stochastic Poisson process (DSPP) models for fine-scale rainfall to analyse data collected in the form of rainfall bucket-tip time series. In this context, the arrival pattern of rain gauge bucket-tip times N(t) is viewed as a DSPP whose rate of occurrence varies according to an unobserved finite-state irreducible Markov process X(t). Since the likelihood function of this process can be obtained by conditioning on the underlying Markov process X(t), the models were fitted by maximum likelihood methods. The proposed models were applied directly to the raw data collected by tipping-bucket rain gauges, thus avoiding the need to convert tip times to rainfall depths prior to fitting the models. One advantage of this approach was that the use of maximum likelihood methods enabled a more straightforward estimation of parameter uncertainty and comparison of sub-models of interest. Another strand of this approach employed the DSPP model for the arrivals of rain cells and attached a pulse or a cluster of pulses to each rain cell. Different mechanisms for the pattern of the pulse process were used to construct variants of this model. We present the results of these models when fitted to hourly and sub-hourly rainfall data. The results of our analysis suggest that the proposed class of stochastic models is capable of reproducing the fine-scale structure of the rainfall process, and hence provides a useful tool in hydrological modelling.
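
The following sketch illustrates the kind of process the abstract describes: a doubly stochastic Poisson process whose tip rate is modulated by a hidden two-state Markov chain (a Markov-modulated Poisson process). The generator, tip rates, and simulation horizon are assumed values for illustration only; they are not the fitted models from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed two-state hidden Markov process X(t) (e.g. a quieter and a more
# intense rainfall state) with transition rates per hour, and the bucket-tip
# rate of N(t) in each state (tips per hour). Values are illustrative only.
q = np.array([[-0.5, 0.5],
              [ 2.0, -2.0]])        # generator of X(t)
tip_rate = np.array([0.2, 30.0])    # Poisson rate of N(t) given X(t)

def simulate_mmpp(t_end):
    """Simulate bucket-tip times of a two-state Markov-modulated Poisson
    process (a simple doubly stochastic Poisson process) on [0, t_end]."""
    t, state, tips = 0.0, 0, []
    while t < t_end:
        sojourn = rng.exponential(1.0 / -q[state, state])
        seg_end = min(t + sojourn, t_end)
        n = rng.poisson(tip_rate[state] * (seg_end - t))
        tips.extend(np.sort(rng.uniform(t, seg_end, n)))
        t, state = seg_end, 1 - state          # two states: always switch
    return np.array(tips)

tips = simulate_mmpp(24.0)                     # one day of synthetic tip times (h)
print(f"{tips.size} synthetic bucket tips simulated")
```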

Keywords: fine-scale rainfall, maximum likelihood, point process, stochastic model

Procedia PDF Downloads 261
4285 Structural Design for Effective Load Balancing of the Iron Frame in Manhole Lid

Authors: Byung Il You, Ryun Oh, Gyo Woo Lee

Abstract:

A manhole is a facility that provides access for the cleaning and inspection of sewers, and its covering is called the manhole lid. The lid is typically made of cast iron. Because cast iron manhole lids are heavy, their installation and maintenance are difficult, and electrical shock and corrosion aging can cause critical problems. Manufacturing the manhole body and lid from fiber-reinforced composite material can reduce the weight considerably compared to a cast iron manhole. However, fiber reinforcement alone can hardly sustain the heavy load, so embedding an iron frame by double injection molding of the composite material has been widely proposed. Reflecting this market situation, the structural design of the iron frame for a composite manhole lid was carried out in this study. Structural analysis by computer simulation was conducted to distribute the load effectively over the iron frame. In addition, manufacturing costs were assessed by comparing the weights and the number of welding spots of the frames. Although the cross-sectional area is as low as 38% of that of the basic solid form, the maximum von Mises stress increases locally near the rim by a factor of at least about 7, and the maximum strain in the central part of the lid by about 5.5 times. The number of welding points, which is related to the manufacturing cost, increased gradually with more complicated shapes. Also, the higher the arch at the center of the lid, the better the result obtained. However, considering the economics of composite fabrication, the height of the arch at the center of the lid was set equal to the frame thickness. Additionally, in consideration of the number of welding points, the hexagonal frame was selected as the optimal shape. Acknowledgment: These are results of a study on the 'Leaders Industry-University Cooperation' project, supported by the Ministry of Education (MOE).

Keywords: manhole lid, iron frame, structural design, computer simulation

Procedia PDF Downloads 262
4284 Horizontal Stress Magnitudes Using Poroelastic Model in Upper Assam Basin, India

Authors: Jenifer Alam, Rima Chatterjee

Abstract:

The Upper Assam sedimentary basin is one of the oldest commercially producing basins of India. Being in a tectonically active zone, estimation of tectonic strain and stress magnitudes has wide application in hydrocarbon exploration and exploitation. This ENE-WSW trending shelf-slope basin encompasses the Brahmaputra valley, extending from the Mikir Hills in the southwest to the Naga foothills in the northeast. The Assam Shelf, lying between the Main Boundary Thrust (MBT) and the Naga Thrust, is comparatively free from thrust tectonics and exhibits a normal faulting mechanism. The study area is bounded by the MBT and the Main Central Thrust in the northwest. The Belt of Schuppen in the southeast, bordered by the Naga and Disang thrusts, marks the lower limit of the study area. The entire Assam basin shows low-level seismicity compared to other regions of northeast India. Pore pressure (PP), vertical stress magnitude (SV) and horizontal stress magnitudes have been estimated from two wells, N1 and T1, located in Upper Assam. N1 is located in the Assam gap below the Brahmaputra river, while T1 lies in the Belt of Schuppen. N1 penetrates geological formations from the top Alluvial through Dhekiajuli, Girujan, Tipam, Barail, Kopili, Sylhet and Langpur to the granitic basement, while T1, in the thrusted zone, crosses the Girujan Suprathrust, Tipam Suprathrust and Barail Suprathrust to reach the Naga Thrust. A normal compaction trend is drawn through shale points in both wells for estimation of PP using the conventional Eaton sonic equation with an exponent of 1.0, validated against Modular Dynamic Tester data and mud weight. The observed pore pressure gradient ranges from 10.3 MPa/km to 11.1 MPa/km. The SV gradient ranges from 22.20 to 23.80 MPa/km. Minimum and maximum horizontal principal stress (Sh and SH) magnitudes under isotropic conditions are determined using a poroelastic model. This approach determines biaxial tectonic strain utilizing the static Young's modulus, Poisson's ratio, SV, PP, leak-off test (LOT) data and SH derived from breakouts using prior information on unconfined compressive strength. Breakout-derived SH information is used for obtaining tectonic strain due to the lack of measured SH data from minifrac or hydrofracturing. Tectonic strain varies from 0.00055 to 0.00096 along the x direction and from -0.0010 to 0.00042 along the y direction. After obtaining the tectonic strains at each well, the principal horizontal stress magnitudes are calculated from the linear poroelastic model. The Sh and SH gradients in the normal faulting region are 12.5 and 16.0 MPa/km, while in the thrust-faulted region the gradients are 17.4 and 20.2 MPa/km, respectively. Model-predicted Sh and SH match well with the LOT data and the breakout-derived SH data in both wells. This study shows that the stress regime SV > SH > Sh, corresponding to normal faulting, prevails in the shelf region, while near the Naga foothills the regime changes to SH ≈ SV > Sh. Hence, this model is a reliable tool for predicting stress magnitudes from well logs under an active tectonic regime in the Upper Assam Basin.
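
A minimal sketch of the two calculations named in the abstract is given below: the Eaton sonic relation for pore pressure and the linear poroelastic expressions for the minimum and maximum horizontal stresses with biaxial tectonic strains. All numerical inputs (depth, gradients, Young's modulus, Poisson's ratio, strains, Biot coefficient) are illustrative assumptions, not the values derived for wells N1 and T1.

```python
def eaton_pore_pressure(sv, p_hydro, dt_normal, dt_observed, n=1.0):
    """Eaton sonic method: pore pressure from the deviation of the observed
    sonic transit time from the normal-compaction trend (exponent n)."""
    return sv - (sv - p_hydro) * (dt_normal / dt_observed) ** n

def poroelastic_horizontal_stresses(sv, pp, E, nu, eps_h, eps_H, alpha=1.0):
    """Linear poroelastic model with biaxial tectonic strains eps_h, eps_H."""
    base = nu / (1 - nu) * (sv - alpha * pp) + alpha * pp
    sh = base + E / (1 - nu**2) * eps_h + nu * E / (1 - nu**2) * eps_H
    sH = base + E / (1 - nu**2) * eps_H + nu * E / (1 - nu**2) * eps_h
    return sh, sH

# Illustrative numbers (not the paper's data): 3 km depth, stresses in MPa.
sv = 23.0 * 3.0            # vertical stress from an assumed 23 MPa/km gradient
pp = eaton_pore_pressure(sv, p_hydro=10.0 * 3.0, dt_normal=80.0, dt_observed=85.0)
sh, sH = poroelastic_horizontal_stresses(sv, pp, E=20e3, nu=0.25,
                                         eps_h=0.0004, eps_H=0.0009)
print(f"Pp = {pp:.1f} MPa, Sh = {sh:.1f} MPa, SH = {sH:.1f} MPa")
```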

Keywords: Eaton, strain, stress, poroelastic model

Procedia PDF Downloads 192
4283 Optimal Bayesian Control of the Proportion of Defectives in a Manufacturing Process

Authors: Viliam Makis, Farnoosh Naderkhani, Leila Jafari

Abstract:

In this paper, we present a model and an algorithm for calculating the optimal control limit, average cost, sample size, and sampling interval of an optimal Bayesian chart for controlling the proportion of defective items produced, using a semi-Markov decision process approach. The traditional p-chart has been widely used for controlling the proportion of defectives in various kinds of production processes for many years. It is well known that traditional non-Bayesian charts are not optimal, yet very few optimal Bayesian control charts have been developed in the literature, mostly considering a finite horizon. The objective of this paper is to develop a fast computational algorithm to obtain the optimal parameters of a Bayesian p-chart. The decision problem is formulated in the partially observable framework, and the developed algorithm is illustrated by a numerical example.
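
As a simplified illustration of the Bayesian bookkeeping behind such a chart, the sketch below updates the posterior probability of the out-of-control state after each sample using binomial likelihoods and flags the process when an assumed control limit is crossed. The in-control and out-of-control defective rates, sample size, and limit are placeholders; in the paper these quantities are outputs of the semi-Markov decision process optimization, which is not reproduced here.

```python
from math import comb

def posterior_out_of_control(prior, d, n, p_in=0.01, p_out=0.05):
    """Posterior probability that the process is in the out-of-control state
    after observing d defectives in a sample of size n (binomial likelihoods,
    assumed in-control and out-of-control defective rates p_in and p_out)."""
    lik_in = comb(n, d) * p_in**d * (1 - p_in)**(n - d)
    lik_out = comb(n, d) * p_out**d * (1 - p_out)**(n - d)
    return prior * lik_out / (prior * lik_out + (1 - prior) * lik_in)

# Signal when the posterior crosses an (assumed) control limit; in the paper
# the limit, sample size and sampling interval come from the optimization.
belief, limit = 0.05, 0.6
for d in [1, 2, 3, 4]:                        # defectives in successive samples
    belief = posterior_out_of_control(belief, d, n=50)
    flag = "-> investigate" if belief > limit else ""
    print(f"posterior P(out of control) = {belief:.3f} {flag}")
```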

Keywords: Bayesian control chart, semi-Markov decision process, quality control, partially observable process

Procedia PDF Downloads 303
4282 Numerical Methods versus Bjerksund and Stensland Approximations for American Options Pricing

Authors: Marasovic Branka, Aljinovic Zdravka, Poklepovic Tea

Abstract:

Numerical methods such as binomial and trinomial trees and finite difference methods can be used to price a wide range of option contracts for which no analytical solutions are known. American options are the best-known options of this kind. Besides numerical methods, American options can be valued with approximation formulas such as the Bjerksund-Stensland formulas of 1993 and 2002. When the value of an American option is approximated by the Bjerksund-Stensland formulas, the computation time is very short. The computation time of numerical methods can vary from less than one second to several minutes or even hours. However, to conduct a comparative analysis of numerical methods and the Bjerksund-Stensland formulas, we limit the computation time of the numerical methods to less than one second. Therefore, we ask the question: which method is most accurate at nearly the same computation time?
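
For reference, a minimal sketch of one of the numerical methods mentioned, the Cox-Ross-Rubinstein binomial tree with an early-exercise check, is shown below for an American put. The contract parameters are illustrative, and the code is not tuned for the sub-second time budget discussed in the abstract.

```python
import math

def american_put_crr(S0, K, T, r, sigma, steps=500):
    """Price an American put with the Cox-Ross-Rubinstein binomial tree."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)       # risk-neutral up probability
    disc = math.exp(-r * dt)

    # Option values at maturity (node j = number of up moves)
    values = [max(K - S0 * u**j * d**(steps - j), 0.0) for j in range(steps + 1)]

    # Backward induction with an early-exercise check at every node
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            exercise = K - S0 * u**j * d**(i - j)
            values[j] = max(cont, exercise)
    return values[0]

# Illustrative contract (not from the paper): S0=100, K=100, 1 year, r=5%, vol=20%
print(round(american_put_crr(100, 100, 1.0, 0.05, 0.20), 4))
```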

Keywords: Bjerksund and Stensland approximations, computational analysis, finance, options pricing, numerical methods

Procedia PDF Downloads 437
4281 Hyper Presidentialism and First Year of the Turkish Type of Presidentialism

Authors: Ahmet Ekinci

Abstract:

The new government system of Turkey can be described as hyper-presidentialism because the president becomes the arbiter of all powers. In other words, the powers to enact decrees, to appoint bureaucrats and judicial officials, and to dissolve parliament belong solely to the president. As a strong presidency fuses with a disciplined party system, concurrent elections, and a 10 percent electoral threshold, the president poses a potential danger to the separation of powers. Additionally, with regard to the presidential term, the president constitutionally holds the power to be elected for only two terms in Turkey. However, Erdoğan and his supporters believe that the 2017 constitutional amendments that changed the system of government have reset the term count. Thus, the 2017 amendments offered Erdoğan a secret opportunity to join the presidential election race for a third and even a fourth term.

Keywords: hyper-presidentialism, Turkish presidentialism, presidential decree, concurrent election, Erdogan’s term limit, Turkish government system

Procedia PDF Downloads 128
4280 The Influence of Active Breaks on the Attention/Concentration Performance in Eighth-Graders

Authors: Christian Andrä, Luisa Zimmermann, Christina Müller

Abstract:

Introduction: The positive relation between physical activity and cognition is commonly known. Relevant studies show that in everyday school life, active breaks can lead to improvements in certain abilities (e.g. attention and concentration). A beneficial effect is attributed in particular to moderate activity. It is still unclear whether active breaks are beneficial after relatively short phases of cognitive load and whether the postulated effects of activity really have an immediate impact. The objective of this study was to verify whether an active break after 18 minutes of cognitive load leads to enhanced attention/concentration performance compared with inactive breaks with voluntary mobile phone activity. Methodology: For this quasi-experimental study, 36 students [age: 14.0 (mean value) ± 0.3 (standard deviation); male/female: 21/15] of a secondary school were tested. In week 1, every student's maximum heart rate (Hfmax) was determined through maximum effort tests conducted during physical education classes. The task was to run 3 laps of 300 m with increasing subjective effort (lap 1: 60%, lap 2: 80%, lap 3: 100% of the maximum performance capacity). Furthermore, the first attention/concentration tests (D2-R) took place (pretest). The groups were matched on the basis of the pretest results. During weeks 2 and 3, crossover testing was conducted, comprising 18 minutes of cognitive preload (test for concentration performance, KLT-R), a break, and an attention/concentration test after a 2-minute transition. Different 10-minute breaks (active break: moderate physical activity at 65% Hfmax, or inactive break: mobile phone activity) took place between preloading and transition. Major findings: In general, there was no impact of the different break interventions on the concentration test results (symbols processed after physical activity: 185.2 ± 31.3 / after inactive break: 184.4 ± 31.6; errors after physical activity: 5.7 ± 6.3 / after inactive break: 7.0 ± 7.2). There was, however, a noticeable development of the values over the testing periods. Although no difference in the number of processed symbols was detected (active/inactive break: period 1: 49.3 ± 8.8/46.9 ± 9.0; period 2: 47.0 ± 7.7/47.3 ± 8.4; period 3: 45.1 ± 8.3/45.6 ± 8.0; period 4: 43.8 ± 7.8/44.6 ± 8.0), error rates decreased successively after physical activity and increased gradually after an inactive break (active/inactive break: period 1: 1.9 ± 2.4/1.2 ± 1.4; period 2: 1.7 ± 1.8/1.5 ± 2.0; period 3: 1.2 ± 1.6/1.8 ± 2.1; period 4: 0.9 ± 1.5/2.5 ± 2.6; p = .012). Conclusion: Taking into consideration only the study's overall results, the hypothesis must be dismissed. However, a more differentiated evaluation shows that the error rates decreased after active breaks and increased after inactive breaks. Obviously, the effects of the active intervention occur with a delay. The 2-minute transition (regeneration time) used in this study seems to be insufficient due to the longer adaptation time of the cardiovascular system in untrained individuals, which might initially affect concentration capacity. To use the positive effects of physical activity for teaching and learning processes, physiological characteristics must also be considered. Only then can optimum performance be ensured.

Keywords: active breaks, attention/concentration test, cognitive performance capacity, heart rate, physical activity

Procedia PDF Downloads 299
4279 Optimization of Biomass Components from Rice Husk Treated with Trichophyton Soudanense and Trichophyton Mentagrophyte and Effect of Yeast on the Bio-Ethanol Yield

Authors: Chukwuma S. Ezeonu, Ikechukwu N. E. Onwurah, Uchechukwu U. Nwodo, Chibuike S. Ubani, Chigozie M. Ejikeme

Abstract:

Trichophyton soudanense and Trichophyton mentagrophyte were isolated from the rice mill environment, cultured, and used singly and as a di-culture in the treatment of measured quantities of preheated rice husk. Under the optimized conditions studied, a carboxymethylcellulase (CMCellulase) activity of 57.61 µg/ml/min was optimal for the crude enzymes of Trichophyton mentagrophyte heat-pretreated rice husk at 50 °C and 80 °C. A duration of 120 hours (5 days) gave the highest CMCellulase activity of 75.84 µg/ml/min for the crude enzyme of Trichophyton mentagrophyte heat-pretreated rice husk, whereas a duration of 96 hours (4 days) gave a maximum activity of 58.21 µg/ml/min for the crude enzyme of Trichophyton soudanense heat-pretreated rice husk. The highest CMCellulase activities of 67.02 µg/ml/min and 69.02 µg/ml/min at pH 5 were recorded for the crude enzymes of monocultures of Trichophyton soudanense (TS) and Trichophyton mentagrophyte (TM) heat-pretreated rice husk, respectively. The biomass components showed that rice husk cooled after heating and then treated with Trichophyton mentagrophyte gave the highest cellulose yield of 44.50 ± 10.90 (% ± standard error of the mean, SEM). A maximum total lignin value of 28.90 ± 1.80 (% ± SEM) was obtained from pre-heated rice husk treated with the di-culture of Trichophyton soudanense and Trichophyton mentagrophyte (TS+TM). The hemicellulose content of 30.50 ± 2.12 (% ± SEM) was obtained from pre-heated rice husk treated with Trichophyton soudanense (TS), the lignin value of 28.90 ± 1.80 from pre-heated rice husk treated with the di-culture (TS+TM), and the carbohydrate content of 16.79 ± 9.14 (% ± SEM), together with reducing and non-reducing sugar values of 2.66 ± 0.45 and 14.13 ± 8.69 (% ± SEM), from pre-heated rice husk treated with Trichophyton mentagrophyte (TM). All the values listed above were the highest values obtained from each rice husk treatment. The pre-heated rice husk treated with Trichophyton mentagrophyte (TM) and fermented with palm wine yeast gave the highest bio-ethanol yield of 11.11 ± 0.21 (% ± standard deviation).

Keywords: Trichophyton soudanense, Trichophyton mentagrophyte, biomass, bioethanol, rice husk

Procedia PDF Downloads 664
4278 Retrofitting of Asymmetric Steel Structure Equipped with Tuned Liquid Column Dampers by Nonlinear Finite Element Modeling

Authors: A. Akbarpour, M. R. Adib Ramezani, M. Zhian, N. Ghorbani Amirabad

Abstract:

One way to improve the performance of structures against earthquakes is passive control, which requires no external power source. In this research, tuned liquid column dampers, which are among the systems capable of transferring energy between various modes of vibration, are used. For the first time, a liquid column damper for structural vibration control is presented. After modeling the structure in building design software, performing static and dynamic analyses, and obtaining the necessary parameters for the design of the tuned liquid column damper, the whole structure is analyzed in finite element software. The tuned liquid column dampers are installed on the structure, and nonlinear time-history analysis is performed for two cases: with and without dampers. Finally, the seismic behavior of the building in the two cases is examined. In this study, nonlinear time-history analysis was performed on a twelve-story steel structure equipped with dampers and subjected to earthquake records including Loma Prieta, Northridge, Imperial Valley, Petrolia, and Landers. The comparison between the two cases shows that the dampers reduced the lateral displacement and acceleration of the stories by an average of 10%. Roof displacement and acceleration were reduced by 5% and 12%, respectively. Due to the structural asymmetry in plan, the maximum displacements at the perimeter points of the stories as well as the twisting were studied. The results show that the dampers lead to a 10% reduction in the maximum response at the perimeter points of the stories. At the same time, placing the dampers reduced the twisting in the floor plan of the structure. The base shear of the structure under the different earthquakes was also reduced by an average of 6%.
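
As a small aside on how a tuned liquid column damper is sized, the sketch below uses the classical relation between the liquid-column length and the damper's natural frequency, omega = sqrt(2g/L), to tune the damper to an assumed fundamental frequency of the frame. The target frequency is an assumption for illustration, not a value from the paper's model.

```python
import math

g = 9.81  # m/s^2

def tlcd_length_for_frequency(f_target_hz):
    """Liquid-column length L that tunes a TLCD to a target frequency,
    from the classical relation omega = sqrt(2 * g / L)."""
    omega = 2 * math.pi * f_target_hz
    return 2 * g / omega**2

# Illustrative: tune to an assumed first lateral mode of a twelve-storey frame.
f_structure = 0.8                       # Hz, assumed fundamental frequency
L = tlcd_length_for_frequency(f_structure)
print(f"required liquid column length = {L:.2f} m")
```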

Keywords: retrofitting, passive control, tuned liquid column damper, finite element analysis

Procedia PDF Downloads 398
4277 Design and Analysis of Deep Excavations

Authors: Barham J. Nareeman, Ilham I. Mohammed

Abstract:

Excavations in developed urban areas are generally supported by deep excavation walls such as diaphragm walls, bored piles, soldier piles and sheet piles. In some cases, these walls may be braced by internal braces or tie-back anchors. Tie-back anchors are by far the predominant method of wall support; the large working space inside the excavation provided by a tie-back anchor system is a significant construction advantage. This paper aims to analyze a deep excavation bracing system consisting of a contiguous pile wall braced by pre-stressed tie-back anchors, which is part of a large residential building project located in Gaziantep province, Turkey. The contiguous pile wall will be constructed with a length of 270 m and consists of 285 piles, each having a diameter of 80 cm and a center-to-center spacing of 95 cm. The deformation analysis was carried out with the finite element analysis tool PLAXIS. In the analysis, the beam element method together with an elastic-perfectly plastic soil model and the Hardening Soil model was used to design the contiguous pile wall, the tie-back anchor system, and the soil. The two soil clusters, a limestone and a filled soil, were modelled with both the Hardening Soil and Mohr-Coulomb models. According to the basic design, both soil clusters are modelled under drained conditions. The simulation results show that the maximum horizontal movement of the walls and the maximum settlement of the ground are consistent with 300 individual case histories, ranging between 1.2 mm and 2.3 mm for the walls and between 6.5 mm and 15 mm for the settlements. It was concluded that a tied-back contiguous pile wall can be satisfactorily modelled using the Hardening Soil model.

Keywords: deep excavation, finite element, pre-stressed tie back anchors, contiguous pile wall, PLAXIS, horizontal deflection, ground settlement

Procedia PDF Downloads 240
4276 Experimental Investigation on Performance of Beam Column Frames with Column Kickers

Authors: Saiada Fuadi Fancy, Fahim Ahmed, Shofiq Ahmed, Raquib Ahsan

Abstract:

The worldwide use of reinforced concrete construction stems from the wide availability of reinforcing steel as well as concrete ingredients. However, concrete construction requires a certain level of technology, expertise, and workmanship, particularly in the field during construction. As a supporting technology for concrete column or wall construction, a kicker is cast as part of the slab or foundation to provide a convenient starting point for the wall or column, ensuring integrity at this important junction. For that reason, a comprehensive study was carried out here to investigate the behavior of reinforced concrete frames with different kicker parameters. To achieve this objective, six half-scale specimens of a portal reinforced concrete frame with kickers and one portal frame without a kicker were constructed according to common industry practice and subjected to cyclic incremental horizontal loading with a sustained gravity load. In this study, the experimental data, obtained in four deflection-controlled cycles, were used to evaluate the behavior of the kickers. Load-displacement characteristics were obtained; maximum loads and deflections were measured and assessed. Finally, the test results of frames constructed with three different kicker thicknesses were compared with those of the kickerless frame. Similar crack patterns were observed for all the specimens. From this investigation, specimens with a kicker thickness of 3″ showed better results than specimens with a kicker thickness of 1.5″, as indicated by maximum load, stiffness, initiation of the first crack, and residual displacement. Despite its better performance, it could not be firmly concluded that the 4.5″ kicker thickness is the most appropriate one, because separation of the dial gauge was needed during the test of that specimen. Finally, in comparison with the kicker specimens, the performance of the kickerless specimen was observed to be relatively better.

Keywords: crack, cyclic, kicker, load-displacement

Procedia PDF Downloads 302
4275 Orbit Determination from Two Position Vectors Using Finite Difference Method

Authors: Akhilesh Kumar, Sathyanarayan G., Nirmala S.

Abstract:

An unusual approach is developed to determine the orbits of satellites/space objects. Orbit determination is treated as a boundary value problem and solved using the finite difference method (FDM). Only the positions of the satellites/space objects are known at two end times, which are taken as the boundary conditions. The finite difference technique is used to calculate the orbit between the end times. In this approach, the governing equation is defined as the satellite's equation of motion with a perturbed acceleration. Using the finite difference method, the governing equations and boundary conditions are discretized. The resulting system of algebraic equations is solved using the Tri-Diagonal Matrix Algorithm (TDMA) until convergence is achieved. The methodology has been tested and evaluated using all GPS satellite orbits from the National Geospatial-Intelligence Agency (NGA) precise product for day of year (DOY) 125, 2023. Towards this, twelve two-hour sets have been taken into consideration, and only the positions at the end times of each of the twelve sets are used as boundary conditions. This algorithm is applied to all GPS satellites. The results achieved using the FDM were compared with the NGA precise orbits. The maximum RSS error for position is 0.48 [m] and for velocity is 0.43 [mm/sec]. The present algorithm was also applied to the IRNSS satellites for DOY 220, 2023. The maximum RSS error for position is 0.49 [m], and for velocity is 0.28 [mm/sec]. Next, a simulation was done for a highly elliptical orbit for DOY 63, 2023, for a duration of 6 hours. The RSS of the difference in position is 0.92 [m] and in velocity is 1.58 [mm/sec] for orbital speeds of more than 5 km/sec, whereas the RSS of the difference in position is 0.13 [m] and in velocity is 0.12 [mm/sec] for orbital speeds below 5 km/sec. The results show that the newly created method is reliable and accurate. Further applications of the developed methodology include missile and spacecraft targeting, orbit design (mission planning), space rendezvous and interception, space debris correlation, and navigation solutions.
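
A minimal sketch of the Tri-Diagonal Matrix Algorithm used to solve the discretized system is given below, verified on a toy two-point boundary value problem (not an orbit). The grid size and test equation are assumptions for illustration only.

```python
import numpy as np

def tdma(a, b, c, d):
    """Thomas algorithm for a tridiagonal system.
    a: sub-diagonal (len n, a[0] unused), b: diagonal (len n),
    c: super-diagonal (len n, c[-1] unused), d: right-hand side (len n)."""
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Toy check: central-difference discretisation of x''(t) + x(t) = 0 on [0, pi/2]
# with boundary values x(0) = 0 and x(pi/2) = 1 (exact solution x = sin t).
n = 50
h = (np.pi / 2) / (n + 1)
t = np.linspace(h, np.pi / 2 - h, n)
a = np.ones(n)
b = (h**2 - 2) * np.ones(n)
c = np.ones(n)
d = np.zeros(n)
d[-1] = -1.0                              # move the known boundary value to RHS
x = tdma(a, b, c, d)
print("max abs error vs sin(t):", np.max(np.abs(x - np.sin(t))))
```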

Keywords: finite difference method, grid generation, NavIC system, orbit perturbation

Procedia PDF Downloads 69
4274 A Turn-on Fluorescent Sensor for Pb(II)

Authors: Ece Kök Yetimoğlu, Soner Çubuk, Neşe Taşci, M. Vezir Kahraman

Abstract:

Lead(II) is one of the most hazardous environmental pollutants in the world due to its high toxicity and non-biodegradability. Lead exposure poses severe risks to human health, such as brain damage, convulsions, kidney damage, and even death. To determine lead(II) in environmental or biological samples, scientists use atomic absorption spectrometry (AAS), inductively coupled plasma mass spectrometry (ICP-MS), fluorescence spectrometry, and electrochemical techniques. Among these, fluorescence spectrometry and fluorescent chemical sensors have attracted considerable attention because of their good selectivity and high sensitivity. Fluorescent polymers usually contain covalently bonded fluorophores. In this study, an imidazole-based, UV-cured polymeric film was prepared and designed to act as a fluorescence chemosensor for lead(II) analysis. The optimum conditions, such as the influence of pH and time on the fluorescence intensity of the sensor, were also investigated. The sensor was highly sensitive, with a detection limit as low as 1.87 × 10⁻⁸ mol L⁻¹, and it was successful in the determination of Pb(II) in water samples.

Keywords: fluorescence, lead(II), photopolymerization, polymeric sensor

Procedia PDF Downloads 658
4273 Reliability Analysis of Variable Stiffness Composite Laminate Structures

Authors: A. Sohouli, A. Suleman

Abstract:

This study focuses on the reliability analysis of variable stiffness composite laminate structures to investigate the potential structural improvement compared to conventional (straight-fiber) composite laminate structures. A computational framework was developed that consists of a deterministic design step and a reliability analysis. The optimization part is Discrete Material Optimization (DMO), and the reliability of the structure is computed by Monte Carlo Simulation (MCS) after applying the Stochastic Response Surface Method (SRSM). The design driver in the deterministic optimization is maximum stiffness, while the optimization method incorporates certain manufacturing constraints to attain industrial relevance. These manufacturing constraints are that the change of orientation between adjacent patches cannot be too large and that the maximum number of successive plies of a particular fiber orientation should not be too high. Variable stiffness composites may be manufactured by Automated Fiber Placement (AFP) machines, which provide consistent quality with good production rates. However, laps and gaps are the most important challenges of steering fibers, and they affect the performance of the structures. In this study, the optimal curved fiber paths at each layer of the composites are designed in the first step by DMO, and then the reliability analysis is applied to investigate the sensitivity of the structure to different standard deviations compared to the straight-fiber-angle composites. The random variables are the material properties and the loads on the structures. The results show that the variable stiffness composite laminate structures are much more reliable, even for high standard deviations of the material properties, than the conventional composite laminate structures. The reason is that variable stiffness composite laminates allow tailoring of the stiffness and provide the possibility of adjusting the stress and strain distributions favorably in the structures.
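
The last two steps of such a framework can be sketched as follows: a quadratic response surface is fitted to a handful of evaluations of an expensive model, and Monte Carlo simulation is then run on the cheap surrogate to estimate the failure probability. The stand-in "finite element" function, the random-variable distributions, and the limit state are invented for illustration and do not correspond to the laminates studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the expensive finite element model: a load-carrying margin as a
# function of a stiffness-like property E and a load P. Purely illustrative.
def fe_model(E, P):
    return 2.6 * E - P - 5e-4 * P**2

# 1) Fit a simple quadratic response surface (standing in for the SRSM step)
#    on a small design of experiments, so Monte Carlo never calls the FE model.
E_doe = rng.normal(70.0, 5.0, 40)
P_doe = rng.normal(120.0, 20.0, 40)
y_doe = fe_model(E_doe, P_doe)
X = np.column_stack([np.ones_like(E_doe), E_doe, P_doe,
                     E_doe**2, P_doe**2, E_doe * P_doe])
coef, *_ = np.linalg.lstsq(X, y_doe, rcond=None)

def surrogate(E, P):
    return (coef[0] + coef[1] * E + coef[2] * P
            + coef[3] * E**2 + coef[4] * P**2 + coef[5] * E * P)

# 2) Monte Carlo simulation on the cheap surrogate: failure when the margin < 0.
N = 200_000
E = rng.normal(70.0, 5.0, N)     # material property scatter (assumed)
P = rng.normal(120.0, 20.0, N)   # load scatter (assumed)
print("estimated failure probability:", np.mean(surrogate(E, P) < 0.0))
```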

Keywords: material optimization, Monte Carlo simulation, reliability analysis, response surface method, variable stiffness composite structures

Procedia PDF Downloads 502
4272 Capex Planning with and without Additional Spectrum

Authors: Koirala Abarodh, Maghaiya Ujjwal, Guragain Phani Raj

Abstract:

This analysis focuses on defining a spectrum evaluation model for telecom operators in terms of total cost of ownership (TCO). A quantitative, case-specific research methodology was used to derive the results. Specific input parameters, such as target user experience, year-on-year traffic growth, the capacity-site limit per year, target additional spectrum type, bandwidth, spectral efficiency, and UE penetration, have been used in the spectrum evaluation process, and the desired outputs in terms of the number of sites, capex in USD, and required spectrum bandwidth have been calculated. Furthermore, this study compares the capex investment for the target growth with and without additional spectrum. As a result, the combination of additional spectrum in the 700 and 2600 MHz bands has a better evaluation in terms of TCO and performance. We recommend using these bands for expansion rather than expanding within the current 1800 and 2100 MHz bands.
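
The sort of arithmetic such a TCO comparison rests on can be sketched as below: busy-hour traffic is divided by an assumed per-site throughput (bandwidth times spectral efficiency, de-rated to a busy-hour share) to get a site count, with and without the extra carrier. Every figure (traffic, bandwidths, efficiency, busy-hour share, capex per site) is a placeholder assumption, not an input or result from the study.

```python
import math

def sites_needed(busy_hour_gb, bw_mhz, spec_eff_bps_hz, busy_hour_share=0.4):
    """Rough count of capacity sites needed to carry busy-hour traffic (GB),
    given carrier bandwidth, average spectral efficiency and the share of a
    site's theoretical throughput usable at the busy hour. Illustrative only."""
    site_gb_per_hour = bw_mhz * 1e6 * spec_eff_bps_hz * 3600 / 8 / 1e9
    return math.ceil(busy_hour_gb / (site_gb_per_hour * busy_hour_share))

traffic = 50_000            # assumed network busy-hour traffic, GB
capex_per_site = 60_000     # assumed USD per additional capacity site

base  = sites_needed(traffic, bw_mhz=40, spec_eff_bps_hz=1.8)   # current bands
extra = sites_needed(traffic, bw_mhz=70, spec_eff_bps_hz=1.8)   # + new carrier
print("sites without / with additional spectrum:", base, extra)
print("capex avoided (USD):", (base - extra) * capex_per_site)
```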

Keywords: spectrum, capex planning, case study methodology, TCO

Procedia PDF Downloads 37
4271 Understanding the Dynamics of Linker Histone Using Mathematical Modeling and FRAP Experiments

Authors: G. Carrero, C. Contreras, M. J. Hendzel

Abstract:

Linker histones, or histones H1, are highly mobile nuclear proteins that regulate the organization of chromatin and limit DNA accessibility by binding to the chromatin structure (DNA and associated proteins). It is known that this binding process is driven by both slow (strong binding) and rapid (weak binding) interactions. However, the exact binding mechanism has not been fully described. Moreover, the existing models account for only one type of bound population and do not distinguish explicitly between the weakly and strongly bound proteins. Thus, we propose different systems of reaction-diffusion equations to describe explicitly the rapid and slow interactions during a FRAP (Fluorescence Recovery After Photobleaching) experiment. We perform a model comparison analysis to characterize the binding mechanism of histone H1 and provide new, meaningful biophysical information on the kinetics of histone H1.
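
A much-simplified, reaction-dominant version of such a model can be sketched as below: the bleached weakly and strongly bound populations recover at their own dissociation rates, while the free pool is treated as recovering instantly. This is only an illustrative limit of the full reaction-diffusion systems proposed in the paper, and the bound fractions and off-rates are assumed values, not fitted ones.

```python
import numpy as np

def frap_two_state(t, b_weak, b_strong, koff_weak, koff_strong):
    """Reaction-dominant FRAP recovery with a weakly and a strongly bound
    population: each bound fraction recovers at its own dissociation rate,
    the free pool is assumed to recover effectively instantly."""
    return 1.0 - b_weak * np.exp(-koff_weak * t) - b_strong * np.exp(-koff_strong * t)

# Illustrative parameters (not fitted values from the paper):
t = np.linspace(0, 300, 601)               # seconds after the bleach
recovery = frap_two_state(t, b_weak=0.4, b_strong=0.3,
                          koff_weak=0.5, koff_strong=0.01)
half = t[np.searchsorted(recovery, 0.5 * (recovery[0] + 1.0))]
print(f"half-recovery time = {half:.1f} s")
```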

Keywords: FRAP (Fluorescence Recovery After Photobleaching), histone H1, histone H1 binding kinetics, linker histone, reaction-diffusion equation

Procedia PDF Downloads 419
4270 Development of a Non-Dispersive Infrared Multi Gas Analyzer for a TMS

Authors: T. V. Dinh, I. Y. Choi, J. W. Ahn, Y. H. Oh, G. Bo, J. Y. Lee, J. C. Kim

Abstract:

A Non-Dispersive Infrared (NDIR) multi-gas analyzer has been developed to monitor the emission of carbon monoxide (CO) and sulfur dioxide (SO2) from various industries. The NDIR technique for gas measurement is based on wavelength-specific absorption in the infrared spectrum as a way to detect particular gases. NDIR analyzers are widely applied in Tele-Monitoring Systems (TMS). The advantages of the NDIR analyzer are its low energy consumption and cost compared with other spectroscopic methods. However, zero/span drift and interference are urgent issues to be solved. In this work, a multi-pathway technique based on an optical White cell was employed to improve the sensitivity of the analyzer. A pyroelectric detector was used to detect the infrared radiation. The analytical range of the analyzer was 0 ~ 200 ppm. The instrument response time was < 2 min. The detection limits of CO and SO2 were < 4 ppm and < 6 ppm, respectively. The zero and span drift over 24 h was less than 3%. The linearity of the analyzer was within 2.5% of the reference values. The precision and accuracy of both the CO and SO2 channels were < 2.5% relative standard deviation. In general, the analyzer performed well. However, the detection limit and 24 h drift should be improved to make it a more competitive instrument.
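
The absorption relation NDIR instruments rest on is the Beer-Lambert law; the sketch below inverts a measured transmittance to a concentration and shows why a long multi-pass (White cell) path improves sensitivity. The absorptivity and signal values are placeholders, not the analyzer's calibration.

```python
import math

def concentration_from_transmittance(I, I0, epsilon, path_length_cm):
    """Invert the Beer-Lambert law A = log10(I0/I) = epsilon * c * L
    to get concentration c from a measured detector signal."""
    absorbance = math.log10(I0 / I)
    return absorbance / (epsilon * path_length_cm)

# Illustrative: the same fractional dip in detector signal corresponds to a
# smaller concentration when the optical path is longer, which is the point
# of the multi-pathway (White cell) arrangement.
epsilon = 2.0e-3        # assumed effective absorptivity, 1/(ppm*cm)
for path in (10, 100):  # cm
    c = concentration_from_transmittance(I=0.95, I0=1.00, epsilon=epsilon,
                                         path_length_cm=path)
    print(f"path {path:>3} cm: a 5 % dip in signal corresponds to {c:.2f} ppm")
```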

Keywords: analyzer, CEMS, monitoring, NDIR, TMS

Procedia PDF Downloads 239
4269 Proposals for the Practical Implementation of the Biological Monitoring of Occupational Exposure for Antineoplastic Drugs

Authors: Mireille Canal-Raffin, Nadege Lepage, Antoine Villa

Abstract:

Context: Most antineoplastic drugs (AD) have potential carcinogenic, mutagenic and/or reprotoxic effects and are classified as 'hazardous to handle' by the National Institute for Occupational Safety and Health. Their handling increases with the increase in cancer incidence. AD contamination of workers who handle AD and/or care for treated patients is, therefore, a major concern for occupational physicians. As part of the process of evaluation and prevention of chemical risks for professionals exposed to AD, Biological Monitoring of Occupational Exposure (BMOE) is the tool of choice. BMOE allows the identification of at-risk groups, the monitoring of exposures, the assessment of poorly controlled exposures and of the effectiveness and/or wearing of protective equipment, and the documentation of occupational AD exposure incidents. This work aims to make proposals for the practical implementation of BMOE for AD. The proposed strategy is based on the French good practice recommendations for BMOE, issued in 2016 by three French learned societies. These recommendations have been adapted to occupational exposure to AD. Results: AD contamination of professionals is a sensitive topic, and BMOE requires the establishment of a working group and information meetings within the concerned health establishment to explain the approach, objectives, and purpose of monitoring. Occupational exposure to AD is often discontinuous, and two steps are essential upstream: a study of the nature and frequency of the AD used, to select the Biological Exposure Index (BEI) or indices most representative of the activity; and a study of the AD pathway in the institution, to target exposed professionals and to adapt the medico-professional information sheet (MPIS). The MPIS is essential to gather the elements necessary for the interpretation of results. Currently, 28 specific urinary BEIs of AD exposure have been identified, and the corresponding analytical methods have been published: 11 BEIs are AD metabolites, and 17 are the ADs themselves. Interpretation of results is performed by groups of homogeneous exposure (GHE). There is no threshold biological limit value for interpretation. Contamination is established when an AD is detected at a trace concentration or at a urine concentration equal to or greater than the limit of quantification (LOQ) of the analytical method. Results can only be compared to the LOQs of these methods, which must be as low as possible. For 8 of the 17 AD BEIs, the LOQ is very low, with values between 0.01 and 0.05 µg/l. For the other BEIs, the LOQ values are higher, between 0.1 and 30 µg/l. The restitution of results by occupational physicians to workers should be both individual and collective. Given the dangerousness of AD, corrective measures must be put in place in cases of worker contamination. In addition, the implementation of prevention and awareness measures for those exposed to this risk is a priority. Conclusion: This work is a help for occupational physicians engaging in a process of prevention of occupational risks related to AD exposure. With the effective analytical tools currently available, BMOE for AD should now be possible to develop in routine occupational physician practice. BMOE may be complemented by surface sampling to determine workers' contamination modalities.

Keywords: antineoplastic drugs, urine, occupational exposure, biological monitoring of occupational exposure, biological exposure indice

Procedia PDF Downloads 118
4268 Accuracy of a 3D-Printed Polymer Model for Producing Casting Mold

Authors: Ariangelo Hauer Dias Filho, Gustavo Antoniácomi de Carvalho, Benjamim de Melo Carvalho

Abstract:

The work's purpose was to evaluate the possibility of manufacturing casting tools using Fused Filament Fabrication, a 3D printing technique, without any post-processing of the printed part. A Taguchi orthogonal array was used to evaluate the influence of extrusion temperature, bed temperature, layer height, and infill on the dimensional accuracy of a 3D-printed polymer model. A Zeiss T-SCAN CS 3D scanner was used for dimensional evaluation of the printed parts within the limit of ±0.2 mm. The mold capabilities were tested with the printed model to check how it would interact with the green sand. With small adjustments to the 3D model, it was possible to produce rapid tooling for iron casting without the need for post-processing. The results are important for reducing the time and cost of developing such tools.

Keywords: additive manufacturing, Taguchi method, rapid tooling, fused filament fabrication, casting mold

Procedia PDF Downloads 126
4267 HCl-Based Hydrometallurgical Recycling Route for Metal Recovery from Li-Ion Battery Wastes

Authors: Claudia Schier, Arvid Biallas, Bernd Friedrich

Abstract:

The demand for Li-ion batteries is increasing compared to other battery systems owing to their benefits, such as fast charging, high energy density, low weight, a large operating temperature range, and long service life. These characteristics are important not only for battery-operated portable devices but also in the growing field of electromobility, where high-performance energy storage systems in the form of batteries are in high demand. Due to the sharply rising production, there is tremendous interest in recycling spent Li-ion batteries in a closed-loop manner, owing to their high content of valuable metals such as cobalt, manganese, and lithium, as well as the increasing demand for these scarce metals. Currently, there are only a few industrial processes using hydrometallurgical methods to recover valuable metals from Li-ion battery waste. In this study, the extraction of valuable metals from spent Li-ion batteries is investigated by pretreating and subsequently leaching battery wastes and comparing different precipitation methods. For the extraction of lithium, cobalt, and other valuable metals, pelletized battery waste with an initial Li content of 2.24 wt.% and a cobalt content of 22 wt.% is used. Hydrochloric acid at 4 mol/L is applied at a 1:50 solid-to-liquid (s/l) ratio to generate a pregnant leach solution for the subsequent precipitation steps. In order to obtain pure precipitates, two different pathways (pathway 1 and pathway 2) are investigated, which differ from each other with regard to the precipitation steps carried out. While lithium carbonate recovery is the final process step in pathway 1, pathway 2 requires a preliminary removal of lithium from the process. The aim is to evaluate both processes in terms of the purity and yield of the products obtained. ICP-OES is used to determine the chemical content of the leach liquor as well as of the solid residue.

Keywords: hydrochloric acid, hydrometallurgy, Li-ion-batteries, metal recovery

Procedia PDF Downloads 153
4266 Bringing Ethics to a Violent System

Authors: Zeynep Selin Acar

Abstract:

In the international system, there has always been a cycle of violence, war, and peace. Over time, following Christian and later Just War theorists, international relations theorists have tried to limit violence and war. As pieces of international law, the Peace of Augsburg, the Kellogg-Briand Pact, the League of Nations Covenant, and the UN Charter were, and still are, not effective in preventing war. Moreover, in order to find a way around these rules, it is believed that a new excuse, humanitarian intervention, started to be used instead of violence or war. However, it has neither a legal nor a universally accepted framework. As a result, it is open to manipulation by states. In order to prevent this, the Responsibility to Protect (RtoP), which gives a state the responsibility to protect its citizens against violence, was created. Additionally, RtoP transfers this responsibility to a regional or international group of states when a state itself is the origin of the violence. In light of this, this paper analyzes RtoP as an ethical approach to war and peace studies, because it positions other states as guardians and caretakers of people who do not belong to them or share any ties with them.

Keywords: ethics, humanitarian intervention, responsibility to protect, UN charter

Procedia PDF Downloads 307
4265 Pharmacokinetic Modeling of Valsartan in Dog following a Single Oral Administration

Authors: In-Hwan Baek

Abstract:

Valsartan is a potent and highly selective antagonist of the angiotensin II type 1 receptor and is widely used for the treatment of hypertension. The aim of this study was to investigate the pharmacokinetic properties of valsartan in dogs following oral administration of a single dose, using quantitative modeling approaches. Forty beagle dogs were randomly divided into two groups. Group A (n=20) was administered a single oral dose of valsartan 80 mg (Diovan® 80 mg), and group B (n=20) was administered a single oral dose of valsartan 160 mg (Diovan® 160 mg) in the morning after an overnight fast. Blood samples were collected into heparinized tubes before and at 0.5, 1, 1.5, 2, 2.5, 3, 4, 6, 8, 12 and 24 h following oral administration. The plasma concentrations of valsartan were determined using LC-MS/MS. Non-compartmental pharmacokinetic analyses were performed using WinNonlin Standard Edition software, and modeling approaches were performed using maximum-likelihood estimation via the expectation maximization (MLEM) algorithm with sampling, using ADAPT 5 software. After a single dose of valsartan 80 mg, the mean maximum concentration (Cmax) was 2.68 ± 1.17 μg/mL at 1.83 ± 1.27 h. The area under the plasma concentration-versus-time curve from time zero to the last measurable concentration (AUC24h) was 13.21 ± 6.88 μg·h/mL. After dosing with valsartan 160 mg, the mean Cmax was 4.13 ± 1.49 μg/mL at 1.80 ± 1.53 h, and the AUC24h was 26.02 ± 12.07 μg·h/mL. The Cmax and AUC values increased in proportion to the increment in the valsartan dose, while the pharmacokinetic parameters of elimination rate constant, half-life, apparent total clearance, and apparent volume of distribution were not significantly different between the doses. Valsartan pharmacokinetics fit a one-compartment model with first-order absorption and elimination following a single dose of valsartan 80 mg or 160 mg. In addition, high inter-individual variability was identified in the absorption rate constant. In conclusion, valsartan displays dose-dependent pharmacokinetics in dogs, and subsequent quantitative modeling approaches provided detailed pharmacokinetic information on valsartan. The current findings provide useful information in dogs that will aid the future development of improved formulations or fixed-dose combinations.
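
A minimal sketch of the model the data were fitted to, a one-compartment model with first-order absorption and elimination (the Bateman equation), is shown below together with a trapezoidal AUC as used in non-compartmental analysis. The dose, bioavailability, volume, and rate constants are illustrative assumptions, not the fitted dog parameters.

```python
import numpy as np

def one_compartment_oral(t, dose_mg, F, V_L, ka, ke):
    """Plasma concentration (ug/mL) for a one-compartment model with
    first-order absorption and elimination (Bateman equation)."""
    return (F * dose_mg / V_L) * ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))

# Illustrative parameters only (not the fitted values from the study).
t = np.linspace(0, 24, 241)                       # hours
c = one_compartment_oral(t, dose_mg=80, F=0.25, V_L=6.0, ka=1.2, ke=0.35)

cmax, tmax = c.max(), t[c.argmax()]
auc = np.sum((c[1:] + c[:-1]) / 2 * np.diff(t))   # trapezoidal AUC, as in NCA
print(f"Cmax = {cmax:.2f} ug/mL at {tmax:.1f} h, AUC0-24 = {auc:.1f} ug*h/mL")
```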

Keywords: dose-dependent, modeling, pharmacokinetics, valsartan

Procedia PDF Downloads 283
4264 FESA: Fuzzy-Controlled Energy-Efficient Selective Allocation and Reallocation of Tasks Among Mobile Robots

Authors: Anuradha Banerjee

Abstract:

Energy-aware operation is one of the visionary goals in the area of robotics because the operability of robots greatly depends on their residual energy. In practice, the tasks allocated to robots carry different priorities, and often an upper time limit is imposed within which the task needs to be completed. If a robot is unable to complete a particular task given to it, the task is reallocated to some other robot. The collection of robots is controlled by a Central Monitoring Unit (CMU). Selection of the new robot is performed by a fuzzy controller called the Task Reallocator (TRAC). It accepts parameters such as the residual energy of the robots, the possibility that the task will be successfully completed by the new robot within the stipulated time, and the distance of the new robot (to which the task is reallocated) from the old one (where the task was being executed). The proposed methodology increases the probability of completing globally assigned tasks and saves a large amount of energy across the collection of robots.

Keywords: energy-efficiency, fuzzy-controller, priority, reallocation, task

Procedia PDF Downloads 298
4263 2.4 GHz 0.13 µm Multi-Biased Cascode Power Amplifier for ISM Band Wireless Applications

Authors: Udayan Patankar, Shashwati Bhagat, Vilas Nitneware, Ants Koel

Abstract:

An ISM band power amplifier is a type of electronic amplifier used to convert a low-power radio-frequency signal into a larger signal of significant power, typically for driving the antenna of a transmitter. Drastic changes across telecommunication generations lead to requirements for improvement. Rapid changes in communication have led to the wide implementation of WLAN technology for its excellent characteristics, such as high transmission speed, long communication distance, and high reliability. Many applications, such as WLAN, Bluetooth, and ZigBee, have evolved in the 2.4 GHz to 5 GHz ISM band, in which the power amplifier (PA) is a key building block of RF transmitters. Many manufacturing processes are available to produce a power amplifier with the desired output power, but their major problem is the power consumed for proper operation, as many of them are fabricated in GaN HEMT or BiCMOS processes. In this paper, we present a CMOS-based two-stage cascode power amplifier design operating in the 2.4 GHz ISM band. To lower the costs and allow full integration of a complete System-on-Chip (SoC), we chose 0.13 µm low-power CMOS technology for the design. While designing a power amplifier, it is a real challenge to achieve high power efficiency with minimum resources. This design showcases the multi-biased cascode methodology to implement a two-stage CMOS power amplifier using the ADS and LTspice simulation tools. The main supply is a maximum of 2.4 V, which is internally distributed to the different biasing points, VB(driving) and VB(driven), as required for the distinct stages of the two-stage RF power amplifier. The design shows a maximum power-added efficiency of about 70.195%, whereas the power-added efficiency calculated at the 1 dB compression point is 44.669%. A biased MOSFET is used to reduce the total DC current, as this circuit is designed for various wireless applications in the 2.4 GHz ISM band.

Keywords: RFIC, PAE, RF CMOS, impedance matching

Procedia PDF Downloads 208
4262 Synthetic Daily Flow Duration Curves for the Çoruh River Basin, Turkey

Authors: Ibrahim Can, Fatih Tosunoğlu

Abstract:

The flow duration curve (FDC) is an informative method that represents the properties of the flow regime of a river basin. Therefore, the FDC is widely used for water resource projects such as hydropower, water supply, irrigation, and water quality management. The primary purpose of this study is to obtain synthetic daily flow duration curves for the Çoruh Basin, Turkey. To this end, we first developed univariate autoregressive moving average (ARMA) models for the daily flows of 9 stations located in the Çoruh basin, and these models were then used to generate 100 synthetic flow series, each having the same length as the historical series. Secondly, the flow duration curve of each synthetic series was drawn, and the flow values exceeded 10, 50 and 95% of the time, together with the 95% confidence limits of these flows, were calculated. As a result, the flood, mean, and low flow potential of the Çoruh basin is comprehensively represented.
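
A minimal sketch of the generation step is given below: an assumed ARMA(1,1) model for (log-transformed) daily flows is simulated 100 times, and the flows exceeded 10, 50, and 95% of the time are read off each synthetic series together with an empirical 95% band. The ARMA coefficients, noise level, and back-transformation are placeholders; in the study they would come from fitting each station's historical record.

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed ARMA(1,1) model for standardised log-daily flows; in practice phi,
# theta and sigma would be estimated from each station's historical record.
phi, theta, sigma = 0.85, 0.30, 0.40
n_days, n_series = 3650, 100

def simulate_arma11(n):
    x, e_prev = np.zeros(n), 0.0
    for i in range(1, n):
        e = rng.normal(0.0, sigma)
        x[i] = phi * x[i - 1] + e + theta * e_prev
        e_prev = e
    return x

q_exceeded = {10: [], 50: [], 95: []}
for _ in range(n_series):
    flow = np.exp(3.0 + simulate_arma11(n_days))     # back-transform to m^3/s
    for p in q_exceeded:                             # flow exceeded p % of the time
        q_exceeded[p].append(np.percentile(flow, 100 - p))

for p, vals in q_exceeded.items():
    lo, hi = np.percentile(vals, [2.5, 97.5])        # ~95 % band over 100 series
    print(f"Q{p}: median {np.median(vals):.1f} m3/s, band [{lo:.1f}, {hi:.1f}]")
```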

Keywords: ARMA models, Çoruh basin, flow duration curve, Turkey

Procedia PDF Downloads 389
4261 The Role of Emotion in Attention Allocation

Authors: Michaela Porubanova

Abstract:

In this exploratory study, examining the effects of emotional significance on change detection using the flicker paradigm, three different categories of scenes (neutral, positive, and negative) were randomly presented in three different blocks. We hypothesized that, because of their different effects on attention, performance in change detection tasks differs for scenes with different affective values. We found that change detection accuracy was greatest for changes occurring in positive and negative scenes (compared with neutral scenes). Secondly, and most importantly, changes in negative scenes (and also positive scenes, though not with statistical significance) were detected faster than changes in neutral scenes. Interestingly, women were less accurate than men in detecting changes in emotionally significant scenes (both negative and positive), i.e., women detected fewer changes in emotional scenes within the time limit of 40 s. On the other hand, women were quicker than men to detect changes in positive and negative images. The study makes important contributions to our understanding of the role of emotion in information processing. The role of emotion in attention will be discussed.

Keywords: attention, emotion, flicker task, IAPS

Procedia PDF Downloads 338
4260 Studies on the Existing Status of MSW Management in Agartala City and Recommendation for Improvement

Authors: Subhro Sarkar, Umesh Mishra

Abstract:

The Agartala Municipal Council (AMC) is the municipal body that regulates and governs the city of Agartala. Municipal solid waste (MSW) management may be described as a tool that rests on the principles of public health, economy, engineering, and aesthetic and environmental considerations, dealing with the controlled generation, collection, transport, processing, and disposal of MSW. Around 220-250 MT of solid waste per day is collected by the AMC, of which 12-14 MT is plastic; the waste is disposed of at the Devendra Chandra Nagar dumping ground (33 acres), nearly 12-15 km from the city. A survey was performed to list the prevailing operations conducted by the AMC, which include road sweeping, garbage lifting, carcass removal, biomedical waste collection, dumping, and incineration. Different types of vehicles are engaged to carry out these operations. Door-to-door collection of garbage from the houses is done with the help of 220 tricycles issued by 53 NGOs. The locations of the dustbin containers, consisting of 4.5 cum, 0.6 cum, and 0.1 cum containers placed at various points within the city, were earmarked. The total household waste was categorized as organic, recyclable, and other wastes. It was found that the East Pratapgarh ward produced 99.3% organic waste out of the total MSW generated in that ward, the maximum among all the wards. A comparison of waste generation versus family size was made. A questionnaire for the survey of MSW from households and marketplaces was prepared. The average waste generated (in kg) per person per day was determined for each of the wards. It was noted that the East Jogendranagar ward had the maximum per-person waste generation of 0.493 kg/day. In view of the studies made, it was found that the AMC has failed to implement MSW management effectively because of the unavailability of suitable facilities for the treatment and disposal of the large amount of MSW. It was also noted that the AMC is not following standard procedures for handling MSW. The transportation system was also found to be less effective, leading to a waste of time, money, and manpower.

Keywords: MSW, waste generation, solid waste disposal, management

Procedia PDF Downloads 303
4259 A Parallel Algorithm for Solving the PFSP on the Grid

Authors: Samia Kouki

Abstract:

Solving NP-hard combinatorial optimization problems by exact search methods, such as Branch-and-Bound, may degenerate into complete enumeration. For that reason, exact approaches limit us to solving only small or moderate-size problem instances, due to the exponential increase in CPU time as the problem size increases. One of the most promising ways to significantly reduce the computational burden of sequential versions of Branch-and-Bound is to design parallel versions of these algorithms that employ several processors. This paper describes a parallel Branch-and-Bound algorithm called GALB for solving the classical permutation flowshop scheduling problem, as well as its implementation on a Grid computing infrastructure. The experimental study of our distributed parallel algorithm gives promising results and clearly shows the benefit of the parallel paradigm for solving large-scale instances in moderate CPU time.
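
Two building blocks of such a Branch-and-Bound are easy to sketch: evaluating the makespan of a (partial) permutation and computing a classical machine-based lower bound for pruning. The 4-job, 3-machine instance below is invented for illustration, and the sketch says nothing about the parallelisation or load-balancing aspects of GALB.

```python
import numpy as np

def completion_times(p, order):
    """Per-machine completion times after processing 'order' (a job sequence)
    on a permutation flowshop. p[j, m] = time of job j on machine m."""
    c = np.zeros(p.shape[1])
    for j in order:
        c[0] += p[j, 0]
        for m in range(1, len(c)):
            c[m] = max(c[m], c[m - 1]) + p[j, m]
    return c

def machine_lower_bound(p, scheduled, remaining):
    """Classical machine-based bound used to prune Branch-and-Bound nodes:
    for each machine, its current completion time plus all remaining work on
    it plus the shortest possible tail on the downstream machines."""
    c = completion_times(p, scheduled)
    remaining = list(remaining)
    bounds = []
    for m in range(p.shape[1]):
        load = p[remaining, m].sum()
        tail = min(p[j, m + 1:].sum() for j in remaining)
        bounds.append(c[m] + load + tail)
    return max(bounds)

# Tiny illustrative instance: 4 jobs x 3 machines (times are made up).
p = np.array([[3, 6, 3],
              [5, 2, 4],
              [2, 4, 6],
              [4, 3, 2]])
print("makespan of full order 0-1-2-3:", completion_times(p, [0, 1, 2, 3])[-1])
print("lower bound after fixing job 0:", machine_lower_bound(p, [0], [1, 2, 3]))
```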

Keywords: grid computing, permutation flow shop problem, branch and bound, load balancing

Procedia PDF Downloads 270
4258 An Extension of the Generalized Extreme Value Distribution

Authors: Serge Provost, Abdous Saboor

Abstract:

A q-analogue of the generalized extreme value distribution, which includes the Gumbel distribution, is introduced. The additional parameter q allows for increased modeling flexibility. The resulting distribution can have a finite, semi-infinite or infinite support. It can also produce several types of hazard rate functions. The model parameters are determined by making use of the method of maximum likelihood. It is shown that the proposed distribution compares favourably to three related distributions in connection with the modeling of a certain hydrological data set.
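
Since the q-analogue itself is not reproduced here, the sketch below shows the baseline it extends: a maximum-likelihood fit of the standard generalized extreme value distribution (with the Gumbel case at shape 0 in scipy's parameterisation) to a synthetic annual-maximum sample, followed by a simple goodness-of-fit check. The sample and parameter values are illustrative assumptions, not the hydrological data set analysed in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic annual-maximum sample standing in for a hydrological record.
sample = stats.genextreme.rvs(c=-0.1, loc=120.0, scale=35.0, size=80,
                              random_state=rng)

# Maximum-likelihood fit of the standard GEV; the q-analogue adds one more
# shape parameter on top of this baseline.
c_hat, loc_hat, scale_hat = stats.genextreme.fit(sample)
print(f"shape = {c_hat:.3f}, location = {loc_hat:.1f}, scale = {scale_hat:.1f}")

# Goodness of fit: Kolmogorov-Smirnov statistic against the fitted model.
ks = stats.kstest(sample, 'genextreme', args=(c_hat, loc_hat, scale_hat))
print(f"KS statistic = {ks.statistic:.3f}, p-value = {ks.pvalue:.3f}")
```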

Keywords: extreme value theory, generalized extreme value distribution, goodness-of-fit statistics, Gumbel distribution

Procedia PDF Downloads 332