Search results for: intensive unit scoring system
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19622

10652 Home Made Rice Beer Waste (Choak): A Low Cost Feed for Sustainable Poultry Production

Authors: Vinay Singh, Chandra Deo, Asit Chakrabarti, Lopamudra Sahoo, Mahak Singh, Rakesh Kumar, Dinesh Kumar, H. Bharati, Biswajit Das, V. K. Mishra

Abstract:

The feed resources most widely used in poultry diets, such as maize and soybean, are expensive as well as in short supply. Hence, there is a need to utilize non-conventional feed ingredients to cut feed costs. Brewery by-products such as brewers' dried grains are potential non-conventional feed resources. North-East India is inhabited by many tribes, most of which prepare an indigenous local brew, mostly using rice grains as the primary substrate. Choak, a homemade rice beer waste, is an excellent and cheap source of protein and other nutrients. Fresh homemade rice beer waste (rice brewers' grain) was collected locally. Proximate analysis indicated 28.53% crude protein, 92.76% dry matter, 5.02% ether extract, 7.83% crude fibre, 2.85% total ash, 0.67% acid-insoluble ash, 0.91% calcium, and 0.55% total phosphorus. A feeding trial with five treatments (rice beer waste incorporated at inclusion levels of 0, 10, 20, 30, and 40% by replacing maize and soybean in the basal diet) was conducted with 25 laying hens per treatment for 16 weeks under a completely randomized design to study the production performance, blood-biochemical parameters, immunity, egg quality, and cost economics of laying hens. The results showed significant differences (P<0.01) in egg production, egg mass, FCR per dozen eggs, FCR per kg egg mass, and net FCR, whereas body weight, feed intake, and egg weight did not differ significantly. Total serum cholesterol was reduced significantly (P<0.01) at 40% inclusion of rice beer waste, and the egg Haugh unit increased significantly (P<0.01) with increasing graded levels of rice beer waste. The inclusion of 20% rice brewers' dried grain reduced feed cost per kg egg mass and per dozen eggs by Rs. 15.97 and Rs. 9.99, respectively.
Choak (homemade rice beer waste) can thus be safely incorporated into the diet of laying hens at a 20% inclusion level for better production performance and cost-effectiveness.
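
The feed conversion ratio (FCR) metrics reported in this trial can be illustrated with a minimal sketch; all numbers below are hypothetical placeholders, not trial data.

```python
# Sketch of the FCR metrics used in the laying-hen trial.
# All input figures are hypothetical illustrations, not trial data.

def fcr_per_kg_egg_mass(feed_intake_kg, egg_mass_kg):
    """Feed consumed per kg of egg mass produced."""
    return feed_intake_kg / egg_mass_kg

def fcr_per_dozen_eggs(feed_intake_kg, eggs_laid):
    """Feed consumed per dozen eggs produced."""
    return feed_intake_kg / (eggs_laid / 12)

feed = 50.0      # kg feed consumed over the period (hypothetical)
egg_mass = 25.0  # kg egg mass produced (hypothetical)
eggs = 420       # number of eggs laid (hypothetical)

print(round(fcr_per_kg_egg_mass(feed, egg_mass), 2))  # 2.0
print(round(fcr_per_dozen_eggs(feed, eggs), 2))       # 1.43
```

A lower FCR means less feed per unit of egg output, which is why the 20% inclusion level translates directly into the reported feed-cost savings.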

Keywords: choak, rice beer waste, laying hen, production performance, cost economics

Procedia PDF Downloads 43
10651 The Use of Artificial Intelligence in the Context of a Space Traffic Management System: Legal Aspects

Authors: George Kyriakopoulos, Photini Pazartzis, Anthi Koskina, Crystalie Bourcha

Abstract:

The need to secure safe access to and return from outer space, and to ensure the viability of outer space operations, keeps alive the debate over organizing space traffic through a Space Traffic Management (STM) system. The proliferation of outer space activities in recent years, together with the dynamic emergence of the private sector, has gradually produced a diverse universe of actors operating in outer space. These developments have had an increasingly adverse impact on outer space sustainability, as the growing number of space debris clearly demonstrates. This landscape poses considerable threats to the outer space environment and its operators that need to be addressed by a combination of scientific-technological measures and regulatory interventions. In this context, recourse to recent technological advancements, in particular Artificial Intelligence (AI) and machine learning systems, could yield exponential results in promoting space traffic management with respect to collision avoidance as well as launch and re-entry procedures/phases. New technologies can support the prospects of a successful space traffic management system at an international scale by enabling, inter alia, timely, accurate, and analytical processing of large data sets, rapid decision-making, more precise space debris identification and tracking, and overall minimization of collision risks and reduction of operational costs. Moreover, a significant part of space activities (i.e., the launch and/or re-entry phase) takes place in airspace rather than in outer space, hence the overall discussion also involves the highly developed, both technically and legally, international (and national) Air Traffic Management (ATM) system. Nonetheless, from a regulatory perspective, the use of AI for the purposes of space traffic management puts forward implications that merit particular attention.
Key issues in this regard include the delimitation of AI-based activities as space activities, the designation of the applicable legal regime (international space or air law, national law), the assessment of the nature and extent of international legal obligations regarding space traffic coordination, as well as the appropriate liability regime applicable to AI-based technologies when operating for space traffic coordination, taking into particular consideration the dense regulatory developments at EU level. In addition, the prospects of institutionalizing international cooperation and promoting an international governance system, together with the challenges of establishing a comprehensive international STM regime, are revisited in light of the intervention of AI technologies. This paper aims at examining, in toto, the regulatory implications advanced by the use of AI technology in the context of space traffic management operations and its key correlating concepts (SSA, space debris mitigation), drawing in particular on international and regional considerations in the field of STM (e.g., UNCOPUOS, the International Academy of Astronautics, and the European Space Agency, among other actors), the promising advancements of the EU approach to AI regulation, and, last but not least, national approaches regarding the use of AI in the context of space traffic management. Acknowledgment: The present work was co-funded by the European Union and Greek national funds through the Operational Program "Human Resources Development, Education and Lifelong Learning" (NSRF 2014-2020), under the call "Supporting Researchers with an Emphasis on Young Researchers – Cycle B" (MIS: 5048145).

Keywords: artificial intelligence, space traffic management, space situational awareness, space debris

Procedia PDF Downloads 231
10650 Estimating Groundwater Seepage Rates: Case Study at Zegveld, Netherlands

Authors: Wondmyibza Tsegaye Bayou, Johannes C. Nonner, Joost Heijkers

Abstract:

This study aimed to identify and estimate dynamic groundwater seepage rates using four comparative methods: the Darcian approach, the water balance approach, the tracer method, and modeling. The theoretical background to these methods is brought together in this study. The methodology was applied to a case study area at Zegveld on the advice of the Water Board Stichtse Rijnlanden. Data were collected from various offices and through a field campaign in the winter of 2008/09. In the complex confining layer of the study area, the phreatic groundwater table lies at a shallow depth compared to the piezometric water level. Data were available for the model years 1989 to 2000 and for winter 2008/09. The higher groundwater table indicates predominantly downward seepage in the study area. Results indicated that net recharge to the groundwater table (precipitation excess) and the ditch system are the principal sources of seepage across the complex confining layer; the contribution from the ditches is significant, especially in the summer season. Water is supplied from the River Meije through a pumping system to meet the ditches' water demand. The groundwater seepage rate was distributed unevenly throughout the study area, averaging 0.60 mm/day at the nature reserve for the model years 1989 to 2000 and 0.70 mm/day for winter 2008/09. Due to data restrictions, the seepage rates were mainly determined with the Darcian method, while the water balance approach and the tracer method were applied to compute the flow exchange within the ditch system. The site had various validated sources of groundwater level and vertical flow resistance data. The phreatic groundwater level map, compared with TNO-DINO groundwater level data, overestimated the groundwater level depth by 28 cm. The hydraulic resistance values obtained from the 3D geological map, compared with the TNO-DINO data, agreed with the model values before calibration.
On the other hand, the calibrated model significantly underestimated the downward seepage in the area compared with the field-based computations following the Darcian approach.
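
The Darcian approach referred to above reduces, in its simplest form, to dividing the head difference across the confining layer by its vertical hydraulic resistance. A minimal sketch, with illustrative numbers rather than site data:

```python
# Minimal sketch of the Darcian approach for vertical seepage across a
# confining layer: q = dh / c, where dh is the head difference between the
# phreatic water table and the piezometric level (m) and c is the vertical
# hydraulic resistance (days). All numbers are illustrative, not site data.

def darcian_seepage_mm_per_day(phreatic_head_m, piezometric_head_m, resistance_days):
    dh = phreatic_head_m - piezometric_head_m  # positive -> downward seepage
    return dh / resistance_days * 1000.0       # m/day -> mm/day

# Example: phreatic table 0.7 m above the piezometric level, c = 1000 days
print(round(darcian_seepage_mm_per_day(-1.3, -2.0, 1000.0), 2))  # 0.7
```

With heads expressed relative to a common datum, a positive result indicates downward seepage, matching the predominantly downward seepage reported for the study area.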

Keywords: groundwater seepage, phreatic water table, piezometric water level, nature reserve, Zegveld, The Netherlands

Procedia PDF Downloads 70
10649 Using Life Cycle Assessment in Potable Water Treatment Plant: A Colombian Case Study

Authors: Oscar Orlando Ortiz Rodriguez, Raquel A. Villamizar-G, Alexander Araque

Abstract:

There is a total of 1027 municipal development plans in Colombia; 70% of municipalities have Potable Water Treatment Plants (PWTPs) in urban areas and 20% in rural areas. These PWTPs are typically supplied by surface waters (mainly rivers) and resort to gravity, pumping, and/or mixed systems to convey the water from the catchment point, where the first stage of the potable water process takes place. Subsequently, a series of conventional methods is applied, consisting of a more or less standardized sequence of physicochemical and, sometimes, biological treatment processes, which vary depending on the quality of the water entering the plant. These processes require energy and chemical supplies in order to guarantee a product adequate for human consumption. Therefore, in this paper, we applied the environmental methodology of Life Cycle Assessment (LCA) to evaluate the environmental loads of a potable water treatment plant (PWTP) located in northeastern Colombia, following the international guidelines of ISO 14040. The different stages of the potable water process, from the catchment point through pumping to the distribution network, were thoroughly assessed. The functional unit was defined as 1 m³ of treated water. The data were analyzed with the Ecoinvent v.3.01 database and modeled and processed in the software LCA-Data Manager. The results showed that the largest impact in the plant was caused by Clarifloc (82%), followed by chlorine gas (13%) and power consumption (4%). In this context, the company responsible for the sustainability of the potable water service should ideally reduce these environmental loads. For instance, the use of Clarifloc could be reduced by applying coadjuvants or other coagulant agents. The preservation of the water source that supplies the treatment plant is also an important factor, since its deterioration confers unfavorable characteristics on the water to be treated.
In conclusion, treatment processes and techniques, bioclimatic conditions, and culturally driven consumption behavior vary from region to region. Furthermore, changes in treatment processes and techniques are likely to affect the environment during all stages of a plant's operation cycle.

Keywords: climate change, environmental impact, life cycle assessment, treated water

Procedia PDF Downloads 214
10648 Optimizing Solids Control and Cuttings Dewatering for Water-Powered Percussive Drilling in Mineral Exploration

Authors: S. J. Addinell, A. F. Grabsch, P. D. Fawell, B. Evans

Abstract:

The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled-tubing-based greenfields mineral exploration drilling system utilising down-hole water-powered percussive drill tooling. This new drilling system is aimed at significantly reducing the costs associated with identifying mineral resource deposits beneath deep, barren cover. The system has shown superior rates of penetration in water-rich, hard rock formations at depths exceeding 500 metres. With fluid flow rates of up to 120 litres per minute at 200 bar operating pressure to energise the bottom hole tooling, excessive quantities of high quality drilling fluid (water) would be required for a prolonged drilling campaign. As a result, drilling fluid recovery and recycling have been identified as a necessary option to minimise costs and logistical effort. While the majority of the cuttings report as coarse particles, a significant fines fraction will typically also be present. To maximise tool longevity, the percussive bottom hole assembly requires high quality fluid with minimal solids loading, and any recycled fluid needs a solids cut point below 40 microns and a concentration below 400 ppm before it can be used to re-energise the system. This paper presents experimental results obtained from the research program during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process shows a strong power law relationship for particle size distributions. These data are critical in optimising solids control strategies and cuttings dewatering techniques. Optimisation of deployable solids control equipment is discussed, along with how the required centrate clarity was achieved in the presence of pyrite-rich metasediment cuttings.
Key results were the successful pre-aggregation of fines through the selection and use of high molecular weight anionic polyacrylamide flocculants, and the techniques developed for optimal dosing prior to scroll decanter centrifugation, thus keeping sub-40-micron solids loading within prescribed limits. Experiments on maximising fines capture in the presence of thixotropic drilling fluid additives (e.g., xanthan gum and other biopolymers) are also discussed. As no core is produced during the drilling process, it is intended that the particle-laden returned drilling fluid be used for top-of-hole geochemical and mineralogical assessment. A discussion is therefore presented on the biasing and latency of cuttings representativeness introduced by dewatering techniques, as well as the resulting detrimental effects on depth fidelity and accuracy. Data pertaining to sample biasing of geochemical signatures due to particle size distributions are presented and show that, depending on the solids control and dewatering techniques used, an unwanted influence on top-of-hole analysis can result. Strategies are proposed to overcome these effects, improving sample quality. Successful solids control and cuttings dewatering for water-powered percussive drilling is presented, contributing towards the successful advancement of coiled-tubing-based greenfields mineral exploration.
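
The power-law relationship reported for the cuttings particle size distributions can be recovered by linear regression in log-log space. A minimal sketch with synthetic data chosen to lie exactly on a power law (the sizes and percentages are invented, not measured values):

```python
import math

# Sketch: fitting a power law P(d) = a * d**b to a cumulative particle size
# distribution by least-squares regression in log-log space. The (size,
# cumulative % passing) pairs below are synthetic: passing = 12.5 * sqrt(d).

sizes_um = [1, 4, 16, 64]
passing = [12.5, 25.0, 50.0, 100.0]

xs = [math.log(d) for d in sizes_um]
ys = [math.log(p) for p in passing]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = math.exp(my - b * mx)

print(round(b, 3), round(a, 3))  # exponent 0.5, prefactor 12.5
```

In practice the fitted exponent characterises the fines fraction, which is what drives the flocculant dosing and centrifugation choices described above.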

Keywords: cuttings, dewatering, flocculation, percussive drilling, solids control

Procedia PDF Downloads 235
10647 Direct Measurements of the Electrocaloric Effect in Solid Ferroelectric Materials via Thermoreflectance

Authors: Layla Farhat, Mathieu Bardoux, Stéphane Longuemart, Ziad Herro, Abdelhak Hadj Sahraoui

Abstract:

Electrocaloric (EC) effect refers to the isothermal entropy or adiabatic temperature changes of a dielectric material induced by an external electric field. This phenomenon has been largely ignored for application because only modest EC effects (2.6

Keywords: electrocaloric effect, thermoreflectance, ferroelectricity, cooling system

Procedia PDF Downloads 165
10646 Digital Joint Equivalent Channel Hybrid Precoding for Millimeterwave Massive Multiple Input Multiple Output Systems

Authors: Linyu Wang, Mingjun Zhu, Jianhong Xiang, Hanyu Jiang

Abstract:

To address the low spectral efficiency of hybrid precoding (HP) in current millimeter wave (mmWave) massive multiple input multiple output (MIMO) systems, this paper proposes a digital joint equivalent channel hybrid precoding algorithm based on the introduction of digital encoding matrix iteration. First, the objective function is expanded to obtain the relation equation, and the pseudo-inverse iterative function of the analog encoder is derived using the pseudo-inverse method, which solves the problem of the greatly increased amount of computation caused by the rank deficiency of the digital encoding matrix and reduces the overall complexity of hybrid precoding. Secondly, the analog coding matrix and the millimeter-wave sparse channel matrix are combined into an equivalent channel, the equivalent channel is subjected to Singular Value Decomposition (SVD) to obtain a digital coding matrix, and the derived pseudo-inverse iterative function is then used to iteratively regenerate the analog encoding matrix. The simulation results show that the proposed algorithm improves the system spectral efficiency by 10–20% compared with other algorithms, and its stability is also improved.
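
The core step of forming an equivalent channel from the analog precoder and extracting the digital precoder via SVD can be sketched as follows. The dimensions, the random channel, and the power normalization are illustrative assumptions, not the paper's exact algorithm (which additionally iterates the analog matrix via a pseudo-inverse function).

```python
import numpy as np

# Sketch: equivalent-channel SVD step of hybrid precoding.
# Dimensions and the random channel are illustrative assumptions.

rng = np.random.default_rng(0)
n_tx, n_rf, n_rx, n_s = 16, 4, 4, 2   # tx antennas, RF chains, rx antennas, streams

# Analog precoder: constant-modulus phase-shifter entries, power-normalized
F_rf = np.exp(1j * rng.uniform(0, 2 * np.pi, (n_tx, n_rf))) / np.sqrt(n_tx)

# Illustrative random channel (a real mmWave channel would be sparse)
H = (rng.standard_normal((n_rx, n_tx))
     + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)

H_eq = H @ F_rf                        # equivalent channel seen by digital stage
U, s, Vh = np.linalg.svd(H_eq)
F_bb = Vh.conj().T[:, :n_s]            # digital precoder: leading right singular vectors

# Normalize total transmit power through the hybrid precoder
F_bb *= np.sqrt(n_s) / np.linalg.norm(F_rf @ F_bb, 'fro')

print(F_bb.shape, round(np.linalg.norm(F_rf @ F_bb, 'fro') ** 2, 3))  # (4, 2) 2.0
```

The right singular vectors of the equivalent channel maximize the achievable rate for a fixed analog stage, which is why SVD is the natural choice for the digital matrix here.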

Keywords: mmWave, massive MIMO, hybrid precoding, singular value decomposition, equivalent channel

Procedia PDF Downloads 80
10645 Evaluation of Kabul BRT Route Network with Application of Integrated Land-use and Transportation Model

Authors: Mustafa Mutahari, Nao Sugiki, Kojiro Matsuo

Abstract:

Four decades of war, lack of job opportunities, poverty, lack of services, and natural disasters in different provinces of Afghanistan have contributed to a rapid increase in the population of Kabul, the capital city of Afghanistan. No population census has been conducted since 1979, the first and last census in Afghanistan. However, according to estimates by Afghan authorities, the population of Kabul exceeds 4 million people, whereas the city was designed for two million. Although the major transport mode of Kabul residents is public transport, the responsible authorities have failed to supply the transportation systems the city requires. In addition, informal resettlement, lack of intersection control devices, illegal street vendors, illegal and unstandardized on-street parking and bus stops, unprofessional driver behavior, weak traffic law enforcement, and blocked roads and sidewalks have contributed to the city's extreme traffic congestion. In 2018, the government of Afghanistan approved the Kabul City Urban Design Framework (KUDF), a vision for the future of Kabul that provides strategies and design guidance at different scales to direct urban development. Considering the city's traffic congestion and budget limitations, the KUDF proposes a BRT route network with seven lines to reduce traffic congestion, which is said to enable more than 50% of the Kabul population to benefit from the service. Based on the KUDF, the BRT mode share is planned to increase from 0% to 17% and later to 30% in the medium- and long-term planning scenarios, respectively. Therefore, a detailed research study is needed to evaluate the proposed system before implementation starts.
An integrated land-use and transport model is an effective tool for evaluating the Kabul BRT because of its future-assessment capability, which takes into account the interaction between land use and transportation. This research analyzes and evaluates the proposed BRT route network with the application of an integrated land-use and transportation model. The research estimates the population distribution and travel behavior of Kabul at a fine zonal scale. The actual road network and detailed land-use data of the city are used to perform the analysis. The BRT corridors are evaluated considering their impacts not only on the spatial interactions in the city's transportation system but also on spatial developments. Therefore, the BRT is evaluated under scenarios that improve the Kabul transportation system based on the distribution of land use or spatial developments, the planned development typology, and the population distribution of the city. The impacts of the improved transport system on the BRT network are analyzed, and the BRT network is evaluated accordingly. In addition, the research focuses on the spatial accessibility of BRT stops, corridors, and BRT line beneficiaries; each BRT stop and corridor is evaluated in terms of both access and geographic coverage.

Keywords: accessibility, BRT, integrated land-use and transport model, travel behavior, spatial development

Procedia PDF Downloads 194
10644 Performance of BLDC Motor under Kalman Filter Sensorless Drive

Authors: Yuri Boiko, Ci Lin, Iluju Kiringa, Tet Yeap

Abstract:

The performance of a BLDC motor controlled by a Kalman filter-based position-sensorless drive is studied in terms of its dependence on variations in the system's parameters. The effects of changes in the system's parameters on the dynamic behavior of the state variables are verified. A closed-loop control scheme with a Kalman filter in the feedback line is simulated. Two separate data sampling modes are distinguished in analyzing feedback output from the BLDC motor: (1) equal angular separation and (2) equal time intervals. In case (1), the data are collected at equal intervals Δθ of the rotor's angular position θᵢ, i.e., keeping Δθ=const. In case (2), the data collection time points tᵢ are separated by equal sampling time intervals Δt=const. The effects of parameter changes on the sensorless control flow are demonstrated, in particular the reduction of torque ripples and switching spikes and the balancing of torque load. It is specifically shown that efficient suppression of commutation-induced torque ripples is achievable by selecting the sampling rate in the Kalman filter settings above a certain critical value. The computational cost of such suppression is shown to be higher for motors with lower winding inductance values.
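
The equal-time-interval sampling mode (Δt=const) can be sketched with a toy scalar Kalman filter tracking rotor angle from noisy position samples. All motor values here are hypothetical illustrations, not the paper's model.

```python
import random

# Toy scalar Kalman filter tracking rotor angle from noisy position samples
# at equal time intervals (dt = const). All values are hypothetical.

random.seed(1)
omega = 10.0          # assumed-known angular speed, rad/s (hypothetical)
dt = 0.001            # sampling interval
q, r = 1e-6, 0.01     # process and measurement noise variances

theta_true = 0.0
x, p = 0.0, 1.0       # state estimate (angle) and its variance
for _ in range(2000):
    theta_true += omega * dt
    z = theta_true + random.gauss(0, r ** 0.5)   # noisy position sample
    # predict: propagate the angle forward one sampling interval
    x += omega * dt
    p += q
    # update: blend the prediction with the new measurement
    k = p / (p + r)                              # Kalman gain
    x += k * (z - x)
    p *= (1 - k)

print(round(p, 6))   # steady-state estimate variance, well below r
```

Raising the sampling rate (smaller dt) gives the filter more measurements per commutation event, which is the mechanism behind the ripple suppression discussed above, at the price of more computation.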

Keywords: BLDC motor, Kalman filter, sensorless drive, state variables, torque ripples reduction, sampling rate

Procedia PDF Downloads 131
10643 Quantification of Dowel-Concrete Interaction in Jointed Plain Concrete Pavements Using 3D Numerical Simulation

Authors: Lakshmana Ravi Raj Gali, K. Sridhar Reddy, M. Amaranatha Reddy

Abstract:

Load transfer between adjacent slabs of the jointed plain concrete pavement (JPCP) system is essential for long-lasting performance. Dowel bars are generally used to ensure a sufficient degree of load transfer, in addition to the load transferred by the aggregate interlock mechanism at the joints. Joint efficiency is the measure of joint quality and a major concern; therefore, the dowel bar system should be well designed and constructed. The interaction between dowel bars and concrete, which involves various parameters of the dowel bar and the concrete, explains the degree of joint efficiency. The present study focuses on a methodology for selecting the contact stiffness, which quantifies dowel-concrete interaction. In addition, a parametric study was performed on the effect of dowel diameter, dowel shape, spacing between dowel bars, joint opening, slab thickness, elastic modulus of concrete, and concrete cover on contact stiffness. The results indicated that slab thickness is the most critical among the various parameters in explaining joint efficiency. Further, a displacement equivalency method was proposed to determine the contact stiffness. The proposed methodology was validated with available field surface deflection data collected by a falling weight deflectometer (FWD).
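
A displacement-equivalency calibration needs an analytical deflection target to match the 3D model against. A classical candidate is the beam-on-elastic-foundation dowel deflection formula (Timoshenko/Friberg); the sketch below evaluates it with illustrative inputs, not values from the study.

```python
import math

# Sketch: dowel deflection at the joint face from the classical
# beam-on-elastic-foundation model (Timoshenko/Friberg), the kind of
# analytical target matched in a displacement-equivalency calibration of
# contact stiffness. All input values are illustrative, not from the study.

P = 10_000.0   # load transferred by the dowel, N
z = 0.006      # joint opening, m
d = 0.032      # dowel diameter, m
E = 200e9      # dowel elastic modulus, Pa (steel)
K = 4.0e11     # modulus of dowel support, N/m^3

I = math.pi * d ** 4 / 64                          # second moment of area, m^4
beta = (K * d / (4 * E * I)) ** 0.25               # relative stiffness, 1/m
y0 = P * (2 + beta * z) / (4 * beta ** 3 * E * I)  # deflection at joint face, m

print(round(y0 * 1000, 4), "mm")
```

Tuning the interface contact stiffness in the 3D model until its joint-face deflection matches y0 is the essence of the displacement-equivalency idea.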

Keywords: contact stiffness, displacement equivalency method, dowel-concrete interaction, joint behavior, 3D numerical simulation

Procedia PDF Downloads 135
10642 A Multi-Criteria Decision Making Approach for Disassembly-To-Order Systems under Uncertainty

Authors: Ammar Y. Alqahtani

Abstract:

In order to minimize the negative impact on the environment, it is essential to properly manage the waste generated from the premature disposal of end-of-life (EOL) products. Consequently, governments and international organizations have introduced new policies and regulations to minimize the amount of waste sent to landfills. Moreover, consumers' awareness regarding the environment has forced original equipment manufacturers to become more environmentally conscious. Therefore, manufacturers have considered different ways to deal with EOL products, viz., remanufacturing, reusing, recycling, or disposal. When EOL products are remanufactured, reused, or recycled, manufacturers reduce the rate of depletion of virgin natural resources and their dependency on those resources, and also cut the amount of harmful waste sent to landfills. Disposal of EOL products, by contrast, contributes to the problem and is therefore used as a last option. The number of EOL products needs to be estimated in order to fulfill the component demand; a disassembly process is then performed to extract individual components and subassemblies. Smart products, built with embedded sensors and network connectivity to enable the collection and exchange of data, utilize sensors implanted into products during production. Remanufacturers use these sensors to predict an optimal warranty policy and the time period that should be offered to customers who purchase remanufactured components and products. Sensor-provided data can help to evaluate the overall condition of a product, as well as the remaining lives of its components, prior to performing a disassembly process. In this paper, a multi-period disassembly-to-order (DTO) model is developed that takes the different system uncertainties into consideration. The DTO model is solved using Nonlinear Programming (NLP) over multiple periods.
A DTO system is considered in which a variety of EOL products are purchased for disassembly. The model's main objective is to determine the best combination of EOL products to be purchased from every supplier in each period that maximizes the total profit of the system while satisfying the demand. This paper also addresses the impact of sensor-embedded products on the cost of warranties. Lastly, it presents and analyzes a case study involving various simulation conditions to illustrate the applicability of the model.
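
The purchase-planning objective can be illustrated with a deliberately tiny single-period sketch: choose how many EOL units to buy from two hypothetical suppliers so that component demand is met at maximum profit. The yields, prices, and demands are invented, and the brute-force search stands in for the paper's multi-period nonlinear program.

```python
from itertools import product

# Toy single-period disassembly-to-order sketch. Yields, prices, and demands
# are invented; the paper's model is a multi-period NLP, not this search.

yields_ = {"S1": {"board": 2, "motor": 1}, "S2": {"board": 1, "motor": 2}}
cost = {"S1": 5.0, "S2": 4.0}          # purchase + disassembly cost per unit
price = {"board": 3.0, "motor": 4.0}   # resale price per recovered component
demand = {"board": 40, "motor": 50}

best = None
for q1, q2 in product(range(61), repeat=2):      # units bought from S1, S2
    rec = {c: q1 * yields_["S1"][c] + q2 * yields_["S2"][c] for c in demand}
    if any(rec[c] < demand[c] for c in demand):  # demand must be satisfied
        continue
    revenue = sum(min(rec[c], demand[c]) * price[c] for c in demand)
    profit = revenue - q1 * cost["S1"] - q2 * cost["S2"]
    if best is None or profit > best[0]:
        best = (profit, q1, q2)

print(best)  # (190.0, 10, 20): buy 10 from S1 and 20 from S2
```

Excess recovered components earn nothing here, so the optimum buys the cheapest mix that just covers demand; uncertainty in yields and demand is what turns this into the stochastic NLP the paper solves.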

Keywords: closed-loop supply chains, environmentally conscious manufacturing, product recovery, reverse logistics

Procedia PDF Downloads 125
10641 Comparative Study of Heat Transfer Capacity Limits of Heat Pipes

Authors: H. Shokouhmand, A. Ghanami

Abstract:

A heat pipe is a simple heat transfer device that combines conduction and phase change phenomena to control heat transfer without any need for an external power source. At the hot surface of the heat pipe, the liquid phase absorbs heat and changes to vapor. The vapor flows to the condenser region and, with the loss of heat, changes back to liquid. Due to gravitational force, the liquid phase flows back to the evaporator section. In HVAC systems, the working fluid is chosen based on the operating temperature. The heat pipe has a significant capability to reduce humidity in HVAC systems; any HVAC system that uses a heater, humidifier, or dryer is a suitable candidate for the utilization of heat pipes. Generally, heat pipes have three main sections: condenser, adiabatic region, and evaporator. Investigating and optimizing heat pipe operation in order to increase efficiency is crucial. In the present article, a parametric study is performed to improve heat pipe performance: the heat capacity of the heat pipe is investigated with respect to geometrical and confining parameters. For better observation of heat pipe operation in HVAC systems, a CFD simulation with an Eulerian-Eulerian multiphase approach is also performed. The results show that the heat transfer capacity of the heat pipe is highest for water as the working fluid at an operating temperature of 340 K. It is also shown that the vertical orientation of the heat pipe enhances its heat transfer capacity.

Keywords: heat pipe, HVAC system, grooved heat pipe, heat pipe limits

Procedia PDF Downloads 407
10640 A Combined Activated Sludge-Filtration-Ozonation Process for Abattoir Wastewater Treatment

Authors: Pello Alfonso-Muniozguren, Madeleine Bussemaker, Ralph Chadeesingh, Caryn Jones, David Oakley, Judy Lee, Devendra Saroj

Abstract:

Industrialized livestock agriculture is growing every year, leading to an increase in the generation of wastewater that varies considerably in terms of organic content and microbial population. Therefore, suitable wastewater treatment methods are required to ensure that wastewater quality meets regulations before discharge. In the present study, a combined lab-scale activated sludge-filtration-ozonation system was used to treat a pre-treated abattoir wastewater. A hydraulic retention time of 24 hours and a solids retention time of 13 days were used for the activated sludge process, followed by a filtration step (4-7 µm) and ozone as tertiary treatment. Average reductions of 93% and 98% were achieved for Chemical Oxygen Demand (COD) and Biological Oxygen Demand (BOD), respectively, giving final values of 128 mg/L COD and 12 mg/L BOD. For Total Suspended Solids (TSS), the average reduction reached 99% in the same system, reducing the final value to 3 mg/L. Additionally, a 98% reduction in phosphorus (P) and complete inactivation of Total Coliforms (TC) were obtained after 17 min of ozonation. For Total Viable Counts (TVC), a drastic reduction (6 log inactivation) was observed with 30 min of ozonation at an ozone dose of 71 mg O3/L. Overall, the combined process was sufficient to meet discharge requirements without further treatment for the measured parameters (COD, BOD, TSS, P, TC, and TVC).
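
The "6 log inactivation" figure is a base-10 reduction ratio; a minimal sketch of the arithmetic, with hypothetical CFU/mL counts:

```python
import math

# Sketch: log-inactivation arithmetic behind the reported "6 log" total
# viable count reduction. The counts below are hypothetical CFU/mL values.

def log_inactivation(n0, n):
    """log10 reduction from initial count n0 to residual count n."""
    return math.log10(n0 / n)

print(log_inactivation(1e6, 1.0))  # 6.0 -> a 6-log (99.9999%) reduction
```

Expressing disinfection performance in log units makes very large reductions comparable across ozone doses and contact times.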

Keywords: abattoir wastewater, activated sludge, ozone, wastewater treatment

Procedia PDF Downloads 261
10639 Role of Baseline Measurements in Assessing Air Quality Impact of Shale Gas Operations

Authors: Paula Costa, Ana Picado, Filomena Pinto, Justina Catarino

Abstract:

Environmental impact associated with large scale shale gas development is of major concern to the public, policy makers and other stakeholders. To assess this impact on the atmosphere, it is important to monitoring ambient air quality prior to and during all shale gas operation stages. Baseline observations can provide a standard of the pre-shale gas development state of the environment. The lack of baseline concentrations was identified as an important knowledge gap to assess the impact of emissions to the air due to shale gas operations. In fact baseline monitoring of air quality are missing in several regions, where there is a strong possibility of future shale gas exploration. This makes it difficult to properly identify, quantify and characterize environmental impacts that may be associated with shale gas development. The implementation of a baseline air monitoring program is imperative to be able to assess the total emissions related with shale gas operations. In fact, any monitoring programme should be designed to provide indicative information on background levels. A baseline air monitoring program should identify and characterize targeted air pollutants, most frequently described from monitoring and emission measurements, as well as those expected from hydraulic fracturing activities, and establish ambient air conditions prior to start-up of potential emission sources from shale gas operations. This program has to be planned for at least one year accounting for ambient variations. In the literature, in addition to GHG emissions of CH4, CO2 and nitrogen oxides (NOx), fugitive emissions from shale gas production can release volatile organic compounds (VOCs), aldehydes (formaldehyde, acetaldehyde) and hazardous air pollutants (HAPs). The VOCs include a.o., benzene, toluene, ethyl benzene, xylenes, hexanes, 2,2,4-trimethylpentane, styrene. 
The concentrations of six air pollutants (ozone, particulate matter (PM), carbon monoxide (CO), nitrogen oxides (NOx), sulphur oxides (SOx), and lead), whose regional ambient levels are regulated by the Environmental Protection Agency (EPA), are often discussed. However, the main concern regarding air emissions associated with shale gas operations appears to be the leakage of methane, a compound of major concern due to its strong global warming potential. Identifying methane leakage from shale gas activities is complex because several other CH4 sources exist (e.g., landfills, agricultural activity, or gas pipelines and compressor stations). An integrated monitoring study of methane emissions may be a suitable means of distinguishing the contributions of the different methane sources to ambient levels. All data need to be interpreted carefully, also taking into account the meteorological conditions of the site; this may require a more intensive monitoring programme. It is therefore essential to develop a low-cost sampling strategy suitable for establishing pre-operations baseline data, as well as an integrated monitoring program to assess the emissions from shale gas operation sites. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 640715.

Keywords: air emissions, baseline, greenhouse gases, shale gas

Procedia PDF Downloads 314
10638 Carbon Capture and Storage in Geological Formation, its Legal, Regulatory Imperatives and Opportunities in India

Authors: Kalbende Krunal Ramesh

Abstract:

Carbon Capture and Storage (CCS) technology provides a veritable platform to bridge the gap between the seemingly irreconcilable twin global challenges of ensuring a secure, reliable and diversified energy supply and mitigating climate change by reducing atmospheric emissions of carbon dioxide. The main aim of this paper is to frame a proper regulatory policy for CCS, flexible enough in law for both government and private companies to operate under, and to explore the opportunities in this sector. India's total annual emissions were 1725 Mt CO2 in 2011, comprising 6% of total global emissions, so controlling greenhouse gas emissions is very important for environmental protection. This paper discusses the regulatory policies and technologies adopted by several countries that have used CCS successfully. The geology of sedimentary basins in India, ranging from category I to category IV and deep water, is briefly studied, and their potential for mature CCS technology is reviewed; areas not suitable for CO2 storage using presently mature technologies are also surveyed. A CCS and Clean Development Mechanism framework is developed for India, considering research and development, project appraisal, approval and validation, implementation, monitoring and verification, carbon credits issued, a cap-and-trade system, and storage potential. The opportunities in oil and gas operations, the power sector and the transport sector are discussed briefly.

Keywords: carbon credit issued, cap and trade system, carbon capture and storage technology, greenhouse gas

Procedia PDF Downloads 420
10637 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization

Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman

Abstract:

In system analysis, uncertainty in the input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation exist in the literature, and different representation approaches yield different outputs; some may estimate the system response better than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges about uncertainty quantification. Subproblem A, the uncertainty characterization subproblem, is addressed in this study. The challenge here is to gather knowledge about unknown model inputs, which carry inherent aleatory and epistemic uncertainties, from the responses (outputs) of a given computational model. We approach the problem with two methodologies: in the first, we use sampling-based uncertainty propagation with first-order error analysis; in the other, we place emphasis on Percentile-Based Optimization (PBO). The NASA Langley MUQC subproblem A is constructed so that both aleatory and epistemic uncertainties must be managed. The challenge problem classifies each uncertain parameter as one of three types: (i) an aleatory uncertainty modeled as a random variable with a fixed functional form and known coefficients; this uncertainty cannot be reduced. (ii) An epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible. (iii) A parameter that is aleatory but for which sufficient data are not available to model it adequately as a single random variable; for example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but could be assumed to lie within some intervals.
This results in a distributional p-box: the physical parameter carries an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainties, each being an unknown element of a known interval. This uncertainty is reducible. The study shows that, due to practical limitations and computational expense, the sampling in the sampling-based methodology is not exhaustive; this methodology therefore has a high probability of underestimating the output bounds. An optimization-based strategy that converts uncertainty described by interval data into a probabilistic framework is consequently necessary, and in this study it is achieved using PBO.
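The nested-loop sampling scheme implied by a distributional p-box can be sketched as follows. This is a minimal illustration, not the authors' code: the model function, the interval [0, 1] for the epistemic mean, and the fixed standard deviation of 0.1 are hypothetical choices made for the example. The outer loop samples the epistemic interval parameter; the inner loop propagates the aleatory variable; the spread of the inner-loop statistics approximates the output bounds that, as noted above, finite sampling tends to underestimate.

```python
import random
import statistics

def model(a, b):
    # Hypothetical scalar response standing in for the black-box model.
    return a ** 2 + b

def response_bounds(n_outer=200, n_inner=500, seed=1):
    """Outer loop: sample the epistemic parameter (the unknown mean of a
    normal variable, assumed to lie in [0, 1]).  Inner loop: propagate the
    aleatory variable.  The min/max of the inner-loop means approximate
    the bounds on the mean response."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_outer):
        mu = rng.uniform(0.0, 1.0)  # epistemic: interval-valued mean
        samples = [model(rng.gauss(mu, 0.1), 0.5) for _ in range(n_inner)]
        means.append(statistics.fmean(samples))
    return min(means), max(means)

lo, hi = response_bounds()
```

Because the outer loop rarely lands exactly on the interval endpoints, the returned bounds are slightly inside the true ones, which is precisely the underestimation the abstract attributes to non-exhaustive sampling.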

Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization

Procedia PDF Downloads 224
10636 An Analytical Approach to Assess and Compare the Vulnerability Risk of Operating Systems

Authors: Pubudu K. Hitigala Kaluarachchilage, Champike Attanayake, Sasith Rajasooriya, Chris P. Tsokos

Abstract:

Operating system (OS) security is a key component of computer security. Assessing and improving an OS's strength to resist vulnerabilities and attacks is mandatory, given the rate at which new vulnerabilities are discovered and attacks occur. The frequency and number of different kinds of vulnerabilities found in an OS can be considered an index of its information security level. In the present study, five widely used OSs, Microsoft Windows (Windows 7, Windows 8 and Windows 10), Apple's Mac and Linux, are assessed for their discovered vulnerabilities and the risk associated with each. Each discovered and reported vulnerability has an exploitability score assigned as part of its CVSS score in the National Vulnerability Database. In this study, the risk from vulnerabilities in each of the five operating systems is compared, using risk indexes developed from a Markov model to evaluate the risk of each vulnerability. The statistical methodology and underlying mathematical approach are described. Initially, parametric procedures were conducted; however, violations of some statistical assumptions were observed, so non-parametric approaches were adopted. A total of 6838 recorded vulnerabilities were considered in the analysis. Based on the risk associated with all the vulnerabilities considered, a statistically significant difference was found among the average risk levels of some operating systems, indicating that, under our method's assumptions and limitations, some operating systems have been more risk-vulnerable than others. The relevant test results revealing a statistically significant difference in the risk levels of the different OSs are presented.
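The idea of a Markov-model risk index can be illustrated with a toy example. The three-state life cycle below (dormant, exploited, patched), the transition probabilities, and the scaling by the CVSS exploitability score are all hypothetical assumptions for this sketch, not the authors' actual model; the point is only that the transient probability of the "exploited" state, weighted by exploitability, yields a per-vulnerability risk number that can then be compared across OSs.

```python
def state_distribution(P, start, steps):
    """Evolve a row-stochastic transition matrix `steps` times from a
    point mass on state `start` (pure-Python matrix-vector products)."""
    n = len(P)
    pi = [0.0] * n
    pi[start] = 1.0
    for _ in range(steps):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical vulnerability life cycle: 0 = dormant, 1 = exploited, 2 = patched.
P = [
    [0.90, 0.08, 0.02],
    [0.00, 0.70, 0.30],
    [0.00, 0.00, 1.00],  # patched is absorbing
]

def risk_index(cvss_exploitability, months=12):
    """Probability of being in the exploited state after `months` steps,
    weighted by the normalized CVSS exploitability score."""
    return (cvss_exploitability / 10.0) * state_distribution(P, 0, months)[1]
```

Averaging such indexes per OS would produce the group samples on which the non-parametric comparison mentioned above could be run.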

Keywords: cybersecurity, Markov chain, non-parametric analysis, vulnerability, operating system

Procedia PDF Downloads 173
10635 Aerodynamic Modelling of Unmanned Aerial System through Computational Fluid Dynamics: Application to the UAS-S45 Balaam

Authors: Maxime A. J. Kuitche, Ruxandra M. Botez, Arthur Guillemin

Abstract:

As Unmanned Aerial Systems have found diverse uses in both military and civil aviation, the need for an accurate aerodynamic model has attracted an enormous growth of interest. Recent modeling techniques are procedures using optimization algorithms and statistics that require many flight tests and are therefore extremely demanding in terms of cost. This paper presents a procedure to estimate the aerodynamic behavior of an unmanned aerial system numerically, using computational fluid dynamics analysis. The study was performed on an unstructured mesh obtained from a grid convergence analysis at a Mach number of 0.14 and an angle of attack of 0°. The flow around the aircraft was described using a standard k-ω turbulence model, and the Reynolds-Averaged Navier-Stokes (RANS) equations were solved using ANSYS FLUENT software. The method was applied to the UAS-S45, designed and manufactured by Hydra Technologies in Mexico. The lift, drag, and pitching moment coefficients were obtained at different angles of attack for several flight conditions defined in terms of altitude and Mach number. The results of the computational fluid dynamics analysis were compared with results obtained using the DATCOM semi-empirical procedure. This comparison indicated that our approach is highly accurate and that the aerodynamic model obtained could be useful for estimating the flight dynamics of the UAS-S45.

Keywords: aerodynamic modelling, CFD Analysis, ANSYS FLUENT, UAS-S45

Procedia PDF Downloads 361
10634 Research Developments in Vibration Control of Structure Using Tuned Liquid Column Dampers: A State-of-the-Art Review

Authors: Jay Gohel, Anant Parghi

Abstract:

A tuned liquid column damper (TLCD) is a passive system, a modification of the tuned mass damper in which a liquid is used in place of the mass. A TLCD consists of a U-shaped tube with an orifice that produces damping against the liquid motion in the tube. This paper provides a state-of-the-art review of the vibration control of wind- and earthquake-excited structures using liquid dampers. It also discusses the theoretical background of the TLCD, the history of liquid dampers, and the existing experimental, numerical and analytical literature. The review covers different TLCD configurations, viz. the single TLCD, the multi tuned liquid column damper (MTLCD), the TLCD-Interior (TLCDI), the tuned liquid column ball damper (TLCBD), the tuned liquid column ball gas damper (TLCBGD), and the pendulum liquid column damper (PLCD). The dynamic characteristics of these differently configured TLCD systems and their effectiveness in reducing structural vibration are discussed, as is the effectiveness of semi-active TLCDs, with reference to experimental and analytical results. In addition, the review provides numerous examples of TLCDs implemented to control vibration in real structures. Based on this comprehensive review of the literature, some important conclusions are drawn and needs for future research on vibration control of structures using TLCDs are identified.
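For reference, the undamped natural frequency of the liquid column in a U-tube TLCD follows the well-known relation f = (1/2π)·√(2g/L), where L is the total length of the liquid column. The short sketch below (an illustration, not taken from any of the reviewed papers; the 0.5 Hz target structure is an invented example) shows how this relation is inverted to size the liquid column for a given structural frequency.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def tlcd_natural_frequency(liquid_length_m):
    """Natural frequency (Hz) of the liquid column in a U-tube TLCD:
    f = (1/2*pi) * sqrt(2g/L), with L the total liquid column length."""
    return math.sqrt(2.0 * G / liquid_length_m) / (2.0 * math.pi)

def tuned_length(structure_freq_hz):
    """Invert the formula: liquid column length (m) that tunes the TLCD
    to a given structural natural frequency."""
    return 2.0 * G / (2.0 * math.pi * structure_freq_hz) ** 2

# Example: a structure with a 2 s period (0.5 Hz) calls for a column of ~2 m.
L = tuned_length(0.5)
```

In practice the tuning ratio is kept slightly below 1 and the orifice opening sets the damping, which is where the configurations surveyed above differ.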

Keywords: earthquake, wind, tuned liquid column damper, passive response control, structures

Procedia PDF Downloads 192
10633 Adequacy of Antenatal Care and Its Relationship with Low Birth Weight in Botucatu, São Paulo, Brazil: A Case-Control Study

Authors: Cátia Regina Branco da Fonseca, Maria Wany Louzada Strufaldi, Lídia Raquel de Carvalho, Rosana Fiorini Puccini

Abstract:

Background: Birth weight reflects gestational conditions and development during the fetal period. Low birth weight (LBW) may be associated with antenatal care (ANC) adequacy and quality. The purpose of this study was to analyze ANC adequacy and its relationship with LBW in the Unified Health System in Brazil. Methods: A case-control study was conducted in Botucatu, São Paulo, Brazil, from 2004 to 2008. Data were collected from secondary sources (the Live Birth Certificate) and primary sources (the official medical records of pregnant women). The study population consisted of two groups, each with 860 newborns: the case group comprised newborns weighing less than 2,500 grams, while the control group comprised live newborns weighing 2,500 grams or more. Adequacy of ANC was evaluated according to three measures: 1. adequacy of the number of ANC visits adjusted for gestational age; 2. the modified Kessner Index; and 3. a summary measure of the adequacy of ANC laboratory studies and exams, according to parameters defined by the Ministry of Health in the Program for Prenatal and Birth Care Humanization. Results: Analyses revealed that LBW was associated with the number of ANC visits adjusted for gestational age (OR = 1.78, 95% CI 1.32-2.34) and with the summary measure of ANC laboratory studies and exams (OR = 4.13, 95% CI 1.36-12.51). According to the modified Kessner Index, 64.4% of antenatal visits in the LBW group were adequate, with no differences between groups. Conclusions: Our data corroborate the association between an inadequate number of ANC visits, laboratory studies and exams, and an increased risk of LBW newborns. No association was found between the modified Kessner Index as a measure of ANC adequacy and LBW. This finding reveals the low coverage of basic actions already well regulated in the Health System in Brazil.
Despite the association found in the study, we cannot conclude that LBW would be prevented by adequate ANC alone, as LBW is associated with factors of complex and multifactorial etiology. The results could be used to plan monitoring measures and to evaluate health care assistance programs during pregnancy, at delivery and for newborns, with a focus on reducing LBW rates.
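The odds ratios reported above come from standard 2×2 case-control arithmetic. The sketch below shows that computation together with a Wald confidence interval; the cell counts in the usage line are invented for illustration and are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 case-control table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_est = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo_ci = math.exp(math.log(or_est) - z * se)
    hi_ci = math.exp(math.log(or_est) + z * se)
    return or_est, lo_ci, hi_ci

# Hypothetical counts: 20/80 exposed among cases, 10/90 among controls.
or_est, lo_ci, hi_ci = odds_ratio_ci(20, 80, 10, 90)
```

An interval whose lower bound exceeds 1, as for both ANC measures in the study, indicates a statistically significant association with LBW.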

Keywords: low birth weight, antenatal care, prenatal care, adequacy of health care, health evaluation, public health system

Procedia PDF Downloads 416
10632 Diagnosis of Heart Rhythm Disorders Using Hybrid Classifiers

Authors: Sule Yucelbas, Gulay Tezel, Cuneyt Yucelbas, Seral Ozsen

Abstract:

In this study, we sought to identify heart rhythm disorders from electrocardiography (ECG) data taken from the MIT-BIH arrhythmia database by extracting the required features and presenting them to artificial neural network (ANN), artificial immune system (AIS), artificial-immune-system-based artificial neural network (AIS-ANN) and particle swarm optimization based artificial neural network (PSO-NN) classifier systems. The main purpose of this study is to evaluate the performance of the hybrid AIS-ANN and PSO-NN classifiers against the ANN and AIS. For this purpose, RR-interval data were obtained for normal sinus rhythm (NSR), atrial premature contraction (APC), sinus arrhythmia (SA), ventricular trigeminy (VTI), ventricular tachycardia (VTK) and atrial fibrillation (AF). These data were arranged in pairs (NSR-APC, NSR-SA, NSR-VTI, NSR-VTK and NSR-AF), the discrete wavelet transform was applied to each pair, and after data reduction two data sets with 9 and 27 features were obtained from each. The data were first shuffled, and the 4-fold cross-validation method was then applied to create the training and testing sets. Training and testing accuracy rates and training times were compared. As a result, the performance of the hybrid classification systems, AIS-ANN and PSO-NN, was seen to be close to that of the ANN system, and the results of the hybrid systems were much better than those of the AIS. However, the ANN had a much shorter training time than the other systems; in terms of training time, the ANN was followed by the PSO-NN, AIS-ANN and AIS systems, respectively. The features extracted from the data also affected the classification results significantly.
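The shuffle-then-4-fold-split evaluation protocol used above can be sketched in a few lines. This is a generic illustration of k-fold index generation, not the authors' code:

```python
import random

def four_fold_splits(n_samples, seed=0):
    """Shuffle sample indices, then yield (train, test) index lists for
    4-fold cross-validation; every sample lands in exactly one test fold."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    fold = n_samples // 4
    for k in range(4):
        # last fold absorbs any remainder when n_samples % 4 != 0
        test = idx[k * fold:(k + 1) * fold] if k < 3 else idx[3 * fold:]
        test_set = set(test)
        train = [i for i in idx if i not in test_set]
        yield train, test
```

Each classifier (ANN, AIS, AIS-ANN, PSO-NN) would be trained and scored once per fold, and the four accuracies averaged.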

Keywords: AIS, ANN, ECG, hybrid classifiers, PSO

Procedia PDF Downloads 427
10631 The Justice of Resource Allocation for People with Disability Based on Activity and Participation Functioning: A Cross-Sectional Study of a National Population

Authors: Chia-Feng Yen, Shyang-Woei Lin

Abstract:

Background: In Taiwan, people with disability can obtain national social welfare services after evaluation. All subsidies and in-kind services are set out in the People with Disabilities Rights Protection Act. The new ICF-based disability eligibility determination system has been in operation for five years in Taiwan, yet no systematic outcomes have been reported on the relationships between the evaluation results for activity and participation functioning (AP functioning) and the ratification of social services for people with disability. Decision-making on welfare resource allocation rests with local governments, so ratification could be affected by resource variation across areas (local governments). The purposes of this study are to compare ratification rates between different areas (the equity of allocation) and to understand the ratification of social services for people with disability after the needs assessment stage, which can help predict resource allocation for local governments in the future. Methods: A cross-sectional study was used, with data from the Disability Eligibility Determination System in Taiwan between 2013/11/04 and 2015/01/12. All participants were evaluated face to face by physicians and AP evaluators using FUNDES-adult version 7th, and all were over 18 years old. Result: In the needs assessment stage, welfare ratification rates differed significantly between local governments for participants with similar impairment and AP functioning.

Keywords: allocation, activity and participation, people with disability, justice

Procedia PDF Downloads 148
10630 Improving Heat Pipes' Thermal Performance in HVAC Systems Using CFD Modeling

Authors: A. Ghanami, M. Heydari

Abstract:

A heat pipe is a simple heat transfer device that combines conduction and phase change phenomena to control heat transfer without any need for an external power source. At the hot surface of the heat pipe, the liquid phase absorbs heat and changes to vapor. The vapor flows to the condenser region and, with the loss of heat, changes back to liquid; gravitational force then returns the liquid to the evaporator section. In HVAC systems, the working fluid is chosen based on the operating temperature. The heat pipe has a significant capability to reduce humidity in HVAC systems, and any HVAC system that uses a heater, humidifier or dryer is a suitable candidate for the utilization of heat pipes. Generally, heat pipes have three main sections: condenser, adiabatic region and evaporator. Investigating and optimizing heat pipe operation in order to increase efficiency is crucial. In the present article, a parametric study is performed to improve heat pipe performance: the heat capacity of the heat pipe is investigated with respect to geometrical and confining parameters. For better observation of heat pipe operation in HVAC systems, a CFD simulation using a Eulerian-Eulerian multiphase approach is also performed. The results show that the heat transfer capacity of the heat pipe is highest with water as the working fluid at an operating temperature of 340 K, and that a vertical orientation enhances its heat transfer capacity.

Keywords: heat pipe, HVAC system, grooved heat pipe, heat pipe limits

Procedia PDF Downloads 467
10629 Endoscopic Treatment of Patients with Large Bile Duct Stones

Authors: Yuri Teterin, Lomali Generdukaev, Dmitry Blagovestnov, Peter Yartcev

Abstract:

Introduction: Under the definition "large biliary stones," we refer to stones over 1.5 cm for which standard transpapillary lithoextraction techniques were unsuccessful. Electrohydraulic and laser contact lithotripsy under SpyGlass control have been actively applied over the last decade to improve the results of endoscopic treatment. Aims and Methods: Between January 2019 and July 2022, the N.V. Sklifosovsky Research Institute of Emergency Care treated 706 patients diagnosed with choledocholithiasis who underwent removal of biliary stones from the common bile duct. In 57 (8.1%) of these patients, the use of a Dormia basket or a biliary stone extraction balloon was technically unsuccessful due to the size of the stones (more than 15 mm in diameter), which required their destruction. Mechanical lithotripsy was used in 35 patients, and electrohydraulic and laser lithotripsy under the SpyGlass direct visualization system in 26 patients. Results: The efficiency of mechanical lithotripsy was 72%. Complications in this group were observed in 2 patients: in both cases, acute pancreatitis developed on day one after lithotripsy and resolved by day three with conservative therapy (Clavien-Dindo grade 2). The efficiency of contact lithotripsy was 100%, with no complications observed in this group; bilirubin levels normalized on day 3-4. Conclusion: Our study showed the efficacy and safety of electrohydraulic and laser lithotripsy under SpyGlass control in a well-defined group of patients with large bile duct stones.

Keywords: contact lithotripsy, choledocholithiasis, SpyGlass, cholangioscopy, laser, electrohydraulic system, ERCP

Procedia PDF Downloads 67
10628 Investigation of the EEG Signal Parameters during Epileptic Seizure Phases in Consequence to the Application of External Healing Therapy on Subjects

Authors: Karan Sharma, Ajay Kumar

Abstract:

An epileptic seizure is a condition in which electrical charge in the brain flows abruptly, resulting in abnormal activity by the subject. One percent of the world's population experiences epileptic seizure attacks. Due to the abrupt flow of charge, the EEG (electroencephalogram) waveform changes, and many spikes and sharp waves appear in the EEG signals. Detecting epileptic seizures by conventional methods is time-consuming, and many methods have evolved to detect them automatically. The initial part of this paper reviews techniques used to detect epileptic seizures automatically. Automatic detection is based on feature extraction and classification patterns; for better accuracy, decomposition of the signal is required before feature extraction. Researchers calculate a number of parameters using different techniques, e.g., approximate entropy, sample entropy, fuzzy approximate entropy, intrinsic mode functions and cross-correlation, to discriminate between a normal signal and an epileptic seizure signal. The main objective of this review is to present the variations in the EEG signals at both stages, (i) interictal (recorded between epileptic seizure attacks) and (ii) ictal (recorded during an epileptic seizure), using the most appropriate methods of analysis, to support better healthcare diagnosis. The paper then investigates the effects of a noninvasive healing therapy on subjects by studying their EEG signals using the latest signal processing techniques. The study was conducted with Reiki as the healing technique, considered beneficial for restoring balance in cases of body-mind alterations associated with an epileptic seizure. Reiki is practiced around the world and is recommended by different health services as a treatment approach. It is an energy medicine, specifically a biofield therapy developed in Japan in the early 20th century, involving the laying on of hands to stimulate the body's natural energetic system. Earlier studies have shown an apparent connection between Reiki and the autonomic nervous system. The Reiki sessions are applied by an experienced therapist, and EEG signals are measured at baseline, during the session and post-intervention, with the aim of effective epileptic seizure control or its elimination altogether.

Keywords: EEG signal, Reiki, time consuming, epileptic seizure

Procedia PDF Downloads 393
10627 Value of Willingness to Pay for a Quality-Adjusted Life Year Gained in Iran: A Modified Chained Approach

Authors: Seyedeh-Fariba Jahanbin, Hasan Yusefzadeh, Bahram Nabilou, Cyrus Alinia

Abstract:

Background: Due to the lack of a fixed willingness-to-pay value per additional Quality-Adjusted Life Year (QALY) gained based on the preferences of Iran's general public, the cost-effectiveness of health system interventions is unclear, making it challenging to apply economic evaluation to priority setting for health resources. Methods: We measured this cost-effectiveness threshold with the participation of 2854 individuals from five provinces, each representing an income quintile, using a modified Time Trade-Off-based Chained Approach. In this online empirical survey, to elicit health utility values, participants were randomly assigned to one of two health scenarios, green (21121) and yellow (22222), designed from the previously validated EQ-5D-3L questionnaire. Results: Across the two health state versions, mean values for one QALY gained (rounded) ranged from $6740-$7400 and $6480-$7120, respectively, for the aggregate and trimmed models, equivalent to 1.18-1.35 times GDP per capita. Log-linear multivariate OLS regression analysis confirmed that respondents were more likely to pay if their income, disutility, and education level were higher than those of their counterparts. Conclusions: In the health system of Iran, any intervention with an incremental cost-effectiveness ratio equal to or less than 7402.12 USD will be considered cost-effective.
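The conclusion amounts to a one-line decision rule over incremental cost-effectiveness ratios. The sketch below states it in code; the example ICER values in the test of the rule are invented.

```python
THRESHOLD_USD = 7402.12  # WTP-per-QALY threshold reported by the study

def is_cost_effective(icer_usd_per_qaly):
    """An intervention is judged cost-effective when its incremental
    cost-effectiveness ratio is at or below the elicited threshold."""
    return icer_usd_per_qaly <= THRESHOLD_USD
```

In practice the ICER itself is the cost difference between two interventions divided by their QALY difference, and interventions on or below the threshold line are prioritized.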

Keywords: willingness to pay, QALY, chained approach, cost-effectiveness threshold, Iran

Procedia PDF Downloads 71
10626 Determining Water Quantity from Sprayer Nozzle Using Particle Image Velocimetry (PIV) and Image Processing Techniques

Authors: M. Nadeem, Y. K. Chang, C. Diallo, U. Venkatadri, P. Havard, T. Nguyen-Quang

Abstract:

Uniform distribution of agro-chemicals is highly important because significant losses, for example of pesticide, occur during spraying due to non-uniform droplets and off-target drift. Improving the efficiency of the spray pattern for different cropping systems would reduce energy use and costs and minimize environmental pollution. In this paper, we examine water jet patterns in order to study the performance and uniformity of water distribution during the spraying process, and we present a method to quantify the water amount from a sprayer jet using a Particle Image Velocimetry (PIV) system. The results of the study will be used to optimize sprayer and nozzle design for chemical application. For this study, ten sets of images were acquired using the following PIV system settings: double-frame mode, a trigger rate of 4 Hz, and 500 µs between pulsed signals. Each set contained a different number of double-framed images (10, 20, 30, 40, 50, 60, 70, 80, 90 and 100) at eight different pressures: 25, 50, 75, 100, 125, 150, 175 and 200 kPa. The PIV images obtained were analysed using custom-made image processing software for droplet and volume calculations. The results showed good agreement between manual and PIV measurements and suggested that the PIV technique, coupled with image processing, can be used for precise quantification of flow through nozzles. The results also revealed that measuring fluid flow through PIV is reliable and accurate for sprayer patterns.
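The volume-calculation step, converting droplet sizes detected in the PIV images into a liquid quantity, reduces to summing spherical-droplet volumes. The sketch below illustrates only that step (the custom software itself is not described in detail in the abstract, and the spherical-droplet assumption and the diameter list in the usage line are ours):

```python
import math

def spray_volume_ul(droplet_diameters_um):
    """Total liquid volume (microlitres) from droplet diameters (micrometres)
    measured in the images, assuming spherical droplets: V = pi*d^3/6 each."""
    total_um3 = sum(math.pi * d ** 3 / 6.0 for d in droplet_diameters_um)
    return total_um3 * 1e-9  # 1 uL = 1e9 um^3

# Invented example: three droplets sized 80, 100 and 120 um.
v = spray_volume_ul([80.0, 100.0, 120.0])
```

Summing per image pair and scaling by the frame rate would give a flow estimate comparable against the manual measurements mentioned above.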

Keywords: image processing, PIV, quantifying the water volume from nozzle, spraying pattern

Procedia PDF Downloads 220
10625 Isolation of a Bacterial Community with High Removal Efficiencies of the Insecticide Bendiocarb

Authors: Eusebio A. Jiménez-Arévalo, Deifilia Ahuatzi-Chacón, Juvencio Galíndez-Mayer, Cleotilde Juárez-Ramírez, Nora Ruiz-Ordaz

Abstract:

Bendiocarb is a known toxic xenobiotic that presents acute and chronic risks to freshwater invertebrates and estuarine and marine biota; thus, the treatment of water contaminated with this insecticide is of concern. In this paper, a bacterial community with the capacity to grow on bendiocarb as its sole carbon and nitrogen source was isolated by enrichment techniques in batch culture from samples of a composting plant located in the northeast of Mexico City. Eight cultivable bacteria were isolated from the microbial community and identified by PCR amplification of 16S rDNA: Pseudoxanthomonas spadix (NC_016147.2, 98%), Ochrobactrum anthropi (NC_009668.1, 97%), Staphylococcus capitis (NZ_CP007601.1, 99%), Bosea thiooxidans (NZ_LMAR01000067.1, 99%), Pseudomonas denitrificans (NC_020829.1, 99%), Agromyces sp. (NZ_LMKQ01000001.1, 98%), Bacillus thuringiensis (NC_022873.1, 97%), and Pseudomonas alkylphenolia (NZ_CP009048.1, 98%); NCBI accession numbers and percentage similarity are indicated in parentheses. These bacteria were regarded as the isolated species for having the best similarity matches. The ability of the immobilized bacterial community to degrade bendiocarb was evaluated in a packed-bed biofilm reactor using volcanic stone fragments (tezontle) as support. The reactor system was operated in batch mode using mineral salts medium and 30 mg/L of bendiocarb as the carbon and nitrogen source. With this system, an overall removal efficiency (ηbend) of around 90% was reached.

Keywords: bendiocarb, biodegradation, biofilm reactor, carbamate insecticide

Procedia PDF Downloads 255
10624 Wireless Information Transfer Management and Case Study of a Fire Alarm System in a Residential Building

Authors: Mohsen Azarmjoo, Mehdi Mehdizadeh Koupaei, Maryam Mehdizadeh Koupaei, Asghar Mahdlouei Azar

Abstract:

The increasing prevalence of wireless networks in our daily lives has made them indispensable. The aim of this research is to investigate the management of information transfer in wireless networks and the integration of renewable solar energy resources in a residential building. The focus is on the transmission of electricity and information through wireless networks, as well as the utilization of sensors and wireless fire alarm systems. The research employs a descriptive approach to examine the transmission of electricity and information on a wireless network with electric and optical telephone lines, and investigates the transmission of signals from sensors and wireless fire alarm systems via radio waves. The methodology includes a detailed analysis of security, comfort conditions, and the costs of using wireless networks and renewable solar energy resources. The study finds that it is feasible to transmit electricity over a network cable using two pairs of network cables without the need for separate power cabling. Additionally, integrating renewable solar energy systems in residential buildings can reduce dependence on traditional energy carriers, and the use of sensors and wireless remote information processing can enhance the safety and efficiency of energy usage in buildings and the surrounding spaces.

Keywords: renewable energy, intelligentization, wireless sensors, fire alarm system

Procedia PDF Downloads 39
10623 MIM and Experimental Studies of the Thermal Drift in an Ultra-High Precision Instrument for Dimensional Metrology

Authors: Kamélia Bouderbala, Hichem Nouira, Etienne Videcoq, Manuel Girault, Daniel Petit

Abstract:

Thermal drifts caused by the power dissipated by the mechanical guiding systems are the main limit to enhancing the accuracy of an ultra-high precision cylindricity measuring machine. For this reason, a high-precision compact prototype was designed to simulate the behaviour of the instrument. It ensures in situ calibration of four capacitive displacement probes by comparison with four laser interferometers. The set-up includes three heating wires simulating the power dissipated by the mechanical guiding systems, four additional heating wires located between each laser interferometer head and its respective holder, 19 platinum resistance thermometers (Pt100) to observe the temperature evolution inside the set-up, and four Pt100 sensors to monitor the ambient temperature. A Reduced Model (RM) based on the Modal Identification Method (MIM) was developed and optimized by comparison with the experimental results. Thereafter, time-dependent tests were performed under several conditions to measure the temperature variation at 19 fixed positions in the system and compare it with the calculated RM results. The RM results show good agreement with experiment and reproduce the temperature variations well, revealing the value of the proposed RM for evaluating the thermal behaviour of the system.
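Reduced models identified by MIM are typically low-order state-space systems of the form ẋ = F x + B u, y = H x, with F in diagonal (modal) form and u the dissipated powers. The sketch below integrates such a model with explicit Euler; the two eigenvalues, input and output matrices, and the 0.1 W input are invented for illustration and are not the identified model from this work.

```python
# Hypothetical 2-mode reduced model in modal form: x' = F x + B u, y = H x.
F = [-0.01, -0.1]     # 1/s: modal time constants of 100 s and 10 s
B = [[0.5], [0.2]]    # input matrix (one heat source, W -> K/s scaling)
H = [[1.0, 1.0]]      # observation: temperature rise at one Pt100 location

def simulate(u, dt=1.0, steps=600):
    """Explicit-Euler integration of the diagonal modal model under a
    constant input u (dissipated power); returns the sensor temperature
    rise (K) after steps*dt seconds."""
    x = [0.0, 0.0]
    for _ in range(steps):
        x = [xi + dt * (F[i] * xi + B[i][0] * u) for i, xi in enumerate(x)]
    return sum(H[0][i] * x[i] for i in range(2))
```

Such a model is cheap enough to run in real time alongside the instrument, which is what makes RM-based drift compensation attractive.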

Keywords: modal identification method (MIM), thermal behavior and drift, dimensional metrology, measurement

Procedia PDF Downloads 382