Search results for: constant heat flux
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5149

529 Extracellular Polymeric Substances Study in an MBR System for Fouling Control

Authors: Dimitra C. Banti, Gesthimani Liona, Petros Samaras, Manasis Mitrakas

Abstract:

Municipal and industrial wastewaters are often treated biologically, by the activated sludge process (ASP). The ASP not only requires large aeration and sedimentation tanks, but also generates large quantities of excess sludge. An alternative technology is the membrane bioreactor (MBR), which replaces two stages of the conventional ASP—clarification and settlement—with a single, integrated biotreatment and clarification step. The advantages offered by the MBR over conventional treatment include reduced footprint and sludge production through maintaining a high biomass concentration in the bioreactor. Notwithstanding these advantages, the widespread application of the MBR process is constrained by membrane fouling. Fouling leads to permeate flux decline, making more frequent membrane cleaning and replacement necessary and resulting in increased operating costs. In general, membrane fouling results from the interaction between the membrane material and the components of the activated sludge liquor. The latter includes substrate components, cells, cell debris and microbial metabolites, such as Extracellular Polymeric Substances (EPS) and Soluble Microbial Products (SMPs). The challenge for effective MBR operation is to minimize the rate of Transmembrane Pressure (TMP) increase. This can be achieved in several ways, one of which is the addition of specific additives that enhance the coagulation and flocculation of the compounds responsible for fouling, hence reducing biofilm formation on the membrane surface and limiting the fouling rate. In this project, the effectiveness of a non-commercial composite coagulant as an agent for fouling control was studied in a lab-scale MBR system consisting of two aerated tanks. A flat sheet membrane module with 0.40 μm pore size was submerged into the second tank. The system was fed by 50 L/d of municipal wastewater collected from the effluent of the primary sedimentation basin.
The TMP increase rate, which is directly related to fouling growth, was monitored by a PLC system. EPS, MLSS and MLVSS measurements were performed on samples of mixed liquor; in addition, influent and effluent samples were collected for the determination of physicochemical characteristics (COD, BOD5, NO3-N, NH4-N, Total N and PO4-P). The coagulant was added at concentrations of 2, 5 and 10 mg/L over a period of 2 weeks, and the results were compared with the control system (without coagulant addition). EPS fractions were extracted by a three-stage physical-thermal treatment allowing the identification of Soluble EPS (SEPS) or SMP, Loosely Bound EPS (LBEPS) and Tightly Bound EPS (TBEPS). Protein and carbohydrate concentrations in the EPS fractions were measured by the modified Lowry method and the Dubois method, respectively. Addition of coagulant at 2 mg/L did not affect SEPS proteins in comparison with the control process, their values varying between 32 and 38 mg/g VSS. However, a coagulant dosage of 5 mg/L resulted in a slight increase of SEPS proteins to 35-40 mg/g VSS, while 10 mg/L of coagulant further increased SEPS to 44-48 mg/g VSS. Similar results were obtained for SEPS carbohydrates. Carbohydrate values without coagulant addition were similar to the corresponding values measured at 2 mg/L of coagulant; the intermediate coagulant dose resulted in a slight increase of SEPS carbohydrates to 6-7 mg/g VSS, while a dose of 10 mg/L further increased the carbohydrate content to 9-10 mg/g VSS. Total LBEPS and TBEPS, consisting of the proteins and carbohydrates of LBEPS and TBEPS respectively, presented similar variations upon addition of the coagulant. Total LBEPS at the 2 mg/L dose was almost equal to 17 mg/g VSS, and increased to 22 and 29 mg/g VSS during the addition of 5 mg/L and 10 mg/L of coagulant, respectively. Total TBEPS was almost 37 mg/g VSS at a coagulant dose of 2 mg/L and increased to 42 and 51 mg/g VSS at the 5 mg/L and 10 mg/L doses, respectively.
Therefore, it can be concluded that coagulant addition could potentially affect the activity of the microorganisms, causing them to excrete EPS in greater amounts. Nevertheless, the EPS increase, mainly the SEPS increase, resulted in a higher membrane fouling rate, as evidenced by the corresponding TMP increase rate. However, although the addition of the coagulant affected the EPS content of the reactor mixed liquor, it did not impair the filtration process: an effluent of high quality was produced, with COD values as low as 20-30 mg/L.

Keywords: extracellular polymeric substances, MBR, membrane fouling, EPS

Procedia PDF Downloads 250
528 Chronic Wrist Pain among Handstand Practitioners: A Questionnaire Study

Authors: Martonovich Noa, Maman David, Alfandari Liad, Behrbalk Eyal

Abstract:

Introduction: The human body is designed for upright standing and walking, with the lower extremities and axial skeleton supporting weight-bearing. Constant weight-bearing on joints not meant for this action can lead to various pathologies, as seen in wheelchair users. Handstand practitioners use their wrists as weight-bearing joints during activities, but little is known about wrist injuries in this population. This study aims to investigate the epidemiology of wrist pain among handstand practitioners, as no such data currently exist. Methods: The study is a cross-sectional online survey conducted among athletes who regularly practice handstands. Participants were asked to complete a three-part questionnaire regarding their workout regimen, training habits, and history of wrist pain. The inclusion criteria were athletes over 18 years old who practice handstands more than twice a month for at least 4 months. All data were collected using Google Forms, organized and anonymized using Microsoft Excel, and analyzed using IBM SPSS 26.0. Descriptive statistics were calculated, and potential risk factors were tested using asymptotic t-tests and Fisher's tests. Differences were considered significant when p < 0.05. Results: This study surveyed 402 athletes who regularly practice handstands to investigate the prevalence of chronic wrist pain and potential risk factors. The participants had a mean age of 31.3 years, with most being male and having an average of 5 years of training experience. 56% of participants reported chronic wrist pain, and 14.4% reported a history of distal radial fracture. Yoga was the most practiced form, followed by Capoeira. No significant differences were found in demographic data between participants with and without chronic wrist pain, and no significant associations were found between chronic wrist pain prevalence and warm-up routines or protective aids. 
Conclusion: The lower half of the body is meant to handle weight-bearing and impact, while transferring the load to upper extremities can lead to various pathologies. Athletes who perform handstands are particularly prone to chronic wrist pain, which affects over half of them. Warm-up sessions and protective instruments like wrist braces do not seem to prevent chronic wrist pain, and there are no significant differences in age or training volume between athletes with and without the condition. Further research is needed to understand the causes of chronic wrist pain in athletes, given the growing popularity of sports and activities that can cause this type of injury.
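The categorical comparisons described in the methods rely on Fisher's exact test. As a minimal, self-contained sketch of that test (the counts below are hypothetical, not the study's data), the two-sided p-value for a 2x2 table can be computed directly from the hypergeometric distribution:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]].

    Enumerates all tables with the same margins and sums the
    probabilities of tables no more likely than the observed one.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2
    denom = comb(n, col1)

    def p_table(x):
        # Hypergeometric probability of x successes in the first row.
        return comb(row1, x) * comb(row2, col1 - x) / denom

    p_obs = p_table(a)
    lo = max(0, col1 - row2)
    hi = min(col1, row1)
    # Sum over all tables at most as probable as the observed one.
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-12))

# Hypothetical counts: pain vs. no pain among brace users and non-users.
p = fisher_exact_two_sided(30, 20, 25, 25)  # p-value for this table
```

A result with p < 0.05 would be declared significant under the study's criterion.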

Keywords: handstand, handbalance, wrist pain, hand and wrist surgery, yoga, calisthenics, circus, capoeira, movement

Procedia PDF Downloads 71
527 A Concept in Addressing the Singularity of the Emerging Universe

Authors: Mahmoud Reza Hosseini

Abstract:

The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. Moreover, by extrapolating back from its current state, the universe at its early times has been studied, giving rise to what is known as the big bang theory. According to this theory, moments after creation the universe was an extremely hot and dense environment. However, its rapid expansion led to a reduction in its temperature and density. This is evidenced by the cosmic microwave background and the large-scale structure of the universe. However, extrapolating back further from this early state reaches a singularity which cannot be explained by modern physics, and there the big bang theory is no longer valid. In addition, one would expect a nonuniform energy distribution across the universe after a sudden expansion. However, highly accurate measurements reveal a uniform temperature mapping across the universe, which contradicts the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy, so that an equal maximum temperature could be achieved across the early universe. Also, the evidence of quantum fluctuations from this stage provides a means for studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the moment of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed.
This research series aims at addressing the singularity issue by introducing an energy conversion mechanism. This is accomplished by establishing a state of energy called a “neutral state”, with an energy level referred to as “base energy”, capable of converting into other states. Although it follows the same principles, the unique quantum state of the base energy allows it to be distinguishable from other states and to have a uniform distribution at the ground level. Although the concept of base energy can be utilized to address the singularity issue, to establish a complete picture the origin of the base energy should also be identified. This matter is the subject of the first study in the series, “A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing”, where it is discussed in detail. Therefore, the proposed concept in this research series provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of base energy being one of the main building blocks of this universe.

Keywords: big bang, cosmic inflation, birth of universe, energy creation

Procedia PDF Downloads 70
526 Two-Dimensional Hardy-Type Inequalities on Time Scales via Steklov Operator

Authors: Wedad Albalawi

Abstract:

Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics, as well as in various areas of science and engineering. The 1934 monograph by Hardy, Littlewood and Polya was the first significant compilation of such results across several fields; it presented fundamental ideas, results and techniques, and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated via operators: in 1989, weighted Hardy inequalities were obtained for integration operators. Subsequently, weighted estimates were obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. These were improved upon in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy-Steklov operator. Recently, many integral inequalities have been improved by means of differential operators. The Hardy inequality has been one of the tools used to study the integrability of solutions of differential equations. Dynamic inequalities of Hardy and Copson type have then been extended and improved by various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some results have appeared involving Copson and Hardy inequalities on time scales, yielding new special versions of them. A time scale is defined as a closed subset of the real numbers. Inequalities in the time-scale setting have since received a lot of attention and have become a major field in both pure and applied mathematics.
There are many applications of dynamic equations on time scales to quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on double integrals to obtain new time-scale inequalities of Copson type driven by the Steklov operator. They will be applied in the solution of the Cauchy problem for the wave equation. The proofs are carried out by introducing restrictions on the operator in several cases. In addition, the obtained inequalities are established using concepts from the time-scale setting, such as time-scale calculus, Fubini's theorem and Hölder's inequality.
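For orientation, the classical continuous Hardy inequality, which the dynamic (time-scale) versions generalize, reads, for p > 1 and a nonnegative measurable f:

```latex
\int_0^{\infty} \left( \frac{1}{x} \int_0^{x} f(t)\,\mathrm{d}t \right)^{p} \mathrm{d}x
\;\le\; \left( \frac{p}{p-1} \right)^{p} \int_0^{\infty} f(x)^{p}\,\mathrm{d}x ,
\qquad p > 1, \; f \ge 0,
```

where the constant (p/(p-1))^p is sharp. The time-scale versions replace the integrals by delta-integrals over a time scale, recovering the continuous and discrete Hardy inequalities as special cases.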

Keywords: time scales, inequality of Hardy, inequality of Copson, Steklov operator

Procedia PDF Downloads 59
525 Recommendations for Environmental Impact Assessment of Geothermal Projects on Mature Oil Fields

Authors: Daria Karasalihovic Sedlar, Lucija Jukic, Ivan Smajla, Marija Macenic

Abstract:

This paper analyses possible geothermal energy production from a mature oil reservoir, based on the exploitation of underlying aquifer thermal energy for the purpose of heating public buildings. Research was conducted on the case study of the energy demand of the City of Ivanic-Grad public buildings and the Ivanic oil field situated in the same area. Since the City of Ivanic-Grad is one of the few cities in the EU where hydrocarbon exploitation has been taking place for decades almost entirely in an urban area, decommissioning of oil wells is inevitable; therefore, the research goal was to investigate how to extend the lifetime of the reservoir by exploiting geothermal brine beneath the oil reservoir in an environmentally friendly manner. This kind of project is extremely complex in all segments, from documentation preparation and implementation of technological solutions to providing ecological measures for environmentally acceptable geothermal energy production and utilization. New mining activities that will be needed for the development of the geothermal project at the observed Hydrocarbon Exploitation Field Ivanic will be carried out in order to prepare wells for increasing geothermal brine production. These operations involve the conversion of existing wells (well completion for the conversion of observation wells to production wells) along with workover activities, installation of new heat exchangers, and pipelines. Since the wells are located in the urban, densely populated area of the City of Ivanic-Grad, the inhabitants will be exposed to various environmental impacts during the preparation phase of the project. For the purpose of performing workovers, it will be necessary to secure access to the wellheads of the existing wells.
This paper gives guidelines for describing the potential impacts on environmental components that could occur during the preparation of geothermal production on an existing mature oil field, recommends possible protection measures to mitigate these impacts, and gives recommendations for environmental monitoring.

Keywords: geothermal energy production, mature oil field, environmental impact assessment, underlying aquifer thermal energy

Procedia PDF Downloads 131
524 Modeling and Simulation of Multiphase Evaporation in High Torque Low Speed Diesel Engine

Authors: Ali Raza, Rizwan Latif, Syed Adnan Qasim, Imran Shafi

Abstract:

Diesel engines are valued for their efficiency, reliability, and adaptability. Most research and development to date has been directed towards high speed diesel engines for commercial use. In these engines, the objective is to optimize maximum acceleration while reducing exhaust emissions to meet international standards. In high torque low speed engines, the requirements are altogether different. These types of engines are mostly used in the maritime industry, the agriculture industry, static engines, compressor engines, etc. High torque low speed engines, by contrast, are neglected quite often and are known for low efficiency and high soot emissions. One of the most effective ways to overcome these issues is efficient combustion in the engine cylinder. Fuel spray dynamics plays a vital role in defining mixture formation, fuel consumption, combustion efficiency and soot emissions. Therefore, a comprehensive understanding of the fuel spray characteristics and atomization process in high torque low speed diesel engines is of great importance. Evaporation in the combustion chamber has a strong effect on the efficiency of the engine. In this paper, multiphase evaporation of fuel is modeled for a high torque low speed engine using CFD (computational fluid dynamics) codes. Two distinct phases of evaporation are modeled using modeling software. The basic model equations are derived from the energy conservation equation and the Navier-Stokes equation. The O'Rourke model is used to model the evaporation phases. The results obtained showed a considerable effect on the efficiency of the engine. The evaporation rate of a fuel droplet increases with an increase in vapor pressure. An appreciable reduction in droplet size is achieved by adding convective heat effects in the combustion chamber. By and large, an overall increase in efficiency is observed by modeling the distinct evaporation phases.
This increase in efficiency is due to the fact that droplet size is reduced and vapor pressure is increased in the engine cylinder.
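The droplet-size trend described above can be illustrated with the classical d-squared law of droplet evaporation; this is a textbook simplification, not the O'Rourke spray model used in the paper, and the evaporation-rate constant below is a hypothetical value:

```python
# Classical d-squared law for single-droplet evaporation:
#   d(t)^2 = d0^2 - K * t,
# where K (m^2/s) is the evaporation-rate constant, which grows
# with vapor pressure and convective heat transfer.

def droplet_diameter(d0, K, t):
    """Droplet diameter (m) at time t (s) under the d-squared law."""
    d_sq = d0**2 - K * t
    return max(d_sq, 0.0) ** 0.5

def droplet_lifetime(d0, K):
    """Time (s) for the droplet to evaporate completely."""
    return d0**2 / K

d0 = 50e-6   # initial diameter: 50 micrometres
K = 2.5e-7   # hypothetical evaporation-rate constant, m^2/s
t_life = droplet_lifetime(d0, K)  # approximately 0.01 s
```

A larger K (higher vapor pressure, stronger convection) shortens the droplet lifetime, consistent with the efficiency trend reported above.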

Keywords: diesel fuel, CFD, evaporation, multiphase

Procedia PDF Downloads 321
523 The Effects of Cooling during Baseball Games on Perceived Exertion and Core Temperature

Authors: Chih-Yang Liao

Abstract:

Baseball is usually played outdoors in the warmest months of the year. Therefore, baseball players are susceptible to the influence of a hot environment. It has been shown that in Major League Baseball, hitting performance is higher in games played in warm weather compared to cold weather. Intermittent cooling during sporting events can reduce the risk of hyperthermia and increase endurance performance. However, the effects of cooling during baseball games played in a hot environment are unclear. This study adopted a cross-over design. Ten Division I collegiate male baseball players in Taiwan volunteered to participate in this study. Each player played two simulated baseball games, with one day in between. Five of the players received intermittent cooling during the first simulated game, while the other five players received intermittent cooling during the second simulated game. The participants' neck and forehead regions were covered for 6 min, 3 to 4 times during the games, with towels that had been soaked in icy salt water. The participants received the cooling treatment in the dugout when they were not on the field for defense or hitting. During the two simulated games, the temperature was 31.1-34.1°C and the humidity was 58.2-61.8%, with no difference between the two games. Ratings of perceived exertion, thermal sensation, and tympanic and forehead skin temperatures were recorded immediately after each defensive half-inning and after the cooling treatments. Ratings of perceived exertion were measured using the Borg 10-point scale. Thermal sensation was measured with a 6-point scale. The tympanic and skin temperatures were measured with infrared thermometers. The data were analyzed with a two-way analysis of variance with repeated measures. The results showed that intermittent cooling significantly reduced ratings of perceived exertion and thermal sensation. Forehead skin temperature was also significantly decreased after the cooling treatments.
However, the tympanic temperature was not significantly different between the two trials. In conclusion, intermittent cooling in the neck and forehead regions was effective in alleviating the perceived exertion and heat sensation. However, this cooling intervention did not affect the core temperature. Whether intermittent cooling has any impact on hitting or pitching performance in baseball players warrants further investigation.

Keywords: baseball, cooling, ratings of perceived exertion, thermal sensation

Procedia PDF Downloads 133
522 Measuring the Level of Knowledge of Construction Contract Procedures: A Case Study of Botswana

Authors: Babulayi B. Wilson

Abstract:

Unsatisfactory performance of construction projects in both industrialised and developing countries indicates that there could be several defects across the phases of construction projects. Notwithstanding the fact that some project defects are often conceived at the initiation phase of construction projects, insufficient knowledge of contract procedures has been identified as one of the major sources of construction disputes. Contract procedures are a set of rules that outline the primary obligations and liabilities of the parties involved in the implementation of a construction project. Engineering professional bodies often codify contract procedures into standard forms of contract, such as those of the Institution of Civil Engineers (ICE, UK) and the Association of Consulting Engineers (ACE, UK), and keep them under constant review by updating any clause to reflect changes in case law or relevant legislation. Even so, it is the responsibility of a professional body or conditions-of-contract draftsperson to introduce contract-specific clauses that may be necessary for business efficacy but are not covered in the chosen standard conditions of contract. In Botswana, the use of client-drafted forms of contract, or of international forms of contract not adapted to the environment of use, in conjunction with client-drafted pricing schedules, is common. The latter often impacts negatively upon contractors' claims and payments, in that tender rates and prices can only be deemed sufficient if the chosen conditions of contract complement the pricing schedule (use of standardised procurement documents). In addition, client-drafted and borrowed forms of contract such as FIDIC often conflict with the law of the domicile, resulting in costly disputes on the part of the client.
Against this background, the objective of the research was to measure the level of knowledge of contract procedures amongst key stakeholders in the Botswana construction industry by asking a representative sample from industry and academia to respond to tutorial questions prepared from two commonly used forms of contract for civil works, namely FIDIC (international form of contract) and ICE (UK). The questions were prepared under the following captions: (a) preparation of tender documents; (b) obligations of the parties; (c) contract administration; and (d) claims, variations, and valuation of variations. After ascertaining that the level of knowledge of contract procedures is insufficient among most practitioners in the Botswana construction industry, major procurement entities, and engineering institutions of learning, a guide to drafting the conditions of a construction contract was developed and then validated through seminars and workshops. At present, the effectiveness of the guide has not yet been measured, but feedback from the seminars and workshops conducted indicates an appreciation of the guide by the majority of major construction industry stakeholders.

Keywords: contract procedures, conditions of contract, professional practice, construction law, forms of contract

Procedia PDF Downloads 171
521 Investigation of Elastic Properties of 3D Full Five Directional (f5d) Braided Composite Materials

Authors: Apeng Dong, Shu Li, Wenguo Zhu, Ming Qi, Qiuyi Xu

Abstract:

The primary objective of this paper is the elastic properties of three-dimensional full five directional (3Df5d) braided composites. A large body of research has focused on the 3D four directional (4d) and 3D five directional (5d) structures, but not much on the 3Df5d material. Generally, the influence of the yarn shape on the mechanical properties of braided materials tends to be ignored, which makes results too idealized. Besides, with the improvement of computational capability, people have become accustomed to using computers to predict the material parameters, which fails to give an explicit and concise result facilitating production and application. Based on traditional mechanics, this paper first deduces the functional relation between the elastic properties and the braiding parameters. In addition, considering the actual shape of the yarns after consolidation, the longitudinal modulus is modified and defined practically. First, the analytic model is established under certain assumptions. For the sake of clarity, this paper assumes that: (a) the cross section of axial yarns is square; (b) the cross section of braiding yarns is hexagonal; (c) the material characteristics of braiding yarns and axial yarns are the same; (d) the angle between the structure boundary and the projection of the braiding yarns in the transverse plane is 45°; (e) the filling factor ε of the composite yarns is π/4; (f) the deformation of the unit cell is under a constant strain condition. Then, the functional relation between the material constants and the braiding parameters is systematically deduced from the yarn deformation mode. Finally, considering the actual shape of the axial yarns after consolidation, the concept of a technology factor is proposed and the longitudinal modulus of the material is modified based on energy theory. In this paper, the analytic solution for the material parameters is given for the first time, which provides a good reference for further research on and application of 3Df5d materials.
Although the analytic model is established under certain assumptions, the analysis method is also applicable to other braided structures. Meanwhile, the cross-section shape and straightness of the axial yarns play dominant roles in the longitudinal elastic property. Therefore, in the braiding and consolidation process, the stability of the axial yarns should be guaranteed, so as to increase the technology factor and reduce the dispersion of the material parameters. Overall, the elastic properties of these materials are closely related to the braiding parameters and are highly designable; although the longitudinal modulus of the material is greatly influenced by the technology factor, it can be determined to a certain extent.
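The role of the technology factor in the longitudinal modulus can be sketched with a rule-of-mixtures expression; the symbols below are illustrative assumptions, not the paper's exact derivation:

```latex
E_{L} \;=\; \kappa \left( V_{a} E_{a} \;+\; V_{b} E_{b} \cos^{4}\theta \;+\; V_{m} E_{m} \right),
\qquad 0 < \kappa \le 1 ,
```

where V_a, V_b, V_m denote the volume fractions of axial yarns, braiding yarns and matrix, E_a, E_b, E_m their moduli, θ the interior braiding angle, and κ the technology factor accounting for the actual cross-section shape and straightness of the axial yarns after consolidation. A κ close to 1 corresponds to straight, well-consolidated axial yarns and hence a higher effective longitudinal modulus.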

Keywords: analytic solution, braided composites, elasticity properties, technology factor

Procedia PDF Downloads 220
520 Building up of European Administrative Space at Central and Local Level as a Key Challenge for the Kosovo's Further State Building Process

Authors: Arlinda Memetaj

Abstract:

Building up a well-functioning administrative justice system is one of the key prerequisites for ensuring the existence of an accountable and efficient public administration in Kosovo. To this aim, the country has already established an almost comprehensive legislative and institutional framework, deriving (among others) from Kosovo's Stabilisation and Association Agreement with the EU of 2016. A series of efforts is still being undertaken by all relevant domestic and international stakeholders active in both Kosovo's public administration reform and the country's system of local self-government; both systems are thus in a constant state of reform. Despite the aforesaid, a series of shortcomings remains in this context. There is a large backlog of administrative cases in the Prishtina Administrative Court; the judiciary suffers from a lack of public confidence; the public administration is organized in a fragmented way; the administrative laws are still not properly implemented at the local level; and the municipalities' legislative and executive branches are not sufficiently transparent for ordinary citizens. Against this short background, the full paper first outlines the legislative and institutional framework of Kosovo's systems of administrative justice and local self-government (on the basis that public administration and local government are not separate fields). It then illustrates the key specific shortcomings in those fields, as seen from the perspective of the citizens' right to good administration. It finally claims that the current status quo in the country may be resolved (among other ways) by granting Kosovo the status of a full member state of the Council of Europe, or at least a temporary status as a contracting party to (among others) the European Convention on Human Rights.
The latter would enable all Kosovo citizens, regardless of their ethnic or other origin, whose human rights are violated by Kosovo's relevant administrative authorities (including the administrative courts) to bring their cases before the well-known Strasbourg-based European Court of Human Rights. This would consequently put the state under a permanent and full monitoring process, with a view to obliging the country to properly implement the decisions adopted by that court in those cases. This would benefit first of all Kosovo's ordinary citizens, regardless of their ethnic or other background. It would also provide a particular positive input into the ongoing efforts being undertaken by the states of Kosovo and Serbia within the EU-facilitated Dialogue, with a view to building up an integral administrative justice system at central and local level across the whole territory of Kosovo. The main methods used in this paper are the descriptive, analytical and comparative ones.

Keywords: administrative courts, administrative justice, administrative procedure, benefit, European Human Rights Court, human rights, monitoring, reform

Procedia PDF Downloads 287
519 Two-Level Separation of High Air Conditioner Consumers and Demand Response Potential Estimation Based on Set Point Change

Authors: Mehdi Naserian, Mohammad Jooshaki, Mahmud Fotuhi-Firuzabad, Mohammad Hossein Mohammadi Sanjani, Ashknaz Oraee

Abstract:

In recent years, the development of communication infrastructure and smart meters has facilitated the utilization of demand-side resources, which can enhance the stability and economic efficiency of power systems. Direct load control programs can play an important role in the utilization of demand-side resources in the residential sector. However, the investments required for installing control equipment can be a limiting factor in the development of such demand response programs. Thus, the selection of consumers with higher potential is crucial to the success of a direct load control program. Heating, ventilation, and air conditioning (HVAC) systems, which feature relatively high flexibility due to the heat capacity of buildings, make up a major part of household consumption. Considering that the consumption of HVAC systems depends highly on the ambient temperature, and bearing in mind the high investments required for the control systems enabling direct load control demand response programs, this paper presents a solution to uncover consumers with high air conditioner demand among a large number of consumers and to measure the demand response potential of such consumers. This can pave the way for estimating the investments needed for the implementation of direct load control programs for residential HVAC systems and for estimating the demand response potential in a distribution system. In doing so, we first cluster consumers into several groups based on the correlation coefficients between hourly consumption data and hourly temperature data, using the K-means algorithm. Then, by applying a recently proposed algorithm to the hourly consumption and temperature data, consumers with high air conditioner consumption are identified. Finally, the demand response potential of such consumers is estimated based on equivalent changes to the desired temperature setpoint.
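The screening step described above (correlating each consumer's hourly load with temperature, then clustering the resulting coefficients) can be sketched as follows; the data and the plain one-dimensional k-means below are illustrative assumptions, not the paper's exact algorithm:

```python
import random
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (pstdev(xs) * pstdev(ys))

def kmeans_1d(values, k=2, iters=50, seed=1):
    """Plain 1-D k-means, splitting consumers into low/high-correlation groups."""
    rng = random.Random(seed)
    centers = sorted(rng.sample(values, k))
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # Assign each value to its nearest center.
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        # Recompute centers as cluster means.
        centers = [mean(c) if c else centers[i] for i, c in enumerate(clusters)]
    return centers, clusters

# Hypothetical hourly series for one consumer: temperature (°C) and load (kWh).
temp = [20, 22, 25, 28, 31, 33, 30, 26]
load = [1.0, 1.1, 1.4, 1.9, 2.6, 2.9, 2.4, 1.6]
r = pearson(temp, load)  # close to 1 suggests AC-driven consumption

# Hypothetical per-consumer correlation coefficients, then a two-way split.
corrs = [0.91, 0.88, 0.86, 0.22, 0.18, 0.25]
centers, clusters = kmeans_1d(corrs, k=2)
```

Consumers falling in the high-correlation cluster would then be the candidates for the subsequent identification and setpoint-based potential estimation steps.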

Keywords: communication infrastructure, smart meters, power systems, HVAC system, residential HVAC systems

Procedia PDF Downloads 43
518 Pressure-Robust Approximation for the Rotational Fluid Flow Problems

Authors: Medine Demir, Volker John

Abstract:

Fluid equations in a rotating frame of reference have a broad range of important applications in meteorology and oceanography, especially in the large-scale flows of the ocean and atmosphere, as well as in many physical and industrial settings. The Coriolis and centripetal forces, resulting from the rotation of the earth, play a crucial role in such systems. For such applications it may be required to solve the system in complex three-dimensional geometries. In recent years, the Navier-Stokes equations in a rotating frame have been investigated in a number of papers using classical inf-sup stable mixed methods, like Taylor-Hood pairs, to contribute to the analysis and the accurate and efficient numerical simulation. Numerical analysis reveals that these classical methods introduce a pressure-dependent contribution in the velocity error bounds that is proportional to some inverse power of the viscosity. Hence, these methods are optimally convergent, but small velocity errors might not be achieved for complicated pressures and small viscosity coefficients. Several approaches have been proposed for improving the pressure-robustness of pairs of finite element spaces. In this contribution, a pressure-robust space discretization of the incompressible Navier-Stokes equations in a rotating frame of reference is considered. The discretization employs divergence-free, $H^1$-conforming mixed finite element methods such as Scott-Vogelius pairs. This approach might, however, require a modification of the meshes, like the use of barycentric-refined grids in the case of Scott-Vogelius pairs. Such a strategy requires the finite element code to have control over the mesh generator, which is not realistic in many engineering applications and might also be in conflict with the solver for the linear system.
An error estimate for the velocity is derived that tracks the dependency of the error bound on the coefficients of the problem, in particular on the angular velocity. Numerical examples illustrate the theoretical results. The idea of pressure-robust methods could be carried over to other types of flow problems, which will be considered in future studies. As another future research direction, to avoid a modification of the mesh, one may use a very simple parameter-dependent modification of the Scott-Vogelius element, the pressure-wired Stokes element, such that the inf-sup constant is independent of nearly-singular vertices.
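The pressure-dependent pollution term mentioned above can be made concrete. For the steady Stokes problem the standard velocity error bounds read as follows (a textbook-style sketch of the two cases, not estimates quoted from this work):

```latex
% Classical inf-sup stable pair (e.g. Taylor--Hood): the pressure enters
% the velocity error bound, scaled by the inverse viscosity.
\|\nabla(u - u_h)\|_{0}
  \le C \left( \inf_{v_h \in V_h} \|\nabla(u - v_h)\|_{0}
  + \frac{1}{\nu} \inf_{q_h \in Q_h} \|p - q_h\|_{0} \right)

% Divergence-free, pressure-robust pair (e.g. Scott--Vogelius):
% the pressure contribution vanishes from the velocity bound.
\|\nabla(u - u_h)\|_{0}
  \le C \inf_{v_h \in V_h} \|\nabla(u - v_h)\|_{0}
```

The second bound is why complicated pressures combined with small viscosity coefficients do not degrade the velocity approximation for divergence-free pairs.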

Keywords: navier-stokes equations in a rotating frame of reference, coriolis force, pressure-robust error estimate, scott-vogelius pairs of finite element spaces

Procedia PDF Downloads 41
517 Analyzing Transit Network Design versus Urban Dispersion

Authors: Hugo Badia

Abstract:

This research addresses which transit network structure is most suitable to serve specific demand requirements in an increasing urban dispersion process. Two main approaches to network design are found in the literature. On the one hand, a traditional answer, widespread in our cities, develops a high number of lines to connect most origin-destination pairs by direct trips; this approach is based on the idea that users are averse to transfers. On the other hand, some authors advocate an alternative design characterized by simple networks in which transfers are essential to complete most trips. To determine which of them is the better option, we use a two-step methodology. First, by means of an analytical model, three basic network structures are compared: a radial scheme, the starting point for the other two structures; a direct trip-based network; and a transfer-based one, the latter two representing the alternative transit network designs. The model optimizes the network configuration with regard to the total cost for each structure. For a given scenario of dispersion, the best alternative is the structure with the minimum cost. This dispersion degree is defined in a simple way by considering that only a central area attracts all trips. If this area is small, the mobility pattern is highly concentrated; if this area is very large, the city is highly decentralized. In this first step, we can determine the area of applicability for each structure as a function of the urban dispersion degree. The analytical results show that a radial structure is suitable when demand is highly centralized; however, when this demand starts to scatter, new transit lines should be implemented to avoid transfers. If urban dispersion advances further, introducing more lines is no longer a good alternative; in this case, the best solution is a change of structure, from direct trips to a network based on transfers.
The area of applicability of each network strategy is not constant; it depends on the characteristics of demand, the city, and the transport technology. In the second step, we translate the analytical results to a real case study through the relationship between the dispersion parameters of the model and direct measures of dispersion in a real city. Two dimensions of the urban sprawl process are considered: concentration, defined by the Gini coefficient, and centralization, measured by an area-based centralization index. Once the real dispersion degree is estimated, we are able to identify the area of applicability in which the city is located. In summary, from a strategic point of view, this methodology yields the best network design approach for a city by comparing the theoretical results with the real dispersion degree.
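The Gini coefficient used as the concentration measure can be computed directly from the Lorenz curve; a minimal sketch, with invented per-zone trip attractions standing in for real city data:

```python
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative distribution (0 = uniform, -> 1 = concentrated)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    # Standard closed form derived from the area under the Lorenz curve.
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# Hypothetical trip attractions per zone: a concentrated vs a dispersed city.
concentrated = np.array([100, 5, 5, 5, 5])
dispersed = np.array([24, 24, 24, 24, 24])
print(round(gini(concentrated), 3), round(gini(dispersed), 3))
```

A high value places the city in the radial/direct-trip region of applicability, while a value near zero corresponds to the dispersed, transfer-based region.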

Keywords: analytical network design model, network structure, public transport, urban dispersion

Procedia PDF Downloads 217
516 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements

Authors: Alexander Buhr, Klaus Ehrenfried

Abstract:

Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving model experiments is lower than at full scale. In particular, the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up into characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to that of a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving model rig. Small rectangular roughness shapes are used to achieve a sufficient effect on the boundary layer, while the elements are robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, High-Speed Particle Image Velocimetry (HS-PIV) measurements on an ICE3 train model have been carried out in the moving model rig of the DLR in Göttingen, the so-called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground. The height of the plane corresponds to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness, as well as the momentum thickness and the form factor, are calculated along the train model.
Conditional sampling is used to analyse the size and dynamics of the flow structures at the time of maximum velocity in the wake behind the train. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness and momentum thickness are increased by larger roughness elements, especially when these are applied at a height close to the measuring plane. The roughness elements also cause strong fluctuations in the form factors of the boundary layer. Behind the roughness elements, the form factors rapidly approach constant values. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.
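The integral boundary layer quantities named above follow from the measured velocity profile by numerical integration; a sketch using an idealized 1/7-power-law turbulent profile (the thickness and free-stream velocity are invented values, not measurements from this study):

```python
import numpy as np

def trapezoid(f, y):
    """Trapezoidal rule, kept explicit to avoid NumPy version differences."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y)))

def integral_thicknesses(y, u, U):
    """Displacement thickness, momentum thickness and form factor of a profile u(y)."""
    r = u / U
    delta_star = trapezoid(1.0 - r, y)       # displacement thickness
    theta = trapezoid(r * (1.0 - r), y)      # momentum thickness
    return delta_star, theta, delta_star / theta

# Hypothetical 1/7-power-law turbulent profile (delta and U are invented values).
delta, U = 0.05, 70.0
y = np.linspace(0.0, delta, 4001)
u = U * (y / delta) ** (1.0 / 7.0)

d_star, theta, H = integral_thicknesses(y, u, U)
print(d_star, theta, H)
```

For this profile the analytic values are delta* = delta/8, theta = 7*delta/72 and a form factor H of about 1.29; thicker or distorted profiles behind the roughness elements shift these numbers, which is exactly what is tracked along the model.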

Keywords: boundary layer, high-speed PIV, ICE3, moving train model, roughness elements

Procedia PDF Downloads 289
515 Sweepline Algorithm for Voronoi Diagram of Polygonal Sites

Authors: Dmitry A. Koptelov, Leonid M. Mestetskiy

Abstract:

The Voronoi Diagram (VD) of a finite set of disjoint simple polygons, called sites, is a partition of the plane into loci (one locus per site) – regions consisting of points that are closer to a given site than to all others. A set of polygons is a universal model for many applications in engineering, geoinformatics, design, computer vision, and graphics. Construction of the VD of polygons is usually done by reduction to the task of constructing the VD of segments, for which there are efficient O(n log n) algorithms for n segments. The reduction also includes preprocessing – constructing segments from the polygons' sides – and postprocessing – constructing each polygon's locus by merging the loci of its sides. This approach does not take into account two specific properties of the resulting segment sites. Firstly, all these segments are connected in pairs at the vertices of the polygons. Secondly, on one side of each segment lies the interior of the polygon, and the polygon is obviously included in its own locus. Using these properties in the VD construction algorithm is a way to reduce computation. This article proposes an algorithm for the direct construction of the VD of polygonal sites. The algorithm is based on the sweepline paradigm, which allows these properties to be exploited effectively. The solution is performed based on reduction. Preprocessing is the construction of a set of sites from the vertices and edges of the polygons. Each site has an orientation such that the interior of the polygon lies to its left. The proposed algorithm constructs the VD for the set of oriented sites with the sweepline paradigm. Postprocessing is the selection of the edges of this VD formed by the centers of empty circles touching different polygons. Improving the efficiency of the proposed sweepline algorithm in comparison with the general Fortune algorithm is achieved through the following fundamental solutions: 1. The algorithm constructs only those VD edges which lie outside the polygons.
The concept of oriented sites makes it possible to avoid constructing VD edges located inside the polygons. 2. The list of events in the sweepline algorithm has a special property: the majority of events are connected with 'medium' polygon vertices, where one incident polygon side lies behind the sweepline and the other in front of it. The proposed algorithm processes such events in constant time rather than in logarithmic time, as in the general Fortune algorithm. The proposed algorithm is fully implemented and tested on a large number of examples. The high reliability and efficiency of the algorithm is also confirmed by computational experiments with complex sets of several thousand polygons. It should be noted that, despite the considerable time that has passed since the publication of Fortune's algorithm in 1986, a full-scale implementation of this algorithm for an arbitrary set of segment sites has not been made. The proposed algorithm fills this gap for an important special case – a set of sites formed by polygons.
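The locus definition underlying the diagram can be illustrated by brute force (this is not the sweepline algorithm, only the distance-to-polygon primitive it is built on, with two invented square sites and a coarse sample grid):

```python
import math

def point_segment_dist(px, py, ax, ay, bx, by):
    """Euclidean distance from point (px, py) to the segment (ax, ay)-(bx, by)."""
    vx, vy = bx - ax, by - ay
    wx, wy = px - ax, py - ay
    t = max(0.0, min(1.0, (wx * vx + wy * vy) / (vx * vx + vy * vy)))
    return math.hypot(px - (ax + t * vx), py - (ay + t * vy))

def polygon_dist(p, poly):
    """Distance from p to the boundary of a polygon given as a cyclic vertex list."""
    n = len(poly)
    return min(point_segment_dist(p[0], p[1], *poly[i], *poly[(i + 1) % n])
               for i in range(n))

# Two hypothetical square sites; each sample point is assigned to its nearest site,
# i.e. to the locus it belongs to.
sq1 = [(0, 0), (1, 0), (1, 1), (0, 1)]
sq2 = [(3, 0), (4, 0), (4, 1), (3, 1)]
nearest = [[0 if polygon_dist((x, y), sq1) <= polygon_dist((x, y), sq2) else 1
            for x in (0.5, 1.5, 2.5, 3.5)] for y in (0.5,)]
print(nearest)
```

Points left of the bisector between the squares fall into the first locus, points right of it into the second; the sweepline algorithm computes the boundary between such loci exactly instead of sampling it.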

Keywords: voronoi diagram, sweepline, polygon sites, fortune's algorithm, segment sites

Procedia PDF Downloads 162
514 Laser - Ultrasonic Method for the Measurement of Residual Stresses in Metals

Authors: Alexander A. Karabutov, Natalia B. Podymova, Elena B. Cherepetskaya

Abstract:

A theoretical analysis is carried out to obtain the relation between the ultrasonic wave velocity and the value of residual stresses. A laser-ultrasonic method is developed to evaluate residual stresses and subsurface defects in metals. The method is based on laser thermooptical excitation of longitudinal ultrasonic waves and their detection by a broadband piezoelectric detector. A laser pulse with a duration of 8 ns at full width at half maximum and an energy of 300 µJ is absorbed in a thin layer of a special generator that is inclined relative to the object under study. The non-uniform heating of the generator causes the formation of a broadband, powerful pulse of longitudinal ultrasonic waves. It is shown that the temporal profile of this pulse is the convolution of the temporal envelope of the laser pulse and the profile of the in-depth distribution of the heat sources. The ultrasonic waves reach the surface of the object through a prism that serves as an acoustic duct. At the 'laser-ultrasonic transducer-object' interface, most of the longitudinal wave energy is converted into shear, subsurface longitudinal, and Rayleigh waves. These spread within the subsurface layer of the studied object and are detected by the piezoelectric detector. The electrical signal that corresponds to the detected acoustic signal is acquired by an analog-to-digital converter and is then mathematically processed and visualized with a personal computer. The distance between the generator and the piezodetector, as well as the propagation times of the acoustic waves in the acoustic ducts, are the characteristic parameters of the laser-ultrasonic transducer and are determined using calibration samples. The relative precision of the measurement of the velocity of longitudinal ultrasonic waves is 0.05%, which corresponds to approximately ±3 m/s for steels of conventional quality.
This precision allows one to determine the mechanical stress in the steel samples with the minimal detection threshold of approximately 22.7 MPa. The results are presented for the measured dependencies of the velocity of longitudinal ultrasonic waves in the samples on the values of the applied compression stress in the range of 20-100 MPa.
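In the linear acoustoelastic regime the velocity varies linearly with stress, so a calibration against known applied stresses can be inverted to estimate residual stress. A sketch with invented numbers (the baseline velocity, the slope of 0.132 (m/s)/MPa, and the noise level are assumptions, not data from this work; the slope is merely chosen so that the ±3 m/s precision maps to roughly the 22.7 MPa threshold stated above):

```python
import numpy as np

# Hypothetical calibration: measured longitudinal velocity at known applied stresses.
stress_mpa = np.array([20.0, 40.0, 60.0, 80.0, 100.0])
rng = np.random.default_rng(1)
velocity = 5900.0 + 0.132 * stress_mpa + rng.normal(0.0, 0.3, 5)  # [m/s]

# Least-squares fit of the acoustoelastic relation v = v0 + k * sigma.
k, v0 = np.polyfit(stress_mpa, velocity, 1)

def stress_from_velocity(v):
    """Invert the calibration line to estimate stress from a measured velocity."""
    return (v - v0) / k

est = stress_from_velocity(5907.9)
print(round(k, 3), round(est, 1), round(3.0 / k, 1))
```

With such a slope, the stated ±3 m/s velocity precision translates into a stress detection threshold of roughly 23 MPa, consistent in order of magnitude with the 22.7 MPa quoted in the abstract.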

Keywords: laser-ultrasonic method, longitudinal ultrasonic waves, metals, residual stresses

Procedia PDF Downloads 305
513 Study of the Uncertainty Behaviour for the Specific Total Enthalpy of the Hypersonic Plasma Wind Tunnel Scirocco at Italian Aerospace Research Center

Authors: Adolfo Martucci, Iulian Mihai

Abstract:

By means of the expansion through a Conical Nozzle and the low pressure inside the Test Chamber, a large, stable hypersonic flow is established for a duration of up to 30 minutes. Downstream of the Test Chamber, the diffuser has the function of reducing the flow velocity to subsonic values and, as a consequence, the temperature increases again. In order to cool down the flow, a heat exchanger is located at the end of the diffuser. The Vacuum System generates the vacuum conditions necessary for correct hypersonic flow generation, and the DeNOx system, which follows the Vacuum System, reduces the nitrogen oxide concentrations created inside the plasma flow to below the limits imposed by Italian law. This very large, powerful, and complex facility allows researchers and engineers to reproduce entire re-entry trajectories of space vehicles into the atmosphere. One of the most important parameters for a hypersonic flowfield representative of re-entry conditions is the specific total enthalpy. This is the whole energy content of the fluid, and it represents how severe the conditions could be around a spacecraft re-entering from a space mission or, in our case, inside a hypersonic wind tunnel. It is possible to reach very high values of enthalpy (up to 45 MJ/kg) that, together with the large allowable size of the models, offer great possibilities for on-ground experiments in the atmospheric re-entry field. The maximum nozzle exit section diameter is 1950 mm, where Mach numbers much higher than 1 can be reached. The specific total enthalpy is evaluated by means of a number of measurements, each of them contributing to its value and its uncertainty. The scope of the present paper is the evaluation of the sensitivity of the uncertainty of the specific total enthalpy to all the parameters and measurements involved. The sensors that, if improved, would give the highest benefit have thus been identified.
Several Monte Carlo simulations, implemented in Python with the METAS library, are presented, together with the obtained results and a discussion of them.
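The Monte Carlo propagation idea can be sketched with plain NumPy; the measurement model below (an energy balance h0 = P * eta / mdot) and every nominal value and uncertainty are invented placeholders, not the Scirocco data-reduction chain or the METAS library API:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000

# Hypothetical measurement model: specific total enthalpy from an energy balance.
P = rng.normal(50e6, 1.0e6, N)     # arc power [W], standard uncertainty 1 MW
eta = rng.normal(0.45, 0.02, N)    # heating efficiency [-]
mdot = rng.normal(1.0, 0.03, N)    # air mass flow rate [kg/s]

h0 = P * eta / mdot                # specific total enthalpy [J/kg]
mean_MJ = h0.mean() / 1e6
u_MJ = h0.std() / 1e6
print(f"h0 = {mean_MJ:.2f} MJ/kg, standard uncertainty = {u_MJ:.2f} MJ/kg")

# Relative variance budget: which sensor dominates the total uncertainty?
total = (1e6 / 50e6) ** 2 + (0.02 / 0.45) ** 2 + 0.03 ** 2
for name, rel in [("P", 1e6 / 50e6), ("eta", 0.02 / 0.45), ("mdot", 0.03)]:
    print(name, f"{rel**2 / total:.0%}")
```

The variance budget is exactly the kind of sensitivity information the paper targets: the input whose relative contribution dominates is the sensor whose improvement pays off most.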

Keywords: hypersonic, uncertainty, enthalpy, simulations

Procedia PDF Downloads 76
512 Healthy Architecture Applied to Inclusive Design for People with Cognitive Disabilities

Authors: Santiago Quesada-García, María Lozano-Gómez, Pablo Valero-Flores

Abstract:

The recent digital revolution, together with modern technologies, is changing the environment and the way people interact with inhabited space. However, the elderly are a very broad and varied group that has serious difficulties in understanding these modern technologies. Outpatients with cognitive disabilities, such as those suffering from Alzheimer's disease (AD), form a distinct part of this cluster. This population group is in constant growth, and its members have specific requirements for their inhabited space. In architecture, which is one of the health humanities, environments are designed to promote well-being and improve the quality of life for all. Buildings, as well as the tools and technologies integrated into them, must be accessible, inclusive, and foster health. In this new digital paradigm, artificial intelligence (AI) appears as an innovative resource to help this population group improve their autonomy and quality of life. Some experiences and solutions, such as those that interact with users through chatbots and voicebots, show the potential of AI in its practical application. In the design of healthy spaces, the integration of AI in architecture will allow the living environment to become a kind of 'exo-brain' that can compensate for certain cognitive deficiencies in this population. The objective of this paper is to address, from the discipline of neuroarchitecture, how modern technologies can be integrated into everyday environments and become an accessible resource for people with cognitive disabilities. To this end, the methodology has a mixed structure. On the one hand, from an empirical point of view, the research carries out a review of the existing literature on applications of AI in built space, following critical review principles.
On the other hand, as unconventional architectural research, an experimental analysis is proposed based on people with AD as a source of data to study how the environment in which they live influences their regular activities. The results presented in this communication are part of the progress achieved in the competitive R&D&I project ALZARQ (PID2020-115790RB-I00). These outcomes are aimed at the specific needs of people with cognitive disabilities, especially those with AD, although, due to the comfort and wellness that the solutions entail, they can also be extrapolated to society as a whole. As a provisional conclusion, it can be stated that, in the immediate future, AI will be an essential element in the design and construction of healthy new environments. The discipline of architecture has the compositional resources to, through this emerging technology, build an 'exo-brain' capable of becoming a personal assistant for the inhabitants, with whom it can interact proactively and contribute to their general well-being. The main objective of this work is to show how this is possible.

Keywords: Alzheimer’s disease, artificial intelligence, healthy architecture, neuroarchitecture, architectural design

Procedia PDF Downloads 42
511 Investigation of Nucleation and Thermal Conductivity of Waxy Crude Oil on Pipe Wall via Particle Dynamics

Authors: Jinchen Cao, Tiantian Du

Abstract:

As waxy crude oil crystallizes and deposits easily on the pipeline wall, it causes pipeline clogging and reduces oil and gas gathering and transmission efficiency. In this paper, a mesoscopic-scale dissipative particle dynamics method is employed, and four pipe wall models are constructed: a smooth wall (SW), a hydroxylated wall (HW), a rough wall (RW), and a single-layer graphene wall (GW). Snapshots of the simulation output trajectories show that paraffin molecules interact with each other to form a network structure that constrains water molecules as their nucleation sites. Meanwhile, it is observed that the paraffin molecules on the near-wall side are adsorbed horizontally in the inter-lattice gaps of the solid wall. In the pressure range of 0-50 MPa, pressure changes have little effect on the affinity properties of the SW, HW, and GW walls, but for the RW wall the contact angle of paraffin wax was found to decrease with increasing pressure, while that of the water molecules showed the opposite trend; this phenomenon is due to the pressure change driving a transition of the paraffin wax molecules from an amorphous to a crystalline state. Meanwhile, the minimum crystalline phase pressure (MCPP) is proposed to describe the lowest pressure at which crystallization of paraffin molecules occurs. The maximum number of crystalline clusters formed at the MCPP followed N_SW (MCPP 0.52 MPa) > N_HW (0.55 MPa) > N_RW (0.62 MPa) > N_GW (0.75 MPa). The graphene surface, with the highest MCPP and the fewest clusters formed, indicates that the addition of graphene inhibits the crystallization of paraffin deposits on the wall surface.
Finally, the thermal conductivity was calculated. The results show that on the near-wall side the thermal conductivity changes drastically due to the adsorption crystallization of the paraffin waxes, while on the fluid side it gradually stabilizes; the average thermal conductivities of the four wall systems differed, at 0.254, 0.249, 0.218, and 0.188 W/(m·K). This study provides a theoretical basis for improving the transport efficiency and heat transfer characteristics of waxy crude oil in terms of wall type, wall roughness, and MCPP.

Keywords: waxy crude oil, thermal conductivity, crystallization, dissipative particle dynamics, MCPP

Procedia PDF Downloads 58
510 Gassing Tendency of Natural Ester Based Transformer Oils: Low Alkane Generation in Stray Gassing Behaviour

Authors: Thummalapalli CSM Gupta, Banti Sidhiwala

Abstract:

Mineral oils of naphthenic and paraffinic type have traditionally been used as insulating liquids in transformer applications to protect the solid insulation from moisture and to ensure effective heat transfer and cooling. The performance of these types of oils has been proven in the field over many decades, and transformer condition and performance have been successfully monitored through oil properties and dissolved gas analysis methods, with different gases effectively representing various types of faults due to components or operating conditions. While a large database on dissolved gas analysis for mineral-oil-based transformer oils has been generated in the industry, along with various models for fault prediction and analysis, oil specifications and standards have also been modified to include stray gassing limits, which cover low-temperature faults and provide an effective preventive maintenance tool for understanding the breakdown of electrical insulating materials and related components. Natural esters have seen a rise in popularity in recent years due to their 'green' credentials. Some of their benefits include biodegradability, a higher fire point, improvement in the load capability of the transformer, and longer solid insulation life than mineral oils. However, the stray gases evolved, such as hydrogen and the hydrocarbons methane (CH4) and ethane (C2H6), show very high values, much higher than the limits of the mineral oil standards. Though standards for these types of esters are yet to evolve, the high hydrocarbon gas values of products available on the market are of concern, as they might be interpreted as a fault in transformer operation.
The current paper focuses on developing a natural ester based transformer oil that shows, by standard test methods, much lower stray gassing levels than the products currently available; experimental results under various test conditions are presented and the underlying mechanism is explained.

Keywords: biodegradability, fire point, dissolved gas analysis, stray gassing

Procedia PDF Downloads 80
509 Impact of Climate Change on Some Physiological Parameters of Cyclic Female Egyptian Buffalo

Authors: Nabil Abu-Heakal, Ismail Abo-Ghanema, Basma Hamed Merghani

Abstract:

The aim of this investigation is to study the effect of seasonal variations in Egypt on the hematological parameters and reproductive and metabolic hormones of Egyptian buffalo-cows. This study lasted one year, extending from December 2009 to November 2010, and was conducted on sixty buffalo-cows. A group of 5 buffalo-cows at the estrus phase was selected monthly. Then, after blood sampling through tail vein puncture on the 2nd day after natural service, the blood was divided into two samples: one with anticoagulant for hematological analysis and the other without anticoagulant for serum separation. Results of this investigation revealed that the highest atmospheric temperature was in hot summer, 32.61±1.12°C, versus 26.18±1.67°C in spring and 19.92±0.70°C in the winter season, while the highest relative humidity was in the winter season, 43.50±1.60%, versus 32.50±2.29% in the summer season. The rise in the temperature-humidity index from 63.73±1.29 in winter to 78.53±1.58 in summer indicates severe heat stress, which is associated with significant reductions in total red blood cell count (3.20±0.15×10⁶), hemoglobin concentration (8.83±0.43 g/dl), packed cell volume (30.73±0.12%), lymphocytes (40.66±2.33%), serum progesterone concentration (0.56±0.03 ng/ml), estradiol-17β concentration (16.8±0.64 ng/ml), triiodothyronine (T3) concentration (2.33±0.33 ng/ml), and thyroxine (T4) concentration (21.66±1.66 ng/ml), while hot summer resulted in significant increases in mean cell volume (96.55±2.25 fl), mean cell hemoglobin (30.81±1.33 pg), total white blood cell count (10.63±0.97×10³), neutrophils (49.66±2.33%), serum prolactin (PRL) concentration (23.45±1.72 ng/ml), and cortisol concentration (4.47±0.33 ng/ml) compared to the winter season. There was no significant seasonal variation in mean cell hemoglobin concentration (MCHC).
It was concluded that in Egypt there is seasonal variation in atmospheric temperature, relative humidity, and temperature-humidity index (THI), and that the rise in THI above the upper critical level (72 units) is the major constraint on the hematological parameters and hormonal secretion of lactating buffalo-cows in Egypt, affecting animal reproduction. Hence, climatic conditions inside the dairy farm should be improved to eliminate or reduce summer infertility.
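The THI values above can be reproduced from the reported season means with a common NRC-type formula (one of several THI variants used for cattle and buffalo; whether the authors used exactly this one is an assumption):

```python
def thi(temp_c, rh_percent):
    """Temperature-humidity index: THI = T_F - (0.55 - 0.0055*RH) * (T_F - 58)."""
    tf = 1.8 * temp_c + 32.0  # Celsius -> Fahrenheit
    return tf - (0.55 - 0.0055 * rh_percent) * (tf - 58.0)

# Season means reported in the abstract.
summer = thi(32.61, 32.5)
winter = thi(19.92, 43.5)
print(round(summer, 2), round(winter, 2))
```

The summer value comes out at about 78.6, matching the reported 78.53±1.58 and clearly above the 72-unit critical threshold; the winter value lands near 65, slightly above the reported 63.73, plausibly because the authors averaged monthly indices rather than indexing the season means.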

Keywords: buffalo, climate change, Egypt, physiological parameters

Procedia PDF Downloads 637
508 Purification and Characterization of a Novel Extracellular Chitinase from Bacillus licheniformis LHH100

Authors: Laribi-Habchi Hasiba, Bouanane-Darenfed Amel, Drouiche Nadjib, Pausse André, Mameri Nabil

Abstract:

Chitin, a linear β-1,4-linked N-acetyl-d-glucosamine (GlcNAc) polysaccharide, is the major structural component of fungal cell walls, insect exoskeletons, and the shells of crustaceans. It is one of the most abundant naturally occurring polysaccharides and has attracted tremendous attention in the fields of agriculture, pharmacology, and biotechnology. Each year, a vast amount of chitin waste is released from the aquatic food industry, where crustaceans (prawn, crab, shrimp, and lobster) constitute one of the main agricultural products. This creates a serious environmental problem. This linear polymer can be hydrolyzed by bases, acids, or enzymes such as chitinases. In this context, an extracellular chitinase (ChiA-65) was produced and purified from the newly isolated strain LHH100. Pure protein was obtained after heat treatment and ammonium sulphate precipitation followed by Sephacryl S-200 chromatography. Based on matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF/MS) analysis, the purified enzyme is a monomer with a molecular mass of 65,195.13 Da. The sequence of the 27 N-terminal residues of the mature ChiA-65 showed high homology with family-18 chitinases. Optimal activity was achieved at pH 4 and 75°C. Among the inhibitors and metals tested, p-chloromercuribenzoic acid, N-ethylmaleimide, Hg2+ and Hg+ completely inhibited enzyme activity. Chitinase activity was high on colloidal chitin, glycol chitin, glycol chitosan, chitotriose, and chitooligosaccharide. Chitinase activity towards the synthetic substrates p-NP-(GlcNAc)n (n = 2-4) followed the order p-NP-(GlcNAc)2 > p-NP-(GlcNAc)4 > p-NP-(GlcNAc)3. Our results suggest that ChiA-65 preferentially hydrolyzed the second glycosidic link from the non-reducing end of (GlcNAc)n. ChiA-65 obeyed Michaelis-Menten kinetics, the Km and kcat values being 0.385 mg colloidal chitin/ml and 5000 s⁻¹, respectively.
ChiA-65 exhibited remarkable biochemical properties suggesting that this enzyme is suitable for bioconversion of chitin waste.
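The Michaelis-Menten constants reported above determine the rate curve; a minimal sketch using the published Km and kcat (the enzyme concentration is an invented placeholder needed only to form Vmax = kcat * [E]0):

```python
def michaelis_menten(s, vmax, km):
    """Initial rate v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

# Constants reported for ChiA-65.
km = 0.385     # mg colloidal chitin / ml
kcat = 5000.0  # 1/s
e0 = 1e-9      # assumed enzyme concentration (illustrative only), mol/l
vmax = kcat * e0

# Defining property of the model: at [S] = Km the rate is half of Vmax.
v_half = michaelis_menten(km, vmax, km)
print(v_half / vmax)
```

This half-saturation behaviour at [S] = Km is what the reported 0.385 mg/ml value encodes: below it the rate rises almost linearly with substrate, above it the enzyme approaches saturation.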

Keywords: Bacillus licheniformis LHH100, characterization, extracellular chitinase, purification

Procedia PDF Downloads 425
507 Bringing the World to Net Zero Carbon Dioxide by Sequestering Biomass Carbon

Authors: Jeffrey A. Amelse

Abstract:

Many corporations aspire to become Net Zero Carbon Dioxide by 2035-2050. This paper examines what it will take to achieve those goals. Achieving Net Zero CO₂ requires an understanding of where energy is produced and consumed, the magnitude of CO₂ generation, and a proper understanding of the Carbon Cycle. The latter leads to the distinction between CO₂ and biomass carbon sequestration. Short reviews are provided of technologies previously proposed for reducing CO₂ emissions from fossil fuels or substituting them with renewable energy, to highlight their limitations and to show that none offers a complete solution. Of these, CO₂ sequestration is poised to have the largest impact. It will just cost money, scale-up is a huge challenge, and it will not be a complete solution. CO₂ sequestration is still at the demonstration and semi-commercial scale. Transportation accounts for only about 30% of total U.S. energy demand, and renewables account for only a small fraction of that sector. Yet bioethanol production consumes 40% of the U.S. corn crop, and biodiesel consumes 30% of U.S. soybeans. It is unrealistic to believe that biofuels can completely displace fossil fuels in the transportation market. Bioethanol is traced through its Carbon Cycle and shown to be both energy inefficient and an inefficient use of biomass carbon. Both biofuels and CO₂ sequestration reduce future CO₂ emissions from continued use of fossil fuels; they will not remove CO₂ already in the atmosphere. Planting more trees has been proposed as a way to reduce atmospheric CO₂. Trees are a temporary solution: when they complete their Carbon Cycle, they die and release their carbon as CO₂ to the atmosphere. Thus, planting more trees is just 'kicking the can down the road.' The only way to permanently remove CO₂ already in the atmosphere is to break the Carbon Cycle by growing biomass from atmospheric CO₂ and sequestering biomass carbon. Sequestering tree leaves is proposed as a solution.
Unlike wood, leaves have a short Carbon Cycle time constant: they renew and decompose every year. Allometric equations from the USDA indicate that, theoretically, sequestering only a fraction of the world's tree leaves could get the world to net zero CO₂ without disturbing the underlying forests. How can tree leaves be permanently sequestered? It may be as simple as rethinking how landfills are designed, so that they discourage rather than encourage decomposition. In traditional landfills, municipal waste undergoes rapid initial aerobic decomposition to CO₂, followed by slow anaerobic decomposition to methane and CO₂; the latter can take hundreds to thousands of years. The first step in anaerobic decomposition is the hydrolysis of cellulose to release sugars, which those who have worked on cellulosic ethanol know is challenging for a number of reasons. The key to permanent leaf sequestration may be keeping landfills dry and exploiting known inhibitors of anaerobic bacteria.
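The leaf-sequestration estimate above rests on allometric scaling of foliage biomass with tree size. As a minimal sketch of how such an estimate is assembled (the coefficients and the 50% carbon fraction below are illustrative placeholders, not the USDA values used in the paper):

```python
import math

def foliage_biomass_kg(dbh_cm, b0=-4.08, b1=2.33):
    """Estimate foliage (leaf) dry biomass per tree from diameter at breast
    height (DBH) using the generic allometric form ln(M) = b0 + b1*ln(DBH).
    Coefficients here are hypothetical, not actual USDA species values."""
    return math.exp(b0 + b1 * math.log(dbh_cm))

def annual_leaf_carbon_t(n_trees, dbh_cm, carbon_fraction=0.5):
    """Rough annual carbon captured in leaves (tonnes), assuming leaves renew
    yearly and dry biomass is roughly half carbon by mass (an assumption)."""
    return n_trees * foliage_biomass_kg(dbh_cm) * carbon_fraction / 1000.0
```

With species-specific coefficients, summing this over forest inventories is what lets the paper compare total leaf carbon against annual CO₂ emissions.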

Keywords: carbon dioxide, net zero, sequestration, biomass, leaves

Procedia PDF Downloads 107
506 Thermal Decomposition Behaviors of Hexafluoroethane (C2F6) Using Zeolite/Calcium Oxide Mixtures

Authors: Kazunori Takai, Weng Kaiwei, Sadao Araki, Hideki Yamamoto

Abstract:

HFC and PFC gases are widely used as refrigerants in air conditioners and as etching agents in semiconductor manufacturing because of their high heat of vaporization and chemical stability. On the other hand, HFCs and PFCs have a strong global warming effect. Therefore, these gases emitted from chemical apparatus such as refrigerators must be decomposed. Until now, disposal of these gases has been carried out mainly by combustion methods such as rotary kiln treatment. However, this treatment requires extremely high temperatures, above 1000 °C. In recent years, in order to reduce energy consumption, hydrolytic decomposition using catalysts and plasma decomposition treatment have attracted much attention as new disposal methods. However, the decomposition of fluorine-containing gases under wet conditions cannot avoid the generation of hydrofluoric acid. Hydrofluoric acid is a corrosive gas, and it deteriorates the catalysts in the decomposition process. Moreover, an additional process for neutralizing the hydrofluoric acid is also indispensable. In this study, the decomposition of C2F6 using zeolite and zeolite/CaO mixtures as reactants was evaluated under dry conditions at 923 K. The effect of the chemical structure of the zeolite on the decomposition reaction was examined using H-Y, H-Beta, H-MOR and H-ZSM-5. The formation of CaF2 in the zeolite/CaO mixtures after the decomposition reaction was confirmed by XRD measurements. The decomposition of C2F6 using zeolite alone showed closely similar behavior regardless of the type of zeolite (MOR, Y, ZSM-5, Beta). There was no difference in the XRD patterns of each zeolite before and after reaction. On the other hand, differences in C2F6 decomposition were observed among the zeolite/CaO mixtures. 
These results suggest that the rate-determining step for C2F6 decomposition on zeolite alone is the removal of fluorine from the reactive sites. In other words, C2F6 decomposition with zeolite/CaO was improved over zeolite alone by the removal of fluorine from the reactive sites. H-MOR/CaO reached 100% decomposition within 3.5 h, a significant improvement over the zeolite alone. On the other hand, Y-type zeolite/CaO showed no improvement, giving almost the same value as Y-type zeolite alone. The descending order of C2F6 decomposition was MOR, ZSM-5, Beta and Y-type zeolite. This order is similar to the order of acid strength characterized by NH3-TPD. Hence, it is considered that C-F bond cleavage is closely related to acid strength.

Keywords: hexafluoroethane, zeolite, calcium oxide, decomposition

Procedia PDF Downloads 455
505 Spatio-Temporal Analysis of Land Use Change and Green Cover Index

Authors: Poonam Sharma, Ankur Srivastav

Abstract:

Cities are complex and dynamic systems that constitute a significant challenge to urban planning. The increasing size of the built-up area, owing to growing population pressure and economic growth, has led to massive land use/land cover change, resulting in the loss of natural habitat and reduced green cover in urban areas. Urban environmental quality is influenced by several aspects, including a city's geographical configuration, the scale and nature of the human activities occurring there, and the environmental impacts generated. Cities and their sustainability are often discussed together, as cities stand confronted with numerous environmental concerns while the world becomes increasingly urbanized, and cities are situated in the mesh of global networks in multiple senses. A rapidly transforming urban setting plays a crucial role in changing the green area of natural habitats. This paper examines the pattern of urban growth and measures land use/land cover change in Gurgaon in Haryana, India, through the integration of geospatial techniques. Satellite images are used to measure the spatiotemporal changes that have occurred in land use and land cover, resulting in a new cityscape. The analysis shows that drastic changes in land use have occurred, with a massive rise in built-up areas and a decrease in green cover, making the sustainability of the city an important area of concern. The massive increase in built-up area has influenced localised temperatures and heat concentration. To enhance the decision-making process in urban planning, a detailed and real-world depiction of these urban spaces is the need of the hour. Monitoring indicators of key processes in land use and economic development are essential for evaluating policy measures.
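The abstract does not specify how the green cover index is computed from the satellite images; a common proxy is the Normalized Difference Vegetation Index (NDVI). A minimal sketch, assuming per-pixel near-infrared and red reflectance bands and a scene-dependent threshold (both assumptions, not the authors' stated method):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Per-pixel NDVI = (NIR - Red) / (NIR + Red). Values near +1 indicate
    dense green cover; values near 0 or below, built-up or bare land."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

def green_cover_fraction(nir, red, threshold=0.3):
    """Fraction of pixels classified as green cover: NDVI above a chosen
    threshold (0.3 is a common but scene-dependent choice)."""
    return float(np.mean(ndvi(nir, red) > threshold))
```

Computing this fraction for images from two dates gives the kind of spatio-temporal green-cover change the paper reports for Gurgaon.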

Keywords: cityscape, geospatial techniques, green cover index, urban environmental quality, urban planning

Procedia PDF Downloads 250
504 Measurement of Fatty Acid Changes in Post-Mortem Belowground Carcass (Sus-scrofa) Decomposition: A Semi-Quantitative Methodology for Determining the Post-Mortem Interval

Authors: Nada R. Abuknesha, John P. Morgan, Andrew J. Searle

Abstract:

Information regarding the post-mortem interval (PMI) is vital in criminal investigations to establish a time frame when reconstructing events. PMI is defined as the time period that has elapsed between the occurrence of death and the discovery of the corpse. Adipocere, commonly referred to as 'grave wax', is formed when post-mortem adipose tissue is converted into a solid material heavily comprised of fatty acids. Adipocere is of interest to forensic anthropologists, as its formation can slow the decomposition process. Therefore, analysing changes in fatty acid patterns during early decomposition may make it possible to estimate the period of burial, and hence the PMI. The current study investigated the fatty acid composition and patterns in buried pig fat tissue, in an attempt to determine whether particular patterns of fatty acid composition are associated with the duration of burial and may therefore be used to estimate PMI. Adipose tissue from the abdominal region of domestic pigs (Sus scrofa) was used to model the human decomposition process. A 17 x 20 cm piece of pork belly was buried in a shallow artificial grave, and weekly samples (n=3) of the buried pig fat tissue were collected over an 11-week period. The marker fatty acids palmitic (C16:0), oleic (C18:1n-9) and linoleic (C18:2n-6) acid were extracted from the buried pig fat tissue and analysed as fatty acid methyl esters by gas chromatography. Levels of the marker fatty acids were quantified from their respective standards. The concentrations of C16:0 (69.2 mg/mL) and C18:1n-9 (44.3 mg/mL) at time zero exhibited significant fluctuations during the burial period. Levels rose (to 116 and 60.2 mg/mL, respectively) and then fell from the second week to reach 19.3 and 18.3 mg/mL, respectively, at week 6. 
Levels showed another increase at week 9 (66.3 and 44.1 mg/mL, respectively), followed by a gradual decrease at week 10 (20.4 and 18.5 mg/mL, respectively). A sharp increase was observed in the final week (131.2 and 61.1 mg/mL, respectively). Conversely, the levels of C18:2n-6 remained more or less constant throughout the study. In addition to fluctuations in the concentrations, several new fatty acids appeared in the latter weeks, while other fatty acids that were detectable in the time-zero sample were lost. There are several promising opportunities to utilise fatty acid analysis as a basic technique for approximating PMI: the quantification of marker fatty acids and the detection of selected fatty acids that either disappear or appear during the burial period. This pilot study indicates that this may be a potential semi-quantitative methodology for determining the PMI. Ideally, the analysis of particular fatty acid patterns in the early stages of decomposition could complement the techniques already available for estimating the PMI of a corpse.
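Quantifying marker acids "from their respective standards" typically means an external calibration curve of GC peak area against standard concentration for each acid. A minimal sketch of that step (the function names and data are illustrative, not taken from the study):

```python
import statistics

def fit_calibration(concentrations, peak_areas):
    """Least-squares line (area = slope*conc + intercept) fitted to external
    standards of one marker fatty acid, e.g. palmitic acid methyl ester."""
    mx = statistics.fmean(concentrations)
    my = statistics.fmean(peak_areas)
    sxy = sum((x - mx) * (y - my) for x, y in zip(concentrations, peak_areas))
    sxx = sum((x - mx) ** 2 for x in concentrations)
    slope = sxy / sxx
    return slope, my - slope * mx

def quantify(peak_area, slope, intercept):
    """Back-calculate a sample's concentration (mg/mL) from its GC peak area."""
    return (peak_area - intercept) / slope
```

Repeating this per marker acid and per weekly sample yields the concentration time series (e.g. 69.2 mg/mL C16:0 at time zero) on which the PMI patterns are based.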

Keywords: adipocere, fatty acids, gas chromatography, post-mortem interval

Procedia PDF Downloads 112
503 Damping Optimal Design of Sandwich Beams Partially Covered with Damping Patches

Authors: Guerich Mohamed, Assaf Samir

Abstract:

The application of viscoelastic materials in the form of constrained layers in mechanical structures is an efficient and cost-effective technique for solving noise and vibration problems. This technique requires a design tool to select the best location, type, and thickness of the damping treatment. This paper presents a finite element model for the vibration of beams partially or fully covered with a constrained viscoelastic damping material. The model is based on Bernoulli-Euler theory for the faces and Timoshenko beam theory for the core. It uses four variables: the through-thickness constant deflection, the axial displacements of the two faces, and the bending rotation of the beam. The sandwich beam finite element is compatible with the conventional C1 finite element for homogeneous beams. To validate the proposed model, several free vibration analyses of fully or partially covered beams, with different locations of the damping patches and different percentages of coverage, are studied. The results show that the proposed approach can be used as an effective tool to study the influence of the location and size of the treatment on the natural frequencies and the associated modal loss factors. A parametric study of the damping characteristics of partially covered beams was then conducted, considering the effects of the core shear modulus, the patch size, the thicknesses of the constraining layer and the core, and the locations of the patches. In partial coverage, the spatial distribution of the additive viscoelastic damping is as important as the thickness and material properties of the viscoelastic layer and the constraining layer. Indeed, to limit added mass and attain maximum damping, the damping patches should be placed at optimum locations. These locations are often selected using the modal strain energy indicator. 
Following this approach, the damping patches are applied over regions of the base structure with the highest modal strain energy to target specific modes of vibration. In the present study, a more efficient indicator is proposed, which consists of placing the damping patches over regions of high energy dissipation through the viscoelastic layer of the fully covered sandwich beam. The presented approach is used in an optimization method to select the best location for the damping patches as well as the material thicknesses and material properties of the layers that will yield optimal damping with the minimum area of coverage.
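The modal strain energy indicator estimates each mode's loss factor from the fraction of elastic strain energy stored in the viscoelastic core. A minimal sketch of that standard formulation, plus the per-element dissipation ranking used to place patches (matrices and helper are illustrative, not the authors' implementation):

```python
import numpy as np

def modal_loss_factor(phi, K_core, K_total, eta_v):
    """Modal Strain Energy estimate for one mode shape phi:
    eta_r ~ eta_v * (phi^T K_core phi) / (phi^T K_total phi),
    where K_core is the stiffness contribution of the viscoelastic layer
    and eta_v is the material loss factor of the viscoelastic core."""
    u_core = phi @ K_core @ phi
    u_total = phi @ K_total @ phi
    return eta_v * u_core / u_total

def rank_patch_locations(element_core_energies):
    """Rank candidate patch locations by per-element energy dissipated in the
    core of the fully covered beam (highest first), i.e. the indicator the
    paper proposes in place of plain modal strain energy."""
    return sorted(range(len(element_core_energies)),
                  key=lambda e: element_core_energies[e], reverse=True)
```

An optimizer can then evaluate `modal_loss_factor` for candidate patch layouts, keeping the top-ranked elements until the coverage budget is reached.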

Keywords: finite element model, damping treatment, viscoelastic materials, sandwich beam

Procedia PDF Downloads 135
502 Developing Wearable EMG Sensor Designed for Parkinson's Disease (PD) Monitoring and Treatment

Authors: Bulcha Belay Etana

Abstract:

Electromyography (EMG) is used to measure the electrical activity of muscles for various health monitoring applications, using surface electrodes or needle electrodes. Recent developments in electromyogram signal acquisition using textile electrodes open the door to wearable health monitoring, which enables patients to monitor and control their health issues outside of traditional healthcare facilities. The aim of this research is therefore to develop and analyze wearable textile electrodes for the acquisition of electromyography signals from Parkinson's patients and to apply an appropriate thermal stimulus to relieve muscle cramping. To achieve this, textile electrodes were sewn with a silver-coated thread in an overlapping zigzag pattern into an inextensible fabric, and stainless steel knitted textile electrodes attached to a sleeve were prepared; their electrical characteristics, including signal-to-noise ratio, were compared with those of traditional electrodes. To relieve muscle cramping, a heating element using stainless steel conductive yarn sewn onto a cotton fabric, coupled with a vibration system, was developed. The system was integrated using a microcontroller and a Myoware muscle sensor, so that when muscle cramping is detected, the system activates the heating elements and vibration motors. The optimum temperature considered for treatment was 35.5 °C, so a temperature measurement system was incorporated to deactivate the heating system when the temperature reaches this threshold and the signals indicating muscle cramping have subsided. The textile electrode exhibited a signal-to-noise ratio of 6.38 dB, while that of the traditional electrode was 7.05 dB. The rise time of the developed heating element was about 6 minutes to reach the optimum temperature using a 9 V power supply. 
The treatment of muscle cramping in Parkinson's patients using heat and muscle vibration simultaneously with a wearable electromyography signal acquisition system will improve patients’ livelihoods and enable better chronic pain management.
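The cramp-triggered heat-and-vibration logic described above amounts to a simple threshold controller with a temperature cutoff. A sketch of one control iteration, assuming the 35.5 °C ceiling from the study; the EMG envelope scale and cramp threshold are hypothetical, and this is not the authors' firmware:

```python
HEAT_CUTOFF_C = 35.5  # optimum treatment temperature reported in the study

def control_step(emg_level, skin_temp_c, cramp_threshold=0.5):
    """One iteration of a (hypothetical) controller loop: turn on heat and
    vibration while the EMG envelope indicates cramping; cut the heater at
    the 35.5 degC ceiling; stop both once the cramp signal subsides.
    Returns (heater_on, vibrator_on)."""
    cramping = emg_level >= cramp_threshold
    vibrator_on = cramping
    heater_on = cramping and skin_temp_c < HEAT_CUTOFF_C
    return heater_on, vibrator_on
```

On a microcontroller this step would run in the main loop, with `emg_level` read from the Myoware sensor output and `skin_temp_c` from the temperature measurement system.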

Keywords: electromyography, heating textile, vibration therapy, Parkinson's disease, wearable electronic textile

Procedia PDF Downloads 115
501 Preservation and Packaging Techniques for Extending the Shelf Life of Cucumbers: A Review of Methods and Factors Affecting Quality

Authors: Abdul Umaro Tholley

Abstract:

The preservation and packaging of cucumbers are essential to maintaining their shelf life and quality. Cucumbers are a perishable food item highly susceptible to spoilage due to their high water content and delicate nature. Therefore, proper preservation and packaging techniques are crucial to extend their shelf life and prevent economic loss. There are several methods of preserving cucumbers, including refrigeration, canning, pickling, and dehydration. Refrigeration is the most commonly used preservation method, as it slows the rate of deterioration and maintains the freshness and quality of the cucumbers. Canning and pickling are also popular preservation methods that use heat treatment and acidic solutions, respectively, to prevent microbial growth and increase shelf life. Dehydration involves removing the water content from cucumbers to increase their shelf life, but it may affect their texture and taste. Packaging also plays a vital role in preserving cucumbers. Packaging materials should be selected based on their ability to maintain the quality and freshness of the cucumbers. The most commonly used packaging materials for cucumbers are polyethylene bags, which prevent moisture loss and protect the cucumbers from physical damage. Other packaging materials, such as corrugated boxes and wooden crates, may also be used, but they offer less protection against moisture loss and damage. The quality of cucumbers is affected by several factors, including storage temperature, humidity, and exposure to light. Cucumbers should be stored at temperatures between 7 and 10 °C, with a relative humidity of 90-95%, to maintain their freshness and quality. Exposure to light should also be minimized to prevent yellowing and decay. In conclusion, the preservation and packaging of cucumbers are essential to maintain their quality and extend their shelf life. 
Refrigeration, canning, pickling, and dehydration are common preservation methods that can be used to preserve cucumbers. The packaging materials used should be carefully selected to prevent moisture loss and physical damage. Proper storage conditions, such as temperature, humidity, and light exposure, should also be maintained to ensure the quality and freshness of cucumbers. Overall, proper preservation and packaging techniques can help reduce economic loss and provide consumers with high-quality cucumbers.

Keywords: cucumbers, preservation, packaging, shelf life

Procedia PDF Downloads 70
500 Shift from Distance to In-Person Learning of Indigenous People’s Schools during the COVID 19 Pandemic: Gains and Challenges

Authors: May B. Eclar, Romeo M. Alip, Ailyn C. Eay, Jennifer M. Alip, Michelle A. Mejica, Eloy C. Eclar

Abstract:

The COVID-19 pandemic has significantly changed the educational landscape of the Philippines. The groups most affected by these changes are the poor and those living in Geographically Isolated and Depressed Areas (GIDA), such as Indigenous Peoples (IP). This was heavily experienced by the ten IP schools in Zambales, a province in the country. With this in mind, plus other factors relative to safety, the Schools Division of Zambales selected these ten schools for the pilot implementation of in-person classes two (2) years after the country-wide school closures. This study aimed to explore the lived experiences of the school heads of the first ten Indigenous Peoples' (IP) schools that shifted from distance learning to limited in-person learning, including the challenges met and the coping mechanisms they adopted to overcome them. The study is linked to experiential learning theory, as it focuses on the idea that the best way to learn is through experience. It made use of qualitative research, specifically phenomenology. All ten school heads from the IP schools were chosen as participants. Participants then underwent semi-structured interviews, both individual and focus group discussions, for triangulation. Data were analyzed through thematic analysis. The study found that most IP schools did not struggle to convince parents to send their children back to school, as the parents downplayed the pandemic threat due to their geographical location. The parents struggled the most during modular learning, since many of them are illiterate, too old to teach their children, busy with their lands, or have too many children to teach. Moreover, there is a meager vaccination rate in the ten barangays where the schools are located because of local beliefs. 
In terms of financial needs, school heads did not find it difficult to adjust the schools to the new normal, even though funding was needed, because of the financial support coming from the central office. Technical assistance was also provided to the schools by division personnel. Teachers welcomed the idea of shifting back to in-person classes; minor challenges were met but were solved immediately through various mechanisms. Learning losses were evident, since most learners struggled with essential reading, writing, and counting skills. Although the community has positively received the conduct of in-person classes, the challenges these IP schools had been experiencing pre-pandemic were also exacerbated by the school closures. It is therefore recommended that constant monitoring and provision of support continue, to address the other challenges the ten IP schools are still experiencing with in-person classes.

Keywords: in-person learning, indigenous peoples, phenomenology, Philippines

Procedia PDF Downloads 97