Search results for: horizontal pipe
98 Numerical Investigation of Solid Subcooling on a Low Melting Point Metal in Latent Thermal Energy Storage Systems Based on Flat Slab Configuration
Authors: Cleyton S. Stampa
Abstract:
This paper addresses the prospects of using low melting point metals (LMPMs) as phase change materials (PCMs) in latent thermal energy storage (LTES) units, through a numerical approach. This new class of PCMs has become one of the most promising alternatives for LTES, because these materials present high thermal conductivity and a high heat of fusion per unit volume. The chosen type of LTES consists of several horizontal parallel slabs filled with PCM. The heat transfer fluid (HTF) circulates through the channel formed between each two consecutive slabs in a laminar regime under forced convection. The study deals with the LTES charging process (heat storing) using pure gallium as the PCM, and it considers heat conduction in the solid phase during melting driven by natural convection in the melt. The transient heat transfer problem is analyzed in one arbitrary slab under the influence of the HTF. The mathematical model used to simulate the isothermal phase change is based on a volume-averaged enthalpy method, which is successfully verified by comparing its predictions with experimental data from works available in the pertinent literature. Regarding the convective heat transfer problem in the HTF, it is assumed that the flow is thermally developing, whereas the velocity profile is already fully developed. The study aims to assess the effect of solid subcooling on the melting rate through comparisons with the melting process of a solid that starts to melt from its fusion temperature. To better understand this effect in a metallic compound such as pure gallium, the study also evaluates, under the same conditions established for the gallium, the melting processes of commercial paraffin wax (an organic compound) and of calcium chloride hexahydrate (CaCl₂·6H₂O, an inorganic compound). The present work adopts the best options established by several researchers in their parametric studies of this type of LTES, which lead to high values of thermal efficiency. Concerning the geometric aspects, these are the gap of the channel formed by two consecutive slabs and the thickness and length of the slab; concerning the HTF, they are the type of fluid, the mass flow rate, and the inlet temperature.
Keywords: flat slab, heat storing, pure metal, solid subcooling
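For readers unfamiliar with the enthalpy method mentioned above, the sketch below shows a minimal 1D fixed-grid enthalpy formulation of isothermal melting. It is illustrative only: it models conduction-driven melting of a gallium-like PCM from a hot wall, omits the natural convection in the melt that the paper accounts for, and the property values and boundary temperatures are rough assumptions rather than the paper's inputs.

```python
import numpy as np

# Rough gallium-like properties (assumptions, not the paper's data)
k, rho, cp = 32.0, 6093.0, 381.0   # W/m.K, kg/m3, J/kg.K
Lf = 80160.0                       # latent heat of fusion, J/kg
Tm = 302.9                         # melting temperature, K
Tw, T0 = 320.0, 302.9              # hot-wall and initial temperatures, K

nx, dx = 100, 0.04 / 100           # 4 cm slab, 100 cells
alpha = k / (rho * cp)
dt = 0.4 * dx**2 / alpha           # below the explicit stability limit

T = np.full(nx, T0)
H = rho * cp * (T - Tm)            # volumetric enthalpy relative to solid at Tm

def temperature(H):
    """Recover temperature and liquid fraction from volumetric enthalpy."""
    T = np.where(H < 0, Tm + H / (rho * cp), Tm)                     # solid
    T = np.where(H > rho * Lf, Tm + (H - rho * Lf) / (rho * cp), T)  # liquid
    f = np.clip(H / (rho * Lf), 0.0, 1.0)                            # melt fraction
    return T, f

for _ in range(20000):
    T, f = temperature(H)
    Tg = np.concatenate(([Tw], T, [T[-1]]))   # Dirichlet left, adiabatic right
    lap = (Tg[2:] - 2 * Tg[1:-1] + Tg[:-2]) / dx**2
    H = H + dt * k * lap                       # conduction-only enthalpy update

T, f = temperature(H)
print(f"melt fraction after {20000 * dt:.0f} s: {f.mean():.2f}")
```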
Procedia PDF Downloads 141
97 Developing a GIS-Based Tool for the Management of Fats, Oils, and Grease (FOG): A Case Study of Thames Water Wastewater Catchment
Authors: Thomas D. Collin, Rachel Cunningham, Bruce Jefferson, Raffaella Villa
Abstract:
Fats, oils and grease (FOG) are by-products of food preparation and cooking processes. FOG enters wastewater systems through a variety of sources such as households, food service establishments, and industrial food facilities. Over time, if no source control is in place, FOG builds up on pipe walls, leading to blockages and potentially to sewer overflows, which are a major risk to the environment and human health. UK water utilities spend millions of pounds annually trying to control FOG. Although UK legislation makes the discharge of such material illegal, it is often complicated for water companies to identify and prosecute offenders, which leads to uncertainty about the approach to take to FOG management. Research is needed to seize the full potential of current practices. The aim of this research was to undertake a comprehensive study documenting the extent of FOG problems in sewer lines and reinforcing existing knowledge. Data were collected to develop a model estimating the quantity of FOG available for recovery within Thames Water wastewater catchments. Geographical Information System (GIS) software was used to integrate the data with a geographical component. FOG was responsible for at least one third of sewer blockages in the Thames Water waste area. A waste-based approach was developed through an extensive review to estimate the potential for FOG collection and recovery. Three main sources were identified: residential, commercial and industrial. Commercial properties were identified as one of the major FOG producers. The total potential FOG generated was estimated for the 354 wastewater catchments. Additionally, raw and settled sewage were sampled and analysed for FOG (as hexane extractable material) monthly at 20 sewage treatment works (STW) for three years. A good correlation was found between the sampled FOG and population equivalent (PE). On average, a difference of 43.03% was found between the estimated FOG (waste-based approach) and the sampled FOG (raw sewage sampling). It was suggested that the approach undertaken could overestimate the FOG available, that the sampling could only capture a fraction of the FOG arriving at STW, and/or that the difference could account for FOG accumulating in sewer lines. Furthermore, it was estimated that on average FOG could contribute up to 12.99% of the primary sludge removed. The model was further used to investigate the relationship between estimated FOG and the number of blockages: the higher the FOG potential, the higher the number of FOG-related blockages. The GIS-based tool was used to identify critical areas (i.e. high FOG potential and a high number of FOG blockages), consistent with the literature finding that FOG is one of the main causes of sewer blockages. By identifying critical areas, the model further explored the potential for source control in terms of 'sewer relief' and waste recovery, helping to target where the benefits of implementing management strategies would be highest. However, FOG is still likely to persist throughout the networks, and further research is needed to assess downstream impacts (i.e. at STW).
Keywords: fat, FOG, GIS, grease, oil, sewer blockages, sewer networks
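As a rough illustration of the waste-based estimation step described above, the sketch below aggregates hypothetical per-source FOG yields over a catchment and computes the percentage gap against a sampled value. All coefficients and counts are invented placeholders; the paper does not publish its per-source factors.

```python
# Hypothetical per-source FOG yield coefficients (kg/year per property);
# placeholders only, not the study's calibrated values.
FOG_PER_HOUSEHOLD = 5.0
FOG_PER_COMMERCIAL = 500.0
FOG_PER_INDUSTRIAL = 2000.0

def estimated_fog(households, commercial, industrial):
    """Waste-based estimate of FOG generated in one catchment (kg/year)."""
    return (households * FOG_PER_HOUSEHOLD
            + commercial * FOG_PER_COMMERCIAL
            + industrial * FOG_PER_INDUSTRIAL)

def percent_difference(estimated, sampled):
    """Gap between waste-based and sampling-based FOG, as reported in the study."""
    return 100.0 * (estimated - sampled) / estimated

est = estimated_fog(households=40_000, commercial=300, industrial=12)
print(f"{percent_difference(est, sampled=0.57 * est):.1f} % gap")  # ~43% as in the study
```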
Procedia PDF Downloads 211
96 Long-Term Foam Roll Intervention Study of the Effects on Muscle Performance and Flexibility
Authors: T. Poppendieker
Abstract:
A new, innovative tool for self-myofascial release, the foam roller, is widely and increasingly used among athletes of various sports. Application of the foam roll is suggested to improve muscle performance and flexibility. Attempts to examine the acute and longer-term effects on both have been made over the past ten years; however, the results for muscle performance have been inconsistent. It is suggested that regular use over a long period of time produces a different, performance-improving outcome. This study examines the long-term effects of regular foam rolling combined with a short plyometric routine vs. the same plyometric routine alone on muscle performance and flexibility over a period of six weeks. Results of the counter movement jump (CMJ), squat jump (SJ), and isometric maximal force (IMF) of a 90° horizontal squat in a leg press will serve as parameters for muscle performance. Data on the range of motion (ROM) in the sit-and-reach test will be used as the parameter for the flexibility assessment. Muscle activation will be measured throughout all tests. Twenty male and twenty female members of a Frankfurt area fitness center chain (7.11) with an average age of 25 years will be recruited. Women and men will be randomly assigned to a foam roll (FR) group and a control group. All participants will practice their assigned routine three times a week over the period of six weeks. Tests of CMJ, SJ, IMF, and ROM will be taken before and after the intervention period. The statistics software SPSS 22 will be used to analyze the CMJ, SJ, IMF, and ROM data, under consideration of muscle activation, by a 2 x 2 x 2 (time of measurement x gender x group) analysis of variance with repeated measures and by dependent t-test analysis of pre- and post-test results. The alpha level for statistical significance will be set at p ≤ 0.05. It is hypothesized that a significant gender difference in outcome will be observed in all four tests. It is further hypothesized that both groups may show significant improvements in the CMJ and SJ after the six-week period; however, the FR group is hypothesized to achieve a greater improvement in the two jump tests. Moreover, the FR group may increase IMF as well as flexibility, whereas the control group may not show comparable progress. The results of this study are crucial for understanding the long-term effects of regular foam roll application. The collected information may help to motivate the incorporation of foam rolling into training routines in order to improve athletic performance.
Keywords: counter movement jump, foam rolling, isometric maximal force, long term effects, self-myofascial release, squat jump
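A minimal sketch of the planned dependent (paired) t-test on pre/post scores, using simulated numbers purely for illustration; the assumed effect size is invented, and the study's full analysis would be the 2 x 2 x 2 repeated-measures ANOVA in SPSS:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated pre/post counter movement jump heights (cm) for one group of 20;
# the assumed 1.5 cm mean improvement is a placeholder, not study data.
pre = rng.normal(32.0, 4.0, size=20)
post = pre + rng.normal(1.5, 2.0, size=20)

t, p = stats.ttest_rel(post, pre)   # dependent (paired) t-test
print(f"t = {t:.2f}, p = {p:.4f}, significant at alpha 0.05: {p <= 0.05}")
```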
Procedia PDF Downloads 286
95 Multidisciplinary Approach to Mio-Plio-Quaternary Aquifer Study in the Zarzis Region (Southeastern Tunisia)
Authors: Ghada Ben Brahim, Aicha El Rabia, Mohamed Hedi Inoubli
Abstract:
Climate change has exacerbated disparities in the distribution of water resources in Tunisia, resulting in significant degradation in quantity and quality over the past five decades. The Mio-Plio-Quaternary aquifer, the primary water source in the Zarzis region, is subject to climatic, geographical, and geological challenges, as well as human stress. The region is experiencing uneven distribution and growing threats from groundwater salinity and saltwater intrusion. Addressing this challenge is critical for the arid region's socioeconomic development, and effective water resource management is required to combat climate change and reduce water deficits. This study uses a multidisciplinary approach, combining geophysical and hydrogeological data analysis, to determine the groundwater potential of this aquifer. We used advanced techniques such as 3D Euler deconvolution and power spectrum analysis to generate detailed anomaly maps and estimate the depths of density sources, identifying significant Bouguer anomalies trending E-W, NW-SE, and NE-SW. Various techniques, such as wavelength filtering, upward continuation, and horizontal and vertical derivatives, were used to enhance the gravity data, yielding consistent results for anomaly shapes and amplitudes. The Euler deconvolution method revealed two prominent surface faults, trending NE-SW and NW-SE, that have a significant impact on the distribution of sedimentary facies and water quality within the Mio-Plio-Quaternary aquifer. Additionally, depth maxima greater than 1400 m to the north indicate the presence of a Cretaceous paleo-fault. Geoelectrical models and resistivity pseudo-sections were used to interpret the distribution of electrical facies in the Mio-Plio-Quaternary aquifer, highlighting the lateral variation and the type of depositional environment. AI optimises the analysis and interpretation of exploration data, which is important for long-term management and water security. Machine learning algorithms and deep learning models can analyse large datasets to provide precise interpretations of subsurface conditions, such as aquifer salinisation. However, AI has limitations, such as the requirement for large datasets, the risk of overfitting, and integration issues with traditional geological methods.
Keywords: mio-plio-quaternary aquifer, Southeastern Tunisia, geophysical methods, hydrogeological analysis, artificial intelligence
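To make the Euler deconvolution step concrete, the sketch below solves the textbook Euler homogeneity equation, (x − x₀)∂T/∂x + (y − y₀)∂T/∂y + (z − z₀)∂T/∂z = N(B − T), by least squares over one data window. It is a generic formulation under an assumed structural index N, not the authors' specific implementation or parameters.

```python
import numpy as np

def euler_window(x, y, z, T, Tx, Ty, Tz, N):
    """Least-squares 3D Euler deconvolution for one moving data window.

    x, y, z: observation coordinates; T: field anomaly; Tx, Ty, Tz: its
    spatial gradients; N: assumed structural index. Returns the source
    position (x0, y0, z0) and the regional background B. Rearranging the
    homogeneity equation gives the linear system
        x0*Tx + y0*Ty + z0*Tz + N*B = x*Tx + y*Ty + z*Tz + N*T.
    """
    A = np.column_stack([Tx, Ty, Tz, N * np.ones_like(T)])
    b = x * Tx + y * Ty + z * Tz + N * T
    (x0, y0, z0, B), *_ = np.linalg.lstsq(A, b, rcond=None)
    return x0, y0, z0, B
```

In practice this is applied in overlapping windows across the gravity grid, and only solutions with small depth uncertainty are kept for the depth maps.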
Procedia PDF Downloads 18
94 Applying Big Data Analysis to Efficiently Exploit the Vast Unconventional Tight Oil Reserves
Authors: Shengnan Chen, Shuhua Wang
Abstract:
Successful production of hydrocarbons from unconventional tight oil reserves has changed the energy landscape in North America. The oil contained within these reservoirs typically will not flow to the wellbore at economic rates without assistance from advanced horizontal wells and multi-stage hydraulic fracturing. Efficient and economic development of these reserves is a priority for society, government, and industry, especially under the current low oil prices. Meanwhile, society needs technological and process innovations to enhance oil recovery while concurrently reducing environmental impacts. Recently, big data analysis and artificial intelligence have become very popular, providing data-driven insights for better designs and decisions in various engineering disciplines. However, the application of data mining in petroleum engineering is still in its infancy. This research aims to apply intelligent data analysis and data-driven models to exploit unconventional oil reserves both efficiently and economically. More specifically, a comprehensive database including reservoir geological data, reservoir geophysical data, well completion data and production data for thousands of wells is first established to discover valuable insights and knowledge related to tight oil reserves development. Several data analysis methods are introduced to analyse such a huge dataset. For example, K-means clustering is used to partition all observations into clusters; principal component analysis is applied to emphasize the variation and bring out strong patterns in the dataset, making the big data easy to explore and visualize; and exploratory factor analysis (EFA) is used to identify the complex interrelationships between well completion data and well production data. Different data mining techniques, such as artificial neural networks, fuzzy logic, and machine learning techniques, are then summarized, and appropriate ones are selected to analyze the database based on prediction accuracy, model robustness, and reproducibility. Advanced knowledge and patterns are finally recognized and integrated into a modified self-adaptive differential evolution optimization workflow to enhance oil recovery and maximize the net present value (NPV) of the unconventional oil resources. This research will advance knowledge in the development of unconventional oil reserves and bridge the gap between big data and performance optimization in these formations. The newly developed data-driven optimization workflow is a powerful approach to guide field operations, leading to better designs, higher oil recovery and greater economic return for future wells in the unconventional oil reserves.
Keywords: big data, artificial intelligence, enhance oil recovery, unconventional oil reserves
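A minimal sketch of the scaling, PCA and K-means steps named above, run on a synthetic stand-in for the well database (the study's data are not public); the attribute count, component count and cluster count are assumptions:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Stand-in well table: rows are wells, columns are completion/geology
# attributes (e.g. lateral length, stages, proppant mass, porosity).
X = rng.normal(size=(1000, 8))

Xs = StandardScaler().fit_transform(X)       # put attributes on one scale
pca = PCA(n_components=3).fit(Xs)            # bring out dominant patterns
scores = pca.transform(Xs)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)

print("explained variance:", pca.explained_variance_ratio_.round(2))
print("wells per cluster:", np.bincount(labels))
```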
Procedia PDF Downloads 285
93 Mature Field Rejuvenation Using Hydraulic Fracturing: A Case Study of Tight Mature Oilfield with Reveal Simulator
Authors: Amir Gharavi, Mohamed Hassan, Amjad Shah
Abstract:
The main characteristics of unconventional reservoirs include low to ultra-low permeability and low-to-moderate porosity. As a result, hydrocarbon production from these reservoirs requires different extraction technologies than conventional resources. An unconventional reservoir must be stimulated to produce hydrocarbons at an acceptable flow rate and to recover commercial quantities of hydrocarbons. Permeability in unconventional reservoirs is mostly below 0.1 mD, and reservoirs with permeability above 0.1 mD are generally considered conventional. The hydrocarbons held in these formations will not naturally move towards producing wells at economic rates without aid from hydraulic fracturing, which is the only technique for accessing production from such tight reservoirs. Horizontal wells with multi-stage fracturing are the key technique to maximize stimulated reservoir volume and achieve commercial production. The main objective of this research paper is to investigate development options for a tight mature oilfield, namely multi-stage hydraulic fracturing and frac spacing, by building reservoir models in the Reveal simulator to represent potential development options based on sidetracking the existing vertical well. An existing Petrel geological model was used to build the static parts of these models. An FBHP limit of 40 bar was assumed to take into account pump operating limits and to maintain the reservoir pressure above the bubble point. Wells of 300 m, 600 m and 900 m lateral length were modelled, in conjunction with 4, 6 and 8 frac stages. Simulation results indicate that higher initial recoveries and peak oil rates are obtained with longer well lengths and also with more fracs and wider spacing. For a 25-year forecast, the ultimate recovery ranges from 0.4% to 2.56% for the 300 m and 900 m laterals, respectively. The 900 m lateral with 8 fracs at 100 m spacing gave the highest peak rate of 120 m³/day, with the 600 m and 300 m cases giving initial peak rates of 110 m³/day. Similarly, the recovery factor for the 900 m lateral with 8 fracs and 100 m spacing was the highest at 2.65% after 25 years; the corresponding values for the 300 m and 600 m laterals were 2.37% and 2.42%. Therefore, the study suggests that longer laterals with 8 fracs and 100 m spacing provided the optimal recovery, and this design is recommended as the basis for further study.
Keywords: unconventional, resource, hydraulic, fracturing
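The scenario screening reduces to a simple ranking over the reported 25-year outcomes. The snippet below merely restates the abstract's quoted figures (no simulation is performed here) and picks the design with the highest recovery factor:

```python
# Reported 25-year outcomes for the 8-frac, 100 m spacing cases, as quoted
# in the abstract; ranking logic only.
scenarios = {
    "300 m lateral": {"peak_m3_per_day": 110, "recovery_pct": 2.37},
    "600 m lateral": {"peak_m3_per_day": 110, "recovery_pct": 2.42},
    "900 m lateral": {"peak_m3_per_day": 120, "recovery_pct": 2.65},
}
best = max(scenarios, key=lambda k: scenarios[k]["recovery_pct"])
print("recommended design:", best, scenarios[best])
```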
Procedia PDF Downloads 298
92 Downward Vertical Evacuation for Disabilities People from Tsunami Using Escape Bunker Technology
Authors: Febrian Tegar Wicaksana, Niqmatul Kurniati, Surya Nandika
Abstract:
Indonesia is one of the countries facing the greatest number of disaster occurrences and threats, such as earthquakes, tsunamis and volcanic eruptions, because it is located not only at the meeting of three tectonic plates (the Eurasian, Indo-Australian and Pacific plates) but also on the Ring of Fire. Recent research shows that there are areas on the southern coast of Java that could potentially be devastated by a tsunami. A tsunami is a series of waves in a body of water caused by the displacement of a large volume of water, generally in an ocean. When the waves enter shallow water, they may rise to several feet or, in rare cases, tens of feet, striking the coast with devastating force. Reference parameters include the magnitude, the depth of the epicentre, the distance between the epicentre and land, the water depth at every point, the time at which the waves reach the shore, and the growth of the waves. Interaction between these parameters produces large variance in tsunami waves. Based on this, the preparations needed for disaster mitigation strategies can be formulated. Mitigation strategies play an important role in efforts to reduce the number of victims and the damage in an affected area. Reduction efforts should be directed at those who are hardest to mobilize in a tsunami disaster area, such as the elderly, the sick and people with disabilities. Until now, the method used for rescuing people from tsunamis has been basic horizontal evacuation. This evacuation system is not optimal because it takes a long time and cannot be used by people with disabilities. The writers propose a vertical evacuation model with an escape bunker system, chosen because downward vertical evacuation is considered faster and more efficient, especially in coastal areas without any surrounding highlands. The downward evacuation system is better than upward evacuation because it avoids the risk of erosion of the ground around the structure, which can affect the building. The bunker structure and the evacuation process during, and even after, the disaster are the main priorities to be considered. The bunker must provide earthquake resistance, durability against flowing water, tolerance of varied ground interactions, and a waterproof design. When the situation returns to normal, victims can move to a safer place. The bunker will be located near hospitals and public places and will have a wide entrance with a large slide inside to ease access for people with disabilities. The escape bunker technology is expected to reduce the number of low-mobility victims of a tsunami.
Keywords: escape bunker, tsunami, vertical evacuation, mitigation, disaster management
Procedia PDF Downloads 495
91 Application of Geosynthetics for the Recovery of Located Road on Geological Failure
Authors: Rideci Farias, Haroldo Paranhos
Abstract:
The present work deals with the use of a drainage geocomposite as a deep drainage element and a geogrid to reinforce the base of the embankment body supporting the road pavement over geological faults in a stretch of the TO-342 Highway, between the cities of Miracema and Miranorte in the State of Tocantins, Brazil, which for many years was the main link between TO-010 and BR-153 beyond the city of Palmas. For this application, geotechnical and geological studies were carried out by means of SPT percussion drilling and rotary core drilling to understand the problem, identifying the type of faults and filling material and defining the water table. According to these studies, the defined route passes through a fault zone running longitudinally to the roadway, with strong breaking/fracturing, the presence of voids, intense alteration with advanced argillization of the rock, and partial filling of the faults by organic, compressible soils leached from other horizons. As a geotechnical aggravating factor, this geology presents a medium with a high hydraulic head and very low penetration resistance. For more than 20 years, the region exhibited constant excessive deformations in the upper layers of the pavement, which returned even after routine regularization, reshaping and recompaction of the layers and reapplication of the asphalt coating: the faults quickly propagated to the surface of the asphalt pavement, generating a longitudinal shear and forming steps (unevenness) of close to 40 cm, causing numerous accidents and discomfort to drivers, since the alignment lay on a horizontal curve. Several projects were presented to the region's highway department to solve the problem. Given the need for only partial closure of the roadway and the short execution time, the use of geosynthetics was proposed; the most adequate solution took into account the movement of the existing geological faults and the position of the water table in relation to the several layers of pavement and the failure. In order to avoid any flow of water in the embankment body and in the fault-filling material, a drainage curtain solution was adopted, executed at 4.0 meters depth with a drainage geocomposite; as a reinforcement element inhibiting possible fault-induced deformations, a geogrid with 200 kN/m tensile strength was inserted at the base of the reconstituted embankment. Recent evaluations, after 13 years of service of the solution, show the efficiency of the technique used, supported by the geotechnical studies carried out in the area.
Keywords: geosynthetics, geocomposite, geogrid, road, recovery, geological failure
Procedia PDF Downloads 170
90 Study of Drape and Seam Strength of Fabric and Garment in Relation to Weave Design and Comparison of 2D and 3D Drape Properties
Authors: Shagufta Riaz, Ayesha Younus, Munir Ashraf, Tanveer Hussain
Abstract:
Aesthetics and performance are the two most important considerations, along with quality, durability, comfort and cost, that affect a garment's credibility. Fabric drape is perhaps the most important clothing characteristic, distinguishing fabric from sheet, paper, steel or other film materials. It enables the fabric to mold itself under its own weight into the desired and required shape when only part of it is directly supported. Owing to its drapeability, fabric can fall gracefully into bent folds of single or double curvature to produce a smooth flow, i.e. 'the sinusoidal-type folds of a curtain or skirt'. Drape and seam strength are two parameters considered for the aesthetics and performance of fabric in both apparel and home textiles. Until recently, no study had been conducted in which the effect of weave design on the drape and seam strength of fabric and garment was examined. Therefore, the aim of this study was to measure the seam strength and drape of fabric and garment objectively while changing the weave design and quality of the fabric, and also to compare 2-D and 3-D drape to find out whether a fabric behaves in the same manner or differently when sewn and worn on the body. Four different cotton weave designs were developed and pre-treatment was done. The 2-D drape of the fabric was measured by a drape meter with an attached digital camera and a supporting disc on which the specimen was hung. The drape coefficient value (DC %) has a negative relation with drapeability: it is the ratio of the draped sample's projected shadow area to the area of the undraped (flat) sample, expressed as a percentage. Similarly, 3-D drape was measured by hanging A-line skirts made from the developed weave designs. The BS 3356 standard test method was followed for the bending length examination, which relates to the angle that the fabric makes with its horizontal axis. Seam strength was determined following the relevant ASTM test standard. For the sewn fabric, the stitch density of the seam was determined with a magnifying glass, also according to the standard ASTM test method. From the experimentation and evaluation, it was found that drape and seam strength were significantly affected by the change of weave design and the quality of the fabric (PPI and yarn count). Drapeability increased as the number of interlacements, or contact points between warp and weft yarns, decreased, while the weight, bending length, and density of the fabric had an inverse relationship with drapeability. We concluded that 2-D drape was higher than 3-D drape even though the garment was made of the same fabric construction. Seam breakage strength decreased with decreasing pick density and yarn count.
Keywords: drape coefficient, fabric, seam strength, weave
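A tiny worked example of the drape coefficient as defined above; the optional disc-area term follows the common Cusick variant, which is an assumption here since the abstract states only the shadow-to-flat ratio, and the specimen dimensions are invented:

```python
import math

def drape_coefficient(shadow_area, flat_area, disc_area=0.0):
    """DC (%) as defined in the text: projected shadow area over flat
    specimen area. Pass disc_area to use the Cusick form, which subtracts
    the supporting disc from both areas (an assumed extension)."""
    return 100.0 * (shadow_area - disc_area) / (flat_area - disc_area)

flat = math.pi * 0.15**2      # 30 cm diameter specimen (hypothetical)
shadow = math.pi * 0.11**2    # hypothetical draped shadow
print(f"DC = {drape_coefficient(shadow, flat):.1f} %")  # lower DC -> more drapeable
```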
Procedia PDF Downloads 264
89 Floating Populations, Rooted Networks: Tracing the Evolution of Russeifa City in Relation to Marka Refugee Camp
Authors: Dina Dahood Dabash
Abstract:
Refugee camps are habitually defined as receptive sites, transient spaces of exile and nondescript, depoliticized places of exception. However, such arguments capture only part of reality, especially in countries that are geopolitically challenged and rely immensely on international aid. In Jordan, the dynamics brought by the floating population of refugees (Palestinian amongst others) have resulted in spatial after-effects that cannot easily be overlooked. For instance, Palestine refugee camps have turned over time into socioeconomic centers of gravity and cores of spatial evolution. Yet such a position is not instantaneous. Amongst various reasons, it can be related, according to this paper, to a distinctive institutional climate that has been co-produced by the refugees, the host community and the state. This paper investigates the evolution of urban and spatial regulations in Jordan between 1948 and 1995, more specifically state regulations, community regulations and refugee self-regulation, all of which dynamically interacted during that period. The paper aims to unpack the relations between refugee camps and their environs to further explore the agency of such floating populations in establishing rooted networks that extend the boundaries of time and place. The paper's argument stems from the fact that the spatial configuration of urban systems is not only an outcome of a historical evolutionary process but also a result of interactions between the actors. The research operationalizes Marka camp in Jordan as a case study. Marka camp is one of the six "emergency" camps erected in 1968 to shelter 15,000 Palestine refugees and displaced persons who left the West Bank and Gaza Strip. Nowadays, the camp shelters more than 50,000 refugees in the same area of land. The camp is located in Russeifa, a city in Zarqa Governorate in Jordan. Together with Amman and Zarqa, Russeifa is part of a larger metropolitan area that is home to more than half of Jordan's businesses. The paper aspires to further understand the post-conflict strategies that were historically applied in Jordan and can be employed to handle more recent geopolitical challenges such as the Syrian refugee crisis. Methodological framework: the paper traces the evolution of refugee-camp regulating norms in Jordan in parallel with the horizontal and vertical evolution of the Marka camp and its surroundings. Consequently, the main methods employed are historical and mental tracing and interviews, in addition to the use of available aerial and archival photos of the Marka camp and its surroundings.
Keywords: forced migration, Palestine refugee camps, spatial agency, urban regulations
Procedia PDF Downloads 187
88 Health Risk Assessment from Potable Water Containing Tritium and Heavy Metals
Authors: Olga A. Momot, Boris I. Synzynys, Alla A. Oudalova
Abstract:
Obninsk is situated in the Kaluga region, 100 km southwest of Moscow, on the left bank of the Protva River. Several enterprises utilizing nuclear energy operate in the town. In regions where radiation-hazardous facilities are located, special attention has traditionally been paid to radioactive gas and aerosol releases into the atmosphere, liquid waste discharges into the Protva river, and groundwater pollution. The municipal intakes involve 34 wells arranged in a north-south sequence over 15 km along the foot of the left slope of the Protva river valley. The northern and southern water intakes are upstream and downstream of the town, respectively. They belong to river valley intakes with mixed feeding, i.e. precipitation infiltration is responsible for a smaller part of the groundwater, while a greater amount is formed by inflow from the Protva. The water intakes are maintained by the Protva river runoff, the volume of which depends on precipitation and watershed area. Groundwater contamination with tritium was first detected in the sanitary protection zone of the Institute of Physics and Power Engineering (SRC-IPPE) by Roshydromet researchers while implementing the "Program of radiological monitoring in the territory of nuclear industry enterprises". A comprehensive survey of the SRC-IPPE's industrial site and adjacent territories has revealed that research nuclear reactors and accelerators where tritium targets are applied, as well as radioactive waste storage facilities, can be considered potential sources of technogenic tritium. All the above sources are located within the sanitary controlled area of the intakes. Tritium activity in the water of springs and wells near the SRC-IPPE is about 17.4–3200 Bq/l. The observed values of tritium activity are below the intervention levels (7600 Bq/l for inorganic compounds and 3300 Bq/l for organically bound tritium). The risk has been assessed to estimate the possible effect of the considered tritium concentrations on human health. Data on tritium concentrations in pipeline drinking water were used for the calculations. The activity of ³H amounted to 10.6 Bq/l, corresponding to a risk from such water consumption of ~3·10⁻⁷ year⁻¹. This risk value is close in magnitude to the individual annual death risk for a population living near an NPP (1.6·10⁻⁸ year⁻¹); at the same time, it corresponds to the level of tolerable risk (10⁻⁶) and falls within the "risk optimization" range, i.e. the sphere in which economically sound measures for exposure risk reduction are planned. To estimate the chemical risk, physical and chemical analyses were made of the waters from all springs and wells near the SRC-IPPE. The chemical risk from groundwater contamination was estimated according to US EPA guidance. The risk of carcinogenic diseases from drinking water consumption amounts to 5·10⁻⁵. According to the accepted classification, the health risk in the case of spring water consumption is inadmissible. The compared assessments of the risk associated with tritium exposure, on the one hand, and the dangerous chemical (e.g. heavy metal) contamination of Obninsk drinking water, on the other, have confirmed that it is the chemical pollutants that are responsible for the health risk.
Keywords: radiation-hazardous facilities, water intakes, tritium, heavy metal, health risk
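For orientation, the sketch below shows the generic linear no-threshold ingestion-risk calculation behind figures like the ~3·10⁻⁷ year⁻¹ quoted above. The intake rate and the ICRP-style dose and risk coefficients are assumptions on my part (the paper does not quote its coefficients), so the output will not reproduce the paper's exact value.

```python
# Generic ingestion-risk sketch; all coefficients are assumed ICRP-style
# values, not the paper's, so the result differs from the reported ~3e-7/yr.
ACTIVITY = 10.6          # Bq/L, tritium in tap water (from the study)
INTAKE = 730.0           # L/year, assuming 2 L/day consumption
DOSE_COEFF = 1.8e-11     # Sv/Bq, ICRP ingestion coefficient for HTO (adult)
RISK_COEFF = 5.5e-2      # nominal fatal-cancer risk per Sv (ICRP 103)

dose = ACTIVITY * INTAKE * DOSE_COEFF    # Sv/year committed dose
risk = dose * RISK_COEFF                 # annual risk rate, 1/year
print(f"annual dose {dose:.2e} Sv, risk rate {risk:.1e} per year")
```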
Procedia PDF Downloads 240
87 Rebuilding Health Post-Conflict: Case Studies from Afghanistan, Cambodia, and Mozambique
Authors: Spencer Rutherford, Shadi Saleh
Abstract:
War and conflict negatively impact all facets of a health system; services cease to function, resources become depleted, and any semblance of governance is lost. Following the cessation of conflict, the rebuilding process includes a wide array of international and local actors. During this period, stakeholders must contend with various trade-offs, including balancing sustainable outcomes with immediate health needs, introducing health reform measures while also increasing local capacity, and reconciling external assistance with local legitimacy. Compounding these factors are additional challenges, including coordination amongst stakeholders, the re-occurrence of conflict, and ulterior motives from donors and governments, to name a few. Therefore, the present paper evaluated health system development in three post-conflict countries over a 12-year timeline. Specifically, health policies, health inputs (such as infrastructure and human resources), and measures of governance from the post-conflict periods of Afghanistan, Cambodia, and Mozambique were assessed against health outputs and other measures. All the post-conflict countries experienced similar challenges when rebuilding the health sector, including division and competition between donors, NGOs, and local institutions; urban and rural health inequalities; and the re-occurrence of conflict. However, the countries also employed unique and effective mechanisms for reconstructing their health systems, including government engagement of the NGO and private sectors, integration of competing factions into the same workforce, and collaborative planning for health policy. Based on these findings, best-practice development strategies were determined and compiled into a 12-year framework. Briefly, during the initial stage of the post-conflict period, primary stakeholders should work quickly to draft a national health strategy in collaboration with the government and focus on managing and coordinating NGOs through performance-based partnership agreements. With this scaffolding in place, the development community can then prioritize the reconstruction of primary health care centers, increasing and retaining health workers, and horizontal integration of immunization services. The final stages should then concentrate on transferring ownership of the health system to national institutions, implementing sustainable financing mechanisms, and phasing out NGO services. Overall, these findings contribute to post-conflict health system development by evaluating the process holistically and along a timeline, and they can be of further use to healthcare managers, policy-makers, and other health professionals.
Keywords: Afghanistan, Cambodia, health system development, health system reconstruction, Mozambique, post-conflict, state-building
Procedia PDF Downloads 159
86 Teaching Children about Their Brains: Evaluating the Role of Neuroscience Undergraduates in Primary School Education
Authors: Clea Southall
Abstract:
Many children leave primary school having formed preconceptions about their relationship with science. Thus, primary school represents a critical window for stimulating scientific interest in younger children. Engagement relies on the provision of hands-on activities coupled with an ability to capture a child's innate curiosity. This requires children to perceive science topics as interesting and relevant to their everyday life. Teachers and pupils alike have suggested the school curriculum be tailored to help stimulate scientific interest. Young children are naturally inquisitive about the human body; the brain is one topic that frequently engages pupils, although it is not currently included in the UK primary curriculum. Teaching children about the brain could have wider societal impacts, such as increasing knowledge of neurological disorders. However, many primary school teachers do not receive formal neuroscience training and may feel apprehensive about delivering lessons on the nervous system. This is exacerbated by a lack of educational neuroscience resources. One solution is for undergraduates to form partnerships with schools, delivering engaging lessons and supplementing teacher knowledge. The aim of this project was to evaluate the success of a short lesson on the brain delivered by an undergraduate neuroscientist to primary school pupils. Prior to entering schools, semi-structured online interviews were conducted with teachers to gain pedagogical advice, and relevant websites were searched for neuroscience resources. Subsequently, a single lesson plan was created comprising four hands-on activities. The activities were devised in a top-down manner, beginning with learning about the brain as an entity before focusing on individual neurons. Students were asked to label a 'brain map' to assess prior knowledge of brain structure and function. They viewed animal brains and created 'pipe-cleaner neurons' which were later used to depict electrical transmission. The same session was delivered by an undergraduate student to 570 key stage 2 (KS2) pupils across five schools in Leeds, UK. Post-session surveys, designed for teachers and pupils respectively, were used to evaluate the session. Children in all year groups had relatively poor knowledge of brain structure and function at the beginning of the session. When asked to label four brain regions with their respective functions, older pupils labeled a mean of 1.5 (± 1.0) brain regions compared to 0.8 (± 0.96) for younger pupils (p = 0.002). However, by the end of the session, 95% of pupils felt their knowledge of the brain had increased. Hands-on activities were rated most popular by pupils and were considered the most successful aspect of the session by teachers. Although only half the teachers were aware of neuroscience educational resources, nearly all (95%) felt they would have more confidence in teaching a similar session in the future. All teachers felt the session was engaging and that the content could be linked to the current curriculum. Thus, a short fifty-minute session can successfully enhance pupils' knowledge of a new topic: the brain. Partnerships with an undergraduate student can provide an alternative method for supplementing teacher knowledge, increasing teachers' confidence in delivering future lessons on the nervous system.
Keywords: education, neuroscience, primary school, undergraduate
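A small sketch of the kind of group comparison reported above (older vs. younger pupils' brain-map scores), using simulated scores drawn to match the quoted means and SDs; the group sizes and the choice of Welch's t-test are assumptions, since the abstract does not state the exact test used:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated 'brain map' scores (regions correctly labelled out of 4),
# matching the reported older 1.5 (±1.0) vs. younger 0.8 (±0.96).
older = np.clip(rng.normal(1.5, 1.0, 150), 0, 4)
younger = np.clip(rng.normal(0.8, 0.96, 150), 0, 4)

t, p = stats.ttest_ind(older, younger, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.4f}")
```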
Procedia PDF Downloads 212
85 Combustion Variability and Uniqueness in Cylinders of a Radial Aircraft Piston Engine
Authors: Michal Geca, Grzegorz Baranski, Ksenia Siadkowska
Abstract:
The work is part of a project aimed at developing innovative power and control systems for the high-power aircraft piston engine ASz62IR. The developed electronically controlled ignition system will reduce emissions of toxic compounds as a result of lowered fuel consumption, optimized combustion, and the engine's capability for efficient combustion of ecological fuels. The tested unit is an air-cooled, four-stroke gasoline engine with 9 cylinders in a radial setup, mechanically charged by a radial compressor powered by the engine crankshaft. The total engine cubic capacity is 29.87 dm³, and the compression ratio is 6.4:1. The maximum take-off power is 1000 HP at 2200 rpm. The maximum fuel consumption is 280 kg/h. The engine powers the aircraft An-2, M-18 "Dromader", DHC-3 "Otter", DC-3 "Dakota", GAF-125 "Hawk" and Y5. The main problems of the engine include imbalanced operation of the cylinders: differences between cylinders result in non-uniformity of their work. In a radial engine, the cylinder arrangement means that mixture movement takes place either with (lower cylinders) or against (upper cylinders) the direction of gravity. Preliminary tests confirmed the uneven operation of individual cylinders; the phenomenon is most intense at low speed and is visible in the cylinder pressure waveforms. Therefore, two studies were conducted to determine the impact of this phenomenon on engine performance: a simulation and real tests. A simplified simulation was conducted on an intake system element coated with a fuel film. The study shows an effect of gravity on the movement of the fuel film inside the radial engine intake channels: in both the lower and the upper inlet channels, the film flows downwards, because gravity assists the movement of the film in the lower cylinders' channels and opposes it in the upper cylinders' channels. Real tests on the ASz62IR aircraft engine were conducted in transient conditions (rapid changes of the excess air in each cylinder). Calculations were conducted for the mass of fuel reaching the cylinders, both theoretical and actual, and on this basis the fuel evaporation factors 'x' were determined. A simplified model of the fuel supply to the cylinder was therefore adopted. The model includes the time constant of the fuel film τ, the number of engine cycles γ needed to transport non-evaporated fuel along the intake pipe, and the time between successive cycles Δt. The results of model parameter identification are presented in the form of radar graphs, showing the average declines and increases of the injection time and the average values for both types of stroke. These studies showed that a change in the position of the cylinder causes changes in the formation of the fuel-air mixture and thus in the combustion process. Based on the simulation and experimental results, it was possible to develop individual ignition control algorithms. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
Keywords: radial engine, ignition system, non-uniformity, combustion process
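The τ/γ/Δt description above matches the shape of a classic wall-film ("X-tau") fuel transport model. Below is a minimal discrete-cycle sketch of such a model; the deposited fraction X and the numeric values of τ and γ are assumed for illustration, with Δt = 0.054 s corresponding to one four-stroke cycle at the 2200 rpm take-off speed:

```python
import math
from collections import deque

def cylinder_fuel(m_inj, X=0.3, tau=0.15, dt=0.054, gamma=2, cycles=50):
    """Discrete-cycle wall-film sketch: per cycle, a fraction X of the
    injected fuel m_inj deposits in the film, the film evaporates with
    time constant tau, and non-evaporated fuel needs gamma cycles to
    travel along the intake pipe. Returns fuel mass reaching the
    cylinder each cycle (arbitrary units). X, tau, gamma are assumed."""
    film = 0.0
    in_transit = deque([0.0] * gamma)          # fuel delayed along the pipe
    delivered = []
    for _ in range(cycles):
        evaporated = film * (1.0 - math.exp(-dt / tau))
        film += X * m_inj - evaporated         # film gains deposited fraction
        in_transit.append((1.0 - X) * m_inj)   # rest travels down the pipe
        delivered.append(in_transit.popleft() + evaporated)
    return delivered

# Approach to steady state (delivered mass converges to the injected mass)
print([round(m, 3) for m in cylinder_fuel(1.0)[:6]])
```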
Procedia PDF Downloads 367
84 Generation of Roof Design Spectra Directly from Uniform Hazard Spectra
Authors: Amin Asgarian, Ghyslaine McClure
Abstract:
Proper seismic evaluation of Non-Structural Components (NSCs) mandates an accurate estimation of floor seismic demands (i.e. acceleration and displacement demands). Most current international codes incorporate empirical equations to calculate the equivalent static seismic force for which NSCs and their anchorage systems must be designed. These equations, in general, are functions of the component mass and the peak seismic acceleration to which NSCs are subjected during the earthquake. However, recent studies have shown that these recommendations suffer from several shortcomings, such as neglecting the higher-mode effect, the tuning effect, the NSC damping effect, etc., which cause underestimation of the component seismic acceleration demand. This work aims to circumvent the aforementioned shortcomings of code provisions, and to improve on them, by proposing a simplified, practical, and yet accurate approach to generate acceleration Floor Design Spectra (FDS) directly from the corresponding Uniform Hazard Spectra (UHS) (i.e. design spectra for structural components). A database was used of 27 Reinforced Concrete (RC) buildings in which Ambient Vibration Measurements (AVM) had been conducted. The database comprises 12 low-rise, 10 medium-rise, and 5 high-rise buildings, all located in Montréal, Canada and designated as post-disaster buildings or emergency shelters. The buildings are subjected to a set of 20 compatible seismic records, and Floor Response Spectra (FRS) in terms of pseudo-acceleration are derived using the proposed approach for every floor of each building in both horizontal directions, considering 4 different damping ratios of NSCs (i.e. 2, 5, 10, and 20% viscous damping). Several parameters affecting NSC response are evaluated statistically. These parameters comprise the NSC damping ratio, tuning of the NSC natural period with one of the natural periods of the supporting structure, higher modes of the supporting structure, and the location of the NSC. The entire spectral region is divided into three distinct segments, namely the short-period, fundamental-period, and long-period regions. The derived roof floor response spectra for NSCs with 5% damping are compared with the 5% damping UHS, and a procedure is proposed to generate roof FDS for NSCs with 5% damping directly from the 5% damped UHS in each spectral region. The generated FDS is a powerful, practical, and accurate tool for the seismic design and assessment of acceleration-sensitive NSCs, particularly in existing post-critical buildings which have to remain functional even after the earthquake and cannot tolerate any damage to NSCs.
Keywords: earthquake engineering, operational and functional components (OFCs), operational modal analysis (OMA), seismic assessment and design
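For context, a floor response spectrum of the kind derived above is computed by sweeping single-degree-of-freedom oscillators over a floor acceleration history. The sketch below is a standard textbook routine (Newmark average-acceleration integration) applied to a synthetic record; it is not the authors' code or data:

```python
import numpy as np

def pseudo_accel_spectrum(ag, dt, periods, zeta=0.05):
    """Pseudo-acceleration response spectrum of a (floor) acceleration
    history ag sampled at dt, via Newmark average acceleration
    (gamma = 1/2, beta = 1/4), for a unit-mass SDOF oscillator."""
    Sa = []
    for T in periods:
        wn = 2 * np.pi / T
        k, c = wn**2, 2 * zeta * wn                 # m = 1
        u, v, a = 0.0, 0.0, -ag[0]                  # relative motion, p = -ag
        keff = k + 2 * c / dt + 4 / dt**2
        umax = 0.0
        for p in -ag[1:]:
            rhs = p + (4 * u / dt**2 + 4 * v / dt + a) + c * (2 * u / dt + v)
            un = rhs / keff
            vn = 2 * (un - u) / dt - v
            an = 4 * (un - u) / dt**2 - 4 * v / dt - a
            u, v, a = un, vn, an
            umax = max(umax, abs(u))
        Sa.append(wn**2 * umax)                     # PSa = wn^2 * max|u|
    return np.array(Sa)

# Usage sketch with a synthetic 2 Hz floor motion (not a study record)
dt = 0.01
ag = 0.2 * 9.81 * np.sin(2 * np.pi * 2.0 * np.arange(0, 10, dt))
periods = np.linspace(0.05, 3.0, 60)
Sa = pseudo_accel_spectrum(ag, dt, periods)
print(f"peak PSa = {Sa.max():.2f} m/s2 at T = {periods[Sa.argmax()]:.2f} s")
```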
Procedia PDF Downloads 238
83 Artificial Intelligence and Robotics in the Eye of Private Law with Special Regards to Intellectual Property and Liability Issues
Authors: Barna Arnold Keserű
Abstract:
In the last few years (what many scholars call the big data era), artificial intelligence (hereinafter AI) has received more and more attention from the public and from the different branches of science. What was previously mere science fiction is now becoming reality. AI and robotics often walk hand in hand, which changes not only business and industrial life but also has a serious impact on the legal system. The main research of the author focuses on these impacts in the field of private law, with special regard to liability and intellectual property issues. Many questions arise in these areas in connection with AI and robotics, where the boundaries are not sufficiently clear and different needs are articulated by the different stakeholders. Recognizing the urgent need for reflection, the Committee on Legal Affairs of the European Parliament adopted a Motion for a European Parliament Resolution A8-0005/2017 (of January 27th, 2017) in order to make recommendations to the Commission on civil law rules on robotics and AI. This document identifies some crucial uses of AI and/or robotics, e.g. autonomous vehicles, human job replacement in industry, and smart applications and machines, and it aims to give recommendations for the safe and beneficial use of AI and robotics. However, as the document says, there are no legal provisions that specifically apply to robotics or AI in IP law; existing legal regimes and doctrines can be readily applied to robotics, although some aspects appear to call for specific consideration, and the document calls on the Commission to support a horizontal and technologically neutral approach to intellectual property applicable to the various sectors in which robotics could be employed. AI can generate content worthy of copyright protection, but the question arises: who is the author, and who owns the copyright? The AI itself cannot be deemed the author, because that would mean it is legally equal to human persons. But there is the programmer who created the basic code of the AI, the undertaking that sells the AI as a product, and the user who gives inputs to the AI in order to create something new. Alternatively, AI-generated content may be so far removed from humans that there is no human author at all, in which case it belongs to the public domain. The same questions can be asked in connection with patents. The research aims to answer these questions within the current legal framework and tries to illuminate future possibilities for adapting these frameworks to socio-economic needs. Here, proper license agreements in the multilevel chain from the programmer to the end user become very important, because AI is intellectual property in itself that creates further intellectual property. This can also collide with data protection and property rules. The problems are similar in the field of liability. Different existing forms of liability can be used when AI or AI-led robotics cause damage, but it is unclear whether the result complies with economic and developmental interests.
Keywords: artificial intelligence, intellectual property, liability, robotics
Procedia PDF Downloads 206
82 Exposure to Radon on Air in Tourist Caves in Bulgaria
Authors: Bistra Kunovska, Kremena Ivanova, Jana Djounova, Desislava Djunakova, Zdenka Stojanovska
Abstract:
The carcinogenic effects of radon as a radioactive noble gas have been studied and show a strong correlation between radon exposure and lung cancer occurrence, even in the case of low radon levels. The major part of the natural radiation dose in humans is received by inhaling radon and its progeny, which originate from the decay chain of U-238. Indoor radon poses a substantial threat to human health when it builds up in confined spaces such as homes, mines and caves; the risk increases with the duration of exposure and is proportional to both the radon concentration and the time of exposure. Tourist caves are a case of special environmental conditions that may be affected by high radon concentrations. They are a recognized danger in terms of radon exposure of cave workers (guides, employees working in shops built above the cave entrances, etc.), but due to the sensitive nature of the cave environment, high concentrations cannot easily be removed: forced ventilation of the air in the caves is considered unthinkable due to the possible harmful effects on the microclimate, flora and fauna. The risks to human health posed by exposure to elevated radon levels in caves are not well documented. Various studies around the world often report very high concentrations of radon in caves and exposure of employees, but without a follow-up assessment of the overall impact on human health. This study was developed in the implementation of a national project to assess the potential health effects caused by exposure to elevated levels of radon in buildings with public access, under the National Science Fund of Bulgaria, in the framework of grant No КП-06-Н23/1/07.12.2018. The purpose of the work is to assess the radon levels in Bulgarian caves and the exposure of visitors and workers. The number of caves to survey (the sample size) was calculated for simple random selection from the 65 available caves (the sampling population) as 13 caves, with a 95% confidence level and a confidence interval (margin of error) of approximately 25%. The radon concentration in air at specific locations in the caves was measured using CR-39 nuclear track-etch detectors placed by members of the research team. Although all of the caves were formed in karst rocks, the radon levels differed considerably from each other (97–7575 Bq/m³). An assessment was performed of the influence of the orientation of the caves in the earth's surface (horizontal, inclined, vertical) on the radon concentration. The health hazards and the exposure risk caused by inhaling radon and its daughter products were evaluated for each surveyed cave. Reducing the time spent in the cave has been recommended in order to decrease the exposure of workers.
Keywords: tourist caves, radon concentration, exposure, Bulgaria
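The quoted sample size can be reproduced with the standard finite-population formula for a proportion; a minimal sketch, assuming z = 1.96 and the conservative p = 0.5 (neither is stated explicitly in the abstract):

```python
import math

def sample_size(N, margin, z=1.96, p=0.5):
    """Finite-population sample size for estimating a proportion."""
    n0 = z**2 * p * (1 - p) / margin**2           # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / N))     # finite-population correction

print(sample_size(N=65, margin=0.25))  # -> 13, matching the 13 caves drawn from 65
```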
Procedia PDF Downloads 192
81 The Effect of Finding and Development Costs and Gas Price on Basins in the Barnett Shale
Authors: Michael Kenomore, Mohamed Hassan, Amjad Shah, Hom Dhakal
Abstract:
Shale gas reservoirs have been of greater importance than shale oil reservoirs since 2009, and given the current state of the oil market, understanding the technical and economic performance of shale gas reservoirs is important. Using the Barnett shale as a case study, an economic model was developed to quantify the effect of finding and development costs and gas prices on the basins in the Barnett shale, using net present value as the evaluation parameter. A rate of return of 20% and a payback period of 60 months or less were used as the investment hurdle in the model. The Barnett was split into four basins (Strawn Basin, Ouachita Folded Belt, Fort Worth Syncline and Bend-arch Basin), with analysis conducted on each basin to provide a holistic outlook. The dataset consisted only of horizontal wells that started production between 2008 and 2015, with 1835 wells coming from the Strawn Basin, 137 wells from the Ouachita Folded Belt, 55 wells from the Bend-arch Basin and 724 wells from the Fort Worth Syncline. The data were analyzed initially in Microsoft Excel to determine the estimated ultimate recovery (EUR). The EUR ranges for each basin were loaded into the Palisade Risk software and a log-normal distribution, typical of Barnett shale wells, was fitted to the dataset. Monte Carlo simulation was then carried out over 1000 iterations to obtain a cumulative distribution plot showing the probabilistic distribution of EUR for each basin. From the cumulative distribution plot, the P10, P50 and P90 EUR values for each basin were used in the economic model. Gas production from an individual well with an EUR similar to the calculated EUR was chosen and rescaled to fit the calculated EUR values for each basin at the respective percentiles, i.e. P10, P50 and P90. The rescaled production was entered into the economic model to determine the effect of the finding and development cost and gas price on the net present value (10% discount rate/year), as well as to determine the scenarios that satisfied the proposed investment hurdle. The finding and development costs used in this paper (assumed to consist only of the drilling and completion costs) were £1 million, £2 million and £4 million, while the gas price was varied from $2/MCF to $13/MCF based on Henry Hub spot prices from 2008-2015. One of the major findings of this study was that wells in the Bend-arch Basin were the least economic, that higher gas prices are needed in basins containing non-core counties, and that 90% of the Barnett shale wells were not economic at all finding and development costs, irrespective of the gas price, in all the basins. This study helps to determine the percentage of wells that are economic at different ranges of costs and gas prices, the basins that are most economic, and the wells that satisfy the investment hurdle.
Keywords: shale gas, Barnett shale, unconventional gas, estimated ultimate recovery
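A compressed sketch of the probabilistic workflow described above: fit a log-normal to well EURs, Monte Carlo sample it, read off P90/P50/P10, and push each through a toy NPV screen. All parameter values (the log-normal fit, the flat production profile, and the cost and price inputs) are invented placeholders rather than the study's figures, and the NPV routine is far simpler than a decline-curve cash-flow model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for one basin's well EURs in BCF (synthetic; the study used its
# Excel-derived EURs and the Palisade software).
eur_wells = rng.lognormal(mean=0.3, sigma=0.6, size=1835)

mu, sigma = np.log(eur_wells).mean(), np.log(eur_wells).std()
sims = rng.lognormal(mu, sigma, 1000)               # 1000-iteration Monte Carlo
p90, p50, p10 = np.percentile(sims, [10, 50, 90])   # oilfield convention: P90 is the low case

def npv(eur_bcf, gas_price, fd_cost, years=25, rate=0.10):
    """Toy NPV: EUR produced on a flat schedule, F&D cost paid up front.
    gas_price in $/MCF; fd_cost in the same currency as the result."""
    annual_mcf = eur_bcf * 1e6 / years              # 1 BCF = 1e6 MCF
    cash = annual_mcf * gas_price
    return sum(cash / (1 + rate) ** t for t in range(1, years + 1)) - fd_cost

for label, eur in (("P90", p90), ("P50", p50), ("P10", p10)):
    print(label, f"EUR = {eur:.2f} BCF, NPV = {npv(eur, 4.0, 2e6):,.0f}")
```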
Procedia PDF Downloads 302
80 Technology of Electrokinetic Disintegration of Virginia Fanpetals (Sida hermaphrodita) Biomass in a Biogas Production System
Authors: Mirosław Krzemieniewski, Marcin Zieliński, Marcin Dębowski
Abstract:
Electrokinetic disintegration is one of the high-voltage electric methods. The design of such systems is exceptionally simple: biomass flows through a system of pipes with electrodes mounted alongside that generate an electric field. Discharges in the electric field deform cell walls and lead to their successive perforation, thereby making their contents easily available to bacteria. The spark-over occurs between the electrode surface and the pipe jacket, which is the second pole and closes the circuit. The voltage ranges from 10 to 100 kV. The electrodes are supplied with ordinary single-phase grid current (230 V, 50 Hz), which is converted into 24 V direct current in modules serving the individual electrodes; this current directly feeds the electrodes. The installation is completely safe because the generated current does not exceed 250 mA and the conductors are grounded; therefore, there is no risk of electric shock to the personnel, even in the case of failure or incorrect connection. The low current means that the energy consumption of each electrode is extremely low (only 35 W per electrode) compared with other methods of disintegration. The DN150 pipes with electrodes are made of acid-proof steel and connected at both ends with flanged 90° elbows. The available S- and U-type pipes enable very convenient fitting of the system into existing installations and rooms and facilitate space management in new applications. The system of pipes for electrokinetic disintegration may be installed horizontally, vertically, askew, on special stands, or directly on the wall of a room. The number of pipes and electrodes is determined by the operating conditions as well as the quantity of substrate, the type of biomass, the dry matter content, the method of disintegration (single-pass or circulatory), the mounting site, etc. The most effective method involves pre-treatment of the substrate, which may be pumped through the disintegration system on the way to the fermentation tank or recirculated through a buffered intermediate tank (substrate mixing tank). Destruction of the biomass structure in the process of electrokinetic disintegration shortens the substrate retention time in the tank and accelerates biogas production. A significant intensification of the fermentation process was observed in systems operating at technical scale, with the greatest increase in biogas production reaching 18%. A secondary effect, highly significant for the energy balance, is a tangible decrease in the energy input of the tank agitators. It is due to the reduced viscosity of the biomass after disintegration and may result in energy savings reaching 20-30% of the previously noted consumption. Other observed phenomena include a reduction of the surface scum layer, a reduced foaming tendency of the sewage, and a successive decrease in the quantity of bottom sludge banks. Considering the above, the electrokinetic disintegration system seems a very interesting and valuable solution within the range of specialist equipment offered for processing plant biomass, including Virginia fanpetals, before methane fermentation.
Keywords: electrokinetic disintegration, biomass, biogas production, fermentation, Virginia fanpetals
Procedia PDF Downloads 37779 On-Ice Force-Velocity Modeling Technical Considerations
Authors: Dan Geneau, Mary Claire Geneau, Seth Lenetsky, Ming-Chang Tsai, Marc Klimstra
Abstract:
Introduction: Horizontal force-velocity profiling (HFVP) involves modeling an athlete's linear sprint kinematics to estimate valuable maximum force and velocity metrics. This approach to performance modeling has been used in field-based team sports and has recently been introduced to ice-hockey as a forward skating performance assessment. While preliminary data have been collected on ice, the distance constraints of the on-ice test restrict the ability of athletes to reach their maximal velocity, which limits the model's ability to estimate athlete performance effectively. This is especially true of more elite athletes. This report explores whether athletes on ice are able to reach a velocity plateau similar to what has been seen in overground trials. Fourteen male Major Junior ice-hockey players (BW = 83.87 ± 7.30 kg, height = 188 ± 3.4 cm, age = 18 ± 1.2 years, n = 14) were recruited. For on-ice sprints, participants completed a standardized warm-up consisting of skating and dynamic stretching and a progression of three skating efforts from 50% to 95%. Following the warm-up, participants completed three on-ice 45 m sprints, with three minutes of rest between trials. For overground sprints, participants completed a dynamic warm-up similar to that of the on-ice trials, followed by three 40 m overground sprint trials. For each trial (on-ice and overground), a radar device (Stalker ATS II, Texas, USA) aimed at the participant's waist was used to collect instantaneous velocity. Sprint velocities were modeled using a custom Python (version 3.2) script with a mono-exponential function, similar to previous work. To determine whether on-ice trials achieved a maximum velocity (plateau), minimum acceleration values of the modeled data at the end of the sprint were compared (using a paired t-test) between on-ice and overground trials. Significant differences (P < 0.001) between overground and on-ice minimum accelerations were observed. On-ice trials consistently showed higher final acceleration values, indicating that a maximum maintained velocity (plateau) had not been reached. Based on these preliminary findings, it is suggested that reliable HFVP metrics cannot yet be collected from all ice-hockey populations using current methods. Elite male populations were not able to achieve a velocity plateau similar to what has been seen in overground trials, indicating the absence of a maximum velocity measure. With current velocity and acceleration modeling techniques, which depend on a velocity plateau, these results indicate the potential for error in on-ice HFVP measures. These findings therefore suggest that a greater on-ice sprint distance may be required, or that other velocity modeling techniques, in which maximal velocity is not required for a complete profile, are needed.
Keywords: ice-hockey, sprint, skating, power
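As a rough illustration of the modeling step described above, the sketch below fits a mono-exponential velocity model to synthetic radar data and extracts the final acceleration used in the plateau comparison; the data, parameter values, and function names are illustrative assumptions, not the authors' actual script.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exponential(t, v_max, tau):
    """Mono-exponential sprint model: v(t) = v_max * (1 - exp(-t / tau))."""
    return v_max * (1.0 - np.exp(-t / tau))

# Synthetic radar trace (time in s, velocity in m/s) standing in for radar output
np.random.seed(0)
t = np.linspace(0.1, 7.0, 140)
v = mono_exponential(t, 8.5, 1.3) + np.random.normal(0.0, 0.15, t.size)

(v_max_fit, tau_fit), _ = curve_fit(mono_exponential, t, v, p0=(8.0, 1.0))

# Acceleration is the analytical derivative of the model; its value at the
# last sample is the "final acceleration" compared between conditions.
a_final = (v_max_fit / tau_fit) * np.exp(-t[-1] / tau_fit)
print(f"v_max = {v_max_fit:.2f} m/s, tau = {tau_fit:.2f} s, "
      f"final acceleration = {a_final:.3f} m/s^2")
```

A trial that truly plateaus yields a final acceleration near zero; the higher on-ice values reported above are exactly what this quantity would flag.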
Procedia PDF Downloads 10178 Weapon-Being: Weaponized Design and Object-Oriented Ontology in Hypermodern Times
Authors: John Dimopoulos
Abstract:
This proposal attempts a refabrication of Heidegger's classic thing-being and object-being analysis in order to provide better ontological tools for understanding contemporary culture, technology, and society. In his work, Heidegger sought to understand and comment on the problem of technology in an era of rampant innovation and increased perils for society and the planet. Today we seem to be at another crossroads in this course, coming after postmodernity, during which the dreams and dangers of modernity, augmented by the critical speculations of the post-war era, took shape. The new era in which we are now living, referred to as hypermodernity by researchers in various fields such as architecture and cultural theory, is defined by the horizontal implementation of digital technologies, cybernetic networks, and mixed reality. Technology today is rapidly approaching a turning point, namely the point of no return for humanity's supervision over its creations. The techno-scientific civilization of the 21st century creates a series of problems, progressively more difficult and complex to solve and impossible to ignore: climate change, data safety, cyber depression, and digital stress being some of the most prevalent. Humans often have no other option than to address technology-induced problems with even more technology, as in the case of neural networks, machine learning, and AI, thus widening the gap between creating technological artifacts and understanding their broad impact and possible future development. As all technical disciplines, and particularly design, become enmeshed in a matrix of digital hyper-objects, a conceptual toolbox that allows us to handle the new reality becomes more and more necessary. Weaponized design, prevalent in many fields such as social and traditional media, urban planning, industrial design, advertising, and the internet in general, hints towards an increase in conflicts. These conflicts between tech companies, stakeholders, and users, with implications for politics, work, education, and production, as apparent in the cases of Amazon workers' strikes, Donald Trump's 2016 campaign, the Facebook and Microsoft data scandals, and more, are often non-transparent to the wider public's eye, thus consolidating new elites and technocratic classes and making the public scene less and less democratic. The new category proposed, weapon-being, is outlined with respect to the basic function of reducing complexity, subtracting materials, actants, and parameters, not strictly in favor of a humanistic re-orientation but within a more inclusive ontology of objects and subjects. Utilizing insights of Object-Oriented Ontology (OOO) and its schematization of technological objects, an outline for a radical ontology of technology is approached.
Keywords: design, hypermodernity, object-oriented ontology, weapon-being
Procedia PDF Downloads 15377 Electron Bernstein Wave Heating in the Toroidally Magnetized System
Authors: Johan Buermans, Kristel Crombé, Niek Desmet, Laura Dittrich, Andrei Goriaev, Yurii Kovtun, Daniel López-Rodriguez, Sören Möller, Per Petersson, Maja Verstraeten
Abstract:
The International Thermonuclear Experimental Reactor (ITER) will rely on three sources of external heating to produce and sustain a plasma: Neutral Beam Injection (NBI), Ion Cyclotron Resonance Heating (ICRH), and Electron Cyclotron Resonance Heating (ECRH). ECRH is a way to heat the electrons in a plasma by resonant absorption of electromagnetic waves. The energy of the electrons is transferred indirectly to the ions by collisions. The electron cyclotron heating system can be directed to deposit heat in particular regions in the plasma (https://www.iter.org/mach/Heating). ECRH at the fundamental resonance in X-mode is limited by a low cut-off density. Electromagnetic waves cannot propagate in the region between this cut-off and the Upper Hybrid Resonance (UHR) and cannot reach the Electron Cyclotron Resonance (ECR) position. Higher harmonic heating is hence preferred in present-day heating scenarios to overcome this problem. Additional power deposition mechanisms can occur above this threshold to increase the plasma density. These include collisional losses in the evanescent region, resonant power coupling at the UHR, tunneling of the X-wave with resonant coupling at the ECR, and conversion to the Electron Bernstein Wave (EBW) with resonant coupling at the ECR. A more profound knowledge of these deposition mechanisms can help determine the optimal plasma production scenarios. Several ECRH experiments were performed on the TOroidally MAgnetized System (TOMAS) to identify the conditions for Electron Bernstein Wave (EBW) heating. Density and temperature profiles are measured with movable triple Langmuir probes in the horizontal and vertical directions. Measurements of the forward and reflected power allow evaluation of the coupling efficiency. Optical emission spectroscopy and camera images also contribute to plasma characterization. The influence of the injected power, magnetic field, gas pressure, and wave polarization on the different deposition mechanisms is studied, and the contribution of the Electron Bernstein Wave is evaluated. The TOMATOR 1D hydrogen-helium plasma simulator numerically describes the evolution of currentless magnetized radio-frequency plasmas in a tokamak based on Braginskii's continuity and heat balance equations. This code was initially benchmarked with experimental data from TCV to determine the transport coefficients. The code is used to model the plasma parameters and the power deposition profiles, and the modeling is compared with the data from the experiments.
Keywords: electron Bernstein wave, Langmuir probe, plasma characterization, TOMAS
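A minimal sketch of the resonance bookkeeping behind the cut-off/UHR/ECR discussion above, using standard cold-plasma formulas; the field and density values are illustrative and are not TOMAS operating parameters.

```python
import numpy as np
from scipy.constants import e, m_e, epsilon_0

def f_ce(B):
    """Electron cyclotron frequency f_ce = e*B / (2*pi*m_e), in Hz."""
    return e * B / (2.0 * np.pi * m_e)

def f_pe(n_e):
    """Electron plasma frequency f_pe = sqrt(n_e*e^2 / (eps0*m_e)) / (2*pi), in Hz."""
    return np.sqrt(n_e * e**2 / (epsilon_0 * m_e)) / (2.0 * np.pi)

def f_uh(B, n_e):
    """Upper hybrid resonance: f_UH^2 = f_ce^2 + f_pe^2."""
    return np.hypot(f_ce(B), f_pe(n_e))

# Illustrative values only: B chosen so that f_ce is near a 2.45 GHz source
B = 0.0875   # magnetic field, T
n_e = 1e17   # electron density, m^-3
print(f"f_ce = {f_ce(B)/1e9:.2f} GHz, f_pe = {f_pe(n_e)/1e9:.2f} GHz, "
      f"f_UH = {f_uh(B, n_e)/1e9:.2f} GHz")
```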
Procedia PDF Downloads 9676 Modelling of Air-Cooled Adiabatic Membrane-Based Absorber for Absorption Chillers Using Low Temperature Solar Heat
Authors: M. Venegas, M. De Vega, N. García-Hernando
Abstract:
Absorption cooling chillers have received growing attention over the past few decades, as they allow the use of low-grade heat to produce a cooling effect. The combination of this technology with solar thermal energy in the summer period can reduce the electricity consumption peak due to air-conditioning. One of the main components, the absorber, is designed for simultaneous heat and mass transfer. Usually, shell-and-tube heat exchangers are used, which are large and heavy. Cooling water from a cooling tower is conventionally used to extract the heat released during the absorption and condensation processes. These are clear drawbacks for the generalization of absorption technology, limiting its contribution to the reduction of CO2 emissions, particularly for the H2O-LiBr solution, which can work with low-temperature heat sources such as those provided by solar panels. In the present work, a promising new technology is under study, consisting of the use of membrane contactors in adiabatic microchannel mass exchangers. The configuration proposed here consists of one or several modules (depending on the cooling capacity of the chiller) that contain two vapour channels, separated from the solution by adjacent microporous membranes. The solution is confined in rectangular microchannels. A plastic or synthetic wall separates adjacent solution channels. The solution entering the absorber is previously subcooled using ambient air. In this way, the need for a cooling tower is avoided. A model of the proposed configuration is developed based on mass and energy balances, and correlations were selected to predict the heat and mass transfer coefficients. The concentrations and temperatures along the channels cannot be explicitly determined from the resulting set of equations. For this reason, the equations were implemented in a computer code using the Engineering Equation Solver software, EES™. With the aim of minimizing the absorber volume to reduce the size of absorption cooling chillers, the ratio between the cooling power of the chiller and the absorber volume (R) is calculated. Its variation along the solution channels is shown, allowing its optimization for selected operating conditions. For the case considered, the solution channel length is recommended to be shorter than 3 cm. The maximum values of R obtained in this work are higher than those found for optimized horizontal falling-film absorbers using the same solution. The results also show the variation of R and the chiller efficiency (COP) for different ambient temperatures and desorption temperatures typically obtained using flat-plate solar collectors. The proposed configuration of an adiabatic membrane-based absorber using ambient air to subcool the solution is a good technology for reducing the size of absorption chillers, allowing the use of low-temperature solar heat and avoiding the need for cooling towers.
Keywords: adiabatic absorption, air-cooled, membrane, solar thermal energy
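The figure of merit R lends itself to a simple parametric sketch. The snippet below is a back-of-the-envelope illustration only: the exponentially saturating absorption flux, the geometry, and all numerical values are assumptions made here for demonstration and do not reproduce the authors' EES model.

```python
import numpy as np

H_FG = 2.45e6  # approximate latent heat of water vapour, J/kg

def cooling_to_volume_ratio(length, width, height, flux0, l_sat):
    """R = cooling power / absorber volume, in kW/m^3.

    Assumes (for illustration) that the local absorbed vapour flux decays
    exponentially as the solution saturates along the channel:
        m''(x) = flux0 * exp(-x / l_sat)
    Cooling power is approximated as absorbed vapour flow times latent heat.
    """
    x = np.linspace(0.0, length, 500)
    flux = flux0 * np.exp(-x / l_sat)                       # kg/(m^2 s)
    m_dot = np.sum(0.5 * (flux[1:] + flux[:-1]) * np.diff(x)) * width
    q_cooling = m_dot * H_FG                                # W
    return q_cooling / (length * width * height) / 1e3

# Hypothetical geometry and flux parameters
for L in (0.01, 0.03, 0.06, 0.10):                          # channel length, m
    r = cooling_to_volume_ratio(L, width=0.1, height=0.004, flux0=2e-3, l_sat=0.02)
    print(f"L = {L*100:4.1f} cm -> R = {r:7.1f} kW/m^3")
```

Because the absorbed flux saturates while the volume keeps growing linearly with length, R falls off for longer channels, which is consistent with the sub-3 cm recommendation above.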
Procedia PDF Downloads 28675 Competitive Effects of Differential Voting Rights and Promoter Control in Indian Start-Ups
Authors: Prateek Bhattacharya
Abstract:
The definition of 'control' in India is a rapidly evolving concept, owing to the varying rights attached to different securities. Shares with differential voting rights (DVRs) provide the holder with rights as to voting that differ from those of ordinary equity shareholders of the company. Such DVRs can carry either superior or inferior voting rights, where DVRs with superior voting rights amount to providing the holder with golden shares in the company. While DVRs are not a novel concept in India, having been recognized since 2000, they were placed on the back burner by the Securities and Exchange Board of India (SEBI) in 2010, when the issuance of DVRs with superior voting rights was restricted. In June 2019, the SEBI rekindled the ebbing fire of DVRs, keeping in mind the fast-paced nature of the global economy, the government's faith that India's 'new age technology companies' (i.e., Start-Ups) will lead the charge in achieving its goal of India becoming a $5 trillion economy by 2024, and recognizing that the promoters of such Start-Ups seek to raise capital without losing control over their companies. DVRs with superior voting rights guarantee promoters up to 74% of the shareholding in Start-Ups for a period of 5 years, meaning that the holder of such DVRs can exercise sole control and material influence over the company for that period. This manner of control has the potential to cause both pro-competitive and anti-competitive effects in the markets where these companies operate. On the one hand, DVRs will allow Start-Up promoters/founders to retain control of their companies and protect their business interests from external elements such as private/public investors; in a scenario where such investors have multiple investments in firms engaged in associated lines of business (whether on a horizontal or vertical level) and would seek to influence these firms to enter into potentially anti-competitive arrangements with one another, DVRs will enable the promoters to thwart such scenarios. On the other hand, promoters/founders who themselves have multiple investments in Start-Ups in associated lines of business run the risk of influencing these associated Start-Ups to engage in potentially anti-competitive arrangements in the name of profit maximisation. This paper shall be divided into three parts: Part I shall deal with the concept of 'control', as deliberated upon and decided by the SEBI and the Competition Commission of India (CCI) under both company/securities law and competition law; Part II shall review this definition of 'control' through the lens of DVRs; and Part III shall discuss the aforementioned potential pro-competitive and anti-competitive effects caused by DVRs by examining the current Indian Start-Up scenario. The paper shall conclude by providing suggestions for the CCI to incorporate a clearer and more progressive concept of 'control'.
Keywords: competition law, competitive effects, control, differential voting rights, DVRs, investor shareholding, merger control, start-ups
Procedia PDF Downloads 12474 Development of the Food Market of the Republic of Kazakhstan in the Field of Milk Processing
Authors: Gulmira Zhakupova, Tamara Tultabayeva, Aknur Muldasheva, Assem Sagandyk
Abstract:
The development of technology and the production of products with increased biological value based on the use of natural food raw materials are important tasks in the food market policy of the Republic of Kazakhstan. For Kazakhstan, livestock farming, in particular sheep farming, is the most ancient and developed industry and way of life. The history of the Kazakh people is largely connected with this type of agricultural production, with established traditions of using dairy products made from sheep's milk. Therefore, the development of new technologies for sheep's milk remains relevant. In addition, one of the most promising areas for the development of food technology for therapeutic and prophylactic purposes is sheep milk products as a source of protein, immunoglobulins, minerals, vitamins, and other biologically active compounds. This article presents the results of research on milk processing technology. The objective of the study is to examine the possibilities of processing sheep milk and its role in human nutrition, together with the results of research to improve the technology of sheep milk products. The studies were carried out on the basis of sanitary and hygienic requirements for dairy products, in accordance with the following test methods. For microbiological analysis, we used the horizontal method for identifying, counting, and serotyping Salmonella bacteria in a given mass or volume of product. Nutritional value is a complex of properties of food products that meet human physiological needs for energy and basic nutrients. The protein mass fraction was determined by the Kjeldahl method. This method is based on the mineralization of a milk sample with concentrated sulfuric acid in the presence of an oxidizing agent, an inert salt (potassium sulfate), and a catalyst (copper sulfate). In this process, the amino groups of the protein are converted into ammonium sulfate dissolved in sulfuric acid. The vitamin composition was determined by HPLC. The content of mineral substances in the studied samples was determined by atomic absorption spectrophotometry. The study identified the technological parameters of sheep milk products and determined the prospects for further research on them. Microbiological analyses were used to assess product safety; no deviations from the norm were identified, indicating the high safety of the products under study. In terms of nutritional value, the resulting products are high in protein, and favourable data on amino acid content were also obtained. The results will be used in the food industry and will serve as recommendations for manufacturers.
Keywords: dairy, milk processing, nutrition, colostrum
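The Kjeldahl determination mentioned above reduces to simple arithmetic once the titration is done. The sketch below assumes the standard nitrogen-to-protein conversion factor of 6.38 used for dairy; the titration values are hypothetical.

```python
def kjeldahl_protein(v_sample_ml, v_blank_ml, acid_normality, sample_mass_g,
                     conversion_factor=6.38):
    """Protein mass fraction (%) from a Kjeldahl titration.

    Nitrogen (%) = 1.4007 * N_acid * (V_sample - V_blank) / m_sample(g),
    Protein (%)  = Nitrogen (%) * conversion factor (6.38 for dairy).
    """
    nitrogen_pct = 1.4007 * acid_normality * (v_sample_ml - v_blank_ml) / sample_mass_g
    return nitrogen_pct * conversion_factor

# Hypothetical titration of a 1.0 g sheep-milk sample with 0.1 N acid
print(f"protein = {kjeldahl_protein(6.8, 0.2, 0.1, 1.0):.2f} %")  # ~5.9 %
```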
Procedia PDF Downloads 5773 Temporal and Spatio-Temporal Stability Analyses in Mixed Convection of a Viscoelastic Fluid in a Porous Medium
Authors: P. Naderi, M. N. Ouarzazi, S. C. Hirata, H. Ben Hamed, H. Beji
Abstract:
The stability of mixed convection in a Newtonian fluid medium heated from below and cooled from above, also known as the Poiseuille-Rayleigh-Bénard problem, has been extensively investigated over the past decades. To our knowledge, mixed convection in porous media has received much less attention in the published literature. The present paper extends the mixed convection problem in porous media to the case of a viscoelastic fluid flow, owing to its numerous environmental and industrial applications, such as the extrusion of polymer fluids, solidification of liquid crystals, suspension solutions, and petroleum activities. Without a superimposed through-flow, the natural convection problem of a viscoelastic fluid in a saturated porous medium has already been treated, including the effects of the viscoelastic properties of the fluid on the linear and nonlinear dynamics of the thermoconvective instabilities. The elasticity of the fluid can lead either to a Hopf bifurcation, giving rise to oscillatory structures in the strongly elastic regime, or to a stationary bifurcation in the weakly elastic regime. The objective of this work is to examine the influence of the main horizontal flow on the linear characteristics of these two types of instabilities. Under the Boussinesq approximation and Darcy's law extended to a viscoelastic fluid, a temporal stability approach shows that the conditions for the appearance of longitudinal rolls are identical to those found in the absence of through-flow. For general three-dimensional (3D) perturbations, a Squire transformation allows the deduction of the complex frequencies associated with the 3D problem from those obtained by solving the two-dimensional one. The numerical resolution of the eigenvalue problem shows that the through-flow has a destabilizing effect and selects a convective configuration organized in purely transversal rolls that oscillate in time and propagate in the direction of the main flow. In addition, using the mathematical formalism of absolute and convective instabilities, we study the nature of unstable three-dimensional disturbances. It is shown that, for a non-vanishing through-flow, general three-dimensional instabilities are convectively unstable, which means that, in the absence of a continuous noise source, these instabilities drift out of the porous medium and no long-term pattern is observed. In contrast, purely transversal rolls may exhibit a transition to an absolute instability regime and therefore affect the porous medium everywhere, including in the absence of a noise source. The absolute instability threshold, the frequency, and the wave number associated with purely transversal rolls are determined as functions of the Péclet number and the viscoelastic parameters. The results are discussed and compared to those obtained from laboratory experiments on Newtonian fluids.
Keywords: instability, mixed convection, porous media, viscoelastic fluid
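For readers unfamiliar with the constitutive law named above, the following is a hedged sketch in standard notation: the relaxation time λ₁ and retardation time λ₂ follow the usual Oldroyd-B-type extension of Darcy's law found in the viscoelastic porous-convection literature, and the authors' exact formulation may differ.

```latex
% Darcy's law extended to a viscoelastic (Oldroyd-B type) fluid in a porous
% medium, with relaxation time \lambda_1 and retardation time \lambda_2
% (assumed notation, not necessarily the authors' exact formulation):
\left(1 + \lambda_1 \frac{\partial}{\partial t}\right)
  \left( \nabla p - \rho \mathbf{g} \right)
  = -\frac{\mu}{K}
    \left(1 + \lambda_2 \frac{\partial}{\partial t}\right) \mathbf{u}
% The Newtonian Darcy law is recovered for \lambda_1 = \lambda_2 = 0;
% oscillatory (Hopf) onset arises in the strongly elastic regime
% (\lambda_1 \gg \lambda_2), stationary onset in the weakly elastic regime.
```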
Procedia PDF Downloads 34172 Integration of Gravity and Seismic Methods in the Geometric Characterization of a Dune Reservoir: Case of the Zouaraa Basin, NW Tunisia
Authors: Marwa Djebbi, Hakim Gabtni
Abstract:
Gravity is a continuously advancing method that has become a mature technology for geological studies. It is increasingly used to complement and constrain traditional seismic data, and even as the only tool to obtain information on the subsurface. In fact, in some regions the seismic data, if available, are of poor quality and hard to interpret. Such is the case for the current study area. The Nefza zone is part of the Tellian fold-and-thrust belt domain in northwest Tunisia. It is essentially made of a pile of allochthonous units resulting from a major Neogene tectonic event, and its tectonic and stratigraphic development has always been a subject of controversy. Considering the geological and hydrogeological importance of this area, a detailed interdisciplinary study has been conducted integrating geology, seismic, and gravity techniques. The interpretation of the gravity data allowed the delimitation of the dune reservoir and the identification of the regional lineaments contouring the area. It revealed the presence of three gravity lows corresponding to the Zouara and Ouchtata dunes, separated by a positive gravity axis following the Ain Allega-Aroub Er Roumane trend. The Bouguer gravity map illustrated the compartmentalization of the Zouara dune into two depressions separated by a NW-SE anomaly trend. This configuration was confirmed by the vertical derivative map, which showed the individualization of two depressions with slightly different anomaly values. The horizontal gravity gradient magnitude was computed in order to determine the different geological features present in the studied area. It indicated the presence of parallel NE-SW folds, consistent with the major Atlasic direction; NW-SE and E-W trends were also identified. Tracing of the gradient maxima confirmed this direction through the presence of NE-SW faults, mainly the Ghardimaou-Cap Serrat accident. The poor quality of the available seismic sections and the absence of borehole data in the region, apart from a few hydraulic wells that have been drilled and that show the heterogeneity of the dune substratum, required gravity modeling of this challenging area in order to characterize the geometry of the dune reservoir and to determine the stratigraphic series underneath these deposits. For more detailed and accurate results, the scale of the study will be reduced in coming research, and a more precise method will be elaborated: the 4D microgravity survey. This approach is considered an extension of the gravity method, with time as its fourth dimension. It will allow continuous and repeated monitoring of fluid movement in the subsurface at the microgal (µGal) scale. The gravity effect results from the monthly variation of the dynamic groundwater level, which correlates with rainfall during different periods.
Keywords: 3D gravity modeling, dune reservoir, heterogeneous substratum, seismic interpretation
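A minimal sketch of the horizontal gravity gradient magnitude computation mentioned above, assuming a gridded Bouguer anomaly; the grid and anomaly values are synthetic.

```python
import numpy as np

def horizontal_gradient_magnitude(bouguer, dx, dy):
    """HGM(x, y) = sqrt((dg/dx)^2 + (dg/dy)^2) of a gridded Bouguer anomaly.

    Ridges (maxima) of the HGM track steep lateral density contrasts,
    i.e. faults and geological contacts, which is why tracing the maxima
    highlights structures such as NE-SW faults.
    """
    dg_dy, dg_dx = np.gradient(bouguer, dy, dx)  # axis 0 is y, axis 1 is x
    return np.hypot(dg_dx, dg_dy)

# Synthetic 2 km-spaced grid with a step standing in for a fault contact
x, y = np.meshgrid(np.arange(0, 50e3, 2e3), np.arange(0, 50e3, 2e3))
anomaly = 5.0 * np.tanh((y - x) / 5e3)           # mGal
hgm = horizontal_gradient_magnitude(anomaly, dx=2e3, dy=2e3)
print(f"max HGM = {hgm.max():.2e} mGal/m")
```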
Procedia PDF Downloads 30271 Preliminary Design, Production and Characterization of a Coral and Alginate Composite for Bone Engineering
Authors: Sthephanie A. Colmenares, Fabio A. Rojas, Pablo A. Arbeláez, Johann F. Osma, Diana Narvaez
Abstract:
The loss of functional tissue is a ubiquitous and expensive health care problem, with very limited treatment options for these patients. The gold standard for large bone damage is cadaveric bone as an allograft with stainless steel support; however, this solution only applies to bones with simple morphologies (long bones), has a limited material supply, and presents long-term problems regarding mechanical strength, integration, differentiation, and induction of native bone tissue. Therefore, the fabrication of a scaffold with biological, physical, and chemical properties similar to those of human bone, together with a fabrication method that allows morphology manipulation, is the focus of this investigation. Towards this goal, an alginate and coral matrix was created using two production techniques; the coral was chosen because of its chemical composition and the alginate due to its compatibility and mechanical properties. In order to construct the coral-alginate scaffold, the following methodology was employed: cleaning of the coral, its pulverization, scaffold fabrication, and finally mechanical and biological characterization. The experimental design had two factors, milling method and proportion of alginate and coral, with two and three levels respectively, using 5 replicates. The coral was cleaned with sodium hypochlorite and hydrogen peroxide in an ultrasonic bath. It was then milled with both a horizontal mill and a ball mill in order to evaluate the morphology of the particles obtained. After this, using a combination of alginate and coral powder with water as a binder, scaffolds of 1 cm³ were printed with a Spectrum™ Z510 3D printer. This resulted in solid cubes that resisted small compressive stresses. Then, using an ESQUIM DP-143 silicone mold, the constructs used for the mechanical and biological assays were made. An INSTRON 2267® was used for the compression tests; the density and porosity were calculated with an analytical balance, and the biological tests were performed using cell cultures with VERO fibroblasts and a Scanning Electron Microscope (SEM) as the visualization tool. The Young's moduli were dependent on the pulverization method, the proportion of coral and alginate, and the interaction between these factors. The maximum value was 5.4 MPa for the 50/50 proportion of alginate and horizontally milled coral. The biological assay showed more extracellular matrix in the scaffolds containing more alginate and less coral. The density and porosity were proportional to the amount of coral in the powder mix. These results showed that this composite has potential as a biomaterial, but its behavior is elastic with a small Young's modulus, which leads to the conclusion that the application may not be for long bones but for tissues similar to cartilage.
Keywords: alginate, biomaterial, bone engineering, coral, Porites astreoides, SEM
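A minimal sketch of how a compressive Young's modulus like the 5.4 MPa value above is typically extracted from test data; the stress-strain values are synthetic, not the INSTRON measurements.

```python
import numpy as np

def youngs_modulus(strain, stress, linear_limit=0.02):
    """Slope of the initial linear region of a compressive stress-strain curve, in Pa."""
    mask = strain <= linear_limit
    slope, _intercept = np.polyfit(strain[mask], stress[mask], 1)
    return slope

# Synthetic curve for an elastic scaffold with E ~ 5.4 MPa plus measurement noise
np.random.seed(0)
strain = np.linspace(0.0, 0.05, 100)
stress = 5.4e6 * strain + np.random.normal(0.0, 2e3, strain.size)
print(f"E = {youngs_modulus(strain, stress) / 1e6:.2f} MPa")
```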
Procedia PDF Downloads 25470 Ant and Spider Diversity in a Rural Landscape of the Vhembe Biosphere, South Africa
Authors: Evans V. Mauda, Stefan H. Foord, Thinandavha C. Munyai
Abstract:
The greatest threat to biodiversity is the loss of habitat through landscape fragmentation and attrition. Land use changes are therefore among the most immediate drivers of species diversity. Urbanization and agriculture are the main drivers of habitat loss and transformation in the savanna biomes of South Africa. Agricultural expansion, and intensification in particular, takes place at the expense of biodiversity and will probably be the primary driver of biodiversity loss in this century. Arthropods show measurable behavioural responses to changing land mosaics at the smallest scale, and heterogeneous environments are therefore predicted to support more complex and diverse biological assemblages. Ants are premier soil turners and channelers of energy and dominate the insect fauna, while spiders are a mega-diverse group that can regulate other invertebrate populations. This study aims to quantify the response of these two taxa in the rural-urban mosaic of a rapidly developing communal area. The study took place in and around two villages in the north-eastern corner of South Africa. Two replicates for each of the dominant land use categories, viz. urban settlements, dryland cultivation, and cattle rangelands, were set out in each of the villages and sampled during the dry and wet seasons, for a total of 2 replicates × 2 villages × 3 land use categories × 2 seasons = 24 assemblages. Local-scale variables measured included vertical and horizontal habitat structure as well as the structural and chemical composition of the soil. Ant richness was not affected by land use but responded to local-scale variables such as vertical vegetation structure (+) and leaf litter cover (+), although vegetation complexity at lower levels was negatively associated with ant richness. Ant richness was, however, largely shaped by regional and temporal processes, invoking the importance of dispersal and historical processes. Spider species richness was mostly affected by land use and local conditions, highlighting the importance of landscape elements. Spider richness did not vary much between villages or across seasons and seems to be less dependent on context or history. A considerable amount of variation in spider richness was not explained, and this could be related to factors that were not measured in this study, such as temperature and competition. For both ant and spider assemblages, the constrained ordination explained 18% of the variation in these taxa. Three environmental variables (leaf litter cover, active carbon, and rock cover) were important in explaining ant assemblage structure, while two (sand and leaf litter cover) were important for spider assemblage structure. This study highlights the importance of disturbance (land use activities) and leaf litter, with their associated effects on ant and spider assemblages across the study area.
Keywords: ants, assemblages, biosphere, diversity, land use, spiders, urbanization
Procedia PDF Downloads 26769 Reactivities of Turkish Lignites during Oxygen Enriched Combustion
Authors: Ozlem Uguz, Ali Demirci, Hanzade Haykiri-Acma, Serdar Yaman
Abstract:
Lignitic coal holds its position as Turkey's most important indigenous energy source for generating energy in thermal power plants. Hence, efficient and environmentally friendly use of lignite in electricity generation is of great importance, and clean coal technologies have been planned to mitigate emissions and provide more efficient burning in power plants. In this context, oxygen enriched combustion (oxy-combustion) is regarded as one of the clean coal technologies; it is based on burning with oxygen concentrations higher than that of air. As most Turkish coals are low-rank with high mineral matter content, the unburnt carbon trapped in ash is unfortunately high, and it leads to significant losses in the overall efficiency of the thermal plants. Besides, the necessity of burning huge amounts of these low-calorific-value lignites to obtain the desired amount of energy also results in the formation of large amounts of ash that is rich in unburnt carbon. Oxygen enriched combustion technology makes it possible to increase the burning efficiency through nearly complete burning of the carbon content of the fuel. This also contributes to the protection of air quality, and emission levels drop considerably. The aim of this study is to investigate the unburnt carbon content and the burning reactivities of several different lignite samples under oxygen enriched conditions. For this purpose, the combined effects of temperature and oxygen/nitrogen ratios in the burning atmosphere were investigated and interpreted. Turkish lignite samples from the Adıyaman-Gölbaşı and Kütahya-Tunçbilek regions were first characterized by proximate and ultimate analyses, and their burning profiles were derived using DTA (Differential Thermal Analysis) curves. These lignites were then subjected to a slow burning process in a horizontal tube furnace at different temperatures (200 °C, 400 °C, and 600 °C for the Adıyaman-Gölbaşı lignite; 200 °C, 450 °C, and 800 °C for the Kütahya-Tunçbilek lignite) under atmospheres having O₂+N₂ proportions of 21%O₂+79%N₂, 30%O₂+70%N₂, 40%O₂+60%N₂, and 50%O₂+50%N₂. These burning temperatures were specified based on the burning profiles derived from the DTA curves. The residues obtained from these burning tests were also analyzed by proximate and ultimate analyses to determine the unburnt carbon content along with the unused energy potential. The reactivity of these lignites was calculated using several methodologies, with the burning yield under air conditions (21%O₂+79%N₂) used as a benchmark to compare the effectiveness of the oxygen enriched conditions. It was concluded that the oxygen enriched combustion method enhanced the combustion efficiency and lowered the unburnt carbon content of the ash. Combustion of low-rank coals under oxygen enriched conditions was found to be a promising way to improve the efficiency of lignite-firing energy systems. However, a cost-benefit analysis should be considered for a better justification of this method, since the use of more oxygen brings a non-negligible additional cost.
Keywords: coal, energy, oxygen enriched combustion, reactivity
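One standard way to turn the proximate/ultimate analyses of fuel and residue into a burnout figure is the ash-tracer method, sketched below; the abstract does not state which reactivity methodologies were actually used, and the compositions here are hypothetical.

```python
def ash_tracer_burnout(c_fuel, ash_fuel, c_residue, ash_residue):
    """Carbon burnout via the ash-tracer method (dry-basis mass fractions).

    Ash is assumed inert, so it tracks the mass loss:
        burnout = 1 - (C_res / ash_res) / (C_fuel / ash_fuel)
    """
    return 1.0 - (c_residue / ash_residue) / (c_fuel / ash_fuel)

# Hypothetical dry-basis compositions for a lignite and its burning residues
for o2_pct, c_res, ash_res in [(21, 0.12, 0.80), (30, 0.07, 0.88), (50, 0.02, 0.95)]:
    b = ash_tracer_burnout(c_fuel=0.45, ash_fuel=0.30,
                           c_residue=c_res, ash_residue=ash_res)
    print(f"{o2_pct}% O2 -> burnout = {100 * b:.1f} %")
```

As expected, richer oxygen atmospheres push the residue carbon down and the computed burnout up, mirroring the trend reported above.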
Procedia PDF Downloads 276