Search results for: turbulent boundary layer
Paper Count: 3806

146 BIM Modeling of Site and Existing Buildings: Case Study of ESTP Paris Campus

Authors: Rita Sassine, Yassine Hassani, Mohamad Al Omari, Stéphanie Guibert

Abstract:

Building Information Modelling (BIM) is the process of creating, managing, and centralizing information during the building lifecycle. BIM can be used throughout a construction project, from the initiation phase to the planning and execution phases to the maintenance and lifecycle management phase. For existing buildings, BIM can be used for specific applications such as lifecycle management. However, most existing buildings don’t have a BIM model. Creating a compatible BIM for existing buildings is very challenging. It requires special equipment for data capturing and effort to convert these data into a BIM model. The main difficulties for such projects are to define the data needed, the level of development (LOD), and the methodology to be adopted. In addition to managing information for an existing building, studying the impact of the built environment is a challenging topic. So, integrating the existing terrain that surrounds buildings into the digital model is essential to be able to run several simulations, such as flood simulation, energy simulation, etc. Replicating the physical model and updating its information in real time to create its Digital Twin (DT) is very important. The Digital Terrain Model (DTM) represents the ground surface of the terrain by a set of discrete points with unique height values over 2D points based on a reference surface (e.g., mean sea level, geoid, and ellipsoid). In addition, information related to the type of pavement materials, the types and heights of vegetation, and damaged surfaces can be integrated. Our aim in this study is to define the methodology to be used in order to provide a 3D BIM model for the site and the existing buildings based on the case study of the “École Spéciale des Travaux Publics (ESTP Paris)” school of engineering campus. The property is located on a hilly site of 5 hectares and is composed of more than 20 buildings with a total area of 32,000 square meters and a height between 50 and 68 meters. In this work, the campus precise levelling grid is computed according to the NGF-IGN69 altimetric system, and the grid control points are computed in the RGF93 (Réseau Géodésique Français) – Lambert 93 French system with different methods: (i) land topographic surveying using a robotic total station, (ii) a GNSS (Global Navigation Satellite System) levelling grid in NRTK (Network Real Time Kinematic) mode, (iii) point clouds generated by laser scanning. These technologies allow the computation of multiple building parameters such as boundary limits, the number of floors, the georeferencing of the floors, the georeferencing of the four base corners of each building, etc. Once the input data are identified, the digital model of each building is produced. The DTM is also modeled. The process of altimetric determination is complex and requires effort to collect and analyze multiple data formats. Since many technologies can be used to produce digital models, different file formats such as DraWinG (DWG), LASer (LAS), Comma-Separated Values (CSV), Industry Foundation Classes (IFC) and Revit (RVT) will be generated. Checking the interoperability between BIM models is very important. In this work, all models are linked together and shared on the 3DEXPERIENCE collaborative platform.
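
As a minimal illustration of the DTM modelling step described above, the sketch below grids scattered surveyed points (x, y, height) onto a regular raster with SciPy. The point coordinates, heights and grid spacing are placeholders standing in for exported survey data, not the actual ESTP campus measurements.

```python
import numpy as np
from scipy.interpolate import griddata

# Placeholder survey points: planimetric coordinates (metres) and heights.
rng = np.random.default_rng(0)
xy = rng.random((500, 2)) * 200.0          # points scattered over a 200 m x 200 m patch
z = 50.0 + 18.0 * rng.random(500)          # heights between 50 and 68 m

# Regular 1 m raster covering the patch.
gx, gy = np.mgrid[0:200:1.0, 0:200:1.0]

# Linear interpolation of the ground surface onto the raster gives a simple DTM.
dtm = griddata(xy, z, (gx, gy), method="linear")
print(dtm.shape)                           # (200, 200) grid of interpolated ground heights
```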

Keywords: building information modeling, digital terrain model, existing buildings, interoperability

Procedia PDF Downloads 112
145 Impact of Marangoni Stress and Mobile Surface Charge on Electrokinetics of Ionic Liquids Over Hydrophobic Surfaces

Authors: Somnath Bhattacharyya

Abstract:

The mobile adsorbed surface charge on hydrophobic surfaces can modify the velocity slip condition as well as create a Marangoni stress at the interface. The functionalized hydrophobic walls of micro/nanopores, e.g., graphene nanochannels, may possess physisorbed ions. The lateral mobility of these physisorbed ions creates a friction force as well as an electric force, leading to a modification of the velocity slip condition at the hydrophobic surface. In addition, the non-uniform distribution of these surface ions creates a surface tension gradient, leading to a Marangoni stress. The impact of the mobile surface charge on the streaming potential and the electrochemical energy conversion efficiency in a pressure-driven flow of ionized liquid through the nanopore is addressed. Enhanced electro-osmotic flow through the hydrophobic nanochannel is also analyzed. The mean-field electrokinetic model is modified to take into account the short-range non-electrostatic steric interactions and the long-range Coulomb correlations. The steric interaction is modeled by considering the ions as charged hard spheres of finite radius suspended in the electrolyte medium. The electrochemical potential is modified by including the volume exclusion effect, which is modeled based on the BMCSL equation of state. The electrostatic correlation is accounted for in the ionic self-energy. The extremal condition on the self-energy leads to a fourth-order Poisson equation for the electric field. The ion transport is governed by the modified Nernst-Planck equation, which includes the ion steric interactions, the Born force arising from the spatial variation of the dielectric permittivity, and the dielectrophoretic force on the hydrated ions. This ion transport equation is coupled with the Navier-Stokes equation describing the flow of the ionized fluid and the fourth-order Poisson equation for the electric field. We numerically solve the coupled set of nonlinear governing equations along with the prescribed boundary conditions by adopting a control volume approach over a staggered grid arrangement. In the staggered grid arrangement, velocity components are stored at the midpoints of the cell faces to which they are normal, whereas the remaining scalar variables are stored at the center of each cell. The convection and electromigration terms are discretized at each interface of the control volumes using the total variation diminishing (TVD) approach to capture the strong convection resulting from the highly enhanced fluid flow in the modified model. In order to link pressure to the continuity equation, we adopt a pressure correction-based iterative SIMPLE (Semi-Implicit Method for Pressure-Linked Equations) algorithm, in which the discretized continuity equation is converted to a Poisson equation involving pressure correction terms. Our results show that the physisorbed ions on a hydrophobic surface create an enhanced slip velocity in the streaming potential configuration, which enhances the convection current. However, the electroosmotic flow attenuates due to the mobile surface ions.
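
To illustrate the total variation diminishing (TVD) discretization mentioned above in isolation from the full coupled electrokinetic solver, here is a minimal finite-volume sketch for 1D linear advection with a minmod slope limiter. The grid, advection speed and time step are arbitrary demonstration values and are not taken from the paper.

```python
import numpy as np

def minmod(a, b):
    # Minmod slope limiter: zero at extrema, limited slope elsewhere (preserves the TVD property).
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def tvd_advect(u, a, dx, dt, nsteps):
    """Advance u_t + a*u_x = 0 (a > 0, periodic) with a MUSCL/minmod finite-volume scheme."""
    for _ in range(nsteps):
        up = np.roll(u, -1)                          # u_{i+1}
        um = np.roll(u, 1)                           # u_{i-1}
        slope = minmod(u - um, up - u)               # limited slope in each cell
        flux = a * (u + 0.5 * slope)                 # upwind flux at face i+1/2
        u = u - dt / dx * (flux - np.roll(flux, 1))  # conservative update
    return u

# Usage: advect a step profile; the limiter keeps the front free of spurious oscillations.
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)
u1 = tvd_advect(u0.copy(), a=1.0, dx=x[1] - x[0], dt=0.002, nsteps=100)
```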

Keywords: microfluidics, electroosmosis, streaming potential, electrostatic correlation, finite sized ions

Procedia PDF Downloads 72
144 Experimental Investigation on Tensile Durability of Glass Fiber Reinforced Polymer (GFRP) Rebar Embedded in High Performance Concrete

Authors: Yuan Yue, Wen-Wei Wang

Abstract:

The objective of this research is to comprehensively evaluate the impact of alkaline environments on the durability of Glass Fiber Reinforced Polymer (GFRP) reinforcements in concrete structures and further explore their potential value within the construction industry. Specifically, we investigate the effects of two widely used high-performance concrete (HPC) materials on the durability of GFRP bars embedded within them under varying temperature conditions. A total of 279 GFRP bar specimens were manufactured for microscopic and mechanical performance tests. Among them, 270 specimens were used to test the residual tensile strength after 120 days of immersion, while 9 specimens were utilized for microscopic testing to analyze degradation damage. SEM techniques were employed to examine the microstructure of the GFRP and the cover concrete. Unidirectional tensile strength experiments were conducted to determine the remaining tensile strength after corrosion. The experimental variables consisted of four types of concrete (engineered cementitious composite (ECC), ultra-high-performance concrete (UHPC), and two types of ordinary concrete with different compressive strengths) as well as three acceleration temperatures (20, 40, and 60°C). The experimental results demonstrate that high-performance concrete (HPC) offers superior protection for GFRP bars compared to ordinary concrete. The two types of HPC enhance durability through different mechanisms: one by reducing the pH of the concrete pore fluid and the other by decreasing permeability. For instance, ECC improves the durability of embedded GFRP by lowering the pH of the pore fluid: after 120 days of immersion at 60°C under accelerated conditions, GFRP in ECC (pH = 11.5) retained 68.99% of its strength, while in PC1 (pH = 13.5) it retained 54.88%. UHPC, on the other hand, enhances the durability of GFRP bars by decreasing the porosity and increasing the compactness of the protective concrete layer. Due to the fillers present in UHPC, it typically exhibits lower porosity, higher density, and greater resistance to permeation compared to PC2 with a similar pore fluid pH, resulting in different degrees of durability for GFRP bars embedded in UHPC and PC2 after 120 days of immersion at 60°C, with residual strengths of 66.32% and 60.89%, respectively. Furthermore, SEM analysis revealed no noticeable evidence of fiber deterioration in any examined specimen, suggesting that uneven stress distribution resulting from interface segregation and matrix damage, rather than fiber corrosion, is the primary cause of the tensile strength reduction in GFRP. Moreover, long-term prediction models were utilized to calculate residual strength values over time for reinforcement embedded in HPC under high-temperature and high-humidity conditions, demonstrating that approximately 75% of the initial strength was retained by reinforcement embedded in HPC after 100 years of service.
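
The long-term prediction mentioned at the end of the abstract is commonly done with an Arrhenius-type extrapolation of accelerated immersion data. The sketch below shows that generic procedure with purely hypothetical time constants, chosen only to illustrate the fitting steps and not to reproduce the paper's 75% figure.

```python
import numpy as np

# Strength retention modelled as Y(t) = 100 * exp(-t / tau), with tau following an
# Arrhenius dependence on temperature: ln(tau) = ln(A) + Ea / (R * T).
R = 8.314                                   # J/(mol*K)
T = np.array([20.0, 40.0, 60.0]) + 273.15   # accelerated immersion temperatures, K
tau = np.array([20000.0, 3500.0, 800.0])    # hypothetical fitted time constants, days

slope, intercept = np.polyfit(1.0 / T, np.log(tau), 1)   # linear fit of ln(tau) vs 1/T
Ea = slope * R                                            # apparent activation energy, J/mol

T_service = 10.0 + 273.15                   # assumed mean service temperature, K
tau_service = np.exp(intercept + slope / T_service)

t_service = 100 * 365.0                     # 100 years in days
retention = 100.0 * np.exp(-t_service / tau_service)
print(f"Ea = {Ea / 1000:.0f} kJ/mol, predicted retention after 100 years = {retention:.0f}%")
```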

Keywords: GFRP bars, HPC, degeneration, durability, residual tensile strength

Procedia PDF Downloads 56
143 Triassic and Liassic Paleoenvironments during the Central Atlantic Magmatic Province (CAMP) Effusion in the Moroccan Coastal Meseta: The Mohammedia-Benslimane-El Gara-Berrechid Basin

Authors: Rachid Essamoud, Abdelkrim Afenzar, Ahmed Belqadi

Abstract:

During the Early Mesozoic, the northwestern part of the African continent was affected by initial fracturing associated with the early stages of the opening of the Central Atlantic (Atlantic rift). During this rifting phase, the Moroccan Meseta experienced an extensional tectonic regime. This extension favored the formation of a set of rift-type basins, including the Mohammedia-Benslimane-El Gara-Berrechid basin. Thus, it is essential to know the nature of the deposits in this basin and their evolution over time, as well as their relationship with the basaltic effusion of the Central Atlantic Magmatic Province (CAMP). These deposits are subdivided into two large series: the lower clay-salt series attributed to the Triassic and the upper clay-salt series attributed to the Liassic. The two series are separated by the Upper Triassic-Lower Liassic basaltic complex. Detailed sedimentological analysis made it possible to characterize four mega-sequences, fifteen facies types, and eight architectural elements and facies associations in the Triassic series. A progressive decrease in paleoslope over time led to the evolution of the paleoenvironment from a proximal alluvial fan system to a braided fluvial style, then to an anastomosed system. These environments eventually evolved into an alluvial plain associated with a coastal plain where playa lakes, mudflats and lagoons developed. The pure and massive halitic facies at the top of the series probably indicate an evolution of the depositional environment towards a shallow subtidal environment. The presence of these evaporites indicates a climate that favored their precipitation, in this case, a fairly hot and humid climate. The sedimentological analysis of the supra-basaltic part shows that during the Lower Liassic, the paleoslope after the basaltic effusion remained gentle, with distal environments. The faciological analysis revealed the presence of four major lithofacies (sandstone, silty, clayey and evaporitic) organized in two mega-sequences: the sedimentation of the first rock-salt mega-sequence took place in a free brine depression system, followed by saline mudflats under continental influence. The upper clay mega-sequence displays facies documenting sea level fluctuations from the final transgression of the Tethys or the opening Atlantic. Saliferous sedimentation was therefore favored from the Upper Triassic onward, but was abruptly interrupted by the emission of basaltic flows, which are interstratified within the azoic salt clays of very shallow seas. This basaltic emission, which belongs to the CAMP, probably originated from fissure volcanism through transfer faults located to the NW and SE of the basin. Its emplacement was probably subaquatic to subaerial. From a chronological and paleogeographic point of view, this main volcanism, dated between the Upper Triassic and the Lower Liassic (180-200 Ma), is linked to the fragmentation of Pangea, driven by a progressive extension initiated in the west in close relation with the initial phases of Central Atlantic rifting, and seems to coincide with the major mass extinction at the Triassic-Jurassic boundary.

Keywords: basalt, CAMP, Liassic, sedimentology, Triassic, Morocco

Procedia PDF Downloads 75
142 Monitoring of Wound Healing Through Structural and Functional Mechanisms Using Photoacoustic Imaging Modality

Authors: Souradip Paul, Arijit Paramanick, M. Suheshkumar Singh

Abstract:

Traumatic injury is a leading worldwide health problem. Annually, millions of surgical wounds are created for the sake of routine medical care. The healing of these wounds is usually monitored by visual inspection, yet the maximal restoration of tissue functionality remains a significant concern of clinical care. Although minor injuries heal well with proper care and medical treatment, large injuries are negatively influenced by various factors (vascular insufficiency, tissue coagulation) and heal poorly. Demographically, the number of people suffering from severe wounds and impaired healing conditions is burdensome for both human health and the economy. An incomplete understanding of the functional and molecular mechanisms of tissue healing often leads to a lack of proper therapies and treatment. Hence, strong and promising medical guidance is necessary for monitoring tissue regeneration processes. Photoacoustic imaging (PAI) is a non-invasive, hybrid imaging modality that can provide a suitable solution in this regard. Light combined with sound offers structural, functional and molecular information from greater penetration depths. Therefore, the molecular and structural mechanisms of tissue repair are readily observable in PAI, both in the superficial layer and in the deep tissue region. Blood vessel formation and growth is an essential tissue-repair component. These vessels supply nutrition and oxygen to the cells in the wound region. Angiogenesis (the formation of new capillaries from existing blood vessels) contributes to new blood vessel formation during tissue repair, and better tissue healing directly depends on it. Other optical microscopy techniques can visualize angiogenesis at micron-scale penetration depths but are unable to provide deep tissue information. PAI overcomes this barrier due to its unique capability: it is ideally suited for deep tissue imaging and provides the rich optical contrast generated by hemoglobin in blood vessels. Hence, an early angiogenesis detection method provided by PAI helps in monitoring the medical treatment of the wound. Along with functional properties, mechanical properties also play a key role in tissue regeneration. The wound heals through a dynamic series of physiological events like coagulation, granulation tissue formation, and extracellular matrix (ECM) remodeling. Therefore, changes in tissue elasticity can be identified using non-contact photoacoustic elastography (PAE). In a nutshell, angiogenesis and biomechanical properties are both critical parameters for tissue healing, and both can be characterized in a single imaging modality (PAI).

Keywords: PAT, wound healing, tissue coagulation, angiogenesis

Procedia PDF Downloads 106
141 Dry Reforming of Methane Using Metal Supported and Core Shell Based Catalyst

Authors: Vinu Viswanath, Lawrence Dsouza, Ugo Ravon

Abstract:

Syngas, typically an intermediate gas product, has a wide range of applications in producing various chemical products, such as mixed alcohols, hydrogen, ammonia, Fischer-Tropsch products, methanol, ethanol, aldehydes, etc. There are several technologies available for syngas production. As an alternative to the conventional processes, an attractive route has been developed that utilizes carbon dioxide and methane in an equimolar ratio to generate syngas with a ratio close to one; it is termed the Dry Reforming of Methane (DRM) technology. It also offers the advantage of utilizing greenhouse gases such as CO2 and CH4. The dry reforming process is highly endothermic; indeed, ΔG becomes negative if the temperature is higher than 900 K, and practically, the reaction is carried out at 1000-1100 K. At these temperatures, sintering of the metal particles occurs, which deactivates the catalyst. Moreover, the methane is only partially oxidized, and some coke deposition occurs, causing catalyst deactivation. The current research work was focused on mitigating the main challenges of the dry reforming process, such as coke deposition and metal sintering at high temperature. To achieve these objectives, we employed three different strategies of catalyst development: 1) use of bulk catalysts such as olivine and pyrochlore type materials; 2) use of metal-doped support materials, like spinel and clay type materials; 3) use of core-shell model catalysts. In the core-shell approach, a thin layer (shell) of a redox metal oxide is deposited over the MgAl2O4/Al2O3-based support material (core), and an active metal is then deposited on the surface of the shell. The shell structure formed is a doped metal oxide that can undergo reduction and oxidation (redox) reactions, and the core is an alkaline earth aluminate having a high affinity towards carbon dioxide. In the case of the metal-doped support catalysts, the enhanced redox properties of the doped CeO2 oxide and the CO2 affinity of the alkaline earth aluminates collectively help to overcome coke formation. For all three strategies, a systematic screening of the metals was carried out to optimize the efficiency of the catalyst. To evaluate their performance, activity and stability tests were carried out under reaction conditions with temperatures ranging from 650 to 850°C and operating pressures ranging from 1 to 20 bar. The results indicate that the core-shell model catalysts showed high activity and better stability under atmospheric as well as high-pressure conditions. In this presentation, we will show the results related to these strategies.
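
A quick check of the thermodynamic statement above can be made with approximate standard reaction values for dry reforming, taken here as roughly ΔH = +247 kJ/mol and ΔS = +0.257 kJ/(mol K) (literature-order figures, assumed temperature-independent); under those assumptions the crossover to negative ΔG falls near 960 K, consistent with the 900-1100 K range quoted.

```python
import numpy as np

# CH4 + CO2 -> 2 CO + 2 H2 (dry reforming of methane), with approximate,
# temperature-independent standard reaction enthalpy and entropy.
dH = 247.0    # kJ/mol
dS = 0.257    # kJ/(mol*K)

T = np.linspace(600.0, 1200.0, 7)   # K
dG = dH - T * dS                    # kJ/mol
for Tk, Gk in zip(T, dG):
    print(f"T = {Tk:6.0f} K   dG = {Gk:7.1f} kJ/mol")
# dG crosses zero near dH/dS (about 960 K): the reaction becomes thermodynamically
# favourable only above roughly 900-1000 K, as stated in the abstract.
```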

Keywords: carbon dioxide, dry reforming, supports, core shell catalyst

Procedia PDF Downloads 177
140 Cellular Targeting to Dual Gaseous Microenvironments by Polydimethylsiloxane Microchip

Authors: Samineh Barmaki, Ville Jokinen, Esko Kankuri

Abstract:

We report a microfluidic chip that can be used to modify the gaseous microenvironment of a cell culture under ambient atmospheric conditions. The aim of the study is to show the cellular response to nitric oxide (NO) under hypoxic (oxygen < 5%) conditions. Simultaneous targeting of hypoxia and nitric oxide will provide an opportunity for NO-based therapeutics. Studies on cellular responses to lowered oxygen concentration or to gaseous mediators are usually carried out under a specific macro environment, such as hypoxia chambers, or with specific NO donor molecules that may have additional toxic effects. In our study, the chip consists of a microfluidic layer and a cell culture well, separated by a thin gas-permeable polydimethylsiloxane (PDMS) membrane. The main design goal is to separate the oxygen scavenger and NO donor solutions, which are often toxic, from the cell media. Two different types of gas exchangers, titled 'pool' and 'meander', were tested. We find that the pool design allows us to reach a higher level of oxygen depletion than the meander design (24.32 ± 19.82% vs. -3.21 ± 8.81%). Our microchip design can make cell culture simpler and makes it easy to adapt existing cell culture protocols. Our first application is utilizing the chip to create hypoxic conditions on targeted areas of a cell culture. In this study, the oxygen scavenger sodium sulfite generates hypoxia, and its effect on human embryonic kidney cells (HEK-293) is studied. The PDMS membrane was coated with fibronectin before initiating cell cultures, and the cells were grown for 48 h on the chips before initiating the gas control experiments. The hypoxia experiments were performed by pumping O₂-depleted H₂O into the microfluidic channel at a flow rate of 0.5 ml/h. The Image-iT® hypoxia reagent, used as an oxygen-level reporter, was mixed with the HEK-293 cells. The fluorescent signal appears on cells stained with the Image-iT® hypoxia reagent after 6 h of pumping oxygen-depleted H₂O through the microfluidic channel in the pool area. The exposure to different levels of O₂ can be controlled by varying the thickness of the PDMS membrane. Recently, we improved the design of the microfluidic chip so that it can control the microenvironment of two different gases at the same time. The hypoxic response was also improved with the new microchip design. The cells were grown on the thin PDMS membrane for 30 hours, and the oxygen scavenger was pumped into the microfluidic channel at a flow rate of 0.1 ml/h. We also show that pumping sodium nitroprusside (SNP), a nitric oxide donor activated under light, can generate nitric oxide on top of the PDMS membrane. We aim to show the cellular microenvironment response of HEK-293 cells to both nitric oxide (by pumping SNP) and hypoxia (by pumping the oxygen scavenger solution) in separate channels of one microfluidic chip.

Keywords: hypoxia, nitric oxide, microenvironment, microfluidic chip, sodium nitroprusside, SNP

Procedia PDF Downloads 134
139 Lessons Learned through a Bicultural Approach to Tsunami Education in Aotearoa New Zealand

Authors: Lucy H. Kaiser, Kate Boersen

Abstract:

Kura Kaupapa Māori (kura) and bilingual schools are primary schools in Aotearoa/New Zealand which operate fully or partially under Māori custom and have curricula developed to include Te Reo Māori and Tikanga Māori (Māori language and cultural practices). These schools were established to support Māori children and their families through reinforcing cultural identity and enabling Māori language and culture to flourish in the field of education. Māori kaupapa (values), Mātauranga Māori (Māori knowledge) and Te Reo are crucial considerations for the development of educational resources for kura, bilingual and mainstream schools. The inclusion of hazard risk in education has become an important issue in New Zealand due to the vulnerability of communities to a plethora of different hazards. Māori have extensive knowledge of their local area and the history of hazards, which is often not appropriately recognised within mainstream hazard education resources. Researchers from the Joint Centre for Disaster Research, Massey University and East Coast LAB (Life at the Boundary) in Napier were funded to collaboratively develop a toolkit of tsunami risk reduction activities with schools located in Hawke’s Bay’s tsunami evacuation zones. A Māori-led bicultural approach to developing and running the education activities was taken, focusing on creating culturally and locally relevant materials for students and schools as well as giving students a proactive role in making their communities better prepared for a tsunami event. The community-based participatory research is Māori-centred, framed by qualitative and Kaupapa Māori research methodologies, and utilizes a range of data collection methods including interviews, focus groups and surveys. Māori participants, stakeholders and the researchers collaborated throughout the duration of the project to ensure the programme would align with the wider school curricula and kaupapa values. The education programme applied a tuakana/teina Māori teaching and learning approach, in which high school aged students (tuakana) developed tsunami preparedness activities to run with primary school students (teina). At the end of the education programme, high school students were asked to reflect on their participation, what they had learned and what they had enjoyed during the activities. This paper draws on lessons learned throughout this research project. For example, retaining a bicultural and bilingual perspective resulted in a more inclusive project, as there was variability across the students’ levels of confidence in using Te Reo and Māori knowledge and cultural frameworks. Providing a range of different learning and experiential activities, including waiata (Māori songs), pūrākau (traditional stories) and games, was important to ensure students had the opportunity to participate and contribute using a range of different approaches appropriate to their individual learning needs. Inclusion of teachers in facilitation also proved beneficial in assisting classroom behaviour management. Lessons were framed by the tikanga and kawa (protocols) of the school to maintain cultural safety for the researchers and the students. Finally, the tuakana/teina component of the education activities became the crux of the programme, demonstrating a path for rangatahi to support their whānau and communities by facilitating disaster preparedness, risk reduction and resilience.

Keywords: school safety, indigenous, disaster preparedness, children, education, tsunami

Procedia PDF Downloads 122
138 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning

Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher

Abstract:

Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer’s, Parkinson’s, and Multiple Sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has recently been proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve I) mapping the magnetic field into magnetic susceptibility and II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem using regularization via injection of prior belief. The end result of Process II highly depends on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.). Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain via a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640x640x640, 0.4 mm³), yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties and iron concentration. These tissue property values were randomly selected from probability distribution functions derived from a thorough literature review. In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry, created by morphing the boundary regions of different areas within normal physical constraints. This model of the human brain is then used to create synthetic MRI measurements. This is repeated thousands of times, for different head shapes, volumes, tissue properties and noise realizations. Collectively, this constitutes a training set that is similar to in vivo data, but larger than datasets available from clinical measurements. A 3D convolutional U-Net neural network architecture was used to train data-driven deep learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested on both synthetic data not used in training as well as real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to directly learn iron concentrations in areas of interest more effectively than other existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the Deep QSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of important value in clinical studies aiming to understand the role of iron in neurological disease.
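
As a hedged illustration of the single-step, data-driven mapping described above, the sketch below defines a very small 3D encoder-decoder in PyTorch and runs a few training iterations on random stand-in tensors. The architecture, patch size and hyperparameters are placeholders and are not the network reported in the paper.

```python
import torch
import torch.nn as nn

class TinyUNet3D(nn.Module):
    """Very small 3D encoder-decoder: raw MRI volume in, voxel-wise iron concentration out."""
    def __init__(self, in_ch=1, base=8):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv3d(in_ch, base, 3, padding=1), nn.ReLU(),
                                  nn.Conv3d(base, base, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool3d(2)
        self.enc2 = nn.Sequential(nn.Conv3d(base, base * 2, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.dec1 = nn.Sequential(nn.Conv3d(base * 2, base, 3, padding=1), nn.ReLU())
        self.out = nn.Conv3d(base, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.down(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))   # skip connection
        return self.out(d1)

# Training sketch on stand-in (synthetic volume, iron map) pairs.
model = TinyUNet3D()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x = torch.randn(2, 1, 32, 32, 32)    # stand-in for synthetic MRI patches
y = torch.rand(2, 1, 32, 32, 32)     # stand-in for ground-truth iron concentration maps
for _ in range(3):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```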

Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping

Procedia PDF Downloads 136
137 Modeling and Analysis of Drilling Operation in Shale Reservoirs with Introduction of an Optimization Approach

Authors: Sina Kazemi, Farshid Torabi, Todd Peterson

Abstract:

Drilling in shale formations is frequently time-consuming, challenging, and fraught with mechanical failures such as stuck pipes or the hole packing off when the cutting removal rate is not sufficient to clean the bottom hole. Crossing heavy oil shale and sand reservoirs with active shale and microfractures is generally associated with severe fluid losses, causing a reduction in the rate of cuttings removal. These circumstances compromise a well’s integrity and result in a lower rate of penetration (ROP). This study presents collective results of field studies and theoretical analysis conducted on data from South Pars and North Dome in an Iran-Qatar offshore field. Solutions to complications related to drilling in shale formations are proposed through systematically analyzing and applying modeling techniques to selected field mud-logging data. Field data measurements during actual drilling operations indicate that in a shale formation where the return flow of polymer mud was almost lost in the upper dolomite layer, the performance of hole cleaning and the ROP progressively change when higher string rotations are initiated. Likewise, it was observed that this effect minimized the rotational torque and improved well integrity in the subsequent casing running. Given similar geologic conditions and drilling operations in reservoirs targeting shale as the producing zone, like the Bakken formation within the Williston Basin and Lloydminster, Saskatchewan, a drill bench dynamic modeling simulation was used to simulate borehole cleaning efficiency and mud optimization. The results obtained by altering the RPM (string revolutions per minute) at the same pump rate and optimized mud properties exhibit a positive correlation with field measurements. The field investigation and the developed model in this report show that increasing the speed of string revolution, as far as geomechanics and drilling bit conditions permit, can minimize the risk of mechanically stuck pipes while reaching a higher than expected ROP in shale formations. Based on the data obtained from modeling and field data analysis, optimized drilling parameters and hole cleaning procedures are suggested for minimizing the risk of the hole packing off and enhancing well integrity in shale reservoirs. Moreover, optimization of ROP at a lower pump rate maintains wellbore stability and saves time for the operator while reducing carbon emissions and the fatigue of mud motors and power supply engines.
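
One quantity behind the 'circulating density' keyword is the equivalent circulating density (ECD). A minimal sketch of the common field-unit relation is given below; the mud weight, annular pressure loss and depth are illustrative numbers only, not values from the studied wells.

```python
def equivalent_circulating_density(mud_weight_ppg, annular_pressure_loss_psi, tvd_ft):
    """ECD (ppg) from the common field-unit relation ECD = MW + dP_annulus / (0.052 * TVD)."""
    return mud_weight_ppg + annular_pressure_loss_psi / (0.052 * tvd_ft)

# Illustrative case: 10.5 ppg mud with 300 psi annular friction loss at 9,000 ft TVD.
print(round(equivalent_circulating_density(10.5, 300.0, 9000.0), 2))   # ~11.14 ppg
```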

Keywords: ROP, circulating density, drilling parameters, return flow, shale reservoir, well integrity

Procedia PDF Downloads 86
136 Vortex Flows under Effects of Buoyant-Thermocapillary Convection

Authors: Malika Imoula, Rachid Saci, Renee Gatignol

Abstract:

A numerical investigation is carried out to analyze vortex flows in a free-surface cylinder driven by independently rotating and differentially heated boundaries. As a basic uncontrolled isothermal flow, we consider configurations which exhibit steady axisymmetric toroidal-type vortices occurring at the free surface, under given rates of uniform bottom-disk rotation and for selected aspect ratios of the enclosure. In the isothermal case, we show that sidewall differential rotation constitutes an effective kinematic means of flow control: the reverse flow regions may be suppressed under very weak co-rotation rates, while an enhancement of the vortex patterns is observed under weak counter-rotation. However, in this latter case, high rates of counter-rotation considerably reduce the strength of the meridian flow and cause its confinement to a narrow layer on the bottom disk, while the remaining bulk flow is diffusion dominated and controlled by the sidewall rotation. The main control parameters in this case are the rotational Reynolds number, the cavity aspect ratio and the rotation rate ratio. The study then proceeded to consider the sensitivity of the vortex pattern, within the Boussinesq approximation, to a small temperature gradient set between the ambient fluid and an axial thin rod mounted on the cavity axis. Two additional parameters are introduced, namely the Richardson number Ri and the Marangoni number Ma (or the thermocapillary Reynolds number). Results revealed that reducing the rod length induces the formation of on-axis bubbles instead of toroidal structures. Besides, the stagnation characteristics are significantly altered under the combined effects of buoyant-thermocapillary convection. Buoyancy, induced under sufficiently high Ri, was shown to predominate over the thermocapillary motion, causing the enhancement (suppression) of breakdown when the rod is warmer (cooler) than the ambient fluid. However, over small ranges of Ri, the sensitivity of the flow to surface tension gradients was clearly evidenced, and results showed its full control over the occurrence and location of breakdown. In particular, the detailed timewise evolution of the flow indicated that weak thermocapillary motion was sufficient to prevent the formation of toroidal patterns. The latter detach from the surface and undergo considerable size reduction while moving towards the bulk flow before vanishing. Further calculations revealed that the pattern reappears with increasing time as a steady bubble type on the rod. However, in the absence of the central rod, and also in the case of small rod length l, the flow evolved into a steady state without any breakdown.
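
For orientation, the sketch below evaluates the three governing parameters named above under one common set of definitions; these definitions vary between studies, and the fluid properties used here are hypothetical, not those of the paper.

```python
# One common set of definitions (other choices of length and velocity scales exist):
#   Re = Omega * R**2 / nu                       rotational Reynolds number
#   Ri = g * beta * dT * H / (Omega * R)**2      Richardson number
#   Ma = |dsigma/dT| * dT * H / (mu * alpha)     Marangoni number
def reynolds(omega, R, nu):
    return omega * R**2 / nu

def richardson(g, beta, dT, H, omega, R):
    return g * beta * dT * H / (omega * R)**2

def marangoni(dsigma_dT, dT, H, mu, alpha):
    return abs(dsigma_dT) * dT * H / (mu * alpha)

# Hypothetical silicone-oil-like values, for illustration only.
print(reynolds(omega=1.0, R=0.02, nu=1e-6))              # ~4.0e2
print(richardson(9.81, 1e-3, 1.0, 0.02, 1.0, 0.02))      # ~0.5
print(marangoni(-7e-5, 1.0, 0.02, 1e-3, 1e-7))           # ~1.4e4
```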

Keywords: buoyancy, cylinder, surface tension, toroidal vortex

Procedia PDF Downloads 359
135 Quantitative Texture Analysis of Shoulder Sonography for Rotator Cuff Lesion Classification

Authors: Chung-Ming Lo, Chung-Chien Lee

Abstract:

In many countries, the lifetime prevalence of shoulder pain is up to 70%. In America, the health care system spends about 7 billion dollars per year on the health issues associated with shoulder pain. With respect to its origin, up to 70% of shoulder pain is attributed to rotator cuff lesions. This study proposed a computer-aided diagnosis (CAD) system to assist radiologists in classifying rotator cuff lesions with less operator dependence. Quantitative features were extracted from shoulder ultrasound images acquired using an ALOKA alpha-6 US scanner (Hitachi-Aloka Medical, Tokyo, Japan) with a linear array probe (scan width: 36 mm) ranging from 5 to 13 MHz. During the examination, the examined patients were in a standard sitting position, and the regular routine was followed. After acquisition, the shoulder US images were exported from the scanner and stored as 8-bit images with pixel values ranging from 0 to 255. Based on the sonographic appearance, the boundary of each lesion was delineated by a physician to indicate the specific pattern for analysis. The three lesion categories for classification were composed of 20 cases of tendon inflammation, 18 cases of calcific tendonitis, and 18 cases of supraspinatus tear. For each lesion, second-order statistics were quantified in the feature extraction. The second-order statistics are texture features describing the correlations between adjacent pixels in a lesion. Because echogenicity patterns are expressed via grey scale, grey-scale co-occurrence matrices with four angles of adjacent pixels were used. The texture metrics included the mean and standard deviation of energy, entropy, correlation, inverse difference moment, inertia, cluster shade, cluster prominence, and Haralick correlation. The quantitative features were then combined in a multinomial logistic regression classifier to generate a prediction model of rotator cuff lesions. The multinomial logistic regression classifier is widely used in the classification of more than two categories, such as the three lesion types used in this study. In the classifier, backward elimination was used to select the most relevant feature subset, chosen from the trained classifier with the lowest error rate. Leave-one-out cross-validation was used to evaluate the performance of the classifier: each case was left out in turn and used to test the classifier trained on the remaining cases. Using the physician’s assessment as the reference, the performance of the proposed CAD system was reported as accuracy. As a result, the proposed system achieved an accuracy of 86%. A CAD system based on statistical texture features to interpret echogenicity values in shoulder musculoskeletal ultrasound was thus established to generate a prediction model for rotator cuff lesions. Clinically, it is difficult to distinguish some kinds of rotator cuff lesions, especially partial-thickness tears of the rotator cuff. Shoulder orthopaedic surgeons and musculoskeletal radiologists report greater diagnostic test accuracy than general radiologists or ultrasonographers based on the available literature. Consequently, the proposed CAD system, which was developed according to the assessments of a shoulder orthopaedic surgeon, can provide reliable suggestions to general radiologists or ultrasonographers. More quantitative features related to the specific patterns of different lesion types will be investigated in further studies to improve the prediction.
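
A hedged sketch of the feature-extraction and classification pipeline described above is given below with scikit-image and scikit-learn. It uses placeholder regions of interest, a distance-1 co-occurrence matrix at the four angles, and only the texture properties the library exposes (homogeneity standing in for the inverse difference moment), so it illustrates the workflow rather than reproducing the paper's exact feature set.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

def glcm_features(roi_8bit):
    """Mean and std over four angles of a few grey-level co-occurrence texture metrics."""
    glcm = graycomatrix(roi_8bit, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    feats = []
    for prop in ("energy", "correlation", "homogeneity", "contrast"):
        vals = graycoprops(glcm, prop).ravel()    # one value per angle
        feats += [vals.mean(), vals.std()]
    return np.array(feats)

# Placeholder lesion ROIs and labels (0: inflammation, 1: calcific tendonitis, 2: tear).
rng = np.random.default_rng(0)
rois = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(56)]
labels = rng.integers(0, 3, 56)

X = np.vstack([glcm_features(r) for r in rois])
clf = LogisticRegression(max_iter=1000)           # lbfgs fits a multinomial model for 3 classes
acc = cross_val_score(clf, X, labels, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.2f}")
```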

Keywords: shoulder ultrasound, rotator cuff lesions, texture, computer-aided diagnosis

Procedia PDF Downloads 284
134 De-Pigmentary Effect of Ayurvedic Treatment on Hyper-Pigmentation of Skin Due to Chloroquine: A Case Report

Authors: Sunil Kumar, Rajesh Sharma

Abstract:

Toxic epidermal necrolysis, pruritus, rashes, lichen planus-like eruptions, and hyper-pigmentation of the skin are rare toxic effects of chloroquine used over a long time. Skin and mucous membrane hyper-pigmentation is generally of a bluish-black or grayish color and is irreversible after discontinuation of the drug. According to Ayurveda, Dushivisha is the name given to any poisonous substance which is not fully endowed with the qualities of poison by nature (i.e., it acts as an impoverished or weak poison) and which, because of its mild potency, remains in the body for many years causing various symptoms, one among them being discoloration of the skin. The objective of this case report is to investigate the effect of Ayurvedic management of chloroquine-induced hyper-pigmentation along the line of treatment of Dushivisha. Case report: A 26-year-old female, suffering from hyper-pigmentation of the skin over the neck, forehead, temporomandibular joints, upper back and posterior aspect of both arms for 8 years and with a history of taking chloroquine, came to the Outpatient Department of the National Institute of Ayurveda, Jaipur, India, in January 2015. The routine investigations (CBC, ESR, eosinophil count) were within normal limits. A skin punch biopsy studied histopathologically with hematoxylin and eosin staining showed epidermis with hyper-pigmentation of the basal layer. In the papillary dermis as well as the deep dermis, there were scattered melanophages along with infiltration by mononuclear cells. There was no deposition of amyloid-like substances. These histopathological findings were suggestive of chloroquine-induced hyper-pigmentation. The case was treated along the line of treatment of Dushivisha and was given Vamana and Virechana (therapeutic emesis and purgation) every six months, followed by Snehana karma (oleation therapy) with Panchatikta Ghrit and Swedana (sudation). Arogyavardhini Vati 1 g, Dushivishari Vati 500 mg, and Mahamanjisthadi Quath 20 ml were given twelve-hourly, and Aragwadhadi Quath 25 ml at bedtime, orally. The patient started showing lightening of the pigmentation after six months and almost complete remission after 12 months of treatment. Conclusion: This patient presented with the Dushivisha effect of chloroquine and was administered two relevant procedures from Panchakarma, viz. Vamana and Virechana. Both Vamana and Virechana karma here refer to Shodhana karma (purification procedures), which eliminate accumulated toxins from the body. In this process, oleation dislodges the toxins from the tissues, and sudation helps to bring them to the alimentary tract. The line of treatment did not target direct hypo-pigmentary effects; rather, it aimed to eliminate the Dushivisha. This gave promising results in this condition.

Keywords: Ayurveda, chloroquine, Dushivisha, hyper-pigmentation

Procedia PDF Downloads 234
133 Sculpted Forms and Sensitive Spaces: Walking through the Underground in Naples

Authors: Chiara Barone

Abstract:

In Naples, the visible architecture is only what emerges from the underground. Caves and tunnels cross it in every direction, intertwining with each other. They are not natural caves but spaces built by removing what is superfluous in order to dig a form out of the material. Architects, as sculptors of space, do not determine the exterior, what surrounds the volume and in which the forms live, but an interior underground space, perceptive and sensitive, able to generate new emotions each time. It is an intracorporeal architecture linked to the body, not in its external relationships, but rather in what happens inside. The proposed work aims to reflect on the design of underground spaces in the Neapolitan city. The idea is to treat the underground as a spectacular museum of the city, an opportunity to learn in situ the history of the place along an unpredictable itinerary that crosses the caves and, at certain points, emerges, escaping from the world of shadows. Starting from the analysis and study of the many overlapping elements, the archaeological one, the geological layer and the contemporary city above, it is possible to develop realistic alternatives for underground itineraries. The objective is to define minor paths to ensure continuity between the touristic flows and entire underground segments already investigated but now disconnected: open-air paths which plunge into the earth, retracing historical and preserved fragments. The visitor, in this way, passes from real spaces to sensitive spaces, in which the imaginary replaces the real experience, running towards exciting and secret knowledge. To safeguard the complex framework of historical-artistic values, it is essential to use a multidisciplinary methodology based on a global approach. Moreover, it is essential to refer to similar design projects for the archaeological underground, capable of guiding action strategies, looking at similar conditions in other cities, where the project has led to an enhancement of the heritage in the city. The research limits the field of investigation by choosing the historic center of Naples, applying bibliographic and theoretical research to a real place. First of all, it is necessary to deepen knowledge of the places, understanding the potential of the project as a link between what is below and what is above. Starting from a scientific approach, in which theory and practice are constantly intertwined through the architectural project, the major contribution is to provide possible alternative configurations for the underground space and its relationship with the city above, understanding how the condition of transition, as a passage between below and above, becomes structuring in the design process. Starting from the consideration of the underground as both a real physical place and a sensitive place, which engages the memory, imagination, and sensitivity of man, the research aims at identifying possible configurations and actions useful for future urban programs to make the underground, once again, a central part of the lived city.

Keywords: underground paths, invisible ruins, imaginary, sculpted forms, sensitive spaces, Naples

Procedia PDF Downloads 103
132 Modified Graphene Oxide in Ceramic Composite

Authors: Natia Jalagonia, Jimsher Maisuradze, Karlo Barbakadze, Tinatin Kuchukhidze

Abstract:

At present, intensive scientific research on ceramics, cermets and metal alloys is being conducted to improve the physical-mechanical characteristics of these materials. With the purpose of increasing the impact strength of alumina-based ceramics, a simple method of graphene homogenization was developed. Homogeneous distribution of graphene (homogenization) in the pressing composite became possible through the connection of the functional groups of graphene oxide (-OH, -COOH, -O-O- and others) and the surface OH groups of alumina with aluminum organic compounds. These two components connect with each other through -O-Al-O- bonds, and upon thermal treatment (300-500°C), the graphene and alumina phases are transformed. The choice of aluminum organic compounds for modification is motivated by the following consideration: the fragments of the aluminum organic compounds fixed on graphene and alumina are finally transformed into an integral part of the matrix. If other elements are used as modifiers, other phases form on the matrix (Al2O3) surface, which sharply change the physical-mechanical properties of the ceramic composites; for this reason, the effect caused by the inclusion of graphene would be unknown. Fixing graphene fragments on the alumina surface with alumoorganic compounds results in a new type of graphene-alumina complex, in which the two components are connected by C-O-Al bonds. Some of the carbon atoms in graphene oxide are in the sp3 hybrid state, so functional groups (-OH, -COOH) are located on both sides of the graphene oxide layer. The aluminum organic compound reacts with graphene oxide at room temperature, and modified graphene oxide is obtained: R2Al-O-[graphene]-COOAlR2. The remaining Al-C bonds also react rapidly with the surface OH groups of alumina. As a result of these processes, the pressing powder composite [Al2O3]-O-Al-O-[graphene]-COO-Al-O-[Al2O3] is obtained. For this purpose, the alumoorganic compound Al(iC4H9)3 in toluene was added to a suspension of graphene oxide in dry toluene in an equimolar ratio. The obtained suspension was placed in a flask, and the solvent was removed in a rotary evaporator under a nitrogen atmosphere. The obtained powder was characterized and used for the consolidation of ceramic materials based on alumina. Ceramic composites were obtained in a high-temperature vacuum furnace under different temperature and pressure conditions. The resulting ceramics have no open pores, and their density reaches 99.5% of the theoretical density (TD). During the work, the following devices were used: a high-temperature vacuum furnace from OXY-GON Industries Inc. (USA), a spark plasma synthesis device, an induction furnace, the electronic scanning microscope Nikon Eclipse LV 150, the optical microscope NMM-800TRF, the planetary mill Pulverisette 7 premium line, the Shimadzu Dynamic Ultra Micro Hardness Tester DUH-211S, the Analysette 12 Dynasizer, and others.

Keywords: graphene oxide, alumo-organic, ceramic

Procedia PDF Downloads 308
131 Probing Mechanical Mechanism of Three-Hinge Formation on a Growing Brain: A Numerical and Experimental Study

Authors: Mir Jalil Razavi, Tianming Liu, Xianqiao Wang

Abstract:

Cortical folding, characterized by convex gyri and concave sulci, has an intrinsic relationship to the brain’s functional organization. Understanding the mechanism of the brain’s convoluted patterns can provide useful clues into normal and pathological brain function. During development, the cerebral cortex experiences a noticeable expansion in volume and surface area accompanied by tremendous tissue folding, which may be attributed to many possible factors. Despite decades of endeavors, the fundamental mechanism and key regulators of this crucial process remain incompletely understood. Therefore, to take even a small step toward unraveling the mystery of brain folding, we present a mechanical model of the mechanism of 3-hinge formation in a growing brain, which has not been addressed before. A 3-hinge is defined as a gyral region where three gyral crests (hinge-lines) join. How and why the brain prefers to develop 3-hinges has not been answered very well. Therefore, we offer a theoretical and computational explanation of the mechanism of 3-hinge formation in a growing brain and validate it with experimental observations. In the theoretical approach, the dynamic behavior of brain tissue is examined and described with the aid of a large-strain, nonlinear constitutive model. The derived constitutive model is used in the computational model to define the material behavior. Since the theoretical approach cannot predict the evolution of complex cortical convolutions after instability, nonlinear finite element models are employed to study 3-hinge formation and the secondary morphological folds of the developing brain. Three-dimensional (3D) finite element analyses on a multi-layer soft tissue model which mimics a small piece of the brain are performed to investigate the fundamental mechanism of consistent hinge formation in cortical folding. Results show that after a certain amount of cortical growth, the mechanical model becomes unstable and then, through the formation of creases, enters a new configuration with lower strain energy. With further growth of the model, the shallow creases that have formed start to develop convoluted patterns and then 3-hinge patterns. Simulation results related to 3-hinges in the models show good agreement with experimental observations from macaque, chimpanzee and human brain images. These results have great potential to reveal fundamental principles of brain architecture and to produce a unified theoretical framework that convincingly explains the intrinsic relationship between cortical folding and 3-hinge formation. This fundamental understanding of the intrinsic relationship between cortical folding and 3-hinge formation would potentially shed new light on the diagnosis of many brain disorders such as schizophrenia, autism, lissencephaly and polymicrogyria.
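
A standard analytical companion to the growth instability described above (not the paper's finite element model) is the classical wrinkling estimate for a stiff growing layer on a softer substrate; the sketch below evaluates its wavelength and critical strain for hypothetical stiffness values.

```python
import math

# Classical film-on-substrate wrinkling estimates for a cortical layer of thickness h and
# shear modulus mu_f growing on a softer core of shear modulus mu_s:
#   wavelength       lam   ~ 2*pi*h * (mu_f / (3*mu_s))**(1/3)
#   critical strain  eps_c ~ 0.25 * (3*mu_s / mu_f)**(2/3)
def wrinkle_wavelength(h, mu_f, mu_s):
    return 2.0 * math.pi * h * (mu_f / (3.0 * mu_s)) ** (1.0 / 3.0)

def critical_strain(mu_f, mu_s):
    return 0.25 * (3.0 * mu_s / mu_f) ** (2.0 / 3.0)

# Hypothetical values: 2 mm cortex, cortex three times stiffer than the core.
h, mu_f, mu_s = 2.0e-3, 3.0, 1.0
print(wrinkle_wavelength(h, mu_f, mu_s) * 1e3, "mm")   # ~12.6 mm fold spacing
print(critical_strain(mu_f, mu_s))                     # ~0.25 compressive strain at onset
```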

Keywords: brain, cortical folding, finite element, three hinge

Procedia PDF Downloads 236
130 Implementation of a PDMS Microdevice for the Improved Purification of Circulating MicroRNAs

Authors: G. C. Santini, C. Potrich, L. Lunelli, L. Vanzetti, S. Marasso, M. Cocuzza, C. Pederzolli

Abstract:

The relevance of circulating miRNAs as non-invasive biomarkers for several pathologies is nowadays undoubtedly clear, as they have been found to have both diagnostic and prognostic value, able to add fundamental information to patients’ clinical picture. The availability of these data, however, relies on a time-consuming process spanning from sample collection and processing to data analysis. In light of this, strategies able to ease this procedure are in high demand, and considerable effort has been made in developing Lab-on-a-chip (LOC) devices able to speed up and standardise the bench work. In this context, a very promising polydimethylsiloxane (PDMS)-based microdevice which integrates the processing of the biological sample, i.e., purification of extracellular miRNAs, and reverse transcription was previously developed in our lab. In this study, we aimed to improve the miRNA extraction performance of this microdevice by increasing the ability of its surface to adsorb extracellular miRNAs from biological samples. For this purpose, we focused on the modulation of two properties of the material: roughness and charge. PDMS surface roughness was modulated by casting on several templates (terminated with silicon oxide coated with a thin anti-adhesion aluminum layer), followed by a panel of curing conditions. Atomic force microscopy (AFM) was employed to estimate changes at the nanometric scale. To introduce modifications in surface charge, we functionalized PDMS with different mixes of positively charged 3-aminopropyltrimethoxysilane (APTMS) and neutral poly(ethylene glycol) silane (PEG). The surface chemical composition was characterized by X-ray photoelectron spectroscopy (XPS), and the number of exposed primary amines was quantified with the reagent sulfosuccinimidyl-4-o-(4,4-dimethoxytrityl) butyrate (s-SDTB). As our final end point, the adsorption rate under all these different conditions was assessed by fluorescence microscopy after incubating a synthetic fluorescently labeled miRNA. Our preliminary analysis identified casting on thermally grown silicon oxide, followed by a curing step at 85°C for 1 hour, as the most efficient technique to obtain a PDMS surface roughness on the nanometric scale able to trap miRNA. In addition, functionalisation with 0.1% APTMS and 0.9% PEG was found to be a necessary step to significantly increase the amount of microRNA adsorbed on the surface and therefore available for further steps such as on-chip reverse transcription. These findings show a substantial improvement in the extraction efficiency of our PDMS microdevice, ultimately leading to an important step forward in the development of an innovative, easy-to-use and integrated system for the direct purification of less abundant circulating microRNAs.
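
A minimal sketch of how the fluorescence readout described above could be quantified from a single image is shown below; the synthetic image is a stand-in for the microscopy data (in practice the image would be loaded, e.g. with skimage.io.imread), and Otsu thresholding is only one reasonable choice for separating the labeled-miRNA signal from the background.

```python
import numpy as np
from skimage import filters

# Synthetic stand-in for a grayscale fluorescence image of the functionalized PDMS surface.
rng = np.random.default_rng(1)
img = rng.normal(0.10, 0.02, (512, 512))     # dim background
img[200:300, 200:300] += 0.5                 # bright patch of adsorbed, labeled miRNA

thresh = filters.threshold_otsu(img)         # separate labeled spots from background
signal = img[img > thresh].mean()            # mean intensity of the miRNA signal
background = img[img <= thresh].mean()       # mean background intensity
print("background-corrected mean intensity:", signal - background)
```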

Keywords: circulating miRNAs, diagnostics, Lab-on-a-chip, polydimethylsiloxane (PDMS)

Procedia PDF Downloads 318
129 Applying Biosensors’ Electromyography Signals through an Artificial Neural Network to Control a Small Unmanned Aerial Vehicle

Authors: Mylena McCoggle, Shyra Wilson, Andrea Rivera, Rocio Alba-Flores

Abstract:

This work introduces the use of electromyography (EMG) signals from muscle sensors to develop an Artificial Neural Network (ANN) for pattern recognition to control a small unmanned aerial vehicle. The objective of this endeavor is to demonstrate interfacing with drone applications beyond direct manual control. The MyoWare muscle sensor contains three EMG electrodes (dual and single type) used to collect signals from the posterior (extensor) and anterior (flexor) forearm and the biceps. The raw voltages from each sensor were collected through an Arduino Uno, and a data processing algorithm was developed with the purpose of interpreting the voltage signals given when performing flexing, resting, and motion of the arm. Each sensor collected eight values over a two-second period for the duration of one minute per assessment. During each two-second interval, the movements alternated between a resting reference class and an active motion class, resulting in controlling the motion of the drone with left and right movements. This paper further investigated adding up to three sensors to differentiate between hand gestures to control the principal motions of the drone (left, right, up, and land). The hand gestures chosen to execute these movements were: a resting position, a thumbs up, a hand swipe right motion, and a flexing position. The MATLAB software was utilized to collect, process, and analyze the signals from the sensors, and a machine learning protocol was used to classify the hand gestures. To generate the input vector to the ANN, the mean, root mean square, and standard deviation were computed for every two-second interval of the hand gestures. The neuromuscular information was then used to train an artificial neural network with one hidden layer of 10 neurons to categorize the four targets, one for each hand gesture. Once the machine learning training was completed, the resulting network interpreted the processed inputs and returned the probabilities of each class. Based on the resulting probabilities, once an output was greater than or equal to 80% for a specific target class, the drone would perform the expected motion. Afterward, each movement command was sent from the computer to the drone through a Wi-Fi network connection. These procedures have been successfully tested and integrated into trial flights, where the drone has responded successfully in real time to predefined command inputs from the machine learning algorithm through the MyoWare sensor interface. The full paper will describe in detail the database of the hand gestures, the details of the ANN architecture, and the confusion matrix results.
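
The paper's pipeline is implemented in MATLAB; purely as a hedged illustration, the sketch below reproduces the same idea in Python with scikit-learn: mean, RMS and standard deviation features per two-second window, one hidden layer of 10 neurons, and an 80% confidence gate before a command is issued. The data are random placeholders, not the recorded EMG.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def window_features(window):
    """Mean, RMS, and standard deviation of one two-second EMG window from one sensor."""
    return np.array([window.mean(), np.sqrt(np.mean(window ** 2)), window.std()])

# Placeholder data: 200 windows of 8 samples for each of 3 sensors, 4 gesture classes.
rng = np.random.default_rng(0)
raw = rng.normal(size=(200, 3, 8))
X = np.array([np.concatenate([window_features(w) for w in trial]) for trial in raw])
y = rng.integers(0, 4, size=200)     # 0: rest, 1: thumbs up, 2: swipe right, 3: flex

# One hidden layer of 10 neurons, as described in the abstract.
net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(X, y)

proba = net.predict_proba(X[:1])[0]
if proba.max() >= 0.80:              # act only when the network is at least 80% confident
    print("send drone command for gesture class", proba.argmax())
```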

Keywords: artificial neural network, biosensors, electromyography, machine learning, MyoWare muscle sensors, Arduino

Procedia PDF Downloads 174
128 A Method Intensive Top-down Approach for Generating Guidelines for an Energy-Efficient Neighbourhood: A Case of Amaravati, Andhra Pradesh, India

Authors: Rituparna Pal, Faiz Ahmed

Abstract:

Neighbourhood energy efficiency is a recently emerged term addressing the quality of the urban stratum of the built environment in terms of various covariates of sustainability. The sustainability paradigm in developed nations has encouraged policymakers of developing cities to envision plans under the aegis of urban-scale sustainability. The concept of neighbourhood energy efficiency has been taken up only lately, just as the cities, towns and other areas comprising the massive global urban stratum have started facing strong blows from climate change, the energy crisis, rising costs and an alarming shortfall in the justice that urban areas require. This step towards urban sustainability can therefore be seen largely as a 'retrofit action' intended to repair an already affected urban structure. Even if energy efficiency measures are introduced for existing cities and urban areas, the initial layer remains, for which a complete model of urban sustainability still lacks definition. Urban sustainability is a broadly used term with countless parameters and policies through which the loop can be closed. Neighbourhood energy efficiency can be an integral part of it, in which neighbourhood-scale indicators, block-level indicators and building physics parameters are understood, analyzed and synthesized to help derive guidelines for urban-scale sustainability. The future of neighbourhood energy efficiency lies not only in energy efficiency itself but also in important parameters such as quality of life, access to green space, access to daylight, outdoor comfort, natural ventilation, etc. Apart from designing less energy-hungry buildings, it is necessary to create a built environment that places less stress on buildings to consume more energy. Much of the existing literature comes from Western contexts, prominently Spain and Paris, as well as Hong Kong, leaving a distinct gap in the Indian scenario in exploring sustainability at the urban stratum. The study site has been selected in the upcoming capital city of Amaravati, and the approach can be replicated for similar neighbourhood typologies in the area. The paper suggests a method-intensive approach to quantify energy and sustainability indices in detail, involving several macro-, meso- and micro-level covariates and parameters. Several iterations have been made at both the macro and micro levels and have been subjected to simulation, computation and mathematical models, and finally to comparative analysis. Parameters at all levels are analyzed to suggest best-case scenarios, which are in turn extrapolated to the macro level, finally yielding a proposed model for an energy-efficient neighbourhood and worked-out guidelines with significance and correlations derived.
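Purely as an illustration of the comparative step described above, the sketch below correlates a few hypothetical neighbourhood parameters with simulated energy use intensity and picks the best-case iteration; the parameter names, values and units are invented and do not come from the study.

```python
# Illustrative sketch only: correlating hypothetical neighbourhood-scale parameters
# with simulated energy use intensity (EUI) and ranking iterations, in the spirit of
# the comparative analysis described above. All names and values are invented.
import pandas as pd

iterations = pd.DataFrame({
    "built_density":     [0.35, 0.45, 0.55, 0.65, 0.75],    # meso-scale covariate
    "green_cover":       [0.40, 0.35, 0.30, 0.25, 0.20],    # macro-scale covariate
    "window_wall_ratio": [0.25, 0.30, 0.35, 0.40, 0.45],    # micro-scale covariate
    "eui_kwh_m2":        [92.0, 97.5, 104.0, 112.5, 121.0]  # simulated energy use
})

# Correlation of each parameter with simulated energy use (sign and strength
# would feed the guideline derivation).
correlations = iterations.drop(columns="eui_kwh_m2").corrwith(iterations["eui_kwh_m2"])
print(correlations)

# Best-case iteration: the one with the lowest simulated energy use intensity.
best_case = iterations.loc[iterations["eui_kwh_m2"].idxmin()]
print(best_case)
```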

Keywords: energy quantification, macro scale parameters, meso scale parameters, micro scale parameters

Procedia PDF Downloads 176
127 Informative, Inclusive and Transparent Planning Methods for Sustainable Heritage Management

Authors: Mathilde Kirkegaard

Abstract:

The paper will focus on management of heritage that integrates the local community, and argue for an obligation to integrate this social aspect in heritage management. By broadening the understanding of heritage, sustainable heritage management takes its departure in more than the continual conservation of heritage's physicality. The social aspect, or the local community, is overlooked in many governmental heritage management situations and is not managed through community-based urban planning methods, e.g. citizen inclusion, a transparent process, and informative and inviting initiatives. Historical sites are often described with embracing terms such as "ours" and "us": "our history" and "a history that is part of us". Heritage is not something static; it is a link between the life that has been lived within the historical frames and the life that is defining it today. This view on heritage is rooted in the effort to ensure that heritage sites, besides securing the national historical interest, have a value for those people who are affected by them: living in them or visiting them. Antigua Guatemala is a UNESCO-designated heritage site, and this site is being 'threatened' by tourism, habitation and recreation. In other words, 'the use' of the site is considered a threat to the preservation of the heritage. Contradictorily, the same types of use (tourism and habitation) can also be considered a development potential, and perhaps even a sustainable management solution. 'The use' of heritage is interlinked with the perspective that heritage sites ought to have a value for people today; in other words, heritage sites should contain a contemporary substance. Heritage is entwined in its context of physical structures and the social layer. A synergy between the use of heritage and the knowledge about the heritage can generate a sustainable preservation solution. The paper will exemplify this symbiosis with different examples of heritage management centred around local community inclusion. The inclusive method is not new in architectural planning, and it refers to a balance between top-down and bottom-up decision making. It can be pursued through designs of an inclusive nature. Catalyst architecture is a planning method that strives to move the process of design solutions into the public space. Through process-orientated designs, or catalyst designs, the community can gain insight into the process or be invited to participate in it. A balance between bottom-up and top-down in the development process of a heritage site can, in relation to management measures, be understood to generate a socially sustainable solution. The ownership and engagement that can be created among the local community, along with a use that can ultimately yield an economic benefit, can help delegate maintenance and preservation. Informative, inclusive and transparent planning methods can generate a heritage management that is long-term due to the collective understanding and effort. This method handles sustainable management on two levels: the current preservation necessities and the long-term management, while ensuring a value for people today.

Keywords: community, intangible, inclusion, planning

Procedia PDF Downloads 117
126 Sensory Interventions for Dementia: A Review

Authors: Leigh G. Hayden, Susan E. Shepley, Cristina Passarelli, William Tingo

Abstract:

Introduction: Sensory interventions are popular therapeutic and recreational approaches for people living with all stages of dementia. However, it is unknown which sensory interventions are used to achieve which outcomes across all subtypes of dementia. Methods: To address this gap, we conducted a scoping review of sensory interventions for people living with dementia. We searched the literature for any article published in English from 1 January 1990 to 1 June 2019 on any sensory or multisensory intervention targeted at people living with any kind of dementia that reported on patient health outcomes. We did not include complex interventions where only a small aspect was related to sensory stimulation. We searched the databases Medline, CINAHL, and PsycARTICLES using our institutional discovery layer. We conducted all screening in duplicate to reduce Type 1 and Type 2 errors. The data from all included papers were extracted by one team member and audited by another to ensure consistency of extraction and completeness of data. Results: Our initial search captured 7654 articles, and the removal of duplicates (n=5329), those that did not pass title and abstract screening (n=1840), and those that did not pass full-text screening (n=281) resulted in 174 articles included. The countries with the highest publication counts in this area were the United States (n=59), the United Kingdom (n=26) and Australia (n=15). The most common types of interventions were music therapy (n=36), multisensory rooms (n=27) and multisensory therapies (n=25). Seven articles were published in the 1990s, 55 in the 2000s, and the remainder since 2010 (n=112). Discussion: Multisensory rooms have been present in the literature since the early 1990s. More recently, nature/garden therapy, art therapy, and light therapy have emerged in the literature since 2008, an indication of increasingly diverse scholarship in the area. The least popular type of intervention is a traditional food intervention. Taste as a sensory intervention is generally avoided for safety reasons; however, it shows potential for increasing quality of life. Agitation, behavior, and mood are common outcomes for all sensory interventions, whereas light therapy commonly targets sleep. The majority (n=110) of studies have very small sample sizes (n=20 or less), an indicator of the lack of robust data in the field. Additional small-scale studies of the known sensory interventions will likely do little to advance the field. However, there is a need for multi-armed studies that directly compare sensory interventions, and for more studies that investigate layering sensory interventions (for example, adding an aromatherapy component to a lighting intervention). In addition, large-scale studies that enroll people at early stages of dementia will help us better understand the potential of sensory and multisensory interventions to slow the progression of the disease.

Keywords: sensory interventions, dementia, scoping review

Procedia PDF Downloads 134
125 The Data Quality Model for the IoT based Real-time Water Quality Monitoring Sensors

Authors: Rabbia Idrees, Ananda Maiti, Saurabh Garg, Muhammad Bilal Amin

Abstract:

IoT devices are the basic building blocks of an IoT network; they generate enormous volumes of real-time, high-speed data that help organizations and companies make intelligent decisions. Integrating these enormous data from multiple sources and transferring them to the appropriate client is fundamental to IoT development. Handling this huge number of devices along with the huge volume of data is very challenging. IoT devices are battery-powered and resource-constrained; to provide energy-efficient communication, they go to sleep and wake up periodically or aperiodically depending on the traffic load, in order to reduce energy consumption. Sometimes these devices get disconnected due to battery depletion. If a node is not available in the network, the IoT network provides incomplete, missing, and inaccurate data. Moreover, many IoT applications, like vehicle tracking and patient tracking, require the IoT devices to be mobile. Due to this mobility, if the distance of the device from the sink node becomes greater than allowed, the connection is lost. After such disconnections, other devices join the network to replace the broken-down or departed devices. This makes IoT devices dynamic in nature, which brings uncertainty and unreliability into the IoT network and hence produces poor-quality data. Due to this dynamic nature, the actual reason for abnormal data is often unknown. If data are of poor quality, decisions are likely to be unsound. It is therefore highly important to process data and estimate data quality before using them in IoT applications. In the past, many researchers tried to estimate data quality and provided several machine learning (ML), stochastic and statistical methods to analyze stored data in the data processing layer, without focusing on the challenges and issues arising from the dynamic nature of IoT devices and how they impact data quality. This research comprehensively reviews the impact of the dynamic nature of IoT devices on data quality and presents a data quality model that can deal with this challenge and produce good-quality data. The model targets sensors monitoring water quality and is built using DBSCAN clustering together with weather sensor data. An extensive study was conducted on the relationship between the data of weather sensors and the sensors monitoring the water quality of lakes and beaches. A detailed theoretical analysis is presented on the correlation between the independent data streams of the two sets of sensors. With the help of this analysis and DBSCAN, a data quality model is prepared. This model encompasses five dimensions of data quality: it detects and removes outliers, assesses completeness, identifies patterns of missing values, and checks the accuracy of the data with the help of cluster positions. Finally, a statistical analysis is performed on the clusters formed by DBSCAN, and consistency is evaluated through the Coefficient of Variation (CoV).
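A minimal sketch of the two computational ingredients named above, assuming a joined stream of weather and water-quality readings (the column names and data are illustrative, not the study's dataset): DBSCAN marks points that fall outside any cluster as outliers, and the coefficient of variation is computed per cluster as a consistency check.

```python
# Minimal sketch: DBSCAN-based outlier flagging plus per-cluster CoV on a
# hypothetical joined stream of weather and water-quality readings.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Hypothetical readings: [air_temp_C, rainfall_mm, water_temp_C, turbidity_NTU]
readings = rng.normal(loc=[20.0, 2.0, 18.0, 5.0],
                      scale=[3.0, 1.0, 2.0, 1.5], size=(500, 4))
readings[:5] += 25.0  # inject a few faulty sensor values

X = StandardScaler().fit_transform(readings)
labels = DBSCAN(eps=0.8, min_samples=10).fit_predict(X)

outliers = readings[labels == -1]          # points not assigned to any cluster
print(f"outliers detected: {len(outliers)}")

for cluster_id in set(labels) - {-1}:
    cluster = readings[labels == cluster_id]
    cov = cluster.std(axis=0) / cluster.mean(axis=0)   # CoV per variable
    print(cluster_id, np.round(cov, 3))
```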

Keywords: clustering, data quality, DBSCAN, and Internet of things (IoT)

Procedia PDF Downloads 139
124 Field Performance of Cement Treated Bases as a Reflective Crack Mitigation Technique for Flexible Pavements

Authors: Mohammad R. Bhuyan, Mohammad J. Khattak

Abstract:

Deterioration of flexible pavements due to crack reflection from the soil-cement base layer is a major concern around the globe. The service life of flexible pavement diminishes significantly because of these reflective cracks. Highway agencies have been struggling for decades to prevent or mitigate these cracks in order to increase pavement service lives. The root cause of reflective cracks is the shrinkage cracking that occurs in soil-cement bases during the cement hydration process. The primary factor that causes the shrinkage is the cement content of the soil-cement mixture. With increasing cement content, the soil-cement base gains strength and durability, which is necessary to withstand traffic loads; at the same time, however, higher cement content creates more shrinkage, resulting in more reflective cracks in pavements. Historically, various US states have used soil-cement bases for constructing flexible pavements. The state of Louisiana (USA) has been using 8 to 10 percent cement content to manufacture soil-cement bases. Such traditional soil-cement bases yield 2.0 MPa (300 psi) 7-day compressive strength and are termed cement stabilized design (CSD). As these CSD bases generate significant reflective cracks, another soil-cement base design, called cement treated design (CTD), has been utilized with 4 to 6 percent cement content, which yields 1.0 MPa (150 psi) 7-day compressive strength. The reduced cement content in the CTD base is expected to minimize shrinkage cracks, thus increasing pavement service lives. Hence, this research study evaluates the long-term field performance of CTD bases with respect to CSD bases used in flexible pavements. The Pavement Management System of the state of Louisiana was utilized to select flexible pavement projects with CSD and CTD bases that had good historical records and time-series distress performance data. It should be noted that the state collects roughness and distress data for each 1/10th-mile section every two years. In total, 120 CSD and CTD projects were analyzed in this research, where more than 145 miles (CTD) and 175 miles (CSD) of roadway data were accepted for performance evaluation and benefit-cost analyses. Here, the service life extension and the area under the distress performance curve were considered as benefits. It was found that CTD bases increased pavement service lives by 1 to 5 years based on transverse cracking, as compared to CSD bases. On the other hand, the service lives based on longitudinal and alligator cracking, rutting and roughness index remained the same. Hence, CTD bases provide some service life extension (2.6 years on average) for the controlling distress, transverse cracking, while also being less expensive due to their lower cement content. Consequently, CTD bases were found to be 20% more cost-effective than the traditional CSD bases when both bases were compared by the net benefit-cost ratio obtained from all distress types.
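For illustration only, the toy calculation below shows the kind of net benefit-cost comparison described above, taking service life (plus the average 2.6-year extension against transverse cracking) as the benefit and a relative construction cost per mile as the cost; every number in it is a placeholder, not the study's data.

```python
# Illustrative benefit-cost comparison in the spirit of the analysis above.
# All costs and service lives below are placeholders, not the study's data.

def benefit_cost_ratio(service_life_yr, cost_per_mile, extension_yr=0.0):
    """Benefit taken as service life plus any extension from delayed transverse
    cracking; cost taken as the relative construction cost per mile."""
    return (service_life_yr + extension_yr) / cost_per_mile

# Hypothetical inputs: CTD uses less cement, so a slightly lower relative cost,
# and gains ~2.6 years against transverse cracking on average.
csd = benefit_cost_ratio(service_life_yr=15.0, cost_per_mile=1.00)
ctd = benefit_cost_ratio(service_life_yr=15.0, cost_per_mile=0.95, extension_yr=2.6)

print(f"CTD vs CSD benefit-cost ratio: {ctd / csd:.2f}x")
```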

Keywords: cement treated base, cement stabilized base, reflective cracking, service life, flexible pavement

Procedia PDF Downloads 166
123 Evaluation of Cyclic Steam Injection in Multi-Layered Heterogeneous Reservoir

Authors: Worawanna Panyakotkaew, Falan Srisuriyachai

Abstract:

Cyclic steam injection (CSI) is a thermal recovery technique performed by periodically injecting heated steam into a heavy oil reservoir. Oil viscosity is substantially reduced by means of the heat transferred from the steam and, together with gas pressurization, oil recovery is greatly improved. Nevertheless, predicting the effectiveness of the process is difficult when the reservoir contains a degree of heterogeneity. Therefore, heterogeneity, together with the reservoir properties of interest, must be evaluated prior to field implementation. In this study, a thermal reservoir simulation program is utilized. The reservoir model is first constructed as a multi-layered system with a coarsening-upward sequence: the highest permeability is located in the top layer, with permeability values descending in the lower layers. Steam is injected from two wells located diagonally in a quarter five-spot pattern. Heavy oil is produced by adjusting operating parameters, including soaking period and steam quality. After selecting the conditions of both parameters that yield the highest oil recovery, the effects of the degree of heterogeneity (represented by the Lorenz coefficient), vertical permeability and permeability sequence are evaluated. Surprisingly, the simulation results show that reservoir heterogeneity benefits the CSI technique. Increasing reservoir heterogeneity makes the permeability distribution more uneven. The high permeability contrast results in steam intruding into the upper layers. Once the temperature cools down during the back-flow period, condensed water percolates downward, resulting in high oil saturation in the top layers. Gas saturation appears at the top after a while, causing better propagation of steam in the following cycle due to the high compressibility of gas. A large steam chamber therefore covers most of the area in the upper zone. Oil recovery reaches approximately 60%, which is about 20% higher than in the case of the less heterogeneous reservoir. Vertical permeability also benefits CSI: expansion of the steam chamber from the upper to the lower zone occurs within a shorter time. For the fining-upward permeability sequence, where permeability values are reversed from the previous case, steam does not override into the top layers due to their low permeability. Propagation of the steam chamber occurs in the middle of the reservoir, where permeability is high enough. The rate of oil recovery is slower than in the coarsening-upward case due to the lower permeability at the location where the steam chamber propagates. Even though the CSI technique produces oil quite slowly in early cycles, once the steam chamber is formed deep in the reservoir, heat is delivered to the formation quickly in later cycles. Since reservoir heterogeneity is unavoidable, a thorough understanding of its effect must be considered. This study shows that the CSI technique might be a compatible solution for highly heterogeneous reservoirs. This competitive technique also shows a benefit in terms of heat consumption, as steam is injected periodically.
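As a minimal sketch of the heterogeneity measure used above, the Lorenz coefficient can be computed from layer permeability, porosity and thickness: layers are ranked by permeability/porosity, cumulative flow capacity is plotted against cumulative storage capacity, and the coefficient is twice the area between that curve and the diagonal. The layer values below are illustrative, not the study's model.

```python
# Minimal sketch of the Lorenz coefficient as a heterogeneity measure:
# Lc = 0 for a homogeneous reservoir, approaching 1 for a highly heterogeneous one.
import numpy as np

def lorenz_coefficient(k, phi, h):
    order = np.argsort(k / phi)[::-1]                     # most conductive layers first
    kh = (k * h)[order]
    phih = (phi * h)[order]
    x = np.insert(np.cumsum(phih) / phih.sum(), 0, 0.0)   # cumulative storage capacity
    y = np.insert(np.cumsum(kh) / kh.sum(), 0, 0.0)       # cumulative flow capacity
    area_under_curve = np.trapz(y, x)
    return 2.0 * (area_under_curve - 0.5)                 # twice the area above the diagonal

# Coarsening-upward example: highest permeability in the top layer (illustrative values).
k = np.array([800.0, 400.0, 150.0, 50.0])   # permeability, mD, top to bottom
phi = np.array([0.28, 0.25, 0.22, 0.20])    # porosity
h = np.array([5.0, 5.0, 5.0, 5.0])          # layer thickness, m

print(f"Lorenz coefficient: {lorenz_coefficient(k, phi, h):.2f}")
```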

Keywords: cyclic steam injection, heterogeneity, reservoir simulation, thermal recovery

Procedia PDF Downloads 459
122 Consensual A-Monogamous Relationships: Challenges and Ways of Coping

Authors: Tal Braverman Uriel, Tal Litvak Hirsch

Abstract:

Background and Objectives: Little or only partial emphasis has been placed on exploring the complexity of consensual non-monogamous relationships. The term "polyamory" refers to consensual non-monogamy and is defined as having emotional and/or sexual relations simultaneously with two or more people, with the consent and knowledge of all the partners concerned. Managing multiple romantic relationships with different people evokes more emotions, leads to more emotional conflicts arising from different interests, and demands practical strategies. An individual's transition from a monogamous lifestyle to a consensual non-monogamous lifestyle yields new challenges, accompanied by stress, uncertainty, and question marks, as do other life-changing events, such as divorce or the transition to parenthood. The study examines both the process of transition and adaptation to a consensually non-monogamous relationship and the coping mechanisms involved in the daily conduct of this lifestyle. The research focuses on understanding the consequences, challenges, and coping methods from a personal, marital, and familial point of view, drawing on 40 middle-aged individuals (20 men and 20 women, ages 40-60). The research sheds light on a way of life that has not been previously studied in Israel and is still considered unacceptable. Theories of crisis (e.g., Folkman and Lazarus) were applied, and as a result, a deeper understanding of the subject was reached, all while focusing on multiple aspects of dealing with stress. The basic research question examines the consequences of entering a polyamorous life from a personal point of view as an individual, partner, and parent, and the ways of coping with these consequences. Method: The research is conducted with a narrative qualitative approach in the interpretive paradigm, including semi-structured in-depth interviews. The method of analysis is thematic. Results: The findings indicate that in most cases, an individual's motivation to open the relationship is mainly a longing for better sexuality and for an added layer of excitement in their lives. Most of the interviewees were assisted by their spouses in the process, as well as by social networks and podcasts on the subject, and some of them also found therapeutic professionals in the field helpful. It also clearly emerged that even among those who experienced acute emotional crises with the primary partner or painful separations from secondary partners, all still believed polyamory to be an adequate way of life for them. Finally, a key resource for managing tension and stress is the ability to share and communicate with the primary partner. Conclusions: The study points to the challenges and benefits of a non-monogamous lifestyle, as well as the use of coping mechanisms and resources that are consistent with existing theory and research in the field in the context of life changes. The study indicates the need to expand the research canvas in the future in the context of parenting and the consequences for children.

Keywords: a-monogamy, consent, family, stress, tension

Procedia PDF Downloads 76
121 Unknown Groundwater Pollution Source Characterization in Contaminated Mine Sites Using Optimal Monitoring Network Design

Authors: H. K. Esfahani, B. Datta

Abstract:

Groundwater is one of the most important natural resources in many parts of the world; however, it is widely polluted due to human activities. Currently, effective and reliable groundwater management and remediation strategies are obtained using characterization of groundwater pollution sources, where the measured data at monitoring locations are utilized to estimate the unknown pollutant source location and magnitude. However, accurately identifying the characteristics of contaminant sources is a challenging task due to uncertainties in terms of predicted source flux injection, hydro-geological and geo-chemical parameters, and the concentration field measurements. Reactive transport of chemical species in contaminated groundwater systems, especially with multiple species, is a complex and highly non-linear geochemical process. Although sufficient concentration measurement data are essential to accurately identify source characteristics, the available data are often sparse and limited in quantity. Therefore, this inverse problem of characterizing unknown groundwater pollution sources is often considered ill-posed, complex and non-unique. Different methods have been utilized to identify pollution sources; however, the linked simulation-optimization approach is one effective method to obtain acceptable results under uncertainties in complex real-life scenarios. With this approach, the numerical flow and contaminant transport simulation models are externally linked to an optimization algorithm, with the objective of minimizing the difference between the measured concentrations and the estimated pollutant concentrations at observation locations. Concentration measurement data are very important for accurately estimating pollution source properties; therefore, optimal design of the monitoring network is essential to gather adequate measured data at desired times and locations. Due to budget and physical restrictions, an efficient and effective approach for groundwater pollutant source characterization is to design an optimal monitoring network, especially when only inadequate and arbitrary concentration measurement data are initially available. In this approach, preliminary concentration observation data are utilized for preliminary identification of the source location, magnitude and duration of source activity, and these results are utilized for monitoring network design. Further, feedback information from the monitoring network is used as input for sequential monitoring network design, to improve the identification of unknown source characteristics. To design an effective monitoring network of observation wells, optimization and interpolation techniques are used. A simulation model should be utilized to accurately describe the aquifer properties in terms of hydro-geochemical parameters and boundary conditions. However, the simulation of the transport processes becomes complex when the pollutants are chemically reactive. A three-dimensional transient flow and reactive contaminant transport process is considered. The proposed methodology uses HYDROGEOCHEM 5.0 (HGCH) as the simulation model for flow and transport processes with multiple chemically reactive species. Adaptive Simulated Annealing (ASA) is used as the optimization algorithm in the linked simulation-optimization methodology to identify the unknown source characteristics. Therefore, the aim of the present study is to develop a methodology to optimally design an effective monitoring network for pollution source characterization with reactive species in polluted aquifers. The performance of the developed methodology will be evaluated for an illustrative polluted aquifer site, for example, an abandoned mine site in Queensland, Australia.
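As a conceptual sketch of the linked simulation-optimization loop (not the authors' implementation), the example below uses SciPy's dual_annealing in place of Adaptive Simulated Annealing and a toy analytical decay model in place of the HYDROGEOCHEM simulator; the objective simply minimizes the squared difference between "measured" and simulated concentrations at the monitoring wells. All locations, fluxes and the forward model are assumptions.

```python
# Conceptual sketch of a linked simulation-optimization loop: a simulated-annealing
# style optimizer drives a (toy) forward model to recover unknown source parameters.
import numpy as np
from scipy.optimize import dual_annealing

obs_wells = np.array([[120.0, 40.0], [200.0, 60.0], [260.0, 30.0]])  # monitoring locations (m)

def forward_model(source_x, source_y, flux):
    """Toy stand-in for the transport simulator: concentration decays with
    distance from the source, scaled by the source flux."""
    d = np.linalg.norm(obs_wells - np.array([source_x, source_y]), axis=1)
    return flux * np.exp(-d / 80.0)

true_params = (100.0, 50.0, 12.0)                 # "unknown" source to be recovered
observed = forward_model(*true_params)            # synthetic measured concentrations

def misfit(params):
    # Objective: squared difference between measured and simulated concentrations.
    return np.sum((observed - forward_model(*params)) ** 2)

bounds = [(0.0, 300.0), (0.0, 100.0), (0.0, 50.0)]  # x, y, flux search space
result = dual_annealing(misfit, bounds, seed=42)
print("estimated source (x, y, flux):", np.round(result.x, 1))
```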

Keywords: monitoring network design, source characterization, chemical reactive transport process, contaminated mine site

Procedia PDF Downloads 231
120 Effect of Oxygen Ion Irradiation on the Structural, Spectral and Optical Properties of L-Arginine Acetate Single Crystals

Authors: N. Renuka, R. Ramesh Babu, N. Vijayan

Abstract:

Ion beams play a significant role in the process of tuning the properties of materials. Based on their radiation behavior, engineering materials are categorized into two different types: the first comprises organic solids, which are sensitive to the energy deposited in their electronic system, and the second comprises metals, which are insensitive to the energy deposited in their electronic system. However, exposure to swift heavy ions alters this general behavior. Depending on its mass, kinetic energy and nuclear charge, an ion can produce modifications within a thin surface layer, or it can penetrate deeply to produce a long and narrow distorted region along its path. When a highly energetic ion beam impinges on a material, it causes two different types of changes in the material due to the Coulombic interaction between the target atoms and the energetic ions: (i) inelastic collisions of the energetic ion with the atomic electrons of the material; and (ii) elastic scattering from the nuclei of the atoms of the material, which is primarily responsible for displacing the atoms from their lattice positions. After exposure to heavy ions, the material returns to an equilibrium state, during which it undergoes surface and bulk modifications that depend on the mass and energy of the projectile ion, the physical properties of the target material, and the beam dimensions. It is well established that electronic stopping power plays a major role in the defect creation mechanism, provided it exceeds a threshold that strongly depends on the nature of the target material. Reports are available on heavy ion irradiation, especially of crystalline materials, to tune their physical and chemical properties. L-Arginine Acetate [LAA] is a potential semi-organic nonlinear optical crystal, and its optical, mechanical and thermal properties have already been reported. The main objective of the present work is to enhance or tune the structural and optical properties of LAA single crystals by heavy ion irradiation. In the present study, a potential nonlinear optical single crystal, L-arginine acetate (LAA), was grown by the slow evaporation solution growth technique. The grown LAA single crystal was irradiated with oxygen ions at doses of 600 krad and 1 Mrad in order to tune the structural and optical properties. The structural properties of pristine and oxygen-ion-irradiated LAA single crystals were studied using powder X-ray diffraction and Fourier transform infrared spectroscopy, which reveal the structural changes generated by irradiation. The optical behavior of pristine and oxygen-ion-irradiated crystals is studied by UV-Vis-NIR and photoluminescence analyses. From this investigation, we conclude that oxygen ion irradiation modifies the structural and optical properties of LAA single crystals.

Keywords: heavy ion irradiation, NLO single crystal, photoluminescence, X-ray diffractometer

Procedia PDF Downloads 254
119 A Comprehensive Key Performance Indicators Dashboard for Emergency Medical Services

Authors: Giada Feletti, Daniela Tedesco, Paolo Trucco

Abstract:

The present study aims to develop a dashboard of Key Performance Indicators (KPIs) to enhance information and predictive capabilities in Emergency Medical Services (EMS) systems, supporting both operational and strategic decisions of different actors. The research methodology began with a review of the technical-scientific literature concerning the indicators currently used to measure the performance of EMS systems. From this literature analysis, it emerged that current studies focus on two distinct perspectives: the ambulance service, a fundamental component of pre-hospital health treatment, and patient care in the Emergency Department (ED). The perspective proposed by this study is to consider an integrated view of the ambulance service process and the ED process, both essential to ensure high quality of care and patient safety. Thus, the proposal focuses on the entire healthcare service process and, as such, allows consideration of the interconnection between the two EMS processes, the pre-hospital and the hospital one, connected by the assignment of the patient to a specific ED. In this way, it is possible to optimize the entire patient management. Therefore, attention is paid to dependencies between decisions that current EMS management models tend to neglect or underestimate. In particular, the integration of the two processes enables evaluating the advantage of an ED selection decision made with visibility of the EDs' saturation status, therefore considering the distance, the available resources and the expected waiting times. Starting from a critical review of the KPIs proposed in the extant literature, the dashboard was designed: the high number of analyzed KPIs was reduced by first eliminating those not in line with the aim of the study and then those supporting similar functionality. The KPIs finally selected were tested on a realistic dataset, which led us to exclude additional indicators due to the unavailability of the data required for their computation. The final dashboard, which was discussed and validated by experts in the field, includes a variety of KPIs able to support operational and planning decisions, early warning, and citizens' awareness of ED accessibility in real time. By associating each KPI with the EMS phase it refers to, it was also possible to design a well-balanced dashboard covering both the efficiency and the effectiveness of the entire EMS process. Indeed, traditional KPIs mainly cover the initial phases related to the interconnection between the ambulance service and patient care, compared to the subsequent phases taking place in the hospital ED; this could be taken into consideration for the potential future development of the dashboard. Moreover, the research could proceed by building a multi-layer dashboard composed of a first level with a minimal set of KPIs measuring the basic performance of the EMS system at an aggregate level, and further levels with KPIs that bring additional, more detailed information.
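Illustratively, a dashboard layer of this kind could compute phase-specific KPIs from timestamped records of the integrated pathway; the sketch below does so for a pre-hospital response-time KPI, an ED door-to-doctor KPI and a simple ED-saturation early-warning indicator. The field names, thresholds and records are assumptions, not the validated KPI set.

```python
# Illustrative sketch only: a few phase-specific KPIs across the integrated
# pathway (ambulance service + ED) computed from hypothetical timestamped records.
import pandas as pd

records = pd.DataFrame({
    "call_received":     pd.to_datetime(["2023-05-01 10:00", "2023-05-01 10:20"]),
    "ambulance_onscene": pd.to_datetime(["2023-05-01 10:12", "2023-05-01 10:41"]),
    "ed_arrival":        pd.to_datetime(["2023-05-01 10:40", "2023-05-01 11:05"]),
    "ed_first_contact":  pd.to_datetime(["2023-05-01 10:55", "2023-05-01 11:50"]),
})

kpis = {
    # Pre-hospital phase: time from call to ambulance on scene
    "median_response_time_min": (records["ambulance_onscene"] - records["call_received"])
                                 .dt.total_seconds().median() / 60,
    # Hospital (ED) phase: time from ED arrival to first clinical contact
    "median_door_to_doctor_min": (records["ed_first_contact"] - records["ed_arrival"])
                                  .dt.total_seconds().median() / 60,
    # Early-warning style indicator of ED saturation
    "share_waiting_over_30_min": ((records["ed_first_contact"] - records["ed_arrival"])
                                   > pd.Timedelta(minutes=30)).mean(),
}
print(kpis)
```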

Keywords: dashboard, decision support, emergency medical services, key performance indicators

Procedia PDF Downloads 112
118 Precursor Synthesis of Carbon Materials with Different Aggregates Morphologies

Authors: Nikolai A. Khlebnikov, Vladimir N. Krasilnikov, Evgenii V. Polyakov, Anastasia A. Maltceva

Abstract:

Carbon materials with advanced surfaces are widely used both in modern industry and in environmental protection. The physical-chemical nature of these materials is determined by the morphology of the primary atomic and molecular carbon structures, which are the basis for synthesizing materials with unique physical-chemical and functional properties: zero-dimensional (fullerenes), one-dimensional (fibers, tubes), two-dimensional (graphene) and three-dimensional (multi-layer graphene, graphite, foams) carbon nanostructures. Experience shows that the microscopic morphological level is the basis for the creation of the next, mesoscopic morphological level. The peculiarity of the latter level is the dependence of the morphology on the chemical route and process history (crystallization, colloid formation, liquid crystal state and others). These factors determine the consumer properties of carbon materials, such as specific surface area, porosity, chemical resistance in corrosive environments, and catalytic and adsorption activities. Based on the developed ideology of precursor synthesis, the authors discuss one approach to controlling the porosity of carbon-containing materials with a given aggregate morphology. The low-temperature thermolysis of precursors in a gas environment of a given composition is the basis of this idea. The carbothermic precursor synthesis of two different compounds, tungsten carbide WC:nC and zinc oxide ZnO:nC, each containing an impurity phase in the form of free carbon, was selected as the subject of the research. In the first case, the object of the synthesis was a carbide-forming transition metal (tungsten); in the second case, zinc, which does not form carbides, was selected. The synthesis of both kinds of transition metal compounds was conducted by the method of carbothermic precursor synthesis from organic solutions. ZnO:nC composites were obtained by thermolysis of the succinate Zn(OO(CH2)2OO), the formate glycolate Zn(HCOO)(OCH2CH2O)1/2, the glycerolate Zn(OCH2CHOCH2OH), and the tartrate Zn(OOCCH(OH)CH(OH)COO). The WC:nC composite was synthesized from ammonium paratungstate and glycerol. In all cases, carbon structures specific to diamond-like carbon forms appeared on the surface of the WC and ZnO particles after the heat treatment. Tungsten carbide and zinc oxide were removed from the composites by selective chemical dissolution, preserving the amorphous carbon phase. This work presents the results of investigating the WC:nC and ZnO:nC composites and the carbon nanopowders with tubular, tape, plate and onion aggregate morphologies, separated by chemical dissolution of WC and ZnO from the composites, using the following methods: SEM, TEM, XPA, Raman spectroscopy, and BET. The connection between the carbon morphology, the synthesis conditions and the chemical nature of the precursor, as well as the possibility of controlling the morphology of carbon-structured materials with specific surface areas of up to 1700-2000 m2/g, are discussed.

Keywords: carbon morphology, composite materials, precursor synthesis, tungsten carbide, zinc oxide

Procedia PDF Downloads 335
117 Lifespan Assessment of the Fish Crossing System of Itaipu Power Plant (Brazil/Paraguay) Based on the Reaching of Its Sedimentological Equilibrium Computed by 3D Modeling and Churchill Trapping Efficiency

Authors: Anderson Braga Mendes, Wallington Felipe de Almeida, Cicero Medeiros da Silva

Abstract:

This study aimed to assess the lifespan of the fish transposition system of the Itaipu Power Plant (Brazil/Paraguay) by using 3D hydrodynamic modeling and the Churchill trapping efficiency in order to identify the sedimentological equilibrium configuration of the main pond of the Piracema Channel, which is part of a 10 km hydraulic circuit that enables fish migration from downstream to upstream of the Itaipu Dam (and vice versa), overcoming a 120 m water drop. For that, bottom data from 2002 (its opening year) and 2015 were collected and analyzed, along with bed material at 12 stations for the purpose of identifying their granulometric profiles. The Shields and Yalin-Karahan diagrams for the initiation of motion of bed material were used to determine the critical bed shear stress for the sedimentological equilibrium state, based on the type of sediment (grain size) expected at the bottom once the balance is reached. Such granulometry was inferred by analyzing the coarser material (fine and medium sands) that flows into the pond and deposits in its backwater zone, adopting a range of diameters within the upper and lower limits of that sand stratification. The Delft3D software was used to compute the bed shear stress at every station under analysis. By modifying the input bathymetry of the main pond of the Piracema Channel so that the computed bed shear stresses at all stations simultaneously fell within the intervals of acceptable critical stresses, it was possible to foresee the bed configuration of the main pond when sedimentological equilibrium is reached. Under such conditions, 97% of the whole pond capacity will be silted, and a shallow water course with depths ranging from 0.2 m to 1.5 m will be formed; in 2002, depths ranged from 2 m to 10 m. Outside that water path, the new bottom will be practically flat and covered by a layer of water 0.05 m thick. Thus, in the future, the main pond of the Piracema Channel will no longer fulfill its purpose of providing a resting place for migrating fish species, and it may become an insurmountable barrier for medium and large-sized specimens. Everything considered, it was estimated that its lifespan, from the year of its opening to the moment of the sedimentological equilibrium configuration, will be approximately 95 years, almost half of the computed lifespan of the Itaipu Power Plant itself. However, it is worth mentioning that drawbacks concerning the silting of the main pond will start to be noticed much earlier than that, owing to the reasons previously mentioned.
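As a sketch of the critical-stress step described above, assuming quartz sand in fresh water, the Soulsby-Whitehouse closed-form fit to the Shields curve can stand in for reading the Shields and Yalin-Karahan diagrams; the grain sizes below are illustrative values for fine to medium sand, not the study's measured granulometry.

```python
# Sketch: critical bed shear stress for a given sand grain size via the
# Soulsby-Whitehouse approximation of the Shields curve (illustrative inputs only).
import numpy as np

RHO_W, RHO_S = 1000.0, 2650.0   # water and quartz-sand density (kg/m3)
G, NU = 9.81, 1.0e-6            # gravity (m/s2), kinematic viscosity of water (m2/s)

def critical_shear_stress(d50):
    """d50: median grain diameter in metres. Returns tau_cr in Pa."""
    s = RHO_S / RHO_W
    d_star = (G * (s - 1.0) / NU**2) ** (1.0 / 3.0) * d50        # dimensionless grain size
    theta_cr = 0.30 / (1.0 + 1.2 * d_star) + 0.055 * (1.0 - np.exp(-0.020 * d_star))
    return theta_cr * (RHO_S - RHO_W) * G * d50                  # Shields criterion

for d50_mm in (0.125, 0.25, 0.5):   # fine to medium sand
    tau = critical_shear_stress(d50_mm / 1000.0)
    print(f"d50 = {d50_mm} mm -> tau_cr ~ {tau:.3f} Pa")
```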

Keywords: 3D hydrodynamic modeling, Churchill trapping efficiency, fish crossing system, Itaipu power plant, lifespan, sedimentological equilibrium

Procedia PDF Downloads 233