Search results for: reliable cash flow
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6645

495 Evaluating and Supporting Student Engagement in Online Learning

Authors: Maria Hopkins

Abstract:

Research on student engagement is founded on a desire to improve the quality of online instruction in both course design and delivery. A high level of student engagement is associated with a wide range of educational practices including purposeful student-faculty contact, peer to peer contact, active and collaborative learning, and positive factors such as student satisfaction, persistence, achievement, and learning. By encouraging student engagement, institutions of higher education can have a positive impact on student success that leads to retention and degree completion. The current research presents the results of an online student engagement survey which support faculty teaching practices to maximize the learning experience for online students. The ‘Indicators of Engaged Learning Online’ provide a framework that measures level of student engagement. Social constructivism and collaborative learning form the theoretical basis of the framework. Social constructivist pedagogy acknowledges the social nature of knowledge and its creation in the minds of individual learners. Some important themes that flow from social constructivism involve the importance of collaboration among instructors and students, active learning vs passive consumption of information, a learning environment that is learner and learning centered, which promotes multiple perspectives, and the use of social tools in the online environment to construct knowledge. The results of the survey indicated themes that emphasized the importance of: Interaction among peers and faculty (collaboration); Timely feedback on assignment/assessments; Faculty participation and visibility; Relevance and real-world application (in terms of assignments, activities, and assessments); and Motivation/interest (the need for faculty to motivate students especially those that may not have an interest in the coursework per se). The qualitative aspect of this student engagement study revealed what instructors did well that made students feel engaged in the course, but also what instructors did not do well, which could inform recommendations to faculty when expectations for teaching a course are reviewed. Furthermore, this research provides evidence for the connection between higher student engagement and persistence and retention in online programs, which supports our rationale for encouraging student engagement, especially in the online environment because attrition rates are higher than in the face-to-face environment.

Keywords: instructional design, learning effectiveness, online learning, student engagement

Procedia PDF Downloads 285
494 Limbic Involvement in Visual Processing

Authors: Deborah Zelinsky

Abstract:

The retina filters millions of incoming signals into a smaller number of exiting optic nerve fibers that travel to different portions of the brain. Most of the signals are for eyesight (called "image-forming" signals). However, there are other, faster signals that travel "elsewhere" and are not directly involved with eyesight (called "non-image-forming" signals). This article centers on the neurons of the optic nerve connecting to parts of the limbic system. Eye care providers are currently looking at parvocellular and magnocellular processing pathways without realizing that those are part of an enormous "galaxy" of all the body systems. Lenses modify both non-image-forming and image-forming pathways, taking A.M. Skeffington's seminal work one step further. Almost 100 years ago, he described the "Where am I" (orientation), "Where is it" (localization), and "What is it" (identification) pathways. Now, among others, there is a "How am I" (animation) and a "Who am I" (inclination, motivation, imagination) pathway. Classic eye testing considers pupils and often assesses posture and motion awareness, but classical prescriptions often overlook limbic involvement in visual processing. The limbic system is composed of the hippocampus, amygdala, hypothalamus, and anterior nuclei of the thalamus. The optic nerve's limbic connections arise from the intrinsically photosensitive retinal ganglion cells (ipRGCs) through the retinohypothalamic tract (RHT). There are two main hypothalamic nuclei with direct photic inputs: the suprachiasmatic nucleus and the paraventricular nucleus. Other hypothalamic nuclei connected with retinal function, including mood regulation, appetite, and glucose regulation, are the supraoptic nucleus and the arcuate nucleus. The retinohypothalamic tract is often overlooked when we prescribe eyeglasses. Each person is different, but the lenses we choose influence this fast processing, which affects each patient's aiming and focusing abilities. These signals arise from the ipRGCs, which were discovered only about 20 years ago; current prescriptions also do not account for the campana retinal interneurons, discovered only two years ago. As eyecare providers, we are unknowingly altering such factors as lymph flow, glucose metabolism, appetite, and sleep cycles in our patients. It is important to know what we are prescribing as visual processing evaluations expand beyond 20/20 central eyesight.

Keywords: neuromodulation, retinal processing, retinohypothalamic tract, limbic system, visual processing

Procedia PDF Downloads 74
493 Computer-Integrated Surgery of the Human Brain, New Possibilities

Authors: Ugo Galvanetto, Pirto G. Pavan, Mirco Zaccariotto

Abstract:

The discipline of computer-integrated surgery (CIS) will provide equipment able to improve the efficiency of healthcare systems and, more importantly, clinical results. Surgeons and machines will cooperate in new ways that will extend surgeons’ ability to train, plan and carry out surgery. Patient-specific CIS of the brain requires several steps: 1 – Fast generation of brain models. Image recognition techniques applied to MR images and equipped with artificial intelligence should differentiate among all brain tissues and segment them. After that, automatic mesh generation should create the mathematical model of the brain in which the various tissues (white matter, grey matter, cerebrospinal fluid …) are clearly located in the correct positions. 2 – Reliable and fast simulation of the surgical process. Computational mechanics will be the crucial aspect of the entire procedure. New algorithms will be used to simulate the mechanical behaviour of cutting through cerebral tissues. 3 – Real-time provision of visual and haptic feedback. A sophisticated human-machine interface based on ergonomics and psychology will provide the feedback to the surgeon. The present work addresses in particular point 2. Modelling the cutting of soft tissue in a structure as complex as the human brain is an extremely challenging problem in computational mechanics. The finite element method (FEM), which accurately represents complex geometries and accounts for material and geometrical nonlinearities, is the most used computational tool to simulate the mechanical response of soft tissues. However, the main drawback of FEM lies in the mechanics theory on which it is based, classical continuum mechanics, which assumes matter is a continuum with no discontinuity. FEM must resort to complex tools such as pre-defined cohesive zones, external phase-field variables, and demanding remeshing techniques to include discontinuities. However, all approaches that equip FEM with the capability to describe material separation, such as interface elements with cohesive zone models, X-FEM, element erosion and phase-field, have drawbacks that make them unsuitable for surgery simulation. Interface elements require a priori knowledge of crack paths. The use of XFEM in 3D is cumbersome. Element erosion does not conserve mass. The phase-field approach adopts a diffusive crack model instead of describing the true tissue separation typical of surgical procedures. Modelling discontinuities, so difficult for computational approaches based on classical continuum mechanics, is instead easy for novel computational methods based on Peridynamics (PD). PD is a non-local theory of mechanics formulated with no use of spatial derivatives. Its governing equations are valid at points or surfaces of discontinuity, and it is therefore especially suited to describe crack propagation and fragmentation problems. Moreover, PD does not require any criterion to decide the direction of crack propagation or the conditions for crack branching or coalescence; in PD-based computational methods, cracks develop spontaneously in the way that is most convenient from an energy point of view. Therefore, in PD computational methods, crack propagation in 3D is as easy as it is in 2D, a remarkable advantage with respect to all other computational techniques.
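
To make the contrast with FEM concrete, the sketch below implements a minimal 1D bond-based peridynamic bar in which two halves move apart and the bonds crossing the interface fail once their stretch exceeds a critical value; the material constants, loading and failure threshold are illustrative placeholders, not values from this work.

```python
import numpy as np

# Minimal 1D bond-based peridynamics sketch (illustrative parameters only).
# Each pair of nodes closer than the horizon is linked by a bond; a bond that
# stretches past a critical value stops carrying force, so material separation
# emerges without remeshing and without a predefined crack path.

N, dx = 101, 1e-3                         # nodes and spacing [m]
horizon = 3.015 * dx                      # peridynamic horizon
E, rho = 1.0e4, 1.0e3                     # soft-tissue-like stiffness [Pa], density [kg/m^3]
c = 2.0 * E / horizon**2                  # 1D bond micro-modulus (one common convention)
s_crit = 1.0e-3                           # critical bond stretch (placeholder)
dt, steps = 5.0e-5, 400                   # explicit time integration

x = np.arange(N) * dx                     # reference positions
u = np.zeros(N)                           # displacements
v = np.where(x < x[N // 2], -0.01, 0.01)  # the two halves move apart at 1 cm/s each

bonds = [(i, j) for i in range(N) for j in range(i + 1, N)
         if 0.0 < x[j] - x[i] <= horizon]
alive = np.ones(len(bonds), dtype=bool)

for _ in range(steps):
    f = np.zeros(N)
    for b, (i, j) in enumerate(bonds):
        if not alive[b]:
            continue
        stretch = (u[j] - u[i]) / (x[j] - x[i])   # small-strain bond stretch
        if stretch > s_crit:                      # irreversible bond failure
            alive[b] = False
            continue
        fb = c * stretch * dx                     # pairwise force contribution
        f[i] += fb
        f[j] -= fb
    v += dt * f / rho
    u += dt * v

broken = [(i, j) for b, (i, j) in enumerate(bonds) if not alive[b]]
print(f"{len(broken)} bonds failed; separation localises around node {N // 2}")
```

Because failed bonds are simply removed from the force sum, the "crack" forms wherever the deformation dictates, which is the property the abstract highlights.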

Keywords: computational mechanics, peridynamics, finite element, biomechanics

Procedia PDF Downloads 64
492 Development of a Turbulent Boundary Layer Wall-pressure Fluctuations Power Spectrum Model Using a Stepwise Regression Algorithm

Authors: Zachary Huffman, Joana Rocha

Abstract:

Wall-pressure fluctuations induced by the turbulent boundary layer (TBL) developed over aircraft are a significant source of aircraft cabin noise. Since the power spectral density (PSD) of these pressure fluctuations is directly correlated with the amount of sound radiated into the cabin, the development of accurate empirical models that predict the PSD has been an important ongoing research topic. The radiated sound can be related to the pressure fluctuation term in the Reynolds-averaged Navier-Stokes (RANS) equations. Therefore, early TBL empirical models (including those from Lowson, Robertson, Chase, and Howe) were primarily derived by simplifying and solving the RANS equations for the pressure fluctuations and adding appropriate scaling. Most subsequent models (including the Goody, Efimtsov, Laganelli, Smol’yakov, and Rackl and Weston models) were derived by modifying these early models or from physical principles. Overall, these models have had varying levels of accuracy; in general, they are most accurate for the specific Reynolds and Mach numbers they were developed for, and less accurate under other flow conditions. Despite this, recent research into the possibility of using alternative methods for deriving the models has been rather limited. More recent studies have demonstrated that an artificial neural network model was more accurate than traditional models and could be applied more generally, but the accuracy of other machine learning techniques has not been explored. In the current study, an original model is derived using a stepwise regression algorithm in the statistical programming language R, and TBL wall-pressure fluctuation PSD data gathered at the Carleton University wind tunnel. The theoretical advantage of a stepwise regression approach is that it automatically filters out redundant or uncorrelated input variables (through the process of feature selection), and it is computationally faster than machine learning. The main disadvantage is the potential risk of overfitting. The accuracy of the developed model is assessed by comparing it to independently sourced datasets.
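
As a schematic of the feature-selection step (a Python sketch rather than the authors' R implementation, using synthetic stand-in data; the predictor roles named in the comments are hypothetical):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(0)

# Hypothetical TBL descriptors (placeholders for the real wind-tunnel data):
# the columns might represent frequency, edge velocity, boundary-layer
# thickness, wall shear stress, Mach number, and a redundant extra variable.
X = rng.normal(size=(500, 6))
true_coef = np.array([1.5, -0.8, 0.0, 2.0, 0.0, 0.0])
y = X @ true_coef + 0.1 * rng.normal(size=500)   # stand-in for log10(PSD)

# Forward stepwise selection: greedily add the predictor that most improves
# cross-validated fit, discarding redundant or uncorrelated inputs.
selector = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=3, direction="forward", cv=5)
selector.fit(X, y)
print("selected feature indices:", np.flatnonzero(selector.get_support()))

# Fit the final PSD regression model on the retained features only.
model = LinearRegression().fit(X[:, selector.get_support()], y)
print("coefficients:", np.round(model.coef_, 3))
```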

Keywords: aircraft noise, machine learning, power spectral density models, regression models, turbulent boundary layer wall-pressure fluctuations

Procedia PDF Downloads 130
491 Estimation of Small Hydropower Potential Using Remote Sensing and GIS Techniques in Pakistan

Authors: Malik Abid Hussain Khokhar, Muhammad Naveed Tahir, Muhammad Amin

Abstract:

Energy demand has increased manifold due to increasing population, urban sprawl and rapid socio-economic improvements. Low water capacity in dams for sustained hydropower generation, land cover and land use are the key parameters creating problems for increased energy production. The overall installed hydropower capacity of Pakistan is more than 35,000 MW, whereas Pakistan is producing up to 17,000 MW against a requirement of more than 22,000 MW, resulting in a shortfall of 5,000 to 7,000 MW. Therefore, there is a dire need to develop small hydropower to fulfill upcoming requirements. In this regard, excessive rainfall and snow-fed, fast-flowing perennial tributaries and streams in the northern mountain regions of Pakistan offer gigantic hydropower potential throughout the year. Rivers flowing in KP (Khyber Pakhtunkhwa) province, GB (Gilgit Baltistan) and AJK (Azad Jammu & Kashmir) possess sufficient water availability for rapid energy growth. Against this backdrop, small hydropower plants are believed to be very suitable measures for a greener environment and a sustainable power option for the development of such regions. The aim of this study is to identify potential sites for small hydropower plants and to map stream distribution according to the stream network available in the basins of the study area. The proposed methodology focuses on site selection for maximum hydropower potential for hydroelectric generation, using the emerging GIS-based hydrological runoff model SWAT on the Neelum, Kunhar and Dor River basins. For validation of the results, the NDWI will be computed to show water concentration in the study area, overlaid on a geospatially enhanced DEM. The study will present analysis of basins, watersheds, stream links and flow directions, together with slope and elevation, to assess hydropower potential and help meet the increasing demand for electricity by installing small hydropower stations. The study may later also benefit adjacent regions in estimating and selecting sites for the installation of such small power plants.
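
For the NDWI validation step, a minimal sketch is shown below; it uses McFeeters' formulation NDWI = (Green - NIR) / (Green + NIR), with tiny placeholder arrays standing in for real satellite bands.

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI = (Green - NIR) / (Green + NIR); values > 0 suggest open water."""
    green = green.astype(float)
    nir = nir.astype(float)
    return (green - nir) / (green + nir + 1e-9)   # small epsilon avoids division by zero

# Hypothetical 4x4 reflectance tiles standing in for real Green and NIR bands.
green_band = np.array([[0.10, 0.12, 0.30, 0.31],
                       [0.11, 0.13, 0.29, 0.30],
                       [0.09, 0.10, 0.28, 0.27],
                       [0.08, 0.09, 0.26, 0.25]])
nir_band   = np.array([[0.30, 0.32, 0.05, 0.06],
                       [0.31, 0.33, 0.04, 0.05],
                       [0.29, 0.30, 0.06, 0.07],
                       [0.28, 0.27, 0.05, 0.06]])

water_mask = ndwi(green_band, nir_band) > 0
print("water pixels:", int(water_mask.sum()), "of", water_mask.size)
```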

Keywords: energy, stream network, basins, SWAT, evapotranspiration

Procedia PDF Downloads 212
490 Design of a Plant to Produce 100,000 MTPY of Green Hydrogen from Brine

Authors: Abdulrazak Jinadu Otaru, Ahmed Almulhim, Hassan Alhassan, Mohammed Sabri

Abstract:

Saudi Arabia is host to a state-owned oil and gas corporation, known as Saudi ARAMCO, that is responsible for the highest emissions of carbon dioxide (CO₂) due to the heavy reliance on fossil fuels as an energy source for various sectors such as transportation, aerospace, manufacturing, and residential use. Unfortunately, the detrimental consequences of CO₂ emissions include escalating temperatures in the Middle East region, posing significant obstacles in terms of food security and water scarcity for the Kingdom of Saudi Arabia. As part of the Saudi Vision 2030 initiative, which aims to reduce the country's reliance on fossil fuels by 50 %, this study focuses on designing a plant that will produce approximately 100,000 metric tons per year (MTPY) of green hydrogen (H₂) using brine as the primary feedstock. The proposed facility incorporates a double electrolytic technology that first separates brine or sodium chloride (NaCl) into sodium hydroxide, hydrogen gas, and chlorine gas. The sodium hydroxide is then used as an electrolyte in the splitting of water molecules through the supply of electrical energy in a second-stage electrolyser to produce green hydrogen. The study encompasses a comprehensive analysis of process descriptions and flow diagrams, as well as materials and energy balances. It also includes equipment design and specification, cost analysis, and considerations for safety and environmental impact. The design capitalizes on the abundant brine supply, a byproduct of the world's largest desalination plant located in Al Jubail, Saudi Arabia. Additionally, the design incorporates the use of available renewable energy sources, such as solar and wind power, to power the proposed plant. This approach not only helps reduce carbon emissions but also aligns with Saudi Arabia's energy transition policy. Furthermore, it supports the United Nations Sustainable Development Goals on Sustainable Cities and Communities (Goal 11) and Climate Action (Goal 13), benefiting not only Saudi Arabia but also other countries in the Middle East.
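
For a rough sense of scale only (assumed figures, not values from the plant design), the back-of-envelope sketch below estimates the electricity demand of producing 100,000 MTPY of hydrogen, assuming a specific consumption of about 50 kWh per kg of H₂, a typical order of magnitude for water electrolysis.

```python
# Back-of-envelope electrolysis power estimate (assumed figures, not design values).
H2_PER_YEAR_KG = 100_000 * 1_000          # 100,000 metric tons per year in kg
SPECIFIC_ENERGY_KWH_PER_KG = 50.0         # assumed practical electrolyser consumption
HOURS_PER_YEAR = 8_760

annual_energy_kwh = H2_PER_YEAR_KG * SPECIFIC_ENERGY_KWH_PER_KG
average_power_mw = annual_energy_kwh / HOURS_PER_YEAR / 1_000

print(f"Annual electricity demand: {annual_energy_kwh / 1e9:.1f} TWh")
print(f"Average continuous power:  {average_power_mw:.0f} MW")
```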

Keywords: plant design, electrolysis, brine, sodium hydroxide, chlorine gas, green hydrogen

Procedia PDF Downloads 37
489 Quantification of Dispersion Effects in Arterial Spin Labelling Perfusion MRI

Authors: Rutej R. Mehta, Michael A. Chappell

Abstract:

Introduction: Arterial spin labelling (ASL) is an increasingly popular perfusion MRI technique, in which arterial blood water is magnetically labelled in the neck before flowing into the brain, providing a non-invasive measure of cerebral blood flow (CBF). The accuracy of ASL CBF measurements, however, is hampered by dispersion effects; the distortion of the ASL labelled bolus during its transit through the vasculature. In spite of this, the current recommended implementation of ASL – the white paper (Alsop et al., MRM, 73.1 (2015): 102-116) – does not account for dispersion, which leads to the introduction of errors in CBF. Given that the transport time from the labelling region to the tissue – the arterial transit time (ATT) – depends on the region of the brain and the condition of the patient, it is likely that these errors will also vary with the ATT. In this study, various dispersion models are assessed in comparison with the white paper (WP) formula for CBF quantification, enabling the errors introduced by the WP to be quantified. Additionally, this study examines the relationship between the errors associated with the WP and the ATT – and how this is influenced by dispersion. Methods: Data were simulated using the standard model for pseudo-continuous ASL, along with various dispersion models, and then quantified using the formula in the WP. The ATT was varied from 0.5s-1.3s, and the errors associated with noise artefacts were computed in order to define the concept of significant error. The instantaneous slope of the error was also computed as an indicator of the sensitivity of the error with fluctuations in ATT. Finally, a regression analysis was performed to obtain the mean error against ATT. Results: An error of 20.9% was found to be comparable to that introduced by typical measurement noise. The WP formula was shown to introduce errors exceeding 20.9% for ATTs beyond 1.25s even when dispersion effects were ignored. Using a Gaussian dispersion model, a mean error of 16% was introduced by using the WP, and a dispersion threshold of σ=0.6 was determined, beyond which the error was found to increase considerably with ATT. The mean error ranged from 44.5% to 73.5% when other physiologically plausible dispersion models were implemented, and the instantaneous slope varied from 35 to 75 as dispersion levels were varied. Conclusion: It has been shown that the WP quantification formula holds only within an ATT window of 0.5 to 1.25s, and that this window gets narrower as dispersion occurs. Provided that the dispersion levels fall below the threshold evaluated in this study, however, the WP can measure CBF with reasonable accuracy if dispersion is correctly modelled by the Gaussian model. However, substantial errors were observed with other common models for dispersion with dispersion levels similar to those that have been observed in literature.
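
For reference, a minimal sketch of a single-compartment pCASL quantification of the kind recommended in the white paper is given below; the signal values and parameters are illustrative only and, as discussed above, no dispersion is modelled.

```python
import numpy as np

def wp_cbf(delta_m, m0, pld, tau=1.8, t1_blood=1.65, alpha=0.85, lam=0.9):
    """Single-compartment pCASL CBF estimate (ml/100g/min) in the style of the
    consensus 'white paper' formula; dispersion of the labelled bolus is ignored."""
    num = 6000.0 * lam * delta_m * np.exp(pld / t1_blood)
    den = 2.0 * alpha * t1_blood * m0 * (1.0 - np.exp(-tau / t1_blood))
    return num / den

# Illustrative numbers only: perfusion-weighted difference signal, M0, and a
# range of post-labelling delays spanning the ATT window discussed above.
delta_m, m0 = 0.012, 1.0
for pld in (0.5, 1.0, 1.25, 1.5):
    print(f"PLD = {pld:.2f} s -> CBF = {wp_cbf(delta_m, m0, pld):.1f} ml/100g/min")
```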

Keywords: arterial spin labelling, dispersion, MRI, perfusion

Procedia PDF Downloads 363
488 A Tool to Provide Advanced Secure Exchange of Electronic Documents through Europe

Authors: Jesus Carretero, Mario Vasile, Javier Garcia-Blas, Felix Garcia-Carballeira

Abstract:

Supporting secure and reliable cross-border exchange of data and documents and promoting data interoperability are critical for Europe to enhance sectors such as eFinance, eJustice and eHealth. This work presents the status and results of the European project MADE, a research project funded by the Connecting Europe Facility Programme, to provide secure e-invoicing and e-document exchange systems among European countries in compliance with the eIDAS Regulation (Regulation EU 910/2014 on electronic identification and trust services). The main goal of MADE is to develop six new AS4 Access Points and SMPs in Europe to provide secure document exchanges using the eDelivery DSI (Digital Service Infrastructure) amongst both private and public entities. Moreover, the project demonstrates the feasibility and value of the solution by providing several months of interoperability among the providers of the six partners in different EU countries. To achieve those goals, we followed a methodology that first set a common background of requirements in the partner countries and the European regulations. Then, the partners implemented access points in each country, including their service metadata publishers (SMP), to give their clients access to the pan-European network. Finally, we set up interoperability tests with the other access points of the consortium. The tests include the use of each entity's production-ready information systems, which process the data to confirm all steps of the data exchange. For the access points, we have chosen AS4 instead of other existing alternatives because it supports multiple payloads, native web services, pulling facilities, lightweight client implementations, modern crypto algorithms, and more authentication types, such as username-password, X.509 and SAML authentication. The main contribution of the MADE project is to open the path for European companies to use eDelivery services with cross-border exchange of electronic documents following PEPPOL (Pan-European Public Procurement Online), based on the e-SENS AS4 profile. It also includes the development and integration of new components, integration of new and existing logging and traceability solutions, and maintenance tool support for PKI. Moreover, we have found that most companies are still not ready to support those profiles; thus, further efforts will be needed to promote this technology among companies. The consortium includes nine partners. Of these, two are research institutions: University Carlos III of Madrid (coordinator) and Universidad Politecnica de Valencia. The other seven (EDICOM, BIZbrains, Officient, Aksesspunkt Norge, eConnect, LMT group, Unimaze) are private entities specialized in secure delivery of electronic documents and information integration brokerage in their respective countries. To achieve cross-border operability, they will include AS4 and SMP services in their platforms according to the EU Core Service Platform. The MADE project is instrumental in testing the feasibility of cross-border document eDelivery in Europe. If successful, not only e-invoices but many other types of documents will be securely exchanged throughout Europe. It will be the basis for extending the network to the whole of Europe. This project has been funded under the Connecting Europe Facility Agreement number: INEA/CEF/ICT/A2016/1278042. Action No: 2016-EU-IA-0063.

Keywords: security, e-delivery, e-invoicing, e-document exchange, trust

Procedia PDF Downloads 257
487 Counting Fishes in Aquaculture Ponds: Application of Imaging Sonars

Authors: Juan C. Gutierrez-Estrada, Inmaculada Pulido-Calvo, Ignacio De La Rosa, Antonio Peregrin, Fernando Gomez-Bravo, Samuel Lopez-Dominguez, Alejandro Garrocho-Cruz, Jairo Castro-Gutierrez

Abstract:

The semi-intensive aquaculture in traditional earth ponds is the main rearing system in Southern Spain. These fish-rearing systems account for approximately two thirds of aquatic production in this area, which has made a significant contribution to the regional economy in recent years. In this type of rearing system, a crucial aspect is the correct quantification and control of fish abundance in the ponds, because the fish farmer knows how many fish are put into the ponds but not how many will be harvested at the end of the rearing period. This is a consequence of mortality induced by different causes, such as pathogenic agents (parasites, viruses and bacteria) and other factors such as predation by fish-eating birds and poaching. Tracking fish abundance in these installations is very difficult because the ponds usually take up a large area of land and the management of the water flow is not automated. Therefore, there is a very high degree of uncertainty about fish abundance, which strongly hinders the management and planning of sales. A novel and non-invasive procedure to count fish in the ponds is by means of imaging sonars, particularly fixed systems and/or systems linked to aquatic vehicles such as Remotely Operated Vehicles (ROVs). In this work, a method based on census-station procedures is proposed to evaluate the accuracy of fish abundance estimation using images obtained from multibeam sonars. The results indicate that it is possible to obtain a realistic estimate of the number of fish, their sizes and therefore the biomass contained in the ponds. This research is included in the framework of the KTTSeaDrones Project (‘Conocimiento y transferencia de tecnología sobre vehículos aéreos y acuáticos para el desarrollo transfronterizo de ciencias marinas y pesqueras 0622-KTTSEADRONES-5-E’) financed by the European Regional Development Fund (ERDF) through the Interreg V-A Spain-Portugal Programme (POCTEP) 2014-2020.
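
As a simplified illustration of counting discrete acoustic targets in a sonar frame (not the census-station method itself), the sketch below thresholds a synthetic intensity image and counts connected blobs; the frame and detection parameters are invented.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)

# Synthetic stand-in for a multibeam sonar frame: low-level background noise
# plus a few bright elliptical echoes representing individual fish.
frame = rng.normal(0.1, 0.05, size=(200, 200))
yy, xx = np.mgrid[0:200, 0:200]
for cy, cx in [(40, 60), (120, 30), (150, 160), (80, 140)]:
    frame += 0.8 * np.exp(-(((yy - cy) / 4.0) ** 2 + ((xx - cx) / 6.0) ** 2))

# Threshold, clean up with a binary opening, then count connected components.
mask = ndimage.binary_opening(frame > 0.5, structure=np.ones((3, 3)))
labels, n_targets = ndimage.label(mask)
areas = np.bincount(labels.ravel())[1:]          # pixel area of each detected echo

print("targets detected:", n_targets)
print("echo areas [px]:", areas.tolist())
```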

Keywords: census station procedure, fish biomass, semi-intensive aquaculture, multibeam sonars

Procedia PDF Downloads 211
486 Electrical Tortuosity across Electrokinetically Remediated Soils

Authors: Waddah S. Abdullah, Khaled F. Al-Omari

Abstract:

Electrokinetic remediation is one of the most effective methods to decontaminate polluted soils. Electroosmosis and electromigration are the processes of electrochemical extraction of contaminants from soils. The driving force that removes contaminants from soils (by electroosmosis or electromigration) is the voltage gradient. Therefore, it is extremely important to investigate the electric field distribution throughout the soil domain and to determine the factors that help establish a uniform electric field distribution, so that the clean-up process works properly and efficiently. In this study, small passive electrodes (made of graphite) were placed at predetermined locations within the soil specimen, and the voltage drop between these passive electrodes was measured in order to observe the electrical distribution throughout the tested soil specimens. The electrokinetic test was conducted on two types of soil: a sandy soil and a clayey soil. The electrical distribution throughout the soil domain was examined under different test conditions, and the electric field distribution was observed in a three-dimensional pattern in order to establish the electrical distribution within the soil domain. The effects of density, applied voltage and degree of saturation on the electrical distribution within the remediated soil were investigated. The distributions of moisture content, sodium ion concentration and calcium ion concentration were determined and established in a three-dimensional scheme. The study has shown that the electrical conductivity within the soil domain depends on the moisture content and the concentration of electrolytes present in the pore fluid. The distribution of the electric field in the saturated soil was found not to be affected by its density. The study has also shown that a high voltage gradient leads to a non-uniform electric field distribution within the electroremediated soil. Very importantly, it was found that even when the electric field distribution is uniform globally (i.e. between the passive electrodes), local non-uniformity could be established within the remediated soil mass. Cracks or air gaps formed due to temperature rise (because of electric flow in low-conductivity regions) promote electrical tortuosity. Thus, fracturing or cracking in the remediated soil mass disconnects the electric current and hence no contaminant removal occurs within these areas.
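
A minimal sketch of the kind of uniformity check described above is given below: from potentials measured at passive electrodes along one axis of the specimen (hypothetical values), it computes local voltage gradients and flags segments that deviate strongly from the global gradient.

```python
import numpy as np

# Hypothetical potentials [V] measured at passive electrodes spaced 2 cm apart
# along the current path between the working electrodes.
positions_m = np.array([0.00, 0.02, 0.04, 0.06, 0.08, 0.10])
potentials_v = np.array([12.0, 10.1, 8.3, 4.1, 3.6, 2.0])   # note the jump between 4 and 6 cm

# Local gradient in each segment versus the global (end-to-end) gradient.
local_grad = -np.diff(potentials_v) / np.diff(positions_m)                        # V/m
global_grad = -(potentials_v[-1] - potentials_v[0]) / (positions_m[-1] - positions_m[0])

print(f"global gradient: {global_grad:.1f} V/m")
for k, g in enumerate(local_grad):
    flag = "  <-- possible low-conductivity zone / crack" if g > 1.5 * global_grad else ""
    print(f"segment {k}: {g:6.1f} V/m{flag}")
```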

Keywords: contaminant removal, electrical tortuosity, electromigration, electroosmosis, voltage distribution

Procedia PDF Downloads 413
485 The Ductile Fracture of Armor Steel Targets Subjected to Ballistic Impact and Perforation: Calibration of Four Damage Criteria

Authors: Imen Asma Mbarek, Alexis Rusinek, Etienne Petit, Guy Sutter, Gautier List

Abstract:

Over the past two decades, the automotive, aerospace and army industries have been paying increasing attention to Finite Element (FE) numerical simulations of the fracture process of their structures. Thanks to numerical simulations, it is nowadays possible to analyze safely and at a reduced cost several problems involving costly and dangerous extreme loadings, such as blast or ballistic impact problems. The present paper is concerned with ballistic impact and perforation problems involving ductile fracture of thin armor steel targets. The target fracture process usually depends on various parameters: the projectile nose shape, the target thickness and its mechanical properties, as well as the impact conditions (friction, oblique/normal impact...). In this work, the investigations are concerned with the normal impact of a conical head-shaped projectile on thin armor steel targets. The main aim is to establish a comparative study of four fracture criteria that are commonly used in the fracture process simulations of structures subjected to extreme loadings such as ballistic impact and perforation. Usually, damage initiation results from a complex physical process that occurs at the micromechanical scale. On a macro scale and according to the following fracture models, the variables on which the fracture depends are mainly the stress triaxiality η, the strain rate, the temperature T, and possibly the Lode angle parameter θ. The four failure criteria are: the critical strain to failure model, the Johnson-Cook model, the Wierzbicki model and the Modified Hosford-Coulomb (MHC) model. SEM observations of the fracture surfaces of tension specimens and of armor steel targets impacted at low and high incident velocities show that the fracture of the specimens is ductile. The failure mode of the targets is petalling with crack propagation, and the fracture surfaces are covered with micro-cavities. The parameters of each ductile fracture model have been identified for three armor steels, and the applicability of each criterion was evaluated using experimental investigations coupled to numerical simulations. Two loading paths were investigated in this study, under a wide range of strain rates. Namely, quasi-static and intermediate uniaxial tension and quasi-static and dynamic double shear testing allow covering various values of the stress triaxiality η and of the Lode angle parameter θ. All experiments were conducted on three different armor steel specimens at quasi-static strain rates ranging from 10⁻⁴ to 10⁻¹ s⁻¹ and at three different temperatures ranging from 297 K to 500 K, allowing the influence of temperature on the fracture process to be assessed. Intermediate tension testing was coupled to dynamic double shear experiments conducted on the Hopkinson tube device, allowing the effect of high strain rate on damage evolution and crack propagation to be identified. The aforementioned fracture criteria were implemented into the FE code ABAQUS via a VUMAT subroutine and coupled to suitable constitutive relations to obtain reliable simulations of ballistic impact problems. The calibration of the four damage criteria as well as a concise evaluation of the applicability of each criterion are detailed in this work.
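
As an illustration of one of the four criteria, the sketch below evaluates the Johnson-Cook fracture strain as a function of stress triaxiality, strain rate and temperature; the D₁-D₅ constants, reference rate and melting temperature are placeholders, not the values calibrated for the armor steels in this study.

```python
import numpy as np

def johnson_cook_failure_strain(eta, strain_rate, temp_k,
                                d=(0.05, 3.44, -2.12, 0.002, 0.61),
                                ref_rate=1e-4, t_room=297.0, t_melt=1800.0):
    """Johnson-Cook equivalent fracture strain.
    eta: stress triaxiality, strain_rate in 1/s, temp_k in kelvin.
    The D1..D5 constants and the melting temperature are illustrative placeholders."""
    d1, d2, d3, d4, d5 = d
    t_star = (temp_k - t_room) / (t_melt - t_room)        # homologous temperature
    rate_term = 1.0 + d4 * np.log(strain_rate / ref_rate)
    return (d1 + d2 * np.exp(d3 * eta)) * rate_term * (1.0 + d5 * t_star)

# Damage is then typically accumulated as D = sum(d_eps / eps_f) over the strain
# history, with failure declared when D reaches 1.
for eta in (0.0, 1.0 / 3.0, 2.0 / 3.0):                   # shear-like to tension-like states
    eps_f = johnson_cook_failure_strain(eta, 1e-1, 297.0)
    print(f"eta = {eta:.2f}: eps_f = {eps_f:.3f}")
```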

Keywords: armor steels, ballistic impact, damage criteria, ductile fracture, SEM

Procedia PDF Downloads 304
484 Mechanical, Thermal and Biodegradable Properties of Bioplast-Spruce Green Wood Polymer Composites

Authors: A. Atli, K. Candelier, J. Alteyrac

Abstract:

Environmental and sustainability concerns push industries to manufacture alternative materials with less environmental impact. Wood Plastic Composites (WPCs) produced by blending biopolymers and natural fillers not only permit the desired material properties to be tailored but also offer a solution that meets environmental and sustainability requirements. This work presents the preparation and characterization of fully green WPCs obtained by blending a biopolymer, BIOPLAST® GS 2189, with different amounts of spruce sawdust used as filler. Since both components are bio-based, the resulting material is entirely environmentally friendly. The mechanical, thermal and structural properties of these WPCs were characterized by different analytical methods such as tensile, flexural and impact tests, Thermogravimetric Analysis (TGA), Differential Scanning Calorimetry (DSC) and X-ray Diffraction (XRD). Their water absorption properties and resistance to termite and fungal attack were determined in relation to the wood filler content. The tensile and flexural moduli of the WPCs increased with increasing amount of wood filler in the biopolymer, but the WPCs became more brittle compared to the neat polymer. Incorporation of spruce sawdust modified the thermal properties of the polymer: the degradation, cold crystallization and melting temperatures shifted to higher values when spruce sawdust was added. The termite, fungal and water absorption resistance of the WPCs decreased with increasing wood content, but the composites remained in durability class 1 (durable) for fungal resistance and were rated 1 (attempted attack) in the visual rating for termite resistance, except for the WPC with the highest wood content (30 wt%), which was rated 2 (slight attack), indicating long-term durability. All the results showed the possibility of producing easily injectable composite materials with adjustable properties by combining BIOPLAST® GS 2189 and spruce sawdust. Therefore, lightweight WPCs make it possible both to recycle wood industry byproducts and to produce a fully ecological material.

Keywords: biodegradability, color measurements, durability, mechanical properties, melt flow index, MFI, structural properties, thermal properties, wood-plastic composites, WPCs

Procedia PDF Downloads 129
483 An Evaluation of the Lae City Road Network Improvement Project

Authors: Murray Matarab Konzang

Abstract:

The Lae Port Development Project, the Four Lane Highway and other developments in the extraction industry that have a direct road link to Lae City are predicted to have a significant impact on its road network system. This paper evaluates the Lae road improvement program, with forecasts on planning and economics and on the installation of bypasses to ease congestion, provide effective and convenient transport of bulk goods, and reduce travel time. A land-use transportation study and plans for a local-area traffic management scheme will be considered. City roads face increased traffic volumes, inadequate pavement widths, and poor transport plans and facilities to meet this transportation demand. Lae also has a drainage system that might not withstand a 100-year flood. Proper evaluation, planning, design and intersection analysis are needed to assess the road network system, recommend improvements and estimate future growth. Repetitive and cyclic loading by heavy commercial vehicles with different axle configurations acts on the flexible pavement, weakening and tearing the pavement surface so that small cracks occur. Rainwater seeps through and, over time, creates potholes. Effective planning starts from experimental research and appropriate design standards to enable firm embankments, proper drains and quality pavement materials. This paper addresses traffic problems as well as road pavement, intersection capacities, and pedestrian flow during peak hours. The outcome of this research will be to identify heavily trafficked road sections and to recommend treatments to reduce traffic congestion, improve road classification, and propose bypass routes and improvements. The first part of this study describes transport and traffic-related problems within the city; the second identifies the challenges imposed by traffic and road-related problems; and the third recommends solutions after analyzing traffic data that indicate the current capacities of road intersections, followed by recommended treatments for improvement and future growth.

Keywords: Lae, road network, highway, vehicle traffic, planning

Procedia PDF Downloads 351
482 Development of a Feedback Control System for a Lab-Scale Biomass Combustion System Using Programmable Logic Controller

Authors: Samuel O. Alamu, Seong W. Lee, Blaise Kalmia, Marc J. Louise Caballes, Xuejun Qian

Abstract:

The application of combustion technologies for thermal conversion of biomass and solid wastes to energy has long been a major solution for the effective handling of wastes. Lab-scale biomass combustion systems have been observed to be economically viable and socially acceptable, but major concerns are the environmental impacts of the process and deviations of the temperature distribution within the combustion chamber. Both high and low combustion chamber temperatures may affect the overall combustion efficiency and gaseous emissions. Therefore, there is an urgent need to develop a control system which measures the deviations of chamber temperature from set target values, sends these deviations (which act as disturbances in the system) as a feedback signal (input), and controls the operating conditions to correct the errors. In this research study, the major components of the feedback control system were determined, assembled, and tested. In addition, control algorithms were developed to actuate operating conditions (e.g., air velocity, fuel feeding rate) using ladder logic functions embedded in the Programmable Logic Controller (PLC). The developed control algorithm, with chamber temperature as the feedback signal, is integrated into the lab-scale swirling fluidized bed combustor (SFBC) to investigate the temperature distribution at different heights of the combustion chamber under various operating conditions. The air blower rates and the fuel feeding rates obtained from automatic control operations were correlated with manual inputs. There was no observable difference in the correlated results, indicating that the written PLC program functions were adequate for designing the experimental study of the lab-scale SFBC. The experimental results were analyzed to study the effect of air velocity (222-273 ft/min) and fuel feeding rate (60-90 rpm) on the chamber temperature. The developed temperature-based feedback control system was shown to be adequate in controlling the airflow and the fuel feeding rate for the overall biomass combustion process, as it helps to minimize the steady-state error.
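
A toy version of the temperature-feedback idea is sketched below in Python rather than ladder logic; the setpoint, gains and plant response are invented, while the air-velocity and feed-rate ranges echo the experimental ranges quoted above.

```python
# Simplified stand-in for the PLC feedback loop: proportional corrections to
# air velocity and fuel feeding rate from the chamber-temperature error.
# Setpoint, gains and the plant response below are made-up illustrative values.

SETPOINT_C = 850.0
KP_AIR, KP_FEED = 0.4, 0.05                 # proportional gains (placeholders)
AIR_RANGE = (222.0, 273.0)                  # ft/min, from the experimental range
FEED_RANGE = (60.0, 90.0)                   # rpm, from the experimental range

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

air, feed, temp = 240.0, 75.0, 780.0
for step in range(10):
    error = SETPOINT_C - temp
    # More air and more fuel when the chamber runs cold, less when it runs hot.
    air = clamp(air + KP_AIR * error, *AIR_RANGE)
    feed = clamp(feed + KP_FEED * error, *FEED_RANGE)
    # Crude first-order stand-in for the combustor's thermal response.
    temp += 0.02 * (air - 240.0) + 0.5 * (feed - 75.0) + 0.1 * (SETPOINT_C - temp)
    print(f"step {step}: T = {temp:6.1f} C, air = {air:5.1f} ft/min, feed = {feed:4.1f} rpm")
```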

Keywords: air flow, biomass combustion, feedback control signal, fuel feeding, ladder logic, programmable logic controller, temperature

Procedia PDF Downloads 121
481 Total Arterial Coronary Revascularization with Aorto-Bifemoral Bipopliteal Bypass: A Case Report

Authors: Nuruddin Mohammod Zahangir, Syed Tanvir Ahmady, Firoz Ahmed, Mainul Kabir, Tamjid Mohammad Najmus Sakib Khan, Nazmul Hossain, Niaz Ahmed, Madhava Janardhan Naik

Abstract:

The management of combined Coronary Artery Disease and Peripheral Vascular Disease is a challenge and brings with it numerous clinical dilemmas. A 56-year-old gentleman presented to our department with significant triple vessel disease, an occluded lower end of the aorta just before the bifurcation, and occluded bilateral superficial femoral arteries. The operation was performed on 11.03.14. The Left Internal Mammary Artery (LIMA) and the Right Internal Mammary Artery (RIMA) were harvested in a skeletonized manner. The free RIMA was then anastomosed with the LIMA to make a LIMA-RIMA Y. Cardiopulmonary bypass was then established and the coronary artery bypass grafts performed. The LIMA was anastomosed to the Left Anterior Descending artery. The RIMA was anastomosed to the Posterior Descending Artery and the 1st and 2nd Obtuse Marginal arteries in a sequential manner. The abdomen was opened by a midline incision. The infrarenal aorta was exposed and found to be severely diseased. A vascular clamp was applied infrarenally, an aortotomy done and a limited endarterectomy performed. An end-to-side anastomosis was done between the upper end of a PTFE synthetic Y-graft (14/7 mm) and the infrarenal aorta, and the clamp was released. Good flow was noted in both limbs of the graft. The patient was then slowly weaned off cardiopulmonary bypass without difficulty. The distal two limbs of the Y-graft were passed to the groin through retroperitoneal tunnels and anastomosed end-to-side with the common femoral arteries. Saphenous vein was interposed between the common femoral and popliteal arteries bilaterally through subfascial tunnels in both thighs. On the 12th postoperative day, he was discharged from hospital in good general condition. At three-month follow-up, the patient was doing well and free of chest pain and claudication.

Keywords: total arterial, coronary revascularization, aorto-bifemoral bypass, bifemoro-bipopliteal bypass

Procedia PDF Downloads 460
480 Quantitative Texture Analysis of Shoulder Sonography for Rotator Cuff Lesion Classification

Authors: Chung-Ming Lo, Chung-Chien Lee

Abstract:

In many countries, the lifetime prevalence of shoulder pain is up to 70%. In America, the health care system spends 7 billion per year on health issues related to shoulder pain. With respect to the origin, up to 70% of shoulder pain is attributed to rotator cuff lesions. This study proposed a computer-aided diagnosis (CAD) system to assist radiologists in classifying rotator cuff lesions with less operator dependence. Quantitative features were extracted from shoulder ultrasound images acquired using an ALOKA alpha-6 US scanner (Hitachi-Aloka Medical, Tokyo, Japan) with a linear array probe (scan width: 36 mm) ranging from 5 to 13 MHz. During examination, the examined patients were in a standard sitting position and the regular routine was followed. After acquisition, the shoulder US images were exported from the scanner and stored as 8-bit images with pixel values ranging from 0 to 255. Based on the sonographic appearance, the boundary of each lesion was delineated by a physician to indicate the specific pattern for analysis. The three lesion categories for classification were composed of 20 cases of tendon inflammation, 18 cases of calcific tendonitis, and 18 cases of supraspinatus tear. For each lesion, second-order statistics were quantified in the feature extraction; these are texture features describing the correlations between adjacent pixels in a lesion. Because echogenicity patterns are expressed in grey scale, grey-level co-occurrence matrices with four angles of adjacent pixels were used. The texture metrics included the mean and standard deviation of energy, entropy, correlation, inverse difference moment, inertia, cluster shade, cluster prominence, and Haralick correlation. Then, the quantitative features were combined in a multinomial logistic regression classifier to generate a prediction model of rotator cuff lesions. The multinomial logistic regression classifier is widely used for classification into more than two categories, such as the three lesion types used in this study. In the classifier, backward elimination was used to select the most relevant feature subset, chosen from the trained classifier with the lowest error rate. Leave-one-out cross-validation was used to evaluate the performance of the classifier: each case was in turn left out and used to test the classifier trained on the remaining cases. With the physician's assessment as the reference, the performance of the proposed CAD system was expressed as accuracy; the proposed system achieved an accuracy of 86%. A CAD system based on statistical texture features to interpret echogenicity values in shoulder musculoskeletal ultrasound was thus established to generate a prediction model for rotator cuff lesions. Clinically, it is difficult to distinguish some kinds of rotator cuff lesions, especially partial-thickness tears of the rotator cuff. Based on the available literature, shoulder orthopaedic surgeons and musculoskeletal radiologists report greater diagnostic test accuracy than general radiologists or ultrasonographers. Consequently, the proposed CAD system, which was developed according to the shoulder orthopaedic surgeon's assessments, can provide reliable suggestions to general radiologists or ultrasonographers. More quantitative features related to the specific patterns of different lesion types will be investigated in further study to improve the prediction.
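
As a schematic of the feature-extraction and classification pipeline (synthetic regions of interest instead of clinical images, and a reduced set of co-occurrence properties rather than the full feature list above):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)

def glcm_features(roi):
    """Second-order texture features from a grey-level co-occurrence matrix
    computed at four angles, in the spirit of the metrics listed above."""
    glcm = graycomatrix(roi, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

# Synthetic 32x32 lesion ROIs standing in for the three lesion categories.
def fake_roi(mean, spread):
    return np.clip(rng.normal(mean, spread, (32, 32)), 0, 255).astype(np.uint8)

X = np.array([glcm_features(fake_roi(m, s))
              for m, s in [(60, 10)] * 20 + [(120, 25)] * 18 + [(180, 40)] * 18])
y = np.array([0] * 20 + [1] * 18 + [2] * 18)   # inflammation / calcific / tear

# Multinomial logistic regression evaluated with leave-one-out cross-validation.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"LOOCV accuracy on synthetic data: {acc:.2f}")
```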

Keywords: shoulder ultrasound, rotator cuff lesions, texture, computer-aided diagnosis

Procedia PDF Downloads 276
479 Bionaut™: A Breakthrough Robotic Microdevice to Treat Non-Communicating Hydrocephalus in Both Adult and Pediatric Patients

Authors: Suehyun Cho, Darrell Harrington, Florent Cros, Olin Palmer, John Caputo, Michael Kardosh, Eran Oren, William Loudon, Alex Kiselyov, Michael Shpigelmacher

Abstract:

Bionaut Labs, LLC is developing a minimally invasive robotic microdevice designed to treat non-communicating hydrocephalus in both adult and pediatric patients. The device utilizes biocompatible microsurgical particles (Bionaut™) that are specifically designed to safely and reliably perform accurate fenestration(s) in the 3rd ventricle, aqueduct of Sylvius, and/or trapped intraventricular cysts of the brain in order to re-establish normal cerebrospinal fluid flow dynamics and thereby balance and/or normalize intra/intercompartmental pressure. The Bionaut™ is navigated to the target via CSF or brain tissue in a minimally invasive fashion with precise control using real-time imaging. Upon reaching the pre-defined anatomical target, the external driver allows for directing the specific microsurgical action defined to achieve the surgical goal. Notable features of the proposed protocol are i) Bionaut™ access to the intraventricular target follows a clinically validated endoscopy trajectory which may not be feasible via ‘traditional’ rigid endoscopy: ii) the treatment is microsurgical, there are no foreign materials left behind post-procedure; iii) Bionaut™ is an untethered device that is navigated through the subarachnoid and intraventricular compartments of the brain, following pre-designated non-linear trajectories as determined by the safest anatomical and physiological path; iv) Overall protocol involves minimally invasive delivery and post-operational retrieval of the surgical Bionaut™. The approach is expected to be suitable to treat pediatric patients 0-12 months old as well as adult patients with obstructive hydrocephalus who fail traditional shunts or are eligible for endoscopy. Current progress, including platform optimization, Bionaut™ control, and real-time imaging and in vivo safety studies of the Bionauts™ in large animals, specifically the spine and the brain of ovine models, will be discussed.

Keywords: Bionaut™, cerebrospinal fluid, CSF, fenestration, hydrocephalus, micro-robot, microsurgery

Procedia PDF Downloads 160
478 Atmospheric Circulation Types Related to Dust Transport Episodes over Crete in the Eastern Mediterranean

Authors: K. Alafogiannis, E. E. Houssos, E. Anagnostou, G. Kouvarakis, N. Mihalopoulos, A. Fotiadi

Abstract:

The Mediterranean basin is an area where different aerosol types coexist, including urban/industrial, desert dust, biomass burning and marine particles. In particular, mineral dust aerosols, mostly originating from North African deserts, contribute significantly to high aerosol loads above the Mediterranean. Dust transport, controlled by the variation of the atmospheric circulation throughout the year, results in a strong spatial and temporal variability of aerosol properties. In this study, the synoptic conditions which favor dust transport over the Eastern Mediterranean are thoroughly investigated. For this reason, three datasets are employed. Firstly, ground-based daily data of aerosol properties, namely Aerosol Optical Thickness (AOT), Ångström exponent (α440-870) and fine fraction from the FORTH-AERONET (Aerosol Robotic Network) station, along with measurements of PM10 concentrations from the Finokalia station, for the period 2003-2011, are used to identify days with a high coarse aerosol load (episodes) over Crete. Then, geopotential heights at the 1000, 850 and 700 hPa levels, obtained from the NCEP/NCAR Reanalysis Project, are utilized to depict the atmospheric circulation during the identified episodes. Additionally, air-mass back trajectories, calculated by HYSPLIT, are used to verify the origin of aerosols from neighbouring deserts. For the 227 identified dust episodes, the statistical methods of Factor and Cluster Analysis are applied to the corresponding atmospheric circulation data to reveal the main types of the synoptic conditions favouring dust transport towards Crete (Eastern Mediterranean). The 227 cases are classified into 11 distinct types (clusters). Dust episodes in the Eastern Mediterranean are found to be most frequent (52%) in spring, with a secondary maximum in autumn. The main characteristic of the atmospheric circulation associated with dust episodes is the presence of a low-pressure system at the surface, either in southwestern Europe or in the western/central Mediterranean, which induces a southerly air flow favouring dust transport from African deserts. The exact position and intensity of the low-pressure system vary notably among clusters. More rarely, dust may originate from the deserts of the Arabian Peninsula.
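
A schematic of the classification step is sketched below, with factor analysis followed by k-means clustering standing in for the exact Factor and Cluster Analysis configuration of the study, and random fields as placeholders for the NCEP/NCAR data.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Placeholder data: 227 episode days, each a flattened geopotential-height
# field (e.g. a 10 x 12 grid at one pressure level). The real input would be
# the NCEP/NCAR reanalysis fields at 1000, 850 and 700 hPa.
n_days, n_grid = 227, 10 * 12
fields = rng.normal(size=(n_days, n_grid))

# Reduce dimensionality with factor analysis, then group the episode days
# into 11 circulation types with k-means (a stand-in for the study's method).
scores = FactorAnalysis(n_components=10).fit_transform(fields)
types = KMeans(n_clusters=11, n_init=10, random_state=0).fit_predict(scores)

print("episodes per circulation type:", np.bincount(types, minlength=11))
```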

Keywords: aerosols, atmospheric circulation, dust particles, Eastern Mediterranean

Procedia PDF Downloads 222
477 The Use of Food Industry Bio-Products for Sustainable Lactic Acid Bacteria Encapsulation

Authors: Paulina Zavistanaviciute, Vita Krungleviciute, Elena Bartkiene

Abstract:

Lactic acid bacteria (LAB) are microbial supplements that increase the nutritional, therapeutic, and safety value of food and feed. Often LAB strains are incubated in an expensive, commercially available de Man-Rogosa-Sharpe (MRS) medium; the cultures are centrifuged, and the cells are washed with sterile water. Potato juice and apple juice industry bio-products are industrial wastes which may constitute a source of digestible nutrients for microorganisms. Due to their low cost and good chemical composition, potato juice and apple juice production bio-products could have a potential application in LAB encapsulation. In this study, pure LAB (P. acidilactici and P. pentosaceus) were multiplied in a crushed potato juice and apple juice industry bio-products medium. Before use, the bio-products were sterilized and filtered. No additives were added to the mass, except that the apple juice industry bio-products were diluted with sterile water (1/5; v/v). The sterilised mass and a 5 mL LAB cell suspension, containing 8.9 log10 colony-forming units (cfu) per mL of P. acidilactici and P. pentosaceus, were used to multiply the LAB for 72 h. The final colony number in the potato juice and apple juice bio-products substrate was on average 9.60 log10 cfu/g. In order to stabilize the LAB, several methods of dehydration were tested: lyophilisation (Milrock, Kieffer Lane, Kingston, USA) and dehydration in a spray-drying system (SD-06, Keison, Great Britain). The LAB multiplied in the crushed potato juice and apple juice bio-products medium were fed into the spray-drying system by peristaltic pump (inlet temperature +60 °C, inlet air temperature +150 °C, outgoing air temperature +80 °C, air flow 200 m³/h). After lyophilisation (-48 °C) and spray drying (+150 °C), the viable cell concentration in the fermented potato juice powder was 9.18 ± 0.09 log10 cfu/g and 9.04 ± 0.07 log10 cfu/g, respectively, and in the apple mass powder 8.03 ± 0.04 log10 cfu/g and 7.03 ± 0.03 log10 cfu/g, respectively. Results indicated that after 12 months of storage at room temperature (22 ± 2 °C), the LAB count in the dehydrated products was 5.18 log10 cfu/g and 7.00 log10 cfu/g (in spray-dried and lyophilized potato juice powder, respectively), and 3.05 log10 cfu/g and 4.10 log10 cfu/g (in spray-dried and lyophilized apple juice industry bio-products powder, respectively). According to the results obtained, potato juice could be used as an alternative substrate for P. acidilactici and P. pentosaceus cultivation, and the resulting dried powders can be used in the food/feed industry as LAB starters. Therefore, the apple juice industry by-products should be modified before spray drying and lyophilisation (i.e., by using different starches) in order to improve encapsulation.

Keywords: bio-products, encapsulation, lactic acid bacteria, sustainability

Procedia PDF Downloads 269
476 Impact of the Oxygen Content on the Optoelectronic Properties of the Indium-Tin-Oxide Based Transparent Electrodes for Silicon Heterojunction Solar Cells

Authors: Brahim Aissa

Abstract:

Transparent conductive oxides (TCOs) used as front electrodes in solar cells must feature simultaneously high electrical conductivity, low contact resistance with the adjacent layers, and an appropriate refractive index for maximal light in-coupling into the device. However, these properties may conflict with each other, motivating thereby the search for TCOs with high performance. Additionally, due to the presence of temperature sensitive layers in many solar cell designs (for example, in thin-film silicon and silicon heterojunction (SHJ)), low-temperature deposition processes are more suitable. Several deposition techniques have been already explored to fabricate high-mobility TCOs at low temperatures, including sputter deposition, chemical vapor deposition, and atomic layer deposition. Among this variety of methods, to the best of our knowledge, magnetron sputtering deposition is the most established technique, despite the fact that it can lead to damage of underlying layers. The Sn doped In₂O₃ (ITO) is the most commonly used transparent electrode-contact in SHJ technology. In this work, we studied the properties of ITO thin films grown by RF sputtering. Using different oxygen fraction in the argon/oxygen plasma, we prepared ITO films deposited on glass substrates, on one hand, and on a-Si (p and n-types):H/intrinsic a-Si/glass substrates, on the other hand. Hall Effect measurements were systematically conducted together with total-transmittance (TT) and total-reflectance (TR) spectrometry. The electrical properties were drastically affected whereas the TT and TR were found to be slightly impacted by the oxygen variation. Furthermore, the time of flight-secondary ion mass spectrometry (TOF-SIMS) technique was used to determine the distribution of various species throughout the thickness of the ITO and at various interfaces. The depth profiling of indium, oxygen, tin, silicon, phosphorous, boron and hydrogen was investigated throughout the various thicknesses and interfaces, and obtained results are discussed accordingly. Finally, the extreme conditions were selected to fabricate rear emitter SHJ devices, and the photovoltaic performance was evaluated; the lower oxygen flow ratio was found to yield the best performance attributed to lower series resistance.
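
For context on how Hall-effect quantities translate into film resistivity and sheet resistance, a small worked example is given below; the carrier concentration, mobility and thickness are typical-order placeholders, not the measured values.

```python
# Resistivity and sheet resistance from Hall-effect quantities:
# sigma = q * n * mu, rho = 1 / sigma, R_sheet = rho / thickness.
# The numbers below are typical-order placeholders, not measured values.
Q = 1.602e-19                 # elementary charge [C]
n_cm3 = 5e20                  # carrier concentration [1/cm^3]
mu = 30.0                     # Hall mobility [cm^2 / (V s)]
thickness_nm = 80.0

sigma = Q * n_cm3 * mu                   # conductivity [S/cm]
rho = 1.0 / sigma                        # resistivity [ohm cm]
r_sheet = rho / (thickness_nm * 1e-7)    # sheet resistance [ohm/sq]

print(f"resistivity    : {rho:.2e} ohm cm")
print(f"sheet R (80 nm): {r_sheet:.1f} ohm/sq")
```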

Keywords: solar cell, silicon heterojunction, oxygen content, optoelectronic properties

Procedia PDF Downloads 145
475 Collaborative and Experimental Cultures in Virtual Reality Journalism: From the Perspective of Content Creators

Authors: Radwa Mabrook

Abstract:

Virtual Reality (VR) content creation is a complex and an expensive process, which requires multi-disciplinary teams of content creators. Grant schemes from technology companies help media organisations to explore the VR potential in journalism and factual storytelling. Media organisations try to do as much as they can in-house, but they may outsource due to time constraints and skill availability. Journalists, game developers, sound designers and creative artists work together and bring in new cultures of work. This study explores the collaborative experimental nature of VR content creation, through tracing every actor involved in the process and examining their perceptions of the VR work. The study builds on Actor Network Theory (ANT), which decomposes phenomena into their basic elements and traces the interrelations among them. Therefore, the researcher conducted 22 semi-structured interviews with VR content creators between November 2017 and April 2018. Purposive and snowball sampling techniques allowed the researcher to recruit fact-based VR content creators from production studios and media organisations, as well as freelancers. Interviews lasted up to three hours, and they were a mix of Skype calls and in-person interviews. Participants consented for their interviews to be recorded, and for their names to be revealed in the study. The researcher coded interviews’ transcripts in Nvivo software, looking for key themes that correspond with the research questions. The study revealed that VR content creators must be adaptive to change, open to learn and comfortable with mistakes. The VR content creation process is very iterative because VR has no established work flow or visual grammar. Multi-disciplinary VR team members often speak different languages making it hard to communicate. However, adaptive content creators perceive VR work as a fun experience and an opportunity to learn. The traditional sense of competition and the strive for information exclusivity are now replaced by a strong drive for knowledge sharing. VR content creators are open to share their methods of work and their experiences. They target to build a collaborative network that aims to harness VR technology for journalism and factual storytelling. Indeed, VR is instilling collaborative and experimental cultures in journalism.

Keywords: collaborative culture, content creation, experimental culture, virtual reality

Procedia PDF Downloads 120
474 Computer-Assisted Management of Building Climate and Microgrid with Model Predictive Control

Authors: Vinko Lešić, Mario Vašak, Anita Martinčević, Marko Gulin, Antonio Starčić, Hrvoje Novak

Abstract:

Accounting for 40% of total world energy consumption, building systems are developing into technically complex, large energy consumers suitable for the application of sophisticated power management approaches that can greatly increase energy efficiency and even turn buildings into active energy market participants. A centralized building heating and cooling control system managed by economically optimal model predictive control shows promising results, with an estimated 30% increase in energy efficiency. The research focuses on implementing such a method in a case study performed on two floors of our faculty building, with corresponding wireless sensor data acquisition, remote heating/cooling units and a central climate controller. Building walls are mathematically modeled with their corresponding material types, surface shapes and sizes. The models are then exploited to predict thermal characteristics and changes in different building zones. Exterior influences such as environmental conditions and weather forecasts, occupant behavior and comfort demands are all taken into account when deriving the price-optimal climate control. Finally, a DC microgrid with photovoltaics, a wind turbine, a supercapacitor, batteries and fuel cell stacks is added to make the building a unit capable of active participation in a price-varying energy market. The computational burden of applying model predictive control to such a complex system is relaxed through a hierarchical decomposition of the microgrid and climate control: the former is designed as the higher hierarchical level with pre-calculated price-optimal power flow control, and the latter as the lower-level control responsible for ensuring thermal comfort and exploiting the optimal supply conditions enabled by microgrid energy flow management. Such an approach is expected to enable the inclusion of more complex building subsystems in order to further increase energy efficiency.
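As a concrete illustration of the lower-level, price-optimal climate control described above, the sketch below formulates a single-zone economic MPC problem with a first-order thermal model, a comfort band and a time-varying energy price, and solves it with cvxpy. The model coefficients, forecasts, price profile and power limits are illustrative assumptions, not the identified parameters of the faculty building.

```python
# Minimal sketch of economic MPC for a single thermal zone (illustrative only).
import numpy as np
import cvxpy as cp

N = 24                            # prediction horizon [h]
a, b, c = 0.9, 0.2, 0.1           # discrete first-order zone model coefficients (assumed)
T_out = 5 + 5 * np.sin(np.linspace(0, 2 * np.pi, N))   # outdoor temperature forecast [C]
price = 0.1 + 0.05 * (np.arange(N) % 24 > 7)           # hourly energy price [EUR/kWh]
T0 = 20.0                         # initial zone temperature [C]

T = cp.Variable(N + 1)            # zone temperature trajectory
u = cp.Variable(N, nonneg=True)   # heating power [kW]

constraints = [T[0] == T0, u <= 10]
for k in range(N):
    # zone dynamics: T[k+1] = a*T[k] + b*u[k] + c*T_out[k]
    constraints += [T[k + 1] == a * T[k] + b * u[k] + c * T_out[k]]
constraints += [T[1:] >= 20, T[1:] <= 24]   # comfort band

cost = cp.sum(cp.multiply(price, u))        # energy cost over the horizon
prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print("optimal cost:", prob.value)
print("heating plan:", np.round(u.value, 2))
```

In the hierarchical scheme described in the abstract, the price signal and supply limits fed into such a problem would come from the higher-level microgrid power flow optimisation rather than being fixed in advance.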

Keywords: price-optimal building climate control, microgrid power flow optimisation, hierarchical model predictive control, energy efficient buildings, energy market participation

Procedia PDF Downloads 457
473 Evaluation of Reservoir Quality in Cretaceous Sandstone Complex, Western Flank of Anambra Basin, Southern Nigeria

Authors: Bayole Omoniyi

Abstract:

This study demonstrates the value of outcrops as analogues for evaluating the reservoir quality of sandbodies in a typical high-sinuosity fluvial system. The study utilized data acquired from selected outcrops in the Campanian-Maastrichtian siliciclastic succession of the western flank of the Anambra Basin, southern Nigeria. Textural properties derived from outcrop samples were correlated and compared with porosity and permeability using established standard charts. Porosity was also estimated from thin sections of selected samples to reduce uncertainty in the estimates. Following facies classification, 14 distinct facies were grouped into three facies associations (FA1-FA3) and subsequently modeled as discrete properties in a block-centered Cartesian grid on a scale that captures the geometry of the principal sandbodies. Porosity and permeability estimated from the charts were populated in the grid using comparable geostatistical techniques that reflect their spatial distribution. The resultant models were conditioned to the facies property to honour the available data. The results indicate a strong control of geometrical parameters on facies distribution, lateral continuity and connectivity, with a resultant effect on the porosity and permeability distribution. The sand-prone FA1 and FA2 display reservoir quality that varies internally from channel axis to margin in each succession. Furthermore, the isolated stacking pattern of sandbodies reduces static connectivity and thus increases the risk of poor communication between reservoir-quality sandbodies. FA3 is non-reservoir because it is mud-prone. In conclusion, the risk of poor communication between sandbodies may be accentuated in reservoirs with similar architecture because thick lateral accretion deposits, usually mudstone, tend to disconnect good-quality point-bar sandbodies. In such reservoirs, mudstone may act as a barrier that impedes flow vertically from one sandbody to another and laterally at the margins of each channel-fill succession. The development plan, therefore, must be designed to effectively mitigate these risks, as well as the risk of stratigraphic compartmentalization, for maximum hydrocarbon recovery.
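The facies-conditioned property modeling described above can be illustrated with a small numerical sketch: a discrete facies grid is populated with porosity and permeability drawn from per-facies distributions, mimicking how petrophysical properties are conditioned to facies associations. The grid size, facies proportions and distribution parameters below are invented for illustration and are not the outcrop-derived values of this study.

```python
# Minimal sketch: populate a facies-conditioned grid with porosity/permeability.
# Facies codes: 0 = FA1, 1 = FA2 (both sand-prone), 2 = FA3 (mud-prone, non-reservoir).
import numpy as np

rng = np.random.default_rng(0)
nx, ny, nz = 50, 50, 10                       # block-centered Cartesian grid (assumed size)
facies = rng.choice([0, 1, 2], size=(nx, ny, nz), p=[0.4, 0.35, 0.25])

# Per-facies porosity (fraction) and permeability (mD) statistics -- illustrative only.
poro_mean = {0: 0.22, 1: 0.18, 2: 0.05}
poro_std = {0: 0.03, 1: 0.03, 2: 0.01}
perm_logmean = {0: np.log(500.0), 1: np.log(150.0), 2: np.log(0.1)}
perm_logstd = {0: 0.6, 1: 0.6, 2: 0.4}

poro = np.zeros(facies.shape)
perm = np.zeros(facies.shape)
for f in (0, 1, 2):
    mask = facies == f
    poro[mask] = np.clip(rng.normal(poro_mean[f], poro_std[f], mask.sum()), 0.01, 0.35)
    perm[mask] = rng.lognormal(perm_logmean[f], perm_logstd[f], mask.sum())

for f, name in zip((0, 1, 2), ("FA1", "FA2", "FA3")):
    m = facies == f
    print(f"{name}: mean porosity {poro[m].mean():.3f}, mean permeability {perm[m].mean():.1f} mD")
```

A full geostatistical workflow would additionally impose spatial correlation (e.g. variogram-based simulation) and honour the sandbody geometry, which is exactly where the connectivity risks identified in the abstract arise.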

Keywords: analogues, architecture, connectivity, fluvial

Procedia PDF Downloads 8
472 Well Inventory Data Entry: Utilization of Developed Technologies to Progress the Integrated Asset Plan

Authors: Danah Al-Selahi, Sulaiman Al-Ghunaim, Bashayer Sadiq, Fatma Al-Otaibi, Ali Ameen

Abstract:

In light of recent changes affecting the Oil & Gas Industry, optimization measures have become imperative for all companies globally, including Kuwait Oil Company (KOC). To keep abreast of the dynamic market, a detailed Integrated Asset Plan (IAP) was developed to drive optimization across the organization, facilitated through the in-house developed software “Well Inventory Data Entry” (WIDE). This comprehensive and integrated approach enabled the centralization of all planned asset components for better well planning, enhanced performance, and continuous improvement through performance tracking and midterm forecasting. Traditionally, this was hard to achieve because various legacy methods were used. This paper briefly describes the methods successfully adopted to meet the company’s objective. IAPs were initially designed using computerized spreadsheets. However, as the captured data became more complex and the number of stakeholders requiring and updating this information grew, the need to automate the conventional spreadsheets became apparent. WIDE, already in use in other parts of the company (namely, the Workover Optimization project), was utilized to meet the dynamic requirements of the IAP cycle. With the growth of extensive features to enhance the planning process, the tool evolved into a centralized data hub for all asset groups and technical support functions to analyze and draw inferences from, leading WIDE to become the reference two-year operational plan for the entire company. To achieve WIDE’s goal of operational efficiency, asset groups continuously add their parameters in a series of predefined workflows, creating a structured process that allows risk factors to be flagged and mitigated. The tool assigns responsibilities to all stakeholders in a way that enables continuous updates of daily performance measures and operational use. The reliable availability of WIDE, combined with its user-friendliness and easy accessibility, created a platform of cross-functionality among all asset groups and technical support groups for updating their respective planning parameters. The home-grown tool was implemented across the entire company and tailored to feed into the internal processes of several stakeholders. Furthermore, the application of change management and root cause analysis techniques captured the dysfunctionality of previous plans, which in turn improved the existing planning mechanisms within the IAP. The detailed elucidation of the two-year plan flagged upcoming risks and shortfalls foreseen in the plan. All results were translated into a series of developments that propelled the tool’s capabilities beyond planning and into operations (such as asset production forecasts, setting KPIs, and estimating operational needs). This process exemplifies the ability and reach of applying advanced development techniques to seamlessly integrate the planning parameters of various asset groups and technical support groups. These techniques enhance the integration of planning data workflows and ultimately lay the foundation for greater accuracy and reliability. As such, benchmarks based on a set of standard goals are established to ensure continuous improvement of the efficiency of the entire planning and operational structure.
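The abstract does not disclose WIDE's internal design, but the predefined-workflow idea with automatic risk flagging can be sketched in a few lines. Everything below (field names, thresholds, and the flag_risks helper) is hypothetical and only illustrates the kind of structured planning entry the text describes; it is not WIDE's actual data model.

```python
# Hypothetical sketch of a structured planning entry with simple risk flagging.
# Field names and thresholds are invented; they are not WIDE's actual data model.
from dataclasses import dataclass, field

@dataclass
class PlannedWellActivity:
    well_name: str
    asset_group: str
    planned_start_month: int          # month index within the two-year plan (1-24)
    estimated_duration_days: int
    rig_assigned: bool
    flags: list = field(default_factory=list)

def flag_risks(activity: PlannedWellActivity) -> list:
    """Append human-readable risk flags based on simple illustrative rules."""
    if not activity.rig_assigned:
        activity.flags.append("no rig assigned")
    if activity.estimated_duration_days > 60:
        activity.flags.append("long duration - review schedule")
    if activity.planned_start_month > 24:
        activity.flags.append("outside two-year plan window")
    return activity.flags

plan = [
    PlannedWellActivity("WELL-A", "North", 3, 45, True),
    PlannedWellActivity("WELL-B", "South", 26, 90, False),
]
for item in plan:
    print(item.well_name, flag_risks(item) or "no flags")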

Keywords: automation, integration, value, communication

Procedia PDF Downloads 135
471 Bioreactor for Cell-Based Impedance Measuring with Diamond Coated Gold Interdigitated Electrodes

Authors: Roman Matejka, Vaclav Prochazka, Tibor Izak, Jana Stepanovska, Martina Travnickova, Alexander Kromka

Abstract:

Cell-based impedance spectroscopy is a suitable method for the electrical monitoring of cell activity, especially on substrates that cannot easily be inspected by optical microscopy (without fluorescent markers), such as decellularized tissues, nano-fibrous scaffolds, etc. A special sensor was developed for this measurement. The sensor consists of a Corning glass substrate with gold interdigitated electrodes covered with a diamond layer. The diamond layer provides a biocompatible, non-conductive surface for the cells. In addition, a special PPFC flow cultivation chamber was developed. The chamber fixes the sensor in place, and spring contacts connect the sensor pads with the external measuring device. The construction allows real-time live-cell imaging, and combining it with a perfusion system enables medium circulation and shear stress stimulation. The experimental evaluation consisted of several setups, including the bare sensor without any coating as well as collagen and fibrin coatings. Adipose-derived stem cells (ASC) and human umbilical vein endothelial cells (HUVEC) were seeded onto the sensor in the cultivation chamber. The chamber was then installed in the microscope system for live-cell imaging. The impedance measurement was performed with a vector impedance analyzer over the range from 10 Hz to 40 kHz. These impedance measurements were correlated with live-cell microscopic imaging and immunofluorescent staining. Analysis of the measured signals showed responses to cell adhesion to the substrates and their proliferation, as well as changes after shear stress stimulation, which are important parameters during cultivation. Further experiments are planned using decellularized tissue as a scaffold fixed on the sensor. This kind of impedance sensor can provide feedback about cell culture conditions on opaque surfaces and scaffolds that can be used in tissue engineering for the development of artificial prostheses. This work was supported by the Ministry of Health, grants No. 15-29153A and 15-33018A.
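A minimal sketch of how such an impedance sweep might be post-processed is given below: the complex impedance of a cell-covered sensor is compared with a cell-free (blank) reference across the 10 Hz to 40 kHz range, and a simple normalized index is reported. The synthetic spectra, circuit model and index definition are illustrative assumptions, not the analysis pipeline used in this work.

```python
# Illustrative post-processing of a cell-based impedance sweep (10 Hz - 40 kHz).
# Synthetic spectra stand in for measured data; the normalized index is one
# common way to summarize cell coverage, not necessarily the authors' method.
import numpy as np

f = np.logspace(1, np.log10(40e3), 50)        # frequency points [Hz]
w = 2 * np.pi * f

def spectrum(r_series, r_shunt, c_shunt):
    """Series resistance plus a parallel RC element (assumed equivalent circuit)."""
    z_shunt = 1.0 / (1.0 / r_shunt + 1j * w * c_shunt)
    return r_series + z_shunt

z_blank = spectrum(r_series=200.0, r_shunt=5e3, c_shunt=50e-9)    # assumed values
z_cells = spectrum(r_series=200.0, r_shunt=12e3, c_shunt=30e-9)   # cells raise barrier resistance

magnitude_blank = np.abs(z_blank)
magnitude_cells = np.abs(z_cells)
phase_cells = np.degrees(np.angle(z_cells))   # phase spectrum of the cell-covered sensor

# Normalized index: relative impedance increase, maximized over frequency.
cell_index = np.max((magnitude_cells - magnitude_blank) / magnitude_blank)
print(f"peak normalized impedance increase: {cell_index:.2f}")
```

Tracking such an index over time is what allows adhesion, proliferation and the response to shear stress stimulation to be read out without optical access to the substrate.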

Keywords: bio-impedance measuring, bioreactor, cell cultivation, diamond layer, gold interdigitated electrodes, tissue engineering

Procedia PDF Downloads 291
470 Placement of Inflow Control Valve for Horizontal Oil Well

Authors: S. Thanabanjerdsin, F. Srisuriyachai, J. Chewaroungroj

Abstract:

Drilling horizontal wells is one of the most cost-effective methods of exploiting a reservoir, as it increases the exposure area between the well and the formation. Together with horizontal well technology, intelligent completion is often used to increase petroleum production by monitoring and controlling downhole production. The combination of both technologies offers an opportunity to mitigate the water cresting phenomenon, a detrimental problem that not only lowers oil recovery but also causes environmental problems due to water disposal. Flow of reservoir fluid results from the difference between reservoir and wellbore pressure. In a horizontal well, reservoir fluid around the heel enters the wellbore at a higher rate than at the toe. As a consequence, the oil-water contact (OWC) at the heel side moves upward relatively faster than at the toe side, causing the well to encounter an early water encroachment problem. Installation of inflow control valves (ICVs) in particular sections of a horizontal well involves several parameters, such as the number of ICVs, the water cut constraint of each valve, and the length of each section. This study focuses on optimizing the ICV configuration to minimize water production and, at the same time, enhance oil production. A reservoir model with a high aspect ratio of the oil-bearing zone to the underlying aquifer is drilled with a horizontal well and completed with varying ICV segments. Optimization of the horizontal well configuration is first performed by varying the number of ICVs, the segment length, and the individual preset water cut for each segment. Simulation results show that installing ICVs can increase the oil recovery factor by up to 5% of the original oil in place (OOIP) and can reduce produced water, depending on the ICV segment length and ICV parameters. For equally partitioned ICV segments, a larger number of segments results in better oil recovery; however, exceeding 10 segments may not give significant additional recovery. In the first production period, the deformation of the OWC strongly depends on the number of segments along the well: a higher number of segments results in a smoother deformation of the OWC. After water breakthrough at the heel segment, the second production period begins, in which the deformation of the OWC is principally dominated by the ICV parameters. In certain situations where the OWC is unstable, such as a high production rate, high-viscosity fluid above the aquifer, or a strong aquifer, the second production period may give a wide enough window for the ICV parameters to play their role.
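The kind of configuration study described here can be organized as a simple grid search over the number of ICV segments and their preset water cut limits, with each candidate evaluated by a reservoir simulation run. The sketch below uses a hypothetical run_simulation function standing in for the actual simulator; the segment counts, water cut limits, toy response surface and scoring rule are all illustrative.

```python
# Illustrative grid search over ICV configurations. `run_simulation` is a
# hypothetical stand-in for a reservoir simulator call; it is not a real API.
from itertools import product

def run_simulation(n_segments: int, water_cut_limit: float) -> dict:
    """Placeholder: would launch a simulation case and return recovery metrics."""
    # Toy response surface just so the sketch executes: recovery improves with
    # more segments (diminishing returns) and with a tighter water cut limit.
    rf = 0.30 + 0.05 * (1 - 1 / n_segments) - 0.02 * water_cut_limit
    wp = 1.0e6 * (1 + water_cut_limit) / n_segments
    return {"recovery_factor": rf, "water_produced_m3": wp}

best = None
for n_seg, wc_limit in product([2, 4, 6, 8, 10], [0.5, 0.7, 0.9]):
    result = run_simulation(n_seg, wc_limit)
    # Simple score: reward recovery, penalize produced water (weights are arbitrary).
    score = result["recovery_factor"] - 1e-8 * result["water_produced_m3"]
    if best is None or score > best[0]:
        best = (score, n_seg, wc_limit, result)

print("best configuration:", best[1], "segments, water cut limit", best[2])
print("predicted recovery factor:", round(best[3]["recovery_factor"], 3))
```

The diminishing-returns behaviour reported in the abstract (little benefit beyond roughly 10 segments) is exactly what such a sweep is designed to expose before committing to a completion design.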

Keywords: horizontal well, water cresting, inflow control valve, reservoir simulation

Procedia PDF Downloads 405
469 Normal Hematopoietic Stem Cell and the Toxic Effect of Parthenolide

Authors: Alsulami H., Alghamdi N., Alasker A., Almohen N., Shome D.

Abstract:

Most conventional chemotherapeutic agents used for the treatment of cancers not only eradicate cancer cells but also affect normal hematopoietic stem cells (HSCs), which leads to severe pancytopenia during treatment. Therefore, a need exists for novel approaches that treat cancer with no, or minimal, effect on normal HSCs. Parthenolide (PTL), a herbal product occurring naturally in the plant feverfew, is a potential new chemotherapeutic agent for the treatment of many cancers such as acute myeloid leukemia (AML) and chronic lymphocytic leukemia (CLL). In this study, we investigated the effect of different PTL concentrations on the viability of normal HSCs and on the ability of these cells to form colonies after treatment with PTL in vitro. Methods: Twenty-four samples of bone marrow and cord blood were collected with consent, and mononuclear cells were separated using density gradient separation. These cells were then exposed to various concentrations of PTL for 24 hours. Cell viability after culture was determined by flow cytometry using 7-AAD. Additionally, the impact of PTL on hematopoietic stem cells (HSCs) was evaluated using a colony forming unit (CFU) assay. Furthermore, the level of NF-κB expression was assessed using a PE-labelled anti-phospho-NF-κB p65 antibody. Results: The study showed no statistically significant difference in the percentage of cell death between untreated and PTL-treated cells at 5 μM PTL (p = 0.7), 10 μM PTL (p = 0.4) and 25 μM PTL (p = 0.09). However, at higher doses, PTL caused a significant increase in the percentage of cell death; these results were significant when compared to the untreated control (p < 0.001). The response of cord blood cells (n = 4), on the other hand, differed slightly from that of bone marrow cells in that the percentage of cell death only became significant at 100 μM PTL. Cord blood cells therefore appeared more resistant than bone marrow cells. Discussion & Conclusion: At concentrations ≤25 μM, PTL has minimal or no effect on HSCs in vitro. Cord blood HSCs are more resistant to PTL than bone marrow HSCs. This could be due to the higher percentage of T-lymphocytes, which are resistant to PTL, in CB samples (85% in CB vs. 56% in BM). Additionally, CB samples contained a higher proportion of CD34+ cells, with 14.5% brightly CD34+ cells compared to only 1% in normal BM. These bright CD34+ cells in CB were mostly negative for early-stage stem cell maturation antigens, making them young and resilient to oxidative stress and high concentrations of PTL.

Keywords: stem cell, parthenolide, NFKB, CLL

Procedia PDF Downloads 30
468 Structural Behavior of Subsoil Depending on Constitutive Model in Calculation Model of Pavement Structure-Subsoil System

Authors: M. Kadela

Abstract:

The load caused by traffic movement should be transferred through the road construction in a way that is harmless to the pavement: first to the stiff upper layers of the structure (e.g. the abrading and binding asphalt layers), then through the layers of the principal and secondary substructure, and finally onto the subsoil, directly or through an improved subsoil layer. A reliable description of the interaction in the system “road construction – subsoil” should therefore be one of the basic requirements for assessing the internal forces in the structure and its durability. Analyses of road constructions are based on elements of mechanics, which allow computational models to be created, and on results of experiments included in the criteria of fatigue life analyses. This approach is a fundamental feature of commonly used mechanistic methods, which allow arbitrarily complex numerical computational models to be used in fatigue life evaluations of structures. Considering the work of the system “road construction – subsoil”, it is commonly accepted that, as a result of repetitive loads on the subsoil under the pavement, a relatively small deformation grows in the initial phase; this growth then disappears, and the deformation becomes completely reversible. The reliability of the calculation model depends on the appropriate use (for a given type of analysis) of constitutive relationships. Phenomena occurring in the initial stage of the system “road construction – subsoil” are unfortunately difficult to interpret in the modeling process. The classic interpretation of material behavior in the elastic-plastic (e-p) model is that the elastic phase of work (e) passes into the elastic-plastic phase (e-p) as the load increases (or as deformation grows in the damaged structure). The paper presents the essence of the calibration process of the cooperating subsystems in the calculation model of the system “road construction – subsoil”, created for mechanistic analysis. The calibration process was directed at showing the impact of the applied constitutive models on the deformation and stress response. The proper comparative base for assessing the reliability of the created models should, however, be the actual, monitored system “road construction – subsoil”. The paper also presents the behavior of the subsoil under cyclic load transmitted by the pavement layers. The response of the subsoil to cyclic load is recorded in situ by an observation system (sensors) installed on a testing ground prepared for this purpose as part of a test road near Katowice, Poland. A different behavior of the homogeneous subsoil under the pavement is observed in different seasons of the year: the pavement works as a flexible structure in summer and as a rigid plate in winter.
Although the observed character of the subsoil response is the same regardless of the applied load and loaded area, this response can be divided into a zone of indirect action of the applied load, extending to a depth of 1.0 m under the pavement, and a zone of small strain, extending to about 2.0 m. This work was supported by the on-going research project “Stabilization of weak soil by application of layer of foamed concrete used in contact with subsoil” (LIDER/022/537/L-4/NCBR/2013) financed by The National Centre for Research and Development within the LIDER Programme.
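The elastic-plastic (e-p) interpretation mentioned above can be illustrated with a one-dimensional return-mapping sketch: under symmetric cyclic strain, plastic strain accumulates during the first cycles and the plastic increments die out, so the response tends toward the reversible behavior described for the subsoil. The modulus, hardening modulus and yield stress below are arbitrary illustrative values, not calibrated subsoil parameters.

```python
# 1D elastic-plastic model with linear isotropic hardening (illustrative values).
# Under symmetric cyclic strain, plastic strain accumulates in the first cycles
# and the plastic increments die out, so the response tends to a reversible one.
import numpy as np

E = 50e6          # elastic modulus [Pa] (assumed)
H = 10e6          # isotropic hardening modulus [Pa] (assumed)
sigma_y0 = 40e3   # initial yield stress [Pa] (assumed)

def cyclic_response(strain_history):
    eps_p, alpha = 0.0, 0.0          # plastic strain, accumulated plastic strain
    records = []
    for eps in strain_history:
        sigma_trial = E * (eps - eps_p)
        f = abs(sigma_trial) - (sigma_y0 + H * alpha)   # yield function
        if f > 0.0:                                     # plastic step (return mapping)
            dgamma = f / (E + H)
            eps_p += dgamma * np.sign(sigma_trial)
            alpha += dgamma
        records.append((eps, E * (eps - eps_p), alpha))
    return records

# Four symmetric strain cycles of amplitude 0.002, 100 steps per cycle
t = np.arange(400) / 100.0
history = cyclic_response(0.002 * np.sin(2 * np.pi * t))

alpha_end = [history[(k + 1) * 100 - 1][2] for k in range(4)]
print("plastic strain accumulated per cycle:", np.round(np.diff([0.0] + alpha_end), 5))
```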

Keywords: road structure, constitutive model, calculation model, pavement, soil, FEA, response of soil, monitored system

Procedia PDF Downloads 347
467 Pulmonary Disease Identification Using Machine Learning and Deep Learning Techniques

Authors: Chandu Rathnayake, Isuri Anuradha

Abstract:

Early detection and accurate diagnosis of lung diseases play a crucial role in improving patient prognosis. However, conventional diagnostic methods rely heavily on subjective symptom assessments and medical imaging, often causing delays in diagnosis and treatment. To overcome this challenge, we propose a novel lung disease prediction system that integrates patient symptoms and X-ray images to provide a comprehensive and reliable diagnosis. In this project, we develop a mobile application specifically designed for detecting lung diseases. Our application leverages both patient symptoms and X-ray images to facilitate diagnosis. By combining these two sources of information, the application delivers a more accurate and comprehensive assessment of the patient's condition, minimizing the risk of misdiagnosis. Our primary aim is to create a user-friendly and accessible tool, which is particularly important given the current circumstances in which many patients face limitations in visiting healthcare facilities. To achieve this, we employ several state-of-the-art algorithms. First, the decision tree algorithm is utilized for efficient symptom-based classification: it analyzes patient symptoms and creates a tree-like model to predict the presence of specific lung diseases. Second, we employ the random forest algorithm, which enhances predictive power by aggregating multiple decision trees; this ensemble technique improves the accuracy and robustness of the diagnosis. Furthermore, we incorporate a deep learning model using a convolutional neural network (CNN) with the pre-trained ResNet50 model. CNNs are well suited for image analysis and feature extraction. By training the CNN on a large dataset of X-ray images, it learns to identify patterns and features indicative of lung diseases. The ResNet50 architecture, known for its excellent performance in image recognition tasks, enhances the efficiency and accuracy of our deep learning model. By combining the outputs of the decision-tree-based algorithms and the deep learning model, our mobile application generates a comprehensive lung disease prediction. The application provides users with an intuitive interface to input their symptoms and upload X-ray images for analysis. The prediction generated by the system offers valuable insights into the likelihood of various lung diseases, enabling individuals to take appropriate actions and seek timely medical attention. Our proposed mobile application has significant potential to address the rising prevalence of lung diseases, particularly among young individuals with smoking addictions. By providing a quick and user-friendly approach to assessing lung health, our application empowers individuals to monitor their well-being conveniently. This solution also offers immense value in the context of limited access to healthcare facilities, enabling timely detection and intervention. In conclusion, our research presents a comprehensive lung disease prediction system that combines patient symptoms and X-ray images using advanced algorithms. By developing a mobile application, we provide an accessible tool for individuals to assess their lung health conveniently. This solution has the potential to make a significant impact on the early detection and management of lung diseases, benefiting both patients and healthcare providers.
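The two-branch approach described above can be sketched as follows: a random forest over tabular symptom indicators and a ResNet50 backbone with a small classification head over chest X-rays, combined by simple late fusion. The toy data shapes, class labels and averaging fusion rule are placeholders for illustration, not the authors' actual training setup.

```python
# Illustrative sketch: random forest on symptoms + ResNet50-based CNN on X-rays.
# Dataset shapes, class labels and the fusion rule are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
import tensorflow as tf

# --- Symptom branch: random forest on binary symptom indicators (toy data) ---
rng = np.random.default_rng(0)
X_symptoms = rng.integers(0, 2, size=(200, 12))       # 12 yes/no symptoms
y = rng.integers(0, 3, size=200)                      # 3 hypothetical disease classes
symptom_model = RandomForestClassifier(n_estimators=100, random_state=0)
symptom_model.fit(X_symptoms, y)

# --- Image branch: ResNet50 backbone with a small classification head ---
base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=(224, 224, 3))
base.trainable = False                                # transfer learning: freeze backbone
image_model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),   # same 3 classes as above
])
image_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# --- Simple late fusion: average the two probability vectors (assumed rule) ---
def predict_combined(symptom_row, xray_batch):
    p_sym = symptom_model.predict_proba(symptom_row.reshape(1, -1))[0]
    p_img = image_model.predict(xray_batch, verbose=0)[0]
    return (p_sym + p_img) / 2.0

dummy_xray = np.zeros((1, 224, 224, 3), dtype=np.float32)
print(predict_combined(X_symptoms[0], dummy_xray))
```

In a real deployment, the image head would of course be fine-tuned on labelled chest X-rays and the fused probabilities calibrated before being surfaced in the mobile application.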

Keywords: CNN, random forest, decision tree, machine learning, deep learning

Procedia PDF Downloads 67
466 Analysis and Optimized Design of a Packaged Liquid Chiller

Authors: Saeed Farivar, Mohsen Kahrom

Abstract:

The purpose of this work is to develop a physical simulation model for studying the effect of various design parameters on the performance of packaged liquid chillers. This paper presents a steady-state model for predicting the performance of a packaged liquid chiller over a wide range of operating conditions. The model inputs are the inlet conditions and geometry; the model outputs include system performance variables such as power consumption, coefficient of performance (COP) and the states of the refrigerant through the refrigeration cycle. A computer model that simulates the steady-state cyclic performance of a vapor compression chiller is developed for the purpose of performing detailed physical design analysis of actual industrial chillers. The model can be used for design optimization and for detailed energy efficiency analysis of packaged liquid chillers. The simulation model takes into account all chiller components, such as the compressor, the shell-and-tube condenser and evaporator heat exchangers, the thermostatic expansion valve and the connecting pipes and tubing, by thermo-hydraulic modeling of the heat transfer, fluid flow and thermodynamic processes in each of these components. To verify the validity of the developed model, a 7.5 USRT packaged liquid chiller is used and a laboratory test stand for bringing the chiller to its standard steady-state performance condition is built. Experimental results obtained from testing the chiller under various load and temperature conditions are shown to be in good agreement with those obtained from simulating the performance of the chiller using the computer prediction model. An entropy-minimization-based optimization analysis is performed based on the developed analytical performance model of the chiller. The variation of design parameters in the construction of the shell-and-tube condenser and evaporator heat exchangers is studied using the developed performance and optimization analysis and simulation model, and a best-match condition between the physical design and construction of the chiller heat exchangers and its compressor is found to exist. It is expected that manufacturers of chillers and research organizations interested in developing energy-efficient designs and analysis of compression chillers can take advantage of the presented study and its results.
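The COP output of such a model can be illustrated with a minimal single-stage vapor-compression cycle calculation, sketched below assuming the CoolProp property library is available. The refrigerant, evaporating and condensing temperatures and isentropic efficiency are illustrative assumptions, not the data of the 7.5 USRT chiller studied here.

```python
# Minimal sketch: COP of an idealized single-stage vapor-compression cycle,
# assuming the CoolProp package is available. Temperatures, refrigerant and
# isentropic efficiency are illustrative, not the tested chiller's data.
from CoolProp.CoolProp import PropsSI

fluid = "R134a"
T_evap = 275.15     # evaporating temperature [K] (2 C, assumed)
T_cond = 318.15     # condensing temperature [K] (45 C, assumed)
eta_is = 0.70       # compressor isentropic efficiency (assumed)

# State 1: saturated vapor leaving the evaporator
h1 = PropsSI("H", "T", T_evap, "Q", 1, fluid)
s1 = PropsSI("S", "T", T_evap, "Q", 1, fluid)

# State 2: compressor discharge at condensing pressure
p_cond = PropsSI("P", "T", T_cond, "Q", 0, fluid)
h2s = PropsSI("H", "P", p_cond, "S", s1, fluid)   # isentropic discharge enthalpy
h2 = h1 + (h2s - h1) / eta_is                     # actual discharge enthalpy

# State 3: saturated liquid leaving the condenser; state 4: after expansion valve
h3 = PropsSI("H", "T", T_cond, "Q", 0, fluid)
h4 = h3                                           # isenthalpic expansion

cop = (h1 - h4) / (h2 - h1)                       # refrigeration effect / compressor work
print(f"cycle COP ~ {cop:.2f}")
```

The full chiller model described in the abstract wraps this cycle calculation in component-level heat transfer and pressure-drop models, which is what allows the heat exchanger geometry to be matched to the compressor.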

Keywords: optimization, packaged liquid chiller, performance, simulation

Procedia PDF Downloads 266