Search results for: storage costs
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4075

715 Deep Injection Wells for Flood Prevention and Groundwater Management

Authors: Mohammad R. Jafari, Francois G. Bernardeau

Abstract:

With its arid climate, Qatar experiences low annual rainfall, intense storms, and high evaporation rates. However, the fast pace of infrastructure development in the capital city of Doha has led to recurring surface water flooding as well as rising groundwater levels. The Public Works Authority (PWA/ASHGHAL) has implemented an approach to collect flood water and discharge it into a) positive gravity systems; b) Emergency Flooding Areas (EFA), for evaporation, infiltration, or off-site storage using tankers; and c) deep injection wells. As part of the flood prevention scheme, 21 deep injection wells have been constructed to discharge the collected surface water and groundwater in Doha city. These injection wells serve as an alternative in localities that have neither positive gravity systems nor downstream networks that can accommodate additional loads. The wells are 400 m deep and were constructed in complex karstic subsurface conditions with large cavities. The injection well system discharges collected groundwater and storm surface runoff into the permeable Umm Er Radhuma Formation, an aquifer present throughout the Persian Gulf region whose saline water is not used for water supply. The injection zone is separated from the shallow aquifer by an impervious gypsum formation that acts as a barrier between the upper and lower aquifers. State-of-the-art drilling, grouting, and geophysical techniques were implemented during construction to ensure that the shallow aquifer would not be contaminated or impacted by the injected water. Injection and pumping tests were performed to evaluate well functionality (injectability). The results indicated that the majority of the wells can accept an injection rate of 200 to 300 m³/h (56 to 83 l/s) under gravity, with an average value of 250 m³/h (70 l/s) against a design value of 50 l/s.
This paper presents the design and construction process and the issues associated with these injection wells; the injection/pumping tests performed to determine well capacity and effectiveness; the detailed design of the collection and conveyance systems feeding the injection wells; and the operation and maintenance process. The system is now complete and in operation, demonstrating that the construction of injection wells is an effective option for flood control.
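The reported injection rates can be cross-checked with a simple unit conversion (1 m³/h = 1000 l / 3600 s); a minimal sketch:

```python
def m3h_to_ls(rate_m3h):
    """Convert a flow rate from cubic metres per hour to litres per second."""
    return rate_m3h * 1000.0 / 3600.0

# Reported injection-test range and average for the Doha wells
low, high, avg = m3h_to_ls(200), m3h_to_ls(300), m3h_to_ls(250)
# low ≈ 55.6 l/s, high ≈ 83.3 l/s, avg ≈ 69.4 l/s
# (quoted in the abstract as 56, 83, and 70 l/s, versus a 50 l/s design value)
```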

Keywords: deep injection well, flood prevention scheme, geophysical tests, pumping and injection tests, wellhead assembly

Procedia PDF Downloads 112
714 RA-Apriori: An Efficient and Faster MapReduce-Based Algorithm for Frequent Itemset Mining on Apache Flink

Authors: Sanjay Rathee, Arti Kashyap

Abstract:

Extraction of useful information from large datasets is one of the most important research problems. Association rule mining is one of the best methods for this purpose, and finding possible associations between items in large transaction-based datasets (finding frequent patterns) is its most important part. Many algorithms exist to find frequent patterns, but the Apriori algorithm remains a preferred choice due to its ease of implementation and natural tendency to be parallelized. Many single-machine Apriori variants exist, but the massive amount of data available these days exceeds the capacity of a single machine, so meeting the demands of this ever-growing data requires an Apriori algorithm that runs across multiple machines. For such distributed applications, MapReduce is a popular fault-tolerant framework. Hadoop is one of the best open-source software frameworks using the MapReduce approach for distributed storage and distributed processing of huge datasets on clusters built from commodity hardware. However, the heavy disk I/O at each iteration of a highly iterative algorithm like Apriori makes Hadoop inefficient. A number of MapReduce-based platforms have been developed for parallel computing in recent years; among them, Spark and Flink have attracted a lot of attention because of their built-in support for distributed computation. We earlier proposed a Reduced-Apriori algorithm on the Spark platform which outperforms parallel Apriori, first because of the use of Spark and second because of the improvement we proposed to standard Apriori. This work is therefore a natural sequel and targets implementing, testing, and benchmarking Apriori, Reduced-Apriori, and our new algorithm, ReducedAll-Apriori, on Apache Flink, comparing them with the Spark implementation.
Flink, a streaming dataflow engine, overcomes the disk I/O bottlenecks of MapReduce, providing an ideal platform for distributed Apriori. Flink's pipelined structure allows the next iteration to start as soon as partial results of the previous iteration are available, so there is no need to wait for all reducer results before starting the next iteration. We conduct in-depth experiments to gain insight into the effectiveness, efficiency, and scalability of the Apriori and RA-Apriori algorithms on Flink.
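The classic single-machine Apriori that these distributed variants build on can be sketched as follows (an illustrative Python sketch of standard Apriori, not the authors' RA-Apriori or its Flink implementation):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Classic Apriori: iteratively generate candidate itemsets of size k
    from frequent (k-1)-itemsets and prune candidates by minimum support."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    # Level 1: frequent single items
    items = {i for t in transactions for i in t}
    frequent = [{frozenset([i]) for i in items
                 if support(frozenset([i])) >= min_support}]

    while frequent[-1]:
        prev = frequent[-1]
        # Candidate generation: join pairs of frequent (k-1)-itemsets
        candidates = {a | b for a in prev for b in prev
                      if len(a | b) == len(a) + 1}
        # Prune: every (k-1)-subset must be frequent, and support must hold
        frequent.append({c for c in candidates
                         if all(frozenset(s) in prev
                                for s in combinations(c, len(c) - 1))
                         and support(c) >= min_support})
    return [s for level in frequent for s in level]

patterns = apriori([{"a", "b"}, {"a", "c"}, {"a", "b", "c"}, {"b", "c"}], 0.5)
# all singletons and all pairs are frequent at 50% support; {a,b,c} is not
```

The repeated full passes over the transaction list are exactly the per-iteration I/O that makes disk-based MapReduce implementations slow, which is the bottleneck pipelined engines such as Flink address.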

Keywords: Apriori, Apache Flink, MapReduce, Spark, Hadoop, R-Apriori, frequent itemset mining

Procedia PDF Downloads 282
713 A Review of Digital Twins to Reduce Emission in the Construction Industry

Authors: Zichao Zhang, Yifan Zhao, Samuel Court

Abstract:

The carbon emission problem of the traditional construction industry has long been a pressing issue. With the growing emphasis on environmental protection and the advancement of science and technology, the organic integration of digital technology and emission reduction has gradually become a mainstream solution. Among various sophisticated digital technologies, digital twins, which involve creating virtual replicas of physical systems or objects, have gained enormous attention in recent years as tools to improve productivity, optimize management, and reduce carbon emissions. However, the relatively high implementation costs of digital twins, in finances, time, and manpower, have limited their widespread adoption, and most current applications are concentrated in a few industries. In addition, the creation of digital twins relies on a large amount of data and requires designers to possess exceptional skills in information collection, organization, and analysis, capabilities that are often lacking in the traditional construction industry. Furthermore, as a relatively new concept, digital twins are expressed and used differently across industries; this lack of standardized practice poses a challenge to creating a high-quality digital twin framework for construction. This paper first reviews the current academic studies and industrial practices focused on reducing greenhouse gas emissions in the construction industry using digital twins. It then identifies the challenges that may be encountered during the design and implementation of a digital twin framework specific to this industry and proposes potential directions for future research. The study shows that digital twins possess substantial potential for enhancing the working environment of the traditional construction industry, particularly in their ability to support decision-making processes.
It further shows that digital twins can improve the work efficiency and energy utilization of related machinery, helping the industry save energy and reduce emissions. This work will help scholars in this field better understand the relationship between digital twins and energy conservation and emission reduction, and it also serves as a conceptual reference for practitioners implementing related technologies.

Keywords: digital twins, emission reduction, construction industry, energy saving, life cycle, sustainability

Procedia PDF Downloads 81
712 Records of Lepidopteron Borers (Lepidoptera) on Stored Seeds of Indian Himalayan Conifers

Authors: Pawan Kumar, Pitamber Singh Negi

Abstract:

Many regeneration failures in conifers are attributed to heavy insect attack and pathogens during seed formation and under storage conditions. Conifer berry and seed insects occur throughout the known range of the hosts and limit the production of seed for nursery stock; on occasion, entire seed crops are lost to insect attack. The berries and seeds of both species have been found to be infested with insects. Recently, heavy damage to the berries and seeds of Juniper and Chilgoza pine was observed in the field as well as under storage conditions, reducing the viability of the seeds to germinate. Both species are under great threat, and their regeneration is very low. Given the lack of adequate literature, a study of the damage potential of seed insects was urgently required to establish the exact status of the insect pests attacking the seeds and berries of both pine species, so that pest management practices could be developed against them. Because both species are fighting for survival and form the major vegetation of their distribution zones, this study is important for developing management practices for the insect pests of Juniper and Chilgoza pine seeds and berries that can then be evaluated in the nursery. A six-year study on the management of insect pests of Chilgoza seeds revealed that the seeds of this species are prone to insect pests, mainly borers. During the present investigation, it was recorded that the cones are heavily attacked in natural conditions only by Dioryctria abietella (Lepidoptera: Pyralidae), but the seeds, which are economically important, are heavily infested (sometimes with up to 100% damage recorded) by the insect borer Plodia interpunctella (Lepidoptera: Pyralidae), which is recorded here, to the authors' best knowledge, for the first time infesting stored Chilgoza seeds.
Similarly, Juniper berries and seeds were heavily attacked by a single borer, Homaloxestis cholopis (Lepidoptera: Lecithoceridae), recorded as a new report both in the natural habitat and under storage conditions. During the present investigation, details of insect pest attack on Juniper and Chilgoza pine seeds and berries were recorded, and suitable management practices were developed to contain the insect pest attack.

Keywords: borer, chilgoza pine, cones, conifer, Lepidoptera, juniper, management, seed

Procedia PDF Downloads 138
711 Integrating Cyber-Physical System toward Advance Intelligent Industry: Features, Requirements and Challenges

Authors: V. Reyes, P. Ferreira

Abstract:

In response to high levels of competitiveness, industrial systems have evolved to improve productivity. As a consequence, rapid increases in production volume and, simultaneously, customization require lower costs, more variety, and accurate product quality. Reducing production cycle time, enabling customizability, and ensuring continuous quality improvement are key features of the advanced intelligent industry. In this scenario, customers and producers will be able to participate in the ongoing production life cycle through real-time interaction. To achieve this vision, transparency, predictability, and adaptability are the key features that give industrial systems the capability to adapt to customer demands, modifying the manufacturing process through an autonomous response and acting preventively to avoid errors. The industrial system incorporates a diversified set of components that, in the advanced industry, are expected to be decentralized, to communicate end to end, and to be capable of making their own decisions through feedback. The evolution towards the advanced intelligent industry defines a set of stages that empower components with intelligence and enhance efficiency until the decision-making stage is reached. The integrated system follows an industrial cyber-physical system (CPS) architecture whose real-time integration, based on a set of enabling technologies, links the physical and virtual worlds, generating the digital twin (DT). This instance allows sensor data to be incorporated from the real world into the virtual one and provides the transparency required for real-time monitoring and control, addressing important features of the advanced intelligent industry while also improving sustainability.
Taking the industrial CPS as the core technology of the latest advanced intelligent industry stage, this paper reviews and highlights the correlation and contributions of the enabling technologies for the operationalization of each stage on the path toward the advanced intelligent industry. From this research, a real-time integration architecture for a cyber-physical system with applications to collaborative robotics is proposed, and the functionalities and issues required to endow the industrial system with adaptability are identified.

Keywords: cyber-physical systems, digital twin, sensor data, system integration, virtual model

Procedia PDF Downloads 106
710 Water Dumpflood into Multiple Low-Pressure Gas Reservoirs

Authors: S. Lertsakulpasuk, S. Athichanagorn

Abstract:

As depletion-drive gas reservoirs are abandoned when the production rate becomes insufficient due to pressure depletion, waterflooding has been proposed to increase reservoir pressure and prolong gas production. Because of its high cost, however, water injection may not be economically feasible. Water dumpflood into gas reservoirs is a promising new approach that increases gas recovery by maintaining reservoir pressure at a much lower cost than conventional waterflooding. A simulation study of water dumpflood into multiple nearly abandoned or already abandoned thin-bedded gas reservoirs, commonly found in the Gulf of Thailand, was therefore conducted to demonstrate the advantages of the proposed method and to determine the most suitable operational parameters for reservoirs with different system parameters. A reservoir simulation model consisting of several thin-layered depletion-drive gas reservoirs and an overlying aquifer was constructed to investigate the performance of the proposed method. Two producers were initially used to produce gas from the reservoirs; one of them was later converted to a dumpflood well after the gas production rate started to decline due to the continuous reduction in reservoir pressure. The dumpflood well flows water from the aquifer into the gas reservoirs to increase their pressure and drive gas towards the producer. Two main operational parameters, the wellhead pressure of the producer and the time to start water dumpflood, were investigated to optimize gas recovery for systems with different gas reservoir dip angles, well spacings, aquifer sizes, and aquifer depths. The study found that water dumpflood can increase gas recovery by up to 12% of OGIP, depending on operational conditions and system parameters.
For systems with a large aquifer and a large distance between wells, it is best to start water dumpflood while the gas rate is still high, since the long distance between the gas producer and the dumpflood well helps delay water breakthrough at the producer. As long as there is no early water breakthrough, the earlier the energy is supplied to the gas reservoirs, the better the gas recovery. On the other hand, for systems with a small or moderate aquifer and a short distance between the two wells, performing water dumpflood when the rate is close to the economic rate is better, because water is more likely to break through early when the distance is short. Water dumpflood into multiple nearly depleted or depleted gas reservoirs is a novel study: the idea of using dumpflood to increase gas recovery has been mentioned in the literature but never investigated in detail. This detailed study will help practicing engineers understand the benefits of the method and implement it with minimum cost and risk.
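The benefit of pressure support can be illustrated with the standard p/z material balance for a depletion-drive gas reservoir, in which p/z declines linearly with cumulative production Gp; the numbers below are hypothetical and only sketch the baseline depletion case that dumpflood pressure support improves on:

```python
def recovery_factor(p_i, z_i, p_ab, z_ab):
    """Depletion-drive gas material balance: p/z = (p_i/z_i) * (1 - Gp/G),
    so the recovery factor at abandonment is Gp/G = 1 - (p_ab/z_ab)/(p_i/z_i)."""
    return 1.0 - (p_ab / z_ab) / (p_i / z_i)

# Hypothetical case: initial 250 bar (z = 0.92), abandonment 60 bar (z = 0.97)
rf = recovery_factor(250, 0.92, 60, 0.97)  # roughly 0.77 of OGIP
# Energy supplied by dumpflood water lets wells keep flowing at lower
# effective abandonment conditions, which pushes Gp/G higher; with water
# influx the p/z line itself flattens, so this is only the no-support baseline.
```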

Keywords: dumpflood, increase gas recovery, low-pressure gas reservoir, multiple gas reservoirs

Procedia PDF Downloads 435
709 Development of Market Penetration for High Energy Efficiency Technologies in Alberta’s Residential Sector

Authors: Saeidreza Radpour, Md. Alam Mondal, Amit Kumar

Abstract:

Market penetration of high-energy-efficiency technologies has a key impact on energy consumption and GHG mitigation, and understanding it helps public and private organizations manage policies aimed at energy or environmental targets. Energy intensity in Alberta's residential sector was 148.8 GJ per household in 2012, 39% more than the Canadian average of 106.6 GJ and the highest per-household energy consumption among the provinces. Appliance energy intensity in Alberta was 15.3 GJ per household in 2012, 14% higher than the average of the other provinces and territories in Canada. In this research, a framework has been developed to analyze the market penetration and market share of high-efficiency technologies in the residential sector. The methodology is based on data-intensive models that estimate the market penetration of appliances in the residential sector over time. The models are functions of a number of macroeconomic and technical parameters; the mathematical equations were developed from twenty-two years of historical data (1990-2011) and analyzed through a series of statistical tests. The market shares of high-efficiency appliances were estimated for the period 2015 to 2050 from related variables such as capital and operating costs, discount rate, appliance lifetime, annual interest rate, incentives, and maximum achievable efficiency. Results show that the market penetration of refrigerators is higher than that of other appliances: the stock of refrigerators per household is anticipated to increase from 1.28 in 2012 to 1.314 in 2030 and 1.328 in 2050. Modelling results show that the market penetration rate of stand-alone freezers will decrease between 2012 and 2050.
Freezer stock per household will decline from 0.634 in 2012 to 0.556 in 2030 and 0.515 in 2050. The stock of dishwashers per household is expected to increase from 0.761 in 2012 to 0.865 in 2030 and 0.960 in 2050. The market penetration rates of clothes washers and clothes dryers increase nearly in parallel: their stocks per household are expected to rise from 0.893 and 0.979 in 2012 to 0.960 and 1.0 in 2050, respectively. The presentation will include a detailed discussion of the modelling methodology and results.
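A common way to turn the variables listed above (capital and operating costs, discount rate, lifetime) into a market-share estimate is to annualize life-cycle cost with a capital recovery factor and split shares with a logit form. The sketch below is illustrative of that general approach, not the paper's actual equations, and the appliance numbers are hypothetical:

```python
import math

def crf(rate, life_years):
    """Capital recovery factor: annualizes an upfront cost over the
    appliance lifetime at the given discount rate."""
    return rate * (1 + rate) ** life_years / ((1 + rate) ** life_years - 1)

def market_shares(techs, rate, sensitivity=2.0):
    """Logit market-share split on annualized life-cycle cost.
    techs maps name -> (capital cost, annual operating cost, lifetime in years)."""
    annual = {name: cap * crf(rate, life) + opex
              for name, (cap, opex, life) in techs.items()}
    scale = min(annual.values())  # keeps the exponent scale-free
    weights = {n: math.exp(-sensitivity * c / scale) for n, c in annual.items()}
    total = sum(weights.values())
    return {n: w / total for n, w in weights.items()}

# Hypothetical refrigerator options: (capital $, annual operating $, lifetime yr)
techs = {"standard": (900, 95, 15), "energy_star": (1050, 60, 15)}
shares = market_shares(techs, rate=0.05)
# the cheaper-to-run option wins share despite its higher purchase price
```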

Keywords: appliances efficiency improvement, energy star, market penetration, residential sector

Procedia PDF Downloads 276
708 Delivery of Contraceptive and Maternal Health Commodities with Drones in the Most Remote Areas of Madagascar

Authors: Josiane Yaguibou, Ngoy Kishimba, Issiaka V. Coulibaly, Sabrina Pestilli, Falinirina Razanalison, Hantanirina Andremanisa

Abstract:

Background: Madagascar has one of the least developed road networks in the world, with the majority of its national and local roads being earth roads in poor condition. In addition, the country is affected by frequent natural disasters that further degrade road conditions and limit accessibility to some parts of the country. In 2021 and 2022, 2.21 million people were affected by drought in the Grand Sud region and by cyclones and floods in the coastal regions, with disruptions to the health system, including the last-mile distribution of lifesaving maternal health and reproductive health commodities to health facilities. Program intervention: The intervention uses drone technology to deliver maternal health and family planning commodities to hard-to-reach health facilities in the Grand Sud and Sud-Est of Madagascar, the regions most affected by natural disasters. Methodology: The intervention was developed in two phases. The first phase, conducted in the Grand Sud, used drones leased from a private company to deliver commodities to isolated health facilities. Based on the lessons learnt and the encouraging results of the first phase, in the second phase (2023) the intervention was extended to the Sud-Est regions, with the purchase of drones and the recruitment of pilots to reduce costs and ensure sustainability. Key findings: The drones ensure deliveries of lifesaving commodities in the Grand Sud of Madagascar: in 2023, 297 deliveries of commodities to forty hard-to-reach health facilities were carried out, and drone technology reduced delivery times from the usual 3-7 days by road or boat to only a few hours. Program implications: The use of innovative drone technology proved successful in the Madagascar context, dramatically reducing the distribution time of commodities to hard-to-reach health facilities and avoiding stockouts of lifesaving medicines.
When the intervention reaches full scale with the completion of the second phase and the extension to the Sud-Est, 150 hard-to-reach facilities will receive drone deliveries, avoiding stockouts and improving the quality of maternal health and family planning services offered to 1.4 million people in the targeted areas.

Keywords: commodities, drones, last-mile distribution, lifesaving supplies

Procedia PDF Downloads 52
707 Monolithic Integrated GaN Resonant Tunneling Diode Pair with Picosecond Switching Time for High-speed Multiple-valued Logic System

Authors: Fang Liu, JiaJia Yao, GuanLin Wu, ZuMaoLi, XueYan Yang, HePeng Zhang, ZhiPeng Sun, JunShuai Xue

Abstract:

The explosively increasing needs of data processing and information storage strongly drive the advancement from binary logic systems to multiple-valued logic systems. An inherent negative differential resistance characteristic, ultra-high-speed switching, and robust anti-irradiation capability make the III-nitride resonant tunneling diode one of the most promising candidates for multi-valued logic devices. Here we report the monolithic integration of GaN resonant tunneling diodes in series to realize multiple negative differential resistance regions, obtaining at least three stable operating states. A multiply-by-three circuit is achieved by this combination, increasing the frequency of an input triangular wave from f0 to 3f0. The resonant tunneling diodes are grown by plasma-assisted molecular beam epitaxy on free-standing c-plane GaN substrates and comprise double barriers and a single quantum well, both controlled at the atomic level. A device with a peak current density of 183 kA/cm² in conjunction with a peak-to-valley current ratio (PVCR) of 2.07 is observed, the best result reported for nitride-based resonant tunneling diodes. Microwave oscillation at room temperature was observed with a fundamental frequency of 0.31 GHz and an output power of 5.37 μW, verifying the high repeatability and robustness of our devices. Switching behavior measurements were successfully carried out, showing rise and fall times on the order of picoseconds, suitable for high-speed digital circuits; limited by the measuring equipment and the layer structure, the switching time can be improved further. In general, this article presents a novel nitride device with multiple negative differential resistance regions driven by the resonant tunneling mechanism, which can be used in the high-speed multiple-valued logic field with reduced circuit complexity, demonstrating a new way for nitride devices to break through the limitations of binary logic.

Keywords: GaN resonant tunneling diode, negative differential resistance, multiple-valued logic system, switching time, peak-to-valley current ratio

Procedia PDF Downloads 90
706 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Digitalisation in production technology is a driver for the application of machine learning methods. Through predictive quality, the great potential for saving necessary quality control can be exploited via the data-based prediction of product quality and states. However, the serial use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets and are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance; competitive leaders claim to have mastered their processes, so much of the real data has relatively low variance. Training prediction models, however, requires the highest possible generalisability, which this data situation makes more difficult. The implementation of a machine learning application can itself be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of data science projects; as in any process, the cost of eliminating errors increases significantly with each advancing phase. For the quality prediction of hydraulic test steps of directional control valves, the question therefore arises in the initial phase whether regression or classification is more suitable. In this work, the initial phase of CRISP-DM, business understanding, is critically compared for the use case at Bosch Rexroth with regard to regression and classification.
The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and for classification of the inspection decision are applied. Notably, classification is clearly superior to regression and achieves promising accuracies.
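The regression-versus-classification framing can be made concrete on synthetic data: the same pass/fail question can be answered by predicting the continuous leakage flow and thresholding it at the limit, or by learning the pass/fail boundary directly from the labels. This is a stdlib-only sketch with invented data, not the Bosch Rexroth data or the models used in the study:

```python
import random

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b in one dimension."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def best_stump(xs, labels):
    """1-D decision stump: threshold t maximizing accuracy of (x <= t) -> pass."""
    best_t, best_acc = xs[0], 0.0
    for t in sorted(xs):
        acc = sum((x <= t) == l for x, l in zip(xs, labels)) / len(xs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def accuracy(pred, truth):
    return sum(p == q for p, q in zip(pred, truth)) / len(truth)

random.seed(1)
LIMIT = 5.0                                         # hypothetical pass/fail leakage limit
xs = [random.uniform(0, 10) for _ in range(200)]    # a process feature
ys = [0.8 * x + random.gauss(0, 1.0) for x in xs]   # noisy leakage volume flow
labels = [y <= LIMIT for y in ys]                   # True = part passes

n_train = 150
a, b = fit_line(xs[:n_train], ys[:n_train])
t = best_stump(xs[:n_train], labels[:n_train])

# Regression route: predict the continuous flow, then apply the limit
reg_acc = accuracy([(a * x + b) <= LIMIT for x in xs[n_train:]], labels[n_train:])
# Classification route: learn the inspection decision boundary directly
clf_acc = accuracy([x <= t for x in xs[n_train:]], labels[n_train:])
```

Which route wins depends on the data; settling that question for the real use case is exactly the business-understanding decision the abstract addresses.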

Keywords: classification, CRISP-DM, machine learning, predictive quality, regression

Procedia PDF Downloads 134
705 Development of Generally Applicable Intravenous to Oral Antibiotic Switch Therapy Criteria

Authors: H. Akhloufi, M. Hulscher, J. M. Prins, I. H. Van Der Sijs, D. Melles, A. Verbon

Abstract:

Background: A timely switch from intravenous (IV) to oral antibiotic therapy has many advantages, such as a reduced incidence of IV-line-related infections, a decreased hospital length of stay, and less workload for healthcare professionals, with equivalent patient safety. Numerous studies have also demonstrated significant cost decreases from a timely IV-to-oral switch, while maintaining efficacy and safety. However, considerable variation in IV-to-oral switch criteria has been described in the literature. Here, we report the development of a set of IV-to-oral switch criteria that are generally applicable in all hospitals. Material/methods: A RAND-modified Delphi procedure composed of three rounds was used. The Delphi procedure is a widely used structured process for developing consensus through multiple rounds of questionnaires within a qualified panel of selected experts. The international expert panel was multidisciplinary, composed of clinical microbiologists, infectious disease consultants, and clinical pharmacists. This panel of 19 experts appraised 6 major IV-to-oral switch criteria and operationalized them using 41 measurable conditions extracted from the literature. The procedure to select a concise set of switch criteria included two questionnaire rounds and a face-to-face meeting. Results: The procedure resulted in the selection of 16 measurable conditions, which operationalize 6 major IV-to-oral switch criteria. The following 6 major switch criteria were selected: (1) Vital signs should be good, or improving if abnormal. (2) Signs and symptoms related to the infection have resolved or improved. (3) The gastrointestinal tract is intact and functioning. (4) The oral route is not compromised. (5) There is no contra-indicated infection.
(6) An oral variant of the antibiotic with good bioavailability exists. Conclusions: This systematic stepwise method, combining evidence and expert opinion, resulted in a feasible set of 6 major IV-to-oral switch criteria operationalized by 16 measurable conditions. This set of early IV-to-oral switch criteria can be used in daily practice for all adult hospital patients. Future use in audits and as rules in computer-assisted decision support systems will improve antimicrobial stewardship programs.
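As a sketch of how the six major criteria could be encoded as rules in a computer-assisted decision support system: the field names below are illustrative inventions, and this does not operationalize the study's 16 measurable conditions:

```python
def iv_to_oral_switch_ok(patient):
    """Return True only if all six major IV-to-oral switch criteria are met."""
    return all([
        patient["vitals_good_or_improving"],       # 1. vital signs
        patient["infection_resolving"],            # 2. signs/symptoms of infection
        patient["gi_tract_functioning"],           # 3. intact, functioning GI tract
        patient["oral_route_uncompromised"],       # 4. oral route not compromised
        not patient["contraindicated_infection"],  # 5. no contra-indicated infection
        patient["oral_variant_available"],         # 6. bioavailable oral variant exists
    ])

candidate = {
    "vitals_good_or_improving": True,
    "infection_resolving": True,
    "gi_tract_functioning": True,
    "oral_route_uncompromised": True,
    "contraindicated_infection": False,
    "oral_variant_available": True,
}
# candidate meets all six criteria, so a switch may be considered
```

In practice each boolean would be derived from the measurable conditions (e.g., temperature and hemodynamic thresholds for criterion 1), which is precisely what the 16 selected conditions provide.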

Keywords: antibiotic resistance, antibiotic stewardship, intravenous to oral, switch therapy

Procedia PDF Downloads 351
704 A Virtual Set-Up to Evaluate Augmented Reality Effect on Simulated Driving

Authors: Alicia Yanadira Nava Fuentes, Ilse Cervantes Camacho, Amadeo José Argüelles Cruz, Ana María Balboa Verduzco

Abstract:

Augmented reality promises to be part of future driving: its immersive technology can display directions and maps, using graphic elements to identify important places when the driver requires the information. On the other hand, driving is a multitasking activity and, for some people, a complex one in which situations commonly occur that demand the driver's immediate attention and decisions to avoid accidents. The main aim of the project is therefore the instrumentation of a platform with biometric sensors that allows evaluating driving performance under the influence of augmented reality devices and detecting the driver's level of attention, since it is important to know the effect these devices produce. In this study, the physiological sensors EPOC X (EEG), ECG06 PRO, and EMG Myoware are combined in a driving test platform with a Logitech G29 steering wheel and the simulation software City Car Driving, in which the level of traffic and the number of pedestrians in the simulation can be controlled, giving driver interaction in real mode; data acquisition for storage is achieved through an MSP430 microcontroller. The sensors produce continuous analog signals that require conditioning: a signal amplifier is incorporated because the acquired signals have a sensitive range of 1.25 mm/mV, and filtering eliminates frequency bands so that the signal is interpretable and noise-free before being converted from analog to digital for analysis of the drivers' physiological signals; these values are stored in a database.
Based on this compilation, we extract signal features and implement K-NN (k-nearest neighbor) and decision tree classification methods (supervised learning) that enable the study of the data, the identification of patterns, and the determination, by classification, of the different effects of augmented reality on drivers. The expected results of this project are a test platform instrumented with biometric sensors for data acquisition during driving and a database with the variables required to determine the effect caused by augmented reality on people in simulated driving.
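The K-NN step can be sketched in a few lines: given feature vectors extracted from the physiological signal windows, a new window is labeled by a majority vote among its k nearest neighbors. The feature names and training points below are invented for illustration, not drawn from the study's database:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """k-nearest-neighbor vote using Euclidean distance on feature vectors."""
    dists = sorted((math.dist(x, p), label) for p, label in zip(train_X, train_y))
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-D features per signal window: (mean amplitude, dominant frequency)
train_X = [(0.20, 8.0), (0.30, 9.0), (0.25, 8.5),   # labeled "attentive"
           (0.80, 4.0), (0.90, 4.5), (0.85, 3.5)]   # labeled "distracted"
train_y = ["attentive"] * 3 + ["distracted"] * 3

state = knn_predict(train_X, train_y, (0.28, 8.2), k=3)  # classifies as "attentive"
```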

Keywords: augmented reality, driving, physiological signals, test platform

Procedia PDF Downloads 131
703 Towards Modern Approaches of Intelligence Measurement for Clinical and Educational Practices

Authors: Alena Kulikova, Tatjana Kanonire

Abstract:

Intelligence research is one of the oldest fields of psychology. Many factors have made research on intelligence, defined as reasoning and problem solving [1, 2], an acute and urgent problem. It has been repeatedly shown that intelligence is a predictor of academic, professional, and social achievement in adulthood (for example, [3]); moreover, intelligence predicts these achievements better than any other trait or ability [4]. At the individual level, a comprehensive assessment of intelligence is a necessary criterion for the diagnosis of various mental conditions. For example, it is a necessary condition for psychological, medical and pedagogical commissions when deciding on educational needs and the most appropriate educational programs for school children. Assessment of intelligence is crucial in clinical psychodiagnostics and needs high-quality intelligence measurement tools. Therefore, it is not surprising that the development of intelligence tests is an essential part of psychological science and practice. Many modern intelligence tests have a long history and have been used for decades, for example, the Stanford-Binet test or the Wechsler test. However, the vast majority of these tests are based on the classic linear test structure, in which all respondents receive all tasks (see, for example, a critical review by [5]). This understanding of the testing procedure is a legacy of the pre-computer era, in which paper-and-pencil testing was the only diagnostic procedure available [6]; it has significant limitations that affect the reliability of the data obtained [7] and increases time costs. Another problem with measuring IQ is that classical linearly structured tests do not fully allow measurement of a respondent's intellectual progress [8], which is undoubtedly a critical limitation. Advances in modern psychometrics make it possible to avoid the limitations of existing tools.
However, as in any rapidly developing field, psychometrics does not yet offer ready-made and straightforward solutions and requires additional research. In our presentation we discuss the strengths and weaknesses of the current approaches to intelligence measurement and highlight “points of growth” for creating a test in accordance with modern psychometrics. Is it possible to create an instrument that uses all the achievements of modern psychometrics while remaining valid and practically oriented? What would be the possible limitations of such an instrument? The theoretical framework and study design for creating and validating an original Russian comprehensive computer test measuring intellectual development in school-age children will be presented.

Keywords: intelligence, psychometrics, psychological measurement, computerized adaptive testing, multistage testing

Procedia PDF Downloads 70
702 Understanding the Effect of Material and Deformation Conditions on the “Wear Mode Diagram”: A Numerical Study

Authors: A. Mostaani, M. P. Pereira, B. F. Rolfe

Abstract:

The increasing application of Advanced High Strength Steel (AHSS) in the automotive industry to fulfill crash requirements has introduced higher levels of wear in stamping dies and parts. Therefore, understanding wear behaviour in sheet metal forming is of great importance, as it can help to reduce the high costs currently associated with tool wear. At the contact between the die and the sheet, the tips of hard tool asperities interact with the softer sheet material. Understanding the deformation that occurs during this interaction is important for our overall understanding of the wear mechanisms. For these reasons, the scratching of a perfectly plastic material by a rigid indenter has been widely examined in the literature, with finite element modelling (FEM) used in recent years to further understand the behaviour. The ‘wear mode diagram’ has been commonly used to classify the deformation regime of the soft work-piece during scratching into three modes: ploughing, wedge formation, and cutting. This diagram, which is based on 2D slip-line theory and the upper bound method for a perfectly plastic work-piece and a rigid indenter, relates the wear modes to attack angle and interfacial strength. It has been the basis for many wear studies and wear models to date. Additionally, it has been concluded that galling is most likely to occur during the wedge formation mode. However, there has been little analysis in the literature of how the material behaviour and deformation conditions associated with metal forming processes influence the wear behaviour. Therefore, the first aim of this work is to use a commercial FEM package (Abaqus/Explicit) to build a 3D model that captures wear modes during scratching with indenters of different attack angles and different interfacial strengths.
The second goal is to utilise the developed model to understand how wear modes might change in the presence of bulk deformation of the work-piece material as a result of the metal forming operation. Finally, the effect of the work-piece material properties, including strain hardening, will be examined to understand how these influence the wear modes and wear behaviour. The results show that both strain hardening and substrate deformation can change the critical attack angle at which the wedge formation regime is activated.

Keywords: finite element, pile-up, scratch test, wear mode

Procedia PDF Downloads 320
701 Developing a Decision-Making Tool for Prioritizing Green Building Initiatives

Authors: Tayyab Ahmad, Gerard Healey

Abstract:

Sustainability in the built environment sector is subject to many development constraints. Building projects are developed under different deliverable requirements, which makes each project unique. For an owner organization with a significant building stock, such as a higher-education institution, it is important to prioritize some sustainability initiatives over others in order to align sustainable building development with organizational goals. Point-based green building rating tools such as Green Star, LEED, and BREEAM are becoming increasingly popular and are well acknowledged worldwide for verifying sustainable development. It is imperative to synthesize a multi-criteria decision-making tool that can capitalize on the point-based methodology of rating systems while customizing the sustainable development of building projects according to the individual requirements and constraints of the client organization. A multi-criteria decision-making tool for the University of Melbourne is developed that builds on the action learning and experience of implementing green buildings at the University of Melbourne. The tool evaluates different sustainable building initiatives based on the framework of the Green Star rating tool of the Green Building Council of Australia. For each sustainability initiative, the decision-making tool makes an assessment based on at least five performance criteria, including the ease with which the initiative can be achieved and its potential to enhance project objectives, reduce life-cycle costs, enhance the University’s reputation, and increase confidence in construction quality. The use of a weighted aggregation mathematical model in the proposed tool can play a considerable role in the decision-making process of a green building project by indexing the green building initiatives in terms of organizational priorities.
The index value of each initiative is based on its alignment with the key performance criteria. The usefulness of the decision-making tool is validated by conducting structured interviews with key stakeholders involved in the development of sustainable building projects at the University of Melbourne. The proposed tool helps a client organization decide which sustainability initiatives and practices are more important to pursue than others within limited resources.
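The weighted aggregation model described above reduces each initiative to a weighted sum over the five performance criteria. A minimal sketch follows; the weights, initiative names, and 1-5 scores are invented for illustration and are not the actual values of the University of Melbourne tool:

```python
# Hypothetical weights over the five performance criteria named in the
# abstract (weights sum to 1.0).
criteria_weights = {
    "ease_of_achievement": 0.25,
    "project_objectives": 0.25,
    "life_cycle_cost_reduction": 0.20,
    "reputation": 0.15,
    "construction_quality_confidence": 0.15,
}

# Hypothetical initiatives, scored 1-5 against the criteria above (in order).
initiatives = {
    "rainwater_harvesting": [4, 3, 3, 4, 3],
    "high_efficiency_hvac": [3, 5, 5, 3, 4],
    "green_roof": [2, 3, 2, 5, 3],
}

def index(scores, weights=criteria_weights):
    """Weighted aggregation index of one initiative."""
    return sum(w * s for w, s in zip(weights.values(), scores))

# Rank initiatives by index value, i.e. by alignment with organizational priorities.
ranked = sorted(initiatives, key=lambda k: index(initiatives[k]), reverse=True)
for name in ranked:
    print(f"{name}: {index(initiatives[name]):.2f}")
```

Changing the weight vector re-ranks the same initiatives, which is exactly how such a tool encodes different organizational priorities.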

Keywords: higher education institution, multi-criteria decision-making tool, organizational values, prioritizing sustainability initiatives, weighted aggregation model

Procedia PDF Downloads 222
700 Overview of Environmental and Economic Theories of the Impact of Dams in Different Regions

Authors: Ariadne Katsouras, Andrea Chareunsy

Abstract:

The number of large hydroelectric dams in the world increased from almost 6,000 in the 1950s to over 45,000 in 2000. Dams are often built to increase the economic development of a country. This can occur in several ways. Large dams take many years to build, so the construction process employs many people for a long time, and the increased production and income can flow on into other sectors of the economy. Additionally, the provision of electricity can help raise people’s living standards, and if the electricity is sold to another country, the money can be used to provide other public goods for the residents of the country that owns the dam. Dams are also built to control flooding and provide irrigation water; most dams are of these types. This paper gives an overview of the environmental and economic theories of the impact of dams in different regions of the world. The degree of environmental and economic impact differs because of the varying climates and the varying social and political factors of the regions. Production of greenhouse gases from a dam’s reservoir, for instance, tends to be higher in tropical areas than in Nordic environments. However, there are also common impacts due to the construction of the dam itself, such as flooding of land for the creation of the reservoir and displacement of local populations. Economically, the local population tends to benefit least from the construction of the dam. Additionally, if a foreign company owns the dam, or the government subsidises the cost of electricity to businesses, then the funds from electricity production do not benefit the residents of the country the dam is built in. In the end, a dam can benefit a country economically, but the varying factors related to its construction, and how these are dealt with, determine the level of benefit, if any.
Some of the theories and practices used to evaluate the potential value of a dam include cost-benefit analysis, environmental impact assessments, and regressions; systems analysis is also a useful method. While these approaches have value, they also have possible shortcomings. Cost-benefit analysis converts all the costs and benefits to dollar values, which can be problematic. Environmental impact assessments, likewise, can be incomplete, especially if the assessment does not include feedback effects, that is, if it only considers the initial impact. Finally, regression analysis depends on the available data and again would not necessarily include feedbacks. Systems analysis is a method that allows more complex modelling of the environment and the economic system; it can produce a clearer picture of the impacts and can cover a long time frame.
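The cost-benefit analysis mentioned above is usually operationalized as a net present value (NPV) of discounted yearly costs and benefits. A minimal sketch follows; every figure (construction cost, revenue, project lifetime, discount rates) is invented for illustration:

```python
# Illustrative cost-benefit analysis of a hypothetical dam: the net
# present value (NPV) discounts each year's net cash flow back to today.
def npv(cash_flows, rate):
    """Discount a list of yearly net cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

construction = [-500.0] * 5            # 5 years of building costs (M$, invented)
operation = [80.0 - 10.0] * 45         # 45 years: revenue minus upkeep (invented)

flows = construction + operation
print(f"NPV at 5%:  {npv(flows, 0.05):.1f} M$")
print(f"NPV at 10%: {npv(flows, 0.10):.1f} M$")
```

Because a dam's costs arrive early and its benefits late, the chosen discount rate alone can flip the verdict, which is one reason the abstract calls dollar-conversion problematic.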

Keywords: comparison, economics, environment, hydroelectric dams

Procedia PDF Downloads 185
699 Detailed Analysis of Multi-Mode Optical Fiber Infrastructures for Data Centers

Authors: Matej Komanec, Jan Bohata, Stanislav Zvanovec, Tomas Nemecek, Jan Broucek, Josef Beran

Abstract:

With the exponential growth of social networks, video streaming and increasing demands on data rates, the number of newly built data centers rises proportionately. The data centers, however, have to adjust to the rapidly increased amount of data that has to be processed. For this purpose, multi-mode (MM) fiber based infrastructures are often employed. This stems from the fact that connections in data centers are typically realized over short distances, and the application of MM fibers and components considerably reduces costs. On the other hand, the usage of MM components brings specific requirements for installation and service conditions. Moreover, it has to be taken into account that MM fiber components have higher production tolerances for parameters like core and cladding diameters, eccentricity, etc. Due to the high demands on the reliability of data center components, the determination of a properly excited optical field inside the MM fiber core is one of the key parameters when designing such an MM optical system architecture. An appropriately excited mode field of the MM fiber provides an optimal power budget in connections, decreases insertion losses (IL), and achieves the effective modal bandwidth (EMB). The main parameter, in this case, is the encircled flux (EF), which should be properly defined for variable optical sources and the consequent different mode-field distributions. In this paper, we present a detailed investigation and measurements of the mode-field distribution for short MM links intended in particular for data centers, with an emphasis on reliability and safety. These measurements are essential for large MM network design. Various scenarios, containing different fibers and connectors, were tested in terms of IL and mode-field distribution to reveal potential challenges.
Furthermore, we focused on particular defects and errors that can realistically occur, such as eccentricity, connector shifting, or dust; these were simulated and measured, and their influence on the EF statistics and on the functionality of the data center infrastructure was evaluated. The experimental tests were performed at the two wavelengths commonly used in MM networks, 850 nm and 1310 nm, to verify the EF statistics. Finally, we provide recommendations for data center systems and networks using OM3 and OM4 MM fiber connections.
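The encircled flux discussed above is the fraction of the total launched power contained within a given radius of the fiber core center, computed from a radial near-field intensity profile. A minimal sketch follows, using a Gaussian profile as a stand-in for a measured one (the 4.5 µm and 19 µm evaluation radii are the reference radii of the IEC 61280-4-1 EF template for 50 µm fiber; the profile itself is invented):

```python
import numpy as np

# Hypothetical radial near-field intensity profile I(rho).
rho = np.linspace(0.0, 25.0, 500)            # radius, micrometres
intensity = np.exp(-(rho / 10.0) ** 2)       # Gaussian stand-in for a measurement

# Power within radius r is the cumulative integral of I(rho) * 2*pi*rho
# (trapezoidal rule); EF(r) normalizes it by the total power.
integrand = intensity * 2.0 * np.pi * rho
cumulative = np.concatenate(([0.0], np.cumsum(
    0.5 * (integrand[1:] + integrand[:-1]) * np.diff(rho))))
ef = cumulative / cumulative[-1]

for r in (4.5, 19.0):
    print(f"EF({r} um) = {np.interp(r, rho, ef):.3f}")
```

EF is by construction monotonic from 0 to 1, so a launch condition is characterized by where the curve sits at the template radii.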

Keywords: optical fiber, multi-mode, data centers, encircled flux

Procedia PDF Downloads 369
698 Using Optimal Cultivation Strategies for Enhanced Biomass and Lipid Production of an Indigenous Thraustochytrium sp. BM2

Authors: Hsin-Yueh Chang, Pin-Chen Liao, Jo-Shu Chang, Chun-Yen Chen

Abstract:

Biofuel has drawn much attention as a potential substitute for fossil fuels. However, biodiesel from waste oil, oil crops, or other oil sources can satisfy only part of the existing demand for transportation. Because microalgae are clean, green, and viable for mass production, using them as a feedstock for biodiesel is regarded as a possible solution for a low-carbon and sustainable society. In particular, Thraustochytrium sp. BM2, an indigenous heterotrophic microalga, can metabolize glycerol to produce lipids. Hence, it is considered a promising microalgae-based oil source for biodiesel production and other applications. This study aimed to optimize the culture pH, scale up the process, assess the feasibility of producing microalgal lipid from crude glycerol, and apply operation strategies following the optimal results from the shake-flask system in a 5 L stirred-tank fermenter to further enhance lipid productivity. Cultivation of Thraustochytrium sp. BM2 without pH control resulted in the highest lipid production of 3944 mg/L and biomass production of 4.85 g/L. Next, when the initial glycerol and corn steep liquor (CSL) amounts were increased five-fold (to 50 g and 62.5 g, respectively), the overall lipid productivity reached 124 mg/L/h. However, when crude glycerol was used as the sole carbon source, its direct addition inhibited culture growth. Therefore, acid and metal salt pretreatment methods were used to purify the crude glycerol. Crude glycerol pretreated with acid and CaCl₂ gave the greatest overall lipid productivity, 131 mg/L/h, when used as a carbon source and proved to be a good substitute for pure glycerol in the Thraustochytrium sp. BM2 cultivation medium. Engineering operation strategies such as fed-batch and semi-batch operation were applied in the cultivation of Thraustochytrium sp. BM2 to improve lipid production.
Under the fed-batch operation strategy, 132.60 g of biomass and 69.15 g of lipid were harvested. The lipid yield of 0.20 g/g glycerol was the same as in batch cultivation, although the overall lipid productivity was poorer at 107 mg/L/h. Under the semi-batch operation strategy, the overall lipid productivity reached 158 mg/L/h owing to the shorter cultivation time; the harvested biomass and lipid reached 232.62 g and 126.61 g, respectively, and the lipid yield improved from 0.20 to 0.24 g/g glycerol. In addition, the product costs of the three operation strategies were calculated. The lowest product cost, 12.42 NTD/g lipid, was obtained with the semi-batch operation strategy, a 33% reduction compared with the batch operation strategy.
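The two figures of merit used throughout this abstract relate as follows: overall lipid productivity is lipid titer divided by cultivation time, and lipid yield is lipid produced per gram of glycerol consumed. The sketch below works through that arithmetic; the 32 h cultivation time and the glycerol consumption are hypothetical values chosen only to show the relationships, as the abstract does not report them:

```python
def overall_productivity(lipid_mg_per_l, hours):
    """Overall lipid productivity in mg lipid per litre per hour."""
    return lipid_mg_per_l / hours

def lipid_yield(lipid_g, glycerol_g):
    """Lipid yield in g lipid per g glycerol consumed."""
    return lipid_g / glycerol_g

# Batch flask titer from the abstract (3944 mg/L); a hypothetical 32 h
# run illustrates how a ~123 mg/L/h productivity figure arises.
print(f"{overall_productivity(3944, 32):.0f} mg/L/h")

# Semi-batch lipid mass from the abstract (126.61 g); back-calculating a
# hypothetical glycerol consumption that reproduces the 0.24 g/g yield.
print(f"{lipid_yield(126.61, 126.61 / 0.24):.2f} g/g")
```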

Keywords: heterotrophic microalga Thraustochytrium sp. BM2, microalgal lipid, crude glycerol, fermentation strategy, biodiesel

Procedia PDF Downloads 142
697 Effect of Cutting Tools and Working Conditions on the Machinability of Ti-6Al-4V Using Vegetable Oil-Based Cutting Fluids

Authors: S. Gariani, I. Shyha

Abstract:

Cutting titanium alloys is usually accompanied by low productivity, poor surface quality, short tool life, and high machining costs. This is due to the excessive generation of heat at the cutting zone and difficulties in heat dissipation owing to the relatively low thermal conductivity of this metal. Cooling applications in machining processes are crucial, as many operations cannot be performed efficiently without cooling. Improving machinability, increasing productivity, and enhancing surface integrity and part accuracy are the main advantages of cutting fluids. Conventional fluids such as mineral oil-based, synthetic, and semi-synthetic fluids are the most common cutting fluids in the machining industry. Although these cutting fluids are beneficial to industry, they pose a great threat to human health and the ecosystem. Vegetable oils (VOs) are being investigated as a potential source of environmentally favourable lubricants, due to a combination of biodegradability, good lubricating properties, low toxicity, high flash points, low volatility, high viscosity indices, and thermal stability. The fatty acids of vegetable oils are known to provide thick, strong, and durable lubricant films. These strong lubricating films give the vegetable oil base stock a greater capability to absorb pressure and a high load-carrying capacity. This paper details preliminary experimental results when turning Ti-6Al-4V. The impact of various VO-based cutting fluids, cutting tool materials, and working conditions was investigated. A full factorial experimental design involving 24 tests was employed to evaluate the influence of the process variables on average surface roughness (Ra), tool wear, and chip formation. In general, Ra varied between 0.5 and 1.56 µm; the Vasco1000 cutting fluid presented performance comparable with the other fluids in terms of surface roughness, while the uncoated coarse-grained WC carbide tool achieved lower flank wear at all cutting speeds.
On the other hand, all tool tips were subjected to uniform flank wear during the whole cutting trials. Additionally, the formed chip thickness ranged between 0.1 and 0.14 mm, with a noticeable decrease in chip size when a higher cutting speed was used.
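A full factorial design such as the 24-test plan above runs every combination of factor levels. One plausible layout is sketched below; the factor names and levels are hypothetical (the abstract does not list them), chosen only so that the level counts multiply to 24:

```python
from itertools import product

# Hypothetical factors and levels: 4 cutting fluids x 2 tool materials
# x 3 cutting speeds = 24 runs, matching the abstract's test count.
fluids = ["Vasco1000", "VO-blend-A", "VO-blend-B", "mineral-oil"]
tools = ["uncoated WC", "coated WC"]
speeds_m_min = [60, 90, 120]

# product() enumerates every combination exactly once.
runs = list(product(fluids, tools, speeds_m_min))
print(len(runs))
for fluid, tool, v in runs[:3]:
    print(fluid, tool, v)
```

The benefit of the full factorial layout is that main effects and interactions (e.g., fluid x speed) can all be estimated from the same 24 runs.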

Keywords: cutting fluids, turning, Ti-6Al-4V, vegetable oils, working conditions

Procedia PDF Downloads 263
696 Control of the Sustainability of Decorative Topping for Bakery in Order to Extend the Shelf-Life of the Product

Authors: Radovan Čobanović, Milica Rankov Šicar

Abstract:

In modern bakeries, various toppings are used to attract more customers. The analyzed decorative topping consists of flax seeds, corn grits, oatmeal, wheat flakes, sesame seeds, sunflower seeds, and soybean sprouts, and is used as decoration for bread. Our goal was to extend the product's shelf life based on the analysis. According to the sustainability plan, a sample whose declared shelf life had already expired was stored for 5 months at 25°C and analyzed every month from the day of reception until spoilage occurred. Samples were subjected to sensory analysis (appearance, odor, taste, color, and consistency), microbiological analysis (Salmonella spp., Bacillus cereus, Enterobacteriaceae, and moulds) and chemical analysis (free fatty acids (as oleic), peroxide value, water content, and degree of acidity). All analyses were performed according to standard methods: sensory analysis ISO 6658, Salmonella spp. ISO 6579, Bacillus cereus ISO 7932, Enterobacteriaceae ISO 21528-2, moulds ISO 21527-1, free fatty acids (as oleic) ISO 660, peroxide value ISO 3960, and water content and degree of acidity according to the Serbian ordinance on methods of chemical analysis. After five months of storage, the first changes in the sensory properties of the product appeared: worms and web-like formations linking the seeds and cereals were visible in the sample, and the sample smelled rancid and pungent. The results of the microbiological analysis showed that Salmonella spp. was not detected and Enterobacteriaceae were < 10 cfu/g during all 5 months, but in the fifth month Bacillus cereus and moulds reached 700 cfu/g and 1500 cfu/g, respectively. Chemical analyses showed that the water content did not exceed a maximum of 14%. The content of free fatty acids ranged from 3.06 to 3.26%, and the degree of acidity from 3.69 to 4.9.
As the degree of acidity increased, the degradation of the sample and the activity of microorganisms increased, producing an acid reaction accompanied by an unpleasant odor and taste. Based on the obtained results, it can be concluded that this product's shelf life can be extended by four months beyond the currently defined shelf life, since during that period no changes occur that could influence customers' purchase decisions.

Keywords: bakery products, extension of shelf life, sensory and chemical and microbiological analyses, sustainability

Procedia PDF Downloads 377
695 Genomic Prediction Reliability Using Haplotypes Defined by Different Methods

Authors: Sohyoung Won, Heebal Kim, Dajeong Lim

Abstract:

Genomic prediction is an effective way to measure the breeding ability of livestock based on genomic estimated breeding values, which are statistically predicted from genotype data using best linear unbiased prediction (BLUP). Using haplotypes, clusters of linked single nucleotide polymorphisms (SNPs), as markers instead of individual SNPs can improve the reliability of genomic prediction, since the probability that a quantitative trait locus is in strong linkage disequilibrium (LD) with the markers is higher. To use haplotypes efficiently in genomic prediction, optimal ways of defining haplotypes must be found. In this study, 770K SNP chip data was collected from a Hanwoo (Korean cattle) population consisting of 2506 cattle. Haplotypes were first defined in three different ways using the 770K SNP chip data: based on 1) the length of the haplotype (bp), 2) the number of SNPs, and 3) k-medoids clustering by LD. To compare the methods in parallel, the haplotypes defined by all methods were set to comparable sizes; in each method, haplotypes defined to contain an average of 5, 10, 20, or 50 SNPs were tested, respectively. A modified GBLUP method using haplotype alleles as predictor variables was implemented to test the prediction reliability of each haplotype set. The conventional genomic BLUP (GBLUP) method, which uses individual SNPs, was also tested to evaluate the performance of the haplotype sets in genomic prediction. Carcass weight was used as the test phenotype. As a result, haplotypes defined by all three methods showed increased reliability compared to conventional GBLUP, with few differences in reliability between the haplotype-defining methods. The reliability of genomic prediction was highest when the average number of SNPs per haplotype was 20 in all three methods, implying that haplotypes including around 20 SNPs can be optimal markers for genomic prediction.
When the number of alleles generated by each haplotype-defining method was compared, clustering by LD generated the fewest alleles. Using haplotype alleles for genomic prediction showed better performance, suggesting improved accuracy in genomic selection. The number of predictor variables decreased when the LD-based method was used, while all three haplotype-defining methods showed similar performance. This suggests that defining haplotypes based on LD can reduce computational costs and allow efficient prediction. Finding optimal ways to define haplotypes and using the haplotype alleles as markers can provide improved performance and efficiency in genomic prediction.
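The "haplotype alleles as predictor variables" idea above amounts to building a design matrix that counts allele copies per block and shrinking the allele effects. The sketch below uses plain ridge regression as a stand-in for the modified GBLUP, with fully simulated data (block counts, allele counts, effect sizes, and the shrinkage parameter are all invented, not the Hanwoo study's values):

```python
import numpy as np

rng = np.random.default_rng(1)
n_animals, n_blocks, n_alleles = 200, 10, 4

# Design matrix X: each animal carries two alleles per haplotype block,
# so each row sums to 2 * n_blocks allele copies.
X = np.zeros((n_animals, n_blocks * n_alleles))
for i in range(n_animals):
    for b in range(n_blocks):
        for a in rng.integers(0, n_alleles, size=2):
            X[i, b * n_alleles + a] += 1

# Simulated allele effects and a carcass-weight-like phenotype with noise.
true_effects = rng.normal(0, 1, X.shape[1])
y = X @ true_effects + rng.normal(0, 1, n_animals)

# Ridge solution: (X'X + lam*I) beta = X'y  (shrinkage stands in for GBLUP).
lam = 1.0
beta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
pred = X @ beta
r = np.corrcoef(pred, y)[0, 1]
print(f"fit correlation: {r:.2f}")
```

Fewer distinct alleles per block (as with the LD-based clustering) directly shrink the column count of X, which is where the computational saving noted above comes from.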

Keywords: best linear unbiased predictor, genomic prediction, haplotype, linkage disequilibrium

Procedia PDF Downloads 135
694 Co-Gasification of Petroleum Waste and Waste Tires: A Numerical and CFD Study

Authors: Thomas Arink, Isam Janajreh

Abstract:

The petroleum industry generates significant amounts of waste in the form of drill cuttings, contaminated soil, and oily sludge. Drill cuttings are a product of off-shore drilling rigs, containing wet soil and total petroleum hydrocarbons (TPH). Contaminated soil comes from different on-shore sites and also contains TPH. The oily sludge is mainly residue or tank bottom sludge from storage tanks. The two main treatment methods currently used are incineration and thermal desorption (TD). Thermal desorption is a method in which the waste material is heated to 450ºC in an anaerobic environment to release volatiles; the condensed volatiles can be used as a liquid fuel. For the thermal desorption unit, dry contaminated soil is mixed with moist drill cuttings to generate a suitable mixture. Thermogravimetric analysis (TGA) of the TD feedstock showed that less than 50% of the TPH is released; the discharged material is stored in landfill. This study proposes co-gasification of petroleum waste with waste tires as an alternative to thermal desorption. Co-gasification with a high-calorific material is necessary since the petroleum waste consists of more than 60 wt% ash (soil/sand), making its calorific value too low for gasification on its own. Since the gasification process occurs at 900ºC and higher, close to 100% of the TPH can be released, according to the TGA. This work consists of three parts: 1. a mathematical gasification model, 2. a reactive flow CFD model, and 3. experimental work on a drop tube reactor. Extensive material characterization was done by means of proximate analysis (TGA), ultimate analysis (CHNOS flash analysis), and calorific value measurements (bomb calorimeter) to obtain the input parameters for the mathematical and CFD models. The mathematical model is a zero-dimensional model based on Gibbs energy minimization together with Lagrange multipliers; it is used to find the product species composition (molar fractions of CO, H2, CH4, etc.)
for different tire/petroleum feedstock mixtures and equivalence ratios. The results of the mathematical model act as a reference for the CFD model of the drop-tube reactor. With the CFD model, the efficiency and product species composition can be predicted for different mixtures and particle sizes. Finally, both models are verified by experiments on a drop tube reactor (1540 mm long, 66 mm inner diameter, 1400 K maximum temperature).
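The Gibbs energy minimization step described above can be sketched as a constrained optimization: minimize the total mixture Gibbs energy over species mole numbers subject to element (C, H, O) balances, the constraints that the Lagrange multipliers enforce. The standard Gibbs energies and the feed below are invented placeholder values, not the paper's feedstock data:

```python
import numpy as np
from scipy.optimize import minimize

# Product species and illustrative dimensionless standard Gibbs energies
# g_i/(RT) at the gasification temperature (hypothetical values).
species = ["CO", "CO2", "H2", "H2O", "CH4"]
g_rt = np.array([-10.0, -20.0, 0.0, -12.0, -2.0])

# Element composition matrix (rows: C, H, O; columns follow `species`).
A = np.array([
    [1, 1, 0, 0, 1],   # C atoms per molecule
    [0, 0, 2, 2, 4],   # H atoms per molecule
    [1, 2, 0, 1, 0],   # O atoms per molecule
], dtype=float)

# Total moles of each element delivered by a hypothetical feed mixture.
b = np.array([1.0, 2.0, 1.0])

def gibbs(n):
    """Total mixture Gibbs energy / RT for ideal-gas mole numbers n."""
    n = np.maximum(n, 1e-12)                 # guard against log(0)
    return np.sum(n * (g_rt + np.log(n / n.sum())))

cons = {"type": "eq", "fun": lambda n: A @ n - b}   # element balances
res = minimize(gibbs, np.full(len(species), 0.2), method="SLSQP",
               constraints=cons, bounds=[(1e-10, None)] * len(species))

x = res.x / res.x.sum()                      # equilibrium mole fractions
for s, xi in zip(species, x):
    print(f"{s}: {xi:.3f}")
```

Sweeping `b` over different tire/petroleum blend ratios and oxygen inputs reproduces the kind of composition-versus-mixture curves a zero-dimensional equilibrium model delivers.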

Keywords: computational fluid dynamics (CFD), drop tube reactor, gasification, Gibbs energy minimization, petroleum waste, waste tires

Procedia PDF Downloads 512
693 Prediction of Fluid-Induced Deformation Using Cavity Expansion Theory

Authors: Jithin S. Kumar, Ramesh Kannan Kandasami

Abstract:

Geomaterials are generally porous in nature due to the presence of discrete particles and interconnected voids. The porosity of these geomaterials plays a critical role in many engineering applications such as CO2 sequestration, wellbore strengthening, enhanced oil and hydrocarbon recovery, hydraulic fracturing, and subsurface waste storage. These applications involve solid-fluid interactions, which govern changes in porosity that in turn affect the permeability and stiffness of the medium. Injecting fluid into a geomaterial results in permeation, which exhibits small or negligible deformation of the soil skeleton, followed by cavity expansion, fingering, or fracturing (different forms of instability) due to large deformation, especially when the flow rate is greater than the ability of the medium to permeate the fluid. The complexity of this problem increases because the geomaterial behaves like both a solid and a fluid under certain conditions. Thus, it is important to understand this multiphysics problem, in which, in addition to the permeation, the elastic-plastic deformation of the soil skeleton plays a vital role during fluid injection. The phenomena of permeation and cavity expansion in porous media have been studied independently through extensive experimental and analytical/numerical models. The analytical models generally use Darcy's law or diffusion equations to capture the fluid flow during permeation, while elastic-plastic models (Mohr-Coulomb and Modified Cam-Clay) are used to predict the solid deformations. Hitherto, research has generally focused on modelling cavity expansion without considering the effect of the injected fluid entering the medium, and very few studies have considered the effect of the injected fluid on the deformation of the soil skeleton. However, the porosity changes during fluid injection and the coupled elastic-plastic deformation are not clearly understood.
In this study, the phenomena of permeation and of instabilities such as cavity and finger/fracture formation will be quantified extensively by performing experiments using a novel experimental setup together with image processing techniques. This experimental study will describe the fluid flow and soil deformation characteristics under different boundary conditions. Further, a well-refined coupled semi-analytical model will be developed to capture the physics involved in quantifying the deformation behaviour of the geomaterial during fluid injection.
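Darcy's law, the starting point of the analytical flow models cited above, relates volumetric flux to the pressure gradient through permeability and viscosity. A minimal sketch, with all numerical values chosen as plausible illustrations rather than measurements from this study:

```python
# Darcy's law for 1D flow through a homogeneous sample:
#   q = (k / mu) * (dP / L)
def darcy_flux(k_m2, mu_pa_s, dp_pa, length_m):
    """Darcy (superficial) flux in m/s."""
    return (k_m2 / mu_pa_s) * (dp_pa / length_m)

k = 1e-12        # permeability of a fine sand, m^2 (illustrative)
mu = 1e-3        # water viscosity, Pa.s
dp = 2e5         # injection overpressure, Pa (illustrative)
L = 0.1          # sample length, m

q = darcy_flux(k, mu, dp, L)
print(f"Darcy flux: {q:.1e} m/s")
```

When the imposed injection rate exceeds what this flux (times the sample area) can carry, pressure builds at the cavity wall, which is the regime where cavity expansion and fingering take over from pure permeation.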

Keywords: solid-fluid interaction, permeation, poroelasticity, plasticity, continuum model

Procedia PDF Downloads 65
692 Sustainability Assessment Tool for the Selection of Optimal Site Remediation Technologies for Contaminated Gasoline Sites

Authors: Connor Dunlop, Bassim Abbassi, Richard G. Zytner

Abstract:

Life cycle assessment (LCA) is a powerful tool established by the International Organization for Standardization (ISO) that can be used to assess the environmental impacts of a product or process from cradle to grave. Many studies utilize the LCA methodology within the site remediation field to compare decontamination methods, including bioremediation, soil vapor extraction, and excavation with off-site disposal. However, to the authors' best knowledge, limited information is available in the literature on a sustainability tool that could help with the selection of the optimal remediation technology. Such a tool, based on the LCA methodology, would consider site conditions as well as environmental, economic, and social impacts. Accordingly, this project was undertaken to develop a tool to assist with the selection of the optimal sustainable technology. Developing a proper tool requires a large amount of data, so data was collected from previous LCA studies of site remediation technologies; this step identified knowledge gaps and limitations in project data. Next, utilizing the data obtained from the literature review and other organizations, an extensive LCA study is being completed following the ISO 14040 requirements. The initial technologies being compared are bioremediation, excavation with off-site disposal, and a no-remediation option for a generic gasoline-contaminated site. The LCA study is carried out with the modelling software SimaPro. A sensitivity analysis of the LCA results will also be incorporated to evaluate its impact on the overall results. Finally, the economic and social impacts associated with each option will be reviewed to understand how they vary between sites. All the results will then be summarized, and an interactive Excel-based tool will be developed to help select the best sustainable site remediation technology.
Preliminary LCA results show improved sustainability for each remediation technology compared to the no-remediation option for a gasoline-contaminated site. Sensitivity analyses are now being completed on site-specific parameters, including soil type and transportation distance, to determine how the environmental impacts vary at other contaminated gasoline locations. Additionally, the social improvements and overall economic costs associated with each technology are being reviewed. Using these results, the sustainability tool created to assist in selecting the overall best option will be refined.
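The weighted multi-criteria ranking that such a selection tool performs can be illustrated with a short sketch. All technologies, criterion scores, and weights below are invented for illustration and are not results from this study:

```python
# Hypothetical multi-criteria sustainability ranking, in the spirit of the
# Excel selection tool described above. Scores are on a 1-10 scale where
# higher is better (e.g., lower environmental impact scores higher).
# All numbers are illustrative, not study results.

CRITERIA_WEIGHTS = {"environmental": 0.5, "economic": 0.3, "social": 0.2}

TECHNOLOGY_SCORES = {
    "bioremediation":     {"environmental": 8, "economic": 7, "social": 6},
    "excavation_offsite": {"environmental": 4, "economic": 5, "social": 7},
    "no_remediation":     {"environmental": 2, "economic": 9, "social": 2},
}

def weighted_score(scores, weights):
    """Weighted sum of the criterion scores for one technology."""
    return sum(weights[c] * scores[c] for c in weights)

def rank_technologies(tech_scores, weights):
    """Return technology names sorted from best to worst overall score."""
    return sorted(tech_scores,
                  key=lambda t: weighted_score(tech_scores[t], weights),
                  reverse=True)

if __name__ == "__main__":
    for tech in rank_technologies(TECHNOLOGY_SCORES, CRITERIA_WEIGHTS):
        print(tech, round(weighted_score(TECHNOLOGY_SCORES[tech], CRITERIA_WEIGHTS), 2))
```

In a real tool, the weights themselves would be site-specific inputs, which is exactly what the sensitivity analysis above probes.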

Keywords: life cycle assessment, site remediation, sustainability tool, contaminated sites

Procedia PDF Downloads 52
691 Barriers and Facilitators for Telehealth Use during Cervical Cancer Screening and Care: A Literature Review

Authors: Reuben Mugisha, Stella Bakibinga

Abstract:

The cervical cancer burden is a global threat, but more so in low-income settings, where more than 85% of mortality cases occur due to the lack of sufficient screening programs and, consequently, of early detection of cancerous and precancerous cells among women. Studies show that 3% to 35% of deaths could have been avoided through early screening, depending on prognosis, disease progression, and environmental and lifestyle factors. In this study, a systematic literature review is undertaken to identify potential barriers and facilitators documented in previous studies on the application of telehealth in cervical cancer screening programs for the early detection of cancerous and precancerous cells. The study informs future work, especially in low-income settings, about lessons learned from previous studies and how best to prepare when planning to implement telehealth for cervical cancer screening. It further identifies the knowledge gaps in the research area and makes recommendations. Using a specified selection criterion, 15 articles are analyzed based on each study's aim, theory or conceptual framework, method, findings, and conclusions. Results are then tabulated and presented thematically to better inform readers about emerging facts on barriers and facilitators to telehealth implementation as documented in the reviewed articles, and how these lead to evidence-informed conclusions relevant to telehealth implementation for cervical cancer screening. Preliminary findings of this study underscore that the low-cost mobile colposcope is an appealing option in cervical cancer screening, particularly when coupled with on-site treatment of suspicious lesions. 
These tools relay cervical images to online databases for storage and retrieval, and they permit the integration of connected devices at the point of care to rapidly collect clinical data for further analysis of the prevalence of cervical dysplasia and cervical cancer. The results, however, reveal several important facilitators for implementing the mobile colposcope as a telehealth cervical cancer screening mechanism: sensitizing the population before using mobile colposcopy with patients, standardizing mobile colposcopy programs across screening partners, ensuring sufficient logistics and good connectivity, and having experienced experts review image cases at the point of care.

Keywords: cervical cancer screening, digital technology, hand-held colposcopy, knowledge-sharing

Procedia PDF Downloads 211
690 Development of Mechanisms of Value Creation and Risk Management in Organizations under the Transformation of the Russian Economy

Authors: Mikhail V. Khachaturyan, Inga A. Koryagina, Eugenia V. Klicheva

Abstract:

In modern conditions, the scientific analysis of problems in developing mechanisms of value creation and risk management acquires special relevance. The formation of economic knowledge has resulted in the constant analysis of consumer behavior by all players in national and world markets. The main objectives of modern control systems are to develop effective mechanisms for demand analysis, which is crucial for defining the consumer characteristics of future production, and to manage the risks connected with developing that production. The current period of economic development is characterized by a high level of business globalization and rigid competition. At the same time, a considerable share of the cost of new products and services is of a non-material, intellectual nature. Small innovative firms are currently among the most successful in Russia. Through their unique technologies and new approaches to process management, which form the basis of their intellectual capital, such firms can show flexibility and succeed in the market. As a rule, such enterprises need a highly flexible structure, free of rigid schemes of subordination and requiring essentially new incentives to involve personnel in innovative activity. Such structures, as well as the new approach to management, can be built on value-oriented management, which is directed toward the gradual change of personnel consciousness and the formation of groups of adherents engaged in solving shared innovative tasks. Over time, these value changes can gradually extend not only to the staff of an innovative firm but also to the structure of its corporate partners. The introduction of new technologies is a significant factor contributing to the development of new value imperatives and accelerating change in the organization's value system. 
This is because new technologies change the internal environment of the organization in such a way that the old value system becomes inefficient in the new conditions. Introducing new technologies often requires changing the structure of employee interaction and training employees in new principles of work. During the introduction of new technologies and the accompanying change in the value system, the structure of managing the organization's values also changes. This is due to the need to involve more staff in justifying and consolidating the new value system and to incorporate their views into its motivational potential.

Keywords: value, risk, creation, problems, organization

Procedia PDF Downloads 274
689 Scrutinizing the Effective Parameters on Cuttings Movement in Deviated Wells: Experimental Study

Authors: Siyamak Sarafraz, Reza Esmaeil Pour, Saeed Jamshidi, Asghar Molaei Dehkordi

Abstract:

Cuttings transport is one of the major problems in directional and extended-reach oil and gas wells. Insufficient attention to this issue may cause problems such as difficulty running casing, stuck pipe, excessive torque and drag, hole pack-off, bit wear, a decreased rate of penetration (ROP), increased equivalent circulating density (ECD), and logging difficulties. Since it is practically impossible to directly observe the behavior of deep wells, a test setup was designed to investigate cuttings transport phenomena. This experimental work was carried out to scrutinize the behavior of the effective variables in cuttings transport. The test setup comprised a 17-foot-long test section that included a 3.28-foot transparent glass pipe with a 3-inch diameter, a storage tank with a 100-liter capacity, a rotating stainless-steel drill pipe with a 1.25-inch diameter, a pump to circulate the drilling fluid, a valve to adjust the flow rate, a bit, and a camera to record all events, which were then converted to RGB images via the Image Processing Toolbox. After preparation, each test was performed separately, and the weights of the output particles were measured and compared. Observation charts were plotted to assess the behavior of viscosity, flow rate, and RPM at inclinations of 0°, 30°, 60°, and 90°. RPM was explored together with other variables such as flow rate and viscosity at different angles. The effect of different flow rates was also investigated under directional conditions. To obtain precise results, the captured images were analyzed to determine bed thickening and particle behavior in the annulus. The results of this experimental study demonstrate that drill string rotation helps keep particles in suspension and reduces particle deposition, so cuttings movement increased significantly. As fluid velocity was raised, the laminar flow in the annulus became turbulent. Increasing the flow rate in the horizontal section at the lower range of viscosities is more effective and improves cuttings transport performance.
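The image-based estimate of cuttings-bed thickness described above can be sketched as follows. This is a minimal stand-in for the authors' Image Processing Toolbox workflow, not their actual code: the intensity threshold, the bottom-up bed definition, and the tiny synthetic frame are all assumptions for illustration.

```python
# Hypothetical sketch: estimate cuttings-bed thickness per column of a
# grayscale annulus frame by thresholding pixel intensity. Dark pixels
# (below the threshold) are treated as settled cuttings; the bed is the
# contiguous run of dark pixels growing upward from the bottom row.

def bed_thickness_per_column(frame, threshold=80):
    """For each column, count contiguous 'cuttings' pixels from the bottom up."""
    n_rows, n_cols = len(frame), len(frame[0])
    thickness = []
    for col in range(n_cols):
        count = 0
        for row in range(n_rows - 1, -1, -1):  # scan bottom row upward
            if frame[row][col] < threshold:
                count += 1
            else:
                break  # first bright (fluid) pixel ends the bed
        thickness.append(count)
    return thickness

def mean_bed_thickness(frame, threshold=80):
    """Average bed height (in pixels) across the frame width."""
    t = bed_thickness_per_column(frame, threshold)
    return sum(t) / len(t)

# Tiny synthetic 5x4 grayscale frame (40 = dark cuttings, 200 = bright fluid).
FRAME = [
    [200, 200, 200, 200],
    [200, 200, 200, 200],
    [200, 200,  40, 200],
    [ 40,  40,  40, 200],
    [ 40,  40,  40,  40],
]

if __name__ == "__main__":
    print(bed_thickness_per_column(FRAME))  # per-column bed heights in pixels
```

A pixel-to-millimeter calibration against the known pipe diameter would convert these counts into physical bed heights.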

Keywords: cutting transport, directional drilling, flow rate, hole cleaning, pipe rotation

Procedia PDF Downloads 276
688 Characterization of Minerals and Elicitors in Spent Mushroom Substrate Extracts and Their Effects on Growth, Yield and the Management of Cassava Mosaic Disease

Authors: Samuel E. Okere, Anthony E. Ataga

Abstract:

Introduction: This paper evaluated the mineral composition and disease-resistance elicitors in Pleurotus ostreatus water extract of spent mushroom substrate (POWESMS) and Pleurotus tuber-regium water extract of spent mushroom substrate (PTWESMS), and their effects on the growth, yield, and management of cassava mosaic disease. Materials and Methods: Cassava plantlets (TMS 98/0505) were generated through meristem tip culture at the Tissue Culture Laboratory, National Root Crop Research Institute, Umudike, before being transferred to the screen house at the University of Port Harcourt Research Farm. The minerals and elicitors contained in the two spent mushroom substrates were evaluated using standard procedures. The treatments comprised cassava plants treated with POWESMS, plants treated with PTWESMS, and untreated controls, all of which were inoculated with viral inoculum seven days after treatment application. The experiment was laid out in a completely randomized block design with three replicates. The data generated were subjected to analysis of variance (ANOVA), and means were separated using Fisher's Least Significant Difference at p = 0.05. Results: POWESMS contained 19.3, 0.52, and 0.1 g/200 g substrate of carbohydrate polymer, glycoprotein, and lipid elicitors, respectively, and 3.17, 212.1, 17.9, 21.8, 58.8, and 111.0 mg/100 g substrate of N, P, K, Na, Mg, and Ca, respectively. PTWESMS contained 1.6, 0.04, and 0.2 g/200 g substrate of carbohydrate polymer, glycoprotein, and lipid elicitors, respectively, and 3.4, 204.8, 8.9, 24.2, 32.2, and 105.5 mg/100 g substrate of N, P, K, Na, Mg, and Ca, respectively. There were also significant differences in the mean values of the number of storage roots, root length, fresh root weight, fresh plant biomass, root girth, and whole-plant dry biomass, but no significant difference was recorded for harvest index. 
The results also revealed significant differences in the mean values of the disease severity index evaluated at 4, 8, 12, 16, 20, 24, and 28 weeks after inoculation (WAI). Conclusion: The aqueous extracts of these spent mushroom substrates have shown outstanding prospects for managing cassava mosaic disease and for improving the growth and yield of cassava, owing to the high levels of minerals and elicitors they contain compared with the control. However, more work is recommended, especially on understanding the mechanism of this induced resistance.
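The ANOVA-plus-LSD comparison of treatment means used above can be sketched in a few lines. The yield numbers below are invented for illustration, and since the Python standard library has no t-distribution, the critical t-value is hard-coded for this toy example's error degrees of freedom (t(0.975, df = 6) ≈ 2.447):

```python
import math

# Hypothetical sketch of Fisher's Least Significant Difference (LSD) test
# following a one-way ANOVA: three treatments, three replicates each.
# All yield values are invented for illustration, not study data.
DATA = {
    "POWESMS": [12.1, 11.8, 12.5],
    "PTWESMS": [10.9, 11.2, 10.7],
    "control": [ 8.2,  8.5,  8.0],
}

T_CRIT = 2.447  # t(0.975, df=6); df = 9 observations - 3 groups

def mse_within(data):
    """Pooled within-group mean square (the ANOVA error term)."""
    ss_within, df_within = 0.0, 0
    for vals in data.values():
        m = sum(vals) / len(vals)
        ss_within += sum((v - m) ** 2 for v in vals)
        df_within += len(vals) - 1
    return ss_within / df_within

def lsd(data, t_crit):
    """LSD threshold for equal group sizes n: t * sqrt(2 * MSE / n)."""
    n = len(next(iter(data.values())))
    return t_crit * math.sqrt(2 * mse_within(data) / n)

def significantly_different(data, a, b, t_crit):
    """Two treatment means differ if |mean_a - mean_b| exceeds the LSD."""
    mean_a = sum(data[a]) / len(data[a])
    mean_b = sum(data[b]) / len(data[b])
    return abs(mean_a - mean_b) > lsd(data, t_crit)

if __name__ == "__main__":
    print("LSD threshold:", round(lsd(DATA, T_CRIT), 3))
```

In practice the critical t-value would be looked up (or computed with `scipy.stats.t.ppf`) for the actual error degrees of freedom of the experiment.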

Keywords: characterization, elicitors, mosaic, mushroom

Procedia PDF Downloads 121
687 Mesocarbon Microbeads Modification of Stainless-Steel Current Collector to Stabilize Lithium Deposition and Improve the Electrochemical Performance of Anode Solid-State Lithium Hybrid Battery

Authors: Abebe Taye

Abstract:

The interest in enhancing the performance of all-solid-state batteries featuring lithium metal anodes as a potential alternative to traditional lithium-ion batteries has prompted exploration into new avenues. A promising strategy involves transforming lithium-ion batteries into hybrid configurations by integrating lithium-ion and lithium-metal solid-state components. This study is focused on achieving stable lithium deposition and advancing the electrochemical capabilities of solid-state lithium hybrid batteries by incorporating mesocarbon microbeads (MCMBs) blended with silver nanoparticles into the anode. To achieve this, MCMBs blended with silver nanoparticles are coated on stainless-steel current collectors. These samples undergo a range of analyses employing diverse techniques. Surface morphology is studied through scanning electron microscopy (SEM). The electrochemical behavior of the coated samples is evaluated in both half-cell and full-cell setups utilizing an argyrodite-type sulfide electrolyte. The stability of MCMBs in the electrolyte is assessed using electrochemical impedance spectroscopy (EIS). Additional insights into the composition are gleaned through X-ray photoelectron spectroscopy (XPS), Raman spectroscopy, and energy-dispersive X-ray spectroscopy (EDS). At an ultra-low N/P ratio of 0.26, stability is upheld for over 100 charge/discharge cycles in half-cells. When applied in a full-cell configuration, the hybrid anode preserves 60.1% of its capacity after 80 cycles at 0.3 C under a low N/P ratio of 0.45. In sharp contrast, the capacity retention of the cell using untreated MCMBs declines to 20.2% after a mere 60 cycles. The introduction of MCMBs combined with silver nanoparticles into the hybrid anode of solid-state lithium batteries substantially elevates their stability and electrochemical performance. 
This approach ensures consistent lithium deposition and removal, mitigating dendrite growth and the accumulation of inactive lithium. The findings from this investigation hold significant value in elevating the reversibility and energy density of lithium-ion batteries, thereby making noteworthy contributions to the advancement of more efficient energy storage systems.

Keywords: MCMB, lithium metal, hybrid anode, silver nanoparticle, cycling stability

Procedia PDF Downloads 58
686 Shelf Life and Overall Quality of Pretreated and Modified Atmosphere Packaged ‘Ready-To-Eat’ Pomegranate Arils cv. Bhagwa Stored at 1°C

Authors: Sangram Dhumal, Anil Karale

Abstract:

The effect of different pretreatments and modified atmosphere packaging on the quality of minimally processed pomegranate arils of the Bhagwa cultivar was evaluated during storage at 1°C for 16 days. Hand-extracted pomegranate arils were pretreated with different antioxidants and surfactants, viz. 100 ppm sodium hypochlorite plus 0.5 percent ascorbic acid plus 0.5 percent citric acid; 10 and 20 percent honey solution; 0.1 percent nanosilver-stabilized food-grade hydrogen peroxide, alone and in combination with 10 percent honey solution; and an untreated control. The disinfected, rinsed, and air-dried pomegranate arils were packed in polypropylene punnets (135 g each) under different modified atmospheres and stored for up to 16 days at 1°C. Changes in colour, pH, total soluble solids, sugars, anthocyanins, phenols, acidity, antioxidant activity, and microbial and yeast and mold counts over initial values were recorded in all the treatments under study but were highest in those without antioxidant and surfactant treatment. Pretreated arils stored at 1°C recorded decreases in L* and b* values, pH, levels of non-reducing and total sugars, polyphenols, antioxidant activity, and acceptability of the arils, and increases in total soluble solids, a* value, anthocyanins, and microbial count. An increase in anthocyanin content was observed in modified atmosphere packaged pretreated arils stored at 1°C. Modified atmosphere packaging with 100 percent nitrogen recorded the minimum changes in physicochemical and sensory parameters, with minimum microbial growth. Untreated arils in perforated punnets with air (control) gave a shelf life of up to 6 days only. Pretreatment of the arils with 10 percent honey plus 0.1 percent nanosilver-stabilized food-grade hydrogen peroxide and packaging in 100 percent nitrogen recorded the minimum changes in physicochemical parameters. This treatment also restricted microbial growth and maintained colour, anthocyanin pigmentation, antioxidant activity, and the overall fresh-like quality of the arils. 
This dipping treatment, combined with modified atmosphere packaging, extended the shelf life of fresh, ready-to-eat arils to 14 to 16 days with enhanced acceptability when stored at 1°C.

Keywords: anthocyanin content, pomegranate, MAP, minimally processed, microbial quality, Bhagwa, shelf-life, overall quality

Procedia PDF Downloads 167