Search results for: predictive mining
110 Removal of Chromium by UF5kDa Membrane: Its Characterization, Optimization of Parameters, and Evaluation of Coefficients
Authors: Bharti Verma, Chandrajit Balomajumder
Abstract:
Water pollution has escalated owing to industrialization and the uncontrolled discharge of toxic heavy metal ions from the semiconductor, electroplating, metallurgical, mining, chemical manufacturing, and tannery industries, among others. The semiconductor industry uses a wide range of chemicals in wafer preparation, so its wastewater effluent may contain fluoride, toxic solvents, heavy metals, dyes and salts, suspended solids, and chelating agents. Likewise, effluent from chrome plating in the electroplating industry contains large amounts of chromium. Since Cr(VI) is highly toxic, acute exposure poses a serious health risk, and chronic exposure can even lead to mutagenesis and carcinogenesis. By contrast, naturally occurring Cr(III) is much less toxic than Cr(VI). The discharge limits for hexavalent and trivalent chromium are 0.05 mg/L and 5 mg/L, respectively. Numerous methods exist for heavy metal removal, such as adsorption, chemical precipitation, membrane filtration, ion exchange, and electrochemical methods. The present study focuses on the removal of chromium ions using a flat-sheet UF5kDa membrane. The ultrafiltration membrane process operates at pressures above those of microfiltration, so the separation achieved may be influenced by both sieving and the Donnan effect. Ultrafiltration is a promising method for the rejection of heavy metals such as chromium, fluoride, cadmium, nickel, and arsenic from effluent water. Its benefits are that operation is quite simple, removal efficiency is high compared with some other removal methods, and it is reliable. Polyamide membranes were selected for the present study on the rejection of Cr(VI) from feed solution.
The objective of the current work is to examine the rejection of Cr(VI) from aqueous feed solutions by flat-sheet UF5kDa membranes under different parameters such as pressure, feed concentration, and pH of the feed. The experiments revealed that the removal efficiency of Cr(VI) increases with increasing pressure. The effects of feed pH and of the initial chromium dosage in the feed solution have also been studied. The membrane was characterized by FTIR, SEM, and AFM before and after each run. The mass transfer coefficients have been estimated, and the membrane transport parameters have been calculated and found to be in good correlation with the applied model.
Keywords: heavy metal removal, membrane process, waste water treatment, ultrafiltration
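The rejection figures discussed above follow from a standard definition rather than anything specific to this membrane. As an illustrative sketch (the function name and the example concentrations are ours, not the authors'), the observed rejection of an ultrafiltration membrane is computed from feed and permeate concentrations:

```python
def observed_rejection(feed_conc_mg_l, permeate_conc_mg_l):
    """Observed rejection R_o (%) = (1 - Cp/Cf) * 100, the usual
    figure of merit reported in ultrafiltration studies."""
    return (1.0 - permeate_conc_mg_l / feed_conc_mg_l) * 100.0

# e.g. a 10 mg/L Cr(VI) feed yielding a 1.5 mg/L permeate
rejection = observed_rejection(10.0, 1.5)   # 85.0 %
```

In practice this is computed per run, at each pressure and pH setting, to build the efficiency curves the abstract describes.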
Procedia PDF Downloads 141
109 Identification of Suitable Sites for Rainwater Harvesting in Salt Water Intruded Area by Using Geospatial Techniques in Jafrabad, Amreli District, India
Authors: Pandurang Balwant, Ashutosh Mishra, Jyothi V., Abhay Soni, Padmakar C., Rafat Quamar, Ramesh J.
Abstract:
Seawater intrusion into coastal aquifers has become one of the major environmental concerns. Although it is a natural phenomenon, it can be induced by anthropogenic activities such as excessive exploitation of groundwater, seacoast mining, etc. The geological and hydrogeological conditions, including groundwater heads and groundwater pumping patterns in coastal areas, also influence the magnitude of seawater intrusion. However, the problem can be remediated by preventive measures such as rainwater harvesting and artificial recharge. The present study attempts to identify suitable sites for rainwater harvesting in a salt-intrusion-affected area near the coastal aquifer of Jafrabad town, Amreli district, Gujarat, India. The physico-chemical water quality results show that, of the 25 groundwater samples collected from the study area, most contained high concentrations of Total Dissolved Solids (TDS) with major fractions of Na and Cl ions. The Cl/HCO3 ratio was also found to be greater than 1, which indicates saltwater contamination in the study area. A geophysical survey was conducted at nine sites within the study area to explore the extent of seawater contamination. From the inverted resistivity sections, low-resistivity zones (<3 Ohm m) associated with seawater contamination were demarcated in the north block pit and south block pit of the NCJW mines, Mitiyala village, Lotpur, and Lunsapur village at depths of 33 m, 12 m, 40 m, 37 m, and 24 m, respectively. Geospatial techniques in combination with the Analytical Hierarchy Process (AHP), considering hydrogeological factors, geographical features, drainage pattern, water quality, and geophysical results for the study area, were used to identify potential zones for rainwater harvesting. A rainwater harvesting suitability model was developed in ArcGIS 10.1 software, and a rainwater harvesting suitability map for the study area was generated.
AHP in combination with weighted overlay analysis is an appropriate method for identifying rainwater harvesting potential zones. The suitability map can further be used as a guidance map for developing rainwater harvesting infrastructure in the study area, either for artificial groundwater recharge facilities or for direct use of harvested rainwater.
Keywords: analytical hierarchy process, groundwater quality, rainwater harvesting, seawater intrusion
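The AHP step described above reduces expert pairwise comparisons between criteria to the weights used in the weighted overlay. A minimal sketch of that computation, using Saaty's column-normalization approximation (the 3x3 judgment matrix and criterion names below are hypothetical, not the study's actual judgments):

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority vector via Saaty's column-normalization approximation."""
    A = np.asarray(pairwise, dtype=float)
    col_norm = A / A.sum(axis=0)     # normalize each column of judgments
    return col_norm.mean(axis=1)     # average across rows -> criterion weights

def consistency_ratio(pairwise, weights):
    """CR = CI / RI; judgments are usually accepted when CR < 0.1."""
    A = np.asarray(pairwise, dtype=float)
    n = len(weights)
    lam = float((A @ weights / weights).mean())   # principal eigenvalue estimate
    ci = (lam - n) / (n - 1)
    random_index = {3: 0.58, 4: 0.90, 5: 1.12}    # Saaty's RI values
    return ci / random_index[n]

# Hypothetical judgments: e.g. drainage density vs. slope vs. water quality
A = [[1, 2, 4],
     [1/2, 1, 2],
     [1/4, 1/2, 1]]
w = ahp_weights(A)          # approx. [0.571, 0.286, 0.143]
cr = consistency_ratio(A, w)
```

The resulting weights are what a GIS weighted-overlay tool (such as the one in ArcGIS) takes as input for each criterion raster.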
Procedia PDF Downloads 176
108 Management of Femoral Neck Stress Fractures at a Specialist Centre and Predictive Factors to Return to Activity Time: An Audit
Authors: Charlotte K. Lee, Henrique R. N. Aguiar, Ralph Smith, James Baldock, Sam Botchey
Abstract:
Background: Femoral neck stress fractures (FNSF) are uncommon, making up 1 to 7.2% of stress fractures in healthy subjects. FNSFs are prevalent in young women, military recruits, endurance athletes, and individuals with energy deficiency syndrome or the female athlete triad. Presentation is often non-specific, and the condition is often misdiagnosed following the initial examination. There is limited research addressing return-to-activity time after FNSF. Previous studies have demonstrated prognostic time predictions based on various imaging techniques. Here, (1) OxSport clinic FNSF practice standards are retrospectively reviewed, (2) FNSF cohort demographics are examined, and (3) regression models are used to predict return-to-activity prognosis and consequently determine bone stress risk factors. Methods: Patients with a diagnosis of FNSF attending the OxSport clinic between 01/06/2020 and 01/01/2020 were selected from the Rheumatology Assessment Database Innovation in Oxford (RhADiOn) and the OxSport Stress Fracture Database (n = 14). (1) Clinical practice was audited against five criteria based on local and National Institute for Health and Care Excellence guidance, with a 100% standard. (2) Demographics of the FNSF cohort were examined with Student's t-test. (3) Lastly, linear regression and random forest regression models were used on this patient cohort to predict return-to-activity time, with an analysis of feature importance conducted after fitting each model. Results: OxSport clinical practice met the standard (100%) in 3/5 criteria. The criteria not met were patient waiting times and documentation of all bone stress risk factors. Importantly, analysis of patient demographics showed that, of the population with complete bone stress risk factor assessments, 53% were positive for modifiable bone stress risk factors. Lastly, linear regression analysis was used to identify demographic factors that predicted return-to-activity time [R2 = 79.172%; average error 0.226].
This analysis identified four key variables that predicted return-to-activity time: vitamin D level, total hip DEXA T value, femoral neck DEXA T value, and history of an eating disorder/disordered eating. Furthermore, random forest regression models were employed for this task [R2 = 97.805%; average error 0.024]. Analysis of feature importance again identified a set of four variables, three of which matched the linear regression analysis (vitamin D level, total hip DEXA T value, and femoral neck DEXA T value), with the fourth being age. Conclusion: OxSport clinical practice could be improved by more comprehensively evaluating bone stress risk factors. The importance of this evaluation is demonstrated by the proportion of the population found positive for these risk factors. Using this cohort, potential bone stress risk factors that significantly impacted return-to-activity prognosis were identified using regression models.
Keywords: eating disorder, bone stress risk factor, femoral neck stress fracture, vitamin D
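The linear-regression step above can be sketched with ordinary least squares on standardized predictors, where coefficient sign and magnitude indicate each factor's contribution. The data below are synthetic stand-ins for the real cohort (which we do not have); the four columns merely mirror the four reported predictors, and the effect sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 14                                           # cohort size reported in the audit
# Columns stand in for: vitamin D, total-hip DEXA T, femoral-neck DEXA T,
# eating-disorder history. All values here are synthetic.
X = rng.normal(size=(n, 4))
true_beta = np.array([-2.0, -1.5, -1.0, 3.0])    # illustrative effects on weeks to return
y = 20.0 + X @ true_beta + rng.normal(scale=0.5, size=n)

Xd = np.column_stack([np.ones(n), X])            # add an intercept column
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)    # ordinary least squares
residuals = y - Xd @ beta
r2 = 1.0 - residuals.var() / y.var()             # in-sample fit, analogous to the reported R2
```

The random forest variant would replace the `lstsq` call with an ensemble regressor and read feature importances from the fitted trees; the workflow is otherwise the same.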
Procedia PDF Downloads 186
107 A Data-Driven Compartmental Model for Dengue Forecasting and Covariate Inference
Authors: Yichao Liu, Peter Fransson, Julian Heidecke, Jonas Wallin, Joacim Rockloev
Abstract:
Dengue, a mosquito-borne viral disease, poses a significant public health challenge in endemic tropical and subtropical countries, including Sri Lanka. To reveal insights into the complexity of the disease dynamics and to study its drivers, a comprehensive model is essential, one capable of both robust forecasting and insightful inference of drivers while capturing the co-circulation of several virus strains. However, existing studies mostly focus on only one aspect at a time and do not integrate insights across these siloed approaches. While mechanistic models are developed to capture immunity dynamics, they are often oversimplified and lack integration of the diverse drivers of disease transmission. On the other hand, purely data-driven methods lack the constraints imposed by immuno-epidemiological processes, making them prone to overfitting and inference bias. This research presents a hybrid model that combines machine learning techniques with mechanistic modelling to overcome the limitations of existing approaches. Leveraging eight years of newly reported dengue case data, along with socioeconomic factors such as human mobility, weekly climate data from 2011 to 2018, genetic data detecting the introduction and presence of new strains, and estimates of seropositivity for different districts in Sri Lanka, we derive a data-driven vector (SEI) to human (SEIR) model across 16 regions in Sri Lanka at the weekly time scale. Ablation studies determined the lag effects of time-varying climate factors, allowing delays of up to 12 weeks. The model demonstrates superior predictive performance over a pure machine learning approach at lead times of 5 and 10 weeks on data withheld from model fitting. It further reveals several interesting, interpretable findings on drivers while adjusting for the dynamics and influences of immunity and the introduction of a new strain.
The study uncovers strong influences of socioeconomic variables: population density, mobility, household income, and rural versus urban population. It reveals substantial sensitivity to the diurnal temperature range and precipitation, while mean temperature and humidity appear less important in the study location. Additionally, the model indicated sensitivity to the vegetation index, both maximum and average. Predictions on test data reveal high model accuracy. Overall, this study advances the knowledge of dengue transmission in Sri Lanka and demonstrates the importance of hybrid modelling techniques that combine biologically informed model structures with flexible, data-driven estimates of model parameters. The findings show the potential both for inference of drivers in situations of complex disease dynamics and for robust forecasting models.
Keywords: compartmental model, climate, dengue, machine learning, social-economic
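The mechanistic backbone of such a hybrid model is a compartmental system whose transmission rate is supplied by the data-driven component. A stripped-down, human-side-only sketch (all parameter values are illustrative, and the vector SEI stage is folded into a single constant transmission rate β, which in the actual model would be a learned function of climate and socioeconomic covariates):

```python
def seir_step(state, beta, sigma=1/5.9, gamma=1/7.0, dt=1.0):
    """One forward-Euler step of a human SEIR model (rates per day).
    sigma: incubation rate; gamma: recovery rate."""
    S, E, I, R = state
    N = S + E + I + R
    new_inf = beta * S * I / N       # frequency-dependent transmission
    dS = -new_inf
    dE = new_inf - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return (S + dt*dS, E + dt*dE, I + dt*dI, R + dt*dR)

# Seed an epidemic in a population of one million and run for a year.
state = (1e6 - 10, 0.0, 10.0, 0.0)
for day in range(365):
    beta = 0.35   # placeholder; the hybrid model makes this covariate-driven
    state = seir_step(state, beta)
```

Here β/γ gives a basic reproduction number of about 2.45, so the simulated outbreak infects most of the population before burning out; the data-driven β is what lets the model adjust this dynamics region by region and week by week.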
Procedia PDF Downloads 89
106 MicroRNA Drivers of Resistance to Androgen Deprivation Therapy in Prostate Cancer
Authors: Philippa Saunders, Claire Fletcher
Abstract:
INTRODUCTION: Prostate cancer is the most prevalent malignancy affecting Western males. It is initially an androgen-dependent disease: androgens bind to the androgen receptor and drive the expression of genes that promote proliferation and evasion of apoptosis. Despite reduced androgen dependence in advanced prostate cancer, androgen receptor signaling remains a key driver of growth. Androgen deprivation therapy (ADT) is, therefore, a first-line treatment approach and works well initially, but resistance inevitably develops. Abiraterone and Enzalutamide are drugs widely used in ADT and are androgen synthesis and androgen receptor signaling inhibitors, respectively. The shortage of other treatment options means acquired resistance to these drugs is a major clinical problem. MicroRNAs (miRs) are important mediators of post-transcriptional gene regulation and show altered expression in cancer. Several have been linked to the development of resistance to ADT. Manipulation of such miRs may be a pathway to breakthrough treatments for advanced prostate cancer. This study aimed to validate ADT resistance-implicated miRs and their clinically relevant targets. MATERIAL AND METHOD: Small RNA-sequencing of Abiraterone- and Enzalutamide-resistant C42 prostate cancer cells identified subsets of miRs dysregulated as compared to parental cells. Real-Time Quantitative Reverse Transcription PCR (qRT-PCR) was used to validate altered expression of candidate ADT resistance-implicated miRs 195-5p, 497-5p and 29a-5p in ADT-resistant and -responsive prostate cancer cell lines, patient-derived xenografts (PDXs) and primary prostate cancer explants. RESULTS AND DISCUSSION: This study suggests a possible role for miR-497-5p in the development of ADT resistance in prostate cancer. MiR-497-5p expression was increased in ADT-resistant versus ADT-responsive prostate cancer cells. 
Importantly, miR-497-5p expression was also increased in Enzalutamide-treated, castrated (ADT-mimicking) PDXs versus intact PDXs. MiR-195-5p was also elevated in ADT-resistant versus -responsive prostate cancer cells, while there was a drop in miR-29a-5p expression. Candidate clinically relevant targets of miR-497-5p in prostate cancer were identified by mining AGO-PAR-CLIP-seq data sets and may include AVL9 and FZD6. CONCLUSION: In summary, this study identified microRNAs that are implicated in prostate cancer resistance to androgen deprivation therapy and could represent novel therapeutic targets for advanced disease.
Keywords: microRNA, androgen deprivation therapy, Enzalutamide, abiraterone, patient-derived xenograft
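The qRT-PCR expression changes validated above are conventionally quantified with the 2^-ΔΔCt method, comparing the target miR against a reference gene in treated versus control samples. A minimal sketch (the Ct values in the example are invented for illustration, not measurements from this study):

```python
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ΔΔCt method.
    Lower Ct means more template, hence the negative exponent."""
    d_ct_treated = ct_target_treated - ct_ref_treated    # ΔCt, treated sample
    d_ct_control = ct_target_control - ct_ref_control    # ΔCt, control sample
    dd_ct = d_ct_treated - d_ct_control                  # ΔΔCt
    return 2.0 ** (-dd_ct)

# e.g. a miR whose ΔΔCt is -2 relative to control shows a 4-fold increase
increase = fold_change(24.0, 18.0, 26.0, 18.0)   # 4.0
```

A fold change above 1 corresponds to the kind of elevation reported for miR-497-5p and miR-195-5p in the resistant lines; below 1 to the drop seen for miR-29a-5p.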
Procedia PDF Downloads 148
105 South-Mediterranean Oaks Forests Management in Changing Climate Case of the National Park of Tlemcen-Algeria
Authors: K. Bencherif, M. Bellifa
Abstract:
The climatic changes expected in North Africa are an increase in both the intensity and frequency of summer droughts and a reduction in water availability during the growing season. The existing coppices and forest formations in the national park of Tlemcen are dominated by holm oak, zen oak, and cork oak. These open, fragmented structures do not seem strong enough to offer durable protection against climate change. Given the observed climatic trend, the objective is to analyze the climatic context and its evolution, taking into account the likely behaviour of the oak species over the next 20-30 years on the one hand, and the landscape context in relation to the most suitable silvicultural models, especially with respect to human activities, on the other. The study methodology is based on climatic synthesis and on floristic and spatial analysis. Meteorological data for the decade 1989-2009 are used to characterize the current climate. Another approach, based on dendrochronological analysis of a 120-year-old Aleppo pine stem sampled in the park, is used to analyze climate evolution over a century. Results on climate evolution over 50 years, obtained through predictive climate models, are used to project the climatic trend in the park. Spatially, stratified sampling was carried out in each forest unit of the park to reduce heterogeneity and to easily delineate the different stands using GPS. Results from a previous study are used to analyze the anthropogenic factor. According to the forecasts for the period 2025-2100, the number of warm days with temperatures over 25°C would increase from 30 to 70. The monthly mean maximum (M) and minimum (m) temperatures would rise from 30.5°C to 33°C and from 2.3°C to 4.8°C, respectively. With an average drop of 25%, precipitation would be reduced to 411.37 mm.
These new data highlight the importance of fire risk and of the water stress which would affect the vegetation and the regeneration process. Spatial analysis highlights the forest and agricultural dimensions of the park compared with urban habitat and bare soils. Maps show both the state of fragmentation and the regression of forest area (50% of the total area). At the level of the park, fires have already affected all cover types, creating low structures of various densities. On the silvicultural side, zen oak forms pure stands in places, and this expansion must be considered a natural tendency in which zen oak becomes the structuring species. Climate-related changes are minor compared with the real impact that South-Mediterranean forests are undergoing because of the human pressures they endure. Nevertheless, the hardwood oak stands in the national park of Tlemcen will have to face unexpected climate changes, such as a changing rainfall regime associated with a longer period of water stress, heavy rainfall, and/or sudden cold snaps. Faced with these new conditions, management based on a mixed, uneven-aged high forest method promoting the more dynamic species could be an appropriate measure.
Keywords: global warming, mediterranean forest, oak shrub-lands, Tlemcen
Procedia PDF Downloads 392
104 Water Supply and Demand Analysis for Ranchi City under Climate Change Using Water Evaluation and Planning System Model
Authors: Pappu Kumar, Ajai Singh, Anshuman Singh
Abstract:
Different water user sectors, such as rural and urban supply, mining, subsistence and commercial irrigated agriculture, commercial forestry, industry, and power generation, are present in the Subarnarekha River Basin catchment and Ranchi city, and there is an inequity issue in access to water. Development of rural areas, construction of new power generation plants, population growth, the need to meet currently unmet water demand, the consideration of environmental flows, and the revitalization of small-scale irrigation schemes are all going to increase water demands in almost all of the water-stressed catchment. The WEAP model was developed by the Stockholm Environment Institute (SEI) to enable evaluation of planning and management issues associated with water resources development. The WEAP model can be used for both urban and rural areas and can address a wide range of issues, including sectoral demand analyses, water conservation, water rights and allocation priorities, river flow simulation, reservoir operation, ecosystem requirements, and project cost-benefit analyses. The model is a tool for integrated water resource management and planning, covering forecasting of water demand, supply, inflows, outflows, water use, reuse, water quality, priority areas, and hydropower generation. In the present study, efforts have been made to assess the utility of the WEAP model for water supply and demand analysis for Ranchi city. Detailed work was carried out to ascertain whether the WEAP model could generate different scenarios of water requirements that could help future water planning. The water supplied to Ranchi city is mostly contributed by the study river, the Hatiya reservoir, and groundwater. Data were collected from various agencies, such as PHE Ranchi, the 2011 census, the Doranda reservoir, and the meteorology department. These collected and generated data were given as input to the WEAP model.
The model generated discharge trends for the study river up to 2050 and, at the same time, generated scenarios of future demand and supply. The model outputs predict a water requirement of 12 million litres. The results will help in drafting future policies regarding water supply and demand under changing climatic scenarios.
Keywords: WEAP model, water demand analysis, Ranchi, scenarios
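Demand-scenario generation of the kind WEAP performs ultimately rests on simple arithmetic: projected population multiplied by a per-capita supply norm. A toy sketch of one such scenario (the growth rate, population figure, and the 135 litres-per-capita-per-day norm below are our assumptions for illustration, not figures from the study or from WEAP itself):

```python
def projected_demand_mld(current_population, annual_growth_rate, years, lpcd=135.0):
    """Municipal demand in million litres per day (MLD), assuming geometric
    population growth and a fixed per-capita supply norm (litres/capita/day)."""
    future_population = current_population * (1.0 + annual_growth_rate) ** years
    return future_population * lpcd / 1e6

# e.g. a city of 1.1 million growing at 2%/year, projected 25 years ahead
demand_2050 = projected_demand_mld(1_100_000, 0.02, 25)
```

WEAP layers sectoral splits, losses, reuse, and supply-side constraints on top of this kind of demand driver; the sketch shows only the core projection step.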
Procedia PDF Downloads 421
103 Physiological Effects on Scientist Astronaut Candidates: Hypobaric Training Assessment
Authors: Pedro Llanos, Diego García
Abstract:
This paper aims to expand our understanding of the effects of hypoxia training on the body, to better model its dynamics and leverage some of its implications for human health. Hypoxia training is a recommended practice for military and civilian pilots, allowing them to recognize their early hypoxia signs and symptoms, and for Scientist Astronaut Candidates (SACs), who underwent hypobaric hypoxia (HH) exposure as a training activity for prospective suborbital flight applications. This observational-analytical study describes the physiologic responses and symptoms experienced by a SAC group before, during, and after HH exposure, and proposes a model for assessing predicted versus observed physiological responses. A group of individuals with diverse Science, Technology, Engineering, and Mathematics (STEM) backgrounds underwent a hypobaric training session at an altitude of up to 22,000 ft (FL220), or 6,705 meters, during which heart rate (HR), breathing rate (BR), and core temperature (Tc) were monitored with a chest strap sensor before and after HH exposure. A pulse oximeter registered oxygen saturation (SpO2) levels and the number and duration of desaturations during the HH chamber flight. Hypoxia symptoms described by the SACs during the HH training session were also recorded. These data allowed the generation of a preliminary predictive model of the oxygen desaturation and O2 pressure curve for each subject, consisting of a sixth-order polynomial fit during exposure and a fifth- or fourth-order polynomial fit during recovery. Data analysis showed no significant differences in HR and BR between pre- and post-HH exposure in most of the SACs, while Tc measures showed slight but consistent decreases.
All subjects registered SpO2 greater than 94% for the majority of their individual HH exposures, but all presented at least one clinically significant desaturation (SpO2 < 85% for more than 5 seconds), and half of the individuals showed SpO2 below 87% for at least 30% of their HH exposure time. Real-time collection of HH symptoms identified temperature somatosensory perceptions (SP) in 65% of individuals and task-focus issues in 52.5% of individuals as the most common HH indications. 95% of the subjects experienced HH onset symptoms below FL180, and all participants achieved full recovery from HH symptoms within 1 minute of donning their O2 mask. The current HH study performed on this group of individuals suggests a rapid and fully reversible physiologic response after HH exposure, as expected and as obtained in previous studies. Our data showed consistent agreement between predicted and observed SpO2 curves during HH, suggesting a mathematical function that may be used to model HH performance deficiencies. During the HH study, real-time HH symptoms were registered, evidencing SP and task-focus issues as the earliest and most common indicators. Finally, an assessment of HH signs and symptoms in a heterogeneous group of non-pilot individuals showed results similar to previous studies in homogeneous populations of pilots.
Keywords: slow onset hypoxia, hypobaric chamber training, altitude sickness, symptoms and altitude, pressure cabin
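The per-subject SpO2 curve modelling described above amounts to a least-squares polynomial fit of the recorded saturation trace. A sketch of the exposure-phase fit (the synthetic desaturation trace below is ours; the real curves came from pulse-oximeter recordings):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 60)        # minutes at simulated altitude (synthetic)
# A gently accelerating desaturation with measurement noise, standing in
# for a real SpO2 recording during the chamber flight.
spo2 = 97.0 - 1.4 * t + 0.05 * t**2 + rng.normal(0.0, 0.3, t.size)

coeffs = np.polyfit(t, spo2, deg=6)   # sixth-order fit, as used for the exposure phase
fitted = np.polyval(coeffs, t)
rmse = float(np.sqrt(np.mean((fitted - spo2) ** 2)))
```

The recovery phase would be fitted the same way with `deg=5` or `deg=4`; comparing fitted against observed curves is what supports the predicted-versus-observed assessment reported above.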
Procedia PDF Downloads 117
102 Microbial Resource Research Infrastructure: A Large-Scale Research Infrastructure for Microbiological Services
Authors: R. Hurtado-Ortiz, D. Clermont, M. Schüngel, C. Bizet, D. Smith, E. Stackebrandt
Abstract:
Microbiological resources and their derivatives are the essential raw material for the advancement of human health, agro-food, food security, biotechnology, and research and development in all the life sciences. Microbial resources, and their genetic and metabolic products, are utilised in many areas, such as the production of healthy and functional food, the identification of new antimicrobials against emerging and resistant pathogens, the fight against agricultural disease, the identification of novel energy sources based on microbial biomass, and screening for new active molecules for the bio-industries. The complexity of public collections and of the distribution and use of biological material (not only living material but also DNA, services, training, consultation, etc.), together with the breadth of the service offer, demands the coordination and sharing of policies, processes, and procedures. The Microbial Resource Research Infrastructure (MIRRI) is an initiative within the European Strategy Forum on Research Infrastructures (ESFRI) that brings together 16 partners, including 13 European public microbial culture collections and biological resource centres (BRCs), supported by several European and non-European associated partners. The objective of MIRRI is to support innovation in microbiology by providing a one-stop shop for well-characterized microbial resources and high-quality services on a not-for-profit basis for biotechnology, in support of microbiological research. In addition, MIRRI contributes to structuring microbial resource capacity at both the national and European levels. This will facilitate access to microorganisms for biotechnology and the enhancement of the bio-economy in Europe. MIRRI will overcome the fragmentation of access to current resources and services, develop harmonised strategies for the delivery of associated information, ensure biosecurity and other regulatory conditions for access, and promote the uptake of these resources into European research.
Data mining of the current information landscape is needed to discover potential, drive innovation, and ensure the uptake of high-quality microbial resources into research. MIRRI is in its Preparatory Phase, focusing on governance and structure, including technical, legal, and financial issues. MIRRI will help the Biological Resource Centres to work more closely with policy makers, stakeholders, funders, and researchers to deliver the resources and services needed for innovation.
Keywords: culture collections, microbiology, infrastructure, microbial resources, biotechnology
Procedia PDF Downloads 447
101 Company's Orientation and Human Resource Management Evolution in Technological Startup Companies
Authors: Yael Livneh, Shay Tzafrir, Ilan Meshoulam
Abstract:
Technological startup companies have been recognized as bearing tremendous potential for business and economic success. However, many entrepreneurs who produce promising innovative ideas fail to turn them into successful businesses. A key explanation for such failure is the entrepreneurs' lack of competence in adapting the level of formality of human resource management (HRM). The purpose of the present research was to examine multiple antecedents and consequences of HRM formality in growing startup companies. A review of the research literature identified two central components of HRM formality: HR control and professionalism. The effect of three contextual predictors was examined. The first was an intra-organizational factor: the development level of the organization. We drew on the distinction between knowledge exploration and knowledge exploitation. At a given time, the organization chooses to focus on a specific mix of these orientations, a choice which requires an appropriate level of HRM formality in order to efficiently overcome the challenges involved. It was hypothesized that the mix of knowledge exploration and knowledge exploitation orientations would predict HRM formality. The second predictor was the personal characteristics of the organization's leader. Following the idea of the blueprint effect of CEOs on HRM, it was hypothesized that the CEO's cognitive style would predict HRM formality. The third contextual predictor was an external organizational factor: the level of investor involvement. Drawing on agency theory and Transaction Cost Economics, it was hypothesized that the level of investor involvement in general management and HRM would be positively related to HRM formality. The effect of formality on trust was examined both directly and indirectly through the mediating role of procedural justice. The research method was a time-lagged field study.
In the first study, data were obtained using three questionnaires, each directed to a different source: the CEO, the HR position-holder, and employees. 43 companies participated in this study. The second study was conducted approximately a year later, and data were collected again by reapplying the same three questionnaires to the same sample. 41 companies participated in the second study. Both samples comprised technological startup companies, and the two studies together included 884 respondents. The results were consistent across the two studies. HRM formality was predicted by the intra-organizational factor as well as by the personal characteristics of the CEO, but not at all by the external organizational context. Specifically, organizational orientation was the greatest contributor to both components of HRM formality, while cognitive style predicted formality to a lesser extent. Investor involvement was found to have no predictive effect on HRM formality. The results indicated a positive contribution of formality to trust in HRM, mainly through the mediation of procedural justice. This study contributes a new concept for technological startup company development based on a mixture of organizational orientations. The practical implication is that the level of HRM formality should be matched to the company's stage of development; this match should be challenged and adjusted periodically with reference to the organization's orientation, the relevant HR practices, and the characteristics of the HR function. A suitable match could further enhance trust and business success.
Keywords: control, formality, human resource management, organizational development, professionalism, technological startup company
Procedia PDF Downloads 267
100 Optimizing Data Transfer and Processing in Multi-Cloud Environments for Big Data Workloads
Authors: Gaurav Kumar Sinha
Abstract:
In an era defined by the proliferation of data and the utilization of cloud computing environments, the efficient transfer and processing of big data workloads across multi-cloud platforms have emerged as critical challenges. This research paper embarks on a comprehensive exploration of the complexities associated with managing and optimizing big data in a multi-cloud ecosystem. The foundation of this study is rooted in the recognition that modern enterprises increasingly rely on multiple cloud providers to meet diverse business needs, enhance redundancy, and reduce vendor lock-in. As a consequence, managing data across these heterogeneous cloud environments has become intricate, necessitating innovative approaches to ensure data integrity, security, and performance. The primary objective of this research is to investigate strategies and techniques for enhancing the efficiency of data transfer and processing in multi-cloud scenarios. It recognizes that big data workloads are characterized by their sheer volume, variety, velocity, and complexity, making traditional data management solutions insufficient for harnessing the full potential of multi-cloud architectures. The study commences by elucidating the challenges posed by multi-cloud environments in the context of big data. These challenges encompass data fragmentation, latency, security concerns, and cost optimization. To address these challenges, the research explores a range of methodologies and solutions. One of the key areas of focus is data transfer optimization. The paper delves into techniques for minimizing data movement latency, optimizing bandwidth utilization, and ensuring secure data transmission between different cloud providers. It evaluates the applicability of dedicated data transfer protocols, intelligent data routing algorithms, and edge computing approaches in reducing transfer times. Furthermore, the study examines strategies for efficient data processing across multi-cloud environments.
It acknowledges that big data processing requires distributed and parallel computing capabilities that span cloud boundaries. The research investigates containerization and orchestration technologies, serverless computing models, and interoperability standards that facilitate seamless data processing workflows. Security and data governance are paramount concerns in multi-cloud environments. The paper explores methods for ensuring data security, access control, and compliance with regulatory frameworks. It considers encryption techniques, identity and access management, and auditing mechanisms as essential components of a robust multi-cloud data security strategy. The research also evaluates cost optimization strategies, recognizing that the dynamic nature of multi-cloud pricing models can impact the overall cost of data transfer and processing. It examines approaches for workload placement, resource allocation, and predictive cost modeling to minimize operational expenses while maximizing performance. Moreover, this study provides insights into real-world case studies and best practices adopted by organizations that have successfully navigated the challenges of multi-cloud big data management. It presents a comparative analysis of various multi-cloud management platforms and tools available in the market. Keywords: multi-cloud environments, big data workloads, data transfer optimization, data processing strategies
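The cost-aware workload placement mentioned above can be sketched briefly. The following is a minimal illustration only, not any real provider's pricing: a predicted per-workload cost combining compute and egress (transfer-out) charges, used to choose the cheapest placement. The provider names and tariffs are invented.

```python
# Hypothetical sketch of predictive cost modeling for workload placement.
# Tariffs and provider names are illustrative, not real cloud pricing.

def predicted_cost(tariff, compute_hours, gb_egress):
    """Total cost = compute charge + egress (transfer-out) charge."""
    return (tariff["compute_per_hour"] * compute_hours
            + tariff["egress_per_gb"] * gb_egress)

def cheapest_placement(tariffs, compute_hours, gb_egress):
    """Return (provider, cost) minimizing the predicted cost."""
    return min(
        ((name, predicted_cost(t, compute_hours, gb_egress))
         for name, t in tariffs.items()),
        key=lambda pair: pair[1],
    )

tariffs = {
    "cloud_a": {"compute_per_hour": 0.50, "egress_per_gb": 0.09},
    "cloud_b": {"compute_per_hour": 0.40, "egress_per_gb": 0.12},
}

provider, cost = cheapest_placement(tariffs, compute_hours=100, gb_egress=500)
```

In practice the tariff table would be refreshed from provider price lists, and the model extended with latency and compliance constraints; the sketch only shows the cost-minimization core.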
Procedia PDF Downloads 71
99 Two-Stage Estimation of Tropical Cyclone Intensity Based on Fusion of Coarse and Fine-Grained Features from Satellite Microwave Data
Authors: Huinan Zhang, Wenjie Jiang
Abstract:
Accurate estimation of tropical cyclone intensity is of great importance for disaster prevention and mitigation. Existing techniques are largely based on satellite imagery data, and the study and utilization of the inner thermal core structure of tropical cyclones still pose challenges. This paper presents a two-stage tropical cyclone intensity estimation network based on the fusion of coarse- and fine-grained features from microwave brightness temperature data. The data used in this network are obtained from the thermal core structure of tropical cyclones through Advanced Technology Microwave Sounder (ATMS) inversion. First, the thermal core information in the pressure direction is comprehensively expressed through the maximal intensity projection (MIP) method, constructing coarse-grained thermal core images that represent the tropical cyclone. In the first stage, these images provide a wind speed estimate based on coarse-grained features. Then, based on this result, fine-grained features are extracted by combining thermal core information from multiple view profiles with a distributed network, and fused with the coarse-grained features from the first stage to obtain the final two-stage wind speed estimate. Furthermore, to better capture the long-tail distribution characteristics of tropical cyclones, focal loss is used in the coarse-grained loss function of the first stage, and ordinal regression loss is adopted in the second stage in place of traditional single-value regression. The selected tropical cyclones span 2012 to 2021 in the North Atlantic (NA) region. The training set covers 2012 to 2017, the validation set 2018 to 2019, and the test set 2020 to 2021.
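The focal loss used in the first stage can be written compactly. Below is a minimal sketch of the standard binary formulation with the usual gamma and alpha parameters; the paper's exact multi-class weighting is not specified here, so this is illustrative only.

```python
import math

# A minimal sketch of focal loss (binary form), which down-weights easy
# examples so training focuses on the rare, hard ones in a long-tail
# distribution. Parameter values are the commonly used defaults, assumed
# here rather than taken from the paper.
def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """p: predicted probability of class 1; y: true label (0 or 1)."""
    p_t = p if y == 1 else 1.0 - p          # probability of the true class
    a_t = alpha if y == 1 else 1.0 - alpha  # class-balancing weight
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)

# With gamma = 0 and alpha = 1 the expression reduces to plain cross-entropy.
```

The `(1 - p_t) ** gamma` factor is what shrinks the loss of well-classified (easy) samples toward zero, which is why the loss suits the long-tail intensity classes described above.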
Based on the Saffir-Simpson Hurricane Wind Scale (SSHS), this paper categorizes tropical cyclones into three major classes: pre-hurricane, minor hurricane, and major hurricane, achieving a classification accuracy of 86.18% and an intensity estimation error of 4.01 m/s for the NA region. The results indicate that thermal core data can effectively represent the level and intensity of tropical cyclones, warranting further exploration of tropical cyclone attributes with these data. Keywords: artificial intelligence, deep learning, data mining, remote sensing
Procedia PDF Downloads 68
98 Copyright Clearance for Artificial Intelligence Training Data: Challenges and Solutions
Authors: Erva Akin
Abstract:
The use of copyrighted material for machine learning purposes is a challenging issue in the field of artificial intelligence (AI). While machine learning algorithms require large amounts of data to train and to improve their accuracy and creativity, the use of copyrighted material without permission from the authors may infringe on their intellectual property rights. To overcome the legal hurdle that copyright poses to the sharing, access, and re-use of data, the use of copyrighted material for machine learning purposes may be considered permissible under certain circumstances. For example, if the copyright holder has given permission to use the data through a licensing agreement, then use for machine learning purposes may be lawful. It is also argued that copying for non-expressive purposes that do not involve conveying expressive elements to the public, such as automated data extraction, should not be seen as infringing. The focus of such ‘copy-reliant technologies’ is on understanding language rules, styles, and syntax; no creative ideas are being used. However, the non-expressive use defense sits within the framework of the fair use doctrine, which allows the use of copyrighted material for research or educational purposes. Questions arise because the fair use doctrine is not available in EU law; instead, the InfoSoc Directive provides for a rigid system of exclusive rights with a list of exceptions and limitations. One could only argue that non-expressive uses of copyrighted material for machine learning purposes do not constitute a ‘reproduction’ in the first place. Nevertheless, the use of machine learning with copyrighted material remains difficult because EU copyright law applies to the mere use of the works. Two solutions can be proposed to address the problem of copyright clearance for AI training data.
The first is to introduce a broad exception for text and data mining, either mandatory or limited to commercial and scientific purposes. The second is for copyright laws to permit the reproduction of works for non-expressive purposes, which opens the door to discussions regarding the transposition of the fair use principle from the US into EU law. Both solutions aim to give AI developers more space to operate and encourage greater freedom, which could lead to more rapid innovation in the field. The Data Governance Act presents a significant opportunity to advance these debates. Finally, issues concerning the balance between general public interests and legitimate private interests in machine learning training data must be addressed. In my opinion, it is crucial that robot-created output should fall into the public domain. Machines depend on human creativity, innovation, and expression. To encourage technological advancement and innovation, freedom of expression and of business operation must be prioritised. Keywords: artificial intelligence, copyright, data governance, machine learning
Procedia PDF Downloads 87
97 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models
Authors: V. Mantey, N. Findlay, I. Maddox
Abstract:
The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves become more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of data available is impossible to examine manually. The question therefore becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery, with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources to solve this problem, and models have been developed to detect buildings in optical satellite imagery. However, by and large, most models focus on affluent regions, where buildings are generally larger and constructed further apart. This work is focused on the more difficult problem of detection in densely populated regions. The primary challenge in detecting small buildings in such regions lies in both the spatial and spectral resolution of the optical sensor. Densely packed buildings with similar construction materials are difficult to separate because of their similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that training models until they overfit the input sample can perform better in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires less input data. The model developed for this study has also been fine-tuned using existing, open-source building vector datasets. This is particularly valuable in the context of disaster relief, where information is required in a very short time span.
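The deliberate-overfitting strategy above can be illustrated with a toy memorizing model: a 1-nearest-neighbour classifier reproduces its training labels exactly (zero training error), which is tolerable precisely when the model is only applied to the same narrow region it was trained on. The feature vectors (e.g. mean intensity and local contrast) and labels below are invented for illustration and are not the study's Mask-RCNN pipeline.

```python
# Toy sketch of an intentionally "overfitted" model: a 1-nearest-neighbour
# classifier that memorizes a small, local training sample. Features and
# labels are hypothetical.

def predict_1nn(train, x):
    """Return the label of the closest memorized training point."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(train, key=lambda pair: dist2(pair[0], x))[1]

train = [
    ((0.2, 0.9), "building"),
    ((0.8, 0.1), "background"),
    ((0.3, 0.7), "building"),
]

label = predict_1nn(train, (0.75, 0.15))  # a point near the "background" sample
```

The memorizer achieves perfect accuracy on its own training points by construction; the study's argument is that, with scarce data and a narrow target region, this specificity becomes an asset rather than a flaw.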
Leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest. The training period itself is also shorter for smaller datasets. Requiring less data means that only a few quality areas are necessary, so any weak or underpopulated regions in the data can be skipped in favor of areas with higher-quality vectors. In this study, a landcover classification model was developed in conjunction with the building detection tool to provide a secondary source for quality-checking the detected buildings. This has greatly reduced the false positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and have been employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are being used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data is scarce, overfitted models can be judiciously applied to solve time-sensitive problems. Keywords: building detection, disaster relief, mask-RCNN, satellite mapping
Procedia PDF Downloads 173
96 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance
Authors: Ammar Alali, Mahmoud Abughaban
Abstract:
Cost reduction and drilling optimization are the goals of many drilling operators. Historically, stuck pipe incidents have been a major segment of the costs associated with non-productive time (NPT). Traditionally, stuck pipe problems are treated as part of operations and solved after sticking occurs. However, the real key to savings and success is predicting stuck pipe incidents and avoiding the conditions leading to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and machine learning (ML) algorithms to predict drilling events in real time using surface drilling data with minimal computational power. The method combines two types of analysis: (1) real-time prediction and (2) cause analysis. Real-time prediction aggregates the input data, including historical drilling surface data, geological formation tops, and petrophysical data, from wells within the same field. The input data are then flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses two physical methods (stacking and flattening) to filter any noise in the signature and create a robust pre-determined pilot that adheres to the local geology. Once the drilling operation starts, the Wellsite Information Transfer Standard Markup Language (WITSML) live surface data are fed into a matrix and aggregated at a similar frequency to the pre-determined signature. Then, the matrix is correlated with the pre-determined stuck-pipe signature for the field, in real time. The correlation uses a machine learning Correlation-based Feature Selection (CFS) algorithm, which selects features relevant to the class while identifying redundant features. The correlation output is interpreted as a probability curve for stuck pipe incident prediction in real time.
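The CFS algorithm named above scores a candidate feature subset by its "merit": subsets whose features correlate strongly with the class but weakly with each other score highest. A minimal stdlib sketch follows; the channel names and values are invented, not the paper's well data.

```python
import math

# A sketch of correlation-based feature selection (CFS). A subset S of k
# features is scored by
#   merit(S) = k * r_cf / sqrt(k + k*(k-1) * r_ff),
# where r_cf is the mean |feature-class correlation| and r_ff the mean
# |feature-feature correlation|. Feature names/values are hypothetical.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def merit(features, cls, subset):
    """CFS merit of a feature subset; higher is better."""
    k = len(subset)
    r_cf = sum(abs(pearson(features[f], cls)) for f in subset) / k
    if k == 1:
        return r_cf
    pairs = [(a, b) for i, a in enumerate(subset) for b in subset[i + 1:]]
    r_ff = sum(abs(pearson(features[a], features[b])) for a, b in pairs) / len(pairs)
    return k * r_cf / math.sqrt(k + k * (k - 1) * r_ff)

# Hypothetical surface-drilling channels against a stuck-pipe indicator:
features = {
    "hookload": [1.0, 2.0, 3.0, 4.0],
    "torque":   [1.1, 2.1, 2.9, 4.2],
    "noise":    [5.0, 1.0, 4.0, 2.0],
}
cls = [1.0, 2.0, 3.0, 4.0]
```

Here `merit` penalizes adding a weakly relevant channel like `noise` to a strongly relevant one, which is how CFS flags redundant or irrelevant features.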
Once this probability passes a fixed threshold defined by the user, the other component, cause analysis, alerts the user of the expected incident based on the set of pre-determined signatures. A set of recommendations is then provided to reduce the associated risk. The validation process involved feeding historical drilling data from an onshore oil field as a live stream, mimicking actual drilling conditions. Pre-determined signatures were created beforehand for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully, with an accuracy of 76%. This accuracy of detection could have resulted in around a 50% reduction in NPT, equivalent to a 9% cost saving in comparison with offset wells. The prediction of stuck pipe problems requires a method that captures geological, geophysical, and drilling data and recognizes the indicators of this issue at the field and geological formation level. This paper illustrates the efficiency and robustness of the proposed cross-disciplinary approach in its ability to produce such signatures and predict this NPT event. Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe
Procedia PDF Downloads 237
95 Smart Services for Easy and Retrofittable Machine Data Collection
Authors: Till Gramberg, Erwin Gross, Christoph Birenbaum
Abstract:
This paper presents the approach of the Easy2IoT research project. Easy2IoT aims to enable companies in the prefabrication sheet metal and sheet metal processing industry to enter the Industrial Internet of Things (IIoT) with a low-threshold and cost-effective approach. It focuses on the development of physical hardware and software to easily capture machine activities from a sawing machine, benefiting various stakeholders in the SME value chain, including machine operators, tool manufacturers and service providers. The methodological approach of Easy2IoT includes an in-depth requirements analysis and customer interviews with stakeholders along the value chain. Based on these insights, actions, requirements and potential solutions for smart services are derived. The focus is on providing actionable recommendations, competencies and easy integration through no-/low-code applications to facilitate implementation and connectivity within production networks. At the core of the project is a novel, non-invasive measurement and analysis system that can be easily deployed and made IIoT-ready. This system collects machine data without interfering with the machines themselves, by non-invasively measuring the tension on a sawing machine. The collected data is then connected and analyzed using artificial intelligence (AI) to provide smart services through a platform-based application. Three smart services are being developed within Easy2IoT to provide immediate benefits to users. The first is wear-part and material condition monitoring with predictive maintenance for sawing processes: the non-invasive measurement system enables the monitoring of tool wear, such as saw blades, and of the quality of consumables and materials, so that service providers and machine operators can optimize maintenance and reduce downtime and material waste. The second is optimization of Overall Equipment Effectiveness (OEE) by monitoring machine activity.
The non-invasive system tracks machining times, setup times and downtime to identify opportunities for OEE improvement and to reduce unplanned machine downtime. The third smart service estimates CO2 emissions for connected machines: emissions are calculated over the entire life of the machine and for individual production steps based on captured power consumption data. This information supports energy management and product development decisions. The key to Easy2IoT is its modular and easy-to-use design. The non-invasive measurement system is universally applicable and does not require specialized knowledge to install. The platform application allows easy integration of various smart services and provides a self-service portal for activation and management. Innovative business models will also be developed to promote the sustainable use of the collected machine activity data. The project addresses the digitalization gap between large enterprises and SMEs. Easy2IoT provides SMEs with a concrete toolkit for IIoT adoption, facilitating the digital transformation of smaller companies, e.g. through the retrofitting of existing machines. Keywords: smart services, IIoT, IIoT-platform, Industrie 4.0, big data
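The OEE monitoring described above can be sketched with the standard decomposition OEE = availability × performance × quality. The shift figures below are illustrative, not project data.

```python
# A minimal sketch of the OEE calculation, assuming the standard
# decomposition OEE = availability x performance x quality.
# All figures are hypothetical.

def oee(planned_time, downtime, ideal_cycle_time, total_count, good_count):
    """planned_time/downtime in minutes; ideal_cycle_time in minutes/part."""
    run_time = planned_time - downtime
    availability = run_time / planned_time            # uptime share
    performance = (ideal_cycle_time * total_count) / run_time  # speed share
    quality = good_count / total_count                # good-part share
    return availability * performance * quality

# Example: an 8-hour (480 min) shift with 60 min downtime, 1 min ideal
# cycle time, 350 parts cut, 340 of them good.
value = oee(planned_time=480, downtime=60, ideal_cycle_time=1.0,
            total_count=350, good_count=340)
```

Tracking machining, setup and downtime as described feeds directly into the availability term; the captured counts feed performance and quality.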
Procedia PDF Downloads 77
94 Assessing P0.1 and Occlusion Pressures in Brain-Injured Patients on Pressure Support Ventilation: A Study Protocol
Authors: S. B. R. Slagmulder
Abstract:
Monitoring inspiratory effort and dynamic lung stress in patients on pressure support ventilation in the ICU is important for protecting against patient self-inflicted lung injury (P-SILI) and diaphragm dysfunction. Strategies to address the detrimental effects of excessive respiratory drive and effort can lead to improved patient outcomes. Two non-invasive estimation methods, occlusion pressure (Pocc) and P0.1, have been proposed for achieving lung- and diaphragm-protective ventilation. However, their relationship and interpretation in neuro-ICU patients are not well understood. P0.1 is the airway pressure measured during a 100-millisecond occlusion of the inspiratory port. It reflects the neural drive from the respiratory centers to the diaphragm and respiratory muscles, indicating the patient's respiratory drive at the initiation of each breath. Occlusion pressure, measured during a brief inspiratory pause against a closed airway, provides information about the strength of the inspiratory muscles and the total resistance and compliance of the respiratory system. Research Objective: Understanding the relationship between Pocc and P0.1 in brain-injured patients can provide insights into the interpretation of these values during pressure support ventilation. This knowledge can contribute to determining extubation readiness and optimizing ventilation strategies to improve patient outcomes. The central goal is to assess a study protocol for determining the relationship between Pocc and P0.1 in brain-injured patients on pressure support ventilation and their ability to predict successful extubation. Additionally, comparing these values between brain-injured and non-brain-injured patients may provide valuable insights. Key Areas of Inquiry: 1. How do Pocc and P0.1 values correlate within brain injury patients undergoing pressure support ventilation? 2. To what extent can Pocc and P0.1 values serve as predictive indicators for successful extubation in patients with brain injuries? 3.
What differentiates the Pocc and P0.1 values between patients with brain injuries and those without? Methodology: P0.1 and occlusion pressures are standard measurements for pressure support ventilation patients, taken by attending doctors as per protocol. We utilize electronic patient records for existing data. An unpaired t-test will be conducted to compare P0.1 and Pocc values between the two study groups. Associations between P0.1 and Pocc and other study variables, such as extubation, will be explored with simple regression and correlation analysis. Depending on how the data evolve, subgroup analyses will be performed for patients with and without extubation failure. Results: While it is anticipated that neuro patients may exhibit high respiratory drive, the link between such elevation, quantified by P0.1, and successful extubation remains unknown. The analysis will focus on determining the ability of these values to predict successful extubation and their potential impact on ventilation strategies. Conclusion: Further research is pending to fully understand the potential of these indices and their impact on mechanical ventilation in different patient populations and clinical scenarios. Understanding these relationships can aid in determining extubation readiness and tailoring ventilation strategies to improve patient outcomes in this specific patient population. Additionally, it is vital to account for the influence of sedatives, neurological scores, and BMI on respiratory drive and occlusion pressures to ensure a comprehensive analysis. Keywords: brain damage, diaphragm dysfunction, occlusion pressure, P0.1, respiratory drive
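The planned analysis can be sketched in a few lines of stdlib Python: an unpaired (Welch) t statistic comparing P0.1 between groups, and a Pearson correlation between P0.1 and Pocc. All values below are invented for illustration, not patient data.

```python
import math
from statistics import mean, variance

# Sketch of the protocol's analysis steps: a two-sample t statistic (Welch
# form, which does not assume equal variances) and a Pearson correlation.
# Sample values are hypothetical P0.1 measurements (cmH2O).

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    se2 = variance(a) / len(a) + variance(b) / len(b)
    return (mean(a) - mean(b)) / math.sqrt(se2)

def pearson_r(xs, ys):
    """Pearson correlation coefficient between paired measurements."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

brain_injured = [2.8, 3.5, 4.1, 3.0, 3.9]   # hypothetical P0.1 values
controls      = [1.5, 2.0, 1.8, 2.2, 1.7]
t = welch_t(brain_injured, controls)
```

In practice the t statistic would be converted to a p-value with the appropriate degrees of freedom (e.g. via a statistics package), and the same `pearson_r` applied to paired P0.1/Pocc readings.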
Procedia PDF Downloads 70
93 Metal Binding Phage Clones in a Quest for Heavy Metal Recovery from Water
Authors: Tomasz Łęga, Marta Sosnowska, Mirosława Panasiuk, Lilit Hovhannisyan, Beata Gromadzka, Marcin Olszewski, Sabina Zoledowska, Dawid Nidzworski
Abstract:
Toxic heavy metal ion contamination of industrial wastewater has recently become a significant environmental concern in many regions of the world. Although the majority of heavy metals are naturally occurring elements found on the earth's surface, anthropogenic activities such as mining and smelting, industrial production, and the agricultural use of metals and metal-containing compounds are responsible for the majority of environmental contamination and human exposure. The permissible limits (ppm) for heavy metals in food, water and soil are frequently exceeded and considered hazardous to humans, other organisms, and the environment as a whole. Human exposure to highly nickel-polluted environments causes a variety of pathologic effects. In 2008, nickel received the shameful name of “Allergen of the Year” (Gillette 2008). According to dermatologists, the frequency of nickel allergy is still growing, and it cannot be explained only by fashionable piercings and the nickel devices used in medicine (such as coronary stents and endoprostheses). Effective remediation methods for removing heavy metal ions from soil and water are therefore becoming increasingly important. Methods such as chemical precipitation, micro- and nanofiltration, membrane separation, conventional coagulation, electrodialysis, ion exchange, reverse and forward osmosis, photocatalysis and polymer or carbon nanocomposite absorbents have all been investigated so far. The importance of environmentally sustainable industrial production processes and the conservation of dwindling natural resources has highlighted the need for affordable, innovative biosorptive materials capable of recovering specific chemical elements from dilute aqueous solutions.
The use of combinatorial phage display techniques for selecting and recognizing material-binding peptides with a selective affinity for any target, particularly inorganic materials, has gained considerable interest in the development of advanced bio- and nano-materials. However, due to the limitations of phage display libraries and the biopanning process, the accuracy of molecular recognition for inorganic materials remains a challenge. This study presents the isolation, identification and characterisation of metal-binding phage clones that preferentially recover nickel. Keywords: heavy metal recovery, cleaning water, phage display, nickel
Procedia PDF Downloads 103
92 Application of Discrete-Event Simulation in Health Technology Assessment: A Cost-Effectiveness Analysis of Alzheimer’s Disease Treatment Using Real-World Evidence in Thailand
Authors: Khachen Kongpakwattana, Nathorn Chaiyakunapruk
Abstract:
Background: Decision-analytic models for Alzheimer’s disease (AD) have advanced to discrete-event simulation (DES), in which individual-level modelling of disease progression across continuous severity spectra and the incorporation of key parameters such as treatment persistence become feasible. This study aimed to apply DES to a cost-effectiveness analysis of AD treatment in Thailand. Methods: A dataset of Thai patients with AD, representing unique demographic and clinical characteristics, was bootstrapped to generate a baseline cohort of patients. Each patient was cloned and assigned to donepezil, galantamine, rivastigmine, memantine or no treatment. Throughout the simulation period, the model randomly assigned each patient to discrete events including hospital visits, treatment discontinuation and death. Correlated changes in cognitive and behavioral status over time were modelled using patient-level data. Treatment effects were obtained from the most recent network meta-analysis. Treatment persistence, mortality and predictive equations for functional status, costs (Thai baht (THB) in 2017) and quality-adjusted life years (QALYs) were derived from country-specific real-world data. The time horizon was 10 years, with a discount rate of 3% per annum. Cost-effectiveness was evaluated against a willingness-to-pay (WTP) threshold of 160,000 THB/QALY gained (4,994 US$/QALY gained) in Thailand. Results: Under a societal perspective, only the prescription of donepezil to AD patients of all disease-severity levels was found to be cost-effective. Compared with untreated patients, patients receiving donepezil incurred a discounted additional cost of 2,161 THB but experienced a discounted gain of 0.021 QALYs, resulting in an incremental cost-effectiveness ratio (ICER) of 138,524 THB/QALY (4,062 US$/QALY).
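The ICER decision logic used here is compact enough to sketch: the incremental cost per QALY gained, compared against the stated WTP threshold of 160,000 THB/QALY. The example deltas below are illustrative, not the study's figures.

```python
# A sketch of the ICER comparison: incremental cost per incremental QALY,
# judged against a willingness-to-pay (WTP) threshold. The default WTP is
# the 160,000 THB/QALY threshold stated in the abstract; the example
# cost/QALY deltas are hypothetical.

def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return delta_cost / delta_qaly

def cost_effective(delta_cost, delta_qaly, wtp=160_000):
    """This sketch assumes delta_qaly > 0 (the new option gains QALYs);
    it is cost-effective if the cost per QALY is at or below WTP."""
    return icer(delta_cost, delta_qaly) <= wtp

decision = cost_effective(delta_cost=50_000, delta_qaly=0.5)  # hypothetical
```

A full analysis also handles dominance cases (cheaper and more effective, or dearer and less effective), which this two-line sketch deliberately omits.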
Moreover, providing early treatment with donepezil to mild AD patients further reduced the ICER to 61,652 THB/QALY (1,808 US$/QALY). However, the dominance of donepezil appeared to wane when treatment was delayed to a subgroup of moderate and severe AD patients [ICER: 284,388 THB/QALY (8,340 US$/QALY)]. Introducing a treatment stopping rule for a mild AD cohort when the Mini-Mental State Exam (MMSE) score falls below 10 did not deteriorate the cost-effectiveness of donepezil at the current treatment persistence level. On the other hand, none of the AD medications was cost-effective when considered under a healthcare perspective. Conclusions: DES greatly enhances the real-world representativeness of decision-analytic models for AD. Under a societal perspective, treatment with donepezil improves patients' quality of life and is considered cost-effective when used to treat AD patients of all disease-severity levels in Thailand. The optimal treatment benefits are observed when donepezil is prescribed early in the course of AD. Given healthcare budget constraints in Thailand, implementation of donepezil coverage is most likely feasible when starting with mild AD patients, along with the stopping rule introduced. Keywords: Alzheimer's disease, cost-effectiveness analysis, discrete event simulation, health technology assessment
Procedia PDF Downloads 132
91 The Policia Internacional e de Defesa do Estado 1933–1969 and Valtiollinen Poliisi 1939–1948 on Screen: Comparing and Contrasting the Images of the Political Police in Portuguese and Finnish Films between the 1930s and the 1960s
Authors: Riikka Elina Kallio
Abstract:
The phrase “the walls have ears” defines the era of dictatorship in Portugal (1926–1974) and the decades of political unrest in Finland (1917–1948). It refers to the political (secret) police: the PIDE (Policia Internacional e de Defesa do Estado, 1933–1969) in Portugal and the VALPO (Valtiollinen Poliisi, 1939–1948) in Finland. Free speech in any public space, and even at private events, could be fatal: members of the PIDE/VALPO, or their informers and collaborators, could be listening. Strict censorship under Salazar's regime controlled the media, for example newspapers, music, and the film industry. Similarly, politically driven censorship influenced the media in Finland during those decades of unrest. This article examines the similarities and differences in the images of the political police in Finland and Portugal by analyzing Finnish and Portuguese films from the 1930s to the 1960s. The text addresses two main research questions: what are the common and different features in the representations of the Finnish and Portuguese political police in films between the 1930s and 1960s, and how did national censorship affect these representations? The approach of this study is interdisciplinary, combining film studies and criminology. Close reading is a practical qualitative method for analyzing films, and in this study close reading emphasizes the features of the police officer. Criminology provides the methodological tools for analyzing universal features of the police and common European policing policies. The characterization of the police in this study is based on Robert Reiner's 1980s and Timo Korander's 2010s definitions of the police officer. The research material consisted of Portuguese films from online film archives and of the Movie Making Finland project's metadata, which offered suitable material through data mining of keywords such as poliisi, poliisipäällikkö and konstaapeli (police, police chief, police constable).
The findings of this study suggest that even though the images of the political police in Finland and Portugal share common features, there are still national and cultural differences in their representations. Keywords: censorship, film studies, images, PIDE, political police, VALPO
Procedia PDF Downloads 74
90 Apatite Flotation Using Fruits' Oil as Collector and Sorghum as Depressant
Authors: Elenice Maria Schons Silva, Andre Carlos Silva
Abstract:
The growing demand for raw materials has increased mining activity. The mineral industry faces the challenge of processing more complex ores, with very small particles and low grades, together with constant pressure to reduce production costs and environmental impacts. Froth flotation deserves special attention among the concentration methods for mineral processing. Besides its great selectivity for different minerals, flotation is a highly efficient method for processing fine particles. The process is based on the surficial physicochemical properties of the minerals, and the separation is only possible with the aid of chemicals such as collectors, frothers, modifiers, and depressants. In order to use sustainable and eco-friendly reagents, oils extracted from three different vegetable species (pequi's pulp, macauba's nut and pulp, and Jatropha curcas) were studied and tested as apatite collectors. Since the oils are not soluble in water, an alkaline hydrolysis (saponification) was necessary before their contact with the minerals. The saponification was performed at room temperature. The tests with the new collectors were carried out at pH 9, and Flotigam 5806, a synthetic mix of fatty acids manufactured by Clariant and industrially adopted as an apatite collector, was used as the benchmark. In order to find a feasible replacement for cornstarch, the flour and starch of a graniferous variety of sorghum were tested as depressants. Apatite samples were used in the flotation tests. XRF (X-ray fluorescence), XRD (X-ray diffraction), and SEM/EDS (scanning electron microscopy with energy-dispersive spectroscopy) were used to characterize the apatite samples. Zeta potential measurements were performed in the pH range from 3.5 to 12.5. A commercial cornstarch was used as the depressant benchmark. Four depressant dosages and pH values were tested. A statistical test was used to verify the influence of pH, dosage, and starch type on mineral recovery.
For dosages equal to or higher than 7.5 mg/L, pequi oil recovered almost all apatite particles. On one hand, macauba's pulp oil showed excellent results at all dosages, with more than 90% apatite recovery; on the other hand, with the nut oil, the highest recovery found was around 84%. Jatropha curcas oil was the second-best oil tested, and more than 90% of the apatite particles were recovered at a dosage of 7.5 mg/L. Regarding the depressants, the lowest apatite recovery with sorghum starch was found at a dosage of 1,200 g/t and pH 11, resulting in a recovery of 1.99%. The apatite recovery under the same conditions was 1.40% for sorghum flour (approximately 30% lower). Compared with cornstarch under the same conditions, sorghum flour produced an apatite recovery 91% lower. Keywords: collectors, depressants, flotation, mineral processing
Procedia PDF Downloads 156
89 A Semi-Automated GIS-Based Implementation of Slope Angle Design Reconciliation Process at Debswana Jwaneng Mine, Botswana
Authors: K. Mokatse, O. M. Barei, K. Gabanakgosi, P. Matlhabaphiri
Abstract:
The mining of pit slopes is often associated with some level of deviation from design recommendations, and this may translate into changes in the stability of the excavated pit slopes. Therefore, slope angle design reconciliations are essential for assessing and monitoring the compliance of excavated pit slopes with accepted slope designs. These changes in slope stability may be reflected in the calculated factors of safety and/or probabilities of failure. Reconciliations of as-mined and design slope profiles are conducted periodically to assess the implications of these deviations for pit slope stability. Currently, the slope design reconciliation process implemented at Jwaneng Mine involves the measurement of as-mined and design slope angles along vertical sections cut along the established geotechnical design section lines in the GEOVIA GEMS™ software. Bench retention is calculated as the percentage of the available catchment area, less over-mined and under-mined areas, relative to the designed catchment area. This process has proven tedious, requiring considerable manual effort and time to execute. Consequently, a new semi-automated mine-to-design reconciliation approach that utilizes laser scanning and GIS-based tools is being proposed at Jwaneng Mine. This method involves high-resolution scanning of targeted bench walls, subsequent creation of 3D surfaces from point cloud data, and the derivation of slope toe lines and crest lines in the Maptek I-Site Studio software. The toe lines and crest lines are then exported to the ArcGIS software, where distance offsets between the design and actual bench toe lines and crest lines are calculated. Retained bench catchment capacity is measured as the distance between the toe lines and crest lines at the same bench elevations.
The assessment of the performance of the inter-ramp and overall slopes entails the measurement of excavated and design slope angles along vertical sections in the ArcGIS software. Excavated and design toe-to-toe or crest-to-crest slope angles are measured for inter-ramp stack slope reconciliations. Crest-to-toe slope angles are also measured for overall slope angle design reconciliations. The proposed approach allows for a more automated, accurate, quick, and easy workflow for carrying out slope angle design reconciliations. This process has proved highly effective and timely in the assessment of slope performance at Jwaneng Mine. This paper presents the newly proposed process for assessing compliance with slope angle designs at Jwaneng Mine.
Keywords: slope angle designs, slope design recommendations, slope performance, slope stability
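The two core geometric measurements in this reconciliation workflow, a slope angle taken between a toe point and a crest point along a section, and the horizontal offset between a design and an as-mined point, can be sketched as below. The coordinates and function names are illustrative only and are not part of the ArcGIS or I-Site Studio tooling itself.

```python
import math

def slope_angle(p_toe, p_crest):
    """Angle in degrees from a toe point (x, y, z) up to a crest point,
    measured from the horizontal, as in a crest-to-toe measurement."""
    dx = p_crest[0] - p_toe[0]
    dy = p_crest[1] - p_toe[1]
    dz = p_crest[2] - p_toe[2]
    horizontal = math.hypot(dx, dy)  # horizontal run between the two points
    return math.degrees(math.atan2(dz, horizontal))

def horizontal_offset(design_pt, actual_pt):
    """Horizontal distance between a design and an as-mined toe/crest point;
    larger values flag greater deviation from the accepted design."""
    return math.hypot(actual_pt[0] - design_pt[0], actual_pt[1] - design_pt[1])

# A 15 m high bench face with a 10 m horizontal run gives a ~56.3 degree angle.
print(round(slope_angle((0.0, 0.0, 100.0), (10.0, 0.0, 115.0)), 1))
print(horizontal_offset((0.0, 0.0, 100.0), (3.0, 4.0, 100.0)))
```

The same angle function covers toe-to-toe, crest-to-crest, and crest-to-toe measurements, depending on which pair of points is passed in.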
Procedia PDF Downloads 239
88 Land Art in Public Spaces Design: Remediation, Prevention of Environmental Risks and Recycling as a Consequence of the Avant-Garde Activity of Landscape Architecture
Authors: Karolina Porada
Abstract:
Over the last 40 years, there has been a trend in landscape architecture whose supporters do not perceive pro-ecological or postmodern solutions as the essential goal in the design of public green spaces, shifting their attention instead to the 'sculptural' shaping of areas with the use of slopes, hills, embankments, and other landforms. This group of designers can be considered avant-garde, and in its activities it refers to land art. Initial research shows that such applications are particularly frequent on former post-industrial sites and landfills, utilizing materials such as debris and post-mining waste in their construction. Due to the high degradation of the environment surrounding modern man, brownfields are a challenge and a field of interest for the representatives of the landscape architecture avant-garde, who through their projects try to recover lost land by means of transformations supported by engineering and ecological knowledge, creating places where nature can develop again. The analysis of a dozen or so sites made it possible to reach an important conclusion: apart from the cultural aspects (including artistic activities), green areas formally referring to land art are important in the remediation of post-industrial sites and in waste recycling (e.g. from construction sites). In these processes, there is also potential for applying the concept of Nature-Based Solutions, i.e. solutions allowing for the natural development of the site in such a way as to cope with environmental problems such as air pollution, soil contamination (through phytoremediation), and climate change. The paper presents examples of modern parks whose compositions are based on shaping the terrain in a way that refers to land art, at the same time providing examples of brownfield reuse and waste recycling.
For the purposes of the site analyses, research methods such as historical-interpretative studies, case studies, qualitative research, and the method of logical argumentation were used. The obtained results provide information about the role that landscape architecture can play in the remediation of degraded areas, while guaranteeing benefits such as visually attractive landscapes, low implementation costs, and improvement of the quality of the natural environment.
Keywords: brownfields, contemporary parks, landscape architecture, remediation
Procedia PDF Downloads 153
87 Experimental Recovery of Gold, Silver and Palladium from Electronic Wastes Using Ionic Liquids BmimHSO4 and BmimCl as Solvents
Authors: Lisa Shambare, Jean Mulopo, Sehliselo Ndlovu
Abstract:
One of the major challenges of sustainable development is promoting an industry which is both ecologically durable and economically viable. This requires processes that are material- and energy-efficient while also limiting the production of waste and toxic effluents through effective methods of process synthesis and intensification. In South Africa and globally, both miniaturisation and technological advances have substantially increased the amount of electronic waste (e-waste) generated annually. Vast amounts of e-waste are generated yearly, with only a minute quantity being recycled officially. The passion for electronic devices cannot ignore the scarcity and cost of mining the noble metal resources which contribute significantly to the efficiency of most electronic devices. It has hence become imperative, especially in an African context, that sustainable, environmentally friendly strategies be developed for recycling noble metals from e-waste. This paper investigates the recovery of gold, silver and palladium from electronic wastes, which contain a vast array of metals, using ionic liquids, which have the potential to reduce the gaseous and aqueous emissions associated with existing hydrometallurgical and pyrometallurgical technologies while also maintaining the economy of the overall recycling scheme through solvent recovery. The ionic liquid 1-butyl-3-methylimidazolium hydrogen sulphate (BmimHSO4), which behaves like a protic acid, was used in the present research for the selective leaching of gold and silver from e-waste. Different concentrations of the aqueous ionic liquid, ranging from 10% to 50%, were used in the experiments. Thiourea was used as the complexing agent, with Fe3+ as the oxidant. The pH of the reaction was maintained in the range of 0.8 to 1.5.
The preliminary investigations succeeded in leaching silver and palladium at room temperature, with optimum results at 48 hrs. The leaching results could not be fully explained, since palladium leached in the absence of gold; hence a conclusion could not be drawn from the leaching experiments, and further runs were needed. The leaching of palladium was carried out with hydrogen peroxide as the oxidant and 1-butyl-3-methylimidazolium chloride (BmimCl) as the solvent. These experiments were carried out at a temperature of 60 °C and a very low pH. The chloride ion was used to complex with the palladium metal. From the preliminary results, it could be concluded that pretreatment of the e-waste is necessary to improve the efficiency of the metal recovery process.
Keywords: BmimCl, BmimHSO4, gold, palladium, silver
Procedia PDF Downloads 292
86 Brand Positioning in Iran: A Case Study of the Professional Soccer League
Authors: Homeira Asadi Kavan, Seyed Nasrollah Sajjadi, Mehrzade Hamidi, Hossein Rajabi, Mahdi Bigdely
Abstract:
The positioning strategies of a sports brand can create a unique impression in the minds of fans, sponsors, and other stakeholders. In order to influence potential customers' perception in an effective and positive way, a brand's positioning strategy must be unique, credible, and relevant. Many sports clubs in Iran have been struggling to implement and achieve brand positioning, due to reasons such as lack of experience, a scarcity of experts in sports branding, and a lack of related research in this field. This study provides a comprehensive theoretical framework and action plan for sports managers and marketers to design and implement effective brand positioning, enabling them to distinguish their brands from competing brands and sports clubs. The study instrument is interviews with sports marketing and brand experts who have been working in this industry for a minimum of 20 years. Qualitative data analysis was performed using the Atlas.ti text mining software, version 7, and open, axial, and selective coding was employed to uncover and systematically analyze important and complex phenomena and elements. The findings show 199 effective elements in positioning strategies in the Iran Professional Soccer League. These elements are categorized into 23 concepts and sub-categories as follows: structural prerequisites, strategic management prerequisites, commercial prerequisites, major external prerequisites, brand personality, club symbols, emotional aspects, event aspects, fans' strategies, marketing information strategies, marketing management strategies, empowerment strategies, executive management strategies, league context, fans' background, market context, club's organizational context, support context, major contexts, political-legal elements, economic factors, social factors, and technological factors.
Eventually, the study model was developed around six main dimensions: causal prerequisites, the axial phenomenon (brand position), strategies, context factors, interfering factors, and consequences. Based on the findings, practical recommendations and strategies are suggested that can help club managers and marketers in developing and improving their respective clubs' brand positioning and activities.
Keywords: brand positioning, soccer club, sport marketing, Iran professional soccer league, brand strategy
Procedia PDF Downloads 139
85 Genetic Diversity of Norovirus Strains in Outpatient Children from Rural Communities of Vhembe District, South Africa, 2014-2015
Authors: Jean Pierre Kabue, Emma Meader, Afsatou Ndama Traore, Paul R. Hunter, Natasha Potgieter
Abstract:
Norovirus is now considered the most common cause of outbreaks of nonbacterial gastroenteritis. Limited data are available for Norovirus strains in Africa, especially in rural and peri-urban areas. Despite the excessive burden of diarrheal disease in developing countries, Norovirus infections have to date mostly been reported in developed countries. There is a need to investigate intensively the role of viral agents associated with diarrhea in different settings on the African continent. To determine the prevalence and genetic diversity of Norovirus strains circulating in rural communities in the Limpopo Province, South Africa, and to investigate the genetic relationships between Norovirus strains, a cross-sectional study was performed on human stools collected from rural communities. Between July 2014 and April 2015, outpatient children under 5 years of age from rural communities of the Vhembe District, South Africa, were enrolled in the study. A total of 303 stool specimens were collected from children with diarrhea (n=253) and without diarrhea (n=50). NoVs were identified using real-time one-step RT-PCR. Partial sequence analyses were performed to genotype the strains. Phylogenetic analyses were performed to compare the identified NoV genotypes to circulating strains worldwide. The Norovirus detection rate was 41.1% (104/253) in children with diarrhea. There was no significant difference (OR=1.24; 95% CI 0.66-2.33) in Norovirus detection between symptomatic and asymptomatic children. Comparison of the median Ct values for NoV in children with and without diarrhea revealed a statistically significant difference in estimated GII viral load between the two groups, with a much higher viral burden in children with diarrhea. To our knowledge, this is the first study reporting on the differences in estimated viral load of GII and GI NoV-positive cases and controls.
GII.Pe (n=9) was the predominant genotype, followed by the suspected recombinant GII.Pe/GII.4 Sydney 2012 (n=8) and GII.4 Sydney 2012 variants (n=7). Two unassigned GII.4 variants and an unusual RdRp genotype, GII.P15, were found. Of note, the rare GII.P15 identified in this study has a common ancestor with a GII.P15 strain from Japan previously reported as a GII/untypeable recombinant strain implicated in a gastroenteritis outbreak. To our knowledge, this is the first report of this unusual genotype on the African continent. Though not confirmed as predictive of diarrheal disease in this study, the high detection rate of NoV is an indication of the ongoing exposure of children from rural communities to enteric pathogens due to poor sanitation and hygiene practices. The results reveal that the difference between asymptomatic and symptomatic children with NoV may be related to the NoV genogroups involved. The findings emphasize NoV genetic diversity and the predominance of GII.Pe/GII.4 Sydney 2012, indicative of increased NoV activity. An uncommon GII.P15 and two unassigned GII.4 variants were also identified in rural settings of the Vhembe District, South Africa. NoV surveillance is required to help inform investigations into NoV evolution and to support vaccine development programmes in Africa.
Keywords: asymptomatic, common, outpatients, norovirus genetic diversity, sporadic gastroenteritis, South African rural communities, symptomatic
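The symptomatic-versus-asymptomatic comparison reported in this abstract is a standard 2x2 odds ratio with a Woolf (log-normal) confidence interval, which can be sketched as below. The diarrhea-arm counts (104 positive of 253) come from the abstract; the 18-of-50 split assumed for the asymptomatic group is an illustrative value only, since the abstract does not state it.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI for a 2x2 table:
    a, b = positives/negatives in one group; c, d = in the other."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log-odds-ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# NoV-positive / negative among children with diarrhea (from the abstract)...
a, b = 104, 253 - 104
# ...and among children without diarrhea (assumed split of the n=50 group).
c, d = 18, 50 - 18

or_, lo, hi = odds_ratio_ci(a, b, c, d)
print(f"OR={or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR=1.24 (95% CI 0.66-2.33)
```

With this assumed split, the sketch reproduces the interval quoted above, illustrating why a 1.24 odds ratio with such small group sizes is not statistically significant.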
Procedia PDF Downloads 200
84 Bacterial Recovery of Copper Ores
Authors: Zh. Karaulova, D. Baizhigitov
Abstract:
At the Aktogay deposit, the oxidized ore section has been developed since 2015; by now, the reserves of easily enriched ore are decreasing, and a large quantity of copper-poor, difficult-to-enrich ore has accumulated in the dumps of the KAZ Minerals Aktogay deposit, which is unprofitable to mine using traditional mining methods. Hence, another technology needs to be implemented, which will significantly expand the raw material base of copper production in Kazakhstan and ensure the efficient use of natural resources. Heap and dump bacterial recovery are the most suitable technologies for processing low-grade secondary copper sulfide ores. The test objects were the copper ores of the Aktogay deposit and the chemolithotrophic bacteria Leptospirillum ferrooxidans (L.f.), Acidithiobacillus caldus (A.c.), and Sulfobacillus acidophilus (S.a.), which were used as mixed cultures in the bacterial oxidation systems. They can stay active in the 20-40 °C temperature range. These bacteria are the most extensively studied and widely used in sulfide mineral recovery technology. Biocatalytic acceleration was achieved as a result of the bacteria oxidizing iron sulfides to form ferrous sulfate, which was subsequently oxidized to ferric sulfate. The following results have been achieved at the initial stage: the goal was to grow and maintain the activity of the bacterial cultures under laboratory conditions. These bacteria grew best within the pH 1.2-1.8 range with light stirring and in an aerated environment. The optimal growth temperature was 30-33 °C. The growth rate decreased by one-half for each 4-5 °C fall in temperature below 30 °C. At best, the number of bacteria doubled every 24 hours. Typically, the maximum concentration of cells that can be grown in ferrous solution is about 10⁷/ml. A further step in this research was the adaptation of the microorganisms to the environment of certain metals.
This was followed by mass production of the inoculum and its maintenance for further cultivation at factory scale. This was done by adding sulfide concentrate, allowing the bacteria to convert the ferrous sulfate as indicated by the Eh (>600 mV), then diluting to double the volume and adding concentrate to achieve the same metal level. This process was repeated until the desired metal level and volume were achieved. The final stage of bacterial recovery was the transportation and irrigation of the secondary sulfide copper ores of the oxidized ore section. In conclusion, the project was implemented at the Aktogay mine, since the bioleaching process was prolonged. Moreover, the method of bacterial recovery can compete well with existing non-biological methods of extracting metals from ores.
Keywords: bacterial recovery, copper ore, bioleaching, bacterial inoculum
Procedia PDF Downloads 79
83 Scoring System for the Prognosis of Sepsis Patients in Intensive Care Units
Authors: Javier E. García-Gallo, Nelson J. Fonseca-Ruiz, John F. Duitama-Munoz
Abstract:
Sepsis is a syndrome that presents with physiological and biochemical abnormalities induced by severe infection and carries high mortality and morbidity; therefore, the severity of the patient's condition must be interpreted quickly. After patient admission to an intensive care unit (ICU), it is necessary to synthesize the large volume of information collected from the patient into a value that represents the severity of their condition. Traditional severity-of-illness scores seek to be applicable to all patient populations and usually assess in-hospital mortality. However, the use of machine learning techniques and data from a population that shares a common characteristic could lead to the development of customized mortality prediction scores with better performance. This study presents the development of a score for the one-year mortality prediction of patients admitted to an ICU with a sepsis diagnosis. 5650 ICU admissions extracted from the MIMIC-III database were evaluated and divided into two groups: 70% to develop the score and 30% to validate it. Comorbidities, demographics, and clinical information from the first 24 hours after ICU admission were used to develop the mortality prediction score. LASSO (least absolute shrinkage and selection operator) and SGB (stochastic gradient boosting) variable importance methodologies were used to select the set of variables that make up the developed score. Each of these variables was dichotomized at a cut-off point that divides the population into two groups with different mean mortalities; if the patient is in the group with the higher mortality, a one is assigned to the particular variable, otherwise a zero is assigned. These binary variables are used in a logistic regression (LR) model, and its coefficients are rounded to the nearest integer. The resulting integers are the point values that make up the score when multiplied by the binary variables and summed.
The one-year mortality probability was estimated using the score as the only variable in a LR model. The predictive power of the score was evaluated using the 1695 admissions of the validation subset, obtaining an area under the receiver operating characteristic curve of 0.7528, which outperforms the results obtained with the Sequential Organ Failure Assessment (SOFA), Oxford Acute Severity of Illness Score (OASIS), and Simplified Acute Physiology Score II (SAPS II) scores on the same validation subset. Observed and predicted mortality rates within deciles of estimated probability were compared graphically and found to be similar, indicating that the risk estimate obtained with the score is close to the observed mortality. It is also observed that the number of events (deaths) indeed increases from the decile with the lowest probabilities to the decile with the highest probabilities. Sepsis is a syndrome that carries a high mortality, 43.3% for the patients included in this study; therefore, tools that help clinicians quickly and accurately predict a worse prognosis are needed. This work demonstrates the importance of customizing mortality prediction scores, since the developed score provides better performance than traditional scoring systems.
Keywords: intensive care, logistic regression model, mortality prediction, sepsis, severity of illness, stochastic gradient boosting
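The score construction described above (dichotomize each selected variable at its cut-off, assign the rounded LR coefficient as an integer point value, sum the points, then map the total through a second logistic model) can be sketched as follows. The variables, cut-offs, point values, and coefficients here are invented placeholders, not the values actually derived from MIMIC-III.

```python
import math

# Hypothetical cut-offs and integer point values (illustrative only; the real
# variables and weights come from the LASSO/SGB selection and the fitted LR).
CUTOFFS = {"age": 65, "lactate": 2.0, "creatinine": 1.5}
POINTS = {"age": 2, "lactate": 3, "creatinine": 1}

# Coefficients of the second LR model mapping the summed score to a one-year
# mortality probability (values made up for the sketch).
INTERCEPT, SLOPE = -2.0, 0.45

def dichotomize(measurements):
    """1 if the value falls in the higher-mortality group, else 0."""
    return {k: int(measurements[k] > CUTOFFS[k]) for k in CUTOFFS}

def score(measurements):
    """Sum the integer points of the flags that fired."""
    flags = dichotomize(measurements)
    return sum(POINTS[k] * flags[k] for k in POINTS)

def mortality_probability(measurements):
    """Logistic model with the score as the only covariate."""
    s = score(measurements)
    return 1.0 / (1.0 + math.exp(-(INTERCEPT + SLOPE * s)))

patient = {"age": 72, "lactate": 3.1, "creatinine": 0.9}
print(score(patient))  # age and lactate flags fire: 2 + 3 = 5
print(round(mortality_probability(patient), 3))
```

The appeal of this design is that the bedside arithmetic is integer addition, while the calibration to a probability lives in the second logistic model.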
Procedia PDF Downloads 225
82 The Governance of Net-Zero Emission Urban Bus Transitions in the United Kingdom: Insight from a Transition Visioning Stakeholder Workshop
Authors: Iraklis Argyriou
Abstract:
The transition to net-zero emission urban bus (ZEB) systems is receiving increased attention in research and policymaking throughout the globe. Most studies in this area tend to address techno-economic aspects and the perspectives of a narrow group of stakeholders, while they largely overlook analysis of current bus system dynamics. This offers limited insight into the types of ZEB governance challenges and opportunities that are encountered in real-world contexts, as well as into some of the immediate actions that need to be taken to set off the transition over the longer term. This research offers a multi-stakeholder perspective into both the technical and non-technical factors that influence ZEB transitions within a particular context, the UK. It does so by drawing from a recent transition visioning stakeholder workshop (June 2023) with key public, private and civic actors of the urban bus transportation system. Using NVivo software to qualitatively analyze the workshop discussions, the research examines the key technological and funding aspects, as well as the short-term actions (over the next five years), that need to be addressed for supporting the ZEB transition in UK cities. It finds that ZEB technology has reached a mature stage (i.e., high efficiency of batteries, motors and inverters), but important improvements can be pursued through greater control and integration of ZEB technological components and systems. In this regard, telemetry, predictive maintenance and adaptive control strategies pertinent to the performance and operation of ZEB vehicles have a key role to play in the techno-economic advancement of the transition. Yet, more pressing gaps were identified in the current ZEB funding regime. Whereas the UK central government supports greater ZEB adoption through a series of grants and subsidies, the scale of the funding and its fragmented nature do not match the needs for a UK-wide transition. 
Funding devolution arrangements (i.e., stable funding settlement deals between the central government and the devolved administrations/local authorities), as well as locally-driven schemes (e.g., congestion charging or a workplace parking levy), could then enhance the financial prospects of the transition. As for short-term action, three areas were identified as critical: (1) the creation of whole value chains around the supply, use and recycling of ZEB components; (2) the ZEB retrofitting of existing fleets; and (3) integrated transportation that prioritizes buses as a first-choice, convenient and reliable mode while it simultaneously reduces car dependency in urban areas. Taken together, the findings point to the need for place-based transition approaches that create a viable techno-economic ecosystem for ZEB development but at the same time adopt a broader governance perspective beyond a 'net-zero' and 'bus sectoral' focus. As such, multi-actor collaborations and the coordination of wider resources and agency, both vertically across institutional scales and horizontally across transport, energy and urban planning, become fundamental features of comprehensive ZEB responses. The lessons from the UK case can inform a broader body of empirical contextual knowledge of ZEB transition governance within domestic political economies of public transportation.
Keywords: net-zero emission transition, stakeholders, transition governance, UK, urban bus transportation
Procedia PDF Downloads 79
81 Sentiment Analysis of Creative Tourism Experiences: The Case of Girona, Spain
Authors: Ariadna Gassiot, Raquel Camprubi, Lluis Coromina
Abstract:
Creative tourism involves the participation of tourists in the co-creation of their own experiences in a tourism destination. Consequently, creative tourists move from passive to active behavior, and tourism destinations address this type of tourism by changing the scenario and making tourists learn and participate while they travel, instead of merely offering tourism products and services to them. In creative tourism experiences, tourists are in close contact with locals and their culture. In destinations where culture (e.g. food, heritage) is the basis of the offer, such as Girona, Spain, tourism stakeholders must especially consider, analyze, and further foster the co-creation of authentic tourism experiences. They should focus on discovering more about these experiences, their main attributes, visitors' opinions, etc. Creative tourists do not only participate while they travel around the world; they also have an active post-travel behavior. They feel free to write about tourism experiences in different channels. User-generated content becomes crucial for any tourism destination when analyzing the market, making decisions, planning strategies, and addressing issues such as reputation and performance. Sentiment analysis is a methodology used to automatically analyze semantic relationships and meanings in texts, and thus a way to extract tourists' emotions and feelings. Tourists normally express their views and opinions regarding tourism products and services. They may express positive, neutral, or negative feelings towards these products or services. For example, they may express anger, love, hate, sadness, or joy towards tourism services and products. They may also express feelings through verbs, nouns, adverbs, and adjectives, among other parts of speech. Sentiment analysis may help tourism professionals in a range of areas, from marketing to customer service.
For example, sentiment analysis allows tourism stakeholders to forecast tourism expenditure and tourist arrivals, or to analyze tourists' profiles. While there is an increasing presence of creativity in tourists' experiences, there is also an increasing need to explore tourists' expressions about these experiences. There is a need to know how they feel about participating in specific tourism activities. Thus, the main objective of this study is to analyze the meanings, emotions, and feelings that tourists express about their creative experiences in Girona, Spain. To do so, sentiment analysis methodology is used. The results show the diversity of tourists who actively participate in tourism in Girona. Their opinions refer both to tangible aspects (e.g. food, museums) and to intangible aspects (e.g. friendliness, nightlife) of tourism experiences. Tourists express love, liking, and other sentiments towards tourism products and services in Girona. This study can help tourism stakeholders understand tourists' experiences and feelings. Consequently, they can offer more customized products and services and can efficiently involve tourists in the co-creation of their own tourism experiences.
Keywords: creative tourism, sentiment analysis, text mining, user-generated content
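As a minimal illustration of the lexicon-based end of this methodology, the sketch below classifies short review snippets against small positive and negative word lists. The lexicon and reviews are invented; the study's actual pipeline is far richer (semantic relationships, emotion categories, negation handling, and so on).

```python
# Tiny hand-made sentiment lexicon (illustrative, not from the study).
POSITIVE = {"love", "friendly", "great", "delicious", "beautiful"}
NEGATIVE = {"hate", "crowded", "expensive", "rude", "sad"}

def sentiment(text):
    """Label a text positive/negative/neutral by counting lexicon hits."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

reviews = [
    "We love the food tour, the locals were so friendly!",
    "The old town was crowded and the museum expensive.",
]
print([sentiment(r) for r in reviews])  # ['positive', 'negative']
```

Even this crude counter shows how user-generated content can be rolled up into destination-level indicators (e.g. the share of positive mentions per attraction).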
Procedia PDF Downloads 181