Search results for: predictive mining
Paper Count: 2060

140 Risk Assessment Tools Applied to Deep Vein Thrombosis Patients Treated with Warfarin

Authors: Kylie Mueller, Nijole Bernaitis, Shailendra Anoopkumar-Dukie

Abstract:

Background: Vitamin K antagonists, particularly warfarin, are the most frequently used oral medications for deep vein thrombosis (DVT) treatment and prophylaxis. Time in therapeutic range (TITR) of the international normalised ratio (INR) is widely accepted as a measure of the quality of warfarin therapy. Multiple factors can affect warfarin control and the subsequent adverse outcomes, including thromboembolic and bleeding events. Predictor models have been developed to assess potential contributing factors and measure the individual risk of these adverse events. These predictive models have been validated in atrial fibrillation (AF) patients; however, there is a lack of literature on whether they can be successfully applied to other warfarin users, including DVT patients. Therefore, the aim of the study was to assess the ability of these risk models (HAS-BLED and CHADS2) to predict haemorrhagic and ischaemic incidences in DVT patients treated with warfarin. Methods: A retrospective analysis of DVT patients receiving warfarin management by a private pathology clinic was conducted. Data were collected from November 2007 to September 2014 and included demographics, medical and drug history, INR targets and test results. Patients receiving continuous warfarin therapy with an INR reference range between 2.0 and 3.0 were included in the study, with mean TITR calculated using the Rosendaal method. Bleeding and thromboembolic events were recorded and reported as incidences per patient. The haemorrhagic risk model HAS-BLED and the ischaemic risk model CHADS2 were applied to the data, and patients were stratified into low, moderate, or high-risk categories. The analysis was conducted to determine whether a correlation existed between each risk assessment tool and patient outcomes. Data were analysed using GraphPad Instat Version 3, with a p value of <0.05 considered statistically significant. Patient characteristics were reported as mean and standard deviation for continuous data and as number and percentage for categorical data. Results: Of the 533 patients included in the study, there were 268 (50.2%) female and 265 (49.8%) male patients with a mean age of 62.5 years (±16.4). The overall mean TITR was 78.3% (±12.7), with an overall haemorrhagic incidence of 0.41 events per patient. For the HAS-BLED model, there was a haemorrhagic incidence of 0.08, 0.53, and 0.54 per patient in the low, moderate and high-risk categories, respectively, showing a statistically significant increase in incidence with increasing risk category. The CHADS2 model showed an increase in ischaemic events according to risk category, with no ischaemic events in the low category, an ischaemic incidence of 0.03 in the moderate category and 0.47 in the high-risk category. Conclusion: An increasing haemorrhagic incidence correlated with an increasing HAS-BLED risk score in DVT patients treated with warfarin. Furthermore, a greater incidence of ischaemic events occurred in patients in higher CHADS2 categories. In an Australian population of DVT patients, the HAS-BLED and CHADS2 models accurately predict incidences of haemorrhagic and ischaemic events, respectively.
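The stratified incidence analysis described here reduces to grouping per-patient event counts by risk category. A minimal sketch in Python (pandas), assuming a hypothetical patient table and the common HAS-BLED convention of low (0-1), moderate (2), and high (≥3) scores; this is an illustration, not the study's dataset or code:

```python
import pandas as pd

# Hypothetical patient-level data; column names and values are illustrative.
df = pd.DataFrame({
    "has_bled_score": [1, 2, 3, 4, 1, 3],
    "bleeding_events": [0, 1, 0, 2, 0, 1],
})

# Common convention: low (0-1), moderate (2), high (>=3).
df["risk_category"] = pd.cut(df["has_bled_score"], bins=[-1, 1, 2, 100],
                             labels=["low", "moderate", "high"])

# Incidence as events per patient within each risk category,
# mirroring the per-patient figures reported in the abstract.
incidence = df.groupby("risk_category", observed=True)["bleeding_events"].mean()
print(incidence)
```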

Keywords: anticoagulant agent, deep vein thrombosis, risk assessment, warfarin

Procedia PDF Downloads 267
139 Predicting Provider Service Time in Outpatient Clinics Using Artificial Intelligence-Based Models

Authors: Haya Salah, Srinivas Sharan

Abstract:

Healthcare facilities use appointment systems to schedule patient visits and manage access to their medical services. With the growing demand for outpatient care, it is now imperative to manage physicians' time effectively. However, high variation in consultation duration affects the clinical scheduler's ability to estimate the appointment duration and allocate provider time appropriately. Underestimating consultation times can lead to physician burnout, misdiagnosis, and patient dissatisfaction. On the other hand, appointment durations that are longer than required lead to doctor idle time and fewer patient visits. Therefore, a good estimation of consultation duration has the potential to improve timely access to care, resource utilization, quality of care, and patient satisfaction. Although the literature on factors influencing consultation length abounds, little work has been done to predict it using data-driven approaches. Therefore, this study aims to predict consultation duration using supervised machine learning (ML) algorithms, which predict an outcome variable (e.g., consultation duration) based on potential features that influence the outcome. In particular, ML algorithms learn from a historical dataset without being explicitly programmed and uncover the relationship between the features and the outcome variable. A subset of the data used in this study has been obtained from the electronic medical records (EMR) of four different outpatient clinics located in central Pennsylvania, USA. Also, publicly available information on doctors' characteristics such as gender and experience has been extracted from online sources. This research develops models based on three popular ML algorithms (deep learning, random forest, gradient boosting machine) to predict the treatment time required for a patient and conducts a comparative analysis of these algorithms with respect to predictive performance. The findings of this study indicate that ML algorithms have the potential to predict provider service time with superior accuracy. While the current approach of experience-based appointment duration estimation adopted by the clinic resulted in a mean absolute percentage error (MAPE) of 25.8%, the deep learning algorithm developed in this study yielded the best performance with a MAPE of 12.24%, followed by the gradient boosting machine (13.26%) and random forests (14.71%). This research also identified the critical variables affecting consultation duration to be patient type (new vs. established), doctor's experience, zip code, appointment day, and doctor's specialty. Moreover, several practical insights are obtained from the comparative analysis of the ML algorithms. The machine learning approach presented in this study can serve as a decision support tool and could be integrated into the appointment system for effectively managing patient scheduling.
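At its core, the comparison described above fits several regressors to the same feature set and scores them by MAPE. A minimal sketch with scikit-learn on synthetic data; the feature construction, sizes, and model settings are assumptions, not the study's configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for EMR-derived features (patient type, experience, etc.).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = 20 + X @ np.array([3.0, 1.5, 0.5, 2.0, 1.0]) + rng.normal(scale=2.0, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def mape(y_true, y_pred):
    # Mean absolute percentage error, the comparison metric in the abstract.
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

for model in (RandomForestRegressor(random_state=0),
              GradientBoostingRegressor(random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, f"MAPE: {mape(y_te, model.predict(X_te)):.2f}%")
```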

Keywords: clinical decision support system, machine learning algorithms, patient scheduling, prediction models, provider service time

Procedia PDF Downloads 123
138 A Comparative Study of Positive and Negative Electronic Word-of-Mouth on the SERVQUAL Scale: A Certain Armed Forces General Hospital in Taiwan as an Example

Authors: Po-Chun Lee, Li-Lin Liang, Ching-Yuan Huang

Abstract:

Purpose: Research on electronic word-of-mouth (eWOM) and online reviews has been widely applied in service industry management research in recent years. The SERVQUAL scale is the most commonly used method to measure service quality. Therefore, the purpose of this research is to combine eWOM and online reviews with the SERVQUAL scale in order to compare positive and negative electronic word-of-mouth reviews of a certain armed forces general hospital in Taiwan. Data sources: This research obtained online word-of-mouth comment data from Google Maps for a military hospital in Taiwan covering the past ten years through Internet data mining technology. Research methods: This study uses the semantic content analysis method to classify word-of-mouth reviews according to the revised PZB SERVQUAL scale and then carries out statistical analysis. Results of data synthesis: The results of this study disclosed that the negative reviews of this military hospital in Taiwan have been increasing year by year, and under the COVID-19 epidemic, positive word-of-mouth has shown a downward trend. Among the five determinants of the PZB SERVQUAL scale, positive word-of-mouth reviews performed best in “Assurance,” with a positive review rate of 58.89%, followed by “Responsiveness” at 43.33%. In negative word-of-mouth reviews, “Assurance” performed worst, with a negative review rate of 70.99%, followed by “Responsiveness” at 29.01%. Conclusions: The important conclusions of this study are that the total number of electronic word-of-mouth reviews of the military hospital has shown positive growth in recent years, while positive word-of-mouth has declined since the COVID-19 epidemic and negative word-of-mouth has grown substantially. Regardless of whether comments are positive or negative, what patients care most about is “Assurance” of the professional attitude and skills of the medical staff, which most urgently needs to be strengthened. In addition, good “Reliability” helps build positive word-of-mouth, whereas poor “Responsiveness” can easily lead to the spread of negative word-of-mouth. This study suggests that the hospital should focus its service quality management and audits on these dimensions.
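The analysis above amounts to tagging each review with SERVQUAL dimensions and tallying polarity rates per dimension. A toy sketch of that tally, with a keyword lookup standing in for the study's semantic content analysis; the terms and reviews are invented:

```python
# Illustrative only: keyword lists and reviews are invented stand-ins
# for the semantic content analysis used in the study.
SERVQUAL_TERMS = {
    "assurance": ["professional", "skilled", "trust"],
    "responsiveness": ["quick", "prompt", "waited"],
    "reliability": ["accurate", "consistent", "on time"],
}

reviews = [("The staff were professional and skilled", "positive"),
           ("Waited two hours, nobody responded", "negative")]

counts = {dim: {"positive": 0, "negative": 0} for dim in SERVQUAL_TERMS}
for text, polarity in reviews:
    for dim, terms in SERVQUAL_TERMS.items():
        if any(t in text.lower() for t in terms):
            counts[dim][polarity] += 1

for dim, c in counts.items():
    total = c["positive"] + c["negative"]
    if total:
        print(dim, f"positive rate: {c['positive'] / total:.0%}")
```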

Keywords: quality of medical service, electronic word-of-mouth, armed forces general hospital

Procedia PDF Downloads 181
137 Cost-Based Analysis of a Risk Stratification Tool for Prediction and Management of High-Risk Choledocholithiasis Patients

Authors: Shreya Saxena

Abstract:

Background: Choledocholithiasis is a common complication of gallstone disease. Risk scoring systems exist to guide the need for further imaging or endoscopy in managing choledocholithiasis. We completed an audit to review the American Society for Gastrointestinal Endoscopy (ASGE) scoring system for prediction and management of choledocholithiasis against the current practice at a tertiary hospital to assess its utility in resource optimisation. We have now conducted a cost-focused sub-analysis on patients categorized as high-risk for choledocholithiasis according to the guidelines to determine any associated cost benefits. Method: Data collection from our prior audit was used to retrospectively identify thirteen patients considered high-risk for choledocholithiasis. Their ongoing management was mapped against the guidelines. Individual costs for the key investigations were obtained from our hospital financial data. Total costs for the different management pathways identified in clinical practice were calculated and compared against predicted costs associated with the recommendations in the guidelines. We excluded the cost of laparoscopic cholecystectomy and considered a set figure for per-day hospital admission-related expenses. Results: Based on our previous audit data, we identified a 77% positive predictive value for the ASGE risk stratification tool in determining patients at high risk of choledocholithiasis. 46% (6/13) had a magnetic resonance cholangiopancreatography (MRCP) prior to endoscopic retrograde cholangiopancreatography (ERCP), whilst 54% (7/13) went straight to ERCP. The average length of stay in the hospital was 7 days, with an additional day and cost of £328.00 (£117 for ERCP) for patients awaiting an MRCP prior to ERCP. Per-day hospital admission was valued at £838.69. When calculating total cost, we assumed all patients had admission bloods and ultrasound done as the gold standard. In doing an MRCP prior to ERCP, there was a 130% increase in cost incurred (£580.04 vs £252.04) per patient. When also considering hospital admission and the average length of stay, it was an additional £1166.69 per patient. We then calculated the exact costs incurred by the department, over a three-month period, for all patients, for key investigations or procedures done in the management of choledocholithiasis. This was compared to an estimated cost derived from the recommended pathways in the ASGE guidelines. Overall, an 81% (£2048.45) saving was associated with following the guidelines compared to clinical practice. Conclusion: MRCP is the most expensive test associated with the diagnosis and management of choledocholithiasis. The ASGE guidelines recommend endoscopy without an MRCP in patients stratified as high-risk for choledocholithiasis. Our audit, which focused on assessing the utility of the ASGE risk scoring system, showed it to be relatively reliable for identifying high-risk patients. Our cost analysis has shown significant cost savings per patient, both in investigation costs and in average length of stay, associated with direct endoscopy rather than an additional MRCP. Part of this is also because of the increased average length of stay associated with waiting for an MRCP. The above data support the ASGE guidelines for the management of patients at high risk of choledocholithiasis from a cost perspective. The only caveat is our small data set, which may affect the validity of our average length of hospital stay figures and hence total cost calculations.
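The headline figures reduce to simple per-patient arithmetic on the costs quoted in the abstract; a short Python check (all values taken directly from the text):

```python
# Per-patient cost comparison using the figures quoted in the abstract.
cost_ercp_only = 252.04        # direct-to-ERCP investigation cost (GBP)
cost_mrcp_then_ercp = 580.04   # MRCP prior to ERCP (GBP)
per_day_admission = 838.69     # hospital admission cost per day (GBP)

increase = (cost_mrcp_then_ercp - cost_ercp_only) / cost_ercp_only * 100
print(f"Investigation cost increase: {increase:.0f}%")  # ~130%

# One extra admission day while waiting for the MRCP.
extra_total = (cost_mrcp_then_ercp - cost_ercp_only) + per_day_admission
print(f"Extra cost incl. one additional admission day: £{extra_total:.2f}")  # £1166.69
```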

Keywords: cost-analysis, choledocholithiasis, risk stratification tool, general surgery

Procedia PDF Downloads 102
136 Consolidating a Regime of State Terror: A Historical Analysis of Necropolitics and the Evolution of Policing Practices in California as a Former Colony, Frontier, and Late-Modern Settler Society

Authors: Peyton M. Provenzano

Abstract:

This paper draws primarily upon the framework of necropolitics and presents California as itself a former frontier, colony, and late-modern settler society. The convergence of these successive and overlapping regimes of state terror is actualized and traceable through an analysis of historical and contemporary police practices. At the behest of the Spanish Crown and with the assistance of the Spanish military, the Catholic Church led the original expedition to colonize California. The indigenous populations of California were subjected to brutal practices of confinement and enslavement at the missions. After the annexation of California by the United States, the western-most territory became an infamous frontier where new settlers established vigilante militias to enact violence against indigenous populations to protect their newly stolen land. Early mining settlements sought to legitimize and fund vigilante violence by wielding the authority of rudimentary democratic structures. White settlers circulated petitions for funding to establish a volunteer company under California’s Militia Law for ‘protection’ against the local indigenous populations. The expansive carceral practices of Angelenos at the turn of the century exemplify the way in which California solidified its regime of exclusion as a white settler society. Drawing on recent scholarship that queers the notion of biopower and names police as street-level sovereigns, the police murder of Kayla Moore is understood as the latest manifestation of a carceral regime of exclusion and genocide. Kayla Moore was an African American transgender woman living with a mental health disability who was murdered by Berkeley police responding to a mental health crisis call in 2013. The intersectionality of Kayla’s identity made her hyper-vulnerable to state-sanctioned violence. Kayla was a victim not only of the explicitly racial biopower of police and the regulatory state power of necropolitics, but also of the ‘asphyxia’ that was intended to invisibilize both her life and her murder.

Keywords: asphyxia, biopower, california, carceral state, genocide, necropolitics, police, police violence

Procedia PDF Downloads 139
135 Forecasting Thermal Energy Demand in District Heating and Cooling Systems Using Long Short-Term Memory Neural Networks

Authors: Kostas Kouvaris, Anastasia Eleftheriou, Georgios A. Sarantitis, Apostolos Chondronasios

Abstract:

To achieve the objective of almost zero-carbon energy solutions by 2050, the EU needs to accelerate the development of integrated, highly efficient and environmentally friendly solutions. In this direction, district heating and cooling (DHC) emerges as a viable and more efficient alternative to conventional, decentralized heating and cooling systems, enabling a combination of more efficient renewable and competitive energy supplies. In this paper, we develop a forecasting tool for near real-time local weather and thermal energy demand predictions for an entire DHC network. In this fashion, we are able to extend the functionality and improve the energy efficiency of the DHC network by predicting and adjusting the heat load that is distributed from the heat generation plant to the connected buildings through the heat pipe network. Two case studies are considered: one for Vransko, Slovenia, and one for Montpellier, France. The data consist of i) local weather data, such as humidity, temperature, and precipitation, ii) weather forecast data, such as the outdoor temperature, and iii) DHC operational parameters, such as the mass flow rate and supply and return temperatures. The external temperature is found to be the most important energy-related variable for space conditioning, and thus it is used as an external parameter for the energy demand models. For the development of the forecasting tool, we use state-of-the-art deep neural networks and, more specifically, recurrent networks with long short-term memory cells, which are able to capture complex non-linear relations among temporal variables. Firstly, we develop models to forecast outdoor temperatures for the next 24 hours using local weather data for each case study. Subsequently, we develop models to forecast thermal demand for the same period, taking into consideration past energy demand values as well as the temperature values predicted by the weather forecasting models. The contributions to the scientific and industrial community are three-fold, and the empirical results are highly encouraging. First, we are able to predict future thermal demand levels for the two locations under consideration with minimal errors. Second, we examine the impact of the outdoor temperature on the predictive ability of the models and how the accuracy of the energy demand forecasts decreases with the forecast horizon. Third, we extend the relevant literature with a new dataset of thermal demand and examine the performance and applicability of machine learning techniques to solve real-world problems. Overall, the solution proposed in this paper is in accordance with EU targets, providing an automated smart energy management system, decreasing human errors and reducing excessive energy production.
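A minimal sketch of the modelling step: a single-layer LSTM trained on a sliding window of past values to predict the next one (Keras). The synthetic series, window length, and layer sizes are assumptions for illustration, not the authors' configuration:

```python
import numpy as np
import tensorflow as tf

# Synthetic hourly series standing in for thermal demand / temperature data.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 100, 2000)) + rng.normal(scale=0.1, size=2000)

window = 24  # use the past 24 values to predict the next one
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]  # shape: (samples, timesteps, features)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
print("next-step forecast:", model.predict(X[-1:], verbose=0).ravel())
```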

Keywords: machine learning, LSTMs, district heating and cooling system, thermal demand

Procedia PDF Downloads 147
134 Digital Transformation and Digitalization of Public Administration

Authors: Govind Kumar

Abstract:

The concept of ‘e-governance’ that was brought about by the new wave of reforms, namely ‘LPG’ in the early 1990s, has been enabling governments across the globe to digitally transform themselves. Digital transformation provides governments with qualitative decisions, optimization in the rational use of resources, facilitation of cost-benefit analyses, and elimination of redundancy and corruption with the help of ICT-based application interfaces. ICT-based applications/technologies have enormous potential for effecting positive change in the social lives of the global citizenry. Supercomputers test and analyze millions of drug molecules for developing candidate vaccines to combat the global pandemic. Further, e-commerce portals help distribute and supply household items and medicines, while videoconferencing tools provide a visual interface between clients and hosts. Besides, crop yields are being maximized with the help of drones and machine learning, whereas satellite data, artificial intelligence, and cloud computing help governments with the detection of illegal mining, tackling deforestation, and managing freshwater resources. Such e-applications have the potential to take governance an extra mile by achieving the five Es (effective, efficient, easy, empower, and equity) of e-governance and the six Rs (reduce, reuse, recycle, recover, redesign and remanufacture) of sustainable development. If such digital transformation gains traction within the government framework, it will replace traditional administration with the digitalization of public administration. On the other hand, it has brought a new set of challenges before governments, like the digital divide, e-illiteracy, and the technological divide, and problems like handling e-waste, technological obsolescence, cyber terrorism, e-fraud, hacking, and phishing. Therefore, it is essential to bring in the right mixture of technological and humanistic interventions to address the above issues. This is because technology lacks an emotional quotient, and the administration does not work like technology; both are self-effacing unless a blend of technology and a humane face is brought into the administration. The paper will empirically analyze the significance of the technological framework of digital transformation within the government setup for the digitalization of public administration, on the basis of a synthesis of two case studies undertaken from two diverse fields of administration, and present a future framework for the study.

Keywords: digital transformation, electronic governance, public administration, knowledge framework

Procedia PDF Downloads 103
133 Single-Cell RNA Sequencing Operating from Benchside to Bedside: An Interesting Entry into Translational Genomics

Authors: Leo Nnamdi Ozurumba-Dwight

Abstract:

Single-cell genomic analytical systems have proved to be a platform for isolating bulk cells into selected single cells for genomic, proteomic, and related metabolomic studies. This is enabling systematic investigations of the level of heterogeneity in diverse and wide pools of cell populations. Single-cell technologies, embracing techniques such as high-parameter flow cytometry, single-cell sequencing, and high-resolution imaging, are playing vital roles in these investigations of messenger ribonucleic acid (mRNA) molecules and related gene expression in tracking the nature and course of disease conditions. This entails targeted molecular investigations on unit cells that help us understand cell behaviour and expression, which can be examined for their implications for the health state of patients. One of the key strengths of single-cell RNA sequencing (scRNA-seq) is its capacity to detect deranged or abnormal cell populations present within homogeneously perceived pooled cells, which would have evaded cursory screening of the pooled cell populations of biological samples obtained as part of diagnostic procedures. Although only single-cell transcriptome analysis is conducted, scRNA-seq permits comparison of the transcriptomes of individual cells, which can be evaluated for gene expression patterns that reveal areas of heterogeneity, with pharmaceutical drug discovery and clinical treatment applications. It is vital to work rigorously through the tools of investigation, from the wet lab to bioinformatics and computational analyses. In the precise steps for scRNA-seq, it is critical to perform thorough and effective isolation of viable single cells from the tissues of interest using dependable techniques (such as FACS) before proceeding to lysis, as this enhances the picking of quality mRNA molecules for subsequent amplification and sequencing (for example, with a polymerase chain reaction machine). Interestingly, scRNA-seq can be deployed to analyze various types of biological samples such as embryos, nervous systems, tumour cells, stem cells, lymphocytes, and haematopoietic cells. In haematopoietic cells, it can be used to stratify acute myeloid leukemia patterns in patients, sorting them into cohorts that enable re-modeling of treatment regimens based on stratified presentations. In immunotherapy, it can furnish specialist clinician-immunologists with tools to re-model treatment for each patient, an attribute of precision medicine. Finally, the good predictive attributes of scRNA-seq can help reduce the cost of treatment for patients, thus attracting more patients who would otherwise have been discouraged from seeking quality clinical consultation due to perceived high cost. This is a positive paradigm shift for patients’ attitudes towards seeking treatment.
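Downstream of sequencing, the heterogeneity analysis described above is typically done with a clustering workflow. A minimal sketch using Scanpy, with its bundled pbmc3k demo dataset standing in for patient-derived cells; the parameter choices are conventional defaults, not the pipeline of any specific study (the leiden step additionally requires the leidenalg package):

```python
import scanpy as sc

# Public demo dataset used here purely as a stand-in for real samples.
adata = sc.datasets.pbmc3k()

sc.pp.filter_cells(adata, min_genes=200)   # basic quality control
sc.pp.filter_genes(adata, min_cells=3)
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable]

sc.pp.pca(adata, n_comps=50)
sc.pp.neighbors(adata)
sc.tl.leiden(adata)  # clusters expose heterogeneity hidden in the pooled sample
print(adata.obs["leiden"].value_counts())
```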

Keywords: immunotherapy, transcriptome, re-modeling, mRNA, scRNA-seq

Procedia PDF Downloads 179
132 Performance Optimization of Polymer Materials Thanks to Sol-Gel Chemistry for Fuel Cells

Authors: Gondrexon, Gonon, Mendil-Jakani, Mareau

Abstract:

Proton exchange membrane fuel cells (PEMFCs) seem to be a promising device for converting hydrogen into electricity. A PEMFC is made of a membrane electrode assembly (MEA) composed of a proton exchange membrane (PEM) sandwiched between two catalytic layers. Nowadays, specific performances are targeted in order to ensure the long-term expansion of this technology. Currently used polymers (perfluorinated, such as Nafion®) are unsuitable for the high-temperature range (loss of mechanical properties). To overcome this issue, sulfonated polyaromatic polymers appear to be a good alternative since they have very good thermomechanical properties. However, their proton conductivity and chemical stability (oxidative resistance to H2O2 formed during fuel cell (FC) operation) are very low. In our team, we patented an original concept of hybrid membranes able to fulfill the specific requirements for PEMFCs. This idea is based on the improvement of commercialized polymer membranes via an easy and processable stabilization using sol-gel (SG) chemistry with judiciously embedded chemical functions. This strategy thus breaks with traditional approaches (design of new copolymers, use of inorganic charges/additives). In 2020, we presented the elaboration and functional properties of a 1st generation of hybrid membranes with promising performances and durability. The latter was made by self-condensing a SG phase with (3-mercaptopropyl)trimethoxysilane (MPTMS) inside a commercial sPEEK host membrane. The successful in-situ condensation reactions of the MPTMS were demonstrated by measurements of mass uptake, FTIR spectroscopy (presence of aliphatic C-H bonds) and 29Si solid-state NMR (T2 and T3 signals of self-condensation products). The ability of the SG phase to prevent the oxidative degradation of the sPEEK phase (thanks to thiol chemical functions) was then proved with H2O2 accelerated aging tests and FC operating tests. A 2nd generation made of thiourea-functionalized SG precursors (named HTU and TTU) was made afterwards. By analysing in depth the morphologies of these different hybrids by direct-space analysis (AFM/SEM/TEM) and reciprocal-space analysis (SANS/SAXS/WAXS), we highlighted that both the SG phase morphology and its localisation within the host have a huge impact on the observed PEM functional properties. This relationship also depends on the embedded chemical function. The hybrids obtained have shown very good chemical resistance during aging tests (exposure to H2O2) compared to the commercial sPEEK. However, the chemical function used is considered “sacrificial” and cannot react indefinitely with H2O2. Thus, we are now working on a 3rd generation made of both sacrificial and regenerative chemical functions, which are expected to inhibit the chemical aging of sPEEK more efficiently. With this work, we are confident of reaching a predictive approach to the key parameters governing the final properties.

Keywords: fuel cells, ionomers, membranes, sPEEK, chemical stability

Procedia PDF Downloads 75
131 Thermal Energy Storage Based on Molten Salts Containing Nano-Particles: Dispersion Stability and Thermal Conductivity Using Multi-Scale Computational Modelling

Authors: Bashar Mahmoud, Lee Mortimer, Michael Fairweather

Abstract:

New methods have recently been introduced to improve the thermal property values of molten nitrate salts (a binary mixture of NaNO3:KNO3 in 60:40 wt. %) by doping them with minute concentrations of nanoparticles in the range of 0.5 to 1.5 wt. % to form a so-called nano-heat-transfer-fluid, suitable for thermal energy transfer and storage applications. The present study aims to assess the stability of these nanofluids using an advanced computational modelling technique, Lagrangian particle tracking. A multi-phase solid-liquid model is used, where the motion of embedded nanoparticles in the suspending fluid is treated by an Euler-Lagrange hybrid scheme with fixed time stepping. This technique enables measurements of various multi-scale forces whose characteristic length- and time-scales are quite different. Two systems are considered, both consisting of 50 nm Al2O3 ceramic nanoparticles suspended in fluids of different density ratios. This includes both water (5 to 95 °C) and molten nitrate salt (220 to 500 °C) at volume fractions ranging between 1% and 5%. Dynamic properties of both phases are coupled to the ambient temperature of the fluid suspension. The three-dimensional computational region consists of a 1 μm cube, and particles are homogeneously distributed across the domain. Periodic boundary conditions are enforced. The particle equations of motion are integrated using the fourth-order Runge-Kutta algorithm with a very small time-step, Δts, set at 10⁻¹¹ s. The implemented technique captures the key dynamics of aggregating nanoparticles, which involve: Brownian motion, soft-sphere particle-particle collisions, and Derjaguin, Landau, Verwey, and Overbeek (DLVO) forces. These mechanisms underpin the predictive model of aggregation of nano-suspensions. An energy-transport-based method of predicting the thermal conductivity of the nanofluids is also used to determine the thermal properties of the suspension. The simulation results confirm the effectiveness of the technique; the values are in excellent agreement with theoretical and experimental data obtained from similar studies. The predictions indicate the role of Brownian motion and DLVO forces (represented by both the repulsive electric double layer and attractive Van der Waals interactions) and their influence on the level of nanoparticle agglomeration. The nano-aggregates formed were found to play a key role in governing the thermal behavior of nanofluids at various particle concentrations. The presentation will include a quantitative assessment of these forces and mechanisms, leading to conclusions about nanofluid heat transfer performance and thermal characteristics and their potential application in solar thermal energy plants.
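A minimal sketch of the integration step: one particle's velocity under Stokes drag advanced with fourth-order Runge-Kutta, plus a Langevin-type Brownian kick added per step (Euler-Maruyama for the noise term). The property values below are illustrative assumptions; DLVO and collision forces are omitted:

```python
import numpy as np

kB = 1.380649e-23           # Boltzmann constant (J/K)
T = 493.0                   # fluid temperature (K), assumed
mu = 2.0e-3                 # dynamic viscosity (Pa s), assumed for molten salt
d = 50e-9                   # particle diameter (m)
rho_p = 3950.0              # Al2O3 density (kg/m^3)
m = rho_p * np.pi * d**3 / 6
gamma = 3 * np.pi * mu * d  # Stokes drag coefficient
dt = 1e-11                  # time step quoted in the study (s)

rng = np.random.default_rng(0)

def accel(v, u_fluid=0.0):
    # Stokes drag acceleration toward the local fluid velocity.
    return gamma * (u_fluid - v) / m

v = 0.0
for _ in range(10000):
    # Fourth-order Runge-Kutta step for the deterministic drag term.
    k1 = accel(v)
    k2 = accel(v + 0.5 * dt * k1)
    k3 = accel(v + 0.5 * dt * k2)
    k4 = accel(v + dt * k3)
    v += dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    # Brownian (Langevin) kick with fluctuation-dissipation amplitude.
    v += np.sqrt(2 * gamma * kB * T * dt) / m * rng.normal()

# Equipartition check: |v| should be of order sqrt(kB*T/m).
print("v =", v, " sqrt(kBT/m) =", np.sqrt(kB * T / m))
```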

Keywords: thermal energy storage, molten salt, nano-fluids, multi-scale computational modelling

Procedia PDF Downloads 194
130 The Relationship between Body Fat Percent and Metabolic Syndrome Indices in Childhood Morbid Obesity

Authors: Mustafa Metin Donma

Abstract:

Metabolic syndrome (MetS) is characterized by a series of biochemical, physiological and anthropometric indicators and is a life-threatening health problem due to its close association with chronic diseases such as diabetes mellitus, hypertension, cancer and cardiovascular diseases. The syndrome deserves great interest both in adults and children. Central obesity is the indispensable component of MetS. Children who are morbidly obese in particular have a great tendency to develop the disease, because they remain under threat in their future lives. Preventive measures at this stage should be considered. For this, investigators seek an informative scale or index for the purpose; so far, only a few suggestions have been brought forward. However, the diagnostic decision is not easy and may not be complete, particularly in the pediatric population. The aim of the study was to develop a MetS index capable of predicting MetS while children are at the morbid obesity stage. This study was performed on morbidly obese (MO) children, who were divided into two groups. Morbidly obese children who did not possess MetS criteria comprised the first group (n=44). The second group was composed of children (n=42) with a MetS diagnosis. Parents were informed about, and signed, the consent forms required for the participation of their children in the study. Approval of the study protocol was obtained from the institutional ethics committee of Tekirdag Namik Kemal University. The Helsinki Declaration was observed prior to and during the study. Anthropometric measurements including weight, height, waist circumference (WC), hip C, head C and neck C, biochemical tests including fasting blood glucose (FBG), insulin (INS), triglycerides (TRG) and high density lipoprotein cholesterol (HDL-C), and blood pressure measurements (systolic (SBP) and diastolic (DBP)) were performed. Body fat percentage (BFP) values were determined by TANITA’s bioelectrical impedance analysis technology. Body mass index and MetS indices were calculated. The equations for the MetS index (MetSI) and the advanced Donma MetS index (ADMI) were [(INS/FBG)/(HDL-C/TRG)]*100 and MetSI*[(SBP+DBP)/Height], respectively. Descriptive statistics including median values, comparison-of-means tests and correlation-regression analysis were performed within the scope of data evaluation using the statistical package program SPSS. Statistically significant mean differences were determined by a p value smaller than 0.05. Median values for MetSI and ADMI in the MO (MetS-) and MO (MetS+) groups were calculated as (25.9 and 36.5) and (74.0 and 106.1), respectively. Corresponding mean±SD values for BFP were 35.9±7.1 and 38.2±7.7 in the two groups. Correlation analysis of these two indices with corresponding general BFP values exhibited a significant association with ADMI, and one close to significance with MetSI, in the MO group. No significant correlation was found with either of the indices in the MetS group. In conclusion, the important associations observed with the MetS indices in the MO group were quite meaningful. The presence of these associations in the MO group was important for showing the tendency towards the development of MetS in MO (MetS-) participants. The other index, ADMI, was more helpful for predictive purposes.
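The two index formulas given in the abstract translate directly into code. A tiny sketch; the input values are illustrative, not study data, and units follow the abstract's measurements:

```python
def mets_index(ins, fbg, hdl_c, trg):
    # MetSI = [(INS/FBG) / (HDL-C/TRG)] * 100, as defined in the abstract.
    return (ins / fbg) / (hdl_c / trg) * 100

def admi(metsi, sbp, dbp, height):
    # ADMI = MetSI * [(SBP + DBP) / Height]
    return metsi * (sbp + dbp) / height

# Illustrative values only: insulin, glucose, HDL-C and triglycerides in
# consistent units; pressures in mmHg; height in cm.
m = mets_index(ins=20, fbg=90, hdl_c=45, trg=130)
print(round(m, 1), round(admi(m, sbp=120, dbp=75, height=150), 1))
```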

Keywords: body fat percentage, child, index, metabolic syndrome, obesity

Procedia PDF Downloads 62
129 Flowback Fluids Treatment Technology with Water Recycling and Valuable Metals Recovery

Authors: Monika Konieczyńska, Joanna Fajfer, Olga Lipińska

Abstract:

In Poland, works related to the exploration and prospecting of unconventional hydrocarbons (natural gas accumulated in the Silurian shale formations) started in 2007, based on the experience of other countries that have created new possibilities for the use of existing hydrocarbon resources. The highly water-consuming process of hydraulic fracturing is required for the exploitation of shale gas, which implies a need to ensure that large volumes of water are available. As a result, a considerable amount of mining waste is generated, particularly liquid waste, i.e., flowback fluid with variable chemical composition. The chemical composition of the flowback fluid depends on the composition of the fracturing fluid and the chemistry of the fractured geological formations. Typically, flowback fluid is highly saline and can be enriched in heavy metals, including rare earth elements, naturally occurring radioactive materials and organic compounds. The generated fluids, considered extractive waste, should be properly managed in a recovery or disposal facility. Problematic issues are both the high hydration of the waste and its variable chemical composition. The limited capacity of currently operating facilities is also a growing problem: based on the estimates, currently operating facilities will not be sufficient for waste disposal needs once extraction of unconventional hydrocarbons starts. Furthermore, the content of metals in flowback fluids, including rare earth elements, is a considerable incentive to develop a technology for metal recovery. Recycling is also a key factor in the selection of the treatment process, which should ensure that the thresholds required for reuse are met. The paper will present a study of the chemical composition of flowback fluids, based on samples from hydraulic fracturing processes performed in Poland. The scheme of the flowback fluid cleaning and recovery technology will be reviewed, along with a discussion of the results and an assessment of environmental impact, including all generated by-products. The presented technology is innovative due to the metal recovery as well as the purified water supplied for the hydraulic fracturing process, which is a significant contribution to reducing water consumption.

Keywords: environmental impact, flowback fluid, management of special waste streams, metals recovery, shale gas

Procedia PDF Downloads 267
128 Clinical and Analytical Performance of Glial Fibrillary Acidic Protein and Ubiquitin C-Terminal Hydrolase L1 Biomarkers for Traumatic Brain Injury in the Alinity Traumatic Brain Injury Test

Authors: Raj Chandran, Saul Datwyler, Jaime Marino, Daniel West, Karla Grasso, Adam Buss, Hina Syed, Zina Al Sahouri, Jennifer Yen, Krista Caudle, Beth McQuiston

Abstract:

The Alinity i TBI test is Therapeutic Goods Administration (TGA) registered and is a panel of in vitro diagnostic chemiluminescent microparticle immunoassays for the measurement of glial fibrillary acidic protein (GFAP) and ubiquitin C-terminal hydrolase L1 (UCH-L1) in plasma and serum. The Alinity i TBI performance was evaluated in a multi-center pivotal study to demonstrate its capability to assist in determining the need for a CT scan of the head in adult subjects (age 18+) presenting with suspected mild TBI (traumatic brain injury) with a Glasgow Coma Scale score of 13 to 15. TBI has been recognized as an important cause of death and disability and is a growing public health problem; an estimated 69 million people globally experience a TBI annually. Blood-based biomarkers such as GFAP and UCH-L1 have shown utility in predicting acute traumatic intracranial injury on head CT scans after TBI. A pivotal study using prospectively collected archived (frozen) plasma specimens was conducted to establish the clinical performance of the TBI test on the Alinity i system. The specimens were originally collected in a prospective, multi-center clinical study, and testing of the specimens was performed at three clinical sites in the United States. Performance characteristics such as detection limits, imprecision, linearity, measuring interval, expected values, and interferences were established following Clinical and Laboratory Standards Institute (CLSI) guidance. Of the 1899 mild TBI subjects, 120 had positive head CT scan results; 116 of the 120 specimens had a positive TBI interpretation (sensitivity 96.7%; 95% CI: 91.7%, 98.7%). Of the 1779 subjects with negative CT scan results, 713 had a negative TBI interpretation (specificity 40.1%; 95% CI: 37.8%, 42.4%). The negative predictive value (NPV) of the test was 99.4% (713/717; 95% CI: 98.6%, 99.8%). The analytical measuring interval (AMI) extends from the limit of quantitation (LoQ) to the upper LoQ and is determined by the range that demonstrates acceptable performance for linearity, imprecision, and bias. The AMI is 6.1 to 42,000 pg/mL for GFAP and 26.3 to 25,000 pg/mL for UCH-L1. Overall within-laboratory imprecision (20-day) ranged from 3.7 to 5.9% CV for GFAP and 3.0 to 6.0% CV for UCH-L1 when including lot and instrument variances. The Alinity i TBI clinical performance results demonstrated high sensitivity and high NPV, supporting its utility in determining the need for a head CT scan in subjects presenting to the emergency department with suspected mild TBI. The GFAP and UCH-L1 assays show robust analytical performance across a broad concentration range and may serve as a valuable tool to help evaluate TBI patients across the spectrum of mild to severe injury.
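The reported sensitivity, specificity and NPV follow directly from the subject counts in the abstract; a short Python check of that arithmetic:

```python
# Diagnostic metrics reconstructed from the abstract's counts.
tp, fn = 116, 4        # CT-positive subjects: 116 of 120 test-positive
tn = 713               # CT-negative subjects with a negative TBI result
fp = 1779 - tn         # remaining CT-negative subjects

sensitivity = tp / (tp + fn)   # 116/120  -> 96.7%
specificity = tn / (tn + fp)   # 713/1779 -> 40.1%
npv = tn / (tn + fn)           # 713/717  -> 99.4%

print(f"Sensitivity: {sensitivity:.1%}")
print(f"Specificity: {specificity:.1%}")
print(f"NPV: {npv:.1%}")
```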

Keywords: biomarker, diagnostic, neurology, TBI

Procedia PDF Downloads 73
127 Proposals to Increase the Durability of Concrete Affected by Acid Mine Waters

Authors: Cristian Rodriguez, Jose M. Davila, Aguasanta M. Sarmiento, María L. de la Torre

Abstract:

There are many acidic environments that degrade structural concrete, such as those found in water treatment plants, sports facilities, and more, but one of the most aggressive is undoubtedly water from acid mine drainage. This phenomenon occurs in all pyrite mining facilities and, to a lesser extent, in coal mines, and is characterised by very low pH values and high sulphate, metal, and metalloid contents. It causes significant damage to concrete, mainly attacking the binder. In addition, the process is accentuated by the action of acidophilic bacteria, which accelerate the cracking of the concrete. Because of the damage that concrete experiences in acidic environments, the authors of this study aimed to enhance its performance in various respects. Thus, two solutions have been proposed to improve concrete durability, acting both on the mass of the material itself, with the incorporation of fibres, and on its surface, with treatments based on two different paints. The incorporation of polypropylene fibres into the concrete mass aims to improve the tensile strength of concrete, this parameter being the most affected in this type of degradation. The protection of the concrete with surface paint is intended to improve performance against abrasion while reducing the access of water to the interior of the mass of the material. Sulphate-resistant cement has been used in all the mass concrete mixtures prepared, in addition to complying with the requirements of the current Spanish standard, equivalent to the Eurocodes. For the polypropylene fibres, two alternatives have been used, with 1.7 and 3.4 kg/m³, while as surface treatment the use of two paints has been analysed, one based on polyurethane and the other an asphalt-type paint. The proposed treatments have been analysed by means of indirect tensile tests and pressure sandblasting, thus analysing the effects of abrasion. The results obtained have confirmed a slight increase in the tensile strength of mass concrete from incorporating polypropylene fibres, somewhat higher for a ratio of 3.4 kg/m³, with an improvement of slightly more than 5% in tensile strength. However, the use of fibres in concrete greatly reduces the loss of concrete mass due to abrasion. This improvement against abrasion is even more significant when paint is used as an external protection measure, with a much lower loss of mass for both paints. Acknowledgments: This work has been supported by MICIU/AEI/10.13039/501100011033/FEDER, UE, through the project PID2021-123130OB-I00.

Keywords: degradation, concrete, tensile strength, abrasion

Procedia PDF Downloads 24
126 Detection of High Fructose Corn Syrup in Honey by Near Infrared Spectroscopy and Chemometrics

Authors: Mercedes Bertotto, Marcelo Bello, Hector Goicoechea, Veronica Fusca

Abstract:

The National Service of Agri-Food Health and Quality (SENASA) controls honey to detect contamination by synthetic or natural chemical substances and establishes and controls the traceability of the product. The utility of near-infrared spectroscopy for the detection of adulteration of honey with high fructose corn syrup (HFCS) was investigated. First of all, a mixture of different authentic artisanal Argentinian honeys was prepared to cover as much heterogeneity as possible. Then, mixtures were prepared by adding different concentrations of HFCS to samples of the honey pool. A total of 237 samples were used: 108 of them were authentic honey, and 129 corresponded to honey adulterated with HFCS at between 1 and 10%. They were stored unrefrigerated from the time of production until scanning and were not filtered after receipt in the laboratory. Immediately prior to spectral collection, the honey was incubated at 40 °C overnight to dissolve any crystalline material, manually stirred to achieve homogeneity and adjusted to a standard solids content (70° Brix) with distilled water. Adulterant solutions were also adjusted to 70° Brix. Samples were measured by NIR spectroscopy in the range of 650 to 7000 cm⁻¹. The technique of specular reflectance was used, with a lens aperture range of 150 mm. Pretreatment of the spectra was performed by Standard Normal Variate (SNV). The ant colony optimization genetic algorithm sample selection (ACOGASS) graphical interface was used, using MATLAB version 5.3, to select the variables with the greatest discriminating power. The data set was divided into a validation set and a calibration set using the Kennard-Stone (KS) algorithm. A method combining Potential Functions (PF) with Partial Least Squares Linear Discriminant Analysis (PLS-DA) was chosen. Different estimators of the predictive capacity of the model were compared; these were obtained using a decreasing number of groups, which implies more demanding validation conditions. The optimal number of latent variables was selected as the number associated with the minimum error and the smallest number of unassigned samples. Once the optimal number of latent variables was defined, the model was applied to the training samples, and the calibrated model was then used to study the validation samples. The calibrated model combining the potential function method and PLS-DA can be considered reliable and stable, since its performance on future samples is expected to be comparable to that achieved for the training samples. By use of PF and PLS-DA classification, authentic honey and honey adulterated with HFCS could be identified with a correct classification rate of 97.9%. The results showed that NIR in combination with the PF and PLS-DA methods can be a simple, fast and low-cost technique for the detection of HFCS in honey with high sensitivity and power of discrimination.
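Two of the steps above, SNV pretreatment and PLS-DA classification, are easy to sketch. A minimal example on synthetic spectra; the grid size, injected class offset and the 0.5 decision threshold are assumptions, not the study's settings:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Synthetic "spectra": 60 samples x 200 wavenumbers, two classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))
y = np.repeat([0, 1], 30)       # 0 = authentic, 1 = adulterated
X[y == 1] += 0.3                # inject a class difference

def snv(spectra):
    # Standard Normal Variate: centre and scale each spectrum individually.
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

X_snv = snv(X)

# PLS-DA: PLS regression against the class label, thresholded at 0.5.
pls = PLSRegression(n_components=5).fit(X_snv, y)
y_hat = (pls.predict(X_snv).ravel() > 0.5).astype(int)
print("correct classification rate:", (y_hat == y).mean())
```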

Keywords: adulteration, multivariate analysis, potential functions, regression

Procedia PDF Downloads 127
125 Evaluation of Correct Usage, Comfort and Fit of Personal Protective Equipment in Construction Work

Authors: Anna-Lisa Osvalder, Jonas Borell

Abstract:

There are several reasons behind the use, non-use, or inadequate use of personal protective equipment (PPE) in the construction industry. Comfort and accurate size support proper use, while discomfort, misfit, and difficulty understanding how the PPE should be handled inhibit correct usage. The need to wear several pieces of protective equipment simultaneously might also create problems. The purpose of this study was to analyse the correct usage, comfort, and fit of different types of PPE used for construction work. Correct usage was analysed as guessability, i.e., human perceptions of how to don, adjust, use, and doff the equipment, and whether it is used as intended. The PPE tested, individually or in combinations, comprised a helmet, ear protectors, goggles, respiratory masks, gloves, protective clothing, and safety harnesses. First, an analytical evaluation was performed with ECW (enhanced cognitive walkthrough) and PUEA (predictive use error analysis) to search for usability problems and use errors during handling and use. Then usability tests were conducted to evaluate guessability, comfort, and fit with 10 test subjects of different heights and body constitutions. The tests included observations during donning, five different outdoor work tasks, and doffing. The think-aloud method, short interviews, and subjective ratings were used. The analytical evaluation showed that some usability problems and use errors arise during donning and doffing, but with minor severity, mostly causing discomfort. A few use errors and usability problems arose for the safety harness, especially for novices, where some could lead to a high risk of severe incidents. The usability tests showed that discomfort arose for all test subjects when using a combination of PPE, increasing over time. For instance, goggles together with the face mask caused pressure, chafing at the nose, and heat rash on the face; this combination also limited the field of vision. The helmet in combination with the goggles and ear protectors did not fit well and caused uncomfortable pressure at the temples. No major problems were found with the individual fit of the PPE. The ear protectors, goggles, and face masks could be adjusted for different head sizes. The guessability of how to don and wear the combination of PPE was moderate, but it took some time to adjust the items for a good fit. The guessability was poor for the safety harness; few clues in the design showed how it should be donned, adjusted, or worn on the skeletal bones. Discomfort occurred when the straps were tightened too much, and not all straps could be adjusted to every body constitution, leading to non-optimal safety. To conclude, if several types of PPE are used together, discomfort leading to pain is likely to occur over time, which can lead to misuse, non-use, or reduced performance. For people who are not regular users to wear a safety harness correctly, the design needs to be improved for easier interpretation, correct positioning of the straps, and increased possibilities for individual adjustment. The results from this study can serve as a basis for redesign ideas for PPE, especially when items are to be used in combination.

Keywords: construction work, PPE, personal protective equipment, misuse, guessability, usability

Procedia PDF Downloads 93
124 Handling, Exporting and Archiving Automated Mineralogy Data Using TESCAN TIMA

Authors: Marek Dosbaba

Abstract:

Within the mining sector, SEM-based automated mineralogy (AM) has been the standard application for quickly and efficiently handling mineral processing tasks. Over the last decade, the trend has been to analyze larger numbers of samples, often with a higher level of detail. This has necessitated a shift from interactive sample analysis performed by an operator using a SEM to an increased reliance on offline processing to analyze and report the data. In response to this trend, the TESCAN TIMA Mineral Analyzer is designed to quickly create a virtual copy of the studied samples, thereby preserving all the necessary information. Depending on the selected data acquisition mode, TESCAN TIMA can perform hyperspectral mapping and save an X-ray spectrum for each pixel or segment. This approach allows the user to browse through elemental distribution maps of all elements detectable by means of energy-dispersive spectroscopy. Re-evaluation of the existing data for the presence of previously unconsidered elements is possible without the need to repeat the analysis. Additional tiers of data, such as secondary electron or cathodoluminescence images, can also be recorded. To take full advantage of these information-rich datasets, TIMA utilizes a new archiving tool introduced by TESCAN. The dataset size can be reduced for long-term storage, and all information can be recovered on demand in case of renewed interest. TESCAN TIMA is optimized for network storage of its datasets because of the larger data storage capacity of servers compared to local drives, which also allows multiple users to access the data remotely. This goes hand in hand with support for remote control of the entire data acquisition process. TESCAN also brings a newly extended open-source data format that allows other applications to extract, process and report AM data. This offers the ability to link TIMA data to large databases feeding plant performance dashboards or geometallurgical models. The traditional tabular particle-by-particle or grain-by-grain export process is preserved and can be customized with scripts to include user-defined particle/grain properties, as sketched below.
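As one illustration of such scripted post-processing, a short sketch that summarizes a tabular particle-by-particle export into modal mineralogy; the table and its column names are hypothetical stand-ins, not the actual TIMA export schema:

```python
import pandas as pd

# Toy particle-by-particle table; columns are hypothetical, not TIMA's format.
particles = pd.DataFrame({
    "particle_id": [1, 2, 3, 4],
    "mineral": ["quartz", "pyrite", "quartz", "chalcopyrite"],
    "area_um2": [120.0, 45.0, 80.0, 30.0],
})

# Modal mineralogy by area, the kind of summary typically fed into
# geometallurgical models or plant performance dashboards.
modal = (particles.groupby("mineral")["area_um2"].sum()
         / particles["area_um2"].sum() * 100)
print(modal.sort_values(ascending=False).round(2))
```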

Keywords: Tescan, electron microscopy, mineralogy, SEM, automated mineralogy, database, TESCAN TIMA, open format, archiving, big data

Procedia PDF Downloads 113
123 Cr (VI) Adsorption on Ce0.25Zr0.75O2.nH2O: Kinetics and Thermodynamics

Authors: Carlos Alberto Rivera-corredor, Angie Dayana Vargas-Ceballos, Edison Gilpavas, Izabela Dobrosz-Gómez, Miguel Ángel Gómez-García

Abstract:

Hexavalent chromium, Cr (VI), is present in the effluents of different industries such as electroplating, mining, leather tanning, etc. This compound is of great academic and industrial concern because of its toxic and carcinogenic behavior, and its dumping into water sources causes serious environmental and public health problems for animals and humans. The amount of Cr (VI) in industrial wastewaters ranges from 0.5 to 270,000 mg L-1. According to the Colombian standard for water quality (NTC-813-2010), the maximum allowed concentration of Cr (VI) in drinking water is 0.05 mg L-1. To comply with this limit, it is essential that industries treat their effluent to reduce the Cr (VI) to acceptable levels. Numerous methods have been reported for removing metal ions from aqueous solutions, such as reduction, ion exchange, electrodialysis, etc. Adsorption has become a promising method for the purification of metal ions in water, since it is an economical and efficient technology. The adsorbent selection and the kinetic and thermodynamic study of the adsorption conditions are key to the development of a suitable adsorption technology. Ce0.25Zr0.75O2.nH2O presents the highest adsorption capacity among a series of hydrated mixed oxides Ce1-xZrxO2 (x = 0, 0.25, 0.5, 0.75, 1). This work presents the kinetic and thermodynamic study of Cr (VI) adsorption on Ce0.25Zr0.75O2.nH2O. Experiments were performed under the following experimental conditions: initial Cr (VI) concentration = 25, 50 and 100 mg L-1, pH = 2, adsorbent charge = 4 g L-1, stirring time = 60 min, temperature = 20, 28 and 40 °C. The Cr (VI) concentration was spectrophotometrically estimated by the diphenylcarbazide method, monitoring the absorbance at 540 nm. The Cr (VI) adsorption over hydrated Ce0.25Zr0.75O2.nH2O was analyzed using pseudo-first and pseudo-second order kinetic models, and the Langmuir and Freundlich models were used to model the equilibrium data. The convergence between the experimental values and those predicted by the model is expressed as a linear regression correlation coefficient (R2) and was employed as the model selection criterion. The adsorption process followed the pseudo-second order kinetic model and obeyed the Langmuir isotherm model. The thermodynamic parameters were calculated as ΔH° = 9.04 kJ mol-1, ΔS° = 0.03 kJ mol-1 K-1 and ΔG° = -0.35 kJ mol-1, indicating the endothermic and spontaneous nature of the adsorption process, governed by physisorption interactions.
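The model fits described above can be reproduced with non-linear least squares. A minimal sketch fitting the Langmuir isotherm and the integrated pseudo-second-order rate law to invented data points (scipy); the data values are illustrative, not the study's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented equilibrium data: concentration (mg/L) vs adsorbed amount (mg/g).
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
qe = np.array([4.1, 7.8, 11.2, 14.0, 15.6])

def langmuir(Ce, qmax, KL):
    # Langmuir isotherm: qe = qmax*KL*Ce / (1 + KL*Ce)
    return qmax * KL * Ce / (1 + KL * Ce)

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[16, 0.1])
print(f"Langmuir: qmax={qmax:.2f} mg/g, KL={KL:.3f} L/mg")

# Invented kinetic data: time (min) vs uptake (mg/g).
t = np.array([5, 10, 20, 30, 60], dtype=float)
qt = np.array([6.0, 9.5, 12.5, 13.8, 15.0])

def pseudo_second_order(t, k2, qe_fit):
    # Integrated form: qt = k2*qe^2*t / (1 + k2*qe*t)
    return k2 * qe_fit**2 * t / (1 + k2 * qe_fit * t)

(k2, qe_fit), _ = curve_fit(pseudo_second_order, t, qt, p0=[0.01, 16])
print(f"Pseudo-second order: k2={k2:.4f} g/(mg min), qe={qe_fit:.2f} mg/g")
```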

Keywords: adsorption, hexavalent chromium, kinetics, thermodynamics

Procedia PDF Downloads 303
122 Measuring Enterprise Growth: Pitfalls and Implications

Authors: N. Šarlija, S. Pfeifer, M. Jeger, A. Bilandžić

Abstract:

Enterprise growth is generally considered a key driver of competitiveness, employment, economic development and social inclusion. As such, it is perceived to be a highly desirable outcome of entrepreneurship for scholars and decision makers. The huge academic debate has resulted in a multitude of theoretical frameworks focused on explaining growth stages, determinants and future prospects. It has been widely accepted that enterprise growth is most likely nonlinear, temporal and related to the variety of factors which reflect the individual, firm, organizational, industry or environmental determinants of growth. However, factors that affect growth are not easily captured, instruments to measure those factors are often arbitrary, and causality between variables and growth is elusive, indicating that growth is not easily modeled. Furthermore, in line with the heterogeneous nature of the growth phenomenon, there is a vast number of measurement constructs assessing growth which are used interchangeably. Differences among various growth measures, at the conceptual as well as the operationalization level, can hinder theory development, which emphasizes the need for more empirically robust studies. In line with these highlights, the main purpose of this paper is twofold: firstly, to compare the structure and performance of three growth prediction models based on the main growth measures, revenue, employment and assets growth; and secondly, to explore the prospects of financial indicators, set as exact, visible, standardized and accessible variables, to serve as determinants of enterprise growth. Finally, the paper contributes to the understanding of the implications for research results and recommendations for growth caused by different growth measures. The models include a range of financial indicators as lagged determinants of the enterprises’ performances during 2008-2013, extracted from the national register of the financial statements of SMEs in Croatia. The design and testing stage of the modeling used logistic regression procedures. Findings confirm that growth prediction models based on different measures of growth have different sets of predictors. Moreover, the relationship between particular predictors and growth measures is inconsistent; namely, the same predictor positively related to one growth measure may exert a negative effect on a different growth measure. Overall, financial indicators alone can serve as a good proxy of growth and yield adequate predictive power for the models. The paper sheds light on both the methodology and the conceptual framework of enterprise growth by using a range of variables which serve as a proxy for the multitude of internal and external determinants, but are, unlike them, accessible, available, exact and free of perceptual nuances in building up the model. Selection of the growth measure seems to have a significant impact on the implications and recommendations related to growth. Furthermore, the paper points out potential pitfalls of measuring and predicting growth. Overall, the results and implications of the study are relevant for advancing academic debates on growth-related methodology and can contribute to evidence-based decisions of policy makers.
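The design, one logistic regression per growth measure over the same financial indicators, is easy to sketch. A minimal example on synthetic data showing how the fitted coefficients differ with the outcome definition; all names and effect sizes are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for lagged financial indicators of 500 SMEs.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))

# Three alternative binary growth outcomes over the same indicators.
growth_measures = {
    "revenue_growth": (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500)) > 0,
    "employment_growth": (X[:, 2] - 0.4 * X[:, 1] + rng.normal(size=500)) > 0,
    "assets_growth": (X[:, 3] + 0.3 * X[:, 4] + rng.normal(size=500)) > 0,
}

for name, y in growth_measures.items():
    model = LogisticRegression().fit(X, y)
    # Different outcome definitions yield different predictor weights,
    # illustrating the paper's point about measure-dependent results.
    print(name, np.round(model.coef_.ravel(), 2))
```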

Keywords: growth measurement constructs, logistic regression, prediction of growth potential, small and medium-sized enterprises

Procedia PDF Downloads 254
121 Online Monitoring and Control of Continuous Mechanosynthesis by UV-Vis Spectrophotometry

Authors: Darren A. Whitaker, Dan Palmer, Jens Wesholowski, James Flaherty, John Mack, Ahmad B. Albadarin, Gavin Walker

Abstract:

Traditional mechanosynthesis has been performed by either ball milling or manual grinding. However, neither of these techniques allows the easy application of process control: the temperature may change unpredictably due to friction in the process, and hence the amount of energy transferred to the reactants is intrinsically non-uniform. Recently, it has been shown that the use of twin-screw extrusion (TSE) can overcome these limitations. Additionally, TSE provides a platform for continuous synthesis or manufacturing, as it is an open-ended process with feedstocks at one end and product at the other. Several materials, including metal-organic frameworks (MOFs), co-crystals and small organic molecules, have been produced mechanochemically using TSE. The described advantages of TSE are offset by drawbacks such as increased process complexity (a large number of process parameters) and variation in feedstock flow impacting product quality. To handle these drawbacks, this study utilizes UV-Vis spectrophotometry (InSpectroX, ColVisTec) as an online tool to gain real-time information about the quality of the product. Additionally, this is combined with real-time process information in an Advanced Process Control system (PharmaMV, Perceptive Engineering), allowing full supervision and control of the TSE process. Further, by characterizing the dynamic behavior of the TSE, a model predictive controller (MPC) can be employed to ensure the process remains under control when perturbed by external disturbances. Two reactions were studied: a Knoevenagel condensation of barbituric acid and vanillin, and the direct amidation of hydroquinone by ammonium acetate to form N-Acetyl-para-aminophenol (APAP), commonly known as paracetamol. Both reactions could be carried out continuously using TSE, and nuclear magnetic resonance (NMR) spectroscopy was used to confirm the percentage conversion of starting materials to product. This information was used to construct partial least squares (PLS) calibration models within the PharmaMV development system, relating the acquired UV-Vis spectrum to the percentage conversion to product. Once this was complete, the model was deployed within the PharmaMV Real-Time System to carry out automated optimization experiments to maximize the percentage conversion based on a set of process parameters, in a design of experiments (DoE) style methodology. With the optimum set of process parameters established, a series of pseudo-random binary sequence (PRBS) process response tests around the optimum was conducted. The resultant dataset was used to build a statistical model and an associated MPC. The controller maximizes product quality whilst ensuring the process remains at the optimum even as disturbances, such as raw material variability, are introduced into the system. To summarize, a combination of online spectral monitoring and advanced process control was used to develop a robust system for optimization and control of two TSE-based mechanosynthetic processes.
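A minimal sketch of the PLS calibration step described above, using scikit-learn; the spectra, conversion values and component count are synthetic stand-ins, not the study's calibration data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Hypothetical calibration set: each row is a UV-Vis spectrum (absorbance at
# n_wavelengths points); y is the NMR-determined percentage conversion.
rng = np.random.default_rng(0)
n_samples, n_wavelengths = 40, 200
spectra = rng.normal(size=(n_samples, n_wavelengths))   # stand-in spectra
conversion = rng.uniform(20, 95, size=n_samples)        # stand-in % conversion

pls = PLSRegression(n_components=3)                     # component count is illustrative
predicted = cross_val_predict(pls, spectra, conversion, cv=5).ravel()
rmsecv = np.sqrt(np.mean((predicted - conversion) ** 2))
print(f"RMSECV = {rmsecv:.2f} % conversion")

# Once calibrated, the deployed model maps each new in-line spectrum to a
# conversion estimate that a supervisory (MPC) layer can act on.
pls.fit(spectra, conversion)
new_spectrum = rng.normal(size=(1, n_wavelengths))
print("predicted conversion:", float(pls.predict(new_spectrum).ravel()[0]))
```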

Keywords: continuous synthesis, pharmaceutical, spectroscopy, advanced process control

Procedia PDF Downloads 183
120 The Influence of Mechanical and Physicochemical Characteristics of Perfume Microcapsules on Their Rupture Behaviour and How This Relates to Performance in Consumer Products

Authors: Andrew Gray, Zhibing Zhang

Abstract:

The ability of consumer products to deliver a sustained perfume response can be a key driver for a variety of applications. Many compounds in perfume oils are highly volatile, meaning they readily evaporate once the product is applied, and the longevity of the scent is poor. Perfume capsules have been introduced as a means of abating this evaporation once the product has been delivered. The impermeable capsules are designed to be stable within the formulation and to remain intact during delivery to the desired substrate, only rupturing to release the core perfume oil under mechanical force applied by the consumer. This opens up the possibility of obtaining an olfactive response hours, weeks or even months after delivery, depending on the nature of the desired application. Tailoring the properties of the polymeric capsules to better address the needs of the application is not a trivial challenge, and capsule design is currently done largely by trial and error. The aim of this work is to develop more predictive methods for capsule design depending on the consumer application. This means refining formulations such that they rupture at the right time for the specific consumer application: not too early, not too late. Finding the right balance between these extremes is essential if a benefit is sought with respect to the neat addition of perfume to formulations. It is important to understand the forces that influence capsule rupture, first by quantifying the magnitude of these different forces, and then by assessing bulk rupture in real-world applications to understand how capsules actually respond. Samples were provided by an industrial partner, and the mechanical properties of individual capsules within the samples were characterized via a micromanipulation technique developed by Professor Zhang at the University of Birmingham. The capsules were synthesized so as to change one particular physicochemical property at a time, such as the core:wall material ratio or the average capsule size. Analysis of shell thickness via transmission electron microscopy, of size distribution via a Mastersizer, and a variety of other techniques confirmed that only one physicochemical property was altered in each sample. The mechanical analysis was subsequently undertaken, showing the effect that changing certain capsule properties had on the response under compression. It was, however, important to link this fundamental mechanical response to capsule performance in real-world applications. As such, the capsule samples were introduced into a formulation and exposed to full-scale stresses. GC-MS headspace analysis of the perfume oil released from broken capsules enabled quantification of what the relative strengths of capsules truly mean for product performance. Correlations have been found between the mechanical strength of capsule samples and performance in terms of perfume release in consumer applications. A better understanding of the key parameters that drive performance benefits the design of future formulations by offering better guidelines on the parameters that can be adjusted without worrying about performance effects, and singles out those parameters that are essential in finding the sweet spot for capsule performance.

Keywords: consumer products, mechanical and physicochemical properties, perfume capsules, rupture behaviour

Procedia PDF Downloads 133
119 Assessment of the Situation and the Cause of Junk Food Consumption in Iranians: A Qualitative Study

Authors: A. Rezazadeh, B Damari, S. Riazi-Esfahani, M. Hajian

Abstract:

The consumption of junk food in Iran is alarmingly increasing. This study aimed to investigate the factors influencing junk food consumption and the amendable interventions that were reviewed and approved by stakeholders, in order to present them to health policy makers. Articles and documents related to the content of the study were collected using appropriate key words such as junk food, carbonated beverage, chocolate, candy, sweets, industrial fruit juices, potato chips, French fries, puffed corn, cakes, biscuits, sandwiches, prepared foods, popsicles, ice cream, bar, chewing gum, pastilles and snack, in the scholar.google.com, pubmed.com, eric.ed.gov, cochrane.org, magiran.com, medlib.ir, irandoc.ac.ir, who.int, iranmedex.com, sid.ir, pubmed.org and sciencedirect.com databases. The main key points were extracted, included in a checklist and qualitatively analyzed. A summarized abstract was then prepared in the format of a questionnaire to be presented to stakeholders. The design of the study was qualitative (Delphi). According to this method, a questionnaire was prepared based on the review of the articles and documents and emailed to stakeholders, who were asked to prioritize and choose the main problems and effective interventions. After three rounds, consensus was obtained. Studies revealed high consumption of junk foods in the Iranian population, especially among children and adolescents. The most important contributing factors include availability, low price, media advertisements, preference for the taste of fast foods, the variety and attractiveness of packaging, low awareness and changing lifestyles. The main interventions recommended by stakeholders include developing a protective environment, educational interventions, increasing access to healthy food, controlling media advertisements and pressure from the Industry and Mining Ministry on producers to produce healthy snacks. According to the findings, the results of this study may be proposed to public health policy makers as an advocacy paper and integrated into the interventional programs of the Health and Education ministries and the media. The implementation of supportive meetings with the producers of alternative healthy products is also suggested.

Keywords: junk foods, situation, qualitative study, Iran

Procedia PDF Downloads 261
118 Public Procurement and Innovation: A Municipal Approach

Authors: M. Moso-Diez, J. L. Moragues-Oregi, K. Simon-Elorz

Abstract:

Innovation procurement, which is designed to steer the development of solutions towards concrete public sector needs and to act as a demand-side driver of innovation (in public services as well as in market opportunities for companies), is horizontally emerging as a new policy instrument. In 2014, the new EU public procurement directives 2014/24/EC and 2014/25/EC reinforced the support for Public Procurement for Innovation, dedicating funding instruments that can be used across all areas supported by Horizon 2020 and targeting potential buyers of innovative solutions: groups of public procurers with similar needs. Under this programme, new policy adopters and networks emerge, aiming to embed innovation criteria into new procurement processes. As these initiatives are still in progress, related research is scarce. We argue that Innovation Public Procurement can arise as an innovative policy instrument for public procurement in different policy domains, in spite of existing institutional and cultural barriers (legal guarantee versus innovation). The presentation combines insights from public procurement and supply chain management in a sustainability and innovation policy arena, as a means of providing an understanding of: (1) the circumstances that emerge; (2) the relationship between public and private actors; and (3) the emerging capacities in the definition of the agenda. The policy adopters are the contracting authorities, mainly at the municipal level, where they interact with the supply chain, interconnecting sustainability and climate measures with other policy priorities such as innovation and urban planning, through the Competitive Dialogue procedure. We found that geography and territory affect both the level of the municipal budget (due to municipal income per capita) and its institutional competencies (due to demographic reasons). In spite of the relevance of institutional determinants for public procurement, other factors play an important role, such as human factors as well as both public policy and private intervention. The case studied is a 'city project' (Bilbao) in the field of brownfield decontamination. Brownfield sites typically refer to abandoned or underused industrial and commercial properties, such as old process plants, mining sites and landfills, that are available but contain low levels of environmental contaminants that may complicate reuse or redevelopment of the land. This article concludes that Innovation Public Procurement in sustainability and climate issues should be further developed, both as a policy instrument and as a policy research line, as it could enable further relevant changes in public procurement as well as in climate innovation.

Keywords: innovation, city projects, public policy, public procurement

Procedia PDF Downloads 314
117 Kinetic Studies on CO₂ Gasification of Low and High Ash Indian Coals in Context of Underground Coal Gasification

Authors: Geeta Kumari, Prabu Vairakannu

Abstract:

Underground coal gasification (UCG) is an efficient and economical in-situ clean coal technology, which converts unmineable coals into gases of high calorific value. This technology avoids ash disposal, coal mining, and storage problems. CO₂ gas can be a potential gasifying medium for UCG. CO₂ is a greenhouse gas, and the liberation of this gas to the atmosphere from thermal power plants leads to global warming. Hence, the capture and reutilization of CO₂ gas are crucial for clean energy production. However, the reactivity of high ash Indian coals with CO₂ needs to be assessed. In the present study, two varieties of Indian coals (low ash and high ash) are used for thermogravimetric analyses (TGA). Two low ash north-east Indian coals (LAC) and a typical high ash Indian coal (HAC) were procured from the coal mines of India. Low ash coals with 9% ash (LAC-1) and 4% ash (LAC-2) and a high ash coal (HAC) with 42% ash are used for the study. TGA studies are carried out to evaluate the activation energy for pyrolysis and gasification of coal under N₂ and CO₂ atmospheres. The Coats and Redfern method is used to estimate the activation energy of coal in different temperature regimes, assuming a volumetric model. The inherent properties of the coals play a major role in their reactivity. The results show that the activation energy decreases with a decrease in the inherent percentage of coal ash, owing to reduced hindrance by the ash layer. A reverse trend was observed with volatile matter: a high volatile matter content leads to a low estimated activation energy. It was observed that the activation energy under a CO₂ atmosphere at 400-600°C is lower than under an inert N₂ atmosphere; in this temperature range, a 15-23% reduction in the activation energy under CO₂ is estimated. This shows the reactivity of CO₂ gas with the higher hydrocarbons of the coal volatile matter. The reactivity of CO₂ with the volatile matter of coal might occur through the dry reforming reaction, in which CO₂ reacts with the higher hydrocarbons and aromatics of the tar content. The observed trend of Ea in the temperature ranges of 150-200˚C and 400-600˚C is HAC > LAC-1 > LAC-2 in both N₂ and CO₂ atmospheres. In the temperature range of 850-1000˚C, higher activation energies are estimated than in the range of 400-600°C. Above 800°C, char gasification progresses through the Boudouard reaction under a CO₂ atmosphere, and the activation energy was observed to increase by 8-20 kJ/mol during char gasification above 800°C compared to volatile matter pyrolysis between 400-600°C. The overall activation energy of the coals in the temperature range of 30-1000˚C is higher in the N₂ atmosphere than in the CO₂ atmosphere. It can be concluded that higher hydrocarbons such as tar effectively undergo cracking and reforming reactions in the presence of CO₂. Thus, CO₂ gas is beneficial for the production of high calorific value syngas using high ash Indian coals.
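For reference, the Coats and Redfern method with the volumetric (first-order) model linearizes the TGA data as ln[-ln(1-α)/T²] = ln(AR/(βEa)) - Ea/(RT), so that Ea follows from the slope of a plot against 1/T. The sketch below illustrates the fit on hypothetical TGA values, not the coals studied here.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

# Hypothetical TGA segment for illustration (temperatures in K, conversion
# alpha dimensionless); beta is the heating rate in K/min.
T = np.array([673, 723, 773, 823, 873], dtype=float)
alpha = np.array([0.10, 0.22, 0.38, 0.55, 0.70])
beta = 10.0

# Coats-Redfern, volumetric (first-order) model:
# ln[-ln(1 - alpha) / T^2] = ln(A*R/(beta*Ea)) - Ea/(R*T)
y = np.log(-np.log(1.0 - alpha) / T**2)
x = 1.0 / T
slope, intercept = np.polyfit(x, y, 1)

Ea = -slope * R / 1000.0                            # activation energy, kJ/mol
A = np.exp(intercept) * beta * (Ea * 1000.0) / R    # pre-exponential factor, min^-1
r2 = np.corrcoef(x, y)[0, 1] ** 2
print(f"Ea = {Ea:.1f} kJ/mol, A = {A:.3e} min^-1, R2 = {r2:.3f}")
```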

Keywords: clean coal technology, CO₂ gasification, activation energy, underground coal gasification

Procedia PDF Downloads 175
116 Predicting Food Waste and Losses Reduction for Fresh Products in Modified Atmosphere Packaging

Authors: Matar Celine, Gaucel Sebastien, Gontard Nathalie, Guilbert Stephane, Guillard Valerie

Abstract:

To increase the very short shelf life of fresh fruits and vegetables, Modified Atmosphere Packaging (MAP) allows an optimal atmosphere composition to be maintained around the product and thus prevents its decay. This technology relies on the modification of the internal packaging atmosphere due to the equilibrium between the production/consumption of gases by the respiring product and gas permeation through the packaging material. While, to the best of our knowledge, the benefit of MAP for fresh fruits and vegetables has been widely demonstrated in the literature, its effect on shelf life increase has never been quantified and formalized in a clear and simple manner, making it difficult to anticipate its economic and environmental benefits, notably through the decrease of food losses. Mathematical modelling of mass transfers in the food/packaging system is the basis for a better design and dimensioning of the food packaging system. But up to now, existing models have not permitted the estimation of food quality or of the shelf life gain achieved by using MAP. However, shelf life prediction is an indispensable prerequisite for quantifying the effect of MAP on food losses reduction. The objective of this work is to propose an innovative approach to predict the shelf life of a MAP food product and then to link it to a reduction of food losses and wastes. For this purpose, a 'Virtual MAP modeling tool' was developed by coupling a new predictive deterioration model (based on the prediction of the visually deteriorated surface, encompassing colour, texture and spoilage development) with literature models for respiration and permeation. A major input of this modelling tool is the maximal acceptable percentage of deterioration (MAD), which was assessed from dedicated consumer studies. Strawberries of the variety Charlotte were selected as the model food for their high perishability and high respiration rate (50-100 ml CO₂/h/kg at 20°C), making them a good representative of challenging post-harvest storage. A value of 13% was determined as the limit of acceptability for consumers, permitting the products' shelf life to be defined. The 'Virtual MAP modeling tool' was validated in isothermal conditions (5, 10 and 20°C) and in dynamic temperature conditions mimicking commercial post-harvest storage of strawberries. RMSE values were systematically lower than 3% for the O₂, CO₂ and deterioration profiles as a function of time, confirming the goodness of the model fitting. For the investigated temperature profile, a shelf life gain of 0.33 days was obtained in MAP compared to the conventional storage situation (no MAP condition). A shelf life gain of more than 1 day could be obtained for optimized post-harvest conditions, as numerically investigated. Such a shelf life gain permitted the anticipation of a significant reduction of food losses at the distribution and consumer steps. This reduction of food losses as a function of shelf life gain was quantified using a dedicated mathematical equation developed for this purpose.
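The mass-transfer core of such a virtual MAP tool couples product respiration with film permeation. The sketch below integrates a minimal O₂/CO₂ headspace balance with Michaelis-Menten respiration; every parameter value is an illustrative placeholder, not one of the study's identified parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal O2/CO2 balance for a MAP pack; all parameter values are
# hypothetical placeholders, not the study's identified parameters.
V = 0.001              # headspace volume, m^3
A = 0.05               # film area, m^2
L = 40e-6              # film thickness, m
P_O2, P_CO2 = 1.0e-16, 4.0e-16   # permeabilities, mol.m/(m2.s.Pa)
m = 0.25               # product mass, kg
Vm_O2 = 3.0e-7         # max O2 uptake rate, mol/(kg.s)
Km = 2.0e3             # Michaelis constant, Pa
RQ = 1.0               # respiratory quotient (CO2 produced / O2 consumed)
Rgas, T, Ptot = 8.314, 283.0, 101325.0

def rhs(t, y):
    pO2, pCO2 = y  # headspace partial pressures, Pa
    resp = m * Vm_O2 * pO2 / (Km + pO2)              # Michaelis-Menten respiration, mol/s
    perm_O2 = P_O2 * A / L * (0.209 * Ptot - pO2)    # permeation, outside -> inside
    perm_CO2 = P_CO2 * A / L * (0.0 - pCO2)
    n_to_p = Rgas * T / V                            # converts mol/s to Pa/s
    return [n_to_p * (perm_O2 - resp), n_to_p * (perm_CO2 + RQ * resp)]

sol = solve_ivp(rhs, (0.0, 5 * 86400), [0.209 * Ptot, 0.0])
pO2_end, pCO2_end = sol.y[:, -1]
print(f"after 5 days: O2 = {pO2_end / Ptot:.1%}, CO2 = {pCO2_end / Ptot:.1%}")
```

The shelf-life layer then maps the simulated gas and temperature histories onto the deterioration model until the MAD threshold is crossed.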

Keywords: food losses and wastes, modified atmosphere packaging, mathematical modeling, shelf life prediction

Procedia PDF Downloads 185
115 Research on Territorial Ecological Restoration in Mianzhu City, Sichuan, under the Dual Evaluation Framework

Authors: Wenqian Bai

Abstract:

Background: In response to the post-pandemic directives of Xi Jinping concerning the new era of ecological civilization, China has embarked on ecological restoration projects across its territorial spaces. This initiative faces challenges such as complex evaluation metrics and subpar informatization standards. Methodology: This research focuses on Mianzhu City, Sichuan Province, to assess its resource and environmental carrying capacities and the appropriateness of land use for development from ecological, agricultural, and urban perspectives. The study incorporates a range of spatial data to evaluate factors like ecosystem services (including water conservation, soil retention, and biodiversity), ecological vulnerability (addressing issues like soil erosion and desertification), and resilience. Utilizing the Minimum Cumulative Resistance model along with the ‘Three Zones and Three Lines’ strategy, the research maps out ecological corridors and significant ecological networks. These frameworks support the ecological restoration and environmental enhancement of the area. Results: The study identifies critical ecological zones in Mianzhu City's northwestern region, highlighting areas essential for protection and particularly crucial for water conservation. The southeastern region is categorized as a generally protected ecological zone with respective ratings for water conservation functionality and ecosystem resilience. The research also explores the spatial challenges of three ecological functions and underscores the substantial impact of human activities, such as mining and agricultural expansion, on the ecological baseline. The proposed spatial arrangement for ecological restoration, termed ‘One Mountain, One Belt, Four Rivers, Five Zones, and Multiple Corridors’, strategically divides the city into eight major restoration zones, each with specific tasks and projects. Conclusion: With its significant ‘mountain-plain’ geography, Mianzhu City acts as a crucial ecological buffer for the Yangtze River's upper reaches. Future development should focus on enhancing ecological corridors in agriculture and urban areas, controlling soil erosion, and converting farmlands back to forests and grasslands to foster ecosystem rehabilitation.
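For reference, the Minimum Cumulative Resistance model mentioned above is commonly written as follows; this is a standard formulation from the landscape ecology literature, and the abstract does not specify the exact variant used.

```latex
% Standard MCR formulation (symbols follow the common landscape-ecology
% convention; the study's exact variant is not given in the abstract):
%   MCR  : minimum cumulative resistance to reach a landscape cell,
%   D_ij : distance crossed over landscape unit i from ecological source j,
%   R_i  : resistance coefficient of landscape unit i,
%   f_min: minimum taken over all candidate paths and sources.
\[
\mathrm{MCR} = f_{\min} \sum_{j=n}^{i=m} \left( D_{ij} \times R_i \right)
\]
```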

Keywords: ecological restoration, resource and environmental carrying capacity, land development suitability, ecosystem services, ecological vulnerability, ecological networks

Procedia PDF Downloads 46
114 Colloids and Heavy Metals in Groundwaters: Tangential Flow Filtration Method for Study of Metal Distribution on Different Sizes of Colloids

Authors: Jiancheng Zheng

Abstract:

When metals are released into water from mining activities, they undergo chemical, physical and biological changes and may then become more mobile and transportable along the waterway from their original sites. Natural colloids, including both organic and inorganic entities, occur naturally in any aquatic environment, with sizes in the nanometer range. Natural colloids in a water system play an important, often key, role in binding and transporting compounds. When assessing and evaluating metals in natural waters, their sources, mobility, fate and distribution patterns in the system are the major concerns from the point of view of assessing environmental contamination and pollution during resource development. There are a few ways to quantify colloids and, accordingly, to study how metals are distributed across colloids of different sizes. Current research results show that the presence of colloids can enhance the transport of some heavy metals in water, while heavy metals may also influence the transport of colloids when cations alter the colloids and/or the ionic strength of the water system changes. Therefore, studies of the relationship between different sizes of colloids and different metals in a water system are needed, as natural colloids in water systems are complex mixtures of organic, inorganic and biological materials. Their stability can be sensitive to changes in their shapes, phases, hardness and functionalities due to coagulation, deposition, and chemical, physical and biological reactions. Because the adsorption of metal contaminants on colloid surfaces is closely related to colloid properties, it is desirable to fractionate water samples as soon as possible after a sample is taken in the natural environment, in order to avoid changes to the samples during transportation and storage. For this reason, this study carried out groundwater sample processing in the field, using Prep/Scale tangential flow filtration systems with 3-level cartridges (1 kDa, 10 kDa and 100 kDa). Groundwater samples from seven sites at Fort McMurray, Alberta, Canada, were fractionated during the 2015 field sampling season. All samples were processed within 3 hours of being taken. Preliminary results show that although the distribution pattern of metals on colloids may vary between samples taken from different sites, some elements tend to associate with larger colloids (such as Fe and Re), some with finer colloids (such as Sb and Zn), while some occur mainly in the dissolved form (such as Mo and Be). This information is useful for evaluating and projecting the fate and mobility of different metals in groundwaters and, possibly, in environmental water systems more broadly.
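To illustrate how the cascade of cartridge permeates translates into a size-fraction distribution, the following sketch apportions a metal's total concentration across fractions by difference; all concentration values are hypothetical, for illustration only.

```python
# Hypothetical example of apportioning a metal across tangential-flow
# filtration size fractions; concentrations (ug/L) are illustrative only.
# c_total: unfiltered sample; c_100k, c_10k, c_1k: permeates of the
# 100 kDa, 10 kDa and 1 kDa cartridges, respectively.
c_total, c_100k, c_10k, c_1k = 12.0, 9.5, 6.0, 4.2

# Each fraction is obtained by difference between successive permeates.
fractions = {
    "> 100 kDa colloids":    c_total - c_100k,
    "10-100 kDa colloids":   c_100k - c_10k,
    "1-10 kDa colloids":     c_10k - c_1k,
    "< 1 kDa ('dissolved')": c_1k,
}
for label, conc in fractions.items():
    print(f"{label}: {conc:.1f} ug/L ({conc / c_total:.0%} of total)")
```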

Keywords: metal, colloid, groundwater, mobility, fractionation, sorption

Procedia PDF Downloads 367
113 Comparison of Machine Learning-Based Models for Predicting Streptococcus pyogenes Virulence Factors and Antimicrobial Resistance

Authors: Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Diego Santibañez Oyarce, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán

Abstract:

Streptococcus pyogenes is a gram-positive bacterium involved in a wide range of diseases and is a major human-specific bacterial pathogen. In Chile, this year the 'Ministerio de Salud' (Ministry of Health) declared an alert due to the increase in strains throughout the year. This increase can be attributed to a multitude of factors, including antimicrobial resistance (AMR) and virulence factors (VF). Understanding these VF and AMR is crucial for developing effective strategies and improving public health responses. Moreover, experimental identification and characterization of these pathogenic mechanisms are labor-intensive and time-consuming. Therefore, new computational methods are required to provide robust techniques for accelerating this identification. Advances in machine learning (ML) algorithms represent an opportunity to refine and accelerate the discovery of VF associated with Streptococcus pyogenes. In this work, we evaluate the accuracy of various machine learning models in predicting the virulence factors and antimicrobial resistance of Streptococcus pyogenes, with the objective of providing new methods for identifying the pathogenic mechanisms of this organism. Our comprehensive approach involved the download of 32,798 GenBank files of S. pyogenes from the NCBI database, coupled with the incorporation of data from the Virulence Factor Database (VFDB) and the Comprehensive Antibiotic Resistance Database (CARD), which contains AMR gene sequences and resistance profiles. These datasets provided labeled examples of both virulent and non-virulent genes, enabling a robust foundation for feature extraction and model training. We employed preprocessing, characterization and feature extraction techniques on the primary nucleotide/amino acid sequences and selected the optimal features for model training. The feature set was constructed using sequence-based descriptors (e.g., k-mers and one-hot encoding) and functional annotations based on database predictions. The ML models compared include logistic regression, decision trees, support vector machines and neural networks, among others. The results of this work show some differences in accuracy between the algorithms; these differences allow us to identify aspects that represent unique opportunities for a more precise and efficient characterization and identification of VF and AMR. This comparative analysis underscores the value of integrating machine learning techniques in predicting S. pyogenes virulence and AMR, offering potential pathways for more effective diagnostic and therapeutic strategies. Future work will focus on incorporating additional omics data, such as transcriptomics, and exploring advanced deep learning models to further enhance predictive capabilities.
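A minimal sketch of the k-mer feature extraction and model comparison described above, using scikit-learn; the sequences and labels are randomly generated stand-ins for the VFDB/CARD-derived dataset, and the model settings are illustrative defaults.

```python
import numpy as np
from itertools import product
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for labeled gene sequences (1 = virulence/AMR gene,
# 0 = other); real inputs would come from the VFDB/CARD-derived dataset.
rng = np.random.default_rng(0)
seqs = ["".join(rng.choice(list("ACGT"), size=300)) for _ in range(60)]
labels = rng.integers(0, 2, size=60)

K = 3
kmers = ["".join(p) for p in product("ACGT", repeat=K)]
index = {k: i for i, k in enumerate(kmers)}

def kmer_counts(seq):
    """Length-normalized k-mer count descriptor of a nucleotide sequence."""
    v = np.zeros(len(kmers))
    for i in range(len(seq) - K + 1):
        j = index.get(seq[i:i + K])   # k-mers with ambiguous bases would be skipped
        if j is not None:
            v[j] += 1
    return v / max(1, len(seq) - K + 1)

X = np.array([kmer_counts(s) for s in seqs])
models = {"logistic regression": LogisticRegression(max_iter=1000),
          "decision tree": DecisionTreeClassifier(),
          "SVM": SVC()}
for name, model in models.items():
    acc = cross_val_score(model, X, labels, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```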

Keywords: antibiotic resistance, Streptococcus pyogenes, virulence factors, machine learning

Procedia PDF Downloads 39
112 Behavioral Patterns of Adopting Digitalized Services (E-Sport versus Sports Spectating) Using Agent-Based Modeling

Authors: Justyna P. Majewska, Szymon M. Truskolaski

Abstract:

The growing importance of digitalized services in the so-called new economy, including the e-sports industry, has been observed recently. Various demographic and technological changes lead consumers to modify their needs, not regarding the services themselves but the method of their application (attracting customers, forms of payment, new content, etc.). In the case of leisure related to competitive spectating activities, there is a growing need to participate in events whose content is not sports competition but computer game challenges: e-sport. The literature in this area has so far focused on determining the number of e-sport fans with elements of simple statistical description (mainly concerning demographic characteristics such as age, gender and place of residence). Meanwhile, the development of the industry is influenced by a combination of many different, intertwined demographic, personality and psychosocial characteristics of customers, as well as the characteristics of their environment. Therefore, there is a need for a deeper recognition of the determinants of the behavioral patterns underlying customers' selection of digitalized services, which, in the absence of available large data sets, can be achieved by using econometric simulations: multi-agent modeling. The cognitive aim of the study is to reveal the internal and external determinants of customers' behavioral patterns, taking into account various variants of economic development (the pace of digitization and technological development, socio-demographic changes, etc.). In the paper, an agent-based model with heterogeneous agents (characteristics of the customers themselves and of their environment) was developed, which allowed a three-stage development scenario to be identified: i) initial interest, ii) standardization, and iii) full professionalization. The probabilities governing the transition process were estimated using the Method of Simulated Moments. The estimation of the agent-based model parameters and the sensitivity analysis reveal crucial factors that have driven the rising trend in e-sport spectating and, in a wider perspective, the development of digitalized services. Among the psychosocial characteristics of customers, these are the level of familiarity with the rules of games and sports disciplines, active and passive participation history, and the individual perception of challenging activities. Environmental factors include the general reception of games, the number and level of recognition of community builders, and the level of technological development of streaming and community-building platforms. However, the crucial factor underlying the good predictive power of the model is the level of professionalization. In the initial interest phase, the entry barriers for new customers are high. They decrease during the standardization phase and increase again in the full professionalization phase, when new customers perceive the required participation history as inaccessible. In this case, customers are prone to switch to new methods of service application: in the case of e-sport versus sports, to new content and more modern methods of its delivery. In a wider context, the findings in the paper support the idea of a life cycle of services regarding the methods of their application, from 'traditional' to digitalized.
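A minimal agent-based sketch of the adoption dynamics described above, with heterogeneous agents and a three-phase entry barrier schedule. All parameter values and the barrier schedule are illustrative; in the paper, the transition probabilities were estimated via the Method of Simulated Moments rather than fixed by hand.

```python
import numpy as np

# Minimal agent-based sketch of e-sport spectating adoption; parameter values
# are illustrative placeholders, not the paper's estimates.
rng = np.random.default_rng(42)
N, T = 1000, 60                        # number of agents, simulated periods

# Heterogeneous agents: familiarity with game rules and participation history.
familiarity = rng.uniform(0, 1, N)
history = np.zeros(N)                  # accumulated participation history
adopted = np.zeros(N, dtype=bool)

def entry_barrier(t):
    """Three-phase schedule: initial interest -> standardization -> professionalization."""
    if t < 20:
        return 0.8                     # high barrier: initial interest
    if t < 40:
        return 0.4                     # lower barrier: standardization
    return 0.7                         # rising barrier: full professionalization

for t in range(T):
    peer_effect = adopted.mean()       # environmental factor: community size
    propensity = 0.5 * familiarity + 0.3 * history + 0.4 * peer_effect
    noise = rng.normal(0, 0.1, N)      # idiosyncratic shocks
    new = (~adopted) & (propensity - entry_barrier(t) + noise > 0)
    adopted |= new
    history[adopted] = np.minimum(1.0, history[adopted] + 0.05)
    if t % 10 == 0:
        print(f"t={t:2d}  adopters={adopted.mean():.1%}")
```

Under this schedule, adoption stalls in the first phase, cascades during standardization as peer effects build, and slows again once professionalization raises the barrier, mirroring the three-stage scenario described in the abstract.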

Keywords: agent-based modeling, digitalized services, e-sport, spectators' motives

Procedia PDF Downloads 174
111 Psychometric Examination of Atma Jaya's Multiple Intelligence Batteries for University Students

Authors: Angela Oktavia Suryani, Bernadeth Gloria, Edwin Sutamto, Jessica Kristianty, Ni Made Rai Sapitri, Patricia Catherine Agla, Sitti Arlinda Rochiadi

Abstract:

It was found that some blogs and personal websites in Indonesia sell standardized intelligence tests (for example, the Progressive Matrices (PM), Intelligence Structure Test (IST), and Culture Fair Intelligence Test (CFIT)) and other psychological tests, together with the manuals and answer keys, to the public. Individuals can buy these and prepare themselves for selection or recruitment with the real test. This practice drives people to lie to the institution (education or company) and also to themselves. It was also found that those tests are old: some items are no longer relevant to the current context, for example a question about the diameter of a coin that no longer exists. These problems motivated us to develop a new intelligence test battery, namely the Multiple Aptitude Battery (MAB). The battery was built using Thurstone's Primary Mental Abilities theory and is intended to be used with high school students, university students, and job applicants. The battery consists of 9 subtests. In the current study we examine six subtests, namely Reading Comprehension, Verbal Analogies, Numerical Inductive Reasoning, Numerical Deductive Reasoning, Mechanical Ability, and Two-Dimensional Spatial Reasoning, for university students. The study included data from 1,424 students recruited by convenience sampling from eight faculties at Atma Jaya Catholic University of Indonesia. Classical and modern test approaches (Item Response Theory) were applied to identify the difficulty of the items, and confirmatory factor analysis was applied to examine their internal validity. The validity of each subtest was inspected using the convergent-discriminant method, whereas the reliability was examined using the Kuder-Richardson formula. The results showed that the majority of the subtests were of medium difficulty, and only one subtest, Verbal Analogies, was categorized as easy. The items were found to be homogeneous and valid in measuring their constructs; however, at the subtest level, the construct validity examined by the convergent-discriminant method indicated that the subtests were not unidimensional, meaning they measured not only their own constructs but also other constructs. Three of the subtests were able to predict academic performance with a small effect size, namely Reading Comprehension, Numerical Inductive Reasoning, and Two-Dimensional Spatial Reasoning. GPAs at the intermediate level (third semester and above) were considered a factor behind the limited predictive validity. The Kuder-Richardson formula showed that the reliability coefficients for both numerical reasoning subtests and for spatial reasoning were superior, in the range 0.84-0.87, whereas the reliability coefficients for the other three subtests were relatively below the standard for an ability test, in the range 0.65-0.71. It can be concluded that some of the subtests are ready to be used, whereas others still need revision. This study also demonstrated that the convergent-discriminant method is useful for identifying general intelligence in humans.
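For reference, the Kuder-Richardson formula (KR-20) used for the reliability estimates is KR-20 = k/(k-1) × (1 - Σpᵢqᵢ/σ²ₓ), where k is the number of items, pᵢ the proportion answering item i correctly, qᵢ = 1 - pᵢ, and σ²ₓ the variance of total scores. The sketch below computes it for a hypothetical 0/1 item-response matrix, not the study's data.

```python
import numpy as np

def kuder_richardson_20(responses):
    """KR-20 reliability for a 0/1 item-response matrix (rows = examinees).

    Uses the sample variance of total scores; some texts use the
    population variance instead.
    """
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]                         # number of items
    p = responses.mean(axis=0)                     # item difficulties
    q = 1.0 - p
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1.0)) * (1.0 - p @ q / total_var)

# Hypothetical 0/1 scores for 6 examinees on 5 items (illustration only).
scores = [[1, 1, 0, 1, 1],
          [1, 0, 0, 1, 0],
          [1, 1, 1, 1, 1],
          [0, 0, 0, 1, 0],
          [1, 1, 0, 0, 1],
          [1, 1, 1, 1, 0]]
print(f"KR-20 = {kuder_richardson_20(scores):.3f}")
```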

Keywords: intelligence, psychometric examination, multiple aptitude battery, university students

Procedia PDF Downloads 440