Search results for: predictive tracking
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1847


107 Cost Based Analysis of Risk Stratification Tool for Prediction and Management of High Risk Choledocholithiasis Patients

Authors: Shreya Saxena

Abstract:

Background: Choledocholithiasis is a common complication of gallstone disease. Risk scoring systems exist to guide the need for further imaging or endoscopy in managing choledocholithiasis. We completed an audit to review the American Society for Gastrointestinal Endoscopy (ASGE) scoring system for prediction and management of choledocholithiasis against the current practice at a tertiary hospital to assess its utility in resource optimisation. We have now conducted a cost-focused sub-analysis on patients categorized as high-risk for choledocholithiasis according to the guidelines to determine any associated cost benefits. Method: Data collection from our prior audit was used to retrospectively identify thirteen patients considered high-risk for choledocholithiasis. Their ongoing management was mapped against the guidelines. Individual costs for the key investigations were obtained from our hospital financial data. Total costs for the different management pathways identified in clinical practice were calculated and compared against predicted costs associated with the recommendations in the guidelines. We excluded the cost of laparoscopic cholecystectomy and considered a set figure for per-day hospital admission related expenses. Results: Based on our previous audit data, we identified a 77% positive predictive value for the ASGE risk stratification tool to determine patients at high risk of choledocholithiasis. 47% (6/13) had a magnetic resonance cholangiopancreatography (MRCP) prior to endoscopic retrograde cholangiopancreatography (ERCP), whilst 53% (7/13) went straight for ERCP. The average length of stay in the hospital was 7 days, with an additional day and cost of £328.00 (£117 for ERCP) for patients awaiting an MRCP prior to ERCP. Per-day hospital admission was valued at £838.69. When calculating total cost, we assumed all patients had admission bloods and ultrasound done as the gold standard. In doing an MRCP prior to ERCP, there was a 130% increase in cost incurred (£580.04 vs £252.04) per patient. When also considering hospital admission and the average length of stay, this amounted to an additional £1166.69 per patient. We then calculated the exact costs incurred by the department, over a three-month period, for all patients, for key investigations or procedures done in the management of choledocholithiasis. This was compared to an estimated cost derived from the recommended pathways in the ASGE guidelines. Overall, an 81% (£2048.45) saving was associated with following the guidelines compared to clinical practice. Conclusion: MRCP is the most expensive test associated with the diagnosis and management of choledocholithiasis. The ASGE guidelines recommend endoscopy without an MRCP in patients stratified as high-risk for choledocholithiasis. Our audit, which focused on assessing the utility of the ASGE risk scoring system, showed it to be relatively reliable for identifying high-risk patients. Our cost analysis has shown significant cost savings per patient, and when considering the average length of stay, associated with direct endoscopy rather than an additional MRCP. Part of this is also because of the increased average length of stay associated with waiting for an MRCP. The above data support the ASGE guidelines for the management of patients at high risk for choledocholithiasis from a cost perspective. The only caveat is our small data set, which may affect the validity of our average length of hospital stay figures and hence total cost calculations.
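As an illustration of the arithmetic behind the cost comparison above, the following sketch recomputes the per-patient figures from the values quoted in the abstract (ERCP £117, MRCP £328, per-day admission £838.69, direct-to-ERCP investigation cost £252.04); the breakdown of admission bloods and ultrasound within the baseline figure is an assumption.

```python
# Hedged sketch: per-patient cost comparison for the two high-risk
# choledocholithiasis pathways, using the figures quoted in the abstract (GBP).

ERCP = 117.00             # endoscopic retrograde cholangiopancreatography
MRCP = 328.00             # magnetic resonance cholangiopancreatography
PER_DAY_ADMISSION = 838.69
DIRECT_PATHWAY = 252.04   # admission bloods + ultrasound + ERCP (figure from the abstract)

mrcp_pathway = DIRECT_PATHWAY + MRCP         # 580.04 per patient
increase_pct = 100 * MRCP / DIRECT_PATHWAY   # ~130% increase in investigation cost
extra_with_stay = MRCP + PER_DAY_ADMISSION   # extra day waiting for MRCP -> 1166.69

print(f"MRCP-first pathway: £{mrcp_pathway:.2f} vs £{DIRECT_PATHWAY:.2f} "
      f"(+{increase_pct:.0f}%); extra cost incl. one added day: £{extra_with_stay:.2f}")
```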

Keywords: cost-analysis, choledocholithiasis, risk stratification tool, general surgery

Procedia PDF Downloads 94
106 Embodied Neoliberalism and the Mind as Tool to Manage the Body: A Descriptive Study Applied to Young Australian Amateur Athletes

Authors: Alicia Ettlin

Abstract:

Amid the rise of neoliberalism to the leading economic policy model in Western societies in the 1980s, people have started to internalise a neoliberal way of thinking, whereby the human body has become an entity that can and needs to be precisely managed through free yet rational decision-making processes. The neoliberal citizen has consequently become an entrepreneur of the self who is free, independent, rational, productive and responsible for themselves, their health and wellbeing as well as their appearance. The focus on individuals as entrepreneurs who manage their bodies through the rationally thinking mind has, however, become increasingly criticised for viewing the social actor as ‘disembodied’, as a detached social actor whose powerful mind governs over the passive body. On the other hand, the discourse around embodiment seeks to connect rational decision-making processes to the dominant neoliberal discourse, which creates an embodied understanding that the body, just as other areas of people’s lives, can and should be shaped, monitored and managed through cognitive and rational thinking. This perspective offers an understanding of the body regarding its connections with the social environment that reaches beyond the debates around mind-body binary thinking. Hence, following this argument, body management should be thought of as neither solely guided by embodied discourses nor merely falling into a mind-body dualism, but rather, simultaneously and inseparably, as both at once. The descriptive, qualitative analysis of semi-structured in-depth interviews conducted with young Australian amateur athletes between the ages of 18 and 24 has shown that most participants are interested in measuring and managing their body to create self-knowledge and self-improvement. The participants thereby connected self-improvement to weight loss, muscle gain or simply staying fit and healthy. Self-knowledge refers to body measurements including weight, BMI or body fat percentage. Self-management and self-knowledge, which rely on one another for making rational and well-thought-out decisions, are both characteristic values of the neoliberal doctrine. Many participants also connected this neoliberal way of thinking about and looking after the body to rewarding themselves for their discipline, hard work or achievement of specific body management goals (e.g. eating chocolate for reaching the daily step count goal). A few participants, however, have shown resistance against these neoliberal values, and in particular, against the precise monitoring and management of the body with the help of self-tracking devices. Ultimately, however, it seems that most participants have internalised the dominant discourses around self-responsibility, and by association, a sense of duty to discipline their body in normative ways. Even those who have indicated their resistance against body work and body management practices that follow neoliberal thinking and measurement systems are aware of and have internalised the concept of the rational operating mind that needs or should decide how to look after the body in terms of health but also appearance ideals. The discussion around the collected data thereby shows that embodiment and the mind/body dualism constitute two connected, rather than separate or opposing, concepts.

Keywords: dualism, embodiment, mind, neoliberalism

Procedia PDF Downloads 160
105 Forecasting Thermal Energy Demand in District Heating and Cooling Systems Using Long Short-Term Memory Neural Networks

Authors: Kostas Kouvaris, Anastasia Eleftheriou, Georgios A. Sarantitis, Apostolos Chondronasios

Abstract:

To achieve the objective of almost zero carbon energy solutions by 2050, the EU needs to accelerate the development of integrated, highly efficient and environmentally friendly solutions. In this direction, district heating and cooling (DHC) emerges as a viable and more efficient alternative to conventional, decentralized heating and cooling systems, enabling a combination of more efficient renewable and competitive energy supplies. In this paper, we develop a forecasting tool for near real-time local weather and thermal energy demand predictions for an entire DHC network. In this fashion, we are able to extend the functionality and improve the energy efficiency of the DHC network by predicting and adjusting the heat load that is distributed from the heat generation plant to the connected buildings by the heat pipe network. Two case studies are considered: one for Vransko, Slovenia, and one for Montpellier, France. The data consist of i) local weather data, such as humidity, temperature, and precipitation, ii) weather forecast data, such as the outdoor temperature, and iii) DHC operational parameters, such as the mass flow rate and the supply and return temperatures. The external temperature is found to be the most important energy-related variable for space conditioning, and thus it is used as an external parameter for the energy demand models. For the development of the forecasting tool, we use state-of-the-art deep neural networks and, more specifically, recurrent networks with long short-term memory cells, which are able to capture complex non-linear relations among temporal variables. Firstly, we develop models to forecast outdoor temperatures for the next 24 hours using local weather data for each case study. Subsequently, we develop models to forecast thermal demand for the same period, taking into consideration past energy demand values as well as the predicted temperature values from the weather forecasting models. The contributions to the scientific and industrial community are three-fold, and the empirical results are highly encouraging. First, we are able to predict future thermal demand levels for the two locations under consideration with minimal errors. Second, we examine the impact of the outdoor temperature on the predictive ability of the models and how the accuracy of the energy demand forecasts decreases with the forecast horizon. Third, we extend the relevant literature with a new dataset of thermal demand and examine the performance and applicability of machine learning techniques to solve real-world problems. Overall, the solution proposed in this paper is in accordance with EU targets, providing an automated smart energy management system, decreasing human errors and reducing excessive energy production.
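A minimal sketch of the kind of recurrent model described above, assuming a Keras backend and an input window of past demand plus outdoor temperature; the window length, layer sizes and training settings are illustrative, not those used by the authors.

```python
# Hedged sketch: LSTM forecasting of thermal demand over a 24 h horizon from
# past demand and outdoor temperature (all hyperparameters are illustrative).
import numpy as np
import tensorflow as tf

def make_windows(demand, temperature, lookback=24, horizon=24):
    """Build (samples, lookback, 2) inputs and (samples, horizon) targets."""
    X, y = [], []
    for t in range(lookback, len(demand) - horizon):
        X.append(np.stack([demand[t - lookback:t], temperature[t - lookback:t]], axis=-1))
        y.append(demand[t:t + horizon])
    return np.array(X), np.array(y)

# synthetic stand-in series; real inputs would come from the DHC network and weather data
demand = np.random.rand(2000).astype("float32")
temperature = np.random.rand(2000).astype("float32")
X, y = make_windows(demand, temperature)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=X.shape[1:]),
    tf.keras.layers.Dense(y.shape[1]),   # one output per hour of the 24 h horizon
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)
```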

Keywords: machine learning, LSTMs, district heating and cooling system, thermal demand

Procedia PDF Downloads 136
104 Privacy Paradox and the Internet of Medical Things

Authors: Isabell Koinig, Sandra Diehl

Abstract:

In recent years, the health-care context has not been left unaffected by technological developments. The Internet of Medical Things (IoMT) has not only led to a collaboration between disease management and advanced care coordination but also to more personalized health care and patient empowerment. With more than 40% of all health technology being IoMT-related by 2020, questions regarding privacy become more prevalent, even more so during COVID-19, when apps allowing for intensive tracking of people’s whereabouts and their personal contacts caused privacy advocates to protest and revolt. There is a widespread tendency that even though users may express concerns and fears about their privacy, they behave in a manner that appears to contradict their statements by disclosing personal data. In the literature, this phenomenon is discussed as a privacy paradox. While there are some studies investigating the privacy paradox in general, there is only scarce research related to the privacy paradox in the health sector and, to the authors’ knowledge, no empirical study investigating young people’s attitudes toward data security when using wearables and health apps. The empirical study presented in this paper tries to reduce this research gap by focusing on the area of digital and mobile health. It sets out to investigate the degree of importance individuals attribute to protecting their privacy and individual privacy protection strategies. Moreover, the question of the degree to which individuals between the ages of 20 and 30 years are willing to grant commercial parties access to their private data in order to use digital health services and apps is put to the test. To answer this research question, results from 6 focus groups with 40 participants will be presented. The focus was placed on this age segment, which has grown up in a digitally immersed environment. Moreover, it is particularly the young generation who is not only interested in health and fitness but also already uses health-supporting apps or gadgets. Approximately one-third of the study participants were students. Subjects were recruited in August and September 2019 by two trained researchers via email and were offered an incentive for their participation. Overall, results indicate that the young generation is well informed about the growing data collection and is quite critical of it; moreover, they possess knowledge of the potential side effects associated with this data collection. Most respondents indicated that they handle their data cautiously and consider privacy as highly relevant, utilizing a number of protective strategies to ensure the confidentiality of their information. Their willingness to share information in exchange for services was only moderately pronounced, particularly in the health context, since health data was seen as valuable and sensitive. The majority of respondents indicated that they would rather miss out on using digital and mobile health offerings in order to maintain their privacy. While this behavior might be an unintended consequence, it is an important piece of information for app developers and medical providers, who have to find a way to build a user base for their products against the background of rising user privacy concerns.

Keywords: digital health, privacy, privacy paradox, IoMT

Procedia PDF Downloads 132
103 Exploring the Use of Augmented Reality for Laboratory Lectures in Distance Learning

Authors: Michele Gattullo, Vito M. Manghisi, Alessandro Evangelista, Enricoandrea Laviola

Abstract:

In this work, we explored the use of Augmented Reality (AR) to support students in laboratory lectures in Distance Learning (DL), designing an application that proved to be ready for use next semester. AR could help students in the understanding of complex concepts as well as increase their motivation in the learning process. However, despite many prototypes in the literature, it is still little used in schools and universities. This is mainly due to the perceived limited advantages relative to the investment costs, especially regarding the changes needed in teaching modalities. However, with the spread of the epidemiological emergency due to SARS-CoV-2, schools and universities were forced into a very rapid redefinition of consolidated processes towards forms of Distance Learning. Despite its many advantages, DL suffers from the impossibility of carrying out practical activities, which are of crucial importance in STEM (Science, Technology, Engineering and Math) didactics. In this context, the perceived advantages of AR increased considerably, since teachers are more prepared for new teaching modalities and AR allows students to carry out practical activities on their own instead of being physically present in laboratories. In this work, we designed an AR application for the support of engineering students in the understanding of assembly drawings of complex machines. Traditionally, this skill is acquired in the first years of the bachelor's degree in industrial engineering, through laboratory activities where the teacher shows the corresponding components (e.g., bearings, screws, shafts) in a real machine and their representation in the assembly drawing. This research aims to explore the effectiveness of AR in allowing students to acquire this skill on their own without being physically present in the laboratory. In a preliminary phase, we interviewed students to understand the main issues in the learning of this subject. This survey revealed that students had difficulty identifying machine components in an assembly drawing, matching the 2D representation of a component to its real shape, and understanding the functionality of a component within the machine. We developed a mobile application using Unity3D, aiming to solve the mentioned issues. We designed the application in collaboration with the course professors. Natural feature tracking was used to associate the 2D printed assembly drawing with the corresponding 3D virtual model. The application can be displayed on students’ tablets or smartphones. Users interact by selecting a component from a part list on the device. Then, 3D representations of the components appear on the printed drawing, coupled with 3D virtual labels for their location and identification. Users can also watch a 3D animation to learn how components are assembled. Students evaluated the application through a questionnaire based on the System Usability Scale (SUS). The survey was provided to 15 students selected among those who participated in the preliminary interview. The mean SUS score was 83 (SD 12.9) out of a maximum of 100, supporting the adoption of the AR application by teachers in their courses. Another important finding is that almost all the students reported that this application would significantly support comprehension when studying on their own.
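For reference, the SUS score reported above follows the standard scoring rule for the 10-item questionnaire; the sketch below applies that rule to hypothetical responses (the actual student answers are not reproduced here).

```python
# Hedged sketch: standard System Usability Scale (SUS) scoring
# (responses are 1-5 Likert values for the 10 SUS items).
import statistics

def sus_score(responses):
    """Compute the SUS score (0-100) for one respondent's 10 answers."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)   # odd items positive, even items negative
    return total * 2.5

# hypothetical answers from two of the 15 students
answers = [[5, 2, 4, 1, 5, 2, 5, 1, 4, 2],
           [4, 2, 5, 2, 4, 1, 5, 2, 5, 1]]
scores = [sus_score(a) for a in answers]
print("mean:", statistics.mean(scores), "SD:", statistics.stdev(scores))
```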

Keywords: augmented reality, distance learning, STEM didactics, technology in education

Procedia PDF Downloads 124
102 Performance Optimization of Polymer Materials Thanks to Sol-Gel Chemistry for Fuel Cells

Authors: Gondrexon, Gonon, Mendil-Jakani, Mareau

Abstract:

Proton Exchange Membrane Fuel Cells (PEMFCs) seem to be promising devices for converting hydrogen into electricity. A PEMFC is made of a Membrane Electrode Assembly (MEA) composed of a Proton Exchange Membrane (PEM) sandwiched between two catalytic layers. Nowadays, specific performances are targeted in order to ensure the long-term expansion of this technology. Currently used polymers (perfluorinated ones such as Nafion®) are unsuitable (loss of mechanical properties) for the high-temperature range. To overcome this issue, sulfonated polyaromatic polymers appear to be a good alternative since they have very good thermomechanical properties. However, their proton conductivity and chemical stability (oxidative resistance to H2O2 formed during fuel cell (FC) operation) are very low. In our team, we patented an original concept of hybrid membranes able to fulfill the specific requirements for PEMFC. This idea is based on the improvement of a commercial polymer membrane via an easy and processable stabilization using sol-gel (SG) chemistry with judiciously embedded chemical functions. This strategy thus breaks with traditional approaches (design of new copolymers, use of inorganic charges/additives). In 2020, we presented the elaboration and functional properties of a 1st generation of hybrid membranes with promising performances and durability. The latter was made by self-condensing a SG phase with (3-mercaptopropyl)trimethoxysilane (MPTMS) inside a commercial sPEEK host membrane. The successful in-situ condensation of the MPTMS was demonstrated by mass uptake measurements, FTIR spectroscopy (presence of aliphatic C-H bands) and solid-state 29Si NMR (T2 & T3 signals of self-condensation products). The ability of the SG phase to prevent the oxidative degradation of the sPEEK phase (thanks to thiol chemical functions) was then proved with accelerated H2O2 tests and FC operating tests. A 2nd generation based on thiourea-functionalized SG precursors (named HTU & TTU) was made afterwards. By analysing in depth the morphologies of these different hybrids by direct space analysis (AFM/SEM/TEM) and reciprocal space analysis (SANS/SAXS/WAXS), we highlighted that both the SG phase morphology and its localisation within the host have a huge impact on the observed PEM functional properties. This relationship is also dependent on the chemical function embedded. The hybrids obtained have shown very good chemical resistance during aging tests (exposure to H2O2) compared to the commercial sPEEK. But the chemical function used is considered "sacrificial" and cannot react indefinitely with H2O2. Thus, we are now working on a 3rd generation made of both sacrificial/regenerative chemical functions, which are expected to inhibit the chemical aging of sPEEK more efficiently. With this work, we are confident of reaching a predictive understanding of the key parameters governing the final properties.

Keywords: fuel cells, ionomers, membranes, sPEEK, chemical stability

Procedia PDF Downloads 67
101 The Relationship between Body Fat Percent and Metabolic Syndrome Indices in Childhood Morbid Obesity

Authors: Mustafa Metin Donma

Abstract:

Metabolic syndrome (MetS) is characterized by a series of biochemical, physiological and anthropometric indicators and is a life-threatening health problem due to its close association with chronic diseases such as diabetes mellitus, hypertension, cancer and cardiovascular diseases. The syndrome deserves great interest both in adults and children. Central obesity is the indispensable component of MetS. In particular, children who are morbidly obese have a great tendency to develop the disease, because they remain under threat in their future lives. Preventive measures at this stage should be considered. For this, investigators seek an informative scale or index for the purpose. So far, several, but not many, suggestions have been put forward. However, the diagnostic decision is not so easy and may not be complete, particularly in the pediatric population. The aim of the study was to develop a MetS index capable of predicting MetS while children are at the morbid obesity stage. This study was performed on morbidly obese (MO) children, who were divided into two groups. Morbidly obese children who did not meet the MetS criteria comprised the first group (n=44). The second group was composed of children (n=42) with a MetS diagnosis. Parents were informed about and signed the consent forms required for the participation of their children in the study. The approval of the study protocol was obtained from the institutional ethics committee of Tekirdag Namik Kemal University. The Helsinki Declaration was complied with prior to and during the study. Anthropometric measurements including weight, height, and waist, hip, head and neck circumferences, biochemical tests including fasting blood glucose (FBG), insulin (INS), triglycerides (TRG), high density lipoprotein cholesterol (HDL-C) and blood pressure measurements (systolic (SBP) and diastolic (DBP)) were performed. Body fat percentage (BFP) values were determined by TANITA’s Bioelectrical Impedance Analysis technology. Body mass index and MetS indices were calculated. The equations for the MetS index (MetSI) and the advanced Donma MetS index (ADMI) were [(INS/FBG)/(HDL-C/TRG)]*100 and MetSI*[(SBP+DBP/Height)], respectively. Descriptive statistics including median values, comparison-of-means tests and correlation-regression analysis were performed within the scope of data evaluation using the statistical package SPSS. Statistically significant mean differences were determined by a p value smaller than 0.05. Median values for MetSI and ADMI in the MO (MetS-) and MO (MetS+) groups were calculated as (25.9 and 36.5) and (74.0 and 106.1), respectively. Corresponding mean±SD values for BFPs were 35.9±7.1 and 38.2±7.7 in the two groups. Correlation analysis of these two indices with corresponding general BFP values exhibited a significant association with ADMI, and one close to significance with MetSI, in the MO group. No significant correlation was found with either of the indices in the MetS group. In conclusion, the important associations observed with the MetS indices in the MO group were quite meaningful. The presence of these associations in the MO group was important for showing the tendency towards the development of MetS in MO (MetS-) participants. The other index, ADMI, was more helpful for predictive purposes.
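The two index formulas quoted above can be written directly as functions; the sketch below does so, assuming routine clinical units and reading the bracketed ADMI expression as (SBP+DBP) divided by height, which is an interpretation of the grouping rather than a statement from the abstract.

```python
# Hedged sketch of the two indices quoted in the abstract. Units are assumed to
# follow routine practice: INS in uIU/mL, FBG/HDL-C/TRG in mg/dL, SBP/DBP in
# mmHg, height in cm. The grouping (SBP+DBP)/height is our reading of the text.

def mets_index(ins, fbg, hdl_c, trg):
    """MetSI = [(INS/FBG)/(HDL-C/TRG)] * 100"""
    return (ins / fbg) / (hdl_c / trg) * 100

def advanced_donma_index(ins, fbg, hdl_c, trg, sbp, dbp, height):
    """ADMI = MetSI * [(SBP+DBP)/Height] (grouping assumed)"""
    return mets_index(ins, fbg, hdl_c, trg) * (sbp + dbp) / height

# hypothetical values for a single morbidly obese child
print(advanced_donma_index(ins=20, fbg=90, hdl_c=45, trg=130, sbp=115, dbp=70, height=150))
```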

Keywords: body fat percentage, child, index, metabolic syndrome, obesity

Procedia PDF Downloads 56
100 Clinical and Analytical Performance of Glial Fibrillary Acidic Protein and Ubiquitin C-Terminal Hydrolase L1 Biomarkers for Traumatic Brain Injury in the Alinity Traumatic Brain Injury Test

Authors: Raj Chandran, Saul Datwyler, Jaime Marino, Daniel West, Karla Grasso, Adam Buss, Hina Syed, Zina Al Sahouri, Jennifer Yen, Krista Caudle, Beth McQuiston

Abstract:

The Alinity i TBI test is Therapeutic Goods Administration (TGA) registered and is a panel of in vitro diagnostic chemiluminescent microparticle immunoassays for the measurement of glial fibrillary acidic protein (GFAP) and ubiquitin C-terminal hydrolase L1 (UCH-L1) in plasma and serum. The performance of the Alinity i TBI test was evaluated in a multi-center pivotal study to demonstrate its capability to assist in determining the need for a CT scan of the head in adult subjects (age 18+) presenting with suspected mild traumatic brain injury (TBI) and a Glasgow Coma Scale score of 13 to 15. TBI has been recognized as an important cause of death and disability and is a growing public health problem. An estimated 69 million people globally experience a TBI annually [1]. Blood-based biomarkers such as GFAP and UCH-L1 have shown utility in predicting acute traumatic intracranial injury on head CT scans after TBI. A pivotal study using prospectively collected archived (frozen) plasma specimens was conducted to establish the clinical performance of the TBI test on the Alinity i system. The specimens were originally collected in a prospective, multi-center clinical study. Testing of the specimens was performed at three clinical sites in the United States. Performance characteristics such as detection limits, imprecision, linearity, measuring interval, expected values, and interferences were established following Clinical and Laboratory Standards Institute (CLSI) guidance. Of the 1899 mild TBI subjects, 120 had positive head CT scan results; 116 of the 120 specimens had a positive TBI interpretation (Sensitivity 96.7%; 95% CI: 91.7%, 98.7%). Of the 1779 subjects with negative CT scan results, 713 had a negative TBI interpretation (Specificity 40.1%; 95% CI: 37.8%, 42.4%). The negative predictive value (NPV) of the test was 99.4% (713/717, 95% CI: 98.6%, 99.8%). The analytical measuring interval (AMI) extends from the lower limit of quantitation (LoQ) to the upper LoQ and is determined by the range that demonstrates acceptable performance for linearity, imprecision, and bias. The AMI is 6.1 to 42,000 pg/mL for GFAP and 26.3 to 25,000 pg/mL for UCH-L1. Overall, within-laboratory imprecision (20-day) ranged from 3.7 to 5.9% CV for GFAP and 3.0 to 6.0% CV for UCH-L1, when including lot and instrument variances. The Alinity i TBI clinical performance results demonstrated high sensitivity and high NPV, supporting the utility of the test to assist in determining the need for a head CT scan in subjects presenting to the emergency department with suspected mild TBI. The GFAP and UCH-L1 assays show robust analytical performance across a broad concentration range of GFAP and UCH-L1 and may serve as a valuable tool to help evaluate TBI patients across the spectrum of mild to severe injury.
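To make the derivation of the quoted performance figures explicit, the short sketch below recomputes sensitivity, specificity and NPV from the subject counts stated in the abstract.

```python
# Hedged sketch: recomputing the quoted clinical performance figures from the
# counts given in the abstract (CT-positive and CT-negative mild-TBI subjects).

tp, fn = 116, 120 - 116       # CT-positive subjects with positive / negative TBI result
tn, fp = 713, 1779 - 713      # CT-negative subjects with negative / positive TBI result

sensitivity = tp / (tp + fn)  # 116/120  -> ~96.7%
specificity = tn / (tn + fp)  # 713/1779 -> ~40.1%
npv = tn / (tn + fn)          # 713/717  -> ~99.4%

print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, NPV {npv:.1%}")
```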

Keywords: biomarker, diagnostic, neurology, TBI

Procedia PDF Downloads 63
99 Detection of High Fructose Corn Syrup in Honey by Near Infrared Spectroscopy and Chemometrics

Authors: Mercedes Bertotto, Marcelo Bello, Hector Goicoechea, Veronica Fusca

Abstract:

The National Service of Agri-Food Health and Quality (SENASA) controls honey to detect contamination by synthetic or natural chemical substances and establishes and controls the traceability of the product. The utility of near-infrared spectroscopy for the detection of adulteration of honey with high fructose corn syrup (HFCS) was investigated. First of all, a mixture of different authentic artisanal Argentinian honeys was prepared to cover as much heterogeneity as possible. Then, mixtures were prepared by adding different concentrations of HFCS to samples of the honey pool. In total, 237 samples were used: 108 of them were authentic honey and 129 corresponded to honey adulterated with HFCS at between 1 and 10%. They were stored unrefrigerated from the time of production until scanning and were not filtered after receipt in the laboratory. Immediately prior to spectral collection, honey was incubated at 40°C overnight to dissolve any crystalline material, manually stirred to achieve homogeneity and adjusted to a standard solids content (70° Brix) with distilled water. Adulterant solutions were also adjusted to 70° Brix. Samples were measured by NIR spectroscopy in the range of 650 to 7000 cm⁻¹. The technique of specular reflectance was used, with a lens aperture range of 150 mm. Pretreatment of the spectra was performed by Standard Normal Variate (SNV). The ant colony optimization genetic algorithm sample selection (ACOGASS) graphical interface, implemented in MATLAB version 5.3, was used to select the variables with the greatest discriminating power. The data set was divided into a validation set and a calibration set using the Kennard-Stone (KS) algorithm. A combined method of Potential Functions (PF) and Partial Least Squares Discriminant Analysis (PLS-DA) was chosen. Different estimators of the predictive capacity of the model, obtained using a decreasing number of groups (which implies more demanding validation conditions), were compared. The optimal number of latent variables was selected as the number associated with the minimum error and the smallest number of unassigned samples. Once the optimal number of latent variables was defined, we proceeded to apply the model to the training samples. With the model calibrated on the training samples, we proceeded to study the validation samples. The calibrated model that combines the potential function method and PLS-DA can be considered reliable and stable since its performance on future samples is expected to be comparable to that achieved for the training samples. By use of PF and PLS-DA classification, authentic honey and honey adulterated with HFCS could be identified with a correct classification rate of 97.9%. The results showed that NIR in combination with the PF and PLS-DA methods can be a simple, fast and low-cost technique for the detection of HFCS in honey with high sensitivity and power of discrimination.
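A minimal sketch of the SNV pretreatment and a binary PLS-DA classifier of the kind described above, built on sklearn's PLSRegression; the number of latent variables is illustrative, and the Kennard-Stone split and potential-function step used by the authors are omitted.

```python
# Hedged sketch: SNV pretreatment followed by a binary PLS-DA classifier
# (synthetic spectra stand in for the NIR measurements).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

def snv(spectra):
    """Standard Normal Variate: centre and scale each spectrum individually."""
    return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

# synthetic stand-in spectra: rows are samples, columns are wavenumbers
X = np.random.rand(237, 1200)
y = np.array([0] * 108 + [1] * 129)   # 0 = authentic honey, 1 = adulterated with HFCS

X_train, X_test, y_train, y_test = train_test_split(snv(X), y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=8).fit(X_train, y_train)
y_pred = (pls.predict(X_test).ravel() > 0.5).astype(int)   # threshold the PLS response
print("correct classification rate:", (y_pred == y_test).mean())
```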

Keywords: adulteration, multivariate analysis, potential functions, regression

Procedia PDF Downloads 119
98 Evaluation of Correct Usage, Comfort and Fit of Personal Protective Equipment in Construction Work

Authors: Anna-Lisa Osvalder, Jonas Borell

Abstract:

There are several reasons behind the use, non-use, or inadequate use of personal protective equipment (PPE) in the construction industry. Comfort and accurate size support proper use, while discomfort, misfit, and difficulties in understanding how the PPE should be handled inhibit correct usage. The need for several items of protective equipment simultaneously might also create problems. The purpose of this study was to analyse the correct usage, comfort, and fit of different types of PPE used for construction work. Correct usage was analysed as guessability, i.e., human perceptions of how to don, adjust, use, and doff the equipment, and whether it is used as intended. The PPE items tested, individually or in combination, were a helmet, ear protectors, goggles, respiratory masks, gloves, protective clothing, and safety harnesses. First, an analytical evaluation was performed with ECW (enhanced cognitive walkthrough) and PUEA (predictive use error analysis) to search for usability problems and use errors during handling and use. Then usability tests were conducted to evaluate guessability, comfort, and fit with 10 test subjects of different heights and body constitutions. The tests included observations during donning, five different outdoor work tasks, and doffing. The think-aloud method, short interviews, and subjective estimations were used. The analytical evaluation showed that some usability problems and use errors arise during donning and doffing, but with minor severity, mostly causing discomfort. A few use errors and usability problems arose for the safety harness, especially for novices, some of which could lead to a high risk of severe incidents. The usability tests showed that discomfort arose for all test subjects when using a combination of PPE, increasing over time. For instance, goggles, together with the face mask, caused pressure, chafing at the nose, and heat rash on the face. This combination also limited the field of vision. The helmet, in combination with the goggles and ear protectors, did not fit well and caused uncomfortable pressure at the temples. No major problems were found with the individual fit of the PPE. The ear protectors, goggles, and face masks could be adjusted for different head sizes. The guessability for how to don and wear the combination of PPE was moderate, but it took some time to adjust the items for a good fit. The guessability was poor for the safety harness; few clues in the design showed how it should be donned, adjusted, or worn on the skeletal bones. Discomfort occurred when the straps were tightened too much. Not all straps could be adjusted to every body constitution, leading to non-optimal safety. To conclude, if several types of PPE are used together, discomfort leading to pain is likely to occur over time, which can lead to misuse, non-use, or reduced performance. If people who are not regular users are to wear a safety harness correctly, the design needs to be improved for easier interpretation, correct positioning of the straps, and increased possibilities for individual adjustment. The results from this study can be a base for re-design ideas for PPE, especially when items are to be used in combination.

Keywords: construction work, PPE, personal protective equipment, misuse, guessability, usability

Procedia PDF Downloads 83
97 Functionalization of Sanitary Pads with Probiotic Paste

Authors: O. Sauperl, L. Fras Zemljic

Abstract:

The textile industry is gaining increasing importance in the field of medical materials. Therefore, the presented research is focused on textile materials for external (out-of-body) use. Such materials could be various hygienic textile products (diapers, tampons, sanitary napkins, incontinence products, etc.), protective textiles and various hospital linens (surgical covers, masks, gowns, cloths, bed linens, etc.), wound pillows, bandages, orthopedic socks, etc. The function of tampons and sanitary napkins is not only to provide protection during the menstrual cycle; they can also take care of physiological or pathological vaginal discharge. In general, women's intimate areas are protected against infection by the low pH value of the vaginal flora. High acidity inhibits the development of harmful microorganisms, as they reproduce poorly in an acidic environment. The normal vaginal flora in healthy women is highly colonized by lactobacilli. The lactic acid produced by these organisms maintains the constant acidity of the vagina. If this natural protective balance breaks down, infections can occur. On the market, probiotic tampons exist as a medical product supplying the vagina with beneficial probiotic lactobacilli. However, many users have concerns about the use of tampons due to possible drying out of the vagina as well as possible toxic shock syndrome, which is the reason they mainly use sanitary napkins during the menstrual cycle. Functionalization of sanitary napkins with probiotics is, therefore, interesting with regard to maintaining a healthy vaginal flora and offering users the added value of health- and environmentally-friendly products. For this reason, the presented research is oriented towards functionalization of sanitary napkins with a probiotic paste in order to activate the lactic acid bacteria present in the core of the functionalized sanitary napkin at the time of contact with the menstrual fluid. In this way, lactobacilli could penetrate into the vagina and, by maintaining a healthy vaginal flora, reduce the risk of vaginal disorders. In regard to the targeted research problem, the influence of the probiotic paste applied onto cotton hygienic napkins on selected properties was studied. The aim of the research was to determine whether sanitary napkins with the applied probiotic paste may assure a suitable vaginal pH to maintain a healthy vaginal flora during the use of this product. Together with this, sorption properties of the probiotic-functionalized sanitary napkins were evaluated and compared to untreated ones. The research itself was carried out on the basis of tracking and controlling the input parameters currently defined by the Slovenian producer (Tosama d.o.o.) as the most important. Successful functionalization of sanitary pads with the probiotic paste was confirmed by ATR-FTIR spectroscopy. Results of the methods used within the presented research show that the absorption of the pads treated with probiotic paste deteriorates compared to non-treated ones. The coating shows 6-month stability. Functionalization of sanitary pads with probiotic paste is believed to have commercial potential for lowering the probability of infection during the menstrual cycle.

Keywords: functionalization, probiotic paste, sanitary pads, textile materials

Procedia PDF Downloads 186
96 Automated Prediction of HIV-associated Cervical Cancer Patients Using Data Mining Techniques for Survival Analysis

Authors: O. J. Akinsola, Yinan Zheng, Rose Anorlu, F. T. Ogunsola, Lifang Hou, Robert Leo-Murphy

Abstract:

Cervical Cancer (CC) is the 2nd most common cancer among women living in low and middle-income countries, with no associated symptoms during its formative stages. With advancing and innovative medical research, numerous preventive measures are being utilized, but the incidence of cervical cancer cannot be curbed by the application of screening tests alone. The mortality associated with invasive cervical cancer can be nipped in the bud through the important role of early-stage detection. This study selected an array of top feature selection techniques aimed at developing a model that could validly identify the risk factors of cervical cancer. A retrospective clinic-based cohort study was conducted on 178 HIV-associated cervical cancer patients at Lagos University Teaching Hospital, Nigeria (U54 data repository) in April 2022. The outcome measure was the automated prediction of the HIV-associated cervical cancer cases, while the predictor variables included demographic information, reproductive history, birth control, sexual history, and cervical cancer screening history for invasive cervical cancer. The proposed technique was implemented with R and Python to produce the model, utilizing classification algorithms for the detection and diagnosis of cervical cancer disease. Four machine learning classification algorithms were used: Logistic Regression (LR), Decision Tree (DT), Random Forest (RF), and K-Nearest Neighbor (KNN). The dataset was split into training and testing sets in an 80:20 ratio. The numerical features were standardized, and hyperparameter tuning was carried out while training and testing the models. Relevant features for the detection and diagnosis of cervical cancer were selected from the characteristics in the dataset using the contributions of various selection methods, for the classification of cervical cancer into healthy or diseased status. The mean age of patients was 49.7±12.1 years, mean age at pregnancy was 23.3±5.5 years, mean age at first sexual experience was 19.4±3.2 years, while the mean BMI was 27.1±5.6 kg/m². A larger percentage of the patients were married (62.9%), while most of them had at least two sexual partners (72.5%). Age of patients (OR=1.065, p<0.001**), marital status (OR=0.375, p=0.011**), number of pregnancy live-births (OR=1.317, p=0.007**), and use of birth control pills (OR=0.291, p=0.015**) were found to be significantly associated with HIV-associated cervical cancer. With the top 10 features (variables) considered in the analysis, RF gave the best overall model performance, with an accuracy of 72.0%, precision of 84.6%, recall of 84.6% and F1-score of 74.0%, while LR had an accuracy of 74.0%, precision of 70.0%, recall of 70.0% and F1-score of 70.0%. The RF model identified 10 features predictive of developing cervical cancer. The age of patients was considered the most important risk factor, followed by the number of pregnancy live-births, marital status, and use of birth control pills. The study shows that data mining techniques could be used to identify women living with HIV who are at high risk of developing cervical cancer in Nigeria and other sub-Saharan African countries.
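A minimal sketch of the pipeline described above (80:20 split, standardization, hyperparameter tuning for RF and LR); the feature matrix is a placeholder, as the actual variables come from the U54 repository.

```python
# Hedged sketch: 80:20 split, standardization, and hyperparameter tuning for
# Random Forest and Logistic Regression on a placeholder feature matrix.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.random.rand(178, 10)              # placeholder features (age, parity, ...)
y = np.random.randint(0, 2, size=178)    # placeholder healthy/diseased labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "LR": (make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
           {"logisticregression__C": [0.1, 1, 10]}),
    "RF": (make_pipeline(StandardScaler(), RandomForestClassifier(random_state=42)),
           {"randomforestclassifier__n_estimators": [100, 300]}),
}
for name, (pipe, grid) in models.items():
    search = GridSearchCV(pipe, grid, cv=5).fit(X_tr, y_tr)
    print(name)
    print(classification_report(y_te, search.predict(X_te)))
```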

Keywords: associated cervical cancer, data mining, random forest, logistic regression

Procedia PDF Downloads 79
95 Measuring Enterprise Growth: Pitfalls and Implications

Authors: N. Šarlija, S. Pfeifer, M. Jeger, A. Bilandžić

Abstract:

Enterprise growth is generally considered a key driver of competitiveness, employment, economic development and social inclusion. As such, it is perceived to be a highly desirable outcome of entrepreneurship for scholars and decision makers. The huge academic debate has resulted in a multitude of theoretical frameworks focused on explaining growth stages, determinants and future prospects. It has been widely accepted that enterprise growth is most likely nonlinear, temporal and related to a variety of factors which reflect the individual, firm, organizational, industry or environmental determinants of growth. However, factors that affect growth are not easily captured, instruments to measure those factors are often arbitrary, and causality between variables and growth is elusive, indicating that growth is not easily modeled. Furthermore, in line with the heterogeneous nature of the growth phenomenon, there is a vast number of measurement constructs assessing growth which are used interchangeably. Differences among various growth measures, at the conceptual as well as the operationalization level, can hinder theory development, which emphasizes the need for more empirically robust studies. In line with these highlights, the main purpose of this paper is threefold. Firstly, to compare the structure and performance of three growth prediction models based on the main growth measures: revenue, employment and assets growth. Secondly, to explore the prospects of financial indicators, set as exact, visible, standardized and accessible variables, to serve as determinants of enterprise growth. Finally, to contribute to the understanding of the implications for research results and recommendations for growth caused by different growth measures. The models include a range of financial indicators as lag determinants of the enterprises’ performances during the 2008-2013 period, extracted from the national register of the financial statements of SMEs in Croatia. The design and testing stage of the modeling used logistic regression procedures. Findings confirm that growth prediction models based on different measures of growth have different sets of predictors. Moreover, the relationship between particular predictors and a growth measure is inconsistent; namely, the same predictor that is positively related to one growth measure may exert a negative effect on a different growth measure. Overall, financial indicators alone can serve as a good proxy of growth and yield adequate predictive power of the models. The paper sheds light on both the methodology and the conceptual framework of enterprise growth by using a range of variables which serve as a proxy for the multitude of internal and external determinants, but are, unlike them, accessible, available, exact and free of perceptual nuances in building up the model. Selection of the growth measure seems to have a significant impact on the implications and recommendations related to growth. Furthermore, the paper points out potential pitfalls of measuring and predicting growth. Overall, the results and the implications of the study are relevant for advancing academic debates on growth-related methodology, and can contribute to evidence-based decisions of policy makers.
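To illustrate the kind of comparison described above, the sketch below fits one logistic model per growth measure on the same lagged financial indicators and tabulates the coefficients, making any sign inconsistency across measures visible; the indicator names and data are placeholders, not the Croatian SME register data.

```python
# Hedged sketch: one logistic growth model per growth measure (revenue,
# employment, assets) on identical lagged financial indicators, with a
# side-by-side comparison of coefficient signs. Data are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
indicators = pd.DataFrame(rng.normal(size=(500, 3)),
                          columns=["liquidity", "leverage", "roa"])   # lagged indicators
growth = {m: rng.integers(0, 2, size=500) for m in ["revenue", "employment", "assets"]}

X = sm.add_constant(indicators)
coefs = {m: sm.Logit(y, X).fit(disp=0).params for m, y in growth.items()}
print(pd.DataFrame(coefs).round(3))   # the same predictor may switch sign across measures
```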

Keywords: growth measurement constructs, logistic regression, prediction of growth potential, small and medium-sized enterprises

Procedia PDF Downloads 247
94 Online Monitoring and Control of Continuous Mechanosynthesis by UV-Vis Spectrophotometry

Authors: Darren A. Whitaker, Dan Palmer, Jens Wesholowski, James Flaherty, John Mack, Ahmad B. Albadarin, Gavin Walker

Abstract:

Traditional mechanosynthesis has been performed by either ball milling or manual grinding. However, neither of these techniques allows the easy application of process control. The temperature may change unpredictably due to friction in the process. Hence the amount of energy transferred to the reactants is intrinsically non-uniform. Recently, it has been shown that the use of Twin-Screw Extrusion (TSE) can overcome these limitations. Additionally, TSE provides a platform for continuous synthesis or manufacturing as it is an open-ended process, with feedstocks at one end and product at the other. Several materials including metal-organic frameworks (MOFs), co-crystals and small organic molecules have been produced mechanochemically using TSE. The described advantages of TSE are offset by drawbacks such as increased process complexity (a large number of process parameters) and variation in feedstock flow impacting on product quality. To handle the above-mentioned drawbacks, this study utilizes UV-Vis spectrophotometry (InSpectroX, ColVisTec) as an online tool to gain real-time information about the quality of the product. Additionally, this is combined with real-time process information in an Advanced Process Control system (PharmaMV, Perceptive Engineering) allowing full supervision and control of the TSE process. Further, by characterizing the dynamic behavior of the TSE, a model predictive controller (MPC) can be employed to ensure the process remains under control when perturbed by external disturbances. Two reactions were studied: a Knoevenagel condensation of barbituric acid and vanillin, and the direct amidation of hydroquinone by ammonium acetate to form N-acetyl-para-aminophenol (APAP), commonly known as paracetamol. Both reactions could be carried out continuously using TSE; nuclear magnetic resonance (NMR) spectroscopy was used to confirm the percentage conversion of starting materials to product. This information was used to construct partial least squares (PLS) calibration models within the PharmaMV development system, which relate the percentage conversion to product to the acquired UV-Vis spectrum. Once this was complete, the model was deployed within the PharmaMV Real-Time System to carry out automated optimization experiments to maximize the percentage conversion based on a set of process parameters in a design of experiments (DoE) style methodology. With the optimum set of process parameters established, a series of pseudo-random binary sequence (PRBS) process response tests around the optimum was conducted. The resultant dataset was used to build a statistical model and associated MPC. The controller maximizes product quality whilst ensuring the process remains at the optimum even as disturbances such as raw material variability are introduced into the system. To summarize, a combination of online spectral monitoring and advanced process control was used to develop a robust system for optimization and control of two TSE-based mechanosynthetic processes.
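A minimal sketch of the PLS calibration step described above, relating recorded UV-Vis spectra to percentage conversion confirmed offline by NMR; the number of latent variables and the synthetic spectra are placeholders and do not reproduce the PharmaMV calibration itself.

```python
# Hedged sketch: PLS calibration relating UV-Vis spectra to % conversion
# (latent-variable count and data are illustrative placeholders).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

spectra = np.random.rand(60, 256)        # one row per in-line UV-Vis spectrum
conversion = np.random.rand(60) * 100    # % conversion from the NMR reference method

pls = PLSRegression(n_components=4)
print("cross-validated R^2:", cross_val_score(pls, spectra, conversion, cv=5).mean())

pls.fit(spectra, conversion)
predicted = pls.predict(spectra[:1]).ravel()   # real-time estimate for a new spectrum
print("predicted conversion for a new spectrum:", predicted[0])
```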

Keywords: continuous synthesis, pharmaceutical, spectroscopy, advanced process control

Procedia PDF Downloads 168
93 The Influence of Mechanical and Physicochemical Characteristics of Perfume Microcapsules on Their Rupture Behaviour and How This Relates to Performance in Consumer Products

Authors: Andrew Gray, Zhibing Zhang

Abstract:

The ability of consumer products to deliver a sustained perfume response can be a key driver for a variety of applications. Many compounds in perfume oils are highly volatile, meaning they readily evaporate once the product is applied, and the longevity of the scent is poor. Perfume capsules have been introduced as a means of abating this evaporation once the product has been delivered. The impermeable capsules are intended to be stable within the formulation and to remain intact during delivery to the desired substrate, only rupturing to release the core perfume oil when mechanical force is applied by the consumer. This opens up the possibility of obtaining an olfactive response hours, weeks or even months after delivery, depending on the nature of the desired application. Tailoring the properties of the polymeric capsules to better address the needs of the application is not a trivial challenge, and currently the design of capsules is largely done by trial and error. The aim of this work is to provide more predictive methods for capsule design depending on the consumer application. This means refining formulations such that they rupture at the right time for the specific consumer application, not too early, not too late. Finding the right balance between these extremes is essential if a benefit is sought with respect to neat addition of perfume to formulations. It is important to understand the forces that influence capsule rupture, first by quantifying the magnitude of these different forces, and then by assessing bulk rupture in real-world applications to understand how capsules actually respond. Samples were provided by an industrial partner, and the mechanical properties of individual capsules within the samples were characterized via a micromanipulation technique developed by Professor Zhang at the University of Birmingham. The capsules were synthesized so as to change one particular physicochemical property at a time, such as the core:wall material ratio or the average capsule size. Analysis of shell thickness via Transmission Electron Microscopy, size distribution via the use of a Mastersizer, as well as a variety of other techniques confirmed that only one particular physicochemical property was altered for each sample. The mechanical analysis was subsequently undertaken, showing the effect that changing certain capsule properties had on the response under compression. It was, however, important to link this fundamental mechanical response to capsule performance in real-world applications. As such, the capsule samples were introduced to a formulation and exposed to full-scale stresses. GC-MS headspace analysis of the perfume oil released from broken capsules enabled quantification of what the relative strengths of capsules truly mean for product performance. Correlations have been found between the mechanical strength of capsule samples and performance in terms of perfume release in consumer applications. Having a better understanding of the key parameters that drive performance benefits the design of future formulations by offering better guidelines on the parameters that can be adjusted without worrying about the performance effects, and singles out those parameters that are essential in finding the sweet spot for capsule performance.

Keywords: consumer products, mechanical and physicochemical properties, perfume capsules, rupture behaviour

Procedia PDF Downloads 129
92 Predicting Food Waste and Losses Reduction for Fresh Products in Modified Atmosphere Packaging

Authors: Matar Celine, Gaucel Sebastien, Gontard Nathalie, Guilbert Stephane, Guillard Valerie

Abstract:

To increase the very short shelf life of fresh fruits and vegetables, Modified Atmosphere Packaging (MAP) allows an optimal atmosphere composition to be maintained around the product, thus preventing its decay. This technology relies on the modification of the internal packaging atmosphere due to the equilibrium between production/consumption of gases by the respiring product and gas permeation through the packaging material. While, to the best of our knowledge, the benefit of MAP for fresh fruits and vegetables has been widely demonstrated in the literature, its effect on shelf life increase has never been quantified and formalized in a clear and simple manner, making it difficult to anticipate its economic and environmental benefit, notably through the decrease of food losses. Mathematical modelling of mass transfers in the food/packaging system is the basis for a better design and dimensioning of the food packaging system. However, up to now, existing models have not permitted estimation of food quality or of the shelf life gain reached by using MAP. Shelf life prediction is nevertheless an indispensable prerequisite for quantifying the effect of MAP on the reduction of food losses. The objective of this work is to propose an innovative approach to predict the shelf life of MAP food products and then to link it to a reduction of food losses and wastes. For this purpose, a ‘Virtual MAP modeling tool’ was developed by coupling a new predictive deterioration model (based on visual surface prediction of deterioration encompassing colour, texture and spoilage development) with models from the literature for respiration and permeation. A major input of this modelling tool is the maximal percentage of deterioration (MAD), which was assessed from dedicated consumer studies. Strawberries of the variety Charlotte were selected as the model food for their high perishability and high respiration rate (50-100 ml CO₂/h/kg produced at 20°C), making them a good representative of challenging post-harvest storage. A value of 13% was determined as a limit of acceptability for the consumers, permitting the product's shelf life to be defined. The ‘Virtual MAP modeling tool’ was validated in isothermal conditions (5, 10 and 20°C) and in dynamic temperature conditions mimicking commercial post-harvest storage of strawberries. RMSE values were systematically lower than 3% for the O₂, CO₂ and deterioration profiles as a function of time, respectively, confirming the goodness of model fitting. For the investigated temperature profile, a shelf life gain of 0.33 days was obtained in MAP compared to the conventional storage situation (no MAP condition). A shelf life gain of more than 1 day could be obtained for optimized post-harvest conditions, as numerically investigated. Such a shelf life gain permits the anticipation of a significant reduction of food losses at the distribution and consumer steps. This reduction in food losses as a function of shelf life gain has been quantified using a dedicated mathematical equation developed for this purpose.
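To make the respiration/permeation balance concrete, the sketch below integrates a generic headspace mass balance of the kind found in the MAP literature; it is not the authors' Virtual MAP tool, and every parameter value is illustrative.

```python
# Hedged sketch: generic MAP headspace O2 balance with Michaelis-Menten
# respiration and film permeation (all parameter values are illustrative).
import numpy as np
from scipy.integrate import solve_ivp

M = 0.25        # kg of strawberries in the pack
V = 0.5         # L of headspace
A = 0.03        # m^2 of film
PERM_O2 = 2.0   # L O2 / (m^2 h atm) across the film, illustrative
VM, KM = 0.08, 0.02   # L O2 / (kg h) and half-saturation O2 fraction, illustrative
Y_O2_AIR = 0.209      # O2 fraction of the external atmosphere

def d_yo2(t, y):
    respiration = VM * y[0] / (KM + y[0]) * M       # L O2 / h consumed by the fruit
    permeation = PERM_O2 * A * (Y_O2_AIR - y[0])    # L O2 / h entering through the film
    return [(permeation - respiration) / V]

sol = solve_ivp(d_yo2, (0, 72), [Y_O2_AIR], t_eval=np.linspace(0, 72, 73))
print("headspace O2 fraction after 72 h:", round(sol.y[0, -1], 3))
```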

Keywords: food losses and wastes, modified atmosphere packaging, mathematical modeling, shelf life prediction

Procedia PDF Downloads 180
91 The Effects of Adding Vibrotactile Feedback to Upper Limb Performance during Dual-Tasking and Response to Misleading Visual Feedback

Authors: Sigal Portnoy, Jason Friedman, Eitan Raveh

Abstract:

Introduction: Sensory substitution is possible due to the capacity of our brain to adapt to information transmitted by a synthetic receptor via an alternative sensory system. Practical sensory substitution systems are being developed in order to increase the functionality of individuals with sensory loss, e.g. amputees. For upper limb prosthesis users, the loss of tactile feedback compels them to allocate visual attention to their prosthesis. The effect of adding vibrotactile feedback (VTF) to the applied force has been studied; however, its effect on the allocation of visual attention during dual-tasking and the response to misleading visual feedback have not been studied. We hypothesized that VTF would improve performance and reduce visual attention during dual-task assignments in healthy individuals using a robotic hand and improve performance in a standardized functional test, despite the presence of misleading visual feedback. Methods: For the dual-task paradigm, twenty healthy subjects were instructed to toggle two keyboard arrow keys with the left hand to retain a moving virtual car on a road on a screen. During the game, instructions for various activities, e.g. mix the sugar in the glass with a spoon, appeared on the screen. The subject performed these tasks with a robotic hand attached to the right hand. The robotic hand was controlled by the activity of the flexors and extensors of the right wrist, recorded using surface EMG electrodes. Pressure sensors were attached at the tips of the robotic hand and induced VTF using vibrotactile actuators attached to the right arm of the subject. An eye-tracking system tracked the visual attention of the subject during the trials. The trials were repeated twice, with and without the VTF. Additionally, the subjects performed the modified Box and Blocks test, hidden from eyesight, in a motion laboratory. A virtual presentation with misleading visual feedback was shown on a screen so that, twice during the trial, the virtual block fell while the physical block was still held by the subject. Results: This is an ongoing study, whose current results are detailed below. We are continuing these trials with transradial myoelectric prosthesis users. In the healthy group, the VTF did not reduce the visual attention or improve performance during dual-tasking for the transfer-to-target type tasks, e.g. placing the eraser on the shelf. An improvement was observed for other tasks. For example, the average±standard deviation of time to complete the sugar-mixing task was 13.7±17.2s and 19.3±9.1s with and without the VTF, respectively. Also, the number of gaze shifts from the screen to the hand during this task was 15.5±23.7 and 20.0±11.6, with and without the VTF, respectively. The response of the subjects to the misleading visual feedback did not differ between the two conditions, i.e. with and without VTF. Conclusions: Our interim results suggest that the performance of certain activities of daily living may be improved by VTF. The substitution of visual sensory input by tactile feedback might require a long training period so that brain plasticity can occur and allow adaptation to the new condition.

Keywords: prosthetics, rehabilitation, sensory substitution, upper limb amputation

Procedia PDF Downloads 338
90 Comparison of Machine Learning-Based Models for Predicting Streptococcus pyogenes Virulence Factors and Antimicrobial Resistance

Authors: Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Diego Santibañez Oyarce, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán

Abstract:

Streptococcus pyogenes is a gram-positive bacterium involved in a wide range of diseases and is a major human-specific bacterial pathogen. In Chile, this year the 'Ministerio de Salud' declared an alert due to the increase in strains throughout the year. This increase can be attributed to a multitude of factors, including antimicrobial resistance (AMR) and Virulence Factors (VF). Understanding these VF and AMR is crucial for developing effective strategies and improving public health responses. Moreover, experimental identification and characterization of these pathogenic mechanisms are labor-intensive and time-consuming. Therefore, new computational methods are required to provide robust techniques for accelerating this identification. Advances in Machine Learning (ML) algorithms represent an opportunity to refine and accelerate the discovery of VF associated with Streptococcus pyogenes. In this work, we evaluate the accuracy of various machine learning models in predicting the virulence factors and antimicrobial resistance of Streptococcus pyogenes, with the objective of providing new methods for identifying the pathogenic mechanisms of this organism. Our comprehensive approach involved the download of 32,798 GenBank files of S. pyogenes from the NCBI database, coupled with the incorporation of data from the Virulence Factor Database (VFDB) and the Comprehensive Antibiotic Resistance Database (CARD), which contains AMR gene sequences and resistance profiles. These datasets provided labeled examples of both virulent and non-virulent genes, enabling a robust foundation for feature extraction and model training. We employed preprocessing, characterization and feature extraction techniques on primary nucleotide/amino acid sequences and selected the optimal features for model training. The feature set was constructed using sequence-based descriptors (e.g., k-mers and one-hot encoding) and functional annotations based on database prediction. The ML models compared are logistic regression, decision trees, support vector machines and neural networks, among others. The results of this work show some differences in accuracy between the algorithms; these differences allow us to identify different aspects that represent unique opportunities for a more precise and efficient characterization and identification of VF and AMR. This comparative analysis underscores the value of integrating machine learning techniques in predicting S. pyogenes virulence and AMR, offering potential pathways for more effective diagnostic and therapeutic strategies. Future work will focus on incorporating additional omics data, such as transcriptomics, and exploring advanced deep learning models to further enhance predictive capabilities.
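
A minimal sketch of the k-mer featurization and model comparison described above is given below, using scikit-learn. The toy sequences, labels and the choice of k = 4 are hypothetical stand-ins; the study's features would be built from the NCBI/VFDB/CARD sequences.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical toy data: nucleotide sequences with binary virulence labels
sequences = ["ATGAAACGTCTGGAT", "ATGCCCGGTACCTTA", "ATGTTTAAACGTGAT", "ATGGGGCCCTTAACC"]
labels = [1, 0, 1, 0]  # 1 = virulence/AMR gene, 0 = non-virulent

# k-mer (k = 4) bag-of-words representation of the sequences
vectorizer = CountVectorizer(analyzer="char", ngram_range=(4, 4))
X = vectorizer.fit_transform(sequences)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(),
    "SVM": SVC(),
    "neural network": MLPClassifier(max_iter=1000),
}
for name, model in models.items():
    scores = cross_val_score(model, X, labels, cv=2)  # tiny cv purely for the toy data
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```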

Keywords: antibiotic resistance, streptococcus pyogenes, virulence factors, machine learning

Procedia PDF Downloads 15
89 Behavioral Patterns of Adopting Digitalized Services (E-Sport versus Sports Spectating) Using Agent-Based Modeling

Authors: Justyna P. Majewska, Szymon M. Truskolaski

Abstract:

The growing importance of digitalized services in the so-called new economy, including the e-sports industry, has been observable in recent years. Various demographic and technological changes lead consumers to modify their needs, not regarding the services themselves but the method of their application (attracting customers, forms of payment, new content, etc.). In the case of leisure related to competitive spectating activities, there is a growing need to participate in events whose content is not a sports competition but a computer gaming challenge – e-sport. The literature in this area has so far focused on determining the number of e-sport fans with elements of simple statistical description (mainly concerning demographic characteristics such as age, gender, place of residence). Meanwhile, the development of the industry is influenced by a combination of many different, intertwined demographic, personality and psychosocial characteristics of customers, as well as the characteristics of their environment. Therefore, there is a need for a deeper recognition of the determinants of the behavioral patterns underlying customers' selection of digitalized services, which, in the absence of available large data sets, can be achieved by using econometric simulations – multi-agent modeling. The cognitive aim of the study is to reveal internal and external determinants of behavioral patterns of customers, taking into account various variants of economic development (the pace of digitization and technological development, socio-demographic changes, etc.). In the paper, an agent-based model with heterogeneous agents (characteristics of the customers themselves and of their environment) was developed, which allowed the identification of a three-stage development scenario: i) initial interest, ii) standardization, and iii) full professionalization. The probabilities governing the transition process were estimated using the Method of Simulated Moments. The estimation of the agent-based model parameters and the sensitivity analysis reveal crucial factors that have driven the rising trend in e-sport spectating and, in a wider perspective, the development of digitalized services. Among the psychosocial characteristics of customers, these are the level of familiarization with the rules of games as well as sports disciplines, active and passive participation history, and the individual perception of challenging activities. Environmental factors include the general reception of games, the number and level of recognition of community builders, and the level of technological development of streaming and community-building platforms. However, the crucial factor underlying the good predictive power of the model is the level of professionalization. In the initial interest phase, the entry barriers for new customers are high; they decrease during the standardization phase and increase again in the phase of full professionalization, when new customers perceive participation history as inaccessible. In this case, they are prone to switch to new methods of service application – in the case of e-sport vs. sports, to new content and more modern methods of its delivery. In a wider context, the findings in the paper support the idea of a life cycle of services regarding methods of their application, from 'traditional' to digitalized.
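
As a rough illustration of the agent-based approach described above, the sketch below simulates heterogeneous agents whose adoption of e-sport spectating depends on familiarization, participation history and a phase-dependent entry barrier. The attribute names, adoption rule and all numeric values are hypothetical placeholders rather than the estimated model; in the paper these probabilities are calibrated with the Method of Simulated Moments against observed spectatorship data.

```python
import random

random.seed(1)
PHASES = ["initial interest", "standardization", "full professionalization"]
ENTRY_BARRIER = {"initial interest": 0.7, "standardization": 0.3,
                 "full professionalization": 0.6}   # illustrative barriers

class Agent:
    def __init__(self):
        self.familiarization = random.random()         # knowledge of game/sport rules
        self.participation_history = random.random()   # past active/passive participation
        self.adopted = False

    def step(self, phase):
        # Hypothetical adoption rule: noisy affinity must clear the phase's entry barrier
        affinity = (0.6 * self.familiarization + 0.4 * self.participation_history
                    + random.gauss(0, 0.05))
        if not self.adopted and affinity > ENTRY_BARRIER[phase]:
            self.adopted = True

agents = [Agent() for _ in range(10_000)]
for phase in PHASES:
    for _ in range(10):                 # several periods per development phase
        for a in agents:
            a.step(phase)
    share = sum(a.adopted for a in agents) / len(agents)
    print(f"{phase}: adoption share {share:.2%}")
```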

Keywords: agent-based modeling, digitalized services, e-sport, spectator motives

Procedia PDF Downloads 169
88 Psychometric Examination of Atma Jaya's Multiple Intelligence Batteries for University Students

Authors: Angela Oktavia Suryani, Bernadeth Gloria, Edwin Sutamto, Jessica Kristianty, Ni Made Rai Sapitri, Patricia Catherine Agla, Sitti Arlinda Rochiadi

Abstract:

It was found that some blogs or personal websites in Indonesia sell standardized intelligence tests (for example, Progressive Matrices (PM), the Intelligence Structure Test (IST), and the Culture Fair Intelligence Test (CFIT)) and other psychological tests, together with the manuals and answer keys, to the public. Individuals can buy them and prepare themselves for selection or recruitment with the real test. This practice drives people to lie to the institution (education or company) and also to themselves. It was also found that those tests are old; some items are no longer relevant to the current context, for example, a question about the diameter of a coin that no longer exists. These problems motivated us to develop a new intelligence battery, namely the Multiple Aptitude Battery (MAB). The battery was built using Thurstone's Primary Mental Abilities theory and is intended to be used with high school students, university students, and job applicants. The battery consists of nine subtests. In the current study, we examine six subtests, namely Reading Comprehension, Verbal Analogies, Numerical Inductive Reasoning, Numerical Deductive Reasoning, Mechanical Ability, and Two-Dimensional Spatial Reasoning, for university students. The study included data from 1,424 students recruited by convenience sampling from eight faculties at Atma Jaya Catholic University of Indonesia. Classical and modern test approaches (Item Response Theory) were used to identify item difficulties, and confirmatory factor analysis was applied to examine internal validity. The validity of each subtest was inspected using the convergent–discriminant method, whereas the reliability was examined by applying the Kuder–Richardson formula. The results showed that the majority of the subtests were of medium difficulty, and only one subtest was categorized as easy, namely Verbal Analogies. The items were found to be homogeneous and valid in measuring their constructs; however, at the subtest level, the construct validity examined by the convergent–discriminant method indicated that the subtests were not unidimensional, meaning that they measured not only their own constructs but also other constructs. Three of the subtests were able to predict academic performance with a small effect size, namely Reading Comprehension, Numerical Inductive Reasoning, and Two-Dimensional Spatial Reasoning. GPAs at an intermediate level (third semester and above) were considered a factor contributing to predictive invalidity. The Kuder–Richardson formula showed that the reliability coefficients for both numerical reasoning subtests and for spatial reasoning were superior, in the range of 0.84–0.87, whereas the reliability coefficients for the other three subtests were relatively below the standard for ability tests, in the range of 0.65–0.71. It can be concluded that some of the subtests are ready to be used, whereas others still need revision. This study also demonstrated that the convergent–discriminant method is useful for identifying general intelligence in humans.
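
The Kuder–Richardson reliability used above (KR-20 for dichotomously scored items) can be computed directly from an item-response matrix. The small response matrix below is hypothetical and only illustrates the calculation, not the study's data.

```python
import numpy as np

def kr20(responses: np.ndarray) -> float:
    """Kuder-Richardson formula 20 for dichotomous (0/1) item scores.

    responses: array of shape (n_examinees, n_items).
    """
    k = responses.shape[1]                         # number of items
    p = responses.mean(axis=0)                     # proportion correct per item
    q = 1.0 - p
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

# Hypothetical responses of 6 examinees to 5 items (1 = correct, 0 = incorrect)
X = np.array([
    [1, 1, 1, 0, 1],
    [1, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 1, 0, 0, 0],
])
print(f"KR-20 reliability: {kr20(X):.2f}")
```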

Keywords: intelligence, psychometric examination, multiple aptitude battery, university students

Procedia PDF Downloads 432
87 Management of Femoral Neck Stress Fractures at a Specialist Centre and Predictive Factors to Return to Activity Time: An Audit

Authors: Charlotte K. Lee, Henrique R. N. Aguiar, Ralph Smith, James Baldock, Sam Botchey

Abstract:

Background: Femoral neck stress fractures (FNSF) are uncommon, making up 1 to 7.2% of stress fractures in healthy subjects. FNSFs are prevalent in young women, military recruits, endurance athletes, and individuals with energy deficiency syndrome or the female athlete triad. Presentation is often non-specific, and the condition is frequently misdiagnosed following the initial examination. There is limited research addressing the return-to-activity time after FNSF. Previous studies have demonstrated prognostic time predictions based on various imaging techniques. Here, (1) OxSport clinic FNSF practice standards are retrospectively reviewed, (2) FNSF cohort demographics are examined, and (3) regression models are used to predict return-to-activity prognosis and consequently determine bone stress risk factors. Methods: Patients with a diagnosis of FNSF attending the OxSport clinic between 01/06/2020 and 01/01/2020 were selected from the Rheumatology Assessment Database Innovation in Oxford (RhADiOn) and the OxSport Stress Fracture Database (n = 14). (1) Clinical practice was audited against five criteria based on local and National Institute for Health and Care Excellence guidance, with a 100% standard. (2) Demographics of the FNSF cohort were examined with Student's t-test. (3) Lastly, linear regression and random forest regression models were used on this patient cohort to predict return-to-activity time, and an analysis of feature importance was conducted after fitting each model. Results: OxSport clinical practice met the standard (100%) in 3/5 criteria. The criteria not met were patient waiting times and documentation of all bone stress risk factors. Importantly, analysis of patient demographics showed that, of the population with complete bone stress risk factor assessments, 53% were positive for modifiable bone stress risk factors. Lastly, linear regression analysis was utilized to identify demographic factors that predicted return-to-activity time [R² = 79.172%; average error 0.226]. This analysis identified four key variables that predicted return-to-activity time: vitamin D level, total hip DEXA T value, femoral neck DEXA T value, and history of an eating disorder/disordered eating. Furthermore, random forest regression models were employed for this task [R² = 97.805%; average error 0.024]. Analysis of the importance of each feature again identified a set of four variables, three of which matched the linear regression analysis (vitamin D level, total hip DEXA T value, and femoral neck DEXA T value), with the fourth being age. Conclusion: OxSport clinical practice could be improved by more comprehensively evaluating bone stress risk factors. The importance of this evaluation is demonstrated by the proportion of the population found positive for these risk factors. Using this cohort, potential bone stress risk factors that significantly impacted return-to-activity prognosis were identified using regression models.
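
A minimal sketch of the two regression approaches, fitting a linear model and a random forest and then inspecting coefficients and feature importances, is shown below. The predictor names follow the abstract, but the cohort values and outcome are synthetic placeholders, not the audit data.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 14  # cohort size matching the abstract; values below are synthetic
df = pd.DataFrame({
    "vitamin_d": rng.normal(60, 20, n),
    "total_hip_dexa_t": rng.normal(-0.5, 1.0, n),
    "femoral_neck_dexa_t": rng.normal(-0.8, 1.0, n),
    "eating_disorder_history": rng.integers(0, 2, n),
    "age": rng.normal(28, 6, n),
})
# Synthetic outcome: return-to-activity time (weeks)
y = (20 - 0.1 * df["vitamin_d"] - 2 * df["total_hip_dexa_t"]
     - 3 * df["femoral_neck_dexa_t"] + 4 * df["eating_disorder_history"]
     + rng.normal(0, 1, n))

lin = LinearRegression().fit(df, y)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(df, y)

print("Linear regression coefficients:")
print(pd.Series(lin.coef_, index=df.columns).round(2))
print("\nRandom forest feature importances:")
print(pd.Series(rf.feature_importances_, index=df.columns).round(2))
```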

Keywords: eating disorder, bone stress risk factor, femoral neck stress fracture, vitamin D

Procedia PDF Downloads 177
86 Statistical Models and Time Series Forecasting on Crime Data in Nepal

Authors: Dila Ram Bhandari

Abstract:

Throughout the 20th century, new governments were created in which identities such as ethnic, religious, linguistic, caste, communal, tribal, and others played a part in the development of constitutions and the legal systems of victim and criminal justice. Acute issues with extremism, poverty, environmental degradation, cybercrime, human rights violations, and crime against, and victimization of, both individuals and groups have recently plagued South Asian nations. Every day, a massive number of crimes are committed, and these frequent crimes have made the lives of ordinary citizens restless. Crime is one of the major threats to society and to civilization and can create societal disturbance. Old-style crime-solving practices are unable to live up to the requirements of the current crime situation. Crime analysis is one of the most important activities of the majority of intelligence and law enforcement organizations all over the world. The South Asia region lacks a regional coordination mechanism, unlike the Central Asia and Asia-Pacific regions, to facilitate criminal intelligence sharing and operational coordination related to organized crime, including illicit drug trafficking and money laundering. There have been numerous conversations in recent years about using data mining technology to combat crime and terrorism. The Data Detective program from the software company Sentient uses data mining techniques to support the police (Sentient, 2017). The goal of this work is to test several predictive modelling solutions and choose the most effective and promising one. First, extensive literature reviews on data mining, crime analysis, and crime data mining were conducted. Sentient offered a 7-year archive of crime statistics that was aggregated daily to produce a univariate dataset. Moreover, a daily incidence-type aggregation was performed to produce a multivariate dataset. Each solution's forecast period lasted seven days. Statistical models and neural network models were the two main groups into which the experiments were split. For the crime data, neural networks fared better than statistical models. This study gives a general review of the applied statistical and neural network models. A detailed picture of each model's performance on the available data and its generalizability is provided by a comparative analysis of all the models on a comparable dataset. The studies demonstrated that, in comparison to other models, Gated Recurrent Units (GRU) produced better predictions. The crime records of 2005-2019 were collected from the Nepal Police headquarters and analysed using R programming. In conclusion, a gated recurrent unit implementation could benefit the police in predicting crime. Hence, time series analysis using GRU could be a prospective additional feature in Data Detective.
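
A minimal GRU forecasting sketch for a univariate daily crime-count series with a seven-day horizon is given below, written with Keras rather than the R workflow used in the study. The window length, layer size and the synthetic series are assumptions, not the study's configuration.

```python
import numpy as np
import tensorflow as tf

def make_windows(series, lookback=28, horizon=7):
    """Slice a univariate series into (lookback -> horizon) training pairs."""
    X, y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback:i + lookback + horizon])
    return np.array(X)[..., None], np.array(y)

# Synthetic stand-in for a daily crime-count archive (7 years of data in the study)
rng = np.random.default_rng(0)
series = 50 + 10 * np.sin(np.arange(2555) * 2 * np.pi / 365) + rng.normal(0, 3, 2555)

X, y = make_windows(series)
model = tf.keras.Sequential([
    tf.keras.layers.GRU(32, input_shape=(X.shape[1], 1)),
    tf.keras.layers.Dense(7),          # one output per day of the forecast horizon
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

forecast = model.predict(series[-28:].reshape(1, 28, 1), verbose=0)
print("Next 7 days:", np.round(forecast[0], 1))
```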

Keywords: time series analysis, forecasting, ARIMA, machine learning

Procedia PDF Downloads 161
85 A Data-Driven Compartmental Model for Dengue Forecasting and Covariate Inference

Authors: Yichao Liu, Peter Fransson, Julian Heidecke, Jonas Wallin, Joacim Rockloev

Abstract:

Dengue, a mosquito-borne viral disease, poses a significant public health challenge in endemic tropical and subtropical countries, including Sri Lanka. To reveal insights into the complexity of the dynamics of this disease and to study its drivers, a comprehensive model capable of both robust forecasting and insightful inference of drivers, while capturing the co-circulation of several virus strains, is essential. However, existing studies mostly focus on only one aspect at a time and do not integrate and carry insights across these siloed approaches. While mechanistic models have been developed to capture immunity dynamics, they are often oversimplified and lack integration of all the diverse drivers of disease transmission. On the other hand, purely data-driven methods lack the constraints imposed by immuno-epidemiological processes, making them prone to overfitting and inference bias. This research presents a hybrid model that combines machine learning techniques with mechanistic modelling to overcome the limitations of existing approaches. Leveraging eight years of newly reported dengue case data, along with socioeconomic factors such as human mobility, weekly climate data from 2011 to 2018, genetic data detecting the introduction and presence of new strains, and estimates of seropositivity for different districts in Sri Lanka, we derive a data-driven vector (SEI) to human (SEIR) model across 16 regions in Sri Lanka at the weekly time scale. By conducting ablation studies, the lag effects of time-varying climate factors, allowing delays of up to 12 weeks, were determined. The model demonstrates superior predictive performance over a pure machine learning approach when considering lead times of 5 and 10 weeks on data withheld from model fitting. It further reveals several interesting, interpretable findings on drivers while adjusting for the dynamics and influences of immunity and the introduction of a new strain. The study uncovers strong influences of socioeconomic variables: population density, mobility, household income and rural vs. urban population. It reveals substantial sensitivity to the diurnal temperature range and precipitation, while mean temperature and humidity appear less important in the study location. Additionally, the model indicated sensitivity to the vegetation index, both maximum and average. Predictions on test data reveal high model accuracy. Overall, this study advances the knowledge of dengue transmission in Sri Lanka and demonstrates the importance of hybrid modelling techniques that combine biologically informed model structures with flexible, data-driven estimates of model parameters. The findings show the potential both for inference of drivers in situations of complex disease dynamics and for robust forecasting.
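
A minimal sketch of the vector (SEI) to human (SEIR) compartmental structure is given below with constant parameters. In the study, the transmission-related parameters are time-varying, data-driven functions of climate and socioeconomic covariates; all values here are illustrative placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative constant parameters; the hybrid model makes several of these
# time-varying, data-driven functions of climate and other covariates.
a, b_h, b_v = 0.5, 0.4, 0.4          # bite rate, human/vector infection probabilities
sigma_h, gamma = 1 / 5.9, 1 / 5.0    # human incubation and recovery rates (1/day)
sigma_v, mu_v = 1 / 10.0, 1 / 14.0   # vector incubation and mortality rates (1/day)
Nh, Nv = 1_000_000, 2_000_000        # human and vector population sizes

def sei_seir(t, y):
    Sh, Eh, Ih, Rh, Sv, Ev, Iv = y
    infect_h = a * b_h * Iv * Sh / Nh   # vector -> human transmission
    infect_v = a * b_v * Ih * Sv / Nh   # human -> vector transmission
    return [
        -infect_h,
        infect_h - sigma_h * Eh,
        sigma_h * Eh - gamma * Ih,
        gamma * Ih,
        mu_v * Nv - infect_v - mu_v * Sv,
        infect_v - (sigma_v + mu_v) * Ev,
        sigma_v * Ev - mu_v * Iv,
    ]

y0 = [Nh - 10, 0, 10, 0, Nv, 0, 0]
sol = solve_ivp(sei_seir, (0, 364), y0, t_eval=np.arange(0, 365, 7))
print("Weekly infectious humans (first 10 weeks):", np.round(sol.y[2][:10]))
```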

Keywords: compartmental model, climate, dengue, machine learning, socioeconomic

Procedia PDF Downloads 74
84 South-Mediterranean Oak Forests Management in a Changing Climate: Case of the National Park of Tlemcen, Algeria

Authors: K. Bencherif, M. Bellifa

Abstract:

The expected climatic changes in North Africa are an increase in both the intensity and frequency of summer droughts and a reduction in water availability during the growing season. The existing coppices and forest formations in the national park of Tlemcen are dominated by holm oak, zen oak and cork oak. These open, fragmented structures do not seem strong enough to offer durable protection against climate change. In view of the observed climatic trend, the objective is to analyze the climatic context and its evolution, taking into account the likely behaviour of the oak species during the next 20-30 years on the one hand, and the landscape context in relation to the most suitable silvicultural models to choose, especially in relation to human activities, on the other. The study methodology is based on climatic synthesis and on floristic and spatial analysis. Meteorological data from the period 1989-2009 are used to characterize the current climate. Another approach, based on dendrochronological analysis of a 120-year-old Aleppo pine stem sampled in the park, is used to analyze climate evolution over one century. Results on climate evolution over the next 50 years, obtained through predictive climate models, are exploited to anticipate the climatic tendency in the park. Spatially, stratified sampling was carried out in each forest unit of the park to reduce the degree of heterogeneity and to easily delineate the different stands using GPS. Results from a previous study are used to analyze the anthropogenic factor. According to the forecasts for the period 2025-2100, the number of warm days with a temperature over 25°C would increase from 30 to 70. The monthly mean maximum (M) and minimum (m) temperatures would rise from 30.5°C to 33°C and from 2.3°C to 4.8°C, respectively. With an average drop of 25%, precipitation would be reduced to 411.37 mm. These new data highlight the importance of fire risk and of the water stress which would affect the vegetation and the regeneration process. Spatial analysis highlights the forest and agricultural dimensions of the park compared to urban habitat and bare soils. Maps show both the state of fragmentation and the regression of the forest surface (50% of the total surface). At park level, fires have already affected all cover types, creating low structures of various densities. On the silvicultural level, zen oak forms pure stands in some places, and this expansion must be considered a natural tendency in which zen oak becomes the structuring species. Climate-related changes are minor compared with the real impact that South-Mediterranean forests are undergoing because of the human pressures they support. Nevertheless, the hardwood oak stands of the national park of Tlemcen will have to face unexpected climate changes, such as a changing rainfall regime associated with a lengthening of the water stress period, heavy rainfall and/or sudden cold snaps. Faced with these new conditions, management based on a mixed uneven-aged high forest method promoting the more dynamic species could be an appropriate measure.

Keywords: global warming, mediterranean forest, oak shrub-lands, Tlemcen

Procedia PDF Downloads 387
83 Physiological Effects on Scientist Astronaut Candidates: Hypobaric Training Assessment

Authors: Pedro Llanos, Diego García

Abstract:

This paper aims to expand our understanding of the effects of hypoxia training on our bodies in order to better model its dynamics and leverage some of its implications and effects on human health. Hypoxia training is a recommended practice for military and civilian pilots, allowing them to recognize their early hypoxia signs and symptoms, and for Scientist Astronaut Candidates (SACs), who underwent hypobaric hypoxia (HH) exposure as part of a training activity for prospective suborbital flight applications. This observational-analytical study describes the physiologic responses and symptoms experienced by a SAC group before, during and after HH exposure and proposes a model for assessing predicted versus observed physiological responses. A group of individuals with diverse Science, Technology, Engineering and Mathematics (STEM) backgrounds conducted a hypobaric training session to an altitude of up to 22,000 ft (FL220), or 6,705 meters, where heart rate (HR), breathing rate (BR) and core temperature (Tc) were monitored with the use of a chest strap sensor pre- and post-HH exposure. A pulse oximeter registered the level of oxygen saturation (SpO2) and the number and duration of desaturations during the HH chamber flight. Hypoxia symptoms described by the SACs during the HH training session were also registered. These data allowed the generation of a preliminary predictive model of the oxygen desaturation and O2 pressure curve for each subject, which consists of a sixth-order polynomial fit during exposure and a fifth- or fourth-order polynomial fit during recovery. Data analysis showed no significant differences in HR and BR between pre- and post-HH exposure in most of the SACs, while Tc measures showed slight but consistent decreases. All subjects registered SpO2 greater than 94% for the majority of their individual HH exposures, but all of them presented at least one clinically significant desaturation (SpO2 < 85% for more than 5 seconds), and half of the individuals showed SpO2 below 87% for at least 30% of their HH exposure time. Finally, real-time collection of HH symptoms identified temperature somatosensory perceptions (SP) in 65% of individuals and task-focus issues in 52.5% of individuals as the most common HH indications. 95% of the subjects experienced HH onset symptoms below FL180; all participants achieved full recovery of HH symptoms within 1 minute of donning their O2 mask. The current HH study performed on this group of individuals suggests a rapid and fully reversible physiologic response after HH exposure, as expected and as obtained in previous studies. Our data showed consistent results between predicted and observed SpO2 curves during HH, suggesting a mathematical function that may be used to model HH performance deficiencies. During the HH study, real-time HH symptoms were registered, providing evidence of SP and task focusing as the earliest and most common indicators. Finally, an assessment of HH signs and symptoms in a group of heterogeneous, non-pilot individuals showed similar results to previous studies in homogeneous populations of pilots.
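
The piecewise polynomial fit of the SpO2 curve described above can be reproduced with an ordinary least-squares fit. The polynomial orders follow the abstract (sixth order during exposure, fifth order during recovery), while the synthetic SpO2 trace, segment lengths and noise level are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic SpO2 traces (%): gradual desaturation at altitude, then recovery on O2
t_exp = np.arange(0, 20, 0.25)        # minutes of HH exposure (assumed duration)
t_rec = np.arange(0, 5, 0.25)         # minutes after donning the O2 mask (assumed)
spo2_exp = 98 - 0.035 * t_exp**2 + rng.normal(0, 0.4, t_exp.size)
spo2_rec = 84 + 13 * (1 - np.exp(-t_rec / 0.7)) + rng.normal(0, 0.4, t_rec.size)

# Piecewise polynomial model: 6th order during exposure, 5th order during recovery
coef_exp = np.polyfit(t_exp, spo2_exp, deg=6)
coef_rec = np.polyfit(t_rec, spo2_rec, deg=5)

pred_exp = np.polyval(coef_exp, t_exp)
pred_rec = np.polyval(coef_rec, t_rec)
rmse = lambda y, p: float(np.sqrt(np.mean((y - p) ** 2)))
print(f"Exposure fit RMSE: {rmse(spo2_exp, pred_exp):.2f} %SpO2")
print(f"Recovery fit RMSE: {rmse(spo2_rec, pred_rec):.2f} %SpO2")
```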

Keywords: slow onset hypoxia, hypobaric chamber training, altitude sickness, symptoms and altitude, pressure cabin

Procedia PDF Downloads 113
82 Company's Orientation and Human Resource Management Evolution in Technological Startup Companies

Authors: Yael Livneh, Shay Tzafrir, Ilan Meshoulam

Abstract:

Technological startup companies have been recognized as bearing tremendous potential for business and economic success. However, many entrepreneurs who produce promising innovative ideas fail to implement them as successful businesses. A key argument for such failure is the entrepreneurs' lack of competence in adapting the level of formality of human resource management (HRM). The purpose of the present research was to examine multiple antecedents and consequences of HRM formality in growing startup companies. A review of the research literature identified two central components of HRM formality: HR control and professionalism. The effect of three contextual predictors was examined. The first was an intra-organizational factor: the development level of the organization. We relied on a differentiation between knowledge exploration and knowledge exploitation. At a given time, the organization chooses to focus on a specific mix of these orientations, a choice which requires an appropriate level of HRM formality in order to efficiently overcome the challenges. It was hypothesized that the mix of knowledge exploration and knowledge exploitation orientations would predict HRM formality. The second predictor was the personal characteristics of the organization's leader. Following the idea of a CEO blueprint effect on HRM, it was hypothesized that the CEO's cognitive style would predict HRM formality. The third contextual predictor was an external organizational factor: the level of investor involvement. Using agency theory and Transaction Cost Economics, it was hypothesized that the level of investor involvement in general management and HRM would be positively related to HRM formality. The effect of formality on trust was examined directly and indirectly through the mediating role of procedural justice. The research method consisted of a time-lagged field study. In the first study, data were obtained using three questionnaires, each directed to a different source: the CEO, the HR position-holder and employees. 43 companies participated in this study. The second study was conducted approximately a year later. Data were collected again using three questionnaires with the same sample; 41 companies participated in the second study. The organizational samples included technological startup companies. Both studies included 884 respondents. The results indicated consistency between the two studies. HRM formality was predicted by the intra-organizational factor as well as by the personal characteristics of the CEO, but not at all by the external organizational context. Specifically, the organizational orientation was the greatest contributor to both components of HRM formality. Cognitive style predicted formality to a lesser extent. Investor involvement was found not to have any predictive effect on HRM formality. The results indicated a positive contribution to trust in HRM, mainly via the mediation of procedural justice. This study contributes a new concept for technological startup company development based on a mixture of organizational orientations. Practical implications indicate that the level of HRM formality should be matched to that of the company's development. This match should be challenged and adjusted periodically by referring to the organizational orientation, relevant HR practices, and HR function characteristics. A relevant match could further enhance trust and business success.

Keywords: control, formality, human resource management, organizational development, professionalism, technological startup company

Procedia PDF Downloads 259
81 Optimizing Data Transfer and Processing in Multi-Cloud Environments for Big Data Workloads

Authors: Gaurav Kumar Sinha

Abstract:

In an era defined by the proliferation of data and the utilization of cloud computing environments, the efficient transfer and processing of big data workloads across multi-cloud platforms have emerged as critical challenges. This research paper embarks on a comprehensive exploration of the complexities associated with managing and optimizing big data in a multi-cloud ecosystem. The foundation of this study is rooted in the recognition that modern enterprises increasingly rely on multiple cloud providers to meet diverse business needs, enhance redundancy, and reduce vendor lock-in. As a consequence, managing data across these heterogeneous cloud environments has become intricate, necessitating innovative approaches to ensure data integrity, security, and performance. The primary objective of this research is to investigate strategies and techniques for enhancing the efficiency of data transfer and processing in multi-cloud scenarios. It recognizes that big data workloads are characterized by their sheer volume, variety, velocity, and complexity, making traditional data management solutions insufficient for harnessing the full potential of multi-cloud architectures. The study commences by elucidating the challenges posed by multi-cloud environments in the context of big data. These challenges encompass data fragmentation, latency, security concerns, and cost optimization. To address these challenges, the research explores a range of methodologies and solutions. One of the key areas of focus is data transfer optimization. The paper delves into techniques for minimizing data movement latency, optimizing bandwidth utilization, and ensuring secure data transmission between different cloud providers. It evaluates the applicability of dedicated data transfer protocols, intelligent data routing algorithms, and edge computing approaches in reducing transfer times. Furthermore, the study examines strategies for efficient data processing across multi-cloud environments. It acknowledges that big data processing requires distributed and parallel computing capabilities that span across cloud boundaries. The research investigates containerization and orchestration technologies, serverless computing models, and interoperability standards that facilitate seamless data processing workflows. Security and data governance are paramount concerns in multi-cloud environments. The paper explores methods for ensuring data security, access control, and compliance with regulatory frameworks. It considers encryption techniques, identity and access management, and auditing mechanisms as essential components of a robust multi-cloud data security strategy. The research also evaluates cost optimization strategies, recognizing that the dynamic nature of multi-cloud pricing models can impact the overall cost of data transfer and processing. It examines approaches for workload placement, resource allocation, and predictive cost modeling to minimize operational expenses while maximizing performance. Moreover, this study provides insights into real-world case studies and best practices adopted by organizations that have successfully navigated the challenges of multi-cloud big data management. It presents a comparative analysis of various multi-cloud management platforms and tools available in the market.

Keywords: multi-cloud environments, big data workloads, data transfer optimization, data processing strategies

Procedia PDF Downloads 61
80 Design, Control and Implementation of 300Wp Single Phase Photovoltaic Micro Inverter for Village Nano Grid Application

Authors: Ramesh P., Aby Joseph

Abstract:

Micro inverters provide a module-embedded solution for harvesting energy from small-scale solar photovoltaic (PV) panels. In addition to higher modularity and reliability (25 years of life), the micro inverter has inherent advantages such as the avoidance of long DC cables, the elimination of module mismatch losses, the minimization of partial shading effects, and improved safety and flexibility in installations. Due to the above-stated benefits, renewable energy technology with solar photovoltaic (PV) micro inverters is becoming more widespread in Village Nano Grid applications, ensuring grid independence for rural communities and areas without access to electricity. While the primary objective of this paper is to discuss the problems related to rural electrification, this concept can also be extended to urban installations with grid connectivity. This work presents a comprehensive analysis of the power circuit design, control methodologies and prototyping of a 300Wₚ single-phase PV micro inverter. This paper investigates two different topologies for PV micro inverters, based on the one hand on a single-stage flyback/forward PV micro-inverter configuration and on the other hand on a double-stage configuration comprising a DC-DC converter and an H-bridge DC-AC inverter. This work covers power decoupling techniques to reduce the input filter capacitor size needed to buffer the double-line (100 Hz) ripple energy and to eliminate the use of electrolytic capacitors. The propagation of the double-line oscillation reflected back to the PV module will affect the Maximum Power Point Tracking (MPPT) performance, and the grid current will be distorted. To mitigate this issue, an independent MPPT control algorithm is developed in this work to reject the propagation of this double-line ripple oscillation to the PV side, improving MPPT performance, and to the grid side, improving current quality. Here, the power hardware topology accepts wide input voltage variation and consists of suitably rated MOSFET switches, galvanically isolated gate drivers, high-frequency magnetics and film capacitors with a long lifespan. The digital controller hardware platform, with external peripheral interfaces, is built around the floating-point microcontroller TMS320F2806x from Texas Instruments. The firmware governing the operation of the PV micro inverter is written in the C language and was developed using the Code Composer Studio Integrated Development Environment (IDE). In this work, the prototype hardware for the single-phase photovoltaic micro inverter with the double-stage configuration was developed, and a comparative analysis between the above-mentioned configurations with experimental results will be presented.
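
One common way to keep the MPPT loop insensitive to the 100 Hz double-line ripple is to average the PV voltage and current over one ripple period before each tracking update. The sketch below illustrates this idea with a perturb-and-observe step on 10 ms-averaged samples; the abstract does not name its specific MPPT algorithm, so the P&O rule, the class structure and all numeric values here are assumptions made for illustration only.

```python
class PerturbObserveMPPT:
    """Hill-climbing MPPT operating on ripple-averaged PV measurements (illustrative)."""

    def __init__(self, v_ref=30.0, step=0.5):
        self.v_ref = v_ref          # PV voltage reference (V), assumed starting point
        self.step = step            # perturbation size (V)
        self.prev_p = 0.0
        self.prev_v = v_ref

    def update(self, samples_v, samples_i):
        # Average over one 10 ms window so the 100 Hz double-line ripple cancels out
        v = sum(samples_v) / len(samples_v)
        i = sum(samples_i) / len(samples_i)
        p = v * i
        # Keep perturbing in the direction that increased extracted power
        if (p - self.prev_p) * (v - self.prev_v) >= 0:
            self.v_ref += self.step
        else:
            self.v_ref -= self.step
        self.prev_p, self.prev_v = p, v
        return self.v_ref
```

In the actual firmware, equivalent logic would run in C on the TMS320F2806x, with the averaging window synchronized to the 100 Hz ripple period.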

Keywords: double line oscillation, micro inverter, MPPT, nano grid, power decoupling

Procedia PDF Downloads 128
79 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models

Authors: V. Mantey, N. Findlay, I. Maddox

Abstract:

The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves become more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of data available is impossible to examine manually. Therefore, the question becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery, with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources to solve this problem, and models have been developed to detect buildings in optical satellite imagery. However, by and large, most models focus on affluent regions where buildings are generally larger and constructed further apart. This work is focused on the more difficult problem of detection in densely populated regions. The primary challenge with detecting small buildings in densely populated regions is both the spatial and spectral resolution of the optical sensor. Densely packed buildings with similar construction materials will be difficult to separate due to a similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that models trained until they overfit the input sample can perform better in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires less input data. The model developed for this study has also been fine-tuned using existing, open-source building vector datasets. This is particularly valuable in the context of disaster relief, where information is required in a very short time span. Leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest. The training period itself is also shorter for smaller datasets. Requiring less data means that only a few quality areas are necessary, and so any weaknesses or underpopulated regions in the data can be skipped over in favor of areas with higher quality vectors. In this study, a landcover classification model was developed in conjunction with the building detection tool to provide a secondary source to quality check the detected buildings. This has greatly reduced the false positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and have been employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are being used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data is scarce, overfitted models can be judiciously applied to solve time-sensitive problems.
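
A minimal sketch of fine-tuning a pre-trained Mask R-CNN on a small building dataset, deliberately trained long enough to overfit the target region, is shown below using torchvision's standard fine-tuning pattern (torchvision >= 0.13 assumed for the weights argument). The data loader, epoch count and learning rate are hypothetical placeholders, since the paper does not disclose its exact training setup.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_model(num_classes=2):  # background + building
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    # Replace the box and mask heads so the model predicts our classes
    in_feat = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)
    in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256, num_classes)
    return model

def finetune(model, data_loader, epochs=50, lr=5e-3, device="cuda"):
    """Train well past the usual stopping point so the model overfits the small,
    region-specific sample. data_loader is a hypothetical torch DataLoader yielding
    (images, targets) in torchvision detection format."""
    model.to(device).train()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for images, targets in data_loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            losses = model(images, targets)      # dict of detection/mask losses
            loss = sum(losses.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```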

Keywords: building detection, disaster relief, mask-RCNN, satellite mapping

Procedia PDF Downloads 166
78 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance

Authors: Ammar Alali, Mahmoud Abughaban

Abstract:

Cost reduction and drilling optimization are the goals of many drilling operators. Historically, stuck pipe incidents were a major segment of non-productive time (NPT) associated costs. Traditionally, stuck pipe problems are handled as part of operations and solved after sticking occurs. However, the real key to savings and success is in predicting stuck pipe incidents and avoiding the conditions leading to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and Machine Learning (ML) algorithms to predict drilling events in real time using surface drilling data with minimum computational power. The method combines two types of analysis: (1) real-time prediction, and (2) cause analysis. Real-time prediction aggregates the input data, including historical drilling surface data, geological formation tops, and petrophysical data, from wells within the same field. The input data are then flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses two physical methods (stacking and flattening) to filter any noise in the signature and create a robust pre-determined pilot signature that adheres to the local geology. Once the drilling operation starts, the Wellsite Information Transfer Standard Markup Language (WITSML) live surface data are fed into a matrix and aggregated at a similar frequency to the pre-determined signature. Then, the matrix is correlated with the pre-determined stuck-pipe signature for the field, in real time. The correlation uses a machine learning Correlation-based Feature Selection (CFS) algorithm, which selects features relevant to the class and identifies redundant features. The correlation output is interpreted as a probability curve of stuck pipe incidents in real time. Once this probability passes a fixed threshold defined by the user, the other component, cause analysis, alerts the user of the expected incident based on the set of pre-determined signatures, and a set of recommendations is provided to reduce the associated risk. The validation process involved feeding historical drilling data from an onshore oil field as a live stream, mimicking actual drilling conditions. Pre-determined signatures had been created beforehand for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully, with an accuracy of 76%. This accuracy of detection could have resulted in around a 50% reduction in NPT, equivalent to a 9% cost saving in comparison with offset wells. The prediction of the stuck pipe problem requires a method to capture geological, geophysical and drilling data and to recognize the indicators of this issue at the field and geological formation level. This paper illustrates the efficiency and robustness of the proposed cross-disciplinary approach in its ability to produce such signatures and predict this NPT event.
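
The flattening, stacking and real-time correlation steps described above can be illustrated with a small numerical sketch: offset-well traces are aligned on a formation top, averaged into a pilot signature, and a sliding window of live surface data is scored against it, raising an alert when the score crosses a user threshold. The actual workflow applies a CFS algorithm to multiple surface channels; the single-channel Pearson correlation below is a simplified stand-in, and all data, channel choices and thresholds are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def flatten_and_stack(offset_traces, formation_top_idx, length=50):
    """Align each offset-well trace on its formation top, then stack (average)."""
    aligned = [trace[i:i + length] for trace, i in zip(offset_traces, formation_top_idx)]
    return np.mean(aligned, axis=0)          # pre-determined pilot signature

# Synthetic offset wells: a characteristic surface-channel build-up before sticking
base = np.concatenate([np.zeros(20), np.linspace(0, 5, 30)])
offset_traces = [np.pad(base, (k, 0)) + rng.normal(0, 0.3, 50 + k) for k in (0, 3, 6)]
pilot = flatten_and_stack(offset_traces, formation_top_idx=[0, 3, 6])

def stuck_pipe_probability(live_window, pilot):
    """Map the Pearson correlation of the live window vs. the pilot to [0, 1]."""
    r = np.corrcoef(live_window, pilot)[0, 1]
    return max(0.0, r)

threshold = 0.8                               # user-defined alert threshold
live = base + rng.normal(0, 0.3, 50)          # live WITSML-style surface channel window
p = stuck_pipe_probability(live, pilot)
if p > threshold:
    print(f"ALERT: stuck-pipe probability {p:.2f} exceeds threshold {threshold}")
```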

Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe

Procedia PDF Downloads 222