Search results for: model for identification of attributes quality
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26804


14204 Common Soccer Injuries and Their Risk Factors: A Systematic Review

Authors: C. Brandt, R. Christopher, N. Damons

Abstract:

Background: Soccer is one of the most common sports in the world. It is associated with a significant chance of injury either during training or during the course of an actual match. Studies on the epidemiology of soccer injuries have been widely conducted, but methodological appraisal is lacking to make evidence-based decisions. Objectives: The purpose of this study was to conduct a systematic review of common injuries in soccer and their risk factors. Methods: A systematic review was performed based on the Joanna Briggs Institute procedure for conducting systematic reviews. Databases such as SPORT Discus, Cinahl, Medline, Science Direct, PubMed, and grey literature were searched. The quality of selected studies was rated, and data extracted and tabulated. Plot data analysis was done, and incidence rates and odds ratios were calculated, with their respective 95% confidence intervals. I² statistic was used to determine the proportion of variation across studies. Results: The search yielded 62 studies, of which 21 were screened for inclusion. A total of 16 studies were included for the analysis, ten for qualitative and six for quantitative analysis. The included studies had, on average, a low risk of bias and good methodological quality. The heterogeneity amongst the pooled studies was, however, statistically significant (χ²-p value < 0.001). The pooled results indicated a high incidence of soccer injuries at an incidence rate of 6.83 per 1000 hours of play. The pooled results also showed significant evidence of risk factors and the likelihood of injury occurrence in relation to these risk factors (OR=1.12 95% CI 1.07; 1.17). Conclusion: Although multiple studies are available on the epidemiology of soccer injuries and risk factors, only a limited number of studies were of sound methodology to be included in a review. There was also significant heterogeneity amongst the studies. The incidence rate of common soccer injuries was found to be 6.83 per 1000 hours of play. This incidence rate is lower than the values reported by the majority of previous studies on the occurrence of common soccer injuries. The types of common soccer injuries found by this review support the soccer injuries pattern reported in existing literature as muscle strain and ligament sprain of varying severity, especially in the lower limbs. The risk factors that emerged from this systematic review are predominantly intrinsic risk factors. The risk factors increase the risk of traumatic and overuse injuries of the lower extremities such as hamstrings and groin strains, knee and ankle sprains, and contusion.
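
The pooled incidence rate, odds ratio, and I² heterogeneity statistic reported above can be reproduced from study-level data with a standard inverse-variance meta-analysis. The short sketch below is a generic illustration of that calculation, not the authors' code; the study effect sizes and standard errors are placeholder values.

```python
import numpy as np

# Placeholder study-level log odds ratios and their standard errors
# (illustrative values only; not the data from this review).
log_or = np.array([0.10, 0.15, 0.08, 0.12, 0.11, 0.14])
se = np.array([0.03, 0.05, 0.04, 0.06, 0.03, 0.05])

# Inverse-variance (fixed-effect) pooling of the log odds ratios.
w = 1.0 / se**2
pooled = np.sum(w * log_or) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))

# Cochran's Q and the I^2 statistic describing between-study heterogeneity.
q = np.sum(w * (log_or - pooled) ** 2)
df = len(log_or) - 1
i2 = max(0.0, (q - df) / q) * 100

ci = (np.exp(pooled - 1.96 * pooled_se), np.exp(pooled + 1.96 * pooled_se))
print(f"pooled OR = {np.exp(pooled):.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, I^2 = {i2:.1f}%")
```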

Keywords: incidence, prevalence, risk factors, soccer injuries

Procedia PDF Downloads 169
14203 Khiaban (the Street) as an Ancient Percept of the Iranian Urban Landscape: An Aesthetic Reading of Lalehzar Street, the First Modern Khiaban in Iran

Authors: Mohammad Atashinbar

Abstract:

Lalehzar was one of the main streets in central Tehran in the late Qajar and 1st Pahlavi periods (1880-1940) and a center of attention for the government. It was a popular promenade during the last decade of the reign of Nasser al-Din Shah (1880-1895). However, this street lost its prosperous status under the 2nd Pahlavi and evolved from a modern cultural street into a commercial corridor. Lalehzar's decline was the result of the migration of the upper class from the inner city to the northern part of Tehran and the consequent transfer of amenities and luxury goods with them. It seems that during Lalehzar's six decades of prosperity, this khiâbân received an aesthetic treatment that made it enjoyable and appreciated by the people of Tehran. Various post-revolutionary urban management measures have been taken to revive Lalehzar and improve the quality of its urban life. Since the beginning of the Safavid era, the khiâbân has been accompanied by the concept of urban space, and its characteristics are explained by reference to the main axis of the Persian Garden, with rows of trees, streams, and a line of flowers on both sides. The construction of a street inside the city as an urban space benefits from a mental concept of a spiritual and exciting space, especially in the forms common in the Persian Garden. Before that, the khiâbân was a religious and mythical concept, and one could even say that the mastery of this concept led to its appearance in the garden. In Tehran, Lalehzar Street is a gateway to modernity. The aesthetic changes in Lalehzar Street, inspired by Nasser al-Din Shah's journey to Europe around 1870, coincided with changes in architectural and urban landscape movements around the world between 1880 and 1940. The Shah was impressed by modernist urbanism and, in particular, by the Champs-Élysées in Paris. A tree-lined promenade bearing the hallmarks of the Persian Garden was familiar to Nasser al-Din Shah's mental image of beauty. In his state of mind, the main axis of the Persian Garden had the characteristics of a promenade. Therefore, the origins of the aesthetic of Lalehzar Street lie in the aesthetics of the khiâbân. Admitting that the Champs-Élysées served as a model for Lalehzar, it seems that the Shah wanted to associate the Champs-Élysées with Lalehzar and highlight its landscape aspects by building this street. On the premise that percepts have their own aesthetics, this proposal seeks to analyze the aesthetic evolution of the khiâbân as a percept towards the street as a component of the urban landscape in Lalehzar. The research attempts to review the aesthetic aspects of Lalehzar between 1880 and 1940 by using iconographic analysis, based on the available historical data, to identify the leading aesthetic principles of this street. The aesthetic view of Lalehzar as an artwork is one of the main achievements of this study.

Keywords: Lalehzar, aesthetics, percept, Tehran, street

Procedia PDF Downloads 138
14202 Optimizing Solids Control and Cuttings Dewatering for Water-Powered Percussive Drilling in Mineral Exploration

Authors: S. J. Addinell, A. F. Grabsch, P. D. Fawell, B. Evans

Abstract:

The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled tubing based greenfields mineral exploration drilling system utilising down-hole water-powered percussive drill tooling. This new drilling system is aimed at significantly reducing the costs associated with identifying mineral resource deposits beneath deep, barren cover. This system has shown superior rates of penetration in water-rich, hard rock formations at depths exceeding 500 metres. With fluid flow rates of up to 120 litres per minute at 200 bar operating pressure to energise the bottom hole tooling, excessive quantities of high quality drilling fluid (water) would be required for a prolonged drilling campaign. As a result, drilling fluid recovery and recycling has been identified as a necessary option to minimise costs and logistical effort. While the majority of the cuttings report as coarse particles, a significant fines fraction will typically also be present. To maximise tool life longevity, the percussive bottom hole assembly requires high quality fluid with minimal solids loading and any recycled fluid needs to have a solids cut point below 40 microns and a concentration less than 400 ppm before it can be used to reenergise the system. This paper presents experimental results obtained from the research program during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process shows a strong power law relationship for particle size distributions. This data is critical in optimising solids control strategies and cuttings dewatering techniques. Optimisation of deployable solids control equipment is discussed and how the required centrate clarity was achieved in the presence of pyrite-rich metasediment cuttings. Key results were the successful pre-aggregation of fines through the selection and use of high molecular weight anionic polyacrylamide flocculants and the techniques developed for optimal dosing prior to scroll decanter centrifugation, thus keeping sub 40 micron solids loading within prescribed limits. Experiments on maximising fines capture in the presence of thixotropic drilling fluid additives (e.g. Xanthan gum and other biopolymers) are also discussed. As no core is produced during the drilling process, it is intended that the particle laden returned drilling fluid is used for top-of-hole geochemical and mineralogical assessment. A discussion is therefore presented on the biasing and latency of cuttings representivity by dewatering techniques, as well as the resulting detrimental effects on depth fidelity and accuracy. Data pertaining to the sample biasing with respect to geochemical signatures due to particle size distributions is presented and shows that, depending on the solids control and dewatering techniques used, it can have unwanted influence on top-of-hole analysis. Strategies are proposed to overcome these effects, improving sample quality. Successful solids control and cuttings dewatering for water-powered percussive drilling is presented, contributing towards the successful advancement of coiled tubing based greenfields mineral exploration.
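
A power-law particle size distribution of the kind reported here is commonly verified by a linear fit in log-log space. The snippet below is a generic illustration of that check, with invented sieve sizes and mass fractions rather than the project's data.

```python
import numpy as np

# Hypothetical sieve sizes (microns) and cumulative mass fraction passing.
size_um = np.array([38, 75, 150, 300, 600, 1180, 2360])
frac_passing = np.array([0.04, 0.09, 0.18, 0.35, 0.62, 0.85, 1.00])

# A Gates-Gaudin-Schuhmann style power law, F(d) = (d / d_max)^m,
# appears as a straight line of slope m in log-log coordinates.
slope, intercept = np.polyfit(np.log(size_um), np.log(frac_passing), 1)
print(f"fitted power-law exponent m ≈ {slope:.2f}")
```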

Keywords: cuttings, dewatering, flocculation, percussive drilling, solids control

Procedia PDF Downloads 236
14201 The Bacteriocin Produced by Lactic Acid Bacteria as an Antibacterial Agent against Subclinical Mastitis in Dairy Cows

Authors: Nenny Harijani, Dhandy Koesoemo Wardhana

Abstract:

The aim of this study was to evaluate the antimicrobial activity of bacteriocin produced by Lactic Acid Bacteria (LAB) as an antibacterial agent against subclinical mastitis in dairy cows. The antimicrobial produced by LAB isolated from the cattle intestine can inhibit the growth of Staphylococcus aureus, Streptococcus agalactiae, and Escherichia coli, the bacteria that cause subclinical mastitis in dairy cattle. Inhibition of bacterial growth was indicated by the formation of a clear zone surrounding the colonies on a Brain Heart Infusion agar plate. The bacteriocin produced by LAB acted as an antimicrobial that could inhibit the growth of the indicator bacteria Staphylococcus aureus, S. agalactiae, and E. coli. This study also aimed to develop the bacteriocin for use as a therapeutic for subclinical mastitis in dairy cows. The methods used in this study were the isolation, selection, and identification of LAB using de Man, Rogosa and Sharpe (MRS) medium, followed by characterization of the bacteriocin produced by the LAB. The results showed that the bacteriocin isolated from the beef cattle intestine could inhibit the growth of Staphylococcus aureus, S. agalactiae, and Escherichia coli, as indicated by a clear zone surrounding the colonies on a Brain Heart Infusion agar plate. The bacteriocin was heat-stable when exposed to 80 °C for 30 minutes and 100 °C for 15 minutes and was inactivated by proteolytic enzymes such as trypsin. These findings support the development of the bacteriocin as a therapeutic agent for subclinical mastitis in dairy cattle.

Keywords: lactic acid bacteria, bacteriocin, Staphylococcus aureus, S. agalactiae, E. coli, subclinical mastitis

Procedia PDF Downloads 122
14200 A Kinetic Study on Recovery of High-Purity Rutile TiO₂ Nanoparticles from Titanium Slag Using Sulfuric Acid under Sonochemical Procedure

Authors: Alireza Bahramian

Abstract:

High-purity TiO₂ nanoparticles (NPs) with sizes ranging between 50 nm and 100 nm are synthesized from titanium slag through the sulphate route under a sonochemical procedure. The effects of dissolution parameters such as the sulfuric acid/slag weight ratio, caustic soda concentration, digestion temperature and time, and initial particle size of the dried slag on the extraction efficiency of TiO₂ and the removal of iron are examined. By optimizing the digestion conditions, a rutile TiO₂ powder with a surface area of 42 m²/g and a mean pore diameter of 22.4 nm was prepared. A thermo-kinetic analysis showed that the digestion temperature has an important effect, while the acid/slag weight ratio and the initial size of the slag have a moderate effect on the dissolution rate. The shrinking-core model, including both chemical surface reaction and surface diffusion, is used to describe the leaching process. A low value of the activation energy, 38.12 kJ/mol, indicates that the surface chemical reaction is the rate-controlling step. The kinetic analysis suggested a first-order reaction mechanism with respect to the acid concentration.
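
For reference, a minimal sketch of the rate expressions implied by the abstract (these are the standard shrinking-core forms, not equations reproduced from the paper). When the surface chemical reaction controls the rate, the conversion X of the slag particles follows

\[ 1 - (1 - X)^{1/3} = k_r t, \]

while product-layer (surface) diffusion control gives \( 1 - 3(1 - X)^{2/3} + 2(1 - X) = k_d t \). The temperature dependence of the rate constant follows the Arrhenius law, from which the reported activation energy is obtained:

\[ k = A \exp\!\left(-\frac{E_a}{RT}\right), \qquad E_a \approx 38.12\ \text{kJ/mol}. \]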

Keywords: TiO₂ nanoparticles, titanium slag, dissolution rate, sonochemical method, thermo-kinetic study

Procedia PDF Downloads 245
14199 Financial Modeling for Net Present Benefit Analysis of Electric Bus and Diesel Bus and Applications to NYC, LA, and Chicago

Authors: Jollen Dai, Truman You, Xinyun Du, Katrina Liu

Abstract:

Transportation is one of the leading sources of greenhouse gas (GHG) emissions. Thus, to meet the 2015 Paris Agreement, all countries must adopt a different and more sustainable transportation system. From bikes to Maglev, the world is slowly shifting to sustainable transportation. To develop a useful public transit system, a sustainable network of buses must be implemented. As of now, only a handful of cities have adopted a detailed plan to implement a full fleet of e-buses by the 2030s, with Shenzhen in the lead. Every change requires a detailed plan and a focused analysis of its impacts. In this report, the economic and financial implications have been taken into consideration to develop a well-rounded 10-year plan for New York City. We also apply the same financial model to two other cities, LA and Chicago. We picked NYC, Chicago, and LA for the comparative NPB analysis because they are all large metropolitan cities with complex transportation systems. All three cities have started action plans to achieve a full fleet of e-buses in the coming decades. In addition, their energy carbon footprints and energy prices are very different, which are key factors in the benefits of electric buses. Using TCO (Total Cost of Ownership) financial analysis, we developed a model to calculate NPB (Net Present Benefit) and compare EBS (electric buses) to DBS (diesel buses). We have considered all essential aspects in our model: initial investment, including the cost of a bus, charger, and installation; government funds (federal, state, local); labor cost; energy (electricity or diesel) cost; maintenance cost; insurance cost; health and environmental benefits; and V2G (vehicle-to-grid) benefits. We see about $1,400,000 in benefits over a 12-year lifetime of an EBS compared to a DBS, provided government funds offset 50% of the EBS purchase cost. With the government subsidy, an EBS starts to generate positive cash flow in the 5th year and can pay back its investment in 5 years. Note that our model considers environmental and health benefits, and every year $50,000 is counted as a health benefit per bus. Besides health benefits, the most significant benefits come from energy cost savings and maintenance savings, which are about $600,000 and $200,000, respectively, over the 12-year life cycle. Using linear regression, given certain budget limitations, we then designed an optimal three-phase process to electrify all NYC buses in 10 years, i.e., by 2033. The optimization minimizes the total cost over the years and yields the lowest environmental cost. The overall benefit of replacing all DBS with EBS for NYC is over $2.1 billion by 2033. For LA and Chicago, the benefits of electrifying the current bus fleets are $1.04 billion and $634 million by 2033, respectively. All NPB analyses and the algorithm to optimize the phased electrification process are implemented in Python code and can be shared.
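
As the abstract notes, the NPB analysis is implemented in Python. The fragment below is a minimal, hypothetical sketch of such a TCO/NPB comparison; the cost figures, discount rate, and function names are illustrative placeholders rather than the authors' actual code or data.

```python
# Minimal net-present-benefit comparison of an electric bus (EBS) vs a diesel bus (DBS).
# All figures are illustrative placeholders, not the study's data.

DISCOUNT_RATE = 0.05
LIFETIME_YEARS = 12

def npv(cashflows, rate=DISCOUNT_RATE):
    """Discount a list of yearly cash flows (year 0 first) to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def bus_cashflows(capex, subsidy, annual_energy, annual_maintenance,
                  annual_insurance, annual_health_benefit=0.0, annual_v2g=0.0):
    """Year-0 purchase followed by recurring operating costs and benefits."""
    flows = [-(capex - subsidy)]
    for _ in range(LIFETIME_YEARS):
        flows.append(-(annual_energy + annual_maintenance + annual_insurance)
                     + annual_health_benefit + annual_v2g)
    return flows

# Hypothetical per-bus inputs (USD).
ebs = bus_cashflows(capex=900_000, subsidy=450_000, annual_energy=30_000,
                    annual_maintenance=20_000, annual_insurance=10_000,
                    annual_health_benefit=50_000, annual_v2g=5_000)
dbs = bus_cashflows(capex=500_000, subsidy=0, annual_energy=80_000,
                    annual_maintenance=35_000, annual_insurance=10_000)

net_present_benefit = npv(ebs) - npv(dbs)
print(f"NPB of EBS over DBS per bus: ${net_present_benefit:,.0f}")
```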

Keywords: financial modeling, total cost ownership, net present benefits, electric bus, diesel bus, NYC, LA, Chicago

Procedia PDF Downloads 27
14198 Improving Screening and Treatment of Binge Eating Disorders in Pediatric Weight Management Clinic through a Quality Improvement Framework

Authors: Cristina Fernandez, Felix Amparano, John Tumberger, Stephani Stancil, Sarah Hampl, Brooke Sweeney, Amy R. Beck, Helena H Laroche, Jared Tucker, Eileen Chaves, Sara Gould, Matthew Lindquist, Lora Edwards, Renee Arensberg, Meredith Dreyer, Jazmine Cedeno, Alleen Cummins, Jennifer Lisondra, Katie Cox, Kelsey Dean, Rachel Perera, Nicholas A. Clark

Abstract:

Background: Adolescents with obesity are at higher risk of disordered eating than the general population. Detection of eating disorders (ED) is difficult. Screening questionnaires may aid in early detection of ED. Our team’s prior efforts focused on increasing ED screening rates to ≥90% using a validated 10-question adolescent binge eating disorder screening questionnaire (ADO-BED). This aim was achieved. We then aimed to improve treatment plan initiation of patients ≥12 years of age who screen positive for BED within our WMC from 33% to 70% within 12 months. Methods: Our WMC is within a tertiary-care, free-standing children’s hospital. A3, an improvement framework, was used. A multidisciplinary team (physicians, nurses, registered dietitians, psychologists, and exercise physiologists) was created. The outcome measure was documentation of treatment plan initiation of those who screen positive (goal 70%). The process measure was ADO-BED screening rate of WMC patients (goal ≥90%). Plan-Do-Study-Act (PDSA) cycle 1 included provider education on current literature and treatment plan initiation based upon ADO-BED responses. PDSA 2 involved increasing documentation of treatment plan and retrain process to providers. Pre-defined treatment plans were: 1) repeat screen in 3-6 months, 2) resources provided only, or 3) comprehensive multidisciplinary weight management team evaluation. Run charts monitored impact over time. Results: Within 9 months, 166 patients were seen in WMC. Process measure showed sustained performance above goal (mean 98%). Outcome measure showed special cause improvement from mean of 33% to 100% (n=31). Of treatment plans provided, 45% received Plan 1, 4% Plan 2, and 46% Plan 3. Conclusion: Through a multidisciplinary improvement team approach, we maintained sustained ADO-BED screening performance, and, prior to our 12-month timeline, achieved our project aim. Our efforts may serve as a model for other multidisciplinary WMCs. Next steps may include expanding project scope to other WM programs.

Keywords: obesity, pediatrics, clinic, eating disorder

Procedia PDF Downloads 45
14197 Temporal Profile of T2 MRI and 1H-MRS in the MDX Mouse Model of Duchenne Muscular Dystrophy

Authors: P. J. Sweeney, T. Ahtoniemi, J. Puoliväli, T. Laitinen, K.Lehtimäki, A. Nurmi, D. Wells

Abstract:

Duchenne muscular dystrophy (DMD) is an X-linked, lethal muscle-wasting disease for which there is currently no treatment that effectively prevents the muscle necrosis and progressive muscle loss. DMD is among the most common inherited diseases, affecting around 1 in 3500 live male births. MDX (X-linked muscular dystrophy) mice only partially encapsulate the disease in humans and display weakness in muscles, muscle damage, and edema during a period deemed the “critical period”, when these mice go through cycles of muscular degeneration and regeneration. Although the MDX mutant mouse model has been extensively studied as a model for DMD, an extensive temporal, non-invasive imaging profile utilizing magnetic resonance imaging (MRI) and 1H-magnetic resonance spectroscopy (1H-MRS) has not been performed to date. In addition, longitudinal imaging characterization has not coincided with attempts to exacerbate the progressive muscle damage by exercise. In this study, we employed an 11.7 T small-animal MRI scanner to characterize the MRI and MRS profile of MDX mice longitudinally over a 12-month period during which the mice were subjected to exercise. Male mutant MDX mice (n=15) and male wild-type mice (n=15) were subjected to a chronic exercise regime of treadmill walking (30 min/session) bi-weekly over the whole 12-month follow-up period. Mouse gastrocnemius and tibialis anterior muscles were profiled with baseline T2-MRI and 1H-MRS at 6 weeks of age. Imaging and spectroscopy were repeated at 3, 6, 9, and 12 months of age. Plasma creatine kinase (CK) level measurements coincided with the time-points for T2-MRI and 1H-MRS, and were also taken after the “critical period” at 10 weeks of age. The results obtained from this study indicate that chronic exercise extends the dystrophic phenotype of MDX mice, as evidenced by T2-MRI and 1H-MRS. T2-MRI revealed the extent and location of muscle damage in the gastrocnemius and tibialis anterior muscles as hyperintensities (lesions and edema) in exercised MDX mice over the follow-up period. The magnitude of the muscle damage remained stable over time in exercised mice. No evident fat infiltration or accumulation in the muscle tissues was seen at any time-point in exercised MDX mice. Creatine, choline, and taurine levels evaluated by 1H-MRS from the same muscles were found to be significantly decreased at each time-point. Extramyocellular (EMCL) and intramyocellular lipids (IMCL) did not change in exercised mice, supporting the findings from the anatomical T2-MRI scans regarding fat content. Creatine kinase levels were found to be significantly higher in exercised MDX mice during the follow-up period, and importantly, CK levels remained stable over the whole follow-up period. Taken together, we have described a longitudinal profile of muscle damage and muscle metabolic changes in MDX mice subjected to chronic exercise. The extent of the muscle damage measured by T2-MRI was found to be stable through the follow-up period in the muscles examined. In addition, the metabolic profile, especially creatine, choline, and taurine levels in muscles, was sustained between time-points. The anatomical muscle damage evaluated by T2-MRI was supported by plasma CK levels, which remained stable over the follow-up period. These findings show that non-invasive imaging and spectroscopy can be used effectively to evaluate chronic muscle pathology. These techniques can also be used to evaluate the effect of various manipulations, such as exercise here, on the phenotype of the mice. Many of the findings we present here are translatable to the clinical disease, such as decreased creatine, choline, and taurine levels in muscles. Imaging by T2-MRI and 1H-MRS also revealed that fat content, or extramyocellular and intramyocellular lipids respectively, is not changed in MDX mice, which is in contrast to the clinical manifestation of Duchenne muscular dystrophy. The findings show that non-invasive imaging can be used to characterize the phenotype of the MDX model and its translatability to clinical disease, and to study events that have traditionally not been examined, such as sustained muscle damage from rigorous exercise after the “critical period”. The ability of this model to display sustained damage beyond the spontaneous “critical period”, and in turn to study drug effects on this extended phenotype, will increase the value of the MDX mouse model as a tool to study therapies and treatments aimed at DMD and associated diseases.

Keywords: 1H-MRS, MRI, muscular dystrophy, mouse model

Procedia PDF Downloads 345
14196 Analysis Model for the Relationship of Users, Products, and Stores on Online Marketplace Based on Distributed Representation

Authors: Ke He, Wumaier Parezhati, Haruka Yamashita

Abstract:

Recently, online marketplaces in the e-commerce industry, such as Rakuten and Alibaba, have become some of the most popular online marketplaces in Asia. On these shopping websites, consumers can select and purchase products from a large number of stores. Additionally, consumers have to register their name, age, gender, and other information in advance to access their registered account. Therefore, a method for analyzing consumer preferences from both the store and the product side is required. This study uses the Doc2Vec method, which has been studied in the field of natural language processing. Doc2Vec has been widely used in document classification to extract semantic relationships between documents and words; here, documents represent consumers and words represent products. This concept is applicable to representing the relationship between users and items; however, the problem is that one more factor (i.e., shops) needs to be considered in Doc2Vec. More precisely, a method for analyzing the relationship between consumers, stores, and products is required. The purpose of our study is to combine the Doc2Vec analysis for users and shops with that for users and items in the same feature space. This method enables the calculation of similar shops and items for each user. In this study, we analyze real data accumulated in an online marketplace and demonstrate the efficiency of the proposal.
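
A minimal sketch of the kind of embedding step described above, written with the gensim Doc2Vec implementation: each user's purchase history is treated as a "document" tagged with both the user ID and the shop IDs, so users, shops, and products end up in one shared vector space. The toy purchase histories, tag scheme, and parameter values are assumptions for illustration, not the authors' setup.

```python
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy purchase histories: words = product IDs, tags = user ID plus shop IDs.
# (IDs and parameters are invented for illustration.)
histories = [
    (["item_12", "item_7", "item_33"], ["user_1", "shop_A"]),
    (["item_7", "item_90"],            ["user_2", "shop_A", "shop_B"]),
    (["item_33", "item_55", "item_90"], ["user_3", "shop_B"]),
]
corpus = [TaggedDocument(words=w, tags=t) for w, t in histories]

# Train a small Doc2Vec model; users and shops become document vectors (model.dv)
# and products become word vectors (model.wv), trained jointly.
model = Doc2Vec(corpus, vector_size=32, window=5, min_count=1, epochs=100)

# Users/shops most similar to a given user, and products most similar to a product.
print(model.dv.most_similar("user_1", topn=3))
print(model.wv.most_similar("item_7", topn=2))

# A user-to-product similarity can then be computed from the shared vectors.
u, v = model.dv["user_1"], model.wv["item_7"]
print(float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))))
```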

Keywords: Doc2Vec, online marketplace, marketing, recommendation systems

Procedia PDF Downloads 103
14195 Government Payments to Minority American Producers

Authors: Anil K. Giri, Dipak Subedi, Kathleen Kassel, Ashok Mishra

Abstract:

The United States Department of Agriculture’s programs have been accused of discriminating in the past based on the race of the farmer, especially against African-American producers. This study examines whether there was racial discrimination in payments from the most recent USDA programs, including those made in response to the pandemic. The study uses Analysis of Variance (ANOVA) to examine the payments, after normalizing them relative to cash receipts, to test whether discrimination exists in the number of payments received. Three programs are investigated in this study: i) the Coronavirus Food Assistance Program (CFAP), ii) the Market Facilitation Program (MFP), and iii) the Paycheck Protection Program (PPP). The PPP was administered by the Small Business Administration, whereas the other two were designed and implemented by the USDA. The PPP made forgivable loans to small businesses and, initially, was heavily criticized for not reaching minority businesses (in general). The Small Business Administration then initiated a second draw of PPP loans, prioritizing minority-owned businesses. This study compares the attributes of PPP loans made to African-American farming businesses and other farming businesses in the two draws of the PPP. We find that the number of African-American farming businesses participating in the second draw of PPP loans decreased significantly from the first draw. However, the average amount of PPP loans to African-American farming businesses increased in the second draw. In the first draw, the average cost of jobs reported per loan was higher for African-American farming businesses than for other producers. In the second draw, the average cost of jobs reported per loan was significantly higher for other farming businesses than for African-American businesses. The share of PPP loans forgiven for African-American farming businesses is significantly below the national rate of 89 percent. The rate of forgiveness for PPP loans made to African-American producers is unlikely to increase significantly without policy changes. This can increase future financial burdens on farm operations run by African-Americans. Finally, we conclude that the initial goal of increasing minority participation in the second draw of PPP loans, at least among African-Americans in the agricultural sector, was not met. CFAP made almost $600 million in direct payments to minority producers, including Black producers. Black or African-American producers received more than $52 million in CFAP payments. CFAP payments were proportional to the value of agricultural commodities sold for most minority producers. The 2017 Census of Agriculture showed that the majority of minority producers, including African-American producers but excluding Asian producers, raised livestock. Accordingly, CFAP made the highest payments to minority livestock producers.
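
A minimal illustration of the normalization-plus-ANOVA step described above (the payment and cash receipt figures below are invented placeholders; the real analysis uses USDA program data): payments are first scaled by cash receipts and the group means are then compared with a one-way ANOVA.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical payments and cash receipts (USD) for two producer groups.
payments_group_a = np.array([12_000, 8_500, 15_000, 9_800])
receipts_group_a = np.array([150_000, 120_000, 200_000, 140_000])
payments_group_b = np.array([20_000, 18_000, 25_000, 22_000])
receipts_group_b = np.array([240_000, 210_000, 300_000, 260_000])

# Normalize payments relative to cash receipts, then test for a group difference.
ratio_a = payments_group_a / receipts_group_a
ratio_b = payments_group_b / receipts_group_b
f_stat, p_value = f_oneway(ratio_a, ratio_b)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```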

Keywords: United States department of agriculture (USDA), coronavirus food assistance program (CFAP), paycheck protection program (PPP), African-American producers, minority American producers

Procedia PDF Downloads 83
14194 Predicting Subsurface Abnormalities Growth Using Physics-Informed Neural Networks

Authors: Mehrdad Shafiei Dizaji, Hoda Azari

Abstract:

The research explores the pioneering integration of Physics-Informed Neural Networks (PINNs) into the domain of Ground-Penetrating Radar (GPR) data prediction, akin to advancements in medical imaging for tracking tumor progression in the human body. This research presents a detailed development framework for a specialized PINN model proficient at interpreting and forecasting GPR data, much like how medical imaging models predict tumor behavior. By harnessing the synergy between deep learning algorithms and the physical laws governing subsurface structures—or, in medical terms, human tissues—the model effectively embeds the physics of electromagnetic wave propagation into its architecture. This ensures that predictions not only align with fundamental physical principles but also mirror the precision needed in medical diagnostics for detecting and monitoring tumors. The suggested deep learning structure comprises three components: a CNN, a spatial feature channel attention (SFCA) mechanism, and ConvLSTM, along with temporal feature frame attention (TFFA) modules. The attention mechanism computes channel attention and temporal attention weights using self-adaptation, thereby fine-tuning the visual and temporal feature responses to extract the most pertinent and significant visual and temporal features. By integrating physics directly into the neural network, our model has shown enhanced accuracy in forecasting GPR data. This improvement is vital for conducting effective assessments of bridge deck conditions and other evaluations related to civil infrastructure. The use of Physics-Informed Neural Networks (PINNs) has demonstrated the potential to transform the field of Non-Destructive Evaluation (NDE) by enhancing the precision of infrastructure deterioration predictions. Moreover, it offers a deeper insight into the fundamental mechanisms of deterioration, viewed through the prism of physics-based models.
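
To make the idea of embedding wave physics into the loss concrete, here is a minimal, generic PINN sketch in PyTorch: a fully connected toy network constrained by a 1-D scalar wave equation. The wave speed, placeholder data, and architecture are assumptions for illustration and do not reproduce the CNN/SFCA/ConvLSTM/TFFA model described above.

```python
import torch
import torch.nn as nn

class PINN(nn.Module):
    """Toy network mapping (x, t) to a scalar field u(x, t)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=1))

def physics_residual(model, x, t, c=1.0):
    """Residual of the wave equation u_tt - c^2 u_xx at collocation points."""
    x.requires_grad_(True)
    t.requires_grad_(True)
    u = model(x, t)
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_tt = torch.autograd.grad(u_t, t, torch.ones_like(u_t), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_tt - (c ** 2) * u_xx

model = PINN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder "GPR-like" observations and collocation points (synthetic, illustrative).
x_obs, t_obs = torch.rand(128, 1), torch.rand(128, 1)
u_obs = torch.sin(torch.pi * x_obs) * torch.cos(torch.pi * t_obs)
x_col, t_col = torch.rand(256, 1), torch.rand(256, 1)

for step in range(2000):
    opt.zero_grad()
    data_loss = ((model(x_obs, t_obs) - u_obs) ** 2).mean()        # fit to measurements
    pde_loss = (physics_residual(model, x_col, t_col) ** 2).mean()  # physics constraint
    loss = data_loss + pde_loss
    loss.backward()
    opt.step()
```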

Keywords: physics-informed neural networks, deep learning, ground-penetrating radar (GPR), NDE, ConvLSTM, physics, data driven

Procedia PDF Downloads 13
14193 'How to Change Things When Change Is Hard': Motivating Libyan College Students to Play an Active Role in Their Learning Process

Authors: Hameda Suwaed

Abstract:

Group work, time management, and accepting others' opinions are practices rooted in the socio-political culture of democratic nations. In Libya, a country transitioning towards democracy, what is the impact of encouraging college students to use such practices in the English language classroom? How can teachers be encouraged to use such practices in an educational system characterized by traditional methods of teaching? Using data gathered through action research and classroom research, this study investigates how teachers can use education to change their students' understanding of their roles in their society by enhancing their sense of belonging to it. The study adapts a model of change that includes giving students clear directions, sufficient motivation, and a supportive environment. These steps were applied by encouraging students to participate actively in the classroom through group work and a variety of activities. The findings of the study showed that following the suggested model can broaden students' perception of their belonging to their environment, starting with their classroom and ending with their country. In conclusion, although this was a small-scale study, the students' participation in the classroom shows that they gained self-confidence in using practices such as working in groups, presenting their ideas, and accepting different opinions. What was remarkable is that most students were aware that this is what Libya needs nowadays.

Keywords: educational change, students' motivation, group work, foreign language teaching

Procedia PDF Downloads 406
14192 Predictive Modelling of Curcuminoid Bioaccessibility as a Function of Food Formulation and Associated Properties

Authors: Kevin De Castro Cogle, Mirian Kubo, Maria Anastasiadi, Fady Mohareb, Claire Rossi

Abstract:

Background: The bioaccessibility of bioactive compounds is a critical determinant of the nutritional quality of various food products. Despite its importance, there are few comprehensive studies assessing how the composition of a food matrix influences the bioaccessibility of a compound of interest. This knowledge gap has prompted a growing need to investigate the intricate relationship between food matrix formulations and the bioaccessibility of bioactive compounds. One class of bioactive compounds that has attracted considerable attention is the curcuminoids. These naturally occurring phytochemicals, extracted from the roots of Curcuma longa, have gained popularity owing to their purported health benefits and are also well known for their poor bioaccessibility. Project aim: The primary objective of this research project is to systematically assess the influence of matrix composition on the bioaccessibility of curcuminoids. Additionally, this study aimed to develop a series of predictive models for bioaccessibility, providing valuable insights for optimising functional food formulations and more descriptive nutritional information for potential consumers. Methods: Food formulations enriched with curcuminoids were subjected to in vitro digestion simulation, and their bioaccessibility was characterized with chromatographic and spectrophotometric techniques. The resulting data served as the foundation for the development of predictive models capable of estimating bioaccessibility based on specific physicochemical properties of the food matrices. Results: One striking finding of this study was the strong correlation observed between the concentration of macronutrients within the food formulations and the bioaccessibility of curcuminoids. In fact, macronutrient content emerged as a very informative explanatory variable for bioaccessibility and was used, alongside other variables, as a predictor in a Bayesian hierarchical model that predicted curcuminoid bioaccessibility accurately (optimisation performance of 0.97 R²) for the majority of cross-validated test formulations (LOOCV of 0.92 R²). These preliminary results open the door to further exploration, enabling researchers to investigate a broader spectrum of food matrix types and additional properties that may influence bioaccessibility. Conclusions: This research sheds light on the intricate interplay between food matrix composition and the bioaccessibility of curcuminoids. This study lays a foundation for future investigations, offering a promising avenue for advancing our understanding of bioactive compound bioaccessibility and its implications for the food industry and informed consumer choices.
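
A minimal sketch of a Bayesian hierarchical regression of this general kind, written with the PyMC library: the predictor names, grouping by matrix type, priors, and data shapes are assumptions for illustration, not the authors' model specification.

```python
import numpy as np
import pymc as pm

# Hypothetical data: macronutrient content (protein, fat, carbohydrate, % w/w),
# a matrix-type index per formulation, and measured bioaccessibility (0-1).
rng = np.random.default_rng(42)
X = rng.random((40, 3))
matrix_idx = rng.integers(0, 4, size=40)   # 4 hypothetical matrix types
y = rng.random(40)

with pm.Model() as model:
    # Hierarchical intercepts: one per matrix type, partially pooled.
    mu_a = pm.Normal("mu_a", 0.0, 1.0)
    sigma_a = pm.HalfNormal("sigma_a", 1.0)
    a = pm.Normal("a", mu=mu_a, sigma=sigma_a, shape=4)

    # Shared regression weights for the macronutrient predictors.
    beta = pm.Normal("beta", 0.0, 1.0, shape=3)
    sigma = pm.HalfNormal("sigma", 1.0)

    mu = a[matrix_idx] + pm.math.dot(X, beta)
    pm.Normal("bioaccessibility", mu=mu, sigma=sigma, observed=y)

    idata = pm.sample(1000, tune=1000, target_accept=0.9)
```

Leave-one-out cross-validation, as reported above, would then be run by refitting (or approximating) the model with each formulation held out in turn and comparing predictions against the held-out measurements.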

Keywords: bioactive bioaccessibility, food formulation, food matrix, machine learning, probabilistic modelling

Procedia PDF Downloads 59
14191 Overview of Pre-Analytical Lab Errors in a Tertiary Care Hospital at Rawalpindi, Pakistan

Authors: S. Saeed, T. Butt, M. Rehan, S. Khaliq

Abstract:

Objective: To determine the frequency of pre-analytical errors in samples taken from patients for various lab tests at Fauji Foundation Hospital, Rawalpindi. Material and Methods: All the lab specimens for diagnostic purposes received at the lab from Fauji Foundation hospital, Rawalpindi indoor and outdoor patients were included. Total number of samples received in the lab is recorded in the computerized program made for the hospital. All the errors observed for pre-analytical process including patient identification, sampling techniques, test collection procedures, specimen transport/processing and storage were recorded in the log book kept for the purpose. Results: A total of 476616 specimens were received in the lab during the period of study including 237931 and 238685 from outdoor and indoor patients respectively. Forty-one percent of the samples (n=197976) revealed pre-analytical discrepancies. The discrepancies included Hemolyzed samples (34.8%), Clotted blood (27.8%), Incorrect samples (17.4%), Unlabeled samples (8.9%), Insufficient specimens (3.9%), Request forms without authorized signature (2.9%), Empty containers (3.9%) and tube breakage during centrifugation (0.8%). Most of these pre-analytical discrepancies were observed in samples received from the wards revealing that inappropriate sample collection by the medical staff of the ward, as most of the outdoor samples are collected by the lab staff who are properly trained for sample collection. Conclusion: It is mandatory to educate phlebotomists and paramedical staff particularly performing duties in the wards regarding timing and techniques of sampling/appropriate container to use/early delivery of the samples to the lab to reduce pre-analytical errors.

Keywords: pre analytical lab errors, tertiary care hospital, hemolyzed, paramedical staff

Procedia PDF Downloads 196
14190 Computed Tomography Guided Bone Biopsies: Experience at an Australian Metropolitan Hospital

Authors: K. Hinde, R. Bookun, P. Tran

Abstract:

Percutaneous CT guided biopsies provide a fast, minimally invasive, cost effective and safe method for obtaining tissue for histopathology and culture. Standards for diagnostic yield vary depending on whether the tissue is being obtained for histopathology or culture. We present a retrospective audit from Western Health in Melbourne Australia over a 12-month period which aimed to determine the diagnostic yield, technical success and complication rate for CT guided bone biopsies and identify factors affecting these results. The digital imaging storage program (Synapse Picture Archiving and Communication System – Fujifilm Australia) was analysed with key word searches from October 2015 to October 2016. Nineteen CT guided bone biopsies were performed during this time. The most common referring unit was oncology, work up imaging included CT, MRI, bone scan and PET scan. The complication rate was 0%, overall diagnostic yield was 74% with a technical success of 95%. When performing biopsies for histologic analysis diagnostic yield was 85% and when performing biopsies for bacterial culture diagnostic yield was 60%. There was no significant relationship identified between size of lesion, distance of lesion to skin, lesion appearance on CT, the number of samples taken or gauge of needle to diagnostic yield or technical success. CT guided bone biopsy at Western Health meets the standard reported at other major clinical centres for technical success and safety. It is a useful investigation in identification of primary malignancy in distal bone metastases.

Keywords: bone biopsy, computed tomography, core biopsy, histopathology

Procedia PDF Downloads 187
14179 Troubleshooting Petroleum Equipment Based on Wireless Sensors Using a Bayesian Algorithm

Authors: Vahid Bayrami Rad

Abstract:

In this research, common methods and techniques have been investigated, with a focus on intelligent fault-finding and monitoring systems in the oil industry. Remote and intelligent control methods are considered a necessity for implementing various operations in the oil industry, and benefiting from the knowledge extracted from the vast amounts of data generated, with the help of data mining algorithms, is a practical way to speed up the operational processes for monitoring and troubleshooting in today's large oil companies. Therefore, by comparing data mining algorithms and examining their efficiency, structure, and how they respond under different conditions, the proposed (Bayesian) algorithm, which uses data clustering and analysis together with data evaluation through a colored Petri net, provides an applicable and dynamic model from the point of view of reliability and response time. By using this method, it is possible to achieve a dynamic and consistent model of the remote control system, prevent the occurrence of leakage in oil pipelines and refineries, and reduce costs as well as human and financial errors. The statistical data obtained from the evaluation process show an increase in reliability, availability, and speed compared to previous methods.
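
As a generic illustration of Bayesian classification applied to wireless sensor readings (not the authors' clustering/Petri-net pipeline, which is not specified in enough detail to reproduce), a naive Bayes fault classifier over a few hypothetical sensor features might look like this:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

# Hypothetical sensor records: [pressure (bar), temperature (°C), flow (l/min)],
# labelled 0 = normal operation, 1 = leakage/fault. All values are invented.
rng = np.random.default_rng(0)
normal = rng.normal([200, 60, 120], [5, 3, 8], size=(200, 3))
faulty = rng.normal([170, 75, 90], [10, 5, 15], size=(200, 3))
X = np.vstack([normal, faulty])
y = np.array([0] * 200 + [1] * 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GaussianNB().fit(X_train, y_train)   # Bayes rule with Gaussian likelihoods
print("held-out accuracy:", clf.score(X_test, y_test))
print("fault probability for one reading:", clf.predict_proba([[175, 73, 95]])[0, 1])
```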

Keywords: wireless sensors, petroleum equipment troubleshooting, Bayesian algorithm, colored Petri net, rapid miner, data mining-reliability

Procedia PDF Downloads 52
14188 Longitudinal Vibration of a Micro-Beam in a Micro-Scale Fluid Media

Authors: M. Ghanbari, S. Hossainpour, G. Rezazadeh

Abstract:

In this paper, the longitudinal vibration of a micro-beam in a micro-scale fluid medium has been investigated. The proposed mathematical model for this study is made up of a micro-beam and a micro-plate at its free end. An AC voltage is applied to the pair of piezoelectric layers on the upper and lower surfaces of the micro-beam in order to actuate it longitudinally. The whole structure is bounded between two fixed plates on its upper and lower surfaces. The micro-gap between the structure and the fixed plates is filled with fluid. Fluids behave differently at the micro-scale than at the macro-scale, so the fluid field in the gap has been modeled based on micro-polar theory. The coupled governing equations of motion of the micro-beam and the micro-scale fluid field have been derived. Due to the non-homogeneous boundary conditions, the derived equations have been transformed into an enhanced form with homogeneous boundary conditions. Using a Galerkin-based reduced-order model, the enhanced equations have been discretized over the beam and fluid domains and solved simultaneously in order to obtain the forced response of the micro-beam. The effects of the micro-polar parameters of the fluid, namely the characteristic length scale, coupling parameter, and surface parameter, on the response of the micro-beam have been studied.
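
As a generic sketch of the Galerkin reduced-order step described above (the notation is illustrative and not taken from the paper), the axial displacement is expanded in admissible shape functions and the weighted residual is forced to vanish:

\[ u(x,t) \approx \sum_{i=1}^{N} q_i(t)\,\phi_i(x), \qquad \int_0^L \phi_j(x)\,\mathcal{R}\!\left(\sum_i q_i \phi_i\right) dx = 0, \quad j = 1,\dots,N, \]

which reduces the coupled partial differential equations to a set of ordinary differential equations of the familiar form \( \mathbf{M}\ddot{\mathbf{q}} + \mathbf{C}\dot{\mathbf{q}} + \mathbf{K}\mathbf{q} = \mathbf{F}(t) \) that can be integrated for the forced response.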

Keywords: micro-polar theory, Galerkin method, MEMS, micro-fluid

Procedia PDF Downloads 169
14187 Does Citizens’ Involvement Always Improve Outcomes: Procedures, Incentives and Comparative Advantages of Public and Private Law Enforcement

Authors: Svetlana Avdasheva, Polina Kryuchkova

Abstract:

Comparative social efficiency of private and public enforcement of law is debated. This question is not of academic interest only, it is also important for the development of the legal system and regulations. Generally, involvement of ‘common citizens’ in public law enforcement is considered to be beneficial, while involvement of interest groups representatives is not. Institutional economics as well as law and economics consider the difference between public and private enforcement to be rather mechanical. Actions of bureaucrats in government agencies are assumed to be driven by the incentives linked to social welfare (or other indicator of public interest) and their own benefits. In contrast, actions of participants in private enforcement are driven by their private benefits. However administrative law enforcement may be designed in such a way that it would become driven mainly by individual incentives of alleged victims. We refer to this system as reactive public enforcement. Citizens may prefer using reactive public enforcement even if private enforcement is available. However replacement of public enforcement by reactive version of public enforcement negatively affects deterrence and reduces social welfare. We illustrate the problem of private vs pure public and private vs reactive public enforcement models with the examples of three legislation subsystems in Russia – labor law, consumer protection law and competition law. While development of private enforcement instead of public (especially in reactive public model) is desirable, replacement of both public and private enforcement by reactive model is definitely not.

Keywords: public enforcement, private complaints, legal errors, competition protection, labor law, competition law, russia

Procedia PDF Downloads 477
14186 Aggregation Scheduling Algorithms in Wireless Sensor Networks

Authors: Min Kyung An

Abstract:

In Wireless Sensor Networks which consist of tiny wireless sensor nodes with limited battery power, one of the most fundamental applications is data aggregation which collects nearby environmental conditions and aggregates the data to a designated destination, called a sink node. Important issues concerning the data aggregation are time efficiency and energy consumption due to its limited energy, and therefore, the related problem, named Minimum Latency Aggregation Scheduling (MLAS), has been the focus of many researchers. Its objective is to compute the minimum latency schedule, that is, to compute a schedule with the minimum number of timeslots, such that the sink node can receive the aggregated data from all the other nodes without any collision or interference. For the problem, the two interference models, the graph model and the more realistic physical interference model known as Signal-to-Interference-Noise-Ratio (SINR), have been adopted with different power models, uniform-power and non-uniform power (with power control or without power control), and different antenna models, omni-directional antenna and directional antenna models. In this survey article, as the problem has proven to be NP-hard, we present and compare several state-of-the-art approximation algorithms in various models on the basis of latency as its performance measure.
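
For readers unfamiliar with the physical interference model mentioned above, a transmission from node u to node v in a given timeslot succeeds under SINR when (standard formulation, not specific to any one surveyed algorithm):

\[ \mathrm{SINR}(u,v) \;=\; \frac{P_u / d(u,v)^{\alpha}}{N_0 + \sum_{w \in I} P_w / d(w,v)^{\alpha}} \;\ge\; \beta, \]

where \(P_u\) is the transmission power, \(d(\cdot,\cdot)\) the distance, \(\alpha\) the path-loss exponent, \(N_0\) the ambient noise, \(I\) the set of nodes transmitting concurrently, and \(\beta\) the SINR threshold required for successful decoding. The MLAS problem asks for the shortest schedule in which every aggregation transmission satisfies this constraint.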

Keywords: data aggregation, convergecast, gathering, approximation, interference, omni-directional, directional

Procedia PDF Downloads 214
14185 A Two-Week and Six-Month Stability of Cancer Health Literacy Classification Using the CHLT-6

Authors: Levent Dumenci, Laura A. Siminoff

Abstract:

Health literacy has been shown to predict a variety of health outcomes. Reliable identification of persons with limited cancer health literacy (LCHL) has proved questionable with existing instruments that use an arbitrary cut point along a continuum. The CHLT-6, however, uses a latent mixture modeling approach to identify persons with LCHL. The purpose of this study was to estimate the two-week and six-month stability of identifying persons with LCHL using the CHLT-6, with a discrete latent variable approach as the underlying measurement structure. Using a test-retest design, the CHLT-6 was administered to cancer patients at two-week (N=98) and six-month (N=51) intervals. The two-week and six-month latent test-retest agreements were 89% and 88%, respectively. The chance-corrected latent agreements estimated from Dumenci's latent kappa were 0.62 (95% CI: 0.41 – 0.82) and 0.47 (95% CI: 0.14 – 0.80) for the two-week and six-month intervals, respectively. High levels of latent test-retest agreement between the limited and adequate categories of the cancer health literacy construct, coupled with moderate to good levels of chance-corrected latent agreement, indicate that the CHLT-6 classification of limited versus adequate cancer health literacy is relatively stable over time. In conclusion, the measurement structure underlying the instrument allows for estimating classification errors, circumventing limitations due to the arbitrary cut-point approaches adopted by other instruments. The CHLT-6 can be used to identify persons with LCHL in oncology clinics and intervention studies and to accurately estimate treatment effectiveness.
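
For context, the latent kappa reported here is a chance-corrected agreement coefficient. In its familiar manifest form, chance-corrected agreement between two classifications is (standard definition; the latent version replaces observed proportions with model-based class-membership probabilities):

\[ \kappa = \frac{p_o - p_e}{1 - p_e}, \]

where \(p_o\) is the observed (here, latent test-retest) agreement and \(p_e\) is the agreement expected by chance from the marginal class proportions.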

Keywords: limited cancer health literacy, the CHLT-6, discrete latent variable modeling, latent agreement

Procedia PDF Downloads 164
14184 Professional Development in EFL Classroom: Motivation and Reflection

Authors: Iman Jabbar

Abstract:

Within the scope of professionalism, and in order to compete in the modern world, teachers are expected to develop their teaching skills and activities in addition to their professional knowledge. At the college level, the teacher should be able to face classroom challenges through engagement with the learning situation in order to understand the students and their needs. In our field of TESOL, the role of the English teacher is no longer restricted to teaching English texts; rather, he or she should endeavor to enhance students' skills such as communication and critical analysis. Within the literature on professionalism, there are certain strategies and tools that an English teacher should adopt to develop his or her competence and performance. Reflective practice, which is an exploratory process, is one of these strategies. Another strategy contributing to classroom development is motivation. It is crucial in students' learning, as it affects the quality of learning English in the classroom and determines success or failure as well as language achievement. This is a qualitative study grounded in the interpretive perspectives of teachers and students regarding the process of professional development. The study aims at (a) understanding how teachers at the college level conceptualize reflective practice and motivation inside the EFL classroom, and (b) exploring the methods and strategies that they implement to practice reflection and motivation. The study is based on two questions: 1. How do EFL teachers perceive and view reflection and motivation in relation to their teaching and professional development? 2. How can reflective practice and motivation be developed into practical strategies and actions in EFL teachers' professional context? The study is organized into two parts, theoretical and practical. The theoretical part reviews the literature on the concepts of reflective practice and motivation in relation to professional development by providing definitions, theoretical models, and strategies. The practical part draws on the theoretical one; however, it is the core of the study since it deals with two issues. It involves the research design, methodology, methods of data collection, sampling, and data analysis. It ends with an overall discussion of the findings and the researcher's reflections on the investigated topic. In terms of significance, the study is intended to contribute to the field of TESOL at the academic level through the selection of the topic and its investigation from theoretical and practical perspectives. Professional development is the path that leads to enhancing the quality of teaching English as a foreign or second language in a way that suits the modern trends of globalization and advanced technology.

Keywords: professional development, motivation, reflection, learning

Procedia PDF Downloads 431
14183 Case Study Analysis of 2017 European Railway Traffic Management Incident: The Application of System for Investigation of Railway Interfaces Methodology

Authors: Sanjeev Kumar Appicharla

Abstract:

This paper presents the results of the modelling and analysis of the European Railway Traffic Management (ERTMS) safety-critical incident to raise awareness of biases in the systems engineering process on the Cambrian Railway in the UK using the RAIB 17/2019 as a primary input. The RAIB, the UK independent accident investigator, published the Report- RAIB 17/2019 giving the details of their investigation of the focal event in the form of immediate cause, causal factors, and underlying factors and recommendations to prevent a repeat of the safety-critical incident on the Cambrian Line. The Systems for Investigation of Railway Interfaces (SIRI) is the methodology used to model and analyze the safety-critical incident. The SIRI methodology uses the Swiss Cheese Model to model the incident and identify latent failure conditions (potentially less than adequate conditions) by means of the management oversight and risk tree technique. The benefits of the systems for investigation of railway interfaces methodology (SIRI) are threefold: first is that it incorporates the “Heuristics and Biases” approach advanced by 2002 Nobel laureate in Economic Sciences, Prof Daniel Kahneman, in the management oversight and risk tree technique to identify systematic errors. Civil engineering and programme management railway professionals are aware of the role “optimism bias” plays in programme cost overruns and are aware of bow tie (fault and event tree) model-based safety risk modelling techniques. However, the role of systematic errors due to “Heuristics and Biases” is not appreciated as yet. This overcomes the problems of omission of human and organizational factors from accident analysis. Second, the scope of the investigation includes all levels of the socio-technical system, including government, regulatory, railway safety bodies, duty holders, signaling firms and transport planners, and front-line staff such that lessons are learned at the decision making and implementation level as well. Third, the author’s past accident case studies are supplemented with research pieces of evidence drawn from the practitioner's and academic researchers’ publications as well. This is to discuss the role of system thinking to improve the decision-making and risk management processes and practices in the IEC 15288 systems engineering standard and in the industrial context such as the GB railways and artificial intelligence (AI) contexts as well.

Keywords: accident analysis, AI algorithm internal audit, bounded rationality, Byzantine failures, heuristics and biases approach

Procedia PDF Downloads 182
14182 Internet of Things as a Source of Opportunities for Entrepreneurs

Authors: Svetlana Gudkova

Abstract:

The Internet of Things experiences a rapid growth bringing inevitable changes into many spheres of human activities. As the Internet has changed the social and business landscape, IoT as its extension, can bring much more profound changes in economic value creation and competitiveness of the economies. It has been already recognized as the next industrial revolution. However, the development of IoT is in a great extent stimulated by the entrepreneurial activity. To expand and reach its full potential it requires proactive entrepreneurs, who explore the potential and create innovative ideas pushing the boundaries of IoT technologies' application further. The goal of the research is to analyze, how entrepreneurs utilize the opportunities created by IoT and how do they stimulate the development of IoT through discovering of new ways of generating economic value and creating opportunities, which attract other entrepreneurs. The qualitative research methods have been applied to prepare the case studies. Entrepreneurs are recognized as an engine of economic growth. They introduce innovative products and services into the market through the creation of a new combination of the existing resources and utilizing new knowledge. Entrepreneurs not only create economic value but what is more important, they challenge the existing business models and invent new ways of value creation. Through identification and exploitation of entrepreneurial opportunities, they create new opportunities for other entrepreneurs. It makes the industry more attractive to other profit/innovation-driven start-ups. IoT creates numerous opportunities for entrepreneurs in the different industries. Smart cities, healthcare, manufacturing, retail, agriculture, smart vehicles and smart buildings benefit a lot from IoT-based breakthrough innovations introduced by entrepreneurs. They reinvented successfully the business models and created new entrepreneurial opportunities for other start-ups to introduce next innovations.

Keywords: entrepreneurship, internet of things, breakthrough innovations, start-ups

Procedia PDF Downloads 188
14181 Particle Filter Supported with the Neural Network for Aircraft Tracking Based on Kernel and Active Contour

Authors: Mohammad Izadkhah, Mojtaba Hoseini, Alireza Khalili Tehrani

Abstract:

In this paper, we present a new method for tracking flying targets in color video sequences based on contour and kernel information. The aim of this work is to overcome the problem of losing the target under changing light, large displacement, changing speed, and occlusion. The proposed method consists of three steps: estimating the target location with a particle filter, segmenting the target region using a neural network, and finding the exact contours with a greedy snake algorithm. In the proposed method, we use both region and contour information to create the target candidate model, and this model is dynamically updated during tracking. To avoid the accumulation of errors when updating, the target region is given to a perceptron neural network to separate the target from the background. Its output is then used for the exact calculation of the size and center of the target. It is also used as the initial contour for the greedy snake algorithm to find the target's exact edge. The proposed algorithm has been tested on a database that contains many challenges, such as the high speed and agility of aircraft, background clutter, occlusions, and camera movement. The experimental results show that the use of the neural network increases the accuracy of tracking and segmentation.
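
A minimal sketch of the particle-filter localization step, shown as a generic bootstrap filter with a constant-velocity motion model and a placeholder appearance likelihood; the neural-network segmentation and greedy-snake stages described above are not reproduced here, and all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500                                    # number of particles
particles = rng.normal([160.0, 120.0, 0.0, 0.0], [20, 20, 2, 2], size=(N, 4))  # x, y, vx, vy
weights = np.full(N, 1.0 / N)

def appearance_likelihood(xy, frame=None):
    """Placeholder: score how well each candidate position matches the target model
    (a color-histogram/kernel similarity would be used in the real tracker)."""
    target = np.array([200.0, 150.0])      # invented 'true' position for illustration
    return np.exp(-np.sum((xy - target) ** 2, axis=1) / (2 * 25.0 ** 2))

def step(particles, weights, frame=None):
    # Predict: constant-velocity motion plus process noise.
    particles[:, :2] += particles[:, 2:]
    particles += rng.normal(0, [3, 3, 1, 1], size=particles.shape)
    # Update: re-weight particles by the appearance likelihood.
    weights = weights * appearance_likelihood(particles[:, :2], frame)
    weights /= weights.sum()
    # Resample when the effective sample size drops too low.
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)
    estimate = np.average(particles[:, :2], axis=0, weights=weights)
    return particles, weights, estimate

for _ in range(20):
    particles, weights, estimate = step(particles, weights)
print("estimated target center:", estimate)
```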

Keywords: video tracking, particle filter, greedy snake, neural network

Procedia PDF Downloads 328
14180 Faster, Lighter, More Accurate: A Deep Learning Ensemble for Content Moderation

Authors: Arian Hosseini, Mahmudul Hasan

Abstract:

To address the increasing need for efficient and accurate content moderation, we propose a lightweight deep classification ensemble structure. Our approach is based on a combination of simple visual features designed for high-accuracy classification of violent content with low false positives. Our ensemble architecture utilizes a set of lightweight models with narrowed-down color features, and we apply it to both images and videos. We evaluated our approach using a large dataset of explosion and blast content and compared its performance to popular deep learning models such as ResNet-50. Our evaluation results demonstrate significant improvements in prediction accuracy, while benefiting from 7.64x faster inference and lower computation cost. Although our approach is tailored to explosion detection, it can be applied to other similar content moderation and violence detection use cases as well. Based on our experiments, we propose a "think small, think many" philosophy for classification scenarios: transforming a single, large, monolithic deep model into a verification-based ensemble of multiple small, simple, and lightweight models with narrowed-down visual features may lead to predictions with higher accuracy.
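
A minimal sketch of the "think small, think many" idea as a verification cascade, assuming each small model exposes a probability-style score; the structure is illustrative only and does not reproduce the paper's specific features or models.

```python
from typing import Callable, Sequence
import numpy as np

def cascade_predict(models: Sequence[Callable[[np.ndarray], float]],
                    features: np.ndarray,
                    threshold: float = 0.5) -> bool:
    """Flag content only if every lightweight verifier agrees.

    Each model returns a score in [0, 1] for the violent class; any model
    scoring below the threshold rejects the sample and stops the cascade,
    which keeps false positives and average inference cost low.
    """
    for model in models:
        score = model(features)
        if score < threshold:   # early exit on a confident rejection
            return False
    return True
```

The design choice is that cheap rejections happen early, so most benign content never reaches the later verifiers.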

Keywords: deep classification, content moderation, ensemble learning, explosion detection, video processing

Procedia PDF Downloads 31
14179 A Comparative Study of Three Major Performance Testing Tools

Authors: Abdulaziz Omar Alsadhan, Mohd Mudasir Shafi

Abstract:

Performance testing is done to prove the reliability of a software product. A number of tools available on the market are used to perform performance testing. In this paper, we present a comparative study of the three most commonly used performance testing tools. These tools cover the major share of the performance testing market and are widely used. We compared the tools on five evaluation parameters: user-friendliness, portability, tool support, compatibility, and cost. The conclusion provided at the end of the paper is based on our study and does not endorse any tool or company.

Keywords: software development, software testing, quality assurance, performance testing, load runner, rational testing, silk performer

Procedia PDF Downloads 592
14178 Managing Data from One Hundred Thousand Internet of Things Devices Globally for Mining Insights

Authors: Julian Wise

Abstract:

Newcrest Mining is one of the world's top five gold and rare earth mining organizations by production, reserves, and market capitalization. This paper elaborates on the data acquisition processes employed by Newcrest, in collaboration with the Fortune 500-listed organization Insight Enterprises, to standardize machine learning solutions that process data from over one hundred thousand distributed Internet of Things (IoT) devices located at mine sites globally. Through the use of cloud software architecture and edge computing, these technological developments enable standardized machine learning applications to influence the strategic optimization of mineral processing. Target objectives of the machine learning optimizations include time savings in mineral processing, production efficiencies, risk identification, and increased production throughput. The data acquired for predictive modelling are processed through edge computing and collectively stored within a data lake. Involvement in this digital transformation has necessitated a standardized software architecture to manage the machine learning models submitted by vendors, ensuring effective automation and continuous improvement of the mineral process models. Operating at scale, the system processes hundreds of gigabytes of data per day from distributed mine sites across the globe, for the purposes of increased worker safety and improved production efficiency through big data applications.
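
A minimal, hypothetical sketch of the kind of edge-side aggregation and data-lake landing described above: raw sensor readings are reduced to windowed statistics before being written to a partitioned data-lake path for downstream models. Paths, field names, and the file layout are illustrative assumptions, not Newcrest's actual architecture.

```python
import json
import statistics
from pathlib import Path

def aggregate_window(readings, device_id, window_start):
    """Summarise one time window of numeric sensor readings at the edge."""
    return {
        "device_id": device_id,
        "window_start": window_start,
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "min": min(readings),
        "max": max(readings),
    }

def land_in_data_lake(record, lake_root="datalake/raw/sensors"):
    """Append the aggregated record as JSON lines, partitioned by device."""
    path = Path(lake_root) / record["device_id"]
    path.mkdir(parents=True, exist_ok=True)
    with open(path / f"{record['window_start']}.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
```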

Keywords: mineral technology, big data, machine learning operations, data lake

Procedia PDF Downloads 97
14177 Hygro-Thermal Modelling of Timber Decks

Authors: Stefania Fortino, Petr Hradil, Timo Avikainen

Abstract:

Timber bridges have an excellent environmental performance, are economical, relatively easy to build, and can have a long service life. However, the durability of these bridges is the main problem because of their exposure to outdoor climate conditions. Moisture content accumulated in wood over long periods, in combination with certain temperatures, may create conditions suitable for timber decay. In addition, moisture content variations affect the structural integrity, serviceability, and loading capacity of timber bridges. Therefore, monitoring the moisture content in wood is important for the durability of the material and of the whole superstructure. The measurements obtained by the usual sensor-based techniques provide hygro-thermal data only at specific locations of the wood components. In this context, monitoring can be assisted by numerical modelling to obtain more information on the hygro-thermal response of the bridges. This work presents a hygro-thermal model based on a multi-phase moisture transport theory to predict the distribution of moisture content, relative humidity, and temperature in wood. Below the fibre saturation point, the multi-phase theory simulates three phenomena in cellular wood during moisture transfer: the diffusion of water vapour in the pores, the sorption of bound water, and the diffusion of bound water in the cell walls. In the multi-phase model, the two water phases are kept separate, and the coupling between them is defined through a sorption rate. Furthermore, an average between the temperature-dependent adsorption and desorption isotherms is used. In previous works by some of the authors, this approach was found to be very suitable for studying moisture transport in uncoated and coated stress-laminated timber decks. Compared to those works, the hygro-thermal fluxes on the external surfaces now include the influence of absorbed solar radiation over time; consequently, the temperatures on surfaces exposed to the sun are higher, which affects the whole hygro-thermal response of the timber component. The multi-phase model, implemented in a user subroutine of the Abaqus FEM code, provides the distribution of moisture content, temperature, and relative humidity in a volume of the timber deck. As a case study, hygro-thermal data in wood are collected from the ongoing monitoring of the stress-laminated timber deck of Tapiola Bridge in Finland, based on integrated humidity-temperature sensors, and the numerical results are in good agreement with the measurements. The proposed model, used to assist monitoring, can contribute to reducing the maintenance costs of bridges as well as the cost of instrumentation, and can increase safety.
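
A minimal 1-D explicit finite-difference sketch of the two-phase coupling described above: the vapour and bound-water fields each diffuse, and a sorption-rate term drives the bound water toward an isotherm equilibrium value. This is a toy illustration under assumed placeholder coefficients and zero-flux ends, not the authors' Abaqus user subroutine.

```python
import numpy as np

def step(c_v, c_b, D_v, D_b, k_sorp, c_b_eq, dx, dt):
    """Advance vapour (c_v) and bound-water (c_b) fields by one time step."""
    lap = lambda c: (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    # Exchange term: sorption drives bound water toward isotherm equilibrium
    sorption = k_sorp * (c_b_eq - c_b)
    c_v_new = c_v + dt * (D_v * lap(c_v) - sorption)   # vapour loses to sorption
    c_b_new = c_b + dt * (D_b * lap(c_b) + sorption)   # bound water gains
    # Insulated (zero-flux) boundary conditions for this toy example
    for c in (c_v_new, c_b_new):
        c[0], c[-1] = c[1], c[-2]
    return c_v_new, c_b_new
```

In the real model, the equilibrium value would come from the averaged temperature-dependent adsorption/desorption isotherms, and the surface fluxes would include the solar radiation term mentioned in the abstract.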

Keywords: moisture content, multi-phase models, solar radiation, timber decks, FEM

Procedia PDF Downloads 157
14176 An Evaluation of Solubility of Wax and Asphaltene in Crude Oil for Improved Flow Properties Using a Copolymer Solubilized in Organic Solvent with an Aromatic Hydrocarbon

Authors: S. M. Anisuzzaman, Sariah Abang, Awang Bono, D. Krishnaiah, N. M. Ismail, G. B. Sandrison

Abstract:

Wax and asphaltene are high-molecular-weight compounds that contribute to the stability of crude oil in a dispersed state. Transportation of crude oil along pipelines from the oil rig to the refineries causes temperature fluctuations that lead to the coagulation of wax and the flocculation of asphaltenes. This paper focuses on preventing the deposition of wax and asphaltene precipitates on the inner surface of pipelines by using a wax inhibitor and an asphaltene dispersant. The novelty of this prevention method is the combination of three substances: a wax inhibitor dissolved in a wax inhibitor solvent and an asphaltene solvent, namely, ethylene-vinyl acetate (EVA) copolymer dissolved in methylcyclohexane (MCH) and toluene (TOL), to inhibit the precipitation and deposition of wax and asphaltene. The objective of this paper was to optimize the percentage composition of each component in this inhibitor so as to maximize the viscosity reduction of the crude oil. The optimization was divided into two stages: a laboratory experimental stage, in which the viscosity of crude oil samples containing inhibitors of different component compositions was tested at decreasing temperatures, and a data optimization stage, in which response surface methodology (RSM) was used to build an optimization model. The experimental results showed that the combination of 50% EVA + 25% MCH + 25% TOL gave a maximum viscosity reduction of 67%, while the RSM model indicated that the combination of 57% EVA + 20.5% MCH + 22.5% TOL gave a maximum viscosity reduction of up to 61%.
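
A minimal sketch of the RSM step, assuming illustrative (not the paper's) measurements: a second-order response surface is fitted for viscosity reduction as a function of the EVA and MCH fractions, with TOL taken as the remainder of the mixture, and the fitted surface is scanned for the best predicted composition.

```python
import numpy as np

def fit_quadratic_surface(eva, mch, reduction):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2."""
    X = np.column_stack([np.ones_like(eva), eva, mch, eva**2, mch**2, eva * mch])
    coeffs, *_ = np.linalg.lstsq(X, reduction, rcond=None)
    return coeffs

def predict(coeffs, eva, mch):
    return (coeffs[0] + coeffs[1] * eva + coeffs[2] * mch
            + coeffs[3] * eva**2 + coeffs[4] * mch**2 + coeffs[5] * eva * mch)

def best_mixture(coeffs, step=0.5):
    """Grid-search the mixture simplex (EVA + MCH + TOL = 100%)."""
    best = (None, -np.inf)
    for eva in np.arange(0.0, 100.0 + step, step):
        for mch in np.arange(0.0, 100.0 - eva + step, step):
            y = predict(coeffs, eva, mch)        # TOL = 100 - eva - mch
            if y > best[1]:
                best = ((eva, mch, 100.0 - eva - mch), y)
    return best
```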

Keywords: asphaltene, ethylene-vinyl acetate, methylcyclohexane, toluene, wax

Procedia PDF Downloads 401
14175 A Fuzzy Decision Making Approach for Supplier Selection in Healthcare Industry

Authors: Zeynep Sener, Mehtap Dursun

Abstract:

Supplier evaluation and selection is one of the most important components of an effective supply chain management system. Due to the expanding competition in healthcare, selecting the right medical device suppliers offers great potential for increasing quality while decreasing costs. This paper proposes a fuzzy decision making approach for medical supplier selection. A real-world medical device supplier selection problem is presented to illustrate the application of the proposed decision methodology.
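
A minimal sketch of generic fuzzy supplier scoring with triangular fuzzy numbers; it illustrates the flavour of fuzzy decision making with hypothetical criteria and ratings, and is not the paper's specific fuzzy multiple objective programming model.

```python
def fuzzy_weighted_score(ratings, weights):
    """ratings and weights are triangular fuzzy numbers (l, m, u) per criterion."""
    l = sum(r[0] * w[0] for r, w in zip(ratings, weights))
    m = sum(r[1] * w[1] for r, w in zip(ratings, weights))
    u = sum(r[2] * w[2] for r, w in zip(ratings, weights))
    return (l + m + u) / 3.0   # simple centroid-style defuzzification for ranking

# Hypothetical ratings on two criteria (e.g., quality and delivery) per supplier
suppliers = {
    "Supplier A": [(5, 7, 9), (3, 5, 7)],
    "Supplier B": [(7, 9, 10), (1, 3, 5)],
}
weights = [(0.5, 0.7, 0.9), (0.3, 0.5, 0.7)]

ranking = sorted(suppliers,
                 key=lambda s: fuzzy_weighted_score(suppliers[s], weights),
                 reverse=True)
print(ranking)
```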

Keywords: fuzzy decision making, fuzzy multiple objective programming, medical supply chain, supplier selection

Procedia PDF Downloads 438