Search results for: fuel cost function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11888

Search results for: fuel cost function

518 A Case Study on the Estimation of Design Discharge for Flood Management in Lower Damodar Region, India

Authors: Susmita Ghosh

Abstract:

Catchment area of Damodar River, India experiences seasonal rains due to the south-west monsoon every year and depending upon the intensity of the storms, floods occur. During the monsoon season, the rainfall in the area is mainly due to active monsoon conditions. The upstream reach of Damodar river system has five dams store the water for utilization for various purposes viz, irrigation, hydro-power generation, municipal supplies and last but not the least flood moderation. But, in the downstream reach of Damodar River, known as Lower Damodar region, is severely and frequently suffering from flood due to heavy monsoon rainfall and also release from upstream reservoirs. Therefore, an effective flood management study is required to know in depth the nature and extent of flood, water logging, and erosion related problems, affected area, and damages in the Lower Damodar region, by conducting mathematical model study. The design flood or discharge is needed to decide to assign the respective model for getting several scenarios from the simulation runs. The ultimate aim is to achieve a sustainable flood management scheme from the several alternatives. there are various methods for estimating flood discharges to be carried through the rivers and their tributaries for quick drainage from inundated areas due to drainage congestion and excess rainfall. In the present study, the flood frequency analysis is performed to decide the design flood discharge of the study area. This, on the other hand, has limitations in respect of availability of long peak flood data record for determining long type of probability density function correctly. If sufficient past records are available, the maximum flood on a river with a given frequency can safely be determined. The floods of different frequency for the Damodar has been calculated by five candidate distributions i.e., generalized extreme value, extreme value-I, Pearson type III, Log Pearson and normal. 
Annual peak discharge series are available at Durgapur barrage for the period of 1979 to 2013 (35 years). The available series are subjected to frequency analysis. The primary objective of the flood frequency analysis is to relate the magnitude of extreme events to their frequencies of occurrence through the use of probability distributions. The design flood for return periods of 10, 15 and 25 years return period at Durgapur barrage are estimated by flood frequency method. It is necessary to develop flood hydrographs for the above floods to facilitate the mathematical model studies to find the depth and extent of inundation etc. Null hypothesis that the distributions fit the data at 95% confidence is checked with goodness of fit test, i.e., Chi Square Test. It is revealed from the goodness of fit test that the all five distributions do show a good fit on the sample population and is therefore accepted. However, it is seen that there is considerable variation in the estimation of frequency flood. It is therefore considered prudent to average out the results of these five distributions for required frequencies. The inundated area from past data is well matched using this flood.

Keywords: design discharge, flood frequency, goodness of fit, sustainable flood management

Procedia PDF Downloads 201
517 A Realist Review of Influences of Community-Based Interventions on Noncommunicable Disease Risk Behaviors

Authors: Ifeyinwa Victor-Uadiale, Georgina Pearson, Sophie Witter, D. Reidpath

Abstract:

Introduction: Smoking, alcohol misuse, unhealthy diet, and physical inactivity are the primary drivers of noncommunicable diseases (NCD), including cardiovascular diseases, cancers, respiratory diseases, and diabetes, worldwide. Collectively, these diseases are the leading cause of all global deaths, most of which are premature, affecting people between 30 and 70 years. Empirical evidence suggests that these risk behaviors can be modified by community-based interventions (CBI). However, there is little insight into the mechanisms and contextual factors of successful community interventions that impact risk behaviours for chronic diseases. This study examined “Under what circumstances, for whom, and how, do community-based interventions modify smoking, alcohol use, unhealthy diet, and physical inactivity among adults”. Adopting the Capability (C), Opportunity (O), Motivation (M), Behavior (B) (COM-B) framework for behaviour change, it sought to: (1) identify the mechanisms through which CBIs could reduce tobacco use and alcohol consumption and increase physical activity and the consumption of healthy diets and (2) examine the contextual factors that trigger the impact of these mechanisms on these risk behaviours among adults. Methods: Pawson’s realist review method was used to examine the literature. Empirical evidence and theoretical understanding were combined to develop a realist program theory that explains how CBIs influence NCD risk behaviours. Documents published between 2002 and 2020 were systematically searched in five electronic databases (CINAHL, Cochrane Library, Medline, ProQuest Central, and PsycINFO). They were included if they reported on community-based interventions aimed at cardiovascular diseases, cancers, respiratory diseases, and diabetes in a global context; and had an outcome targeted at smoking, alcohol, physical activity, and diet. Findings: Twenty-nine scientific documents were retrieved and included in the review. 
Over half of them (n = 18; 62%) focused on three of the four risk behaviours investigated in this review. The review identified four mechanisms: capability, opportunity, motivation, and social support that are likely to change the dietary and physical activity behaviours in adults given certain contexts. There were weak explanations of how the identified mechanisms could likely change smoking and alcohol consumption habits. In addition, eight contextual factors that may affect how these mechanisms impact physical activity and dietary behaviours were identified: suitability to work and family obligations, risk status awareness, socioeconomic status, literacy level, perceived need, availability and access to resources, culture, and group format. Conclusion: The findings suggest that CBIs are likely to improve the physical activity and dietary habits of adults if the intervention function seeks to educate, incentivize, change the environment, and model the right behaviours. The review applies and advances theory, realist research, and the design and implementation of community-based interventions for NCD prevention.

Keywords: community-based interventions, noncommunicable disease, realist program theory, risk behaviors

Procedia PDF Downloads 93
516 Anti-Bacterial Activity Studies of Derivatives of 6β-Hydroxy Betunolic Acid against Selected Stains of Gram (+) and Gram (-) Bacteria

Authors: S. Jayasinghe, W. G. D. Wickramasingha, V. Karunaratne, D. N. Karunaratne, A. Ekanayake

Abstract:

Multi-drug resistant microbial pathogens are a serious global health problem, and hence, there is an urgent necessity for discovering new drug therapeutics. However, finding alternatives is a one of the biggest challenges faced by the global drug industry due to the spiraling high cost and serious side effects associated with modern medicine. On the other hand, plants and their secondary metabolites can be considered as good sources of scaffolds to provide structurally diverse bioactive compounds as potential therapeutic agents. 6β-hydroxy betunolic acid is a triterpenoid isolated from bark of Schumacheria castaneifolia which is an endemic plant to Sri Lanka which has shown antibacterial activity against both Staphylococcus aureus (ATCC 29213) and methicillin-resistant S. aureus with Minimum Inhibition Concentration (MIC) of 16 µg/ml. The objective of this study was to determine the anti-bacterial activity for the derivatives of 6β- hydroxy betunolic acid against standard strains of Staphylococcus aureus (ATCC 29213 and ATCC 25923), Enterococcus faecalis (ATCC 29212), Escherichia coli (ATCC 35218 and ATCC 25922), Pseudomonas aeruginosa (ATCC 27853), carbepenemas produce Kebsiella pneumonia (ATCC BAA 1705) and carbepenemas non produce Kebsiella pneumonia (ATCC BAA 1706) and four stains of clinically isolated methicillin resistance S. aureus and Acinetobacter. Structural analogues of 6β-hydroxy betunolic acid were synthesized by modifying the carbonyl group at C-3 to obtain olefin and oxime, the hydroxyl group at C-6 position to a ketone, the carboxylic acid at C-17 to obtain amide and halo ester and the olefin group at C-20 position to obtain epoxide. Chemical structures of the synthesized analogues were confirmed with spectroscopic data and antibacterial activity was determined through broth micro dilution assay. 
Results revealed that 6β- hydroxy betunolic acid shows significant antibacterial activity only against the Gram positive strains and it was inactive against all the tested Gram negative strains for the tested concentration range. However, structural modifications into oxime and olefin at C-3, ketone at C-6 and epoxide at C-20 decreased its antibacterial activity against the gram positive organisms and it was totally lost with the both modifications at C-17 into amide and ester. These results concluded that the antibacterial activity of 6β- hydroxy betunolic acid and derivatives is predominantly depending on the cell wall difference of the bacteria and the presence of carboxylic acid at C-17 is highly important for the antibacterial activity against Gram positive organisms.

Keywords: antibacterial activity, 6β- hydroxy betunolic acid, broth micro dilution assay, structure activity relationship

Procedia PDF Downloads 126
515 Genetically Informed Precision Drug Repurposing for Rheumatoid Arthritis

Authors: Sahar El Shair, Laura Greco, William Reay, Murray Cairns

Abstract:

Background: Rheumatoid arthritis (RA) is a chronic, systematic, inflammatory, autoimmune disease that involves damages to joints and erosions to the associated bones and cartilage, resulting in reduced physical function and disability. RA is a multifactorial disorder influenced by heterogenous genetic and environmental factors. Whilst different medications have proven successful in reducing inflammation associated with RA, they often come with significant side effects and limited efficacy. To address this, the novel pharmagenic enrichment score (PES) algorithm was tested in self-reported RA patients from the UK Biobank (UKBB), which is a cohort of predominantly European ancestry, and identified individuals with a high genetic risk in clinically actionable biological pathways to identify novel opportunities for precision interventions and drug repurposing to treat RA. Methods and materials: Genetic association data for rheumatoid arthritis was derived from publicly available genome-wide association studies (GWAS) summary statistics (N=97173). The PES framework exploits competitive gene set enrichment to identify pathways that are associated with RA to explore novel treatment opportunities. This data is then integrated into WebGestalt, Drug Interaction database (DGIdb) and DrugBank databases to identify existing compounds with existing use or potential for repurposed use. The PES for each of these candidates was then profiled in individuals with RA in the UKBB (Ncases = 3,719, Ncontrols = 333,160). Results A total of 209 pathways with known drug targets after multiple testing correction were identified. Several pathways, including interferon gamma signaling and TID pathway (which relates to a chaperone that modulates interferon signaling), were significantly associated with self-reported RA in the UKBB when adjusting for age, sex, assessment centre month and location, RA polygenic risk and 10 principal components. 
These pathways have a major role in RA pathogenesis, including autoimmune attacks against certain citrullinated proteins, synovial inflammation, and bone loss. Encouragingly, many also relate to the mechanism of action of existing RA medications. The analyses also revealed statistically significant association between RA polygenic scores and self-reported RA with individual PES scorings, highlighting the potential utility of the PES algorithm in uncovering additional genetic insights that could aid in the identification of individuals at risk for RA and provide opportunities for more targeted interventions. Conclusions In this study, pharmacologically annotated genetic risk was explored through the PES framework to overcome inter-individual heterogeneity and enable precision drug repurposing in RA. The results showed a statistically significant association between RA polygenic scores and self-reported RA and individual PES scorings for 3,719 RA patients. Interestingly, several enriched PES pathways were targeted by already approved RA drugs. In addition, the analysis revealed genetically supported drug repurposing opportunities for future treatment of RA with a relatively safe profile.

Keywords: rheumatoid arthritis, precision medicine, drug repurposing, system biology, bioinformatics

Procedia PDF Downloads 76
514 Eco-Nanofiltration Membranes: Nanofiltration Membrane Technology Utilization-Based Fiber Pineapple Leaves Waste as Solutions for Industrial Rubber Liquid Waste Processing and Fertilizer Crisis in Indonesia

Authors: Andi Setiawan, Annisa Ulfah Pristya

Abstract:

Indonesian rubber plant area reached 2.9 million hectares with productivity reached 1.38 million. High rubber productivity is directly proportional to the amount of waste produced rubber processing industry. Rubber industry would produce a negative impact on the rubber industry in the form of environmental pollution caused by waste that has not been treated optimally. Rubber industrial wastewater containing high-nitrogen compounds (nitrate and ammonia) and phosphate compounds which cause water pollution and odor problems due to the high ammonia content. On the other hand, demand for NPK fertilizers in Indonesia continues to increase from year to year and in need of ammonia and phosphate as raw material. Based on domestic demand, it takes a year to 400,000 tons of ammonia and Indonesia imports 200,000 tons of ammonia per year valued at IDR 4.2 trillion. As well, the lack of phosphoric acid to be imported from Jordan, Morocco, South Africa, the Philippines, and India as many as 225 thousand tons per year. During this time, the process of wastewater treatment is generally done with a rubber on the tank to contain the waste and then precipitated, filtered and the rest released into the environment. However, this method is inefficient and thus require high energy costs because through many stages before producing clean water that can be discharged into the river. On the other hand, Indonesia has the potential of pineapple fruit can be harvested throughout the year in all of Indonesia. In 2010, production reached 1,406,445 tons of pineapple in Indonesia or about 9.36 percent of the total fruit production in Indonesia. Increased productivity is directly proportional to the amount of pineapple waste pineapple leaves are kept continuous and usually just dumped in the ground or disposed of with other waste at the final disposal. Through Eco-Nanofiltration Membrane-Based Fiber Pineapple leaves Waste so that environmental problems can be solved efficiently. 
Nanofiltration is a process that uses pressure as a driving force that can be either convection or diffusion of each molecule. Nanofiltration membranes that can split water to nano size so as to separate the waste processed residual economic value that N and P were higher as a raw material for the manufacture of NPK fertilizer to overcome the crisis in Indonesia. The raw materials were used to manufacture Eco-Nanofiltration Membrane is cellulose from pineapple fiber which processed into cellulose acetate which is biodegradable and only requires a change of the membrane every 6 months. Expected output target is Green eco-technology so with nanofiltration membranes not only treat waste rubber industry in an effective, efficient and environmentally friendly but also lowers the cost of waste treatment compared to conventional methods.

Keywords: biodegradable, cellulose diacetate, fertilizers, pineapple, rubber

Procedia PDF Downloads 446
513 Method for Requirements Analysis and Decision Making for Restructuring Projects in Factories

Authors: Rene Hellmuth

Abstract:

The requirements for the factory planning and the building concerned have changed in the last years. Factory planning has the task of designing products, plants, processes, organization, areas, and the building of a factory. Regular restructuring gains more importance in order to maintain the competitiveness of a factory. Restrictions regarding new areas, shorter life cycles of product and production technology as well as a VUCA (volatility, uncertainty, complexity and ambiguity) world cause more frequently occurring rebuilding measures within a factory. Restructuring of factories is the most common planning case today. Restructuring is more common than new construction, revitalization and dismantling of factories. The increasing importance of restructuring processes shows that the ability to change was and is a promising concept for the reaction of companies to permanently changing conditions. The factory building is the basis for most changes within a factory. If an adaptation of a construction project (factory) is necessary, the inventory documents must be checked and often time-consuming planning of the adaptation must take place to define the relevant components to be adapted, in order to be able to finally evaluate them. The different requirements of the planning participants from the disciplines of factory planning (production planner, logistics planner, automation planner) and industrial construction planning (architect, civil engineer) come together during reconstruction and must be structured. This raises the research question: Which requirements do the disciplines involved in the reconstruction planning place on a digital factory model? A subordinate research question is: How can model-based decision support be provided for a more efficient design of the conversion within a factory? 
Because of the high adaptation rate of factories and its building described above, a methodology for rescheduling factories based on the requirements engineering method from software development is conceived and designed for practical application in factory restructuring projects. The explorative research procedure according to Kubicek is applied. Explorative research is suitable if the practical usability of the research results has priority. Furthermore, it will be shown how to best use a digital factory model in practice. The focus will be on mobile applications to meet the needs of factory planners on site. An augmented reality (AR) application will be designed and created to provide decision support for planning variants. The aim is to contribute to a shortening of the planning process and model-based decision support for more efficient change management. This requires the application of a methodology that reduces the deficits of the existing approaches. The time and cost expenditure are represented in the AR tablet solution based on a building information model (BIM). Overall, the requirements of those involved in the planning process for a digital factory model in the case of restructuring within a factory are thus first determined in a structured manner. The results are then applied and transferred to a construction site solution based on augmented reality.

Keywords: augmented reality, digital factory model, factory planning, restructuring

Procedia PDF Downloads 134
512 Barriers to Entry: The Pitfall of Charter School Accountability

Authors: Ian Kingsbury

Abstract:

The rapid expansion of charter schools (public schools that receive government but do not face the same regulations as traditional public schools) over the preceding two decades has raised concerns over the potential for graft and fraud. These concerns are largely justified: Incidents of financial crime and mismanagement are not unheard of, and the charter sector has become a darling of hedge fund managers. In response, several states have strengthened their charter school regulatory regimes. Imposing regulations and attempting to increase accountability seem like sensible measures, and perhaps they are necessary. However, increased regulation may come at the cost of imposing barriers to entry. Specifically, increased regulation often entails evidence for a high likelihood of fiscal solvency. That should theoretically entail access to capital in the short-term, which may systematically preclude Black or Hispanic applicants from opening charter schools. Moreover, increased regulation necessarily entails more red tape. The institutional wherewithal and the number of hours required to complete an application to open a charter school might favor those who have partnered with an education service provider, specifically a charter management organization (CMO) or education management organization (EMO). These potential barriers to entry pose a significant policy concern. Just as policymakers hope to increase the share of minority teachers and principals, they should sensibly care whether individuals who open charter schools look like the students in that school. Moreover, they might be concerned if successful applications in states with stringent regulations are overwhelmingly affiliated with education service providers. One of the original missions of charter schools was to serve as a laboratory of innovation. 
Approving only those applications affiliated with education service providers (and in effect establishing a parallel network of schools rather than a diverse marketplace of schools) undermines that mission. Data and methods: The analysis examines more than 2,000 charter school applications from 15 states. It compares the outcomes of applications from states with a strong regulatory environment (those with high scores) from NACSA-the National Association of Charter School Authorizers- to applications from states with a weak regulatory environment (those with a low NACSA score). If the hypothesis is correct, applicants not affiliated with an ESP are more likely to be rejected in high-regulation states compared to those affiliated with an ESP, and minority candidates not affiliated with an education service provider (ESP) are particularly likely to be rejected. Initial returns indicate that the hypothesis holds. More applications in low NASCA-scoring Arizona come from individuals not associated with an ESP, and those individuals are as likely to be accepted as those affiliated with an ESP. On the other hand, applicants in high-NACSA scoring Indiana and Ohio are more than 20 percentage points more likely to be accepted if they are affiliated with an ESP, and the effect is particularly pronounced for minority candidates. These findings should spur policymakers to consider the drawbacks of charter school accountability and consider accountability regimes that do not impose barriers to entry.

Keywords: accountability, barriers to entry, charter schools, choice

Procedia PDF Downloads 159
511 Robotic Process Automation in Accounting and Finance Processes: An Impact Assessment of Benefits

Authors: Rafał Szmajser, Katarzyna Świetla, Mariusz Andrzejewski

Abstract:

Robotic process automation (RPA) is a technology of repeatable business processes performed using computer programs, robots that simulate the work of a human being. This approach assumes replacing an existing employee with the use of dedicated software (software robots) to support activities, primarily repeated and uncomplicated, characterized by a low number of exceptions. RPA application is widespread in modern business services, particularly in the areas of Finance, Accounting and Human Resources Management. By utilizing this technology, the effectiveness of operations increases while reducing workload, minimizing possible errors in the process, and as a result, bringing measurable decrease in the cost of providing services. Regardless of how the use of modern information technology is assessed, there are also some doubts as to whether we should replace human activities in the implementation of the automation in business processes. After the initial awe for the new technological concept, a reflection arises: to what extent does the implementation of RPA increase the efficiency of operations or is there a Business Case for implementing it? If the business case is beneficial, in which business processes is the greatest potential for RPA? A closer look at these issues was provided by in this research during which the respondents’ view of the perceived advantages resulting from the use of robotization and automation in financial and accounting processes was verified. As a result of an online survey addressed to over 500 respondents from international companies, 162 complete answers were returned from the most important types of organizations in the modern business services industry, i.e. Business or IT Process Outsourcing (BPO/ITO), Shared Service Centers (SSC), Consulting/Advisory and their customers. Answers were provided by representatives of the positions in their organizations: Members of the Board, Directors, Managers and Experts/Specialists. 
The structure of the survey allowed the respondents to supplement the survey with additional comments and observations. The results formed the basis for the creation of a business case calculating tangible benefits associated with the implementation of automation in the selected financial processes. The results of the statistical analyses carried out with regard to revenue growth confirmed the correctness of the hypothesis that there is a correlation between job position and the perception of the impact of RPA implementation on individual benefits. Second hypothesis (H2) that: There is a relationship between the kind of company in the business services industry and the reception of the impact of RPA on individual benefits was thus not confirmed. Based results of survey authors performed simulation of business case for implementation of RPA in selected Finance and Accounting Processes. Calculated payback period was diametrically different ranging from 2 months for the Account Payables process with 75% savings and in the extreme case for the process Taxes implementation and maintenance costs exceed the savings resulting from the use of the robot.

Keywords: automation, outsourcing, business process automation, process automation, robotic process automation, RPA, RPA business case, RPA benefits

Procedia PDF Downloads 137
510 A Fermatean Fuzzy MAIRCA Approach for Maintenance Strategy Selection of Process Plant Gearbox Using Sustainability Criteria

Authors: Soumava Boral, Sanjay K. Chaturvedi, Ian Howard, Kristoffer McKee, V. N. A. Naikan

Abstract:

Due to strict regulations from government to enhance the possibilities of sustainability practices in industries, and noting the advances in sustainable manufacturing practices, it is necessary that the associated processes are also sustainable. Maintenance of large scale and complex machines is a pivotal task to maintain the uninterrupted flow of manufacturing processes. Appropriate maintenance practices can prolong the lifetime of machines, and prevent associated breakdowns, which subsequently reduces different cost heads. Selection of the best maintenance strategies for such machines are considered as a burdensome task, as they require the consideration of multiple technical criteria, complex mathematical calculations, previous fault data, maintenance records, etc. In the era of the fourth industrial revolution, organizations are rapidly changing their way of business, and they are giving their utmost importance to sensor technologies, artificial intelligence, data analytics, automations, etc. In this work, the effectiveness of several maintenance strategies (e.g., preventive, failure-based, reliability centered, condition based, total productive maintenance, etc.) related to a large scale and complex gearbox, operating in a steel processing plant is evaluated in terms of economic, social, environmental and technical criteria. As it is not possible to obtain/describe some criteria by exact numerical values, these criteria are evaluated linguistically by cross-functional experts. Fuzzy sets are potential soft-computing technique, which has been useful to deal with linguistic data and to provide inferences in many complex situations. To prioritize different maintenance practices based on the identified sustainable criteria, multi-criteria decision making (MCDM) approaches can be considered as potential tools. 
Multi-Attributive Ideal Real Comparative Analysis (MAIRCA) is a recent addition in the MCDM family and has proven its superiority over some well-known MCDM approaches, like TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and ELECTRE (ELimination Et Choix Traduisant la REalité). It has a simple but robust mathematical approach, which is easy to comprehend. On the other side, due to some inherent drawbacks of Intuitionistic Fuzzy Sets (IFS) and Pythagorean Fuzzy Sets (PFS), recently, the use of Fermatean Fuzzy Sets (FFSs) has been proposed. In this work, we propose the novel concept of FF-MAIRCA. We obtain the weights of the criteria by experts’ evaluation and use them to prioritize the different maintenance practices according to their suitability by FF-MAIRCA approach. Finally, a sensitivity analysis is carried out to highlight the robustness of the approach.

Keywords: Fermatean fuzzy sets, Fermatean fuzzy MAIRCA, maintenance strategy selection, sustainable manufacturing, MCDM

Procedia PDF Downloads 138
509 Mathematical Modeling of Avascular Tumor Growth and Invasion

Authors: Meitham Amereh, Mohsen Akbari, Ben Nadler

Abstract:

Cancer has been recognized as one of the most challenging problems in biology and medicine. Aggressive tumors are a lethal type of cancers characterized by high genomic instability, rapid progression, invasiveness, and therapeutic resistance. Their behavior involves complicated molecular biology and consequential dynamics. Although tremendous effort has been devoted to developing therapeutic approaches, there is still a huge need for new insights into the dark aspects of tumors. As one of the key requirements in better understanding the complex behavior of tumors, mathematical modeling and continuum physics, in particular, play a pivotal role. Mathematical modeling can provide a quantitative prediction on biological processes and help interpret complicated physiological interactions in tumors microenvironment. The pathophysiology of aggressive tumors is strongly affected by the extracellular cues such as stresses produced by mechanical forces between the tumor and the host tissue. During the tumor progression, the growing mass displaces the surrounding extracellular matrix (ECM), and due to the level of tissue stiffness, stress accumulates inside the tumor. The produced stress can influence the tumor by breaking adherent junctions. During this process, the tumor stops the rapid proliferation and begins to remodel its shape to preserve the homeostatic equilibrium state. To reach this, the tumor, in turn, upregulates epithelial to mesenchymal transit-inducing transcription factors (EMT-TFs). These EMT-TFs are involved in various signaling cascades, which are often associated with tumor invasiveness and malignancy. In this work, we modeled the tumor as a growing hyperplastic mass and investigated the effects of mechanical stress from surrounding ECM on tumor invasion. The invasion is modeled as volume-preserving inelastic evolution. In this framework, principal balance laws are considered for tumor mass, linear momentum, and diffusion of nutrients. 
Mechanical interactions between the tumor and the ECM are modeled using a Ciarlet constitutive strain energy function, and the dissipation inequality is utilized to model the volumetric growth rate. System parameters, such as the rate of nutrient uptake and cell proliferation, are obtained experimentally. To validate the model, human glioblastoma multiforme (hGBM) tumor spheroids were incorporated inside a Matrigel/alginate composite hydrogel and injected into a microfluidic chip to mimic the tumor’s natural microenvironment. The invasion structure was analyzed by imaging the spheroid over time, and the expression of transcription factors involved in invasion was measured by immunostaining the tumor. The volumetric growth, stress distribution, and inelastic evolution of the tumors were predicted by the model. The results showed that the level of invasion is in direct correlation with the level of predicted stress within the tumor. Moreover, the invasion length measured by fluorescence imaging was shown to be related to the inelastic evolution of tumors obtained by the model.
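In generic continuum-growth notation (a sketch, not necessarily the authors' exact formulation or symbols), the principal balance laws named above can be written as:

```latex
% Sketch of the principal balance laws in a continuum growth model.
% Symbols are illustrative: \rho density, \mathbf{v} velocity,
% \Gamma growth source, \boldsymbol{\sigma} Cauchy stress,
% \mathbf{b} body force, c nutrient concentration, D diffusivity,
% \delta uptake rate.
% Mass balance with a volumetric growth source:
\dot{\rho} + \rho \,\operatorname{div}\mathbf{v} = \Gamma
% Quasi-static linear momentum balance (inertia neglected):
\operatorname{div}\boldsymbol{\sigma} + \rho\,\mathbf{b} = \mathbf{0}
% Reaction--diffusion of the nutrient with linear uptake:
\dot{c} = D\,\nabla^{2} c - \delta\, c
```

The growth source on the right-hand side of the mass balance is what the dissipation inequality constrains in models of this type.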

Keywords: cancer, invasion, mathematical modeling, microfluidic chip, tumor spheroids

Procedia PDF Downloads 111
508 Single Cell and Spatial Transcriptomics: A Beginner's Viewpoint from the Conceptual Pipeline

Authors: Leo Nnamdi Ozurumba-Dwight

Abstract:

Messenger ribonucleic acid (mRNA) molecules encode proteins. These protein-encoding mRNA molecules (which collectively constitute the transcriptome), when analyzed by RNA sequencing (RNAseq), reveal the nature of gene expression. The gene expression obtained provides clues to cellular traits and their dynamics, which can be studied in relation to function and responses. RNAseq is a practical concept in genomics, as it enables the detection and quantitative analysis of mRNA molecules. Single cell and spatial transcriptomics both present avenues for exposing the genomic characteristics of single cells and pooled cells in disease conditions such as cancer, autoimmune diseases, and hematopoietic diseases, among others, from investigated biological tissue samples. Single cell transcriptomics permits a direct assessment of each building unit of a tissue (the cell) during diagnosis and molecular gene expression studies. A typical technique to achieve this is single-cell RNA sequencing (scRNAseq), which supports high-throughput gene expression studies. However, this technique generates expression data for many cells while lacking the cells’ positional coordinates within the tissue. As science develops, the use of complementary pre-established tissue reference maps, built with molecular and bioinformatics techniques, has emerged to resolve this setback and produce both levels of data in one shot of scRNAseq analysis. This is an emerging conceptual approach for integrative and progressively dependable transcriptomics analysis. It can support in-situ analysis for a better understanding of tissue functional organization, unveil new biomarkers for early-stage detection of diseases and for therapeutic targets in drug development, and expose the nature of cell-to-cell interactions.
These are also vital genomic signatures for the characterization of clinical applications. Over the past decades, RNAseq has generated a wide array of information that is igniting bespoke breakthroughs and innovations in biomedicine. On the other hand, spatial transcriptomics is tissue-level based and is used to study biological specimens with heterogeneous features. It reveals the gross identity of investigated mammalian tissues, which can then be used to study cell differentiation, track cell-line trajectory patterns and behavior, and examine regulatory homeostasis in disease states. It also requires referenced positional analysis to build up the genomic signatures that will be assessed from the single cells in the tissue sample. Given these two approaches to RNA transcriptomics in varying quantities of cell lines, with avenues for appropriate resolutions, both have made the study of gene expression from mRNA molecules interesting, progressive, and developmental, helping to tackle health challenges head-on.

Keywords: transcriptomics, RNA sequencing, single cell, spatial, gene expression

Procedia PDF Downloads 122
507 Green Extraction Technologies of Flavonoids Containing Pharmaceuticals

Authors: Lamzira Ebralidze, Aleksandre Tsertsvadze, Dali Berashvili, Aliosha Bakuridze

Abstract:

Nowadays, there is an increasing demand for biologically active substances from vegetable, animal, and mineral resources. The pharmaceutical, cosmetic, and nutrition industries have a strong interest in the use of natural compounds. The biggest drawback of conventional extraction methods is the need to use a large volume of organic extractants. Removal of the organic solvent is a multi-stage process; absolute removal cannot be achieved, and residues still appear in the final product as impurities. The large amount of waste containing organic solvent damages not only human health but also the environment. Accordingly, researchers have focused on improving extraction methods, aiming to minimize the use of organic solvents and energy by using alternative solvents and renewable raw materials. In this context, the principles of green extraction were formed. Green extraction is a need of today's environment, and the concept corresponds fully to the challenges of the 21st century. The extraction of biologically active compounds based on green extraction principles is vital for the preservation and maintenance of biodiversity. Novel green extraction technologies are known as "cold methods" because the extraction temperature is relatively low and does not negatively affect the stability of plant compounds. Novel technologies provide great opportunities to reduce or replace the use of toxic organic solvents, improve the efficiency of the process, enhance extraction yield, and improve the quality of the final product. The objective of the research is the development of green technologies for flavonoid-containing preparations. Methodology: In the first stage of the research, flavonoid-containing preparations (Tincture Herba Leonuri, flamine, rutin) were prepared based on conventional extraction methods: maceration, bismaceration, percolation, and repercolation.
At the same time, the same preparations were prepared based on green technologies: microwave-assisted and UV extraction methods. Product quality characteristics were evaluated by pharmacopoeial methods. In the next stage of the research, the technological and economic characteristics and cost efficiency of products prepared by conventional and novel technologies were determined. For the extraction of flavonoids, water is used as the extractant. Surface-active substances are used as co-solvents to reduce surface tension, which significantly increases the solubility of polyphenols in water. Different concentrations of water-glycerol mixtures, cyclodextrin, and ionic solvents were used for the extraction process. In vitro antioxidant activity will be studied spectrophotometrically, using DPPH (2,2-diphenyl-1-picrylhydrazyl) as the antioxidant assay. A further advantage of green extraction methods is the possibility of obtaining higher yields at low temperature while limiting the extraction of undesirable compounds, which is especially important for the extraction of thermosensitive compounds and for maintaining their stability.
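As a side note on the planned DPPH assay: radical-scavenging activity is conventionally reported as the percentage drop in absorbance (typically read near 517 nm) relative to a control. A minimal sketch; the function name and absorbance readings are illustrative, not values from the study:

```python
def dpph_scavenging(a_control: float, a_sample: float) -> float:
    """Percent DPPH radical-scavenging activity.

    Standard spectrophotometric formula: the drop in absorbance of the
    DPPH solution caused by the extract, relative to the control.
    """
    if a_control <= 0:
        raise ValueError("control absorbance must be positive")
    return (a_control - a_sample) / a_control * 100.0

# Hypothetical absorbance readings:
print(round(dpph_scavenging(0.80, 0.20), 1))  # 75.0 % scavenging
```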

Keywords: extraction, green technologies, natural resources, flavonoids

Procedia PDF Downloads 129
506 Investigation of Linezolid, 127I-Linezolid and 131I-Linezolid Effects on Slime Layer of Staphylococcus with Nuclear Methods

Authors: Hasan Demiroğlu, Uğur Avcıbaşı, Serhan Sakarya, Perihan Ünak

Abstract:

Implanted devices are increasingly used in modern medicine to relieve pain or restore compromised function. Implant-associated infections represent an emerging complication, caused by organisms that adhere to the implant surface and grow embedded in a protective extracellular polymeric matrix known as a biofilm. In addition, the microorganisms within biofilms enter a stationary growth phase and become phenotypically resistant to most antimicrobials, frequently causing treatment failure. In such cases, surgical removal of the implant is often required, causing high morbidity and substantial healthcare costs. Staphylococcus aureus is the most common pathogen causing implant-associated infections. Successful treatment of these infections includes early surgical intervention and antimicrobial treatment with bactericidal drugs that also act on the surface-adhering microorganisms. Linezolid is a promising antimicrobial with anti-staphylococcal activity, used for the treatment of MRSA infections. Linezolid is a synthetic antimicrobial of the oxazolidinone group, with a dose-dependent bacteriostatic or bactericidal mechanism against gram-positive bacteria. Intensive use of antibiotics has led to the emergence of multi-resistant organisms over the years, and major problems have arisen in the treatment of the infections they cause. While new drugs have been developed worldwide, infections caused by microorganisms that have gained resistance to these drugs have been reported, and the scale of the problem increases gradually. Scientific studies on bacterial biofilm production have increased in recent years. For this purpose, we investigated the activity of Lin, Lin radiolabeled with 131I (131I-Lin), and cold-iodinated Lin (127I-Lin) against clinical strains of Staphylococcus aureus DSM 4910 in biofilm. In the first stage, radiolabeling and cold-labeling studies were performed.
Quality-control studies of Lin and the iodo (radioactive and cold) Lin derivatives were carried out using TLC (thin layer chromatography) and HPLC (high pressure liquid chromatography). The binding yield was found to be about 86±2% for 131I-Lin. The minimal inhibitory concentration (MIC) of Lin, 127I-Lin, and 131I-Lin for the Staphylococcus aureus DSM 4910 strain was found to be 1 µg/mL. In time-kill studies, Lin, 127I-Lin, and 131I-Lin produced ≥3 log10 decreases in viable counts (cfu/mL) within 6 h at 2- and 4-fold MIC, respectively. No viable bacteria were observed within 24 h of the experiments. Biofilm eradication of S. aureus started at 64 µg/mL of Lin, 127I-Lin, and 131I-Lin, with OD630 values of 0.507±0.092, 0.589±0.058, and 0.266±0.047, respectively. The media control of the biofilm-producing Staphylococcus was 1.675±0.01 (OD630). 131I and 127I alone had no effect on biofilms. Lin and 127I-Lin were found to be less effective than 131I-Lin at killing cells in biofilm and at biofilm eradication. Our results demonstrate that 131I-Lin has potent anti-biofilm activity against S. aureus compared to Lin, 127I-Lin, and the media control, suggesting that 131I may have a damaging effect on the biofilm structure.
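The ≥3 log10 kill criterion quoted from the time-kill studies is simple arithmetic on viable counts; a minimal sketch with hypothetical counts (not data from the study):

```python
import math

def log10_reduction(cfu_initial: float, cfu_final: float) -> float:
    """Log10 decrease in viable counts (cfu/mL), as reported in time-kill studies.

    A value >= 3 corresponds to a >= 99.9% kill, the usual
    bactericidal threshold.
    """
    if cfu_initial <= 0 or cfu_final <= 0:
        raise ValueError("cfu counts must be positive")
    return math.log10(cfu_initial / cfu_final)

# Hypothetical counts: 1e7 cfu/mL at t=0 falling to 1e4 cfu/mL at 6 h
print(log10_reduction(1e7, 1e4))  # 3.0 -> meets the bactericidal threshold
```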

Keywords: iodine-131, linezolid, radiolabeling, slime layer, Staphylococcus

Procedia PDF Downloads 558
505 The Effects of Aging on Visuomotor Behaviors in Reaching

Authors: Mengjiao Fan, Thomson W. L. Wong

Abstract:

It is unavoidable that older adults may have to deal with aging-related motor problems. Aging is also highly likely to affect motor learning and control. For example, older adults may suffer from poor motor function and quality of life due to age-related eye changes; these adverse changes in vision result in impaired movement automaticity. Reaching is a fundamental component of various complex movements, which makes it well suited for exploring changes and adaptation in visuomotor behaviors. The current study aims to explore how aging affects visuomotor behaviors by comparing motor performance and gaze behaviors between two age groups (i.e., young and older adults). Visuomotor behaviors in reaching under conditions providing or blocking online visual feedback (simulated visual deficiency) were investigated in 60 healthy young adults (mean age = 24.49 years, SD = 2.12) and 37 older adults (mean age = 70.07 years, SD = 2.37) with normal or corrected-to-normal vision. Participants in each group were randomly allocated into two subgroups. Subgroup 1 was provided with online visual feedback of the hand-controlled mouse cursor, whereas in subgroup 2, visual feedback was blocked to simulate visual deficiency. The experimental task required participants to complete 20 reaching trials to a target by controlling the mouse cursor on the computer screen. In all 20 trials, the start position was at the center of the screen, and the target appeared at a position randomly selected by a tailor-made computer program. Primary outcomes of motor performance and gaze behavior data were recorded by the EyeLink II (SR Research, Canada). The results suggested that aging significantly affects the performance of reaching tasks in both visual feedback conditions.
In both age groups, blocking online visual feedback of the cursor in reaching resulted in longer hand movement time (p < .001), longer reaching distance away from the target center (p < .001), and poorer reaching motor accuracy (p < .001). Concerning gaze behaviors, blocking online visual feedback increased the first fixation duration in young adults (p < .001) but decreased it in older adults (p < .001). Besides, under the condition of providing online visual feedback of the cursor, older adults showed a longer fixation dwell time on the target throughout reaching than young adults (p < .001), although the effect was not significant under the blocked visual feedback condition (p = .215). Therefore, the results suggested that different levels of visual feedback during movement execution can affect gaze behaviors differently in older and young adults. Differential effects of aging on visuomotor behaviors appear under the two visual feedback patterns (i.e., blocking or providing online visual feedback of the hand-controlled cursor in reaching). Several specific gaze behaviors were found among the older adults, implying that blocking visual feedback may induce extra perceptual load during movement execution and that age-related visual degeneration might further deteriorate the situation. This provides insight for the future development of potential rehabilitative training methods (e.g., well-designed errorless training) for enhancing visuomotor adaptation in the aging population, improving movement automaticity by facilitating compensation for visual degeneration.

Keywords: aging effect, movement automaticity, reaching, visuomotor behaviors, visual degeneration

Procedia PDF Downloads 312
504 The Reliability Analysis of Concrete Chimneys Due to Random Vortex Shedding

Authors: Saba Rahman, Arvind K. Jain, S. D. Bharti, T. K. Datta

Abstract:

Chimneys are generally tall and slender structures with circular cross-sections, which makes them highly prone to wind forces. Wind exerts pressure on the wall of a chimney, producing unwanted forces. Vortex-induced oscillation is one such excitation, and it can lead to the failure of chimneys. Vortex-induced oscillation of chimneys is therefore of great concern to researchers and practitioners, since many failures of chimneys due to vortex shedding have occurred in the past. As a consequence, extensive research has taken place on the subject over decades. Many laboratory experiments have been performed to verify the theoretical models proposed to predict vortex-induced forces, including aero-elastic effects. Comparatively few prototype measurement data have been recorded to verify the proposed theoretical models. For this reason, theoretical models developed with the help of experimental laboratory data are used for analyzing chimneys under vortex-induced forces. This calls for a reliability analysis of the predicted responses of chimneys due to the vortex shedding phenomenon. Although considerable literature exists on the vortex-induced oscillation of chimneys, including code provisions, the reliability analysis of chimneys against failure caused by vortex shedding is scant. In the present study, a reliability analysis of chimneys against vortex shedding failure is presented, assuming the uncertainty in the vortex shedding phenomenon to be significantly greater than the other uncertainties, which are hence ignored. The vortex shedding is modeled as a stationary random process and is represented by a power spectral density function (PSDF). It is assumed that the vortex shedding forces are perfectly correlated and act over the top one-third height of the chimney. The PSDF of the tip displacement of the chimney is obtained by performing a frequency-domain spectral analysis using a matrix approach.
For this purpose, both the chimney and the random wind forces are discretized over a number of points along the height of the chimney. The method of analysis duly accounts for aero-elastic effects. The double-barrier threshold crossing level, as proposed by Vanmarcke, is used for determining the probability of crossing different threshold levels of the tip displacement of the chimney. Assuming the annual distribution of the mean wind velocity to be a Gumbel type-I distribution, the fragility curve denoting the variation of the annual probability of threshold crossing against different threshold levels of the tip displacement of the chimney is determined. The reliability estimate is derived from the fragility curve. A 210 m tall concrete chimney with a base diameter of 35 m, a top diameter of 21 m, and a wall thickness of 0.3 m is taken as an illustrative example. The terrain condition is assumed to be that of a city center. The expression for the PSDF of the vortex shedding force is taken from Vickery and Basu. The results of the study show that the threshold crossing reliability of the tip displacement of the chimney is significantly influenced by the assumed structural damping and the Gumbel distribution parameters. Further, the aero-elastic effect influences the reliability estimate to a great extent for small structural damping.
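The final step, integrating a conditional crossing probability over the Gumbel-distributed annual mean wind, can be sketched numerically. This is a generic illustration of the idea, not the study's model: the Gumbel parameters, the integration range, and the smooth-step conditional fragility below are all hypothetical.

```python
import math

def gumbel_pdf(v, mu, beta):
    """Gumbel type-I (maximum) probability density for the annual mean wind."""
    z = (v - mu) / beta
    return math.exp(-(z + math.exp(-z))) / beta

def annual_crossing_probability(p_cross_given_v, mu, beta, v_max=80.0, n=4000):
    """Unconditional annual probability that the tip displacement crosses a
    threshold: integrate the conditional crossing probability over the
    Gumbel-distributed annual wind speed (trapezoidal rule on [0, v_max])."""
    dv = v_max / n
    total = 0.0
    for i in range(n + 1):
        v = i * dv
        w = 0.5 if i in (0, n) else 1.0
        total += w * p_cross_given_v(v) * gumbel_pdf(v, mu, beta) * dv
    return total

# Hypothetical conditional fragility: a smooth step around a critical wind speed
p_cond = lambda v: 1.0 / (1.0 + math.exp(-(v - 30.0) / 2.0))
p_annual = annual_crossing_probability(p_cond, mu=25.0, beta=4.0)
print(p_annual)  # annual probability for this one threshold level
```

Repeating the computation over a range of displacement thresholds (each with its own conditional crossing probability) traces out the fragility curve from which the reliability estimate is read.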

Keywords: chimney, fragility curve, reliability analysis, vortex-induced vibration

Procedia PDF Downloads 160
503 Exploring the Application of IoT Technology in Lower Limb Assistive Devices for Rehabilitation during the Golden Period of Stroke Patients with Hemiplegia

Authors: Ching-Yu Liao, Ju-Joan Wong

Abstract:

Recent years have shown a trend toward younger stroke patients and an increase in ischemic strokes with the rise in stroke incidence. This has led to a growing demand for telemedicine, particularly during the COVID-19 pandemic, which made the need for telemedicine even more urgent. This shift in healthcare is also closely related to advancements in Internet of Things (IoT) technology. Stroke-induced hemiparesis is a significant issue for patients. The medical community believes that if intervention occurs within three to six months of stroke onset, 80% of the residual effects can be restored to normal, a period known as the stroke golden period. During this time, patients undergo treatment and rehabilitation, and neural plasticity is at its best. Lower limb rehabilitation for stroke generally includes exercises such as supported standing and walking posture, typically involving the healthy limb guiding the affected limb to achieve rehabilitation goals. Existing gait training aids in hospitals usually address balance, gait, sitting posture training, and precise muscle control, effectively tackling issues of poor gait, insufficient muscle activity, and inability to train independently during recovery. However, home training aids, such as braced and wheeled devices, often rely on the healthy limb to pull the affected limb, leading to lower usage of the affected limb, worsening circular walking, and compensatory movement issues. IoT technology connects devices via the internet to record and receive data, provide feedback, and adjust equipment intelligently. Therefore, this study aims to explore how IoT can be integrated into existing gait training aids to monitor and sense home rehabilitation movements, improve compensatory issues in gait training through real-time feedback, and enable healthcare professionals to quickly understand patient conditions and enhance medical communication.
To understand the needs of hemiparetic patients, a review of relevant literature from the past decade will be conducted. From the perspective of user experience, participant observation will be used to explore the use of home training aids by stroke patients and therapists, and interviews with physical therapists will be conducted to obtain professional opinions and practical experience. Design specifications for home training aids for hemiparetic patients will then be summarized. Applying IoT technology to lower limb training aids for stroke hemiparesis can help promote the recovery of walking function in hemiparetic patients, reduce muscle atrophy, and allow healthcare professionals to immediately grasp patient conditions and adjust gait training plans based on the collected and analyzed information. Exploring these potential development directions provides a valuable reference for the further application of IoT technology in the field of medical rehabilitation.

Keywords: stroke, hemiplegia, rehabilitation, gait training, internet of things technology

Procedia PDF Downloads 29
502 Determination of Physical Properties of Crude Oil Distillates by Near-Infrared Spectroscopy and Multivariate Calibration

Authors: Ayten Ekin Meşe, Selahattin Şentürk, Melike Duvanoğlu

Abstract:

Petroleum refineries are a highly complex process industry with continuous production and high operating costs. Physical separation of crude oil starts with the crude oil distillation unit, continues with various conversion and purification units, and passes through many stages before the final product is obtained. To meet the desired product specifications, process parameters are strictly followed. To ensure the quality of distillates, routine analyses are performed in quality control laboratories based on appropriate international standards, such as American Society for Testing and Materials (ASTM) standard methods and European Standard (EN) methods. The cut point of distillates in the crude distillation unit is crucial for the efficiency of the downstream processes. To maximize process efficiency, the determination of distillate quality should be as fast, reliable, and cost-effective as possible. In this sense, an alternative study was carried out on the crude oil distillation unit that serves the entire refinery process. In this work, studies were conducted with three different crude oil distillates: Light Straight Run Naphtha (LSRN), Heavy Straight Run Naphtha (HSRN), and Kerosene. These products are separated, and named, according to the number of carbon atoms they contain: LSRN consists of hydrocarbons containing five to six carbons, HSRN of six to ten, and kerosene of sixteen to twenty-two. Physical properties of the three crude distillation unit products (LSRN, HSRN, and Kerosene) were determined using near-infrared spectroscopy with multivariate calibration. The absorbance spectra of the petroleum samples were obtained in the range from 10000 cm⁻¹ to 4000 cm⁻¹, employing a quartz transmittance flow-through cell with a 2 mm light path and a resolution of 2 cm⁻¹. A total of 400 samples were collected for each petroleum product over almost four years.
Several different crude oil grades were processed during the sample collection period. Extended Multiplicative Signal Correction (EMSC) and Savitzky-Golay (SG) preprocessing techniques were applied to the FT-NIR spectra of the samples to eliminate baseline shifts and suppress unwanted variation. Two different multivariate calibration approaches (Partial Least Squares Regression, PLS, and Genetic Inverse Least Squares, GILS) and an ensemble model were applied to the preprocessed FT-NIR spectra. The predictive performance of each multivariate calibration and preprocessing technique was compared, and the best models were chosen according to the reproducibility of the ASTM reference methods. This work demonstrates that the developed models can be used for routine analysis instead of conventional analytical methods, with over 90% accuracy.
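Of the preprocessing steps above, Savitzky-Golay smoothing is compact enough to sketch from first principles: fit a low-order polynomial to each spectral window by least squares and keep the fitted value at the window center, which reduces to a fixed convolution. The snippet below derives those weights exactly (a stdlib-only illustration; EMSC and the PLS/GILS calibrations need a full numerical stack and are not shown):

```python
from fractions import Fraction

def savgol_coeffs(window: int, polyorder: int):
    """Savitzky-Golay smoothing coefficients (exact, via Fractions).

    Fits a degree-`polyorder` polynomial to each `window`-point span by
    least squares and returns the weights that evaluate the fit at the
    window centre -- the smoothing step applied to FT-NIR spectra.
    """
    m = window // 2
    n = polyorder + 1
    # Design matrix A[i][j] = x_i**j for offsets x_i = -m..m
    A = [[Fraction(x) ** j for j in range(n)] for x in range(-m, m + 1)]
    # Normal-equations matrix A^T A (exact rational arithmetic)
    ata = [[sum(A[k][i] * A[k][j] for k in range(window)) for j in range(n)]
           for i in range(n)]
    # Solve (A^T A) y = e0 by Gaussian elimination; y is row 0 of (A^T A)^{-1}
    aug = [row[:] + [Fraction(int(i == 0))] for i, row in enumerate(ata)]
    for col in range(n):
        piv = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        pv = aug[col][col]
        aug[col] = [v / pv for v in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    y = [aug[i][-1] for i in range(n)]
    # Weight for the sample at offset x is sum_j y[j] * x**j
    return [sum(y[j] * Fraction(x) ** j for j in range(n)) for x in range(-m, m + 1)]

# Classic 5-point quadratic smoother: [-3, 12, 17, 12, -3] / 35
print(savgol_coeffs(5, 2))
```

In practice one would call a library routine (e.g. a SciPy-style `savgol_filter`) rather than deriving the weights by hand; the point here is only what those weights are.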

Keywords: crude distillation unit, multivariate calibration, near infrared spectroscopy, data preprocessing, refinery

Procedia PDF Downloads 129
501 Application of Deep Learning Algorithms in Agriculture: Early Detection of Crop Diseases

Authors: Manaranjan Pradhan, Shailaja Grover, U. Dinesh Kumar

Abstract:

The farming community in India, as in other parts of the world, is highly stressed due to increasing input costs (seeds, fertilizers, pesticides), droughts, and reduced revenue, leading to farmer suicides. The lack of an integrated farm advisory system in India adds to the farmers' problems. Farmers need the right information during the early stages of a crop's lifecycle to prevent damage and loss of revenue. In this paper, we use deep learning techniques to develop an early warning system for the detection of crop diseases using images taken by farmers with their smartphones. The research work leads to a smart assistant using analytics and big data that could help farmers with early diagnosis of crop diseases and corrective actions. The classical approach to crop disease management has been to identify diseases at the crop level. Recently, ImageNet classification using convolutional neural networks (CNNs) has been successfully used to identify diseases at the individual plant level. Our model uses convolution filters, max pooling, dense layers, and dropout (to avoid overfitting). The models are built for binary classification (healthy or not healthy) and multi-class classification (identifying which disease). Transfer learning is used to adapt the weights learnt on the ImageNet dataset and apply them to crop diseases, which reduces the number of epochs needed for training. One-shot learning is used to learn from very few images, while data augmentation techniques, such as rotation, zoom, shift, and blurring, are used to improve accuracy on images taken from farms. Models built using a combination of these techniques are more robust for deployment in the real world. Our model is validated using the tomato crop. In India, tomato is affected by 10 different diseases. Our model achieves an accuracy of more than 95% in correctly classifying the diseases.
The main contribution of our research is a personal assistant for farmers for managing plant disease; although the model was validated using the tomato crop, it can be easily extended to other crops. Advances in computing technology and the availability of large datasets have made possible the success of deep learning applications in computer vision, natural language processing, image recognition, etc. With these robust models and high smartphone penetration, the feasibility of implementing them is high, resulting in timely advice to farmers, increasing farmers' income and reducing input costs.
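The two building blocks named above, convolution filters and max pooling, can be illustrated with a minimal pure-Python sketch; a real model would of course be built in a deep-learning framework, and the toy image and edge kernel here are illustrative only:

```python
def conv2d(img, kernel):
    """'Valid' 2-D convolution (cross-correlation), the basic operation
    a convolution filter performs on an image."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(img[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(ow)] for i in range(oh)]

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: keep the strongest response per patch."""
    return [[max(fmap[i + a][j + b] for a in range(size) for b in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]

# Toy 4x4 "image" with a vertical edge, and a vertical-edge detector kernel
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
edge = [[-1, 1],
        [-1, 1]]
fmap = conv2d(img, edge)   # strongest response where the edge sits
pooled = max_pool(fmap)    # pooled summary of the feature map
print(fmap, pooled)
```

Stacking many such filter/pool stages, followed by dense layers and dropout, gives the architecture described in the abstract.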

Keywords: analytics in agriculture, CNN, crop disease detection, data augmentation, image recognition, one shot learning, transfer learning

Procedia PDF Downloads 119
500 Conceptual Design of Gravity Anchor Focusing on Anchor Towing and Lowering

Authors: Vinay Kumar Vanjakula, Frank Adam, Nils Goseberg

Abstract:

Wind power is one of the leading renewable energy generation methods. Due to the abundant higher wind speeds far from shore, the construction of offshore wind turbines began in recent decades. However, the installation of bottom-fixed (monopile) offshore wind turbines in deep waters is often associated with technical and financial challenges. To overcome such challenges, the concept of floating wind turbines has been expanded on the basis of experience from the oil and gas industry. In this research work, a universal heavyweight gravity anchor (UGA) for floating tension leg platform (TLP) sub-structures is developed. It is funded by the German Federal Ministry of Education and Research under a three-year (2019-2022) research program called “Offshore Wind Solutions Plus (OWSplus) - Floating Offshore Wind Solutions Mecklenburg-Vorpommern,” carried out by a group of German institutions (universities, laboratories, and consulting companies). This part of the project is focused on the numerical modeling of the gravity anchor, which involves analyzing and solving fluid flow problems. In contrast to gravity-based torpedo anchors, these UGAs will be towed and lowered via controlled machines (tug boats) at lower speeds. This kind of UGA installation is new to the offshore wind industry, particularly for TLPs, and very few research works have been carried out in recent years. Conventional methods of transporting the anchor require a large transportation crane vessel, which involves greater cost. The conceptual UGA consists of ballasting chambers that utilize buoyancy forces: the chambers are filled with the required amount of water so that the anchor can float on the water for towing. After reaching the installation site, the chambers are ballasted with water for lowering.
At the end of its lifetime, the UGA can be unballasted (for recovery or replacement), resulting in self-rising to the sea surface; the buoyancy chambers thus allow a UGA to be used without heavy machinery. However, while being lowered to or raised from the seabed, the UGA experiences harsh marine conditions due to the interaction of waves and currents. This leads to drifting of the anchor from the desired installation position and damage to the lowering machines. To overcome these problems, a numerical model is built to investigate the influence of different outer contours and other flow-governing shapes that can be fitted to the UGA to mitigate turbulence and drifting. The presentation will highlight the importance of the computational fluid dynamics (CFD) numerical model in OpenFOAM, an open-source software package.
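The ballasting idea rests on Archimedes' principle: the anchor floats while its weight is below the weight of the displaced water and sinks once enough ballast water is taken on. A minimal static sketch; the masses, volumes, and seawater density below are hypothetical round numbers, and dynamic wave/current loads and safety margins are ignored:

```python
RHO_SEAWATER = 1025.0  # kg/m^3, nominal seawater density (assumed)

def ballast_water_to_sink(anchor_mass_t: float,
                          chamber_volume_m3: float,
                          displaced_volume_m3: float) -> float:
    """Minimum ballast-water mass (tonnes) for the anchor to become
    negatively buoyant, assuming flooding internal chambers adds mass
    without changing the displaced volume."""
    # Weight of displaced seawater, expressed in tonnes
    buoyancy_t = RHO_SEAWATER * displaced_volume_m3 / 1000.0
    deficit = buoyancy_t - anchor_mass_t
    if deficit <= 0:
        return 0.0  # already sinks without ballast
    if deficit > RHO_SEAWATER * chamber_volume_m3 / 1000.0:
        raise ValueError("chambers too small to sink the anchor")
    return deficit

# Hypothetical 600 t anchor displacing 800 m^3, with 400 m^3 of chambers:
print(ballast_water_to_sink(600.0, 400.0, 800.0))  # 220.0 t of ballast water
```

The same balance, run in reverse, shows why emptying the chambers makes the anchor self-rise at the end of its service life.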

Keywords: anchor lowering, towing, waves, currents, computational fluid dynamics

Procedia PDF Downloads 166
499 Altered Proteostasis Contributes to Skeletal Muscle Atrophy during Chronic Hypobaric Hypoxia: An Insight into Signaling Mechanisms

Authors: Akanksha Agrawal, Richa Rathor, Geetha Suryakumar

Abstract:

Muscle represents about ¾ of the body mass, and a healthy muscular system is required for human performance. A healthy muscular system is dynamically balanced via catabolic and anabolic processes. Hypoxia associated with high altitude alters this redox balance by producing reactive oxygen and nitrogen species that modulate protein structure and function and hence disrupt proteostasis (protein homeostasis). The mechanisms by which proteostasis is maintained include regulated protein translation, protein folding, and protein degradation machinery. Perturbation of any of these mechanisms can increase proteome imbalance in cellular processes. Altered proteostasis in skeletal muscle is likely responsible for muscular atrophy in response to hypoxia. Therefore, we planned to elucidate the mechanism by which altered proteostasis leads to skeletal muscle atrophy under chronic hypobaric hypoxia. Material and methods: Male Sprague Dawley rats weighing about 200-220 g were divided into five groups: control (normoxic animals) and groups exposed to hypobaric hypoxia for 1, 3, 7, and 14 days. The animals were exposed to simulated hypoxia equivalent to 282 torr (an altitude of 7620 m, 8% oxygen) at 25°C. On completion of the chronic hypobaric hypoxia (CHH) exposure, the rats were sacrificed, the muscle was excised, and biochemical, histopathological, and protein synthesis signaling studies were performed. Results: A number of changes were observed with CHH exposure time. ROS increased significantly on days 7 and 14, which was attributed to protein oxidation damaging muscle protein structure through the oxidation of amino acid moieties. The oxidative damage to proteins further enhanced the various protein degradation pathways. Calcium-activated cysteine proteases and other intracellular proteases participate in protein turnover in muscles.
Therefore, we analysed calpain and 20S proteasome activities, which were noticeably increased on CHH exposure as compared to the control group, representing enhanced muscle protein catabolism. Since inflammatory markers (myokines) affect protein synthesis and trigger the degradation machinery, we also determined the inflammatory pathways regulated under the hypoxic environment. Another striking finding of the study was the upregulation of the Akt/PKB translational machinery on CHH exposure: Akt, p-Akt, p70 S6 kinase, and GSK-3β expression were upregulated until day 7 of CHH exposure. Apoptosis-related markers caspase-3, caspase-9, and annexin V were also increased on CHH exposure. Conclusion: The present study provides evidence of disrupted proteostasis under chronic hypobaric hypoxia. A profound loss of muscle mass is accompanied by muscle damage leading to apoptosis and cell death under CHH. These cellular stress response pathways may play a pivotal role in hypobaric-hypoxia-induced skeletal muscle atrophy. Further research into these signaling pathways will support the development of therapeutic interventions for the amelioration of hypoxia-induced muscle atrophy.

Keywords: Akt/PKB translational machinery, chronic hypobaric hypoxia, muscle atrophy, protein degradation

Procedia PDF Downloads 270
498 Impact of Financial Factors on Total Factor Productivity: Evidence from Indian Manufacturing Sector

Authors: Lopamudra D. Satpathy, Bani Chatterjee, Jitendra Mahakud

Abstract:

The rapid economic growth in terms of output and investment necessitates substantial growth in the Total Factor Productivity (TFP) of firms, which is an indicator of an economy’s technological change. The strong empirical relationship between financial sector development and economic growth clearly indicates that firms’ financing decisions affect their levels of output via their investment decisions, establishing a linkage between financial factors and the productivity growth of firms. To achieve smooth and continuous economic growth over time, it is imperative to understand the financial channel, which serves as one of the vital channels. The theoretical argument behind this linkage is that when internal financial capital is not sufficient for investment, firms rely upon external sources of finance. But due to frictions and asymmetric information, it is always costlier for firms to raise external capital from the market, which in turn affects their investment sentiment and productivity. This kind of financial position puts heavy pressure on firms’ productive activities. Keeping in view this theoretical background, the present study analyzes the role of both external and internal financial factors (leverage, cash flow, and liquidity) in the determination of the total factor productivity of firms in the manufacturing industry and its sub-industries, with a set of firm-specific variables as controls (size, age, and disembodied technological intensity). Total factor productivity of the Indian manufacturing industry and its sub-industries is estimated using a semi-parametric approach, the Levinsohn-Petrin method. The study establishes the relationship between financial factors and productivity growth for 652 firms using a dynamic panel GMM method covering the period from 1997-98 to 2012-13.
From the econometric analyses, it has been found that internal cash flow has a positive and significant impact on the productivity of the overall manufacturing sector. The other financial factors, leverage and liquidity, also play a significant role in the determination of the total factor productivity of the Indian manufacturing sector. The significant role of internal cash flow in determining firm-level productivity suggests that access to external finance is not easily available to Indian companies. Further, the negative impact of leverage on productivity could be due to the less developed bond market in India. These findings imply that policy makers should undertake reforms to develop the external bond market, so that financially constrained companies can raise capital in a cost-effective manner and channel their investments into highly productive activities, which would help accelerate economic growth.
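
The production-function logic behind TFP estimation can be illustrated with a simplified sketch. The study uses the Levinsohn-Petrin semi-parametric estimator, which controls for the simultaneity of input choices with an intermediate-input proxy; the minimal OLS residual below, run on hypothetical firm data with made-up elasticities, only shows the basic idea of recovering log-TFP as the part of output not explained by capital and labour:

```python
import numpy as np

def tfp_residual(log_y, log_k, log_l):
    """Recover log-TFP as the OLS residual of a Cobb-Douglas production
    function: log y = a + b_k*log k + b_l*log l + tfp.
    (Simplified: the Levinsohn-Petrin estimator used in the study instead
    controls for input simultaneity with an intermediate-input proxy.)"""
    X = np.column_stack([np.ones_like(log_k), log_k, log_l])
    coef, *_ = np.linalg.lstsq(X, log_y, rcond=None)
    return log_y - X @ coef, coef

# Hypothetical firm-level data with made-up elasticities (0.3, 0.6)
rng = np.random.default_rng(0)
log_k = rng.uniform(1, 5, 200)          # log capital
log_l = rng.uniform(1, 5, 200)          # log labour
true_tfp = rng.normal(0, 0.1, 200)      # firm-level productivity shock
log_y = 0.5 + 0.3 * log_k + 0.6 * log_l + true_tfp
resid, coef = tfp_residual(log_y, log_k, log_l)
```

With simulated data the estimated elasticities recover the assumed values, and the residual tracks the productivity shock; on real panels the OLS version is biased, which motivates the semi-parametric correction.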

Keywords: dynamic panel, financial factors, manufacturing sector, total factor productivity

Procedia PDF Downloads 332
497 Hydration Evaluation in a Working Population in Greece

Authors: Aikaterini-Melpomeni Papadopoulou, Kyriaki Apergi, Margarita-Vasiliki Panagopoulou, Olga Malisova

Abstract:

Introduction: Adequate hydration is a vital factor that enhances concentration, memory, and decision-making abilities throughout the workday. Various factors may affect hydration status in workplace settings, and many variables, such as age, gender, and activity level, affect hydration needs. Employees frequently overlook their hydration needs amid busy schedules and demanding tasks, leading to dehydration that can negatively affect cognitive function, productivity, and overall well-being. In addition, dietary habits, including fluid intake and food choices, can either support or hinder optimal hydration. However, the factors that affect hydration balance among workers in Greece have not been adequately studied. Objective: This study aims to evaluate the hydration status of the working population in Greece and to investigate the various factors that impact hydration status in workplace settings, considering demographic, dietary, and occupational influences in a Greek sample of employees from diverse working environments. Materials & Methods: The study included 212 participants (46.2% women) from the working population in Greece. Water intake from both solid and liquid foods was recorded using a semi-quantified drinking frequency questionnaire; the validated Water Balance Questionnaire was used to evaluate hydration status. The calculation of water from solid and liquid foods was based on data from the USDA National Nutrient Database. Water balance was calculated by subtracting total fluid loss from total fluid intake. Furthermore, the questionnaire included additional questions on drinking habits and work-related factors. Volunteers answered questions in different categories, namely (a) demographic and socio-economic characteristics, (b) work style characteristics, (c) health, (d) physical activity, (e) food and fluid intake, (f) fluid excretion, and (g) trends in fluid and water intake.
Univariate and multivariate regression analyses were performed to assess the relationships between demographic factors, work-related factors, and hydration balance. Results: Analysis showed that demographic factors like gender, age, and BMI, as well as certain work-related factors, had a weak and statistically non-significant effect on hydration balance. However, the use of a bottle or water container during work hours (b = 944.93, p < 0.001) and engaging in intense physical activity outside of work (b = -226.28, p < 0.001) were found to have a significant impact. Additionally, the consumption of beverages other than water (b = -416.14, p = 0.059) could negatively impact hydration balance. On average, the sample consumed 3410 ml of water daily, with men consuming approximately 440 ml/day more water (3470 ml/day) than women (3030 ml/day); this difference was also statistically significant. Finally, the water balance, defined as the difference between water intake and water excretion, was found to be negative on average for the entire sample. Conclusions: This study is among the first to explore hydration status within the Greek working population. Findings indicate that awareness of adequate hydration and individual actions, such as using a water bottle during work, may influence hydration balance.
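
The study's water-balance definition and the reported regression coefficients can be combined into a small illustrative calculator. The coefficients below are taken from the results above, but the model intercept and remaining covariates are omitted, so the adjusted values are relative shifts only, not predictions:

```python
def water_balance_ml(intake_ml, loss_ml):
    """Water balance = total fluid intake minus total fluid loss (ml/day);
    negative values mean a net deficit, as reported for the sample mean."""
    return intake_ml - loss_ml

def adjusted_balance_ml(base_ml, uses_bottle, intense_activity, other_beverages):
    """Relative shift in water balance implied by the reported coefficients.
    Intercept and other covariates are omitted: illustrative only."""
    b = base_ml
    if uses_bottle:
        b += 944.93   # bottle/container during work hours (p < 0.001)
    if intense_activity:
        b -= 226.28   # intense physical activity outside work (p < 0.001)
    if other_beverages:
        b -= 416.14   # beverages other than water (borderline, p = 0.059)
    return b
```

For example, a day with 3410 ml intake and 3600 ml loss gives a balance of -190 ml, a net deficit consistent with the negative mean balance reported for the sample.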

Keywords: hydration, working population, water balance, workplace behavior

Procedia PDF Downloads 11
496 Neuroanatomical Specificity in Reporting & Diagnosing Neurolinguistic Disorders: A Functional & Ethical Primer

Authors: Ruairi J. McMillan

Abstract:

Introduction: This critical analysis aims to ascertain how well neuroanatomical aetiologies are communicated within 20 case reports of aphasia. Neuroanatomical visualisations based on dissected brain specimens were produced and combined with white matter tract and vascular taxonomies of function in order to address the most consistently underreported features found within the aphasic case study reports. Together, these approaches are intended to integrate the aphasiological knowledge of the past 20 years with aphasiological diagnostics, and to act as prototypal resources for both researchers and clinical professionals. The medico-legal precedent for aphasia diagnostics under Canadian, US, and UK case law and the neuroimaging/neurological diagnostics relative to the functional capacity of aphasic patients are discussed in relation to the major findings of the literature analysis, neuroimaging protocols in clinical use today, and the neuroanatomical aetiologies of different aphasias. Basic Methodology: Literature searches of relevant scientific databases (e.g., Ovid MEDLINE) were carried out using search terms such as "aphasia case study (year)" and "stroke induced aphasia case study". A series of 7 diagnostic reporting criteria was formulated, and the resulting case studies were scored out of 7 alongside clinical stroke criteria. In order to focus on the diagnostic assessment of the patient’s condition, only the case report proper (not the discussion) was used to quantify results. Statistical testing established whether specific reporting criteria were associated with higher overall scores and potentially inferable increases in quality of reporting. Whether criteria scores were associated with an unclear/adjusted diagnosis was also tested, as well as the probability of a given criterion deviating from an expected estimate.
Major Findings: The quantitative analysis of neuroanatomically driven diagnostics in case studies of aphasia revealed particularly low scores for the connection of neuroanatomical functions to aphasiological assessment (10%) and for the inclusion of white matter tracts within neuroimaging or assessment diagnostics (30%). Case studies which included clinical mention of white matter tracts within the report itself were distributed among the higher scoring cases, as were case studies which (as clinically indicated) related the affected vascular region to the brain parenchyma of the language network. Concluding Statement: These findings indicate that certain neuroanatomical functions are integrated less often within the patient report than others, despite a precedent for well-integrated neuroanatomical aphasiology also being found among the case studies sampled, and despite these functions being clinically essential in diagnostic neuroimaging and aphasiological assessment. The integration of a full aetiological neuroanatomy within the reporting of aphasias may therefore improve patient outcomes, contribute positively to the capacity and autonomy of aphasic patients as well as their clinicians, and sustain autonomy in the event of medico-ethical investigation.
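
The scoring procedure described in the methodology can be sketched as a simple checklist tally. The criterion labels below are hypothetical placeholders standing in for the study's 7 reporting criteria, which are not listed verbatim in the abstract:

```python
# Hypothetical labels standing in for the study's 7 reporting criteria
CRITERIA = [
    "lesion_localisation", "vascular_territory", "white_matter_tracts",
    "neuroimaging_protocol", "language_network_mapping",
    "aphasia_classification", "anatomy_linked_assessment",
]

def score_report(report):
    """Score one case report out of 7: one point per criterion satisfied."""
    return sum(bool(report.get(c)) for c in CRITERIA)

def criterion_rates(reports):
    """Proportion of reports satisfying each criterion, to expose the
    consistently underreported features across the sample."""
    n = len(reports)
    return {c: sum(bool(r.get(c)) for r in reports) / n for c in CRITERIA}
```

Per-criterion rates computed this way are what allow statements such as "white matter tracts were included in 30% of reports".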

Keywords: aphasia, language network, functional neuroanatomy, aphasiological diagnostics, medico-legal ethics

Procedia PDF Downloads 67
495 Systematic Review of Dietary Fiber Characteristics Relevant to Appetite and Energy Intake Outcomes in Clinical Intervention Trials of Healthy Humans

Authors: K. S. Poutanen, P. Dussort, A. Erkner, S. Fiszman, K. Karnik, M. Kristensen, C. F. M. Marsaux, S. Miquel-Kergoat, S. Pentikäinen, P. Putz, R. E. Steinert, J. Slavin, D. J. Mela

Abstract:

Dietary fiber (DF) intake has been associated with lower body weight or less weight gain. These effects are generally attributed to putative effects of DF on appetite. Many intervention studies have tested the effect of DFs on appetite-related measures, with inconsistent results. However, DF includes a wide category of different compounds with diverse chemical and physical characteristics, and correspondingly diverse effects in human digestion. Thus, inconsistent results between DF consumption and appetite are not surprising. The specific contribution of different compounds with varying physico-chemical properties to appetite control and the mediating mechanisms are not well characterized. This systematic review aimed to assess the influence of specific DF characteristics, including viscosity, gel forming capacity, fermentability, and molecular weight, on appetite-related outcomes in healthy humans. Medline and FSTA databases were searched for controlled human intervention trials, testing the effects of well-characterized DFs on subjective satiety/appetite or energy intake outcomes. Studies were included only if they reported: 1) fiber name and origin, and 2) data on viscosity, gelling properties, fermentability, or molecular weight of the DF materials tested. The search generated 3001 unique records, 322 of which were selected for further consideration from title and abstract screening. Of these, 149 were excluded due to insufficient fiber characterization and 124 for other reasons (not original article, not randomized controlled trial, or no appetite related outcome), leaving 49 papers meeting all the inclusion criteria, most of which reported results from acute testing (<1 day). The eligible 49 papers described 90 comparisons of DFs in foods, beverages or supplements. DF-containing material of interest was efficacious for at least one appetite-related outcome in 51/90 comparisons. 
Gel-forming DF sources were most consistently efficacious, but there were no clear associations between viscosity, molecular weight, or fermentability and appetite-related outcomes. A considerable number of papers had to be excluded from the review due to shortcomings in fiber characterization. To build understanding of the impact of DF on satiety/appetite specifically, there should be clear hypotheses about the mechanisms behind the proposed beneficial effect of a DF material on appetite, and sufficient data about the DF properties relevant to the hypothesized mechanisms to justify clinical testing. The hypothesized mechanisms should also guide decisions about the relevant duration of exposure in studies, i.e., whether the effects are expected to occur over an acute time frame (related to stomach emptying, digestion rate, etc.) or to develop from sustained exposure (gut-fermentation-mediated mechanisms). More consistent measurement methods and reporting of fiber specifications and characterization are needed to establish reliable structure-function relationships for DF and health outcomes.

Keywords: appetite, dietary fiber, physico-chemical properties, satiety

Procedia PDF Downloads 235
494 Strategies for the Optimization of Ground Resistance in Large Scale Foundations for Optimum Lightning Protection

Authors: Oibar Martinez, Clara Oliver, Jose Miguel Miranda

Abstract:

In this paper, we discuss the standard improvements which can be made to reduce the earth resistance in difficult terrains for optimum lightning protection, the practical limitations involved, and how the modeling can be refined for accurate diagnostics and ground resistance minimization. Ground resistance minimization can be approached in three different ways: burying vertical electrodes connected in parallel, burying horizontal conductive plates or meshes, or modifying the terrain itself, either by replacing the terrain material in a large volume or by adding earth-enhancing compounds. The use of vertical electrodes connected in parallel poses several practical limitations. In order to prevent loss of effectiveness, it is necessary to keep a minimum distance between electrodes, typically around five times the electrode length. Otherwise, the overlapping of the local equipotential lines around each electrode reduces the efficiency of the configuration. The addition of parallel electrodes reduces the resistance and facilitates the measurement, but the basic parallel resistor formula of circuit theory will always underestimate the final resistance. Numerical simulation of the equipotential lines around the electrodes overcomes this limitation. The resistance of a single electrode will always be proportional to the soil resistivity. Electrodes are usually installed with a backfilling material of high conductivity, which increases the effective diameter. However, the improvement is marginal, since the electrode diameter enters the estimate of the ground resistance only through a logarithmic function. Substances used for efficient chemical treatment must be environmentally friendly and must feature stability, high hygroscopicity, low corrosivity, and high electrical conductivity. A number of earth enhancement materials are commercially available. Many are comprised of carbon-based materials or clays like bentonite.
These materials can also be used as backfilling materials to reduce the resistance of an electrode. Chemical treatment of soil has environmental issues. Some products contain copper sulfate or other copper-based compounds, which may not be environmentally friendly. Carbon-based compounds are relatively inexpensive and have very low resistivities, but they also pose corrosion issues: typically, the carbon can corrode and destroy a copper electrode in around five years. These compounds also raise potential environmental concerns. Some earthing enhancement materials contain cement, which, after installation, acquires properties very close to those of concrete; this prevents the earthing enhancement material from leaching into the soil. After analyzing different configurations, we conclude that a buried conductive ring with vertical electrodes connected periodically should be the optimum baseline solution for the grounding of a large structure installed on high-resistivity terrain. In order to show this, a practical example is presented in which we simulate the ground resistance of a conductive ring buried in a terrain with a resistivity on the order of 1 kOhm·m.
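
The logarithmic dependence on electrode diameter and the idealised parallel combination discussed above can be sketched numerically. The rod formula below is Dwight's classical approximation for a single driven rod, not the paper's numerical model, and the rod dimensions are illustrative assumptions:

```python
import math

def rod_resistance_ohm(rho_ohm_m, length_m, diameter_m):
    """Dwight's classical approximation for a single driven vertical rod:
    R = rho / (2*pi*L) * (ln(8L/d) - 1).
    The diameter appears only inside the logarithm, which is why enlarging
    the effective diameter with backfill gives only marginal improvement."""
    return (rho_ohm_m / (2 * math.pi * length_m)
            * (math.log(8 * length_m / diameter_m) - 1))

def parallel_rods_ohm(r_single, n):
    """Idealised circuit-theory combination R/n; real arrays are worse,
    because the equipotential regions of nearby rods overlap (screening),
    so this value underestimates the final resistance."""
    return r_single / n

# Illustrative numbers: 1 kOhm*m terrain as in the paper's example;
# the 3 m x 16 mm rod is an assumption
r1 = rod_resistance_ohm(1000.0, length_m=3.0, diameter_m=0.016)
```

Doubling the diameter of this rod changes R by only about 10%, while halving the soil resistivity halves it, which is why terrain treatment and parallel rods dominate the design space.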

Keywords: grounding improvements, large scale scientific instrument, lightning risk assessment, lightning standards

Procedia PDF Downloads 139
493 Analysing the Stability of the Electrical Grid for Increased Renewable Energy Penetration by Focussing on Li-Ion Battery Storage Technology

Authors: Hemendra Singh Rathod

Abstract:

Frequency is, among other factors, one of the governing parameters for maintaining electrical grid stability. The quality of an electrical transmission and supply system is mainly described by the stability of the grid frequency. Over the past few decades, energy generation by intermittent sustainable sources like wind and solar has increased significantly worldwide. Consequently, controlling the associated deviations in grid frequency within safe limits has been gaining momentum so that the balance between demand and supply can be maintained. The lithium-ion battery energy storage system (Li-ion BESS) is a promising technology for tackling the challenges associated with grid instability. BESS is, therefore, an effective response to the ongoing debate about whether it is feasible to have an electrical grid constantly functioning on one hundred percent renewable power in the near future. In recent years, large-scale manufacturing and capital investment in battery production processes have made Li-ion battery systems cost-effective and increasingly efficient. Li-ion systems require very low maintenance, are independent of geographical constraints, and are easily scalable. The paper highlights the use of stationary and moving BESS for rapidly balancing electrical energy and thereby maintaining grid frequency. Moving BESS technology, as implemented in the selected railway network in Germany, is considered here as an exemplary concept for demonstrating the same functionality in the electrical grid system. Further, certain applications of Li-ion batteries, such as self-consumption of wind and solar parks and their ancillary services, storage of wind and solar energy during low demand, black start, island operation, and residential home storage, offer a solution for effectively integrating renewables and supporting Europe’s future smart grid.
The EMT software tool DIgSILENT PowerFactory was utilised to model an electrical transmission system with 100% renewable energy penetration. The stability of such a transmission system, together with BESS, was evaluated within a defined frequency band. Transmission system operators (TSOs) have the superordinate responsibility for system stability and must also coordinate with the other European transmission system operators. TSOs implement frequency control by maintaining a balance between electricity generation and consumption. Li-ion battery systems are seen here as flexible, controllable loads and flexible, controllable generation for balancing energy pools. Thus, using a Li-ion battery storage solution, frequency-dependent load shedding (automatic gradual disconnection of loads from the grid) and frequency-dependent electricity generation (automatic gradual connection of BESS to the grid) serve as security measures to maintain grid stability in any scenario. The paper emphasizes the use of stationary and moving Li-ion battery storage for meeting the demands of maintaining grid frequency and stability in near-future operations.
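
The frequency-dependent connection of BESS described above can be sketched as a droop-style control law. This is an illustrative scheme with assumed parameters (nominal frequency, deadband, droop band, rated power), not the PowerFactory model used in the study:

```python
def bess_power_kw(freq_hz, p_max_kw, f_nom=50.0, deadband_hz=0.02, band_hz=0.2):
    """Frequency-dependent BESS response (illustrative droop scheme).
    Positive = discharge to the grid when frequency is low;
    negative = charge from the grid when frequency is high.
    Power ramps linearly outside the deadband and saturates at +/- p_max_kw."""
    dev = freq_hz - f_nom
    if abs(dev) <= deadband_hz:
        return 0.0                                   # within tolerance: idle
    fraction = min((abs(dev) - deadband_hz) / band_hz, 1.0)
    return fraction * p_max_kw if dev < 0 else -fraction * p_max_kw
```

Under these assumed settings a 1 MW unit idles at 50.00 Hz, discharges at full power once frequency drops to 49.78 Hz, and charges at half power at 50.12 Hz, mirroring the gradual connection/disconnection behaviour described above.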

Keywords: frequency control, grid stability, li-ion battery storage, smart grid

Procedia PDF Downloads 150
492 Significant Growth in Expected Muslim Inbound Tourists in Japan Towards the 2020 Tokyo Olympics and the Still Incipient Stage of Current Halal Implementations in Hiroshima

Authors: Kyoko Monden

Abstract:

Tourism has moved to the forefront of national attention in Japan since September 2013, when Tokyo won its bid to host the 2020 Summer Olympics. The number of foreign tourists has continued to break records, reaching 13.4 million in 2014, and is now expected to hit 20 million sooner than the initially targeted 2020, due to government stimulus promotions, an increase in low-cost carriers, the weakening of the Japanese yen, and strong economic growth in Asia. The tourism industry can be an effective trigger for Japan’s economic recovery, as foreign tourists spent two trillion yen (about $16.6 billion) in Japan in 2014. In addition, 81% of them were from Asian countries, and it is essential to know that 68.9% of the world’s Muslims, about a billion people, live in South and Southeast Asia. An important question is, ‘Do Muslim tourists feel comfortable traveling in Japan?’ This research was initiated by an encounter with Muslim visitors in Hiroshima, a popular international tourist destination, who said they had found very few suitable restaurants in Hiroshima. The purpose of this research is to examine halal implementation in Hiroshima and to suggest the next steps to be taken to improve current efforts. The goal is to provide anyone, Muslims included, with first-class hospitality in the near future, in preparation for the massive influx of foreign tourists in 2020. The methods of this research were questionnaires, face-to-face interviews, phone interviews, and internet research. First, this research addresses the significance of growing inbound tourism in Japan, especially the expected growth in Muslim tourists, as well as the strong popularity of Japanese food in Asian Muslim countries, eating Japanese food being ranked the No. 1 thing foreign tourists want to do in Japan.
Secondly, the current incipient stage of Hiroshima’s halal implementation at hotels, restaurants, and major public places is exposed, and the existing action plans of the Hiroshima Prefecture Government are presented. Furthermore, two surveys were conducted: one to clarify the basic halal awareness of local residents in Hiroshima, and one to gauge the inconveniences faced by Muslims living in Hiroshima. Thirdly, the reasons for this lapse were examined and, by comparison with benchmarking data from other major tourist sites, halal implementation plans for Hiroshima were proposed. The conclusion is that, despite increasing demand for and interest in halal-friendly businesses, overall halal actions have barely been applied in Hiroshima: 76% of Hiroshima residents had no idea what halal (or halaal) meant. It is essential to increase awareness of halal and its importance to the economy, and to launch further actions to make Muslim tourists feel welcome in Hiroshima and the entire country.

Keywords: halaal, halal implementation, Hiroshima, inbound tourists in Japan

Procedia PDF Downloads 223
491 Hospital Malnutrition and its Impact on 30-day Mortality in Hospitalized General Medicine Patients in a Tertiary Hospital in South India

Authors: Vineet Agrawal, Deepanjali S., Medha R., Subitha L.

Abstract:

Background: Hospital malnutrition is a highly prevalent issue and is known to increase morbidity, mortality, length of hospital stay, and cost of care. In India, studies on hospital malnutrition have been restricted to ICU, post-surgical, and cancer patients. We designed this study to assess the impact of hospital malnutrition on 30-day post-discharge and in-hospital mortality in patients admitted to the general medicine department, irrespective of diagnosis. Methodology: All patients aged above 18 years admitted to the medicine wards, excluding medico-legal cases, were enrolled in the study. Nutritional assessment was done within 72 h of admission using Subjective Global Assessment (SGA), which classifies patients into three categories: severely malnourished, mildly/moderately malnourished, and normal/well-nourished. Anthropometric measurements like Body Mass Index (BMI), triceps skin-fold thickness (TSF), and mid-upper arm circumference (MUAC) were also performed. Patients were followed up during the hospital stay and 30 days after discharge through telephonic interview, and their final diagnosis, comorbidities, and cause of death were noted. A multivariate logistic regression and a Cox regression model were used to determine whether nutritional status at admission independently impacted mortality at one month. Results: The prevalence of malnourishment by SGA in our study was 67.3% among 395 hospitalized patients, of whom 155 patients (39.2%) were moderately malnourished and 111 (28.1%) were severely malnourished. Of the 395 patients, 61 (15.4%) expired: 30 died in the hospital, and 31 died within 1 month of discharge. On univariate analysis, malnourished patients had significantly higher mortality (24.3% in the 111 category C patients) than well-nourished patients (10.1% in the 129 category A patients), with OR 9.17, p-value 0.007.
On multivariate logistic regression, age and a higher Charlson Comorbidity Index (CCI) were independently associated with mortality. A higher CCI indicates a higher burden of comorbidities on admission, and the CCI in the expired patient group (mean = 4.38) was significantly higher than that of the surviving cohort (mean = 2.85). Though malnutrition significantly contributed to higher mortality on univariate analysis, it was not an independent predictor of outcome on multivariate logistic regression. Length of hospitalisation was also longer in the malnourished group (mean = 9.4 d) compared to the well-nourished group (mean = 8.03 d), with a trend towards significance (p = 0.061). None of the anthropometric measurements (BMI, MUAC, or TSF) showed any association with mortality or length of hospitalisation. Inference: The results of our study highlight the issue of hospital malnutrition in medicine wards and reiterate that malnutrition contributes significantly to patient outcomes. We found that SGA performs better than anthropometric measurements in assessing under-nutrition. We are of the opinion that the heterogeneity of the study population by diagnosis was probably the primary reason why malnutrition by SGA was not found to be an independent risk factor for mortality. Strategies are needed to identify high-risk patients at admission and to treat malnutrition in the hospital and post-discharge.
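
The univariate comparison above rests on a standard 2x2 odds ratio. A minimal sketch of that calculation follows, using hypothetical counts rather than the study's data (the study's OR of 9.17 comes from its own univariate model):

```python
def odds_ratio(exposed_events, exposed_total, control_events, control_total):
    """Unadjusted odds ratio from a 2x2 table, as used in a univariate
    mortality comparison between two nutrition categories."""
    a = exposed_events
    b = exposed_total - exposed_events      # exposed, no event
    c = control_events
    d = control_total - control_events      # control, no event
    return (a / b) / (c / d)

# Hypothetical example: 20/100 deaths among malnourished patients
# versus 10/100 among well-nourished patients
example_or = odds_ratio(20, 100, 10, 100)
```

An unadjusted OR like this motivates, but does not replace, the multivariate model: as the abstract shows, adjustment for age and CCI can remove the apparent independent effect.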

Keywords: hospitalization outcome, length of hospital stay, mortality, malnutrition, subjective global assessment (SGA)

Procedia PDF Downloads 149
490 Advancing the Analysis of Physical Activity Behaviour in Diverse, Rapidly Evolving Populations: Using Unsupervised Machine Learning to Segment and Cluster Accelerometer Data

Authors: Christopher Thornton, Niina Kolehmainen, Kianoush Nazarpour

Abstract:

Background: Accelerometers are widely used to measure physical activity behavior, including in children. The traditional method for processing acceleration data uses cut points, relying on calibration studies that relate the quantity of acceleration to energy expenditure. As these relationships do not generalise across diverse populations, they must be parametrised for each subpopulation, including different age groups, which is costly and makes studies across diverse populations difficult. A data-driven approach that allows physical activity intensity states to emerge from the data under study without relying on parameters derived from external populations offers a new perspective on this problem and potentially improved results. We evaluated the data-driven approach in a diverse population with a range of rapidly evolving physical and mental capabilities, namely very young children (9-38 months old), where this new approach may be particularly appropriate. Methods: We applied an unsupervised machine learning approach (a hidden semi-Markov model - HSMM) to segment and cluster the accelerometer data recorded from 275 children with a diverse range of physical and cognitive abilities. The HSMM was configured to identify a maximum of six physical activity intensity states and the output of the model was the time spent by each child in each of the states. For comparison, we also processed the accelerometer data using published cut points with available thresholds for the population. This provided us with time estimates for each child’s sedentary (SED), light physical activity (LPA), and moderate-to-vigorous physical activity (MVPA). Data on the children’s physical and cognitive abilities were collected using the Paediatric Evaluation of Disability Inventory (PEDI-CAT). 
Results: The HSMM identified two inactive states (INS, comparable to SED), two lightly active long-duration states (LAS, comparable to LPA), and two short-duration high-intensity states (HIS, comparable to MVPA). Overall, the children spent on average 237/392 minutes per day in INS/SED, 211/129 minutes per day in LAS/LPA, and 178/168 minutes in HIS/MVPA. We found that INS overlapped with 53% of SED, LAS overlapped with 37% of LPA, and HIS overlapped with 60% of MVPA. We also looked at the correlation between the time spent by a child in either HIS or MVPA and their physical and cognitive abilities. We found that HIS was more strongly correlated with physical mobility (R² = 0.50 for HIS vs. 0.28 for MVPA), cognitive ability (R² = 0.31 vs. 0.15), and age (R² = 0.15 vs. 0.09), indicating increased sensitivity to key attributes associated with a child’s mobility. Conclusion: An unsupervised machine learning technique can segment and cluster accelerometer data according to the intensity of movement at a given time. It provides a potentially more sensitive, appropriate, and cost-effective approach to analysing physical activity behavior in diverse populations, compared to the current cut-points approach. This, in turn, supports research that is more inclusive across diverse populations.
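
For contrast with the data-driven HSMM states, the cut-points baseline the study compares against can be sketched as a simple threshold classifier. The thresholds below are hypothetical placeholders; real cut points are device- and population-specific, which is exactly the limitation the data-driven approach targets:

```python
import numpy as np

# Hypothetical cut points in counts per 60 s epoch (placeholders only)
SED_MAX, LPA_MAX = 100, 2296

def minutes_per_state(counts, epoch_s=60):
    """Classify each accelerometer epoch by fixed cut points and return
    minutes spent sedentary (SED), in light activity (LPA), and in
    moderate-to-vigorous activity (MVPA)."""
    counts = np.asarray(counts)
    scale = epoch_s / 60.0
    return {
        "SED": float(np.sum(counts <= SED_MAX)) * scale,
        "LPA": float(np.sum((counts > SED_MAX) & (counts <= LPA_MAX))) * scale,
        "MVPA": float(np.sum(counts > LPA_MAX)) * scale,
    }
```

The HSMM replaces these fixed thresholds with intensity states learned from the data itself, so no population-specific calibration of SED_MAX and LPA_MAX is needed.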

Keywords: physical activity, machine learning, under 5s, disability, accelerometer

Procedia PDF Downloads 210
489 The Effect of Rice Husk Ash on the Mechanical and Durability Properties of Concrete

Authors: Binyamien Rasoul

Abstract:

Portland cement is one of the most widely used construction materials in the world today; however, the manufacture of ordinary Portland cement (OPC) emits a significant amount of CO2, resulting in environmental impact. On the other hand, rice husk ash (RHA), produced as a by-product material, is generally considered an environmental issue as a waste material. RHA consists of non-crystalline silicon dioxide with a high specific surface area and high pozzolanic reactivity. These properties can significantly improve the mechanical and durability properties of mortar and concrete. Furthermore, rice husk ash is cost-effective and makes concrete more sustainable. In this paper, the effects of chemical composition, reactive silica content, and fineness were assessed by examining five different types of RHA. Mortar and concrete specimens were molded with 5% to 50% of ash replacing the Portland cement, and their compressive and tensile strength behavior was measured. Beyond this, two further parameters were considered: the durability of concrete blended with RHA, and the effect of temperature on the transformation of the amorphous silica structure to crystalline form. The different RHA types were subjected to X-ray fluorescence to determine their chemical composition, while pozzolanic activity was assessed using X-ray diffraction. Fineness and specific surface area were obtained with a Malvern Mastersizer 2000. The properties of fresh mortar and concrete were measured using flow table and slump tests, while for hardened mortar and concrete the compressive and tensile strengths were determined, along with chloride ion penetration for concrete using the NT Build 492 (NordTest) non-steady-state migration test (RMT test).
The test results indicated that RHA can be used as a cement replacement material in concrete at proportions of up to 50% without compromising concrete strength. The use of RHA as a blending material improved various characteristics of the concrete product. The paper concludes that, for OPC mortar or concrete to exhibit good compressive strength as the RHA replacement ratio increases, the rice husk ash should have a high silica content and high pozzolanic activity. Furthermore, RHA with a high carbon content (12%) can still improve the strength of concrete when the silica structure is totally amorphous. Likewise, RHA with a high proportion of crystalline silica (25%) can be used as a cement replacement when the silica content is over 90%. The workability and strength of concrete increased with the use of a superplasticizer, depending on the silica structure and carbon content. This study is therefore an investigation of the effect of partially replacing ordinary Portland cement (OPC) with rice husk ash (RHA) on the mechanical properties and durability of concrete. The results support the use of RHA in sustainable construction to reduce the carbon footprint associated with the cement industry.
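The 5% to 50% replacement series amounts to simple mass proportioning of the binder. A minimal sketch with assumed quantities follows; the binder content and W/B ratio below are illustrative, not values from the study:

```python
def blended_binder(total_binder_kg, replacement_pct):
    """Split a binder mass between OPC and RHA for a given
    replacement percentage (by mass, as in the 5%-50% series)."""
    rha = total_binder_kg * replacement_pct / 100.0
    return total_binder_kg - rha, rha

def water_mass(total_binder_kg, w_b_ratio):
    """Mixing water from the water-to-binder (W/B) ratio."""
    return total_binder_kg * w_b_ratio

# e.g. an assumed 400 kg/m3 binder content at 30% replacement
opc, rha = blended_binder(400.0, 30.0)  # -> 280.0 kg OPC, 120.0 kg RHA
water = water_mass(400.0, 0.5)          # -> 200.0 kg water
```

Since RHA is less dense than OPC, replacement by mass increases paste volume slightly, one reason a superplasticizer is needed at higher replacement ratios to maintain workability.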

Keywords: OPC (ordinary Portland cement), RHA (rice husk ash), W/B (water-to-binder) ratio, CO2 (carbon dioxide)

Procedia PDF Downloads 192