Search results for: risk mismanagement
394 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration
Authors: Matthew Yeager, Christopher Willy, John Bischoff
Abstract:
The conceptualization and design phases of a system lifecycle consume a significant amount of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs associated with unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements oftentimes fail to consider the full array of feasible systems or product designs for a variety of reasons, including, but not limited to: initial conceptualization that incorporates a priori or legacy features; the inability to capture, communicate and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally-, but not globally-, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks and support activities, heightening the risk of suboptimal system performance, premature obsolescence or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (e.g. sensors, CPUs, modular/auxiliary access) as well as recognition, data fusion and communication protocols has become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and utilize a non-deterministic approach for sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based techniques in order to explore complex design environments and discover better system designs.
Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas previous work has focused on aerospace systems and been conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g. hardware components) and popular multi-sensor data fusion models and techniques. Furthermore, adding statistical performance features to this model-based MATE approach will enable non-deterministic techniques for various commercial systems that range in application, complexity and system behavior, demonstrating significant utility within the realm of formal systems decision-making.
Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design
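In spirit, the tradespace enumeration and weighted multi-attribute utility scoring described above can be sketched in a few lines. The components, attribute weights, and budget below are entirely hypothetical placeholders, not values from the study:

```python
from itertools import product

# Hypothetical component catalogues (assumed values, not the study's data).
sensors = [{"name": "lidar", "accuracy": 0.9, "cost": 120},
           {"name": "radar", "accuracy": 0.7, "cost": 60},
           {"name": "camera", "accuracy": 0.6, "cost": 30}]
cpus = [{"name": "fast", "throughput": 1.0, "cost": 80},
        {"name": "slow", "throughput": 0.5, "cost": 20}]
weights = {"accuracy": 0.5, "throughput": 0.3, "cost": 0.2}  # assumed stakeholder weights
budget = 180

def utility(sensor, cpu):
    # Weighted sum of normalized attribute utilities; cost utility
    # decreases linearly with spend against the budget.
    cost = sensor["cost"] + cpu["cost"]
    cost_u = max(0.0, 1.0 - cost / budget)
    return (weights["accuracy"] * sensor["accuracy"]
            + weights["throughput"] * cpu["throughput"]
            + weights["cost"] * cost_u)

# Enumerate every feasible design (within budget) and score it.
designs = [(s["name"], c["name"], round(utility(s, c), 3))
           for s, c in product(sensors, cpus)
           if s["cost"] + c["cost"] <= budget]
best = max(designs, key=lambda d: d[2])
print(best)
```

In the non-deterministic setting the abstract proposes, the fixed attribute values would be replaced by probability distributions and the utility estimated by sampling, e.g. Monte Carlo.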
Procedia PDF Downloads 189
393 Bariatric Surgery Referral as an Alternative to Fundoplication in Obese Patients Presenting with GORD: A Retrospective Hospital-Based Cohort Study
Authors: T. Arkle, D. Pournaras, S. Lam, B. Kumar
Abstract:
Introduction: Fundoplication is widely recognised as the best surgical option for gastro-oesophageal reflux disease (GORD) in the general population. However, there is controversy surrounding the use of conventional fundoplication in obese patients. Whilst the intra-operative failure of fundoplication, including wrap disruption, is reportedly higher in obese individuals, the more significant issue is symptom recurrence post-surgery. Could a bariatric procedure be considered in obese patients for weight management, to treat the GORD, and also to reduce the risk of recurrence? Roux-en-Y gastric bypass, a widely performed bariatric procedure, has been shown to be highly successful both in controlling GORD symptoms and in weight management in obese patients. Furthermore, NICE has published clear guidelines on eligibility for bariatric surgery, with the main criteria being class 3 obesity, or class 2 obesity with the presence of significant co-morbidities that would improve with weight loss. This study aims to identify the proportion of patients undergoing conventional fundoplication for GORD and/or hiatus hernia who would have been eligible for bariatric surgery referral according to NICE guidelines. Methods: All patients who underwent fundoplication procedures for GORD and/or hiatus hernia repair at a single NHS foundation trust over a 10-year period were identified using the Trust’s health records database. Pre-operative patient records were used to find BMI and the presence of significant co-morbidities at the time of consideration for surgery. This information was compared to NICE guidelines to determine potential eligibility for bariatric surgical referral at the time of initial surgical intervention. Results: A total of 321 patients underwent fundoplication procedures between January 2011 and December 2020; 133 (41.4%) had data available for BMI or to allow BMI to be estimated.
Of those 133, 40 patients (30%) had a BMI greater than 30 kg/m², and 7 (5.3%) had a BMI >35 kg/m². One patient (0.75%) had a BMI >40 kg/m² and would therefore be automatically eligible according to NICE guidelines. Four further patients had significant co-morbidities, such as hypertension and osteoarthritis, which would likely be improved by weight management surgery and therefore also indicated eligibility for referral. Overall, 3.75% (5/133) of patients undergoing conventional fundoplication procedures would have been eligible for bariatric surgical referral; these patients were all female, and the average age was 60.4 years. Conclusions: Based on this Trust’s experience, around 4% of obese patients undergoing fundoplication would have been eligible for bariatric surgical intervention. Based on current evidence, among class 2/3 obese patients there is likely to have been a notable proportion with recurrent disease, potentially requiring further intervention. These patients may have benefited more from undergoing bariatric surgery, for example a Roux-en-Y gastric bypass, addressing both their obesity and GORD. Use of patients' written notes to obtain BMI data for the 188 patients with missing BMI data, and further analysis to determine outcomes following fundoplication in all patients, assessing for incidence of recurrent disease, will be undertaken to strengthen these conclusions.
Keywords: bariatric surgery, GORD, Nissen fundoplication, NICE guidelines
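As a rough illustration, the eligibility rule applied in this study can be encoded as a small function. This is a simplification of the criteria as summarised in the abstract, not the full NICE guidance, using 35 and 40 kg/m² as the class 2 and class 3 obesity thresholds:

```python
def eligible_for_bariatric_referral(bmi, has_significant_comorbidities):
    """Simplified encoding of the eligibility rule described in the abstract
    (class 3 obesity, or class 2 obesity with significant co-morbidities);
    not a substitute for the full NICE guidance."""
    if bmi >= 40:
        # Class 3 obesity: automatically eligible.
        return True
    if 35 <= bmi < 40:
        # Class 2 obesity: eligible only with significant co-morbidities.
        return has_significant_comorbidities
    return False

# Illustrative checks (made-up patients, not cohort data).
print(eligible_for_bariatric_referral(41.2, False))  # True
print(eligible_for_bariatric_referral(36.0, True))   # True
print(eligible_for_bariatric_referral(36.0, False))  # False
```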
Procedia PDF Downloads 60
392 An Exploration of the Emergency Staff’s Perceptions and Experiences of Teamwork and the Skills Required in the Emergency Department in Saudi Arabia
Authors: Sami Alanazi
Abstract:
Teamwork practices have been recognized as a significant strategy to improve patient safety, quality of care, and staff and patient satisfaction in healthcare settings, particularly within the emergency department (ED). EDs depend heavily on teams of interdisciplinary healthcare staff to carry out their operational goals and core business of providing care to the seriously ill and injured. The ED is also recognized as a high-risk area in relation to service demand and the potential for human error. Few studies have considered the perceptions and experiences of ED staff (physicians, nurses, allied health professionals, and administration staff) about the practice of teamwork, especially in Saudi Arabia (SA), and no studies have been conducted to explore the practices of teamwork in its EDs. Aim: To explore the practices of teamwork from the perspectives and experiences of staff (physicians, nurses, allied health professionals, and administration staff) when interacting with each other in the admission areas of the ED of a public hospital in the Northern Border region of SA. Method: A qualitative case study design was utilized, drawing on two methods of data collection: semi-structured interviews (n=22) with physicians (6), nurses (10), allied health professionals (3), and administrative members (3) working in the ED of a hospital in the Northern Border region of SA; and non-participant direct observation. All data were analyzed using thematic analysis. Findings: The main themes that emerged from the analysis were: the meaning of teamwork, reasons for teamwork, ED environmental factors, organizational factors, the value of communication, leadership, teamwork skills in the ED, team members' behaviors, multicultural teamwork, and patient and family behaviors. Discussion: Working in the ED environment played a major role in affecting work performance as well as team dynamics.
Communication, time management, fast-paced performance, multitasking, motivation, leadership, and stress management were highlighted by the participants as fundamental skills that have a major impact on team members and patients in the ED. It was found that the behaviors of the team members, such as disputes, conflict, cooperation or the lack of it, neglect, and the members' emotions, impacted team dynamics as well as ED health services. The behaviors of the patients and those accompanying them also had a direct impact on the team and the quality of the services. In addition, cultural differences separated team members and created undesirable gaps, such as gender segregation, national origin discrimination, and similarities and differences in interests. Conclusion: Effective teamwork, in the context of the emergency department, was recognized as an essential element in achieving quality of care as well as improving staff satisfaction.
Keywords: teamwork, barrier, facilitator, emergency department
Procedia PDF Downloads 143
391 Using Machine Learning to Extract Patient Data from Non-standardized Sports Medicine Physician Notes
Authors: Thomas Q. Pan, Anika Basu, Chamith S. Rajapakse
Abstract:
Machine learning requires data that is categorized into features that models train on. This topic is important to the field of sports medicine due to the many tools machine learning provides to physicians, such as diagnosis support and risk assessment. However, the notes that healthcare professionals take are usually unclean and not suitable for model training. The objective of this study was to develop and evaluate an advanced approach for extracting key features from sports medicine data without the need for extensive model training or data labeling. An LLM (Large Language Model) was given a narrative (physician’s notes) and prompted to extract four features (details about the patient). The narrative was found in a datasheet that contained six columns: Case Number, Validation Age, Validation Gender, Validation Diagnosis, Validation Body Part, and Narrative. The validation columns represent the accurate responses that the LLM attempts to output. Given the narrative, the LLM would output its response and extract the age, gender, diagnosis, and injured body part, with each category taking up one line. The output would then be cleaned, matched, and added to new columns containing the extracted responses. Five ways of checking the accuracy were used: unclear count, substring comparison, LLM comparison, LLM re-check, and hand-evaluation. The unclear count essentially represents the extractions the LLM missed, and can be understood as the recall score ([total − false negatives] over total). The rest correspond to the precision score ([total − false positives] over total). Substring comparison evaluated the likeness of the validation (X) and extracted (Y) columns by checking whether X's result was a substring of Y's and vice versa. LLM comparison directly asked an LLM whether the X and Y results were similar. LLM re-check prompted the LLM to see whether the extracted results could be found in the narrative.
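The substring comparison and the recall/precision formulas above can be made concrete with a short sketch; the example string pairs and error counts below are hypothetical, not the study's data:

```python
def substring_match(expected, extracted):
    # Bidirectional substring check, mirroring the comparison described above:
    # a pair counts as a match if either string contains the other.
    a, b = expected.strip().lower(), extracted.strip().lower()
    return a in b or b in a

def recall(total, false_negatives):
    # The abstract's definition: (total - false negatives) / total.
    return (total - false_negatives) / total

def precision(total, false_positives):
    # The abstract's definition: (total - false positives) / total.
    return (total - false_positives) / total

# Hypothetical (validation, extracted) pairs; note the first pair fails the
# substring rule even though "chest" is part of the "upper trunk".
pairs = [("upper trunk", "chest"),
         ("knee", "left knee"),
         ("ankle sprain", "ankle sprain"),
         ("shoulder", "elbow")]
matches = sum(substring_match(e, x) for e, x in pairs)
print(matches, recall(10000, 200), precision(10000, 2400))
```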
Lastly, a selection of 1,000 random narratives was hand-evaluated to give an estimate of how well the LLM-based feature extraction model performed. On a selection of 10,000 narratives, the LLM-based approach had a recall score of roughly 98%. However, the precision scores of the substring comparison and LLM comparison models were around 72% and 76%, respectively. These low figures are due to minute differences between answers: for example, the ‘chest’ is part of the ‘upper trunk’, but these models cannot detect that. On the other hand, the LLM re-check and the subset of hand-tested narratives showed precision scores of 96% and 95%. If this subset is used to extrapolate the likely outcome for the whole 10,000 narratives, the LLM-based approach would be strong in both precision and recall. These results indicate that an LLM-based feature extraction model could be a useful way for medical data in sports to be collected and analyzed by machine learning models. Wide use of this method could potentially increase the availability of data, thus improving machine learning algorithms and supporting doctors with more enhanced tools.
Procedia PDF Downloads 17
390 Focus on the Bactericidal Efficacies of Alkaline Agents in Solid and the Required Time for Bacterial Inactivation
Authors: Hakimullah Hakim, Chiharu Toyofuku, Mari Ota, Mayuko Suzuki, Miyuki Komura, Masashi Yamada, Md. Shahin Alam, Natthanan Sangsriratanakul, Dany Shoham, Kazuaki Takehara
Abstract:
Disinfectants and their application are an essential part of infection control strategies and of enhancing biosecurity at farms worldwide. Alkaline agents are well known for their strong and long-term antimicrobial capacities and are most frequently applied at farms for the control and prevention of biological hazards. However, inadequate information regarding such materials' capacities to inactivate pathogens, and their improper application, prevent farmers from achieving this goal. This warrants further evaluation of their efficacies under different conditions and in different ways. In this study, we evaluated the bactericidal efficacies of food additive grade calcium hydroxide (FdCa(OH)2) powder, derived from natural calcium carbonate obtained from limestone (Fine Co., Ltd., Tokyo, Japan), and of bioceramic powder (BCX) at pH 13, derived from chicken feces (NMG environmental development Co., Ltd., Tokyo, Japan), against bacteria in feces. [Materials & Methods] Chicken feces were individually inoculated with 100 µl of Escherichia coli or Salmonella Infantis in Falcon tubes; then FdCa(OH)2 or BCX powder was added to make final concentrations of 0, 5, 10, 20 and 30% (w/w) in a total weight of 0.5 g, followed by thorough mixing and incubation at room temperature for set periods of time in a dark place. Afterwards, 10 ml of 1 M Tris-HCl (pH 7.2) was added to reduce the pH, in order to stop the powders' activity and to harvest the remaining viable bacteria; using normal medium or dW2 to recover bacteria would increase the mixture pH, inactivating bacteria and thus producing incorrect and misleading results. Samples were then inoculated on DHL agar plates in order to calculate colony forming units (CFU)/ml of viable bacteria.
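For reference, the plate-count arithmetic behind CFU/ml figures of this kind is straightforward; the colony count, dilution, and plated volume in this sketch are invented example numbers, not the study's measurements:

```python
import math

def cfu_per_ml(colonies, dilution_factor, volume_plated_ml):
    # Standard plate-count arithmetic (illustrative, not the authors' exact
    # protocol): CFU/ml = colonies / (volume plated x dilution factor).
    return colonies / (volume_plated_ml * dilution_factor)

# e.g. 36 colonies on a plate inoculated with 0.1 ml of a 10^-4 dilution
cfu = cfu_per_ml(36, 1e-4, 0.1)
print(cfu, round(math.log10(cfu), 1))  # 3.6e6 CFU/ml, i.e. 6.6 log10 CFU/ml
```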
[Results and Discussion] FdCa(OH)2 powder at 10% and 5% required 3 hr and 6 hr exposure times, respectively, while BCX powder at 20% concentration required 6 hr exposure time, to kill the mentioned bacteria in feces down to lower than the detectable level (≤ 3.6 log10 CFU/ml). This study confirmed the capacities of FdCa(OH)2 and BCX powders to inactivate bacteria in feces; both are environmentally friendly materials, with no risk to human or animal health. These findings help farmers to apply alkaline agents at appropriate concentrations and exposure times in their farms, in order to prevent and control infectious disease outbreaks and to enhance biosecurity, and may help farmers to implement better infection control strategies in their livestock farms.
Keywords: bacterial inactivation, bioceramic, biosecurity at livestock farms, chicken feces
Procedia PDF Downloads 442
389 Cytotoxicity and Genotoxicity of Glyphosate and Its Two Impurities in Human Peripheral Blood Mononuclear Cells
Authors: Marta Kwiatkowska, Paweł Jarosiewicz, Bożena Bukowska
Abstract:
Glyphosate (N-phosphonomethylglycine) is a non-selective, broad-spectrum active ingredient of herbicides (Roundup) used for over 35 years for the protection of agricultural and horticultural crops. Glyphosate was believed to be environmentally friendly, but recently a large body of evidence has revealed that glyphosate can negatively affect the environment and humans. It has been found that glyphosate is present in soil and groundwater. It can also enter the human body, where it occurs in blood at low concentrations of 73.6 ± 28.2 ng/ml. Research on potential genotoxicity and cytotoxicity can be an important element in determining the toxic effect of glyphosate. Under Regulation (EC) No 1107/2009 of the European Parliament, it is important to assess the genotoxicity and cytotoxicity not only of the parent substance but also of its impurities, which are formed at different stages of production of the major substance, glyphosate, and to verify which of these compounds is more toxic. Understanding the molecular pathways of action is extremely important in the context of environmental risk assessment. In 2002, the European Union decided that glyphosate is not genotoxic. However, studies recently performed around the world have contested the decision taken by the committee of the European Union. The World Health Organization (WHO) in March 2015 decided to change the classification of glyphosate to category 2A, meaning the compound is considered "probably carcinogenic to humans". This category relates to compounds for which there is limited evidence of carcinogenicity in humans and sufficient evidence of carcinogenicity in experimental animals.
That is why we investigated the genotoxic and cytotoxic effects of the most commonly used pesticide, glyphosate, and its impurities N-(phosphonomethyl)iminodiacetic acid (PMIDA) and bis-(phosphonomethyl)amine on human peripheral blood mononuclear cells (PBMCs), mostly lymphocytes. DNA damage (analysis of DNA strand breaks) using single cell gel electrophoresis (the comet assay) and ATP level were assessed. Cells were incubated with glyphosate and its impurities PMIDA and bis-(phosphonomethyl)amine at concentrations from 0.01 to 10 mM for 24 hours. Evaluation of genotoxicity using the comet assay showed a concentration-dependent increase in DNA damage for all compounds studied. ATP level was decreased to zero at the highest concentration of the two investigated impurities, bis-(phosphonomethyl)amine and PMIDA. Changes were observed at the highest concentrations, to which a person could be exposed only as a result of acute intoxication. Our study leads to the conclusion that the investigated compounds exhibit genotoxic and cytotoxic potential, but only at high concentrations to which people are not exposed environmentally. Acknowledgments: This work was supported by the Polish National Science Centre (Contract 2013/11/N/NZ7/00371), MSc Marta Kwiatkowska, project manager.
Keywords: cell viability, DNA damage, glyphosate, impurities, peripheral blood mononuclear cells
Procedia PDF Downloads 483
388 Wood Energy, Trees outside Forests and Agroforestry Wood Harvesting and Conversion Residues Preparing and Storing
Authors: Adeiza Matthew, Oluwadamilola Abubakar
Abstract:
Wood energy, also known as wood fuel, is a renewable energy source that is derived from woody biomass, which is organic matter that is harvested from forests, woodlands, and other lands. Woody biomass includes trees, branches, twigs, and other woody debris that can be used as fuel. Wood energy can be classified based on its sources, such as trees outside forests, residues from wood harvesting and conversion, and energy plantations. There are several policy frameworks that support the use of wood energy, including participatory forest management and agroforestry. These policies aim to promote the sustainable use of woody biomass as a source of energy while also protecting forests and wildlife habitats. There are several options for using wood as a fuel, including central heating systems, pellet-based systems, wood chip-based systems, log boilers, fireplaces, and stoves. Each of these options has its own benefits and drawbacks, and the most appropriate option will depend on factors such as the availability of woody biomass, the heating needs of the household or facility, and the local climate. In order to use wood as a fuel, it must be harvested and stored properly. Hardwood or softwood can be used as fuel, and the heating value of firewood depends on the species of tree and the degree of moisture content. Proper harvesting and storage of wood can help to minimize environmental impacts and improve wildlife habitats. The use of wood energy has several environmental impacts, including the release of greenhouse gases during combustion and the potential for air pollution from combustion by-products. However, wood energy can also have positive environmental impacts, such as the sequestration of carbon in trees and the reduction of reliance on fossil fuels. The regulation and legislation of wood energy vary by country and region, and there is an ongoing debate about the potential use of wood energy in renewable energy technologies. 
Wood energy is a renewable energy source that can be used to generate electricity, heat, and transportation fuels. Woody biomass is abundant and widely available, making it a potentially significant source of energy for many countries. The use of wood energy can create local economic and employment opportunities, particularly in rural areas. Wood energy can be used to reduce reliance on fossil fuels and reduce greenhouse gas emissions. Properly managed forests can provide a sustained supply of woody biomass for energy, helping to reduce the risk of deforestation and habitat loss. Wood energy can be produced using a variety of technologies, including direct combustion, co-firing with fossil fuels, and the production of biofuels. The environmental impacts of wood energy can be minimized through the use of best practices in harvesting, transportation, and processing. Wood energy is regulated and legislated at the national and international levels, and there are various standards and certification systems in place to promote sustainable practices. Wood energy has the potential to play a significant role in the transition to a low-carbon economy and the achievement of climate change mitigation goals.
Keywords: biomass, timber, charcoal, firewood
Procedia PDF Downloads 101
387 Business Intelligent to a Decision Support Tool for Green Entrepreneurship: Meso and Macro Regions
Authors: Anishur Rahman, Maria Areias, Diogo Simões, Ana Figeuiredo, Filipa Figueiredo, João Nunes
Abstract:
The circular economy (CE) has gained increased awareness among academics, businesses, and decision-makers, as it stimulates resource circularity in production and consumption systems. A large body of work has explored the principles of CE, but scant attention has focused on analysing how CE is evaluated, consented to, and enforced using economic metabolism data and a business intelligence framework. Economic metabolism involves the ongoing exchange of materials and energy within and across socio-economic systems and requires the assessment of vast amounts of data to provide quantitative analysis related to effective resource management. The present work focuses on regional flows in a pilot region of Portugal. By addressing this gap, this study aims to promote eco-innovation and sustainability in the regions of the Intermunicipal Communities Região de Coimbra, Viseu Dão Lafões and Beiras e Serra da Estrela, using these data to find precise synergies in terms of material flows and to give companies a competitive advantage in the form of valuable waste destinations, access to new resources and new markets, cost reduction and risk-sharing benefits. In our work, emphasis is placed on applying artificial intelligence (AI) and, more specifically, on implementing state-of-the-art deep learning algorithms, contributing to the construction of a business intelligence approach. With the emergence of new approaches generally highlighted under the sub-heading of AI and machine learning (ML), the methods for statistical analysis of complex and uncertain production systems are facing significant changes. Therefore, various definitions of AI and its differences from traditional statistics are presented; furthermore, ML is introduced to identify its place in data science, its differences from topics such as big data analytics, and the production problems to which AI and ML can be applied.
A lifecycle-based approach is then taken to analyse the use of different methods in each phase, to identify the most useful technologies and unifying attributes of AI in manufacturing. Most macroeconomic metabolism models are directed mainly at the contexts of large metropolises, neglecting rural territories; within this project, a dynamic decision support model coupled with artificial intelligence tools and information platforms will therefore be developed, focused on the reality of these transition zones between the rural and the urban. Thus, a real decision support tool is under development, which will surpass the scientific developments carried out to date and will allow limitations related to the availability and reliability of data to be overcome.
Keywords: circular economy, artificial intelligence, economic metabolisms, machine learning
Procedia PDF Downloads 74
386 Impact Location From Instrumented Mouthguard Kinematic Data In Rugby
Authors: Jazim Sohail, Filipe Teixeira-Dias
Abstract:
Mild traumatic brain injury (mTBI) in non-helmeted contact sports is a growing concern due to the serious risk of injury. Extensive research is being conducted into head kinematics in non-helmeted contact sports, utilizing instrumented mouthguards that allow researchers to record accelerations and velocities of the head during and after an impact. These do not, however, allow the location of the impact on the head, or its magnitude and orientation, to be determined. This research proposes and validates two methods to quantify impact locations from instrumented mouthguard kinematic data: one using rigid body dynamics, the other utilizing machine learning. The rigid body dynamics technique focuses on establishing and matching moments from Euler's and torque equations in order to find the impact location on the head. The methodology is validated with impact data collected from a lab test with a dummy head fitted with an instrumented mouthguard. Additionally, a Hybrid III dummy head finite element model was utilized to create synthetic kinematic data sets for impacts from varying locations to validate the impact location algorithm. The algorithm calculates accurate impact locations; however, it requires preprocessing of live data, which is currently done by cross-referencing data timestamps with video footage. The machine learning technique focuses on eliminating this preprocessing step by establishing trends within time-series signals from instrumented mouthguards to determine the impact location on the head. An unsupervised learning technique is used to cluster together impacts within similar regions from an entire time-series signal. The kinematic signals from the mouthguards are converted to the frequency domain before a clustering algorithm groups similar signals within a time series that may span the length of a game. Impacts are clustered into predetermined location bins.
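The frequency-domain clustering idea can be illustrated with a self-contained sketch in which damped sinusoids stand in for real mouthguard kinematics; the frequencies, sampling rate, and two-region setup are assumptions for illustration only, not the study's data or algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for mouthguard kinematics: impacts from two hypothetical
# head regions are modeled as damped sinusoids with different dominant
# frequencies (assumed values).
def synthetic_impact(freq_hz, n=256, fs=1000.0):
    t = np.arange(n) / fs
    return np.exp(-20 * t) * np.sin(2 * np.pi * freq_hz * t) \
        + 0.01 * rng.standard_normal(n)

signals = np.array([synthetic_impact(50) for _ in range(10)]
                   + [synthetic_impact(150) for _ in range(10)])
region = np.array([0] * 10 + [1] * 10)  # ground-truth region labels

# Frequency-domain features, as in the approach described above.
feats = np.abs(np.fft.rfft(signals, axis=1))
feats /= feats.max(axis=1, keepdims=True)

def kmeans2(X, iters=50):
    # Minimal two-cluster k-means; initialized from the first and last
    # samples for determinism in this sketch.
    centers = X[[0, -1]].copy()
    for _ in range(iters):
        dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        assign = dist.argmin(axis=1)
        centers = np.array([X[assign == j].mean(axis=0) for j in (0, 1)])
    return assign

assign = kmeans2(feats)
# The clusters should recover the two regions (up to label permutation).
agreement = max((assign == region).mean(), (assign != region).mean())
print(agreement)
```

On real data, the cluster count would match the number of predetermined location bins, and outlying signals far from every cluster center could additionally be flagged as false impacts.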
The same Hybrid III dummy finite element model is used to create impacts that closely replicate on-field impacts, in order to create synthetic time-series datasets consisting of impacts at varying locations. These time-series data sets are used to validate the machine learning technique. The rigid body dynamics technique provides a good method to establish accurate impact locations for impact signals that have already been labeled as true impacts and filtered out of the entire time series. The machine learning technique, by contrast, can be implemented on long time-series signal data, but provides impact locations only within predetermined regions on the head. Additionally, the machine learning technique can be used to eliminate false impacts captured by sensors, saving additional time for data scientists using instrumented mouthguard kinematic data, as validating true impacts with video footage would not be required.
Keywords: head impacts, impact location, instrumented mouthguard, machine learning, mTBI
Procedia PDF Downloads 218
385 Automatic Identification and Classification of Contaminated Biodegradable Plastics using Machine Learning Algorithms and Hyperspectral Imaging Technology
Authors: Nutcha Taneepanichskul, Helen C. Hailes, Mark Miodownik
Abstract:
Plastic waste has emerged as a critical global environmental challenge, primarily driven by the prevalent use of conventional plastics, derived from petrochemical refining and manufacturing processes, in modern packaging. While these plastics serve vital functions, their persistence in the environment post-disposal poses significant threats to ecosystems. Addressing this issue necessitates new approaches, one of which involves the development of biodegradable plastics designed to degrade under controlled conditions, such as industrial composting facilities. It is imperative to note that compostable plastics are engineered for degradation within specific environments and are not suited for uncontrolled settings, including natural landscapes and aquatic ecosystems. The full benefits of compostable packaging are realized when it is subjected to industrial composting, preventing environmental contamination and waste stream pollution. Therefore, effective sorting technologies are essential to enhance composting rates for these materials and diminish the risk of contaminating recycling streams. In this study, we leverage hyperspectral imaging technology (HSI) coupled with advanced machine learning algorithms to accurately identify various types of plastics, encompassing conventional variants such as polyethylene terephthalate (PET), polypropylene (PP), low-density polyethylene (LDPE) and high-density polyethylene (HDPE), and biodegradable alternatives such as polybutylene adipate terephthalate (PBAT), polylactic acid (PLA), and polyhydroxyalkanoates (PHA). The dataset is partitioned into three subsets: a training dataset comprising uncontaminated conventional and biodegradable plastics, a validation dataset encompassing contaminated plastics of both types, and a testing dataset featuring real-world packaging items in both pristine and contaminated states.
Five distinct machine learning algorithms, namely Partial Least Squares Discriminant Analysis (PLS-DA), Support Vector Machine (SVM), Convolutional Neural Network (CNN), Logistic Regression, and Decision Tree, were developed and evaluated for their classification performance. Remarkably, the Logistic Regression and CNN models exhibited the most promising outcomes, achieving a perfect accuracy rate of 100% on the training and validation datasets. Notably, the testing dataset yielded an accuracy exceeding 80%. The successful implementation of this sorting technology within recycling and composting facilities holds the potential to significantly elevate recycling and composting rates. As a result, the envisioned circular economy for plastics can be established, thereby offering a viable solution to mitigate plastic pollution.
Keywords: biodegradable plastics, sorting technology, hyperspectral imaging technology, machine learning algorithms
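One of the simpler classifiers named above, logistic regression, can be sketched end-to-end on synthetic two-class "spectra". The band count, peak positions, noise level, and class assignments are invented for illustration and are not the study's data or pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
bands = np.linspace(0.0, 1.0, 100)  # normalized wavelength axis (assumed)

def spectra(peak_center, n):
    # Each hypothetical plastic class is a Gaussian reflectance peak at a
    # different band, plus measurement noise.
    base = np.exp(-((bands - peak_center) ** 2) / 0.005)
    return base + 0.05 * rng.standard_normal((n, bands.size))

X = np.vstack([spectra(0.3, 40), spectra(0.7, 40)])  # two invented classes
y = np.array([0] * 40 + [1] * 40)

# Minimal logistic regression trained by batch gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= 0.5 * X.T @ (p - y) / len(y)        # gradient step on weights
    b -= 0.5 * (p - y).mean()                # gradient step on bias

pred = ((X @ w + b) > 0.0).astype(int)
train_accuracy = (pred == y).mean()
print(train_accuracy)
```

The PLS-DA, SVM, CNN and decision-tree models evaluated in the study would slot into the same pipeline, differing only in the model-fitting step.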
Procedia PDF Downloads 83
384 A Greener Approach towards the Synthesis of an Antimalarial Drug Lumefantrine
Authors: Luphumlo Ncanywa, Paul Watts
Abstract:
Malaria is a disease that kills approximately one million people annually. Children and pregnant women in sub-Saharan Africa are especially likely to lose their lives to malaria, which continues to be one of the major causes of death, particularly in poor countries in Africa. Decreasing the burden of malaria and saving lives is therefore essential. There is a major concern about malaria parasites developing resistance towards antimalarial drugs, and people are still dying due to the lack of medicine affordability in less well-off countries of the world. If more people could receive treatment by reducing the cost of drugs, the number of deaths in Africa could be massively reduced. There is a shortage of pharmaceutical manufacturing capability within many of the countries in Africa. However, one has to question how Africa would actually manufacture drugs, active pharmaceutical ingredients or medicines developed within these research programs. It is quite likely that such manufacturing would be outsourced overseas, hence increasing the cost of production and potentially limiting the full benefit of the original research. As a result, the last few years have seen major interest in developing more effective and cheaper technology for manufacturing generic pharmaceutical products. Micro-reactor technology (MRT) is an emerging technique that enables those working in research and development to rapidly screen reactions utilizing continuous flow, leading to the identification of reaction conditions that are suitable for use at a production level. This emerging technique will be used to develop antimalarial drugs. It is this system flexibility that has the potential to reduce both the time taken and the risk associated with transferring reaction methodology from research to production.
Using an approach referred to as scale-out or numbering up, a reaction is first optimized within the laboratory using a single micro-reactor, and in order to increase production volume, the number of reactors employed is simply increased. The overall aim of this research project is to develop and optimize the synthesis of antimalarial drugs in continuous processing. This will provide a step change in pharmaceutical manufacturing technology that will increase the availability and affordability of antimalarial drugs on a worldwide scale, with a particular emphasis on Africa in the first instance. The research will determine the best chemistry and technology to define the lowest-cost manufacturing route to pharmaceutical products. We are currently developing a method to synthesize Lumefantrine in continuous flow, using the batch process as a benchmark. Lumefantrine is a dichlorobenzylidene derivative effective for the treatment of various types of malaria; it is used with artemether for the treatment of uncomplicated malaria. The results obtained when synthesizing Lumefantrine in a batch process are transferred into a continuous flow process in order to develop an even better and reproducible process. Development of an appropriate synthetic route for Lumefantrine is therefore significant for the pharmaceutical industry. Consequently, if better (and cheaper) manufacturing routes to antimalarial drugs could be developed and implemented where needed, antimalarial drugs would be far more likely to be available to those in need.
Keywords: antimalarial, flow, lumefantrine, synthesis
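The "numbering up" idea lends itself to a back-of-envelope calculation: total throughput scales linearly with the number of identical parallel micro-reactors. The throughput and production figures below are invented for illustration and are not from this research.

```python
import math

# Hypothetical figures for illustration only (not from the study):
single_reactor_g_per_h = 5.0   # optimized lab-scale micro-reactor output
target_kg_per_day = 12.0       # assumed production target

# Scale-out: output grows linearly with the number of parallel reactors,
# so the required count is simply target volume / single-reactor volume.
required = (target_kg_per_day * 1000.0) / (single_reactor_g_per_h * 24.0)
n_reactors = math.ceil(required)
print(n_reactors)  # 100 parallel reactors for these assumed figures
```

The appeal of this approach is that the optimized laboratory conditions are reused unchanged at production scale, avoiding the re-optimization that conventional batch scale-up requires.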
383 Mental Well-Being and Quality of Life: A Comparative Study of Male Leather Tannery and Non-Tannery Workers of Kanpur City, India
Authors: Gyan Kashyap, Shri Kant Singh
Abstract:
Improved mental health goes hand in hand with good physical health and quality of life. Mental health plays an important role in anyone’s survival. These days people live with stress due to personal matters, health problems, unemployment, the work environment, the living environment, substance use, lifestyle and many more important reasons. Many studies have confirmed that the proportion of people with mental health problems is increasing in India. This study is focused on the mental well-being of male leather tannery workers in Kanpur city, India. The environment at the workplace as well as the living environment are important health risk factors among leather tannery workers. Tannery workers are more susceptible to many chemical and physical hazards, because they are exposed to many hazardous materials and processes during tanning work in a very hazardous work environment. The aim of this study is to determine the level of mental health disorder and quality of life among male leather tannery and non-tannery workers in Kanpur city, India. This study utilized primary data from a cross-sectional household study conducted from January to June 2015 on tannery and non-tannery workers, as part of a PhD program, in the Jajmau area of Kanpur city, India. A sample of 286 tannery and 295 non-tannery workers was collected from the study area. We collected information from workers aged 15-70 who had been working for at least one year at the time of the survey. The study utilized the General Health Questionnaire (GHQ-12) and a work-related stress scale to test the mental well-being of male tannery and non-tannery workers; polychoric factor analysis was used to derive the best thresholds and scores from both instruments.
Questions such as ‘How would you rate your overall quality of life?’ were asked on a Likert scale to measure quality of life, along with questions on earnings, education, family size, living conditions, household assets, media exposure, health expenditure, treatment-seeking behavior and food habits. Results from the study revealed that around one third of tannery workers had severe mental health problems, a higher proportion than among non-tannery workers. Mental health problems showed a statistically significant association with wealth quintile: 56 percent of tannery workers in the medium wealth quintile had severe mental health problems, and 42 percent of tannery workers in the low wealth quintile had moderate mental health problems. The work-related stress scale also yielded statistically significant results for tannery workers. A large proportion of tannery and non-tannery workers reported being unable to meet their basic needs from their earnings and living in very poor conditions. Notably, 58 percent of tannery workers involved in beam house work had severe mental health problems. Overall, this study found a statistically significant association between tannery work and mental health problems among tannery workers.
Keywords: GHQ-12, mental well-being, factor analysis, quality of life, tannery workers
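For readers unfamiliar with the GHQ-12, a common conventional way of scoring it is the bimodal 0-0-1-1 scheme; the toy scorer below illustrates that conventional scheme only and is not the polychoric factor analysis actually used in this study.

```python
# Toy GHQ-12 scorer (illustrative; the study derived thresholds via
# polychoric factor analysis, not this simple scheme). Each of the 12 items
# is answered 0-3; the bimodal "GHQ method" recodes 0,1 -> 0 and 2,3 -> 1
# and sums over the items, giving a score from 0 to 12.
def ghq12_bimodal_score(responses):
    assert len(responses) == 12 and all(0 <= r <= 3 for r in responses)
    return sum(1 if r >= 2 else 0 for r in responses)

# A score at or above a chosen cut-off (often around 3) flags probable distress.
example = [0, 1, 2, 3, 2, 0, 1, 3, 2, 1, 0, 2]
score = ghq12_bimodal_score(example)
print(score)  # 6 items were answered 2 or 3
```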
382 Application of the State of the Art of Hydraulic Models to Manage Coastal Problems, Case Study: The Egyptian Mediterranean Coast Model
Authors: Al. I. Diwedar, Moheb Iskander, Mohamed Yossef, Ahmed ElKut, Noha Fouad, Radwa Fathy, Mustafa M. Almaghraby, Amira Samir, Ahmed Romya, Nourhan Hassan, Asmaa Abo Zed, Bas Reijmerink, Julien Groenenboom
Abstract:
The coastal environment is under stress from a complex set of coastal problems. The dynamic interaction between the sea and the land, together with human interventions and activities, results in serious problems that threaten coastal areas worldwide. This makes the coastal environment highly vulnerable to natural processes such as flooding and erosion and to the impact of human activities such as pollution. Protecting and preserving this vulnerable coastal zone with its valuable ecosystems calls for addressing these coastal problems; this, in the end, will support the sustainability of coastal communities and safeguard current and future generations. Consequently, applying suitable management strategies and sustainable development that consider the unique characteristics of the coastal system is a must. The coastal management philosophy aims to resolve the conflicts of interest between human development activities and this dynamic nature. Modeling emerges as a successful tool that provides support to decision-makers, engineers, and researchers for better management practices. Modeling tools have proved accurate and reliable in prediction. With their capability to integrate data from various sources such as bathymetric surveys, satellite images, and meteorological data, they offer engineers and scientists the possibility to understand this complex dynamic system and gain in-depth insight into the interaction between natural and human-induced factors. This enables decision-makers to make informed choices and develop effective strategies for sustainable development and risk mitigation in the coastal zone. The application of modeling tools supports the evaluation of various scenarios by affording the possibility to simulate and forecast different coastal processes, from hydrodynamic and wave actions to the resulting flooding and erosion.
The state-of-the-art application of modeling tools in coastal management allows for better understanding and predicting coastal processes, optimizing infrastructure planning and design, supporting ecosystem-based approaches, assessing climate change impacts, managing hazards, and finally facilitating stakeholder engagement. This paper emphasizes the role of hydraulic models in enhancing the management of coastal problems by discussing the diverse applications of modeling in coastal management. It highlights the role of modelling in understanding complex coastal processes and predicting outcomes, and the importance of informing decision-makers with modeling results, which provide the technical and scientific support needed to achieve sustainable coastal development and protection.
Keywords: coastal problems, coastal management, hydraulic model, numerical model, physical model
381 Risks beyond Cyber in IoT Infrastructure and Services
Authors: Mattias Bergstrom
Abstract:
Significance of the Study: This research provides new insights into the risks associated with digital embedded infrastructure. We analyze each risk and its potential negation strategies, especially for AI and autonomous automation. Moreover, the analysis presented in this paper conveys valuable information for future research that can create more stable, secure, and efficient autonomous systems. To learn and understand the risks, a large IoT system was envisioned, and risks related to hardware, tampering, and cyberattacks were collected, researched, and evaluated to create a comprehensive understanding of the potential risks. Potential solutions were then evaluated on an open-source IoT hardware setup. The following list shows the identified passive and active risks evaluated in the research. Passive risks: (1) Hardware failures: critical systems relying on high-rate data and data quality are growing; SCADA systems for infrastructure are good examples of such systems. (2) Hardware delivering erroneous data: sensors break, and when they do, they don’t always go silent; they can keep going, except that the data they deliver is garbage, and if that data is not filtered out, it becomes disruptive noise in the system. (3) Bad hardware injection: erroneous sensor data can be pumped into a system by malicious actors with the intent to create disruptive noise in critical systems. (4) Data gravity: the weight of the data collected will affect data mobility. (5) Cost inhibitors: running services that need huge centralized computing is cost-inhibiting; large, complex AI can be extremely expensive to run. Active risks: Denial of service: one of the simplest attacks, where an attacker just overloads the system with bogus requests so that valid requests disappear in the noise.
Malware: malware can be anything from simple viruses to complex botnets created with specific goals, where the creator steals computing power and bandwidth from you to attack someone else. Ransomware: a kind of malware, but so different in its implementation that it is worth its own mention; the goal of these pieces of software is to encrypt your system so that it can only be unlocked with a key that is held for ransom. DNS spoofing: by spoofing DNS calls, valid requests and data dumps can be sent to bad destinations, where the data can be extracted for extortion, or corrupted and re-injected into a running system, creating a data echo noise loop. After testing multiple potential solutions, we found that the most prominent solution to these risks was to use a peer-to-peer consensus algorithm over a blockchain to validate the data and behavior of the devices (sensors, storage, and computing) in the system. With the devices autonomously policing themselves for deviant behavior, all the risks listed above can be negated. In conclusion, an Internet middleware that provides these features would be an easy and secure solution for any future autonomous IoT deployments, as it provides separation from the open Internet while remaining accessible via blockchain keys.
Keywords: IoT, security, infrastructure, SCADA, blockchain, AI
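Passive risk (2) above, sensors that keep emitting garbage rather than going silent, is commonly countered by filtering deviant readings. The sketch below uses a simple median-absolute-deviation filter across redundant sensors; it is only an illustration of the filtering idea, not the peer-to-peer blockchain consensus scheme the paper proposes.

```python
import statistics

def filter_readings(readings, max_dev=3.0):
    """Keep readings within max_dev median-absolute-deviations of the median."""
    med = statistics.median(readings)
    # Median absolute deviation; guard against a zero MAD with a tiny floor.
    mad = statistics.median(abs(r - med) for r in readings) or 1e-9
    return [r for r in readings if abs(r - med) <= max_dev * mad]

# Seven redundant temperature sensors, two of them faulty (85.0 and -40.0).
readings = [21.2, 21.4, 21.1, 85.0, 21.3, -40.0, 21.5]
print(filter_readings(readings))
```

The faulty values are dropped because they sit far outside the consensus of the healthy sensors; a robust median-based statistic is preferred here since a mean would itself be dragged by the garbage readings.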
380 Investigation of Ground Disturbance Caused by Pile Driving: Case Study
Authors: Thayalan Nall, Harry Poulos
Abstract:
Piling is the most widely used foundation method for heavy structures in poor soil conditions. The geotechnical engineer can choose among a variety of piling methods, but in most cases, driving piles by impact hammer is the most cost-effective alternative. Under unfavourable conditions, driving piles can cause environmental problems, such as noise, ground movements and vibrations, with the risk of ground disturbance leading to potential damage to proposed structures. At one of the project sites in which the authors were involved, three offshore container terminals, namely CT1, CT2 and CT3, were constructed over thick compressible marine mud. The seabed was around 6m deep, and the soft clay thickness within the project site varied between 9m and 20m. CT2 and CT3 were connected together, rectangular in shape and 2600m x 800m in size. CT1 was 400m x 800m in size and was located to the south of CT2, opposite its eastern end. CT1 was constructed first and, due to time and environmental limitations, was supported on a “forest” of large-diameter driven piles. CT2 and CT3 are now under construction using a traditional dredging and reclamation approach, with ground improvement by surcharging with vertical drains. A few months after the installation of the CT1 piles, a 2600m long sand bund rising to 2m above mean sea level was constructed along the southern perimeter of CT2 and CT3 to contain the dredged mud that was expected to be pumped. The sand bund was constructed by sand spraying and pumping using a dredging vessel. About 2000m of the sand bund in the west section was constructed without any major stability issues or any noticeable distress. However, as the sand bund approached the section parallel to CT1, it underwent a series of deep-seated failures, causing the displaced soft clay to heave above the standing water level. The crest of the sand bund was about 100m away from the last row of piles.
There were no plausible geological reasons to conclude that the marine mud across the CT1 region alone was weaker than over the rest of the site. Hence it was suspected that pile driving by impact hammer may have caused ground movements and vibrations, leading to the generation of excess pore pressures and cyclic softening of the marine mud. This paper investigates the probable cause of failure by reviewing: (1) all ground investigation data within the region; (2) soil displacement caused by pile driving, using theories similar to spherical cavity expansion; (3) transfer of stresses and vibrations through the entire system, including vibrations transmitted from the hammer to the pile, and the dynamic properties of the soil; and (4) generation of excess pore pressure due to ground vibration and the resulting cyclic softening. The evidence suggests that the problems encountered at the site were primarily caused by the “side effects” of the pile driving operations.
Keywords: pile driving, ground vibration, excess pore pressure, cyclic softening
379 The Impact of the Covid-19 Crisis on the Information Behavior in the B2B Buying Process
Authors: Stehr Melanie
Abstract:
The availability of apposite information is essential for the decision-making process of organizational buyers. Due to the constraints of the Covid-19 crisis, information channels that emphasize face-to-face contact (e.g. sales visits, trade shows) have been unavailable, and usage of digitally-driven information channels (e.g. videoconferencing, platforms) has skyrocketed. This paper explores the question of the areas in which the pandemic-induced shift in the use of information channels could be sustainable and those in which it is a temporary phenomenon. While information and buying behavior in B2C purchases have been regularly studied in the last decade, the last fundamental model of organizational buying behavior in B2B was introduced by Johnston and Lewin (1996), before the advent of the internet. Subsequently, research efforts in B2B marketing shifted from organizational buyers and their decision and information behavior to the business relationships between sellers and buyers. This study builds on the extensive literature on situational factors influencing organizational buying and information behavior and uses the economics of information theory as a theoretical framework. The research focuses on the German woodworking industry, which before the Covid-19 crisis was characterized by a rather low level of digitization of information channels. An industry with such traditional communication structures is considered a ripe research setting for studying a shift in information behavior induced by an exogenous shock. The study is exploratory in nature. The primary data source is 40 in-depth interviews based on the repertory grid method. Thus, 120 typical buying situations in the woodworking industry, and the information and channels relevant to them, are identified. The results are combined into clusters, each of which shows similar information behavior in the procurement process.
In the next step, the clusters are analyzed in terms of pre- and post-Covid-19 crisis behavior, identifying stable and dynamic aspects of information behavior. Initial results show that, for example, clusters representing search goods with low risk and complexity suggest a sustainable rise in the use of digitally-driven information channels. However, in clusters containing trust goods with high significance and novelty, an increased return to face-to-face information channels can be expected after the Covid-19 crisis. The results are interesting from both a scientific and a practical point of view. This study is one of the first to apply the economics of information theory to organizational buyers and their decision and information behavior in the digital information age. Especially the focus on the dynamic aspects of information behavior after an exogenous shock might contribute new impulses to theoretical debates related to the economics of information theory. For practitioners, especially suppliers’ marketing managers and intermediaries such as publishers or trade show organizers in the woodworking industry, the study shows wide-ranging starting points for a future-oriented segmentation of their marketing programs by highlighting the dynamic and stable preferences of the elaborated clusters in the choice of information channels.
Keywords: B2B buying process, crisis, economics of information theory, information channel
378 European Commission Radioactivity Environmental Monitoring Database REMdb: A Law (Art. 36 Euratom Treaty) Transformed in Environmental Science Opportunities
Authors: M. Marín-Ferrer, M. A. Hernández, T. Tollefsen, S. Vanzo, E. Nweke, P. V. Tognoli, M. De Cort
Abstract:
Under the terms of Article 36 of the Euratom Treaty, European Union Member States (MSs) shall periodically communicate to the European Commission (EC) information on environmental radioactivity levels. Compilations of the information received have been published by the EC as a series of reports beginning in the early 1960s. The environmental radioactivity results received from the MSs have been introduced into the Radioactivity Environmental Monitoring database (REMdb) of the Institute for Transuranium Elements of the EC Joint Research Centre (JRC), sited in Ispra (Italy), as part of its Directorate General for Energy (DG ENER) support programme. The REMdb offers the scientific community dealing with environmental radioactivity topics endless research opportunities to exploit the nearly 200 million records received from MSs, containing information on radioactivity levels in milk, water, air and mixed diet. The REM action was created shortly after the Chernobyl crisis to support the EC in its responsibility to provide qualified information to the European Parliament and the MSs on the levels of radioactive contamination of the various compartments of the environment (air, water, soil). Hence, the main line of REM’s activities concerns the improvement of procedures for the collection of environmental radioactivity concentrations for routine and emergency conditions, as well as making this information available to the general public. In this way, REM ensures the availability of tools for inter-communication and for access by users from the Member States and the other European countries to this information. Specific attention is given to further integrating the new MSs with the existing information exchange systems and to assisting Candidate Countries in fulfilling these obligations in view of their membership of the EU.
Article 36 of the Euratom Treaty requires the competent authorities of each MS to provide regularly the environmental radioactivity monitoring data resulting from their Article 35 obligations to the EC, in order to keep the EC informed of the levels of radioactivity in the environment (air, water, milk and mixed diet) which could affect the population. The REMdb has two main objectives: to keep a historical record of radiological accidents for further scientific study, and to collect the environmental radioactivity data gathered through the national environmental monitoring programs of the MSs in order to prepare the comprehensive annual monitoring reports (MR). The JRC continues its activity of collecting, assembling, analyzing and providing this information to the public and the MSs, even during emergency situations. In addition, there is growing concern among the general public about radioactivity levels in the terrestrial and marine environment, as well as about the potential risk of future nuclear accidents. In this context, clear and transparent communication with the public is needed. EURDEP (European Radiological Data Exchange Platform) is both a standard format for radiological data and a network for the exchange of automatic monitoring data. The latest release of the format is version 2.0, which has been in use since the beginning of 2002.
Keywords: environmental radioactivity, Euratom, monitoring report, REMdb
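The kind of aggregation behind an annual monitoring report can be pictured with a toy example: grouping submitted measurements by country and medium and averaging them. The record layout and values below are invented for illustration and do not reflect the actual REMdb schema or data.

```python
from collections import defaultdict

# Invented sample records (country, medium, activity concentration in Bq/l);
# hypothetical, not the real REMdb schema.
records = [
    ("IT", "milk", 0.04),
    ("IT", "milk", 0.06),
    ("FR", "water", 0.10),
]

# Group submitted values by (country, medium).
grouped = defaultdict(list)
for country, medium, value in records:
    grouped[(country, medium)].append(value)

# Annual mean per country and medium, as a report table might summarize it.
report = {key: sum(vals) / len(vals) for key, vals in grouped.items()}
print(report)
```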
377 Internal Concept of Integrated Health by Agrarian Society in Malagasy Highlands for the Last Century
Authors: O. R. Razanakoto, L. Temple
Abstract:
Living in a least developed country, Malagasy society has a weak capacity to internalize progress, including health concerns. From the arrival in the fifteenth century of the Arabic script known as Sorabe, which was mainly reserved for the aristocracy, until the colonial era beginning at the end of the nineteenth century, which popularized the current Western script, manuscripts dealing with scientific, or at least academic, issues emerged only slowly. As a result, the way of life of Malagasy communities is not yet well enough documented to allow a precise understanding of the major concerns, reasons and purposes of the existence of the farmers who compose them. A question therefore arises from the literature: how does a Malagasy community dominated by agrarian society conceive the conservation of its wellbeing? This study aims to emphasize the scope and the limits of the « One Health » concept, or Health Integrated Approach (HIA), which evolves at a global scale, with regard to the specific context of local Malagasy smallholder farms. It is expected to identify how this society has represented the linked risks and mechanisms among human health, animal health, plant health, and ecosystem health within the last 100 years. To do so, a framework for conducting systematic reviews in agricultural research was deployed to access the available literature. This task was coupled with the reading of articles that are not indexed by online scientific search engines but that mention part of a history of agriculture and of farmers in Madagascar. This literature review documents the interactions between human illnesses and those affecting animals and plants (bred or wild), together with any unexpected event (ecological or economic) that has modified the equilibrium of the ecosystem or disturbed the livelihoods of agrarian communities.
Besides, drivers that may either accentuate or attenuate the devastating effects of these illnesses and changes are revealed. The study has established that the reasons for human worries are not only physiological. Among the factors that regulate global health, the food system and contemporary medicine have helped improve life expectancy in Madagascar from 55 to 63 years over the last 50 years. However, threats to global health still occur. New human or animal illnesses and livestock or plant pathologies or pests may appear, whereas ancient illnesses that are supposed to have disappeared may come back. This study has highlighted how important the risks associated with unmanaged externalities that weaken community life are. Many risks, and also solutions, come from abroad and have long-term effects even though they happen as punctual events. Thus, a constructivist strategy, built on the record of local facts, is suggested for the global « One Health » concept. This approach should facilitate the exploration of methodological pathways and the identification of relevant indicators for research related to HIA.
Keywords: agrarian system, health integrated approach, history, Madagascar, resilience, risk
376 The Duty of Sea Carrier to Transship the Cargo in Case of Vessel Breakdown
Authors: Mojtaba Eshraghi Arani
Abstract:
Having concluded the contract for carriage of cargo with the shipper (through a bill of lading or charterparty), the carrier must transport the cargo from the loading port to the port of discharge and deliver it to the consignee. Unless otherwise agreed in the contract, the carrier must avoid any deviation, transfer of cargo to another vessel or unreasonable stoppage of carriage in transit. However, the vessel might break down in transit for any reason and become unable to continue its voyage to the port of discharge. This is a frequent incident in the carriage of goods by sea, which leads to important disputes between the carrier/owner and the shipper/charterer (hereinafter called “cargo interests”). It is a generally accepted rule that in such an event, the carrier/owner must repair the vessel, after which it will continue its voyage to the destination port. A dispute will arise in the case that temporary repair of the vessel cannot be done in a short or reasonable term. There are two options for the contract parties in such a case: first, the carrier/owner is entitled to repair the vessel, with the cargo onboard or discharged in the port of refuge, and the cargo interests must wait until the breakdown is rectified, whenever that may be. Second, the carrier/owner will be responsible for chartering another vessel and transferring the entirety of the cargo to the substitute vessel. In fact, the main question revolves around the duty of the carrier/owner to perform this transfer of cargo to another vessel. Such an operation, which is called “trans-shipment” or “transhipment” (in the oil industry it is usually called “ship-to-ship” or “STS”), needs to be done carefully and with due diligence. In fact, the transshipment operation differs for various cargoes, as each cargo requires its own suitable equipment for transfer to another vessel, so this operation is often costly. Moreover, there is a considerable risk of collision between the two vessels, in particular with bulk carriers.
Bulk cargo is also exposed to shortage and partial loss in the process of transshipment, especially during bad weather. Concerning tankers, which carry oil and petrochemical products, transshipment is most probably accompanied by sea pollution. On the grounds of the above consequences, owners are afraid of being held responsible for such operations and are reluctant to perform them in the relevant disputes. The main argument raised by them is that no regulation has placed such a duty upon their shoulders, so any such operation must be done under the auspices of the cargo interests, and all costs must be reimbursed by them. Unfortunately, not only the international conventions, including the Hague Rules, Hague-Visby Rules, Hamburg Rules and Rotterdam Rules, but also most domestic laws are silent in this regard. The doctrine has yet to analyse the issue, and no legal research was found in this regard. A qualitative method, with the concept of interpretation of collected data, has been used in this paper; the sources of the data are regulations and cases. It is argued in this article that the paramount rule in maritime law is “the accomplishment of the voyage” by the carrier/owner, in view of which, if the voyage can only be finished by transshipment, then the carrier/owner will be responsible for carrying out this operation. The duty of the carrier/owner to apply “due diligence” strengthens this reasoning. Any and all costs and expenses will also be on the account of the owner/carrier, unless the incident is attributable to a cause arising from the cargo interests’ negligence.
Keywords: cargo, STS, transshipment, vessel, voyage
375 The Effect of Clover Honey Supplementation on the Anthropometric Measurements and Lipid Profile of Malnourished Infants and Children
Authors: Bassma A. Abdelhaleem, Mamdouh A. Abdulrhman, Nagwa I. Mohamed
Abstract:
Malnutrition in children is an increasing problem worldwide which may result in both short- and long-term irreversible negative health outcomes. Severe Acute Malnutrition (SAM) affects more than 18 million children each year, mostly living in low-income settings, and contributes to 45% of all deaths in children less than five years of age. Honey is a natural sweetener, containing mainly monosaccharides (up to 80%), disaccharides (3–5%), water (17–20%), and a wide range of minor constituents such as vitamins, minerals, proteins, amino acids, enzymes, and phytochemicals, mainly phenolic acids and flavonoids. Honey has been used in many cultures around the world for its known nutritional and medicinal benefits, including the treatment of hypercholesterolemia. Despite its use since ancient times, little is known about its potential benefits for malnourished children. Honey has the potential to be an affordable solution for malnourished low-income children, as it is a nutrient- and calorie-dense food that is easily absorbed and highly palatable, enhances appetite, and boosts immunity. This study assessed the effect of clover honey supplementation on the anthropometric measurements and lipid profile of malnourished infants and children. A prospective interventional clinical trial was conducted from November 2019 to November 2020 on 40 malnourished infants and children divided into two groups: Group A (20 children; 11 males and 9 females) received honey at a dose of 1.75 ml/kg/dose, twice weekly for 12 weeks, and Group B (20 children; 6 males and 14 females) received placebo. Written informed consent was obtained from parents/guardians. Patients were recruited from the Pediatric Nutrition Clinic at Ain Shams University. Anthropometric measurements (weight, height, body mass index, head circumference, and mid-arm circumference) and fasting serum cholesterol levels were measured at baseline and after 3 months.
The 3-month honey consumption had a highly statistically significant effect on increasing weight, height, and body mass index and lowering fasting serum cholesterol levels in primary malnourished infants and children. Weight (kg), height (cm), body mass index, and fasting serum cholesterol level (mg/dL) before honey consumption were 9.49 ± 2.03, 81.45 ± 8.31, 14.24 ± 2.15, and 178.00 ± 20.91, and after 3 months of honey consumption were 10.91 ± 2.11, 84.80 ± 8.23, 15.07 ± 2.05, and 162.45 ± 19.73, respectively, with P-value < 0.01. Our results showed a significant desirable effect of honey consumption on changes in nutritional status based on weight, height, and body mass index, and a favourable effect on lowering fasting serum cholesterol levels. These results support the use of honey as an affordable intervention to improve malnutrition, particularly in low-income countries. However, further research needs to weigh benefits against potential harms, including the risk of infant botulism historically associated with honey consumption in early childhood. Keywords: clinical trial, dyslipidemia, honey, malnutrition
Procedia PDF Downloads 110
374 Parenting Interventions for Refugee Families: A Systematic Scoping Review
Authors: Ripudaman S. Minhas, Pardeep K. Benipal, Aisha K. Yousafzai
Abstract:
Background: Children of refugee or asylum-seeking background have multiple, complex needs (e.g., trauma, mental health concerns, separation, relocation, poverty) that place them at an increased risk of developing learning problems. Families encounter challenges accessing support during resettlement, preventing children from achieving their full developmental potential. Very few studies in the literature examine the unique parenting challenges refugee families face. Providing appropriate support services and educational resources that address these distinctive concerns of refugee parents would alleviate these challenges, allowing for better developmental outcomes for children. Objective: To identify the characteristics of effective parenting interventions that address the unique needs of refugee families. Methods: English-language articles published from 1997 onwards were included if they described or evaluated programmes or interventions for parents of refugee or asylum-seeking background, globally. Data were extracted and analyzed according to Arksey and O’Malley’s descriptive analysis model for scoping reviews. Results: Seven studies met criteria and were included, primarily studying families settled in high-income countries. Refugee parents identified parenting to be a major concern, citing alienation and unwelcoming services, language barriers, and lack of familiarity with school and early years services. Services that focused on building the resilience of parents, provided parent education or services in the family’s native language, and offered families safe spaces to promote parent-child interactions were most successful. Home-visit and family-centered programs showed particular success, minimizing barriers such as transportation and inflexible work schedules while allowing caregivers to receive feedback from facilitators. The vast majority of studies evaluated programs implementing existing curricula and frameworks.
Interventions were designed in a prescriptive manner, without direct participation by family members and without directly addressing accessibility barriers. The studies also did not employ evaluation measures of parenting practices, the caregiving environment, or child development outcomes, focusing primarily on parental perceptions. Conclusion: There is scarce literature describing parenting interventions for refugee families. Successful interventions focused on building parenting resilience and capacity in families’ native languages. To date, there are no studies that employ a participatory approach to program design to tailor content or accessibility, and few that employ parenting, developmental, behavioural, or environmental outcome measures. Keywords: asylum-seekers, developmental pediatrics, parenting interventions, refugee families
Procedia PDF Downloads 166
373 Ground Motion Modeling Using the Least Absolute Shrinkage and Selection Operator
Authors: Yildiz Stella Dak, Jale Tezcan
Abstract:
Ground motion models that relate a strong motion parameter of interest to a set of predictive seismological variables describing the earthquake source, the propagation path of the seismic wave, and the local site conditions constitute a critical component of seismic hazard analyses. When a sufficient number of strong motion records are available, ground motion relations are developed using statistical analysis of the recorded ground motion data. In regions lacking a sufficient number of recordings, a synthetic database is developed using stochastic, theoretical, or hybrid approaches. Regardless of the manner in which the database was developed, ground motion relations are developed using regression analysis. Development of a ground motion relation is a challenging process which inevitably requires the modeler to make subjective decisions regarding the inclusion criteria for the recordings, the functional form of the model, and the set of seismological variables to be included in the model. Because these decisions are critically important to the validity and the applicability of the model, there is continuing interest in procedures that facilitate the development of ground motion models. This paper proposes the use of the Least Absolute Shrinkage and Selection Operator (LASSO) in selecting the set of predictive seismological variables to be used in developing a ground motion relation. The LASSO can be described as a penalized regression technique with a built-in capability for variable selection. Similar to ridge regression, the LASSO is based on the idea of shrinking the regression coefficients to reduce the variance of the model. Unlike ridge regression, where the coefficients are shrunk but never set equal to zero, the LASSO sets some of the coefficients exactly to zero, effectively performing variable selection.
Given a set of candidate input variables and the output variable of interest, the LASSO allows ranking the input variables in terms of their relative importance, thereby facilitating the selection of the set of variables to be included in the model. Because the risk of overfitting increases as the ratio of the number of predictors to the number of recordings increases, selection of a compact set of variables is important in cases where a small number of recordings is available. In addition, identification of a small set of variables can improve the interpretability of the resulting model, especially when there is a large number of candidate predictors. A practical application of the proposed approach is presented, using more than 600 recordings from the Next Generation Attenuation (NGA) database, where the effect of a set of seismological predictors on the 5% damped maximum direction spectral acceleration is investigated. The set of candidate predictors considered includes Magnitude, Rrup, and Vs30. Using the LASSO, the relative importance of the candidate predictors has been ranked. Regression models with increasing levels of complexity were constructed using the one, two, three, and four best predictors, and the models’ ability to explain the observed variance in the target variable has been compared. The bias-variance trade-off in the context of model selection is discussed. Keywords: ground motion modeling, least absolute shrinkage and selection operator, penalized regression, variable selection
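The variable-ranking idea can be sketched as follows. This is a minimal illustration on synthetic stand-ins for the predictors named in the abstract (not the actual strong motion database), with a hand-rolled coordinate-descent LASSO: as the penalty is relaxed, the order in which coefficients become nonzero ranks the predictors.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 600  # roughly the number of recordings cited in the abstract

# Synthetic stand-ins for the candidate predictors (assumed ranges).
mag = rng.uniform(4.0, 7.5, n)                    # moment magnitude
ln_rrup = np.log(rng.uniform(1.0, 200.0, n))      # log rupture distance (km)
ln_vs30 = np.log(rng.uniform(150.0, 1500.0, n))   # log site velocity (m/s)
raw = np.column_stack([mag, ln_rrup, ln_vs30])
X = (raw - raw.mean(axis=0)) / raw.std(axis=0)    # standardize columns

# Synthetic log spectral acceleration: distance dominates by construction.
y = 1.2 * mag - 1.8 * ln_rrup - 0.4 * ln_vs30 + rng.normal(0.0, 0.5, n)
y = y - y.mean()

def lasso_cd(X, y, alpha, n_sweeps=200):
    """Coordinate-descent LASSO for centred y and unit-variance columns."""
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(n_sweeps):
        for j in range(p):
            # Partial-residual correlation for coordinate j, then soft-threshold.
            rho = X[:, j] @ (y - X @ b + X[:, j] * b[j]) / n
            b[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0)
    return b

# Trace the path from a strong to a weak penalty; the order in which
# coefficients first become nonzero ranks the predictors by importance.
names = ["Magnitude", "ln Rrup", "ln Vs30"]
order = []
for alpha in np.geomspace(2.0, 0.01, 40):
    b = lasso_cd(X, y, alpha)
    for j in range(len(names)):
        if abs(b[j]) > 1e-8 and j not in order:
            order.append(j)
ranking = [names[j] for j in order]
print(ranking)
```

With this synthetic signal the distance term enters the model first and the site term last, mirroring how the abstract's ranking step would be read off a real LASSO path.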
Procedia PDF Downloads 331
372 Retrospective Assessment of the Safety and Efficacy of Percutaneous Microwave Ablation in the Management of Hepatic Lesions
Authors: Suang K. Lau, Ismail Goolam, Rafid Al-Asady
Abstract:
Background: The majority of patients with hepatocellular carcinoma (HCC) are not suitable for curative treatment, in the form of surgical resection or transplantation, due to tumour extent and underlying liver dysfunction. In these non-resectable cases, a variety of non-surgical therapies are available, including microwave ablation (MWA), which has shown increasing popularity due to its low morbidity, low reported complication rate, and the ability to perform multiple ablations simultaneously. Objective: The aim of this study was to evaluate the validity of MWA as a viable treatment option in the management of HCC and hepatic metastatic disease by assessing its efficacy and complication rate at a tertiary hospital in Westmead (Australia). Methods: A retrospective observational study was performed evaluating patients who underwent MWA between 1/1/2017 and 31/12/2018 at Westmead Hospital, NSW, Australia. Outcome measures, including residual disease, recurrence rates, and major and minor complication rates, were retrospectively analysed over a 12-month period following MWA treatment. Patients were excluded if their lesions were treated for residual or recurrent disease from treatment that occurred prior to the study window (11 patients) or if they were lost to follow-up (2 patients). Results: Following treatment of 106 new hepatic lesions, the complete response rate (CR) was 86% (91/106) at 12 months of follow-up. Ten patients had residual disease on post-treatment follow-up imaging, corresponding to an incomplete response (ICR) rate of 9.4% (10/106). The local recurrence rate (LRR) was 4.6% (5/106) over a follow-up period of up to 12 months. The minor complication rate was 9.4% (10/106), including asymptomatic pneumothorax (n=2), asymptomatic pleural effusions (n=2), right lower lobe pneumonia (n=3), pain requiring admission (n=1), hypotension (n=1), cellulitis (n=1), and intraparenchymal hematoma (n=1).
There was one major complication reported: a pleuro-peritoneal fistula causing recurrent large pleural effusion necessitating repeated thoracocentesis (n=1). There was no statistically significant association between tumour size, location, or ablation factors and the risk of recurrence or residual disease. A subset analysis identified 6 segment VIII lesions which were treated via a trans-pleural approach. This cohort demonstrated an overall complication rate of 33% (2/6), including 1 minor complication of asymptomatic pneumothorax and 1 major complication of pleuro-peritoneal fistula. Conclusions: Microwave ablation therapy is an effective and safe treatment option in cases of non-resectable hepatocellular carcinoma and liver metastases, with good local tumour control and low complication rates. A trans-pleural approach for high segment VIII lesions is associated with a higher complication rate and warrants greater caution. Keywords: hepatocellular carcinoma, liver metastases, microwave ablation, trans-pleural approach
Procedia PDF Downloads 139
371 Designing an Operational Control System for the Continuous Cycle of Industrial Technological Processes Using Fuzzy Logic
Authors: Teimuraz Manjapharashvili, Ketevani Manjaparashvili
Abstract:
Fuzzy logic is a relatively new mathematical approach for modeling complex or ill-defined systems. Its basis is to consider overlapping cases of parameter values and to define operations that manipulate these cases. Fuzzy logic can successfully support operative automatic management or appropriate advisory systems, and fuzzy techniques in operational control have grown rapidly in the last few years across many areas of human technological activity. In particular, fuzzy logic has proven its great potential in the automation of industrial process control, where it allows a control design to be formed from the experience of experts and the results of experiments. Chemical process engineering uses fuzzy logic in optimal management and in process control, including the operational control of continuous cycle chemical industrial technological processes, where special features appear due to the continuous cycle and correct management acquires special importance. This paper discusses how intelligent systems can be developed; in particular, how fuzzy logic can be used to build knowledge-based expert systems in chemical process engineering. The implemented projects reveal that the use of fuzzy logic in technological process control has already given better solutions than standard control techniques. Fuzzy logic makes it possible to develop an advisory system for decision-making based on the historical experience of the managing operator and experienced experts. The present paper deals with operational control and management systems of continuous cycle chemical technological processes, including advisory systems. Because of the continuous cycle, these systems have many features not present in the operational control of other chemical technological processes.
Among them: there is a greater risk of transitioning to emergency mode; the return from emergency mode to normal mode must be made very quickly, because the technological process cannot be stopped and defective products are released (i.e., a loss is incurred) during this period; and, accordingly, the operator managing the process must be highly qualified. For these reasons, operational control systems of continuous cycle chemical technological processes have been specifically discussed as distinct systems. Special features of such systems in control and management were brought out, which determine the characteristics of the construction of control and management systems. To verify the findings, the development of an advisory decision-making information system for operational control of a lime kiln using fuzzy logic, based on the creation of a relevant expert-targeted knowledge base, was discussed. The control system has been implemented in a real lime production plant with a lime burn kiln, which has shown that suitable and intelligent automation improves operational management, reduces the risk of releasing defective products, and therefore reduces costs. The advisory system was successfully used in the said plant both for the improvement of operational management and, when necessary, for the training of new operators, given the lack of an appropriate training institution. Keywords: chemical process control systems, continuous cycle industrial technological processes, fuzzy logic, lime kiln
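The fuzzification-rules-defuzzification loop at the heart of such an advisory controller can be sketched in a few lines. The membership ranges, rule base, and ±10% fuel-rate limits below are illustrative assumptions, not the plant's actual parameters.

```python
# A minimal fuzzy-control sketch in the spirit of the kiln advisory system.
# All numeric ranges and rules are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuel_adjustment(temp_error):
    """Map kiln temperature error (setpoint - measured, deg C) to a fuel-rate
    change (%) via three overlapping rules and centroid defuzzification."""
    # Fuzzify the error into three overlapping cases.
    cold = tri(temp_error, 0, 50, 100)    # kiln too cold
    ok = tri(temp_error, -50, 0, 50)      # near setpoint
    hot = tri(temp_error, -100, -50, 0)   # kiln too hot
    # Rules: cold -> raise fuel (+10%), ok -> hold (0), hot -> cut fuel (-10%).
    weights = [cold, ok, hot]
    outputs = [10.0, 0.0, -10.0]
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(w * o for w, o in zip(weights, outputs)) / total

print(fuel_adjustment(50))   # well below setpoint -> +10.0
print(fuel_adjustment(0))    # at setpoint -> 0.0
print(fuel_adjustment(-25))  # somewhat hot -> -5.0
```

Because the "ok" and "hot" cases overlap, an error of -25 °C fires both rules at half strength and the defuzzified output interpolates smoothly between them, which is exactly the graceful-degradation behaviour that makes fuzzy controllers attractive for continuous-cycle processes.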
Procedia PDF Downloads 30
370 The Effect of Soil-Structure Interaction on the Post-Earthquake Fire Performance of Structures
Authors: A. T. Al-Isawi, P. E. F. Collins
Abstract:
The behaviour of structures exposed to fire after an earthquake is not a new area of engineering research, but there remain a number of areas where further work is required. Such areas relate to the way in which seismic excitation is applied to a structure, taking into account the effect of soil-structure interaction (SSI), the method of analysis, and the identification of the excitation load properties. The selection of earthquake data input for use in nonlinear analysis, and the method of analysis itself, remain challenging issues. Thus, realistic artificial ground motion input data must be developed to ensure that the site property parameters adequately describe the effects of the nonlinear inelastic behaviour of the system and that the characteristics of these parameters are coherent with the characteristics of the target parameters. Conversely, ignoring the significance of some attributes, such as frequency content, soil site properties, and earthquake parameters, may lead to misleading results due to misinterpretation of the required input data and an incorrectly synthesised analysis hypothesis. This paper presents a study of the post-earthquake fire (PEF) performance of a multi-storey steel-framed building resting on soft clay, taking into account the effects of the nonlinear inelastic behaviour of the structure and soil, and of soil-structure interaction (SSI). Structures subjected to an earthquake may experience various levels of damage: geometrical damage, which indicates the change in the initial structure’s geometry due to residual deformation resulting from plastic behaviour, and mechanical damage, which identifies the degradation of the mechanical properties of the structural elements involved in the plastic range of deformation.
Consequently, the structure presumably experiences partial structural damage and is then exposed to fire under its new residual material properties, which may result in building failure caused by a decrease in fire resistance. This scenario is more complicated still if SSI is also considered. Indeed, most earthquake design codes ignore the probability of PEF, as well as the effect that SSI has on the behaviour of structures, in order to simplify the analysis procedure. Therefore, the design of structures based on existing codes which neglect the importance of PEF and SSI can create a significant risk of structural failure. In order to examine the criteria for the behaviour of a structure under PEF conditions, a two-dimensional nonlinear elasto-plastic model is developed using ABAQUS software; the effects of SSI are included. Both geometrical and mechanical damage have been taken into account after the earthquake analysis step. For comparison, an identical model is also created which does not include the effects of soil-structure interaction. It is shown that damage to structural elements is underestimated if SSI is not included in the analysis, and the maximum percentage reduction in fire resistance is detected when SSI is included in the scenario. The results are validated against the literature. Keywords: ABAQUS software, finite element analysis, post-earthquake fire, seismic analysis, soil-structure interaction
Procedia PDF Downloads 124
369 Improving Exchange Rate Forecasting Accuracy Using Ensemble Learning Techniques: A Comparative Study
Authors: Gokcen Ogruk-Maz, Sinan Yildirim
Abstract:
Introduction: Exchange rate forecasting is pivotal for informed financial decision-making, encompassing risk management, investment strategies, and international trade planning. However, traditional forecasting models often fail to capture the complexity and volatility of currency markets. This study explores the potential of ensemble learning techniques such as Random Forest, Gradient Boosting, and AdaBoost to enhance the accuracy and robustness of exchange rate predictions. Research Objectives: The primary objective is to evaluate the performance of ensemble methods in comparison to traditional econometric models such as Uncovered Interest Rate Parity, Purchasing Power Parity, and Monetary Models. By integrating advanced machine learning techniques with fundamental macroeconomic indicators, this research seeks to identify optimal approaches for predicting exchange rate movements across major currency pairs. Methodology: Using historical exchange rate data and economic indicators such as interest rates, inflation, money supply, and GDP, the study develops forecasting models leveraging ensemble techniques. Comparative analysis is performed against traditional models and hybrid approaches incorporating Facebook Prophet, Artificial Neural Networks, and XGBoost. The models are evaluated using statistical metrics such as Mean Squared Error, the Theil Ratio, and Diebold-Mariano tests across five currency pairs (JPY/USD, AUD/USD, CAD/USD, GBP/USD, and NZD/USD). Preliminary Results: Results indicate that ensemble learning models consistently outperform traditional methods in predictive accuracy. XGBoost shows the strongest performance among the techniques evaluated, achieving significant improvements in forecast precision with consistently low p-values and Theil Ratios. Hybrid models integrating macroeconomic fundamentals into machine learning frameworks further enhance predictive accuracy.
Discussion: The findings show the potential of ensemble methods to address the limitations of traditional models by capturing non-linear relationships and complex dynamics in exchange rate movements. While Random Forest and Gradient Boosting are effective, the superior performance of XGBoost suggests that its capacity for handling sparse and irregular data offers a distinct advantage in financial forecasting. Conclusion and Implications: This research demonstrates that ensemble learning techniques, particularly when combined with traditional macroeconomic fundamentals, provide a robust framework for improving exchange rate forecasting. The study offers actionable insights for financial practitioners and policymakers, emphasizing the value of integrating machine learning approaches into predictive modeling for monetary economics. Keywords: exchange rate forecasting, ensemble learning, financial modeling, machine learning, monetary economics, XGBoost
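The boosting idea behind XGBoost-style models can be illustrated with a hand-rolled sketch: fit a sequence of one-split trees ("stumps") to the residuals of the running prediction. The data below are synthetic stand-ins for macro predictors (e.g., interest-rate and inflation differentials), not the study's dataset, and the evaluation mimics the forecasting setup by training on the past and testing on the future.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 800
# Hypothetical macro features; not real market data.
X = rng.normal(size=(n, 3))
# A non-linear, interactive relationship that a linear benchmark misses.
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=n)

def fit_stump(X, r):
    """Find the one-split regression tree minimizing squared error on r."""
    best, best_sse = None, np.inf
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
            left = X[:, j] <= t
            if left.all() or not left.any():
                continue
            lv, rv = r[left].mean(), r[~left].mean()
            sse = ((r[left] - lv) ** 2).sum() + ((r[~left] - rv) ** 2).sum()
            if sse < best_sse:
                best, best_sse = (j, t, lv, rv), sse
    return best

def boost_fit(X, y, n_rounds=200, lr=0.1):
    """Gradient boosting for squared loss: repeatedly fit stumps to residuals."""
    f0, stumps = y.mean(), []
    pred = np.full(len(y), f0)
    for _ in range(n_rounds):
        j, t, lv, rv = fit_stump(X, y - pred)
        pred += lr * np.where(X[:, j] <= t, lv, rv)
        stumps.append((j, t, lv, rv))
    return f0, lr, stumps

def boost_predict(model, X):
    f0, lr, stumps = model
    pred = np.full(len(X), f0)
    for j, t, lv, rv in stumps:
        pred += lr * np.where(X[:, j] <= t, lv, rv)
    return pred

# Time-ordered split, as in forecasting: train on the past, test on the future.
X_tr, y_tr, X_te, y_te = X[:600], y[:600], X[600:], y[600:]
model = boost_fit(X_tr, y_tr)
mse_boost = ((boost_predict(model, X_te) - y_te) ** 2).mean()
mse_naive = ((y_tr.mean() - y_te) ** 2).mean()
print(mse_boost, mse_naive)  # the ensemble beats the naive mean benchmark
```

Production libraries add regularized leaf weights, deeper trees, and shrinkage schedules, but the residual-fitting loop above is the core mechanism the abstract's Gradient Boosting and XGBoost models share.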
Procedia PDF Downloads 5
368 Bedouin Dispersion in Israel: Between Sustainable Development and Social Non-Recognition
Authors: Tamir Michal
Abstract:
The subject of Bedouin dispersion has accompanied the State of Israel from the day of its establishment. From a legal point of view, this subject has offered a launchpad for creative judicial decisions. Thus, for example, the first court decision in Israel to recognize affirmative action (Avitan) dealt with a petition submitted by a Jew appealing the refusal of the State to recognize the petitioner’s entitlement to the long-term lease of a plot designated for Bedouins. The Supreme Court dismissed the petition, holding that there existed a public interest in assisting Bedouin to establish permanent urban settlements, an interest which justified giving them preference by selling them plots at subsidized prices. In another case (The Forum for Coexistence in the Negev), the Supreme Court extended equitable relief for the purpose of constructing a bridge, even though the construction infringed the law, in order to allow the children of dispersed Bedouin to reach school. Against this background, the recent verdict, delivered during the Protective Edge military campaign, which dismissed a petition aimed at forcing the State to deploy protective structures in Bedouin villages in the Negev against the risk of being hit by missiles launched from Gaza (Abu Afash), is disappointing. Even if, arguendo, no selective discrimination was involved in the State’s decision not to provide such protection, the decision, and its affirmation by the Court, is problematic when examined through the prism of the theory of recognition. The article analyses the issue through the tools of the theory of recognition, according to which people develop their identities through mutual relations of recognition in different fields. In the social context, the path to recognition is cognitive respect, which is provided by means of legal rights.
By seeing other participants in society as bearers of rights and obligations, the individual develops an understanding of his legal condition as reflected in the attitude of others. Consequently, even if the Court’s decision may be justified on strict legal grounds, the fact that Jewish settlements were protected during the military operation whereas Bedouin villages were not is a setback in the struggle to make the Bedouin citizens with equal rights in Israeli society. As the Court held, ‘Beyond their protective function, the Migunit [protective structures] may make a moral and psychological contribution that should not be undervalued’. This contribution is one that the Bedouin did not receive in the Abu Afash verdict. The basic thesis is that the Court’s verdict analyzed above clearly demonstrates that reliance on classical liberal instruments (e.g., equality) cannot secure full appreciation of all aspects of Bedouin life, and hence can in fact prejudice them. Therefore, elements of recognition theory should be added in order to find the channel for cognitive dignity, thereby advancing the Bedouins’ ability to perceive themselves as equal human beings in Israeli society. Keywords: bedouin dispersion, cognitive respect, recognition theory, sustainable development
Procedia PDF Downloads 354
367 Assessing the Outcomes of Collaboration with Students on Curriculum Development and Design on an Undergraduate Art History Module
Authors: Helen Potkin
Abstract:
This paper presents a practice-based case study of a project in which the student group designed and planned the curriculum content, classroom activities and assessment briefs in collaboration with the tutor. It focuses on the co-creation of the curriculum within a history and theory module, Researching the Contemporary, which runs for BA (Hons) Fine Art and Art History and for BA (Hons) Art Design History Practice at Kingston University, London. The paper analyses the potential of collaborative approaches to engender students’ investment in their own learning and to encourage reflective and self-conscious understandings of themselves as learners. It also addresses some of the challenges of working in this way, attending to the risks involved and feelings of uncertainty produced in experimental, fluid and open situations of learning. Alongside this, it acknowledges the tensions inherent in adopting such practices within the framework of the institution and within the wider context of the commodification of higher education in the United Kingdom. The concept underpinning the initiative was to test out co-creation as a creative process and to explore the possibilities of altering the traditional hierarchical relationship between teacher and student in a more active, participatory environment. In other words, the project asked: what kind of learning could be imagined if we were all in it together? It considered co-creation as producing different ways of being, or becoming, as learners, involving us reconfiguring multiple relationships: to learning, to each other, to research, to the institution and to our emotions. The project provided the opportunity for students to bring their own research and wider interests into the classroom, take ownership of sessions, collaborate with each other and define the criteria against which they would be assessed.
Drawing on students’ reflections on their experience of co-creation, alongside theoretical considerations engaging with the processual nature of learning, concepts of equality and the generative qualities of the interrelationships in the classroom, the paper suggests that the dynamic nature of collaborative and participatory modes of engagement has the potential to foster relevant and significant learning experiences. The findings of the project could be quantified in terms of the high level of student engagement, specifically investment in the assessment, alongside the ambition and high quality of the student work produced. However, reflection on the outcomes of the experiment prompts a further set of questions about the nature of positionality in connection to learning, the ways our identities as learners are formed in and through our relationships in the classroom, and the potential and productive nature of creative practice in education. Overall, the paper interrogates questions of what it means to work with students to invent and assemble the curriculum, and it assesses the benefits and challenges of co-creation. Underpinning it is the argument that, particularly in the current climate of higher education, it is increasingly important to ask what it means to teach and to envisage what kinds of learning are possible. Keywords: co-creation, collaboration, learning, participation, risk
Procedia PDF Downloads 123
366 Telogen Effluvium: A Modern Hair Loss Concern and the Interventional Strategies
Authors: Chettyparambil Lalchand Thejalakshmi, Sonal Sabu Edattukaran
Abstract:
Hair loss is one of the main issues that contemporary society is dealing with. It can be attributed to a wide range of factors, ranging from one's genetic composition to the anxiety we experience on a daily basis. Telogen effluvium (TE) is a condition that causes temporary hair loss after a stressor shocks the body and pushes the hair follicles into a temporary resting phase, leading to shedding. Most frequently, women are the ones who bring up these difficulties. Extreme illness or trauma, an emotional or important life event, rapid weight loss and crash dieting, a severe scalp skin problem, a new medication, or ceasing hormone therapy are examples of potential causes. Men frequently do not notice hair thinning over time, but shedding is easily noticed in women with long hair, which can occasionally result in bias, because women tend to be more concerned with aesthetics and the beauty standards of society and frequently present with these concerns. A woman who formerly possessed a full head of hair is worried about the hair loss from her scalp. There are several cases of hair loss reported every day, and telogen effluvium is said to be the most prevalent of them all, without any hereditary risk factors. While the patient has a loss in hair volume, baldness is not the result of this problem. The rapidly growing field of dermatology and aesthetic medicine has found this problem to be the most common and also the easiest to treat, since it is feasible for these people to regrow their hair, unlike those who have scarring alopecia, in which the follicle itself is damaged and non-viable. Telogen effluvium comes in two different forms: acute and chronic. Acute TE, with hair loss lasting less than three months, occurs in all age groups, while chronic TE, with hair loss lasting more than six months, is more common in those between the ages of 30 and 60; both kinds, however, occur across all age groups, regardless of this predominance.
Although this condition is readily reversed by eliminating stressors, it takes between three and six months for the lost hair to come back. After shedding their hair, patients frequently describe noticeable fringes of regrowth on their forehead. Current medical treatments for this condition include topical corticosteroids, systemic corticosteroids, minoxidil and finasteride, and CNDPA (caffeine, niacinamide, panthenol, dimethicone, and an acrylate polymer). Individual terminal hair growth was increased by 10% as a result of the novel CNDPA intervention. Botulinum toxin A, scalp micro-needling, platelet-rich plasma therapy (PRP), and sessions of multivitamin mesotherapy injections are some recently refined techniques for partially or completely reversible hair loss. It has also been shown that supplements like Nutrafol and biotin produce effective outcomes. There is little evidence to support the claim that applying sulfur-rich ingredients to the scalp, such as onion juice, can help TE patients' hair regenerate. Keywords: dermatology, telogen effluvium, hair loss, modern hair loss treatments
Procedia PDF Downloads 92
365 Multilevel Regression Model - Evaluate Relationship Between Early Years’ Activities of Daily Living and Alzheimer’s Disease Onset Accounting for Influence of Key Sociodemographic Factors Using a Longitudinal Household Survey Data
Authors: Linyi Fan, C.J. Schumaker
Abstract:
Background: Biomedical efforts to treat Alzheimer’s disease (AD) have typically produced mixed to poor results, while more lifestyle-focused treatments such as exercise may fare better than existing biomedical treatments. A few promising studies have indicated that activities of daily living (ADL) may be a useful way of predicting AD. However, the existing cross-sectional studies fail to show how function-related issues such as ADL in early years predict AD and how social factors influence health, either in addition to or in interaction with individual risk factors. This study would help improve screening and early treatment for the elderly population and healthcare practice. The findings have significance academically and practically in terms of creating positive social change. Methodology: The purpose of this quantitative historical, correlational study was to examine the relationship between early years’ ADL and the development of AD in later years. The study included 4,526 participants derived from the RAND HRS dataset. The Health and Retirement Study (HRS) is a longitudinal household survey dataset that is available for research on retirement and health among the elderly in the United States. The sample was selected based on completion of a survey questionnaire about AD and dementia. The variable indicating whether the participant had been diagnosed with AD was the dependent variable. The ADL indices and changes in ADL were the independent variables. A four-step multilevel regression model approach was utilized to address the research questions. Results: Among the 4,526 participants who completed the AD and dementia questionnaire, 144 (3.1%) were diagnosed with AD. Of the 4,526 participants, 3,465 (76.6%) had high school or higher education degrees, and 4,074 (90.0%) were above the poverty threshold. The model evaluated the effect of ADL and changes in ADL on the onset of AD in later years while allowing the intercept of the model to vary by level of education.
The results suggested that the only significant predictor of the onset of AD was changes in early years’ ADL (b = 20.253, z = 2.761, p < .05). However, the results of the sensitivity analysis (b = 7.562, z = 1.900, p = .058), which included more control variables and increased the observation period of ADL, did not support this finding. The model also estimated whether the variances of the random effects vary by Level-2 variables. The results suggested that the variances associated with the random slopes were approximately zero, suggesting that the relationship between early years’ ADL and AD onset was not influenced by sociodemographic factors. Conclusion: The findings indicated that an increase in changes in early years’ ADL leads to an increase in the probability of AD onset in the future. However, this finding was not supported in the broader observation-period model. The study also found no evidence that sociodemographic factors explained significant variance in the random effects. Recommendations were then made for future research and practice based on these limitations and the significance of the findings. Keywords: alzheimer’s disease, epidemiology, moderation, multilevel modeling
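The varying-intercept structure described above can be illustrated with a small sketch. Everything here is simulated under assumed parameter values (three education-level groups with their own baseline log-odds of AD and a common slope for change in early ADL); it is not fitted to the HRS data, and the group intercepts are absorbed with dummies rather than estimated as random effects, which is a deliberate simplification of the multilevel model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated varying-intercept data: log-odds of an AD diagnosis get a
# separate intercept per education level (the Level-2 grouping), plus a
# common slope for change in early years' ADL. All values are assumptions.
n_per, n_groups = 1000, 3
group = np.repeat(np.arange(n_groups), n_per)
intercepts = np.array([-3.0, -2.5, -2.0])   # hypothetical baselines
true_slope = 1.5                            # hypothetical ADL-change effect
adl_change = rng.normal(size=n_groups * n_per)
logit = intercepts[group] + true_slope * adl_change
y = (rng.random(n_groups * n_per) < 1 / (1 + np.exp(-logit))).astype(float)

# Design matrix: one dummy per group (absorbing the varying intercept)
# plus the ADL-change predictor. Fit logistic regression by Newton-Raphson.
X = np.column_stack([np.eye(n_groups)[group], adl_change])
beta = np.zeros(X.shape[1])
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)                          # IRLS weights
    beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))

print(beta)  # first three entries recover the intercepts, last the slope
```

A full multilevel fit would instead treat the group intercepts as draws from a common distribution and partially pool them, but the dummy-variable version already shows how the "intercept varies by education" assumption enters the likelihood.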
Procedia PDF Downloads 136