Search results for: soil application
226 Surviral: An Agent-Based Simulation Framework for SARS-CoV-2 Outcome Prediction
Authors: Sabrina Neururer, Marco Schweitzer, Werner Hackl, Bernhard Tilg, Patrick Raudaschl, Andreas Huber, Bernhard Pfeifer
Abstract:
History and the current outbreak of COVID-19 have shown the deadly potential of infectious diseases. However, infectious diseases also have a serious impact on areas other than health and healthcare, such as the economy or social life, and these areas are strongly codependent. Disease control measures, such as social distancing, quarantines, curfews, or lockdowns, therefore have to be adopted in a very considerate manner. Infectious disease modeling can support policy- and decision-makers with adequate information regarding the dynamics of the pandemic and thereby assist in planning and enforcing appropriate measures that will prevent the healthcare system from collapsing. In this work, an agent-based simulation package named “Surviral” for simulating infectious diseases is presented, with a special focus on SARS-CoV-2. The presented simulation package was used in Austria to model the SARS-CoV-2 outbreak from the beginning of 2020. Agent-based modeling is a relatively recent modeling approach; as our world grows more complex, so does the complexity of the underlying systems, and the development of tools and frameworks together with increasing computational power is advancing the application of agent-based models. For parametrizing the presented model, different data sources, such as known infections, wastewater virus load, blood donor antibodies, circulating virus variants, the used hospitalization capacity, and the availability of medical materials like ventilators, were integrated with a database system and used. The simulation results were used to predict the dynamics and possible outcomes of the pandemic and informed the health authorities' decisions on the measures to be taken to control the situation. The package was implemented in the programming language Java, and the analytics were performed with RStudio. During the first run in March 2020, the simulation showed that without measures other than individual personal behavior and appropriate medication, the death toll would have been about 27 million people worldwide within the first year. The model predicted the hospitalization rates (standard and intensive care) for Tyrol and South Tyrol with an average error of about 1.5%, calculated as 10-day forecasts. The state government and the hospitals were provided with these 10-day forecasts to support their decision-making, which ensured that standard care was maintained for as long as possible without restrictions. Furthermore, various measures were estimated and thereafter enforced: among other things, communities were quarantined based on the calculations, while, also in accordance with the calculations, curfews for the entire population were reduced. With this framework, which is used in the national crisis team of the Austrian province of Tyrol, a very accurate model could be created at the federal state level as well as at the district and municipal level, providing decision-makers with a solid information basis. The framework can be transferred to various infectious diseases and thus can serve as a basis for future monitoring.
Keywords: modelling, simulation, agent-based, SARS-CoV-2, COVID-19
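To make the agent-based approach concrete, the following minimal sketch (in Python) illustrates how such a simulation loop can be structured. It is an illustrative toy SIR model with assumed parameters, not the authors' Surviral implementation, which is written in Java:

import random

# Minimal agent-based SIR loop (illustrative only; all parameter values
# below are assumptions, not taken from the abstract above).
POPULATION = 10_000
CONTACTS_PER_DAY = 8        # assumed average daily contacts per agent
P_TRANSMIT = 0.05           # assumed per-contact transmission probability
INFECTIOUS_DAYS = 10        # assumed duration of infectiousness

S, I, R = "S", "I", "R"
agents = [{"state": S, "days_infected": 0} for _ in range(POPULATION)]
for seed in random.sample(agents, 10):  # seed ten initial infections
    seed["state"] = I

for day in range(120):
    infectious = [a for a in agents if a["state"] == I]
    for agent in infectious:
        # each infectious agent meets a random subset of the population
        for contact in random.sample(agents, CONTACTS_PER_DAY):
            if contact["state"] == S and random.random() < P_TRANSMIT:
                contact["state"] = I
        agent["days_infected"] += 1
        if agent["days_infected"] >= INFECTIOUS_DAYS:
            agent["state"] = R
    tally = {s: sum(a["state"] == s for a in agents) for s in (S, I, R)}
    print(day, tally)  # daily S/I/R counts, the raw material for forecasts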
Procedia PDF Downloads 173
225 Application of Discrete-Event Simulation in Health Technology Assessment: A Cost-Effectiveness Analysis of Alzheimer’s Disease Treatment Using Real-World Evidence in Thailand
Authors: Khachen Kongpakwattana, Nathorn Chaiyakunapruk
Abstract:
Background: Decision-analytic models for Alzheimer’s disease (AD) have advanced to discrete-event simulation (DES), which makes feasible individual-level modelling of disease progression across continuous severity spectra and the incorporation of key parameters such as treatment persistence. This study aimed to apply DES to a cost-effectiveness analysis of AD treatment in Thailand. Methods: A dataset of Thai patients with AD, representing unique demographic and clinical characteristics, was bootstrapped to generate a baseline cohort of patients. Each patient was cloned and assigned to donepezil, galantamine, rivastigmine, memantine, or no treatment. Throughout the simulation period, the model randomly assigned each patient to discrete events, including hospital visits, treatment discontinuation, and death. Correlated changes in cognitive and behavioral status over time were modelled using patient-level data. Treatment effects were obtained from the most recent network meta-analysis. Treatment persistence, mortality, and predictive equations for functional status, costs (Thai baht (THB), 2017 values), and quality-adjusted life years (QALYs) were derived from country-specific real-world data. The time horizon was 10 years, with a discount rate of 3% per annum. Cost-effectiveness was evaluated against the willingness-to-pay (WTP) threshold of 160,000 THB/QALY gained (4,994 US$/QALY gained) in Thailand. Results: Under a societal perspective, only the prescription of donepezil to AD patients of all disease-severity levels was found to be cost-effective. Compared to untreated patients, patients receiving donepezil incurred discounted additional costs of 2,161 THB but experienced a discounted gain of 0.021 QALYs, resulting in an incremental cost-effectiveness ratio (ICER) of 138,524 THB/QALY (4,062 US$/QALY). Moreover, providing early treatment with donepezil to mild AD patients further reduced the ICER to 61,652 THB/QALY (1,808 US$/QALY). However, the advantage of donepezil appeared to wane when treatment was delayed to a subgroup of moderate and severe AD patients [ICER: 284,388 THB/QALY (8,340 US$/QALY)]. Introducing a stopping rule for a mild AD cohort (discontinuing treatment once the Mini-Mental State Exam (MMSE) score falls below 10) did not deteriorate the cost-effectiveness of donepezil at the current treatment persistence level. On the other hand, none of the AD medications was cost-effective under a healthcare perspective. Conclusions: DES greatly enhances the real-world representativeness of decision-analytic models for AD. Under a societal perspective, treatment with donepezil improves patients’ quality of life and is considered cost-effective for AD patients of all disease-severity levels in Thailand. The optimal treatment benefits are observed when donepezil is prescribed early in the course of AD. Given healthcare budget constraints in Thailand, implementing donepezil coverage is most likely feasible if it starts with mild AD patients, along with the stopping rule introduced.
Keywords: Alzheimer's disease, cost-effectiveness analysis, discrete event simulation, health technology assessment
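The ICER figures quoted above follow the standard incremental cost-effectiveness definition, reproduced here for clarity (the study's own discounting and subgroup calculations are not shown):

\[
\mathrm{ICER} = \frac{C_{\text{treatment}} - C_{\text{comparator}}}{\mathrm{QALY}_{\text{treatment}} - \mathrm{QALY}_{\text{comparator}}}
\]

An intervention is judged cost-effective when its ICER falls below the willingness-to-pay threshold, here \(\lambda = 160{,}000\) THB/QALY gained.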
Procedia PDF Downloads 128
224 Computer Aided Design Solution Based on Genetic Algorithms for FMEA and Control Plan in Automotive Industry
Authors: Nadia Belu, Laurenţiu Mihai Ionescu, Agnieszka Misztal
Abstract:
The automotive industry is one of the most important industries in the world, affecting not only the economy but also world culture. In the present financial and economic context, this field faces new challenges posed by the current crisis: companies must maintain product quality and deliver on time and at a competitive price in order to achieve customer satisfaction. Two of the techniques most strongly recommended by the automotive industry's quality-management standards for product development are Failure Mode and Effects Analysis (FMEA) and the Control Plan. FMEA is a methodology for risk management and quality improvement aimed at identifying potential causes of failure of products and processes, quantifying them by risk assessment, ranking the identified problems according to their importance, and determining and implementing the related corrective actions. Companies use Control Plans, built from the FMEA results, to evaluate a process or product for strengths and weaknesses and to prevent problems before they occur. Control Plans are written descriptions of the systems used to control and minimize product and process variation; in addition, they specify the process monitoring and control methods (for example, Special Controls) used to control Special Characteristics. In this paper, we propose a computer-aided solution based on Genetic Algorithms to reduce the effort of drafting the FMEA and Control Plan reports required for product launch and to build knowledge for development teams' future projects. The solution allows the design team to enter the data required for FMEA. The actual analysis is performed using Genetic Algorithms to find an optimum between the RPN risk factor and the cost of production. A feature of Genetic Algorithms is that they can find solutions to multi-criteria optimization problems; in our case, the three specific FMEA risk factors are considered together with the reduction of production cost. The analysis tool generates final reports for all FMEA processes, and the data obtained in the FMEA reports are automatically integrated, together with other entered parameters, into the Control Plan. The solution is implemented as an application running on an intranet on two servers: one containing the analysis and plan-generation engine, the other containing the database where the initial parameters and results are stored. The results can then be used as starting solutions in the synthesis of other projects. The solution was applied to the welding, laser-cutting, and bending processes used to manufacture chassis for buses. Its advantages are the efficient elaboration of documents in the current project, by automatically generating the FMEA and Control Plan reports using multi-criteria optimization of production, and the building of a solid knowledge base for future projects. The proposed solution is a cheap alternative to other solutions on the market, as it is implemented using Open Source tools.
Keywords: automotive industry, FMEA, control plan, automotive technology
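A minimal sketch (in Python) of the multi-criteria search described above is given below. The FMEA convention RPN = Severity x Occurrence x Detection is standard; the genome encoding, action catalogue, scores, and weights here are illustrative assumptions, not the authors' implementation:

import random

# Toy GA: choose one corrective action per failure mode so as to trade off
# total RPN (Severity * Occurrence * Detection) against added production cost.
ACTIONS = [  # (occurrence reduction, detection improvement, added cost)
    (0, 0, 0.0), (2, 1, 1.0), (3, 3, 2.5), (5, 4, 6.0),
]
FAILURE_MODES = [(8, 7, 6), (6, 9, 4), (9, 5, 8)]  # baseline (S, O, D) triples
W_RPN, W_COST = 1.0, 40.0  # assumed weights for the two criteria

def fitness(genome):
    rpn_total = cost_total = 0.0
    for (s, o, d), gene in zip(FAILURE_MODES, genome):
        d_occ, d_det, cost = ACTIONS[gene]
        rpn_total += s * max(1, o - d_occ) * max(1, d - d_det)
        cost_total += cost
    return -(W_RPN * rpn_total + W_COST * cost_total)  # higher is better

def evolve(pop_size=30, generations=100):
    pop = [[random.randrange(len(ACTIONS)) for _ in FAILURE_MODES]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(FAILURE_MODES))
            child = a[:cut] + b[cut:]           # one-point crossover
            if random.random() < 0.1:           # point mutation
                child[random.randrange(len(child))] = random.randrange(len(ACTIONS))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())  # best assignment found: one action index per failure mode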
Procedia PDF Downloads 405
223 Mastopexy with the “Dermoglandular Autoaugmentation” Method. Increased Stability of the Result. Personalized Technique
Authors: Maksim Barsakov
Abstract:
Introduction. In modern plastic surgery, there are a large number of breast lift techniques. With the spread of information about the “side effects” of silicone implants, interest in implant-free mastopexy is increasing year after year. However, despite the variety of techniques, patients sometimes do not get full satisfaction from the results of mastopexy, because of insufficient filling of the upper pole, extended anchor-shaped postoperative scars, or, in some cases, an aesthetically unattractive breast shape. The stability of the result after mastopexy depends on many factors, including postoperative rehabilitation, stability of weight and hormonal background, and stretchability of tissues. The high recurrence rate of ptosis and the short-term aesthetic effect of mastopexy indicate the urgency of improving surgical techniques and increasing the stabilization of breast tissue. Purpose of the study. To develop and introduce into practice a technique of mastopexy based on a modified Ribeiro flap, together with elements of tissue movement and fixation designed to increase the stability of the postoperative result, and to define the indications for this surgical technique. Materials and Methods. 103 patients aged 18 to 53 years were operated on between 2019 and 2023 according to the reported method. These were patients undergoing primary mastopexy or secondary mastopexy, as well as patients undergoing implant removal with one-stage mastopexy. The patients were followed up for 12 months to assess the stability of the result. Results and their discussion. Observing the patients, we noted greater stability of the breast shape and upper pole filling compared to the conventional classical methods, and we did not have to resort to anchor scars. In 90 percent of cases, an inverted T-shaped scar was used; in 10 percent, a J-scar. The complications identified among the operated patients were distributed as follows: impaired healing of the junction of the vertical and horizontal sutures at 1-1.5 months after surgery, 15 patients (with ointment treatment, healing was observed within 7-30 days); permanent loss of NAC sensitivity, 0 patients; vascular disorders in the NAC area/areola necrosis, 0 patients; marginal necrosis of the areola, 2 patients (independent healing within 3-4 weeks without aesthetic defects); aesthetically unacceptable mature scars, 3 patients; unilateral partial liponecrosis of the autoflap, 1 patient; recurrence of ptosis, 1 patient (after a weight loss of 12 kg). In the late postoperative period, 2 patients became pregnant and gave birth, and no lactation problems were observed. Conclusion. Methods of breast lift continue to improve in plastic surgery, which is especially relevant today given the increased attention to this operation. The author's proposed method of mastopexy with a glandular autoflap allows obtaining, in most cases, a stable result and a fuller breast shape, avoids extended anchor scars, and preserves the possibility of lactation. The author of this article has obtained a patent for this method of mastopexy.
Keywords: mastopexy, mammoplasty, autoflap, personal technique
Procedia PDF Downloads 35
222 Innovation in PhD Training in the Interdisciplinary Research Institute
Authors: B. Shaw, K. Doherty
Abstract:
The Cultural Communication and Computing Research Institute (C3RI) is a diverse multidisciplinary research institute spanning art, design, media production, communication studies, computing, and engineering. Across these disciplines there can seem to be enormous differences in research practice and convention, including differing positions on objectivity and subjectivity, certainty and evidence, and different political and ethical parameters. These differences sit within often unacknowledged histories, codes, and communication styles of specific disciplines, and it is all these aspects that can make understanding research practice across disciplines difficult. To explore this, a one-day event was orchestrated, testing how a PhD community might communicate and share research in progress in a multidisciplinary context. Instead of presenting results at a conference, research students were tasked with articulating their method of inquiry. A working party of students from across the disciplines had to design a conference call, a visual identity, and an event framework that would work for students of all disciplines. The process of establishing the shape and identity of the conference was revealing: even finding a linguistic frame for the conference call that would meet the expectations of the different disciplines was challenging. The first abstracts submitted either resorted to reporting findings or described method only briefly. It took several weeks of supported intervention for research students to get ‘inside’ their method and to understand their research practice as a process rich with philosophical and practical decisions and implications. In response to the abstracts, the conference committee generated key methodological categories for the conference sessions, including sampling, capturing ‘experience’, ‘making models’, researcher identities, and ‘constructing data’. Each session involved presentations by visual artists, communications students, and computing researchers, with interdisciplinary dialogue facilitated by alumni chairs. The apparently simple focus on method illuminated the research process as a site of creativity, innovation, and discovery, and also built epistemological awareness, drawing attention to what is being researched and how it can be known. It was surprisingly difficult to limit students to discussing method, and it was apparent that the vocabulary available for method is sometimes limited. However, by focusing on method rather than results, the genuine process of research, rather than one constructed for approval, could be captured. In unlocking the twists and turns of planning and implementing research, and the impact of circumstance and contingency, students had to reflect frankly on successes and failures. This level of self- and public critique emphasised the degree of critical thinking and rigour required in executing research and demonstrated that honest reporting of research, faults and all, is good, valid research. The process also revealed the degree to which disciplines can learn from each other: the computing students gained insights from the sensitive social contextualizing generated by the communications and art and design students, and the art and design students gained understanding from the greater ‘distance’ and emphasis on application that the computing students applied to their subjects.
Finding the means to develop dialogue across disciplines makes researchers better equipped to devise and tackle research problems across disciplines, potentially laying the ground for more effective collaboration.
Keywords: interdisciplinary, method, research student, training
Procedia PDF Downloads 206
221 Mycophenolate Mofetil Increases Mucin Expression in Primary Cultures of Oral Mucosal Epithelial Cells for Application in Limbal Stem Cell Deficiency
Authors: Sandeep Kumar Agrawal, Aditi Bhattacharya, Janvie Manhas, Krushna Bhatt, Yatin Kholakiya, Nupur Khera, Ajoy Roychoudhury, Sudip Sen
Abstract:
Autologous cultured explants of human oral mucosal epithelial cells (OMEC) are a potential therapeutic modality for limbal stem cell deficiency (LSCD). Injury or inflammation of the ocular surface, in the form of burns, chemicals, Stevens-Johnson syndrome, ocular cicatricial pemphigoid, etc., can lead to destruction and deficiency of limbal stem cells. LSCD manifests as severe ocular surface disease (OSD) characterized by persistent and recurrent epithelial defects, conjunctivalisation and neovascularisation of the corneal surface, scarring, and ultimately opacity and blindness. Most cases of OSD are associated with severe dry eye owing to diminished mucin and aqueous secretion. Mycophenolate mofetil (MMF) has been shown to upregulate mucin expression in conjunctival goblet cells in vitro. The aim of this study was to evaluate the effects of MMF on mucin expression in primary cultures of oral mucosal epithelial cells. With institutional ethics committee approval and written informed consent, thirty oral mucosal epithelial tissue samples were obtained from patients undergoing oral surgery for non-malignant conditions. OMEC were grown on a human amniotic membrane (HAM, obtained from expecting mothers undergoing elective caesarean section) scaffold for 2 weeks in growth media containing DMEM and Ham's F12 (1:1) with 10% FBS and growth factors. The in vitro dosage of MMF was standardised by MTT assay. Stem cell markers were analysed using RT-PCR, while mucin mRNA expression was quantified using RT-PCR and qPCR before and after treating cultured OMEC with graded concentrations of MMF for 24 hours. Protein expression was validated using immunocytochemistry. Morphological studies revealed a confluent sheet of proliferating, stratified oral mucosal epithelial cells growing over the surface of the HAM scaffold. The presence of progenitor stem cell markers (p63, p75, β1-Integrin, and ABCG2) and cell-surface-associated mucins (MUC1, MUC15, and MUC16) was elucidated by RT-PCR. Mucin mRNA expression was upregulated in MMF-treated primary cultures of OMEC compared to untreated controls, as quantified by qPCR with β-actin as the internal reference gene. Increased MUC1 protein expression was validated by immunocytochemistry on representative samples. Our findings show that OMEC have the ability to form a multi-layered confluent sheet on the surface of HAM similar to a cornea, which is important for the reconstruction of the damaged ocular surface. Cultured OMEC have stem cell properties, as demonstrated by the stem cell markers, and MMF can be a novel enhancer of mucin production in OMEC. It has the potential to improve dry eye in patients undergoing OMEC transplantation for bilateral OSD. Further clinical trials are required to establish the role of MMF in patients undergoing OMEC transplantation.
Keywords: limbal stem cell deficiency, mycophenolate mofetil, mucin, ocular surface disease
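The abstract does not state the exact quantification model; for a qPCR design with β-actin as the internal reference gene, the standard Livak relative-quantification approach (assumed here) computes the fold change as:

\[
\Delta\Delta C_t = \left(C_t^{\mathrm{MUC}} - C_t^{\beta\text{-actin}}\right)_{\mathrm{MMF}} - \left(C_t^{\mathrm{MUC}} - C_t^{\beta\text{-actin}}\right)_{\mathrm{control}},
\qquad \text{fold change} = 2^{-\Delta\Delta C_t}
\]

A fold change greater than 1 indicates upregulation of the mucin transcript relative to untreated controls.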
Procedia PDF Downloads 328
220 Numerical Modeling of Timber Structures under Varying Humidity Conditions
Authors: Sabina Huč, Staffan Svensson, Tomaž Hozjan
Abstract:
Timber structures may be exposed to various environmental conditions during their service life. Often, the structures have to resist extreme changes in the relative humidity of the surrounding air while simultaneously carrying loads. The wood material's response to this load case is seen as increasing deformation of the timber structure. Relative humidity variations cause moisture changes in timber and, consequently, shrinkage and swelling of the material. Moisture changes and loads acting together result in mechano-sorptive creep, while sustained load gives viscoelastic creep. In some cases, the magnitude of the mechano-sorptive strain can be about five times the elastic strain already at low stress levels; analyzing mechano-sorptive creep and its influence on the long-term behavior of timber structures is therefore of high importance. Relatively many one-dimensional rheological models for wood can be found in the literature, while the number of models coupling the creep response in each material direction is limited. In this study, the mathematical formulation of a coupled two-dimensional mechano-sorptive model and its application to experimental results are presented. The mechano-sorptive model consists of a moisture transport model and a mechanical model. Variation of the moisture content in wood is modelled by a multi-Fickian moisture transport model. The model accounts for the processes of bound-water and water-vapor diffusion in wood, which are coupled through sorption hysteresis; sorption defines a nonlinear relation between moisture content and relative humidity. The multi-Fickian moisture transport model is able to accurately predict the unique, non-uniform moisture content field within a timber member over time. The calculated moisture content in timber members is used as input to the mechanical analysis. In the mechanical analysis, the total strain is assumed to be a sum of the elastic strain, viscoelastic strain, mechano-sorptive strain, and strain due to shrinkage and swelling. The mechano-sorptive response is modelled by a so-called spring-dashpot type of model that has proved suitable for describing the creep of wood; the mechano-sorptive strain depends on the change of moisture content. The model includes mechano-sorptive material parameters that have to be calibrated to experimental results. The calibration is made against experiments carried out on wooden blocks subjected to uniaxial compressive load in the tangential direction under varying humidity conditions. The moisture and mechanical models are implemented in finite element software. The calibration procedure gives the required, distinctive set of mechano-sorptive material parameters. The analysis shows that mechano-sorptive strain in the transverse direction is present, though its magnitude and variation are substantially lower than those of the mechano-sorptive strain in the direction of loading. The presented mechano-sorptive model enables observing the real temporal and spatial distribution of moisture-induced strains and stresses in timber members. Since the model's suitability for predicting mechano-sorptive strains is shown and the required material parameters are obtained, a comprehensive advanced analysis of the stress-strain state in timber structures, including connections, subjected to constant load and varying humidity is possible.
Keywords: mechanical analysis, mechano-sorptive creep, moisture transport model, timber
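The strain decomposition stated above can be written compactly as follows; the rate form on the right is one common spring-dashpot formulation of the mechano-sorptive term, given only to illustrate how the strain is driven by moisture change, not as the authors' exact constitutive law:

\[
\varepsilon_{\mathrm{tot}} = \varepsilon_{e} + \varepsilon_{ve} + \varepsilon_{ms} + \varepsilon_{u},
\qquad
\dot{\varepsilon}_{ms} = m\,\sigma\,\lvert\dot{w}\rvert
\]

where \(\varepsilon_{u}\) is the strain due to shrinkage and swelling, \(\sigma\) the stress, \(w\) the moisture content, and \(m\) a mechano-sorptive material parameter obtained from the calibration described above.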
Procedia PDF Downloads 244
219 Nano-Enabling Technical Carbon Fabrics to Achieve Improved Through Thickness Electrical Conductivity in Carbon Fiber Reinforced Composites
Authors: Angelos Evangelou, Katerina Loizou, Loukas Koutsokeras, Orestes Marangos, Giorgos Constantinides, Stylianos Yiatros, Katerina Sofocleous, Vasileios Drakonakis
Abstract:
Owing to their outstanding strength-to-weight properties, carbon fiber reinforced polymer (CFRP) composites have attracted significant attention, finding use in various fields (sports, automotive, transportation, etc.). The current momentum indicates an increasing demand for their employment in high-value bespoke applications, such as avionics and electronic casings, damage-sensing structures, and EMI (electromagnetic interference) shielding structures, which dictate the use of materials with increased electrical conductivity both in-plane and through the thickness. Several research groups have focused on enhancing the through-thickness electrical conductivity of FRPs, in an attempt to combine their intrinsically high relative strength with improved z-axis electrical response. However, only a limited number of studies deal with printing nano-enhanced polymer inks onto dry fabric to produce a pattern that could be used to fabricate CFRPs with improved through-thickness electrical conductivity. The present study investigates the use of screen printing on technical dry fabrics with nano-reinforced polymer-based inks to achieve the required through-thickness conductivity, opening new pathways for the application of fiber reinforced composites in niche products. Commercially available inks and in-house prepared inks reinforced with electrically conductive nanoparticles are employed, printed in different patterns. The aim is to investigate both the effect of the nanoparticle concentration and the droplet patterns (diameter, inter-droplet distance, and coverage) to optimize printing for the desired level of conductivity enhancement at the lamina level. The electrical conductivity is first measured at the ink level using a four-probe configuration to pinpoint the optimum concentrations. Upon printing of the different patterns, the coverage of the dry fabric area is assessed along with the permeability of the resulting dry fabrics, in alignment with CFRP fabrication, which requires adequate wetting by the epoxy matrix. Results demonstrated increased electrical conductivity of the printed droplets, from the benchmark value of 0.1 S/m to between 8 and 10 S/m. Printability of dense and dispersed patterns showed promising results in terms of increasing the z-axis conductivity without inhibiting the penetration of the epoxy matrix during the processing stage of fiber reinforced composites. The high value and niche prospects of the applications that can stem from CFRPs with increased through-thickness electrical conductivity highlight the potential of this endeavor, signifying screen printing as a process to nano-enable z-axis electrical conductivity at the composite lamina level. This work was co-funded by the European Regional Development Fund and the Republic of Cyprus through the Research and Innovation Foundation (Project: ENTERPRISES/0618/0013).
Keywords: CFRPs, conductivity, nano-reinforcement, screen-printing
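The abstract does not detail the four-probe data reduction; for a collinear four-point probe on a thin conductive layer, the textbook relations (assumed here) are:

\[
R_s = \frac{\pi}{\ln 2}\,\frac{V}{I} \approx 4.532\,\frac{V}{I},
\qquad \rho = R_s\,t, \qquad \sigma = \frac{1}{\rho}
\]

where \(V/I\) is the measured voltage-to-current ratio, \(t\) the printed layer thickness, and \(\sigma\) the conductivity in S/m; geometry-dependent correction factors replace \(\pi/\ln 2\) for small or thick samples.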
Procedia PDF Downloads 151
218 Assessment of Biofilm Production Capacity of Industrially Important Bacteria under Electroinductive Conditions
Authors: Omolola Ojetayo, Emmanuel Garuba, Obinna Ajunwa, Abiodun A. Onilude
Abstract:
Introduction: A biofilm is a functional community of microorganisms associated with a surface or an interface. The adherent cells become embedded within an extracellular matrix composed of polymeric substances; that is, biofilms are biological deposits consisting of both microbes and their extracellular products on biotic and abiotic surfaces. Despite their detrimental effects in medicine, biofilms as natural cell immobilization have found several applications in biotechnology, such as the treatment of wastewater, bioremediation and biodegradation, desulfurization of gas, and conversion of agro-derived materials into alcohols and organic acids. The means of enhancing immobilized cells have mostly been chemical-inductive, which affects the medium composition and the final product. Physical factors, including electrical, magnetic, and electromagnetic flux, have shown potential for enhancing biofilms, depending on the bacterial species, the nature and intensity of the emitted signals, the duration of exposure, and the substratum used. However, the concept of cell immobilization by electrical and magnetic induction is still underexplored. Methods: To assess the effects of physical factors on biofilm formation, six American Type Culture Collection strains (Acetobacter aceti ATCC15973, Pseudomonas aeruginosa ATCC9027, Serratia marcescens ATCC14756, Gluconobacter oxydans ATCC19357, Rhodobacter sphaeroides ATCC17023, and Bacillus subtilis ATCC6633) were used. Standard culture techniques for bacterial cells were adopted. The natural autoimmobilization potential of the test bacteria was assessed by simple biofilm ring formation on tubes, while crystal violet binding assay techniques were adopted to characterise biofilm quantity. Electroinduction of bacterial cells by direct current (DC) application in cell broth, static magnetic field exposure, and electromagnetic flux was carried out, and autoimmobilization of cells in a biofilm pattern was determined on the various substrata tested, including wood, glass, steel, polyvinyl chloride (PVC), and polyethylene terephthalate. The Biot-Savart law was used to quantify magnetic field intensity, and statistical analyses of the data obtained were carried out using analysis of variance (ANOVA) as well as other statistical tools. Results: Biofilm formation by the selected test bacteria was enhanced by the physical factors applied. Electromagnetic induction had the greatest effect on biofilm formation, and magnetic induction the least, across all substrata used. Microbial cell-cell communication could be a possible means by which physical signals affected the cells in a polarisable manner. Conclusion: The enhancement of biofilm formation by bacteria using physical factors shows that their inherent capability as a cell immobilization method can be further optimised for industrial applications. A possible relationship between the presence of voltage-dependent channels, mechanosensitive channels, and bacterial biofilms could shed more light on this phenomenon.
Keywords: bacteria, biofilm, cell immobilization, electromagnetic induction, substrata
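For reference, the Biot-Savart law cited above gives the magnetic flux density produced by a current-carrying conductor; for the simple case of the centre of a circular coil of \(N\) turns and radius \(R\) (an assumed geometry, since the abstract does not specify the exposure set-up), it reduces to the closed form on the right:

\[
\mathbf{B}(\mathbf{r}) = \frac{\mu_0 I}{4\pi}\oint \frac{d\boldsymbol{\ell}\times\hat{\mathbf{r}}'}{r'^2},
\qquad
B_{\mathrm{centre}} = \frac{\mu_0 N I}{2R}
\]

with \(\mu_0 = 4\pi\times10^{-7}\ \mathrm{T\,m/A}\) and \(I\) the coil current.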
Procedia PDF Downloads 187
217 Investigation of Resilient Circles in Local Community and Industry: Waju-Traditional Culture in Japan and Modern Technology Application
Authors: R. Ueda
Abstract:
Today, global society is seeking resilient partnership among local organizations and individuals that realizes multi-stakeholder relationships. Although this is proposed by the modern global framework of sustainable development, such affiliation can also be found in the traditional local community in Japan, and that traditional spirit tacitly sustains the modern practice of disaster mitigation in society and the economy. This research therefore aims to clarify and analyze the implications for the global community through actual case studies. Regional and urban resilience is the ability of multiple stakeholders to cooperate flexibly and to adapt in response to changes in circumstances caused by disasters, but various conflicts affect the coordination of disaster relief measures. These conflicts arise not only from a lack of communication and an insufficient network but also from the difficulty of jointly drawing common context from fragmented information. This reflects a weakness of modern engineering, which focuses on the maintenance and restoration of individual systems. Local ‘circles’, by contrast, holistically include the local community and interact periodically. Focusing on examples of resilient organizations and wisdom created in communities, what can be seen throughout history is a virtuous cycle in which information and knowledge are structured, the context to be adapted to becomes clear, and adaptation at a higher level is made possible, by which the collaboration between organizations is deepened and expanded. The wisdom of solid and autonomous disaster prevention formed by the historical community called ‘Waju’, an area surrounded by a ring embankment to protect the settlement from floods, lives on in the government efforts of a coastal industrial island today. Industrial companies there collaborate to create a circle that includes a common evacuation space, road access improvement, and infrastructure recovery. These days, people there are adopting new interface technology: large-scale augmented reality (AR) for more than a hundred people expresses detailed tsunami and liquefaction hazards. Shared experience of the simulated disaster space and circles of mutual discussion reinforce resilience, and the spirit of collaboration lies at the center of the circle. The author believes that both self-governing human organizations and the societal implementation of technical systems are necessary. Infrastructure should be instituted autonomously by associations of companies and other entities in industrial areas, working closely with local governments. To develop advanced disaster prevention and multi-stakeholder collaboration, partnerships among industry, government, academia, and citizens are important.
Keywords: industrial recovery, multi-stakeholders, traditional culture, user experience, Waju
Procedia PDF Downloads 112
216 An Argument for Agile, Lean, and Hybrid Project Management in Museum Conservation Practice: A Qualitative Evaluation of the Morris Collection Conservation Project at the Sainsbury Centre for Visual Arts
Authors: Maria Ledinskaya
Abstract:
This paper is part case study and part literature review. It seeks to introduce Agile, Lean, and Hybrid project management concepts from business, software development, and manufacturing fields to museum conservation by looking at their practical application on a recent conservation project at the Sainsbury Centre for Visual Arts. The author outlines the advantages of leaner and more agile conservation practices in today’s faster, less certain, and more budget-conscious museum climate where traditional project structures are no longer as relevant or effective. The Morris Collection Conservation Project was carried out in 2019-2021 in Norwich, UK, and concerned the remedial conservation of around 150 Abstract Constructivist artworks bequeathed to the Sainsbury Centre by private collectors Michael and Joyce Morris. It was a medium-sized conservation project of moderate complexity, planned and delivered in an environment with multiple known unknowns – unresearched collection, unknown conditions and materials, unconfirmed budget. The project was later impacted by the COVID-19 pandemic, introducing indeterminate lockdowns, budget cuts, staff changes, and the need to accommodate social distancing and remote communications. The author, then a staff conservator at the Sainsbury Centre who acted as project manager on the Morris Project, presents an incremental, iterative, and value-based approach to managing a conservation project in an uncertain environment. The paper examines the project from the point of view of Traditional, Agile, Lean, and Hybrid project management. The author argues that most academic writing on project management in conservation has focussed on a Traditional plan-driven approach – also known as Waterfall project management – which has significant drawbacks in today’s museum environment due to its over-reliance on prediction-based planning and its low tolerance to change. In the last 20 years, alternative Agile, Lean and Hybrid approaches to project management have been widely adopted in software development, manufacturing, and other industries, although their recognition in the museum sector has been slow. Using examples from the Morris Project, the author introduces key principles and tools of Agile, Lean, and Hybrid project management and presents a series of arguments on the effectiveness of these alternative methodologies in museum conservation, including the ethical and practical challenges to their implementation. These project management approaches are discussed in the context of consequentialist, relativist, and utilitarian developments in contemporary conservation ethics. Although not intentionally planned as such, the Morris Project had a number of Agile and Lean features which were instrumental to its successful delivery. These key features are identified as distributed decision-making, a co-located cross-disciplinary team, servant leadership, focus on value-added work, flexible planning done in shorter sprint cycles, light documentation, and emphasis on reducing procedural, financial, and logistical waste. Overall, the author’s findings point in favour of a hybrid model, which combines traditional and alternative project processes and tools to suit the specific needs of the project.
Keywords: agile project management, conservation, hybrid project management, lean project management, waterfall project management
Procedia PDF Downloads 70
215 Integration of an Evidence-Based Medicine Curriculum into Physician Assistant Education: Teaching for Today and the Future
Authors: Martina I. Reinhold, Theresa Bacon-Baguley
Abstract:
Background: Medical knowledge continuously evolves, and evidence-based medicine (EBM) has emerged as a model to help health care providers stay up to date. The practice of EBM requires new skills of the health care provider, including directed literature searches, critical evaluation of research studies, and direct application of the findings to patient care. This paper describes the integration and evaluation of an evidence-based medicine course sequence in a Physician Assistant (PA) curriculum. The course sequence teaches students to manage and use the best clinical research evidence to practice medicine competently; a survey was developed to assess its outcomes. Methodology: The cornerstone of the three-semester EBM sequence is interactive small-group discussions designed to introduce students to the most clinically applicable skills for identifying, managing, and using the best clinical research evidence to improve the health of their patients. Each semester, students are assigned to small-group discussions facilitated by faculty with varying backgrounds and expertise. Prior to the start of the first EBM course in the winter semester, PA students complete a knowledge-based survey, developed by the authors, to assess the effectiveness of the course series. The survey consists of 53 Likert-scale questions addressing the nine objectives of the course series. At the end of the three-semester series, the same survey is given to all students in the program, and the results from before and after the EBM sequence are compared, with specific attention to overall performance on the nine course objectives. Results: Students from the Classes of 2016 and 2017 consistently improved, as measured by the percentage of correct responses on the survey tool, after the EBM course series (Class of 2016: pre 62%, post 75%; Class of 2017: pre 61%, post 70%). The biggest increase in knowledge was observed in the areas of finding and evaluating evidence: asking concise clinical questions (Class of 2016: pre 61%, post 81%; Class of 2017: pre 61%, post 75%) and searching the medical database (Class of 2016: pre 24%, post 65%; Class of 2017: pre 35%, post 66%). Questions requiring students to analyze, evaluate, and report on the available clinical evidence regarding diagnosis showed improvement to a lesser extent (Class of 2016: pre 56%, post 77%; Class of 2017: pre 56%, post 61%). Conclusions: The outcomes show that students gained skills that will allow them to apply EBM principles. In addition, the outcomes of the knowledge-based survey allowed the faculty to focus on areas needing improvement, specifically the translation of best evidence into patient care. To address this area, the clinical faculty developed case scenarios that were incorporated into the lecture and discussion sessions, allowing students to better connect the research studies with patient care. Students commented that ‘class discussion and case examples’ contributed most to their learning and that ‘it was helpful to learn how to develop research questions and how to analyze studies and their significance to a potential client’. As evident from these outcomes, the EBM courses achieved their goals and were well received by the students.
Keywords: evidence-based medicine, clinical education, assessment tool, physician assistant
Procedia PDF Downloads 124
214 Usability Assessment of a Bluetooth-Enabled Resistance Exercise Band among Young Adults
Authors: Lillian M. Seo, Curtis L. Petersen, Ryan J. Halter, David Kotz, John A. Batsis
Abstract:
Background: Resistance-based exercises effectively enhance muscle strength, which is especially important in older populations as it reduces the risk of disability. Our group developed a Bluetooth-enabled handle for resistance exercise bands that wirelessly transmits relative force data via Bluetooth Low Energy to a local smartphone or similar device. The system has the potential to measure home-based exercise interventions, allowing health professionals to monitor compliance. Its feasibility has already been demonstrated in both clinical and field-based settings, but it remained unclear whether the system's usability persists with repeated use. The current study assessed the usability of this system, and users' satisfaction with repeated use, by deploying the device among young adults to gather formative information that can ultimately improve the device's design for older adults. Methods: A usability study was conducted in which 32 participants used the system. Participants executed 10 repetitions of four commonly performed exercises: bicep flexion, shoulder abduction, elbow extension, and triceps extension. Each completed three exercise sessions, separated by at least 24 hours to minimize muscle fatigue. At the conclusion, subjects completed an adapted version of the Usefulness, Satisfaction, and Ease of use (USE) questionnaire, assessing the system across four domains: usefulness, satisfaction, ease of use, and ease of learning. The 20-item questionnaire examined how strongly a participant agreed with positive statements about the device on a seven-point Likert scale, with 1 representing 'strongly disagree' and 7 representing 'strongly agree'. Participants' data were aggregated to calculate mean response values for each question and domain, assessing the device's performance across different facets of the user experience. Summary force data were visualized using a custom web application. Finally, an optional prompt at the end of the questionnaire allowed written comments and feedback from participants, eliciting qualitative indicators of usability. Results: Of the 32 participants, 13 (41%) were female; their mean age was 32.4 ± 11.8 years, and no participants had a physical impairment. No usability question received a mean score below 5 out of 7. The four domains' mean scores were: usefulness 5.66 ± 0.35; satisfaction 6.23 ± 0.06; ease of use 6.25 ± 0.43; and ease of learning 6.50 ± 0.19. Representative quotes from the open-ended feedback include: 'A non-rigid strap-style handle might be useful for some exercises,' and, 'Would need different bands for each exercise as they use different muscle groups with different strength levels.' General impressions were favorable, supporting the expectation that the device would be a useful tool in exercise interventions. Conclusions: A simple usability assessment of a Bluetooth-enabled resistance exercise band supports a consistent and positive user experience among young adults. This study provides adequate formative data, assuring that the next steps can be taken to continue testing and development for the target population of older adults.
Keywords: Bluetooth, exercise, mobile health, mHealth, usability
Procedia PDF Downloads 116
213 A Resilience-Based Approach for Assessing Social Vulnerability in New Zealand's Coastal Areas
Authors: Javad Jozaei, Rob G. Bell, Paula Blackett, Scott A. Stephens
Abstract:
In the last few decades, Social Vulnerability Assessment (SVA) has been a favoured means of evaluating the susceptibility of social systems to drivers of change, including climate change and natural disasters. However, applying SVA to inform responsive and practical strategies for dealing with uncertain climate change impacts has always been challenging, and agencies typically revert to conventional risk/vulnerability assessment. The challenges include the complex nature of social vulnerability concepts, which limits their applicability; complications in identifying and measuring the determinants of social vulnerability; transitory social dynamics in a changing environment; and the unpredictability of the scenarios of change that shape the vulnerability regime (including contention over when these impacts might emerge). Research suggests that conventional quantitative approaches to SVA cannot appropriately address these problems; hence, the outcomes can be misleading and unfit for addressing the ongoing, uncertain rise in risk. The second phase of New Zealand's Resilience to Nature's Challenges (RNC2) is developing a forward-looking vulnerability assessment framework and methodology that informs decision-making and policy development for changing coastal systems and accounts for the complex dynamics of New Zealand's coastal systems (socio-economic, environmental, and cultural). RNC2 also requires the new methodology to consider plausible drivers of incremental and unknowable change, to create mechanisms that enhance social and community resilience, and to fit New Zealand's multi-layer governance system. This paper analyses the conventional approaches and methodologies of SVA and offers recommendations for more responsive approaches that inform adaptive decision-making and policy development in practice. The research adopts a qualitative design to examine different aspects of conventional SVA processes; the methods include a systematic review of the literature and case studies. We found that the conventional quantitative, reductionist, and deterministic mindset of SVA processes, with its focus on the impacts of rapid-onset stressors (e.g., tsunamis, floods), shows deficiencies in accounting for the complex dynamics of social-ecological systems (SES) and the uncertain, long-term impacts of incremental drivers. The paper focuses on the links between resilience and vulnerability and suggests how resilience theory and its underpinning notions, such as the adaptive cycle, panarchy, and system transformability, could address these issues and thereby influence the perception of the vulnerability regime and its assessment processes. In this regard, it is argued that a paradigm shift from 'specific resilience', which focuses on the adaptive capacity associated with the notion of 'bouncing back', to 'general resilience', which accounts for system transformability, regime shift, and 'bouncing forward', can deliver more effective strategies in an era characterised by ongoing change and deep uncertainty.
Keywords: complexity, social vulnerability, resilience, transformation, uncertain risks
Procedia PDF Downloads 100
212 Digital Health During a Pandemic: Critical Analysis of the COVID-19 Contact Tracing Apps
Authors: Mohanad Elemary, Imose Itua, Rajeswari B. Matam
Abstract:
Virologists and public health experts had been predicting potential pandemics from coronaviruses for decades. The viruses that caused the SARS and MERS outbreaks, as well as the Nipah virus, cost many lives, yet the COVID-19 pandemic caused by the SARS-CoV-2 virus still surprised many scientific communities, experts, and governments with its ease of transmission and its pathogenicity. Governments of various countries reacted by locking down entire populations in their homes to combat the devastation caused by the virus, which led to loss of livelihood and economic hardship for many individuals and organizations. To revive national economies and support their citizens in resuming their lives, governments focused on the development and use of contact tracing apps as a digital way to track and trace exposure. Google and Apple introduced the Exposure Notification System (ENS) framework, and independent organizations and countries also developed their own frameworks for contact tracing apps. The efficiency, popularity, and adoption rate of these various apps have differed across countries. In this paper, we present a critical analysis of the different contact tracing apps with respect to their efficiency, adoption rate, and general perception, and of the governmental strategies and policies that led to their development. Among European countries, each followed its own approach to the same problem, resulting in different realizations of similarly functioning applications with differing levels of use and acceptance. The study conducted an extensive review of existing literature, policies, and reports across multiple disciplines, from which a framework was developed and then validated through interviews with six key stakeholders in the field, including founders and executives of digital health startups and corporates as well as experts from international organizations such as the World Health Organization. The result of this research is a framework of best practices and tactics that addresses three main questions regarding contact tracing apps: how to develop them, how to deploy them, and how to regulate them. The findings are based on the best practices applied by governments across multiple countries, the mistakes they made, and the best practices applied in similar situations in the business world. They include multiple strategies for the development milestone, such as establishing frameworks for cooperation with the private sector and designing the features and user experience for a transparent, effective, and rapidly adaptable app. For the deployment stage, several tactics are discussed regarding communication messages, marketing campaigns, persuasive psychology, and initial deployment scale strategies. The paper also discusses the data privacy dilemma and how to build a more sustainable system for processing and utilizing health-related data, through principles-based regulation specific to health data that allows its use for the public good. This framework offers insights into strategies and tactics that could be implemented as protocols for future public health crises and emergencies, whether global or regional.
Keywords: contact tracing apps, COVID-19, digital health applications, exposure notification system
Procedia PDF Downloads 135
211 Feasibility of Applying a Hydrodynamic Cavitation Generator as a Method for Intensification of Methane Fermentation Process of Virginia Fanpetals (Sida hermaphrodita) Biomass
Authors: Marcin Zieliński, Marcin Dębowski, Mirosław Krzemieniewski
Abstract:
The anaerobic degradation of substrates is limited especially by the rate and effectiveness of the first (hydrolytic) stage of fermentation. This stage may be intensified through pre-treatment of the substrate aimed at disintegrating the solid phase and destroying substrate tissues and cells. The most frequently applied criterion for evaluating disintegration outcomes is the increase in biogas recovery, owing to the possibility of using the biogas for energy and thereby recovering the input energy consumed in pre-treating the substrate before fermentation. Hydrodynamic cavitation is one method of organic substrate disintegration with high implementation potential. Cavitation is the formation of discontinuity cavities filled with vapor or gas in a liquid, induced by a pressure drop to a critical value; it is driven by a varying field of pressures. A void occurs in the flow where the pressure first drops to a value close to the saturated vapor pressure and then increases. Cavitation conducted under controlled conditions has been found to significantly improve the effectiveness of anaerobic conversion of organic substrates with various characteristics, as the phenomenon allows effective damage and disintegration of cellular and tissue structures. Disintegration of structures and the release of organic compounds into the dissolved phase have a direct effect on intensifying biogas production during anaerobic fermentation, on reducing the dry matter content of the post-fermentation sludge, and on achieving a high degree of sludge hygienization and increased susceptibility to dehydration. A device whose efficiency has been confirmed both under laboratory conditions and in systems operating at technical scale is the hydrodynamic cavitation generator. Cavitators, agitators, and emulsifiers constructed and tested worldwide so far have been characterized by low efficiency and high energy demand; many of them proved effective under laboratory conditions but failed under industrial ones. The only task these appliances have successfully realized and utilized on a wider scale is the heating of liquids, which has limited their use to heating installations. The design of the presented cavitation generator achieves satisfactory energy efficiency and enables its use under industrial conditions in depolymerization processes of biomass with various characteristics. Investigations at laboratory and industrial scale confirmed the effectiveness of applying cavitation to biomass destruction. In laboratory studies, using the cavitation generator for the disintegration of sewage sludge increased biogas production by ca. 30% and shortened the treatment process by ca. 20-25%. Shortening the technological process and increasing treatment plant effectiveness may delay investments aimed at increasing system output. The use of a mechanical cavitator and the application of repeated cavitation passes (4-6 times) enable significant acceleration of biogas production. In addition, mechanical cavitation accelerates increases in COD and VFA levels.
Keywords: hydrodynamic cavitation, pretreatment, biomass, methane fermentation, Virginia fanpetals
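The pressure criterion described above is conventionally expressed through the dimensionless cavitation number, a standard definition supplied here for clarity (the abstract itself does not quote it):

\[
\sigma = \frac{p - p_v}{\tfrac{1}{2}\rho u^2}
\]

where \(p\) is the local reference pressure, \(p_v\) the saturated vapor pressure of the liquid, \(\rho\) the liquid density, and \(u\) the characteristic flow velocity; cavitation sets in when \(\sigma\) falls below a device-specific critical value.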
Procedia PDF Downloads 432
210 Removal of VOCs from Gas Streams with Double Perovskite-Type Catalyst
Authors: Kuan Lun Pan, Moo Been Chang
Abstract:
Volatile organic compounds (VOCs) are major air contaminants; under solar irradiation they can react with nitrogen oxides (NOx) in the atmosphere to form ozone (O3) and peroxyacetyl nitrate (PAN), leading to environmental hazards. In addition, some VOCs are toxic even at low concentrations and cause adverse effects on human health. How to effectively reduce VOC emissions has therefore become an important issue. Thermal catalysis is regarded as an effective route for VOC removal because it provides an oxidation pathway that converts VOCs into carbon dioxide (CO2) and water (H2O(g)). Single perovskite-type catalysts are promising for VOC removal and have good potential to replace noble metals owing to their good activity and high thermal stability. Single perovskites can generally be described as ABO3 or A2BO4, where the A-site is often a rare earth element or an alkaline metal, and the B-site is typically a transition metal cation (Fe, Cu, Ni, Co, or Mn). The catalytic properties of perovskites rely mainly on the nature, oxidation states, and arrangement of the B-site cation. Interestingly, single perovskites can be further synthesized into double perovskite-type catalysts, simply represented as A2B'B''O6; likewise, the A-site stands for an alkaline metal or rare earth element, and B' and B'' are transition metals. Double perovskites possess unique surface properties: structurally, the B-site forms an ordered three-dimensional arrangement in which B'O6 and B''O6 octahedra alternate and corner-share along the three directions of the crystal lattice, while the A-site cations occupy the voids between the octahedra. This specific arrangement of the alternating B-site has attracted considerable attention. Double perovskites may therefore show more variation than single perovskites, and this greater variation may promote catalytic performance; the activity of double perovskites toward VOC removal is expected to be higher than that of single perovskites. In this study, a double perovskite-type catalyst (La2CoMnO6) is prepared and evaluated for VOC removal; single perovskites, including LaCoO3 and LaMnO3, are tested for comparison. Toluene (C7H8) is one of the important VOCs commonly applied in chemical processes; in addition to its wide application, C7H8 has high toxicity at low concentrations and was therefore selected as the target compound in this study. Experimental results indicate that the double perovskite (La2CoMnO6) has better activity than the single perovskites. In particular, C7H8 is completely oxidized to CO2 at 300 °C when La2CoMnO6 is applied. Characterization indicates that the double perovskite has unique surface properties and higher amounts of lattice oxygen, leading to higher activity. In durability tests, La2CoMnO6 maintains a C7H8 removal efficiency of 100% at 300 °C and 30,000 h-1, and it shows good resistance to CO2 (5%) and H2O(g) (5%) in the gas streams tested. For the various VOCs tested, including isopropyl alcohol (C3H8O), ethanal (C2H4O), and ethylene (C2H4), efficiencies as high as 100% were achieved with the double perovskite-type catalyst operated at 300 °C, indicating that double perovskites are promising catalysts for VOC removal; possible mechanisms are elucidated in this paper.
Keywords: volatile organic compounds, Toluene (C7H8), double perovskite-type catalyst, catalysis
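The complete oxidation reported above corresponds to the standard balanced total-oxidation stoichiometry for toluene, stated here for clarity:

\[
\mathrm{C_7H_8} + 9\,\mathrm{O_2} \longrightarrow 7\,\mathrm{CO_2} + 4\,\mathrm{H_2O}
\]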
Procedia PDF Downloads 164
209 Healing (in) Relationship: The Theory and Practice of Inner-Outer Peacebuilding in North-Western India
Authors: Josie Gardner
Abstract:
The overall intention of this research is to reimagine peacebuilding in both theory and practical application, in light of the shortcomings and unsustainability of the current peacebuilding paradigm. These limitations are identified here as an overly rational-material approach to peacebuilding that neglects the inner dimension of peace in favour of a fragmented rather than holistic model, and that espouses a conflict- and violence-centric approach to peacebuilding. In counter, this presentation investigates the dynamics of inner and outer peace as a holistic, complex system, towards 'inner-outer' peacebuilding. This paper draws from primary research in the protracted conflict context of north-western India (Jammu, Kashmir & Ladakh) as a case study. The presentation has two central aims: first, to introduce the process of inner (psycho-spiritual) peacebuilding, which has thus far been neglected by mainstream and orthodox literature; second, to examine why inner peacebuilding is essential for realising sustainable peace on a broader scale as outer (socio-political) peace, and to better understand how the inner and outer dynamics of peace relate to and affect one another. To these ends, Josephine (the researcher/author/presenter) partnered with the Yakjah Reconciliation and Development Network to implement a series of action-oriented workshops and retreats centred around healing, reconciliation, leadership, and personal development, with the dual purpose of collaboratively generating data, theory, and insights, and of providing the youth leaders with an experiential, transformative experience. The research team created and used a novel methodological approach called Mapping Ritual Ecologies, which draws from Participatory Action Research and Digital Ethnography to form a collaborative research model with a group of 20 youth co-researchers who are emerging youth peace leaders in Kashmir, Jammu, and Ladakh. This research found significant intra- and inter-personal shifts towards an experience of inner peace through inner peacebuilding activities. Moreover, this process of inner peacebuilding affected the participants' families and communities through interpersonal healing and peace leadership, in an inside-out process of change. These findings have generated rich insights and supported emerging theories about the dynamics between inner and outer peace, power, justice, and collective healing. This presentation argues that the largely neglected dimension of inner (psycho-spiritual) peacebuilding is imperative for broader socio-political (outer) change. Changing structures of oppression, injustice, and violence, i.e., structures of separation, requires individual, interpersonal, and collective healing. While this presentation primarily examines and advocates for inside-out peacebuilding and social justice, it also touches upon the effect of systems of separation on the inner condition and human experience. This research reimagines peacebuilding as a holistic inner-outer approach, offering an alternative path forward that weaves together self-actualisation and social justice. While contextualised within north-western India with a small case study population, the findings speak to other conflict contexts as well as to our global peacebuilding and social justice milieu.
Keywords: holistic, inner peacebuilding, psycho-spiritual, systems, youth
Procedia PDF Downloads 118
208 Application of Harris Hawks Optimization Metaheuristic Algorithm and Random Forest Machine Learning Method for Long-Term Production Scheduling Problem under Uncertainty in Open-Pit Mines
Authors: Kamyar Tolouei, Ehsan Moosavi
Abstract:
In open-pit mines, the long-term production scheduling optimization problem (LTPSOP) is a complicated problem involving constraints, large datasets, and uncertainties. Uncertainty in the output is caused by several geological, economic, or technical factors. Due to its dimensions and NP-hard nature, it is usually difficult to find an ideal solution to the LTPSOP. The optimal schedule generally constrains the ore, metal, and waste tonnages, average grades, and cash flows of each period. Past decades have seen important advances in long-term production scheduling and optimization algorithms as researchers have become highly cognizant of the issue; even so, the LTPSOP cannot be considered a well-solved problem. Traditional production scheduling methods in open-pit mines apply a single estimated orebody model to produce optimal schedules. The smoothing effect of some geostatistical estimation procedures causes most of the resulting mine schedules and production predictions to be unrealistic and imperfect. With the expansion of simulation procedures, the risks from grade uncertainty in ore reserves can be evaluated and organized through a set of equally probable orebody realizations. In this paper, to incorporate grade uncertainty into the strategic mine schedule, a stochastic integer programming framework is presented for the LTPSOP. The objective function of the model is to maximize the net present value and simultaneously minimize the risk of deviation from the production targets under grade uncertainty, while satisfying all technical constraints and operational requirements. Instead of applying one estimated orebody model as input to optimize the production schedule, a set of equally probable orebody realizations is applied to capture grade uncertainty in the strategic mine schedule and to produce a more profitable, risk-based production schedule. A mixture of metaheuristic procedures and mathematical methods paves the way to an appropriate solution. This paper introduces a hybrid model combining the augmented Lagrangian relaxation (ALR) method with a metaheuristic algorithm, Harris Hawks Optimization (HHO), to solve the LTPSOP under grade uncertainty. In this study, HHO is employed to update the Lagrange multipliers. In addition, a machine learning method, Random Forest, is applied to estimate the gold grade in a mineral deposit, and the Monte Carlo method is used as the simulation method with 20 realizations. The results show that the proposed approach improves considerably on the traditional methods; the outcomes were also compared with the ALR-genetic algorithm and ALR-subgradient methods. To demonstrate the applicability of the model, a case study of an open-pit gold mining operation is implemented. The framework demonstrates the capability to minimize risk and to improve the expected net present value and financial profitability of the LTPSOP, and it controls geological risk more effectively than the traditional procedure by considering grade uncertainty within the hybrid model framework.
Keywords: grade uncertainty, metaheuristic algorithms, open-pit mine, production scheduling optimization
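To make the hybrid scheme concrete, the following minimal Python sketch combines the two ingredients named above: a Random Forest grade estimator and an augmented-Lagrangian multiplier update driven by a search over candidate schedules. It is an illustration under assumed data, with a deliberately crude random search standing in for HHO; it is not the authors' implementation, and all names, sizes, and parameter values are assumptions.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Step 1: estimate block grades from synthetic drill-hole-like samples.
X = rng.uniform(0, 1000, size=(200, 3))        # sample coordinates (x, y, z)
y = 0.002 * X[:, 0] + rng.normal(0, 0.3, 200)  # synthetic gold grades (g/t)
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
blocks = rng.uniform(0, 1000, size=(50, 3))    # block-model centroids
grades = rf.predict(blocks)                    # estimated block grades

# Step 2: augmented Lagrangian objective for one production-target constraint.
target = 40.0   # blocks to be mined in the period (toy tonnage target)
mu = 10.0       # penalty parameter

def augmented_lagrangian(schedule, lam):
    """NPV proxy minus augmented Lagrangian terms for target deviation."""
    value = schedule @ grades                  # revenue proxy of mined blocks
    deviation = schedule.sum() - target        # constraint violation g(x)
    return value - lam * deviation - 0.5 * mu * deviation**2

# Step 3: multiplier update loop; in the paper this search is performed by
# HHO, here random sampling of binary schedules keeps the sketch short.
lam = 0.0
for _ in range(20):
    candidates = rng.integers(0, 2, size=(200, 50))
    best = max(candidates, key=lambda s: augmented_lagrangian(s, lam))
    lam += mu * (best.sum() - target)          # classic multiplier step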
Procedia PDF Downloads 103
207 Teaching Linguistic Humour Research Theories: Egyptian Higher Education EFL Literature Classes
Authors: O. F. Elkommos
Abstract:
“Humour studies” is a relatively recent interdisciplinary research area. It interests researchers from the disciplines of psychology, sociology, medicine, nursing, workplace studies, and gender studies, among others, and certainly teaching, language learning, linguistics, and literature. Linguistic theories of humour research are numerous, some of which are of interest to the present study. Although humour courses are now taught in universities around the world, they are not yet included in the Egyptian context. The purpose of the present study is two-fold: to review the state of the art and to show how linguistic theories of humour can be used as an art and craft of teaching and learning in EFL literature classes. In the present study, linguistic theories of humour were applied to selected literary texts to interpret humour as an intrinsic artistic communicative competence challenge. Humour in the area of linguistics was seen as a fifth component of the communicative competence of the second language learner. In literature, it was studied as satire, irony, wit, or comedy. Linguistic theories of humour now describe its linguistic structure, mechanism, function, and linguistic deviance. The Semantic Script Theory of Humour (SSTH), the General Theory of Verbal Humour (GTVH), the Audience Based Theory of Humour (ABTH), and their extensions and subcategories, as well as the pragmatic perspective, were employed in the analyses. This research analysed the linguistic semantic structure of humour, its mechanism, and how the audience reader (teacher or learner) becomes an interactive interpreter of the humour. This promotes humour competence together with linguistic, social, cultural, and discourse communicative competence. Studying humour as part of the literary texts and perceiving its function in the work also brings its positive association into class for educational purposes. Humour is by default a provoking, laughter-generating device. Recognising, perceiving, and resolving incongruity is a cognitive mastery. This cognitive process involves a humour experience that lightens up the classroom and the mind, and it establishes connections necessary for the learning process. In this context, the study examined selected narratives to exemplify the application of the theories. It is, therefore, recommended that the theories be taught and applied to literary texts for a better understanding of the language. Students will then develop their language competence. Teachers in EFL/ESL classes will teach the theories, help students apply them and interpret texts, and in the process will also use humour themselves, thus easing students' acquisition of the second language and making the classroom an enjoyable, cheerful, self-assuring, and self-illuminating experience for both themselves and their students. It is further recommended that courses in humour research studies become an integral part of higher education curricula in Egypt.
Keywords: ABTH, deviance, disjuncture, episodic, GTVH, humour competence, humour comprehension, humour in the classroom, humour in the literary texts, humour research linguistic theories, incongruity-resolution, isotopy-disjunction, jab line, longer text joke, narrative story line (macro-micro), punch line, six knowledge resources, SSTH, stacks, strands, teaching linguistics, teaching literature, TEFL, TESL
Procedia PDF Downloads 302
206 Luminescent Properties of Plastic Scintillator with Large Area Photonic Crystal Prepared by a Combination of Nanoimprint Lithography and Atomic Layer Deposition
Authors: Jinlu Ruan, Liang Chen, Bo Liu, Xiaoping Ouyang, Zhichao Zhu, Zhongbing Zhang, Shiyi He, Mengxuan Xu
Abstract:
Plastic scintillators play an important role in the measurement of mixed neutron/gamma pulsed radiation, in neutron radiography, and in pulse shape discrimination technology. For such applications, it is desirable that as many as possible of the photons produced by the interactions between a plastic scintillator and radiation be detected by the photoelectric detectors, and that more photons be emitted from the scintillator along the specific direction where the detectors are located. Unfortunately, a majority of the photons produced are trapped in the plastic scintillator by total internal reflection (TIR): a significant light-trapping effect occurs whenever the incident angle of the internal scintillation light is larger than the critical angle. Some of the trapped photons are absorbed by the scintillator itself, and the others escape only through its edges, making the light extraction efficiency of plastic scintillators very low. Moreover, only a small portion of the photons emitted from the scintillator can easily be detected by the detectors, because the distribution of their emission directions exhibits an approximately Lambertian angular profile following a cosine emission law. Therefore, enhancing the light extraction efficiency and adjusting the emission angular profile are the keys to increasing the number of photons detected. In recent years, photonic crystal structures have been applied to inorganic scintillators to successfully enhance the light extraction efficiency and adjust the angular profile of scintillation light. However, because conventional preparation methods for photonic crystals can deteriorate the performance of plastic scintillators or even destroy them, investigations of preparation methods suited to plastic scintillators, and of the luminescent properties of plastic scintillators with photonic crystal structures, remain inadequate. Although we have successfully fabricated photonic crystal structures on the surface of plastic scintillators by a modified self-assembly technique and achieved a large enhancement of light extraction efficiency without evident angular dependence of the scintillation light profile, the preparation of large-area photonic crystals (diameter larger than 6 cm) with a perfect periodic structure is still difficult. In this paper, large-area photonic crystals were first prepared on the surface of scintillators by nanoimprint lithography; a conformal layer of high-refractive-index material was then deposited on the photonic crystal by atomic layer deposition to enhance the stability of the photonic crystal structures and to increase the number of leaky modes, thereby improving the light extraction efficiency. The luminescent properties of the plastic scintillator with photonic crystals prepared by this method are compared with those of a plastic scintillator without photonic crystals. The results indicate that the number of photons detected is increased by the enhanced light extraction efficiency, and that the angular profile of the scintillation light of the scintillator with photonic crystals exhibits evident angular dependence.
The described preparation of photonic crystals is beneficial to scintillation detection applications and lays an important technical foundation for plastic scintillators to meet special requirements under different application backgrounds.
Keywords: angular profile, atomic layer deposition, light extraction efficiency, plastic scintillator, photonic crystal
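As a back-of-the-envelope illustration of the TIR trapping described above, the following snippet computes the critical angle and the escape-cone solid-angle fraction for a typical plastic scintillator; the refractive index n = 1.58 is an assumed textbook value, not taken from the paper.

import math

n = 1.58                                       # assumed refractive index of the scintillator
theta_c = math.asin(1.0 / n)                   # critical angle at the scintillator/air interface
escape_fraction = (1 - math.cos(theta_c)) / 2  # solid-angle fraction of one escape cone

print(f"critical angle: {math.degrees(theta_c):.1f} degrees")         # ~39.3
print(f"fraction escaping through one face: {escape_fraction:.1%}")   # ~11%

With only roughly 11% of isotropically emitted light escaping through a given face, the motivation for photonic-crystal extraction layers is clear.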
Procedia PDF Downloads 199
205 Mixed-Methods Analyses of Subjective Strategies of Most Unlikely but Successful Transitions from Social Benefits to Work
Authors: Hirseland Andreas, Kerschbaumer Lukas
Abstract:
In the case of Germany, there are about one million long-term unemployed, a figure that has not varied much over the past years. These long-term unemployed did not benefit from the prospering labor market while most short-term unemployed did. Instead, they remain continuously dependent on welfare and sometimes on precarious short-term employment, experiencing in-work poverty. Long-term unemployment thus becomes a major obstacle to regaining employment, especially when accompanied by other impediments such as low-level (school/vocational) education, poor health (especially chronic illness), advanced age (older than fifty), immigrant status, motherhood, or engagement in care for other relatives. As this current research project shows, in these cases the chance of regaining employment decreases to near nil. Almost two-thirds of all welfare recipients have multiple impediments that hinder a successful transition from welfare back to sustainable and sufficient employment. Prospective employers are unlikely to hire long-term unemployed people with additional impediments because they evaluate potential employees on negative signals (e.g., low-level education) and on implicit assumptions of unproductiveness (e.g., poor health, age). Some findings of the panel survey 'Labor market and social security' (PASS), carried out by the Institute for Employment Research (the research institute of the German Federal Labor Agency), spread a ray of hope, showing that unlikely does not necessarily mean impossible. The presentation reports on current research into these very scarce 'success stories' of unlikely transitions from long-term unemployment to work, and on how these cases managed the switch against all odds. The study is based on a mixed-methods design. Within the panel survey (~15,000 respondents in ~10,000 households), only 66 cases of such unlikely transitions were observed. These cases were explored by qualitative inquiry: in-depth interviews and qualitative network techniques. There is strong evidence that sustainable transitions are influenced by certain biographical resources, such as habits of network use, a set of informal skills, and particularly a resilient way of dealing with obstacles, combined with contextual factors, rather than by job-placement procedures promoted by job centers according to activation rules or by following formal paths of application. On the employers' side, small and medium-sized enterprises are often found to give job opportunities to a wider variety of applicants, often on the basis of a slow but steadily growing relationship that leads to employment. Based on these results, it is possible to show and discuss some limitations of (German) activation policies targeting the labor market and their impact on welfare dependency and long-term unemployment. The findings also suggest more supportive small-scale measures in the field of labor-market policy to help long-term unemployed people with multiple impediments overcome their situation (e.g., organizing small-scale structures and low-threshold services that allow them to encounter possible employers on a more informal basis, such as 'meet and greet' events).
Keywords: against-all-odds, mixed-methods, welfare state, long-term unemployment
Procedia PDF Downloads 361
204 A Critical Analysis of How the Role of the Imam Can Best Meet the Changing Social, Cultural, and Faith-Based Needs of Muslim Families in 21st Century Britain
Authors: Christine Hough, Eddie Abbott-Halpin, Tariq Mahmood, Jessica Giles
Abstract:
This paper draws together the findings from two research studies, each undertaken with cohorts of South Asian Muslim respondents located in the North of England between 2017 and 2019. The first study, entitled Faith, Family and Crime (FFC), investigated the extent to which a Muslim family's social and health well-being is affected by a family member's involvement in the Criminal Justice System (CJS). This study captured a range of data through a detailed questionnaire and structured interviews. The data from the interview transcripts were analysed using open coding and aspects of the grounded theory approach. The findings provide clear evidence that the respondents were neither well informed nor supported throughout the processes of the CJS, from arrest to post-sentencing. These experiences gave rise to mental and physical stress, potentially unfair sentencing, and a significant breakdown in communication within the respondents' families. They highlight a particular aspect of complexity in the current needs of those South Asian Muslim families who find themselves involved in the CJS, one closely connected to family structure, culture, and faith. The second study, referred to throughout this paper as #ImamsBritain (and providing the majority of the content for this paper), explores how Imams, in their role as community faith leaders, can best address the complex and changing needs of South Asian Muslim families, such as those that emerged in the findings from FFC. The changing socio-economic and political climates of the last thirty or so years have brought about significant changes to the lives of Muslim families, and these have created more complex levels of social, cultural, and faith-based need for families and individuals. As a consequence, much greater demands are now made of Imams, and their role has undergone far-reaching changes in response. The #ImamsBritain respondents identified a pressing need to develop a wider range of pastoral and counselling skills, which they saw as extending far beyond the traditional role of the Imam as a religious teacher and spiritual guide. The #ImamsBritain project was conducted with a cohort of British Imams in the North of England. Data were collected first through a questionnaire relating to the respondents' training and development needs and then analysed in depth using the Delphi approach, through which the data were scrutinised using interpretative content analysis. The findings from this project reflect the respondents' individual perceptions of the kind of training and development they need to fulfil their role in 21st century Britain. They also provide a unique framework for constructing a professional guide for Imams in Great Britain. The discussions and critical analyses in this paper draw on the discourses of professionalization and pastoral care and on relevant reports and reviews of Imam training in Europe and Canada.
Keywords: criminal justice system, faith and culture, Imams, Muslim community leadership, professionalization, South Asian family structure
Procedia PDF Downloads 137
203 Ethanolamine Detection with Composite Films
Authors: S. A. Krutovertsev, A. E. Tarasova, L. S. Krutovertseva, O. M. Ivanova
Abstract:
The aim of this work was to obtain stable sensitive films with good sensitivity to ethanolamine (C2H7NO) in air. Ethanolamine is used as an adsorbent in different gas purification and separation processes and also has wide industrial application. Chemical sensors of the sorption type are widely used for gas analysis; their behavior is determined by the sensing characteristics of the sensitive sorption layer. The forming conditions and characteristics of chemical gas sensors based on nanostructured modified silica films activated by different admixtures have been studied. Molybdenum-containing polyoxometalates of the 18-series were incorporated into the silica films as additives. The films were formed by hydrolytic polycondensation from tetraethyl orthosilicate solutions. The method's advantage is the possibility of introducing active additives directly into the initial solution, and it enables sensitive thin films with a high specific surface area to be obtained at room temperature. Their particular properties make polyoxometalates attractive as active additives for forming gas-sensitive films: as catalysts of different redox processes, they can either accelerate the reaction of the matrix with the analyzed gas or interact with it themselves, which results in changes in the matrix's electrical properties. Polyoxometalate-based films were prepared from solutions based on tetraethyl orthosilicate, with the polyoxometalates incorporated directly into the initial solutions, and were deposited by a drop-casting method onto test structures manufactured by planar microelectronic technology with a pair of interdigitated metal electrodes formed on their surface. The sensor's active area was 4.0 x 4.0 mm, and the electrode gap was equal to 0.08 mm. The surface morphology of the layers was studied with a Solver P47 scanning probe microscope (NT-MDT, Russia), and the infrared spectra were investigated with a Bruker EQUINOX 55 (Germany). The conditions of film formation were varied during the tests, and the electrical parameters of the sensors were measured electronically in real-time mode. The films had a highly developed surface (about 450 m²/g) with nanoscale pores, and their thickness was 0.2-0.3 µm. The study shows that environmental conditions markedly affect the sensor characteristics, which can be improved by choosing the right forming and processing procedure. The addition of a polyoxometalate to the silica film stabilized the film mass and markedly changed its electrophysical characteristics. The presence of Mn3P2Mo18O62 in the silica film provided good sensitivity and selectivity to ethanolamine. The sensitivity maximum was observed at a doping-additive weight content of 30-50% in the matrix. As the ethanolamine concentration changed from 0 to 100 ppm, the films' conductivity increased 10- to 12-fold. The increased sensitivity is attributed to a complexation reaction of the tested substance with the cationic part of the polyoxometalate, which triggers an intramolecular redox reaction that sharply changes the electrophysical properties of the polyoxometalate. This process is reversible and takes place at room temperature.
Keywords: ethanolamine, gas analysis, polyoxometalate, silica film
Procedia PDF Downloads 209
202 Transparency of Algorithmic Decision-Making: Limits Posed by Intellectual Property Rights
Authors: Olga Kokoulina
Abstract:
Today, algorithms are assuming a leading role in various areas of decision-making. Prompted by a promise to provide increased economic efficiency and to fuel solutions for pressing societal challenges, algorithmic decision-making is often celebrated as an impartial and constructive substitute for human adjudication. But in the face of this implied objectivity and efficiency, the application of algorithms is also marred by mounting concerns about embedded biases, discrimination, and exclusion. In Europe, vigorous debates on the risks and adverse implications of algorithmic decision-making largely revolve around the potential of data protection laws to tackle some of the related issues. For example, one of the often-cited venues for mitigating the impact of potentially unfair decision-making practices is the so-called 'right to explanation'. In essence, this right is derived from the provisions of the General Data Protection Regulation ('GDPR') ensuring the right of data subjects to access, and mandating the obligation of data controllers to provide, the relevant information about the existence of automated decision-making and meaningful information about the logic involved. Taking the corresponding rights and obligations in the context of the GDPR's specific provision on automated decision-making, the debates focus mainly on the efficacy and the exact scope of the 'right to explanation'. The underlying logic of the argued remedy lies in a transparency imperative: allowing data subjects to acquire as much knowledge as possible about the decision-making process means empowering individuals to take control of their data and take action. In other words, forewarned is forearmed. The related discussions and debates are ongoing, comprehensive, and often heated. However, they are also frequently misguided and isolated: embracing data protection law as the ultimate and sole lens is often not sufficient. Mandating the disclosure of the technical specifications of employed algorithms in the name of transparency for, and empowerment of, data subjects potentially encroaches on the interests and rights of IPR holders, i.e., the business entities behind the algorithms. The study aims at pushing the boundaries of the transparency debate beyond the data protection regime. By systematically analysing legal requirements and current judicial practice, it assesses the limits that intellectual property law, namely copyright and trade secrets, poses to the transparency requirement and the right to access. It is asserted that trade secrets, in particular, present an often insurmountable obstacle to realising the potential of the transparency requirement. In reaching that conclusion, the study explores the limits of the protection afforded by the European Trade Secrets Directive and contrasts them with the scope of the respective rights and obligations related to data access and portability enshrined in the GDPR. As shown, the far-reaching scope of protection under trade secrecy is evidenced both through the assessment of its subject matter and through the exceptions from such protection. As a way forward, the study scrutinises several possible legislative solutions, such as a flexible interpretation of the public interest exception in trade secrets law and the introduction of a strict liability regime in cases of non-transparent decision-making.
Keywords: algorithms, public interest, trade secrets, transparency
Procedia PDF Downloads 124
201 Magnetic Carriers of Organic Selenium (IV) Compounds: Physicochemical Properties and Possible Applications in Anticancer Therapy
Authors: E. Mosiniewicz-Szablewska, P. Suchocki, P. C. Morais
Abstract:
Despite significant progress in cancer treatment, there is a need to search for new therapeutic methods in order to minimize side effects. Chemotherapy, the main current method of treating cancer, is non-selective and has a number of limitations; toxicity to healthy cells is undoubtedly the biggest problem limiting the use of many anticancer drugs. The problem of how to kill cancer without harming the patient may be solved by using organic selenium (IV) compounds. Organic selenium (IV) compounds are a new class of materials showing strong anticancer activity. They are the first organic compounds containing selenium at the +4 oxidation state, and they overcome multidrug resistance in all tumor cell lines tested so far. These materials are capable of selectively killing cancer cells without damaging healthy ones. They are obtained by incorporating selenous acid (H2SeO3) into molecules of the fatty acids of sunflower oil and are therefore inexpensive to manufacture. Attaching these compounds to magnetic carriers enables their precise delivery directly to the tumor area and the simultaneous application of magnetic hyperthermia, thus creating a huge opportunity to eliminate the tumor without side effects. Poly(lactic-co-glycolic) acid (PLGA) nanocapsules loaded with maghemite (γ-Fe2O3) nanoparticles and organic selenium (IV) compounds were successfully prepared by the nanoprecipitation method. The in vitro antitumor activity of the nanocapsules was evidenced using murine melanoma (B16-F10), oral squamous cell carcinoma (OSCC), and murine (4T1) and human (MCF-7) breast cancer lines. Further exposure of these cells to an alternating magnetic field increased the antitumor effect of the nanocapsules. Moreover, the nanocapsules exerted their antitumor effect while not affecting normal cells. The magnetic properties of the nanocapsules were investigated by means of dc magnetization, ac susceptibility, and electron spin resonance (ESR) measurements. The nanocapsules presented typical superparamagnetic behavior around room temperature, manifested by the splitting between the zero-field-cooled/field-cooled (ZFC/FC) magnetization curves and the absence of hysteresis in the field-dependent magnetization curve above the blocking temperature. Moreover, the blocking temperature decreased with increasing applied magnetic field. The superparamagnetic character of the nanocapsules was also confirmed by the occurrence of a maximum in the temperature dependences of both the real χ′(T) and imaginary χ″(T) components of the ac magnetic susceptibility, which shifted towards higher temperatures with increasing frequency. Additionally, upon decreasing the temperature, the ESR signal shifted to lower fields and gradually broadened, closely following the predictions for the ESR of superparamagnetic nanoparticles. The observed superparamagnetic properties enable simple manipulation of the nanocapsules by means of a magnetic field gradient after introduction into the bloodstream, which is a necessary condition for their use as magnetic drug carriers. The observed anticancer and superparamagnetic properties show that magnetic nanocapsules loaded with organic selenium (IV) compounds should be considered an effective material system for magnetic drug delivery and magnetohyperthermia induction in antitumor therapy.
Keywords: cancer treatment, magnetic drug delivery system, nanomaterials, nanotechnology
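The superparamagnetic response described above is commonly modeled with the Langevin function, M/Ms = coth(x) - 1/x with x = μB/(kB·T). The following minimal sketch evaluates this standard textbook relation for an ensemble of non-interacting nanoparticles; the particle moment is an assumed value, and this is an illustration rather than the authors' analysis.

import numpy as np

kB = 1.380649e-23   # Boltzmann constant (J/K)
mu = 1.0e-19        # magnetic moment per nanoparticle (J/T), assumed
T = 300.0           # temperature (K)

def langevin(x):
    """Langevin function with a series limit near zero to avoid 1/0."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    small = np.abs(x) < 1e-6
    out[small] = x[small] / 3.0
    xs = x[~small]
    out[~small] = 1.0 / np.tanh(xs) - 1.0 / xs
    return out

B = np.linspace(-2.0, 2.0, 11)          # applied field (T)
for b, m in zip(B, langevin(mu * B / (kB * T))):
    print(f"B = {b:+.1f} T  ->  M/Ms = {m:+.3f}")

The resulting curve saturates smoothly and shows no hysteresis, which is the signature observed in the field-dependent magnetization above the blocking temperature.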
Procedia PDF Downloads 203
200 Establishing Feedback Partnerships in Higher Education: A Discussion of Conceptual Framework and Implementation Strategies
Authors: Jessica To
Abstract:
Feedback is one of the most powerful levers for enhancing students' performance. However, some students are under-engaged with feedback because they lack responsibility for feedback uptake. To resolve this conundrum, recent literature proposes feedback partnerships, in which students and teachers share the power and the responsibility to co-construct feedback: students express their feedback needs to teachers, and teachers respond to individuals' needs in return. Though this approach can increase students' feedback ownership, its application is lagging because the field lacks conceptual clarity and an implementation guide. This presentation aims to discuss the conceptual framework of feedback partnerships and strategies for feedback co-construction; it identifies the components of feedback partnerships and the strategies that can facilitate co-construction. A systematic literature review was conducted to answer these questions. The literature search was performed using ERIC, PsycINFO, and Google Scholar with the keywords 'assessment partnership', 'student as partner', and 'feedback engagement'. No time limit was set for the search. The inclusion criteria encompassed (i) student-teacher partnerships in feedback, (ii) feedback engagement in higher education, (iii) peer-reviewed publications, and (iv) English as the language of publication; publications that did not address conceptual understanding or implementation strategies were excluded. Finally, 65 publications were identified and analysed using thematic analysis. In this procedure, the texts relating to the questions were first extracted; codes were then assigned to summarise the ideas of the texts. Upon subsuming similar codes into themes, four themes emerged: students' responsibilities, teachers' responsibilities, conditions for partnership development, and strategies. Their interrelationships were examined iteratively for framework development. Establishing feedback partnerships requires different responsibilities of students and teachers during feedback co-construction. Students need to self-evaluate their performance against task criteria, identify inadequacies, and communicate their needs to teachers. During feedback exchanges, they interpret teachers' comments, generate self-feedback through reflection, and co-develop improvement plans with teachers. Teachers have to increase students' understanding of criteria and their evaluation skills, and create opportunities for students to express feedback needs. In feedback dialogue, teachers respond to students' needs and advise on the improvement plans. Feedback partnerships are best grounded in an environment of trust and psychological safety. Four strategies can facilitate feedback co-construction. First, students' understanding of task criteria can be increased by rubric explanation and exemplar analysis. Second, students can sharpen their evaluation skills by participating in peer review and receiving teacher feedback on the quality of their peer feedback. Third, the provision of self-evaluation checklists and prompts, and teacher modelling of the self-assessment process, can aid students in articulating feedback needs. Fourth, trust can be fostered when teachers explain the benefits of feedback co-construction, show empathy, and provide personalised comments in dialogue.
Some of these strategies have been applied in interactive cover sheets, in which students perform a self-evaluation and make feedback requests on a cover sheet during assignment submission, followed by the teacher's response to the individual requests. The significance of this presentation lies in unpacking the conceptual framework of feedback partnerships and outlining feedback co-construction strategies. With a solid foundation in theory and practice, researchers and teachers can better enhance students' engagement with feedback.
Keywords: conceptual framework, feedback co-construction, feedback partnerships, implementation strategies
Procedia PDF Downloads 90
199 Reduced General Dispersion Model in Cylindrical Coordinates and Isotope Transient Kinetic Analysis in Laminar Flow
Authors: Masood Otarod, Ronald M. Supkowski
Abstract:
This abstract discusses a method that reduces the general dispersion model in cylindrical coordinates to a second-order linear ordinary differential equation with constant coefficients, so that it can be used for kinetic studies in packed-bed tubular catalytic reactors over a broad range of Reynolds numbers. The model was tested by 13CO isotope transient tracing of CO adsorption in the Boudouard reaction in a differential reactor at an average Reynolds number of 0.2 over a Pd-Al2O3 catalyst. Detailed experimental results have provided evidence for the validity of the theoretical framing of the model, and the estimated parameters are consistent with the literature. The solution of the general dispersion model requires knowledge of the radial distribution of the axial velocity, which is not always known. Hence, up until now, the implementation of the dispersion model has been largely restricted to the plug-flow regime. But ideal plug flow is impossible to achieve, and flow regimes that merely approximate plug flow leave much room for debate as to the validity of the results. The reduction of the general dispersion model results from the application of a factorization theorem. The factorization theorem is derived from the observation that a cross-section of a catalytic bed consists of a solid phase, across which the reaction takes place, and a void or porous phase, across which no significant measure of reaction occurs. The disparity in flow and the heterogeneity of the catalytic bed cause the concentrations of reacting compounds to fluctuate radially. These variabilities imply the existence of radial positions at which the radial gradient of concentration is zero. Succinctly, the factorization theorem states that a concentration function of the axial and radial coordinates in a catalytic bed is factorable as the product of the mean radial cup-mixing function and a contingent dimensionless function. The concentrations of adsorbed compounds are also factorable, since they are piecewise continuous functions and suffer the same variability, but in the reverse order of the concentrations of the mobile-phase compounds. Factorability is a property of packed beds that transforms the general dispersion model into an equation in terms of the measurable mean radial cup-mixing concentration of the mobile-phase compounds and the mean cross-sectional concentration of the adsorbed species. The reduced model does not require knowledge of the radial distribution of the axial velocity. Instead, it is characterized by new transport parameters, denoted Ωc, Ωa, and Ωr, which are respectively termed the convection coefficient cofactor, the axial dispersion coefficient cofactor, and the radial dispersion coefficient cofactor. These cofactors adjust the dispersion equation to compensate for the unavailability of the radial distribution of the axial velocity. Together with the rest of the kinetic parameters, they can be determined from experimental data via an optimization procedure. Our data showed that the estimated parameters Ωc, Ωa, and Ωr are monotonically correlated with the Reynolds number, as expected from the theoretical construct of the model. Computer-generated simulations of the methanation reaction on nickel provide additional support for the utility of the newly conceptualized dispersion model.
Keywords: factorization, general dispersion model, isotope transient kinetic, partial differential equations
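To fix ideas, here is a schematic rendering of the reduction in assumed notation (an illustration, not the authors' exact formulation). Writing the mobile-phase concentration as the factored product

C(z, r) = \bar{C}(z)\,\varphi(z, r),

where \bar{C} is the mean radial cup-mixing concentration, a steady-state dispersion model with first-order consumption collapses to a constant-coefficient second-order ODE of the type the abstract describes,

\Omega_a D \frac{\mathrm{d}^2 \bar{C}}{\mathrm{d}z^2} - \Omega_c u \frac{\mathrm{d}\bar{C}}{\mathrm{d}z} - k\,\bar{C} = 0,

with D the axial dispersion coefficient, u the mean axial velocity, k a rate constant, and \Omega_a, \Omega_c the cofactors compensating for the unknown radial velocity profile.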
Procedia PDF Downloads 268
198 Experimental-Numerical Inverse Approaches in the Characterization and Damage Detection of Soft Viscoelastic Layers from Vibration Test Data
Authors: Alaa Fezai, Anuj Sharma, Wolfgang Mueller-Hirsch, André Zimmermann
Abstract:
Viscoelastic materials have been widely used in the automotive industry over the last few decades with different functionalities. Besides their main application as a simple and efficient surface damping treatment, they may ensure optimal operating conditions for on-board electronics as thermal interface or sealing layers. The dynamic behavior of viscoelastic materials generally depends on many environmental factors, the most important being temperature and strain rate or frequency. Prior to the reliability analysis of systems including viscoelastic layers, it is therefore crucial to accurately predict the dynamic and lifetime behavior of these materials. This includes the identification of the dynamic material parameters under critical temperature and frequency conditions, along with a precise damage localization and identification methodology. The goal of this work is twofold. The first part applies an inverse viscoelastic material-characterization approach over a wide frequency range and under different temperature conditions. To this end, dynamic measurements are carried out on a single-lap-joint specimen using an electrodynamic shaker and an environmental chamber. The specimen consists of aluminum beams assembled to adapter plates through a viscoelastic adhesive layer. The experimental setup is reproduced in finite element (FE) simulations, and frequency response functions (FRFs) are calculated. The parameters of both the generalized Maxwell model and the fractional derivative model are identified through an optimization algorithm that minimizes the difference between the simulated and the measured FRFs. The second goal of the current work is to guarantee on-line detection of damage, i.e., delamination in the viscoelastic bond of the described specimen, during frequency-monitored end-of-life testing. For this purpose, an inverse technique is presented that determines the damage location and size based on the modal frequency shift and on the change of the mode shapes. This includes a preliminary FE model-based study correlating the delamination location and size to the change in the modal parameters, followed by an experimental validation achieved through dynamic measurements of specimens with different pre-generated crack scenarios and comparison with the virgin specimen. The main advantage of the inverse characterization approach presented in the first part lies in its ability to adequately identify the damping and stiffness behavior of soft viscoelastic materials over a wide frequency range and under critical temperature conditions. Classic forward characterization techniques such as dynamic mechanical analysis are usually subject to limitations under critical temperature and frequency conditions due to the behavior of soft viscoelastic materials. Furthermore, the inverse damage detection described in the second part guarantees an accurate prediction of not only the damage size but also its location using a simple test setup, and it therefore underlines the significance of inverse numerical-experimental approaches in predicting the dynamic behavior of soft bonding layers applied in automotive electronics.
Keywords: damage detection, dynamic characterization, inverse approaches, vibration testing, viscoelastic layers
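For orientation, the two material models named above have standard textbook forms (shown here in assumed notation as an illustration, not the authors' parametrization). The generalized Maxwell model represents the relaxation modulus as a Prony series, with the corresponding complex modulus

G(t) = G_\infty + \sum_{i=1}^{N} G_i\, e^{-t/\tau_i},
\qquad
G^{*}(\omega) = G_\infty + \sum_{i=1}^{N} G_i\, \frac{i\omega\tau_i}{1 + i\omega\tau_i},

while a common fractional derivative (fractional Zener) form of the complex modulus is

G^{*}(\omega) = \frac{G_0 + G_\infty\, (i\omega\tau)^{\alpha}}{1 + (i\omega\tau)^{\alpha}}, \qquad 0 < \alpha < 1.

In either case, the model parameters (G_\infty, G_i, \tau_i or G_0, G_\infty, \tau, \alpha) are the quantities the optimization adjusts so that the simulated FRFs match the measured ones.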
Procedia PDF Downloads 204
197 Artificial Intelligence for Traffic Signal Control and Data Collection
Authors: Reggie Chandra
Abstract:
Traffic accidents and traffic signal optimization are correlated. However, 70-90% of the traffic signals across the USA are not synchronized, the reason being insufficient resources to create and implement timing plans. In this work, we discuss the use of a breakthrough Artificial Intelligence (AI) technology to optimize traffic flow and to collect accurate traffic data 24/7/365 using a vehicle detection system. We discuss recent advances in AI technology, how AI works in vehicle, pedestrian, and bike data collection and in creating timing plans, and what the best workflow for this is. This paper also showcases how Artificial Intelligence makes signal timing affordable. We introduce a technology that uses Convolutional Neural Networks (CNNs) and deep learning algorithms to detect, collect data, develop timing plans, and deploy them in the field. Convolutional Neural Networks are a class of deep learning networks inspired by the biological processes of the visual cortex. A neural net is modeled after the human brain: it consists of millions of densely connected processing nodes, and it is a form of machine learning in which the network learns to recognize vehicles through training, which is called deep learning. The well-trained algorithm overcomes most of the issues faced by other detection methods and provides nearly 100% traffic data accuracy. Through this continuous learning-based method, we can constantly update traffic patterns, generate an unlimited number of timing plans, and thus improve vehicle flow. Convolutional Neural Networks not only outperform other detection algorithms but, in cases such as classifying objects into fine-grained categories, also outperform humans. Safety is of primary importance to traffic professionals, but they often lack the studies or data to support their decisions; currently, one-third of transportation agencies do not collect pedestrian and bike data. We discuss how the use of Artificial Intelligence for data collection can help reduce pedestrian fatalities and enhance the safety of all vulnerable road users. Moreover, it provides traffic engineers with tools that allow them to unleash their potential, instead of dealing with constant complaints, snapshots of limited handpicked data, and multiple systems requiring additional adaptation work. The methodologies used and proposed in this research include a camera model identification method based on deep Convolutional Neural Networks. The proposed application was evaluated on our data sets, acquired under a variety of daily real-world road conditions, and compared with the performance of commonly used methods that require data to be collected by counting, evaluated, adapted, run through well-established algorithms, and then deployed to the field. This work explores themes such as how technologies powered by Artificial Intelligence can benefit communities and how to translate the complex and often overwhelming benefits into a language accessible to elected officials, community leaders, and the public. Exploring such topics empowers citizens with insider knowledge about the potential of better traffic technology to save lives and improve communities. The synergies that Artificial Intelligence brings to traffic signal control and data collection are unsurpassed.
Keywords: artificial intelligence, convolutional neural networks, data collection, signal control, traffic signal
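As a minimal illustration of the kind of CNN classifier described above, the following PyTorch sketch defines a tiny network that scores camera crops into assumed classes (vehicle, pedestrian, bike, background). It is a toy stand-in for demonstration, not the system in the abstract; all layer sizes, input dimensions, and class counts are assumptions.

import torch
import torch.nn as nn

class TinyDetectorBackbone(nn.Module):
    """Toy CNN for classifying road-scene crops (assumed 4 classes)."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = TinyDetectorBackbone()
frames = torch.randn(8, 3, 64, 64)               # a batch of camera crops
logits = model(frames)                           # per-class scores
print(logits.argmax(dim=1))                      # predicted class indices

A production detector would use a much deeper backbone trained on labeled road footage; the point here is only the structure, convolutional feature extraction followed by classification, that the abstract attributes to CNN-based detection.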
Procedia PDF Downloads 168