Search results for: conjunction
53 Solid Particles Transport and Deposition Prediction in a Turbulent Impinging Jet Using the Lattice Boltzmann Method and a Probabilistic Model on GPU
Authors: Ali Abdul Kadhim, Fue Lien
Abstract:
Solid particle distribution on an impingement surface has been simulated utilizing a graphical processing unit (GPU). An in-house computational fluid dynamics (CFD) code has been developed to investigate a 3D turbulent impinging jet using the lattice Boltzmann method (LBM) in conjunction with large eddy simulation (LES) and the multiple relaxation time (MRT) models. This paper proposes an improvement to the LBM-cellular automata (LBM-CA) probabilistic method. In the current model, the fluid flow utilizes the D3Q19 lattice, while the particle model employs the D3Q27 lattice. The particle numbers are defined at the same regular LBM nodes, and the transport of particles from one node to its neighboring nodes is determined in accordance with the particle bulk density and velocity by considering all the external forces. Previous models distribute particles at each time step without considering the local velocity and the number of particles at each node. The present model overcomes these deficiencies and can therefore better capture the dynamic interaction between particles and the surrounding turbulent flow field. Despite the increasing popularity of the LBM-MRT-CA model in simulating complex multiphase fluid flows, this approach is still expensive in terms of the memory size and computational time required to perform 3D simulations. To improve the throughput of each simulation, a single GeForce GTX TITAN X GPU is used in the present work. The CUDA parallel programming platform and the CuRAND library are utilized to form an efficient LBM-CA algorithm. The methodology was first validated against a benchmark test case involving particle deposition on a square cylinder confined in a duct. The flow was unsteady and laminar at Re = 200 (Re is the Reynolds number), and simulations were conducted for different Stokes numbers. The present LBM solutions agree well with other results available in the open literature. The GPU code was then used to simulate particle transport and deposition in a turbulent impinging jet at Re = 10,000. The simulations were conducted for L/D = 2, 4, and 6, where L is the nozzle-to-surface distance and D is the jet diameter. The effect of changing the Stokes number on the particle deposition profile was studied at different L/D ratios. For comparative studies, another in-house serial CPU code was also developed, coupling LBM with the classical Lagrangian particle dispersion model. Agreement between the results obtained with the LBM-CA and LBM-Lagrangian models and the experimental data is generally good. The present GPU approach achieves a speedup ratio of about 350 against the serial code running on a single CPU.
Keywords: CUDA, GPU parallel programming, LES, lattice Boltzmann method, MRT, multi-phase flow, probabilistic model
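The probabilistic transport step the abstract describes can be illustrated in miniature. The sketch below, in Python/NumPy rather than the authors' CUDA implementation, redistributes integer particle counts to neighboring lattice nodes with probabilities weighted by the projection of the local velocity onto the lattice directions; the 2D D2Q9 lattice (in place of the paper's D3Q27), the rest weight, and all parameter values are illustrative assumptions, not the authors' method.

```python
# Minimal 2D illustration (D2Q9 in place of the paper's D3Q27) of a
# probabilistic cellular-automata (CA) particle transport step.
# All names and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# D2Q9 lattice directions: rest + 4 axial + 4 diagonal
C = np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1],
              [1, 1], [-1, -1], [1, -1], [-1, 1]])

def ca_transport_step(particles, velocity):
    """Redistribute integer particle counts to neighboring nodes.

    particles: (nx, ny) integer array of particle numbers per node.
    velocity:  (nx, ny, 2) local fluid velocity (e.g., from an LBM solver),
               assumed to already include external-force contributions.
    """
    nx, ny = particles.shape
    new_particles = np.zeros_like(particles)
    for x in range(nx):
        for y in range(ny):
            n = particles[x, y]
            if n == 0:
                continue
            # Move probability per direction ~ positive projection of the
            # local velocity onto that lattice direction; the rest state
            # gets a small base weight so some particles can stay put.
            w = np.maximum(C @ velocity[x, y], 0.0)
            w[0] += 0.1
            p = w / w.sum()
            # Sample how many of the n particles go to each neighbor.
            moved = rng.multinomial(n, p)
            for k, (dx, dy) in enumerate(C):
                tx, ty = (x + dx) % nx, (y + dy) % ny  # periodic walls
                new_particles[tx, ty] += moved[k]
    return new_particles

# Toy usage: a uniform rightward flow sweeps particles to the right.
parts = np.zeros((16, 16), dtype=int)
parts[8, 8] = 1000
vel = np.zeros((16, 16, 2))
vel[..., 0] = 0.05
for _ in range(10):
    parts = ca_transport_step(parts, vel)
print(parts.sum())  # particle number is conserved: 1000
```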
Procedia PDF Downloads 207
52 Addressing Supply Chain Data Risk with Data Security Assurance
Authors: Anna Fowler
Abstract:
When considering assets that may need protection, the mind begins to contemplate homes, cars, and investment funds. In most cases, the protection of those assets can be covered through security systems and insurance. Data is not the first thought that comes to mind as needing protection, even though data is at the core of most supply chain operations. It includes trade secrets, the management of personal identifiable information (PII), and consumer data that can be used to enhance the overall experience. Data is considered a critical element of success for supply chains and should be one of the most critical areas to protect. In the supply chain industry, there are two major misconceptions about protecting data: (i) we do not manage or store confidential/personally identifiable information (PII); (ii) reliance on third-party vendor security. These misconceptions can significantly derail organizational efforts to adequately protect data across environments. The first misconception, "we do not manage or store confidential/personally identifiable information (PII)," is dangerous as it implies the organization does not have proper data literacy. Enterprise employees will zero in on the aspect of PII while neglecting trade secret theft and the complete breakdown of information sharing. Building on the first misconception, the second forges the ideology that reliance on third-party vendor security will absolve the company from security risk. Instead, third-party risk has grown over the last two years and is one of the major causes of data security breaches. It is important to understand that a holistic approach should be taken to protecting data, one that amounts to more than purchasing a Data Loss Prevention (DLP) tool; a tool is not a solution. To protect supply chain data, start by providing data literacy training to all employees and negotiating the security component of contracts with vendors to highlight data literacy training for the individuals and teams that may access company data. It is also important to understand the origin of the data and its movement, including risk identification, and to ensure processes effectively incorporate data security principles. Evaluate and select DLP solutions to address specific concerns and use cases in conjunction with data visibility. These approaches are part of a broader solutions framework called Data Security Assurance (DSA). The DSA framework looks at all of the processes across the supply chain, including their corresponding architecture and workflows, employee data literacy, governance and controls, integration between third- and fourth-party vendors, DLP as a solution concept, and policies related to data residency. Within cloud environments, this framework is crucial for the supply chain industry to avoid regulatory implications and third/fourth-party risk.
Keywords: security by design, data security architecture, cybersecurity framework, data security assurance
Procedia PDF Downloads 88
51 An Investigation on MgAl₂O₄ Based Mould System in Investment Casting Titanium Alloy
Authors: Chen Yuan, Nick Green, Stuart Blackburn
Abstract:
The investment casting process offers great freedom of design combined with the economic advantage of near net shape manufacturing. It is widely used for the production of high-value precision cast parts, particularly in the aerospace sector. Various combinations of materials have been used to produce the ceramic moulds, but most investment foundries use a silica-based binder system in conjunction with fused silica, zircon, and alumino-silicate refractories as both filler and coarse stucco materials. However, in the context of advancing alloy technologies, silica-based systems are struggling to keep pace, especially when net-shape casting titanium alloys. Studies have shown that the casting of titanium-based alloys presents considerable problems, including extensive interactions between the metal and the refractory; the majority of the metal-mould interaction is due to the reduction of silica, present as binder and filler phases, by titanium in the molten state. Cleaner, more refractory systems are being devised to accommodate these changes. Although yttria has excellent chemical inertness to titanium alloy, it is not very practical in a production environment, combining high material cost, short slurry life, and poor sintering properties. A cost-effective solution to these issues is needed. With limited options for using pure oxides, in this work a silica-free magnesia spinel, MgAl₂O₄, was used as the primary coat filler and alumina as the binder material to produce the facecoat in the investment casting mould. A comparison system was also studied, with a fraction of the rare earth oxide Y₂O₃ added to the filler to increase its inertness. The stability of the MgAl₂O₄/Al₂O₃ and MgAl₂O₄/Y₂O₃/Al₂O₃ slurries was assessed by tests including pH, viscosity, zeta-potential, and plate weight measurements, and mould properties such as friability were also measured. The interaction between the face coat and the titanium alloy was studied by both a flash re-melting technique and a centrifugal investment casting method. The interaction products between metal and mould were characterized using X-ray diffraction (XRD), scanning electron microscopy (SEM), and energy dispersive X-ray spectroscopy (EDS). The depth of the oxygen-hardened layer was evaluated by microhardness measurement. Results reveal that introducing a fraction of Y₂O₃ into the magnesia spinel can significantly increase the slurry life and reduce the thickness of the hardened layer during centrifugal casting.
Keywords: titanium alloy, mould, MgAl₂O₄, Y₂O₃, interaction, investment casting
Procedia PDF Downloads 113
50 Neuropsychiatric Outcomes of Intensive Music Therapy in Stroke Rehabilitation: A Preliminary Investigation
Authors: Honey Bryant, Elvina Chu
Abstract:
Stroke is the leading cause of disability in adults in Canada and is directly related to depression, anxiety, and sleep disorders, with an estimated annual health care cost of $50 billion. Strokes impact not only the individual but society as a whole. Current stroke rehabilitation does not include music therapy, although it has shown success in clinical stroke rehabilitation research. This study examines the use of neurologic music therapy (NMT) in conjunction with stroke rehabilitation to improve sleep quality, reduce stress levels, and promote neurogenesis. Existing research on NMT in stroke is limited, which means any conclusive information gathered during this study will be significant. The novel hypotheses are: (a) stroke patients will become less depressed and less anxious, with improved sleep, following NMT; (b) NMT will reduce stress levels and promote neurogenesis in stroke patients admitted for rehabilitation; (c) the beneficial effects of NMT will be sustained at least short-term following treatment. Participants were recruited from the in-patient stroke rehabilitation program at Providence Care Hospital in Kingston, Ontario, Canada. All participants maintained their stroke rehabilitation treatment as normal. The study was split into two groups, the first receiving passive music listening (PML) and the second neurologic music therapy (NMT). Each group underwent 10 sessions of intensive music therapy lasting 45 minutes on 10 consecutive days, excluding weekends. Psychiatric assessments, the Epworth Sleepiness Scale (ESS), the Hospital Anxiety and Depression Scale (HADS), and the Music Engagement Questionnaire (MusEQ) were completed, followed by a general feedback interview. Physiological markers of stress were measured through blood pressure measurements and heart rate variability. Serum collections assessed neurogenesis via brain-derived neurotrophic factor (BDNF) and stress via cortisol levels. As this study is still ongoing, a formal analysis of the data has not been fully completed, although trends are following our hypotheses: a decrease in sleepiness and anxiety is seen in the first (PML) cohort. Feedback interviews have indicated most participants subjectively felt more relaxed and thought PML was useful in their recovery. If the hypotheses are supported, larger external funding will be sought to allow a greater investigation of the use of NMT in stroke rehabilitation. NMT is not covered under the Ontario Health Insurance Plan (OHIP), so there is limited scientific data surrounding its use as a clinical tool. This research will provide detailed findings on the treatment of neuropsychiatric aspects of stroke. Concurrently, a passive music listening study is being designed to further review the use of PML in rehabilitation as well.
Keywords: music therapy, psychotherapy, neurologic music therapy, passive music listening, neuropsychiatry, counselling, behavioural, stroke, stroke rehabilitation, rehabilitation, neuroscience
Procedia PDF Downloads 113
49 Sociology Perspective on Emotional Maltreatment: Retrospective Case Study in a Japanese Elementary School
Authors: Nozomi Fujisaka
Abstract:
This sociological case study analyzes a sequence of student maltreatment in an elementary school in Japan, based on narratives from former students. Among the various forms of student maltreatment, emotional maltreatment has received less attention. One reason for this is that emotional maltreatment is often considered part of education and is difficult to capture in surveys. To discuss the challenge of recognizing emotional maltreatment, it is necessary to consider the social background in which student maltreatment occurs. Therefore, from the perspective of the sociology of education, this study aims to clarify the process through which emotional maltreatment came to be accepted by students within a Japanese classroom. The focus of this study is a series of educational interactions between a homeroom teacher and 11- or 12-year-old students at a small public elementary school approximately 10 years ago. The research employs retrospective narrative data collected through interviews and autoethnography. The semi-structured interviews, lasting one to three hours each, were conducted with 11 young people who were enrolled in the same class as the researcher during their time in elementary school. Autoethnography, as a critical research method, contributes to existing theories and studies by providing a critical representation of the researcher's own experiences, and it enables researchers to collect detailed data that is often difficult to verbalize in interviews. These research methods are well suited to this study, which aims to shift the focus from teachers' educational intentions to students' perspectives and gain a deeper understanding of student maltreatment. The results imply a pattern of emotional maltreatment that is challenging to differentiate from education. In this case, the teacher displayed calm and kind behavior toward students after a threat and an explosion of anger. Former students frequently mentioned this behavior and perceived the emotional maltreatment as part of education. It was not uncommon for former students to offer positive evaluations of the teacher despite experiencing emotional distress. These findings are analyzed and discussed in conjunction with deschooling theory and the cycle of violence theory. Deschooling theory provides a sociological explanation for how emotional maltreatment can be overlooked in society. The cycle of violence theory, originally developed within the context of domestic violence, explains how violence between romantic partners can be tolerated due to prevailing social norms. Analyzing the case in association with these two theories highlights the characteristics of teachers' behaviors that rationalize maltreatment as education and hinder students from escaping emotional maltreatment. This study deepens our understanding of the causes of student maltreatment and provides a new perspective for future qualitative and quantitative research. Furthermore, since this research is based on the sociology of education, it has the potential to expand research in the fields of pedagogy and sociology, in addition to psychology and social welfare.
Keywords: emotional maltreatment, education, student maltreatment, Japan
Procedia PDF Downloads 84
48 Assessment of the Environmental Compliance at the Jurassic Production Facilities towards HSE MS Procedures and Kuwait Environment Public Authority Regulations
Authors: Fatemah Al-Baroud, Sudharani Shreenivas Kshatriya
Abstract:
Kuwait Oil Company (KOC) is one of the companies producing oil and gas in Kuwait. The oil and gas industry is truly global, with operations conducted in every corner of the globe, and the global community relies heavily on oil and gas supplies. KOC has made many commitments to protect the environment affected by its operations and operational releases. As per KOC's strategy, the substantial increase in production activities will bring many challenges in managing the various environmental hazards and stresses in the company. In order to handle those environmental challenges, effective implementation of the health, safety, and environmental management system (HSEMS) is significant. By implementing the HSEMS properly, the environmental aspects of activities, products, and services are identified, evaluated, and controlled in order to (i) comply with local regulatory and other obligatory requirements; (ii) comply with company policy and business requirements; and (iii) reduce adverse environmental impact, including adverse impact on the company's reputation. Assessments of the Jurassic Production Facilities are being carried out as part of the KOC HSEMS procedural requirement and to monitor the implementation of the relevant HSEMS procedures in the facilities. The assessments have been done by conducting a series of theme audits using KOC's audit protocol at the JPFs. The objectives of the audits are to evaluate the facilities' compliance with the implementation of environmental procedures and the status of KEPA requirements at all JPFs. The facilities covered during the theme audit program are: (1) Jurassic Production Facility (JPF) – Sabriya; (2) Jurassic Production Facility (JPF) – East Raudhatian; (3) Jurassic Production Facility (JPF) – West Raudhatian; (4) Early Production Facility (EPF 50). The auditing process focuses comprehensively on the application of KOC HSE MS procedures at the JPFs and their ability to reduce the resultant negative impacts on the environment from the operations. A number of findings and observations were noted and highlighted in the audit reports and sent to all concerned controlling teams. The results of these audits indicated that the facilities were, in general, in line with KOC HSE procedures, and there was commitment to documenting all HSE issues in the right records and plans. Further, several control measures that minimized or reduced the environmental impact were implemented at the JPFs; for example, sulphur recovery units (SRUs) were installed. A follow-up monitoring audit will be carried out after a sufficient period of time, in conjunction with the controlling teams, in order to verify the status of the recommendations and evaluate the contractors' performance on the actions required to preserve the environment.
Keywords: assessment of the environmental compliance, environmental and social impact assessment, kuwait environment public authority regulations, health, safety and environment management procedures, jurassic production facilities
Procedia PDF Downloads 184
47 The mHealth Paradigm for the Chronic Care Management of Obesity: New Opportunities in Clinical Psychology and Medicine
Authors: Gianluca Castelnuovo, Gian Mauro Manzoni, Giada Pietrabissa, Stefania Corti, Emanuele Giusti, Roberto Cattivelli, Enrico Molinari, Susan Simpson
Abstract:
Obesity is currently an important public health problem of epidemic proportions (globesity). Moreover, binge eating disorder (BED) is typically connected with obesity, even if it does not occur exclusively in conjunction with overweight conditions. Typically, obesity with BED requires a longer-term treatment than simple obesity. Rehabilitation interventions that aim at improving weight loss, reducing obesity-related complications, and changing dysfunctional behaviors should ideally be carried out in a multidisciplinary context with a clinical team composed of psychologists, dieticians, psychiatrists, endocrinologists, nutritionists, physiotherapists, etc. Long-term outpatient multidisciplinary treatments are likely to constitute an essential aspect of rehabilitation, due to the growing costs of a limited inpatient approach. Internet-based technologies can improve long-term obesity rehabilitation within a collaborative approach. The new mHealth (m-health, mobile health) paradigm, defined as clinical practice supported by up-to-date mobile communication devices, could increase compliance and engagement and contribute to a significant cost reduction in BED and obesity rehabilitation. Five psychological components need to be considered for successful mHealth-based obesity rehabilitation in order to facilitate weight loss. (1) Self-monitoring: portable body monitors, pedometers, and smartphones are mobile and, therefore, can be easily used, resulting in continuous self-monitoring. (2) Counselor feedback and communication: a functional approach is to provide online weight-loss interventions with brief weekly or monthly counselor or psychologist visits. (3) Social support: a group treatment format is typically preferred for behavioral weight-loss interventions. (4) Structured program: technology-based weight-loss programs incorporate principles of behavior therapy and change, with structured weekly protocols including nutrition, exercise, stimulus control, self-regulation strategies, and goal-setting. (5) Individually tailored program: interventions specifically designed around an individual's goals typically record higher rates of adherence and weight loss. Opportunities and limitations of the mHealth approach in clinical psychology for obesity and BED are discussed, taking into account future research directions in this promising area.
Keywords: obesity, rehabilitation, out-patient, new technologies, telemedicine, telecare, m-health, clinical psychology, psychotherapy, chronic care management
Procedia PDF Downloads 473
46 CsPbBr₃@MOF-5-Based Single Drop Microextraction for in-situ Fluorescence Colorimetric Detection of Dechlorination Reaction
Authors: Yanxue Shang, Jingbin Zeng
Abstract:
Chlorobenzene homologues (CBHs) are a category of environmental pollutants that cannot be ignored. They can persist in the environment for long periods and are potentially carcinogenic. The traditional way of studying the degradation of CBHs is dechlorination followed by sample preparation and analysis, which is not only time-consuming and laborious but also ties the detection and analysis processes to large-scale instruments; it therefore cannot achieve rapid and low-cost detection. Compared with traditional sensing methods, colorimetric sensing is simpler and more convenient. In recent years, chromaticity sensors based on fluorescence have attracted more and more attention: compared with sensing based on changes in fluorescence intensity, changes in color gradient are easier to recognize with the naked eye. Accordingly, this work proposes to use single drop microextraction (SDME) to solve the above problems. After the dechlorination reaction is completed, an organic droplet extracts the Cl⁻ and realizes fluorescence colorimetric sensing at the same time. This method integrates sample processing and visual in-situ detection, simplifying the detection process. As the fluorescence colorimetric sensing material, CsPbBr₃ was encapsulated in MOF-5 to construct a CsPbBr₃@MOF-5 composite, and the sensor was built by dispersing the composite in the SDME organic droplet. When the Br⁻ in CsPbBr₃ exchanges with the Cl⁻ produced by the dechlorination reaction, it is converted into CsPbCl₃, and the fluorescence of the single SDME droplet changes from green to blue emission, thereby enabling visual observation. Here, SDME concentrates and enriches the Cl⁻ in place of a separate sample pretreatment step, and the fluorescence color change of CsPbBr₃@MOF-5 replaces instrument-based detection, achieving real-time rapid detection. Owing to its absorption capability, MOF-5 not only improves the stability of CsPbBr₃ but also induces the adsorption of Cl⁻, simultaneously accelerating the exchange of Br⁻ and Cl⁻ in CsPbBr₃ and the detection of Cl⁻. The adsorption process was verified by density functional theory (DFT) calculations. The method exhibits exceptional linearity for Cl⁻ in the range of 10⁻² to 10⁻⁶ M (10,000 μM to 1 μM) with a limit of detection of 10⁻⁷ M. The dechlorination reactions of different kinds of CBHs were also carried out with this method, all with satisfactory detection ability. The accuracy was also verified by gas chromatography (GC), and the SDME method developed in this work was found to be highly credible. In summary, this in-situ visualization method for monitoring dechlorination reactions combines sample processing with fluorescence colorimetric sensing; the strategy researched herein thus represents a promising method for the visual detection of dechlorination reactions and can be extended to applications in the environment, chemical industry, and foods.
Keywords: chlorobenzene homologues, colorimetric sensor, metal halide perovskite, metal-organic frameworks, single drop microextraction
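For illustration only, a ratiometric response of the kind described (green CsPbBr₃ emission giving way to blue CsPbCl₃ emission as Cl⁻ rises) could be calibrated against log concentration as sketched below; every data point and name here is invented, not taken from the study.

```python
# Hypothetical calibration sketch for a ratiometric fluorescence response
# (blue/green emission ratio) versus log10 of the Cl- concentration.
# The data points below are invented for illustration only.
import numpy as np

log_conc = np.array([-6.0, -5.0, -4.0, -3.0, -2.0])   # 1 uM .. 10 mM
blue_green_ratio = np.array([0.21, 0.48, 0.74, 1.02, 1.31])

# Fit the linear calibration ratio = slope * log10([Cl-]) + intercept.
slope, intercept = np.polyfit(log_conc, blue_green_ratio, 1)

def estimate_log_conc(ratio):
    """Invert the linear calibration to estimate log10([Cl-])."""
    return (ratio - intercept) / slope

print(f"slope={slope:.3f}, intercept={intercept:.3f}")
print("estimated log10[Cl-] at ratio 0.9:", estimate_log_conc(0.9))
```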
Procedia PDF Downloads 143
45 Fructose-Aided Cross-Linked Enzyme Aggregates of Laccase: An Insight on Its Chemical and Physical Properties
Authors: Bipasa Dey, Varsha Panwar, Tanmay Dutta
Abstract:
Laccase, a multicopper oxidase (EC 1.10.3.2), has been at the forefront as a superior industrial biocatalyst. Laccases are versatile in catalyzing sustainable and ecological reactions such as polymerisation, xenobiotic degradation, and the bioremediation of phenolic and non-phenolic compounds. Regardless of their wide biotechnological applications, the critical limiting factors, viz. reusability, retrieval, and storage stability, still prevail and can impede their applicability. Cross-linked enzyme aggregates (CLEAs) have emerged as a promising technique that rehabilitates these essential facets, albeit at the expense of some enzymatic activity. The carrier-free crosslinking method prevails over carrier-bound immobilisation in conferring high productivity and low production cost, owing to the absence of an additional carrier, and in circumventing any non-catalytic ballast that could dilute the volumetric activity. To the best of our knowledge, the ε-amino group of lysyl residues is considered the best choice for forming Schiff's base with glutaraldehyde. Despite this preference, excess glutaraldehyde can bring about disproportionate and undesirable crosslinking within the catalytic site and hence deliver undesirable catalytic losses. Moreover, the surface distribution of lysine residues in Trametes versicolor laccase is significantly sparse. Thus, to mitigate the adverse effect of glutaraldehyde and, in conjunction, to scale down degradation and catalytic loss of the enzyme, crosslinking with inert substances like gelatine, collagen, bovine serum albumin (BSA), or excess lysine is practiced. Analogous to these molecules, sugars are well known as protein stabilisers. They help retain the structural integrity, specifically the secondary structure, of the protein during aggregation by changing the solvent properties, and they are understood to avert protein denaturation and enzyme deactivation during precipitation. We prepared cross-linked enzyme aggregates (CLEAs) of laccase from T. versicolor with the aid of sugars and compared the sugar CLEAs with the classic BSA and glutaraldehyde laccase CLEAs with respect to physico-chemical properties. The activity recovery for the fructose CLEAs was found to be ~20% higher than for the non-sugar CLEA. Moreover, the kcat/Km values of the CLEAs were two- and three-fold higher than those of the BSA-CLEA and GA-CLEA, respectively. The half-life (t1/2) of the sugar-CLEA was higher than that of the GA-CLEAs and the free enzyme, indicating greater thermal stability. Besides, it demonstrated extraordinarily high pH stability, analogous to the BSA-CLEA. The promising attributes of increased storage stability and recyclability (>80%) give the sugar-CLEAs an edge over conventional CLEAs of the corresponding free enzyme. Thus, the sugar-CLEA furnishes the rudimentary properties required of a biocatalyst and holds much promise.
Keywords: cross-linked enzyme aggregates, laccase immobilization, enzyme reusability, enzyme stability
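As a worked illustration of the reported catalytic-efficiency comparison, kcat/Km can be computed per preparation and expressed as fold-changes; the kinetic constants below are placeholders chosen only to reproduce the two- and three-fold relationships the abstract mentions, not the measured values.

```python
# Illustrative computation of catalytic efficiency (kcat/Km) fold-changes.
# The kinetic constants are placeholders, not the study's measurements.
kinetics = {
    "fructose-CLEA": {"kcat": 12.0, "Km": 0.20},  # kcat in 1/s, Km in mM
    "BSA-CLEA":      {"kcat": 9.0,  "Km": 0.30},
    "GA-CLEA":       {"kcat": 8.0,  "Km": 0.40},
}

eff = {name: k["kcat"] / k["Km"] for name, k in kinetics.items()}
for name, e in eff.items():
    print(f"{name}: kcat/Km = {e:.1f} 1/(s*mM), "
          f"{e / eff['GA-CLEA']:.1f}x vs GA-CLEA")
# With these placeholders: fructose is 2x the BSA-CLEA and 3x the GA-CLEA.
```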
Procedia PDF Downloads 102
44 Magnetron Sputtered Thin-Film Catalysts with Low Noble Metal Content for Proton Exchange Membrane Water Electrolysis
Authors: Peter Kus, Anna Ostroverkh, Yurii Yakovlev, Yevheniia Lobko, Roman Fiala, Ivan Khalakhan, Vladimir Matolin
Abstract:
The hydrogen economy is a concept of a low-emission society which harvests most of its energy from renewable sources (e.g., wind and solar) and, in the case of overproduction, electrochemically turns the excess into hydrogen, which serves as an energy carrier. Proton exchange membrane water electrolyzers (PEMWEs) are the backbone of this concept. By fast-response electricity-to-hydrogen conversion, PEMWEs will not only stabilize the electrical grid but also provide high-purity hydrogen for a variety of fuel cell powered devices, ranging from consumer electronics to vehicles. Wider commercialization of PEMWE technology is, however, hindered by the high prices of the noble metals necessary for catalyzing the redox reactions within the cell: namely, platinum for the hydrogen evolution reaction (HER) running on the cathode, and iridium for the oxygen evolution reaction (OER) on the anode. A possible way to lower the loading of Pt and Ir is to use conductive high-surface-area nanostructures as catalyst supports in conjunction with thin-film catalyst deposition. The presented study discusses an unconventional technique of membrane electrode assembly (MEA) preparation. Noble metal catalysts (Pt and Ir) were magnetron sputtered in very low loadings onto the surface of porous sublayers (located on the gas diffusion layer or directly on the membrane), forming a localized three-phase boundary, so to speak. An ultrasonically sprayed, corrosion-resistant TiC-based sublayer was used as the support material on the anode, whereas magnetron sputtered nanostructured etched nitrogenated carbon (CNx) served the same role on the cathode. Using this configuration, we were able to significantly decrease the amount of noble metals (to thicknesses of just tens of nanometers) while keeping the performance comparable to that of average state-of-the-art catalysts. Complex characterization of the prepared supported catalysts includes in-cell performance and durability tests and electrochemical impedance spectroscopy (EIS), as well as scanning electron microscopy (SEM) imaging and X-ray photoelectron spectroscopy (XPS) analysis. Our research proves that magnetron sputtering is a suitable method for the thin-film deposition of electrocatalysts. The tested set-up of thin-film supported anode and cathode catalysts, with a combined loading of just 120 µg·cm⁻², yields remarkable values of specific current. The described approach of thin-film low-loading catalyst deposition might be relevant when noble metal reduction is the topmost priority.
Keywords: hydrogen economy, low-loading catalyst, magnetron sputtering, proton exchange membrane water electrolyzer
Procedia PDF Downloads 163
43 Designing Sustainable and Energy-Efficient Urban Network: A Passive Architectural Approach with Solar Integration and Urban Building Energy Modeling (UBEM) Tools
Authors: A. Maghoul, A. Rostampouryasouri, MR. Maghami
Abstract:
Urban design and power network planning have been gaining momentum in recent years. The integration of renewable energy with urban design has been widely regarded as an increasingly important response to climate change and energy security concerns. Through the use of passive strategies and solar integration with Urban Building Energy Modeling (UBEM) tools, architects and designers can create high-quality designs that meet the needs of clients and stakeholders. To determine the most effective ways of combining renewable energy with urban development, we analyze the relationship between urban form and renewable energy production. The procedures involved in this practice include passive solar gain (in building design and urban design), solar integration, location strategy, and 3D models, with a case study conducted in Tehran, Iran. The study emphasizes the importance of spatial and temporal considerations in the development of sector-coupling strategies for solar power establishment in arid and semi-arid regions. The substation considered in the research consists of two parallel transformers, 13 lines, and 38 connection points. Each urban load connection point is equipped with 500 kW of solar PV capacity and 1 kWh of battery energy storage (BES) to store excess power generated from solar and inject it into the urban network during peak periods; a sketch of this dispatch rule follows below. The simulations and analyses were carried out in EnergyPlus. Passive solar gain involves maximizing the amount of sunlight that enters a building to reduce the need for artificial lighting and heating. Solar integration involves integrating solar photovoltaic (PV) power into smart grids to reduce emissions and increase energy efficiency. Location strategy is crucial to maximize the utilization of solar PV in an urban distribution feeder. Additionally, 3D models were made in Revit; such models are a key component of decision-making in areas including climate change mitigation, urban planning, and infrastructure. We applied these strategies in this research, and the results show that it is possible to create sustainable and energy-efficient urban environments. Furthermore, demand response programs can be used in conjunction with solar integration to optimize energy usage and reduce the strain on the power grid. The study also highlights the influence of ancient Persian architecture on Iran's urban planning system, as well as the potential for reducing pollutants in building construction, and explores advances in eco-city planning and development and the emerging practices and strategies for integrating sustainability goals.
Keywords: energy-efficient urban planning, sustainable architecture, solar energy, sustainable urban design
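A minimal sketch of the dispatch rule described above (store excess solar in the battery, inject during peak periods) is given below; the peak window, hourly profiles, and all parameter names are illustrative assumptions, not outputs of the EnergyPlus study.

```python
# Minimal sketch of the described dispatch rule for one connection point:
# charge the battery with surplus solar, discharge it during peak hours.
# Capacities match the abstract; the profiles and peak window are invented.
PV_KW = 500.0               # installed PV capacity per connection point
BES_KWH = 1.0               # battery energy storage capacity
PEAK_HOURS = range(18, 22)  # assumed evening peak window

def dispatch(hour, pv_kw, load_kw, soc_kwh):
    """Return (grid_kw, new_soc_kwh) for one one-hour step.

    grid_kw > 0 means export to the urban network, < 0 means import.
    """
    net = pv_kw - load_kw          # positive = surplus solar
    if net > 0:                    # charge with surplus, capped at capacity
        charge = min(net, BES_KWH - soc_kwh)
        return net - charge, soc_kwh + charge
    if hour in PEAK_HOURS:         # discharge to shave the evening peak
        discharge = min(-net, soc_kwh)
        return net + discharge, soc_kwh - discharge
    return net, soc_kwh            # off-peak deficit drawn from the grid

# Toy 24-hour profile: night / sunny midday / evening peak.
soc = 0.0
profile = [(0, 80)] * 6 + [(300, 120)] * 12 + [(0, 200)] * 6
for h, (pv, load) in enumerate(profile):
    grid, soc = dispatch(h, pv, load, soc)
print(f"final battery state of charge: {soc} kWh")
```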
Procedia PDF Downloads 76
42 COVID-19 Laws and Policy: The Use of Policy Surveillance For Better Legal Preparedness
Authors: Francesca Nardi, Kashish Aneja, Katherine Ginsbach
Abstract:
The COVID-19 pandemic has demonstrated both the need for evidence-based and rights-based public health policy and how challenging it can be to make effective decisions with limited information, evidence, and data. The O'Neill Institute, in conjunction with several partners, has been working since the beginning of the pandemic to collect, analyze, and distribute critical data on public health policies enacted in response to COVID-19 around the world in the COVID-19 Law Lab. Well-designed laws and policies can help build strong health systems, implement necessary measures to combat viral transmission, enforce actions that promote public health and safety for everyone, and, at the individual level, have a direct impact on health outcomes. Poorly designed laws and policies, on the other hand, can fail to achieve the intended results, obstruct the realization of fundamental human rights, further disease spread, or cause unintended collateral harms. When done properly, laws can provide the foundation that brings clarity to complexity, embraces nuance, and identifies gaps of uncertainty. However, laws can also shape the societal factors that make disease possible. Law is inseparable from the rest of society, and COVID-19 has exposed just how much laws and policies intersect with all facets of society. In the COVID-19 context, evidence-based and well-informed law and policy decisions, made at the right time and in the right place, can and have meant the difference between life and death for many. Having a solid evidentiary base of legal information can promote understanding of what works well and where, and it can drive resources and action to where they are needed most. We know that legal mechanisms can enable nations to reduce inequities and prepare for emerging threats, like novel pathogens that result in deadly disease outbreaks or antibiotic resistance. The collection and analysis of data on these legal mechanisms is a critical step towards ensuring that legal interventions and legal landscapes are effectively incorporated into more traditional kinds of health science data analyses. The COVID-19 Law Lab sees a unique opportunity to collect and analyze this kind of non-traditional data to inform policy, using laws and policies from across the globe and across diseases. This global view is critical to assessing the efficacy of policies in a wide range of cultural, economic, and demographic circumstances. The COVID-19 Law Lab is not just a collection of legal texts relating to COVID-19; it is a dataset of concise and actionable legal information that can be used by health researchers, social scientists, academics, human rights advocates, law and policymakers, government decision-makers, and others for cross-disciplinary quantitative and qualitative analysis to identify best practices from this outbreak, and previous ones, in order to be better prepared for potential future public health events.
Keywords: public health law, surveillance, policy, legal, data
Procedia PDF Downloads 141
41 Phonological Encoding and Working Memory in Kannada Speaking Adults Who Stutter
Authors: Nirmal Sugathan, Santosh Maruthy
Abstract:
Background: A considerable number of studies have evidenced that phonological encoding (PE) and working memory (WM) skills operate differently in adults who stutter (AWS). In order to tap these skills, several paradigms have been employed, such as phonological priming, phoneme monitoring, and nonword repetition tasks. This study, however, utilizes a word jumble paradigm to assess both PE and WM through different modalities, which may give a better understanding of phonological processing deficits in AWS. Aim: The present study investigated PE and WM abilities in conjunction with lexical access in AWS using jumbled words. The study also aimed at investigating the effect of an increase in cognitive load on phonological processing in AWS by comparing speech reaction times (SRT) and accuracy scores across syllable lengths. Method: Participants were 11 AWS (age range = 19-26) and 11 adults who do not stutter (AWNS) (age range = 19-26) matched for age, gender, and handedness. Stimuli: Ninety 3-, 4-, and 5-syllable jumbled words (JWs) (n = 30 per syllable length category) constructed from Kannada words served as stimuli for the jumbled word paradigm; the JWs were generated by randomly transposing the syllables of the real words (a sketch of this step follows below). Procedures: To assess PE, the JWs were presented visually using DMDX software, and for the WM task, JWs were presented auditorily through headphones. The participants were asked to silently manipulate the jumbled words to form a Kannada real word and respond verbally once. The responses for both tasks were audio recorded using the record function in DMDX software, and the recorded responses were analyzed using PRAAT software to calculate the SRT. Results: SRT: Mann-Whitney test results demonstrated that AWS performed significantly slower on both tasks (p < 0.001), as indicated by increased SRT. Also, AWS presented with increased SRT on both tasks in all syllable length conditions (p < 0.001). Effect of syllable length: Wilcoxon signed rank tests revealed that, on the task assessing PE, the SRTs for 4-syllable JWs were significantly higher than for 3-syllable words in both AWS (Z = -2.93, p = .003) and AWNS (Z = -2.41, p = .003). However, the findings for 4- versus 5-syllable words were not significant. Task accuracy: The accuracy scores were calculated for the three syllable length conditions for both PE and WM tasks and were compared across the groups using the Mann-Whitney test. The results indicated that the accuracy scores of AWS were significantly below those of AWNS in all three syllable conditions for both tasks (p < 0.001). Conclusion: The above findings suggest that PE and WM skills are compromised in AWS, as indicated by increased SRT. Also, AWS were progressively less accurate in descrambling JWs of increasing syllable length, which may be interpreted to mean that, rather than existing as a uniform deficiency, PE and WM deficits emerge when the cognitive load is increased. AWNS exhibited increased SRT and increased accuracy for JWs of longer syllable length, whereas AWS did not benefit from increased reaction time; thus, AWS were compromised on both SRT and accuracy when solving JWs of longer syllable length.
Keywords: adults who stutter, phonological ability, working memory, encoding, jumbled words
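The stimulus-generation step (random transposition of syllables) can be sketched as follows; the romanized placeholder syllables stand in for the Kannada items actually used in the study.

```python
# Illustrative generator for jumbled-word (JW) stimuli: the syllables of a
# real word are randomly transposed. Romanized placeholder syllables are
# used here instead of the study's Kannada items.
import random

random.seed(42)

def jumble(syllables):
    """Return a random transposition that differs from the original word."""
    jw = syllables[:]
    while jw == syllables:
        random.shuffle(jw)
    return jw

word = ["ba", "da", "ne"]        # hypothetical 3-syllable word
print("".join(jumble(word)))     # e.g. "nebada"
```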
Procedia PDF Downloads 240
40 The Effect of Mindfulness-Based Interventions for Individuals with Tourette Syndrome: A Scoping Review
Authors: Ilana Singer, Anastasia Lučić, Julie Leclerc
Abstract:
Introduction: Tics, characterized by repetitive, sudden, non-voluntary motor movements or vocalizations, are prevalent in chronic tic disorder (CT) and Tourette Syndrome (TS). These neurodevelopmental disorders often coexist with various psychiatric conditions, leading to challenges and reduced quality of life. While medication in conjunction with behavioral interventions, such as Habit Reversal Training (HRT), Exposure and Response Prevention (ERP), and Comprehensive Behavioral Intervention for Tics (CBIT), has shown efficacy, a significant proportion of patients experience persistent tics. Thus, innovative treatment approaches, such as mindfulness-based approaches, are necessary to improve therapeutic outcomes. Nonetheless, the effectiveness of mindfulness-based interventions in the context of CT and TS remains understudied. Objective: The objective of this scoping review is to provide an overview of the current state of research on mindfulness-based interventions for CT and TS, identify knowledge and evidence gaps, discuss the effectiveness of mindfulness-based interventions relative to other treatment options, and discuss implications for clinical practice and policy development. Method: Using guidelines from Peters (2020) and the PRISMA-ScR, a scoping review was conducted. Multiple electronic databases were searched from inception until June 2023, including MEDLINE, EMBASE, PsycINFO, Global Health, PubMed, Web of Science, and Érudit. Inclusion criteria were applied to select relevant studies, and data extraction was independently performed by two reviewers. Results: Five papers were included in the study. Firstly, mindfulness interventions were found to be effective in reducing anxiety and depression while enhancing overall well-being in individuals with tics. Furthermore, the review highlighted the potential role of mindfulness in enhancing functional connectivity within the Default Mode Network (DMN) as a compensatory function in TS patients. This suggests that mindfulness interventions may complement and support traditional therapeutic approaches, particularly HRT, by positively influencing brain networks associated with tic regulation and control. Conclusion: This scoping review contributes to the understanding of the effectiveness of mindfulness-based interventions in managing CT and TS. By identifying research gaps, this review can guide future investigations and interventions to improve outcomes for individuals with CT or TS. Overall, these findings emphasize the potential benefits of incorporating mindfulness-based interventions as a smaller subset within comprehensive treatment strategies. However, it is essential to acknowledge the limitations of this scoping review, such as the lack of a pre-established protocol and the limited number of studies available for inclusion. Further research and clinical exploration are necessary to better understand the specific mechanisms and optimal integration of mindfulness-based interventions with existing behavioral interventions for this population.
Keywords: scoping reviews, Tourette Syndrome, tics, mindfulness-based, therapy, intervention
Procedia PDF Downloads 83
39 Scalable Performance Testing: Facilitating The Assessment Of Application Performance Under Substantial Loads And Mitigating The Risk Of System Failures
Authors: Solanki Ravirajsinh
Abstract:
In the software testing life cycle, failing to conduct thorough performance testing can result in significant losses for an organization due to application crashes and improper behavior under high user loads in production. Simulating large volumes of requests, such as 5 million within 5-10 minutes, is challenging without a scalable performance testing framework. Leveraging cloud services to implement such a framework makes it feasible to handle 5-10 million requests in just 5-10 minutes, helping organizations ensure their applications perform reliably under peak conditions. Implementing a scalable performance testing framework using cloud services and tools like JMeter, EC2 instances (virtual machines), CloudWatch logs (for monitoring errors and logs), EFS (file storage), and security groups offers several key benefits for organizations. Creating a performance testing framework with this approach helps optimize resource utilization, enables effective benchmarking, increases reliability, and saves costs by resolving performance issues before the application is released. In performance testing, a master-slave framework facilitates distributed testing across multiple EC2 instances to emulate many concurrent users and efficiently handle high loads. The master node orchestrates the test execution by coordinating with multiple slave nodes to distribute the workload. Slave nodes execute the test scripts provided by the master node, with each node handling a portion of the overall user load and generating requests to the target application or service. By leveraging JMeter's master-slave framework in conjunction with cloud services like EC2 instances, EFS, CloudWatch logs, security groups, and command-line tools, organizations can achieve superior scalability and flexibility in their performance testing efforts. In this master-slave framework, JMeter must be installed on both the master and each slave EC2 instance. The master EC2 instance functions as the "brain," while the slave instances operate as the "body parts." The master directs each slave to execute a specified number of requests. Upon completion of the execution, the slave instances transmit their results back to the master, which then consolidates them into a comprehensive report detailing metrics such as the number of requests sent, encountered errors, network latency, response times, server capacity, throughput, and bandwidth. Leveraging cloud services, the framework benefits from automatic scaling based on the volume of requests. Notably, integrating cloud services allows organizations to handle more than 5-10 million requests within 5 minutes, depending on the server capacity of the hosted website or application.
Keywords: identifying crashes of applications under heavy load, JMeter with cloud services, scalable performance testing, JMeter master and slave using cloud services
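As a sketch of how a master node might launch such a distributed run, the snippet below shells out to JMeter's standard non-GUI command line (-n, -t, -R, and -l are real JMeter options); the slave IPs and file paths are placeholders, and jmeter-server is assumed to already be running on each slave instance.

```python
# Sketch of launching a distributed JMeter run from the master node using
# JMeter's standard non-GUI CLI flags. The EC2 private IPs and file paths
# are placeholders; jmeter-server must be running on each slave.
import subprocess

SLAVES = ["10.0.1.11", "10.0.1.12", "10.0.1.13"]  # hypothetical slave IPs

cmd = [
    "jmeter",
    "-n",                       # non-GUI mode
    "-t", "load_test.jmx",      # test plan executed by the slaves
    "-R", ",".join(SLAVES),     # remote (slave) engines to drive
    "-l", "results.jtl",        # consolidated results log on the master
]
subprocess.run(cmd, check=True)
```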
Procedia PDF Downloads 27
38 An Overview of Bioinformatics Methods to Detect Novel Riboswitches Highlighting the Importance of Structure Consideration
Authors: Danny Barash
Abstract:
Riboswitches are RNA genetic control elements that were originally discovered in bacteria and provide a unique mechanism of gene regulation. They work without the participation of proteins and are believed to represent ancient regulatory systems on the evolutionary timescale. One of the biggest challenges in riboswitch research is that many are found in prokaryotes, but only a small percentage of known riboswitches have been found in certain eukaryotic organisms. The few examples of eukaryotic riboswitches were identified using sequence-based bioinformatics search methods that include some slight structural considerations. These pattern-matching methods were the first to be applied for the purpose of riboswitch detection, and they can be programmed very efficiently using a data structure called affix arrays, making them suitable for genome-wide searches of riboswitch patterns. However, they are limited in their ability to detect harder-to-find riboswitches that deviate from the known patterns. Several methods have been developed since then to tackle this problem. The one most commonly used by practitioners is Infernal, which relies on Hidden Markov Models (HMMs) and Covariance Models (CMs). Profile Hidden Markov Models were also implemented in the pHMM Riboswitch Scanner web application, independently of Infernal. Other computational approaches that have been developed include RMDetect, based on 3D structural modules, and RNAbor, which utilizes the Boltzmann probabilities of structural neighbors. We have tried to incorporate more sophisticated secondary structure considerations based on RNA folding prediction using several strategies. The first idea was to utilize window-based methods in conjunction with folding predictions by energy minimization. The moving-window approach is heavily geared towards secondary structure consideration, with the sequence treated as a constraint. However, the method cannot be used genome-wide because each folding prediction by energy minimization in the moving window is computationally expensive, so scanning is feasible only in the vicinity of genes of interest. The second idea was to remedy this inefficiency by constructing a pipeline that consists of inverse RNA folding, which considers RNA secondary structure, followed by a BLAST search, which is sequence-based and highly efficient. This approach, which relies on inverse RNA folding in general and our own in-house fragment-based inverse RNA folding program RNAfbinv in particular, shows the capability to find attractive candidates that are missed by Infernal and other standard methods used for riboswitch detection. We demonstrate attractive candidates found by both the moving-window approach and the inverse RNA folding approach performed together with BLAST. We conclude that structure-based methods like the two strategies outlined above hold considerable promise in detecting riboswitches and other conserved RNAs of functional importance in a variety of organisms.
Keywords: riboswitches, RNA folding prediction, RNA structure, structure-based methods
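A minimal sketch of the moving-window idea, assuming the ViennaRNA Python bindings are available: fold each window by energy minimization and flag low-MFE windows for closer inspection. The window size, step, threshold, and toy sequence are illustrative assumptions, and this is not the authors' RNAfbinv pipeline.

```python
# Minimal moving-window scan: fold each window of a sequence by energy
# minimization and flag windows whose minimum free energy (MFE) falls
# below a threshold as candidates for closer riboswitch inspection.
# Requires the ViennaRNA package's Python interface.
import RNA

def scan(sequence, window=120, step=20, mfe_cutoff=-30.0):
    hits = []
    for start in range(0, len(sequence) - window + 1, step):
        sub = sequence[start:start + window]
        structure, mfe = RNA.fold(sub)   # MFE secondary structure
        if mfe < mfe_cutoff:
            hits.append((start, mfe, structure))
    return hits

seq = "GGGCUAUUAGCUCAGUUGGUUAGAGCGCACCCCUGAUAAGGGUG" * 5  # toy sequence
for start, mfe, ss in scan(seq):
    print(start, round(mfe, 1), ss)
```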
Procedia PDF Downloads 234
37 Integrating Natural Language Processing (NLP) and Machine Learning in Lung Cancer Diagnosis
Authors: Mehrnaz Mostafavi
Abstract:
The assessment and categorization of incidental lung nodules present a considerable challenge in healthcare, often necessitating resource-intensive multiple computed tomography (CT) scans for growth confirmation. This research addresses the issue by introducing a distinct computational approach leveraging radiomics and deep-learning methods. However, understanding local services is essential before implementing these advancements. With diverse tracking methods in place, there is a need for efficient and accurate identification approaches, especially in the context of managing lung nodules alongside pre-existing cancer scenarios. This study explores the integration of text-based algorithms in medical data curation, indicating their efficacy in conjunction with machine learning and deep-learning models for identifying lung nodules. Combining medical images with text data has demonstrated superior data retrieval compared to using each modality independently. While deep learning and text analysis show potential in detecting previously missed nodules, challenges persist, such as increased false positives. The presented research introduces a Structured Query Language (SQL) algorithm designed for identifying pulmonary nodules in a tertiary cancer center, externally validated at another hospital. Leveraging natural language processing (NLP) and machine learning, the algorithm categorizes lung nodule reports based on sentence features, aiming to facilitate research and assess clinical pathways. The hypothesis posits that the algorithm can accurately identify lung nodule CT scans and predict concerning nodule features using machine-learning classifiers. Through a retrospective observational study spanning a decade, CT scan reports were collected, and an algorithm was developed to extract and classify data. Results underscore the complexity of lung nodule cohorts in cancer centers, emphasizing the importance of careful evaluation before assuming a metastatic origin. The SQL and NLP algorithms demonstrated high accuracy in identifying lung nodule sentences, indicating potential for local service evaluation and research dataset creation. Machine-learning models exhibited strong accuracy in predicting concerning changes in lung nodule scan reports. While limitations include variability in disease group attribution, the potential for correlation rather than causality in clinical findings, and the need for further external validation, the algorithm's accuracy and its potential to support clinical decision-making and healthcare automation represent a significant stride in lung nodule management and research.
Keywords: lung cancer diagnosis, structured-query-language (SQL), natural language processing (NLP), machine learning, CT scans
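The sentence-classification step can be sketched with a standard scikit-learn text pipeline; the tiny training set below is invented for demonstration and does not reproduce the study's SQL/NLP algorithm or its labels.

```python
# Illustrative sentence classifier of the kind the abstract describes:
# categorizing radiology-report sentences by whether they mention a
# pulmonary nodule. The training examples are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "A 6 mm nodule is seen in the right upper lobe.",
    "Stable 4 mm pulmonary nodule, no interval growth.",
    "No focal consolidation or pleural effusion.",
    "The heart size is within normal limits.",
]
labels = [1, 1, 0, 0]  # 1 = nodule-related sentence

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(sentences, labels)

print(clf.predict(["New 8 mm nodule in the left lower lobe."]))  # -> [1]
```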
Procedia PDF Downloads 100
36 Sustainable Living Where the Immaterial Matters
Authors: Maria Hadjisoteriou, Yiorgos Hadjichristou
Abstract:
This paper aims to explore and provoke a debate, through the work of the design studio "Living Where the Immaterial Matters" of the architecture department of the University of Nicosia, on the role that "immaterial matter" can play in enhancing innovative sustainable architecture, viewing cities as sustainable organisms that always grow and alter. The blurring, juxtaposing binary of immaterial and matter, as the theoretical backbone of the unit, is counterbalanced by the practicalities of the contested sites of the last divided capital, Nicosia, with its ambiguous green line, and the ghost city of Famagusta, on the island of Cyprus. Jonathan Hill argues that the 'immaterial is as important to architecture as the material', concluding that 'Immaterial–Material' weaves the two together, 'so that they are in conjunction not opposition'. This understanding of the relationship of the immaterial versus the material set the premises and the departing point of our argument, and speaks of new recipes for creating hybrid public space that can lead to the unpredictability of a complex and interactive, sustainable city. We prioritized human experience, distinguishing the notions of space and place with reference to Heidegger's 'Building Dwelling Thinking': 'a distinction between space and place, where spaces gain authority not from "space" appreciated mathematically but "place" appreciated through human experience'. Following the above, architecture and the city are seen as one organism. The notions of boundaries, porous borders, fluidity, mobility, and spaces of flows are the lenses of the unit's methodological investigation, leading to the notion of a new hybrid urban environment whose main constituent elements are in a flux relationship. The material and immaterial flows of the town are seen as interrelated and interwoven with the material buildings and their immaterial contents, yielding new sustainable human built environments. These premises consequently led to choices of controversial sites. An indisputably provoking site was the ghost town of Famagusta, where time froze back in 1974. Inspired by the fact that nature took over a literally dormant, decaying city, a sustainable rebirth was seen as an opportunity in which both nature and the built environment, material and immaterial, are interwoven in a new emergent urban environment. Similarly, we saw the dividing 'green line' of Nicosia completely failing to prevent the trespassing of images, sounds and whispers, smells and symbols that define the two prevailing cultures, becoming instead a porous creative entity which tends to start reuniting rather than separating, generating sustainable cultures and built environments. The authors would like to contribute to the debate by introducing a question about a new recipe for cooking the built environment. Can we talk about a new 'urban recipe', 'cooking architecture and city', to deliver an ever-changing, sustainable urban organism whose identity will mainly depend on the interrelationship of its immaterial and material constituents?
Keywords: blurring zones, porous borders, spaces of flow, urban recipe
Procedia PDF Downloads 420
35 Diversity and Inclusion in Focus: Cultivating a Sense of Belonging in Higher Education
Authors: Naziema Jappie
Abstract:
South Africa is a diverse nation, but one with many challenges. The fundamental changes in the political, economic and educational domains in South Africa in the late 1990s affected the South African community profoundly. In higher education, experiences of discrimination and bias are detrimental to the sense of belonging of staff and students. It is therefore important to cultivate an appreciation of diversity and inclusion. To bridge common understandings with the reality of racial inequality, we must understand the ways in which senior and executive leadership at universities think about social justice issues relating to diversity and inclusion and contextualize these within the current post-democracy landscape. Progress on social justice issues and initiatives in South African higher education has been slow. The focus is to highlight how, and to what extent, initiatives or practices around campus diversity and inclusion have been considered and made part of the mainstream intellectual and academic conversations in South Africa. This involves an examination of the social and epistemological conditions of possibility for meaningful research and curriculum practices, staff and student recruitment, and student access and success in addressing the challenges posed by social diversity on campuses. Methodology: In this study, university senior and executive leadership were interviewed about their perceptions and advancement of social justice, and the study examines the buffering effects of diverse and inclusive peer interactions and institutional commitment on the relationship between discrimination-bias and the sense of belonging of staff and students at the institutions. The paper further explores diversity and inclusion initiatives at the three institutions using a Critical Race Theory approach in conjunction with a literature review on social justice with a special focus on diversity and inclusion. Findings: This paper draws on research findings that demonstrate the need to address social justice issues of diversity and inclusion in the South African higher education context, so that university leaders can live out their experiences and values as they work to shape students into accountable and responsible citizens. Documents were selected for review with the intent of illustrating how diversity and inclusion work done across an institution can shape the experiences of previously disadvantaged persons at these institutions. The research has highlighted the need for institutional leaders to embody their own mission and vision as they frame social justice issues for the campus community. Finally, the paper provides recommendations to institutions for strengthening high-level diversity and inclusion programs/initiatives among staff, students and administrators. The conclusion stresses the importance of addressing the historical and current policies and practices that either facilitate or negate the goals of social justice, encouraging these privileged institutions to create internal committees or task forces that focus on racial and ethnic disparities in the institution.Keywords: diversity, higher education, inclusion, social justice
Procedia PDF Downloads 12134 Interethnic Communication in Multicultural Areas: A Case Study of Intercultural Sensitivity Between Baloch and Persians in Iran
Authors: Mehraveh Taghizadeh
Abstract:
Iran is home to a diverse range of ethnic groups, such as the Baloch, Kurds, Persians, Lors, Arabs, and Turks. The Persians are the largest group, while the Baloch are a minority residing on the country's southeastern border, with a different language and religion. As a consequence, political discussions have often prioritized national identity and national security over Baloch ethnic identity. However, to improve intercultural understanding and reduce cultural schemas, it is crucial to decrease ethnocentrism and increase intercultural communication. Meanwhile, Kerman, a multicultural province that borders Sistan and Baluchistan, has become a destination for Baloch immigrants. By recognizing the current status of intercultural competence, effective policies can be developed for expanding intercultural communication and creating a more inclusive and peaceful society. This research therefore aims to study intercultural sensitivity between Persians and Baloch in Kerman, asking: how do the Persian and Baloch ethnic groups perceive each other? This study represents the first exploration of communication dynamics between Persian and Baloch individuals. Utilizing a qualitative approach, it employs thematic analysis in conjunction with Bennett's intercultural sensitivity model. The model comprises two components: ethnocentrism, which spans from denial and defense to minimization, and ethno-relativism, which ranges from acceptance and adaptation to integration. To attain this objective, 30 individuals of Persian and Baloch ethnicity were interviewed using a semi-structured format. The analysis suggests that the Baloch and Persians exhibit a range of intercultural sensitivities characterized by defensive and minimizing attitudes in the ethnocentrism domain and accepting attitudes in the ethno-relativism domain. Minimization here involves recognizing the shared humanity and positive schemas of both groups. Furthermore, in the adaptation domain, Persians' efforts to assimilate into Baloch culture at the acceptance level are primarily focused on the civilizational dimension, including the use of traditional Balochi clothing designs on their own clothes. The Persians hold intercultural schemas about the Baloch people that include notions of religious fanaticism, tribalism, poverty, smuggling, and a nomadic way of life. Conversely, the Baloch hold intercultural schemas about Persians that include religious fanaticism, disdain towards the Baloch, and ethnocentrism. Both groups tend to tie ethnicity to religion and judge each other accordingly. These schemas originate in media representations and in encounters without genuine interaction between the two ethnic groups. The findings indicate that the groups have not reached the adaptation and integration levels of ethno-relativism. Furthermore, the results indicate that developing personal communication in multicultural environments reduces intercultural sensitivity and increases positive interactions and civilizational dialogue, helping people understand each other better and function better in their daily lives.Keywords: intercultural communication, intercultural sensitivity, interethnic communication, Iran, Baloch, Persians
Procedia PDF Downloads 5133 Application of MALDI-MS to Differentiate SARS-CoV-2 and Non-SARS-CoV-2 Symptomatic Infections in the Early and Late Phases of the Pandemic
Authors: Dmitriy Babenko, Sergey Yegorov, Ilya Korshukov, Aidana Sultanbekova, Valentina Barkhanskaya, Tatiana Bashirova, Yerzhan Zhunusov, Yevgeniya Li, Viktoriya Parakhina, Svetlana Kolesnichenko, Yeldar Baiken, Aruzhan Pralieva, Zhibek Zhumadilova, Matthew S. Miller, Gonzalo H. Hortelano, Anar Turmuhambetova, Antonella E. Chesca, Irina Kadyrova
Abstract:
Introduction: The rapidly evolving COVID-19 pandemic, along with the re-emergence of pathogens causing acute respiratory infections (ARI), has necessitated the development of novel diagnostic tools to differentiate various causes of ARI. MALDI-MS, due to its wide usage and affordability, has been proposed as a potential instrument for diagnosing SARS-CoV-2 versus non-SARS-CoV-2 ARI. The aim of this study was to investigate the potential of MALDI-MS in conjunction with a machine learning model to accurately distinguish between symptomatic infections caused by SARS-CoV-2 and non-SARS-CoV-2 during both the early and later phases of the pandemic. Furthermore, this study aimed to analyze mass spectrometry (MS) data obtained from nasal swabs of healthy individuals. Methods: We gathered mass spectra from 252 samples, comprising 108 SARS-CoV-2-positive samples obtained in 2020 (Covid 2020), 7 SARS-CoV-2-positive samples obtained in 2023 (Covid 2023), 71 samples from symptomatic individuals without SARS-CoV-2 (Control non-Covid ARVI), and 66 samples from healthy individuals (Control healthy). All samples were subjected to RT-PCR testing. For data analysis, we employed the caret R package to train and test seven machine-learning algorithms: C5.0, KNN, NB, RF, SVM-L, SVM-R, and XGBoost. Training used a five-fold (outer) nested, five-times-repeated ten-fold (inner) cross-validation with a randomized stratified splitting approach. Results: In this study, we utilized the Covid 2020 dataset as the case group and the non-Covid ARVI dataset as the control group to train and test the machine learning (ML) models. Among these models, XGBoost and SVM-R demonstrated the highest performance, with accuracy values of 0.97 [0.93; 0.97] and 0.95 [0.95; 0.97], specificity values of 0.86 [0.71; 0.93] and 0.86 [0.79; 0.87], and sensitivity values of 0.984 [0.984; 1.000] and 1.000 [0.968; 1.000], respectively. When examining the Covid 2023 dataset, the Naive Bayes model achieved the highest classification accuracy of 43%, while XGBoost and SVM-R achieved accuracies of 14%. For the healthy control dataset, the accuracy of the models ranged from 0.27 [0.24; 0.32] for k-nearest neighbors to 0.44 [0.41; 0.45] for the Support Vector Machine with a radial basis function kernel. Conclusion: ML models trained on MALDI-MS spectra of nasopharyngeal swabs obtained from patients with Covid during the initial phase of the pandemic, as well as from symptomatic non-Covid individuals, showed excellent classification performance, which aligns with the results of previous studies. However, when applied to swabs from healthy individuals and a limited sample of patients with Covid in the late phase of the pandemic, the ML models exhibited lower classification accuracy.Keywords: SARS-CoV-2, MALDI-TOF MS, ML models, nasopharyngeal swabs, classification
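The study performed its nested cross-validation in R with the caret package; the following is a minimal scikit-learn analogue of the scheme described (a 5-fold outer loop for performance estimation wrapping a five-times-repeated 10-fold inner loop for tuning). The feature matrix and labels are random placeholders, since the MALDI-MS spectra themselves are not reproduced here.

```python
# Sketch of the nested cross-validation scheme described above, using
# scikit-learn instead of caret. X and y stand in for the MALDI-MS
# feature matrix and case/control labels (108 Covid 2020 + 71 non-Covid
# ARVI samples); the SVM grid is illustrative only.
import numpy as np
from sklearn.model_selection import (GridSearchCV, StratifiedKFold,
                                     RepeatedStratifiedKFold, cross_val_score)
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(179, 50))    # placeholder spectral features
y = rng.integers(0, 2, size=179)  # placeholder case/control labels

inner = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Inner loop: tune the RBF-kernel SVM (the paper's "SVM-R").
tuner = GridSearchCV(SVC(kernel="rbf"),
                     param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.01]},
                     cv=inner)

# Outer loop: unbiased accuracy estimate of the tuned model.
scores = cross_val_score(tuner, X, y, cv=outer)
print(scores.mean(), scores.std())
```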
Procedia PDF Downloads 10832 Impact of Transportation on Access to Reproductive and Maternal Health Services in Northeast Cambodia: A Policy Brief
Authors: Zaman Jawahar, Anne Rouve-Khiev, Elizabeth Hoban, Joanne Williams
Abstract:
Ensuring access to timely obstetric care is essential to prevent maternal deaths. Geographical barriers pose significant challenges for women accessing quality reproductive and maternal health services in rural Cambodia. This policy brief affirms the need to address transportation and cost (direct and indirect) as critical barriers to accessing reproductive and maternal health (RMH) services in four provinces in Northeast Cambodia (Kratie, Ratanak Kiri, Mondul Kiri, Stung Treng). A systematic search of the literature identified 1,116 articles, of which only ten articles from low- and middle-income countries met the inclusion criteria. The ten articles reported on transportation and cost related to accessing RMH services. In addition, research findings from Partnering to Save Lives (PSL) studies in the four provinces were included in the analysis. Thematic analysis of the ten articles and the PSL research findings was conducted, and the findings are presented in this paper. The key findings identify critical barriers to accessing RMH services in the four provinces: women experience 1) difficulties finding affordable transportation; 2) a lack of available and accessible transportation; 3) greater distances and travel times to services; 4) poor geographical terrain; and 5) higher opportunity costs. Distance and poverty pose a double burden for women accessing RMH services, making facility-based delivery less feasible than home delivery. Furthermore, indirect and hidden costs associated with institutional delivery may have an impact on women's decision to seek RMH care. Existing health financing schemes in Cambodia, such as the Health Equity Fund (HEF) and the Voucher Scheme, contribute to the solution but have also shown limitations. These schemes improve access to RMH services for the poorest group, but the barrier of transportation costs remains. In conclusion, initiatives that are proven to be effective in the Cambodian context should be continued or expanded in conjunction with the HEF, and special consideration should be given to communities living in geographically remote regions and difficult-to-access areas. The following strategies are recommended: 1) maintain and further strengthen transportation support in the HEF scheme; 2) expand community-based initiatives such as Community Managed Health Equity Funds and Village Savings and Loan Associations; 3) establish maternity waiting homes; and 4) include antenatal and postnatal care in the provision of integrated outreach services. This policy brief can be used to inform key policymakers and provide evidence to assist them in developing strategies to increase poor women's access to RMH services in low-income settings, taking into consideration geographic distance and the other indirect costs associated with facility-based delivery.Keywords: access, barriers, northeast Cambodia, reproductive and maternal health service, transportation and cost
Procedia PDF Downloads 14131 Developing Thai-UK Double Degree Programmes: An Exploratory Study Identifying Challenges, Competing Interests and Risks
Abstract:
In Thailand, a 4.0 policy has been initiated that is designed to prepare and train an appropriate workforce to support the move to a value-based economy. One aspect of support for this policy is a project to encourage the creation of double degree programmes, specifically between Thai and UK universities. This research into the project, conducted with its key players, explores the factors that can either enable or hinder the development of such programmes, an area that has received little research attention to date. Key findings focus on differences in quality assurance requirements, attitudes to benefits, risks, and committed levels of institutional support, thus providing valuable input into future policy making. The Transnational Education (TNE) Development Project was initiated in 2015 by the British Council, in conjunction with the Office for Higher Education Commission (OHEC), Thailand. The purpose of the project was to facilitate opportunities for Thai universities to partner with UK universities so as to develop double degree programme models. In this arrangement, the student gains both a UK and a Thai qualification, spending time studying in both countries. Twenty-two partnerships were initiated via the project. The study took a qualitative approach; data sources included participation in TNE project workshops, peer reviews, and over 20 semi-structured interviews conducted with key informants within the participating UK and Thai universities. Interviews were recorded, transcribed, and analysed for key themes. The research has revealed that the strength of the relationship between the two partner institutions is critical. Successful partnerships are often built on previous personal contact, have senior-level involvement, and are strengthened by collaboration on different levels, such as research, student exchange, and other forms of mobility. The support of the British Council was regarded as a key enabler in developing these types of projects for those universities that had not previously been involved in TNE. The involvement of industry is apparent in programmes that have high scientific content but is not well developed in other subject areas. Factors that hinder the development of partnership programmes include the approval processes and quality requirements of each institution. Significant differences in fee levels between Thai and UK universities pose a challenge, and attempts to bridge them require goodwill on the part of the latter that may be difficult to secure. This research indicates the key factors to which attention needs to be given when developing a TNE programme. Early attention to these factors can reduce the likelihood that a partnership will fail to develop. Representatives in both partner universities need to understand their respective processes of development and approval. The research has important practical implications for policy-makers and planners involved with TNE, not only in relation to the specific TNE project but also more widely in relation to the development of TNE programmes in other countries and other subject areas. Future research will focus on assessing the success of the double degree programmes generated by the TNE Development Project from the perspective of universities, policy makers, and industry partners.Keywords: double-degree, internationalization, partnerships, Thai-UK
Procedia PDF Downloads 10330 Border Security: Implementing the “Memory Effect” Theory in Irregular Migration
Authors: Iliuta Cumpanasu, Veronica Oana Cumpanasu
Abstract:
This paper studies the conjunction between the newly emerged theory of the “Memory Effect” in irregular migration and related criminality and the notion of securitization, together with their impact on border management. It advances the field by identifying, for the first time, the patterns linking the two concepts and by developing a theoretical explanation of the effects of non-military threats on border security. Over recent years, irregular migration has increased significantly worldwide. The U.N.'s refugee agency reports that the number of displaced people is at its highest ever, surpassing even post-World War II numbers, when the world was struggling to come to terms with the most devastating event in its history. This is also the reality along the study's core focus, the Balkan route of irregular migration, which starts in Asia and Africa and continues through Turkey, Greece, North Macedonia or Bulgaria, and Serbia before ending in Romania, where thousands of migrants find themselves in an irregular situation concerning their entry to the European Union, with important consequences for related criminality. Data from the past six years were collected through semi-structured interviews with experts in the field of migration and desk research within organisations involved in border security; the resulting insights were constantly checked against the existing literature and subsequently subjected to mixed methods of analysis, including Vector Auto-Regression (VAR) estimation. Thereafter, the analysis followed the processes and outcomes of Grounded Theory, and a new substantive theory emerged, explaining how the phenomena of irregular migration and cross-border criminality provide the decisive impetus for implementing securitization in border management using the proposed pattern. The findings capture an area that has not yet benefitted from a comprehensive approach in the scientific community, covering the seasonality, stationarity, dynamics, and predictions of irregular migration, as well as its pull and push factors, and also highlight how the recent pandemic affected border security. The research uses an inductive, revelatory theoretical approach that offers a new theory to explain the phenomenon, providing a practical contribution for the scientific community, research institutes, and academia, as well as for organizational practitioners in the field, among them the UN, IOM, UNHCR, Frontex, Interpol, Europol, and national agencies specialized in border security. The scientific outcomes of this study were validated on June 30, 2021, when the author defended his dissertation for the European Joint Master's in Strategic Border Management, a prestigious two-year programme supported by the European Commission, the Frontex Agency, and a consortium of six European universities; the work is currently one of the research objectives of the author's ongoing PhD research at the West University of Timisoara.Keywords: migration, border, security, memory effect
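As a sketch of the Vector Auto-Regression (VAR) estimation mentioned in the abstract, the snippet below fits a two-variable VAR on simulated monthly series using statsmodels. The series names and values are hypothetical placeholders for the study's migration and criminality data, which are not public.

```python
# Minimal VAR sketch: lag order selected by AIC, then a 12-month forecast.
# The monthly "crossings" and "offences" series are simulated placeholders
# (with a seasonal component, echoing the seasonality noted in the study).
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
idx = pd.date_range("2015-01", periods=72, freq="MS")  # six years, monthly
data = pd.DataFrame({
    "crossings": 500 + 50 * np.sin(np.arange(72) * 2 * np.pi / 12)
                 + rng.normal(0, 20, 72),
    "offences": 40 + rng.normal(0, 5, 72),
}, index=idx)

model = VAR(data)
results = model.fit(maxlags=12, ic="aic")  # lag order chosen by AIC
print(results.summary())

# Twelve-month-ahead forecast from the fitted system.
forecast = results.forecast(data.values[-results.k_ar:], steps=12)
print(forecast)
```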
Procedia PDF Downloads 9229 Hydrogen Purity: Developing Low-Level Sulphur Speciation Measurement Capability
Authors: Sam Bartlett, Thomas Bacquart, Arul Murugan, Abigail Morris
Abstract:
Fuel cell electric vehicles provide the potential to decarbonise road transport, create new economic opportunities, diversify national energy supply, and significantly reduce the environmental impacts of road transport. A potential issue, however, is that the catalyst used at the fuel cell cathode is susceptible to degradation by impurities, especially sulphur-containing compounds. A recent European Directive (2014/94/EU) stipulates that, from November 2017, all hydrogen provided to fuel cell vehicles in Europe must comply with the hydrogen purity specifications listed in ISO 14687-2; these include reactive and toxic chemicals such as ammonia and total sulphur-containing compounds. This requirement poses great analytical challenges due to the instability of some of these compounds in calibration gas standards at relatively low amount fractions and the difficulty of measuring groups of compounds rather than individual compounds. Without the necessary reference materials and analytical infrastructure, hydrogen refuelling stations will not be able to demonstrate compliance with the ISO 14687 specifications. The hydrogen purity laboratory at NPL provides world-leading, accredited purity measurements that allow hydrogen refuelling stations to evidence compliance with ISO 14687. Utilising state-of-the-art methods developed by NPL's hydrogen purity laboratory, including a novel method for measuring total sulphur compounds at 4 nmol/mol and a hydrogen impurity enrichment device, we provide the capabilities necessary to achieve these goals. An overview of these capabilities will be given in this paper. As part of the EMPIR hydrogen co-normative project ‘Metrology for sustainable hydrogen energy applications’, NPL is developing a validated analytical methodology for the measurement of speciated sulphur-containing compounds in hydrogen at low amount fractions (pmol/mol to nmol/mol) to allow identification and measurement of individual sulphur-containing impurities in real samples of hydrogen (as opposed to a ‘total sulphur’ measurement). This is achieved by producing a suite of stable, gravimetrically prepared primary reference gas standards containing low amount fractions of sulphur-containing compounds (hydrogen sulphide, carbonyl sulphide, carbon disulphide, 2-methyl-2-propanethiol and tetrahydrothiophene have been selected for this study), used in conjunction with novel dynamic dilution facilities to enable generation of pmol/mol to nmol/mol level gas mixtures (a dynamic method is required, as compounds at these levels would be unstable in gas cylinder mixtures). Method development and optimisation are performed using gas chromatographic techniques assisted by cryo-trapping technologies and coupled with sulphur chemiluminescence detection to allow improved qualitative and quantitative analyses of sulphur-containing impurities in hydrogen. The paper will review state-of-the-art gas standard preparation techniques, including the use and testing of dynamic dilution technologies for reactive chemical components in hydrogen. Method development will also be presented, highlighting the advances in the measurement of speciated sulphur compounds in hydrogen at low amount fractions.Keywords: gas chromatography, hydrogen purity, ISO 14687, sulphur chemiluminescence detector
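The arithmetic behind dynamic dilution is a simple flow-weighted mixing rule. The sketch below illustrates it with hypothetical flow rates and a hypothetical parent-standard amount fraction; it is not NPL's actual dilution facility or operating conditions.

```python
# Illustrative calculation behind dynamic dilution: blending a parent
# standard of known amount fraction with pure hydrogen to reach nmol/mol
# or pmol/mol levels. All values below are assumed, for illustration only.

def diluted_fraction(x_parent, q_parent, q_diluent):
    """Amount fraction after mixing parent-gas flow q_parent with
    diluent flow q_diluent (flows in the same units, e.g. mL/min)."""
    return x_parent * q_parent / (q_parent + q_diluent)

# e.g. a 100 nmol/mol H2S parent standard diluted 1:999 with hydrogen:
x = diluted_fraction(x_parent=100e-9, q_parent=1.0, q_diluent=999.0)
print(f"{x * 1e12:.1f} pmol/mol")  # -> 100.0 pmol/mol
```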
Procedia PDF Downloads 22528 Low-Cost, Portable Optical Sensor with Regression Algorithm Models for Accurate Monitoring of Nitrites in Environments
Authors: David X. Dong, Qingming Zhang, Meng Lu
Abstract:
Nitrites enter waterways as runoff from croplands and are discharged from many industrial sites. Excessive nitrite inputs to water bodies lead to eutrophication. On-site rapid detection of nitrite is of increasing interest for managing fertilizer application and monitoring water source quality. Existing methods for detecting nitrites use spectrophotometry, ion chromatography, electrochemical sensors, ion-selective electrodes, chemiluminescence, and colorimetric methods. However, these methods either suffer from high cost or provide low measurement accuracy due to their poor selectivity to nitrites. It is therefore desirable to develop an accurate and economical method to monitor nitrites in the environment. We report a low-cost optical sensor, used in conjunction with a machine learning (ML) approach, to enable high-accuracy detection of nitrites in water sources. The sensor works on the principle of measuring the molecular absorption of nitrites at three narrowband wavelengths (295 nm, 310 nm, and 357 nm) in the ultraviolet (UV) region. These wavelengths are chosen because they have relatively high sensitivity to nitrites, and low-cost light-emitting diodes (LEDs) and photodetectors are available at these wavelengths. A regression model is built, trained, and utilized to minimize the cross-sensitivities of these wavelengths to interfering species, thus achieving precise and reliable measurements in the presence of various interference ions. The measured absorbance data are input to the trained model, which provides a nitrite concentration prediction for the sample. The sensor is built with i) a miniature quartz cuvette as the test cell that contains the liquid sample under test, ii) three low-cost UV LEDs placed on one side of the cell as light sources, each providing narrowband light, and iii) a photodetector with a built-in amplifier and an analog-to-digital converter placed on the other side of the test cell to measure the power of the transmitted light. This simple optical design allows the absorbance of the sample to be measured at the three wavelengths. To train the regression model, absorbances of nitrite ions and their combinations with various interference ions are first obtained at the three UV wavelengths using a conventional spectrophotometer. The spectrophotometric data are then input to different regression models to train and evaluate high-accuracy nitrite concentration prediction. Our experimental results show that the proposed approach enables instantaneous nitrite detection within several seconds. The sensor hardware costs about one hundred dollars, much cheaper than a commercial spectrophotometer. The ML algorithm helps to reduce the average relative error to below 3.5% over a concentration range from 0.1 ppm to 100 ppm of nitrites. The sensor has been validated at three sites in Ames, Iowa, USA. This work demonstrates an economical and effective approach to the rapid, reagent-free determination of nitrites with high accuracy. The integration of the low-cost optical sensor and ML data processing can find a wide range of applications in environmental monitoring and management.Keywords: optical sensor, regression model, nitrites, water quality
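The regression step can be sketched as an ordinary least-squares fit from the three absorbance channels to concentration. In the example below, the calibration data are synthetic, generated under an assumed linear (Beer-Lambert-like) response; the sensitivities and noise level are illustrative, not the paper's measured values.

```python
# Sketch of the regression model: mapping absorbances at 295, 310, and
# 357 nm to nitrite concentration. Calibration data are synthetic
# stand-ins for the paper's spectrophotometric training set.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
conc = np.linspace(0.1, 100, 40)                     # ppm nitrite
coeffs = np.array([0.012, 0.018, 0.009])             # assumed channel sensitivities
A = conc[:, None] * coeffs + rng.normal(0, 0.002, (40, 3))  # absorbances + noise

model = LinearRegression().fit(A, conc)

# Predict concentration for a new three-wavelength absorbance reading.
sample = np.array([[0.60, 0.90, 0.45]])
print(f"{model.predict(sample)[0]:.2f} ppm")
```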
Procedia PDF Downloads 7227 Transforming Challenges of Urban and Peri-Urban Agriculture into Opportunities for Urban Food Security in India
Authors: G. Kiran Kumar, K. Padmaja
Abstract:
The rise of urban and peri-urban agriculture (UPA) is an important urban phenomenon that needs to be well understood before we pronounce a verdict on whether it is beneficial or not. Urban inhabitants face the challenge of obtaining a supply of safe and nutritious food. The definition of urban and peri-urban varies from city to city, depending on local policies framed with a view to bringing regulated urban habitation within governance. The expansion of cities and the blurring of boundaries between urban and rural areas make it difficult to define peri-urban agriculture, a problem further exacerbated by the fact that a definition adopted in one region may not fit another. Meanwhile, the urban proportion of the population continues to rise relative to the rural. UPA does not promise that the food requirements of cities can be met entirely from this practice, since the enormous amount of rooftop space and vacant plots that would be needed for raising crops is simply not available. However, UPA reduces the impact of price volatility, particularly for vegetables with a relatively longer shelf life. UPA improves access to fresh, nutritious and safe food for the urban poor, and provides employment to food handlers and traders in the supply chain. UPA can pose environmental and health risks through inappropriate agricultural practices and increased competition for land, water and energy; it can also alter the ecological landscape and make it vulnerable to increased pollution. The present work is based on case studies of peri-urban agriculture in Hyderabad, India, and relies on secondary data. This paper analyses the need for more intensive production technologies that do not harm the environment. An optimal solution in terms of urban-rural linkages has to be devised. There is a need to develop a spatial vision and integrate UPA into urban planning in a harmonious manner. Zoning of peri-urban areas for agriculture, milk and poultry production is an essential step to preserve the traditional nurturing character of these areas. Urban local bodies, in conjunction with Departments of Agriculture and Horticulture, can strengthen existing UPA models; without such support, UPA can develop into a haphazard phenomenon and add to the growing list of urban challenges. Land diverted for peri-urban agriculture may render the concept of urban and peri-urban forestry ineffective. This paper suggests that UPA may be practised for high-value vegetables that can be cultivated under protected conditions and are more resilient to climate change. UPA can provide models for climate-resilient agriculture in urban areas that can be replicated in rural areas. Production of organic farm produce is another option for promoting UPA, owing to proximity to informed consumers and access to nearby markets. Wastelands in peri-urban areas can be allotted to unemployed rural youth with the support of Urban Local Bodies (ULBs) and used for UPA. This can serve the purposes of putting wastelands into food production, enhancing employment opportunities, and improving urban consumers' access to fresh produce.Keywords: environment, food security, urban and peri-urban agriculture, zoning
Procedia PDF Downloads 31826 Developing a GIS-Based Tool for the Management of Fats, Oils, and Grease (FOG): A Case Study of Thames Water Wastewater Catchment
Authors: Thomas D. Collin, Rachel Cunningham, Bruce Jefferson, Raffaella Villa
Abstract:
Fats, oils and grease (FOG) are by-products of food preparation and cooking processes. FOG enters wastewater systems through a variety of sources such as households, food service establishments, and industrial food facilities. Over time, if no source control is in place, FOG builds up on pipe walls, leading to blockages and potentially to sewer overflows, which are a major risk to the environment and human health. UK water utilities spend millions of pounds annually trying to control FOG. Although UK legislation specifies that discharge of such material is against the law, it is often complicated for water companies to identify and prosecute offenders, leading to uncertainty about the approach to take to FOG management. Research is needed to realise the full potential of current practices. The aim of this research was to undertake a comprehensive study to document the extent of FOG problems in sewer lines and reinforce existing knowledge. Data were collected to develop a model estimating the quantities of FOG available for recovery within Thames Water wastewater catchments. Geographical Information System (GIS) software was used in conjunction with the model to integrate the data with a geographical component. FOG was responsible for at least one-third of sewer blockages in the Thames Water wastewater area. A waste-based approach was developed through an extensive review to estimate the potential for FOG collection and recovery. Three main sources were identified: residential, commercial and industrial. Commercial properties were identified as among the major FOG producers. The total potential FOG generated was estimated for the 354 wastewater catchments. Additionally, raw and settled sewage were sampled and analysed for FOG (as hexane extractable material) monthly at 20 sewage treatment works (STW) for three years. A good correlation was found between the sampled FOG and population equivalent (PE). On average, a difference of 43.03% was found between the estimated FOG (waste-based approach) and the sampled FOG (raw sewage sampling). It was suggested that the approach could overestimate the FOG available, that sampling captures only a fraction of the FOG arriving at the STW, and/or that the difference reflects FOG accumulating in sewer lines. Furthermore, it was estimated that, on average, FOG could contribute up to 12.99% of the primary sludge removed. The model was further used to investigate the relationship between estimated FOG and the number of blockages: the higher the FOG potential, the higher the number of FOG-related blockages. As reported in the literature, FOG was one of the main causes of sewer blockages. The GIS-based tool was used to identify critical areas (i.e., areas with high FOG potential and a high number of FOG blockages); by identifying these, the model further explored the potential for source control in terms of ‘sewer relief’ and waste recovery, helping to target where the benefits of implementing management strategies could be highest. However, FOG is still likely to persist throughout the networks, and further research is needed to assess downstream impacts (i.e., at the STW).Keywords: fat, FOG, GIS, grease, oil, sewer blockages, sewer networks
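The waste-based estimate amounts to summing per-source FOG contributions for each catchment. The sketch below illustrates the idea with entirely hypothetical per-unit factors; the study's actual coefficients for the 354 catchments are not given in the abstract.

```python
# Sketch of a waste-based FOG estimate: per-source contributions summed
# for one wastewater catchment. The per-unit factors are assumed values
# for illustration only, not the study's calibrated coefficients.

# kg FOG per year per unit (hypothetical)
FACTORS = {"resident": 2.0, "food_outlet": 500.0, "industrial_site": 5000.0}

def catchment_fog(residents, food_outlets, industrial_sites):
    """Annual FOG potential (kg/yr) for one catchment."""
    return (residents * FACTORS["resident"]
            + food_outlets * FACTORS["food_outlet"]
            + industrial_sites * FACTORS["industrial_site"])

# Example catchment: 80,000 residents, 120 food outlets, 3 food factories.
print(f"{catchment_fog(80_000, 120, 3):.0f} kg/yr")
```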
Procedia PDF Downloads 20925 Non-Invasive Evaluation of Patients After Percutaneous Coronary Revascularization. The Role of Cardiac Imaging
Authors: Abdou Elhendy
Abstract:
Numerous studies have shown the efficacy of percutaneous coronary intervention (PCI) and coronary stenting in improving left ventricular function and relieving exertional angina. Furthermore, PCI remains the main line of therapy in acute myocardial infarction. Improved procedural techniques and new devices have resulted in an increased number of PCIs in patients with difficult and extensive lesions, multivessel disease, and total occlusions. Immediate and late outcomes may be compromised by acute thrombosis or the development of fibro-intimal hyperplasia. In addition, progression of coronary artery disease proximal or distal to the stent, as well as in non-stented arteries, is not uncommon. As a result, complications can occur, such as acute myocardial infarction, worsened heart failure, or recurrence of angina. In-stent restenosis can occur without symptoms or with atypical complaints, rendering the clinical diagnosis difficult. Routine invasive angiography is not appropriate as a follow-up tool due to the associated risk and cost and the limited functional assessment it provides. Exercise and pharmacologic stress testing are increasingly used to evaluate myocardial function, perfusion, and the adequacy of revascularization. Information obtained by these techniques provides important clues regarding the presence and severity of compromised myocardial blood flow. Stress echocardiography can be performed in conjunction with exercise or dobutamine infusion. Its diagnostic accuracy has been moderate, but the results provide excellent prognostic stratification. Adding myocardial contrast agents can improve image quality and allow assessment of both function and perfusion. Stress radionuclide myocardial perfusion imaging is an alternative for evaluating these patients. The extent and severity of wall motion and perfusion abnormalities observed during exercise or pharmacologic stress are predictors of survival and of the risk of cardiac events. According to current guidelines, stress echocardiography and radionuclide imaging are considered appropriately indicated in patients after PCI who have cardiac symptoms and in those who underwent incomplete revascularization. Stress testing is not recommended in asymptomatic patients, particularly early after revascularization. Coronary CT angiography is increasingly used and provides high sensitivity for the diagnosis of coronary artery stenosis. Average sensitivity and specificity for the diagnosis of in-stent restenosis in pooled data are 79% and 81%, respectively. Limitations include blooming artifacts and low feasibility in patients with small stents or thick struts. Anatomical and functional cardiac imaging modalities are cornerstones of the assessment of patients after PCI and provide salient diagnostic and prognostic information. Current imaging techniques can serve as gatekeepers for coronary angiography, limiting the risk of invasive procedures to those who are likely to benefit from subsequent revascularization. Determining which modality to apply requires careful identification of the merits and limitations of each technique as well as the unique characteristics of each individual patient.Keywords: coronary artery disease, stress testing, cardiac imaging, restenosis
Procedia PDF Downloads 16824 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models
Authors: V. Mantey, N. Findlay, I. Maddox
Abstract:
The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves become more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of data available is impossible to examine manually. The question therefore becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery, with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources to solve this problem, and models have been developed to detect buildings in optical satellite imagery. However, by and large, most models focus on affluent regions where buildings are generally larger and constructed further apart. This work is focused on the more difficult problem of detection in densely populated regions. The primary challenge in detecting small buildings in densely populated regions is both the spatial and spectral resolution of the optical sensor. Densely packed buildings with similar construction materials are difficult to separate due to similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that models trained until they overfit the input sample can perform better in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires less input data. The model developed for this study has also been fine-tuned using existing, open-source building vector datasets. This is particularly valuable in the context of disaster relief, where information is required in a very short time span. Leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest. The training period itself is also shorter for smaller datasets. Requiring less data means that only a few quality areas are necessary, so any weaknesses or underpopulated regions in the data can be skipped over in favor of areas with higher-quality vectors. In this study, a landcover classification model was developed in conjunction with the building detection tool to provide a secondary source to quality-check the detected buildings; this greatly reduced the false positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and have been employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are being used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data is scarce, overfitted models can be judiciously applied to solve time-sensitive problems.Keywords: building detection, disaster relief, mask-RCNN, satellite mapping
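A minimal sketch of the deliberate-overfitting strategy is given below, assuming a torchvision Mask R-CNN (matching the abstract's mask-RCNN keyword) re-headed for a single "building" class and trained on a tiny sample with no early stopping or validation split. The image, target, and hyperparameters are synthetic placeholders, not the authors' production setup.

```python
# Fine-tuning a pre-trained Mask R-CNN on a tiny regional sample and
# deliberately training past the usual stopping point. One synthetic
# image/target stands in for chips derived from open building vectors.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

model = maskrcnn_resnet50_fpn(weights="DEFAULT")

# Re-head the detector for two classes: background and building.
in_feats = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, num_classes=2)
in_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_mask, 256, num_classes=2)

# One synthetic chip with a single "building" footprint.
image = torch.rand(3, 256, 256)
masks = torch.zeros(1, 256, 256, dtype=torch.uint8)
masks[0, 40:110, 30:90] = 1
target = {"boxes": torch.tensor([[30.0, 40.0, 90.0, 110.0]]),
          "labels": torch.tensor([1], dtype=torch.int64),
          "masks": masks}

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()
for epoch in range(50):  # no early stopping: keep fitting the small sample
    loss = sum(model([image], [target]).values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```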
Procedia PDF Downloads 169