Search results for: artificial air storage reservoir
50 ESRA: An End-to-End System for Re-identification and Anonymization of Swiss Court Decisions
Authors: Joel Niklaus, Matthias Stürmer
Abstract:
The publication of judicial proceedings is a cornerstone of many democracies. It enables the court system to be held accountable by ensuring that justice is administered in accordance with the law. Equally important is privacy, as a fundamental human right (Article 12 of the Universal Declaration of Human Rights). Therefore, it is important that the parties (especially minors, victims, or witnesses) involved in these court decisions be anonymized securely. Today, the anonymization of court decisions in Switzerland is performed either manually or semi-automatically using primitive software. While much research has been conducted on anonymization for tabular data, the literature on anonymization for unstructured text documents is thin and virtually non-existent for court decisions. In 2019, it was shown that manual anonymization is not secure enough: in 21 of 25 attempted cases, the pharmaceutical companies, pharmaceuticals, and legal parties involved in Swiss federal court decisions could be manually re-identified. This was achieved by linking the decisions with external databases using regular expressions. An automated re-identification system serves as an automated test for the safety of existing anonymizations and thus promotes the right to privacy. Manual anonymization is very expensive (recurring annual costs of over CHF 20M in Switzerland alone, according to an estimate). Consequently, many Swiss courts publish only a fraction of their decisions. An automated anonymization system reduces these costs substantially, leading to more capacity for publishing court decisions far more comprehensively. For the re-identification system, topic modeling with latent Dirichlet allocation is used to cluster over 500K Swiss court decisions into meaningful related categories. A comprehensive knowledge base with publicly available data (such as social media, newspapers, government documents, geographical information systems, business registers, online address books, obituary portals, web archives, etc.) is constructed to serve as an information hub for re-identifications. For the actual re-identification, a general-purpose language model is fine-tuned on the respective part of the knowledge base for each category of court decisions separately. The input to the model is the court decision to be re-identified, and the output is a probability distribution over named entities constituting possible re-identifications. For the anonymization system, named entity recognition (NER) is used to recognize the tokens that need to be anonymized. Since the focus lies on Swiss court decisions in German, a corpus of Swiss legal texts will be built for training the NER model. The recognized named entities are replaced by the category determined by the NER model and an identifier to preserve context. This work is part of an ongoing research project conducted by an interdisciplinary research consortium. Both a legal analysis and the implementation of the proposed system design, ESRA, will be performed within the next three years. This study introduces the system design of ESRA, an end-to-end system for re-identification and anonymization of Swiss court decisions. Firstly, the re-identification system tests the safety of existing anonymizations and thus promotes privacy.
Secondly, the anonymization system substantially reduces the costs of manual anonymization of court decisions and thus introduces a more comprehensive publication practice.
Keywords: artificial intelligence, courts, legal tech, named entity recognition, natural language processing, privacy, topic modeling
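To illustrate the anonymization step described in this abstract (replacing recognized entities with their NER category plus an identifier), here is a minimal sketch. It is not the ESRA implementation, which trains a custom NER model on a dedicated Swiss legal corpus; the spaCy pipeline name, the placeholder format, and the example sentence are illustrative assumptions.

```python
# Minimal sketch of NER-based pseudonymization using an off-the-shelf
# German spaCy pipeline; the ESRA system instead uses a custom-trained
# NER model on Swiss legal text. Model choice here is hypothetical.
import spacy

nlp = spacy.load("de_core_news_sm")  # hypothetical, general-purpose model

def anonymize(text: str) -> str:
    """Replace each recognized entity with its category and an index,
    e.g. 'Hans Muster' -> '[PER_1]', reusing the same identifier for
    repeated mentions so that context is preserved."""
    doc = nlp(text)
    mapping: dict[str, str] = {}
    parts, last = [], 0
    for ent in doc.ents:
        if ent.text not in mapping:
            mapping[ent.text] = f"[{ent.label_}_{len(mapping) + 1}]"
        parts.append(text[last:ent.start_char])
        parts.append(mapping[ent.text])
        last = ent.end_char
    parts.append(text[last:])
    return "".join(parts)

print(anonymize("Hans Muster klagte gegen die Beispiel AG in Bern."))
```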
Procedia PDF Downloads 148
49 Digital Mapping of First-Order Drainages and Springs of the Guajiru River, Northeast of Brazil, Based on Satellite and Drone Images
Authors: Sebastião Milton Pinheiro da Silva, Michele Barbosa da Rocha, Ana Lúcia Fernandes Campos, Miquéias Rildo de Souza Silva
Abstract:
Water is an essential natural resource for life on Earth. Rivers, lakes, lagoons, and dams are the main sources of water storage for human consumption. The costs of extracting and using these water sources are lower than those of exploiting groundwater in transition zones to semi-arid terrains. However, the volume of surface water has decreased over time, with the depletion of first-order drainages and the disappearance of springs, phenomena easily observed in the field. Climate change worsens water scarcity, compromising supply and hydric security for rural populations. To minimize the expected impacts, producing and storing water through watershed management planning requires detailed cartographic information on the relief and topography, and updated data on the stage and intensity of catchment basin environmental degradation problems. The available cartography of the Brazilian northeastern territory dates to the 1970s: printed topographic maps at a scale of 1:100,000, which do not meet the requirements of this project. Exceptionally, there are topographic maps at scales of 1:50,000 and 1:25,000 for some coastal regions of northeastern Brazil; still, due to scale limitations and outdatedness, they are of little utility for mapping low-order watershed drainages and springs. Remote sensing data and geographic information systems can contribute to guiding the process of mapping and environmental recovery by integrating detailed relief and topographic data, besides social and other environmental information, in the Guajiru River Basin, located on the east coast of Rio Grande do Norte, in the Northeast region of Brazil. This study aimed to recognize and map catchment basins, springs, and low-order drainage features, along with estimating morphometric parameters. ALOS PALSAR and Copernicus DEM digital elevation models were evaluated and provided regional drainage features and watershed limits, extracted with the Terraview/Terrahidro 5.0 software. CBERS 4A satellite images with 2 m spatial resolution, processed with the ESA SNAP Toolbox, allowed the generation of a land use/land cover map of the Guajiru River. A MAPIR Survey3 multispectral camera onboard a DJI Phantom 4, a Mavic 2 Pro PPK drone, and an X91 GNSS receiver to collect the precise positions of selected points were employed for detailed mapping. Satellite images enabled a first, regional-scale yet up-to-date assessment of the watershed areas, and drone images were essential for mapping details of the catchment basins. The drone multispectral image mosaics, the digital elevation model, the contour lines, and geomorphometric parameters were generated using the OpenDroneMap/ODM and QGIS software. The drone images facilitated the location, understanding, and mapping of watersheds, recharge areas, and first-order ephemeral watercourses at an adequate scale and will be used in the following phases of the project: watershed management planning, recovery, and environmental protection of the Guajiru River's springs. Environmental degradation is being analyzed from the perspective of the availability and quality of surface water supply.
Keywords: imaging, relief, UAV, water
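A common step when turning multispectral mosaics like these into vegetation maps is computing a spectral index such as NDVI. The sketch below shows the idea; the file names and band order are assumptions, since the abstract does not specify which index or band layout was used.

```python
# Sketch: NDVI from a multispectral orthomosaic; band order and file
# names are hypothetical, not taken from the study.
import numpy as np
import rasterio

with rasterio.open("guajiru_mosaic.tif") as src:  # hypothetical file
    red = src.read(3).astype("float64")  # assuming band 3 = red
    nir = src.read(4).astype("float64")  # assuming band 4 = near-infrared
    profile = src.profile

# NDVI = (NIR - red) / (NIR + red), guarding against division by zero.
ndvi = np.where(nir + red > 0, (nir - red) / (nir + red), 0.0)

profile.update(count=1, dtype="float64")
with rasterio.open("guajiru_ndvi.tif", "w", **profile) as dst:
    dst.write(ndvi, 1)
```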
Procedia PDF Downloads 30
48 Effect of Selenium Source on Meat Quality of Bonsmara Bull Calves
Authors: J. van Soest, B. Bruneel, J. Smit, N. Williams, P. Swiegers
Abstract:
Selenium (Se) is an essential trace mineral involved in reducing oxidative stress, enhancing immune status, improving reproduction, and regulating growth. During the finishing period, selenium supplementation can be applied to improve meat quality. Dietary selenium can be provided in inorganic or organic forms. Specifically, L-selenomethionine (organic selenium) allows for selenium storage in animal protein, which supports the animal during periods of high oxidative stress. The objective of this study was to investigate the effects of synthetically produced, single amino acid L-selenomethionine (Excential Selenium 4000, Orffa Additives BV) on production parameters, health status, and meat quality of Bonsmara bull calves. 24 calves, 7 months of age, completed a 60-day initial growing period at a commercial feedlot, after which they were transported to the research station Rumen-8 (Bethlehem, South Africa). After a ten-day adaptation period, the bulls were allocated to a control (n=12) or treatment (n=12) group. Each group was divided over 3 pens based on weight. Both groups received a Total Mixed Ration supplemented with 5.25 mg Se/head per day. The control group was supplemented with sodium selenite as Se source, whilst the treatment group was supplemented with L-selenomethionine (Excential Selenium 4000, Orffa Additives BV). Animals were limited to 10 kg feed intake per head per day to ensure similar Se intake. The treatment period lasted 1.5 months. A beta-adrenergic agonist was included in the feed for the last 30 days. During the treatment period, average daily gain, average daily feed intake, and feed conversion ratio were recorded. Blood parameters were measured at day 1, day 25, and before slaughter (day 47). After slaughter, carcass weight, dressing percentage, grading, and meat quality (pH, tenderness, colour, odour, purge, proximate analyses, acid detergent fibre, and neutral detergent fibre) were determined. No differences between groups were found in performance. A higher number of animals with cortisol levels below the detection limit (27.6 nmol/l) was recorded for the treatment group. Other blood parameters showed no differences. No differences were found regarding carcass weight and dressing percentage. Important parameters of meat quality were significantly improved in the treatment group: instrumental tenderness at 14 days of ageing was 2.8 and 3.4 for treatment and control, respectively (P=0.010), and a 0.5% decrease in purge (of fresh samples) was shown, 1.5% and 2.0% for the treatment group and control, respectively (P=0.029). In addition, pH was numerically reduced in the treatment group. In summary, supplementation with L-selenomethionine as selenium source improved meat quality compared to sodium selenite. Lower instrumental tenderness (Warner Bratzler Shear Force, WBSF) was recorded for the treatment group, indicating less tough meat and higher consumer satisfaction. Regarding purge, the control was just below 2.0%, an important threshold for consumer acceptance. The treatment group scored 0.5% lower for purge than the control, indicating higher consumer satisfaction. The lower pH in the treatment group could be an indication of higher glycogen reserves in muscle, which could contribute to a reduced risk of Dark Firm Dry carcasses. More animals showed cortisol levels below the detection limit in the treatment group, indicating lower levels of stress when animals receive L-selenomethionine.
Keywords: calves, meat quality, nutrition, selenium
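The group comparisons reported above (e.g., P=0.010 for 14-day tenderness) are of the kind produced by a two-sample t-test. The sketch below shows such a comparison; the WBSF values in it are invented for illustration and are not the study's raw data.

```python
# Sketch: two-sample comparison of Warner Bratzler Shear Force between
# treatment and control; the data points are hypothetical placeholders.
from scipy import stats

wbsf_treatment = [2.6, 2.9, 2.7, 3.0, 2.8, 2.8]  # hypothetical, mean ~2.8
wbsf_control   = [3.3, 3.5, 3.4, 3.6, 3.2, 3.4]  # hypothetical, mean ~3.4

# Welch's t-test (no equal-variance assumption).
t, p = stats.ttest_ind(wbsf_treatment, wbsf_control, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.05 indicates a group difference
```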
Procedia PDF Downloads 181
47 Effectiveness of Gamified Simulators in the Health Sector
Authors: Nuno Biga
Abstract:
The integration of serious games with gamification in management education and training has gained significant importance in recent years as innovative strategies are sought to improve target audience engagement and learning outcomes. This research builds on the author's previous work in this field and presents a case study that evaluates the ex-post impact of a sample of applications of the BIGAMES management simulator in the training of top managers from various hospital institutions. The methodology includes evaluating the reaction of participants after each edition of BIGAMES Accident & Emergency (A&E) carried out over the last 3 years, as well as monitoring the career paths of a significant sample of participants and their feedback more than a year after their experience with this simulator. Control groups will be set up according to the type of role their members held when they took part in the BIGAMES A&E simulator: Administrators, Clinical Directors, and Nursing Directors. Former participants are invited to answer a questionnaire structured for this purpose, where they are asked, among other questions, about the importance and impact that the BIGAMES A&E simulator has had on their professional activity. The research methodology also includes an exhaustive literature review, focusing on empirical studies in the field of education and training in management and business that investigate the effectiveness of gamification and serious games in improving learning, team collaboration, critical thinking, problem-solving skills, and overall performance, with a focus on training contexts in the health sector. The results show that gamification and serious games that simulate real scenarios, such as Business Interactive Games - BIGAMES©, can significantly increase the motivation and commitment of participants, stimulating the development of transversal skills, the mobilization of group synergies, and the acquisition and retention of knowledge through interactive user-centred scenarios. Individuals who participate in game-based learning sessions show a higher level of commitment to learning because they find these teaching methods more enjoyable and interactive. This research study aims to demonstrate that, as executive education and training programs evolve to meet the current needs of managers, gamification and serious games stand out as effective means of bridging the gap between traditional teaching methods and modern educational and training requirements. To this end, this research evaluates the medium/long-term effects of gamified learning on the professional performance of participants in the BIGAMES simulator applied to healthcare. Based on the conclusions of the evaluation of the effectiveness of training using gamification, and taking into account the results of the opinion poll of former A&E participants, this research study proposes an integrated approach for the transversal application of the A&E Serious Game in various educational contexts, covering top management (traditionally the target audience of BIGAMES A&E), middle and operational management in healthcare institutions (functional area heads and professionals with career development potential), as well as higher education in medicine and nursing courses.
The integrated solution called “BIGAMES A&E plus”, developed as part of this research, includes the digitalization of key processes and the incorporation of AI.
Keywords: artificial intelligence (AI), executive training, gamification, higher education, management simulators, serious games (SG), training effectiveness
Procedia PDF Downloads 13
46 Application of the Pattern Method to Form the Stable Neural Structures in the Learning Process as a Way of Solving Modern Problems in Education
Authors: Liudmyla Vesper
Abstract:
The problems of modern education are large-scale and diverse. The aspirations of parents, teachers, and experts converge: everyone is interested in raising a generation of well-rounded, well-educated persons. Both family and society expect the future generation to be self-sufficient, desirable in the labor market, and capable of lifelong learning. Today's children have a powerful potential that is difficult to realize under traditional school approaches. Focusing on STEM education in practice often ends with the simple use of computers and gadgets during class. "Science", "technology", "engineering", and "mathematics" are difficult to combine within school and university curricula, which have not changed much during the last 10 years. Solving the problems of modern education largely depends on teacher-innovators and teacher-practitioners who develop and implement effective educational methods and programs, and who propose innovative pedagogical practices that allow students to master large-scale knowledge and apply it in practice. Effective education involves the creation of stable neural structures during the learning process, which allow knowledge to be preserved and expanded throughout life. The author proposed a method of integrated lessons (cases) based on maths patterns for forming a holistic perception of the world. This method and program are scientifically substantiated and have more than 15 years of practical application experience in school and university classrooms. The first results of the practical application of the author's methodology and curriculum were announced at the International Conference "Teaching and Learning Strategies to Promote Elementary School Success", April 22-23, 2006, Yerevan, Armenia, within the IREX-administered 2004-2006 Multiple Component Education Project. This program is based on the concept of interdisciplinary connections and its implementation in the process of continuous learning. This allows students to preserve and expand knowledge throughout life according to a single pattern. The pattern principle stores information on different subjects according to one scheme (pattern), using long-term memory. This is how neural structures are created. The author also suggests that a similar method could be successfully applied to the training of artificial neural networks; however, this assumption requires further research and verification. The educational method and program proposed by the author meet the modern requirements for education, which involve mastering various areas of knowledge starting from an early age. This approach makes it possible to engage the child's cognitive potential as much as possible and direct it toward the preservation and development of individual talents. According to the methodology, at the early stages of learning, students understand the connections between school subjects (the so-called "sciences" and "humanities") and, in real life, apply the knowledge gained in practice. This approach allows students to realize their natural creative abilities and talents, which makes it easier to navigate professional choices and find their place in life.
Keywords: science education, maths education, AI, neuroplasticity, innovative education problem, creativity development, modern education problem
Procedia PDF Downloads 61
45 Opportunities for Reducing Post-Harvest Losses of Cactus Pear (Opuntia Ficus-Indica) to Improve Small-Holder Farmers Income in Eastern Tigray, Northern Ethiopia: Value Chain Approach
Authors: Meron Zenaselase Rata, Euridice Leyequien Abarca
Abstract:
The production of major crops in Northern Ethiopia, especially the Tigray Region, is at subsistence level due to drought, erratic rainfall, and poor soil fertility. Since cactus pear is a drought-resistant plant, it is considered a lifesaver fruit and a strategy for poverty reduction in drought-affected areas of the region. Despite its contribution to household income and food security in the area, the cactus pear sub-sector is experiencing many constraints, with limited attention given to its post-harvest loss management. Therefore, this research was carried out to identify opportunities for reducing post-harvest losses and recommend possible strategies to reduce post-harvest losses, thereby improving production and smallholders' income. Both probability and non-probability sampling techniques were employed to collect the data. The Ganta Afeshum district was selected from Eastern Tigray, and two peasant associations (Buket and Golea) were purposively selected from the district for their potential in cactus pear production. Simple random sampling was employed to survey 30 households from each of the two peasant associations, and a semi-structured questionnaire was used as the data collection tool. Moreover, in this research 2 collectors, 2 wholesalers, 1 processor, 3 retailers, and 2 consumers were interviewed; two focus group discussions were held with 14 key farmers using a semi-structured checklist; and key informant interviews were conducted with governmental and non-governmental organizations to gather more information about cactus pear production, post-harvest losses, the strategies used to reduce post-harvest losses, and suggestions to improve post-harvest management. SPSS version 20 was used to enter and analyze the quantitative data, whereas MS Word was used to transcribe the qualitative data. The data were presented using frequency and descriptive tables and graphs. The data analysis was also done using a chain map, correlations, a stakeholder matrix, and gross margins. Mean comparisons between variables, such as ANOVA and t-tests, were used. The analysis shows that the present cactus pear value chain involves main actors and supporters; however, there is inadequate information flow and informal market linkage among actors in the cactus pear value chain. The farmers' gross margin is higher when they sell to the processor than when they sell to collectors. The most significant post-harvest loss in the cactus pear value chain occurs at the producer level, followed by wholesalers and retailers. The maximum and minimum volumes of post-harvest losses at the producer level are 4,212 and 240 kg per season, respectively. The post-harvest losses were caused by farmers' limited skills in farm management and harvesting, low market prices, limited market information, the absence of producer organizations, poor post-harvest handling, the absence of cold storage, the absence of collection centers, poor infrastructure, inadequate credit access, a traditional transportation system, the absence of quality control, illegal traders, inadequate research and extension services, and the use of inappropriate packaging material. Therefore, some of the recommendations were providing adequate practical training, forming producer organizations, and constructing collection centers.
Keywords: cactus pear, post-harvest losses, profit margin, value-chain
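Gross margins like those compared above can be computed per marketing channel as unit price minus unit variable cost times volume sold. The sketch below shows the calculation only; the prices, costs, and volumes are hypothetical placeholders, not the study's survey data.

```python
# Sketch: farmer gross margin per sales channel; all numbers below are
# hypothetical illustrations, not values from the study.
def gross_margin(price_per_kg: float, cost_per_kg: float, volume_kg: float) -> float:
    """Gross margin = (unit price - unit variable cost) * volume sold."""
    return (price_per_kg - cost_per_kg) * volume_kg

channels = {
    "collector": {"price": 2.0, "cost": 1.2, "volume": 1000},  # hypothetical
    "processor": {"price": 3.5, "cost": 1.5, "volume": 1000},  # hypothetical
}
for name, c in channels.items():
    print(name, gross_margin(c["price"], c["cost"], c["volume"]))
# With these placeholder numbers, the processor channel yields the
# higher margin, mirroring the qualitative finding reported above.
```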
Procedia PDF Downloads 130
44 Big Data Applications for Transportation Planning
Authors: Antonella Falanga, Armando Cartenì
Abstract:
"Big data" refers to extremely vast and complex sets of data, encompassing extraordinarily large and intricate datasets that require specific tools for meaningful analysis and processing. These datasets can stem from diverse origins like sensors, mobile devices, online transactions, social media platforms, and more. The utilization of big data is pivotal, offering the chance to leverage vast information for substantial advantages across diverse fields, thereby enhancing comprehension, decision-making, efficiency, and fostering innovation in various domains. Big data, distinguished by its remarkable attributes of enormous volume, high velocity, diverse variety, and significant value, represent a transformative force reshaping the industry worldwide. Their pervasive impact continues to unlock new possibilities, driving innovation and advancements in technology, decision-making processes, and societal progress in an increasingly data-centric world. The use of these technologies is becoming more widespread, facilitating and accelerating operations that were once much more complicated. In particular, big data impacts across multiple sectors such as business and commerce, healthcare and science, finance, education, geography, agriculture, media and entertainment and also mobility and logistics. Within the transportation sector, which is the focus of this study, big data applications encompass a wide variety, spanning across optimization in vehicle routing, real-time traffic management and monitoring, logistics efficiency, reduction of travel times and congestion, enhancement of the overall transportation systems, but also mitigation of pollutant emissions contributing to environmental sustainability. Meanwhile, in public administration and the development of smart cities, big data aids in improving public services, urban planning, and decision-making processes, leading to more efficient and sustainable urban environments. Access to vast data reservoirs enables deeper insights, revealing hidden patterns and facilitating more precise and timely decision-making. Additionally, advancements in cloud computing and artificial intelligence (AI) have further amplified the potential of big data, enabling more sophisticated and comprehensive analyses. Certainly, utilizing big data presents various advantages but also entails several challenges regarding data privacy and security, ensuring data quality, managing and storing large volumes of data effectively, integrating data from diverse sources, the need for specialized skills to interpret analysis results, ethical considerations in data use, and evaluating costs against benefits. Addressing these difficulties requires well-structured strategies and policies to balance the benefits of big data with privacy, security, and efficient data management concerns. Building upon these premises, the current research investigates the efficacy and influence of big data by conducting an overview of the primary and recent implementations of big data in transportation systems. Overall, this research allows us to conclude that big data better provide to enhance rational decision-making for mobility choices and is imperative for adeptly planning and allocating investments in transportation infrastructures and services.Keywords: big data, public transport, sustainable mobility, transport demand, transportation planning
Procedia PDF Downloads 60
43 Variations in Spatial Learning and Memory across Natural Populations of Zebrafish, Danio rerio
Authors: Tamal Roy, Anuradha Bhat
Abstract:
Cognitive abilities aid fishes in foraging, avoiding predators, and locating mates. Factors like predation pressure and habitat complexity govern learning and memory in fishes. This study aims to compare spatial learning and memory across four natural populations of zebrafish. Zebrafish, a small cyprinid, inhabits a diverse range of freshwater habitats, which makes it amenable to studies investigating the role of the native environment in spatial cognitive abilities. Four populations were collected across India from waterbodies with contrasting ecological conditions. Habitat complexity of the water-bodies was evaluated as a combination of channel substrate diversity and diversity of vegetation. Experiments were conducted on the populations under controlled laboratory conditions. A square-shaped spatial testing arena (maze) was constructed for testing the performance of adult zebrafish. The square tank consisted of an inner square-shaped layer whose edges were connected to the diagonal corners of the tank walls, thereby forming four separate chambers. Each of the four chambers had a main door in the centre. Each chamber had three sections separated by two windows. A removable coloured window-pane (red, yellow, green, or blue) identified each main door. A food reward associated with an artificial plant was always placed inside the left-hand section of the red-door chamber. The position of the food reward and plant within the red-door chamber was fixed. A test fish would have to explore the maze by taking turns and locate the food inside the left-hand section of the red-door chamber. Fishes were sorted from each population stock and kept individually in separate containers for identification. A test fish was released into the arena and allowed 20 minutes to explore in order to find the food reward. In this way, individual fishes were trained in the maze to locate the food reward for eight consecutive days. The position of the red door, with the plant and the reward, was shuffled every day. Following training, an intermission of four days was given, during which the fishes were not subjected to trials. Post-intermission, the fishes were re-tested on the 13th day, following the same protocol, for their ability to remember the learnt task. Exploratory tendencies and latency of individuals to explore on the 1st day of training, performance time across trials, and the number of mistakes made each day were recorded. Additionally, the mechanism used by individuals to solve the maze each day was analyzed across populations. Fishes could be expected to use an algorithm (a sequence of turns) or associative cues in locating the food reward. Individuals of the populations did not differ significantly in latencies and tendencies to explore. No relationship was found between exploration and learning across populations. High habitat-complexity populations had higher rates of learning and stronger memory, while low habitat-complexity populations had lower rates of learning and much reduced abilities to remember. High habitat-complexity populations used associative cues more than an algorithm for learning and remembering, while low habitat-complexity populations used both equally. The study, therefore, helped in understanding the role of natural ecology in explaining variations in spatial learning abilities across populations.
Keywords: algorithm, associative cue, habitat complexity, population, spatial learning
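The abstract quantifies habitat complexity as a combination of substrate diversity and vegetation diversity but does not name an index; one standard choice for such diversity measures is the Shannon index H = -Σ p_i ln p_i. The sketch below uses that index with invented counts, purely as an assumed illustration.

```python
# Sketch: one plausible quantification of the habitat-complexity
# components named above via the Shannon diversity index. The index
# choice, the counts, and the additive combination are assumptions.
import math

def shannon(counts: list[int]) -> float:
    """Shannon index H = -sum(p_i * ln p_i) over nonzero categories."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

substrate = [40, 25, 20, 15]  # hypothetical cover counts per substrate type
vegetation = [60, 30, 10]     # hypothetical counts per vegetation type

complexity = shannon(substrate) + shannon(vegetation)  # one possible combination
print(round(complexity, 3))
```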
Procedia PDF Downloads 287
42 Implementation of Green Deal Policies and Targets in Energy System Optimization Models: The TEMOA-Europe Case
Authors: Daniele Lerede, Gianvito Colucci, Matteo Nicoli, Laura Savoldi
Abstract:
The European Green Deal is the first internationally agreed set of measures to counter climate change and environmental degradation. Besides the main target of reducing emissions by at least 55% by 2030, it sets the target of accompanying European countries through an energy transition to make the European Union a modern, resource-efficient, and competitive net-zero emissions economy by 2050, decoupling growth from the use of resources and ensuring a fair adaptation of all social categories to the transformation process. While the general framework of the Green Deal dates back to 2019, strategies and policies keep being developed to cope with recent circumstances and achievements. However, general long-term measures like the Circular Economy Action Plan and the proposals to shift from fossil natural gas to renewable and low-carbon gases (in particular biomethane and hydrogen) and to end the sale of gasoline and diesel cars by 2035 will all have significant effects on energy supply and demand evolution across the next decades. The interactions between energy supply and demand over long-term time frames are usually assessed via energy system models, to derive useful insights for policymaking and to address technological choices and research and development. TEMOA-Europe is a newly developed energy system optimization model instance based on the minimization of the total cost of the system under analysis, adopting a technologically integrated, detailed, and explicit formulation and considering the evolution of the system in partial equilibrium in competitive markets with perfect foresight. TEMOA-Europe is developed on the TEMOA platform, an open-source modeling framework fully implemented in Python, thereby ensuring third-party verification even of large and complex models. TEMOA-Europe is based on a single-region representation of the European Union and EFTA countries on a time scale between 2005 and 2100, relying on a set of assumptions for socio-economic developments based on projections from the International Energy Outlook and on a large technological dataset including 7 sectors: the upstream and power sectors for the production of all energy commodities, and the end-use sectors, including industry, transport, residential, commercial, and agriculture. TEMOA-Europe also includes an updated hydrogen module considering its production, storage, transportation, and utilization. Besides, it can rely on a wide set of innovative technologies, ranging from nuclear fusion and electricity plants equipped with CCS in the power sector to electrolysis-based steel production processes in the industrial sector, with a techno-economic characterization based on public literature, to produce insightful energy scenarios and especially to cope with the very long analyzed time scale. The aim of this work is to examine in detail the scheme of measures and policies for the realization of the purposes of the Green Deal and to transform them into a set of constraints and new socio-economic development pathways. Based on them, TEMOA-Europe will be used to produce and comparatively analyze scenarios to assess the consequences of Green Deal-related measures on the future evolution of the energy mix over the whole energy system in an economic optimization environment.
Keywords: European Green Deal, energy system optimization modeling, scenario analysis, TEMOA-Europe
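The core logic described above, minimizing total system cost subject to demand and policy constraints, can be illustrated with a toy linear program in which a Green Deal style emissions cap appears as one constraint row. This is a didactic sketch with invented numbers, not TEMOA code.

```python
# Toy sketch of the cost-minimization principle behind TEMOA-style
# models: choose generation x = [gas, wind] to meet demand at minimum
# cost under an emissions cap. All numbers are hypothetical.
from scipy.optimize import linprog

cost = [50.0, 70.0]  # EUR/MWh for [gas, wind], hypothetical

# Demand: x_gas + x_wind >= 100 MWh  ->  -x_gas - x_wind <= -100
# Emissions cap: 0.4 tCO2/MWh * x_gas <= 20 tCO2 (hypothetical cap)
A_ub = [[-1.0, -1.0],
        [0.4,  0.0]]
b_ub = [-100.0, 20.0]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)  # optimal mix and total cost; cap forces wind in
```

Tightening the cap (the 20 above) shifts the optimal mix toward the cleaner, more expensive technology, which is exactly how policy constraints reshape scenario results in full-scale models.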
Procedia PDF Downloads 105
41 Generating Individualized Wildfire Risk Assessments Utilizing Multispectral Imagery and Geospatial Artificial Intelligence
Authors: Gus Calderon, Richard McCreight, Tammy Schwartz
Abstract:
Forensic analysis of community wildfire destruction in California has shown that reducing or removing flammable vegetation in proximity to buildings and structures is one of the most important wildfire defenses available to homeowners. State laws specify the requirements for homeowners to create and maintain defensible space around all structures. Unfortunately, this decades-long effort has had limited success due to noncompliance and minimal enforcement. As a result, vulnerable communities continue to experience escalating human and economic costs along the wildland-urban interface (WUI). Quantifying vegetative fuels at both the community and parcel scale requires detailed imaging from an aircraft with remote sensing technology to reduce uncertainty. FireWatch has been delivering high spatial resolution (5" ground sample distance) wildfire hazard maps annually to the community of Rancho Santa Fe, CA, since 2019. FireWatch uses a multispectral imaging system mounted onboard an aircraft to create georeferenced orthomosaics and spectral vegetation index maps. Using proprietary algorithms, the vegetation type, condition, and proximity to structures are determined for 1,851 properties in the community. Secondary data processing combines object-based classification of vegetative fuels, assisted by machine learning, to prioritize mitigation strategies within the community. The remote sensing data for the 10 sq. mi. community are divided into parcels and sent to all homeowners in the form of defensible space maps and reports. Follow-up aerial surveys are performed annually, using repeat station imaging of fixed GPS locations, to address changes in defensible space, vegetation fuel cover, and condition over time. These maps and reports have increased wildfire awareness and mitigation efforts from 40% to over 85% among homeowners in Rancho Santa Fe. To assist homeowners fighting increasing insurance premiums and non-renewals, FireWatch has partnered with Black Swan Analytics, LLC, to leverage the multispectral imagery and increase homeowners' understanding of wildfire risk drivers. For this study, a subsample of 100 parcels was selected to gain a comprehensive understanding of wildfire risk and the elements that can be mitigated. Geospatial data from FireWatch's defensible space maps were combined with Black Swan's patented approach, which uses 39 other risk characteristics, into a 4score Report. The 4score Report helps property owners understand risk sources and potential mitigation opportunities by assessing four categories of risk: fuel sources, ignition sources, susceptibility to loss, and hazards to fire protection efforts (FISH). This study has shown that susceptibility to loss is the category on which residents and property owners must focus their efforts. The 4score Report also provides a tool to measure the impact of homeowner actions on risk levels over time. Resiliency is the only solution to breaking the cycle of community wildfire destruction, and it starts with high-quality data and education.
Keywords: defensible space, geospatial data, multispectral imaging, Rancho Santa Fe, susceptibility to loss, wildfire risk
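The parcel-level proximity analysis described above amounts to a geometric buffer query: how much fuel lies within a defensible-space distance of a structure. The sketch below shows the kind of query involved; the geometries and the 30 m distance are hypothetical, since neither is specified in the abstract.

```python
# Sketch: fraction of a defensible-space buffer occupied by vegetation.
# Footprint coordinates and the 30 m radius are hypothetical examples.
from shapely.geometry import Polygon

structure = Polygon([(0, 0), (10, 0), (10, 8), (0, 8)])         # building footprint
vegetation = Polygon([(20, -5), (45, -5), (45, 25), (20, 25)])  # mapped fuel patch

buffer_zone = structure.buffer(30.0)  # assumed defensible-space radius, meters
overlap = buffer_zone.intersection(vegetation)
print(f"vegetated share of buffer: {overlap.area / buffer_zone.area:.1%}")
```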
Procedia PDF Downloads 108
40 Fold and Thrust Belts Seismic Imaging and Interpretation
Authors: Sunjay
Abstract:
Plate tectonics is of great significance, as it represents the spatial relationships of volcanic rock suites at plate margins, the distribution in space and time of the conditions of different metamorphic facies, the scheme of deformation in mountain belts, or orogens, and the association of different types of economic deposit. Orogenic belts are characterized by extensive thrust faulting, movements along large strike-slip fault zones, and extensional deformation that occurs deep within continental interiors. Within oceanic areas there are also regions of crustal extension and accretion in the back-arc basins located on the landward sides of many destructive plate margins. Collisional orogens develop where a continent or island arc collides with a continental margin as a result of subduction. Collisional and non-collisional orogens can be explained by differences in the strength and rheology of the continental lithosphere and by processes that influence these properties during orogenesis. Seismic imaging difficulties: in triangle zones, several factors reduce the effectiveness of seismic methods. The topography in the central part of the triangle zone is usually rugged and is associated with near-surface velocity inversions, which degrade the quality of the seismic image. These characteristics lead to a low signal-to-noise ratio, inadequate penetration of energy through the overburden, poor geophone coupling with the surface, and wave scattering. Depth seismic imaging techniques: seismic processing relates to the process of altering the seismic data to suppress noise, enhance the desired signal (higher signal-to-noise ratio), and migrate seismic events to their appropriate location in space and depth. Processing steps generally include analysis of velocities, static corrections, moveout corrections, stacking, and migration. Bow-tie effects and shadow zones in exploration seismology: areas with no reflections (dead areas) are called shadow zones and are common in the vicinity of faults and other discontinuous areas in the subsurface. Shadow zones result when energy from a reflector is focused on receivers that produce other traces; as a result, reflectors are not shown in their true positions. Subsurface discontinuities: diffractions occur at discontinuities in the subsurface, such as faults and velocity discontinuities (as at "bright spot" terminations); the bow-tie effect is caused by two deep-seated synclines. Further topics include seismic imaging of thrust faults and structural damage in deepwater thrust belts; imaging deformation in submarine thrust belts using seismic attributes; imaging thrust and fault zones using 3D seismic image processing techniques; checking balanced structural cross sections for seismic interpretation pitfalls (such pitfalls can originate from any or all of the limitations of data acquisition, processing, and interpretation of the subsurface geology); and pitfalls and limitations in seismic attribute interpretation of tectonic features. Seismic attributes are routinely used to accelerate and quantify the interpretation of tectonic features in 3D seismic data: coherence (or variance) cubes delineate the edges of megablocks and faulted strata, curvature delineates folds and flexures, while spectral components delineate lateral changes in thickness and lithology. These methods also apply to carbon capture and geological storage leakage surveillance, because faults behave as a seal or a conduit for hydrocarbon transportation to a trap.
Keywords: tectonics, seismic imaging, fold and thrust belts, seismic interpretation
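Of the attributes mentioned above, coherence/variance is the simplest to illustrate: high local variance flags lateral discontinuities such as faults. The sketch below computes a local-variance attribute on a synthetic 2D section with a fake fault offset; it is a didactic stand-in, not a production attribute workflow.

```python
# Sketch: variance attribute on a synthetic 2D seismic section; a ridge
# of high local variance marks the lateral discontinuity (the "fault").
import numpy as np
from scipy.ndimage import uniform_filter

# Synthetic section: horizontal reflectors, offset right of trace 50.
section = np.tile(np.sin(np.linspace(0, 8 * np.pi, 200)), (100, 1)).T
section[:, 50:] = np.roll(section[:, 50:], 7, axis=0)  # fake fault throw

# Local variance via E[x^2] - E[x]^2 over a 5x5 moving window.
mean = uniform_filter(section, size=5)
mean_sq = uniform_filter(section**2, size=5)
variance = np.maximum(mean_sq - mean**2, 0.0)  # clip float round-off

print(variance[:, 48:52].mean(), variance[:, 10:14].mean())  # fault >> background
```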
Procedia PDF Downloads 70
39 Oxidation Behavior of Ferritic Stainless Steel Interconnects Modified Using Nanoparticles of Rare-Earth Elements under Operating Conditions Specific to Solid Oxide Electrolyzer Cells
Authors: Łukasz Mazur, Kamil Domaradzki, Bartosz Kamecki, Justyna Ignaczak, Sebastian Molin, Aleksander Gil, Tomasz Brylewski
Abstract:
The rising global power consumption necessitates the development of new energy storage solutions. Prospective technologies include solid oxide electrolyzer cells (SOECs), which convert surplus electrical energy into hydrogen. An electrolyzer cell consists of a porous anode, a porous cathode, and a dense electrolyte. Power output is increased by connecting cells into stacks using interconnects. Interconnects are currently made from high-chromium ferritic steels, for example, Crofer 22 APU, which exhibit high oxidation resistance and a thermal expansion coefficient similar to that of electrode materials. These materials have one disadvantage: their area-specific resistance (ASR) gradually increases due to the formation of a Cr₂O₃ scale on their surface as a result of oxidation. The chromia in the scale also reacts with the water vapor present in the reaction media, forming volatile chromium oxyhydroxides, which in turn react with electrode materials and cause their deterioration. The electrochemical efficiency of SOECs thus decreases. To mitigate this, the interconnect surface can be modified with protective-conducting coatings of spinel or other materials. The high prices of SOEC components, especially the Crofer 22 APU, have prevented their widespread adoption. More inexpensive counterparts therefore need to be found, and their properties need to be enhanced to make them viable. Candidates include the Nirosta 4016/1.4016 low-chromium ferritic steel, with a chromium content of just 16.3 wt%. This steel's resistance to high-temperature oxidation was improved by depositing Gd₂O₃ nanoparticles on its surface via either dip coating or electrolysis. Modification with CeO₂ or Ce₀.₉Y₀.₁O₂ nanoparticles deposited by means of spray pyrolysis was also tested. These methods were selected because of their low cost and simplicity of application. The aim of this study was to investigate the oxidation kinetics of Nirosta 4016/1.4016 modified using the afore-mentioned methods and to subsequently measure the obtained samples' ASR. The samples were oxidized for 100 h in air as well as in air/H₂O and Ar/H₂/H₂O mixtures at 1073 K. Such conditions reflect those found in the anode and cathode operating space during real-life use of SOECs. Phase and chemical composition and the microstructure of the oxidation products were determined using XRD and SEM-EDS. ASR was measured over the range of 623-1073 K using a four-point, two-probe DC technique. The results indicate that the applied nanoparticles improve the oxidation resistance and electrical properties of the studied layered systems. The properties of individual systems varied significantly depending on the applied reaction medium. Gd₂O₃ nanoparticles improved oxidation resistance to a greater degree than either CeO₂ or Ce₀.₉Y₀.₁O₂ nanoparticles. On the other hand, the cerium-containing nanoparticles improved electrical properties regardless of the reaction medium. The ASR values of all surface-modified steel samples were below the 0.1 Ω·cm² threshold set for interconnect materials, which was exceeded in the case of the unmodified reference sample. It can be concluded that the applied modifications increased the oxidation resistance of Nirosta 4016/1.4016 to a level that allows its use as an SOEC interconnect material.
Acknowledgments: Funding of the research project by the program "Excellence Initiative – Research University" for the AGH University of Krakow is gratefully acknowledged (TB).
Keywords: cerium oxide, ferritic stainless steel, gadolinium oxide, interconnect, SOEC
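Oxidation kinetics of chromia-forming steels such as these are commonly evaluated by fitting a parabolic rate law, (Δm/A)² = k_p·t. The sketch below fits k_p to mass-gain data; both the parabolic-law choice and the data points are assumptions for illustration, not the study's measurements.

```python
# Sketch: parabolic rate constant from mass-gain data, (dm/A)^2 = kp * t.
# The rate-law assumption and the data points are illustrative only.
import numpy as np

t = np.array([10.0, 25.0, 50.0, 75.0, 100.0])       # hours, hypothetical
dm = np.array([0.021, 0.034, 0.047, 0.058, 0.066])  # mg/cm^2, hypothetical

kp = np.polyfit(t, dm**2, 1)[0]  # slope of (dm/A)^2 versus t
print(f"kp = {kp:.3e} mg^2 cm^-4 h^-1")  # lower kp = slower scale growth
```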
Procedia PDF Downloads 87
38 Application of Large Eddy Simulation-Immersed Boundary Volume Penalization Method for Heat and Mass Transfer in Granular Layers
Authors: Artur Tyliszczak, Ewa Szymanek, Maciej Marek
Abstract:
Flow through granular materials is important to a vast array of industries, for instance in the construction industry, where granular layers are used for bulkheads and isolators; in chemical engineering and catalytic reactors, where the large surfaces of packed granular beds intensify chemical reactions; or in energy production systems, where granulates are promising materials for heat storage and as heat transfer media. Despite the common usage of granulates and the extensive research performed in this field, the phenomena occurring between granular solid elements or between solids and fluid are still not fully understood. In the present work, we analyze the heat exchange process between the flowing medium (gas, liquid) and the solid material inside granular layers. We consider them as a composite of isolated solid elements and inter-granular spaces in which a gas or liquid can flow. The structure of the layer is controlled by the shapes of particular granular elements (e.g., spheres, cylinders, cubes, Raschig rings), their spatial distribution, or the effective characteristic dimension (total volume or surface area). We analyze to what extent alteration of these parameters influences flow characteristics (turbulent intensity, mixing efficiency, heat transfer) inside the layer and behind it. Analysis of flow inside granular layers is very complicated because the use of classical experimental techniques (LDA, PIV, fiber probes) inside the layers is practically impossible, whereas the use of probes (e.g., thermocouples, Pitot tubes) requires drilling holes in the solid material. Hence, measurements of the flow inside granular layers are usually performed using, for instance, advanced X-ray tomography. In this respect, theoretical or numerical analyses of flow inside granulates seem crucial. The application of discrete element methods in combination with classical finite volume/finite difference approaches is problematic, as the mesh generation process for complex granular material can be very arduous. A good alternative for the simulation of flow in complex domains is the immersed boundary-volume penalization (IB-VP) approach, in which the computational meshes have a simple Cartesian structure and the impact of solid objects on the fluid is mimicked by source terms added to the Navier-Stokes and energy equations. The present paper focuses on the application of the IB-VP method combined with large eddy simulation (LES). The flow solver used in this work is a high-order code (SAILOR), which was used previously in various studies, including laminar/turbulent transition in free flows and also flows in wavy channels, wavy pipes, and over various shaped obstacles. In these cases, the formal order of approximation turned out to be between 1 and 2, depending on the test case. The current research concentrates on analyses of the flows in dense granular layers with elements distributed in a deterministic, regular manner and on validation of the results obtained using the LES-IB method against a body-fitted approach. The comparisons are very promising and show very good agreement. It is found that the size, the number of elements, and their distribution have a huge impact on the obtained results. Ordering of the granular elements (or lack of it) affects both the pressure drop and the efficiency of the heat transfer, as it significantly changes the mixing process.
Keywords: granular layers, heat transfer, immersed boundary method, numerical simulations
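The essence of the IB-VP approach described above is a mask-weighted forcing term, -(χ/η)(u - u_s), added to the momentum equation so that the velocity is driven toward the solid value u_s inside masked cells. The 1D sketch below illustrates the mechanism with a toy explicit scheme; it is not the SAILOR solver, and all parameters are illustrative.

```python
# Sketch of immersed boundary-volume penalization in 1D: a mask chi
# marks solid cells, and the term -(chi/eta)*(u - u_solid) forces the
# velocity toward the solid value there. Toy scheme, not SAILOR.
import numpy as np

n = 200
dx, dt, nu, eta = 1.0 / n, 1e-5, 1e-3, 1e-4
u = np.ones(n)        # initial fluid velocity everywhere
chi = np.zeros(n)
chi[80:120] = 1.0     # solid obstacle occupies these cells

for _ in range(2000):
    # Periodic second-difference Laplacian (viscous term).
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    # Explicit Euler step with the penalization source term (u_solid = 0).
    u += dt * (nu * lap - chi / eta * (u - 0.0))

print(u[100], u[10])  # ~0 inside the obstacle, ~1 far from it
```

The smaller the penalization parameter η, the more strongly the solid condition is enforced, at the price of a stiffer equation; this trade-off is the central tuning choice of the method.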
Procedia PDF Downloads 136
37 An Integrated Real-Time Hydrodynamic and Coastal Risk Assessment Model
Authors: M. Reza Hashemi, Chris Small, Scott Hayward
Abstract:
The Northeast Coast of the US faces the damaging effects of coastal flooding and winds due to Atlantic tropical and extratropical storms each year. Historically, several large storm events have produced substantial levels of damage to the region; the most notable of these were the Great Atlantic Hurricane of 1938, Hurricane Carol, Hurricane Bob, and recently Hurricane Sandy (2012). The objective of this study was to develop an integrated modeling system that could be used as a forecasting/hindcasting tool to evaluate and communicate the risk coastal communities face from these coastal storms. This modeling system utilizes the ADvanced CIRCulation (ADCIRC) model for storm surge predictions and the Simulating WAves Nearshore (SWAN) model for the wave environment. These models were coupled, passing information to each other and computing over the same unstructured domain, allowing for the most accurate representation of the physical storm processes. The coupled SWAN-ADCIRC model was validated and has been set up to perform real-time forecast simulations (as well as hindcasts). Modeled storm parameters were then passed to a coastal risk assessment tool. This tool, which is generic and universally applicable, generates spatial structural damage estimate maps on an individual-structure basis for an area of interest. The required inputs for the coastal risk model included detailed information about the individual structures, inundation levels, and wave heights for the selected region. Additionally, the calculation of wind damage to structures was incorporated. The integrated coastal risk assessment system was then tested and applied to Charlestown, a small, vulnerable coastal town along the southern shore of Rhode Island. The modeling system was applied to Hurricane Sandy and a synthetic storm. In both storm cases, the effect of natural dunes on coastal risk was investigated. The resulting damage maps for the area (Charlestown) clearly showed that the dune-eroded scenarios affected more structures and increased the estimated damage. The system was also tested in forecast mode for a large Nor'easter: Stella (March 2017). The results showed a good performance of the coupled model in forecast mode when compared to observations. Finally, the nearshore model XBeach was nested within this regional grid (ADCIRC-SWAN) to simulate nearshore sediment transport processes and coastal erosion. Hurricane Irene (2011) was used to validate XBeach, on the basis of a unique beach profile dataset for the region. XBeach showed a relatively good performance, being able to estimate eroded volumes along the beach transects with a mean error of 16%. The validated model was then used to analyze the effectiveness of several erosion mitigation methods that were recommended in a recent study of coastal erosion in New England: beach nourishment, coastal banks (engineered core), and submerged breakwaters, as well as artificial surfing reefs. It was shown that beach nourishment and coastal banks perform better in mitigating shoreline retreat and coastal erosion.
Keywords: ADCIRC, coastal flooding, storm surge, coastal risk assessment, living shorelines
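A mean error over transects, like the 16% quoted for the XBeach validation, is typically computed as a mean absolute percentage error between modeled and observed eroded volumes. The sketch below shows that computation; the volume values in it are invented for illustration.

```python
# Sketch: mean absolute percentage error of eroded volumes per transect;
# the modeled/observed values are hypothetical placeholders.
import numpy as np

observed = np.array([120.0, 95.0, 140.0, 80.0])  # m^3 per transect, hypothetical
modeled = np.array([105.0, 110.0, 155.0, 70.0])  # m^3 per transect, hypothetical

mape = np.mean(np.abs(modeled - observed) / observed) * 100
print(f"mean error = {mape:.0f}%")
```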
Procedia PDF Downloads 116
36 A Case Study of a Rehabilitated Child by Joint Efforts of Parents and Community
Authors: Fouzia Arif, Arif S. Mohammad, Hifsa Altaf, Lubna Raees
Abstract:
Introduction: The term "disability" refers to any condition that impedes the completion of daily tasks using traditional methods. In developing countries like Pakistan, the disabled population is usually excluded from the mainstream. In squatter settlements the situation is more critical. Sultanabad is one of the squatter settlements of Karachi. The purpose of this case study is to improve the health of disabled children and create awareness among parents and the community. Through a household visit, Shiraz, a disabled boy of 15.5 years, was identified. His mother stated that her son had been living normally and happily with his parents two years earlier. When he was 13 years old and a student of class 8, both his legs were injured in a railway accident while he was playing cricket. He suffered severe fractures of both femoral shafts. He was taken to Jinnah Postgraduate Medical Centre (JPMC), where his left leg was amputated above the knee and his right leg underwent open reduction and internal fixation; fortunately, the bone healed moderately with the passage of time. Methods: In the squatter settlement of Sultanabad, Karachi, a survey was conducted in two sectors. A disability screening questionnaire was developed, and through collaboration with the community via household visits and outreach sessions, 23 cases of disability were identified; the children were socialized through sports, and a musical program and get-together were organized with stakeholders to create awareness among the community and parents. Collaboration was established with different NGOs, the government, stakeholders, and the community for the establishment of a physiotherapy center. During a home visit it was identified that Shiraz had been bed-bound for the last year, as his family could not afford the cost of a physiotherapist and medical consultation due to poverty. The parents were counseled that Shiraz needed to undergo treatment, and after motivation they agreed. He was seen by an orthopedic surgeon at AKUH, who referred him to DMC University of Health Sciences for rehabilitation services. There he was assessed and referred to the Community Based Physiotherapy Centre, Sultanabad. A physiotherapist, along with the Coordinator for Special Children, visited his home, assessed him regularly, and planned physiotherapy treatment: abdominal and thigh muscle strengthening exercises, foot muscle strengthening exercises, knee mobilization, and weight bearing progressing gradually from partial to full; strengthening exercises were also given for the residual limb, as the boy depended on it. He was also provided with an artificial leg, and training in its use was carried out. Result: Shiraz is now fully mobile; he can walk independently, even outside the home, his functional ability has improved, and his dependency has been reduced. It was difficult but not impossible. We all have sympathy, but if we have empathy, then we can rehabilitate the community in a better way. His parents are very happy, and the community is surprised to see him in such better condition. Conclusion: The combined efforts of the physiotherapist, the Coordinator for Special Children, the community, and the parents made a drastic change in Shiraz's case by continuously motivating him toward a better outcome. He is going to school regularly without support. Since he belongs to a poor family, he faces financial constraints for education and regular clinical follow-ups.
Keywords: femoral shaft fracture, trauma, orthopedic surgeon, physiotherapy treatment
Procedia PDF Downloads 243
35 Circular Tool and Dynamic Approach to Grow the Entrepreneurship of Macroeconomic Metabolism
Authors: Maria Areias, Diogo Simões, Ana Figueiredo, Anishur Rahman, Filipa Figueiredo, João Nunes
Abstract:
It is expected that close to 7 billion people will live in urban areas by 2050. In order to improve the sustainability of territories and their transition towards a circular economy, it is necessary to understand their metabolism and to promote and guide the entrepreneurial response. The study of a macroeconomic metabolism involves the quantification of the inputs, outputs, and storage of energy, water, materials, and wastes for an urban region. This quantification and analysis represent an opportunity for the promotion of green entrepreneurship. There are several methods to assess the environmental impacts of an urban territory, such as human and environmental risk assessment (HERA), life cycle assessment (LCA), ecological footprint assessment (EF), material flow analysis (MFA), physical input-output tables (PIOT), ecological network analysis (ENA), and multicriteria decision analysis (MCDA), among others. However, no consensus exists about which of these assessment methods is best for analyzing the sustainability of such complex systems. Taking into account the weaknesses and needs identified, the CiiM - Circular Innovation Inter-Municipality project aims to define a uniform and globally accepted methodology through the integration of various methodologies and dynamic approaches to increase the efficiency of macroeconomic metabolisms and promote entrepreneurship in a circular economy. The pilot territory considered in the CiiM project has a total area of 969,428 ha, comprising a total of 897,256 inhabitants (about 41% of the population of the Center Region). The main economic activities in the pilot territory, which contribute to a gross domestic product of 14.4 billion euros, are: social support activities for the elderly; construction of buildings; road transport of goods; retailing in supermarkets and hypermarkets; mass production of other garments; inpatient health facilities; and the manufacture of other components and accessories for motor vehicles. The region's business network consists mostly of micro and small companies (similar to the Central Region of Portugal), with a total of 53,708 companies identified in the CIM Region of Coimbra (39 large companies), 28,146 in the CIM Viseu Dão Lafões (22 large companies), and 24,953 in the CIM Beiras and Serra da Estrela (13 large companies). The database was constructed taking into account data available from the National Institute of Statistics (INE), the General Directorate of Energy and Geology (DGEG), Eurostat, Pordata, the Strategy and Planning Office (GEP), the Portuguese Environment Agency (APA), the Commission for Coordination and Regional Development (CCDR), and the Inter-municipal Communities (CIM), as well as dedicated databases. In addition to the collection of statistical data, it was necessary to identify and characterize the different stakeholder groups in the pilot territory that are relevant to the different metabolism components under analysis. The CiiM project also adds the potential of a Geographic Information System (GIS) so that it is possible to obtain geospatial results for the territorial metabolisms (rural and urban) of the pilot region.
This platform will be a powerful tool for visualizing the flows of products/services that occur within the region and will support stakeholders in improving their circular performance and identifying new business ideas and symbiotic partnerships. Keywords: circular economy tools, life cycle assessment, macroeconomic metabolism, multicriteria decision analysis, decision support tools, circular entrepreneurship, industrial and regional symbiosis
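As a rough illustration of the balance at the heart of such a metabolism study, the sketch below computes net additions to stock (inputs minus outputs) per flow category; the flow names and figures are hypothetical and are not CiiM data.

```python
# Minimal material-flow balance sketch for an urban metabolism.
# Flow names and figures are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Flow:
    material: str   # e.g. "water", "energy", "construction minerals"
    inputs: float   # quantity entering the territory per year
    outputs: float  # quantity leaving as products, exports, or waste

def net_additions(flows):
    """Net addition to stock per material: inputs - outputs."""
    return {f.material: f.inputs - f.outputs for f in flows}

flows = [
    Flow("water", 120_000.0, 95_000.0),
    Flow("construction minerals", 48_000.0, 11_000.0),
    Flow("energy", 14_500.0, 14_500.0),
]

for material, stock in net_additions(flows).items():
    print(f"{material}: net addition to stock = {stock:,.0f} units/yr")
```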
Procedia PDF Downloads 10134 Experimental Proof of Concept for Piezoelectric Flow Harvesting for In-Pipe Metering Systems
Authors: Sherif Keddis, Rafik Mitry, Norbert Schwesinger
Abstract:
Intelligent networking of devices has rapidly been gaining importance over the past years, and with recent advances in the fields of microcontrollers, integrated circuits, and wireless communication, low-power applications have emerged, enabling this trend even more. Connected devices provide a much larger database, thus enabling highly intelligent and accurate systems. Ensuring safe drinking water is one of the fields that require constant monitoring and can benefit from increased accuracy. Monitoring is mainly achieved either through complex measures, such as collecting samples from the points of use, or through metering systems typically distant from the points of use, which deliver less accurate assessments of water quality. Constant metering near the points of use is complicated by their inaccessibility (e.g., buried water pipes, locked spaces), which makes system maintenance extremely difficult and often unviable. The research presented here attempts to overcome this challenge by providing these systems with enough energy through a flow harvester inside the pipe, thus eliminating the maintenance requirements in terms of battery replacements or containment of leakage resulting from wiring such systems. The proposed flow harvester exploits the piezoelectric properties of polyvinylidene difluoride (PVDF) films to convert turbulence-induced oscillations into electrical energy. It is intended to be used in standard water pipes with diameters between 0.5 and 1 inch. The working principle of the harvester uses a ring-shaped bluff body inside the pipe to induce pressure fluctuations. Additionally, the bluff body houses electronic components such as storage, circuitry, and an RF unit. Placing the piezoelectric films downstream of the bluff body causes their oscillation, which generates electrical charge. The PVDF film is placed as a multilayered wrap fixed to the pipe wall, leaving the top part to oscillate freely in the flow. The wrap, which allows for a larger active area, consists of two layers of 30 µm thick and 12 mm wide PVDF layered alternately with two centered 6 µm thick and 8 mm wide aluminum foil electrodes. The length of the layers depends on the number of windings and is part of the investigation. Sealing the harvester against liquid penetration is achieved by wrapping it in a ring-shaped LDPE film and welding the open ends. The fabrication of the PVDF wraps is done by hand. After validating the working principle in a wind tunnel, experiments were conducted in water, placing the harvester inside a 1 inch pipe at a water velocity of 0.74 m/s. To find a suitable placement of the wrap inside the pipe, two forms of fixation were compared regarding their power output. Further investigations were made regarding the number of windings required for efficient transduction. Best results were achieved using a wrap with 3 windings of the active layers, which delivers a constant power output of 0.53 µW at a 2.3 MΩ load and an effective voltage of 1.1 V. Considering the extremely low power requirements of sensor applications, these initial results are promising. For further investigation and optimization, machine designs are currently being developed to automate the fabrication and decrease the tolerances of the prototypes. Keywords: maintenance-free sensors, measurements at point of use, piezoelectric flow harvesting, universal micro generator, wireless metering systems
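The reported figures are internally consistent with the average-power relation P = V²/R, as this minimal check shows:

```python
# Consistency check of the reported harvester output using P = V^2 / R.
V_eff = 1.1         # effective voltage across the load, V
R_load = 2.3e6      # load resistance, ohm

P = V_eff**2 / R_load            # average power dissipated in the load, W
print(f"P = {P * 1e6:.2f} uW")   # -> ~0.53 uW, matching the reported value
```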
Procedia PDF Downloads 19333 Learning from Dendrites: Improving the Point Neuron Model
Authors: Alexander Vandesompele, Joni Dambre
Abstract:
The diversity in dendritic arborization, as first illustrated by Santiago Ramón y Cajal, has always suggested a role for dendrites in the functionality of neurons. In the past decades, thanks to new recording techniques and optical stimulation methods, it has become clear that dendrites are not merely passive electrical components. They are observed to integrate inputs in a non-linear fashion and actively participate in computations. Regardless, in simulations of neural networks, dendritic structure and functionality are often overlooked. Especially in a machine learning context, when designing artificial neural networks, point neuron models such as the leaky integrate-and-fire (LIF) model are dominant. These models mimic the integration of inputs at the neuron soma and ignore the existence of dendrites. In this work, the LIF point neuron model is extended with a simple form of dendritic computation. This gives the LIF neuron an increased capacity to discriminate spatiotemporal input sequences, a dendritic functionality observed in another study. Simulations of the spiking neurons are performed using the BindsNET framework. In the common LIF model, incoming synapses are independent. Here, we introduce a dependency between incoming synapses such that the post-synaptic impact of a spike is determined not only by the weight of the synapse but also by the activity of other synapses. This is a form of short-term plasticity in which synapses are potentiated or depressed by the preceding activity of neighbouring synapses, and it is a straightforward way to prevent inputs from simply summing linearly at the soma. To implement this, each pair of synapses on a neuron is assigned a variable representing the synaptic relation; this variable determines the magnitude of the short-term plasticity. These variables can be chosen randomly or, more interestingly, can be learned using a form of Hebbian learning. We use spike-timing-dependent plasticity (STDP), commonly used to learn synaptic strength magnitudes. If all neurons in a layer receive the same input, they tend to learn the same patterns through STDP. Adding inhibitory connections between the neurons creates a winner-take-all (WTA) network, which causes the different neurons to learn different input sequences. To illustrate the impact of the proposed dendritic mechanism, even without learning, we attach five input neurons to two output neurons. One output neuron is a regular LIF neuron; the other is a LIF neuron with dendritic relationships. The five input neurons are then allowed to fire in a particular order. The membrane potentials are reset, and subsequently the five input neurons are fired in the reverse order. As the regular LIF neuron linearly integrates its inputs at the soma, its membrane potential response to both sequences is similar in magnitude. In the other output neuron, due to the dendritic mechanism, the membrane potential response differs between the two sequences. Hence, the dendritic mechanism improves the neuron's capacity for discriminating spatiotemporal sequences. Dendritic computations improve LIF neurons even if the relationships between synapses are established randomly. Ideally, however, a learning rule is used to improve the dendritic relationships based on input data. It is possible to learn synaptic strength with STDP, to make a neuron more sensitive to its input; similarly, it is possible to learn dendritic relationships with STDP, to make the neuron more sensitive to spatiotemporal input sequences.
Feeding structured data to a WTA network with dendritic computation leads to a significantly higher number of discriminated input patterns. Without the dendritic computation, output neurons are less specific and may, for instance, be activated by a sequence in reverse order. Keywords: dendritic computation, spiking neural networks, point neuron model
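A minimal NumPy sketch of the described mechanism follows; the constants, the decaying activity trace, and the modulation rule are illustrative assumptions, not the authors' BindsNET implementation.

```python
import numpy as np

# Sketch: a LIF neuron whose effective synaptic weights are modulated by
# pairwise "synaptic relation" variables and the recent activity of the
# other synapses (a simple short-term plasticity; constants are assumed).
rng = np.random.default_rng(0)
n_syn = 5
w = np.full(n_syn, 0.5)                      # synaptic weights
R = rng.uniform(-0.2, 0.2, (n_syn, n_syn))   # pairwise relation variables
np.fill_diagonal(R, 0.0)

tau, dt = 20.0, 1.0

def run(sequence):
    """Drive the neuron with one spike vector per step; return final v."""
    v = 0.0
    trace = np.zeros(n_syn)                  # recent activity per synapse
    for spikes in sequence:
        eff = w * (1.0 + R @ trace)          # neighbours modulate impact
        v += (-v / tau) * dt + eff @ spikes
        trace = 0.9 * trace + spikes         # decaying activity trace
    return v

order = np.eye(n_syn)                        # synapses 0..4 fire in turn
print(run(order), run(order[::-1]))          # forward vs. reversed sequence
```

Because the relation matrix is asymmetric, the two orderings produce different membrane responses, whereas a plain point LIF neuron would respond identically.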
Procedia PDF Downloads 13332 GIS Based Flash Flood Runoff Simulation Model of Upper Teesta River Basin - Using ASTER DEM and Meteorological Data
Authors: Abhisek Chakrabarty, Subhraprakash Mandal
Abstract:
Flash floods are among the most catastrophic natural hazards in the mountainous regions of India. The recent flood on the Mandakini River at Kedarnath (14-17 June 2013) is a classic example of a flash flood that devastated Uttarakhand, killing thousands of people. The disaster was a combined effect of high-intensity rainfall, the sudden breach of Chorabari Lake, and very steep topography. Every year in the Himalayan region, flash floods occur due to intense rainfall over a short period of time, cloudbursts, glacial lake outbursts, and the collapse of artificial check dams, all of which cause high river flows. In the Sikkim-Darjeeling Himalaya, one of the probable flash flood occurrence zones is the Teesta watershed. The Teesta River is a right-bank tributary of the Brahmaputra, draining a mountain area of approximately 8,600 sq. km. It originates in the Pauhunri massif (7,127 m), and the mountain section of the river is 182 km long. The Teesta is characterized by a complex hydrological regime: the river is fed not only by precipitation but also by melting glaciers and snow, as well as groundwater. The present study describes an attempt to model surface runoff in the upper Teesta basin, which is directly related to catastrophic flood events, by creating a system based on GIS technology. The main objective was to construct a direct unit hydrograph for an excess rainfall by estimating the streamflow response at the outlet of the watershed. Specifically, the methodology was based on the creation of a spatial database in a GIS environment and on data editing. Rainfall time-series data were collected from the India Meteorological Department and processed to calculate flow time and runoff volume. Apart from the meteorological data, background data such as topography, drainage network, land cover, and geological data were also collected. The watershed was clipped from the entire area, streamlines were generated for the Teesta watershed, and cross-sectional profiles were plotted across the river at various locations from ASTER DEM data using ERDAS IMAGINE 9.0 and ArcGIS 10.0. Different hydraulic models for detecting flash flood probability were analyzed using HEC-RAS, FLO-2D, and HEC-HMS software, which were of great importance in achieving the final result. With an input rainfall intensity above 400 mm per day for three days, the flood runoff simulation model shows outbursts of lakes and check dams, individually or in combination with runoff, causing severe damage to downstream settlements. The model output shows that 313 sq. km were most vulnerable to flash floods, including Melli, Jourthang, Chungthang, and Lachung, and 655 sq. km were moderately vulnerable, including Rangpo, Yathang, Dambung, Bardang, Singtam, Teesta Bazar, and the Thangu Valley. The model was validated using rainfall data from a flood event that took place in August 1968; 78% of the actual flooded area was reflected in the model output. Lastly, preventive and curative measures were suggested to reduce losses from probable flash flood events. Keywords: flash flood, GIS, runoff, simulation model, Teesta river basin
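As a compact illustration of the excess-rainfall-to-hydrograph step, the sketch below uses the SCS curve-number method and unit-hydrograph convolution; this is a generic textbook approach with made-up numbers, not the study's calibrated HEC-HMS setup.

```python
import numpy as np

# Generic rainfall-excess -> outlet hydrograph sketch (SCS curve number
# plus unit-hydrograph convolution); all numbers are illustrative.
def scs_excess(P_mm, CN=75):
    """Cumulative excess rainfall (mm) from cumulative rainfall P."""
    S = 25400.0 / CN - 254.0                  # potential retention, mm
    Ia = 0.2 * S                              # initial abstraction
    P = np.asarray(P_mm, dtype=float)
    return np.where(P > Ia, (P - Ia) ** 2 / (P + 0.8 * S), 0.0)

rain = np.cumsum([0, 40, 120, 160, 80])       # cumulative rainfall, mm
excess = np.diff(scs_excess(rain))            # incremental excess per step
uh = np.array([0.1, 0.35, 0.3, 0.15, 0.1])    # unit hydrograph ordinates
hydrograph = np.convolve(excess, uh)          # streamflow response at outlet
print(np.round(hydrograph, 1))
```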
Procedia PDF Downloads 31731 Applying Concept Mapping to Explore Temperature Abuse Factors in the Processes of Cold Chain Logistics Centers
Authors: Marco F. Benaglia, Mei H. Chen, Kune M. Tsai, Chia H. Hung
Abstract:
As societal and family structures, consumer dietary habits, and awareness of food safety and quality continue to evolve in most developed countries, the demand for refrigerated and frozen foods has been growing, and the issues related to their preservation have gained increasing attention. A well-established cold chain logistics system is essential to avoid any temperature abuse; therefore, assessing potential disruptions in the operational processes of cold chain logistics centers becomes pivotal. This study first employs HACCP to find disruption factors in cold chain logistics centers that may cause temperature abuse. Then, concept mapping is applied: selected experts engage in brainstorming sessions to identify any further factors. The panel consists of ten experts: four from logistics and home delivery, two from retail distribution, one from the food industry, two from low-temperature logistics centers, and one from the freight industry. Disruptions include equipment-related aspects, human factors, management aspects, and process-related considerations. The areas of observation encompass freezer rooms, refrigerated storage areas, loading docks, sorting areas, and vehicle parking zones. The experts also categorize the disruption factors based on perceived similarities and build a similarity matrix. Each factor is evaluated for its impact, frequency, and investment importance. Next, multidimensional scaling, cluster analysis, and other methods are used to analyze these factors. Key disruption factors are identified based on their impact and frequency, and the factors that companies prioritize and are willing to invest in are then determined by assessing investors' risk-aversion behavior. Finally, Cumulative Prospect Theory (CPT) is applied to verify the risk patterns. Sixty-six disruption factors are found and categorized into six clusters: (1) "Inappropriate Use and Maintenance of Hardware and Software Facilities", (2) "Inadequate Management and Operational Negligence", (3) "Product Characteristics Affecting Quality and Inappropriate Packaging", (4) "Poor Control of Operation Timing and Missing Distribution Processing", (5) "Inadequate Planning for Peak Periods and Poor Process Planning", and (6) "Insufficient Cold Chain Awareness and Inadequate Training of Personnel". The study also identifies five critical factors in the operational processes of cold chain logistics centers: "Lack of Personnel's Awareness Regarding Cold Chain Quality", "Personnel Not Following Standard Operating Procedures", "Personnel's Operational Negligence", "Management's Inadequacy", and "Lack of Personnel's Knowledge About Cold Chain". The findings show that cold chain operators prioritize prevention and improvement efforts in the "Inappropriate Use and Maintenance of Hardware and Software Facilities" cluster, particularly focusing on the factors "Temperature Setting Errors" and "Management's Inadequacy". However, through the application of CPT, this study reveals that companies are not usually willing to invest in the improvement of factors in this cluster due to their low likelihood of occurrence, even though they acknowledge the severity of the consequences should they occur.
Hence, the main implication is that the key disruption factors in cold chain logistics centers' processes are associated with personnel issues; therefore, comprehensive training, periodic audits, and the establishment of reasonable incentives and penalties for both new employees and managers may significantly reduce disruption issues. Keywords: concept mapping, cold chain, HACCP, cumulative prospect theory
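For reference, the CPT risk-pattern check rests on the standard value and probability-weighting functions; the sketch below uses the Tversky and Kahneman (1992) parameter estimates as illustrative assumptions, applied to a hypothetical low-probability, high-severity disruption.

```python
# Cumulative Prospect Theory building blocks (Tversky & Kahneman, 1992).
# Parameters are the original published estimates, used as assumptions.
ALPHA, BETA, LAMBDA, GAMMA = 0.88, 0.88, 2.25, 0.61

def value(x):
    """S-shaped value function: concave for gains, convex and steeper
    (loss aversion) for losses."""
    return x**ALPHA if x >= 0 else -LAMBDA * (-x)**BETA

def weight(p, gamma=GAMMA):
    """Inverse-S probability weighting: small probabilities overweighted."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# A rare but severe disruption, e.g. a freezer failure (figures made up):
p_fail, loss = 0.02, -500_000
print(f"decision weight: {weight(p_fail):.3f}")            # > 0.02
print(f"subjective impact: {weight(p_fail) * value(loss):,.0f}")
```

CPT combines such decision weights with the value function to capture how operators trade off rare, severe disruptions against investment costs.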
Procedia PDF Downloads 6830 Computer-Integrated Surgery of the Human Brain: New Possibilities
Authors: Ugo Galvanetto, Piero G. Pavan, Mirco Zaccariotto
Abstract:
The discipline of computer-integrated surgery (CIS) will provide equipment able to improve the efficiency of healthcare systems and, more importantly, clinical results. Surgeons and machines will cooperate in new ways that will extend surgeons' ability to train, plan, and carry out surgery. Patient-specific CIS of the brain requires several steps: 1 - Fast generation of brain models. Equipped with artificial intelligence, image recognition techniques should differentiate among all brain tissues in MR images and segment them. After that, automatic mesh generation should create the mathematical model of the brain, in which the various tissues (white matter, grey matter, cerebrospinal fluid, etc.) are clearly located in the correct positions. 2 - Reliable and fast simulation of the surgical process. Computational mechanics will be the crucial aspect of the entire procedure: new algorithms will be used to simulate the mechanical behaviour of cutting through cerebral tissues. 3 - Real-time provision of visual and haptic feedback. A sophisticated human-machine interface based on ergonomics and psychology will provide the feedback to the surgeon. The present work addresses point 2 in particular. Modelling the cutting of soft tissue in a structure as complex as the human brain is an extremely challenging problem in computational mechanics. The finite element method (FEM), which accurately represents complex geometries and accounts for material and geometrical nonlinearities, is the most used computational tool to simulate the mechanical response of soft tissues. However, the main drawback of FEM lies in the mechanics theory on which it is based, classical continuum mechanics, which assumes matter is a continuum with no discontinuities. FEM must resort to complex tools such as pre-defined cohesive zones, external phase-field variables, and demanding remeshing techniques to include discontinuities. All approaches to equip FEM with the capability to describe material separation, such as interface elements with cohesive zone models, X-FEM, element erosion, and phase-field, have drawbacks that make them unsuitable for surgery simulation: interface elements require a priori knowledge of crack paths; the use of X-FEM in 3D is cumbersome; element erosion does not conserve mass; and the phase-field approach adopts a diffusive crack model instead of describing the true tissue separation typical of surgical procedures. Modelling discontinuities, so difficult for computational approaches based on classical continuum mechanics, is instead easy for novel computational methods based on peridynamics (PD). PD is a non-local theory of mechanics formulated with no use of spatial derivatives. Its governing equations remain valid at points or surfaces of discontinuity, and it is therefore especially suited to describing crack propagation and fragmentation problems. Moreover, PD does not require any criterion to decide the direction of crack propagation or the conditions for crack branching or coalescence; in PD-based computational methods, cracks develop spontaneously in the way that is most convenient from an energy point of view. Therefore, in PD computational methods, crack propagation in 3D is as easy as in 2D, a remarkable advantage with respect to all other computational techniques. Keywords: computational mechanics, peridynamics, finite element, biomechanics
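To make the contrast concrete, here is a minimal 1D bond-based peridynamics sketch in which the internal force at a node is a sum over neighbours within a horizon, with no spatial derivatives; the discretization, constants, and critical-stretch bond-break criterion are illustrative assumptions.

```python
import numpy as np

# Minimal 1D bond-based peridynamics sketch. Internal force at a node is
# a sum over bonds within a horizon; broken bonds simply stop carrying
# force, so cracks need no special treatment. Constants are illustrative.
n, dx = 100, 1.0e-3
x = np.arange(n) * dx
horizon = 3.015 * dx
c = 1.0e18                   # bond micro-modulus (illustrative)
s_crit = 0.01                # critical stretch: bond breaks beyond this

pairs = [(i, j) for i in range(n) for j in range(n)
         if i != j and abs(x[j] - x[i]) <= horizon]
alive = {p: True for p in pairs}          # bond "health" flags

def internal_force(u):
    f = np.zeros(n)
    for (i, j) in pairs:
        if not alive[(i, j)]:
            continue                      # broken bond: carries no force
        xi = x[j] - x[i]                  # reference bond vector
        eta = u[j] - u[i]                 # relative displacement
        s = (abs(xi + eta) - abs(xi)) / abs(xi)   # bond stretch
        if s > s_crit:
            alive[(i, j)] = False         # material separation, "for free"
            continue
        f[i] += c * s * np.sign(xi + eta) * dx
    return f

u = np.linspace(0.0, 5e-6, n)             # small linear stretch field
print(internal_force(u)[:3])              # nonzero only near the boundary
```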
Procedia PDF Downloads 8029 Made on Land, Ends Up in the Water "I-Clare" Intelligent Remediation System for Removal of Harmful Contaminants in Water using Modified Reticulated Vitreous Carbon Foam
Authors: Sabina Żołędowska, Tadeusz Ossowski, Robert Bogdanowicz, Jacek Ryl, Paweł Rostkowski, Michał Kruczkowski, Michał Sobaszek, Zofia Cebula, Grzegorz Skowierzak, Paweł Jakóbczyk, Lilit Hovhannisyan, Paweł Ślepski, Iwona Kaczmarczyk, Mattia Pierpaoli, Bartłomiej Dec, Dawid Nidzworski
Abstract:
The circular economy of water presents a pressing environmental challenge for our society. Water contains various harmful substances, such as drugs, antibiotics, hormones, and dioxins, which can pose silent threats. Water pollution has severe consequences for aquatic ecosystems: it disrupts their balance by harming aquatic plants, animals, and microorganisms. Water pollution also poses significant risks to human health, as exposure to toxic chemicals through contaminated water can have long-term health effects, such as cancer, developmental disorders, and hormonal imbalances. However, effective remediation systems can be implemented to remove these contaminants using electrocatalytic processes, which offer an environmentally friendly alternative to other treatment methods; one of them is the innovative iCLARE system. The project's primary focus revolves around a few main topics: reactor design and construction, selection of a specific type of reticulated vitreous carbon (RVC) foam, analytical studies of harmful contaminant parameters, and AI implementation. This high-performance electrochemical reactor will be built around a novel type of electrode material. The proposed approach utilizes reticulated vitreous carbon (RVC) foams with deposited modified metal oxides (MMO) and diamond thin films. This setup is characterized by a highly developed surface area and satisfactory mechanical and electrochemical properties, designed for high electrocatalytic process efficiency. The consortium validated the electrode modification methods that are the basis of the iCLARE product and established the procedures for chemical detection: deposition of the metal oxides WO3 and V2O5, and deposition of boron-doped diamond/nanowall structures by a CVD process. The chosen electrodes (porous Ferroterm electrodes) were stress-tested for various phenomena that might occur inside the iCLARE machine: corrosion, the long-term condition of the electrode surface during electrochemical processes, and energetic efficacy, using cyclic polarization, electrochemical impedance spectroscopy (before and after electrolysis), and dynamic electrochemical impedance spectroscopy (DEIS). The latter tool allows real-time monitoring of the changes at the electrode/electrolyte interface. In parallel, the toxicity of the iCLARE chemicals and of the electrolysis products is evaluated before and after treatment using MARA assays (IBMM) and HPLC-MS-MS (NILU), providing information about the harmfulness of the electrode material and the efficiency of the iCLARE system in the disposal of pollutants. Implementation of the data into a system that uses artificial intelligence, and the possibility of practical application, is in progress (SensDx). Keywords: wastewater treatment, RVC, electrocatalysis, paracetamol
Procedia PDF Downloads 8828 An Intelligence-Led Methodology for Detecting Dark Actors in Human Trafficking Networks
Authors: Andrew D. Henshaw, James M. Austin
Abstract:
Introduction: Human trafficking is an increasingly serious transnational criminal enterprise and social security issue. Despite ongoing efforts to mitigate the phenomenon and a significant expansion of security scrutiny over the past decades, it is not receding. This is true for many nations in Southeast Asia, widely recognized as the global hub for trafficked persons, including men, women, and children. Human trafficking is difficult to address because there are numerous drivers, causes, and motivators for it to persist, including non-military and non-traditional security challenges such as climate change, displacement driven by global warming, and natural disasters, which make displaced persons and refugees particularly vulnerable. The issue is so large that conservative estimates put its dollar value at around $150 billion or more per year (Niethammer, 2020), spanning sexual slavery and exploitation, forced labor in construction, mining, and conflict roles, and the forced marriage of girls and women. Coupled with corruption throughout military, police, and civil authorities around the world, and the active hands of powerful transnational criminal organizations, it is likely that such figures are grossly underestimated, as human trafficking is misreported, under-detected, and deliberately obfuscated to protect those profiting from it. For example, the 2022 UN report on human trafficking shows a 56% reduction in convictions in that year alone (UNODC, 2022). Our Approach: To better understand this, our research utilizes a bespoke methodology. Applying a JAM (Juxtaposition Assessment Matrix), which we previously developed to detect flows of dark money around the globe (Henshaw & Austin, 2021), we now focus on the human trafficking paradigm. Utilizing the JAM methodology has identified key indicators of human trafficking not previously explored in depth. Being a set of structured analytical techniques that provide panoramic interpretations of the subject matter, this iteration of the JAM further incorporates behavioral and driver indicators, including the employment of open-source artificial intelligence (OS-AI) across multiple collection points. The extracted behavioral data were then applied to identify non-traditional indicators as they contribute to human trafficking. Furthermore, as the JAM OS-AI analyses data from the inverted position, i.e., the viewpoint of the traffickers, it examines the behavioral and physical traits required to succeed. This transposed examination of the requirements for success delivers potential leverage points for exploitation in the fight against human trafficking in a novel way. Findings: Our approach identified innovative datasets that had previously been overlooked or, at best, undervalued. For example, the JAM OS-AI approach identified critical 'dark agent' lynchpins within human trafficking that are difficult to detect and harder to connect to actors and agents within a network. Our preliminary data suggest this is partly because 'dark agents' in extant research have been difficult to detect and potentially much harder to connect directly to the actors and organizations in human trafficking networks.
Our research demonstrates that new investigative techniques such as the OS-AI-aided JAM introduce a powerful toolset to increase understanding of human trafficking and transnational crime and to illuminate networks that, to date, avoid global law enforcement scrutiny. Keywords: human trafficking, open-source intelligence, transnational crime, human security, international human rights, intelligence analysis, JAM OS-AI, dark money
Procedia PDF Downloads 9027 Deciphering Information Quality: Unraveling the Impact of Information Distortion in the UK Aerospace Supply Chains
Authors: Jing Jin
Abstract:
The incorporation of artificial intelligence (AI) and machine learning (ML) in aircraft manufacturing and aerospace supply chains leads to the generation of a substantial amount of data among various tiers of suppliers and OEMs. Identifying high-quality information challenges decision-makers, as the application of AI/ML models necessitates access to 'high-quality' information to yield the desired outputs. However, the process of information sharing introduces complexities, including distortion through various communication channels and biases introduced by both human and AI entities. This phenomenon significantly influences the quality of information and impacts decision-makers engaged in configuring supply chain systems. Traditionally, distorted information is categorized as 'low-quality'; however, this study challenges this perception, positing that distorted information that contributes to stakeholder goals can be deemed high-quality within supply chains. The main aim of this study is to identify and evaluate the dimensions of information quality crucial to the UK aerospace supply chain. Guided by a central research question, "What information quality dimensions are considered when defining information quality in the UK aerospace supply chain?", the study delves into the intricate dynamics of information quality in the aerospace industry. Additionally, the research explores the nuanced impact of information distortion on stakeholders' decision-making processes, addressing the question, "How does the information distortion phenomenon influence stakeholders' decisions regarding information quality in the UK aerospace supply chain system?" This study employs a deductive methodology rooted in positivism, utilizing a cross-sectional approach and a mono-method quantitative design: a questionnaire survey. Data are systematically collected from diverse tiers of supply chain stakeholders, encompassing end-customers, OEMs, Tier 0.5, Tier 1, and Tier 2 suppliers. Employing robust statistical analysis methods, including mean values, mode values, standard deviation, one-way analysis of variance (ANOVA), and Pearson's correlation analysis, the study interprets and extracts meaningful insights from the gathered data. Initial analyses challenge conventional notions, revealing that information distortion positively influences the definition of information quality, disrupting the established perception of distorted information as inherently low-quality. Further exploration through correlation analysis unveils the varied perspectives of different stakeholder tiers on the impact of information distortion on specific information quality dimensions. For instance, Tier 2 suppliers demonstrate strong positive correlations between information distortion and dimensions such as access security, accuracy, interpretability, and timeliness. Conversely, Tier 1 suppliers report strong negative influences on the security of accessing information and a negligible impact on information timeliness. Tier 0.5 suppliers showcase very strong positive correlations with dimensions such as conciseness and completeness, while OEMs exhibit limited interest in considering information distortion within the supply chain. Introducing social network analysis (SNA) provides a structural understanding of the relationships between information distortion and quality dimensions; the moderately high density of the 'information distortion-by-information quality' network underscores the interconnected nature of these factors.
In conclusion, this study offers a nuanced exploration of information quality dimensions in the UK aerospace supply chain, highlighting the significance of individual perspectives across different tiers. The positive influence of information distortion challenges prevailing assumptions, fostering a more nuanced understanding of information's role in the Industry 4.0 landscape. Keywords: information distortion, information quality, supply chain configuration, UK aerospace industry
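Two of the quantitative steps described above can be sketched briefly; the survey scores and adjacency matrix below are hypothetical stand-ins for the questionnaire data.

```python
import numpy as np

# Sketch: (1) Pearson correlation between perceived distortion and one
# quality dimension; (2) density of a directed "distortion-by-quality"
# network. All data are hypothetical stand-ins for survey responses.
rng = np.random.default_rng(1)

distortion = rng.integers(1, 6, 30)                  # 5-point Likert scores
timeliness = np.clip(distortion + rng.integers(-1, 2, 30), 1, 5)
r = np.corrcoef(distortion, timeliness)[0, 1]
print(f"Pearson r (distortion vs. timeliness): {r:.2f}")

A = rng.integers(0, 2, (8, 8))                       # 1 = observed link
np.fill_diagonal(A, 0)
density = A.sum() / (A.shape[0] * (A.shape[0] - 1))  # directed density
print(f"network density: {density:.2f}")
```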
Procedia PDF Downloads 6426 The Influence of Screen Translation on Creative Audiovisual Writing: A Corpus-Based Approach
Authors: John D. Sanderson
Abstract:
The popularity of American cinema worldwide has contributed to the development of sociolects related to specific film genres in other cultural contexts by means of screen translation, in many cases eluding norms of usage in the target language, a process whose result has come to be known as 'dubbese'. A consequence for reception in countries where local audiovisual fiction consumption is far lower than that of American imported productions is that this linguistic construct is preferred, even though it differs from common everyday speech. The iconography of film genres such as science fiction, westerns, or sword-and-sandal films, for instance, generates linguistic expectations in international audiences, who will more easily accept the sociolects assimilated through the continuous reception of American productions, even if the themes, locations, characters, etc., portrayed on screen may belong in origin to other cultures. And the non-normative language (e.g., calques, semantic loans) used in the preferred mode of linguistic transfer, whether translation for dubbing or subtitling, has in many cases diachronically evolved into a canonized sociolect, not only accepted but also required by foreign audiences of American films. However, a remarkable step forward is taken when this typology of artificial linguistic constructs starts being used creatively by nationals of these target cultural contexts. In the case of Spain, the success of American sitcoms such as Friends in the 1990s led Spanish television scriptwriters to include lexical and syntactical indirect borrowings (Anglicisms not formally identifiable as such because they include elements from their own language) in national productions in order to target audiences of the former. This commercial strategy had, however, already taken place decades earlier, when Spain became a favored location for the shooting of foreign films in the early 1960s. The international popularity of the then newly developed sub-genre known as the Spaghetti Western encouraged Spanish investors to produce their own movies, and local scriptwriters made use of the dubbese developed nationally since the advent of sound film instead of using normative language. As a result, direct Anglicisms as well as lexical and syntactical borrowings made up the creative writing of these Spanish productions, which also became commercially successful. Interestingly enough, some of these films were even marketed in English-speaking countries as original westerns (some of the names of actors and directors were anglicized for that purpose) dubbed into English. The analysis of these 'back translations' will also foreground some semantic distortions that arose in the process. In order to perform the research on these issues, a wide corpus of American films has been used, chronologically ranging from Stagecoach (John Ford, 1939) to Django Unchained (Quentin Tarantino, 2012), together with a shorter corpus of Spanish films produced during the golden age of the Spaghetti Western, from Una tumba para el sheriff (Mario Caiano; in English Lone and Angry Man, credited to William Hawkins) to Tu fosa será la exacta, amigo (Juan Bosch, 1972; in English My Horse, My Gun, Your Widow, credited to John Wood). The methodology of analysis and the conclusions reached could be applied to other genres and other cultural contexts. Keywords: dubbing, film genre, screen translation, sociolect
Procedia PDF Downloads 17025 Mobi-DiQ: A Pervasive Sensing System for Delirium Risk Assessment in Intensive Care Unit
Authors: Subhash Nerella, Ziyuan Guan, Azra Bihorac, Parisa Rashidi
Abstract:
Intensive care units (ICUs) provide care to critically ill patients in severe and life-threatening conditions. However, patient monitoring in the ICU is limited by the time and resource constraints imposed on healthcare providers. Many critical care indices, such as mobility, are still manually assessed, which can be subjective, prone to human error, and lacking in granularity. Other important aspects, such as environmental factors, are not monitored at all. For example, critically ill patients often experience circadian disruptions due to the absence of effective environmental "timekeepers" such as the light/dark cycle, and due to the systemic effect of acute illness on chronobiologic markers. Although the occurrence of delirium is associated with circadian disruption risk factors, these factors are not routinely monitored in the ICU. Hence, there is a critical unmet need for systems that provide precise, real-time assessment through novel enabling technologies. We have developed the mobility and circadian disruption quantification system (Mobi-DiQ) by augmenting biomarker and clinical data with pervasive sensing data to generate cues related to mobility, nightly disruptions, and light and noise exposure. We hypothesize that Mobi-DiQ can provide accurate mobility and circadian cues that correlate with bedside clinical mobility assessments and circadian biomarkers, which is ultimately important for delirium risk assessment and prevention. The collected multimodal dataset consists of depth images, electromyography (EMG) data, patient extremity movement captured by accelerometers, ambient light levels, sound pressure level (SPL), and indoor air quality measured by volatile organic compounds and the equivalent CO₂ concentration. For delirium risk assessment, the system recognizes mobility cues (axial body movement features and body key points) and circadian cues, including nightly disruptions, ambient SPL, and light intensity, as well as other environmental factors such as indoor air quality. The Mobi-DiQ system consists of three major components: the pervasive sensing system, a data storage and analysis server, and a data annotation system. For data collection, six local pervasive sensing systems were deployed, each including a local computer and sensors. A video recording tool with a graphical user interface (GUI), developed in Python, was used to capture depth image frames for analyzing patient mobility. All sensor data are encrypted and then automatically uploaded to the Mobi-DiQ server through a secured VPN connection. Several data pipelines automate the data transfer, curation, and preparation for annotation and model training; curation and post-processing are performed on the server. A custom secure annotation tool with a GUI was developed to annotate the depth activity data. The annotation tool is linked to a MongoDB database to record the annotations and provide summaries. Docker containers are also utilized to manage the services and pipelines running on the server in an isolated manner. The processed clinical data and annotations are used to train and develop real-time pervasive sensing systems to augment clinical decision-making and promote targeted interventions. In the future, we intend to evaluate our system in a clinical implementation trial, as well as to refine and validate it using other data sources, including neurological data obtained through continuous electroencephalography (EEG). Keywords: deep learning, delirium, healthcare, pervasive sensing
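One step of the described pipeline, packaging and encrypting a sensor sample at the bedside unit before upload over the VPN, might look like the following; the field names and queueing are assumptions, and the cryptography library's Fernet recipe stands in for whatever encryption the system actually uses.

```python
import json
import time
from cryptography.fernet import Fernet

# Sketch: serialise one environmental sample, encrypt it at the bedside
# unit, and queue it for upload. Field names are assumptions; Fernet is
# a stand-in for the system's actual encryption scheme.
key = Fernet.generate_key()        # in practice, provisioned per device
cipher = Fernet(key)

def package_sample(spl_db, lux, co2_ppm):
    sample = {
        "ts": time.time(),
        "spl_db": spl_db,          # sound pressure level
        "lux": lux,                # ambient light level
        "co2_ppm": co2_ppm,        # equivalent CO2 concentration
    }
    return cipher.encrypt(json.dumps(sample).encode())

upload_queue = [package_sample(48.5, 120.0, 610)]

# Server side: decrypt, curate, and store (e.g. in MongoDB) for annotation.
restored = json.loads(cipher.decrypt(upload_queue[0]))
print(restored["spl_db"])
```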
Procedia PDF Downloads 9324 A Review on Biological Control of Mosquito Vectors
Authors: Asim Abbasi, Muhammad Sufyan, Iqra, Hafiza Javaria Ashraf
Abstract:
The share of vector-borne diseases (VBDs) in the global burden of infectious diseases is almost 17%. The advent of new drugs and the latest research in medical science have helped mankind compete with these lethal diseases, but diseases transmitted by different mosquito species, including filariasis, malaria, viral encephalitis, and dengue, remain serious threats for people living in disease-endemic areas. Injudicious and repeated use of pesticides has placed selection pressure on mosquitoes, leading to the development of resistance. Hence, biological control agents are under serious consideration by the scientific community for use in vector control programmes. Fish have a history of preying on the immature stages of different aquatic insects, including mosquitoes. Noteworthy examples in Africa and Asia include Aphanius discolour and a fish in the Panchax group. Moreover, the common mosquitofish, Gambusia affinis, preys mostly on temporary-water mosquitoes such as anophelines, as compared to permanent-water breeders such as culicines. Mosquitoes belonging to the genus Toxorhynchites have a worldwide distribution and are mostly associated with the predation of other mosquito larvae cohabiting with them in natural and artificial water containers. These species are harmless to humans, as their adults do not suck human blood but feed on floral nectar. However, their activity is largely temperature-dependent: Toxorhynchites brevipalpis consumes 359 Aedes aegypti larvae at 30-32 ºC, in contrast to 154 larvae at 20-26 ºC. Although many bacterial species have been isolated from mosquito cadavers, those belonging to the genus Bacillus are found to be highly pathogenic against them. The successful species of this genus include Bacillus thuringiensis and Bacillus sphaericus. The prime targets of B. thuringiensis are mostly the immatures of the genera Aedes, Culex, Anopheles, and Psorophora, while B. sphaericus is specifically toxic against species of Culex, Psorophora, and Culiseta. Entomopathogenic nematodes belonging to the family Mermithidae are also pathogenic to different mosquito species: eighty different species of mosquitoes, including Anopheles, Aedes, and Culex, proved to be highly vulnerable to attack by two mermithid species, Romanomermis culicivorax and R. iyengari. Cytoplasmic polyhedrosis virus was the first described pathogenic virus, isolated from cadavers of the mosquito species Culex tarsalis. Other viruses that are pathogenic to culicines include iridoviruses, cytopolyhedrosis viruses, entomopoxviruses, and parvoviruses. Protozoan species belonging to the division Microsporidia are the common pathogenic protozoans in mosquito populations, killing their hosts through the chronic effects of parasitism. Moreover, due to their wide prevalence in anopheline mosquitoes and their transovarial and horizontal transmission from infected to healthy hosts, microsporidia of the genera Nosema and Amblyospora have received much attention in various mosquito control programmes. Fungus-based mycopesticides are used in the biological control of insect pests, with 47 species reported to be virulent against different stages of mosquitoes. These include both aquatic fungi, i.e., species of Coelomomyces, Lagenidium giganteum, and Culicinomyces clavisporus, and the terrestrial fungi Metarhizium anisopliae and Beauveria bassiana.
Hence, it is concluded that the integrated use of all these biological control agents can make a healthy contribution to mosquito control programmes and is a dire need of the time in order to avoid the repeated use of pesticides. Keywords: entomopathogenic nematodes, protozoa, Toxorhynchites, vector-borne
Procedia PDF Downloads 26623 Northern Nigeria Vaccine Direct Delivery System
Authors: Evelyn Castle, Adam Thompson
Abstract:
Background: In 2013, the Kano State Primary Health Care Management Board redesigned its routine immunization supply chain from a diffused pull model to a direct-delivery push model. This addressed issues around stockouts and reduced the time health facility staff spent collecting vaccines and reporting on vaccine usage. The board sought the help of a 3PL for twice-monthly deliveries from its cold store to 484 facilities across 44 local governments. eHA's Health Delivery Systems group formed a 3PL to serve 326 of these new facilities in partnership with the State, focusing on designing and implementing a technology system throughout. Basic methodologies: GIS Mapping: Planning the delivery of vaccines to hundreds of health facilities requires detailed route planning for delivery vehicles. Mapping the road networks across Kano and Bauchi with a custom routing tool provided information for the optimization of deliveries, reducing the number of kilometers driven each round by 20% and thereby reducing cost and delivery time. Direct Delivery Information System: Vaccine direct deliveries are facilitated through pre-round planning (driven by a health facility database, extensive GIS, and inventory workflow rules), a manager and driver control panel for customizing delivery routines and reporting, a progress dashboard, schedules/routes, packing lists, delivery reports, and driver data collection applications. MOVE: Last Mile Logistics Management System: MOVE has made vaccine supply information management timely, accurate, and actionable. It provides stock management workflow support, alerts management for cold chain exceptions/stockouts, and on-device analytics for health and supply chain staff. The software was built to be offline-first, with a user-validated interface and experience. Deployed to hundreds of vaccine storage sites, the improved information tools help facilitate the process of system redesign and change management. Findings: - Stockouts reduced from 90% to 33%. - Current health systems redesigned, managing vaccine supply for 68% of Kano's wards. - Near real-time reporting and data availability to track stock. - The paperwork burden on health staff dramatically reduced. - Medicine available when the community needs it. - Consistent vaccination dates for children under one to prevent polio, yellow fever, and tetanus. - Higher immunization rates mean lower infection rates. - Hundreds of millions of naira worth of vaccines successfully transported. - Fortnightly service to 326 facilities in 326 wards across 30 local government areas. - 6,031 cumulative deliveries. - Over 3.44 million doses transported. - Minimum travel distance covered in a delivery round of 2,000 km and a maximum of 6,297 km. - 153,409 km travelled by 6 drivers. - 500 facilities in 326 wards. - Data captured and synchronized for the first time. - Data-driven decision-making now possible. Conclusion: eHA's vaccine direct delivery has met challenges in Kano and Bauchi States and provided a reliable vaccine delivery service that ensures health facilities can run vaccination clinics for children under one. eHA uses innovative technology that delivers vaccines from Northern Nigerian zonal stores straight to healthcare facilities. This has helped healthcare workers spend less time managing supplies and more time delivering care, and the system will be rolled out nationally across Nigeria. Keywords: direct delivery information system, health delivery system, GIS mapping, Northern Nigeria, vaccines
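The route-planning idea can be illustrated with a toy nearest-neighbour ordering over great-circle distances; the coordinates are made up, and the project's custom routing tool works on the actual road network rather than straight-line distance.

```python
import math

# Toy delivery-route sketch: order facility visits from the cold store
# by repeatedly driving to the nearest unvisited facility. Coordinates
# are hypothetical; real planning uses road-network distances.
def haversine_km(a, b):
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

store = (12.00, 8.52)
facilities = {"HF-A": (12.10, 8.49), "HF-B": (11.95, 8.60),
              "HF-C": (12.05, 8.70), "HF-D": (11.90, 8.45)}

route, here, todo = [], store, dict(facilities)
while todo:
    nearest = min(todo, key=lambda name: haversine_km(here, todo[name]))
    route.append(nearest)
    here = todo.pop(nearest)
print("visit order:", route)
```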
Procedia PDF Downloads 37322 Human Bone Marrow Stem Cell Behavior on 3D Printed Scaffolds as Trabecular Bone Grafts
Authors: Zeynep Busra Velioglu, Deniz Pulat, Beril Demirbakan, Burak Ozcan, Ece Bayrak, Cevat Erisken
Abstract:
Bone tissue has the ability to perform a wide array of functions, including providing posture, load-bearing capacity, and protection for the internal organs, initiating hematopoiesis, and maintaining the homeostasis of key electrolytes via calcium/phosphate ion storage. The most common cause of bone defects is extensive trauma and subsequent infection. Bone tissue has a self-healing capability without scar tissue formation for the majority of injuries; however, some injuries may result in delayed union or fracture non-union. Such cases include the reconstruction of large bone defects and cases in which the regenerative process is compromised as a result of avascular necrosis or osteoporosis. Several surgical methods exist to treat bone defects, including the Ilizarov method, the Masquelet technique, growth factor stimulation, and bone replacement. Unfortunately, these are technically demanding and come with noteworthy disadvantages such as lengthy treatment duration, adverse effects on the patient's psychology, repeated surgical procedures, and often long hospitalization times. These limitations associated with surgical techniques make bone substitutes an attractive alternative. Here, it was hypothesized that a 3D printed scaffold would mimic trabecular bone in terms of biomechanical properties and that such scaffolds would support cell attachment and survival. To test this hypothesis, this study aimed at fabricating poly(lactic acid) (PLA) structures using 3D printing technology for trabecular bone defects, characterizing the scaffolds, and comparing them with bovine trabecular bone. The capacity of the scaffolds to support human bone marrow stem cell (hBMSC) attachment and survival was also evaluated. Cubes with a volume of 1 cm³ and pore sizes of 0.50, 1.00, and 1.25 mm were printed. The scaffolds/grafts were characterized in terms of porosity, contact angle, and compressive mechanical properties, as well as cell response. Porosities of the 3D printed scaffolds were calculated based on apparent densities. For contact angles, 50 µl of distilled water was dropped onto the scaffold surface, and contact angles were measured using ImageJ software. Mechanical characterization under compression was performed on scaffold and native trabecular bone (bovine, 15 months) specimens using a universal testing machine at a rate of 0.5 mm/min. hBMSCs were seeded onto the 3D printed scaffolds. After 3 days of incubation in fully supplemented Dulbecco's modified Eagle's medium, the cells were fixed using a 2% formaldehyde and glutaraldehyde mixture. The specimens were then imaged under scanning electron microscopy. Cell proliferation was determined using the EZQuant dsDNA quantitation kit; fluorescence was measured using a SpectraMax M2 microplate reader at excitation and emission wavelengths of 485 nm and 535 nm, respectively. Findings suggested that the porosity of scaffolds with pore dimensions of 0.5 mm, 1.0 mm, and 1.25 mm was not affected by pore size, while contact angle and compressive modulus decreased with increasing pore size. Biomechanical characterization of trabecular bone yielded higher modulus values compared to scaffolds of all pore sizes studied. Cells attached and survived on all surfaces, demonstrating higher proliferation on scaffolds with 1.25 mm pores than on those with 1 mm pores.
Collectively, given the biocompatibility of the scaffolds, and notwithstanding their lower mechanical properties compared to native bone, the 3D printed PLA scaffolds of this study appear to be candidate substitutes for bone repair and regeneration. Keywords: 3D printing, biomechanics, bone repair, stem cell
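The porosity calculation from apparent density is simple enough to show directly; the PLA bulk density and the sample mass below are illustrative assumptions, not measured values from the study.

```python
# Porosity from apparent density: porosity = 1 - rho_apparent / rho_bulk.
# The bulk density and sample figures are illustrative assumptions.
RHO_PLA = 1.24                     # g/cm^3, typical bulk PLA density

def porosity(mass_g, volume_cm3, rho_bulk=RHO_PLA):
    rho_apparent = mass_g / volume_cm3
    return 1.0 - rho_apparent / rho_bulk

# A 1 cm^3 printed cube weighing 0.62 g (hypothetical):
print(f"porosity = {porosity(0.62, 1.0):.0%}")   # -> 50%
```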
Procedia PDF Downloads 17221 Synthesis of Carbonyl Iron Particles Modified with Poly(Trimethylsilyloxyethyl Methacrylate) Nano-Grafts
Authors: Martin Cvek, Miroslav Mrlik, Michal Sedlacik, Tomas Plachy
Abstract:
Magnetorheological elastomers (MREs) are multi-phase composite materials containing micron-sized ferromagnetic particles dispersed in an elastomeric matrix. Their properties, such as modulus, damping, magnetostriction, and electrical conductivity, can be controlled by an external magnetic field and/or pressure. These features of MREs are exploited in the development of damping devices, shock attenuators, artificial muscles, sensors, and active elements of electric circuits. However, imperfections at the particle/matrix interfaces result in lower performance of MREs compared with theoretical values. Moreover, magnetic particles are susceptible to corrosion agents such as acid rain or sea humidity. Therefore, particle modification is an effective tool for improving MRE performance, owing to enhanced compatibility between particles and matrix as well as improvements in their thermo-oxidative and chemical stability. In this study, carbonyl iron (CI) particles were controllably modified with poly(trimethylsilyloxyethyl methacrylate) (PHEMATMS) nano-grafts to develop magnetic core-shell structures exhibiting proper wetting with various elastomeric matrices, resulting in improved performance in terms of rheological, magneto-piezoresistive, pressure-piezoresistive, or radio-absorbing properties. The desired molecular weight of the PHEMATMS nano-grafts was precisely tailored using surface-initiated atom transfer radical polymerization (ATRP). The CI particles were first functionalized using a 3-aminopropyltriethoxysilane agent, followed by reaction with α-bromoisobutyryl bromide. The ATRP was performed in anisole using ethyl α-bromoisobutyrate as a sacrificial initiator, N,N,N′,N′′,N′′-pentamethyldiethylenetriamine as a ligand, and copper bromide as a catalyst. To explore the effect of PHEMATMS molecular weight on the final properties, two variants of core-shell structures with different nano-graft lengths were synthesized, with the reaction kinetics designed through proper reactant feed ratios and polymerization times. The PHEMATMS nano-grafts were characterized by nuclear magnetic resonance and gel permeation chromatography, providing information on their monomer conversions, molecular chain lengths, and low polydispersity indexes (1.28 and 1.35) resulting from the executed ATRP. The successful modifications were confirmed via Fourier transform infrared and energy-dispersive spectroscopies, with the expected wavenumbers and elemental signatures, respectively, of the PHEMATMS nano-grafts appearing in the spectra. The surface morphology of bare CI and its PHEMATMS-grafted analogues was further studied by scanning electron microscopy, and the thicknesses of the grafted polymeric layers were directly observed by transmission electron microscopy. The contact angles, as a measure of particle/matrix compatibility, were investigated employing the static sessile drop method. The PHEMATMS nano-grafts enhanced the compatibility of the hydrophilic CI with the low-surface-energy hydrophobic polymer matrix in terms of wettability and dispersibility in the elastomeric matrix. Thus, the presence of possible defects at the particle/matrix interface is reduced, and higher performance of the modified MREs is expected. Keywords: atom transfer radical polymerization, core-shell, particle modification, wettability
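For orientation, the graft length targeted in such a polymerization follows the standard ATRP relation Mn ≈ ([M]₀/[I]₀) × conversion × M(monomer) + M(initiator); the feed ratio and conversion below are illustrative, not the study's values.

```python
# Theoretical number-average molar mass of a PHEMATMS graft from the
# standard ATRP relation. Feed ratio and conversion are illustrative.
M_HEMATMS = 202.3   # g/mol, 2-(trimethylsilyloxy)ethyl methacrylate
M_EBIB = 195.0      # g/mol, ethyl alpha-bromoisobutyrate initiator

def mn_theoretical(feed_ratio, conversion):
    """Mn = ([M]0/[I]0) * conversion * M_monomer + M_initiator."""
    return feed_ratio * conversion * M_HEMATMS + M_EBIB

print(f"Mn ~ {mn_theoretical(100, 0.65):,.0f} g/mol")  # 100:1 feed, 65% conv.
```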
Procedia PDF Downloads 200