Search results for: semantic web technologies
2878 Geovisualisation for Defense Based on a Deep Learning Monocular Depth Reconstruction Approach
Authors: Daniel R. dos Santos, Mateus S. Maldonado, Estevão J. R. Batista
Abstract:
Military commanders are increasingly dependent on spatial awareness: knowing where enemy forces are, understanding how battle scenarios change over time, and visualizing these trends in ways that offer insights for decision-making. Thanks to advancements in geospatial technologies and artificial intelligence algorithms, commanders are now able to modernize military operations on a global scale. Thus, geovisualisation has become an essential asset in the defense sector. It has become indispensable for better decision-making in dynamic/temporal scenarios, operation planning and management for the battlefield, situational awareness, effective planning, monitoring, and more. For example, a 3D visualization of battlefield data contributes to intelligence analysis, evaluation of post-mission outcomes, and creation of predictive models to enhance decision-making and strategic planning capabilities. However, old-school visualization methods are slow, expensive, and unscalable. Despite modern technologies for generating 3D point clouds, such as LiDAR and stereo sensors, deep learning-based monocular depth estimation can offer a faster and more detailed view of the environment, transforming single images into visual information for valuable insights. We propose a dedicated monocular depth reconstruction approach via deep learning techniques for 3D geovisualisation of satellite images. It introduces scalability in terrain reconstruction and data visualization. First, a dataset with more than 7,000 satellite images and associated digital elevation models (DEMs) is created. It is based on high-resolution optical and radar imagery collected from Planet and Copernicus, on which we fuse high-resolution topographic data obtained using technologies such as LiDAR and the associated geographic coordinates. Second, we developed an imagery-DEM fusion strategy that combines feature maps from two encoder-decoder networks. One network is trained with radar and optical bands, while the other is trained with DEM features to compute dense 3D depth. Finally, we constructed a benchmark with sparse depth annotations to facilitate future research. To demonstrate the proposed method's versatility, we evaluated its performance on unannotated satellite images and implemented an enclosed environment useful for geovisualisation applications. The algorithms were developed in Python 3, employing open-source computing libraries, i.e., Open3D, TensorFlow, and PyTorch3D. The proposed method supports fast and accurate decision-making with GIS for localization of troops, position of the enemy, and terrain and climate conditions. This analysis enhances situational awareness, enabling commanders to fine-tune strategies and allocate resources efficiently.
Keywords: depth, deep learning, geovisualisation, satellite images
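A minimal PyTorch sketch of the imagery-DEM fusion idea described above: two encoder branches whose feature maps are concatenated and decoded to dense depth. Layer sizes, channel counts, and fusion-by-concatenation are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

def encoder(in_ch):
    # small downsampling encoder (illustrative widths)
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    )

class FusionDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.imagery_branch = encoder(in_ch=4)  # e.g. RGB + one radar band (assumed)
        self.dem_branch = encoder(in_ch=1)      # DEM branch
        self.decoder = nn.Sequential(           # fuse features, upsample to dense depth
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, imagery, dem):
        fused = torch.cat([self.imagery_branch(imagery), self.dem_branch(dem)], dim=1)
        return self.decoder(fused)               # one depth value per pixel

net = FusionDepthNet()
depth = net(torch.rand(1, 4, 128, 128), torch.rand(1, 1, 128, 128))
print(depth.shape)  # torch.Size([1, 1, 128, 128])
```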
Procedia PDF Downloads 10
2877 Energy-Led Sustainability Assessment Approach for Energy-Efficient Manufacturing
Authors: Aldona Kluczek
Abstract:
In recent years, manufacturing processes have had to engage with sustainability issues in cost-effective ways that minimize energy use, decrease negative impacts on the environment, and are safe for society. However, attention has mostly been on separate sustainability assessment methods considering energy and material flow, energy consumption, and emission release or process control. In this paper, an energy-led sustainability assessment approach combining several methods: energy Life Cycle Assessment to assess environmental impact, Life Cycle Cost to analyze costs, and Social Life Cycle Assessment through an ‘energy LCA-based value stream map’, is used to assess the energy sustainability of the hardwood lumber manufacturing process in terms of technologies. The approach, integrating environmental, economic, and social issues, can be visualized for the considered energy-efficient technologies on a map of energy LCA-related (input and output) inventory data. It enables the most efficient technology for a given process to be identified through effective analysis of energy flow. It is also indicated that interventions in the considered technology should focus on environmental and economic improvements to achieve energy sustainability. The results indicate that the most intense energy losses are caused by a cogeneration technology. The environmental impact analysis shows that a substantial reduction of 34% can be achieved by improving it. From the LCC point of view, the result appears cost-effective when the improvement is implemented at the plant where it is used. Regarding the social dimension, every component of plant labor energy use in the life cycle of lumber production has positive energy benefits. The energy required to install the energy-efficient technology amounts to 30.32 kJ, compared to other components of plant labor energy, and it has the highest value in terms of energy-related social indicators. The paper depicts an example of hardwood lumber production in order to prove the applicability of the sustainability assessment method.
Keywords: energy efficiency, energy life cycle assessment, life cycle cost, social life cycle analysis, manufacturing process, sustainability assessment
Procedia PDF Downloads 247
2876 Demonstration Operation of Distributed Power Generation System Based on Carbonized Biomass Gasification
Authors: Kunio Yoshikawa, Ding Lu
Abstract:
Small-scale, distributed, and low-cost biomass power generation technologies are in high demand in modern society. There are big needs for these technologies in the disaster areas of developed countries and the un-electrified rural areas of developing countries. This work aims to present the technical feasibility of a portable, ultra-small power generation system based on the gasification of carbonized wood pellets/briquettes. Our project is designed to enable independent energy production from various kinds of biomass resources in the open field. The whole process mainly consists of two parts: biomass and waste pretreatment, and gasification and power generation. The first includes carbonization and densification (briquetting or pelletization); the second includes updraft fixed-bed gasification of carbonized pellets/briquettes, syngas purification, and power generation employing an internal combustion gas engine. A combined pretreatment process including carbonization without external energy and densification was adopted to deal with various biomass. Carbonized pellets showed better gasification performance than carbonized briquettes and their mixture. The 100-hour continuous operation results indicated that pelletization/briquetting of carbonized fuel enables the stable operation of an updraft gasifier if there are no blocking issues caused by the accumulation of tar. The cold gas efficiency and the carbon conversion during carbonized wood pellet gasification were about 49.2% and 70.5%, respectively, at an air equivalence ratio of around 0.32, and the corresponding overall efficiency of the gas engine was 20.3% during the stable stage. Moreover, the maximum output power was 21 kW at an air flow rate of 40 Nm³·h⁻¹. Therefore, the comprehensive system covering biomass carbonization, densification, gasification, syngas purification, and the engine system is feasible for portable, ultra-small power generation. This work has been supported by the Innovative Science and Technology Initiative for Security (Ministry of Defence, Japan).
Keywords: biomass carbonization, densification, distributed power generation, gasification
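The cold gas efficiency quoted above is the ratio of chemical energy in the producer gas to that in the solid fuel. A short sketch of that calculation; the flow rates and heating values are placeholders chosen to land near the reported 49.2%, not measured data.

```python
# cold gas efficiency = (syngas flow x syngas LHV) / (fuel feed x fuel LHV)
syngas_flow = 45.0   # Nm^3/h, assumed producer-gas flow
syngas_lhv = 4.6     # MJ/Nm^3, typical for air-blown updraft gasification (assumed)
fuel_feed = 15.0     # kg/h of carbonized pellets (assumed)
fuel_lhv = 28.0      # MJ/kg for carbonized wood (assumed)

cge = (syngas_flow * syngas_lhv) / (fuel_feed * fuel_lhv)
print(f"cold gas efficiency = {cge:.1%}")  # ~49% with these placeholder values
```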
Procedia PDF Downloads 156
2875 Automated System: Managing the Production and Distribution of Radiopharmaceuticals
Authors: Shayma Mohammed, Adel Trabelsi
Abstract:
Radiopharmacy is the art of preparing high-quality, radioactive, medicinal products for use in diagnosis and therapy. Unlike normal medicines, radiopharmaceuticals have a dual aspect (radioactive and medicinal) that makes their management highly critical. One of the most convincing applications of modern technologies is the ability to delegate the execution of repetitive tasks to programming scripts. Automation has found its way into the most skilled jobs, improving a company's overall performance by allowing human workers to focus on more important tasks than document filling. This project aims to implement a comprehensive system to ensure rigorous management of radiopharmaceuticals through a platform that links the Nuclear Medicine Service Management System to the Nuclear Radiopharmacy Management System, in accordance with the recommendations of the World Health Organization (WHO) and the International Atomic Energy Agency (IAEA). We attempt to build a web application that targets radiopharmacies; the platform is built atop the inherently compatible web stack, which allows it to work in virtually any environment. Different technologies are used in this project (PHP, Symfony, MySQL Workbench, Bootstrap, Angular 7, Visual Studio Code, and TypeScript). The operating principle of the platform is mainly based on two parts: a Radiopharmaceutical Backoffice for the radiopharmacist, who is responsible for the realization of radiopharmaceutical preparations and their delivery, and a Medical Backoffice for the doctor, who holds the authorization for the possession and use of radionuclides and is responsible for ordering radioactive products. The application consists of seven modules: Production, Quality Control/Quality Assurance, Release, General Management, References, Transport, and Stock Management. It allows eight classes of users: Production Manager (PM), Quality Control Manager (QCM), Stock Manager (SM), General Manager (GM), Client (Doctor), Parking and Transport Manager (PTM), Qualified Person (QP), and Technical and Production Staff. As a digital platform bringing together all players involved in the use of radiopharmaceuticals and integrating the stages of preparation, production, and distribution, web technologies in particular promise to offer all the benefits of automation while requiring no more than a web browser to act as a user client, which is a strength because the web stack is by nature multi-platform. This platform will provide a traceability system for radiopharmaceutical products to ensure the safety and radioprotection of staff and patients. The new integrated platform is an alternative to writing all the boilerplate paperwork manually, which is a tedious and error-prone task. It will minimize manual human manipulation, which has proven to be the main source of error in nuclear medicine. A codified electronic transfer of information from radiopharmaceutical preparation to delivery will further reduce the risk of maladministration.
Keywords: automated system, management, radiopharmacy, technical papers
Procedia PDF Downloads 156
2874 42CrMo4 Steel Flow Behavior Characterization for High Temperature Closed Dies Hot Forging in Automotive Components Applications
Authors: O. Bilbao, I. Loizaga, F. A. Girot, A. Torregaray
Abstract:
The current energy situation and the high competitiveness in industrial sectors such as the automotive one have made the development of new manufacturing processes with lower energy and raw material consumption a real necessity. As a consequence, new forming processes related to high-temperature hot forging in closed dies have emerged in recent years as solutions to expand the possibilities of hot forging and iron casting in the automotive industry. These technologies are midway between hot forging and semi-solid metal processes, working at temperatures higher than hot forging but below the solidus temperature or the semi-solid range, where no liquid phase is expected. This represents an advantage compared with semi-solid forming processes such as thixoforging, since such high temperatures do not need to be reached for high-melting-point alloys such as steels, reducing the manufacturing costs and the difficulties associated with their semi-solid processing. Compared with hot forging, this kind of technology allows the production of parts with as-forged properties and more complex, near-net shapes (thinner sidewalls), enhancing the possibility of designing lightweight components. From the process viewpoint, the forging forces are significantly decreased, and significant reductions in raw material, energy consumption, and forging steps have been demonstrated. Despite the mentioned advantages, from the material behavior point of view, the expansion of these technologies has shown the necessity of developing new material flow behavior models in the process working temperature range to make simulation or prediction of these new forming processes feasible. Moreover, knowledge of the material flow behavior in the working temperature range also allows the design of the new closed-die concepts required. In this work, the flow behavior in the mentioned temperature range of 42CrMo4 steel, widely used in commercial automotive components, has been studied. For that purpose, hot compression tests have been carried out in a thermomechanical tester over a temperature range that covers the material behavior from hot forging up to the NDT (Nil Ductility Temperature) temperature (1250 ºC, 1275 ºC, 1300 ºC, 1325 ºC, 1350 ºC, and 1375 ºC). As for the strain rates, three different orders of magnitude have been considered (0.1 s⁻¹, 1 s⁻¹, and 10 s⁻¹). The results obtained from the hot compression tests have then been treated in order to adapt or rewrite the Spittel model, widely used in commercial automotive software such as FORGE®, which restricts the currently existing models to temperatures up to 1250 ºC. Finally, the new flow behavior model obtained has been validated by process simulation of a commercial automotive component and comparison of the simulation results with experimental tests already performed in a laboratory cell of the new technology. As a conclusion of the study, a new flow behavior model for 42CrMo4 steel in the new working temperature range and a new process simulation of its application in commercial automotive components have been achieved and will be shown.
Keywords: 42CrMo4 high temperature flow behavior, high temperature hot forging in closed dies, simulation of automotive commercial components, spittel flow behavior model
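The Spittel model referenced above is commonly written as the Hensel-Spittel flow-stress law. A hedged sketch of its evaluation over the tested conditions; the coefficients are placeholders, not the fitted 42CrMo4 values from this study.

```python
import numpy as np

def hensel_spittel(strain, strain_rate, temp_c, A, m1, m2, m3, m4):
    """Hensel-Spittel law: sigma = A * exp(m1*T) * eps^m2 * epsdot^m3 * exp(m4/eps)."""
    return (A * np.exp(m1 * temp_c) * strain**m2
            * strain_rate**m3 * np.exp(m4 / strain))

for T in (1250, 1300, 1350):           # deg C, from the test matrix
    for rate in (0.1, 1.0, 10.0):      # s^-1, from the test matrix
        sigma = hensel_spittel(0.3, rate, T, A=3000.0, m1=-0.0025,
                               m2=0.15, m3=0.12, m4=-0.05)  # placeholder coefficients
        print(f"T={T} C, rate={rate} 1/s -> flow stress ~ {sigma:.0f} MPa")
```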
Procedia PDF Downloads 129
2873 The Development of Digital Economy in Thailand
Authors: Danuvasin Charoen
Abstract:
This study investigates the development of the digital economy policy in Thailand. The researcher describes the importance of digital technologies for the competitiveness development of the country. In addition, the researcher analyzes the components and roadmap of the digital economy policy in Thailand. The main problems and challenges of the policy were identified. The data were gathered and analyzed from secondary sources. The findings can be used to guide the implementation of the digital economy in Thailand and other developing economies.
Keywords: digital economy, ICT in developing countries, Thailand, ICT development
Procedia PDF Downloads 346
2872 DNA Fingerprinting of Some Major Genera of Subterranean Termites (Isoptera) (Anacanthotermes, Psammotermes and Microtermes) from Western Saudi Arabia
Authors: AbdelRahman A. Faragalla, Mohamed H. Alqhtani, Mohamed M. M. Ahmed
Abstract:
Saudi Arabia is currently beset by a diverse assemblage of subterranean termite fauna inflicting heavy, catastrophic damage on human-valued property in homes, storage facilities, warehouses, agricultural and horticultural crops (including okra, sweet pepper, tomatoes, sorghum, date palm trees, and citruses), and many forest domains and green desert oases. The most pressing priority is to use modern technologies to alleviate the painstaking obstacle of taxonomic identification of these injurious pests, which might lead to effective pest control in both infested agricultural commodities and field crops. Our study used DNA fingerprinting technologies to generate basic information on the genetic similarity between three predominant families containing the most destructive termite species. The methodology included extraction and isolation of DNA from members of the major families and the use of randomly selected primers and PCR amplification of the nucleotide sequences. GC content and annealing temperatures for all primers, PCR amplification, and agarose gel electrophoresis were also addressed, in addition to the scoring and analysis of Random Amplification of Polymorphic DNA-PCR (RAPD). A phylogenetic analysis of the different species was performed with a statistical computer program on the basis of the RAPD-DNA results and represented as a dendrogram based on the average band-sharing ratio between species. Our study aims to shed more light on this intriguing subject, which may lead to an expedited display of the kinship and relatedness of species in an ambitious undertaking to arrive at a correct taxonomic classification of termite species and to discover sibling species, so that a rational pest management strategy can be delineated.
Keywords: DNA fingerprinting, Western Saudi Arabia, DNA primers, RAPD
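A minimal Python sketch of the band-sharing/dendrogram step described above, using the Nei & Li (Dice) similarity and UPGMA clustering; the 0/1 band-presence matrix is invented for illustration, not the study's RAPD data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

bands = np.array([   # rows: termite samples, columns: RAPD band positions (hypothetical)
    [1, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 1, 0],
    [0, 1, 1, 0, 1, 1],
    [0, 0, 1, 0, 1, 1],
])

def band_sharing(a, b):
    # Nei & Li / Dice similarity: 2 * shared bands / (bands in a + bands in b)
    return 2 * np.sum(a & b) / (np.sum(a) + np.sum(b))

n = len(bands)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            dist[i, j] = 1.0 - band_sharing(bands[i], bands[j])

tree = linkage(squareform(dist), method="average")  # UPGMA, typical for band-sharing trees
print(dendrogram(tree, no_plot=True)["ivl"])        # leaf order of the dendrogram
```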
Procedia PDF Downloads 430
2871 Sustainable Membranes Based on 2D Materials for H₂ Separation and Purification
Authors: Juan A. G. Carrio, Prasad Talluri, Sergio G. Echeverrigaray, Antonio H. Castro Neto
Abstract:
Hydrogen, as a fuel and environmentally friendly energy carrier, is part of the transition towards low-carbon systems. The extensive deployment of hydrogen production, purification, and transport infrastructure still represents a significant challenge. Independent of the production process, hydrogen is generally mixed with light hydrocarbons and other undesirable gases that need to be removed to obtain H₂ with the purity required for end applications. In this context, membranes are one of the simplest, most attractive, sustainable, and best-performing technologies enabling hydrogen separation and purification. They demonstrate high separation efficiencies and low energy consumption in operation, which is a significant leap compared to current energy-intensive options. The unique characteristics of 2D laminates have given rise to a diversity of research on their potential applications in separation systems. Specifically, it is already known in the scientific literature that graphene oxide-based membranes present the highest reported selectivity of H₂ over other gases. This work explores the potential of a new type of 2D materials-based membrane for separating H₂ from CO₂ and CH₄. We have developed nanostructured composites based on 2D materials and applied them in the fabrication of membranes to maximise H₂ selectivity and permeability for different gas mixtures by adjusting the membranes' characteristics. Our proprietary technology does not depend on specific porous substrates, which allows its integration in diverse separation modules with different geometries and configurations, addressing the technical performance required for industrial applications and economic viability. The tuning and precise control of the processing parameters allowed us to keep the thicknesses of the membranes below 100 nanometres to provide high permeabilities. Our results for the selectivity of the new nanostructured 2D materials-based membranes are in the range of the performance reported in the literature on 2D materials (such as graphene oxide) applied to hydrogen purification, which validates their use as one of the most promising next-generation hydrogen separation and purification solutions.
Keywords: membranes, 2D materials, hydrogen purification, nanocomposites
Procedia PDF Downloads 134
2870 Domestic LED Lighting Designs Using Internet of Things
Authors: Gouresh Singhal, Rajib Kumar Panigrahi
Abstract:
In this paper, we examine historical and technological changes in the lighting industry. We propose a prototype technical solution at the block diagram and circuit level. Untapped and upcoming technologies such as the Cloud and 6LoWPAN are further explored. The paper presents a robust, realistic hardware design. A mobile application is also provided as the last-mile user interface. The paper highlights the current challenges to be faced and concludes with a pragmatic view of the lighting industry.
Keywords: 6LoWPAN, internet of things, mobile application, LED
Procedia PDF Downloads 571
2869 Efficient Backup Protection for Hybrid WDM/TDM GPON System
Authors: Elmahdi Mohammadine, Ahouzi Esmail, Najid Abdellah
Abstract:
This contribution presents a new protected hybrid WDM/TDM PON architecture using Wavelength Selective Switches and Optical Line Protection devices. The objective of using these technologies is to improve flexibility and enhance the protection of GPON networks.
Keywords: Wavelength Division Multiplexed Passive Optical Network (WDM-PON), Time Division Multiplexed PON (TDM-PON), architecture, protection, Wavelength Selective Switches (WSS), Optical Line Protection (OLP)
Procedia PDF Downloads 542
2868 The Attitudinal Effects of Dental Hygiene Students When Changing Conventional Practices of Preventive Therapy in the Dental Hygiene Curriculum
Authors: Shawna Staud, Mary Kaye Scaramucci
Abstract:
Objective: Rubber cup polishing has been a traditional method of preventive therapy in dental hygiene treatment. Newer methods such as air polishing have changed the way dental hygiene care is provided, yet this technique has not been embraced by students in the program or by practitioners in the workforce. Students entering the workforce tend to follow office protocol and have limited confidence to introduce technologies learned in the curriculum. This project was designed to help students gain confidence in newer skills and to encourage private practice settings to adopt newer technologies for patient care. Our program recently introduced air polishing earlier in the program, before the rubber cup technique, to determine whether students would embrace the technology and become leading-edge professionals when they enter the marketplace. Methods: The class of 2022 was taught the traditional method of polishing in the first-year curriculum and air polishing in the second-year curriculum. The class of 2023 will be taught the air polishing method in the first-year curriculum and the traditional method of polishing in the second-year curriculum. Pre- and post-graduation survey data will be collected from both cohorts. Descriptive statistics and paired t-tests with alpha set at .05 will be used to compare pre- and post-survey results. Results: This study is currently in progress, with a completion date of October 2023. The class of 2022 completed the pre-graduation survey in the spring of 2022. The post-graduation survey will be sent out in October 2022. The class of 2023 cohort will be surveyed in the spring of 2023 and in October 2023. Conclusion: Our hypothesis is that students who are taught air polishing first will be more inclined to adopt that skill in private practice, thereby embracing newer technology and improving oral health care.
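A short sketch of the planned analysis: a paired t-test on pre/post survey scores with alpha = .05. The scores below are simulated for illustration, not study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
pre = rng.normal(3.2, 0.6, size=30)           # hypothetical pre-graduation attitude scores
post = pre + rng.normal(0.4, 0.5, size=30)    # hypothetical post-graduation shift

t_stat, p_value = stats.ttest_rel(pre, post)  # paired t-test on matched observations
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("reject H0: attitudes changed after entering practice")
```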
Procedia PDF Downloads 103
2867 Text Localization in Fixed-Layout Documents Using Convolutional Networks in a Coarse-to-Fine Manner
Authors: Beier Zhu, Rui Zhang, Qi Song
Abstract:
Text contained within fixed-layout documents such as ID cards, invoices, cheques, and passports can be of great semantic value and so requires high localization accuracy. Recently, algorithms based on deep convolutional networks have achieved high performance on text detection tasks. However, for text localization in fixed-layout documents, such algorithms detect word bounding boxes individually, ignoring the layout information. This paper presents a novel architecture built on convolutional neural networks (CNNs). A global text localization network and a regional bounding-box regression network are introduced to tackle the problem in a coarse-to-fine manner. The text localization network simultaneously locates word bounding points, taking the layout information into account. The bounding-box regression network takes features pooled from arbitrarily sized RoIs as input and refines the localizations. The two networks share their convolutional features and are trained jointly. ID cards, a typical type of fixed-layout document, are selected to evaluate the effectiveness of the proposed system. The networks are trained on data cropped from natural scene images and on synthetic data produced by a synthetic text generation engine. Experiments show that our approach locates word bounding boxes with high accuracy and achieves state-of-the-art performance.
Keywords: bounding box regression, convolutional networks, fixed-layout documents, text localization
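A minimal PyTorch sketch of the coarse-to-fine idea: a shared backbone, a global head that regresses all word boxes jointly (using the layout), and an RoI-pooled head that refines each box. Layer sizes and the fixed number of words are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn
from torchvision.ops import roi_align

class CoarseToFineLocalizer(nn.Module):
    def __init__(self, num_words=8):
        super().__init__()
        self.backbone = nn.Sequential(            # shared convolutional features
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.coarse = nn.Sequential(              # global head: all boxes jointly
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, num_words * 4), nn.Sigmoid(),   # normalized (x1,y1,x2,y2)
        )
        self.fine = nn.Sequential(                # per-RoI refinement head
            nn.Flatten(), nn.Linear(128 * 7 * 7, 256), nn.ReLU(),
            nn.Linear(256, 4),                    # box deltas
        )

    def forward(self, images):
        b, _, h, w = images.shape
        feats = self.backbone(images)             # stride-8 feature map
        boxes = self.coarse(feats).view(b, -1, 4)
        abs_boxes = boxes * torch.tensor([w, h, w, h], dtype=images.dtype)
        pooled = roi_align(feats, list(abs_boxes), output_size=7,
                           spatial_scale=feats.shape[-1] / w)
        deltas = self.fine(pooled).view(b, -1, 4)
        return abs_boxes + deltas                 # refined boxes

model = CoarseToFineLocalizer()
print(model(torch.rand(2, 3, 256, 256)).shape)    # torch.Size([2, 8, 4])
```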
Procedia PDF Downloads 194
2866 The Role of Ideophones: Phonological and Morphological Characteristics in Literature
Authors: Cristina Bahón Arnaiz
Abstract:
Many Asian languages, such as Korean and Japanese, are well known for their wide use of sound-symbolic words, or ideophones. This is a very particular characteristic which hugely enriches their lexicons. Ideophones are a class of sound-symbolic words that utilize sound symbolism to express aspects, states, emotions, or conditions that can be experienced through the senses, such as shape, color, smell, action, or movement. Ideophones have very particular characteristics in terms of sound symbolism and morphology, which distinguish them from other words. The phonological characteristics of ideophones are vowel ablaut, or vowel gradation, and consonant mutation. In the case of Korean, there are light vowels and dark vowels; depending on the type of vowel used, the meaning changes slightly. Consonant mutation, also known as consonant ablaut, contributes to the level of intensity, emphasis, and volume of an expression. In addition to these phonological characteristics, there is one main morphological singularity, reduplication, which carries the meaning of continuity, repetition, intensity, emphasis, and plurality. All these characteristics play an important role in both linguistics and literature, as they enhance the meaning of what is being expressed with incredible semantic detail, expressiveness, and rhythm. The following study analyzes the ideophones used in a single paragraph of a Korean novel, which add incredible yet subtle detail to the meaning of the words and advance the expressiveness and rhythm of the text. The results of analyzing this paragraph, presented after the phonological and morphological characteristics of Korean ideophones, evidence the important role that ideophones play in literature.
Keywords: ideophones, mimetic words, phonomimes, phenomimes, psychomimes, sound symbolism
Procedia PDF Downloads 149
2865 Comparative Study of Properties of Iranian Historical Gardens by Focusing on Climate
Authors: Malihe Ahmadi
Abstract:
Nowadays, stress, tension, and neural problems are among the most important concerns of the present age. The environment plays a key role in improving mental health and reducing the stress of citizens. Establishing balance and an appropriate relationship between the city and the natural environment is one of the most important approaches of the present century. The approach to and logical planning of urban green spaces, as one of the basic elements of integration with nature, not only plays a key role in the quality and efficiency of comprehensive urban planning, but also improves the distribution of social activities and the happiness and liveliness of urban environments, leading to sustainable urban development. The main purpose of recovering urban identity is to consider culture, history, and past human lifestyles. This documentary-library research evaluates the historical properties of Iranian gardens in relation to climate conditions. The results reveal that, in addition to Iranian gardens following common principles of land lot, arrangement of flowers and plants, water, and specific buildings across different eras, the climate of different urban areas is among the fundamentals determining the design of green spaces and of the different buildings located in diverse areas; that is, Iranian gardens are spaces for merging natural and artificial elements that have an inseparable connection with semantic principles and guarantee different functions. Recognition and recreation are among the necessities of designing present-day urban gardens.
Keywords: historical gardens, climate, properties of Iranian gardens, Iran
Procedia PDF Downloads 397
2864 A 500 MWₑ Coal-Fired Power Plant Operated under Partial Oxy-Combustion: Methodology and Economic Evaluation
Authors: Fernando Vega, Esmeralda Portillo, Sara Camino, Benito Navarrete, Elena Montavez
Abstract:
The European Union aims to strongly reduce its CO₂ emissions from the energy and industrial sectors by 2030. The energy sector contributes more than two-thirds of the CO₂ emission share derived from anthropogenic activities. Although efforts are mainly focused on the use of renewables by the energy production sector, carbon capture and storage (CCS) remains a frontline option for reducing CO₂ emissions from industrial processes, particularly from fossil-fuel power plants and cement production. Among the most feasible and near-to-market CCS technologies, namely post-combustion and oxy-combustion, partial oxy-combustion is a novel concept that can potentially reduce the overall energy requirements of the CO₂ capture process. This technology consists of using a higher oxygen content in the oxidizer, which should increase the CO₂ concentration of the flue gas once the fuel is burnt. The CO₂ is then separated from the flue gas downstream by means of a conventional CO₂ chemical absorption process. The production of a more CO₂-concentrated flue gas should enhance CO₂ absorption into the solvent, leading to further improvements in CO₂ separation performance in terms of solvent flow rate, equipment size, and the energy penalty related to solvent regeneration. This work evaluates a portfolio of CCS technologies applied to fossil-fuel power plants. For this purpose, an economic evaluation methodology was developed in detail to determine the main economic parameters for CO₂ emission removal, such as the levelized cost of electricity (LCOE) and the CO₂ captured and avoided costs. ASPEN Plus™ software was used to simulate the main units of the power plant and to solve the energy and mass balances. Capital and investment costs were determined from the purchased cost of equipment, together with engineering costs and project and process contingencies. The annual capital cost and operating and maintenance costs were then obtained. A complete energy balance was performed to determine the net power produced in each case. The baseline case consists of a supercritical 500 MWₑ coal-fired power plant using anthracite as fuel without any CO₂ capture system. Four cases were proposed: conventional post-combustion capture, oxy-combustion, and partial oxy-combustion using two levels of oxygen-enriched air (40% v/v and 75% v/v). A CO₂ chemical absorption process using monoethanolamine (MEA) was used as the CO₂ separation process, whereas the O₂ requirement was met using a conventional air separation unit (ASU) based on Linde's cryogenic process. Results showed a reduction of 15% in the total investment cost of the CO₂ separation process when partial oxy-combustion was used. Oxygen-enriched air production also reduced by almost half the investment costs required for the ASU in comparison with the oxy-combustion cases. Partial oxy-combustion has a significant impact on the performance of both CO₂ separation and O₂ production technologies, and it can lead to further energy reductions using new developments in both CO₂ and O₂ separation processes.
Keywords: carbon capture, cost methodology, economic evaluation, partial oxy-combustion
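A short sketch of the standard capture-cost metric used in such evaluations, the cost of CO₂ avoided; the LCOE and emission-intensity figures below are placeholders, not the study's results.

```python
def co2_avoided_cost(lcoe_cap, lcoe_ref, e_ref, e_cap):
    """Extra cost of electricity per tonne of CO2 no longer emitted."""
    return (lcoe_cap - lcoe_ref) / (e_ref - e_cap)

lcoe_ref, e_ref = 60.0, 0.80   # reference plant: EUR/MWh, tCO2/MWh (assumed)
lcoe_cap, e_cap = 95.0, 0.10   # partial oxy-combustion case (assumed)

print(f"CO2 avoided cost = "
      f"{co2_avoided_cost(lcoe_cap, lcoe_ref, e_ref, e_cap):.1f} EUR/tCO2")
```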
Procedia PDF Downloads 148
2863 Good Practices for Model Structure Development and Managing Structural Uncertainty in Decision Making
Authors: Hossein Afzali
Abstract:
Increasingly, decision analytic models are used to inform decisions about whether or not to publicly fund new health technologies. It is well noted that the accuracy of model predictions is strongly influenced by the appropriateness of model structuring. However, there is relatively inadequate methodological guidance surrounding this issue in the guidelines developed by national funding bodies such as the Australian Pharmaceutical Benefits Advisory Committee (PBAC) and the National Institute for Health and Care Excellence (NICE) in the UK. This presentation aims to discuss issues around model structuring within decision making, with a focus on (1) the need for a transparent and evidence-based model structuring process to inform the most appropriate set of structural aspects as the base case analysis, and (2) the need to characterise structural uncertainty: if alternative plausible structural assumptions or judgements exist, the related structural uncertainty needs to be appropriately characterised. The presentation will provide an opportunity to share ideas and experiences on how the guidelines developed by national funding bodies address the above issues and to identify areas for further improvement. First, a review and analysis of the literature and of the guidelines developed by PBAC and NICE will be provided. Then, it will be discussed how the issues around model structuring (including structural uncertainty) are not handled and justified in a systematic way within the decision-making process, their potential impact on the quality of public funding decisions, and how they should be presented in submissions to national funding bodies. This presentation represents a contribution to good modelling practice within the decision-making process. Although the presentation focuses on the PBAC and NICE guidelines, the discussion applies more widely to many other national funding bodies that use economic evaluation to inform funding decisions but do not transparently address model structuring issues, e.g., the Medical Services Advisory Committee (MSAC) in Australia or the Canadian Agency for Drugs and Technologies in Health.
Keywords: decision-making process, economic evaluation, good modelling practice, structural uncertainty
Procedia PDF Downloads 186
2862 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines
Authors: Alexander Guzman Urbina, Atsushi Aoyama
Abstract:
The sustainability of traditional technologies employed in energy and chemical infrastructure poses a big challenge for our society. When making decisions related to the safety of industrial infrastructure, the values of accidental risk become relevant points for discussion. However, the challenge is the reliability of the models employed to obtain the risk data. Such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome those problems are built using Artificial Intelligence (AI), and more specifically using hybrid systems such as neuro-fuzzy algorithms. Therefore, this paper aims to introduce a hybrid algorithm for risk assessment trained using near-miss accident data. As mentioned above, the sustainability of traditional technologies related to energy and chemical infrastructure constitutes one of the major challenges that today's societies and firms are facing. Besides that, the adaptation of those technologies to the effects of climate change in sensitive environments represents a critical concern for safety and risk management. Regarding this issue, we argue that the social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by a lack of knowledge about the risks. In addition to the social consequences described above, and considering the industrial sector as critical infrastructure due to its large impact on the economy in case of failure, the relevance of industrial safety has become a critical issue for today's society. Regarding this safety concern, pipeline operators and regulators have been performing risk assessments in an attempt to accurately evaluate the probabilities of failure of the infrastructure and the consequences associated with those failures. However, estimating accidental risks in critical infrastructure involves substantial effort and cost due to the number of variables involved, the complexity, and the lack of information. Therefore, this paper aims to introduce a well-trained algorithm for risk assessment using deep learning, capable of dealing efficiently with the complexity and uncertainty. The advantage of deep learning using near-miss accident data is that it can be employed in risk assessment as an efficient engineering tool to treat the uncertainty of the risk values in complex environments. The basic idea of using a near-miss deep learning approach for neuro-fuzzy risk assessment in pipelines is to improve the validity of the risk values by learning from near-miss accidents and imitating human expertise in scoring risks and setting tolerance levels. In summary, the deep learning method for neuro-fuzzy risk assessment involves a regression analysis called the group method of data handling (GMDH), which consists of determining the optimal configuration of the risk assessment model and its parameters.
Keywords: deep learning, risk assessment, neuro fuzzy, pipelines
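A minimal sketch of one GMDH selection pass as described above: fit an Ivakhnenko quadratic polynomial to every pair of inputs and keep the best candidates as the next layer's features. The synthetic "risk factor" data are purely illustrative, not pipeline records.

```python
import itertools
import numpy as np

def fit_quadratic_pair(x1, x2, y):
    # Ivakhnenko polynomial: y ~ a0 + a1*x1 + a2*x2 + a3*x1*x2 + a4*x1^2 + a5*x2^2
    X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ coef

def gmdh_layer(features, y, keep=3):
    """Fit a quadratic model to every pair of inputs and keep the best ones."""
    candidates = []
    for i, j in itertools.combinations(range(features.shape[1]), 2):
        pred = fit_quadratic_pair(features[:, i], features[:, j], y)
        candidates.append((np.mean((pred - y) ** 2), pred))
    candidates.sort(key=lambda c: c[0])           # rank by mean squared error
    return np.column_stack([pred for _, pred in candidates[:keep]])

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))              # e.g. corrosion, pressure, age (hypothetical)
y = 0.5 * X[:, 0] * X[:, 1] + X[:, 2]**2   # unknown risk relationship to recover
layer1 = gmdh_layer(X, y)
layer2 = gmdh_layer(layer1, y)
print("layer-2 best MSE:", np.mean((layer2[:, 0] - y) ** 2))
```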
Procedia PDF Downloads 292
2861 Personalizing Human Physical Life Routines Recognition over Cloud-based Sensor Data via AI and Machine Learning
Authors: Kaushik Sathupadi, Sandesh Achar
Abstract:
Pervasive computing is a growing research field that aims to recognize human physical life routines (HPLR) based on body-worn sensors such as MEMS sensor-based technologies. The use of these technologies for human activity recognition is progressively increasing. On the other hand, personalizing human life routines using numerous machine learning techniques has always been an intriguing topic. While various methods have demonstrated the ability to recognize basic movement patterns, they still need to be improved to anticipate the dynamics of human living patterns. This study introduces state-of-the-art techniques for recognizing static and dynamic patterns and forecasting challenging activities from multi-fused sensors. Furthermore, numerous MEMS signals are extracted from one self-annotated IM-WSHA dataset and two benchmark datasets. First, the acquired raw data are filtered with z-normalization and denoising methods. Then, statistical, local binary pattern, auto-regressive model, and intrinsic time-scale decomposition features are extracted from different domains. Next, the acquired features are optimized using maximum relevance and minimum redundancy (mRMR). Finally, an artificial neural network is applied to analyze the whole system's performance. As a result, we attained a 90.27% recognition rate on the self-annotated dataset, while on HARTH and KU-HAR we achieved 83% on nine living activities and 90.94% on 18 static and dynamic routines. Thus, the proposed HPLR system outperformed other state-of-the-art systems when evaluated against other methods in the literature.
Keywords: artificial intelligence, machine learning, gait analysis, local binary pattern (LBP), statistical features, micro-electro-mechanical systems (MEMS), maximum relevance and minimum redundancy (mRMR)
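A hedged scikit-learn sketch of the described pipeline (z-normalization, feature extraction, relevance-based selection, neural network); the data are simulated, and the mutual-information ranking is a simple stand-in for full mRMR, not the authors' implementation.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
windows = rng.normal(size=(600, 128, 3))  # 600 windows of tri-axial MEMS data (simulated)
labels = rng.integers(0, 6, size=600)     # six hypothetical activity classes

# simple per-axis statistical features (mean, std, min, max) as stand-ins
feats = np.concatenate([windows.mean(1), windows.std(1),
                        windows.min(1), windows.max(1)], axis=1)
feats = StandardScaler().fit_transform(feats)            # z-normalization

mi = mutual_info_classif(feats, labels, random_state=0)  # crude relevance ranking
selected = feats[:, np.argsort(mi)[::-1][:8]]            # keep 8 most relevant features

X_tr, X_te, y_tr, y_te = train_test_split(selected, labels, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```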
Procedia PDF Downloads 20
2860 Fault Tolerant and Testable Designs of Reversible Sequential Building Blocks
Authors: Vishal Pareek, Shubham Gupta, Sushil Chandra Jain
Abstract:
With increasing demand for high-speed computation, power consumption, heat dissipation, and chip size issues are posing challenges for logic design with conventional technologies. Recovery from bit loss and bit errors is another issue, requiring reversibility and fault tolerance in computation. Reversible computing is emerging as an alternative to conventional technologies to overcome the above problems and is helpful in diverse areas such as low-power design, nanotechnology, and quantum computing. The bit loss issue can be solved through a unique input-output mapping, which requires reversibility, while the bit error issue requires the capability of fault tolerance in the design. In order to incorporate reversibility, a number of combinational reversible-logic-based circuits have been developed. However, very few sequential reversible circuits have been reported in the literature. To make circuits fault tolerant, a number of fault models and test approaches have been proposed for reversible logic. In this paper, we have attempted to incorporate fault tolerance in sequential reversible building blocks such as the D flip-flop, T flip-flop, JK flip-flop, R-S flip-flop, master-slave D flip-flop, and double edge-triggered D flip-flop by making them parity preserving. The importance of this work lies in the fact that it provides designs of reversible sequential circuits that are completely testable for any stuck-at fault and single-bit fault. In our opinion, our designs of reversible building blocks are superior to existing designs in terms of quantum cost, hardware complexity, constant inputs, garbage outputs, and number of gates, and a design of an online testable D flip-flop has been proposed for the first time. We hope our work can be extended to building complex reversible sequential circuits.
Keywords: parity preserving gate, quantum computing, fault tolerance, flip-flop, sequential reversible logic
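An illustrative check of the parity-preserving property at the heart of such designs, using the Fredkin (controlled-swap) gate, a classic parity-preserving reversible gate: the mapping is bijective, and input parity always equals output parity.

```python
from itertools import product

def fredkin(c, x, y):
    # controlled swap: if c == 1, x and y are exchanged
    return (c, y, x) if c else (c, x, y)

inputs = list(product((0, 1), repeat=3))
outputs = [fredkin(*bits) for bits in inputs]
assert len(set(outputs)) == 8                # bijective mapping -> reversible
assert all(sum(i) % 2 == sum(o) % 2         # input parity == output parity
           for i, o in zip(inputs, outputs))
print("Fredkin gate is reversible and parity preserving")
```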
Procedia PDF Downloads 545
2859 Opportunities and Challenges for Decarbonizing Steel Production by Creating Markets for ‘Green Steel’ Products
Authors: Hasan Muslemani, Xi Liang, Kathi Kaesehage, Francisco Ascui, Jeffrey Wilson
Abstract:
The creation of a market for lower-carbon steel products, here called ‘green steel’, has been identified as an important means to support the introduction of breakthrough emission reduction technologies into the steel sector. However, the definition of what ‘green’ entails in the context of steel production, the implications for the competitiveness of green steel products in local and international markets, and the market mechanisms necessary to support their successful market penetration remain poorly explored. This paper addresses this gap by holding semi-structured interviews with international sustainability experts and commercial managers from leading steel trade associations, research institutes, and steelmakers. Our findings show that there is an urgent need to establish a set of standards to define what ‘greenness’ means in the steelmaking context; standards that avoid market disruptions, unintended consequences, and opportunities for greenwashing. We also highlight that the introduction of green steel products will have implications for product competitiveness on three different levels: 1) between primary and secondary steelmaking routes, 2) with traditional, less green steel, and 3) with other substitutable materials (e.g., cement and plastics). This paper emphasises the need for steelmakers to adopt a transitional approach in deploying different low-carbon technologies, based on their stage of technological maturity, applicability in certain country contexts, capacity to reduce emissions over time, and the ability of the investment community to support their deployment. We further identify market mechanisms to support green steel production, including carbon border adjustments and public procurement, highlighting the need for a combination of complementary policies to ensure the products' roll-out. The study further shows that the auto industry is a likely candidate for green steel consumption, where a market would be supported by price premiums paid by willing consumers, such as buyers of high-end luxury vehicles.
Keywords: green steel, decarbonisation, business model innovation, market analysis
Procedia PDF Downloads 133
2858 Human Factors Interventions for Risk and Reliability Management of Defence Systems
Authors: Chitra Rajagopal, Indra Deo Kumar, Ila Chauhan, Ruchi Joshi, Binoy Bhargavan
Abstract:
Reliability and safety are essential for the success of mission-critical and safety-critical defense systems. Humans are part of the entire life cycle of defense systems development and deployment. The majority of industrial accidents or disasters are attributed to human errors. Therefore, considerations of human performance and human reliability are critical in all complex systems, including defense systems. Defense systems operating from ground, naval, and aerial platforms in diverse conditions impose unique physical and psychological challenges on their human operators. Some of the safety- and mission-critical defense systems with human-machine interaction are fighter planes, submarines, warships, combat vehicles, and missiles launched from aerial and naval platforms. Human roles and responsibilities are also going through a transition due to the infusion of artificial intelligence and cyber technologies. Human operators not accustomed to such challenges are more likely to commit errors, which may lead to accidents or loss events. In such a scenario, it is imperative to understand the human factors in defense systems for better system performance, safety, and cost-effectiveness. A case study using a Task Analysis (TA) based methodology for the assessment and reduction of human errors in an air and missile defense system, in the context of emerging technologies, is presented. Action-oriented task analysis techniques such as Hierarchical Task Analysis (HTA) and the Operator Action Event Tree (OAET), along with the Critical Action and Decision Event Tree (CADET) for cognitive task analysis, were used. Human factors assessment based on task analysis helps in realizing safe and reliable defense systems. These techniques helped in identifying human errors during different phases of air and missile defence operations, helping to meet the requirements of a safe, reliable, and cost-effective mission.
Keywords: defence systems, reliability, risk, safety
Procedia PDF Downloads 135
2857 Digital Transformation in Education: Artificial Intelligence Awareness of Preschool Teachers
Authors: Cansu Bozer, Saadet İrem Turgut
Abstract:
Artificial intelligence (AI) has become one of the most important technologies of the digital age and is transforming many sectors, including education. The advantages offered by AI, such as automation, personalised learning, and data analytics, create new opportunities for both teachers and students in education systems. Preschool education plays a fundamental role in the cognitive, social, and emotional development of children. In this period, the foundations of children's creative thinking, problem-solving, and critical thinking skills are laid. Educational technologies, especially artificial intelligence-based applications, are thought to contribute to the development of these skills. For example, AI-supported digital learning tools can support learning processes by offering activities that can be customised to the individual needs of each child. However, the successful use of AI-based applications in preschool education can only be realised under the guidance of teachers who have the right knowledge about this technology. Therefore, it is of great importance to measure preschool teachers' awareness levels of artificial intelligence and to understand which variables affect this awareness. The aim of this study is to measure preschool teachers' awareness of artificial intelligence and to determine which factors are related to this awareness. To this end, teachers' level of knowledge about artificial intelligence, their thoughts about the role of artificial intelligence in education, and their attitudes towards artificial intelligence will be evaluated. The study will be conducted with 100 teachers working in Turkey using a descriptive survey model. In this context, the ‘Artificial Intelligence Awareness Level Scale for Teachers’ developed by Ferikoğlu and Akgün (2022) will be used. The collected data will be analysed using SPSS (Statistical Package for the Social Sciences) software. Descriptive statistics (frequency, percentage, mean, standard deviation) and relationship analyses (correlation and regression analyses) will be used in the data analysis. As a result of the study, the level of artificial intelligence awareness of preschool teachers will be determined, and the factors affecting this awareness will be identified. The findings will contribute to determining what can be done to increase artificial intelligence awareness in preschool education.
Keywords: education, child development, artificial intelligence, preschool teachers
Procedia PDF Downloads 19
2856 Investigation of Delivery of Triple Play Data in GE-PON Fiber to the Home Network
Authors: Ashima Anurag Sharma
Abstract:
Optical fiber based networks can deliver performance that supports the increasing demand for high-speed connections. One of the new technologies that have emerged in recent years is Passive Optical Networks. This research paper demonstrates the simultaneous delivery of triple play services (data, voice, and video). A comparison between various data rates is presented. It is demonstrated that as the data rate increases, the number of users that can be supported decreases due to an increase in bit error rate.
Keywords: BER, PON, TDMPON, GPON, CWDM, OLT, ONT
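A small illustration of the Q-factor/BER relationship commonly used to evaluate such optical links; the Q values below are hypothetical, standing in for the degradation seen as the data rate or split ratio grows.

```python
import numpy as np
from scipy.special import erfc

def ber_from_q(q):
    # standard Gaussian-noise approximation for on-off-keyed receivers
    return 0.5 * erfc(q / np.sqrt(2))

for q in (7.0, 6.0, 5.0):  # Q degrades as rate or user count grows (assumed values)
    print(f"Q = {q}: BER ~ {ber_from_q(q):.2e}")
```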
Procedia PDF Downloads 527
2855 Investigation of Delivery of Triple Play Services
Authors: Paramjit Mahey, Monica Sharma, Jasbinder Singh
Abstract:
Fiber based access networks can deliver performance that supports the increasing demand for high-speed connections. One of the new technologies that have emerged in recent years is Passive Optical Networks. This paper demonstrates the simultaneous delivery of triple play services (data, voice, and video). A comparative investigation of the suitability of various data rates is presented. It is demonstrated that as the data rate increases, the number of users that can be accommodated decreases due to an increase in bit error rate.
Keywords: BER, PON, TDMPON, GPON, CWDM, OLT, ONT
Procedia PDF Downloads 541
2854 True and False Cognates of Japanese, Chinese and Philippine Languages: A Contrastive Analysis
Authors: Jose Marie E. Ocdenaria, Riceli C. Mendoza
Abstract:
Culturally, languages meet, merge, share, exchange, appropriate, donate, and divide in, to, and from each other. This type of recurrence manifests in East Asian cultures, where language influence diffuses across geographical proximities. Historically, China has had notable impacts on Japan's culture; for instance, Japanese borrowed words from Chinese, along with ways of reading and writing. This qualitative, descriptive study employing contrastive analysis addressed the true and false cognates of Japanese-Philippine languages and Chinese-Philippine languages. It involved a rich collection of data from various sources, such as textual evidence or corpora, to gain a deeper understanding of true and false cognates between L1 and L2. Cognates of Japanese-Philippine languages and Chinese-Philippine languages were analyzed contrastively according to orthography, phonology, and semantics. The words presented were roots; however, derivatives, reduplications, and variants of stress were included when they shed light on the comparison. The basis for grouping the cognates was their phonetic-semantic resemblance. The analysis revealed that there are words which may have several types of lexical relationship. Further, the study revealed that the Japanese language has more false cognates in the Philippine languages, particularly in Tagalog and Cebuano, while there are more true cognates of Chinese in Tagalog. It is the hope of this study to provide a significant contribution to a diverse audience, including teachers and learners of foreign languages such as Japanese and Chinese, future researchers and investigators, applied linguists, curricular theorists, the community, and publishers.
Keywords: contrastive analysis, Japanese, Chinese and Philippine languages, qualitative and descriptive study, true and false cognates
Procedia PDF Downloads 137
2853 Safe and Scalable Framework for Participation of Nodes in Smart Grid Networks in a P2P Exchange of Short-Term Products
Authors: Maciej Jedrzejczyk, Karolina Marzantowicz
Abstract:
The traditional utility value chain has been transformed over the last few years into unbundled markets. Increased distributed generation of energy is one of the considerable challenges faced by smart grid networks. New sources of energy introduce volatile demand response, which has a considerable impact on traditional middlemen in the E&U market. The purpose of this research is to search for ways to allow near-real-time electricity markets to transact with surplus energy based on accurate, time-synchronous measurements. The proposed framework evaluates the use of secure peer-to-peer (P2P) communication and distributed transaction ledgers to provide a flat hierarchy and to allow real-time insights into present and forecasted grid operations, as well as the state and health of the network. The objective is to achieve dynamic grid operations with more efficient resource usage, higher security of supply, and a longer grid infrastructure life cycle. The methods used for this study are based on a comparative analysis of different distributed ledger technologies in terms of scalability, transaction performance, pluggability with external data sources, data transparency, privacy, end-to-end security, and adaptability to various market topologies. The intended output of this research is the design of a safer, more efficient, and scalable smart grid network that bridges the gap between traditional components of the energy network and individual energy producers. The results of this study are ready for detailed measurement testing, a likely follow-up in separate studies. New platforms for smart grids achieving measurable efficiencies will allow the development of new types of grid KPIs, multi-smart-grid branches, markets, and businesses.
Keywords: autonomous agents, distributed computing, distributed ledger technologies, large scale systems, micro grids, peer-to-peer networks, self-organization, self-stabilization, smart grids
Procedia PDF Downloads 300
2852 Policy Views of Sustainable Integrated Solution for Increased Synergy between Light Railways and Electrical Distribution Network
Authors: Mansoureh Zangiabadi, Shamil Velji, Rajendra Kelkar, Neal Wade, Volker Pickert
Abstract:
The EU has set itself a long-term goal of reducing greenhouse gas emissions by 80-95% compared to 1990 levels by 2050, as set out in the Energy Roadmap 2050. This paper reports on the European Union H2020-funded E-Lobster project, which demonstrates tools and technologies, software and hardware, for integrating the distribution grid and railway power systems with power electronics technologies (the Smart Soft Open Point, sSOP) and local energy storage. In this context, this paper describes the existing policies and regulatory frameworks of the energy market at the European level, with a special focus at the national level on the countries where the members of the consortium are located and where the demonstration activities will be implemented. Taking into account the disciplinary approach of E-Lobster, the main policy areas investigated include electricity, the energy market, energy efficiency, transport, and smart cities. Energy storage will play a key role in enabling the EU to develop a low-carbon electricity system. In recent years, Energy Storage Systems (ESSs) have been gaining importance due to emerging applications, especially the electrification of the transportation sector and the grid integration of volatile renewables. The need for storage systems has led to performance improvements in ESS technologies and significant price declines. This opens a new market where ESSs can be a reliable and economical solution. One such emerging market for ESS is R+G management, which will be investigated and demonstrated within the E-Lobster project. The surplus of energy in one type of power system (e.g., due to metro braking) might be directly transferred to the other power system (or vice versa). However, this would usually happen at unfavourable instants, when the recipient does not need additional power. Thus, the role of ESS is to enhance the advantages coming from the interconnection of railway power systems and distribution grids by offering an additional energy buffer. Consequently, the surplus/deficit of energy in, e.g., railway power systems need not be immediately transferred to/from the distribution grid but can be stored and used when it is really needed. This will assure better energy exchange management between the railway power systems and distribution grids and lead to more efficient loss reduction. In this framework, identifying the existing policies and regulatory frameworks is crucial for the project activities and for the future development of business models for the E-Lobster solutions. The projections carried out by the European Commission, the Member States, and stakeholders, and their analysis, indicated trends, challenges, opportunities, and structural changes needed to design policy measures that provide an appropriate framework for investors. This study will be used as a reference for the discussion in the envisaged workshops with stakeholders (DSOs and transport managers) in the E-Lobster project.
Keywords: light railway, electrical distribution network, electrical energy storage, policy
Procedia PDF Downloads 1352851 Human-Automation Interaction in Law: Mapping Legal Decisions and Judgments, Cognitive Processes, and Automation Levels
Authors: Dovile Petkeviciute-Barysiene
Abstract:
Legal technologies not only create new ways of accessing and providing legal services but also transform the role of legal practitioners. Both lawyers and users of legal services expect automated solutions to outperform people in objectivity and impartiality. Although the fairness of automated decisions is crucial, research on assessing the characteristics of automated processes related to perceived fairness has only begun. One of the major obstacles to this research is the lack of a comprehensive understanding of which legal actions are, or could meaningfully be, automated, and to what extent. Without illustrative examples, neither the public nor legal practitioners can readily envision such technological input. The aim of this study is to map the decision-making stages and automation levels which are and/or could be achieved in legal actions related to pre-trial and trial processes. Major legal decisions and judgments are identified during consultations with legal practitioners. The dual-process model of information processing is used to describe the cognitive processes taking place while making legal decisions and judgments during pre-trial and trial actions. Some existing legal technologies are incorporated into the analysis as well. Several published automation-level taxonomies are considered, because none of them alone fits the legal context well: they were all created for avionics, teleoperation, unmanned aerial vehicles, etc. From the information-processing perspective, the analysis of legal decisions and judgments exposes the situations that are most sensitive to cognitive bias and, among other things, helps to identify the areas that would benefit from automation the most. Automation-level analysis, in turn, provides a systematic approach to interaction and cooperation between humans and algorithms. Moreover, an integrated map of legal decisions and judgments, information-processing characteristics, and automation levels together provides groundwork for research on the perceived fairness and acceptance of legal technology. Acknowledgment: This project has received funding from the European Social Fund (project No 09.3.3-LMT-K-712-19-0116) under a grant agreement with the Research Council of Lithuania (LMTLT). Keywords: automation levels, information processing, legal judgment and decision making, legal technology
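As a rough illustration of the intended mapping, the hypothetical fragment below tags a few pre-trial and trial decisions with a dual-process label and a candidate automation level on a Parasuraman-style scale. Every entry is an invented example for illustration, not a finding of the study.

```python
# Hypothetical fragment of a legal decision/automation-level map.
# Stages, labels, and levels are invented illustrations, not study results.
from dataclasses import dataclass

@dataclass
class LegalDecision:
    stage: str             # 'pre-trial' or 'trial'
    decision: str          # the judgment or decision being made
    processing: str        # 'intuitive (System 1)' or 'deliberative (System 2)'
    automation_level: int  # 0 = fully manual ... 10 = fully autonomous

mapping = [
    LegalDecision("pre-trial", "document discovery and relevance review",
                  "deliberative (System 2)", 7),
    LegalDecision("pre-trial", "bail risk assessment",
                  "intuitive (System 1)", 4),
    LegalDecision("trial", "witness credibility judgment",
                  "intuitive (System 1)", 1),
    LegalDecision("trial", "sentencing within statutory guidelines",
                  "deliberative (System 2)", 3),
]

# Bias-sensitive, low-automation decisions are natural candidates for support:
candidates = [d for d in mapping
              if d.processing.startswith("intuitive") and d.automation_level < 5]
```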
Procedia PDF Downloads 1422850 Microbial Bioproduction with Design of Metabolism and Enzyme Engineering
Authors: Tomokazu Shirai, Akihiko Kondo
Abstract:
Technologies of metabolic engineering and synthetic biology are essential for effective microbial bioproduction. It is especially important to develop in silico tools for designing metabolic pathways that produce unnatural, valuable chemicals, such as the fuels or plastics conventionally derived from fossil resources. Here we demonstrate two in silico tools for designing novel metabolic pathways: BioProV and HyMeP. Furthermore, we succeeded in creating an artificial metabolic pathway by enzyme engineering. Keywords: bioinformatics, metabolic engineering, synthetic biology, genome scale model
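As a toy illustration of the kind of computation such in silico pathway-design tools perform, the sketch below runs flux-balance analysis (FBA) on an invented three-reaction network using scipy; it does not reproduce BioProV or HyMeP, whose internals are not described here.

```python
# Toy flux-balance analysis (FBA): maximize secretion of a target chemical
# subject to steady-state mass balance S v = 0. The three-reaction network
# and all bounds are invented for illustration.
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (rows: metabolites A, B; columns: reactions)
#   R1: substrate uptake -> A
#   R2: A -> B (engineered enzyme step)
#   R3: B -> excreted target chemical
S = np.array([[1, -1,  0],
              [0,  1, -1]])
bounds = [(0, 10), (0, 100), (0, 100)]   # uptake capped at 10 mmol/gDW/h
c = [0, 0, -1]                           # linprog minimizes, so maximize R3

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("Optimal fluxes v1..v3:", res.x)   # steady state forces v1 = v2 = v3 = 10
```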
Procedia PDF Downloads 3392849 Metagenomic analysis of Irish cattle faecal samples using Oxford Nanopore MinION Next Generation Sequencing
Authors: Niamh Higgins, Dawn Howard
Abstract:
The Irish agri-food sector is of major importance to Ireland's manufacturing sector and to the Irish economy through employment and the export of animal products worldwide. Infectious diseases and parasites affect farm animal health, reducing profitability and productivity. For Irish dairy farming to be sustainable, the highest standard of animal health must be maintained. Culture-based methods of microbial identification tend to overestimate the prevalence of species that grow easily on an agar surface, and such methods can leave all but a small fraction (around 1%) of the complete microbial diversity in an environment unaccounted for. New technologies are needed to address these issues and assist with animal health. Metagenomic approaches sequence the total DNA of environmental samples, providing information on both the whole genomes and transcriptomes present and yielding highly resolved functional and taxonomic information. Nanopore next-generation technologies are powerful sequencing platforms: they provide high throughput, have low material requirements, and produce ultra-long reads, simplifying the experimental process. The aim of this study is to use a metagenomic approach to analyze dairy cattle faecal samples using the Oxford Nanopore MinION next-generation sequencer and to establish an in-house pipeline for the metagenomic characterization of complex samples. Faecal samples will be obtained from Irish dairy farms, DNA will be extracted, and the MinION will be used for sequencing, followed by bioinformatics analysis. Of particular interest is the parasite Buxtonella sulcata, on which there has been little research and whose presence on Irish dairy farms has not been studied. Preliminary results have shown the ability of the MinION to produce hundreds of reads in a relatively short time frame of eight hours. The faecal samples were obtained from 90 dairy cows on a Galway farm. The results from the Oxford Nanopore 'What's in my pot' (WIMP) workflow on Epi2me show that, of a total of 926 classified reads, 87% were from the kingdom Bacteria, 10% from the kingdom Eukaryota, 3% from the kingdom Archaea, and < 1% from the kingdom Viruses. The most prevalent bacteria were those from the genera Acholeplasma (71 reads), Bacteroides (35 reads), Clostridium (33 reads), and Acinetobacter (20 reads). The most prevalent species were those from the genus Acholeplasma, including Acholeplasma laidlawii (39 reads) and Acholeplasma brassicae (26 reads). These preliminary results show the ability of the MinION to identify microorganisms to species level from a complex sample. With ongoing optimization of the pipeline, the number of classified reads is likely to increase. Metagenomics has potential in animal health for the diagnosis of microorganisms present on farms, supporting a prevention-rather-than-cure approach, as outlined in the DAFM's National Farmed Animal Health Strategy 2017-2022. Keywords: animal health, Buxtonella sulcata, infectious disease, Irish dairy cattle, metagenomics, MinION, next generation sequencing
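A minimal sketch of the kind of post-classification tallying such an in-house pipeline performs is shown below; the total and genus read counts are taken from the abstract, while the tabular presentation is an assumption (real Epi2me/WIMP output is richer).

```python
# Tally genus-level read counts from WIMP classifications and report each
# genus as a share of all classified reads. Counts come from the abstract.
from collections import Counter

TOTAL_CLASSIFIED = 926  # classified reads reported by WIMP

genus_counts = Counter({
    "Acholeplasma": 71,
    "Bacteroides": 35,
    "Clostridium": 33,
    "Acinetobacter": 20,
})

for genus, n in genus_counts.most_common():
    share = 100 * n / TOTAL_CLASSIFIED
    print(f"{genus:15s} {n:4d} reads  ({share:4.1f}% of classified)")
# Acholeplasma     71 reads  ( 7.7% of classified)  ... and so on
```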
Procedia PDF Downloads 150