Search results for: automated checklists
135 Criticality Assessment Model for Water Pipelines Using Fuzzy Analytical Network Process
Abstract:
Water networks (WNs) are responsible for providing adequate amounts of safe, high-quality water to the public. Like other critical infrastructure systems, WNs are subject to deterioration, which increases the number of breaks and leaks and lowers water quality. In Canada, 35% of water assets require critical attention, and there is a significant gap between the needed and the implemented investments. Thus, the need for efficient rehabilitation programs is becoming more urgent given the paradigm of aging infrastructure and tight budgets. The first step towards developing such programs is to formulate a performance index that reflects the current condition of water assets along with their criticality. While numerous studies in the literature have focused on various aspects of condition assessment and reliability, limited efforts have investigated the criticality of such components. Critical water mains are those whose failure causes significant economic, environmental, or social impacts on a community. Including criticality in computing the performance index will serve as a prioritizing tool for optimally allocating the available resources and budget. In this study, several social, economic, and environmental factors that dictate the criticality of a water pipeline were elicited from the literature. Expert opinions were sought to provide pairwise comparisons of the importance of these factors. Subsequently, fuzzy logic combined with the Analytical Network Process (ANP) was utilized to calculate the weights of the criteria factors. Multi-Attribute Utility Theory (MAUT) was then employed to integrate these weights with the attribute values of several pipelines in the Montreal WN. The result is a criticality index (0-1) that quantifies the severity of the consequences of failure of each pipeline. 
A novel contribution of this approach is that it accounts for both the interdependency between criteria factors and the inherent uncertainties in calculating criticality. The practical value of the current study is represented by an automated Excel-MATLAB tool, which can be used by utility managers and decision makers in planning future maintenance and rehabilitation activities where highly efficient use of material and time resources is required.
Keywords: water networks, criticality assessment, asset management, fuzzy analytical network process
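The final MAUT step described above reduces to a weighted additive aggregation. A minimal sketch, with entirely illustrative factor names, weights, and attribute utilities (the study's actual fuzzy-ANP weights are not reproduced here):

```python
# Hypothetical sketch of the MAUT aggregation: criteria weights (as would
# come from fuzzy ANP) combined with normalized pipeline utilities into a
# 0-1 criticality index. All names and numbers are invented for illustration.

def criticality_index(weights, utilities):
    """Additive utility; weights sum to 1, each utility lies in [0, 1]."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * utilities[k] for k in weights)

weights = {"traffic_disruption": 0.4, "repair_cost": 0.35, "env_damage": 0.25}
pipe = {"traffic_disruption": 0.8, "repair_cost": 0.5, "env_damage": 0.2}
index = criticality_index(weights, pipe)  # 0 = benign, 1 = most critical
```

A pipeline scoring near 1 would be prioritized for rehabilitation under the proposed scheme.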
Procedia PDF Downloads 147
134 Digital Immunity System for Healthcare Data Security
Authors: Nihar Bheda
Abstract:
Protecting digital assets such as networks, systems, and data from advanced cyber threats is the aim of Digital Immunity Systems (DIS), which are a subset of cybersecurity. With features like continuous monitoring, coordinated reactions, and long-term adaptation, DIS seeks to mimic biological immunity. This minimizes downtime by automatically identifying and eliminating threats. Traditional security measures such as firewalls and antivirus software are insufficient for enterprises like healthcare providers, given the rapid evolution of cyber threats. The number of medical record breaches that have occurred in recent years is proof that attackers are finding healthcare data to be an increasingly valuable target. However, obstacles to enhancing security include outdated systems, financial limitations, and a lack of knowledge. DIS is an advancement in cyber defenses designed specifically for healthcare settings. Protection akin to an "immune system" is produced by core capabilities such as anomaly detection, access controls, and policy enforcement. Coordination of responses across the IT infrastructure to contain attacks is made possible by automation and orchestration. Massive amounts of data are analyzed by AI and machine learning to find new threats. After an incident, self-healing enables services to resume quickly. The implementation of DIS is consistent with the healthcare industry's urgent requirement for resilient data security in light of evolving risks and strict guidelines. With resilient systems, it can help organizations lower business risk, minimize the effects of breaches, and preserve patient care continuity. DIS will be essential for protecting a variety of environments, including cloud computing and the Internet of medical devices, as healthcare providers quickly adopt new technologies. DIS lowers traditional security overhead for IT departments and offers automated protection, even though it requires an initial investment. 
In the near future, DIS may prove to be essential for small clinics, blood banks, imaging centers, large hospitals, and other healthcare organizations. Cyber resilience can become attainable for the whole healthcare ecosystem with customized DIS implementations.
Keywords: digital immunity system, cybersecurity, healthcare data, emerging technology
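The anomaly-detection capability at the core of a DIS can be illustrated with a deliberately tiny example; a z-score flag on access counts, with threshold and data invented for the sketch (real systems use far richer models):

```python
# Toy stand-in for a DIS monitoring layer: flag readings (e.g., record-access
# counts per hour) that deviate strongly from the baseline. The threshold
# value and the data are assumptions for illustration only.
import statistics

def flag_anomalies(counts, z_threshold=3.0):
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts) or 1.0
    return [i for i, c in enumerate(counts) if abs(c - mean) / stdev > z_threshold]

baseline = [12, 9, 11, 10, 13, 10, 11, 9, 10, 12, 95]  # last value: a spike
suspicious = flag_anomalies(baseline)  # indices of anomalous readings
```

In a full DIS, such a flag would trigger the automated containment and self-healing responses described above.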
Procedia PDF Downloads 67
133 The Regulation of Reputational Information in the Sharing Economy
Authors: Emre Bayamlıoğlu
Abstract:
This paper aims to provide an account of the legal and regulatory aspects of algorithmic reputation systems, with special emphasis on the sharing economy (e.g., Uber, Airbnb, Lyft) business model. The first section starts with an analysis of the legal and commercial nature of the tripartite relationship among the parties, namely the host platform, the individual sharers/service providers, and the consumers/users. The section further examines to what extent an algorithmic system of reputational information could serve as an alternative to legal regulation. Shortcomings are explained and analyzed with specific examples from the Airbnb platform, a pioneering success in the sharing economy. The following section focuses on the governance and control of reputational information. It first analyzes the legal consequences of algorithmic filtering systems used to detect undesired comments and how a delicate balance could be struck between competing interests such as freedom of speech, privacy, and the integrity of commercial reputation. The third section deals with the problem of manipulation by users. Many sharing economy businesses employ techniques of data mining and natural language processing to verify the consistency of feedback, since software agents referred to as "bots" are employed by users to "produce" fake reputation values. Such automated techniques are deceptive, with significant negative effects that undermine the trust upon which the reputational system is built. The fourth section explores concerns regarding data mobility, data ownership, and privacy. Reputational information provided by consumers in the form of textual comments may be regarded as a writing eligible for copyright protection. Algorithmic reputational systems also contain personal data pertaining to both the individual entrepreneurs and the consumers. 
The final section starts with an overview of the notion of reputation as a communitarian and collective form of referential trust and provides an evaluation of the above legal arguments from the perspective of the public interest in the integrity of reputational information. The paper concludes with guidelines and design principles for algorithmic reputation systems that address the legal implications raised above.
Keywords: sharing economy, design principles of algorithmic regulation, reputational systems, personal data protection, privacy
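One concrete signature of the bot-produced feedback discussed above is near-duplicate review text. A minimal sketch of such a consistency check, using only the standard library and invented review strings:

```python
# Illustrative version of the feedback-consistency verification the paper
# describes: flag pairs of near-identical comments, a common marker of
# bot-generated reputation. The reviews and 0.9 threshold are assumptions.
from difflib import SequenceMatcher

def near_duplicates(reviews, threshold=0.9):
    pairs = []
    for i in range(len(reviews)):
        for j in range(i + 1, len(reviews)):
            if SequenceMatcher(None, reviews[i], reviews[j]).ratio() >= threshold:
                pairs.append((i, j))
    return pairs

reviews = [
    "Great host, very clean apartment, would stay again!",
    "Great host, very clean apartment, would stay again!!",
    "The room was small and the street was noisy at night.",
]
suspect_pairs = near_duplicates(reviews)  # the first two reviews match
```

Production systems would combine many such signals (timing, account age, IP patterns) rather than text similarity alone.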
Procedia PDF Downloads 465
132 Applications of Artificial Intelligence (AI) in Cardiac Imaging
Authors: Angelis P. Barlampas
Abstract:
The purpose of this study is to inform the reader about the various applications of artificial intelligence (AI) in cardiac imaging. AI is growing fast, and its role is crucial in medical specialties that use large amounts of digital data, which are very difficult or even impossible to be managed by human beings, especially doctors. Artificial intelligence (AI) refers to the ability of computers to mimic human cognitive function, performing tasks such as learning, problem-solving, and autonomous decision making based on digital data. Whereas AI describes the concept of using computers to mimic human cognitive tasks, machine learning (ML) describes the category of algorithms that enable most current applications described as AI. Some of the current applications of AI in cardiac imaging are as follows. Ultrasound: automated segmentation of cardiac chambers across five common views, with consequent quantification of chamber volumes/mass, ascertainment of ejection fraction, and determination of longitudinal strain through speckle tracking; determination of the severity of mitral regurgitation (accuracy > 99% for every degree of severity); identification of myocardial infarction; distinction between athlete's heart and hypertrophic cardiomyopathy, as well as between restrictive cardiomyopathy and constrictive pericarditis; prediction of all-cause mortality. CT: reduction of radiation doses; calculation of the calcium score; diagnosis of coronary artery disease (CAD); prediction of all-cause 5-year mortality; prediction of major cardiovascular events in patients with suspected CAD. MRI: segmentation of cardiac structures and infarct tissue; calculation of cardiac mass and function parameters; distinction between patients with myocardial infarction and control subjects; potential cost reduction, since it could preclude the need for gadolinium-enhanced CMR; prediction of 4-year survival in patients with pulmonary hypertension. Nuclear imaging: classification of normal and abnormal myocardium in CAD; detection of locations with abnormal myocardium; prediction of cardiac death. 
ML was comparable to or better than two experienced readers in predicting the need for revascularization. AI emerges as a helpful tool in cardiac imaging and for doctors who cannot manage the ever-increasing demand for examinations such as ultrasound, computed tomography, MRI, or nuclear imaging studies.
Keywords: artificial intelligence, cardiac imaging, ultrasound, MRI, CT, nuclear medicine
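One of the quantities that automated chamber segmentation enables, ejection fraction, is a simple ratio once the volumes are known. A sketch with illustrative volumes (the formula is standard; the numbers are not from any study cited here):

```python
# Ejection fraction from AI-segmented end-diastolic and end-systolic
# left-ventricular volumes: EF (%) = (EDV - ESV) / EDV * 100.
# The volumes below are illustrative values in millilitres.

def ejection_fraction(edv_ml, esv_ml):
    """Stroke volume as a percentage of end-diastolic volume."""
    return (edv_ml - esv_ml) / edv_ml * 100.0

ef = ejection_fraction(edv_ml=120.0, esv_ml=50.0)  # roughly 58%, a normal EF
```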
Procedia PDF Downloads 78
131 Sea of Light: A Game-Based Approach for Evidence-Centered Assessment of Collaborative Problem Solving
Authors: Svenja Pieritz, Jakab Pilaszanovich
Abstract:
Collaborative Problem Solving (CPS) is recognized as one of the most important skills of the 21st century, with a potential impact on education, job selection, and collaborative systems design. Therefore, CPS has been adopted in several standardized tests, including the Programme for International Student Assessment (PISA) in 2015. A significant challenge in evaluating CPS is the underlying interplay of cognitive and social skills, which requires a more holistic assessment. However, the majority of existing tests use a questionnaire-based assessment, which oversimplifies this interplay and undermines ecological validity. Two major difficulties were identified: firstly, the creation of a controllable, real-time environment allowing natural behaviors and communication between at least two people; secondly, the development of an appropriate method to collect and synthesize both cognitive and social metrics of collaboration. This paper proposes a more holistic and automated approach to the assessment of CPS. To address these two difficulties, a multiplayer problem-solving game called Sea of Light was developed: an environment allowing students to deploy a variety of measurable collaborative strategies. This controlled environment enables researchers to monitor behavior through the analysis of game actions and chat. The corresponding statistical model is a combined approach of Natural Language Processing (NLP) and Bayesian network analysis. Social exchanges via the in-game chat are analyzed through NLP and fed into the Bayesian network along with other game actions. This Bayesian network synthesizes the evidence to track and update different subdimensions of CPS. Major findings focus on the correlations between the evidence collected through in-game actions, the participants' chat features, and the CPS self-evaluation metrics. These results give an indication of which game mechanics can best describe CPS evaluation. 
Overall, Sea of Light gives test administrators control over different problem-solving scenarios and difficulties while keeping the student engaged. It enables a more complete assessment based on complex, socio-cognitive information on actions and communication. This tool permits further investigation of the effects of group constellations and personality in collaborative problem solving.
Keywords: Bayesian network, collaborative problem solving, game-based assessment, natural language processing
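The evidence-synthesis idea behind the Bayesian network can be shown at its smallest scale: a single-node update of belief about one CPS subdimension after one observed game action. The probabilities below are invented for illustration, not taken from Sea of Light:

```python
# One Bayes-rule update of P(high collaboration skill) given an observed
# in-game action. A full model chains many such updates across subdimensions;
# all probability values here are assumptions for the sketch.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H | evidence) via Bayes' rule."""
    num = p_evidence_given_h * prior
    den = num + p_evidence_given_not_h * (1.0 - prior)
    return num / den

prior = 0.5  # belief before any game actions
# Observed: the student shares a resource in chat (more likely if skilled).
posterior = bayes_update(prior, p_evidence_given_h=0.8, p_evidence_given_not_h=0.3)
```

Each subsequent action or chat feature would update the posterior again, which is how the network "tracks" a subdimension over a session.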
Procedia PDF Downloads 132
130 Adaptation of the Design Thinking Method for Production Planning in the Meat Industry Using Machine Learning Algorithms
Authors: Alica Höpken, Hergen Pargmann
Abstract:
The resource-efficient planning of the complex production planning processes in the meat industry and the reduction of food waste are permanent challenges. The complexity of the production planning process occurs in every part of the supply chain, from agriculture to the end consumer, and arises from long and uncertain planning phases. Uncertainties such as stochastic yields, fluctuations in demand, and resource variability are part of this process. In the meat industry, waste mainly relates to incorrect storage, technical causes in production, or overproduction. The high amount of food waste along the complex supply chain in the meat industry has so far not been reduced by simple solutions. Therefore, resource-efficient production planning by conventional methods is currently only partially feasible. Intelligent, automated production planning is in principle possible through the application of machine learning algorithms, such as those of reinforcement learning. By applying the adapted design thinking method, machine learning methods (especially reinforcement learning algorithms) are brought to bear on the complex production planning process in the meat industry; the adaptation concretizes the method for this application area. A resource-efficient production planning process is made available by adapting the design thinking method, and the complex processes can be planned efficiently, since this standardized approach offers new possibilities for addressing the complexity and the high time consumption. It represents a tool to support efficient production planning in the meat industry. This paper shows an elegant adaptation of the design thinking method for applying the reinforcement learning method to a resource-efficient production planning process in the meat industry. Subsequently, the steps necessary to introduce machine learning algorithms into the production planning of the food industry are determined. 
This is achieved based on a case study that is part of the research project "REIF - Resource Efficient, Economic and Intelligent Food Chain", supported by the German Federal Ministry for Economic Affairs and Climate Action and the German Aerospace Center. Through this structured approach, significantly better planning results are achieved, which would be too complex or very time-consuming to obtain using conventional methods.
Keywords: change management, design thinking method, machine learning, meat industry, reinforcement learning, resource-efficient production planning
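The reinforcement learning machinery invoked above can be sketched at its core: one tabular Q-learning update on a toy production-planning decision. States, actions, rewards, and hyperparameters are stand-ins, not the REIF project's actual formulation:

```python
# Minimal tabular Q-learning update of the kind such a planner would apply:
# the agent learns the value of producing vs. idling at a given stock level.
# All states, actions, and reward values are toy assumptions.

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Standard Q-learning: Q += alpha * (r + gamma * max_a' Q' - Q)."""
    best_next = max(q[next_state].values())
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

q = {"low_stock": {"produce": 0.0, "idle": 0.0},
     "ok_stock": {"produce": 0.0, "idle": 0.0}}
# Producing while stock is low avoids both shortage and waste: reward +1.
q_update(q, "low_stock", "produce", reward=1.0, next_state="ok_stock")
```

Repeated over many simulated planning episodes, such updates converge toward a policy that balances overproduction (waste) against shortage.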
Procedia PDF Downloads 128
129 AIR SAFE: An Internet of Things System for Air Quality Management Leveraging Artificial Intelligence Algorithms
Authors: Mariangela Viviani, Daniele Germano, Simone Colace, Agostino Forestiero, Giuseppe Papuzzo, Sara Laurita
Abstract:
Nowadays, people spend most of their time in closed environments, in offices, or at home. Therefore, secure and highly livable environmental conditions are needed to reduce the probability of airborne viruses spreading. Also, to lower the human impact on the planet, it is important to reduce energy consumption. Heating, Ventilation, and Air Conditioning (HVAC) systems account for the major part of energy consumption in buildings [1]. Devising systems to control and regulate the airflow is, therefore, essential for energy efficiency. Moreover, an optimal setting for thermal comfort and air quality is essential for people's well-being, at home or in offices, and increases productivity. Thanks to the features of Artificial Intelligence (AI) tools and techniques, it is possible to design innovative systems with: (i) improved monitoring and prediction accuracy; (ii) enhanced decision-making and mitigation strategies; (iii) real-time air quality information; (iv) increased efficiency in data analysis and processing; (v) advanced early warning systems for air pollution events; (vi) an automated and cost-effective monitoring network; and (vii) a better understanding of air quality patterns and trends. We propose AIR SAFE, an IoT-based infrastructure designed to optimize air quality and thermal comfort in indoor environments leveraging AI tools. AIR SAFE employs a network of smart sensors collecting indoor and outdoor data, which are analyzed in order to take corrective measures that ensure the occupants' wellness. The data are analyzed through AI algorithms able to predict the future levels of temperature, relative humidity, and CO₂ concentration [2]. Based on these predictions, AIR SAFE takes actions, such as opening/closing a window or the air conditioner, to guarantee a high level of thermal comfort and air quality in the environment. 
In this contribution, we present the results from the AI algorithm we have implemented on the first set of data collected in a real environment. The results were compared with other models from the literature to validate our approach.
Keywords: air quality, internet of things, artificial intelligence, smart home
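The predict-then-act loop described above can be reduced to a toy version: forecast the next CO₂ reading and decide whether to ventilate. The moving-average forecaster, the 1000 ppm limit, and the readings are illustrative assumptions, far simpler than AIR SAFE's actual models:

```python
# Toy version of the AIR SAFE loop: forecast the next CO2 reading with a
# moving average and decide whether to trigger ventilation. The window,
# the 1000 ppm threshold, and the sensor data are assumptions.

def forecast_next(readings, window=3):
    return sum(readings[-window:]) / window

def should_ventilate(readings, limit_ppm=1000.0):
    return forecast_next(readings) > limit_ppm

co2_ppm = [820, 900, 980, 1060, 1120]  # rising occupancy
action = should_ventilate(co2_ppm)  # True -> open window / start HVAC
```

The real system would swap in a learned predictor and weigh the energy cost of actuation against the comfort gain.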
Procedia PDF Downloads 93
128 Development of an Instrument for Measurement of Thermal Conductivity and Thermal Diffusivity of Tropical Fruit Juice
Authors: T. Ewetumo, K. D. Adedayo, Festus Ben
Abstract:
Knowledge of the thermal properties of foods is of fundamental importance in the food industry for the design of processing equipment. However, for tropical fruit juice there is very little information in the literature, seriously hampering processing procedures. This research work describes the development of an instrument for automated thermal conductivity and thermal diffusivity measurement of tropical fruit juice using a transient thermal probe technique based on the line heat source principle. The system consists of two thermocouple sensors, a constant current source, a heater, a thermocouple amplifier, a microcontroller, a microSD card shield, and an intelligent liquid crystal display. A fixed distance of 6.50 mm was maintained between the two probes. When heat was applied, the temperature rise at the heater probe was measured over time at intervals of 4 s for 240 s. The measuring element conforms as closely as possible to an infinite line source of heat in an infinite fluid. Under these conditions, thermal conductivity and thermal diffusivity are measured simultaneously: thermal conductivity is determined from the slope of a plot of the temperature rise of the heating element against the logarithm of time, while thermal diffusivity is determined from the time it took the sample to attain a peak temperature and the time duration over a fixed diffusivity distance. A constant current source was designed to apply a power input of 16.33 W/m to the probe throughout the experiment. The thermal probe was interfaced with a digital display and data logger using an application program written in C++. Calibration of the instrument was done by determining the thermal properties of distilled water. Error due to convection was avoided by adding 1.5% agar to the water. The instrument has been used for measurement of the thermal properties of banana, orange, and watermelon. 
Thermal conductivity values of 0.593, 0.598, and 0.586 W/(m·°C) and thermal diffusivity values of 1.053 × 10⁻⁷, 1.086 × 10⁻⁷, and 0.959 × 10⁻⁷ m²/s were obtained for banana, orange, and watermelon, respectively. Measured values were stored on a microSD card. The instrument performed very well, as it measured the thermal conductivity and thermal diffusivity of the tropical fruit juice samples with statistical analysis (ANOVA) showing no significant difference (p > 0.05) between the literature standards and the estimated averages of each sample investigated with the developed instrument.
Keywords: thermal conductivity, thermal diffusivity, tropical fruit juice, diffusion equation
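The line-heat-source calculation described above (conductivity from the slope of temperature rise vs. ln time) can be sketched directly. The data here are synthetic, generated from an assumed conductivity so that the sketch recovers the value it was built from; only the 16.33 W/m power input comes from the abstract:

```python
# Line heat source method: k = q / (4*pi*S), where S is the slope of the
# temperature rise against ln(time) and q is the power per unit length (W/m).
# The temperature data are synthetic, built from an assumed k of 0.6.
import math

def slope(xs, ys):
    """Least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

q = 16.33       # W/m, the probe power input reported in the abstract
k_true = 0.6    # W/(m.degC), assumed for generating the synthetic data
times = [4, 8, 16, 32, 64, 128]  # s, matching the 4 s sampling idea
temps = [q / (4 * math.pi * k_true) * math.log(t) for t in times]
k_est = q / (4 * math.pi * slope([math.log(t) for t in times], temps))
```

With real probe data the fit would be restricted to the linear portion of the curve, after early transients and before convection sets in.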
Procedia PDF Downloads 357
127 Logistics and Supply Chain Management Using Smart Contracts on Blockchain
Authors: Armen Grigoryan, Milena Arakelyan
Abstract:
The idea of smart logistics is still quite a complicated one. It can be used to market products to a large number of customers or to acquire raw materials of the highest quality at the lowest cost in geographically dispersed areas. The use of smart contracts in logistics and supply chain management has the potential to revolutionize the way goods are tracked, transported, and managed. Smart contracts are simply computer programs written in one of the blockchain programming languages (Solidity, Rust, Vyper) that are capable of self-execution once predetermined conditions are met. They can be used to automate and streamline many of the traditional manual processes currently used in logistics and supply chain management, including the tracking and movement of goods, the management of inventory, and the facilitation of payments and settlements between different parties in the supply chain. Currently, logistics, which is concerned with transporting products between parties, is a core area for companies. Still, the problem of this sector is that its scale may lead to delays and defaults in the delivery of goods, as well as other issues. Moreover, large distributors require a large number of workers to meet all the needs of their stores. All this may contribute to long delays in order processing and increase the likelihood of losing orders. In an attempt to solve this problem, companies have automated their procedures, contributing to significant growth in the number of businesses and distributors in the logistics sector. Hence, blockchain technology and smart contracted legal agreements seem to be suitable concepts for redesigning and optimizing collaborative business processes and supply chains. The main purpose of this paper is to examine the scope of blockchain technology and smart contracts in the field of logistics and supply chain management. 
This study discusses the research question of how, and to what extent, smart contracts and blockchain technology can facilitate and improve the implementation of collaborative business structures for sustainable entrepreneurial activities in smart supply chains. The intention is to provide a comprehensive overview of the existing research on the use of smart contracts in logistics and supply chain management and to identify any gaps or limitations in the current knowledge on this topic. This review aims to provide a summary and evaluation of the key findings and themes that emerge from the research, as well as to suggest potential directions for future research on the use of smart contracts in logistics and supply chain management.
Keywords: smart contracts, smart logistics, smart supply chain management, blockchain and smart contracts in logistics, smart contracts for controlling supply chain management
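The self-executing settlement logic such contracts encode can be sketched off-chain. This is plain Python standing in for a Solidity contract, purely to illustrate the condition-then-payment pattern; state names and the two-party flow are assumptions, and on-chain the runtime, not the caller, would enforce the transitions:

```python
# Escrow pattern a logistics smart contract might implement, sketched in
# Python for illustration: payment is locked at order time and released
# only when the delivery condition is met. Names are invented.

class DeliveryEscrow:
    def __init__(self, amount):
        self.amount = amount
        self.state = "FUNDED"   # buyer's payment is locked in the contract

    def confirm_delivery(self):
        if self.state != "FUNDED":
            raise RuntimeError("nothing to release")
        self.state = "RELEASED"
        return self.amount      # payment released to the carrier

escrow = DeliveryEscrow(amount=500)
paid = escrow.confirm_delivery()  # buyer confirms receipt -> carrier is paid
```

On a blockchain, the delivery confirmation would typically come from a signed message or an oracle (e.g., an IoT tracking feed), which is what removes the manual settlement step from the process.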
Procedia PDF Downloads 95
126 A Survey and Analysis on Inflammatory Pain Detection and Standard Protocol Selection Using Medical Infrared Thermography from Image Processing View Point
Authors: Mrinal Kanti Bhowmik, Shawli Bardhan Jr., Debotosh Bhattacharjee
Abstract:
Human skin, having a temperature above absolute zero, discharges infrared radiation related to body temperature. Differences in the infrared radiation from the skin surface reflect abnormalities present in the human body. Based on these differences, detecting and forecasting the temperature variation of the skin surface is the main objective of using Medical Infrared Thermography (MIT) as a diagnostic tool for pain detection. MIT is a non-invasive imaging technique that records and monitors the temperature flow in the body by receiving the infrared radiation emitted from the skin and representing it through a thermogram. The intensity of the thermogram measures the inflammation of the skin surface related to pain in the human body. Analysis of thermograms provides automated anomaly detection associated with suspicious pain regions through several image processing steps. This paper presents a rigorous study-based survey of the processing and analysis of thermograms, based on previous work published in the area of infrared thermal imaging for detecting inflammatory pain diseases such as arthritis, spondylosis, and shoulder impingement. The study also explores the performance analysis of thermogram processing together with thermogram acquisition protocols, thermography camera specifications, and the types of pain detected by thermography, in summarized tabular format. The tabular format provides a clear structural view of past work. The major contribution of the paper is the introduction of a new thermogram acquisition standard associated with inflammatory pain detection in the human body, intended to enhance the performance rate. The FLIR T650sc infrared camera, with high sensitivity and resolution, is adopted to increase the accuracy of thermogram acquisition and analysis. 
The survey of previous research highlights that intensity-distribution-based comparison of comparable and symmetric regions of interest, together with their statistical analysis, yields adequate results in identifying and detecting physiological disorders related to inflammatory diseases.
Keywords: acquisition protocol, inflammatory pain detection, medical infrared thermography (MIT), statistical analysis
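The symmetric-ROI comparison the surveyed studies rely on is, at its simplest, a difference of mean intensities between a region and its contralateral mirror. A sketch with made-up thermogram values and an assumed flagging threshold:

```python
# Minimal stand-in for the symmetric region-of-interest analysis: compare
# the mean intensity of a region with its contralateral counterpart and
# flag asymmetries above a threshold. Values and threshold are invented.
import statistics

def asymmetry(roi_left, roi_right, threshold=1.0):
    """Return (mean difference, flagged?) for two comparable regions."""
    diff = statistics.mean(roi_left) - statistics.mean(roi_right)
    return diff, abs(diff) > threshold

left_knee = [33.1, 33.4, 33.2, 33.3]   # temperature-like intensities
right_knee = [35.0, 35.3, 35.1, 35.2]  # suspected inflammation (warmer)
delta, flagged = asymmetry(left_knee, right_knee)
```

The surveyed work layers further statistics (distributions, significance tests) on top of this basic left-right comparison.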
Procedia PDF Downloads 342
125 A Unified Approach for Digital Forensics Analysis
Authors: Ali Alshumrani, Nathan Clarke, Bogdan Ghite, Stavros Shiaeles
Abstract:
Digital forensics has become an essential tool in the investigation of cyber and computer-assisted crime. Arguably, given the prevalence of technology and the digital footprints that consequently exist, it could have a significant role across almost all crimes. However, the variety of technology platforms (such as computers, mobiles, Closed-Circuit Television (CCTV), Internet of Things (IoT) devices, databases, drones, and cloud computing services), the heterogeneity and volume of data, forensic tool capability, and the investigative cost make investigations both technically challenging and prohibitively expensive. Forensic tools also tend to be siloed into specific technologies, e.g., File System Forensic Analysis Tools (FS-FAT) and Network Forensic Analysis Tools (N-FAT), and many data sources have little to no specialist forensic tools. It is also increasingly essential to compare and correlate evidence across data sources, and to do so in an efficient and effective manner that enables an investigator to answer high-level questions of the data in a timely fashion without having to trawl through the data and perform the correlation manually. This paper proposes a Unified Forensic Analysis Tool (U-FAT), which aims to establish a common language for electronic information and permit multi-source forensic analysis. Core to this approach is the identification and development of forensic analyses that automate complex data correlations, enabling investigators to handle cases more efficiently. The paper presents a systematic analysis of major crime categories and identifies which forensic analyses could be used. 
For example, in a child abduction case, an investigation team might have evidence from a range of sources including computing devices (mobile phone, PC), CCTV (potentially a large number of cameras), ISP records, and mobile network cell tower data, in addition to third-party databases such as the National Sex Offender registry and tax records, with the desire to auto-correlate across sources and visualize the results in a cognitively effective manner. U-FAT provides a holistic, flexible, and extensible approach to digital forensics in a technology-, application-, and data-agnostic manner, providing powerful and automated forensic analysis.
Keywords: digital forensics, evidence correlation, heterogeneous data, forensics tool
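The cross-source correlation U-FAT aims to automate can be reduced to a toy: match events from two heterogeneous logs when their timestamps fall within a window. The record schema, sources, and 60-second window are invented for the sketch, not taken from U-FAT:

```python
# Toy cross-source evidence correlation: pair events from two logs whose
# timestamps differ by at most window_s seconds. Records are invented;
# a real tool would also normalize formats and match on identifiers.

def correlate(source_a, source_b, window_s=60):
    """Pair events from the two sources that co-occur in time."""
    return [(a, b) for a in source_a for b in source_b
            if abs(a["t"] - b["t"]) <= window_s]

phone_events = [{"t": 1000, "src": "mobile", "event": "call"}]
cctv_events = [{"t": 1030, "src": "cctv", "event": "sighting"},
               {"t": 5000, "src": "cctv", "event": "sighting"}]
matches = correlate(phone_events, cctv_events)  # one pair, 30 s apart
```

A common event schema across sources, the "common language" the paper argues for, is exactly what makes such a join possible without manual trawling.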
Procedia PDF Downloads 196
124 Development of a 3D Model of Real Estate Properties in Fort Bonifacio, Taguig City, Philippines Using Geographic Information Systems
Authors: Lyka Selene Magnayi, Marcos Vinas, Roseanne Ramos
Abstract:
As the real estate industry continually grows in the Philippines, Geographic Information Systems (GIS) provide advantages in generating spatial databases for efficient delivery of information and services. The real estate sector not only provides qualitative data about real estate properties but also utilizes various spatial aspects of these properties for different applications, such as hazard mapping and assessment. In this study, a three-dimensional (3D) model and a spatial database of real estate properties in Fort Bonifacio, Taguig City are developed using GIS and SketchUp. Spatial datasets include political boundaries, buildings, the road network, a digital terrain model (DTM) derived from an Interferometric Synthetic Aperture Radar (IFSAR) image, Google Earth satellite imagery, and hazard maps. Multiple model layers were created based on property listings by a partner real estate company, including existing and future property buildings. Actual building dimensions, building facades, and building floorplans are incorporated in these 3D models for geovisualization. Hazard model layers are determined through spatial overlays, and different hazard scenarios are also presented in the models. Animated maps and walkthrough videos were created for company presentation and evaluation. Model evaluation was conducted through client surveys requiring scores for the appropriateness, information content, and design of the 3D models. Survey results show very satisfactory ratings, with the highest average evaluation score equivalent to 9.21 out of 10. The output maps and videos obtained passing rates based on the criteria and standards set by the intended users from the partner real estate company. The methodologies presented in this study were found useful and have remarkable advantages in the real estate industry. 
This work may be extended to automated mapping and the creation of online spatial databases for better storage of and access to real property listings, and to an interactive platform using web-based GIS.
Keywords: geovisualization, geographic information systems, GIS, real estate, spatial database, three-dimensional model
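The spatial-overlay step that produces the hazard layers boils down to point-in-polygon tests. A dependency-free ray-casting sketch with arbitrary planar coordinates (not real Fort Bonifacio data, which GIS software handles with projected coordinates and full geometry libraries):

```python
# Minimal stand-in for a hazard spatial overlay: a ray-casting
# point-in-polygon test deciding whether a property location falls inside
# a hazard zone. Coordinates are arbitrary planar values for illustration.

def point_in_polygon(x, y, polygon):
    """True if (x, y) lies inside the polygon given as a list of vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

flood_zone = [(0, 0), (4, 0), (4, 4), (0, 4)]  # a square hazard polygon
at_risk = point_in_polygon(2, 2, flood_zone)   # building inside the zone
```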
Procedia PDF Downloads 158
123 Sorting Maize Haploids from Hybrids Using Single-Kernel Near-Infrared Spectroscopy
Authors: Paul R Armstrong
Abstract:
Doubled haploids (DHs) have become an important breeding tool for creating maize inbred lines, although several bottlenecks in the DH production process limit wider development, application, and adoption of the technique. DH kernels are typically sorted manually and represent about 10% of the seeds in a much larger pool in which the remaining 90% are hybrid siblings. This introduces time constraints on DH production, and manual sorting is often not accurate. Automated sorting based on the chemical composition of the kernel can be effective, but devices, namely NMR, have not achieved the sorting speed needed to be a cost-effective replacement for manual sorting. This study evaluated a single-kernel near-infrared reflectance spectroscopy (skNIR) platform to accurately identify DH kernels based on oil content. The skNIR platform is a higher-throughput device, approximately 3 seeds/s, that uses spectra to predict the oil content of each kernel from maize crosses intentionally developed to create larger-than-normal oil differences, 1.5%-2%, between DH and hybrid kernels. Spectra from the skNIR were used to construct a partial least squares regression (PLS) model for oil, and for a categorical reference model of 1 (DH kernel) or 2 (hybrid kernel), and then used to sort several crosses to evaluate performance. Two approaches were used for sorting. The first used a general PLS model developed from all crosses to predict oil content, which was then used for sorting each induction cross; the second was the development of a specific model from a single induction cross, in which approximately fifty DH and one hundred hybrid kernels were used. This second approach used a categorical reference value of 1 or 2, instead of oil content, for the PLS model, and kernels selected for the calibration set were manually referenced based on traditional commercial methods using the coloration of the tip cap and germ areas. 
The generalized PLS oil model statistics were R2 = 0.94 and RMSE = 0.93% for kernels spanning an oil content of 2.7% to 19.3%. Sorting by this model resulted in extracting 55% to 85% of haploid kernels from the four induction crosses. Using the second method of generating a model for each cross yielded model statistics ranging from R2s = 0.96 to 0.98 and RMSEs from 0.08 to 0.10. Sorting in this case resulted in 100% correct classification but required models that were cross-specific. In summary, the first, generalized oil-model method could be used to sort a significant number of kernels from a kernel pool but did not approach the accuracy of developing a sorting model from a single cross. The penalty for the second method is that a PLS model would need to be developed for each individual cross. In conclusion, both methods could find useful application in the sorting of DH from hybrid kernels.
Keywords: NIR, haploids, maize, sorting
122 FEM and Experimental Modal Analysis of Computer Mount
Authors: Vishwajit Ghatge, David Looper
Abstract:
Over the last few decades, oilfield service rolling equipment has significantly increased in weight, primarily because of emissions regulations, which require larger/heavier engines, larger cooling systems, and emissions after-treatment systems, among other changes. Larger engines cause more vibration and shock loads, leading to failure of electronics and control systems. If the vibrating frequency of the engine matches the system frequency, high resonance is observed in structural parts and mounts. One such existing automated control equipment system, comprising wire rope mounts used for mounting computers, was designed approximately 12 years ago. This includes the use of an industrial-grade computer to control the system operation. The original computer had a smaller, lighter enclosure. After a few years, a newer computer version was introduced, which was 10 lbm heavier. Some failures of internal computer parts have been documented for cases in which the old mounts were used. Because of the added weight, there is a possibility of the two brackets impacting each other under off-road conditions, which causes a high shock input to the computer parts. This added failure mode requires validating the existing mount design to suit the new heavy-weight computer. This paper discusses the modal finite element method (FEM) analysis and experimental modal analysis conducted to study the effects of vibration on the wire rope mounts and the computer. The existing mount was modeled in ANSYS software, and the resultant mode shapes and frequencies were obtained. The experimental modal analysis was conducted, and actual frequency responses were observed and recorded. Results clearly revealed that at the resonance frequency, the brackets were colliding and potentially causing damage to computer parts. To solve this issue, spring mounts of different stiffness were modeled in ANSYS software, and the resonant frequency was determined. 
Increasing the stiffness of the system moved the resonant frequency zone away from the frequency window in which the engine showed heavy vibration or resonance. After multiple iterations in ANSYS software, the stiffness of the spring mount was finalized, which was again experimentally validated.
Keywords: experimental modal analysis, FEM modal analysis, frequency, modal analysis, resonance, vibration
121 Prevalence of ESBL E. coli Susceptibility to Oral Antibiotics in Outpatient Urine Culture: Multicentric Analysis of Three Years of Data (2019-2021)
Authors: Mazoun Nasser Rashid Al Kharusi, Nada Al Siyabi
Abstract:
Objectives: The main aim of this study is to find the rate of susceptibility of ESBL E. coli causing UTI to oral antibiotics. Secondary objectives: determine the prevalence of ESBL E. coli in community urine samples, identify the best empirical oral antibiotics with the lowest resistance rate for UTI, and identify alternative oral antibiotics for testing and utilization. Methods: This is a retrospective descriptive study of the last three years in five major hospitals in Oman (Khowla Hospital, AN’Nahdha Hospital, Rustaq Hospital, Nizwa Hospital, and Ibri Hospital), each equipped with a microbiologist. Inclusion criteria covered all eligible outpatient urine culture isolates, excluding isolates from admitted patients with hospital-acquired urinary tract infections. Data were collected through the MOH database. The MOH hospitals use different types of testing: automated methods such as VITEK 2 and manual methods. The VITEK 2 machine uses the principle of the fluorogenic method for organism identification and a turbidimetric method for susceptibility testing. The manual method uses double disc diffusion for identifying ESBL and the disc diffusion method for antibiotic susceptibility. All laboratories follow the Clinical and Laboratory Standards Institute (CLSI) guidelines. Analysis was done with the SPSS statistical package. Results: There were 23,048 urine cultures in total. E. coli grew in 11,637 (49.6%) of the urine cultures, of which 2,199 (18.8%) were confirmed as ESBL. As expected, the resistance rate to amoxicillin and cefuroxime was 100%. Moreover, the susceptibility of these ESBL-producing E. coli to nitrofurantoin, trimethoprim-sulfamethoxazole, ciprofloxacin, and amoxicillin-clavulanate has improved over the years, but remains low. ESBL E. coli predominated in the female gender and in those aged 66-74 years throughout all the years. Other oral antibiotic options need to be explored and tested so that we add to the pool of oral antibiotics for ESBL E. 
coli causing UTI in the community. Conclusion: There is a high rate of ESBL E. coli in urine from the community. The high resistance rates to oral antibiotics highlight the need for alternative treatment options for UTIs caused by these bacteria. Further research is needed to identify new and effective treatments for UTIs caused by ESBL E. coli.
Keywords: UTI, ESBL, oral antibiotics, E. coli, susceptibility
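A quick sanity check of the reported counts can be done directly from the raw totals; note that the percentages derived this way differ slightly from the published 49.6% and 18.8%, which suggests the published rates were computed on a filtered subset not detailed in the abstract.

```python
# Counts reported in the abstract (assumed exact).
total_urine_cultures = 23048
e_coli_isolates = 11637
esbl_isolates = 2199

# Derived rates: E. coli growth among all cultures, ESBL among E. coli.
e_coli_rate = 100 * e_coli_isolates / total_urine_cultures
esbl_rate = 100 * esbl_isolates / e_coli_isolates

print(f"E. coli growth rate: {e_coli_rate:.1f}% of urine cultures")
print(f"ESBL rate among E. coli: {esbl_rate:.1f}%")
```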
120 Utilizing Topic Modelling for Assessing mHealth Apps’ Risks to Users’ Health before and during the COVID-19 Pandemic
Authors: Pedro Augusto Da Silva E Souza Miranda, Niloofar Jalali, Shweta Mistry
Abstract:
BACKGROUND: Software developers utilize automated solutions to scrape users’ reviews and extract meaningful knowledge to identify problems (e.g., bugs, compatibility issues) and possible enhancements (e.g., users’ requests) to their solutions. However, most of these solutions do not consider the health risks to users. Recent works have shed light on the importance of including health risk considerations in the development cycle of mHealth apps to prevent harm to their users. PROBLEM: The COVID-19 pandemic in Canada (and the world) is currently forcing physical distancing upon the general population. This new lifestyle has made the usage of mHealth applications more essential than ever, with a projected market forecast of 332 billion dollars by 2025. However, this surge in mHealth usage comes with possible risks to users’ health due to mHealth app problems (e.g., a wrong insulin dosage indication due to a UI error). OBJECTIVE: This work aims to raise awareness amongst mHealth developers of the importance of considering risks to users’ health within their development lifecycle. Moreover, this work also aims to help mHealth developers with a proof-of-concept (POC) solution to understand, process, and identify possible health risks to users of mHealth apps based on users’ reviews. METHODS: We conducted a mixed-method study design. We developed a crawler to mine the negative reviews from two samples of mHealth apps (My Fitness, Medisafe) from Google Play store users. For each mHealth app, we performed the following steps: • The reviews were divided into two groups: before COVID-19 (reviews submitted before 15 Feb 2019) and during COVID-19 (reviews submitted from 16 Feb 2019 to Dec 2020). 
• For each period, the Latent Dirichlet Allocation (LDA) topic model was used to identify the different clusters of reviews based on similar topics. • The topics before and during COVID-19 were compared, and significant differences in the frequency and severity of similar topics were identified. RESULTS: We successfully scraped, filtered, processed, and identified health-related topics in both qualitative and quantitative approaches. The results demonstrated the similarity between topics before and during COVID-19.
Keywords: natural language processing (NLP), topic modeling, mHealth, COVID-19, software engineering, telemedicine, health risks
119 Assessment of the Impact of Atmospheric Air, Drinking Water and Socio-Economic Indicators on the Primary Incidence of Children in Altai Krai
Authors: A. P. Pashkov
Abstract:
The number of environmental factors that adversely affect children's health is growing every year, and their combination differs in each territory. The contribution of socio-economic factors to the health status of the younger generation is increasing. The child’s body is the most sensitive to changes in environmental conditions, responding with a deterioration in health. Over the past years, scientists have characterized the relationship between environmental factors and the incidence of disease in children. Currently, there is a tendency to study regional characteristics of the interaction of a combination of environmental factors with the child's body. The aim of the work was to identify trends in the primary non-infectious morbidity of children in the Altai Territory, a unique region that combines territories with different levels of environmental quality, and to assess the effect of atmospheric air, drinking water, and socio-economic indicators on the incidence of disease in children in the region. An unfavorable tendency has been revealed in the region for the incidence of such nosological groups as neoplasms, including malignant ones; diseases of the endocrine system, including obesity and thyroid disease; diseases of the circulatory system; digestive diseases; diseases of the genitourinary system; congenital anomalies; and respiratory diseases. Some groups of diseases revealed a pattern of geographical distribution during mapping and a significant correlation. Some nosologies were related to the integrated assessment of socio-economic indicators: circulatory system diseases and respiratory diseases (direct relationship); endocrine system diseases, eating disorders, and metabolic disorders (inverse relationship). 
The analysis of associations between the incidence of disease in children and the average annual concentrations of substances polluting the air and drinking water showed reliable correlations in areas with a critical or intense degree of environmental quality. This fact confirms that the population living in contaminated areas is subject to the negative influence of environmental factors, which immediately affects the health status of children. The results obtained indicate the need for a detailed assessment of the influence of environmental factors on the incidence of disease in children in the regional aspect, the formation of a database, and the development of automated programs that can predict the incidence in each specific territory. This will increase the effectiveness, including the economic effectiveness, of preventive measures.
Keywords: incidence of children, regional features, socio-economic factors, environmental factors
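The correlation analysis described above reduces, for each pollutant-nosology pair, to a coefficient such as Pearson's r. A minimal sketch with invented numbers (the concentrations and incidence figures below are hypothetical, not the study's data):

```python
import math

# Hypothetical annual averages for one district: pollutant concentration
# (fraction of the hygienic limit) vs. primary incidence per 1,000 children.
concentration = [0.6, 0.8, 1.1, 1.3, 1.7, 2.0]
incidence = [180.0, 195.0, 230.0, 240.0, 280.0, 310.0]

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(concentration, incidence)
print(f"r = {r:.3f}")  # strong positive association in this toy data
```

In practice the study would also test the significance of each r before calling a correlation "reliable".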
118 A Digital Twin Approach to Support Real-Time Situational Awareness and Intelligent Cyber-Physical Control in Energy Smart Buildings
Authors: Haowen Xu, Xiaobing Liu, Jin Dong, Jianming Lian
Abstract:
Emerging smart buildings often employ cyberinfrastructure, cyber-physical systems, and Internet of Things (IoT) technologies to increase the automation and responsiveness of building operations for better energy efficiency and lower carbon emissions. These operations include the control of Heating, Ventilation, and Air Conditioning (HVAC) and lighting systems, which are often considered a major source of energy consumption in both commercial and residential buildings. Developing energy-saving control models for optimizing HVAC operations usually requires the collection of high-quality instrumental data from iterations of in-situ building experiments, which can be time-consuming and labor-intensive. This abstract describes a digital twin approach to automate building energy experiments for optimizing HVAC operations through the design and development of an adaptive web-based platform. The platform is created to enable (a) automated data acquisition from a variety of IoT-connected HVAC instruments, (b) real-time situational awareness through domain-based visualizations, (c) adaptation of HVAC optimization algorithms based on experimental data, (d) sharing of experimental data and model predictive controls through web services, and (e) cyber-physical control of individual instruments in the HVAC system using outputs from different optimization algorithms. Through the digital twin approach, we aim to replicate a real-world building and its HVAC systems in an online computing environment to automate the development of building-specific model predictive controls and collaborative experiments in buildings located in different climate zones in the United States. We present two case studies to demonstrate our platform’s capability for real-time situational awareness and cyber-physical control of the HVAC in the flexible research platforms on the Oak Ridge National Laboratory (ORNL) main campus. 
Our platform is developed using an adaptive and flexible architecture design, rendering the platform generalizable and extendable to support HVAC optimization experiments in different types of buildings across the nation.
Keywords: energy-saving buildings, digital twins, HVAC, cyber-physical system, BIM
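The automated-experiment idea behind items (a)-(e) can be illustrated with a toy digital twin: a one-node thermal model of a room stands in for the building, and a sweep over candidate setpoints stands in for the scheduled experiments. Everything here is an assumption for illustration; none of it reflects the ORNL platform's actual models or APIs.

```python
# Minimal sketch: automated HVAC experiments against a one-node room model.
# Loss coefficient, heater power, and setpoints are all assumed values.

def run_experiment(setpoint, hours=24, outdoor=5.0, t0=21.0):
    """Simulate simple thermostat control; return (mean_temp, energy_kwh)."""
    temp, energy, temps = t0, 0.0, []
    for _ in range(hours):
        # Passive drift toward outdoor temperature (loss coefficient 0.1/h).
        temp += 0.1 * (outdoor - temp)
        if temp < setpoint:          # heater on: 2 kW, raises temp 1 deg C/h
            temp += 1.0
            energy += 2.0
        temps.append(temp)
    return sum(temps) / len(temps), energy

# Automated sweep over candidate setpoints, as a platform might schedule.
results = {sp: run_experiment(sp) for sp in (19.0, 20.0, 21.0)}
for sp, (mean_t, kwh) in results.items():
    print(f"setpoint {sp:.0f} C -> mean {mean_t:.1f} C, {kwh:.0f} kWh")
```

In the real platform, `run_experiment` would be replaced by actuating the physical (or twinned) HVAC instruments and logging sensor data through the web services described above.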
117 Validation of an Impedance-Based Flow Cytometry Technique for High-Throughput Nanotoxicity Screening
Authors: Melanie Ostermann, Eivind Birkeland, Ying Xue, Alexander Sauter, Mihaela R. Cimpan
Abstract:
Background: New reliable and robust techniques to assess the biological effects of nanomaterials (NMs) in vitro are needed to speed up safety analysis and to identify the key physicochemical parameters of NMs that are responsible for their acute cytotoxicity. The central aim of this study was to validate and evaluate the applicability and reliability of an impedance-based flow cytometry (IFC) technique for the high-throughput screening of NMs. Methods: Eight inorganic NMs from the European Commission Joint Research Centre Repository were used: NM-302 and NM-300K (Ag: 200 nm rods and 16.7 nm spheres, respectively), NM-200 and NM-203 (SiO₂: 18.3 nm and 24.7 nm amorphous, respectively), NM-100 and NM-101 (TiO₂: 100 nm and 6 nm anatase, respectively), and NM-110 and NM-111 (ZnO: 147 nm and 141 nm, respectively). The aim was to assess the biological effects of these materials on human monoblastoid (U937) cells. Dispersions of NMs were prepared as described in the NANOGENOTOX dispersion protocol, and cells were exposed to NMs at relevant concentrations (2, 10, 20, 50, and 100 µg/mL) for 24 hrs. The change in electrical impedance was measured at 0.5, 2, 6, and 12 MHz using the IFC AmphaZ30 (Amphasys AG, Switzerland). A traditional toxicity assay, the Trypan Blue Dye Exclusion assay, and dark-field microscopy were used to validate the IFC method. Results: Spherical Ag particles (NM-300K) showed the highest toxic effect on U937 cells, followed by ZnO (NM-111 ≥ NM-110) particles. Silica particles were moderately to non-toxic at all used concentrations under these conditions. A higher toxic effect was seen with the smaller TiO₂ particles (NM-101) compared to their larger analogues (NM-100). No interference between the IFC and the used NMs was seen. Uptake and internalization of NMs were observed after 24 hours of exposure, confirming actual NM-cell interactions. 
Conclusion: The results collected with the IFC demonstrate the applicability of this method for rapid nanotoxicity assessment, and it proved less prone to nano-related interference issues than some traditional toxicity assays. Furthermore, this label-free and novel technique shows good potential for up-scaling toward automated high-throughput screening and for future NM toxicity assessment. This work was supported by the EC FP7 NANoREG (Grant Agreement NMP4-LA-2013-310584), the Research Council of Norway, project NorNANoREG (239199/O70), the EuroNanoMed II 'GEMN' project (246672), and the UH-Nett Vest project.
Keywords: cytotoxicity, high-throughput, impedance, nanomaterials
116 Tagging a Corpus of Media Interviews with Diplomats: Challenges and Solutions
Authors: Roberta Facchinetti, Sara Corrizzato, Silvia Cavalieri
Abstract:
Increasing interconnection between data digitalization and linguistic investigation has given rise to unprecedented potentialities and challenges for corpus linguists, who need to master IT tools for data analysis and text processing, as well as to develop techniques for efficient and reliable annotation in specific mark-up languages that encode documents in a format that is both human- and machine-readable. In the present paper, the challenges emerging from the compilation of a linguistic corpus are considered, focusing on the English language in particular. To do so, the case study of the InterDiplo corpus is illustrated. The corpus, currently under development at the University of Verona (Italy), represents a novelty in terms both of the data included and of the tag set used for its annotation. The corpus covers media interviews and debates with diplomats and international operators conversing in English with journalists who do not share the same lingua-cultural background as their interviewees. To date, this appears to be the first tagged corpus of international institutional spoken discourse, and it will be an important database not only for linguists interested in corpus analysis but also for experts operating in international relations. In the present paper, special attention is dedicated to the structural mark-up, part-of-speech annotation, and tagging of discursive traits, which are the innovative parts of the project, resulting from a thorough study to find the best solution to suit the analytical needs of the data. Several aspects are addressed, with special attention to the tagging of the speakers’ identity, the communicative events, and anthropophagic. Prominence is given to the annotation of question/answer exchanges to investigate the interlocutors’ choices and how such choices impact communication. 
Indeed, the automated identification of questions, in relation to the expected answers, is functional to understanding how interviewers elicit information as well as how interviewees provide their answers to fulfill their respective communicative aims. A detailed description of the aforementioned elements is given using the InterDiplo-Covid19 pilot corpus. Our preliminary analysis highlights the viable solutions found in the construction of the corpus in terms of XML conversion, metadata definition, the tagging system, and the discursive-pragmatic annotation to be included via Oxygen.
Keywords: spoken corpus, diplomats’ interviews, tagging system, discursive-pragmatic annotation, English linguistics
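The structural mark-up of a question/answer exchange might look like the sketch below. The tag and attribute names here are purely illustrative, they are not the actual InterDiplo tag set; the point is only to show how speaker identity, question type, and lingua-cultural background can be encoded and then retrieved automatically.

```python
import xml.etree.ElementTree as ET

# Hypothetical mark-up for one question/answer exchange.
exchange = ET.Element("exchange", id="ex01", topic="covid19")
q = ET.SubElement(exchange, "question", speaker="journalist",
                  qtype="wh", lang_background="L1-English")
q.text = "What measures is the embassy taking?"
a = ET.SubElement(exchange, "answer", speaker="diplomat",
                  strategy="direct", lang_background="L2-English")
a.text = "We have repatriated over two hundred nationals."

xml_string = ET.tostring(exchange, encoding="unicode")
print(xml_string)

# Round trip: automated retrieval of all questions and their types,
# as needed for the question/answer analysis described above.
parsed = ET.fromstring(xml_string)
qtypes = [(el.get("speaker"), el.get("qtype")) for el in parsed.iter("question")]
```

An editor such as Oxygen would validate such files against the project's schema rather than building them programmatically, but the retrieval logic is the same.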
115 Three-Year Pedometer-Based Physical Activity Intervention of the Adult Population in Qatar
Authors: Mercia I. Van Der Walt, Suzan Sayegh, Izzeldin E. L. J. Ibrahim, Mohamed G. Al-Kuwari, Manaf Kamil
Abstract:
Background: Increased physical activity is associated with improvements in health conditions. Walking is recognized as an easy form of physical activity and a strategy used in health promotion. Step into Health (SIH), a national community program, was established in Qatar to support physical activity promotion through the monitoring of step counts. This study aims to assess the physical activity levels of the adult population in Qatar through a pedometer-based community program over a three-year period. Methodology: This longitudinal study was conducted between January 2013 and December 2015 based on daily step counts. A total of 15,947 adults (8,551 males and 7,396 females) of different nationalities, enrolled in the program and aged 18 to 64, are included. The program involves free distribution of pedometers to members who voluntarily choose to register. It is also supported by a self-monitoring online account and linked to a web database. All members are informed about the 10,000 steps/day target, and automated emails as well as text messages are sent as reminders to upload data. Daily step counts were measured with the Omron HJ-324U pedometer (Omron Healthcare Co., Ltd., Japan). Analyses were done on the data extracted from the web database. Results: The daily average step count for the overall community increased from 4,830 steps/day (2013) to 6,124 steps/day (2015). This increase was also observed within the three age categories (18-30), (31-45), and (>45) years. Average steps per day were higher among males than females in each of the aforementioned age groups. Moreover, males and females in the age group >45 years showed the highest average step counts, with 7,010 steps/day and 5,564 steps/day respectively. 
The 21% increase in overall step count throughout the study period is associated with a well-resourced program and its ongoing impact in smaller communities such as workplaces and universities, a step in the right direction. However, the average step count of 6,124 steps/day in the third year is still classified in the low-active category. Although the program showed an increased step count, we found that 33% of the study population were low active and 35% were sedentary, with only 32% being active. Conclusion: This study indicates that the pedometer-based intervention was effective in increasing the daily physical activity of participants. However, alternative approaches need to be incorporated within the program to educate and encourage the community to meet the physical activity recommendations in relation to step count.
Keywords: pedometer, physical activity, Qatar, step count
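The classification of averages into activity categories can be sketched as below. The cut-offs are the commonly used steps/day bands (sedentary below 5,000; low active 5,000-7,499; active at or above 7,500); the abstract does not state its exact thresholds, so these are an assumption, though they reproduce its "low active" label for the third-year average.

```python
# Assumed steps/day activity bands (Tudor-Locke-style cut-offs).
def classify(steps_per_day):
    if steps_per_day < 5000:
        return "sedentary"
    if steps_per_day < 7500:
        return "low active"
    return "active"

# Overall yearly averages reported in the study.
yearly_avg = {2013: 4830, 2015: 6124}
labels = {year: classify(s) for year, s in yearly_avg.items()}
print(labels)
```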
114 Algorithm for Modelling Land Surface Temperature and Land Cover Classification and Their Interaction
Authors: Jigg Pelayo, Ricardo Villar, Einstine Opiso
Abstract:
The rampant and unintended spread of urban areas has increased the artificial component in the land cover of the countryside, bringing forth the urban heat island (UHI). This has paved the way for a wide range of negative influences on human health and the environment, commonly related to air pollution, drought, higher energy demand, and water shortage. Land cover type also plays a relevant role in understanding the interaction between ground surfaces and local temperature. At the moment, the depiction of land surface temperature (LST) at city/municipality scale, particularly in certain areas of Misamis Oriental, Philippines, is inadequate to support efficient mitigation of and adaptation to the surface urban heat island (SUHI). Thus, this study purposely attempts to apply Landsat 8 satellite data and low-density Light Detection and Ranging (LiDAR) products to map out a quality automated LST model and crop-level land cover classification at a local scale, through a theoretical and algorithm-based approach utilizing the principle of data analysis subjected to a multi-dimensional image object model. The paper also aims to explore the relationship between the derived LST and the land cover classification. The results of the presented model showed the ability of comprehensive data analysis and GIS functionalities, with the integration of an object-based image analysis (OBIA) approach, to automate complex map production processes with considerable efficiency and high accuracy. The findings may potentially lead to expanded investigation of the temporal dynamics of the land surface UHI. 
It is worthwhile to note that the environmental significance of these interactions, through the combined application of remote sensing, geographic information tools, mathematical morphology, and data analysis, can provide microclimate perception, awareness, and improved decision-making for land use planning and characterization at local and neighborhood scales. As a result, it can aid in facilitating problem identification and support mitigation and adaptation more efficiently.
Keywords: LiDAR, OBIA, remote sensing, local scale
113 MB-SLAM: A SLAM Framework for Construction Monitoring
Authors: Mojtaba Noghabaei, Khashayar Asadi, Kevin Han
Abstract:
Simultaneous Localization and Mapping (SLAM) technology has recently attracted the attention of construction companies for real-time performance monitoring. To effectively use SLAM for construction performance monitoring, SLAM results should be registered to a Building Information Model (BIM). Registering SLAM and BIM can provide essential insights for construction managers to identify construction deficiencies in real time and ultimately reduce rework. Also, registering SLAM to BIM in real time can boost the accuracy of SLAM, since SLAM can then use features from both images and 3D models. However, registering SLAM with the BIM in real time is a challenge. In this study, a novel SLAM platform named Model-Based SLAM (MB-SLAM) is proposed, which not only provides automated registration of SLAM and BIM but also improves the localization accuracy of the SLAM system in real time. This framework improves the accuracy of SLAM by aligning perspective features such as depth, vanishing points, and vanishing lines from the BIM to the SLAM system. It extracts depth features from a monocular camera’s images and improves the localization accuracy of the SLAM system through a real-time iterative process. Initially, SLAM is used to calculate a rough camera pose for each keyframe. In the next step, each keyframe of the SLAM video sequence is registered to the BIM in real time by aligning the keyframe’s perspective with the equivalent BIM view. The alignment method is based on perspective detection, which estimates vanishing lines and points by detecting straight edges in images. This process generates the associated BIM views from the keyframes' views. The calculated poses are later improved by a real-time gradient-descent-based iteration method. Two case studies were presented to validate MB-SLAM. The validation process demonstrated promising results: it accurately registered SLAM to BIM and significantly improved the SLAM’s localization accuracy. 
Besides, MB-SLAM achieved real-time performance in both indoor and outdoor environments. The proposed method can fully automate the workflows of past studies and generate as-built models that are aligned with BIM. The main contribution of this study is a SLAM framework for both research and commercial usage, which aims to monitor construction progress and performance in a unified framework. Through this platform, users can improve the accuracy of the SLAM by providing a rough 3D model of the environment. MB-SLAM further boosts the application of SLAM to practical usage.
Keywords: perspective alignment, progress monitoring, SLAM, stereo matching
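The vanishing-point estimation underlying the perspective alignment step can be sketched as a least-squares problem: given straight edges extracted from an image (e.g., by a Hough transform, assumed done upstream), find the point minimizing the squared distance to all of them. This is a generic illustration of the geometry, not MB-SLAM's actual alignment code.

```python
import numpy as np

def vanishing_point(points, directions):
    """Least-squares intersection of 2D lines, each given as a point on the
    line and a direction vector; returns the point closest to all lines."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in zip(points, directions):
        d = np.asarray(d, float) / np.linalg.norm(d)
        P = np.eye(2) - np.outer(d, d)   # projector onto the line's normal
        A += P
        b += P @ np.asarray(p, float)
    return np.linalg.solve(A, b)

# Three edges that all pass (noiselessly) through (4, 3):
pts = [(0.0, 0.0), (0.0, 3.0), (4.0, 0.0)]
dirs = [(4.0, 3.0), (1.0, 0.0), (0.0, 1.0)]
vp = vanishing_point(pts, dirs)
print(vp)  # close to [4. 3.]
```

With noisy edges the same formula returns the best-fit vanishing point, which can then be matched against the vanishing point of the corresponding BIM view to constrain the camera pose.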
112 Enhancing Tower Crane Safety: A UAV-Based Intelligent Inspection Approach
Authors: Xin Jiao, Xin Zhang, Jian Fan, Zhenwei Cai, Yiming Xu
Abstract:
Tower cranes play a crucial role in the construction industry, facilitating the vertical and horizontal movement of materials and aiding building construction, especially for high-rise structures. However, tower crane accidents can lead to severe consequences, highlighting the importance of effective safety management and inspection. This paper presents an innovative approach to tower crane inspection utilizing Unmanned Aerial Vehicles (UAVs) and an Intelligent Inspection APP System. The system leverages UAVs equipped with high-definition cameras to conduct efficient and comprehensive inspections, reducing manual labor, inspection time, and risk. By integrating advanced technologies such as Real-Time Kinematic (RTK) positioning and digital image processing, the system enables precise route planning and the collection of images of safety hazards. A case study conducted on a construction site demonstrates the practicality and effectiveness of the proposed method, showcasing its potential to enhance tower crane safety. On-site testing of UAV intelligent inspections reveals key findings: efficient tower crane hazard inspection within 30 minutes, with full-identification coverage rates of 76.3%, 64.8%, and 76.2% for major, significant, and general hazards, respectively, and preliminary-identification coverage rates of 18.5%, 27.2%, and 19%, respectively. Notably, UAVs effectively identify various tower crane hazards, except those requiring auditory detection. The limitations of this study primarily involve two aspects. First, during the initial inspection, manual drone piloting is required for marking tower crane points; subsequent automated flight inspections then reuse the marked route. Second, images captured by the drone necessitate manual identification and review, which can be time-consuming for equipment management personnel, particularly when dealing with a large volume of images. 
Subsequent research efforts will focus on AI training and recognition of safety hazard images, as well as the automatic generation of inspection reports and corrective management based on recognition results. The ongoing development in this area is currently in progress, and outcomes will be released at an appropriate time.
Keywords: tower crane, inspection, unmanned aerial vehicle (UAV), intelligent inspection app system, safety management
111 Determination of Identification and Antibiotic Resistance Rates of Serratia marcescens and Providencia spp. from Various Clinical Specimens Using Both Conventional and Automated (VITEK 2) Methods
Authors: Recep Keşli, Gülşah Aşık, Cengiz Demir, Onur Türkyılmaz
Abstract:
Objective: Serratia species are aerobic, motile Gram-negative rods. The species Serratia marcescens (S. marcescens) causes both opportunistic and nosocomial infections. The genus Providencia comprises Gram-negative, urease-producing bacilli responsible for a wide range of human infections. Although most Providencia infections involve the urinary tract, they are also associated with gastroenteritis, wound infections, and bacteremia. The aim of this study was to evaluate the antimicrobial resistance rates of S. marcescens and Providencia spp. strains isolated from various clinical materials obtained from different patients in intensive care units (ICU) and inpatient clinics. Methods: A total of 35 S. marcescens and Providencia spp. strains isolated from various clinical samples admitted to the Medical Microbiology Laboratory, ANS Research and Practice Hospital, Afyon Kocatepe University, between October 2013 and September 2015 were included in the study. Identification of the bacteria was performed by conventional methods, and the VITEK 2 system (bioMérieux, Marcy l'Étoile, France) was used additionally. Antibacterial resistance tests were performed using the Kirby-Bauer disc diffusion method (Oxoid, Hampshire, England) following the recommendations of CLSI. Results: The distribution of clinical samples was as follows: upper and lower respiratory tract samples 26 (74.2%), wound specimens 6 (17.1%), and blood cultures 3 (8.5%). Of the 35 S. marcescens and Providencia spp. strains, 28 (80%) were isolated from clinical samples sent from the ICU. The resistance rates of S. marcescens strains against trimethoprim-sulfamethoxazole, piperacillin-tazobactam, imipenem, gentamicin, ciprofloxacin, ceftazidime, cefepime, and amikacin were 8.5%, 22.8%, 11.4%, 2.8%, 17.1%, 40%, 28.5%, and 5.7%, respectively. Resistance rates of Providencia spp. 
strains against trimethoprim-sulfamethoxazole, piperacillin-tazobactam, imipenem, gentamicin, ciprofloxacin, ceftazidime, cefepime and amikacin were found to be 10.2 %, 33,3 %, 18.7 %, 8.7 %, 13.2 %, 38.6 %, 26.7%, and 11.8 % respectively. Conclusion: S. marcescens is usually resistant to ampicillin, amoxicillin, amoxicillin/clavulanate, ampicillin/sulbactam, cefuroxime, cephamycins, nitrofurantoin, and colistin. The most effective antibiotic on the total of S. marcescens strains was found to be gentamicin 2.8 %, of the totally tested strains the highest resistance rate found against to ceftazidime 40 %. The lowest and highest resistance rates were found against gentamiycin and ceftazidime with the rates of 8.7 % and 38.6 % for Providencia spp.Keywords: Serratia marcescens, Providencia spp., antibiotic resistance, intensive care unit
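The reported resistance profiles lend themselves to a quick tabulation. A minimal Python sketch follows; the antibiotic abbreviations (SXT, TZP, etc.) are standard shorthand introduced here for compactness, not taken from the abstract, while the rates are the ones reported above:

```python
# Resistance rates (%) as reported in the abstract, in the order listed there.
# Abbreviations (assumed, standard shorthand): SXT = trimethoprim-sulfamethoxazole,
# TZP = piperacillin-tazobactam, IPM = imipenem, GEN = gentamicin,
# CIP = ciprofloxacin, CAZ = ceftazidime, FEP = cefepime, AMK = amikacin.
antibiotics = ["SXT", "TZP", "IPM", "GEN", "CIP", "CAZ", "FEP", "AMK"]
serratia    = [8.5, 22.8, 11.4, 2.8, 17.1, 40.0, 28.5, 5.7]
providencia = [10.2, 33.3, 18.7, 8.7, 13.2, 38.6, 26.7, 11.8]

def extremes(rates):
    """Return (lowest-resistance, highest-resistance) antibiotic for a species."""
    pairs = dict(zip(antibiotics, rates))
    return min(pairs, key=pairs.get), max(pairs, key=pairs.get)

print(extremes(serratia))     # ('GEN', 'CAZ') — gentamicin lowest, ceftazidime highest
print(extremes(providencia))  # ('GEN', 'CAZ')
```

The output matches the conclusion drawn in the abstract: gentamicin is the most effective agent and ceftazidime the least effective for both species.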
110 Different Processing Methods to Obtain a Carbon Composite Element for Cycling
Authors: Maria Fonseca, Ana Branco, Joao Graca, Rui Mendes, Pedro Mimoso
Abstract:
The present work is focused on the production of a carbon composite element for cycling through different techniques, namely, blow-molding and high-pressure resin transfer molding (HP-RTM). The main objective of this work is to compare both processes for producing carbon composite elements for the cycling industry. Carbon composite components for cycling are produced mainly through blow-molding; however, this technique depends strongly on manual labour, resulting in a time-consuming production process. Comparatively, HP-RTM offers a more automated process, which should lead to higher production rates. Nevertheless, the elements produced through both techniques must be compared in order to assess whether the final products comply with the required standards of the industry. The main difference between the two techniques lies in the material used. Blow-molding uses carbon prepreg (carbon fibres pre-impregnated with a resin system), and the material is laid up by hand, piece by piece, on a mould or on a hard male form. After that, the material is cured at a high temperature. In the HP-RTM technique, on the other hand, dry carbon fibres are placed on a mould, and resin is then injected at high pressure. After some research regarding the best material systems (prepregs and braids) and suppliers, an element similar to a handlebar was designed for construction. The next step was to perform FEM simulations in order to determine the best layup of the composite material. The simulations were done for the prepreg material, and the resulting layup was transposed to the braids. The material selected for the blow-molding technique was a prepreg with T700 carbon fibre (24K) and an epoxy resin system. For HP-RTM, carbon fibre elastic UD tubes and ±45° braids were used, with both 3K and 6K filaments per tow, and the resin system was an epoxy as well.
After the simulations for the prepreg material, the optimized layup was [45°, -45°, 45°, -45°, 0°, 0°]. For HP-RTM, the transposed layup was [±45° (6K); 0° (6K); partial ±45° (6K); partial ±45° (6K); ±45° (3K); ±45° (3K)]. The mechanical tests showed that both elements can withstand the maximum load (in this case, 1000 N); however, the one produced through blow-molding can support higher loads (≈1300 N against 1100 N for HP-RTM). Regarding the fibre volume fraction (FVF), the HP-RTM element has a slightly higher value (>61%, compared to 59% for the blow-molding technique). Optical microscopy showed that both elements have a low void content. In conclusion, the elements produced using HP-RTM are comparable to the ones produced through blow-molding, both in mechanical testing and in visual aspect. Nevertheless, there is still room for improvement in the HP-RTM elements, since the layup of the braids and UD tubes could be optimized.
Keywords: HP-RTM, carbon composites, cycling, FEM
109 Copyright Clearance for Artificial Intelligence Training Data: Challenges and Solutions
Authors: Erva Akin
Abstract:
The use of copyrighted material for machine learning purposes is a challenging issue in the field of artificial intelligence (AI). While machine learning algorithms require large amounts of data to train and to improve their accuracy and creativity, the use of copyrighted material without permission from the authors may infringe on their intellectual property rights. In order to overcome the copyright hurdle to the sharing, access, and re-use of data, the use of copyrighted material for machine learning purposes may be considered permissible under certain circumstances. For example, if the copyright holder has given permission to use the data through a licensing agreement, then its use for machine learning purposes may be lawful. It is also argued that copying for non-expressive purposes that do not involve conveying expressive elements to the public, such as automated data extraction, should not be seen as infringing. The focus of such ‘copy-reliant technologies’ is on understanding language rules, styles, and syntax; no creative ideas are being used. However, the non-expressive use defense sits within the framework of the fair use doctrine, which allows the use of copyrighted material for research or educational purposes. Questions arise because the fair use doctrine is not available in EU law; instead, the InfoSoc Directive provides a rigid system of exclusive rights with a list of exceptions and limitations. One could only argue that non-expressive uses of copyrighted material for machine learning purposes do not constitute a ‘reproduction’ in the first place. Nevertheless, the use of machine learning with copyrighted material remains difficult because EU copyright law applies to the mere use of the works. Two solutions can be proposed to address the problem of copyright clearance for AI training data.
The first is to introduce a broad exception for text and data mining, either mandatory or limited to commercial and scientific purposes. The second is for copyright laws to permit the reproduction of works for non-expressive purposes, which opens the door to discussions regarding the transposition of the fair use principle from US into EU law. Both solutions aim to provide more space for AI developers to operate and to encourage greater freedom, which could lead to more rapid innovation in the field. The Data Governance Act presents a significant opportunity to advance these debates. Finally, issues concerning the balance between general public interests and legitimate private interests in machine learning training data must be addressed. In my opinion, it is crucial that machine-created output fall into the public domain. Machines depend on human creativity, innovation, and expression. To encourage technological advancement and innovation, freedom of expression and business operation must be prioritised.
Keywords: artificial intelligence, copyright, data governance, machine learning
108 Prevalence of Breast Cancer Molecular Subtypes at a Tertiary Cancer Institute
Authors: Nahush Modak, Meena Pangarkar, Anand Pathak, Ankita Tamhane
Abstract:
Background: Breast cancer is the most prominent cause of cancer and cancer mortality among women. This study presents a statistical analysis of a cohort of over 250 breast cancer patients diagnosed by oncologists using immunohistochemistry (IHC). IHC was performed using ER, PR, HER2, and Ki-67 antibodies. Materials and methods: Formalin-fixed, paraffin-embedded tissue samples were obtained surgically, and standard protocols were followed for fixation, grossing, tissue processing, embedding, cutting, and IHC. The Ventana Benchmark XT machine was used for automated IHC of the samples; the antibodies were supplied by F. Hoffmann-La Roche Ltd. Statistical analysis was performed using SPSS for Windows; chi-squared and correlation tests were applied at p < .01. The raw data were collected and provided by the National Cancer Institute, Jamtha, India. Results: Luminal B was the most prevalent molecular subtype of breast cancer at our institute, as confirmed by a chi-squared test of homogeneity of the subtype distribution. Prognosis in breast cancer worsens with the expression of Ki-67 and HER2 protein in cancerous cells, and significant dependence between the two was observed at p < .01. Age showed no dependence on molecular subtype, and was likewise independent of Ki-67 expression. A chi-squared test on the HER2 statuses of patients showed strong dependence between the percentage of Ki-67 expression and HER2 (+/-) status, i.e., the Ki-67 value depends on HER2 expression in cancerous cells (p < .01). Surprisingly, dependence was also observed between Ki-67 and PR at p < .01, indicating that progesterone receptor (PR) proteins are over-expressed when Ki-67 expression is elevated.
Conclusion: We conclude that Luminal B is the most prevalent molecular subtype at the National Cancer Institute, Jamtha, India. No significant correlation was found between age and Ki-67 expression in any molecular subtype, and no dependence or correlation exists between patients’ age and molecular subtype. We also found that, among the cohort of 257 patients, no patient diagnosed as Luminal A showed a Ki-67 value >14%. Statistically, extremely significant dependence of the PR+HER2- and PR-HER2+ scores on Ki-67 expression was observed (p < .01). HER2 is an important prognostic factor in breast cancer; the chi-squared test for HER2 and Ki-67 shows that Ki-67 expression depends on HER2 status. Moreover, Ki-67 cannot be used as a standalone prognostic factor in breast cancer.
Keywords: breast cancer molecular subtypes, correlation, immunohistochemistry, Ki-67 and HR, statistical analysis
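The dependence tests described in this abstract are chi-squared tests of independence on contingency tables. A minimal Python sketch using SciPy follows; the 2x2 table of HER2 status versus Ki-67 category is invented for illustration, since the abstract does not report raw counts:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table (counts are illustrative, not from the study):
# rows = HER2-negative, HER2-positive; columns = Ki-67 <= 14%, Ki-67 > 14%.
table = [[40, 25],
         [10, 45]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4g}, dof = {dof}")
if p < 0.01:
    # Same decision rule as in the abstract (significance at p < .01)
    print("Ki-67 category depends on HER2 status at p < .01")
```

With a real dataset, the table would be built from the per-patient IHC scores; the same call also serves the PR/Ki-67 and age/subtype tests mentioned above.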
107 Mechanical Characterization and CNC Rotary Ultrasonic Grinding of Crystal Glass
Authors: Ricardo Torcato, Helder Morais
Abstract:
The manufacture of crystal glass parts is based on obtaining the rough geometry by blowing and/or injection, generally followed by a set of manual finishing operations using cutting and grinding tools. The forming techniques used do not allow parts with complex shapes to be obtained with repeatability, and the finishing operations rely on intensive specialized labor, resulting in high cycle times and production costs. This work aims to explore the digital manufacture of crystal glass parts by investigating new subtractive techniques for the automated, flexible finishing of these parts. Finishing operations are essential to respond to customer demands in terms of crystal feel and shine. It is intended to investigate the applicability to crystal processing of different computerized finishing technologies, namely milling and grinding in a CNC machining center with or without ultrasonic assistance. Research in the field of grinding hard and brittle materials, despite not being extensive, has increased in recent years, and scientific knowledge about the machinability of crystal glass is still very limited. However, the unique properties of glass, such as high hardness and very low toughness, make any glass machining technology a very challenging process. This work measures the performance improvement brought about by the use of ultrasound compared to conventional crystal grinding. This presentation is focused on the mechanical characterization and the analysis of cutting forces in CNC machining of superior crystal glass (Pb ≥ 30%). For the mechanical characterization, the Vickers hardness test provides an estimate of the material hardness (Hv) and of the fracture toughness, based on the cracks that appear around the indentation. The impulse excitation test estimates the Young’s modulus, shear modulus, and Poisson’s ratio of the material. For the cutting forces, a dynamometer was used to measure the forces in the face grinding process.
The tests were designed using the Taguchi method to correlate the input parameters (feed rate, tool rotation speed, and depth of cut) with the output parameters (surface roughness and cutting forces) and to optimize the process (the best roughness achievable with cutting forces that do not compromise the material structure or the tool life) using ANOVA. This study was conducted for conventional grinding and for ultrasonic-assisted grinding with the same cutting tools. It was possible to determine the optimum cutting parameters for minimum cutting forces and for minimum surface roughness in both grinding processes. Ultrasonic-assisted grinding provides a better surface roughness than conventional grinding.
Keywords: CNC machining, crystal glass, cutting forces, hardness
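A Taguchi-style ranking of parameter levels can be sketched as follows. The roughness replicates and the "smaller-is-better" signal-to-noise formulation are illustrative assumptions (the abstract reports no raw data, and the actual analysis also used ANOVA across all three factors):

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi 'smaller-is-better' signal-to-noise ratio: -10*log10(mean(y^2)).
    Higher S/N means a more desirable (smaller, more stable) response."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical surface-roughness Ra replicates (um) for three feed-rate levels.
ra_by_feed = {
    "low":    [0.42, 0.45, 0.44],
    "medium": [0.55, 0.58, 0.60],
    "high":   [0.81, 0.84, 0.79],
}

sn = {level: sn_smaller_is_better(vals) for level, vals in ra_by_feed.items()}
best = max(sn, key=sn.get)  # the level with the highest S/N minimizes roughness
print(best)  # low
```

In the full study, the same ranking would be repeated for tool rotation speed and depth of cut, and ANOVA on the S/N ratios would indicate which factor contributes most to the response.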
106 Contribution of PALB2 and BLM Mutations to Familial Breast Cancer Risk in BRCA1/2 Negative South African Breast Cancer Patients Detected Using High-Resolution Melting Analysis
Authors: N. C. van der Merwe, J. Oosthuizen, M. F. Makhetha, J. Adams, B. K. Dajee, S-R. Schneider
Abstract:
Women representing high-risk breast cancer families who tested negative for pathogenic mutations in BRCA1 and BRCA2 are four times more likely to develop breast cancer than women in the general population. Sequencing of genes involved in genomic stability and DNA repair has led to the identification of novel contributors to familial breast cancer risk, including BLM and PALB2. Bloom's syndrome is a rare autosomal recessive chromosomal instability disorder with a high incidence of various types of neoplasia; in the heterozygous state, BLM mutations are associated with breast cancer. PALB2, on the other hand, binds to BRCA2, and together they partake actively in DNA damage repair. Archived DNA samples of 66 BRCA1/2-negative high-risk breast cancer patients were retrospectively selected based on the presence of an extensive family history of the disease (>3 affected members per family). All coding regions and splice-site boundaries of both genes were screened using High-Resolution Melting Analysis. Samples exhibiting variation were bidirectionally Sanger sequenced on an automated sequencer. The clinical significance of each variant was assessed using various in silico and splice-site prediction algorithms. Comprehensive screening identified a total of 11 BLM and 26 PALB2 variants, ranging from global to rare and including three novel mutations. Three BLM and two PALB2 likely pathogenic mutations were identified that could account for the disease in these extensive breast cancer families in the absence of BRCA mutations (BLM c.11T > A, p.V4D; BLM c.2603C > T, p.P868L; BLM c.3961G > A, p.V1321I; PALB2 c.421C > T, p.Gln141Ter; PALB2 c.508A > T, p.Arg170Ter). Conclusion: The study confirmed the contribution of pathogenic mutations in BLM and PALB2 to the familial breast cancer burden in South Africa, explaining the presence of the disease in 7.5% of the BRCA1/2-negative families with an extensive family history of breast cancer.
Segregation analysis will be performed to confirm the clinical impact of these mutations for each of these families. These results justify the inclusion of both genes in a comprehensive breast and ovarian next-generation sequencing cancer panel; they should be screened simultaneously with BRCA1 and BRCA2, as this might explain a significant percentage of familial breast and ovarian cancer in South Africa.
Keywords: Bloom syndrome, familial breast cancer, PALB2, South Africa
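As a rough sanity check on the 7.5% figure reported in this abstract, the five likely pathogenic mutations (three BLM, two PALB2) can be set against the 66 screened families. This sketch assumes each mutation accounts for one family, which the abstract implies but does not state:

```python
# Assumed mapping (not stated explicitly in the abstract): one explained
# family per likely pathogenic mutation found.
families_screened = 66
families_explained = 3 + 2  # three BLM + two PALB2 mutations

yield_pct = 100 * families_explained / families_screened
print(f"{yield_pct:.1f}%")  # 7.6%, consistent with the ~7.5% reported
```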