Search results for: quantum algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2591

371 Enhancement of Density-Based Spatial Clustering Algorithm with Noise for Fire Risk Assessment and Warning in Metro Manila

Authors: Pinky Mae O. De Leon, Franchezka S. P. Flores

Abstract:

This study focuses on applying an enhanced density-based spatial clustering algorithm with noise for fire risk assessment and warning in Metro Manila. Unlike other clustering algorithms, DBSCAN is known for its ability to identify arbitrary-shaped clusters and its resistance to noise. However, its performance diminishes when handling high-dimensional data, where it can misread noise points as relevant data points. The algorithm is also dependent on the parameters (eps and minPts) set by the user; choosing the wrong parameters can greatly affect its clustering result. To overcome these challenges, the study proposes three key enhancements: first, to utilize multiple MinHash and locality-sensitive hashing to decrease the dimensionality of the data set; second, to apply Jaccard similarity before the parameter epsilon so that only similar data points are considered neighbors; and third, to use the concept of the Jaccard neighborhood along with the parameter minPts to improve the classification of core points and the identification of noise in the data set. The results show that the modified DBSCAN algorithm outperformed three other clustering methods: it achieved fewer outliers, which facilitated a clearer identification of fire-prone areas; a high Silhouette score, indicating well-separated clusters that distinctly identify areas with potential fire hazards; and a low Davies-Bouldin index together with a high Calinski-Harabasz score, highlighting its ability to form compact and well-defined clusters, making it an effective tool for assessing fire hazard zones. This study is intended for assessing the areas in Metro Manila that are most prone to fire risk.
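
As a minimal sketch (not the authors' implementation), the Jaccard-neighborhood test described above can be illustrated as follows; the feature sets, similarity threshold, and minPts value are assumptions for illustration.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two feature sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def classify_points(points, sim_threshold=0.6, min_pts=4):
    """Label each point 'core' or 'noise-candidate' by its Jaccard neighborhood."""
    labels = []
    for i, p in enumerate(points):
        neighbors = sum(1 for j, q in enumerate(points)
                        if i != j and jaccard(p, q) >= sim_threshold)
        labels.append("core" if neighbors >= min_pts else "noise-candidate")
    return labels

# toy feature sets standing in for per-area fire-risk attributes
data = [{"wood", "dense", "old"}, {"wood", "dense"}, {"concrete", "sparse"}]
print(classify_points(data, sim_threshold=0.5, min_pts=1))
```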

Keywords: DBSCAN, clustering, Jaccard similarity, MinHash LSH, fires

Procedia PDF Downloads 5
370 Quality of Service Based Routing Algorithm for Real Time Applications in MANETs Using Ant Colony and Fuzzy Logic

Authors: Farahnaz Karami

Abstract:

Routing is an important and challenging task in mobile ad hoc networks due to node mobility, lack of central control, unstable links, and limited resources. Ant colony optimization has been found to be an attractive technique for routing in Mobile Ad Hoc Networks (MANETs). However, existing swarm intelligence based routing protocols find an optimal path by considering only one or two route selection metrics, without considering correlations among these parameters, which makes them unsuitable on their own for routing real-time applications. Fuzzy logic can combine multiple route selection parameters that contain uncertain or imprecise information, but it does not naturally provide the multipath routing needed for load balancing. The objective of this paper is to design a routing algorithm using fuzzy logic and ant colony optimization that can solve some routing problems in mobile ad hoc networks, such as optimizing node energy consumption to increase network lifetime, reducing the link failure rate to increase packet delivery reliability, and providing load balancing to optimize available bandwidth. In the proposed algorithm, the path information is given to the fuzzy inference system by the ants. Based on the available path information and the parameters required for quality of service (QoS), the fuzzy cost of each path is calculated and the optimal paths are selected. NS2.35 simulation tools are used for simulation, and the results are compared and evaluated against the newest QoS-based algorithms in MANETs according to the packet delivery ratio, end-to-end delay, and routing overhead ratio criteria. The simulation results show significant improvement in the performance of these networks in terms of decreased end-to-end delay and routing overhead ratio, and increased packet delivery ratio.
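
A hedged sketch of the aggregation step described above: ants report per-path metrics, and a simple fuzzy-style combination turns them into a single path cost. The membership shapes and weights are illustrative assumptions, not the paper's actual rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_path_cost(delay_ms, residual_energy, hop_count):
    bad_delay  = tri(delay_ms, 20, 120, 250)      # degree of "high delay"
    low_energy = 1.0 - min(residual_energy, 1.0)  # normalized energy deficit
    long_route = tri(hop_count, 2, 8, 15)         # degree of "long path"
    # weighted fuzzy aggregation; lower cost = better QoS path
    return 0.5 * bad_delay + 0.3 * low_energy + 0.2 * long_route

paths = {"P1": (40, 0.9, 3), "P2": (150, 0.4, 9)}
best = min(paths, key=lambda k: fuzzy_path_cost(*paths[k]))
print(best)  # ants would then reinforce pheromone on the cheaper path
```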

Keywords: mobile ad hoc networks, routing, quality of service, ant colony, fuzzy logic

Procedia PDF Downloads 64
369 Improving Chest X-Ray Disease Detection with Enhanced Data Augmentation Using Novel Approach of Diverse Conditional Wasserstein Generative Adversarial Networks

Authors: Malik Muhammad Arslan, Muneeb Ullah, Dai Shihan, Daniyal Haider, Xiaodong Yang

Abstract:

Chest X-rays are instrumental in the detection and monitoring of a wide array of diseases, including infections such as COVID-19, tuberculosis, and pneumonia, as well as lung cancer and various cardiac and pulmonary conditions. To enhance the accuracy of diagnosis, artificial intelligence (AI) algorithms, particularly deep learning models like Convolutional Neural Networks (CNNs), are employed. However, these deep learning models demand a substantial and varied dataset to attain optimal precision. Generative Adversarial Networks (GANs) can be employed to create new data, thereby supplementing the existing dataset and enhancing the accuracy of deep learning models. Nevertheless, GANs have their limitations, such as issues related to stability, convergence, and the ability to distinguish between authentic and fabricated data. In order to overcome these challenges and advance the detection and classification of normal and abnormal CXR images, this study introduces a distinctive technique known as DCWGAN (Diverse Conditional Wasserstein GAN) for generating synthetic chest X-ray (CXR) images. The study evaluates the effectiveness of the DCWGAN technique using the ResNet50 model and compares its results with those obtained using the traditional GAN approach. The findings reveal that the ResNet50 model trained on the DCWGAN-generated dataset outperformed the model trained on the classic GAN-generated dataset. Specifically, the ResNet50 model utilizing DCWGAN synthetic images achieved impressive performance metrics with an accuracy of 0.961, precision of 0.955, recall of 0.970, and F1-measure of 0.963. These results indicate the promising potential of this approach for the early detection of diseases in CXR images.
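
For orientation, a minimal conditional Wasserstein critic update (with weight clipping, in PyTorch) is sketched below; this is a generic illustration of the WGAN ingredient, not the authors' DCWGAN, and the network sizes and label conditioning are assumptions.

```python
import torch
import torch.nn as nn

class Critic(nn.Module):
    def __init__(self, img_dim=64 * 64, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(n_classes, 32)   # normal/abnormal CXR label
        self.net = nn.Sequential(
            nn.Linear(img_dim + 32, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),  # unbounded score; no sigmoid in WGAN
        )

    def forward(self, x, y):
        return self.net(torch.cat([x.flatten(1), self.embed(y)], dim=1))

def critic_step(critic, opt, real, fake, labels, clip=0.01):
    opt.zero_grad()
    # maximize E[D(real)] - E[D(fake)]  ==  minimize the negation
    loss = -(critic(real, labels).mean() - critic(fake.detach(), labels).mean())
    loss.backward()
    opt.step()
    for p in critic.parameters():  # crude Lipschitz constraint via clipping
        p.data.clamp_(-clip, clip)
    return loss.item()
```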

Keywords: CNN, classification, deep learning, GAN, ResNet50

Procedia PDF Downloads 88
368 Bridging the Healthcare Access Gap with Artificial Intelligence

Authors: Moshmi Sangavarapu

Abstract:

The US healthcare industry has undergone tremendous digital transformation in recent years, but critical care access for lower-income ethnic groups is still in its nascency. This population has historically shown substantial hesitation to seek any medical assistance. While the lack of sufficient financial resources plays a critical role, the existing cultural and knowledge barriers also contribute significantly to widening the access gap. It is imperative to break these barriers to ensure timely access to therapeutic procedures that can save lives. Based on ongoing research, healthcare access barriers can be best addressed by tapping the untapped potential of caregiver communities first. They play a critical role in patients’ diagnoses, building healthcare knowledge and instilling confidence in required therapeutic procedures. Recent technological advancements have opened many avenues by developing smart ways of reaching the large caregiver community. A digitized go-to-market strategy featuring connected media coupled with smart IoT devices and geo-location targeting can be collectively leveraged to reach this key audience group. AI/ML algorithms can be thoroughly trained to identify relevant data signals from users' location and browsing behavior and determine useful marketing touchpoints. The web behavior can be further assimilated with natural language processing to identify contextually relevant interest topics and decipher potential caregivers on digital avenues to serve that brand message. In conclusion, grasping the true health access journey of any lower-income ethnic group is important to design beneficial touchpoints that can alleviate patients’ concerns and allow them to break their own access barriers and opt for timely and quality healthcare.

Keywords: healthcare access, market access, diversity barriers, patient journey

Procedia PDF Downloads 54
367 Curvature-Based Methods for Automatic Coarse and Fine Registration in Dimensional Metrology

Authors: Rindra Rantoson, Hichem Nouira, Nabil Anwer, Charyar Mehdi-Souzani

Abstract:

Multiple measurements by means of various data acquisition systems are generally required to measure the shape of freeform workpieces for accuracy, reliability and holistic coverage. The obtained data are aligned and fused into a common coordinate system within a registration technique involving coarse and fine registration. Standardized iterative methods have been established for fine registration, such as Iterative Closest Points (ICP) and its variants. For coarse registration, no conventional method has been adopted yet, despite a significant number of techniques developed in the literature to supply an automatic rough matching between data sets. Two main issues are addressed in this paper: coarse registration and fine registration. For coarse registration, two novel automated methods based on the exploitation of discrete curvatures are presented: an enhanced Hough Transformation (HT) and an improved RANSAC transformation. The use of curvature features in both methods aims to reduce computational cost. For fine registration, a new variant of the ICP method is proposed in order to reduce registration error using curvature parameters. A specific distance considering curvature similarity has been combined with the Euclidean distance to define the distance criterion used for correspondence searching. Additionally, the objective function has been improved by combining the point-to-point (P-P) minimization and the point-to-plane (P-Pl) minimization with automatic weights. These weights are determined from the curvature features calculated beforehand at each point of the workpiece surface. The algorithms are applied to simulated and real data produced by a computed tomography (CT) system. The obtained results reveal the benefit of the proposed novel curvature-based registration methods.
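
A small sketch of the combined correspondence criterion described above: Euclidean distance blended with a curvature-similarity term. The weighting parameter lam is an assumed tuning knob, not a value from the paper.

```python
import numpy as np

def combined_distance(p, q, kp, kq, lam=0.5):
    """p, q: 3D points; kp, kq: discrete (e.g. mean) curvatures at p and q."""
    d_euclid = np.linalg.norm(p - q)
    d_curv = abs(kp - kq)          # curvature-similarity penalty
    return d_euclid + lam * d_curv

def nearest_correspondence(p, kp, cloud, curvatures, lam=0.5):
    """Pick the model point minimizing the curvature-aware distance."""
    costs = [combined_distance(p, q, kp, kq, lam)
             for q, kq in zip(cloud, curvatures)]
    return int(np.argmin(costs))
```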

Keywords: discrete curvature, RANSAC transformation, Hough transformation, coarse registration, ICP variant, point-to-point and point-to-plane minimization combination, computed tomography

Procedia PDF Downloads 424
366 Vibrational Spectra and Nonlinear Optical Investigations of a Chalcone Derivative (2E)-3-[4-(Methylsulfanyl) Phenyl]-1-(3-Bromophenyl) Prop-2-En-1-One

Authors: Amit Kumar, Archana Gupta, Poonam Tandon, E. D. D’Silva

Abstract:

Nonlinear optical (NLO) materials are the key materials for the fast processing of information and optical data storage applications. In the last decade, materials showing nonlinear optical properties have been the object of increasing attention from both experimental and computational points of view. Chalcones are one of the most important classes of cross-conjugated NLO chromophores that are reported to exhibit good SHG efficiency and ultrafast optical nonlinearities, and they are easily crystallizable. The basic structure of chalcones is based on the π-conjugated system in which two aromatic rings are connected by a three-carbon α,β-unsaturated carbonyl system. Due to the overlap of π orbitals, delocalization of the electronic charge distribution leads to a high mobility of the electron density. On a molecular scale, the extent of charge transfer across the NLO chromophore determines the level of SHG output. Hence, functionalizing both ends of the π-bond system with appropriate electron donor and acceptor groups can enhance the asymmetric electronic distribution in either or both the ground and excited states, leading to an increased optical nonlinearity. In this research, an experimental and theoretical study on the structure and vibrations of (2E)-3-[4-(methylsulfanyl) phenyl]-1-(3-bromophenyl) prop-2-en-1-one (3Br4MSP) is presented. The FT-IR and FT-Raman spectra of the NLO material in the solid phase have been recorded. Density functional theory (DFT) calculations at the B3LYP level with the 6-311++G(d,p) basis set were carried out to study the equilibrium geometry, vibrational wavenumbers, infrared absorbance and Raman scattering activities. The interpretation of vibrational features (normal mode assignments, for instance) receives invaluable aid from DFT calculations, which provide a quantum-mechanical description of the electronic energies and forces involved. Perturbation theory allows one to obtain the vibrational normal modes by estimating the derivatives of the Kohn-Sham energy with respect to atomic displacements. The molecular hyperpolarizability β plays a chief role in the NLO properties, and a systematic study on β has been carried out. Furthermore, the first-order hyperpolarizability (β) and the related properties, such as the dipole moment (μ) and polarizability (α), of the title molecule are evaluated by the Finite Field (FF) approach. The electronic α and β of the studied molecule are 41.907×10⁻²⁴ and 79.035×10⁻²⁴ e.s.u., respectively, indicating that 3Br4MSP can be used as a good nonlinear optical material.
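
As a worked illustration of the Finite Field approach mentioned above, numerical field derivatives of the total energy give μ, α, and β along one axis. Here energy(F) is a placeholder for a quantum-chemistry single-point calculation, and the toy energy surface is an assumption used only to exercise the formulas.

```python
def finite_field_props(energy, F=0.001):
    """Central-difference estimates of the dipole, polarizability and first
    hyperpolarizability along one Cartesian direction (atomic units)."""
    E0 = energy(0.0)
    Ep, Em = energy(+F), energy(-F)
    E2p, E2m = energy(+2 * F), energy(-2 * F)
    mu    = -(Ep - Em) / (2 * F)                          # -dE/dF
    alpha = -(Ep - 2 * E0 + Em) / F**2                    # -d2E/dF2
    beta  = -(E2p - 2 * Ep + 2 * Em - E2m) / (2 * F**3)   # -d3E/dF3
    return mu, alpha, beta

# toy quadratic-plus-cubic energy surface just to check the formulas
props = finite_field_props(lambda F: -0.5 * 10 * F**2 - (1 / 6) * 50 * F**3)
print(props)  # mu ~ 0, alpha ~ 10, beta ~ 50
```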

Keywords: DFT, MEP, NLO, vibrational spectra

Procedia PDF Downloads 221
365 The Impact of Intelligent Control Systems on Biomedical Engineering and Research

Authors: Melkamu Tadesse Getachew

Abstract:

Intelligent control systems have revolutionized biomedical engineering, advancing research and enhancing medical practice. This review paper examines the impact of intelligent control on various aspects of biomedical engineering. It analyzes how these systems enhance precision and accuracy in biomedical instrumentation, improving diagnostics, monitoring, and treatment. Integration challenges are addressed, and potential solutions are proposed. The paper also investigates the optimization of drug delivery systems through intelligent control. It explores how intelligent systems contribute to precise dosing, targeted drug release, and personalized medicine. Challenges related to controlled drug release and patient variability are discussed, along with potential avenues for overcoming them. The algorithms used in intelligent control systems for biomedical control are also compared. The implications of intelligent control in computational and systems biology are explored, showcasing how these systems enable enhanced analysis and prediction of complex biological processes. Challenges such as interpretability, human-machine interaction, and machine reliability are examined, along with potential solutions. Intelligent control in biomedical engineering also plays a crucial role in risk management during surgical operations. This section demonstrates how intelligent systems improve patient safety and surgical outcomes when integrated into surgical robots, augmented reality, and preoperative planning. The challenges associated with these implementations and potential solutions are discussed in detail. In summary, this review paper comprehensively explores the widespread impact of intelligent control on biomedical engineering, suggesting a promising future for addressing human health issues. It discusses application areas, challenges, and potential solutions, highlighting the transformative potential of these systems in advancing research and improving medical practice.

Keywords: intelligent control systems, biomedical instrumentation, drug delivery systems, robotic surgical instruments, computational monitoring and modeling

Procedia PDF Downloads 44
364 The Implications of Technological Advancements on the Constitutional Principles of Contract Law

Authors: Laura Çami (Vorpsi), Xhon Skënderi

Abstract:

In today's rapidly evolving technological landscape, the traditional principles of contract law are facing significant challenges. The emergence of new technologies, such as electronic signatures, smart contracts, and online dispute resolution mechanisms, is transforming the way contracts are formed, interpreted, and enforced. This paper examines the implications of these technological advancements on the constitutional principles of contract law. One of the fundamental principles of contract law is freedom of contract, which ensures that parties have the autonomy to negotiate and enter into contracts as they see fit. However, the use of technology in the contracting process has the potential to disrupt this principle. For example, online platforms and marketplaces often offer standard-form contracts, which may not reflect the specific needs or interests of individual parties. This raises questions about the equality of bargaining power between parties and the extent to which parties are truly free to negotiate the terms of their contracts. Another important principle of contract law is the requirement of consideration, which requires that each party receive something of value in exchange for their promise. The use of digital assets, such as cryptocurrencies, has created new challenges in determining what constitutes valuable consideration in a contract. Due to the ambiguity in this area, disagreements about the legality and enforceability of such contracts may arise. Furthermore, the use of technology in dispute resolution mechanisms, such as online arbitration and mediation, may raise concerns about due process and access to justice. The use of algorithms and artificial intelligence to determine the outcome of disputes may also raise questions about the impartiality and fairness of the process. Finally, it should be noted that technological advancements have many different and complex effects on the fundamental constitutional foundations of contract law. As technology continues to evolve, it will be important for policymakers and legal practitioners to consider the potential impacts on contract law and to ensure that the principles of fairness, equality, and access to justice are preserved in the contracting process.

Keywords: technological advancements, constitutional principles, contract law, smart contracts, online dispute resolution, freedom of contract

Procedia PDF Downloads 150
363 Modelling, Assessment, and Optimisation of Rules for Selected Umgeni Water Distribution Systems

Authors: Khanyisile Mnguni, Muthukrishnavellaisamy Kumarasamy, Jeff C. Smithers

Abstract:

Umgeni Water is a water board that supplies most parts of KwaZulu-Natal with bulk potable water. Currently, Umgeni Water runs its distribution system based on required reservoir levels and demands and does not consider the energy cost at different times of the day, the number of pump switches, or background leakages. Including these constraints can reduce operational cost, energy usage, and leakages, and increase performance. Optimising pump schedules can reduce energy usage and costs while adhering to hydraulic and operational constraints. Umgeni Water has installed online hydraulic software, WaterNet Advisor, that allows running different operational scenarios prior to implementation in order to optimise the distribution system. This study will investigate operational scenarios using optimisation techniques and WaterNet Advisor for a local water distribution system. Based on studies reported in the literature, introducing pump scheduling optimisation can reduce energy usage by approximately 30% without any change in infrastructure. Including tariff structures in an optimisation problem can reduce pumping costs by 15%, while including leakages decreases cost by 10%, and the pressure drop in the system can be up to 12 m. Genetic optimisation algorithms are widely used due to their ability to solve nonlinear, non-convex, and mixed-integer problems. Other methods, such as branch-and-bound linear programming, have also been used successfully. A suitable optimisation method will be chosen based on its efficiency. The objective of the study is to reduce energy usage, operational cost, and leakages, and the feasibility of the optimal solution will be checked using WaterNet Advisor. This study will provide an overview of the optimisation of hydraulic networks and the progress made to date in multi-objective optimisation for a selected sub-system operated by Umgeni Water.
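
A minimal genetic-algorithm sketch of the pump-scheduling idea above: a 24-bit chromosome encodes an hourly on/off plan, and the fitness penalizes tariff-weighted energy, pump switches, and unmet pumping hours. The tariffs, pump rating, and penalties are illustrative, and the hydraulic feasibility check that WaterNet Advisor would perform is omitted.

```python
import random

TARIFF = [0.6 if 6 <= h < 22 else 0.3 for h in range(24)]  # peak/off-peak $/kWh
PUMP_KW = 75.0

def cost(schedule):
    energy = sum(PUMP_KW * TARIFF[h] for h, on in enumerate(schedule) if on)
    switches = sum(schedule[h] != schedule[h - 1] for h in range(1, 24))
    shortfall = max(0, 12 - sum(schedule))     # assume ~12 pumping hours needed
    return energy + 5.0 * switches + 1000.0 * shortfall

def evolve(pop_size=50, generations=200):
    pop = [[random.randint(0, 1) for _ in range(24)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]           # keep the cheaper half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, 23)
            child = a[:cut] + b[cut:]          # one-point crossover
            i = random.randrange(24)
            child[i] ^= random.random() < 0.05 # occasional bit-flip mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

print(evolve())  # cheapest feasible on/off plan found
```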

Keywords: energy usage, pump scheduling, WaterNet Advisor, leakages

Procedia PDF Downloads 92
362 VeriFy: A Solution to Implement Autonomy Safely and According to the Rules

Authors: Michael Naderhirn, Marco Pavone

Abstract:

Problem statement, motivation, and aim of work: So far, control algorithms have been developed by control engineers in such a way that the controller is shown to fit a specification by testing. When it comes to the certification of an autonomous car in highly complex scenarios, the challenge is much greater, since such a controller must mathematically guarantee that it implements the rules of the road while also guaranteeing aspects like safety and real-time executability. What if it becomes possible to solve this demanding problem by combining formal verification and system theory? The aim of this work is to present a workflow that solves the above-mentioned problem. Summary of the presented results / main outcomes: We show the usage of an English-like language to transform the rules of the road into system specifications for an autonomous car. The language-based specifications are used to define system functions and interfaces. Based on these, a formal model is developed that correctly captures the specifications. On the other side, a mathematical model describing the system dynamics is used to calculate the system's reachability set, which is further used to determine the system input boundaries. Then a motion planning algorithm is applied inside the system boundaries, in combination with the formal specification model, to find an optimized trajectory that satisfies the specifications. The result is a control strategy that can be applied in real time, independent of the scenario, with a mathematical guarantee of satisfying a predefined specification. We demonstrate the applicability of the method in simulated driving scenarios and a potential certification. Originality, significance, and benefit: To the authors' best knowledge, it is the first time that it is possible to show an automated workflow that combines a specification in an English-like language and a mathematical model in a formally verified way to synthesize a controller for potential real-time applications like autonomous driving.
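
A heavily simplified sketch of the reachability ingredient described above: forward propagation of an axis-aligned state box for discretized linear dynamics with a bounded input. Real verification tools use tighter set representations (zonotopes, support functions); this toy double-integrator example is only to fix ideas.

```python
import numpy as np

def step_box(A, B, lo, hi, u_lo, u_hi):
    """Propagate the box [lo, hi] one step using interval arithmetic per row."""
    new_lo, new_hi = np.zeros_like(lo), np.zeros_like(hi)
    for i in range(len(lo)):
        for j in range(len(lo)):
            a = A[i, j]
            new_lo[i] += min(a * lo[j], a * hi[j])
            new_hi[i] += max(a * lo[j], a * hi[j])
        for j in range(B.shape[1]):
            b = B[i, j]
            new_lo[i] += min(b * u_lo[j], b * u_hi[j])
            new_hi[i] += max(b * u_lo[j], b * u_hi[j])
    return new_lo, new_hi

# double integrator (position, velocity), dt = 0.1, acceleration in [-1, 1]
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
lo, hi = np.array([0.0, 0.0]), np.array([0.1, 0.1])
for _ in range(10):
    lo, hi = step_box(A, B, lo, hi, np.array([-1.0]), np.array([1.0]))
print(lo, hi)  # over-approximated reachable box after 1 s
```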

Keywords: formal system verification, reachability, real time controller, hybrid system

Procedia PDF Downloads 241
361 A Personality-Based Behavioral Analysis on eSports

Authors: Halkiopoulos Constantinos, Gkintoni Evgenia, Koutsopoulou Ioanna, Antonopoulou Hera

Abstract:

E-sports and e-gaming have emerged in recent years as internet use has become universal, and e-gamers are the new reality in our homes. The excessive involvement of young adults with e-sports has already been revealed, and the adverse consequences have been reported in research in the past few years, but the issue has not been fully studied yet. The present research was conducted in Greece and studies the psychological profile of video game players, providing information on the personality traits, habits and emotional status that affect online gamers' behaviors, in order to help professionals and policy makers address the problem. Three standardized self-report questionnaires were administered to participants, who were young male and female adults aged 19-26 years. The Profile of Mood States (POMS) scale was used to evaluate people's perceptions of their everyday life mood; the personality features that can be traced back to people's habits and anticipated reactions were measured by the Eysenck Personality Questionnaire (EPQ); and the Trait Emotional Intelligence Questionnaire (TEIQue) was used to measure which cognitive parameters (gamers' beliefs) and emotional parameters (gamers' emotional abilities) mainly affected and predicted gamers' behaviors, leisure-time activities and gaming behaviors. Data mining techniques, implemented with machine learning algorithms in the software package R, were used to analyze the data. The research findings attempt to characterize the effect of personality traits, emotional status and emotional intelligence on, and their correlation with, e-sports and gamers' behaviors, and to help policy makers and stakeholders take action, shape social policy and prevent the adverse consequences on young adults. The need for further research, prevention and treatment strategies is also addressed.

Keywords: e-sports, e-gamers, personality traits, POMS, emotional intelligence, data mining, R

Procedia PDF Downloads 231
360 One-Class Classification Approach Using Fukunaga-Koontz Transform and Selective Multiple Kernel Learning

Authors: Abdullah Bal

Abstract:

This paper presents a one-class classification (OCC) technique based on the Fukunaga-Koontz Transform (FKT) for binary classification problems. The FKT is originally a powerful tool for feature selection and ordering in two-class problems. To utilize the standard FKT for the data domain description problem (i.e., one-class classification), in this paper a set of non-class samples, lying outside the boundary formed by the positive (target) class with its limited training data, has been constructed synthetically. The tunnel-like decision boundary around the upper and lower borders of the target class samples has been designed using the statistical properties of the feature vectors belonging to the training data. To capture higher-order statistics of the data and increase discrimination ability, the proposed method, termed one-class FKT (OC-FKT), has been extended to its nonlinear version via kernel machines, referred to as OC-KFKT for short. Multiple kernel learning (MKL) is a family of machine learning methods that tries to find an optimal combination of a set of sub-kernels to achieve a better result. However, the discriminative ability of some of the base kernels may be low, and an OC-KFKT designed with this type of kernel leads to unsatisfactory classification performance. To address this problem, the quality of the sub-kernels should be evaluated, and the weak kernels must be discarded before the final decision-making process. MKL/OC-FKT and selective MKL/OC-FKT frameworks have been designed, inspired by ensemble learning (EL), to weight and then select the sub-classifiers using the discriminability and diversity measured by eigenvalue ratios. The eigenvalue ratios have been assessed based on their regions on the FKT subspaces. Comparative experiments, performed on various low- and high-dimensional data against state-of-the-art algorithms, confirm the effectiveness of our techniques, especially in the case of small sample size (SSS) conditions.
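
For reference, a sketch of the standard two-class Fukunaga-Koontz Transform that OC-FKT builds on: after jointly whitening the summed scatter, both classes share eigenvectors whose eigenvalue pairs sum to one, so a direction dominant for one class is weak for the other. The synthetic data are placeholders.

```python
import numpy as np

def fkt(X1, X2, eps=1e-10):
    """X1, X2: (n_samples, n_features) arrays for target and non-class data."""
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)
    lam, Phi = np.linalg.eigh(S1 + S2)
    keep = lam > eps                           # drop null directions
    T = Phi[:, keep] / np.sqrt(lam[keep])      # joint whitening transform
    S1t = T.T @ S1 @ T                         # now S1t + S2t = I
    w, V = np.linalg.eigh(S1t)
    return T @ V, w  # FKT basis and class-1 eigenvalues (class-2: 1 - w)

rng = np.random.default_rng(0)
basis, w = fkt(rng.normal(size=(200, 5)), rng.normal(size=(200, 5)) * 2)
print(np.round(w, 3))  # near 1 => discriminative for class 1, near 0 => class 2
```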

Keywords: ensemble methods, fukunaga-koontz transform, kernel-based methods, multiple kernel learning, one-class classification

Procedia PDF Downloads 21
359 Optimization of Traffic Agent Allocation for Minimizing Bus Rapid Transit Cost on Simplified Jakarta Network

Authors: Gloria Patricia Manurung

Abstract:

The Jakarta Bus Rapid Transit (BRT) system, which was established in 2009 to reduce private vehicle usage and ease the rush hour gridlock throughout the Greater Jakarta area, has failed to achieve its purpose. With private vehicle ownership gradually increasing and road space reduced by BRT lane construction, private vehicle users intuitively invade the exclusive BRT lanes, creating local traffic along the BRT network. The cost of travel on invaded BRT lanes becomes the same as on the rest of the road network, making the BRT, which is supposed to be the main public transportation in the city, unreliable. Efforts have been made to guard critical lanes against invasion by allocating traffic agents at several intersections, leading to improved congestion levels along the lanes. Given a set number of traffic agents, this study uses an analytical approach to find the best deployment strategy of traffic agents on a simplified Jakarta road network that minimizes the BRT link cost, which is expected to lead to an improvement in the BRT system's time reliability. A user-equilibrium traffic assignment model is used to reproduce the origin-destination demand flow on the network, and the optimum solution can conventionally be obtained with a brute-force algorithm. This method's main constraint is that the traffic assignment simulation time escalates exponentially with the number of agents and the network size. Our proposed heuristic and metaheuristic algorithms show only a linear increase in simulation time and result in a minimized BRT cost approaching that of brute-force optimization. Further analysis of the overall network link cost should be performed to see the impact of traffic agent deployment on the network system.
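
A sketch of a greedy heuristic in the spirit of the proposed approach: repeatedly place the next traffic agent at the intersection giving the largest drop in total BRT link cost. Here brt_cost() is a stand-in for a user-equilibrium assignment run, and the toy penalties are assumptions.

```python
def greedy_allocate(intersections, n_agents, brt_cost):
    """brt_cost(guarded: frozenset) -> total BRT cost under that deployment."""
    guarded = frozenset()
    for _ in range(n_agents):
        best = min(
            (i for i in intersections if i not in guarded),
            key=lambda i: brt_cost(guarded | {i}),
        )
        guarded = guarded | {best}
    return guarded

# toy cost: each guarded intersection removes an (assumed) invasion penalty
penalty = {"A": 120.0, "B": 300.0, "C": 80.0, "D": 210.0}
toy_cost = lambda g: 1000.0 - sum(penalty[i] for i in g)
print(greedy_allocate(penalty.keys(), 2, toy_cost))  # picks 'B' and 'D'
```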

Keywords: traffic assignment, user equilibrium, greedy algorithm, optimization

Procedia PDF Downloads 229
358 Numerical Solution of Portfolio Selecting Semi-Infinite Problem

Authors: Alina Fedossova, Jose Jorge Sierra Molina

Abstract:

Semi-infinite programming (SIP) problems are part of non-classical optimization: problems in which the number of variables is finite and the number of constraints is infinite. Most algorithms for semi-infinite programming problems reduce the semi-infinite problem to a finite one and solve it by classical methods of linear or nonlinear programming. Typically, some of the constraints or the objective function are nonlinear, so the problem often involves nonlinear programming. An investment portfolio is a set of instruments used to reach the specific purposes of investors. The risk of the entire portfolio may be less than the risks of the individual investments in the portfolio. For example, we could invest M euros in N shares for a specified period. Let y_i > 0 be the end-of-period return on each euro invested in stock i (i = 1, ..., N). The goal is to determine the amount x_i to be invested in stock i, i = 1, ..., N, such that we maximize the end-of-period value y^T x, where x = (x_1, ..., x_N) and y = (y_1, ..., y_N). For us, the optimal portfolio means the portfolio that is best in terms of the risk-return ratio and that meets the investor's goals and risk tolerance. Therefore, investment goals and risk appetite are the factors that influence the choice of an appropriate portfolio of assets. The investment returns are uncertain; thus we have a semi-infinite programming problem. We solve the semi-infinite optimization problem of portfolio selection using outer approximation methods. This approach can be considered a development of the Eaves-Zangwill method, applying a multi-start technique in all of the iterations to search for the relevant constraint parameters. The stochastic outer approximation method, successfully applied previously to robotics problems, Chebyshev approximation problems, air pollution and others, is based on the optimality criteria of quasi-optimal functions. As a result, we obtain a mathematical model and the optimal investment portfolio when the returns are not known from the beginning. Finally, we apply this algorithm to a specific case of a Colombian bank.
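
A compact sketch of the outer-approximation loop for a robust (worst-case) version of the portfolio problem above: start from a few scenarios, solve the finite linear program, add the currently most violated scenario, and repeat. The scenario-dependent return model y(s) is a toy assumption.

```python
import numpy as np
from scipy.optimize import linprog

M, N = 1.0, 3
# s in [0, 1] indexes the uncertain market state (assumed toy model)
y = lambda s: np.array([1.05 + 0.1 * np.sin(s),
                        1.02 + 0.05 * np.cos(s),
                        1.08 - 0.1 * s])

S = [0.0, 1.0]                                    # initial finite scenario set
for _ in range(20):
    # variables z = (x_1..x_N, t); minimize -t  <=>  maximize worst-case t
    c = np.r_[np.zeros(N), -1.0]
    A_ub = np.array([np.r_[-y(s), 1.0] for s in S])   # t - y(s)^T x <= 0
    b_ub = np.zeros(len(S))
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  A_eq=[np.r_[np.ones(N), 0.0]], b_eq=[M],   # budget constraint
                  bounds=[(0, None)] * N + [(None, None)])
    x, t = res.x[:N], res.x[N]
    grid = np.linspace(0.0, 1.0, 201)             # multi-start style search
    worst = grid[np.argmin([y(s) @ x for s in grid])]
    if y(worst) @ x >= t - 1e-9:
        break                                     # no violated constraint left
    S.append(worst)
print(np.round(x, 4), round(t, 4))
```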

Keywords: outer approximation methods, portfolio problem, semi-infinite programming, numerical solution

Procedia PDF Downloads 309
357 Optimizing the Residential Design Process Using Automated Technologies

Authors: Martin Georgiev, Milena Nanova, Damyan Damov

Abstract:

Architects, engineers, and developers need to analyse and implement a wide spectrum of data in different formats, if they want to produce viable residential developments. Usually, this data comes from a number of different sources and is not well structured. The main objective of this research project is to provide parametric tools working with real geodesic data that can generate residential solutions. Various codes, regulations and design constraints are described by variables and prioritized. In this way, we establish a common workflow for architects, geodesists, and other professionals involved in the building and investment process. This collaborative medium ensures that the generated design variants conform to various requirements, contributing to a more streamlined and informed decision-making process. The quantification of distinctive characteristics inherent to typical residential structures allows a systematic evaluation of the generated variants, focusing on factors crucial to designers, such as daylight simulation, circulation analysis, space utilization, view orientation, etc. Integrating real geodesic data offers a holistic view of the built environment, enhancing the accuracy and relevance of the design solutions. The use of generative algorithms and parametric models offers high productivity and flexibility of the design variants. It can be implemented in more conventional CAD and BIM workflow. Experts from different specialties can join their efforts, sharing a common digital workspace. In conclusion, our research demonstrates that a generative parametric approach based on real geodesic data and collaborative decision-making could be introduced in the early phases of the design process. This gives the designers powerful tools to explore diverse design possibilities, significantly improving the qualities of the building investment during its entire lifecycle.

Keywords: architectural design, residential buildings, urban development, geodesic data, generative design, parametric models, workflow optimization

Procedia PDF Downloads 52
356 Using Machine Learning to Classify Different Body Parts and Determine Healthiness

Authors: Zachary Pan

Abstract:

Our general mission is to solve the problem of classifying images into different body part types and deciding if each of them is healthy or not. However, for now, we will determine healthiness for only one-sixth of the body parts, specifically the chest: we will detect pneumonia in X-ray scans of chest images. With this type of AI, doctors can use it as a second opinion when they are taking CT or X-ray scans of their patients. Another advantage of using this machine learning classifier is that it has no human weaknesses like fatigue. The overall approach to this problem is to split the problem into two parts: first, classify the image, then determine if it is healthy. In order to classify the image into a specific body part class, the body parts dataset must be split into test and training sets. We can then use many models, like neural networks or logistic regression models, and fit them using the training set. Using the test set, we can obtain a realistic estimate of the accuracy the models will have on images in the real world, since these testing images have never been seen by the models before. In order to increase this testing accuracy, we can also apply complex algorithms to the models, like multiplicative weight update. For the second part of the problem, to determine if the body part is healthy, we can have another dataset consisting of healthy and non-healthy images of the specific body part and once again split it into test and training sets. We then use another neural network to train on the training set images and use the testing set to figure out its accuracy. We do this process only for the chest images. A major conclusion reached is that convolutional neural networks are the most reliable and accurate at image classification. In classifying the images, the logistic regression model, the neural network, neural networks with multiplicative weight update, neural networks with the black box algorithm, and the convolutional neural network achieved 96.83 percent, 97.33 percent, 97.83 percent, 96.67 percent, and 98.83 percent accuracy, respectively. On the other hand, the overall accuracy of the model that determines if the images are healthy or not is around 78.37 percent.
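
A minimal sketch of the split-train-evaluate workflow described above, using a logistic-regression baseline on placeholder flattened image vectors; the convolutional network the study favors would replace the estimator.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 28 * 28))       # stand-in for flattened scans
labels = rng.integers(0, 6, size=600)     # six body-part classes

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2,
                                          random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# held-out accuracy approximates real-world performance on unseen images
print("test accuracy:", clf.score(X_te, y_te))
```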

Keywords: body part, healthcare, machine learning, neural networks

Procedia PDF Downloads 103
355 O-Functionalized CNT Mediated CO Hydro-Deoxygenation and Chain Growth

Authors: K. Mondal, S. Talapatra, M. Terrones, S. Pokhrel, C. Frizzel, B. Sumpter, V. Meunier, A. L. Elias

Abstract:

Worldwide energy independence relies on the ability to leverage locally available resources for fuel production. Recently, syngas produced through the gasification of carbonaceous materials has provided a gateway to a host of processes for the production of various chemicals, including transportation fuels. The basis of the production of gasoline- and diesel-like fuels is the Fischer-Tropsch Synthesis (FTS) process: a catalyzed chemical reaction that converts a mixture of carbon monoxide (CO) and hydrogen (H2) into long-chain hydrocarbons. Until now, it has been argued that only transition metal catalysts (usually Co or Fe) are active toward CO hydrogenation and subsequent chain growth in the presence of hydrogen. In this paper, we demonstrate that carbon nanotube (CNT) surfaces are also capable of hydro-deoxygenating CO and producing long-chain hydrocarbons similar to those obtained through FTS, but with orders of magnitude higher conversion efficiencies than the present state-of-the-art FTS catalysts. We have used advanced experimental tools such as XPS and microscopy techniques to characterize the CNTs and identify C-O functional groups as the active sites for the enhanced catalytic activity. Furthermore, we have conducted quantum Density Functional Theory (DFT) calculations to confirm that C-O groups (inherent on CNT surfaces) could indeed be catalytically active toward the reduction of CO with H2 and capable of sustaining chain growth. The DFT calculations have shown that the kinetically and thermodynamically feasible routes for CO insertion and hydro-deoxygenation differ from those on transition metal catalysts. Experiments in a continuous-flow tubular reactor with various nearly metal-free CNTs have been carried out and the products analyzed. CNTs functionalized by various methods were evaluated under different conditions. Reactor tests revealed that hydrogen pre-treatment reduced the activity of the catalysts to negligible levels. Without the pre-treatment, the activity for CO conversion was found to be 7 µmol CO/g CNT/s. The O-functionalized samples showed activities greater than 85 µmol CO/g CNT/s, with nearly 100% conversion. Analyses show that CO hydro-deoxygenation occurred at the C-O/O-H functional groups. It was found that while the products were similar to FT products, differences in selectivities were observed, which, in turn, was the result of a different catalytic mechanism. These findings open a new paradigm for CNT-based hydrogenation catalysts and constitute a defining point for obtaining clean, earth-abundant, alternative fuels through the use of efficient and renewable catalysts.

Keywords: CNT, CO Hydrodeoxygenation, DFT, liquid fuels, XPS, XTL

Procedia PDF Downloads 347
354 Deep Learning Prediction of Residential Radon Health Risk in Canada and Sweden to Prevent Lung Cancer Among Non-Smokers

Authors: Selim M. Khan, Aaron A. Goodarzi, Joshua M. Taron, Tryggve Rönnqvist

Abstract:

Indoor air quality, a prime determinant of health, is strongly influenced by the presence of hazardous radon gas within the built environment. As a health issue, dangerously high indoor radon arose within the 20th century to become the second leading cause of lung cancer. As 21st-century building practices and human behaviors have captured, contained, and concentrated radon to yet higher and more hazardous levels, the issue is rapidly worsening in Canada. It is established that Canadians in the Prairies are the second most radon-exposed population in the world, with 1 in 6 residences experiencing 0.2-6.5 millisieverts (mSv) of radiation per week, whereas the Canadian Nuclear Safety Commission sets the maximum 5-year occupational limit for atomic workplace exposure at only 20 mSv. This situation is also deteriorating over time, with newer housing stocks containing higher levels of radon. Deep machine learning (LSTM) algorithms were applied to analyze multiple quantitative and qualitative features, determine the most important contributory factors, and predict radon levels in the known past (1990-2020) and the projected future (2021-2050). The findings showed gradual downward trends in Sweden, whereas levels in Canada would continue to climb from high to higher over time. The main contributory factors were found to be basement porosity, roof insulation depth, R-factor, and the air dynamics of the indoor environment related to human window-opening behaviour. Building codes must consider these factors to ensure adequate indoor ventilation and healthy living that can prevent lung cancer in non-smokers.
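
A hedged sketch of the sequence model described above: an LSTM regressor over yearly feature vectors (building metrics and behavior proxies) that outputs a radon estimate per year. The feature count, hidden size, and tensor shapes are assumptions.

```python
import torch
import torch.nn as nn

class RadonLSTM(nn.Module):
    def __init__(self, n_features=6, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, years, n_features)
        out, _ = self.lstm(x)
        return self.head(out).squeeze(-1)  # radon estimate for every year

model = RadonLSTM()
seq = torch.randn(8, 31, 6)                # 8 regions, 1990-2020 inclusive
print(model(seq).shape)                    # torch.Size([8, 31])
```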

Keywords: radon, building metrics, deep learning, LSTM prediction model, lung cancer, Canada, Sweden

Procedia PDF Downloads 112
353 Sentiment Analysis of Social Media Responses: A Comparative Study of the National Democratic Alliance (NDA) and the Indian National Developmental Inclusive Alliance (INDIA) during the Indian General Elections 2024

Authors: Pankaj Dhiman, Simranjeet Kaur

Abstract:

This research paper presents a comprehensive sentiment analysis of social media responses to videos on Facebook, YouTube, Twitter, and Instagram during the 2024 Indian general elections. The study focuses on the sentiment patterns of voters towards the National Democratic Alliance (NDA) and the Indian National Developmental Inclusive Alliance (INDIA) on these platforms. The analysis aims to understand the impact of social media on voter sentiment and its correlation with the election outcome. The study employed a mixed-methods approach, combining both quantitative and qualitative methods. A total of 200 posts from the final phase of the 2024 general election were analysed; the sentiment analysis was conducted using natural language processing (NLP) techniques, including sentiment dictionaries and machine learning algorithms. The results show that the NDA received significantly more positive sentiment responses across all platforms, with a positive sentiment score of 47% compared to INDIA's score of 38.98%. The analysis also revealed that Twitter and YouTube were the most influential platforms in shaping voter sentiment, with 60% of the total sentiment score coming from these two platforms. The study's findings suggest that social media sentiment analysis can be a valuable tool for understanding voter sentiment and predicting election outcomes. The results also highlight the importance of social media in shaping public opinion and the need for political parties to engage effectively with voters on these platforms. The study's implications are significant, as they indicate that social media can be a key factor in determining the outcome of elections. The findings also underscore the need for political parties to develop effective social media strategies to engage with voters and shape public opinion.
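
As an illustration of the scoring step, a lexicon-based sentiment pass (here NLTK's VADER, one of several possible sentiment dictionaries) over collected post texts, aggregated per alliance; the example posts are placeholders, not study data.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

posts = {
    "NDA":   ["Great rally today, very hopeful!", "Strong leadership."],
    "INDIA": ["Disappointed by the speech.", "Promising manifesto though."],
}
for alliance, texts in posts.items():
    scores = [sia.polarity_scores(t)["compound"] for t in texts]
    share_positive = sum(s > 0.05 for s in scores) / len(scores)
    print(alliance, f"{share_positive:.0%} positive")
```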

Keywords: Indian Elections-2024, NDA, INDIA, sentiment analysis, social media, democracy

Procedia PDF Downloads 52
352 Frequency Selective Filters for Estimating the Equivalent Circuit Parameters of Li-Ion Battery

Authors: Arpita Mondal, Aurobinda Routray, Sreeraj Puravankara, Rajashree Biswas

Abstract:

The most difficult part of designing a battery management system (BMS) is battery modeling. A good battery model can capture the dynamics, which helps in energy management through accurate model-based state estimation algorithms. So far, the most suitable and fruitful model is the equivalent circuit model (ECM). However, in real-time applications the model parameters are time-varying, changing with current, temperature, state of charge (SOC), and aging of the battery, and this has a great impact on the performance of the model. Therefore, to increase the performance of the equivalent circuit model, the parameter estimation has been carried out in the frequency domain. A battery is a very complex system, associated with various chemical reactions and heat generation, so it is very difficult to select the optimal model structure. If the model order is increased, the model accuracy improves, but the higher-order model tends toward over-parameterization and unfavorable prediction capability, while the model complexity increases enormously. In the time domain, it becomes difficult to solve higher-order differential equations as the model order increases. This problem can be resolved by frequency-domain analysis, where the overall computational problems due to ill-conditioning are reduced. In the frequency domain, several dominant frequencies can be found in the input as well as the output data. Selective frequency-domain estimation has been carried out, first by estimating the frequencies of the input and output by subspace decomposition, then by choosing the specific bands from the most dominant to the least, while carrying out least-squares, recursive least-squares and Kalman filter based parameter estimation. In this paper, a second-order battery model consisting of three resistors, two capacitors, and one SOC-controlled voltage source has been chosen. For model identification and validation, hybrid pulse power characterization (HPPC) tests have been carried out on a 2.6 Ah LiFePO₄ battery.
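
A sketch in the spirit of the paper's parameter estimation, though in the time domain rather than the frequency-selective form: fitting the two RC relaxation time constants of a second-order ECM from a post-pulse voltage transient. The data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def relaxation(t, v_inf, a1, tau1, a2, tau2):
    """Open-circuit voltage recovery of a 2nd-order RC network."""
    return v_inf + a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0, 600, 300)                     # seconds after the pulse
true = (3.30, 0.040, 12.0, 0.020, 150.0)
v = relaxation(t, *true) + np.random.default_rng(1).normal(0, 1e-4, t.size)

p0 = (3.3, 0.05, 10.0, 0.01, 100.0)              # rough initial guess
params, _ = curve_fit(relaxation, t, v, p0=p0)
print(np.round(params, 4))  # recovers v_inf, amplitudes, tau1, tau2
```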

Keywords: equivalent circuit model, frequency estimation, parameter estimation, subspace decomposition

Procedia PDF Downloads 150
351 Fuzzy Logic Classification Approach for Exponential Data Set in Health Care System for Predication of Future Data

Authors: Manish Pandey, Gurinderjit Kaur, Meenu Talwar, Sachin Chauhan, Jagbir Gill

Abstract:

Health-care management systems are of great interest because they provide a straightforward and fast management of all aspects relating to a patient, not necessarily medical. What is more, there are more and more cases of pathologies in which diagnosis and treatment can only be carried out by using medical imaging techniques. With an ever-increasing prevalence, medical images are directly acquired in, or converted into, digital form for their storage as well as subsequent retrieval and processing. Data mining is the process of extracting information from large data sets through the use of algorithms and techniques drawn from the fields of statistics, machine learning and database management systems. Forecasting is a prediction of what will occur in the future, and it is an uncertain process. Owing to this uncertainty, the accuracy of a forecast is as important as the outcome predicted by forecasting the independent variables. A forecast control should be used to establish whether the accuracy of the forecast is within satisfactory limits. Fuzzy regression methods have commonly been used to develop consumer preference models that correlate the engineering characteristics with consumer preferences regarding a new product; the consumer preference models provide a platform whereby product developers can decide the engineering characteristics in order to satisfy consumer preferences before developing the product. Recent analysis shows that these fuzzy regression methods are commonly used to model customer preferences. We propose testing the strength of an exponential regression model over a linear regression model.
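
A minimal sketch of the proposed comparison: fit both a linear and an exponential trend to a synthetic health-care time series and compare the coefficients of determination. The data are placeholders, not study data.

```python
import numpy as np

t = np.arange(1, 21)                       # e.g. months of patient records
y = 50 * np.exp(0.12 * t) + np.random.default_rng(3).normal(0, 10, t.size)

lin = np.polyfit(t, y, 1)                  # linear model y ~ a*t + b
expo = np.polyfit(t, np.log(y), 1)         # exponential fit via log-linearization

def r2(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)
    return 1 - ss_res / np.sum((y - y.mean()) ** 2)

print("linear      R2:", round(r2(y, np.polyval(lin, t)), 3))
print("exponential R2:", round(r2(y, np.exp(np.polyval(expo, t))), 3))
```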

Keywords: health-care management systems, fuzzy regression, data mining, forecasting, fuzzy membership function

Procedia PDF Downloads 279
350 Modelling the Impact of Installation of Heat Cost Allocators in District Heating Systems Using Machine Learning

Authors: Danica Maljkovic, Igor Balen, Bojana Dalbelo Basic

Abstract:

Following the EU Directive on Energy Efficiency, specifically Article 9, individual metering in district heating systems had to be introduced by the end of 2016. These provisions have been implemented in the member states' legal frameworks; Croatia is one of these states. The directive allows the installation of both heat metering devices and heat cost allocators. Mainly due to poor communication and public relations, a false image was created among the general public that heat cost allocators are devices that save energy. Although this notion is wrong, the aim of this work is to develop a model that precisely expresses the influence of installing heat cost allocators on potential energy savings in each unit within multifamily buildings. At the same time, in recent years the science of machine learning has gained wider application in various fields, as it has proven to give good results in cases where large amounts of data have to be processed with the aim of recognizing patterns and correlations among the relevant parameters, as well as in cases where the problem is too complex for human intelligence to solve. A particular machine learning method, the decision tree, has achieved an accuracy of over 92% in predicting general building consumption. In this paper, machine learning algorithms will be used to isolate the sole impact of the installation of heat cost allocators on a single building in multifamily houses connected to district heating systems. Special emphasis will be given to regression analysis, logistic regression, support vector machines, decision trees and the random forest method.
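
A sketch of the decision-tree idea referenced above: learn per-unit annual heat consumption from building and unit descriptors, including an allocator-installed flag whose learned effect can then be isolated by toggling that one input. All features and the consumption law behind the toy data are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(7)
n = 500
area  = rng.uniform(40, 120, n)            # unit floor area, m2
floor = rng.integers(1, 10, n)             # storey
alloc = rng.integers(0, 2, n)              # 1 = heat cost allocators installed
kwh = 120 * area * (1 - 0.1 * alloc) + rng.normal(0, 500, n)

X = np.c_[area, floor, alloc]
tree = DecisionTreeRegressor(max_depth=5).fit(X, kwh)

# isolate the allocator effect on one unit by toggling only that input
unit = np.array([[80.0, 3, 0]])
with_alloc = unit.copy()
with_alloc[0, 2] = 1
print("predicted saving:", tree.predict(unit)[0] - tree.predict(with_alloc)[0])
```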

Keywords: district heating, heat cost allocator, energy efficiency, machine learning, decision tree model, regression analysis, logistic regression, support vector machines, decision trees and random forest method

Procedia PDF Downloads 249
349 Online Dietary Management System

Authors: Kyle Yatich Terik, Collins Oduor

Abstract:

The current healthcare system has made healthcare more accessible and efficient through the use of information technology, by implementing computer algorithms that generate menus based on the diagnosis. While many systems like these have been created over the years, their main objective is to help healthy individuals calculate their calorie intake and assist them by providing food selections based on a pre-specified calorie target. Such applications have proven useful in some ways, but they are not suitable for monitoring, planning, and managing hospital patients, especially those in critical condition with specific dietary needs. The system addresses a number of objectives; the main objective is to design, develop and implement an efficient, user-friendly and interactive dietary management system. The specific design and development objectives include a monitoring feature for users based on graphs, system-generated reports for users, dietitians, and system administrators, a feature that allows users to measure their BMI (Body Mass Index), and a food template feature that guides the user on a balanced diet plan. In order to develop the system, further research was carried out in Nairobi County, Kenya, using online questionnaires as the preferred research design approach. The 44 responses highlighted the major challenges of the manual dietary system, which include the lack of easily accessible calorie information for food products and the expense of physically visiting a dietitian to create a tailored diet plan. In conclusion, the system has the potential to improve people's quality of life as a whole by providing a standard for healthy living and by giving individuals readily available knowledge through food templates that guide them and allow users to create their own diet plans consisting of a balanced diet.
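
A tiny sketch of the BMI feature mentioned above, using the standard WHO adult cut-offs.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight in kilograms over height in metres squared."""
    return weight_kg / height_m ** 2

def bmi_category(b: float) -> str:
    if b < 18.5:
        return "underweight"
    if b < 25.0:
        return "normal"
    if b < 30.0:
        return "overweight"
    return "obese"

b = bmi(70.0, 1.75)
print(round(b, 1), bmi_category(b))  # 22.9 normal
```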

Keywords: DMS, dietitian, patient, administrator

Procedia PDF Downloads 161
348 Source Identification Model Based on Label Propagation and Graph Ordinary Differential Equations

Authors: Fuyuan Ma, Yuhan Wang, Junhe Zhang, Ying Wang

Abstract:

Identifying the sources of information dissemination is a pivotal task in the study of collective behaviors in networks, enabling us to discern and intercept the critical pathways through which information propagates from its origins. This allows for the control of the information’s dissemination impact in its early stages. Numerous methods for source detection rely on pre-existing, underlying propagation models as prior knowledge. Current models that eschew prior knowledge attempt to harness label propagation algorithms to model the statistical characteristics of propagation states or employ Graph Neural Networks (GNNs) for deep reverse modeling of the diffusion process. These approaches are either deficient in modeling the propagation patterns of information or are constrained by the over-smoothing problem inherent in GNNs, which limits the stacking of sufficient model depth to excavate global propagation patterns. Consequently, we introduce the ODESI model. Initially, the model employs a label propagation algorithm to delineate the distribution density of infected states within a graph structure and extends the representation of infected states from integers to state vectors, which serve as the initial states of nodes. Subsequently, the model constructs a deep architecture based on GNNs-coupled Ordinary Differential Equations (ODEs) to model the global propagation patterns of continuous propagation processes. Addressing the challenges associated with solving ODEs on graphs, we approximate the analytical solutions to reduce computational costs. Finally, we conduct simulation experiments on two real-world social network datasets, and the results affirm the efficacy of our proposed ODESI model in source identification tasks.
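
A sketch of the label-propagation ingredient: starting from the observed infected nodes, iteratively average neighbor states to estimate the infection-density distribution that the graph-ODE stage would then consume. The graph and damping factor are illustrative.

```python
import numpy as np

def propagate(adj, infected, alpha=0.85, iters=50):
    """adj: (n, n) adjacency matrix; infected: indices of observed infections."""
    n = adj.shape[0]
    deg = np.maximum(adj.sum(axis=1), 1.0)
    P = adj / deg[:, None]                        # row-normalized transitions
    seed = np.zeros(n)
    seed[list(infected)] = 1.0
    x = seed.copy()
    for _ in range(iters):
        x = alpha * P.T @ x + (1 - alpha) * seed  # keep re-injecting the seeds
    return x / x.max()                            # normalized density estimate

adj = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
print(np.round(propagate(adj, infected=[0]), 3))  # density highest near source
```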

Keywords: source identification, ordinary differential equations, label propagation, complex networks

Procedia PDF Downloads 20
347 Predicting Radioactive Waste Glass Viscosity, Density and Dissolution with Machine Learning

Authors: Joseph Lillington, Tom Gout, Mike Harrison, Ian Farnan

Abstract:

The vitrification of high-level nuclear waste within borosilicate glass and its incorporation within a multi-barrier repository deep underground is widely accepted as the preferred disposal method. However, for this to happen, any safety case will require validation that the initially localized radionuclides will not be considerably released into the near/far-field. Therefore, accurate mechanistic models are necessary to predict glass dissolution, and these should be robust to a variety of incorporated waste species and leaching test conditions, particularly given substantial variations across international waste-streams. Here, machine learning is used to predict glass material properties (viscosity, density) and glass leaching model parameters from large-scale industrial data. A variety of different machine learning algorithms have been compared to assess performance. Density was predicted solely from composition, whereas viscosity additionally considered temperature. To predict suitable glass leaching model parameters, a large simulated dataset was created by coupling MATLAB and the chemical reactive-transport code HYTEC, considering the state-of-the-art GRAAL model (glass reactivity in allowance of the alteration layer). The trained models were then subsequently applied to the large-scale industrial, experimental data to identify potentially appropriate model parameters. Results indicate that ensemble methods can accurately predict viscosity as a function of temperature and composition across all three industrial datasets. Glass density prediction shows reliable learning performance with predictions primarily being within the experimental uncertainty of the test data. Furthermore, machine learning can predict glass dissolution model parameters behavior, demonstrating potential value in GRAAL model development and in assessing suitable model parameters for large-scale industrial glass dissolution data.
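
A sketch of the ensemble-method result above: a gradient-boosted regressor predicting log-viscosity from oxide fractions plus temperature. The composition columns and the viscosity law used to make the toy data are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(11)
n = 800
sio2, b2o3 = rng.uniform(0.4, 0.6, n), rng.uniform(0.1, 0.25, n)
T = rng.uniform(1200, 1500, n)                    # melt temperature, K
log_eta = -3 + 9000 * (0.5 + sio2 - b2o3) / T + rng.normal(0, 0.05, n)

X = np.c_[sio2, b2o3, T]
model = GradientBoostingRegressor().fit(X[:600], log_eta[:600])
pred = model.predict(X[600:])
rmse = float(np.sqrt(np.mean((pred - log_eta[600:]) ** 2)))
print("held-out RMSE (log Pa*s):", round(rmse, 3))
```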

Keywords: machine learning, predictive modelling, pattern recognition, radioactive waste glass

Procedia PDF Downloads 116
346 Comparative Study of Skeletonization and Radial Distance Methods for Automated Finger Enumeration

Authors: Mohammad Hossain Mohammadi, Saif Al Ameri, Sana Ziaei, Jinane Mounsef

Abstract:

Automated enumeration of the number of hand fingers is widely used in several motion gaming and distance control applications and is discussed in several published papers as a starting block for hand recognition systems. The automated finger enumeration technique should not only be accurate but must also respond quickly to a moving-picture input. The demands of high-performance video in motion games or distance control constrain the program's overall speed, as image processing software such as Matlab needs to produce results at high computation speeds. Since automated finger enumeration with minimum error and processing time is desired, a comparative study between two finger enumeration techniques is presented and analyzed in this paper. In the pre-processing stage, various image processing functions were applied to a real-time video input to obtain the final cleaned, auto-cropped image of the hand to be used by the two techniques. The first technique uses the known morphological tool of skeletonization and counts the skeleton's endpoints as fingers. The second technique uses a radial distance method, which obtains a one-dimensional hand representation, to enumerate the number of fingers. For both methods, the different steps of the algorithms are explained. Then, a comparative study analyzes the accuracy and speed of both techniques. Through experimental testing in different background conditions, it was observed that the radial distance method was more accurate and more responsive to a real-time video input than the skeletonization method. All test results were generated in Matlab and were based on displaying a human hand in three different orientations on top of a plain color background. Finally, the limitations surrounding the enumeration techniques are presented.
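
A sketch of the skeletonization technique compared above: thin the binary hand mask, then count skeleton endpoints (pixels with exactly one 8-neighbour) as fingertip candidates. A real pipeline would first segment and crop the hand, and would prune short spurs before counting.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def count_endpoints(mask: np.ndarray) -> int:
    skel = skeletonize(mask.astype(bool))
    kernel = np.ones((3, 3))
    kernel[1, 1] = 0                          # count only the 8 neighbours
    neighbours = convolve(skel.astype(int), kernel, mode="constant")
    return int(np.sum(skel & (neighbours == 1)))

# toy "three-finger" mask: three vertical strokes joined by a palm bar
mask = np.zeros((40, 40), int)
mask[30:35, 8:33] = 1                         # palm
for c in (8, 19, 30):
    mask[5:30, c:c + 3] = 1                   # fingers
print(count_endpoints(mask))                  # ~3 fingertip endpoints expected
```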

Keywords: comparative study, hand recognition, fingertip detection, skeletonization, radial distance, Matlab

Procedia PDF Downloads 382
345 Resonant Tunnelling Diode Output Characteristics Dependence on Structural Parameters: Simulations Based on Non-Equilibrium Green Functions

Authors: Saif Alomari

Abstract:

The paper aims at giving physical and mathematical descriptions of how the structural parameters of a resonant tunnelling diode (RTD) affect its output characteristics, specifically the values of the peak voltage, peak current, peak to valley current ratio (PVCR), and the differences between the peak and valley voltages and currents, ΔV and ΔI. A simulation-based approach using the Non-Equilibrium Green Function (NEGF) formalism in the Silvaco ATLAS simulator is employed to conduct a series of designed experiments. These experiments show how the doping concentration in the emitter and collector layers, their thicknesses, and the widths of the barriers and the quantum well influence the above-mentioned output characteristics. Each of these parameters was systematically varied while holding the others fixed in each set of experiments; factorial experiments are outside the scope of this work and will be investigated in future work. The physics involved in the operation of the device is thoroughly explained, and mathematical models based on curve fitting and the underlying physical principles are deduced. The models can be used to design devices with predictable output characteristics; such models were found to be absent in the literature that the author scanned. Results show that the doping concentration in each region affects the value of the peak voltage. Increasing the carrier concentration in the collector region shifts the peak to lower values, whereas increasing it in the emitter shifts the peak to higher values. In the collector's case, the shift is controlled either by the built-in potential resulting from the concentration gradient or by the conductivity enhancement in the collector. The shift to higher voltages is also found to be related to the location of the Fermi level. The thicknesses of these layers likewise play a role in the location of the peak: increasing the thickness of each region shifts the peak to higher values up to a characteristic length, beyond which the peak becomes independent of thickness. Finally, it is shown that the thickness of the barriers can be optimized for a particular well width to produce the highest PVCR or the highest ΔV and ΔI. The location of the peak voltage is important in optoelectronic applications of RTDs, where the operating point of the device is usually the peak voltage point. Furthermore, the PVCR, ΔV, and ΔI are of great importance for building RTD-based oscillators, as they affect the frequency response and output power of the oscillator.
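
A minimal sketch of the curve-fitting step, with invented numbers and an assumed logarithmic functional form (the paper's actual fitted models are not given in the abstract); it shows how simulated peak-voltage data could be regressed against collector doping:

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical NEGF outputs: collector doping (cm^-3) vs. peak voltage (V)
doping = np.array([1e17, 3e17, 1e18, 3e18, 1e19])
v_peak = np.array([0.92, 0.84, 0.75, 0.68, 0.60])  # placeholder values

def model(n_d, a, b):
    # Assumed logarithmic dependence, consistent with a built-in-potential effect
    return a + b * np.log10(n_d)

params, _ = curve_fit(model, doping, v_peak)
print("fitted (a, b):", params)
print("predicted peak voltage at 5e18 cm^-3:", model(5e18, *params))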

Keywords: peak to valley ratio, peak voltage shift, resonant tunneling diodes, structural parameters

Procedia PDF Downloads 142
344 Toward Indoor and Outdoor Surveillance Using an Improved Fast Background Subtraction Algorithm

Authors: El Harraj Abdeslam, Raissouni Naoufal

Abstract:

The detection of moving objects from video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most widely used approach for moving object detection and tracking is background subtraction, and many such algorithms have been proposed. However, they are sensitive to illumination changes, and the solutions proposed to bypass this problem are time consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach and focus mainly on the ability to detect moving objects in dynamic scenes, for possible applications in monitoring complex and restricted-access areas, where moving and motionless persons must be reliably detected. The approach consists of three main phases: compensating for illumination variance, background/foreground modeling, and morphological analysis for noise removal. We handle illumination changes using Contrast Limited Adaptive Histogram Equalization (CLAHE), which limits the contrast amplification of each pixel to a user-determined maximum. This mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence; then both are separately enhanced by applying CLAHE. To form multi-modal backgrounds, we model each channel of a pixel as a mixture of K Gaussians (K=5) using a Gaussian Mixture Model (GMM). Finally, we post-process the resulting binary foreground mask using morphological erosion and dilation to remove residual noise. For experimental testing, we used a standard dataset to challenge the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes.
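
A minimal sketch of the three-phase pipeline in Python with OpenCV (assumptions: OpenCV's built-in MOG2 subtractor stands in for the per-channel K=5 GMM described above, and "video.avi" is a placeholder input path):

import cv2

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

cap = cv2.VideoCapture("video.avi")  # placeholder input
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Phase 1: mitigate illumination changes with CLAHE on the luminance channel
    lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
    lab[:, :, 0] = clahe.apply(lab[:, :, 0])
    enhanced = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
    # Phase 2: Gaussian-mixture background/foreground modelling
    fg_mask = subtractor.apply(enhanced)
    # Phase 3: morphological erosion then dilation to suppress residual noise
    fg_mask = cv2.erode(fg_mask, kernel)
    fg_mask = cv2.dilate(fg_mask, kernel)
cap.release()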

Keywords: video surveillance, background subtraction, contrast limited histogram equalization, illumination invariance, object tracking, object detection, behavior understanding, dynamic scenes

Procedia PDF Downloads 256
343 Infrared Spectroscopy in Tandem with Machine Learning for Simultaneous Rapid Identification of Bacteria Isolated Directly from Patients' Urine Samples and Determination of Their Susceptibility to Antibiotics

Authors: Mahmoud Huleihel, George Abu-Aqil, Manal Suleiman, Klaris Riesenberg, Itshak Lapidot, Ahmad Salman

Abstract:

Urinary tract infections (UTIs) are considered to be the most common bacterial infections worldwide; they are caused mainly by Escherichia (E.) coli (about 80%), Klebsiella pneumoniae (about 10%), and Pseudomonas aeruginosa (about 6%). Although antibiotics are considered the most effective treatment for bacterial infectious diseases, most bacteria have unfortunately already developed resistance to the majority of the commonly available antibiotics. It is therefore crucial to identify the infecting bacterium and to determine its susceptibility to antibiotics in order to prescribe effective treatment. Classical methods are time consuming, requiring ~48 hours to determine bacterial susceptibility. Thus, it is highly urgent to develop a new method that can significantly reduce the time required both to identify the infecting bacterium at the species level and to diagnose its susceptibility to antibiotics. Fourier-Transform Infrared (FTIR) spectroscopy is well known as a sensitive and rapid method that can detect minor molecular changes in the bacterial genome associated with the development of resistance to antibiotics. The main goal of this study is to examine the potential of FTIR spectroscopy, in tandem with machine learning algorithms, to identify the infecting bacteria at the species level and to determine E. coli susceptibility to different antibiotics directly from patients' urine in about 30 minutes. To this end, 1600 different E. coli isolates were obtained from different patients' urine samples, measured by FTIR, and analyzed using different machine learning algorithms such as Random Forest, XGBoost, and CNN. We achieved 98% success in isolate-level identification and 89% accuracy in susceptibility determination.
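
A minimal sketch of the classification step, using synthetic spectra since the clinical FTIR data are not public; it mirrors the stated pattern of training a Random Forest on absorbance vectors to separate susceptible from resistant isolates:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_isolates, n_wavenumbers = 400, 600              # hypothetical dataset size
X = rng.normal(size=(n_isolates, n_wavenumbers))  # stand-in FTIR spectra
y = rng.integers(0, 2, n_isolates)                # 0 = susceptible, 1 = resistant
# Embed a weak class-dependent signal in one spectral band so learning is possible
X[y == 1, 100:120] += 0.4

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())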

Keywords: urinary tract infections (UTIs), E. coli, Klebsiella pneumoniae, Pseudomonas aeruginosa, bacteria, susceptibility to antibiotics, infrared microscopy, machine learning

Procedia PDF Downloads 170
342 Structural Design Optimization of Reinforced Thin-Walled Vessels under External Pressure Using Simulation and Machine Learning Classification Algorithm

Authors: Lydia Novozhilova, Vladimir Urazhdin

Abstract:

An optimization problem for reinforced thin-walled vessels under uniform external pressure is considered. Conventional approaches to optimization generally start with pre-defined geometric parameters of the vessel and then employ analytic or numeric calculations and/or experimental testing to verify functionality, such as stability under the projected conditions. The proposed approach consists of two steps. First, the feasibility domain is identified in the multidimensional parameter space; every point in the feasibility domain defines a design satisfying both geometric and functional constraints. Second, an objective function defined on this domain is formulated and optimized. The broad applicability of the suggested methodology is achieved by using the Support Vector Machine (SVM) classification algorithm of machine learning to identify the feasible design region. Training data for the SVM classifier are obtained using the Simulation package of SOLIDWORKS®. Based on these data, the SVM algorithm produces a curvilinear boundary separating the admissible and inadmissible sets of design parameters with a maximal margin. Optimization of the vessel parameters within the feasibility domain is then performed using standard algorithms for constrained optimization. As an example, optimization of a ring-stiffened closed cylindrical thin-walled vessel with semi-spherical caps under high external pressure is implemented. The von Mises stress criterion is used as the functional constraint, but any other stability constraint admitting a mathematical formulation can be incorporated into the proposed approach. The suggested methodology has good potential for reducing the design time needed to find optimal parameters of thin-walled vessels under uniform external pressure.
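
A minimal sketch of the two-step methodology on a toy two-parameter design space (assumptions: the feasibility rule, the mass-proxy objective, and all numbers are invented for illustration; the study's actual training data come from SOLIDWORKS® Simulation):

import numpy as np
from scipy.optimize import minimize
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Design parameters: wall thickness t (mm) and number of stiffening rings n
t = rng.uniform(2.0, 10.0, 300)
n = rng.uniform(2.0, 12.0, 300)
X = np.column_stack([t, n])
# Placeholder feasibility label standing in for a von Mises stress check
feasible = (t + 0.8 * n > 9.0).astype(int)

# Step 1: learn a maximal-margin feasibility boundary
svm = SVC(kernel="rbf", C=10.0).fit(X, feasible)

# Step 2: minimize a mass proxy subject to staying inside the learned domain
objective = lambda x: x[0] * x[1]
feasibility = {"type": "ineq",
               "fun": lambda x: svm.decision_function([x])[0]}  # >= 0 is feasible
res = minimize(objective, x0=[8.0, 8.0], constraints=[feasibility],
               bounds=[(2.0, 10.0), (2.0, 12.0)])
print("optimal (t, n):", res.x)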

Keywords: design parameters, feasibility domain, von Mises stress criterion, Support Vector Machine (SVM) classifier

Procedia PDF Downloads 327