Search results for: mathematical algorithms of targeting and persecution
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4269

459 Optimization for Autonomous Robotic Construction by Visual Guidance through Machine Learning

Authors: Yangzhi Li

Abstract:

Network-based information transfer and performance customization are now viable methods of digital industrial production in the era of Industry 4.0, and robot platforms and network platforms have grown more important in digital design and construction. The pressing need for novel building techniques is driven by a growing labor shortage and increased awareness of construction safety. Robotic approaches in construction research are regarded as an extension of operational and production tools. Several technologies related to autonomous robot recognition, including high-performance computing, physical system modeling, extensive sensor coordination, and deep learning on large datasets, have not yet been fully explored in intelligent construction, and relevant transdisciplinary theory and practice still show specific gaps. Optimizing high-performance computing and visual guidance technologies for autonomous recognition improves a robot's grasp of the scene and its capacity for autonomous operation. Camera calibration remains a serious issue for intelligent vision guidance of industrial robots, and industrial production imposes strict accuracy requirements on visual guidance and identification technologies; imprecise recognition directly degrades the effectiveness and quality of production, so the positioning precision of visual guidance and recognition must be strengthened. To facilitate the handling of complicated components, an approach for the visual recognition of parts using machine learning algorithms is proposed. The study identifies the position of target components by detecting boundary and corner information in a dense point cloud and determining the aspect ratio in accordance with the guidelines for the modularization of building components. To pick up and place components, the processing system assigns them to a common coordinate system based on their locations and postures; inclination detection on the RGB image, verified against the depth image, determines a component's current posture. Finally, a virtual environment model for the robot's obstacle-avoidance route is constructed from the point cloud information.
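
The abstract does not spell out the detection pipeline; as an illustration of one ingredient, a component's position, principal axes, and aspect ratio can be estimated from a dense point cloud with PCA. A minimal sketch, with synthetic data and invented names rather than anything from the paper:

```python
import numpy as np

def estimate_part_pose(points):
    """Estimate centroid, principal axes, and aspect ratio of a part
    from an (N, 3) point cloud using PCA of the centered points."""
    centroid = points.mean(axis=0)                 # component position
    centered = points - centroid
    _, _, axes = np.linalg.svd(centered, full_matrices=False)
    extents = np.ptp(centered @ axes.T, axis=0)    # side lengths along axes
    aspect_ratio = extents.max() / max(extents.min(), 1e-9)
    return centroid, axes, aspect_ratio

# Synthetic box-shaped component with a 2:1:0.5 edge ratio
rng = np.random.default_rng(0)
cloud = rng.uniform([-1.0, -0.5, -0.25], [1.0, 0.5, 0.25], size=(5000, 3))
pos, axes, ratio = estimate_part_pose(cloud)
print(pos, ratio)  # ratio ~ 4, i.e. longest over shortest edge
```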

Keywords: robotic construction, robotic assembly, visual guidance, machine learning

Procedia PDF Downloads 78
458 Kinetic Study of Physical Quality Changes on Jumbo Squid (Dosidicus gigas) Slices during Application High-Pressure Impregnation

Authors: Mario Perez-Won, Roberto Lemus-Mondaca, Fernanda Marin, Constanza Olivares

Abstract:

This study presents the simultaneous application of high hydrostatic pressure (HHP) and osmotic dehydration to jumbo squid (Dosidicus gigas) slices. Diffusion coefficients for both water and solids were improved by the process and were influenced by the pressure level. The working conditions were pressures of 100, 250, and 400 MPa plus atmospheric pressure (0.1 MPa), treatment times from 30 to 300 seconds, and a 15% NaCl concentration. The mathematical expressions used to simulate mass transfer of water and salt were the Newton, Henderson-Pabis, Page, and Weibull models; the Weibull and Henderson-Pabis models gave the best fit to the water and salt experimental data, respectively. Water diffusivity coefficients varied from 1.62 to 8.10x10⁻⁹ m²/s, whereas salt diffusivity varied from 14.18 to 36.07x10⁻⁹ m²/s over the selected conditions. As for the quality parameters studied within the experimental range, the 250 MPa treatment yielded samples with minimum hardness, whereas springiness, cohesiveness, and chewiness at the 100, 250, and 400 MPa treatments differed statistically from unpressurized samples. The colour parameter L* (lightness) increased, while b* (yellowish) and a* (reddish) decreased with increasing pressure level; samples thus presented a brighter aspect and a mildly cooked appearance. These results support the considerable potential of high hydrostatic pressure as a technique for the impregnation of compounds under high pressure.
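
The Weibull fit mentioned above can be reproduced with a standard nonlinear least-squares routine. A minimal sketch with hypothetical moisture-ratio data (the measured values below are illustrative, not the paper's):

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull(t, alpha, beta):
    # Weibull model for the dimensionless moisture ratio MR(t)
    return np.exp(-(t / alpha) ** beta)

t = np.array([30, 60, 120, 180, 240, 300], dtype=float)  # time, s
mr = np.array([0.91, 0.84, 0.72, 0.63, 0.56, 0.50])      # hypothetical MR

(alpha, beta), _ = curve_fit(weibull, t, mr, p0=(200.0, 1.0))
rmse = np.sqrt(np.mean((weibull(t, alpha, beta) - mr) ** 2))
print(f"alpha = {alpha:.0f} s, beta = {beta:.2f}, RMSE = {rmse:.4f}")
```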

Keywords: colour, diffusivity, high pressure, jumbo squid, modelling, texture

Procedia PDF Downloads 334
457 CyberSteer: Cyber-Human Approach for Safely Shaping Autonomous Robotic Behavior to Comply with Human Intention

Authors: Vinicius G. Goecks, Gregory M. Gremillion, William D. Nothwang

Abstract:

Modern approaches to training intelligent agents rely on prolonged training sessions, large amounts of input data, and multiple interactions with the environment. This restricts the application of these learning algorithms in robotics and real-world applications, in which there is low tolerance for inadequate actions, interactions are expensive, and real-time processing and action are required. This paper addresses this issue by introducing CyberSteer, a novel approach to efficiently designing intrinsic reward functions based on human intention to guide deep reinforcement learning agents with no environment-dependent rewards. CyberSteer uses non-expert human operators for initial demonstration of a given task or desired behavior. The trajectories collected are used to train a behavior-cloning deep neural network that asynchronously runs in the background and suggests actions to the deep reinforcement learning module. An intrinsic reward is computed based on the similarity between the actions suggested and those taken by the deep reinforcement learning algorithm commanding the agent. This intrinsic reward can also be reshaped through additional human demonstration or critique. This approach removes the need for environment-dependent or hand-engineered rewards while still being able to safely shape the behavior of autonomous robotic agents, in this case based on human intention. CyberSteer is tested in a high-fidelity unmanned aerial vehicle simulation environment, Microsoft AirSim. The simulated aerial robot performs collision avoidance through a cluttered forest environment using forward-looking depth sensing and roll, pitch, and yaw reference angle commands to the flight controller. This approach shows that the behavior of robotic systems can be shaped in a reduced amount of time when guided by a non-expert human who is only aware of the high-level goals of the task. Decreasing the amount of training time required and increasing safety during training maneuvers will allow for faster deployment of intelligent robotic agents in dynamic real-world applications.
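
The paper does not specify the exact similarity measure behind the intrinsic reward; one plausible choice, sketched here, is a squared-error kernel between the action taken by the RL policy and the action suggested by the behavior-cloning network (all values invented):

```python
import numpy as np

def intrinsic_reward(a_rl, a_bc, scale=1.0):
    """Reward the RL agent for staying close to the action suggested by the
    behavior-cloning network; a squared-error kernel is one simple choice."""
    return float(np.exp(-scale * np.sum((np.asarray(a_rl) - np.asarray(a_bc)) ** 2)))

# Example: roll/pitch/yaw reference commands from both modules
taken = np.array([0.10, -0.05, 0.00])      # action chosen by the RL policy
suggested = np.array([0.12, -0.02, 0.01])  # action suggested by the cloning net
print(intrinsic_reward(taken, suggested))  # close to 1 when actions agree
```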

Keywords: human-robot interaction, intelligent robots, robot learning, semisupervised learning, unmanned aerial vehicles

Procedia PDF Downloads 251
456 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
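
As an illustration of the Random Forest part of the methodology, a minimal sketch on synthetic activity-level data shows how feature importances surface cost drivers; the column names and data-generating process below are invented for the example, not taken from the case study:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Invented activity-level features; the paper's real schema is not public.
df = pd.DataFrame({
    "planned_cost":     rng.lognormal(10, 0.5, n),
    "scope_changes":    rng.poisson(1.5, n),
    "material_delay_d": rng.exponential(3.0, n),
    "crew_size":        rng.integers(2, 20, n),
})
df["overrun_pct"] = (0.04 * df["scope_changes"] + 0.02 * df["material_delay_d"]
                     + rng.normal(0, 0.02, n))

X, y = df.drop(columns="overrun_pct"), df["overrun_pct"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out activities:", round(model.score(X_te, y_te), 3))
for name, imp in sorted(zip(X.columns, model.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name:16s} {imp:.3f}")  # importances surface key cost drivers
```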

Keywords: cost prediction, machine learning, project management, random forest, neural networks

Procedia PDF Downloads 30
455 Methodologies for Deriving Semantic Technical Information Using an Unstructured Patent Text Data

Authors: Jaehyung An, Sungjoo Lee

Abstract:

Patent documents constitute an up-to-date and reliable source of knowledge reflecting technological advances, so patent analysis has been widely used to identify technological trends and formulate technology strategies. However, identifying technological information from patent data entails limitations such as high cost, complexity, and inconsistency, because it relies on expert knowledge. To overcome these limitations, researchers have applied quantitative analyses based on keyword techniques, which can capture the technological implications of patent documents and extract keywords that indicate the important contents. However, such approaches use only simple counting of keyword frequency, so they cannot take into account the semantic relationships among keywords or semantic information such as how technologies are used in their technology area and how they affect other technologies. To automatically analyze the unstructured technological information in patents and extract semantic information, the text should be transformed into an abstracted form that captures the key technological concepts. The sentence structure 'SAO' (subject, action, object) represents such key concepts and can be extracted by natural language processing (NLP). An SAO structure can be organized in a problem-solution format if the action-object (AO) pair states the problem and the subject (S) forms the solution. In this paper, we propose a new methodology that extracts SAO structures through rules for extracting technical elements. Although sentences in patent texts have a unique format, prior studies have depended on general NLP tools built for common documents such as newspapers, research papers, and Twitter mentions, and so cannot account for the specific sentence structures of patent documents. To overcome this limitation, we identified the characteristic forms of patent sentences and defined the SAO structures in patent text data. Four types of technical elements are considered: technology adoption purpose, application area, tool for technology, and technical components. Each of these sentence types has its own specific word structure given by the location and sequence of the parts of speech in the sentence. Finally, we developed algorithms for extracting SAOs; the result offers insight into the technology innovation process by providing different perspectives on technology.
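
The paper's patent-specific extraction rules are not reproduced here; as a baseline illustration of SAO extraction, a dependency-parse sketch with spaCy (a general-purpose English model, which is exactly what the authors argue is insufficient for patent text) might look like this:

```python
import spacy

# Requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def extract_sao(text):
    """Extract (subject, action, object) triples from verb heads."""
    triples = []
    for sent in nlp(text).sents:
        for tok in sent:
            if tok.pos_ == "VERB":
                subs = [c for c in tok.children if c.dep_ in ("nsubj", "nsubjpass")]
                objs = [c for c in tok.children if c.dep_ in ("dobj", "obj")]
                for s in subs:
                    for o in objs:
                        triples.append((s.text, tok.lemma_, o.text))
    return triples

print(extract_sao("The coating layer reduces corrosion. "
                  "The sensor measures temperature."))
# [('layer', 'reduce', 'corrosion'), ('sensor', 'measure', 'temperature')]
```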

Keywords: NLP, patent analysis, SAO, semantic-analysis

Procedia PDF Downloads 258
454 Design and Implementation of Generative Models for Odor Classification Using Electronic Nose

Authors: Kumar Shashvat, Amol P. Bhondekar

Abstract:

Among the five senses, smell is the most evocative and the least understood; odor testing has long seemed mysterious to most practitioners, and odor data hard to come by. Recognizing and classifying odors is therefore an important problem: the ability to smell a product and predict whether it is still fit for use or has become undesirable for consumption is worth capturing in a model. The common industrial standard for this kind of classification is color-based; however, odor can be a better discriminator than color, and a machine that incorporates it would be highly useful. Various discriminative approaches have been used to catalogue the odors of peas, trees, and cashews. Discriminative approaches offer good predictive performance and have been widely used in many applications, but they are unable to make effective use of unlabeled information. In such scenarios, generative approaches have better applicability, as they can handle problems such as settings where the variability in the range of possible input vectors is enormous. Generative models are used in machine learning either to model data directly or as an intermediate step in forming a probability density function. Here, Linear Discriminant Analysis and the Naive Bayes classifier were used to classify the odor of cashews. Linear Discriminant Analysis is a method used in data classification, pattern recognition, and machine learning to find a linear combination of features that characterizes or separates two or more classes of objects or events. The Naive Bayes algorithm is a classification approach based on Bayes' rule and a set of conditional independence assumptions; Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem. The main advantage of generative models is that they make stronger assumptions about the data, specifically about the distribution of the predictors given the response variables. The electronic instrument used for artificial odor sensing and classification is an electronic nose, a device designed to imitate the human sense of smell by analyzing individual chemicals or chemical mixtures. The experimental results were evaluated with the performance measures accuracy, precision, and recall, and show that the overall performance of Linear Discriminant Analysis was better than that of the Naive Bayes classifier on the cashew dataset.
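
A minimal sketch of the LDA-versus-Naive-Bayes comparison on electronic-nose data; the sensor matrix below is synthetic, since the paper's cashew dataset is not public:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Hypothetical e-nose data: rows = sniff cycles, columns = sensor responses.
rng = np.random.default_rng(1)
X_fresh = rng.normal(loc=0.0, scale=1.0, size=(60, 8))
X_spoiled = rng.normal(loc=1.0, scale=1.2, size=(60, 8))
X = np.vstack([X_fresh, X_spoiled])
y = np.array([0] * 60 + [1] * 60)  # 0 = acceptable, 1 = undesirable odor

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("Naive Bayes", GaussianNB())]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: accuracy {scores.mean():.3f} +/- {scores.std():.3f}")
```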

Keywords: odor classification, generative models, naive bayes, linear discriminant analysis

Procedia PDF Downloads 375
453 Optimized Deep Learning-Based Facial Emotion Recognition System

Authors: Erick C. Valverde, Wansu Lim

Abstract:

Facial emotion recognition (FER) systems have recently been developed for more advanced computer vision applications. The ability to identify human emotions would enable smart healthcare facilities to diagnose mental health illnesses (e.g., depression and stress) and would support better human social interaction with smart technologies. The FER system involves two steps: 1) a face detection task and 2) a facial emotion recognition task. It classifies the human expression into categories such as angry, disgust, fear, happy, sad, surprise, and neutral. Such systems require intensive research to address issues with human diversity, unique human expressions, and the variety of human facial features due to age differences. These issues generally affect the ability of the FER system to detect human emotions with high accuracy. Early FER systems used simple supervised classification algorithms like K-nearest neighbors (KNN) and artificial neural networks (ANN). These conventional FER systems suffer from low accuracy due to their inability to extract significant features of several human emotions. To increase the accuracy of FER systems, deep learning (DL)-based methods, like convolutional neural networks (CNN), have been proposed. These methods can find more complex features in the human face by means of the deeper connections within their architectures. However, the inference speed and computational cost of a DL-based FER system are often disregarded in exchange for higher accuracy. To cope with this drawback, an optimized DL-based FER system is proposed in this study. An extreme version of Inception V3, known as the Xception model, is leveraged by applying different network optimization methods. Specifically, network pruning and quantization are used to lower computational costs and reduce memory usage, respectively. To support low resource requirements, a 68-landmark face detector from Dlib is used in the early step of the FER system. Furthermore, a DL compiler is utilized to apply advanced optimization techniques to the Xception model to improve the inference speed of the FER system. In comparison to VGG-Net and ResNet50, the proposed optimized DL-based FER system experimentally demonstrates the objectives of the network optimization methods used. As a result, the proposed approach can be used to create an efficient, real-time FER system.
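
The paper combines pruning, quantization, and a DL compiler; as a hedged illustration of the quantization step alone, post-training dynamic-range quantization of an Xception-based classifier with the TensorFlow Lite converter could look like this (an untrained stand-in model is used, not the paper's trained network):

```python
import tensorflow as tf

# Stand-in for the trained Xception-based FER classifier (7 emotion classes).
model = tf.keras.applications.Xception(weights=None, classes=7)

# Post-training dynamic-range quantization: weights are stored in 8 bits,
# cutting model size and memory use at a small accuracy cost.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("fer_xception_quant.tflite", "wb") as f:
    f.write(tflite_model)
print(f"Quantized model: {len(tflite_model) / 1e6:.1f} MB")
```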

Keywords: deep learning, face detection, facial emotion recognition, network optimization methods

Procedia PDF Downloads 114
452 Adobe Attenuation Coefficient Determination and Its Comparison with Other Shielding Materials for Energies Found in Common X-Rays Procedures

Authors: Camarena Rodriguez C. S., Portocarrero Bonifaz A., Palma Esparza R., Romero Carlos N. A.

Abstract:

Adobe is a construction material that fulfills the same function as a conventional brick. Widely used since ancient times, it is present in an appreciable percentage of buildings in Latin America. Adobe is a mixture of clay and sand. Interest in studying the properties of this material arises from its presence in the infrastructure of hospital radiological services located in places with low economic resources, where it serves to attenuate radiation. Materials such as lead and concrete are the most commonly used for shielding and are widely studied in the literature. The present study determines the mass attenuation coefficient of Adobe and estimates the minimum thicknesses required for the primary and secondary barriers shielding radiological facilities where conventional and dental X-ray procedures are performed. In the experimental procedure, an X-ray source emitted direct radiation through different thicknesses of an Adobe barrier, with a detector placed on the other side. An UNFORS Xi solid-state detector was used to record the change in radiation intensity. The exposure parameters started at 45 kV; the tube voltage was then varied in increments of 5 kV, reaching a maximum of 125 kV. The X-ray tube was positioned at a distance of 0.5 m from the surface of the Adobe bricks, and the radiation beam was collimated to an area of 0.15 m x 0.15 m. Finally, mathematical methods were applied to determine the mass attenuation coefficient for the different energy ranges. In conclusion, the mass attenuation coefficient of Adobe was determined, and the approximate thicknesses of the most common Adobe barriers in hospital buildings were calculated for later application in radiological protection.
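
Given transmission measurements at a fixed tube voltage, the linear attenuation coefficient follows from the Beer-Lambert law, and dividing by density gives the mass attenuation coefficient. A minimal sketch with illustrative numbers (not the paper's measurements; the Adobe density is an assumed value):

```python
import numpy as np

# Hypothetical transmission data at one tube voltage:
# barrier thickness x (cm) and detector reading I (arbitrary units).
x = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
I = np.array([100.0, 61.0, 37.5, 23.0, 14.1])

# Beer-Lambert: I = I0 * exp(-mu * x)  =>  ln(I0 / I) = mu * x
mu = np.polyfit(x, np.log(I[0] / I), 1)[0]   # linear attenuation, 1/cm
rho = 1.6                                     # assumed adobe density, g/cm^3
print(f"mu = {mu:.3f} 1/cm, mu/rho = {mu / rho:.3f} cm^2/g")

# Thickness attenuating the beam to 5% of the incident intensity:
print(f"x(5%) = {np.log(20) / mu:.1f} cm")
```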

Keywords: Adobe, attenuation coefficient, radiological protection, shielding, x-rays

Procedia PDF Downloads 153
451 A Machine Learning Approach for Efficient Resource Management in Construction Projects

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.

Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management

Procedia PDF Downloads 26
450 Numerical Simulation of Two-Phase Flows Using a Pressure-Based Solver

Authors: Lei Zhang, Jean-Michel Ghidaglia, Anela Kumbaro

Abstract:

This work focuses on the numerical simulation of two-phase flows based on the bi-fluid six-equation model widely used in many industrial areas, such as nuclear power plant safety analysis. A pressure-based numerical method is adopted in our studies because two-phase flows commonly span a large range of Mach numbers, owing to the mixture of liquid and gas, and density-based solvers experience stiffness problems as well as a loss of accuracy when approaching the low Mach number limit. This work extends the semi-implicit pressure solver in the nuclear component code CUPID, where the governing equations are solved on unstructured grids with co-located variables to accommodate complicated geometries. A conservative version of the solver is developed in order to capture shocks exactly in single-phase flows and is extended to two-phase situations. An interfacial pressure term is added to the bi-fluid model to make the system hyperbolic and to establish a well-posed mathematical problem that allows convergent solutions on refined meshes. The ability of the numerical method to treat phase appearance and disappearance, as well as the behavior of the scheme at low Mach numbers, is demonstrated through several numerical results. Finally, interfacial mass and heat transfer models are included to handle situations where mass and energy transfer between phases is important, and associated industrial numerical benchmarks with tabulated EOS (equations of state) for the fluids are performed.
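
For orientation, a common textbook form of the phasic mass and momentum balances of the bi-fluid six-equation model (energy equations omitted), including the interfacial pressure correction the abstract refers to, is sketched below; the exact closures used in CUPID may differ:

```latex
% Phasic balances for k = liquid, gas; energy equations omitted.
\partial_t(\alpha_k \rho_k) + \nabla \cdot (\alpha_k \rho_k \mathbf{u}_k) = \Gamma_k
\qquad
\partial_t(\alpha_k \rho_k \mathbf{u}_k)
 + \nabla \cdot (\alpha_k \rho_k \mathbf{u}_k \otimes \mathbf{u}_k)
 + \alpha_k \nabla p - (p - p_i)\,\nabla \alpha_k
 = \alpha_k \rho_k \mathbf{g} + \mathbf{M}_k
```

Here α_k is the phase fraction, Γ_k and M_k are interfacial mass and momentum exchange terms, and p_i is the interfacial pressure; a nonzero p − p_i (one classical choice is proportional to α_g α_l ρ_g ρ_l (u_g − u_l)² / (α_g ρ_l + α_l ρ_g)) restores the hyperbolicity of the otherwise ill-posed system.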

Keywords: two-phase flows, numerical simulation, bi-fluid model, unstructured grids, phase appearance and disappearance

Procedia PDF Downloads 387
449 On Lie Groupoids, Bundles, and Their Categories

Authors: P. G. Romeo

Abstract:

A Lie group is a highly sophisticated structure: a smooth manifold whose underlying set of elements is equipped with the structure of a group such that the group multiplication and inverse-assigning functions are smooth. This structure was introduced by the Norwegian mathematician Sophus Lie, who founded the theory of continuous groups. Lie groups are well developed and have wide applications in areas including mathematical physics. There are several advances and generalizations of Lie groups, and Lie groupoids are one such, termed a "many-object generalization" of Lie groups. A groupoid is a category whose morphisms are all invertible; obviously, every group is a groupoid, but not conversely. Definition 1. A Lie groupoid G ⇒ M is a groupoid G on a base M together with smooth structures on G and M such that the maps α, β: G → M are surjective submersions, the object inclusion map x ↦ 1_x, M → G is smooth, and the partial multiplication G ∗ G → G is smooth. A bundle is a triple (E, p, B) where E and B are topological spaces and p: E → B is a map. B is called the base space, E the total space, and p the projection of the bundle. For each b ∈ B, the space p⁻¹(b) is called the fibre of the bundle over b ∈ B. Intuitively, a bundle is regarded as a union of fibres p⁻¹(b) for b ∈ B, parametrized by B and 'glued together' by the topology of the space E. A cross-section of a bundle (E, p, B) is a map s: B → E such that ps = 1_B. Example 1. Given any space B, a product bundle over B with fibre F is (B × F, p, B), where p is the projection on the first factor. Definition 2. A principal bundle P(M, G, π) consists of a manifold P, a Lie group G, and a free right action of G on P, denoted (u, g) ↦ ug, such that the orbits of the action coincide with the fibres of the surjective submersion π: P → M, and such that M is covered by the domains of local sections σ: U → P, U ⊆ M, of π. Definition 3. A Lie group bundle, or LGB, is a smooth fibre bundle (K, q, M) in which each fibre K_m = q⁻¹(m), and the fibre type G, has a Lie group structure, and for which there is an atlas {ψ_i: U_i × G → K_{U_i}} such that each ψ_{i,m}: G → K_m is an isomorphism of Lie groups. A morphism of LGBs from (K, q, M) to (K′, q′, M′) is a morphism (F, f) of fibre bundles such that each F_m: K_m → K′_{f(m)} is a morphism of Lie groups. In this paper, we discuss Lie groupoid bundles. It is seen that to a Lie groupoid Ω on base B there is associated a collection of principal bundles Ω_x(B, Ω_x), all of which are mutually isomorphic; conversely, associated to any principal bundle P(B, G, p) there is a groupoid, called the Ehresmann groupoid, which is easily seen to be Lie. Further, some interesting properties of the category of Lie groupoids and bundles are explored.

Keywords: groupoid, lie group, lie groupoid, bundle

Procedia PDF Downloads 66
448 Experimental and Numerical Performance Analysis for Steam Jet Ejectors

Authors: Abdellah Hanafi, G. M. Mostafa, Mohamed Mortada, Ahmed Hamed

Abstract:

Steam ejectors are at the heart of most desalination systems that employ vacuum. Systems driven by low-grade thermal energy sources, such as solar and geothermal energy, use the ejector to drive the process instead of high-grade electric energy. The jet ejector creates vacuum from a flow of steam or air, exploiting the severe pressure drop at the outlet of the main nozzle. The present work develops a one-dimensional mathematical model for designing jet ejectors and implements it as computer code in the Engineering Equation Solver (EES) software. The model receives the required operating conditions at the inlets and outlet of the ejector as inputs and produces the corresponding dimensions required to reach these conditions. The one-dimensional model has been validated against an existing model operating at the Abu-Qir power station. A prototype was designed according to the one-dimensional model and attached to a dedicated test bench to be tested before use in the solar desalination pilot plant; the tested ejector will be responsible for the start-up evacuation of the system and for adjusting the vacuum of the evaporating effects. The tested prototype showed good agreement with the results of the code. In addition, a numerical analysis was applied to one of the designed geometries to give an image of the pressure and velocity distribution inside the ejector and, separately, to show the difference in results between the two-dimensional ideal-gas model and the real prototype. The commercial edition of ANSYS Fluent v.14 is used to solve the two-dimensional axisymmetric case.
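
The one-dimensional model itself is in EES and not reproduced here; as an illustration of one of its building blocks, the choked-flow relation that sizes the primary nozzle throat can be sketched under an ideal-gas assumption for steam (all values are examples only):

```python
import numpy as np

def choked_mass_flow(A_throat, p0, T0, gamma=1.33, R=461.5):
    """Choked (sonic-throat) mass flow through the primary nozzle.
    p0 [Pa], T0 [K]: motive-steam stagnation state; A_throat [m^2].
    Ideal-gas steam assumed (gamma ~ 1.33, R ~ 461.5 J/kg/K)."""
    c = (np.sqrt(gamma / (R * T0))
         * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))))
    return A_throat * p0 * c  # kg/s

# Example: size the throat for 0.01 kg/s of steam at 3 bar, 150 C
target, p0, T0 = 0.01, 3e5, 423.15
A = target / choked_mass_flow(1.0, p0, T0)  # flow per unit area -> area
print(f"throat area ~ {A * 1e6:.1f} mm^2, "
      f"diameter ~ {2 * np.sqrt(A / np.pi) * 1e3:.2f} mm")
```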

Keywords: solar energy, jet ejector, vacuum, evaporating effects

Procedia PDF Downloads 610
447 Integrating Data Mining with Case-Based Reasoning for Diagnosing Sorghum Anthracnose

Authors: Mariamawit T. Belete

Abstract:

Cereal production and marketing are the means of livelihood for millions of households in Ethiopia. However, cereal production is constrained by technical and socio-economic factors; among the technical factors, crop diseases are major contributors to low yield. The aim of this research is to develop an integrated data mining and knowledge-based system for sorghum anthracnose disease diagnosis that assists agriculture experts and development agents in making timely decisions. The diagnosing system gathers information from the Melkassa agricultural research center and scores anthracnose severity on a standard scale. Empirical research was designed for data exploration, modeling, and confirmatory procedures for hypothesis testing and prediction, so as to draw sound conclusions. WEKA (Waikato Environment for Knowledge Analysis) was employed for the modeling. Knowledge-based systems follow a variety of approaches depending on the knowledge representation method; case-based reasoning (CBR), one of the popular approaches, is a problem-solving strategy that uses previous cases to solve new problems. The system utilizes hidden knowledge extracted by clustering a sampled anthracnose dataset, specifically with K-means. Clustered cases with centroid values are mapped to jCOLIBRI, and the integrator application is created using NetBeans with JDK 8.0.2. The main stages of a case-based reasoning model are retrieval, the similarity-measuring stage; reuse, which allows the domain expert to adapt a retrieved case solution to suit the current case; revision, to test the solution; and retention, to store the confirmed solution in the case base for future use. The system was evaluated for both performance and user acceptance. Seven test cases were used to test the prototype; experimental results show that the system achieves average precision and recall values of 70% and 83%, respectively. User acceptance testing was performed with five domain experts, and an average acceptance of 83% was achieved. Although these results are promising, further investigation of hybrid approaches, such as rule-based reasoning and a pictorial retrieval process, is recommended.
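
As a sketch of the clustering-plus-retrieval idea (K-means indexing the case base, nearest stored case reused for a query), with invented case attributes rather than the Melkassa data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical cases: lesion coverage %, humidity %, temperature C, severity
cases = np.array([
    [5, 60, 22, 1], [12, 70, 25, 2], [40, 85, 28, 4],
    [8, 65, 24, 1], [35, 80, 27, 4], [20, 75, 26, 3],
], dtype=float)
scaler = StandardScaler().fit(cases[:, :3])
X = scaler.transform(cases[:, :3])

# Cluster the case base; centroids act as index entries for retrieval.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

def retrieve(query_scaled):
    """Return the most similar stored case within the nearest cluster."""
    cluster = km.predict(query_scaled.reshape(1, -1))[0]
    members = np.where(km.labels_ == cluster)[0]
    dists = np.linalg.norm(X[members] - query_scaled, axis=1)
    return members[np.argmin(dists)]

idx = retrieve(X[2] + 0.1)          # a new case resembling stored case 2
print("retrieved case:", cases[idx])  # reuse its severity score / advice
```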

Keywords: sorghum anthracnose, data mining, case based reasoning, integration

Procedia PDF Downloads 73
446 DYVELOP Method Implementation for the Research Development in Small and Middle Enterprises

Authors: Jiří F. Urbánek, David Král

Abstract:

Small and middle enterprises (SMEs) have a specific mission, characteristics, and behavior in global competitive business environments. They must respect policies, rules, requirements, and standards in all their internal and external processes along supply-customer chains and networks. The aim of this paper is to introduce computational assistance that enables the use of the prevailing office software, MS Office (SmartArt and similar tools), for mathematical models built with the DYVELOP (Dynamic Vector Logistics of Processes) method. In an SME's global environment, this provides the capability to meet commitments regarding the effectiveness of the quality management system in satisfying customer requirements, and to continually improve the overall performance and efficiency of the organization's and the SME's processes, as well as its societal security, via continual planning improvement. The maps of the DYVELOP model, the Blazons, can express mathematically and graphically the relationships among entities, actors, and processes, including the discovery and modeling of cycling cases and their phases. The Blazons are best comprehended through a live PowerPoint presentation of this paper's mission, an added-value analysis. Crisis management in SMEs relies on such cycles for successfully coping with crisis situations: repeated cycling through these cases is a necessary condition for encompassing both the emergency event and the mitigation of the organization's damages. An uninterrupted and continuous cycling process is a good indicator of, and a controlling actor for, SME continuity and its advanced possibilities for sustainable development.

Keywords: blazons, computational assistance, DYVELOP method, small and middle enterprises

Procedia PDF Downloads 331
445 Chemical Study of Volatile Organic Compounds (VOCS) from Xylopia aromatica (LAM.) Mart (Annonaceae)

Authors: Vanessa G. P. Severino, JOÃO Gabriel M. Junqueira, Michelle N. G. do Nascimento, Francisco W. B. Aquino, João B. Fernandes, Ana P. Terezan

Abstract:

The scientific interest in analyzing VOCs represents a significant modern research field as a result of their importance in most branches of present-day life and industry. It is therefore extremely important to investigate, identify, and isolate volatile substances, since they can be used in different areas such as food, medicine, cosmetics, perfumery, aromatherapy, pesticides, repellents, and other household products, through methods for extracting volatile constituents such as solid-phase microextraction (SPME), hydrodistillation (HD), solvent extraction (SE), Soxhlet extraction, supercritical fluid extraction (SFE), steam distillation (SD), and vacuum distillation (VD). Chemometrics is an area of chemistry that uses statistical and mathematical tools for planning and optimizing experimental conditions and for extracting relevant chemical information from multivariate chemical data. In this context, the focus of this work was the study of the VOC chemistry of the species X. aromatica by SPME, in search of constituents that can be used in the industrial sector, notably in food, cosmetics, and perfumery, where such compounds play a considerable role. In addition, chemometric analysis was used to maximize the information obtained from this research, in order to detect the largest possible number of compounds. The investigation of X. aromatica flowers sampled in vitro and on the living plant proved consistent, although certain factors are presumed to influence the composition of metabolites, and the chemometric analysis strengthened this conclusion. The study of the chemical composition of X. aromatica thus contributes to knowledge of the species' VOCs and to a possible application.
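
As an illustration of the chemometric step, a PCA on a hypothetical HS-SPME/GC-MS peak-area table could be used to check whether in vitro and living-flower samples separate; all data below are synthetic, not the study's measurements:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical peak-area table: rows = flower samples (two sampling modes),
# columns = detected volatile compounds.
rng = np.random.default_rng(7)
in_vitro = rng.normal(1.0, 0.2, size=(10, 15))
in_vivo = rng.normal(1.4, 0.2, size=(10, 15))
X = StandardScaler().fit_transform(np.vstack([in_vitro, in_vivo]))

scores = PCA(n_components=2).fit_transform(X)
# Separation of the two groups along PC1 would indicate that the sampling
# mode influences the volatile profile, as the abstract suggests.
print("in vitro mean scores:", scores[:10].mean(axis=0))
print("in vivo  mean scores:", scores[10:].mean(axis=0))
```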

Keywords: chemometrics, flowers, HS-SPME, Xylopia aromatica

Procedia PDF Downloads 346
444 Application of Electrochemical Impedance Spectroscopy to Monitor the Steel/Soil Interface During Cathodic Protection of Steel in Simulated Soil Solution

Authors: Mandlenkosi George Robert Mahlobo, Tumelo Seadira, Major Melusi Mabuza, Peter Apata Olubambi

Abstract:

Cathodic protection (CP) has been widely considered a suitable technique for mitigating corrosion of buried metal structures. Considerable effort has gone into developing techniques, in particular non-destructive techniques, for monitoring and quantifying the effectiveness of CP, so as to ensure the sustainability and performance of buried steel structures. The aim of this study was to investigate the evolution of the electrochemical processes at the steel/soil interface during the application of CP to steel in simulated soil. Carbon steel was subjected to electrochemical tests, with NS4 solution used to simulate soil conditions, for 4 days before CP was applied for a further 11 days. A previously modified non-destructive voltammetry technique was applied before and after the application of CP to measure the corrosion rate. Electrochemical impedance spectroscopy (EIS), in combination with mathematical modeling through equivalent electrical circuits, was applied to determine the electrochemical behavior at the steel/soil interface. The measured corrosion rate decreased from 410 µm/yr to 8 µm/yr between days 5 and 14 as a result of the applied CP. Equivalent electrical circuits were successfully constructed and used to adequately model the EIS results. The modeling of the obtained EIS results revealed the formation of corrosion products via a mixed activation-diffusion mechanism during the first 4 days, while the activation mechanism prevailed in the presence of CP, resulting in a protective film. X-ray diffraction analysis confirmed the presence of corrosion products and the predominant protective film corresponding to the calcareous deposit.
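
As an illustration of the equivalent-circuit modeling step, a simplified Randles circuit can be fitted to a complex impedance spectrum by least squares. The sketch below uses synthetic data, not the study's measurements, and a generic circuit rather than the one the authors report:

```python
import numpy as np
from scipy.optimize import least_squares

def randles_z(params, omega):
    """Simplified Randles circuit: Rs in series with (Rct parallel Cdl)."""
    rs, rct, cdl = params
    return rs + rct / (1 + 1j * omega * rct * cdl)

def residuals(params, omega, z_meas):
    z = randles_z(params, omega)
    return np.concatenate([(z - z_meas).real, (z - z_meas).imag])

# Hypothetical spectrum generated from known values plus 1% noise
omega = 2 * np.pi * np.logspace(-2, 4, 40)
rng = np.random.default_rng(3)
z_meas = randles_z([20.0, 800.0, 5e-5], omega) \
         * (1 + 0.01 * rng.standard_normal(omega.size))

fit = least_squares(residuals, x0=[10, 500, 1e-4], args=(omega, z_meas))
print("Rs, Rct, Cdl =", fit.x)  # Rct tracks corrosion; Cdl the interface
```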

Keywords: carbon steel, cathodic protection, NS4 solution, voltammetry, EIS

Procedia PDF Downloads 53
443 Investigating Effect of Geometrical Proportions in Islamic Architecture and Music

Authors: Amir Hossein Allahdadi

Abstract:

The mystical and intuitive outlook of Islamic artists, inspired by Koranic and mystical principles and grounded in geometry and mathematics, has left unique works whose reach extends across the borders of Islam. The relationship between Islamic art and music in the traditional arts is one of the concepts that can be traced into the other arts by identifying its components. One of the links is the art of painting, whose subtleties are applicable to both architecture and music; links between architecture and music can thus be traced through other arts with a traditional foundation in order to evaluate the equivalents among the traditional arts. What is the relationship between the physical space of architecture and the nonphysical space of music? What is musical architecture? What is music that tends toward architecture? These are only small samples of the questions that arise in this field, and such questions and concerns will remain as long as music is played and architecture is made. Efforts have been made in this area, references compiled, and plans drawn; examples include the views of Mansour Falamaki in the book Architecture and Music, as well as the book Transition from Mud to Heart by Hesamodin Seraj. One common method is to give a certain melody to an architect, who then tries to design a specified architecture from that theme. This study does not derive architecture from a particular kind of music or form a volume based on a sound. Instead, it briefly reviews the relationship between music and architecture in the original and traditional arts of Iran, using the basic definitions of the arts. The musician plays, the architect designs, the actor shapes his desired space, and the painter displays his multi-dimensional world in two dimensions. The language of expression differs, but all of them can be gathered into one form, a form without clear boundaries. In fact, in any original art, the artist applies his art as a tool to express his insight, which is nothing but reaching the world beyond this place and time.

Keywords: architecture, music, geometric proportions, mathematical proportions

Procedia PDF Downloads 233
442 Enhancing Fault Detection in Rotating Machinery Using Wiener-CNN Method

Authors: Mohamad R. Moshtagh, Ahmad Bagheri

Abstract:

Accurate fault detection in rotating machinery is of utmost importance to ensure optimal performance and prevent costly downtime in industrial applications. This study presents a robust fault detection system based on vibration data collected from rotating gears under various operating conditions. The considered scenarios include: (1) both gears healthy, (2) one healthy gear and one faulty gear, and (3) an imbalance introduced to a healthy gear. Vibration data were acquired using a Hentek 1008 device and stored in a CSV file. Python code implemented in the Spyder environment was used for data preprocessing and analysis. Features were extracted using the Wiener feature selection method. These features were then fed to multiple machine learning algorithms, including Convolutional Neural Networks (CNN), Multilayer Perceptron (MLP), K-Nearest Neighbors (KNN), and Random Forest, to evaluate their performance in detecting and classifying faults on both the training and validation datasets. The comparative analysis revealed the superior performance of the Wiener-CNN approach, which achieved a remarkable accuracy of 100% for both the two-class (healthy gear and faulty gear) and three-class (healthy gear, faulty gear, and imbalanced) scenarios on the training and validation datasets. In contrast, the other methods exhibited varying levels of accuracy. The Wiener-MLP method attained 100% accuracy on the two-class training and validation datasets; for the three-class scenario, it reached 100% accuracy on the training dataset and 95.3% on the validation dataset. The Wiener-KNN method yielded 96.3% accuracy on the two-class training dataset and 94.5% on the validation dataset; in the three-class scenario, it achieved 85.3% on the training dataset and 77.2% on the validation dataset. The Wiener-Random Forest method achieved 100% accuracy on the two-class training dataset and 85% on the validation dataset, while in the three-class scenario it attained 100% training accuracy and 90.8% validation accuracy. The exceptional accuracy demonstrated by the Wiener-CNN method underscores its effectiveness in identifying and classifying fault conditions in rotating machinery. The proposed fault detection system uses vibration data analysis and advanced machine learning techniques to improve operational reliability and productivity. By adopting the Wiener-CNN method, industrial systems can benefit from enhanced fault detection capabilities, facilitating proactive maintenance and reducing equipment downtime.
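
The paper's "Wiener feature selection" is not specified in detail; one reading is Wiener filtering of the vibration frames followed by feature extraction and classification. A hedged sketch on synthetic gear vibration, with a Random Forest standing in for the classifiers compared above:

```python
import numpy as np
from scipy.signal import wiener
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def features(sig):
    """RMS, kurtosis, and peak value of a Wiener-filtered vibration frame."""
    s = wiener(sig, mysize=29)
    rms = np.sqrt(np.mean(s ** 2))
    kurt = np.mean((s - s.mean()) ** 4) / (s.std() ** 4 + 1e-12)
    return [rms, kurt, np.abs(s).max()]

rng = np.random.default_rng(0)
healthy = [features(rng.normal(0, 1.0, 2048)) for _ in range(80)]
faulty = [features(rng.normal(0, 1.0, 2048) +                 # gear-mesh tone
                   1.5 * np.sin(np.linspace(0, 400 * np.pi, 2048)))
          for _ in range(80)]
X = np.array(healthy + faulty)
y = np.array([0] * 80 + [1] * 80)
print(cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean())
```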

Keywords: fault detection, gearbox, machine learning, wiener method

Procedia PDF Downloads 70
441 Optimized Renewable Energy Mix for Energy Saving in Waste Water Treatment Plants

Authors: J. D. García Espinel, Paula Pérez Sánchez, Carlos Egea Ruiz, Carlos Lardín Mifsut, Andrés López-Aranguren Oliver

Abstract:

This paper briefly describes three main actions taken at a Waste Water Treatment Plant (WWTP) to reduce its energy consumption: optimization of the biological reactor in the aeration stage, by adding new control algorithms and more efficient equipment; installation of an innovative hybrid system with zero grid injection (100 kW of PV generation and 5 kW of mini-wind generation); and an intelligent management system that controls load consumption and energy generation in the most advantageous way. The project, called RENEWAT and funded under the European Commission LIFE 2013 call, has the main objective of reducing energy consumption through different actions on the processes that take place in a WWTP and of introducing renewable energy on these treatment plants, with the purpose of promoting the use of treated waste water for irrigation and decreasing CO2 emissions. Waste water treatment is always required before waste water can be reused for irrigation or discharged into water bodies. However, the energy demand of the treatment process is high enough to make the price of treated water exceed that of drinkable water, which makes it very difficult for any policy to encourage the reuse of treated water, with a great impact on the water cycle, particularly in areas suffering hydric stress or deficiency. The cost of treating waste water carries another climate-related burden: the energy for the process is obtained mainly from the electric grid, which in most of Europe means energy obtained from burning fossil fuels. The innovative part of this project is based on the implementation, adaptation, and integration of solutions to this problem, together with a new concept of integrating energy input and operational energy demand. Moreover, there is an important qualitative jump between the technologies previously used and those proposed in the project, which gives it an innovative character: there are no similar previous experiences of a WWTP with intelligent discrimination of energy sources, integrating renewable ones (PV and wind) and the grid.

Keywords: aeration system, biological reactor, CO2 emissions, energy efficiency, hybrid systems, LIFE 2013 call, process optimization, renewable energy sources, wasted water treatment plants

Procedia PDF Downloads 346
440 Computational Fluid Dynamics Simulations and Analysis of Air Bubble Rising in a Column of Liquid

Authors: Baha-Aldeen S. Algmati, Ahmed R. Ballil

Abstract:

Multiphase flows occur widely in many engineering and industrial processes as well as in the environment we live in. In particular, bubbly flows are crucial phenomena in fluid flow applications and can be studied experimentally, analytically, and computationally. In the present paper, the dynamic motion of an air bubble rising within a column of liquid is numerically simulated using the open-source CFD toolbox OpenFOAM. The interface-tracking algorithm MULES, built into OpenFOAM, is chosen to solve an appropriate mathematical model based on the volume of fluid (VOF) numerical method. The bubbles are initially spherical and start from rest in the stagnant column of liquid. The algorithm is first verified against numerical results and then validated against available experimental data. The comparison revealed that this algorithm provides results in very good agreement with the 2D numerical data of other CFD codes. The bubble shape and terminal velocity obtained from the 3D numerical simulation also showed very good qualitative and quantitative agreement with the experimental data; the simulated rising bubbles yield a very small percentage error in terminal velocity compared with experiment. The obtained results demonstrate the capability of OpenFOAM as a powerful tool to predict the rising behavior of spherical bubbles in a stagnant column of liquid. This will pave the way for a deeper understanding of the phenomenon of bubbles rising in liquids.
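
A quick sanity check often used alongside such simulations is the terminal velocity from a buoyancy-drag balance. The sketch below uses the Schiller-Naumann correlation for a rigid sphere, which is only an approximation for a deformable bubble, and standard water/air properties as assumed inputs:

```python
import numpy as np

def terminal_velocity(d, rho_l=998.0, rho_g=1.2, mu=1.0e-3, g=9.81):
    """Terminal rise velocity of a spherical air bubble of diameter d [m]
    in water, from buoyancy = drag with the Schiller-Naumann correlation."""
    u = 0.1  # initial guess, m/s
    for _ in range(100):  # fixed-point iteration on the drag coefficient
        re = max(rho_l * u * d / mu, 1e-9)
        cd = 24.0 / re * (1 + 0.15 * re ** 0.687) if re < 1000 else 0.44
        u = np.sqrt(4.0 * g * d * (rho_l - rho_g) / (3.0 * cd * rho_l))
    return u

print(f"d = 1 mm: u_t ~ {terminal_velocity(1e-3):.3f} m/s")
```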

Keywords: CFD simulations, multiphase flows, OpenFOAM, rise of bubble, volume of fluid method, VOF

Procedia PDF Downloads 115
439 Experimental Study for Examination of Nature of Diffusion Process during Wine Microoxygenation

Authors: Ilirjan Malollari, Redi Buzo, Lorina Lici

Abstract:

This study characterizes the changes in polyphenols (anthocyanins and flavonoids), colour intensity, total polyphenol index, and maturity and oxidation indices during the micro-oxygenation of a wine that comes from a specific geographic area in the southeastern region of the country. In addition, mathematical modeling of the oxygen distribution within the fermenting wort showed the strong impact of the carbon dioxide present in the liquor. Analytical results show periodic increases in colour intensity and tonality, a reduction in free anthocyanins and free flavonoids due to polycondensation reactions between tannins and anthocyanins, an increase in the total polyphenol index, and a decrease in the flavonoid-to-anthocyanin ratio, yielding a colour-stabilized red wine, as confirmed by sensory evaluation of colour intensity, tonality, body, tannic perception, taste, and aftertaste, which reflect the specific area and its environmental indications. Micro-oxygenation is a wine-making technique that consists of the addition of small, controlled amounts of oxygen at different stages of wine production, most effectively after the end of alcoholic fermentation. The objectives of the process include improved mouthfeel (body and texture), enhanced colour stability, increased oxidative stability, and reduced vegetative aroma during the transformation of the polyphenols. A very important factor is the polyphenolic composition of the grapes, which is strongly associated with the specific geographical environment in which the grapes are grown.
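
For orientation, the kind of diffusion model the abstract refers to can be sketched as Fick's second law solved by explicit finite differences; the values below (diffusivity, column length, surface concentration) are illustrative, and the sketch ignores the CO2 effect the study highlights:

```python
import numpy as np

# Fick's second law, explicit finite differences: oxygen diffusing from a
# saturated surface into a quiescent 10 cm wine column (illustrative values).
D = 2.0e-9                       # O2 diffusivity, m^2/s (order of magnitude)
L, nx = 0.10, 101
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D             # respects the stability limit dt <= dx^2/(2D)
c = np.zeros(nx)                 # dissolved O2, mg/L
c[0] = 8.0                       # assumed saturation at the surface

steps = 500                      # ~28 h of simulated time
for _ in range(steps):
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
    c[0], c[-1] = 8.0, c[-2]     # fixed surface value, zero-flux bottom

depth = dx * np.argmax(c < 0.1)  # first node below 0.1 mg/L
print(f"after {steps * dt / 3600:.0f} h, O2 front at ~{100 * depth:.1f} cm")
```

Even over a day, molecular diffusion alone only reaches a few centimetres, which is why the oxygen distribution in practice is governed by bubbling, mixing, and the dissolved-gas balance rather than diffusion alone.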

Keywords: micro oxygenation, polyphenols, environment, wine stability, diffusion modeling

Procedia PDF Downloads 203
438 Bayesian Estimation of Hierarchical Models for Genotypic Differentiation of Arabidopsis thaliana

Authors: Gautier Viaud, Paul-Henry Cournède

Abstract:

Plant growth models have been used extensively for the prediction of the phenotypic performance of plants. However, they most often remain calibrated for a given genotype and therefore do not take into account genotype-by-environment interactions. One way of achieving such an objective is to consider Bayesian hierarchical models. Three levels can be identified in such models: the first level describes how a given growth model describes the phenotype of the plant as a function of individual parameters; the second level describes how these individual parameters are distributed within a plant population; the third level corresponds to the attribution of priors on the population parameters. Thanks to the Bayesian framework, choosing appropriate priors for the population parameters makes it possible to derive analytical expressions for the full conditional distributions of these population parameters. As plant growth models are of a nonlinear nature, individual parameters cannot be sampled explicitly, and a Metropolis step must be performed. This allows for the use of a hybrid Gibbs-Metropolis sampler. A generic approach was devised for the implementation of both general state-space models and estimation algorithms within a programming platform. It was designed using the Julia language, which combines an elegant syntax and metaprogramming capabilities with high efficiency. Results were obtained for Arabidopsis thaliana on both simulated and real data. An organ-scale GreenLab model for the latter is thus presented, in which the surface area of each individual leaf can be simulated. It is assumed that the error made on the measurement of leaf areas is proportional to the leaf area itself; multiplicative normal noises for the observations are therefore used. Real data were obtained via image analysis of zenithal images of Arabidopsis thaliana over a period of 21 days, using a two-step segmentation and tracking algorithm that notably takes advantage of the Arabidopsis thaliana phyllotaxy. Since the model formulation is rather flexible, the data for a single individual need not be available at all times, nor must the observation times be the same for all individuals. This makes it possible to discard image-analysis data that are not considered reliable enough, thereby providing low-biased data in large quantity for leaf areas. The proposed model precisely reproduces the dynamics of Arabidopsis thaliana's growth while accounting for the variability between genotypes. In addition to the estimation of the population parameters, the level of variability is an interesting indicator of the genotypic stability of model parameters. A promising perspective is to test whether some of the latter should be considered as fixed effects.
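
The authors' implementation is in Julia; purely to illustrate the hybrid sampler structure (Metropolis for the nonlinear individual parameters, Gibbs for the population parameters), here is a Python sketch on a toy growth curve with invented values and simplified flat priors:

```python
import numpy as np

rng = np.random.default_rng(0)

def growth(theta, t):
    """Toy logistic curve standing in for the organ-scale growth model."""
    return 100.0 / (1.0 + np.exp(-theta * (t - 10.0)))

# Synthetic population: individual growth rates around a population mean
t = np.linspace(0, 21, 22)
theta_true = rng.normal(0.5, 0.08, size=12)
data = [growth(th, t) * (1 + 0.05 * rng.standard_normal(t.size))
        for th in theta_true]

n_iter, n_ind = 4000, len(data)
theta = np.full(n_ind, 0.4)
mu, tau2 = 0.4, 0.05       # population mean and variance (initial values)
samples = []
for it in range(n_iter):
    # Metropolis step for each individual parameter (non-conjugate level 1)
    for i in range(n_ind):
        def logpost(th):
            resid = data[i] - growth(th, t)
            return -0.5 * np.sum(resid**2) / 4.0 - 0.5 * (th - mu)**2 / tau2
        prop = theta[i] + 0.02 * rng.standard_normal()
        if np.log(rng.random()) < logpost(prop) - logpost(theta[i]):
            theta[i] = prop
    # Gibbs steps for the population parameters (conjugate, flat priors)
    mu = rng.normal(theta.mean(), np.sqrt(tau2 / n_ind))
    tau2 = 1.0 / rng.gamma(n_ind / 2.0, 2.0 / np.sum((theta - mu) ** 2))
    samples.append(mu)

print("posterior mean of mu:", np.mean(samples[1000:]))  # ~0.5
```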

Keywords: bayesian, genotypic differentiation, hierarchical models, plant growth models

Procedia PDF Downloads 294
437 Poultry in Motion: Text Mining Social Media Data for Avian Influenza Surveillance in the UK

Authors: Samuel Munaf, Kevin Swingler, Franz Brülisauer, Anthony O’Hare, George Gunn, Aaron Reeves

Abstract:

Background: Avian influenza, more commonly known as bird flu, is a viral zoonotic respiratory disease stemming from various species of poultry, including pets and migratory birds. Researchers have argued that the accessibility of health information online, in addition to the low-cost data collection methods the internet provides, has revolutionized the ways in which epidemiological and disease surveillance data are utilized. This paper examines the feasibility of using internet data sources, such as Twitter and livestock forums, for the early detection of avian flu outbreaks, through the use of text mining algorithms and social network analysis. Methods: Social media mining was conducted on Twitter over the period 01/01/2021 to 31/12/2021 via the Twitter API in Python. The results were filtered first by hashtags (#avianflu, #birdflu) and word occurrences (avian flu, bird flu, H5N1), and then refined by location to include only results from within the UK. The text was analysed in a time-series manner to determine keyword frequencies, and topic modeling was applied to uncover insights in the text prior to a confirmed outbreak. Further analysis examined clinical signs (e.g., swollen head, blue comb, dullness) within the time series prior to the avian flu outbreak confirmed by the Animal and Plant Health Agency (APHA). Results: Increases in Google search results and avian flu-related tweets correlated in time with the confirmed cases. Topic modeling uncovered clusters of word occurrences relating to livestock biosecurity, disposal of dead birds, and prevention measures. Conclusions: Text mining social media data can be useful for analysing discussed topics for epidemiological surveillance purposes, especially given the lack of applied research in the veterinary domain. However, the small sample size of tweets in certain weekly time periods makes it difficult to obtain statistically robust results, and the data contain a great amount of textual noise.
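
Collection details aside (the Twitter API and its endpoints change), the filtering and weekly time-series step can be sketched with pandas on a few invented tweets:

```python
import pandas as pd

# Assume tweets were already collected and stored with a timestamp and text
# column (collection code omitted; the tweets below are invented examples).
tweets = pd.DataFrame({
    "created_at": pd.to_datetime(["2021-10-02", "2021-10-29", "2021-11-03",
                                  "2021-11-04", "2021-11-05"]),
    "text": ["Swollen head in two hens, any advice?",
             "bird flu confirmed near the coast #birdflu",
             "More H5N1 cases reported this week",
             "dull birds, blue comb - worried about avian flu",
             "biosecurity reminder for keepers #avianflu"],
})

keywords = ["avian flu", "bird flu", "h5n1", "swollen head", "blue comb"]
pattern = "|".join(keywords)
tweets["match"] = tweets["text"].str.lower().str.contains(pattern)

weekly = (tweets[tweets["match"]]
          .set_index("created_at")
          .resample("W")["text"].count())
print(weekly)  # a spike ahead of confirmed cases would support early warning
```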

Keywords: veterinary epidemiology, disease surveillance, infodemiology, infoveillance, avian influenza, social media

Procedia PDF Downloads 99
436 Copyright Clearance for Artificial Intelligence Training Data: Challenges and Solutions

Authors: Erva Akin

Abstract:

The use of copyrighted material for machine learning purposes is a challenging issue in the field of artificial intelligence (AI). While machine learning algorithms require large amounts of data to train and to improve their accuracy and creativity, the use of copyrighted material without permission from the authors may infringe their intellectual property rights. In order to overcome the legal hurdles that copyright poses to the sharing, access, and re-use of data, the use of copyrighted material for machine learning purposes may be considered permissible under certain circumstances. For example, if the copyright holder has given permission to use the data through a licensing agreement, then the use for machine learning purposes may be lawful. It is also argued that copying for non-expressive purposes that do not involve conveying expressive elements to the public, such as automated data extraction, should not be seen as infringing. The focus of such 'copy-reliant technologies' is on understanding language rules, styles, and syntax; no creative ideas are being used. However, the non-expressive use defense sits within the framework of the fair use doctrine, which allows the use of copyrighted material for research or educational purposes. Questions arise because the fair use doctrine is not available in EU law; instead, the InfoSoc Directive provides for a rigid system of exclusive rights with a list of exceptions and limitations. One could only argue that non-expressive uses of copyrighted material for machine learning purposes do not constitute a 'reproduction' in the first place. Nevertheless, the use of machine learning with copyrighted material is difficult because EU copyright law applies to the mere use of the works. Two solutions can be proposed to address the problem of copyright clearance for AI training data. The first is to introduce a broad exception for text and data mining, either mandatory or limited to commercial and scientific purposes, or to permit the reproduction of works for non-expressive purposes. The second is that copyright laws should permit the reproduction of works for non-expressive purposes, which opens the door to discussions regarding the transposition of the fair use principle from the US into EU law. Both solutions aim to provide more space for AI developers to operate and encourage greater freedom, which could lead to more rapid innovation in the field. The Data Governance Act presents a significant opportunity to advance these debates. Finally, issues concerning the balance between general public interests and legitimate private interests in machine learning training data must be addressed. In my opinion, it is crucial that robot-created output should fall into the public domain. Machines depend on human creativity, innovation, and expression. To encourage technological advancement and innovation, freedom of expression and business operation must be prioritised.

Keywords: artificial intelligence, copyright, data governance, machine learning

Procedia PDF Downloads 77
435 Instructional Game in Teaching Algebra for High School Students: Basis for Instructional Intervention

Authors: Jhemson C. Elis, Alvin S. Magadia

Abstract:

Our world is full of numbers, shapes, and figures that illustrate the wholeness of things. Indeed, this statement signifies that mathematics is everywhere. Mathematics in its broadest sense helps people in their everyday lives, which is why education requires students to take it as a subject. The study aims to determine the profile of the respondents in terms of gender and age, the performance of the control and experimental groups in the pretest and posttest, the impact of the instructional game used as an instructional intervention in teaching algebra to high school students, and the significant difference between the levels of performance of the two groups of respondents in their pretest and posttest results, so that an instructional intervention can be proposed. The descriptive method was utilized in this study because it corresponds to the main objective of this research: to determine the effectiveness of an instructional game used as an instructional intervention in teaching algebra to high school students. Thirty students served as respondents, split into two equal groups of 15, while the teacher respondents comprised 7 females (70 percent) and 3 males (30 percent). The study recommends that mathematics teachers conceptualize instructional games so that students learn mathematics with fun and enjoyment, and that the mathematics education program supervisor provide training for teachers on how to conceptualize mathematical interventions for student learning. Meaningful activities must be provided to sustain students' interest in learning, and students must be given time to have fun in the classroom through playing while learning, since they consider mathematics difficult. Future researchers should continue conceptualizing mathematics interventions to meet the needs of students, and teachers should incorporate more educational games so that discussions are successful and joyful.

Keywords: instructional game in algebra, mathematical intervention, joyful, successful

Procedia PDF Downloads 588
434 Revolutionizing Healthcare Facility Maintenance: A Groundbreaking AI, BIM, and IoT Integration Framework

Authors: Mina Sadat Orooje, Mohammad Mehdi Latifi, Behnam Fereydooni Eftekhari

Abstract:

The integration of cutting-edge Internet of Things (IoT) technologies with advanced Artificial Intelligence (AI) systems is revolutionizing healthcare facility management. However, the current landscape of hospital building maintenance suffers from slow, repetitive, and disjointed processes, leading to significant losses of money, resources, and time. Additionally, the potential of Building Information Modeling (BIM) in facility maintenance is hindered by a lack of data within digital models of built environments, necessitating a more streamlined data collection process. This paper presents a robust framework that harmonizes AI with BIM-IoT technology to elevate healthcare Facility Maintenance Management (FMM) and address these pressing challenges. The methodology begins with a thorough literature review and requirements analysis, providing insights into existing technological landscapes and associated obstacles. Extensive data collection and analysis efforts follow to deepen understanding of hospital infrastructure and maintenance records. Critical AI algorithms are identified to address predictive maintenance, anomaly detection, and optimization needs, alongside integration strategies for BIM and IoT technologies that enable real-time data collection and analysis. The framework outlines protocols for data processing, analysis, and decision-making. A prototype implementation is executed to showcase the framework's functionality, followed by a rigorous validation process to evaluate its efficacy and gather user feedback. Refinement and optimization steps are then undertaken based on the evaluation outcomes. Emphasis is placed on the scalability of the framework in real-world scenarios and its potential applications across diverse healthcare facility contexts. Finally, the findings are documented and shared with the healthcare and facility management communities. The framework aims to significantly boost maintenance efficiency, cut costs, provide decision support, enable real-time monitoring, offer data-driven insights, and ultimately enhance patient safety and satisfaction. By tackling current challenges in healthcare facility maintenance management, it paves the way for the adoption of smarter and more efficient maintenance practices in healthcare facilities.
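
To make one of the framework's building blocks concrete, the sketch below shows a simple anomaly detector of the kind such a framework might apply to an IoT sensor stream: a rolling z-score over time-stamped readings. The sensor scenario, window size, and threshold are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: flag IoT sensor readings that deviate strongly
# from the rolling statistics of the preceding samples.
import pandas as pd

def flag_anomalies(readings: pd.Series, window: int = 96,
                   threshold: float = 3.0) -> pd.Series:
    """Mark readings more than `threshold` rolling standard deviations
    away from the rolling mean of the preceding `window` samples."""
    # shift(1) excludes the current reading, so a spike cannot mask itself.
    prior = readings.shift(1)
    mean = prior.rolling(window, min_periods=2).mean()
    std = prior.rolling(window, min_periods=2).std()
    return (readings - mean).abs() > threshold * std

# Example: 15-minute temperature samples from a (fictional) hospital HVAC unit.
samples = pd.Series(
    [21.0, 21.2, 20.9, 35.5, 21.1],
    index=pd.date_range("2024-01-01", periods=5, freq="15min"),
)
print(flag_anomalies(samples, window=4, threshold=2.0))  # only 35.5 is flagged
```

In a BIM-IoT pipeline, flagged readings would be linked back to the corresponding BIM element so that maintenance work orders carry location and asset context.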

Keywords: artificial intelligence, building information modeling, healthcare facility maintenance, internet of things integration, maintenance efficiency

Procedia PDF Downloads 44
433 SAFECARE: Integrated Cyber-Physical Security Solution for Healthcare Critical Infrastructure

Authors: Francesco Lubrano, Fabrizio Bertone, Federico Stirano

Abstract:

Modern societies strongly depend on Critical Infrastructures (CI). Hospitals, power supplies, water supplies, and telecommunications are just a few examples of CIs that provide vital functions to society. CIs like hospitals are very complex environments, characterized by a huge number of cyber and physical systems that are becoming increasingly integrated. Ensuring a high level of security within such critical infrastructure requires deep knowledge of vulnerabilities, threats, and potential attacks, as well as defence, prevention, and mitigation strategies. The possibility of remotely monitoring and controlling almost everything is pushing the adoption of network-connected devices. This implicitly introduces new threats and potential vulnerabilities, posing a risk especially to devices connected to the Internet. Modern medical devices used in hospitals are no exception and are increasingly being connected to enhance their functionality and ease their management. Moreover, hospitals are environments with high flows of people who are difficult to monitor and who can gain access relatively easily to the same areas used by staff, potentially causing damage. It is therefore clear that physical and cyber threats should be considered, analysed, and treated together as cyber-physical threats, which requires an integrated approach. SAFECARE, an integrated cyber-physical security solution, responds to these issues within healthcare infrastructures. The challenge is to bring together the most advanced technologies from the physical and cyber security spheres to achieve a global optimum for systemic security and for the management of combined cyber and physical threats, incidents, and their interconnections. Potential impacts and cascading effects are evaluated through impact propagation models that rely on modular ontologies and a rule-based engine. The SAFECARE architecture foresees i) a cyber security macroblock, where innovative tools are deployed to monitor network traffic, systems, and medical devices; ii) a physical security macroblock, where video management systems are coupled with access control management, building management systems, and innovative AI algorithms to detect behavioural anomalies; and iii) an integration system that collects all incoming incidents, simulates their potential cascading effects, and provides alerts and updated information regarding asset availability.
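
As a toy illustration of the impact-propagation idea, the sketch below walks cascading effects breadth-first over an asset dependency graph. The real SAFECARE system relies on modular ontologies and a rule-based engine; the asset names and dependency edges here are invented solely for illustration.

```python
# Greatly simplified stand-in for impact propagation: breadth-first
# traversal of an invented asset dependency graph.
from collections import deque

# asset -> assets that depend on it (illustrative, not from SAFECARE)
DEPENDENTS = {
    "power_supply": ["hvac", "server_room"],
    "server_room": ["ehr_system", "medical_devices"],
    "hvac": ["operating_theatre"],
    "ehr_system": [],
    "medical_devices": [],
    "operating_theatre": [],
}

def cascading_impact(initial_asset):
    """Return every asset transitively affected by an incident."""
    affected = {initial_asset}
    queue = deque([initial_asset])
    while queue:
        for dep in DEPENDENTS.get(queue.popleft(), []):
            if dep not in affected:
                affected.add(dep)
                queue.append(dep)
    return affected

# An incident on the power supply cascades to all five dependent assets.
print(cascading_impact("power_supply"))
```

A rule-based engine generalizes this by attaching conditions (e.g., incident type, severity, backup availability) to each edge instead of propagating unconditionally.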

Keywords: cyber security, defence strategies, impact propagation, integrated security, physical security

Procedia PDF Downloads 155
432 The Outcome of Using Machine Learning in Medical Imaging

Authors: Adel Edwar Waheeb Louka

Abstract:

Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Algorithms designed to improve the experience of medical professionals within their respective fields can raise the efficiency and accuracy of diagnosis. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases, yet in recent years they have not been widely used to detect and diagnose COVID-19. This underuse is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field suggests that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database, which includes images and masks of chest X-rays under the labels COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning, followed by a deep neural network that finalizes the feature extraction and predicts the diagnosis for the input image. This model was trained on 4035 images and validated on 807 separate images. The training images share an important feature: they are cropped beforehand to eliminate distractions during training. The image segmentation model uses an improved U-Net architecture to extract the lung mask from the chest X-ray image; it is trained on 8577 images with a validation split of 20%. The models' accuracy, precision, recall, F1-score, IOU, and loss are calculated on the external validation dataset. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IOU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
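
As a concrete reading of the classification pipeline, the sketch below assembles a frozen, ImageNet-pretrained DenseNet201 backbone with a small dense head for the three classes. The input shape, layer sizes, and dropout rate are assumptions, and the study's full model additionally includes an autoencoder stage that is omitted here.

```python
# Hedged sketch of a DenseNet201 transfer-learning classifier for
# three-class chest X-ray diagnosis; hyperparameters are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

backbone = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")
backbone.trainable = False  # keep the pretrained features fixed

model = models.Sequential([
    backbone,
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(3, activation="softmax"),  # COVID-19 / pneumonia / normal
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Freezing the backbone means the comparatively small labelled X-ray dataset trains only the new head, which is the usual way transfer learning compensates for limited medical imaging data.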

Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning

Procedia PDF Downloads 57
431 Contribution of PALB2 and BLM Mutations to Familial Breast Cancer Risk in BRCA1/2 Negative South African Breast Cancer Patients Detected Using High-Resolution Melting Analysis

Authors: N. C. van der Merwe, J. Oosthuizen, M. F. Makhetha, J. Adams, B. K. Dajee, S-R. Schneider

Abstract:

Women representing high-risk breast cancer families who tested negative for pathogenic mutations in BRCA1 and BRCA2 are four times more likely to develop breast cancer than women in the general population. Sequencing of genes involved in genomic stability and DNA repair has led to the identification of novel contributors to familial breast cancer risk, including BLM and PALB2. Bloom's syndrome is a rare autosomal recessive chromosomal instability disorder with a high incidence of various types of neoplasia; in the heterozygous state, BLM mutations are associated with breast cancer. PALB2, on the other hand, binds to BRCA2, and together they participate actively in DNA damage repair. Archived DNA samples of 66 BRCA1/2-negative high-risk breast cancer patients were retrospectively selected based on an extensive family history of the disease (>3 affected members per family). All coding regions and splice-site boundaries of both genes were screened using High-Resolution Melting Analysis, and samples exhibiting variation were bidirectionally Sanger sequenced. The clinical significance of each variant was assessed using various in silico and splice-site prediction algorithms. Comprehensive screening identified a total of 11 BLM and 26 PALB2 variants, ranging from common to rare and including three novel mutations. Three BLM and two PALB2 likely pathogenic mutations were identified that could account for the disease in these extensive breast cancer families in the absence of BRCA mutations (BLM c.11T>A, p.V4D; BLM c.2603C>T, p.P868L; BLM c.3961G>A, p.V1321I; PALB2 c.421C>T, p.Gln141Ter; PALB2 c.508A>T, p.Arg170Ter). Conclusion: The study confirmed the contribution of pathogenic mutations in BLM and PALB2 to the familial breast cancer burden in South Africa, explaining the disease in 7.5% of BRCA1/2-negative families with an extensive family history of breast cancer. Segregation analysis will be performed to confirm the clinical impact of these mutations in each family. These results justify the inclusion of both genes in a comprehensive breast and ovarian next-generation sequencing cancer panel; they should be screened simultaneously with BRCA1 and BRCA2, as this might explain a significant percentage of familial breast and ovarian cancer in South Africa.

Keywords: Bloom Syndrome, familial breast cancer, PALB2, South Africa

Procedia PDF Downloads 221
430 Investigating the Influence of Activation Functions on Image Classification Accuracy via Deep Convolutional Neural Network

Authors: Gulfam Haider, Sana Danish

Abstract:

Convolutional Neural Networks (CNNs) have emerged as powerful tools for image classification, and the choice of optimizer profoundly affects their performance. The study of optimizers and their adaptations remains a topic of significant importance in machine learning research. While numerous studies have explored and advocated for various optimizers, the efficacy of these optimization techniques is still subject to scrutiny. This work addresses the challenges surrounding the effectiveness of optimizers by conducting a comprehensive analysis and evaluation. The primary focus of this investigation is the performance of different optimizers when employed in conjunction with the popular Rectified Linear Unit (ReLU) activation function, which is known for its favorable properties in prior research. Specifically, we evaluate each optimizer in combination with both the original softmax activation function and the modified ReLU activation function, carefully assessing the impact on overall performance. To achieve this, a series of experiments is conducted on a well-established benchmark dataset for image classification tasks, the Canadian Institute for Advanced Research dataset (CIFAR-10). The optimizers investigated encompass a range of prominent algorithms: Adam, Root Mean Squared Propagation (RMSprop), Adaptive Learning Rate Method (Adadelta), Adaptive Gradient Algorithm (Adagrad), and Stochastic Gradient Descent (SGD). The performance analysis comprises a comprehensive evaluation of the classification accuracy, convergence speed, and robustness of the CNN models trained with each optimizer. Through rigorous experimentation and meticulous assessment, we discern the strengths and weaknesses of the different optimization techniques, providing valuable insights into their suitability for image classification tasks. This in-depth study contributes to the existing body of knowledge on optimizers in CNNs, shedding light on their performance characteristics for image classification. The findings serve to guide researchers and practitioners in making informed decisions when selecting optimizers and activation functions, thus advancing the state of the art in image classification with convolutional neural networks.
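
A simplified version of such an experimental loop is sketched below: a small ReLU CNN is rebuilt and trained on CIFAR-10 once per optimizer, and test accuracy is reported for each. The architecture, epoch count, and batch size are assumptions rather than the paper's exact configuration.

```python
# Illustrative optimizer comparison on CIFAR-10 with a small ReLU CNN.
# Architecture and training budget are assumptions, not the paper's setup.
import tensorflow as tf
from tensorflow.keras import layers, models

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

def build_cnn():
    return models.Sequential([
        layers.Input(shape=(32, 32, 3)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(10, activation="softmax"),
    ])

# Rebuild the model for each optimizer so every run starts from scratch.
for name in ["adam", "rmsprop", "adadelta", "adagrad", "sgd"]:
    model = build_cnn()
    model.compile(optimizer=name, loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=5, batch_size=128, verbose=0)
    _, acc = model.evaluate(x_test, y_test, verbose=0)
    print(f"{name}: test accuracy = {acc:.4f}")
```

Keeping everything except the optimizer string fixed isolates the optimizer's contribution, which mirrors the controlled comparison the abstract describes.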

Keywords: deep neural network, optimizers, RMSprop, ReLU, stochastic gradient descent

Procedia PDF Downloads 109