Search results for: artificial animal intelligence
3065 Emerging Technology for 6G Networks
Authors: Yaseein S. Hussein, Victor P. Gil Jiménez, Abdulmajeed Al-Jumaily
Abstract:
Due to the rapid advancement of technology, there is an increasing demand for wireless connections that are both fast and reliable, with minimal latency. New wireless communication standards are developed every decade, and the year 2030 is expected to see the introduction of 6G. The primary objectives of 6G network and terminal designs are focused on sustainability and environmental friendliness. The International Telecommunication Union Radiocommunication Sector (ITU-R) has established the minimum requirements for 6G, with peak and user data rates of 1 Tbps and 10-100 Gbps, respectively. In this context, Light Fidelity (Li-Fi) technology is the most promising candidate to meet these requirements. This article explores the various advantages, features, and potential applications of Li-Fi technology and compares it with 5G networking, to showcase its potential impact among other emerging technologies that aim to enable 6G networks.
Keywords: 6G networks, artificial intelligence (AI), Li-Fi technology, Terahertz (THz) communication, visible light communication (VLC)
Procedia PDF Downloads 95
3064 Design and Implementation of a Wearable Artificial Kidney Prototype for Home Dialysis
Authors: R. A. Qawasma, F. M. Haddad, H. O. Salhab
Abstract:
Hemodialysis is a life-preserving treatment for a number of patients with kidney failure. The standard hemodialysis schedule is three sessions per week; during the procedure, the patient usually suffers many inconveniences, with exhaustion and effects on the heart and cardiovascular system being the most common signs. This paper provides a solution to reduce these problems by designing a wearable artificial kidney (WAK), taking into consideration minimization of the size of the dialysis machine. The WAK system consists of two circuits: a blood circuit and a dialysate circuit. The blood from the patient is filtered in the dialyzer before returning to the patient. Several parameters are monitored using an advanced microcontroller and an array of sensors. The WAK is equipped with a visible and audible alarm system to alert the patient if there is any problem.
Keywords: artificial kidney, home dialysis, renal failure, wearable kidney
Procedia PDF Downloads 235
3063 Terraria AI: YOLO Interface for Decision-Making Algorithms
Authors: Emmanuel Barrantes Chaves, Ernesto Rivera Alvarado
Abstract:
This paper presents a method to enable agents for the Terraria game to evaluate algorithms commonly used in general video game artificial intelligence competitions. A ‘You Only Look Once’ (YOLO) model in the first layer of the process obtains information from the screen and translates it into the Video Game Description Language (VGDL), which the agents take as input to make decisions. Two state-of-the-art algorithms, Monte Carlo Tree Search and Rolling Horizon Evolutionary Algorithm, were tested and compared; in this case, the Rolling Horizon Evolutionary Algorithm showed better performance. The main advantage of this approach is that a VGDL description is not needed beforehand: it is built on the fly, which opens the road for using more games as a framework for AI.
Keywords: AI, MCTS, RHEA, Terraria, VGDL, YOLOv5
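The abstract above does not include implementation details; as a rough, hedged illustration of how a Rolling Horizon Evolutionary Algorithm can choose the next action from a game-state description, the following Python sketch may help. The action set, forward model, and scoring function are placeholders, not the authors' Terraria implementation.

```python
import random

ACTIONS = ["left", "right", "jump", "attack", "idle"]  # hypothetical action set
HORIZON, POP, GENS, MUT = 8, 20, 10, 0.2

def simulate(state, plan, forward_model, score):
    """Roll a plan of actions through a forward model and score the final state."""
    for action in plan:
        state = forward_model(state, action)
    return score(state)

def rhea_next_action(state, forward_model, score):
    """Rolling Horizon Evolutionary Algorithm: evolve short action plans and
    return the first action of the best plan found."""
    population = [[random.choice(ACTIONS) for _ in range(HORIZON)] for _ in range(POP)]
    for _ in range(GENS):
        ranked = sorted(population, key=lambda p: simulate(state, p, forward_model, score),
                        reverse=True)
        elite = ranked[: POP // 2]
        children = [[a if random.random() > MUT else random.choice(ACTIONS) for a in parent]
                    for parent in elite]
        population = elite + children
    best = max(population, key=lambda p: simulate(state, p, forward_model, score))
    return best[0]

if __name__ == "__main__":
    # Toy demo: state is the agent's x position and moving right is always best
    dummy_model = lambda s, a: s + (1 if a == "right" else 0)
    print(rhea_next_action(0, dummy_model, score=lambda s: s))
```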
Procedia PDF Downloads 96
3062 Interpretation and Prediction of Geotechnical Soil Parameters Using Ensemble Machine Learning
Authors: Goudjil Kamel, Boukhatem Ghania, Jlailia Djihene
Abstract:
This paper delves into the development of a desktop application designed to calculate soil bearing capacity and predict limit pressure. Drawing from an extensive review of existing methodologies, the study examines the various approaches employed in soil bearing capacity calculations, elucidating their theoretical foundations and practical applications. Furthermore, the study explores the growing intersection of artificial intelligence (AI) and geotechnical engineering, underscoring the potential of AI-driven solutions to enhance predictive accuracy and efficiency. Central to the research is the use of machine learning techniques, including Artificial Neural Networks (ANN), XGBoost, and Random Forest, for predictive modeling. Through comprehensive experimentation and analysis, the efficacy and performance of each method are evaluated, with XGBoost emerging as the best-performing algorithm, showing superior predictive capability compared to its counterparts. The study culminates in a clearer understanding of the dynamics at play in geotechnical analysis, offering valuable insights into optimizing soil bearing capacity calculations and limit pressure predictions, and promising enhanced precision and reliability in civil engineering projects.
Keywords: limit pressure of soil, XGBoost, random forest, bearing capacity
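As a hedged illustration of the model comparison this abstract describes (not the authors' code or data), a minimal Python sketch with placeholder features and targets might look like this:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from xgboost import XGBRegressor

# Placeholder data: X would hold soil parameters, y the measured limit pressure
X, y = np.random.rand(200, 5), np.random.rand(200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "ANN": MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
    "RandomForest": RandomForestRegressor(n_estimators=300, random_state=0),
    "XGBoost": XGBRegressor(n_estimators=300, learning_rate=0.05, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name}: RMSE = {rmse:.3f}")
```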
Procedia PDF Downloads 25
3061 Evaluating Performance of an Anomaly Detection Module with Artificial Neural Network Implementation
Authors: Edward Guillén, Jhordany Rodriguez, Rafael Páez
Abstract:
Anomaly detection techniques have focused on two main components: the first is data extraction and selection, and the second is the analysis performed over the obtained data. The goal of this paper is to analyze the influence that each of these components has on system performance by evaluating detection over network scenarios with different setups. The independent variables are as follows: the number of system inputs, the way the inputs are codified, and the complexity of the analysis techniques. For the analysis, several artificial neural network approaches with different numbers of layers are implemented. The obtained results show the influence that each of these variables has on system performance.
Keywords: network intrusion detection, machine learning, artificial neural network, anomaly detection module
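A minimal, hypothetical sketch of the kind of experiment described above, varying the complexity of the analysis stage (network depth) over encoded traffic features; the data and encoding are placeholders, not the study's setup.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Placeholder: rows are connection records, columns are encoded features, label 1 = anomaly
X, y = np.random.rand(1000, 20), np.random.randint(0, 2, 1000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for hidden in [(16,), (32, 16), (64, 32, 16)]:     # increasing analysis complexity
    clf = MLPClassifier(hidden_layer_sizes=hidden, max_iter=1000, random_state=0)
    clf.fit(X_tr, y_tr)
    print(hidden, "test accuracy:", clf.score(X_te, y_te))
```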
Procedia PDF Downloads 345
3060 Artificial Neural Networks and Geographic Information Systems for Coastal Erosion Prediction
Authors: Angeliki Peponi, Paulo Morgado, Jorge Trindade
Abstract:
Artificial Neural Networks (ANNs) and Geographic Information Systems (GIS) are applied as a robust tool for modeling and forecasting the erosion changes in Costa Caparica, Lisbon, Portugal, for 2021. ANNs present noteworthy advantages compared with other methods used for prediction and decision making in urban coastal areas. A multilayer perceptron type of ANN was used. Sensitivity analysis was conducted on the natural and social forces and dynamic relations in the dune-beach system of the study area. The network's parameters were varied in order to select the optimum topology of the network. The developed methodology appears fitted to reality; however, further steps would make it better suited.
Keywords: artificial neural networks, backpropagation, coastal urban zones, erosion prediction
Procedia PDF Downloads 394
3059 Parameters of Main Stage of Discharge between Artificial Charged Aerosol Cloud and Ground in Presence of Model Hydrometeor Arrays
Authors: D. S. Zhuravkova, A. G. Temnikov, O. S. Belova, L. L. Chernensky, T. K. Gerastenok, I. Y. Kalugina, N. Y. Lysov, A.V. Orlov
Abstract:
Investigation of discharges from artificial charged water aerosol clouds in the presence of arrays of model hydrometeors could help obtain new data about the peculiarities of return stroke formation between the thundercloud and the ground when large volumes of hail particles participate in lightning discharge initiation and propagation stimulation. Artificial charged water aerosol clouds of negative or positive polarity with a potential of up to one million volts have been used. Hail has been simulated by a group of conductive model hydrometeors of different forms. The parameters of the impulse current of the main stage of the discharge between the artificial positively and negatively charged water aerosol clouds and the ground in the presence of the model hydrometeor array, and of its corresponding electromagnetic radiation, have been determined. It was established that the parameters of the model hydrometeor array influence the parameters of the main stage of the discharge between the artificial thundercloud cell and the ground. The maximal values of the main-stage current impulse parameters and of the electromagnetic radiation registered by the plate antennas were found for the array of model hydrometeors of cylinder-of-revolution form with the negatively charged aerosol cloud, and for the array of hydrometeors of plate rhombus form with the positively charged aerosol cloud, correspondingly. It was found that the parameters of the main stage of the discharge between the artificial charged water aerosol cloud and the ground in the presence of a model hydrometeor array of the different considered forms depend on the polarity of the artificial charged aerosol cloud. On average, for all forms of the investigated model hydrometeor arrays, the values of the amplitude and current rise of the main-stage impulse current and the amplitude of the corresponding electromagnetic radiation for the artificial charged aerosol cloud of positive polarity were 1.1-1.9 times higher than for the charged aerosol cloud of negative polarity. Thus, the results could indicate a possibly more important role of large volumes of hail in the thundercloud in the parameters of the return stroke for positive lightning.
Keywords: main stage of discharge, hydrometeor form, lightning parameters, negative and positive artificial charged aerosol cloud
Procedia PDF Downloads 256
3058 Epidemiology of Toxoplasma gondii Infection in Animals of the Arabian Peninsula: A Systematic Review and Meta-Analysis
Authors: Ebtisam A. Al-Mslemani, Khalid A. Enan, Asmaa Abdelgadier, Nada Assaad, Zaynab Elhussein, Khalid Eltom
Abstract:
Background: Toxoplasma gondii (T. gondii) is a zoonotic parasite that can be transmitted from animals to humans, with felids acting as its definitive host. Thus, understanding the epidemiology of this parasite in animal populations is vital to controlling its transmission to humans as well as to other animal groups. Objectives: This systematic review and meta-analysis aim to summarise and analyse reports of T. gondii infection in animal species residing in the Arabian Peninsula. Methods: The review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, with relevant studies being retrieved from MEDLINE/PubMed, Scopus, Cochrane Library, Google Scholar and ScienceDirect. All articles published in Arabic or English between January 2000 and December 2020 were screened for eligibility. A random-effects model was used to calculate the pooled prevalence of T. gondii infection in the different animal populations found to harbour this infection. The critical appraisal tool for prevalence studies designed by the Joanna Briggs Institute (JBI) was used to assess the risk of bias in all included studies. Results: A total of 15 studies were retrieved, reporting prevalence estimates from 4 countries in this region and in 13 animal species. A quantitative meta-analysis estimated a pooled prevalence of 43% in felids [95% confidence interval (CI) = 23-64%, I2 index = 100%], 48% in sheep (95% CI = 27-70%, I2 = 99%) and 21% in camels (95% CI = 7-35%, I2 = 99%). Evidence of possible publication bias was found in both felids and sheep. Conclusions: This meta-analysis estimates a high prevalence of T. gondii infection in animal species that are of high economic and cultural importance to countries of this region. Hence, these findings provide valuable insight to public health authorities as well as economic and animal resources advisors in countries of the Arabian Peninsula.
Keywords: Arabian Peninsula, Toxoplasma gondii, animals, meta-analysis, toxoplasmosis
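For readers unfamiliar with random-effects pooling, the following Python sketch shows one common estimator (DerSimonian-Laird) applied to raw proportions. It is illustrative only: real prevalence meta-analyses usually transform proportions (e.g., logit or double-arcsine) before pooling, and the study counts below are invented, not taken from the review.

```python
import numpy as np

def pooled_prevalence(events, totals):
    """DerSimonian-Laird random-effects pooling of raw proportions (illustrative)."""
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    p = events / totals
    v = p * (1 - p) / totals                 # within-study variance of each proportion
    w = 1 / v
    p_fixed = np.sum(w * p) / np.sum(w)
    q = np.sum(w * (p - p_fixed) ** 2)       # Cochran's Q heterogeneity statistic
    k = len(p)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1 / (v + tau2)                  # random-effects weights
    pooled = np.sum(w_star * p) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical felid studies: positives and sample sizes
print(pooled_prevalence([30, 55, 12], [80, 120, 60]))
```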
Procedia PDF Downloads 86
3057 Healthy Architecture Applied to Inclusive Design for People with Cognitive Disabilities
Authors: Santiago Quesada-García, María Lozano-Gómez, Pablo Valero-Flores
Abstract:
The recent digital revolution, together with modern technologies, is changing the environment and the way people interact with inhabited space. However, the elderly are a very broad and varied group in society that presents serious difficulties in understanding these modern technologies. Outpatients with cognitive disabilities, such as those suffering from Alzheimer's disease (AD), are distinguished within this cluster. This population group is in constant growth, and they have specific requirements for their inhabited space. According to architecture, which is one of the health humanities, environments are designed to promote well-being and improve the quality of life for all. Buildings, as well as the tools and technologies integrated into them, must be accessible, inclusive, and foster health. In this new digital paradigm, artificial intelligence (AI) appears as an innovative resource to help this population group improve their autonomy and quality of life. Some experiences and solutions, such as those that interact with users through chatbots and voicebots, show the potential of AI in its practical application. In the design of healthy spaces, the integration of AI in architecture will allow the living environment to become a kind of 'exo-brain' that can make up for certain cognitive deficiencies in this population. The objective of this paper is to address, from the discipline of neuroarchitecture, how modern technologies can be integrated into everyday environments and become an accessible resource for people with cognitive disabilities. To this end, the methodology has a mixed structure. On the one hand, from an empirical point of view, the research carries out a review of the existing literature about the applications of AI to built space, following critical review principles. On the other hand, as unconventional architectural research, an experimental analysis is proposed based on people with AD as a data source to study how the environment in which they live influences their regular activities. The results presented in this communication are part of the progress achieved in the competitive R&D&I project ALZARQ (PID2020-115790RB-I00). These outcomes are aimed at the specific needs of people with cognitive disabilities, especially those with AD, although, owing to the comfort and wellness that the solutions entail, they can also be extrapolated to the whole of society. As a provisional conclusion, it can be stated that, in the immediate future, AI will be an essential element in the design and construction of healthy new environments. The discipline of architecture has the compositional resources to, through this emerging technology, build an 'exo-brain' capable of becoming a personal assistant for the inhabitants, with whom to interact proactively and contribute to their general well-being. The main objective of this work is to show how this is possible.
Keywords: Alzheimer’s disease, artificial intelligence, healthy architecture, neuroarchitecture, architectural design
Procedia PDF Downloads 62
3056 Influence of Model Hydrometeor Form on Probability of Discharge Initiation from Artificial Charged Water Aerosol Cloud
Authors: A. G. Temnikov, O. S. Belova, L. L. Chernensky, T. K. Gerastenok, N. Y. Lysov, A. V. Orlov, D. S. Zhuravkova
Abstract:
The hypothesis that lightning is initiated on arrays of large hydrometeors is under consideration. There is no agreement about which hydrometeor form would be the best for lightning initiation from the thundercloud. Artificial charged water aerosol clouds of positive or negative polarity could help investigate the possible influence of hydrometeor form on the peculiarities and the probability of lightning discharge initiation between the thundercloud and the ground. Artificial charged aerosol clouds that can create an electric field strength in the range of 5-6 kV/cm to 16-18 kV/cm have been used in the experiments. An array of model hydrometeors of volume and plate form has been placed near the bottom cloud boundary. It was established that different kinds of discharge could be initiated in the presence of the model hydrometeor array, from cloud discharges up to diffuse and channel discharges between the charged cloud and the ground. It was found that the form of the model hydrometeors could significantly influence channel discharge initiation from the artificial charged aerosol cloud of negative or positive polarity, correspondingly. Analysis and generalization of the experimental results have shown that the maximal probability of channel discharge initiation and propagation stimulation was observed for the artificial charged cloud of positive polarity when arrays of model hydrometeors of cylinder-of-revolution form were used. At the same time, for artificial charged clouds of negative polarity, application of the model hydrometeor array of plate rhombus form provided the maximal probability of channel discharge formation between the charged cloud and the ground. The established influence of the form of the model hydrometeors on channel discharge initiation from the artificial charged water aerosol cloud and its subsequent successful propagation has been related to the different character of positive and negative streamer and volume leader development on the model hydrometeor array near the bottom boundary of the charged cloud. The experimental results show the possibly important role of the form of large hail particles precipitated in the thundercloud in discharge initiation.
Keywords: cloud and channel discharges, hydrometeor form, lightning initiation, negative and positive artificial charged aerosol cloud
Procedia PDF Downloads 316
3055 Compressive Strength Evaluation of Underwater Concrete Structures Integrating the Combination of Rebound Hardness and Ultrasonic Pulse Velocity Methods with Artificial Neural Networks
Authors: Seunghee Park, Junkyeong Kim, Eun-Seok Shin, Sang-Hun Han
Abstract:
In this study, two kinds of nondestructive evaluation (NDE) techniques (rebound hardness and ultrasonic pulse velocity methods) are investigated for the effective maintenance of underwater concrete structures. A new methodology to estimate underwater concrete strengths more effectively, named “artificial neural network (ANN)-based concrete strength estimation with the combination of rebound hardness and ultrasonic pulse velocity methods”, is proposed and verified through a series of experimental works.
Keywords: underwater concrete, rebound hardness, Schmidt hammer, ultrasonic pulse velocity, ultrasonic sensor, artificial neural networks, ANN
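A minimal sketch of the kind of ANN-based strength estimation described above, assuming paired rebound numbers, ultrasonic pulse velocities, and core strengths are available; the values and network size are illustrative, not the authors' data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

# Columns: [rebound number, ultrasonic pulse velocity (m/s)]; target: compressive strength (MPa)
X = np.array([[30, 4100], [35, 4300], [28, 3900], [40, 4500], [33, 4200], [25, 3700]], float)
y = np.array([24.0, 31.0, 21.0, 38.0, 28.0, 18.0])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=1)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=1))
model.fit(X_tr, y_tr)
print("Predicted strengths (MPa):", model.predict(X_te))
```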
Procedia PDF Downloads 533
3054 Applying AI and IoT to Enhance Eye Vision Assessment, Early Detection of Eye Diseases, and Personalised Vision Correction
Authors: Gasim Alandjani
Abstract:
This research paper investigates the use of artificial intelligence (AI) and the Internet of Things (IoT) to improve eye healthcare; it concentrates on eye vision assessment, early detection of eye diseases, and personalised vision correction. The study offers a broad review of the literature and methodology and highlights key findings and implications for improving patient outcomes, broadening access to care, improving resource allocation, and directing future research and practice. The study concluded that the integration of AI and IoT technologies provides progressive answers to traditional hurdles in eye healthcare, supporting more precise, comprehensive, and individualised interventions for patients globally. The study emphasizes the significance of sustained innovation and the application of AI- and IoT-driven methodologies to improve eye healthcare and vision for future generations.
Keywords: AI, IoT, eye vision assessment, computer engineering
Procedia PDF Downloads 6
3053 An Artificial Intelligence Framework to Forecast Air Quality
Authors: Richard Ren
Abstract:
Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e. season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms
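A minimal sketch of the model-averaging idea in this framework, assuming a tabular dataset with timing, weather-forecast, and past-pollutant features and an air-quality class label; all feature names and data are placeholders, not the study's actual dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Placeholder features: [month, is_weekend, forecast_temp, forecast_wind, pm25_yesterday, ozone_yesterday]
X, y = np.random.rand(500, 6), np.random.randint(0, 3, 500)   # 3 hypothetical air-quality classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=200, random_state=0),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
]
probas = []
for m in models:
    m.fit(X_tr, y_tr)
    probas.append(m.predict_proba(X_te))

avg = np.mean(probas, axis=0)          # average the three models' class probabilities
pred = avg.argmax(axis=1)
print("ensemble accuracy:", (pred == y_te).mean())
```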
Procedia PDF Downloads 130
3052 Synthesis of a Model Predictive Controller for Artificial Pancreas
Authors: Mohamed El Hachimi, Abdelhakim Ballouk, Ilyas Khelafa, Abdelaziz Mouhou
Abstract:
Introduction: Type 1 diabetes occurs when beta cells are destroyed by the body's own immune system. Treatment of type 1 diabetes mellitus could be greatly improved by applying a closed-loop control strategy to insulin delivery, also known as an Artificial Pancreas (AP). Method: In this paper, we present a new formulation of the cost function for Model Predictive Control (MPC), using a technique that accelerates the speed of control of the AP and tackles the nonlinearity of the control problem via asymmetric objective functions. Findings: The outcome of this work is a new Model Predictive Control algorithm that achieves good performance, such as decreasing the time spent in hyperglycaemia and avoiding hypoglycaemia. Conclusion: This performance is validated in in silico trials.
Keywords: artificial pancreas, control algorithm, biomedical control, MPC, objective function, nonlinearity
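The abstract does not state the cost function; as a hedged illustration of what an asymmetric MPC objective for glucose control can look like (not necessarily the authors' formulation), deviations below the glucose reference can be weighted more heavily than deviations above it:

```latex
J(u) = \sum_{k=1}^{N_p} w_k \,\bigl(G_{t+k} - G_{\mathrm{ref}}\bigr)^2
       + \lambda \sum_{k=0}^{N_c-1} \Delta u_{t+k}^2,
\qquad
w_k =
\begin{cases}
w_{\mathrm{hypo}}, & G_{t+k} < G_{\mathrm{ref}},\\
w_{\mathrm{hyper}}, & G_{t+k} \ge G_{\mathrm{ref}},
\end{cases}
\qquad w_{\mathrm{hypo}} \gg w_{\mathrm{hyper}}.
```

Here G is the predicted glucose, Δu the change in insulin infusion, and Np, Nc the prediction and control horizons; choosing a much larger weight below the reference is one way to encode the asymmetry between hypoglycaemia and hyperglycaemia.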
Procedia PDF Downloads 307
3051 Using Cooperation Approaches at Different Levels of Artificial Bee Colony Method
Authors: Vahid Zeighami, Mohsen Ghsemi, Reza Akbari
Abstract:
In this work, a Multi-Level Artificial Bee Colony (called MLABC) is presented. In MLABC, two species are used. The first species employs n colonies, each of which optimizes the complete solution vector. The cooperation between these colonies is carried out by exchanging information through a leader colony, which contains a set of elite bees. The second species uses a cooperative approach in which the complete solution vector is divided into k sub-vectors, and each of these sub-vectors is optimized by a colony. The cooperation between these colonies is carried out by compiling the sub-vectors into the complete solution vector. Finally, the cooperation between the two species is obtained by exchanging information between them. The proposed algorithm is tested on a set of well-known test functions. The results show that the MLABC algorithm provides efficiency and robustness in solving numerical functions.
Keywords: artificial bee colony, cooperative, multilevel cooperation, vector
Procedia PDF Downloads 447
3050 Gamification of a Business Intelligence Tool
Authors: Stephen Miller
Abstract:
The act of applying game mechanics and dynamics (which have traditionally been used in video games) to business applications is being widely trialled in an effort to make conventional business software more participative, fun, and engaging. This new trend, named ‘gamification’, has its believers and, of course, its critics, who still need convincing that the concept is an effective and beneficial business tool worthy of investment. The literature reveals that user engagement with business intelligence (BI) tools is much lower than expected and that investors are failing to get a good return on their investment (ROI). So, a software prototype will be designed and developed to add gamification to a BI tool to determine its effect upon the user engagement levels of test participants. The experimental study will be evaluated using the comprehensive User Engagement Scale (UES) to see if there are improvements in areas such as aesthetics, perceived usability, endurability, novelty, felt involvement, and focused attention. The results of this unique study should demonstrate whether or not ‘gamifying’ a BI tool has the potential to increase an individual’s motivation to use BI software more often.
Keywords: business intelligence, gamification, human computer interaction, user engagement
Procedia PDF Downloads 585
3049 Case-Based Reasoning for Build Order in Real-Time Strategy Games
Authors: Ben G. Weber, Michael Mateas
Abstract:
We present a case-based reasoning technique for selecting build orders in a real-time strategy game. The case retrieval process generalizes features of the game state and selects cases using domain-specific recall methods, which perform exact matching on a subset of the case features. We demonstrate the performance of the technique by implementing it as a component of the integrated agent framework of McCoy and Mateas. Our results demonstrate that the technique outperforms nearest-neighbor retrieval when imperfect information is enforced in a real-time strategy game.
Keywords: case based reasoning, real time strategy systems, requirements elicitation, requirement analyst, artificial intelligence
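A minimal, hypothetical sketch of the retrieval step described above: game-state features are generalized, a subset is matched exactly (domain-specific recall), and the remaining features rank the recalled cases. The feature names and case format are invented for illustration and are not the McCoy and Mateas framework.

```python
# Hypothetical case: ({"race": ..., "enemy_race": ..., "minerals": ..., "supply": ...}, build_order)
EXACT_KEYS = ["race", "enemy_race"]            # recall: exact match on this feature subset

def generalize(state):
    """Bucket continuous features so similar states map to the same generalized values."""
    g = dict(state)
    g["minerals"] = state["minerals"] // 200   # 200-mineral buckets (illustrative)
    g["supply"] = state["supply"] // 10
    return g

def retrieve(case_base, state):
    q = generalize(state)
    # 1) Domain-specific recall: keep only cases matching exactly on the key subset
    recalled = [c for c in case_base if all(c[0][k] == q[k] for k in EXACT_KEYS)]
    if not recalled:
        return None
    # 2) Rank recalled cases by agreement over the remaining generalized features
    def similarity(case):
        feats, _ = case
        return sum(feats[k] == q[k] for k in feats if k not in EXACT_KEYS)
    _, best_build_order = max(recalled, key=similarity)
    return best_build_order

# Cases are stored with already-generalized features in this toy example
case_base = [
    ({"race": "P", "enemy_race": "T", "minerals": 2, "supply": 1}, ["pylon", "gate", "core"]),
    ({"race": "P", "enemy_race": "Z", "minerals": 1, "supply": 1}, ["pylon", "forge", "cannon"]),
]
print(retrieve(case_base, {"race": "P", "enemy_race": "Z", "minerals": 250, "supply": 12}))
```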
Procedia PDF Downloads 442
3048 Augmented Reality Technology for a User Interface in an Automated Storage and Retrieval System
Authors: Wen-Jye Shyr, Chun-Yuan Chang, Bo-Lin Wei, Chia-Ming Lin
Abstract:
This study describes the task of creating an augmented reality technology to give operators a user interface that might be part of an automated storage and retrieval system. Its objective was to give graduate engineering and technology students a set of tools with which to experiment with the creation of augmented reality technologies. To collect and analyze data for maintenance applications, the students used the augmented reality technology. Our findings support the evolution of artificial intelligence towards Industry 4.0 practices and the planned Industry 4.0 research stream. Important first insights into the study's effects on student learning are presented.
Keywords: augmented reality, storage and retrieval system, user interface, programmable logic controller
Procedia PDF Downloads 89
3047 Determination of the Botanical Origin of Honey by the Artificial Neural Network Processing of PARAFAC Scores of Fluorescence Data
Authors: Lea Lenhardt, Ivana Zeković, Tatjana Dramićanin, Miroslav D. Dramićanin
Abstract:
Fluorescence spectroscopy coupled with parallel factor analysis (PARAFAC) and artificial neural networks (ANNs) was used for the characterization and classification of honey. Excitation-emission spectra were obtained for 95 honey samples of different botanical origins (acacia, sunflower, linden, meadow, and fake honey) by recording emission from 270 to 640 nm with excitation in the range of 240-500 nm. The fluorescence spectra were described with a six-component PARAFAC model, and the PARAFAC scores were further processed with two types of ANNs (feed-forward networks and self-organizing maps) to obtain algorithms for the classification of honey on the basis of botanical origin. Both ANNs detected fake honey samples with 100% sensitivity and specificity.
Keywords: honey, fluorescence, PARAFAC, artificial neural networks
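A minimal sketch of the classification stage, assuming the six-component PARAFAC sample-mode scores have already been computed and botanical-origin labels are available; the data here are random placeholders, and the network size is illustrative.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# scores: (n_samples, 6) PARAFAC sample-mode scores; labels: botanical origin per sample
scores = np.random.rand(95, 6)                      # placeholder for real PARAFAC scores
labels = np.random.choice(["acacia", "sunflower", "linden", "meadow", "fake"], size=95)

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, scores, labels, cv=5).mean())
```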
Procedia PDF Downloads 956
3046 Artificial Intelligence in Duolingo
Authors: Jwana Khateeb, Lamar Bawazeer, Hayat Sharbatly, Mozoun Alghamdi
Abstract:
This research paper explores the idea of learning new languages through an innovative mobile-based learning technology. Throughout this paper, we discuss and examine a mobile-based application called Duolingo. Duolingo is a college-standard application for learning foreign languages such as Spanish and English. It is a smart application that uses adaptive technologies to advance the level of its students over time by offering new tasks. Furthermore, we discuss the history of the application and the methodology used within it. We conducted a study in which we surveyed ten people about their experience using Duolingo. The results are examined and analyzed and indicate the application's effectiveness for students who are seeking to learn new languages. The paper furthermore discusses the diverse methods and approaches to learning new languages through this mobile-based application.
Keywords: Duolingo, AI, personalized, customized
Procedia PDF Downloads 290
3045 Performance Analysis of Artificial Neural Network with Decision Tree in Prediction of Diabetes Mellitus
Authors: J. K. Alhassan, B. Attah, S. Misra
Abstract:
Human beings have the ability to make logical decisions. Although human decision-making is often optimal, it is insufficient when a huge amount of data is to be classified. Medical datasets are a vital ingredient in predicting patients' health conditions. To obtain the best predictions, the most suitable machine learning algorithms are required. This work compared the performance of Artificial Neural Network (ANN) and Decision Tree Algorithm (DTA) models with regard to several performance metrics using diabetes data. The evaluation was done using the Weka software, and it was found that the DTA performed better than the ANN. Multilayer Perceptron (MLP) and Radial Basis Function (RBF) were the two algorithms used for the ANN, while REPTree and LADTree were the DTA models used. The Root Mean Squared Error (RMSE) of the MLP is 0.3913, that of the RBF is 0.3625, that of REPTree is 0.3174, and that of LADTree is 0.3206, respectively.
Keywords: artificial neural network, classification, decision tree algorithms, diabetes mellitus
Procedia PDF Downloads 411
3044 Estimation of Pressure Loss Coefficients in Combining Flows Using Artificial Neural Networks
Authors: Shahzad Yousaf, Imran Shafi
Abstract:
This paper presents a new method for the calculation of pressure loss coefficients in tee junctions by use of an artificial neural network (ANN). Geometry and flow parameters are fed into the ANN as inputs for the purpose of training the network. The efficacy of the network is demonstrated by comparing the experimental and ANN-calculated pressure loss coefficients for combining flows in a tee junction. Reynolds numbers ranging from 200 to 14000 and discharge ratios varying from minimum to maximum flow have been used for the calculation of the pressure loss coefficients. The pressure loss coefficients calculated using the ANN are compared to models from the literature used in junction flows. The results achieved after the application of the ANN agree reasonably well with the experimental values.
Keywords: artificial neural networks, combining flow, pressure loss coefficients, solar collector tee junctions
Procedia PDF Downloads 392
3043 The Effect of Artificial Intelligence on Marketing Distribution
Authors: Yousef Wageh Nagy Fahmy
Abstract:
Mobile phones are one of the direct marketing tools used to reach today's hard-to-reach consumers. Cell phones are very personal devices, and they can be carried anytime, anywhere. This offers marketers the opportunity to create personalized marketing messages and send them at the right time and place. The study examined consumer attitudes towards mobile marketing, particularly SMS marketing. Unlike similar studies, this study does not focus on young people but includes consumers between the ages of 18 and 70 in the field study. The results showed that the majority of participants found SMS marketing disruptive. The biggest problems with SMS marketing are subscribing recipients to message lists without their consent, the large number of messages sent, and the irrelevance of the message content.
Keywords: direct marketing, mobile phones, mobile marketing, sms advertising, marketing sponsorship, marketing communication theories, marketing communication tools
Procedia PDF Downloads 73
3042 Demand Forecasting Using Artificial Neural Networks Optimized by Particle Swarm Optimization
Authors: Daham Owaid Matrood, Naqaa Hussein Raheem
Abstract:
Evolutionary algorithms and artificial neural networks (ANNs) are two relatively young research areas that have been subject to steadily growing interest in recent years. This paper examines the use of Particle Swarm Optimization (PSO) to train a multi-layer feed-forward neural network for demand forecasting. We use weekly demand data for packed cement and towels, provided by the Northern General Company for Cement and the General Company of Prepared Clothes, respectively. The results showed the superiority of neural networks trained using particle swarm optimization over neural networks trained using error backpropagation, owing to PSO's ability to escape from local optima.
Keywords: artificial neural network, demand forecasting, particle swarm optimization, weight optimization
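A compact, hedged sketch of the core idea, training a small feed-forward network's weights with a global-best PSO instead of backpropagation; the demand series, network size, and PSO constants are placeholders, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((60, 4))                    # e.g. the last four weeks' demand as inputs
y = X @ np.array([0.4, 0.3, 0.2, 0.1])     # placeholder target demand

N_IN, N_HID = 4, 6
DIM = N_IN * N_HID + N_HID + N_HID + 1     # weights and biases of a 4-6-1 network

def forward(w, X):
    W1 = w[: N_IN * N_HID].reshape(N_IN, N_HID)
    b1 = w[N_IN * N_HID : N_IN * N_HID + N_HID]
    W2 = w[N_IN * N_HID + N_HID : -1].reshape(N_HID, 1)
    b2 = w[-1]
    h = np.tanh(X @ W1 + b1)
    return (h @ W2).ravel() + b2

def mse(w):
    return np.mean((forward(w, X) - y) ** 2)

# Basic global-best PSO over the flattened weight vector
n_particles, iters, inertia, c1, c2 = 30, 300, 0.72, 1.49, 1.49
pos = rng.uniform(-1, 1, (n_particles, DIM))
vel = np.zeros_like(pos)
pbest, pbest_err = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_err.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n_particles, DIM)), rng.random((n_particles, DIM))
    vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    err = np.array([mse(p) for p in pos])
    improved = err < pbest_err
    pbest[improved], pbest_err[improved] = pos[improved], err[improved]
    gbest = pbest[pbest_err.argmin()].copy()

print("final training MSE:", mse(gbest))
```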
Procedia PDF Downloads 454
3041 The Challenge of Assessing Social AI Threats
Authors: Kitty Kioskli, Theofanis Fotis, Nineta Polemi
Abstract:
The European Union (EU) Artificial Intelligence (AI) Act, in Article 9, requires that risk management of AI systems include both technical and human oversight, while the NIST AI RMF (Appendix C) and the ENISA AI Framework recommendations state that further research is needed to understand the current limitations of social threats and human-AI interaction. AI threats within social contexts significantly affect the security and trustworthiness of AI systems; they are interrelated and trigger technical threats as well. For example, lack of explainability (e.g., the complexity of models can be challenging for stakeholders to grasp) leads to misunderstandings, biases, and erroneous decisions, which in turn impact the privacy, security, and accountability of AI systems. Based on NIST's four fundamental criteria for explainability, explainability threats can be classified into four sub-categories: a) Lack of supporting evidence: AI systems must provide supporting evidence or reasons for all their outputs. b) Lack of understandability: explanations offered by systems should be comprehensible to individual users. c) Lack of accuracy: the provided explanation should accurately represent the system's process of generating outputs. d) Out of scope: the system should only function within its designated conditions or when it possesses sufficient confidence in its outputs. Biases may also stem from historical data reflecting undesired behaviors; when present in the data, biases can permeate the models trained on them, thereby influencing the security and trustworthiness of AI systems. Socially related AI threats are recognized by various initiatives (e.g., the EU Ethics Guidelines for Trustworthy AI), standards (e.g., ISO/IEC TR 24368:2022 on AI ethical concerns, ISO/IEC AWI 42105 on guidance for human oversight of AI systems) and EU legislation (e.g., the General Data Protection Regulation 2016/679, the NIS 2 Directive 2022/2555, the Directive on the Resilience of Critical Entities 2022/2557, the EU AI Act, the Cyber Resilience Act). Measuring social threats, estimating the risks they pose to AI systems, and mitigating them is a research challenge. This paper presents the efforts of two European Commission projects (FAITH and THEMIS) from the Horizon Europe programme that analyse social threats by building cyber-social exercises in order to study human behaviour, traits, cognitive ability, personality, attitudes, interests, and other socio-technical profile characteristics. The research in these projects also includes the development of measurements and scales (psychometrics) for human-related vulnerabilities that can be used to estimate vulnerability severity more realistically, enhancing the CVSS 4.0 measurement.
Keywords: social threats, artificial intelligence, mitigation, social experiment
Procedia PDF Downloads 66
3040 Artificial Neural Network-Based Bridge Weigh-In-Motion Technique Considering Environmental Conditions
Authors: Changgil Lee, Junkyeong Kim, Jihwan Park, Seunghee Park
Abstract:
In this study, a bridge weigh-in-motion (BWIM) system was simulated under various environmental conditions, such as temperature, humidity, and wind, to improve the performance of the BWIM system. Environmental conditions can make it difficult to analyze measured data, and hence those factors should be compensated for. The various conditions were considered as input parameters for an Artificial Neural Network (ANN). The number of hidden layers for the ANN was decided so that nonlinearity could be sufficiently reflected in the BWIM results. The weights of vehicles and axle weights were more accurately estimated by applying the ANN approach. Additionally, the type of bridge, which was the target structure, was considered as an input parameter for the ANN.
Keywords: bridge weigh-in-motion (BWIM) system, environmental conditions, artificial neural network, type of bridges
Procedia PDF Downloads 442
3039 Capacity for Care: A Management Model for Increasing Animal Live Release Rates, Reducing Animal Intake and Euthanasia Rates in an Australian Open Admission Animal Shelter
Authors: Ann Enright
Abstract:
More than ever, animal shelters need to identify ways to reduce the number of animals entering shelter facilities and the incidence of euthanasia. Managing animal overpopulation using euthanasia can have detrimental health and emotional consequences for the shelter staff involved. There are also community expectations, with moral and financial implications to consider. To achieve the goals of reducing animal intake and the incidence of euthanasia, shelter best practice involves combining programs, procedures, and partnerships to increase live release rates (LRR) and reduce the incidence of disease, length of stay (LOS), and shelter intake, while overall remaining financially viable. Analysing daily processes, tracking outcomes, and implementing simple strategies enabled shelter staff to focus their efforts more effectively and achieve remarkable results. The objective of this retrospective study was to assess the effect of implementing the capacity for care (C4C) management model. Data on the average daily number of animals on site over a two-year period (2016-2017) were exported from a shelter management system, Customer Logic (CL) Vet, to Excel for manipulation and comparison. Following the implementation of C4C practices, the average daily number of animals on site was reduced by more than 50% (from a 2016 average of 103 to a 2017 average of 49), the average LOS was reduced by 50% from 8 weeks to 4 weeks, and the incidence of disease dropped from ≥70% to less than 2% of the cats on site at the completion of the study. The total number of stray cats entering the shelter due to council contracts fell by 50% (from 486 to 248). Improved cat outcomes were attributed to strategies that increased adoptions and reduced euthanasia of poorly socialized cats, including foster programs. To continue to achieve improvements in LRR and LOS, strategies to decrease intake further would be beneficial, for example, targeted sterilisation programs. In conclusion, the study highlighted the benefits of using C4C as a management tool, delivering a significant reduction in animal intake and euthanasia with positive emotional, financial, and community outcomes.
Keywords: animal welfare, capacity for care, cat, euthanasia, length of stay, managed intake, shelter
Procedia PDF Downloads 144
3038 Introduce a New Model of Anomaly Detection in Computer Networks Using Artificial Immune Systems
Authors: Mehrshad Khosraviani, Faramarz Abbaspour Leyl Abadi
Abstract:
Computer networks are a fundamental component of the modern information society, and these networks are generally connected to the Internet. Because security was not the primary purpose the Internet was designed for, these networks have faced many significant attacks in recent decades. Today, to provide security, different security tools and systems, including intrusion detection systems, are used in the network. In this work, an anomaly detection system based on artificial immunity is designed and evaluated. The idea of using artificial immune methods for detecting abnormalities in computer networks is motivated by their specific properties, which common security systems lack: for example, such methods can detect a variety of abnormalities and attacks, and features such as memory, learning ability, and the self-regulating nature of artificial immune algorithms can be pointed out. The detection system proposed in this paper requires only normal network samples for training, and no additional data about the types of attacks is needed. In the proposed system, positive selection and negative selection processes are used to select samples that distinguish the normal population from attacks. Evaluation on real data collections indicates that the false alarm rate of the proposed system is often low compared to other methods, while the detection rate varies.
Keywords: artificial immune system, abnormality detection, intrusion detection, computer networks
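A minimal sketch of the negative-selection idea referred to above: candidate detectors are generated at random and kept only if they do not match any normal (self) sample, and new records that match a detector are flagged as anomalous. Dimensionality, radius, and counts are illustrative, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
RADIUS, N_DETECTORS = 0.1, 300

# Placeholder "normal traffic" (self) samples: 2-D feature vectors in one corner of feature space
self_samples = rng.random((300, 2)) * 0.6

def matches(point, points, radius=RADIUS):
    return bool(np.any(np.linalg.norm(points - point, axis=1) < radius))

# Negative selection: keep only random candidate detectors that do NOT match any self sample
detectors = []
while len(detectors) < N_DETECTORS:
    candidate = rng.random(2)
    if not matches(candidate, self_samples):
        detectors.append(candidate)
detectors = np.array(detectors)

def is_anomalous(record):
    """A record is flagged as anomalous if it falls within any detector's radius."""
    return matches(record, detectors)

print(is_anomalous(np.array([0.9, 0.9])))   # far from normal traffic: likely flagged
print(is_anomalous(np.array([0.3, 0.3])))   # inside the normal region: likely not flagged
```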
Procedia PDF Downloads 355
3037 Smart Speed Bump
Authors: Mohammad Rahmani Rezaiyeh, Mojtaba Rahmani Rezaiyeh, Mehrdad Rahmani Rezaiyeh
Abstract:
The smart speed bump is a new invention, and I invented it. The smart speed bump is a system that can switch speed bumps between active and passive positions in the necessary situations. The basic system of the smart speed bump is a robotic system that includes mechanics, electronics, and artificial intelligence. The smart speed bump is capable of smart decision-making and can change its position by anticipating peak traffic hours. Among the advantages of this system are preventing the waste of petrol while crossing speed bumps, traffic management, smoother and safer traffic flow, and reducing accidents and judicial records.
Keywords: invention, smart, robotic system, speed bump, traffic, management
Procedia PDF Downloads 419
3036 Leveraging Digital Transformation Initiatives and Artificial Intelligence to Optimize Readiness and Simulate Mission Performance across the Fleet
Authors: Justin Woulfe
Abstract:
Siloed logistics and supply chain management systems throughout the Department of Defense (DOD) have led to disparate approaches to modeling and simulation (M&S), a lack of understanding of how one system impacts the whole, and issues with "optimal" solutions that are good for one organization but have dramatic negative impacts on another. Many different systems have evolved to try to understand and account for uncertainty and to reduce the consequences of the unknown. As the DoD undertakes expansive digital transformation initiatives, there is an opportunity to fuse and leverage traditionally disparate data into a centrally hosted source of truth. With a streamlined process incorporating machine learning (ML) and artificial intelligence (AI), advanced M&S will enable informed decisions guiding program success via optimized operational readiness and improved mission success. One of the current challenges is to leverage the terabytes of data generated by monitored systems to provide actionable information for all levels of users. The implementation of a cloud-based application that analyzes data transactions, learns and predicts future states from current and past states in real time, and communicates those anticipated states is an appropriate solution for the purposes of reduced latency and improved confidence in decisions. Decisions made from an ML and AI application combined with advanced optimization algorithms will improve the mission success and performance of systems, which will improve the overall cost and effectiveness of any program. The Systecon team constructs and employs model-based simulations, cutting across traditional silos of data, aggregating maintenance and supply data, incorporating sensor information, and applying optimization and simulation methods to an as-maintained digital twin with the ability to aggregate results across a system's lifecycle and across logical and operational groupings of systems. This coupling of data throughout the enterprise enables tactical, operational, and strategic decision support, detachable and deployable logistics services, and configuration-based automated distribution of digital technical and product data to enhance supply and logistics operations. As a complete solution, this approach significantly reduces program risk by allowing flexible configuration of data, data relationships, business process workflows, and early test and evaluation, especially budget trade-off analyses. A true capability to tie resources (dollars) to weapon system readiness, in alignment with the real-world scenarios a warfighter may experience, has been an objective yet to be realized to date. By developing and solidifying an organic capability to directly relate dollars to readiness and to inform the digital twin, the decision-maker is now empowered through valuable insight and traceability. This type of educated decision-making provides an advantage over adversaries who struggle to maintain system readiness at an affordable cost. The M&S capability developed allows program managers to independently evaluate system design and support decisions by quantifying their impact on operational availability and on operations and support cost, resulting in the ability to simultaneously optimize readiness and cost. This will allow the stakeholders to make data-driven decisions when trading cost and readiness throughout the life of the program. Finally, sponsors are available to validate product deliverables with efficiency and much higher accuracy than in previous years.
Keywords: artificial intelligence, digital transformation, machine learning, predictive analytics
Procedia PDF Downloads 162