Search results for: artificial law
1221 In Vitro Evaluation of an Artificial Venous Valve
Authors: Joon Hock Yeo, Munirah Ismail
Abstract:
Chronic venous insufficiency is a condition where the venous wall or venous valves fail to operate properly. As such, it is difficult for the blood to return from the lower extremities back to the heart. Chronic venous insufficiency affects many people worldwide. In the last decade, there have been many new and innovative designs of prosthetic venous valves to replace the malfunctioning native venous valves. However, thus far, to the authors’ knowledge, there is no successful prosthetic venous valve. In this project, we have developed a venous valve which could operate under low pressure. While further testing is warranted, this unique valve could potentially alleviate problems associated with chronic venous insufficiency.
Keywords: prosthetic venous valve, bi-leaflet valve, chronic venous insufficiency, valve hemodynamics
Procedia PDF Downloads 196
1220 Forecasting of Grape Juice Flavor by Using Support Vector Regression
Authors: Ren-Jieh Kuo, Chun-Shou Huang
Abstract:
The research on juice flavor forecasting has become more important in China. Due to the fast economic growth in China, many different kinds of juices have been introduced to the market. If a beverage company can understand its customers’ preferences well, the juice can be marketed more attractively. Thus, this study introduces the basic theory and computing process of grape juice flavor forecasting based on support vector regression (SVR). Applying SVR, BPN and LR to forecast the flavor of grape juice on real data, the results show that SVR is more suitable and effective in prediction performance.
Keywords: flavor forecasting, artificial neural networks, Support Vector Regression, China
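To make the SVR step concrete, the following is a minimal sketch of such a flavor-forecasting pipeline in scikit-learn; the file name, feature columns and hyperparameters are assumptions for illustration only, not the authors' actual data or settings.

```python
# Hypothetical sketch of an SVR-based flavor forecast; columns and CSV are assumed.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("grape_juice_samples.csv")          # assumed file
X = df[["sugar_content", "acidity", "aroma_index"]]  # assumed predictors
y = df["flavor_score"]                               # assumed sensory target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

scaler = StandardScaler().fit(X_train)
model = SVR(kernel="rbf", C=10.0, epsilon=0.1)       # hyperparameters are placeholders
model.fit(scaler.transform(X_train), y_train)

pred = model.predict(scaler.transform(X_test))
print("MAE:", mean_absolute_error(y_test, pred))
```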
Procedia PDF Downloads 494
1219 The Effects of Circadian Rhythms Change in High Latitudes
Authors: Ekaterina Zvorykina
Abstract:
Nowadays, the Arctic and Antarctic regions are recognized as being among the most important strategic resources for global development. Nonetheless, living conditions in Arctic regions still demand certain improvements. Since the region is sparsely populated, one of the main points of interest is the health accommodation of the people who migrate to the Arctic region for permanent and shift work. At Arctic and Antarctic latitudes, personnel face polar day and polar night conditions depending on the time of the year. This means that they are deprived of natural sunlight in the winter season and have continuous daylight in summer. Firstly, the change in light intensity over the 24-hour period due to migration affects circadian rhythms. Moreover, the controlled artificial light in winter is also an issue. The results of recent studies on night shift medical professionals, who were exposed to permanent artificial light, have already demonstrated higher risks of cancer, depression, and Alzheimer's disease. Moreover, people exposed to frequent time zone changes are also subject to higher risks of heart attack and cancer. Thus, our main goals are to understand how high latitude work and living conditions can affect human health and how adverse effects can be prevented. In our study, we analyze molecular and cellular factors that play an important role in circadian rhythm change and distinguish the main risk groups among people migrating to high latitudes. The main well-studied index of circadian timing is melatonin or its metabolite 6-sulfatoxymelatonin. At low light intensity, melatonin synthesis is disturbed, and as a result the human organism requires more time for sleep, which is still disregarded when it comes to working time organization. Lack of melatonin also causes a shortage in serotonin production, which leads to higher depression risk. Melatonin is also known to inhibit oncogenes and increase the apoptosis level in cells, the main factors for tumor growth, as well as circadian clock genes (for example Per2). Thus, people who work at high latitudes can be identified as a risk group for cancer and demand more attention. Clock/Clock genes, known to be among the main circadian clock regulators, decrease the sensitivity of the hypothalamus to estrogen and decrease glucose sensitivity, which leads to premature aging and oestrous cycle disruption. Permanent light exposure also leads to the accumulation of superoxide dismutase and oxidative stress, which are among the main factors for early dementia and Alzheimer's disease. We propose a new screening system adjusted for people migrating from middle to high latitudes, together with an accommodation therapy. Screening is focused on melatonin and estrogen levels, sleep deprivation and neural disorders, depression level, cancer risks, and heart and vascular disorders. Accommodation therapy includes different types of artificial light exposure, additional melatonin, and neuroprotectors. Preventive procedures can lead to an increase in migration to high latitudes and, as a result, to the prosperity of the Arctic region.
Keywords: circadian rhythm, high latitudes, melatonin, neuroprotectors
Procedia PDF Downloads 158
1218 An Artificial Neural Network Model Based Study of Seismic Wave
Authors: Hemant Kumar, Nilendu Das
Abstract:
A study based on an ANN structure gives us the information to predict the size of a future event by drawing on a past event. ANN, IMD (Indian Meteorological Department) data and remote sensing were used to derive a number of parameters for calculating the size that may occur in the future. A threshold was selected specifically above the high-frequency harvest reached in the area during the selected seismic activity. In the field of human and local biodiversity, it remains to obtain the right parameter compared to the frequency of impact. During the study, however, the assumption is that predicting seismic activity is a difficult process, not because of the parameters involved here, which can be analyzed and funded in research activity.
Keywords: ANN, Bayesian class, earthquakes, IMD
Procedia PDF Downloads 126
1217 Analysis of Histogram Asymmetry for Waste Recognition
Authors: Janusz Bobulski, Kamila Pasternak
Abstract:
Despite many years of effort and research, the problem of waste management is still current. So far, no fully effective waste management system has been developed. Many programs and projects improve statistics on the percentage of waste recycled every year. In these efforts, it is worth using modern computer vision techniques supported by artificial intelligence. In this article, we present a method of identifying plastic waste based on asymmetry analysis of the histogram of the image containing the waste. The method is simple but effective (94%), which allows it to be implemented on devices with low computing power, in particular on microcomputers. Such devices will be used both at home and in waste sorting plants.
Keywords: waste management, environmental protection, image processing, computer vision
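As a rough illustration of the idea, histogram asymmetry can be captured by the skewness of the grayscale histogram; the grayscale pipeline and the decision threshold below are assumptions for illustration, not the authors' published values.

```python
# Sketch: histogram skewness as an asymmetry measure for waste images (threshold is a placeholder).
import cv2
import numpy as np

def histogram_skewness(image_path):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    hist = cv2.calcHist([img], [0], None, [256], [0, 256]).ravel()
    levels = np.arange(256)
    p = hist / hist.sum()                        # normalised histogram
    mean = (levels * p).sum()
    std = np.sqrt(((levels - mean) ** 2 * p).sum())
    return ((levels - mean) ** 3 * p).sum() / std ** 3

def looks_like_plastic(image_path, threshold=0.5):  # assumed decision rule
    return histogram_skewness(image_path) > threshold
```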
Procedia PDF Downloads 120
1216 Assignment of Legal Personality to Robots: A Premature Meditation
Authors: Solomon Okorley
Abstract:
With the emergence of artificial intelligence, a proposition that has been made with increasing conviction is the need to assign legal personhood to robots. A major problem that arises when dealing with robots is the issue of liability: whom does one hold liable when a robot causes harm? The suggestion to assign legal personality to robots has been made to aid in the assignment of liability. This paper contends that it is premature to assign legal personhood to robots. The paper employed the doctrinal and comparative research methodology. The paper first discusses the various theories that underpin the granting of legal personhood to juridical personalities to ascertain whether these theories can aid in the proposition to assign legal personhood to robots. These theories include fiction theory, aggregate theory, realist theory, and organism theory. Except for the aggregate theory, the fiction theory, the realist theory and the organism theory provide a good foundation for the proposal to assign legal personhood to robots. The paper then considers whether robots should be assigned legal personhood from a jurisprudential approach. The legal positivists assert that no metaphysical presuppositions are needed to determine who could be a legal person: the sole deciding factor is engagement in legal relations, and this prerequisite could be fulfilled by robots. However, rationalists, religionists and naturalists assert that the satisfaction of the metaphysical criteria is the basis of legal personality, and since robots do not possess this feature, they cannot be assigned legal personhood. This differing perspective shows that the jurisprudential school of thought to which one belongs influences the decision whether to assign legal personhood to robots. The paper makes arguments for and against the assignment of legal personhood to robots. On the one hand, assigning legal personhood to robots is said to be necessary for the assignment of liability, and since robots are independent in their operation, they should be assigned legal personhood. However, it is argued that the degree of autonomy is insufficient: robots do not understand legal obligations; they do not have a will of their own, and the purported autonomy that they possess is an ‘imputed autonomy’. A crucial question to be asked is ‘whether it is desirable to confer legal personhood on robots’ and not ‘whether legal personhood should be assigned to robots’. This is due to the subjective nature of the responses to such a question as well as the peculiarities of countries in response to this question. The main argument in support of assigning legal personhood to robots is to aid in assigning liability. However, it is argued that conferring legal personhood on robots is not the only way to deal with liability issues. Since any of the stakeholders involved with the robot system can be held liable for an accident, it is not desirable to assign legal personhood to robots. It is forecast that in the epoch of strong artificial intelligence, granting robots legal personhood will be plausible; however, in the current era, it is premature.
Keywords: autonomy, legal personhood, premature, jurisprudential
Procedia PDF Downloads 69
1215 Use of Artificial Intelligence and Two Object-Oriented Approaches (k-NN and SVM) for the Detection and Characterization of Wetlands in the Centre-Val de Loire Region, France
Authors: Bensaid A., Mostephaoui T., Nedjai R.
Abstract:
Nowadays, wetlands are the subject of contradictory debates opposing scientific, political and administrative meanings. Indeed, given their multiple services (drinking water, irrigation, hydrological regulation, mineral, plant and animal resources...), wetlands concentrate many socio-economic and biodiversity issues. In some regions, they can cover vast areas (>100 thousand ha) of the landscape, such as the Camargue area in the south of France, inside the Rhone delta. The high biological productivity of wetlands, the strong natural selection pressures and the diversity of aquatic environments have produced many species of plants and animals that are found nowhere else. These environments are tremendous carbon sinks and biodiversity reserves, and depending on their age, composition and surrounding environmental conditions, wetlands play an important role in global climate projections. Covering more than 3% of the earth's surface, wetlands have experienced since the beginning of the 1990s a tremendous revival of interest, which has resulted in the multiplication of inventories, scientific studies and management experiments. The geographical and physical characteristics of the wetlands of the central region conceal a large number of natural habitats that harbour a great biological diversity. These wetlands are still influenced by human activities, especially agriculture, which affects their layout and functioning. In this perspective, decision-makers need to delimit spatial objects (natural habitats) in a certain way to be able to take action. Wetlands are no exception to this rule, even though delimiting a type of environment whose main characteristic is often to occupy the transition between aquatic and terrestrial environments seems a difficult exercise. However, it is possible to map wetlands with databases derived from the interpretation of photos and satellite images, such as the European Corine Land Cover database, which allows quantifying and characterizing the characteristic wetland types at each place. Scientific studies have shown limitations when using high spatial resolution images (SPOT, Landsat, ASTER) for the identification and characterization of small wetlands (1 hectare). To address this limitation, it is important to note that these wetlands generally represent spatially complex features; the use of very high spatial resolution images (>3 m) is therefore necessary to map both small and large areas. Moreover, the recent evolution of artificial intelligence (AI) and deep learning methods for satellite image processing has shown much better performance compared to traditional processing based only on pixel structures. Our research work is based on spectral and textural analysis of very high resolution images (SPOT and IRC orthoimages) using two object-oriented approaches, the nearest neighbour approach (k-NN) and the Support Vector Machine approach (SVM). The k-NN approach gave good results for the delineation of wetlands (wet marshes and moors, ponds, artificial wetlands, water body edges, mountain wetlands, river edges and brackish marshes), with a kappa index higher than 85%.
Keywords: land development, GIS, sand dunes, segmentation, remote sensing
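For orientation, the comparison of the two object-oriented classifiers could look like the sketch below, where each row is a segmented image object described by spectral/textural attributes and agreement is scored with the kappa index; the feature columns and file are assumptions for illustration.

```python
# Sketch of the k-NN vs SVM comparison on per-object features; columns and labels are assumed.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import cohen_kappa_score

objects = pd.read_csv("image_objects.csv")                # assumed segmentation output
X = objects[["mean_nir", "mean_red", "ndvi", "texture"]]  # assumed object attributes
y = objects["wetland_class"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.3, random_state=42)

for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    clf.fit(X_tr, y_tr)
    kappa = cohen_kappa_score(y_te, clf.predict(X_te))
    print(f"{name}: kappa = {kappa:.2f}")
```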
Procedia PDF Downloads 72
1214 Transforming Breast Density Measurement with Artificial Intelligence: Population-Level Insights from BreastScreen NSW
Authors: Douglas Dunn, Richard Walton, Matthew Warner-Smith, Chirag Mistry, Kan Ren, David Roder
Abstract:
Introduction: Breast density is a risk factor for breast cancer, both due to increased fibroglandular tissue that can harbor malignancy and due to the masking of lesions on mammography. Therefore, evaluation of breast density measurement is useful for risk stratification at an individual and population level. This study investigates the performance of Lunit INSIGHT MMG for automated breast density measurement. We analyze the reliability of Lunit compared to breast radiologists, explore density variations across the BreastScreen NSW population, and examine the impact of breast implants on density measurements. Methods: 15,518 mammograms were utilized for a comparative analysis of intra- and inter-reader reliability between Lunit INSIGHT MMG and breast radiologists. Subsequently, Lunit was used to evaluate 624,113 mammograms for investigation of density variations according to age and birth country, providing insights into diverse population subgroups. Finally, we compared breast density in 4,047 clients with implants to clients without implants, controlling for age and birth country. Results: The inter-reader agreement between Lunit and breast radiologists, measured by a weighted kappa coefficient, was 0.72 (95%CI 0.71-0.73). The highest breast densities were seen in women with a North-East Asian background, whilst those of Aboriginal background had the lowest density. Across all backgrounds, density was demonstrated to reduce with age, though at different rates according to country of birth. Clients with implants had higher density relative to the age-matched no-implant strata. Conclusion: Lunit INSIGHT MMG demonstrates reasonable inter- and intra-observer reliability for automated breast density measurement. The scale of this study is significantly larger than any previous study assessing breast density, due to the ability to process large volumes of data using AI. As a result, it provides valuable insights into population-level density variations. Our findings highlight the influence of age, birth country, and breast implants on density, emphasizing the need for personalized risk assessment and screening approaches. The large-scale and diverse nature of this study enhances the generalisability of our results, offering valuable information for breast cancer screening programs internationally.
Keywords: breast cancer, screening, breast density, artificial intelligence, mammography
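The agreement statistic reported above can be computed as in the sketch below; the data source, column names, and the choice of quadratic weighting are assumptions for illustration, not details given by the authors.

```python
# Sketch: weighted kappa between AI and radiologist density categories (e.g. BI-RADS a-d).
import pandas as pd
from sklearn.metrics import cohen_kappa_score

reads = pd.read_csv("density_reads.csv")   # assumed columns: ai_density, radiologist_density
kappa = cohen_kappa_score(reads["ai_density"],
                          reads["radiologist_density"],
                          weights="quadratic")            # weighting scheme assumed
print(f"Weighted kappa: {kappa:.2f}")
```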
Procedia PDF Downloads 12
1213 Integrating Inference, Simulation and Deduction in Molecular Domain Analysis and Synthesis with Peculiar Attention to Drug Discovery
Authors: Diego Liberati
Abstract:
Standard molecular modeling is traditionally done through the Schroedinger equations, with the help of powerful tools that manage them atom by atom, often requiring High Performance Computing. Here, a full portfolio of new tools is offered as a quite general-purpose approach, conjugating statistical inference in the so-called eXplainable Artificial Intelligence framework (in the form of Machine Learning of understandable rules) with the more traditional modeling, simulation and control theory of mixed dynamic-logic hybrid processes, illustrated through an example on a popular set of chemical physics problems.
Keywords: understandable rules ML, k-means, PCA, PieceWise Affine Auto Regression with eXogenous input
Procedia PDF Downloads 32
1212 A Machine Learning Approach for Classification of Directional Valve Leakage in the Hydraulic Final Test
Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter
Abstract:
Due to increasing cost pressure in global markets, artificial intelligence is becoming a technology that is decisive for competition. Predictive quality enables machinery and plant manufacturers to ensure product quality by using data-driven forecasts via machine learning models as a decision-making basis for test results. The use of cross-process Bosch production data along the value chain of hydraulic valves is a promising approach to classifying the quality characteristics of workpieces.
Keywords: predictive quality, hydraulics, machine learning, classification, supervised learning
Procedia PDF Downloads 232
1211 Digital Twin for a Floating Solar Energy System with Experimental Data Mining and AI Modelling
Authors: Danlei Yang, Luofeng Huang
Abstract:
The integration of digital twin technology with renewable energy systems offers an innovative approach to predicting and optimising performance throughout the entire lifecycle. A digital twin is a continuously updated virtual replica of a real-world entity, synchronised with data from its physical counterpart and environment. Many digital twin companies today claim to have mature digital twin products, but their focus is primarily on equipment visualisation. However, the core of a digital twin should be its model, which can mirror, shadow, and thread with the real-world entity, and this is still underdeveloped. For a floating solar energy system, a digital twin model can be defined in three aspects: (a) the physical floating solar energy system along with environmental factors such as solar irradiance and wave dynamics, (b) a digital model powered by artificial intelligence (AI) algorithms, and (c) the integration of real system data with the AI-driven model and a user interface. The experimental setup for the floating solar energy system is designed to replicate real-ocean conditions of floating solar installations within a controlled laboratory environment. The system consists of a water tank that simulates an aquatic surface, where a floating catamaran structure supports a solar panel. The solar simulator is set up in three positions: one directly above and two inclined at a 45° angle in front of and behind the solar panel. This arrangement allows the simulation of different sun angles, such as sunrise, midday, and sunset. The solar simulator is positioned 400 mm away from the solar panel to maintain consistent solar irradiance on its surface. Stability of the floating structure is achieved through ropes attached to anchors at the bottom of the tank, which simulate the mooring systems used in real-world floating solar applications. The floating solar energy system's sensor setup includes various devices to monitor environmental and operational parameters. An irradiance sensor measures solar irradiance on the photovoltaic (PV) panel. Temperature sensors monitor ambient air and water temperatures, as well as the PV panel temperature. Wave gauges measure wave height, while load cells capture mooring force. Inclinometers and ultrasonic sensors record heave and pitch amplitudes of the floating system’s motions. An electric load measures the voltage and current output from the solar panel. All sensors collect data simultaneously. Artificial neural network (ANN) algorithms are central to developing the digital model, which processes historical and real-time data, identifies patterns, and predicts the system’s performance in real time. The data collected from the various sensors are partly used to train the digital model, with the remaining data reserved for validation and testing. The digital twin model combines the experimental setup with the ANN model, enabling monitoring, analysis, and prediction of the floating solar energy system's operation. The digital model mirrors the functionality of the physical setup, running in sync with the experiment to provide real-time insights and predictions. It provides useful industrial benefits, such as informing maintenance plans as well as design and control strategies for optimal energy efficiency. In the long term, this digital twin will help improve overall solar energy yield whilst minimising operational costs and risks.
Keywords: digital twin, floating solar energy system, experiment setup, artificial intelligence
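A minimal sketch of the ANN at the core of such a digital model is shown below: sensor readings in, predicted electrical output out, with part of the data held back for validation. The column names, network size, and data file are assumptions for illustration, not the authors' implementation.

```python
# Sketch: ANN mapping floating-PV sensor data to power output (columns and sizes assumed).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

log = pd.read_csv("floating_pv_log.csv")   # assumed merged sensor log
features = ["irradiance", "air_temp", "water_temp", "panel_temp",
            "wave_height", "mooring_force", "heave", "pitch"]
X_train, X_val, y_train, y_val = train_test_split(
    log[features], log["power_output"], test_size=0.3, random_state=1)

scaler = StandardScaler().fit(X_train)
ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=1)
ann.fit(scaler.transform(X_train), y_train)

print("Validation R2:", r2_score(y_val, ann.predict(scaler.transform(X_val))))
```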
Procedia PDF Downloads 12
1210 Leveraging xAPI in a Corporate e-Learning Environment to Facilitate the Tracking, Modelling, and Predictive Analysis of Learner Behaviour
Authors: Libor Zachoval, Daire O Broin, Oisin Cawley
Abstract:
E-learning platforms, such as Blackboard, have two major shortcomings: limited data capture as a result of the limitations of SCORM (Shareable Content Object Reference Model), and lack of incorporation of Artificial Intelligence (AI) and machine learning algorithms, which could lead to better course adaptations. With the recent development of the Experience Application Programming Interface (xAPI), many additional types of data can be captured, and that opens a window of possibilities from which online education can benefit. In a corporate setting, where companies invest billions in the learning and development of their employees, some learner behaviours can be troublesome, for they can hinder the knowledge development of a learner. Behaviours that hinder knowledge development also raise ambiguity about a learner’s knowledge mastery, specifically those related to gaming the system. Furthermore, a company receives little benefit from its investment if employees are passing courses without possessing the required knowledge, and potential compliance risks may arise. Using xAPI and rules derived from a state-of-the-art review, we identified three learner behaviours, primarily related to guessing, in a corporate compliance course. The identified behaviours are: trying each option for a question, specifically for multiple-choice questions; selecting a single option for all the questions on the test; and continuously repeating tests upon failing as opposed to going over the learning material. These behaviours were detected in learners who repeated the test at least 4 times before passing the course. These findings suggest that gauging the mastery of a learner from multiple-choice test scores alone is a naive approach. Thus, next steps will consider the incorporation of additional data points and knowledge estimation models to model the knowledge mastery of a learner more accurately, along with analysis of the data for correlations between knowledge development and the identified learner behaviours. Additional work could explore how learner behaviours could be utilised to make changes to a course. For example, course content may require modifications (certain sections of learning material may be shown not to help many learners master the intended learning outcomes) or course design may need changes (such as the type and duration of feedback).
Keywords: artificial intelligence, corporate e-learning environment, knowledge maintenance, xAPI
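A hedged sketch of how the three guessing-related rules could be applied to simplified xAPI-style statements is given below; the statement schema, verbs, and thresholds are assumptions for illustration and not the authors' rule set.

```python
# Sketch: flag guessing-related behaviours from simplified xAPI-like statements (schema assumed).
from collections import defaultdict

def detect_guessing(statements):
    """statements: list of dicts with keys learner, verb, question, choice (assumed schema)."""
    answers = defaultdict(lambda: defaultdict(set))   # learner -> question -> chosen options
    attempts = defaultdict(int)                       # learner -> completed test attempts
    material_views = defaultdict(int)                 # learner -> visits to learning material

    for s in statements:
        if s["verb"] == "answered":
            answers[s["learner"]][s["question"]].add(s["choice"])
        elif s["verb"] == "completed":
            attempts[s["learner"]] += 1
        elif s["verb"] == "experienced":
            material_views[s["learner"]] += 1

    flagged = {}
    for learner, per_q in answers.items():
        options_used = {opt for choices in per_q.values() for opt in choices}
        tried_all = any(len(c) >= 4 for c in per_q.values())          # rule 1: tried each option
        single_option = len(per_q) > 1 and len(options_used) == 1     # rule 2: same option everywhere
        repeats_no_review = attempts[learner] >= 4 and material_views[learner] == 0  # rule 3
        if tried_all or single_option or repeats_no_review:
            flagged[learner] = {"tried_all_options": tried_all,
                                "single_option": single_option,
                                "repeats_without_review": repeats_no_review}
    return flagged
```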
Procedia PDF Downloads 122
1209 Advancements in AI Training and Education for a Future-Ready Healthcare System
Authors: Shamie Kumar
Abstract:
Background: Radiologists and radiographers (RR) need to educate themselves and their colleagues to ensure that AI is integrated safely, usefully, and in a meaningful way, with the direction that it always benefits the patients. AI education and training are fundamental to the way RR work and interact with it, such that they feel confident using it as part of their clinical practice in a way they understand. Methodology: This exploratory research will outline the current educational and training gaps for radiographers and radiologists in AI radiology diagnostics. It will review the status, skills, and challenges of educating and teaching, the understanding of the use of artificial intelligence within daily clinical practice, why it is fundamental, and the justification for why learning about AI is essential for wider adoption. Results: The current knowledge among RR is very sparse and country dependent, and with radiologists being the majority of the end-users for AI, their targeted AI training and learning opportunities surpass the ones available to radiographers. Many papers suggest there is a lack of knowledge, understanding, and training of AI in radiology amongst RR, and because of this, they are unable to comprehend exactly how AI works, how it integrates, the benefits of using it, and its limitations. There is an indication they wish to receive specific training; however, both professions need to actively engage in learning about it and develop the skills that enable them to effectively use it. There is expected variability amongst the professions in their degree of commitment to AI, as most don’t understand its value; this only adds to the need to train and educate RR. Currently, there is little AI teaching in either undergraduate or postgraduate study programs, and it is not readily available. In addition to this, there are other training programs, courses, workshops, and seminars available; most of these are short, single sessions rather than a continuation of learning, and they cover a basic understanding of AI and peripheral topics such as ethics, legal issues, and the potential of AI. There appears to be an obvious gap between the content of what the training programs offer and what RR need and want to learn. Due to this, there is a risk of ineffective learning outcomes and of attendees feeling a lack of clarity and depth of understanding of the practicality of using AI in a clinical environment. Conclusion: Education, training, and courses need to have defined learning outcomes with relevant concepts, ensuring theory and practice are taught as a continuation of the learning process based on use cases specific to a clinical working environment. Undergraduate and postgraduate courses should be developed robustly, ensuring they are delivered with expertise in the field; in addition, training and other programs should be delivered as continuing professional development and aligned with accredited institutions for a degree of quality assurance.
Keywords: artificial intelligence, training, radiology, education, learning
Procedia PDF Downloads 87
1208 Hidro-IA: An Artificial Intelligent Tool Applied to Optimize the Operation Planning of Hydrothermal Systems with Historical Streamflow
Authors: Thiago Ribeiro de Alencar, Jacyro Gramulia Junior, Patricia Teixeira Leite
Abstract:
The area of the electricity sector that deals in a coordinated manner with meeting energy needs from hydroelectric plants is called Operation Planning of Hydrothermal Power Systems (OPHPS). Its purpose is to find an operating policy to provide electrical power to the system in a given period, with reliability and minimal cost. Therefore, it is necessary to determine an optimal generation schedule for each hydroelectric plant in each interval, so that the system meets the demand reliably, avoiding rationing in years of severe drought, and so that the expected cost of operation over the planning period is minimized, defining an appropriate strategy for thermal complementation. Several optimization algorithms specifically applied to this problem have been developed and are in use. Although they provide solutions to various problems encountered, these algorithms have some weaknesses, such as difficulties in convergence or simplification of the original formulation of the problem, owing to the complexity of the objective function. An alternative to these challenges is the development of more sophisticated and reliable simulation-optimization techniques that can assist the planning of the operation. Thus, this paper presents the development of a computational tool, namely Hydro-IA, for solving the identified optimization problem while providing the user with easy handling. The adopted intelligent optimization technique is the Genetic Algorithm (GA), and the programming language is Java. First, the chromosomes were modelled; then the evaluation function of the problem and the operators involved were implemented; and finally, the graphical interfaces for user access were drafted. The results with the Genetic Algorithm were compared with the nonlinear programming (NLP) optimization technique. Tests were conducted with seven hydraulically interconnected hydroelectric plants using historical streamflow from 1953 to 1955. The comparison between the GA and NLP techniques shows that the operating cost obtained with the GA becomes increasingly smaller than that of NLP as the number of interconnected hydroelectric plants increases. The program achieved coherent performance in problem resolution without the need to simplify the calculations, together with ease of manipulating the simulation parameters and visualizing the output results.
Keywords: energy, optimization, hydrothermal power systems, artificial intelligence and genetic algorithms
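Purely as orientation, a heavily simplified version of the GA loop described above is sketched below: a chromosome is a schedule of hydro generation fractions per plant and interval, and the fitness is a placeholder cost of thermal complementation. The demand, capacity, and cost figures are invented for illustration and omit reservoir dynamics entirely; they are not the authors' model (which is written in Java, whereas Python is used here for brevity).

```python
# Sketch: toy GA for a hydrothermal schedule (all parameters and the cost model are assumptions).
import random

N_PLANTS, N_INTERVALS = 7, 36
DEMAND = 1000.0          # assumed constant demand per interval (MW)
HYDRO_CAPACITY = 120.0   # assumed capacity per plant (MW)
THERMAL_COST = 50.0      # assumed cost per MW of thermal complementation

def random_chromosome():
    return [[random.random() for _ in range(N_INTERVALS)] for _ in range(N_PLANTS)]

def cost(chromosome):
    total = 0.0
    for t in range(N_INTERVALS):
        hydro = sum(chromosome[p][t] * HYDRO_CAPACITY for p in range(N_PLANTS))
        total += max(DEMAND - hydro, 0.0) * THERMAL_COST   # thermal generation fills the gap
    return total

def crossover(a, b):
    cut = random.randint(1, N_INTERVALS - 1)
    return [ra[:cut] + rb[cut:] for ra, rb in zip(a, b)]

def mutate(c, rate=0.02):
    return [[random.random() if random.random() < rate else g for g in row] for row in c]

population = [random_chromosome() for _ in range(50)]
for generation in range(200):
    population.sort(key=cost)
    parents = population[:10]                               # elitist selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(40)]
    population = parents + children

print("Best operating cost found:", cost(min(population, key=cost)))
```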
Procedia PDF Downloads 420
1207 Use of AI for the Evaluation of the Effects of Steel Corrosion in Mining Environments
Authors: Maria Luisa de la Torre, Javier Aroba, Jose Miguel Davila, Aguasanta M. Sarmiento
Abstract:
Steel is one of the most widely used materials in polymetallic sulfide mining installations. One of the main problems suffered by these facilities is the economic loss due to the corrosion of this material, which is accelerated and aggravated by contact with the acid waters generated in these mines when sulfides come into contact with oxygen and water. This generation of acidic water, in turn, is accelerated by the presence of acidophilic bacteria. In order to gain a more detailed understanding of this corrosion process and the interaction between steel and acidic water, a laboratory experiment was carried out in which carbon steel plates were immersed for 27 days in four different solutions: distilled water (BK), intended to reproduce the effect of rain on this material; an acid solution from a mine with a high Fe2+/Fe3+ content (PO); another acid solution of water from another mine with a high Fe3+/Fe2+ content (PH); and, finally, one that reproduced acid mine water with a high Fe2+/Fe3+ content but contained no bacteria (ST). Every 24 hours, physicochemical parameters were measured and water samples were taken to analyse the dissolved elements. The results of these measurements were processed using an explainable AI model based on fuzzy logic. It could be seen that, in all cases, there was an increase in pH, as well as in the concentrations of Fe and, in particular, Fe(II), as a consequence of the oxidation of the steel plates. Proportionally, the increase in Fe concentration was higher in PO and ST than in PH because Fe precipitates were produced in the latter. The rise of Fe(II) was proportionally much higher in PH, especially in the first hours of exposure, because it started from a lower initial concentration of this ion. Although to a lesser extent than in PH, the increase in Fe(II) also occurred faster in PO than in ST, a consequence of the action of the catalytic bacteria. On the other hand, Cu concentrations decreased throughout the experiment (with the exception of distilled water, which initially had no Cu), as a result of an electrochemical process that generates a precipitation of Cu together with Fe hydroxides. This decrease is lower in PH because the high total acidity keeps Cu in solution for a longer time. With the application of an artificial intelligence tool, it has been possible to evaluate the effects of steel corrosion in mining environments, corroborating and extending what was obtained by means of classical statistics. Acknowledgments: This work has been supported by MCIU/AEI/10.13039/501100011033/FEDER, UE, through the project PID2021-123130OB-I00.
Keywords: carbon steel, corrosion, acid mine drainage, artificial intelligence, fuzzy logic
Procedia PDF Downloads 23
1206 Smart Construction Sites in KSA: Challenges and Prospects
Authors: Ahmad Mohammad Sharqi, Mohamed Hechmi El Ouni, Saleh Alsulamy
Abstract:
Due to the worldwide revolution in emerging technologies, the need to exploit and employ innovative technologies for other functions and purposes in different aspects has become a remarkable matter. Saudi Arabia is considered one of the most powerful economic countries in the world, where the construction sector participates effectively in its economy. Thus, the construction sector in KSA should keep pace with the rapid digital revolution and transformation and implement smart devices on sites. A Smart Construction Site (SCS) includes smart devices, artificial intelligence, the internet of things, augmented reality, building information modeling, geographical information systems, and cloud information. This paper aims to study the level of implementation of SCS in KSA, analyze the obstacles and challenges of adopting SCS, and find out critical success factors for its implementation. A survey of close-ended questions (scale and multiple-choice) has been conducted on professionals in the construction sector of Saudi Arabia. A total of twenty-nine questions was prepared for respondents. Twenty-four scale questions were established, and those questions were categorized into several themes: quality, scheduling, cost, occupational safety and health, technologies and applications, and general perception. Consequently, the 5-point Likert scale tool (very low to very high) was adopted for this survey. In addition, five close-ended multiple-choice questions have also been prepared; these questions have been derived from a previous study implemented in the United Kingdom (UK) and the Dominican Republic (DR), and they have been rearranged and organized to fit the structured survey in order to compare the Kingdom of Saudi Arabia with the United Kingdom (UK) as well as the Dominican Republic (DR). A total of one hundred respondents have participated in this survey from all regions of the Kingdom of Saudi Arabia: southern, central, western, eastern, and northern regions. The drivers, obstacles, and success factors for implementing smart devices and technologies in KSA’s construction sector have been investigated and analyzed. It has been concluded that KSA is on the right path toward adopting smart construction sites, with attractive results comparable to and even better than the UK in some factors.
Keywords: artificial intelligence, construction projects management, internet of things, smart construction sites, smart devices
Procedia PDF Downloads 156
1205 Data Access, AI Intensity, and Scale Advantages
Authors: Chuping Lo
Abstract:
This paper presents a simple model demonstrating that, ceteris paribus, countries with lower barriers to accessing global data tend to earn higher incomes than other countries. Therefore, large countries that inherently have greater data resources tend to have higher incomes than smaller countries, such that the former may be more hesitant than the latter to liberalize cross-border data flows in order to maintain this advantage. Furthermore, countries with higher artificial intelligence (AI) intensity in production technologies tend to benefit more from economies of scale in data aggregation, leading to higher income and more trade as they are better able to utilize global data.
Keywords: digital intensity, digital divide, international trade, scale of economics
Procedia PDF Downloads 68
1204 Ethical Issues in AI: Analyzing the Gap Between Theory and Practice - A Case Study of AI and Robotics Researchers
Authors: Sylvie Michel, Emmanuelle Gagnou, Joanne Hamet
Abstract:
New major ethical dilemmas are posed by artificial intelligence. This article identifies an existing gap between the ethical questions that AI/robotics researchers grapple with in their research practice and those identified by a literature review. The objective is to understand which ethical dilemmas are identified by or concern AI researchers, in order to compare them with the existing literature. This will make it possible to conduct training and awareness initiatives for AI researchers, encouraging them to consider these questions during the development of AI. Qualitative analyses were conducted based on direct observation of an AI/robotics research team focused on collaborative robotics over several months. Subsequently, semi-structured interviews were conducted with 16 members of the team. The entire process took place during the first semester of 2023. The observations were analyzed using an analytical framework, and the interviews were thematically analyzed using NVivo software. While the literature identifies three primary ethical concerns regarding AI—transparency, bias, and responsibility—the results firstly demonstrate that AI researchers are primarily concerned with the publication and valorization of their work, with their initial ethical concerns revolving around this matter. Questions arise regarding the extent to which to "market" publications and the usefulness of some publications. Research ethics are a central consideration for these teams. Secondly, another result shows that the researchers studied adopt a consequentialist ethics (though not explicitly formulated as such). They ponder the consequences of their developments in terms of safety (for humans in relation to robots/AI), worker autonomy in relation to the robot, and the role of work in society (can robots take over jobs?). Lastly, results indicate that the ethical dilemmas highlighted in the literature (responsibility, transparency, bias) do not explicitly appear in AI/robotics research. AI/robotics researchers raise specific and pragmatic ethical questions, primarily concerning publications initially and consequentialist considerations afterward. These concerns are distant from the existing literature. However, the dilemmas highlighted in the literature also deserve to be explicitly contemplated by researchers. This article proposes that the journals these researchers target should mandate ethical reflection for all presented works. Furthermore, results suggest offering awareness programs in the form of short educational sessions for researchers.
Keywords: ethics, artificial intelligence, research, robotics
Procedia PDF Downloads 81
1203 Exploring the Potential of Replika: An AI Chatbot for Mental Health Support
Authors: Nashwah Alnajjar
Abstract:
This research paper provides an overview of Replika, an AI chatbot application that uses natural language processing technology to engage in conversations with users. The app was developed to provide users with a virtual AI friend who can converse with them on various topics, including mental health. This study explores the experiences of Replika users using quantitative research methodology. A survey was conducted with 12 participants to collect data on their demographics, usage patterns, and experiences with the Replika app. The results showed that Replika has the potential to play a role in mental health support and well-being.
Keywords: Replika, chatbot, mental health, artificial intelligence, natural language processing
Procedia PDF Downloads 87
1202 A Comparative Study of Multi-SOM Algorithms for Determining the Optimal Number of Clusters
Authors: Imèn Khanchouch, Malika Charrad, Mohamed Limam
Abstract:
The interpretation of the quality of clusters and the determination of the optimal number of clusters are still crucial problems in clustering. We focus in this paper on the multi-SOM clustering method, which overcomes the problem of extracting the number of clusters from the SOM map through the use of a clustering validity index. We then tested multi-SOM using real and artificial data sets with different evaluation criteria not used previously, such as the Davies-Bouldin index, Dunn index and silhouette index. The developed multi-SOM algorithm is compared to the k-means and BIRCH methods. Results show that it is more efficient than classical clustering methods.
Keywords: clustering, SOM, multi-SOM, DB index, Dunn index, silhouette index
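To illustrate the validity-index step used to extract the cluster count, the sketch below scans candidate numbers of clusters and scores each partition with the silhouette and Davies-Bouldin indices; k-means stands in for the SOM-map clustering purely for illustration, and the synthetic data set is an assumption.

```python
# Sketch: choose the number of clusters via validity indices (k-means as a stand-in for SOM clustering).
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score, davies_bouldin_score

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)   # artificial data set

for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(f"k={k}  silhouette={silhouette_score(X, labels):.3f}  "
          f"DB={davies_bouldin_score(X, labels):.3f}")
# The k maximising silhouette (and minimising the DB index) is retained as the cluster count.
```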
Procedia PDF Downloads 599
1201 Half-Circle Fuzzy Number Threshold Determination via Swarm Intelligence Method
Authors: P. W. Tsai, J. W. Chen, C. W. Chen, C. Y. Chen
Abstract:
In recent years, many researchers have been involved in the field of fuzzy theory. However, there are still a lot of issues to be resolved, especially on topics related to controller design in fields such as robotics, artificial intelligence, and nonlinear systems. Besides fuzzy theory, swarm intelligence algorithms are also a popular field for researchers. In this paper, a concept of utilizing one of the swarm intelligence methods, called Bacterial-GA Foraging, to find the stabilized common P matrix for the fuzzy controller system is proposed. An example is given in the paper as well.
Keywords: half-circle fuzzy numbers, predictions, swarm intelligence, Lyapunov method
Procedia PDF Downloads 687
1200 Compression and Air Storage Systems for Small Size CAES Plants: Design and Off-Design Analysis
Authors: Coriolano Salvini, Ambra Giovannelli
Abstract:
The use of renewable energy sources for electric power production leads to reduced CO2 emissions and contributes to improving domestic energy security. On the other hand, the intermittency and unpredictability of their availability pose relevant problems in fulfilling the load demand over time safely and in a cost-efficient way. Significant benefits in terms of “grid system applications”, “end-use applications” and “renewable applications” can be achieved by introducing energy storage systems. Among the currently available solutions, CAES (Compressed Air Energy Storage) shows favorable features. Small-medium size plants equipped with artificial air reservoirs can constitute an interesting option for efficient and cost-effective distributed energy storage systems. The present paper is addressed to the design and off-design analysis of the compression system of small size CAES plants suited to absorb electric power in the range of hundreds of kilowatts. The system of interest is constituted by an intercooled (and, in some cases, aftercooled) multi-stage reciprocating compressor and a man-made reservoir obtained by connecting large diameter steel pipe sections. A specific methodology for the system preliminary sizing and off-design modeling has been developed. Since during the charging phase the electric power absorbed over time has to change according to the peculiar CAES requirements, and the pressure ratio increases continuously during the filling of the reservoir, the compressor has to work at variable mass flow rate. In order to ensure an appropriately wide range of operations, particular attention has been paid to the selection of the most suitable compressor capacity control device. Given the capacity regulation margin of the compressor and the actual level of charge of the reservoir, the proposed approach allows the instant-by-instant evaluation of the minimum and maximum electric power absorbable from the grid. The developed tool gives useful information to appropriately size the compression system and to manage it in the most effective way. Various cases characterized by different system requirements are analysed. Results are given and widely discussed.
Keywords: artificial air storage reservoir, compressed air energy storage (CAES), compressor design, compression system management
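As a back-of-the-envelope companion to the sizing methodology described above, the sketch below estimates the electric power of an ideal-gas, intercooled multi-stage compression with equal stage pressure ratios; the stage count, efficiency, and operating conditions are assumed values, not those used by the authors.

```python
# Sketch: ideal-gas estimate of intercooled multi-stage compression power (all parameters assumed).
def compression_power(m_dot, p_in, p_out, t_in=293.0, n_stages=4, eta=0.8,
                      k=1.4, r_gas=287.0):
    """Electric power (W) to compress m_dot (kg/s) of air from p_in to p_out (Pa),
    assuming equal stage pressure ratios and intercooling back to t_in."""
    r_stage = (p_out / p_in) ** (1.0 / n_stages)
    w_stage = (k / (k - 1.0)) * r_gas * t_in * (r_stage ** ((k - 1.0) / k) - 1.0)
    return n_stages * w_stage * m_dot / eta

# Example: 0.5 kg/s charged into a pipe reservoir currently at 60 bar (roughly 250 kW)
print(f"{compression_power(0.5, 1.0e5, 60.0e5) / 1e3:.0f} kW")
```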
Procedia PDF Downloads 230
1199 The Use of Artificial Intelligence in the Context of a Space Traffic Management System: Legal Aspects
Authors: George Kyriakopoulos, Photini Pazartzis, Anthi Koskina, Crystalie Bourcha
Abstract:
The need for securing safe access to and return from outer space, as well as ensuring the viability of outer space operations, maintains vivid the debate over the promotion of organization of space traffic through a Space Traffic Management System (STM). The proliferation of outer space activities in recent years as well as the dynamic emergence of the private sector has gradually resulted in a diverse universe of actors operating in outer space. The said developments created an increased adverse impact on outer space sustainability as the case of the growing number of space debris clearly demonstrates. The above landscape sustains considerable threats to outer space environment and its operators that need to be addressed by a combination of scientific-technological measures and regulatory interventions. In this context, recourse to recent technological advancements and, in particular, to Artificial Intelligence (AI) and machine learning systems, could achieve exponential results in promoting space traffic management with respect to collision avoidance as well as launch and re-entry procedures/phases. New technologies can support the prospects of a successful space traffic management system at an international scale by enabling, inter alia, timely, accurate and analytical processing of large data sets and rapid decision-making, more precise space debris identification and tracking and overall minimization of collision risks and reduction of operational costs. What is more, a significant part of space activities (i.e. launch and/or re-entry phase) takes place in airspace rather than in outer space, hence the overall discussion also involves the highly developed, both technically and legally, international (and national) Air Traffic Management System (ATM). Nonetheless, from a regulatory perspective, the use of AI for the purposes of space traffic management puts forward implications that merit particular attention. Key issues in this regard include the delimitation of AI-based activities as space activities, the designation of the applicable legal regime (international space or air law, national law), the assessment of the nature and extent of international legal obligations regarding space traffic coordination, as well as the appropriate liability regime applicable to AI-based technologies when operating for space traffic coordination, taking into particular consideration the dense regulatory developments at EU level. In addition, the prospects of institutionalizing international cooperation and promoting an international governance system, together with the challenges of establishment of a comprehensive international STM regime are revisited in the light of intervention of AI technologies. This paper aims at examining regulatory implications advanced by the use of AI technology in the context of space traffic management operations and its key correlating concepts (SSA, space debris mitigation) drawing in particular on international and regional considerations in the field of STM (e.g. UNCOPUOS, International Academy of Astronautics, European Space Agency, among other actors), the promising advancements of the EU approach to AI regulation and, last but not least, national approaches regarding the use of AI in the context of space traffic management, in toto. 
Acknowledgment: The present work was co-funded by the European Union and Greek national funds through the Operational Program "Human Resources Development, Education and Lifelong Learning" (NSRF 2014-2020), under the call "Supporting Researchers with an Emphasis on Young Researchers – Cycle B" (MIS: 5048145).
Keywords: artificial intelligence, space traffic management, space situational awareness, space debris
Procedia PDF Downloads 259
1198 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation
Authors: Jonathan Gong
Abstract:
Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, X-rays have not been widely used to detect and diagnose COVID-19. The underuse of X-rays is mainly due to the low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field has expressed the possibility that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database. This dataset includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 separate images from the ones used for training. The images used to train the classification model include an important feature: the pictures are cropped beforehand to eliminate distractions when training the model. The image segmentation model uses an improved U-Net architecture. This model is used to extract the lung mask from the chest X-ray image. The model is trained on 8577 images and validated on a validation split of 20%. These models are evaluated using the external dataset for validation. The models’ accuracy, precision, recall, f1-score, IOU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IOU of 0.928. Conclusion: The models proposed can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning
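A condensed sketch of the transfer-learning classifier described above is shown below; the autoencoder stage and the U-Net segmentation are omitted, and the image size and classification head are assumptions rather than the authors' exact architecture.

```python
# Sketch: DenseNet201 feature extractor with a small classification head for 3 classes (sizes assumed).
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet201

base = DenseNet201(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                      # use the pre-trained network as a frozen feature extractor

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(3, activation="softmax"),  # COVID-19 / pneumonia / normal
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)   # datasets of cropped chest X-rays
```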
Procedia PDF Downloads 131
1197 Electromechanical Behaviour of Chitosan Based Electroactive Polymer
Authors: M. Sarikanat, E. Akar, I. Şen, Y. Seki, O. C. Yılmaz, B. O. Gürses, L. Cetin, O. Özdemir, K. Sever
Abstract:
Chitosan is a natural, nontoxic, polyelectrolyte, cheap polymer. In this study, a chitosan based electroactive polymer (CBEAP) was fabricated. The electroactive properties of this polymer were investigated at different voltages. It exhibited excellent tip displacement at low voltages (1, 3, 5, 7 V). Tip displacement increased as the applied voltage increased. The best tip displacement was measured as 28 mm at 5 V. Characterization of CBEAP was carried out by scanning electron microscopy, X-ray diffraction and tensile testing. CBEAP exhibited the desired electroactive properties at low voltages. It is suitable for use in artificial muscles and various robotic applications.
Keywords: chitosan, electroactive polymer, electroactive properties
Procedia PDF Downloads 513
1196 Towards a Computational Model of Consciousness: Global Abstraction Workspace
Authors: Halim Djerroud, Arab Ali Cherif
Abstract:
We assume that conscious functions are implemented automatically; in other words, that consciousness, as well as the non-conscious aspects of human thought, planning, and perception, are produced by biologically adaptive algorithms. We propose that the mechanisms of consciousness can be reproduced using adaptive algorithms similar to those executed by such mechanisms. In this paper, we propose a computational model of consciousness, the ”Global Abstraction Workspace”, which is an internal environment model conceived as a multi-agent system. This system is able to evolve and generate new data and processes as well as actions in the environment.
Keywords: artificial consciousness, cognitive architecture, global abstraction workspace, multi-agent system
Procedia PDF Downloads 341
1195 Using the Semantic Web Technologies to Bring Adaptability in E-Learning Systems
Authors: Fatima Faiza Ahmed, Syed Farrukh Hussain
Abstract:
The last few decades have seen a large proportion of our population turning towards e-learning technologies, starting from learning tools used in primary and elementary schools to competency-based e-learning systems specifically designed for applications like finance and marketing. The huge diversity in this crowd brings about a large number of challenges for the designers of these e-learning systems, one of which is the adaptability of such systems. This paper focuses on adaptability of the learning material in an e-learning course and how artificial intelligence and the semantic web can be used as effective tools for this purpose. The study showed that the semantic web, still a hot topic in the area of computer science, can prove to be a powerful tool in designing and implementing adaptable e-learning systems.
Keywords: adaptable e-learning, HTMLParser, information extraction, semantic web
Procedia PDF Downloads 341
1194 Microscopic Insights into Water Transport Through a Biomimetic Artificial Water Nano-Channels-Polyamide Membrane
Authors: Aziz Ghoufi, Ayman Kanaan
Abstract:
Clean water is needed everywhere, from drinking to agriculture and from energy supply to industrial manufacturing. Since conventional water sources are becoming increasingly scarce, the development of new technologies for water supply is crucial to address the world’s clean water needs in the 21st century. Desalination is in many regards the most promising approach to long-term water supply since it potentially delivers an unlimited source of fresh water. Seawater desalination using reverse osmosis (RO) membranes has become over the past decade a standard approach to produce fresh water. While this technology has proven to be efficient, it remains relatively costly in terms of energy input due to the use of high-pressure pumps, a consequence of the low water permeation through polymeric RO membranes. Recently, water channels incorporated in lipidic and polymeric membranes were demonstrated to provide a selective water translocation that makes it possible to break the permeability-selectivity trade-off. Biomimetic Artificial Water Channels (AWCs) are thus becoming highly attractive systems to achieve selective transport of water. The first developed AWCs, formed from imidazole quartets (I-quartets) embedded in lipidic membranes, exhibited higher ion selectivity than aquaporins (AQPs), although associated with lower water flow performance. Pioneering work has recently been conducted in this field with the fabrication of the first AWC@Polyamide (PA) composite membrane with outstanding desalination performance. However, the microscopic desalination mechanism in play is still unknown, and its understanding represents the shortest way towards the long-term conception and design of AWC@PA composite membranes with better performance. In this work, we gain an unprecedented fundamental understanding and rationalization of the nanostructuration of the AWC@PA membranes and the microscopic mechanism at the origin of their water transport performance from advanced molecular simulations. Using osmotic molecular dynamics simulations and a non-equilibrium method with water slab control, we demonstrate an increase in porosity near the AWC@PA interfaces, enhancing water transport without compromising the rejection rate. Indeed, the water transport pathways exhibit a single-file structure connected by hydrogen bonds. Finally, by comparing AWC@PA and PA membranes, we show that the difference in water flux aligns well with experimental results, validating the model used.
Keywords: water desalination, biomimetic membranes, molecular simulation, nanochannels
Procedia PDF Downloads 221193 Selecting the Best RBF Neural Network Using PSO Algorithm for ECG Signal Prediction
Authors: Najmeh Mohsenifar, Narjes Mohsenifar, Abbas Kargar
Abstract:
In this paper, a stable method for predicting ECG signals with RBF neural networks selected by the PSO algorithm is presented. Although the ECG signal of a healthy person is quasi-periodic, electrocardiographic data from a patient contain distortions, so there is no precise mathematical model for prediction. Here, we exploit neural networks, which are capable of complicated nonlinear mapping. Although the architecture and spread of RBF networks are usually selected through trial and error, the PSO algorithm is used here to choose the best neural network. In this way, 2 seconds of a recorded ECG signal are used to predict 20 seconds in advance. Our simulations show that the PSO algorithm can find the RBF neural network with the minimum MSE, and the accuracy of the predicted ECG signal is 97%. Keywords: electrocardiogram, RBF artificial neural network, PSO algorithm, predict, accuracy
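To make the idea concrete, the following self-contained Python sketch tunes only the spread of a small RBF network with a basic particle swarm optimizer and reports the prediction MSE on a synthetic quasi-periodic signal standing in for an ECG record. The window length, swarm settings, and signal are assumptions for illustration, not the authors' setup.

# Illustrative sketch: PSO searches for the RBF spread that minimises
# validation MSE; the fitted network then predicts held-out samples.
import numpy as np

def make_windows(signal, n_lags):
    """Turn a 1-D signal into (lag-vector, next-sample) training pairs."""
    X = np.array([signal[i:i + n_lags] for i in range(len(signal) - n_lags)])
    return X, signal[n_lags:]

def rbf_fit_predict(X_tr, y_tr, X_te, spread, n_centers=30, seed=0):
    """Fit output weights of an RBF net with fixed random centers; predict X_te."""
    rng = np.random.default_rng(seed)
    centers = X_tr[rng.choice(len(X_tr), size=n_centers, replace=False)]
    def design(X):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * spread ** 2))
    w, *_ = np.linalg.lstsq(design(X_tr), y_tr, rcond=None)
    return design(X_te) @ w

def pso_best_spread(X_tr, y_tr, X_va, y_va, n_particles=10, n_iters=30):
    """One-dimensional PSO over the RBF spread, minimising validation MSE."""
    rng = np.random.default_rng(1)
    pos = rng.uniform(0.1, 5.0, n_particles)
    vel = np.zeros(n_particles)
    def mse(s):
        pred = rbf_fit_predict(X_tr, y_tr, X_va, s)
        return float(np.mean((pred - y_va) ** 2))
    p_best, p_cost = pos.copy(), np.array([mse(s) for s in pos])
    g_best = p_best[np.argmin(p_cost)]
    for _ in range(n_iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = 0.7 * vel + 1.5 * r1 * (p_best - pos) + 1.5 * r2 * (g_best - pos)
        pos = np.clip(pos + vel, 0.05, 10.0)
        cost = np.array([mse(s) for s in pos])
        improved = cost < p_cost
        p_best[improved], p_cost[improved] = pos[improved], cost[improved]
        g_best = p_best[np.argmin(p_cost)]
    return g_best

if __name__ == "__main__":
    t = np.linspace(0, 20, 2000)
    sig = np.sin(2 * np.pi * t) + 0.3 * np.sin(2 * np.pi * 3.1 * t)  # quasi-periodic stand-in
    X, y = make_windows(sig, n_lags=20)
    n_tr, n_va = 1200, 400
    best = pso_best_spread(X[:n_tr], y[:n_tr], X[n_tr:n_tr + n_va], y[n_tr:n_tr + n_va])
    X_te, y_te = X[n_tr + n_va:], y[n_tr + n_va:]
    pred = rbf_fit_predict(X[:n_tr], y[:n_tr], X_te, best)
    print(f"best spread = {best:.3f}, test MSE = {np.mean((pred - y_te) ** 2):.5f}")

The paper's method also lets PSO choose the network architecture; the sketch fixes the number of centers purely for brevity.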
Procedia PDF Downloads 6281192 An Artificial Intelligence Framework to Forecast Air Quality
Authors: Richard Ren
Abstract:
Air pollution is a serious danger to international well-being and economies: it kills an estimated 7 million people every year and is projected to cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution's detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e., season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework applies multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to reduce the inaccuracies, weaknesses, and biases of any one individual model. Over time, the framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California, using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework's predictions and real-life observations, with an overall model accuracy of 92%. The combined model predicts more accurately than any of the individual models and reliably forecasts season-based variations in air quality levels. Top air quality predictor variables were identified through the mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework that leverages multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions as a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms
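The ensemble-averaging step described above can be sketched as follows. This hedged Python example trains a logistic regression, a random forest, and a small neural network on synthetic timing, weather, and pollutant features and averages their predicted probabilities; all feature names, thresholds, and data are invented for illustration, and the study's actual datasets and model specifications are not reproduced here.

# Hedged sketch of probability averaging across three classifiers for a
# binary "good vs. poor" air quality label, using synthetic data only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.integers(0, 4, n),      # season (0-3)
    rng.integers(0, 2, n),      # weekend flag
    rng.normal(20, 8, n),       # forecast temperature
    rng.normal(10, 4, n),       # forecast wind speed
    rng.normal(35, 15, n),      # yesterday's PM2.5 (invented units)
])
# synthetic labelling rule: poor air quality when PM2.5 is high and wind is low
y = ((X[:, 4] > 40) & (X[:, 3] < 10)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "mlp": make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
}
probs = []
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name:9s} accuracy: {accuracy_score(y_te, model.predict(X_te)):.3f}")
    probs.append(model.predict_proba(X_te)[:, 1])

# simple ensemble: average the class-1 probabilities and threshold at 0.5
ensemble_pred = (np.mean(probs, axis=0) >= 0.5).astype(int)
print(f"ensemble  accuracy: {accuracy_score(y_te, ensemble_pred):.3f}")

In the framework described in the abstract, only the three top-performing model specifications would be retained for averaging, and the ensemble would be periodically refit as new observations arrive.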
Procedia PDF Downloads 130