Search results for: artificial recharge
1521 Obstacle Avoidance Using Image-Based Visual Servoing Based on Deep Reinforcement Learning
Authors: Tong He, Long Chen, Irag Mantegh, Wen-Fang Xie
Abstract:
This paper proposes an image-based obstacle avoidance and tracking target identification strategy for an Unmanned Aerial Vehicle (UAV) in GPS-degraded or GPS-denied environments. The traditional force-based algorithm for obstacle avoidance can produce local minima areas in which the UAV cannot move away from the obstacle effectively. To eliminate this, an artificial potential approach based on a harmonic potential is proposed to guide the UAV around the obstacle by using the vision system. An image-based visual servoing (IBVS) scheme has been adopted to implement the proposed obstacle avoidance approach. In IBVS, pixel accuracy is a key factor in realizing obstacle avoidance. In this paper, a deep reinforcement learning framework is applied to reduce pixel errors through constant interaction between the agent and the environment. In addition, a combination of OpenTLD and a TensorFlow-based neural network is used to identify the type of tracking target. Numerical simulations in MATLAB and ROS Gazebo show satisfactory results in target identification and obstacle avoidance.
Keywords: image-based visual servoing, obstacle avoidance, tracking target identification, deep reinforcement learning, artificial potential approach, neural network
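For illustration only, the guidance law sketched below combines a quadratic attractive potential toward the goal with a 2D harmonic (logarithmic) repulsive potential around a single obstacle; the planar simplification, gains, and coordinates are assumptions for the sketch and not the authors' implementation.

```python
import numpy as np

def guidance_velocity(p, goal, obstacle, k_att=1.0, k_rep=0.5):
    """Velocity command from a quadratic attractive potential plus a
    2D harmonic (logarithmic) repulsive potential around one obstacle.
    Gains k_att and k_rep are illustrative assumptions."""
    # Attractive term: -grad(0.5 * k_att * ||p - goal||^2)
    v_att = -k_att * (p - goal)
    # Repulsive term: -grad(-k_rep * ln(||p - obstacle||)), pointing away from the obstacle
    d = p - obstacle
    v_rep = k_rep * d / np.dot(d, d)
    return v_att + v_rep

# Example: UAV at the origin, goal ahead, obstacle slightly off the path.
p = np.array([0.0, 0.0])
goal = np.array([10.0, 0.0])
obstacle = np.array([5.0, 0.5])
print(guidance_velocity(p, goal, obstacle))
```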
Procedia PDF Downloads 143
1520 Fuzzy Neuro Approach for Integrated Water Management System
Authors: Stuti Modi, Aditi Kambli
Abstract:
This paper addresses the need for an intelligent water management and distribution system in smart cities to ensure optimal consumption and distribution of water for drinking and sanitation purposes. Water, being a limited resource in cities, requires an effective system for collection, storage and distribution. In this paper, applications of two of the most widely used types of data-driven models, namely artificial neural networks (ANN) and fuzzy logic-based models, to modelling in the water resources management field are considered. The objective of this paper is to review the principles of various types and architectures of neural networks and fuzzy adaptive systems and their applications to integrated water resources management. The final goal of the review is to expose and formulate a progressive direction for their applicability and for further research on the application of AI-related and data-driven techniques, and to demonstrate the applicability of neural networks, fuzzy systems and other machine learning techniques to the practical issues of regional water management. Apart from this, the paper deals with water storage, using ANNs to find the optimum reservoir level and to predict peak daily demands.
Keywords: artificial neural networks, fuzzy systems, peak daily demand prediction, water management and distribution
Procedia PDF Downloads 186
1519 Detection of Alzheimer's Protein on Nano Designed Polymer Surfaces in Water and Artificial Saliva
Authors: Sevde Altuntas, Fatih Buyukserin
Abstract:
Alzheimer's disease is responsible for irreversible neural damage in parts of the brain. One of the disease markers is the Amyloid-β 1-42 protein, which accumulates in the brain in the form of plaques. The basic problem in detecting the protein is that its low amount cannot be detected properly in body liquids such as blood, saliva or urine. To solve this problem, tests like ELISA or PCR are proposed, which are expensive, require specialized personnel and can involve complex protocols. Therefore, Surface-Enhanced Raman Spectroscopy (SERS) is a good candidate for detection of the Amyloid-β 1-42 protein, because the spectroscopic technique can potentially allow even single-molecule detection from liquid and solid surfaces. Besides, the SERS signal can be improved by using a nanopatterned surface and is also specific to molecules. In this context, our study proposes to fabricate diagnostic test models that utilize Au-coated nanopatterned polycarbonate (PC) surfaces modified with Thioflavin-T to detect low concentrations of Amyloid-β 1-42 protein in water and artificial saliva medium by the enhancement of the protein SERS signal. The nanopatterned PC surface that was used to enhance the SERS signal was fabricated by using Anodic Alumina Membranes (AAM) as a template. It is possible to produce AAMs with different column structures and varying thicknesses depending on voltage and anodization time. After the fabrication process, the pore diameter of AAMs can be adjusted with a dilute acid solution treatment. In this study, two different column structures were prepared. After a surface modification to decrease their surface energy, AAMs were treated with PC solution. Following the solvent evaporation, nanopatterned PC films with tunable pillared structures were peeled off from the membrane surface. The PC film was then modified with Au and Thioflavin-T for the detection of Amyloid-β 1-42 protein. The protein detection studies were conducted first in water via this biosensor platform. The same measurements were conducted in artificial saliva to detect the presence of Amyloid-β 1-42 protein. SEM, SERS and contact angle measurements were carried out for the characterization of the different surfaces and further demonstration of the protein attachment. SERS enhancement factor calculations were also completed via the experimental results. As a result, our research group fabricated diagnostic test models that utilize Au-coated nanopatterned polycarbonate (PC) surfaces modified with Thioflavin-T to detect low concentrations of Alzheimer's Amyloid-β protein in water and artificial saliva medium. This work was supported by The Scientific and Technological Research Council of Turkey (TUBITAK) Grant No: 214Z167.
Keywords: alzheimer, anodic aluminum oxide, nanotopography, surface enhanced Raman spectroscopy
Procedia PDF Downloads 291
1518 Solar Radiation Time Series Prediction
Authors: Cameron Hamilton, Walter Potter, Gerrit Hoogenboom, Ronald McClendon, Will Hobbs
Abstract:
A model was constructed to predict the amount of solar radiation that will make contact with the surface of the earth at a given location one hour into the future. This project was supported by the Southern Company to determine at what specific times during a given day of the year solar panels could be relied upon to produce energy in sufficient quantities. Due to its ability as a universal function approximator, an artificial neural network was used to estimate the nonlinear pattern of solar radiation, using measurements of weather conditions collected at the Griffin, Georgia weather station as inputs. A number of network configurations and training strategies were utilized, though a multilayer perceptron with a variety of hidden nodes trained with the resilient propagation algorithm consistently yielded the most accurate predictions. In addition, a modeled DNI field and adjacent weather station data were used to bolster prediction accuracy. In later trials, the solar radiation field was preprocessed with a discrete wavelet transform with the aim of removing noise from the measurements. The current model provides predictions of solar radiation with a mean square error of 0.0042, though ongoing efforts are being made to further improve the model's accuracy.
Keywords: artificial neural networks, resilient propagation, solar radiation, time series forecasting
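As an illustration of the modelling approach, the sketch below trains a small multilayer perceptron with the resilient propagation (Rprop) optimizer on placeholder data; the feature count, network size, and training data are assumptions, not the study's configuration.

```python
import torch
import torch.nn as nn

# Hypothetical setup: each sample holds weather measurements for one hour
# (e.g., temperature, humidity, wind speed, current radiation) and the
# target is solar radiation one hour ahead. Dimensions are assumptions.
n_features, n_hidden = 8, 16
X = torch.randn(512, n_features)   # placeholder weather inputs
y = torch.randn(512, 1)            # placeholder radiation targets

model = nn.Sequential(
    nn.Linear(n_features, n_hidden),
    nn.Tanh(),
    nn.Linear(n_hidden, 1),
)
optimizer = torch.optim.Rprop(model.parameters())   # resilient propagation
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
print(f"final training MSE: {loss.item():.4f}")
```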
Procedia PDF Downloads 384
1517 Machine Learning Predictive Models for Hydroponic Systems: A Case Study Nutrient Film Technique and Deep Flow Technique
Authors: Kritiyaporn Kunsook
Abstract:
Machine learning algorithms (MLAs) such as artificial neural networks (ANNs), decision trees, support vector machines (SVMs), Naïve Bayes, and ensemble classifiers by voting are powerful data-driven methods that are relatively little used for classifying hydroponic system techniques, and thus have not been comparatively evaluated together thoroughly in this field. The performances of a series of MLAs (ANNs, decision tree, SVMs, Naïve Bayes, and ensemble classifier by voting) in prospectively modeling hydroponic system techniques are compared based on the accuracy of each model. The classification of hydroponic systems covers only test samples from vegetables grown with the Nutrient Film Technique (NFT) and the Deep Flow Technique (DFT). The features, which are characteristics of the vegetables, comprise harvest height and width, temperature, required light, and color. The results indicate that the classification performance of the ANNs is 98%, the decision tree is 98%, the SVMs is 97.33%, Naïve Bayes is 96.67%, and the ensemble classifier by voting is 98.96%, respectively.
Keywords: artificial neural networks, decision tree, support vector machines, naïve Bayes, ensemble classifier by voting
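A minimal sketch of a voting ensemble over the listed base classifiers, using scikit-learn and synthetic placeholder data in place of the vegetable measurements; the model settings are illustrative assumptions, not the study's configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder data standing in for the five vegetable features
# (harvest height, width, temperature, required light, color -> NFT or DFT).
X, y = make_classification(n_samples=300, n_features=5, n_classes=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("ann", MLPClassifier(max_iter=2000, random_state=0)),
        ("tree", DecisionTreeClassifier(random_state=0)),
        ("svm", SVC(random_state=0)),
        ("nb", GaussianNB()),
    ],
    voting="hard",  # majority vote across the four base classifiers
)
ensemble.fit(X_train, y_train)
print("voting-ensemble accuracy:", ensemble.score(X_test, y_test))
```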
Procedia PDF Downloads 372
1516 Design of EV Steering Unit Using AI Based on Estimate and Control Model
Authors: Seong Jun Yoon, Jasurbek Doliev, Sang Min Oh, Rodi Hartono, Kyoojae Shin
Abstract:
Electric power steering (EPS), which has recently become common in electric vehicles, is an electrically driven steering device for vehicles. Compared to hydraulic systems, EPS offers advantages such as simple system components, easy maintenance, and improved steering performance. However, because the EPS system is a nonlinear model, difficult problems arise in controller design. To address these, various machine learning and artificial intelligence approaches, notably artificial neural networks (ANN), have been applied. An ANN can effectively determine relationships between inputs and outputs in a data-driven manner. This research explores two main areas: designing an EPS identifier using an ANN-based backpropagation (BP) algorithm and enhancing the EPS system controller with an ANN-based Levenberg-Marquardt (LM) algorithm. The proposed ANN-based BP algorithm shows superior performance and accuracy compared to linear transfer function estimators, while the LM algorithm offers better input angle reference tracking and faster response times than traditional PID controllers. Overall, the proposed ANN methods demonstrate significant promise in improving EPS system performance.
Keywords: ANN backpropagation modelling, electric power steering, transfer function estimator, electrical vehicle driving system
Procedia PDF Downloads 43
1515 Fundamental Theory of the Evolution Force: Gene Engineering utilizing Synthetic Evolution Artificial Intelligence
Authors: L. K. Davis
Abstract:
The effects of the evolution force are observable in nature at all structural levels, ranging from small molecular systems to enormous biospheric systems. However, the evolution force and the work associated with the formation of biological structures have yet to be described mathematically or theoretically. In addressing this conundrum, we consider evolution from a unique perspective and in doing so introduce the "Fundamental Theory of the Evolution Force" (FTEF). We utilized synthetic evolution artificial intelligence (SYN-AI) to identify genomic building blocks and to engineer 14-3-3 ζ docking proteins by transforming gene sequences into time-based DNA codes derived from protein hierarchical structural levels. The aforementioned served as templates for random DNA hybridizations and genetic assembly. The application of hierarchical DNA codes allowed us to fast-forward evolution while dampening the effect of point mutations. Natural selection was performed at each hierarchical structural level and mutations were screened using Blosum 80 mutation frequency-based algorithms. Notably, SYN-AI engineered a set of three architecturally conserved docking proteins that retained the motion and vibrational dynamics of native Bos taurus 14-3-3 ζ.
Keywords: 14-3-3 docking genes, synthetic protein design, time-based DNA codes, writing DNA code from scratch
Procedia PDF Downloads 114
1514 Automating Self-Representation in the Caribbean: AI Autoethnography and Cultural Analysis
Authors: Steffon Campbell
Abstract:
This research explores the potential of using artificial intelligence (AI) autoethnographies to study, document, explore, and understand aspects of Caribbean culture. As a digital research methodology, AI autoethnography merges computer science and technology with ethnography, providing a fresh approach to collecting and analyzing data to generate novel insights. This research investigates how AI autoethnography can best be applied to understanding the various complexities and nuances of Caribbean culture, as well as examining how technology can be a valuable tool for enriching the study of the region. By applying AI autoethnography to Caribbean studies, the research aims to produce new and innovative ways of discovering, understanding, and appreciating the Caribbean. The study found that AI autoethnographies can offer a valuable method for exploring Caribbean culture. Specifically, AI autoethnographies can facilitate experiences of self-reflection, facilitate reconciliation with the past, and provide a platform to explore and understand the cultural, social, political, and economic concerns of Caribbean people. Findings also reveal that these autoethnographies can create a space for people to reimagine and reframe the conversation around Caribbean culture by enabling them to actively participate in the process of knowledge creation. The study also finds that AI autoethnography offers the potential for cross-cultural dialogue, allowing participants to connect with one another over cultural considerations and engage in meaningful discourse.
Keywords: artificial intelligence, autoethnography, caribbean, culture
Procedia PDF Downloads 24
1513 Artificial Neural Network for Forecasting of Daily Reservoir Inflow: Case Study of the Kotmale Reservoir in Sri Lanka
Authors: E. U. Dampage, Ovindi D. Bandara, Vinushi S. Waraketiya, Samitha S. R. De Silva, Yasiru S. Gunarathne
Abstract:
The knowledge of water inflow figures is paramount in decision making on the allocation of water for consumption for numerous purposes: irrigation, hydropower, domestic and industrial usage, and flood control. The understanding of how reservoir inflows are affected by different climatic and hydrological conditions is crucial to enable effective water management and downstream flood control. In this research, we propose a method using a Long Short-Term Memory (LSTM) Artificial Neural Network (ANN) to assist the aforesaid decision-making process. The Kotmale reservoir, which is the uppermost reservoir in the Mahaweli reservoir complex in Sri Lanka, was used as the test bed for this research. The ANN uses the runoff in the Kotmale reservoir catchment area and the effect of Sea Surface Temperatures (SST) to make a forecast for seven days ahead. Three types of ANNs are tested: Multi-Layer Perceptron (MLP), Convolutional Neural Network (CNN), and LSTM. The extensive field trials and validation endeavors found that the LSTM ANN provides superior performance in the aspects of accuracy and latency.
Keywords: convolutional neural network, CNN, inflow, long short-term memory, LSTM, multi-layer perceptron, MLP, neural network
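A minimal sketch of an LSTM network producing a seven-day-ahead forecast, written with Keras on placeholder data; the look-back window, feature set, and layer size are assumptions rather than the study's architecture.

```python
import numpy as np
import tensorflow as tf

# Assumed shapes: 30 past days of (catchment runoff, SST index) as input,
# 7 days of reservoir inflow as the forecast target.
lookback, n_features, horizon = 30, 2, 7
X = np.random.rand(200, lookback, n_features).astype("float32")  # placeholder
y = np.random.rand(200, horizon).astype("float32")               # placeholder

model = tf.keras.Sequential([
    tf.keras.Input(shape=(lookback, n_features)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(horizon),   # one output per forecast day
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
print(model.predict(X[:1]))           # 7-day-ahead inflow forecast
```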
Procedia PDF Downloads 151
1512 Tide Contribution in the Flood Event of Jeddah City: Mathematical Modelling and Different Field Measurements of the Groundwater Rise
Authors: Aïssa Rezzoug
Abstract:
This paper aims to bring new elements that demonstrate that the tide causes the groundwater to rise in the shoreline band on which the urban areas occur, especially in the western coastal cities of the Kingdom of Saudi Arabia such as Jeddah. The reason for the last Jeddah inundation events was the groundwater rise in the city coupled, at the same time, with a strong precipitation event. This paper illustrates the tide's participation in significantly increasing the groundwater level. It shows that the reason for internal groundwater recharge within the urban area is not only the excess water supply coming from surrounding areas due to human activity, combined with the lack of a sufficient and efficient sewage system, but also the tide effect. The research study follows a quantitative method to assess groundwater level rise risks through many in-situ measurements and mathematical modelling. The proposed approach highlights that the groundwater level, in the urban areas of the city on the shoreline band, reaches the high tide level without considering any input from precipitation. Despite the small tide in the Red Sea compared to other oceanic coasts, the groundwater level is considerably enhanced by the tide from the seaside and by the freshwater table from the landside of the city. In these conditions, the groundwater level becomes high in the city and prevents the soil from evacuating quickly enough the surface flow caused by a storm event, as was observed in the last historical flood catastrophe of Jeddah in 2009.
Keywords: flood, groundwater rise, Jeddah, tide
Procedia PDF Downloads 114
1511 Indium-Gallium-Zinc Oxide Photosynaptic Device with Alkylated Graphene Oxide for Optoelectronic Spike Processing
Authors: Seyong Oh, Jin-Hong Park
Abstract:
Recently, neuromorphic computing based on brain-inspired artificial neural networks (ANNs) has attracted a huge amount of research interest due to its technological ability to facilitate massively parallel, low-energy-consuming, and event-driven computing. In particular, research on artificial synapses that imitate the biological synapses responsible for human information processing and memory is in the spotlight. Here, we demonstrate a photosynaptic device wherein the synaptic weight is governed by a mixed spike consisting of a voltage spike and a light spike. Compared to the device operated only by the voltage spike, ∆G in the proposed photosynaptic device significantly increased from -2.32 nS to 5.95 nS with no degradation of nonlinearity (NL) (potentiation/depression values changed from 4.24/8 to 5/8). Furthermore, the Modified National Institute of Standards and Technology (MNIST) digit pattern recognition rates improved from 36% and 49% to 50% and 62% in ANNs consisting of the synaptic devices with 20 and 100 weight states, respectively. We expect that the photosynaptic device technology operated by optoelectronic spikes will play an important role in implementing neuromorphic computing systems in the future.
Keywords: optoelectronic synapse, IGZO (Indium-Gallium-Zinc Oxide) photosynaptic device, optoelectronic spiking process, neuromorphic computing
Procedia PDF Downloads 173
1510 Application of Distributed Value Property Zones Approach on the Hydraulic Conductivity for Real Site Located in Al-Najaf Region, Iraq to Investigate the Groundwater Resources
Authors: Hayder H. Kareem, Ayad K. Hussein, Aseel A. Alkatib
Abstract:
Groundwater accumulated in geological formations constitutes a vital worldwide water resource component which can be used to supply agriculture, industry, and domestic uses. The subsurface environment is affected by human activities; consequently, planning and sustainable management of aquifers require serious attention, especially as the world is exposed to the problem of global warming. Establishing accurate and efficient groundwater models will provide confident results for the behavior of the aquifer's system. The new approach, 'Distributed Value Property Zones,' available in Visual MODFLOW, is used to reconstruct the subsurface zones of the Al-Najaf region aquifer, and its effect is then compared with those of the manual and automated (PEST) approaches. Results show that the model has become more accurate with the use of the new approach, as the calibration and results analyses revealed. The assessment of the Al-Najaf region groundwater aquifer has revealed a degree of insufficiency in meeting the required pumping demand, which is reflected in dry areas in both of the aquifer's layers. In addition, with pumping, the Euphrates River loses 7458 m³/day of water to the aquifer, while without pumping, it gains 28837 m³/day from the rainfall's recharge. The distributed value property zones approach achieves a precise groundwater model to assess the state of the Al-Najaf region aquifer.
Keywords: Al-Najaf region, distributed value property zones approach, hydraulic conductivity, groundwater modelling using visual MODFLOW
Procedia PDF Downloads 171
1509 Emotional Artificial Intelligence and the Right to Privacy
Authors: Emine Akar
Abstract:
The majority of privacy-related regulation has traditionally focused on concepts that are perceived to be well understood or easily describable, such as certain categories of data and personal information or images. In the past century, such regulation appeared reasonably suitable for its purposes. However, technologies such as AI, combined with ever-increasing capabilities to collect, process, and store "big data", not only require calibration of these traditional understandings but may require re-thinking of entire categories of privacy law. In the presentation, it will be explained, against the background of various emerging technologies under the umbrella term "emotional artificial intelligence", why modern privacy law will need to embrace human emotions as potentially private subject matter. This argument can be made on a jurisprudential level, given that human emotions can plausibly be accommodated within the various concepts that are traditionally regarded as the underlying foundation of privacy protection, such as, for example, dignity, autonomy, and liberal values. However, the practical reasons for regarding human emotions as potentially private subject matter are perhaps more important (and very likely more convincing from the perspective of regulators). In that respect, it should be regarded as alarming that, according to most projections, the usefulness of emotional data to governments and, particularly, private companies will not only lead to radically increased processing and analysing of such data but, concerningly, to exponential growth in the collection of such data. In light of this, it is also necessary to discuss options for how regulators could address this emerging threat.
Keywords: AI, privacy law, data protection, big data
Procedia PDF Downloads 88
1508 Using Optical Character Recognition to Manage the Unstructured Disaster Data into Smart Disaster Management System
Authors: Dong Seop Lee, Byung Sik Kim
Abstract:
In the 4th Industrial Revolution, various intelligent technologies have been developed in many fields. These artificial intelligence technologies are applied in various services, including disaster management. Disaster information management does not just support disaster work; it is also the foundation of smart disaster management. Furthermore, it draws on historical disaster information using artificial intelligence technology. Disaster information is one of the important elements of the entire disaster cycle. Disaster information management refers to the act of managing and processing electronic data about the disaster cycle from its occurrence through progress, response, and planning. However, information about status control, response, recovery from natural and social disaster events, etc., is mainly managed in the structured and unstructured form of reports, which exist as handouts or hard copies. Such unstructured data is often lost or destroyed due to inefficient management, so it is necessary to manage unstructured data as disaster information. In this paper, an Optical Character Recognition (OCR) approach is used to convert handouts, hard copies, images, or reports, printed or generated by scanners, into electronic documents. Following that, the converted disaster data is organized into the disaster code system as disaster information, and the data are stored in the disaster database system. Gathering and creating disaster information based on Optical Character Recognition for unstructured data is an important element of smart disaster management. In this paper, Korean character recognition was improved to over 90% by using an upgraded OCR. The recognition rate depends on the font, size, and special symbols of the characters, and we improved it through a machine learning algorithm. The converted structured data is managed in a standardized disaster information form connected with the disaster code system. The disaster code system ensures that the structured information is stored and retrieved across the entire disaster cycle, including historical disaster progress, damages, response, and recovery. The expected effect of this research is that it can be applied to smart disaster management and decision making by combining artificial intelligence technologies and historical big data.
Keywords: disaster information management, unstructured data, optical character recognition, machine learning
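The paper's upgraded OCR engine is not specified; as a hedged illustration of the general pattern, the sketch below uses the open-source Tesseract engine (via pytesseract) with Korean language data to convert a scanned report into electronic text. The file name is hypothetical.

```python
import pytesseract
from PIL import Image

def report_to_text(image_path: str) -> str:
    """Convert a scanned disaster report image into electronic text.
    Requires the Tesseract binary and its Korean ('kor') language data."""
    image = Image.open(image_path)
    return pytesseract.image_to_string(image, lang="kor")

# Hypothetical usage: the extracted text would then be mapped onto the
# disaster code system and stored in the disaster database.
text = report_to_text("scanned_disaster_report.png")
print(text[:200])
```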
Procedia PDF Downloads 129
1507 Analysis of Cardiovascular Diseases Using Artificial Neural Network
Authors: Jyotismita Talukdar
Abstract:
In this paper, a study has been made on the possibility and accuracy of early prediction of several heart diseases using an Artificial Neural Network (ANN). The study has been made in both a noise-free environment and a noisy environment. The data collected for this analysis are from five hospitals. Around 1500 heart patients' data have been collected and studied. The data were analysed and the results compared with the doctors' diagnoses. It is found that, in the noise-free environment, the accuracy varies from 74% to 92%, and in the noisy environment (2 dB), the accuracy varies from 62% to 82%. In the present study, the four basic attributes considered are Blood Pressure (BP), Fasting Blood Sugar (FBS), Thalach (THAL) and Cholesterol (CHOL). It has been found that the highest accuracy (93%) has been achieved in the case of PPI (Post-Permanent-Pacemaker Implementation), around 79% in the case of CAD (Coronary Artery Disease), 87% in DCM (Dilated Cardiomyopathy), 89% in the case of RHD & MS (Rheumatic Heart Disease with Mitral Stenosis), 75% in the case of RBBB + LAFB (Right Bundle Branch Block + Left Anterior Fascicular Block), 72% for CHB (Complete Heart Block), etc. The lowest accuracy has been obtained in the case of ICMP (Ischemic Cardiomyopathy), about 38%, and AF (Atrial Fibrillation), about 60 to 62%.
Keywords: coronary heart disease, chronic stable angina, sick sinus syndrome, cardiovascular disease, cholesterol, Thalach
Procedia PDF Downloads 174
1506 Robotic Exoskeleton Response During Infant Physiological Knee Kinematics
Authors: Breanna Macumber, Victor A. Huayamave, Emir A. Vela, Wangdo Kim, Tamara T. Chamber, Esteban Centeno
Abstract:
Spina bifida is a type of neural tube defect that affects the nervous system and can lead to problems such as total leg paralysis. Treatment requires physical therapy and rehabilitation. Robotic exoskeletons have been used for rehabilitation to train muscle movement and assist in injury recovery; however, current models focus on the adult population and not on the infant population. The proposed framework aims to couple a musculoskeletal infant model with a robotic exoskeleton using vacuum-powered artificial muscles to provide rehabilitation to infants affected by spina bifida. The study that drove the input values for the robotic exoskeleton used motion capture technology to collect data from the spontaneous kicking movement of a 2.4-month-old infant lying supine. OpenSim was used to develop the musculoskeletal model, and inverse kinematics was used to estimate hip joint angles. A total of 4 kicks (A, B, C, D) were selected, and the selection was based on range, transient response, and stable response. Kicks had at least 5° of range of motion with a smooth transient response and a stable period. The robotic exoskeleton used a Vacuum-Powered Artificial Muscle (VPAM) with a structure comprised of cells that were clipped in a collapsed state and unclipped when desired to simulate the infant's age. The artificial muscle works with vacuum pressure: when air is removed, the muscle contracts, and when air is added, the muscle relaxes. Bench testing was performed using a 6-month-old infant mannequin. The previously developed exoskeleton worked well with controlled ranges of motion and frequencies, which are typical of rehabilitation protocols for infants suffering from spina bifida. However, the random kicking motion in this study contained high-frequency kicks, and the exoskeleton was not able to accurately replicate all the investigated kicks. Kick 'A' had a greater error when compared to the other kicks. This study has the potential to advance the infant rehabilitation field.
Keywords: musculoskeletal modeling, soft robotics, rehabilitation, pediatrics
Procedia PDF Downloads 118
1505 Performance Evaluation of Distributed Deep Learning Frameworks in Cloud Environment
Authors: Shuen-Tai Wang, Fang-An Kuo, Chau-Yi Chou, Yu-Bin Fang
Abstract:
2016 became the year of the Artificial Intelligence explosion. AI technologies are becoming so mature that most well-known tech giants are making large investments to increase their capabilities in AI. Machine learning is the science of getting computers to act without being explicitly programmed, and deep learning is a subset of machine learning that uses deep neural networks to train a machine to learn features directly from data. Deep learning realizes many machine learning applications which expand the field of AI. At the present time, deep learning frameworks have been widely deployed on servers for deep learning applications in both academia and industry. In training deep neural networks, there are many standard processes and algorithms, but the performance of different frameworks might differ. In this paper, we evaluate the running performance of two state-of-the-art distributed deep learning frameworks that run training calculations in parallel over multiple GPUs and multiple nodes in our cloud environment. We evaluate the training performance of the frameworks with the ResNet-50 convolutional neural network, and we analyze the factors that account for the performance differences between the two distributed frameworks. Through the experimental analysis, we identify the overheads which could be further optimized. The main contribution is that the evaluation results provide further optimization directions in both performance tuning and algorithmic design.
Keywords: artificial intelligence, machine learning, deep learning, convolutional neural networks
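The two frameworks under evaluation are not named in the abstract; purely as an illustration of multi-GPU, multi-node data-parallel training of ResNet-50, the sketch below uses PyTorch DistributedDataParallel with a placeholder batch. It is an assumed example, not the benchmark code used in the study.

```python
import os
import torch
import torch.distributed as dist
import torchvision

# Run with: torchrun --nproc_per_node=<gpus per node> ddp_resnet50.py
dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = torchvision.models.resnet50().cuda(local_rank)
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
loss_fn = torch.nn.CrossEntropyLoss()

# Placeholder batch standing in for ImageNet-style training data.
images = torch.randn(32, 3, 224, 224).cuda(local_rank)
labels = torch.randint(0, 1000, (32,)).cuda(local_rank)

for step in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()        # gradients are all-reduced across processes here
    optimizer.step()

dist.destroy_process_group()
```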
Procedia PDF Downloads 211
1504 Standard Essential Patents for Artificial Intelligence Hardware and the Implications For Intellectual Property Rights
Authors: Wendy de Gomez
Abstract:
Standardization is a critical element in the ability of a society to reduce uncertainty, subjectivity, misrepresentation, and interpretation while simultaneously contributing to innovation. Technological standardization is critical to codify specific operationalization through legal instruments that provide rules of development, expectation, and use. In the current emerging technology landscape, Artificial Intelligence (AI) hardware as a general-use technology has seen incredible growth, as evidenced by AI technology patents between 2012 and 2018 in the United States Patent and Trademark Office (USPTO) AI dataset. However, as outlined in the 2023 United States Government National Standards Strategy for Critical and Emerging Technology, the codification of emerging technologies such as AI through standardization has not kept pace with their actual technological proliferation. This gap has the potential to cause significantly divergent possibilities for the downstream outcomes of AI in both the short and long term. This original empirical research provides an overview of the standardization efforts around AI in different geographies and provides a background to standardization law. It quantifies the longitudinal trend of Artificial Intelligence hardware patents through the USPTO AI dataset. It seeks evidence of existing Standard Essential Patents (SEPs) among these AI hardware patents through a text analysis of the statement of patent history and the field of the invention in Patent Vector, and examines their determination as Standard Essential Patents and their inclusion in existing AI technology standards across the four main AI standards bodies: the European Telecommunications Standards Institute (ETSI); the International Telecommunication Union (ITU) Telecommunication Standardization Sector (ITU-T); the Institute of Electrical and Electronics Engineers (IEEE); and the International Organization for Standardization (ISO). Once the analysis is complete, the paper will discuss both the theoretical and operational implications of F/RAND licensing agreements for the owners of these Standard Essential Patents in the United States court and administrative system. It will conclude with an evaluation of how Standard Setting Organizations (SSOs) can work with SEP owners more effectively through various forms of intellectual property mechanisms such as patent pools.
Keywords: patents, artificial intelligence, standards, F/RAND agreements
Procedia PDF Downloads 87
1503 New Advanced Medical Software Technology Challenges and Evolution of the Regulatory Framework in Expert Software, Artificial Intelligence, and Machine Learning
Authors: Umamaheswari Shanmugam, Silvia Ronchi
Abstract:
Software, artificial intelligence, and machine learning can improve healthcare through innovative and advanced technologies that can use the large amount and variety of data generated during healthcare services every day; one of the significant advantages of these new technologies is the ability to gain experience and knowledge from real-world use and to improve their performance continuously. Healthcare systems and institutions can benefit significantly because the use of advanced technologies improves the efficiency and efficacy of healthcare. Software defined as a medical device is stand-alone software that is intended to be used for patients for one or more of the following specific medical purposes: diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of a disease or other health conditions; replacing or modifying any part of a physiological or pathological process; or managing information received from in vitro specimens derived from the human body, without achieving its principal intended action by pharmacological, immunological or metabolic means. Software qualified as a medical device must comply with the general safety and performance requirements applicable to medical devices. These requirements are necessary to ensure high performance and quality and to protect patients' safety. The evolution and continuous improvement of software used in healthcare must take into account the increase in regulatory requirements, which are becoming more complex in each market. The gap between these advanced technologies and the new regulations is the biggest challenge for medical device manufacturers. Regulatory requirements can be considered a market barrier, as they can delay or obstruct the device's approval, but they are necessary to ensure performance, quality, and safety. At the same time, they can be a business opportunity if the manufacturer can define the appropriate regulatory strategy in advance. The abstract will provide an overview of the current regulatory framework, the evolution of the international requirements, and the standards applicable to medical device software in the potential markets all over the world.
Keywords: artificial intelligence, machine learning, SaMD, regulatory, clinical evaluation, classification, international requirements, MDR, 510k, PMA, IMDRF, cyber security, health care systems
Procedia PDF Downloads 88
1502 Adolescent-Parent Relationship as the Most Important Factor in Preventing Mood Disorders in Adolescents: An Application of Artificial Intelligence to Social Studies
Authors: Elżbieta Turska
Abstract:
Introduction: One of the most difficult times in a person's life is adolescence. The experiences in this period may to a large extent shape the future life of the person. This is the reason why many young people experience sadness, dejection, hopelessness, a sense of worthlessness, as well as losing interest in various activities and social relationships, all of which are often classified as mood disorders. As many as 15-40% of adolescents experience depressed moods, and for most of them these resolve and are not carried into adulthood. However, 5-6% of those affected by mood disorders develop the depressive syndrome and as many as 1-3% develop full-blown clinical depression. Materials: A large questionnaire was given to 2508 students, aged 13–16 years old, and one of its parts was the Burns checklist, i.e. the standard test for identifying depressed mood. The questionnaire asked about many aspects of the student's life; it included a total of 53 questions, most of which had subquestions. It is important to note that the data suffered from many problems, the most important of which were missing data and collinearity. Aim: In order to identify the correlates of mood disorders, we built predictive models which were then trained and validated. Our aim was not to predict which students suffer from mood disorders but rather to explore the factors influencing mood disorders. Methods: The problems with the data described above practically excluded the use of all classical statistical methods. For this reason, we attempted to use the following Artificial Intelligence (AI) methods: classification trees with surrogate variables, random forests and xgboost. All analyses were carried out with the use of the mlr package for the R programming language. Results: The predictive model built by the classification tree algorithm outperformed the other algorithms by a large margin. As a result, we were able to rank the variables (questions and subquestions from the questionnaire) from the most to the least influential as far as protection against mood disorders is concerned. Thirteen out of the twenty most important variables reflect relationships with parents. This seems to be a really significant result both from the cognitive point of view and from the practical point of view, i.e. as far as interventions to correct mood disorders are concerned.
Keywords: mood disorders, adolescents, family, artificial intelligence
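The analyses were carried out in R with the mlr package; as a rough Python analogue of the variable-ranking step, the sketch below fits a single classification tree on synthetic questionnaire-like data with missing answers (simple imputation stands in for surrogate splits, which scikit-learn does not provide) and ranks item importances. All names and data here are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier

# Hypothetical stand-in for the questionnaire: 53 items scored 1-5, ~5%
# missing answers, and a binary depressed-mood label from the Burns checklist.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.integers(1, 6, size=(500, 53)).astype(float),
                 columns=[f"q{i+1}" for i in range(53)])
X = X.mask(rng.random(X.shape) < 0.05)          # introduce missing answers
y = rng.integers(0, 2, size=500)                # placeholder Burns-checklist label

model = Pipeline([
    ("impute", SimpleImputer(strategy="most_frequent")),  # stand-in for surrogate splits
    ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
])
model.fit(X, y)

importance = pd.Series(model.named_steps["tree"].feature_importances_,
                       index=X.columns).sort_values(ascending=False)
print(importance.head(20))   # ranking of the most influential items
```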
Procedia PDF Downloads 101
1501 A Comparative Study on ANN, ANFIS and SVM Methods for Computing Resonant Frequency of A-Shaped Compact Microstrip Antennas
Authors: Ahmet Kayabasi, Ali Akdagli
Abstract:
In this study, three robust prediction methods, namely artificial neural network (ANN), adaptive neuro-fuzzy inference system (ANFIS) and support vector machine (SVM), were used for computing the resonant frequency of A-shaped compact microstrip antennas (ACMAs) operating in the UHF band. Firstly, the resonant frequencies of 144 ACMAs with various dimensions and electrical parameters were simulated with the help of IE3D™, based on the method of moments (MoM). The ANN, ANFIS and SVM models for computing the resonant frequency were then built by considering the simulation data. 124 simulated ACMAs were utilized for training and the remaining 20 ACMAs were used for testing the ANN, ANFIS and SVM models. The performance of the ANN, ANFIS and SVM models is compared for the training and test processes. The average percentage errors (APE) regarding the computed resonant frequencies for training of the ANN, ANFIS and SVM were obtained as 0.457%, 0.399% and 0.600%, respectively. The constructed models were then tested, and APE values of 0.601% for ANN, 0.744% for ANFIS and 0.623% for SVM were achieved. The results obtained here show that the ANN, ANFIS and SVM methods can be successfully applied to compute the resonant frequency of ACMAs, since they are useful and versatile methods that yield accurate results.
Keywords: a-shaped compact microstrip antenna, artificial neural network (ANN), adaptive neuro-fuzzy inference system (ANFIS), support vector machine (SVM)
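The average percentage error is presumably computed as below, where f_sim is the IE3D/MoM resonant frequency and f_cal the value computed by the ANN, ANFIS or SVM model over the N antennas in the training or test set (an assumed but standard definition):

```latex
% Assumed definition of the average percentage error (APE) over N antennas.
\[
  \mathrm{APE}\,(\%) = \frac{100}{N} \sum_{i=1}^{N}
  \left| \frac{f_{\mathrm{sim},i} - f_{\mathrm{cal},i}}{f_{\mathrm{sim},i}} \right|
\]
```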
Procedia PDF Downloads 441
1500 Exoskeleton Response During Infant Physiological Knee Kinematics And Dynamics
Authors: Breanna Macumber, Victor A. Huayamave, Emir A. Vela, Wangdo Kim, Tamara T. Chamber, Esteban Centeno
Abstract:
Spina bifida is a type of neural tube defect that affects the nervous system and can lead to problems such as total leg paralysis. Treatment requires physical therapy and rehabilitation. Robotic exoskeletons have been used for rehabilitation to train muscle movement and assist in injury recovery; however, current models focus on the adult population and not on the infant population. The proposed framework aims to couple a musculoskeletal infant model with a robotic exoskeleton using vacuum-powered artificial muscles to provide rehabilitation to infants affected by spina bifida. The study that drove the input values for the robotic exoskeleton used motion capture technology to collect data from the spontaneous kicking movement of a 2.4-month-old infant lying supine. OpenSim was used to develop the musculoskeletal model, and inverse kinematics was used to estimate hip joint angles. A total of 4 kicks (A, B, C, D) were selected, and the selection was based on range, transient response, and stable response. Kicks had at least 5° of range of motion with a smooth transient response and a stable period. The robotic exoskeleton used a Vacuum-Powered Artificial Muscle (VPAM) with a structure comprised of cells that were clipped in a collapsed state and unclipped when desired to simulate the infant's age. The artificial muscle works with vacuum pressure: when air is removed, the muscle contracts, and when air is added, the muscle relaxes. Bench testing was performed using a 6-month-old infant mannequin. The previously developed exoskeleton worked well with controlled ranges of motion and frequencies, which are typical of rehabilitation protocols for infants suffering from spina bifida. However, the random kicking motion in this study contained high-frequency kicks, and the exoskeleton was not able to accurately replicate all the investigated kicks. Kick 'A' had a greater error when compared to the other kicks. This study has the potential to advance the infant rehabilitation field.
Keywords: musculoskeletal modeling, soft robotics, rehabilitation, pediatrics
Procedia PDF Downloads 83
1499 Development of an Artificial Neural Network to Measure Science Literacy Leveraging Neuroscience
Authors: Amanda Kavner, Richard Lamb
Abstract:
Faster growth in science and technology in other nations may make staying globally competitive more difficult without shifting the focus of how science is taught in US classes. An integral part of learning science involves visual and spatial thinking, since complex and real-world phenomena are often expressed in visual, symbolic, and concrete modes. The primary barrier to spatial thinking and visual literacy in Science, Technology, Engineering, and Math (STEM) fields is representational competence, which includes the ability to generate, transform, analyze and explain representations, as opposed to generic spatial ability. Although the relationship between foundational visual literacy and domain-specific science literacy is known, science literacy as a function of science learning is still not well understood. Moreover, a more reliable measure is needed to design resources which enhance the fundamental visuospatial cognitive processes behind scientific literacy. To support the improvement of students' representational competence, the visualization skills necessary to process these science representations first needed to be identified, which necessitates the development of an instrument to quantitatively measure visual literacy. With such a measure, schools, teachers, and curriculum designers can target the individual skills necessary to improve students' visual literacy, thereby increasing science achievement. This project details the development of an artificial neural network capable of measuring science literacy using functional near-infrared spectroscopy (fNIR) data. This data was previously collected by Project LENS, standing for Leveraging Expertise in Neurotechnologies, a Science of Learning Collaborative Network (SL-CN) of scholars of STEM Education from three US universities (NSF award 1540888), utilizing mental rotation tasks to assess student visual literacy. Hemodynamic response data from fNIRsoft was exported as an Excel file, with 80 each of 2D Wedge and Dash models (dash) and 3D Stick and Ball models (BL). Complexity data were in an Excel workbook separated by participant (ID), containing information for both types of tasks. After changing strings to numbers for analysis, spreadsheets with measurement data and complexity data were uploaded to RapidMiner's TurboPrep and merged. Using RapidMiner Studio, a Gradient Boosted Trees artificial neural network (ANN) consisting of 140 trees with a maximum depth of 7 branches was developed, and 99.7% of the ANN predictions are accurate. The ANN determined that the biggest predictors of a successful mental rotation are the individual problem number, the response time and fNIR optode #16, located along the right prefrontal cortex, which is important in processing visuospatial working memory and episodic memory retrieval, both vital for science literacy. With an unbiased measurement of science literacy provided by psychophysiological measurements analyzed with an ANN, educators and curriculum designers will be able to create targeted classroom resources to help improve student visuospatial literacy, therefore improving science literacy.
Keywords: artificial intelligence, artificial neural network, machine learning, science literacy, neuroscience
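As a rough stand-in for the RapidMiner Gradient Boosted Trees learner described above, the sketch below fits scikit-learn's GradientBoostingClassifier with the reported 140 trees and maximum depth of 7 on placeholder data; it is an illustrative assumption, not the project's pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Placeholder features standing in for the merged fNIR/complexity spreadsheet
# (problem number, response time, optode channels, etc.).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

gbt = GradientBoostingClassifier(n_estimators=140, max_depth=7, random_state=0)
gbt.fit(X_train, y_train)

print("accuracy:", gbt.score(X_test, y_test))
print("top predictor indices:", gbt.feature_importances_.argsort()[::-1][:3])
```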
Procedia PDF Downloads 119
1498 Design of a Backlight Hyperspectral Imaging System for Enhancing Image Quality in Artificial Vision Food Packaging Online Inspections
Authors: Ferran Paulí Pla, Pere Palacín Farré, Albert Fornells Herrera, Pol Toldrà Fernández
Abstract:
Poor image acquisition is limiting the promising growth of industrial vision in food control. In recent years, the food industry has witnessed a significant increase in the implementation of automation in quality control through artificial vision, a trend that continues to grow. During the packaging process, some defects may appear, compromising the proper sealing of the products and diminishing their shelf life, sanitary conditions and overall properties. While failure to detect a defective product leads to major losses, food producers also aim to minimize over-rejection to avoid unnecessary waste. Thus, accuracy in the evaluation of the products is crucial, and, given the large production volumes, even small improvements have a significant impact. Recently, efforts have been focused on maximizing the performance of classification neural networks; nevertheless, their performance is limited by the quality of the input data. Monochrome linear backlight systems are most commonly used for online inspections of food packaging thermo-sealing zones. These simple acquisition systems fit the high cadence of the production lines imposed by the market demand. Nevertheless, they provide a limited amount of data, which negatively impacts classification algorithm training. A desired situation would be one where data quality is maximized in terms of obtaining the key information to detect defects while maintaining a fast working pace. This work presents a backlight hyperspectral imaging system designed and implemented replicating an industrial environment to better understand the relationship between visual data quality and spectral illumination range for a variety of packed food products. Furthermore, results led to the identification of advantageous spectral bands that significantly enhance image quality, providing clearer detection of defects.
Keywords: artificial vision, food packaging, hyperspectral imaging, image acquisition, quality control
Procedia PDF Downloads 22
1497 Artificial Intelligence-Based Chest X-Ray Test of COVID-19 Patients
Authors: Dhurgham Al-Karawi, Nisreen Polus, Shakir Al-Zaidi, Sabah Jassim
Abstract:
The management of COVID-19 patients based on chest imaging is emerging as an essential tool for evaluating the spread of the pandemic which has gripped the global community. It has already been used to monitor the situation of COVID-19 patients who have issues with their respiratory status. There has been an increase in the use of chest imaging for the medical triage of patients showing moderate to severe clinical COVID-19 features, due to the fast spread of the pandemic to all continents and communities. This article demonstrates the development of machine learning techniques for testing COVID-19 patients using Chest X-Ray (CXR) images in near real-time, to distinguish COVID-19 infection with a significantly high level of accuracy. The testing performance has covered a combination of different datasets of CXR images of positive COVID-19 patients, patients with viral and bacterial infections, and people with a clear chest. The proposed AI scheme successfully distinguishes CXR scans of COVID-19 infected patients from CXR scans of viral and bacterial pneumonia as well as normal cases, with an average accuracy of 94.43%, sensitivity of 95%, and specificity of 93.86%. Predicted decisions are supported by visual evidence to help clinicians speed up the initial assessment process of new suspected cases, especially in a resource-constrained environment.
Keywords: COVID-19, chest x-ray scan, artificial intelligence, texture analysis, local binary pattern transform, Gabor filter
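A minimal sketch of the texture-analysis route suggested by the keywords (local binary pattern features fed to a conventional classifier), using synthetic arrays in place of real CXR scans; it is an assumed illustration, not the article's AI scheme.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(image, points=8, radius=1):
    """Uniform LBP histogram of a grayscale chest X-ray image."""
    lbp = local_binary_pattern(image, points, radius, method="uniform")
    n_bins = points + 2                      # uniform LBP codes: 0 .. points+1
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Placeholder 'scans': random uint8 arrays standing in for grayscale CXR images.
rng = np.random.default_rng(0)
images = (rng.random((60, 128, 128)) * 255).astype(np.uint8)
labels = rng.integers(0, 3, size=60)   # e.g. 0=normal, 1=pneumonia, 2=COVID-19

features = np.array([lbp_histogram(img) for img in images])
clf = SVC().fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```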
Procedia PDF Downloads 145
1496 Employee Well-being in the Age of AI: Perceptions, Concerns, Behaviors, and Outcomes
Authors: Soheila Sadeghi
Abstract:
The growing integration of Artificial Intelligence (AI) into Human Resources (HR) processes has transformed the way organizations manage recruitment, performance evaluation, and employee engagement. While AI offers numerous advantages, such as improved efficiency, reduced bias, and hyper-personalization, it raises significant concerns about employee well-being, job security, fairness, and transparency. The study examines how AI shapes employee perceptions, job satisfaction, mental health, and retention. Key findings reveal that: (a) while AI can enhance efficiency and reduce bias, it also raises concerns about job security, fairness, and privacy; (b) transparency in AI systems emerges as a critical factor in fostering trust and positive employee attitudes; and (c) AI systems can both support and undermine employee well-being, depending on how they are implemented and perceived. The research introduces an AI-employee well-being Interaction Framework, illustrating how AI influences employee perceptions, behaviors, and outcomes. Organizational strategies, such as (a) clear communication, (b) upskilling programs, and (c) employee involvement in AI implementation, are identified as crucial for mitigating negative impacts and enhancing positive outcomes. The study concludes that the successful integration of AI in HR requires a balanced approach that (a) prioritizes employee well-being, (b) facilitates human-AI collaboration, and (c) ensures ethical and transparent AI practices alongside technological advancement.
Keywords: artificial intelligence, human resources, employee well-being, job satisfaction, organizational support, transparency in AI
Procedia PDF Downloads 29
1495 Roasting Degree of Cocoa Beans by Artificial Neural Network (ANN) Based Electronic Nose System and Gas Chromatography (GC)
Authors: Juzhong Tan, William Kerr
Abstract:
Roasting is one critical procedure in chocolate processing, in which special flavors are developed, moisture content is decreased, and better processing properties are developed. Therefore, determination of the roasting degree of cocoa beans is important for chocolate manufacturers to ensure the quality of chocolate products, and it also decides the commercial value of cocoa beans collected from cocoa farmers. Assessment of the roasting degree of cocoa beans currently relies on human specialists, who are sometimes biased, and on chemical analysis, which takes a long time and is inaccessible to many manufacturers and farmers. In this study, a self-made electronic nose system consisting of gas sensors (TGS 800 and 2000 series) was used to detect the gas generated by cocoa beans with different roasting degrees (0 min, 20 min, 30 min, and 40 min), and the signals collected by the gas sensors were used to train a three-layer ANN. Chemical analysis of the graded beans was performed with a traditional GC-MS system, and the contents of volatile chemical compounds were used to train another ANN as a reference to the ANN trained on electronic nose signals. Both trained ANNs were used to predict cocoa beans with different roasting degrees for validation. The best grading accuracy achieved by the ANN trained on electronic nose signals (using signals from TGS 813, 826, 820, 880, 830, 2620, 2602 and 2610) turned out to be 96.7%, whereas the GC-trained ANN achieved an accuracy of 83.8%.
Keywords: artificial neural network, cocoa bean, electronic nose, roasting
Procedia PDF Downloads 234
1494 Physiological Response of Naturally Regenerated Pinus taeda L. Saplings to Four Levels of Stem Inoculation with Leptographium terebrantis
Authors: John K. Mensah, Mary A. Sword Sayer, Ryan L. Nadel, George Matusick, Zhaofei Fan, Lori G. Eckhardt
Abstract:
Leptographium terebrantis is an opportunistic root pathogen commonly associated with loblolly pine (Pinus taeda L.) stands that are undergoing a loss of vigor in the southeastern US. In order to understand the relationship between L. terebrantis inoculum density and host physiology, an artificial inoculation study was conducted in a five-year-old naturally regenerated loblolly pine stand over a 24-week period in a completely randomized design. L. terebrantis caused sapwood occlusions that increased in severity as inoculum density increased. The occlusions significantly reduced water transport through the stem but did not interfere with fascicle-level stomatal conductance or induce moisture stress in the saplings. The resilience of stomatal conductance among pathogen-infested saplings is attributed to the growth and hydraulic function of new sapwood that developed after artificial inoculation. Results demonstrate that faster-growing families of loblolly pine may be capable of tolerating the vascular root disease when the formation of new sapwood is supported by sustained crown health.
Keywords: hydraulic conductance, inoculum density, Leptographium terebrantis, Pinus taeda, sapwood occlusion
Procedia PDF Downloads 321
1493 Recommender Systems Using Ensemble Techniques
Authors: Yeonjeong Lee, Kyoung-jae Kim, Youngtae Kim
Abstract:
This study proposes a novel recommender system that uses data mining and multi-model ensemble techniques to enhance recommendation performance by reflecting the user's precise preferences. The proposed model consists of two steps. In the first step, this study uses logistic regression, decision trees, and artificial neural networks to predict customers who have a high likelihood of purchasing products in each product group. Then, this study combines the results of each predictor using multi-model ensemble techniques such as bagging and bumping. In the second step, this study uses market basket analysis to extract association rules for co-purchased products. Finally, through the above two steps, the system selects customers who have a high likelihood of purchasing products in each product group and recommends appropriate products to them from the same or different product groups. We test the usability of the proposed system by using a prototype and real-world transaction and profile data. In addition, we survey user satisfaction with the recommended product list from the proposed system and with randomly selected product lists. The results also show that the proposed system may be useful in real-world online shopping stores.
Keywords: product recommender system, ensemble technique, association rules, decision tree, artificial neural networks
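A hedged sketch of the two-step idea on synthetic data: a bagged classifier flags likely buyers in a product group, and simple co-purchase counts stand in for full association-rule mining; all data and parameter choices are illustrative assumptions, not the study's models.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Step 1: predict which customers are likely to buy in a product group,
# using placeholder profile features and purchase labels.
profiles = rng.random((500, 10))
bought_in_group = rng.integers(0, 2, size=500)
step1 = BaggingClassifier(LogisticRegression(max_iter=1000),
                          n_estimators=25, random_state=0)
step1.fit(profiles, bought_in_group)
likely_buyers = np.where(step1.predict(profiles) == 1)[0]

# Step 2: market-basket co-occurrence counts as a simple stand-in for
# association-rule mining over transaction baskets (20 hypothetical products).
baskets = [set(rng.choice(20, size=rng.integers(2, 6), replace=False))
           for _ in range(1000)]
target_item = 3
co_counts = np.zeros(20)
for basket in baskets:
    if target_item in basket:
        for item in basket - {target_item}:
            co_counts[item] += 1
recommended = co_counts.argsort()[::-1][:5]
print("recommend to", len(likely_buyers), "customers:", recommended)
```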
Procedia PDF Downloads 294
1492 The Role of Artificial Intelligence in Patent Claim Interpretation: Legal Challenges and Opportunities
Authors: Mandeep Saini
Abstract:
The rapid advancement of Artificial Intelligence (AI) is transforming various fields, including intellectual property law. This paper explores the emerging role of AI in interpreting patent claims, a critical and highly specialized area within intellectual property rights. Patent claims define the scope of legal protection granted to an invention, and their precise interpretation is crucial in determining the boundaries of the patent holder's rights. Traditionally, this interpretation has relied heavily on the expertise of patent examiners, legal professionals, and judges. However, the increasing complexity of modern inventions, especially in fields like biotechnology, software, and electronics, poses significant challenges to human interpretation. Introducing AI into patent claim interpretation raises several legal and ethical concerns. This paper addresses critical issues such as the reliability of AI-driven interpretations, the potential for algorithmic bias, and the lack of transparency in AI decision-making processes. It considers the legal implications of relying on AI, particularly regarding accountability for errors and the potential challenges to AI interpretations in court. The paper includes a comparative study of AI-driven patent claim interpretations versus human interpretations across different jurisdictions to provide a comprehensive analysis. This comparison highlights the variations in legal standards and practices, offering insights into how AI could impact the harmonization of international patent laws. The paper proposes policy recommendations for the responsible use of AI in patent law. It suggests legal frameworks that ensure AI tools complement, rather than replace, human expertise in patent claim interpretation. These recommendations aim to balance the benefits of AI with the need for maintaining trust, transparency, and fairness in the legal process. By addressing these critical issues, this research contributes to the ongoing discourse on integrating AI into the legal field, specifically within intellectual property rights. It provides a forward-looking perspective on how AI could reshape patent law, offering both opportunities for innovation and challenges that must be carefully managed to protect the integrity of the legal system.
Keywords: artificial intelligence (ai), patent claim interpretation, intellectual property rights, algorithmic bias, natural language processing, patent law harmonization, legal ethics
Procedia PDF Downloads 21