Search results for: motor intelligence
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2486

1436 Developing a Cloud Intelligence-Based Energy Management Architecture Facilitated with Embedded Edge Analytics for Energy Conservation in Demand-Side Management

Authors: Yu-Hsiu Lin, Wen-Chun Lin, Yen-Chang Cheng, Chia-Ju Yeh, Yu-Chuan Chen, Tai-You Li

Abstract:

Demand-Side Management (DSM) has the potential to reduce the electricity costs and carbon emissions associated with electricity use in modern society. A home Energy Management System (EMS), commonly used by residential consumers in the downstream sector of a smart grid to monitor, control, and optimize the energy efficiency of domestic appliances, is a system of computer-aided functionalities serving as an energy audit for residential DSM. Implementing fault detection and classification for the appliances monitored, controlled, and optimized is one of the most important steps toward preventive maintenance, such as residential air conditioning and heating preventative maintenance in residential/industrial DSM. In this study, a cloud intelligence-based green EMS built on an Internet of Things (IoT) technology stack is developed for residential DSM. In the EMS, Arduino MEGA Ethernet communication-based smart sockets, each carrying a Real-Time Clock chip that keeps track of the current time (synchronized via the Network Time Protocol) for timestamping, are designed and implemented to take readings of load phenomena reflected in the sensed voltage and current signals. Also, a Network-Attached Storage device, providing data access to a heterogeneous group of IoT clients via Hypertext Transfer Protocol (HTTP) methods, is configured as the data store for parsed sensor readings. Lastly, a desktop computer with a WAMP software bundle (the Microsoft® Windows operating system, Apache HTTP Server, MySQL relational database management system, and PHP programming language) serves as the data science analytics engine behind a dynamic web app/RESTful web service that equips the residential DSM with Internet-delivered Artificial Intelligence (AI)/Computational Intelligence. An abstract computing machine, the Java Virtual Machine, enables the desktop computer to run Java programs, and a mash-up of Java, the R language, and Python is configured for the AI in this study.
Being able to send real-time push notifications to IoT clients, the desktop computer implements Google-maintained Firebase Cloud Messaging to engage IoT clients across Android/iOS devices and to provide a mobile notification service for residential/industrial DSM. In this study, to realize edge intelligence, whereby edge devices avoid network latency and the constant Internet connectivity otherwise required for Internet of Services, support secure access to data stores, and provide immediate analytical and real-time actionable insights at the edge of the network, we upgrade the designed and implemented smart sockets to embedded AI Arduino devices (called embedded AIduino). To realize edge analytics on the proposed embedded AIduino, an Arduino Ethernet shield (WIZnet W5100) with a micro SD card connector is used. The SD library is included for reading parsed data from, and writing parsed data to, the SD card, and an Artificial Neural Network library for the Arduino MEGA, ArduinoANN, is imported for the locally-embedded AI implementation. The embedded AIduino can be developed further for applications in manufacturing-industry energy management and sustainable energy management, where, as an example, rotating machinery diagnostics can identify energy loss from gross misalignment and unbalance of rotating machines in power plants.
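The ArduinoANN code itself is not reproduced in the abstract. As an illustration of the kind of small feedforward network such a library embeds on a microcontroller, here is a hedged pure-Python sketch; the 2-3-1 architecture, sigmoid activations, XOR training task, and hyperparameters are our assumptions for the example, not the paper's configuration:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyANN:
    """Minimal 2-3-1 feedforward network trained with plain backpropagation,
    small enough in spirit for a microcontroller-class device."""

    def __init__(self, n_in=2, n_hid=3, seed=42):
        rng = random.Random(seed)
        # One extra weight per unit for the bias input
        self.w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
                   for _ in range(n_hid)]
        self.w2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hid + 1)]

    def forward(self, x):
        xb = list(x) + [1.0]
        self.h = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in self.w1]
        hb = self.h + [1.0]
        self.o = sigmoid(sum(w * v for w, v in zip(self.w2, hb)))
        return self.o

    def train_step(self, x, target, lr=0.5):
        o = self.forward(x)
        delta_o = (o - target) * o * (1.0 - o)      # output-layer error term
        xb = list(x) + [1.0]
        for j, h in enumerate(self.h):              # uses pre-update w2 values
            delta_h = delta_o * self.w2[j] * h * (1.0 - h)
            for k, v in enumerate(xb):
                self.w1[j][k] -= lr * delta_h * v
        for j, h in enumerate(self.h + [1.0]):
            self.w2[j] -= lr * delta_o * h
        return 0.5 * (o - target) ** 2

# Train on XOR as a stand-in for a small load-classification task
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]
net = TinyANN()
first_loss = sum(net.train_step(x, t) for x, t in data)
for _ in range(3999):
    last_loss = sum(net.train_step(x, t) for x, t in data)
```

On an actual Arduino MEGA the equivalent weight arrays would be fixed-size C floats, but the feedforward/backpropagation structure is the same.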

Keywords: demand-side management, edge intelligence, energy management system, fault detection and classification

Procedia PDF Downloads 251
1435 The Application of Sensory Integration Techniques in Teaching Science to Students with Autism

Authors: Joanna Estkowska

Abstract:

The Sensory Integration Method is aimed primarily at children with learning disabilities. It can also be used as a complementary method in the treatment of children with cerebral palsy, autism, intellectual disability, blindness, or deafness. Autism is a holistic developmental disorder that manifests itself in the specific functioning of a child. The most characteristic features are disorders in communication, difficulties in social relations, rigid patterns of behavior, and impairment in sensory processing. These may be accompanied by abnormal intellectual development, attention deficit disorders, perceptual disorders, and others. This study focused on the application of sensory integration techniques in the science education of autistic students. The lack of proper sensory integration causes problems with complicated processes such as motor coordination, movement planning, visual or auditory perception, speech, writing, reading, or counting. Good functioning and cooperation of the proprioceptive, tactile, and vestibular senses affect the child's mastery of skills that require coordination of both sides of the body and synchronization of the cerebral hemispheres. These include, for example, all sports activities, precise manual skills such as writing, as well as reading and counting skills. All this takes place in stages: achieving the skills of the first stage determines the development of abilities at the next level, and any deficit within the first three stages can affect the development of new skills. This ultimately reflects on achievements at school and in later professional and personal life. After careful analysis, symptoms in the emotional and social spheres appear to be secondary to deficits of sensory integration. During our research, the students gained knowledge and skills through classroom experience, learning biology, chemistry, and physics with the application of sensory integration techniques.
Sensory integration therapy aims to teach the child an adequate response to stimuli coming from both the outside world and their own body. Thanks to properly selected exercises, a child can improve perception and interpretation skills, motor skills, coordination of movements, attention and concentration, self-awareness, and social and emotional functioning.

Keywords: autism spectrum disorder, science education, sensory integration, special educational needs

Procedia PDF Downloads 184
1434 Disparities in Language Competence and Conflict: The Moderating Role of Cultural Intelligence in Intercultural Interactions

Authors: Catherine Peyrols Wu

Abstract:

Intercultural interactions are becoming increasingly common in organizations and in everyday life. These interactions are often the stage of miscommunication and conflict. In management research, these problems are commonly attributed to cultural differences in values and interactional norms. As a result, the notion that intercultural competence can minimize these challenges is widely accepted. Cultural differences, however, are not the only source of challenge during intercultural interactions. The need to rely on a lingua franca, a common language between people who have different mother tongues, is another important one. In theory, a lingua franca can improve communication and ease coordination. In practice, however, disparities in people's ability and confidence to communicate in that language can exacerbate tensions and generate inefficiencies. In this study, we draw on power theory to develop a model of disparities in language competence and conflict in a multicultural work context. Specifically, we hypothesized that differences in language competence between interaction partners would be positively related to conflict, such that people would report greater conflict with partners who have more dissimilar levels of language competence and less conflict with partners who have more similar levels. Furthermore, we proposed that cultural intelligence (CQ), an intercultural competence that denotes an individual's capability to be effective in intercultural situations, would weaken the relationship between disparities in language competence and conflict, such that people would report less conflict with partners of dissimilar language competence when the partner has high CQ and more conflict when the partner has low CQ. We tested this model with a sample of 135 undergraduate students working in multicultural teams for 13 weeks. We used a round-robin design to examine conflict in 646 dyads nested within 21 teams.
Results of analyses using social relations modeling supported our hypotheses. Specifically, we found that in intercultural dyads with large disparities in language competence, partners with the lowest level of language competence reported higher levels of interpersonal conflict. However, this relationship disappeared when the partner with higher language competence was also high in CQ. These findings suggest that communication in a lingua franca can be a source of conflict in intercultural collaboration when partners differ in their level of language competence, and that CQ can alleviate these effects during collaboration with partners who have relatively lower levels of language competence. Theoretically, this study underscores the benefits of CQ as a complement to language competence for intercultural effectiveness. Practically, these results further attest to the benefits of investing resources to develop language competence and CQ in employees engaged in multicultural work.

Keywords: cultural intelligence, intercultural interactions, language competence, multicultural teamwork

Procedia PDF Downloads 165
1433 A Comparison Between the Internal Combustion Engine and Electric Motor in the Automobile

Authors: Jack Mason, Ahmad Pourmovhed

Abstract:

This paper will discuss the advantages and disadvantages of the internal combustion engine when compared to different types of electric vehicles. The Internal Combustion Engine (ICE)'s overall cost, environmental impact, and usability will be compared to those of different types of Electric Vehicles (EVs), including Battery Electric Vehicles (BEVs) and Hydrogen Fuel Cell Electric Vehicles (FCEVs). Ways to solve the problems each vehicle type presents will also be discussed.

Keywords: internal combustion engine, battery electric vehicle, fuel cell electric vehicle, emissions

Procedia PDF Downloads 176
1432 The Untreated Burden of Parkinson’s Disease: A Patient Perspective

Authors: John Acord, Ankita Batla, Kiran Khepar, Maude Schmidt, Charlotte Allen, Russ Bradford

Abstract:

Objectives: Despite the availability of treatment options, Parkinson's disease (PD) continues to impact heavily on a patient's quality of life (QoL), as many symptoms that bother the patient remain unexplored and untreated in clinical settings. The aims of this research were to understand the burden of PD symptoms from a patient perspective, particularly those which are the most persistent and debilitating, and to determine if current treatments and treatment algorithms adequately focus on their resolution. Methods: A 13-question, online, patient-reported survey was created based on the MDS-Unified Parkinson's Disease Rating Scale (MDS-UPDRS) and symptoms listed on Parkinson's disease patient advocacy group websites, and then validated by 10 Parkinson's patients. In the survey, patients were asked to choose both their most common and their most bothersome symptoms, whether they had received treatment for those and, if so, had it been effective in resolving those symptoms. Results: The most bothersome symptoms reported by the 111 participants who completed the survey were sleep problems (61%), feeling tired (56%), slowness of movements (54%), and pain in some parts of the body (49%). However, while 86% of patients reported receiving dopamine or dopamine-like drugs to treat their PD, far fewer reported receiving targeted therapies for additional symptoms. For example, of the patients who reported having sleep problems, only 33% received some form of treatment for this symptom. This was also true for feeling tired (30% received treatment for this symptom), slowness of movements (62% received treatment for this symptom), and pain in some parts of the body (61% received treatment for this symptom). Additionally, 65% of patients reported that the symptoms they experienced were not adequately controlled by the treatments they received, and 9% reported that their current treatments had no effect on their symptoms whatsoever.
Conclusion: The survey outcomes highlight that the majority of patients involved in the study received treatment focused on their disease; however, symptom-based treatments were less well represented. Consequently, patient-reported symptoms such as sleep problems and feeling tired tended to receive more fragmented intervention than 'classical' PD symptoms, such as slowness of movement, even though they were reported as being amongst the most bothersome symptoms for patients. This research highlights the need to explore symptom burden from the patient's perspective and to offer customised treatment/support for both motor and non-motor symptoms to maximize patients' quality of life.

Keywords: survey, patient reported symptom burden, unmet needs, parkinson's disease

Procedia PDF Downloads 295
1431 Copyright Clearance for Artificial Intelligence Training Data: Challenges and Solutions

Authors: Erva Akin

Abstract:

The use of copyrighted material for machine learning purposes is a challenging issue in the field of artificial intelligence (AI). While machine learning algorithms require large amounts of data to train and to improve their accuracy and creativity, the use of copyrighted material without permission from the authors may infringe on their intellectual property rights. To overcome the copyright hurdle against the sharing, access, and re-use of data, the use of copyrighted material for machine learning purposes may be considered permissible under certain circumstances. For example, if the copyright holder has given permission to use the data through a licensing agreement, then the use for machine learning purposes may be lawful. It is also argued that copying for non-expressive purposes that do not involve conveying expressive elements to the public, such as automated data extraction, should not be seen as infringing. The focus of such 'copy-reliant technologies' is on understanding language rules, styles, and syntax; no creative ideas are being used. However, the non-expressive use defense sits within the framework of the fair use doctrine, which allows the use of copyrighted material for research or educational purposes. Questions arise because the fair use doctrine is not available in EU law; instead, the InfoSoc Directive provides for a rigid system of exclusive rights with a list of exceptions and limitations. One could only argue that non-expressive uses of copyrighted material for machine learning purposes do not constitute a 'reproduction' in the first place. Nevertheless, the use of machine learning with copyrighted material is difficult because EU copyright law applies to the mere use of the works. Two solutions can be proposed to address the problem of copyright clearance for AI training data.
The first is to introduce a broad exception for text and data mining, either mandatory or covering commercial and scientific purposes. The second is for copyright laws to permit the reproduction of works for non-expressive purposes, which opens the door to discussions regarding the transposition of the fair use principle from the US into EU law. Both solutions aim to provide more space for AI developers to operate and to encourage greater freedom, which could lead to more rapid innovation in the field. The Data Governance Act presents a significant opportunity to advance these debates. Finally, issues concerning the balance between general public interests and legitimate private interests in machine learning training data must be addressed. In my opinion, it is crucial that robot-created output should fall into the public domain. Machines depend on human creativity, innovation, and expression. To encourage technological advancement and innovation, freedom of expression and business operation must be prioritised.

Keywords: artificial intelligence, copyright, data governance, machine learning

Procedia PDF Downloads 83
1430 Enhancing Email Security: A Multi-Layered Defense Strategy Approach and an AI-Powered Model for Identifying and Mitigating Phishing Attacks

Authors: Anastasios Papathanasiou, George Liontos, Athanasios Katsouras, Vasiliki Liagkou, Euripides Glavas

Abstract:

Email remains a crucial communication tool due to its efficiency, accessibility, and cost-effectiveness, enabling rapid information exchange across global networks. However, the global adoption of email has also made it a prime target for cyber threats, including phishing, malware, and Business Email Compromise (BEC) attacks, which exploit its integral role in personal and professional realms in order to perform fraud and data breaches. To combat these threats, this research advocates a multi-layered defense strategy incorporating advanced technological tools such as anti-spam and anti-malware software, machine learning algorithms, and authentication protocols. Moreover, we developed an artificial intelligence model specifically designed to analyze email headers and assess their security status. This AI-driven model examines various components of email headers, such as 'From' addresses, 'Received' paths, and the integrity of SPF, DKIM, and DMARC records. Upon analysis, it generates comprehensive reports that indicate whether an email is likely to be malicious or benign. This capability empowers users to identify potentially dangerous emails promptly, enhancing their ability to avoid phishing attacks, malware infections, and other cyber threats.
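The paper's AI model is not shown in the abstract. As a hedged, rule-based sketch of the header fields it examines ('From', 'Received', and SPF/DKIM/DMARC authentication results), the following is illustrative only; the header field names follow the standards, but the scoring logic and sample header are our own assumptions:

```python
from email import message_from_string

# Tokens that commonly appear in Authentication-Results on failed checks
FAILURE_TOKENS = ("spf=fail", "spf=softfail", "dkim=fail", "dmarc=fail")

def assess_header(raw_header: str) -> dict:
    """Parse an RFC 5322 header block and report basic authentication status."""
    msg = message_from_string(raw_header)
    auth = (msg.get("Authentication-Results") or "").lower()
    failures = [tok for tok in FAILURE_TOKENS if tok in auth]
    return {
        "from": msg.get("From", ""),
        "received_hops": len(msg.get_all("Received") or []),
        "auth_failures": failures,
        "verdict": "suspicious" if failures else "no failed checks",
    }

# Invented sample header for demonstration
sample = (
    "Received: from mail.example.org by mx.example.com; Mon, 1 Jan 2024 00:00:00 +0000\n"
    "From: Alice <alice@example.org>\n"
    "Authentication-Results: mx.example.com; spf=fail smtp.mailfrom=example.org; dkim=fail\n"
    "Subject: Invoice attached\n"
)
report = assess_header(sample)
```

A learned model, as in the paper, would replace the fixed token list with features extracted from these same fields.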

Keywords: email security, artificial intelligence, header analysis, threat detection, phishing, DMARC, DKIM, SPF, AI model

Procedia PDF Downloads 58
1429 Induction Machine Bearing Failure Detection Using Advanced Signal Processing Methods

Authors: Abdelghani Chahmi

Abstract:

This article examines the detection and localization of faults in electrical systems, particularly those using asynchronous machines. First, the failure process is characterized and relevant symptoms are defined; based on those processes and symptoms, a model of the malfunctions is obtained. Second, the development of the diagnosis of the machine is shown. As studies of malfunctions in electrical systems can rely on only a small amount of experimental data, it has been essential to provide ourselves with simulation tools that allowed us to characterize the faulty behavior. Fault detection uses signal processing techniques in known operating phases.
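The abstract does not detail the signal processing chain; a common ingredient of such diagnosis is examining the stator-current spectrum for fault-related components. The sketch below shows a single-bin DFT used that way; the 90 Hz "fault" frequency and signal amplitudes are invented for the example, not taken from the paper:

```python
import math

def dft_magnitude(signal, fs, freq):
    """Magnitude of a single DFT bin of `signal` (sampled at fs Hz) at `freq` Hz."""
    n = len(signal)
    re = sum(s * math.cos(2.0 * math.pi * freq * k / fs) for k, s in enumerate(signal))
    im = sum(s * math.sin(2.0 * math.pi * freq * k / fs) for k, s in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n

fs = 1000.0                          # sampling frequency, Hz
t = [k / fs for k in range(1000)]    # 1 s of samples
# Healthy current: pure 50 Hz supply component
healthy = [math.sin(2.0 * math.pi * 50.0 * x) for x in t]
# Faulty current: extra 0.2-amplitude component at a hypothetical 90 Hz fault frequency
faulty = [h + 0.2 * math.sin(2.0 * math.pi * 90.0 * x) for h, x in zip(healthy, t)]

fault_level_healthy = dft_magnitude(healthy, fs, 90.0)
fault_level_faulty = dft_magnitude(faulty, fs, 90.0)
```

Comparing the magnitude at a known fault frequency against a healthy baseline is the basic decision rule; real diagnosis would track sidebands derived from machine geometry and slip.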

Keywords: induction motor, modeling, bearing damage, airgap eccentricity, torque variation

Procedia PDF Downloads 139
1428 A Comparative Study on Deep Learning Models for Pneumonia Detection

Authors: Hichem Sassi

Abstract:

Pneumonia, being a respiratory infection, has garnered global attention due to its rapid transmission and relatively high mortality rates. Timely detection and treatment play a crucial role in significantly reducing mortality associated with pneumonia. Presently, X-ray diagnosis stands out as a reasonably effective method. However, the manual scrutiny of a patient's X-ray chest radiograph by a proficient practitioner usually requires 5 to 15 minutes. In situations where cases are concentrated, this places immense pressure on clinicians for timely diagnosis. Relying solely on the visual acumen of imaging doctors proves to be inefficient, particularly given the low speed of manual analysis. Therefore, the integration of artificial intelligence into the clinical image diagnosis of pneumonia becomes imperative. Additionally, AI recognition is notably rapid, with convolutional neural networks (CNNs) demonstrating superior performance compared to human counterparts in image identification tasks. To conduct our study, we utilized a dataset comprising chest X-ray images obtained from Kaggle, encompassing a total of 5216 training images and 624 test images, categorized into two classes: normal and pneumonia. Employing five mainstream network algorithms, we undertook a comprehensive analysis to classify these diseases within the dataset, subsequently comparing the results. The integration of artificial intelligence, particularly through improved network architectures, stands as a transformative step towards more efficient and accurate clinical diagnoses across various medical domains.
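The abstract does not name the five networks or their scores. When comparing such models on a pneumonia/normal split, per-class metrics are the usual yardstick; this hedged sketch shows the computation with invented toy label vectors, not the paper's results:

```python
def confusion_metrics(y_true, y_pred):
    """Accuracy, sensitivity, and specificity for a binary pneumonia(1)/normal(0) task."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    n = len(y_true)
    return {
        "accuracy": (tp + tn) / n,
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,  # recall on pneumonia cases
        "specificity": tn / (tn + fp) if tn + fp else 0.0,  # recall on normal cases
    }

# Toy ground truth and one model's toy predictions (illustrative values only)
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
m = confusion_metrics(y_true, y_pred)
```

Sensitivity matters most clinically here, since a missed pneumonia case (false negative) is costlier than a false alarm.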

Keywords: deep learning, computer vision, pneumonia, models, comparative study

Procedia PDF Downloads 64
1427 Synthetic Classicism: A Machine Learning Approach to the Recognition and Design of Circular Pavilions

Authors: Federico Garrido, Mostafa El Hayani, Ahmed Shams

Abstract:

The exploration of the potential of artificial intelligence (AI) in architecture is still embryonic; however, its latent capacity to change design disciplines is significant. 'Synthetic Classicism' is a research project that questions the underlying aspects of classically organized architecture, not just in aesthetic terms but also from a geometrical and morphological point of view, intending to generate new architectural information using historical examples as source material. The main aim of this paper is to explore the uses of artificial intelligence and machine learning algorithms in architectural design while creating a coherent narrative to be contained within a design process. The purpose is twofold: on one hand, to develop and train machine learning algorithms to produce architectural information of small pavilions and, on the other, to synthesize new information from previous architectural drawings. These algorithms are intended to 'interpret' graphical information from each pavilion and then generate new information from it. The procedure, once these algorithms are trained, is the following: starting from a line profile, a synthetic 'front view' of a pavilion is generated; using it as source material, an isometric view is created from it; and finally, a top view is produced. Thanks to GAN algorithms, it is also possible to generate front and isometric views without any graphical input. The final intention of the research is to produce isometric views out of historical information, such as the pavilions of Sebastiano Serlio, James Gibbs, or John Soane. The idea is to create and interpret new information not just in terms of historical reconstruction but also to explore AI as a novel tool in the narrative of a creative design process.
This research also challenges the idea that algorithmic design is tied to efficiency or fitness, embracing instead the possibility of a creative collaboration between artificial intelligence and a human designer. Hence the double feature of this research, both analytical and creative: first synthesizing images based on a given dataset, and then generating new architectural information from historical references. We find that the possibility of creatively understanding and manipulating historic (and synthetic) information will be a key feature in future innovative design processes. Finally, the main question that we propose is whether an AI could be used not just to create an original and innovative group of simple buildings, but also to explore the possibility of fostering a novel architectural sensibility grounded in the specificities of the architectural dataset, whether historic, human-made, or synthetic.

Keywords: architecture, central pavilions, classicism, machine learning

Procedia PDF Downloads 140
1426 The National Socialist and Communist Propaganda Activities in the Turkish Press during World War II

Authors: Asuman Tezcan Mirer

Abstract:

This paper discusses national socialist and communist propaganda struggles in the Turkish press during World War II. It aspires to analyze how government agencies directed and organized the Turkish press to prevent the "5th column" from influencing public opinion. During the Second World War, one of the most emphasized issues was propaganda and how Turkish citizens would be protected from the effects of disinformation. Istanbul became a significant headquarters for the belligerent countries' intelligence services, and these services were involved in gathering intelligence and disseminating propaganda. The main motive of national socialist propaganda in Turkey was anti-communism. Subsidizing certain magazines, controlling German companies' advertisements and the paper trade, spreading rumors, printing propaganda brochures, and showing German propaganda films are some tactics that the national socialists applied before and during the Second World War. On the other hand, the communists targeted Turkish racist/ultra-nationalist groups and their publications, which were influenced by the Nazi regime. They were also involved in distributing Marxist publications, printing brochures, and broadcasting radio programs. This study consists of three parts. The first part describes the national socialist and communist propaganda activities in Turkey during the Second World War. The second part addresses the debates over propaganda among selected newspapers representing different ideologies. Finally, the last part analyzes the Turkish government's press policy. It explains why the government allowed ideological debates in the press despite its authoritarian press policy and "active neutrality" stance in the international arena.

Keywords: propaganda, press, 5th column, World War II, Turkey

Procedia PDF Downloads 101
1425 Integrated Free Space Optical Communication and Optical Sensor Network System with Artificial Intelligence Techniques

Authors: Yibeltal Chanie Manie, Zebider Asire Munyelet

Abstract:

5G and 6G technology offers enhanced quality of service with high data transmission rates, which necessitates the implementation of the Internet of Things (IoT) in the 5G/6G architecture. In this paper, we propose the integration of free-space optical communication (FSO) with fiber sensor networks for IoT applications. FSO is gaining popularity as an effective alternative to the limited availability of radio frequency (RF) spectrum, owing to its flexibility, high achievable optical bandwidth, and low power consumption in applications such as disaster recovery, last-mile connectivity, drones, surveillance, backhaul, and satellite communications. Hence, high-speed FSO is an optimal choice for wireless networks to realize the full potential of 5G/6G technology, offering speeds of 100 Gbit/s or more in IoT applications. Moreover, machine learning must be integrated into the design, planning, and optimization of future optical wireless communication networks in order to actualize this vision of intelligent processing and operation. In addition, fiber sensors are important for achieving real-time, accurate, and smart monitoring in IoT applications. We therefore also propose deep learning techniques to estimate the strain changes and peak wavelengths of multiple Fiber Bragg Grating (FBG) sensors using only the FBG spectra obtained from a real experiment.
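The deep learning model itself is not given in the abstract. As a hedged classical baseline for the same quantity it regresses, the peak wavelength of an FBG reflection spectrum, a centroid (center-of-gravity) estimate can be sketched as follows; the synthetic Gaussian spectrum and its 1550.2 nm center are illustrative assumptions:

```python
import math

def peak_wavelength(wavelengths, intensities, rel_threshold=0.5):
    """Centroid estimate of an FBG peak using points above a relative threshold."""
    top = max(intensities)
    pts = [(w, i) for w, i in zip(wavelengths, intensities) if i >= rel_threshold * top]
    return sum(w * i for w, i in pts) / sum(i for _, i in pts)

# Synthetic Gaussian-shaped FBG reflection peak centered at 1550.2 nm
center, width = 1550.2, 0.05
wavelengths = [1550.0 + 0.01 * k for k in range(41)]   # 1550.00 .. 1550.40 nm
intensities = [math.exp(-((w - center) / width) ** 2) for w in wavelengths]

estimate = peak_wavelength(wavelengths, intensities)
```

A trained network, as in the paper, would instead map the whole (possibly overlapping, multi-FBG) spectrum to the peak wavelengths directly, which is where classical centroid methods break down.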

Keywords: optical sensor, artificial intelligence, Internet of Things, free-space optics

Procedia PDF Downloads 63
1424 Effects of a Head Mounted Display Adaptation on Reaching Behaviour: Implications for a Therapeutic Approach in Unilateral Neglect

Authors: Taku Numao, Kazu Amimoto, Tomoko Shimada, Kyohei Ichikawa

Abstract:

Background: Unilateral spatial neglect (USN) is a common syndrome following damage to one hemisphere of the brain (usually the right side), in which a patient fails to report or respond to stimulation from the contralesional side. These symptoms are not due to primary sensory or motor deficits but instead reflect an inability to process input from that side of the environment. Prism adaptation (PA) is a therapeutic treatment for USN, wherein a patient's visual field is artificially shifted laterally, resulting in a sensory-motor adaptation. However, patients with USN also tend to perceive a left-leaning subjective vertical in the frontal plane. Traditional PA cannot be used to correct a tilt in the subjective vertical, because a prism can only shift, not rotate, the surroundings. This can, however, be accomplished using a head mounted display (HMD) and a web camera. Therefore, this study investigated whether an HMD system could be used to correct spatial perception in the frontal as well as the horizontal plane. We recruited healthy subjects in order to collect data for the refinement of USN patient therapy. Methods: Eight healthy subjects sat on a chair wearing an HMD (Oculus Rift DK2), with a web camera (Ovrvision) displaying a 10-degree leftward rotation and a 10-degree counter-clockwise rotation in the frontal plane. Subjects attempted to point a finger at one of four targets, assigned randomly, a total of 48 times. Before and after the intervention, each subject's body-centre judgment (BCJ) was tested by asking them to point a finger at a touch panel straight in front of their xiphisternum, 10 times, sight unseen. Results: The intervention caused the location pointed to during the BCJ to shift 35 ± 17 mm (mean ± SD) leftward in the horizontal plane and 46 ± 29 mm downward in the frontal plane. The results in both planes were significant by paired t-test (p < .01).
Conclusions: The results in the horizontal plane are consistent with those observed following PA. Furthermore, the HMD and web camera were able to elicit effects in both the horizontal and frontal planes. Future work will focus on applying this method to patients with and without USN and investigating whether subject posture is also affected by the HMD system.
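The paired t-test used for the BCJ comparison can be computed from the pre/post pointing coordinates; a minimal sketch follows, with invented sample numbers rather than the study's data:

```python
import math

def paired_t(before, after):
    """Paired t statistic for pre/post measurements such as the BCJ pointing shift."""
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance of differences
    return mean / math.sqrt(var / n)

# Invented pre/post horizontal BCJ coordinates (mm) for illustration
before = [0.0, 0.0, 0.0, 0.0]
after = [1.0, 2.0, 3.0, 2.0]
t_stat = paired_t(before, after)
```

The resulting statistic is compared against the t distribution with n - 1 degrees of freedom to obtain the reported p-value.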

Keywords: head mounted display, posture, prism adaptation, unilateral spatial neglect

Procedia PDF Downloads 280
1423 Data Analytics in Energy Management

Authors: Sanjivrao Katakam, Thanumoorthi I., Antony Gerald, Ratan Kulkarni, Shaju Nair

Abstract:

With increasing energy costs and their impact on business, sustainability has evolved from a social expectation into an economic imperative. Finding methods to reduce cost has therefore become a critical directive for industry leaders, and effective energy management is the only way to cut costs. However, energy management has been a challenge because it requires a change in old habits and in legacy systems followed for decades. Today, exorbitant volumes of energy and operational data are being captured and stored by industries, but they are unable to convert these structured and unstructured data sets into meaningful business intelligence. It must be noted that, for quick decisions, organizations must learn to cope with large volumes of operational data in different formats. Energy analytics not only helps in extracting inferences from these data sets but is also instrumental in the transformation from old approaches to energy management to new ones. This, in turn, assists in effective decision-making for implementation. Organizations require an established corporate strategy for reducing operational costs through visibility and optimization of energy usage, and energy analytics plays a key role in the optimization of operations. The paper describes how energy data analytics is today extensively used in scenarios such as reducing operational costs, predicting energy demands, optimizing network efficiency, asset maintenance, improving customer insights, and device data insights. The paper also highlights how analytics helps transform insights obtained from energy data into sustainable solutions. The paper utilizes data from an array of segments such as the retail, transportation, and water sectors.
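As a concrete, hedged illustration of one such analytics primitive, flagging consumption readings that deviate from a trailing baseline, consider the sketch below; the window size, threshold, and data are assumptions for the example, not drawn from the paper:

```python
def flag_anomalies(readings, window=10, k=2.0):
    """Indices of readings more than k standard deviations above the trailing-window mean."""
    flags = []
    for i in range(window, len(readings)):
        hist = readings[i - window:i]
        mean = sum(hist) / window
        std = (sum((x - mean) ** 2 for x in hist) / window) ** 0.5
        if std > 0 and readings[i] > mean + k * std:
            flags.append(i)
    return flags

# Synthetic hourly consumption alternating 10/11 kWh with one 20 kWh spike
readings = [10.0 + (i % 2) for i in range(40)]
readings[25] = 20.0
anomalies = flag_anomalies(readings)
```

Production energy analytics would layer seasonality models and forecasting on top of this kind of baseline check.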

Keywords: energy analytics, energy management, operational data, business intelligence, optimization

Procedia PDF Downloads 364
1422 Designing of Tooling Solution for Material Handling in Highly Automated Manufacturing System

Authors: Muhammad Umair, Yuri Nikolaev, Denis Artemov, Ighor Uzhinsky

Abstract:

A flexible manufacturing system is an integral part of a smart factory of Industry 4.0, in which every machine is interconnected and works autonomously. Robots are in the process of replacing humans in every industrial sector. As cyber-physical systems (CPS) and artificial intelligence (AI) advance, the manufacturing industry is becoming more dependent on computers than on human brains. This modernization has boosted production with high quality and accuracy and shifted the industry from classic production to smart manufacturing systems. However, material handling for such automated production is a challenge and needs to be addressed with the best possible solution. Conventional clamping systems are designed for manual work and are not suitable for highly automated production systems. Researchers and engineers are trying to find the most economical solution for loading/unloading and transporting workpieces from a warehouse to a machine shop for machining operations and back again without human involvement. This work proposes an advanced multi-shape tooling solution for highly automated manufacturing systems. Current results show that it can function well with automated guided vehicles (AGVs) and modern conveyor belts. The proposed solution meets the requirements of being automation-friendly and universal across different part geometries and production operations. We used a bottom-up approach in this work, starting with studying different case scenarios and their limitations and finishing with the general solution.

Keywords: artificial intelligence, cyber-physical system, Industry 4.0, material handling, smart factory, flexible manufacturing system

Procedia PDF Downloads 131
1421 Quantum Cum Synaptic-Neuronal Paradigm and Schema for Human Speech Output and Autism

Authors: Gobinathan Devathasan, Kezia Devathasan

Abstract:

Objective: To improve the current modified Broca-Wernicke-Lichtheim-Kussmaul speech schema and provide insight into autism. Methods: We reviewed the pertinent literature. Current findings, involving Brodmann areas 22, 46, 9, 44, 45, 6, and 4, are based on neuropathology and functional MRI studies. However, in primary autism there is no lucid explanation, and the changes described, whether in neuropathology or functional MRI, appear consequential. Findings: We put forward an enhanced model which may explain the enigma related to autism. Vowel output is subcortical and does not need cortical representation, whereas consonant speech is cortical in origin. Left lateralization is needed to commence the circuitry spin, as life has evolved with L-amino acids and the left spin of electrons. A fundamental species difference is that we are capable of three-consonant syllables and bi-syllable expression, whereas cetaceans and songbirds are confined to single or dual consonants. The four key sites for speech are the superior auditory cortex, Broca's two areas, and the supplementary motor cortex. Using the Argand diagram and Riemann projection, we theorize that the Euclidean three-dimensional synaptic neuronal circuits of speech are quantized to coherent waves, and that decoherence then takes place at area 6 (spherical representation). In this quantum state, complex three-consonant languages are instantaneously integrated, and multiple languages can be learned, verbalized, and differentiated. Conclusion: We postulate that evolutionary human speech, unlike that of cetaceans and birds, is elevated to quantum interaction to achieve three-consonant/bi-syllable speech. In classical primary autism, the sudden switching off and on of speech noted in several cases could now be explained not by any anatomical lesion but by a failure of coherence. Area 6 projects directly into the prefrontal saccadic area (area 8), which further explains the second primary feature of autism: lack of eye contact. The third feature, repetitive finger gestures, arising adjacent to the speech/motor areas, represents actual attempts at communication by the autistic child, akin to sign language for the deaf.

Keywords: quantum neuronal paradigm, cetaceans and human speech, autism and rapid magnetic stimulation, coherence and decoherence of speech

Procedia PDF Downloads 195
1420 The Protection of Artificial Intelligence (AI)-Generated Creative Works Through Authorship: A Comparative Analysis Between the UK and Nigerian Copyright Experience to Determine Lessons to Be Learnt from the UK

Authors: Esther Ekundayo

Abstract:

The nature of AI-generated works makes it difficult to identify an author. Some scholars have suggested that all the players involved in a work's creation should be allocated authorship according to their respective contributions: from the programmer who creates and designs the AI, to the investor who finances it, to the user of the AI who most likely ends up creating the work in question. Others have suggested that this issue may be resolved by the UK computer-generated works (CGW) provision under Section 9(3) of the Copyright, Designs and Patents Act 1988 (CDPA). However, under UK and Nigerian copyright law, only human-created works are recognised, usually assessed on the basis of originality: the work must have been created through its author's creative and intellectual abilities and not copied. Such works are literary, dramatic, musical, and artistic works, which have recently been a topic of discussion with regard to generative artificial intelligence (generative AI). Unlike Nigerian law, the UK CDPA recognises computer-generated works and vests authorship in the human who made the necessary arrangements for their creation. However, in Nova Productions Ltd v Mazooma Games Ltd, 'making the necessary arrangements' was interpreted in line with the traditional authorship principle, which requires the skill of the creator to prove originality. Some therefore argue that computer-generated works complicate this issue and that AI-generated works should enter the public domain, since authorship cannot be allocated to the AI itself. Recognising these issues amid the growing AI trend, the UKIPO, in a public consultation launched in 2022, considered whether computer-generated works should be protected at all and why and, if not, whether a new right with a different scope and term of protection should be introduced. It concluded that the issue of computer-generated works would be revisited, as AI was still in its early stages. However, given recent developments in this area with regard to generative AI systems such as ChatGPT, Midjourney, DALL-E, and AIVA, which can produce human-like copyright creations, it is important to examine the issues that may alter traditional copyright principles as we know them. Considering that the UK and Nigeria are both common law jurisdictions but with slightly differing approaches to this area, this research seeks to answer the following questions by comparative analysis: 1) Who is the author of an AI-generated work? 2) Is the UK's CGW provision worthy of emulation by Nigerian law? 3) Would a sui generis law be capable of protecting AI-generated works and their authors in both jurisdictions? The research further examines possible barriers to the implementation of such a law in Nigeria, including limited technical expertise and a lack of awareness among policymakers.

Keywords: authorship, artificial intelligence (AI), generative ai, computer-generated works, copyright, technology

Procedia PDF Downloads 96
1419 Impact of Research-Informed Teaching and Case-Based Teaching on Memory Retention and Recall in University Students

Authors: Durvi Yogesh Vagani

Abstract:

This research paper explores the effectiveness of research-informed teaching and case-based teaching in enhancing memory retention and recall during discussions among university students. Additionally, it investigates the impact of using artificial intelligence (AI) tools on the quality of research conducted by students and its correlation with better recollection. The study hypothesizes that case-based teaching will lead to greater recall and storage of information, and that the use of AI tools such as ChatGPT and Bard will lead to better retention and recall. The research gap in the use of AI in educational settings, particularly with actual participants, is addressed through a multi-method approach. Before the study commenced, participants' attention levels and IQ were assessed using the Digit Span Test and the Wechsler Adult Intelligence Scale, respectively, to ensure comparability. Participants were then divided into four conditions, each group receiving identical information presented in a different format. Following this, participants engaged in a group discussion on the given topic, and their responses were evaluated against a checklist. Finally, participants completed a brief test to measure their recall ability after the discussion. Preliminary findings suggest that students who utilize AI tools for learning demonstrate an improved grasp of information and are more likely to integrate relevant information into discussions rather than provide extraneous details. Furthermore, case-based teaching fosters greater attention and recall during discussions, while research-informed teaching leads to greater knowledge for application. By addressing the research gap in AI application in education, this study contributes to a deeper understanding of effective teaching methodologies and the role of technology in student learning outcomes. The implication of the present research is that teaching methods should be tailored to the subject matter: case-based teaching suits practical subjects by facilitating application, while research-informed teaching aids understanding of theory-heavy topics. Integrating AI, for example by combining AI tools with research-informed teaching, may optimize instructional strategies, deepen learning experiences, and offer detailed information tailored to students' needs.

Keywords: artificial intelligence, attention, case-based teaching, memory recall, memory retention, research-informed teaching

Procedia PDF Downloads 28
1418 Artificial Intelligence-Generated Previews of Hyaluronic Acid-Based Treatments

Authors: Ciro Cursio, Giulia Cursio, Pio Luigi Cursio, Luigi Cursio

Abstract:

Communication between practitioner and patient is of the utmost importance in aesthetic medicine: as of today, images of previous treatments are the most common tool used by doctors to describe and anticipate future results for their patients. However, using photos of other people often reduces the engagement of the prospective patient and is further limited by the number and quality of pictures available to the practitioner. Pre-existing work addresses this issue in two ways: 3D scanning of the area with manual editing of the 3D model by the doctor, or automatic prediction of the treatment by warping the image with hand-written parameters. The first approach requires significant manual work by the doctor, while the second often generates predictions that look artificial. We propose an AI-based algorithm that autonomously generates a realistic prediction of treatment results. For the purpose of this study, we focus on hyaluronic acid treatments in the facial area. Our approach takes into account the individual characteristics of each face, and the prediction system allows patients to decide which area of the face they want to modify. We show that the predictions generated by our system are realistic: first, the quality of the generated images is on par with that of real images; second, the predictions match the actual results obtained after the treatment is completed. In conclusion, the proposed approach provides a valid tool for doctors to show patients what they will look like before deciding on the treatment.

Keywords: prediction, hyaluronic acid, treatment, artificial intelligence

Procedia PDF Downloads 114
1417 Design and Implementation of Low-Code Model-Building Methods

Authors: Zhilin Wang, Zhihao Zheng, Linxin Liu

Abstract:

This study proposes a low-code model-building approach that aims to simplify the development and deployment of artificial intelligence (AI) models. Through an intuitive drag-and-drop interface for connecting components, users can easily build complex models and integrate multiple algorithms for training. After training is completed, the system automatically generates a callable model-service API. This method not only lowers the technical threshold of AI development and improves development efficiency but also enhances the flexibility of algorithm integration and simplifies model deployment. The core strength of the method lies in its ease of use and efficiency: users do not need a deep programming background and can complete the design and implementation of complex models with simple drag-and-drop operations. This greatly expands the reach of AI technology, allowing more non-technical people to participate in the development of AI models. At the same time, the method performs well in algorithm integration, supporting many different types of algorithms working together, which further improves the performance and applicability of the resulting models. In the experimental part, we performed several performance tests on the method. The results show that, compared with traditional model-construction methods, this method makes more efficient use of computing resources and greatly shortens model training time. In addition, the system-generated model-service interface has been optimized for high availability and scalability and can adapt to the needs of different application scenarios.
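As a rough illustration of the drag-and-drop idea described above, the connected components can be viewed as a small dependency graph that is executed in order once the user wires it up. This sketch is not the authors' system; the component names, wiring format, and executor are invented for illustration.

```python
# Hypothetical sketch of a low-code pipeline: named components connected
# into a graph, then executed in dependency order.

def run_pipeline(components, connections, inputs):
    """Execute components once their upstream value is available.

    components:  {name: callable taking one value}
    connections: {downstream_name: upstream_name}
    inputs:      {name: initial value} for source components
    """
    results = dict(inputs)
    remaining = [n for n in components if n not in results]
    while remaining:
        for name in list(remaining):
            upstream = connections.get(name)
            if upstream in results:
                results[name] = components[name](results[upstream])
                remaining.remove(name)
    return results

# Example: load -> normalize -> train, wired as a user might drag-and-drop.
pipeline = {
    "normalize": lambda xs: [x / max(xs) for x in xs],
    "train": lambda xs: sum(xs) / len(xs),  # stand-in for a training step
}
wiring = {"normalize": "load", "train": "normalize"}
out = run_pipeline(pipeline, wiring, {"load": [2.0, 4.0, 8.0]})
```

A real system would add cycle detection, typed ports, and API generation on top of such an executor.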

Keywords: low-code, model building, artificial intelligence, algorithm integration, model deployment

Procedia PDF Downloads 29
1416 Perspectives and Challenges of a Functional Bread with Yeast Extract to Improve the Human Diet

Authors: Cláudia Patrocínio, Beatriz Fernandes, Ana Filipa Pires

Abstract:

Background: Mirror therapy (MT) is used to improve motor function after stroke. During MT, a mirror is placed between the two upper limbs (UL), reflecting movements of the non-affected side as if they were movements of the affected side. Objectives: The aim of this review is to analyze the evidence on the effectiveness of MT in the recovery of UL function in the chronic post-stroke population. Methods: The literature search was carried out in PubMed, ISI Web of Science, and the PEDro database. Inclusion criteria: a) studies that include individuals diagnosed with stroke for at least 6 months; b) intervention with MT in the UL, alone or compared with other interventions; c) articles published until 2023; d) articles published in English or Portuguese; e) randomized controlled studies. Exclusion criteria: a) animal studies; b) studies that do not provide a detailed description of the intervention; c) studies using central electrical stimulation. The methodological quality of the included studies was assessed using the Physiotherapy Evidence Database (PEDro) scale, and studies scoring < 4 on the PEDro scale were excluded. Eighteen studies met all the inclusion criteria. Main results and conclusions: The PEDro scores of the included studies ranged from 5 to 8. One article compared muscular strength training (MST) with and without MT; four articles compared MT with conventional therapy (CT); one study compared extracorporeal shock therapy (EST) with and without MT; another compared functional electrical stimulation (FES), MT, and biofeedback; three studies compared MT with mesh glove (MG) or sham therapy; five articles compared bimanual exercises with and without MT; and three studies compared MT with virtual reality (VR) or robot training (RT). Changes in function and structure (an International Classification of Functioning, Disability and Health parameter) were assessed in each article mainly using the Fugl-Meyer Assessment-Upper Limb scale, while activity and participation (also International Classification of Functioning, Disability and Health parameters) were evaluated using different scales in each study. Globally, positive results were seen in these parameters. The results suggest that MT combined with other therapies is more effective for motor recovery and function of the affected UL than those techniques alone, although the effects were modest in most of the included studies. There was also a more significant improvement in the distal movements of the affected hand than in the rest of the UL.

Keywords: physical therapy, mirror therapy, chronic stroke, upper limb, hemiplegia

Procedia PDF Downloads 53
1415 Opinion Mining to Extract Community Emotions on Covid-19 Immunization Possible Side Effects

Authors: Yahya Almurtadha, Mukhtar Ghaleb, Ahmed M. Shamsan Saleh

Abstract:

The world witnessed a fierce attack from the Covid-19 virus, which affected public life socially, economically, in terms of health, and psychologically. The world's governments tried to confront the pandemic by imposing a number of precautionary measures, such as general closures, curfews, and social distancing. Scientists also made strenuous efforts to develop an effective vaccine to train the immune system to develop antibodies to combat the virus, thereby reducing its symptoms and limiting its spread. Artificial intelligence, alongside researchers and medical authorities, accelerated the vaccine development process through big data processing and simulation. On the other hand, one of the most important negative impacts of Covid-19 was the state of anxiety and fear caused by the spread of rumors through social media, which prompted governments to try to reassure the public with the available means. This study proposes using sentiment analysis (also known as opinion mining) and deep learning as efficient artificial intelligence techniques to retrieve public tweets from Twitter and then analyze them automatically to extract opinions, expressions, and feelings, negative or positive, about the symptoms people may feel after vaccination. Sentiment analysis is characterized by its ability to access what the public posts on social media in record time and at a lower cost than traditional means such as questionnaires and interviews, not to mention the accuracy of the information, as it comes from what the public expresses voluntarily.
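To make the extraction step concrete, a minimal lexicon-based scorer can stand in for the deep-learning classifier the abstract describes: each tweet is scored by the balance of positive and negative symptom-related words. The word lists and example tweets below are illustrative, not the study's data.

```python
# Minimal lexicon-based sentiment sketch (a stand-in for the deep-learning
# model described above; words and example tweets are invented).

POSITIVE = {"safe", "fine", "relieved", "mild"}
NEGATIVE = {"fever", "pain", "headache", "worried"}

def sentiment(tweet):
    """Return a score in [-1, 1]: positive minus negative word share."""
    words = [w.strip(".,!?").lower() for w in tweet.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

tweets = [
    "Felt fine after the jab, just a mild headache.",
    "Two days of fever and pain, worried about side effects.",
]
scores = [sentiment(t) for t in tweets]
```

A production pipeline would replace the fixed lexicon with a trained model and add tokenization robust to hashtags, emoji, and negation.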

Keywords: deep learning, opinion mining, natural language processing, sentiment analysis

Procedia PDF Downloads 171
1414 AI Applications in Accounting: Transforming Finance with Technology

Authors: Alireza Karimi

Abstract:

Artificial Intelligence (AI) is reshaping various industries, and accounting is no exception. With the ability to process vast amounts of data quickly and accurately, AI is revolutionizing how financial professionals manage, analyze, and report financial information. In this article, we will explore the diverse applications of AI in accounting and its profound impact on the field. Automation of Repetitive Tasks: One of the most significant contributions of AI in accounting is automating repetitive tasks. AI-powered software can handle data entry, invoice processing, and reconciliation with minimal human intervention. This not only saves time but also reduces the risk of errors, leading to more accurate financial records. Pattern Recognition and Anomaly Detection: AI algorithms excel at pattern recognition. In accounting, this capability is leveraged to identify unusual patterns in financial data that might indicate fraud or errors. AI can swiftly detect discrepancies, enabling auditors and accountants to focus on resolving issues rather than hunting for them. Real-Time Financial Insights: AI-driven tools, using natural language processing and computer vision, can process documents faster than ever. This enables organizations to have real-time insights into their financial status, empowering decision-makers with up-to-date information for strategic planning. Fraud Detection and Prevention: AI is a powerful tool in the fight against financial fraud. It can analyze vast transaction datasets, flagging suspicious activities and reducing the likelihood of financial misconduct going unnoticed. This proactive approach safeguards a company's financial integrity. Enhanced Data Analysis and Forecasting: Machine learning, a subset of AI, is used for data analysis and forecasting. By examining historical financial data, AI models can provide forecasts and insights, aiding businesses in making informed financial decisions and optimizing their financial strategies. 
Artificial Intelligence is fundamentally transforming the accounting profession. From automating mundane tasks to enhancing data analysis and fraud detection, AI is making financial processes more efficient, accurate, and insightful. As AI continues to evolve, its role in accounting will only become more significant, offering accountants and finance professionals powerful tools to navigate the complexities of modern finance. Embracing AI in accounting is not just a trend; it's a necessity for staying competitive in the evolving financial landscape.
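The anomaly-detection idea sketched above can be illustrated with the simplest possible rule: flag any transaction whose amount deviates strongly from the historical mean. The z-score threshold and ledger values below are illustrative stand-ins for the AI models discussed in the abstract.

```python
# Illustrative anomaly detection over transaction amounts: a simple
# z-score rule standing in for the AI-based fraud-detection models above.
import statistics

def flag_anomalies(amounts, threshold=3.0):
    """Return indices of amounts more than `threshold` standard
    deviations away from the mean of the series."""
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    return [i for i, a in enumerate(amounts)
            if stdev > 0 and abs(a - mean) / stdev > threshold]

# A toy ledger with one wildly out-of-range payment.
ledger = [102.0, 98.5, 101.2, 99.8, 100.4, 5000.0, 97.9]
suspicious = flag_anomalies(ledger, threshold=2.0)
```

Real systems learn the notion of "normal" per account and per merchant rather than using a single global threshold.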

Keywords: artificial intelligence, accounting automation, financial analysis, fraud detection, machine learning in finance

Procedia PDF Downloads 63
1413 Optimization of Vertical Axis Wind Turbine

Authors: C. Andreu Sabater, D. Drago, C. Key-aberg, W. Moukrim, B. Naccache

Abstract:

The present study concerns the optimization of a new vertical-axis wind turbine system associated with a dynamoelectric motor. The system is composed of three Savonius wind turbines arranged in an equilateral triangle. The idea is to propose a new wind turbine concept through a technical approach that achieves a specific power never obtained before and, therefore, a significant reduction in installation costs. In this work, different wind flows across the system have been simulated, parameters precisely defined, and relations established between them. This will allow the optimal rotor specific power for a given volume to be defined. Calculations have been developed with classical Savonius dimensions.
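As a back-of-the-envelope illustration of the "specific power for a given volume" figure discussed above, the aerodynamic power of each rotor can be divided by the volume the rotors occupy. The power coefficient (Cp = 0.18, a typical Savonius value) and the dimensions below are illustrative, not the study's values.

```python
# Sketch of a specific-power estimate for a three-rotor Savonius system.
import math

def savonius_power(v_wind, height, diameter, rho=1.225, cp=0.18):
    """Aerodynamic power (W) of one rotor: P = 0.5 * rho * A * v^3 * Cp,
    with swept area A = height * diameter for a Savonius rotor."""
    return 0.5 * rho * (height * diameter) * v_wind ** 3 * cp

def specific_power(v_wind, height, diameter, n_rotors=3):
    """Power per unit of rotor volume (W/m^3) for the three-rotor layout."""
    total_power = n_rotors * savonius_power(v_wind, height, diameter)
    rotor_volume = n_rotors * math.pi * (diameter / 2) ** 2 * height
    return total_power / rotor_volume

# Example: 2 m tall, 1 m diameter rotors in a 10 m/s wind.
sp = specific_power(10.0, 2.0, 1.0)
```

The study's optimization would vary geometry and arrangement to maximize this ratio; the cubic dependence on wind speed is why siting dominates the result.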

Keywords: VAWT, savonius, specific power, optimization, weibull

Procedia PDF Downloads 330
1412 Web-Based Decision Support Systems and Intelligent Decision-Making: A Systematic Analysis

Authors: Serhat Tüzün, Tufan Demirel

Abstract:

Decision Support Systems (DSS) have been investigated by researchers and technologists for more than 35 years. This paper analyses the developments in the architecture and software of these systems and provides a systematic analysis of different Web-based DSS approaches and Intelligent Decision-making Technologies (IDT), with suggestions for future studies. The DSS literature begins with the building of model-oriented DSS in the late 1960s, theory developments in the 1970s, and the implementation of financial planning systems and group DSS in the early and mid-1980s. It then documents the origins of executive information systems, online analytical processing (OLAP), and business intelligence. Web-based DSS emerged in the mid-1990s, and since the beginning of the new millennium, intelligence has been the main focus of DSS studies. Web-based technologies are having a major impact on the design, development, and implementation processes for all types of DSS and are being utilized for the development of DSS tools by leading developers of decision support technologies. Major companies are encouraging their customers to port their DSS applications, such as data mining, customer relationship management (CRM), and OLAP systems, to a web-based environment. Similarly, real-time data fed from manufacturing plants now help floor managers make decisions regarding production adjustment, ensuring that high-quality products are produced and delivered. Web-based DSS are being employed by organizations as decision aids for employees as well as customers. A common usage has been to assist customers in configuring products and services according to their needs: such systems allow individual customers to design their own products by choosing from a menu of attributes, components, prices, and delivery options. The Intelligent Decision-making Technologies (IDT) domain is a fast-growing area of research that integrates various aspects of computer science and information systems. It includes intelligent systems, intelligent technology, intelligent agents, artificial intelligence, fuzzy logic, neural networks, machine learning, knowledge discovery, computational intelligence, data science, big data analytics, inference engines, recommender systems or engines, and a variety of related disciplines. Innovative applications that emerge using IDT often have a significant impact on decision-making processes in government, industry, business, and academia. This is particularly pronounced in finance, accounting, healthcare, computer networks, real-time safety monitoring, and crisis response systems; IDT is likewise commonly used in military decision-making systems, security, marketing, stock market prediction, and robotics. Even though many research studies have been conducted on DSS, a systematic analysis of the subject is still missing. To address this gap, this paper reviews recent articles on DSS. The literature has been reviewed in depth and, by classifying previous studies according to their approaches, a taxonomy for DSS has been prepared. With the aid of this taxonomic review and recent developments in the field, the study analyzes future trends in decision support systems.

Keywords: decision support systems, intelligent decision-making, systematic analysis, taxonomic review

Procedia PDF Downloads 279
1411 Artificial Intelligence in Bioscience: The Next Frontier

Authors: Parthiban Srinivasan

Abstract:

With recent advances in computational power and access to sufficient data in the biosciences, artificial intelligence methods are increasingly being used in drug discovery research. These methods are essentially a series of advanced statistics-based exercises that review the past to indicate the likely future. Our goal is to develop a model that accurately predicts biological activity and toxicity parameters for novel compounds. We have compiled a robust library of over 150,000 chemical compounds with different pharmacological properties from the literature and public-domain databases. The compounds are stored in the simplified molecular-input line-entry system (SMILES), a commonly used text encoding for organic molecules. We utilize an automated process to generate an array of numerical descriptors (features) for each molecule; redundant and irrelevant descriptors are eliminated iteratively. Our prediction engine is based on a portfolio of machine learning algorithms, among which we found the Random Forest algorithm to be the better choice for this analysis. By averaging across a large number of randomized decision trees, we captured non-linear relationships in the data and formed a prediction model with reasonable accuracy. Our next step is to apply a deep neural network (DNN) algorithm to predict the biological activity and toxicity properties; we expect the DNN to give better results and improve prediction accuracy. This presentation will review these prominent machine learning and deep learning methods and our implementation protocols, and discuss the usefulness of these techniques in biomedical and health informatics.
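The descriptor-generation step can be illustrated with a deliberately naive example: turning a SMILES string into a small numeric feature vector by counting structural tokens. Real pipelines use cheminformatics toolkits that compute hundreds of physico-chemical descriptors; the token counts below are a toy stand-in, not the study's features.

```python
# Toy descriptor extraction from a SMILES string by character counting.
# Naive on purpose: e.g. "Cl" (chlorine) would also count toward carbons.

def smiles_descriptors(smiles):
    """Count a few coarse structural features directly from the text."""
    return {
        "carbons": sum(smiles.count(c) for c in ("C", "c")),   # aliphatic + aromatic
        "nitrogens": sum(smiles.count(c) for c in ("N", "n")),
        "oxygens": sum(smiles.count(c) for c in ("O", "o")),
        "branches": smiles.count("("),        # branch openings
        "double_bonds": smiles.count("="),
    }

# Aspirin: CC(=O)OC1=CC=CC=C1C(=O)O
features = smiles_descriptors("CC(=O)OC1=CC=CC=C1C(=O)O")
```

Vectors like this, one per compound, form the feature matrix on which an ensemble of randomized decision trees can then be trained.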

Keywords: deep learning, drug discovery, health informatics, machine learning, toxicity prediction

Procedia PDF Downloads 356
1410 The Importance of Visual Communication in Artificial Intelligence

Authors: Manjitsingh Rajput

Abstract:

Visual communication plays an important role in artificial intelligence (AI) because it enables machines to understand and interpret visual information, much as humans do. This abstract explores the importance of visual communication in AI, with emphasis on applications such as computer vision, object recognition, image classification, and autonomous systems, and on the deep learning techniques and neural networks that underpin visual understanding. It also discusses the challenges facing visual interfaces for AI, such as data scarcity, domain adaptation, and interpretability, as well as the integration of visual communication with other modalities such as natural language processing and speech recognition. The methodology examines the importance of visual communication in AI development and implementation, highlighting its potential to enhance the effectiveness and accessibility of AI systems, and provides a comprehensive approach to integrating visual elements into AI systems to make them more user-friendly and efficient. In conclusion, visual communication is crucial in AI systems for object recognition, facial analysis, and augmented reality, but challenges such as data quality, interpretability, and ethics must be addressed. Visual communication enhances user experience, decision-making, accessibility, and collaboration, and developers can integrate visual elements to build efficient and accessible AI systems.

Keywords: visual communication AI, computer vision, visual aid in communication, essence of visual communication

Procedia PDF Downloads 95
1409 Don't Just Guess and Slip: Estimating Bayesian Knowledge Tracing Parameters When Observations Are Scant

Authors: Michael Smalenberger

Abstract:

Intelligent tutoring systems (ITS) are computer-based platforms which can incorporate artificial intelligence to provide step-by-step guidance as students practice problem-solving skills. ITS can replicate and even exceed some benefits of one-on-one tutoring, foster transactivity in collaborative environments, and lead to substantial learning gains when used to supplement the instruction of a teacher or when used as the sole method of instruction. A common facet of many ITS is their use of Bayesian Knowledge Tracing (BKT) to estimate parameters necessary for the implementation of the artificial intelligence component, and for the probability of mastery of a knowledge component relevant to the ITS. While various techniques exist to estimate these parameters and probability of mastery, none directly and reliably ask the user to self-assess these. In this study, 111 undergraduate students used an ITS in a college-level introductory statistics course for which detailed transaction-level observations were recorded, and users were also routinely asked direct questions that would lead to such a self-assessment. Comparisons were made between these self-assessed values and those obtained using commonly used estimation techniques. Our findings show that such self-assessments are particularly relevant at the early stages of ITS usage while transaction level data are scant. Once a user’s transaction level data become available after sufficient ITS usage, these can replace the self-assessments in order to eliminate the identifiability problem in BKT. We discuss how these findings are relevant to the number of exercises necessary to lead to mastery of a knowledge component, the associated implications on learning curves, and its relevance to instruction time.
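For context, the BKT update the abstract refers to conditions the current mastery estimate on each observed response using the guess and slip parameters, then applies the learning-transition probability. The parameter values and response sequence below are illustrative, not estimates from the study.

```python
# Standard Bayesian Knowledge Tracing step (Corbett & Anderson style).
# Parameter values here are illustrative defaults, not the study's estimates.

def bkt_update(p_mastery, correct, p_guess=0.2, p_slip=0.1, p_transit=0.15):
    """Condition P(mastery) on one observed response, then apply the
    learning transition: p' = posterior + (1 - posterior) * p_transit."""
    if correct:
        evidence = p_mastery * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_mastery) * p_guess)
    else:
        evidence = p_mastery * p_slip
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - p_guess))
    return posterior + (1 - posterior) * p_transit

# Trace mastery over a short response sequence (1 = correct), starting from
# a prior that could come from a self-assessment, as the study suggests
# for early ITS usage when transaction data are scant.
p = 0.3
for response in (1, 1, 0, 1):
    p = bkt_update(p, response)
```

The identifiability problem mentioned above arises because different (guess, slip, prior) combinations can produce near-identical response likelihoods; a self-assessed prior pins down one of these degrees of freedom.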

Keywords: Bayesian Knowledge Tracing, Intelligent Tutoring System, in vivo study, parameter estimation

Procedia PDF Downloads 172
1408 Centrality and Patent Impact: Coupled Network Analysis of Artificial Intelligence Patents Based on Co-Cited Scientific Papers

Authors: Xingyu Gao, Qiang Wu, Yuanyuan Liu, Yue Yang

Abstract:

In the era of the knowledge economy, the relationship between scientific knowledge and patents has garnered significant attention. Understanding the intricate interplay between the foundations of science and technological innovation has emerged as a pivotal challenge for researchers and policymakers alike. This study establishes a coupled network of artificial intelligence patents based on co-cited scientific papers. Leveraging centrality metrics from network analysis offers a fresh perspective on how information flow and knowledge sharing within the network shape patent impact. The study first obtained patent numbers for 446,890 granted US AI patents for the years 2002-2020 from the United States Patent and Trademark Office’s artificial intelligence patent database. Specific information on these patents was then acquired through the Lens patent retrieval platform. Additionally, scientific non-patent references (SNPRs) were searched and deduplicated in the Web of Science database, yielding 184,603 patents that cited 37,467 unique SNPRs. From the scientific papers co-cited in patent backward citations, the study then constructs a coupled network of 59,379 artificial intelligence patents. In this network, nodes represent patents, and an edge connects two patents whenever they reference the same scientific papers; nodes and edges together constitute the patent coupling network. Structural characteristics such as node degree centrality, betweenness centrality, and closeness centrality are employed to assess the scientific connections between patents, while citation count serves as a quantitative metric of patent influence. Finally, a negative binomial model is employed to test the nonlinear relationship between these network structural features and patent influence.
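The coupling-network construction described above can be sketched in a few lines. This is a toy example with invented patent and paper identifiers, not the study's pipeline: two patents are coupled when they co-cite at least one SNPR, and the edge weight counts shared references.

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical patent -> cited-paper (SNPR) mapping; identifiers invented
citations = {
    "US1": {"paperA", "paperB"},
    "US2": {"paperB", "paperC"},
    "US3": {"paperD"},
}

# Invert the mapping: paper -> set of patents citing it
cited_by = defaultdict(set)
for patent, papers in citations.items():
    for paper in papers:
        cited_by[paper].add(patent)

# Every pair of patents sharing a cited paper gets an edge;
# the weight accumulates one unit per shared reference.
edges = defaultdict(int)
for patents in cited_by.values():
    for u, v in combinations(sorted(patents), 2):
        edges[(u, v)] += 1

print(dict(edges))  # → {('US1', 'US2'): 1}
```

Centrality metrics such as degree, betweenness, and closeness would then be computed on this weighted graph, e.g. with a graph library.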
The research findings indicate that network structural features such as node degree centrality, betweenness centrality, and closeness centrality exhibit inverted U-shaped relationships with patent influence. Specifically, as these centrality metrics increase, patent influence initially shows an upward trend, but once these features reach a certain threshold, patent influence starts to decline. This discovery suggests that moderate network centrality is beneficial for enhancing patent influence, while excessively high centrality may have a detrimental effect on patent influence. This finding offers crucial insights for policymakers, emphasizing the importance of encouraging moderate knowledge flow and sharing to promote innovation when formulating technology policies. It suggests that in certain situations, data sharing and integration can contribute to innovation. Consequently, policymakers can take measures to promote data-sharing policies, such as open data initiatives, to facilitate the flow of knowledge and the generation of innovation. Additionally, governments and relevant agencies can achieve broader knowledge dissemination by supporting collaborative research projects, adjusting intellectual property policies to enhance flexibility, or nurturing technology entrepreneurship ecosystems.
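An inverted U-shape of the kind reported here is typically captured by a quadratic term in the log mean of the negative binomial model: expected influence is exp(b0 + b1·c + b2·c²) with b2 < 0, so influence rises with centrality c up to a turning point and declines beyond it. The coefficients below are invented for illustration, not the study's estimates.

```python
import math

# Invented coefficients; b2 < 0 produces the inverted U
b0, b1, b2 = 1.0, 0.8, -0.1

def expected_citations(c):
    """Quadratic log-mean specification of the count model."""
    return math.exp(b0 + b1 * c + b2 * c * c)

# Turning point where influence peaks: c* = -b1 / (2 * b2)
c_star = -b1 / (2 * b2)
print(c_star)  # → 4.0

# Influence rises before the threshold and falls after it
assert expected_citations(c_star) > expected_citations(c_star - 1)
assert expected_citations(c_star) > expected_citations(c_star + 1)
```

In a fitted model, the sign of the estimated b2 and the location of c* relative to the observed centrality range are what establish the "moderate centrality is best" interpretation.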

Keywords: centrality, patent coupling network, patent influence, social network analysis

Procedia PDF Downloads 54
1407 Evaluation of National Research Motivation Evolution with Improved Social Influence Network Theory Model: A Case Study of Artificial Intelligence

Authors: Yating Yang, Xue Zhang, Chengli Zhao

Abstract:

In today's increasingly interconnected global environment, it is crucial for countries to grasp, in a timely manner, the development motivations of other countries in relevant research fields and to seize development opportunities. Motivation, as the intrinsic driving force behind actions, is abstract in nature and therefore difficult to measure and evaluate directly. Drawing on social influence network theory, a country's research motivation can be understood as the driving force behind the development of its science and technology sector, influenced simultaneously by the country itself and by other countries/regions. In response, this paper improves upon Friedkin's social influence network theory and applies it to describing motivation, constructing a dynamic alliance network and a hostile network centered on the United States and China, together with a sensitivity matrix, to remotely assess how national research motivations change under the influence of international relations. Taking artificial intelligence as a case study, the research reveals that the motivations of most countries/regions are declining, gradually shifting from a neutral attitude to a negative one. The motivation of the United States is hardly influenced by other countries/regions and remains at a high level, while the motivation of China has been rising steadily in recent years. Comparison of the results with real data shows that the model can, to some extent, reflect the trends in national motivations.
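The classical Friedkin update that such models build on can be sketched as follows: each actor's state is a weighted average of others' states plus an anchor to its own initial state, y(t+1) = a·W·y(t) + (1-a)·y(0). The actors, weight matrix, and susceptibility value below are invented for illustration; the paper's alliance/hostile networks and dynamic sensitivity matrix are extensions beyond this baseline sketch.

```python
a = 0.5                      # susceptibility to interpersonal influence
W = [[0.6, 0.3, 0.1],        # invented row-stochastic influence weights
     [0.2, 0.7, 0.1],        # (each row sums to 1; hostile ties could
     [0.1, 0.2, 0.7]]        # be modeled with negative entries)
y0 = [0.9, 0.5, -0.2]        # initial motivations (positive = favorable)

# Iterate y(t+1) = a * W @ y(t) + (1 - a) * y0 toward equilibrium
y = y0[:]
for _ in range(100):
    y = [a * sum(W[i][j] * y[j] for j in range(3)) + (1 - a) * y0[i]
         for i in range(3)]
print([round(v, 3) for v in y])
```

Because each actor stays partially anchored to y0, equilibrium states retain the initial ordering while being pulled toward the network consensus, which is the mechanism behind "motivation shifts under the influence of other countries."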

Keywords: influence network theory, remote assessment, relation matrix, dynamic sensitivity matrix

Procedia PDF Downloads 68