Search results for: cosmic intelligence
1095 Probing Extensive Air Shower Primaries and Their Interactions by Combining Individual Muon Tracks and Shower Depth
Authors: Moon Moon Devi, Ran Budnik
Abstract:
The current large-area cosmic ray surface detector arrays typically measure only the net flux and arrival time of the charged particles produced in an extensive air shower (EAS). Measurement of the individual charged particles at a surface array would provide additional distinguishing parameters to identify the primary and to map the very high energy interactions in the upper layers of the atmosphere. In turn, these may probe anomalies in QCD interactions at energies beyond the reach of current accelerators. Recent attempts at studying individual muon tracks are limited in their expandability to larger arrays and can only probe primary particles with energies up to about 10^15.5 eV. New developments in detector technology allow for a realistic cost of large-area detectors, however with limitations on energy resolution, directional information, and dynamic range. In this study, we perform a CORSIKA-based simulation to combine the energy spectrum and lateral spread of the muons with the longitudinal depth (Xmax) of an EAS initiated by a primary at ultra high energies (10^16–10^19 eV). Using proton and iron as the shower primaries, we show that the muon observables and Xmax together can be used to distinguish the primary. This study can be used to design a future detector for the surface array, which will be able to enhance our knowledge of primaries and QCD interactions.
Keywords: ultra high energy extensive air shower, muon tracking, air shower primaries, QCD interactions
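As a rough illustration of the separation idea (not the study's actual CORSIKA pipeline), the following sketch trains a toy classifier on synthetic Xmax and muon-number features; all numerical values are invented placeholders.

```python
# Illustrative sketch: a toy classifier combining Xmax with a muon observable
# to separate proton- and iron-initiated showers. The distributions below are
# synthetic stand-ins for CORSIKA output, not real simulation data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Iron showers develop higher in the atmosphere (smaller Xmax) and are muon-rich.
xmax_p, xmax_fe = rng.normal(750, 60, n), rng.normal(650, 40, n)    # g/cm^2
nmu_p, nmu_fe = rng.normal(1.0, 0.15, n), rng.normal(1.4, 0.15, n)  # relative muon number

X = np.vstack([np.column_stack([xmax_p, nmu_p]),
               np.column_stack([xmax_fe, nmu_fe])])
y = np.array([0] * n + [1] * n)  # 0 = proton, 1 = iron

clf = LogisticRegression().fit(X, y)
print("separation accuracy on training set:", clf.score(X, y))
```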
Procedia PDF Downloads 228

1094 AI Software Algorithms for Drivers Monitoring within Vehicles Traffic - SiaMOTO
Authors: Ioan Corneliu Salisteanu, Valentin Dogaru Ulieru, Mihaita Nicolae Ardeleanu, Alin Pohoata, Bogdan Salisteanu, Stefan Broscareanu
Abstract:
Creating a personalized statistic for an individual within the population using IT systems, based on the searches and intercepted spheres of interest they manifest, is just one 'atom' of the artificial intelligence analysis network. However, the ability to generate statistics based on individual data intercepted from large demographic areas leads to reasoning like that of a human mind with global strategic ambitions. The DiaMOTO device is a technical sensory system that allows the interception of car events caused by a driver, positioning them in time and space. The device's connection to the vehicle creates a source of data whose analysis can build psychological and behavioural profiles of the drivers involved. The SiaMOTO system collects data from many vehicles equipped with DiaMOTO, driven by many different drivers, each with a unique fingerprint in their approach to driving. In this paper, we explain the software infrastructure of the SiaMOTO system, a system designed to monitor and improve driving behaviour, as well as the criteria and algorithms underlying the intelligent analysis process.
Keywords: artificial intelligence, data processing, driver behaviour, driver monitoring, SiaMOTO
Procedia PDF Downloads 91

1093 The Role of Marketing Information System on Decision-Making: An Applied Study on Algeria Telecoms Mobile "MOBILIS"
Authors: Benlakhdar Mohamed Larbi, Yagoub Asma
Abstract:
Purpose: This study aims at highlighting the significance and importance of utilizing a marketing information system (MKIS) in decision-making, by clarifying the need for quick and efficient decision-making due to time savings and the prevention of duplicated work. Design/methodology/approach: The study shows the role of each part of MKIS in developing marketing strategy, which presents a real challenge to individuals and institutions in an era characterized by uncertainty, and clarifies the importance of each part separately, depending on the decision type and the nature of the situation. The empirical research was conducted by means of questionnaires evaluated by specialized experts. Correlation analysis was employed to test the validity of the procedure. Results: The empirical findings confirmed positive relationships between the level of utilizing and adopting decision support systems and marketing intelligence and the success of organizational decision-making, providing the organization with a competitive advantage by allowing it to solve problems. Originality/value: The study offers a better understanding of performance, treating increased market share as an organizational decision outcome based on the marketing information system.
Keywords: database, marketing research, marketing intelligence, decision support system, decision-making
Procedia PDF Downloads 330

1092 Leveraging Natural Language Processing for Legal Artificial Intelligence: A Longformer Approach for Taiwanese Legal Cases
Abstract:
Legal artificial intelligence (LegalAI) has seen increasing application within legal systems, propelled by advancements in natural language processing (NLP). Compared with general documents, legal case documents are typically long text sequences with intrinsic logical structures. Most existing language models have difficulty understanding the long-distance dependencies between different structures. Another unique challenge is that while the Judiciary of Taiwan has released legal judgments from various levels of courts over the years, there remains a significant obstacle in the lack of labeled datasets. This deficiency makes it difficult to train models with strong generalization capabilities, as well as to accurately evaluate model performance. To date, models in Taiwan have yet to be specifically trained on judgment data. Given these challenges, this research proposes a Longformer-based pre-trained language model explicitly devised for retrieving similar judgments in Taiwanese legal documents. This model is trained on a self-constructed dataset, which this research has independently labeled to measure judgment similarities, thereby addressing the void left by the lack of an existing labeled dataset for Taiwanese judgments. This research adopts strategies such as early stopping and gradient clipping to prevent overfitting and manage gradient explosion, respectively, thereby enhancing the model's performance. The model is evaluated using both the dataset and the Average Entropy of Offense-charged Clustering (AEOC) metric, which utilizes the notion of similar case scenarios within the same type of legal cases. Our experimental results illustrate the model's significant advancements in handling similarity comparisons within extensive legal judgments. By enabling more efficient retrieval and analysis of legal case documents, our model holds the potential to facilitate legal research, aid legal decision-making, and contribute to the further development of LegalAI in Taiwan.
Keywords: legal artificial intelligence, computation and language, language model, Taiwanese legal cases
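A minimal sketch of the two training safeguards the abstract names, gradient clipping and early stopping, assuming a standard PyTorch loop with a generic model standing in for the Longformer; the loss function and data handling are placeholders.

```python
# Minimal sketch of gradient clipping plus early stopping in a PyTorch loop.
# The model is a generic module standing in for the Longformer-based model.
import torch

def train(model, train_loader, val_loss_fn, optimizer, max_epochs=20,
          patience=3, clip_norm=1.0):
    best_val, epochs_without_improvement = float("inf"), 0
    for epoch in range(max_epochs):
        model.train()
        for batch, target in train_loader:
            optimizer.zero_grad()
            loss = torch.nn.functional.mse_loss(model(batch), target)
            loss.backward()
            # Gradient clipping: cap the global gradient norm to manage explosion.
            torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
            optimizer.step()
        val_loss = val_loss_fn(model)
        # Early stopping: halt once validation loss stops improving.
        if val_loss < best_val:
            best_val, epochs_without_improvement = val_loss, 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break
```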
Procedia PDF Downloads 72

1091 Getting to Know the Enemy: Utilization of Phone Record Analysis Simulations to Uncover a Target's Personal Life Attributes
Authors: David S. Byrne
Abstract:
The purpose of this paper is to understand how phone record analysis can enable the identification of subjects in communication with the target of a terrorist plot. This study also sought to understand the advantages of implementing simulations to develop the skills of future intelligence analysts and enhance national security. Through the examination of phone reports, which in essence consist of the call traffic of incoming and outgoing numbers (and not of listening to calls or reading the content of text messages), patterns can be uncovered that point toward members of a criminal group and planned activities. Through temporal and frequency analysis, conclusions were drawn to offer insights into the identity of participants and the potential scheme being undertaken. The challenge lies in the accurate identification of the users of the phones in contact with the target. Often investigators rely on proprietary databases and open sources to accomplish this task; however, it is difficult to ascertain the accuracy of the information found. Thus, this paper poses two research questions: first, how effective are freely available web sources of information at determining the actual identity of callers? Second, does the identity of the callers enable an understanding of the lifestyle and habits of the target? The methodology for this research consisted of the analysis of the call detail records of the author's personal phone activity spanning the period of a year, combined with the hypothetical premise that the owner of said phone was the leader of a terrorist cell. The goal was to reveal the identity of his accomplices and understand how his personal attributes can further paint a picture of the target's intentions. The results of the study were interesting: nearly 80% of the calls were identified with over a 75% accuracy rating via data mining of open sources. The suspected terrorist's inner circle was recognized, including relatives and potential collaborators, as well as financial institutions [money laundering], restaurants [meetings], a sporting goods store [purchase of supplies], and airlines and hotels [travel itinerary]. The outcome of this research showed the benefits of cellphone analysis without more intrusive and time-consuming methodologies, though it may be instrumental for potential surveillance, interviews, and developing probable cause for wiretaps. Furthermore, this research highlights the importance of building the skills of future intelligence analysts through phone record analysis via simulations; hands-on learning in this case study emphasizes the development of the competencies necessary to improve investigations overall.
Keywords: hands-on learning, intelligence analysis, intelligence education, phone record analysis, simulations
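A hedged sketch of the temporal and frequency analysis described above, applied to a call detail record export in pandas; the file name and column names are assumptions, since real exports vary by carrier.

```python
# Sketch of frequency and temporal analysis over a call detail record (CDR).
# Assumed columns: timestamp, number, direction, duration_s (hypothetical).
import pandas as pd

cdr = pd.read_csv("call_detail_records.csv",   # hypothetical export file
                  parse_dates=["timestamp"])

# Frequency analysis: which numbers does the target contact most often?
top_contacts = cdr["number"].value_counts().head(10)

# Temporal analysis: when do the calls happen (hour of day, day of week)?
by_hour = cdr.groupby(cdr["timestamp"].dt.hour).size()
by_weekday = cdr.groupby(cdr["timestamp"].dt.day_name()).size()

print(top_contacts)   # candidates for open-source identification
print(by_hour)        # routine patterns hint at work, home, and meeting times
```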
Procedia PDF Downloads 14

1090 AI and the Future of Misinformation: Opportunities and Challenges
Authors: Noor Azwa Azreen Binti Abd. Aziz, Muhamad Zaim Bin Mohd Rozi
Abstract:
Moving towards the 4th Industrial Revolution, artificial intelligence (AI) is now more popular than ever. The subject is gaining significance every day and continually expanding, often merging with other fields. Rather than remaining passive observers, we benefit from understanding modern technology by delving into its inner workings. However, in a world teeming with digital information, the impact of AI on the spread of disinformation has garnered significant attention. The dissemination of inaccurate or misleading information is referred to as misinformation, posing a serious threat to democratic society, public debate, and individual decision-making. This article delves into the connection between AI and the dissemination of false information, exploring its potential, risks, and ethical issues as AI technology advances. The rise of AI has ushered in a new era in the dissemination of misinformation, as AI-driven technologies are increasingly responsible for curating, recommending, and amplifying information on online platforms. While AI holds the potential to enhance the detection and mitigation of misinformation through natural language processing and machine learning, it also raises concerns about the amplification and propagation of false information. AI-powered deepfake technology, for instance, can generate hyper-realistic videos and audio recordings, making it increasingly challenging to discern fact from fiction.
Keywords: artificial intelligence, digital information, disinformation, ethical issues, misinformation
Procedia PDF Downloads 92

1089 Fundamental Theory of the Evolution Force: Gene Engineering utilizing Synthetic Evolution Artificial Intelligence
Authors: L. K. Davis
Abstract:
The effects of the evolution force are observable in nature at all structural levels, ranging from small molecular systems to enormous biospheric systems. However, the evolution force and the work associated with the formation of biological structures have yet to be described mathematically or theoretically. In addressing this conundrum, we consider evolution from a unique perspective and introduce the "Fundamental Theory of the Evolution Force" (FTEF). We utilized synthetic evolution artificial intelligence (SYN-AI) to identify genomic building blocks and to engineer 14-3-3 ζ docking proteins by transforming gene sequences into time-based DNA codes derived from protein hierarchical structural levels. The aforementioned served as templates for random DNA hybridizations and genetic assembly. The application of hierarchical DNA codes allowed us to fast-forward evolution while dampening the effect of point mutations. Natural selection was performed at each hierarchical structural level, and mutations were screened using BLOSUM80 mutation frequency-based algorithms. Notably, SYN-AI engineered a set of three architecturally conserved docking proteins that retained the motion and vibrational dynamics of native Bos taurus 14-3-3 ζ.
Keywords: 14-3-3 docking genes, synthetic protein design, time-based DNA codes, writing DNA code from scratch
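One ingredient of the screening step can be illustrated with Biopython's bundled BLOSUM80 matrix; the acceptance threshold and rule below are illustrative assumptions, not SYN-AI's actual algorithm.

```python
# Sketch: scoring candidate point mutations against BLOSUM80 via Biopython.
# The threshold-based screening rule is an illustrative assumption.
from Bio.Align import substitution_matrices

blosum80 = substitution_matrices.load("BLOSUM80")

def screen_mutation(wild_type_aa: str, mutant_aa: str, threshold: int = 0) -> bool:
    """Accept a substitution only if its BLOSUM80 score meets the threshold."""
    return blosum80[wild_type_aa, mutant_aa] >= threshold

print(screen_mutation("L", "I"))  # conservative substitution, accepted
print(screen_mutation("W", "G"))  # drastic substitution, rejected
```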
Procedia PDF Downloads 114

1088 Automating Self-Representation in the Caribbean: AI Autoethnography and Cultural Analysis
Authors: Steffon Campbell
Abstract:
This research explores the potential of using artificial intelligence (AI) autoethnographies to study, document, explore, and understand aspects of Caribbean culture. As a digital research methodology, AI autoethnography merges computer science and technology with ethnography, providing a fresh approach to collecting and analyzing data to generate novel insights. This research investigates how AI autoethnography can best be applied to understanding the various complexities and nuances of Caribbean culture, as well as examining how technology can be a valuable tool for enriching the study of the region. By applying AI autoethnography to Caribbean studies, the research aims to produce new and innovative ways of discovering, understanding, and appreciating the Caribbean. The study found that AI autoethnographies can offer a valuable method for exploring Caribbean culture. Specifically, AI autoethnographies can facilitate experiences of self-reflection, facilitate reconciliation with the past, and provide a platform to explore and understand the cultural, social, political, and economic concerns of Caribbean people. Findings also reveal that these autoethnographies can create a space for people to reimagine and reframe the conversation around Caribbean culture by enabling them to actively participate in the process of knowledge creation. The study also finds that AI autoethnography offers the potential for cross-cultural dialogue, allowing participants to connect with one another over cultural considerations and engage in meaningful discourse.
Keywords: artificial intelligence, autoethnography, Caribbean, culture
Procedia PDF Downloads 25

1087 Emotional Artificial Intelligence and the Right to Privacy
Authors: Emine Akar
Abstract:
The majority of privacy-related regulation has traditionally focused on concepts that are perceived to be well understood or easily describable, such as certain categories of data and personal information or images. In the past century, such regulation appeared reasonably suitable for its purposes. However, technologies such as AI, combined with ever-increasing capabilities to collect, process, and store "big data", not only require calibration of these traditional understandings but may require re-thinking of entire categories of privacy law. The presentation will explain, against the background of various emerging technologies under the umbrella term "emotional artificial intelligence", why modern privacy law will need to embrace human emotions as potentially private subject matter. This argument can be made on a jurisprudential level, given that human emotions can plausibly be accommodated within the various concepts that are traditionally regarded as the underlying foundation of privacy protection, such as, for example, dignity, autonomy, and liberal values. However, the practical reasons for regarding human emotions as potentially private subject matter are perhaps more important (and very likely more convincing from the perspective of regulators). In that respect, it should be regarded as alarming that, according to most projections, the usefulness of emotional data to governments and, particularly, private companies will not only lead to radically increased processing and analysing of such data but, concerningly, to exponential growth in the collection of such data. In light of this, it is also necessary to discuss options for how regulators could address this emerging threat.
Keywords: AI, privacy law, data protection, big data
Procedia PDF Downloads 88

1086 Using Optical Character Recognition to Manage the Unstructured Disaster Data into Smart Disaster Management System
Authors: Dong Seop Lee, Byung Sik Kim
Abstract:
In the 4th Industrial Revolution, various intelligent technologies have been developed in many fields. These artificial intelligence technologies are applied in various services, including disaster management. Disaster information management does not just support disaster work; it is also the foundation of smart disaster management, enabling the retrieval of historical disaster information using artificial intelligence technology. Disaster information is one of the important elements of the entire disaster cycle. Disaster information management refers to the act of managing and processing electronic data about the disaster cycle, from occurrence through progress, response, and planning. However, information about status control, response, recovery from natural and social disaster events, etc., is mainly managed in structured and unstructured reports, which exist as handouts or hard copies. Such unstructured data is often lost or destroyed due to inefficient management, so it is necessary to manage unstructured data as disaster information. In this paper, the Optical Character Recognition (OCR) approach is used to convert handouts, hard copies, images, or reports, printed or generated by scanners, into electronic documents. The converted disaster data is then organized into the disaster code system as disaster information and stored in the disaster database system. Gathering and creating disaster information based on OCR for unstructured data is an important element of smart disaster management. In this work, recognition of Korean characters was improved to over a 90% character recognition rate by using an upgraded OCR. The recognition rate depends on the fonts, size, and special symbols of the characters; we improved it through a machine learning algorithm. The converted structured data is managed in a standardized disaster information form connected with the disaster code system, which ensures that the structured information is stored and retrieved across the entire disaster cycle, covering historical disaster progress, damages, response, and recovery. The expected outcome of this research is its application to smart disaster management and decision-making by combining artificial intelligence technologies and historical big data.
Keywords: disaster information management, unstructured data, optical character recognition, machine learning
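The OCR step can be sketched with Tesseract's Korean model via pytesseract; the file name and the downstream coding step are assumptions for illustration.

```python
# Illustrative sketch of the OCR step: converting a scanned disaster report
# into machine-readable text with Tesseract's Korean language pack.
from PIL import Image
import pytesseract

page = Image.open("disaster_report_scan.png")          # hypothetical scan
text = pytesseract.image_to_string(page, lang="kor")   # Korean model

# Downstream, the recognized text would be mapped onto the disaster code
# system and stored in the disaster database (not shown here).
print(text[:200])
```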
Procedia PDF Downloads 129

1085 A Method of Representing Knowledge of Toolkits in a Pervasive Toolroom Maintenance System
Authors: A. Mohamed Mydeen, Pallapa Venkataram
Abstract:
The learning process needs to be pervasive in order to impart quality in acquiring knowledge about a subject, making use of advances in information and communication systems. However, the pervasive learning paradigms designed so far are system-automation types and lack a truly pervasive realm. Providing a factual pervasive realm requires subtle ways of teaching and learning with system intelligence. Augmenting pervasive learning with intelligence necessitates an efficient way of representing knowledge for the system in order to give the right learning material to the learner. This paper presents a method of representing knowledge for a Pervasive Toolroom Maintenance System (PTMS), in which a learner acquires sublime knowledge about the various kinds of tools kept in the toolroom, which also helps with effective maintenance of the toolroom. First, we explicate the generic model of knowledge representation for PTMS. Second, we expound the knowledge representation for specific cases of toolkits in PTMS. We have also presented the conceptual view of knowledge representation using ontology for both generic and specific cases. Third, we have devised the relations for pervasive knowledge in PTMS. Finally, events are identified in PTMS, which are then linked with pervasive data of toolkits based on the relations formulated. The experimental environment and case studies show the accuracy and efficiency of knowledge representation of toolkits in PTMS.
Keywords: knowledge representation, pervasive computing, agent technology, ECA rules
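Since the keywords point to ECA rules, a minimal sketch of an event-condition-action rule linking a toolkit event to pervasive toolkit data might look as follows; the event names and toolkit fields are hypothetical.

```python
# Minimal sketch of an event-condition-action (ECA) rule over toolkit data.
# Event names and toolkit fields are invented for illustration only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ECARule:
    event: str                          # e.g. "toolkit_checked_out"
    condition: Callable[[dict], bool]   # predicate over toolkit data
    action: Callable[[dict], None]      # maintenance action to trigger

rules = [
    ECARule(
        event="toolkit_checked_out",
        condition=lambda tk: tk["uses_since_service"] > 50,
        action=lambda tk: print(f"Schedule maintenance for {tk['name']}"),
    )
]

def on_event(event: str, toolkit: dict) -> None:
    for rule in rules:
        if rule.event == event and rule.condition(toolkit):
            rule.action(toolkit)

on_event("toolkit_checked_out", {"name": "drill kit", "uses_since_service": 63})
```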
Procedia PDF Downloads 338

1084 Performance Evaluation of Distributed Deep Learning Frameworks in Cloud Environment
Authors: Shuen-Tai Wang, Fang-An Kuo, Chau-Yi Chou, Yu-Bin Fang
Abstract:
2016 became the year of the Artificial Intelligence explosion. AI technologies are maturing to the point that most well-known tech giants are making large investments to increase their capabilities in AI. Machine learning is the science of getting computers to act without being explicitly programmed, and deep learning is a subset of machine learning that uses deep neural networks to train a machine to learn features directly from data. Deep learning realizes many machine learning applications and expands the field of AI. At present, deep learning frameworks have been widely deployed on servers for deep learning applications in both academia and industry. In training deep neural networks, there are many standard processes and algorithms, but the performance of different frameworks can differ. In this paper, we evaluate the running performance of two state-of-the-art distributed deep learning frameworks that run training calculations in parallel over multiple GPUs and multiple nodes in our cloud environment. We evaluate the training performance of the frameworks with the ResNet-50 convolutional neural network, and we analyze the factors that determine performance in both distributed frameworks. Through the experimental analysis, we identify overheads that could be further optimized. The main contribution is that the evaluation results provide optimization directions in both performance tuning and algorithmic design.
Keywords: artificial intelligence, machine learning, deep learning, convolutional neural networks
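A common way to set up such a benchmark is data-parallel training with PyTorch's DistributedDataParallel; the abstract does not name the two frameworks evaluated, so the sketch below is a generic illustration rather than the paper's setup.

```python
# Generic sketch of multi-GPU, multi-node data-parallel ResNet-50 training.
import torch
import torch.distributed as dist
import torchvision

def main():
    # Launched via `torchrun --nproc_per_node=<gpus> script.py`
    dist.init_process_group(backend="nccl")
    local_rank = dist.get_rank() % torch.cuda.device_count()
    torch.cuda.set_device(local_rank)

    model = torchvision.models.resnet50().cuda(local_rank)
    model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

    # A training loop over an ImageNet-style loader would go here; throughput
    # (images/sec) versus GPU count is the usual framework-comparison metric.

if __name__ == "__main__":
    main()
```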
Procedia PDF Downloads 211

1083 New Advanced Medical Software Technology Challenges and Evolution of the Regulatory Framework in Expert Software, Artificial Intelligence, and Machine Learning
Authors: Umamaheswari Shanmugam, Silvia Ronchi
Abstract:
Software, artificial intelligence, and machine learning can improve healthcare through innovative and advanced technologies that can use the large amount and variety of data generated during healthcare services every day; one of the significant advantages of these new technologies is the ability to gain experience and knowledge from real-world use and to improve their performance continuously. Healthcare systems and institutions can significantly benefit because the use of advanced technologies improves the efficiency and efficacy of healthcare. Software as a medical device is stand-alone software intended to be used for patients for one or more specific medical intended uses: diagnosis, prevention, monitoring, prediction, prognosis, treatment, or alleviation of a disease or other health conditions; replacing or modifying any part of a physiological or pathological process; or managing information received from in vitro specimens derived from the human body, without achieving its principal intended action by pharmacological, immunological, or metabolic means. Software qualified as a medical device must comply with the general safety and performance requirements applicable to medical devices. These requirements are necessary to ensure high performance and quality and to protect patients' safety. The evolution and continuous improvement of software used in healthcare must take into account the increase in regulatory requirements, which are becoming more complex in each market. The gap between these advanced technologies and the new regulations is the biggest challenge for medical device manufacturers. Regulatory requirements can be considered a market barrier, as they can delay or obstruct a device's approval, but they are necessary to ensure performance, quality, and safety. At the same time, they can be a business opportunity if the manufacturer can define the appropriate regulatory strategy in advance. This abstract provides an overview of the current regulatory framework, the evolution of international requirements, and the standards applicable to medical device software in markets all over the world.
Keywords: artificial intelligence, machine learning, SaMD, regulatory, clinical evaluation, classification, international requirements, MDR, 510k, PMA, IMDRF, cyber security, health care systems
Procedia PDF Downloads 88

1082 Adolescent-Parent Relationship as the Most Important Factor in Preventing Mood Disorders in Adolescents: An Application of Artificial Intelligence to Social Studies
Authors: Elżbieta Turska
Abstract:
Introduction: One of the most difficult times in a person's life is adolescence, and the experiences in this period may shape that person's future life to a large extent. This is why many young people experience sadness, dejection, hopelessness, a sense of worthlessness, and a loss of interest in various activities and social relationships, all of which are often classified as mood disorders. As many as 15-40% of adolescents experience depressed moods; for most of them these resolve and are not carried into adulthood. However, 5-6% of those affected by mood disorders develop a depressive syndrome, and 1-3% develop full-blown clinical depression. Materials: A large questionnaire was given to 2508 students, aged 13–16 years old, and one of its parts was the Burns checklist, i.e. the standard test for identifying depressed mood. The questionnaire asked about many aspects of the students' lives and included a total of 53 questions, most of which had subquestions. It is important to note that the data suffered from many problems, the most important of which were missing data and collinearity. Aim: In order to identify the correlates of mood disorders, we built predictive models which were then trained and validated. Our aim was not to predict which students suffer from mood disorders but rather to explore the factors influencing mood disorders. Methods: The problems with the data described above practically excluded the use of classical statistical methods. For this reason, we used the following Artificial Intelligence (AI) methods: classification trees with surrogate variables, random forests, and xgboost. All analyses were carried out with the mlr package for the R programming language. Results: The predictive model built by the classification tree algorithm outperformed the other algorithms by a large margin. As a result, we were able to rank the variables (questions and subquestions from the questionnaire) from most to least influential as far as protection against mood disorders is concerned. Thirteen of the twenty most important variables reflect relationships with parents. This is a significant result both from the cognitive point of view and from the practical point of view, i.e. as far as interventions to correct mood disorders are concerned.
Keywords: mood disorders, adolescents, family, artificial intelligence
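The study ran its models through the mlr package in R; purely as an illustration in one consistent language, the sketch below shows an analogous tree-based pipeline in Python with xgboost, which tolerates missing values natively. All column names are hypothetical.

```python
# Analogous pipeline to the one described (the study itself used R's mlr):
# a tree-based model robust to missing answers, plus an importance ranking
# of the questionnaire variables. Column names are hypothetical.
import pandas as pd
from xgboost import XGBClassifier

data = pd.read_csv("questionnaire.csv")          # hypothetical export
X = data.drop(columns=["depressed_mood"])        # 53 questions + subquestions
y = data["depressed_mood"]                       # Burns checklist label (0/1)

model = XGBClassifier(n_estimators=300, max_depth=4)
model.fit(X, y)                                  # NaN answers handled natively

importance = pd.Series(model.feature_importances_, index=X.columns)
print(importance.sort_values(ascending=False).head(20))  # top-20 correlates
```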
Procedia PDF Downloads 101

1081 Deep Routing Strategy: Deep Learning based Intelligent Routing in Software Defined Internet of Things
Authors: Zabeehullah, Fahim Arif, Yawar Abbas
Abstract:
Software Defined Networking (SDN) is a next-generation networking model which simplifies traditional network complexities and improves the utilization of constrained resources. Currently, most SDN-based Internet of Things (IoT) environments use traditional network routing strategies, which work on the basis of a maximum or minimum metric value. However, IoT network heterogeneity, dynamic traffic flow, and complexity demand intelligent and self-adaptive routing algorithms, because traditional routing algorithms lack self-adaptation, intelligence, and efficient utilization of resources. To some extent, SDN, due to its flexibility and centralized control, has managed IoT complexity and heterogeneity, but Software Defined IoT (SDIoT) still lacks intelligence. To address this challenge, we propose a model called Deep Routing Strategy (DRS), which uses a deep learning algorithm to perform routing in SDIoT intelligently and efficiently. Our model uses real-time traffic for training and learning. Results demonstrate that the proposed model achieved high accuracy and a low packet loss rate during path selection, and it outperformed the benchmark routing algorithm (OSPF). Moreover, the proposed model provided encouraging results under highly dynamic traffic flow.
Keywords: SDN, IoT, DL, ML, DRS
Procedia PDF Downloads 110

1080 Artificial Intelligence-Based Chest X-Ray Test of COVID-19 Patients
Authors: Dhurgham Al-Karawi, Nisreen Polus, Shakir Al-Zaidi, Sabah Jassim
Abstract:
The management of COVID-19 patients based on chest imaging is emerging as an essential tool for evaluating the spread of the pandemic which has gripped the global community. It has already been used to monitor the situation of COVID-19 patients with respiratory issues. There has been an increase in the use of chest imaging for the medical triage of patients showing moderate to severe clinical COVID-19 features, due to the fast spread of the pandemic to all continents and communities. This article demonstrates the development of machine learning techniques for testing COVID-19 patients using Chest X-Ray (CXR) images in nearly real time, to distinguish COVID-19 infection with a significantly high level of accuracy. The testing covered a combination of different datasets of CXR images: positive COVID-19 patients, patients with viral and bacterial infections, and people with clear chests. The proposed AI scheme successfully distinguishes CXR scans of COVID-19-infected patients from CXR scans of viral and bacterial pneumonia as well as normal cases, with an average accuracy of 94.43%, sensitivity of 95%, and specificity of 93.86%. Predicted decisions are supported by visual evidence to help clinicians speed up the initial assessment of new suspected cases, especially in resource-constrained environments.
Keywords: COVID-19, chest x-ray scan, artificial intelligence, texture analysis, local binary pattern transform, Gabor filter
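The two texture descriptors named in the keywords can be sketched with scikit-image as follows; the LBP parameters and Gabor frequency are illustrative choices, not the paper's tuned values.

```python
# Sketch of LBP and Gabor texture features on a chest X-ray (scikit-image).
import numpy as np
from skimage import io
from skimage.feature import local_binary_pattern
from skimage.filters import gabor

cxr = io.imread("chest_xray.png", as_gray=True)   # hypothetical scan

# Local binary pattern: histogram of uniform patterns as a texture signature.
lbp = local_binary_pattern(cxr, P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=np.arange(11), density=True)

# Gabor filter: band-pass response capturing oriented lung texture.
gabor_real, gabor_imag = gabor(cxr, frequency=0.2)
gabor_energy = np.mean(gabor_real**2 + gabor_imag**2)

features = np.append(lbp_hist, gabor_energy)      # fed to a classifier downstream
```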
Procedia PDF Downloads 145

1079 Employee Well-being in the Age of AI: Perceptions, Concerns, Behaviors, and Outcomes
Authors: Soheila Sadeghi
Abstract:
The growing integration of Artificial Intelligence (AI) into Human Resources (HR) processes has transformed the way organizations manage recruitment, performance evaluation, and employee engagement. While AI offers numerous advantages, such as improved efficiency, reduced bias, and hyper-personalization, it raises significant concerns about employee well-being, job security, fairness, and transparency. The study examines how AI shapes employee perceptions, job satisfaction, mental health, and retention. Key findings reveal that: (a) while AI can enhance efficiency and reduce bias, it also raises concerns about job security, fairness, and privacy; (b) transparency in AI systems emerges as a critical factor in fostering trust and positive employee attitudes; and (c) AI systems can both support and undermine employee well-being, depending on how they are implemented and perceived. The research introduces an AI-employee well-being Interaction Framework, illustrating how AI influences employee perceptions, behaviors, and outcomes. Organizational strategies, such as (a) clear communication, (b) upskilling programs, and (c) employee involvement in AI implementation, are identified as crucial for mitigating negative impacts and enhancing positive outcomes. The study concludes that the successful integration of AI in HR requires a balanced approach that (a) prioritizes employee well-being, (b) facilitates human-AI collaboration, and (c) ensures ethical and transparent AI practices alongside technological advancement.
Keywords: artificial intelligence, human resources, employee well-being, job satisfaction, organizational support, transparency in AI
Procedia PDF Downloads 29

1078 General Mood and Emotional Regulation as Predictors of Bullying Behaviors among Adolescent Males: Basis for a Proposed Bullying Intervention Program
Authors: Angelyn Del Mundo
Abstract:
Bullying cases are a proliferating issue that schools need to address, which calls for effective measures to reduce bullying. The study aimed to determine which of the socio-emotional aspects of adolescent males could predict bullying. The respondents of the study were grade 10 and 11 students, selected from the names listed by teachers and guidance counselors through the Student Nomination Questionnaire. The Bullying Survey Questionnaire Checklist was answered by the respondents to identify their most observed bullying behavior. The level of their mental ability was measured with the Otis-Lennon School Ability Test, while their socio-emotional aspects were classified into two contexts, emotional intelligence and personality traits, which were determined with the Bar-On Emotional Quotient Inventory: Youth Version (BarOn EQ-i:YV) and the Five-Factor Personality Inventory-Children (FFPI-C). Results indicated that the majority of the respondents have an average level of mental ability and socio-emotional aspects. However, many students scored low to markedly low on the interpersonal scale. Furthermore, general mood and emotional regulation were found to be predictors of bullying behaviors. These findings became the basis for a proposed bullying intervention program.
Keywords: bullying, emotional intelligence, mental ability, personality traits
Procedia PDF Downloads 282

1077 The Role of Artificial Intelligence in Patent Claim Interpretation: Legal Challenges and Opportunities
Authors: Mandeep Saini
Abstract:
The rapid advancement of Artificial Intelligence (AI) is transforming various fields, including intellectual property law. This paper explores the emerging role of AI in interpreting patent claims, a critical and highly specialized area within intellectual property rights. Patent claims define the scope of legal protection granted to an invention, and their precise interpretation is crucial in determining the boundaries of the patent holder's rights. Traditionally, this interpretation has relied heavily on the expertise of patent examiners, legal professionals, and judges. However, the increasing complexity of modern inventions, especially in fields like biotechnology, software, and electronics, poses significant challenges to human interpretation. Introducing AI into patent claim interpretation raises several legal and ethical concerns. This paper addresses critical issues such as the reliability of AI-driven interpretations, the potential for algorithmic bias, and the lack of transparency in AI decision-making processes. It considers the legal implications of relying on AI, particularly regarding accountability for errors and the potential challenges to AI interpretations in court. The paper includes a comparative study of AI-driven patent claim interpretations versus human interpretations across different jurisdictions to provide a comprehensive analysis. This comparison highlights the variations in legal standards and practices, offering insights into how AI could impact the harmonization of international patent laws. The paper proposes policy recommendations for the responsible use of AI in patent law. It suggests legal frameworks that ensure AI tools complement, rather than replace, human expertise in patent claim interpretation. These recommendations aim to balance the benefits of AI with the need for maintaining trust, transparency, and fairness in the legal process. By addressing these critical issues, this research contributes to the ongoing discourse on integrating AI into the legal field, specifically within intellectual property rights. It provides a forward-looking perspective on how AI could reshape patent law, offering both opportunities for innovation and challenges that must be carefully managed to protect the integrity of the legal system.
Keywords: artificial intelligence (AI), patent claim interpretation, intellectual property rights, algorithmic bias, natural language processing, patent law harmonization, legal ethics
Procedia PDF Downloads 21

1076 Value-Based Argumentation Frameworks and Judicial Moral Reasoning
Authors: Sonia Anand Knowlton
Abstract:
As Artificial Intelligence becomes increasingly integrated into virtually every area of life, the need and interest to logically formalize the law and judicial reasoning is growing tremendously. The study of argumentation frameworks (AFs) shows promise in this respect. AFs provide a way of structuring human reasoning using a formal system of non-monotonic logic. P.M. Dung first introduced this framework and demonstrated that certain arguments must prevail and certain arguments must perish based on whether they are logically "attacked" by other arguments. Dung labelled the set of prevailing arguments the "preferred extension" of the given argumentation framework. Trevor Bench-Capon's value-based argumentation frameworks (VAFs) extended Dung's AF system by allowing arguments to derive their force from the promotion of "preferred" values. In VAF systems, the success of an attack from argument A to argument B (i.e., the triumph of argument A) requires that argument B does not promote a value that is preferred to that of argument A. There has been thorough discussion of the application of VAFs to the law within the computer science literature, mainly demonstrating that legal cases can be effectively mapped out using VAFs. This article analyses VAFs from a jurisprudential standpoint to provide a philosophical and theoretical analysis of what VAFs tell the legal community about judicial reasoning, specifically distinguishing between legal and moral reasoning. It highlights the limitations of using VAFs to account for judicial moral reasoning in theory and in practice.
Keywords: nonmonotonic logic, legal formalization, computer science, artificial intelligence, morality
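The attack-success rule quoted above is simple enough to state in code. The sketch below implements it for an invented audience and argument set; it is an illustration of Bench-Capon's rule, not a full VAF solver.

```python
# Bench-Capon's attack-success rule: an attack from A to B fails only when
# B's value is preferred to A's value by the audience. Arguments, values,
# and the preference order here are invented for illustration.
arguments = {"A": "life", "B": "property", "C": "property"}   # argument -> value
attacks = [("A", "B"), ("B", "C")]                            # (attacker, target)
audience_preference = ["life", "property"]                    # most to least preferred

def prefers(v1: str, v2: str) -> bool:
    return audience_preference.index(v1) < audience_preference.index(v2)

def attack_succeeds(attacker: str, target: str) -> bool:
    # The attack is defused only when the target promotes a strictly
    # preferred value.
    return not prefers(arguments[target], arguments[attacker])

successful = [(a, b) for a, b in attacks if attack_succeeds(a, b)]
print(successful)  # for this audience, both attacks succeed
```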
Procedia PDF Downloads 74

1075 Concept for Determining the Focus of Technology Monitoring Activities
Authors: Guenther Schuh, Christina Koenig, Nico Schoen, Markus Wellensiek
Abstract:
Identification and selection of appropriate product and manufacturing technologies are key factors for the competitiveness and market success of technology-based companies. Therefore, many companies perform technology intelligence (TI) activities to ensure the identification of evolving technologies at the right time. Technology monitoring is one of the three base activities of TI, besides scanning and scouting. As technological progress accelerates, more and more technologies are being developed. Against the background of limited resources, it is therefore necessary to focus TI activities. In this paper, we propose a concept for defining appropriate search fields for technology monitoring. This limitation of the search space leads to more concentrated monitoring activities. The concept is introduced and demonstrated through an anonymized case study conducted within an industry project at the Fraunhofer Institute for Production Technology. The described concept provides a customized monitoring approach suitable for use in technology-oriented companies, especially those that have not yet defined an explicit technology strategy. It is shown that the definition of search fields and search tasks is a suitable method for defining topics of interest and thus directing monitoring activities. Current and planned product, production, and material technologies, as well as existing skills, capabilities, and resources, form the basis of the described derivation of relevant search areas. To further improve the concept of technology monitoring, the proposed concept should be extended in future research, e.g., by defining relevant monitoring parameters.
Keywords: monitoring radar, search field, technology intelligence, technology monitoring
Procedia PDF Downloads 474

1074 Integrating AI in Education: Enhancing Learning Processes and Personalization
Authors: Waleed Afandi
Abstract:
Artificial intelligence (AI) has rapidly transformed various sectors, including education. This paper explores the integration of AI in education, emphasizing its potential to revolutionize learning processes, enhance teaching methodologies, and personalize education. We examine the historical context of AI in education, current applications, and the potential challenges and ethical considerations associated with its implementation. By reviewing a wide range of literature, this study aims to provide a comprehensive understanding of how AI can be leveraged to improve educational outcomes and the future directions of AI-driven educational innovations. Additionally, the paper discusses the impact of AI on student engagement, teacher support, and administrative efficiency. Case studies highlighting successful AI applications in diverse educational settings are presented, showcasing the practical benefits and real-world implications. The analysis also addresses potential disparities in access to AI technologies and suggests strategies to ensure equitable implementation. Through a balanced examination of the promises and pitfalls of AI in education, this study seeks to inform educators, policymakers, and technologists about the optimal pathways for integrating AI to foster an inclusive, effective, and innovative educational environment.
Keywords: artificial intelligence, education, personalized learning, teaching methodologies, educational outcomes, AI applications, student engagement, teacher support, administrative efficiency, equity in education
Procedia PDF Downloads 32

1073 Short Answer Grading Using Multi-Context Features
Authors: S. Sharan Sundar, Nithish B. Moudhgalya, Nidhi Bhandari, Vineeth Vijayaraghavan
Abstract:
Automatic short answer grading is one of the prime applications of artificial intelligence in education. Several approaches have been explored over the years, involving selective handcrafted features, graphical matching techniques, concept identification and mapping, complex deep frameworks, sentence embeddings, etc. However, keeping in mind the real-world application of the task, these solutions carry an overhead in terms of computation and resources in achieving high performance. In this work, a simple and effective solution is proposed, making use of elemental features based on statistical and linguistic properties and word-based similarity measures, in conjunction with tree-based classifiers and regressors. The results for classification tasks show improvements ranging from 1% to 30%, while the regression task shows a stark improvement of 35%. The authors attribute these improvements to the addition of multiple similarity scores, providing an ensemble of scoring criteria to the models. The authors also believe the work demonstrates that classical natural language processing techniques and simple machine learning models can be used to achieve high results for short answer grading.
Keywords: artificial intelligence, intelligent systems, natural language processing, text mining
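The flavor of such elemental features can be sketched as follows; the feature set and training pairs are simplified stand-ins for the paper's, chosen only to show the pipeline shape.

```python
# Sketch: simple statistical and overlap features feeding a tree-based
# regressor for short answer grading. Features and data are illustrative.
from sklearn.ensemble import GradientBoostingRegressor

def features(reference: str, answer: str) -> list:
    ref, ans = set(reference.lower().split()), set(answer.lower().split())
    jaccard = len(ref & ans) / len(ref | ans) if ref | ans else 0.0
    length_ratio = len(answer) / max(len(reference), 1)
    return [jaccard, length_ratio, float(len(ans))]

# Hypothetical training data: (reference answer, student answer, grade 0-5)
train = [("the mitochondria produces energy", "mitochondria make energy", 5.0),
         ("the mitochondria produces energy", "mitochondria use energy", 3.0),
         ("the mitochondria produces energy", "the cell wall is rigid", 0.0)]

X = [features(r, a) for r, a, _ in train]
y = [g for _, _, g in train]
model = GradientBoostingRegressor().fit(X, y)
print(model.predict([features("the mitochondria produces energy",
                              "energy comes from mitochondria")]))
```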
Procedia PDF Downloads 133

1072 Investigation of the Effects of Visually Disabled and Typical Development Students on Their Multiple Intelligence by Applying Abacus and Right Brain Training
Authors: Sidika Di̇lşad Kaya, Ahmet Seli̇m Kaya, Ibrahi̇m Eri̇k, Havva Yaldiz, Yalçin Kaya
Abstract:
The aim of this study was to reveal the effects of right-brain development on reading, comprehension, learning and concentration levels and rapid processing skills in students with low vision and students with standard development, and to explore the effects of right- and left-brain integration on students' academic success and the permanence of learned knowledge. A total of 68 students with a mean age of 10.01±0.12 were included in the study: 58 with standard development, 9 partially visually impaired, and 1 totally visually impaired student. The totally visually impaired student could not participate in the reading speed test. The following data were measured in the participating students before the project: reading speed in one minute, reading comprehension questions, the Burdon attention test, and a 50-question math quiz timed with a stopwatch. Participants were trained for 3 weeks, 5 days a week, for a total of two hours a day. In this study, right-brain development exercises were carried out with the use of an abacus, aiming to develop both the mathematical skills and the attention of students through questions prepared with numerical data taken from fairy tale activities. The study was supported with multiple-choice, 5W (what, where, who, why, when?) and 1H (how?) questions, along with true-false and fill-in-the-blank activities. By using memory cards, students' short-term memories were strengthened, photographic memory studies were conducted, and their visual intelligence was supported. Auditory intelligence was supported by having students perform mental abacus calculations with numbers given aurally. Calculating with a physical abacus by touch enhanced the students' tactile intelligence. Research findings were analyzed in SPSS; the Kolmogorov-Smirnov test was used to assess normality. Since the variables did not show normal distribution, the Wilcoxon test, a non-parametric test, was used to compare dependent groups. The statistical significance level was accepted as 0.05. The reading speed of the participants was 83.54±33.03 in the pre-test and 116.25±38.49 in the post-test; narration was 69.71±25.04 in the pre-test and 97.06±6.70 in the post-test; the Burdon test was 84.46±14.35 in the pre-test and 95.75±5.67 in the post-test; and rapid math processing skills were 90.65±10.93 in the pre-test and 98.18±2.63 in the post-test (p<0.05). The pre-test and post-test averages of students with typical development and students with low vision were also significantly different for all four values (p<0.05). The data obtained from the participants show that the study was effective in terms of the measured parameters, and the findings were statistically significant. Therefore, the method is recommended for wider use.
Keywords: abacus, reading speed, multiple intelligences, right brain training, visually impaired
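The study's main statistical test, a Wilcoxon comparison of paired pre- and post-test scores, can be reproduced outside SPSS; the sketch below runs it in SciPy on synthetic arrays, not the study's raw data.

```python
# Wilcoxon signed-rank test on paired pre/post scores (synthetic example data
# shaped like the reading-speed results; not the study's raw measurements).
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
pre = rng.normal(83.5, 33.0, 67)          # e.g. reading speed before training
post = pre + rng.normal(32.7, 10.0, 67)   # improvement after 3 weeks

stat, p = wilcoxon(pre, post)
print(f"W = {stat:.1f}, p = {p:.4f}")     # p < 0.05 -> significant change
```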
Procedia PDF Downloads 183

1071 LLM-Powered User-Centric Knowledge Graphs for Unified Enterprise Intelligence
Authors: Rajeev Kumar, Harishankar Kumar
Abstract:
Fragmented data silos within enterprises impede the extraction of meaningful insights and hinder efficiency in tasks such as product development, client understanding, and meeting preparation. To address this, we propose a system-agnostic framework that leverages large language models (LLMs) to unify diverse data sources into a cohesive, user-centered knowledge graph. By automating entity extraction, relationship inference, and semantic enrichment, the framework maps interactions, behaviors, and data around the user, enabling intelligent querying and reasoning across various data types, including emails, calendars, chats, documents, and logs. Its domain adaptability supports applications in contextual search, task prioritization, expertise identification, and personalized recommendations, all rooted in user-centric insights. Experimental results demonstrate its effectiveness in generating actionable insights, enhancing workflows such as trip planning, meeting preparation, and daily task management. This work advances the integration of knowledge graphs and LLMs, bridging the gap between fragmented data systems and intelligent, unified enterprise solutions focused on user interactions.
Keywords: knowledge graph, entity extraction, relation extraction, LLM, activity graph, enterprise intelligence
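The pipeline shape described above can be sketched as follows, with a stubbed LLM call; the real system's prompts, models, and schema are not specified in the abstract, so every name here is hypothetical.

```python
# Sketch: an LLM extracts (subject, relation, object) triples from
# heterogeneous text, and the triples accumulate in a user-centric graph.
# `extract_triples` is a stub standing in for an actual LLM call.
import networkx as nx

def extract_triples(text: str) -> list:
    """Stand-in for an LLM call that returns (subject, relation, object)."""
    # e.g. prompt an LLM: "List entity-relation-entity triples in this text."
    return [("Alice", "attended", "Q3 planning meeting"),
            ("Q3 planning meeting", "discussed", "Project Falcon")]

graph = nx.MultiDiGraph()
for source_text in ["email_1 ...", "calendar_entry_7 ..."]:   # fragmented silos
    for subj, rel, obj in extract_triples(source_text):
        graph.add_edge(subj, obj, relation=rel)

# Querying around the user: everything one hop from "Alice".
print(list(graph.edges("Alice", data=True)))
```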
Procedia PDF Downloads 3

1070 Early Impact Prediction and Key Factors Study of Artificial Intelligence Patents: A Method Based on LightGBM and Interpretable Machine Learning
Authors: Xingyu Gao, Qiang Wu
Abstract:
Patents play a crucial role in protecting innovation and intellectual property. Early prediction of the impact of artificial intelligence (AI) patents helps researchers and companies allocate resources and make better decisions. Understanding the key factors that influence patent impact can assist researchers in gaining a better understanding of the evolution of AI technology and innovation trends. Therefore, identifying highly impactful patents early and providing support for them holds immeasurable value in accelerating technological progress, reducing research and development costs, and mitigating market positioning risks. Despite the extensive research on AI patents, accurately predicting their early impact remains a challenge. Traditional methods often consider only single factors or simple combinations, failing to comprehensively and accurately reflect the actual impact of patents. This paper utilized the artificial intelligence patent database from the United States Patent and Trademark Office and the Lens.org patent retrieval platform to obtain specific information on 35,708 AI patents. Using six machine learning models, namely Multiple Linear Regression, Random Forest Regression, XGBoost Regression, LightGBM Regression, Support Vector Machine Regression, and K-Nearest Neighbors Regression, and using early indicators of patents as features, the paper comprehensively predicted the impact of patents from three aspects: technical, social, and economic. These aspects include the technical leadership of patents, the number of citations they receive, and their shared value. The SHAP (Shapley Additive exPlanations) metric was used to explain the predictions of the best model, quantifying the contribution of each feature to the model's predictions. The experimental results on the AI patent dataset indicate that, for all three target variables, LightGBM regression shows the best predictive performance. Specifically, patent novelty has the greatest impact on predicting the technical impact of patents and has a positive effect. Additionally, the number of owners, the number of backward citations, and the number of independent claims are all crucial and have a positive influence on predicting technical impact. In predicting the social impact of patents, the number of applicants is considered the most critical input variable, but it has a negative impact on social impact. At the same time, the number of independent claims, the number of owners, and the number of backward citations are also important predictive factors, and they have a positive effect on social impact. For predicting the economic impact of patents, the number of independent claims is considered the most important factor and has a positive impact on economic impact. The number of owners, the number of sibling countries or regions, and the size of the extended patent family also have a positive influence on economic impact. The study primarily relies on data from the United States Patent and Trademark Office for artificial intelligence patents; future research could consider more comprehensive data sources from a global perspective. While the study takes into account various factors, there may still be other important features not considered. In the future, factors such as patent implementation and market applications may be considered, as they could have an impact on the influence of patents.
Keywords: patent influence, interpretable machine learning, predictive models, SHAP
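The best-performing setup, a LightGBM regressor explained with SHAP, can be sketched directly; the feature names follow the early indicators listed above, while the data file and target column are hypothetical placeholders.

```python
# Sketch: LightGBM regressor explained with SHAP. Feature names echo the
# early indicators discussed above; file and target names are hypothetical.
import lightgbm as lgb
import pandas as pd
import shap

features = ["novelty", "n_owners", "n_backward_citations",
            "n_independent_claims", "n_applicants", "family_size"]
df = pd.read_csv("ai_patents.csv")                 # hypothetical extract

model = lgb.LGBMRegressor(n_estimators=500)
model.fit(df[features], df["forward_citations"])   # e.g. technical-impact target

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(df[features])
shap.summary_plot(shap_values, df[features])       # per-feature contribution plot
```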
Procedia PDF Downloads 50

1069 A Tool to Measure Efficiency and Trust Towards eXplainable Artificial Intelligence in Conflict Detection Tasks
Authors: Raphael Tuor, Denis Lalanne
Abstract:
The ATM research community is missing suitable tools to design, test, and validate new UI prototypes. Important stakes underlie the implementation of both DSS and XAI methods in current systems. ML-based DSSs are gaining relevance as ATFM becomes increasingly complex. However, these systems only prove useful if a human can understand them, and thus new XAI methods are needed. The human-machine dyad should work as a team and understand each other. We present xSky, a configurable benchmark tool that allows us to compare different versions of an ATC interface in conflict detection tasks. Our main contributions to the ATC research community are (1) a conflict detection task simulator (xSky) that allows testing the applicability of visual prototypes on scenarios of varying difficulty and outputs relevant operational metrics, and (2) a theoretical approach to the explanation of AI-driven trajectory predictions. xSky addresses several issues that were identified within available research tools. Researchers can configure the dimensions affecting scenario difficulty with a simple CSV file. Both the content and appearance of the XAI elements can be customized in a few steps. As a proof of concept, we implemented an XAI prototype inspired by the maritime field.
Keywords: air traffic control, air traffic simulation, conflict detection, explainable artificial intelligence, explainability, human-automation collaboration, human factors, information visualization, interpretability, trajectory prediction
Procedia PDF Downloads 160

1068 Detection of Hepatitis B by the Use of Artificial Intelligence
Authors: Shizra Waris, Bilal Shoaib, Munib Ahmad
Abstract:
Background: The use of clinical decision support systems (CDSSs) may improve chronic disease management, which requires regular visits to multiple health professionals, treatment monitoring, disease control, and patient behavior modification. The objective of this survey is to determine whether these CDSSs improve the processes of chronic care, including the diagnosis, treatment, and monitoring of diseases. Though artificial intelligence is not a new idea, it has been widely documented as a new technology in computer science. Numerous areas such as education, business, and medicine have made use of artificial intelligence. Methods: The survey covers articles extracted from relevant databases. It uses search terms related to information technology and viral hepatitis, covering publications between 2000 and 2016. Results: Overall, 80% of the studies asserted the benefit provided by information technology (IT); 75% of the studies asserted benefits concerning the medical domain; 25% of the studies did not clearly define the added benefits of IT. The current state of CDSSs requires many improvements to support the management of liver diseases such as HCV infection, liver fibrosis, and cirrhosis. Conclusion: We concluded that the proposed model gives an earlier and more accurate prediction of hepatitis B and works as a promising tool for predicting hepatitis B from clinical laboratory data.
Keywords: detection, hepatitis, observation, disease
Procedia PDF Downloads 157

1067 Customized Design of Amorphous Solids by Generative Deep Learning
Authors: Yinghui Shang, Ziqing Zhou, Rong Han, Hang Wang, Xiaodi Liu, Yong Yang
Abstract:
The design of advanced amorphous solids, such as metallic glasses, with targeted properties through artificial intelligence signifies a paradigmatic shift in physical metallurgy and materials technology. Here, we developed a machine-learning architecture that facilitates the generation of metallic glasses with targeted multifunctional properties. Our architecture integrates a state-of-the-art unsupervised generative adversarial network model with supervised models, allowing the incorporation of general prior knowledge, derived from thousands of data points across a vast range of alloy compositions, into the creation of data points for a specific type of composition; this overcame the issue of data scarcity typically encountered in the design of a given type of metallic glass. Using our generative model, we have successfully designed copper-based metallic glasses which display exceptionally high hardness or a remarkably low modulus. Notably, our architecture can not only explore uncharted regions in the targeted compositional space but also permits self-improvement after experimentally validated data points are added to the initial dataset for subsequent cycles of data generation, hence paving the way for the customized design of amorphous solids without human intervention.
Keywords: metallic glass, artificial intelligence, mechanical property, automated generation
Procedia PDF Downloads 56

1066 Macroeconomic Implications of Artificial Intelligence on Unemployment in Europe
Authors: Ahmad Haidar
Abstract:
Modern economic systems are characterized by growing complexity, and addressing their challenges requires innovative approaches. This study examines the implications of artificial intelligence (AI) on unemployment in Europe from a macroeconomic perspective, employing data modeling techniques to understand the relationship between AI integration and labor market dynamics. To understand the AI-unemployment nexus comprehensively, this research considers factors such as sector-specific AI adoption, skill requirements, workforce demographics, and geographical disparities. The study utilizes a panel data model, incorporating data from European countries over the last two decades, to explore the potential short-term and long-term effects of AI implementation on unemployment rates. In addition to investigating the direct impact of AI on unemployment, the study also delves into the potential indirect effects and spillover consequences. It considers how AI-driven productivity improvements and cost reductions might influence economic growth and, in turn, labor market outcomes. Furthermore, it assesses the potential for AI-induced changes in industrial structures to affect job displacement and creation. The research also highlights the importance of policy responses in mitigating potential negative consequences of AI adoption on unemployment. It emphasizes the need for targeted interventions such as skill development programs, labor market regulations, and social safety nets to enable a smooth transition for workers affected by AI-related job displacement. Additionally, the study explores the potential role of AI in informing and transforming policy-making to ensure more effective and agile responses to labor market challenges. In conclusion, this study provides a comprehensive analysis of the macroeconomic implications of AI on unemployment in Europe, highlighting the importance of understanding the nuanced relationships between AI adoption, economic growth, and labor market outcomes. By shedding light on these relationships, the study contributes valuable insights for policymakers, educators, and researchers, enabling them to make informed decisions in navigating the complex landscape of AI-driven economic transformation.
Keywords: artificial intelligence, unemployment, macroeconomic analysis, European labor market
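One conventional way to estimate such a specification is a two-way fixed-effects panel regression; the sketch below uses the linearmodels package, with variable names and the data file as assumptions.

```python
# Sketch of a country-year panel regression with entity and time fixed
# effects; variable names and the data file are illustrative assumptions.
import pandas as pd
from linearmodels.panel import PanelOLS

# Expected shape: one row per (country, year) with the unemployment rate and
# an AI-adoption measure (e.g. AI patent intensity or robot density).
df = pd.read_csv("eu_panel.csv").set_index(["country", "year"])

model = PanelOLS.from_formula(
    "unemployment ~ ai_adoption + gdp_growth + EntityEffects + TimeEffects",
    data=df,
)
result = model.fit(cov_type="clustered", cluster_entity=True)
print(result.summary)
```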
Procedia PDF Downloads 77