Search results for: standardization artificial intelligence
1641 Differences in Parental Acceptance, Rejection, and Attachment and Associations with Adolescent Emotional Intelligence and Life Satisfaction
Authors: Diana Coyl-Shepherd, Lisa Newland
Abstract:
Research and theory suggest that parenting and parent-child attachment influence emotional development and well-being. Studies indicate that adolescents often describe differences in relationships with each parent and may form different types of attachment to mothers and fathers. During adolescence and young adulthood, romantic partners may also become attachment figures, influencing well-being and providing a relational context for emotion skill development. Mothers, however, tend to remain the primary attachment figure; fathers and romantic partners are more likely to be secondary attachment figures. The following hypotheses were tested: 1) participants would rate mothers as more accepting and less rejecting than fathers, 2) participants would rate secure attachment to mothers higher and insecure attachment lower compared to fathers and romantic partners, 3) parental rejection and insecure attachment would be negatively related to life satisfaction and emotional intelligence, and 4) secure attachment and parental acceptance would be positively related to life satisfaction and emotional intelligence. After IRB approval and informed consent, one hundred fifty adolescents and young adults (ages 11-28, M = 19.64; 71% female) completed an online survey. Measures included parental acceptance, rejection, attachment (i.e., secure, dismissing, and preoccupied), emotional intelligence (i.e., seeking and providing comfort, use and understanding of self-emotions, expressing warmth, understanding and responding to others' emotional needs), and well-being (i.e., self-confidence and life satisfaction). As hypothesized, compared to fathers', mothers' acceptance was significantly higher, t(190) = 3.98, p < .001, and rejection significantly lower, t(190) = -4.40, p < .001. Group differences in secure attachment were significant, F(2, 389) = 40.24, p < .001; post-hoc analyses revealed significant differences between mothers and fathers and between mothers and romantic partners; mothers had the highest mean score. Group differences in preoccupied attachment were significant, F(2, 388) = 13.37, p < .001; post-hoc analyses revealed significant differences between mothers and romantic partners, and between fathers and romantic partners; mothers had the lowest mean score. However, group differences in dismissing attachment were not significant, F(2, 389) = 1.21, p = .30; scores for mothers and romantic partners were similar; fathers' mean score was highest. For hypotheses 3 and 4, significant negative correlations were found between life satisfaction and the dismissing parent and romantic attachment, preoccupied father and romantic attachment, and mother and father rejection variables; secure attachment variables and parental acceptance were positively correlated with life satisfaction. Self-confidence was correlated only with mother acceptance. For emotional intelligence, seeking and providing comfort were negatively correlated with parent dismissing attachment and mother rejection; secure mother and romantic attachment and mother acceptance were positively correlated with these variables. Use and understanding of self-emotions were negatively correlated with parent and partner dismissing attachment and parent rejection; romantic secure attachment and parent acceptance were positively correlated. Expressing warmth was negatively correlated with dismissing attachment variables, romantic preoccupied attachment, and parent rejection, whereas secure attachment variables were positively associated.
Understanding and responding to others' emotional needs were negatively correlated with parent dismissing and preoccupied attachment variables and mother rejection; only secure father attachment was positively correlated.
Keywords: adolescent emotional intelligence, life satisfaction, parent and romantic attachment, parental rejection and acceptance
Procedia PDF Downloads 192
1640 Author Name Disambiguation for Biomedical Literature
Authors: Parthiban Srinivasan
Abstract:
PubMed provides online access to the National Library of Medicine database (MEDLINE) and other publications, which contain close to 25 million scientific citations from 1865 to the present. Those citations contain close to 80 million author name instances. For any work of literature, a fundamental issue is to identify the individual(s) who wrote it, and conversely, to identify all of the works that belong to a given individual. Due to the lack of universal standards for name information, there are two aspects of name ambiguity: name synonymy (a single author with multiple name representations) and name homonymy (multiple authors sharing the same name representation). In this talk, we present some results from our extensive work in author name disambiguation for PubMed citations. Information will be presented on the effectiveness and shortcomings of different aspects of successful name disambiguation, such as parsing, validation, standardization and normalization.
Keywords: disambiguation, normalization, parsing, PubMed
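As an illustration only (not the authors' pipeline), name standardization for the synonymy case might look like the following Python sketch; the name strings and PubMed IDs are hypothetical:

```python
import re
from collections import defaultdict

def normalize_author(name: str) -> str:
    """Collapse a raw author string to a 'surname, initials' key."""
    cleaned = re.sub(r"[.\s]+", " ", name).strip().lower()
    parts = cleaned.split()
    surname, given = parts[-1], parts[:-1]
    return f"{surname}, {''.join(p[0] for p in given)}"

# Hypothetical citation records: (raw author string, PubMed ID).
citations = [("P. Srinivasan", "pmid:0001"),
             ("Parthiban Srinivasan", "pmid:0002"),
             ("P Srinivasan", "pmid:0003")]

clusters = defaultdict(list)
for raw_name, pmid in citations:
    clusters[normalize_author(raw_name)].append(pmid)

# All three synonymous variants collapse to one key; separating distinct
# authors who share a key (homonymy) needs extra evidence such as
# affiliations, co-authors, or MeSH terms.
print(dict(clusters))  # {'srinivasan, p': ['pmid:0001', 'pmid:0002', 'pmid:0003']}
```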
Procedia PDF Downloads 300
1639 Artificial Neural Network Based Model for Detecting Attacks in Smart Grid Cloud
Authors: Sandeep Mehmi, Harsh Verma, A. L. Sangal
Abstract:
Ever since the idea was floated of using computing services as a commodity that can be delivered like other utilities, e.g., electricity and telephony, the scientific community has directed its research towards a new area called utility computing. New paradigms like cluster computing and grid computing came into existence while edging closer to utility computing. With the advent of the Internet, the demand for anytime, anywhere access to resources that could be provisioned dynamically as a service gave rise to the next-generation computing paradigm known as cloud computing. Today, cloud computing has become one of the most aggressively growing computing paradigms, with a growing rate of adoption in IT outsourcing. Besides catering to computational and storage demands, cloud computing has economically benefitted almost every field: education, research, entertainment, medicine, banking, military operations, weather forecasting, business and finance, to name a few. The smart grid is another discipline that stands to benefit greatly from the advantages of cloud computing. Smart grid systems are a new technology that has revolutionized the power sector by automating the transmission and distribution system and integrating smart devices. A cloud-based smart grid can fulfill the storage requirements of the unstructured and uncorrelated data generated by smart sensors, as well as the computational needs of self-healing, load balancing and demand response features. However, security issues such as confidentiality, integrity, availability, accountability and privacy need to be resolved for the development of the smart grid cloud. In recent years, a number of intrusion prevention techniques have been proposed in the cloud, but hackers/intruders still manage to bypass the security of the cloud. Therefore, precise intrusion detection systems need to be developed in order to secure critical information infrastructure like the smart grid cloud. Considering the success of artificial neural networks in building robust intrusion detection, this research proposes an artificial neural network based model for detecting attacks in the smart grid cloud.
Keywords: artificial neural networks, cloud computing, intrusion detection systems, security issues, smart grid
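The abstract does not specify a network architecture; the sketch below is a minimal, assumed example of such an ANN classifier in Keras, with random placeholder data standing in for labelled smart grid cloud traffic features (the feature count is chosen arbitrarily):

```python
import numpy as np
from tensorflow import keras

# Hypothetical data: tabular traffic/sensor attributes, label 1 = attack.
n_features = 41
X_train = np.random.rand(1000, n_features).astype("float32")
y_train = np.random.randint(0, 2, size=(1000,))

model = keras.Sequential([
    keras.layers.Input(shape=(n_features,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # attack probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=10, batch_size=32, validation_split=0.2)
```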
Procedia PDF Downloads 318
1638 Next-Gen Solutions: How Generative AI Will Reshape Businesses
Authors: Aishwarya Rai
Abstract:
This study explores the transformative influence of generative AI on startups, businesses, and industries. We will explore how large businesses can benefit in the area of customer operations, where AI-powered chatbots can improve self-service and agent effectiveness, greatly increasing efficiency. In marketing and sales, generative AI could transform businesses by automating content development, data utilization, and personalization, resulting in a substantial increase in marketing and sales productivity. In software engineering-focused startups, generative AI can streamline activities, significantly impacting coding processes and work experiences. It can be extremely useful in product R&D for market analysis, virtual design, simulations, and test preparation, altering old workflows and increasing efficiency. Zooming into the retail and CPG industry, industry findings suggest a 1-2% increase in annual revenues, equating to $400 billion to $660 billion. By automating customer service, marketing, sales, and supply chain management, generative AI can streamline operations, optimizing personalized offerings and presenting itself as a disruptive force. While celebrating economic potential, we acknowledge challenges like external inference and adversarial attacks. Human involvement remains crucial for quality control and security in the era of generative AI-driven transformative innovation. This talk provides a comprehensive exploration of generative AI's pivotal role in reshaping businesses, recognizing its strategic impact on customer interactions, productivity, and operational efficiency.
Keywords: generative AI, digital transformation, LLM, artificial intelligence, startups, businesses
Procedia PDF Downloads 76
1637 XAI Implemented Prognostic Framework: Condition Monitoring and Alert System Based on RUL and Sensory Data
Authors: Faruk Ozdemir, Roy Kalawsky, Peter Hubbard
Abstract:
Accurate estimation of RUL provides a basis for effective predictive maintenance, reducing unexpected downtime for industrial equipment. However, while models such as the Random Forest have effective predictive capabilities, they are so-called 'black box' models, whose limited interpretability is a barrier to the critical diagnostic decisions required in industries such as aviation. The purpose of this work is to present a prognostic framework that embeds Explainable Artificial Intelligence (XAI) techniques in order to provide essential transparency into the decision-making mechanisms of machine learning methods based on sensor data, with the objective of procuring actionable insights for the aviation industry. Sensor readings are gathered from critical equipment such as turbofan jet engines and landing gear, and the RUL is predicted by a Random Forest model. The framework involves steps such as data gathering, feature engineering, model training, and evaluation. The datasets for these critical components are independently trained and evaluated by the models. While the predictions are suitable and their performance metrics reasonably good, such complex models obscure the reasoning behind their predictions and may even undermine the confidence of decision-makers or maintenance teams. This is followed in the second phase by global explanations using SHAP and local explanations using LIME to bridge the gap in reliability within industrial contexts. These tools analyze model decisions, highlighting feature importance and explaining how each input variable affects the output. This dual approach offers a general comprehension of the overall model behavior and detailed insight into specific predictions. In its third component, the proposed framework incorporates causal analysis in the form of Granger causality tests in order to move beyond correlation toward causation. This allows the model not only to predict failures but also to present reasons, from the key sensor features linked to possible failure mechanisms, to relevant personnel. Establishing causality between sensor behaviors and equipment failures creates much value for maintenance teams through better root cause identification and effective preventive measures. This step contributes to making the system more explainable. In yet another stage, several simple surrogate models, including decision trees and linear models, can be used to approximately represent the complex Random Forest model. These simpler models act as backups, replicating important aspects of the original model's behavior. If the feature explanations obtained from the surrogate model are cross-validated with the primary model, the insights derived are more reliable and provide an intuitive sense of how the input variables affect the predictions. We then create an iterative explainable feedback loop, where the knowledge learned from the explainability methods feeds back into the training of the models. This drives a cycle of continuous improvement in both model accuracy and interpretability over time. By systematically integrating new findings, the model is expected to adapt to changed conditions and further develop its prognostic capability. These components are then presented to decision-makers through the development of a fully transparent condition monitoring and alert system.
The system provides a holistic tool for maintenance operations by leveraging RUL predictions, feature importance scores, persistent sensor threshold values, and autonomous alert mechanisms. Since the system will provide explanations for the predictions given, along with active alerts, the maintenance personnel can make informed decisions on their end regarding correct interventions to extend the life of the critical machinery.
Keywords: predictive maintenance, explainable artificial intelligence, prognostic, RUL, machine learning, turbofan engines, C-MAPSS dataset
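As a hedged illustration of the first two phases (not the authors' code), a Random Forest RUL model can be paired with SHAP for global feature attributions roughly as follows; the sensor features and RUL labels are synthetic placeholders for C-MAPSS-style data:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for C-MAPSS-style sensor features and RUL labels.
rng = np.random.default_rng(0)
X = rng.random((500, 14))          # 14 sensor channels per cycle
y = rng.random(500) * 200          # remaining useful life in cycles

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Global explanation: TreeExplainer gives per-feature SHAP contributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])
print(np.abs(shap_values).mean(axis=0))  # mean |SHAP| = global feature importance
```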
Procedia PDF Downloads 6
1636 Deep Learning-Based Object Detection on Low Quality Images: A Case Study of Real-Time Traffic Monitoring
Authors: Jean-Francois Rajotte, Martin Sotir, Frank Gouineau
Abstract:
The installation and management of traffic monitoring devices can be costly from both a financial and resource point of view. It is therefore important to take advantage of in-place infrastructures to extract the most information. Here we show how low-quality urban road traffic images from cameras already available in many cities (such as Montreal, Vancouver, and Toronto) can be used to estimate traffic flow. To this end, we use a pre-trained neural network, developed for object detection, to count vehicles within images. We then compare the results with human annotations gathered through crowdsourcing campaigns. We use this comparison to assess performance and calibrate the neural network annotations. As a use case, we consider six months of continuous monitoring over hundreds of cameras installed in the city of Montreal. We compare the results with city-provided manual traffic counting performed in similar conditions at the same location. The good performance of our system allows us to consider applications which can monitor the traffic conditions in near real-time, making the counting usable for traffic-related services. Furthermore, the resulting annotations pave the way for building a historical vehicle counting dataset to be used for analysing the impact of road traffic on many city-related issues, such as urban planning, security, and pollution.
Keywords: traffic monitoring, deep learning, image annotation, vehicles, roads, artificial intelligence, real-time systems
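A minimal sketch of the counting step, assuming a pre-trained torchvision detector (the authors do not name their network) and COCO's 'car' class; the image path is hypothetical:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor
from PIL import Image

# In torchvision's COCO label convention, class index 3 is 'car'.
CAR_CLASS, SCORE_THRESHOLD = 3, 0.5

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def count_vehicles(image_path: str) -> int:
    img = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        pred = model([img])[0]
    keep = (pred["labels"] == CAR_CLASS) & (pred["scores"] > SCORE_THRESHOLD)
    return int(keep.sum())

print(count_vehicles("camera_frame.jpg"))  # hypothetical low-resolution frame
```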
Procedia PDF Downloads 200
1635 Intelligent Software Architecture and Automatic Re-Architecting Based on Machine Learning
Authors: Gebremeskel Hagos Gebremedhin, Feng Chong, Heyan Huang
Abstract:
A software system is the combination of architecture and organized components to accomplish a specific function or set of functions. A good software architecture facilitates application system development, promotes achievement of functional requirements, and supports system reconfiguration. We describe three studies demonstrating the utility of our architecture in the subdomain of mobile office robots and identify software engineering principles embodied in the architecture. The main aim of this paper is to analyze proven architecture design and automatic re-architecting using machine learning. The intelligent software architecture and automatic re-architecting process reorganizes the software's organizational structure into a more suitable one, using the user access dataset to create relationships among the components of the system. A 3-step data mining approach was used to analyze effective recovery, transformation and implementation with the use of a clustering algorithm. Therefore, automatic re-architecting without changing the source code makes it possible to solve the software complexity problem and promote system software reuse.
Keywords: intelligence, software architecture, re-architecting, software reuse, high-level design
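As an assumed illustration of the clustering idea (the abstract names only 'a clustering algorithm'), components that are co-accessed by the same user sessions can be grouped as follows; the access matrix and component names are invented:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Hypothetical user-access matrix: rows = user sessions, columns = components;
# a 1 means the session touched that component.
access = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1],
    [0, 0, 1, 1, 1],
])
components = ["auth", "profile", "search", "billing", "reports"]

# Components co-accessed by the same sessions are candidates for one module.
co_access = access.T @ access
labels = AgglomerativeClustering(n_clusters=2).fit_predict(co_access)
for name, label in zip(components, labels):
    print(name, "-> module", label)
```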
Procedia PDF Downloads 119
1634 Artificial Intelligence Assisted Sentiment Analysis of Hotel Reviews Using Topic Modeling
Authors: Sushma Ghogale
Abstract:
With a surge in user-generated content, feedback, and reviews on the internet, it has become possible and important to know consumers' opinions about products and services. This data is important for both potential customers and businesses providing the services. Data from social media is attracting significant attention and has become the most prominent channel of expressing an unregulated opinion. Prospective customers look for reviews from experienced customers before deciding to buy a product or service. Several websites provide a platform for users to post their feedback for the provider and potential customers. However, the biggest challenge in analyzing such data is in extracting latent features and providing term-level analysis of the data. This paper proposes an approach that uses topic modeling to classify the reviews into topics and conducts sentiment analysis to mine the opinions. This approach can analyse and classify latent topics mentioned by reviewers on business sites, review sites, or social media, using topic modeling to identify the importance of each topic. It is followed by sentiment analysis to assess the satisfaction level of each topic. This approach provides a classification of hotel reviews using multiple machine learning techniques, comparing different classifiers to mine the opinions of user reviews through sentiment analysis. This experiment concludes that the Multinomial Naïve Bayes classifier produces higher accuracy than the other classifiers.
Keywords: latent Dirichlet allocation, topic modeling, text classification, sentiment analysis
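A minimal sketch of the pipeline, combining scikit-learn's LDA topic model with a Multinomial Naïve Bayes sentiment classifier; the four toy reviews and labels are placeholders, not the paper's dataset:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

reviews = ["room was clean and staff friendly", "terrible food, slow service",
           "great location near the beach", "dirty bathroom, rude reception"]
labels = [1, 0, 1, 0]  # hypothetical sentiment labels: 1 = positive

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reviews)

# Topic modeling: group review terms into latent topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics = lda.fit_transform(X)  # per-review topic proportions

# Sentiment classification on the term counts.
clf = MultinomialNB().fit(X, labels)
print(topics.argmax(axis=1), clf.predict(X))
```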
Procedia PDF Downloads 97
1633 Influence of Post Weld Heat Treatment on Mechanical and Metallurgical Properties of TIG Welded Aluminium Alloy Joints
Authors: Gurmeet Singh Cheema, Navjotinder Singh, Gurjinder Singh, Amardeep Singh
Abstract:
Aluminium and its alloys have excellent corrosion resistance, ease of fabrication and a high specific strength-to-weight ratio. In this investigation, an attempt has been made to study the effect of different post weld heat treatment methods on the mechanical and metallurgical properties of TIG welded joints of a commercial aluminium alloy. Three different post weld heat treatments were given to the TIG welded aluminium joints: solution heat treatment, artificial aging, and a combination of solution heat treatment and artificial aging. The mechanical and metallurgical properties of the as-welded and post weld treated joints of the aluminium alloy were examined.
Keywords: aluminium alloys, TIG welding, post weld heat treatment
Procedia PDF Downloads 575
1632 Bridge Health Monitoring: A Review
Authors: Mohammad Bakhshandeh
Abstract:
Structural Health Monitoring (SHM) is a crucial and necessary practice that plays a vital role in ensuring the safety and integrity of critical structures, and in particular, bridges. The continuous monitoring of bridges for signs of damage or degradation through Bridge Health Monitoring (BHM) enables early detection of potential problems, allowing for prompt corrective action to be taken before significant damage occurs. Although all monitoring techniques aim to provide accurate and decisive information regarding the remaining useful life, safety, integrity, and serviceability of bridges, understanding the development and propagation of damage is vital for maintaining uninterrupted bridge operation. Over the years, extensive research has been conducted on BHM methods, and experts in the field have increasingly adopted new methodologies. In this article, we provide a comprehensive exploration of the various BHM approaches, including sensor-based, non-destructive testing (NDT), model-based, and artificial intelligence (AI)-based methods. We also discuss the challenges associated with BHM, including sensor placement and data acquisition, data analysis and interpretation, cost and complexity, and environmental effects, through an extensive review of relevant literature and research studies. Additionally, we examine potential solutions to these challenges and propose future research ideas to address critical gaps in BHM.
Keywords: structural health monitoring (SHM), bridge health monitoring (BHM), sensor-based methods, machine-learning algorithms, model-based techniques, sensor placement, data acquisition, data analysis
Procedia PDF Downloads 90
1631 Predicting the Compressive Strength of Geopolymer Concrete Using Machine Learning Algorithms: Impact of Chemical Composition and Curing Conditions
Authors: Aya Belal, Ahmed Maher Eltair, Maggie Ahmed Mashaly
Abstract:
Geopolymer concrete is gaining recognition as a sustainable alternative to conventional Portland cement concrete due to its environmentally friendly nature, which is a key goal for smart city initiatives. It has demonstrated its potential as a reliable material for the design of structural elements. However, the production of geopolymer concrete is hindered by batch-to-batch variations, which presents a significant challenge to its widespread adoption. Machine learning has had a profound impact on various fields by enabling models to learn from large datasets and predict outputs accurately. This paper proposes an integration between the current drift toward artificial intelligence and the composition of geopolymer mixtures to predict their mechanical properties. The study employs Python to develop a machine learning model, specifically decision trees. The research uses the percentage oxides and the chemical composition of the alkali solution, along with the curing conditions, as the input independent parameters, irrespective of the waste products used in the mixture, with the compressive strength of the mix as the output parameter. The results showed 90% agreement between the predicted and actual values, with the ratio of the sodium silicate to the sodium hydroxide solution being the dominant parameter in the mixture.
Keywords: decision trees, geopolymer concrete, machine learning, smart cities, sustainability
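As an illustration under stated assumptions (synthetic mix data, arbitrary feature scaling, a toy strength relation), a scikit-learn decision tree regressor for this input/output mapping could look like:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Hypothetical columns: SiO2 %, Al2O3 %, Na2SiO3/NaOH ratio, NaOH molarity,
# curing temperature (degrees C); target = compressive strength (MPa).
rng = np.random.default_rng(1)
X = rng.random((200, 5)) * [60, 30, 3, 16, 90]
y = 20 + 15 * X[:, 2] + 0.2 * X[:, 4] + rng.normal(0, 2, 200)  # toy relation

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
tree = DecisionTreeRegressor(max_depth=5, random_state=1).fit(X_tr, y_tr)
print("R^2 on held-out mixes:", tree.score(X_te, y_te))
print("feature importances:", tree.feature_importances_)
```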
Procedia PDF Downloads 88
1630 A Comparative Soft Computing Approach to Supplier Performance Prediction Using GEP and ANN Models: An Automotive Case Study
Authors: Seyed Esmail Seyedi Bariran, Khairul Salleh Mohamed Sahari
Abstract:
In multi-echelon supply chain networks, optimal supplier selection significantly depends on the accuracy of suppliers' performance prediction. Different multi-criteria decision making methods such as ANN, GA, fuzzy logic, AHP, etc. have previously been used to predict supplier performance, but the 'black-box' characteristic of these methods is yet a major concern to be resolved. Therefore, the primary objective of this paper is to implement an artificial intelligence-based gene expression programming (GEP) model and compare its prediction accuracy with that of ANN. A full factorial design with a 95% confidence interval is initially applied to determine the appropriate set of criteria for supplier performance evaluation. A test-train approach is then utilized for the ANN and GEP exclusively. The training results are used to find the optimal network architecture, and the testing data determine the prediction accuracy of each method based on root mean square error (RMSE) and the coefficient of determination (R²). The results of a case study conducted at Supplying Automotive Parts Co. (SAPCO), with more than 100 local and foreign supply chain members, revealed that, in comparison with ANN, gene expression programming has a significant preference in predicting supplier performance, judging by the respective RMSE and R-squared values. Moreover, using GEP, a mathematical function was also derived to resolve the issue of the ANN black-box structure in modeling the performance prediction.
Keywords: supplier performance prediction, ANN, GEP, automotive, SAPCO
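A short sketch of how the two comparison metrics are computed; the held-out supplier scores and both models' predictions below are invented numbers, not SAPCO results:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

# Hypothetical held-out supplier scores and the two models' predictions.
y_true = np.array([0.82, 0.55, 0.91, 0.40, 0.73, 0.66])
pred = {"ANN": np.array([0.78, 0.60, 0.88, 0.47, 0.70, 0.71]),
        "GEP": np.array([0.81, 0.57, 0.90, 0.42, 0.74, 0.68])}

for name, y_hat in pred.items():
    rmse = np.sqrt(mean_squared_error(y_true, y_hat))
    print(f"{name}: RMSE={rmse:.3f}, R2={r2_score(y_true, y_hat):.3f}")
```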
Procedia PDF Downloads 419
1629 Bias Prevention in Automated Diagnosis of Melanoma: Augmentation of a Convolutional Neural Network Classifier
Authors: Kemka Ihemelandu, Chukwuemeka Ihemelandu
Abstract:
Melanoma remains a public health crisis, with incidence rates increasing rapidly in the past decades. Improvements in diagnostic accuracy that decrease misdiagnosis using artificial intelligence (AI) continue to be documented. Unfortunately, unintended racially biased outcomes, a product of a lack of diversity in the datasets used, with a noted class imbalance favoring lighter vs. darker skin tones, have increasingly been recognized as a problem, resulting in noted limitations in the accuracy of convolutional neural network (CNN) models. CNN models are prone to biased output due to biases in the dataset used to train them. Our aim in this study was the optimization of convolutional neural network algorithms to mitigate bias in the automated diagnosis of melanoma. We hypothesized that our proposed training algorithm, based on a data augmentation method that optimizes the diagnostic accuracy of a CNN classifier by generating new training samples from the original ones, would reduce bias in the automated diagnosis of melanoma. We applied geometric transformations, including rotations, translations, scale changes, flipping, and shearing. The result was a CNN model trained on modified input data that could learn subtle racial features. Optimal selection of the momentum and batch hyperparameters increased our model accuracy. We show that our augmented model reduces bias while maintaining accuracy in the automated diagnosis of melanoma.
Keywords: bias, augmentation, melanoma, convolutional neural network
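A minimal sketch of the named geometric augmentations using Keras' ImageDataGenerator; the authors' exact ranges and framework are not stated, and the directory layout is hypothetical:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Geometric transforms named in the abstract: rotation, translation,
# scale change, flipping, and shearing (ranges chosen for illustration).
augmenter = ImageDataGenerator(
    rotation_range=30,        # rotations
    width_shift_range=0.1,    # horizontal translation
    height_shift_range=0.1,   # vertical translation
    zoom_range=0.2,           # scale change
    horizontal_flip=True,     # flipping
    shear_range=0.15,         # shearing
)

# Hypothetical directory layout: lesions/{benign,melanoma}/*.jpg
train_flow = augmenter.flow_from_directory(
    "lesions/", target_size=(224, 224), batch_size=32, class_mode="binary")
# model.fit(train_flow, epochs=...) would then see a new variant each epoch.
```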
Procedia PDF Downloads 211
1628 Alpha: A Groundbreaking Avatar Merging User Dialogue with OpenAI's GPT-3.5 for Enhanced Reflective Thinking
Authors: Jonas Colin
Abstract:
Standing at the vanguard of AI development, Alpha represents an unprecedented synthesis of logical rigor and human abstraction, meticulously crafted to mirror the user's unique persona and personality, a feat previously unattainable in AI development. Alpha, an avant-garde artefact in the realm of artificial intelligence, epitomizes a paradigmatic shift in personalized digital interaction, amalgamating user-specific dialogic patterns with the sophisticated algorithmic prowess of OpenAI's GPT-3.5 to engender a platform for enhanced metacognitive engagement and individualized user experience. Underpinned by a sophisticated algorithmic framework, Alpha integrates vast datasets through a complex interplay of neural network models and symbolic AI, facilitating a dynamic, adaptive learning process. This integration enables the system to construct a detailed user profile, encompassing linguistic preferences, emotional tendencies, and cognitive styles, tailoring interactions to align with individual characteristics and conversational contexts. Furthermore, Alpha incorporates advanced metacognitive elements, enabling real-time reflection and adaptation in communication strategies. This self-reflective capability ensures continuous refinement of its interaction model, positioning Alpha not just as a technological marvel but as a harbinger of a new era in human-computer interaction, where machines engage with us on a deeply personal and cognitive level, transforming our interaction with the digital world.
Keywords: chatbot, GPT-3.5, metacognition, symbiosis
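A hedged sketch of the kind of persona-conditioned GPT-3.5 call the abstract describes, using the current OpenAI Python SDK; the profile text and prompt wording are assumptions, not Alpha's actual implementation:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical user profile distilled from prior dialogue turns.
profile = ("The user prefers concise answers, uses reflective language, "
           "and is currently exploring a career decision.")

def alpha_reply(history: list[dict], user_msg: str) -> str:
    messages = [{"role": "system",
                 "content": f"Mirror the user's persona. Profile: {profile} "
                            "After answering, ask one reflective question."}]
    messages += history + [{"role": "user", "content": user_msg}]
    resp = client.chat.completions.create(model="gpt-3.5-turbo",
                                          messages=messages)
    return resp.choices[0].message.content

print(alpha_reply([], "I keep second-guessing my choices."))
```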
Procedia PDF Downloads 70
1627 Next Generation Radiation Risk Assessment and Prediction Tools Generation Applying AI-Machine (Deep) Learning Algorithms
Authors: Selim M. Khan
Abstract:
Indoor air quality is strongly influenced by the presence of radioactive radon (222Rn) gas. Indeed, exposure to high 222Rn concentrations is unequivocally linked to DNA damage and lung cancer and is a worsening issue in North American and European built environments, having increased over time within newer housing stocks as a function of as yet unclear variables. Indoor air radon concentration can be influenced by a wide range of environmental, structural, and behavioral factors. As some of these factors are quantitative while others are qualitative, no single statistical model can determine indoor radon levels precisely while simultaneously considering all these variables across a complex and highly diverse dataset. The ability of AI-machine (deep) learning to simultaneously analyze multiple quantitative and qualitative features makes it suitable for predicting radon with a high degree of precision. Using Canadian and Swedish long-term indoor air radon exposure data, we are using artificial deep neural network models with random weights and polynomial statistical models in MATLAB to assess and predict radon health risk to humans as a function of geospatial, human behavioral, and built environmental metrics. Our initial artificial neural network with random weights model, run with sigmoid activation, tested different combinations of variables and showed the highest prediction accuracy (>96%) within a reasonable number of iterations. Here, we present details of these emerging methods and discuss strengths and weaknesses compared to the traditional artificial neural network and statistical methods commonly used to predict indoor air quality in different countries. We propose an artificial deep neural network with random weights as a highly effective method for assessing and predicting indoor radon.
Keywords: radon, radiation protection, lung cancer, AI-machine deep learning, risk assessment, risk prediction, Europe, North America
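The authors work in MATLAB; as a cross-language illustration only, a small sigmoid-activated network for radon prediction might be sketched in Python/Keras as follows, with random placeholder predictors standing in for the encoded environmental, structural, and behavioral factors:

```python
import numpy as np
from tensorflow import keras

# Hypothetical mixed predictors after encoding: house age, floor level,
# ventilation habits, soil type, geolocation features, etc.
rng = np.random.default_rng(7)
X = rng.random((2000, 12)).astype("float32")
y = (X[:, 0] * 150 + X[:, 3] * 80 + rng.normal(0, 10, 2000)).astype("float32")

model = keras.Sequential([
    keras.layers.Input(shape=(12,)),
    keras.layers.Dense(32, activation="sigmoid"),  # sigmoid, as in the abstract
    keras.layers.Dense(16, activation="sigmoid"),
    keras.layers.Dense(1),  # predicted indoor radon concentration (Bq/m^3)
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=20, batch_size=64, validation_split=0.2, verbose=0)
```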
Procedia PDF Downloads 96
1626 UWB Open Spectrum Access for a Smart Software Radio
Authors: Hemalatha Rallapalli, K. Lal Kishore
Abstract:
In comparison to systems that are typically designed to provide capabilities over a narrow frequency range through hardware elements, the next generation of cognitive radios is intended to implement a broader range of capabilities through efficient spectrum exploitation. This offers the user the promise of greater flexibility and seamless roaming across different networks, countries, frequencies, etc. It requires a true paradigm shift, i.e., liberalization over a wide band of spectrum, as well as a growth path to more and greater capability. This work contributes towards the design and implementation of an open spectrum access (OSA) feature for unlicensed users, thus offering a frequency-agile radio platform that is capable of performing spectrum sensing over a wideband. Thus, an ultra-wideband (UWB) radio, which has the intelligence of spectrum sensing only, unlike the cognitive radio with complete intelligence, is named a Smart Software Radio (SSR). The spectrum sensing mechanism is implemented based on energy detection. Simulation results show the accuracy and validity of this method.
Keywords: cognitive radio, energy detection, software radio, spectrum sensing
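A minimal sketch of the energy detection mechanism, using the standard Gaussian approximation for the detection threshold; the signal, noise variance, and false-alarm rate are illustrative choices, not the paper's simulation setup:

```python
import numpy as np
from scipy.stats import norm

def energy_detect(samples: np.ndarray, noise_var: float, pfa: float = 0.01) -> bool:
    """Simple energy detector: True if the band is judged occupied."""
    n = samples.size
    energy = np.sum(np.abs(samples) ** 2)
    # Gaussian approximation of the noise-only chi-square statistic for large n:
    # threshold = sigma^2 * (n + Q^{-1}(pfa) * sqrt(2n)).
    threshold = noise_var * (n + norm.isf(pfa) * np.sqrt(2 * n))
    return bool(energy > threshold)

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 4096)                      # noise variance = 1
signal = noise + 0.5 * np.sin(2 * np.pi * 0.1 * np.arange(4096))
print(energy_detect(noise, 1.0), energy_detect(signal, 1.0))  # typically: False True
```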
Procedia PDF Downloads 428
1625 Anomaly Detection in Financial Markets Using Tucker Decomposition
Authors: Salma Krafessi
Abstract:
Financial markets are a multifaceted, intricate environment, and enormous volumes of data are produced every day. To find investment possibilities, possible fraudulent activity, and market oddities, accurate anomaly identification in this data is essential. Conventional methods for detecting anomalies frequently fail to capture the complex organization of financial data. In order to improve the identification of abnormalities in financial time series data, this study presents Tucker decomposition as a reliable multi-way analysis approach. We start by gathering closing prices for the S&P 500 index across a number of decades. The information is converted to a three-dimensional tensor format, which contains internal characteristics and temporal sequences in a sliding window structure. The tensor is then broken down using Tucker decomposition into a core tensor and matching factor matrices, allowing latent patterns and relationships in the data to be captured. The reconstruction error from the Tucker decomposition is a possible sign of abnormalities. We are able to identify large deviations that indicate unusual behavior by setting a statistical threshold. A thorough examination that contrasts the Tucker-based method with traditional anomaly detection approaches validates our methodology. The outcomes demonstrate the superiority of Tucker decomposition in identifying intricate and subtle abnormalities that are otherwise missed. This work opens the door for more research into multi-way data analysis approaches across a range of disciplines and emphasizes the value of tensor-based methods in financial analysis.
Keywords: Tucker decomposition, financial markets, financial engineering, artificial intelligence, decomposition models
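As a hedged sketch of the windowed-tensor idea using the tensorly library; the rank choices, synthetic returns, and implanted anomaly are assumptions, not the paper's configuration:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

# Synthetic daily log-returns standing in for a long S&P 500 closing series.
rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, 600)
returns[310:315] += 0.08                                 # implanted anomaly
features = np.stack([returns, np.abs(returns)], axis=1)  # (days, 2 features)

window = 30                                              # sliding-window slices
n_win = len(features) // window
tensor = tl.tensor(features[: n_win * window].reshape(n_win, window, 2))

core, factors = tucker(tensor, rank=[4, 5, 2])
recon = tl.tucker_to_tensor((core, factors))

# Per-window reconstruction error; windows far above the norm are flagged.
err = np.linalg.norm(tl.to_numpy(tensor) - tl.to_numpy(recon), axis=(1, 2))
print(np.where(err > err.mean() + 2 * err.std())[0])     # indices of flagged windows
```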
Procedia PDF Downloads 69
1624 Python Implementation for S1000D Applicability Depended Processing Model - SALERNO
Authors: Theresia El Khoury, Georges Badr, Amir Hajjam El Hassani, Stéphane N’Guyen Van Ky
Abstract:
The widespread adoption of machine learning and artificial intelligence across different domains can be attributed to the digitization of data over several decades, resulting in vast amounts of data, types, and structures. Thus, data processing and preparation turn out to be a crucial stage. However, applying these techniques to S1000D standard-based data poses a challenge due to its complexity and the need to preserve logical information. This paper describes SALERNO, an S1000D AppLicability dEpended pRocessiNg mOdel. This Python-based model analyzes and converts the XML S1000D-based files into an easier data format that can be used in machine learning techniques while preserving the different logic and relationships in the files. The model parses the files in the given folder, filters them, and extracts the required information to be saved in appropriate data frames and Excel sheets. Its main idea is to group the extracted information by applicability. In addition, it extracts the full text by replacing internal and external references while maintaining the relationships between files, as well as the necessary requirements. The resulting files can then be saved in databases and used in different models. Documents in both the English and French languages were tested, and special characters were decoded. Updates to the technical manuals were taken into consideration as well. The model was tested on different versions of the S1000D, and the results demonstrated its ability to effectively handle the applicability, requirements, references, and relationships across all files and at different levels.
Keywords: aeronautics, big data, data processing, machine learning, S1000D
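As an assumed illustration of the grouping-by-applicability idea, not SALERNO itself: the element paths below are simplified, not the full S1000D schema, and the folder and output names are hypothetical:

```python
import pandas as pd
import xml.etree.ElementTree as ET
from pathlib import Path

# Element and attribute names below are illustrative, not the exact S1000D schema.
rows = []
for path in Path("data_modules").glob("*.xml"):
    root = ET.parse(path).getroot()
    applic = root.findtext(".//applic/displayText/simplePara", default="ALL")
    for para in root.iter("para"):
        rows.append({"file": path.name,
                     "applicability": applic,
                     "text": "".join(para.itertext()).strip()})

df = pd.DataFrame(rows)
# Group extracted text by applicability, mirroring the model's main idea.
for applic, group in df.groupby("applicability"):
    group.to_excel(f"applic_{applic.replace(' ', '_')[:20]}.xlsx", index=False)
```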
Procedia PDF Downloads 157
1623 Opportunities and Optimization of the Our Eyes Initiative as the Strategy for Counter-Terrorism in ASEAN
Authors: Chastiti Mediafira Wulolo, Tri Legionosuko, Suhirwan, Yusuf
Abstract:
Terrorism and radicalization have become a common threat to every nation in this world. As part of the asymmetric warfare threat, terrorism and radicalization need a complex strategy as the problem solver. One such way is collaborating with the international community. The Our Eyes Initiative (OEI), for example, is a cooperation pact in the field of intelligence information exchange related to terrorism and radicalization, initiated by the Indonesian Ministry of Defence. The pact has been signed by Indonesia, the Philippines, Malaysia, Brunei Darussalam, Thailand, and Singapore. This cooperation mostly engages the military in a central role, but it still requires the involvement of various parties such as the police, intelligence agencies and other government institutions. This paper uses a qualitative content analysis method to address the opportunities and enhance the optimization of the OEI. As a result, it explains how the OEI takes these opportunities as a counter-terrorism strategy by building itself up as a regional cooperation, building the legitimacy of governments and creating the legal framework for the information sharing system.
Keywords: Our Eyes Initiative, terrorism, counter-terrorism, ASEAN, cooperation, strategy
Procedia PDF Downloads 182
1622 Pharmacognostical and Phytochemical Investigation of the Endemic Medicinal Plant Tekchebilium arvensis Linn
Authors: K. Bengango, H. Mesahsah, F. Haseb-Reho, J. M. Tafrate
Abstract:
The present work was conducted to explore the micro-morphology and phytochemical characterization of the endemic medicinal plant Tekchebilium arvensis Linn (Asteraceae). Macroscopy, microscopy, physicochemical analysis and WHO-recommended parameters for standardization were performed. Microscopic evaluation revealed the presence of an abaxial epidermis with paracytic stomata. The petiole showed epidermis, vascular strands, ground tissue and secretory cavities. Physicochemical tests such as ash values, loss on drying, and extractive values were determined. Preliminary phytochemical screening showed the presence of sterols, tannins, flavonoids, glycosides, volatile oil, terpenoids, saponins and alkaloids.
Keywords: Tekchebilium arvensis Linn, Asteraceae, microscopical evaluation, phytochemical, powder microscopy, standardization
Procedia PDF Downloads 437
1621 Mediating Health in Rural Ghana: An Exploratory Study of AI-Driven Health Communications Channels and Media Reportage in Accra
Authors: Amos Ekow Coffie
Abstract:
This exploratory study investigates the impact of AI-driven health communications and media reportage on health outcomes in rural Ghana, focusing on rural communities within Accra. Despite the potential of AI-driven health communications to improve health outcomes, their adoption in rural Ghana is hindered by infrastructure challenges, digital literacy, and cultural factors. Media reportage plays a crucial role in shaping health perceptions and behaviors, but its impact is limited by inadequate health reporting, a lack of specialized health journalists, and limited access to health information. This study aims to explore the integration of AI-driven health communications into media practices in rural Ghana, addressing the following research questions: How do AI-driven health communications impact health outcomes in rural Ghana? What role does media reportage play in shaping health perceptions and behaviors in Accra? How can AI-driven health communications and media reportage be optimized to improve health outcomes in rural Ghana? Using a mixed-methods approach, this study will combine surveys, interviews, and content analysis to investigate the impact of AI-driven health communications and media reportage on health outcomes in rural areas of Ghana. AI-driven health communication is the use of artificial intelligence (AI) technologies to design, deliver, and evaluate health messages, interventions, and campaigns. The study's findings will contribute to the development of effective health communication strategies, addressing the significant health disparities in rural areas of Ghana.
Keywords: AI-driven health communication, media reporting, rural areas, communication channels
Procedia PDF Downloads 25
1620 A Convolutional Neural Network-Based Model for Lassa fever Virus Prediction Using Patient Blood Smear Image
Authors: A. M. John-Otumu, M. M. Rahman, M. C. Onuoha, E. P. Ojonugwa
Abstract:
A Convolutional Neural Network (CNN) model for predicting Lassa fever was built using the Python 3.8.0 programming language, alongside the Keras 2.2.4 and TensorFlow 2.6.1 libraries as the development environment, in order to reduce the current high risk of Lassa fever in West Africa, particularly in Nigeria. The study was prompted by some major flaws in the existing conventional laboratory equipment for diagnosing Lassa fever (RT-PCR), as well as flaws in the AI-based techniques that have been used for probing and prognosis of Lassa fever, based on the literature. A total of 15,679 blood smear microscopic images were collected. The proposed model was trained on 70% of the dataset and tested on 30% of the microscopic images to avoid overfitting. A 3x3x3 convolution filter was also used in the proposed system to extract features from the microscopic images. The proposed CNN-based model had a recall value of 96%, a precision value of 93%, an F1 score of 95%, and an accuracy of 94% in predicting and accurately classifying the images into clean or infected samples. Based on empirical evidence from the literature consulted, the proposed model outperformed the other existing AI-based techniques evaluated. If properly deployed, the model will assist physicians, medical laboratory scientists, and patients in making accurate diagnoses for Lassa fever cases, allowing the mortality rate due to the Lassa fever virus to be reduced through sound decision-making.
Keywords: artificial intelligence, ANN, blood smear, CNN, deep learning, Lassa fever
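A minimal sketch of such a binary smear classifier in Keras, with 3x3 kernels over 3-channel images; the image size, layer sizes, and directory layout are assumptions, and the dataset-loading API shown targets recent TensorFlow releases:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Minimal binary classifier for blood-smear images (clean vs. infected).
model = keras.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),  # 3x3 kernels over 3 channels
    layers.MaxPooling2D(),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", keras.metrics.Precision(), keras.metrics.Recall()])

# Hypothetical directory layout: smears/{clean,infected}/*.png, 70/30 split.
train = keras.utils.image_dataset_from_directory(
    "smears", validation_split=0.3, subset="training", seed=42,
    image_size=(128, 128), batch_size=32)
# model.fit(train, epochs=...) then trains on the 70% partition.
```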
Procedia PDF Downloads 120
1619 Optical Board as an Artificial Technology for a Peer Teaching Class in a Nigerian University
Authors: Azidah Abu Ziden, Adu Ifedayo Emmanuel
Abstract:
This study investigated the optical board as an artificial technology for peer teaching in a Nigerian university. A design and development research (DDR) design was adopted, which entailed the planning and testing of the instructional design models used to produce the optical board. The research population involved twenty-five (25) peer-teaching students at a Nigerian university from theatre arts, religion, and language education-related disciplines. Using a random sampling technique, this study selected eight (8) students to work on the optical board. In addition, this study introduced a research instrument titled the lecturer assessment rubric, containing a 30-mark metric for evaluating students' teaching with the optical board. In this study, it was discovered that the optical board affords students the acquisition of self-employment skills through their exposure to the peer teaching course, which is a teacher training module in Nigerian universities. It is evident in this study that students were able to coordinate their design and effectively develop the optical board without the lecturer's interference. This kind of achievement shows that the Nigerian university curriculum has been designed with content meant to spur students to create jobs after graduation, and effective implementation of the readily available curriculum content is enough to imbue students with the needed entrepreneurial skills. It was recommended that the Federal Government of Nigeria (FGN) discourage the poor implementation of the Nigerian university curriculum and invest more in the betterment of the readily available curriculum instead of considering a synonymously acclaimed new curriculum for a regurgitated teaching and learning process.
Keywords: optical board, artificial technology, peer teaching, educational technology, Nigeria, Malaysia, university, glass, wood, electrical, improvisation
Procedia PDF Downloads 68
1618 A Flute Tracking System for Monitoring the Wear of Cutting Tools in Milling Operations
Authors: Hatim Laalej, Salvador Sumohano-Verdeja, Thomas McLeay
Abstract:
Monitoring of tool wear in milling operations is essential for achieving the desired dimensional accuracy and surface finish of a machined workpiece. Although there are numerous statistical models and artificial intelligence techniques available for monitoring the wear of cutting tools, these techniques cannot pinpoint which cutting edge of the tool, or which insert in the case of indexable tooling, is worn or broken. Currently, the task of monitoring the wear on the tool cutting edges is carried out by the operator, who performs a manual inspection, causing undesirable stoppages of machine tools and consequently costs incurred from lost productivity. The present study is concerned with the development of a flute tracking system to segment signals related to each physical flute of a cutter with three flutes used in an end milling operation. The purpose of the system is to monitor the cutting condition of individual flutes separately in order to determine their progressive wear rates and to predict imminent tool failure. The results of this study clearly show that signals associated with each flute can be effectively segmented using the proposed flute tracking system. Furthermore, the results illustrate that by segmenting the sensor signal by flutes it is possible to investigate the wear in each physical cutting edge of the cutting tool. These findings are significant in that they facilitate the online condition monitoring of a cutting tool for each specific flute without the need for operators/engineers to perform manual inspections of the tool.
Keywords: machining, milling operation, tool condition monitoring, tool wear prediction
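As an illustration under stated assumptions (a known spindle speed and a fixed angular reference; real systems would typically use a spindle encoder or a once-per-revolution trigger), per-flute segmentation by tooth-passing period might look like:

```python
import numpy as np

def segment_by_flute(signal: np.ndarray, fs: float, rpm: float, n_flutes: int = 3):
    """Split a force/vibration signal into per-flute segments by tooth period."""
    rev_period = 60.0 / rpm                  # seconds per spindle revolution
    tooth_samples = int(fs * rev_period / n_flutes)
    n_teeth = len(signal) // tooth_samples
    segments = [[] for _ in range(n_flutes)]
    for k in range(n_teeth):
        chunk = signal[k * tooth_samples:(k + 1) * tooth_samples]
        segments[k % n_flutes].append(chunk)  # k-th tooth engagement -> flute k mod 3
    return [np.concatenate(s) for s in segments]

fs, rpm = 10_000.0, 3_000.0                  # hypothetical sampling rate and speed
signal = np.random.default_rng(0).normal(size=60_000)
per_flute = segment_by_flute(signal, fs, rpm)
print([len(s) for s in per_flute])           # per-flute RMS/energy trends could follow
```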
Procedia PDF Downloads 303
1617 A Machine Learning Based Framework for Education Levelling in Multicultural Countries: UAE as a Case Study
Authors: Shatha Ghareeb, Rawaa Al-Jumeily, Thar Baker
Abstract:
In Abu Dhabi, there are many different education curriculums, and the private schools and quality assurance sector supervises many private schools serving many nationalities. As there are many different education curriculums in Abu Dhabi to meet expats' needs, there are different requirements for registration and success. In addition, there are different age groups for starting education in each curriculum. In fact, each curriculum has a different number of years, assessment techniques, reassessment rules, and exam boards. Currently, students who transfer between curriculums are not being placed in the right year group, because each academic year has different start and end dates and the date-of-birth cut-off for each year group differs between curriculums; as a result, we find students who are either too young or too old for their year group, which creates gaps in their learning and performance. In addition, there is no way of storing student data throughout their academic journey so that schools can track the student learning process. In this paper, we propose to develop a computational framework applicable in multicultural countries such as the UAE, in which multiple education systems are implemented. The ultimate goal is to use cloud and fog computing technology, integrated with artificial intelligence techniques of machine learning, to aid in a smooth transition when assigning students to their year groups, and to provide levelling and differentiation information for students who relocate from a particular education curriculum to another, whilst also having the ability to store and access student data from anywhere throughout their academic journey.
Keywords: admissions, algorithms, cloud computing, differentiation, fog computing, levelling, machine learning
Procedia PDF Downloads 142
1616 Study of the Design and Simulation Work for an Artificial Heart
Authors: Mohammed Eltayeb Salih Elamin
Abstract:
This study discusses the concept of the artificial heart using engineering concepts from fluid mechanics and the characteristics of non-Newtonian fluids, with the purpose of serving heart patients and improving aspects of their lives. According to the World Health Organization (WHO), diseases of the heart and blood vessels are the leading cause of death in the world, accounting for roughly 30% of deaths worldwide, so heart failure can simply be considered the number one cause of death. Since heart transplantation is very difficult and not always available, the idea of the artificial heart becomes essential, and it is important to participate in developing this idea by finding the weak points in earlier designs and improving on them for the benefit of humanity. In this study, a pump was designed to pump blood through the human body, taking into account all the factors that would allow it to replace the human heart, in order to work with the same characteristics and efficiency as the human heart. The pump was designed on the principle of the diaphragm pump. Three models of blood were obtained from real blood characteristics, and all of these models were simulated in order to study the effect of the pumping work on the fluid. After that, we studied the properties of this pump by using Ansys 15 software to simulate blood flow inside the pump and the amount of stress it will undergo. The 3D geometry modeling was done using SOLIDWORKS, and the geometries were then imported into the Ansys Design Modeler, which is used during the pre-processing procedure. The solver used throughout the study is Ansys FLUENT, a tool used to analyze fluid flow problems; this branch of science is generally known as Computational Fluid Dynamics (CFD). Design Modeler is used during the pre-processing procedure, a crucial step before the start of the fluid flow problem. Some of the key operations are geometry creation, which specifies the domain of the fluid flow problem; next is mesh generation, which means discretization of the domain to solve the governing equations at each cell; and later, specifying the boundary zones to apply boundary conditions for the problem. Finally, the pre-processed work is saved in the Ansys Workbench for future continuation of the work.
Keywords: artificial heart, computational fluid dynamics, heart chamber, design, pump
Procedia PDF Downloads 459
1615 Review of Theories and Applications of Genetic Programing in Sediment Yield Modeling
Authors: Adesoji Tunbosun Jaiyeola, Josiah Adeyemo
Abstract:
Sediment yield can be considered the total sediment load that leaves a drainage basin. Knowledge of the quantity of sediment present in a river at a particular time can lead to better flood capacity in reservoirs and consequently help to control over-bank flooding. Furthermore, as sediment accumulates in a reservoir, it gradually loses its ability to store water for the purposes for which it was built. The development of hydrological models to forecast the quantity of sediment present in a reservoir helps planners and managers of water resources systems to understand the system better in terms of its problems and alternative ways to address them. The application of artificial intelligence models and techniques to such real-life situations has proven to be an effective approach to solving complex problems. This paper makes an extensive review of literature relevant to the theories and applications of evolutionary algorithms, most especially genetic programming. The successful applications of genetic programming as a soft computing technique in sediment modelling and other branches of knowledge are reviewed. Some fundamental issues such as benchmarks, generalization ability, bloat and over-fitting, and other open issues relating to the working principles of GP, which need to be addressed by the GP community, are also highlighted. This review aims to give GP theoreticians, researchers and the general GP community enough research direction and a valuable guide, and to keep all stakeholders abreast of the issues which need attention during the next decade for the advancement of GP.
Keywords: benchmark, bloat, generalization, genetic programming, over-fitting, sediment yield
Procedia PDF Downloads 446
1614 AER Model: An Integrated Artificial Society Modeling Method for Cloud Manufacturing Service Economic System
Authors: Deyu Zhou, Xiao Xue, Lizhen Cui
Abstract:
With the increasing collaboration among various services and the growing complexity of user demands, there are more and more factors affecting the stable development of the cloud manufacturing service economic system (CMSE). This poses new challenges to the evolution analysis of the CMSE. Many researchers have modeled and analyzed the evolution process of CMSE from the perspectives of individual learning and internal factors influencing the system, but without considering other important characteristics of the system's individuals (such as heterogeneity, bounded rationality, etc.) and the impact of external environmental factors. Therefore, this paper proposes an integrated artificial social model for the cloud manufacturing service economic system, which considers both the characteristics of the system's individuals and the internal and external influencing factors of the system. The model consists of three parts: the Agent model, environment model, and rules model (Agent-Environment-Rules, AER): (1) the Agent model considers important features of the individuals, such as heterogeneity and bounded rationality, based on the adaptive behavior mechanisms of perception, action, and decision-making; (2) the environment model describes the activity space of the individuals (real or virtual environment); (3) the rules model, as the driving force of system evolution, describes the mechanism of the entire system's operation and evolution. Finally, this paper verifies the effectiveness of the AER model through computational and experimental results.
Keywords: cloud manufacturing service economic system (CMSE), AER model, artificial social modeling, integrated framework, computing experiment, agent-based modeling, social networks
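As a hedged, generic illustration of the Agent-Environment-Rules structure (the agents, market rule, and parameters below are invented, not the paper's model):

```python
import random

class Agent:
    """Bounded-rational service provider: perceives, decides, acts."""
    def __init__(self, quality: float):
        self.quality = quality
        self.revenue = 0.0

    def perceive(self, env: dict) -> float:
        return env["demand"]                 # observe the environment model

    def decide(self, demand: float) -> float:
        # Bounded rationality: noisy estimate instead of a perfect optimum.
        return max(0.1, self.quality * demand * random.uniform(0.8, 1.2))

    def act(self, price: float) -> None:
        self.revenue += price

env = {"demand": 1.0}                        # environment model
agents = [Agent(random.random()) for _ in range(10)]  # heterogeneous agents

for step in range(100):                      # rules model: a simple market round
    for a in agents:
        a.act(a.decide(a.perceive(env)))
    env["demand"] *= random.uniform(0.95, 1.05)  # external environmental drift

print(sorted(round(a.revenue, 1) for a in agents))
```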
Procedia PDF Downloads 79
1613 Optimization of Vertical Axis Wind Turbine Based on Artificial Neural Network
Authors: Mohammed Affanuddin H. Siddique, Jayesh S. Shukla, Chetan B. Meshram
Abstract:
Neural networks are one of the most powerful tools of machine learning. Since the invention of the perceptron in the late 1950s, neural networks and their applications have grown rapidly. Neural networks are a technique originally developed for pattern recognition. The structure of a neural network consists of neurons connected through synapses. Here, we have investigated different algorithms and cost function reduction techniques for the optimization of vertical axis wind turbine (VAWT) rotor blades. The aerodynamic force coefficients corresponding to the airfoils are stored in a database along with the airfoil coordinates. A forward propagation neural network is created with the aerodynamic coefficients as input and the airfoil coordinates as output. In the proposed algorithm, the hidden layer is incorporated into a cost function having linear and non-linear error terms. In this article, it is observed that ANNs (artificial neural networks) can be used for the VAWT's optimization.
Keywords: VAWT, ANN, optimization, inverse design
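A minimal sketch of the inverse-design mapping, coefficients in, coordinates out, with random placeholder data; the layer sizes, the number of surface points, and the (Cl, Cd) input choice are assumptions:

```python
import numpy as np
from tensorflow import keras

# Inverse-design sketch: aerodynamic coefficients in, airfoil coordinates out.
rng = np.random.default_rng(3)
coeffs = rng.random((500, 2)).astype("float32")    # e.g., (Cl, Cd) pairs
coords = rng.random((500, 40)).astype("float32")   # 20 (x, y) surface points

model = keras.Sequential([
    keras.layers.Input(shape=(2,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(40),                        # flattened coordinates
])
model.compile(optimizer="adam", loss="mse")
model.fit(coeffs, coords, epochs=30, batch_size=32, verbose=0)

candidate = model.predict(np.array([[1.2, 0.03]], dtype="float32"))
print(candidate.reshape(20, 2)[:3])                # first few (x, y) points
```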
Procedia PDF Downloads 324
1612 Little Retrieval Augmented Generation for Named Entity Recognition: Toward Lightweight, Generative, Named Entity Recognition Through Prompt Engineering, and Multi-Level Retrieval Augmented Generation
Authors: Sean W. T. Bayly, Daniel Glover, Don Horrell, Simon Horrocks, Barnes Callum, Stuart Gibson, Mac Misuira
Abstract:
We assess the suitability of recent ∼7B-parameter, instruction-tuned language models Mistral-v0.3, Llama-3, and Phi-3 for Generative Named Entity Recognition (GNER). Our proposed multi-level information retrieval method achieves notable improvements over finetuned entity-level and sentence-level methods. We consider recent developments at the crossroads of prompt engineering and Retrieval Augmented Generation (RAG), such as EmotionPrompt. We conclude that language models directed toward this task are highly capable at distinguishing between positive classes (precision). However, smaller models seem to struggle to find all entities (recall). Poorly defined classes such as "Miscellaneous" exhibit substantial declines in performance, likely due to the ambiguity they introduce into the prompt. This is partially resolved through a self-verification method using engineered prompts that contain knowledge of the stricter class definitions, particularly in areas where class boundaries are in danger of overlapping, such as the conflation between the location "Britain" and the nationality "British". Finally, we explore correlations between model performance on the GNER task and performance on relevant academic benchmarks.
Keywords: generative named entity recognition, information retrieval, lightweight artificial intelligence, prompt engineering, personal information identification, retrieval augmented generation, self verification
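As a hedged sketch of retrieval-augmented prompting with self-verification; the prompt wording, retrieved examples, and the llm() stand-in are assumptions, not the authors' exact prompts:

```python
# Illustrative two-stage prompting for generative NER.
RETRIEVED = [  # hypothetical nearest-neighbour examples from a labelled store
    ("Britain summoned the envoy.", [("Britain", "LOC")]),
    ("The British delegation left.", [("British", "MISC")]),
]

def gner_prompt(sentence: str) -> str:
    shots = "\n".join(f"Text: {t}\nEntities: {e}" for t, e in RETRIEVED)
    return (f"Extract entities (PER, ORG, LOC, MISC) as (span, type) pairs.\n"
            f"{shots}\nText: {sentence}\nEntities:")

def verify_prompt(sentence: str, span: str, label: str) -> str:
    return (f"Class definitions: LOC = places; MISC = nationalities, events.\n"
            f"In: \"{sentence}\"\nIs \"{span}\" correctly labelled {label}? "
            f"Answer yes or no.")

# llm() stands in for any chat-completion call to Mistral/Llama/Phi:
# entities = parse(llm(gner_prompt(s)))
# entities = [e for e in entities if llm(verify_prompt(s, *e)).startswith("yes")]
print(gner_prompt("Britain announced new rules."))
```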
Procedia PDF Downloads 46