Search results for: algorithmic recommendations
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2106

2076 High-Frequency Cryptocurrency Portfolio Management Using Multi-Agent System Based on Federated Reinforcement Learning

Authors: Sirapop Nuannimnoi, Hojjat Baghban, Ching-Yao Huang

Abstract:

Over the past decade, with the fast development of blockchain technology since the birth of Bitcoin, there has been a massive increase in the usage of cryptocurrencies. However, cryptocurrencies are often not seen as a reliable investment opportunity due to the market's erratic behavior and high price volatility. With the recent success of deep reinforcement learning (DRL), portfolio management can be modeled and automated. In this paper, we propose a novel DRL-based multi-agent system to automatically make proper trading decisions on multiple cryptocurrencies and gain profits in the highly volatile cryptocurrency market. We also extend this multi-agent system with horizontal federated transfer learning to better adapt to the inclusion of new cryptocurrencies in our portfolio; therefore, through the concept of diversification, we can maximize our profits and minimize trading risks. Experimental results across multiple simulation scenarios reveal that the proposed algorithmic trading system can offer three promising key advantages over other systems: maximized profits, minimized risks, and adaptability.
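The abstract frames portfolio management as per-asset trading agents whose outputs are combined into portfolio weights and rewarded by risk-adjusted returns. The snippet below is a minimal, hypothetical sketch of that framing in Python/NumPy, not the authors' federated DRL system: each agent is reduced to a per-asset score, scores become weights via a softmax, and the reward is the log portfolio return net of an assumed transaction fee.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def portfolio_step(prices_t, prices_t1, agent_scores, prev_weights, fee=0.001):
    """One rebalancing step: combine per-asset agent scores into weights
    and compute the reward that would train the agents (illustrative only)."""
    weights = softmax(agent_scores)                  # per-asset allocation
    turnover = np.abs(weights - prev_weights).sum()  # fraction of the portfolio traded
    gross = weights @ (prices_t1 / prices_t)         # portfolio growth factor
    reward = np.log(gross) - fee * turnover          # log return net of assumed costs
    return weights, reward

# Toy example with 3 cryptocurrencies and random agent scores
rng = np.random.default_rng(0)
prices_t = np.array([100.0, 50.0, 10.0])
prices_t1 = np.array([101.0, 49.5, 10.2])
w, r = portfolio_step(prices_t, prices_t1, rng.normal(size=3), np.ones(3) / 3)
print(w, r)
```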

Keywords: cryptocurrency portfolio management, algorithmic trading, federated learning, multi-agent reinforcement learning

Procedia PDF Downloads 87
2075 Designing an Integrated Platform for Real-Time Recommendations Sharing among the Aged and People Living with Cancer

Authors: Adekunle O. Afolabi, Pekka Toivanen

Abstract:

The world is expected to experience growth in its ageing population, and this will bring about a high cost of providing care for these valuable citizens. In addition, many of them live with chronic diseases that come with old age. Providing adequate care in the face of rising costs and dwindling personnel can be challenging. However, advances in technologies and the emergence of the Internet of Things are providing a way to address these challenges while improving caregiving. This study proposes the integration of recommendation systems into homecare to provide real-time recommendations for effective management of people receiving care at home and those living with chronic diseases. Using the simplified Training Logic Concept, stakeholders and requirements were identified. Specific requirements were gathered from people living with cancer. The solution designed has two components, namely home and community, to enhance recommendations sharing for effective caregiving. The community component of the design was implemented with the development of a mobile app called Recommendations Sharing Community for Aged and Chronically Ill People (ReSCAP). This component has illustrated the possibility of real-time recommendations and improved recommendations sharing among care receivers and between a physician and care receivers. Full implementation will increase access to health data for better care decision-making.

Keywords: recommendation systems, Internet of Things, healthcare, homecare, real-time

Procedia PDF Downloads 131
2074 Carbohydrate-Based Recommendations as a Basis for Dietary Guidelines

Authors: A. E. Buyken, D. J. Mela, P. Dussort, I. T. Johnson, I. A. Macdonald, A. Piekarz, J. D. Stowell, F. Brouns

Abstract:

Recently, a number of renewed dietary guidelines have been published by various health authorities. The aim of the present work was 1) to review the processes (systematic approach/review, inclusion of public consultation) and methodological approaches used to identify and select the underpinning evidence base for the established recommendations for total carbohydrate (CHO), fiber and sugar consumption, and 2) to examine how differences in the methods and processes applied may have influenced the final recommendations. A search of WHO, US, Canada, Australia and European sources identified 13 authoritative dietary guidelines with the desired detailed information. Each of these guidelines was evaluated for its scientific basis (types and grading of the evidence) and the processes by which the guidelines were developed. Based on the data retrieved, the following conclusions can be drawn: 1) Generally, a relatively high total CHO and fiber intake and limited intake of sugars (added or free) is recommended. 2) Even where recommendations are quite similar, the specific justifications for quantitative/qualitative recommendations differ across authorities. 3) Differences appear to be due to inconsistencies in underlying definitions of CHO exposure and in the concurrent appraisal of CHO-providing foods and nutrients, as well as the choice and number of health outcomes selected for the evidence appraisal. 4) Differences in the selected articles, time frames or data aggregation method appeared to be of rather minor influence. From this assessment, the main recommendations are for: 1) more explicit quantitative justifications for numerical guidelines and communication of uncertainty; and 2) greater international harmonization, particularly with regard to underlying definitions of exposures and the range of relevant nutrition-related outcomes.

Keywords: carbohydrates, dietary fibres, dietary guidelines, recommendations, sugars

Procedia PDF Downloads 231
2073 Enhancing Residential Architecture through Generative Design: Balancing Aesthetics, Legal Constraints, and Environmental Considerations

Authors: Milena Nanova, Radul Shishkov, Damyan Damov, Martin Georgiev

Abstract:

This research paper presents an in-depth exploration of the use of generative design in urban residential architecture, with a dual focus on aligning aesthetic values with legal and environmental constraints. The study aims to demonstrate how generative design methodologies can produce residential building designs that are not only legally compliant and environmentally conscious but also aesthetically compelling. At the core of our research is a specially developed generative design framework tailored for urban residential settings. This framework employs computational algorithms to produce diverse design solutions, meticulously balancing aesthetic appeal with practical considerations. By integrating site-specific features, urban legal restrictions, and environmental factors, our approach generates designs that resonate with the unique character of urban landscapes while adhering to regulatory frameworks. The paper places emphasis on the algorithmic implementation of logical constraints and the intricacies of residential architecture by exploring the potential of generative design to create visually engaging and contextually harmonious structures. This exploration also contains an analysis of how these designs align with legal building parameters, showcasing the potential for creative solutions within the confines of urban building regulations. Concurrently, our methodology integrates functional, economic, and environmental factors. We investigate how generative design can be utilized to optimize buildings' performance with respect to these factors, aiming to achieve a symbiotic relationship between the built environment and its natural surroundings. Through a blend of theoretical research and practical case studies, this research highlights the multifaceted capabilities of generative design and demonstrates practical applications of our framework. Our findings illustrate the rich possibilities that arise from an algorithmic design approach in the context of a vibrant urban landscape. This study contributes an alternative perspective to residential architecture, suggesting that the future of urban development lies in embracing the complex interplay between computational design innovation, regulatory adherence, and environmental responsibility.
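The framework described generates many candidate designs, filters them against legal and environmental constraints, and ranks the survivors. The snippet below is a deliberately simplified sketch of that generate-filter-rank loop; the site area, zoning limits (maximum height, floor-area ratio), and the scoring function are invented for illustration and do not come from the paper.

```python
import random

SITE_AREA = 600.0    # m^2, hypothetical plot
MAX_HEIGHT = 15.0    # m, hypothetical zoning limit
MAX_FAR = 2.0        # hypothetical floor-area ratio limit

def random_candidate():
    """Sample a simple box-shaped building mass."""
    footprint = random.uniform(100, 400)   # m^2
    floors = random.randint(2, 6)
    return {"footprint": footprint, "floors": floors, "height": floors * 3.0}

def is_legal(c):
    """Check the candidate against the assumed height and FAR limits."""
    far = c["footprint"] * c["floors"] / SITE_AREA
    return c["height"] <= MAX_HEIGHT and far <= MAX_FAR

def score(c):
    """Toy objective: more floor area, penalized by height as a daylight proxy."""
    return c["footprint"] * c["floors"] - 20.0 * c["height"]

candidates = [random_candidate() for _ in range(1000)]
legal = [c for c in candidates if is_legal(c)]
best = max(legal, key=score)
print(len(legal), "legal candidates; best:", best)
```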

Keywords: generative design, computational design, parametric design, algorithmic modeling

Procedia PDF Downloads 23
2072 A Bibliometric Analysis on Filter Bubble

Authors: Misbah Fatma, Anam Saiyeda

Abstract:

This analysis charts the introduction and expansion of research into the filter bubble phenomenon over the last 10 years using a large dataset of academic publications. This bibliometric study demonstrates how interdisciplinary filter bubble research is. The identification of key authors and organizations leading filter bubble research sheds light on collaborative networks and knowledge transfer. Relevant papers are organized based on themes including algorithmic bias, polarisation, social media, and ethical implications through a systematic examination of the literature. To show how these patterns have changed over time, the study traces their historical development. The study also looks at how research is distributed globally, showing geographic patterns and discrepancies in scholarly output. The results of this bibliometric analysis allow us to comprehend the development and reach of filter bubble research more fully. This study offers insights into the ongoing discussion surrounding information personalization and its implications for societal discourse, democratic participation, and the potential risks to an informed citizenry by exposing dominant themes, interdisciplinary collaborations, and geographic patterns. To solve the problems caused by filter bubbles and to advance a more diverse and inclusive information environment, this analysis is essential for scholars and researchers.

Keywords: bibliometric analysis, social media, social networking, algorithmic personalization, self-selection, content moderation policies and limited access to information, recommender system and polarization

Procedia PDF Downloads 89
2071 Alpha: A Groundbreaking Avatar Merging User Dialogue with OpenAI's GPT-3.5 for Enhanced Reflective Thinking

Authors: Jonas Colin

Abstract:

Standing at the vanguard of AI development, Alpha represents an unprecedented synthesis of logical rigor and human abstraction, meticulously crafted to mirror the user's unique persona and personality, a feat previously unattainable in AI development. Alpha, an avant-garde artefact in the realm of artificial intelligence, epitomizes a paradigmatic shift in personalized digital interaction, amalgamating user-specific dialogic patterns with the sophisticated algorithmic prowess of OpenAI's GPT-3.5 to engender a platform for enhanced metacognitive engagement and individualized user experience. Underpinned by a sophisticated algorithmic framework, Alpha integrates vast datasets through a complex interplay of neural network models and symbolic AI, facilitating a dynamic, adaptive learning process. This integration enables the system to construct a detailed user profile, encompassing linguistic preferences, emotional tendencies, and cognitive styles, tailoring interactions to align with individual characteristics and conversational contexts. Furthermore, Alpha incorporates advanced metacognitive elements, enabling real-time reflection and adaptation in communication strategies. This self-reflective capability ensures continuous refinement of its interaction model, positioning Alpha not just as a technological marvel but as a harbinger of a new era in human-computer interaction, where machines engage with us on a deeply personal and cognitive level, transforming our interaction with the digital world.

Keywords: chatbot, GPT-3.5, metacognition, symbiosis

Procedia PDF Downloads 28
2070 Analysis of Labor Effectiveness at Green Tea Dry Sorting Workstation for Increasing Tea Factory Competitiveness

Authors: Bayu Anggara, Arita Dewi Nugrahini, Didik Purwadi

Abstract:

The dry sorting workstation needs labor to produce green tea at the Gambung Tea Factory. Observations show that some laborers are idle at certain times yet work overtime to meet production targets. The level of labor effectiveness has never been measured before. The purpose of this study is to determine the level of labor effectiveness and provide recommendations for improvement based on the results of the Pareto diagram and Ishikawa diagram. The method used to measure the level of labor effectiveness is Overall Labor Effectiveness (OLE). OLE has three indicators: availability, performance, and quality. Recommendations are made based on the results of the Pareto diagram and Ishikawa diagram for indicators that do not meet world standards. Based on the results of the study, the OLE value was 68.19%. Recommendations given to improve labor performance are adding mechanics, rescheduling rest periods, providing special training for labor, and giving rewards to labor. Furthermore, the recommendations for improving the quality of labor are procuring water content measuring devices, creating material standard policies, and rescheduling rest periods.
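Overall Labor Effectiveness is conventionally computed as the product of its three indicators. The short calculation below illustrates the formula with hypothetical indicator values chosen only so that the product lands near the reported 68.19%; the abstract does not report the individual indicator values.

```python
def overall_labor_effectiveness(availability, performance, quality):
    """OLE = availability x performance x quality, each expressed as a fraction."""
    return availability * performance * quality

# Hypothetical indicator values chosen only to illustrate the formula
ole = overall_labor_effectiveness(availability=0.95, performance=0.80, quality=0.897)
print(f"OLE = {ole:.2%}")   # ~68.2%
```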

Keywords: Ishikawa diagram, labor effectiveness, OLE, Pareto diagram

Procedia PDF Downloads 192
2069 Business and Psychological Principles Integrated into Automated Capital Investment Systems through Mathematical Algorithms

Authors: Cristian Pauna

Abstract:

With 2020 only a few steps away, investment in financial markets is a common activity nowadays. In the electronic trading environment, automated investment software has become a major part of the business intelligence system of any modern financial company. Today, investment decisions are assisted and/or made automatically by computers using mathematical algorithms. The complexity of these algorithms requires computer assistance in the investment process. This paper will present several investment strategies that can be automated with algorithmic trading for the Deutscher Aktienindex (DAX30). It was found that, based on several price-action mathematical models used for high-frequency trading, some investment strategies can be optimized and improved for automated investments with good results. This paper will present the way to automate these investment decisions. Automated signals will be built using all of these strategies. Three major types of investment strategies were found in this study. The types are separated by the target length and by the exit strategy used. The exit decisions will also be automated, and the paper will present the specificity of each investment type. A comparative study will also be included in this paper in order to reveal the differences between strategies. Based on these results, the profit and the capital exposure will be compared and analyzed in order to qualify the investment methodologies presented and to compare them with any other investment system. In conclusion, some major investment strategies will be revealed and compared in order to be considered for inclusion in any automated investment system.

Keywords: algorithmic trading, automated investment systems, limit conditions, trading principles, trading strategies

Procedia PDF Downloads 161
2068 Hidden Markov Model for Financial Limit Order Book and Its Application to Algorithmic Trading Strategy

Authors: Sriram Kashyap Prasad, Ionut Florescu

Abstract:

This study models intraday asset prices as driven by a Markov process. This work identifies the latent states of the Hidden Markov model, using limit order book data (trades and quotes) to continuously estimate the states throughout the day. This work builds a trading strategy using the estimated states to generate signals. The strategy utilizes the current state to recalibrate buy/sell levels and the transitions between states to trigger a stop-loss when adverse price movements occur. The proposed trading strategy is tested on the Stevens High Frequency Trading (SHIFT) platform. SHIFT is a highly realistic market simulator with functionalities for creating an artificial market simulation by deploying agents, trading strategies, distributing initial wealth, etc. In the implementation, several assets on the NASDAQ exchange are used for testing. In comparison to a strategy with static buy/sell levels, this study shows that the number of limit orders that get matched and executed can be increased. Executing limit orders earns rebates on NASDAQ. The system can capture jumps in the limit order book prices, provide dynamic buy/sell levels and trigger stop-loss signals to improve the PnL (Profit and Loss) performance of the strategy.
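As a rough illustration of the modeling step (not the authors' SHIFT implementation), the sketch below fits a Gaussian hidden Markov model to simple limit-order-book features and reads off the most likely state sequence; a trading layer would then map states to buy/sell levels and use state transitions as stop-loss triggers. It assumes the third-party hmmlearn package and synthetic features.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Synthetic stand-ins for intraday features: mid-price returns and bid-ask spread
rng = np.random.default_rng(1)
returns = rng.normal(0, 0.001, size=2000)
spread = np.abs(rng.normal(0.01, 0.002, size=2000))
X = np.column_stack([returns, spread])

# Fit a 3-state HMM and decode the latent regime at each observation
model = GaussianHMM(n_components=3, covariance_type="full", n_iter=50, random_state=1)
model.fit(X)
states = model.predict(X)

# Example trading hook: flag transitions between regimes as potential stop-loss events
transitions = np.flatnonzero(np.diff(states) != 0)
print("state counts:", np.bincount(states), "| first transitions:", transitions[:5])
```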

Keywords: algorithmic trading, Hidden Markov model, high frequency trading, limit order book learning

Procedia PDF Downloads 123
2067 Muslim Social Workers and Imams’ Recommendations in Marital and Child Custody Cases of Persons with Intellectual or Mental Disability

Authors: Badran Leena, Rimmerman Arie

Abstract:

Arab society in Israel is undergoing modernization and secularization. However, its approach to disability and mental illness is still dominated by religious and traditional stereotypes, as well as folk remedies and community practices. The present study examines differences in Muslim social workers' and Imams' recommendations in marriage/divorce and child custody cases of persons with intellectual disabilities (ID) or mental illness. The study has two goals: (1) To examine differences in recommendations between Imams and Muslim social workers; (2) To explore variables related to their differential recommendations as observed in their responses to vignettes—a quantitative study using vignettes resembling existing Muslim religious (Sharia) court cases. Muslim social workers (138) and Imams (48) completed a background questionnaire, a religiosity questionnaire, and a questionnaire that included 25 vignettes constructed by the researcher based on court rulings adapted for the study. Muslim social workers tended to consider the religious recommendation when the family of a person with ID or mental illness was portrayed in the vignette as religious. The same applied to Imams, albeit to a greater extent. The findings call for raising awareness among social workers and academics regarding the importance of religion and tradition in formulating professional recommendations.

Keywords: child custody, intellectual and developmental disability, marriage/divorce, mental illness, sharia court, social workers

Procedia PDF Downloads 160
2066 Automated Transformation of 3D Point Cloud to BIM Model: Leveraging Algorithmic Modeling for Efficient Reconstruction

Authors: Radul Shishkov, Orlin Davchev

Abstract:

The digital era has revolutionized architectural practices, with building information modeling (BIM) emerging as a pivotal tool for architects, engineers, and construction professionals. However, the transition from traditional methods to BIM-centric approaches poses significant challenges, particularly in the context of existing structures. This research introduces a technical approach to bridge this gap through the development of algorithms that facilitate the automated transformation of 3D point cloud data into detailed BIM models. The core of this research lies in the application of algorithmic modeling and computational design methods to interpret and reconstruct point cloud data (a collection of data points in space, typically produced by 3D scanners) into comprehensive BIM models. This process involves complex stages of data cleaning, feature extraction, and geometric reconstruction, which are traditionally time-consuming and prone to human error. By automating these stages, our approach significantly enhances the efficiency and accuracy of creating BIM models for existing buildings. The proposed algorithms are designed to identify key architectural elements within point clouds, such as walls, windows, doors, and other structural components, and to translate these elements into their corresponding BIM representations. This includes the integration of parametric modeling techniques to ensure that the generated BIM models are not only geometrically accurate but also embedded with essential architectural and structural information. Our methodology has been tested on several real-world case studies, demonstrating its capability to handle diverse architectural styles and complexities. The results showcase a substantial reduction in time and resources required for BIM model generation while maintaining high levels of accuracy and detail. This research contributes significantly to the field of architectural technology by providing a scalable and efficient solution for the integration of existing structures into the BIM framework. It paves the way for more seamless and integrated workflows in renovation and heritage conservation projects, where the accuracy of existing conditions plays a critical role. The implications of this study extend beyond architectural practices, offering potential benefits in urban planning, facility management, and historic preservation.
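One common building block for the kind of feature extraction described here is RANSAC plane segmentation, which can pull planar elements such as walls and floors out of a raw scan. The sketch below uses the open-source Open3D library for that single step; the file name, thresholds, and the idea of looping over several planes are illustrative assumptions, not the authors' full point-cloud-to-BIM pipeline.

```python
import open3d as o3d

# Load a scan (hypothetical file) and down-sample it to speed up segmentation
pcd = o3d.io.read_point_cloud("scan.ply")
pcd = pcd.voxel_down_sample(voxel_size=0.02)

planes = []
rest = pcd
for _ in range(5):  # extract up to five dominant planar elements (walls, floor, ...)
    plane_model, inliers = rest.segment_plane(distance_threshold=0.02,
                                              ransac_n=3,
                                              num_iterations=1000)
    planes.append((plane_model, rest.select_by_index(inliers)))
    rest = rest.select_by_index(inliers, invert=True)

for i, (model, patch) in enumerate(planes):
    a, b, c, d = model
    print(f"plane {i}: {a:.2f}x + {b:.2f}y + {c:.2f}z + {d:.2f} = 0, "
          f"{len(patch.points)} points")
```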

Keywords: BIM, 3D point cloud, algorithmic modeling, computational design, architectural reconstruction

Procedia PDF Downloads 22
2065 Unlocking the Future of Grocery Shopping: Graph Neural Network-Based Cold Start Item Recommendations with Reverse Next Item Period Recommendation (RNPR)

Authors: Tesfaye Fenta Boka, Niu Zhendong

Abstract:

Recommender systems play a crucial role in connecting individuals with the items they require, as is particularly evident in the rapid growth of online grocery shopping platforms. These systems predominantly rely on user-centered recommendations, where items are suggested based on individual preferences, garnering considerable attention and adoption. However, our focus lies on the item-centered recommendation task within the grocery shopping context. In the reverse next item period recommendation (RNPR) task, we are presented with a specific item and challenged to identify potential users who are likely to consume it in the upcoming period. Despite the ever-expanding inventory of products on online grocery platforms, the cold start item problem persists, posing a substantial hurdle in delivering personalized and accurate recommendations for new or niche grocery items. To address this challenge, we propose a Graph Neural Network (GNN)-based approach. By capitalizing on the inherent relationships among grocery items and leveraging users' historical interactions, our model aims to provide reliable and context-aware recommendations for cold-start items. This integration of GNN technology holds the promise of enhancing recommendation accuracy and catering to users' individual preferences. This research contributes to the advancement of personalized recommendations in the online grocery shopping domain. By harnessing the potential of GNNs and exploring item-centered recommendation strategies, we aim to improve the overall shopping experience and satisfaction of users on these platforms.
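The core intuition here (relationships among grocery items can supply an embedding for an item with no interaction history) can be shown with a single round of neighborhood aggregation. The sketch below is a hand-rolled, NumPy-only stand-in for a GNN layer: a cold item borrows the averaged embedding of the items it is connected to in an item-item graph, and candidate users are ranked for the RNPR-style task by similarity to that embedding. The graph, embeddings, and scoring are toy assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_users, dim = 6, 4, 8

item_emb = rng.normal(size=(n_items, dim))   # learned embeddings of existing items
user_emb = rng.normal(size=(n_users, dim))   # learned user embeddings

# Item-item graph links (e.g., co-purchase or same category); item 5 is the cold item
neighbors_of_cold = [1, 3, 4]

# One mean-aggregation step: the cold item inherits its neighbors' average embedding
cold_emb = item_emb[neighbors_of_cold].mean(axis=0)

# Reverse next-period recommendation: rank users most likely to consume the cold item
scores = user_emb @ cold_emb
ranking = np.argsort(-scores)
print("users ranked for the cold item:", ranking, "scores:", np.round(scores, 2))
```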

Keywords: recommender systems, cold start item recommendations, online grocery shopping platforms, graph neural networks

Procedia PDF Downloads 53
2064 Tweets to Touchdowns: Predicting National Football League Achievement from Social Media Optimism

Authors: Rohan Erasala, Ian McCulloh

Abstract:

The NFL Draft is a chance for every NFL team to select their next superstar. As a result, teams invest heavily in scouting, and millions of fans partake in the online discourse surrounding the draft. This paper investigates the potential correlations between positive sentiment in individual draft selection threads from the subreddit r/NFL and whether these data can be used to make successful player recommendations. It is hypothesized that there will be limited correlations and nonviable recommendations made from these threads. The hypothesis is tested using sentiment analysis of draft thread comments, analyzing both correlation and the precision at k of top scores. The results indicate weak correlations between the percentage of positive comments in a draft selection thread and a player's approximate value, but suggest potentially viable recommendations can be made by looking at players whose draft selection threads have the highest percentage of positive comments.
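The two evaluation steps named in the abstract (correlating thread positivity with later player value, and checking precision at k for the most positive threads) are simple to express. The snippet below assumes the per-thread positive-comment percentages and approximate values are already computed (the numbers shown are invented), and uses SciPy's Pearson correlation plus a small precision-at-k helper.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical data: % positive comments per draft thread and each player's approximate value
pos_pct = np.array([0.62, 0.45, 0.71, 0.30, 0.55, 0.80, 0.40, 0.66])
approx_value = np.array([28, 10, 35, 5, 22, 15, 12, 30])

r, p = pearsonr(pos_pct, approx_value)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")

def precision_at_k(scores, values, k, good_threshold=20):
    """Fraction of the top-k most positively discussed players who turned out 'good'."""
    top_k = np.argsort(-scores)[:k]
    return float(np.mean(values[top_k] >= good_threshold))

print("precision@3 =", precision_at_k(pos_pct, approx_value, k=3))
```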

Keywords: national football league, NFL, NFL Draft, sentiment analysis, Reddit, social media, NLP

Procedia PDF Downloads 32
2063 Development of Active Learning Calculus Course for Biomedical Program

Authors: Mikhail Bouniaev

Abstract:

The paper reviews the design and implementation of a calculus course required for the Biomedical Competency Based Program, developed as a joint project between The University of Texas Rio Grande Valley and the University of Texas' Institute for Transformational Learning, from the theoretical perspective presented in scholarly work on active learning, formative assessment, and online teaching. Following a four-stage curriculum development process (objective, content, delivery, and assessment) and theoretical recommendations that guarantee effectiveness and efficiency of assessment in active learning, we discuss practical recommendations on how to incorporate a strong formative assessment component to address the discipline's needs and students' major needs. In the design and implementation of this project, we used Constructivism and Stage-by-Stage Development of Mental Actions Theory recommendations.

Keywords: active learning, assessment, calculus, cognitive demand, mathematics, stage-by-stage development of mental action theory

Procedia PDF Downloads 328
2062 Accountability of Artificial Intelligence: An Analysis Using Edgar Morin’s Complex Thought

Authors: Sylvie Michel, Sylvie Gerbaix, Marc Bidan

Abstract:

Can artificial intelligence (AI) be held accountable for its detrimental impacts? This question gains heightened relevance given AI's pervasive reach across various domains, magnifying its power and potential. The expanding influence of AI raises fundamental ethical inquiries, primarily centering on biases, responsibility, and transparency. This encompasses discriminatory biases arising from algorithmic criteria or data, accidents attributed to autonomous vehicles or other systems, and the imperative of transparent decision-making. This article aims to stimulate reflection on AI accountability, denoting the necessity to elucidate the effects it generates. Accountability comprises two integral aspects: adherence to legal and ethical standards and the imperative to elucidate the underlying operational rationale. The objective is to initiate a reflection on the obstacles to this "accountability" in the face of the complexity of artificial intelligence systems and their effects. This article then proposes to mobilize Edgar Morin's complex thought to encompass and face the challenges of this complexity. The first contribution is to point out the challenges posed by the complexity of AI, with fractional accountability among a myriad of human and non-human actors, such as software and equipment, which ultimately contribute to the decisions taken and are multiplied in the case of AI. Accountability faces three challenges resulting from the complexity of the ethical issues combined with the complexity of AI. The challenge of the non-neutrality of algorithmic systems, as fully ethically non-neutral actors, is put forward by a revealing ethics approach that calls for assigning responsibilities to these systems. The challenge of the dilution of responsibility is induced by the multiplicity of and distance between the actors. Thus, a dilution of responsibility is induced by a split in decision-making between developers, who feel they fulfill their duty by strictly respecting the requests they receive, and management, which does not consider itself responsible for technology-related flaws. Accountability is also confronted with the challenge of transparency of complex and scalable algorithmic systems, non-human actors self-learning via big data. A second contribution involves leveraging E. Morin's principles, providing a framework to grasp the multifaceted ethical dilemmas and subsequently paving the way for establishing accountability in AI. When addressing the ethical challenge of biases, the "hologrammatic" principle underscores the imperative of acknowledging the non-ethical neutrality of algorithmic systems, inherently imbued with the values and biases of their creators and society. The "dialogic" principle advocates for the responsible consideration of ethical dilemmas, encouraging the integration of complementary and contradictory elements in solutions from the very inception of the design phase. The principle of organizing recursiveness, akin to the "transparency" of the system, promotes a systemic analysis to account for the induced effects and guides the incorporation of modifications into the system to rectify its drifts. In conclusion, this contribution serves as an inception for contemplating the accountability of "artificial intelligence" systems despite the evident ethical implications and potential deviations. Edgar Morin's principles, providing a lens to contemplate this complexity, offer valuable perspectives to address these challenges concerning accountability.

Keywords: accountability, artificial intelligence, complexity, ethics, explainability, transparency, Edgar Morin

Procedia PDF Downloads 34
2061 Inner Derivations of Low-Dimensional Diassociative Algebras

Authors: M. A. Fiidow, Ahmad M. Alenezi

Abstract:

In this work, we study the inner derivations of diassociative algebras in dimensions two and three. An algorithmic approach is adopted for the computation of inner derivations, using some results from the derivations of finite-dimensional diassociative algebras. Some basic properties of inner derivations of finite-dimensional diassociative algebras are also provided.

Keywords: diassociative algebras, inner derivations, derivations, left and right operator

Procedia PDF Downloads 242
2060 The Phenomenon of Suicide in the Social Consciousness: Recommendations for the Educational Strategy of the Society and Prevention of Suicide

Authors: Aldona Anna Osajda

Abstract:

Suicide is a phenomenon that worries both the public and scientists in various fields. In society, suicide is a taboo subject, and in addition, there are many myths and stereotypes that are detrimental to the proper understanding of, and appropriate response to, a person at risk of suicide. It is necessary to educate society and to develop suicide prevention systems for various age groups. The research covers the level of knowledge and views of Polish society, including teachers and youth, regarding suicide. The main research problem is to establish the level of awareness of Polish society about the phenomenon of suicide. The study will be based on the diagnostic survey method, using the survey technique. Information about the research will be disseminated electronically on the Internet via social messaging. The collected data will be analyzed using appropriate statistics. On the basis of the obtained results, answers will be given to the research questions, which will become the basis for designing an appropriate educational strategy for society in the field of suicide and for developing recommendations for teachers to conduct classes in the field of suicide prevention for children and adolescents.

Keywords: phenomenon of suicides, suicide, suicide prevention, suicidology

Procedia PDF Downloads 164
2059 Navigating the Digital Landscape: An Ethnographic Content Analysis of Black Youth's Encounters with Racially Traumatic Content on Social Media

Authors: Tiera Tanksley, Amanda M. McLeroy

Abstract:

The advent of technology and social media has ushered in a new era of communication, providing platforms for news dissemination and cause advocacy. However, this digital landscape has also exposed a distressing phenomenon termed "Black death," or trauma porn. This paper delves into the profound effects of repeated exposure to traumatic content on Black youth via social media, exploring the psychological impacts and potential reinforcing of stereotypes. Employing Critical Race Technology Theory (CRTT), the study sheds light on algorithmic anti-blackness and its influence on Black youth's lives and educational experiences. Through ethnographic content analysis, the research investigates common manifestations of Black death encountered online by Black adolescents. Findings unveil distressing viral videos, traumatic images, racial slurs, and hate speech, perpetuating stereotypes. However, amidst the distress, the study identifies narratives of activism and social justice on social media platforms, empowering Black youth to engage in positive change. Coping mechanisms and community support emerge as significant factors in navigating the digital landscape. The study underscores the need for comprehensive interventions and policies informed by evidence-based research. By addressing algorithmic anti-blackness and promoting digital resilience, the paper advocates for a more empathetic and inclusive online environment. Understanding coping mechanisms and community support becomes imperative for fostering mental well-being among Black adolescents navigating social media. In education, the implications are substantial. Acknowledging the impact of Black death content, educators play a pivotal role in promoting media literacy and digital resilience. Creating inclusive and safe online spaces, educators can mitigate negative effects and encourage open discussions about traumatic content. The application of CRTT in educational technology emphasizes dismantling systemic biases and promoting equity. In conclusion, this study calls for educators to be cognizant of the impact of Black death content on social media. By prioritizing media literacy, fostering digital resilience, and advocating for unbiased technologies, educators contribute to an inclusive and just educational environment for all students, irrespective of their race or background. Addressing challenges related to Black death content proactively ensures the well-being and mental health of Black adolescents, fostering an empathetic and inclusive digital space.

Keywords: algorithmic anti-Blackness, digital resilience, media literacy, traumatic content

Procedia PDF Downloads 29
2058 Recommendation Systems for Cereal Cultivation Using Advanced Causal Inference Modeling

Authors: Md Yeasin, Ranjit Kumar Paul

Abstract:

In recent years, recommendation systems have become indispensable tools for agricultural systems. Accurate and timely recommendations can significantly impact crop yield and overall productivity. Causal inference modeling aims to establish cause-and-effect relationships by identifying the impact of variables or factors on outcomes, enabling more accurate and reliable recommendations. New advancements in causal inference models have been reported in the literature. With the advent of the modern era, deep learning and machine learning models have emerged as efficient tools for modeling. This study proposes an innovative approach to enhance recommendation systems with a machine learning-based causal inference model. By considering the causal effect and opportunity cost of covariates, the proposed system can provide more reliable and actionable recommendations for cereal farmers. To validate the effectiveness of the proposed approach, experiments are conducted using cereal cultivation data from eastern India. Comparative evaluations are performed against existing correlation-based recommendation systems, demonstrating the superiority of the advanced causal inference modeling approach in terms of recommendation accuracy and impact on crop yield. Overall, it empowers farmers with personalized recommendations tailored to their specific circumstances, leading to optimized decision-making and increased crop productivity.
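The abstract does not specify the exact estimator, so the sketch below shows one standard machine-learning approach to the causal question it raises: a two-model (T-learner) uplift estimate, where separate outcome models are fit for treated and untreated plots and the recommendation is made when the estimated yield gain exceeds an assumed opportunity cost. The synthetic data, scikit-learn models, and cost threshold are illustrative assumptions only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n = 2000
X = rng.normal(size=(n, 4))                 # covariates: soil, rainfall, etc. (synthetic)
t = rng.integers(0, 2, size=n)              # treatment: e.g., a fertilizer practice
true_effect = 0.8 * (X[:, 0] > 0)           # heterogeneous treatment effect
y = X[:, 1] + true_effect * t + rng.normal(0, 0.5, size=n)   # observed yield

# T-learner: one outcome model per treatment arm
m1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[t == 1], y[t == 1])
m0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[t == 0], y[t == 0])

uplift = m1.predict(X) - m0.predict(X)      # estimated causal effect per plot
opportunity_cost = 0.3                      # assumed cost of adopting the practice
recommend = uplift > opportunity_cost
print(f"recommend treatment for {recommend.mean():.0%} of plots")
```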

Keywords: agriculture, causal inference, machine learning, recommendation system

Procedia PDF Downloads 45
2057 Enforcement of Decisions of Ombudsmen and the South African Public Protector: Muzzling the Watchdogs

Authors: Roxan Venter

Abstract:

Ombudsmen often face the challenge of a lack of authority to have their decisions and recommendations enforced. This lack of authority may be seen as one of the major obstacles in the way of the effectiveness of the institutions of Ombudsman and also the South African Public Protector. The paper will address the current legal position in South Africa with regard to the status of the decisions and recommendations of the South African Public Protector and the enforcement thereof. In addition, the paper will compare the South African position with the experiences of other jurisdictions, including Scandinavian countries like Sweden, Denmark and Norway, but also New Zealand and Northern Ireland, with regard to the enforcement of the decisions of Ombudsmen. Finally, the paper will make recommendations with regard to the enhancement of the power and authority of Ombudsmen in order to effectively enforce their decisions. It is submitted that the creation of the office of Ombudsman, and the Public Protector in the South African system, is an essential tool to ensure the protection of society against governmental abuse of power and it is therefore imperative to ensure that these watchdogs of democracy are not muzzled by a lack of powers of enforcement.

Keywords: enforcement of decisions of ombudsmen, governmental control, ombudsman, South African public protector

Procedia PDF Downloads 365
2056 On a Generalization of the Spectral Dichotomy Method of a Matrix With Respect to Parabolas

Authors: Mouhamadou Dosso

Abstract:

This paper presents methods of spectral dichotomy of a matrix that compute spectral projectors onto the subspace associated with the eigenvalues lying outside parabolas described by a general equation. These methods are modifications of the one proposed in [A. N. Malyshev and M. Sadkane, SIAM J. Matrix Anal. Appl. 18 (2), 265-278, 1997], which uses the spectral dichotomy method of a matrix with respect to the imaginary axis. Theoretical and algorithmic aspects of the methods are developed. Numerical results obtained by applying the presented methods to test matrices are reported.

Keywords: spectral dichotomy method, spectral projector, eigensubspaces, eigenvalue

Procedia PDF Downloads 63
2055 Evolutionary Prediction of the Viral RNA-Dependent RNA Polymerase of Chandipura vesiculovirus and Related Viral Species

Authors: Maneesh Kumar, Roshan Kamal Topno, Manas Ranjan Dikhit, Vahab Ali, Ganesh Chandra Sahoo, Bhawana, Major Madhukar, Rishikesh Kumar, Krishna Pandey, Pradeep Das

Abstract:

Chandipura vesiculovirus is an emerging (-) ssRNA viral entity belonging to the genus Vesiculovirus of the family Rhabdoviridae and is associated with fatal encephalitis in tropical regions. The multi-functional viral RNA-dependent RNA polymerase (vRdRp) carries conserved amino acid residues and is assigned to synthesize distinct viral polypeptides. The lack of proofreading ability of the vRdRp produces many mutated variants. Here, we have performed an evolutionary analysis of 20 vRdRp protein sequences from different strains of Chandipura vesiculovirus, along with other viral species from the genus Vesiculovirus, inferred in MEGA 6.06 employing the neighbour-joining method. The p-distance method was used to calculate the optimum tree, which showed a sum of branch lengths of about 1.436. The percentage of replicate trees in which the associated taxa clustered together in the bootstrap test (1000 replicates) is shown next to the branches. No mutation was observed in the Indian strains of Chandipura vesiculovirus. In vRdRp, residues 1230 (His) and 1231 (Arg) participate actively in catalysis and are found conserved in different strains of Chandipura vesiculovirus. Both amino acid residues were also conserved in the other viral species of the genus Vesiculovirus. Many isolates exhibited the maximum number of mutations in the catalytic regions of Chandipura vesiculovirus strains at positions 26 (Ser→Ala), 47 (Ser→Ala), 90 (Ser→Tyr), 172 (Gly→Ile, Val), 172 (Ser→Tyr), 387 (Asn→Ser), 1301 (Thr→Ala), 1330 (Ala→Glu), 2015 (Phe→Ser) and 2065 (Thr→Val), which make them variants under the different tropical conditions from which they evolved. The results clarify the pattern of RNA evolution and support the development of vRdRp as an evolutionary marker. Although only a limited number of vRdRp protein sequences are available for Chandipura vesiculovirus and other species, this might provide possibilities to identify the virulence level during viral multiplication in a host.
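For readers who want to reproduce the tree-building step, a minimal Biopython sketch is shown below: it computes a p-distance-style ("identity") distance matrix from an alignment and builds a neighbour-joining tree. The alignment file name is a placeholder, and the analysis in the paper was performed in MEGA 6.06 rather than with this code.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Placeholder file: an alignment of vRdRp protein sequences in FASTA format
alignment = AlignIO.read("vrdrp_alignment.fasta", "fasta")

# 'identity' gives the proportion of mismatched sites, i.e. a p-distance-like measure
calculator = DistanceCalculator("identity")
distance_matrix = calculator.get_distance(alignment)

# Neighbour-joining tree from the distance matrix
constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(distance_matrix)

Phylo.draw_ascii(nj_tree)
```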

Keywords: Chandipura, (-) ssRNA, viral RNA-dependent RNA polymerase, neighbour-joining method, p-distance algorithm, evolutionary marker

Procedia PDF Downloads 164
2054 Understanding the Influence on Drivers’ Recommendation and Review-Writing Behavior in the P2P Taxi Service

Authors: Liwen Hou

Abstract:

The booming mobile business has been penetrating the taxi industry worldwide, with P2P (peer-to-peer) taxi services, as an emerging business model, transforming the industry. In parallel with other mobile businesses, member recommendations and online reviews are believed to be very effective with regard to acquiring new users for P2P taxi services. Based on an empirical dataset from the taxi industry in China, this study aims to reveal which factors influence users' recommendations and review-writing behaviors. Differing from the existing literature, this paper takes the taxi driver's perspective into consideration and hence selects a group of variables related to the drivers. We built two models to reflect the factors that influence the number of recommendations and reviews posted on the platform (i.e., the app). Our models show that all factors, except the driver's score, significantly influence the recommendation behavior. Likewise, only one factor, passengers' bad reviews, is insignificant in generating more drivers' reviews. In conclusion, we summarize the findings and limitations of the research.

Keywords: online recommendation, P2P taxi service, review-writing, word of mouth

Procedia PDF Downloads 286
2053 Artificial Intelligence Technologies Used in Healthcare: Its Implication on the Healthcare Workforce and Applications in the Diagnosis of Diseases

Authors: Rowanda Daoud Ahmed, Mansoor Abdulhak, Muhammad Azeem Afzal, Sezer Filiz, Usama Ahmad Mughal

Abstract:

This paper discusses important aspects of AI in the healthcare domain. The increase of healthcare data in both size and complexity opens more room for artificial intelligence applications. Our focus is to review the main AI methods within the scope of the healthcare domain. The results of the review show that recommendations for diagnosis, recommendations for treatment, patient engagement, and administrative tasks are the key applications of AI in healthcare. Understanding the potential of AI methods in the domain of healthcare would benefit healthcare practitioners and improve patient outcomes.

Keywords: AI in healthcare, technologies of AI, neural network, future of AI in healthcare

Procedia PDF Downloads 85
2052 Getting It Right Before Implementation: Using Simulation to Optimize Recommendations and Interventions After Adverse Event Review

Authors: Melissa Langevin, Natalie Ward, Colleen Fitzgibbons, Christa Ramsey, Melanie Hogue, Anna Theresa Lobos

Abstract:

Description: Root Cause Analysis (RCA) is used by health care teams to examine adverse events (AEs) to identify causes, which then leads to recommendations for prevention. Despite widespread use, RCA has limitations. Best practices have not been established for implementing recommendations or tracking the impact of interventions after AEs. During phase 1 of this study, we used simulation to analyze two fictionalized AEs that occurred in hospitalized paediatric patients to identify and understand how the errors occurred and to generate recommendations to mitigate and prevent recurrences. Scenario A involved an error of commission (inpatient drug error), and Scenario B involved detecting an error that already occurred (critical care drug infusion error). Recommendations generated were: improved drug labeling, specialized drug kits, alert signs and clinical checklists. Aim: Use simulation to optimize interventions recommended after critical event analysis prior to implementation in the clinical environment. Methods: Suggested interventions from Phase 1 were designed and tested through scenario simulation in the clinical environment (medicine ward or pediatric intensive care unit). Each scenario was simulated 8 times. Recommendations were tested using different, voluntary teams, and each scenario was debriefed to understand why the error was repeated despite interventions and how interventions could be improved. Interventions were modified with subsequent simulations until recommendations were felt to have an optimal effect and data saturation was achieved. Along with concrete suggestions for design and process change, qualitative data pertaining to employee communication and hospital standard work were collected and analyzed. Results: Each scenario had a total of three interventions to test. In scenario 1, the error was reproduced in the initial two iterations and mitigated following key intervention changes. In scenario 2, the error was identified immediately in all cases where the intervention checklist was utilized properly. Independently of intervention changes and improvements, the simulation was beneficial to identify which of these should be prioritized for implementation and highlighted that even the potential solutions most frequently suggested by participants did not always translate into error prevention in the clinical environment. Conclusion: We conclude that interventions that help to change process (epinephrine kit or mandatory checklist) were more successful at preventing errors than passive interventions (signage, changes in memory aids). Given that even the most successful interventions needed modifications and subsequent re-testing, simulation is key to optimizing suggested changes. Simulation is a safe, practice-changing modality for institutions to use prior to implementing recommendations from RCA following AE reviews.

Keywords: adverse events, patient safety, pediatrics, root cause analysis, simulation

Procedia PDF Downloads 122
2051 Price Prediction Line, Investment Signals and Limit Conditions Applied for the German Financial Market

Authors: Cristian Păuna

Abstract:

In the first decades of the 21st century, in the electronic trading environment, algorithmic capital investments became the primary tool to make a profit by speculation in financial markets. A significant number of traders and private or institutional investors participate in the capital markets every day using automated algorithms. Autonomous trading software is today a considerable part of the business intelligence system of any modern financial activity. The trading decisions and orders are made automatically by computers using different mathematical models. This paper will present one of these models, called the Price Prediction Line. A mathematical algorithm will be revealed to build a reliable trend line, which is the base for limit conditions and automated investment signals, the core of a computerized investment system. The paper will show how to apply these tools to generate entry and exit investment signals and limit conditions that build a mathematical filter for investment opportunities, and the methodology to integrate all of these into automated investment software. The paper will also present trading results obtained for the leading German financial market index with the presented methods in order to analyze and compare different automated investment algorithms. It was found that a specific mathematical algorithm can be optimized and integrated into an automated trading system with good and sustained results for the leading German market. Investment results will be compared in order to qualify the presented model. In conclusion, a 1:6.12 risk-to-reward ratio was obtained by applying the trigonometric method to the DAX Deutscher Aktienindex over a 24-month investment period. These results are superior to those obtained with other similar models, as this paper reveals. The general idea sustained by this paper is that the Price Prediction Line model presented is a reliable capital investment methodology that can be successfully applied to build an automated investment system with excellent results.
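Without access to the paper's exact formulas, the sketch below illustrates the general idea of a price prediction line: a least-squares trend line fitted over a rolling window, with limit conditions defined as bands around the projected line value that gate entry and exit signals. The window length, band width, and synthetic price series are assumptions for illustration, not the published model.

```python
import numpy as np

def prediction_line_signals(prices, window=50, band=0.01):
    """Fit a linear trend over the last `window` prices and derive limit conditions.
    Returns the projected next value and a signal: +1 buy, -1 sell, 0 stay out."""
    y = prices[-window:]
    x = np.arange(window)
    slope, intercept = np.polyfit(x, y, 1)          # least-squares trend line
    projected = slope * window + intercept          # line value one step ahead
    lower, upper = projected * (1 - band), projected * (1 + band)
    last = prices[-1]
    if last < lower and slope > 0:                  # price dipped below a rising line
        return projected, +1
    if last > upper and slope < 0:                  # price spiked above a falling line
        return projected, -1
    return projected, 0

# Synthetic index-like series used only to exercise the function
rng = np.random.default_rng(3)
prices = 13000 + np.cumsum(rng.normal(0, 20, size=300))
print(prediction_line_signals(prices))
```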

Keywords: algorithmic trading, automated trading systems, high-frequency trading, DAX Deutscher Aktienindex

Procedia PDF Downloads 103
2050 Analyzing the Value of Brand Engagement on Social Media for B2B Firms: Evidence from China

Authors: Shuai Yang, Bin Li, Sixing Chen

Abstract:

Engaging and co-creating value with buyers (i.e., the buying organizations) have rapidly become a rising trend for sellers (i.e., the selling organizations) within Business-to-Business (B2B) environments, through which buyers can interact more with sellers and be better informed about products. One important way to achieve this is through engaging with buyers on social media, termed as brand engagement on social media, which provides a platform for sellers to interact with customers. This study addresses the research gap by answering the following questions: (1) Are B2B firms’ brand engagement on social media related to their firm value? (2) To what extent do analyst stock recommendations channel B2B firms’ brand engagement on social media’s possible impact on firm value? To answer the research questions, this study collected data merged from multiple sources. The results show that there is a positive association between seller-initiated engagement and B2B sellers’ firm value. Besides, analyst stock recommendations mediate the positive relationships between seller-initiated engagement and firm value. However, this study reveals buyer-initiated engagement has a counterintuitive and negative relationship with firm value, which shows a dark side of buyer-initiated engagement on social media for B2B sellers.

Keywords: brand engagement, B2B firms, firm value, social media, stock recommendations

Procedia PDF Downloads 285
2049 Transparency of Algorithmic Decision-Making: Limits Posed by Intellectual Property Rights

Authors: Olga Kokoulina

Abstract:

Today, algorithms are assuming a leading role in various areas of decision-making. Prompted by a promise to provide increased economic efficiency and fuel solutions for pressing societal challenges, algorithmic decision-making is often celebrated as an impartial and constructive substitute for human adjudication. But in the face of this implied objectivity and efficiency, the application of algorithms is also marred with mounting concerns about embedded biases, discrimination, and exclusion. In Europe, vigorous debates on the risks and adverse implications of algorithmic decision-making largely revolve around the potential of data protection laws to tackle some of the related issues. For example, one of the often-cited venues to mitigate the impact of potentially unfair decision-making practice is a so-called 'right to explanation'. In essence, the overall right is derived from the provisions of the General Data Protection Regulation ('GDPR') ensuring the right of data subjects to access and mandating the obligation of data controllers to provide the relevant information about the existence of automated decision-making and meaningful information about the logic involved. Taking the corresponding rights and obligations in the context of the specific provision on automated decision-making in the GDPR, the debates mainly focus on the efficacy and the exact scope of the 'right to explanation'. In essence, the underlying logic of the argued remedy lies in a transparency imperative. Allowing data subjects to acquire as much knowledge as possible about the decision-making process means empowering individuals to take control of their data and take action. In other words, forewarned is forearmed. The related discussions and debates are ongoing, comprehensive, and, often, heated. However, they are also frequently misguided and isolated: embracing data protection law as the ultimate and sole lens is often not sufficient. Mandating the disclosure of technical specifications of employed algorithms in the name of transparency for, and empowerment of, data subjects potentially encroaches on the interests and rights of IPR holders, i.e., the business entities behind the algorithms. The study aims at pushing the boundaries of the transparency debate beyond the data protection regime. By systematically analysing legal requirements and current judicial practice, it assesses the limits of the transparency requirement and right to access posed by intellectual property law, namely by copyrights and trade secrets. It is asserted that trade secrets, in particular, present an often-insurmountable obstacle to realising the potential of the transparency requirement. In reaching that conclusion, the study explores the limits of protection afforded by the European Trade Secrets Directive and contrasts them with the scope of the respective rights and obligations related to data access and portability enshrined in the GDPR. As shown, the far-reaching scope of the protection under trade secrecy is evidenced both through the assessment of its subject matter and through the exceptions from such protection. As a way forward, the study scrutinises several possible legislative solutions, such as a flexible interpretation of the public interest exception in trade secrets as well as the introduction of a strict liability regime in case of non-transparent decision-making.

Keywords: algorithms, public interest, trade secrets, transparency

Procedia PDF Downloads 102
2048 Embedded Hybrid Intuition: A Deep Learning and Fuzzy Logic Approach to Collective Creation and Computational Assisted Narratives

Authors: Roberto Cabezas H

Abstract:

The current work shows the methodology developed to create narrative lighting spaces for the multimedia performance piece 'cluster: the vanished paradise.' This empirical research is focused on exploring unconventional roles for machines in subjective creative processes, by delving into the semantics of data and machine intelligence algorithms in hybrid technological, creative contexts to expand epistemic domains through human-machine cooperation. The creative process in the scenic and performing arts is guided mostly by intuition; from that idea, we developed an approach to embed collective intuition in computational creative systems by joining the properties of Generative Adversarial Networks (GANs) and fuzzy clustering based on a semi-supervised data creation and analysis pipeline. The model makes use of GANs to learn from phenomenological data (data generated from experience with lighting scenography) and algorithmic design data (data augmented by procedural design methods); fuzzy logic clustering is then applied to the artificially created data from the GANs to define narrative transitions built on a membership index. This process allowed for the creation of simple and complex spaces with expressive capabilities based on position and light intensity as the parameters that guide the narrative. Hybridization comes not only from the human-machine symbiosis but also from the integration of different techniques in the implementation of the aided design system. Machine intelligence tools, as proposed in this work, are well suited to redefine collaborative creation by learning to express and expand a conglomerate of ideas and a wide range of opinions for the creation of sensory experiences. We found GANs and fuzzy logic to be ideal tools for developing new computational models based on interaction, learning, emotion and imagination to expand the traditional algorithmic model of computation.
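To make the clustering half of the pipeline concrete, the sketch below implements plain fuzzy c-means in NumPy on a stand-in 2-D dataset (normalized position, light intensity), returning the membership matrix of the kind the abstract uses to build narrative transitions. The data, cluster count, and fuzzifier are illustrative; the GAN-generated inputs and the performance-specific mapping are not reproduced here.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: returns cluster centers and the membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        U = 1.0 / (dist ** (2 / (m - 1)))      # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Stand-in data: (normalized fixture position, light intensity) pairs in three groups
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.05, size=(50, 2)) for loc in (0.2, 0.5, 0.8)])
centers, U = fuzzy_c_means(X, c=3)
print("centers:\n", np.round(centers, 2))
print("membership of first sample:", np.round(U[0], 2))
```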

Keywords: fuzzy clustering, generative adversarial networks, human-machine cooperation, hybrid collective data, multimedia performance

Procedia PDF Downloads 116
2047 Design Data Sorter Circuit Using Insertion Sorting Algorithm

Authors: Hoda Abugharsa

Abstract:

In this paper, we propose to design a sorter circuit using the insertion sort algorithm. The circuit will be designed using the Algorithmic State Machine (ASM) method. That means converting the insertion sort flowchart into an ASM chart. The ASM chart will then be used to design the sorter circuit and the control unit.
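For reference, the software form of the algorithm that the ASM chart encodes is the standard insertion sort shown below; in the hardware version, each loop iteration corresponds to ASM states that compare, shift, and insert under the control unit's supervision. This Python sketch is only a behavioral reference, not the circuit description.

```python
def insertion_sort(data):
    """Standard insertion sort: grow a sorted prefix one element at a time."""
    a = list(data)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:   # shift larger elements one slot to the right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                 # insert the key into its final slot
    return a

print(insertion_sort([7, 3, 9, 1, 4]))   # -> [1, 3, 4, 7, 9]
```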

Keywords: insertion sort algorithm, ASM chart, sorter circuit, state machine, control unit

Procedia PDF Downloads 423