Search results for: streaming analytics
126 Smart Campus Digital Twin: Basic Framework - Current State, Trends and Challenges
Authors: Enido Fabiano de Ramos, Ieda Kanashiro Makiya, Francisco I. Giocondo Cesar
Abstract:
This study presents an analysis of the Digital Twin concept applied to the academic environment, focusing on the development of a Digital Twin Smart Campus Framework. Using bibliometric analysis methodologies and literature review, the research investigates the evolution and applications of the Digital Twin in educational contexts, comparing these findings with the advances of Industry 4.0. Gaps were identified in the existing literature, highlighting the need to adapt Digital Twin principles to meet the specific demands of a smart campus. By integrating Industry 4.0 concepts such as automation, the Internet of Things, and real-time data analytics, we propose an innovative framework for the successful implementation of the Digital Twin in academic settings. The results of this study provide valuable insights for university campus managers, allowing for a better understanding of the potential applications of the Digital Twin for operations, security, and user experience optimization. In addition, our framework offers practical guidance for transitioning from a digital campus to a digital twin smart campus, promoting innovation and efficiency in the educational environment. This work contributes to the growing literature on Digital Twins and Industry 4.0, while offering a specific and tailored approach to transforming university campuses into smart and connected spaces, highly demanded by Society 5.0 trends. It is hoped that this framework will serve as a basis for future research and practical implementations in the field of higher education and educational technology.
Keywords: smart campus, digital twin, industry 4.0, education trends, society 5.0
Procedia PDF Downloads 59
125 Machine Learning Facing Behavioral Noise Problem in an Imbalanced Data Using One Side Behavioral Noise Reduction: Application to a Fraud Detection
Authors: Salma El Hajjami, Jamal Malki, Alain Bouju, Mohammed Berrada
Abstract:
With the expansion of machine learning and data mining in the context of Big Data analytics, a common problem that affects data is class imbalance. It refers to an imbalanced distribution of instances belonging to each class. This problem is present in many real-world applications such as fraud detection, network intrusion detection, medical diagnostics, etc. In these cases, data instances labeled negatively are significantly more numerous than the instances labeled positively. When this difference is too large, the learning system may face difficulty when tackling this problem, since it is initially designed to work in relatively balanced class distribution scenarios. Another important problem, which usually accompanies these imbalanced data, is the overlapping of instances between the two classes. It is commonly referred to as noise or overlapping data. In this article, we propose an approach called One Side Behavioral Noise Reduction (OSBNR). This approach presents a way to deal with the problem of class imbalance in the presence of a high noise level. OSBNR is based on two steps. First, a cluster analysis is applied to group similar instances from the minority class into several behavior clusters. Second, we select and eliminate the instances of the majority class, considered as behavioral noise, which overlap with the behavior clusters of the minority class. The results of experiments carried out on a representative public dataset confirm that the proposed approach is efficient for the treatment of class imbalance in the presence of noise.
Keywords: machine learning, imbalanced data, data mining, big data
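A minimal sketch of the two-step idea above, assuming scikit-learn, a generic numeric feature matrix, and binary labels; the cluster count and the percentile-based overlap radius are illustrative choices, not the authors' settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def osbnr_like_filter(X, y, n_clusters=5, percentile=90):
    """Drop majority-class samples that fall inside minority behavior clusters.

    X : (n_samples, n_features) feature matrix
    y : binary labels, 1 = minority (e.g., fraud), 0 = majority
    """
    X_min, X_maj = X[y == 1], X[y == 0]

    # Step 1: group similar minority instances into behavior clusters.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_min)

    # Per-cluster radius: a high percentile of member distances to the centroid.
    d_min = np.linalg.norm(X_min - km.cluster_centers_[km.labels_], axis=1)
    radii = np.array([np.percentile(d_min[km.labels_ == c], percentile)
                      for c in range(n_clusters)])

    # Step 2: majority points lying within any cluster radius are treated as
    # behavioral noise and removed (one-sided: only the majority class shrinks).
    d_maj = np.linalg.norm(X_maj[:, None, :] - km.cluster_centers_[None, :, :], axis=2)
    noisy = (d_maj <= radii[None, :]).any(axis=1)

    X_clean = np.vstack([X_min, X_maj[~noisy]])
    y_clean = np.concatenate([np.ones(len(X_min)), np.zeros((~noisy).sum())])
    return X_clean, y_clean
```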
Procedia PDF Downloads 132
124 Digital Repository as a Service: Enhancing Access and Preservation of Cultural Heritage Artefacts
Authors: Lefteris Tsipis, Demosthenes Vouyioukas, George Loumos, Antonis Kargas, Dimitris Varoutas
Abstract:
The employment of technology and digitization is crucial for cultural organizations to establish and sustain digital repositories for their cultural heritage artefacts. This utilization is also essential in facilitating the presentation of cultural works and exhibits to a broader audience. Consequently, in this work, we propose a digital repository that functions as Software as a Service (SaaS), primarily promoting the safe storage, display, and sharing of cultural materials, enhancing accessibility, and fostering a deeper understanding and appreciation of cultural heritage. Moreover, the proposed digital repository service is designed as a multitenant architecture, which enables organizations to expand their reach, enhance accessibility, foster collaboration, and ensure the preservation of their content. Specifically, this project aims to assist each cultural institution in organizing its digital cultural assets into collections and feeding other digital platforms, including educational, museum, pedagogical, and gaming platforms, through appropriate interfaces. Moreover, the creation of this digital repository offers a cutting-edge and effective open-access laboratory solution. It allows organizations to have a significant influence on their audiences by fostering cultural understanding and appreciation. Additionally, it facilitates the connection between different digital repositories and national/European aggregators, promoting collaboration and information sharing. By embracing this solution, cultural institutions can benefit from shared resources and features, such as system updates, backup and recovery services, and data analytics tools, that are provided by the platform.
Keywords: cultural technologies, gaming technologies, web sharing, digital repository
Procedia PDF Downloads 80
123 pH-Responsive Carrier Based on Polymer Particle
Authors: Florin G. Borcan, Ramona C. Albulescu, Adela Chirita-Emandi
Abstract:
pH-responsive drug delivery systems are gaining more importance because these systems deliver the drug at a specific time in accordance with pathophysiological necessity, resulting in improved patient therapeutic efficacy and compliance. Polyurethane materials are well known for industrial applications (elastomers and foams used in various insulation and automotive applications), but they are versatile biocompatible materials with many applications in medicine, as artificial skin for the premature neonate, membranes in the hybrid artificial pancreas, prosthetic heart valves, etc. This study aimed to obtain the physico-chemical characterization of a drug delivery system based on polyurethane microparticles. The synthesis is based on a polyaddition reaction between an aqueous phase (a mixture of polyethylene-glycol M=200, 1,4-butanediol and Tween® 20) and an organic phase (lysin-diisocyanate in acetone) combined with simultaneous emulsification. Different active agents (omeprazole, amoxicillin, metoclopramide) were used to verify the release profile of the macromolecular particles in different pH media. Zetasizer measurements were performed using an instrument based on two modules, a Vasco size analyzer and a Wallis Zeta potential analyzer (Cordouan Technol., France), on samples that were kept in various solutions with different pH, and the maximum absorbances in UV-Vis spectra were collected on a UVi Line 9400 spectrophotometer (SI Analytics, Germany). The results of this investigation reveal that these particles are suitable for prolonged release in a gastric medium, where they can assure an almost constant concentration of the active agents for 1-2 weeks, while they disassemble faster in a medium with neutral pH, such as intestinal fluid.
Keywords: lysin-diisocyanate, nanostructures, polyurethane, Zetasizer
Procedia PDF Downloads 184
122 The Study of Internship Performances: Comparison of Information Technology Interns towards Students’ Types and Background Profiles
Authors: Shutchapol Chopvitayakun
Abstract:
Internship programs are a compulsory course in many undergraduate programs in Thailand. They give senior students, as interns, the opportunity to practice their working skills in real organizations and the chance to face real-world working problems. Interns also learn how to solve those problems through direct and indirect experience. In many schools, this program is a well-structured course with a contract or agreement made with real business organizations. Moreover, this program also offers opportunities for interns to get jobs, after completing it, from the organizations where the internship takes place. Interns also learn how to work as a team and how to associate with other colleagues, trainers, and superiors of each organization in terms of social hierarchy, self-responsibility, and self-discipline. This research focuses on senior students of Suan Sunandha Rajabhat University, Thailand, whose major is the information technology program. They practiced their working skills, or took internship programs, in the real business sector or real operating organizations in 2015-2016. Interns are categorized into two types: normal program and special program. For the special program, students study on weekday evenings from Monday to Friday or at weekends, and most of them work a full-time or part-time job. For the normal program, students study during weekday working hours, and most of them do not work. The differences in these characteristics and the outcomes of internship performance were studied and analyzed in this research. This work applied statistical analysis to find out whether the internship performance of the two intern types differs statistically or not.
Keywords: internship, intern, senior student, information technology program
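A minimal sketch of the kind of group comparison described above, assuming SciPy and two hypothetical arrays of performance scores, one per intern type; Welch's t-test is an assumed choice of statistic, not necessarily the one used in the study.

```python
from scipy import stats

# Hypothetical internship performance scores (e.g., supervisor ratings out of 100).
normal_program = [78, 85, 90, 72, 88, 81, 79, 93, 84, 76]
special_program = [82, 75, 88, 91, 70, 86, 80, 77, 89, 83]

# Welch's t-test: does mean performance differ between the two intern types?
t_stat, p_value = stats.ttest_ind(normal_program, special_program, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # p < 0.05 would indicate a significant difference
```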
Procedia PDF Downloads 264
121 Unlocking E-commerce: Analyzing User Behavior and Segmenting Customers for Strategic Insights
Authors: Aditya Patil, Arun Patil, Vaishali Patil, Sudhir Chitnis, Anjum Patel
Abstract:
Rapid growth has given e-commerce platforms a wealth of client behavior and spending data. To optimize their strategy, businesses must understand how customers use online shopping platforms and what influences their purchases. Our research focuses on e-commerce user behavior and purchasing trends. This extensive study examines spending and user behavior, with regression and clustering used to extract relevant insights from the dataset. Multilevel regression lets us understand user spending trends and analyze how pricing, user demographics, and product categories affect customer purchase decisions. Clustering groups consumers by spending, and purchase habits were found to vary by user group. Our analysis illuminates the complex world of e-commerce consumer behavior and purchase trends; understanding user behavior helps create effective e-commerce marketing strategies, and this market can benefit from K-means clustering. This study focuses on tailoring strategies to user groups and improving product and price effectiveness. K-means clusters revealed customer buying behaviors across categories: average spending is highest in Cluster 4 and lowest in Cluster 3, and clothing is less popular than gadgets and appliances around the holidays. Cluster spending distribution is examined using average variables. Our research enhances e-commerce analytics, and companies can improve customer service and decision-making with this data.
Keywords: e-commerce, regression, clustering, k-means
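A minimal sketch of the K-means segmentation step, assuming scikit-learn and hypothetical per-user spending features; the four-cluster setting echoes the cluster numbering above, but the data and feature names are made up.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-user aggregates from an e-commerce platform.
users = pd.DataFrame({
    "annual_spend":    [120, 950, 300, 4100, 2200, 80, 3900, 150],
    "orders_per_year": [4, 22, 9, 48, 30, 2, 45, 5],
    "avg_basket":      [30, 43, 33, 85, 73, 40, 87, 30],
})

X = StandardScaler().fit_transform(users)          # scale so no feature dominates
km = KMeans(n_clusters=4, n_init=10, random_state=42).fit(X)
users["cluster"] = km.labels_

# Average spending per cluster, the kind of summary used to compare segments.
print(users.groupby("cluster")["annual_spend"].mean().sort_values())
```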
Procedia PDF Downloads 23
120 Anyword: A Digital Marketing Tool to Increase Productivity in Newly Launching Businesses
Authors: Jana Atteah, Wid Jan, Yara AlHibshi, Rahaf AlRougi
Abstract:
Anyword is an AI copywriting tool that helps marketers create effective campaigns for specific audiences. It offers a wide range of templates for various platforms, brand voice guidelines, and valuable analytics insights. Anyword is used by top global companies and has been recognized as one of the "Fastest Growing Products" in the 2023 software awards. A recent study examined the utilization and impact of AI-powered writing tools, specifically focusing on the adoption of AI in writing pursuits and the use of the Anyword platform. The results indicate that a majority of respondents (52.17%) had not previously used Anyword, but those who had were generally satisfied with the platform. Notable productivity improvements were observed among 13% of the participants, while an additional 34.8% reported a slight increase in productivity. A plurality (47.8%) maintained a neutral stance, suggesting that their productivity remained unaffected. Only a minimal percentage (4.3%) claimed that their productivity did not improve with the usage of Anyword AI. In terms of the quality of written content generated, the participants responded positively. Approximately 91% of participants gave Anyword AI a score of 5 or higher, with roughly 17% giving it a perfect score. A small percentage (approximately 9%) gave a low score between 0 and 2. The modal score was 7, indicating a generally positive perception of the quality of content generated using Anyword AI. These findings suggest that AI can contribute to increased productivity and positively influence the quality of written content. Further research and exploration of AI tools in writing pursuits are warranted to fully understand their potential and limitations.
Keywords: artificial intelligence, marketing platforms, productivity, user interface
Procedia PDF Downloads 64
119 The Impact of the Enron Scandal on the Reputation of Corporate Social Responsibility Rating Agencies
Authors: Jaballah Jamil
Abstract:
KLD (Peter Kinder, Steve Lydenberg and Amy Domini) Research & Analytics is an independent intermediary of social performance information that adopts an investor-pay model. The KLD rating agency does not explicitly monitor the rated firms, which suggests that KLD ratings may not include private information. Moreover, KLD's inability to accurately predict the extra-financial rating of Enron casts doubt on the reliability of KLD ratings. Therefore, we first investigate whether KLD ratings affect investors' perception by studying the effect of KLD rating changes on firms' financial performance. Second, we study the impact of the Enron scandal on investors' perception of KLD rating changes by comparing the effect of KLD rating changes on firms' financial performance before and after the failure of Enron. We propose an empirical study that relates a number of equally-weighted portfolio returns, excess stock returns, and book-to-market ratios to different dimensions of KLD social responsibility ratings. We first find that over the last two decades KLD rating changes influence the stock returns and book-to-market ratio of rated firms significantly and negatively. This finding suggests that a rise in the corporate social responsibility rating lowers the firm's risk. Second, to assess the Enron scandal's effect on the perception of KLD ratings, we compare the effect of KLD rating changes before and after the Enron scandal. We find that after the Enron scandal this significant effect disappears. This finding supports the view that the Enron scandal eliminated KLD's effect on socially responsible investors. Therefore, our findings may call into question the results of recent studies that use KLD ratings as a proxy for corporate social responsibility behavior.
Keywords: KLD social rating agency, investors' perception, investment decision, financial performance
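A minimal sketch of the before/after comparison described above, assuming statsmodels and a hypothetical firm-level panel; the column names (excess_ret, kld_change, btm, post_enron, firm_id) and the clustered-error specification are illustrative, not the authors' exact model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per firm-period, with a KLD rating-change variable,
# a book-to-market control, and an indicator for the post-Enron period.
df = pd.read_csv("kld_panel.csv")  # columns: firm_id, excess_ret, kld_change, btm, post_enron

# The interaction term tests whether the effect of KLD rating changes on excess
# returns differs before vs. after the Enron scandal.
model = smf.ols("excess_ret ~ kld_change * post_enron + btm", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm_id"]}
)
print(model.summary())
# A significant negative coefficient on kld_change together with an offsetting
# kld_change:post_enron coefficient would match the pattern reported above.
```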
Procedia PDF Downloads 441
118 The Driving Force for Taiwan Social Innovation Business Model Transformation: A Case Study of Social Innovation Internet Celebrity Training Project
Authors: Shih-Jie Ma, Jui-Hsu Hsiao, Ming-Ying Hsieh, Shin-Yan Yang, Chun-Han Yeh, Kuo-Chun Su
Abstract:
In Taiwan, social enterprises and non-profit organizations (NPOs) are not familiar with innovative business models such as live streaming. In 2019, a brand-new course called the Internet Celebrity Training Project was introduced to them by the Social Innovation Lab. The goal of this paper is to evaluate the effect of this project, to explore the role of new technology (internet live streaming) in business process management (BPM), and to analyze how live stream programs can assist social enterprises in creating new business models. Social innovation, which aims to solve social issues in innovative ways, is one of the most popular topics in the world. The Social Innovation Lab was established in 2017 by the Executive Yuan in Taiwan. Its vision is to exploit technology, innovation and experimental methods to solve social issues, and to maximize the benefits from government investment. The Social Innovation Lab aims at creating a platform for both the supply and demand sides of social issues, making social enterprises and start-ups communicate with each other, and building an eco-system in which stakeholders can make a social impact. The Social Innovation Lab keeps helping social enterprises and NPOs to gain better publicity and to enhance competitiveness by facilitating digital transformation. In this project, the Social Innovation Lab exerted the influence of social media such as YouTube and Facebook to make social enterprises and start-ups adjust their business models by using social media live streams, which became one of the tools to expand their markets and diversify their sales channels. Internet live stream training courses were delivered in different regions of Taiwan in 2019, including Taitung, Taichung, Kaohsiung and Hualien. Through these courses, potential groups and enterprises were cultivated to become so-called internet celebrities. With their concern for social issues in mind, these internet celebrities know how to use social media to make a social impact in different fields, such as aboriginal people, food and agriculture, LOHAS (Lifestyles of Health and Sustainability), environmental protection and senior citizens. Participants in the live stream training courses in Taiwan were selected for in-depth interviews and questionnaire surveys. Results indicate that the digital transformation process of social enterprises and NPOs can be successful by implementing business process reengineering, a significant change made by social innovation internet celebrities. Therefore, this project can be a new driving force to facilitate business model transformation in Taiwan.
Keywords: business process management, digital transformation, live stream, social innovation
Procedia PDF Downloads 147
117 Leveraging Mobile Apps for Citizen-Centric Urban Planning: Insights from Tajawob Implementation
Authors: Alae El Fahsi
Abstract:
This study explores the ‘Tajawob’ app's role in urban development, demonstrating how mobile applications can empower citizens and facilitate urban planning. Tajawob serves as a digital platform for community feedback, engagement, and participatory governance, addressing urban challenges through innovative tech solutions. This research synthesizes data from a variety of sources, including user feedback, engagement metrics, and interviews with city officials, to assess the app’s impact on citizen participation in urban development in Morocco. By integrating advanced data analytics and user experience design, Tajawob has bridged the communication gap between citizens and government officials, fostering a more collaborative and transparent urban planning process. The findings reveal a significant increase in civic engagement, with users actively contributing to urban management decisions, thereby enhancing the responsiveness and inclusivity of urban governance. Challenges such as digital literacy, infrastructure limitations, and privacy concerns are also discussed, providing a comprehensive overview of the obstacles and opportunities presented by mobile app-based citizen engagement platforms. The study concludes with strategic recommendations for scaling the Tajawob model to other contexts, emphasizing the importance of adaptive technology solutions in meeting the evolving needs of urban populations. This research contributes to the burgeoning field of smart city innovations, offering key insights into the role of digital tools in facilitating more democratic and participatory urban environments.
Keywords: smart cities, digital governance, urban planning, strategic design
Procedia PDF Downloads 60
116 Implementation of an Accessible State-Wide Trauma Education Program
Authors: Christine Lassen, Elizabeth Leonard, Matthew Oliver
Abstract:
The management of trauma is often complex, and outcomes depend on clinical expertise, effective teamwork, and a supported trauma system. A statewide trauma education program should be accessible to all clinicians who manage trauma, but this can be challenging due to diverse individual needs, trauma service needs and geography. The NSW Institute of Trauma and Injury Management (ITIM) is a government-funded body responsible for coordinating and supporting the NSW Trauma System. The aim of this presentation is to describe how education initiatives have been implemented across the state. Simulation: In 2006, ITIM developed a Trauma Team Training Course aimed at educating clinicians on the technical and non-technical skills required to manage trauma. The course is now independently coordinated by trauma services across the state at major trauma centres as well as in regional and rural hospitals. ITIM is currently in the process of re-evaluating and updating the Trauma Team Training Course to allow for the development of new resources and simulation scenarios. Trauma Education Evenings: In 2013, ITIM supported major trauma services to develop trauma education evenings, which allowed the provision of free education to staff within the area health service and local area. The success of these local events expanded to regional hospitals. A total of 75 trauma education evenings have been conducted within NSW, with over 10,000 attendees. Web-Based Resources: Recently, ITIM commenced free live streaming of the trauma education evenings, which have now had over 3,000 live views. The Trauma App, developed in 2015, provides trauma clinicians with a centralised portal for trauma information and works on smartphones and tablets that integrate with the ITIM website. This supports pre-hospital and bedside clinical decisions and allows trauma care to be more standardised, evidence-based, timely, and appropriate. Online e-learning modules have been developed to assist clinicians, reduce unwarranted clinical variation and provide up-to-date evidence-based education. The modules incorporate clinically focused education content with summative and formative assessments. Conclusion: Since 2005, ITIM has helped to facilitate the development of trauma education programs for doctors, nurses, pre-hospital and allied health clinicians. ITIM has been actively involved in more than 100 specialised trauma education programs, seminars and clinical workshops, attended by over 12,000 staff. The provision of state-wide trauma education is a challenging task requiring collaboration amongst numerous agencies working towards a common goal: to provide easily accessible trauma education.
Keywords: education, simulation, team-training, trauma
Procedia PDF Downloads 188
115 Through the Robot’s Eyes: A Comparison of Robot-Piloted, Virtual Reality, and Computer Based Exposure for Fear of Injections
Authors: Bonnie Clough, Tamara Ownsworth, Vladimir Estivill-Castro, Matt Stainer, Rene Hexel, Andrew Bulmer, Wendy Moyle, Allison Waters, David Neumann, Jayke Bennett
Abstract:
The success of global vaccination programs is reliant on the uptake of vaccines to achieve herd immunity. Yet, many individuals do not obtain vaccines or venipuncture procedures when needed. Whilst health education may be effective for individuals who are hesitant due to safety or efficacy concerns, for many others the primary concern relates to blood or injection fear or phobia (BII). BII is highly prevalent and associated with a range of negative health impacts, both at individual and population levels. Exposure therapy is an efficacious treatment for specific phobias, including BII, but has high patient dropout and low implementation by therapists. Whilst virtual reality approaches to exposure therapy may be more acceptable, they have similarly low rates of implementation by therapists and are often difficult to tailor to an individual client's needs. It was proposed that a piloted robot may be able to adequately facilitate fear induction and be an acceptable approach to exposure therapy. The current study examined fear induction responses, acceptability, and feasibility of a piloted robot for BII exposure. A Nao humanoid robot was programmed to connect with a virtual reality head-mounted display, enabling live streaming and exploration of real environments from a distance. Thirty adult participants with BII fear were randomly assigned to robot-pilot or virtual reality exposure conditions in a laboratory-based fear exposure task. All participants also completed a computer-based two-dimensional exposure task, with the order of conditions counterbalanced across participants. Measures included fear (heart rate variability, galvanic skin response, stress indices, and subjective units of distress), engagement with the feared stimulus (eye gaze: time to first fixation and total number of fixations), acceptability, and perceived treatment credibility. Preliminary results indicate that fear responses can be adequately induced via a robot-piloted platform. Further results will be discussed, as will implications for the treatment of BII phobia and other fears. It is anticipated that piloted robots may provide a useful platform for facilitating exposure therapy, being more acceptable than in-vivo exposure and more flexible than virtual reality exposure.
Keywords: anxiety, digital mental health, exposure therapy, phobia, robot, virtual reality
Procedia PDF Downloads 78
114 Digitalization and High Audit Fees: An Empirical Study Applied to US Firms
Authors: Arpine Maghakyan
Abstract:
The purpose of this paper is to study the relationship between the level of industry digitalization and audit fees, and especially the relationship between Big 4 auditor fees and the industry digitalization level. On the one hand, automation of business processes decreases internal control weaknesses and manual mistakes and increases work effectiveness and integration. On the other hand, it may cause serious misstatements, high business risks or even bankruptcy, typically in the early stages of automation. Incomplete automation can bring high audit risk, especially if the auditor does not fully understand the client's business automation model. Higher audit risk consequently causes higher audit fees. Higher audit fees for clients with a high automation level are more pronounced in Big 4 auditors' behavior. Using data on US firms from 2005-2015, we found that industry-level digitalization interacts with auditor quality in its effect on audit fees. Moreover, the choice of a Big 4 or non-Big 4 auditor is correlated with the client's industry digitalization level. A Big 4 client with a higher digitalization level pays more than one with a low digitalization level. In addition, a highly digitalized firm that has a Big 4 auditor pays a higher audit fee than a non-Big 4 client. We use audit fees and firm-specific variables from the Audit Analytics and Compustat databases. We analyze the collected data using fixed effects regression methods and Wald tests for sensitivity checks. We use firm fixed effects regression models to determine the connections between technology use in business and audit fees. We control for firm size, complexity, inherent risk, profitability and auditor quality. We chose a fixed effects model as it makes it possible to control for variables that have not been or cannot be measured.
Keywords: audit fees, auditor quality, digitalization, Big4
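A minimal sketch of a firm and year fixed-effects specification in the spirit of the study, assuming statsmodels and hypothetical variable names; the controls and the interaction term are illustrative, not the paper's exact regression.

```python
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("audit_fees_panel.csv")
# Hypothetical columns: firm_id, year, ln_audit_fee, digitalization, big4,
# ln_assets, roa, leverage

# Firm and year fixed effects via dummies; the big4:digitalization interaction
# captures whether Big 4 fees rise more steeply with industry digitalization.
fe_model = smf.ols(
    "ln_audit_fee ~ big4 * digitalization + ln_assets + roa + leverage"
    " + C(firm_id) + C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["firm_id"]})

print(fe_model.params[["big4", "digitalization", "big4:digitalization"]])
```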
Procedia PDF Downloads 302
113 Optimizing Data Integration and Management Strategies for Upstream Oil and Gas Operations
Authors: Deepak Singh, Rail Kuliev
Abstract:
The abstract highlights the critical importance of optimizing data integration and management strategies in the upstream oil and gas industry. With its complex and dynamic nature generating vast volumes of data, efficient data integration and management are essential for informed decision-making, cost reduction, and maximizing operational performance. Challenges such as data silos, heterogeneity, real-time data management, and data quality issues are addressed, prompting the proposal of several strategies. These strategies include implementing a centralized data repository, adopting industry-wide data standards, employing master data management (MDM), utilizing real-time data integration technologies, and ensuring data quality assurance. Training and developing the workforce, “reskilling and upskilling” the employees, and establishing robust data management training programs play an essential and integral role in this strategy. The article also emphasizes the significance of data governance and best practices, as well as the role of technological advancements such as big data analytics, cloud computing, Internet of Things (IoT), and artificial intelligence (AI) and machine learning (ML). To illustrate the practicality of these strategies, real-world case studies are presented, showcasing successful implementations that improve operational efficiency and decision-making. In the present study, by embracing the proposed optimization strategies, leveraging technological advancements, and adhering to best practices, upstream oil and gas companies can harness the full potential of data-driven decision-making, ultimately achieving increased profitability and a competitive edge in the ever-evolving industry.
Keywords: master data management, IoT, AI&ML, cloud Computing, data optimization
Procedia PDF Downloads 71
112 Ordinal Regression with Fenton-Wilkinson Order Statistics: A Case Study of an Orienteering Race
Authors: Joonas Pääkkönen
Abstract:
In sports, individuals and teams are typically interested in final rankings. Final results, such as times or distances, dictate these rankings, also known as places. Places can be further associated with ordered random variables, commonly referred to as order statistics. In this work, we introduce a simple yet accurate order-statistical ordinal regression function that predicts relay race places from changeover times. We call this function the Fenton-Wilkinson Order Statistics model. This model is built on the following educated assumption: individual leg times follow log-normal distributions. Moreover, our key idea is to utilize Fenton-Wilkinson approximations of changeover times alongside an estimator for the total number of teams, as in the well-known German tank problem. This original place regression function is sigmoidal and thus correctly predicts the existence of a small number of elite teams that significantly outperform the rest of the teams. Our model also describes how place increases linearly with changeover time at the inflection point of the log-normal distribution function. With real-world data from Jukola 2019, a massive orienteering relay race, the model is shown to be highly accurate even when the size of the training set is only 5% of the whole data set. Numerical results also show that our model exhibits smaller place prediction root-mean-square errors than linear regression, mord regression and Gaussian process regression.
Keywords: Fenton-Wilkinson approximation, German tank problem, log-normal distribution, order statistics, ordinal regression, orienteering, sports analytics, sports modeling
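A minimal numerical sketch of the Fenton-Wilkinson step described above, assuming log-normal leg times, moment matching for their sum, and SciPy for the normal CDF; the leg parameters and team count are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical log-normal parameters (mu, sigma) for each leg's time in minutes.
legs = [(4.1, 0.15), (4.3, 0.20), (4.0, 0.18)]
n_teams = 1700  # estimated total number of teams (cf. the German tank estimator)

# Fenton-Wilkinson: approximate the sum of log-normals by a single log-normal
# whose first two moments match those of the sum.
mean = sum(np.exp(mu + s**2 / 2) for mu, s in legs)
var = sum((np.exp(s**2) - 1) * np.exp(2 * mu + s**2) for mu, s in legs)
sigma_fw = np.sqrt(np.log(1 + var / mean**2))
mu_fw = np.log(mean) - sigma_fw**2 / 2

def predicted_place(changeover_time_minutes):
    """Sigmoidal place regression: place ~ N * F(t), F the fitted log-normal CDF."""
    z = (np.log(changeover_time_minutes) - mu_fw) / sigma_fw
    return n_teams * norm.cdf(z)

print(round(predicted_place(200.0)))  # place estimate for a 200-minute changeover time
```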
Procedia PDF Downloads 125
111 Governance of Social Media Using the Principles of Community Radio
Authors: Ken Zakreski
Abstract:
This paper considers regulating Canadian Facebook Groups, of a certain size and type, when they reach a threshold of audio-video content. Consider the evolution of the Streaming Act, Parl GC Bill C-11 (44-1), and the regulations that will certainly follow. The Canadian Heritage Minister's office stipulates that "the Broadcasting Act only applies to audio and audiovisual content, not written journalism.” Governance: After 10 years, a community radio station for Gabriola Island, BC, approved by the Canadian Radio-television and Telecommunications Commission (“CRTC”) but never started, became a Facebook Group, “Community Bulletin Board - Life on Gabriola,“ referred to as CBBlog. After CBBlog started and began to gather real traction, a member of the Group cloned the membership and ran their competing Facebook group under the banner of "free speech”. Here we see an inflection point [change of cultural stewardship] with two different mathematical results [engagement and membership growth]. Canada's telecommunication history of “portability” and “interoperability” made that Facebook Group, CBBlog, the better option over broadcast FM radio for a community pandemic information-sharing service for Gabriola Island, BC. A culture of ignorance flourishes in social media. Often people do not understand their own experience, or the experience of others, because they do not have the concepts needed for understanding. It is thus important they are not denied concepts required for their full understanding. For example, legislators need to know something about gay culture before they can make any decisions about it. Community media policies and CRTC regulations are known, and regulators can use that history to forge forward with regulations for internet platforms of a size and content type that reach a threshold of audio/video content. Mostly volunteer-run media services provide an order of magnitude lower costs than commercial media. Should Facebook Groups be treated as new media? Cathy Edwards, executive director of the Canadian Association of Community Television Users and Stations (“CACTUS”), calls it new media in that the distribution platform is not the issue. What does make community groups community media? Cathy responded, "... it's bylaws, articles of incorporation that state they are community media, they have accessibility, commitments to skills training, any member of the community can be a member, and there is accountability to a board of directors". Eligibility for funding through CACTUS requires these same commitments. It is risky for a community to invest in a platform whose ownership has not been litigated. Is a Facebook Group an asset of a not-for-profit society? A memo from law student Jared Hubbard summarizes: “Rights and interests in a Facebook group could, in theory, be transferred as property... This theory is currently unconfirmed by Canadian courts.“
Keywords: social media, governance, community media, Canadian radio
Procedia PDF Downloads 72
110 Bayesian System and Copula for Event Detection and Summarization of Soccer Videos
Authors: Dhanuja S. Patil, Sanjay B. Waykar
Abstract:
Event detection is one of the most key parts of many kinds of domain applications of video data systems. Recently, it has gained extensive interest from experts and academics in different areas. While video event detection has been the subject of broad study efforts recently, considerably less existing methodology has considered multi-modal data and efficiency-related issues. During soccer matches, various doubtful situations arise that cannot be easily judged by the referee committee. A framework that objectively checks image sequences would prevent incorrect interpretations due to errors or the high velocity of events. Bayesian networks give a structure for dealing with this uncertainty using a basic graphical structure together with probability calculus. We propose an efficient structure for the analysis and summarization of soccer videos utilizing object-based features. The proposed work utilizes the t-cherry junction tree, a very recent advancement in probabilistic graphical models, to create a compact representation and good approximation of an otherwise intractable model of user relationships in a social network. There are various advantages to this approach: firstly, the t-cherry tree gives the best approximation within the class of junction trees; secondly, the construction of a t-cherry junction tree can be largely parallelized; and lastly, inference can be performed using distributed computation. Experimental results demonstrate the effectiveness, adequacy, and robustness of the proposed work, which is shown over a comprehensive data set comprising several soccer videos captured at different places.
Keywords: summarization, detection, Bayesian network, t-cherry tree
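A minimal hand-rolled illustration of Bayesian-network reasoning of the kind described above, using two invented object-based cues to update belief in a "goal" event; the structure and probabilities are illustrative only and far simpler than the t-cherry junction tree model.

```python
# Prior over the hidden event E in a given shot window: goal vs. no goal.
p_event = {"goal": 0.05, "no_goal": 0.95}

# Conditional probability tables for two observed cues given the event.
p_ball_in_box = {"goal": 0.95, "no_goal": 0.30}   # P(ball near goalmouth | E)
p_noise_peak = {"goal": 0.90, "no_goal": 0.10}    # P(crowd-noise peak | E)

def posterior_goal(ball_in_box: bool, noise_peak: bool) -> float:
    """P(goal | observations) by enumeration, assuming cues independent given E."""
    joint = {}
    for e, prior in p_event.items():
        l1 = p_ball_in_box[e] if ball_in_box else 1 - p_ball_in_box[e]
        l2 = p_noise_peak[e] if noise_peak else 1 - p_noise_peak[e]
        joint[e] = prior * l1 * l2
    return joint["goal"] / sum(joint.values())

print(f"P(goal | both cues)  = {posterior_goal(True, True):.3f}")
print(f"P(goal | noise only) = {posterior_goal(False, True):.3f}")
```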
Procedia PDF Downloads 327
109 Context-Aware Point-Of-Interests Recommender Systems Using Integrated Sentiment and Network Analysis
Authors: Ho Yeon Park, Kyoung-Jae Kim
Abstract:
Recently, users' interest in location-based social network services has increased with advances in the social web and location-based technologies. It may be easy to recommend preferred items if we can use users' preferences, context and social network information simultaneously. In this study, we propose context-aware POI (point-of-interest) recommender systems using location-based network analysis and sentiment analysis, which consider context, social network information and implicit user preference scores. We propose a context-aware POI recommendation system consisting of three sub-modules and an integrated recommendation system combining them. First, we will develop a recommendation module based on network analysis. This module combines social network analysis and cluster-indexing collaborative filtering. Next, this study develops a recommendation module using social singular value decomposition (SVD) and implicit SVD. In this research, we will develop a recommendation module that can produce preference scores based on the frequency of a user's POI visits in the POI recommendation process by using social and implicit SVD, which can reflect implicit feedback in collaborative filtering. We will also develop a recommendation module that can estimate preference scores based on these recommendations. Third, this study will propose a recommendation module using opinion mining and emotional analysis on data such as reviews of POIs extracted from location-based social networks. Finally, we will develop an integration algorithm that combines the results of the three recommendation modules proposed in this research. Experimental results show the usefulness of the proposed model in relation to recommendation performance.
Keywords: sentiment analysis, network analysis, recommender systems, point-of-interests, business analytics
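A minimal sketch of the SVD-based module described above, assuming NumPy and a small invented user-by-POI visit-count matrix used as implicit feedback; a low-rank reconstruction supplies preference scores for unvisited POIs.

```python
import numpy as np

# Implicit feedback: visit counts of 5 users at 6 POIs (0 = never visited).
R = np.array([
    [5, 3, 0, 1, 0, 0],
    [4, 0, 0, 1, 0, 2],
    [1, 1, 0, 5, 0, 0],
    [0, 0, 5, 4, 3, 0],
    [0, 1, 5, 4, 0, 1],
], dtype=float)

# Truncated SVD: keep k latent factors and rebuild a dense score matrix.
k = 2
U, s, Vt = np.linalg.svd(R, full_matrices=False)
scores = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Recommend, for each user, the unvisited POI with the highest predicted score.
for u in range(R.shape[0]):
    unvisited = np.where(R[u] == 0)[0]
    best = unvisited[np.argmax(scores[u, unvisited])]
    print(f"user {u}: recommend POI {best} (score {scores[u, best]:.2f})")
```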
Procedia PDF Downloads 252
108 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data
Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L. Duan
Abstract:
The conditional density characterizes the distribution of a response variable y given a predictor x and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, the authors extend NF neural networks to the case when an external predictor x is present. Specifically, they use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zₚ, zₙ]. The zₚ component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zₙ component is a high-dimensional independent Gaussian vector, which explains the variations in y not or less related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework while significantly improving the interpretation of the latent component, since zₚ represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations due to factors such as lighting condition and subject id from the other random variations. Further, the experiments show that an unconditional NF neural network based on an unsupervised model of z, such as a Gaussian mixture, fails to generate interpretable results.
Keywords: conditional density estimation, image generation, normalizing flow, supervised dimension reduction
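A compact restatement of the construction described above, assuming the flow f splits its output into the two latent blocks (written f_p and f_n here) and that the blocks are independent given x; this is a sketch of the change-of-variables density, not the authors' exact training objective.

```latex
% z = [z_p, z_n] = f(y),  z_p ~ posterior of an elementary model for x,  z_n ~ N(0, I)
\[
  p(y \mid x) \;=\; p_{z_p \mid x}\!\big(f_p(y)\big)\,
  \mathcal{N}\!\big(f_n(y);\, 0,\, I\big)\,
  \left|\det \frac{\partial f(y)}{\partial y}\right|
\]
```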
Procedia PDF Downloads 99
107 Analysis of Gender Budgeting in Healthcare Sector: A Case of Gujarat State of India
Authors: Juhi Pandya, Elekes Zsuzsanna
Abstract:
Health is related to every aspect of a human being, and even a slight change can lead to ill health. Gender plays an eminent role in determining an individual's health exposure. Policies on health have implicit effects at the individual, societal and economic levels. The inclusion of a gender perspective into policies has drawn enormous attention globally, nationally and locally as a way to reduce inequalities and achieve economic growth. Simultaneously, there has been an initiation of policies with a gender perspective which are named differently but hold a similar meaning or objective; they are called gender mainstreaming policies or gender sensitization policies. Gender budgeting acts as a tool for the application of gender mainstreaming policies. It incorporates a gender perspective into the budgetary process by restructuring revenues and expenditures at all levels of the budget. The current study analyses the gender budgeting reports in terms of healthcare for 2014-16 for Gujarat State, India. The expenditures and literature under the heading of the gender budgeting reports named "Health and Family Welfare Department" are discussed in the paper. The analysis is carried out with the help of the reports published by the Gujarat government on gender budgeting. The results discuss the expenditure and the initiation of new policies as a roadmap for the promotion of gender equality through gender budgeting, noting the escalation of budgetary numbers for health expenditure. Additionally, the paper raises questions about potential loopholes pertaining to gender budgeting in Gujarat: the budget reports do not give a specific explanation of how the budget was spent on the healthcare schemes mentioned, nor do they clarify how many beneficiaries benefited through the gender budget; the explanation just provides an overview of the theory behind the healthcare Schemes/Yojana or Abhiyan.
Keywords: gender, gender budgeting, gender equality, healthcare
Procedia PDF Downloads 352
106 Analysis of Pangasinan State University: Bayambang Students’ Concerns Through Social Media Analytics and Latent Dirichlet Allocation Topic Modelling Approach
Authors: Matthew John F. Sino Cruz, Sarah Jane M. Ferrer, Janice C. Francisco
Abstract:
The COVID-19 pandemic has affected more than 114 countries all over the world since it was considered a global health concern in 2020. Different sectors, including education, have shifted to remote/distance setups to follow the guidelines set to prevent the spread of the disease. One of the higher education institutions that shifted to a remote setup is Pangasinan State University (PSU). In order to continue providing quality instruction to the students, PSU designed a Flexible Learning Model to keep providing services to its stakeholders amidst the pandemic. The model covers the redesign of instruction delivery in a remote setup and the technology needed to support these adjustments. The primary goal of this study is to determine the insights of the PSU Bayambang students towards the remote setup implemented during the pandemic and how they perceived the initiatives employed in relation to their experiences in flexible learning. In this study, a topic modelling approach was implemented using Latent Dirichlet Allocation. The dataset used in the study. The results show that the most common concerns of the students include time and resource management, poor internet connection issues, and difficulty coping with the flexible learning modality. Furthermore, the findings of the study can be used as one of the bases for the administration to review and improve the policies and initiatives implemented during the pandemic in relation to remote service delivery. In addition, further studies can be conducted to determine the overall sentiment of the other stakeholders on the policies implemented at the University.
Keywords: COVID-19, topic modelling, students’ sentiment, flexible learning, Latent Dirichlet allocation
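A minimal sketch of the LDA topic-modelling step described above, assuming scikit-learn and a few invented student responses; the topic count and preprocessing are illustrative, not the study's actual settings.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical free-text responses from students about flexible learning.
docs = [
    "poor internet connection makes it hard to download modules",
    "hard to manage my time between household chores and modules",
    "unstable internet signal during online classes",
    "too many requirements, difficult to cope with the flexible modality",
    "no laptop at home, sharing one phone for all subjects",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

# Top words per topic approximate the dominant student concerns.
terms = vec.get_feature_names_out()
for t, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print(f"topic {t}: {', '.join(top)}")
```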
Procedia PDF Downloads 122
105 Thick Data Techniques for Identifying Abnormality in Video Frames for Wireless Capsule Endoscopy
Authors: Jinan Fiaidhi, Sabah Mohammed, Petros Zezos
Abstract:
Capsule endoscopy (CE) is an established noninvasive diagnostic modality for investigating small bowel disease. CE has a pivotal role in assessing patients with suspected bleeding or identifying evidence of active Crohn's disease in the small bowel. However, CE produces lengthy videos with at least eighty thousand frames, at a frequency of 2 frames per second. Gastroenterologists cannot dedicate 8 to 15 hours to reading the CE video frames to arrive at a diagnosis. This is why analyzing CE videos with modern artificial intelligence techniques becomes a necessity. However, machine learning, including deep learning, has failed to report robust results because of the lack of large samples to train its neural nets. In this paper, we describe a thick data approach that learns from a few anchor images. We use well-established datasets like KVASIR and CrohnIPI to filter candidate frames that include interesting anomalies in any CE video. We identify candidate frames based on feature extraction to provide representative measures of the anomaly, like the size of the anomaly and the color contrast compared to the image background, and later feed these features to a decision tree that can classify the candidate frames as having a condition like Crohn's disease. Our thick data approach achieved an accuracy in detecting Crohn's disease, based on the availability of ulcer areas in the candidate frames, of 89.9% for KVASIR and 83.3% for CrohnIPI. We are continuing our research to fine-tune our approach by adding more thick data methods to enhance diagnosis accuracy.
Keywords: thick data analytics, capsule endoscopy, Crohn’s disease, siamese neural network, decision tree
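A minimal sketch of the final classification step described above, assuming scikit-learn and two hand-crafted features per candidate frame (relative anomaly size and colour contrast against the background); the toy values and the thresholds the tree learns are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical per-frame features extracted from candidate CE frames:
# [anomaly_area_fraction, color_contrast_to_background]
X = np.array([
    [0.12, 0.65], [0.20, 0.70], [0.15, 0.55], [0.18, 0.60],   # ulcer-like frames
    [0.01, 0.10], [0.02, 0.15], [0.03, 0.08], [0.01, 0.20],   # normal frames
])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = suggestive of Crohn's ulceration

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(clf, feature_names=["area_fraction", "contrast"]))

# A new candidate frame with a moderately sized, high-contrast region:
print(clf.predict([[0.10, 0.58]]))  # -> [1]
```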
Procedia PDF Downloads 156
104 Parallelization of Random Accessible Progressive Streaming of Compressed 3D Models over Web
Authors: Aayushi Somani, Siba P. Samal
Abstract:
Three-dimensional (3D) meshes are data structures which store geometric information of an object or scene, generally in the form of vertices and edges. Current laser scanning and other geometric data acquisition technologies produce high-resolution sampling, which leads to high-resolution meshes. While high-resolution meshes give better-quality rendering and hence are used often, the processing as well as the storage of 3D meshes is currently resource-intensive. At the same time, web applications for data processing have become ubiquitous owing to their accessibility. For 3D meshes, the advancement of 3D web technologies, such as WebGL and WebVR, has enabled high-fidelity rendering of huge meshes. However, there exists a gap in the ability to stream huge meshes to a native client and browser application due to high network latency. Also, there is an inherent delay in loading WebGL pages due to large and complex models. The focus of our work is to identify the challenges faced when such meshes are streamed into and processed on hand-held devices with their limited resources. One of the solutions conventionally used in the graphics community to alleviate resource limitations is mesh compression. Our approach deals with a two-step approach for random-accessible progressive compression and its parallel implementation. The first step includes partitioning the original mesh into multiple sub-meshes, and then we invoke data parallelism on these sub-meshes for their compression. Subsequent threaded decompression logic is implemented inside the web browser engine by modifying the WebGL implementation in the open-source Chromium engine. This concept can be used to completely revolutionize the way e-commerce and virtual reality technology work on consumer electronic devices. These objects can be compressed on the server and transmitted over the network. The progressive decompression can be performed on the client device and rendered. The multiple views currently used on e-commerce sites for viewing the same product from different angles can be replaced by a single progressive model for a smoother user experience. It can also be used in WebVR for common and widely used activities like virtual reality shopping, watching movies and playing games. Our experiments and comparison with existing techniques show encouraging results in terms of latency (compressed size is ~10-15% of the original mesh), processing time (20-22% increase over serial implementation) and quality of user experience in the web browser.
Keywords: 3D compression, 3D mesh, 3D web, chromium, client-server architecture, e-commerce, level of details, parallelization, progressive compression, WebGL, WebVR
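A minimal sketch of the partition-then-compress-in-parallel idea described above, assuming NumPy, 16-bit quantization, and zlib as a stand-in codec; the real system uses a progressive mesh codec and browser-side decompression, which this does not reproduce.

```python
import zlib
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def quantize(vertices: np.ndarray) -> bytes:
    """Quantize float32 xyz vertices to 16-bit integers inside their bounding box."""
    lo, hi = vertices.min(axis=0), vertices.max(axis=0)
    q = np.round((vertices - lo) / (hi - lo + 1e-12) * 65535).astype(np.uint16)
    return q.tobytes()

def compress_submesh(vertices: np.ndarray) -> bytes:
    return zlib.compress(quantize(vertices), level=9)

# Hypothetical mesh: 400k vertices split into 8 sub-meshes.
mesh = np.random.rand(400_000, 3).astype(np.float32)
submeshes = np.array_split(mesh, 8)

# Data parallelism over sub-meshes (zlib can release the GIL on large buffers).
with ThreadPoolExecutor(max_workers=8) as pool:
    blobs = list(pool.map(compress_submesh, submeshes))

original = mesh.nbytes
compressed = sum(len(b) for b in blobs)
print(f"compressed to {100 * compressed / original:.1f}% of original size")
```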
Procedia PDF Downloads 170
103 Digital Twin for Retail Store Security
Authors: Rishi Agarwal
Abstract:
Digital twins are emerging as a strong technology used to imitate and monitor physical objects digitally in real time across sectors. They do not only deal with the digital space; they also actuate responses in the physical space based on digital-space processing such as storage, modeling, learning, simulation, and prediction. This paper explores the application of digital twins to enhancing physical security in retail stores. The retail sector still relies on outdated physical security practices like manual monitoring and metal detectors, which are insufficient for modern needs. There is a lack of real-time data and system integration, leading to ineffective emergency response and preventative measures. As retail automation increases, new digital frameworks must control safety without human intervention. To address this, the paper proposes implementing an intelligent digital twin framework. This framework collects diverse data streams from in-store sensors, surveillance, external sources, and customer devices; advanced analytics and simulations then enable real-time monitoring, incident prediction, automated emergency procedures, and stakeholder coordination. Overall, the digital twin improves physical security through automation, adaptability, and comprehensive data sharing. The paper also analyzes the pros and cons of implementing this technology through an Emerging Technology Analysis Canvas that examines different aspects of the technology through both narrow and wide lenses to help decision makers in their decision on implementing this technology. On a broader scale, this showcases the value of digital twins in transforming legacy systems across sectors and how data sharing can create a safer world for both retail store customers and owners.
Keywords: digital twin, retail store safety, digital twin in retail, digital twin for physical safety
Procedia PDF Downloads 73
102 Machine Learning Techniques for COVID-19 Detection: A Comparative Analysis
Authors: Abeer A. Aljohani
Abstract:
The spread of the COVID-19 virus has been one of the most extreme pandemics across the globe. It is also referred to as coronavirus, which is a contagious disease that continuously mutates into numerous variants. Currently, the B.1.1.529 variant, labeled as omicron, has been detected in South Africa. The huge spread of COVID-19 disease has affected several lives and has placed exceptional pressure on healthcare systems worldwide. Also, everyday life and the global economy have been at stake. This research aims to predict COVID-19 disease at its initial stage to reduce the death count. Machine learning (ML) is nowadays used in almost every area. Numerous COVID-19 cases have produced a huge burden on hospitals as well as health workers. To reduce this burden, this paper predicts COVID-19 disease based on the symptoms and medical history of the patient. This research presents a unique architecture for COVID-19 detection using ML techniques integrated with feature dimensionality reduction. This paper uses a standard UCI dataset for predicting COVID-19 disease. This dataset comprises the symptoms of 5434 patients. This paper also compares several supervised ML techniques with the presented architecture. The architecture also utilizes a 10-fold cross-validation process for generalization and the principal component analysis (PCA) technique for feature reduction. Standard metrics are used to evaluate the proposed architecture, including F1-score, precision, accuracy, recall, receiver operating characteristic (ROC), and area under the curve (AUC). The results show that decision tree, random forest, and neural networks outperform all other state-of-the-art ML techniques. This achieved result can help effectively in identifying COVID-19 infection cases.
Keywords: supervised machine learning, COVID-19 prediction, healthcare analytics, random forest, neural network
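A minimal sketch of the evaluation setup described above (scaling, PCA-based feature reduction, a tree-ensemble classifier, and 10-fold cross-validation), assuming scikit-learn; the random placeholder data stands in for the UCI symptom dataset and does not reproduce its real layout.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# X: (n_patients, n_symptom_features), y: 1 = COVID-19 positive, 0 = negative.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(5434, 20)).astype(float)   # placeholder for the symptom data
y = rng.integers(0, 2, size=5434)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=10)),          # feature dimensionality reduction
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])

scores = cross_val_score(pipe, X, y, cv=10, scoring="roc_auc")  # 10-fold CV, AUC
print(f"mean AUC = {scores.mean():.3f} +/- {scores.std():.3f}")
```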
Procedia PDF Downloads 94
101 Building Transparent Supply Chains through Digital Tracing
Authors: Penina Orenstein
Abstract:
In today’s world, particularly with COVID-19 a constant worldwide threat, organizations need greater visibility over their supply chains more than ever before, in order to find areas for improvement and greater efficiency, reduce the chances of disruption and stay competitive. The concept of supply chain mapping is one where every process and route is mapped in detail between each vendor and supplier. The simplest method of mapping involves sourcing publicly available data, including news and financial information concerning relationships between suppliers. An additional layer of information would be disclosed by large, direct suppliers about their production and logistics sites. While this method has the advantage of not requiring any input from suppliers, it also does not allow for much transparency beyond the first supplier tier and may generate irrelevant data (noise) that must be filtered out to find the actionable data. The primary goal of this research is to build data maps of supply chains by focusing on a layered approach. Using these maps, the secondary goal is to address the question as to whether the supply chain is re-engineered to make improvements, for example, to lower the carbon footprint. Using a drill-down approach, the end result is a comprehensive map detailing the linkages between tier-one, tier-two, and tier-three suppliers superimposed on a geographical map. The driving force behind this idea is to be able to trace individual parts to the exact site where they are manufactured. In this way, companies can ensure sustainability practices from the production of raw materials through to the finished goods. The approach allows companies to identify and anticipate vulnerabilities in their supply chain. It unlocks predictive analytics capabilities and enables them to act proactively. The research is particularly compelling because it unites network science theory with empirical data and presents the results in a visual, intuitive manner.
Keywords: data mining, supply chain, empirical research, data mapping
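A minimal sketch of the layered mapping idea described above, assuming NetworkX and an invented three-tier supplier graph; tracing a finished product back to its upstream sites becomes a graph-ancestor query.

```python
import networkx as nx

# Directed edges point downstream: supplier -> customer.
G = nx.DiGraph()
G.add_edges_from([
    ("Mine A (tier 3)", "Refiner B (tier 2)"),
    ("Refiner B (tier 2)", "Component Co (tier 1)"),
    ("Chip Fab C (tier 2)", "Component Co (tier 1)"),
    ("Component Co (tier 1)", "OEM"),
    ("Packaging D (tier 1)", "OEM"),
])

# Every upstream site behind the OEM's finished product:
print(sorted(nx.ancestors(G, "OEM")))

# Vulnerability check: which downstream nodes would a disruption at Refiner B affect?
print(sorted(nx.descendants(G, "Refiner B (tier 2)")))
```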
Procedia PDF Downloads 176
100 Predicting Match Outcomes in Team Sport via Machine Learning: Evidence from National Basketball Association
Authors: Jacky Liu
Abstract:
This paper develops a team sports outcome prediction system with potential for wide-ranging applications across various disciplines. Despite significant advancements in predictive analytics, existing studies on sports outcome prediction have considerable limitations, including insufficient feature engineering and underutilization of advanced machine learning techniques, among others. To address these issues, we extend the Sports Cross Industry Standard Process for Data Mining (SRP-CRISP-DM) framework and propose a unique, comprehensive predictive system, using National Basketball Association (NBA) data as an example to test this extended framework. Our approach follows a holistic methodology in feature engineering, employing both time series and non-time series data, as well as conducting exploratory data analysis and feature selection. Furthermore, we contribute to the discourse on target variable choice in team sports outcome prediction, asserting that point spread prediction yields higher profits than game-winner prediction. Using machine learning algorithms, particularly XGBoost, results in a significant improvement in the predictive accuracy of team sports outcomes. Applied to point spread betting strategies, it offers an astounding annual return of approximately 900% on an initial investment of $100. Our findings not only contribute to the academic literature but have critical practical implications for sports betting. Our study advances the understanding of team sports outcome prediction, a burgeoning area in complex system prediction, and paves the way for potential profitability and more informed decision making in sports betting markets.
Keywords: machine learning, team sports, game outcome prediction, sports betting, profits simulation
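A minimal sketch of the point-spread regression and betting-rule idea described above, assuming the xgboost and scikit-learn packages, synthetic features, and a made-up bookmaker line; the feature set, hyperparameters, and 4-point threshold are illustrative, not the paper's tuned system.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

# Hypothetical engineered features per game (rolling team form, rest days, rating gap, ...).
rng = np.random.default_rng(7)
X = rng.normal(size=(2000, 12))
y = X[:, 0] * 4 + X[:, 1] * 2 + rng.normal(scale=6, size=2000)   # home margin of victory

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=7)

model = XGBRegressor(n_estimators=400, max_depth=4, learning_rate=0.05,
                     subsample=0.8, colsample_bytree=0.8)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

bookmaker_line = y_te + rng.normal(scale=3, size=len(y_te))   # stand-in for the posted spread

# Simple strategy: bet only when the model disagrees with the line by > 4 points,
# then check how often those bets beat the spread.
edge = pred - bookmaker_line
bets = np.abs(edge) > 4
wins = np.sign(edge[bets]) == np.sign((y_te - bookmaker_line)[bets])
print(f"bets placed: {bets.sum()}, hit rate: {wins.mean():.3f}")
```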
Procedia PDF Downloads 10299 Compilation and Statistical Analysis of an Arabic-English Legal Corpus in Sketch Engine
Authors: C. Brierley, H. El-Farahaty, A. Farhan
Abstract:
The Leeds Parallel Corpus of Arabic-English Constitutions is a parallel corpus for the Arabic legal domain. Analysis of legal language via Corpus Linguistics techniques is an important development. In legal proceedings, a corpus-based approach to disambiguating meaning is set to replace the dictionary as an interpretative tool, and legal scholarship in the United States is now attuned to the potential for Text Analytics over vast quantities of text-based legal material, following the business and medical industries. This trend is reflected in Europe: the interdisciplinary research group in Computer Assisted Legal Linguistics mines big data collections of legal and non-legal texts to analyse: legal interpretations; legal discourse; the comprehensibility of legal texts; conflict resolution; and linguistic human rights. This paper focuses on ‘dignity’ as an important aspect of the overarching concept of human rights in current constitutions across the Arab world. We have compiled a parallel, Arabic-English raw text corpus (169,861 Arabic words and 205,893 English words) from reputable websites such as the World Intellectual Property Organisation and CONSTITUTE, and uploaded and queried our corpus in Sketch Engine. Our most challenging task was sentence-level alignment of the Arabic-English data. This entailed manual intervention to ensure correspondence on a one-to-many basis, since Arabic sentences differ from English in length and punctuation. We have searched for morphological variants of ‘dignity’ (كرامة, karāma) in the Arabic data and inspected their English translation equivalents. The term occurs most frequently in the Sudanese constitution (10 instances), and not at all in the constitution of Palestine. Its most frequent collocate, determined via the logDice statistic in Sketch Engine, is ‘human’ as in ‘human dignity’.Keywords: Arabic constitution, corpus-based legal linguistics, human rights, parallel Arabic-English legal corpora
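For reference, the logDice collocation statistic used by Sketch Engine is conventionally defined (after Rychlý, 2008) as 14 + log2(2·f_xy / (f_x + f_y)), where f_xy is the joint frequency of the word pair and f_x, f_y are the individual word frequencies. The small sketch below computes it with illustrative counts that are not taken from the Leeds corpus.

```python
# A minimal sketch of the logDice collocation score as conventionally defined;
# the frequency counts in the example are hypothetical, not from the Leeds corpus.
import math

def log_dice(f_xy: int, f_x: int, f_y: int) -> float:
    """Collocation strength of a word pair given its joint and marginal frequencies."""
    return 14 + math.log2(2 * f_xy / (f_x + f_y))

# Example: 'dignity' co-occurring with 'human' in a small sample (illustrative counts).
print(round(log_dice(f_xy=9, f_x=12, f_y=160), 2))
```

Unlike raw co-occurrence counts, logDice is bounded (maximum 14) and stable across corpus sizes, which is why it is a common choice for ranking collocates.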
Procedia PDF Downloads 18398 Software Development to Empowering Digital Libraries with Effortless Digital Cataloging and Access
Authors: Abdul Basit Kiani
Abstract:
The software for the digital library system is a cutting-edge solution designed to revolutionize the way libraries manage and provide access to their vast collections of digital content. This advanced software leverages the power of technology to offer a seamless and user-friendly experience for both library staff and patrons. By implementing this software, libraries can efficiently organize, store, and retrieve digital resources, including e-books, audiobooks, journals, articles, and multimedia content. Its intuitive interface allows library staff to effortlessly manage cataloging, metadata extraction, and content enrichment, ensuring accurate and comprehensive access to digital materials. For patrons, the software offers a personalized and immersive digital library experience. They can easily browse the digital catalog, search for specific items, and explore related content through intelligent recommendation algorithms. The software also facilitates seamless borrowing, lending, and preservation of digital items, enabling users to access their favorite resources anytime, anywhere, on multiple devices. With robust security features, the software ensures the protection of intellectual property rights and enforces access controls to safeguard sensitive content. Integration with external authentication systems and user management tools streamlines the library's administration processes, while advanced analytics provide valuable insights into patron behavior and content usage. Overall, this software for the digital library system empowers libraries to embrace the digital era, offering enhanced access, convenience, and discoverability of their vast collections. It paves the way for a more inclusive and engaging library experience, catering to the evolving needs of tech-savvy patrons.Keywords: software development, empowering digital libraries, digital cataloging and access, management system
Procedia PDF Downloads 8397 Revolutionizing Autonomous Trucking Logistics with Customer Relationship Management Cloud
Authors: Sharda Kumari, Saiman Shetty
Abstract:
Autonomous trucking is just one of the numerous significant shifts impacting fleet management services. The Society of Automotive Engineers (SAE) has defined six levels of vehicle automation that have been adopted internationally, including by the United States Department of Transportation. On public highways in the United States, organizations are testing driverless vehicles with at least Level 4 automation, in which a human is present in the vehicle and can disable the automation; this is usually done while the trucks are not engaged in highway driving. However, completely driverless vehicles are presently being tested in the state of California. While autonomous trucking can increase safety, decrease trucking costs, provide solutions to trucker shortages, and improve efficiencies, logistics, too, requires advancements to keep up with trucking innovations. Given that artificial intelligence, machine learning, and automated procedures enable people to do their duties in other sectors with fewer resources, CRM (Customer Relationship Management) can be applied to the autonomous trucking business to provide the same level of efficiency. In a society witnessing significant digital disruptions, fleet management is likewise being transformed by technology. Utilizing strategic alliances to enhance core services is an effective technique for capitalizing on innovations and delivering improved services. Applying analytics to CRM systems improves cost control across fuel strategy, fleet maintenance, driver behavior, route planning, road safety compliance, and capacity utilization. Integration of autonomous trucks with automated fleet management, yard/terminal management, and customer service is possible and has significant power to redraw the lines between the public and private spheres in autonomous trucking logistics.Keywords: autonomous vehicles, customer relationship management, customer experience, autonomous trucking, digital transformation
Procedia PDF Downloads 110