Search results for: user acceptance
862 The Comparison of Limits of Detection of Lateral Flow Immunochromatographic Strips for Different Types of Mycotoxins
Authors: Xinyi Zhao, Furong Tian
Abstract:
Mycotoxins are secondary metabolites of fungi. They are poisonous, carcinogenic, and mutagenic, and pose a serious health threat to both humans and animals, causing severe illness and even death. Rapid, simple, and cheap detection methods for mycotoxins are of immense importance and in great demand in the food and beverage industry as well as in agriculture and environmental monitoring. Lateral flow immunochromatographic strips (ICSTs) have been widely used in food safety and environmental monitoring. Forty-six papers published between 2001 and 2021 were identified on Google Scholar and Scopus and reviewed for the limit of detection and the nanomaterial used in lateral flow immunochromatographic strips for different types of mycotoxins. Twenty-five papers were compared to identify the lowest limit of detection among the different mycotoxins (aflatoxin B1: 10, zearalenone: 5, fumonisin B1: 5, trichothecene-A: 5). Most of these highly sensitive strips use a competitive format, whereas the sandwich format is usually used in large-scale detection. In conclusion, the mycotoxin that receives the most research attention is aflatoxin B1, and its limit of detection is the lowest; gold-nanoparticle-based immunochromatographic test strips have the lowest limit of detection. Five papers involve smartphone detection, and all of them detect aflatoxin B1 with gold nanoparticles. In these papers, quantitative concentration results can be obtained when the user uploads a photograph of the test lines through the smartphone application.
Keywords: aflatoxin B1, limit of detection, gold nanoparticle, lateral flow immunochromatographic strips, mycotoxins
Procedia PDF Downloads 195
861 AI Ethical Values as Dependent on the Role and Perspective of the Ethical AI Code Founder - A Mapping Review
Authors: Moshe Davidian, Shlomo Mark, Yotam Lurie
Abstract:
With the rapid development of technology and the concomitant growth in the capability and power of Artificial Intelligence (AI) systems, the ethical challenges involved in these systems are also evolving and increasing. In recent years, various organizations, including governments, international institutions, professional societies, civic organizations, and commercial companies, have chosen to address these challenges by publishing ethical codes for AI systems. However, despite the apparent agreement that AI should be “ethical,” there is debate about the definition of “ethical artificial intelligence.” This study investigates the various AI ethical codes and their key ethical values. From the vast collection of codes that exist, it analyzes and compares 25 ethical codes found to be representative of different types of organizations. In addition, as part of its literature review, the study provides an overview of data collected in three recent reviews of AI codes. The results of the analyses demonstrate a convergence around seven key ethical values. The key finding, however, is that the different AI ethical codes ultimately reflect the type of organization that designed the code; i.e., the organization’s role as regulator, user, or developer affects its view of what ethical AI is. The results show a relationship between the organization’s role and the dominant values in its code. The main contribution of this study is the development of a list of key values for all AI systems, together with specific values that should shape the development and design of AI systems, while allowing for differences according to the organization for which the system is being developed. This will allow an analysis of AI values in relation to stakeholders.
Keywords: artificial intelligence, ethical codes, principles, values
Procedia PDF Downloads 107
860 The Descending Genicular Artery Perforator Free Flap as a Reliable Flap: Literature Review
Authors: Doran C. Kalmin
Abstract:
The descending genicular artery (DGA) perforator free flap provides an alternative for free flap reconstruction, based on a review of the literature detailing both anatomical and clinical studies. The DGA supplies skin, muscle, tendon, and bone around the medial aspect of the knee, which have been used in several pioneering reports to reconstruct defects in various areas throughout the body. After the success of the medial femoral condyle flap in early studies, a small number of studies have been published detailing the use of the DGA in free flap reconstruction. Despite this early success, acceptance of the DGA flap within the plastic and reconstructive surgical community has been limited, primarily because of anatomical variations of the pedicle. This literature review details the progression of the DGA perforator free flap and its variations as an alternative and reliable free flap for the reconstruction of composite defects, with an exploration of both anatomical and clinical studies. A literature review was undertaken, tracing the progression of the DGA flap from the early work of Acland et al., who pioneered the saphenous free flap, to modern studies of the anatomy of the DGA. The review details the anatomy and its variations, approaches to harvesting the flap, the advantages and disadvantages of the DGA perforator free flap, and flap outcomes. There are 15 published clinical series of DGA perforator free flaps incorporating cutaneous, osteoperiosteal, cartilage, osteocutaneous, osteoperiosteal and muscle, osteoperiosteal and subcutaneous, and tendocutaneous components. The commonest indication for using a DGA free flap was non-union of bone, particularly of the scaphoid, for which the medial femoral condyle could be used. In the case series, a success rate of over 90% was established, showing that these early studies have had good success with a wide range of tissue transfers. The greatest limitation is the anatomical variation of the DGA and, therefore, the challenges associated with raising the flap. Despite this variation, and the absence of the DGA in around 10-15% of cases, the saphenous artery can be used instead, as can the superior medial genicular artery if vascularized bone is required as part of the flap. Although only a handful of anatomical and clinical studies describe the DGA perforator free flap, it provides a reliable flap that can include a variety of composite structures for reconstruction in almost any area of the body. Despite its limitations, it is a reliable option for free flap reconstruction that can routinely be performed as a single-stage procedure.
Keywords: anatomical study, clinical study, descending genicular artery, literature review, perforator free flap reconstruction
Procedia PDF Downloads 144
859 Visual Template Detection and Compositional Automatic Regular Expression Generation for Business Invoice Extraction
Authors: Anthony Proschka, Deepak Mishra, Merlyn Ramanan, Zurab Baratashvili
Abstract:
Small and medium-sized businesses receive over 160 billion invoices every year. Since these documents exhibit many subtle differences in layout and text, automatically extracting structured fields such as sender name, amount, and VAT rate from them is an open research question. In this paper, existing work in template-based document extraction is extended, and a system is devised that is able to reliably extract all required fields for up to 70% of all documents in the data set, more than any previously reported method. Approaches are described for 1) detecting, through visual features, which template a given document belongs to, 2) automatically generating extraction rules for a new template by composing regular expressions from multiple components, and 3) computing confidence scores that indicate the accuracy of the automatic extractions. The system can generate templates from as little as one training sample and only requires the ground-truth field values instead of detailed annotations such as bounding boxes, which are hard to obtain. The system is deployed and used inside commercial accounting software.
Keywords: data mining, information retrieval, business, feature extraction, layout, business data processing, document handling, end-user trained information extraction, document archiving, scanned business documents, automated document processing, F1-measure, commercial accounting software
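The abstract does not include the authors' implementation. As a rough illustration of the compositional-regex idea it describes, the following Python sketch assembles an extraction rule for a hypothetical "total amount" field from reusable components; the component names, patterns, and toy confidence heuristic are assumptions, not the paper's actual rules.

```python
import re

# Minimal sketch (not the authors' code): compose an extraction rule for a
# hypothetical "total amount" field from reusable regex components.
COMPONENTS = {
    "label":    r"(?:Total|Amount\s+due)",                        # assumed label variants
    "currency": r"(?:EUR|USD|\$)",
    "number":   r"(?P<amount>\d{1,3}(?:[.,]\d{3})*[.,]\d{2})",
}

def compose(*keys, sep=r"\s*"):
    """Join component patterns into one template-specific extraction rule."""
    return re.compile(sep.join(COMPONENTS[k] for k in keys))

amount_rule = compose("label", "currency", "number")

def extract_amount(text):
    match = amount_rule.search(text)
    if not match:
        return None, 0.0
    # Toy confidence score: simply 1.0 when the composed rule matches;
    # the real system would combine several signals.
    return match.group("amount"), 1.0

print(extract_amount("Invoice 2021-042  Total EUR 1.234,56"))    # ('1.234,56', 1.0)
```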
Procedia PDF Downloads 130
858 Feasibility of Simulating External Vehicle Aerodynamics Using Spalart-Allmaras Turbulence Model with Adjoint Method in OpenFOAM and Fluent
Authors: Arpit Panwar, Arvind Deshpande
Abstract:
A study of external vehicle aerodynamics using the Spalart-Allmaras turbulence model with the adjoint method was conducted. The accessibility and ease of working with the Fluent module of ANSYS and with OpenFOAM were considered. The objective of the study was to understand and analyze the possibility of bringing high-level aerodynamic simulation to the average consumer vehicle. A form factor of the BMW M6 vehicle was designed in SolidWorks and analyzed in OpenFOAM and Fluent. Being a single-equation model, Spalart-Allmaras provides a much faster convergence rate when combined with the adjoint method. Fluent, being commercial software, still does not allow the Spalart-Allmaras turbulence model to be solved using the adjoint method; hence, the turbulence model was solved using the SIMPLE method in Fluent. OpenFOAM, being open source, provides flexibility in simulation but is not user-friendly; it supports solving the chosen turbulence model with the adjoint method. The results generated from the simulation give acceptable drag values when validated against the percentage error in drag reported for a notch-back vehicle model in an extensive simulation presented at the 6th ANSA and μETA conference, Greece. The success of this approach will allow more aerodynamic vehicle body design to be brought to all segments of the automobile market, not limiting it to high-end sports cars.
Keywords: Spalart-Allmaras turbulence model, OpenFOAM, adjoint method, SIMPLE method, vehicle aerodynamic design
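On the validation step mentioned above, drag results are typically compared through the drag coefficient and a percentage error against a reference value. The short Python sketch below illustrates only that arithmetic; the force, speed, area, and reference values are placeholders, not data from the study.

```python
# Illustrative only: how a simulated drag value might be validated against a
# reference result. The numbers below are placeholders, not the study's data.
def drag_coefficient(force_n, rho=1.225, v=30.0, frontal_area_m2=2.3):
    """Cd = 2*F / (rho * v^2 * A) for a steady external-aero run."""
    return 2.0 * force_n / (rho * v**2 * frontal_area_m2)

def percent_error(simulated, reference):
    return abs(simulated - reference) / reference * 100.0

cd_sim = drag_coefficient(force_n=420.0)   # hypothetical solver output
cd_ref = 0.34                              # hypothetical reference Cd
print(f"Cd = {cd_sim:.3f}, error vs. reference = {percent_error(cd_sim, cd_ref):.1f}%")
```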
Procedia PDF Downloads 200
857 A Comparative Analysis of the Application and Use of Information and Communication Technologies (ICTs) in Selected Manufacturing Industries for Development in Nigeria
Authors: Kolawole Taiwo Olabode
Abstract:
This is a comparative study of ICT adoption and use in selected manufacturing industries for development. The study was carried out in 2004 and repeated in 2013 (nine years later) using the same selected manufacturing industries, to assess the level, improvement, and extent of ICT facilities used in these companies. The theory of modernization was explored to explain some developmental issues in this study. The same semi-structured questionnaire and IDIs were used to elicit data on the subject matter. About 24.9% of the total workers (1,247) were sampled for this study using a quota sampling technique. SPSS was used to analyse the quantitative data, and the qualitative data were used to buttress the quantitative data. Findings indicated that Seven-Up Bottling Company and Frigoglass Glass Industry remained intensive ICT users, while Niger Match Nigeria Limited remained a non-intensive ICT user; unfortunately, Askar Paint Nigeria Limited had gone into liquidation. It is also important to note that only the intensive ICT users improved their relevant ICT facilities, and the existing problems of ICT adoption and use remained the same in Niger Match Nigeria Limited. The study concluded that, for a society to develop, management and government at all levels must do everything necessary to ensure that existing organisations become ICT compliant, so as to improve worker and organisational performance and enhance national development, enabling companies to compete to a global standard and gain recognition.
Keywords: ICT, intensive ICT-users, entrepreneurial, manufacturing industries, industries and development
Procedia PDF Downloads 302
856 Exploring Public Opinions Toward the Use of Generative Artificial Intelligence Chatbot in Higher Education: An Insight from Topic Modelling and Sentiment Analysis
Authors: Samer Muthana Sarsam, Abdul Samad Shibghatullah, Chit Su Mon, Abd Aziz Alias, Hosam Al-Samarraie
Abstract:
Generative Artificial Intelligence chatbots (GAI chatbots) have emerged as promising tools in various domains, including higher education. However, their specific role within the educational context and the level of legal support for their implementation remain unclear. Therefore, this study aims to investigate the role of Bard, a newly developed GAI chatbot, in higher education. To achieve this objective, English tweets were collected from Twitter's free streaming Application Programming Interface (API). The Latent Dirichlet Allocation (LDA) algorithm was applied to extract latent topics from the collected tweets. User sentiments, including disgust, surprise, sadness, anger, fear, joy, anticipation, and trust, as well as positive and negative sentiments, were extracted using the NRC Affect Intensity Lexicon and SentiStrength tools. This study explored the benefits, challenges, and future implications of integrating GAI chatbots in higher education. The findings shed light on the potential power of such tools, exemplified by Bard, in enhancing the learning process and providing support to students throughout their educational journey.
Keywords: generative artificial intelligence chatbots, bard, higher education, topic modelling, sentiment analysis
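As a rough illustration of the pipeline the abstract describes (topic extraction followed by lexicon-based sentiment scoring), the Python sketch below uses scikit-learn's LDA on a few made-up tweets and a toy lexicon that merely stands in for the NRC Affect Intensity Lexicon and SentiStrength tools actually used in the study.

```python
# Minimal sketch of the analysis pipeline described above (not the authors' code).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "Bard helped me outline my essay in minutes",
    "Worried that chatbots will make students stop thinking",
    "Our university still has no policy on generative AI tools",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-3:][::-1]]   # top words per latent topic
    print(f"topic {k}: {top}")

toy_lexicon = {"helped": ("joy", 0.8), "worried": ("fear", 0.7)}   # stand-in only
for t in tweets:
    hits = [toy_lexicon[w] for w in t.lower().split() if w in toy_lexicon]
    print(t, "->", hits or "neutral")
```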
Procedia PDF Downloads 83
855 Requirements Management in Agile
Authors: Ravneet Kaur
Abstract:
The concept of Agile requirements engineering and management is not new. However, the struggle to figure out how a traditional requirements management process fits within an Agile framework remains complex. This paper describes a process that can merge an organization's traditional requirements management process neatly into the Agile software development process. The process provides traceability of the product backlog to external documents on one hand and to user stories on the other. It also gives sufficient evidence, in the form of various statistics and reports, that the system will deliver the right functionality with good quality. In a nutshell, by overlaying a process on top of Agile without disturbing the agility, we are able to obtain synergistic benefits in terms of productivity, profitability, reporting, and end-to-end visibility for all stakeholders. The framework can be used for just-in-time requirements definition or to build a repository of requirements for future use. The goal is to make sure that the business (specifically, the product owner) can clearly articulate what needs to be built and define what is of high quality. To accomplish this, the requirements cycle follows a Scrum-like process that mirrors the development cycle but stays two to three steps ahead. The goal is to create a process by which requirements can be thoroughly vetted, organized, and communicated in a manner that is iterative, timely, and quality-focused. Agile is quickly becoming the most popular way of developing software because it fosters continuous improvement, time-boxed development cycles, and more rapid delivery of value to end users. That value will be driven to a large extent by the quality and clarity of the requirements that feed the software development process. An agile, lean, and timely approach to requirements as the starting point will help to ensure that the process is optimized.
Keywords: requirements management, Agile
Procedia PDF Downloads 370
854 Simplified Linear Regression Model to Quantify the Thermal Resilience of Office Buildings in Three Different Power Outage Day Times
Authors: Nagham Ismail, Djamel Ouahrani
Abstract:
Thermal resilience in the built environment reflects a building's capacity to adapt to extreme climate changes. In hot climates, power outages in office buildings pose risks to the health and productivity of workers. It is therefore of interest to quantify the thermal resilience of office buildings by developing a user-friendly simplified model. This simplified model begins with creating an assessment metric of thermal resilience that measures the duration between the power outage and the point at which the thermal habitability condition is compromised, considering different power interruption times (morning, noon, and afternoon). In this context, energy simulations of an office building are conducted for Qatar's summer weather by changing different parameters related to (i) wall characteristics, (ii) glazing characteristics, (iii) load, (iv) orientation, and (v) air leakage. The simulation results are processed using SPSS to derive linear regression equations, aiding stakeholders in evaluating the performance of commercial buildings during different power interruption times. The findings reveal the significant influence of glazing characteristics on thermal resilience, with the morning power outage scenario posing the most detrimental impact in terms of the shortest duration before thermal resilience is compromised.
Keywords: thermal resilience, thermal envelope, energy modeling, building simulation, thermal comfort, power disruption, extreme weather
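To illustrate the kind of simplified regression model the abstract refers to, the Python sketch below fits a linear model relating envelope parameters to the hours of habitability after an outage; the feature set and numbers are hypothetical, and the study itself derived its equations in SPSS from building simulations.

```python
# Illustrative sketch only: fitting a simplified linear model that relates envelope
# parameters to hours of habitability after a power outage. All values are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

# columns: wall U-value, glazing SHGC, internal load (W/m2), air leakage (ACH)
X = np.array([
    [0.40, 0.25, 15.0, 0.3],
    [0.60, 0.40, 25.0, 0.6],
    [0.30, 0.20, 10.0, 0.2],
    [0.55, 0.35, 20.0, 0.5],
])
hours_to_limit = np.array([6.5, 3.0, 8.0, 4.2])   # hypothetical morning-outage results

model = LinearRegression().fit(X, hours_to_limit)
print("coefficients:", dict(zip(["wall_U", "SHGC", "load", "ACH"], model.coef_.round(2))))
print("predicted hours for a new design:", model.predict([[0.45, 0.30, 18.0, 0.4]])[0])
```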
Procedia PDF Downloads 74
853 A Machine Learning Model for Predicting Students’ Academic Performance in Higher Institutions
Authors: Emmanuel Osaze Oshoiribhor, Adetokunbo MacGregor John-Otumu
Abstract:
In recent years, there has been a need to predict student academic achievement prior to graduation, in order to help students improve their grades, especially those who have struggled in the past. The purpose of this research is to use supervised learning techniques to create a model that predicts student academic progress. Many scholars have developed models that predict student academic achievement based on characteristics including smoking, demography, culture, social media, parent educational background, parent finances, and family background, to mention a few. These features, as well as the models used, could have misclassified students in terms of their academic achievement. As a prerequisite to predicting whether a student will perform well in the future on related courses, this model is built using a logistic regression classifier with basic features such as the previous semester's course score, class attendance, class participation, and the total number of course materials or resources the student is able to cover per semester. With 96.7 percent accuracy, the model outperformed other classifiers such as Naive Bayes, support vector machine (SVM), decision tree, random forest, and AdaBoost. The model is offered as a desktop application with user-friendly interfaces for forecasting student academic progress for both teachers and students. As a result, both students and professors are encouraged to use this technique to better predict outcomes.
Keywords: artificial intelligence, ML, logistic regression, performance, prediction
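A minimal sketch of such a logistic regression classifier is shown below, using made-up student records; the study's actual dataset, feature engineering, and reported 96.7% accuracy are not reproduced here.

```python
# Toy logistic regression on hypothetical student records (not the study's data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# columns: previous-semester score, attendance %, participation score, materials covered
X = np.array([[72, 90, 7, 20], [45, 60, 3, 8], [88, 95, 9, 25],
              [55, 70, 4, 10], [80, 85, 8, 22], [40, 50, 2, 5]])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = performs well, 0 = at risk

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("toy accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```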
Procedia PDF Downloads 109
852 Describing the Fine Electronic Structure and Predicting Properties of Materials with ATOMIC MATTERS Computation System
Authors: Rafal Michalski, Jakub Zygadlo
Abstract:
We present the concept, scientific methods, and algorithms of our computation system, ATOMIC MATTERS. This is the first presentation of the new computer package, which allows its user to describe the physical properties of localized atomic electron systems subject to electromagnetic interactions. Our solution applies to situations where an unclosed 2p/3p/3d/4d/5d/4f/5f electron subshell interacts with an electrostatic potential of definable symmetry and an external magnetic field. Our methods are based on the Crystal Electric Field (CEF) approach, which takes into consideration the electrostatic ligand field as well as the magnetic Zeeman effect. The application allowed us to predict macroscopic properties of materials, such as magnetic, spectral, and calorimetric properties, as a result of the physical properties of their fine electronic structure. We emphasize the importance of the symmetry of the charge surroundings of the atom/ion, spin-orbit interactions (spin-orbit coupling), and the use of complex number matrices in the definition of the Hamiltonian. The calculation methods, algorithms, and convention recalculation tools collected in ATOMIC MATTERS were chosen to permit the prediction of the magnetic and spectral properties of materials in isostructural series.
Keywords: atomic matters, crystal electric field (CEF), spin-orbit coupling, localized states, electron subshell, fine electronic structure
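For readers unfamiliar with the numerical core of such calculations, the sketch below shows the generic step they rest on: diagonalizing a complex Hermitian Hamiltonian (CEF plus Zeeman terms) to obtain fine-structure energy levels. The 3x3 matrices are placeholders and bear no relation to ATOMIC MATTERS' actual matrix elements.

```python
# Generic illustration (not ATOMIC MATTERS itself): diagonalize a small complex
# Hermitian Hamiltonian built from placeholder CEF and Zeeman terms.
import numpy as np

H_cef = np.array([[0.0, 0.1j, 0.0],
                  [-0.1j, 0.2, 0.05],
                  [0.0, 0.05, 0.4]], dtype=complex)       # placeholder CEF terms (eV)
H_zeeman = np.diag([-0.02, 0.0, 0.02]).astype(complex)    # placeholder Zeeman shifts (eV)

H = H_cef + H_zeeman
energies, states = np.linalg.eigh(H)    # eigh handles Hermitian complex matrices
print("energy levels (eV):", energies.round(4))
```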
Procedia PDF Downloads 319
851 Micro Celebrities in Social Media Instagram and Their Personal Influence in Business Perspective
Authors: Yoga Maulana Putra, Herry Hudrasyah
Abstract:
The Internet has now become an important part of human life; it can be accessed through a computer or even a smartphone almost anywhere and anytime. The Internet has given rise to many social media platforms, such as Facebook, Twitter, and Instagram. Instagram was acquired by Facebook in 2012 and has been growing fast since then; it is now transforming from a photo-sharing social medium into a business tool. As a result, some new behaviors have been discovered: some Instagram users are becoming popular. These people, also called minor celebrities, are used as marketing tools by many companies to influence consumers or promote their products or services. These minor celebrities exist because of their behavior in using Instagram. Companies use the personal influence of minor celebrities to promote their products or services, and the minor celebrities are paid according to their rate card, which is based on their followers and insights. This research uses a qualitative method: interviews were conducted with six minor celebrities from different categories, such as photography, travel blogging, lifestyle, food blogging, fashion, and healthcare. The theory of reasoned action is used as the grounding theory to discover the reasons for their behavior, and personal influence is used to describe the way they influence people. The interviews show that most of the minor celebrities were influenced by their circle of friends in the process of using Instagram, and that they each had a different way of using their personal influence to affect their followers when employed by a company.
Keywords: humanities and social sciences, Instagram, minor celebrity, social media
Procedia PDF Downloads 166
850 Unstructured-Data Content Search Based on Optimized EEG Signal Processing and Multi-Objective Feature Extraction
Authors: Qais M. Yousef, Yasmeen A. Alshaer
Abstract:
Over the last few years, the amount of data available around the globe has increased rapidly. This has come with the emergence of recent concepts, such as big data and the Internet of Things, which have furnished a suitable solution for the availability of data all over the world. However, managing this massive amount of data remains a challenge due to the large variety of types and distributions. Therefore, locating the required file, particularly on the first attempt, is not an easy task, owing to the large similarity of names among different files distributed on the web. Consequently, the accuracy and speed of search have been negatively affected. This work presents a method that uses electroencephalography (EEG) signals to locate files based on their contents. Building on the concept of natural mind-wave processing, this work analyzes the mind-wave signals of different people, extracting their most appropriate features using a multi-objective metaheuristic algorithm and then classifying them using an artificial neural network to distinguish among files with similar names. The aim of this work is to provide the ability to find files based on their contents using human thoughts only. Implementing this approach and testing it on real people proved its ability to find the desired files accurately within a noticeably shorter time and to retrieve them as the first choice for the user.
Keywords: artificial intelligence, data contents search, human active memory, mind wave, multi-objective optimization
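The sketch below illustrates only the feature-extraction and neural-network classification stage, on synthetic signals; the multi-objective metaheuristic feature selection and the real EEG recordings used in the work are not reproduced.

```python
# Toy sketch: crude features from synthetic "brain wave" signals, classified by an MLP.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def features(signal):
    """Toy hand-crafted features: signal variance and dominant FFT frequency bin."""
    spectrum = np.abs(np.fft.rfft(signal))
    return [signal.var(), float(np.argmax(spectrum[1:]) + 1)]

# two synthetic "thought" classes with different dominant frequencies (8 Hz vs 13 Hz)
t = np.linspace(0.0, 1.0, 128, endpoint=False)
X = np.array([features(np.sin(2 * np.pi * f * t) + rng.normal(0, 0.3, t.size))
              for f in [8] * 20 + [13] * 20])
y = np.array([0] * 20 + [1] * 20)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy on the toy data:", clf.score(X, y))
```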
Procedia PDF Downloads 175
849 Routing Medical Images with Tabu Search and Simulated Annealing: A Study on Quality of Service
Authors: Mejía M. Paula, Ramírez L. Leonardo, Puerta A. Gabriel
Abstract:
In telemedicine, the image repository service is important for increasing the accuracy of the diagnostic support given to medical personnel. This study compares two routing algorithms with regard to quality of service (QoS), in order to analyze their performance when uploading and/or downloading medical images. The study focuses on comparing the performance of Tabu Search with other heuristic and metaheuristic algorithms that improve QoS in telemedicine services in Colombia. The Tabu Search and Simulated Annealing heuristic algorithms were chosen for their high usability in this type of application; QoS is measured taking into account the following metrics: delay, throughput, jitter, and latency. In addition, routing tests were carried out on ten 40 MB images in Digital Imaging and Communications in Medicine (DICOM) format. These tests were carried out for ten minutes under different traffic conditions, reaching a total of 25 tests, from a server at Universidad Militar Nueva Granada (UMNG) in Bogotá, Colombia, to a remote user at Universidad de Santiago de Chile (USACH), Chile. The results show that Tabu Search presents better QoS performance than Simulated Annealing, managing to optimize the routing of medical images, a basic requirement for offering diagnostic imaging services in telemedicine.
Keywords: medical image, QoS, simulated annealing, Tabu search, telemedicine
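As a toy illustration of the Tabu Search idea applied to route selection by QoS cost (not the study's implementation, and with hypothetical routes and weights), see the Python sketch below.

```python
# Toy Tabu Search: pick the neighbouring route with the lowest weighted QoS cost
# while keeping recently visited routes in a tabu list. Routes and weights are made up.
from collections import deque

# hypothetical candidate routes with (delay ms, jitter ms, throughput Mbps)
routes = {"A": (120, 12, 40), "B": (90, 20, 35), "C": (150, 8, 55), "D": (100, 10, 45)}

def cost(r):
    delay, jitter, throughput = routes[r]
    return 0.5 * delay + 0.3 * jitter - 0.2 * throughput   # lower is better

def tabu_search(start, iterations=10, tabu_size=2):
    current = best = start
    tabu = deque([start], maxlen=tabu_size)
    for _ in range(iterations):
        candidates = [r for r in routes if r not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)   # best non-tabu neighbour
        tabu.append(current)
        if cost(current) < cost(best):
            best = current
    return best

print("selected route:", tabu_search("A"))
```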
Procedia PDF Downloads 219
848 Proposed Framework based on Classification of Vertical Handover Decision Strategies in Heterogeneous Wireless Networks
Authors: Shidrokh Goudarzi, Wan Haslina Hassan
Abstract:
Heterogeneous wireless networks are converging towards an all-IP network as part of the so-called next-generation network. In this paradigm, different access technologies need to be interconnected; thus, vertical handovers, or vertical handoffs, are necessary for seamless mobility. In this paper, we review existing vertical handover decision-making mechanisms that aim to provide ubiquitous connectivity to mobile users. To offer a systematic comparison, we categorize these vertical handover measurement and decision structures based on their respective methodologies and parameters. Subsequently, we analyze several vertical handover approaches in the literature and compare them according to their advantages and weaknesses. The paper compares the algorithms based on their network selection methods, the complexity of the technologies used, and their efficiency, in order to introduce our vertical handover decision framework. We find that vertical handovers in heterogeneous wireless networks suffer from the lack of a standard and efficient method to satisfy both user and network quality-of-service requirements at different levels, including the architectural, decision-making, and protocol levels. Also, the consolidation of the network terminal, cross-layer information, multi-packet casting, and an intelligent network selection algorithm appears to be an optimal solution for achieving seamless service continuity and connectivity.
Keywords: heterogeneous wireless networks, vertical handovers, vertical handover metric, decision-making algorithms
Procedia PDF Downloads 393
847 On Exploring Search Heuristics for improving the efficiency in Web Information Extraction
Authors: Patricia Jiménez, Rafael Corchuelo
Abstract:
Nowadays, the World Wide Web is the most popular source of information, relying on billions of online documents. Web mining is used to crawl through these documents, collect the information of interest, and process it by applying data mining tools in order to use the gathered information in the best interest of a business, which enables companies to promote themselves. Unfortunately, it is not easy to extract the information a web site provides automatically when it lacks an API for transforming the user-friendly data provided in web documents into a structured, machine-readable format. Rule-based information extractors are tools intended to extract the information of interest automatically and offer it in a structured format that allows mining tools to process it. However, the performance of an information extractor strongly depends on the search heuristic employed, since bad choices regarding how to learn a rule may easily result in a loss of effectiveness and/or efficiency. Improving the efficiency of search heuristics is of utmost importance in the field of Web Information Extraction, since typical datasets are very large. In this paper, we employ an information extractor based on a classical top-down algorithm that uses the so-called Information Gain heuristic introduced by Quinlan and Cameron-Jones. Unfortunately, Information Gain suffers from some well-known problems, so we analyse an intuitive alternative, Termini, which is clearly more efficient; we also analyse other proposals in the literature and conclude that none of them outperforms this alternative.
Keywords: information extraction, search heuristics, semi-structured documents, web mining
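For reference, the FOIL-style information gain attributed to Quinlan and Cameron-Jones is commonly stated as in the sketch below; the exact formulation used in the paper may differ.

```python
# Sketch of the FOIL-style information gain heuristic, as commonly stated in the
# rule-learning literature; the paper's exact formulation may differ.
import math

def foil_gain(p0, n0, p1, n1):
    """Gain of specialising a rule: p0/n0 = positives/negatives covered before,
    p1/n1 = covered after. Returns t * (I_before - I_after), with t taken as p1."""
    if p1 == 0:
        return 0.0
    info_before = -math.log2(p0 / (p0 + n0))
    info_after = -math.log2(p1 / (p1 + n1))
    return p1 * (info_before - info_after)

# e.g. a candidate condition that keeps 40 of 50 positives but drops 180 of 200 negatives
print(foil_gain(p0=50, n0=200, p1=40, n1=20))
```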
Procedia PDF Downloads 335
846 Entertainment-Education for the Prevention & Intervention of Eating Disorders in Adolescents
Authors: Tracey Lion-Cachet
Abstract:
Eating disorders typically manifest in adolescence and are notoriously difficult to treat, for two notable reasons. Firstly, research consistently demonstrates that early intervention is a critical mediator of prognosis, with early intervention leading to a better prognosis. However, because eating disorders do not originate as full-syndrome diagnoses but rather as prodromal cases, they often go undetected; by the time symptoms meet diagnostic criteria, they have become recalcitrant. Another interrelated issue is motivation to change. Research demonstrates that in the early stages of an eating disorder, adolescents are highly resistant to change, and motivation increases only once symptoms have shifted from egosyntonic to egodystonic in nature. The purpose of this project was to design a prevention model based on the social psychology paradigm of Entertainment-Education, which embeds messages within the genre of film as a means of effecting change. The resulting project was a narrative screenplay targeting teenagers and young adults from diverse backgrounds. The goals of the project were to create a film script that, if ultimately made into a film, could serve to: 1) interrupt symptom progression and improve prognosis through early intervention; 2) incorporate techniques from third-wave cognitive behavioral treatment models, acceptance and commitment therapy (ACT) and rational recovery (RR), with a focus on the effects of mindfulness as a means of informing recovery; and 3) target issues to do with motivation to change by shifting the perception of eating disorders from culturally specific psychiatric illnesses to habit-based brain wiring issues. Nine licensed clinicians were asked to evaluate two excerpts taken from the final script. They subsequently provided feedback on a Likert scale, which assessed whether the script had achieved its goals. Overall, the evaluators agreed that the project's etiological and intervention models have the potential to inspire change and serve as an effective means of prevention and treatment of eating disorders. However, one-third of the evaluators did not find the content developmentally appropriate. This is a notable limitation of the study and will need to be addressed in the larger script before the final project can be targeted to a teenage and young adult audience.
Keywords: adolescents, eating disorders, pediatrics, entertainment-education, mindfulness-based intervention, prevention
Procedia PDF Downloads 91
845 Poli4SDG: An Application for Environmental Crises Management and Gender Support
Authors: Angelica S. Valeriani, Lorenzo Biasiolo
Abstract:
In recent years, the scale of the impact of climate change and its related side effects has become ever more massive and devastating. The Sustainable Development Goals (SDGs), promoted by the United Nations, aim to confront issues related to climate change, among others. In particular, the project CROWD4SDG focuses on a subset of SDGs, since it promotes environmental activities and addresses climate-related issues. In this context, we developed a prototype of an application, now under advanced development with respect to web design, that focuses on SDG 13 (the SDG on climate action) by providing users with useful instruments to face environmental crises and climate-related disasters. Our prototype is designed and structured for both web and mobile development. The main goal of the application, POLI4SDG, is to help users get through to emergency services. To this end, an organized overview and classification prove to be very effective and helpful to people in need. A careful analysis of data related to environmental crises prompted us to integrate user contributions, exploiting a core principle of citizen science, into the realization of a public catalog, available for consultation and organized according to typology and specific features. In addition, gender equality and opportunity features are considered in the prototype in order to allow women, often the most vulnerable category, to have direct support. The overall description of the application's functionalities is detailed, and the implementation features and properties of the prototype are discussed.
Keywords: crowdsourcing, social media, SDG, climate change, natural disasters, gender equality
Procedia PDF Downloads 110
844 Challenges and Opportunities of Utilization of Social Media by Business Education Students in Nigeria Universities
Authors: Titus Amodu Umoru
Abstract:
Today's global economy is highly sophisticated, and all over the world business and marketing practices are undergoing an unprecedented transformation. In realization of this fact, the federal government of Nigeria has put in place a robust transformation agenda in order to put Nigeria in a better position to be a competitive player and, in the process, transform all sectors of its economy. New technologies, especially the internet, are the driving force behind this transformation. However, technology has inadvertently affected the way business is done, thus necessitating the acquisition of new skills. In developing countries like Nigeria, citizens are still battling with the effective application of these technologies. Obviously, students of business education need to acquire relevant business knowledge to be able to transition into the world of work on graduation and compete favourably in the labour market. Effective utilization of social media by both teachers and students can therefore help extensively in empowering students with the needed skills. Social media, described as a group of internet-based applications that build on the ideological foundations of Web 2.0 and allow the creation and exchange of user-generated content, may, if incorporated into the classroom experience, be the needed answer to unemployment and poverty in Nigeria, as beneficiaries can easily connect with existing and potential enterprises and customers, engage with them, and reinforce mutual business benefits. The challenges and benefits of social media use in education in Nigerian universities are revealed in this study.
Keywords: business education, challenges, opportunities, utilization, social media
Procedia PDF Downloads 415
843 talk2all: A Revolutionary Tool for International Medical Tourism
Authors: Madhukar Kasarla, Sumit Fogla, Kiran Panuganti, Gaurav Jain, Abhijit Ramanujam, Astha Jain, Shashank Kraleti, Sharat Musham, Arun Chaudhury
Abstract:
Patients have often chosen to travel for care, making pilgrimages to academic meccas and state-of-the-art hospitals for sophisticated surgery. This culture is still persistent in the landscape of US healthcare, with hundreds of thousands of visitors coming to the shores of the United States to seek high-quality medical care. One of the major challenges in this form of medical tourism has been the language barrier. Thus, an Iraqi patient with an immediate need to communicate healthcare needs to the treating team in the hospital may face a huge barrier to effective patient-doctor communication, delaying care and at times reducing its quality. To circumvent these challenges, we propose the use of a state-of-the-art tool, Talk2All, which can translate nearly one hundred international languages (and even sign language) in real time. The tool is an easy-to-download, highly user-friendly app that builds on machine learning principles to decode different languages in real time. We suggest that the use of Talk2All will tremendously enhance communication in the hospital setting, effectively breaking the language barrier, and we propose that vigorous incorporation of Talk2All will overcome practical challenges in international medical and surgical tourism.
Keywords: language translation, communication, machine learning, medical tourism
Procedia PDF Downloads 214
842 D3Advert: Data-Driven Decision Making for Ad Personalization through Personality Analysis Using BiLSTM Network
Authors: Sandesh Achar
Abstract:
Personalized advertising holds greater potential for higher conversion rates than generic advertisements. However, its widespread application in the retail industry faces challenges due to complex implementation processes, and these complexities impede the swift adoption of personalized advertising on a large scale. Personalized advertising, being a data-driven approach, necessitates consumer-related data, adding to its complexity. This paper introduces an innovative data-driven decision-making framework, D3Advert, which personalizes advertisements by analyzing personalities using a BiLSTM network. The framework utilizes the Myers–Briggs Type Indicator (MBTI) dataset for development. The employed BiLSTM network, specifically designed and optimized for D3Advert, classifies user personalities into one of the sixteen MBTI categories based on their social media posts. The classification accuracy is 86.42%, with precision, recall, and F1-score values of 85.11%, 84.14%, and 83.89%, respectively. The D3Advert framework personalizes advertisements based on these personality classifications. Experimental implementation and performance analysis of D3Advert demonstrate a 40% improvement in impressions. D3Advert’s innovative and straightforward approach has the potential to transform personalized advertising and foster its widespread adoption in marketing.
Keywords: personalized advertisement, deep learning, MBTI dataset, BiLSTM network, NLP
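A BiLSTM text classifier of the kind described can be sketched in a few lines; the architecture and hyperparameters below are illustrative assumptions, not the exact network reported for D3Advert.

```python
# Illustrative BiLSTM classifier for the 16 MBTI classes (sizes are assumptions).
import tensorflow as tf

VOCAB_SIZE, MAX_LEN, NUM_CLASSES = 20000, 200, 16

model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAX_LEN,)),
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(X_train, y_train, validation_split=0.1, epochs=5)  # X_train: padded token ids
```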
Procedia PDF Downloads 44
841 A Novel Approach to Design and Implement Context Aware Mobile Phone
Authors: G. S. Thyagaraju, U. P. Kulkarni
Abstract:
Context-aware computing refers to a general class of computing systems that can sense their physical environment and adapt their behaviour accordingly. Context-aware computing makes systems aware of situations of interest, enhances services to users, automates systems, and personalizes applications. Context-aware services have been introduced into mobile devices such as PDAs and mobile phones. In this paper, we present a novel approach used to realize a context-aware mobile phone. The context-aware mobile phone (CAMP) proposed in this paper senses the user's situation automatically and provides the services required by the user's context. The proposed system is developed using artificial intelligence techniques such as Bayesian networks, fuzzy logic, and a rough set theory based decision table: a Bayesian network to classify incoming calls (high-priority, low-priority, and unknown calls), fuzzy linguistic variables and membership degrees to define the context situations, and decision table based rules for service recommendation. To exemplify and demonstrate the effectiveness of the proposed methods, the context-aware mobile phone is tested in a college campus scenario including different locations such as the library, classroom, meeting room, administrative building, and college canteen.
Keywords: context aware mobile, fuzzy logic, decision table, Bayesian probability
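The fuzzy-context idea can be illustrated with a toy example: a triangular membership function for a context such as "in a lecture", combined with a decision-table-style rule. The sketch below is illustrative only; the thresholds, contexts, and rule are assumptions, not the paper's system.

```python
# Toy sketch of fuzzy context plus a decision-table style rule (not the paper's system).
def triangular(x, a, b, c):
    """Membership degree of x in a triangular fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def recommend_profile(minutes_into_timetabled_class, call_priority):
    in_lecture = triangular(minutes_into_timetabled_class, 0, 30, 60)
    if in_lecture > 0.5 and call_priority != "high":
        return "silent"      # rule: suppress low-priority calls during a lecture
    return "ring"

print(recommend_profile(25, "low"))    # silent
print(recommend_profile(25, "high"))   # ring
```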
Procedia PDF Downloads 365
840 Comparative Study between Inertial Navigation System and GPS in Flight Management System Application
Authors: Othman Maklouf, Matouk Elamari, M. Rgeai, Fateh Alej
Abstract:
In modern avionics, the fundamental component is the flight management system (FMS). An FMS is a specialized computer system that automates a wide variety of in-flight tasks, reducing the workload on the flight crew to the point that modern civilian aircraft no longer carry flight engineers or navigators. The main function of the FMS is in-flight management of the flight plan, using various sensors such as the Global Positioning System (GPS) and the Inertial Navigation System (INS) to determine the aircraft's position and guide the aircraft along the flight plan. GPS is a satellite-based navigation system, while an INS generally consists of inertial sensors (accelerometers and gyroscopes). GPS is used to locate positions anywhere on Earth; it consists of satellites, control stations, and receivers. GPS receivers take the information transmitted from the satellites and use triangulation to calculate a user's exact location. The basic principle of an INS is the integration of the accelerations observed by the accelerometers on board the moving platform; the system accomplishes this task through appropriate processing of the data obtained from the specific force and angular velocity measurements. Thus, an appropriately initialized inertial navigation system is capable of continuous determination of vehicle position, velocity, and attitude without the use of external information. The main objective of this article is to present a comparative study of the two systems under different conditions and scenarios using MATLAB with Simulink.
Keywords: flight management system, GPS, IMU, inertial navigation system
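The INS dead-reckoning principle described above can be illustrated with a simplified one-dimensional example. The study used MATLAB/Simulink; the sketch below is a Python stand-in with made-up signals, and it omits attitude, gravity, and bias handling.

```python
# Simplified 1-D illustration of INS dead reckoning: integrate measured acceleration
# twice to track velocity and position. Signals are synthetic placeholders.
import numpy as np

dt = 0.01                                   # 100 Hz IMU samples
t = np.arange(0.0, 10.0, dt)
true_accel = 0.2 * np.sin(0.5 * t)          # hypothetical manoeuvre (m/s^2)
measured = true_accel + np.random.default_rng(1).normal(0, 0.01, t.size)   # noisy sensor

velocity = np.cumsum(measured) * dt         # first integration
position = np.cumsum(velocity) * dt         # second integration

print(f"position after {t[-1] + dt:.0f} s: {position[-1]:.2f} m")
# Small sensor errors accumulate through the double integration, which is why an
# INS drifts over time and is usually combined with GPS inside the FMS.
```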
Procedia PDF Downloads 299
839 Pre-Operative Psychological Factors Significantly Add to the Predictability of Chronic Narcotic Use: A Two Year Prospective Study
Authors: Dana El-Mughayyar, Neil Manson, Erin Bigney, Eden Richardson, Dean Tripp, Edward Abraham
Abstract:
The use of narcotics to treat pain has increased over the past two decades and is a contributing factor to the current public health crisis. Understanding the pre-operative risks of chronic narcotic use may be aided by the investigation of psychological measures. The objective of the reported study is to determine predictors, including an array of psychological factors, of narcotic use two years post-surgery in a thoracolumbar spine surgery population. A prospective observational study of 191 consecutively enrolled adult patients who underwent thoracolumbar spine surgery is presented. Baseline measures of interest included the Pain Catastrophizing Scale (PCS), the Tampa Scale for Kinesiophobia, the Multidimensional Scale of Perceived Social Support (MSPSS), the Chronic Pain Acceptance Questionnaire (CPAQ-8), the Oswestry Disability Index (ODI), Numeric Rating Scales for back and leg pain (NRS-B/L), the SF-12 Mental Component Summary (MCS), narcotic use, and demographic variables. The post-operative measure of interest is narcotic use at 2-year follow-up, collapsed into binary categories of use and no use. Descriptive statistics were run; chi-square analysis was used for categorical variables and ANOVA for continuous variables. Significant variables were built into a hierarchical logistic regression to determine predictors of post-operative narcotic use. Significance was set at α < 0.05. Results: A total of 27.23% of the sample were using narcotics two years after surgery. The regression model included ODI, NRS-Leg, time with condition, chief complaint, pre-operative drug use, gender, MCS, the PCS helplessness subscale, and the CPAQ pain willingness subscale, and was significant, χ²(13, N = 191) = 54.99, p < .001. The model accounted for 39.6% of the variance in narcotic use and correctly predicted narcotic use status in 79.7% of cases. Psychological variables accounted for 9.6% of the variance over and above the other predictors. Conclusions: Managing chronic narcotic usage is central to the patient's overall health and quality of life. Psychological factors in the pre-operative period are significant predictors of narcotic use 2 years post-operatively. These psychological variables are malleable, potentially allowing surgeons to direct their patients to preventative resources prior to surgery.
Keywords: narcotics, psychological factors, quality of life, spine surgery
Procedia PDF Downloads 144
838 Video Object Segmentation for Automatic Image Annotation of Ethernet Connectors with Environment Mapping and 3D Projection
Authors: Marrone Silverio Melo Dantas, Pedro Henrique Dreyer, Gabriel Fonseca Reis de Souza, Daniel Bezerra, Ricardo Souza, Silvia Lins, Judith Kelner, Djamel Fawzi Hadj Sadok
Abstract:
The creation of a dataset is time-consuming and often discourages researchers from pursuing their goals. To overcome this problem, we present and discuss two solutions adopted for the automation of this process. Both optimize valuable user time and resources and support video object segmentation with object tracking and 3D projection. In our scenario, we acquire images from a moving robotic arm and, for each approach, generate a distinct annotated dataset. We evaluated the precision of the annotations by comparing them with a manually annotated dataset, as well as their efficiency in the context of detection and classification problems. For detection support, we used YOLO and obtained, for the projection dataset, F1-score, accuracy, and mAP values of 0.846, 0.924, and 0.875, respectively. Concerning the tracking dataset, we achieved an F1-score of 0.861 and an accuracy of 0.932, whereas mAP reached 0.894. In order to evaluate the quality of the annotated images used for classification problems, we employed deep learning architectures, adopting accuracy and F1-score as metrics for VGG, DenseNet, MobileNet, Inception, and ResNet. The VGG architecture outperformed the others for both the projection and tracking datasets, reaching an accuracy of 0.997 and an F1-score of 0.993 on the projection dataset and, similarly, an accuracy of 0.991 and an F1-score of 0.981 on the tracking dataset.
Keywords: RJ45, automatic annotation, object tracking, 3D projection
Procedia PDF Downloads 167
837 Deep Learning Application for Object Image Recognition and Robot Automatic Grasping
Authors: Shiuh-Jer Huang, Chen-Zon Yan, C. K. Huang, Chun-Chien Ting
Abstract:
Since vision systems are in intense demand for autonomous applications in industrial environments, image recognition has become an important research topic. Here, a deep learning algorithm is employed in the vision system to recognize industrial objects, and it is integrated with a 7A6 series manipulator for an automatic object-gripping task. A PC and a graphics processing unit (GPU) are chosen to construct the 3D vision recognition system. A depth camera (Intel RealSense SR300) is employed to capture the images for object recognition and coordinate derivation. The YOLOv2 scheme is adopted as the convolutional neural network (CNN) structure for object classification and center-point prediction. Additionally, an image processing strategy is used to find the object contour for calculating the object orientation angle. The specified object location and orientation information are then sent to the robotic controller. Finally, the six-axis manipulator can grasp the specified object in a random environment based on the user command and the extracted image information. The experimental results show that YOLOv2 has been successfully employed to detect the object location and category with a confidence near 0.9 and a 3D position error of less than 0.4 mm. This is useful for future intelligent robotic applications in an Industry 4.0 environment.
Keywords: deep learning, image processing, convolution neural network, YOLOv2, 7A6 series manipulator
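The contour-based orientation step can be illustrated with OpenCV's minimum-area rectangle on a synthetic mask; the sketch below is illustrative only and does not reproduce the authors' processing of real depth-camera frames.

```python
# Minimal sketch of the contour-based orientation step, on a synthetic binary mask
# instead of a real depth-camera frame.
import cv2
import numpy as np

mask = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(mask, (60, 80), (150, 110), 255, -1)          # fake segmented object
M = cv2.getRotationMatrix2D((100, 100), 25, 1.0)
mask = cv2.warpAffine(mask, M, (200, 200))                  # rotate it by 25 degrees

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)
(cx, cy), (w, h), angle = cv2.minAreaRect(largest)          # center, size, rotation
print(f"grasp center ~ ({cx:.1f}, {cy:.1f}), orientation ~ {angle:.1f} deg")
```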
Procedia PDF Downloads 250
836 Effective Nutrition Label Use on Smartphones
Authors: Vladimir Kulyukin, Tanwir Zaman, Sarat Kiran Andhavarapu
Abstract:
Research on nutrition label use identifies four factors that impede the comprehension and retention of nutrition information by consumers: the label's location on the package, the presentation of information within the label, the label's surface size, and surrounding visual clutter. In this paper, a system is presented that makes nutrition label use more effective for nutrition information comprehension and retention. The system's front end is a smartphone application, and its back end is a four-node Linux cluster for image recognition and data storage. Image frames captured on the smartphone are sent to the back end for skewed or aligned barcode recognition. When barcodes are recognized, the corresponding nutrition labels are retrieved from a cloud database and presented to the user on the smartphone's touchscreen. Each displayed nutrition label is positioned centrally on the touchscreen with no surrounding visual clutter. Wikipedia links to important nutrition terms are embedded to improve comprehension and retention of nutrition information. Standard touch gestures (e.g., zoom in/out) available on mainstream smartphones are used to manipulate the label's surface size. The nutrition label database currently includes 200,000 nutrition labels compiled from public web sites by a custom crawler. Stress test experiments with the node cluster are presented, and implications for proactive nutrition management and food policy are discussed.
Keywords: mobile computing, cloud computing, nutrition label use, nutrition management, barcode scanning
Procedia PDF Downloads 373
835 Using Computational Fluid Dynamics (CFD) Modeling to Predict the Impact of Nuclear Reactor Mixed Tank Flows Using the Momentum Equation
Authors: Joseph Amponsah
Abstract:
This research proposes an equation to predict and determine the momentum source term after factoring in the radial friction between the fluid and the blades and the impeller's propulsive power. The research aims to examine how CFD software can be used to predict the effect of flows in nuclear reactor stirred tanks through a momentum source equation, as well as the concentration distribution of tracers introduced into the reactor tanks. The estimated findings, including the dimensionless concentration curves, power and pumping numbers, dimensionless velocity profiles, and mixing times, were contrasted with results from tests in stirred containers. The investigation was carried out, in Part I, for vessels agitated by one impeller on a central shaft. The two types of impellers employed were an ordinary Rushton turbine and a 6-bladed 45° pitched-blade turbine. The simulations made use of multiple reference frame (MRF) techniques and the common k-ε turbulence model. The impact of the grid type was also examined; unstructured, structured, and unique user-defined grids were considered. The CFD model was used to simulate the flow field within the Rushton turbine nuclear reactor stirred tank, and this method was validated using experimental data available close to the impeller tip and in the bulk region. Additionally, analyses of the computational efficiency and time using MRF and SM were carried out.
Keywords: ANSYS Fluent, momentum equation, CFD, prediction
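For context, the general conservative form of the momentum equation into which such an impeller model is added as a source term can be written as below (a textbook form; the abstract does not give the authors' specific expression for the source term).

```latex
% General momentum equation with an added impeller source term S_i (textbook form);
% the paper's own expression for S_i is not reproduced in the abstract.
\frac{\partial (\rho u_i)}{\partial t}
+ \frac{\partial (\rho u_i u_j)}{\partial x_j}
= -\frac{\partial p}{\partial x_i}
+ \frac{\partial}{\partial x_j}\left[\mu_{\mathrm{eff}}
  \left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right)\right]
+ \rho g_i + S_i
```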
Procedia PDF Downloads 79
834 Humanistic Psychology Workshop to Increase Psychological Well-Being
Authors: Nidia Thalia Alva Rangel, Ferran Padros Blazquez, Ma. Ines Gomez Del Campo Del Paso
Abstract:
Happiness has been a concept of interest around the world since antiquity. Positive psychology is the science that has begun to study happiness in a more precise and controlled way, producing a wide body of research that can be applied. One of the central constructs of positive psychology is Carol Ryff's model of psychological well-being as eudaimonic happiness, which comprises six dimensions: autonomy, environmental mastery, personal growth, positive relations with others, purpose in life, and self-acceptance. Humanistic psychology is a clear precedent of positive psychology; it has studied human development topics and features a great variety of intervention techniques, yet has little evidence from controlled research. Therefore, the present research aimed to evaluate the efficacy of a humanistic intervention program to increase psychological well-being in healthy adults through a mixed methods study. Before and after the intervention, Carol Ryff's Psychological Well-Being Scale (PWBS) and the Symptom Checklist 90 were applied as pretest and posttest. The intervention program was designed in an experiential workshop format based on the foundational attitudes defined by Carl Rogers (congruence, unconditional positive regard, and empathy), integrating humanistic intervention strategies from gestalt therapy, psychodrama, logotherapy, and psychological body therapy, with the aim of strengthening skills in the six dimensions of the psychological well-being model. The workshop was applied to six volunteer adults in 12 sessions of 2 hours each. Finally, the quantitative data were analyzed with the Wilcoxon test in SPSS, obtaining statistically significant differences in pathology symptoms between pretest and posttest, as well as increased levels in the dimensions of psychological well-being; for the qualitative strand, the open questionnaires showed how the participants experienced the techniques and changed through the sessions. Thus, the humanistic psychology program was effective in increasing psychological well-being. Working to promote well-being appears to be an effective way to reduce pathological symptoms as a secondary gain. Experiential workshops are a useful tool for small groups. There is a need for research to gather more evidence on humanistic psychology interventions in different contexts and to promote the application of positive psychology knowledge.
Keywords: happiness, humanistic psychology, positive psychology, psychological well-being, workshop
Procedia PDF Downloads 416
833 Application to Monitor the Citizens for Corona and Get Medical Aids or Assistance from Hospitals
Authors: Vathsala Kaluarachchi, Oshani Wimalarathna, Charith Vandebona, Gayani Chandrarathna, Lakmal Rupasinghe, Windhya Rankothge
Abstract:
It is the fundamental function of a monitoring system to allow users to collect and process data. A worldwide threat, the corona outbreak has wreaked havoc in Sri Lanka, and the situation has gotten out of hand. Since the epidemic, the Sri Lankan government has been unable to establish a systematic system for monitoring corona patients and providing emergency care in the event of an outbreak. Most patients have been kept at home because of the high number of patients reported in the nation, but they do not yet have access to a functioning medical system. This has resulted in an increase in the number of patients left untreated because of a lack of medical care. According to our survey, the absence of competent medical monitoring is currently the biggest cause of mortality for many people. As a result, a smartphone app for analyzing the patient's state and determining whether they should be hospitalized will be developed. Using the data supplied, we aim to send an alert letter or SMS to the hospital once the system recognizes a patient in need. Since we know what those patients need and when they need it, we will set up a desktop program at the hospital to monitor their progress. Deep learning, image processing and application development, natural language processing, and blockchain management are some of the components of the research solution. The purpose of this research paper is to introduce a mechanism to connect hospitals and patients even when they are physically apart. Furthermore, data security and user-friendliness are enhanced through blockchain and NLP.
Keywords: blockchain, deep learning, NLP, monitoring system
Procedia PDF Downloads 133