Search results for: user interface
2963 Telemedicine App Powered by AI
Authors: Cotran Mabeya
Abstract:
This study focuses on an artificially intelligent telemedicine application that aims to improve access to healthcare services, especially for people living in remote and underserved areas. The app combines advanced AI technologies (symptom checkers and virtual consultations) with health data integration to deliver efficient, user-friendly remote health support. Its main features are AI-based diagnostics, real-time health monitoring through wearables, and an intuitive interface. The telemedicine application addresses several healthcare problems, including limited access in remote areas, high costs, lengthy wait times for certain services, and difficulty in obtaining second opinions. By simplifying remote consultation, the application removes geographic and financial barriers to affordable and timely medical care. In addition, by centralizing patient records and communication between healthcare providers, it supports continuity of care and eases transitions in treatment. The study adopted a mixed-methods design, incorporating both quantitative and qualitative approaches to evaluate the socio-economic impacts of artificial intelligence and telemedicine on patients in Nairobi County. Adults made up the target population, and informants and respondents were categorized into patients, healthcare providers, and specialists in law, IT, and AI. Stratified and simple random sampling techniques were used to ensure inclusive representation and to enhance accuracy and triangulation of the collected data. The study offers several recommendations, including regularly updating AI symptom checkers to maintain accuracy, strengthening data security through encryption and multi-factor authentication, and integrating real-time health data from wearables for personalized healthcare.
Keywords: artificial intelligence, virtual consultations, user-friendly, remote areas
Procedia PDF Downloads 2
2962 Component Interface Formalization in Robotic Systems
Authors: Anton Hristozov, Eric Matson, Eric Dietz, Marcus Rogers
Abstract:
Components are heavily used in many software systems, including robotics systems. The growing sophistication and diversity of new capabilities for robotic systems present new challenges to their architectures. Their complexity is growing exponentially with the advent of AI, smart sensors, and the complex tasks they have to accomplish. Such complexity requires a more rigorous approach to the creation, use, and interoperability of software components. The issue is exacerbated because robotic systems are becoming more and more reliant on third-party components for certain functions. In order to achieve this kind of interoperability, including dynamic component replacement, we need a way to standardize their interfaces. A formal approach is urgently needed to specify what the interface of a robotic software component should contain. This study analyzes the issue and presents a universal and generic approach to standardizing component interfaces for robotic systems. Our approach is inspired by well-established robotic architectures such as ROS, PX4, and ArduPilot. The study is also applicable to other software systems that share similar characteristics with robotic systems. We consider the use of JSON or domain-specific language (DSL) development with tools such as ANTLR, together with automatic code and configuration-file generation for frameworks such as ROS and PX4. A case study with ROS 2 is presented as a proof of concept for the proposed methodology.
Keywords: CPS, robots, software architecture, interface, ROS, autopilot
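A minimal sketch of what such a standardized interface description could look like, modeled here as a Python data structure with a simple substitutability check; the `ComponentInterface` and `Port` names, their fields, and the ROS-style topic strings are illustrative assumptions, not the paper's actual DSL.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Port:
    name: str          # e.g. a ROS-style topic, "imu/data"
    msg_type: str      # e.g. "sensor_msgs/Imu"
    rate_hz: float     # expected publication rate

@dataclass
class ComponentInterface:
    component: str
    version: str
    publishes: List[Port] = field(default_factory=list)
    subscribes: List[Port] = field(default_factory=list)

    def replaceable_by(self, other: "ComponentInterface") -> bool:
        """A replacement is valid if it offers at least the same outputs."""
        offered = {(p.name, p.msg_type) for p in other.publishes}
        required = {(p.name, p.msg_type) for p in self.publishes}
        return required <= offered

# Checking whether a third-party IMU driver can replace the current one:
current = ComponentInterface("imu_driver", "1.0",
                             publishes=[Port("imu/data", "sensor_msgs/Imu", 200.0)])
candidate = ComponentInterface("vendor_imu", "0.3",
                               publishes=[Port("imu/data", "sensor_msgs/Imu", 400.0)])
print(current.replaceable_by(candidate))  # True
```

The same structure serializes naturally to JSON, which is one of the carrier formats the abstract considers for such interface specifications.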
Procedia PDF Downloads 92
2961 Device Control Using Brain Computer Interface
Authors: P. Neeraj, Anurag Sharma, Harsukhpreet Singh
Abstract:
In recent years, brain-computer interface (BCI) schemes based on the steady-state visual evoked potential (SSVEP) have earned much consideration. This study develops an SSVEP-based BCI scheme that can switch any device mock-up between two positions, ON and OFF. Two distinct flicker frequencies in the low-frequency band were used to evoke the SSVEPs and were displayed on a liquid crystal display (LCD) screen using LabVIEW. Two stimulus colors, yellow and blue, were used to train the system on SSVEPs. The electroencephalogram (EEG) signals were recorded from the occipital region, and features were extracted using the discrete wavelet transform. A prominent method for multilayer systems, the neural network algorithm (NNA), was used to classify the SSVEP signals. Regression plots obtained while training the network with different algorithms showed that the Levenberg-Marquardt training algorithm achieved an accuracy of 93.9%, superior to the other training algorithms.
Keywords: brain computer interface, electroencephalography, steady-state visual evoked potential, wavelet transform, neural network
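A minimal sketch of the described pipeline (discrete-wavelet features from EEG epochs feeding a multilayer neural-network classifier), using PyWavelets and scikit-learn on synthetic signals; the wavelet choice, epoch length, and the Adam-trained MLP (scikit-learn does not offer Levenberg-Marquardt) are stand-in assumptions.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def dwt_features(epoch, wavelet="db4", level=4):
    """Energy of each wavelet sub-band of one occipital EEG epoch."""
    coeffs = pywt.wavedec(epoch, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

rng = np.random.default_rng(0)
fs = 256
t = np.arange(0, 2, 1 / fs)                      # 2-second epochs
X, y = [], []
for label, f in enumerate([7.0, 9.0]):           # two flicker frequencies
    for _ in range(50):
        epoch = np.sin(2 * np.pi * f * t) + 0.5 * rng.standard_normal(t.size)
        X.append(dwt_features(epoch))
        y.append(label)                          # 0 = "ON" stimulus, 1 = "OFF"

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))
```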
Procedia PDF Downloads 334
2960 Modeling of Global Solar Radiation on a Horizontal Surface Using Artificial Neural Network: A Case Study
Authors: Laidi Maamar, Hanini Salah
Abstract:
The present work investigates the potential of an artificial neural network (ANN) model to predict the horizontal global solar radiation (HGSR). The ANN is developed and optimized using a three-year meteorological database (2011 to 2013) from the meteorological station of Blida (Blida 1 University, Algeria; latitude 36.5°, longitude 2.81°, 163 m above mean sea level). The optimal configuration of the ANN model was determined by minimizing the root mean square error (RMSE) and maximizing the correlation coefficient (R²) between observed data and the ANN predictions. To select the best ANN architecture, we conducted several tests using different combinations of parameters. A two-layer ANN model with six hidden neurons was found to be the optimal topology (RMSE = 4.036 W/m², R² = 0.999). A graphical user interface (GUI) was designed, based on the best network structure and training algorithm, to make the model easier for users to apply.
Keywords: artificial neural network, global solar radiation, solar energy, prediction, Algeria
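A minimal sketch of the reported optimal topology (one hidden layer with six neurons) evaluated with RMSE and R²; the synthetic meteorological predictors are invented placeholders for the Blida station database, which is not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
# hypothetical predictors: day of year, sunshine duration, temperature, humidity
X = np.column_stack([rng.uniform(1, 365, n), rng.uniform(0, 12, n),
                     rng.uniform(-5, 40, n), rng.uniform(10, 100, n)])
y = 200 + 40 * X[:, 1] + 2 * X[:, 2] + rng.normal(0, 5, n)   # toy HGSR in W/m²

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=0),
)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"RMSE = {rmse:.3f} W/m², R² = {r2_score(y_te, pred):.3f}")
```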
Procedia PDF Downloads 499
2959 Uplift Segmentation Approach for Targeting Customers in a Churn Prediction Model
Authors: Shivahari Revathi Venkateswaran
Abstract:
Segmenting customers plays a significant role in churn prediction. It helps the marketing team with proactive and reactive customer retention. For reactive retention, the retention team reaches out to customers who have already shown intent to disconnect by giving them special offers. For proactive retention, the marketing team uses a churn prediction model, which ranks each customer from 1 to 100, where rank 1 indicates the highest risk of churning (high ranks have a high propensity to churn). The churn prediction model is built using an XGBoost model. With churn ranks alone, however, the marketing team can only reach out to customers individually; it is not possible to profile groups of customers or to frame different marketing strategies for targeted groups. For this, customers must be grouped into segments based on their profiles, such as demographics and other non-controllable attributes. This helps the marketing team frame different offer groups for the targeted audience and prevent them from disconnecting (proactive retention). For segmentation, machine learning approaches such as k-means clustering do not form unique segments in which all customers share the same attributes. This paper presents an alternative approach that enumerates all combinations of unique segments that can be formed from the user attributes and then finds the segments that show uplift (a churn rate higher than the baseline churn rate). Search algorithms such as fast search and recursive search are used for this purpose. Further, within each segment, customers can be targeted using their individual churn ranks from the churn prediction model. Finally, a UI (user interface) was developed for the marketing team to interactively search for the meaningful segments that are formed and to target the right audience in future marketing campaigns to prevent them from disconnecting.
Keywords: churn prediction modeling, XGBoost model, uplift segments, proactive marketing, search algorithms, retention, k-means clustering
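A minimal sketch of the segment enumeration this approach relies on: every combination of attribute values is treated as a candidate segment and kept if its churn rate exceeds the baseline. The toy columns, data, and the simple exhaustive loop (standing in for the paper's fast/recursive search algorithms) are illustrative assumptions.

```python
from itertools import combinations
import pandas as pd

# toy customer table; "churned" = 1 if the customer disconnected
df = pd.DataFrame({
    "region":  ["east", "east", "west", "west", "east", "west"],
    "plan":    ["basic", "plus", "basic", "plus", "basic", "basic"],
    "churned": [1, 0, 1, 0, 1, 0],
})
baseline = df["churned"].mean()
attrs = ["region", "plan"]

uplift_segments = []
for r in range(1, len(attrs) + 1):
    for cols in combinations(attrs, r):          # every attribute subset
        stats = df.groupby(list(cols))["churned"].agg(["mean", "size"])
        for key, row in stats.iterrows():
            values = key if isinstance(key, tuple) else (key,)
            if row["mean"] > baseline:           # segment shows uplift
                uplift_segments.append((dict(zip(cols, values)),
                                        row["mean"], int(row["size"])))

for seg, rate, n in sorted(uplift_segments, key=lambda s: -s[1]):
    print(f"{seg}: churn rate {rate:.2f} vs baseline {baseline:.2f} (n={n})")
```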
Procedia PDF Downloads 71
2958 Virtual Reality and Other Real-Time Visualization Technologies for Architecture Energy Certifications
Authors: Román Rodríguez Echegoyen, Fernando Carlos López Hernández, José Manuel López Ujaque
Abstract:
Interactive management of energy certification ratings has remained on the sidelines of the evolution of virtual reality (VR) despite related advances in architecture in other areas such as BIM and real-time working programs. This research studies to what extent VR software can help the stakeholders to better understand energy efficiency parameters in order to obtain reliable ratings assigned to the parts of the building. To evaluate this hypothesis, the methodology has included the construction of a software prototype. Current energy certification systems do not follow an intuitive data entry system; neither do they provide a simple or visual verification of the technical values included in the certification by manufacturers or other users. This software, by means of real-time visualization and a graphical user interface, proposes different improvements to the current energy certification systems that ease the understanding of how the certification parameters work in a building. Furthermore, the difficulty of using current interfaces, which are not friendly or intuitive for the user, means that untrained users usually get a poor idea of the grounds for certification and how the program works. In addition, the proposed software allows users to add further information, such as financial and CO₂ savings, energy efficiency, and an explanatory analysis of results for the least efficient areas of the building through a new visual mode. The software also helps the user to evaluate whether or not an investment to improve the materials of an installation is worth the cost of the different energy certification parameters. The evaluated prototype (named VEE-IS) shows promising results when it comes to representing in a more intuitive and simple manner the energy rating of the different elements of the building. Users can also personalize all the inputs necessary to create a correct certification, such as floor materials, walls, installations, or other important parameters. Working in real-time through VR allows for efficiently comparing, analyzing, and improving the rated elements, as well as the parameters that we must enter to calculate the final certification. The prototype also allows for visualizing the building in efficiency mode, which lets us move over the building to analyze thermal bridges or other energy efficiency data. This research also finds that the visual representation of energy efficiency certifications makes it easy for the stakeholders to examine improvements progressively, which adds value to the different phases of design and sale.
Keywords: energetic certification, virtual reality, augmented reality, sustainability
Procedia PDF Downloads 188
2957 Improving Library Service Quality in Local City of Indonesia
Authors: Prima Fithri, Afri Adnan, Verra Syahmer
Abstract:
A library, as a public service, should provide excellent, high-quality service. The criteria a library should meet include a collection that is relevant, current, and reliable; qualified and professional staff; a prompt and appropriate delivery system; and proper supporting infrastructure. The aim of this study is to assess performance as part of an effort to provide service quality that matches the needs and desires of users. To this end, the research calculates the gap between users' perceptions and expectations of the library's services. The Servqual and QFD methods are used in this study: the Servqual method measures the size of the gap in each service quality dimension, and the QFD method determines the priority improvements needed to raise service quality in those dimensions. From 97 questionnaires, the largest gap found with Servqual is 27.7%, in the responsiveness dimension, which shows how far user expectations are unmet by the quality of existing services. Library construction and library standards emerge from the QFD as the priority improvements needed to raise service quality in that dimension.
Keywords: library, service quality, QFD
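A minimal sketch of the Servqual gap computation (gap = perception minus expectation, averaged per dimension); the Likert scores below are invented, not the study's 97-questionnaire data.

```python
import pandas as pd

responses = pd.DataFrame({
    "dimension":   ["responsiveness", "responsiveness", "reliability", "reliability"],
    "expectation": [4.8, 4.6, 4.4, 4.5],   # 5-point Likert scale
    "perception":  [3.2, 3.5, 4.1, 4.2],
})
responses["gap"] = responses["perception"] - responses["expectation"]
# the most negative mean gap marks the dimension where expectations are least met
print(responses.groupby("dimension")["gap"].mean().sort_values())
```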
Procedia PDF Downloads 579
2956 A Non-Iterative Shape Reconstruction of an Interface from Boundary Measurement
Authors: Mourad Hrizi
Abstract:
In this paper, we study the inverse problem of reconstructing an interior interface D appearing in the elliptic partial differential equation Δu + χ(D)u = 0 from knowledge of boundary measurements. This problem arises from a semiconductor transistor model. We propose a new shape reconstruction procedure based on the Kohn-Vogelius formulation and the topological sensitivity method. The inverse problem is formulated as a topology optimization problem, and a topological sensitivity analysis is derived for a Kohn-Vogelius-type cost function. The unknown subdomain D is reconstructed using a level-set curve of the topological gradient. Finally, we give several examples to show the viability of the proposed method.
Keywords: inverse problem, topological optimization, topological gradient, Kohn-Vogelius formulation
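For orientation, a generic Kohn-Vogelius misfit functional (a standard textbook form stated here under assumptions, not necessarily the paper's exact functional) compares two auxiliary solutions, one fitted to the Dirichlet boundary data and one to the Neumann data:

```latex
% u_D solves the state equation with the measured Dirichlet datum f,
% u_N with the measured Neumann datum g.
\[
  \mathcal{K}(D) = \int_{\Omega} \left| \nabla u_D - \nabla u_N \right|^2 \, dx,
  \qquad \Delta u_i + \chi(D)\, u_i = 0 \ \text{in } \Omega \quad (i = D, N),
\]
\[
  u_D = f \ \text{on } \partial\Omega,
  \qquad \partial_n u_N = g \ \text{on } \partial\Omega .
\]
```

The functional vanishes only when the two auxiliary solutions coincide, i.e., when the trial interface reproduces the boundary data; the topological gradient of K(D) then indicates where nucleating D decreases the misfit most.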
Procedia PDF Downloads 244
2955 Cosmetic Recommendation Approach Using Machine Learning
Authors: Shakila N. Senarath, Dinesh Asanka, Janaka Wijayanayake
Abstract:
The need for cosmetic products is rising to fulfill consumer needs for personal appearance and hygiene. A cosmetic product consists of various chemical ingredients, which may help keep the skin healthy or may damage it, and no chemical ingredient acts the same way on every person. The most appropriate way to select a healthy cosmetic product is to identify one's skin type first and then select the most suitable product with safe ingredients; the selection process is therefore complicated, and consumer surveys have shown that it is usually done improperly. This study proposes a content-based system that recommends cosmetic products based on human factors, namely skin type, gender, and price range. The proposed system is implemented using machine learning, with the consumer's skin type, gender, and price range taken as inputs. The consumer's skin type is derived using the Baumann Skin Type Questionnaire, a value-based approach comprising a number of questions that maps the user to one of the 16 skin types of the Baumann Skin Type Indicator (BSTI). Two datasets were collected for the research: a user dataset, gathered through a questionnaire given to the public, and a cosmetic dataset containing product details for five product categories (moisturizer, cleanser, sun protector, face mask, eye cream). TF-IDF (term frequency-inverse document frequency) is applied to vectorize the cosmetic ingredients in the generic cosmetic products dataset and in the user-preferred dataset, so that each can be represented as sparse vectors. The similarity between each user-preferred product and each generic cosmetic product is calculated using cosine similarity, and a similarity matrix is used for the recommendation process: the higher the similarity, the better the match for the consumer. Sorting a user's column of the similarity matrix in descending order retrieves the ranked list of recommended products. Since user information such as gender and price range has also been gathered, the retrieved recommendations can be further optimized by weighting those parameters.
Keywords: content-based filtering, cosmetics, machine learning, recommendation system
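A minimal sketch of the TF-IDF vectorization and cosine-similarity matching described above, using scikit-learn; the ingredient lists are invented placeholders for the cosmetic and user-preferred datasets.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalog = {
    "moisturizer_a": "glycerin dimethicone ceramide np panthenol",
    "cleanser_b":    "sodium laureth sulfate citric acid glycerin",
    "sunscreen_c":   "zinc oxide octocrylene tocopherol glycerin",
}
user_preferred = "glycerin ceramide np tocopherol"   # ingredients the user tolerates well

vectorizer = TfidfVectorizer()
product_vecs = vectorizer.fit_transform(catalog.values())   # sparse TF-IDF matrix
user_vec = vectorizer.transform([user_preferred])

scores = cosine_similarity(user_vec, product_vecs).ravel()
for name, score in sorted(zip(catalog, scores), key=lambda p: -p[1]):
    print(f"{name}: {score:.3f}")   # higher similarity = better match
```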
Procedia PDF Downloads 135
2954 Graph Planning Based Composition for Adaptable Semantic Web Services
Authors: Rihab Ben Lamine, Raoudha Ben Jemaa, Ikram Amous Ben Amor
Abstract:
This paper proposes a graph planning technique for adaptable semantic Web Service composition. First, we use an ontology-based context model to extend Web Service descriptions with information about the most suitable context for their use. Then, we transform the composition problem into a semantic, context-aware graph planning problem to build the optimal service composition based on the user's context. The construction of the planning graph is based on semantic, context-aware Web Service discovery, which at each step adds the most suitable Web Services in terms of semantic compatibility between the service parameters and their context similarity with the user's context. In the backward search step, semantic and contextual similarity scores are used to find the best composed Web Service list. Finally, in the ranking step, a score is calculated for each best solution, and a set of ranked solutions is returned to the user.
Keywords: semantic web service, web service composition, adaptation, context, graph planning
Procedia PDF Downloads 521
2953 Identification of EEG Attention Level Using Empirical Mode Decompositions for BCI Applications
Authors: Chia-Ju Peng, Shih-Jui Chen
Abstract:
This paper proposes a method to discriminate electroencephalogram (EEG) signals recorded in different concentration states using empirical mode decomposition (EMD). A brain-computer interface (BCI), also called a brain-machine interface, is a direct communication pathway between the brain and an external device that bypasses the inherent pathways of the peripheral nervous system and skeletal muscles. Attention level is a common index used as a control signal in BCI systems. EEG signals acquired from people paying attention or relaxing, respectively, are decomposed into a set of intrinsic mode functions (IMFs) by EMD. Fast Fourier transform (FFT) analysis is then applied to each IMF to obtain its frequency spectrum. By observing the power spectra of the IMFs, the proposed method identifies the EEG attention level between different concentration states better than the original EEG signals do. The band power of IMF3 is the most discriminative, especially in the β band, which corresponds to being fully awake and generally alert. The signal processing method and the results of this experiment pave a new way for BCI robotic systems using an attention-level control strategy. The integrated signal processing method reveals the information needed to discriminate attention from relaxation, contributing to enhanced BCI performance.
Keywords: biomedical engineering, brain computer interface, electroencephalography, rehabilitation
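A minimal sketch of the EMD-then-FFT pipeline on a synthetic signal, using the PyEMD package (installed as EMD-signal); the toy signal and the 13-30 Hz β-band definition are illustrative assumptions, not the study's recordings.

```python
import numpy as np
from PyEMD import EMD   # pip install EMD-signal

fs = 256
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(2)
# toy EEG: alpha (10 Hz) and beta (20 Hz) components plus noise
signal = (np.sin(2 * np.pi * 10 * t)
          + 0.6 * np.sin(2 * np.pi * 20 * t)
          + 0.3 * rng.standard_normal(t.size))

imfs = EMD()(signal)                      # intrinsic mode functions

def beta_band_power(x, fs, lo=13.0, hi=30.0):
    """Power of x inside the beta band, from its FFT spectrum."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    return spectrum[(freqs >= lo) & (freqs <= hi)].sum()

for i, imf in enumerate(imfs, start=1):
    print(f"IMF{i}: beta-band power = {beta_band_power(imf, fs):.1f}")
```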
Procedia PDF Downloads 391
2952 Digital Self-Care Intervention Evaluation from the Perspective of Healthcare Users
Authors: Dina Ziadlou, Anthony Sunjaya, Joyzen Cortez Ramos, Romario Muñoz Ramos, Richard Dasselaar
Abstract:
This study aimed to evaluate the opinions of users who use digital health technologies to prevent, promote, and maintain their health and well-being, with or without the support of a healthcare provider, in order to outline the future patient journey in light of the strategic initiatives of the digital transformation era. The research collected the opinions of healthcare clients through a structured questionnaire covering user accessibility, user knowledge, user experience, user engagement, and personalized medicine, to investigate the users' mindset and illustrate their opinions, expectations, needs, and voices regarding the expansion of digital self-care. In the realm of digital transformation, users' access to the internet, digital health platforms, tools, and mobile health applications has revolutionized the healthcare ecosystem, nurturing informed and empowered patients who are tech-savvy and can take the initiative to be in charge of their health, be involved in medical decision-making, and seek digital health innovations to prevent diseases and promote healthy lifestyles. The future of the patient journey is therefore digital self-care intervention in a healthcare ecosystem where patients' partnership in healthcare services is tied to their ownership of health information and action.
Keywords: digital health, patient engagement, self-care intervention, digital self-care intervention, digital transformation
Procedia PDF Downloads 39
2951 Statistical Analysis with Prediction Models of User Satisfaction in Software Project Factors
Authors: Katawut Kaewbanjong
Abstract:
We analyzed a large volume of data and identified the software project factors significantly associated with user satisfaction. A statistical significance analysis (logistic regression) and a collinearity analysis determined the significant factors from a group of 71 pre-defined factors across 191 software projects in ISBSG Release 12. Eight prediction models were used to test the predictive potential of these factors: neural network, k-NN, naïve Bayes, random forest, decision tree, gradient boosted tree, linear regression, and logistic regression. Fifteen pre-defined factors proved truly significant in predicting user satisfaction, providing 82.71% prediction accuracy when used with a neural network prediction model. These factors were client-server, personnel changes, total defects delivered, project inactive time, industry sector, application type, development type, how the methodology was acquired, development techniques, decision-making process, intended market, size estimate approach, size estimate method, cost recording method, and effort estimate method. These findings may benefit software development managers considerably.
Keywords: prediction model, statistical analysis, software project, user satisfaction factor
Procedia PDF Downloads 124
2950 PsyVBot: Chatbot for Accurate Depression Diagnosis using Long Short-Term Memory and NLP
Authors: Thaveesha Dheerasekera, Dileeka Sandamali Alwis
Abstract:
The escalating prevalence of mental health issues such as depression and suicidal ideation is a matter of significant global concern. A variety of factors, such as life events, social isolation, and preexisting physiological or psychological conditions, can instigate or exacerbate these conditions. Traditional approaches to diagnosing depression take considerable time and require skilled practitioners, which underscores the need for automated systems capable of promptly detecting and diagnosing symptoms of depression. The PsyVBot system employs natural language processing and machine learning methodologies, using the NLTK toolkit for dataset preprocessing and a Long Short-Term Memory (LSTM) model for classification. PsyVBot diagnoses depression from user input with a 94% accuracy rate, making it a useful resource for individuals, particularly students, who face challenges with their psychological well-being. The LSTM model comprises three layers: an embedding layer, an LSTM layer, and a dense layer. This layering enables a precise examination of the linguistic patterns associated with depression, allowing PsyVBot to assess an individual's level of depression from linguistic and contextual cues. The model is trained on a dataset sourced from the subreddit r/SuicideWatch, whose diverse data supports precise and sensitive identification of depression-related symptoms. Beyond diagnosis, PsyVBot enhances the user experience through audio outputs, enabling more engaging and interactive interactions, and offers a confidential, user-friendly interface for conveniently assessing mental health challenges. In developing PsyVBot, user confidentiality and ethical principles are of paramount importance: ethical standards are followed to safeguard the confidentiality and security of user information, and the chatbot fosters a supportive and compassionate atmosphere that promotes psychological welfare. In brief, PsyVBot is an automated conversational agent that uses an LSTM model to assess the level of depression from user input. The demonstrated 94% accuracy rate is a promising indication of the efficacy of natural language processing and machine learning techniques in tackling mental health challenges, and the use of the Reddit dataset with NLTK preprocessing further improves reliability. PsyVBot is a pioneering, user-centric solution that furnishes an accessible and confidential medium for seeking assistance, offered as a means of tackling the pervasive issues of depression and suicidal ideation.
Keywords: chatbot, depression diagnosis, LSTM model, natural language processing
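A minimal sketch of the three-layer architecture described above (embedding layer, LSTM layer, dense layer) in Keras; the vocabulary size, dimensions, and random stand-in data are illustrative assumptions, not the trained PsyVBot model or the r/SuicideWatch corpus.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense

vocab_size, seq_len = 10_000, 100          # tokenized post length after padding

model = Sequential([
    Input(shape=(seq_len,)),
    Embedding(vocab_size, 64),             # word index -> dense vector
    LSTM(64),                              # sequence -> fixed-size state
    Dense(1, activation="sigmoid"),        # P(depression indicators present)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

# toy stand-in for the preprocessed, padded token sequences
X = np.random.randint(1, vocab_size, size=(32, seq_len))
y = np.random.randint(0, 2, size=(32, 1))
model.fit(X, y, epochs=1, verbose=0)
print("sample prediction:", model.predict(X[:1], verbose=0)[0, 0])
```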
Procedia PDF Downloads 71
2949 Evaluation of Corrosion by Impedance Spectroscopy of Embedded Steel in an Alternative Concrete Exposed to Chloride Ion
Authors: E. Ruíz, W. Aperador
Abstract:
This article evaluates the protective effect of an alternative concrete, obtained from fly ash and iron-and-steel slag mixed in binary form, placed on ASTM A706 structural steel. The study was conducted comparatively, with control specimens exposed to natural, chloride-free conditions. The chloride-ion attack on the test specimens was accelerated under controlled conditions (3.5% NaCl, 25 °C temperature). Impedance data were acquired over the range of 1 mHz to 100 kHz. At high frequencies, the response of the exposure-medium/concrete interface is found, and at low frequencies, the response of the concrete-steel interface.
Keywords: alternative concrete, corrosion, alkaline activation, impedance spectroscopy
Procedia PDF Downloads 359
2948 Interface Designer as Cultural Producer: A Dialectic Materialist Approach to the Role of Visual Designer in the Present Digital Era
Authors: Cagri Baris Kasap
Abstract:
In this study, how interface designers can be viewed as producers of culture in the current era is interrogated from a critical theory perspective. Walter Benjamin was a German Jewish literary critical theorist who, during the 1930s, was engaged in opposing and criticizing the Nazi use of art and media. 'The Author as Producer' is an essay that Benjamin read at the Communist Institute for the Study of Fascism in Paris. In this essay, Benjamin relates directly to the dialectic between base and superstructure and argues that authors, normally placed within the superstructure, should consider how writing and publishing are production and thus directly related to the base. Through it, he discusses what it could mean to see the author as producer of his own text, as a producer of writing, understood as an ideological construct that rests on the apparatus of production and distribution. Benjamin concludes that the author must write in ways that relate to the conditions of production: he must do so in order to prepare his readers to become writers, and even make this possible for them by engineering an 'improved apparatus', working to turn consumers into producers and collaborators. In today's world, it has become a leading business model within the Web 2.0 services of multinational Internet technology and culture industries like Amazon, Apple, and Google to transform readers, spectators, consumers, or users into collaborators and co-producers through platforms such as Facebook, YouTube, and Amazon's CreateSpace Kindle Direct Publishing print-on-demand, e-book, and publishing platforms. However, the way this transformation happens is tightly controlled and monitored by combinations of software and hardware. In these global-market monopolies, it has become increasingly difficult to get insight into how one's writing and collaboration are used, captured, and capitalized as a user of Facebook or Google. Through the lens of this study, it could be argued that this criticism should be considered by digital producers, and even by the mass of collaborators, in contemporary social networking software. How do software and design incorporate users and their collaboration? Are users truly empowered; are they put in a position where they can understand the apparatus and how their collaboration is part of it? Or has the apparatus become a means against the producers? When using corporate systems like Google and Facebook, the iPhone, and the Kindle without any control over the means of production, which is closed off by opaque interfaces and licenses that limit our rights of use and ownership, we are already the collaborators that Benjamin calls for. The iPhone and the Kindle, for example, combine a specific use of technology to distribute the relations between the 'authors' and the 'prodUsers' in ways that secure their monopolistic business models by limiting the potential of the technology.
Keywords: interface designer, cultural producer, Walter Benjamin, materialist aesthetics, dialectical thinking
Procedia PDF Downloads 143
2947 The Analyzer: Clustering Based System for Improving Business Productivity by Analyzing User Profiles to Enhance Human Computer Interaction
Authors: Dona Shaini Abhilasha Nanayakkara, Kurugamage Jude Pravinda Gregory Perera
Abstract:
E-commerce platforms have revolutionized the shopping experience, offering convenient ways for consumers to make purchases. To improve interactions with customers and optimize marketing strategies, businesses must understand user behavior, preferences, and needs on these platforms. This paper focuses on enabling businesses to customize their interactions with users based on behavioral patterns, leveraging data-driven analysis and machine learning techniques. Businesses can improve engagement and boost the adoption of e-commerce platforms by aligning behavioral patterns with user goals of usability and satisfaction. We propose TheAnalyzer, a clustering-based system designed to enhance business productivity by analyzing user profiles and improving human-computer interaction. TheAnalyzer integrates seamlessly with business applications, collecting relevant data points from users' natural interactions without additional burdens such as questionnaires or surveys. It defines five key user analytics as the features of its dataset, all easily captured through users' interactions with e-commerce platforms. This research presents a study demonstrating the successful distinction of users into specific groups based on the five key analytics considered by TheAnalyzer. With the assistance of domain experts, customized business rules can be attached to each group, enabling TheAnalyzer to influence business applications and provide an enhanced, personalized user experience. The outcomes are evaluated quantitatively and qualitatively, demonstrating that TheAnalyzer's capabilities can optimize business outcomes, enhance customer satisfaction, and drive sustainable growth. The findings contribute to the advancement of personalized interactions in e-commerce platforms: by leveraging user behavioral patterns and analyzing both new and existing users, businesses can effectively tailor their interactions to improve customer satisfaction and loyalty and, ultimately, drive sales.
Keywords: data clustering, data standardization, dimensionality reduction, human computer interaction, user profiling
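A minimal sketch of the analytics pipeline suggested by the keywords (standardization, dimensionality reduction, clustering); the five synthetic analytics columns and k = 3 are illustrative assumptions, not TheAnalyzer's actual features or parameters.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# rows = users; columns = five interaction analytics (e.g. session length,
# pages viewed, cart adds, searches, return visits) -- assumed names
X = rng.gamma(shape=2.0, scale=1.5, size=(500, 5))

X_std = StandardScaler().fit_transform(X)        # standardize each analytic
X_2d = PCA(n_components=2).fit_transform(X_std)  # reduce dimensionality
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_2d)

for group in np.unique(labels):
    print(f"group {group}: {np.sum(labels == group)} users")
# domain experts can now attach customized business rules to each group
```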
Procedia PDF Downloads 75
2946 Non-Uniform Filter Banks-Based Minimum Distance to Riemannian Mean Classification in Motor Imagery Brain-Computer Interface
Authors: Ping Tan, Xiaomeng Su, Yi Shen
Abstract:
In a motor imagery brain-computer interface, motion intention is identified by classifying the event-related desynchronization (ERD) and event-related synchronization (ERS) characteristics of the sensorimotor rhythm (SMR) in EEG signals. When the subject imagines moving different limbs, or different parts of a limb, the rhythm components and their bandwidth change, and this varies from person to person. Finding each subject's effective sensorimotor frequency band is therefore directly related to the classification accuracy of the brain-computer interface. To solve this problem, this paper proposes a minimum distance to Riemannian mean classification method based on non-uniform filter banks. During the training phase, the EEG signals are first decomposed into multiple signals of different bandwidths using multiple band-pass filters; the spatial covariance of each frequency-band signal is then computed to serve as the feature vector. These feature vectors are classified by the MDRM (minimum distance to Riemannian mean) method, and cross-validation is employed to obtain the effective sensorimotor frequency bands. During the test phase, the test signals are filtered by the band-pass filters of the effective sensorimotor frequency bands, and the extracted spatial covariance feature vectors are classified using MDRM. Experiments on the BCI Competition IV 2a dataset show that the proposed method is superior to other classification methods.
Keywords: non-uniform filter banks, motor imagery, brain-computer interface, minimum distance to Riemannian mean
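A minimal sketch of one branch of the filter bank (band-pass filter, spatial covariance, minimum distance to Riemannian mean), using pyriemann's MDM classifier as the MDRM step; the band edges and synthetic epochs are assumptions, not the BCI Competition IV 2a data.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from pyriemann.estimation import Covariances
from pyriemann.classification import MDM

fs, n_channels, n_epochs, n_samples = 250, 8, 40, 500
rng = np.random.default_rng(4)
X = rng.standard_normal((n_epochs, n_channels, n_samples))   # toy EEG epochs
y = rng.integers(0, 2, n_epochs)                             # two imagery classes

def bandpass(epochs, lo, hi, fs):
    """Zero-phase band-pass of every epoch (one branch of the filter bank)."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, epochs, axis=-1)

X_band = bandpass(X, 8.0, 12.0, fs)               # e.g. the mu-rhythm band
covs = Covariances(estimator="oas").transform(X_band)  # spatial covariance features

clf = MDM(metric="riemann")                       # minimum distance to Riemannian mean
clf.fit(covs[:30], y[:30])
print("held-out accuracy:", clf.score(covs[30:], y[30:]))
```

Running this per band and cross-validating each branch's accuracy is how the effective sensorimotor bands would be selected.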
Procedia PDF Downloads 126
2945 Gesture-Controlled Interface Using Computer Vision and Python
Authors: Vedant Vardhan Rathour, Anant Agrawal
Abstract:
The project aims to provide a touchless, intuitive interface for human-computer interaction, enabling users to control their computer using hand gestures and voice commands. The system leverages advanced computer vision techniques, using the MediaPipe framework and OpenCV to detect and interpret real-time hand gestures and transform them into mouse actions such as clicking, dragging, and scrolling. Additionally, the integration of a voice assistant powered by the SpeechRecognition library allows seamless execution of tasks like web searches, location navigation, and gesture control of the system through voice commands.
Keywords: gesture recognition, hand tracking, machine learning, convolutional neural networks
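A minimal sketch of the hand-tracking-to-cursor idea using MediaPipe Hands and OpenCV, mapping only cursor movement (clicks, scrolling, and the voice assistant are omitted); pyautogui is an assumed choice for issuing mouse actions.

```python
import cv2
import mediapipe as mp
import pyautogui

screen_w, screen_h = pyautogui.size()
hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(cv2.flip(frame, 1), cv2.COLOR_BGR2RGB)  # mirror view
    result = hands.process(rgb)
    if result.multi_hand_landmarks:
        tip = result.multi_hand_landmarks[0].landmark[8]   # index fingertip
        pyautogui.moveTo(int(tip.x * screen_w), int(tip.y * screen_h))
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```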
Procedia PDF Downloads 16
2944 Wasting Human and Computer Resources
Authors: Mária Csernoch, Piroska Biró
Abstract:
The legends about “user-friendly” and “easy-to-use” birotical tools (computer-related office tools) have been spreading and misleading end-users. This approach has led to an extremely high number of incorrect documents, causing serious financial losses in the creating, modifying, and retrieving processes. Our research proved that there are at least two sources of this underachievement. (1) The lack of a definition of correctly edited, formatted documents. Consequently, end-users do not know whether their methods and results are correct, and they are not even aware of this ignorance: it does not allow them to realize their lack of knowledge. (2) The end-users' problem-solving methods. We have found that in non-traditional programming environments, end-users apply, almost exclusively, surface-approach metacognitive methods to carry out their computer-related activities, which have proved less effective than deep-approach methods. Based on these findings, we have developed deep-approach methods that are based on and adapted from traditional programming languages. In this study, we focus on the most popular type of birotical document, the text-based document. We provide a definition of correctly edited text and, based on this definition, adapt the debugging method known from programming. According to the method, before any real text editing, a thorough debugging of already existing texts and a categorization of errors are carried out. With this method, in advance of real text editing, users learn the requirements of text-based documents and of correctly formatted text. The method has proved much more effective than the previously applied surface-approach methods. Its advantages are that real text handling requires far fewer human and computer resources than clicking aimlessly in the GUI (graphical user interface), and that data retrieval is much more effective than from error-prone documents.
Keywords: deep approach metacognitive methods, error-prone birotical documents, financial losses, human and computer resources
Procedia PDF Downloads 382
2943 Integrated Model for Enhancing Data Security Processing Time in Cloud Computing
Authors: Amani A. Saad, Ahmed A. El-Farag, El-Sayed A. Helali
Abstract:
Cloud computing has been an important and promising field in the recent decade, allowing people across the whole world to share resources, services, and information. Although the advantages of using clouds are great, there are many risks in a cloud, and data security is the most important and critical problem of cloud computing. In this research, a new security model for cloud computing is proposed to ensure secure communication, hide information from other users, and save users' time. In the proposed model, the Blowfish encryption algorithm is used for exchanging information or data, and the SHA-2 cryptographic hash algorithm is used for data integrity. For user authentication, a simple username and password are used; the password is protected with SHA-2 as a one-way hash. The proposed system shows an improvement in the processing time of uploading and downloading files to and from the cloud in secure form.
Keywords: cloud computing, data security, SAAS, PAAS, IAAS, Blowfish
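A minimal sketch of the scheme's two cryptographic ingredients (Blowfish for confidentiality, SHA-2 for integrity), using the pycryptodome package; the CBC mode, padding, and key handling shown here are illustrative assumptions, not the paper's exact protocol.

```python
import hashlib
from Crypto.Cipher import Blowfish
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

key = get_random_bytes(16)                      # shared Blowfish key
plaintext = b"contents of the file being uploaded"

digest = hashlib.sha256(plaintext).hexdigest()  # SHA-2 integrity tag

cipher = Blowfish.new(key, Blowfish.MODE_CBC)
ciphertext = cipher.encrypt(pad(plaintext, Blowfish.block_size))
iv = cipher.iv                                  # sent alongside the ciphertext

# --- download side: decrypt, then verify integrity ---
decrypted = unpad(Blowfish.new(key, Blowfish.MODE_CBC, iv=iv)
                  .decrypt(ciphertext), Blowfish.block_size)
assert hashlib.sha256(decrypted).hexdigest() == digest
print("download verified:", decrypted.decode())
```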
Procedia PDF Downloads 360
2942 Micro-Droplet Formation in a Microchannel under the Effect of an Electric Field: Experiment
Authors: Sercan Altundemir, Pinar Eribol, A. Kerem Uguz
Abstract:
Microfluidic systems allow many large-scale laboratory applications to be miniaturized on a single device in order to reduce cost and advance fluid control. Moreover, such systems enable the generation and control of droplets, which play a significant role in improved analysis for many chemical and biological applications; for example, they can serve as models for cells in microfluidic systems. In this work, the interfacial instability of two immiscible Newtonian liquids flowing in a microchannel is investigated. When two immiscible liquids are in the laminar regime, a flat interface forms between them. If a direct-current electric field is applied, the interface may deform, i.e., become unstable, rupture, and form micro-droplets. First, the effects of the thickness ratio, total flow rate, and viscosity ratio of the silicone oil-ethylene glycol liquid couple on the critical voltage at which the interface starts to destabilize are investigated. The droplet sizes are then measured under the effect of these parameters at various voltages. Moreover, the effect of the total flow rate on the time elapsed before the interface is ruptured, hitting the channel wall and forming droplets, is analyzed. An increase in the viscosity ratio or the thickness ratio of the silicone oil to the ethylene glycol is observed to have a stabilizing effect, i.e., a higher voltage is needed, while the total flow rate has no effect on the critical voltage. However, an increase in the total flow rate shortens the time elapsed before the interface hits the wall. Moreover, the droplet size decreases, down to 0.1 μL, with an increase in the applied voltage, the viscosity ratio, or the total flow rate, or with a decrease in the thickness ratio. In addition, two empirical models are established, one for the critical electric number (the dimensionless voltage) and one for the droplet size, together with a third model, combining both, for the droplet size at the critical voltage.
Keywords: droplet formation, electrohydrodynamics, microfluidics, two-phase flow
Procedia PDF Downloads 176
2941 Political Perspectives Regarding International Laws
Authors: Hamid Vahidkia
Abstract:
This essay investigates the connection between two viewpoints on the nature of human rights. According to the "political" or "practical" viewpoint, human rights are claims that people have against certain institutional structures, in particular modern states, in virtue of interests they have in contexts that include them. According to the more traditional "humanist" or "naturalistic" viewpoint, human rights are pre-institutional claims that people have against all other people in virtue of interests characteristic of their common humanity. This paper argues that once we see the two viewpoints in their best light, we can recognize that they are complementary and that, in fact, we need both to make good normative sense of the modern practice of human rights. It explains how humanist and political considerations can and should work in tandem to account for the concept, substance, and justification of human rights.
Keywords: politics, human rights, humanities, mankind, law
Procedia PDF Downloads 59
2940 Pro-BluCRM: A Proactive Customer Relationship Management System Using Bluetooth
Authors: Mohammad Alawairdhi
Abstract:
Customer relationship management (CRM) started gaining attention as late as the 1990s, and since then efforts have been ongoing to define the domain's precise specifications. There is as yet no single agreed-upon definition. However, a predominant majority perceives CRM as a mechanism for enhancing interaction with customers, thereby strengthening the relationship between a business and its clients. From the perspective of information technology (IT) companies, CRM systems can be viewed as software products or services that automate the marketing, selling, and servicing functions of an organization. In this paper, we propose a Bluetooth-enabled CRM system for small- and medium-scale organizations. In the proposed system, Bluetooth technology works as an automatic identification token in addition to its common use as a communication channel. The system comprises a server side, with user-interface support on both the client and server sides. The system has been tested in two environments, and users have reported ease of use, convenience, and understandability as the major advantages of the proposed solution.
Keywords: customer relationship management, CRM, bluetooth, automatic identification token
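A minimal sketch of Bluetooth discovery used as an automatic identification token, in the spirit of the proposal; it assumes the PyBluez package, and the known-customer table is an invented placeholder for the CRM database.

```python
import bluetooth  # PyBluez

known_customers = {                      # MAC address -> CRM profile (assumed)
    "00:1A:7D:DA:71:13": "Alice (gold tier)",
}

# scan for nearby devices and resolve their friendly names
nearby = bluetooth.discover_devices(duration=8, lookup_names=True)
for addr, name in nearby:
    profile = known_customers.get(addr)
    if profile:
        print(f"recognized {name} ({addr}) -> proactively greet {profile}")
    else:
        print(f"unknown device {name} ({addr})")
```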
Procedia PDF Downloads 342
2939 Knowledge-Driven Decision Support System Based on Knowledge Warehouse and Data Mining by Improving Apriori Algorithm with Fuzzy Logic
Authors: Pejman Hosseinioun, Hasan Shakeri, Ghasem Ghorbanirostam
Abstract:
In recent years, research on knowledge sources, decision support systems, data mining, and the process of knowledge discovery in databases has grown in importance, and each of these aspects is considered to affect the others. In this article, we merge information sources and knowledge sources to propose a knowledge-based management system built on the storing and retrieving of knowledge, in order to manage information and improve decision-making and resource use. We use data mining with the Apriori algorithm in the knowledge discovery process. One problem with the Apriori algorithm is that the user must specify the minimum support threshold for the regularities. Imagine a user who wants to apply the Apriori algorithm to a database with millions of transactions: such a user does not have the necessary knowledge of all the transactions in that database and therefore cannot specify a suitable threshold. Our purpose in this article is to improve the Apriori algorithm. To achieve this goal, we use fuzzy logic to place the data in different clusters before applying the Apriori algorithm to the data in the database, and we also suggest the most suitable threshold to the user automatically.
Keywords: decision support system, data mining, knowledge discovery, data discovery, fuzzy logic
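A minimal sketch of the proposed order of operations (fuzzy clustering first, then Apriori per cluster with an automatically suggested threshold); it assumes the scikit-fuzzy and mlxtend packages, and the threshold heuristic is an invented stand-in for the paper's method.

```python
import numpy as np
import pandas as pd
import skfuzzy as fuzz
from mlxtend.frequent_patterns import apriori

# toy one-hot transaction table (rows = transactions, columns = items)
df = pd.DataFrame(
    np.random.default_rng(5).integers(0, 2, size=(60, 4)).astype(bool),
    columns=["milk", "bread", "butter", "jam"],
)

# fuzzy c-means expects data shaped (features, samples)
cntr, u, *_ = fuzz.cmeans(df.T.values.astype(float), c=2, m=2.0,
                          error=1e-4, maxiter=300, seed=0)
labels = u.argmax(axis=0)                  # harden the fuzzy memberships

for k in range(2):
    cluster = df[labels == k]
    # assumed heuristic: derive a support threshold from the cluster itself
    suggested = max(0.1, cluster.mean().median() * 0.5)
    itemsets = apriori(cluster, min_support=suggested, use_colnames=True)
    print(f"cluster {k}: suggested min_support = {suggested:.2f}")
    print(itemsets.head())
```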
Procedia PDF Downloads 336
2938 Noninvasive Brain-Machine Interface to Control Both Mecha TE Robotic Hands Using Emotiv EEG Neuroheadset
Authors: Adrienne Kline, Jaydip Desai
Abstract:
Electroencephalography (EEG) is a noninvasive technique that registers signals originating from the firing of neurons in the brain. The Emotiv EEG Neuroheadset is a consumer product comprising 14 EEG channels; it was used to record the reactions of neurons in the brains of 10 participants to two forms of stimuli. These stimuli were delivered in auditory and visual formats and gave the direction "right" or "left"; participants were instructed to raise their right or left arm in accordance with the instruction given. A scenario in OpenViBE was generated to stimulate the participants while recording their data, with the Graz Motor BCI Stimulator algorithm configured to govern the duration and number of visual stimuli. Utilizing EEGLAB under the cross-platform MATLAB®, the electrodes most stimulated during the study were identified, and the data outputs from EEGLAB were analyzed using IBM SPSS Statistics® Version 20. This determined the electrodes to use in the development of a brain-machine interface (BMI) driven by real-time EEG signals from the Emotiv EEG Neuroheadset. Signal processing and feature extraction were accomplished via the Simulink® signal processing toolbox, and an Arduino™ Duemilanove microcontroller was used to link the Emotiv EEG Neuroheadset to the right and left Mecha TE™ Hands.
Keywords: brain-machine interface, EEGLAB, emotiv EEG neuroheadset, OpenViBE, simulink
Procedia PDF Downloads 502
2937 Player Experience: A Research on Cross-Platform Supported Games
Authors: Salih Akkemik
Abstract:
User experience (UX) has a characteristic perspective based on two fundamentals: the usage process and the product. Digital games can be considered a special kind of interactive system, one whose very specific purpose is to make the player feel good while playing. On this point, player experience (PX) and user experience are similar: UX focuses on making the user feel good, while PX focuses on making the player feel good. The most important difference between the two is the action involved: using versus playing. This study examines player experience primarily. Nowadays, companies release the successful, high-revenue games they develop with cross-platform support. Cross-platform is the common term for an application that can run on different operating systems, in other words, one developed to support different operating systems. In terms of digital games, cross-platform support means a game can be played on a computer, console, or mobile device; more specifically, the game is designed and programmed to be played in the same way on at least two different platforms, such as Windows, macOS, Linux, iOS, Android, Orbis OS, or Xbox OS. Different platforms also accommodate different player groups, profiles, and preferences. This study aims to examine these different player profiles in terms of player experience and to determine the effects of cross-platform support on player experience.
Keywords: cross-platform, digital games, player experience, user experience
Procedia PDF Downloads 206
2936 Meta Root ID Passwordless Authentication Using ZKP Bitcoin Protocol
Authors: Saransh Sharma, Atharv Dekhne
Abstract:
Passwords stored on central services, even hashed, are prone to cyberattacks and leaks. Given these weaknesses, there is a need to eliminate character-based authentication protocols, which would ultimately benefit developers as well as end-users. Passwordless authentication replaces this conventional but antiquated protocol with a secure alternative. The meta root.id system creates a public and private key pair, of which only the user can access the private key. After signing with the private key, the user sends the signed information over the API to the server, which checks its validity against the public key and grants access accordingly.
Keywords: passwordless, OAuth, bitcoin, ZKP, SIN, BIP
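A minimal sketch of the sign-then-verify flow described above, using ECDSA from the cryptography package; the challenge string and the client/server split are illustrative assumptions about the protocol (the Bitcoin/ZKP specifics are not reproduced here).

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# --- client side: key pair; the private key never leaves the user ---
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()          # registered with the server

challenge = b"login-nonce-1234"                # issued by the server
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# --- server side: verify with the stored public key ---
try:
    public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("access granted")
except InvalidSignature:
    print("access denied")
```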
Procedia PDF Downloads 96
2935 Power Allocation in User-Centric Cell-Free Massive Multiple-Input Multiple-Output Systems with Limited Fronthaul Capacity
Authors: Siminfar Samakoush Galougah
Abstract:
In this paper, we study two power allocation problems for an uplink user-centric (UC) cell-free massive multiple-input multiple-output (CF-mMIMO) system. We assume that each access point (AP) is connected to a central processing unit (CPU) via a fronthaul link with limited capacity. To use the fronthaul capacity efficiently, two strategies for transmitting signals from the APs to the CPU are employed: compress-forward-estimate (CFE) and estimate-compress-forward (ECF). The capacities of these strategies in user-centric CF-mMIMO are derived. We then solve the two power allocation problems, with minimum spectral efficiency (SE) maximization and sum-SE maximization objectives, for the ECF and CFE strategies.
Keywords: cell-free massive MIMO, limited capacity fronthaul, spectral efficiency
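For orientation, the two objectives can be written as generic power-allocation problems (a textbook template under a per-user power budget, not the paper's exact constraint set):

```latex
% max-min SE (fairness) and sum-SE maximization over the uplink powers p_k
\[
  \max_{\{p_k\}} \ \min_{k} \ \mathrm{SE}_k(\mathbf{p})
  \qquad \text{and} \qquad
  \max_{\{p_k\}} \ \sum_{k=1}^{K} \mathrm{SE}_k(\mathbf{p}),
  \qquad \text{s.t. } 0 \le p_k \le p_{\max} \quad \forall k .
\]
```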
Procedia PDF Downloads 73
2934 The Design of Information Technology System for Traceability of Thailand’s Tubtimjun Roseapple
Authors: Pimploi Tirastittam, Phutthiwat Waiyawuththanapoom, Sawanath Treesathon
Abstract:
Several countries that import agricultural products from Thailand demand that Thailand establish a traceability system. A traceability system is a very effective tool for reducing risk in the supply chain, as it helps the stakeholders in the supply chain identify the defect point, which in turn reduces the cost of operating the supply chain. This qualitative research aims to design a traceability system for the Tubtimjun roseapple for export to China. The data were collected from experts in the Tubtimjun roseapple and fruit-exporting industry and were used to design the traceability system. The design follows supply chain theory, running from the upstream to the downstream of the supply chain to support the process and conditions of exporting, and covers the database design, system architecture, user-interface design, and information technology of the traceability system.
Keywords: design, information technology system, traceability, Tubtimjun roseapple
Procedia PDF Downloads 171