Search results for: network user rules
5365 Neuro-Connectivity Analysis Using ABIDE Data in Autism Study
Authors: Dulal Bhaumik, Fei Jie, Runa Bhaumik, Bikas Sinha
Abstract:
The human brain is an amazingly complex network. Aberrant activities in this network can lead to various neurological disorders such as multiple sclerosis, Parkinson’s disease, Alzheimer’s disease and autism. fMRI has emerged as an important tool to delineate the neural networks affected by such diseases, particularly autism. In this paper, we propose mixed-effects models together with an appropriate procedure for controlling false discoveries to detect disrupted connectivities in whole-brain studies. Results are illustrated with a large data set known as the Autism Brain Imaging Data Exchange (ABIDE), which includes 361 subjects from 8 medical centers. We believe that our findings have adequately addressed the small-sample inference problem and are thus more reliable for identifying therapeutic targets for intervention. In addition, our results can be used for early detection of subjects who are at high risk of developing neurological disorders.
Keywords: ABIDE, autism spectrum disorder, fMRI, mixed-effects model
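A minimal sketch of the kind of pipeline described above, assuming one row per subject with a diagnosis indicator, an imaging-site label, and one column per functional connection; the model form and column names are illustrative assumptions, not the authors' code.

```python
# Per-connection mixed-effects model with a random intercept per imaging
# site, followed by Benjamini-Hochberg control of the false discovery rate.
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

def test_connectivities(df, connection_cols):
    """df columns: 'diagnosis' (0/1), 'site', plus one connectivity value
    per column listed in connection_cols (valid Python identifiers)."""
    pvals = []
    for col in connection_cols:
        # The random intercept for 'site' absorbs center-to-center variation,
        # which matters for multi-center data such as ABIDE.
        result = smf.mixedlm(f"{col} ~ diagnosis", df, groups=df["site"]).fit()
        pvals.append(result.pvalues["diagnosis"])
    # Benjamini-Hochberg keeps the expected false-discovery rate at 5%.
    rejected, _, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    return [c for c, r in zip(connection_cols, rejected) if r]
```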
Procedia PDF Downloads 289
5364 Use Cloud-Based Watson Deep Learning Platform to Train Models Faster and More Accurately
Authors: Susan Diamond
Abstract:
Machine learning workloads have traditionally been run in high-performance computing (HPC) environments, where users log in to dedicated machines and utilize the attached GPUs to run training jobs on huge datasets. Training of large neural network models is very resource-intensive, and even after exploiting parallelism and accelerators such as GPUs, a single training job can still take days. Consequently, the cost of hardware is a barrier to entry. Even when upfront cost is not a concern, the lead time to set up such an HPC environment takes months, from acquiring the hardware to setting it up with the right firmware and software installed and configured. Furthermore, scalability is hard to achieve in a rigid traditional lab environment, which is therefore slow to react to dynamic change in the artificial intelligence industry. Watson Deep Learning as a Service is a cloud-based deep learning platform that mitigates the long lead time and high upfront investment in hardware. It enables robust and scalable sharing of resources among the teams in an organization and is designed for on-demand cloud environments. Providing a similar user experience in a multi-tenant cloud environment comes with its own unique challenges regarding fault tolerance, performance, and security. Watson Deep Learning as a Service tackles these challenges and presents a deep learning stack for cloud environments in a secure, scalable and fault-tolerant manner. It supports a wide range of deep learning frameworks such as TensorFlow, PyTorch, Caffe, Torch, Theano, and MXNet. These frameworks reduce the effort and skillset required to design, train, and use deep learning models. Deep Learning as a Service is used at IBM by AI researchers in areas including machine translation, computer vision, and healthcare.
Keywords: deep learning, machine learning, cognitive computing, model training
Procedia PDF Downloads 209
5363 Applying the Fuzzy Analytic Network Process to Establish the Relative Importance of Knowledge Sharing Barriers
Authors: Van Dong Phung, Igor Hawryszkiewycz, Kyeong Kang, Muhammad Hatim Binsawad
Abstract:
Knowledge sharing (KS) is the key to creativity and innovation in any organization. Overcoming the KS barriers has created new challenges for designing in dynamic and complex environments, and there may be interrelations and interdependences among the barriers. The purpose of this paper is to present a review of the literature on KS barriers and to impute their relative importance through the fuzzy analytic network process (FANP), a generalization of the analytic hierarchy process (AHP). This helps prioritize the barriers so that ways can be found to remove them and facilitate KS. The study begins with a brief description of KS barriers and the most critical ones. The FANP and its role in identifying the relative importance of KS barriers are explained. The paper then proposes the research model and expected outcomes. The study suggests that the use of the FANP is appropriate for imputing the relative importance of KS barriers, which are intertwined and interdependent. Implications and future research are also proposed.
Keywords: FANP, ANP, knowledge sharing barriers, knowledge sharing, removing barriers, knowledge management
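As an illustration of the prioritization step, the sketch below shows the crisp AHP eigenvector computation that underlies the FANP: priority weights for a handful of barriers are derived from a pairwise comparison matrix. The barrier names and judgment values are hypothetical, and the full fuzzy ANP additionally uses fuzzy judgments and a supermatrix over interdependent barriers.

```python
# Derive relative-importance weights from a pairwise comparison matrix
# via its principal eigenvector (the classic AHP step inside the FANP).
import numpy as np

# Hypothetical 1-9 scale judgments for three barriers:
# lack of time, lack of trust, poor IT infrastructure.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()  # relative importance, sums to 1
print(weights.round(3))                # roughly [0.65, 0.23, 0.12]
```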
Procedia PDF Downloads 333
5362 Predicting National Football League (NFL) Matches with a Score-Based System
Authors: Marcho Setiawan Handok, Samuel S. Lemma, Abdoulaye Fofana, Naseef Mansoor
Abstract:
This paper proposes a method to predict the outcome of National Football League matches with data from 2019 to 2022 and compares it with other popular models. The model uses open-source statistical data for each team, such as passing yards, rushing yards, fumbles lost, and scoring. Each statistic has an offensive and a defensive side. For instance, a data set of anticipated values for a specific matchup is created by comparing the offensive passing yards obtained by one team to the defensive passing yards given up by the opposition. We evaluated the model’s performance by contrasting its results with those of established prediction algorithms. This research uses a neural network to predict the score of a National Football League match and then predicts the winner of the game.
Keywords: game prediction, NFL, football, artificial neural network
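A hedged sketch of the matchup-feature idea described above: pair one team's offensive statistic with the opponent's corresponding defensive statistic and feed the resulting values to a small neural network. The column names and network size are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def matchup_features(home, away):
    """home/away: dicts of a team's per-game offensive/defensive averages."""
    return np.array([
        home["off_pass_yds"] - away["def_pass_yds_allowed"],
        home["off_rush_yds"] - away["def_rush_yds_allowed"],
        away["off_pass_yds"] - home["def_pass_yds_allowed"],
        away["off_rush_yds"] - home["def_rush_yds_allowed"],
        home["fumbles_lost"] - away["fumbles_lost"],
    ])

# X: one feature row per historical game; y: home-team margin of victory.
# The winner is then read off the sign of the predicted margin.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
# model.fit(X, y)
# margin = model.predict(matchup_features(home, away).reshape(1, -1))
```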
Procedia PDF Downloads 84
5361 A Vehicle Monitoring System Based on the LoRa Technique
Authors: Chao-Linag Hsieh, Zheng-Wei Ye, Chen-Kang Huang, Yeun-Chung Lee, Chih-Hong Sun, Tzai-Hung Wen, Jehn-Yih Juang, Joe-Air Jiang
Abstract:
Air pollution and climate warming are becoming more and more intense in many areas, especially in urban areas. Environmental parameters are critical information for air pollution and weather monitoring. Thus, it is necessary to develop a suitable air pollution and weather monitoring system for urban areas. In this study, a vehicle monitoring system (VMS) based on the IoT technique is developed. Cars are selected as the research tool because they can reach a greater number of streets to collect data. The VMS can monitor different environmental parameters, including ambient temperature and humidity, and air quality parameters, including PM2.5, NO2, CO, and O3. The VMS can provide other information, including GPS signals and vibration information, gathered while driving a car on the street. Different sensor modules are used to measure the parameters, collect the measured data, and transmit them to a cloud server through the LoRa protocol. A user interface is used to show the sensing data stored at the cloud server. To examine the performance of the system, a researcher drove a Nissan X-Trail 1998 to the area close to the Da’an District office in Taipei to collect monitoring data. The collected data are instantly shown on the user interface, which provides four kinds of information: GPS positions, weather parameters, vehicle information, and air quality information. With the VMS, users can obtain information regarding air quality and weather conditions when they drive their car to an urban area. Also, government agencies can make decisions on traffic planning based on the information provided by the proposed VMS.
Keywords: LoRa, monitoring system, smart city, vehicle
Procedia PDF Downloads 416
5360 A Research and Application of Feature Selection Based on IWO and Tabu Search
Authors: Laicheng Cao, Xiangqian Su, Youxiao Wu
Abstract:
Feature selection is one of the important problems in network security, pattern recognition, data mining and other fields. In order to remove redundant features and effectively improve the detection speed of intrusion detection systems, this paper proposes a new feature selection method based on the invasive weed optimization (IWO) algorithm and the tabu search (TS) algorithm. IWO is used for global search and tabu search for local search, to improve the results of the IWO algorithm. The experimental results show that the feature selection method can effectively remove the redundant features of network data information, reduce detection time and, while guaranteeing an accurate detection rate, effectively improve the speed of the detection system.
Keywords: intrusion detection, feature selection, IWO, tabu search
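A minimal sketch of the local-search half of the method: tabu search over feature subsets, flipping one feature at a time while recently flipped features stay tabu. In the paper this refines candidate subsets produced by the IWO global search; here evaluate() is an assumed stand-in for the detector's cross-validated detection rate.

```python
import random

def tabu_search(n_features, evaluate, iters=100, tenure=7):
    current = [random.random() < 0.5 for _ in range(n_features)]
    best, best_score = current[:], evaluate(current)
    tabu = {}  # feature index -> iteration until which flipping it is tabu
    for it in range(iters):
        candidates = []
        for j in range(n_features):
            neighbor = current[:]
            neighbor[j] = not neighbor[j]   # include/exclude feature j
            score = evaluate(neighbor)
            # Aspiration: a tabu move is allowed if it beats the global best.
            if tabu.get(j, -1) < it or score > best_score:
                candidates.append((score, j, neighbor))
        if not candidates:
            continue
        score, j, current = max(candidates)  # best admissible neighbor
        tabu[j] = it + tenure                # forbid reversing this flip
        if score > best_score:
            best, best_score = current[:], score
    return best, best_score
```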
Procedia PDF Downloads 530
5359 Evolution of Web Development Progress in Modern Information Technology
Authors: Abdul Basit Kiani
Abstract:
Web development, the art of creating and maintaining websites, has witnessed remarkable advancements. The aim is to provide an overview of some of the cutting-edge developments in the field. Firstly, the rise of responsive web design has revolutionized user experiences across devices. With the increasing prevalence of smartphones and tablets, web developers have adapted to ensure seamless browsing experiences, regardless of screen size. This progress has greatly enhanced accessibility and usability, catering to the diverse needs of users worldwide. Additionally, the evolution of web frameworks and libraries has significantly streamlined the development process. Tools such as React, Angular, and Vue.js have empowered developers to build dynamic and interactive web applications with ease. These frameworks not only enhance efficiency but also bolster scalability, allowing for the creation of complex and feature-rich web solutions. Furthermore, the emergence of progressive web applications (PWAs) has bridged the gap between native mobile apps and web development. PWAs leverage modern web technologies to deliver app-like experiences, including offline functionality, push notifications, and seamless installation. This innovation has transformed the way users interact with websites, blurring the boundaries between traditional web and mobile applications. Moreover, the integration of artificial intelligence (AI) and machine learning (ML) has opened new horizons in web development. Chatbots, intelligent recommendation systems, and personalization algorithms have become integral components of modern websites. These AI-powered features enhance user engagement, provide personalized experiences, and streamline customer support processes, revolutionizing the way businesses interact with their audiences. Lastly, the emphasis on web security and privacy has been a pivotal area of progress. With the increasing incidence of cyber threats, web developers have implemented robust security measures to safeguard user data and ensure secure transactions. Innovations such as the HTTPS protocol, two-factor authentication, and advanced encryption techniques have bolstered the overall security of web applications, fostering trust and confidence among users. Hence, recent progress in web development has propelled the industry forward, enabling developers to craft innovative and immersive digital experiences. From responsive design to AI integration and enhanced security, the landscape of web development continues to evolve, promising a future filled with endless possibilities.
Keywords: progressive web applications (PWAs), web security, machine learning (ML), web frameworks, responsive web design
Procedia PDF Downloads 54
5358 An Integrated Lightweight Naïve Bayes Based Webpage Classification Service for Smartphone Browsers
Authors: Mayank Gupta, Siba Prasad Samal, Vasu Kakkirala
Abstract:
The internet world and its priorities have changed considerably in the last decade. Browsing on smartphones has increased manifold and is set to explode much more. Users spend considerable time browsing different websites, which gives a great deal of insight into their preferences. Classifying different aspects of browsing, like Bookmarks, History, and the Download Manager, into useful categories, instead of leaving them as plain information, would improve and enhance the user’s experience. Most classification solutions are server-side, which involves maintaining servers and other heavy resources, has security constraints, and may miss contextual data during classification. On-device classification solves many such problems, but the challenge is to achieve accurate classification under resource constraints. On-device classification can be much more useful for personalization, reducing dependency on cloud connectivity, and better privacy/security. This approach provides more relevant results compared to current standalone solutions because it uses content rendered by the browser, which is customized by the content provider based on the user’s profile. This paper proposes a Naive Bayes based lightweight classification engine targeted at resource-constrained devices. Our solution integrates with the web browser, which in turn triggers the classification algorithm. Whenever a user browses a webpage, the solution extracts DOM tree data from the browser’s rendering engine. This DOM data is dynamic, contextual and secure data that can’t be replicated. The proposal extracts different features of the webpage and runs an algorithm on them to classify the page into multiple categories. A Naive Bayes based engine is chosen in this solution for its inherent advantages in using limited resources compared to other classification algorithms like Support Vector Machines, neural networks, etc. Naive Bayes classification requires a small memory footprint and little computation, suitable for the smartphone environment. The solution can partition the model into multiple chunks, which in turn allows loading only part of the model instead of the complete one, reducing memory usage. Classification of webpages done through the integrated engine is faster, more relevant and more energy efficient than other standalone on-device solutions. This classification engine has been tested on Samsung Z3 Tizen hardware. The engine is integrated into the Tizen Browser, which uses the Chromium rendering engine. For this solution, an extensive dataset was sourced from dmoztools.net and cleaned. This cleaned dataset has 227.5K webpages, which are divided into 8 generic categories ('education', 'games', 'health', 'entertainment', 'news', 'shopping', 'sports', 'travel'). Our browser-integrated solution has resulted in 15% less memory usage (due to the partition method) and 24% less power consumption in comparison with the standalone solution. This solution used 70% of the dataset for training the data model and the remaining 30% for testing. An average accuracy of ~96.3% is achieved across the above-mentioned 8 categories. The engine can be further extended to suggest dynamic tags and to use the classification for different use cases to enhance the browsing experience.
Keywords: chromium, lightweight engine, mobile computing, Naive Bayes, Tizen, web browser, webpage classification
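A minimal sketch of the classification step, assuming webpage text has already been extracted from the browser's DOM tree; the hashed-feature setup is an assumption chosen to keep the memory footprint fixed, as the on-device setting requires, not the authors' exact engine.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.naive_bayes import MultinomialNB

CATEGORIES = ["education", "games", "health", "entertainment",
              "news", "shopping", "sports", "travel"]

# Hashed token counts avoid storing a vocabulary on the device.
vectorizer = HashingVectorizer(n_features=2**16, alternate_sign=False)
clf = MultinomialNB()

def train(pages, labels):
    """pages: list of DOM-extracted text; labels: items from CATEGORIES."""
    clf.fit(vectorizer.transform(pages), labels)

def classify(page_text):
    return clf.predict(vectorizer.transform([page_text]))[0]
```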
Procedia PDF Downloads 163
5357 Under the 'Umbrella' Project: A Volunteer-Mentoring Approach for Socially Disadvantaged University Students
Authors: Evridiki Zachopoulou, Vasilis Grammatikopoulos, Michail Vitoulis, Athanasios Gregoriadis
Abstract:
In the last ten years, the economic crisis in Greece has decreased the financial ability and strength of several families when it comes to supporting their children’s studies. As a result, the number of students who are significantly delaying or even dropping out of their university studies is constantly increasing. The students who are at greater risk of academic failure are those who are facing various problems and social disadvantages, like health problems, special needs, family poverty or unemployment, single-parent students, immigrant students, etc. The ‘Umbrella’ project is a volunteer-based initiative to tackle this problem at International Hellenic University. The main purpose of the project is to provide support to disadvantaged students at a socio-emotional, academic, and practical level in order to help them complete their undergraduate studies. More specifically, the ‘Umbrella’ project has the following goals: (a) to develop a consulting-supporting network based on volunteering senior students, called ‘i-mentors’; (b) to train the volunteering i-mentors and create a systematic and consistent support procedure for at-risk students; (c) to develop a service that, parallel to the i-mentor network, will ensure opportunities for at-risk students to find a job; (d) to support students who are coping with accessibility difficulties; and (e) to secure the sustainability of the ‘Umbrella’ project after the completion of its funding. The innovation of the Umbrella project is in its holistic, person-centered approach that will provide individualized support -via the i-mentor network- to any disadvantaged student that comes ‘under the Umbrella.’
Keywords: peer mentoring, student support, socially disadvantaged students, volunteerism in higher education
Procedia PDF Downloads 234
5356 Social Media Diffusion and Implications for Opinion Leadership in Northcentral Nigeria
Authors: Chuks Odiegwu-Enwerem
Abstract:
The classical notion of opinion leadership presupposes that the media are at the center of an effective and successful opinion leadership. Under this idea, an opinion leader is an active media user who consumes, understands, digests and interprets messages for the understanding and acceptance/adoption of lower-end media users – whose access to and understanding of media content are supposedly low. Because of their unique access to and presumed understanding of media functions and their content, opinion leaders are typically esteemed by those who look forward to and accept their opinions. Lazarsfeld and Katz’s two-step flow of communication theory is the basis of opinion leadership – propelled by limited access to the media. With the emergence and spread of social media and its unlimited access by all and sundry, however, the study interrogates the relevance and application of opinion leaders and, by implication, the two-step flow communication theory in Nigeria’s Northcentral region. It seeks to determine whether opinion leaders still exist in the picture and whether they still exert considerable influence, especially in matters of political conversations and decision-making among the citizens of this area. It further explores whether the diffusion of social media is a reality, how the ‘low-end’ media users react to their new-found freedom of access to media and use it to inform their decisions on important matters, and whether they are still glued to their opinion leaders. The study explores the empirical dimensions of the two-step flow hypothesis in relation to the activities of social media to determine if a change has occurred and in what direction, using mixed methods of survey and in-depth interviews. Our understanding and belief in some theoretical assumptions may be enhanced or challenged by the study outcome.
Keywords: opinion leadership, active media user, two-step flow, social media, Northcentral Nigeria
Procedia PDF Downloads 70
5355 Speech Emotion Recognition with Bi-GRU and Self-Attention-Based Feature Representation
Authors: Bubai Maji, Monorama Swain
Abstract:
Speech is considered an essential and most natural medium for interaction between machines and humans. However, extracting effective features for speech emotion recognition (SER) remains challenging. Present studies show that temporal information is captured, but high-level temporal-feature learning is yet to be investigated. In this paper, we present an efficient novel method using the self-attention (SA) mechanism in a combination of a Convolutional Neural Network (CNN) and a Bi-directional Gated Recurrent Unit (Bi-GRU) network to learn high-level temporal features. In order to further enhance the representation of the high-level temporal features, we integrate the Bi-GRU output with learnable-weight features by SA, improving the performance. We evaluate our proposed method on our created SITB-OSED database and the IEMOCAP database. We report that the experimental results of our proposed method achieve state-of-the-art performance on both databases.
Keywords: Bi-GRU, 1D-CNNs, self-attention, speech emotion recognition
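A minimal PyTorch sketch of the architecture described above (not the authors' exact code): 1D convolution over acoustic frames, a Bi-GRU, and a self-attention layer whose learnable weights pool the Bi-GRU outputs into one utterance-level representation. Feature and layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class CnnBiGruAttention(nn.Module):
    def __init__(self, n_features=40, n_emotions=7, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 64, kernel_size=5, padding=2), nn.ReLU())
        self.bigru = nn.GRU(64, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)  # learnable attention scores
        self.out = nn.Linear(2 * hidden, n_emotions)

    def forward(self, x):                       # x: (batch, time, n_features)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)
        h, _ = self.bigru(h)                    # (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention over time steps
        utterance = (w * h).sum(dim=1)          # attention-weighted pooling
        return self.out(utterance)              # emotion logits

logits = CnnBiGruAttention()(torch.randn(8, 300, 40))  # 8 sample utterances
```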
Procedia PDF Downloads 113
5354 A Comparative Study on South-East Asian Leading Container Ports: Jawaharlal Nehru Port Trust, Chennai, Singapore, Dubai, and Colombo Ports
Authors: Jonardan Koner, Avinash Purandare
Abstract:
In today’s globalized world, international business is a key area for a country's growth. Some of the strategic areas for holding up a country’s international business growth are connecting ports, the road network, and the rail network. India’s international business is booming, both in exports and imports. Ports play a very central part in the growth of international trade, and ensuring competitive ports is of critical importance. India has a long coastline, which is a big asset for the country as it has given the opportunity for the development of a large number of major and minor ports which will contribute to the development of maritime trade. The national economic development of India requires a well-functioning seaport system. To know the comparative strength of Indian ports over similar South-east Asian ports, the study considers the objectives of (i) identifying the key parameters of an international mega container port, (ii) comparing the five selected container ports (JNPT, Chennai, Singapore, Dubai, and Colombo) according to users of the ports, and (iii) measuring the growth of the selected five container ports’ throughput over time and comparing them. The study is based on both primary and secondary databases. A linear time-trend analysis is done to show the trend in the quantum of exports, imports and total goods/services handled by individual ports over the years. A comparative trend analysis is done for the cargo traffic handled by the selected five ports in terms of tonnage (weight) and number of containers (TEUs), as well as between containerized and non-containerized cargo traffic in the five selected ports. The primary data analysis comprises a comparative analysis of factor ratings through bar diagrams, statistical inference of factor ratings for the selected five ports, consolidated comparative line charts and bar charts of factor ratings for the selected five ports, and the distribution of ratings (in frequency terms). A linear regression model is used to forecast the container capacities required for JNPT Port and Chennai Port by the year 2030. Multiple regression analysis is carried out to measure the impact of 34 selected explanatory variables on the ‘overall performance of the port’ for each of the selected five ports. The research outcome is of high significance to the stakeholders of Indian container-handling ports. The Indian container ports of JNPT and Chennai are benchmarked against international ports such as Singapore, Dubai, and Colombo, which are the competing ports in the neighbouring region. The study has analysed the feedback ratings for the selected 35 factors regarding physical infrastructure and services rendered to the port users. This feedback provides valuable data for carrying out improvements in the facilities provided to the port users, which would help the ports’ users to carry out their work in a more efficient manner.
Keywords: throughput, twenty-foot equivalent units (TEUs), cargo traffic, shipping lines, freight forwarders
Procedia PDF Downloads 131
5353 An ANN-Based Predictive Model for Diagnosis and Forecasting of Hypertension
Authors: Obe Olumide Olayinka, Victor Balanica, Eugen Neagoe
Abstract:
The effects of hypertension are often lethal; thus, its early detection and prevention are very important for everybody. In this paper, a neural network (NN) model was developed and trained based on a dataset of hypertension causative parameters in order to forecast the likelihood of occurrence of hypertension in patients. Our research goal was to analyze the potential of the presented NN to predict, for a period of time, the risk of hypertension or the risk of developing this disease for patients that are or are not currently hypertensive. The results of the analysis for a given patient can support doctors in taking pro-active measures to avert the occurrence of hypertension, such as recommendations regarding the patient's behavior in order to lower his hypertension risk. Moreover, the paper envisages a set of three example scenarios: to determine the age when the patient becomes hypertensive, i.e., the threshold for hypertensive age; to analyze what happens if the threshold hypertensive age is set to a certain age while the weight of the patient is varied; and to set the ideal weight for the patient and analyze what happens to the threshold of hypertensive age.
Keywords: neural network, hypertension, data set, training set, supervised learning
Procedia PDF Downloads 392
5352 EU-SOLARIS: The European Infrastructure for Concentrated Solar Thermal and Solar Chemistry Technologies
Authors: Vassiliki Drosou, Theoni Oikonomou
Abstract:
EU-SOLARIS will form a new legal entity to explore and implement improved rules and procedures for research infrastructures (RI) for concentrated solar thermal (CST) and solar chemistry technologies, in order to optimize RI development and R&D coordination. It is expected to be the first of its kind, where industrial needs and private funding will play a significant role. The success of the EU-SOLARIS initiative will lie in the establishment of a new governance body, aided by sustainable financial models. EU-SOLARIS is expected to be an important tool which will provide the most complete, high-quality scientific infrastructure portfolio at the international level and facilitate researchers' access to highly specialised research infrastructure through a single access point. This will be accomplished by linking the scientific communities, industry and universities involved in the CST sector. The access offered by EU-SOLARIS will guarantee direct contact between experienced scientists, newcomers and interested students. The set of RIs participating in EU-SOLARIS will offer access to state-of-the-art infrastructures and high-quality services, and will enable users to conduct high-quality research. Access to these facilities will contribute to the enhancement of the European research area by: -Opening installations to European and non-European scientists, coming from both academia and industry, thus improving co-operation. -Improving scientific critical mass in domains where knowledge is now widely dispersed. -Generating strong Europe-wide R&D project consortia, increasing the competitiveness of each member alone. EU-SOLARIS will be created in the framework of a European project co-funded by the 7th Framework Programme of the European Union, whose initiative is to foster, contribute to and promote the scientific and technological development of the CST and solar chemistry technologies. The primary objective of EU-SOLARIS is to contribute to the improvement of the state of the art of these technologies, with the aim of preserving and reinforcing the European leadership in this field, in which EU-SOLARIS is expected to be a valuable instrument. The scope, activities, objectives, current status and vision of EU-SOLARIS will be given in the article, and the rules, processes and criteria regulating access to the research infrastructures included in EU-SOLARIS will be presented.
Keywords: concentrated solar thermal (CST) technology, renewable energy sources, research infrastructures, solar chemistry
Procedia PDF Downloads 238
5351 Expanding the Atelier: Design-Led Academic Project Using Immersive User-Generated Mobile Images and Augmented Reality
Authors: David Sinfield, Thomas Cochrane, Marcos Steagall
Abstract:
While there is much hype around the potential and development of mobile virtual reality (VR), the two key critical success factors are the ease of the user experience and the development of a simple user-generated content ecosystem. Educational technology history is littered with the debris of over-hyped revolutionary new technologies that failed to gain mainstream adoption or were quickly superseded. Examples include 3D television, interactive CD-ROMs, Second Life, and Google Glass. However, we argue that this is the result of curriculum design that substitutes new technologies into pre-existing pedagogical strategies focused upon teacher-delivered content, rather than exploring new pedagogical strategies that enable student-determined learning, or heutagogy. Visual communication design-based learning, such as graphic design, illustration, photography and design process, is heavily based on the traditional form of the classroom environment, whereby student interaction takes place both at peer level and through teacher-based feedback. This makes for a healthy creative learning environment, but it does raise other issues in terms of student-to-teacher ratios and reduced contact time. Such issues arise when students are away from the classroom and cannot interact with their peers and teachers, and thus we see a decline in creative work from the students. Using AR and VR as a means of stimulating students to think beyond the limitations of the studio-based classroom, this paper will discuss the outcomes of a student project considering the virtual classroom and the techniques involved. The Atelier learning environment is especially suited to the visual communication model, as it deals with the creative processing of ideas that need to be shared in a collaborative manner. This has proven to be a successful model over the years in the traditional form of design education, but it has more recently seen a shift in thinking as we move into a more digital model of learning and away from the classical classroom structure. This study focuses on the outcomes of a student design project that employed augmented reality and virtual reality technologies in order to expand the dimensions of the classroom beyond its physical limits. Augmented reality, when integrated into the learning experience, can improve the learning motivation and engagement of students. This paper will outline some of the processes used and the findings from the semester-long project.
Keywords: augmented reality, blogging, design in community, enhanced learning and teaching, graphic design, new technologies, virtual reality, visual communications
Procedia PDF Downloads 238
5350 The Importance of Visual Communication in Artificial Intelligence
Authors: Manjitsingh Rajput
Abstract:
Visual communication plays an important role in artificial intelligence (AI) because it enables machines to understand and interpret visual information, similar to how humans do. This abstract explores the importance of visual communication in AI and emphasizes various applications such as computer vision, object recognition, image classification and autonomous systems, going deeper into the deep learning techniques and neural networks that drive visual understanding. It also discusses the challenges facing visual interfaces for AI, such as data scarcity, domain optimization, and interpretability, and explores the integration of visual communication with other modalities like natural language processing and speech recognition. Overall, this abstract highlights the critical role that visual communication plays in advancing AI capabilities and enabling machines to perceive and understand the world around them. The methodology explores the importance of visual communication in AI development and implementation, highlighting its potential to enhance the effectiveness and accessibility of AI systems, and provides a comprehensive approach to integrating visual elements into AI systems, making them more user-friendly and efficient. In conclusion, visual communication is crucial in AI systems for object recognition, facial analysis, and augmented reality, but challenges like data quality, interpretability, and ethics must be addressed. Visual communication enhances user experience, decision-making, accessibility, and collaboration, and developers can integrate visual elements to build efficient and accessible AI systems.
Keywords: visual communication in AI, computer vision, visual aid in communication, essence of visual communication
Procedia PDF Downloads 95
5349 Mitigating Denial of Service Attacks in Information Centric Networking
Authors: Bander Alzahrani
Abstract:
Information-centric networking (ICN), using architectures such as the Publish-Subscribe Internet Routing Paradigm (PSIRP), is one of the promising candidates for a future Internet and has recently been under the spotlight of the research community, which is investigating the possibility of redesigning the current Internet architecture to solve many issues such as routing scalability, security, and quality of service. Bloom filter-based forwarding is a source-routing approach used in the PSIRP architecture. This mechanism is vulnerable to brute-force attacks, which may lead to denial-of-service (DoS) attacks. In this work, we present a new forwarding approach that keeps the advantages of Bloom filter-based forwarding while mitigating attacks on the forwarding mechanism. In practice, we introduce a special type of forwarding node, called Edge-FW, to be placed at the edge of the network. The role of these nodes is to add an extra security layer by validating and inspecting packets at the edge of the network against brute-force attacks and checking whether a packet contains a legitimate forwarding identifier (FId) or not. We leverage a Certificateless Aggregate Signature (CLAS) scheme with a small size of 64 bits, which is used to sign the FId; the signature thus becomes bound to a specific FId. Therefore, malicious nodes that inject packets with random FIds will be easily detected and dropped at the Edge-FW node when the signature verification fails. Our preliminary security analysis suggests that, with the proposed approach, the forwarding plane is able to resist attacks such as DoS with very high probability.
Keywords: bloom filter, certificateless aggregate signature, denial-of-service, information centric network
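An illustrative sketch of the in-packet Bloom filter check that this style of forwarding relies on: a packet is forwarded over a link only if every bit position derived from that link's identifier is set in the packet's FId. The filter width and hash construction are assumptions; the point is that false positives (and brute-forced FIds) pass this test, which is exactly what the CLAS signature check at the Edge-FW node is meant to stop.

```python
import hashlib

M = 256  # Bloom filter width in bits (illustrative)

def bit_positions(link_id: bytes, k: int = 5):
    digest = hashlib.sha256(link_id).digest()
    return [int.from_bytes(digest[2*i:2*i+2], "big") % M for i in range(k)]

def add_link(fid: int, link_id: bytes) -> int:
    for pos in bit_positions(link_id):
        fid |= 1 << pos        # set the k bits for this link
    return fid

def matches(fid: int, link_id: bytes) -> bool:
    return all(fid >> pos & 1 for pos in bit_positions(link_id))

fid = add_link(add_link(0, b"link-A"), b"link-B")
print(matches(fid, b"link-A"))  # True: forward over link A
print(matches(fid, b"link-C"))  # False with high probability
```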
Procedia PDF Downloads 198
5348 A Simple Fluid Dynamic Model for Slippery Pulse Pattern in Traditional Chinese Pulse Diagnosis
Authors: Yifang Gong
Abstract:
Pulse diagnosis is one of the most important diagnostic methods in traditional Chinese medicine. It is also the trickiest method to learn; it is said that it can only be sensed, not explained. This has become a serious threat to the survival of this diagnostic method. However, there is a large amount of experience accumulated during the several thousand years of practice of Chinese doctors. A pulse pattern called the 'slippery pulse' is one of the indications of pregnancy. A simple fluid dynamic model is proposed to simulate the effects of the existence of a placenta. The placenta is modeled as an extra plenum in an extremely simplified fluid network model. It is found that, because of the existence of the extra plenum, the pulse pattern indeed shows a secondary peak in one pulse period. To the author’s knowledge, this work is the first to show the link between pulse diagnosis and basic physical principles. Key parameters which might affect the pattern are also investigated.
Keywords: Chinese medicine, flow network, pregnancy, pulse
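A hedged reconstruction of the kind of lumped fluid network the abstract describes; the structure (an extra compliant plenum coupled to the arterial plenum through an inert fluid column) and all parameter values are illustrative assumptions, not taken from the paper.

```python
# Two-plenum lumped model: P1 is the arterial ("pulse") pressure, P2 the
# pressure in the extra (placental) plenum, q the coupling flow. For
# suitable parameters the coupling makes P1 oscillate after the main
# peak, i.e., a secondary peak appears within one pulse period.
import numpy as np
from scipy.integrate import solve_ivp

C1, R1 = 1.0, 1.0      # arterial compliance and peripheral resistance
C2, R2 = 0.5, 2.0      # extra (placental) plenum
L, R12 = 0.01, 0.05    # coupling inertance and resistance

def q_in(t, period=1.0, ejection=0.3):
    """Pulsatile inflow: a half-sine ejection each heartbeat."""
    phase = t % period
    return np.sin(np.pi * phase / ejection) if phase < ejection else 0.0

def rhs(t, y):
    p1, p2, q = y
    dp1 = (q_in(t) - p1 / R1 - q) / C1   # mass balance, arterial plenum
    dp2 = (q - p2 / R2) / C2             # mass balance, extra plenum
    dq = (p1 - p2 - R12 * q) / L         # momentum of the coupling column
    return [dp1, dp2, dq]

sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 0.0, 0.0], max_step=0.002)
# Plotting sol.t against sol.y[0] shows the waveform; removing the second
# plenum (holding q = 0) gives a single-peaked pulse for comparison.
```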
Procedia PDF Downloads 383
5347 Analysis of Road Network Vulnerability Due to Merapi Volcano Eruption
Authors: Imam Muthohar, Budi Hartono, Sigit Priyanto, Hardiansyah Hardiansyah
Abstract:
The eruption of Merapi Volcano in Yogyakarta, Indonesia in 2010 caused many casualties due to minimal preparedness in facing the disaster. Increasing population capacity and evacuating to safe places become very important to minimize casualties. The regional government, through the Regional Disaster Management Agency, has divided disaster-prone areas into three parts, namely ring 1 at a distance of 10 km, ring 2 at a distance of 15 km and ring 3 at a distance of 20 km from the center of Mount Merapi. The success of an evacuation is fully supported by road network infrastructure as the rescue route in an emergency. This research attempts to model the evacuation process based on the rise of refugees in ring 1, expanded to ring 2 and finally expanded to ring 3. The model was developed using the SATURN (Simulation and Assignment of Traffic to Urban Road Networks) program, version 11.3.12W, involving 140 centroids, 449 buffer nodes, and 851 links across the Yogyakarta Special Region, and was aimed at making a preliminary identification of road networks considered vulnerable to disaster. An assumption made to identify vulnerability was the change in road network performance, in the form of flows and travel times, over the coverage of ring 1, ring 2, ring 3, Sleman outside the rings, Yogyakarta City, Bantul, Kulon Progo, and Gunung Kidul. The research results indicated a performance increase in the road networks in the areas of ring 2, ring 3, and Sleman outside the rings. The road network performance in ring 1 started to increase when the evacuation was expanded to ring 2 and ring 3. Meanwhile, the performance of road networks in Yogyakarta City, Bantul, Kulon Progo, and Gunung Kidul simultaneously decreased when the evacuation areas were expanded. The preliminary identification of vulnerability determined that the road networks in ring 1, ring 2, ring 3 and Sleman outside the rings were considered vulnerable during an evacuation for a Mount Merapi eruption. Therefore, it is necessary to pay a great deal of attention to them in order to face the disasters that may occur at any time.
Keywords: model, evacuation, SATURN, vulnerability
Procedia PDF Downloads 170
5346 Transmission Network Expansion Planning in Deregulated Power Systems to Facilitate Competition under Uncertainties
Authors: Hooshang Mohammad Alikhani, Javad Nikoukar
Abstract:
The restructuring and deregulation of the power industry have changed the objectives of transmission expansion planning and increased the uncertainties. Due to these changes, new approaches and criteria are needed for transmission planning in deregulated power systems. The objective of this research work is to present a new approach for transmission expansion planning that considers the new objectives and uncertainties in deregulated power systems. The approach must take into account the desires of all stakeholders in transmission expansion planning. Market-based criteria must be defined to achieve the new objectives. A combination of market-based, technical and economic criteria must be used for measuring the goodness of expansion plans, to achieve market requirements, technical requirements and economic requirements altogether.
Keywords: deregulated power systems, transmission network, stakeholder, energy systems
Procedia PDF Downloads 654
5345 Semi-Supervised Outlier Detection Using a Generative and Adversary Framework
Authors: Jindong Gu, Matthias Schubert, Volker Tresp
Abstract:
In many outlier detection tasks, only training data belonging to one class, i.e., the positive class, is available. The task is then to predict a new data point as belonging either to the positive class or to the negative class, in which case the data point is considered an outlier. For this task, we propose a novel Corrupted Generative Adversarial Network (CorGAN). In the adversarial process of training CorGAN, the Generator generates outlier samples for the negative class, and the Discriminator is trained to distinguish the positive training data from the generated negative data. The proposed framework is evaluated using an image dataset and a real-world network intrusion dataset. Our outlier-detection method achieves state-of-the-art performance on both tasks.
Keywords: one-class classification, outlier detection, generative adversarial networks, semi-supervised learning
Procedia PDF Downloads 151
5344 Efficiency and Scale Elasticity in Network Data Envelopment Analysis: An Application to International Tourist Hotels in Taiwan
Authors: Li-Hsueh Chen
Abstract:
Efficient operation is more and more important for hotel managers. Unlike the manufacturing industry, hotels cannot store their products. In addition, many hotels provide room service and food and beverage service simultaneously. When the efficiencies of hotels are evaluated, the internal structure should be considered. Hence, based on the operational characteristics of hotels, this study proposes a DEA model to simultaneously assess the efficiencies of the room production division, food and beverage production division, room service division and food and beverage service division. However, not only the enhancement of efficiency but also the adjustment of scale can improve performance. In terms of the adjustment of scale, scale elasticity or returns to scale can help managers to make decisions concerning expansion or contraction. In order to construct a reasonable approach to measure the efficiencies and scale elasticities of hotels, this study builds an alternative variable-returns-to-scale-based two-stage network DEA model, combining parallel and series structures, to explore the scale elasticities of the whole system, the room production division, the food and beverage production division, the room service division and the food and beverage service division, based on data from the international tourist hotel industry in Taiwan. The results may provide valuable information on operational performance and scale for managers and decision makers.
Keywords: efficiency, scale elasticity, network data envelopment analysis, international tourist hotel
Procedia PDF Downloads 225
5343 Hybrid Heat Pump for Micro Heat Network
Authors: J. M. Counsell, Y. Khalid, M. J. Stewart
Abstract:
Achieving nearly zero-carbon heating continues to be identified by UK government analysis as an important feature of any lowest-cost pathway to reducing greenhouse gas emissions. Heat currently accounts for 48% of UK energy consumption and approximately one third of the UK’s greenhouse gas emissions. Heat networks are being promoted by UK investment policies as one means of supporting hybrid heat pump based solutions. To this effect, the RISE (Renewable Integrated and Sustainable Electric) heating system project is investigating how an all-electric hybrid heating-sources configuration could play a key role in the long-term decarbonisation of heat. For the purposes of this study, hybrid systems are defined as systems combining the technologies of an electric-driven air source heat pump, electric-powered thermal storage, a thermal vessel and a micro heat network as an integrated system. This hybrid strategy allows the system to store up energy during periods of low electricity demand from the national grid, turning it into a dynamic supply of low-cost heat which is utilized only when required. Currently, a prototype of such a system is being tested in a modern house integrated with advanced controls and sensors. This paper presents the virtual performance analysis of the system and its design for a micro heat network with multiple dwelling units. The results show that the RISE system is controllable and can reduce carbon emissions whilst remaining competitive in running costs with a conventional gas boiler heating system.
Keywords: gas boilers, heat pumps, hybrid heating and thermal storage, renewable integrated and sustainable electric
Procedia PDF Downloads 419
5342 Don't Just Guess and Slip: Estimating Bayesian Knowledge Tracing Parameters When Observations Are Scant
Authors: Michael Smalenberger
Abstract:
Intelligent tutoring systems (ITS) are computer-based platforms which can incorporate artificial intelligence to provide step-by-step guidance as students practice problem-solving skills. ITS can replicate and even exceed some benefits of one-on-one tutoring, foster transactivity in collaborative environments, and lead to substantial learning gains when used to supplement the instruction of a teacher or when used as the sole method of instruction. A common facet of many ITS is their use of Bayesian Knowledge Tracing (BKT) to estimate parameters necessary for the implementation of the artificial intelligence component and for the probability of mastery of a knowledge component relevant to the ITS. While various techniques exist to estimate these parameters and the probability of mastery, none directly and reliably asks the user to self-assess these. In this study, 111 undergraduate students used an ITS in a college-level introductory statistics course for which detailed transaction-level observations were recorded, and users were also routinely asked direct questions that would lead to such a self-assessment. Comparisons were made between these self-assessed values and those obtained using commonly used estimation techniques. Our findings show that such self-assessments are particularly relevant at the early stages of ITS usage, while transaction-level data are scant. Once a user’s transaction-level data become available after sufficient ITS usage, they can replace the self-assessments in order to eliminate the identifiability problem in BKT. We discuss how these findings are relevant to the number of exercises necessary to reach mastery of a knowledge component, the associated implications for learning curves, and the relevance to instruction time.
Keywords: Bayesian Knowledge Tracing, Intelligent Tutoring System, in vivo study, parameter estimation
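For reference, the sketch below spells out the standard BKT update that the abstract's parameters refer to: prior mastery, the guess and slip probabilities, and the transition (learning) probability. Seeding the starting value from a self-assessment, as the study proposes for the early, data-scant stage, is shown as an assumption in the example.

```python
def bkt_update(p_mastery, correct, p_slip, p_guess, p_transit):
    """One Bayesian Knowledge Tracing step after observing a response."""
    if correct:
        evidence = p_mastery * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_mastery) * p_guess)
    else:
        evidence = p_mastery * p_slip
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - p_guess))
    # Learning opportunity: an unmastered skill may transition to mastered.
    return posterior + (1 - posterior) * p_transit

p = 0.3  # hypothetical self-assessed prior mastery, used while data are scant
for obs in [True, True, False, True]:
    p = bkt_update(p, obs, p_slip=0.1, p_guess=0.2, p_transit=0.15)
print(round(p, 3))  # updated probability of mastery
```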
Procedia PDF Downloads 172
5341 Robot Movement Using the Trust Region Policy Optimization
Authors: Romisaa Ali
Abstract:
The policy gradient approach is one of the deep reinforcement learning families; it combines deep neural networks (DNN) with reinforcement learning (RL) to discover the optimum of the control problem through experience gained from the interaction between the robot and its surroundings. In contrast to earlier policy gradient algorithms, which were unable to handle the errors of over- or under-estimation introduced by the deep neural network model, this article will discuss the state-of-the-art (SOTA) policy gradient technique, trust region policy optimization (TRPO), applying this method in various environments and comparing it with another policy gradient method, proximal policy optimization (PPO), to explain their robust optimization, using these SOTA methods to gather experience data during various training phases after observing the impact of hyper-parameters on neural network performance.
Keywords: deep neural networks, deep reinforcement learning, proximal policy optimization, state-of-the-art, trust region policy optimization
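To make the contrast concrete, here is a hedged PyTorch sketch of the two surrogate objectives: TRPO maximizes the surrogate subject to a KL-divergence trust region (solved with conjugate gradients in practice), while PPO approximates that trust region by clipping the probability ratio.

```python
import torch

def ppo_clipped_loss(new_logp, old_logp, advantages, eps=0.2):
    """PPO: clip the ratio pi_new/pi_old to [1 - eps, 1 + eps]."""
    ratio = torch.exp(new_logp - old_logp)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()  # pessimistic bound

# TRPO instead solves, per update:
#   maximize   E[ (pi_new/pi_old) * advantage ]
#   subject to E[ KL(pi_old || pi_new) ] <= delta
# which PPO's clipping emulates with a simple first-order objective.
```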
Procedia PDF Downloads 169
5340 Optimal Tracking Control of a Hydroelectric Power Plant Incorporating Neural Forecasting for Uncertain Input Disturbances
Authors: Marlene Perez Villalpando, Kelly Joel Gurubel Tun
Abstract:
In this paper, we propose an optimal control strategy for a hydroelectric power plant subject to input disturbances such as meteorological phenomena. The engineering characteristics of the system are described by a nonlinear model. The random availability of renewable sources is predicted by a high-order neural network trained with an extended Kalman filter, whereas the power generation is regulated by the optimal control law. The main advantage of the system is the stabilization of the amount of power generated in the plant. A control supervisor maintains stability and availability of hydropower reservoir water levels for power generation. The proposed approach demonstrated good performance in stabilizing the reservoir level and the power generation along their desired trajectories in the presence of disturbances.
Keywords: hydropower, high-order neural network, Kalman filter, optimal control
Procedia PDF Downloads 298
5339 Modeling and Control Design of a Centralized Adaptive Cruise Control System
Authors: Markus Mazzola, Gunther Schaaf
Abstract:
A vehicle driving with an adaptive cruise control system (ACC) is usually controlled decentrally, based on the information of radar systems and, in some publications, on C2X communication (CACC) to guarantee stable platoons. In this paper, we present a model predictive control (MPC) design of a centralized, server-based ACC system, whereby the vehicular platoon is modeled and controlled as a whole. It is then proven that the proposed MPC design guarantees asymptotic stability and hence string stability of the platoon. The networked MPC design is chosen to be able to integrate system constraints optimally as well as to reduce the effects of communication delay and packet loss. The performance of the proposed controller is then simulated and analyzed in an LTE communication scenario using the LTE/EPC network simulator LENA, which is based on the ns-3 network simulator.
Keywords: adaptive cruise control, centralized server, networked model predictive control, string stability
Procedia PDF Downloads 515
5338 Shedding Light on the Black Box: Explaining Deep Neural Network Prediction of Clinical Outcome
Authors: Yijun Shao, Yan Cheng, Rashmee U. Shah, Charlene R. Weir, Bruce E. Bray, Qing Zeng-Treitler
Abstract:
Deep neural network (DNN) models are being explored in the clinical domain, following their recent success in other domains such as image recognition. For clinical adoption, outcome prediction models require explanation, but due to the multiple non-linear inner transformations, DNN models are viewed by many as a black box. In this study, we developed a deep neural network model for predicting the 1-year mortality of patients who underwent major cardiovascular procedures (MCVPs), using a temporal image representation of past medical history as input. The dataset was obtained from the electronic medical data warehouse administered by the VA Informatics and Computing Infrastructure (VINCI). We identified 21,355 veterans who had their first MCVP in 2014. Features for prediction included demographics, diagnoses, procedures, medication orders, hospitalizations, and frailty measures extracted from clinical notes. Temporal variables were created based on the patient history data in the 2-year window prior to the index MCVP, and a temporal image was created from these variables for each individual patient. To generate an explanation for the DNN model, we defined a new concept called the impact score, based on the impact of the presence/value of a clinical condition on the predicted outcome. Like the (log) odds ratio reported by a logistic regression (LR) model, impact scores are continuous variables intended to shed light on the black-box model. For comparison, a logistic regression model was fitted on the same dataset. In our cohort, about 6.8% of patients died within one year. The prediction of the DNN model achieved an area under the curve (AUC) of 78.5%, while the LR model achieved an AUC of 74.6%. A strong but not perfect correlation was found between the aggregated impact scores and the log odds ratios (Spearman’s rho = 0.74), which helped validate our explanation.
Keywords: deep neural network, temporal data, prediction, frailty, logistic regression model
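One way the impact score concept can be realized is sketched below (an illustration, not necessarily the authors' exact definition): toggle the presence of a clinical condition in a patient's input and measure the change in predicted 1-year mortality risk on the log-odds scale, which makes the score directly comparable to an LR log odds ratio.

```python
import numpy as np

def impact_score(model, x, feature_index):
    """model: fitted classifier with predict_proba; x: one patient's
    feature vector; feature_index: the clinical condition to toggle."""
    x_with, x_without = x.copy(), x.copy()
    x_with[feature_index], x_without[feature_index] = 1.0, 0.0
    p_with = model.predict_proba(x_with.reshape(1, -1))[0, 1]
    p_without = model.predict_proba(x_without.reshape(1, -1))[0, 1]
    logit = lambda p: np.log(p / (1 - p))
    return logit(p_with) - logit(p_without)  # log-odds impact of condition
```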
Procedia PDF Downloads 153
5337 Data Augmentation for Early-Stage Lung Nodules Using Deep Image Prior and Pix2pix
Authors: Qasim Munye, Juned Islam, Haseeb Qureshi, Syed Jung
Abstract:
Lung nodules are commonly identified in computed tomography (CT) scans by experienced radiologists at a relatively late stage. Early diagnosis can greatly increase survival. We propose using a pix2pix conditional generative adversarial network to generate realistic images simulating early-stage lung nodule growth. We applied deep image prior to 2,341 slices from 895 computed tomography (CT) scans from the Lung Image Database Consortium (LIDC) dataset to generate pseudo-healthy medical images. From these images, 819 were chosen to train a pix2pix network. We observed that for most of the images, the pix2pix network was able to generate images where the nodule increased in size and intensity across epochs. To evaluate the images, 400 generated images were chosen at random and shown to a medical student beside their corresponding original images. Of these 400 generated images, 384 were judged satisfactory, meaning they resembled a nodule and were visually similar to the corresponding image. We believe that this generated dataset could be used as training data for neural networks to detect lung nodules at an early stage or to improve the accuracy of such networks. This is particularly significant as datasets containing the growth of early-stage nodules are scarce. This project shows that the combination of deep image prior and generative models could potentially open the door to creating larger datasets than currently possible and has the potential to increase the accuracy of medical classification tasks.
Keywords: medical technology, artificial intelligence, radiology, lung cancer
Procedia PDF Downloads 69
5336 A Review of Data Visualization Best Practices: Lessons for Open Government Data Portals
Authors: Bahareh Ansari
Abstract:
Background: The Open Government Data (OGD) movement in the last decade has encouraged many government organizations around the world to make their data publicly available to advance democratic processes. But current open data platforms have not yet reached their full potential in supporting all interested parties. To make the data useful and understandable for everyone, scholars have suggested that opening the data should be supplemented by visualization. However, different visualizations of the same information can dramatically change an individual’s cognitive and emotional experience in working with the data. This study reviews the data visualization literature to create a list of the methods empirically tested to enhance users’ performance and experience in working with a visualization tool. This list can be used in evaluating OGD visualization practices and informing future open data initiatives. Methods: Previous reviews of the visualization literature categorized visualization outcomes into four categories: recall/memorability, insight/comprehension, engagement, and enjoyment. To identify the papers, a search for these outcomes was conducted in the abstracts of the publications of top-tier visualization venues, including IEEE Transactions on Visualization and Computer Graphics, Computer Graphics, and the proceedings of the CHI Conference on Human Factors in Computing Systems. The search results are complemented by a search in the references of the identified articles, and a search for the keywords 'open data visualization' and 'visualization evaluation' in the IEEE Xplore and ACM digital libraries. Articles are included if they provide empirical evidence from controlled user experiments, or if they review such empirical studies. The qualitative synthesis of the studies focuses on identifying and classifying the methods, and the conditions under which they are shown to positively affect the visualization outcomes. Findings: The keyword search yields 760 studies, of which 30 are included after the title/abstract review. The classification of the included articles shows five distinct methods: interactive design; aesthetic (artistic) style; storytelling; decorative elements that do not provide extra information (including text, images, and embellishments on graphs); and animation. Studies on decorative elements show consistency on the positive effects of these elements on user engagement and recall but are less consistent in their examination of user performance. This inconsistency could be attributable to the particular data type or the specific design method used in each study. The interactive design studies are consistent in their findings of a positive effect on the outcomes. Storytelling studies show some inconsistencies regarding the design effect on user engagement, enjoyment, recall, and performance, which could be indicative of the specific conditions required for the use of this method. The last two methods, aesthetics and animation, appear less frequently in the included articles and provide consistent positive results on some of the outcomes. Implications for e-government: The review of visualization best-practice methods shows that each of these methods is beneficial under specific conditions. By using these methods under potentially beneficial conditions, OGD practices can encourage a wide range of individuals to engage with government data and ultimately take part in government policy-making procedures.
Keywords: best practices, data visualization, literature review, open government data
Procedia PDF Downloads 106