Search results for: collaborative networks

2451 Losing Benefits from Social Network Sites Usage: An Approach to Estimate the Relationship between Social Network Sites Usage and Social Capital

Authors: Maoxin Ye

Abstract:

This study examines the relationship between social network sites (SNS) usage and social capital. Because SNS usage can expand users’ networks, and the people connected through these networks may become resources for SNS users and give them an advantage in some situations, it is important to estimate the relationship between SNS usage and ‘who’ is connected, or what resources SNS users can access. Moreover, ‘who’ can be divided into two aspects – people who hold high positions and people who are different from the user – so it is also important to estimate the relationship between SNS usage and connections to high-position people and to different people. This study adopts Lin’s definition of social capital and the position-generator measurement, which identifies who is connected and can be divided into the same two aspects. A national dataset from the United States (N = 2,255) collected by the Pew Research Center is used to conduct a regression analysis of SNS usage and social capital. The results indicate that SNS usage is negatively associated with each factor of social capital, suggesting that, compared with non-users, SNS users gain more connections, but the variety and resources of those connections are fewer. For this reason, benefits may be lost through SNS usage.

Keywords: social network sites, social capital, position generator, general regression

Procedia PDF Downloads 259
2450 Proposing an Algorithm to Cluster Ad Hoc Networks, Modulating Two Levels of Learning Automaton and Nodes Additive Weighting

Authors: Mohammad Rostami, Mohammad Reza Forghani, Elahe Neshat, Fatemeh Yaghoobi

Abstract:

An ad hoc network consists of wireless mobile nodes that connect to each other without any fixed infrastructure. The best way to form a hierarchical structure in such a network is clustering, and different clustering methods can form more or less stable clusters depending on node mobility. In this research, we propose an algorithm that, in its first phase, allocates a weight to each node based on factors such as link stability and power reduction rate. In the second phase, according to the weights allocated in the previous phase, a cellular learning automaton picks out the nodes that are candidates for becoming cluster heads. In the third phase, the learning automaton selects the cluster head nodes and member nodes and forms the clusters. The automaton thus learns from its environment and can form clusters that are optimized in terms of power consumption and link stability. We simulated the proposed algorithm in OMNeT++ 4.2.2. Simulation results indicate that the newly formed clusters have a longer lifetime than those of previous algorithms and strongly decrease network overhead by reducing the update rate.
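
The weighting step can be illustrated with a short sketch. The example below is hypothetical: the factor names, weighting coefficients and number of candidates are assumptions rather than the authors' parameters, and it only shows how per-node weights combining link stability and power reduction rate could be ranked before the learning-automaton phases refine the cluster head selection.

# Hypothetical sketch of the node-weighting phase; coefficients are assumptions.
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    link_stability: float        # 0..1, higher means a more stable link
    power_reduction_rate: float  # battery drain per unit time, lower is better

def node_weight(n: Node, alpha: float = 0.7, beta: float = 0.3) -> float:
    # Additive weighting: reward stability, penalize fast battery drain.
    return alpha * n.link_stability - beta * n.power_reduction_rate

def candidate_cluster_heads(nodes, top_k: int = 2):
    # The cellular learning automaton would refine this choice; here we simply rank by weight.
    return sorted(nodes, key=node_weight, reverse=True)[:top_k]

nodes = [Node(1, 0.9, 0.05), Node(2, 0.6, 0.20), Node(3, 0.8, 0.10)]
print([n.node_id for n in candidate_cluster_heads(nodes)])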

Keywords: mobile Ad Hoc networks, clustering, learning automaton, cellular automaton, battery power

Procedia PDF Downloads 406
2449 Symbol Synchronization and Resource Reuse Schemes for Layered Video Multicast Service in Long Term Evolution Networks

Authors: Chung-Nan Lee, Sheng-Wei Chu, You-Chiun Wang

Abstract:

LTE (Long Term Evolution) employs the eMBMS (evolved Multimedia Broadcast/Multicast Service) protocol to deliver video streams to a multicast group of users. However, it requires all multicast members to receive a video stream at the same transmission rate, which degrades the overall service quality when some users encounter bad channel conditions. To overcome this problem, this paper provides two efficient resource allocation schemes for such LTE networks. The symbol synchronization (S2) scheme assumes that the macro and pico eNodeBs use the same frequency channel to deliver the video stream to all users; it then adopts a multicast transmission index to guarantee fairness among users. The resource reuse (R2) scheme, on the other hand, allows eNodeBs to transmit data on different frequency channels; by introducing the concept of frequency reuse, it can further improve the overall service quality. Extensive simulation results show that the S2 and R2 schemes improve fairness by around 50% and video quality by around 14%, respectively, compared with the common maximum-throughput method.
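
The fairness comparison above can be illustrated with a small sketch. The abstract does not state which fairness metric is used, so the example below assumes Jain's fairness index over per-user video rates, and the rate values are invented purely for illustration.

# Hedged sketch: Jain's fairness index over per-user multicast rates (values are invented).
def jain_fairness(rates):
    n = len(rates)
    return (sum(rates) ** 2) / (n * sum(r * r for r in rates))

max_throughput_rates = [12.0, 12.0, 12.0, 1.5]  # the bad-channel user is left behind
layered_rates        = [9.0, 9.0, 8.0, 6.0]     # per-user rates under layered multicast
print(jain_fairness(max_throughput_rates), jain_fairness(layered_rates))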

Keywords: LTE networks, multicast, resource allocation, layered video

Procedia PDF Downloads 385
2448 Aspect-Level Sentiment Analysis with Multi-Channel and Graph Convolutional Networks

Authors: Jiajun Wang, Xiaoge Li

Abstract:

The purpose of the aspect-level sentiment analysis task is to identify the sentiment polarity of aspects in a sentence. Most current methods focus on using neural networks and attention mechanisms to model the relationship between aspects and context, but they ignore the dependence of words at different ranges in the sentence, resulting in deviations when relationship weights are assigned to words other than the aspect words. To solve these problems, we propose a new aspect-level sentiment analysis model that combines a multi-channel convolutional network and a graph convolutional network (GCN). First, the context and the degree of association between words are characterized by a Long Short-Term Memory (LSTM) network and a self-attention mechanism. In addition, a multi-channel convolutional network is used to extract the features of words at different ranges. Finally, a graph convolutional network is used to aggregate node information over the dependency tree structure. We conduct experiments on four benchmark datasets, and comparison of the experimental results with those of other models shows that our model is more effective.
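
A minimal PyTorch sketch of the kind of architecture described above is given below. The layer sizes, the number of convolution channels, the toy inputs and the way the dependency-tree adjacency matrix is supplied are all assumptions made for illustration; the authors' exact model may differ.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AspectSentimentSketch(nn.Module):
    # Hedged sketch: BiLSTM + self-attention, multi-channel convolutions, one GCN step over the dependency tree.
    def __init__(self, vocab_size=10000, emb_dim=100, hid=64, n_classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid, batch_first=True, bidirectional=True)
        # Convolutions with different kernel sizes capture word dependencies at different ranges.
        self.convs = nn.ModuleList(
            [nn.Conv1d(2 * hid, hid, k, padding=k // 2) for k in (1, 3, 5)])
        self.gcn = nn.Linear(2 * hid, 2 * hid)   # one graph-convolution step: adj @ H @ W
        self.out = nn.Linear(3 * hid + 2 * hid, n_classes)

    def forward(self, tokens, adj):
        h, _ = self.lstm(self.emb(tokens))                   # (batch, seq, 2*hid)
        attn = torch.softmax(h @ h.transpose(1, 2), dim=-1)  # simple self-attention weights
        h = attn @ h
        conv_feats = [F.relu(c(h.transpose(1, 2))).max(dim=2).values for c in self.convs]
        gcn_feats = F.relu(self.gcn(adj @ h)).mean(dim=1)    # aggregate dependency-tree neighbours
        return self.out(torch.cat(conv_feats + [gcn_feats], dim=1))

model = AspectSentimentSketch()
tokens = torch.randint(0, 10000, (2, 12))   # a batch of 2 sentences, 12 tokens each
adj = torch.eye(12).repeat(2, 1, 1)         # placeholder dependency adjacency matrices
print(model(tokens, adj).shape)             # torch.Size([2, 3])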

Keywords: aspect-level sentiment analysis, attention, multi-channel convolution network, graph convolution network, dependency tree

Procedia PDF Downloads 207
2447 Feasibility Study on the Application of Waste Materials for Production of Sustainable Asphalt Mixtures

Authors: Farzaneh Tahmoorian, Bijan Samali, John Yeaman

Abstract:

Road networks have been expanding all over the world during the past few decades to meet the increasing freight volumes created by population growth and industrial development. At the same time, the rate of generation of solid wastes in society is increasing with population growth, technological development, and changes in people's lifestyles, so the management of solid wastes has become an acute problem. Accordingly, there is a need for greater efficiency in the construction and maintenance of road networks and for a reduction in the overall cost, especially through the utilization of natural materials such as aggregates. An efficient means of reducing the construction and maintenance costs of road networks is to replace natural (virgin) materials with secondary, recycled materials. Recycling will also help to reduce pressure on landfills and the demand for extraction of natural virgin materials, thus ensuring sustainability. The application of solid wastes in the asphalt layer reduces not only the environmental issues associated with waste disposal but also the demand for virgin materials, which will subsequently result in sustainability. Therefore, this research aims to investigate the feasibility of applying waste materials such as glass, construction and demolition wastes, etc., as alternative materials in pavement construction, particularly in flexible pavements. To this end, various combinations of different waste materials in certain percentages are considered in designing the asphalt mixture, and one of the goals of this research is to determine the optimum percentage of each of these materials in the mixture. This is done through a series of tests to evaluate the volumetric properties and resilient modulus of the mixture. The information and data collected from these tests are used to select adequate samples for further assessment through advanced tests, such as the triaxial dynamic test and the fatigue test, in order to investigate the asphalt mixture's resistance to permanent deformation and cracking. This paper presents the results of these investigations on the application of waste materials in asphalt mixtures for the production of a sustainable asphalt mix.

Keywords: asphalt, glass, pavement, recycled aggregate, sustainability

Procedia PDF Downloads 231
2446 Economic Life of Iranians on Instagram and the Disturbance in Politics

Authors: Mohammad Zaeimzade

Abstract:

The development of communication technologies is clearly and rapidly reducing the distance between the virtual and real worlds. The idea of living in a two-spatial or doubly globalized world – or any other formulation that means a mixing of real and virtual life – remains relevant and debatable. In the present age of communication, social networks have transformed the message equation and turned the audience from passive recipients into users, and platforms have penetrated widely into various aspects of human life, from culture and education to the economy. Among these platforms, Instagram, one of the most extensive image-based interactive networks, plays a significant role in this new economic life. It needs little explanation that the era of regarding every messenger as a neutral carrier has passed: every messenger has its own economic, political and, of course, security background, and Instagram is no exception, leaving its mark on economic life as well. Iran, as the 19th largest economy in the world, has not been unaffected by new platforms, including Instagram, and their consequences for the economy. Generally, in the policy-making space there are two simple and inflexible views on this issue, one pessimistic and one optimistic, and the holders of each view usually have their own one-dimensional policy recommendations regarding how to deal with Instagram – prescriptions that are usually very different and sometimes contradictory. In this article, we show that this confusion among policymakers results from an inaccurate description of Instagram's actual effect, and that the reason for this inaccurate description is a conflict of interest on the part of the describers and researchers. We first review the main indicators of the Iranian economy and estimate the role of the digital economy in Iran's economic growth; we then study the conflicting descriptions of the Instagram-based digital economy, whose number of economically active users in Iran has been estimated at anywhere from 300 thousand to 9 million. Finally, we examine the government's actions in this matter, especially in the context of the street riots of October and November 2022, and we suggest an intermediate approach.

Keywords: digital economy, instagram, conflict of interest, social networks

Procedia PDF Downloads 71
2445 Potentiality of a Community of Practice between Public Schools and the Private Sector for Integrating Sustainable Development into the School Curriculum

Authors: Aiydh Aljeddani, Fran Martin

Abstract:

The critical time in which we live requires rethinking many potential approaches in order to make the concept of sustainability and its principles an integral part of our daily life. One such approach is to attract community institutions, such as the private sector, to participate effectively in sustainability by supporting public schools in fulfilling their duties. A collaborative community of practice can serve this purpose and provide a flexible framework that allows the members of the community to participate effectively. This study, conducted in Saudi Arabia, aimed to understand how a collaborative community of practice that involves the private sector as a member can integrate the concept of sustainability into school activities and projects. The study employed a qualitative methodology to understand this authentic and complex phenomenon, following a case study approach, ethnography and some elements of action research. Unstructured interviews, artifacts, observation, and teachers’ field notes were used to collect the data. The participants were three secondary teachers, twelve chief executive officers, and one school administrative officer. The data show that certain contextual conditions should be taken into consideration when policy makers and school administrations in Saudi Arabia seek to integrate sustainability into school activities. The first of these is acknowledgement of the valuable role of the members’ personalities, efforts, abilities, and experiences, which played a vital role in integrating sustainability. Second, institutional culture, which was not expected to emerge as an important factor in this study, has a significant role in the integration of sustainability. Third, credibility among the members of the community towards integrating the concept of sustainability and its principles through school activities is another important condition. Fourth, some chief executive officers’ understanding of how Corporate Social Responsibility (CSR) can contribute to the sustainability agenda was shallow and limited, which could impede the successful integration of sustainability. Fifth, a shared understanding among the members of the community about integrating sustainability was a vital condition in the integration process. The study also revealed that the integration of sustainability cannot be an ongoing process if implemented in isolation from other community institutions such as the private sector. The study finally offers a number of recommendations to improve current practices and suggests areas for further study.

Keywords: community of practice, public schools, private sector, sustainable development

Procedia PDF Downloads 202
2444 Integration of Technology into Nursing Education: A Collaboration between College of Nursing and University Research Center

Authors: Lori Lioce, Gary Maddux, Norven Goddard, Ishella Fogle, Bernard Schroer

Abstract:

This paper presents the integration of technologies into nursing education through a collaborative effort between the College of Nursing (CoN) at the University of Alabama in Huntsville (UAH) and the UAH Systems Management and Production Center (SMAP). The faculty at the CoN conducts needs assessments to identify education and training requirements. A team of CoN faculty and SMAP engineers then prioritizes these requirements and establishes improvement/development teams. The development teams consist of nurses, who evaluate the models and provide feedback, and of undergraduate engineering students and their senior staff mentors from SMAP. The SMAP engineering staff develops and creates the physical models using 3D printing, silicone molds, and specialized molding mixtures and techniques. The collaboration has focused on developing teaching and training, or clinical, simulators. In addition, the onset of the Covid-19 pandemic intensified this relationship, as 3D modeling shifted to supplying personal protective equipment (PPE) to local health care providers. A secondary collaboration has been introducing students to clinical benchmarking through the UAH Center for Management and Economic Research. As a result of these successful collaborations, the Model Exchange & Development of Nursing & Engineering Technology (MEDNET) has been established. MEDNET seeks to extend and expand the linkage between engineering and nursing by connecting K-12 schools, technical schools, and medical facilities in the region to the resources available from the CoN and SMAP. For example, stereolithography (STL) files of the 3D printed models, along with the specifications to fabricate the models, are available on the MEDNET website. Ten 3D printed models have been developed and are currently in use by the CoN. The following additional training simulators are currently under development: 1) suture pads, 2) gelatin wound models, and 3) printed wound tattoos. Specification sheets describing the use, fabrication procedures, and parts list have been written for these simulators and are available for viewing and download on MEDNET. Included in this paper are 1) descriptions of the CoN, SMAP, and MEDNET, 2) the collaborative process used in product improvement/development, 3) 3D printed models of training and teaching simulators, 4) training simulators under development with specification sheets, 5) family care practice benchmarking, 6) integration of the simulators into the nursing curriculum, 7) utilization of MEDNET as a pandemic response, and 8) conclusions and lessons learned.

Keywords: 3D printing, nursing education, simulation, trainers

Procedia PDF Downloads 118
2443 Comparison of Deep Convolutional Neural Networks Models for Plant Disease Identification

Authors: Megha Gupta, Nupur Prakash

Abstract:

Identification of plant diseases has been performed using machine learning and deep learning models on datasets containing images of healthy and diseased plant leaves. The current study evaluates several deep learning models based on convolutional neural network (CNN) architectures for the identification of plant diseases. For this purpose, the publicly available New Plant Diseases Dataset, an augmented version of the PlantVillage dataset available on the Kaggle platform and containing 87,900 images, has been used. The dataset contains images of 26 diseases of 14 different plants and images of 12 healthy plants. The CNN models selected for the study are AlexNet, ZFNet, VGGNet (four models), GoogLeNet, and ResNet (three models). The selected models are trained using PyTorch, an open-source machine learning library, on Google Colaboratory. A comparative study has been carried out to analyze the accuracy achieved by these models. The highest test accuracy and F1-score, of 99.59% and 0.996, respectively, were achieved using GoogLeNet with a mini-batch, momentum-based gradient descent learning algorithm.
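
As an illustration of the training setup described above, the PyTorch sketch below builds a GoogLeNet for the 38 classes of the dataset (26 disease classes plus 12 healthy classes) and takes one step of mini-batch gradient descent with momentum. The hyperparameter values and the random stand-in batch are assumptions, not the settings reported in the study, and weights=None keeps the example runnable offline, whereas the study fine-tunes models pre-trained on ImageNet.

import torch
import torch.nn as nn
from torchvision import models

# 38 output classes = 26 disease classes + 12 healthy classes; hyperparameters are assumptions.
model = models.googlenet(weights=None, aux_logits=False, num_classes=38, init_weights=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)  # mini-batch SGD + momentum

# One illustrative training step on a random mini-batch (stand-in for the leaf-image loader).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 38, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))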

Keywords: comparative analysis, convolutional neural networks, deep learning, plant disease identification

Procedia PDF Downloads 193
2442 Free and Open Source Licences, Software Programmers, and the Social Norm of Reciprocity

Authors: Luke McDonagh

Abstract:

Over the past three decades, free and open source software (FOSS) programmers have developed new, innovative and legally binding licences that have in turn enabled the creation of innumerable pieces of everyday software, including Linux, Mozilla Firefox and Open Office. That FOSS has been highly successful in competing with 'closed source software' (e.g. Microsoft Office) is now undeniable, but in noting this success, it is important to examine in detail why this system of FOSS has been so successful. One key reason is the existence of networks or communities of programmers, who are bound together by a key shared social norm of 'reciprocity'. At the same time, these FOSS networks are not unitary – they are highly diverse and there are large divergences of opinion between members regarding which licences are generally preferable: some members favour the flexible ‘free’ or 'no copyleft' licences, such as BSD and MIT, while other members favour the ‘strong open’ or 'strong copyleft' licences such as GPL. This paper argues that without both the existence of the shared norm of reciprocity and the diversity of licences, it is unlikely that the innovative legal framework provided by FOSS would have succeeded to the extent that it has.

Keywords: open source, copyright, licensing, copyleft

Procedia PDF Downloads 368
2441 Identifying a Drug Addict Person Using Artificial Neural Networks

Authors: Mustafa Al Sukar, Azzam Sleit, Abdullatif Abu-Dalhoum, Bassam Al-Kasasbeh

Abstract:

Use and abuse of drugs by teenagers is very common and can have dangerous consequences. Drugs contribute to physical and sexual aggression such as assault or rape, and some teenagers regularly use drugs to compensate for depression, anxiety or a lack of positive social skills. Teenage smoking should not be minimized either, because tobacco can act as a 'gateway drug' to other drugs (marijuana, cocaine, hallucinogens, inhalants, and heroin). The combination of teenagers' curiosity, risk-taking behavior, and social pressure makes it very difficult to say no and leads many teenagers to the question: 'Will it hurt to try once?' Nowadays, technological advances are changing our lives very rapidly and adding many technologies that can help us track the risk of drug abuse, such as smartphones, Wireless Sensor Networks (WSNs), and the Internet of Things (IoT). These technologies may support the early discovery of drug abuse in order to prevent an aggravation of the influence of drugs on the abuser. In this paper, we have developed a Decision Support System (DSS) for detecting drug abuse using an Artificial Neural Network (ANN); a Multilayer Perceptron (MLP) feed-forward neural network was used in developing the system. The input layer includes 50 variables, while the output layer contains one neuron, which indicates whether the person is a drug addict. An iterative process was used to determine the number of hidden layers and the number of neurons in each one, and multiple experimental models were trained with the log-sigmoid transfer function. In particular, 10-fold cross-validation was used to assess the generalization of the proposed system. The experimental results show 98.42% classification accuracy for correct diagnosis. The data were taken from 184 cases in Jordan, based on a set of questions compiled by specialists, and were obtained through the families of drug abusers.
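
A compact scikit-learn sketch of this kind of classifier is shown below: a feed-forward MLP with logistic (log-sigmoid) activations evaluated with 10-fold cross-validation. The real 184-case Jordanian questionnaire data are not reproduced here, so synthetic stand-in data with 50 input variables are generated, and the hidden-layer size is an assumption.

from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the 184-case, 50-variable questionnaire data used in the study.
X, y = make_classification(n_samples=184, n_features=50, n_informative=12, random_state=0)

# MLP with logistic (log-sigmoid) activations; the hidden-layer size is an assumption.
clf = MLPClassifier(hidden_layer_sizes=(20,), activation="logistic",
                    max_iter=2000, random_state=0)

scores = cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0))
print("10-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))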

Keywords: drug addiction, artificial neural networks, multilayer perceptron (MLP), decision support system

Procedia PDF Downloads 295
2440 Human Performance Evaluating of Advanced Cardiac Life Support Procedure Using Fault Tree and Bayesian Network

Authors: Shokoufeh Abrisham, Seyed Mahmoud Hossieni, Elham Pishbin

Abstract:

In this paper, a hybrid method based on fault tree analysis (FTA) and Bayesian networks (BNs) is employed to evaluate the quality of team performance in advanced cardiac life support (ACLS) procedures in the emergency department. Based on American Heart Association (AHA) guidelines, a categorization of staff actions that lead to clinical incidents, and discussions with emergency medicine experts, a fault tree model of the ACLS procedure is obtained in terms of human performance. The obtained FTA model is converted into a BN, and several scenarios are defined to demonstrate the efficiency and flexibility of the presented BN model. A sensitivity analysis is also conducted to indicate the effects of team leader presence and of uncertainty in expert knowledge on the quality of ACLS. The proposed BN-based model shows how the results of risk analysis in medical procedures can be brought closer to reality compared with results obtained from FTA alone.
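
The conversion from a fault tree to a Bayesian network can be illustrated with a very small, hand-rolled example. The events, probabilities and structure below are hypothetical stand-ins (the actual ACLS fault tree is far larger); the sketch only shows how an OR gate over basic events becomes a probabilistic query, and how conditioning on a 'team leader present' scenario changes the failure probability.

import itertools

# Hypothetical mini-network: leader presence influences two human-error events,
# and an ACLS failure occurs (OR gate) if either error occurs. Numbers are invented.
p_leader = 0.8                                   # P(team leader present)
p_err = {                                        # P(error | leader present?)
    "drug_error":  {True: 0.05, False: 0.15},
    "delay_error": {True: 0.10, False: 0.25},
}

def p_failure(leader_present=None):
    # Enumerate all worlds; optionally condition on leader presence (scenario analysis).
    total = fail = 0.0
    leader_states = [leader_present] if leader_present is not None else [True, False]
    for leader in leader_states:
        p_l = 1.0 if leader_present is not None else (p_leader if leader else 1 - p_leader)
        for drug, delay in itertools.product([True, False], repeat=2):
            p = p_l
            p *= p_err["drug_error"][leader] if drug else 1 - p_err["drug_error"][leader]
            p *= p_err["delay_error"][leader] if delay else 1 - p_err["delay_error"][leader]
            total += p
            if drug or delay:                    # OR gate from the fault tree
                fail += p
    return fail / total

print("P(failure), leader unknown :", round(p_failure(), 4))
print("P(failure | leader present):", round(p_failure(True), 4))
print("P(failure | leader absent) :", round(p_failure(False), 4))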

Keywords: advanced cardiac life support, fault tree analysis, Bayesian belief networks, human performance, healthcare systems

Procedia PDF Downloads 142
2439 Use of Smartphones in 6th and 7th Grade (Elementary Schools) in Istria: Pilot Study

Authors: Maja Ruzic-Baf, Vedrana Keteles, Andrea Debeljuh

Abstract:

Younger and younger children are now using smartphones, devices which have become ‘a must have’ and without which the life of children would be almost ‘unthinkable’. The devices are becoming lighter and lighter while offering an array of options and applications, as well as the unavoidable access to the Internet, without which they would be almost unusable. Taking photographs, listening to music, searching for information on the Internet, accessing social networks, and using chat and messaging services are only some of the numerous features offered by ‘smart’ devices, which have replaced the alarm clock, home phone, camera, tablet and other devices. Their use and possession have become part of the everyday image of young people. Apart from the positive aspects, the use of smartphones also has some downsides. For instance, free time that used to be spent in nature, playing, doing sports or engaging in other activities that enable adequate psychophysiological growth and development is now often spent on the device, and greater usage of smartphones during classes to check statuses on social networks, message friends or play online games is just one of the possible negative aspects of their use. Considering that the age of the population using smartphones is decreasing and that smartphones are no longer ‘foreign’ to children of pre-school age (smartphones are used at home, in coffee shops or in shopping centers while waiting for parents, often for playing video games inappropriate to their age), particular attention must be paid to a very sensitive group: teenagers, who almost never separate from their ‘pets’. This paper is divided into two sections, a theoretical and an empirical one. The theoretical section gives an overview of the pros and cons of smartphone usage, while the empirical section presents the results of research conducted in three elementary schools regarding the usage of smartphones and, specifically, their usage during classes, during breaks, and for searching for information on the Internet and checking status updates and 'likes' on the Facebook social network.

Keywords: education, smartphone, social networks, teenagers

Procedia PDF Downloads 447
2438 Urban Green Transitioning in The Face of Current Global Change: The Management Role of the Local Government and Residents

Authors: Titilope F. Onaolapo, Christiana A. Breed, Maya Pasgaard, Kristine E. Jensen, Peta Brom

Abstract:

In the face of fast-growing urbanization in most of the world's developing countries, there is a need to understand and address the risks and consequences involved in the indiscriminate use of urban green space. The city of Tshwane in South Africa has the potential to become one of the world's top biodiversity cities: South Africa is ranked among the megadiverse countries for biodiversity conservation, and the Tshwane metropolitan municipality is the city with the richest biodiversity of grassland biomes. In this study, we focus on the potentials and challenges of urban green transitioning from a Global South perspective, with Tshwane as the case study. We also address the issue of management conflicts that have resulted in informal and illegal activities in and around green spaces, with consequences such as land degradation, loss of livelihoods and biodiversity, and socio-ecological imbalances. A desk review of eight policy frameworks related to green urban planning and development was done based on four green infrastructure (GI) principles: multifunctionality, connectivity, interdisciplinarity and social inclusion. We interviewed 15 key informants in related departments in the city, administered 200 survey questionnaires among residents, and held several workshops with other researchers and experts on biodiversity and ecosystems. We found that there is no specific document dedicated to green space management and that, where green infrastructure is mentioned, it is treated only as an approach to climate mitigation and adaptation; residents, moreover, perceive green and open spaces as extra land that could be developed at will. We demonstrate the use of collaborative learning approaches in ecological and development research and the tying of research to existing frameworks, programs, and strategies. Based on this understanding, we outline the need to incorporate the principles of green infrastructure into policy frameworks on spatial planning and environmental development. Furthermore, we develop a model for the co-management of green infrastructure by stakeholders, such as residents, developers, policymakers, and decision-makers, to maximize benefits. Our collaborative, interdisciplinary project pursues the multifunctionality of SDGs 11 and 15 by simultaneously addressing issues around Sustainable Cities and Communities, Climate Action, Life on Land, and Strong Institutions, and aims to halt and reverse land degradation and biodiversity loss.

Keywords: governance, green infrastructure, South Africa, sustainable development, urban planning, Tshwane

Procedia PDF Downloads 113
2437 Exploring Syntactic and Semantic Features for Text-Based Authorship Attribution

Authors: Haiyan Wu, Ying Liu, Shaoyun Shi

Abstract:

Authorship attribution aims to extract features that identify the authors of anonymous documents. Many previous works on authorship attribution focus on statistical style features (e.g., sentence/word length) and content features (e.g., frequent words, n-grams). Modeling these features by regression or other transparent machine learning methods gives a portrait of an author's writing style, but such methods do not capture syntactic (e.g., dependency relationships) or semantic (e.g., topic) information. In recent years, some researchers have modeled syntactic trees or latent semantic information with neural networks, but few works combine the two. Moreover, predictions made by neural networks are difficult to explain, and explainability is vital in authorship attribution tasks. In this paper, we not only utilize statistical style and content features but also take advantage of both syntactic and semantic features. Unlike an end-to-end neural model, our method separates feature selection and prediction into two steps: an attentive n-gram network is used to select useful features, and logistic regression is applied to produce the prediction and an understandable representation of writing style. Experiments show that our extracted features improve on the state-of-the-art methods on three benchmark datasets.
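
A minimal sketch of the two-step pipeline (feature extraction followed by a transparent logistic-regression classifier) is given below. Plain word n-gram counts stand in for the attentively selected features, and the toy texts and labels are invented for illustration only.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus standing in for documents of known authorship.
texts = ["the ship sailed at dawn", "dawn broke over the quiet ship",
         "profits rose sharply this quarter", "this quarter the firm profits rose"]
authors = ["A", "A", "B", "B"]

# Word uni/bi-gram counts approximate the style and content features; in the paper an
# attentive n-gram network would instead learn which n-grams to keep.
pipe = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
pipe.fit(texts, authors)
print(pipe.predict(["the ship sailed over the quiet dawn"]))

# The fitted coefficients give an interpretable picture of each author's style.
vec = pipe.named_steps["countvectorizer"]
clf = pipe.named_steps["logisticregression"]
print(sorted(zip(clf.coef_[0], vec.get_feature_names_out()))[-3:])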

Keywords: authorship attribution, attention mechanism, syntactic feature, feature extraction

Procedia PDF Downloads 132
2436 A Safety Analysis Method for Multi-Agent Systems

Authors: Ching Louis Liu, Edmund Kazmierczak, Tim Miller

Abstract:

Safety analysis for multi-agent systems is complicated by the potentially nonlinear interactions between agents. This paper proposes a method for analyzing the safety of multi-agent systems by explicitly focusing on interactions and on the accident data of systems that are similar in structure and function to the system being analyzed. The method creates a Bayesian network using the accident data from similar systems, and a feature of our method is that the events in the accident data are labeled with HAZOP guide words. Our method uses an ontology to abstract away from the details of a multi-agent implementation; using the ontology, it then constructs an “interaction map,” a graphical representation of the patterns of interactions between agents and other artifacts. Interaction maps, combined with statistical data from accidents and the HAZOP classifications of events, can be converted into a Bayesian network. Bayesian networks allow designers to explore “what if” scenarios and make design trade-offs that maintain safety. We show how to use the Bayesian networks and the interaction maps to improve multi-agent system designs.

Keywords: multi-agent system, safety analysis, safety model, interaction map

Procedia PDF Downloads 413
2435 Transboundary Pollution after Natural Disasters: Scenario Analyses for Uranium at Kyrgyzstan-Uzbekistan Border

Authors: Fengqing Li, Petra Schneider

Abstract:

Failure of tailings management facilities (TMF) holding radioactive residues is an enormous challenge worldwide and can result in major catastrophes. In transboundary regions in particular, such failure is likely to lead to international conflict. This risk exists between Kyrgyzstan and Uzbekistan, where the current major challenge is the quantification of impacts due to pollution from uranium legacy sites, especially the impact on river basins after natural hazards such as landslides. By means of GoldSim, a probabilistic simulation model, the amount of tailings material that flows into the river network of Mailuu Suu in Kyrgyzstan after a pond failure was simulated for three scenarios, namely 10%, 20%, and 30% of material input. Based on the Muskingum-Cunge flood routing procedure, the peak value of the uranium flood wave along the river network was then simulated. Among the 23 TMF, 19 ponds are close to the river network, and the spatiotemporal distribution of uranium along the network was simulated for all 19 ponds under the three scenarios. Taking TP7, which is 30 km from the Kyrgyzstan-Uzbekistan border, as an example, the uranium concentration decreased continuously along the longitudinal gradient of the river network; uranium was observed at the border 45 min after the pond failure, and the highest value was detected after 69 min. The highest concentrations of uranium at the border were 16.5, 33, and 47.5 mg/L under the scenarios of 10%, 20%, and 30% material input, respectively. In comparison with the guideline value for uranium in drinking water provided by the World Health Organization (30 µg/L), the observed concentrations at the border were 550‒1,583 times higher. In order to mitigate the transboundary impact of a radioactive pollutant release, an integrated framework consisting of three major strategies is proposed: the short-term strategy can be used in case of an emergency event, the medium-term strategy allows both countries to handle the TMF efficiently based on a benefit-sharing concept, and the long-term strategy aims to rehabilitate the sites through the relocation of all TMF.
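
The flood-routing step can be sketched with the standard Muskingum recursion, in which the outflow at each time step is a weighted combination of the current and previous inflows and the previous outflow; the Muskingum-Cunge variant additionally derives the K and X parameters from channel and flow properties. The reach parameters and the triangular inflow wave below are invented for illustration and are not the Mailuu Suu values.

import numpy as np

def muskingum_route(inflow, K=2.0, X=0.2, dt=1.0):
    # Classic Muskingum routing: O[t] = c0*I[t] + c1*I[t-1] + c2*O[t-1].
    denom = 2 * K * (1 - X) + dt
    c0 = (dt - 2 * K * X) / denom
    c1 = (dt + 2 * K * X) / denom
    c2 = (2 * K * (1 - X) - dt) / denom
    outflow = np.zeros_like(inflow, dtype=float)
    outflow[0] = inflow[0]
    for t in range(1, len(inflow)):
        outflow[t] = c0 * inflow[t] + c1 * inflow[t - 1] + c2 * outflow[t - 1]
    return outflow

# Invented triangular inflow wave (e.g., a uranium load pulse entering the reach).
inflow = np.array([0, 5, 15, 30, 20, 10, 5, 2, 0], dtype=float)
routed = muskingum_route(inflow, K=2.0, X=0.2, dt=1.0)
print("peak in: %.1f  peak out: %.1f (attenuated and delayed)" % (inflow.max(), routed.max()))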

Keywords: Central Asia, contaminant transport modelling, radioactive residue, transboundary conflict

Procedia PDF Downloads 112
2434 Analyzing the Factors that Cause Parallel Performance Degradation in Parallel Graph-Based Computations Using Graph500

Authors: Mustafa Elfituri, Jonathan Cook

Abstract:

Recently, graph-based computations have become more important in large-scale scientific computing, as they provide a methodology for modeling many types of relations between independent objects. They are actively used in fields as varied as biology, social networks, cybersecurity, and computer networks. At the same time, graph problems have properties such as irregularity and poor locality that make their performance different from that of regular applications, so parallelizing graph algorithms is a hard and challenging task. Initial evidence indicates that standard computer architectures do not perform very well on graph algorithms, but little is known about exactly what causes this. The Graph500 benchmark is a representative application for parallel graph-based computations, which have highly irregular data access and are driven more by traversing connected data than by computation. In this paper, we present results from analyzing the performance of several example implementations of Graph500, including a shared memory (OpenMP) version, a distributed (MPI) version, and a hybrid version. We measured and analyzed the factors that affect its performance in order to identify possible changes that would improve it, and the results are discussed in relation to which factors contribute to performance degradation.

Keywords: graph computation, graph500 benchmark, parallel architectures, parallel programming, workload characterization

Procedia PDF Downloads 139
2433 Enhancing Knowledge Graph Convolutional Networks with Structural Adaptive Receptive Fields for Improved Node Representation and Information Aggregation

Authors: Zheng Zhihao

Abstract:

Recently, the knowledge graph convolutional network (KGCN) has developed powerful capabilities in knowledge representation and reasoning tasks. However, traditional KGCNs often use a fixed-weight mechanism when aggregating information and fail to make full use of the rich structural information, which limits the expressive ability of node representations and easily causes over-smoothing problems. To address these challenges, this paper proposes a distinct graph neural network model called KGCN-STAR (Knowledge Graph Convolutional Network with Structural Adaptive Receptive Fields). The model dynamically adjusts the receptive field range of each node by introducing structural adaptive receptive fields, and a subgraph aggregator is designed to capture local structural information more effectively. Experimental results show that KGCN-STAR achieves significant performance improvements on multiple knowledge graph datasets, and it shows particularly strong capability in representation learning over complex structures.

Keywords: knowledge graph (KG), graph neural networks (GNN), structural adaptive receptive fields, information aggregation

Procedia PDF Downloads 7
2432 A Bayesian Network Approach to Customer Loyalty Analysis: A Case Study of Home Appliances Industry in Iran

Authors: Azam Abkhiz, Abolghasem Nasir

Abstract:

To achieve sustainable competitive advantage in the market, it is necessary to provide and improve customer satisfaction and loyalty. To reach this objective, companies need to identify and analyze their customers, so it is critical to measure the level of customer satisfaction and loyalty very carefully. This study attempts to build a conceptual model that provides clear insights into customer loyalty. Using Bayesian networks (BNs), a model is proposed to evaluate customer loyalty and its consequences, such as repurchase and positive word-of-mouth. A BN is a probabilistic approach that predicts the behavior of a system based on observed stochastic events. The most relevant determinants of customer loyalty are identified by the literature review: perceived value, service quality, trust, corporate image, satisfaction, and switching costs are the most important variables that explain customer loyalty. The data were collected by means of a questionnaire-based survey of 1,430 customers of a home appliances manufacturer in Iran. Four scenarios and sensitivity analyses are performed to run and analyze the impact of the different determinants on customer loyalty. The proposed model allows businesses not only to set their targets but also to proactively manage customer behavior.

Keywords: customer satisfaction, customer loyalty, Bayesian networks, home appliances industry

Procedia PDF Downloads 132
2431 Optimization of Monitoring Networks for Air Quality Management in Urban Hotspots

Authors: Vethathirri Ramanujam Srinivasan, S. M. Shiva Nagendra

Abstract:

Air quality management in urban areas is a serious concern in both developed and developing countries. In this regard, more air quality monitoring stations are planned in order to mitigate air pollution in urban areas. In India, the Central Pollution Control Board has set up 574 air quality monitoring stations across the country and has proposed to set up another 500 stations in the next few years, with the number of monitoring stations for each city decided based on population data. However, the setting up of ambient air quality monitoring stations and their operation and maintenance are highly expensive, so there is a need to optimize monitoring networks for air quality management. The present paper discusses various methods, such as the Indian Standards (IS) method, the US EPA method and the European Union (EU) method, for arriving at the minimum number of air quality monitoring stations. In addition, the rain-gauge optimization method and the Inverse Distance Weighted (IDW) method, applied using a Geographical Information System (GIS), are explored for the design of the air quality network in Chennai city. In summary, an additional 18 stations are required for Chennai city, and the potential monitoring locations, with their corresponding land use patterns, are ranked and identified from the 1 km x 1 km grids.
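
The IDW step mentioned above can be sketched in a few lines: the concentration at an unmonitored grid cell is estimated as a distance-weighted average of the measured stations. The station coordinates and concentrations below are invented; in the study this interpolation would be carried out over the 1 km x 1 km grid cells within a GIS.

import numpy as np

def idw(stations_xy, values, query_xy, power=2.0):
    # Inverse Distance Weighted estimate at query points from monitored stations.
    stations_xy, values, query_xy = map(np.asarray, (stations_xy, values, query_xy))
    d = np.linalg.norm(query_xy[:, None, :] - stations_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)          # avoid division by zero at station locations
    w = 1.0 / d ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Invented station coordinates (km) and pollutant concentrations (ug/m3).
stations = [(0, 0), (4, 0), (0, 4), (4, 4)]
conc = [60.0, 85.0, 40.0, 70.0]
grid_points = [(1, 1), (2, 2), (3, 1)]
print(idw(stations, conc, grid_points))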

Keywords: air quality monitoring network, inverse distance weighted method, population based method, spatial variation

Procedia PDF Downloads 183
2430 Explainable Graph Attention Networks

Authors: David Pham, Yongfeng Zhang

Abstract:

Graphs are an important structure for data storage and computation. Recent years have seen the success of deep learning on graphs, such as Graph Neural Networks (GNNs), on various data mining and machine learning tasks. However, most deep learning models on graphs cannot easily explain their predictions and are thus often labelled “black boxes.” For example, the Graph Attention Network (GAT) is a frequently used GNN architecture that adopts an attention mechanism to carefully select the neighborhood nodes for message passing and aggregation, but it is difficult to explain why certain neighbors are selected while others are not, and how the selected neighbors contribute to the final classification result. In this paper, we present a graph learning model called Explainable Graph Attention Network (XGAT), which integrates graph attention modeling and explainability. We use a single model to target both the accuracy and the explainability of the problem space and show that, in the context of graph attention modeling, we can design a unified neighborhood selection strategy that selects appropriate neighbor nodes for both better accuracy and enhanced explainability. To justify this, we conduct extensive experiments to better understand the behavior of our model under different conditions and show an increase in both accuracy and explainability.
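
A single-head graph attention layer of the kind GAT (and hence XGAT) builds on can be sketched as follows; the per-edge attention coefficients it produces are exactly the quantities an explainability method would inspect. The dimensions, dense adjacency representation and toy graph are assumptions made for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayerSketch(nn.Module):
    # Minimal single-head graph attention layer (dense adjacency, for illustration only).
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        h = self.W(x)                                   # (N, out_dim)
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.a(pairs)).squeeze(-1)     # raw attention score per (i, j) pair
        e = e.masked_fill(adj == 0, float("-inf"))      # only attend along existing edges
        alpha = torch.softmax(e, dim=-1)                # per-node attention over its neighbours
        return alpha @ h, alpha                         # alpha is what an explainer would inspect

x = torch.randn(4, 8)                                   # 4 nodes with 8 features each
adj = torch.tensor([[1, 1, 0, 0], [1, 1, 1, 0],
                    [0, 1, 1, 1], [0, 0, 1, 1]], dtype=torch.float)
out, alpha = GATLayerSketch(8, 16)(x, adj)
print(out.shape, alpha[1])                              # neighbour weights of node 1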

Keywords: explainable AI, graph attention network, graph neural network, node classification

Procedia PDF Downloads 182
2429 Training a Neural Network to Segment, Detect and Recognize Numbers

Authors: Abhisek Dash

Abstract:

This study used three convolutional neural networks, one for number segmentation, one for number detection and one for number recognition, all of which are coupled to one another and were trained on the MNIST dataset. It was assumed that the images had a lighter background and a darker foreground. The segmentation network took 28x28 images as input and had sixteen outputs. Segmentation training starts when a dark pixel is encountered. Taking a 7x7 window over that pixel as the focus, the eight-neighborhood of the focus is checked for further dark pixels, and the segmentation network is trained to move in those directions that contain dark pixels. To this end its sixteen outputs are arranged as “go east”, “don’t go east”, “go south east”, “don’t go south east”, “go south”, “don’t go south”, and so on with respect to the focus window. The focus window is resized into a 28x28 image, and the network is trained to consider those neighborhoods that contain dark pixels. The neighborhoods containing dark pixels are pushed into a queue in a particular order; they are then popped one at a time and stitched to the existing partial image of the number, and the network is trained on which neighborhoods to consider when the new partial image is presented. This process is repeated until the image is fully covered by the 7x7 neighborhoods and there are no more uncovered black pixels. During testing, the network scans for the first dark pixel; from there on, it predicts which neighborhoods to consider and segments the image. After this step, the group of neighborhoods is passed to the detection network, which took 28x28 images as input and had two outputs denoting whether or not a number was detected. Since the ground-truth bounds of each number were known during training, the detection network was trained to output 'number not found' until the bounds were met and 'number found' once they were. The recognition network was a standard CNN that also took 28x28 images and had 10 outputs for recognizing the digits 0 to 9; it was activated only when the detection network voted in favor of a detected number. This methodology can segment connected and overlapping numbers. Additionally, the recognition unit is invoked only when a number has been detected, which minimizes false positives, and the need for rules of thumb is eliminated because segmentation is learned. The strategy can also be extended to other characters.
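
The recognition network is described as a standard CNN with ten outputs; a minimal PyTorch sketch of such a network is given below. The layer sizes are assumptions, and the segmentation and detection networks described above are omitted.

import torch
import torch.nn as nn

# Hedged sketch of the recognition network only: a small CNN mapping 28x28 patches to 10 digit classes.
recognizer = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 28x28 -> 14x14
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 128), nn.ReLU(),
    nn.Linear(128, 10),                                           # one output per digit 0..9
)

# It would be invoked only on the 28x28 patch produced by the segmentation network, and only
# when the detection network has voted in favor of a detected number.
patch = torch.randn(1, 1, 28, 28)
print(recognizer(patch).argmax(dim=1))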

Keywords: convolutional neural networks, OCR, text detection, text segmentation

Procedia PDF Downloads 151
2428 Project Management Tools within SAP S/4 Hana Program Environment

Authors: Jagoda Bruni, Jan Müller-Lucanus, Gernot Stöger-Knes

Abstract:

The purpose of this article is to demonstrate modern project management approaches in an SAP S/4HANA program environment composed of multiple projects with diverse focuses. We propose innovative and goal-oriented management standards based on the specifics of SAP transformations and customer-driven expectations. Through the regular application of sprint-based controlling and management tools, the data show that extensive analysis of employees' productive hours, as well as a thorough review of project progress (per gap, per business process, and per lot) within the whole program, can have a positive impact on customer satisfaction and on the projects' budget. This has been a collaborative study based on real-life experience and on measurements gathered together with our customers.

Keywords: project management, program management, SAP, controlling

Procedia PDF Downloads 82
2427 Ordinary Differential Equations (ODE) Reconstruction of High-Dimensional Genetic Networks through Game Theory with Application to Dissecting Tree Salt Tolerance

Authors: Libo Jiang, Huan Li, Rongling Wu

Abstract:

Ordinary differential equations (ODEs) have proven to be powerful for reconstructing precise and informative gene regulatory networks (GRNs) from dynamic gene expression data. However, joint modeling and analysis of all genes, essential for the systematic characterization of genetic interactions, is challenging due to high dimensionality and a complex pattern of genetic regulation that includes activation, repression, and antitermination. Here, we address these challenges by unifying variable selection and game theory through ODEs. Each gene within a GRN is co-expressed with its partner genes in a manner resembling a game of multiple players, each of which tends to choose an optimal strategy to maximize its “fitness” across the whole network. Based on this unifying theory, we designed and conducted a real experiment to infer salt tolerance-related GRNs for the Euphrates poplar, a hero tree that can grow in the saline desert. The pattern and magnitude of the interactions between several hub genes within these GRNs were found to determine the capacity of the Euphrates poplar to resist saline stress.
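
The ODE view of a gene regulatory network can be illustrated with a tiny three-gene example solved numerically. The regulatory coefficients below (positive for activation, negative for repression) and the degradation rates are invented; in the paper, such coefficients are what the variable-selection (LASSO) and game-theoretic formulation estimate from dynamic expression data.

import numpy as np
from scipy.integrate import solve_ivp

# Invented 3-gene regulatory coefficients: W[i, j] is the effect of gene j on gene i
# (positive = activation, negative = repression); d holds degradation rates.
W = np.array([[0.0,  0.4, -0.5],
              [0.3,  0.0,  0.0],
              [-0.4, 0.3,  0.0]])
d = np.array([0.5, 0.4, 0.45])

def grn_ode(t, x):
    # dx_i/dt = sum_j W[i, j] * x_j - d_i * x_i  (a linear ODE sketch of the GRN)
    return W @ x - d * x

sol = solve_ivp(grn_ode, (0.0, 10.0), y0=[1.0, 0.5, 0.2], t_eval=np.linspace(0, 10, 5))
print(np.round(sol.y, 3))   # expression trajectories of the three genes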

Keywords: gene regulatory network, ordinary differential equation, game theory, LASSO, saline resistance

Procedia PDF Downloads 635
2426 A Study on the Application of Machine Learning and Deep Learning Techniques for Skin Cancer Detection

Authors: Hritwik Ghosh, Irfan Sadiq Rahat, Sachi Nandan Mohanty, J. V. R. Ravindra

Abstract:

In the rapidly evolving landscape of medical diagnostics, the early detection and accurate classification of skin cancer remain paramount for effective treatment outcomes. This research delves into the transformative potential of Artificial Intelligence (AI), specifically Deep Learning (DL), as a tool for discerning and categorizing various skin conditions. Utilizing a diverse dataset of 3,000 images representing nine distinct skin conditions, we confront the inherent challenge of class imbalance. This imbalance, where conditions like melanomas are over-represented, is addressed by incorporating class weights during the model training phase, ensuring an equitable representation of all conditions in the learning process. Our pioneering approach introduces a hybrid model, amalgamating the strengths of two renowned Convolutional Neural Networks (CNNs), VGG16 and ResNet50. These networks, pre-trained on the ImageNet dataset, are adept at extracting intricate features from images. By synergizing these models, our research aims to capture a holistic set of features, thereby bolstering classification performance. Preliminary findings underscore the hybrid model's superiority over individual models, showcasing its prowess in feature extraction and classification. Moreover, the research emphasizes the significance of rigorous data pre-processing, including image resizing, color normalization, and segmentation, in ensuring data quality and model reliability. In essence, this study illuminates the promising role of AI and DL in revolutionizing skin cancer diagnostics, offering insights into its potential applications in broader medical domains.
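
A sketch of the hybrid idea described above, concatenating features from a VGG16 and a ResNet50 backbone before a shared classifier and applying class weights in the loss, is shown below in PyTorch. The classifier head, image size and weight values are assumptions, and weights=None keeps the example runnable offline, whereas the study starts from ImageNet-pretrained weights.

import torch
import torch.nn as nn
from torchvision import models

class HybridSkinClassifier(nn.Module):
    # Hedged sketch: VGG16 and ResNet50 features concatenated, then a small classifier head.
    def __init__(self, n_classes=9):
        super().__init__()
        # weights=None keeps this runnable offline; the study initialises from ImageNet weights.
        vgg = models.vgg16(weights=None)
        resnet = models.resnet50(weights=None)
        self.vgg_features = nn.Sequential(vgg.features, nn.AdaptiveAvgPool2d(1))  # -> 512 features
        self.resnet_features = nn.Sequential(*list(resnet.children())[:-1])       # -> 2048 features
        self.head = nn.Linear(512 + 2048, n_classes)

    def forward(self, x):
        a = self.vgg_features(x).flatten(1)
        b = self.resnet_features(x).flatten(1)
        return self.head(torch.cat([a, b], dim=1))

model = HybridSkinClassifier()
# Class weights counteract the imbalance between the nine conditions; the values are invented.
class_weights = torch.tensor([1.0, 2.0, 1.5, 1.0, 3.0, 1.0, 2.5, 1.0, 1.2])
criterion = nn.CrossEntropyLoss(weight=class_weights)
logits = model(torch.randn(2, 3, 224, 224))
print(criterion(logits, torch.tensor([0, 4])))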

Keywords: artificial intelligence, machine learning, deep learning, skin cancer, dermatology, convolutional neural networks, image classification, computer vision, healthcare technology, cancer detection, medical imaging

Procedia PDF Downloads 78
2425 A Review of Current Trends in Grid Balancing Technologies

Authors: Kulkarni Rohini D.

Abstract:

While emerging as plausible sources of energy generation, new technologies, including photovoltaic (PV) solar panels, home battery energy storage systems, and electric vehicles (EVs), are complicating the operation of power distribution networks for distribution network operators (DNOs). Renewable energy production fluctuates, resulting in over- and under-generation of energy and further complicating the issue of storing excess power and using it when necessary. Although renewable sources are inexhaustible and recurring, the storage of generated energy is almost as important as its production. Hence, to ensure smooth and efficient power storage at different levels, grid balancing technologies are the next theme to address in the sustainable energy sector. Whereas hydrogen batteries were used in earlier days to achieve this balance in power grids, recent advancements are more efficient and more capable per unit of storage space, while also being distinctive in terms of their underlying operating principles. The underlying technologies of flow batteries, gravity solutions, and graphene batteries have already entered the market and are leading the race for efficient storage solutions that will improve and stabilize grid networks.

Keywords: flow batteries, grid balancing, hydrogen batteries, power storage, solar

Procedia PDF Downloads 63
2424 Performance Enrichment of Deep Feed Forward Neural Network and Deep Belief Neural Networks for Fault Detection of Automobile Gearbox Using Vibration Signal

Authors: T. Praveenkumar, Kulpreet Singh, Divy Bhanpuriya, M. Saimurugan

Abstract:

This study analysed the classification accuracy for gearbox faults using machine learning techniques. Gearboxes are widely used for mechanical power transmission in rotating machines, and their rotating components, such as bearings, gears, and shafts, tend to wear due to prolonged usage, causing fluctuating vibrations. Increasing the dependability of mechanical components like a gearbox is hampered by their sealed design, which makes visual inspection difficult; one way of detecting impending failure is to detect a change in the vibration signature. The current study applies various machine learning algorithms to these vibration signals to obtain the fault classification accuracy of an automotive 4-speed synchromesh gearbox. Experimental data in the form of vibration signals were acquired from a 4-speed synchromesh gearbox using a Data Acquisition System (DAQ). Statistical features were extracted from the acquired vibration signals under various operating conditions, and the extracted features were given as input to the algorithms for fault classification. Supervised machine learning algorithms such as Support Vector Machines (SVM), as well as Deep Feed Forward Neural Network (DFFNN) and Deep Belief Network (DBN) algorithms, were used for fault classification. A fusion of the DBN and DFFNN classifiers was architected to further enhance the classification accuracy and to reduce the computational complexity. The fault classification accuracy of each algorithm was thoroughly studied, tabulated, and graphically analysed for the fused and individual algorithms. In conclusion, the fusion of the DBN and DFFNN algorithms yielded the best classification accuracy and was selected for fault detection due to its faster computational processing and greater efficiency.
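
The statistical feature-extraction step can be illustrated with a short sketch. The feature set below (RMS, peak, crest factor, kurtosis, skewness, standard deviation) is a typical choice for vibration signals and is an assumption rather than the authors' exact set; such feature vectors would then be fed to the SVM, DFFNN, DBN or fused classifiers.

import numpy as np
from scipy.stats import kurtosis, skew

def vibration_features(signal):
    # Statistical features commonly extracted from one frame of a gearbox vibration signal.
    rms = np.sqrt(np.mean(signal ** 2))
    peak = np.max(np.abs(signal))
    return {
        "rms": rms,
        "peak": peak,
        "crest_factor": peak / rms,
        "kurtosis": kurtosis(signal),
        "skewness": skew(signal),
        "std": np.std(signal),
    }

# Synthetic stand-in for one DAQ frame (a gear-mesh-like sinusoid plus noise).
t = np.linspace(0, 1, 2048)
frame = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.randn(t.size)
print(vibration_features(frame))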

Keywords: deep belief networks, DBN, deep feed forward neural network, DFFNN, fault diagnosis, fusion of algorithm, vibration signal

Procedia PDF Downloads 106
2423 A Long Short-Term Memory Based Deep Learning Model for Corporate Bond Price Predictions

Authors: Vikrant Gupta, Amrit Goswami

Abstract:

The fixed income market forms the basis of the modern financial market, and all other assets in financial markets derive their value from the bond market. Owing to their over-the-counter nature, corporate bonds have relatively little publicly available data and are thus researched far less than equities. Bond price prediction is a complex financial time series forecasting problem and is considered very crucial in the domain of finance. Bond prices are highly volatile and full of noise, which makes it very difficult for traditional statistical time-series models to capture the complexity of the series patterns, leading to inefficient forecasts. To overcome the inefficiencies of statistical models, various machine learning techniques were initially used in the literature for more accurate forecasting of time series. However, simple machine learning methods such as linear regression, support vector machines and random forests fail to provide efficient results when tested on highly complex sequences such as stock prices and bond prices. Hence, to capture these intricate sequence patterns, various deep learning-based methodologies have been discussed in the literature. In this study, a recurrent neural network-based deep learning model using long short-term memory (LSTM) networks for the prediction of corporate bond prices is discussed. LSTM networks have been widely used in the literature for sequence learning tasks in domains such as machine translation and speech recognition, and in recent years several studies have shown promising results when forecasting complex time-series sequences with LSTMs compared with other methodologies. LSTMs are a special kind of recurrent neural network capable of learning long-term dependencies, owing to a memory function that traditional neural networks lack. In this study, a simple LSTM, a stacked LSTM and a masked LSTM based model are discussed with respect to varying input sequences (three days, seven days and 14 days). In order to facilitate faster learning and to gradually decompose the complexity of the bond price sequence, Empirical Mode Decomposition (EMD) has been used, which improved the accuracy of the standalone LSTM model. With a variety of technical indicators and the EMD-decomposed time series, the masked LSTM outperformed its two counterparts in terms of prediction accuracy. To benchmark the proposed model, the results were compared with traditional time series models (ARIMA), shallow neural networks and the three LSTM models discussed above. In summary, our results show that the use of LSTM models provides more accurate results and should be explored further within the asset management industry.
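
A minimal PyTorch sketch of the simple (single-layer) LSTM variant is given below, trained on a synthetic price series with a seven-day look-back window. The stacked and masked variants, the technical indicators and the EMD pre-decomposition are omitted, and all sizes, learning rates and the synthetic data are assumptions made for illustration.

import torch
import torch.nn as nn

class PriceLSTM(nn.Module):
    # Hedged sketch of the simple LSTM variant: a look-back window of prices -> next price.
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])          # regress from the last hidden state

# Synthetic bond-price-like random walk and 7-day look-back windows (stand-in for real data).
series = torch.cumsum(0.1 * torch.randn(200), dim=0) + 100.0
window = 7
X = torch.stack([series[i:i + window] for i in range(len(series) - window)]).unsqueeze(-1)
y = series[window:].unsqueeze(-1)

model = PriceLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(50):                            # short illustrative training loop
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
print("final training MSE:", float(loss))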

Keywords: bond prices, long short-term memory, time series forecasting, empirical mode decomposition

Procedia PDF Downloads 133
2422 Network Based Molecular Profiling of Intracranial Ependymoma over Spinal Ependymoma

Authors: Hyeon Su Kim, Sungjin Park, Hae Ryung Chang, Hae Rim Jung, Young Zoo Ahn, Yon Hui Kim, Seungyoon Nam

Abstract:

Ependymoma, one of the most common parenchymal spinal cord tumors, represents 3-6% of all CNS tumors. Intracranial ependymomas in particular, which are more frequent in childhood, have a poorer prognosis and are more malignant than spinal ependymomas. Although there is a growing need to understand its pathogenesis, a detailed molecular understanding remains to be established. A cancer cell harbors complex signaling pathway networks, and identifying the interactions between genes and/or proteins is crucial for understanding these pathways. Therefore, we explored each type of ependymoma in terms of differentially expressed genes and signaling networks. We used Microsoft Excel™ to manipulate microarray data gathered from NCBI's GEO database, and we used the web-based PATHOME algorithm and Cytoscape to analyze and visualize the signaling networks. We show that the HOX gene family and NEFL are down-regulated and the SCL family is up-regulated in cerebral and posterior fossa cancers relative to spinal cancer, and that the JAK/STAT and chemokine signaling pathways differ significantly in both intracranial ependymomas compared with spinal ependymoma. We consider that there may be an age-dependent mechanism underlying the different histological pathogeneses. We subsequently annotated the mutation data of each gene in order to find potential target genes.

Keywords: systems biology, ependymoma, differentially expressed genes (DEG), network analysis

Procedia PDF Downloads 296