Search results for: network monitoring
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7514

6164 Tabu Search to Draw Evacuation Plans in Emergency Situations

Authors: S. Nasri, H. Bouziri

Abstract:

Disasters are frequently experienced nowadays. They are caused by floods, landslides, and building fires, the last of which is the focus of this study. To cope with these unexpected events, precautions must be taken to protect human lives. This work addresses the evacuation problem in the case of a no-notice disaster. Evacuation is formulated as a dynamic network flow problem; in particular, we model it as an earliest arrival flow problem with load-dependent transit times, which is NP-hard. Our challenge is to propose a metaheuristic for solving the evacuation problem. The objective is to maximize the number of evacuees during the earliest periods of a time horizon T, so that people are evacuated as soon as possible. We performed an experimental study on emergency evacuation from the Tunisian children's hospital. This work prompts us to look for evacuation plans corresponding to several situations in which the network changes dynamically.
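
The tabu search metaheuristic applied here can be sketched generically. The following is a minimal illustration of the tabu-list mechanism on a toy minimization problem, not the authors' evacuation model; the objective function and neighborhood are hypothetical stand-ins for the earliest-arrival-flow objective.

```python
def tabu_search(f, start, neighbors, iterations=200, tenure=7):
    """Minimize f with tabu search: a short-term memory of recently
    visited solutions forbids immediate backtracking."""
    best = current = start
    tabu = []  # FIFO list of recently visited solutions
    for _ in range(iterations):
        # Admissible moves: non-tabu neighbors, plus tabu ones that
        # would beat the best found so far (aspiration criterion).
        candidates = [n for n in neighbors(current)
                      if n not in tabu or f(n) < f(best)]
        if not candidates:
            break
        current = min(candidates, key=f)
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)
        if f(current) < f(best):
            best = current
    return best

# Toy problem: minimize (x - 3)^2 over the integers, starting far away.
best = tabu_search(lambda x: (x - 3) ** 2, start=50,
                   neighbors=lambda x: [x - 1, x + 1])
```

The tabu list is what distinguishes this from plain hill climbing: after reaching a local optimum, the search is forced to move away and can escape it.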

Keywords: dynamic network flow, load dependent transit time, evacuation strategy, earliest arrival flow problem, tabu search metaheuristic

Procedia PDF Downloads 372
6163 Centrality and Patent Impact: Coupled Network Analysis of Artificial Intelligence Patents Based on Co-Cited Scientific Papers

Authors: Xingyu Gao, Qiang Wu, Yuanyuan Liu, Yue Yang

Abstract:

In the era of the knowledge economy, the relationship between scientific knowledge and patents has garnered significant attention. Understanding the intricate interplay between the foundations of science and technological innovation has emerged as a pivotal challenge for both researchers and policymakers. This study establishes a coupled network of artificial intelligence patents based on co-cited scientific papers. Leveraging centrality metrics from network analysis offers a fresh perspective on how information flow and knowledge sharing within the network affect patent impact. The study initially obtained patent numbers for 446,890 granted US AI patents from the United States Patent and Trademark Office's artificial intelligence patent database for the years 2002-2020. Subsequently, specific information regarding these patents was acquired using the Lens patent retrieval platform. Additionally, a search and deduplication process was performed on scientific non-patent references (SNPRs) using the Web of Science database, resulting in the selection of 184,603 patents that cited 37,467 unique SNPRs. Finally, this study constructs a coupled network comprising 59,379 artificial intelligence patents by utilizing scientific papers co-cited in patent backward citations. In this network, nodes represent patents, and if two patents reference the same scientific papers, a connection is established between them, serving as an edge within the network. Nodes and edges together constitute the patent coupling network. Structural characteristics such as node degree centrality, betweenness centrality, and closeness centrality are employed to assess the scientific connections between patents, while citation count is utilized as a quantitative metric for patent influence. A negative binomial model is then employed to test the nonlinear relationship between these network structural features and patent influence.
The research findings indicate that network structural features such as node degree centrality, betweenness centrality, and closeness centrality exhibit inverted U-shaped relationships with patent influence. Specifically, as these centrality metrics increase, patent influence initially shows an upward trend, but once these features reach a certain threshold, patent influence starts to decline. This discovery suggests that moderate network centrality is beneficial for enhancing patent influence, while excessively high centrality may have a detrimental effect on patent influence. This finding offers crucial insights for policymakers, emphasizing the importance of encouraging moderate knowledge flow and sharing to promote innovation when formulating technology policies. It suggests that in certain situations, data sharing and integration can contribute to innovation. Consequently, policymakers can take measures to promote data-sharing policies, such as open data initiatives, to facilitate the flow of knowledge and the generation of innovation. Additionally, governments and relevant agencies can achieve broader knowledge dissemination by supporting collaborative research projects, adjusting intellectual property policies to enhance flexibility, or nurturing technology entrepreneurship ecosystems.
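
The centrality measures used in the study can be illustrated on a toy coupling network. The sketch below, with hypothetical patents P1-P3 and papers S1-S2, builds edges between patents that co-cite a paper and then computes degree and closeness centrality from first principles (betweenness is analogous but longer to implement).

```python
from collections import deque

# Hypothetical patents P1-P3 citing scientific papers S1-S2.
citations = {"P1": {"S1"}, "P2": {"S1", "S2"}, "P3": {"S2"}}

# Coupling network: an edge links two patents that co-cite a paper.
adj = {p: set() for p in citations}
for p in citations:
    for q in citations:
        if p != q and citations[p] & citations[q]:
            adj[p].add(q)

def degree_centrality(adj):
    """Share of other nodes each node is directly linked to."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def closeness_centrality(adj):
    """(n - 1) / (sum of shortest-path distances), via BFS per node."""
    result = {}
    for s in adj:
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total = sum(dist.values())
        result[s] = (len(adj) - 1) / total if total else 0.0
    return result
```

On this toy graph, P2 co-cites with both neighbors and attains the maximal degree centrality of 1.0, mirroring the "high centrality" regime the study examines.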

Keywords: centrality, patent coupling network, patent influence, social network analysis

Procedia PDF Downloads 54
6162 Cluster Based Ant Colony Routing Algorithm for Mobile Ad-Hoc Networks

Authors: Alaa Eddien Abdallah, Bajes Yousef Alskarnah

Abstract:

Ant colony based routing algorithms are known to guarantee packet delivery, but they suffer from the huge overhead of the control messages needed to discover routes. In this paper, we use the positions of network nodes to group them into connected clusters, and only cluster heads forward the route discovery control messages. Our simulations show that the new algorithm decreases the overhead dramatically without affecting the delivery rate.
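
A position-based cluster-head election of the kind described might look like the following sketch. The grid-cell rule and nearest-to-centre election are our assumptions for illustration; the paper does not specify its exact clustering procedure.

```python
from collections import defaultdict

def cluster_heads(nodes, cell):
    """Group nodes into square grid cells by position; the node nearest
    each cell centre is elected cluster head."""
    cells = defaultdict(list)
    for name, (x, y) in nodes.items():
        cells[(int(x // cell), int(y // cell))].append(name)
    heads = {}
    for (cx, cy), members in cells.items():
        centre = ((cx + 0.5) * cell, (cy + 0.5) * cell)
        heads[(cx, cy)] = min(
            members,
            key=lambda m: (nodes[m][0] - centre[0]) ** 2
                        + (nodes[m][1] - centre[1]) ** 2)
    return heads

# Hypothetical node positions; only "b" and "c" would forward discovery messages.
heads = cluster_heads({"a": (1, 1), "b": (2, 2), "c": (9, 9)}, cell=5)
```

Restricting route-discovery forwarding to the elected heads is what cuts the control-message overhead.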

Keywords: ad-hoc network, MANET, ant colony routing, position based routing

Procedia PDF Downloads 425
6161 Three-Stage Least Squares Models of Station-Level Subway Ridership: Incorporating an Analysis of Integrated Transit Network Topology Measures

Authors: Jungyeol Hong, Dongjoo Park

Abstract:

The urban transit system is a critical part of the solution to economic, energy, and environmental challenges, and it ultimately contributes to improving people's quality of life. To realize these advantages, the city of Seoul has built an integrated transit system combining subway and buses, and approximately 6.9 million citizens now use it every day. Diagnosing the current transit network is an important task for providing a more convenient and pleasant transit environment. The objective of this study is therefore to establish a methodological framework for analyzing an integrated bus-subway network and to examine the relationship between subway ridership and parameters such as network topology measures, bus demand, and a variety of commercial business facilities. Regarding statistical approaches to estimating station-level subway ridership, many previous studies relied on Ordinary Least Squares regression, but few considered the endogeneity issues that can arise in subway ridership prediction models. This study focuses both on the impact of integrated transit network topology measures and on the endogenous effect of bus demand on subway ridership, ultimately contributing to more accurate, less biased subway ridership estimation. The spatial scope of the study covers the city of Seoul, South Korea, including 243 subway stations and 10,120 bus stops; the temporal scope is twenty-four hours in one-hour panels. Detailed subway and bus ridership information was collected from Seoul Smart Card data for 2015 and 2016. First, integrated subway-bus network topology measures characterizing connectivity, centrality, transitivity, and reciprocity were estimated based on complex network theory.
The results of the integrated transit network topology analysis were compared with a subway-only network topology. A non-recursive approach, Three-Stage Least Squares (3SLS), was then applied to develop a daily subway ridership model that captures the endogeneity between bus and subway demand. Independent variables included roadway geometry, commercial business characteristics, socio-economic characteristics, a safety index, transit facility attributes, and dummies for season and time of day. The network topology measures were found to have significant effects. In particular, the elasticity of subway ridership was 4.88% for closeness centrality and 24.48% for betweenness centrality, while the elasticity with respect to bus ridership was 8.85%. Moreover, bus demand and subway ridership were shown to be endogenous in a non-recursive manner: predicted bus ridership and predicted subway ridership were statistically significant in OLS regression models. The three-stage least squares model therefore appears to be a plausible model for efficient subway ridership estimation. The proposed approach is expected to provide a reliable guideline that can be used as part of the spectrum of tools for evaluating a city-wide integrated transit network.
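
The endogeneity problem the study addresses with 3SLS can be illustrated with its two-stage core. The simulation below is entirely hypothetical (not the Seoul data): a shared unobserved shock biases OLS, while instrumenting the endogenous regressor recovers the true coefficient of 0.5.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                 # instrument (shifts bus supply only)
shock = rng.normal(size=n)             # shared unobserved shock -> endogeneity
bus = 1.0 + 2.0 * z + shock + rng.normal(size=n)       # endogenous regressor
subway = 3.0 + 0.5 * bus + shock + rng.normal(size=n)  # outcome, true beta = 0.5

def ols(y, X):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

beta_ols = ols(subway, bus)[1]          # biased upward by the shared shock
stage1 = ols(bus, z)                    # stage 1: project bus on the instrument
bus_hat = stage1[0] + stage1[1] * z
beta_2sls = ols(subway, bus_hat)[1]     # stage 2: close to the true 0.5
```

Full 3SLS additionally exploits the cross-equation error covariance for efficiency, but the bias correction shown here is the essential mechanism.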

Keywords: integrated transit system, network topology measures, three-stage least squares, endogeneity, subway ridership

Procedia PDF Downloads 177
6160 Feasibility of Voluntary Deep Inspiration Breath-Hold Radiotherapy Technique Implementation without Deep Inspiration Breath-Hold-Assisting Device

Authors: Auwal Abubakar, Shazril Imran Shaukat, Noor Khairiah A. Karim, Mohammed Zakir Kassim, Gokula Kumar Appalanaido, Hafiz Mohd Zin

Abstract:

Background: Voluntary deep inspiration breath-hold radiotherapy (vDIBH-RT) is an effective cardiac dose reduction technique during left breast radiotherapy. This study aimed to assess the accuracy of implementing the vDIBH technique for left breast cancer patients without the use of a special device such as a surface-guided imaging system. Methods: The vDIBH-RT technique was implemented for thirteen (13) left breast cancer patients at the Advanced Medical and Dental Institute (AMDI), Universiti Sains Malaysia. Breath-hold was monitored through breath-hold skin marks and laser light congruence observed on zoomed CCTV images from the control console during each delivery. The initial setup was verified using cone beam computed tomography (CBCT) during breath-hold. Each field was delivered in multiple beam segments so that each delivery lasted 20 seconds, which patients can tolerate in breath-hold. The data were analysed using an in-house MATLAB algorithm, and the PTV margin was computed based on van Herk's margin recipe. Results: Analysis of the setup errors from CBCT shows that the population systematic errors in the lateral (x), longitudinal (y), and vertical (z) axes were 2.28 mm, 3.35 mm, and 3.10 mm, respectively. Based on the CBCT image guidance, the planning target volume (PTV) margin required for vDIBH-RT with the CCTV/laser monitoring technique is 7.77 mm, 10.85 mm, and 10.93 mm in the x, y, and z axes, respectively. Conclusion: It is feasible to safely implement vDIBH-RT for left breast cancer patients without special equipment. The breath-hold monitoring technique is cost-effective, radiation-free, easy to implement, and allows real-time breath-hold monitoring.
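
Van Herk's margin recipe, used here to compute the PTV margin, combines the systematic error (Sigma) and random error (sigma) components. A minimal sketch follows; note that the abstract reports only the systematic errors and the final margins, so the random-error value used in the example is a hypothetical one chosen to land near the reported lateral margin.

```python
def van_herk_margin(sigma_sys, sigma_rand):
    """van Herk CTV-to-PTV margin (mm): M = 2.5 * Sigma + 0.7 * sigma."""
    return 2.5 * sigma_sys + 0.7 * sigma_rand

# Lateral axis: systematic error 2.28 mm (from the abstract) plus a
# hypothetical random error of 3.0 mm gives a margin near the reported 7.77 mm.
margin_x = van_herk_margin(2.28, 3.0)
```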

Keywords: vDIBH, cone beam computed tomography, radiotherapy, left breast cancer

Procedia PDF Downloads 57
6159 Calculation of the Normalized Difference Vegetation Index and the Spectral Signature of Coffee Crops: Benefits of Image Filtering on Mixed Crops

Authors: Catalina Albornoz, Giacomo Barbieri

Abstract:

Crop monitoring has been shown to reduce vulnerability to the spread of pests and pathologies in crops. Remote sensing with Unmanned Aerial Vehicles (UAVs) has made crop monitoring more precise, cost-efficient, and accessible. Nowadays, remote monitoring involves calculating maps of vegetation indices using software that takes either true-color (RGB) or multispectral images as input. These maps are then used to segment the crop into management zones. Finally, the spectral signature of a crop (the reflected radiation as a function of wavelength) can be used as an input for decision making and crop characterization. The calculation of vegetation indices using software such as Pix4D is highly precise for monoculture plantations. However, this paper shows that using such software on mixed crops may lead to errors resulting in an incorrect segmentation of the field. In this work, the authors propose filtering out all elements other than the main crop before calculating the vegetation indices and the spectral signature. A filter based on the Sobel method for border detection is used to filter a coffee crop. Results show that the segmentation into management zones changes with respect to the traditional, unfiltered situation; in particular, the values of the spectral signature change by up to 17% per spectral band. Future work will quantify the benefits of filtering through a comparison between in situ measurements and the vegetation indices obtained through remote sensing.
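
The Sobel border-detection method used for the filter can be sketched as a pair of 3x3 gradient kernels. A minimal pure-Python version follows (a real pipeline would use an image-processing library on the UAV imagery):

```python
def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels (valid region only)."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient
    h, w = len(img), len(img[0])
    out = [[0.0] * (w - 2) for _ in range(h - 2)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = sum(kx[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            gy = sum(ky[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            out[i - 1][j - 1] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge yields a strong response along the boundary.
edges = sobel_magnitude([[0, 0, 1, 1]] * 4)
```

Pixels with a high gradient magnitude mark borders (e.g., between coffee plants and interleaved vegetation) and can be masked out before computing vegetation indices.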

Keywords: coffee, filtering, mixed crop, precision agriculture, remote sensing, spectral signature

Procedia PDF Downloads 388
6158 Coupling Random Demand and Route Selection in the Transportation Network Design Problem

Authors: Shabnam Najafi, Metin Turkay

Abstract:

The network design problem (NDP) is used to determine optimal values for pre-specified decision variables, such as the capacity expansion of nodes and links, by optimizing various system performance measures including safety, congestion, and accessibility. The designed transportation network should improve the system's objective functions while accounting for the route choice behavior of network users. NDP studies have mostly investigated random demand and route selection constraints separately because of computational challenges; in this work, we consider both simultaneously. We present a nonlinear stochastic model for the land use and road network design problem that addresses the development of different functional zones in urban areas by considering both a cost function and air pollution. The model minimizes cost and air pollution simultaneously, under random demand and a stochastic route selection constraint, and aims to optimize network performance via road capacity expansion. The Bureau of Public Roads (BPR) link impedance function is used as the travel time function on each link. We consider a city with origin and destination nodes, which can be residential, employment, or both, and a set of existing paths between origin-destination (O-D) pairs. The case of an increasing employed population is analyzed to determine road capacities and origin zones simultaneously, minimizing the travel and expansion costs of routes and origin zones on one side and CO emissions on the other. Demand between O-D pairs is random, and the network flow pattern is subject to stochastic user equilibrium, specifically a logit route choice model. Treating both demand and route choice as random makes the model more applicable to urban network design programs.
The epsilon-constraint method can solve both linear and nonlinear multi-objective problems, and it is the method used in this work. The problem is solved by keeping the first objective (the cost function) as the objective function and treating the second objective as a constraint that must be less than an epsilon, where epsilon is an upper bound on the emission function. The value of epsilon is varied from the worst to the best value of the emission function to generate the family of solutions representing the Pareto set. A numerical example with 2 origin zones, 2 destination zones, and 7 links is solved with GAMS, and the set of Pareto points is obtained. There are 15 efficient solutions; as the cost function value increases, the emission function value decreases, and vice versa.
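
The BPR link impedance function mentioned above has the standard form t = t0 * (1 + alpha * (v/c)^beta). The classic parameterization uses alpha = 0.15 and beta = 4; the paper does not state its parameter values, so these defaults are assumptions.

```python
def bpr_travel_time(t0, volume, capacity, alpha=0.15, beta=4):
    """BPR link impedance: t = t0 * (1 + alpha * (v / c) ** beta).

    t0 is the free-flow travel time, volume the link flow, capacity
    the practical link capacity."""
    return t0 * (1.0 + alpha * (volume / capacity) ** beta)

free_flow = bpr_travel_time(10.0, 0, 1000)       # 10.0 min at zero volume
at_capacity = bpr_travel_time(10.0, 1000, 1000)  # 11.5 min at v = c
```

Expanding a link's capacity flattens this curve, which is how the capacity-expansion decision variables feed back into travel times and the logit route choice.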

Keywords: epsilon-constraint, multi-objective, network design, stochastic

Procedia PDF Downloads 647
6157 Predicting Indonesia External Debt Crisis: An Artificial Neural Network Approach

Authors: Riznaldi Akbar

Abstract:

In this study, we compare the in-sample and out-of-sample performance of an Artificial Neural Network (ANN) model with the back-propagation algorithm in predicting external debt crises in Indonesia. We find that the exchange rate, foreign reserves, and exports are the major determinants of external debt crises. The ANN's in-sample performance is relatively strong: the model correctly classifies 89.12 per cent of crisis episodes with a reasonably low false alarm rate of 7.01 per cent. Out of sample, prediction performance deteriorates markedly compared with the in-sample results, which can be explained by the ANN model over-fitting the in-sample data. Ten-fold cross-validation was used to improve out-of-sample prediction accuracy. The results also offer policy implications: out-of-sample performance can be very sensitive to sample size, yielding a higher total misclassification error and lower prediction accuracy. The ANN model can identify past crisis episodes with some accuracy, but predicting crises outside the estimation sample is much more challenging because of the presence of uncertainty.
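
The 10-fold cross-validation used to improve out-of-sample accuracy can be sketched as follows. The splitter below is a generic illustration (contiguous folds, no shuffling), not the authors' implementation.

```python
def k_fold_indices(n, k=10):
    """Yield (train, test) index lists for k contiguous folds of 0..n-1.

    Each observation appears in exactly one test fold; the remaining
    observations form that fold's training set."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

folds = list(k_fold_indices(12, k=5))  # fold sizes 3, 3, 2, 2, 2
```

Averaging the classification error over the k held-out folds gives a less optimistic estimate of out-of-sample performance than in-sample fit alone.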

Keywords: debt crisis, external debt, artificial neural network, ANN

Procedia PDF Downloads 442
6156 Location Detection of Vehicular Accident Using Global Navigation Satellite Systems/Inertial Measurement Units Navigator

Authors: Neda Navidi, Rene Jr. Landry

Abstract:

Vehicle tracking and accident recognition are of interest to many industries, such as insurance and vehicle rental companies. The main goal of this paper is to detect the location of a car accident by combining different methods: Global Navigation Satellite Systems/Inertial Measurement Units (GNSS/IMU)-based navigation and vehicle accident detection algorithms. These operate on a set of raw measurements obtained from a custom-designed black-box integrator using GNSS and inertial sensors. A further contribution of this paper is an accident detection algorithm based on the vehicle's jerk, used to identify the position of the accident. The results show that, even in GNSS blockage areas, the position of an accident can be detected by GNSS/INS integration with a 50% improvement compared to GNSS alone.
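
A jerk-based detector of the kind described can be sketched as a threshold on the discrete derivative of acceleration. The sampling interval and threshold below are hypothetical; a production system would tune them against real crash data.

```python
def detect_jerk_events(accel, dt, threshold):
    """Return sample indices where |da/dt| (finite difference of the
    acceleration series, in m/s^3) exceeds a threshold."""
    events = []
    for i in range(1, len(accel)):
        jerk = (accel[i] - accel[i - 1]) / dt
        if abs(jerk) > threshold:
            events.append(i)
    return events

# A sudden 30 m/s^2 spike at 10 Hz sampling produces two jerk events
# (the onset and the release of the spike).
events = detect_jerk_events([0.0, 0.0, 0.0, 30.0, 0.0],
                            dt=0.1, threshold=100.0)
```

Time-stamping such an event and looking up the GNSS/INS position estimate at that instant yields the accident location even during short GNSS outages.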

Keywords: driver behavior monitoring, integration, IMU, GNSS, monitoring, tracking

Procedia PDF Downloads 234
6155 An Evaluation of Neural Network Efficacies for Image Recognition on Edge-AI Computer Vision Platform

Authors: Jie Zhao, Meng Su

Abstract:

Image recognition, one of the most critical technologies in computer vision, helps machines and robots understand a scene and, if deployed appropriately, will drive major advances in remote sensing and industrial automation. With the development of AI technologies, many sophisticated neural networks are available for image recognition. However, the computer vision platforms, the hardware supporting these neural networks, are just as crucial as the network technologies and deserve to be addressed as research subjects in their own right, because different platforms determine how well different neural networks perform. In this paper, three computer vision platforms, a Jetson Nano (with 4 GB), a standalone laptop (with an RTX 3000-series GPU, using CUDA), and Google Colab (web-based, using a GPU), are explored, and four prominent neural network architectures (AlexNet, VGG-16/19, GoogLeNet, and ResNet-18/34/50) are investigated. Performance is evaluated for each pairing of platform and network in terms of recognition accuracy and time efficiency. In a case study using public ImageNet data, our findings provide a nuanced perspective on optimizing image recognition tasks across Edge-AI platforms, offering guidance on selecting appropriate neural network structures to maximize performance under hardware constraints.

Keywords: AlexNet, VGG, GoogLeNet, ResNet, Jetson Nano, CUDA, COCO-NET, CIFAR-10, ImageNet Large Scale Visual Recognition Challenge (ILSVRC), Google Colab

Procedia PDF Downloads 90
6154 Building Capacity and Personnel Flow Modeling for Operating amid COVID-19

Authors: Samuel Fernandes, Dylan Kato, Emin Burak Onat, Patrick Keyantuo, Raja Sengupta, Amine Bouzaghrane

Abstract:

The COVID-19 pandemic has spread across the United States, forcing cities to impose stay-at-home and shelter-in-place orders. Building operations had to adjust as non-essential personnel worked from home, and as buildings prepare for personnel to return, they need to plan for safe operation under new COVID-19 guidelines. In this paper, we propose a methodology for modeling the capacity and flow of personnel within buildings so that they can operate safely under these guidelines. Personnel flow is modeled as a network flow with queuing constraints, and we study maximum flow, minimum cost, and minimax objectives. We compare our network flow approach with a simulation model through a case study and present the results, which showcase various scenarios for operating buildings under new COVID-19 guidelines and provide a framework for building operators to plan and operate buildings in this new paradigm.
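
The maximum-flow objective studied here can be illustrated with the classic Edmonds-Karp algorithm on a toy building graph. The graph below (entrance s, rooms a and b, exit t, with occupancy-limited corridor capacities) is hypothetical, not the paper's case study.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp: augment along shortest residual paths until none remain."""
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u in list(residual):                   # add reverse (zero) arcs
        for v in list(residual[u]):
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:           # BFS in the residual graph
            u = q.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                        # recover the s -> t path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(residual[u][v] for u, v in path)
        for u, v in path:                      # push flow, update residuals
            residual[u][v] -= push
            residual[v][u] += push
        flow += push

# Hypothetical building: entrance s, rooms a/b, exit t; arc weights are
# how many people per time step a corridor admits under distancing rules.
capacity = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}, "t": {}}
throughput = max_flow(capacity, "s", "t")
```

The queuing constraints in the paper's model bound how many people can wait at a node; the plain max-flow value above is the uncongested upper bound on building throughput.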

Keywords: network analysis, building simulation, COVID-19

Procedia PDF Downloads 160
6153 Airborne Particulate Matter Passive Samplers for Indoor and Outdoor Exposure Monitoring: Development and Evaluation

Authors: Kholoud Abdulaziz, Kholoud Al-Najdi, Abdullah Kadri, Konstantinos E. Kakosimos

Abstract:

The Middle East is highly affected by air pollution from anthropogenic and natural sources. There is evidence that air pollution, especially particulates, greatly affects population health. Many studies have warned of high particulate concentrations and of their effects, not just around industrial and construction areas but also in the immediate working and living environment. One way to study air quality is continuous and periodic monitoring using active or passive samplers. Active monitoring and sampling are the default procedures in the European and US standards. However, in many cases they fail to capture the spatial variability of air pollution accurately because of the small number of installations, which is ultimately attributable to the high cost of the equipment and the limited availability of users with the necessary expertise and scientific background. Passive sampling is an alternative that addresses the limitations of the active methods: it is inexpensive, requires no continuous power supply, and is easy to assemble, which makes it a more flexible, though less accurate, option. This study aims to investigate and evaluate the use of passive sampling for particulate matter monitoring in dry tropical climates such as the Middle East. More specifically, a number of field measurements were conducted, both indoors and outdoors, in Qatar, and the results were compared with active sampling equipment and the reference methods. The samples were analysed to obtain particle size distributions by applying existing laboratory techniques (optical microscopy) and by exploring new approaches such as white light interferometry. New parameters of a well-established model were then calculated in order to estimate the atmospheric concentration of particulates. Additionally, an extended literature review will search for new and better models.
The outcome of this project is also expected to have an impact on the public, as it will raise awareness about quality of life and about the importance of building a research culture in the community.

Keywords: air pollution, passive samplers, interferometry, indoor, outdoor

Procedia PDF Downloads 398
6152 Estimating Algae Concentration Based on Deep Learning from Satellite Observation in Korea

Authors: Heewon Jeong, Seongpyo Kim, Joon Ha Kim

Abstract:

Over the last few decades, the coastal regions of Korea have experienced red tide algal blooms, which are harmful and toxic to both humans and marine organisms. They have been accelerated by eutrophication from human activities, certain oceanic processes, and climate change. Previous studies have tried to monitor and predict ocean algae concentrations with bio-optical algorithms applied to satellite color images. However, accurate estimation of algal blooms remains challenging because of the complexity of coastal waters. This study therefore suggests a new method to identify red tide algal bloom concentrations from images of the Geostationary Ocean Color Imager (GOCI), which represent the marine environment around Korea. The method uses GOCI water-leaving radiances centered at 443 nm, 490 nm, and 660 nm, together with observed weather data (humidity, temperature, and atmospheric pressure), as a database capturing the optical characteristics of algae for training a deep learning algorithm. A convolutional neural network (CNN) was used to extract the significant features from the images, and an artificial neural network (ANN) was then used to estimate the algae concentration from the extracted features. The deep learning model was trained with a backpropagation learning strategy. The established method was tested and compared against the GOCI Data Processing System (GDPS), which is based on standard image processing and optical algorithms. The model estimated algae concentration better than the GDPS, which cannot estimate concentrations greater than 5 mg/m³. The deep learning model was thus trained successfully to assess algae concentration despite the complexity of the water environment. Furthermore, the results of this system and methodology can be used to improve the performance of remote sensing.
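
The backpropagation learning strategy mentioned above can be illustrated at its smallest scale: one gradient step for a single sigmoid neuron under squared-error loss. This is a didactic sketch of the update rule only, not the paper's CNN/ANN architecture.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def backprop_step(w, b, x, y, lr=0.5):
    """One gradient-descent step for a single sigmoid neuron with
    squared-error loss L = (p - y)^2 / 2."""
    p = sigmoid(w * x + b)
    delta = (p - y) * p * (1 - p)      # dL/dz by the chain rule
    return w - lr * delta * x, b - lr * delta

# Fit the neuron to map 0 -> 0 and 1 -> 1 (a toy stand-in for mapping
# extracted image features to algae concentration).
w, b = 0.0, 0.0
for _ in range(2000):
    for x, y in [(0.0, 0.0), (1.0, 1.0)]:
        w, b = backprop_step(w, b, x, y)
```

In the full model, the same chain-rule propagation of errors runs backward through the ANN's dense layers and the CNN's convolutional filters.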
Acknowledgement: This work was supported by the 'Climate Technology Development and Application' research project (#K07731) through a grant provided by GIST in 2017.

Keywords: deep learning, algae concentration, remote sensing, satellite

Procedia PDF Downloads 183
6151 Participatory Monitoring Strategy to Address Stakeholder Engagement Impact in Co-creation of NBS Related Project: The OPERANDUM Case

Authors: Teresa Carlone, Matteo Mannocchi

Abstract:

In the last decade, a growing number of international organizations have pushed toward green solutions for adaptation to climate change. This is particularly true in the fields of Disaster Risk Reduction (DRR) and land planning, where Nature-Based Solutions (NBS) have been sponsored through funding programs and planning tools. Stakeholder engagement and co-creation of NBS are growing as a practice and a research field in environmental projects, fostering the consolidation of a multidisciplinary socio-ecological approach to addressing hydro-meteorological risk. Yet even though research and financial interest are spreading steadily, the NBS mainstreaming process is still at an early stage: innovative concepts and practices are difficult for a multitude of different actors to fully accept and adopt, which is what wide-scale societal change would require. Monitoring and evaluating the impact of stakeholder participation in these processes is a crucial aspect and should be seen as a continuous, integral element of the co-creation approach. However, setting up a fit-for-purpose monitoring strategy for different contexts is not an easy task, and multiple challenges emerge. In this scenario, the Horizon 2020 OPERANDUM project, designed to address the major hydro-meteorological risks that negatively affect European rural and natural territories through the co-design, co-deployment, and assessment of Nature-Based Solutions, represents a valid case study for testing a monitoring strategy from which to derive a broader, general, and scalable monitoring framework. Applying a participatory monitoring methodology based on a list of selected indicators that combines quantitative and qualitative data developed within the project's activities, the paper proposes an experimental in-depth analysis of the impact of stakeholder engagement in the co-creation process of NBS.
The main focus is to identify and analyze the factors that increase knowledge, social acceptance, and mainstreaming of NBS, and to propose an experience-based guideline that could be integrated into the stakeholder engagement strategy of current and future strongly collaborative environmental projects such as OPERANDUM. Measurement will be carried out through surveys submitted at different times to the same sample of stakeholders (policy makers, businesses, researchers, and interest groups). Changes will be recorded and analyzed through focus groups in order to highlight causal explanations and to assess the proposed list of indicators, so as to steer similar activities in other projects and contexts. The aim of the paper is to contribute to the construction of a more structured and shared corpus of indicators that can support the evaluation of activities for involving various levels of stakeholders in the co-production, planning, and implementation of NBS to address climate change challenges.

Keywords: co-creation and collaborative planning, monitoring, nature-based solution, participation & inclusion, stakeholder engagement

Procedia PDF Downloads 112
6150 Analysis of the Key Indicators of Sustainable Tourism: A Case Study in Lagoa da Confusão/TO/Brazil

Authors: Veruska C. Dutra, Lucio F.M. Adorno, Mary L. G. S. Senna

Abstract:

Since the importance of planning for sustainable tourism was recognized, effective methods of monitoring tourism have been discussed. Indicators can convey a set of information about complex processes, events, or trends, making them an important monitoring tool and an aid in environmental assessment: they help track progress and chart future actions, and so contribute to decision making. The World Tourism Organization (WTO) recognizes the importance of indicators for appraising tourism from the point of view of sustainability, and in 1995 it launched eleven Key Indicators of Sustainable Tourism to assist in the monitoring of tourist destinations. We therefore propose a case study to examine the applicability of a monitoring methodology as an aid to understanding tourism sustainability, analyzing the effectiveness of local indicators under the approach defined by the WTO. The study was applied to the city of Lagoa da Confusão, in the state of Tocantins, North Brazil. The case study was carried out in 2006/2007, guided by the deductive method. The indicators were measured using specific methodologies adapted to the study site, so as to generate quantitative results that could be analyzed on the scale proposed by the WTO (0 to 10 points). The indicators applied were: Attractive Protection (AP, the level of protection of natural and cultural attractions), Sociocultural Impact (SI, the level of socio-cultural impacts), Waste Management (WM, the level of management of the solid waste generated), Planning Process (PP, the level of trip planning), Tourist Satisfaction (TS, satisfaction with the tourist experience), Community Satisfaction (CS, the local community's satisfaction with the development of local tourism), and Tourism Contribution to the Local Economy (TCLE, the level of tourism's contribution to the local economy).
The city of Lagoa da Confusão was presented as an important object of study for the methodology in question, as offered condition to analyze the indicators and the complexities that arose during the research. The data collected can help discussions on the sustainability of tourism in the destination. The indicators TS, CS, WM , PP and AP appeared as satisfactory as allowed the measurement "translating" the reality under study, unlike TCLE and the SI indicators that were not seen as reliable and clear and should be reviewed and discussed for an adaptation and replication of the same. The application and study of various indicators of sustainable tourism, give better able to analyze the local tourism situation than monitor only one of the indicators, it does not demonstrate all collected data, which could result in a superficial analysis of the tourist destination.
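The kind of normalization the abstract describes can be sketched as follows. This is a hypothetical illustration of mapping raw indicator measurements onto the 0-10 scale used by the WTO; the raw values, ranges, and indicator choices are invented for the example, not data from the study.

```python
# Hypothetical sketch: linearly normalizing raw indicator measurements
# onto the 0-10 WTO analysis scale described in the abstract. All raw
# values and ranges below are illustrative assumptions.

def to_wto_scale(value, worst, best):
    """Linearly map a raw measurement onto the 0-10 WTO scale."""
    score = 10.0 * (value - worst) / (best - worst)
    return max(0.0, min(10.0, score))   # clamp to the scale's ends

# Illustrative raw measurements (invented for the example).
indicators = {
    "TS": to_wto_scale(82, worst=0, best=100),  # % of satisfied tourists
    "CS": to_wto_scale(74, worst=0, best=100),  # % of satisfied residents
    "WM": to_wto_scale(55, worst=0, best=100),  # % of waste properly managed
}

for name, score in indicators.items():
    print(f"{name}: {score:.1f}/10")
```

Putting every indicator on the same bounded scale is what makes them directly comparable across such different phenomena as waste management and visitor satisfaction.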

Keywords: indicators, Lagoa da Confusão, Tocantins, Brazil, monitoring, sustainability

Procedia PDF Downloads 401
6149 Multilabel Classification with Neural Network Ensemble Method

Authors: Sezin Ekşioğlu

Abstract:

Multilabel classification is of great importance for several applications and is also a challenging research topic. It is a kind of supervised learning with binary targets; the difference between multilabel and binary classification is that a multilabel problem has more than one class, and an instance can belong to one class or to many. There is a wide range of applications for multilabel prediction, such as image labeling, text categorization, and gene functionality. Even though instances are classified into many classes, they may not always be classified properly. Many ensemble methods exist for classification; however, most researchers have been concerned with better multilabel methods in general, and few focus on both classifier efficiency and pairwise label relationships at the same time in order to achieve better multilabel classification. In this paper, we work on modified ensemble methods that benefit from k-Nearest Neighbors and a neural network structure to address these issues and to obtain better results in multilabel classification. Publicly available datasets (yeast, emotion, scene, and birds) are used to demonstrate the efficiency of the developed algorithm, and the technique is evaluated with accuracy, F1 score, and Hamming loss metrics. Our algorithm improves on the benchmarks for each dataset across the different metrics.
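Of the three evaluation metrics named above, Hamming loss is the one specific to the multilabel setting: the fraction of label slots predicted incorrectly, averaged over all instances and labels. A minimal sketch, with toy label matrices invented for illustration:

```python
# Minimal sketch of the Hamming loss metric used in the abstract's
# evaluation: the fraction of wrongly predicted label slots over all
# instances. The tiny label matrices are invented examples.

def hamming_loss(y_true, y_pred):
    """Fraction of label slots where prediction and truth disagree."""
    total = sum(len(row) for row in y_true)
    wrong = sum(
        1
        for t_row, p_row in zip(y_true, y_pred)
        for t, p in zip(t_row, p_row)
        if t != p
    )
    return wrong / total

# Two instances, three candidate labels each (binary indicator vectors).
y_true = [[1, 0, 1], [0, 1, 0]]
y_pred = [[1, 1, 1], [0, 1, 0]]
print(hamming_loss(y_true, y_pred))  # one wrong slot out of six
```

Lower is better: a perfect multilabel classifier has a Hamming loss of 0, whereas accuracy on the same data would count an instance as wrong if any single label slot is wrong.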

Keywords: multilabel, classification, neural network, KNN

Procedia PDF Downloads 155
6148 A Novel Approach of NPSO on Flexible Logistic (S-Shaped) Model for Software Reliability Prediction

Authors: Pooja Rani, G. S. Mahapatra, S. K. Pandey

Abstract:

In this paper, we propose a novel approach combining Neural Network and Particle Swarm Optimization methods for software reliability prediction. We first explain how to apply a compound function in a neural network so that we can derive a Flexible Logistic (S-shaped) Growth Curve (FLGC) model. This model mathematically represents software failure as a random process and can be used to evaluate software development status during testing. To avoid being trapped in local minima, we apply the Particle Swarm Optimization method to train the proposed model using failure test data sets. We derive our proposed model using computational-intelligence-based modeling; the proposed model thus becomes a Neuro-Particle Swarm Optimization (NPSO) model. We test the results with different inertia weights for updating particle positions and velocities. We obtain results based on the best inertia weight, compared with a personal-best-oriented PSO (pPSO) that helps to choose the local best in the network neighborhood. The applicability of the proposed model is demonstrated through a real failure data set. The results obtained from experiments show that the proposed model has fairly accurate prediction capability for software reliability.
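The core PSO mechanics can be sketched in isolation. This is not the authors' NPSO but a plain particle swarm with an inertia weight, fitting a logistic S-shaped growth curve m(t) = a / (1 + b·e^(-ct)) to cumulative failure counts; the failure data, parameter bounds, and swarm settings are all invented for the example.

```python
# Illustrative sketch (not the authors' NPSO): a standard inertia-weight
# PSO fitting the S-shaped curve a/(1 + b*exp(-c*t)) to toy cumulative
# failure counts. Data, bounds, and swarm settings are assumptions.
import math
import random

random.seed(0)

t_data = list(range(1, 11))
m_data = [2, 5, 10, 18, 28, 37, 44, 48, 50, 51]   # cumulative failures (toy)

def sse(params):
    """Sum of squared errors of the logistic growth curve fit."""
    a, b, c = params
    return sum((a / (1 + b * math.exp(-c * t)) - m) ** 2
               for t, m in zip(t_data, m_data))

lo, hi = [1.0, 0.1, 0.01], [200.0, 100.0, 2.0]    # bounds for a, b, c
n, w, c1, c2 = 30, 0.7, 1.5, 1.5                  # swarm size, inertia, factors
pos = [[random.uniform(lo[d], hi[d]) for d in range(3)] for _ in range(n)]
vel = [[0.0] * 3 for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=sse)

for _ in range(200):
    for i in range(n):
        for d in range(3):
            # Inertia term plus cognitive (pbest) and social (gbest) pulls.
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] = max(lo[d], min(hi[d], pos[i][d] + vel[i][d]))
        if sse(pos[i]) < sse(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=sse)

print("fitted (a, b, c):", [round(x, 2) for x in gbest])
```

The inertia weight `w` is the knob the abstract refers to: larger values favor exploration of the parameter space, smaller values favor convergence around the current best fit.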

Keywords: software reliability, flexible logistic growth curve model, software cumulative failure prediction, neural network, particle swarm optimization

Procedia PDF Downloads 344
6147 An AI-Based Dynamical Resource Allocation Calculation Algorithm for Unmanned Aerial Vehicle

Authors: Zhou Luchen, Wu Yubing, Burra Venkata Durga Kumar

Abstract:

As the scale of networks becomes larger and more complex than before, the density of user devices is also increasing. Unmanned Aerial Vehicle (UAV) networks are able to collect and transfer data in an efficient way by using software-defined networking (SDN) technology. This paper proposes a three-layer distributed and dynamic cluster architecture for managing UAVs, using an AI-based resource allocation algorithm to address the network overloading problem. By separating the services of each UAV, the hierarchical UAV cluster system performs the main function of reducing the network load and transferring user requests, with three sub-tasks: data collection, communication channel organization, and data relaying. In each cluster, a head node and a vice head node UAV are selected considering the Central Processing Unit (CPU), operational memory (RAM), permanent memory (ROM), battery charge, and capacity of the devices. The vice head node acts as a backup that stores all the data held by the head node. The k-means clustering algorithm is used to detect high-load regions and form the layered UAV clusters. The whole process of detecting high-load areas, forming and selecting UAV clusters, and moving the selected UAV cluster to an area is proposed as the traffic offloading algorithm.
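The k-means step can be sketched as follows: cluster user-device positions so that each cluster center marks a high-load region a UAV cluster could be dispatched to. The device coordinates and the value of k are invented for the example; the paper's actual load model is not specified in the abstract.

```python
# Illustrative sketch of the k-means step from the abstract: clustering
# toy 2D user-device positions to locate high-load regions. Coordinates
# and k are invented assumptions.
import random

random.seed(1)

# Two dense "high-load" regions of user devices (toy data).
devices = ([(random.gauss(2, 0.3), random.gauss(2, 0.3)) for _ in range(20)]
           + [(random.gauss(8, 0.3), random.gauss(8, 0.3)) for _ in range(20)])

def kmeans(points, k, iters=20):
    """Plain Lloyd's algorithm on 2D points; returns centers and groups."""
    centers = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: (p[0] - centers[c][0]) ** 2
                                                  + (p[1] - centers[c][1]) ** 2)
            groups[nearest].append(p)
        centers = [
            (sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
            if g else centers[i]                 # keep a center with no points
            for i, g in enumerate(groups)
        ]
    return centers, groups

centers, groups = kmeans(devices, k=2)
for c, g in zip(centers, groups):
    print(f"region center ({c[0]:.1f}, {c[1]:.1f}) serves {len(g)} devices")
```

In the proposed architecture, each resulting center would become a candidate destination for a UAV cluster, with the head and vice head nodes chosen from the UAVs sent there.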

Keywords: k-means, resource allocation, SDN, UAV network, unmanned aerial vehicles

Procedia PDF Downloads 111
6146 Fault Location Detection in Active Distribution System

Authors: R. Rezaeipour, A. R. Mehrabi

Abstract:

The recent increase of DGs and microgrids in distribution systems disturbs the traditional structure of the system, and coordination between protection devices in such a system becomes a concern for network operators. This paper presents a new method for fault location detection in active distribution networks, independent of the fault type or its resistance. The method uses synchronized voltage and current measurements at the interconnection of DG units and is able to adapt to changes in the topology of the system. The method has been tested on a 38-bus distribution system, with very encouraging results.

Keywords: fault location detection, active distribution system, microgrids, network operators

Procedia PDF Downloads 788
6145 Maximum Power Point Tracking for Small Scale Wind Turbine Using Multilayer Perceptron Neural Network Implementation without Mechanical Sensor

Authors: Piyangkun Kukutapan, Siridech Boonsang

Abstract:

This article proposes maximum power point tracking (MPPT) without a mechanical sensor, using a Multilayer Perceptron Neural Network (MLPNN). The aim of the article is to reduce cost and complexity while still retaining efficiency. The premise of the experiment is that the duty cycle generates maximum power if it takes a suitable value. The measured data from the DC generator, voltage (V), current (I), power (P), rate of change of power (dP), and rate of change of voltage (dV), are used as inputs for the MLPNN model. The output of this model is the duty cycle for driving the converter. The experiment was implemented using an Arduino Uno board. The MPPT using the MLPNN is compared to Perturbation and Observation (P&O) control. The experimental results show that the proposed MLPNN-based approach is more efficient than the P&O algorithm for this application.
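The P&O baseline the paper compares against follows a simple rule: perturb the duty cycle, observe the power, and keep the perturbation direction that increased it. A minimal sketch, where the generator's power curve is a made-up concave function rather than real hardware:

```python
# Illustrative sketch of the Perturb and Observe (P&O) baseline:
# nudge the duty cycle and keep whichever direction raised the
# measured power. The power curve is a made-up stand-in for the
# DC generator and converter, with its maximum placed at duty = 0.6.

def simulated_power(duty):
    """Toy concave power curve peaking at duty = 0.6 (an assumption)."""
    return max(0.0, 100.0 - 400.0 * (duty - 0.6) ** 2)

def perturb_and_observe(steps=50, duty=0.3, delta=0.02):
    prev_power = simulated_power(duty)
    direction = 1
    for _ in range(steps):
        duty = min(1.0, max(0.0, duty + direction * delta))
        power = simulated_power(duty)
        if power < prev_power:     # wrong way: reverse the perturbation
            direction = -direction
        prev_power = power
    return duty

print(f"settled duty cycle: {perturb_and_observe():.2f}")
```

The sketch also exposes P&O's known weakness: once at the peak it keeps oscillating by one step size around the optimum, which is part of why a trained MLPNN that outputs the duty cycle directly can be more efficient.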

Keywords: maximum power point tracking, multilayer perceptron neural network, optimal duty cycle, DC generator

Procedia PDF Downloads 325
6144 Perceptions and Expectations by Participants of Monitoring and Evaluation Short Course Training Programmes in Africa

Authors: Mokgophana Ramasobana

Abstract:

Background: At the core of the demand to use evidence-based approaches in the policy-making cycle, to prioritize limited financial resources, and to pursue results-driven initiatives is the urgent need to develop a cohort of competent Monitoring and Evaluation (M&E) practitioners and public servants. The ongoing strides in evaluation capacity building (ECB) initiatives are a direct response intended to produce the highly sought-after M&E skills. Notwithstanding the rapid growth of M&E short courses, participants' perceived value and expectations of M&E short courses as a panacea for ECB have not been empirically quantified or measured. The objective of this article is to illustrate the importance of measuring ECB interventions and understanding what works in ECB and why. Objectives: This article illustrates the importance of establishing empirical ECB measurement tools to evaluate ECB interventions in order to ascertain their contribution to the broader evaluation practice. Method: The study was primarily a desktop review of existing literature, complemented by a survey of participants across the African continent, based on the 43 M&E short courses hosted by the Centre for Learning on Evaluation and Results Anglophone Africa (CLEAR-AA) in collaboration with the Department of Planning, Monitoring and Evaluation (DPME). Results: The article established that participants perceive short course training as a panacea for improving the practical M&E skills critical to executing their organizational duties. In tandem, participants are likely to demand customized training as opposed to general topics in evaluation. However, organizational environments constrain the application of newly acquired skills. Conclusion: This article aims to contribute to the 'how to' discourse on measuring ECB interventions and to the improvement of how ECB interventions are evaluated. The study finds that participants prefer training courses of longer duration that cover more topics. At the same time, whilst organizations call for the customization of programmes, the study found that individual participants demand knowledge of generic and popular evaluation topics.

Keywords: evaluation capacity building, effectiveness and training, monitoring and evaluation (M&E) short course training, perceptions and expectations

Procedia PDF Downloads 128
6143 Recognition of Tifinagh Characters with Missing Parts Using Neural Network

Authors: El Mahdi Barrah, Said Safi, Abdessamad Malaoui

Abstract:

In this paper, we present an algorithm for reconstructing Tifinagh characters from incomplete 2D scans. The algorithm is based on using the correlation between a lost block and its neighbors. The proposed system contains three main parts: pre-processing, feature extraction, and recognition. In the first step, we construct a database of Tifinagh characters. In the second step, we apply a shape analysis algorithm. In the classification part, we use a neural network. The simulation results demonstrate that the proposed method gives good results.
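The correlation measure such a reconstruction can rely on is sketched below: normalized cross-correlation between two image blocks, used here to pick which candidate patch best matches a surviving neighbor block. The tiny 3x3 binary blocks are invented examples, not the paper's Tifinagh data.

```python
# Illustrative sketch of block matching by normalized cross-correlation,
# the kind of neighbor comparison a lost-block reconstruction can use.
# The 3x3 binary patches are invented examples.
import math

def ncc(block_a, block_b):
    """Normalized cross-correlation of two equal-sized pixel blocks."""
    n = len(block_a)
    ma = sum(block_a) / n
    mb = sum(block_b) / n
    num = sum((a - ma) * (b - mb) for a, b in zip(block_a, block_b))
    da = math.sqrt(sum((a - ma) ** 2 for a in block_a))
    db = math.sqrt(sum((b - mb) ** 2 for b in block_b))
    return num / (da * db) if da and db else 0.0

neighbor = [0, 1, 0, 1, 1, 1, 0, 1, 0]       # a plus-shaped stroke
candidates = {
    "plus":  [0, 1, 0, 1, 1, 1, 0, 1, 0],
    "blank": [0, 0, 0, 0, 0, 0, 0, 0, 0],
    "bar":   [0, 0, 0, 1, 1, 1, 0, 0, 0],
}

# Pick the candidate patch most correlated with the surviving neighbor.
best = max(candidates, key=lambda k: ncc(neighbor, candidates[k]))
print("best match:", best)
```

Because the measure is normalized, it scores the shape of a stroke rather than its absolute intensity, which matters when scans vary in contrast.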

Keywords: Tifinagh character recognition, neural networks, local cost computation, ANN

Procedia PDF Downloads 334
6142 A Survey on Traditional MAC Layer Protocols in Cognitive Wireless Mesh Networks

Authors: Anusha M., V. Srikanth

Abstract:

Maximizing spectrum usage and the numerous applications of wireless communication networks have led to high demand for the available spectrum. A cognitive radio controls its receiver and transmitter features precisely so that it can utilize the vacant licensed spectrum without impacting the functionality of the primary licensed users. The use of multiple channels helps to address interference and thereby improves overall network efficiency. The MAC protocol in a cognitive radio network governs spectrum usage by coordinating multiple channels among the users. In this paper, we study the architecture of cognitive wireless mesh networks and a traditional TDMA-based MAC method for allocating channels dynamically. The majority of the MAC protocols suggested in the literature operate on a Common Control Channel (CCC) to handle the services between cognitive radio secondary users. An extensive study of multi-channel, multi-radio channel allotment and continually synchronized TDMA scheduling is presented in summarized form.

Keywords: TDMA, MAC, multi-channel, multi-radio, WMNs, cognitive radios

Procedia PDF Downloads 561
6141 Water Demand Modelling Using Artificial Neural Network in Ramallah

Authors: F. Massri, M. Shkarneh, B. Almassri

Abstract:

Water scarcity and increasing water demand, especially for residential use, are major challenges facing Palestine. The ability to accurately forecast water consumption is useful for the planning and management of this natural resource. The main objectives of this paper are to (i) study the major factors influencing water consumption in Palestine, (ii) understand the general pattern of household water consumption, (iii) assess possible changes in household water consumption and suggest appropriate remedies, and (iv) develop a prediction model for water consumption in Palestinian cities based on an Artificial Neural Network. The paper is organized in four parts. The first part is a literature review of household water consumption studies. The second part concerns the data collection methodology, the conceptual framework for the household water consumption surveys, the survey descriptions, and the data processing methods. The third part presents descriptive statistics, multiple regression, and an analysis of water consumption in the two Palestinian cities. The final part develops the use of an Artificial Neural Network for modeling water consumption in Palestinian cities.
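The multiple-regression step from the third part can be sketched as ordinary least squares via the normal equations. The predictors (household size and income as drivers of monthly water use) and all numbers below are invented illustrations, not the paper's survey data.

```python
# Illustrative sketch of multiple regression via the normal equations
# (X^T X) beta = X^T y, solved by Gaussian elimination. The household
# data below are invented example values, not the study's survey data.

def least_squares(X, y):
    """Fit beta minimizing ||X beta - y||^2 for a small design matrix."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):                           # forward elimination
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * c for a, c in zip(A[r], A[i])]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):                 # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, k))) / A[i][i]
    return beta

# Rows: [1 (intercept), household size, income]; target: m^3 per month.
X = [[1, 2, 10], [1, 3, 12], [1, 5, 20], [1, 4, 15], [1, 6, 22]]
y = [8.0, 11.0, 18.0, 14.0, 21.0]
print([round(c, 2) for c in least_squares(X, y)])
```

In the paper's workflow, coefficients fitted this way give an interpretable baseline against which the ANN model's predictions can then be compared.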

Keywords: water management, demand forecasting, consumption, ANN, Ramallah

Procedia PDF Downloads 219
6140 A Highly Efficient Broadcast Algorithm for Computer Networks

Authors: Ganesh Nandakumaran, Mehmet Karaata

Abstract:

A wave is a distributed execution, often made up of a broadcast phase followed by a feedback phase, requiring the participation of all the system processes before a particular event called a decision is taken. Wave algorithms with one initiator, such as the 1-wave algorithm, have been shown to be very efficient for broadcasting messages in tree networks. Extensions of this algorithm that broadcast a sequence of waves using a single initiator have been implemented in algorithms such as the m-wave algorithm. However, as the network size increases, having a single initiator adversely affects the message delivery times to nodes further away from the initiator. As a remedy, broadcast waves can be initiated by multiple initiator nodes distributed across the network to reduce the completion time of broadcasts. The waves initiated by one or more initiator processes form a collection of waves covering the entire network. Solutions to global snapshots, distributed broadcast, and various synchronization problems can be obtained efficiently using waves with multiple concurrent initiators. In this paper, we propose the first stabilizing multi-wave sequence algorithm implementing waves started by multiple initiator processes, such that every process in the network receives at least one sequence of broadcasts. Being stabilizing, the proposed algorithm can withstand transient faults and does not require initialization. We view a fault as transient if it perturbs the configuration of the system but not its program.
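The broadcast-then-feedback pattern (the PFC scheme in the keywords) can be sketched for a single wave on a tree: the initiator's message propagates down to every node, and feedback acknowledgements flow back up once each subtree has received it, at which point the initiator can take its decision. This is a sequential simulation with an invented five-node tree, not the paper's stabilizing multi-initiator algorithm.

```python
# Illustrative sketch of one broadcast-with-feedback wave on a tree:
# deliver down from the initiator, acknowledge back up, and decide when
# all feedback has arrived. The tree and node names are invented.

tree = {                        # adjacency list of a small tree network
    "r": ["a", "b"],
    "a": ["r", "c", "d"],
    "b": ["r"],
    "c": ["a"],
    "d": ["a"],
}

def wave(node, parent=None, received=None):
    """Broadcast down from `node`; return True once feedback from every
    child subtree has arrived (the wave's decision point)."""
    if received is None:
        received = set()
    received.add(node)                       # broadcast phase: deliver
    children = [n for n in tree[node] if n != parent]
    feedback = all(wave(c, node, received) for c in children)
    return feedback                          # feedback phase: ack upward

got = set()
decided = wave("r", received=got)
print("all nodes reached:", decided and got == set(tree))
```

With multiple initiators, several such waves run concurrently from different roots, which is what shortens delivery times to nodes far from any single initiator.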

Keywords: distributed computing, multi-node broadcast, propagation of information with feedback and cleaning (PFC), stabilization, wave algorithms

Procedia PDF Downloads 504
6139 Evaluating the Effectiveness of Plantar Sensory Insoles and Remote Patient Monitoring for Early Intervention in Diabetic Foot Ulcer Prevention in Patients with Peripheral Neuropathy

Authors: Brock Liden, Eric Janowitz

Abstract:

Introduction: Diabetic peripheral neuropathy (DPN) affects 70% of individuals with diabetes. DPN causes a loss of protective sensation, which can lead to tissue damage and diabetic foot ulcer (DFU) formation. These ulcers can result in infections and lower-extremity amputations of toes, the entire foot, or the lower leg. Even after a DFU has healed, recurrence is common, with 49% of DFU patients developing another ulcer within a year and 68% within 5 years. This case series examines the use of sensory insoles, newly available plantar data (pressure, temperature, step count, adherence), and remote patient monitoring in patients at risk of DFU. Methods: Participants were provided with custom-made sensory insoles that monitor plantar pressure, temperature, step count, and daily use, and received real-time cues for pressure offloading as they went about their daily activities. The sensory insoles were used to track subject compliance, ulceration, and response to feedback from real-time alerts. Patients were remotely monitored by a qualified healthcare professional, who contacted them when areas of concern were seen, coached them on reducing risk factors, and provided overall support to improve foot health. Results: Of the 40 participants provided with the sensory insole system, 4 presented with a DFU. Based on flags generated from the available plantar data, patients were contacted by the remote monitor to address potential concerns. A standard clinical escalation protocol detailed when and how concerns should be escalated to the provider by the remote monitor. Upon escalation to the provider, patients were brought into the clinic as needed, allowing issues to be addressed before more serious complications could arise. Conclusion: This case series explores the use of innovative sensory technology to collect plantar data (pressure, temperature, step count, and adherence) for DFU detection and early intervention.
The results from this case series suggest the importance of sensory technology and remote patient monitoring in providing proactive, preventative care for patients at risk of DFU. This rich plantar data, combined with remote patient monitoring, allows patients to be seen in the clinic when concerns arise, giving providers the opportunity to intervene early and prevent more serious complications, such as wounds, from occurring.
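The kind of flag generation the escalation protocol relies on can be sketched as a simple threshold rule on daily plantar data. Everything here is a hypothetical illustration: the 2.2 degree contralateral temperature threshold, the region names, and the readings are assumptions for the example, not the product's actual protocol.

```python
# Hypothetical sketch of a remote-monitoring flagging rule: alert when a
# plantar region runs persistently hotter than the matching region on
# the other foot. Threshold, regions, and readings are assumptions.

THRESHOLD_C = 2.2   # assumed left/right temperature gap for an alert

def flag_regions(left, right, threshold=THRESHOLD_C):
    """Return regions whose left/right temperature gap exceeds threshold."""
    return [r for r in left if abs(left[r] - right[r]) > threshold]

# Toy daily readings (degrees C) per plantar region.
left_foot = {"hallux": 30.1, "midfoot": 29.5, "heel": 33.4}
right_foot = {"hallux": 30.3, "midfoot": 29.4, "heel": 30.6}

alerts = flag_regions(left_foot, right_foot)
print("escalate to provider:", alerts)  # heel gap is 2.8 C
```

A flagged region would then move through the escalation protocol the case series describes: the remote monitor contacts the patient, and the provider brings them into the clinic if the concern persists.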

Keywords: diabetic foot ulcer, DFU prevention, digital therapeutics, remote patient monitoring

Procedia PDF Downloads 77
6138 The Development of GPS Buoy for Ocean Surface Monitoring: Initial Results

Authors: Anuar Mohd Salleh, Mohd Effendi Daud

Abstract:

This study presents a kinematic positioning approach that uses a GPS buoy for precise ocean surface monitoring. GPS buoy data from two experiments have been processed using a precise, medium-range differential kinematic technique. In each case, the data were collected for more than 24 hours at a nearby coastal site at a high rate (1 Hz), along with measurements from neighboring tidal stations, to verify the estimated sea surface heights. The kinematic coordinates of the GPS buoy were estimated using epoch-wise pre-elimination and the backward substitution algorithm. Test results show that centimeter-level accuracy in sea surface height determination can be successfully achieved using the proposed technique. The centimeter-level agreement between the two methods also suggests the possibility of using this inexpensive and more flexible GPS buoy equipment to enhance (or even replace) the current use of tide gauge stations.

Keywords: global positioning system, kinematic GPS, sea surface height, GPS buoy, tide gauge

Procedia PDF Downloads 544
6137 The Omicron Variant BA.2.86.1 of SARS-CoV-2 Demonstrates an Altered Interaction Network and Dynamic Features to Enhance the Interaction with hACE2

Authors: Taimur Khan, Zakirullah, Muhammad Shahab

Abstract:

The SARS-CoV-2 variant BA.2.86 (Omicron) has emerged with unique mutations that may increase its transmission and infectivity. This study investigates how these mutations alter the interaction network and dynamic properties of the Omicron receptor-binding domain (RBD) compared to the wild-type virus, focusing on its binding affinity to the human ACE2 (hACE2) receptor. Protein-protein docking and all-atom molecular dynamics simulations were used to analyze structural and dynamic differences. Despite its structural similarity to the wild-type virus, the Omicron variant exhibits a distinct interaction network involving new residues that enhance its binding capacity. The dynamic analysis reveals increased flexibility in the RBD, particularly in loop regions crucial for hACE2 interaction. Mutations significantly alter the secondary structure, leading to greater flexibility and conformational adaptability compared to the wild type. Binding free energy calculations confirm that the Omicron RBD has a higher binding affinity (-70.47 kcal/mol) to hACE2 than the wild-type RBD (-61.38 kcal/mol). These results suggest that the altered interaction network and enhanced dynamics of the Omicron variant contribute to its increased infectivity, providing insights for the development of targeted therapeutics and vaccines.

Keywords: SARS-CoV-2, molecular dynamic simulation, receptor binding domain, vaccine

Procedia PDF Downloads 21
6136 Gravity and Geodetic Control of Geodynamic Activity near Aswan Lake, Egypt

Authors: Anwar H. Radwan, Jan Mrlina, El-Sayed A. Issawy, Ali Rayan, Salah M. Mahmoud

Abstract:

Geodynamic investigations in the Aswan Lake region were started after the M=5.5 earthquake in 1981, triggered by the lake water fluctuations. Besides establishing the seismological networks, geodetic observations focused on the Kalabsha and Sayal fault zones were also started. It was found that the Kalabsha fault is an active dextral strike-slip fault with a normal component indicating uplift on its southern side. However, the annual velocity rates in both components do not exceed 2 mm/y and therefore do not represent extremely active faulting. We also launched gravity monitoring in 1997 and performed another two campaigns in 2000 and 2002. The observed non-tidal temporal gravity changes indicate the infiltration of flood water into the porous Nubian sandstone rather than a tectonic stress effect. The station nearest to the lake exhibited a positive gravity change of about 60 μGal within the 1997-2002 period.

Keywords: gravity monitoring, surface movements, Lake Aswan, groundwater change

Procedia PDF Downloads 501
6135 A Study on Solutions to Connect Distribution Power Grid up to Renewable Energy Sources at KEPCO

Authors: Seung Yoon Hyun, Hyeong Seung An, Myeong Ho Choi, Sung Hwan Bae, Yu Jong Sim

Abstract:

As of 2015, the southern part of the Korean Peninsula had 8.6 million poles, 1.25 million km of power lines, 2 million transformers, and other equipment. This massive amount of distribution equipment could cover a round-trip distance from the Earth to the Moon plus 11 turns around the Earth. It is spread out like capillaries, supplying power to every corner of the Korean Peninsula. In order to manage these huge power facilities efficiently, KEPCO has used the Distribution Automation System (DAS) to operate the distribution power system since 1997. DAS is an integrated system that enables remote supervision and control of breakers and switches on the distribution network. Using DAS, outage time and power losses can be reduced. KEPCO has about 160,000 switches; 50% of them (about 80,000) are automated, and 41 distribution centers monitor and control these switches 24 hours a day, 365 days a year to get the best efficiency out of the distribution networks. However, the rapid increase in renewable energy sources has become a problem for the efficient operation of the distribution power system (currently, 2,400 MW from 75,000 generators operates in the distribution power system). This paper suggests ways to interconnect renewable energy sources and the distribution power system.

Keywords: distribution, renewable, connect, DAS (Distribution Automation System)

Procedia PDF Downloads 621