Search results for: network pinch analysis

30070 Analyzing Strategic Alliances of Museums: The Case of Girona (Spain)

Authors: Raquel Camprubí

Abstract:

Cultural tourism has been postulated as a relevant motivation for tourists around the world during the last decades. In this context, museums are the main attraction for cultural tourists who seek to connect with the history and culture of the visited place. From the point of view of an urban destination, museums and other cultural resources are essential to a strong tourist supply at the destination, in order to be capable of catching the attention and interest of cultural tourists. In particular, museums’ challenge is to be prepared to offer the best experience to their visitors without forgetting their mission, based mainly on the protection of their collections and other social goals. Thus, museums individually want to be competitive and well positioned to achieve their strategic goals. The life cycle of the destination and the level of maturity of its tourism product influence the need of tourism agents to cooperate and collaborate among themselves, in order to rejuvenate their product and become more competitive as a destination. Additionally, prior studies have considered different models of public and private partnership, and the collaborative and cooperative relations developed among the agents of a tourism destination. However, there are no studies that pay special attention to museums and the strategic alliances they develop to obtain mutual benefits. Considering this background, the purpose of this study is to analyze to what extent the museums of a given urban destination have established strategic links and relations among themselves, in order to improve their competitive position at both the individual and the destination level. To achieve the aim of this study, the city of Girona (Spain) and the museums located in this city are taken as a case study. Data collection was conducted using in-depth interviews, in order to collect qualitative data on the nature, strength, and purpose of the relational ties established among the museums of the city and other relevant tourism agents. Data analysis followed a Social Network Analysis (SNA) approach using the UCINET software. The position of the agents in the network and the structure of the network were analyzed, and qualitative data from the interviews were used to interpret the SNA results. Findings reveal the existence of strong ties among some of the museums of the city, particularly to create and promote joint products. Nevertheless, outsiders were detected who follow an individual strategy, without collaboration or cooperation with other museums or agents of the city. Results also show that some relational ties have an institutional origin, while others are the result of a long process of cooperation on common projects. Conclusions show that collaboration and cooperation among museums have been positive in increasing the attractiveness of the museums and of the city as a cultural destination. Future research and managerial implications are also discussed.

Keywords: cultural tourism, competitiveness, museums, Social Network analysis

Procedia PDF Downloads 117
30069 Evaluation of Robust Feature Descriptors for Texture Classification

Authors: Jia-Hong Lee, Mei-Yi Wu, Hsien-Tsung Kuo

Abstract:

Texture is an important characteristic in real and synthetic scenes. Texture analysis plays a critical role in inspecting surfaces and provides important techniques for a variety of applications. Although several descriptors have been presented to extract texture features, the development of object recognition is still a difficult task due to the complex aspects of texture. Recently, many robust and scale-invariant image features such as SIFT, SURF, and ORB have been successfully used in image retrieval and object recognition. In this paper, we compare the performance of these feature descriptors for texture classification using k-means clustering. Different classifiers, including k-NN, Naive Bayes, Back Propagation Neural Network, Decision Tree, and KStar, were applied to three texture image sets: UIUCTex, KTH-TIPS, and Brodatz. Experimental results reveal that SIFT achieves the best average accuracy on the UIUCTex and KTH-TIPS sets, while SURF has the advantage on the Brodatz texture set. The back-propagation neural network works best among all the classifiers used on the test sets.
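For illustration, a minimal Python sketch of the kind of bag-of-visual-words pipeline the abstract describes, using ORB (which ships freely with OpenCV), k-means clustering for the visual vocabulary, and a k-NN classifier; the dataset paths, vocabulary size, and parameter values are assumptions, not those of the paper.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def orb_descriptors(image_paths, n_features=500):
    """Extract ORB descriptors from each grayscale texture image."""
    orb = cv2.ORB_create(nfeatures=n_features)
    per_image = []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = orb.detectAndCompute(img, None)
        per_image.append(desc if desc is not None else np.empty((0, 32), np.uint8))
    return per_image

def bovw_histograms(per_image_desc, kmeans):
    """Quantize descriptors against the visual vocabulary and build normalized histograms."""
    k = kmeans.n_clusters
    hists = np.zeros((len(per_image_desc), k), dtype=np.float32)
    for i, desc in enumerate(per_image_desc):
        if len(desc):
            for w in kmeans.predict(desc.astype(np.float32)):
                hists[i, w] += 1
            hists[i] /= hists[i].sum()
    return hists

# train_paths/test_paths and labels are assumed to come from UIUCTex, KTH-TIPS or Brodatz.
def classify(train_paths, y_train, test_paths, y_test, k_words=200):
    train_desc = orb_descriptors(train_paths)
    vocab = KMeans(n_clusters=k_words, random_state=0).fit(
        np.vstack([d for d in train_desc if len(d)]).astype(np.float32))
    clf = KNeighborsClassifier(n_neighbors=5)
    clf.fit(bovw_histograms(train_desc, vocab), y_train)
    return clf.score(bovw_histograms(orb_descriptors(test_paths), vocab), y_test)
```

The same skeleton applies to SIFT or SURF descriptors by swapping the detector, provided the OpenCV build includes them.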

Keywords: texture classification, texture descriptor, SIFT, SURF, ORB

Procedia PDF Downloads 369
30068 The Scenario Analysis of Shale Gas Development in China by Applying Natural Gas Pipeline Optimization Model

Authors: Meng Xu, Alexis K. H. Lau, Ming Xu, Bill Barron, Narges Shahraki

Abstract:

As an emerging unconventional energy source, shale gas has been an economically viable step towards a cleaner energy future in the U.S. China also has shale resources that are estimated to be potentially the largest in the world. In addition, China has an enormous unmet demand for a clean alternative to substitute for coal. Nonetheless, the geological complexity of China’s shale basins and issues of water scarcity potentially impose serious constraints on shale gas development in China. Further, even if China could replicate to a significant degree the U.S. shale gas boom, China faces the problem of transporting the gas efficiently overland with its limited pipeline network throughput capacity and coverage. The aim of this study is to identify the potential bottlenecks in China’s gas transmission network, as well as to examine how shale gas development affects particular supply locations and demand centers. We examine this through three scenarios for projected domestic shale gas supply by 2020: optimistic, medium, and conservative, taking as references the International Energy Agency’s (IEA’s) projections and China’s shale gas development plans. Separately, we project the gas demand at the provincial level, since shale gas will have a more significant impact regionally than nationally. To quantitatively assess each shale gas development scenario, we formulated a gas pipeline optimization model. We used ArcGIS to generate the connectivity parameters and pipeline segment lengths. Other parameters were collected from provincial “twelfth five-year” plans and the “China Oil and Gas Pipeline Atlas”. The multi-objective optimization model was implemented in GAMS and MATLAB. It aims to minimize the demand that cannot be met, while simultaneously seeking to minimize total gas supply and transmission costs. The results indicate that, even if the primary objective is to meet the projected gas demand rather than cost minimization, there is a shortfall of 9% in meeting total demand under the medium scenario. Comparing the results between the optimistic and medium shale gas supply scenarios, almost half of the shale gas produced in Sichuan province and Chongqing cannot be transmitted out by pipeline. On the demand side, the gas demand gaps of Henan province and Shanghai could be filled by as much as 82% and 39%, respectively, with increased shale gas supply. To conclude, the pipeline network in China is currently not sufficient to meet the projected natural gas demand in 2020 under the medium and optimistic scenarios, indicating the need for substantial capacity expansion of parts of the existing network, and the importance of constructing new pipelines from particular supply to demand sites. If the pipeline constraint is overcome, the gas demand gaps of Beijing, Shanghai, Jiangsu, and Henan could potentially be filled, and China could thereby reduce its dependency on LNG imports by almost 25% under the optimistic scenario.
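As a toy illustration of the optimization idea (not the authors' GAMS/MATLAB model), the sketch below minimizes unmet demand plus transmission cost on a hypothetical three-arc pipeline network using scipy; all supplies, demands, capacities, and costs are invented numbers.

```python
import numpy as np
from scipy.optimize import linprog

# Toy gas network: two supply regions and two demand centers; all figures are illustrative.
supply = {"S1": 30.0, "S2": 20.0}
demand = {"D1": 25.0, "D2": 35.0}
arcs = [("S1", "D1", 40.0, 1.0),   # (from, to, pipeline capacity, unit transmission cost)
        ("S1", "D2", 15.0, 2.0),
        ("S2", "D2", 20.0, 1.5)]
penalty = 100.0                    # large penalty so unmet demand is minimized first

n_arcs, n_dem = len(arcs), len(demand)
dem_nodes, sup_nodes = list(demand), list(supply)

# Decision vector: [arc flows..., unmet demand per demand node...]
c = np.array([a[3] for a in arcs] + [penalty] * n_dem)

# Demand balance: inflow + unmet = demand (equality constraints).
A_eq = np.zeros((n_dem, n_arcs + n_dem))
b_eq = np.array([demand[d] for d in dem_nodes])
for j, (_u, v, _cap, _cost) in enumerate(arcs):
    A_eq[dem_nodes.index(v), j] = 1.0
A_eq[:, n_arcs:] = np.eye(n_dem)

# Supply limits: total outflow from each supply node <= its production.
A_ub = np.zeros((len(supply), n_arcs + n_dem))
b_ub = np.array([supply[s] for s in sup_nodes])
for j, (u, _v, _cap, _cost) in enumerate(arcs):
    A_ub[sup_nodes.index(u), j] = 1.0

bounds = [(0, a[2]) for a in arcs] + [(0, None)] * n_dem
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
flows, unmet = res.x[:n_arcs], res.x[n_arcs:]
print("arc flows:", dict(zip([(a[0], a[1]) for a in arcs], flows.round(2))))
print("unmet demand:", dict(zip(dem_nodes, unmet.round(2))))
```

In this toy instance the capacity of the arcs into D2 is the binding constraint, so part of D2's demand goes unmet, which is exactly the kind of bottleneck the paper's model is designed to expose.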

Keywords: energy policy, energy systematic analysis, scenario analysis, shale gas in China

Procedia PDF Downloads 288
30067 Leveraging Li-Fi to Enhance Security and Performance of Medical Devices

Authors: Trevor Kroeger, Hayden Williams, Edward Holzinger, David Coleman, Brian Haberman

Abstract:

The network connectivity of medical devices is increasing at a rapid rate. Many medical devices, such as vital sign monitors, share information via wireless or wired connections. However, these connectivity options suffer from a variety of well-known limitations. Wireless connectivity, especially in the unlicensed radio frequency bands, can be disrupted. Such disruption could be due to benign reasons, such as a crowded spectrum, or to malicious intent. While wired connections are less susceptible to interference, they inhibit the mobility of the medical devices, which could be critical in a variety of scenarios. This work explores the application of Light Fidelity (Li-Fi) communication to enhance the security, performance, and mobility of medical devices in connected healthcare scenarios. A simple bridge for connected devices serves as an avenue to connect traditional medical devices to the Li-Fi network. This bridge was utilized to conduct bandwidth tests on a small Li-Fi network installed in a mock ICU setting with a backend enterprise network similar to that of a hospital. Mobile and stationary tests were conducted to replicate various situations that might occur within a hospital setting. Results show that in-room Li-Fi connectivity provides reasonable bandwidth and latency within a hospital-like setting.

Keywords: hospital, light fidelity, Li-Fi, medical devices, security

Procedia PDF Downloads 102
30066 Artificial Intelligence Approach to Water Treatment Processes: Case Study of Daspoort Treatment Plant, South Africa

Authors: Olumuyiwa Ojo, Masengo Ilunga

Abstract:

The artificial neural network (ANN) has broken the bounds of conventional programming, which is essentially a function of “garbage in, garbage out”, through its ability to mimic the human brain. Its ability to adopt, adapt, adjust, evaluate, learn, and recognize the relationships, behavior, and patterns in a series of data sets administered to it is modeled after human reasoning and learning mechanisms. Thus, the study aimed at modeling the wastewater treatment process in order to accurately diagnose water control problems for effective treatment. A staged ANN model development and evaluation methodology was employed. The source data analysis stage involved a statistical analysis of the data used in modeling; in the model development stage, candidate ANN architectures were developed and then evaluated using a historical data set. The model was developed using historical data obtained from the Daspoort Wastewater Treatment Plant, South Africa. The resulting design dimensions and model for the wastewater treatment plant provided good results. Parameters considered were temperature, pH value, colour, turbidity, amount of solids, and acidity. Others are total hardness, Ca hardness, Mg hardness, and chloride. This enables the ANN to handle and represent more complex problems that conventional programming is incapable of handling.
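A minimal sketch of the kind of ANN regression the abstract describes, using scikit-learn rather than the authors' own model; the file name, column names, target variable, and network size are illustrative assumptions.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score

# Hypothetical historical plant records; the paper used data from the Daspoort plant.
df = pd.read_csv("daspoort_history.csv")
features = ["temperature", "ph", "colour", "turbidity", "solids", "acidity",
            "total_hardness", "ca_hardness", "mg_hardness", "chloride"]
target = "effluent_quality"          # assumed target; the abstract does not name one explicitly

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[target], test_size=0.2, random_state=42)

# Scale inputs, then fit a small feed-forward ANN (two hidden layers as a starting point).
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=42))
model.fit(X_train, y_train)
print("R^2 on held-out data:", r2_score(y_test, model.predict(X_test)))
```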

Keywords: ANN, artificial neural network, wastewater treatment, model, development

Procedia PDF Downloads 149
30065 Transit Network Design Problem Issues and Challenges

Authors: Mahmoud Owais

Abstract:

Public transit (PT) is a very important means to reduce traffic congestion and to improve urban environmental conditions, and it consequently affects people's social lives. Planning, design, and management of PT are the key issues for offering a competitive mode that can compete with private transportation. These planning, design, and management issues are addressed in the Transit Network Design Problem (TNDP). It deals with a complete hierarchy of decision-making processes, including strategic, tactical, and operational decisions. The main body of the TNDP consists of two stages, namely the route design stage and the frequency setting stage. The TNDP has been extensively studied over the last five decades; however, the field is still wide open due to its many practical and modeling challenges. In this paper, a comprehensive background is given to illustrate the issues and challenges related to the TNDP, to help direct future research towards the untouched areas of the problem.

Keywords: frequency setting, network design, transit planning, urban planning

Procedia PDF Downloads 385
30064 The Network Relative Model Accuracy (NeRMA) Score: A Method to Quantify the Accuracy of Prediction Models in a Concurrent External Validation

Authors: Carl van Walraven, Meltem Tuna

Abstract:

Background: Network meta-analysis (NMA) quantifies the relative efficacy of 3 or more interventions from studies containing a subgroup of interventions. This study applied the analytical approach of NMA to quantify the relative accuracy of prediction models with distinct inclusion criteria that are evaluated on a common population (‘concurrent external validation’). Methods: We simulated binary events in 5000 patients using a known risk function. We biased the risk function and modified its precision by pre-specified amounts to create 15 prediction models with varying accuracy and distinct patient applicability. Prediction model accuracy was measured using the Scaled Brier Score (SBS). Overall prediction model accuracy was measured using fixed-effects methods that accounted for model applicability patterns. Prediction model accuracy was summarized as the Network Relative Model Accuracy (NeRMA) Score, which ranges from -∞ through 0 (accuracy of random guessing) to 1 (accuracy of the most accurate model in the concurrent external validation). Results: The unbiased prediction model had the highest SBS. The NeRMA Score correctly ranked all simulated prediction models by the extent of bias from the known risk function. A SAS macro and an R function were created to implement the NeRMA Score. Conclusions: The NeRMA Score makes it possible to quantify the accuracy of binomial prediction models having distinct inclusion criteria in a concurrent external validation.
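A small numerical sketch of the Scaled Brier Score that underlies the NeRMA Score, together with a simplified rescaling so that 0 corresponds to random guessing and 1 to the best model in the validation; this deliberately omits the fixed-effects adjustment for model applicability patterns used in the paper, and the simulated models are illustrative.

```python
import numpy as np

def scaled_brier_score(y_true, y_prob):
    """Scaled Brier Score: 1 - Brier / Brier_of_null_model (null = overall event rate)."""
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    brier = np.mean((y_prob - y_true) ** 2)
    brier_null = np.mean((y_true.mean() - y_true) ** 2)
    return 1.0 - brier / brier_null

def nerma_like_scores(y_true, model_probs):
    """Rescale each model's SBS so 0 = random guessing and 1 = best model in the validation.
    This is only the rescaling idea, not the paper's fixed-effects implementation."""
    sbs = np.array([scaled_brier_score(y_true, p) for p in model_probs])
    best = sbs.max()
    return sbs / best if best > 0 else sbs

# Example: three hypothetical prediction models evaluated on the same 5000 patients.
rng = np.random.default_rng(0)
risk = rng.uniform(0.05, 0.6, size=5000)
events = rng.binomial(1, risk)
models = [risk,                                   # unbiased model
          np.clip(risk + 0.15, 0, 1),             # biased model
          np.full_like(risk, events.mean())]      # "random guessing" (null) model
print(nerma_like_scores(events, models).round(3))
```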

Keywords: prediction model accuracy, scaled brier score, fixed effects methods, concurrent external validation

Procedia PDF Downloads 236
30063 Dynamic Bandwidth Allocation in Fiber-Wireless (FiWi) Networks

Authors: Eman I. Raslan, Haitham S. Hamza, Reda A. El-Khoribi

Abstract:

Fiber-Wireless (FiWi) networks are a promising candidate for future broadband access networks. These networks combine an optical network as the back end, where different passive optical network (PON) technologies are realized, and a wireless network as the front end, where different wireless technologies are adopted, e.g., LTE, WiMAX, Wi-Fi, and Wireless Mesh Networks (WMNs). The convergence of optical and wireless technologies requires designing architectures with robust, efficient, and effective bandwidth allocation schemes. Different bandwidth allocation algorithms have been proposed for FiWi networks, aiming to enhance the different segments of FiWi networks, including the wireless and optical subnetworks. In this survey, we focus on differentiating between the bandwidth allocation algorithms according to the segment of the FiWi network they enhance. We classify these techniques into wireless, optical, and hybrid bandwidth allocation techniques.

Keywords: fiber-wireless (FiWi), dynamic bandwidth allocation (DBA), passive optical networks (PON), media access control (MAC)

Procedia PDF Downloads 531
30062 Study on the Impact of Default Converter on the Quality of Energy Produced by DFIG Based Wind Turbine

Authors: N. Zerzouri, N. Benalia, N. Bensiali

Abstract:

This work is devoted to an analysis of the operation of a doubly fed induction generator (DFIG) integrated with a wind system. The power transfer between the stator and the network is carried out by acting on the rotor via a bidirectional signal converter. The analysis is devoted to the study of a fault in the converter due to an interruption of the control of one semiconductor. Simulation results obtained with the MATLAB/Simulink software illustrate the quality of the power generated under the fault condition.

Keywords: doubly fed induction generator (DFIG), wind energy, PWM inverter, modeling

Procedia PDF Downloads 316
30061 Social Distancing as a Population Game in Networked Social Environments

Authors: Zhijun Wu

Abstract:

While social living is considered to be an indispensable part of human life in today's ever-connected world, social distancing has recently received much public attention on its importance since the outbreak of the coronavirus pandemic. In fact, social distancing has long been practiced in nature among solitary species and has been taken by humans as an effective way of stopping or slowing down the spread of infectious diseases. A social distancing problem is considered for how a population, when in the world with a network of social sites, decides to visit or stay at some sites while avoiding or closing down some others so that the social contacts across the network can be minimized. The problem is modeled as a population game, where every individual tries to find some network sites to visit or stay so that he/she can minimize all his/her social contacts. In the end, an optimal strategy can be found for everyone when the game reaches an equilibrium. The paper shows that a large class of equilibrium strategies can be obtained by selecting a set of social sites that forms a so-called maximal r-regular subnetwork. The latter includes many well-studied network types, which are easy to identify or construct and can be completely disconnected (with r = 0) for the most-strict isolation or allow certain degrees of connectivity (with r > 0) for more flexible distancing. The equilibrium conditions of these strategies are derived. Their rigidity and flexibility are analyzed on different types of r-regular subnetworks. It is proved that the strategies supported by maximal 0-regular subnetworks are strictly rigid, while those by general maximal r-regular subnetworks with r > 0 are flexible, though some can be weakly rigid. The proposed model can also be extended to weighted networks when different contact values are assigned to different network sites.
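The r = 0 case described above can be illustrated directly with networkx: a maximal 0-regular subnetwork is a maximal independent set of sites. The example graph is arbitrary, and the r > 0 check shown is only a verification helper, not a construction algorithm.

```python
import networkx as nx

# Hypothetical network of social sites (nodes) and the contacts between them (edges).
G = nx.erdos_renyi_graph(n=20, p=0.2, seed=1)

# r = 0: a maximal 0-regular subnetwork is a maximal independent set. No two selected
# sites share an edge, so visits to the selected sites create no cross-site contacts,
# and no further site can be added without breaking that property (strict isolation).
sites_r0 = nx.maximal_independent_set(G, seed=1)
print("sites kept open under the strictest distancing strategy:", sorted(sites_r0))

# r > 0 allows limited connectivity: every selected site keeps exactly r contacts with
# other selected sites. Constructing a maximal r-regular subnetwork for general r is
# harder; the helper below only verifies whether a candidate selection is r-regular.
def is_r_regular(graph, nodes, r):
    sub = graph.subgraph(nodes)
    return all(degree == r for _, degree in sub.degree())

print("selection is 0-regular:", is_r_regular(G, sites_r0, 0))
```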

Keywords: social distancing, mitigation of spread of epidemics, population games, networked social environments

Procedia PDF Downloads 133
30060 Evolving Digital Circuits for Early Stage Breast Cancer Detection Using Cartesian Genetic Programming

Authors: Zahra Khalid, Gul Muhammad Khan, Arbab Masood Ahmad

Abstract:

Cartesian Genetic Programming (CGP) is explored to design an optimal circuit capable of early-stage breast cancer detection. CGP is used to evolve simple multiplexer circuits for the detection of malignancy in Fine Needle Aspiration (FNA) samples of the breast. The data set used is extracted from the Wisconsin Breast Cancer Database (WBCD). A range of experiments was performed, each with a different set of network parameters. The best evolved network detected malignancy with an accuracy of 99.14%, which is higher than that produced by most of the contemporary non-linear techniques, which are computationally more expensive than the proposed system. The evolved network comprises simple multiplexers and, being a digital circuit, can be implemented easily in hardware without further complications or inaccuracy.

Keywords: breast cancer detection, cartesian genetic programming, evolvable hardware, fine needle aspiration

Procedia PDF Downloads 216
30059 The Data-Driven Localized Wave Solution of the Fokas-Lenells Equation using PINN

Authors: Gautam Kumar Saharia, Sagardeep Talukdar, Riki Dutta, Sudipta Nandy

Abstract:

The physics-informed neural network (PINN) method opens up an approach for numerically solving nonlinear partial differential equations, leveraging the fast calculation speed and high precision of modern computing systems. We construct the PINN based on the strong universal approximation theorem, apply the initial-boundary value data and residual collocation points to weakly impose the initial and boundary conditions on the neural network, and choose the optimization algorithms adaptive moment estimation (ADAM) and limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) to optimize the learnable parameters of the neural network. Next, we improve the PINN with a weighted loss function to obtain both the bright and dark soliton solutions of the Fokas-Lenells equation (FLE). We find that the proposed scheme of adjustable weight coefficients in the PINN has a better convergence rate and generalizability than the basic PINN algorithm. We believe that the PINN approach to solving the partial differential equations appearing in nonlinear optics will be useful for studying various optical phenomena.
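A schematic PyTorch sketch of the weighted PINN loss structure described above. For brevity, the residual is written for a generic scalar PDE u_t + u·u_x = 0 rather than the complex-valued Fokas-Lenells system, and the network width, depth, and loss weights are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PINN(nn.Module):
    """Fully connected network u_theta(x, t); inputs are (N, 1) column tensors."""
    def __init__(self, width=64, depth=4):
        super().__init__()
        layers, dim = [], 2
        for _ in range(depth):
            layers += [nn.Linear(dim, width), nn.Tanh()]
            dim = width
        layers += [nn.Linear(dim, 1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=1))

def pde_residual(model, x, t):
    """Residual of a placeholder PDE u_t + u*u_x = 0 evaluated at collocation points."""
    x.requires_grad_(True); t.requires_grad_(True)
    u = model(x, t)
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    return u_t + u * u_x

def weighted_loss(model, ib_pts, ib_vals, col_pts, w_data=10.0, w_pde=1.0):
    """Weighted sum of the initial/boundary data loss and the collocation residual loss."""
    x_ib, t_ib = ib_pts
    data_loss = torch.mean((model(x_ib, t_ib) - ib_vals) ** 2)
    res = pde_residual(model, *col_pts)
    return w_data * data_loss + w_pde * torch.mean(res ** 2)

# Typical training schedule, as in the abstract: ADAM first, then L-BFGS fine-tuning.
model = PINN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# for step in range(20000):
#     opt.zero_grad(); loss = weighted_loss(model, ib_pts, ib_vals, col_pts)
#     loss.backward(); opt.step()
```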

Keywords: deep learning, optical Soliton, neural network, partial differential equation

Procedia PDF Downloads 126
30058 A Computer-Aided System for Detection and Classification of Liver Cirrhosis

Authors: Abdel Hadi N. Ebraheim, Eman Azomi, Nefisa A. Fahmy

Abstract:

This paper designs and implements a computer-aided system (CAS) to help detect and diagnose liver cirrhosis in patients with chronic hepatitis C. Our system reduces the features (tests) the patient is asked to undergo to their minimal, most informative subset, with a diagnostic accuracy above 99%, hence saving both time and costs. We use a Support Vector Machine (SVM) with cross-validation, a Multilayer Perceptron Neural Network (MLP), and a Generalized Regression Neural Network (GRNN), which employs a basis of radial functions for functional approximation, as classifiers. Our system was tested on 199 subjects, 99 of whom had chronic hepatitis C. The subjects were selected from the outpatient clinic of the National Hepatology and Tropical Medicine Research Institute (NHTMRI).
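A minimal sketch of the SVM-with-cross-validation branch of such a CAS, using scikit-learn; the file name, the label column, and the retained subset of tests are assumptions, since the abstract does not list them.

```python
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical table of the reduced set of laboratory tests plus the cirrhosis label.
df = pd.read_csv("hepatitis_c_tests.csv")
X = df.drop(columns=["cirrhosis"])      # assumed label column name
y = df["cirrhosis"]

# RBF-kernel SVM with feature scaling, evaluated with 10-fold cross-validation.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(svm, X, y, cv=10, scoring="accuracy")
print(f"mean CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```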

Keywords: liver cirrhosis, artificial neural network, support vector machine, multi-layer perceptron, classification, accuracy

Procedia PDF Downloads 461
30057 Application of Combined Cluster and Discriminant Analysis to Make the Operation of Monitoring Networks More Economical

Authors: Norbert Magyar, Jozsef Kovacs, Peter Tanos, Balazs Trasy, Tamas Garamhegyi, Istvan Gabor Hatvani

Abstract:

Water is one of the most important common resources, and as a result of urbanization, agriculture, and industry it is becoming more and more exposed to potential pollutants. The prevention of the deterioration of water quality is a crucial task for environmental scientists. To achieve this aim, the operation of monitoring networks is necessary. In general, these networks have to meet many important requirements, such as representativeness and cost efficiency. However, existing monitoring networks often include sampling sites which are unnecessary. With the elimination of these sites, the monitoring network can be optimized and can operate more economically. The aim of this study is to illustrate the applicability of CCDA (Combined Cluster and Discriminant Analysis) to the field of water quality monitoring and to optimize the monitoring networks of a river (the Danube), a wetland-lake system (Kis-Balaton & Lake Balaton), and two surface-subsurface water systems, on the watershed of Lake Neusiedl/Lake Fertő and in the Szigetköz area, over a period of approximately two decades. CCDA combines two multivariate data analysis methods: hierarchical cluster analysis and linear discriminant analysis. Its goal is to determine homogeneous groups of observations, in our case sampling sites, by comparing the goodness of preconceived classifications obtained from hierarchical cluster analysis with random classifications. The main idea behind CCDA is that if the ratio of correctly classified cases for a grouping is higher than at least 95% of the ratios for the random classifications, then at the level of significance (α=0.05) the given sampling sites do not form a homogeneous group. Because the sampling on Lake Neusiedl/Lake Fertő was conducted at the same time at all sampling sites, it was possible to visualize the differences between the sampling sites belonging to the same or different groups on scatterplots. Based on the results, the monitoring network of the Danube yields redundant information over certain sections, so that of 12 sampling sites, 3 could be eliminated without loss of information. In the case of the wetland (Kis-Balaton), one pair of sampling sites out of 12, and in the case of Lake Balaton, 5 out of 10 could be discarded. For the groundwater system of the catchment area of Lake Neusiedl/Lake Fertő, all 50 monitoring wells are necessary; there is no redundant information in the system. The number of sampling sites on Lake Neusiedl/Lake Fertő itself can be decreased to approximately half of the original number. Furthermore, neighbouring sampling sites were compared pairwise using CCDA, and the results were plotted on diagrams or isoline maps showing the locations of the greatest differences. These results can help researchers decide where to place new sampling sites. The application of CCDA proved to be a useful tool in the optimization of monitoring networks for different types of water bodies. Based on the results obtained, the monitoring networks can be operated more economically.
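A compact sketch of the core CCDA decision rule described above: the ratio of correctly classified sampling sites under a candidate grouping (obtained here with linear discriminant analysis) is compared against the ratios obtained for random groupings of the same sizes. The data, labels, and number of random trials are illustrative; the published CCDA procedure also includes the preceding hierarchical clustering step.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def correct_ratio(X, labels):
    """Ratio of correctly (re)classified observations for a given grouping, via LDA."""
    lda = LinearDiscriminantAnalysis().fit(X, labels)
    return (lda.predict(X) == labels).mean()

def ccda_test(X, labels, n_random=999, alpha=0.05, seed=0):
    """Core CCDA decision: is the preconceived grouping better than random groupings?
    Returns True if the sites do NOT form one homogeneous group at level alpha."""
    rng = np.random.default_rng(seed)
    observed = correct_ratio(X, labels)
    random_ratios = np.array(
        [correct_ratio(X, rng.permutation(labels)) for _ in range(n_random)])
    # "higher than at least 95% of the ratios for the random classifications"
    return observed > np.quantile(random_ratios, 1.0 - alpha)

# Example: water-quality measurements (rows) at sampling sites split into two candidate
# groups suggested by hierarchical cluster analysis (labels are illustrative).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(0.3, 1, (30, 5))])
labels = np.array([0] * 30 + [1] * 30)
print("sites form separable (non-homogeneous) groups:", ccda_test(X, labels))
```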

Keywords: combined cluster and discriminant analysis, cost efficiency, monitoring network optimization, water quality

Procedia PDF Downloads 348
30056 Deep Neural Network Approach for Navigation of Autonomous Vehicles

Authors: Mayank Raj, V. G. Narendra

Abstract:

Ever since the DARPA challenge on autonomous vehicles in 2005, there has been a lot of buzz about ‘Autonomous Vehicles’ amongst the major tech giants such as Google, Uber, and Tesla. Numerous approaches have been adopted to solve this problem, which can have a long-lasting impact on mankind. In this paper, we used deep learning techniques and the TensorFlow framework with the goal of building a neural network model to predict the features (speed, acceleration, steering angle, and brake) needed for the navigation of autonomous vehicles. The deep neural network was trained on images and sensor data obtained from the comma.ai dataset. A heatmap was used to check for correlation among the features, and finally, four important features were selected. This is a multivariate regression problem. The final model had five convolutional layers, followed by five dense layers. Finally, the predicted values were tested against the labeled data, with the mean squared error used as the performance metric.
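A schematic Keras version of the architecture described above, i.e., five convolutional layers followed by five dense layers trained with mean squared error on the four regression targets; the filter counts, kernel sizes, and input shape are assumptions, since the abstract does not specify them.

```python
from tensorflow.keras import layers, models

def build_model(input_shape=(160, 320, 3)):
    """Five conv layers + five dense layers predicting speed, acceleration,
    steering angle, and brake (4 outputs)."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(24, 5, strides=2, activation="relu"),
        layers.Conv2D(36, 5, strides=2, activation="relu"),
        layers.Conv2D(48, 5, strides=2, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(128, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(16, activation="relu"),
        layers.Dense(4),               # speed, acceleration, steering angle, brake
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mse"])
    return model

model = build_model()
model.summary()
# model.fit(images, targets, validation_split=0.2, epochs=10)  # comma.ai-derived data
```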

Keywords: autonomous vehicles, deep learning, computer vision, artificial intelligence

Procedia PDF Downloads 158
30055 Parallel Self Organizing Neural Network Based Estimation of Archie’s Parameters and Water Saturation in Sandstone Reservoir

Authors: G. M. Hamada, A. A. Al-Gathe, A. M. Al-Khudafi

Abstract:

The determination of water saturation in sandstone is a vital question for determining the initial oil or gas in place in reservoir rocks. Water saturation determination using electrical measurements is based mainly on Archie’s formula; consequently, the accuracy of Archie’s formula parameters rigorously affects water saturation values. The determination of Archie’s parameters a, m, and n proceeds by three techniques: the conventional technique, Core Archie-Parameter Estimation (CAPE), and the 3-D technique. This work introduces a hybrid parallel self-organizing neural network (PSONN) system targeting accepted values of Archie’s parameters and, consequently, reliable water saturation values. The work focuses on the Archie’s parameter determination techniques (conventional, CAPE, and 3-D) and on the subsequent calculation of water saturation. Using the same data, the hybrid PSONN algorithm is used to estimate Archie’s parameters and predict water saturation. Results have shown that the estimated Archie’s parameters m, a, and n are highly acceptable according to statistical analysis, indicating that the PSONN model has a lower statistical error and a higher correlation coefficient. This study was conducted using a high number of measurement points for 144 core plugs from a sandstone reservoir. The PSONN algorithm can provide reliable water saturation values, and it can supplement or even replace the conventional techniques to determine Archie’s parameters and thereby calculate water saturation profiles.

Keywords: water saturation, Archie’s parameters, artificial intelligence, PSONN, sandstone reservoir

Procedia PDF Downloads 128
30054 Heat Source Temperature for Centered Heat Source on Isotropic Plate with Lower Surface Forced Cooling Using Neural Network and Three Different Materials

Authors: Fadwa Haraka, Ahmad Elouatouati, Mourad Taha Janan

Abstract:

In this study, we propose a neural network based method to calculate the heat source temperature of an isotropic plate with lower-surface forced cooling. To validate the proposed model, the heat source temperature values are compared with those obtained from the analytical method (separation of variables) and from a finite element model. The mathematical simulation is carried out through 3D numerical simulation in the COMSOL software, considering three different materials: aluminum, copper, and graphite. The proposed method leads to a formulation of the heat source temperature based on the thermal and geometric properties of the base plate.

Keywords: thermal model, thermal resistance, finite element simulation, neural network

Procedia PDF Downloads 358
30053 Design of Control System Based On PLC and Kingview for Granulation Product Line

Authors: Mei-Feng, Yude-Fan, Min-Zhu

Abstract:

Based on PLC and KingView, this paper proposes a method to design an automatic control system according to the process flow and requirements of a granulation product line. The PLC system consists of a main station and subordinate stations that communicate over a PROFIBUS network; the PLC and the computer communicate over an Ethernet network. The human-machine interface is realized with the KingView software, including real-time process flows, historical report curves, and product report forms. The construction of the control system, the hardware configuration, and the software design are introduced. In addition, PROFIBUS-based frequency conversion control, the difficult points, and the configuration software design are elaborated. The running results show several advantages of the control system: a high degree of automation, complete functionality, good stability, and convenient operation.

Keywords: PLC, PROFIBUS, configuration, frequency

Procedia PDF Downloads 402
30052 Predicting Shot Making in Basketball Learnt from Adversarial Multiagent Trajectories

Authors: Mark Harmon, Abdolghani Ebrahimi, Patrick Lucey, Diego Klabjan

Abstract:

In this paper, we predict the likelihood of a player making a shot in basketball from multiagent trajectories. Previous approaches to similar problems center on hand-crafting features to capture domain-specific knowledge. Although intuitive, recent work in deep learning has shown that this approach is prone to missing important predictive features. To circumvent this issue, we present a convolutional neural network (CNN) approach where we initially represent the multiagent behavior as an image. To encode the adversarial nature of basketball, we use a multichannel image, which we then feed into a CNN. Additionally, to capture the temporal aspect of the trajectories, we use “fading.” We find that this approach is superior to a traditional FFN model. By using gradient ascent, we were able to discover what the CNN filters look for during training. Finally, we find that a combined FFN+CNN is the best performing network, with an error rate of 39%.
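A rough sketch of the image-based representation and CNN described above: trajectories are rasterized into a multichannel court image with "fading" (older positions drawn dimmer), and a small CNN predicts the make/miss probability. The channel assignment, court resolution, frame count, and network layout are all illustrative assumptions.

```python
import numpy as np
from tensorflow.keras import layers, models

def trajectories_to_image(offense, defense, ball, court_hw=(50, 94), n_frames=32):
    """Rasterize multiagent trajectories into a 3-channel court image with 'fading':
    older frames receive lower intensity than recent ones. Each agent list holds, per
    frame, integer (row, col) court cells; this layout is an illustrative choice."""
    img = np.zeros((*court_hw, 3), dtype=np.float32)
    for f in range(n_frames):
        weight = (f + 1) / n_frames                 # fade: older frames are dimmer
        for ch, agents in enumerate((offense, defense, ball)):
            for (row, col) in agents[f]:
                img[row, col, ch] = max(img[row, col, ch], weight)
    return img

def build_shot_cnn(input_shape=(50, 94, 3)):
    """Small CNN classifying make/miss from the multichannel trajectory image."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),      # probability the shot is made
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_shot_cnn()
```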

Keywords: basketball, computer vision, image processing, convolutional neural network

Procedia PDF Downloads 153
30051 Neural Network and Support Vector Machine for Prediction of Foot Disorders Based on Foot Analysis

Authors: Monireh Ahmadi Bani, Adel Khorramrouz, Lalenoor Morvarid, Bagheri Mahtab

Abstract:

Background: Foot disorders are common musculoskeletal problems. Plantar pressure distribution measurement is one of the most important parts of foot disorder diagnosis for quantitative analysis. However, the association between plantar pressure and foot disorders is not clear. With the growth of datasets and machine learning methods, the relationship between foot disorders and plantar pressures can be detected. Significance of the study: The purpose of this study was to predict the probability of common foot disorders based on peak plantar pressure distribution and the center of pressure during walking. Methodology: 2323 participants were assessed in a foot therapy clinic between 2015 and 2021. Foot disorders were diagnosed by an experienced physician, and the participants were then asked to walk on a force plate scanner. After data preprocessing, due to differences in walking time and foot size, we normalized the samples based on time and foot size. Some of the force plate variables were selected as inputs to a deep neural network (DNN), and the probability of each foot disorder was measured. In the next step, we used a support vector machine (SVM) and ran the dataset for each foot disorder (classification of yes or no). We compared the DNN and SVM for foot disorder prediction based on plantar pressure distributions and the center of pressure. Findings: The results demonstrated that the accuracy of the deep learning architecture is sufficient for most clinical and research applications in the study population. In addition, the SVM approach has higher accuracy for predictions, enabling applications for foot disorder diagnosis. The detection accuracy was 71% with the deep learning algorithm and 78% with the SVM algorithm. Moreover, working with the peak plantar pressure distribution was more accurate than with the center of pressure dataset. Conclusion: Both algorithms, deep learning and SVM, will help therapists and patients to improve the data pool and enhance foot disorder prediction with less expense and error, after some restrictions are properly removed.

Keywords: deep neural network, foot disorder, plantar pressure, support vector machine

Procedia PDF Downloads 358
30050 Work-Life Balance: A Landscape Mapping of Two Decades of Scholarly Research

Authors: Gertrude I Hewapathirana, Mohamed M. Moustafa, Michel G. Zaitouni

Abstract:

The purposes of this research are: (a) to provide an epistemological and ontological understanding of WLB theory, practice, and research to illuminate how WLB evolved between 2000 and 2020, and (b) to analyze peer-reviewed research to identify the gaps, hotspots, underlying dynamics, theoretical and thematic trends, influential authors, research collaborations, geographic networks, and the multidisciplinary nature of WLB theory to guide future researchers. The research used a four-step bibliometric network analysis to explore five research questions. Using keywords such as WLB and associated variants, 1190 peer-reviewed articles were extracted from the Scopus database and transformed to a plain text format for filtering. The analysis was conducted using R version 4.1 (R Development Core Team, 2021) and several libraries such as bibliometrix, wordcloud, and ggplot2. We used the VOSviewer software (van Eck & Waltman, 2019) for network visualization. WLB theory has grown into a multifaceted, multidisciplinary field of research. There is a paucity of research between 2000 and 2005 and exponential growth from 2006 to 2015. The rapid increase of WLB research in the USA, UK, and Australia reflects the increasing workplace stresses due to hypercompetitive workplaces, inflexible work systems, and increasing diversity, as well as the emergence of WLB support mechanisms and legal and constitutional mandates to enhance employee and family wellbeing at multiple levels of social systems. A severe knowledge gap exists due to inadequate publications disseminating the "core" WLB research. "Locally-centralized-globally-discrete" collaboration among researchers indicates a "North-South" divide between developed and developing nations. A shortage of WLB research in developing nations and a lack of research collaboration hinder a global understanding of WLB as a universal phenomenon. Policymakers and practitioners can use the findings to initiate supporting policies and innovative work systems. The boundary expansion of WLB concepts, categories, relations, and properties would facilitate researchers and theoreticians in testing a variety of new dimensions. This is the most comprehensive WLB landscape analysis to date, revealing emerging trends, concepts, networks, underlying dynamics, gaps, and growing theoretical and disciplinary boundaries. It portrays WLB as a universal theory.

Keywords: work-life balance, co-citation networks, keyword co-occurrence network, bibliometric analysis

Procedia PDF Downloads 196
30049 A Systematic Review on Challenges in Big Data Environment

Authors: Rimmy Yadav, Anmol Preet Kaur

Abstract:

Big Data has demonstrated vast potential in streamlining operations, supporting decision-making, and spotting business trends in different fields, for example, manufacturing, finance, and information technology. This paper gives a multi-disciplinary overview of the research issues in big data and of its procedures, tools, and systems related to privacy, data storage management, network and energy utilization, fault tolerance, and information representation. In addition, the challenges and opportunities present in the Big Data platform are discussed.

Keywords: big data, privacy, data management, network and energy consumption

Procedia PDF Downloads 312
30048 Enhancement of Environmental Security by the Application of Wireless Sensor Network in Nigeria

Authors: Ahmadu Girgiri, Lawan Gana Ali, Mamman M. Baba

Abstract:

Environmental security clearly articulates the progress and development of various communities around the world, irrespective of region, culture, religion, or social inclination. However, the present state of insecurity has become a serious issue devastating the peace, unity, stability, and progress of man and his physical environment, particularly in developing countries. Recently, security measures and their management in Nigeria have been a bottleneck to the effectiveness and advancement of various sectors, including business, education, social relations, politics, and, above all, the economy. Several measures have been considered for mitigating environmental insecurity, such as surveillance, demarcation, empowerment of security personnel, and the like, but the issue remains disturbing. In this paper, we present the application of a new technology that contributes to the improvement of security surveillance, known as the Wireless Sensor Network (WSN). The system is a new, smart, and emerging technology that provides monitoring, detection, and aggregation of information using sensor nodes and a wireless network. A WSN detects, monitors, and stores information on activities in the deployed area, such as schools, the environment, business centers, public squares, industries, and outskirts, and transmits it to end users. This will reduce the cost of security funding and ease security surveillance, depending on the nature and requirements of the deployment.

Keywords: application, environment, insecurity, sensor, wireless sensor network

Procedia PDF Downloads 263
30047 Performance Evaluation of Routing Protocols for Video Conference over MPLS VPN Network

Authors: Abdullah Al Mamun, Tarek R. Sheltami

Abstract:

Video conferencing is a highly demanded facility nowadays owing to its real-time characteristics, and faster communication is the primary requirement of this technology. Multi-Protocol Label Switching (MPLS) IP Virtual Private Network (VPN) addresses this problem and is able to make communication faster than other techniques. This paper studies the performance comparison of video traffic between two routing protocols, namely the Enhanced Interior Gateway Routing Protocol (EIGRP) and Open Shortest Path First (OSPF). The combination of traditional routing and MPLS improves the forwarding mechanism, scalability, and overall network performance. We use GNS3 and OPNET Modeler 14.5 to simulate many different scenarios, and metrics such as delay, jitter, and mean opinion score (MOS) value are measured. The simulation results show that OSPF with BGP-MPLS VPN offers the best performance for the video conferencing application.

Keywords: OSPF, BGP, EIGRP, MPLS, Video conference, Provider router, edge router, layer3 VPN

Procedia PDF Downloads 331
30046 Automated Computer-Vision Analysis Pipeline of Calcium Imaging Neuronal Network Activity Data

Authors: David Oluigbo, Erik Hemberg, Nathan Shwatal, Wenqi Ding, Yin Yuan, Susanna Mierau

Abstract:

Introduction: Calcium imaging is an established technique in neuroscience research for detecting activity in neural networks. Bursts of action potentials in neurons lead to transient increases in intracellular calcium visualized with fluorescent indicators. Manual identification of cell bodies and their contours by experts typically takes 10-20 minutes per calcium imaging recording. Our aim, therefore, was to design an automated pipeline to facilitate and optimize calcium imaging data analysis. Our pipeline aims to accelerate cell body and contour identification and the production of graphical representations reflecting changes in neuronal calcium-based fluorescence. Methods: We created a Python-based pipeline that uses OpenCV (a computer vision Python package) to accurately (1) detect neuron contours, (2) extract the mean fluorescence within each contour, and (3) identify transient changes in the fluorescence due to neuronal activity. The pipeline consists of 3 Python scripts that can all be easily accessed through a Python Jupyter notebook. In total, we tested this pipeline on ten separate calcium imaging datasets from murine dissociated cortical cultures. We then compared our automated pipeline outputs with the outputs of manually labeled data for neuronal cell location and the corresponding fluorescence time series generated by an expert neuroscientist. Results: Our results show that our automated pipeline efficiently pinpoints neuronal cell body locations and neuronal contours and provides a graphical representation of neural network metrics accurately reflecting changes in neuronal calcium-based fluorescence. The pipeline detected the shape, area, and location of most neuronal cell body contours by using binary thresholding and grayscale image conversion to allow computer vision to better distinguish between cells and non-cells. Its results were comparable to manually analyzed results but with significantly reduced result acquisition times of 2-5 minutes per recording versus 10-20 minutes per recording. Based on these findings, our next step is to precisely measure the specificity and sensitivity of the automated pipeline’s cell body and contour detection to extract more robust neural network metrics and dynamics. Conclusion: Our Python-based pipeline performed automated computer vision-based analysis of calcium imaging recordings from neuronal cell bodies in neuronal cell cultures. Our next goal is to improve cell body and contour detection to produce more robust, accurate neural network metrics and dynamic graphs.
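A condensed sketch of the three pipeline steps described above, i.e., contour detection via grayscale conversion and binary thresholding, per-contour mean fluorescence extraction, and a simple transient detector; the threshold values and the transient criterion are illustrative choices, not the pipeline's actual parameters.

```python
import cv2
import numpy as np

def detect_neuron_contours(mean_image, thresh=60, min_area=30):
    """Step 1: find cell-body contours on a mean/max projection of the recording."""
    gray = cv2.cvtColor(mean_image, cv2.COLOR_BGR2GRAY) if mean_image.ndim == 3 else mean_image
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) >= min_area]

def fluorescence_traces(frames, contours):
    """Step 2: mean fluorescence inside each contour, for every frame of the movie."""
    h, w = frames[0].shape[:2]
    masks = []
    for c in contours:
        mask = np.zeros((h, w), dtype=np.uint8)
        cv2.drawContours(mask, [c], -1, 255, thickness=-1)
        masks.append(mask > 0)
    return np.array([[frame[m].mean() for m in masks] for frame in frames])  # shape (T, n_cells)

def detect_transients(traces, z=3.0):
    """Step 3: flag frames where dF/F exceeds z standard deviations of the baseline."""
    f0 = np.median(traces, axis=0)
    dff = (traces - f0) / f0
    return dff > z * dff.std(axis=0)

# frames: list/array of grayscale movie frames; mean_image: their temporal average.
```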

Keywords: calcium imaging, computer vision, neural activity, neural networks

Procedia PDF Downloads 82
30045 Cloud Design for Storing Large Amount of Data

Authors: M. Strémy, P. Závacký, P. Cuninka, M. Juhás

Abstract:

The main goal of this paper is to introduce our design of a private cloud for storing large amounts of data, especially pictures, and to provide a good technological backend for data analysis based on parallel processing and business intelligence. We have tested hypervisors, cloud management tools, storage for all the data, and Hadoop to provide analysis of unstructured data. Providing high availability, virtual network management, logical separation of projects, and rapid deployment of physical servers to our environment was also needed.

Keywords: cloud, glusterfs, hadoop, juju, kvm, maas, openstack, virtualization

Procedia PDF Downloads 353
30044 Wireless Sensor Anomaly Detection Using Soft Computing

Authors: Mouhammd Alkasassbeh, Alaa Lasasmeh

Abstract:

We live in an era of rapid development as a result of significant scientific growth. Like other technologies, wireless sensor networks (WSNs) are playing one of the main roles. Based on WSNs, ZigBee adds many features to devices, such as minimal cost and power consumption, and increases the range and connectivity of sensor nodes. ZigBee technology has come to be used in various fields, including science, engineering, and networks, and even in medical aspects of intelligent building. In this work, we generated two main datasets, the first based on a tree topology and the second on a star topology. The datasets were evaluated by three machine learning (ML) algorithms: J48, meta.j48, and the multilayer perceptron (MLP). For each topology, network traffic was classified as normal or abnormal (attack). The dataset used in our work contained simulated data from Network Simulator 2 (NS2). On each dataset, the Bayesian network meta.j48 classifier achieved the highest accuracy level among the classifiers, of 99.7% and 99.2%, respectively.

Keywords: IDS, Machine learning, WSN, ZigBee technology

Procedia PDF Downloads 543
30043 Alternator Fault Detection Using Wigner-Ville Distribution

Authors: Amin Ranjbar, Amir Arsalan Jalili Zolfaghari, Amir Abolfazl Suratgar, Mehrdad Khajavi

Abstract:

This paper describes a two-stage learning-based fault detection procedure for alternators. The procedure considers three machine conditions, namely a shortened brush, a high-impedance relay, and a healthy alternator. The fault detection algorithm uses the Wigner-Ville distribution as a feature extractor together with an appropriate feature classifier. In this work, an ANN (artificial neural network) and an SVM (support vector machine) were compared to determine the more suitable one, evaluated by the mean squared error criterion. The modules work together to detect possible faulty conditions of operating machines. To test the method's performance, a signal database was prepared by creating different conditions on a laboratory setup. The results show that satisfactory performance is achieved by implementing this method.
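A small sketch of a discrete Wigner-Ville feature extractor of the kind the abstract describes, computed as the FFT of the instantaneous autocorrelation of the analytic signal; this is a simplified (pseudo-WVD-style) implementation and the example signal is synthetic, not alternator data.

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(signal):
    """Discrete Wigner-Ville distribution of a real signal.
    Returns a (frequency x time) matrix usable as a feature map for an ANN or SVM."""
    x = hilbert(signal)            # analytic signal suppresses negative-frequency terms
    n = len(x)
    wvd = np.zeros((n, n), dtype=complex)
    for t in range(n):
        tau_max = min(t, n - 1 - t)
        tau = np.arange(-tau_max, tau_max + 1)
        kernel = np.zeros(n, dtype=complex)
        kernel[tau % n] = x[t + tau] * np.conj(x[t - tau])  # instantaneous autocorrelation
        wvd[t] = np.fft.fft(kernel)                          # lag -> frequency
    return np.real(wvd).T

# Example: WVD of a simple chirp-like vibration signal (illustrative only).
time = np.linspace(0, 1, 256)
sig = np.sin(2 * np.pi * (20 * time + 30 * time ** 2))
tfr = wigner_ville(sig)            # summarize or flatten tfr as classifier input features
```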

Keywords: alternator, artificial neural network, support vector machine, time-frequency analysis, Wigner-Ville distribution

Procedia PDF Downloads 374
30042 Lineup Optimization Model of Basketball Players Based on the Prediction of Recursive Neural Networks

Authors: Wang Yichen, Haruka Yamashita

Abstract:

In recent years, in the field of sports, decision making, such as the selection of members for a game and the strategy of the game, based on the analysis of accumulated sports data has been widely attempted. In fact, in the NBA basketball league, where the world's highest-level players gather, teams analyze data using various statistical techniques to win games. However, it is difficult to analyze the game data for each play, such as the ball tracking or the motion of the players in the game, because the situation of the game changes rapidly and the structure of the data is complicated. Therefore, an analysis method for real-time game play data is needed. In this research, we propose an analytical model for “determining the optimal lineup composition” using real-time play data, which is considered to be difficult for all coaches. Because replacing the entire lineup is too complicated, the actual questions for player replacement are “whether or not the lineup should be changed” and “whether or not a Small Ball lineup should be adopted”. Therefore, we propose an analytical model for the optimal player selection problem based on Small Ball lineups. In basketball, we can accumulate scoring data for each play, which indicates a player's contribution to the game, and the scoring data can be considered as time series data. In order to compare the importance of players in different situations and lineups, we combine an RNN (Recurrent Neural Network) model, which can analyze time series data, and an NN (Neural Network) model, which can analyze the situation on the court, to build a prediction model of the score. This model is capable of identifying the current optimal lineup for different situations. In this research, we collected all the accumulated NBA data from the 2019-2020 season and applied the method to the actual basketball play data to verify the reliability of the proposed model.

Keywords: recurrent neural network, players lineup, basketball data, decision making model

Procedia PDF Downloads 133
30041 An ANN Approach for Detection and Localization of Fatigue Damage in Aircraft Structures

Authors: Reza Rezaeipour Honarmandzad

Abstract:

In this paper, we propose an ANN for the detection and localization of fatigue damage in aircraft structures. We used a network of piezoelectric transducers for Lamb-wave measurements in order to calculate damage indices. Data gathered by the sensors were given to a neural network classifier. A set of neural network electors of different architectures cooperates to achieve consensus concerning the state of each monitored path. Sensed signal variations in the region of interest (ROI), detected by the networks at each path, were used to assess the state of the structure as well as to localize the detected damage and to filter out ambient changes. The classifier has been extensively tested on large data sets acquired in tests of specimens with artificially introduced notches, as well as on the results of numerous fatigue experiments. The effect of the classifier structure and of the training data on the results was evaluated.

Keywords: ANN, fatigue damage, aircraft structures, piezoelectric transducers, lamb-wave measurements

Procedia PDF Downloads 417