Search results for: wireless sensor networks (WSN)
846 Reconstructability Analysis for Landslide Prediction
Authors: David Percy
Abstract:
Landslides are a geologic phenomenon that affects a large number of inhabited places and are constantly being monitored and studied for the prediction of future occurrences. Reconstructability analysis (RA) is a methodology for extracting informative models from large volumes of data; it works exclusively with discrete data. While RA has been used extensively in medical applications and social science, we are introducing it to the spatial sciences through applications like landslide prediction. Since RA works exclusively with discrete data, such as soil classification or bedrock type, continuous data, such as porosity, must be binned for inclusion in the model. RA constructs models of the data that pick out the most informative elements, the independent variables (IVs), from each layer that predict the dependent variable (DV), landslide occurrence. Each layer included in the model retains its classification data as a primary encoding of the data. Unlike other machine learning algorithms that force the data into one-hot-encoding-type schemes, RA works directly with the data as encoded, with the exception of continuous data, which must be binned. The usual physical and derived layers are included in the model, and testing our results against other published methodologies, such as neural networks, yields similar accuracy but with the advantage of a completely transparent model. The result of an RA session with a data set is a report on every combination of variables and its probability of landslide occurrence. In this way, every informative combination of variable states can be examined.
Keywords: reconstructability analysis, machine learning, landslides, raster analysis
Procedia PDF Downloads 66
845 Research Action Fields at the Nexus of Digital Transformation and Supply Chain Management: Findings from Practitioner Focus Group Workshops
Authors: Brandtner Patrick, Staberhofer Franz
Abstract:
Logistics and Supply Chain Management are of crucial importance for organisational success. In the era of Digitalization, several implications and improvement potentials for these domains arise, which at the same time could lead to decreased competitiveness and could endanger long-term company success if ignored or neglected. However, empirical research on Digitalization and the benefits practitioners attribute to it is scarce and mainly focused on single technologies or on separate, isolated Supply Chain blocks, such as distribution logistics or procurement only. The current paper applies a holistic focus group approach to elaborate practitioner use cases at the nexus of the concepts of Supply Chain Management (SCM) and Digitalization. In the course of three focus group workshops with over 45 participants from more than 20 organisations, a comprehensive set of benefit entitlements and areas for improvement in terms of applying digitalization to SCM is developed. The main results of the paper indicate the relevance of Digitalization being realized in practice. In the form of seventeen concrete research action fields, the benefit entitlements are aggregated and transformed into potential starting points for future research projects in this area. The main contribution of this paper is an empirically grounded basis for future research projects and an overview of actual research action fields from the practitioners' point of view.
Keywords: digital supply chain, digital transformation, supply chain management, value networks
Procedia PDF Downloads 177
844 Statistical Time-Series and Neural Architecture of Malaria Patients Records in Lagos, Nigeria
Authors: Akinbo Razak Yinka, Adesanya Kehinde Kazeem, Oladokun Oluwagbenga Peter
Abstract:
Time series data are sequences of observations collected over a period of time. Such data can be used to predict health outcomes, such as disease progression, mortality, hospitalization, etc. The statistical approach is based on mathematical models that capture the patterns and trends of the data, such as autocorrelation, seasonality, and noise, while neural methods are based on artificial neural networks, which are computational models that mimic the structure and function of biological neurons. This paper compared both parametric and non-parametric time series models of patients treated for malaria in Maternal and Child Health Centres in Lagos State, Nigeria. The forecast methods considered were linear regression, integrated moving average, ARIMA, and SARIMA modelling for the parametric approach, while the Multilayer Perceptron (MLP) and Long Short-Term Memory (LSTM) networks were used for the non-parametric models. The performance of each method was evaluated using the Mean Absolute Error (MAE), R-squared (R2), and Root Mean Square Error (RMSE) as criteria to determine the accuracy of each model. The study revealed that the best performance in terms of error was found in the MLP, followed by the LSTM and ARIMA models. In addition, the bootstrap aggregating technique was used to make robust forecasts when there are uncertainties in the data.
Keywords: ARIMA, bootstrap aggregation, MLP, LSTM, SARIMA, time-series analysis
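As a rough illustration of the parametric/non-parametric comparison this abstract describes, the sketch below fits an ARIMA model and a lag-window MLP to the same univariate series and scores both with MAE, RMSE, and R2. The synthetic series, lag order, and hyperparameters are illustrative assumptions, not the study's records.

```python
# Minimal sketch: comparing a parametric (ARIMA) and a non-parametric (MLP)
# forecaster on a univariate series. Data here are synthetic; the lag order
# and network size are assumed for illustration.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

rng = np.random.default_rng(0)
t = np.arange(120)
series = 50 + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, 120)  # fake monthly counts
train, test = series[:100], series[100:]

# Parametric: ARIMA(1,1,1)
arima_fc = ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=len(test))

# Non-parametric: MLP trained on lagged windows of length 12
def lagged(x, p=12):
    X = np.array([x[i:i + p] for i in range(len(x) - p)])
    return X, x[p:]

X_tr, y_tr = lagged(train)
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)

# Recursive multi-step MLP forecast
hist = list(train[-12:])
mlp_fc = []
for _ in range(len(test)):
    yhat = mlp.predict(np.array(hist[-12:]).reshape(1, -1))[0]
    mlp_fc.append(yhat)
    hist.append(yhat)

for name, fc in [("ARIMA", arima_fc), ("MLP", np.array(mlp_fc))]:
    print(name, "MAE=%.2f RMSE=%.2f R2=%.2f" % (
        mean_absolute_error(test, fc),
        np.sqrt(mean_squared_error(test, fc)),
        r2_score(test, fc)))
```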
Procedia PDF Downloads 75
843 Real-Time Pedestrian Detection Method Based on Improved YOLOv3
Authors: Jingting Luo, Yong Wang, Ying Wang
Abstract:
Pedestrian detection in image or video data is a very important and challenging task in security surveillance. The difficulty of this task is to accurately locate and detect pedestrians of different scales in complex scenes. To solve these problems, a deep neural network (RT-YOLOv3) is proposed to realize real-time pedestrian detection at different scales in security monitoring. RT-YOLOv3 improves the traditional YOLOv3 algorithm. Firstly, a deep residual network is added to extract pedestrian features. Then, six convolutional neural networks with different scales are designed and fused with the corresponding scale feature maps in the residual network to form the final feature pyramid that performs the pedestrian detection task. This method can better characterize pedestrians. In order to further improve the accuracy and generalization ability of the model, a hybrid pedestrian data set training method is used: pedestrian data are extracted from the VOC data set and trained together with the INRIA pedestrian data set. Experiments show that the proposed RT-YOLOv3 method achieves 93.57% mAP (mean average precision) and 46.52 f/s (frames per second). In terms of accuracy, RT-YOLOv3 performs better than Fast R-CNN, Faster R-CNN, YOLO, SSD, YOLOv2, and YOLOv3. This method reduces the missed detection rate and false detection rate, improves the positioning accuracy, and meets the requirements of real-time detection of pedestrian objects.
Keywords: pedestrian detection, feature detection, convolutional neural network, real-time detection, YOLOv3
Procedia PDF Downloads 142
842 SIM (Subscriber Identity Module) Banking
Authors: Okanta Andrew, Richmond Kweku Frempong
Abstract:
As mobile networks are upgraded with technologies like WAP, GPRS, and UMTS to deliver next-generation multimedia services, banks and other financial institutions are also getting ready to unleash financial products on the mobile platform to meet the growing demand for mobile-based application services. Hence the onset of Unstructured Supplementary Service Data (USSD) banking, which makes banking services available anywhere, anytime through a string of interactive SMS sessions between a mobile device and an application server of a service provider. The aim of this study was to find out whether the public will accept the SIM banking service when it is implemented. Our target group included the working class (e.g., businessmen and women, office workers, fishermen, market women, teachers) and the non-working class (e.g., tertiary and senior high school students, housewives). The survey took the form of a questionnaire and a verbal interview (video) investigating respondents' views of the current banking system and the yet-to-be-introduced SIM banking concept. Data gathering faced some challenges, as some respondents were reluctant to share their information. One suggestion was that the government should put measures in place against the foremost challenges obstructing SIM banking in Ghana, such as computer hackers. Government and individuals each have a key role to play in giving suitable support to lift the SIM banking industry in the country. It was also suggested that the government put strong regulations on the use of SIM banking products and services to streamline all the activities, create awareness of the need for SIM banking, and emphasize its relevance to national GDP.
Keywords: banking, mobile banking, SIM banking, mobile banking in Ghana
Procedia PDF Downloads 484
841 Geomatic Techniques to Filter Vegetation from Point Clouds
Authors: M. Amparo Núñez-Andrés, Felipe Buill, Albert Prades
Abstract:
More and more frequently, geomatics techniques such as terrestrial laser scanning or digital photogrammetry, either terrestrial or from drones, are being used to obtain digital terrain models (DTM) for the monitoring of geological phenomena that cause natural disasters, such as landslides, rockfalls, and debris flows. One of the main multitemporal analyses developed from these models is the quantification of volume changes in the slopes and hillsides, whether caused by erosion, fall, or land movement in the source area or by sedimentation in the deposition zone. To carry out this task, it is necessary to filter from the point clouds all those elements that do not belong to the slopes. Among these elements, vegetation stands out, as it has the greatest presence and changes constantly, both seasonally and daily, being affected by factors such as wind. One of the best-known indices to detect vegetation in an image is the NDVI (Normalized Difference Vegetation Index), which is obtained from the combination of the infrared and red channels and therefore requires a multispectral camera. These cameras are generally of lower resolution than conventional RGB cameras, while their cost is much higher; we therefore have to look for alternative indices based on RGB. In this communication, we present the results obtained in the Georisk project (PID2019‐103974RB‐I00/MCIN/AEI/10.13039/501100011033) by using the GLI (Green Leaf Index) and ExG (Excessive Greenness), as well as the change to the Hue-Saturation-Value (HSV) color space, in which the H coordinate gives the most information for vegetation filtering. These filters are applied both to the images, creating binary masks to be used when applying the SfM algorithms, and to the point cloud obtained directly by the photogrammetric process without any previous filter, or to the one obtained by TLS (Terrestrial Laser Scanning). In this last case, we have also worked with a Riegl VZ400i sensor that allows the reception, as in aerial LiDAR, of several returns of the signal, information that can be used for classification of the point cloud. After applying all the techniques in different locations, the results show that the color-based filters allow correct filtering in those areas where the presence of shadows is not excessive and there is a contrast between the color of the slope lithology and the vegetation. As anticipated, when using the HSV color space, it is the H coordinate that responds best for this filtering. Finally, the use of the various returns of the TLS signal allows filtering with some limitations.
Keywords: RGB index, TLS, photogrammetry, multispectral camera, point cloud
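A minimal sketch of the RGB-based filters this abstract names (GLI, ExG, and an HSV hue threshold) is given below. The thresholds, hue range, and image source are illustrative assumptions; in practice they would be tuned per site, lithology color, and lighting.

```python
# Minimal sketch of RGB vegetation filters: GLI, ExG on chromatic
# coordinates, and an HSV hue window. Thresholds are assumed values.
import numpy as np
from matplotlib.colors import rgb_to_hsv

def vegetation_masks(rgb, gli_thr=0.05, exg_thr=0.10, hue_range=(0.17, 0.45)):
    """rgb: float array (H, W, 3) scaled to [0, 1]. Returns boolean masks."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-9

    # Green Leaf Index: (2G - R - B) / (2G + R + B)
    gli = (2 * g - r - b) / (2 * g + r + b + eps)

    # Excess Green on chromatic coordinates: 2g - r - b with r + g + b normalised
    s = r + g + b + eps
    exg = 2 * (g / s) - (r / s) - (b / s)

    # HSV: the H coordinate isolates green hues (roughly 60-160 degrees here)
    hue = rgb_to_hsv(rgb)[..., 0]
    hsv_mask = (hue > hue_range[0]) & (hue < hue_range[1])

    return gli > gli_thr, exg > exg_thr, hsv_mask

# Toy usage on a random "image"
img = np.random.rand(4, 4, 3)
m_gli, m_exg, m_hsv = vegetation_masks(img)
print(m_gli.sum(), m_exg.sum(), m_hsv.sum())
```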
Procedia PDF Downloads 154
840 Diagnosis of the Heart Rhythm Disorders by Using Hybrid Classifiers
Authors: Sule Yucelbas, Gulay Tezel, Cuneyt Yucelbas, Seral Ozsen
Abstract:
In this study, we tried to identify some heart rhythm disorders from electrocardiography (ECG) data taken from the MIT-BIH arrhythmia database by extracting the required features and presenting them to artificial neural network (ANN), artificial immune system (AIS), artificial neural network based on artificial immune system (AIS-ANN), and particle swarm optimization based artificial neural network (PSO-NN) classifier systems. The main purpose of this study is to evaluate the performance of the hybrid AIS-ANN and PSO-ANN classifiers with regard to the ANN and AIS. For this purpose, the normal sinus rhythm (NSR), atrial premature contraction (APC), sinus arrhythmia (SA), ventricular trigeminy (VTI), ventricular tachycardia (VTK), and atrial fibrillation (AF) data for each of the RR intervals were found. These data were then arranged in pairs (NSR-APC, NSR-SA, NSR-VTI, NSR-VTK, and NSR-AF), the discrete wavelet transform was applied to the two groups of data in each pair, and, after data reduction, two different data sets with 9 and 27 features were obtained from each of them. Afterwards, the data were first randomly shuffled, and then the 4-fold cross-validation method was applied to create the training and testing data. The training and testing accuracy rates and training times were compared with each other. As a result, the performances of the hybrid classification systems, AIS-ANN and PSO-ANN, were seen to be close to the performance of the ANN system, and the results of the hybrid systems were much better than those of AIS. However, ANN had a much shorter training time than the other systems. In terms of training times, ANN was followed by PSO-ANN, AIS-ANN, and AIS, respectively. The features extracted from the data also affected the classification results significantly.
Keywords: AIS, ANN, ECG, hybrid classifiers, PSO
Procedia PDF Downloads 442
839 Health Trajectory Clustering Using Deep Belief Networks
Authors: Farshid Hajati, Federico Girosi, Shima Ghassempour
Abstract:
We present a Deep Belief Network (DBN) method for clustering health trajectories. A Deep Belief Network is a deep architecture that consists of a stack of Restricted Boltzmann Machines (RBM); in a deep architecture, each layer learns more complex features than the layers before it. The proposed method relies on the DBN for clustering without using the backpropagation learning algorithm. The proposed DBN performs better than a plain deep neural network owing to the initialization of the connecting weights. We use the Contrastive Divergence (CD) method for training the RBMs, which increases the performance of the network. The performance of the proposed method is evaluated extensively on the Health and Retirement Study (HRS) database. The University of Michigan Health and Retirement Study is a nationally representative longitudinal study that has surveyed more than 27,000 elderly and near-elderly Americans since its inception in 1992. Participants are interviewed every two years, and the study collects data on physical and mental health, insurance coverage, financial status, family support systems, labor market status, and retirement planning. The dataset is publicly available, and we use the RAND HRS version L, an easy-to-use, cleaned-up version of the data. The sample data set size is 268, and the length of the trajectories is 10. The trajectories do not stop when a patient dies and represent 10 different interviews of live patients. Compared to the state-of-the-art benchmarks, the experimental results show the effectiveness and superiority of the proposed method in clustering health trajectories.
Keywords: health trajectory, clustering, deep learning, DBN
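For readers unfamiliar with the Contrastive Divergence training step the abstract mentions, here is a minimal NumPy sketch of one CD-1 update for a binary RBM, the building block a DBN stacks layer by layer. Layer sizes, the learning rate, and the toy data are illustrative assumptions, not the study's setup.

```python
# Minimal sketch of one Contrastive Divergence (CD-1) update for a binary RBM.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 10, 4, 0.1
W = rng.normal(0, 0.01, (n_visible, n_hidden))
b_v = np.zeros(n_visible)   # visible bias
b_h = np.zeros(n_hidden)    # hidden bias

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0):
    """One CD-1 step on a batch v0 of shape (batch, n_visible)."""
    global W, b_v, b_h
    # Positive phase: sample hidden units given the data
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: one Gibbs step back down to the visible layer and up again
    p_v1 = sigmoid(h0 @ W.T + b_v)
    p_h1 = sigmoid(p_v1 @ W + b_h)
    # Gradient approximation: <v h>_data - <v h>_model
    batch = v0.shape[0]
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / batch
    b_v += lr * (v0 - p_v1).mean(axis=0)
    b_h += lr * (p_h0 - p_h1).mean(axis=0)

# Toy usage: 268 binary "trajectories" of length 10, mirroring the HRS sample size
data = (rng.random((268, n_visible)) < 0.5).astype(float)
for _ in range(100):
    cd1_update(data)
print("trained W shape:", W.shape)
```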
Procedia PDF Downloads 369
838 The Second Generation of Tyrosine Kinase Inhibitor Afatinib Controls Inflammation by Regulating NLRP3 Inflammasome Activation
Authors: Shujun Xie, Shirong Zhang, Shenglin Ma
Abstract:
Background: Chronic inflammation can lead to many malignancies, and inadequate resolution could play a crucial role in tumor invasion, progression, and metastasis. A randomised, double-blind, placebo-controlled trial showed that IL-1β inhibition with canakinumab could reduce incident lung cancer and lung cancer mortality in patients with atherosclerosis. The processing and secretion of the proinflammatory cytokine IL-1β are controlled by the inflammasome. Here we show the connection between the innate immune system and afatinib, a tyrosine kinase inhibitor targeting the epidermal growth factor receptor (EGFR) in non-small cell lung cancer. Methods: Murine bone marrow-derived macrophages (BMDMs), peritoneal macrophages (PMs), and THP-1 cells were used to check the effect of afatinib on the activation of the NLRP3 inflammasome. The assembly of the NLRP3 inflammasome was checked by co-immunoprecipitation of NLRP3 and the apoptosis-associated speck-like protein containing CARD (ASC), and by disuccinimidyl suberate (DSS) cross-linking of ASC. Lipopolysaccharide (LPS)-induced sepsis and alum-induced peritonitis experiments were conducted to confirm that afatinib could inhibit the activation of NLRP3 in vivo. Peripheral blood mononuclear cells (PBMCs) from non-small cell lung cancer (NSCLC) patients before or after taking afatinib were used to check that afatinib inhibits inflammation in NSCLC therapy. Results: Our data showed that afatinib could inhibit the secretion of IL-1β in a dose-dependent manner in macrophages. Moreover, afatinib could inhibit the maturation of IL-1β and caspase-1 without affecting the precursors of IL-1β and caspase-1. Next, we found that afatinib could block the assembly of the NLRP3 inflammasome and the ASC speck by blocking the interaction of the sensor protein NLRP3 and the adaptor protein ASC. We also found that afatinib was able to alleviate LPS-induced sepsis in vivo. Conclusion: Our study found that afatinib could inhibit the activation of the NLRP3 inflammasome in macrophages, providing new evidence that afatinib could target the innate immune system to control chronic inflammation. These investigations will provide significant experimental evidence for afatinib as a therapeutic drug for non-small cell lung cancer, other tumors, and NLRP3-related diseases, and will explore new targets for afatinib.
Keywords: inflammasome, afatinib, inflammation, tyrosine kinase inhibitor
Procedia PDF Downloads 118
837 Household Water Source Substitution and Demand for Water Connections
Authors: Elizabeth Spink
Abstract:
The United Nations' Sustainable Development Goal 6 sets a target for safe and affordable drinking water for all. Developing country governments aiming to achieve this goal often face significant challenges when trying to service last mile customers, particularly those in peri-urban and rural areas. Expansion of water networks often requires high connection fees from households, and demand for connections may be low if there are cheaper substitute sources of water available. This research studies the effect of the availability of substitute sources of water on demand for individual water connections in Livingstone, Zambia, using an event study analysis of metering campaigns. Metering campaigns reduce the share of a household's neighbors that can provide free water to the household if their water connection becomes disconnected due to nonpayment. The results show that household payments in newly metered regions increase by 10 percentage points in the months following metering events, with a decrease in disconnections of 6 percentage points for low-income households. To isolate the effect of changes in a household's substitution possibilities, a similar analysis is conducted among households that neighbor the metered region. These results show mixed evidence of the impact of substitutes on payment behavior and disconnections. The results suggest that metering may be effective in increasing household demand for individual water connections primarily through a lower monthly cost burden for newly metered households.
Keywords: piped-water access, water demand, water utilities, water sharing
Procedia PDF Downloads 198
836 The Classification Accuracy of Finance Data through Holder Functions
Authors: Yeliz Karaca, Carlo Cattani
Abstract:
This study focuses on the local Holder exponent as a measure of function regularity for time series related to finance data. The attributes of a finance dataset covering 13 countries (India, China, Japan, Sweden, France, Germany, Italy, Australia, Mexico, United Kingdom, Argentina, Brazil, USA) located on 5 different continents (Asia, Europe, Australia, North America, and South America) have been examined. These countries are the ones most affected by the attributes relating to financial development, covering the period from 2012 to 2017. Our study is concerned with the most important attributes that have an impact on the financial development of the countries identified. Our method comprises the following stages: (a) among the multifractal methods and Brownian motion Holder regularity functions (polynomial, exponential), significant and self-similar attributes have been identified; (b) the significant and self-similar attributes have been applied to the Artificial Neural Network (ANN) algorithms (Feed Forward Back Propagation (FFBP) and Cascade Forward Back Propagation (CFBP)); (c) the outcomes of classification accuracy have been compared with respect to the attributes that affect the countries' financial development. This study has revealed, through the application of ANN algorithms, how the most significant attributes are identified within the relevant dataset via the Holder functions (polynomial and exponential).
Keywords: artificial neural networks, finance data, Holder regularity, multifractals
Procedia PDF Downloads 246
835 Neural Network Models for Actual Cost and Actual Duration Estimation in Construction Projects: Findings from Greece
Authors: Panagiotis Karadimos, Leonidas Anthopoulos
Abstract:
Predicting the actual cost and actual duration of construction projects is a persistent problem for the construction sector. This paper addresses the problem with modern methods and data available from past public construction projects. 39 bridge projects constructed in Greece, with a similar type of available data, were examined. Considering each project's attributes together with the actual cost and the actual duration, correlation analysis is performed and the most appropriate predictive project variables are defined. Additionally, the most efficient subgroup of variables is selected with the use of the WEKA application, through its attribute selection function. The selected variables are then used as input neurons for the neural network models, which are constructed with the FANN Tool application. The optimum neural network model for predicting the actual cost produced a mean squared error of 3.84886e-05 and was based on the budgeted cost and the quantity of deck concrete. The optimum neural network model for predicting the actual duration produced a mean squared error of 5.89463e-05 and was also based on the budgeted cost and the quantity of deck concrete.
Keywords: actual cost and duration, attribute selection, bridge construction, neural networks, predicting models, FANN Tool, WEKA
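The sketch below illustrates the kind of two-input network described here (budgeted cost and deck-concrete quantity predicting actual cost), using scikit-learn in place of the FANN Tool. The synthetic data, scaling, and architecture are illustrative assumptions; the tiny reported MSE values above are on scaled data, which the sketch mimics with min-max scaling.

```python
# Minimal sketch: a two-input MLP predicting actual cost from budgeted cost
# and deck-concrete quantity. All data are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 39  # as many projects as in the study
budgeted_cost = rng.uniform(1e6, 10e6, n)
deck_concrete = rng.uniform(200, 2000, n)               # m^3, assumed
actual_cost = budgeted_cost * rng.uniform(1.0, 1.3, n)  # fake cost overruns

X = np.column_stack([budgeted_cost, deck_concrete])
y = actual_cost

# Scale inputs and outputs to [0, 1] so the MSE is on the same scale
xs, ys = MinMaxScaler(), MinMaxScaler()
Xs = xs.fit_transform(X)
ys_ = ys.fit_transform(y.reshape(-1, 1)).ravel()

net = MLPRegressor(hidden_layer_sizes=(5,), max_iter=5000, random_state=0)
net.fit(Xs[:30], ys_[:30])  # simple train/test split
pred = net.predict(Xs[30:])
print("test MSE (scaled):", mean_squared_error(ys_[30:], pred))
```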
Procedia PDF Downloads 134
834 Modelling Biological Treatment of Dye Wastewater in SBR Systems Inoculated with Bacteria by Artificial Neural Network
Authors: Yasaman Sanayei, Alireza Bahiraie
Abstract:
This paper presents a systematic methodology based on the application of artificial neural networks to a sequencing batch reactor (SBR). The SBR is a fill-and-draw biological wastewater technology that is especially suited for nutrient removal. Treating reactive dye with Sphingomonas paucimobilis bacteria in a sequencing batch reactor is a novel approach to dye removal. The influent COD, MLVSS, and reaction time were selected as the process inputs and the effluent COD and BOD as the process outputs. The best possible result for the discrete pole parameter was a = 0.44. In order to adjust the parameters of the ANN, the Levenberg-Marquardt (LM) algorithm was employed. The results predicted by the model were compared to the experimental data and showed a high correlation, with R2 > 0.99 and a low mean absolute error (MAE). The results from this study reveal that the developed model is accurate and efficacious in predicting the COD and BOD parameters of dye-containing wastewater treated by SBR. The proposed modeling approach can be applied to other industrial wastewater treatment systems to predict effluent characteristics. Note that SBRs are normally operated with constant predefined durations of the stages, resulting in low-efficiency operation. Data obtained from the online electronic sensors installed in the SBR and from the quality control laboratory analysis have been used to develop the optimal architectures of two different ANNs. The results have shown that the developed models can be used as efficient and cost-effective predictive tools for the system analysed.
Keywords: artificial neural network, COD removal, SBR, Sphingomonas paucimobilis
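Since the abstract names Levenberg-Marquardt as the training algorithm, the sketch below fits a tiny one-hidden-layer network with LM via SciPy's least-squares solver. The inputs (scaled COD, MLVSS, reaction time), network size, and synthetic data are illustrative assumptions.

```python
# Minimal sketch: Levenberg-Marquardt fitting of a one-hidden-layer network
# using scipy.optimize.least_squares(method="lm"). Data are synthetic.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
n, n_in, n_h = 60, 3, 4
X = rng.uniform(0, 1, (n, n_in))          # scaled COD, MLVSS, reaction time
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] - 0.2 * X[:, 2] + rng.normal(0, 0.02, n)

def unpack(theta):
    i = 0
    W1 = theta[i:i + n_in * n_h].reshape(n_in, n_h); i += n_in * n_h
    b1 = theta[i:i + n_h]; i += n_h
    w2 = theta[i:i + n_h]; i += n_h
    b2 = theta[i]
    return W1, b1, w2, b2

def predict(theta, X):
    W1, b1, w2, b2 = unpack(theta)
    return np.tanh(X @ W1 + b1) @ w2 + b2

def residuals(theta):
    return predict(theta, X) - y

n_params = n_in * n_h + n_h + n_h + 1     # 21 parameters < 60 residuals, as LM requires
theta0 = rng.normal(0, 0.1, n_params)
fit = least_squares(residuals, theta0, method="lm")  # Levenberg-Marquardt

pred = predict(fit.x, X)
ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
print("MAE = %.4f, R2 = %.4f" % (np.abs(y - pred).mean(), 1 - ss_res / ss_tot))
```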
Procedia PDF Downloads 413
833 Brain Networks and Mathematical Learning Processes of Children
Authors: Felicitas Pielsticker, Christoph Pielsticker, Ingo Witzke
Abstract:
Neurological findings provide foundational results for many different disciplines. In this article, we discuss them with a special focus on mathematics education. The intention is to make neuroscience research useful for the description of cognitive mathematical learning processes. A key issue in mathematics education is that students often behave as if their mathematical knowledge were constructed in isolated compartments tied to the specific context of the original learning situation; supporting students in linking these compartments to form a coherent mathematical society of mind is a fundamental task, and not only for mathematics teachers. This aspect goes hand in hand with the question of whether there is such a thing as abstract general mathematical knowledge detached from concrete reality. Educational neuroscience may give answers to the questions of why students develop their mathematical knowledge in isolated subjective domains of experience and whether it is generally possible to think in abstract terms. To address these questions, we provide examples from different fields of mathematics education, e.g., students' development and understanding of the general concept of variables or the mathematical notion of universal proofs. We discuss these aspects in light of functional studies which elucidate the role of specific brain regions in mathematical learning processes. In doing so, the paper addresses concept formation processes of students in the mathematics classroom and how to support them adequately in view of the results of (educational) neuroscience.
Keywords: brain regions, concept formation processes in mathematics education, proofs, teaching-learning processes
Procedia PDF Downloads 149
832 Resource Orchestration Based on Two-Sided Scheduling in Computing Network Control Systems
Authors: Li Guo, Jianhong Wang, Dian Huang, Shengzhong Feng
Abstract:
Computing networks, as a new network architecture, have shown great promise in boosting the utilization of different resources, such as computing, caching, and communications. To maximise the efficiency of resource orchestration in computing network control systems (CNCSs), this work proposes a dynamic orchestration strategy for different resources based on the task requirements of computing power requestors (CPRs). Specifically, computing power providers (CPPs) in CNCSs share information with each other, especially their current idle resources, through communication channels built on blockchain technology. This dynamic process is modeled as a cooperative game in which the CPPs share the goal of maximising long-term rewards by improving the resource utilization ratio. Meanwhile, the task requirements from CPRs, including size, deadline, and computation, are simultaneously considered. According to the task requirements, the proposed orchestration strategy schedules the best-fitting resources in the CNCS, achieving the maximum long-term rewards for the CPPs and the best quality of experience (QoE) for the CPRs at the same time. Based on the EdgeCloudSim simulation platform, the efficiency of the proposed strategy is demonstrated from the sides of both CPRs and CPPs. Experimental results show that the proposed strategy outperforms the other comparison strategies in all cases.
Keywords: computing network control systems, resource orchestration, dynamic scheduling, blockchain, cooperative game
Procedia PDF Downloads 114
831 Human Immunodeficiency Virus (HIV) Test Predictive Modeling and Identification of Determinants of HIV Testing for People above Fourteen Years of Age in Ethiopia Using Data Mining Techniques: EDHS 2011
Authors: S. Abera, T. Gidey, W. Terefe
Abstract:
Introduction: Testing for HIV is the key entry point to HIV prevention, treatment, care, and support services. Predictive data mining techniques can therefore be of great benefit in analyzing and discovering new patterns in huge datasets like the EDHS 2011 data. Objectives: The objective of this study is to build a predictive model for HIV testing and identify the determinants of HIV testing for adults above fourteen years of age using data mining techniques. Methods: The Cross-Industry Standard Process for Data Mining (CRISP-DM) was used to build the predictive model for HIV testing and explore association rules between HIV testing and the selected attributes among adult Ethiopians. The decision tree, naive Bayes, logistic regression, and artificial neural network data mining techniques were used to build the predictive models. Results: The target dataset contained 30,625 study participants, of whom 16,515 (53.9%) were women. Nearly three-fifths, 17,719 (58%), had never been tested for HIV, while the remaining 12,906 (42%) had been tested. Ethiopians with a higher wealth index, a higher educational level, age between 20 and 29 years, no stigmatizing attitude towards HIV-positive persons, urban residence, HIV-related knowledge, exposure to family planning information in the mass media, and knowledge of a place to get tested for HIV showed increased patterns of HIV testing. Conclusion and Recommendation: Public health interventions should consider the identified determinants to encourage people to get tested for HIV.
Keywords: data mining, HIV, testing, Ethiopia
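The sketch below illustrates the family of classifiers this abstract names (decision tree, naive Bayes, logistic regression) predicting a binary testing outcome from survey-style attributes. The feature names and synthetic data are illustrative assumptions, not the EDHS 2011 variables themselves.

```python
# Minimal sketch: three of the named classifiers scored with 5-fold
# cross-validation on synthetic survey-style features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.integers(0, 5, n),    # wealth index quintile (assumed coding)
    rng.integers(0, 4, n),    # education level
    rng.integers(15, 60, n),  # age in years
    rng.integers(0, 2, n),    # urban residence flag
]).astype(float)

# Fake target loosely tied to wealth and urban residence
logit = 0.5 * X[:, 0] + 0.8 * X[:, 3] - 1.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # ever tested?

for name, clf in [("decision tree", DecisionTreeClassifier(max_depth=4)),
                  ("naive Bayes", GaussianNB()),
                  ("logistic regression", LogisticRegression(max_iter=1000))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f} mean accuracy")
```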
Procedia PDF Downloads 496
830 A Sectional Control Method to Decrease the Accumulated Survey Error of Tunnel Installation Control Network
Authors: Yinggang Guo, Zongchun Li
Abstract:
In order to decrease the accumulated survey error of the tunnel installation control network of a particle accelerator, a sectional control method is proposed. Firstly, the rule by which positional error accumulates with the length of the control network is obtained by simulation, according to the shape of the tunnel installation control network. Then, the RMS of the horizontal positional precision of the tunnel backbone control network is taken as the threshold. When the accumulated error is bigger than the threshold, the tunnel installation control network is divided into subsections of reasonable length. In each segment, the middle survey station is taken as the datum for an independent adjustment calculation. Finally, taking the backbone control points as faint datums, a weighted partial parameter adjustment is performed with the adjustment results of each segment and the coordinates of the backbone control points. The subsections are joined and unified into the global coordinate system in the adjustment process. An installation control network of a linac with a length of 1.6 km is simulated. The RMS of the positional deviation of the proposed method is 2.583 mm, and the RMS of the difference in positional deviation between adjacent points reaches 0.035 mm. Experimental results show that the proposed sectional control method can not only effectively decrease the accumulated survey error but also guarantee the relative positional precision of the installation control network, so it can be applied in the data processing of tunnel installation control networks, especially for large particle accelerators.
Keywords: alignment, tunnel installation control network, accumulated survey error, sectional control method, datum
Procedia PDF Downloads 191
829 Distributed Automation System Based Remote Monitoring of Power Quality Disturbance on LV Network
Authors: Emmanuel D. Buedi, K. O. Boateng, Griffith S. Klogo
Abstract:
Electrical distribution networks are prone to power quality disturbances originating from the complexity of the distribution network, the mode of distribution (overhead or underground), and the types of loads used by customers. Data on the types of disturbances present and their frequency of occurrence are needed for economic evaluation and, hence, for finding a solution to the problem. Utility companies have resorted to using secondary power quality devices, such as smart meters, to help gather the required data. Even though this approach is easier to adopt, the data gathered from these devices may not serve the required purpose, since the installation of these devices in the electrical network usually does not conform to available PQM placement methods. This paper presents the design of a PQM that is capable of integrating into an existing DAS infrastructure to take advantage of available placement methodologies. The monitoring component of the design is implemented and installed to monitor an existing LV network. Data from the monitor are analyzed and presented. A portion of the LV network of the Electricity Company of Ghana is modeled in MATLAB-Simulink and analyzed under various earth fault conditions. The results presented show the ability of the PQM to detect and analyze PQ disturbances such as voltage sag and overvoltage. By adopting a placement methodology and installing these nodes, utilities are assured of accurate and reliable information with respect to the quality of power delivered to consumers.
Keywords: power quality, remote monitoring, distributed automation system, economic evaluation, LV network
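As a rough illustration of how a monitor can flag the sag and overvoltage events mentioned above, the sketch below computes a sliding one-cycle RMS and compares it against common per-unit thresholds (0.9 and 1.1 pu). The waveform, thresholds, and sampling rate are illustrative assumptions, not the paper's design.

```python
# Minimal sketch: one-cycle sliding RMS with per-unit sag/overvoltage flags.
import numpy as np

fs, f0, v_nom = 3200, 50, 230.0           # sample rate, mains frequency, nominal RMS
cycle = fs // f0                          # samples per cycle
t = np.arange(0, 0.5, 1 / fs)
v = np.sqrt(2) * v_nom * np.sin(2 * np.pi * f0 * t)
v[int(0.2 * fs):int(0.3 * fs)] *= 0.6     # inject a 100 ms sag to 0.6 pu

# One-cycle sliding RMS (valid mode avoids edge artifacts)
sq = np.convolve(v ** 2, np.ones(cycle) / cycle, mode="valid")
pu = np.sqrt(sq) / v_nom

sag = pu < 0.9
over = pu > 1.1
print("sag samples: %d, overvoltage samples: %d" % (sag.sum(), over.sum()))
print("deepest sag: %.2f pu" % pu.min())
```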
Procedia PDF Downloads 349
828 Detection of Atrial Fibrillation Using Wearables via Attentional Two-Stream Heterogeneous Networks
Authors: Huawei Bai, Jianguo Yao
Abstract:
Atrial fibrillation (AF) is the most common form of heart arrhythmia and is closely associated with mortality and morbidity in heart failure, stroke, and coronary artery disease. The development of single-spot optical sensors enables widespread photoplethysmography (PPG) screening, especially for AF, since it represents a more convenient and noninvasive approach. To our knowledge, most existing studies are based on public and unbalanced datasets, can barely handle the multiple noise sources of the real world, and also lack interpretability. In this paper, we construct a large-scale PPG dataset using measurements collected from PPG wristwatch devices worn by volunteers and propose an attention-based two-stream heterogeneous neural network (TSHNN). The first stream is a hybrid neural network consisting of a three-layer one-dimensional convolutional neural network (1D-CNN) and a two-layer attention-based bidirectional long short-term memory (Bi-LSTM) network that learns representations from temporally sampled signals. The second stream extracts latent representations from the PPG time-frequency spectrogram using a five-layer CNN. The outputs from both streams are fed into a fusion layer for the outcome. Visualization of the learned attention weights demonstrates the effectiveness of the attention mechanism against noise. The experimental results show that the TSHNN outperforms all the competitive baseline approaches and, with 98.09% accuracy, achieves state-of-the-art performance.
Keywords: PPG wearables, atrial fibrillation, feature fusion, attention mechanism, hybrid network
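A minimal PyTorch sketch of the two-stream idea described above follows: a 1D-CNN plus Bi-LSTM stream over the raw sequence, fused with a CNN stream over its spectrogram, with a simple additive attention over time. Layer sizes and input shapes are illustrative assumptions, far smaller than the paper's network.

```python
# Minimal sketch of a two-stream temporal/spectrogram network with attention fusion.
import torch
import torch.nn as nn

class TwoStreamPPG(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        # Stream 1: temporal signal, input shape (batch, 1, 1000 samples)
        self.cnn1d = nn.Sequential(
            nn.Conv1d(1, 16, 7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, 5, stride=2), nn.ReLU(),
        )
        self.bilstm = nn.LSTM(32, 32, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(64, 1)  # simple additive attention over time steps
        # Stream 2: time-frequency spectrogram, input shape (batch, 1, 64, 64)
        self.cnn2d = nn.Sequential(
            nn.Conv2d(1, 8, 3), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fuse = nn.Linear(64 + 16, n_classes)

    def forward(self, sig, spec):
        h = self.cnn1d(sig).transpose(1, 2)       # (batch, time, 32)
        h, _ = self.bilstm(h)                     # (batch, time, 64)
        w = torch.softmax(self.attn(h), dim=1)    # attention weights over time
        s1 = (w * h).sum(dim=1)                   # attended temporal summary
        s2 = self.cnn2d(spec).flatten(1)          # spectrogram summary
        return self.fuse(torch.cat([s1, s2], dim=1))

model = TwoStreamPPG()
logits = model(torch.randn(4, 1, 1000), torch.randn(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 2])
```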
Procedia PDF Downloads 121
827 Wind Speed Forecasting Based on Historical Data Using Modern Prediction Methods in Selected Sites of Geba Catchment, Ethiopia
Authors: Halefom Kidane
Abstract:
This study aims to assess the wind resource potential and characterize the urban wind patterns of Hawassa City, Ethiopia. The estimation and characterization of wind resources are crucial for sustainable urban planning, renewable energy development, and climate change mitigation strategies. A secondary data collection method was used to carry out the study. The data, collected at 2 meters, were analyzed statistically and extrapolated to the standard heights of 10 meters and 30 meters using the power-law equation. The standard deviation method was used to calculate the values of the scale and shape factors. From the analysis presented, the maximum and minimum mean daily wind speeds at 2 meters were 1.33 m/s and 0.05 m/s in 2016, 1.67 m/s and 0.14 m/s in 2017, and 1.61 m/s and 0.07 m/s in 2018, respectively. The maximum monthly average wind speed of Hawassa City at 2 meters was recorded in December 2016, at around 0.78 m/s; in 2017, the maximum was recorded in January, with a magnitude of 0.80 m/s; and in 2018, the maximum was in June, at 0.76 m/s. On the other hand, October was the month with the minimum mean wind speed in all years, with values of 0.47 m/s in 2016, 0.47 m/s in 2017, and 0.34 m/s in 2018. The annual mean wind speed at a height of 2 meters was 0.61 m/s in 2016, 0.64 m/s in 2017, and 0.57 m/s in 2018. From the extrapolation, the annual mean wind speeds for 2016, 2017, and 2018 were 1.17 m/s, 1.22 m/s, and 1.11 m/s at a height of 10 meters, and 3.34 m/s, 3.78 m/s, and 3.01 m/s at a height of 30 meters, respectively. Thus, the site consists mainly of class I wind speeds, even at the extrapolated heights.
Keywords: artificial neural networks, forecasting, min-max normalization, wind speed
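The two calculations named in this abstract, power-law height extrapolation and the standard deviation method for the Weibull shape and scale factors, are sketched below. The shear exponent (1/7) and the standard deviation are assumed values for illustration; the study's own extrapolated means imply a larger, site-fitted exponent.

```python
# Minimal sketch: power-law extrapolation and the standard deviation method
# for Weibull k (shape) and c (scale).
import numpy as np
from math import gamma

def power_law(v_ref, h_ref, h, alpha=1/7):
    """Extrapolate mean wind speed from height h_ref to h with shear exponent alpha."""
    return v_ref * (h / h_ref) ** alpha

def weibull_std_method(v_mean, v_std):
    """Empirical standard deviation method: k = (sigma/mean)^-1.086, c = mean/Gamma(1+1/k)."""
    k = (v_std / v_mean) ** -1.086
    c = v_mean / gamma(1 + 1 / k)
    return k, c

v2 = 0.61  # 2016 annual mean at 2 m, from the abstract
print("10 m: %.2f m/s" % power_law(v2, 2, 10))
print("30 m: %.2f m/s" % power_law(v2, 2, 30))

k, c = weibull_std_method(v_mean=0.61, v_std=0.35)  # std is an assumed value
print("Weibull k = %.2f, c = %.2f m/s" % (k, c))
```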
Procedia PDF Downloads 76
826 A Comparative Soft Computing Approach to Supplier Performance Prediction Using GEP and ANN Models: An Automotive Case Study
Authors: Seyed Esmail Seyedi Bariran, Khairul Salleh Mohamed Sahari
Abstract:
In multi-echelon supply chain networks, optimal supplier selection depends significantly on the accuracy of suppliers' performance prediction. Different methods of multi-criteria decision making, such as ANN, GA, fuzzy methods, AHP, etc., have previously been used to predict supplier performance, but the 'black-box' characteristic of these methods remains a major concern to be resolved. Therefore, the primary objective of this paper is to implement an artificial intelligence-based gene expression programming (GEP) model and compare its prediction accuracy with that of an ANN. A full factorial design with a 95% confidence interval is initially applied to determine the appropriate set of criteria for supplier performance evaluation. A train-test approach is then utilized for the ANN and GEP exclusively. The training results are used to find the optimal network architecture, and the testing data determine the prediction accuracy of each method based on the root mean square error (RMSE) and the correlation coefficient (R2). The results of a case study conducted at Supplying Automotive Parts Co. (SAPCO), with more than 100 local and foreign supply chain members, revealed that, in comparison with ANN, gene expression programming is significantly preferable for predicting supplier performance, as reflected in the respective RMSE and R-squared values. Moreover, using GEP, a mathematical function was also derived, resolving the issue of the ANN black-box structure in modeling performance prediction.
Keywords: supplier performance prediction, ANN, GEP, automotive, SAPCO
Procedia PDF Downloads 419
825 Gas Network Noncooperative Game
Authors: Teresa Azevedo PerdicoúLis, Paulo Lopes Dos Santos
Abstract:
The conceptualisation of the network optimisation problem as a noncooperative game sets up a holistic interactive approach that brings together different network features (e.g., compressor stations, sources, and pipelines, in the gas context) whose optimisation objectives differ, and a single optimisation procedure becomes possible without having to feed results from diverse software packages into each other. A mathematical model of this type, where independent entities take action, offers the ideal modularity and subsequent problem decomposition with a view to designing a decentralised algorithm to optimise the operation and management of the network. In a game framework, compressor stations and sources are understood as players which communicate through network connectivity constraints, the pipeline model. That is, in a scheme similar to tâtonnement, the players appoint their best settings and then interact to check for network feasibility. The resulting degree of network infeasibility informs the players about the 'quality' of their settings, and this two-phase iterative scheme is repeated until a global optimum is obtained. Due to network transients, the optimisation needs to be assessed at different points of the control interval. For this reason, the proposed approach to optimisation has two stages: (i) the first stage computes along the period of optimisation in order to fulfil the requirement just mentioned; (ii) the second stage is initialised with the solution found at the first stage and computes at the end of the period of optimisation to rectify that solution. The viability of the proposed scheme is proven on an abstract prototype and three example networks.
Keywords: connectivity matrix, gas network optimisation, large-scale, noncooperative game, system decomposition
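To make the two-phase iterative scheme concrete, the toy sketch below has two "players" repeatedly pick their individually best settings under a shared connectivity constraint; the iteration tightens the coupling until the network imbalance is negligible. The quadratic objectives, penalty coupling, and all numbers are illustrative assumptions, not a gas pipeline model.

```python
# Toy tatonnement-style best-response iteration with a feasibility check.
import numpy as np

# Player i minimises (x_i - target_i)^2 plus a shared penalty
# rho/2 * (x_1 + x_2 - demand)^2 encoding the connectivity constraint.
targets = np.array([4.0, 7.0])
demand = 10.0
rho = 1.0
x = np.zeros(2)

for it in range(100):
    for i in range(2):
        other = x[1 - i]
        # Closed-form best response:
        # d/dx_i [(x_i - t_i)^2 + rho/2 (x_i + other - demand)^2] = 0
        x[i] = (2 * targets[i] + rho * (demand - other)) / (2 + rho)
    imbalance = abs(x.sum() - demand)   # the "degree of network unfeasibility"
    if imbalance < 1e-6:
        break
    rho *= 1.5                          # tighten the coupling if still infeasible

print("settings:", np.round(x, 3), "imbalance: %.2e" % imbalance, "iterations:", it + 1)
```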
Procedia PDF Downloads 152
824 Intelligent Control of Agricultural Farms, Gardens, Greenhouses, Livestock
Authors: Vahid Bairami Rad
Abstract:
Making agricultural fields intelligent allows the temperature, humidity, and other variables affecting the growth of agricultural products to be controlled online, from a mobile phone or computer. It is one of the best ways to optimize agricultural equipment and has a direct effect on the growth of plants, agricultural products, and farms. Smart farms are the topic we discuss here, together with the Internet of Things and artificial intelligence. Agriculture is becoming smarter every day. From large industrial operations to individuals growing organic produce locally, technology is at the forefront of reducing costs, improving results, and ensuring optimal delivery to market. A key element of smart agriculture is the use of useful data, and modern farmers have more tools to collect intelligent data than in previous years. Data on soil chemistry allow people to make informed decisions about fertilizing farmland. Moisture sensors and accurate irrigation controllers have optimized irrigation processes while reducing the cost of water consumption. Drones can apply pesticides precisely at the desired point. Automated harvesting machines navigate crop fields based on position and capacity sensors. The list goes on: almost any process related to agriculture can use sensors that collect data to optimize existing processes and support informed decisions. The Internet of Things (IoT) is at the center of this great transformation. IoT hardware has grown and developed rapidly to provide low-cost sensors for people's needs. These sensors are embedded in battery-powered IoT devices that can operate for years and have access to low-power, cost-effective mobile networks. IoT device management platforms have also evolved rapidly and can now be used to securely manage existing devices at scale. IoT cloud services likewise provide a set of application enablement services that can easily be used by developers, allowing them to focus on building their application business logic. These developments have created powerful new applications in the field of the Internet of Things, and these applications can be used in various industries, such as agriculture, to build smart farms. But the question is, what makes today's farms truly smart farms? Let us put the question another way: when will the technologies associated with smart farms reach the point where the intelligence they provide can exceed that of experienced and professional farmers?
Keywords: food security, IoT automation, wireless communication, hybrid lifestyle, arduino Uno
Procedia PDF Downloads 56
823 Implicit and Explicit Mechanisms of Emotional Contagion
Authors: Andres Pinilla Palacios, Ricardo Tamayo
Abstract:
Emotional contagion is characterized as an automatic tendency to synchronize behaviors that facilitates emotional convergence among humans. It might thus play a pivotal role in understanding the dynamics of key social interactions. However, little research has investigated its potential mechanisms. We suggest two complementary but independent processes that may underlie emotional contagion. The first, the efficient contagion hypothesis, is based on fast and implicit bottom-up processes, modulated by familiarity and the spread of activation in the emotional associative networks of memory. The second, the emotional contrast hypothesis, is based on slow and explicit top-down processes guided by deliberate appraisal and hypothesis-testing. In order to assess these two hypotheses, an experiment with 39 participants was conducted. In the first phase, participants were induced (between groups) into an emotional state (positive, neutral, or negative) using a standardized video taken from the FilmStim database. In the second phase, participants classified and rated (within subject) the emotional states of 15 faces (5 for each emotional state) taken from the POFA database. In the third phase, all participants were returned to a baseline emotional state using the same neutral video used in the first phase. In a fourth phase, participants classified and rated a new set of 15 faces. The accuracy of the identification and rating of emotions was partially explained by the efficient contagion hypothesis, but the speed with which these judgments were made was partially explained by the emotional contrast hypothesis. However, the results are ambiguous, so a follow-up experiment is proposed in which emotional expressions and activation of the sympathetic system will be measured using EMG and EDA, respectively.
Keywords: electromyography, emotional contagion, emotional valence, identification of emotions, imitation
Procedia PDF Downloads 316
822 MIMIC: A Multi Input Micro-Influencers Classifier
Authors: Simone Leonardi, Luca Ardito
Abstract:
Micro-influencers are effective elements in the marketing strategies of companies and institutions because of their capability to create a hyper-engaged audience around a specific topic of interest. In recent years, many scientific approaches and commercial tools have handled the task of detecting this type of social media user. These strategies adopt solutions ranging from rule-based machine learning models to deep neural networks and graph analysis on text, images, and account information. This work compares the existing solutions and proposes an ensemble method to generalize them across different input data and social media platforms. The deployed solution combines deep learning models on unstructured data with statistical machine learning models on structured data. We retrieve both social media account information and multimedia posts on Twitter and Instagram. These data are mapped into feature vectors for an eXtreme Gradient Boosting (XGBoost) classifier. Sixty different topics have been analyzed to build a rule-based gold standard dataset and to compare the performance of our approach against baseline classifiers. We prove the effectiveness of our work by comparing the accuracy, precision, recall, and F1 score of our model across different configurations and architectures. We obtained an accuracy of 0.91 with our best-performing model.
Keywords: deep learning, gradient boosting, image processing, micro-influencers, NLP, social media
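The fusion step described here, structured account features concatenated with embeddings derived from unstructured posts and fed to XGBoost, is sketched below. The feature names, embedding size, label rule, and data are illustrative assumptions.

```python
# Minimal sketch: structured features + stand-in post embeddings fused into
# one feature vector for an XGBoost classifier.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
n = 500
structured = np.column_stack([
    rng.integers(100, 100_000, n),   # follower count
    rng.integers(10, 5_000, n),      # number of posts
    rng.uniform(0, 0.2, n),          # engagement rate
])
text_emb = rng.normal(size=(n, 32))  # stand-in for deep-model post embeddings
X = np.hstack([structured, text_emb])
y = (structured[:, 2] + 0.1 * rng.normal(size=n) > 0.1).astype(int)  # fake label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy %.2f, F1 %.2f" % (accuracy_score(y_te, pred), f1_score(y_te, pred)))
```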
Procedia PDF Downloads 183
821 A Multi-Science Study of Modern Synergetic War and Its Information Security Component
Authors: Alexander G. Yushchenko
Abstract:
From a multi-science point of view, we analyze the security threats resulting from the globalization of the international information space and from the information and communication aggression of Russia. A definition of Ruschism is formulated as an ideology supporting the aggressive actions of modern Russia against the Euro-Atlantic community. The stages of the hybrid war Russia is waging against Ukraine are described, including the elements of subversive activity by the special services, the activation of the military phase, and the gradual shift of the focus of confrontation to the realm of information and communication technologies. We reveal the emergence of a threat to democratic states resulting from the destabilizing impact of a target state's mass media and social networks being exploited by Russian secret services under the disguise of freedom of speech. We thus underline the vulnerability of the cyber and information security of the network society with regard to hybrid war, which we propose to call synergetic war. Our analysis is supported by long-term qualitative monitoring of the representation of top state officials on popular TV channels and Facebook. From the memetics point of view, we have detected a destructive psycho-information technology used by the Kremlin, a kind of information catastrophe, the essence of which is explained in detail. In the conclusion, a comprehensive plan for the information protection of the public consciousness and mentality of Euro-Atlantic citizens from the aggression of the enemy is proposed.
Keywords: cyber and information security, hybrid war, psycho-information technology, synergetic war, Ruschism
Procedia PDF Downloads 134
820 Decision Support: How Explainable A.I. Can Improve Transparency and Trust with Human Users
Authors: Devon Brown, Liu Chunmei
Abstract:
This paper presents an analysis, as part of the researcher's dissertation topic, focusing on the intersection of affective and analytical directed acyclic graphs (DAGs) in the context of Decision Support Systems (DSS). The work involves analyzing decision theory models, such as affective and Bayesian decision theory, and how they could be implemented under an affective computing framework using information fusion and human-centered design. Additionally, the researcher is beginning work on an Affective-Analytic Decision Framework (AADF) model for the dissertation research and is looking to merge logic and analytic models with empathetic insights into affective DAGs. Data collection efforts begin in Fall 2024, and in preparation, this paper analyzes previous research in the area, introduces the AADF framework, and proposes conceptual models for consideration. The emphasis here is placed on analyzing Bayesian networks and Markov models, which offer probabilistic techniques for decision-making under uncertainty. Ideally, including affect in analytic models will help algorithms increase user trust by accounting for emotional states and the user's experience, with the goal of developing emotionally intelligent AI systems that can begin to navigate the complex fabric of human emotion during decision-making.
Keywords: decision support systems, explainable AI, HCAI techniques, affective-analytical decision framework
Procedia PDF Downloads 20
819 Minimizing Unscheduled Maintenance from an Aircraft and Rolling Stock Maintenance Perspective: Preventive Maintenance Model
Authors: Adel A. Ghobbar, Varun Raman
Abstract:
Corrective maintenance of components and systems is a problem plaguing almost every industry in the world today. Train operators and the maintenance, repair, and overhaul subsidiary of the Dutch railway company are also facing this problem. A considerable portion of the maintenance activities carried out by the company are unscheduled, which, in turn, severely stresses and stretches the workforce and resources available. One possible solution is to have a robust preventive maintenance plan; another is to plan maintenance based on real-time data obtained from sensor-based 'Health and Usage Monitoring Systems.' The former has been investigated in this paper. The preventive maintenance model developed for the train operator will subsequently be extended to tackle the unscheduled maintenance problem also affecting the aerospace industry. The extension of the model to the aerospace sector will be dealt with in the second part of the research and will, in turn, validate the soundness of the model developed. Thus, there are distinct areas addressed in this paper, including the mathematical modelling of preventive maintenance and optimization based on cost and system availability. The results of this research will help an organization choose the right maintenance strategy, allowing it to save considerable sums of money as opposed to overspending under the guise of maintaining high asset availability. The concept of delay time modelling was used to address the practical problem of unscheduled maintenance; delay time modelling can be used to help with support planning for a given asset. The model was run using MATLAB, and the results show that the ideal inspection interval computed with the extended model was 29 days from a minimal-cost perspective and 14 days from a minimum-downtime perspective. A risk matrix was constructed to represent the risk in terms of the probability of a fault leading to breakdown maintenance and its consequences in terms of maintenance cost. The choice of an optimal inspection interval of 29 days resulted in a cost of approximately 50 Euros, and the corresponding value of b(T) was 0.011. These values ensure that the risk associated with component X being maintained at an inspection interval of 29 days is more than acceptable. Thus, a switch in maintenance frequency from 90 days to 29 days would be optimal from the point of view of cost, downtime, and risk.
Keywords: delay time modelling, unscheduled maintenance, reliability, maintainability, availability
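The delay-time idea behind b(T) can be sketched numerically: defects arise at some rate, each takes a random "delay time" to become a breakdown, and b(T) is the probability that a defect arising during an inspection interval of length T leads to a breakdown before the next inspection. The parameter values below are illustrative assumptions, not the study's data.

```python
# Minimal sketch of a delay-time model: compute b(T) and scan T for the
# interval that minimises expected cost per day.
import numpy as np

lam = 0.004          # defect arrival rate per day (assumed)
mean_delay = 120.0   # mean delay time in days, exponential (assumed)
c_insp, c_rep, c_brk = 10.0, 40.0, 400.0  # inspection, repair, breakdown costs (assumed)

def b_of_T(T, n=2000):
    """b(T) = (1/T) * integral_0^T F(T - u) du for exponential delay times."""
    u = np.linspace(0, T, n)
    F = 1 - np.exp(-(T - u) / mean_delay)
    return np.trapz(F, u) / T

def cost_rate(T):
    """Expected cost per day over an inspection cycle of length T."""
    b = b_of_T(T)
    n_defects = lam * T  # expected defects arising per cycle
    return (c_insp + n_defects * (b * c_brk + (1 - b) * c_rep)) / T

Ts = np.arange(5, 120)
rates = [cost_rate(T) for T in Ts]
best = Ts[int(np.argmin(rates))]
print("optimal interval: %d days, b(T) = %.3f, cost rate = %.2f/day"
      % (best, b_of_T(best), min(rates)))
```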
Procedia PDF Downloads 132
818 Analytical Study: An M-Learning App Reflecting the Factors Affecting Student’s Adoption of M-Learning
Authors: Ahmad Khachan, Ahmet Ozmen
Abstract:
This study introduces a mobile bite-sized learning concept: a mobile application with social network motivation factors that encourages students to practice critical thinking, improve analytical skills, and learn knowledge sharing. We do not aim to propose another e-learning or distance-learning tool like Moodle or Edmodo; instead, we introduce a mobile learning tool called the Interactive M-learning Application. The tool reconstructs and strengthens the bonds between educators and learners and provides a foundation for integrating mobile devices in education. The application allows learners to stay connected all the time, share ideas, ask questions, and learn from each other. It is built on Android, since Android has the largest platform share in the world, dominating the market with a 74.45% share in 2018. We chose the Google Firebase server for hosting because of its flexibility, ease of hosting, and real-time update capabilities. The proposed m-learning tool was offered to four groups of university students in different majors. An improvement in the relations between the students, the teachers, and the academic institution was obvious. Students' performance improved considerably, along with their analytical and critical skills, and they showed a greater willingness to adopt mobile learning in class. We also compared our app with another tool in the same class to check the clarity and reliability of the results. The students' own mobile devices were used in this experimental study to ensure diversity of devices and platform versions.
Keywords: education, engineering, interactive software, undergraduate education
Procedia PDF Downloads 155
817 Emerging Cyber Threats and Cognitive Vulnerabilities: Cyberterrorism
Authors: Oludare Isaac Abiodun, Esther Omolara Abiodun
Abstract:
The purpose of this paper is to demonstrate that cyberterrorism exists and poses a threat to computer security and national security. Nowadays, people have become heavily dependent upon computers, phones, the Internet, and Internet of Things systems to share information, communicate, conduct searches, etc. However, these network systems are at risk from different sources, both known and unknown. The risks are caused by malicious individuals, groups, organizations, or governments that take advantage of vulnerabilities in computer systems to steal sensitive information from people, organizations, or governments. In doing so, they engage in computer threats, crime, and terrorism, thereby making the use of computers insecure for others. The threat of cyberterrorism takes various forms and differs from one country to another. These threats include disrupting communications and information, stealing data, destroying data, leaking and breaching data, interfering with messages and networks, and, in some cases, demanding financial rewards for stolen data. Hence, this study identifies many ways in which cyberterrorists utilize the Internet as a tool to advance their malicious mission, which negatively affects computer security and safety. It also identifies causes of disparate anomalous behaviors and the theoretical, ideological, and current forms of the likelihood of cyberterrorism. Therefore, as a countermeasure, this paper proposes the use of previous and current computer security models found in the literature to help in countering cyberterrorism.
Keywords: cyberterrorism, computer security, information, internet, terrorism, threat, digital forensic solution
Procedia PDF Downloads 96