Search results for: Accuracy

91 Competitors’ Influence Analysis of a Retailer by Using Customer Value and Huff’s Gravity Model

Authors: Yepeng Cheng, Yasuhiko Morimoto

Abstract:

Customer relationship analysis is vital for retail stores, especially for supermarkets. Point of sale (POS) systems make it possible to record customers' daily purchasing behaviors in an identification point of sale (ID-POS) database, which can be used to analyze the customer behavior of a supermarket. Customer value is an indicator, computed from the ID-POS database, of customer loyalty to a store. In general, a city contains many supermarkets, and nearby competitors significantly affect the customer value of a given supermarket's customers. However, it is impossible to obtain detailed ID-POS databases of competitor supermarkets. This study first focused on customer value and the distance between a customer's home and the supermarkets in a city, and then constructed models based on logistic regression analysis to analyze correlations between distance and purchasing behavior using only the POS database of a single supermarket chain. Three primary problems arose during modeling: the incomparability of customer values, multicollinearity between the customer value and distance data, and the small number of valid partial regression coefficients. Improved customer value, Huff's gravity model, and inverse attractiveness frequency are introduced to solve these problems. This paper presents three types of models based on these three methods for loyal-customer classification and competitor influence analysis. In numerical experiments, all model types are useful for loyal-customer classification. The model that combines all three methods is the best at evaluating the influence of nearby supermarkets on customers' purchasing from the chain, in terms of both valid partial regression coefficients and accuracy.

Keywords: Customer value, Huff's Gravity Model, POS, retailer.
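
The abstract leans on Huff's gravity model to make customer values comparable across stores. As a rough illustration of how that model assigns patronage probabilities, the sketch below computes them from store attractiveness and customer-store distances; the attractiveness values, the distance-decay exponent, and all variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def huff_probabilities(attractiveness, distances, decay=2.0):
    """Huff's gravity model: P(i,j) = (A_j / d_ij^decay) / sum_k (A_k / d_ik^decay).

    attractiveness: (n_stores,) store attractiveness, e.g. floor area (assumed proxy)
    distances:      (n_customers, n_stores) customer-to-store distances
    """
    utility = attractiveness / distances ** decay        # (n_customers, n_stores)
    return utility / utility.sum(axis=1, keepdims=True)  # each row sums to 1

# Illustrative numbers only: 2 customers, 3 supermarkets.
A = np.array([1200.0, 800.0, 1500.0])   # hypothetical floor areas (m^2)
d = np.array([[0.5, 1.2, 2.0],
              [1.5, 0.4, 1.0]])         # hypothetical distances (km)
print(huff_probabilities(A, d).round(3))
```

One plausible reading of the incomparability fix is that a customer's value at the chain is weighted by such probabilities before classification.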

90 A Two-Phase Flow Interface Tracking Algorithm Using a Fully Coupled Pressure-Based Finite Volume Method

Authors: Shidvash Vakilipour, Scott Ormiston, Masoud Mohammadi, Rouzbeh Riazi, Kimia Amiri, Sahar Barati

Abstract:

Two-phase and multi-phase flows are common in fluid mechanics engineering. Among the basic and applied problems of these flow types, two-phase parallel flow is one in which two immiscible fluids flow alongside each other. In this type of flow, the fluid properties (e.g., density, viscosity, and temperature) differ on the two sides of the interface. The most challenging part of the numerical simulation of two-phase flow is determining the location of the interface accurately. In the present work, a coupled interface tracking algorithm is developed based on the Arbitrary Lagrangian-Eulerian (ALE) approach using a cell-centered, pressure-based, coupled solver. To validate this algorithm, an analytical solution for fully developed two-phase flow in the presence of gravity is derived, and the results of the numerical simulation of this flow are compared with the analytical solution at various flow conditions. The simulations show good accuracy despite a relatively coarse, uniform grid. Temporal variations of the interface profile toward the steady-state solution show that a greater difference between the fluid properties (especially dynamic viscosity) results in larger traveling waves. Gravity studies also show that favorable gravity reduces the thickness of the heavier fluid while adverse gravity increases it, relative to the zero-gravity condition; however, the magnitude of the variation under favorable gravity is much larger than under adverse gravity.

Keywords: Coupled solver, gravitational force, interface tracking, Reynolds number to Froude number, two-phase flow.
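
The validation case is a fully developed two-layer channel flow, for which a closed-form solution exists. The sketch below solves a generic version of that problem (not necessarily the exact configuration in the paper): two immiscible layers between no-slip plates, each obeying mu_k u'' = dp/dx - rho_k g_x, matched by velocity and shear-stress continuity at the interface. All parameter values are placeholders.

```python
import numpy as np

# Hypothetical layer properties and geometry (placeholders, not from the paper).
mu1, mu2 = 1.0e-3, 5.0e-2      # dynamic viscosities (Pa.s)
rho1, rho2 = 1000.0, 900.0     # densities (kg/m^3)
h, H = 0.4e-2, 1.0e-2          # interface height, channel height (m)
dpdx, gx = -10.0, 0.0          # pressure gradient (Pa/m), streamwise gravity (m/s^2)

# u_k(y) = G_k*y^2/2 + a_k*y + b_k with G_k = (dp/dx - rho_k*gx)/mu_k; b1 = 0 (no slip at y=0).
G1 = (dpdx - rho1 * gx) / mu1
G2 = (dpdx - rho2 * gx) / mu2

# Unknowns x = [a1, a2, b2] from: no slip at y=H, velocity and shear continuity at y=h.
A = np.array([[0.0,  H,    1.0],    # u2(H) = 0
              [h,   -h,   -1.0],    # u1(h) - u2(h) = 0
              [mu1, -mu2,  0.0]])   # mu1*u1'(h) - mu2*u2'(h) = 0
b = np.array([-G2 * H**2 / 2,
              (G2 - G1) * h**2 / 2,
              mu2 * G2 * h - mu1 * G1 * h])
a1, a2, b2 = np.linalg.solve(A, b)

u1 = lambda y: G1 * y**2 / 2 + a1 * y          # lower layer, 0 <= y <= h
u2 = lambda y: G2 * y**2 / 2 + a2 * y + b2     # upper layer, h <= y <= H
print(u1(h), u2(h))  # equal at the interface by construction
```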

89 Accuracy of Peak Demand Estimates for Office Buildings Using eQUEST

Authors: Mahdiyeh Zafaranchi, Ethan S. Cantor, William T. Riddell, Jess W. Everett

Abstract:

The New Jersey Department of Military and Veterans Affairs (NJ DMAVA) operates over 50 facilities throughout the state of New Jersey, US. NJ DMAVA is under a mandate to move toward decarbonization, which will eventually include eliminating the use of natural gas and other fossil fuels for heating. At the same time, the organization requires increased resiliency against electric grid disruption. These competing goals necessitate adopting on-site renewables such as photovoltaic and geothermal power, as well as implementing power control strategies through microgrids. Planning for these changes requires a detailed understanding of current and future electricity use on yearly, monthly, and shorter time scales, as well as a breakdown of consumption by heating, ventilation, and air conditioning (HVAC) equipment. This paper discusses case studies of two buildings that were simulated using the QUick Energy Simulation Tool (eQUEST). Both buildings use electricity from the grid and photovoltaics; one also uses natural gas. Although electricity use data are available at hourly intervals and natural gas data at monthly intervals, the simulations were developed using monthly and yearly totals, an approach chosen to reflect the information available for most NJ DMAVA facilities. Once the simulations were complete, the results were compared to metrics recommended by several organizations for validating energy use simulations. In addition to yearly and monthly totals, the simulated peak demands were compared to actual monthly peak demand values; the simulated monthly peaks were within 30% of the measured values. These benchmarks will help to assess future energy planning efforts for NJ DMAVA.

Keywords: Building Energy Modeling, eQUEST, peak demand, smart meters.
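
The comparison metric in the abstract is the relative error between simulated and measured monthly peak demand. A minimal sketch of that bookkeeping with pandas, on placeholder smart-meter data, might look like this:

```python
import numpy as np
import pandas as pd

# Placeholder hourly metered demand (kW) for one year; real data would come from smart meters.
idx = pd.date_range("2022-01-01", "2022-12-31 23:00", freq="h")
rng = np.random.default_rng(0)
measured_kw = pd.Series(40 + 10 * rng.random(len(idx)), index=idx)

measured_peaks = measured_kw.resample("MS").max()   # monthly peak demand
simulated_peaks = measured_peaks * 1.15             # stand-in for eQUEST output
pct_error = 100 * (simulated_peaks - measured_peaks) / measured_peaks
print(pct_error.round(1))                           # the paper reports errors within 30%
```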

88 Structural Behavior of Precast Foamed Concrete Sandwich Panel Subjected to Vertical In-Plane Shear Loading

Authors: Y. H. Mugahed Amran, Raizal S. M. Rashid, Farzad Hejazi, Nor Azizi Safiee, A. A. Abang Ali

Abstract:

Experimental and analytical studies were conducted to examine the structural behavior of the precast foamed concrete sandwich panel (PFCSP) under vertical in-plane shear load. Six full-scale PFCSP specimens were fabricated with varying heights to study an important parameter, the slenderness ratio (H/t). The production technique of the PFCSP and the test setup are described. The experimental results were analyzed in terms of in-plane shear strength capacity, load-deflection profile, load-strain relationship, slenderness ratio, shear cracking patterns, and mode of failure. A finite element analysis (FEA) was implemented, and theoretical ultimate in-plane shear strengths were calculated using the ACI 318 equation for reinforced concrete walls, adopted here to predict the in-plane shear strength of the PFCSP. Decreasing the slenderness ratio from 24 to 14 increased the ultimate in-plane shear strength capacity by 26.51% experimentally and by 21.91% in the FEA models. The experimental results, FEA predictions, and theoretical calculations were compared and showed close agreement. On the basis of these results, the PFCSP wall has potential as an alternative to conventional load-bearing wall systems.

Keywords: Deflection profiles, foamed concrete, load-strain relationships, precast foamed concrete sandwich panel, slenderness ratio, vertical in-plane shear strength capacity.
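
The theoretical strengths were computed with the ACI 318 equation for reinforced concrete walls. One common form of that provision, in US-customary (psi) units, is V_n = A_cv(alpha_c * lambda * sqrt(f'c) + rho_t * f_y); whether the authors adopted exactly this form, and all parameter values below, are assumptions for illustration (foamed concrete would in practice call for adjusted lambda and f'c).

```python
import math

def aci_wall_shear_strength(Acv, fc, rho_t, fy, hw_over_lw, lam=1.0):
    """Nominal in-plane wall shear, V_n = A_cv*(alpha_c*lam*sqrt(f'c) + rho_t*f_y), psi/in^2 units.

    alpha_c is 3.0 for hw/lw <= 1.5, 2.0 for hw/lw >= 2.0, linearly interpolated between.
    """
    if hw_over_lw <= 1.5:
        alpha_c = 3.0
    elif hw_over_lw >= 2.0:
        alpha_c = 2.0
    else:
        alpha_c = 3.0 - (hw_over_lw - 1.5) / 0.5
    return Acv * (alpha_c * lam * math.sqrt(fc) + rho_t * fy)

# Illustrative numbers only, not the specimen properties from the paper.
print(aci_wall_shear_strength(Acv=200.0, fc=1500.0, rho_t=0.0025, fy=60000.0, hw_over_lw=1.8))
```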

87 Remaining Useful Life Estimation of Bearings Based on Nonlinear Dimensional Reduction Combined with Timing Signals

Authors: Zhongmin Wang, Wudong Fan, Hengshan Zhang, Yimin Zhou

Abstract:

In data-driven prognostic methods, the accuracy of remaining useful life estimation for bearings mainly depends on the performance of the health indicators, which are usually fused from statistical features extracted from vibration signals. Existing health indicators have two drawbacks: (1) statistical features with different ranges contribute differently to the health indicator, so expert knowledge is required to extract the features; (2) when convolutional neural networks are used to handle the time-frequency features of signals, the time-series nature of the signals is not considered. To overcome these drawbacks, this study proposes a method combining a convolutional neural network with a gated recurrent unit to extract time-frequency image features, which are then used to construct a health indicator and predict the remaining useful life of bearings. First, the original signals are converted into time-frequency images using the continuous wavelet transform to form the original feature sets. Second, the convolutional and pooling layers of the convolutional neural network select the most sensitive features of the time-frequency images from the original feature sets. Finally, these selected features are fed into the gated recurrent unit to construct the health indicator. The results show that the proposed method outperforms related studies that used the same bearing dataset provided by PRONOSTIA.

Keywords: Continuous wavelet transform, convolution neural network, gated recurrent unit, health indicators, remaining useful life.
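
As a rough sketch of the described pipeline — continuous-wavelet-transform images fed through convolutional layers, then a gated recurrent unit that builds the health indicator — the snippet below wires those stages together in Keras. The image size, wavelet scales, layer widths, and the Morlet wavelet choice are assumptions; the paper's exact architecture is not reproduced.

```python
import numpy as np
import pywt
from tensorflow.keras import layers, models

def signal_to_cwt_image(signal, scales=np.arange(1, 65), wavelet="morl"):
    """Continuous wavelet transform of a 1-D vibration snapshot -> (scales, time) image."""
    coeffs, _ = pywt.cwt(signal, scales, wavelet)
    return np.abs(coeffs)

print(signal_to_cwt_image(np.sin(np.linspace(0, 50, 64))).shape)  # (64, 64)

# A sequence of CWT images per bearing: (time steps, scales, samples, channels).
seq_len, n_scales, n_samples = 16, 64, 64
model = models.Sequential([
    layers.Input(shape=(seq_len, n_scales, n_samples, 1)),
    layers.TimeDistributed(layers.Conv2D(8, 3, activation="relu")),
    layers.TimeDistributed(layers.MaxPooling2D(2)),
    layers.TimeDistributed(layers.Flatten()),
    layers.GRU(32),                        # captures the time ordering a CNN alone ignores
    layers.Dense(1, activation="sigmoid")  # health indicator in [0, 1]
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```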

86 Machine Learning Techniques for COVID-19 Detection: A Comparative Analysis

Authors: Abeer Aljohani

Abstract:

The spread of COVID-19 has been one of the most extreme pandemics across the globe. The disease is caused by a contagious coronavirus that continuously mutates into numerous variants; at the time of writing, the B.1.1.529 variant, labeled Omicron, had been detected in South Africa. The huge spread of COVID-19 has affected many lives and placed exceptional pressure on healthcare systems worldwide, while everyday life and the global economy have been at stake. The large number of COVID-19 cases has placed a huge burden on hospitals and health workers. To reduce this burden, this paper predicts COVID-19 disease based on the symptoms and medical history of the patient. As machine learning (ML) is a widely accepted approach that gives promising results in healthcare, this research presents an architecture for COVID-19 detection using ML techniques integrated with feature dimensionality reduction. The study uses a standard University of California Irvine (UCI) dataset comprising the symptoms of 5434 patients, and compares several supervised ML techniques on the presented architecture. The architecture uses 10-fold cross-validation for generalization and Principal Component Analysis (PCA) for feature reduction. Standard parameters are used to evaluate the proposed architecture, including F1-score, precision, accuracy, recall, the Receiver Operating Characteristic (ROC), and the Area Under the Curve (AUC). The results show that decision tree, random forest, and neural network classifiers outperform the other state-of-the-art ML techniques and can be used to effectively identify COVID-19 infection cases.

Keywords: Supervised machine learning, COVID-19 prediction, healthcare analytics, Random Forest, Neural Network.
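
The evaluation protocol in the abstract — PCA for dimensionality reduction plus 10-fold cross-validation over several supervised classifiers — can be sketched with scikit-learn as below; the feature matrix is a random placeholder, not the UCI symptom dataset.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((5434, 20))       # placeholder for the symptom features
y = rng.integers(0, 2, 5434)     # placeholder labels (COVID / no COVID)

classifiers = {
    "decision tree": DecisionTreeClassifier(),
    "random forest": RandomForestClassifier(),
    "neural network": MLPClassifier(max_iter=500),
    "SVM": SVC(),
}
for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), PCA(n_components=10), clf)
    scores = cross_val_score(pipe, X, y, cv=10, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```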

85 Dispersion Rate of Spilled Oil in Water Column under Non-Breaking Water Waves

Authors: Hanifeh Imanian, Morteza Kolahdoozan

Abstract:

The purpose of this study is to present a mathematical expression for calculating the dispersion rate of spilled oil in the water column under non-breaking waves. A multiphase numerical model, whose hydraulic calculations had been validated previously, was applied to compute the waves and the oil phase concurrently. More than 200 scenarios of oil spilling in wavy waters were simulated with this model, and the outcomes were collected in a database. The recorded results were examined to identify the major parameters affecting vertical oil dispersion, and six parameters were identified as the main independent factors. Statistical tests were then conducted to identify relationships between the dependent variable (dispersed oil mass in the water column) and the independent variables (the wave height, length, and period, and the spilled oil's density, viscosity, and mass). Finally, a mathematical-statistical relationship is proposed to predict the dispersed oil mass in marine waters. To verify the proposed relationship, a laboratory case available in the literature was selected; the oil mass rate penetrating the water body computed by the suggested regression showed good agreement with the experimental data. The validated mathematical-statistical expression is a useful tool for predicting oil dispersion in oil spill events in marine areas.

Keywords: Dispersion, marine environment, mathematical-statistical relationship, oil spill.
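
The paper's final product is a regression linking dispersed oil mass to six wave and oil parameters. The exact functional form is not given in the abstract, so the sketch below assumes a power-law (log-linear) form and fits it by ordinary least squares on placeholder simulation records.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200  # the study database held roughly 200 scenarios
# Placeholder predictors: wave height, length, period; oil density, viscosity, spilled mass.
X = rng.uniform([0.1, 5, 2, 850, 1e-3, 10],
                [1.0, 50, 10, 990, 1e-1, 500], size=(n, 6))
y = 0.3 * X[:, 0] ** 1.5 * X[:, 5] ** 0.8 * rng.lognormal(0, 0.1, n)  # synthetic response

# Assumed power-law form: log(m_disp) = b0 + sum_i b_i * log(x_i).
design = sm.add_constant(np.log(X))
fit = sm.OLS(np.log(y), design).fit()
print(fit.params.round(3), round(fit.rsquared, 3))
```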

84 Appraisal of Trace Elements in Scalp Hair of School Children in Kandal Province, Cambodia

Authors: A. Yavar, S. Sarmani, K. S. Khoo

Abstract:

The analysis of trace elements in human hair provides crucial insights into an individual's nutritional status and environmental exposure. This research aimed to examine the levels of toxic and essential elements in the scalp hair of school children aged 12-17 from three villages (Anglong Romiot (AR), Svay Romiot (SR), and Kampong Kong (KK)) in Cambodia's Kandal province, a region whose residents are especially vulnerable to toxic elements, notably arsenic (As), due to their dietary habits, lifestyle, and environmental conditions. The scalp hair samples were analyzed using the k0-Instrumental Neutron Activation Analysis method (k0-INAA), with a six-hour irradiation in the Malaysian Nuclear Agency (MNA) research reactor followed by identification of the gamma peaks of the radionuclides with a High-Purity Germanium (HPGe) detector. The analysis identified 31 elements in the hair samples from the study area: As, Au, Br, Ca, Ce, Co, Dy, Eu-152m, Hg-197, Hg-203, Ho, Ir, K, La, Lu, Mn, Na, Pa, Pt-195m, Pt-197, Sb, Sc-46, Sc-47, Sm, Sn-117m, W-181, W-187, Yb-169, Yb-175, Zn, and Zn-69m. The accuracy of the method was verified through the analysis of ERM-DB001 human hair as a Certified Reference Material (CRM), with results consistent with the certified values. Given the prevalent arsenic pollution in the research area, the study also examined the relationships between the concentration of As and the other elements using Pearson's correlation test. The outcomes offer a comprehensive resource for future investigations of toxic and essential elements in the region; the main body of the paper provides a more extensive discussion of the implications of the arsenic pollution and of the observed correlations, to enhance understanding and inform future research directions.

Keywords: Human scalp hair, toxic and essential elements, Kandal Province, Cambodia, k0-Instrumental Neutron Activation Method.
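
The correlation step — Pearson's test between As and every other measured element — is routine to reproduce; the sketch below uses pandas on a placeholder concentration table covering a few of the 31 reported elements.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
# Placeholder hair-element concentrations (mg/kg); real values come from k0-INAA.
df = pd.DataFrame({
    "As": rng.lognormal(0.0, 0.5, 60),
    "Zn": rng.lognormal(5.0, 0.3, 60),
    "Hg": rng.lognormal(-1.0, 0.6, 60),
    "Sb": rng.lognormal(-2.0, 0.4, 60),
})
print(df.corr(method="pearson")["As"].drop("As").round(3))  # r of As vs. each element
```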

83 Probe-Assisted Axillary Lymph Node Biopsy Compared with Axillary Dissection in Breast Cancer: A Retrospective Study from the West of Iran

Authors: Morteza Alizadeh Foroutan, Hassan Moayeri, Keivan Sabooni, Motahareh Rouhi Ardeshiri

Abstract:

Breast cancer (BC) incidence is increasing annually in various parts of the world, and sentinel lymph node biopsy (SLNB) has become a new standard of care as a staging procedure. In the present study, the gamma probe technique was used for SLNB as a safe method with higher accuracy and fewer complications. The study sought to compare the results of two surgical techniques, axillary lymph node dissection (ALND) and SLNB, including the epidemiological results and clinicopathological features of BC patients from the western provinces of Iran. In total, 420 women with BC who were referred to the breast clinic in Sanandaj, Kurdistan province during 2017-2021 were identified. Of these, 318 patients underwent breast surgery, and 277 of them participated in the current study. Patients were divided into those undergoing ALND and those undergoing SLNB. The criteria for complete dissection or axillary biopsy using the gamma probe were based on the results of clinical examinations and the presence of palpable lymph nodes. Post-surgical complications occurred in 58 (18.9%) cases, comprising 15 (25.9%) patients in the SLNB group and 43 (74.1%) in the ALND group (P = 0.74). Seroma (60.3%) was the most frequently reported complication in each group. Most patients had tumors in the upper-outer quadrant of the left breast. The mean tumor dimension in the SLNB and ALND groups was 2.1 ± 1.3 cm and 3.2 ± 1.8 cm, respectively (P = 0.003). The benefits of breast-conserving surgery (BCS) with the SLNB technique are clearly undeniable; it can be considered a method with fewer complications and a better prognosis. Accordingly, SLNB and BCS are favorable methods that can be performed along with the gamma probe technique, which is safe and accurate.

Keywords: Breast cancer, Sentinel lymph node biopsy, Axillary lymph node dissection, Gamma probe.
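
The reported tumor-size difference (2.1 ± 1.3 cm vs. 3.2 ± 1.8 cm, P = 0.003) is the kind of two-sample comparison that can be checked from summary statistics alone; the group sizes below are hypothetical, since the abstract does not give the exact split of the 277 patients.

```python
from scipy import stats

# Means and SDs from the abstract; group sizes are assumed for illustration only.
t, p = stats.ttest_ind_from_stats(mean1=2.1, std1=1.3, nobs1=120,
                                  mean2=3.2, std2=1.8, nobs2=157)
print(f"t = {t:.2f}, p = {p:.4f}")  # with these n's the difference is clearly significant
```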

82 Comparison of Different Hydrograph Routing Techniques in XPSTORM Modelling Software: A Case Study

Authors: Fatema Akram, Mohammad Golam Rasul, Mohammad Masud Kamal Khan, Md. Sharif Imam Ibne Amir

Abstract:

A variety of routing techniques is available for developing surface runoff hydrographs from rainfall. The selection of the runoff routing method is vital, as it is directly related to the type of watershed and the required degree of accuracy. Different modelling software packages are available to explore the rainfall-runoff process in urban areas. XPSTORM, a link-node based, integrated stormwater modelling package, has been used in this study to develop surface runoff hydrographs for a golf course area located in Rockhampton in Central Queensland, Australia. Four commonly used methods, namely SWMM runoff, kinematic wave, Laurenson, and time-area, are employed to generate runoff hydrographs for the design storm of this study area. In the runoff mode of XPSTORM, the rainfall, infiltration, evaporation, and depression storage for the subcatchments were simulated, and the runoff from each subcatchment to its collection node was calculated. The simulation results are presented, discussed, and compared. The total surface runoff generated by the SWMM runoff, kinematic wave, and time-area methods is found to be reasonably close, which indicates that any of these methods can be used for developing a runoff hydrograph of the study area. The Laurenson method produces comparatively less total surface runoff; however, it creates the highest peak runoff of all the methods, which may make it suitable for hilly regions. Although the Laurenson hydrograph technique is a widely accepted surface runoff routing technique in Queensland (Australia), extensive investigation with detailed topographic and hydrologic data is recommended in order to assess its suitability for the case study area.

Keywords: ARI, design storm, IFD, rainfall temporal pattern, routing techniques, surface runoff, XPSTORM.
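
Of the four routing techniques compared, the time-area method is the easiest to show compactly: the runoff hydrograph is the convolution of rainfall excess with the catchment's time-area histogram. The sketch below uses an assumed histogram and rainfall pattern, not the golf course data.

```python
import numpy as np

dt = 5 * 60                                   # time step: 5 minutes (s)
area = 2.0e5                                  # catchment area (m^2), placeholder
excess_mm = np.array([0, 2, 6, 4, 1, 0])      # rainfall excess per step (mm), placeholder
ta_fraction = np.array([0.1, 0.3, 0.4, 0.2])  # assumed time-area histogram (sums to 1)

# Q(t) = sum_k excess(t-k) * (fraction of area arriving with lag k): a discrete convolution.
excess_m = excess_mm / 1000.0
q = np.convolve(excess_m, ta_fraction) * area / dt  # outflow (m^3/s) per step
print(q.round(3))
```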

81 Artificial Neural Network Modeling of a Closed Loop Pulsating Heat Pipe

Authors: Vipul M. Patel, Hemantkumar B. Mehta

Abstract:

Technological innovations in the electronics world demand novel heat transfer devices that are compact, simple in design, inexpensive, and effective. The Closed Loop Pulsating Heat Pipe (CLPHP) is a passive phase-change heat transfer device with the potential to transfer heat quickly and efficiently from source to sink. The thermal performance of a CLPHP is governed by various parameters, such as the number of U-turns, orientation, heat input, working fluid, and filling ratio. The present paper is an attempt to predict the thermal performance of a CLPHP using an Artificial Neural Network (ANN). Filling ratio and heat input are considered as input parameters, while thermal resistance is set as the target parameter. The types of neural networks considered in the present paper are radial basis, generalized regression, linear layer, cascade forward back propagation, feed forward back propagation, feed forward distributed time delay, layer recurrent, and Elman back propagation. Linear, logistic sigmoid, tangent sigmoid, and radial basis Gaussian functions are used as transfer functions. Prediction accuracy is measured against the experimental data reported by researchers in the open literature, as a function of the Mean Absolute Relative Deviation (MARD). The predictions of a generalized regression ANN model with a spread constant of 4.8 agree with the experimental data, with a MARD within ±1.81%.

Keywords: ANN models, CLPHP, filling ratio, generalized regression, spread constant.
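
A generalized regression neural network is essentially Nadaraya-Watson kernel regression with a Gaussian kernel whose width is the spread constant. The sketch below implements that estimator directly, using the paper's spread of 4.8; the training points are placeholders, not the published CLPHP data.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, spread=4.8):
    """Generalized regression NN: Gaussian-kernel weighted average of training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)  # squared distances
    w = np.exp(-d2 / (2 * spread ** 2))
    return (w @ y_train) / w.sum(axis=1)

# Placeholder (filling ratio %, heat input W) -> thermal resistance (K/W).
X = np.array([[40, 20], [40, 60], [60, 20], [60, 60], [80, 40]], float)
y = np.array([1.8, 0.9, 1.5, 0.7, 1.1])
print(grnn_predict(X, y, np.array([[50.0, 40.0]])))
```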

80 Experimental Studies of Sigma Thin-Walled Beams Strengthen by CFRP Tapes

Authors: Katarzyna Rzeszut, Ilona Szewczak

Abstract:

This paper reviews selected methods of strengthening steel structures with carbon fiber reinforced polymer (CFRP) tapes and analyses the influence of composite materials on steel thin-walled elements. The study also focuses on the problem of applying fast and effective strengthening methods to steel structures made of thin-walled profiles. It is worth noting that strengthening thin-walled structures is very complex, owing to the inability to make welded joints in this type of element and the limited applicability of mechanical fasteners. Moreover, structures made of thin-walled cross-sections demonstrate a high sensitivity to imperfections and a tendency toward interactive buckling, which may substantially reduce the critical load capacity. Given the lack of commonly used and recognized modern methods for strengthening thin-walled steel structures, the authors performed experimental studies of thin-walled sigma profiles strengthened with CFRP tapes. The paper presents the experimental stand and the preliminary results of laboratory tests analysing the effectiveness of strengthening steel beams made of thin-walled sigma profiles with CFRP tapes. The study includes six beams made of cold-rolled sigma profiles with a height of 140 mm, a wall thickness of 2.5 mm, and a length of 3 m, subjected to a uniformly distributed load. Four beams were strengthened with Sika CarboDur S carbon fiber tape, while the other two were tested without strengthening to obtain reference results. Based on the obtained results, the suitability of the applied composite materials for strengthening thin-walled structures was evaluated.

Keywords: CFRP tapes, sigma profiles, steel thin-walled structures, strengthening.

79 A Modelling Study of the Photochemical and Particulate Pollution Characteristics above a Typical Southeast Mediterranean Urban Area

Authors: Kiriaki-Maria Fameli, Vasiliki D. Assimakopoulos, Vasiliki Kotroni

Abstract:

The Greater Athens Area (GAA) faces photochemical and particulate pollution episodes as a result of the combined effects of local pollutant emissions, regional pollution transport, synoptic circulation, and topographic characteristics. The area has undergone significant changes since the Athens 2004 Olympic Games because of large-scale infrastructure works that led to a shift of population to areas previously characterized as rural, growth of the traffic fleet, and the operation of highways. Nevertheless, few recent modelling studies have been performed, owing to the lack of an accurate, up-to-date emission inventory. The photochemical modelling system MM5/CAMx was applied to study the photochemical and particulate pollution characteristics above the GAA for two distinct ten-day periods in the summers of 2006 and 2010, during which air pollution episodes occurred. A new, updated emission inventory based on official data was used. Comparison of modelled results with measurements demonstrated the importance and accuracy of the new Athens emission inventory relative to previous modelling studies. The model reproduced the local meteorological conditions and the daily ozone and particulate fluctuations at different locations across the GAA. Higher ozone levels were found in suburban and rural areas, as well as over the sea to the south of the basin. Concerning PM10, high concentrations were computed at the city centre and the southeastern suburbs, in agreement with measured data. Source apportionment analysis showed that different sources contribute to the ozone levels, with local sources (traffic, port activities) affecting its formation.

Keywords: Photochemical modelling, urban pollution, greater Athens area, MM5/CAMx.

78 Combination of Different Classifiers for Cardiac Arrhythmia Recognition

Authors: M. R. Homaeinezhad, E. Tavakkoli, M. Habibi, S. A. Atyabi, A. Ghaffari

Abstract:

This paper describes a new supervised fusion (hybrid) electrocardiogram (ECG) classification solution, consisting of a new QRS-complex geometrical feature extraction and a new version of the learning vector quantization (LVQ) classification algorithm aimed at overcoming the stability-plasticity dilemma. Toward this objective, after detection and delineation of the major events of the ECG signal via an appropriate algorithm, each QRS region and its corresponding discrete wavelet transform (DWT) are treated as virtual images, and each is divided into eight polar sectors. The curve length of each excerpted segment is then calculated and used as an element of the feature space. To increase the robustness of the proposed classification algorithm against noise, artifacts, and arrhythmic outliers, a fusion structure was designed and implemented, consisting of five different classifiers: a Support Vector Machine (SVM), a Modified Learning Vector Quantization (MLVQ) network, and three Multi-Layer Perceptron-Back Propagation (MLP-BP) neural networks with different topologies. The proposed algorithm was applied to all 48 MIT-BIH Arrhythmia Database records (within-record analysis), and the discrimination power of the classifier in separating the different beat types of each record was assessed; the average accuracy was Acc = 98.51%. The proposed method was also applied to six arrhythmia classes (Normal, LBBB, RBBB, PVC, APB, PB) belonging to 20 records of the aforementioned database (between-record analysis), achieving an average accuracy of Acc = 95.6%. To evaluate the performance quality of the new hybrid learning machine, the obtained results were compared with similar peer-reviewed studies in this area.

Keywords: Feature Extraction, Curve Length Method, Support Vector Machine, Learning Vector Quantization, Multi Layer Perceptron, Fusion (Hybrid) Classification, Arrhythmia Classification, Supervised Learning Machine.
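
The core feature in this classifier is the curve length of each excerpted segment of the QRS region (and of its DWT). A minimal version of that feature — the arc length of a sampled curve, computed per segment — is sketched below; dividing the window into eight equal time segments stands in for the paper's eight polar sectors.

```python
import numpy as np

def curve_length(y, dx=1.0):
    """Arc length of a sampled curve: sum of sqrt(dx^2 + dy^2) over consecutive samples."""
    dy = np.diff(y)
    return np.sum(np.sqrt(dx ** 2 + dy ** 2))

def segment_features(qrs, n_segments=8):
    """Curve length of each of n_segments slices of a QRS window (polar sectors simplified)."""
    parts = np.array_split(np.asarray(qrs, float), n_segments)
    return np.array([curve_length(p) for p in parts])

qrs_window = np.sin(np.linspace(0, np.pi, 80)) * np.hanning(80)  # toy QRS-like shape
print(segment_features(qrs_window).round(2))                     # 8-element feature vector
```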

77 Integration of Big Data to Predict Transportation for Smart Cities

Authors: Sun-Young Jang, Sung-Ah Kim, Dongyoun Shin

Abstract:

Intelligent transportation systems are essential for building smarter cities, and machine-learning-based transportation prediction is a promising approach because it makes invisible patterns visible. In this context, this research aims to build a prototype model that predicts conditions on a transportation network using big data and machine learning technology. Among urban transportation systems, this research focuses on the bus system. The research problem is that existing headway models cannot respond to dynamic traffic conditions, so bus delays occur frequently. To overcome this problem, a prediction model is presented that finds patterns of bus delay by applying machine learning to the following data sets: traffic, weather, and bus status. This research presents a flexible headway model to predict bus delay and analyzes the results. The prototype model is built from real-time bus data gathered through public data portals and the government's real-time Application Program Interface (API). These data are the fundamental resources for organizing interval pattern models of bus operations together with traffic environment factors (road speeds, station conditions, weather, and real-time operating information for the buses). The prototype model was designed with a machine learning tool (RapidMiner Studio) and tested on bus delay prediction. This research presents experiments that increase the prediction accuracy for bus headway by analyzing urban big data; such analysis is important for predicting the future and finding correlations by processing huge amounts of data. Based on this analysis method, the research represents an effective use of machine learning and urban big data to understand urban dynamics.

Keywords: Big data, bus headway prediction, machine learning, public transportation.
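
The prototype fuses traffic, weather, and bus-status records into a supervised model of bus delay (the authors used RapidMiner Studio). An equivalent sketch in scikit-learn, on entirely synthetic placeholder features, could be:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
# Placeholder features: road speed (km/h), temperature (C), rainfall (mm/h), stop dwell (s).
X = np.column_stack([rng.uniform(5, 60, n), rng.uniform(-5, 35, n),
                     rng.exponential(1.0, n), rng.uniform(10, 90, n)])
delay = 300 - 4 * X[:, 0] + 20 * X[:, 2] + rng.normal(0, 30, n)  # synthetic delay (s)

X_tr, X_te, y_tr, y_te = train_test_split(X, delay, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out trips: {model.score(X_te, y_te):.2f}")
```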

76 A Damage Level Assessment Model for Extra High Voltage Transmission Towers

Authors: Huan-Chieh Chiu, Hung-Shuo Wu, Chien-Hao Wang, Yu-Cheng Yang, Ching-Ya Tseng, Joe-Air Jiang

Abstract:

Power failure resulting from tower collapse during violent seismic events can bring enormous, inestimable losses. The Chi-Chi earthquake, for example, struck Taiwan on September 21, 1999 and caused huge damage to the power system; nearly 10% of extra high voltage (EHV) transmission towers were damaged. The seismic hazards of EHV transmission towers should therefore be monitored and evaluated. The ultimate goal of this study is to establish a damage level assessment model for EHV transmission towers. Earthquake data provided by the Taiwan Central Weather Bureau serve as a reference and lay the foundation for the subsequent earthquake simulations and analyses. Once an earthquake occurs, parameters related to the damage level at each point of an EHV tower are simulated and analyzed from monitoring station data. Through the Fourier transform, the seismic wave is decomposed into its frequency components, and the results are presented as a response spectrum. With this method, the seismic frequencies that damage EHV towers the most are clearly identified. An estimation model is then built to determine the damage level caused by a future seismic event. Finally, instead of relying on visual observation by inspectors, the proposed model can provide a power company with damage information for a transmission tower. Using the model, the manpower required for visual observation can be reduced, and the accuracy of the damage level estimation can be substantially improved. Such a model is greatly useful for structural health and construction monitoring because it enables long-term evaluation of structural characteristics and long-term damage detection.

Keywords: Smart grid, EHV transmission tower, response spectrum, damage level monitoring.
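
The damage-level analysis rests on the response spectrum of each recorded motion: the peak responses of single-degree-of-freedom oscillators over a range of periods. A compact Newmark (average-acceleration) implementation of that computation is sketched below; the ground motion here is synthetic noise, not Central Weather Bureau data.

```python
import numpy as np

def response_spectrum(ag, dt, periods, zeta=0.05):
    """Pseudo-acceleration spectrum via Newmark average-acceleration integration of
    u'' + 2*zeta*w*u' + w^2*u = -ag(t) for each period T (unit mass)."""
    beta, gamma = 0.25, 0.5
    sa = []
    for T in periods:
        w = 2 * np.pi / T
        k, c = w ** 2, 2 * zeta * w
        khat = k + gamma * c / (beta * dt) + 1 / (beta * dt ** 2)
        u = v = 0.0
        a = -ag[0]                      # initial acceleration for u = v = 0
        umax = 0.0
        for i in range(len(ag) - 1):
            dp = -(ag[i + 1] - ag[i])   # incremental load
            dphat = dp + (1 / (beta * dt) + gamma * c / beta) * v \
                       + (1 / (2 * beta) + dt * c * (gamma / (2 * beta) - 1)) * a
            du = dphat / khat
            dv = gamma * du / (beta * dt) - gamma * v / beta + dt * a * (1 - gamma / (2 * beta))
            da = du / (beta * dt ** 2) - v / (beta * dt) - a / (2 * beta)
            u, v, a = u + du, v + dv, a + da
            umax = max(umax, abs(u))
        sa.append(w ** 2 * umax)        # pseudo-acceleration = w^2 * peak displacement
    return np.array(sa)

rng = np.random.default_rng(4)
ag = rng.normal(0, 0.5, 2000)           # placeholder accelerogram (m/s^2), dt = 0.01 s
print(response_spectrum(ag, 0.01, periods=[0.2, 0.5, 1.0, 2.0]).round(2))
```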

75 Improving Fake News Detection Using K-means and Support Vector Machine Approaches

Authors: Kasra Majbouri Yazdi, Adel Majbouri Yazdi, Saeid Khodayi, Jingyu Hou, Wanlei Zhou, Saeed Saedy

Abstract:

Fake news and false information are big challenges for all types of media, especially social media. There is a lot of false information, fake likes, views, and duplicated accounts, as large social networks such as Facebook and Twitter have admitted. Much of the information appearing on social media is doubtful and in some cases misleading, and it needs to be detected as soon as possible to avoid a negative impact on society. The dimensions of fake news datasets are growing rapidly, so to detect false information with less computation time and complexity, the dimensionality needs to be reduced. One of the best techniques for reducing data size is feature selection, which aims to choose a feature subset from the original set that improves classification performance. In this paper, a feature selection method is proposed that integrates K-means clustering with the Support Vector Machine (SVM) and works in four steps. First, the similarities between all features are calculated. Then, the features are divided into several clusters. Next, the final feature set is selected from all clusters, and finally, fake news is classified based on the final feature subset using the SVM. The proposed method was evaluated by comparing its performance with other state-of-the-art methods on several benchmark datasets, and the outcome showed better classification of false information. Detection performance improved in two respects: the detection runtime decreased, and the classification accuracy increased, because redundant features were eliminated and the dataset dimensionality was reduced.

Keywords: Fake news detection, feature selection, support vector machine, K-means clustering, machine learning, social media.
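
The four-step scheme — measure feature similarity, cluster the features, keep one representative per cluster, classify with an SVM — can be sketched by clustering the transposed feature matrix so that each feature is a point; everything below (data, cluster count) is illustrative, not the paper's configuration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.random((500, 40))     # placeholder documents x features
y = rng.integers(0, 2, 500)   # placeholder fake/real labels

# Steps 1-3: cluster the features (columns) and keep the feature nearest each centroid.
k = 10
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X.T)
selected = [int(np.argmin(((X.T - c) ** 2).sum(axis=1))) for c in km.cluster_centers_]

# Step 4: SVM on the reduced feature subset.
print(cross_val_score(SVC(), X[:, selected], y, cv=5).mean())
```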

74 Phosphine Mortality Estimation for Simulation of Controlling Pest of Stored Grain: Lesser Grain Borer (Rhyzopertha dominica)

Authors: Mingren Shi, Michael Renton

Abstract:

There is a world-wide need for the development of sustainable management strategies to control pest infestation and the development of phosphine (PH3) resistance in the lesser grain borer (Rhyzopertha dominica). Computer simulation models can provide a relatively fast, safe, and inexpensive way to weigh the merits of various management options. However, the usefulness of simulation models relies on the accurate estimation of important model parameters, such as mortality. Both concentration and time of exposure are important in determining mortality in response to a toxic agent. Recent research indicated the existence of two resistance phenotypes of R. dominica in Australia, weak and strong, and revealed that the presence of resistance alleles at two loci confers strong resistance, motivating the construction of a two-locus model of resistance. Experimental data sets on purified pest strains, each corresponding to a single genotype of the two-locus model, were also available, so it became possible to include the mortalities of the different genotypes explicitly in the model. In this paper we describe how two generalized linear models (GLMs), probit and logistic, were fitted to the available experimental data sets. We used a direct algebraic approach, the generalized inverse matrix technique, rather than the traditional maximum likelihood estimation, to estimate the model parameters. The results show that both the probit and logistic models fit the data sets well, but the former is much better in terms of smaller least-squares (numerical) errors. Meanwhile, the generalized inverse matrix technique achieved accuracy similar to that of maximum likelihood estimation, while being less time consuming and computationally demanding.

Keywords: Mortality estimation, probit models, logistic model, generalized inverse matrix approach, pest control simulation.
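
The fitting procedure — a probit dose-response surface in concentration and exposure time, solved with a generalized inverse rather than maximum likelihood — reduces to one pseudo-inverse multiplication once the observed mortalities are probit-transformed. The sketch below shows that mechanic on placeholder bioassay points; the actual model terms used in the paper may differ.

```python
import numpy as np
from scipy.stats import norm

# Placeholder bioassay grid: concentration (mg/L), exposure time (h), observed mortality.
C = np.array([0.01, 0.01, 0.05, 0.05, 0.10, 0.10])
t = np.array([24.0, 48.0, 24.0, 48.0, 24.0, 48.0])
p = np.array([0.05, 0.20, 0.35, 0.70, 0.60, 0.95])

# Assumed probit model: Phi^{-1}(p) = b0 + b1*log(C) + b2*log(t).
X = np.column_stack([np.ones_like(C), np.log(C), np.log(t)])
y = norm.ppf(p)
beta = np.linalg.pinv(X) @ y          # generalized (Moore-Penrose) inverse solution
p_hat = norm.cdf(X @ beta)            # fitted mortalities for comparison
print(beta.round(3), np.abs(p_hat - p).max().round(3))
```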

73 Geostatistical Analysis and Mapping of Groundlevel Ozone in a Medium Sized Urban Area

Authors: F. J. Moral García, P. Valiente González, F. López Rodríguez

Abstract:

Ground-level tropospheric ozone is one of the air pollutants of most concern. It is mainly produced by photochemical processes involving nitrogen oxides and volatile organic compounds in the lower parts of the atmosphere. Ozone levels become particularly high in regions close to high ozone-precursor emissions and during summer, when stagnant meteorological conditions with high insolation and high temperatures are common. This work presents some results of a study of urban ozone distribution patterns in the city of Badajoz, the largest and most industrialized city in the Extremadura region (southwest Spain). Fourteen sampling campaigns, at least one per month, were carried out to measure ambient air ozone concentrations with an automatic portable analyzer, during periods selected for conditions favourable to ozone production. The measured ozone data were then analyzed using geostatistical techniques to evaluate the ozone distribution across the city. First, the exploratory analysis revealed that the data were normally distributed, a desirable property for the subsequent stages of the geostatistical study. Second, in the structural analysis, theoretical spherical models provided the best fit for all monthly experimental variograms. The variogram parameters (sill, range, and nugget) revealed that the maximum distance of spatial dependence is between 302 and 790 m, and that the variable, air ozone concentration, is not evenly distributed over short distances. Finally, predictive ozone maps were derived for all points of the study area by means of geostatistical algorithms (kriging). High prediction accuracy was obtained in all cases, as cross-validation showed. Probability maps based on kriging interpolation and the kriging standard deviation also provided useful information for hazard assessment.

Keywords: Kriging, map, tropospheric ozone, variogram.
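
The structural-analysis step fits a spherical variogram model to the empirical semivariances. A bare-bones version of both pieces — an empirical variogram from scattered ozone samples, then a least-squares spherical fit — is sketched below, with synthetic coordinates standing in for the Badajoz measurements.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.spatial.distance import pdist

def spherical(h, nugget, psill, a):
    """Spherical variogram: nugget + psill*(1.5h/a - 0.5(h/a)^3) for h <= a, sill beyond."""
    h = np.asarray(h, float)
    inside = nugget + psill * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h <= a, inside, nugget + psill)

rng = np.random.default_rng(6)
xy = rng.uniform(0, 2000, (120, 2))                             # placeholder locations (m)
z = 60 + 5 * np.sin(xy[:, 0] / 400) + rng.normal(0, 1.5, 120)   # placeholder ozone (ug/m^3)

d = pdist(xy)                          # pairwise distances
gam = pdist(z[:, None]) ** 2 / 2       # pairwise semivariances
bins = np.linspace(0, 1000, 11)
centers = 0.5 * (bins[:-1] + bins[1:])
emp = np.array([gam[(d >= lo) & (d < hi)].mean() for lo, hi in zip(bins[:-1], bins[1:])])

params, _ = curve_fit(spherical, centers, emp, p0=[0.5, 5.0, 500.0])
print(dict(zip(["nugget", "partial sill", "range"], params.round(2))))
```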

72 Precipitation Intensity: Duration Based Threshold Analysis for Initiation of Landslides in Upper Alaknanda Valley

Authors: Soumiya Bhattacharjee, P. K. Champati Ray, Shovan L. Chattoraj, Mrinmoy Dhara

Abstract:

The entire Himalayan range is globally renowned for rainfall-induced landslides. The prime focus of this study is to determine a rainfall-based threshold for the initiation of landslides that can be used as a component of an early warning system for alerting stakeholders. The research deals with the temporal dimension of slope failures due to extreme rainfall events along National Highway-58 from Karanprayag to Badrinath in the Garhwal Himalaya, India. Post-processed 3-hourly rainfall intensity data, and the corresponding durations derived from daily rainfall data, available from the Tropical Rainfall Measuring Mission (TRMM), were used as the prime source of rainfall data. Landslide event records from the Border Road Organization (BRO) and ancillary landslide inventory data for 2013 and 2014 were used to determine an intensity-duration (ID) rainfall threshold. The derived governing threshold equation, I = 4.738 D^(-0.025), where I is the rainfall intensity and D the duration, was adopted for landslide prediction in the study region and correctly predicted 70% of the landslides during August and September 2014. From the obtained results and validation, it can be inferred that this equation can serve as part of an early warning system for landslide initiation in the study area. The results could improve significantly with ground-based rainfall estimates and a better database of landslide records. The study thus demonstrates a very low-cost method to obtain first-hand information on possible impending landslides in a region, thereby providing alerts and better preparedness for landslide disaster mitigation.

Keywords: Landslide, intensity-duration, rainfall threshold, Tropical Rainfall Measuring Mission, slope, inventory, early warning system.
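
The governing threshold, I = 4.738 D^(-0.025), turns directly into a warning rule: an event whose mean intensity exceeds the threshold for its duration flags possible landslide initiation. A one-function sketch:

```python
def landslide_threshold_exceeded(intensity_mm_h, duration_h):
    """Check a rainfall event against the derived ID threshold I = 4.738 * D**(-0.025)."""
    threshold = 4.738 * duration_h ** (-0.025)
    return intensity_mm_h > threshold

# e.g., 6 mm/h sustained for 12 h exceeds the threshold (~4.45 mm/h at D = 12 h).
print(landslide_threshold_exceeded(6.0, 12.0))
```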

71 High Securing Cover-File of Hidden Data Using Statistical Technique and AES Encryption Algorithm

Authors: A. A. Zaidan, Anas Majeed, B. B. Zaidan

Abstract:

Nowadays, the rapid development of multimedia and the internet allows for wide distribution of digital media data. It has become much easier to edit, modify, and duplicate digital information; digital documents are also easy to copy and distribute, and therefore face many threats. This is a major security and privacy issue: with the large flood of information and the development of digital formats, it has become necessary to find appropriate protection, given the significance, accuracy, and sensitivity of the information. Protection systems can be classified as information hiding, information encryption, or a combination of hiding and encryption that increases information security. The strength of information hiding lies in the non-existence of standard algorithms for hiding secret messages and in the randomness of hiding methods, such as combining several media (covers) with different methods to pass a secret message; in addition, there are no formal methods for discovering hidden data, which makes an attacker's task difficult. In this paper, a new information hiding system is presented. The proposed system aims to hide an information (data) file in an executable (EXE) file and to detect the hidden file; an implementation of a steganography system that embeds information in executable files is described. The system seeks to handle the size of the cover file and to make the result undetectable by anti-virus software. It includes two main functions: hiding the information in a Portable Executable (EXE) file, through four processes (specifying the cover file, specifying the information file, encrypting the information, and hiding the information), and extracting the hidden information, through three processes (specifying the stego file, extracting the information, and decrypting it). The system achieves its main goals: the size of the cover file is independent of the size of the information file, and the resulting file does not conflict with anti-virus software.

Keywords: Cryptography, Steganography, Portable Executable File.
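
The hide function encrypts the payload before embedding it in a PE file. The embedding scheme itself is not specified in the abstract, so the sketch below substitutes the simplest possible carrier trick — AES-GCM encryption followed by appending the ciphertext after a marker — purely to illustrate the encrypt-then-hide / find-then-decrypt round trip. It is not the authors' method, and the marker is invented.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

MARKER = b"__STEGO__"  # illustrative delimiter, not from the paper

def hide(cover_path, stego_path, secret: bytes, key: bytes) -> None:
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, secret, None)  # encrypt before hiding
    with open(cover_path, "rb") as f:
        cover = f.read()
    with open(stego_path, "wb") as f:
        f.write(cover + MARKER + nonce + ciphertext)       # naive append-style embedding

def extract(stego_path, key: bytes) -> bytes:
    blob = open(stego_path, "rb").read()
    payload = blob.rsplit(MARKER, 1)[1]
    nonce, ciphertext = payload[:12], payload[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)
# hide("app.exe", "app_stego.exe", b"secret report", key)
# print(extract("app_stego.exe", key))
```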

70 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), studied as a function of the loss functions and optimizers used. The CNNs used in this paper are AlexNet, VGGNet, and ResNet. By using various loss functions, including cross-entropy, center loss, cosine proximity, and hinge loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance differences. Choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. We validate our approach on a subset of the LivDet 2017 database to compare generalization power; the same subset is used across all training and testing for each model, so that performance on unseen data can be compared across all the models. The best CNN (AlexNet), with the appropriate loss function and optimizer, yields a performance gain of more than 3% over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports parameter counts and mean average error rates for the high-accuracy models, to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has lower complexity than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, were applied to the final model.

Keywords: Anti-spoofing, CNN, fingerprint recognition, loss function, optimizer.
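
The experimental grid — several loss functions crossed with several optimizers on a fixed CNN — is mechanical to set up in Keras. The sketch below builds a toy CNN (not AlexNet, VGGNet, or ResNet) and sweeps a subset of the combinations named in the abstract; the data loading and training call are omitted.

```python
from tensorflow.keras import layers, losses, models, optimizers

def small_cnn():
    return models.Sequential([
        layers.Input(shape=(96, 96, 1)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),  # live vs. spoof
    ])

loss_fns = {"cross-entropy": losses.BinaryCrossentropy(),
            "hinge": losses.Hinge()}
optims = {"Adam": optimizers.Adam, "SGD": optimizers.SGD,
          "RMSProp": optimizers.RMSprop, "Nadam": optimizers.Nadam}

for lname, loss in loss_fns.items():
    for oname, opt in optims.items():
        model = small_cnn()
        model.compile(optimizer=opt(), loss=loss, metrics=["accuracy"])
        # model.fit(x_train, y_train, validation_data=(x_val, y_val))  # LivDet subset here
        print(f"compiled: loss={lname}, optimizer={oname}")
```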

69 Turbine Follower Control Strategy Design Based on Developed FFPP Model

Authors: Ali Ghaffari, Mansour Nikkhah Bahrami, Hesam Parsa

Abstract:

In this paper a comprehensive model of a fossil fuelled power plant (FFPP) is developed in order to evaluate the performance of a newly designed turbine follower controller. Considering the drawbacks of previous works, an overall model is developed to minimize the error between each subsystem model output and the experimental data obtained at the actual power plant. The developed model is organized in two main subsystems: boiler and turbine. Different modeling approaches are used according to the characteristics of each FFPP subsystem. For the economizer, evaporator, superheater, and reheater, first-order models are determined based on the principles of mass and energy conservation, and simulations verify the accuracy of the developed models. Due to the nonlinear characteristics of the attemperator, a new model, based on a genetic-fuzzy system utilizing the Pittsburgh approach, is developed, showing promising performance vis-à-vis models derived with other methods such as ANFIS. The optimization constraints are handled with penalty functions. The effect of increasing the number of rules and membership functions on the performance of the proposed model is also studied and evaluated. The turbine model is developed based on the equation of adiabatic expansion. The parameters of all evaluated models are tuned by means of evolutionary algorithms. Based on the developed model, a fuzzy PI controller is designed and successfully implemented in the turbine follower control strategy of the plant. In this control strategy, instead of keeping the controller parameters constant, they are adjusted on-line according to the error and the error rate. It is shown that the response of the system improves significantly and that fuel consumption decreases considerably.

Keywords: Attemperator, Evolutionary algorithms, Fossil fuelled power plant (FFPP), Fuzzy set theory, Gain scheduling
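
The key idea in the control strategy is that the PI gains are not fixed but are adjusted on-line from the error and the error rate. The sketch below shows that mechanism with a crude piecewise schedule standing in for the paper's fuzzy inference; the gain schedule and the toy first-order plant are invented for illustration.

```python
def adaptive_pi_step(error, d_error, integral, dt, base_kp=1.0, base_ki=0.1):
    """One step of a PI controller whose gains are scheduled on |e| and de/dt.
    The schedule below is a crude stand-in for the paper's fuzzy rule base."""
    kp = base_kp * (1.5 if abs(error) > 1.0 else 1.0)    # push harder on large errors
    ki = base_ki * (0.5 if abs(d_error) > 0.5 else 1.0)  # soften integral when moving fast
    integral += error * dt
    return kp * error + ki * integral, integral

# Toy closed loop on a first-order lag plant (placeholder for the turbine model).
setpoint, y, integral, prev_e, dt = 1.0, 0.0, 0.0, 0.0, 0.1
for _ in range(100):
    e = setpoint - y
    u, integral = adaptive_pi_step(e, (e - prev_e) / dt, integral, dt)
    y += dt * (-y + u)   # plant dynamics: y' = -y + u
    prev_e = e
print(round(y, 3))       # approaches the setpoint
```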

68 Load Forecasting in Microgrid Systems with R and Cortana Intelligence Suite

Authors: F. Lazzeri, I. Reiter

Abstract:

Energy production optimization has traditionally been very important for utilities seeking to improve resource use. However, load forecasting is a challenging task, as there is a large number of relevant variables to consider, and several strategies have been used to deal with this complex problem. This is especially true in microgrids, where many elements have to adjust their performance depending on future generation and consumption conditions. The goal of this paper is to present a solution for short-term load forecasting in microgrids, based on three machine learning experiments developed in R and web services built and deployed with different components of Cortana Intelligence Suite: Azure Machine Learning, a fully managed cloud service that makes it easy to build, deploy, and share predictive analytics solutions; SQL Database, a Microsoft database service for app developers; and Power BI, a suite of business analytics tools for analyzing data and sharing insights. Our results show that Boosted Decision Tree and Fast Forest Quantile regression methods can be very useful for predicting hourly short-term consumption in microgrids; moreover, we found that for these types of forecasting models, weather data (temperature, wind, humidity, and dew point) can play a crucial role in improving forecast accuracy. The data cleaning and feature engineering methods performed in R, the different machine learning algorithms (Boosted Decision Tree, Fast Forest Quantile, and ARIMA), and the results and performance metrics are presented and discussed.

Keywords: Time-series, features engineering methods for forecasting, energy demand forecasting, Azure machine learning.
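
The study's best performers were boosted-tree and quantile-forest regressions with weather covariates. An open-source analogue of that setup — gradient boosting with a quantile loss over temperature, wind, humidity, and dew point — can be sketched with scikit-learn; this stands in for the Azure ML components, it is not them, and all data are synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 3000
hour = rng.integers(0, 24, n)
temp, wind = rng.uniform(-5, 35, n), rng.uniform(0, 15, n)
humidity, dew = rng.uniform(20, 100, n), rng.uniform(-10, 25, n)
load = 50 + 2.0 * np.abs(temp - 18) + 5 * np.sin(hour / 24 * 2 * np.pi) + rng.normal(0, 3, n)

X = np.column_stack([hour, temp, wind, humidity, dew])
X_tr, X_te, y_tr, y_te = train_test_split(X, load, random_state=0)

# Median and 90th-percentile hourly forecasts, as a quantile-regression stand-in.
for q in (0.5, 0.9):
    gbr = GradientBoostingRegressor(loss="quantile", alpha=q).fit(X_tr, y_tr)
    print(f"q={q}: first 3 forecasts {gbr.predict(X_te[:3]).round(1)}")
```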

67 Simulation and Analysis of Passive Parameters of Building in eQuest: A Case Study in Istanbul, Turkey

Authors: Mahdiyeh Zafaranchi

Abstract:

With the rapid development of urbanization and the improvement of living standards around the world, the energy consumption and carbon emissions of the building sector are expected to increase in the near future; consequently, energy-saving issues have become increasingly important to engineers. The building sector is a major contributor to energy consumption and carbon emissions, and the concept of the efficient building appeared as a response to the need to reduce energy demand in this sector, with the main purpose of shifting from standard buildings to low-energy buildings. Although energy saving should occur at all stages of a building's life cycle (material production, construction, demolition), the main idea of an energy-efficient building is to save energy during the building's service life through passive and active systems, without sacrificing comfort and quality. The main aim of this study is to investigate passive strategies (those that consume no energy or rely on renewable energy) for achieving energy-efficient buildings. Energy retrofit measures were explored with the eQUEST software using a case study as a base model. The study investigates the influence of major factors such as the thermal transmittance (U-value) of materials, windows, shading devices, thermal insulation, the exposed-envelope ratio, the window-to-wall ratio, and the lighting system on the energy consumption of the building. The base model is located in Istanbul, Turkey, and the impact of eight passive parameters on energy consumption was quantified. After analyzing the base model in eQUEST, a final scenario with good energy performance was proposed. The results showed that decreases in the U-values of materials, the exposed-envelope ratio, and the window properties had a significant effect on energy consumption. Finally, the suggested model achieved annual savings of about 10.5% in electricity consumption and about 8.37% in gas consumption.

Keywords: Efficient building, electric and gas consumption, eQuest, passive parameters.

66 Identification of Flexographic-printed Newspapers with NIR Spectral Imaging

Authors: Raimund Leitner, Susanne Rosskopf

Abstract:

Near-infrared (NIR) spectroscopy is a widely used method for material identification in laboratory and industrial applications. While standard spectrometers only allow measurements at one sampling point at a time, NIR spectral imaging (SI) techniques can measure, in real time, both the size and shape of an object and identify the material the object is made of. Online classification and sorting of recovered paper with NIR SI is used successfully in the paper recycling industry throughout Europe. Recently, the globalisation of recycling material streams has caused water-based flexographic-printed newspapers, mainly from the UK and Italy, to appear in central Europe as well. These flexo-printed newspapers are not sufficiently de-inkable with the standard de-inking process originally developed for offset-printed paper. De-inking removes the ink from recovered paper and is the fundamental processing step in producing high-quality paper from recovered paper. Flexo-printed newspapers are therefore a growing problem for the recycling industry, as they reduce the quality of the produced paper if their share of the recovered paper exceeds a certain limit. This paper presents the results of a research project for the development of an automated entry inspection system for recovered paper, jointly conducted by CTR AG (Austria) and PTS Papiertechnische Stiftung (Germany). Within the project, an NIR SI prototype for the identification of flexo-printed newspaper was developed. The prototype can identify and sort out flexo-printed newspapers in real time and achieves a detection accuracy of over 95%. NIR SI, the technology the prototype is based on, enables the development of inspection systems for incoming goods in paper production facilities as well as industrial sorting systems for recovered paper in the recycling industry in the near future.

Keywords: Spectral imaging, imaging spectroscopy, NIR, water-based flexographic, flexo-printed, recovered paper, real-time classification.

65 An Evaluation on the Effectiveness of a 3D Printed Composite Compression Mold

Authors: Peng Hao Wang, Garam Kim, Ronald Sterkenburg

Abstract:

The applications of composite materials within the aviation industry have been increasing at a rapid pace. However, these growing applications have also led to growing demand for tooling to support composite manufacturing processes. Tooling and tooling maintenance represent a large portion of the composite manufacturing process and its cost; therefore, the industry's ability to adopt techniques for fabricating high-quality tools quickly and inexpensively will play a crucial role in the growing use of composite materials in aviation. One popular tool fabrication technique currently being developed involves additive manufacturing, such as 3D printing. Although additive manufacturing and 3D printing are not entirely new concepts, the technique has been gaining popularity due to its ability to fabricate components quickly, with low material waste and low cost. In this study, a team of Purdue University School of Aviation and Transportation Technology (SATT) faculty and students investigated the effectiveness of a 3D printed composite compression mold. The mold was fabricated by 3D scanning a steel valve cover of an aircraft reciprocating engine and was used to fabricate carbon fiber versions of the valve cover. The 3D printed composite compression mold was evaluated for performance, durability, and dimensional stability, while the fabricated carbon fiber valve covers were evaluated for accuracy and quality. The results and data gathered from this study will determine the effectiveness of the 3D printed composite compression mold in a mass-production environment and provide valuable information for future understanding, improvements, and design considerations for 3D printed composite molds.

Keywords: Additive manufacturing, carbon fiber, composite tooling, molds.

64 Minimizing the Drilling-Induced Damage in Fiber Reinforced Polymeric Composites

Authors: S. D. El Wakil, M. Pladsen

Abstract:

Fiber reinforced polymeric (FRP) composites are finding widespread industrial applications because of their exceptionally high specific strength and specific modulus of elasticity. Nevertheless, ready-to-use components or products made of FRP composites are seldom obtained directly; secondary processing by machining, particularly drilling, is almost always required to make holes for fastening components together into assemblies. That creates problems, since FRP composites are neither homogeneous nor isotropic. The problems encountered include damage in the region around the drilled hole and drilling-induced delamination of the plies, which occurs at both the entrance and exit planes of the workpiece. Evidently, the functionality of the workpiece would be detrimentally affected. The current work was carried out with the aim of eliminating, or at least minimizing, the workpiece damage associated with drilling FRP composites. Each test specimen was a woven graphite-fiber-reinforced/epoxy composite with a thickness of 12.5 mm (0.5 inch). A large number of test specimens were subjected to drilling operations with different combinations of feed rates and cutting speeds. The drilling-induced damage was taken as the absolute value of the difference between the drilled hole diameter and the nominal diameter, expressed as a percentage of the nominal diameter. This damage measure was determined for each combination of feed rate and cutting speed, and a matrix of those values was established, where columns indicate varying feed rates and rows indicate varying cutting speeds. Next, the analysis of variance (ANOVA) approach was employed, using Minitab software, to obtain the combination that minimizes the drilling-induced damage. Experimental results show that low feed rates coupled with low cutting speeds yielded the best results.

Keywords: Drilling of Composites, dimensional accuracy of holes drilled in composites, delamination and charring, graphite-epoxy composites.
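
The damage matrix over feed rates and cutting speeds feeds a two-factor ANOVA. The abstract cites Minitab; an equivalent sketch with statsmodels, on synthetic damage values with assumed factor levels, is:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(8)
feeds = [0.05, 0.10, 0.15]   # placeholder feed rates (mm/rev)
speeds = [20, 40, 60]        # placeholder cutting speeds (m/min)
rows = [(f, s, 1.0 + 8 * f + 0.01 * s + rng.normal(0, 0.1))   # synthetic damage (%)
        for f in feeds for s in speeds for _ in range(3)]      # 3 replicates per cell
df = pd.DataFrame(rows, columns=["feed", "speed", "damage"])

model = ols("damage ~ C(feed) * C(speed)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects and their interaction
```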

63 Classifying Turbomachinery Blade Mode Shapes Using Artificial Neural Networks

Authors: Ismail Abubakar, Hamid Mehrabi, Reg Morton

Abstract:

Currently, extensive signal analysis is performed to evaluate the structural health of turbomachinery blades. This approach is constrained by time and by the availability of qualified personnel, so new approaches to blade dynamics identification that provide faster and more accurate results are sought. Generally, modal analysis is employed to acquire the dynamic properties of a vibrating turbomachinery blade and is widely adopted in blade condition monitoring. The analysis provides useful information on the different modes of vibration and their natural frequencies by exploring the different shapes a blade can take up during vibration, since every mode shape has a corresponding natural frequency. Experimental modal testing and finite element analysis are the traditional methods used to evaluate mode shapes, but they have limited applicability to real-life scenarios and thus to a robust condition monitoring scheme. Real-time mode shape evaluation requires rapid evaluation at low computational cost, for which the traditional techniques are unsuitable. In this study, an artificial neural network is developed to evaluate the mode shapes of a lab-scale rotating blade assembly, using results from finite element modal analysis as training data. The network performance evaluation shows that the artificial neural network (ANN) is capable of mapping the correlation between natural frequencies and mode shapes, without the need for extensive signal analysis. The approach offers the advantages that the network can classify mode shapes in real time, is simple to implement, and predicts accurately. The work paves the way for further development of a robust condition monitoring system that incorporates real-time mode shape evaluation.

Keywords: Modal analysis, artificial neural network, mode shape, natural frequencies, pattern recognition.
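
The network maps natural frequencies (from FE modal analysis) to a mode-shape class. The sketch below trains a small MLP on synthetic frequency vectors standing in for the FE training data; the class count, nominal frequencies, and network size are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)
n_modes = 4                              # hypothetical mode-shape classes
base = np.array([[120, 340, 610, 905],   # nominal natural frequencies (Hz) per class
                 [125, 355, 600, 930],
                 [118, 330, 640, 910],
                 [130, 345, 615, 940]], float)
X = np.vstack([base[k] + rng.normal(0, 3, (200, 4)) for k in range(n_modes)])
y = np.repeat(np.arange(n_modes), 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(X_tr, y_tr)
print(f"mode-shape classification accuracy: {clf.score(X_te, y_te):.2f}")
```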

62 A Continuous Real-Time Analytic for Predicting Instability in Acute Care Rapid Response Team Activations

Authors: Ashwin Belle, Bryce Benson, Mark Salamango, Fadi Islim, Rodney Daniels, Kevin Ward

Abstract:

A reliable, real-time, non-invasive system that can identify patients at risk of hemodynamic instability is needed to aid clinicians in their efforts to anticipate patient deterioration and initiate early interventions. The purpose of this pilot study was to explore the clinical capability of a real-time analytic derived from a single lead of an electrocardiograph to correctly distinguish between rapid response team (RRT) activations due to hemodynamic (H-RRT) and non-hemodynamic (NH-RRT) causes, and to predict H-RRT cases with actionable lead times. The study consisted of a single-center, retrospective cohort of 21 patients with RRT activations from step-down and telemetry units. Through electronic health record review, and blinded to the analytic's output, clinicians categorized each patient into H-RRT and NH-RRT cases, and the analytic output was compared with this categorization. The prediction lead time prior to the RRT call was calculated. The analytic correctly distinguished between H-RRT and NH-RRT cases with 100% accuracy, demonstrating 100% positive and negative predictive values and 100% sensitivity and specificity. In H-RRT cases, the analytic detected hemodynamic deterioration with a median lead time of 9.5 hours prior to the RRT call (range 14 minutes to 52 hours). The study demonstrates that an electrocardiogram (ECG) based analytic has the potential to provide clinical decision and monitoring support, helping caregivers identify at-risk patients within a clinically relevant timeframe and allowing increased vigilance and early interventional support to reduce the chance of continued patient deterioration.

Keywords: Critical care, early warning systems, emergency medicine, heart rate variability, hemodynamic instability, rapid response team.
