Search results for: transient data
24161 Big Data Analysis with Rhipe
Authors: Byung Ho Jung, Ji Eun Shin, Dong Hoon Lim
Abstract:
Rhipe, which integrates R with the Hadoop environment, makes it possible to process and analyze massive amounts of data in a distributed processing environment. In this paper, we implemented multiple regression analysis using Rhipe on actual data of various sizes. Experimental results comparing the performance of Rhipe with the stats and biglm packages available on bigmemory showed that Rhipe was faster than the other packages owing to parallel processing, with the number of map tasks increasing as the data size grows. We also compared the computing speeds of the pseudo-distributed and fully-distributed modes of configuring a Hadoop cluster. The results showed that fully-distributed mode was faster than pseudo-distributed mode, and its computing speed increased with the number of data nodes.
Keywords: big data, Hadoop, parallel regression analysis, R, Rhipe
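
The parallel speed-up described above comes from the MapReduce decomposition of least squares: each map task emits only small sufficient statistics, which a reduce step combines. A minimal sketch of that decomposition, in plain Python with NumPy rather than the authors' Rhipe/R code; the partitioning and data are illustrative:

```python
# Illustrative sketch (not the authors' Rhipe code): multiple regression
# decomposed into map and reduce steps, as a MapReduce framework such as
# Rhipe/Hadoop would execute it. Each "map task" handles one data partition.
import numpy as np

def map_task(X_part, y_part):
    # Each partition emits its sufficient statistics for least squares.
    return X_part.T @ X_part, X_part.T @ y_part

def reduce_task(partials):
    # Sum the partial X'X and X'y matrices, then solve the normal equations.
    XtX = sum(p[0] for p in partials)
    Xty = sum(p[1] for p in partials)
    return np.linalg.solve(XtX, Xty)

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(10_000), rng.normal(size=(10_000, 3))])
y = X @ np.array([2.0, 1.0, -0.5, 0.3]) + rng.normal(size=10_000)

# More map tasks = more partitions, mirroring Rhipe's scaling with data size.
partials = [map_task(Xp, yp)
            for Xp, yp in zip(np.array_split(X, 8), np.array_split(y, 8))]
print(reduce_task(partials))  # ~ [2.0, 1.0, -0.5, 0.3]
```
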
Procedia PDF Downloads 483

24160 Security in Resource Constraints Network Light Weight Encryption for Z-MAC
Authors: Mona Almansoori, Ahmed Mustafa, Ahmad Elshamy
Abstract:
A wireless sensor network is formed by a combination of nodes that systematically transmit data to their base stations. This transmitted data can easily be compromised given the nodes' limited processing power and the need for data consistency, and securing data transfer in real time remains an open discussion. We present a mechanism to securely transmit data over a chain of sensor nodes without compromising network throughput, utilizing the battery resources available in each sensor node. Our methodology takes advantage of the efficiency of the Z-MAC protocol and provides a unique key through a sharing mechanism based on neighbor nodes' MAC addresses. We present a lightweight data-integrity layer embedded in the Z-MAC protocol and show that our protocol performs better than Z-MAC when different attack scenarios are introduced.
Keywords: hybrid MAC protocol, data integrity, lightweight encryption, neighbor-based key sharing, sensor node data processing, Z-MAC
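
The abstract does not spell out the key-sharing construction, so the following is a hedged sketch of the general idea of deriving a pairwise key from neighbor MAC addresses and using it for a lightweight integrity layer; the HMAC-based derivation and the pre-deployed network secret are assumptions for illustration:

```python
# Hedged sketch of deriving a shared pairwise key from neighbor MAC
# addresses; the paper's exact construction is not specified in the
# abstract, so the HMAC-based derivation below is an assumption.
import hashlib, hmac

NETWORK_SECRET = b"pre-deployed network master secret"  # hypothetical

def pairwise_key(mac_a: str, mac_b: str) -> bytes:
    # Sort so both neighbors derive the same key regardless of direction.
    material = "|".join(sorted([mac_a, mac_b])).encode()
    return hmac.new(NETWORK_SECRET, material, hashlib.sha256).digest()

def protect(payload: bytes, key: bytes) -> bytes:
    # Lightweight integrity layer: append a truncated MAC tag to the frame.
    tag = hmac.new(key, payload, hashlib.sha256).digest()[:8]
    return payload + tag

k = pairwise_key("00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f")
frame = protect(b"sensor reading 21.7C", k)
print(frame.hex())
```
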
Procedia PDF Downloads 123

24159 Survival Data with Incomplete Missing Categorical Covariates
Authors: Madaki Umar Yusuf, Mohd Rizam B. Abubakar
Abstract:
Censored survival data with incomplete covariate information are a common occurrence in many studies in which the outcome is survival time. When the missing covariates are categorical, a useful technique for obtaining parameter estimates is the EM algorithm by the method of weights. The survival outcome for the class of generalized linear models is applied, and the method requires estimating the parameters of the covariate distribution. In this paper, we apply the method to clinical-trial data with five covariates, four of which have missing values, in a fully censored data setting.
Keywords: EM algorithm, incomplete categorical covariates, ignorable missing data, missing at random (MAR), Weibull distribution
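
A toy sketch of the EM algorithm by the method of weights may help: each record with a missing categorical covariate is replicated once per candidate level, the E-step assigns each replicate a weight equal to the posterior probability of that level, and the M-step maximizes the weighted likelihood. The exponential survival model and single binary covariate below are simplifications for brevity (the paper uses a Weibull model with five covariates):

```python
# Toy sketch of the EM "method of weights" for a missing binary covariate in
# a survival model; exponential survival is assumed here for brevity, and
# all data and parameter values are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 400
x = rng.integers(0, 2, n).astype(float)             # true binary covariate
t = rng.exponential(1.0 / np.exp(-0.5 + 1.0 * x))   # event times
c = rng.exponential(2.0, n)                         # censoring times
time, event = np.minimum(t, c), (t <= c).astype(float)
x_obs = np.where(rng.random(n) < 0.3, np.nan, x)    # 30% missing at random

miss = np.isnan(x_obs)
m = int(miss.sum())
# Method of weights: replicate each incomplete record once per level of x.
x_rep = np.concatenate([x_obs[~miss], np.zeros(m), np.ones(m)])
t_rep = np.concatenate([time[~miss], time[miss], time[miss]])
d_rep = np.concatenate([event[~miss], event[miss], event[miss]])

def neg_loglik(beta, w):
    lam = np.exp(beta[0] + beta[1] * x_rep)         # hazard
    return -np.sum(w * (d_rep * np.log(lam) - lam * t_rep))

beta, pi = np.zeros(2), 0.5
for _ in range(30):
    # E-step: w = P(x = level | t, event; beta, pi) for replicated records.
    lam = np.exp(beta[0] + beta[1] * x_rep)
    lik = lam ** d_rep * np.exp(-lam * t_rep)
    prior = np.where(x_rep == 1.0, pi, 1.0 - pi)
    w = np.ones(len(x_rep))
    block = (lik * prior)[-2 * m:]
    w[-2 * m:] = block / np.tile(block[:m] + block[m:], 2)
    # M-step: weighted fits for the regression and covariate distribution.
    beta = minimize(neg_loglik, beta, args=(w,)).x
    pi = np.sum(w * x_rep) / n

print(beta, pi)   # estimates of (-0.5, 1.0) and of P(x = 1)
```
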
Procedia PDF Downloads 383

24158 A Study of Blockchain Oracles
Authors: Abdeljalil Beniiche
Abstract:
The limitation of smart contracts is that they cannot access external data that might be required to control the execution of business logic. Oracles can be used to provide such data to smart contracts. An oracle is an interface that delivers data from sources outside the blockchain to a smart contract for consumption, and it can deliver different types of data depending on the industry and requirements. In this paper, we study and describe the widely used blockchain oracles. We then elaborate on their potential role, technical architecture, and design patterns. Finally, we discuss the human oracle and its key role in solving the truth problem by reaching a consensus about a given inquiry or task.
Keywords: blockchain, oracles, oracles design, human oracles
Procedia PDF Downloads 97

24157 Estimation of Synchronous Machine Synchronizing and Damping Torque Coefficients
Authors: Khaled M. EL-Naggar
Abstract:
The synchronizing and damping torque coefficients of a synchronous machine give a quite clear picture of machine behavior during transients, and they are used as a measure of power system transient stability. In this paper, a crow search optimization algorithm is presented and implemented to study power system stability during transients. The algorithm makes use of the machine responses to perform the stability study in the time domain. The problem is formulated as a dynamic estimation problem, with an objective function that minimizes the squared error in the estimated coefficients. The method is tested on a practical system with different study cases; results are reported and thoroughly discussed. The study illustrates that the proposed method can estimate the stability coefficients for critical stable cases where other methods may fail. The tests prove that the proposed tool is an accurate and reliable means of estimating the machine coefficients for the assessment of power system stability.
Keywords: optimization, estimation, synchronous, machine, crow search
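
For reference, the estimation problem can be sketched as fitting dTe = Ks*d_delta + Kd*d_omega to time-domain machine responses by minimizing the squared error. The sketch below applies a basic crow search algorithm to a synthetic response; the machine model, coefficient values, and CSA settings are illustrative assumptions, not the paper's test system:

```python
# Hedged sketch: estimate Ks (synchronizing) and Kd (damping) coefficients
# from simulated time-domain responses using a basic crow search algorithm.
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0.0, 5.0, 500)
d_delta = np.exp(-0.4 * t) * np.sin(6.0 * t)      # rotor-angle deviation
d_omega = np.gradient(d_delta, t)                 # speed deviation
Ks_true, Kd_true = 1.2, 0.15
d_Te = Ks_true * d_delta + Kd_true * d_omega + rng.normal(0, 0.01, t.size)

def cost(k):
    # Squared error between measured and reconstructed torque deviation.
    return np.sum((d_Te - (k[0] * d_delta + k[1] * d_omega)) ** 2)

# Basic crow search: flight length fl, awareness probability AP.
n_crows, n_iter, fl, AP = 20, 200, 2.0, 0.1
lo, hi = np.array([0.0, 0.0]), np.array([5.0, 1.0])
pos = rng.uniform(lo, hi, (n_crows, 2))
mem = pos.copy()                                  # memorized best positions
fit = np.array([cost(p) for p in mem])
for _ in range(n_iter):
    for i in range(n_crows):
        j = rng.integers(n_crows)
        if rng.random() >= AP:    # follow crow j's memorized food source
            new = pos[i] + rng.random() * fl * (mem[j] - pos[i])
        else:                     # crow j is "aware": random relocation
            new = rng.uniform(lo, hi)
        pos[i] = np.clip(new, lo, hi)
        f = cost(pos[i])
        if f < fit[i]:
            mem[i], fit[i] = pos[i].copy(), f

print(mem[np.argmin(fit)])        # ~ [1.2, 0.15]
```
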
Procedia PDF Downloads 115

24156 Negative Pressure Waves in Hydraulic Systems
Authors: Fuad H. Veliev
Abstract:
The negative pressure phenomenon appears in many thermodynamic, geophysical, and biophysical processes in nature and in technological systems. In more than 100 years of laboratory research, beginning with F. M. Donny's tests, large magnitudes of negative pressure have been achieved. Yet the phenomenon has found no practical application, remaining a laboratory curiosity because of the special demands on the purity and homogeneity of the liquids required for its appearance. The possibility of creating a direct wave of negative pressure in real heterogeneous liquid systems was confirmed experimentally under certain kinetic and hydraulic conditions. Negative pressure can be regarded as a source of both useful and destructive energy. A new approach to generating negative pressure waves in impure, unclean fluids has allowed the creation of fundamentally new energy-saving technologies and installations that increase the effectiveness and efficiency of different production processes. It was also shown that negative pressure is one of the main factors causing severe problems in some technological and natural processes. The results emphasize the need to account for the role of negative pressure as an energy factor when evaluating many transient thermohydrodynamic processes in nature and in production systems.
Keywords: liquid systems, negative pressure, temperature, wave, metastable state
Procedia PDF Downloads 395

24155 Bone Marrow Edema Syndrome in the Foot and Ankle
Authors: S. Alireza Mirghasemi, Elly Trepman, Mohammad Saleh Sadeghi, Narges Rahimi Gabaran, Shervin Rashidinia
Abstract:
Bone marrow edema syndrome (BMES) is an uncommon, self-limited syndrome characterized by atraumatic extremity pain of unknown etiology. Symptom onset may include sudden or gradual swelling and pain at rest or during activity, often at night. The syndrome mostly affects middle-aged men and younger women who have pain in the lower extremities. The most common sites involved, in decreasing order of frequency, are the bones about the hip, knee, ankle, and foot. The diagnosis of BMES is made with magnetic resonance imaging to exclude other causes of bone marrow edema. The correct diagnosis is often delayed because of the low prevalence of the syndrome and its nonspecific signs in the foot and ankle; this delay may intensify bone pain and impair patient function and quality of life. The goal of BMES treatment is to relieve pain and shorten disease duration. Treatment options are limited and may include symptomatic treatment, pharmacologic treatment, and surgery.
Keywords: transient osteoporosis, bone marrow edema syndrome, iloprost, bisphosphonates
Procedia PDF Downloads 338

24154 Finding Bicluster on Gene Expression Data of Lymphoma Based on Singular Value Decomposition and Hierarchical Clustering
Authors: Alhadi Bustaman, Soeganda Formalidin, Titin Siswantining
Abstract:
DNA microarray technology is used to analyze thousands of gene expression measurements simultaneously, a very important task for drug development and testing, function annotation, and cancer diagnosis. Various clustering methods have been used for analyzing gene expression data. However, when analyzing very large and heterogeneous collections of gene expression data, conventional clustering methods often cannot produce a satisfactory solution. Biclustering algorithms have been used as an alternative approach to identifying structures in gene expression data. In this paper, we introduce a transformation technique based on singular value decomposition to obtain a normalized matrix of gene expression data, followed by the Mixed-Clustering algorithm and the Lift algorithm, inspired by the node-deletion and node-addition phases proposed by Cheng and Church, based on agglomerative hierarchical clustering (AHC). An experimental study on standard datasets demonstrated the effectiveness of the algorithm on gene expression data.
Keywords: agglomerative hierarchical clustering (AHC), biclustering, gene expression data, lymphoma, singular value decomposition (SVD)
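
A compact sketch of the first stage, SVD-based normalization followed by agglomerative hierarchical clustering, is shown below; the Mixed-Clustering and Lift refinement steps are omitted, and the expression matrix is synthetic rather than the lymphoma set:

```python
# Illustrative sketch: normalize an expression matrix via a truncated SVD,
# then apply agglomerative hierarchical clustering to the reduced genes.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(7)
# 60 genes x 20 conditions with two planted co-expression blocks.
A = rng.normal(0, 1, (60, 20))
A[:30, :10] += 3.0
A[30:, 10:] += 3.0

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                   # retained singular components
genes_reduced = U[:, :k] * s[:k]        # normalized gene representation

Z = linkage(genes_reduced, method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels[:30], labels[30:])         # the two planted blocks separate
```
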
Procedia PDF Downloads 258

24153 An Efficient Traceability Mechanism in the Audited Cloud Data Storage
Authors: Ramya P, Lino Abraham Varghese, S. Bose
Abstract:
With cloud storage services, data can be stored in the cloud and shared across multiple users. Unexpected hardware/software failures and human errors can easily cause data stored in the cloud to be lost or corrupted, affecting its integrity. Mechanisms have been designed to allow both data owners and public verifiers to efficiently audit cloud data integrity without retrieving the entire data set from the cloud server. However, public auditing of the integrity of shared data with existing mechanisms unavoidably reveals confidential information, such as the identity of the signer, to public verifiers. Here, a privacy-preserving mechanism is proposed to support public auditing of shared data stored in the cloud. It uses group signatures to compute the verification metadata needed to audit the correctness of shared data. The identity of the signer of each block of shared data is kept confidential from public verifiers, who can verify shared-data integrity without retrieving the entire file; on demand, the signer of each block is revealed to the owner alone. The group private key is generated once by the owner in a static group, whereas in a dynamic group the group private key changes when users are revoked from the group. When users leave the group, the blocks they signed are re-signed by the cloud service provider instead of the owner, which is handled efficiently by a proxy re-signature scheme.
Keywords: data integrity, dynamic group, group signature, public auditing
Procedia PDF Downloads 370

24152 Rodriguez Diego, Del Valle Martin, Hargreaves Matias, Riveros Jose Luis
Authors: Nathainail Bashir, Neil Anderson
Abstract:
The objective of this study was to investigate the current state of practice with regard to karst detection methods and to recommend the best method and array pattern to acquire the desired results. Proper site investigation in karst-prone regions is extremely valuable for determining the location of possible voids. Two geophysical techniques were employed: multichannel analysis of surface waves (MASW) and electrical resistivity tomography (ERT). The MASW data were acquired at each test location using different array lengths and orientations (to increase the probability of obtaining interpretable data in karst terrain). The ERT data were acquired using a dipole-dipole array consisting of 168 electrodes. The MASW data were interpreted (re: estimated depth to the physical top of rock) and used to constrain and verify the interpretation of the ERT data. The ERT data indicate that poorer-quality MASW data were acquired in areas with significant local variation in the depth to top of rock.
Keywords: dipole-dipole, ERT, karst terrains, MASW
Procedia PDF Downloads 295

24151 Data Science in Military Decision-Making: A Semi-Systematic Literature Review
Authors: H. W. Meerveld, R. H. A. Lindelauf
Abstract:
In contemporary warfare, data science is crucial for the military in achieving information superiority. Yet, to the authors' knowledge, no extensive literature survey on data science in military decision-making has been conducted so far. In this study, 156 peer-reviewed articles were analysed through an integrative, semi-systematic literature review to gain an overview of the topic. The study examined to what extent the literature focuses on the opportunities or the risks of data science in military decision-making, differentiated per level of war (i.e., strategic, operational, and tactical). A relatively large focus on the risks of data science was observed in the social science literature, implying that political and military policymakers are disproportionately influenced by a pessimistic view of the application of data science in the military domain. The perceived risks of data science are, however, hardly addressed in the formal science literature. This means that concerns about the military application of data science are not being raised with the audience that can actually develop and enhance data science models and algorithms. Cross-disciplinary research on both the opportunities and the risks of military data science can address the observed research gaps. Considering the levels of war, relatively little attention to the operational level compared with the other two levels was observed, suggesting a research gap with respect to military operational data science. Opportunities for military data science mostly arise at the tactical level; by contrast, studies examining strategic issues mostly emphasise the risks of military data science. Consequently, domain-specific requirements for military strategic data science applications are hardly expressed, and the lack of such applications may ultimately lead to suboptimal strategic decisions in today's warfare.
Keywords: data science, decision-making, information superiority, literature review, military
Procedia PDF Downloads 140

24150 Legal Regulation of Personal Information Data Transmission Risk Assessment: A Case Study of the EU's DPIA
Authors: Cai Qianyi
Abstract:
In the midst of the global digital revolution, the flow of data poses security threats that call China's existing legislative framework for protecting personal information into question. As a preliminary procedure for risk analysis and prevention, the risk assessment of personal data transmission lacks detailed supporting guidelines. Existing provisions reveal unclear responsibilities for network operators and weakened rights for data subjects. Furthermore, the regulatory system's weak operability and a lack of industry self-regulation heighten data transmission hazards. This paper compares the regulatory pathways for data transmission risks in China and Europe from the perspectives of legal framework and content. It draws on the Data Protection Impact Assessment (DPIA) guidelines to empower multiple stakeholders, including data processors, controllers, and subjects, while also defining obligations. In conclusion, this paper intends to address China's digital security shortcomings by developing a more mature regulatory framework and industry self-regulation mechanisms, resulting in a win-win situation for personal data protection and the development of the digital economy.
Keywords: personal information data transmission, risk assessment, DPIA, internet service provider
Procedia PDF Downloads 31

24149 Wavelets Contribution on Textual Data Analysis
Authors: Habiba Ben Abdessalem
Abstract:
The emergence of giant sets of textual data has encouraged researchers to invest in this field. The purpose of textual data analysis methods is to facilitate access to such data by providing various graphic visualizations. Applying these methods requires a corpus pretreatment step, whose standards are set according to the objective of the problem studied. This step determines the list of forms contained in the contingency table, keeping only those that carry information. It may, however, lead to noisy contingency tables, hence the use of a wavelet denoising function. The validity of the proposed approach is tested on a text database covering economic and political events in Tunisia over a well-defined period.
Keywords: textual data, wavelet, denoising, contingency table
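
A minimal sketch of the denoising step, using PyWavelets on a toy contingency table, follows; the wavelet, decomposition level, and universal-threshold rule are common defaults, not necessarily the paper's choices:

```python
# Minimal sketch of denoising a contingency table with wavelets (using
# PyWavelets); the table, noise model, and threshold rule are illustrative.
import numpy as np
import pywt

rng = np.random.default_rng(3)
# Toy forms-by-documents contingency table: structure plus noise.
table = np.outer(rng.poisson(8, 32), rng.poisson(4, 32)).astype(float)
noisy = table + rng.normal(0, table.std() * 0.1, table.shape)

coeffs = pywt.wavedec2(noisy, wavelet="db2", level=2)
sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # robust noise estimate
thr = sigma * np.sqrt(2 * np.log(noisy.size))        # universal threshold
den = [coeffs[0]] + [
    tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
    for detail in coeffs[1:]
]
denoised = pywt.waverec2(den, wavelet="db2")
# Mean absolute error drops after denoising.
print(float(np.abs(denoised - table).mean()), float(np.abs(noisy - table).mean()))
```
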
Procedia PDF Downloads 261

24148 Customer Churn Analysis in Telecommunication Industry Using Data Mining Approach
Authors: Burcu Oralhan, Zeki Oralhan, Nilsun Sariyer, Kumru Uyar
Abstract:
Data mining has become more and more important, with a wide range of applications in recent years. Data mining is the process of finding hidden and unknown patterns in big data. One of its applied fields is customer relationship management (CRM). Understanding the relationships between products and customers is crucial for every business, and CRM is an approach that focuses on developing and retaining customer relationships and increasing customer satisfaction. In this study, we apply data mining methods to the customer relationship management side of telecommunications. The study aims to determine the profile of customers who are likely to leave the system and to develop marketing strategies and customized campaigns for them. Data are clustered by applying classification techniques used to determine the churners. As a result of this study, we obtain knowledge from the international telecommunication industry and contribute to the understanding and development of this subject in customer relationship management.
Keywords: customer churn analysis, customer relationship management, data mining, telecommunication industry
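
As an illustration of the modeling step, the sketch below trains a simple decision tree on synthetic customer records and ranks customers by predicted churn probability; the fields and the churn mechanism are invented, since the study's telecom data are not public:

```python
# Schematic churn model on synthetic records; field names and the
# ground-truth churn mechanism are made-up assumptions for illustration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
n = 2_000
monthly_minutes = rng.gamma(4.0, 80.0, n)
complaints = rng.poisson(0.5, n)
tenure_months = rng.integers(1, 72, n)
# Hypothetical ground truth: short tenure and complaints drive churn.
p = 1 / (1 + np.exp(-(0.8 * complaints - 0.05 * tenure_months + 0.5)))
churned = rng.random(n) < p

X = np.column_stack([monthly_minutes, complaints, tenure_months])
X_tr, X_te, y_tr, y_te = train_test_split(X, churned, random_state=0)
model = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print("holdout accuracy:", model.score(X_te, y_te))

# Rank-ordering customers by churn probability supports targeted campaigns.
risk = model.predict_proba(X_te)[:, 1]
print("top-risk customers:", np.argsort(risk)[::-1][:5])
```
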
Procedia PDF Downloads 292

24147 On Pooling Different Levels of Data in Estimating Parameters of Continuous Meta-Analysis
Authors: N. R. N. Idris, S. Baharom
Abstract:
A meta-analysis may be performed using aggregate data (AD) or individual patient data (IPD). In practice, studies may be available at both the IPD and AD levels. In this situation, both the IPD and AD should be utilised in order to maximize the available information. The statistical advantages of combining studies from different levels have not been fully explored. This study aims to quantify the statistical benefits of including available IPD when conducting a conventional summary-level meta-analysis. Simulated meta-analyses were used to assess the influence of the levels of data on overall meta-analysis estimates based on IPD only, AD only, and the combination of IPD and AD (mixed data, MD), under different study scenarios. The percentage relative bias (PRB), root mean-square error (RMSE), and coverage probability were used to assess the efficiency of the overall estimates. The results demonstrate that available IPD should always be included in a conventional meta-analysis using summary-level data, as it significantly increases the accuracy of the estimates. On the other hand, if more than 80% of the available data are at the IPD level, including the AD does not provide significant differences in the accuracy of the estimates. Additionally, combining the IPD and AD moderates the bias of the treatment-effect estimates, as the IPD tends to overestimate the treatment effects, while the AD has a tendency to produce underestimated effect estimates. These results may provide some guidance in deciding whether significant benefit is gained by pooling the two levels of data when conducting a meta-analysis.
Keywords: aggregate data, combined-level data, individual patient data, meta-analysis
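
The pooling idea can be sketched as follows: each IPD study is first reduced to an effect estimate and its variance, and these are then combined with the reported AD estimates by inverse-variance weighting. The study values below are simulated, and the design is far simpler than the paper's simulation study:

```python
# Sketch of pooling IPD and AD: reduce each IPD study to an effect estimate,
# then combine with reported AD estimates by inverse-variance weighting.
import numpy as np

rng = np.random.default_rng(11)
theta_true = 0.5

# IPD studies: raw treatment/control arms, reduced to mean differences.
ipd_est, ipd_var = [], []
for n in (40, 60, 80):
    treat = rng.normal(theta_true, 1.0, n)
    ctrl = rng.normal(0.0, 1.0, n)
    ipd_est.append(treat.mean() - ctrl.mean())
    ipd_var.append(treat.var(ddof=1) / n + ctrl.var(ddof=1) / n)

# AD studies: only a summary effect and standard error are available.
ad_est, ad_se = [0.42, 0.61], [0.15, 0.20]

est = np.array(ipd_est + ad_est)
var = np.array(ipd_var + [se ** 2 for se in ad_se])
w = 1.0 / var                              # inverse-variance weights
pooled = np.sum(w * est) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
print(f"pooled effect {pooled:.3f} (SE {pooled_se:.3f})")
```
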
Procedia PDF Downloads 354

24146 Analyzing On-Line Process Data for Industrial Production Quality Control
Authors: Hyun-Woo Cho
Abstract:
The monitoring of industrial production quality has to be implemented to provide early warning of unusual operating conditions. Furthermore, identification of their assignable causes is necessary for quality control purposes. For such tasks, many multivariate statistical techniques have been applied and shown to be quite effective. This work presents a process-data-based monitoring scheme for production processes. For more reliable results, additional noise filtering and preprocessing steps are considered; they may enhance performance by eliminating unwanted variation in the data. The performance evaluation is executed using data sets from test processes. The proposed method is shown to provide reliable quality control results and is thus more effective for quality monitoring in the example. For practical implementation of the method, an on-line data system must be available to gather historical and on-line data. Recently, large amounts of data have been collected on-line in most processes, so implementation of the current scheme is feasible and imposes no additional burden on users.
Keywords: detection, filtering, monitoring, process data
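
A common multivariate scheme of this kind is PCA-based monitoring with Hotelling's T2 and SPE statistics, sketched below on synthetic data; the abstract does not name the exact statistics used, so treat this as an illustration of the general approach rather than the paper's method:

```python
# Sketch of PCA-based process monitoring: fit PCA on normal operating data,
# then flag new samples via Hotelling's T2 and SPE statistics.
import numpy as np

rng = np.random.default_rng(2)
normal = rng.normal(0, 1, (500, 6)) @ rng.normal(0, 1, (6, 6))  # training data
mu, sd = normal.mean(0), normal.std(0)
Xs = (normal - mu) / sd

# Principal components from the covariance of the scaled training data.
evals, evecs = np.linalg.eigh(np.cov(Xs.T))
order = np.argsort(evals)[::-1]
evals, P = evals[order][:3], evecs[:, order][:, :3]   # keep 3 PCs

def monitor(x):
    xs = (x - mu) / sd
    t = xs @ P                                       # scores
    T2 = np.sum(t ** 2 / evals)                      # Hotelling's T2
    SPE = np.sum((xs - t @ P.T) ** 2)                # residual (Q) statistic
    return T2, SPE

print(monitor(normal[0]))              # in-control sample: small statistics
fault = normal[0] + np.array([0, 0, 4.0, 0, 0, 0])   # sensor bias fault
print(monitor(fault))                  # out-of-control: inflated T2/SPE
```
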
Procedia PDF Downloads 536

24145 A Review of Travel Data Collection Methods
Authors: Muhammad Awais Shafique, Eiji Hato
Abstract:
Household trip data are of crucial importance for managing present transportation infrastructure as well as for planning and designing future facilities. They also provide the basis for new policies implemented under transportation demand management. The methods used for household trip data collection have changed over time, starting with conventional face-to-face or paper-and-pencil interviews and arriving at the recent approach of employing smartphones. This study summarizes the step-wise evolution of travel data collection methods, providing a comprehensive review of the topic for readers interested in the changing trends in the data collection field.
Keywords: computer, smartphone, telephone, travel survey
Procedia PDF Downloads 289

24144 Study of Flow-Induced Noise Control Effects on Flat Plate through Biomimetic Mucus Injection
Authors: Chen Niu, Xuesong Zhang, Dejiang Shang, Yongwei Liu
Abstract:
Fish can secrete a high-molecular-weight fluid onto their skin to enable rapid movement in water. In this work, we employ a hybrid method that combines computational fluid dynamics (CFD) and the finite element method (FEM) to investigate the effects of different mucus viscosities and injection velocities on the fluctuating pressure in the boundary layer and on the flow-induced structural vibration noise of a flat-plate model. To accurately capture the transient flow distribution on the plate surface, we use large eddy simulation (LES), with the mucus inlet positioned at a sufficient distance from the model to ensure effective coverage. Mucus injection is modeled using the volume of fluid (VOF) method for multiphase flow calculations. The results demonstrate that mucus control of the fluctuating pressure effectively reduces flow-induced structural vibration noise, providing an approach for controlling flow-induced noise in underwater vehicles.
Keywords: mucus, flow control, noise control, flow-induced noise
Procedia PDF Downloads 113

24143 A Business-to-Business Collaboration System That Promotes Data Utilization While Encrypting Information on the Blockchain
Authors: Hiroaki Nasu, Ryota Miyamoto, Yuta Kodera, Yasuyuki Nogami
Abstract:
To promote Industry 4.0 and Society 5.0, it is important to connect and share data in a way every member can trust. Blockchain (BC) technology is currently attracting attention as the most advanced tool and has been used in the financial field, among others. However, data collaboration using BC has not progressed sufficiently among companies in the manufacturing supply chain, which handle sensitive data such as product quality and manufacturing conditions. There are two main reasons why data utilization is not sufficiently advanced in the industrial supply chain. The first is that manufacturing information is top secret and a source of profit for companies, so it is difficult to disclose data even between companies with transactions in the supply chain; in blockchain mechanisms such as Bitcoin that use PKI (public key infrastructure), the plaintext must be shared between companies in order to confirm the identity of the company that sent the data. The second is that the merits (scenarios) of data collaboration between companies are not concretely specified in the industrial supply chain. To address these problems, this paper proposes a business-to-business (B2B) collaboration system using homomorphic encryption and BC techniques. Using the proposed system, each company in the supply chain can exchange confidential information as encrypted data and utilize the data for its own business. In addition, this paper considers a scenario focusing on quality data, which has been difficult to share because it is top secret. In this scenario, we present an implementation scheme and demonstrate the benefit of concrete data collaboration by proposing a comparison protocol that can detect changes in quality while hiding the numerical values of the quality data.
Keywords: business to business data collaboration, industrial supply chain, blockchain, homomorphic encryption
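
The comparison idea can be illustrated with an additively homomorphic scheme such as Paillier: a partner homomorphically forms a blinded difference of two encrypted quality values, so the key holder learns only the sign of the change, not the values themselves. The sketch below assumes the python-paillier (`phe`) package and is a simplification for illustration, not the paper's exact protocol:

```python
# Minimal sketch (assuming the python-paillier `phe` package) of comparing
# quality values on encrypted data: the evaluator sees only a blinded
# difference, so the sign of a quality change is learned without the values.
from phe import paillier
import random

pub, priv = paillier.generate_paillier_keypair(n_length=1024)

# Supplier: encrypt two confidential quality measurements.
q_old, q_new = 983, 961          # e.g., parts per 1000 passing inspection
c_old, c_new = pub.encrypt(q_old), pub.encrypt(q_new)

# Partner (no private key): homomorphically form a blinded difference.
r = random.randrange(1, 2**32)   # positive blinding factor hides magnitude
c_blinded = (c_new - c_old) * r  # Enc(r * (q_new - q_old))

# Key holder: decrypt; only the sign of the change is meaningful.
delta = priv.decrypt(c_blinded)
print("quality dropped" if delta < 0 else "quality held or improved")
```
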
Procedia PDF Downloads 110

24142 Multivariate Assessment of Mathematics Test Scores of Students in Qatar
Authors: Ali Rashash Alzahrani, Elizabeth Stojanovski
Abstract:
Data on various aspects of education are collected regularly at the institutional and government levels. In Australia, for example, students at various levels of schooling undertake examinations in numeracy and literacy as part of NAPLAN testing, enabling longitudinal assessment of such data as well as comparisons between schools and states within Australia. Another source of educational data collected internationally is the PISA study, which collects data from several countries when students are approximately 15 years of age and enables comparisons of performance in science, mathematics, and English between countries, as well as rankings of countries based on performance in these standardised tests. Alongside the student and school outcomes based on the tests taken as part of the PISA study, a wealth of other data is collected, including parental demographics and data related to the teaching strategies used by educators. Overall, an abundance of educational data is available that has the potential to help improve educational attainment and the teaching of content in order to improve learning outcomes. A multivariate assessment of such data enables multiple variables to be considered simultaneously and is used in the present study to help develop profiles of students based on performance in mathematics, using data obtained from the PISA study.
Keywords: cluster analysis, education, mathematics, profiles
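
As a sketch of the profiling step, the code below clusters students on several standardized variables at once and reads the cluster centers as profiles; the variables merely stand in for PISA fields and are synthetic:

```python
# Sketch of multivariate profiling: cluster students on several variables
# simultaneously and interpret cluster centers as profiles. Synthetic data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)
math = np.concatenate([rng.normal(380, 40, 150), rng.normal(540, 45, 150)])
reading = math * 0.8 + rng.normal(80, 30, 300)
study_hours = np.concatenate([rng.normal(4, 1.5, 150), rng.normal(9, 2, 150)])

X = StandardScaler().fit_transform(
    np.column_stack([math, reading, study_hours]))
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for i, center in enumerate(km.cluster_centers_):
    print(f"profile {i}: standardized [math, reading, hours] = {center.round(2)}")
```
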
Procedia PDF Downloads 106

24141 Dataset Quality Index: Development of Composite Indicator Based on Standard Data Quality Indicators
Authors: Sakda Loetpiparwanich, Preecha Vichitthamaros
Abstract:
Nowadays, poor data quality is considered one of the major costs of a data project. A project with data-quality awareness devotes considerable time to data-quality processes, whereas a project without it suffers negative impacts on financial resources, efficiency, productivity, and credibility. One of the most time-consuming steps is defining the expectations and measurements of data quality, because expectations differ according to the purpose of each data project; big data projects in particular may involve many datasets and stakeholders, requiring lengthy discussion to define quality expectations and measurements. Therefore, this study aimed at developing meaningful indicators that describe the overall quality of each dataset for quick comparison and prioritization. The objectives of this study were to: (1) develop practical data quality indicators and measurements, (2) develop data quality dimensions based on statistical characteristics, and (3) develop a composite indicator that can describe the overall data quality of each dataset. The sample consisted of more than 500 datasets from public sources obtained by random sampling. After the datasets were collected, five steps were followed to develop the Dataset Quality Index (SDQI). First, we defined standard data quality expectations. Second, we found indicators that can directly measure the data within datasets. Third, the indicators were aggregated into dimensions using factor analysis. Next, the indicators and dimensions were weighted by the effort of the data-preparation process and by usability. Finally, the dimensions were aggregated into a composite indicator. The results showed that: (1) ten useful indicators and measurements were developed; (2) in developing data quality dimensions based on statistical characteristics, the ten indicators could be reduced to four dimensions; (3) the developed composite indicator, the SDQI, can describe the overall quality of each dataset and separate datasets into three levels: good quality, acceptable quality, and poor quality. In conclusion, the SDQI provides an overall description of data quality within datasets with a meaningful composition. The SDQI can be used to assess all data in a data project, for effort estimation, and for prioritization. It also works well with agile methods, e.g., by assessing quality in the first sprint; after passing the initial evaluation, more specific data quality indicators can be added in subsequent sprints.
Keywords: data quality, dataset quality, data quality management, composite indicator, factor analysis, principal component analysis
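
A compressed sketch of the aggregation pipeline (indicators, then dimensions, then a weighted composite, then three levels) is given below; the indicator values, the PCA stand-in for factor analysis, the weights, and the cut-offs are all illustrative assumptions, not the SDQI specification:

```python
# Sketch of a composite-indicator pipeline: indicator matrix -> reduced
# dimensions -> weighted composite score -> three quality levels.
import numpy as np

rng = np.random.default_rng(4)
n_datasets = 500
# Ten per-dataset quality indicators, each scaled to [0, 1].
indicators = rng.beta(5, 2, (n_datasets, 10))

# Reduce correlated indicators to a few dimensions (PCA here stands in
# for the paper's factor analysis).
Z = indicators - indicators.mean(0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
dims = Z @ Vt[:4].T                                 # four retained dimensions

# Weight dimensions (assumed preparation effort / usability), then combine.
weights = np.array([0.4, 0.3, 0.2, 0.1])
score = (dims - dims.min(0)) / np.ptp(dims, axis=0)  # rescale each dimension
sdqi = score @ weights

levels = np.digitize(sdqi, [0.33, 0.66])   # 0 = poor, 1 = acceptable, 2 = good
print({lvl: int((levels == lvl).sum()) for lvl in (0, 1, 2)})
```
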
Procedia PDF Downloads 115

24140 Predictive Analysis for Big Data: Extension of Classification and Regression Trees Algorithm
Authors: Ameur Abdelkader, Abed Bouarfa Hafida
Abstract:
Since its inception, predictive analysis has revolutionized the IT industry through its robustness and decision-making facilities. It involves applying a set of data processing techniques and algorithms in order to create predictive models. Its principle is based on finding relationships between explanatory variables and predicted variables: past occurrences are exploited to predict the unknown outcome. With the advent of big data, many studies have suggested the use of predictive analytics to process and analyze big data. Nevertheless, they have been curbed by the limits of classical methods of predictive analysis when applied to large amounts of data. In fact, because of their volume, their nature (semi-structured or unstructured), and their variety, it is impossible to analyze big data efficiently via classical methods of predictive analysis. The authors attribute this weakness to the fact that predictive analysis algorithms do not allow the parallelization and distribution of computation. In this paper, we propose to extend the predictive analysis algorithm Classification and Regression Trees (CART) in order to adapt it for big data analysis. The major changes to the algorithm are presented, and a version of the extended algorithm is then defined to make it applicable to huge quantities of data.
Keywords: predictive analysis, big data, predictive analysis algorithms, CART algorithm
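
The change that enables parallelization can be sketched concretely: each data partition computes small class-count summaries for the candidate splits, and only these summaries are aggregated to choose the best split, so no single machine needs the full data. A minimal single-feature illustration on synthetic data (not the authors' extended CART):

```python
# Sketch of distributed CART split-finding: per-partition class histograms
# for candidate thresholds are aggregated, then the best Gini split chosen.
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(0, 1, 100_000)
y = (x + rng.normal(0, 0.5, x.size) > 0).astype(int)   # binary target
thresholds = np.quantile(x, np.linspace(0.05, 0.95, 19))

def partition_summary(xp, yp):
    # Per-partition sufficient statistics: class counts on each side of
    # every candidate threshold (shape: thresholds x 2 sides x 2 classes).
    left = xp[None, :] <= thresholds[:, None]
    counts = np.empty((len(thresholds), 2, 2), dtype=np.int64)
    for k in range(len(thresholds)):
        for side, mask in enumerate((left[k], ~left[k])):
            counts[k, side] = np.bincount(yp[mask], minlength=2)
    return counts

def gini(counts):
    n = counts.sum(-1, keepdims=True)
    p = counts / np.maximum(n, 1)
    return 1.0 - (p ** 2).sum(-1)

# "Map": summaries per partition; "reduce": aggregate and score splits.
total = sum(partition_summary(xp, yp)
            for xp, yp in zip(np.array_split(x, 8), np.array_split(y, 8)))
n_side = total.sum(-1)                       # samples per side
weighted = (n_side * gini(total)).sum(-1) / x.size
best = thresholds[np.argmin(weighted)]
print(f"best split at x <= {best:.3f}")      # ~ 0.0
```
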
Procedia PDF Downloads 123

24139 Canopy Temperature Acquired from Daytime and Nighttime Aerial Data as an Indicator of Trees' Health Status
Authors: Agata Zakrzewska, Dominik Kopeć, Adrian Ochtyra
Abstract:
The growing number of new cameras, sensors, and research methods allows for a broader application of thermal data in remote sensing vegetation studies. The aim of this research was to check whether thermal infrared data in the 3.6-4.9 μm spectral range, obtained during the day and at night, can be used to assess the health condition of selected deciduous tree species in an urban environment. For this purpose, research was carried out in the city center of Warsaw (Poland) in 2020. During the airborne data acquisition, thermal data, laser scanning data, and orthophoto images were collected. Synchronously with the airborne data, ground reference data were obtained for 617 studied trees (Acer platanoides, Acer pseudoplatanus, Aesculus hippocastanum, Tilia cordata, and Tilia × euchlora) in different states of health. The results were as follows: (i) healthy trees are cooler than trees in poor condition or dying, in both daytime and nighttime data; (ii) the mean difference in canopy temperature between healthy and dying trees was 1.06 °C in the nighttime data and 3.28 °C in the daytime data; (iii) condition classes differ significantly in both daytime and nighttime thermal data, but only in the daytime data did all condition classes differ statistically significantly from one another. In conclusion, aerial thermal data can be considered an alternative to hyperspectral data for assessing the health condition of trees in an urban environment, especially data obtained during the day, which differentiate condition classes better than data obtained at night. A method based on the fusion of thermal infrared and laser scanning data could be a quick and efficient solution for identifying trees in poor health that should be visually checked in the field.
Keywords: middle wave infrared, thermal imagery, tree discoloration, urban trees
Procedia PDF Downloads 95

24138 Realistic Testing Procedure of Power Swing Blocking Function in Distance Relay
Authors: Farzad Razavi, Behrooz Taheri, Mohammad Parpaei, Mehdi Mohammadi Ghalesefidi, Siamak Zarei
Abstract:
As one of the major problems in protecting large power systems, power swing and its effect on distance relays have caused a great deal of damage to energy transmission systems in many parts of the world. Power swing has therefore gained the attention of many researchers, leading to the invention of different methods for power swing detection. The power swing detection algorithm is highly important in a distance relay, but protection relays must also meet general requirements such as correct fault detection, response speed, and minimization of disturbances in the power system. To ensure these requirements are met, protection relays need various tests during the development, setup, maintenance, configuration, and troubleshooting steps. This paper covers the power swing scheme of the modern numerical protection relay 7SA522 and addresses the effect of different fault types on the function of the power swing blocking. In this study, it is shown that different fault types occurring during a power swing lead to different unblocking times for the distance relay.
Keywords: power swing, distance relay, power system protection, relay test, transient in power system
Procedia PDF Downloads 360

24137 Study of the Effect of Rotation on the Deformation of a Flexible Blade Rotor
Authors: Aref Maalej, Marwa Fakhfakh, Wael Ben Amira
Abstract:
We present in this work a numerical investigation of fluid-structure interaction to study the elastic behavior of flexible rotors. The principal aim is to determine the effect of aero/hydrodynamic parameters on the bending deformation of flexible rotors. The study is accomplished using the strong two-way fluid-structure interaction (FSI) coupling available in the ANSYS Workbench software, which couples the fluid solver to the transient structural solver to study the elastic behavior of flexible rotors in water. We use a moderately flexible rotor modeled as a single blade with simplified rectangular geometry, and we focus on the effect of the rotational frequency on the flapwise bending deformation. It is demonstrated that the blade deforms in the downstream direction and that the amplitude of these deformations increases with the rotational frequency. Also, beyond a critical frequency, the blade begins to deform in the upstream direction.
Keywords: numerical simulation, flexible blade, fluid-structure interaction, ANSYS Workbench, flapwise deformation
Procedia PDF Downloads 62

24136 Specific Emitter Identification Based on Refined Composite Multiscale Dispersion Entropy
Authors: Shaoying Guo, Yanyun Xu, Meng Zhang, Weiqing Huang
Abstract:
Wireless communication networks are developing rapidly, so wireless security is becoming more and more important. Specific emitter identification (SEI) is a vital part of wireless communication security, as a technique to identify unique transmitters. In this paper, an SEI method based on multiscale dispersion entropy (MDE) and refined composite multiscale dispersion entropy (RCMDE) is proposed. The MDE and RCMDE algorithms are used to extract features for the identification of five wireless devices, and a cross-validation support vector machine (CV-SVM) is used as the classifier. The experimental results show a total identification accuracy of 99.3%, even at a low signal-to-noise ratio (SNR) of 5 dB, which proves that MDE and RCMDE can describe the communication signal series well. In addition, compared with other methods, the proposed method is effective and provides better accuracy and stability for SEI.
Keywords: cross-validation support vector machine, refined composite multiscale dispersion entropy, specific emitter identification, transient signal, wireless communication device
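
For reference, dispersion entropy maps a signal to a small alphabet via the normal CDF, counts embedding patterns, and takes a normalized Shannon entropy; the refined composite multiscale variant averages the pattern counts over the tau offset coarse-grainings before computing one entropy per scale. A sketch with common parameter defaults (c = 6 classes, embedding m = 2), which are assumptions rather than the paper's settings:

```python
# Sketch of dispersion entropy with the refined-composite multiscale
# extension (RCMDE); parameter choices are common defaults, not the paper's.
import numpy as np
from scipy.stats import norm

def dispersion_counts(x, c=6, m=2):
    # Map samples to c classes via the normal CDF, count length-m patterns.
    z = np.round(c * norm.cdf((x - x.mean()) / x.std()) + 0.5)
    z = np.clip(z, 1, c).astype(int)
    patterns = sum(z[i:len(z) - m + 1 + i] * c ** i for i in range(m))
    return np.bincount(patterns, minlength=c ** m * 2)

def entropy(p):
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))

def rcmde(x, tau, c=6, m=2):
    # Refined composite: average pattern counts over the tau offset
    # coarse-grainings, then take one entropy, normalized by log(c^m).
    counts = sum(
        dispersion_counts(
            np.mean(x[k:k + (len(x) - k) // tau * tau].reshape(-1, tau), axis=1),
            c, m)
        for k in range(tau)
    )
    return entropy(counts) / np.log(c ** m)

sig = np.random.default_rng(0).normal(size=4096)   # stand-in for a transient
print([round(rcmde(sig, tau), 3) for tau in (1, 2, 3, 4)])
```
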
Procedia PDF Downloads 114

24135 Hierarchical Clustering Algorithms in Data Mining
Authors: Z. Abdullah, A. R. Hamdan
Abstract:
Clustering is a process of grouping objects and data into clusters so that data objects in the same cluster are similar to one another. Clustering algorithms form one of the areas of data mining and can be classified into partitioning, hierarchical, density-based, and grid-based methods. In this paper, we survey and review four major hierarchical clustering algorithms: CURE, ROCK, CHAMELEON, and BIRCH. The resulting state of the art of these algorithms will help in eliminating current problems, as well as in deriving more robust and scalable clustering algorithms.
Keywords: clustering, unsupervised learning, algorithms, hierarchical
Procedia PDF Downloads 855

24134 End to End Monitoring in Oracle Fusion Middleware for Data Verification
Authors: Syed Kashif Ali, Usman Javaid, Abdullah Chohan
Abstract:
In large enterprises, multiple departments use different information systems and databases according to their needs. These systems are independent and heterogeneous in nature, and sharing information/data between them is not an easy task. The use of middleware technologies has made data sharing between systems much easier. However, monitoring the exchange of data/information for verification purposes between target and source systems is often complex or impossible for the maintenance department, due to security/access privileges on the target and source systems. In this paper, we present our experience with an end-to-end data monitoring approach at the middleware level, implemented in Oracle BPEL, for data verification without the help of any monitoring tool.
Keywords: service level agreement, SOA, BPEL, Oracle Fusion Middleware, web service monitoring
Procedia PDF Downloads 459

24133 Dissimilarity Measure for General Histogram Data and Its Application to Hierarchical Clustering
Authors: K. Umbleja, M. Ichino
Abstract:
Symbolic data mining has been developed to analyze very large datasets. It is also useful when entry-specific details should remain hidden, and it is quickly gaining popularity as the datasets in need of analysis become ever larger. One type of symbolic data is the histogram, which can condense huge amounts of information into a single variable with a high level of granularity. Other types of symbolic data can also be described as histograms, so a method developed for histograms can be applied to other symbolic data types as well. Due to its complex structure, however, analyzing histograms is complicated. This paper proposes a method that allows two histogram-valued variables to be compared, thereby finding the dissimilarity between two histograms. The proposed method uses the Ichino-Yaguchi dissimilarity measure for mixed feature-type data analysis as a base and develops a dissimilarity measure specifically for histogram data, which allows comparison of histograms with different numbers of bins and bin widths (so-called general histograms). The proposed dissimilarity measure is then used as a measure for clustering. Furthermore, a linkage method based on weighted averages is proposed, with the concept of cluster compactness used to measure the quality of the clustering. The method is validated through application to real datasets. As a result, the proposed dissimilarity measure is found to produce adequate and comparable results with general histograms, without loss of detail or the need to transform the data.
Keywords: dissimilarity measure, hierarchical clustering, histograms, symbolic data analysis
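
The key mechanical step, comparing histograms with different bin counts and widths, can be sketched by redistributing each histogram's mass onto the union of the two sets of bin edges. The simple L1 dissimilarity at the end is a stand-in for the paper's Ichino-Yaguchi-based measure, whose exact form the abstract does not give:

```python
# Sketch of comparing two "general" histograms (different bin counts and
# widths) by rebinning both onto a common grid before measuring distance.
import numpy as np

def rebin(edges, mass, common_edges):
    # Distribute each bin's mass onto the common grid proportionally
    # to the overlap of the bin with each common cell.
    out = np.zeros(len(common_edges) - 1)
    for (lo, hi, m) in zip(edges[:-1], edges[1:], mass):
        overlap = np.clip(np.minimum(common_edges[1:], hi)
                          - np.maximum(common_edges[:-1], lo), 0, None)
        out += m * overlap / (hi - lo)
    return out

# Two histogram-valued observations with unequal bins (heights sum to 1).
e1, m1 = np.array([0.0, 1.0, 3.0, 6.0]), np.array([0.2, 0.5, 0.3])
e2, m2 = np.array([0.0, 2.0, 4.0, 5.0, 6.0]), np.array([0.3, 0.4, 0.2, 0.1])

common = np.union1d(e1, e2)
h1, h2 = rebin(e1, m1, common), rebin(e2, m2, common)
dissimilarity = 0.5 * np.abs(h1 - h2).sum()   # lies in [0, 1]
print(common, h1.round(3), h2.round(3), round(dissimilarity, 3))
```
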
Procedia PDF Downloads 142

24132 WiFi Data Offloading: Bundling Method in a Canvas Business Model
Authors: Majid Mokhtarnia, Alireza Amini
Abstract:
Mobile operators deal with increasing data traffic as a critical issue. As a result, a vital responsibility of the operators is to handle this trend in a way that creates added value. This paper addresses a bundling method in a Canvas business model within a WiFi data offloading (WDO) strategy, through which some elements of the model may be affected. In the proposed method, a number of data packages are offered to subscribers, some of which include a given volume of complimentary WiFi-offloaded data free of charge. The paper analyzes this method from the viewpoints of attractiveness and profitability. The results demonstrate that the quality of implementation of the WDO strongly affects the final result and helps the decision maker to make the best choice.
Keywords: bundling, canvas business model, telecommunication, WiFi data offloading
Procedia PDF Downloads 175