Search results for: Data structures
7740 The Feasibility of Augmenting an Augmented Reality Image Card on a Quick Response Code
Authors: Alfred Chen, Shr Yu Lu, Cong Seng Hong, Yur-June Wang
Abstract:
This research studies the feasibility of augmenting an augmented reality (AR) image card onto a Quick Response (QR) code. The authors have developed a new visual tag, which contains a QR code and an overlaid AR image card. The new visual tag allows reading both the revealed data of the QR code and the instant data from the AR image card. Furthermore, a handheld communicating device is used to read and decode the new visual tag, and the concealed data of the new visual tag can then be revealed and read through its visual display. In general, the QR code is designed to store the corresponding data or, as a key, to access the corresponding data from a server through the Internet. The data revealed from the QR code are represented as text. Normally, the AR image card is designed to store the corresponding data in 3-dimensional or animation/video form. By exploiting the QR code's high fault tolerance, the new visual tag can provide access to these two different types of data with a single handheld communicating device. The new visual tag has the advantage of carrying much more data than an independent QR code or AR image card. The major findings of this research are: 1) the most efficient area for the AR card augmented on the QR code is a coverage of 9% of the total area of the new visual tag, and 2) the best location for the AR image card augmented on the QR code is the bottom-right corner of the new visual tag.
Keywords: Augmented reality, QR code, visual tag, handheld communicating device.
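As a rough illustration of the overlay geometry reported above, the sketch below pastes a hypothetical AR image card over the bottom-right corner of an existing QR code image so that it covers roughly 9% of the tag area, relying on the QR code's error correction to keep the symbol decodable. The file names qr_code.png and ar_card.png are assumptions for illustration only, not files from the paper.

```python
from PIL import Image
import math

# Hypothetical input files (assumptions, not from the paper).
qr = Image.open("qr_code.png").convert("RGB")       # square QR code image
card = Image.open("ar_card.png").convert("RGB")     # AR image card

# Target coverage: ~9% of the total tag area, as suggested by the study.
coverage = 0.09
tag_w, tag_h = qr.size
side = int(math.sqrt(coverage * tag_w * tag_h))     # edge of a square overlay

# Resize the AR card and paste it in the bottom-right corner of the tag.
card = card.resize((side, side))
qr.paste(card, (tag_w - side, tag_h - side))
qr.save("combined_visual_tag.png")
```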
7739 A Competitive Replica Placement Methodology for Ad Hoc Networks
Authors: Samee Ullah Khan, C. Ardil
Abstract:
In this paper, a mathematical model for data object replication in ad hoc networks is formulated. The derived model is general, flexible and adaptable to cater for various applications in ad hoc networks. We propose a game-theoretical technique in which players (mobile hosts) continuously compete in a non-cooperative environment to improve data accessibility by replicating data objects. The technique incorporates the access frequency from mobile hosts to each data object, the status of the network connectivity, and communication costs. The proposed technique is extensively evaluated against four well-known ad hoc network replica allocation methods. The experimental results reveal that the proposed approach outperforms the four techniques in both execution time and solution quality.
Keywords: Data replication, auctions, static allocation.
7738 Multidimensional Data Mining by Means of Randomly Travelling Hyper-Ellipsoids
Authors: Pavel Y. Tabakov, Kevin Duffy
Abstract:
This study presents a new approach to automatic data clustering and classification problems in large and complex databases that, at the same time, derives specific types of explicit rules describing each cluster. The method works well in both sparse and dense multidimensional data spaces. The members of the data space can be of the same nature or represent different classes. A number of N-dimensional ellipsoids are used for enclosing the data clouds. Due to the geometry of an ellipsoid and its free rotation in space, the detection of clusters becomes very efficient. The method is based on genetic algorithms, which are used to optimize the location, orientation and geometric characteristics of the hyper-ellipsoids. The proposed approach can serve as a basis for the development of general knowledge systems for discovering hidden knowledge and unexpected patterns and rules in various large databases.
Keywords: Classification, clustering, data mining, genetic algorithms.
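A minimal sketch of the core geometric test implied by the abstract: deciding whether data points fall inside an N-dimensional ellipsoid defined by a centre, semi-axis lengths and a rotation matrix. In a genetic algorithm these parameters would form the chromosome; the fitness used here (the number of points enclosed) is an assumption made purely for illustration.

```python
import numpy as np

def inside_hyperellipsoid(points, center, semi_axes, rotation):
    """Return a boolean mask of the points lying inside the rotated ellipsoid.

    points    : (n, d) array of data points
    center    : (d,) ellipsoid centre
    semi_axes : (d,) semi-axis lengths
    rotation  : (d, d) orthonormal matrix whose columns are the axis directions
    """
    local = (points - center) @ rotation          # project onto ellipsoid axes
    scaled = local / semi_axes                    # scale axes to the unit sphere
    return np.sum(scaled ** 2, axis=1) <= 1.0

# Toy fitness of one GA individual: how many points the candidate ellipsoid encloses.
rng = np.random.default_rng(0)
data = rng.normal(size=(500, 3))
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))      # random orthonormal rotation
fitness = inside_hyperellipsoid(data, np.zeros(3), np.array([2.0, 1.0, 0.5]), q).sum()
print(fitness)
```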
7737 Predictions Using Data Mining and Case-based Reasoning: A Case Study for Retinopathy
Authors: Vimala Balakrishnan, Mohammad R. Shakouri, Hooman Hoodeh, Huck-Soo Loo
Abstract:
Diabetes is one of the most prevalent diseases worldwide, with an increasing number of complications, retinopathy being one of the most common. This paper describes how data mining and case-based reasoning were integrated to predict retinopathy prevalence among diabetes patients in Malaysia. The required knowledge base was built from literature reviews and interviews with medical experts. Data from a total of 140 diabetes patients were used to train the prediction system. A voting mechanism selects the best prediction results from the two techniques used. It has been shown that both data mining and case-based reasoning can be used for retinopathy prediction, with an improved accuracy of 85%.
Keywords: Case-based reasoning, data mining, prediction, retinopathy.
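The abstract does not give the implementation details of the voting mechanism, so the sketch below is only a loose illustration of combining a data mining model with a nearest-case (CBR-like) predictor by simple voting; the toy features, the stand-in models and the confidence-based tie-break are assumptions, not the paper's design.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

# Toy stand-ins: X holds patient attributes, y the retinopathy label (0/1).
rng = np.random.default_rng(1)
X = rng.normal(size=(140, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

dm_model = DecisionTreeClassifier(max_depth=4).fit(X, y)      # "data mining" side
cbr_model = KNeighborsClassifier(n_neighbors=3).fit(X, y)     # nearest-case retrieval

def vote(x):
    """Return the prediction of the more confident model (assumed tie-break rule)."""
    p_dm = dm_model.predict_proba([x])[0]
    p_cbr = cbr_model.predict_proba([x])[0]
    return int(np.argmax(p_dm)) if p_dm.max() >= p_cbr.max() else int(np.argmax(p_cbr))

print(vote(X[0]))
```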
7736 Zero Truncated Strict Arcsine Model
Authors: Y. N. Phang, E. F. Loh
Abstract:
The zero truncated model is usually used in modeling count data without zeros. It is the opposite of the zero inflated model. Zero truncated Poisson and zero truncated negative binomial models have been discussed and used by researchers in analyzing the abundance of rare species and hospital stays. Zero truncated models are also used as the base in developing hurdle models. In this study, we developed a new model, the zero truncated strict arcsine model, which can be used as an alternative model in modeling count data without zeros and with extra variation. Two simulated data sets and one real-life data set are fitted to the developed model. The results show that the model provides a good fit to the data. The maximum likelihood estimation method is used to estimate the parameters.
Keywords: Hurdle models, maximum likelihood estimation method, positive count data.
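Since the strict arcsine distribution is quite specialized, the sketch below illustrates the general zero-truncation and maximum likelihood recipe described above with the simpler zero-truncated Poisson; the data are synthetic and the choice of distribution is an illustrative substitution, not the paper's model.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

def zt_poisson_negloglik(lam, data):
    """Negative log-likelihood of a zero-truncated Poisson:
    P(X=k) = lam**k * exp(-lam) / (k! * (1 - exp(-lam))), k >= 1."""
    data = np.asarray(data, dtype=float)
    ll = (data * np.log(lam) - lam - gammaln(data + 1)
          - np.log1p(-np.exp(-lam)))
    return -ll.sum()

# Positive count data without zeros (toy example).
counts = np.array([1, 1, 2, 3, 1, 2, 4, 1, 2, 2, 3, 1])
res = minimize_scalar(zt_poisson_negloglik, bounds=(1e-6, 50), args=(counts,),
                      method="bounded")
print("ML estimate of lambda:", res.x)
```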
7735 Li-Fi Technology: Data Transmission through Visible Light
Authors: Shahzad Hassan, Kamran Saeed
Abstract:
People are always in search of Wi-Fi hotspots because Internet access is a major demand nowadays. But like all other technologies, there is still room for improvement in Wi-Fi technology with regard to the speed and quality of connectivity. To address these aspects, Harald Haas, a professor at the University of Edinburgh, proposed what is now known as Li-Fi (Light Fidelity). Li-Fi is a new technology in the field of wireless communication that provides connectivity within a network environment. It is a two-way mode of wireless communication using light. Basically, the data are transmitted through light-emitting diodes, which can vary the intensity of light very fast, even faster than the blink of an eye. From the research and experiments conducted so far, it can be said that Li-Fi can increase the speed and reliability of data transfer. This paper pays particular attention to the assessment of the performance of this technology. In other words, it is regarded as a 5G technology that uses LEDs as the medium of data transfer. For coverage within buildings, Wi-Fi is good, but Li-Fi can be considered favorable in situations where large amounts of data are to be transferred in areas with electromagnetic interference. It brings many data-related qualities, such as efficiency, security and large throughputs, to the table of wireless communication. All in all, it can be said that Li-Fi is going to be a future phenomenon where the presence of light will mean access to the Internet as well as speedy data transfer.
Keywords: Communication, LED, Li-Fi, Wi-Fi.
7734 Business Rules for Data Warehouse
Authors: Rajeev Kaula
Abstract:
Business rules and data warehouses are concepts and technologies that impact a wide variety of organizational tasks. In general, each area has evolved independently, impacting application development and decision-making. Generating knowledge from a data warehouse is a complex process. This paper outlines an approach to ease the import of information and knowledge from a data warehouse star schema through an inference class of business rules. The paper utilizes the Oracle database to illustrate the working of the concepts. The star schema structure and the business rules are stored within a relational database. The approach is explained through a prototype in Oracle's PL/SQL Server Pages.
Keywords: Business rules, data warehouse, PL/SQL Server Pages, relational model, Web application.
7733 Authorization of Commercial Communication Satellite Grounds for Promoting Turkish Data Relay System
Authors: Celal Dudak, Aslı Utku, Burak Yağlioğlu
Abstract:
Uninterrupted and continuous satellite communication over the whole orbit time is becoming more indispensable every day. Data relay systems are developed and built for various high/low data rate information exchanges, such as the TDRSS of the USA and the EDRSS of Europe. In these missions, a couple of task-dedicated communication satellites exist. In this regard, a data relay system is proposed for Turkey to exchange low data rate information (i.e., TTC) for Earth-observing LEO satellites, appointing commercial GEO communication satellites all over the world. First, justification of this attempt is given, demonstrating duration enhancements in the link. A discussion of the preference for RF communication instead of laser communication is also given. Then, the preferred communication GEOs, including TURKSAT 4A, which already belongs to Turkey, are given, together with the coverage enhancements obtained through STK simulations and the corresponding link budget. A block diagram of the communication system on the LEO satellite is also given.
Keywords: Communication, satellite, data relay system, coverage.
7732 An Efficient Approach to Mining Frequent Itemsets on Data Streams
Authors: Sara Ansari, Mohammad Hadi Sadreddini
Abstract:
The increasing importance of data streams arising in a wide range of advanced applications has led to the extensive study of mining frequent patterns. Mining data streams poses many new challenges, amongst which are the one-scan nature, the unbounded memory requirement and the high arrival rate of data streams. In this paper, we propose a new approach for mining itemsets on data streams. Our approach, SFIDS, has been developed based on the FIDS algorithm. The main aims were to keep some advantages of the previous approach, resolve some of its drawbacks, and consequently improve run time and memory consumption. Our approach has the following advantages: it uses a lattice-like data structure for keeping frequent itemsets; it separates regions from each other, deleting common nodes, which results in a decrease in search space, memory consumption and run time; and finally, considering the CPU constraint, when an increasing data arrival rate overloads the system, SFIDS automatically detects this situation and discards some of the unprocessed data. We guarantee that the error of the results is bounded by a user pre-specified threshold, based on a probabilistic technique. Final results show that the SFIDS algorithm attains about a 50% run time improvement over the FIDS approach.
Keywords: Data stream, frequent itemset, stream mining.
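SFIDS itself is not specified in enough detail here to reproduce, so the sketch below shows the classic Lossy Counting idea for single items on a stream, which shares the key property mentioned in the abstract: every reported count has an error bounded by a user-specified threshold epsilon. It is an illustration of bounded-error stream counting, not the paper's algorithm.

```python
import math

def lossy_counting(stream, epsilon):
    """Classic Lossy Counting for single items: every reported count
    underestimates the true count by at most epsilon * N."""
    width = math.ceil(1.0 / epsilon)              # bucket width
    counts, deltas = {}, {}
    bucket = 1
    for n, item in enumerate(stream, start=1):
        if item in counts:
            counts[item] += 1
        else:
            counts[item] = 1
            deltas[item] = bucket - 1
        if n % width == 0:                        # end of bucket: prune low counts
            for key in [k for k in counts if counts[k] + deltas[k] <= bucket]:
                del counts[key], deltas[key]
            bucket += 1
    return counts

stream = ["a", "b", "a", "c", "a", "b", "a", "d", "a", "b"] * 100
print(lossy_counting(stream, epsilon=0.01))
```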
7731 Modeling and FOS Feedback Based Control of SISO Intelligent Structures with Embedded Shear Sensors and Actuators
Authors: T. C. Manjunath, B. Bandyopadhyay
Abstract:
Active vibration control is an important problem in structures. The objective of active vibration control is to reduce the vibration of a system by automatic modification of the system's structural response. In this paper, the modeling and design of a fast output sampling (FOS) feedback controller for a smart flexible beam system embedded with shear sensors and actuators, treated as a SISO system using Timoshenko beam theory, is proposed. FEM theory, Timoshenko beam theory and state space techniques are used to model the aluminum cantilever beam. For the SISO case, the beam is divided into 5 finite elements and the control actuator is placed at finite element position 1, whereas the sensor is varied from position 2 to 5, i.e., from near the fixed end to the free end. Controllers are designed using the FOS method, and the performance of the designed FOS controller is evaluated for vibration control for 4 SISO models of the same plant. The effect of placing the sensor at different locations on the beam is observed, and the performance of the controller is evaluated for vibration control. Some of the limitations of the Euler-Bernoulli theory, such as the neglect of shear and axial displacement, are addressed here, giving rise to an accurate beam model. Embedded shear sensors and actuators have been considered in this paper instead of surface-mounted sensors and actuators for vibration suppression because of their many advantages. In controlling the vibration modes, the first three dominant modes of vibration of the system are considered.
Keywords: Smart structure, Timoshenko beam theory, fast output sampling feedback control, finite element method, state space model, SISO, vibration control, LMI.
7730 Effect of Bentonite on the Rheological Behavior of Cement Grout in Presence of Superplasticizer
Authors: K. Benyounes, A. Benmounah
Abstract:
Cement-based grouts have been used successfully to repair cracks in many concrete structures, such as bridges, tunnels and buildings, and to consolidate soils or rock foundations. In the present study, a cement grout with a water/binder ratio (W/B) fixed at 0.5 is rheologically characterized. The effect of replacing cement by bentonite (2 to 10% wt) in the presence of a superplasticizer (0.5% wt) was investigated. Several rheological tests were carried out using a controlled-stress rheometer equipped with vane geometry at a temperature of 20°C. To highlight the influence of bentonite and superplasticizer on the rheological behavior of the cement grout, flow tests over a shear-rate range of 0 to 200 s⁻¹ were performed. The cement grout showed non-Newtonian viscosity behavior at all concentrations of bentonite. The three-parameter Herschel-Bulkley model was chosen to fit the experimental data. Based on the correlation coefficients of the estimated parameters, the Herschel-Bulkley model described the rheological behavior of the grouts well. Test results showed that increasing the dosage of bentonite increases the viscosity and yield stress of the system and introduces more thixotropy, while the addition of both bentonite and superplasticizer to the cement grout significantly improves the fluidity and reduces the yield stress due to the dispersing action of the SP.
Keywords: Cement grout, bentonite, superplasticizer, viscosity, yield stress.
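The Herschel-Bulkley law relates shear stress to shear rate through tau = tau0 + K * gamma_dot**n. A minimal fitting sketch is shown below; the flow-curve data are hypothetical placeholders, not measurements from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gamma_dot, tau0, K, n):
    """Shear stress (Pa) as a function of shear rate (1/s)."""
    return tau0 + K * gamma_dot ** n

# Hypothetical flow-curve data (shear rate in 1/s, shear stress in Pa).
gamma_dot = np.array([1, 5, 10, 25, 50, 100, 150, 200], dtype=float)
tau = np.array([4.1, 6.0, 7.4, 10.2, 13.5, 18.9, 23.0, 26.8])

params, _ = curve_fit(herschel_bulkley, gamma_dot, tau, p0=[1.0, 1.0, 0.5])
tau0, K, n = params
print(f"yield stress = {tau0:.2f} Pa, consistency = {K:.2f} Pa.s^n, index = {n:.2f}")
```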
7729 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model
Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin
Abstract:
Early detection of anomalies in data centers is important to reduce downtimes and the costs of periodic maintenance. However, there is little research on this topic and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers by combining sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. The performance of the model is assessed a posteriori through the F1-score by comparing detected anomalies with the data center's history. The proposed model outperforms the state-of-the-art reconstruction method, which uses only one autoencoder taking multivariate sequences and detects an anomaly with a threshold on the reconstruction error, with an F1-score of 83.60% compared to 24.16%.
Keywords: Anomaly detection, autoencoder, data centers, deep learning.
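A compact sketch of the reconstruction idea described above: one LSTM autoencoder per sensor, trained on normal windows only, with reconstruction-error features passed to a random forest. The window length, layer sizes, error features and random data are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np
from tensorflow.keras import layers, models
from sklearn.ensemble import RandomForestClassifier

WIN = 32  # assumed window length (time steps per sample)

def make_autoencoder():
    """LSTM autoencoder for univariate windows of shape (WIN, 1)."""
    return models.Sequential([
        layers.Input(shape=(WIN, 1)),
        layers.LSTM(16),                          # encoder
        layers.RepeatVector(WIN),
        layers.LSTM(16, return_sequences=True),   # decoder
        layers.TimeDistributed(layers.Dense(1)),
    ])

sensors = ["temperature", "humidity", "power"]
autoencoders = {}
normal = {s: np.random.rand(200, WIN, 1) for s in sensors}   # normal windows (toy)

for s in sensors:
    ae = make_autoencoder()
    ae.compile(optimizer="adam", loss="mse")
    ae.fit(normal[s], normal[s], epochs=5, verbose=0)         # train on normal data only
    autoencoders[s] = ae

def error_features(windows):
    """Per-sensor mean and max absolute reconstruction error as features."""
    feats = []
    for s in sensors:
        diff = np.abs(windows[s] - autoencoders[s].predict(windows[s], verbose=0))
        feats.append(np.stack([diff.mean(axis=(1, 2)), diff.max(axis=(1, 2))], axis=1))
    return np.hstack(feats)

# Random forest classifies the difference-signal features (labels assumed known).
X_train = error_features({s: np.random.rand(100, WIN, 1) for s in sensors})
y_train = np.random.randint(0, 2, size=100)
clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
```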
7728 Spike Sorting Method Using Exponential Autoregressive Modeling of Action Potentials
Authors: Sajjad Farashi
Abstract:
Neurons in the nervous system communicate with each other by producing electrical signals called spikes. To investigate the physiological function of the nervous system, it is essential to study the activity of neurons by detecting and sorting spikes in the recorded signal. In this paper, a method is proposed for the spike sorting problem that is based on nonlinear modeling of spikes using an exponential autoregressive model. A genetic algorithm is utilized for model parameter estimation. In this regard, selected model coefficients are used as features for sorting purposes. For the optimal selection of model coefficients, a self-organizing feature map is used. The results show that modeling spikes with a nonlinear autoregressive model outperforms its linear counterpart. The features extracted from the coefficients of the exponential autoregressive model are also better than wavelet-based features and yield more compact and better-separated clusters. In the case of spikes that differ only in small-scale structure, where principal component analysis fails to produce separated clouds in the feature space, the proposed method can obtain well-separated clusters, which removes the necessity of applying complex classifiers.
Keywords: Exponential autoregressive model, Neural data, spike sorting, time series modeling.
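The exponential autoregressive (ExpAR) model referred to above takes, in its common Haggan-Ozaki form, x_t = sum_i (a_i + b_i * exp(-gamma * x_{t-1}^2)) * x_{t-i} + e_t. The sketch below fits the linear coefficients by least squares over a grid of gamma values rather than with a genetic algorithm, which is a simplification of the paper's estimation scheme; the waveform is synthetic.

```python
import numpy as np

def fit_expar(x, order, gammas):
    """Fit an ExpAR(order) model; the coefficients (a_i, b_i) are linear given
    gamma, so solve least squares for each candidate gamma and keep the best."""
    x = np.asarray(x, dtype=float)
    best = None
    for g in gammas:
        rows, targets = [], []
        for t in range(order, len(x)):
            w = np.exp(-g * x[t - 1] ** 2)
            lagged = x[t - order:t][::-1]                 # x_{t-1}, ..., x_{t-order}
            rows.append(np.concatenate([lagged, w * lagged]))
            targets.append(x[t])
        A, y = np.array(rows), np.array(targets)
        coef, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
        sse = np.sum((A @ coef - y) ** 2)
        if best is None or sse < best[0]:
            best = (sse, g, coef[:order], coef[order:])   # (error, gamma, a, b)
    return best

# Toy spike-like waveform; in practice x would be an aligned action potential.
t = np.linspace(0, 1, 200)
spike = np.exp(-((t - 0.3) ** 2) / 0.002) - 0.4 * np.exp(-((t - 0.45) ** 2) / 0.01)
sse, gamma, a, b = fit_expar(spike, order=2, gammas=np.linspace(0.1, 10, 20))
print(gamma, a, b)
```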
7727 AnQL: A Query Language for Annotation Documents
Authors: Neerja Bhatnagar, Ben A. Juliano, Renee S. Renner
Abstract:
This paper presents data annotation models at five levels of granularity (database, relation, column, tuple, and cell) of relational data to address the unsuitability of most relational databases for expressing annotations. These models do not require any structural or schematic changes to the underlying database. They are also flexible, extensible, customizable, database-neutral, and platform-independent. This paper also presents an SQL-like query language, named Annotation Query Language (AnQL), to query annotation documents. AnQL is simple to understand and exploits the already wide knowledge and skill set of SQL.
Keywords: Annotation query language, data annotations, data annotation models, semantic data annotations.
7726 Machine Learning-Enabled Classification of Climbing Using Small Data
Authors: Nicholas Milburn, Yu Liang, Dalei Wu
Abstract:
Athlete performance scoring within the climbing domain presents interesting challenges, as the sport does not have an objective way to assign skill. Assessing skill levels within any sport is valuable, as it can be used to mark progress while training and can help an athlete choose appropriate climbs to attempt. Machine learning-based methods are popular for complex problems like this. The available dataset was composed of dynamic force data recorded during climbing; however, it came with challenges such as data scarcity, class imbalance, and temporal heterogeneity. The investigated solutions to these challenges include data augmentation, temporal normalization, conversion of the time series to the spectral domain, and cross validation strategies. The investigated solutions to the classification problem include the lightweight classifiers KNN and SVM as well as deep learning with a CNN. The best performing model had an accuracy of 80%. In conclusion, there seems to be enough information within climbing force data to accurately categorize climbers by skill.
Keywords: Classification, climbing, data imbalance, data scarcity, machine learning, time sequence.
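A small sketch of one of the pipelines named above: converting each force time series to the spectral domain and feeding the magnitudes to an SVM, with the small data size handled by cross validation. The data shapes, labels and parameter choices are placeholders, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Placeholder force recordings: 60 climbs, 256 samples each, 3 skill classes.
rng = np.random.default_rng(42)
forces = rng.normal(size=(60, 256))
labels = rng.integers(0, 3, size=60)

def spectral_features(signals, n_bins=32):
    """Magnitude of the first n_bins FFT coefficients (temporal normalization
    to fixed-length windows is assumed to have been done upstream)."""
    spectrum = np.abs(np.fft.rfft(signals, axis=1))
    return spectrum[:, :n_bins]

X = spectral_features(forces)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, labels, cv=5)     # cross validation for small data
print("accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```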
7725 Challenges of Irrigation Water Supply in Croplands of Arid Regions and their Environmental Consequences – A Case Study in the Dez and Moghan Command Areas of Iran
Authors: Lobat Taghavi, Najaf Hedayat
Abstract:
Renewable water resources are crucial production variables in arid and semi-arid regions where intensive agriculture is practiced to meet the ever-increasing demand for food and fiber. This is crucial for the Dez and Moghan command areas, where water delivery problems and adverse environmental issues are widespread. This paper aims to identify major problem areas using on-farm surveys of 200 farmers, agricultural extensionists and water suppliers, complemented by secondary data and field observations during the 2010-2011 cultivating season. The SPSS package was used to analyze and synthesize the data. Results indicated inappropriate canal operations in both schemes, though there was no unanimity about the underlying causes. Inequitable and inflexible distribution was found to be rooted in deficient hydraulic structures, particularly in the main and secondary canals. The inadequacy and inflexibility of the water scheduling regime were the underlying causes of recurring pest and disease spread, which often led to declines in crop yield and quality. Although these were not disputed, the water suppliers were not prepared to link them to deficiencies in the operation of the main and secondary canals. They rather attributed them to the prevailing salinity, alkalinity, water table fluctuations and leaching of valuable agro-chemical inputs from the plants' root zone, with far-reaching consequences. Examples of these include the pollution of ground and surface water resources due to over-irrigation at the farm level, which falls under the growers' own responsibility. Poor irrigation efficiency and adverse environmental problems were attributed to deficient and outdated farming practices that were in turn rooted in poor extension programs and irrational water charges.
Keywords: Water delivery, inequity, inflexibility, conflicts, environmental impact, Dez and Moghan.
7724 FCNN-MR: A Parallel Instance Selection Method Based on Fast Condensed Nearest Neighbor Rule
Authors: Lu Si, Jie Yu, Shasha Li, Jun Ma, Lei Luo, Qingbo Wu, Yongqi Ma, Zhengji Liu
Abstract:
Instance selection (IS) techniques are used to reduce data size in order to improve the performance of data mining methods. Recently, to process very large data sets, several proposed methods divide the training set into disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitations of these methods and give our viewpoint on how to divide and conquer in the IS procedure. Then, based on the fast condensed nearest neighbor (FCNN) rule, we propose an instance selection method for large data sets using the MapReduce framework. Besides ensuring prediction accuracy and reduction rate, it has two desirable properties: first, it reduces the workload in the aggregation node; second, and most important, it produces the same result as the sequential version, which other parallel methods cannot achieve. We evaluate the performance of FCNN-MR on one small data set and two large data sets. The experimental results show that it is effective and practical.
Keywords: Instance selection, data reduction, MapReduce, kNN.
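FCNN itself is not reproduced here; to make the instance-selection idea concrete, the sketch below shows the classic Hart condensed nearest neighbor rule, a simpler relative of FCNN: keep only the prototypes needed for a 1-NN classifier to label the training set correctly. The data are synthetic.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def condensed_nn(X, y):
    """Hart's condensed nearest neighbor rule: greedily grow a prototype set
    until every training point is classified correctly by 1-NN on the prototypes."""
    keep = [0]                                   # seed with the first instance
    changed = True
    while changed:
        changed = False
        knn = KNeighborsClassifier(n_neighbors=1).fit(X[keep], y[keep])
        for i in range(len(X)):
            if i not in keep and knn.predict(X[i:i + 1])[0] != y[i]:
                keep.append(i)                   # misclassified: add as prototype
                knn = KNeighborsClassifier(n_neighbors=1).fit(X[keep], y[keep])
                changed = True
    return np.array(keep)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
idx = condensed_nn(X, y)
print("reduction rate: %.1f%%" % (100 * (1 - len(idx) / len(X))))
```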
7723 Optimization of Real Time Measured Data Transmission, Given the Amount of Data Transmitted
Authors: Michal Kopcek, Tomas Skulavik, Michal Kebisek, Gabriela Krizanova
Abstract:
The operation of nuclear power plants involves continuous monitoring of the environment in their area. This monitoring is performed using a complex data acquisition system, which collects status information about the system itself and the values of many important physical variables, e.g. temperature, humidity, dose rate, etc. This paper describes a proposal for, and optimization of, the communication that takes place in a teledosimetric system between the central control server, responsible for data processing and storage, and the decentralized measuring stations, which measure the physical variables. Analyses of the ongoing communication were performed, and consequently the system architecture and communication were optimized.
Keywords: Communication protocol, transmission optimization, data acquisition.
7722 Measurement of Innovation Performance
Authors: M. Chobotová, Ž. Rylková
Abstract:
Times full of changes, associated with globalization, tougher competition, changes in market structures and economic downturn, force companies to think about their competitive advantages. These changes can bring a company a competitive advantage and help improve its competitive position in the market. The policy of the European Union is focused on fast-growing innovative companies which quickly respond to market demands and consequently increase their competitiveness. To meet those objectives, companies need the right conditions and the support of their state.
Keywords: Innovation, performance, measurements metrics, indices.
7721 Empirical Process Monitoring Via Chemometric Analysis of Partially Unbalanced Data
Authors: Hyun-Woo Cho
Abstract:
Real-time or in-line process monitoring frameworks are designed to give early warnings of a fault along with meaningful identification of its assignable causes. In the pattern recognition areas of artificial intelligence and machine learning, various promising approaches have been proposed, such as kernel-based nonlinear machine learning techniques. This work presents a kernel-based empirical monitoring scheme for batch-type production processes with the small-sample-size problem of partially unbalanced data. Measurement data from normal operations are easy to collect, whilst data on special events or faults are difficult to collect. In such situations, noise filtering techniques can help enhance process monitoring performance. Furthermore, preprocessing of the raw process data is used to remove unwanted variation in the data. The performance of the monitoring scheme was demonstrated using three-dimensional batch data. The results showed that the monitoring performance was improved significantly in terms of the fault detection success rate.
Keywords: Process Monitoring, kernel methods, multivariate filtering, data-driven techniques, quality improvement.
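The exact kernel scheme of the paper is not spelled out in the abstract; as a generic illustration of kernel-based monitoring trained only on normal batch data, with simple preprocessing and PCA-style noise filtering, a one-class SVM can be sketched as follows. All settings and the data are assumptions, and this is not the authors' method.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM
from sklearn.pipeline import make_pipeline

# Normal-operation batch data only (fault samples are scarce): samples x variables.
rng = np.random.default_rng(7)
normal = rng.normal(size=(300, 20))

monitor = make_pipeline(
    StandardScaler(),          # preprocessing: remove unwanted scale variation
    PCA(n_components=5),       # simple noise filtering by keeping main directions
    OneClassSVM(kernel="rbf", nu=0.05, gamma="scale"),
)
monitor.fit(normal)

new_batch = rng.normal(size=(10, 20)) + np.array([3.0] * 20)  # shifted operation
print(monitor.predict(new_batch))   # -1 flags a suspected fault, +1 is normal
```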
7720 Evaluation of Linear and Geometrically Nonlinear Static and Dynamic Analysis of Thin Shells by Flat Shell Finite Elements
Authors: Djamel Boutagouga, Kamel Djeghaba
Abstract:
The choice of finite element to use in order to predict the nonlinear static or dynamic response of complex structures is an important factor. Thus, the main goal of this research work is to study the effect of the in-plane rotational degrees of freedom in linear and geometrically nonlinear static and dynamic analysis of thin shell structures by flat shell finite elements. For this purpose: first, simple triangular and quadrilateral flat shell finite elements are implemented in an incremental formulation based on the updated Lagrangian corotational description for geometrically nonlinear analysis. The triangular element is a combination of the DKT and CST elements, while the quadrilateral is a combination of the DKQ and the bilinear quadrilateral membrane element. In both elements, the sixth degree of freedom is handled by introducing fictitious stiffness. Second, in the same code, the sixth degree of freedom in these elements is handled differently, where the in-plane rotational d.o.f. is considered as an effective d.o.f. in the in-plane field interpolation. Our goal is to compare the resulting shell elements. Third, the analysis is enlarged to linear dynamic analysis by direct integration using Newmark's implicit method. Finally, the linear dynamic analysis is extended to geometrically nonlinear dynamic analysis, where Newmark's method is used to integrate the equations of motion and the Newton-Raphson method is employed for iterating within each time step increment until equilibrium is achieved. The obtained results demonstrate the effectiveness and robustness of the interpolation of the in-plane rotational d.o.f. and reveal the deficiencies of using fictitious stiffness in linear and nonlinear dynamic analysis.
Keywords: Flat shell, dynamic analysis, nonlinear, Newmark, drilling rotation.
7719 Influence of Non-Structural Elements on Dynamic Response of Multi-Storey RC Building to Mining Shock
Authors: Joanna M. Dulińska, Maria Fabijańska
Abstract:
In the paper, the results of calculations of the dynamic response of a multi-storey reinforced concrete building to a strong mining shock originating from the main region of mining activity in Poland (i.e. the Legnica-Glogow Copper District) are presented. Representative time histories of accelerations registered in three directions were used as ground motion data in the calculations of the dynamic response of the structure. Two variants of a numerical model were applied: a model including only the structural elements of the building and a model including both structural and non-structural elements (i.e. partition walls and ventilation ducts made of brick). It turned out that non-structural elements of multi-storey RC buildings have a small impact, of about 10%, on the natural frequencies of these structures. It was also shown that the dynamic response of the building to the mining shock obtained when all non-structural elements are included in the numerical model is about 20% smaller than when only structural elements are considered. The principal stresses obtained in the calculations of the dynamic response of the multi-storey building to the strong mining shock are at a level of about 30% of the values obtained from static analysis (dead load).
Keywords: Dynamic characteristics of buildings, mining shocks, dynamic response of buildings, non-structural elements.
7718 A Comparison of Image Data Representations for Local Stereo Matching
Authors: André Smith, Amr Abdel-Dayem
Abstract:
The stereo matching problem, while having been present for several decades, continues to be an active area of research. The goal of this research is to find correspondences between elements found in a set of stereoscopic images. With these pairings, it is possible to infer the distance of objects within a scene, relative to the observer. Advancements in this field have led to experimentation with various techniques, from graph-cut energy minimization to artificial neural networks. At the basis of these techniques is a cost function, which is used to evaluate the likelihood of a particular match between points in each image. While, at its core, the cost is based on comparing the image pixel data, there is a general lack of consistency as to which image data representation to use. This paper presents an experimental analysis comparing the effectiveness of the more common image data representations. The goal is to determine the effectiveness of these data representations in reducing the cost for the correct correspondence relative to other possible matches.
Keywords: Colour data, local stereo matching, stereo correspondence, disparity map.
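A minimal sketch of the kind of cost function discussed above: a window-based sum of absolute differences (SAD) between a left and a right image, where the choice made in to_representation (grayscale here) stands in for exactly the design decision the paper compares. The images and parameter values are synthetic placeholders.

```python
import numpy as np

def to_representation(img_rgb):
    """Image data representation under comparison; grayscale is used here,
    but RGB or another colour space could be substituted."""
    return img_rgb.mean(axis=2)

def sad_disparity(left_rgb, right_rgb, max_disp=16, win=3):
    """Brute-force local stereo matching with a SAD cost over a square window."""
    left, right = to_representation(left_rgb), to_representation(right_rgb)
    h, w = left.shape
    disparity = np.zeros((h, w), dtype=np.int32)
    r = win // 2
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            costs = [np.abs(patch - right[y - r:y + r + 1,
                                          x - d - r:x - d + r + 1]).sum()
                     for d in range(max_disp)]
            disparity[y, x] = int(np.argmin(costs))   # lowest-cost correspondence
    return disparity

left = np.random.rand(40, 60, 3)
right = np.roll(left, -4, axis=1)     # synthetic horizontal shift of 4 pixels
print(sad_disparity(left, right)[20, 40])
```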
7717 Flexible, Adaptable and Scaleable Business Rules Management System for Data Validation
Authors: Kashif Kamran, Farooque Azam
Abstract:
The policies governing the business of any organization are well reflected in its business rules. Business rules are implemented by data validation techniques, coded during the software development process. Any change in business policies results in a change to the data validation code used to enforce those policies. Implementing changes in business rules without changing the code is the objective of this paper. The proposed approach enables users to create rule sets at run time, once the software has been developed. The rule sets newly defined by end users are associated with the data variables for which validation is required. The proposed approach facilitates the definition of business rules using all comparison operators and Boolean operators. Multithreading is used to validate the data entered by the end user against the applied business rules. The evaluation of the data is performed by a newly created thread using an enhanced form of the RPN (Reverse Polish Notation) algorithm.
Keywords: Business rules, data validation, multithreading, Reverse Polish Notation.
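To make the evaluation step concrete, here is a minimal Reverse Polish Notation evaluator for rules built from comparison and Boolean operators; the paper's specific RPN enhancements, threading and rule-storage schema are not reproduced, and the field names in the example are hypothetical.

```python
OPS = {
    ">":  lambda a, b: a > b,    "<":  lambda a, b: a < b,
    ">=": lambda a, b: a >= b,   "<=": lambda a, b: a <= b,
    "==": lambda a, b: a == b,   "!=": lambda a, b: a != b,
    "AND": lambda a, b: a and b, "OR": lambda a, b: a or b,
}

def evaluate_rule(rpn_tokens, data):
    """Evaluate a business rule given in Reverse Polish Notation against a
    record; operands are either field names or numeric literals."""
    stack = []
    for token in rpn_tokens:
        if token in OPS:
            b, a = stack.pop(), stack.pop()
            stack.append(OPS[token](a, b))
        elif token in data:
            stack.append(data[token])          # field reference
        else:
            stack.append(float(token))         # numeric literal
    return stack.pop()

# Rule "age >= 18 AND salary > 30000" in RPN form (field names are hypothetical).
rule = ["age", "18", ">=", "salary", "30000", ">", "AND"]
print(evaluate_rule(rule, {"age": 25, "salary": 45000}))   # True
```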
7716 Tidal Data Analysis using ANN
Authors: Ritu Vijay, Rekha Govil
Abstract:
The design of a complete expansion that allows for compact representation of certain relevant classes of signals is a central problem in signal processing applications. Achieving such a representation means knowing the signal features for the purposes of denoising, classification, interpolation and forecasting. Multilayer neural networks are a relatively new class of techniques that are mathematically proven to approximate any continuous function arbitrarily well. Radial Basis Function (RBF) networks, which make use of a Gaussian activation function, have also been shown to be universal approximators. In this age of ever-increasing digitization in the storage, processing, analysis and communication of information, there are numerous applications where one needs to construct a continuously defined function or numerical algorithm to approximate, represent and reconstruct the given discrete data of a signal. Many times one wishes to manipulate the data in a way that requires information not included explicitly in the data, which is done through interpolation and/or extrapolation. Tidal data are a perfect example of a time series, and many statistical techniques have been applied to tidal data analysis and representation; ANNs are a recent addition to these techniques. In the present paper, we describe the time series representation capabilities of a special type of ANN, the Radial Basis Function network, and present the results of tidal data representation using RBF networks. Tidal data analysis and representation is one of the important requirements in marine science for forecasting.
Keywords: ANN, RBF, tidal data.
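A compact sketch of an RBF network of the kind described above: Gaussian basis functions placed at chosen centres along the time axis, with the output weights obtained by linear least squares. The tidal record here is synthetic; the centre count and kernel width are arbitrary choices, not values from the paper.

```python
import numpy as np

def rbf_design(t, centers, width):
    """Gaussian design matrix: one column per radial basis function."""
    return np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

# Synthetic "tidal" record: two periodic constituents plus noise.
t = np.linspace(0, 10, 400)
level = 1.2 * np.sin(2 * np.pi * t / 1.0) + 0.5 * np.sin(2 * np.pi * t / 2.2)
level += 0.05 * np.random.default_rng(3).normal(size=t.size)

centers = np.linspace(t.min(), t.max(), 40)        # RBF centres along the time axis
Phi = rbf_design(t, centers, width=0.15)
weights, *_ = np.linalg.lstsq(Phi, level, rcond=None)

reconstruction = Phi @ weights                     # continuous representation of the data
print("RMS error:", np.sqrt(np.mean((reconstruction - level) ** 2)))
```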
7715 Spatial Data Mining by Decision Trees
Authors: S. Oujdi, H. Belbachir
Abstract:
Existing data mining methods cannot be applied directly to spatial data because spatial specificities, such as spatial relationships, must be taken into account. This paper focuses on classification with decision trees, which are one of the data mining techniques. We propose an extension of the C4.5 algorithm for spatial data based on two different approaches: join materialization and querying the different tables on the fly. Similar work has been done on these two main approaches: the first, join materialization, favors processing time at the expense of memory space, whereas the second, querying the different tables on the fly, saves memory space at the expense of processing time. The modified C4.5 algorithm requires three input tables: a target table, a neighbor table, and a spatial join index that contains the possible spatial relationships between the objects in the target table and those in the neighbor table. The proposed algorithms are applied to spatial data from the accidentology domain. A comparative study of our approach with other work on classification by spatial decision trees is detailed.
Keywords: C4.5 Algorithm, Decision trees, S-CART, Spatial data mining.
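A minimal sketch of the join-materialization approach mentioned above: the target table is joined with the spatial join index and the neighbor table beforehand, and a standard decision tree is then trained on the flattened result. The table names, columns and toy records are illustrative assumptions, not the paper's accidentology schema, and the sklearn tree stands in for the modified C4.5.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Illustrative tables (names and columns are assumptions, not the paper's schema).
target = pd.DataFrame({"id": [1, 2, 3], "speed_limit": [50, 90, 130],
                       "severity": ["low", "high", "high"]})
neighbor = pd.DataFrame({"nid": [10, 11, 12],
                         "type": ["school", "bar", "highway_exit"]})
spatial_index = pd.DataFrame({"id": [1, 2, 3], "nid": [10, 11, 12],
                              "relation": ["near", "near", "adjacent"]})

# Join materialization: flatten the spatial relationships into one table up front.
flat = target.merge(spatial_index, on="id").merge(neighbor, on="nid")
X = pd.get_dummies(flat[["speed_limit", "relation", "type"]])
y = flat["severity"]

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)   # standard (non-spatial) learner
print(tree.predict(X.iloc[[0]]))
```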
7714 Affine Projection Algorithm with Variable Data-Reuse Factor
Authors: ChangWoo Lee, Young Kow Lee, Sung Jun Ban, SungHoo Choi, Sang Woo Kim
Abstract:
This paper suggests a new Affine Projection (AP) algorithm with a variable data-reuse factor, using the condition number as a decision factor. To reduce the computational burden, we adopt a recently reported technique that estimates the condition number of the input data matrix. Several simulations show that the new algorithm performs better than the conventional AP algorithm.
Keywords: Affine projection algorithm, variable data-reuse factor, condition number, convergence rate, misalignment.
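A brief sketch of the affine projection update with the data-reuse factor K chosen per iteration from the condition number of the input data matrix. The paper's condition-number estimation technique and decision rule are not reproduced; np.linalg.cond, the threshold and all filter settings below are stand-in assumptions.

```python
import numpy as np

def ap_adapt(x, d, M=8, mu=0.5, delta=1e-3, K_max=4, cond_limit=100.0):
    """Affine projection adaptive filter with a variable data-reuse factor K."""
    w = np.zeros(M)
    y_hat = np.zeros(len(x))
    for k in range(M + K_max, len(x)):
        # Use the largest K whose input matrix is acceptably conditioned.
        K = K_max
        while K > 1:
            X = np.column_stack([x[k - j - M + 1:k - j + 1][::-1] for j in range(K)])
            if np.linalg.cond(X.T @ X + delta * np.eye(K)) <= cond_limit:
                break
            K -= 1                      # ill-conditioned: reuse fewer past vectors
        else:
            X = x[k - M + 1:k + 1][::-1].reshape(M, 1)
        dk = d[k - K + 1:k + 1][::-1]
        e = dk - X.T @ w
        w = w + mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(K), e)
        y_hat[k] = w @ x[k - M + 1:k + 1][::-1]
    return w, y_hat

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
h = np.array([1.0, 0.5, -0.3, 0.2, 0.0, 0.0, 0.0, 0.0])   # unknown system
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.normal(size=len(x))
w, _ = ap_adapt(x, d)
print(np.round(w, 2))   # should approach h as the filter converges
```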
7713 Ambient Vibration Testing of Existing Buildings in Madinah
Authors: Tarek M. Alguhane, Ayman H. Khalil, M. N. Fayed, Ayman M. Ismail
Abstract:
The elastic period has a primary role in the seismic assessment of buildings. Reliable calculations and/or estimates of the fundamental frequency of a building and its site are essential during the analysis and design process. Various code formulas based on empirical data are generally used to estimate the fundamental frequency of a structure. For existing structures, in addition to code formulas and available analytical tools such as modal analysis, various testing methods, including ambient and forced vibration testing procedures, may be used to determine the dynamic characteristics. In this study, the dynamic properties of 32 buildings located in Madinah, Saudi Arabia, were identified using ambient motions recorded at several spatially distributed locations within each building. The ambient vibration measurements of the buildings have been analyzed, and the fundamental longitudinal and transverse periods of all tested buildings are presented. The fundamental mode of vibration has been compared in plots with code formulae (Saudi Building Code, EC8, and UBC 1997). The results indicate that the measured periods of existing buildings are shorter than those given by most empirical code formulas. Recommendations are given based on common design and construction practice in Madinah city.
Keywords: Ambient vibration, Fundamental period, RC buildings, Infill walls.
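The empirical code formulas compared in the study are all of the form T = Ct * H**x. A small sketch with commonly quoted SI coefficients for UBC 1997 Method A and EC8 is given below; these values are stated from general knowledge of the codes and should be checked against the code editions actually used in the paper.

```python
# Approximate fundamental period from building height H (metres): T = Ct * H**x.
# Coefficients are commonly quoted SI values; verify against the code editions used.
CODE_FORMULAS = {
    "UBC97_RC_moment_frame": (0.0731, 0.75),
    "UBC97_other_buildings": (0.0488, 0.75),
    "EC8_RC_moment_frame":   (0.075, 0.75),
    "EC8_other_structures":  (0.050, 0.75),
}

def empirical_period(height_m, code="EC8_RC_moment_frame"):
    ct, exponent = CODE_FORMULAS[code]
    return ct * height_m ** exponent

for code in CODE_FORMULAS:
    print(code, round(empirical_period(30.0, code), 3), "s")
```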
7712 Attribute Analysis of Quick Response Code Payment Users Using Discriminant Non-negative Matrix Factorization
Authors: Hironori Karachi, Haruka Yamashita
Abstract:
Recently, quick response (QR) code payment systems have been gaining popularity. Many companies introduce new QR code payment services, and the services compete with each other to increase their number of users. To increase the number of users, we should grasp the differences in the demographic information, usage information, and value of users between services. In this study, we conduct an analysis of real-world data provided by Nomura Research Institute, including the demographic data of users and information on users' usage of two services: LINE Pay and PayPay. To analyze such data and interpret their features, Non-negative Matrix Factorization (NMF) is widely used; however, the target data suffer from missing values. EM-algorithm NMF (EMNMF) is used to complete the unknown values in order to understand the features of the given data presented in matrix form. Moreover, to compare the results of the NMF analysis of two matrices, Discriminant NMF (DNMF) shows the differences in user features between the two matrices. In this study, we combine EMNMF and DNMF and analyze the target data. As an interpretation, we show the differences in the features of users between LINE Pay and PayPay.
Keywords: Data science, non-negative matrix factorization, missing data, quality of services.
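One common way to realize the EM-style completion mentioned above is weighted (masked) NMF, where the multiplicative updates use only the observed entries. The sketch below illustrates that idea on a toy users-by-attributes matrix and is not the authors' exact EMNMF/DNMF formulation.

```python
import numpy as np

def masked_nmf(V, mask, rank=3, n_iter=500, eps=1e-9):
    """NMF with missing data: minimize || mask * (V - W @ H) ||_F^2
    using multiplicative updates restricted to the observed entries."""
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    Vm = np.nan_to_num(V) * mask
    for _ in range(n_iter):
        WH = (W @ H) * mask
        H *= (W.T @ Vm) / (W.T @ WH + eps)
        WH = (W @ H) * mask
        W *= (Vm @ H.T) / (WH @ H.T + eps)
    return W, H

# Toy users x attributes matrix with missing entries marked as NaN.
V = np.array([[5.0, 3.0, np.nan, 1.0],
              [4.0, np.nan, 0.0, 1.0],
              [1.0, 1.0, 0.0, 5.0],
              [0.0, 1.0, 5.0, 4.0]])
mask = (~np.isnan(V)).astype(float)
W, H = masked_nmf(V, mask, rank=2)
print(np.round(W @ H, 2))   # completed matrix approximation
```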
7711 Using Data Mining for Learning and Clustering FCM
Authors: Somayeh Alizadeh, Mehdi Ghazanfari, Mohammad Fathian
Abstract:
Fuzzy Cognitive Maps (FCMs) have successfully been applied in numerous domains to show the relations between essential components. In some FCMs there are many nodes related to each other, and more nodes mean more complexity in system behavior and analysis. In this paper, a novel learning method is used to construct FCMs based on historical data, and by using data mining and the DEMATEL method, a new method is defined to reduce the number of nodes. This method clusters the nodes of an FCM based on their cause-and-effect behaviors.
Keywords: Clustering, data mining, Fuzzy Cognitive Map (FCM), learning.
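To make the FCM mechanics concrete, the sketch below iterates the widely used modified Kosko activation rule A(t+1) = f(A(t) + A(t) @ W) with a sigmoid threshold function. The weight matrix here is purely illustrative, whereas in the paper it would be learned from historical data and then reduced by clustering.

```python
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

def run_fcm(weights, state, n_steps=20):
    """Iterate a Fuzzy Cognitive Map: A(t+1) = f(A(t) + A(t) @ W),
    where weights[j, i] is the causal influence of concept j on concept i
    (modified Kosko rule with self-memory)."""
    history = [state]
    for _ in range(n_steps):
        state = sigmoid(state + state @ weights)
        history.append(state)
    return np.array(history)

# Illustrative 4-concept map; in the paper the weight matrix is learned from data.
W = np.array([[0.0,  0.6,  0.0, -0.3],
              [0.0,  0.0,  0.5,  0.0],
              [0.2,  0.0,  0.0,  0.7],
              [0.0, -0.4,  0.0,  0.0]])
A0 = np.array([1.0, 0.0, 0.0, 0.0])
print(np.round(run_fcm(W, A0)[-1], 3))   # steady-state activations
```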