Search results for: Data Structures
7860 Sport Facilities and Social Change: European Funds as an Opportunity for Urban Regeneration
Authors: Lorenzo Maiorino, Fabio Fortuna, Giovanni Panebianco, Marco Sanzari, Gabriella Arcese, Valerio Maria Paolozzi
Abstract:
It is well known that sport is a factor of social cohesion and of breaking down barriers between people. From this point of view, the aim is to demonstrate how, through the (re)generation of sustainable structures, it is possible to give life to a new social, cultural and economic pathway, where possible, in peripheral areas with problems of abandonment and degradation. The aim of this paper is therefore to study realities such as European programs and funds and to highlight the ways in which planning can be used to respond to critical issues such as urban decay, abandonment, and the mitigation of social differences. For this reason, the analysis will be carried out through the Multiannual Financial Framework (MFF) package, the Next Generation EU, the Recovery and Resilience Facility (RRF), the Cohesion Fund, the European Social Fund, and other managed funds. The procedure will rely on sources and data of unquestionable origin, and their relation to the object of study will be highlighted. The project is ambitious and explores a further aspect of the sports theme, which, as we know, is one of the foundations of a healthy society.
Keywords: Sport, social inclusion, urban regeneration, sport facilities, European funds.
7859 Agile Methodology for Modeling and Design of Data Warehouses -AM4DW-
Authors: Nieto Bernal Wilson, Carmona Suarez Edgar
Abstract:
Organizations have structured and unstructured information in different formats, sources, and systems. Part of it comes from ERP systems under OLTP processing that support the information system; however, at the OLAP processing level these organizations present some deficiencies. Part of this problem lies in the lack of interest in extracting knowledge from their data sources, as well as in the absence of the operational capabilities needed to tackle this kind of project. The data warehouse and its applications are considered non-proprietary tools of great interest to business intelligence, since they are base repositories for creating models or patterns (behavior of customers, suppliers, products, social networks and genomics) and facilitate corporate decision making and research. The following paper presents a structured and simple methodology inspired by agile development models such as Scrum, XP and AUP, together with object-relational models, spatial data models, and a baseline of data modeling under UML and big data. In this way it seeks to deliver an agile methodology for developing data warehouses that is simple and easy to apply. The methodology naturally takes into account processes for information analysis, visualization and data mining, particularly for generating patterns and models derived from the structured fact objects.
Keywords: Data warehouse, data model, big data, fact object, object-relational fact, data warehouse development process.
7858 Distributed Data-Mining by Probability-Based Patterns
Authors: M. Kargar, F. Gharbalchi
Abstract:
In this paper, a new method is suggested for distributed data-mining using probability patterns. These patterns use decision trees and decision graphs, and care is taken that they are valid, novel, useful, and understandable. Considering a set of functions, the system reaches a good pattern or better objectives. Using the suggested method, we are able to extract useful information from massive and multi-relational databases.
Keywords: Data-mining, Decision tree, Decision graph, Pattern, Relationship.
7857 K-Means for Spherical Clusters with Large Variance in Sizes
Authors: A. M. Fahim, G. Saake, A. M. Salem, F. A. Torkey, M. A. Ramadan
Abstract:
Data clustering is an important data exploration technique with many applications in data mining. The k-means algorithm is well known for its efficiency in clustering large data sets. However, it is suitable for spherical clusters of similar sizes and densities, and the quality of the resulting clusters decreases when the data set contains spherical clusters with large variance in sizes. In this paper, we introduce a competent procedure to overcome this problem. The proposed method is based on shifting the center of the large cluster toward the small cluster and recomputing the membership of the small cluster's points. The experimental results reveal that the proposed algorithm produces satisfactory results.
Keywords: K-Means, Data Clustering, Cluster Analysis.
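A minimal sketch of the center-shifting heuristic described above, assuming an ordinary k-means pass is run first; the shift fraction `alpha` and the use of scikit-learn are illustrative choices, not taken from the paper:

```python
import numpy as np
from sklearn.cluster import KMeans

def shifted_kmeans(X, k, alpha=0.3):
    """Plain k-means followed by the heuristic sketched in the abstract:
    shift the largest cluster's center toward the smallest cluster's
    center, then recompute the membership of the small cluster's points.
    alpha (the shift fraction) is a hypothetical tuning parameter."""
    km = KMeans(n_clusters=k, n_init=10).fit(X)
    centers, labels = km.cluster_centers_.copy(), km.labels_.copy()
    sizes = np.bincount(labels, minlength=k)
    big, small = int(np.argmax(sizes)), int(np.argmin(sizes))
    # Move the large cluster's center part of the way toward the small one.
    centers[big] += alpha * (centers[small] - centers[big])
    # Reassign only the small cluster's points against the updated centers.
    idx = np.where(labels == small)[0]
    dists = np.linalg.norm(X[idx, None, :] - centers[None, :, :], axis=2)
    labels[idx] = np.argmin(dists, axis=1)
    return centers, labels
```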
7856 Representing Data without Lost Compression Properties in Time Series: A Review
Authors: Nabilah Filzah Mohd Radzuan, Zalinda Othman, Azuraliza Abu Bakar, Abdul Razak Hamdan
Abstract:
Uncertain data is believed to be an important issue in building a prediction model. The main objective of time series uncertainty analysis is to formulate uncertain data in order to gain knowledge and fit a low-dimensional model prior to a prediction task. This paper discusses the performance of a number of techniques for dealing with uncertain data, specifically those which handle the uncertain data condition by minimizing the loss of compression properties.
Keywords: Compression properties, uncertainty, uncertain time series, mining technique, weather prediction.
7855 Are XBRL-based Financial Reports Better than Non-XBRL Reports? A Quality Assessment
Authors: Zhenkun Wang, Simon S. Gao
Abstract:
Using a scoring system, this paper provides a comparative assessment of data quality between XBRL-formatted financial reports and non-XBRL financial reports. It shows a major improvement in the data quality of XBRL-formatted reports. Although XBRL-formatted financial reports did not show much quality advantage at the beginning, they later display a large improvement in data quality in almost all aspects. With improved XBRL web applications for data management, presentation and analysis, XBRL-formatted financial reports offer much better accessibility, accuracy and timeliness.
Keywords: Data quality, financial report, information, XBRL.
7854 Meta Model for Optimum Design Objective Function of Steel Frames Subjected to Seismic Loads
Authors: Salah R. Al Zaidee, Ali S. Mahdi
Abstract:
Except for simple problems of statically determinate structures, optimum design problems in structural engineering have implicit objective functions, where structural analysis and design are essential within each search loop. With these implicit functions, the structural engineer is usually forced to write his/her own computer code for analysis, design, and searching for the optimum design among many feasible candidates, and cannot take advantage of available software for structural analysis, design, and optimum searching. The meta-model is a regression model used to transform an implicit objective function into an explicit one, which in turn decouples the structural analysis and design processes from the optimum searching process. With the meta-model, well-known software for structural analysis and design can be used in sequence with optimum-searching software. In this paper, the meta-model has been used to develop an explicit objective function for plane steel frames subjected to dead, live, and seismic forces. Frame topology is assumed to be predefined based on architectural and functional requirements. Column and beam sections and different connection details are the main design variables in this study. Columns and beams are grouped to reduce the number of design variables and to make the problem similar to that adopted in engineering practice. Data for the implicit objective function have been generated based on analysis and assessment of many design proposals with CSI SAP software. These data have been used later in SPSS software to develop a pure quadratic nonlinear regression model for the explicit objective function. Good correlations, with a coefficient R² in the range from 0.88 to 0.99, have been noted between the original implicit functions and the corresponding explicit functions generated with the meta-model.
Keywords: Meta-model, objective function, steel frames, seismic analysis, design.
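The pure quadratic meta-model itself is a small piece of code. The paper fits it in SPSS on results generated with CSI SAP, so the scikit-learn call and the variable names below are stand-ins for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def fit_pure_quadratic(X, w):
    """Fit w ~ b0 + sum_i b_i*x_i + sum_i c_i*x_i^2, i.e. a pure
    quadratic model (squared terms, no cross products). X holds the
    grouped design variables of the analysed frames; w holds the
    implicit objective values computed by structural analysis."""
    Z = np.hstack([X, X ** 2])            # augment features with squares
    model = LinearRegression().fit(Z, w)
    print("R^2 =", r2_score(w, model.predict(Z)))  # cf. 0.88-0.99 in the paper
    return model

# Hypothetical usage on 200 analysed designs with 5 grouped variables:
# X = np.random.rand(200, 5); w = some_structural_analysis(X)  # placeholder
# meta = fit_pure_quadratic(X, w)
```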
7853 Modeling of Random Variable with Digital Probability Hyper Digraph: Data-Oriented Approach
Authors: A. Habibizad Navin, M. Naghian Fesharaki, M. Mirnia, M. Kargar
Abstract:
In this paper, we introduce the Digital Probability Hyper Digraph for modeling a random variable as a hierarchical data-oriented model.
Keywords: Data-Oriented Models, Data Structure, Digital Probability Hyper Digraph, Random Variable, Statistics and Probability.
7852 Wireless Transmission of Big Data Using Novel Secure Algorithm
Authors: K. Thiagarajan, K. Saranya, A. Veeraiah, B. Sudha
Abstract:
This paper presents a novel algorithm for secure, reliable and flexible transmission of big data in two-hop wireless networks using a cooperative jamming scheme. Two-hop wireless networks consist of source, relay and destination nodes. Big data has to be transmitted from source to relay and from relay to destination with security deployed at the physical layer. The cooperative jamming scheme makes the transmission of big data more secure by protecting it from eavesdroppers and malicious nodes of unknown location. The novel algorithm, which ensures secure and energy-balanced transmission of big data, includes selecting the data-transmitting region, segmenting the selected region, determining the probability ratio for each node (capture node, non-capture node and eavesdropper node) in every segment, and evaluating the probability using binary-based evaluation. If the transmission is secure, it resumes with the two-hop transmission of big data; otherwise, the attackers are repelled by the cooperative jamming scheme and the data is transmitted in two-hop transmission.
Keywords: Big data, cooperative jamming, energy balance, physical layer, two-hop transmission, wireless security.
7851 A Temporary Shelter Proposal for Displaced People
Authors: İ. Yetkin, F. Maden, S. Tosun, Y. Akgün, Ö. Kilit, K. Korkmaz, G. Kiper, M. Gündüzalp
Abstract:
Forced migration, whether caused by conflicts or other factors, frequently places individuals in vulnerable situations, necessitating immediate access to shelter. To promptly address the immediate needs of affected individuals, temporary shelters are often established. These shelters are characterized by their adaptable and functional nature, encompassing lightweight and sustainable structural systems, rapid assembly capabilities, modularity, and transportability. Shelter design is contingent upon demand, resulting in distinct phases for different structural forms. A multi-phased shelter approach covers emergency response, temporary shelter, and permanent reconstruction. Emergency shelters play a critical role in providing immediate life-saving aid. In contrast, temporary and transitional shelters, also called "T-shelters," offer longer-term living environments during recovery and rebuilding. Among these, temporary shelters are more extensively covered in the literature due to their diverse inhabitation functions. The roles of emergency shelters and temporary shelters are inherently separate, addressing distinct aspects of the sheltering process. Given their prolonged usage, temporary shelters are built for greater durability than emergency shelters. Nonetheless, inadequacies in temporary shelters can lead to challenges in ensuring habitability: non-expandable structures unsuitable for accommodating large families, short-term shelters that worsen conditions, non-waterproof materials providing insufficient protection against bad weather, and complex installation systems all contribute to these problems. Given the aforementioned problems, there arises a need to develop adaptive shelters that feature lightweight components for ease of transport, allow rapid assembly, and use durable materials to withstand adverse weather conditions. In this study, first, the state of the art on temporary shelters is presented. Then, a temporary shelter composed of foldable plates is proposed, which can easily be assembled and transported. The proposed shelter is evaluated with respect to its movement capacity, transportability, and flexibility. This study makes a valuable contribution to the literature since it not only offers a systematic analysis of temporary shelters utilizing kinetic systems but also presents a practical solution that meets the necessary design requirements.
Keywords: Deployable structures, disasters, foldable plates, temporary shelters, transformable structures.
7850 Microfluidic Manipulation for Biomedical and Biohealth Applications
Authors: Reza Hadjiaghaie Vafaie, Sevda Givtaj
Abstract:
Automation and control of biological samples and solutions at the microscale is a major advantage for biochemistry analysis and biological diagnostics. Despite the known potential of miniaturization in biochemistry and biomedical applications, comparatively little is known about fluid automation and control at the microscale. Here, we study the electric field effect inside a fluidic channel, and proper electrode structures with different patterns are proposed to form forward, reverse, and rotational flows inside the channel. The simulation results confirmed that AC electro-thermal flow is efficient for the control and automation of high-conductivity solutions. In this research, the fluid pumping and mixing effects were numerically studied by solving the coupled electric, temperature, hydrodynamic, and concentration fields inside a microchannel. From an experimental point of view, the electrode structures are deposited on a silicon substrate and bonded to a PDMS microchannel to form a microfluidic chip. The motions of fluorescent particles in pumping and mixing modes were captured using a CCD camera. By measuring the frequency response of the fluid and exciting the electrodes with the proper voltage, the fluid motions (including pumping and mixing effects) are observed inside the channel through the CCD camera. Based on the results, there is good agreement between the experimental and simulation studies.
Keywords: Microfluidic, nano/micro actuator, AC electrothermal, Reynolds number, micropump, micromixer, microfabrication, mass transfer, biomedical applications.
7849 Study of Efficiency and Capability LZW++ Technique in Data Compression
Authors: Yusof. Mohd Kamir, Mat Deris. Mohd Sufian, Abidin. Ahmad Faisal Amri
Abstract:
The purpose of this paper is to show the efficiency and capability of LZW++ in data compression. The LZW++ technique is an enhancement of the existing LZW technique, produced by modifying it: LZW reads one character at a time, whereas LZW++ reads three characters at a time. This paper focuses on data compression and tests the efficiency and capability of LZW++ on different data formats such as doc, pdf and text files. Several experiments have been carried out on these different types of data format. The results show that the LZW++ technique is better than the existing LZW technique in terms of file size.
Keywords: Data Compression, Huffman Encoding, LZW, LZW++, RLL, Size.
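The abstract only states that LZW++ consumes three characters per step where LZW consumes one, so the following is a hedged reconstruction of that idea layered on the standard LZW loop, not the authors' code:

```python
def lzw_chunked(data: str, chunk: int = 3):
    """LZW-style dictionary coder whose input alphabet is fixed-size
    character chunks (chunk=3 mirrors the three-characters-at-a-time
    reading attributed to LZW++). Returns the code stream and the
    dictionary built along the way."""
    table, out = {}, []
    next_code = 0
    w = ""
    for i in range(0, len(data), chunk):
        c = data[i:i + chunk]
        if c not in table:                 # register an unseen chunk
            table[c] = next_code
            next_code += 1
        wc = w + c
        if wc in table:                    # extend the current match
            w = wc
        else:                              # emit the match, learn wc
            out.append(table[w])
            table[wc] = next_code
            next_code += 1
            w = c
    if w:
        out.append(table[w])
    return out, table
```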
7848 Impact of Stack Caches: Locality Awareness and Cost Effectiveness
Authors: Abdulrahman K. Alshegaifi, Chun-Hsi Huang
Abstract:
Treating data based on its location in memory has received much attention in recent years due to its different properties, which offer important aspects for cache utilization. Stack data and non-stack data may interfere with each other's locality in the data cache. One important property of stack data is its high spatial and temporal locality. In this work, we simulate a non-unified cache design that splits the data cache into stack and non-stack caches in order to keep stack data and non-stack data separate. We observe that the overall hit rate of the non-unified cache design is sensitive to the size of the non-stack cache. We then investigate the appropriate size and associativity for the stack cache to achieve a high hit ratio, especially since over 99% of accesses are directed to the stack cache. The results show that, on average, more than 99% stack cache accuracy is achieved with 2KB of capacity and 1-way associativity. Further, we analyze the improvement in hit rate when adding a small, fixed-size stack cache at level 1 to a unified cache architecture. The results show that the overall hit rate of the unified cache design with an added 1KB stack cache improves by approximately, on average, 3.9% for the Rijndael benchmark. The stack cache is simulated using the SimpleScalar toolset.
Keywords: Hit rate, Locality of program, Stack cache, Stack data.
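A toy model is enough to reproduce this kind of split-cache experiment; the class below and its parameters are a sketch of ours (the paper itself used the SimpleScalar toolset):

```python
class Cache:
    """Tiny set-associative cache with LRU replacement, for hit-rate
    experiments on address traces."""
    def __init__(self, size_bytes, ways, line=32):
        self.sets, self.ways, self.line = size_bytes // (line * ways), ways, line
        self.lru = [[] for _ in range(self.sets)]   # per-set tag lists, MRU last
        self.hits = self.accesses = 0

    def access(self, addr):
        self.accesses += 1
        s = (addr // self.line) % self.sets
        tag = addr // (self.line * self.sets)
        if tag in self.lru[s]:
            self.hits += 1
            self.lru[s].remove(tag)
        elif len(self.lru[s]) >= self.ways:
            self.lru[s].pop(0)                       # evict least recently used
        self.lru[s].append(tag)

# Split design: a 2KB direct-mapped stack cache (the size/associativity the
# abstract reports as sufficient) beside a larger non-stack cache.
stack_cache, nonstack_cache = Cache(2048, 1), Cache(16384, 4)
# for addr, is_stack in trace:   # the trace would come from a simulator
#     (stack_cache if is_stack else nonstack_cache).access(addr)
```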
7847 Damage Localization of Deterministic-Stochastic Systems
Authors: Yen-Po Wang, Ming-Chih Huang, Ming-Lian Chang
Abstract:
A scheme integrating deterministic-stochastic subspace system identification with the damage localization vector method is proposed in this study for damage detection of structures based on seismic response data. A series of shaking table tests using a five-storey steel frame was conducted at the National Center for Research on Earthquake Engineering (NCREE), Taiwan. Damage conditions were simulated by reducing the cross-sectional area of some of the columns at the bottom. Both single and combined multiple damage conditions at various locations have been considered. In the system identification analysis, either full or partial observation conditions have been taken into account. It has been shown that the damaged stories can be identified from the global responses of the structure to earthquakes if sufficiently observed. In addition to detecting damage(s) with respect to the intact structure, identification of new or extended damage in the as-damaged (ill-conditioned) counterpart has also been studied. The proposed scheme proves to be effective.
Keywords: Damage locating vectors, deterministic-stochastic subspace system, shaking table tests, system identification.
7846 Cross Project Software Fault Prediction at Design Phase
Authors: Pradeep Singh, Shrish Verma
Abstract:
Software fault prediction models are created using source code, metrics computed from the same or a previous version of the code, and related fault data. Some companies do not store and keep track of all the artifacts required for software fault prediction. To construct a fault prediction model for such a company, training data from other projects can be one potential solution. The earlier a fault is predicted, the less it costs to correct. The training data consist of metrics data and related fault data at the function/module level. This paper investigates fault prediction at an early stage using cross-project data, focusing on design metrics. In this study, an empirical analysis is carried out to validate design metrics for cross-project fault prediction. The machine learning technique used for evaluation is Naïve Bayes. The design-phase metrics of other projects can be used as an initial guideline for projects where no previous fault data are available. We analyze seven datasets from the NASA Metrics Data Program, which offer design as well as code metrics. Overall, the results of cross-project prediction are comparable to within-company learning.
Keywords: Software Metrics, Fault prediction, Cross project, Within project.
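The evaluation setup reduces to training on one project's design metrics and testing on another's. A sketch with scikit-learn's Gaussian Naïve Bayes (the abstract names Naïve Bayes but not the variant, so that choice and the variable names are assumptions):

```python
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import classification_report

def cross_project_fault_prediction(X_src, y_src, X_tgt, y_tgt):
    """Train on the source project's design metrics (e.g., from a NASA
    MDP dataset) and evaluate on a different target project, mirroring
    the cross-project setting described above."""
    clf = GaussianNB().fit(X_src, y_src)
    print(classification_report(y_tgt, clf.predict(X_tgt)))
    return clf
```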
7845 Automatic 3D Reconstruction of Coronary Artery Centerlines from Monoplane X-ray Angiogram Images
Authors: Ali Zifan, Panos Liatsis, Panagiotis Kantartzis, Manolis Gavaises, Nicos Karcanias, Demosthenes Katritsis
Abstract:
We present a new method for the fully automatic 3D reconstruction of coronary artery centerlines, using two X-ray angiogram projection images from a single rotating monoplane acquisition system. During the first stage, the input images are smoothed using curve evolution techniques. Next, a simple yet efficient multiscale method, based on the information of the Hessian matrix, is introduced for the enhancement of the vascular structure. Hysteresis thresholding using different image quantiles is used to threshold the arteries. This stage is followed by a thinning procedure to extract the centerlines. The resulting skeleton image is then pruned using morphological and pattern recognition techniques to remove non-vessel-like structures. Finally, edge-based stereo correspondence is solved using a parallel evolutionary optimization method based on symbiosis. The detected 2D centerlines, combined with disparity map information, allow the reconstruction of the 3D vessel centerlines. The proposed method has been evaluated on patient data sets.
Keywords: Vessel enhancement, centerline extraction, symbiotic reconstruction.
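The 2D enhance-threshold-thin part of the pipeline can be approximated with scikit-image; the Frangi filter below stands in for the authors' Hessian-based vesselness measure, and the quantile thresholds are illustrative:

```python
import numpy as np
from skimage.filters import frangi, apply_hysteresis_threshold
from skimage.morphology import skeletonize

def extract_centerlines(angiogram: np.ndarray) -> np.ndarray:
    """Multiscale Hessian-based vessel enhancement, hysteresis
    thresholding on image quantiles, then thinning to a skeleton."""
    vesselness = frangi(angiogram, sigmas=range(1, 6))
    lo, hi = np.quantile(vesselness, [0.90, 0.98])   # assumed quantiles
    mask = apply_hysteresis_threshold(vesselness, lo, hi)
    return skeletonize(mask)
```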
7844 Extreme Temperature Forecast in Mbonge, Cameroon through Return Level Analysis of the Generalized Extreme Value (GEV) Distribution
Authors: Nkongho Ayuketang Arreyndip, Ebobenow Joseph
Abstract:
In this paper, temperature extremes are forecast by employing the block maxima method of the Generalized Extreme Value (GEV) distribution to analyse temperature data from the Cameroon Development Corporation (C.D.C.). Considering two sets of data (raw and simulated) and two models of the GEV distribution (stationary and non-stationary), a return level analysis is carried out. It was found that in the stationary model the return values are constant over time for the raw data, while for the simulated data the return values show an increasing trend with an upper bound. In the non-stationary model, the return levels of both the raw and simulated data show an increasing trend with an upper bound. This clearly shows that even though temperatures in the tropics show signs of increasing in the future, there is a maximum temperature that will not be exceeded. The results of this paper are very important for agricultural and environmental research.
Keywords: Return level, Generalized Extreme Value (GEV), Meteorology, Forecasting.
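The block-maxima recipe is standard: fit a stationary GEV to the block maxima and read the T-year return level off the quantile function at probability 1 - 1/T. A sketch with SciPy, assuming the maxima have already been extracted from the C.D.C. series:

```python
import numpy as np
from scipy.stats import genextreme

def gev_return_levels(block_maxima, periods=(2, 5, 10, 25, 50, 100)):
    """Fit a stationary GEV (SciPy's shape parameter c = -xi) and
    return the return level for each period T as the (1 - 1/T)
    quantile of the fitted block-maxima distribution."""
    c, loc, scale = genextreme.fit(block_maxima)
    T = np.asarray(periods, dtype=float)
    levels = genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
    return dict(zip(periods, levels))
```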
7843 Masonry CSEB Building Models under Shaketable Testing-An Experimental Study
Authors: Lakshmi Keshav, V. G. Srisanthi
Abstract:
In this experimental investigation, shaking table tests were conducted on two reduced-scale models representing a normal single-room building constructed from Compressed Stabilized Earth Blocks (CSEB) made of locally available soil. One model was constructed with earthquake-resisting features (EQRF), comprising a sill band, a lintel band and vertical bands to control the building vibration, and the other without earthquake-resisting features. To examine the seismic capacity of the models, particularly when subjected to long-period, large-amplitude ground motion over many cycles of repeated loading, the test specimens were shaken repeatedly until failure. The test results from the high-end data acquisition system show that the model with EQRF behaves better than the one without EQRF. This modified masonry model, with new material combined with new bands, is used to improve the behavior of masonry buildings.
Keywords: Earthquake resisting features, compressed stabilized earth blocks, masonry structures, shake table testing, horizontal and vertical bands.
7842 Mining Multicity Urban Data for Sustainable Population Relocation
Authors: Xu Du, Aparna S. Varde
Abstract:
In this research, we propose to conduct diagnostic and predictive analysis of the key factors and consequences of urban population relocation. To achieve this goal, urban simulation models extract urban development trends as land-use change patterns from a variety of data sources. The results are treated as part of urban big data along with other information such as population change and economic conditions. Multiple data mining methods are deployed on this data to analyze nonlinear relationships between parameters. The result determines the driving forces of population relocation with respect to urban sprawl and urban sustainability and their related parameters. This work sets the stage for developing a comprehensive urban simulation model catering to specific questions by targeted users. It contributes towards achieving sustainability as a whole.
Keywords: Data Mining, Environmental Modeling, Sustainability, Urban Planning.
7841 Evaluation of Residual Stresses in Human Face as a Function of Growth
Authors: M. A. Askari, M. A. Nazari, P. Perrier, Y. Payan
Abstract:
Growth and remodeling of biological structures have gained much attention over the past decades. Determining the response of living tissues to mechanical loads is necessary for a wide range of developing fields such as prosthetics design or computer-assisted surgical interventions. It is a well-known fact that biological structures are never stress-free, even when externally unloaded. The exact origin of these residual stresses is not clear, but theoretically, growth is one of the main sources. Extracting body organs' shapes from medical imaging does not produce any information regarding the existing residual stresses in those organs. The simplest cause of such stresses is gravity, since an organ grows under its influence from birth. Ignoring such residual stresses might cause erroneous results in numerical simulations; accounting for them can improve the accuracy of mechanical analysis results. This paper presents an original computational framework based on gradual growth to determine the residual stresses due to growth. To illustrate the method, we apply it to a finite element model of a healthy human face reconstructed from medical images. The distribution of residual stress in facial tissues is computed, which can counteract the effect of gravity and maintain tissue firmness. Our assumption is that tissue wrinkles caused by aging could be a consequence of decreasing residual stress that no longer counteracts gravity. Taking these stresses into account therefore seems extremely important in maxillofacial surgery; it would indeed help surgeons to estimate tissue changes after surgery.
Keywords: Finite element method, growth, residual stress, soft tissue.
7840 An Ant-based Clustering System for Knowledge Discovery in DNA Chip Analysis Data
Authors: Minsoo Lee, Yun-mi Kim, Yearn Jeong Kim, Yoon-kyung Lee, Hyejung Yoon
Abstract:
Biological data has several characteristics that strongly differentiate it from typical business data: it is much more complex, usually large in size, and continuously changing. Until recently, business data was the main target for discovering trends, patterns, and future expectations. However, with the recent rise of biotechnology, the powerful technology that was used for analyzing business data is now being applied to biological data. With the advanced technology at hand, the main trend in biological research is rapidly changing from structural DNA analysis to understanding the cellular functions of DNA sequences. DNA chips are now being used to perform experiments, and DNA analysis processes are being used by researchers. Clustering is one of the important processes used for grouping together similar entities. There are many clustering algorithms, such as hierarchical clustering, self-organizing maps, K-means clustering and so on. In this paper, we propose a clustering algorithm that imitates the ecosystem, taking into account the features of biological data. We implemented the system using an Ant-Colony clustering algorithm, which decides the number of clusters automatically. The system processes the input biological data, runs the Ant-Colony algorithm, draws the Topic Map, assigns clusters to the genes and displays the output. We tested the algorithm with test data of 100 to 1000 genes and 24 samples, and show promising results for applying this algorithm to clustering DNA chip data.
Keywords: Ant colony system, biological data, clustering, DNA chip.
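The abstract does not spell out the ants' pick-up and drop rules, so the sketch below shows the classic Lumer-Faieta rules on which ant-based clustering systems of this kind are commonly built; the constants are conventional defaults, not values from the paper:

```python
import numpy as np

def neighbourhood_density(item, neighbours, alpha=0.5):
    """Average similarity of `item` to the items on surrounding grid
    cells; 0 for an empty neighbourhood."""
    if len(neighbours) == 0:
        return 0.0
    d = np.linalg.norm(np.asarray(neighbours) - item, axis=1)
    return max(0.0, float(np.mean(1.0 - d / alpha)))

def p_pick(f, k1=0.10):
    return (k1 / (k1 + f)) ** 2        # isolated/dissimilar genes get picked up

def p_drop(f, k2=0.15):
    return 2.0 * f if f < k2 else 1.0  # genes are dropped among similar ones
```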
7839 The Resource Description Framework (RDF) as a Modern Structure for Medical Data
Authors: Gabriela Lindemann, Danilo Schmidt, Thomas Schrader, Dietmar Keune
Abstract:
The amount and heterogeneity of data in biomedical research, notably in interdisciplinary fields, requires new methods for the collection, presentation and analysis of information. Important data from laboratory experiments as well as patient trials are available but come from distributed resources. The Charité - University Hospital Berlin has established, together with the German Research Foundation (DFG), a new information service centre for kidney diseases and transplantation (Open European Nephrology Science Centre - OpEN.SC). Besides the collaborative aspect of creating new research groups, every partner or institution of this science information centre that makes its own data available is allowed to search the whole data pool of the various participating centres. A core task is the implementation of a non-restricting, open data structure for the various different data sources. We decided to use a modern RDF model and, in a first phase, transformed original data coming from the web-based Electronic Patient Record database TBase©.
Keywords: Medical databases, Resource Description Framework (RDF), metadata repository.
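As an illustration of the kind of transformation described, one TBase© record can be mapped to RDF triples with rdflib; the vocabulary (namespace and property names) is hypothetical, since the abstract does not publish the OpEN.SC schema:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

OPENSC = Namespace("http://example.org/opensc/")   # placeholder namespace

g = Graph()
patient = OPENSC["patient/12345"]                  # hypothetical identifier
g.add((patient, RDF.type, OPENSC.TransplantPatient))
g.add((patient, OPENSC.creatinine, Literal("1.4", datatype=XSD.decimal)))
g.add((patient, OPENSC.transplantDate, Literal("2006-03-17", datatype=XSD.date)))
print(g.serialize(format="turtle"))
```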
7838 XML Data Management in Compressed Relational Database
Authors: Hongzhi Wang, Jianzhong Li, Hong Gao
Abstract:
XML is an important standard for data exchange and representation. Using a mature relational database system to support XML data may bring some advantages, but storing XML in a relational database entails obvious redundancy that wastes disk space, bandwidth and disk I/O when querying XML data. For efficient storage and querying of XML, it is necessary to use compressed XML data in the relational database. In this paper, a compressed relational database technology supporting XML data is presented. The original relational storage structure is suited to XPath query processing, and the compression method preserves this feature. Besides traditional relational database techniques, additional query processing technologies on compressed relations and for special XML structures are presented, including technologies for XQuery processing in a compressed relational database.
Keywords: XML, compression, query processing.
7837 A System for Analyzing and Eliciting Public Grievances Using Cache Enabled Big Data
Authors: P. Kaladevi, N. Giridharan
Abstract:
The system for analyzing and eliciting public grievances serves its main purpose of receiving and processing all sorts of complaints from the public and responding to users. Due to the large number of complaints, the data becomes big data, which is difficult to store and process. The proposed system uses HDFS to store the big data and MapReduce to process it. The concept of a cache was applied in the system to provide immediate responses and timely action using big data analytics; cache-enabled big data improves the response time of the system. The unstructured data provided by users are efficiently handled through the MapReduce algorithm. The processing of complaints takes place in the order of the hierarchy of authority. The drawbacks of the traditional database system used in existing systems are overcome by our system through the cache-enabled Hadoop Distributed File System. MapReduce framework code can possibly leak sensitive data through the computation process; we propose a system that adds noise to the output of the reduce phase to avoid signaling the presence of sensitive data. If complaints are not processed in ample time, they are automatically forwarded to a higher authority, which ensures assured processing. A copy of the filed complaint is sent as a digitally signed PDF document to the user's mail id, serving as proof. The system reports serve as essential data when making important decisions based on legislation.
Keywords: Big Data, Hadoop, HDFS, Caching, MapReduce, web personalization, e-governance.
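A minimal sketch of the noisy-reduce idea: count complaints per department in map/reduce style and perturb each reduced value. The Laplace mechanism and the epsilon parameter are our illustrative reading of "add noise to the output of the reduce phase"; the real system runs on Hadoop rather than in-process Python:

```python
import numpy as np
from collections import defaultdict

def map_phase(complaints):
    """Mapper: emit (department, 1) for every complaint record."""
    for c in complaints:
        yield c["department"], 1

def reduce_phase(pairs, epsilon=1.0):
    """Reducer: aggregate counts, then add Laplace noise to each output
    so the presence of sensitive records is not signalled."""
    counts = defaultdict(int)
    for dept, one in pairs:
        counts[dept] += one
    return {d: n + np.random.laplace(0.0, 1.0 / epsilon)
            for d, n in counts.items()}

# complaints = [{"department": "water"}, {"department": "roads"}, ...]
# noisy_counts = reduce_phase(map_phase(complaints))
```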
7836 Developing Road Performance Measurement System with Evaluation Instrument
Authors: Kati Kõrbe Kaare, Kristjan Kuhi, Ott Koppel
Abstract:
Transportation authorities need to provide the services and facilities that are critical to every country's well-being and development. Management of the road network is becoming increasingly challenging as demands increase and resources are limited. Public sector institutions are integrating performance information into budgeting, managing and reporting by implementing performance measurement systems. In the face of growing challenges, performance measurement of road networks is attracting growing interest in many countries. The large scale of public investment makes the maintenance and development of road networks an area where such systems are an important assessment tool. Transportation agencies have been using performance measurement and modeling as part of pavement and bridge management systems. Recently the focus has been on extending the process to applications in road construction and maintenance systems, operations and safety programs, and administrative structures and procedures. To eliminate failure and dysfunctional consequences, this paper presents the importance of obtaining objective data and implementing an evaluation instrument where necessary.
Keywords: Key performance indicators, performance measurement system, evaluation, system architecture.
7835 Synthesis, Structure and Properties of NZP/NASICON Structured Materials
Authors: E. A. Asabina, V. I. Pet'kov, P. A. Mayorov, A. V. Markin, N. N. Smirnova, A. M. Kovalskii, A. A. Usenko
Abstract:
The purpose of this work was to synthesize and investigate the phase formation, structure and thermophysical properties of the phosphates M0.5+xM'xZr2–x(PO4)3 (M = Cd, Sr, Pb; M' = Mg, Co, Mn). The compounds were synthesized by the sol-gel method. The results showed the formation of limited solid solutions of the NZP/NASICON type. The crystal structures of the triple phosphates of composition MMg0.5Zr1.5(PO4)3 were refined by the Rietveld method using XRD data. The heat capacity (8–660 K) of the phosphates Pb0.5+xMgxZr2-x(PO4)3 (x = 0, 0.5) was measured, and reversible polymorphic transitions were found at temperatures close to room temperature. The results of the Rietveld structure refinement showed that the polymorphism is caused by disordering of lead cations in the cavities of the NZP/NASICON structure. The thermal expansion (298−1073 K) of the phosphates MMg0.5Zr1.5(PO4)3 was studied by the XRD method, and the compounds were found to belong to the middle- and low-expanding materials. The thermal diffusivity (298–573 K) of the ceramic phosphate samples decreased slightly with increasing temperature. As demonstrated, the studied phosphates have better thermophysical characteristics than widespread fire-resistant materials such as zirconia.
Keywords: NASICON, NZP, phosphate, structure, synthesis, thermophysical properties.
7834 Seismic Base Shear Force Depending on Building Fundamental Period and Site Conditions: Deterministic Formulation and Probabilistic Analysis
Authors: S. Dorbani, M. Badaoui, D. Benouar
Abstract:
The aim of this paper is to investigate the effect of the fundamental period of reinforced concrete buildings (6-, 9-, and 12-storey) with different floor plans: symmetric, mono-symmetric, and unsymmetric. These structures are erected at different epicentral distances. Using data from the Boumerdes, Algeria (2003) earthquake, we focused primarily on establishing a deterministic formulation linking the base shear force to two parameters: the first is the fundamental period, which represents the numerical fingerprint of the structure, and the second is the epicentral distance, used to represent the impact of the earthquake on this force. In a second step, with a view to highlighting the effect of uncertainty in these parameters on the analyzed response, the parameters are modeled as random variables with a log-normal distribution. Varying the coefficients of variation of the chosen uncertain parameters showed that the effect of fundamental-period uncertainty on the base shear force statistics is low compared to the influence of epicentral-distance uncertainty.
Keywords: Base shear force, fundamental period, epicentral distance, uncertainty, lognormal variable, statistics.
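The probabilistic step can be reproduced by Monte Carlo once a deterministic V(T, R) is in hand. The paper's fitted expression is not reproduced in the abstract, so the power-law form and every coefficient below are placeholders; only the lognormal parameterization by mean and coefficient of variation follows the text:

```python
import numpy as np

def lognormal_sample(mean, cov, size):
    """Sample a lognormal variable specified by its mean and
    coefficient of variation."""
    sigma2 = np.log(1.0 + cov ** 2)
    mu = np.log(mean) - 0.5 * sigma2
    return np.random.lognormal(mu, np.sqrt(sigma2), size)

def base_shear(T, R, a=500.0, b=-0.8, c=-1.2):
    """Placeholder V(T, R): a power law standing in for the paper's
    fitted deterministic formulation."""
    return a * T ** b * R ** c

T = lognormal_sample(mean=0.9, cov=0.10, size=100_000)   # fundamental period, s
R = lognormal_sample(mean=30.0, cov=0.25, size=100_000)  # epicentral distance, km
V = base_shear(T, R)
print("mean V:", V.mean(), "  CoV of V:", V.std() / V.mean())
```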
7833 Improved K-Modes for Categorical Clustering Using Weighted Dissimilarity Measure
Authors: S.Aranganayagi, K.Thangavel
Abstract:
K-Modes is an extension of the K-Means clustering algorithm, developed to cluster categorical data, where the mean is replaced by the mode. The similarity measure proposed by Huang is the simple matching or mismatching measure. The weights of attribute values contribute much to clustering; thus, in this paper we propose a new weighted dissimilarity measure for K-Modes, based on the ratio of the frequency of attribute values in the cluster to that in the whole data set. The new weighted measure is tested on data sets obtained from the UCI data repository. The results are compared with K-Modes and K-representatives, and show that the new measure generates clusters with high purity.
Keywords: Clustering, categorical data, K-Modes, weighted dissimilarity measure.
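One natural reading of this weighting, sketched for a single object-to-mode comparison; the exact expression in the paper may differ, so treat the matching-weight line as an assumption:

```python
import numpy as np

def weighted_dissimilarity(x, mode, cluster, data):
    """Dissimilarity between object x and a cluster mode. Mismatches
    cost 1 as in Huang's measure; a match costs less the more the mode
    value dominates the cluster relative to the whole data set."""
    d = 0.0
    for j, (xj, mj) in enumerate(zip(x, mode)):
        if xj != mj:
            d += 1.0
        else:
            f_cluster = np.mean(cluster[:, j] == mj)   # frequency in the cluster
            f_data = np.mean(data[:, j] == mj)         # frequency in the data set
            d += 1.0 - f_cluster / (f_cluster + f_data)  # assumed weight form
    return d
```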
7832 Geometric Data Structures and Their Selected Applications
Authors: Miloš Šeda
Abstract:
Finding the shortest path between two positions is a fundamental problem in transportation, routing, and communications applications. In robot motion planning, the robot should pass around the obstacles touching none of them, i.e. the goal is to find a collision-free path from a starting to a target position. This task has many specific formulations depending on the shape of obstacles, allowable directions of movement, knowledge of the scene, etc. Research on path planning has yielded many fundamentally different approaches to its solution, mainly based on various decomposition and roadmap methods. In this paper, we show a possible use of visibility graphs in point-to-point motion planning in the Euclidean plane and an alternative approach using Voronoi diagrams that decreases the probability of collisions with obstacles. The second application area, investigated here, is focused on problems of finding minimal networks connecting a set of given points in the plane using either only straight connections between pairs of points (minimum spanning tree) or allowing the addition of auxiliary points to the set to obtain shorter spanning networks (minimum Steiner tree).
Keywords: motion planning, spanning tree, Steiner tree, Delaunay triangulation, Voronoi diagram.
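For the second application area, the Euclidean minimum spanning tree can be found by restricting the search to Delaunay edges, since the EMST is always a subgraph of the Delaunay triangulation; a sketch with SciPy:

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def euclidean_mst(points):
    """Minimum spanning tree of a planar point set, built on the
    Delaunay triangulation rather than the complete graph."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    graph = lil_matrix((n, n))
    for simplex in Delaunay(pts).simplices:      # add each triangle's edges
        for i in range(3):
            a, b = simplex[i], simplex[(i + 1) % 3]
            w = np.linalg.norm(pts[a] - pts[b])
            graph[a, b] = graph[b, a] = w
    return minimum_spanning_tree(graph.tocsr())  # sparse matrix of tree edges
```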
7831 Mobile Phone as a Tool for Data Collection in Field Research
Authors: Sandro Mourão, Karla Okada
Abstract:
The necessity of accurate and timely field data is shared among organizations engaged in fundamentally different activities, public services or commercial operations. Basically, there are three major components in the process of qualitative research: data collection, interpretation and organization of data, and the analytic process. Representative technological advancements in terms of innovation have been made in mobile devices (mobile phones, PDAs, tablets, laptops, etc.), resources that can potentially be applied to data collection in field research in order to improve this process. This paper presents and discusses the main features of a mobile-phone-based solution for field data collection, composed of basically three modules: a survey editor, a server web application and a client mobile application. The data gathering process begins with the survey creation module, which enables the production of tailored questionnaires. The field workforce receives the questionnaire(s) on their mobile phones to collect the interview responses and send them back to a server for immediate analysis.
Keywords: Data Gathering, Field Research, Mobile Phone, Survey.