Search results for: test data compression (TDC)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9641

7481 An Image Monitoring System Based on Multi-Format Streaming Grid Architecture

Authors: Yi-Haur Shiau, Sun-In Lin, Shi-Wei Lo, Hsiu-Mei Chou, Yi-Hsuan Chen

Abstract:

This paper proposes a novel multi-format streaming grid architecture for a real-time image monitoring system. The system is based on a three-tier architecture comprising a stream receiving unit, a stream processing unit, and a presentation unit. It is a distributed, loosely coupled architecture, so the number of servers required can be adjusted according to the load on the image monitoring system. The stream receiving unit supports multiple capture source devices and multi-format stream compression encoders. The stream processing unit includes three modules: a stream clipping module, an image processing module, and an image management module. The presentation unit can display image data on several different platforms. We verified the proposed grid architecture with an actual image monitoring test, using a fast image matching method with adjustable parameters for different monitoring situations. A background subtraction method is also implemented in the system. Experimental results showed that the proposed architecture is robust, adaptive, and powerful for image monitoring.
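
For illustration, the kind of background subtraction and motion detection mentioned above can be sketched with OpenCV's MOG2 subtractor; the video source and area threshold below are assumptions, not the system described in the abstract.

```python
# Minimal background-subtraction sketch (OpenCV MOG2) illustrating the kind of
# motion detection described above. The video path and area threshold are assumptions.
import cv2

cap = cv2.VideoCapture("monitoring_stream.avi")   # hypothetical input stream
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                 # foreground mask (moving pixels)
    mask = cv2.medianBlur(mask, 5)                 # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    moving = [c for c in contours if cv2.contourArea(c) > 500]  # assumed area threshold
    if moving:
        print(f"motion detected: {len(moving)} region(s)")
cap.release()
```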

Keywords: Motion detection, grid architecture, image monitoring system, background subtraction.

Downloads: 1569
7480 A Neural Network Control for Voltage Balancing in Three-Phase Electric Power System

Authors: Dana M. Ragab, Jasim A. Ghaeb

Abstract:

The three-phase power system suffers from different challenging problems, e.g., voltage unbalance conditions at the load side. Voltage unbalance usually degrades the power quality of the electric power system. Several techniques can be considered for load balancing, including load reconfiguration, the static synchronous compensator, and the static reactive power compensator. In this work, an efficient neural network is designed to control the unbalanced condition in the Aqaba-Qatrana-South Amman (AQSA) electric power system and to greatly improve the response time of the reactive compensator for voltage balancing. The neural network determines the appropriate set of firing angles required for the thyristor-controlled reactor to balance the three load voltages accurately and quickly. The parameters of the AQSA power system are used in the laboratory model, and several test cases have been conducted to test and validate the capabilities of the proposed technique. The results show a high performance of the proposed Neural Network Control (NNC) technique in correcting voltage unbalance conditions at a three-phase load, in terms of both accuracy and response time.
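
As a rough illustration of mapping measured load voltages to thyristor firing angles with a neural network, a small multilayer perceptron can be trained as sketched below; the data are synthetic and the model is not the authors' AQSA controller.

```python
# Illustrative only: a small MLP that maps three-phase load voltages to three
# TCR firing angles. Synthetic data stands in for the AQSA laboratory measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
V_load = rng.uniform(0.90, 1.10, size=(2000, 3))        # per-unit phase voltages (assumed)
alpha = 90.0 + 90.0 * (V_load - V_load.mean(axis=1, keepdims=True))  # toy target angles (deg)

model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
model.fit(V_load, alpha)

unbalanced = np.array([[1.05, 0.95, 1.00]])
print("suggested firing angles (deg):", model.predict(unbalanced))
```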

Keywords: Three-phase power system, reactive power control, voltage unbalance factor, neural network, power quality.

Downloads: 969
7479 Electron-Impact Excitation of Kr 5s, 5p Levels

Authors: Alla A. Mityureva

Abstract:

The available data on the cross sections of electron-impact excitation of the krypton 5s and 5p configuration levels out of the ground state are presented in a convenient and compact form. The results are obtained by regression through all known published data related to this process.

Keywords: Cross section, electron excitation, krypton, regression

Downloads: 1071
7478 Seamless Flow of Voluminous Data in High Speed Network without Congestion Using Feedback Mechanism

Authors: T. Sheela, Dr. J. Raja

Abstract:

Continuously growing needs for Internet applications that transmit massive amounts of data have led to the emergence of high-speed networks. Data transfer must take place without congestion, so feedback parameters must be sent from the receiver to the sender to restrict the sending rate. Although TCP tries to avoid congestion by restricting the sending rate and window size, it never informs the sender of the capacity of data that can be sent, and it halves the window size at the time of congestion, resulting in decreased throughput, low bandwidth utilization, and high delay. In this paper, the XCP protocol is used, and feedback parameters are calculated based on the arrival rate, service rate, traffic rate, and queue size; the receiver then informs the sender about the throughput, the capacity of data that can be sent, and the window size adjustment. As a result, there is no drastic decrease in window size and a better increase in sending rate, giving a continuous flow of data without congestion, maximum throughput, high bandwidth utilization, and minimum delay. The results of the proposed work are presented as graphs of throughput, delay, and window size. XCP is thus illustrated in detail, and the various parameters are thoroughly analyzed and presented.
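
For orientation, an XCP-style router computes an aggregate feedback from the spare bandwidth and the persistent queue; the sketch below uses the commonly cited control law with its usual gains on assumed link figures, not the paper's simulation settings.

```python
# Sketch of an XCP-style aggregate feedback computation (alpha=0.4, beta=0.226 are
# the gains commonly quoted for XCP). All numeric values below are illustrative.
def aggregate_feedback(capacity_bps, input_rate_bps, queue_bytes, avg_rtt_s,
                       alpha=0.4, beta=0.226):
    """Total rate adjustment (bytes per control interval) requested from senders:
    positive means speed up, negative means slow down."""
    spare_bw = capacity_bps / 8.0 - input_rate_bps / 8.0   # unused capacity in bytes/s
    return alpha * avg_rtt_s * spare_bw - beta * queue_bytes

# Example: a 100 Mbit/s link carrying 80 Mbit/s with a 50 kB standing queue.
phi = aggregate_feedback(100e6, 80e6, 50_000, avg_rtt_s=0.08)
print(f"aggregate feedback: {phi:,.0f} bytes per interval")
```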

Keywords: Bandwidth-Delay Product, Congestion Control, Congestion Window, TCP/IP

Downloads: 1467
7477 Comparison of Irradiance Decomposition and Energy Production Methods in a Solar Photovoltaic System

Authors: Tisciane Perpetuo e Oliveira, Dante Inga Narvaez, Marcelo Gradella Villalva

Abstract:

Installations of solar photovoltaic systems have increased considerably in the last decade. It has therefore become clear that monitoring meteorological data (solar irradiance, air temperature, wind velocity, etc.) is important for predicting the solar energy production potential of a given geographical area. In this sense, the present work compares two computational tools that estimate the energy generation of a photovoltaic system through correlation analyses of solar radiation data: the PVsyst software and an algorithm based on the PVlib package implemented in MATLAB. To achieve this objective, it was necessary to obtain solar radiation data (measured and from a solarimetric database), analyze the decomposition of global solar irradiance into direct normal and diffuse horizontal components, and model the devices of a photovoltaic system (solar modules and inverters) for energy production calculations. Simulated results were compared with experimental data in order to evaluate the performance of the studied methods. Errors in the estimation of energy production were less than 30% for the MATLAB algorithm and less than 20% for the PVsyst software.
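
The pvlib package (shown here in its Python distribution rather than the MATLAB implementation used in the paper) exposes irradiance decomposition models directly; a minimal sketch with an assumed site and invented GHI values, using the Erbs model:

```python
# Minimal pvlib sketch: decompose global horizontal irradiance (GHI) into DNI and DHI
# with the Erbs model. Site coordinates and GHI values are illustrative assumptions.
import pandas as pd
import pvlib

times = pd.date_range("2019-01-15 08:00", "2019-01-15 16:00", freq="1h",
                      tz="America/Sao_Paulo")
ghi = pd.Series([150, 380, 600, 750, 800, 720, 550, 330, 120], index=times)  # W/m2

solpos = pvlib.solarposition.get_solarposition(times, latitude=-22.82, longitude=-47.07)
decomp = pvlib.irradiance.erbs(ghi, solpos["zenith"], times)

print(decomp[["dni", "dhi"]])   # direct normal and diffuse horizontal components
```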

Keywords: Energy production, meteorological data, irradiance decomposition, solar photovoltaic system.

Downloads: 715
7476 Study on Phytochemical Properties, Antibacterial Activity and Cytotoxicity of Aloe vera L.

Authors: K. Thu, Yin Y. Mon, Tin A. Khaing, Ohn M. Tun

Abstract:

The aim of the study was to investigate the phytochemical properties, antimicrobial activity, and cytotoxicity of Aloe vera. Phytochemical screening of extracts of A. vera leaves revealed the presence of bioactive compounds such as alkaloids, tannins, flavonoids, and phenolic compounds, with an absence of cyanogenic glycosides. Three different solvents, methanol, ethanol, and dimethyl sulfoxide, were used to screen the antimicrobial activity of A. vera leaves against four human clinical pathogens by the agar well diffusion method. The maximum antibacterial activity was observed in the methanol extract, followed by ethanol and dimethyl sulfoxide. Remarkable antibacterial activities were also found with the methanolic and ethanolic extracts of A. vera compared with the standard antibiotic tetracycline, which was not active against E. coli and S. boydii; this supports the view that A. vera is a potent antimicrobial agent compared with the conventional antibiotic. Moreover, the brine shrimp (Artemia salina) toxicity test gave an LC50 value of 569.52 ppm, indicating that the A. vera plant has little toxic effect on brine shrimp. Hence, Aloe vera plant extract appears safe to use as an antimicrobial agent.

Keywords: Aloe vera L., antimicrobial activity, brine shrimp, cytotoxicity, phytochemical properties.

Downloads: 6024
7475 Enhanced Particle Swarm Optimization Approach for Solving the Non-Convex Optimal Power Flow

Authors: M. R. AlRashidi, M. F. AlHajri, M. E. El-Hawary

Abstract:

An enhanced particle swarm optimization (PSO) algorithm is presented in this work to solve the non-convex optimal power flow (OPF) problem, which has both discrete and continuous optimization variables. The objective functions considered are the conventional quadratic function and the augmented quadratic function; the latter presents non-differentiable and non-convex regions that challenge most gradient-based optimization algorithms. The variables to be optimized are the generator real power outputs and voltage magnitudes, discrete transformer tap settings, and discrete reactive power injections due to capacitor banks. The equality constraints are the power flow equations, while the inequality constraints are the limits on the real and reactive power of the generators, the voltage magnitude at each bus, the transformer tap settings, and the capacitor bank reactive power injections. The proposed algorithm combines PSO with the Newton-Raphson algorithm to minimize the fuel cost function. The IEEE 30-bus system with six generating units is used to test the proposed algorithm. Several cases were investigated to test and validate the consistency of detecting optimal or near-optimal solutions for each objective. Results are compared with solutions obtained using sequential quadratic programming and genetic algorithms.
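
A bare-bones particle swarm loop of the kind underlying such approaches is sketched below on a toy two-generator economic dispatch with a soft power-balance penalty; the cost coefficients, limits, and PSO settings are assumptions, and the full OPF with power-flow constraints handled by Newton-Raphson is not reproduced here.

```python
# Toy PSO sketch: minimize a quadratic fuel-cost function sum(a + b*P + c*P^2)
# for two generators. Coefficients, limits and PSO settings are assumptions;
# the real OPF adds power-flow equality constraints handled via Newton-Raphson.
import numpy as np

rng = np.random.default_rng(1)
a, b, c = np.array([100., 120.]), np.array([2.0, 1.8]), np.array([0.02, 0.03])
pmin, pmax = np.array([20., 20.]), np.array([200., 200.])
demand = 250.0

def cost(P):
    penalty = 1e4 * (P.sum(axis=1) - demand) ** 2          # soft power-balance constraint
    return (a + b * P + c * P ** 2).sum(axis=1) + penalty

n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
x = rng.uniform(pmin, pmax, (n, dim)); v = np.zeros((n, dim))
pbest, pbest_val = x.copy(), cost(x)
gbest = pbest[pbest_val.argmin()]

for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, pmin, pmax)
    val = cost(x)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], val[improved]
    gbest = pbest[pbest_val.argmin()]

print("dispatch (MW):", gbest.round(2), "cost:", cost(gbest[None]).round(2))
```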

Keywords: Particle Swarm Optimization, Optimal Power Flow, Economic Dispatch.

Downloads: 2342
7474 Reproduction Performance of Etawah Crossbred Goats in Estrus Synchronization by Controlled Internal Drug Release Implant and PGF2α Continued by Artificial Insemination

Authors: Diah Tri Widayati, Aris Junaidi, Kresno Suharto, Amelia Oktaviani, Wahyuningsih

Abstract:

Female Etawah crossbred goats were estrus-synchronized with controlled internal drug release (CIDR) implants for 10 days combined with a PGF2α injection, followed by artificial insemination (AI) within a 24-hour period. Vaginal epithelium was sampled to determine the estrus cycle of goats without estrus synchronization. The estrus responses (vulva swelling and vaginal pH) and the pregnancy percentage were investigated. The data were analyzed descriptively and with an independent-samples t-test. The results showed that vulva swelling and vaginal pH differed significantly between estrus-synchronized and control goats (2.18 ± 0.33 cm vs. 1.20 ± 0.16 cm and 8.55 ± 0.63 vs. 8.22 ± 0.22). The pregnancy percentage was higher in estrus-synchronized goats (73.33%) than in controls (53.3%). Estrus synchronization using CIDR implants and PGF2α, followed by AI, was effective in improving the reproductive performance of Etawah crossbred goats.

Keywords: Artificial insemination, Estrus synchronization, Etawah cross bred goat, Reproduction performance.

Downloads: 5016
7473 Cloud Computing Cryptography "State-of-the-Art"

Authors: Omer K. Jasim, Safia Abbas, El-Sayed M. El-Horbaty, Abdel-Badeeh M. Salem

Abstract:

Cloud computing technology is very useful in present-day life; it uses the internet and central remote servers to provide and maintain data as well as applications. Such applications can be used by end users via cloud communications without any installation. Moreover, end users' data files can be accessed and manipulated from any other computer using internet services. Despite the flexibility of data and application access and usage that cloud computing environments provide, many questions remain about how to obtain a trusted environment that protects data and applications in the cloud from hackers and intruders. This paper surveys the key generation and management mechanisms and the encryption/decryption algorithms used in cloud computing environments, and proposes a new security architecture for the cloud computing environment that addresses the various security gaps as far as possible. A new cryptographic environment that applies quantum mechanics in order to obtain more trusted cloud communications with less computation is also given.

Keywords: Cloud Computing, Cloud Encryption Model, Quantum Key Distribution.

Downloads: 4072
7472 Effect of Bentonite on the Rheological Behavior of Cement Grout in Presence of Superplasticizer

Authors: K. Benyounes, A. Benmounah

Abstract:

Cement-based grouts have been used successfully to repair cracks in many concrete structures such as bridges, tunnels, and buildings, and to consolidate soils or rock foundations. The present study concerns the rheological characterization of a cement grout with the water/binder ratio (W/B) fixed at 0.5. The effect of replacing cement with bentonite (2 to 10 wt%) in the presence of a superplasticizer (0.5 wt%) was investigated. Several rheological tests were carried out using a controlled-stress rheometer equipped with a vane geometry at a temperature of 20°C. To highlight the influence of bentonite and superplasticizer on the rheological behavior of the cement grout, flow tests over a shear-rate range from 0 to 200 s⁻¹ were performed. The cement grout showed non-Newtonian viscosity behavior at all bentonite concentrations. The three-parameter Herschel-Bulkley model was chosen for fitting the experimental data, and based on the correlation coefficients of the estimated parameters, it described the rheological behavior of the grouts well. The test results showed that increasing the bentonite dosage increases the viscosity and yield stress of the system and introduces more thixotropy, while adding both bentonite and superplasticizer to the cement grout significantly improves the fluidity and reduces the yield stress, owing to the dispersing action of the SP.
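
The Herschel-Bulkley law referred to above is τ = τ₀ + K·γ̇ⁿ; fitting it to a flow curve with SciPy can be sketched as follows, with shear-stress values invented purely for illustration.

```python
# Fit the Herschel-Bulkley model tau = tau0 + K * gamma_dot**n to a flow curve.
# The (gamma_dot, tau) data below are invented for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gamma_dot, tau0, K, n):
    return tau0 + K * gamma_dot ** n

gamma_dot = np.linspace(1, 200, 40)                       # shear rate, 1/s
tau = 12 + 0.9 * gamma_dot ** 0.55 + np.random.default_rng(0).normal(0, 0.3, 40)  # Pa

params, _ = curve_fit(herschel_bulkley, gamma_dot, tau, p0=[1.0, 1.0, 1.0])
tau0, K, n = params
print(f"yield stress tau0 = {tau0:.2f} Pa, consistency K = {K:.2f} Pa.s^n, index n = {n:.2f}")
```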

Keywords: Cement grout, bentonite, superplasticizer, viscosity, yield stress.

Downloads: 3541
7471 Deep iCrawl: An Intelligent Vision-Based Deep Web Crawler

Authors: R. Anita, V. Ganga Bharani, N. Nityanandam, Pradeep Kumar Sahoo

Abstract:

The explosive growth of the World Wide Web has posed a challenging problem in extracting relevant data. Traditional web crawlers focus only on the surface web, while the deep web keeps expanding behind the scenes. Deep web pages are created dynamically as a result of queries posed to specific web databases, and their structure makes it impossible for traditional web crawlers to access deep web contents. This paper, Deep iCrawl, presents a novel vision-based approach for extracting data from the deep web. Deep iCrawl splits the process into two phases: the first covers query analysis and query translation, and the second covers vision-based extraction of data from the dynamically created deep web pages. There are several established approaches for the extraction of deep web pages, but the proposed method aims at overcoming their inherent limitations. The paper also compares the extracted data items and presents them in the required order.

Keywords: Crawler, Deep web, Web Database

Downloads: 2121
7470 Searchable Encryption in Cloud Storage

Authors: Ren-Junn Hwang, Chung-Chien Lu, Jain-Shing Wu

Abstract:

Cloud outsourced storage is one of the important services in cloud computing. Cloud users upload data to cloud servers to reduce the cost of managing data and maintaining hardware and software. To ensure data confidentiality, users can encrypt their files before uploading them to a cloud system; however, retrieving exactly the target file from the encrypted files is then difficult for the cloud server. This study proposes a protocol for performing multi-keyword searches over encrypted cloud data by applying k-nearest neighbor technology. The protocol ranks the relevance scores of encrypted files and keywords, and prevents cloud servers from learning the search keywords submitted by a cloud user. To reduce file-transfer communication costs, the cloud server returns encrypted files in order of relevance. Moreover, when a cloud user inputs a misspelled keyword and the number of wrong letters does not exceed a given threshold, the user can still retrieve the target files from the cloud server. In addition, the proposed scheme satisfies the security requirements for outsourced data storage.

Keywords: Fault-tolerance search, multi-keywords search, outsource storage, ranked search, searchable encryption.

Downloads: 3053
7469 Unstructured-Data Content Search Based on Optimized EEG Signal Processing and Multi-Objective Feature Extraction

Authors: Qais M. Yousef, Yasmeen A. Alshaer

Abstract:

Over the last few years, the amount of data available across the globe has increased rapidly. This has come with the emergence of recent concepts, such as big data and the Internet of Things, which have furnished a suitable solution for the availability of data all over the world. However, managing this massive amount of data remains a challenge due to the large variety of types and distribution, so locating the required file, particularly on the first trial, has become no easy task, given the strong similarity of names among different files distributed on the web. Consequently, the accuracy and speed of search have been negatively affected. This work presents a method that uses electroencephalography signals to locate files based on their contents. Building on the concept of natural mind-wave processing, this work analyzes the mind-wave signals of different people, extracting their most appropriate features using a multi-objective metaheuristic algorithm and then classifying them using an artificial neural network to distinguish among files with similar names. The aim of this work is to provide the ability to find files based on their contents using human thoughts only. Implementing this approach and testing it on real people proved its ability to find the desired files accurately within a noticeably shorter time and to retrieve them as the first choice for the user.

Keywords: Artificial intelligence, data contents search, human active memory, mind wave, multi-objective optimization.

Downloads: 896
7468 Recycled Aggregates from Construction and Demolition Waste in the Production of Concrete Blocks

Authors: Juan A. Ferriz-Papi, Simon Thomas

Abstract:

The construction industry generates large amounts of waste, usually mixed, which can be composed of materials of different origins, most of them catalogued as non-hazardous. The European Union targets for this waste for 2020 have already been achieved by the UK, but mainly through downcycling processes (backfilling), whereas upcycling (such as recycling into new concrete batches) remains at a low percentage. The aim of this paper is to explore further the use of recycled aggregates from construction and demolition waste (CDW) in concrete mixes so as to improve upcycling. A review of the most recent research and of the legislation applied in the UK regarding the production of concrete blocks is presented. As a case study, initial tests were carried out on a CDW recycled aggregate sample from a CDW plant in Swansea. Composition by visual inspection and sieving tests of two samples were performed and compared to original aggregates. More than 70% consisted of soil waste from excavation, and the rest was a mix of waste from mortar, concrete, and ceramics with small traces of plaster, glass, and organic matter. Two concrete mixes were made with 80% replacement by recycled aggregates and different water/cement ratios. Tests were carried out for slump, absorption, density, and compressive strength. The results were compared to a reference sample and showed a substantial reduction in quality in both mixes. Despite that, the discussion identifies different aspects to address, such as heterogeneity and composition, and analyzes them for the successful use of these recycled aggregates in the production of concrete blocks. The conclusions can help increase the ratio of upcycling processes using mixed CDW as recycled aggregate in concrete mixes.

Keywords: Recycled aggregate, concrete, concrete block, construction and demolition waste, recycling.

Downloads: 1993
7467 A Modified AES Based Algorithm for Image Encryption

Authors: M. Zeghid, M. Machhout, L. Khriji, A. Baganne, R. Tourki

Abstract:

With the fast evolution of digital data exchange, information security has become very important in data storage and transmission. Due to the increasing use of images in industrial processes, it is essential to protect confidential image data from unauthorized access. In this paper, we analyze the Advanced Encryption Standard (AES) and add a key stream generator (A5/1, W7) to AES to improve the encryption performance, mainly for images characterized by reduced entropy. Both techniques have been implemented for experimental purposes, and detailed results in terms of security analysis and implementation are given. A comparative study with traditional encryption algorithms shows the superiority of the modified algorithm.
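
A heavily simplified sketch of the general idea, whitening a low-entropy image with a key stream before AES-ECB encryption, is given below; an ordinary PRNG stands in for the A5/1 or W7 generators, so this is an illustration rather than the authors' implementation.

```python
# Sketch of the scheme's general idea: whiten a low-entropy image with a key stream,
# then encrypt with AES in ECB mode. NumPy's PRNG stands in for the A5/1 / W7
# generators; this is an illustration, not the authors' implementation.
import numpy as np
from Crypto.Cipher import AES   # pycryptodome

def encrypt_image(pixels: np.ndarray, aes_key: bytes, stream_seed: int) -> bytes:
    data = pixels.astype(np.uint8).tobytes()
    keystream = np.random.default_rng(stream_seed).integers(0, 256, len(data), dtype=np.uint8)
    whitened = bytes(np.frombuffer(data, dtype=np.uint8) ^ keystream)
    pad = (-len(whitened)) % 16                       # pad to the AES block size
    cipher = AES.new(aes_key, AES.MODE_ECB)
    return cipher.encrypt(whitened + bytes(pad))

image = np.zeros((64, 64), dtype=np.uint8)            # a worst-case low-entropy image
ciphertext = encrypt_image(image, aes_key=b"0123456789abcdef", stream_seed=42)
print(len(ciphertext), "bytes of ciphertext")
```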

Keywords: Cryptography, encryption, Advanced Encryption Standard (AES), ECB mode, statistical analysis, key stream generator.

Downloads: 5020
7466 Numerical Simulation of Inviscid Transient Flows in Shock Tube and its Validations

Authors: Al-Falahi Amir, Yusoff M. Z, Yusaf T

Abstract:

The aim of this paper is to develop a new two-dimensional, time-accurate Euler solver for shock tube applications. The solver was developed to study the performance of a newly built short-duration hypersonic test facility at Universiti Tenaga Nasional (UNITEN) in Malaysia. The facility has been designed, built, and commissioned for different diaphragm pressure ratios in order to obtain a wide range of Mach numbers. The developed solver uses second-order-accurate cell-vertex finite volume spatial discretization and fourth-order-accurate Runge-Kutta temporal integration, and it is designed to simulate the flow process for similar driver/driven gases (e.g., air-air as working fluids). The solver is validated against the analytical solution and experimental measurements in the high-speed flow test facility. Further investigations of the flow process inside the shock tube were made using the solver. The shock wave motion, reflection, and interaction were investigated, and their influence on the performance of the shock tube was determined. The results provide very good estimates for both the shock speed and the shock pressure obtained after diaphragm rupture, and detailed information on the gasdynamic processes over the full length of the facility is available. The agreement obtained has been reasonable.

Keywords: shock tunnel, shock tube, shock wave, CFD.

Downloads: 2729
7465 Incremental Mining of Shocking Association Patterns

Authors: Eiad Yafi, Ahmed Sultan Al-Hegami, M. A. Alam, Ranjit Biswas

Abstract:

Association rule mining is an important problem in data mining. The massively increasing volume of data in real-life databases has motivated researchers to design novel and incremental algorithms for association rule mining. In this paper, we propose an incremental association rule mining algorithm that integrates a shocking interestingness criterion during the process of building the model. A new interestingness measure, called the shocking measure, is introduced. One of the main features of the proposed approach is that it captures the user's background knowledge, which is monotonically augmented. The incremental model, which reflects the changing data and the user's beliefs, is attractive because it makes the overall KDD process more effective and efficient. We implemented the proposed approach, experimented with some public datasets, and found the results quite promising.

Keywords: Knowledge discovery in databases (KDD), Data mining, Incremental Association rules, Domain knowledge, Interestingness, Shocking rules (SHR).

Downloads: 1847
7464 Zero Inflated Strict Arcsine Regression Model

Authors: Y. N. Phang, E. F. Loh

Abstract:

Zero inflated strict arcsine model is a newly developed model which is found to be appropriate in modeling overdispersed count data. In this study, we extend zero inflated strict arcsine model to zero inflated strict arcsine regression model by taking into consideration the extra variability caused by extra zeros and covariates in count data. Maximum likelihood estimation method is used in estimating the parameters for this zero inflated strict arcsine regression model.

Keywords: Overdispersed count data, maximum likelihood estimation, simulated annealing.

Downloads: 1734
7463 Comparison of Different k-NN Models for Speed Prediction in an Urban Traffic Network

Authors: Seyoung Kim, Jeongmin Kim, Kwang Ryel Ryu

Abstract:

We consider a database that records average traffic speeds measured at five-minute intervals for all the links in the traffic network of a metropolitan city. Models learned from these data that can predict future traffic speeds would be beneficial for applications such as car navigation systems, but building predictive models for every link becomes a nontrivial job if the number of links in a given network is huge. An advantage of adopting k-nearest neighbor (k-NN) as the predictive model is that it does not require any explicit model building; instead, k-NN takes a long time to make a prediction because it needs to search for the k nearest neighbors in the database at prediction time. In this paper, we investigate how much we can speed up k-NN in making traffic speed predictions by reducing the amount of data to be searched, without a significant sacrifice of prediction accuracy. The rationale is to look only at recent data, because traffic patterns not only repeat daily or weekly but also change over time. In our experiments, we build several different k-NN models employing different sets of features, namely the current and past traffic speeds of the target link and of the neighboring links up- and downstream. The performances of these models are compared by measuring the average prediction accuracy and the average time taken to make a prediction using various amounts of data.
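
A minimal k-NN setup of this kind, using lagged speeds of the target link as features and searching only a recent window of records, might look like the sketch below; the data are synthetic and the window size is an assumption.

```python
# k-NN traffic-speed prediction sketch on synthetic data: predict the next 5-minute
# speed of a link from its last four readings, searching only a recent data window.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
t = np.arange(4000)
speed = 50 + 15 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 3, t.size)  # daily pattern

lags = 4
X = np.column_stack([speed[i:len(speed) - lags + i] for i in range(lags)])
y = speed[lags:]

recent = 2016                 # assumed window: last 7 days of 5-minute records
knn = KNeighborsRegressor(n_neighbors=10)
knn.fit(X[-recent:], y[-recent:])            # search only recent data to cut query time

print("predicted next speed:", knn.predict(speed[-lags:].reshape(1, -1)))
```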

Keywords: Big data, k-NN, machine learning, traffic speed prediction.

Downloads: 1355
7462 An Experimental Study of a Self-Supervised Classifier Ensemble

Authors: Neamat El Gayar

Abstract:

Learning using labeled and unlabelled data has received a considerable amount of attention in the machine learning community due to its potential for reducing the need for expensive labeled data. In this work we present a new method for combining labeled and unlabeled data based on classifier ensembles. The proposed model assumes that each classifier in the ensemble observes the input using a different set of features. Classifiers are initially trained using some labeled samples and then learn further by labeling the unknown patterns using a teaching signal generated from the decision of the classifier ensemble, i.e., the classifiers self-supervise each other. Experiments on a set of object images are presented, investigating different classifier models, fusion techniques, training-set sizes, and input features. Experimental results reveal that the proposed self-supervised ensemble learning approach reduces the classification error compared with a single classifier and with the traditional ensemble classifier approach.
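
A toy version of the idea, two classifiers on different feature views pseudo-labeling unlabeled samples for each other, can be sketched as follows; the data are synthetic and the confidence threshold and split sizes are assumptions, not the paper's object-image experiments.

```python
# Toy self-supervised ensemble: two classifiers, each seeing a different feature view,
# label unlabeled samples for one another when the ensemble is confident.
# Synthetic data; thresholds and split sizes are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
views = [X[:, :10], X[:, 10:]]                      # each classifier observes one view
labeled, unlabeled = np.arange(50), np.arange(50, 600)

clfs = [KNeighborsClassifier(5), GaussianNB()]
for _ in range(5):                                  # a few rounds of mutual teaching
    for clf, view in zip(clfs, views):
        clf.fit(view[labeled], y[labeled])
    proba = np.mean([clf.predict_proba(view[unlabeled])
                     for clf, view in zip(clfs, views)], axis=0)
    confident = proba.max(axis=1) > 0.9             # ensemble decision is the teaching signal
    y[unlabeled[confident]] = proba[confident].argmax(axis=1)   # pseudo-labels
    labeled = np.concatenate([labeled, unlabeled[confident]])
    unlabeled = unlabeled[~confident]

print("final labeled-pool size:", labeled.size)
```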

Keywords: Multiple Classifier Systems, classifier ensembles, learning using labeled and unlabelled data, K-nearest neighbor classifier, Bayes classifier.

Downloads: 1617
7461 A Study of the Adaptive Reuse for School Land Use Strategy: An Application of the Analytic Network Process and Big Data

Authors: Wann-Ming Wey

Abstract:

With today's popularity and progress of information technology, big data sets and their analysis are no longer a major conundrum. We can now use the relevant big data not only to analyze and emulate the possible status of urban development in the near future, but also to provide a more comprehensive and reasonable basis for policy implementation by government units or decision-makers via such analysis and emulation results. In this research, we take Taipei City as the research scope and use relevant big data variables (e.g., population, facility utilization, and related social policy ratings) together with the Analytic Network Process (ANP) approach to carry out in-depth research and discussion on the possible reduction of land use in primary and secondary schools of Taipei City. In addition to enhancing urban activities through better utilization of public facilities, the final results of this research could help improve the efficiency of urban land use in the future. Furthermore, the assessment model and research framework established in this research also provide a good reference for land use and adaptive reuse strategies for schools or other public facilities in the future.

Keywords: Adaptive reuse, analytic network process, big data, land use strategy.

Downloads: 902
7460 A Review and Comparative Analysis on Cluster Ensemble Methods

Authors: S. Sarumathi, P. Ranjetha, C. Saraswathy, M. Vaishnavi, S. Geetha

Abstract:

Clustering is an unsupervised learning technique for aggregating data objects into meaningful classes so that intra-cluster similarity is maximized and inter-cluster similarity is minimized in data mining. However, no single clustering algorithm proves to be the most effective at producing the best result. As a consequence, a new and challenging technique known as the cluster ensemble approach has blossomed in order to find a solution to this problem, and it is a successful approach to the cluster analysis issue. The cluster ensemble's main goal is to combine similar clustering solutions in a way that preserves precision while also improving the quality of individual data clusterings. Because of the massive and rapid creation of new approaches in the field of data mining, the ongoing interest in inventing novel algorithms necessitates a thorough examination of current techniques and future innovation. This paper presents a comparative analysis of various cluster ensemble approaches, including their methodologies, formal working processes, and standard accuracy and error rates. The community of clustering practitioners will benefit from this exploratory and clear survey, which will aid in determining the most appropriate solution to the problem at hand.
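
One widely used consensus function covered in such surveys is evidence accumulation via a co-association matrix; the sketch below illustrates it on toy data, with the number of base clusterings and the final k chosen arbitrarily.

```python
# Evidence-accumulation consensus sketch: run K-Means several times, build a
# co-association matrix, and cut the resulting dendrogram to get the consensus
# partition. Toy data; the number of base clusterings and k are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

X, _ = make_blobs(n_samples=200, centers=3, random_state=0)
n = X.shape[0]
coassoc = np.zeros((n, n))

for seed in range(20):                                   # ensemble of base clusterings
    labels = KMeans(n_clusters=3, n_init=1, random_state=seed).fit_predict(X)
    coassoc += (labels[:, None] == labels[None, :])

coassoc /= 20.0
distance = 1.0 - coassoc                                 # co-association -> dissimilarity
np.fill_diagonal(distance, 0.0)
consensus = fcluster(linkage(squareform(distance), method="average"),
                     t=3, criterion="maxclust")
print("consensus cluster sizes:", np.bincount(consensus)[1:])
```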

Keywords: Clustering, cluster ensemble methods, consensus function, data mining, unsupervised learning.

Downloads: 773
7459 Simultaneous Clustering and Feature Selection Method for Gene Expression Data

Authors: T. Chandrasekhar, K. Thangavel, E. N. Sathishkumar

Abstract:

Microarrays have made it possible to simultaneously monitor the expression profiles of thousands of genes under various experimental conditions. They are used to identify the co-expressed genes in specific cells or tissues that are actively used to make proteins, and to analyze gene expression, an important task in bioinformatics research. Cluster analysis of gene expression data has proved to be a useful tool for identifying co-expressed genes and biologically relevant groupings of genes and samples. In this work, the K-Means algorithm has been applied for clustering gene expression data. Further, the rough-set-based Quick Reduct algorithm has been applied to each cluster in order to select the most similar genes having high correlation. The ACV measure is then used to evaluate the refined clusters, and classification is used to evaluate the proposed method. The method identifies compact clusters, with the feature selection step used to select the genes.
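
The two-stage idea, grouping genes first and then keeping only the most mutually consistent genes within each cluster, is sketched below on a synthetic expression matrix; a plain correlation filter stands in for the rough-set Quick Reduct step.

```python
# Sketch of the two-stage pipeline on a synthetic expression matrix: K-Means groups
# the genes, then within each cluster the genes most correlated with the cluster's
# mean profile are kept. A simple correlation filter stands in for Quick Reduct.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
expr = rng.normal(size=(500, 30))           # 500 genes x 30 experimental conditions

k = 5
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(expr)

selected = []
for c in range(k):
    idx = np.where(labels == c)[0]
    centroid = expr[idx].mean(axis=0)
    corr = np.array([np.corrcoef(expr[g], centroid)[0, 1] for g in idx])
    selected.extend(idx[np.argsort(corr)[-10:]])         # keep the 10 most correlated genes

print("genes kept per cluster: 10, total selected:", len(selected))
```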

Keywords: Clustering, Feature selection, Gene expression data, Quick reduct.

Downloads: 1950
7458 Segmentation Free Nastalique Urdu OCR

Authors: Sobia T. Javed, Sarmad Hussain, Ameera Maqbool, Samia Asloob, Sehrish Jamil, Huma Moin

Abstract:

The electronically available Urdu data is in image form, which is very difficult to process; printed Urdu data is the root cause of the problem. For the rapid progress of the Urdu language we therefore need OCR systems that can make Urdu data available to the common person. Research has been carried out for years to automate the recognition of Arabic and Urdu script, but the biggest hurdle in the development of Urdu OCR is the challenge of recognizing the Nastalique script, which is taken as the standard for writing the Urdu language. Nastalique script is written diagonally with no fixed baseline, which makes the script somewhat complex, and overlap is present not only in characters but in ligatures as well. This paper proposes a method that allows successful recognition of Nastalique script.

Keywords: HMM, image processing, Optical Character Recognition, Urdu OCR.

Downloads: 2138
7457 The Advent of Electronic Logbook Technology - Reducing Cost and Risk to Both Marine Resources and the Fishing Industry

Authors: Amos Barkai, Guy Meredith, Fatima Felaar, Zahrah Dantie, Dave de Buys

Abstract:

Fisheries management all around the world is hampered by the lack, or poor quality, of critical data on fish resources and fishing operations. The main reasons for the chronic inability to collect good-quality data during fishing operations are the culture of secrecy common among fishers and the lack of modern data-gathering technology onboard most fishing vessels. In response, OLRAC-SPS, a South African company, developed fisheries data-logging software (eLog for short) named Olrac. The Olrac eLog solution is capable of collecting, analysing, plotting, mapping, reporting, tracing and transmitting all data related to fishing operations. Olrac can be used by skippers, fleet/company managers, offshore mariculture farmers, scientists, observers, compliance inspectors and fisheries management authorities. The authors believe that using eLog onboard fishing vessels has the potential to revolutionise the entire process of data collection and reporting during fishing operations and, if properly deployed and utilised, could transform the entire commercial fleet into a provider of good-quality data and forever change the way fish resources are managed. In addition, it would make it possible to trace catches back to the individual fishing operation, to improve fishing efficiency, and to dramatically improve the control of fishing operations and the enforcement of fishing regulations.

Keywords: data management, electronic logbook (eLog), electronic reporting system (ERS), fisheries management

Downloads: 1957
7456 Integrated Method for Detection of Unknown Steganographic Content

Authors: Magdalena Pejas

Abstract:

This article presents an integrated method for the detection of steganographic content embedded by new, unknown programs. The method is based on data mining and aggregated hypothesis testing. The article contains the theoretical basics used to deploy the proposed detection system and describes the improvements proposed for the basic system idea. The main results of the experiments and the implementation details are then collected and described. Finally, example results of the tests are presented.

Keywords: Steganography, steganalysis, data embedding, data mining, feature extraction, knowledge base, system learning, hypothesis testing, error estimation, black box program, file structure.

Downloads: 1544
7455 Evaluation of Deformable Boundary Condition Using Finite Element Method and Impact Test for Steel Tubes

Authors: Abed Ahmed, Mehrdad Asadi, Jennifer Martay

Abstract:

Stainless steel pipelines are crucial components for transportation and storage in the oil and gas industry. However, the rise of random attacks and vandalism against these pipes and their valuable contents has led to a need for greater security and protection against incoming surface impacts. Such surface impacts can cause large global deformations of the pipe and place it under strain, leading to the eventual failure of the pipeline. Understanding how these surface impact loads affect the pipes is therefore vital to improving their security and protection. In this study, experimental tests and finite element analysis (FEA) have been carried out on EN3B stainless steel specimens to study the impact behaviour. Low-velocity impact tests at 9 m/s with a 16 kg dome impactor were used to simulate high-momentum impact causing localised failure. FEA models with clamped and deformable boundaries were built to study the effect of the boundary conditions on the pipe's impact behaviour and impact resistance, using both experimental and FEA approaches. Comparison of the experiments and FE simulations shows good correlation for the deformable boundaries, validating the robustness of the FE model so that it can be applied to pipe models with complex anisotropic structure.

Keywords: Dynamic impact, deformable boundary conditions, finite element modeling, FEM, finite element, FE, LS-DYNA, Stainless steel pipe.

Downloads: 663
7454 Can Exams Be Shortened? Using a New Empirical Approach to Test in Finance Courses

Authors: Eric S. Lee, Connie Bygrave, Jordan Mahar, Naina Garg, Suzanne Cottreau

Abstract:

Marking exams is universally detested by lecturers. Final exams in many higher education courses often last 3.0 hrs. Do exams really need to be so long? Can we justifiably reduce the number of questions on them? Surprisingly few have researched these questions, arguably because of the complexity and difficulty of using traditional methods. To answer these questions empirically, we used a new approach based on three key elements: Use of an unusual variation of a true experimental design, equivalence hypothesis testing, and an expanded set of six psychometric criteria to be met by any shortened exam if it is to replace a current 3.0-hr exam (reliability, validity, justifiability, number of exam questions, correspondence, and equivalence). We compared student performance on each official 3.0-hr exam with that on five shortened exams having proportionately fewer questions (2.5, 2.0, 1.5, 1.0, and 0.5 hours) in a series of four experiments conducted in two classes in each of two finance courses (224 students in total). We found strong evidence that, in these courses, shortening of final exams to 2.0 hrs was warranted on all six psychometric criteria. Shortening these exams by one hour should result in a substantial one-third reduction in lecturer time and effort spent marking, lower student stress, and more time for students to prepare for other exams. Our approach provides a relatively simple, easy-to-use methodology that lecturers can use to examine the effect of shortening their own exams.
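
Equivalence hypothesis testing of the kind described above is commonly implemented as two one-sided tests (TOST); whether the authors used exactly this procedure is not stated, so the sketch below is only illustrative, with made-up paired scores and an assumed ±5-mark equivalence margin.

```python
# Two one-sided tests (TOST) sketch: are per-student scores on a shortened exam
# equivalent to the full 3.0-hr exam within +/- 5 marks? The scores and the
# equivalence margin below are illustrative assumptions, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
full_exam = rng.normal(68, 12, 56)                   # synthetic marks, 3.0-hr exam
short_exam = full_exam + rng.normal(-1, 3, 56)       # synthetic marks, shortened exam
margin = 5.0

# H0a: mean difference <= -margin   vs   H1a: difference > -margin
p_lower = stats.ttest_rel(short_exam + margin, full_exam, alternative="greater").pvalue
# H0b: mean difference >= +margin   vs   H1b: difference < +margin
p_upper = stats.ttest_rel(short_exam - margin, full_exam, alternative="less").pvalue
p_tost = max(p_lower, p_upper)                       # both one-sided tests must reject

verdict = "equivalent" if p_tost < 0.05 else "not shown equivalent"
print(f"TOST p-value {p_tost:.4f}: scores {verdict} within ±{margin} marks")
```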

Keywords: Exam length, psychometric criteria, synthetic experimental designs, test length.

Downloads: 1480
7453 Spatial Variability of Brahmaputra River Flow Characteristics

Authors: Hemant Kumar

Abstract:

According to Hindu mythology, the Brahmaputra River is known as the son of Lord Brahma. Living up to this name, the Brahmaputra causes mass destruction during the monsoon season in Assam, India, a state in the north-eastern part of the country. It is one of the essential states among the seven states of eastern India, through which almost the entire Brahmaputra flow passes, while the other states carry its tributaries. In the present case study, spatial analysis was performed using a number of acquired MODIS data sets. Using a change detection method, the spray (aerosol) content was determined during heavy rainfall and during the flooded monsoon season; in particular, analysis of the Brahmaputra outflow identifies the flooded season. The charged-particle-associated aerosol content indicates the heavy water content below the ground surface, which is validated by trend analysis of rainfall spectrum data and confirmed by in-situ sampled data from different positions along the Brahmaputra River. Further, Hyperion hyperspectral 30 m resolution data were used to map the sediment deposits, which is also confirmed by in-situ sampled data from different positions.

Keywords: Spatial analysis, change detection, aerosol, trend analysis.

Downloads: 508
7452 Discovering Complex Regularities by Adaptive Self Organizing Classification

Authors: A. Faro, D. Giordano, F. Maiorana

Abstract:

Data mining uses a variety of techniques, each of which is useful for some particular task. It is important to have a deep understanding of each technique and be able to perform sophisticated analysis. In this article we describe a tool built to simulate a variation of the Kohonen network to perform unsupervised clustering and to support the entire data mining process up to results visualization. A graphical representation helps the user find a strategy to optimize classification by adding, moving, or deleting a neuron in order to change the number of classes; the tool is also able to automatically suggest a strategy for optimizing the number of classes. The tool is used to classify macroeconomic data that report the imports and exports of the most developed countries. It is possible to classify the countries based on their economic behaviour and to use an ad hoc tool to characterize the commercial behaviour of a country in a selected class from the analysis of the positive and negative features that contribute to class formation.
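
A compact sketch of the underlying Kohonen self-organizing map on toy trade data, using the MiniSom package; the feature values, map size, and training length below are assumptions, not the article's tool.

```python
# Kohonen self-organizing map sketch with MiniSom on toy import/export data.
# Feature values, map size and training length are illustrative assumptions.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
# rows = countries, columns = normalized trade features (e.g. import/export by sector)
trade = rng.random((40, 6))

som = MiniSom(x=4, y=4, input_len=6, sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(trade)
som.train_random(trade, num_iteration=2000)

# Each country is assigned to the map cell of its best-matching neuron; cells act
# as the classes that an analyst could then merge, move or split.
cells = [som.winner(row) for row in trade]
print("countries per cell:", {cell: cells.count(cell) for cell in set(cells)})
```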

Keywords: Unsupervised classification, Kohonen networks, macroeconomics, Visual data mining, cluster interpretation.

Downloads: 1539