Search results for: Data mining techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9245

7535 Encoding and Compressing Data for Decreasing Number of Switches in Baseline Networks

Authors: Mohammad Ali Jabraeil Jamali, Ahmad Khademzadeh, Hasan Asil, Amir Asil

Abstract:

This method decreases power usage (expenditure) in networks on chip (NoC). It applies data coding to data transfers in order to reduce expenditure, and it uses data compression to reduce data size. Expenditure is calculated inside the NoC based on grown models and transitive activities at the entry ports. The goal of the simulation is to weigh the expenditure of encoding, decoding and compressing in Baseline networks and the resulting reduction of switches in this type of network.
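
As a hedged illustration of the general idea of coding data to reduce switching activity on a link (this is classic bus-invert encoding, not the authors' specific coding or compression scheme, which the abstract does not detail), a minimal Python sketch:

```python
# Illustrative sketch only: classic bus-invert encoding, a generic way to cut
# bit transitions (and hence switching power) on a link. It is NOT the paper's
# specific coding/compression scheme, which is not detailed in the abstract.

def bus_invert(words, width=32):
    """Encode a stream of integer words; flip a word when more than half of
    its bits would toggle relative to the previously transmitted word.
    Returns (encoded_words, invert_flags)."""
    mask = (1 << width) - 1
    encoded, flags = [], []
    prev = 0
    for w in words:
        toggles = bin((w ^ prev) & mask).count("1")
        if toggles > width // 2:
            w_tx = (~w) & mask          # send the inverted word
            flags.append(1)
        else:
            w_tx = w
            flags.append(0)
        encoded.append(w_tx)
        prev = w_tx
    return encoded, flags

def count_transitions(words, width=32):
    """Total number of bit toggles seen on the link for a word stream."""
    mask = (1 << width) - 1
    prev, total = 0, 0
    for w in words:
        total += bin((w ^ prev) & mask).count("1")
        prev = w
    return total
```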

Keywords: Networks on chip, Compression, Encoding, Baseline networks, Banyan networks

PDF Downloads: 1974
7534 Sampled-Data Control for Fuel Cell Systems

Authors: H. Y. Jung, Ju H. Park, S. M. Lee

Abstract:

A sampled-data controller is presented for solid oxide fuel cell systems, which are expressed by a sector-bounded nonlinear model. The proposed control law is obtained by solving a convex problem that satisfies several linear matrix inequalities. Simulation results are given to show the effectiveness of the proposed design method.

Keywords: Sampled-data control, Sector bound, Solid oxide fuel cell, Time-delay.

PDF Downloads: 1719
7533 Automatic Detection and Spatio-temporal Analysis of Commercial Accumulations Using Digital Yellow Page Data

Authors: Yuki Akiyama, Hiroaki Sengoku, Ryosuke Shibasaki

Abstract:

In this study, the locations and areas of commercial accumulations were detected using digital yellow page data. An original buffering method that can accurately create polygons of commercial accumulations is proposed in this paper; by using this method, the distribution of commercial accumulations can be easily created and monitored over a wide area. The locations, areas, and time-series changes of commercial accumulations in the South Kanto region can be monitored by integrating the polygons of commercial accumulations with the time series of digital yellow page data. The circumstances of commercial accumulations were shown to vary among areas, that is, highly urbanized regions such as the city center of Tokyo and prefectural capitals, suburban areas near large cities, and suburban and rural areas.
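
A minimal sketch of one way such a buffering step could be implemented with shapely, assuming shop locations are given as (x, y) coordinates; the buffer radius and minimum shop count are illustrative assumptions, not the paper's method or parameters:

```python
# Hedged sketch: buffer shop points and merge overlapping buffers into
# candidate "commercial accumulation" polygons. The buffer radius and the
# minimum shop count are illustrative assumptions, not the paper's values.
from shapely.geometry import Point
from shapely.ops import unary_union

def accumulation_polygons(shop_xy, radius=50.0, min_shops=10):
    buffers = [Point(x, y).buffer(radius) for x, y in shop_xy]
    merged = unary_union(buffers)                   # dissolve overlapping buffers
    polygons = getattr(merged, "geoms", [merged])   # MultiPolygon or single Polygon
    result = []
    for poly in polygons:
        n = sum(poly.contains(Point(x, y)) for x, y in shop_xy)
        if n >= min_shops:                          # keep only dense clusters
            result.append((poly, n))
    return result
```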

Keywords: Commercial accumulations, Spatio-temporal analysis, Urban monitoring, Yellow page data

PDF Downloads: 1254
7532 A Social Decision Support Mechanism for Group Purchasing

Authors: Lien-Fa Lin, Yung-Ming Li, Fu-Shun Hsieh

Abstract:

With the advancement of information technology and the development of group commerce, people's lifestyles have clearly changed. However, group commerce faces some challenging problems. The products or services provided by vendors do not satisfactorily reflect customers’ opinions, so the sales and revenue of group commerce gradually decline. On the other hand, the process for a formed customer group to reach a group-purchasing consensus is time-consuming, and the final decision is not the best choice for each group member. In this paper, we design a social decision support mechanism that uses group discussion messages to recommend suitable options for group members, and we consider social influence and personal preference to generate an option ranking list. The proposed mechanism makes group-purchasing decision making more efficient and effective, and vendors can provide group products or services according to the group option ranking list.
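
A minimal sketch of the kind of ranking the abstract describes, assuming per-member preference scores and a social-influence matrix are already available; the blending weight and aggregation are illustrative assumptions, not the paper's model:

```python
# Hedged sketch: rank group-purchase options by blending each member's own
# preference with the preferences of members who influence them. The value of
# alpha and the influence matrix are illustrative assumptions.
import numpy as np

def rank_options(preference, influence, alpha=0.6):
    """preference: (members x options) scores in [0, 1].
    influence: (members x members) row-normalized social-influence weights.
    Returns (option indices best-to-worst, group score per option)."""
    social = influence @ preference            # preference "seen through" peers
    blended = alpha * preference + (1 - alpha) * social
    group_score = blended.mean(axis=0)         # aggregate over members
    return np.argsort(group_score)[::-1], group_score

# Example with 3 members and 4 candidate options
pref = np.array([[0.9, 0.2, 0.5, 0.1],
                 [0.3, 0.8, 0.6, 0.2],
                 [0.4, 0.3, 0.9, 0.5]])
infl = np.array([[0.0, 0.5, 0.5],
                 [0.6, 0.0, 0.4],
                 [0.5, 0.5, 0.0]])
order, scores = rank_options(pref, infl)
```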

Keywords: Social network, group decision, text mining, group commerce.

PDF Downloads: 1386
7531 EEG Waves Classifier using Wavelet Transform and Fourier Transform

Authors: Maan M. Shaker

Abstract:

The electroencephalograph (EEG) signal is one of the most widely used signals in the bioinformatics field due to its rich information about human tasks. In this work, EEG wave classification is achieved using the Discrete Wavelet Transform (DWT) together with the Fast Fourier Transform (FFT) on normalized EEG data. The DWT is used as a classifier of the EEG wave frequencies, while the FFT is applied to visualize the EEG waves at the multiple resolutions of the DWT. Several real EEG data sets (for both normal and abnormal persons) have been tested, and the results support the validity of the proposed technique.
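
A hedged sketch of this kind of DWT-plus-FFT pipeline, assuming a single-channel normalized EEG trace and using PyWavelets; the wavelet, decomposition level, and sampling rate are illustrative assumptions:

```python
# Hedged sketch: decompose an EEG trace with a 5-level DWT, then inspect the
# spectrum of each sub-band with the FFT. Wavelet, level and fs are assumptions.
import numpy as np
import pywt

def decompose_and_spectra(eeg, fs=256.0, wavelet="db4", level=5):
    # coeffs = [cA_level, cD_level, ..., cD_1]; with a suitable fs and level the
    # detail levels roughly line up with the classical EEG bands.
    coeffs = pywt.wavedec(eeg, wavelet, level=level)
    spectra = []
    for i, c in enumerate(coeffs):
        # each level is downsampled, so its effective sampling rate differs
        down = 2 ** level if i == 0 else 2 ** (level - i + 1)
        eff_fs = fs / down
        mag = np.abs(np.fft.rfft(c))
        freqs = np.fft.rfftfreq(len(c), d=1.0 / eff_fs)
        spectra.append((freqs, mag))
    return coeffs, spectra

# Example on a synthetic 10 Hz (alpha-like) test signal
t = np.arange(0, 4, 1 / 256.0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)
coeffs, spectra = decompose_and_spectra(eeg)
```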

Keywords: Bioinformatics, DWT, EEG waves, FFT.

PDF Downloads: 5545
7530 Obstacle Classification Method Based On 2D LIDAR Database

Authors: Moohyun Lee, Soojung Hur, Yongwan Park

Abstract:

We propose an obstacle classification method based on a 2D LIDAR database. Existing obstacle classification based on 2D LIDAR has advantages in terms of accuracy and shorter calculation time; however, it is difficult to classify the type of obstacle, so accurate path planning is not possible. To overcome this problem, a method of classifying obstacle type from the width of the obstacle was proposed, but width data alone are not sufficient to improve accuracy. In this paper, a database is established from width and intensity data: the first classification stage uses the width data, the second stage uses the intensity data, classification is performed by comparison against the database, and the final result is determined by finding the entry with the highest similarity value. An experiment with an actual autonomous vehicle in a real environment shows that the calculation time declines in comparison to 3D LIDAR and that it is possible to classify obstacles using a single 2D LIDAR.
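
A hedged sketch of the two-stage lookup outlined here, assuming a small database of labelled (width, intensity) prototypes; the width tolerance and similarity measure are illustrative assumptions rather than the paper's exact choices:

```python
# Hedged sketch: first shortlist database entries by width, then pick the label
# whose intensity is most similar. Thresholds and metrics are assumptions.
import numpy as np

# Illustrative database: (width [m], mean intensity, label)
DB_WIDTH = np.array([0.6, 0.7, 1.8, 1.9, 4.5])
DB_INTENSITY = np.array([120.0, 135.0, 80.0, 90.0, 60.0])
DB_LABEL = np.array(["pedestrian", "pedestrian", "car", "car", "truck"])

def classify(width, intensity, width_tol=0.4):
    # Stage 1: keep entries whose width is close to the measured width
    candidates = np.where(np.abs(DB_WIDTH - width) <= width_tol)[0]
    if candidates.size == 0:
        return "unknown"
    # Stage 2: among candidates, highest similarity = smallest intensity gap
    best = candidates[np.argmin(np.abs(DB_INTENSITY[candidates] - intensity))]
    return DB_LABEL[best]

print(classify(0.65, 128.0))   # -> "pedestrian"
```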

Keywords: Obstacle, Classification, LIDAR, Segmentation, Width, Intensity, Database.

PDF Downloads: 3439
7529 An Empirical Mode Decomposition Based Method for Action Potential Detection in Neural Raw Data

Authors: Sajjad Farashi, Mohammadjavad Abolhassani, Mostafa Taghavi Kani

Abstract:

Information in the nervous system is coded as firing patterns of electrical signals called action potentials, or spikes, so an essential step in the analysis of neural mechanisms is the detection of action potentials embedded in the neural data. Several methods have been proposed in the literature for this purpose. In this paper, a novel method based on empirical mode decomposition (EMD) is developed. EMD is a decomposition method that extracts oscillations of different frequency ranges from a waveform; it is adaptive, and no a priori knowledge about the data or parameter tuning is needed. Results on simulated data indicate that the proposed method is comparable with wavelet-based methods for spike detection. For neural signals with a signal-to-noise ratio near 3, the proposed method is able to detect more than 95% of action potentials accurately.
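
A hedged sketch of one plausible detection step, taking IMFs already produced by any EMD implementation (for example the PyEMD package) and thresholding the sum of the fastest IMFs; the IMF selection and threshold rule are illustrative assumptions, not the paper's exact procedure:

```python
# Hedged sketch: reconstruct a spike-band signal from the first few IMFs and
# detect threshold crossings. IMF count and threshold rule are assumptions.
import numpy as np

def detect_spikes(imfs, n_fast_imfs=3, k=4.0, refractory=30):
    """imfs: array of shape (n_imfs, n_samples) from any EMD implementation.
    Returns sample indices of detected action potentials."""
    band = imfs[:n_fast_imfs].sum(axis=0)          # high-frequency content
    sigma = np.median(np.abs(band)) / 0.6745       # robust noise estimate
    threshold = k * sigma
    above = np.where(np.abs(band) > threshold)[0]
    spikes = []
    for idx in above:                              # enforce a refractory gap
        if not spikes or idx - spikes[-1] > refractory:
            spikes.append(idx)
    return np.array(spikes)
```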

Keywords: EMD, neural data processing, spike detection, wavelet decomposition.

PDF Downloads: 2367
7528 Platform-as-a-Service Sticky Policies for Privacy Classification in the Cloud

Authors: Maha Shamseddine, Amjad Nusayr, Wassim Itani

Abstract:

In this paper, we present a Platform-as-a-Service (PaaS) model for controlling the privacy enforcement mechanisms applied to user data when it is stored and processed in Cloud data centers. The proposed architecture consists of establishing user-configurable ‘sticky’ policies on the Graphical User Interface (GUI) data-bound components during the application development phase to specify the details of privacy enforcement on the contents of these components. Various privacy classification classes on the data components are formally defined to give the user full control over the degree and scope of privacy enforcement, including the type of execution containers used to process the data in the Cloud. This not only enhances the privacy awareness of the developed Cloud services, but also results in major savings in performance and energy efficiency, because the privacy mechanisms are applied only to sensitive data units and not to all user content. The proposed design is implemented in a real PaaS cloud computing environment on the Microsoft Azure platform.
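
A purely illustrative sketch of what a 'sticky' policy attached to a GUI data-bound component might look like as a data structure; the class names, fields, and privacy classes below are assumptions, not the paper's API:

```python
# Purely illustrative sketch (not the paper's API): a 'sticky' privacy policy
# attached to a GUI data-bound component at development time.
from dataclasses import dataclass
from enum import Enum

class PrivacyClass(Enum):
    PUBLIC = 0          # no privacy enforcement
    CONFIDENTIAL = 1    # encrypt at rest
    SENSITIVE = 2       # encrypt at rest and process in an isolated container

@dataclass
class StickyPolicy:
    component_id: str            # GUI data-bound component the policy sticks to
    privacy_class: PrivacyClass
    execution_container: str     # e.g. "shared" or "isolated" (assumed labels)

policies = [
    StickyPolicy("txtCreditCard", PrivacyClass.SENSITIVE, "isolated"),
    StickyPolicy("txtDisplayName", PrivacyClass.PUBLIC, "shared"),
]
```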

Keywords: Privacy enforcement, Platform-as-a-Service privacy awareness, cloud computing privacy.

PDF Downloads: 746
7527 Introduction of Hyperaccumulator Plants with Phytoremediation Potential of a Lead- Zinc Mine in Iran

Authors: M. Cheraghi, B. Lorestani, N. Yousefi

Abstract:

Contamination by heavy metals represents one of the most pressing threats to water and soil resources as well as to human health. Phytoremediation can potentially be used to remediate metal-contaminated sites. A major step towards the development of phytoremediation of heavy-metal-impacted soils is the discovery of heavy metal hyperaccumulation in plants. In this study, several established criteria used to define a hyperaccumulator plant were applied. The case study was a mining area in Hamedan province in the central-west part of Iran. The results showed that most of the sampled species were able to grow on heavily metal-contaminated soils and to accumulate extraordinarily high concentrations of some metals such as Zn, Mn, Cu, Pb and Fe. Using the most common criteria, Euphorbia macroclada and Centaurea virgata can be classified as hyperaccumulators of some of the measured heavy metals and, therefore, they have suitable potential for the phytoremediation of contaminated soils.
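
The keywords mention the enrichment factor and translocation factor; a hedged sketch of how these common screening ratios are typically computed (the greater-than-one rule of thumb is a standard criterion and not necessarily the paper's exact cut-off):

```python
# Hedged sketch: standard screening ratios used to flag hyperaccumulator
# candidates. The >1 thresholds follow the common rule of thumb (an assumption).
def enrichment_factor(c_plant, c_soil):
    """Metal concentration in the plant divided by that in the soil."""
    return c_plant / c_soil

def translocation_factor(c_shoot, c_root):
    """Metal concentration in shoots divided by that in roots."""
    return c_shoot / c_root

def looks_like_hyperaccumulator(c_shoot, c_root, c_soil):
    ef = enrichment_factor(c_shoot, c_soil)
    tf = translocation_factor(c_shoot, c_root)
    return ef > 1.0 and tf > 1.0, ef, tf

# Example: Zn concentrations in mg/kg (illustrative numbers only)
flag, ef, tf = looks_like_hyperaccumulator(c_shoot=5200, c_root=3100, c_soil=2400)
```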

Keywords: Enrichment factor, Heavy metals, Hyperaccumulator, Phytoremediation, Translocation factor

PDF Downloads: 2877
7526 DIFFER: A Propositionalization approach for Learning from Structured Data

Authors: Thashmee Karunaratne, Henrik Böstrom

Abstract:

Logic-based methods for learning from structured data are limited with respect to handling large search spaces, preventing large substructures from being considered by the resulting classifiers. A novel approach to learning from structured data is introduced that employs a structure transformation method, called finger printing, to address these limitations. The method, which generates features corresponding to arbitrarily complex substructures, is implemented in a system called DIFFER. The method is demonstrated to perform comparably to an existing state-of-the-art method on some benchmark data sets without requiring restrictions on the search space. Furthermore, learning from the union of the features generated by finger printing and by the previous method outperforms learning from each individual set of features on all benchmark data sets, demonstrating the benefit of developing complementary, rather than competing, methods for structure classification.

Keywords: Machine learning, Structure classification, Propositionalization.

PDF Downloads: 1210
7525 Performance Analysis of the Subgroup Method for Collective I/O

Authors: Kwangho Cha, Hyeyoung Cho, Sungho Kim

Abstract:

As many scientific applications require large-scale data processing, the importance of parallel I/O has been increasingly recognized. Collective I/O is one of the notable features of parallel I/O and enables application programmers to handle large data volumes easily. In this paper we measure and analyze the performance of original collective I/O and of the subgroup method, a way of using MPI collective I/O effectively. The experimental results show that the subgroup method performs well for small data sizes.
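
A hedged mpi4py sketch of the general idea of splitting the communicator into subgroups and performing the collective write within each subgroup; the group size and file layout are illustrative assumptions, not the paper's configuration:

```python
# Hedged sketch: split MPI ranks into subgroups and do the collective write
# inside each subgroup instead of over the whole communicator.
# Run with e.g.: mpiexec -n 8 python subgroup_io.py
import numpy as np
from mpi4py import MPI

GROUP_SIZE = 4                                   # illustrative assumption

world = MPI.COMM_WORLD
rank, nprocs = world.Get_rank(), world.Get_size()

color = rank // GROUP_SIZE                       # which subgroup this rank joins
subcomm = world.Split(color, rank)               # (color, key)

# Each rank contributes one block; offsets keep the global file layout intact.
block = np.full(1024, rank, dtype=np.int32)
offset = rank * block.nbytes

fh = MPI.File.Open(subcomm, "output.dat",
                   MPI.MODE_CREATE | MPI.MODE_WRONLY)
fh.Write_at_all(offset, block)                   # collective within the subgroup
fh.Close()
subcomm.Free()
```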

Keywords: Collective I/O, MPI, parallel file system.

PDF Downloads: 1567
7524 Animation of Objects on the Website by Application of CSS3 Language

Authors: Vladimir Simovic, Matija Varga, Robert Svetlacic

Abstract:

This scientific work analytically explores and demonstrates techniques for animating objects and geometric characters in the CSS3 language by applying proper formatting and positioning of elements. The paper presents examples of optimal application of the CSS3 descriptive language when generating general web animations (e.g., billiards and the movement of geometric characters). It analytically presents the optimal development and animation design with the frames within which the animated objects reside. The originally developed content is based on upgrading existing CSS3 descriptive-language animations with more complex syntax and project-oriented work. The purpose of the developed animations is to provide an overview of the interactive features of CSS3 design for computer games and for the animation of important analytical data in a web view. It has been analytically demonstrated that CSS3, as a descriptive language, allows various multimedia elements to be inserted into websites for public and internal sites.

Keywords: Animation recording, web page graphics, HTML5 forms, Cascading Style Sheets 3 - CSS3, man-computer interaction, KML animation presenting format, GML, Google Earth Professional.

PDF Downloads: 798
7523 Statistical Analysis for Overdispersed Medical Count Data

Authors: Y. N. Phang, E. F. Loh

Abstract:

Many researchers have suggested the use of zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models for modeling overdispersed medical count data with extra variation caused by excess zeros and unobserved heterogeneity. These studies indicate that ZIP and ZINB always provide a better fit than the ordinary Poisson and negative binomial models for overdispersed medical count data. In this study, we propose the use of zero-inflated inverse trinomial (ZIIT), zero-inflated Poisson inverse Gaussian (ZIPIG) and zero-inflated strict arcsine (ZISA) models for modeling overdispersed medical count data. These proposed models are not widely used by researchers, especially in the medical field. The results show that the three suggested models can serve as alternatives for modeling overdispersed medical count data, which is supported by applying them to a real-life medical data set. The inverse trinomial, Poisson inverse Gaussian and strict arcsine are discrete distributions with a cubic variance function of the mean; therefore, ZIIT, ZIPIG and ZISA are able to accommodate data with excess zeros and very heavy tails. They are recommended for modeling overdispersed medical count data when ZIP and ZINB are inadequate.
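
As background for the zero-inflated family this abstract builds on, a hedged sketch of the baseline zero-inflated Poisson (ZIP) log-likelihood and a simple maximum-likelihood fit; the proposed ZIIT, ZIPIG and ZISA models are not implemented here:

```python
# Hedged sketch: log-likelihood of the baseline zero-inflated Poisson model,
# the standard against which the paper's ZIIT/ZIPIG/ZISA models are proposed.
import numpy as np
from scipy.stats import poisson
from scipy.optimize import minimize

def zip_loglik(params, y):
    pi, lam = params                      # pi: excess-zero prob, lam: Poisson mean
    y = np.asarray(y)
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))
    ll_pos = np.log(1 - pi) + poisson.logpmf(y, lam)
    return np.where(y == 0, ll_zero, ll_pos).sum()

def fit_zip(y):
    neg = lambda p: -zip_loglik(p, y)
    res = minimize(neg, x0=[0.3, np.mean(y) + 0.1],
                   bounds=[(1e-6, 1 - 1e-6), (1e-6, None)])
    return res.x                          # (pi_hat, lambda_hat)

counts = [0, 0, 0, 1, 0, 2, 0, 0, 3, 0, 1, 0, 5, 0, 0]
pi_hat, lam_hat = fit_zip(counts)
```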

Keywords: Zero inflated, inverse trinomial distribution, Poisson inverse Gaussian distribution, strict arcsine distribution, Pearson’s goodness of fit.

PDF Downloads: 3308
7522 Student Satisfaction Data for Work Based Learners

Authors: Rosie Borup, Hanifa Shah

Abstract:

This paper aims to describe how student satisfaction is measured for work-based learners. These are non-traditional learners conducting academic learning in the workplace; typically their curricula have a high degree of negotiation, and their motivations are directly related to their employers' needs as well as to their own career ambitions. We argue that while increasing work-based learning (WBL) participation and the use of student satisfaction data (SSD) are both accepted as strategically important to the HE agenda, the use of WBL SSD is rarely examined; lessons can be learned from comparing SSD across a range of WBL programmes, and increased visibility of this type of data will provide insight into ways to improve and develop this type of delivery. The key themes that emerged from the analysis of the interview data were: learners' profiles and needs, employers' drivers, academic staff drivers, organizational approach, tools for collecting data, and visibility of findings. The paper concludes with observations on best practice in the collection, analysis and use of WBL SSD, offering recommendations for both academic managers and practitioners.

Keywords: Student satisfaction data, work based learning, employer engagement, NSS.

PDF Downloads: 1488
7521 Comparative Dynamic Performance of Load Frequency Control of Nonlinear Interconnected Hydro-Thermal System Using Intelligent Techniques

Authors: Banaja Mohanty, Prakash Kumar Hota

Abstract:

This paper demonstrates the dynamic performance evaluation of load frequency control (LFC) with different intelligent techniques. All nonlinearities and physical constraints have been considered in the simulation studies, such as governor dead band (GDB), generation rate constraint (GRC) and boiler dynamics. The conventional integral time absolute error (ITAE) has been considered as the objective function. The design problem is formulated as an optimisation problem, and particle swarm optimisation (PSO), the bacterial foraging optimisation algorithm (BFOA) and differential evolution (DE) are employed to search for the optimal controller parameters. The superiority of the proposed approach is shown by comparing the results with a published fuzzy logic control (FLC) for the same interconnected power system. The comparison is done using various performance measures such as overshoot, undershoot, settling time and standard error criteria of the frequency and tie-line power deviation following a step load perturbation (SLP). It is noticed that the dynamic performance of the proposed controller is better than that of FLC. Further, a robustness analysis is carried out by varying the time constants of the speed governor, turbine and tie-line power in the range of +40% to -40% to demonstrate the robustness of the proposed DE-optimized PID controller.
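
A hedged sketch of the tuning loop described here, using SciPy's differential evolution to minimise an ITAE cost; the plant below is a simple stand-in model, not the nonlinear hydro-thermal system with GDB and GRC studied in the paper:

```python
# Hedged sketch: tune PID gains with differential evolution against an ITAE
# objective. The toy plant is an illustrative stand-in, not the paper's model.
import numpy as np
from scipy.optimize import differential_evolution

DT, T_END = 0.02, 10.0

def itae_cost(gains):
    kp, ki, kd = gains
    x = y = 0.0                           # toy plant states
    integ = prev_e = cost = 0.0
    for k in range(int(T_END / DT)):
        t = k * DT
        e = 1.0 - y                       # unit step reference
        integ += e * DT
        deriv = (e - prev_e) / DT
        u = kp * e + ki * integ + kd * deriv
        prev_e = e
        # toy cascade of two first-order lags (time constants 0.5 s and 2 s)
        x += DT * (u - x) / 0.5
        y += DT * (x - y) / 2.0
        cost += t * abs(e) * DT           # ITAE accumulation
    return cost

bounds = [(0.0, 5.0), (0.0, 5.0), (0.0, 2.0)]   # Kp, Ki, Kd search ranges
result = differential_evolution(itae_cost, bounds, maxiter=20, seed=1)
kp, ki, kd = result.x
```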

Keywords: Automatic generation control, governor dead band, generation rate constraint, differential evolution.

PDF Downloads: 1056
7520 A 3.125Gb/s Clock and Data Recovery Circuit Using 1/4-Rate Technique

Authors: Il-Do Jeong, Hang-Geun Jeong

Abstract:

This paper describes the design and fabrication of a clock and data recovery (CDR) circuit. We propose a new clock and data recovery circuit based on a 1/4-rate frequency detector (QRFD). The proposed frequency detector helps reduce the VCO frequency and is thus advantageous for high-speed applications. It can achieve low-jitter operation and extend the pull-in range without using a reference clock. The proposed CDR was implemented using a 1/4-rate bang-bang type phase detector (PD) and a ring voltage-controlled oscillator (VCO). The CDR circuit has been fabricated in a standard 0.18 μm CMOS technology. It occupies an active area of 1 × 1 and consumes 90 mW from a single 1.8 V supply.

Keywords: Clock and data recovery, 1/4-rate frequency detector, 1/4-rate phase detector.

PDF Downloads: 2912
7519 Walking Hexapod Robot in Disaster Recovery: Developing Algorithm for Terrain Negotiation and Navigation

Authors: Md. Masum Billah, Mohiuddin Ahmed, Soheli Farhana

Abstract:

In the modern day, disaster recovery missions have become one of the top priorities in any natural disaster management regime. Smart autonomous robots may play a significant role in such missions, including the search for life under earthquake-hit rubble, on tsunami-hit islands, in de-mining of war-affected areas, and in many other such situations. In this paper, the current state of many walking robots is compared and the advantages of hexapod systems over wheeled robots are described. In our research we have selected a hexapod spider robot that we are developing, focusing mainly on an efficient navigation method over different terrain using an apposite gait of locomotion, which makes it faster and at the same time more energy efficient in navigating and negotiating difficult terrain. This paper describes the method of terrain negotiation and navigation in a hazardous field.

Keywords: Walking robots, locomotion, hexapod robot, gait, hazardous field.

PDF Downloads: 4424
7518 Very High Speed Data Driven Dynamic NAND Gate at 22nm High K Metal Gate Strained Silicon Technology Node

Authors: Shobha Sharma, Amita Dev

Abstract:

Data-driven dynamic logic is a high-speed dynamic circuit style with low area. The clock of the dynamic circuit is removed, and the data drive the circuit instead of the clock for precharging purposes. The data-driven dynamic NAND gate studied here is given a static forward substrate bias of Vsupply/2, and the substrate bias is also connected to the input data, resulting in a dynamic substrate bias. The dynamic substrate bias gives the shortest propagation delay, with a penalty on the power dissipation. The propagation delay is reduced by 77.8% compared to the normal reverse-substrate-biased data-driven dynamic NAND. The dynamically substrate-biased D3NAND's propagation delay is also reduced by 31.26% compared to the data-driven dynamic NAND gate with a static forward substrate bias of Vdd/2. This data-driven dynamic NAND gate with dynamic body biasing gives the highest speed with no area penalty and finds application where a power penalty is acceptable. A combination of dynamic and static forward body bias can also be used, with reduced propagation delay compared to the statically forward-biased circuit and a comparable increase in average power. The simulations were done with the HSPICE simulator using 22 nm high-k metal gate strained-Si HP technology models from Arizona State University, USA.

Keywords: Data driven nand gate, dynamic substrate biasing, nand gate, static substrate biasing.

PDF Downloads: 1611
7517 A Cascaded Fuzzy Inference System for Dynamic Online Portals Customization

Authors: Erika Martinez Ramirez, Rene V. Mayorga

Abstract:

In our modern world, more and more physical transactions are being substituted by electronic transactions (i.e., banking, shopping, and payments), and many businesses and companies perform most of their operations through the Internet. Instead of physical commerce, Internet visitors are now adapting to electronic commerce (e-Commerce). Web users' ability to reach products worldwide can be greatly enhanced by creating friendly and personalized online business portals. Internet visitors will return to a particular website when they can easily find the information they need or want. Dealing with this human conceptualization calls for the incorporation of Artificial/Computational Intelligence techniques in the creation of customized portals. Among these techniques, Fuzzy-Set technologies can make many useful contributions to the development of such a human-centered endeavor as e-Commerce. The main objective of this paper is the implementation of a paradigm for the intelligent design and operation of human-computer interfaces. In particular, the paradigm is quite appropriate for the intelligent design and operation of software modules that display information (such as Web pages, graphical user interfaces (GUIs), and multimedia modules) on a computer screen. The human conceptualization of the user's personal information is analyzed through a cascaded fuzzy inference (decision-making) system to generate the User Ascribe Qualities, which identify the user and can be used to customize portals with the proper Web links.
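
A hedged sketch of a cascaded (two-stage) fuzzy inference chain, where the output of the first stage feeds the second; the variables, membership functions and rules are illustrative assumptions, not the paper's actual system:

```python
# Hedged sketch: a tiny two-stage (cascaded) fuzzy inference chain. Membership
# functions, rules and variable names are illustrative assumptions only.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with corners a < b < c."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

def stage1_interest(pages_visited, minutes_on_site):
    """First FIS: browsing behaviour -> interest level in [0, 1]."""
    few, many = tri(pages_visited, -1, 0, 10), tri(pages_visited, 5, 15, 40)
    short, long_ = tri(minutes_on_site, -1, 0, 15), tri(minutes_on_site, 10, 30, 90)
    w_high = min(many, long_)     # rule: many pages AND long visit -> high interest
    w_low = min(few, short)       # rule: few pages AND short visit -> low interest
    return (0.9 * w_high + 0.1 * w_low) / (w_high + w_low + 1e-9)

def stage2_personalization(interest, past_purchases):
    """Second FIS: cascades on the stage-1 output -> personalization degree."""
    low_i, high_i = tri(interest, -0.1, 0.0, 0.6), tri(interest, 0.4, 1.0, 1.6)
    few_b, many_b = tri(past_purchases, -1, 0, 5), tri(past_purchases, 2, 8, 20)
    w_full = min(high_i, many_b)  # rule: high interest AND many purchases
    w_none = min(low_i, few_b)    # rule: low interest AND few purchases
    return (1.0 * w_full + 0.0 * w_none) / (w_full + w_none + 1e-9)

score = stage2_personalization(stage1_interest(12, 18), past_purchases=4)
```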

Keywords: Fuzzy Logic, Internet, Electronic Commerce, Intelligent Portals, Electronic Shopping.

PDF Downloads: 1781
7516 Bandwidth Efficient Diversity Scheme Using STTC Concatenated With STBC: MIMO Systems

Authors: Sameru Sharma, Sanjay Sharma, Derick Engles

Abstract:

Multiple-input multiple-output (MIMO) systems are widely used to improve the quality and reliability of wireless transmission and to increase spectral efficiency. However, in MIMO systems, multiple copies of the data are received after experiencing various channel effects. The complexity limitations of conventional decoding techniques, which grow with the number of antennas, have been examined. Accordingly, we propose a modified sphere decoder (MSD-1) algorithm with lower complexity that gives rise to a system with high spectral efficiency. With the aim of increasing signal diversity, we apply a rotated quadrature amplitude modulation (QAM) constellation in multi-dimensional space. Finally, we propose a new architecture involving a space time trellis code (STTC) concatenated with a space time block code (STBC), using MSD-1 at the receiver, for improving system performance. The system gains have been verified in the presence of channel state information (CSI) errors.
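
As a hedged illustration of the STBC building block mentioned here, the well-known 2×1 Alamouti code in NumPy; the STTC concatenation, rotated constellation and MSD-1 decoder of the paper are not shown:

```python
# Hedged sketch: Alamouti 2x1 space-time block code, the classic STBC building
# block. The STTC concatenation and MSD-1 decoding are not shown.
import numpy as np

def alamouti_encode(s):
    """s: even-length array of complex symbols. Returns a (T, 2) array:
    row t holds the symbols sent from antennas 1 and 2 at time t."""
    s = np.asarray(s, dtype=complex).reshape(-1, 2)
    out = np.empty((s.shape[0] * 2, 2), dtype=complex)
    out[0::2, 0], out[0::2, 1] = s[:, 0], s[:, 1]                     # time 2k
    out[1::2, 0], out[1::2, 1] = -np.conj(s[:, 1]), np.conj(s[:, 0])  # time 2k+1
    return out

def alamouti_decode(r, h1, h2):
    """r: received samples, flat-fading gains h1, h2 assumed constant per pair."""
    r = np.asarray(r, dtype=complex).reshape(-1, 2)
    r1, r2 = r[:, 0], r[:, 1]
    s1 = np.conj(h1) * r1 + h2 * np.conj(r2)    # linear combining
    s2 = np.conj(h2) * r1 - h1 * np.conj(r2)
    return np.stack([s1, s2], axis=1).reshape(-1) / (abs(h1)**2 + abs(h2)**2)

# Round trip over a noiseless flat-fading channel
symbols = np.array([1+1j, -1+1j, 1-1j, -1-1j]) / np.sqrt(2)
h1, h2 = 0.8 + 0.3j, -0.2 + 0.9j
tx = alamouti_encode(symbols)
rx = tx[:, 0] * h1 + tx[:, 1] * h2
est = alamouti_decode(rx, h1, h2)               # ~ equals `symbols`
```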

Keywords: Channel State Information , Diversity, Multi-Antenna, Rotated Constellation, Space Time Codes.

PDF Downloads: 1659
7515 Performance Evaluation of an Ontology-Based Arabic Sentiment Analysis

Authors: Salima Behdenna, Fatiha Barigou, Ghalem Belalem

Abstract:

Due to the quick increase in the volume of Arabic opinions posted on various social media, Arabic sentiment analysis has become one of the most important areas of research. Compared to English, there are very few works on Arabic sentiment analysis, in particular aspect-based sentiment analysis (ABSA). In ABSA, aspect extraction is the most important task. In this paper, we propose a semantic ABSA approach for standard Arabic reviews to extract explicit aspect terms and identify the polarity of the extracted aspects. The proposed approach was evaluated using the HAAD datasets. Experiments showed that the proposed approach achieved a good level of performance compared with baseline results. The F-measure was improved by 19% for the aspect term extraction task and by 55% for the aspect term polarity task.

Keywords: Sentiment analysis, opinion mining, Arabic, aspect level, opinion, polarity.

PDF Downloads: 447
7514 Off-Line Detection of “Pannon Wheat” Milling Fractions by Near-Infrared Spectroscopic Methods

Authors: E. Izsó, M. Bartalné-Berceli, Sz. Gergely, A. Salgó

Abstract:

The aim of this investigation is to elaborate near-infrared methods for testing and recognizing the chemical components and quality of “Pannon wheat” allied (i.e., true-to-variety or variety-identified) milling fractions, as well as to develop spectroscopic methods for following the milling processes and evaluating the stability of the milling technology across different types of milling products and sampling times, respectively. These wheat categories were produced under industrial conditions, and samples were collected as a function of sampling time and of maximum or minimum yields. The changes in the main chemical components (such as starch, protein, lipid) and physical properties of the fractions (particle size) were analysed by dispersive spectrophotometers using the visible (VIS) and near-infrared (NIR) regions of electromagnetic radiation. Close correlations were obtained between the spectroscopic measurement data processed by various chemometric methods (e.g., principal component analysis [PCA], cluster analysis [CA]) and the operating conditions of the milling technology. NIR methods are able to detect deviations in the yield parameters and differences between sampling times across a wide variety of fractions. NIR technology can be used in the sensitive monitoring of milling technology.
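
A hedged sketch of the PCA step mentioned among the chemometric methods, applied to a matrix of NIR spectra; the standard-normal-variate pre-processing and component count are illustrative assumptions:

```python
# Hedged sketch: PCA scores for a set of NIR spectra. The SNV pre-processing
# and the number of components are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

def snv(spectra):
    """Standard normal variate: center and scale each spectrum (each row)."""
    spectra = np.asarray(spectra, dtype=float)
    return (spectra - spectra.mean(axis=1, keepdims=True)) / \
           spectra.std(axis=1, keepdims=True)

def pca_scores(spectra, n_components=3):
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(snv(spectra))       # (samples x components)
    return scores, pca.explained_variance_ratio_

# Example: 20 synthetic "spectra" of 700 wavelength points
rng = np.random.default_rng(0)
spectra = rng.normal(size=(20, 700)).cumsum(axis=1)   # smooth-ish curves
scores, evr = pca_scores(spectra)
```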

Keywords: Allied wheat fractions, CA, milling process, near-infrared spectroscopy, PCA.

PDF Downloads: 1687
7513 A State-Of-The-Art Review on Web Services Adaptation

Authors: M. Velasco, D. While, P. Raju, J. Krasniewicz, A. Amini, L. Hernandez-Munoz

Abstract:

Web service adaptation involves the creation of adapters that solve Web service incompatibilities known as mismatches. Since the importance of Web services adaptation is increasing because of the frequent implementation and use of online Web services, this paper presents a literature review of Web services to investigate the main methods of adaptation, their theoretical underpinnings and the metrics used to measure adapter performance. Eighteen publications were reviewed independently by two researchers. We found that adaptation techniques are needed to solve different types of problems that may arise due to incompatibilities in Web service interfaces, including protocols, messages, data and semantics that affect the interoperability of the services. Although adapters are non-invasive methods that can improve Web services interoperability and there are current approaches for service adaptation, there is not yet one solution that fits all types of mismatches. Our results also show that only a few research projects incorporate theoretical frameworks and that metrics to measure adapters’ performance are very limited. We conclude that further research on software adaptation should improve current adaptation methods in different layers of service interoperability, and that an adaptation framework that incorporates a theoretical underpinning and measures of qualitative and quantitative performance needs to be created.

Keywords: Web services adapters, software adaptation, web services mismatches, web services interoperability.

PDF Downloads: 1863
7512 A System to Adapt Techniques of Text Summarizing to Polish

Authors: Marcin Ciura, Damian Grund, S

Abstract:

This paper describes a system in which various methods of text summarization can be adapted to Polish. The structure of the system is presented. Its modular construction and access to the system via the Internet are outlined.

Keywords: Automatic summary generation, linguistic analysis, text generation.

PDF Downloads: 1538
7511 NSBS: Design of a Network Storage Backup System

Authors: Xinyan Zhang, Zhipeng Tan, Shan Fan

Abstract:

The first layer of defense against data loss is backup data. This paper implements an agent-based network backup system built from a tripartite construction of backup agents, server-storage agents, and server-backup agents; snapshots and a hierarchical index are used in the NSBS. It separates the control commands from the data flow and balances the system load, thereby improving the efficiency of system backup and recovery. The test results show that the agent-based network backup system can effectively improve task-based concurrency, reasonably allocate network bandwidth, keep the performance cost of system backup small, and improve data recovery efficiency by 20%.

Keywords: Agent, network backup system, three architecture model, NSBS.

PDF Downloads: 2219
7510 Estimating the Life-Distribution Parameters of Weibull-Life PV Systems Utilizing Non-Parametric Analysis

Authors: Saleem Z. Ramadan

Abstract:

In this paper, a model is proposed to determine the life distribution parameters of the useful life region for the PV system utilizing a combination of non-parametric and linear regression analysis for the failure data of these systems. Results showed that this method is dependable for analyzing failure time data for such reliable systems when the data is scarce.
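
A hedged sketch of one standard way to combine a non-parametric step (median ranks) with linear regression to estimate Weibull shape and scale, in the spirit of the abstract; the paper's exact model, including its handling of masking, is not reproduced here:

```python
# Hedged sketch: median-rank regression for Weibull parameters. Bernard's
# approximation supplies the non-parametric CDF estimates; a linear fit on the
# Weibull plot gives shape (beta) and scale (eta). Masking is not handled here.
import numpy as np

def weibull_mrr(failure_times):
    t = np.sort(np.asarray(failure_times, dtype=float))
    n = t.size
    i = np.arange(1, n + 1)
    f = (i - 0.3) / (n + 0.4)              # Bernard's median-rank approximation
    x = np.log(t)
    y = np.log(-np.log(1.0 - f))           # linearized Weibull CDF
    beta, intercept = np.polyfit(x, y, 1)  # slope = shape parameter
    eta = np.exp(-intercept / beta)        # scale parameter
    return beta, eta

times = [420., 510., 580., 640., 700., 760., 830., 920.]   # e.g. hours to failure
beta_hat, eta_hat = weibull_mrr(times)
```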

Keywords: Masking, Bathtub model, reliability, non-parametric analysis, useful life.

PDF Downloads: 1838
7509 Novel Security Strategy for Real Time Digital Videos

Authors: Prakash Devale, R. S. Prasad, Amol Dhumane, Pritesh Patil

Abstract:

Nowadays, the video data embedding approach is a very challenging and interesting task for keeping real-time video data secure, and the technique can be implemented and used in high-level applications. The rate-distortion of an image frame is not fixed, because the gain provided by accurate image frame segmentation is balanced by the inefficiency of coding objects of arbitrary shape, along with losses that depend on both the coding scheme and the object structure. By using a rate controller in association with the encoder, one can dynamically adjust the target bitrate. This paper discusses how to keep videos secure by mixing signature data into the original video with negligible distortion, keeping the steganographic video as close as possible to the quality of the original. We propose a method for embedding the signature data into separate video frames using the block Discrete Cosine Transform; these frames are then encoded using real-time H.264 encoding concepts. After processing, recovery of the original video and the signature data at the receiver end is proposed.
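
A hedged sketch of block-DCT embedding of one signature bit per 8×8 luma block; the coefficient position and embedding strength are illustrative assumptions, and the real-time H.264 encoding stage is not shown:

```python
# Hedged sketch: hide one bit per 8x8 block by setting a mid-frequency DCT
# coefficient's sign. Coefficient position and strength are assumptions; the
# H.264 encoding step of the paper is not shown.
import numpy as np
from scipy.fft import dctn, idctn

COEF = (3, 2)        # mid-frequency coefficient used to carry the bit
STRENGTH = 12.0

def embed_bits(frame, bits):
    """frame: 2D uint8 luma plane with dimensions divisible by 8."""
    out = frame.astype(float).copy()
    h, w = frame.shape
    blocks = [(r, c) for r in range(0, h, 8) for c in range(0, w, 8)]
    for (r, c), bit in zip(blocks, bits):
        block = dctn(out[r:r+8, c:c+8], norm="ortho")
        block[COEF] = STRENGTH * (1 if bit else -1)    # sign carries the bit
        out[r:r+8, c:c+8] = idctn(block, norm="ortho")
    return np.clip(out, 0, 255).astype(np.uint8)

def extract_bits(frame, n_bits):
    h, w = frame.shape
    blocks = [(r, c) for r in range(0, h, 8) for c in range(0, w, 8)][:n_bits]
    return [int(dctn(frame[r:r+8, c:c+8].astype(float), norm="ortho")[COEF] > 0)
            for r, c in blocks]

frame = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed_bits(frame, bits=[1, 0, 1, 1, 0, 0, 1, 0])
recovered = extract_bits(marked, 8)
```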

Keywords: Data Hiding, Digital Watermarking, video coding H.264, Rate Control, Block DCT.

PDF Downloads: 1554
7508 Web Search Engine Based Naming Procedure for Independent Topic

Authors: Takahiro Nishigaki, Takashi Onoda

Abstract:

In recent years, the amount of document data has been increasing with the spread of the Internet, and many methods have been studied for extracting topics from large document collections. We previously proposed Independent Topic Analysis (ITA) to extract topics that are independent of each other from large document data such as newspaper data. ITA extracts the independent topics from the document data by using Independent Component Analysis. A topic extracted by ITA is represented by a set of words; however, such a set of words can be quite different from the topic the user imagines. For example, the top five words with high independence for one topic are as follows: Topic1 = {"scor", "game", "lead", "quarter", "rebound"}. This Topic 1 is considered to represent the topic "SPORTS", but the topic name "SPORTS" has to be attached by the user; ITA cannot name topics. Therefore, in this research, we propose a method that uses a web search engine to obtain topic names that are easy for people to understand from the set of words given by independent topic analysis. In particular, we search for the set of topical words, and the title of the top page in the search results is taken as the topic name. We also apply the proposed method to several data sets and verify its effectiveness.
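
A hedged sketch of the naming step; `search_result_titles` is a hypothetical stand-in for whatever search engine interface is actually used, not a real library call:

```python
# Hedged sketch of the naming step: query a web search engine with the topic's
# top words and take the first result's page title as the topic name.
# `search_result_titles` is a HYPOTHETICAL helper standing in for the actual
# search engine API used; it is not a real library call.
def name_topic(topic_words, search_result_titles):
    query = " ".join(topic_words)
    titles = search_result_titles(query)    # hypothetical: list of result titles
    return titles[0] if titles else query   # fall back to the raw word list

# Example with a stubbed search function
stub = lambda q: ["SPORTS - Latest scores, games and rebounds"]
print(name_topic(["scor", "game", "lead", "quarter", "rebound"], stub))
```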

Keywords: Independent topic analysis, topic extraction, topic naming, web search engine.

PDF Downloads: 489
7507 An Anomaly Detection Approach to Detect Unexpected Faults in Recordings from Test Drives

Authors: Andreas Theissler, Ian Dear

Abstract:

In the automotive industry, test drives are conducted during the development of new vehicle models or as part of the quality assurance of series-production vehicles. The communication on the in-vehicle network, data from external sensors, and internal data from the electronic control units are recorded by automotive data loggers during the test drives, and the recordings are used for fault analysis. Since the resulting data volume is tremendous, manually analysing each recording in great detail is not feasible. This paper proposes using machine learning to support domain experts by keeping them from contemplating irrelevant data and instead pointing them to the relevant parts of the recordings. The underlying idea is to learn the normal behaviour from available recordings, i.e. a training set, and then to autonomously detect unexpected deviations and report them as anomalies. The one-class support vector machine “support vector data description” is utilised to calculate distances of feature vectors. SVDDSUBSEQ is proposed as a novel approach that allows subsequences in multivariate time series data to be classified. The approach detects unexpected faults without modelling effort, as shown by experimental results on recordings from test drives.
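
A hedged sketch of the subsequence-anomaly idea using scikit-learn's OneClassSVM as a stand-in for SVDD (closely related but not identical), with the window length and kernel parameters as illustrative assumptions:

```python
# Hedged sketch: learn "normal" behaviour from training recordings, then flag
# anomalous subsequences in new multivariate recordings. OneClassSVM is used
# here as a stand-in for SVDD; window size and nu/gamma are assumptions.
import numpy as np
from sklearn.svm import OneClassSVM

def windows(signal, length=50, step=10):
    """signal: (samples x channels). Returns (n_windows x length*channels)."""
    feats = [signal[i:i + length].ravel()
             for i in range(0, len(signal) - length + 1, step)]
    return np.asarray(feats)

def fit_normal_model(normal_recording):
    model = OneClassSVM(kernel="rbf", nu=0.01, gamma="scale")
    model.fit(windows(normal_recording))
    return model

def detect_anomalies(model, recording, length=50, step=10):
    w = windows(recording, length, step)
    flags = model.predict(w) == -1            # -1 marks an anomalous window
    starts = np.arange(0, len(recording) - length + 1, step)
    return starts[flags]                      # start indices of anomalous windows

# Synthetic example: two channels, a fault injected late in the test recording
rng = np.random.default_rng(0)
normal = rng.normal(size=(2000, 2))
test = rng.normal(size=(1000, 2))
test[700:760] += 6.0                          # unexpected deviation
model = fit_normal_model(normal)
print(detect_anomalies(model, test))
```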

Keywords: Anomaly detection, fault detection, test drive analysis, machine learning.

PDF Downloads: 2468
7506 A Study on the Differential Diagnostic Model for Newborn Hearing Loss Screening

Authors: Chun-Lang Chang

Abstract:

According to the statistics, the prevalence of congenital hearing loss in Taiwan is approximately six per thousand; furthermore, one per thousand of infants have severe hearing impairment. Hearing ability during infancy has a significant impact on the development of children's oral expression, language maturity, cognitive performance, educational ability and social behavior later in life. Although most children born with hearing impairment have sensorineural hearing loss, almost every child more or less retains some residual hearing. If provided with a hearing aid or cochlear implant (a bionic ear) in time, in addition to hearing and speech training, even severely hearing-impaired children can still learn to talk. On the other hand, those who fail to be diagnosed, and are thus unable to begin hearing and speech rehabilitation in a timely manner, may lose an important opportunity to live a complete and healthy life. Eventually, the lack of hearing and speaking ability will affect the development of both mental and physical functions, intelligence, and social adaptability. Not only will this problem result in an irreparable regret for the hearing-impaired child for a lifetime, but it will also create a heavy burden for the family and society. Therefore, it is necessary to establish a computer-assisted predictive model that can accurately detect and help diagnose newborn hearing loss so that early interventions can be provided in time and waste of medical resources eliminated. This study uses the neonatal database of the case hospital as its subjects and adopts two different analysis methods: using a support vector machine (SVM) for model prediction directly, and using logistic regression for factor screening prior to SVM prediction, in order to compare the results. The results indicate that the prediction accuracy is as high as 96.43% when the factors are screened and selected through logistic regression. Hence, the model constructed in this study can provide real help to physicians in clinical diagnosis and is genuinely beneficial for early intervention in newborn hearing impairment.
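
A hedged scikit-learn/statsmodels sketch of the two-stage scheme described here, with logistic-regression factor screening followed by SVM prediction; the p-value cut-off, synthetic data and SVM settings are illustrative assumptions, not the paper's protocol:

```python
# Hedged sketch: screen candidate risk factors with logistic regression, then
# train an SVM on the retained factors. The 0.05 p-value cut-off and the SVM
# settings are illustrative assumptions, not the paper's exact protocol.
import numpy as np
import statsmodels.api as sm
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def screen_factors(X, y, alpha=0.05):
    res = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    pvals = np.asarray(res.pvalues)[1:]          # drop the intercept term
    return np.where(pvals < alpha)[0]            # indices of significant factors

def train_screened_svm(X, y):
    keep = screen_factors(X, y)
    Xtr, Xte, ytr, yte = train_test_split(X[:, keep], y, test_size=0.3,
                                          random_state=0, stratify=y)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(Xtr, ytr)
    return clf, keep, clf.score(Xte, yte)

# Synthetic stand-in for the neonatal screening factors
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 8))
y = (X[:, 0] + 0.8 * X[:, 2] + rng.normal(scale=0.5, size=400) > 0).astype(int)
clf, kept_factors, accuracy = train_screened_svm(X, y)
```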

Keywords: Data mining, Hearing impairment, Logistic regression analysis, Support vector machines

PDF Downloads: 1796