Search results for: Information Extraction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4480

3250 Statistical Analysis of the Factors that Influence the Properties of Blueberries from Cultivar Bluecrop

Authors: Raquel P. F. Guiné, Susana R. Matos, Daniela V. T. A. Costa, Fernando J. Gonçalves

Abstract:

Because blueberries are recognized worldwide as a good source of beneficial components, their consumption has increased in recent decades, and so has the number of scientific studies of their properties. Hence, this work was undertaken to evaluate the effect of some production and conservation factors on the properties of blueberries from cultivar Bluecrop. The physical and chemical analyses were done according to established methodologies, and all data were then treated with the SPSS software to assess possible differences among the factors investigated and/or correlations between the variables under study. The results showed that the location of production influenced some of the berries' properties (caliber, sugars, antioxidant activity, color and texture) and that the age of the bushes was correlated with moisture, sugars and acidity, as well as lightness. On the other hand, the altitude of the farm was correlated only with sugar content. With regard to conservation, it influenced only anthocyanin content and DPPH antioxidant activity. Finally, the type of extract and the order of extraction had a pronounced influence on all the phenolic properties evaluated.
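As an illustration of the kind of statistical treatment described above (differences among factors, correlations between variables), the following is a minimal Python sketch using SciPy and pandas rather than SPSS; the data frame, group labels and column names are synthetic placeholders, not the study's data.

```python
# Hedged sketch: one-way ANOVA for a production factor (location) and a Pearson correlation
# for a continuous factor (bush age); values below are synthetic, column names illustrative.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "location": rng.choice(["farm_A", "farm_B", "farm_C"], size=90),
    "bush_age": rng.integers(2, 15, size=90),
    "sugars": rng.normal(12, 2, size=90),
})

# Effect of location on sugar content (one-way ANOVA).
groups = [g["sugars"].values for _, g in df.groupby("location")]
f_stat, p_anova = stats.f_oneway(*groups)

# Correlation between bush age and sugar content.
r, p_corr = stats.pearsonr(df["bush_age"], df["sugars"])
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.3f}; Pearson r={r:.2f}, p={p_corr:.3f}")
```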

Keywords: Antioxidant activity, blueberry, conservation, geographical origin, phenolic compounds, statistical analysis.

3249 Pattern Recognition Using Feature Based Die-Map Clustering in the Semiconductor Manufacturing Process

Authors: Seung Hwan Park, Cheng-Sool Park, Jun Seok Kim, Youngji Yoo, Daewoong An, Jun-Geol Baek

Abstract:

As big data analysis becomes increasingly important, yield prediction using data from the semiconductor process is essential. In general, yield prediction and analysis of the causes of failure are closely related. The purpose of this study is to analyze the patterns that affect the final test results using die-map-based clustering. Much research has been conducted using die data from the semiconductor test process. However, such analysis is limited because the test data are less directly related to the final test results. Therefore, this study proposes a framework for analysis through clustering using more detailed data than the existing die data. The study consists of three phases. In the first phase, a die map is created from fail-bit data in each sub-area of the die. In the second phase, clustering is performed on the map data. In the third phase, patterns that affect the final test result are identified. Finally, the proposed three-phase procedure is applied to actual industrial data, and the experimental results show its potential for field application.
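A minimal sketch of the clustering idea, assuming each die is summarized as a fixed-size grid of fail-bit counts; the grid size, synthetic data and cluster count below are illustrative choices, not the paper's.

```python
# Illustrative sketch: cluster dies by their fail-bit density maps (assumed data layout).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Assume each die is summarized as an 8x8 grid of fail-bit counts per sub-area.
die_maps = rng.poisson(lam=2.0, size=(500, 8, 8)).astype(float)

# Phases 1-2: flatten each die map into a feature vector and cluster.
features = die_maps.reshape(len(die_maps), -1)
features /= features.sum(axis=1, keepdims=True) + 1e-9   # normalize to a spatial pattern

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)

# Phase 3: inspect the mean fail-bit pattern of each cluster to relate it to final test results.
for k in range(kmeans.n_clusters):
    mean_map = features[kmeans.labels_ == k].mean(axis=0).reshape(8, 8)
    print(f"cluster {k}: hottest sub-area at {np.unravel_index(mean_map.argmax(), mean_map.shape)}")
```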

Keywords: Die-Map Clustering, Feature Extraction, Pattern Recognition, Semiconductor Manufacturing Process.

3248 Feature-Based Summarizing and Ranking from Customer Reviews

Authors: Dim En Nyaung, Thin Lai Lai Thein

Abstract:

With the rapid growth of the Internet, web opinion sources emerge dynamically and are useful to both potential customers and product manufacturers for prediction and decision purposes. These are user-generated contents written in natural language as unstructured free text. Therefore, opinion mining techniques have become popular for automatically processing customer reviews to extract product features and the user opinions expressed about them. Since customer reviews may contain both opinionated and factual sentences, a supervised machine learning technique is applied for subjectivity classification to improve the mining performance. In this paper, our work is dedicated to the task of opinion summarization. Product feature and opinion extraction is therefore critical to opinion summarization, because its effectiveness significantly affects the identification of semantic relationships. The polarity and numeric score of all the features are determined with the SentiWordNet lexicon. The problem of opinion summarization concerns how to relate the opinion words to a certain feature. A probabilistic supervised learning model improves the result and is more flexible and effective.
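To make the SentiWordNet step concrete, here is a hedged sketch of scoring opinion words with NLTK's SentiWordNet interface; the averaging rule and the example feature and words are illustrative choices, not the paper's exact procedure.

```python
# Hedged sketch: score an opinion word for a product feature with SentiWordNet (via NLTK).
# Requires: nltk.download('wordnet'); nltk.download('sentiwordnet'); nltk.download('omw-1.4')
from nltk.corpus import sentiwordnet as swn

def polarity_score(word, pos='a'):
    """Average positive-minus-negative score over the word's SentiWordNet synsets."""
    synsets = list(swn.senti_synsets(word, pos))
    if not synsets:
        return 0.0
    return sum(s.pos_score() - s.neg_score() for s in synsets) / len(synsets)

# Example: relate opinion words to a hypothetical product feature "battery".
for opinion_word in ["excellent", "poor", "average"]:
    print(opinion_word, round(polarity_score(opinion_word), 3))
```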

Keywords: Opinion Mining, Opinion Summarization, Sentiment Analysis, Text Mining.

3247 On the Quality of Internet Users' Behavioral Patterns in Using Different Sites and Its Impact on Taboos of Marriage: A Survey among Undergraduate Students in Mashhad City in Iran

Authors: Javadi Alimohammad, Zanjanizadeh Homa, Javadi Maryam

Abstract:

Regarding the multimedia nature of the Internet and the facilities it can provide to users, the purpose of this paper is to investigate users' behavioral patterns and the impact of the Internet on taboos of marriage. For this purpose, a survey was conducted on a sample of 403 students of governmental guidance schools in the city of Mashhad, Iran. The results showed that the use of various Internet environments depends on the degree of the users' familiarity with these sites. In order to clarify the effects of the Internet on the taboos of marriage, the non-Internet parameters were also controlled. A t-test between Internet users and non-users indicated that Internet users hold weaker taboos of marriage. Extracting the effects of the Internet while controlling for the non-Internet parameters indicates that Internet addiction, the creation of a cordial atmosphere, emotional communication, and message attractiveness have significant effects on the family's traditional values.
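The comparison of users and non-users rests on an independent-samples t-test; below is a minimal SciPy sketch of that test on synthetic placeholder scores (group sizes, scale values and variable names are invented for illustration).

```python
# Hedged sketch: compare a "taboos of marriage" score between Internet users and non-users
# with an independent-samples t-test. The scores below are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
users = rng.normal(loc=3.1, scale=0.8, size=200)      # hypothetical scale scores, Internet users
non_users = rng.normal(loc=3.5, scale=0.8, size=203)  # hypothetical scale scores, non-users

t_stat, p_value = stats.ttest_ind(users, non_users, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```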

Keywords: Internet, taboos of marriage, family, mass communication, computer-mediated communication.

3246 A Universal Model for Content-Based Image Retrieval

Authors: S. Nandagopalan, Dr. B. S. Adiga, N. Deepak

Abstract:

In this paper, a novel approach for generalized image retrieval based on semantic contents is presented. It combines three feature extraction methods, namely color, texture, and the edge histogram descriptor, and there is a provision to add new features in the future for better retrieval efficiency. Any combination of these methods that is more appropriate for the application can be used for retrieval; this is provided through the User Interface (UI) in the form of relevance feedback. The image properties analyzed in this work are obtained using computer vision and image processing algorithms. For color, the image histograms are computed; for texture, co-occurrence-matrix-based entropy, energy, etc. are calculated; and for edge density, the Edge Histogram Descriptor (EHD) is found. For retrieval of images, a novel idea based on a greedy strategy is developed to reduce the computational complexity. The entire system was developed using AForge.Imaging (an open-source product), MATLAB .NET Builder, C#, and Oracle 10g. The system was tested with the Corel image database containing 1000 natural images and achieved good results.
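The feature types named above can be sketched as follows; this is an illustrative Python/scikit-image version (the paper's system uses AForge.Imaging and C#), and the bin counts, GLCM properties and edge threshold are my assumptions.

```python
# Hedged sketch of the three feature families: color histogram, co-occurrence texture
# measures, and a simple edge-density stand-in for the EHD.
import numpy as np
from skimage import data, color, filters
from skimage.feature import graycomatrix, graycoprops

rgb = data.astronaut()                                   # sample RGB image
gray = (color.rgb2gray(rgb) * 255).astype(np.uint8)

# Color feature: per-channel histograms.
color_hist = np.concatenate([np.histogram(rgb[..., c], bins=16, range=(0, 255))[0]
                             for c in range(3)]).astype(float)
color_hist /= color_hist.sum()

# Texture feature: gray-level co-occurrence matrix energy and contrast.
glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256, symmetric=True, normed=True)
texture = [graycoprops(glcm, prop)[0, 0] for prop in ("energy", "contrast")]

# Edge density: fraction of edge pixels from a Sobel magnitude threshold.
edges = filters.sobel(gray / 255.0)
edge_density = float((edges > 0.1).mean())

feature_vector = np.concatenate([color_hist, texture, [edge_density]])
print(feature_vector.shape)
```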

Keywords: Content-Based Image Retrieval (CBIR), Co-occurrence matrix, Feature vector, Edge Histogram Descriptor (EHD), Greedy strategy.

3245 View-Point Insensitive Human Pose Recognition using Neural Network and CUDA

Authors: Sanghyeok Oh, Keechul Jung

Abstract:

Although much research has been done on human pose recognition, the camera view-point is still a critical problem for the overall recognition system. In this paper, view-point insensitive human pose recognition is proposed. The aims of the proposed system are view-point insensitivity and real-time processing. The recognition system consists of a feature extraction module, a neural network, and real-time feed-forward calculation. First, a histogram-based method is used to extract features from the silhouette image, which is suitable for representing the shape of a human pose. To reduce the dimension of the feature vector, Principal Component Analysis (PCA) is used. Second, real-time processing is implemented using the Compute Unified Device Architecture (CUDA), which improves the speed of the feed-forward calculation of the neural network. We demonstrate the effectiveness of our approach with experiments in a real environment.
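A minimal sketch of the PCA reduction step applied to histogram-based silhouette features; the histogram dimension, sample count and variance threshold below are illustrative, not the paper's settings.

```python
# Hedged sketch: reduce histogram-based silhouette features with PCA before classification.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Assume each silhouette is described by a 180-bin shape histogram (synthetic placeholders).
histograms = rng.random((1000, 180))
histograms /= histograms.sum(axis=1, keepdims=True)

pca = PCA(n_components=0.95)            # keep components explaining 95% of the variance
reduced = pca.fit_transform(histograms)
print(histograms.shape, "->", reduced.shape)
# `reduced` would then feed a neural-network classifier (CUDA would accelerate its inference).
```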

Keywords: computer vision, neural network, pose recognition, view-point insensitive, PCA, CUDA.

3244 Efficient System for Speech Recognition using General Regression Neural Network

Authors: Abderrahmane Amrouche, Jean Michel Rouvaen

Abstract:

In this paper we present an efficient system for speaker-independent speech recognition based on a neural network approach. The proposed architecture comprises two phases: a preprocessing phase, which consists of segmental normalization and feature extraction, and a classification phase, which uses a neural network based on nonparametric density estimation, namely the general regression neural network (GRNN). The performance of the proposed model is compared to that of similar recognition systems that we also implemented, based on the Multilayer Perceptron (MLP), the Recurrent Neural Network (RNN) and the well-known Discrete Hidden Markov Model (HMM-VQ). Experimental results obtained with Arabic digits have shown that the use of nonparametric density estimation with an appropriate smoothing factor (spread) improves the generalization power of the neural network. The word error rate (WER) is reduced significantly over the baseline HMM method. GRNN computation is a successful alternative to the other neural networks and to the DHMM.
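The GRNN itself is compact enough to sketch: the output is a Gaussian-kernel weighted average of the training targets, with the smoothing factor (spread) sigma controlling generalization. The NumPy sketch below uses toy data, not the Arabic-digit features of the paper.

```python
# Hedged sketch of a general regression neural network (GRNN) prediction.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    # Squared Euclidean distances between query points and training samples.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)          # weighted average of training targets

# Toy usage with synthetic 2-D features (not the paper's speech features).
rng = np.random.default_rng(0)
X = rng.random((200, 2))
y = np.sin(4 * X[:, 0]) + 0.1 * rng.normal(size=200)
print(grnn_predict(X, y, X[:5], sigma=0.2))
```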

Keywords: Speech Recognition, General Regression Neural Network, Hidden Markov Model, Recurrent Neural Network, Arabic Digits.

3243 Dimensional Modeling of HIV Data Using Open Source

Authors: Charles D. Otine, Samuel B. Kucel, Lena Trojer

Abstract:

Selecting the data modeling technique for an information system is determined by the objective of the resultant data model. Dimensional modeling is the preferred technique for data destined for data warehouses and data mining, presenting data models that ease analysis and queries, in contrast with entity-relationship modeling. The establishment of data warehouses as components of information system landscapes in many organizations has subsequently driven the development of dimensional modeling. This development has been significantly more advanced and reported for commercial database management systems than for open-source ones, thereby making it less affordable for those in resource-constrained settings. This paper presents dimensional modeling of HIV patient information using open-source modeling tools. It aims to take advantage of the fact that the regions most affected by HIV (sub-Saharan Africa) are also heavily resource constrained while holding large quantities of HIV data. Two HIV data source systems were studied to identify appropriate dimensions and facts; these were then modeled using two open-source dimensional modeling tools. The use of open source would reduce the software costs of dimensional modeling and in turn make data warehousing and data mining more feasible even for those in resource-constrained settings where data are available.
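As a concrete, hedged illustration of what a dimensional model looks like, here is a minimal star schema for HIV patient visits expressed in SQLite from Python; the table and column names are invented for illustration and are not taken from the paper's source systems.

```python
# Hedged sketch of a star schema (dimension tables + fact table), using SQLite as a
# stand-in open-source engine.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Dimension tables describe the context of each measurement; the fact table holds the measures.
cur.executescript("""
CREATE TABLE dim_patient  (patient_key INTEGER PRIMARY KEY, sex TEXT, birth_year INTEGER);
CREATE TABLE dim_date     (date_key INTEGER PRIMARY KEY, full_date TEXT, year INTEGER, month INTEGER);
CREATE TABLE dim_facility (facility_key INTEGER PRIMARY KEY, name TEXT, district TEXT);

CREATE TABLE fact_visit (
    patient_key  INTEGER REFERENCES dim_patient(patient_key),
    date_key     INTEGER REFERENCES dim_date(date_key),
    facility_key INTEGER REFERENCES dim_facility(facility_key),
    cd4_count    INTEGER,
    on_art       INTEGER
);
""")
con.commit()
print("star schema created")
```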

Keywords: Database, Data Mining, Data Warehouse, Dimensional Modeling, Open Source.

3242 Finding Equilibrium in Transport Networks by Simulation and Investigation of Behaviors

Authors: Gábor Szűcs, Gyula Sallai

Abstract:

The goal of this paper is to find the Wardrop equilibrium in transport networks under uncertainty, where the uncertainty comes from a lack of information. We use a simulation tool to find the equilibrium; it gives only an approximate solution, but this is sufficient even for large networks. In order to take the uncertainty into account, we have developed an interval-based procedure for finding the paths with minimal cost using the Dempster-Shafer theory. Furthermore, we have investigated the users' behaviors using a game-theoretic approach, because their path choices influence the costs of the other users' paths.
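For reference, Wardrop's first (user-equilibrium) principle that the simulation approximates can be stated as follows; the notation is mine, not the paper's, and under the interval-based Dempster-Shafer treatment the scalar path costs are replaced by intervals.

```latex
% Wardrop's first principle for an origin-destination pair $w$ with path set $P_w$,
% path flows $f_p$ and path costs $c_p(f)$: every used path has minimal cost.
\begin{align*}
f_p > 0 \;\Rightarrow\; c_p(f) = \min_{q \in P_w} c_q(f),
\qquad \forall p \in P_w,\ \forall w .
\end{align*}
```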

Keywords: Dempster-Shafer theory, S-O and U-O transportation network, uncertainty of information, Wardrop equilibrium.

3241 Forecasting Stock Price Manipulation in Capital Market

Authors: F. Rahnamay Roodposhti, M. Falah Shams, H. Kordlouie

Abstract:

The aim of this article is to extend and develop econometric and network-structure-based methods that are able to detect price manipulation on the Tehran Stock Exchange. The principal goal of the present study is to offer a model for forecasting price manipulation on the Tehran Stock Exchange. To do so, a sample of 397 companies listed on the Tehran Stock Exchange was selected, information on their prices and trading volumes during the years 2001 to 2009 was collected, and then, by performing a runs test, a skewness test and a duration correlation test, the selected companies were divided into two sets of manipulated and non-manipulated companies. In the next stage, by investigating the cumulative return process and the trading volume of the manipulated companies, the starting date of the price manipulation was identified. Then, using a logit model, an artificial neural network and multiple discriminant analysis, together with information on company size, information clarity, P/E ratio and stock liquidity one year prior to the manipulation, models for forecasting price manipulation of the stocks of companies listed on the Tehran Stock Exchange were designed. Finally, the forecasting power of the models was evaluated on a test set. The forecasting accuracy of the logit model on the test set was 92.1%, that of the artificial neural network 94.1%, and that of the multiple discriminant analysis model 90.2%; therefore, all three models have high power to forecast price manipulation and there is no considerable difference among their forecasting abilities.
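A minimal sketch of the logit variant, assuming the four predictors named in the abstract (company size, information clarity, P/E ratio, stock liquidity); the data, units and labels below are synthetic placeholders, not the study's sample.

```python
# Hedged sketch: a logit (logistic regression) classifier for manipulated vs. non-manipulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 397
X = np.column_stack([
    rng.lognormal(3, 1, n),     # company size (hypothetical units)
    rng.random(n),              # information clarity score
    rng.normal(10, 4, n),       # P/E ratio
    rng.random(n),              # stock liquidity
])
y = rng.integers(0, 2, n)       # 1 = manipulated, 0 = non-manipulated (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("test accuracy:", round(logit.score(X_te, y_te), 3))
```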

Keywords: Price Manipulation, Liquidity, Size of Company, Floating Stock, Information Clarity

3240 Exploring More Productive Ways of Working

Authors: Jenna Ruostela, Antti Lönnqvist

Abstract:

'New ways of working' refers to non-traditional work practices, settings and locations, supported by information and communication technologies (ICT), that supplement or replace traditional ways of working. It questions the contemporary work practices and settings still very much in use in knowledge-intensive organizations today. In this study, new ways of working are seen to consist of two elements: the work environment (physical, virtual and social) and work practices. This study aims to gather the scattered information together and deepen the understanding of new ways of working. Moreover, the objective is to provide some evidence of the still unclear productivity impacts of new ways of working using a case study approach.

Keywords: Knowledge work, new ways of working, productivity, work environment.

3239 Programming Language Extension Using Structured Query Language for Database Access

Authors: Chapman Eze Nnadozie

Abstract:

Relational databases constitute a very vital tool for the effective management and administration of both personal and organizational data. Data access ranges from single-user database management software to more complex distributed server systems. This paper appraises the use of a programming language extension, Structured Query Language (SQL), to establish links to a relational database (Microsoft Access 2013) from the Visual C++ 9 programming environment. The methodology involves the creation of tables to form a database using Microsoft Access 2013, which is Object Linking and Embedding (OLE) database compliant. SQL commands are used to query the tables in the database for easy extraction of the expected records inside the Visual C++ environment. The findings of this paper reveal that records can easily be accessed and manipulated to filter exactly what the user wants, such as retrieval of records with specified criteria, updating of records, and deletion of some or all of the records in a table.
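For illustration only, the same SQL access pattern (criterion-based retrieval, update, deletion against an Access database) is sketched below in Python with pyodbc instead of the paper's Visual C++/OLE setup; the driver string, database path, table and column names are all assumptions.

```python
# Hedged illustration of SQL access to an Access database; the paper works in Visual C++,
# this Python/pyodbc version is purely a stand-in for the same query pattern.
import pyodbc

conn_str = (
    r"Driver={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\data\students.accdb;"          # hypothetical database path
)
with pyodbc.connect(conn_str) as conn:
    cur = conn.cursor()
    # Retrieval with a criterion, an update, and a deletion -- the operations described above.
    cur.execute("SELECT * FROM Students WHERE Score >= ?", 50)
    print(cur.fetchall())
    cur.execute("UPDATE Students SET Score = Score + 5 WHERE Dept = ?", "Physics")
    cur.execute("DELETE FROM Students WHERE Score < ?", 20)
    conn.commit()
```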

Keywords: Data access, database, database management system, OLE, programming language, records, relational database, software, SQL, table.

3238 Image Adaptive Watermarking with Visual Model in Orthogonal Polynomials based Transformation Domain

Authors: Krishnamoorthi R., Sheba Kezia Malarchelvi P. D.

Abstract:

In this paper, an image-adaptive, invisible digital watermarking algorithm based on an Orthogonal Polynomials based Transformation (OPT) is proposed for copyright protection of digital images. The proposed algorithm utilizes a visual model to determine the watermarking strength necessary to invisibly embed the watermark in the mid-frequency AC coefficients of the cover image, chosen with a secret key. The visual model is designed to generate a Just Noticeable Distortion (JND) mask by analyzing low-level image characteristics such as textures, edges and luminance of the cover image in the orthogonal polynomials based transformation domain. Since the secret key is required for both embedding and extraction of the watermark, it is not possible for an unauthorized user to extract the embedded watermark. The proposed scheme is robust to common image processing distortions such as filtering, JPEG compression and additive noise. Experimental results show that the quality of OPT-domain watermarked images is better than that of their DCT counterparts.

Keywords: Orthogonal Polynomials based Transformation, Digital Watermarking, Copyright Protection, Visual model.

3237 Exploration of Least Significant Bit Based Watermarking and Its Robustness against Salt and Pepper Noise

Authors: Kamaldeep Joshi, Rajkumar Yadav, Sachin Allwadhi

Abstract:

Image steganography is a prominent aspect of information hiding: the information is hidden within an image and the image travels openly on the Internet. The Least Significant Bit (LSB) method is one of the most popular methods of image steganography. In this method, the information bit is hidden in the LSB of the image pixel. In the one-bit LSB steganography method, the total number of pixels and the total number of message bits are equal. In this paper, the LSB method of image steganography is used for watermarking, watermarking being an application of steganography. The watermark contains 80×88 pixels and each pixel requires 8 bits for its binary form, so the total number of bits required to hide the watermark is 80×88×8 = 56,320. The experiment was performed on standard 256×256 and 512×512 images. After the watermark insertion, histogram analysis was performed. A salt-and-pepper noise factor of 0.02 was added to the stego image in order to evaluate the robustness of the method, and the watermark was successfully retrieved after the insertion of noise. An experiment was also performed to assess the imperceptibility of the stego image and the quality of the retrieved watermark. It is clear that the LSB watermarking scheme is robust to salt-and-pepper noise.
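A minimal NumPy sketch of the scheme described above (one-bit LSB embedding, salt-and-pepper noise at 0.02, PSNR of the stego image); the cover image and watermark bits are synthetic, and embedding in raster order is an assumption about pixel ordering.

```python
# Hedged sketch of one-bit LSB watermark embedding/extraction with a robustness check.
import numpy as np

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (256, 256), dtype=np.uint8)
watermark_bits = rng.integers(0, 2, 80 * 88 * 8, dtype=np.uint8)   # 56,320 bits

# Embed: overwrite the LSB of the first 56,320 pixels with the watermark bits.
stego = cover.copy().ravel()
stego[:watermark_bits.size] = (stego[:watermark_bits.size] & 0xFE) | watermark_bits
stego = stego.reshape(cover.shape)

# Add salt-and-pepper noise with density 0.02.
noisy = stego.copy()
mask = rng.random(noisy.shape) < 0.02
noisy[mask] = rng.choice([0, 255], size=int(mask.sum()))

# Extract, measure how many bits survived, and compute the stego-image PSNR.
recovered = noisy.ravel()[:watermark_bits.size] & 1
bit_survival = (recovered == watermark_bits).mean()
mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
psnr = 10 * np.log10(255 ** 2 / mse) if mse > 0 else float("inf")
print(f"bit survival: {bit_survival:.3f}, stego PSNR: {psnr:.1f} dB")
```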

Keywords: LSB, watermarking, salt and pepper, PSNR.

3236 Improved Feature Extraction Technique for Handling Occlusion in Automatic Facial Expression Recognition

Authors: Khadijat T. Bamigbade, Olufade F. W. Onifade

Abstract:

The field of automatic facial expression analysis has been an active research area over the last two decades. Its vast applicability in various domains has drawn much attention to developing techniques and datasets that mirror real-life scenarios. Many techniques, such as Local Binary Patterns and their variants (CLBP, LBP-TOP) and, lately, deep learning, have been used for facial expression recognition. However, the problem of occlusion has not been sufficiently handled, making their results inapplicable in real-life situations. This paper develops a simple yet highly efficient method, tagged Local Binary Pattern-Histogram of Gradient (LBP-HOG), with occlusion detection in the face image, using a multi-class SVM for Action Unit and, in turn, expression recognition. Our method was evaluated on three publicly available datasets: JAFFE, CK and SFEW. Experimental results showed that our approach performed considerably well compared with state-of-the-art algorithms and gave insight into occlusion detection as a key step in handling expressions in the wild.
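A hedged sketch of the LBP-plus-HOG feature idea fed to a multi-class SVM; the face images, image size, LBP/HOG parameters and class count below are placeholders, not the paper's setup, and the occlusion-detection step is not reproduced.

```python
# Illustrative sketch: concatenate LBP and HOG features and train a multi-class SVM.
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.svm import SVC

def lbp_hog_features(gray_face):
    lbp = local_binary_pattern(gray_face, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    hog_vec = hog(gray_face, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    return np.concatenate([lbp_hist, hog_vec])

rng = np.random.default_rng(0)
faces = (rng.random((60, 64, 64)) * 255).astype(np.uint8)   # synthetic stand-ins for face crops
labels = rng.integers(0, 6, 60)                             # 6 hypothetical expression classes

X = np.array([lbp_hog_features(f) for f in faces])
clf = SVC(kernel="linear", decision_function_shape="ovr").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```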

Keywords: Automatic facial expression analysis, local binary pattern, LBP-HOG, occlusion detection.

3235 Study of Sugarcane Bagasse Pretreatment with Sulfuric Acid as a Step of Cellulose Obtaining

Authors: Candido, R. G., Godoy, G. G., Gonçalves, A. R.

Abstract:

To produce sugar and ethanol, sugarcane processing generates several agricultural residues, straw and bagasse being considered the main ones. What to do with these residues has been the subject of many studies and experiments in an industry that, in recent years, has been highlighted by its ability to transform waste into valuable products such as electric power. Cellulose is the main component of these materials. It is the most common organic polymer, representing about 1.5 × 10¹² tons of the total biomass production per year, and is considered an almost inexhaustible source of raw material. Pretreatment with mineral acids is one of the most widely used stages of cellulose extraction from lignocellulosic materials, as it solubilizes most of the hemicellulose content. The goal of this study was to find the best reaction time for sugarcane bagasse pretreatment with sulfuric acid in order to minimize cellulose loss while achieving the highest possible removal of hemicellulose and lignin. The best reaction time was found to be 40 minutes, at which a hemicellulose loss of around 70% was reached, with lignin and cellulose losses of around 15%. Beyond this time, cellulose loss increased and there was no further loss of lignin or hemicellulose.

Keywords: cellulose, acid pretreatment, hemicellulose removal, sugarcane bagasse

3234 Comparison of Pore Space Features by Thin Sections and X-Ray Microtomography

Authors: H. Alves, J. T. Assis, M. Geraldes, I. Lima, R. T. Lopes

Abstract:

Microtomographic images and thin section (TS) images were analyzed and compared with respect to parameters of geological interest such as porosity and its distribution along the samples. The results show that microtomography (CT) analysis, although limited by its resolution, provides interesting information about the distribution of porosity (homogeneous or not) and can also quantify the connected and non-connected pores, i.e., the total porosity. TS has no limitation concerning resolution, but is limited by the experimental data available, restricted to a few glass slides for analysis, and can give information only about the connected pores, i.e., the effective porosity. The two methods have their own virtues and flaws, but when paired together they complement one another, making for a more reliable and complete analysis.

Keywords: Microtomography, petrographical microscopy, sediments, thin sections.

3233 Smart and Connected Aircraft Cabin: A Balancing Act between Operational Cabin Management, Airline Business and Passenger Expectations

Authors: Ralf God, Lothar Kerschgens, Leonardo Goratti, Steven Lemaire

Abstract:

Ubiquitous connectivity is a reality and a basic need for users on ground. Air travel connectivity in the cabin is also becoming increasingly important for passengers during cabin use. Wireless sensor networks that provide information to cabin management systems are being used by airlines to optimize cabin crew workload. In networked cabin systems, communications and digitally transmitted data must be managed by airlines in every direction. Security and privacy, information processing and knowledge management are the current and future requirements for a smart and connected cabin.

Keywords: Smart and connected cabin management, Internet of Things, power management, airline business.

3232 Low-complexity Integer Frequency Offset Synchronization for OFDMA System

Authors: Young-Jae Kim, Young-Hwan You

Abstract:

This paper presents an integer frequency offset (IFO) estimation scheme for the 3GPP Long Term Evolution (LTE) downlink system. Firstly, the conventional joint detection method for IFO and sector cell index (CID) information is introduced. Secondly, an IFO estimation scheme that does not require explicit sector CID information is proposed, which reduces the time delay in comparison with the conventional joint method. The proposed method is also computationally efficient and has almost the same performance as the conventional method over the Pedestrian and Vehicular channel models.

Keywords: LTE, OFDMA, primary synchronization signal (PSS), IFO, CID

3231 A Text Clustering System based on k-means Type Subspace Clustering and Ontology

Authors: Liping Jing, Michael K. Ng, Xinhua Yang, Joshua Zhexue Huang

Abstract:

This paper presents a text clustering system developed on the basis of a k-means type subspace clustering algorithm to cluster large, high-dimensional and sparse text data. In this algorithm, a new step is added to the k-means clustering process to automatically calculate the weights of keywords in each cluster, so that the important words of a cluster can be identified by the weight values. For understanding and interpretation of the clustering results, a few keywords that best represent the semantic topic are extracted from each cluster. Two methods are used to extract the representative words. The candidate words are first selected according to their weights calculated by our new algorithm. Then, the candidates are fed to WordNet to identify the set of noun words and to consolidate synonyms and hyponyms. Experimental results have shown that the clustering algorithm is superior to other subspace clustering algorithms, such as PROCLUS and HARP, and to k-means type algorithms, e.g., Bisecting k-means. Furthermore, the word extraction method is effective in selecting the words that represent the topics of the clusters.
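A minimal sketch of the general pipeline (sparse text vectors, k-means clustering, highest-weight words per cluster as topic keywords); the paper's subspace feature weighting and WordNet consolidation are not reproduced, and the tiny corpus below is invented for illustration.

```python
# Hedged sketch: TF-IDF vectors, k-means clusters, and top-weight words per cluster.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "the team won the football match", "the striker scored a late goal",
    "the league title race is close", "stocks fell as the market opened",
    "investors watched the interest rate", "the bank raised its market forecast",
]
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
terms = np.array(vec.get_feature_names_out())

for k in range(km.n_clusters):
    top = terms[np.argsort(km.cluster_centers_[k])[::-1][:4]]
    print(f"cluster {k}:", ", ".join(top))
```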

Keywords: Subspace Clustering, Text Mining, Feature Weighting, Cluster Interpretation, Ontology

3230 A New Method of Adaptation in Integrated Learning Environment

Authors: Ildar Galeev, Renat Mustaphin, C. Ardil

Abstract:

A new method of adaptation in a partially integrated learning environment that includes an electronic textbook (ET) and an integrated tutoring system (ITS) is described. The algorithm of adaptation is described in detail. It includes: establishment of the interconnections between operations and concepts; estimation of the concept mastering level (for all concepts); estimation of the student's non-mastering level, at the current learning step, of the information on each page of the ET; and creation of a rank-ordered list of links to the e-manual pages containing information that requires repeated work.

Keywords: Adaptation, Integrated Learning Environment, Integrated Tutoring System, Electronic Textbook.

3229 An Image Segmentation Algorithm for Gradient Target Based on Mean-Shift and Dictionary Learning

Authors: Yanwen Li, Shuguo Xie

Abstract:

In electromagnetic imaging, because of the diffraction-limited system, the pixel values may change slowly near the edges of the image targets, and they also change with location within the same target. Using traditional digital image segmentation methods to segment electromagnetic gradient images can therefore produce many errors because of this change in pixel values. To address this issue, this paper proposes a novel image segmentation and extraction algorithm based on Mean-Shift and dictionary learning. Firstly, the preliminary segmentation results from the adaptive-bandwidth Mean-Shift algorithm are expanded, merged and extracted. Then the overlap rate of the extracted image block is detected before determining a segmentation region with a single complete target. Lastly, the gradient edge of the extracted targets is recovered and reconstructed using a dictionary-learning algorithm, and the final segmentation results obtained are very close to the gradient target in the original image. Both the experimental and the simulated results show that the segmentation results are very accurate; the Dice coefficients are improved by 70% to 80% compared with the Mean-Shift-only method.
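A hedged sketch of the Mean-Shift stage only, clustering pixels on intensity plus scaled spatial coordinates with an estimated bandwidth; the synthetic image, feature scaling and bandwidth settings are illustrative, and the dictionary-learning edge recovery is not reproduced.

```python
# Illustrative Mean-Shift segmentation of a synthetic "gradient target" image.
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

rng = np.random.default_rng(0)
# Synthetic 64x64 target: a bright blob whose values fall off slowly toward the edge.
yy, xx = np.mgrid[0:64, 0:64]
image = np.exp(-(((yy - 32) ** 2 + (xx - 32) ** 2) / 300.0)) + 0.05 * rng.normal(size=(64, 64))

# Feature vector per pixel: intensity plus (scaled) spatial coordinates.
features = np.column_stack([image.ravel(), 0.01 * yy.ravel(), 0.01 * xx.ravel()])
bandwidth = estimate_bandwidth(features, quantile=0.2, n_samples=500)
labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(features).reshape(64, 64)

print("number of segments:", labels.max() + 1)
```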

Keywords: Gradient image, segmentation and extract, mean-shift algorithm, dictionary learning.

3228 Block-Based 2D to 3D Image Conversion Method

Authors: S. Sowmyayani, V. Murugan

Abstract:

With the advent of three-dimensional (3D) technology, there has been much research on converting 2D images to 3D images. The main difference between 2D and 3D is the visual illusion of depth in 3D images. In recent years, many depth estimation techniques have been proposed. The objective of this paper is to convert 2D images to 3D images with less computation time. To this end, the input image is divided into blocks from which the depth information is obtained. With the depth information, a depth map is generated, and the 3D image is then warped using the original image and the depth map. The proposed method is tested on the Make3D and NYU-V2 datasets, and the experimental results are compared with those of other recent methods. The proposed method proved to work with less computation time and good accuracy.
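An illustrative sketch of the two ingredients (a block-wise depth map and depth-based warping of the original image); the block-mean-intensity depth cue and the horizontal-disparity warp below are my simplifications, not the paper's estimator or rendering pipeline.

```python
# Hedged sketch: per-block depth map plus a simple disparity warp to a second view.
import numpy as np

def block_depth_map(gray, block=8):
    h, w = gray.shape
    depth = np.zeros_like(gray, dtype=float)
    for y in range(0, h, block):
        for x in range(0, w, block):
            depth[y:y + block, x:x + block] = gray[y:y + block, x:x + block].mean()
    return depth / (depth.max() + 1e-9)

def warp_right_view(gray, depth, max_disparity=8):
    h, w = gray.shape
    right = np.zeros_like(gray)
    shifts = (depth * max_disparity).astype(int)
    for y in range(h):
        xs = np.clip(np.arange(w) - shifts[y], 0, w - 1)
        right[y, xs] = gray[y]      # forward-map pixels; holes would need filling in practice
    return right

rng = np.random.default_rng(0)
gray = rng.integers(0, 256, (64, 64)).astype(float)
depth = block_depth_map(gray)
right = warp_right_view(gray, depth)
print(depth.shape, right.shape)
```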

Keywords: Depth map, 3D image warping, image rendering, bilateral filter, minimum spanning tree.

3227 Preventing and Coping Strategies for Cyber Bullying and Cyber Victimization

Authors: Erdinc Ozturk, Gizem Akcan

Abstract:

Although information and communication technologies have several advantages, they also cause problems such as cyber bullying and cyber victimization, which have many negative effects on people. There are many different strategies to prevent cyber bullying and victimization. This study was conducted to provide information about the strategies used to prevent cyber bullying and cyber victimization. 120 university students (60 women, 60 men), aged between 18 and 35, participated in this study. According to the findings of this study, men are more prone to cyber bullying than women. Moreover, men are also more prone to cyber victimization than women.

Keywords: Cyber bullying, cyber victimization, coping strategies.

3226 Information Transmission between Large and Small Stocks in the Korean Stock Market

Authors: Sang Hoon Kang, Seong-Min Yoon

Abstract:

Little attention has been paid to information transmission between the portfolios of large stocks and small stocks in the Korean stock market. This study investigates the return and volatility transmission mechanisms between large and small stocks on the Korea Exchange (KRX). It also explores whether bad news in the large-stock market leads to volatility in the small-stock market that is larger than that following good news in the large-stock market. By employing the Granger causality test, we found unidirectional return transmission from the large stocks to the medium and small stocks. This evidence indicates that past information about the large stocks has a better ability to predict the returns of the medium and small stocks in the Korean stock market. Moreover, using the asymmetric GARCH-BEKK model, we observed a unidirectional relationship of asymmetric volatility transmission from the large stocks to the medium and small stocks. This finding suggests that volatility in the medium and small stocks following a negative shock in the large stocks is larger than that following a positive shock in the large stocks.
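A hedged sketch of the return-transmission test: Granger causality from large-stock returns to small-stock returns using statsmodels; the series below are simulated rather than KRX data, and the asymmetric GARCH-BEKK volatility model is not reproduced.

```python
# Illustrative Granger causality test on simulated large-cap and small-cap return series.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
large = rng.normal(0, 1, n)
small = np.zeros(n)
for t in range(1, n):                       # small-cap returns depend on lagged large-cap returns
    small[t] = 0.4 * large[t - 1] + rng.normal(0, 1)

# Column order matters: the test checks whether the 2nd column Granger-causes the 1st.
data = np.column_stack([small, large])
results = grangercausalitytests(data, maxlag=2)
```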

Keywords: Asymmetric GARCH-BEKK model, Asymmetric volatility transmission, Causality, Korean stock market, Spillover effect

3225 Proposing an Efficient Method for Frequent Pattern Mining

Authors: Vaibhav Kant Singh, Vijay Shah, Yogendra Kumar Jain, Anupam Shukla, A. S. Thoke, Vinay Kumar Singh, Chhaya Dule, Vivek Parganiha

Abstract:

Data mining is the exploration of knowledge from the large sets of data generated as a result of various data processing activities. Frequent pattern mining is a very important task in data mining. Previous approaches to generating frequent sets generally adopt candidate generation and pruning techniques to achieve the desired objective. This paper shows how the different approaches achieve the objective of frequent pattern mining, along with the complexities required to perform the job. The paper also considers a hardware approach based on cache coherence to improve the efficiency of the above process. The process of data mining is helpful in the generation of support systems that can assist in management, bioinformatics, biotechnology, medical science, statistics, mathematics, banking, networking and other computer-related applications. This paper proposes the use of both the upward and downward closure properties for the extraction of frequent itemsets, which reduces the total number of scans required for the generation of candidate sets.
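The downward-closure (Apriori) property mentioned above says that every subset of a frequent itemset must itself be frequent, so candidates containing an infrequent subset can be pruned before counting. The minimal Python sketch below, on toy transactions, illustrates that classical pruning step; it is not the paper's proposed method.

```python
# Hedged sketch of Apriori-style candidate generation with downward-closure pruning.
from itertools import combinations

transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c", "d"}]
min_support = 3

def frequent_itemsets(transactions, min_support):
    items = {frozenset([i]) for t in transactions for i in t}
    level = {s for s in items if sum(s <= t for t in transactions) >= min_support}
    all_frequent = set(level)
    k = 2
    while level:
        # Join step, then prune candidates that have an infrequent (k-1)-subset.
        candidates = {a | b for a in level for b in level if len(a | b) == k}
        candidates = {c for c in candidates
                      if all(frozenset(sub) in level for sub in combinations(c, k - 1))}
        level = {c for c in candidates if sum(c <= t for t in transactions) >= min_support}
        all_frequent |= level
        k += 1
    return all_frequent

print(frequent_itemsets(transactions, min_support))
```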

Keywords: Data Mining, Candidate Sets, Frequent Item set, Pruning.

3224 MovieReco: A Recommendation System

Authors: Dipankaj G Medhi, Juri Dakua

Abstract:

Recommender Systems act as personalized decision guides, aiding users in decisions on matters related to personal taste. Most previous research on Recommender Systems has focused on the statistical accuracy of the algorithms driving the systems, with no emphasis on the trustworthiness of the user. An RS depends on information provided by different users to gather its knowledge. We believe that if a large group of users provides wrong information, it will not be possible for the RS to arrive at an accurate conclusion. The system described in this paper introduces the concept of testing the knowledge of the user to filter out these "bad users". The paper emphasizes the mechanism used to provide robust and effective recommendations.

Keywords: Collaborative Filtering, Content Based Filtering, Intelligent Agent, Level of Interest, Recommendation System.

3223 Hydrothermal Treatment for Production of Aqueous Co-Product and Efficient Oil Extraction from Microalgae

Authors: Manatchanok Tantiphiphatthana, Lin Peng, Rujira Jitrwung, Kunio Yoshikawa

Abstract:

Hydrothermal liquefaction (HTL) is a technique for obtaining clean biofuel from biomass under heat and pressure in an aqueous medium, which leads to the decomposition of the biomass and the formation of various products. The operating conditions play an essential role in the yield of bio-oil and other products, as well as in product quality. The effects of these parameters were investigated with regard to the composition and yield of the products. Chlorellaceae microalgae were tested under different HTL conditions to clarify suitable conditions for extracting bio-oil together with value-added co-products. Firstly, different microalgae loading rates (5-30%) were tested, and it was found that this parameter is not very significant for the product yield. Therefore, a 10% microalgae loading rate was selected as an economical choice for the conditioned schedule at 250 °C and a 30-min reaction time. Next, a range of temperatures (210-290 °C) was applied to verify the effects of each parameter while keeping the reaction time constant at 30 min. The results showed no direct link with the increase of the reaction temperature, and some reactions occurred that led to different product yields. Moreover, some nutrients found in the aqueous product could possibly be utilized for nutrient recovery.

Keywords: Bio-oil, Hydrothermal Liquefaction, Microalgae, Aqueous co-product.

3222 Physical Activity and Cognitive Functioning Relationship in Children

Authors: Comfort Mokgothu

Abstract:

This study investigated the relation between information processing and fitness level in active (fit) and sedentary (unfit) children drawn from rural and urban areas in Botswana. It was hypothesized that fit children would display faster simple reaction times (SRT), choice reaction times (CRT) and simple movement times (SMT). Sixty third-grade children (7.0-9.0 years) were initially selected and, based upon fitness testing, 45 participated in the study (15 each of fit urban, unfit urban, and fit rural). All children completed anthropometric measures, skinfold testing and submaximal cycle ergometer testing. The cognitive testing included SRT, CRT, SMT, Choice Movement Time (CMT) and memory sequence length. Results indicated that the rural fit group exhibited faster SMT than the urban fit and unfit groups. For CRT, both fit groups were faster than the unfit group. Collectively, the study shows that the relationship between physical fitness and cognitive function observed among the elderly can tentatively be extended to the pediatric population. Physical fitness could be a factor in the speed at which we process information, including decision making, even in children.

Keywords: Decision making, fitness, information processing, reaction time, cognition, movement time.

3221 Risk Based Building Information Modeling (BIM) for Urban Infrastructure Transportation Project

Authors: Debasis Sarkar

Abstract:

Building Information Modeling (BIM) is a holistic documentation process for operational visualization, design coordination, estimation and project scheduling. BIM software defines objects parametrically and is a tool for virtual reality. The primary advantage of implementing BIM is the visual coordination of the building structure and systems such as Mechanical, Electrical and Plumbing (MEP), and it also identifies possible conflicts between the building systems. This paper is an attempt to develop a risk-based BIM model that highlights the primary advantages of applying BIM to urban infrastructure transportation projects. It has been observed that about 40% of Architecture, Engineering and Construction (AEC) companies use BIM, but primarily for their outsourced projects. Also, 65% of the respondents agree that BIM would be used quite strongly for future construction projects in India. The 3D models developed with Revit 2015 software would reduce coordination problems among the architects, structural engineers, contractors and building service providers (MEP). Integrating risk management with BIM would provide enhanced coordination and collaboration and a high probability of successful completion of a complex infrastructure transportation project within the stipulated time and cost frame.

Keywords: Building information modeling (BIM), infrastructure transportation, project risk management, underground metro rail.
