Search results for: Image processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2694

144 SUPAR: System for User-Centric Profiling of Association Rules in Streaming Data

Authors: Sarabjeet Kaur Kochhar

Abstract:

With the surge of stream processing applications, novel techniques are required for the generation and analysis of association rules in streams. Traditional rule mining solutions cannot handle streams because they generally require multiple passes over the data and do not guarantee results within a predictable, small time. Though researchers have been proposing algorithms for the generation of rules from streams, there has not been much focus on their analysis. We propose association rule profiling, a user-centric process for analyzing association rules and attaching suitable profiles to them depending on their changing frequency behavior over a previous snapshot of time in a data stream. Association rule profiles provide insights into the changing nature of associations and can be used to characterize the associations. We discuss the importance of characteristics such as the predictability of linkages present in the data and propose a metric to quantify it. We also show how association rule profiles can aid in the generation of user-specific, more understandable and actionable rules. The framework is implemented as SUPAR: System for User-centric Profiling of Association Rules in streaming data. The proposed system offers the following capabilities: i) continuous monitoring of the frequency of streaming item-sets and detection of significant changes therein for association rule profiling; ii) computation of metrics for quantifying the predictability of associations present in the data; iii) user-centric control of the characterization process: the user can control the framework through a) constraint specification and b) non-interesting rule elimination.
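
The continuous frequency monitoring and change detection described above can be illustrated with a small sketch. The sliding-window scheme, the change threshold and the "rising/stable/falling" profile labels below are illustrative assumptions, not the actual SUPAR implementation.

```python
from collections import deque, Counter

class ItemsetFrequencyMonitor:
    """Tracks item frequencies over a sliding window of transactions and
    flags significant changes between consecutive snapshots (illustrative only)."""

    def __init__(self, window_size=1000, change_threshold=0.05):
        self.window = deque(maxlen=window_size)   # most recent transactions
        self.change_threshold = change_threshold  # relative-support change to flag
        self.previous_support = {}

    def _support(self):
        counts = Counter()
        for transaction in self.window:
            for item in set(transaction):
                counts[item] += 1
        n = max(len(self.window), 1)
        return {item: c / n for item, c in counts.items()}

    def add_transaction(self, transaction):
        self.window.append(transaction)

    def snapshot(self):
        """Compare current support against the previous snapshot and
        attach a simple profile label to each item."""
        current = self._support()
        profiles = {}
        for item, sup in current.items():
            delta = sup - self.previous_support.get(item, 0.0)
            if abs(delta) < self.change_threshold:
                profiles[item] = "stable"
            elif delta > 0:
                profiles[item] = "rising"
            else:
                profiles[item] = "falling"
        self.previous_support = current
        return profiles

# Usage: feed streaming transactions, then inspect profiles periodically.
monitor = ItemsetFrequencyMonitor(window_size=4, change_threshold=0.2)
for t in [{"a", "b"}, {"a"}, {"a", "c"}, {"b", "c"}, {"a", "b"}]:
    monitor.add_transaction(t)
print(monitor.snapshot())
```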

Keywords: Data Streams, User subjectivity, Change detection, Association rule profiles, Predictability.

PDF Downloads: 1410
143 Evaluation of Video Quality Metrics and Performance Comparison on Contents Taken from Most Commonly Used Devices

Authors: Pratik Dhabal Deo, Manoj P.

Abstract:

With the increasing number of social media users, the amount of video content available has also significantly increased. Currently, the number of smartphone users is at its peak, and many increasingly use their smartphones as their main photography and recording devices. There have been many developments in the field of video quality assessment in the past years, and more research on various other aspects of video and image is being done. Datasets that contain a huge number of videos from different high-end devices make it difficult to analyze the performance of the metrics on content from the most used devices, even if they contain content taken in poor lighting conditions using lower-end devices. These devices face a lot of distortions due to various factors, since the spectrum of content recorded on them is huge. In this paper, we present an analysis of objective Video Quality Analysis (VQA) metrics on content taken only from the most used devices and their performance on it, focusing on full-reference metrics. To carry out this research, we created a custom dataset containing a total of 90 videos taken from the three most commonly used devices: an Android smartphone, an iOS smartphone and a Digital Single-Lens Reflex (DSLR) camera. To the videos taken on each of these devices, the six most common types of distortions that users face were applied, in addition to the already existing H.264 compression, based on four reference videos. Each of these six applied distortions has three levels of degradation. The five most popular VQA metrics were evaluated on this dataset, and the highest and lowest values of each metric on the distortions were recorded. It was found that blur is the artifact on which most of the metrics did not perform well. To understand the results better, the amount of blur in the dataset was calculated, and an additional evaluation of the metrics was done using the High Efficiency Video Coding (HEVC) codec, the successor of H.264 compression, on the camera that proved to be the sharpest among the devices. The results show that as the resolution increases, the performance of the metrics tends to become more accurate. The best performing metric is VQM, with very few inconsistencies and inaccurate results when the compression applied is H.264; when the compression applied is HEVC, the Structural Similarity (SSIM) metric and Video Multimethod Assessment Fusion (VMAF) perform significantly better.
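
As a simple point of reference for the kind of full-reference comparison performed here, the sketch below computes frame-wise PSNR between a reference and a distorted clip. PSNR is used only as a stand-in, and the synthetic frames are invented; this is not the VQM, SSIM or VMAF implementation evaluated in the study.

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two frames of the same shape."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

def mean_video_psnr(ref_frames, dist_frames):
    """Average PSNR over corresponding frames of two clips."""
    return float(np.mean([psnr(r, d) for r, d in zip(ref_frames, dist_frames)]))

# Usage with synthetic frames standing in for decoded video.
rng = np.random.default_rng(0)
ref = [rng.integers(0, 256, (480, 640), dtype=np.uint8) for _ in range(5)]
noisy = [np.clip(f + rng.normal(0, 5, f.shape), 0, 255).astype(np.uint8) for f in ref]
print(f"Mean PSNR: {mean_video_psnr(ref, noisy):.2f} dB")
```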

Keywords: Distortion, metrics, recording, frame rate, video quality assessment.

PDF Downloads: 280
142 Optimization of Some Process Parameters to Produce Raisin Concentrate in Khorasan Region of Iran

Authors: Peiman Ariaii, Hamid Tavakolipour, Mohsen Pirdashti, Rabehe Izadi Amoli

Abstract:

Raisin concentrate (RC) is one of the most important products obtained in the raisin processing industries. RC products are now used to make syrups, drinks and confectionery products and are introduced as a natural substitute for sugar in food applications. Iran is one of the biggest raisin exporters in the world, but unfortunately, despite good raw material, no serious effort to extract RC has been made in Iran. Therefore, in this paper, we determined and analyzed the parameters affecting the RC extraction process and then optimized these parameters to design the RC extraction process for two types of raisin (round and long) produced in the Khorasan region. Two levels of solvent (1:1 and 2:1), three levels of extraction temperature (60°C, 70°C and 80°C), and three levels of concentration temperature (50°C, 60°C and 70°C) were the treatments. Finally, physicochemical characteristics of the obtained concentrate, such as color, viscosity, percentage of reducing sugar and acidity, were measured, and microbial counts (mould and yeast) were taken. The analysis was performed as a factorial arrangement in a completely randomized design (CRD), and Duncan's multiple range test (DMRT) was used for the comparison of the means. Statistical analysis of the results showed that the optimal condition for production of the concentrate is round raisins with a solvent ratio of 2:1, an extraction temperature of 60°C and a concentration temperature of 50°C. Round raisin is cheaper than the long one, so it is more economical for concentrate production. Furthermore, round raisin has more aroma and a lower color degree with increasing concentration and extraction temperatures. Finally, according to the mentioned factors, the concentrate of round raisin is recommended.

Keywords: Raisin concentrate, optimization, process parameters, round raisin, Iran.

PDF Downloads: 1551
141 Thermal Treatments and Characteristics Study On Unalloyed Structural (AISI 1140) Steel

Authors: S. S. Sharma, P. R. Prabhu, Rajagopal Chadaga

Abstract:

The main emphasis of metallurgists has been to process materials to obtain balanced mechanical properties for a given application. One of the processing routes to alter the properties is heat treatment. Nearly 90% of structural applications are related to medium-carbon and alloyed steels, which are hence regarded as structural steels. The major requirement for conventional steel is to improve workability, toughness, hardness and grain refinement. In this view, it is proposed to study the mechanical and tribological properties of unalloyed structural (AISI 1140) steel subjected to different thermal (heat) treatments, namely annealing, normalizing, tempering and hardening, and to compare them with the as-received (cold-worked) specimen. All heat treatments are carried out in atmospheric conditions. The hardening treatment improves the hardness of the material; a marginal decrease in hardness with improved ductility is observed after tempering. Annealing and normalizing improve the ductility of the specimen, with the normalized specimen showing the highest ductility. The hardened specimen shows the highest wear resistance in the initial period of sliding wear, whereas above 25 km of sliding distance the as-received steel outperforms the hardened specimen. Both mild and severe wear regions are observed. Microstructural analysis shows a pearlitic structure in the normalized specimen, a lath martensitic structure in the hardened specimen, and a pearlitic-ferritic structure in the annealed specimen.

Keywords: Annealing, hardness, heat treatment, normalizing, wear.

PDF Downloads: 2064
140 Comparative Study of the Effects of Process Parameters on the Yield of Oil from Melon Seed (Cococynthis citrullus) and Coconut Fruit (Cocos nucifera)

Authors: Ndidi F. Amulu, Patrick E. Amulu, Gordian O. Mbah, Callistus N. Ude

Abstract:

A comparative analysis of the properties of melon seed and coconut fruit and their oil yields was carried out in this work using standard AOAC analytical techniques. The results of the analysis revealed that the moisture contents of the samples studied are 11.15% (melon) and 7.59% (coconut), and the crude lipid contents are 46.10% (melon) and 55.15% (coconut). The treatment combinations used (leaching time, leaching temperature and solute:solvent ratio) showed a significant difference (p < 0.05) in yield between the samples, with melon seed flour having a higher percentage range of oil yield (41.30 – 52.90%) than coconut (36.25 – 49.83%). Physical characterization of the extracted oils was also carried out. The values obtained for refractive index are 1.487 (melon seed oil) and 1.361 (coconut oil), and the viscosities are 0.008 (melon seed oil) and 0.002 (coconut oil). Chemical analysis of the extracted oils shows acid values of 1.00 mg NaOH/g oil (melon oil) and 10.050 mg NaOH/g oil (coconut oil), and saponification values of 187.00 mg KOH/g (melon oil) and 183.26 mg KOH/g (coconut oil). The iodine value of the melon oil is 75.00 mg I2/g and that of the coconut oil 81.00 mg I2/g. The standard statistical package Minitab version 16.0 was used for the regression analysis and analysis of variance (ANOVA), and the same software was used to optimize the leaching process. Both samples gave a high oil yield at the same optimal conditions. The optimal conditions to obtain the highest oil yield, ≥ 52% (melon seed) and ≥ 48% (coconut), are a solute-solvent ratio of 40 g/ml, a leaching time of 2 hours and a leaching temperature of 50°C. Both samples studied have the potential of yielding oil, with melon seed giving the higher yield.

Keywords: Coconut, melon, optimization, processing.

PDF Downloads: 2102
139 Application of Interferometric Techniques for Quality Control of Oils Used in the Food Industry

Authors: Andres Piña, Amy Meléndez, Pablo Cano, Tomas Cahuich

Abstract:

The purpose of this project is to propose a quick and environmentally friendly alternative for measuring the quality of oils used in the food industry. There is evidence that repeated and indiscriminate use of oils in food processing causes physicochemical changes, with formation of potentially toxic compounds that can affect the health of consumers and cause organoleptic changes. In order to assess the quality of oils, non-destructive optical techniques such as interferometry offer a rapid alternative to the use of reagents, using only the interaction of light with the oil. In this project, we used interferograms of oil samples placed under different heating conditions to establish the changes in their quality. These interferograms were obtained by means of a Mach-Zehnder interferometer using a beam of light from a 10 mW HeNe laser at 632.8 nm. Each interferogram was captured and analyzed, and its full width at half maximum (FWHM) was measured using the AMCap and ImageJ software. The FWHM measurements were organized into three groups. It was observed that the average of the FWHMs of group A shows an almost linear behavior; therefore, it is probable that the exposure time is not relevant when the oil is kept at constant temperature. Group B exhibits a slightly exponential behavior when the temperature rises between 373 K and 393 K. Results of the Student's t-test show, at the 95% confidence level (0.05), the existence of variation in the molecular composition of both samples. Furthermore, we found a correlation between the iodine indexes (physicochemical analysis) and the interferograms (optical analysis) of group C. Based on these results, this project highlights the importance of the quality of the oils used in the food industry and shows how interferometry can be a useful tool for this purpose.
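
The FWHM measurement mentioned above can be sketched as follows for a one-dimensional intensity profile extracted from an interferogram; the linear-interpolation approach and the synthetic Gaussian fringe are illustrative assumptions, not the AMCap/ImageJ workflow used by the authors.

```python
import numpy as np

def fwhm(x: np.ndarray, y: np.ndarray) -> float:
    """Full width at half maximum of a single-peaked profile y(x),
    using linear interpolation at the half-maximum crossings."""
    half = (y.max() + y.min()) / 2.0
    indices = np.where(y >= half)[0]
    left, right = indices[0], indices[-1]

    def crossing(i0, i1):
        # interpolate x where y crosses the half-maximum between samples i0 and i1
        x0, x1, y0, y1 = x[i0], x[i1], y[i0], y[i1]
        return x0 + (half - y0) * (x1 - x0) / (y1 - y0)

    x_left = x[left] if left == 0 else crossing(left - 1, left)
    x_right = x[right] if right == len(x) - 1 else crossing(right, right + 1)
    return x_right - x_left

# Usage: a Gaussian fringe of known width (FWHM = 2*sqrt(2*ln 2)*sigma ≈ 2.355*sigma).
x = np.linspace(-10, 10, 2001)
sigma = 1.5
y = np.exp(-x**2 / (2 * sigma**2))
print(f"Measured FWHM: {fwhm(x, y):.3f} (expected ≈ {2.3548 * sigma:.3f})")
```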

Keywords: Food industry, interferometric, oils, quality control.

PDF Downloads: 2126
138 Fast Return Path Planning for Agricultural Autonomous Terrestrial Robot in a Known Field

Authors: Carlo Cernicchiaro, Pedro D. Gaspar, Martim L. Aguiar

Abstract:

The agricultural sector is becoming more critical than ever in view of the expected overpopulation of the Earth. The introduction of robotic solutions in this field is an increasingly researched topic aimed at making the most of the Earth's resources, avoiding the wear and tear of the human body caused by harsh agricultural work, and opening the possibility of constant, careful processing 24 hours a day. This project is realized for a terrestrial autonomous robot intended to navigate in an orchard, collecting fallen peaches below the trees. When it receives the signal indicating a low battery, it has to return to the docking station, where it replaces its battery and then returns to the last work point to resume its routine. A preset path through orchard tree rows of variable length is considered, which the robot follows iteratively using the D* algorithm. In case of low battery, the D* algorithm is also used to determine the fastest return path to the docking station, as well as the path back from the docking station to the last work point. MATLAB simulations were performed to analyze the flexibility and adaptability of the developed algorithm. The simulation results show an enormous potential for adaptability, particularly in view of the irregularity of the orchard field, since it is not flat and undergoes modifications over time from fallen branches as well as from other obstacles and constraints. The D* algorithm determines the best route in spite of the irregularity of the terrain. Moreover, this work shows a possible solution to improve the tracking of the initial points and reduce the time between movements.
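
A minimal sketch of the return-path idea is given below, using plain A* on a 4-connected occupancy grid. The real system uses D*, which additionally repairs the path incrementally as the field changes, so the grid, the obstacle layout and the unit step costs here are illustrative assumptions only.

```python
import heapq

def astar(grid, start, goal):
    """Shortest 4-connected path on a 0/1 occupancy grid (1 = obstacle),
    returned as a list of (row, col) cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, current, parent = heapq.heappop(open_set)
        if current in came_from:
            continue                       # already expanded with a better cost
        came_from[current] = parent
        if current == goal:
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), current))
    return None

# Usage: return from the last work point (top right) to the docking station (0, 0),
# with the middle column representing a tree row the robot cannot cross.
field = [[0, 0, 1, 0, 0],
         [0, 0, 1, 0, 0],
         [0, 0, 0, 0, 0]]
print(astar(field, start=(0, 4), goal=(0, 0)))
```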

Keywords: Path planning, fastest return path, agricultural terrestrial robot, autonomous, docking station.

PDF Downloads: 806
137 Determination of Potential Agricultural Lands Using Landsat 8 OLI Images and GIS: Case Study of Gokceada (Imroz) Turkey

Authors: Rahmi Kafadar, Levent Genc

Abstract:

In the present study, the aim was to determine potential agricultural lands (PALs) on Gokceada (Imroz) Island of Canakkale province, Turkey. Seven-band Landsat 8 OLI images acquired on July 12 and August 13, 2013, and their 14-band combination image were used to identify the current Land Use Land Cover (LULC) status. Principal Component Analysis (PCA) was applied to the three Landsat datasets in order to reduce the correlation between the bands. A total of six original and PCA images were classified using a supervised classification method to obtain LULC maps comprising six main classes ("Forest", "Agriculture", "Water Surface", "Residential Area-Bare Soil", "Reforestation" and "Other"). Accuracy assessment was performed by checking the accuracy of 120 randomized points for each LULC map. The best overall accuracy and Kappa statistic values (90.83% and 0.8791, respectively) were found for the PCA image generated from the 14-band combined image, called 3-B/JA. A Digital Elevation Model (DEM) with 15 m spatial resolution (ASTER) was used to consider topographical characteristics. Soil properties were obtained by digitizing 1:25000 scaled soil maps of the Rural Services Directorate General. Potential agricultural lands (PALs) were determined using Geographic Information Systems (GIS). The procedure was applied considering that the "Other" class of the LULC map may be used for agricultural purposes in the future. Overlay analysis was conducted using Slope (S), Land Use Capability Class (LUCC), Other Soil Properties (OSP) and Land Use Capability Sub-Class (SUBC) properties. A total of 901.62 ha within the "Other" class (15798.2 ha) of the LULC map was determined as PALs. These lands were ranked as "Very Suitable", "Suitable", "Moderate Suitable" and "Low Suitable". It was determined that 8.03 ha were classified as "Very Suitable", 18.59 ha as "Suitable" and 11.44 ha as "Moderate Suitable" for PALs. In addition, 756.56 ha were found to be "Low Suitable". The results obtained from this preliminary study can serve as a basis for further studies.
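
The principal component transform applied to the stacked Landsat bands can be sketched as follows; the random array stands in for the 7-band and 14-band images, and the eigendecomposition of the band covariance matrix shown here is the generic PCA step rather than the specific remote-sensing/GIS toolchain used in the study.

```python
import numpy as np

def band_pca(image_stack: np.ndarray, n_components: int = 3) -> np.ndarray:
    """Principal component transform of a (rows, cols, bands) image stack.
    Returns a (rows, cols, n_components) array of principal-component bands."""
    rows, cols, bands = image_stack.shape
    pixels = image_stack.reshape(-1, bands).astype(np.float64)
    pixels -= pixels.mean(axis=0)                      # center each band
    cov = np.cov(pixels, rowvar=False)                 # band-to-band covariance
    eigenvalues, eigenvectors = np.linalg.eigh(cov)    # ascending eigenvalues
    order = np.argsort(eigenvalues)[::-1]              # strongest components first
    components = eigenvectors[:, order[:n_components]]
    explained = eigenvalues[order] / eigenvalues.sum()
    print("Explained variance ratios:", np.round(explained[:n_components], 3))
    return (pixels @ components).reshape(rows, cols, n_components)

# Usage: a synthetic 7-band scene standing in for the Landsat 8 OLI data.
rng = np.random.default_rng(1)
scene = rng.normal(size=(100, 120, 7))
pca_bands = band_pca(scene, n_components=3)
print(pca_bands.shape)  # (100, 120, 3)
```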

Keywords: Digital Elevation Model (DEM), Geographic Information Systems (GIS), LANDSAT 8 OLI-TIRS, Land Use Land Cover (LULC).

PDF Downloads: 2588
136 Effect of Injection Moulding Process Parameter on Tensile Strength Using Taguchi Method

Authors: Gurjeet Singh, M. K. Pradhan, Ajay Verma

Abstract:

The plastic industry plays a very important role in the economy of any country, generally accounting for a leading share of it. Since metals and their alloys are only rarely available on the earth, it is beneficial to produce plastic products and components, which find application in many industrial as well as household consumer products; about 50% of plastic products are manufactured by the injection moulding process. For the production of better quality products, we have to control the quality characteristics and performance of the product. The process parameters play a significant role in the production of plastics, hence the control of process parameters is essential. In this paper the effect of parameter selection on the injection moulding process is described, with the goal of defining suitable parameters for producing the plastic product. Selecting the process parameters by trial and error is neither desirable nor acceptable, as it often tends to increase cost and time. Hence, optimization of the processing parameters of the injection moulding process is essential. The experiments were designed with Taguchi's orthogonal array to achieve the result with the least number of experiments. The plastic material polypropylene is studied. The tensile strength of the material produced by the injection moulding machine is tested on a universal testing machine. Using the Taguchi technique with the help of MiniTab-14 software, the best values of injection pressure, melt temperature, packing pressure and packing time are obtained. We found that the process parameter packing pressure contributes most to the production of a good tensile plastic product.
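
A minimal sketch of the Taguchi signal-to-noise analysis follows. The L9 layout, the coded factor levels and the tensile-strength numbers are invented for illustration and are not the experimental values of this study.

```python
import numpy as np

def sn_larger_is_better(responses):
    """Taguchi larger-is-better signal-to-noise ratio: -10*log10(mean(1/y^2))."""
    y = np.asarray(responses, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

def main_effects(design, sn_ratios):
    """Average S/N ratio per level for each factor of an orthogonal-array design."""
    design = np.asarray(design)
    effects = {}
    for factor in range(design.shape[1]):
        effects[factor] = {
            level: float(np.mean(sn_ratios[design[:, factor] == level]))
            for level in np.unique(design[:, factor])
        }
    return effects

# Hypothetical L9 array: columns = injection pressure, melt temperature,
# packing pressure, packing time (levels coded 1-3); one tensile result per run.
L9 = np.array([[1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
               [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
               [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1]])
tensile = [30.1, 31.4, 32.0, 30.8, 33.2, 31.1, 31.9, 30.5, 32.7]  # MPa, illustrative
sn = np.array([sn_larger_is_better([y]) for y in tensile])
for factor, levels in main_effects(L9, sn).items():
    best = max(levels, key=levels.get)
    print(f"Factor {factor}: best level {best}, mean S/N by level {levels}")
```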

Keywords: Injection moulding, tensile strength, Taguchi method, poly-propylene.

PDF Downloads: 3701
135 ORank: An Ontology Based System for Ranking Documents

Authors: Mehrnoush Shamsfard, Azadeh Nematzadeh, Sarah Motiee

Abstract:

The increasing growth of information volume on the internet causes an increasing need to develop new (semi)automatic methods for retrieving documents and ranking them according to their relevance to the user query. In this paper, after a brief review of ranking models, a new ontology-based approach for ranking HTML documents is proposed and evaluated in various circumstances. Our approach is a combination of conceptual, statistical and linguistic methods. This combination preserves the precision of ranking without losing speed. Our approach exploits natural language processing techniques for extracting phrases and stemming words. Then an ontology-based conceptual method is used to annotate documents and expand the query. To expand a query, the spread activation algorithm is improved so that the expansion can be done in various aspects. The annotated documents and the expanded query are processed to compute the relevance degree using statistical methods. The outstanding features of our approach are (1) combining conceptual, statistical and linguistic features of documents, (2) expanding the query with its related concepts before comparing it to documents, (3) extracting and using both words and phrases to compute the relevance degree, (4) improving the spread activation algorithm to do the expansion based on a weighted combination of different conceptual relationships and (5) allowing variable document vector dimensions. A ranking system called ORank is developed to implement and test the proposed model. The test results are included at the end of the paper.
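
The query-expansion step based on spread activation can be sketched as follows; the toy concept graph, the relation weights and the decay factor are illustrative assumptions, not the ontology or tuning used by ORank.

```python
def spread_activation(graph, seeds, decay=0.5, threshold=0.1, max_iterations=3):
    """Propagate activation from seed concepts through weighted ontology relations.
    graph: {concept: [(neighbor, relation_weight), ...]}
    Returns {concept: activation} for concepts that received activation above the threshold."""
    activation = dict(seeds)  # initial activation from the query concepts
    frontier = dict(seeds)
    for _ in range(max_iterations):
        next_frontier = {}
        for concept, value in frontier.items():
            for neighbor, weight in graph.get(concept, []):
                spread = value * weight * decay
                if spread < threshold:
                    continue
                if spread > activation.get(neighbor, 0.0):
                    activation[neighbor] = spread
                    next_frontier[neighbor] = spread
        if not next_frontier:
            break
        frontier = next_frontier
    return activation

# Usage: expand the query concept "vehicle" over a tiny weighted concept graph.
ontology = {
    "vehicle": [("car", 0.9), ("bicycle", 0.8)],
    "car":     [("engine", 0.7), ("wheel", 0.6)],
    "bicycle": [("wheel", 0.6)],
}
print(spread_activation(ontology, seeds={"vehicle": 1.0}))
```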

Keywords: Document ranking, Ontology, Spread activation algorithm, Annotation.

PDF Downloads: 1833
134 A Novel Multiple Valued Logic OHRNS Modulo rn Adder Circuit

Authors: Mehdi Hosseinzadeh, Somayyeh Jafarali Jassbi, Keivan Navi

Abstract:

The Residue Number System (RNS) is a modular representation and has proved to be an instrumental tool in many digital signal processing (DSP) applications which require high-speed computations. RNS is an integer, non-weighted number system; it can support parallel, carry-free, high-speed and low-power arithmetic. A very interesting correspondence exists between the concepts of Multiple Valued Logic (MVL) and residue number arithmetic. If the number of levels used to represent MVL signals is chosen to be consistent with the moduli which create the finite rings in the RNS, MVL becomes a very natural representation for the RNS. There are two concerns related to the application of this number system: reaching the highest possible speed and the largest dynamic range. There is a conflict when one wants to resolve both problems, since augmenting the dynamic range reduces the speed at the same time. To achieve the best performance, a method named "One-Hot Residue Number System" (OHRNS) is considered, in which the propagation delay is equal to only one transistor delay. The problem with this method is the huge increase in the number of transistors, which grow on the order of m². In real applications this is practically impossible. In this paper, combining Multiple Valued Logic and the One-Hot Residue Number System, we present a new method to resolve both of these problems. We present a novel design of an OHRNS-based adder circuit. This circuit is usable for Multiple Valued Logic moduli; in comparison to other RNS designs, it considerably improves the number of transistors and the power consumption.
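
Independent of the circuit-level one-hot encoding, the residue arithmetic underlying such an adder can be sketched in software as below. The moduli set {7, 15, 16} and the Chinese Remainder Theorem reconstruction are standard RNS ingredients chosen for illustration only.

```python
from math import prod

MODULI = (7, 15, 16)  # pairwise coprime moduli, dynamic range 7*15*16 = 1680

def to_residues(x, moduli=MODULI):
    """Represent an integer by its residues modulo each modulus."""
    return tuple(x % m for m in moduli)

def rns_add(a, b, moduli=MODULI):
    """Carry-free addition: each residue channel adds independently."""
    return tuple((ra + rb) % m for ra, rb, m in zip(a, b, moduli))

def from_residues(residues, moduli=MODULI):
    """Chinese Remainder Theorem reconstruction back to a normal integer."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(Mi, -1, m) is the modular inverse
    return x % M

# Usage: 123 + 456 computed channel-by-channel, then reconstructed.
a, b = to_residues(123), to_residues(456)
print(a, b, "->", from_residues(rns_add(a, b)))  # 579
```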

Keywords: Computer Arithmetic, Residue Number System, Multiple Valued Logic, One-Hot, VLSI.

PDF Downloads: 1797
133 Stochastic Optimization of a Vendor-Managed Inventory Problem in a Two-Echelon Supply Chain

Authors: Bita Payami-Shabestari, Dariush Eslami

Abstract:

The purpose of this paper is to develop a multi-product economic production quantity model under a vendor-managed inventory policy and restrictions including limited warehouse space, budget, number of orders, average shortage time and maximum permissible shortage. Since the costs cannot be predicted with certainty, it is assumed that the data behave under an uncertain environment. The problem is first formulated as a bi-objective multi-product economic production quantity model and then solved with three multi-objective decision-making (MODM) methods. Following this, the three methods are compared on the optimal values of the two objective functions and on the central processing unit (CPU) time, using statistical analysis and multi-attribute decision-making (MADM). The results of the study demonstrate that the augmented ε-constraint method performs better than global criteria and goal programming in terms of the optimal values of the two objective functions and the CPU time. Sensitivity analysis is done to illustrate the effect of parameter variations on the optimal solution. The contribution of this research is the use of random cost data in developing a multi-product economic production quantity model under a vendor-managed inventory policy with several constraints.

Keywords: Economic production quantity, random cost, supply chain management, vendor-managed inventory.

PDF Downloads: 619
132 Natural Antioxidant Changes in Fresh and Dried Spices and Vegetables

Authors: Liga Priecina, Daina Karklina

Abstract:

Antioxidants have become among the most analyzed substances in recent decades. Antioxidants act as inactivators of free radicals. Spices and vegetables are among the major antioxidant sources, the most common antioxidants in them being vitamins C and E, phenolic compounds and carotenoids. Therefore, it is important to gain some insight into antioxidant changes in spices and vegetables during processing. In this article, nine fresh and dried spices and vegetables grown in Latvia in 2013 were analyzed: celery (Apium graveolens), parsley (Petroselinum crispum), dill (Anethum graveolens), leek (Allium ampeloprasum L.), garlic (Allium sativum L.), onion (Allium cepa), celery root (Apium graveolens var. rapaceum), pumpkin (Cucurbita maxima) and carrot (Daucus carota). Total carotenoids and phenolic compounds and their antiradical scavenging activity were determined for all samples. Dry matter content was calculated from the moisture content. After the drying process, the carotenoid content decreases significantly in all analyzed samples except one: the carotenoid content increases in parsley. The phenolic composition differed depending on the sample, fresh or dried. Total phenolic, flavonoid and phenolic acid contents increase in dried spices. Flavan-3-ol content was not detected in fresh spice samples. For dried vegetables, the phenolic acid content decreases significantly, while the flavan-3-ol content increases. The higher antiradical scavenging activity was observed in samples with higher flavonoid and phenolic acid content.

Keywords: Antiradical scavenging activity, carotenoids, phenolic compounds, spices, vegetables.

PDF Downloads: 3589
131 The Mechanism Study of Degradative Solvent Extraction of Biomass by Liquid Membrane-Fourier Transform Infrared Spectroscopy

Authors: W. Ketren, J. Wannapeera, Z. Heishun, A. Ryuichi, K. Toshiteru, M. Kouichi, O. Hideaki

Abstract:

Degradative solvent extraction is a method developed for biomass upgrading by dewatering and fractionation of biomass under mild conditions. However, the conversion mechanism of the degradative solvent extraction method has not been fully understood so far. Rice straw was treated in 1-methylnaphthalene (1-MN) at solvent-treatment temperatures varying from 250 to 350 °C with a residence time of 60 min. The liquid membrane-Fourier Transform Infrared Spectroscopy (FTIR) technique is applied to study the processing mechanism in depth without separation of the solvent. It has been found that the strength of the oxygen-hydrogen stretching band (3600-3100 cm⁻¹) decreased slightly with increasing temperature in the range of 300-350 °C. The decrease of the hydroxyl group in the solvent-soluble fraction suggested a dehydration reaction taking place between 300 and 350 °C. FTIR spectra in the carbonyl stretching region (1800-1600 cm⁻¹) revealed the presence of ester, carboxylic acid and ketonic groups in the solvent-soluble fraction of the biomass. The carboxylic acid content increased in the range of 200 to 250 °C and then decreased. The prevalence of aromatic groups showed that aromatization took place during extraction above 250 °C. From 300 to 350 °C, the carbonyl functional groups in the solvent-soluble fraction noticeably decreased. The removal of carboxylic acid and the decrease of esters, released in the form of carbon dioxide, indicated that a decarboxylation reaction occurred during the extraction process.

Keywords: Biomass upgrading, liquid membrane-Fourier transform infrared spectroscopy, FTIR, degradative solvent extraction, mechanism.

PDF Downloads: 941
130 A Mixing Matrix Estimation Algorithm for Speech Signals under the Under-Determined Blind Source Separation Model

Authors: Jing Wu, Wei Lv, Yibing Li, Yuanfan You

Abstract:

The separation of speech signals has become a research hotspot in the field of signal processing in recent years. It has many applications and influences in teleconferencing, hearing aids, machine speech recognition and so on. The sounds received are usually noisy. The issue of identifying the sounds of interest and obtaining clear sounds in such an environment becomes a problem worth exploring, namely the problem of blind source separation. This paper focuses on under-determined blind source separation (UBSS). Sparse component analysis is generally used for the problem of under-determined blind source separation. The method is mainly divided into two parts. First, a clustering algorithm is used to estimate the mixing matrix from the observed signals; then the signals are separated based on the known mixing matrix. In this paper, the problem of mixing matrix estimation is studied, and an improved algorithm to estimate the mixing matrix for speech signals in the UBSS model is proposed. The traditional potential-function algorithm is not accurate for mixing matrix estimation, especially at low signal-to-noise ratio (SNR). In response to this problem, this paper considers an improved potential function method to estimate the mixing matrix. The algorithm not only avoids the influence of insufficient prior information in traditional clustering algorithms, but also improves the estimation accuracy of the mixing matrix. This paper takes the mixing of four speech signals into two channels as an example. The simulation results show that the approach in this paper not only improves the accuracy of estimation, but also applies to any mixing matrix.
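
The clustering stage of mixing-matrix estimation can be sketched for the two-channel case as below. A plain one-dimensional k-means on sample angles is used as a stand-in for the improved potential-function method proposed in the paper, and the synthetic sparse sources and mixing matrix are illustrative.

```python
import numpy as np

def estimate_mixing_directions(mixtures, n_sources, energy_quantile=0.9, iterations=50):
    """Estimate column directions of a 2 x n_sources mixing matrix from sparse mixtures.
    Keeps only high-energy samples (most likely single-source), then runs a simple
    k-means on their angles. Columns are returned as unit vectors (sign ambiguity remains)."""
    x1, x2 = mixtures
    energy = x1**2 + x2**2
    keep = energy >= np.quantile(energy, energy_quantile)
    angles = np.mod(np.arctan2(x2[keep], x1[keep]), np.pi)  # fold antipodal points together
    centers = np.linspace(angles.min(), angles.max(), n_sources)
    for _ in range(iterations):                              # plain 1-D k-means
        labels = np.argmin(np.abs(angles[:, None] - centers[None, :]), axis=1)
        for k in range(n_sources):
            if np.any(labels == k):
                centers[k] = angles[labels == k].mean()
    return np.vstack([np.cos(centers), np.sin(centers)])     # 2 x n_sources estimate

# Usage: four sparse sources mixed into two channels by a known matrix A.
rng = np.random.default_rng(2)
sources = rng.laplace(size=(4, 20000)) * (rng.random((4, 20000)) < 0.05)  # sparse activity
A = np.array([[0.96, 0.71, 0.26, -0.50],
              [0.28, 0.71, 0.97,  0.87]])
A /= np.linalg.norm(A, axis=0)
X = A @ sources
print(np.round(estimate_mixing_directions(X, n_sources=4), 2))
```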

Keywords: Clustering algorithm, potential function, speech signal, the UBSS model.

PDF Downloads: 618
129 Robot Technology Impact on Dyslexic Students’ English Learning

Authors: Khaled Hamdan, Abid Amorri, Fatima Hamdan

Abstract:

Involving students in the English language learning process and achieving adequate English language proficiency in the target language can be a great challenge for both teachers and students. This can prove an even greater challenge when engaging students with special needs (dyslexia), if they have a physical impairment and inadequate mastery of basic communicative language competence/proficiency in the target language. From this perspective, technology like robots can probably be used to enhance the learning process for special needs students who have extensive communication needs, who face a continuous struggle to interact with their peers and teachers, and who must meet academic requirements. Robots, precisely NAO, can probably provide them with the perfect opportunity to practice social and communication skills and meet their English academic requirements. This research paper aims to identify to what extent robots can be used to improve students' social interaction and communication skills and to understand the potential of robotics-based education in motivating and engaging UAEU dyslexic students to meet university requirements. To reach this end, the paper explores several factors that come into play: the Motion Level, involving cognitive activities; the Interaction Level, involving language processing; the Behavior Level, establishing a close relationship with the robot; and the Appraisal Level, focusing on dyslexic students' achievement in the target language.

Keywords: Dyslexia, robot technology, motion, interaction, behavior and appraisal levels, social and communication skills.

PDF Downloads: 1325
128 Developing Well-Being Indicators and Measurement Methods as Illustrated by Projects Aimed at Preventing Obesity in Children

Authors: E. Grochowska-Niedworok, K. Brukało, M. Hadasik, M. Kardas

Abstract:

Consumption of vegetables by school children and adolescents is essential for their normal growth, development and health, yet only a minority of the world's population consumes the right amount of these products. The aim of the study was to evaluate the preferences and frequency of vegetable consumption by school children and adolescents. It has been assumed that effectively implemented nutrition education programs should increase the frequency of vegetable consumption among the recipients. The study covered 514 students of five schools in the Opole Voivodeship aged 9 to 22 years. The research tool was an author's questionnaire, which consisted of closed questions on the frequency of vegetable consumption and the use of 10 methods of preparing them. Preferences and frequencies are shown as percentages, while correlations were estimated on the basis of Cramer's V and gamma coefficients. In each of the examined age groups, the relationship between sex and vegetable consumption (Cramer's V values from 0.06 to 0.38) and the various methods of culinary processing used (Cramer's V from 0.08 to 0.34) was determined. For both sexes, relationships between age and the frequency of vegetable consumption (gamma values from ~0.00 to 0.39) and the different cooking methods (gamma values from 0.01 to 0.22) were shown. The most important determinants of nutritional choices are the taste and availability of products; the fact that they have a positive effect on health comes only in third position. As has been shown, obesity prevention programs can not only address nutrition education but also teach about new flavors and increase the availability of healthy foods. In addition, the frequency of vegetable consumption can be a good indicator reflecting the healthy behaviors of children and adolescents.
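
The association measure reported above can be reproduced with a short sketch; the contingency table below is invented for illustration and is not the survey data.

```python
import numpy as np

def cramers_v(table) -> float:
    """Cramer's V for an r x c contingency table: sqrt(chi2 / (n * (min(r, c) - 1)))."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n  # independence model
    chi2 = ((table - expected) ** 2 / expected).sum()              # Pearson chi-square
    r, c = table.shape
    return float(np.sqrt(chi2 / (n * (min(r, c) - 1))))

# Usage: hypothetical counts of girls/boys by vegetable-consumption frequency
# (daily, a few times a week, rarely).
table = [[40, 35, 25],
         [28, 37, 35]]
print(f"Cramer's V = {cramers_v(table):.2f}")
```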

Keywords: Children and adolescents, frequency, welfare rate, vegetables.

PDF Downloads: 961
127 Co-Disposal of Coal Ash with Mine Tailings in Surface Paste Disposal Practices: A Gold Mining Case Study

Authors: M. L. Dinis, M. C. Vila, A. Fiúza, A. Futuro, C. Nunes

Abstract:

The present paper describes a study of paste tailings prepared in the laboratory using gold tailings produced in a Finnish gold mine, with the incorporation of coal ash. Natural leaching tests were conducted with the original materials (tailings, fly and bottom ashes) and also with paste mixtures prepared with different percentages of tailings and ashes. After leaching, the solid wastes were physically and chemically characterized, and the results were compared to those selected as the blank, i.e., the unleached samples. The tailings and the coal ash, as well as the prepared mixtures, were characterized, in addition to the textural parameters, by the following measurements: grain size distribution, chemical composition and pH. The mixtures were also tested in order to characterize their mechanical behavior by measuring the flexural strength, the compressive strength and the consistency. The original tailing samples presented an alkaline pH because during their processing they had been submitted to pressure oxidation with destruction of the sulfides. Therefore, it was not possible to ascertain the effect of the coal ashes on acid mine drainage. However, it was possible to verify that the paste reactivity was affected mostly by the bottom ash and that the tailings blended with bottom ash present lower mechanical strength than when blended with a combination of fly and bottom ash. Surface paste disposal offers an attractive alternative to traditional methods, in addition to the environmental benefits of incorporating large-volume wastes (e.g. bottom ash). However, a comprehensive characterization of the paste mixtures is crucial to optimize paste design in order to enhance engineering and environmental properties.

Keywords: Coal ash, gold tailings, paste, surface disposal.

PDF Downloads: 1389
126 Frictional Effects on the Dynamics of a Truncated Double-Cone Gravitational Motor

Authors: Barenten Suciu

Abstract:

In this work, the effects of friction and truncation on the dynamics of a double-cone gravitational motor, self-propelled on a straight V-shaped horizontal rail, are evaluated. Such a mechanism has a variable radius of contact and, on the one hand, is similar to a pulley mechanism that changes potential energy into the kinetic energy of rotation, while on the other hand, it is similar to a pendulum mechanism that converts the potential energy of the suspended body into the kinetic energy of translation along a circular path. Movies of the self-propelled double-cones, made of S45C carbon steel and wood, along rails made of aluminum alloy, were shot for various opening angles of the rails. Kinematical features of the double-cones were estimated through slow-motion processing of the recorded movies. Then, a kinematical model was derived under the assumption that the distance traveled by the contact points on the rectilinear rails is identical to the distance traveled by the contact points on the truncated conical surface. Additionally, a dynamic model for this particular contact problem was proposed and validated against the experimental results. Based on such a model, the traction force and the traction torque acting on the double-cone are identified. It is proved that the rolling traction force is always smaller than the sliding friction force, i.e., the double-cone rolls without slipping. Results obtained in this work can be used to achieve the proper design of such gravitational motors.

Keywords: Truncated double-cone, friction, rolling and sliding, dynamic model, gravitational motor.

PDF Downloads: 1289
125 A Survey of Field Programmable Gate Array-Based Convolutional Neural Network Accelerators

Authors: Wei Zhang

Abstract:

With the rapid development of deep learning, neural networks and deep learning algorithms play a significant role in various practical applications. Due to their high accuracy and good performance, Convolutional Neural Networks (CNNs) in particular have become a research hotspot in the past few years. However, the size of the networks becomes increasingly large due to the demands of practical applications, which poses a significant challenge to constructing a high-performance implementation of deep learning neural networks. Meanwhile, many of these application scenarios also have strict requirements on the performance and low power consumption of hardware devices. Therefore, it is particularly critical to choose a suitable computing platform for hardware acceleration of CNNs. This article surveys the recent advances in Field Programmable Gate Array (FPGA)-based acceleration of CNNs. Various designs and implementations of accelerators based on FPGA under different devices and network models are overviewed and compared against implementations on Graphic Processing Units (GPUs), Application Specific Integrated Circuits (ASICs) and Digital Signal Processors (DSPs), together with our own critical analysis and comments. Finally, we discuss different perspectives on these acceleration and optimization methods for FPGA platforms to further explore the opportunities and challenges for future research, and we give a prospect for the future development of FPGA-based accelerators.

Keywords: Deep learning, field programmable gate array, FPGA, hardware acceleration, convolutional neural networks, CNN.

PDF Downloads: 812
124 Automation of Heat Exchanger using Neural Network

Authors: Sudhir Agashe, Ashok Ghatol, Sujata Agashe

Abstract:

In this paper, the development of a heat exchanger as a pilot plant for educational purposes is discussed, and the use of a neural network for controlling the process is presented. The aim of the study is to highlight the need for a specific Pseudo Random Binary Sequence (PRBS) to excite a process under control. As the neural network is a data-driven technique, the method of data generation plays an important role; in light of this, a careful experimentation procedure for data generation was a crucial task. Heat exchange is a complex process, which has a capacity and a time lag as process elements. The proposed system is a typical pipe-in-pipe type heat exchanger. The complexity of the system demands careful selection, proper installation and commissioning. The temperature, flow and pressure sensors play a vital role in the control performance. The final control element used is a pneumatically operated control valve. While carrying out the experimentation on the heat exchanger, a well-drafted procedure is followed, giving utmost attention to the safety of the system. The results obtained are encouraging and reveal that if the process details are known completely as far as process parameters are concerned and the utilities are well stabilized, then feedback systems are suitable, whereas the neural network control paradigm is useful for processes with nonlinearity and less knowledge about the process. The implementation of NN control reinforces the concepts of process control and the NN control paradigm. The results also underline the importance of the excitation signal typical for that process. Data acquisition, processing and presentation in a typical format are the most important aspects while validating the results.
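
The PRBS excitation emphasized above can be generated with a small linear feedback shift register sketch. The 7-bit register, the tap positions and the two valve-opening levels are common textbook choices, not necessarily the sequence used on this pilot plant.

```python
def prbs(n_bits=7, taps=(7, 6), length=127, low=0.0, high=1.0):
    """Pseudo Random Binary Sequence from a maximal-length LFSR.
    With n_bits=7 and taps (7, 6) the sequence repeats every 2**7 - 1 = 127 samples."""
    state = [1] * n_bits                       # any non-zero seed works
    sequence = []
    for _ in range(length):
        bit = 0
        for t in taps:                         # XOR of the tapped stages
            bit ^= state[t - 1]
        sequence.append(high if state[-1] else low)
        state = [bit] + state[:-1]             # shift the register one step
    return sequence

# Usage: a two-level valve-position excitation signal for identification experiments.
signal = prbs(low=20.0, high=80.0)             # e.g. percent valve opening
print(signal[:16])
print("Period check:", signal[:10] == prbs(length=254, low=20.0, high=80.0)[127:137])
```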

Keywords: Process identification, neural network, heat exchanger.

PDF Downloads: 1523
123 Discovery of Quantified Hierarchical Production Rules from Large Set of Discovered Rules

Authors: Tamanna Siddiqui, M. Afshar Alam

Abstract:

Automated discovery of rules is, due to its applicability, one of the most fundamental and important methods in KDD. It has been an active research area in the recent past. Hierarchical representation allows us to easily manage the complexity of knowledge, to view the knowledge at different levels of detail, and to focus our attention on the interesting aspects only. One such efficient and easy-to-understand system is the Hierarchical Production Rule (HPR) system. An HPR, a standard production rule augmented with generality and specificity information, is of the following form: Decision If <condition> Generality <general information> Specificity <specific information>. HPR systems are capable of handling the taxonomical structures inherent in knowledge about the real world. This paper focuses on the issue of mining quantified rules with a crisp hierarchical structure using a Genetic Programming (GP) approach to knowledge discovery. The post-processing scheme presented in this work uses quantified production rules as the initial individuals of GP and discovers a hierarchical structure. In the proposed approach, rules are quantified by using Dempster-Shafer theory. Suitable genetic operators are proposed for the suggested encoding. Based on the Subsumption Matrix (SM), an appropriate fitness function is suggested. Finally, Quantified Hierarchical Production Rules (HPRs) are generated from the discovered hierarchy, using Dempster-Shafer theory. Experimental results are presented to demonstrate the performance of the proposed algorithm.

Keywords: Knowledge discovery in databases, quantification, Dempster-Shafer theory, genetic programming, hierarchy, subsumption matrix.

PDF Downloads: 1482
122 Occurrence of Foreign Matter in Food: Applied Identification Method - Association of Official Agricultural Chemists (AOAC) and Food and Drug Administration (FDA)

Authors: E. C. Mattos, V. S. M. G. Daros, R. Dal Col, A. L. Nascimento

Abstract:

The aim of this study is to present the results of a retrospective survey of the foreign matter found in foods analyzed at the Adolfo Lutz Institute from July 2001 to July 2015. All the analyses were conducted according to the official methods described by the Association of Official Agricultural Chemists (AOAC) for the microanalytical procedures and by the Food and Drug Administration (FDA) for the macroanalytical procedures. The results showed that flours, cereals and derivatives such as baking and pasta products were the types of food in which foreign matter was found most frequently, followed by condiments and teas. Fragments of stored-grain insects, their larvae, nets, excrement, dead mites and rodent excrement were the foreign matter most often found in food. Foreign matter that can cause a physical risk to the consumer's health, such as metal, stones, glass and wood, was found, but rarely. Miscellaneous matter (shell, sand, dirt and seeds) was also reported. Many extraneous materials are considered unavoidable, since they are inherent to the product itself, such as insect fragments in grains. In contrast, avoidable extraneous materials are less tolerated because they are preventable with Good Manufacturing Practice. The conclusion of this work is that, although most extraneous materials found in food are considered unavoidable, it is necessary to maintain Good Manufacturing Practice throughout food processing, as well as constant surveillance of the production process, in order to avoid accidents that may lead to the occurrence of these extraneous materials in food.

Keywords: Food contamination, extraneous materials, foreign matter, surveillance.

PDF Downloads: 3623
121 Criminal Protection Objectivity of the Child's Right to Life and Physical and Psychological Safety

Authors: Hezha Hewa, Taher Sur

Abstract:

Nowadays, child affairs are a matter of both national and international interest. This issue is regarded as a vital topic for various scientific fields across the ages, and for all communities without exception. However, the nature of child care may vary due to the variety of scientific perspectives, so considering children's affairs from different perspectives is helpful for obtaining a complementary image of this matter. The purpose behind selecting this topic is to keep a balance between the victim on the one hand, and the guardian and the offender on the other hand, i.e., to avoid any kind of excessiveness either in the protection of the child and its rights or in the punishment of the offender. This is achieved by considering various legal materials in the Iraqi legislation and in the comparative legislations concerned with the child's issue and the extent to which the child makes use of these rights. The scope of this study involves the crimes that are considered aggressions against the child's right to life, and the crimes that endanger their physical and psychological safety. This study therefore comprehensively considers the intentional murder of a child, child murder to avoid disgrace, child kidnapping, child abandonment, physical abuse whether for the sake of punishment or not, child circumcision, verbal violence, and abstaining from leaving a child with a person who has the right of custody. This study ends with the most significant concluding points derived throughout: Unlike the Iraqi legislation, the Egyptian legislation defines the child, in Article 2 of the Child Law No. 12 of 1996 as amended by Law No. 126 of 2008, as a person who does not exceed 18 years of age. Some legislations do not provide special criminal protection against the intentional murder of a child, as in the Iraqi and Egyptian legislations, whereas others provide special criminal protection for the child, as in the French and Syrian legislations. Child kidnapping is regarded as one of the most dangerous crimes affecting the child and the family as well, as it may expose the child's life to danger or to death. The most significant recommendations from the researcher are: the Iraqi legislature is recommended to take the necessary measures to establish particular legislation for the child, including all the legal provisions associated with this vulnerable creature, and to make use of the Egyptian legislature's experience as a pioneer in this respect. Both the Iraqi and the Egyptian legislatures are recommended to enact special laws to protect the child from the crime of intentional murder, as the crime of child murder is currently subject to the same provisions considered for adult murder.

Keywords: Child abuse, juvenile, legislation, punishment and aggravation.

PDF Downloads: 1010
120 Issues in Spectral Source Separation Techniques for Plant-wide Oscillation Detection and Diagnosis

Authors: A.K. Tangirala, S. Babji

Abstract:

In the last few years, three multivariate spectral analysis techniques, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-negative Matrix Factorization (NMF), have emerged as effective tools for oscillation detection and isolation. While the first method is used to determine the number of oscillatory sources, the latter two methods are used to identify source signatures by formulating the detection problem as a source identification problem in the spectral domain. In this paper, we present a critical drawback of the underlying linear (mixing) model which strongly limits the ability of the associated source separation methods to determine the number of sources and/or identify the physical source signatures. It is shown that the assumed mixing model is only valid if each unit of the process gives equal weighting (all-pass filter) to all oscillatory components in its inputs. This is in contrast to the fact that each unit, in general, acts as a filter with a non-uniform frequency response. Thus, the model can only facilitate correct identification of a source with a single frequency component, which is again unrealistic. To overcome this deficiency, an iterative post-processing algorithm that correctly identifies the physical source(s) is developed. An additional issue with the existing methods is that they lack a procedure to pre-screen non-oscillatory/noisy measurements which obscure the identification of oscillatory sources. In this regard, a pre-screening procedure based on the notion of a sparseness index is prescribed to eliminate the noisy and non-oscillatory measurements from the data set used for analysis.
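
The spectral factorization step shared by these methods can be illustrated with a plain multiplicative-update NMF applied to synthetic power spectra, as sketched below; it shows the decomposition itself, not the iterative post-processing or the sparseness-index pre-screening proposed in the paper.

```python
import numpy as np

def nmf(V, rank, iterations=500, eps=1e-9):
    """Non-negative matrix factorization V ≈ W @ H by Lee-Seung multiplicative updates
    (Frobenius norm). V: (n_measurements, n_frequencies), all entries non-negative."""
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(iterations):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Usage: two oscillatory "sources" (spectral peaks) mixed into five measurement spectra.
freqs = np.linspace(0, 0.5, 256)
peak = lambda f0: np.exp(-((freqs - f0) ** 2) / (2 * 0.01 ** 2))
sources = np.vstack([peak(0.12), peak(0.31)])            # (2, 256) spectral signatures
mixing = np.random.default_rng(1).random((5, 2))         # non-negative mixing weights
V = mixing @ sources + 1e-3                              # five measured spectra
W, H = nmf(V, rank=2)
print("Recovered peak frequencies:", freqs[np.argmax(H, axis=1)])
```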

Keywords: Non-negative matrix factorization, PCA, source separation, plant-wide diagnosis.

PDF Downloads: 1475
119 Data Compression in Ultrasonic Network Communication via Sparse Signal Processing

Authors: Beata Zima, Octavio A. Márquez Reyes, Masoud Mohammadgholiha, Jochen Moll, Luca De Marchi

Abstract:

This document presents an approach to using compressed sensing for signal encoding and information transfer within a guided wave sensor network comprised of specially designed frequency steerable acoustic transducers (FSATs). Wave propagation in a damaged plate was simulated using the commercial FEM-based software COMSOL. Guided waves were excited by means of FSATs, characterized by the special shape of their electrodes and modeled using PIC255 piezoelectric material. The special shape of the FSAT allows wave energy to be focused in a certain direction according to the frequency components of its actuation signal, which makes a larger monitored area available. The process begins when an FSAT detects and records a reflection from damage in the structure; this signal is then encoded and prepared for transmission using a combined approach based on Compressed Sensing Matching Pursuit and Quadrature Amplitude Modulation (QAM). After the signal is encoded in binary, the information is transmitted between the nodes in the network. The message reaches the last node, where it is finally decoded and processed to be used for damage detection and localization purposes. The main aim of the investigation is to determine the location of detected damage using the reconstructed signals. The study demonstrates that the special steerable capabilities of FSATs not only facilitate the detection of damage but also permit transmitting the damage information to a chosen area in a specific direction of the investigated structure.
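
The sparse encoding step can be illustrated with a plain matching pursuit over a random dictionary, as sketched below. The dictionary, the sparsity level and the synthetic signal are assumptions for illustration and do not reproduce the FSAT waveforms or the exact compressed sensing/QAM chain of the study.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy matching pursuit: returns (indices, weights) of the selected atoms.
    dictionary: (signal_length, n_dictionary_atoms) with unit-norm columns."""
    residual = signal.astype(float).copy()
    indices, weights = [], []
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual
        best = int(np.argmax(np.abs(correlations)))
        w = correlations[best]
        residual -= w * dictionary[:, best]        # subtract the selected atom
        indices.append(best)
        weights.append(float(w))
    return indices, weights

# Usage: encode a synthetic reflection signal as a short list of (index, weight) pairs,
# which would then be quantized, modulated and transmitted between the nodes.
rng = np.random.default_rng(3)
D = rng.normal(size=(256, 512))
D /= np.linalg.norm(D, axis=0)                     # unit-norm atoms
true_idx = [17, 230, 401]
signal = D[:, true_idx] @ np.array([1.5, -0.8, 0.6])
idx, w = matching_pursuit(signal, D, n_atoms=3)
reconstruction = D[:, idx] @ np.array(w)
print(idx, "relative error:", np.linalg.norm(signal - reconstruction) / np.linalg.norm(signal))
```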

Keywords: Data compression, ultrasonic communication, guided waves, FEM analysis.

PDF Downloads: 295
118 Privacy Concerns and Law Enforcement Data Collection to Tackle Domestic and Sexual Violence

Authors: Francesca Radice

Abstract:

It has been observed that violent or coercive behaviour can be apparent from initial conversations on dating apps like Tinder. Child pornography, stalking and coercive control are among the criminal offences associated with dating apps, and women have been murdered after finding partners through Tinder. Police databases and predictive policing are novel approaches taken to prevent crime before harm is done. This research will investigate how police databases can be used in a privacy-preserving way to characterise users in terms of their potential for violent crime. Using the COPS database of NSW Police, we will explore how a past criminal record can be interpreted to yield a category of potential danger for each dating app user. It is up to the judgement of each subscriber what degree of potential danger they are prepared to accept. Sentiment analysis is an area where research into natural language processing has made great progress over the last decade. This research will investigate how sentiment analysis can be used to interpret exchanges between dating app users to detect manipulative or coercive sentiments. These can be used to alert law enforcement if they continue for a defined number of communications. One of the potential problems of this approach is the prejudice a categorisation can cause. Another drawback is the possibility of misinterpreting communications and involving law enforcement without reason. The approach will be thoroughly tested with cross-checks by human readers who verify both the level of danger predicted by the interpretation of the criminal record and the sentiment detected from personal messages. Even if only a few violent crimes can be prevented, the approach will have a tangible value for real people.

Keywords: Sentiment Analysis, data mining, predictive policing, virtual manipulation.

PDF Downloads: 75
117 In vitro Study of Laser Diode Radiation Effect on the Photo-Damage of MCF-7 and MCF-10A Cell Clusters

Authors: A. Dashti, M. Eskandari, L. Farahmand, P. Parvin, A. Jafargholi

Abstract:

Breast cancer is one of the most significant diseases in the United States and other countries and is the second leading cause of death in women. Common breast cancer treatments lead to adverse side effects such as loss of hair, nausea and weakness. These complications arise because the treatments damage some healthy cells while eliminating the cancer cells. In an effort to address these complications, laser radiation was utilized and tested as a targeted treatment for breast cancer. In this regard, tissue engineering approaches are employed, using an electrospun scaffold in order to facilitate the growth of breast cancer cells. Polycaprolactone (PCL) was used as the scaffold material because of its biocompatibility, biodegradability and support of cell growth. The specific breast cancer cells have the ability to create a three-dimensional cell cluster due to the spontaneous accumulation of cells in the porosity of the scaffold under certain conditions; therefore, we are looking for a higher density of porosity and a larger pore size. The fibers showed a uniform diameter distribution, and the final scaffold had optimum characteristics with approximately 40% porosity. Images were taken by SEM, and the density and size of the porosity were determined from the images. After preparation, the scaffold was cross-linked with glutaraldehyde and then washed with glycine and phosphate buffered saline (PBS) in order to neutralize the residual glutaraldehyde. 3-(4,5-Dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay results showed approximately 91.13% viability of the cancer cells on the scaffolds. In order to create a cluster, Michigan Cancer Foundation-7 (MCF-7, breast cancer cell line) and Michigan Cancer Foundation-10A (MCF-10A, human mammary epithelial cell line) cells were cultured on the scaffold in a 24-well plate for five days. The clusters were then exposed to 808 nm laser diode radiation to investigate the effect of the laser on the tumor at different powers and exposure times. Under the same conditions, the cancer cells lost their viability more than the healthy ones. In conclusion, laser therapy is a viable method to destroy the target cells with minimal effect on healthy tissues and cells, and it can mitigate the limitations of other cancer treatment methods.

Keywords: Breast cancer, electrospun scaffold, polycaprolacton, laser diode, cancer treatment.

PDF Downloads: 742
116 The Effect of Main Factors on Forces during FSJ Processing of AA2024 Aluminum

Authors: Dunwen Zuo, Yongfang Deng, Bo Song

Abstract:

An attempt is made here to measure the forces in three directions, under conditions of different feed speeds and different tilt angles of the tool, with and without a pin on the tool, by using an octagonal ring dynamometer in the AA2024 aluminum FSJ (Friction Stir Joining) process, and to investigate how four main factors influence the forces in the FSJ process. It is found that a high feed speed leads to a small feed force and a small lateral force, but leads to a large feed force in the stable joining stage of the process. As the rotational speed increases, the time required for the axial force to drop from the maximum to the minimum increases in the push-up process. In the stable joining stage, the rotational speed has little effect on the feed force; a large rotational speed leads to a small lateral force and a small axial force. The maximum axial force increases as the tilt angle of the tool increases in the downward movement stage. At the moment feeding starts, as the tilt angle of the tool increases, the amplitude of the increase in axial force becomes large. In the stable joining stage, with the increase of the tilt angle of the tool, the axial force increases, the lateral force decreases, and the feed force remains almost unchanged. The tool with a pin decreases the axial force in the downward movement stage. In the stable joining stage, using the tool with a pin rather than the tool without a pin increases the feed force and the lateral force but reduces the axial force.

Keywords: FSJ, force factor, AA2024, friction stir joining.

PDF Downloads: 1080
115 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principals in Noisy Environments

Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic

Abstract:

Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in "weight space", where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single district. Each speaker has 10 sentences; two are used for training and 8 for testing. Atomic index probabilities are created for each training sentence and also for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and those from the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR). Testing is performed at SNRs of 0 dB, 5 dB, 10 dB and 30 dB. The algorithm has a baseline classification accuracy of ~93%, averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN and remains at ~93% at 0 dB SNR.
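
The classification stage described above, nearest Euclidean distance between atomic-index probability vectors, can be sketched as follows; the random index sequences stand in for actual matching-pursuit output on TIMIT sentences.

```python
import numpy as np

def index_probabilities(atom_indices, dictionary_size):
    """Normalized histogram of the atomic indices used by a decomposed sentence."""
    counts = np.bincount(atom_indices, minlength=dictionary_size).astype(float)
    return counts / counts.sum()

def classify(test_profile, speaker_profiles):
    """Return the speaker whose training profile is closest in Euclidean distance."""
    distances = {s: np.linalg.norm(test_profile - p) for s, p in speaker_profiles.items()}
    return min(distances, key=distances.get)

# Usage: two hypothetical speakers with different preferred regions of a 64-atom dictionary.
rng = np.random.default_rng(4)
D_SIZE = 64
train_a = rng.integers(0, 32, size=400)           # speaker A favors the low-index atoms
train_b = rng.integers(32, 64, size=400)          # speaker B favors the high-index atoms
profiles = {"A": index_probabilities(train_a, D_SIZE),
            "B": index_probabilities(train_b, D_SIZE)}
test_sentence = rng.integers(0, 32, size=120)     # unseen sentence from speaker A
print(classify(index_probabilities(test_sentence, D_SIZE), profiles))  # -> "A"
```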

Keywords: Time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder.

PDF Downloads: 1516