Search results for: statistical signal processing.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3603

2433 A Smart-Visio Microphone for Audio-Visual Speech Recognition “VMIKE”

Authors: Y. Ni, K. Sebri

Abstract:

The practical implementation of audio-video coupled speech recognition systems is mainly limited by the hardware complexity of integrating two radically different information-capturing devices with good temporal synchronisation. In this paper, we propose a solution based on a smart CMOS image sensor in order to simplify the hardware integration difficulties. By using on-chip image processing, this smart sensor can calculate in real time the X/Y projections of the captured image. This on-chip projection considerably reduces the volume of the output data. This data-volume reduction permits transmission of the condensed visual information over the same audio channel, using the stereophonic input available on most standard computing devices such as PCs, PDAs and mobile phones. A prototype called VMIKE (Visio-Microphone) has been designed and realised in a standard 0.35 µm CMOS technology. A preliminary experiment gives encouraging results. Its efficiency will be further investigated in a large variety of applications such as biometrics, speech recognition in noisy environments, and vocal control for military uses or for disabled persons.
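
The on-chip operation described above amounts to computing the X/Y projections of each frame, i.e. the column and row sums of pixel intensities. As a rough software illustration of the resulting data-volume reduction (not the authors' sensor design), a minimal sketch:

    import numpy as np

    def xy_projections(frame: np.ndarray):
        """Return (x_projection, y_projection) of a grayscale frame.

        The X projection sums intensities over rows (one value per column);
        the Y projection sums over columns (one value per row)."""
        x_proj = frame.sum(axis=0)   # length = number of columns
        y_proj = frame.sum(axis=1)   # length = number of rows
        return x_proj, y_proj

    # A 288x352 frame (101,376 pixels) condenses to 288 + 352 = 640 values,
    # small enough to multiplex onto an audio-rate channel.
    frame = np.random.randint(0, 256, (288, 352), dtype=np.uint16)
    x_proj, y_proj = xy_projections(frame)
    print(x_proj.shape, y_proj.shape)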

Keywords: Audio-visual speech recognition, CMOS smart sensor, on-chip image processing.

2432 Topographic Mapping of Farmland by Integration of Multiple Sensors on Board Low-Altitude Unmanned Aerial System

Authors: Mengmeng Du, Noboru Noguchi, Hiroshi Okamoto, Noriko Kobayashi

Abstract:

This paper introduces a topographic mapping system with time-saving and simplicity advantages, based on the integration of Light Detection and Ranging (LiDAR) data and Post Processing Kinematic Global Positioning System (PPK GPS) data. The topographic mapping system used a low-altitude Unmanned Aerial Vehicle (UAV) as a platform to conduct land survey in a low-cost, efficient, and fully autonomous manner. An experiment on a small-scale sugarcane farmland was conducted in Queensland, Australia. Subsequently, we synchronized LiDAR distance measurements, corrected using attitude information from a gyroscope, with PPK GPS coordinates to generate precision topographic maps, which could be further utilized for applications such as precise land leveling and drainage management. The results indicated that the LiDAR distance measurements and PPK GPS altitude achieved good accuracy, with errors of less than 0.015 m.
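
As a simplified sketch of how the three data streams might be fused into a terrain elevation (the exact correction used in the paper is not reproduced here; the downward-looking LiDAR geometry is our assumption):

    import numpy as np

    def ground_elevation(gps_alt, lidar_range, roll, pitch):
        """Estimate terrain elevation from UAV sensors (angles in radians).

        Assumes the LiDAR points straight down in the body frame; the attitude
        correction projects the slant range onto the vertical axis."""
        vertical_range = lidar_range * np.cos(roll) * np.cos(pitch)
        return gps_alt - vertical_range

    # Example: 30.0 m PPK altitude, 12.4 m LiDAR range, small roll/pitch from the gyroscope.
    print(ground_elevation(30.0, 12.4, np.radians(2.0), np.radians(-1.5)))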

Keywords: Land survey, light detection and ranging, post processing kinematic global positioning system, precision agriculture, topographic map, unmanned aerial vehicle.

2431 Image Processing Approach for Detection of Three-Dimensional Tree-Rings from X-Ray Computed Tomography

Authors: Jorge Martinez-Garcia, Ingrid Stelzner, Joerg Stelzner, Damian Gwerder, Philipp Schuetz

Abstract:

Tree-ring analysis is an important part of the quality assessment and the dating of (archaeological) wood samples. It provides quantitative data about the whole anatomical ring structure, which can be used, for example, to measure the impact of the fluctuating environment on the tree growth, for the dendrochronological analysis of archaeological wooden artefacts and to estimate the wood mechanical properties. Despite advances in computer vision and edge recognition algorithms, detection and counting of annual rings are still limited to 2D datasets and performed in most cases manually, which is a time consuming, tedious task and depends strongly on the operator’s experience. This work presents an image processing approach to detect the whole 3D tree-ring structure directly from X-ray computed tomography imaging data. The approach relies on a modified Canny edge detection algorithm, which captures fully connected tree-ring edges throughout the measured image stack and is validated on X-ray computed tomography data taken from six wood species.
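
The approach builds on the Canny edge detector applied slice by slice through the CT stack. A minimal sketch of the unmodified Canny step with OpenCV (the authors' modifications for fully connected ring edges are not reproduced here, and thresholds are illustrative):

    import numpy as np
    import cv2

    def ring_edges(ct_stack, low=30, high=90):
        """Apply Canny edge detection to every slice of an 8-bit CT image stack.

        ct_stack: 3D array (slices, rows, cols); returns a binary edge stack."""
        edges = np.zeros_like(ct_stack)
        for i, ct_slice in enumerate(ct_stack):
            blurred = cv2.GaussianBlur(ct_slice, (5, 5), 0)   # suppress noise first
            edges[i] = cv2.Canny(blurred, low, high)
        return edges

    stack = np.random.randint(0, 256, (10, 256, 256), dtype=np.uint8)
    print(ring_edges(stack).shape)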

Keywords: Ring recognition, edge detection, X-ray computed tomography, dendrochronology.

2430 Spatial Clustering Model of Vessel Trajectory to Extract Sailing Routes Based on AIS Data

Authors: Lubna Eljabu, Mohammad Etemad, Stan Matwin

Abstract:

The automatic extraction of shipping routes is advantageous for intelligent traffic management systems to identify events and support decision-making in maritime surveillance. At present, there is a high demand for the extraction of maritime traffic networks that accurately resemble the real traffic of vessels, which is valuable for further analytical processing tasks on vessel trajectories (e.g., naval routing and voyage planning, anomaly detection, destination prediction, time of arrival estimation). With the help of big data and processing huge amounts of vessels’ trajectory data, it is possible to learn these shipping routes from the navigation history of past behaviour of other, similar ships that were travelling in a given area. In this paper, we propose a spatial clustering model of vessels’ trajectories (SPTCLUST) to extract spatial representations of sailing routes from historical Automatic Identification System (AIS) data. The whole model consists of three main parts: data preprocessing, path finding, and route extraction, which consists of clustering and representative trajectory extraction. The proposed clustering method provides techniques to overcome the problems of: (i) optimal input parameter selection; (ii) the high complexity of processing a huge volume of multidimensional data; and (iii) the spatial representation of complete representative trajectory detection in the context of trajectory clustering algorithms. The experimental evaluation showed the effectiveness of the proposed model by using a real-world AIS dataset from the Port of Halifax. The results contribute to further understanding of shipping route patterns. This could aid surveillance authorities in stable and sustainable vessel traffic management.
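
The SPTCLUST model itself is not reproduced here; as a minimal illustration of the general idea of spatially clustering AIS position reports, a density-based clustering of waypoint coordinates with scikit-learn on hypothetical data:

    import numpy as np
    from sklearn.cluster import DBSCAN

    # Hypothetical AIS waypoints as (latitude, longitude) pairs.
    points = np.array([
        [44.65, -63.57], [44.66, -63.58], [44.65, -63.56],   # near the port
        [44.70, -63.50], [44.71, -63.51],                     # along one route
        [45.10, -62.90],                                      # isolated report
    ])

    # eps is in degrees here; a production system would project to a metric
    # coordinate system or use haversine distances instead.
    labels = DBSCAN(eps=0.05, min_samples=2).fit_predict(points)
    print(labels)   # -1 marks noise, other integers mark route clusters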

Keywords: Vessel trajectory clustering, trajectory mining, spatial clustering, marine intelligent navigation, maritime traffic network extraction, sailing routes extraction.

2429 Embedding a Large Amount of Information Using High Secure Neural Based Steganography Algorithm

Authors: Nameer N. EL-Emam

Abstract:

In this paper, we construct and implement a new steganography algorithm based on a learning system to hide a large amount of information in a color BMP image. We have used adaptive image filtering and adaptive non-uniform image segmentation with bit replacement on the appropriate pixels. These pixels are selected randomly rather than sequentially, using a new concept defined by main cases with sub-cases for each byte in one pixel. From the design steps, we have derived 16 main cases with their sub-cases that cover all aspects of embedding the input information into a color bitmap image. High security has been proposed through four layers of security to make it difficult to break the encryption of the input information and to confuse steganalysis. A learning system has been introduced at the fourth layer of security through a neural network. This layer is used to increase the difficulty of statistical attacks. Our results against statistical and visual attacks are discussed before and after using the learning system, and we make a comparison with a previous steganography algorithm. We show that our algorithm can efficiently embed a large amount of information, up to 75% of the image size (replacing at most 18 bits per pixel), with high quality of the output.
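
The paper's algorithm selects pixels and bit positions adaptively through its case structure and neural layer; the sketch below only illustrates the basic bits-replacement idea on a color image array (a plain sequential LSB embed, not the proposed adaptive/neural scheme):

    import numpy as np

    def embed_bits(cover: np.ndarray, payload_bits, n_lsb=2):
        """Replace the n_lsb least significant bits of each byte with payload bits.

        cover: uint8 array (H, W, 3) from a BMP image; payload_bits: iterable of 0/1."""
        stego = cover.copy().reshape(-1)
        bits = list(payload_bits)
        if len(bits) > stego.size * n_lsb:
            raise ValueError("payload exceeds capacity")
        for i in range(0, len(bits), n_lsb):
            chunk = bits[i:i + n_lsb]
            value = int("".join(map(str, chunk)), 2)
            mask = 0xFF ^ ((1 << len(chunk)) - 1)      # clear the low bits, keep the rest
            byte_index = i // n_lsb
            stego[byte_index] = (stego[byte_index] & mask) | value
        return stego.reshape(cover.shape)

    cover = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
    stego = embed_bits(cover, [1, 0, 1, 1, 0, 0], n_lsb=2)
    print(np.count_nonzero(cover != stego))   # number of modified bytes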

Keywords: Adaptive image segmentation, hiding with high capacity, hiding with high security, neural networks, Steganography.

2428 Numerical Investigation of the Chilling of Food Products by Air-Mist Spray

Authors: Roy J. Issa

Abstract:

Spray chilling using air-mist nozzles has received much attention in the food processing industry because of the benefits it has shown over forced air convection. These benefits include an increase in the heat transfer coefficient and a reduction in the water loss by the product during cooling. However, few studies have simulated the heat transfer and aerodynamics phenomena of the air-mist chilling process for optimal operating conditions. The study provides insight into the optimal conditions for spray impaction, heat transfer efficiency and control of surface flooding. A computational fluid dynamics model using a two-phase flow composed of water droplets injected with air is developed to simulate the air-mist chilling of food products. The model takes into consideration droplet-to-surface interaction, water-film accumulation and surface runoff. The results of this study lead to a better understanding of heat transfer enhancement and water conservation, and give a clear direction for the optimal design of air-mist chilling systems that can be used in commercial applications in the food and meat processing industries.

Keywords: Droplets impaction efficiency, Droplet size, Heat transfer enhancement factor, Water runoff.

2427 Named Entity Recognition using Support Vector Machine: A Language Independent Approach

Authors: Asif Ekbal, Sivaji Bandyopadhyay

Abstract:

Named Entity Recognition (NER) aims to classify each word of a document into predefined target named entity classes and is nowadays considered to be fundamental for many Natural Language Processing (NLP) tasks such as information retrieval, machine translation, information extraction, question answering systems and others. This paper reports on the development of a NER system for Bengali and Hindi using Support Vector Machines (SVM). Though this state-of-the-art machine learning technique has been widely applied to NER in several well-studied languages, its use for Indian languages (ILs) is very new. The system makes use of the different contextual information of the words along with a variety of features that are helpful in predicting the four different named entity (NE) classes: Person name, Location name, Organization name and Miscellaneous name. We have used annotated corpora of 122,467 tokens of Bengali and 502,974 tokens of Hindi tagged with twelve different NE classes, defined as part of the IJCNLP-08 NER Shared Task for South and South East Asian Languages (SSEAL). In addition, we have manually annotated 150K wordforms of the Bengali news corpus, developed from the web-archive of a leading Bengali newspaper. We have also developed an unsupervised algorithm in order to generate lexical context patterns from a part of the unlabeled Bengali news corpus. Lexical patterns have been used as features of the SVM in order to improve the system performance. The NER system has been tested with gold standard test sets of 35K and 60K tokens for Bengali and Hindi, respectively. Evaluation results have demonstrated recall, precision, and f-score values of 88.61%, 80.12%, and 84.15%, respectively, for Bengali and 80.23%, 74.34%, and 77.17%, respectively, for Hindi. Results show an improvement in the f-score by 5.13% with the use of context patterns. A statistical analysis (ANOVA) is also performed to compare the performance of the proposed NER system with that of the existing HMM-based system for both languages.
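
The exact feature set and SVM toolkit used in the paper are not reproduced here; a minimal sketch of contextual-window features fed to a linear SVM with scikit-learn, on hypothetical toy data:

    from sklearn.feature_extraction import DictVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline

    def word_features(tokens, i):
        """Contextual features for the i-th token: the word itself and its neighbours."""
        return {
            "word": tokens[i].lower(),
            "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
            "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
            "is_title": tokens[i].istitle(),
        }

    # Hypothetical training sentence with per-token NE tags.
    tokens = ["Rabindranath", "Tagore", "was", "born", "in", "Kolkata", "."]
    tags   = ["B-PER", "I-PER", "O", "O", "O", "B-LOC", "O"]

    X = [word_features(tokens, i) for i in range(len(tokens))]
    model = make_pipeline(DictVectorizer(), LinearSVC())
    model.fit(X, tags)
    print(model.predict([word_features(tokens, 5)]))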

Keywords: Named Entity (NE), Named Entity Recognition (NER), Support Vector Machine (SVM), Bengali, Hindi.

2426 Modelling a Hospital as a Queueing Network: Analysis for Improving Performance

Authors: Emad Alenany, M. Adel El-Baz

Abstract:

In this paper, the flow of different classes of patients into a hospital is modelled and analyzed by using the queueing network analyzer (QNA) algorithm and discrete event simulation. Input data for QNA are the rate and variability parameters of the arrival and service times in addition to the number of servers in each facility. The patient flows closely match the real flows of a hospital in Egypt. Based on the analysis of the waiting times, two approaches are suggested for improving performance: separating patients into service groups, and adopting different service policies for sequencing patients through hospital units. Separating a specific group of patients with a higher performance target, to be served apart from the rest of the patients with a lower performance target, requires the same capacity while improving performance for the selected group. Besides, it is shown that adopting the shortest processing time and shortest remaining processing time service policies, among other tested policies, would result in, respectively, 11.47% and 13.75% reductions in average waiting time relative to the first-come-first-served policy.
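
As a small illustration of why the sequencing policy matters (not the paper's QNA model or its hospital data), a single-server comparison of first-come-first-served against shortest-processing-time ordering for jobs that are all present at time zero:

    def average_waiting_time(service_times):
        """Mean waiting time when jobs are served in the given order (all arrive at t=0)."""
        waits, elapsed = [], 0.0
        for s in service_times:
            waits.append(elapsed)   # a job waits for everything served before it
            elapsed += s
        return sum(waits) / len(waits)

    service = [30, 5, 12, 3, 20]                      # minutes per patient, hypothetical
    fcfs = average_waiting_time(service)              # arrival order
    spt = average_waiting_time(sorted(service))       # shortest processing time first
    print(f"FCFS: {fcfs:.1f} min, SPT: {spt:.1f} min")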

Keywords: Queueing network, discrete-event simulation, health applications, SPT.

2425 Medical Image Fusion Based On Redundant Wavelet Transform and Morphological Processing

Authors: P. S. Gomathi, B. Kalaavathi

Abstract:

The process in which the complementary information from multiple images is integrated to provide a composite image that contains more information than the original input images is called image fusion. Medical image fusion provides useful information from multimodality medical images, giving the doctor additional information for a better diagnosis of diseases. This paper presents a wavelet-based medical image fusion algorithm for different multimodality medical images. To fuse the medical images, the images are decomposed using the Redundant Wavelet Transform (RWT). The high frequency coefficients are convolved with a morphological operator followed by the maximum-selection (MS) rule. The low frequency coefficients are processed by the MS rule. The reconstructed image is obtained by the inverse RWT. The quantitative measures considered for evaluating the fused images include mean, standard deviation, average gradient, spatial frequency, and edge-based similarity measures. The performance of the proposed method is compared with pixel averaging, PCA, and DWT fusion methods. When compared with conventional methods, the proposed framework provides better performance for the analysis of multimodality medical images.
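
A minimal sketch of the fusion rule described above, using the stationary (redundant) wavelet transform from PyWavelets, morphological dilation of the detail coefficients, and maximum selection; the wavelet, structuring element and interpretation of the rule are illustrative assumptions, not the paper's exact settings:

    import numpy as np
    import pywt
    from scipy.ndimage import grey_dilation

    def fuse(img_a, img_b, wavelet="db2"):
        """Fuse two registered grayscale images with a one-level redundant wavelet transform."""
        (ca_a, det_a), = pywt.swt2(img_a, wavelet, level=1)
        (ca_b, det_b), = pywt.swt2(img_b, wavelet, level=1)

        ca_f = np.maximum(ca_a, ca_b)                      # MS rule on low-frequency band
        det_f = tuple(                                     # dilate, then pick the stronger detail
            np.where(grey_dilation(np.abs(da), size=(3, 3)) >=
                     grey_dilation(np.abs(db), size=(3, 3)), da, db)
            for da, db in zip(det_a, det_b)
        )
        return pywt.iswt2([(ca_f, det_f)], wavelet)

    a = np.random.rand(256, 256)
    b = np.random.rand(256, 256)
    print(fuse(a, b).shape)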

Keywords: Discrete Wavelet Transform (DWT), Image Fusion, Morphological Processing, Redundant Wavelet Transform (RWT).

2424 Verification and Proposal of Information Processing Model Using EEG-Based Brain Activity Monitoring

Authors: Toshitaka Higashino, Naoki Wakamiya

Abstract:

Human beings perform a task by perceiving information from outside, recognizing it, and responding to it. There have been various attempts to analyze and understand the internal processes behind the reaction to a given stimulus by conducting psychological experiments and analysis from multiple perspectives. Among these, we focused on the Model Human Processor (MHP). However, it was built from psychological experiments, and thus its relation to brain activity has so far been unclear. To verify the validity of the MHP and propose our own model from a viewpoint of neuroscience, EEG (electroencephalography) measurements are performed during experiments in this study. More specifically, first, experiments were conducted where Latin alphabet characters were used as visual stimuli. In addition to response time, ERPs (event-related potentials) such as N100 and P300 were measured by using EEG. By comparing the cycle time predicted by the MHP and the latency of the ERPs, it was found that N100, related to perception of stimuli, appeared at the end of the perceptual processor. Furthermore, by conducting an additional experiment, it was revealed that P300, related to decision making, appeared during the response decision process, not at the end. Second, by experiments using Japanese Hiragana characters, i.e., Japan's own phonetic symbols, those findings were confirmed. Finally, Japanese Kanji characters were used as more complicated visual stimuli. A Kanji character usually has several readings and several meanings. Despite the difference, a reading-related task and a meaning-related task exhibited similar results, meaning that they involved similar information processing in the brain. Based on those results, we proposed our model, which reflects response time and ERP latency. It consists of three processors: the perception processor, from the input of a stimulus to the appearance of N100; the cognitive processor, from N100 to P300; and the decision-action processor, from P300 to response. Using our model, an application system which reflects brain activity can be established.
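
The ERP latencies discussed above are obtained by averaging EEG epochs time-locked to the stimulus and locating component peaks. A minimal sketch with synthetic data (not the authors' recordings or preprocessing pipeline; the sampling rate and windows are assumptions):

    import numpy as np

    fs = 500                          # sampling rate in Hz (assumed)
    t = np.arange(-0.1, 0.6, 1 / fs)  # epoch window around stimulus onset

    # Hypothetical epochs: 40 trials, a negative deflection near 100 ms plus noise.
    rng = np.random.default_rng(0)
    component = -4.0 * np.exp(-((t - 0.1) ** 2) / (2 * 0.01 ** 2))
    epochs = component + rng.normal(0, 2.0, (40, t.size))

    erp = epochs.mean(axis=0)         # averaging cancels activity not locked to the stimulus

    # N100 latency: most negative deflection in the 80-150 ms window.
    window = (t >= 0.08) & (t <= 0.15)
    n100_latency = t[window][np.argmin(erp[window])]
    print(f"N100 latency: {n100_latency * 1000:.0f} ms")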

Keywords: Brain activity, EEG, information processing model, model human processor.

2423 Detecting Earnings Management via Statistical and Neural Network Techniques

Authors: Mohammad Namazi, Mohammad Sadeghzadeh Maharluie

Abstract:

Predicting earnings management is vital for capital market participants, financial analysts and managers. The aim of this research is to answer the following question: Is there a significant difference between the regression model and neural network models in predicting earnings management, and which one leads to a superior prediction? In approaching this question, a Linear Regression (LR) model was compared with two neural networks, a Multi-Layer Perceptron (MLP) and a Generalized Regression Neural Network (GRNN). The population of this study includes 94 listed companies in the Tehran Stock Exchange (TSE) market from 2003 to 2011. After the results of all models were acquired, ANOVA was applied to test the hypotheses. In general, the summary of statistical results showed that the precision of GRNN did not exhibit a significant difference in comparison with MLP. In addition, the mean square errors of the MLP and GRNN differed significantly from that of the multivariable LR model. These findings support the notion of nonlinear behavior of earnings management. Therefore, it is more appropriate for capital market participants to analyze earnings management based upon neural network techniques, rather than adopting linear regression models.

Keywords: Earnings management, generalized regression neural networks, linear regression, multi-layer perceptron, Tehran stock exchange.

2422 A Development of Creative Instruction Model through Digital Media

Authors: Kathaleeya Chanda, Panupong Chanplin, Suppara Charoenpoom

Abstract:

The purposes of the development of a creative instruction model through digital media are to: 1) enable learners to learn from an instruction media application; 2) help learners implement instruction media correctly and appropriately; and 3) facilitate learners in applying technology for searching information and practicing skills to implement technology creatively. The sample group consists of 130 secondary students studying in Bo Kluea School, Bo Kluea Nuea Sub-district, Bo Kluea District, Nan Province. Probability sampling was carried out through simple random sampling, and the statistics used in this research are percentage, mean, and standard deviation, with a one-group pretest-posttest design. The findings are summarized as follows: the congruence index of the instruction media for occupation and technology subjects is appropriate. By comparing learning achievements before and after implementing the instruction media, it is found that the posttest achievements are higher than the pretest achievements with statistical significance at the .05 level. For the learning achievements from instruction media implementation, the pretest mean is 16.24 while the posttest mean is 26.28. Besides, when pretest and posttest results are compared and the differences of means are tested, the test results show that the posttest achievements are higher than the pretest achievements with statistical significance at the .05 level. This can be interpreted as the learners achieving better learning progress.

Keywords: Teaching learning model, digital media, creative instruction model, facilitate learners.

2421 Data Transformation Services (DTS): Creating Data Mart by Consolidating Multi-Source Enterprise Operational Data

Authors: J. D. D. Daniel, K. N. Goh, S. M. Yusop

Abstract:

Trends in business intelligence, e-commerce and remote access make it necessary and practical to store data in different ways on multiple systems with different operating systems. As businesses evolve and grow, they require efficient computerized solutions to perform data updates and to access data from diverse enterprise business applications. The objective of this paper is to demonstrate the capability of DTS [1] as a database solution for automatic data transfer and update in solving a business problem. This DTS package is developed for the sale of a variety of plants, a business which eventually expanded into commercial supply and landscaping. Dimensional data modeling is used in the DTS package to extract, transform and load data from heterogeneous database systems such as MySQL, Microsoft Access and Oracle, consolidating it into a Data Mart residing in SQL Server. The data transfer from the various databases is scheduled to run automatically every quarter of the year to support efficient sales analysis. Therefore, DTS is an attractive solution for automatic data transfer and update that meets today's business needs.

Keywords: Data Transformation Services (DTS), Object Linking and Embedding Database (OLE DB), Data Mart, Online Analytical Processing (OLAP), Online Transactional Processing (OLTP).

2420 Investigation of Anti-diabetic and Hypocholesterolemic Potential of Psyllium Husk Fiber (Plantago psyllium) in Diabetic and Hypercholesterolemic Albino Rats

Authors: Ishtiaq Ahmed, Muhammad Naeem, Abdul Shakoor, Zaheer Ahmed, Hafiz Muhammad Nasir Iqbal

Abstract:

The present study was conducted to observe the effect of Plantago psyllium on blood glucose and cholesterol levels in normal and alloxan-induced diabetic rats. To investigate the effect of Plantago psyllium, 40 rats were included in this study, divided into four groups of ten rats each. Group A was normal, group B was diabetic, group C was non-diabetic and hypercholesterolemic, and group D was diabetic and hypercholesterolemic. Groups B and D were made diabetic by intraperitoneal injection of alloxan dissolved in 1 mL of distilled water at a dose of 125 mg/kg of body weight. Groups C and D were made hypercholesterolemic by oral administration of powdered cholesterol (1 g/kg of body weight). Blood samples from all the rats were collected from the coccygeal vein on the 1st, 21st and 42nd day. All the samples were analyzed for blood glucose and cholesterol levels by using enzymatic kits. The blood glucose and cholesterol levels of the treated groups of rats showed significant reductions after 7 weeks of treatment with Plantago psyllium. Statistical analysis of the results showed that Plantago psyllium has anti-diabetic and hypocholesterolemic activity in diabetic and hypercholesterolemic albino rats.

Keywords: Albino rats, alloxan, Plantago psyllium, statistical analysis

2419 Digital Forensics Compute Cluster: A High Speed Distributed Computing Capability for Digital Forensics

Authors: Daniel Gonzales, Zev Winkelman, Trung Tran, Ricardo Sanchez, Dulani Woods, John Hollywood

Abstract:

We have developed a distributed computing capability, Digital Forensics Compute Cluster (DFORC2) to speed up the ingestion and processing of digital evidence that is resident on computer hard drives. DFORC2 parallelizes evidence ingestion and file processing steps. It can be run on a standalone computer cluster or in the Amazon Web Services (AWS) cloud. When running in a virtualized computing environment, its cluster resources can be dynamically scaled up or down using Kubernetes. DFORC2 is an open source project that uses Autopsy, Apache Spark and Kafka, and other open source software packages. It extends the proven open source digital forensics capabilities of Autopsy to compute clusters and cloud architectures, so digital forensics tasks can be accomplished efficiently by a scalable array of cluster compute nodes. In this paper, we describe DFORC2 and compare it with a standalone version of Autopsy when both are used to process evidence from hard drives of different sizes.

Keywords: Cloud computing, cybersecurity, digital forensics, Kafka, Kubernetes, Spark.

2418 Statistical Analysis of Parameters Effects on Maximum Strain and Torsion Angle of FRP Honeycomb Sandwich Panels Subjected to Torsion

Authors: Mehdi Modabberifar, Milad Roodi, Ehsan Souri

Abstract:

In recent years, honeycomb fiber reinforced plastic (FRP) sandwich panels have been increasingly used in various industries. Low weight, low price and high mechanical strength are the benefits of these structures. However, their mechanical properties and behavior have not been fully explored. The objective of this study is to conduct a combined numerical-statistical investigation of honeycomb FRP sandwich beams subjected to torsion loading. In this paper, the effect of the geometric parameters of the sandwich panel on the maximum shear strain in both face and core and on the angle of torsion of a honeycomb FRP sandwich structure in torsion is investigated. The effects of parameters including core thickness, face skin thickness, cell shape, cell size, and cell thickness on the mechanical behavior of the structure were numerically investigated. The main effects of the factors are considered in this paper and regression equations were derived. The Taguchi method was employed as the experimental design, and an optimum parameter combination for maximum structure stiffness has been obtained. The results showed that cell size and face skin thickness have the most significant impacts on the torsion angle and on the maximum shear strain in face and core.

Keywords: Finite element, honeycomb FRP sandwich panel, torsion, civil engineering.

2417 Physical Activity and Cognitive Functioning Relationship in Children

Authors: Comfort Mokgothu

Abstract:

This study investigated the relation between information processing and fitness level of active (fit) and sedentary (unfit) children drawn from rural and urban areas in Botswana. It was hypothesized that fit children would display faster simple reaction times (SRT), choice reaction times (CRT) and simple movement times (SMT). Sixty third-grade children (7.0-9.0 years) were initially selected and, based upon fitness testing, 45 participated in the study (15 each of fit urban, unfit urban, fit rural). All children completed anthropometric measures, skinfold testing and submaximal cycle ergometer testing. The cognitive testing included SRT, CRT, SMT, Choice Movement Time (CMT), and memory sequence length. Results indicated that the rural fit group exhibited faster SMT than the urban fit and unfit groups. For CRT, both fit groups were faster than the unfit group. Collectively, the study shows that the relationship that exists between physical fitness and cognitive function amongst the elderly can tentatively be extended to the pediatric population. Physical fitness could be a factor in the speed at which we process information, including decision making, even in children.

Keywords: Decision making, fitness, information processing, reaction time, cognition, movement time.

2416 An Intelligent Nondestructive Testing System of Ultrasonic Infrared Thermal Imaging Based on Embedded Linux

Authors: Hao Mi, Ming Yang, Tian-yue Yang

Abstract:

Ultrasonic infrared nondestructive testing is a testing method with high speed, accuracy and localization capability. However, there are still some problems: detection requires manual real-time field judgment, and the methods of result storage and viewing are still primitive. An intelligent non-destructive detection system based on embedded Linux is put forward in this paper. The hardware part of the detection system is based on an ARM (Advanced RISC Machine) core, and an embedded Linux system is built to realize image processing and defect detection on thermal images. The CLAHE algorithm and the Butterworth filter are used to process the thermal image, and then the Boa web server and CGI (Common Gateway Interface) technology are used to transmit the test results to the display terminal through the network for real-time and remote monitoring. The system also saves labor and eliminates the obstacle of manual judgment. According to the experimental results, the system provides a convenient and quick solution for industrial non-destructive testing.
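
As an illustration of the two preprocessing steps named above (the parameters, file name and frequency-domain filter formulation are illustrative assumptions; the embedded implementation would differ), CLAHE with OpenCV followed by a Butterworth low-pass filter:

    import numpy as np
    import cv2

    def butterworth_lowpass(image, cutoff=30, order=2):
        """Frequency-domain Butterworth low-pass filter for a grayscale image."""
        rows, cols = image.shape
        u = np.arange(rows) - rows / 2
        v = np.arange(cols) - cols / 2
        V, U = np.meshgrid(v, u)
        d = np.sqrt(U ** 2 + V ** 2)                      # distance from the DC term
        h = 1.0 / (1.0 + (d / cutoff) ** (2 * order))     # Butterworth transfer function
        spectrum = np.fft.fftshift(np.fft.fft2(image))
        filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * h))
        return np.clip(np.abs(filtered), 0, 255).astype(np.uint8)

    thermal = cv2.imread("thermal_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(thermal)
    smoothed = butterworth_lowpass(enhanced)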

Keywords: Remote monitoring, non-destructive testing, embedded Linux system, image processing.

2415 An Implementation of MacMahon's Partition Analysis in Ordering the Lower Bound of Processing Elements for the Algorithm of LU Decomposition

Authors: Halil Snopce, Ilir Spahiu, Lavdrim Elmazi

Abstract:

Many scientific and engineering problems require the effective solution of large systems of linear equations of the form Ax = b. LU decomposition offers a good choice for solving this problem. Our approach is to find the lower bound of the number of processing elements needed for this purpose. Here we use the so-called Omega calculus as a computational method for solving problems via their corresponding Diophantine relation. From the corresponding algorithm, a system of linear Diophantine equalities is formed using the domain of computation, which is given by the set of lattice points inside a polyhedron. Then the Mathematica program DiophantineGF.m is run. This program calculates the generating function from which it is possible to find the number of solutions to the system of Diophantine equalities, which in fact gives the lower bound for the number of processors needed for the corresponding algorithm. A mathematical explanation of the problem is given as well.
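
Omega calculus and the DiophantineGF.m program are not reproduced here; as a cross-check of the underlying idea (counting lattice points that satisfy a system of linear Diophantine constraints), a brute-force enumeration over a hypothetical toy polyhedron:

    from itertools import product

    def count_solutions(bound=10):
        """Count non-negative integer points (x, y, z) with x + 2*y + 3*z <= bound
        and x >= y; a toy stand-in for the constraints derived from the algorithm."""
        count = 0
        for x, y, z in product(range(bound + 1), repeat=3):
            if x + 2 * y + 3 * z <= bound and x >= y:
                count += 1
        return count

    # The generating-function approach yields the same number without enumeration.
    print(count_solutions(10))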

Keywords: Generating function, lattice points in polyhedron, lower bound of processing elements, system of Diophantine equations, Omega calculus.

2414 A Frame Work for the Development of a Suitable Method to Find Shoot Length at Maturity of Mustard Plant Using Soft Computing Model

Authors: Satyendra Nath Mandal, J. Pal Choudhury, Dilip De, S. R. Bhadra Chaudhuri

Abstract:

The production of a plant can be measured in terms of seeds. The generation of seeds plays a critical role in our social and daily life. The fruit production which generates seeds depends on various parameters of the plant, such as shoot length, leaf number, root length, root number, etc. When the plant is growing, some leaves may be lost and some new leaves may appear, so it is very difficult to use the number of leaves to calculate the growth of the plant. It is also cumbersome to measure the number of roots and the growth in root length at several time instances continuously after a certain initial period, because roots grow deeper and deeper underground in the course of time. On the contrary, the shoot length of the plant grows over time and can be measured at different time instances. So the growth of the plant can be measured using shoot-length data recorded at different time instances after plantation. Environmental parameters like temperature, rainfall, humidity and pollution also play some role in the production of yield. Soil, crop and distance management are taken care of to produce the maximum amount of yield. Data on the growth of shoot length of some mustard plants at the initial stage (7, 14, 21 and 28 days after plantation) are available from a statistical survey by a group of scientists under the supervision of Prof. Dilip De. In this paper, the initial shoot length of Ken (one type of mustard plant) has been used as the initial data. Statistical models, fuzzy logic methods and a neural network have been tested on this mustard plant and, based on error analysis (calculation of the average error), the model with the minimum error has been selected and can be used for the assessment of shoot length at maturity. Finally, all these methods have been tested with other types of mustard plants, and the particular soft computing model with the minimum error over all types has been selected for calculating the predicted shoot-length growth data. The shoot length at the stage of maturity of all types of mustard plants has been calculated using the statistical method on the predicted shoot-length data.

Keywords: Fuzzy time series, neural network, forecasting error, average error.

2413 Using Electrical Impedance Tomography to Control a Robot

Authors: Shayan Rezvanigilkolaei, Shayesteh Vefaghnematollahi

Abstract:

Electrical impedance tomography (EIT) is a non-invasive imaging technique suitable for medical applications. This paper describes an electrical impedance tomography device with the ability to navigate a robotic arm to manipulate a target object. The design of the device includes various hardware and software sections to perform medical imaging and control the robotic arm. In the hardware section, an image is formed by 16 electrodes located around a container. This image is used to navigate a 3-DOF robotic arm to reach the exact location of the target object. The data set used to form the impedance image is obtained by repeated current injections and voltage measurements between all electrode pairs. After performing the necessary calculations to obtain the impedance, the information is transmitted to the computer. This data is then processed in MATLAB, which is interfaced with EIDORS (Electrical Impedance Tomography Reconstruction Software) to reconstruct the image from the acquired data. In the next step, the coordinates of the center of the target object are calculated using the Image Processing Toolbox (IPT) of MATLAB. Finally, these coordinates are used to calculate the angles of each joint of the robotic arm. The robotic arm moves to the desired tissue on the user's command.
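
Two of the processing steps described above, locating the target's centroid in the reconstructed image and converting coordinates into joint angles, can be sketched as follows; this is a planar two-link inverse-kinematics example under assumed link lengths, not the authors' MATLAB/EIDORS code or their 3-DOF arm geometry:

    import numpy as np

    def centroid(image):
        """Centre of mass of a reconstructed conductivity image (2D numpy array)."""
        total = image.sum()
        ys, xs = np.indices(image.shape)
        return (xs * image).sum() / total, (ys * image).sum() / total

    def two_link_ik(x, y, l1=0.12, l2=0.10):
        """Joint angles (radians) of a planar 2-link arm reaching point (x, y) in metres."""
        d2 = x ** 2 + y ** 2
        cos_q2 = (d2 - l1 ** 2 - l2 ** 2) / (2 * l1 * l2)
        q2 = np.arccos(np.clip(cos_q2, -1.0, 1.0))            # elbow angle
        q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
        return q1, q2

    img = np.zeros((16, 16)); img[4:7, 9:12] = 1.0            # hypothetical target blob
    print(centroid(img))
    print(two_link_ik(0.15, 0.05))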

Keywords: Electrical impedance tomography, EIT, Surgeon robot, image processing of Electrical impedance tomography.

2412 Customer Need Type Classification Model using Data Mining Techniques for Recommender Systems

Authors: Kyoung-jae Kim

Abstract:

Recommender systems are usually regarded as an important marketing tool in e-commerce. They use important information about users to facilitate accurate recommendation. The information includes user context such as location, time and interest for the personalization of mobile users. We can easily collect information about location and time because mobile devices communicate with the base station of the service provider. However, information about user interest cannot be easily collected because user interest cannot be captured automatically without the user's approval. User interest is usually represented as a need. In this study, we classify needs into two types according to prior research. This study investigates the usefulness of data mining techniques for classifying user need type for recommender systems. We employ several data mining techniques including artificial neural networks, decision trees, case-based reasoning, and multivariate discriminant analysis. Experimental results show that the CHAID algorithm outperforms the other models for classifying user need type. This study performs the McNemar test to examine the statistical significance of the differences in classification results. The results of the McNemar test also show that CHAID performs better than the other models with statistical significance.
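
The McNemar test used above compares two classifiers on the same cases via their disagreement counts. A small sketch with hypothetical counts (not the study's data):

    from scipy.stats import chi2

    # b: cases CHAID classified correctly and the competing model got wrong; c: the reverse.
    b, c = 34, 15

    # Continuity-corrected McNemar statistic, compared against chi-square with 1 d.f.
    statistic = (abs(b - c) - 1) ** 2 / (b + c)
    p_value = chi2.sf(statistic, df=1)
    print(f"chi2 = {statistic:.2f}, p = {p_value:.4f}")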

Keywords: Customer need type, Data mining techniques, Recommender system, Personalization, Mobile user.

2411 Automatic Tuning for a Systemic Model of Banking Originated Losses (SYMBOL) Tool on Multicore

Authors: Ronal Muresano, Andrea Pagano

Abstract:

Nowadays, mathematical/statistical applications are developed with greater complexity and accuracy. However, this precision and complexity mean that applications need more computational power in order to be executed quickly. In this sense, multicore environments play an important role in improving and optimizing the execution time of these applications. These environments allow the inclusion of more parallelism inside a node. However, taking advantage of this parallelism is not an easy task, because we have to deal with problems such as core communication, data locality, memory sizes (cache and RAM), synchronization, data dependencies in the model, etc. These issues become more important when we wish to improve the application's performance and scalability. Hence, this paper describes an optimization method developed for the Systemic Model of Banking Originated Losses (SYMBOL) tool developed by the European Commission, which is based on analyzing the application's weaknesses in order to exploit the advantages of the multicore architecture. All these improvements are done in an automatic and transparent manner with the aim of improving the performance metrics of our tool. Finally, experimental evaluations show the effectiveness of our new optimized version, in which we have achieved a considerable improvement in the execution time. The execution time has been reduced by around 96% in the best case tested, between the original serial version and the automatic parallel version.

Keywords: Algorithm optimization, Bank Failures, OpenMP, Parallel Techniques, Statistical tool.

2410 Distributional Semantics Approach to Thai Word Sense Disambiguation

Authors: Sunee Pongpinigpinyo, Wanchai Rivepiboon

Abstract:

Word sense disambiguation is one of the most important open problems in natural language processing applications such as information retrieval and machine translation. Many strategies can be employed to resolve word ambiguity with a reasonable degree of accuracy. These strategies are: knowledge-based, corpus-based, and hybrid. This paper pays attention to the corpus-based strategy that employs an unsupervised learning method for disambiguation. We report our investigation of applying Latent Semantic Indexing (LSI), an information retrieval and unsupervised learning technique, to the task of Thai noun and verb word sense disambiguation. Latent Semantic Indexing has been shown to be efficient and effective for information retrieval. For the purposes of this research, we report experiments on two Thai polysemous words, namely /hua4/ and /kep1/, which are used as representatives of Thai nouns and verbs, respectively. The results of these experiments demonstrate the effectiveness and indicate the potential of applying vector-based distributional information measures to semantic disambiguation.
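
A minimal sketch of the LSI step (a TF-IDF term-document matrix reduced by truncated SVD, then cosine similarity between an ambiguous context and sense exemplars), using hypothetical English stand-ins for the Thai examples rather than the paper's corpus:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity
    from sklearn.pipeline import make_pipeline

    # Hypothetical exemplar contexts for two senses of an ambiguous word ("head").
    sense_docs = [
        "the head of the department chaired the meeting",   # sense: leader
        "the head of state addressed the nation",           # sense: leader
        "he nodded his head and closed his eyes",            # sense: body part
    ]
    senses = ["leader", "leader", "body part"]
    query = ["she closed her eyes and shook her head"]

    lsi = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2))
    sense_vectors = lsi.fit_transform(sense_docs)
    query_vector = lsi.transform(query)

    similarities = cosine_similarity(query_vector, sense_vectors)[0]
    print("predicted sense:", senses[similarities.argmax()])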

Keywords: Distributional semantics, Latent Semantic Indexing, natural language processing, polysemous words, unsupervised learning, Word Sense Disambiguation.

2409 A New Variant of RC4 Stream Cipher

Authors: Lae Lae Khine

Abstract:

RC4 was used as the encryption algorithm in the WEP (Wired Equivalent Privacy) protocol, which is standardized for 802.11 wireless networks. A few attacks followed, indicating certain weaknesses in the design. In this paper, we propose a new variant of the RC4 stream cipher. The new version of the cipher not only appears to be more secure, but its keystream also has a large period, large complexity and good statistical properties.
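
The proposed variant is not described in enough detail here to reproduce; for reference, a compact implementation of the standard RC4 key scheduling and keystream generation that such variants modify:

    def rc4_keystream(key: bytes, length: int) -> bytes:
        """Standard RC4: key-scheduling algorithm (KSA) then PRGA keystream bytes."""
        s = list(range(256))
        j = 0
        for i in range(256):                       # KSA: key-dependent permutation of S
            j = (j + s[i] + key[i % len(key)]) % 256
            s[i], s[j] = s[j], s[i]

        i = j = 0
        out = bytearray()
        for _ in range(length):                    # PRGA: one keystream byte per step
            i = (i + 1) % 256
            j = (j + s[i]) % 256
            s[i], s[j] = s[j], s[i]
            out.append(s[(s[i] + s[j]) % 256])
        return bytes(out)

    # Encryption and decryption are both XOR with the keystream.
    plaintext = b"attack at dawn"
    stream = rc4_keystream(b"secret key", len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, stream))
    print(ciphertext.hex())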

Keywords: Cryptography, New variant, RC4, Stream Cipher.

2408 Virtual 3D Environments for Image-Based Navigation Algorithms

Authors: V. B. Bastos, M. P. Lima, P. R. G. Kurka

Abstract:

This paper concerns the creation of virtual 3D environments for the study and development of mobile robot image-based navigation algorithms and techniques, which need to operate robustly and efficiently. These algorithms can be tested physically, by conducting experiments on a prototype, or by numerical simulation. Current simulation platforms for robotic applications do not have flexible and up-to-date models for image rendering, being unable to reproduce complex light effects and materials. Thus, it is necessary to create a test platform that integrates sophisticated simulated applications of real environments for navigation with data and image processing. This work proposes the development of a high-level platform for building 3D model environments and testing image-based navigation algorithms for mobile robots. Techniques were used for applying texture and lighting effects so that the rendered images accurately represent their real-world counterparts. The application will integrate image processing scripts, trajectory control, dynamic modeling and simulation techniques for physics representation and picture rendering, with the open-source 3D creation suite Blender.

Keywords: Simulation, visual navigation, mobile robot, data visualization.

2407 FPGA Based Longitudinal and Lateral Controller Implementation for a Small UAV

Authors: Hafiz ul Azad, Dragan V.Lazic, Waqar Shahid

Abstract:

This paper presents the implementation of an attitude controller for a small UAV using a field programmable gate array (FPGA). Due to the small size constraint, a miniature, compact and computationally capable autopilot platform is needed for such systems. Moreover, a UAV autopilot has to deal with extremely adverse situations in the shortest possible time while accomplishing its mission. FPGAs have in the recent past proved themselves to be fast, parallel, real-time processing devices of compact size. This work utilizes this fact and implements different attitude controllers for a small UAV on an FPGA, using its parallel processing capabilities. The attitude controller is designed in the MATLAB/Simulink environment. The discrete version of this controller is implemented using pipelining followed by retiming, to reduce the critical path and thereby the clock period of the controller datapath. The pipelined, retimed, parallel PID controller is implemented using System Generator, an efficient rapid-prototyping and testing development tool developed by Xilinx for FPGA implementation. The improved timing performance enables the controller to react promptly to any changes in the attitude of the UAV.
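
The controller implemented on the FPGA is a discrete PID; a minimal software reference of the positional difference-equation form is sketched below (gains, sample time and the plant response are illustrative, and the pipelined/retimed hardware structure is not shown):

    class DiscretePID:
        """Positional-form discrete PID: u[k] = Kp*e[k] + Ki*Ts*sum(e) + Kd*(e[k]-e[k-1])/Ts."""

        def __init__(self, kp, ki, kd, ts):
            self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.ts
            derivative = (error - self.prev_error) / self.ts
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Pitch-hold example: drive a measured pitch angle toward 5 degrees at 50 Hz.
    pid = DiscretePID(kp=0.8, ki=0.2, kd=0.05, ts=0.02)
    pitch = 0.0
    for _ in range(5):
        command = pid.update(5.0, pitch)
        pitch += 0.1 * command          # crude stand-in for the aircraft response
    print(round(pitch, 3))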

Keywords: Field Programmable Gate Array (FPGA), Hardware Description Language (HDL), PID, pipelining, retiming, Xilinx System Generator.

2406 Machine Vision System for Automatic Weeding Strategy in Oil Palm Plantation using Image Filtering Technique

Authors: Kamarul Hawari Ghazali, Mohd. Marzuki Mustafa, Aini Hussain

Abstract:

Machine vision is an application of computer vision to automate conventional work in industry, manufacturing or any other field. Nowadays, people in the agriculture industry have embarked on research into the implementation of engineering technology in their farming activities. One of the precision farming activities that involves a machine vision system is the automatic weeding strategy. An automatic weeding strategy in an oil palm plantation could minimize the volume of herbicides sprayed onto the fields. This paper discusses an automatic weeding strategy in an oil palm plantation using a machine vision system for the detection and differential spraying of weeds. The implementation of the vision system involved the use of image processing techniques to analyze weed images in order to recognize and distinguish their types. An image filtering technique has been used to process the images, as well as a feature extraction method to classify the types of weed images. As a result, the image processing technique yields promising classification results for implementation in a machine vision system for an automated weeding strategy.

Keywords: Machine vision, Automatic Weeding Strategy, filter, feature extraction

2405 Nano-Texturing of Single Crystalline Silicon via Cu-Catalyzed Chemical Etching

Authors: A. A. Abaker Omer, H. B. Mohamed Balh, W. Liu, A. Abas, J. Yu, S. Li, W. Ma, W. El Kolaly, Y. Y. Ahmed Abuker

Abstract:

We have discovered an important technical solution that could enable new approaches to wet silicon etching, especially in the production of photovoltaic cells. Owing to its superior light-trapping and structural properties, the inverted pyramid structure outperforms conventional pyramid textures and black silicon. The traditional pyramid textures and black silicon can only be accomplished with more advanced lithography, laser processing, etc. Importantly, our data demonstrate the feasibility of an inverted pyramidal structure on silicon via one-step Cu-catalyzed chemical etching (CCCE) in Cu(NO3)2/HF/H2O2/H2O solutions. The effects of etching time and reaction temperature on surface geometry and light trapping were systematically investigated. The results show that the inverted pyramid structure has an ultra-low reflectivity of ~4.2% over the 300-1000 nm wavelength range, and that the introduction of Cu particles can significantly accelerate the dissolution of the silicon wafer. The etching and inverted pyramid structure formation mechanisms are discussed. The inverted pyramid structure, with its outstanding anti-reflectivity, has useful applications in the manufacture of industry-compatible solar cells and can have significant impacts on the industry and the wider population.

Keywords: Cu-catalyzed chemical etching, inverted pyramid nanostructured, reflection, solar cells.

2404 Hand Gesture Detection via EmguCV Canny Pruning

Authors: N. N. Mosola, S. J. Molete, L. S. Masoebe, M. Letsae

Abstract:

Hand gesture recognition is a technique used to locate, detect, and recognize a hand gesture. Detection and recognition are concepts of Artificial Intelligence (AI). AI concepts are applicable in Human Computer Interaction (HCI), Expert Systems (ES), etc. Hand gesture recognition can be used in sign language interpretation. Sign language is a visual communication tool used mostly by deaf communities and those with speech disorders. Communication barriers exist when communities with speech disorders interact with others. This research aims to build a hand recognition system for Lesotho’s Sesotho and English language interpretation. The system will help to bridge the communication problems encountered by the mentioned communities. The system has various processing modules. The modules consist of a hand detection engine, an image processing engine, feature extraction, and sign recognition. Detection is a process of identifying an object. The proposed system uses the Haar cascade detection algorithm with Canny pruning. Canny pruning applies Canny edge detection, an optimal image processing algorithm used to detect the edges of an object. The system also employs a skin detection algorithm, which performs background subtraction and computes the convex hull and the centroid to assist in the detection process. Recognition is a process of gesture classification. Template matching classifies each hand gesture in real time. The system was tested using various experiments. The results obtained show that time, distance, and light are factors that affect the rate of detection and ultimately recognition. Detection rate is directly proportional to the distance of the hand from the camera. Different lighting conditions were considered: the higher the light intensity, the faster the detection rate. Based on the results obtained from this research, the applied methodologies are efficient and provide a plausible solution towards a lightweight, inexpensive system which can be used for sign language interpretation.
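
Two of the modules above, cascade detection with Canny pruning and skin segmentation with centroid and convex-hull extraction, can be sketched with OpenCV as follows; the original system uses EmguCV (C#), and the cascade file, image file and HSV thresholds here are assumptions, not the authors' trained model:

    import cv2

    # Hypothetical trained cascade for hands (path is an assumption).
    cascade = cv2.CascadeClassifier("hand_cascade.xml")

    frame = cv2.imread("frame.png")                              # one camera frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hands = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                     flags=cv2.CASCADE_DO_CANNY_PRUNING)

    # Skin segmentation in HSV, then contour, convex hull and centroid of the hand.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))         # illustrative thresholds
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)
        hull = cv2.convexHull(hand)
        m = cv2.moments(hand)
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        print("hand centroid:", (cx, cy), "hull points:", len(hull))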

Keywords: Canny pruning, hand recognition, machine learning, skin tracking.
