Search results for: real coded genetic algorithm (RCGA).
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5428


2728 Macular Ganglion Cell Inner Plexiform Layer Thinning in Patients with Visual Field Defect that Respects the Vertical Meridian

Authors: Hye-Young Shin, Chan Kee Park

Abstract:

Background: To compare the thinning patterns of the ganglion cell-inner plexiform layer (GCIPL) and peripapillary retinal nerve fiber layer (pRNFL) as measured using Cirrus high-definition optical coherence tomography (HD-OCT) in patients with visual field (VF) defects that respect the vertical meridian. Methods: Twenty eyes of eleven patients with VF defects that respect the vertical meridian were enrolled retrospectively. The thicknesses of the macular GCIPL and pRNFL were measured using Cirrus HD-OCT. The 5% and 1% thinning area index (TAI) was calculated as the proportion of abnormally thin sectors at the 5% and 1% probability level within the area corresponding to the affected VF. The 5% and 1% TAI were compared between the GCIPL and pRNFL measurements. Results: The color-coded GCIPL deviation map showed a characteristic vertical thinning pattern of the GCIPL, which is also seen in the VF of patients with brain lesions. The 5% and 1% TAI were significantly higher in the GCIPL measurements than in the pRNFL measurements (all P < 0.01). Conclusions: Macular GCIPL analysis clearly visualized a characteristic topographic pattern of retinal ganglion cell (RGC) loss in patients with VF defects that respect the vertical meridian, unlike pRNFL measurements. Macular GCIPL measurements provide more valuable information than pRNFL measurements for detecting the loss of RGCs in patients with retrograde degeneration of the optic nerve fibers.
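
As a rough illustration of the thinning area index (TAI) defined above, the sketch below computes the proportion of abnormally thin sectors within the region corresponding to the affected visual field. The function name and the boolean sector maps are hypothetical, not part of the original study.

```python
import numpy as np

def thinning_area_index(abnormal_sectors, affected_vf_sectors):
    """TAI: proportion of abnormally thin sectors within the area
    corresponding to the affected visual field (hypothetical inputs)."""
    abnormal = np.asarray(abnormal_sectors, dtype=bool)
    affected = np.asarray(affected_vf_sectors, dtype=bool)
    n_affected = affected.sum()
    if n_affected == 0:
        return 0.0
    return float((abnormal & affected).sum()) / float(n_affected)

# Example: 8 sectors, 4 fall in the affected hemifield, 3 of those are thin at the 5% level.
tai_5 = thinning_area_index([1, 1, 1, 0, 0, 0, 0, 0], [1, 1, 1, 1, 0, 0, 0, 0])
print(tai_5)  # 0.75
```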

Keywords: Brain lesion, Macular ganglion cell-Inner plexiform layer, Spectral-domain optical coherence tomography.

2727 An Experimental Procedure for Design and Construction of Monocopter and Its Control Using Optical and GPS-Aided AHRS Sensors

Authors: A. Safaee, M. S. Mehrabani, M. B. Menhaj, V. Mousavi, S. Z. Moussavi

Abstract:

The monocopter is a single-wing rotary flying vehicle capable of hovering. It consists of two dynamic parts, and greater efficiency can be expected than from other micro UAVs owing to the large wing area relative to the fuselage. Low cost and a simple mechanism, in comparison with other vehicles such as helicopters, are its most important characteristics. In the previous paper we introduced the final system; in this paper, the experimental design process of the monocopter and its control algorithm are investigated in general. The editorial errors in the previous article have also been corrected and some translational ambiguities resolved. Initially, by constructing several prototypes and carrying out many flight tests, the main design parameters of this air vehicle were obtained from experimental measurements. Eventually, the main monocopter required for this project was constructed. After construction of the monocopter, and in order to design, implement, and test the control algorithms, a simple optical system was first used to determine the heading angle. After numerous tests on the test stand, the control algorithm was designed and the timing of applying control inputs was adjusted. Other control parameters of the system were then tuned in flight tests. Eventually, the final control system was designed and implemented using the AHRS sensor, and the final operational tests were performed successfully.

Keywords: Monocopter, Flap, Heading Angle, AHRS, Cyclic, Photo Diode.

2726 Accurate Position Electromagnetic Sensor Using Data Acquisition System

Authors: Z. Ezzouine, A. Nakheli

Abstract:

This paper presents a high position electromagnetic sensor system (HPESS) that is applicable to moving-object detection. The authors have developed a high-performance position sensor prototype dedicated to students’ laboratories. The challenge was to obtain a highly accurate, real-time sensor able to calculate position, length, or displacement. An electromagnetic solution based on a two-coil induction principle was adopted. The HPESS converts mechanical motion to electrical energy with direct contact. The output signal can then be fed to an electronic circuit. The voltage output change from the sensor is captured by a data acquisition system using LabVIEW software, and the displacement of the moving object is determined. The measured data are transmitted to a PC in real time via a DAQ (NI USB-6281). This paper also describes the data acquisition analysis and the conditioning card developed specifically for sensor signal monitoring. The data are then recorded and viewed through a user interface written in National Instruments LabVIEW software. On-line displays of the time and voltage of the sensor signal provide a user-friendly data acquisition interface. The sensor provides an uncomplicated, accurate, reliable, and inexpensive transducer for highly sophisticated control systems.
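
As a rough illustration of the acquisition chain described above (the exact LabVIEW/DAQ configuration is not given in the abstract), the sketch below fits a hypothetical calibration curve mapping the sensor output voltage to displacement and applies it to newly acquired samples. All numerical values are placeholders.

```python
import numpy as np

# Hypothetical calibration data: known displacements (mm) vs. measured output voltage (V).
calib_displacement_mm = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
calib_voltage_v = np.array([0.02, 0.51, 1.00, 1.48, 1.99])

# Fit a low-order polynomial voltage -> displacement model.
coeffs = np.polyfit(calib_voltage_v, calib_displacement_mm, deg=1)

def voltage_to_displacement(voltage_samples):
    """Convert a block of voltage samples read from the DAQ into displacement (mm)."""
    return np.polyval(coeffs, np.asarray(voltage_samples, dtype=float))

# Example: samples captured by the acquisition system for a moving object.
samples = [0.25, 0.74, 1.21]
print(voltage_to_displacement(samples))
```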

Keywords: Electromagnetic sensor, data acquisition, accurately, position measurement.

2725 Recommender Systems Using Ensemble Techniques

Authors: Yeonjeong Lee, Kyoung-jae Kim, Youngtae Kim

Abstract:

This study proposes a novel recommender system that uses data mining and multi-model ensemble techniques to enhance recommendation performance by reflecting users’ precise preferences. The proposed model consists of two steps. In the first step, this study uses logistic regression, decision trees, and artificial neural networks to predict customers who are highly likely to purchase products in each product group. The results of the individual predictors are then combined using multi-model ensemble techniques such as bagging and bumping. In the second step, this study uses market basket analysis to extract association rules for co-purchased products. Finally, through these two steps, the system selects customers who are highly likely to purchase products in each product group and recommends suitable products from the same or different product groups to them. We test the usability of the proposed system using a prototype and real-world transaction and profile data. In addition, we survey user satisfaction with the product list recommended by the proposed system and with randomly selected product lists. The results show that the proposed system may be useful in real-world online shopping stores.
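
A minimal sketch of the first step (purchase-likelihood prediction per product group with a multi-model ensemble), using scikit-learn as a stand-in. The bagging/soft-voting combination, the feature matrix, and the 0.8 cut-off are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))           # hypothetical customer profile features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # 1 = purchased in this product group

# Three base predictors, each wrapped in bagging, then combined by soft voting.
ensemble = VotingClassifier(
    estimators=[
        ("lr", BaggingClassifier(LogisticRegression(max_iter=1000), n_estimators=10)),
        ("dt", BaggingClassifier(DecisionTreeClassifier(max_depth=5), n_estimators=10)),
        ("nn", BaggingClassifier(MLPClassifier(hidden_layer_sizes=(16,), max_iter=500), n_estimators=5)),
    ],
    voting="soft",
)
ensemble.fit(X, y)

# Customers with a high predicted purchase likelihood become recommendation candidates.
likelihood = ensemble.predict_proba(X)[:, 1]
candidates = np.where(likelihood > 0.8)[0]
print(len(candidates), "candidate customers for this product group")
```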

Keywords: Product recommender system, Ensemble technique, Association rules, Decision tree, Artificial neural networks.

2724 Evaluation of Two Earliness Cotton Genotypes in Three Ecological Regions

Authors: Gholamhossein Hosseini

Abstract:

Two early-maturing cotton genotypes, I and II, developed by hybridization and backcross methods between Sindise-80 as the early-maturing gene parent and two other lines, i.e., Red Leaf and Bulgare-557, as the second parent, were subjected to different environmental conditions. The early-maturing genotypes, coded I and II, were compared with four native cotton cultivars in a randomized complete block design (RCBD) with four replications in three ecological regions of Iran from 2016 to 2017. The two early-maturing genotypes, along with the four native cultivars, viz. Varamin, Oltan, Sahel, and Arya, were planted at the Agricultural Research Stations of Varamin, Moghan, and Kashmar for evaluation. Earliness data were collected for the six treatments during two years in the three regions, except for the second year at Kashmar, where data were missing; the missing data were therefore estimated and imputed. To test the homogeneity of error variances, each experiment at a given location or year was analyzed separately using Hartley's and Bartlett's chi-square tests, and both tests confirmed homogeneity of variance. The combined analysis of variance showed that genotypes I and II were superior in the Varamin, Moghan, and Kashmar regions. Earliness means and their interaction effects were compared using Duncan's multiple range test.
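
A small illustration of the variance-homogeneity check mentioned above, using SciPy's Bartlett test on hypothetical per-environment error (residual) samples; in the actual study the inputs would be the error terms from each location-year experiment before pooling them into the combined ANOVA.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical error (residual) samples from each location-year environment.
errors_varamin = rng.normal(0, 1.0, size=24)
errors_moghan = rng.normal(0, 1.1, size=24)
errors_kashmar = rng.normal(0, 0.9, size=24)

stat, p_value = stats.bartlett(errors_varamin, errors_moghan, errors_kashmar)
print(f"Bartlett chi-square = {stat:.3f}, p = {p_value:.3f}")
# A large p-value (e.g. > 0.05) supports homogeneity of error variances,
# so the experiments can be pooled in a combined analysis of variance.
```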

Keywords: Cotton, combined analysis, earliness.

2723 Laser Ultrasonic Imaging Based on Synthetic Aperture Focusing Technique Algorithm

Authors: Sundara Subramanian Karuppasamy, Che Hua Yang

Abstract:

In this work, the laser ultrasound technique is used for analyzing and imaging inner defects in metal blocks. Traditionally, researchers have used piezoelectric transducers for the generation and reception of ultrasonic signals to detect defects in blocks. These transducers can be configured into sparse and phased arrays, but both configurations have drawbacks, including the need for many transducers, time-consuming calculations, limited bandwidth, and limited image resolution. Here, we focus on a non-contact method for generating and receiving the ultrasound to examine inner defects in aluminum blocks. A Q-switched pulsed laser is used for generation, and reception is performed with a Laser Doppler Vibrometer (LDV). Based on the Doppler effect, the LDV provides a rapid, high-spatial-resolution way of sensing ultrasonic waves. From the LDV, a series of scanning points is selected to serve as the phased-array elements. A side-drilled hole 10 mm in diameter and 25 mm deep is introduced, and the defect is interrogated by the linear array of scanning points obtained from the LDV. With the aid of the Synthetic Aperture Focusing Technique (SAFT) algorithm, which is based on the time-shifting (delay-and-sum) principle, the inspection images are generated from the A-scan data acquired from the 1-D linear phased-array elements. Thus, the defect can be precisely detected with good resolution.
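
The sketch below illustrates the delay-and-sum (time-shifting) idea behind SAFT on synthetic A-scan data; the geometry, sampling rate, and wave speed are placeholders, not the authors' experimental values.

```python
import numpy as np

def saft_image(ascans, element_x, pixel_x, pixel_z, c, fs):
    """Delay-and-sum SAFT reconstruction.

    ascans    : (n_elements, n_samples) A-scan amplitudes from the LDV scan points
    element_x : x-positions of the scan points (m)
    pixel_x/z : 1-D arrays of image pixel coordinates (m)
    c         : ultrasonic wave speed in the block (m/s)
    fs        : sampling frequency (Hz)
    """
    n_samples = ascans.shape[1]
    image = np.zeros((pixel_z.size, pixel_x.size))
    for iz, z in enumerate(pixel_z):
        for ix, x in enumerate(pixel_x):
            # Two-way travel time from each element to the pixel and back.
            dist = np.hypot(element_x - x, z)
            idx = np.round(2.0 * dist / c * fs).astype(int)
            valid = idx < n_samples
            image[iz, ix] = np.abs(ascans[valid, idx[valid]].sum())
    return image

# Placeholder example: 32 scan points over 32 mm, 2000 samples at 100 MHz, c = 6320 m/s (aluminum).
fs, c = 100e6, 6320.0
elements = np.linspace(0.0, 0.032, 32)
ascans = np.random.default_rng(0).normal(size=(32, 2000))
img = saft_image(ascans, elements, np.linspace(0.0, 0.032, 64), np.linspace(0.005, 0.04, 64), c, fs)
```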

Keywords: Laser ultrasonics, linear phased array, nondestructive testing, synthetic aperture focusing technique, ultrasonic imaging.

2722 Modeling and Analysis of Adaptive Buffer Sharing Scheme for Consecutive Packet Loss Reduction in Broadband Networks

Authors: Sakshi Kausha, R. K. Sharma

Abstract:

High-speed networks provide real-time variable bit rate services with diversified traffic flow characteristics and quality requirements. Variable bit rate traffic has stringent delay and packet loss requirements, and the burstiness of the correlated traffic makes dynamic buffer management highly desirable for satisfying the Quality of Service (QoS) requirements. This paper presents an algorithm for optimizing an adaptive buffer allocation scheme based on the loss of consecutive packets in the data stream and the buffer occupancy level. The buffer is designed so that the input traffic is partitioned into different priority classes, and the threshold is controlled dynamically based on the input traffic behavior. The algorithm admits an input packet into the buffer only if the occupancy level is less than the threshold value for that packet's priority. The threshold is varied dynamically at runtime based on the packet loss behavior. The simulation is run for two priority classes of input traffic: real-time and non-real-time. The simulation results show that Adaptive Partial Buffer Sharing (ADPBS) performs better than Static Partial Buffer Sharing (SPBS) and a First In First Out (FIFO) queue under the same traffic conditions.
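
A simplified sketch of the admission logic described above: per-priority thresholds with admission only below the threshold, and thresholds adapted when consecutive losses occur. The adaptation rule, constants, and class names are illustrative assumptions, not the exact ADPBS scheme.

```python
class AdaptiveBuffer:
    """Partial buffer sharing with per-priority thresholds adapted at runtime."""

    def __init__(self, capacity, thresholds):
        self.capacity = capacity
        self.thresholds = dict(thresholds)   # e.g. {"realtime": 80, "non-realtime": 50}
        self.queue = []
        self.consecutive_losses = {p: 0 for p in thresholds}

    def enqueue(self, packet, priority):
        # Admit only if occupancy is below the threshold for this packet's priority.
        if len(self.queue) < min(self.thresholds[priority], self.capacity):
            self.queue.append(packet)
            self.consecutive_losses[priority] = 0
            return True
        self.consecutive_losses[priority] += 1
        self._adapt(priority)
        return False

    def _adapt(self, priority, burst_limit=3, step=5):
        # Illustrative rule: after a burst of consecutive losses, raise that
        # priority's threshold (up to capacity) to reduce further consecutive loss.
        if self.consecutive_losses[priority] >= burst_limit:
            self.thresholds[priority] = min(self.capacity, self.thresholds[priority] + step)
            self.consecutive_losses[priority] = 0

buf = AdaptiveBuffer(capacity=100, thresholds={"realtime": 80, "non-realtime": 50})
buf.enqueue("pkt-1", "realtime")
```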

Keywords: Buffer Management, Consecutive packet loss, Quality-of-Service, Priority-based packet discarding, Partial buffer sharing.

2721 Automation of the Maritime UAV Command, Control, Navigation Operations, Simulated in Real-Time Using Kinect Sensor: A Feasibility Study

Authors: Regius Asiimwe, Amir Anvar

Abstract:

This paper describes the process used in the automation of maritime UAV commands using the Kinect sensor. The AR Drone is a quadrocopter manufactured by Parrot [1], designed to be controlled from Apple devices such as iPhones and iPads. This project, however, uses the Microsoft Kinect SDK and Microsoft Visual Studio C# (C sharp) software, which are compatible with the Windows operating system, for the automation of the navigation and control of the AR Drone. The navigation and control software for the quadrocopter runs on a Windows 7 computer. The project is divided into two sections: the quadrocopter control system and the Kinect sensor control system. The Kinect sensor is connected to the computer using a USB cable, through which commands and data are exchanged with the sensor. The AR Drone has Wi-Fi capability, through which it can be connected to the computer to enable the transfer of commands to and from the quadrocopter. The project was implemented in C#, a programming language commonly used in automation systems; this language was chosen because more libraries are already established in C# for both the AR Drone and the Kinect sensor. The study will contribute toward research in the automation of systems using the quadrocopter and the Kinect sensor for navigation involving a human operator in the loop. The prototype created has numerous applications, including the inspection of vessels such as ships and airplanes, and of areas that are not accessible to human operators.

Keywords: UAV, AR drone, Kinect Sensors, Automation, Real time, C sharp, Microsoft Kinect SDK.

2720 A Neurofuzzy Learning and its Application to Control System

Authors: Seema Chopra, R. Mitra, Vijay Kumar

Abstract:

A neurofuzzy approach for a given set of input-output training data is proposed in two phases. First, the data set is partitioned automatically into a set of clusters, and a fuzzy if-then rule is extracted from each cluster to form a fuzzy rule base. Second, a fuzzy neural network is constructed accordingly and its parameters are tuned to increase the precision of the fuzzy rule base. This network is able to learn and optimize the rule base of a Sugeno-like fuzzy inference system using a hybrid learning algorithm that combines gradient descent and the least mean squares algorithm. The proposed neurofuzzy system has the advantage of determining the number of rules automatically; it also reduces the number of rules, decreases computational time, learns faster, and consumes less memory. The authors also investigate how neurofuzzy techniques can be applied in the area of control theory to design a fuzzy controller for linear and nonlinear dynamic systems modeled from a set of input/output data. Simulation analysis is carried out on a wide range of processes to identify nonlinear components online in a control system, as well as on a benchmark problem involving the prediction of a chaotic time series. Furthermore, well-known examples of linear and nonlinear systems are simulated in the Matlab/Simulink environment. The above combination is also illustrated in modeling the relationship between automobile trips and demographic factors.

Keywords: Fuzzy control, neuro-fuzzy techniques, fuzzy subtractive clustering, extraction of rules, and optimization of membership functions.

2719 Design and Development of 5-DOF Color Sorting Manipulator for Industrial Applications

Authors: Atef A. Ata, Sohair F. Rezeka, Ahmed El-Shenawy, Mohammed Diab

Abstract:

Image processing attracts massive attention in today's world, as it opens possibilities for broader applications in many high-technology fields. The real challenge is how to improve existing sorting system applications, which consist of two integrated stations of processing and handling, with a new image processing feature. Existing color sorting techniques use a set of inductive, capacitive, and optical sensors to differentiate object color. This research presents a mechatronic color sorting system solution based on image processing. A 5-DOF robot arm is designed and developed with a pick-and-place operation to act as the main part of the color sorting system. The image processing procedure detects the circular objects in an image captured in real time by a webcam fixed at the end-effector and then extracts color and position information from it. This information is passed as a sequence of sorting commands to the manipulator, which has a pick-and-place mechanism. Performance analysis shows that this color-based object sorting system works accurately under ideal conditions in terms of adequate illumination and circular object shape and color. The circular objects tested for sorting are red, green, and blue. For non-ideal conditions, such as an unspecified color, the accuracy drops to 80%.
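
A brief OpenCV sketch of the sensing step described above, detecting circular objects in a webcam frame and classifying them as red, green, or blue. The HSV thresholds, Hough parameters, and camera index are placeholder assumptions rather than the values used in the paper.

```python
import cv2
import numpy as np

# Placeholder HSV ranges for the three target colors.
COLOR_RANGES = {
    "red": ((0, 120, 70), (10, 255, 255)),
    "green": ((40, 70, 70), (80, 255, 255)),
    "blue": ((100, 150, 0), (140, 255, 255)),
}

def detect_objects(frame):
    """Return (color, x, y, radius) for circular objects found in one frame."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    circles = cv2.HoughCircles(cv2.medianBlur(gray, 5), cv2.HOUGH_GRADIENT,
                               dp=1.2, minDist=40, param1=100, param2=40,
                               minRadius=10, maxRadius=80)
    detections = []
    if circles is None:
        return detections
    for x, y, r in np.round(circles[0]).astype(int):
        for color, (lo, hi) in COLOR_RANGES.items():
            mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
            if mask[y, x] > 0:          # center pixel falls inside this color range
                detections.append((color, x, y, r))
                break
    return detections

cap = cv2.VideoCapture(0)               # webcam fixed at the end-effector (index assumed)
ok, frame = cap.read()
if ok:
    print(detect_objects(frame))        # e.g. [("red", 215, 118, 32)]
cap.release()
```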

Keywords: Robotics manipulator, 5-DOF manipulator, image processing, Color sorting, Pick-and-place.

2718 Classification of Business Models of Italian Bancassurance by Balance Sheet Indicators

Authors: Andrea Bellucci, Martina Tofi

Abstract:

The aim of this paper is to analyze the business models of bancassurance in Italy for the life business. The life insurance business is highly developed in the Italian market, and bank branches hold 80% of the market share. Given its maturity, the life insurance market needs to consolidate its organizational form to allow for the development of the non-life business, which nowadays collects few premiums but represents a great opportunity to enlarge the market share of bancassurance, using its strength in the distribution channel while the market share of independent agents is decreasing. Starting with the main business model of bancassurance for the life business, this paper analyzes the performance of life companies in the Italian market by balance sheet indicators and by the main discriminant variables of business models. The study observes trends from 2013 to 2015 for the Italian market by exploiting a database managed by Associazione Nazionale delle Imprese di Assicurazione (ANIA). The applied approach is based on a bottom-up analysis, starting with variables and indicators to define the classification of business models. The statistical classification algorithm proposed by Ward is employed to design business model profiles. The result of the analysis is a representation of the main business models, each built from its profile of indicators. In this way, an unsupervised analysis is developed; its limitation is the judgmental dimension based on the researchers' opinion, but it makes it possible to obtain a design of effective business models.
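
A compact sketch of the bottom-up classification step using Ward's hierarchical clustering on standardized balance-sheet indicators (SciPy implementation); the indicator matrix, its column meanings, and the number of business-model profiles are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.stats import zscore

# Hypothetical balance-sheet indicators per life company (rows = companies).
# Columns might be, e.g., premium growth, expense ratio, reserves/premiums, ROE.
rng = np.random.default_rng(0)
indicators = rng.normal(size=(30, 4))

# Standardize indicators so no single variable dominates the distance metric.
X = zscore(indicators, axis=0)

# Ward's minimum-variance agglomerative clustering.
Z = linkage(X, method="ward")

# Cut the dendrogram into, say, three business-model profiles.
labels = fcluster(Z, t=3, criterion="maxclust")
for k in np.unique(labels):
    profile = indicators[labels == k].mean(axis=0)
    print(f"business model {k}: mean indicator profile {np.round(profile, 2)}")
```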

Keywords: Balance sheet indicators, Bancassurance, business models, Ward algorithm.

2717 Low Value Capacitance Measurement System with Adjustable Lead Capacitance Compensation

Authors: Gautam Sarkar, Anjan Rakshit, Amitava Chatterjee, Kesab Bhattacharya

Abstract:

This paper describes the development of a low-cost, highly accurate low-capacitance measurement system that can be used over a range of 0–400 pF with a resolution of 1 pF. The range of capacitance may be easily altered by a simple resistance or capacitance variation of the measurement circuit. The capacitance measurement system uses a CD4093B quad two-input NAND Schmitt trigger circuit with hysteresis for the measurement and is integrated with a PIC 18F2550 microcontroller for data acquisition. The microcontroller interacts with software developed on the PC side over USB, and an attractive graphical user interface (GUI) based system on the PC provides the user with a real-time, online display of the capacitance under measurement. The system uses a differential mode of capacitance measurement, with reference to a trimmer capacitance, that effectively compensates lead capacitances, a notorious source of error in typical low-capacitance measurements. The hysteresis provided by the Schmitt trigger circuits enables reliable operation of the system by greatly minimizing the possibility of false triggering due to stray interference, usually regarded as another source of significant error. Real-life testing of the proposed system showed that it produces highly accurate capacitance measurements when compared with cutting-edge, high-end digital capacitance meters.

Keywords: Capacitance measurement, NAND Schmitt trigger, microcontroller, GUI, lead compensation, hysteresis.

2716 Feature Based Unsupervised Intrusion Detection

Authors: Deeman Yousif Mahmood, Mohammed Abdullah Hussein

Abstract:

The goal of a network-based intrusion detection system is to classify network traffic activities into two major categories: normal and attack (intrusive) activities. Nowadays, data mining and machine learning play an important role in many sciences, including intrusion detection systems (IDS), using both supervised and unsupervised techniques. One of the essential steps of data mining is feature selection, which helps improve the efficiency, performance, and prediction rate of the proposed approach. This paper applies the unsupervised K-means clustering algorithm with information gain (IG) for feature selection and reduction to build a network intrusion detection system. For our experimental analysis, we have used the new NSL-KDD dataset, a modified version of the KDDCup 1999 intrusion detection benchmark dataset. With a split of 60.0% for the training set and the remainder for the testing set, a two-class classification (Normal, Attack) has been implemented. The Weka framework, a Java-based open-source software suite consisting of a collection of machine learning algorithms for data mining tasks, has been used in the testing process. The experimental results show that the proposed approach is very accurate, with a low false positive rate and a high true positive rate, and it takes less learning time compared with using the full feature set of the dataset with the same algorithm.
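
A short sketch of a comparable pipeline in scikit-learn: information-gain-style feature ranking (approximated here by mutual information) followed by two-cluster K-means. The random feature matrix stands in for the numeric NSL-KDD features, and the number of selected features is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-in for the numeric NSL-KDD training features and labels
# (0 = normal, 1 = attack); in practice these come from the NSL-KDD files.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 41))
y = (X[:, 3] + X[:, 7] > 0).astype(int)

# Rank features by mutual information (an information-gain analogue) and keep the top 10.
selector = SelectKBest(mutual_info_classif, k=10)
X_sel = selector.fit_transform(X, y)

# Unsupervised 2-cluster K-means over the reduced, standardized feature space.
X_std = StandardScaler().fit_transform(X_sel)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_std)

# Map each cluster to the majority class seen in the training labels for evaluation.
for c in (0, 1):
    majority = np.bincount(y[kmeans.labels_ == c]).argmax()
    print(f"cluster {c} -> {'attack' if majority else 'normal'}")
```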

Keywords: Information Gain (IG), Intrusion Detection System (IDS), K-means Clustering, Weka.

2715 Rank-Based Chain-Mode Ensemble for Binary Classification

Authors: Chongya Song, Kang Yen, Alexander Pons, Jin Liu

Abstract:

In the field of machine learning, ensembles have been employed as a common methodology to improve performance over multiple base classifiers. However, the true predictions are often canceled out by the false ones during consensus due to a phenomenon called the “curse of correlation”, which manifests as strong interference among the predictions produced by the base classifiers. In addition, existing practices are still not able to effectively mitigate the problem of imbalanced classification. Based on the analysis of our experimental results, we conclude that these two problems are caused by inherent deficiencies in the consensus approach. Therefore, we create an enhanced ensemble algorithm that adopts a designed rank-based chain-mode consensus to overcome the two problems. To evaluate the proposed ensemble algorithm, we employ the well-known benchmark data set NSL-KDD (the improved version of the KDDCup99 dataset produced by the University of New Brunswick) to make comparisons between the proposed algorithm and 8 common ensemble algorithms. In particular, each compared ensemble classifier uses the same 22 base classifiers, so that the differences in the improvements in accuracy and reliability over the base classifiers can be truly revealed. As a result, the proposed rank-based chain-mode consensus proves to be a more effective ensemble solution than the traditional consensus approach, outperforming the 8 ensemble algorithms by 20% on almost all compared metrics, which include accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve.

Keywords: Consensus, curse of correlation, imbalanced classification, rank-based chain-mode ensemble.

2714 A New Fast Skin Color Detection Technique

Authors: Tarek M. Mahmoud

Abstract:

Skin color can provide a useful and robust cue for human-related image analysis, such as face detection, pornographic image filtering, hand detection and tracking, and people retrieval in databases and on the Internet. The major problem with such skin color detection algorithms is that they are time consuming and hence cannot be applied in a real-time system. To overcome this problem, we introduce a new fast technique for skin detection that can be applied in a real-time system. In this technique, instead of testing each image pixel to label it as skin or non-skin (as in classic techniques), we skip a set of pixels. The rationale for the skipping process is the high probability that neighbors of skin-colored pixels are also skin pixels, especially in adult images, and vice versa. The proposed method can rapidly detect skin and non-skin color pixels, which in turn dramatically reduces the CPU time required for the protection process. Since many fast detection techniques are based on image resizing, we apply our proposed pixel skipping technique together with image resizing to obtain better results. The performance evaluation of the proposed skipping and hybrid techniques in terms of measured CPU time is presented. Experimental results demonstrate that the proposed methods achieve better results than the relevant classic method.
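
A minimal sketch of the pixel-skipping idea in the YCbCr color space; the Cb/Cr skin thresholds and the skip stride are common illustrative values, not necessarily those used by the author.

```python
import numpy as np

def fast_skin_mask(ycbcr_image, step=4, cb_range=(77, 127), cr_range=(133, 173)):
    """Label skin pixels on a sparse grid (every `step`-th pixel) and
    propagate the label to the skipped neighbors in the same block."""
    cb = ycbcr_image[..., 1]
    cr = ycbcr_image[..., 2]
    h, w = cb.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h, step):
        for x in range(0, w, step):
            is_skin = (cb_range[0] <= cb[y, x] <= cb_range[1] and
                       cr_range[0] <= cr[y, x] <= cr_range[1])
            # Skipped neighbors inherit the tested pixel's label.
            mask[y:y + step, x:x + step] = is_skin
    return mask

# Example on a random "YCbCr" image; real input would come from an RGB -> YCbCr conversion.
img = np.random.randint(0, 256, size=(120, 160, 3), dtype=np.uint8)
print(fast_skin_mask(img, step=4).mean())  # fraction of pixels labeled as skin
```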

Keywords: Adult images filtering, image resizing, skin color detection, YCbCr color space.

2713 A Static Android Malware Detection Based on Actual Used Permissions Combination and API Calls

Authors: Xiaoqing Wang, Junfeng Wang, Xiaolan Zhu

Abstract:

The Android operating system has been embraced by most application developers because of its open-source nature and good compatibility, which greatly enriches the categories of applications. However, it has become a target of malware attackers due to the lack of strict security supervision mechanisms, which has led to the rapid growth of malware and brings serious safety hazards to users. It is therefore critical to detect Android malware effectively. Generally, the permissions declared in AndroidManifest.xml reflect the function and behavior of the application to a large extent. Since the current Android system places no restriction on the number of permissions that an application can request, developers tend to apply for more permissions than actually needed in order to ensure the successful running of the application, which results in the abuse of permissions. However, some traditional detection methods only consider the requested permissions and ignore whether they are actually used, which leads to the incorrect identification of some malware. Therefore, a machine learning detection method based on actually used permission combinations and API calls is put forward in this paper. Several experiments are conducted to evaluate our methodology. The results show that it can detect unknown malware effectively, with a higher true positive rate and accuracy, while maintaining a low false positive rate. The AdaBoostM1 (J48) classification algorithm based on the information gain feature selection algorithm has the best detection result, achieving an accuracy of 99.8%, a true positive rate of 99.6%, and the lowest false positive rate of 0.
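
A rough scikit-learn analogue of the best-performing pipeline: information-gain-style feature selection followed by boosted decision trees (Weka's AdaBoostM1 with J48 is approximated here by AdaBoostClassifier over a CART tree). The binary feature matrix of used permissions and API-call indicators, and the toy labels, are hypothetical stand-ins.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical binary features: actually used permissions and sensitive API calls per app.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(2000, 150))
y = (X[:, 0] & X[:, 5] | X[:, 42]).astype(int)   # 1 = malware (toy labeling rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Information-gain-style ranking, then boosting over decision trees (a J48 analogue).
selector = SelectKBest(mutual_info_classif, k=30).fit(X_tr, y_tr)
clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3), n_estimators=100, random_state=0)
clf.fit(selector.transform(X_tr), y_tr)
print("accuracy:", clf.score(selector.transform(X_te), y_te))
```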

Keywords: Android, permissions combination, API calls, machine learning.

2712 An Analysis of Economic Capital Allocation of Global Banks

Authors: Petr Teply, Ondrej Vejdovec

Abstract:

There are three main ways of categorizing capital in banking operations: accounting, regulatory, and economic capital. However, the 2008-2009 global crisis has shown that none of these categories adequately reflects the real risks of bank operations, especially in light of the failures of Bear Stearns, Lehman Brothers, and Northern Rock. This paper deals with the economic capital allocation of global banks. In theory, economic capital should reflect the real risks of a bank and should be publicly available. Yet, as discovered during the global financial crisis, even when economic capital information was publicly disclosed, the underlying assumptions rendered the information useless. Specifically, some global banks that reported relatively high levels of economic capital before the crisis went bankrupt or had to be bailed out by their governments. Moreover, only 15 out of 50 global banks reported their economic capital during the 2007-2010 period. In this paper, we analyze the changes in reported bank economic capital disclosure during this period. We conclude that the relative shares of credit and business risks increased in 2010 compared to 2007, while both operational and market risks decreased their shares of the total economic capital of top-rated global banks. Generally speaking, higher levels of disclosure and transparency of bank operations are required to obtain more confidence from stakeholders. Moreover, additional risks such as liquidity risk should be included in these disclosures.

Keywords: global crisis, economic capital, risk management, risk allocation, bank

2711 EZW Coding System with Artificial Neural Networks

Authors: Saudagar Abdul Khader Jilani, Syed Abdul Sattar

Abstract:

Image compression plays a vital role in today's communication. The limitation in allocated bandwidth leads to slower communication, so to improve the rate of transmission within the limited bandwidth, image data must be compressed before transmission. There are basically two types of compression: lossy and lossless. Although lossy compression gives a higher compression ratio than lossless compression, the accuracy of the retrieved image is lower. The JPEG and JPEG2000 image compression systems employ Huffman coding for image compression. The JPEG2000 coding system uses the wavelet transform, which decomposes the image into different levels, where the coefficients in each subband are uncorrelated with the coefficients of other subbands. Embedded Zerotree Wavelet (EZW) coding exploits the multi-resolution properties of the wavelet transform to give a computationally simple algorithm with better performance compared to existing wavelet transforms. For further improvement of compression applications, other coding methods have recently been suggested; an ANN-based approach is one such method. Artificial neural networks have been applied to many problems in image processing and have demonstrated their superiority over classical methods when dealing with noisy or incomplete data in image compression applications. A performance analysis of different images is presented for the EZW coding system combined with the error backpropagation algorithm. The implementation and analysis show approximately 30% higher accuracy in the retrieved image compared to the existing EZW coding system.

Keywords: Accuracy, Compression, EZW, JPEG2000, Performance.

2710 An Educational Data Mining System for Advising Higher Education Students

Authors: Heba Mohammed Nagy, Walid Mohamed Aly, Osama Fathy Hegazy

Abstract:

Educational data mining is a specific data mining field applied to data originating from educational environments; it relies on different approaches to discover hidden knowledge from the available data. Among these approaches are machine learning techniques, which are used to build a system that acquires learning from previous data. Machine learning can be applied to solve different regression, classification, clustering, and optimization problems.

In our research, we propose a “Student Advisory Framework” that utilizes classification and clustering to build an intelligent system. This system can be used to provide consultation to first-year university students on pursuing an education track in which they are likely to succeed, aiming to decrease the high rate of academic failure among these students. A real case study at the Cairo Higher Institute for Engineering, Computer Science and Management is presented using a real dataset collected from 2000 to 2012. The dataset has two main components: a pre-higher-education dataset and a first-year course results dataset. The results have proved the efficiency of the suggested framework.

Keywords: Classification, Clustering, Educational Data Mining (EDM), Machine Learning.

2709 Identification of Microbial Community in an Anaerobic Reactor Treating Brewery Wastewater

Authors: Abimbola M. Enitan, John O. Odiyo, Feroz M. Swalaha

Abstract:

The study of microbial ecology and function in anaerobic digestion processes is essential for controlling the biological processes and for understanding the symbiotic relationships among the microorganisms involved in the conversion of complex organic matter in industrial wastewater into simple molecules. In this study, the diversity and quantity of the bacterial community in granular sludge taken from the different compartments of a full-scale upflow anaerobic sludge blanket (UASB) reactor treating brewery wastewater were investigated using polymerase chain reaction (PCR) and real-time quantitative PCR (qPCR). The phylogenetic analysis showed three major eubacterial phyla, belonging to Proteobacteria, Firmicutes, and Chloroflexi, in the full-scale UASB reactor, with different groups populating different compartments. The qPCR assay showed a high abundance of eubacteria, with the concentration increasing along the reactor's compartments. This study extends our understanding of the diversity, topological distribution, and shifts in concentration of microbial communities in the different compartments of a full-scale UASB reactor treating brewery wastewater. The colonization of, and the trophic interactions among, these microbial populations in reducing and transforming complex organic matter within UASB reactors were established.

Keywords: Bacteria, brewery wastewater, real-time quantitative PCR, UASB reactor.

2708 Heritability Estimates of Lactation Traits in Maltese Goat

Authors: R. Pesce Delfino, M. Selvaggi, G. V. Celano, C. Dario

Abstract:

Data on 657 lactations from 163 Maltese goats, collected over a 5-year period, were analyzed with a mixed model to estimate the variance components for heritability. The lactation traits considered were milk yield (MY) and lactation length (LL). Year, parity, and type of birth (single or twin) were significant sources of variation for lactation length; milk yield, on the other hand, was significantly influenced only by year. The average MY was 352.34 kg and the average LL was 230 days. Heritability estimates were 0.21 and 0.15 for MY and LL, respectively. These values suggest a low correlation between genotype and phenotype, so it may be difficult to evaluate animals directly on phenotype. Consequently, the genetic improvement of this breed may be quite slow without the support of progeny testing aimed at selecting Maltese breeders.

Keywords: Heritability estimate, lactation traits, Maltese goat

2707 A Case Study in Using the Can-Sized Satellite Platforms for Interdisciplinary Problem-Based Learning in Aeronautical and Electronic Engineering

Authors: Michael Johnson, Vincenzo Oliveri

Abstract:

This work considers an interdisciplinary Problem-Based Learning (PBL) project developed by lecturers from the Aeronautical and the Electronic and Computer Engineering departments at the University of Limerick. This “CANSAT” project utilises the CanSat can-sized satellite platform to allow students from aeronautical and electronic engineering to engage in a mixed-format (online/face-to-face), interdisciplinary PBL assignment using a real-world platform and application. The project introduces students to the design, development, and construction of the CanSat system over the course of a single semester, enabling students to apply their aeronautical and technical skills and capabilities to the realisation of a working CanSat system. In this case study, the CanSat kits are used to pivot the real-world, discipline-relevant PBL goal of designing, building, and testing the CanSat system with payload(s) from a traditional module-based setting to an online PBL setting. Feedback, impressions, benefits, and challenges identified through the semester are presented. Students found the project to be interesting and rewarding, with the interdisciplinary nature of the project appealing to them. The challenges and difficulties encountered are also addressed, and the solutions developed between the students and facilitators to overcome them are discussed.

Keywords: Problem-Based Learning, Online PBL, Electronic Engineering, Aeronautical Engineering, Interdisciplinary Project, CanSat.

2706 sEMG Interface Design for Locomotion Identification

Authors: Rohit Gupta, Ravinder Agarwal

Abstract:

The surface electromyographic (sEMG) signal has the potential to identify human activities and intention. This potential is further exploited to control artificial limbs using the sEMG signal from the residual limbs of amputees. This paper deals with the development of a multichannel, cost-efficient sEMG signal interface for research applications, along with the evaluation of a proposed class-dependent statistical approach to feature selection. The sEMG signal acquisition interface was developed using the Texas Instruments ADS1298, a front-end interface integrated circuit for ECG applications. The sEMG signal is recorded from two lower-limb muscles for three locomotion modes, namely Plane Walk (PW), Stair Ascending (SA), and Stair Descending (SD). A class-dependent statistical approach is proposed for feature selection, and its performance is compared with 12 pre-existing feature vectors. To make the study more extensive, the performance of five different types of classifiers is compared. The outcome of this work proves the suitability of the proposed feature selection algorithm for locomotion recognition compared with the other existing feature vectors. The SVM classifier is found to be the best-performing classifier among those compared, with an average recognition accuracy of 97.40%. Feature vector selection emerges as the most dominant factor affecting classification performance, as it accounts for 51.51% of the total variance in classification accuracy. The results demonstrate the potential of the developed sEMG signal acquisition interface together with the proposed feature selection algorithm.

Keywords: Classifiers, feature selection, locomotion, sEMG.

2705 Arriving at an Optimum Value of Tolerance Factor for Compressing Medical Images

Authors: Sumathi Poobal, G. Ravindran

Abstract:

Medical imaging takes advantage of digital technology in imaging and teleradiology. In teleradiology systems, a large amount of data is acquired, stored, and transmitted. A major technology that may help to solve the problems associated with massive data storage and data transfer capacity is data compression and decompression. Many image compression methods are available; they are classified as lossless and lossy compression methods. In a lossy compression method, the decompressed image contains some distortion. Fractal image compression (FIC) is a lossy compression method in which an image is coded as a set of contractive transformations in a complete metric space; this set of contractive transformations is guaranteed to produce an approximation to the original image. In this paper, FIC is achieved by PIFS using quadtree partitioning. PIFS is applied to different imaging modalities such as ultrasound, CT scan, angiogram, X-ray, and mammograms. In each modality, approximately twenty images are considered, and the average values of the compression ratio and PSNR are obtained. In this method of fractal encoding, the tolerance factor Tmax is varied from 1 to 10, keeping the other standard parameters constant. For all image modalities, the compression ratio and Peak Signal to Noise Ratio (PSNR) are computed and studied, and the quality of the decompressed image is assessed by the PSNR values. The results show that the compression ratio increases with the tolerance factor, and mammograms have the highest compression ratio. The image quality is not degraded up to an optimum value of the tolerance factor, Tmax = 8, because of the properties of fractal compression.
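
The two quality figures used throughout the study can be computed as below; the 8-bit grayscale assumption and the example image and byte counts are placeholders.

```python
import numpy as np

def compression_ratio(original_bytes, compressed_bytes):
    """Ratio of original to compressed size."""
    return original_bytes / compressed_bytes

def psnr(original, decompressed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB for 8-bit images."""
    original = np.asarray(original, dtype=np.float64)
    decompressed = np.asarray(decompressed, dtype=np.float64)
    mse = np.mean((original - decompressed) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Example with a synthetic image and a slightly distorted copy.
img = np.random.randint(0, 256, size=(256, 256))
noisy = np.clip(img + np.random.normal(0, 3, img.shape), 0, 255)
print(compression_ratio(256 * 256, 8192), psnr(img, noisy))
```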

Keywords: Fractal image compression, IFS, PIFS, PSNR, Quadtree partitioning.

2704 An Adaptive Memetic Algorithm With Dynamic Population Management for Designing HIV Multidrug Therapies

Authors: Hassan Zarei, Ali Vahidian Kamyad, Sohrab Effati

Abstract:

In this paper, a mathematical model of the human immunodeficiency virus (HIV) is utilized and an optimization problem is proposed, with the final goal of implementing an optimal 900-day structured treatment interruption (STI) protocol. Two types of drugs commonly used in highly active antiretroviral therapy (HAART), reverse transcriptase inhibitors (RTI) and protease inhibitors (PI), are considered. To solve the proposed optimization problem, an adaptive memetic algorithm with population management (AMAPM) is proposed. The AMAPM uses a distance measure to control the diversity of the population in genotype space, thus preventing stagnation and premature convergence. Moreover, the AMAPM uses a diversity parameter in phenotype space to dynamically set the population size and the number of crossovers during the search process. Three crossover operators diversify the population simultaneously, and the progress of each crossover operator is used to set the number of crossovers of each type per generation. To escape local optima and introduce new search directions toward the global optimum, two local searchers assist the evolutionary process. In contrast to traditional memetic algorithms, the activation of these local searchers is not random and depends on the diversity parameters in both genotype space and phenotype space. The capability of the AMAPM in finding optimal solutions is demonstrated in comparison with three popular metaheuristics.

Keywords: HIV therapy design, memetic algorithms, adaptive algorithms, nonlinear integer programming.

2703 A Perceptually Optimized Wavelet Embedded Zero Tree Image Coder

Authors: A. Bajit, M. Nahid, A. Tamtaoui, E. H. Bouyakhf

Abstract:

In this paper, we propose a Perceptually Optimized Embedded ZeroTree Image Coder (POEZIC) that introduces a perceptual weighting of the wavelet transform coefficients prior to SPIHT encoding in order to reach a targeted bit rate with a perceptual quality improvement over the coding quality obtained using the SPIHT algorithm alone. The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the human visual system (HVS), which plays an important role in our POEZIC quality assessment. Our POEZIC coder is based on a vision model that incorporates various masking effects of HVS perception. Thus, our coder weights the wavelet coefficients based on that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on (1) luminance masking and contrast masking, (2) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting, and (3) the wavelet error sensitivity (WES), used to reduce the perceptual quantization errors. The new perceptually optimized codec has the same complexity as the original SPIHT technique, yet the experimental results show that our coder demonstrates very good performance in terms of quality measurement.

Keywords: DWT, linear-phase 9/7 filter, 9/7 Wavelets Error Sensitivity WES, CSF implementation approaches, JND Just Noticeable Difference, Luminance masking, Contrast masking, standard SPIHT, Objective Quality Measure, Probability Score PS.

2702 2D Validation of a High-Order Adaptive Cartesian-Grid Finite-Volume Characteristic-Flux Model with Embedded Boundaries

Authors: C. Leroy, G. Oger, D. Le Touzé, B. Alessandrini

Abstract:

A Finite Volume method based on characteristic fluxes for compressible fluids is developed. An explicit cell-centered resolution is adopted, where second- and third-order accuracy is provided by using two different MUSCL schemes with Minmod, Sweby, or Superbee limiters for the hyperbolic part. A few different time integrators are used and are described in this paper. The resolution is performed on a generic unstructured Cartesian grid, where solid boundaries are handled by a cut-cell method. Interfaces are explicitly advected in a non-diffusive way, ensuring local mass conservation. An improved cell cutting has been developed to handle boundaries of arbitrary geometrical complexity. Instead of using a polygon clipping algorithm, we use a voxel traversal algorithm coupled with a local flood-fill scanline to intersect 2D or 3D boundary surface meshes with the fixed Cartesian grid. The small-cell stability problem near the boundaries is solved using a fully conservative merging method. Inflow and outflow conditions are also implemented in the model. The solver is validated on 2D academic test cases, such as the flow past a cylinder. The latter test cases are performed both in the frame of the body and in a fixed frame where the body moves across the mesh. The adaptive Cartesian grid is provided by Paramesh, without complex geometries for the moment.

Keywords: Finite volume method, cartesian grid, compressible solver, complex geometries, Paramesh.

2701 Unsteady Transonic Aerodynamic Analysis for Oscillatory Airfoils using Time Spectral Method

Authors: Mohamad Reza Mohaghegh, Majid Malek Jafarian

Abstract:

This research proposes an algorithm for the simulation of time-periodic unsteady problems via the solution of the unsteady Euler and Navier-Stokes equations. This algorithm, called the Time Spectral method, uses a Fourier representation in time and hence solves for the periodic state directly, without resolving transients (which consume most of the resources in a time-accurate scheme). The mathematical tool used here is the discrete Fourier transform. The method has shown tremendous potential for reducing the computational cost compared to conventional time-accurate methods, by enforcing periodicity and using a Fourier representation in time, leading to spectral accuracy. The accuracy and efficiency of this technique are verified by Euler and Navier-Stokes calculations for pitching airfoils. Because of the turbulent nature of the flow, the Baldwin-Lomax turbulence model has been used for the viscous flow analysis. The results obtained with the Time Spectral method are compared with experimental data, and they verify that only a small number of time intervals per pitching cycle is required to capture the flow physics.
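
The core operator of the Time Spectral method is a spectral time-derivative matrix built from the discrete Fourier representation. A small sketch follows, assuming an odd number N of equally spaced time instances over one period T and the standard Fourier collocation formula for the derivative matrix; the period, N, and the test signal are illustrative.

```python
import numpy as np

def time_spectral_derivative_matrix(n, period):
    """Fourier collocation time-derivative matrix for n (odd) equally spaced
    time instances over one period; du/dt at the instances is approximated by D @ u."""
    assert n % 2 == 1, "odd number of time instances assumed here"
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = (np.pi / period) * (-1.0) ** (i - j) / np.sin(np.pi * (i - j) / n)
    return D

# Check on a known periodic signal: d/dt sin(w t) = w cos(w t).
T, N = 2.0, 9
t = np.arange(N) * T / N
w = 2.0 * np.pi / T
u = np.sin(w * t)
print(np.allclose(time_spectral_derivative_matrix(N, T) @ u, w * np.cos(w * t)))  # True
```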

Keywords: Time Spectral Method, Time-periodic unsteady flow, Discrete Fourier transform, Pitching airfoil, Turbulent flow

2700 Integrating E-learning Environments with Computational Intelligence Assessment Agents

Authors: Christos E. Alexakos, Konstantinos C. Giotopoulos, Eleni J. Thermogianni, Grigorios N. Beligiannis, Spiridon D. Likothanassis

Abstract:

In this contribution, an innovative platform is presented that integrates intelligent agents into legacy e-learning environments. It introduces the design and development of a scalable and interoperable integration platform supporting various assessment agents for e-learning environments. The agents are implemented to provide intelligent assessment services based on computational intelligence techniques such as Bayesian networks and genetic algorithms. The utilization of new and emerging technologies such as web services allows the provided services to be integrated into any web-based legacy e-learning environment.

Keywords: Bayesian Networks, Computational Intelligence techniques, E-learning legacy systems, Service Oriented Integration, Intelligent Agents

2699 Measurement of Real Time Drive Cycle for Indian Roads and Estimation of Component Sizing for HEV Using LabVIEW

Authors: Varsha Shah, Patel Pritesh, Patel Sagar, Prasanta Kundu, Ranjan Maheshwari

Abstract:

The performance of a vehicle depends on driving patterns and the vehicle drivetrain configuration. Driving patterns depend on traffic conditions, road conditions, and driver behavior. HEV design is carried out under certain constraints, such as vehicle operating range, acceleration, deceleration, maximum speed, and road grades, which are directly related to the driving patterns. Therefore, a detailed study of HEV performance over different drive cycles is required for the selection and sizing of HEV components. Simple hardware is designed to measure the velocity-versus-time profile of the vehicle by operating the vehicle on Indian roads under real traffic conditions. To size the HEV components, a detailed dynamic model of the vehicle is developed, considering the effect of the inertia of rotating components such as the wheels, drive chain, engine, and electric motor. Using the vehicle model and data from different Indian drive cycles, the total tractive power demanded by the vehicle and the power supplied by the individual components have been calculated. Using this information, the selection and sizing of the HEV components are carried out so that the HEV performs efficiently under hostile driving conditions. The complete analysis is carried out in LabVIEW.
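
A small sketch of the tractive power calculation implied above, using the standard road-load equation over a measured velocity trace; the vehicle parameters and the example speed samples are illustrative placeholders, not the measured values from the study.

```python
import numpy as np

def tractive_power(v, dt, mass=1200.0, c_rr=0.012, c_d=0.32, area=2.2,
                   rho=1.2, grade=0.0, g=9.81):
    """Instantaneous tractive power (W) demanded over a measured drive cycle.

    v  : velocity samples (m/s) from the drive-cycle measurement hardware
    dt : sampling interval (s)
    """
    v = np.asarray(v, dtype=float)
    a = np.gradient(v, dt)                 # longitudinal acceleration
    f_inertia = mass * a                   # rotating inertia could be lumped into an effective mass
    f_rolling = c_rr * mass * g * np.cos(grade)
    f_aero = 0.5 * rho * c_d * area * v ** 2
    f_grade = mass * g * np.sin(grade)
    return (f_inertia + f_rolling + f_aero + f_grade) * v

# Example: a short measured velocity trace sampled at 1 Hz.
speed_kmph = np.array([0, 5, 12, 20, 28, 30, 30, 25, 15, 0])
p = tractive_power(speed_kmph / 3.6, dt=1.0)
print(p.max())                             # peak tractive power demand (W)
```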

Keywords: BLDC motor, Driving cycle, LabVIEW, Ultracapacitors, Vehicle Dynamics.
