Search results for: Gaussian noise
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1079

509 Dynamic Measurement System Modeling with Machine Learning Algorithms

Authors: Changqiao Wu, Guoqing Ding, Xin Chen

Abstract:

In this paper, ways of modeling dynamic measurement systems are discussed. In particular, a linear single-input single-output system can be modeled with a shallow neural network, with gradient-based optimization algorithms used to search for the proper coefficients. Methods based on the normal equation and on second-order gradient descent are proposed to accelerate the modeling process, and ways of obtaining better gradient estimates are discussed. It is shown that the mathematical essence of the learning objective is maximum likelihood under Gaussian noise. For conventional gradient descent, mini-batch learning and gradient momentum contribute to faster convergence and enhance model ability. Finally, experimental results prove the effectiveness of the second-order gradient descent algorithm and indicate that optimization with the normal equation is the most suitable for linear dynamic models.
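
As a rough illustration of the two optimization routes mentioned above, the sketch below (not the authors' code; the signal length, model order and learning-rate settings are made up) fits a linear single-input single-output model both by the normal equation and by mini-batch gradient descent with momentum.

```python
# Minimal sketch: fitting a linear SISO dynamic model y[k] = sum_i w[i]*u[k-i] + noise,
# both by the normal equation and by mini-batch gradient descent with momentum.
import numpy as np

rng = np.random.default_rng(0)
n, order = 2000, 5
u = rng.normal(size=n)                                   # input signal
w_true = np.array([0.5, -0.3, 0.2, 0.1, -0.05])
X = np.column_stack([np.roll(u, i) for i in range(order)])[order:]   # lagged inputs
y = X @ w_true + 0.01 * rng.normal(size=len(X))          # Gaussian measurement noise

# Normal equation: least squares == maximum likelihood under Gaussian noise.
w_ne = np.linalg.solve(X.T @ X, X.T @ y)

# Mini-batch gradient descent with momentum.
w, v = np.zeros(order), np.zeros(order)
lr, beta, batch = 0.05, 0.9, 64
for epoch in range(200):
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch):
        b = idx[start:start + batch]
        grad = X[b].T @ (X[b] @ w - y[b]) / len(b)       # gradient of 0.5*MSE
        v = beta * v + grad
        w -= lr * v

print("normal equation :", np.round(w_ne, 3))
print("momentum SGD    :", np.round(w, 3))
```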

Keywords: Dynamic system modeling, neural network, normal equation, second order gradient descent.

508 Segmenting Ultrasound B-Mode Images Using RiIG Distributions and Stochastic Optimization

Authors: N. Mpofu, M. Sears

Abstract:

In this paper, we propose a novel algorithm for delineating the endocardial wall from a human heart ultrasound scan. We assume that the gray levels in the ultrasound images are independent and identically distributed random variables with different Rician Inverse Gaussian (RiIG) distributions. Both synthetic and real clinical data will be used for testing the algorithm. Algorithm performance will be evaluated, first, against the expert radiologist's assessment of a soft copy of an ultrasound scan during the scanning process and, second, against the doctor's conclusions after reviewing a printed copy of the same scan. Successful implementation of this algorithm should make it possible to differentiate normal from abnormal soft tissue and help identify a disease, determine its stage, and decide how best to treat the patient. We hope that an automated system using this algorithm will be useful in public hospitals, especially in Third World countries where shortages of skilled radiologists and of ultrasound machines are common. These public hospitals are usually the first and last stop for most patients in these countries.

Keywords: Endocardial Wall, Rician Inverse Gaussian Distributions, Segmentation, Ultrasound Images.

507 Effective Planning of Public Transportation Systems: A Decision Support Application

Authors: Ferdi Sönmez, Nihal Yorulmaz

Abstract:

Decision making on the proper planning of public transportation systems to serve potential users is a must for metropolitan areas. To attract travelers to the projected modes of transport, adequately fair overall travel times should be provided. In this way, other benefits such as lower traffic congestion, improved road safety, and lower noise and atmospheric pollution may be earned. The congestion that comes with increasing demand for public transportation is becoming part of our lives and making residents' lives difficult; hence, regulations should be made to reduce it. To provide a constructive and balanced regulation of public transportation systems, the right stations should be located in the right places. This study aims to design and implement a Decision Support System (DSS) application to determine the optimal bus stop locations for public transport in Istanbul, one of the biggest and oldest cities in the world. The required information is gathered from IETT (Istanbul Electricity, Tram and Tunnel) Enterprises, which manages all public transportation services in the Istanbul Metropolitan Area. Cost assignments are made using the most realistic values available, and the cost is calculated with the help of equations produced by a bi-level optimization model. For this study, 300 buses, 300 drivers, 10 lines and 110 stops are used. The user cost of each station and the operator cost of each line are calculated. Components such as cost, security and noise pollution are considered significant factors in the solution of the set covering problem, which is formulated to identify and locate the minimum number of possible bus stops. Preliminary research and model development for this study refer to a previously published article by the corresponding author. The model results are presented with the intent of supporting specialists' decisions on locating stops effectively.
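
The set covering step mentioned in the abstract can be illustrated with a minimal greedy heuristic; the demand points, candidate stops and coverage sets below are invented for the sketch and have nothing to do with the IETT data.

```python
# Greedy set-cover sketch (illustrative only): choose the smallest set of candidate
# stop locations such that every demand point is within reach of at least one stop.
def greedy_set_cover(universe, candidates):
    """universe: set of demand points; candidates: dict stop -> set of covered points."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # pick the candidate stop that covers the most still-uncovered points
        best = max(candidates, key=lambda s: len(candidates[s] & uncovered))
        if not candidates[best] & uncovered:
            raise ValueError("some demand points cannot be covered")
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen

demand = {1, 2, 3, 4, 5, 6}
stops = {"A": {1, 2, 3}, "B": {2, 4}, "C": {4, 5, 6}, "D": {1, 6}}
print(greedy_set_cover(demand, stops))   # e.g. ['A', 'C']
```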

Keywords: User cost, bi-level optimization model, decision support, operator cost, transportation.

506 Localising Gauss's Law and the Electric Charge Induction on a Conducting Sphere

Authors: Sirapat Lookrak, Anol Paisal

Abstract:

Space debris takes numerous forms, including ferrous and non-ferrous materials. An electric field will induce negative charges to separate from positive charges inside the space debris. In this research, we focus only on conducting materials. The assumption is that the electric charge density on a conducting surface is proportional to the electric field on that surface, by Gauss's law. We seek the induced charge density produced by an external electric field perpendicular to a conducting spherical surface. The object is a sphere over which the external electric field is not uniform, so the field is considered locally. The localised spherical surface is treated as a tangent plane, the Gaussian surface is a very small cylinder, and every point on the spherical surface has its own cylinder. The electric field from a circular electrode has been calculated in the near-field and far-field approximations, with a view to explaining touchless manoeuvring of space debris via its orbit properties. The electric charge density is then calculated from the near-field and far-field approximations.
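
A small numerical sketch of the local (pillbox) form of Gauss's law that the abstract relies on, checked against the textbook case of a conducting sphere in a uniform field, where the induced density is 3 ε0 E0 cos θ; the field value is an arbitrary illustration, not a result from the paper.

```python
# Local (pillbox) form of Gauss's law: sigma = eps0 * E_perp just outside a conductor.
# Textbook check: a conducting sphere in a *uniform* external field E0 carries the
# induced density sigma(theta) = 3*eps0*E0*cos(theta); a non-uniform electrode field
# would simply replace E_perp point by point.
import numpy as np

EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m

def sigma_from_field(e_perp):
    """Induced surface charge density from the local perpendicular field."""
    return EPS0 * e_perp

E0 = 1.0e3                   # uniform external field, V/m (illustrative value)
theta = np.linspace(0.0, np.pi, 7)
sigma = sigma_from_field(3.0 * E0 * np.cos(theta))   # field at the sphere surface
for t, s in zip(theta, sigma):
    print(f"theta = {np.degrees(t):6.1f} deg  sigma = {s:+.3e} C/m^2")
```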

Keywords: Near-field approximation, far-field approximation, localized Gauss’s law, electric charge density.

505 Density Functional Calculations of N-14 and B-11 NQR Parameters in the H-capped (5, 5) Single-Wall BN Nanotube

Authors: Ahmad Seif, Karim Zare, Asadallah Boshra, Mehran Aghaie

Abstract:

Density functional theory (DFT) calculations were performed to compute nitrogen-14 and boron-11 nuclear quadrupole resonance (NQR) spectroscopy parameters in a representative model of an armchair boron nitride nanotube (BNNT) for the first time. The considered model, consisting of a 1 nm length of H-capped (5, 5) single-wall BNNT, was first allowed to fully relax, and the NQR calculations were then carried out on the geometrically optimized model. The evaluated nuclear quadrupole coupling constants and asymmetry parameters for the mentioned nuclei reveal that the model can be divided into seven layers of nuclei with equivalent electrostatic environments, where the nuclei at the ends of the tube have a very strong electrostatic environment compared with the other nuclei along the length of the tube. The calculations were performed with the Gaussian 98 program package.
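
A hedged sketch of the usual post-processing step behind such results: converting electric-field-gradient (EFG) principal values, in atomic units as printed by quantum-chemistry codes, into the quadrupole coupling constant and asymmetry parameter. The EFG numbers are placeholders and the quadrupole moments are commonly tabulated values, not data from the paper.

```python
# NQR parameters from EFG principal values: C_Q = e*Q*V_zz/h, eta = |V_xx - V_yy|/|V_zz|.
# EFG eigenvalues below are made up; Q values are commonly tabulated quadrupole moments.
import numpy as np

E = 1.602176634e-19          # elementary charge, C
H = 6.62607015e-34           # Planck constant, J*s
AU_EFG = 9.7173624292e21     # 1 atomic unit of EFG in V/m^2
BARN = 1.0e-28               # m^2

Q_MOMENT = {"N-14": 0.02044, "B-11": 0.04059}   # quadrupole moments in barn (tabulated)

def nqr_parameters(nucleus, efg_principal_au):
    """Return (C_Q in MHz, eta) from EFG principal values given in atomic units."""
    vxx, vyy, vzz = sorted(efg_principal_au, key=abs)     # |Vzz| >= |Vyy| >= |Vxx|
    cq_hz = E * Q_MOMENT[nucleus] * BARN * vzz * AU_EFG / H
    eta = abs((vxx - vyy) / vzz)
    return cq_hz / 1e6, eta

# Illustrative (made-up) EFG tensor eigenvalues for one nitrogen site:
print(nqr_parameters("N-14", (-0.35, -0.15, 0.50)))
```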

Keywords: Armchair Nanotube, Density Functional Theory, Nuclear Quadrupole Resonance.

504 Analysis of Sonographic Images of Breast

Authors: M. Bastanfard, S. Jafari, B. Jalaeian

Abstract:

Ultrasound images are a very useful diagnostic tool for distinguishing benign from malignant masses of the breast. However, there is considerable overlap between benign and malignant appearances in ultrasonic images, which makes them difficult to interpret. In this paper, a new noise removal algorithm is used to improve the images and the classification process. The masses are classified using wavelet transform coefficients together with morphological and textural features as a novel feature set for this goal. Bayesian estimation theory is used to classify the tissues into three classes according to their features.
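
A minimal sketch of the general pipeline, assuming PyWavelets and scikit-learn are available; the wavelet-energy and texture features and the Gaussian naive Bayes classifier below only stand in for the paper's feature set and Bayesian decision rule.

```python
# Wavelet-coefficient statistics plus crude texture features, fed to a Gaussian
# naive Bayes classifier; the toy patches only exercise the pipeline.
import numpy as np
import pywt
from sklearn.naive_bayes import GaussianNB

def wavelet_features(img):
    """Energy and spread of 2-D wavelet sub-bands plus two simple intensity stats."""
    ll, (lh, hl, hh) = pywt.dwt2(img, "db2")
    feats = []
    for band in (ll, lh, hl, hh):
        feats += [np.mean(np.abs(band)), np.std(band)]
    feats += [img.mean(), img.std()]
    return np.array(feats)

rng = np.random.default_rng(1)
# Toy stand-in data: "benign" = smooth patches, "malignant" = high-contrast patches.
X = np.array([wavelet_features(rng.normal(loc=0, scale=s, size=(32, 32)))
              for s in ([1.0] * 40 + [3.0] * 40)])
y = np.array([0] * 40 + [1] * 40)

clf = GaussianNB().fit(X, y)      # Bayesian decision rule with Gaussian class models
print("training accuracy:", clf.score(X, y))
```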

Keywords: Bayesian estimation theory, breast, ultrasound, wavelet.

503 Histogram Slicing to Better Reveal Special Thermal Objects

Authors: S. Ratna Sulistiyanti, Adhi Susanto, Thomas Sri Widodo, Gede Bayu Suparta

Abstract:

In this paper, an experiment to enhance the visibility of hot objects in a thermal image acquired with an ordinary digital camera is reported. Lowpass and median filters are first applied to suppress the distracting granular noise, and the common thresholding and slicing techniques are then used on the histogram at different gray levels, followed by a subjective comparative evaluation. The best result was obtained with a threshold level of 115 and three slices.
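
A short sketch of the described chain (the filter sizes are assumptions, not taken from the paper): lowpass and median filtering to suppress granular noise, then thresholding at gray level 115 and slicing the remaining range into three bands.

```python
# Lowpass + median filtering, then threshold-and-slice on the gray-level range.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(64, 64)).astype(float)   # stand-in thermal image

smooth = ndimage.gaussian_filter(img, sigma=1.0)           # lowpass
smooth = ndimage.median_filter(smooth, size=3)             # median (granular noise)

threshold, n_slices = 115, 3
edges = np.linspace(threshold, smooth.max(), n_slices + 1) # slice the "hot" range
sliced = np.digitize(smooth, edges)                        # 0 = below threshold
print("pixels per slice:", np.bincount(sliced.ravel()))
```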

Keywords: enhance, thermal image, thresholding and slicing techniques, granular noise, hot objects.

502 Posture Recognition using Combined Statistical and Geometrical Feature Vectors based on SVM

Authors: Omer Rashid, Ayoub Al-Hamadi, Axel Panning, Bernd Michaelis

Abstract:

It is hard to perceive the interaction process with machines when visual information is not available. In this paper, we address this issue by providing interaction through visual techniques. Posture recognition is performed for American Sign Language to recognize static alphabets and numbers. 3D information is exploited to obtain segmentation of the hands and face using a normal Gaussian distribution and depth information. Features for posture recognition are computed using statistical and geometrical properties that are translation, rotation and scale invariant. Hu moments as statistical features, and circularity and rectangularity as geometrical features, are incorporated to build the feature vectors. These feature vectors are used to train an SVM classifier that recognizes the static alphabets and numbers. For the alphabets, curvature analysis is carried out to reduce misclassifications. The experimental results show that the proposed system recognizes posture symbols with recognition rates of 98.65% and 98.6% for ASL alphabets and numbers, respectively.
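
A hedged sketch of the feature pipeline, assuming OpenCV and scikit-learn: Hu moments as the statistical part, circularity (4πA/P²) and rectangularity (area over bounding-box area) as the geometrical part, fed to an SVM. The toy masks are only there to exercise the code, not real hand segmentations.

```python
# Statistical (Hu moments) and geometrical (circularity, rectangularity) features
# of a binary mask, classified with an SVM.
import cv2
import numpy as np
from sklearn.svm import SVC

def posture_features(mask):
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)
    area, perim = cv2.contourArea(c), cv2.arcLength(c, True)
    _, _, w, h = cv2.boundingRect(c)
    hu = cv2.HuMoments(cv2.moments(c)).ravel()              # 7 statistical features
    circularity = 4.0 * np.pi * area / (perim ** 2 + 1e-9)  # geometrical features
    rectangularity = area / (w * h + 1e-9)
    return np.hstack([hu, circularity, rectangularity])

disc = cv2.circle(np.zeros((64, 64), np.uint8), (32, 32), 20, 1, -1)
box = cv2.rectangle(np.zeros((64, 64), np.uint8), (12, 20), (52, 44), 1, -1)
X = np.array([posture_features(m) for m in [disc, box] * 10])
y = np.array([0, 1] * 10)
print("SVM training accuracy:", SVC(kernel="rbf").fit(X, y).score(X, y))
```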

Keywords: Feature Extraction, Posture Recognition, Pattern Recognition, Application.

501 A New Self-Adaptive EP Approach for ANN Weights Training

Authors: Kristina Davoian, Wolfram-M. Lippe

Abstract:

Evolutionary Programming (EP) represents a methodology of Evolutionary Algorithms (EA) in which mutation is the main reproduction operator. This paper presents a novel EP approach for Artificial Neural Network (ANN) learning. The proposed strategy consists of two components: a self-adaptive component, which carries phenotype information, and a dynamic component, which is described by the genotype. Self-adaptation is achieved by the addition of a value, called the network weight, which depends on the total number of hidden layers and the average number of neurons in the hidden layers. The dynamic component changes its value depending on the fitness of the chromosome exposed to mutation. Thus, the mutation step size is controlled by two components, encapsulated in the algorithm, which adjust it according to the characteristics of a predefined ANN architecture and the fitness of a particular chromosome. A comparative analysis of the proposed approach and classical EP (Gaussian mutation) showed that significant acceleration of the evolution process is achieved by using both phenotype and genotype information in the mutation strategy.
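
The sketch below is purely illustrative: it contrasts classical EP Gaussian mutation with a mutation whose step size is scaled by an architecture-dependent term and by the parent's fitness, in the spirit of the abstract; the exact scaling formulas are assumptions, not the authors' equations.

```python
# Classical EP mutation vs. a step size scaled by an architecture term (phenotype)
# and the parent's fitness (genotype); the scaling rules here are assumptions.
import numpy as np

rng = np.random.default_rng(3)

def classical_ep_mutation(weights, sigma=0.1):
    return weights + sigma * rng.normal(size=weights.shape)

def self_adaptive_mutation(weights, fitness, n_hidden_layers, avg_neurons):
    network_weight = 1.0 / (n_hidden_layers * avg_neurons)   # phenotype component
    dynamic = 1.0 / (1.0 + fitness)                           # genotype component
    sigma = network_weight * dynamic
    return weights + sigma * rng.normal(size=weights.shape)

w = rng.normal(size=8)
print(classical_ep_mutation(w))
print(self_adaptive_mutation(w, fitness=0.5, n_hidden_layers=2, avg_neurons=6))
```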

Keywords: Artificial Neural Networks (ANN), Learning Theory, Evolutionary Programming (EP), Mutation, Self-Adaptation.

500 Parameter Selections of Fuzzy C-Means Based on Robust Analysis

Authors: Kuo-Lung Wu

Abstract:

The weighting exponent m is called the fuzzifier and can influence the clustering performance of fuzzy c-means (FCM); m ∈ [1.5, 2.5] is suggested by Pal and Bezdek [13]. In this paper, we discuss the robustness properties of FCM and show that the parameter m influences the robustness of FCM. According to our analysis, a large m value makes FCM more robust to noise and outliers. However, if m is larger than the theoretical upper bound proposed by Yu et al. [14], the sample mean becomes the unique optimizer. We therefore suggest implementing the FCM algorithm with m ∈ [1.5, 4], under the restriction that m remains smaller than the theoretical upper bound.
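
A minimal fuzzy c-means sketch showing where the fuzzifier m enters the membership and center updates, so its effect on outliers can be observed directly; the data and settings are invented.

```python
# Minimal FCM: try m in [1.5, 4] (as suggested) and watch how the outlier is weighted.
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                                   # fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        centers = Um @ X / Um.sum(axis=1, keepdims=True) # membership-weighted means
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))               # u_ik ~ d_ik^(-2/(m-1))
        U /= U.sum(axis=0)
    return centers, U

X = np.vstack([np.random.default_rng(1).normal(0, 1, (50, 2)),
               np.random.default_rng(2).normal(6, 1, (50, 2)),
               [[30.0, 30.0]]])                          # one gross outlier
for m in (1.5, 2.0, 4.0):
    centers, _ = fcm(X, m=m)
    print(f"m = {m}: centers =\n{np.round(centers, 2)}")
```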

Keywords: Fuzzy c-means, robust, fuzzifier.

499 Relation of Optimal Pilot Offsets in the Shifted Constellation-Based Method for the Detection of Pilot Contamination Attacks

Authors: Dimitriya A. Mihaylova, Zlatka V. Valkova-Jarvis, Georgi L. Iliev

Abstract:

One possible approach for maintaining the security of communication systems relies on Physical Layer Security mechanisms. However, in wireless time division duplex systems, where uplink and downlink channels are reciprocal, the channel estimation procedure is exposed to attacks known as pilot contamination, with the aim of having an enhanced data signal sent to the malicious user. The Shifted 2-N-PSK method involves two random legitimate pilots in the training phase, each of which belongs to a constellation shifted from the original N-PSK symbols by certain degrees. In this paper, legitimate pilots’ offset values and their influence on the detection capabilities of the Shifted 2-N-PSK method are investigated. As the implementation of the technique depends on the relation between the shift angles rather than their specific values, the optimal interconnection between the two legitimate constellations is investigated. The results show that no regularity exists in the relation between the pilot contamination attack (PCA) detection probability and the choice of offset values. Therefore, an adversary who aims to obtain the exact offset values can only employ a brute-force attack, but the large number of possible combinations for the shifted constellations makes such an attack difficult to mount successfully. For this reason, the number of optimal shift value pairs is also studied for both 100% and 98% probabilities of detecting pilot contamination attacks. Although the Shifted 2-N-PSK method has been broadly studied in different signal-to-noise ratio scenarios, in multi-cell systems the interference from the signals in other cells should also be taken into account. Therefore, the inter-cell interference impact on the performance of the method is investigated by means of a large number of simulations. The results show that the detection probability of the Shifted 2-N-PSK decreases with decreasing signal-to-interference-plus-noise ratio.

Keywords: Channel estimation, inter-cell interference, pilot contamination attacks, wireless communications.

498 Synthesis of Filtering in Stochastic Systems on Continuous-Time Memory Observations in the Presence of Anomalous Noises

Authors: S. Rozhkova, O. Rozhkova, A. Harlova, V. Lasukov

Abstract:

We have carried out the optimal synthesis of a root-mean-square objective filter to estimate the state vector in the case where, within an observation channel with memory, anomalous noises with unknown mathematical expectation supplement the regular noises. The synthesis has been carried out for linear continuous-time stochastic systems.

Keywords: Mathematical expectation, filtration, anomalous noise, memory.

497 Mean-Square Performance of Adaptive Filter Algorithms in Nonstationary Environments

Authors: Mohammad Shams Esfand Abadi, John Hakon Husøy

Abstract:

Employing a recently introduced unified adaptive filter theory, we show how the performance of a large number of important adaptive filter algorithms can be predicted within a general framework in nonstationary environments. This approach is based on energy conservation arguments and does not need to assume a Gaussian or white distribution for the regressors. The general performance analysis can be used to evaluate the mean-square performance of the Least Mean Square (LMS) algorithm, its normalized version (NLMS), the family of Affine Projection Algorithms (APA), the Recursive Least Squares (RLS), the Data-Reusing LMS (DR-LMS), its normalized version (NDR-LMS), the Block Least Mean Squares (BLMS), the Block Normalized LMS (BNLMS), the Transform Domain Adaptive Filters (TDAF) and the Subband Adaptive Filters (SAF) in nonstationary environments. We also establish general expressions for the steady-state excess mean-square error in this environment for all of these adaptive algorithms. Finally, we demonstrate through simulations that these results are useful in predicting adaptive filter performance.
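
As an empirical illustration (not the paper's unified analysis), the sketch below runs one of the covered algorithms, NLMS, against a slowly drifting plant and estimates the steady-state mean-square error from the tail of the learning curve; all settings are made up.

```python
# NLMS tracking a random-walk (nonstationary) plant; steady-state MSE from the tail.
import numpy as np

rng = np.random.default_rng(4)
n, order = 20000, 8
w_opt = rng.normal(size=order)
u = rng.normal(size=n)
mu, eps = 0.5, 1e-6
w = np.zeros(order)
err2 = np.zeros(n)

for k in range(order, n):
    w_opt += 1e-4 * rng.normal(size=order)        # random-walk nonstationarity
    x = u[k - order:k][::-1]                      # regressor (most recent sample first)
    d = w_opt @ x + 0.01 * rng.normal()           # desired signal + observation noise
    e = d - w @ x
    w += mu * e * x / (eps + x @ x)               # NLMS update
    err2[k] = e ** 2

print("steady-state MSE estimate:", err2[-5000:].mean())
```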

Keywords: Adaptive filter, general framework, energy conservation, mean-square performance, nonstationary environment.

496 On the Learning of Causal Relationships between Banks in Saudi Equities Market Using Ensemble Feature Selection Methods

Authors: Adel Aloraini

Abstract:

Financial forecasting using machine learning techniques has received considerable attention over the last decade. In this ongoing work, we show how machine learning of graphical models is able to infer visualized causal interactions between different banks in the Saudi equities market. One important discovery from such learned causal graphs is how companies influence each other and to what extent. In this work, Gaussian graphical models combined with newly developed ensemble penalized feature selection methods, which bring together a filtering method, a wrapper method and a regularizer, are presented. A comparison between these different ensemble combinations is also shown, and the best ensemble method is used to infer the causal relationships between banks in the Saudi equities market.
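
A hedged sketch of the core estimator family: a sparse Gaussian graphical model fitted with the graphical lasso, one penalized method of the general kind referred to, on synthetic return series with hypothetical bank names; the ensemble filter/wrapper layer is not reproduced here.

```python
# Sparse Gaussian graphical model over synthetic "bank return" series via graphical lasso;
# nonzero off-diagonal precision entries are read as edges of the interaction graph.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(5)
n_days, banks = 500, ["Bank_A", "Bank_B", "Bank_C", "Bank_D"]  # hypothetical names
common = rng.normal(size=n_days)
R = np.column_stack([common + 0.5 * rng.normal(size=n_days) for _ in banks])
R[:, 3] = rng.normal(size=n_days)            # Bank_D made (almost) independent

model = GraphicalLasso(alpha=0.2).fit(R)
prec = model.precision_
for i in range(len(banks)):
    for j in range(i + 1, len(banks)):
        if abs(prec[i, j]) > 1e-3:           # nonzero precision entry -> an edge
            print(f"edge: {banks[i]} -- {banks[j]}  ({prec[i, j]:+.3f})")
```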

Keywords: Causal interactions, banks, feature selection, regularizer.

495 Fuzzy Rules Generation and Extraction from Support Vector Machine Based on Kernel Function Firing Signals

Authors: Prasan Pitiranggon, Nunthika Benjathepanun, Somsri Banditvilai, Veera Boonjing

Abstract:

Our study proposes an alternative method for building a Fuzzy Rule-Based System (FRB) from a Support Vector Machine (SVM). The first set of fuzzy IF-THEN rules is obtained through an equivalence between the SVM decision network and the zero-order Sugeno FRB type of the Adaptive Network Fuzzy Inference System (ANFIS). The second set of rules is generated by combining the first set based on the strength of the firing signals of the support vectors under a Gaussian kernel. The final set of rules is then obtained from the second set through input scatter partitioning. A distinctive advantage of our method is the guarantee that the number of final fuzzy IF-THEN rules is not more than the number of support vectors in the trained SVM. The final FRB system obtained is capable of performing classification with results comparable to its SVM counterpart, but it has an advantage over the black-boxed SVM in that it may reveal human-comprehensible patterns.

Keywords: Fuzzy Rule Base, Rule Extraction, Rule Generation, Support Vector Machine.

494 T-Wave Detection Based on an Adjusted Wavelet Transform Modulus Maxima

Authors: Samar Krimi, Kaïs Ouni, Noureddine Ellouze

Abstract:

The method described in this paper deals with the problem of T-wave detection in an ECG. Determining the position of a T-wave is complicated due to its low amplitude and its ambiguous, changing morphology. A wavelet transform approach handles these complications, so a method based on this concept was developed. In this way, we developed a detection method that is able to detect T-waves with a sensitivity of 93% and a correct-detection ratio of 93%, even with a serious amount of baseline drift and noise.

Keywords: ECG, Modulus Maxima Wavelet Transform, Performance, T-wave detection

493 Tool Failure Detection Based on Statistical Analysis of Metal Cutting Acoustic Emission Signals

Authors: Othman Belgassim, Krzysztof Jemielniak

Abstract:

The analysis of the Acoustic Emission (AE) signal generated by metal cutting processes has often been approached statistically. This is due to the stochastic nature of the emission signal, a result of factors affecting the signal from its generation through transmission and sensing. Different techniques are applied in this manner, each of which is suitable for certain processes. In metal cutting, where the emission generated by the deformation process is rather continuous, an appropriate method for analysing the AE signal based on the root mean square (RMS) of the signal is often used and is suitable for use with conventional signal processing systems. The aim of this paper is to set out a strategy for tool failure detection in turning processes via statistical analysis of the AE generated in the cutting zone. The strategy is based on investigating the distribution moments of the AE signal at predetermined sampling intervals; the skew and kurtosis of these distributions are the key elements in the detection. A normal (Gaussian) distribution was suggested first, but was then eliminated as insufficient. The so-called Beta distribution was then considered; used with an assumed β density function, it gave promising results with regard to chipping and tool breakage detection.
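
A short sketch of the monitoring idea: skew and kurtosis computed over fixed windows of the AE signal, with windows whose moments depart from the Gaussian baseline flagged; the thresholds and the synthetic breakage burst are illustrative assumptions.

```python
# Windowed skew/kurtosis monitoring of a synthetic AE signal with an injected burst.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(6)
ae = rng.normal(size=50000)                 # continuous AE during normal cutting
ae[42000:42200] += rng.exponential(5.0, 200) * rng.choice([-1, 1], 200)  # breakage burst

win = 1000
for start in range(0, len(ae), win):
    seg = ae[start:start + win]
    s, k = skew(seg), kurtosis(seg)         # excess kurtosis (0 for a Gaussian)
    if abs(s) > 0.5 or k > 2.0:             # departure from the Gaussian baseline
        print(f"window {start:6d}-{start + win}: skew={s:+.2f}, kurtosis={k:+.2f}  <-- flagged")
```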

Keywords: AE signal, skew, kurtosis, tool failure

492 Combining Bagging and Additive Regression

Authors: Sotiris B. Kotsiantis

Abstract:

Bagging and boosting are among the most popular re-sampling ensemble methods that generate and combine a diversity of regression models using the same learning algorithm as base learner. Boosting algorithms are considered stronger than bagging on noise-free data; however, there are strong empirical indications that bagging is much more robust than boosting in noisy settings. For this reason, in this work we built an ensemble by averaging bagging and boosting ensembles with 10 sub-learners each. We performed a comparison with simple bagging and boosting ensembles with 25 sub-learners on standard benchmark datasets, and the proposed ensemble gave better accuracy.
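
A minimal sketch of the proposed combination, assuming scikit-learn: average the predictions of a bagging ensemble and a boosting ensemble, each with 10 sub-learners, and compare against the individual ensembles on synthetic regression data.

```python
# Average the predictions of a bagging and a boosting regressor (10 sub-learners each).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor, AdaBoostRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=400, n_features=10, noise=10.0, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

base = DecisionTreeRegressor(max_depth=4)
bag = BaggingRegressor(base, n_estimators=10, random_state=0).fit(Xtr, ytr)
boost = AdaBoostRegressor(base, n_estimators=10, random_state=0).fit(Xtr, ytr)

avg_pred = 0.5 * (bag.predict(Xte) + boost.predict(Xte))    # averaged ensemble
for name, pred in [("bagging", bag.predict(Xte)),
                   ("boosting", boost.predict(Xte)),
                   ("averaged", avg_pred)]:
    print(f"{name:8s} MSE: {mean_squared_error(yte, pred):.1f}")
```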

Keywords: Regressors, statistical learning.

491 Ship Detection Requirements Analysis for Different Sea States: Validation on Real SAR Data

Authors: Jaime Martín-de-Nicolás, David Mata-Moya, Nerea del-Rey-Maestre, Pedro Gómez-del-Hoyo, María-Pilar Jarabo-Amores

Abstract:

Ship detection is nowadays quite an important issue in tasks related to sea traffic control, fishery management, and ship search and rescue. Although it has traditionally been carried out by patrol ships or aircraft, coverage limitations, weather conditions and sea state can become a problem. Synthetic aperture radars can surpass these coverage limitations and work under any climatological condition. A fast CFAR ship detector based on a robust statistical modeling of sea clutter with respect to sea states in SAR images is used. In this paper, the minimum SNR required to obtain a given detection probability with a given false alarm rate for any sea state is determined. A Gaussian target model using real SAR data is considered. Results show that the required SNR does not depend heavily on the class considered. Provided there is some variation in the backscattering of targets in SAR imagery, the detection probability is limited, and a post-processing stage based on morphology would be suitable.
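
For orientation only, the sketch below uses the simplest textbook detection model, a test statistic that is N(0, 1) under noise-only and N(√SNR, 1) with a target present, to compute the minimum SNR for a wanted detection probability at a given false alarm rate; this is a simplification, not the paper's sea-clutter model.

```python
# Textbook mean-shift Gaussian detection: the threshold follows from Pfa, and
#   Pd = Q(Q^-1(Pfa) - sqrt(SNR))  =>  SNR_min = (Q^-1(Pfa) - Q^-1(Pd))^2
import numpy as np
from scipy.stats import norm

def required_snr_db(pd, pfa):
    q_inv = norm.isf                       # inverse of Q(x) = P(N(0,1) > x)
    snr = (q_inv(pfa) - q_inv(pd)) ** 2
    return 10.0 * np.log10(snr)

for pfa in (1e-3, 1e-4, 1e-6):
    print(f"Pfa={pfa:.0e}: SNR for Pd=0.9 -> {required_snr_db(0.9, pfa):.1f} dB")
```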

Keywords: SAR, generalized gamma distribution, detection curves, radar detection.

490 Spread Spectrum Code Estimation by Genetic Algorithm

Authors: V. R. Asghari, M. Ardebilipour

Abstract:

In the context of spectrum surveillance, a method to recover the code of a spread spectrum signal is presented, where the receiver has no knowledge of the transmitter's spreading sequence. The approach is based on a genetic algorithm (GA), which is forced to model the received signal. Genetic algorithms (GAs) are well known for their robustness in solving complex optimization problems. Experimental results show that the method provides a good estimation, even when the signal power is below the noise power.
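
A toy GA sketch of the same idea (population size, crossover and mutation settings are arbitrary): search for the ±1 spreading code that best correlates with a noisy received chip sequence, with no prior knowledge of the code.

```python
# GA search for an unknown +/-1 spreading code from a noisy observation.
import numpy as np

rng = np.random.default_rng(7)
L = 31
true_code = rng.choice([-1, 1], size=L)
received = true_code + 0.8 * rng.normal(size=L)        # signal power near noise power

def fitness(code):
    return code @ received                             # correlation with the observation

pop = rng.choice([-1, 1], size=(60, L))
for gen in range(200):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-30:]]            # keep the better half
    kids = parents[rng.integers(0, 30, 30)].copy()
    cut = rng.integers(1, L, 30)
    for i, c in enumerate(cut):                        # one-point crossover
        kids[i, c:] = parents[rng.integers(0, 30), c:]
    flip = rng.random(kids.shape) < 0.02               # mutation: flip chips
    kids[flip] *= -1
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(c) for c in pop])]
print("chips recovered:", int((best == true_code).sum()), "of", L)
```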

Keywords: Code estimation, genetic algorithms, spread spectrum.

489 Image-Based UAV Vertical Distance and Velocity Estimation Algorithm during the Vertical Landing Phase Using Low-Resolution Images

Authors: Seyed-Yaser Nabavi-Chashmi, Davood Asadi, Karim Ahmadi, Eren Demir

Abstract:

The landing phase of a UAV is very critical, as there are many uncertainties in this phase, which can easily lead to a hard landing or even a crash. In this paper, the estimation of relative distance and velocity to the ground, as one of the most important processes during the landing phase, is studied. Using accurate measurement sensors as an alternative can be very expensive (e.g., LIDAR) or limited in operational range (e.g., ultrasonic sensors). Additionally, absolute positioning systems like GPS or IMU cannot provide the distance to the ground independently. The focus of this paper is to determine whether the relative distance and velocity between the UAV and the ground can be measured in the landing phase using only low-resolution images taken by a monocular camera. The Lucas-Kanade feature detection technique is employed to extract the most suitable feature in a series of images taken during the UAV landing. Two different approaches based on Extended Kalman Filters (EKF) are proposed, and their performance in estimating the relative distance and velocity is compared. The first approach uses the kinematics of the UAV as the process and the calculated optical flow as the measurement. The second approach uses the feature's projection on the camera plane (pixel position) as the measurement, while employing both the kinematics of the UAV and the dynamics of variation of the projected point as the process, to estimate both relative distance and relative velocity. To verify the results, a sequence of low-quality images taken by a camera moving on a specifically developed testbed has been used to compare the performance of the proposed algorithms. The case studies show that the quality of the images results in considerable noise, which reduces the performance of the first approach. Using the projected feature position, on the other hand, is much less sensitive to the noise and estimates the distance and velocity with relatively high accuracy. This approach can also be used to predict the future projected feature position, which can drastically decrease the computational workload, an important criterion for real-time applications.
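
A heavily simplified one-dimensional stand-in for the second approach: the state is height and vertical speed, and the measurement is the pixel coordinate of a ground feature under a pinhole model (z = f/h for a feature at unit lateral offset). This only illustrates the EKF structure, not the authors' filter; every numerical value is an assumption.

```python
# 1-D EKF with state [height, vertical speed] and a pinhole pixel measurement z = f/h.
import numpy as np

dt, f = 0.05, 800.0                       # time step [s], focal length [px]
F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity process model
Q = np.diag([1e-4, 1e-3])                 # process noise
R = np.array([[4.0]])                     # pixel measurement noise (px^2)

def h_meas(x):                            # pixel position of the ground feature
    return np.array([f / x[0]])

def H_jac(x):                             # Jacobian of the measurement model
    return np.array([[-f / x[0] ** 2, 0.0]])

rng = np.random.default_rng(8)
x_true = np.array([20.0, -1.0])           # 20 m up, descending at 1 m/s
x_est, P = np.array([15.0, 0.0]), np.diag([25.0, 4.0])

for k in range(200):
    x_true = F @ x_true
    z = h_meas(x_true) + rng.normal(0, 2.0, 1)          # noisy pixel measurement
    x_est, P = F @ x_est, F @ P @ F.T + Q               # EKF predict
    H = H_jac(x_est)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                      # EKF update
    x_est = x_est + K @ (z - h_meas(x_est))
    P = (np.eye(2) - K @ H) @ P

print("true (h, v):", np.round(x_true, 2), " estimated:", np.round(x_est, 2))
```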

Keywords: Automatic landing, multirotor, nonlinear control, parameters estimation, optical flow.

488 Conceptual Design of the TransAtlantic as a Research Platform for the Development of “Green” Aircraft Technologies

Authors: Victor Maldonado

Abstract:

Recent concerns over the growing impact of aviation on climate change have prompted the emergence of a field referred to as Sustainable or “Green” Aviation, dedicated to mitigating the harmful impact of aviation-related CO2 emissions and noise pollution on the environment. In the current paper, a unique “green” business jet aircraft called the TransAtlantic was designed (using analytical formulations common in conceptual design) in order to show the feasibility of transatlantic passenger air travel with an aircraft of less than 10,000 pounds takeoff weight. Such an advance in fuel efficiency will require the development and integration of advanced and emerging aerospace technologies. The TransAtlantic design is intended to serve as a research platform for the development of technologies such as active flow control. Recent advances in the field of active flow control and how this technology can be integrated on a sub-scale flight demonstrator are discussed in this paper. Flow control is a technique to modify the behavior of coherent structures in wall-bounded flows (over aerodynamic surfaces such as wings and turbine nozzles), resulting in improved aerodynamic cruise and flight control efficiency. One of the key challenges to application in manned aircraft is the development of a robust high-momentum actuator that can penetrate the boundary layer flowing over aerodynamic surfaces. These deficiencies may be overcome in the current development and testing of a novel electromagnetic synthetic jet actuator which replaces piezoelectric materials as the driving diaphragm. One of the overarching goals of the TransAtlantic research platform is to foster national and international collaboration to demonstrate (in numerical and experimental models) reduced CO2 and noise pollution via the development and integration of technologies and methodologies in design optimization, fluid dynamics, structures and composites, propulsion, and controls.

Keywords: Aircraft Design, Sustainable “Green” Aviation, Active Flow Control, Aerodynamics.

487 Digital Image Watermarking in the Wavelet Transform Domain

Authors: Kamran Hameed, Adeel Mumtaz, S.A.M. Gilani

Abstract:

In this paper, we start by characterizing the most important and distinguishing features of wavelet-based watermarking schemes, after studying the overwhelming number of algorithms proposed in the literature. The copyright protection application scenario is considered and, building on the experience gained, two distinguishing watermarking schemes were implemented. A detailed comparison and the obtained results are presented and discussed. We conclude that Joo's technique [1] is more robust to standard noise attacks than Dote's technique [2].
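
A minimal wavelet-domain watermarking sketch, assuming PyWavelets; additive embedding in a detail sub-band with informed detection is only one of the many scheme variants the paper surveys, not a reproduction of either compared technique.

```python
# Additive watermark embedding in the HH sub-band of a 1-level DWT, informed detection.
import numpy as np
import pywt

rng = np.random.default_rng(10)
img = rng.integers(0, 256, size=(64, 64)).astype(float)   # stand-in host image
wm = rng.choice([-1.0, 1.0], size=(32, 32))               # watermark pattern
alpha = 2.0                                                # embedding strength

ll, (lh, hl, hh) = pywt.dwt2(img, "haar")
hh_marked = hh + alpha * wm                                # embed in HH details
marked = pywt.idwt2((ll, (lh, hl, hh_marked)), "haar")

# Informed detection: correlate the extracted HH difference with the watermark.
_, (_, _, hh_rx) = pywt.dwt2(marked, "haar")
corr = np.sum((hh_rx - hh) * wm) / wm.size
print("detector response:", round(corr, 2), " (alpha was", alpha, ")")
```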

Keywords: Digital image, Copyright protection, Watermarking, Wavelet transform.

486 Automatic Musical Genre Classification Using Divergence and Average Information Measures

Authors: Hassan Ezzaidi, Jean Rouat

Abstract:

Recently, much research has been conducted to retrieve pertinent parameters and adequate models for automatic music genre classification. In this paper, two measures based on information theory concepts are investigated for mapping the feature space to the decision space. A Gaussian Mixture Model (GMM) is used as a baseline and reference system. Various strategies are proposed for training and testing sessions with matched or mismatched conditions: long training and long testing, and long training and short testing. For all experiments, the file sections used for testing were never used during training. With matched conditions, all examined measures yield the best and similar scores (almost 100%). With mismatched conditions, the proposed measures yield better scores than the GMM baseline system, especially for the short testing case. It is also observed that the average discrimination information measure is most appropriate for music category classification and, on the other hand, the divergence measure is more suitable for music subcategory classification.
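
A sketch of the GMM baseline system, assuming scikit-learn: one mixture model per genre fitted on training feature frames, with a test excerpt assigned to the genre of highest average log-likelihood; the features and genre labels below are random stand-ins for real audio features.

```python
# One GMM per genre; classify a test excerpt by the highest average log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(11)
train = {"classical": rng.normal(0.0, 1.0, size=(500, 12)),   # e.g. 12 MFCC-like dims
         "rock":      rng.normal(1.5, 1.0, size=(500, 12))}   # illustrative labels

models = {g: GaussianMixture(n_components=4, random_state=0).fit(X)
          for g, X in train.items()}

test_excerpt = rng.normal(1.5, 1.0, size=(200, 12))            # frames from one file
scores = {g: m.score(test_excerpt) for g, m in models.items()} # mean log-likelihood
print("decision:", max(scores, key=scores.get), scores)
```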

Keywords: Audio feature, information measures, music genre.

485 Parametric Modeling Approach for Call Holding Times for IP based Public Safety Networks via EM Algorithm

Authors: Badarch Tuyatsetseg

Abstract:

This paper presents parametric probability density models for call holding times (CHTs) to an emergency call center, based on actual data collected over a week in the public Emergency Information Network (EIN) in Mongolia. When the set of chosen candidates from the Gamma distribution family is fitted to the call holding time data, it is observed that the whole area of the CHT empirical histogram is underestimated, due to spikes of higher probability and long tails of lower probability in the histogram. Therefore, we provide a parametric model based on a mixture of lognormal distributions (Gaussian in the logarithmic domain), with explicit analytical expressions, for modeling the CHTs of PSNs. Finally, we show that the CHTs for PSNs are fitted reasonably by a mixture of lognormal distributions via simulation of the expectation-maximization algorithm. This result is significant as it provides a useful mathematical tool, in explicit form, based on a mixture of lognormal distributions.
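
Since a mixture of lognormal densities is exactly a Gaussian mixture on the logarithm of the holding times, EM can be run with an ordinary GMM on log(CHT); the synthetic holding times below stand in for the EIN data.

```python
# Fit a 2-component lognormal mixture by running EM (a GMM) on log(holding time).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(9)
cht = np.concatenate([rng.lognormal(mean=3.0, sigma=0.4, size=700),    # shorter calls
                      rng.lognormal(mean=5.0, sigma=0.6, size=300)])   # long-tail calls

gmm = GaussianMixture(n_components=2, random_state=0).fit(np.log(cht).reshape(-1, 1))
for w, mu, var in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
    print(f"weight={w:.2f}  log-mean={mu:.2f}  log-sigma={np.sqrt(var):.2f}")
```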

Keywords: A mixture of lognormal distributions, modeling call holding times, public safety network.

484 Clustering Categorical Data Using Hierarchies (CLUCDUH)

Authors: Gökhan Silahtaroğlu

Abstract:

Clustering large populations is an important problem when the data contain noise and differently shaped clusters. A good clustering algorithm or approach should be efficient enough to detect clusters sensitively. Besides space complexity, time complexity also gains importance as the data size grows. Using hierarchies, we developed a new algorithm that splits attributes according to the values they take, choosing the splitting dimension so as to divide the database into roughly equal parts as far as possible. At each node we calculate certain descriptive statistical features of the data that reside there, and by pruning we generate the natural clusters with a complexity of O(n).

Keywords: Clustering, tree, split, pruning, entropy, gini.

483 High Resolution Methods Based On Rank Revealing Triangular Factorizations

Authors: M. Bouri, S. Bourennane

Abstract:

In this paper, we propose a novel subspace estimation method for high-resolution methods that avoids eigendecomposition, in which the sample Cross-Spectral Matrix (CSM) is replaced by an upper triangular matrix obtained from LU factorization. This novel method decreases the computational complexity. The method relies on a recently published result on Rank-Revealing LU (RRLU) factorization. Simulation results demonstrate that the new algorithm outperforms the Householder Rank-Revealing QR (RRQR) factorization method and MUSIC in low Signal-to-Noise Ratio (SNR) scenarios.

Keywords: Factorization, Localization, Matrix, Signal subspace.

482 Intelligent Audio Watermarking using Genetic Algorithm in DWT Domain

Authors: M. Ketcham, S. Vongpradhip

Abstract:

In this paper, an innovative watermarking scheme for audio signals based on genetic algorithms (GA) in the discrete wavelet transform domain is proposed. It is robust against the watermarking attacks commonly employed in the literature, and the quality of the watermarked signal is also considered. We employ a GA to optimize the localization and intensity of the watermark. The watermark detection process can be performed without using the original audio signal. The experimental results demonstrate that the watermark is inaudible and robust to many digital signal processing operations, such as cropping, low-pass filtering, and additive noise.

Keywords: Intelligent Audio Watermarking, Genetic Algorithm, DWT Domain.

481 Optimal Feature Extraction Dimension in Finger Vein Recognition Using Kernel Principal Component Analysis

Authors: Amir Hajian, Sepehr Damavandinejadmonfared

Abstract:

In this paper, the issue of dimensionality reduction in finger vein recognition systems is investigated using Kernel Principal Component Analysis (KPCA). One aspect of KPCA is finding the most appropriate kernel function for finger vein recognition, as there are several kernel functions that can be used within PCA-based algorithms. In this paper, however, another side of PCA-based algorithms, particularly KPCA, is investigated: the dimension of the feature vector. This aspect is of importance especially when it comes to real-world applications and the usage of such algorithms, since a fixed dimension of the feature vector has to be set to reduce the dimension of the input and output data and extract the features from them; a classifier is then applied to classify the data and make the final decision. We analyze KPCA (with polynomial, Gaussian, and Laplacian kernels) in detail in this paper and investigate the optimal feature extraction dimension in finger vein recognition using KPCA.
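
A sketch of the dimension sweep, assuming scikit-learn: the number of retained KPCA components is varied for polynomial and Gaussian (RBF) kernels and scored through a simple downstream classifier (a Laplacian kernel could be supplied the same way as a precomputed kernel matrix); the data are synthetic, not finger vein images.

```python
# Sweep the retained KPCA dimension per kernel and score a downstream classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=300, n_features=40, n_informative=10,
                           random_state=0)

for kernel in ("poly", "rbf"):
    for n_comp in (5, 10, 20, 40):
        pipe = make_pipeline(KernelPCA(n_components=n_comp, kernel=kernel),
                             KNeighborsClassifier())
        score = cross_val_score(pipe, X, y, cv=5).mean()
        print(f"kernel={kernel:4s}  dim={n_comp:2d}  accuracy={score:.3f}")
```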

Keywords: Biometrics, finger vein recognition, Principal Component Analysis (PCA), Kernel Principal Component Analysis (KPCA).

480 Morphing Human Faces: Automatic Control Points Selection and Color Transition

Authors: Stephen Karungaru, Minoru Fukumi, Norio Akamatsu

Abstract:

In this paper, we propose a morphing method by which color face images can be freely transformed. The main focus of this work is the transformation of one face image into another. The method is fully automatic in that it can morph two face images by automatically detecting all the control points necessary to perform the morph. A face detection neural network, edge detection and median filters are employed to detect the face position and features. Five control points, for both the source and target images, are then extracted based on the facial features. A triangulation method is then used to match and warp the source image to the target image using the control points. Finally, color interpolation is done using a color Gaussian model that calculates the color for each particular frame depending on the number of frames used. A real-coded genetic algorithm is used in both the image warping and color blending steps to assist in step size decisions and to speed up the morphing. This method results in very smooth morphs and is fast to process.

Keywords: color transition, genetic algorithms, morphing, warping
