Search results for: correlated noise
Teager-Huang Analysis Applied to Sonar Target Recognition
Authors: J.-C. Cexus, A.O. Boudraa
Abstract:
In this paper, a new approach for target recognition based on the Empirical mode decomposition (EMD) algorithm of Huang et al. [11] and the energy tracking operator of Teager [13]-[14] is introduced. The conjunction of these two methods is called Teager-Huang analysis. This approach is well suited for nonstationary signal analysis. The impulse response (IR) of the target is first band-pass filtered into subsignals (components) called Intrinsic mode functions (IMFs) with well defined Instantaneous frequency (IF) and Instantaneous amplitude (IA). Each IMF is a zero-mean AM-FM component. In the second step, the energy of each IMF is tracked using the Teager energy operator (TEO). IF and IA, useful to describe the time-varying characteristics of the signal, are estimated using the Energy separation algorithm (ESA) of Maragos et al. [16]-[17]. In the third step, a set of features such as skewness and kurtosis are extracted from the IF, IA and IMF energy functions. The Teager-Huang analysis is tested on a set of synthetic IRs of Sonar targets with different physical characteristics (density, velocity, shape, etc.). PCA is first applied to the features to discriminate between manufactured and natural targets. The manufactured patterns are then classified into spheres and cylinders. One hundred percent correct recognition is achieved with twenty-three echoes, where the sixteen IRs used for training are noise free and the seven IRs used for the testing phase are corrupted with white Gaussian noise.
Keywords: Target recognition, Empirical mode decomposition, Teager-Kaiser energy operator, Features extraction.
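For readers who want to experiment with the processing chain described in this abstract (EMD, then the Teager energy operator, then energy separation into IA/IF, then skewness/kurtosis features), the following is a minimal Python sketch. It assumes the third-party PyEMD package for the decomposition and uses a DESA-style discrete energy separation; it is an illustration, not the authors' implementation.

```python
# Minimal sketch of the Teager-Huang chain: EMD -> Teager energy -> discrete ESA -> features.
import numpy as np
from scipy.stats import skew, kurtosis
from PyEMD import EMD  # assumed third-party package providing empirical mode decomposition

def teager(x):
    """Discrete Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    psi = np.zeros_like(x, dtype=float)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def esa(x, fs):
    """DESA-style energy separation: instantaneous amplitude and frequency estimates."""
    y = np.diff(x, prepend=x[0])                       # backward difference of the IMF
    psi_x, psi_y = teager(x), teager(y)
    omega = np.arccos(np.clip(1.0 - psi_y / (2.0 * psi_x + 1e-12), -1.0, 1.0))
    ia = np.sqrt(np.abs(psi_x)) / (np.abs(np.sin(omega)) + 1e-12)
    return ia, omega * fs / (2.0 * np.pi)

def teager_huang_features(ir, fs):
    """Skewness/kurtosis of IMF energy, IA and IF functions, stacked into one vector."""
    imfs = EMD()(ir)                                   # decomposition into zero-mean AM-FM IMFs
    feats = []
    for imf in imfs:
        ia, inst_f = esa(imf, fs)
        for sig in (teager(imf), ia, inst_f):
            feats += [skew(sig), kurtosis(sig)]
    return np.asarray(feats)
```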
An Enhanced SAR-Based Tsunami Detection System
Authors: Jean-Pierre Dubois, Jihad S. Daba, H. Karam, J. Abdallah
Abstract:
Tsunami early detection and warning systems have proved to be of ultimate importance, especially after the destructive tsunami that hit Japan in March 2011. Such systems are crucial to inform the authorities of any risk of a tsunami and of the degree of its danger, in order to make the right decision and notify the public of the actions they need to take to save their lives. The purpose of this research is to enhance existing tsunami detection and warning systems. We first propose an automated and miniaturized model of an early tsunami detection and warning system. The operation of the tsunami warning system is simulated using the data acquisition toolbox of Matlab, with measurements acquired from specified internet pages owing to the lack of the required real-life seismic and hydrologic sensors, and a graphical user interface is built for the system. In the second phase of this work, we implement various satellite image filtering schemes to enhance the acquired synthetic aperture radar images of the tsunami-affected region that are masked by speckle noise. This enables us to conduct a post-tsunami damage extent study and calculate the percentage damage. We conclude by proposing improvements to the telecommunication infrastructure of existing tsunami warning systems through a migration to IP-based networks and fiber optic links.
Keywords: Detection, GIS, GSN, GTS, GPS, speckle noise, synthetic aperture radar, tsunami, wiener filter.
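As an illustration of the speckle-filtering phase described above, the sketch below applies a Wiener filter to a SAR amplitude image and derives a crude damaged-area percentage. The image reader, file name and darkness threshold are assumptions for the example, not values from the study.

```python
# Hedged sketch: Wiener filtering of a speckled SAR image plus a toy damage-extent estimate.
import numpy as np
import imageio.v3 as iio               # assumed image reader; any loader would do
from scipy.signal import wiener

sar = iio.imread("sar_post_tsunami.png").astype(float)   # hypothetical input file
filtered = wiener(sar, mysize=(5, 5))                    # local adaptive Wiener filter

# Toy damage-extent estimate: flooded/smooth areas backscatter weakly, so count dark pixels.
threshold = filtered.mean() - filtered.std()
damage_percent = 100.0 * np.mean(filtered < threshold)
print(f"Estimated damaged area: {damage_percent:.1f}%")
```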
Multi Task Scheme to Monitor Multivariate Environments Using Artificial Neural Network
Authors: K. Atashgar
Abstract:
When an assignable cause (or causes) manifests itself in a multivariate process and the process shifts to an out-of-control condition, a root-cause analysis should be initiated by quality engineers to identify and eliminate the assignable cause(s) that affected the process. A root-cause analysis in a multivariate process is more complex than in a univariate process. In the case of a process involving several correlated variables, an effective root-cause analysis is possible only when all the required knowledge can be identified simultaneously: the out-of-control condition, the change point, and the variable(s) responsible for the out-of-control condition. Although the literature addresses different schemes to monitor multivariate processes, one can find few scientific reports focused on all of this required knowledge. To the best of the author's knowledge, this is the first time that a multi-task model based on an artificial neural network (ANN) is reported that monitors all the required knowledge at the same time for a multivariate process with more than two correlated quality characteristics. The performance of the proposed scheme is evaluated numerically when different step shifts affect the mean vector. The average run length is used to investigate the performance of the proposed multi-task model. The simulation results indicate that the multi-task scheme provides all the required knowledge effectively.
Keywords: Artificial neural network, Multivariate process, Statistical process control, Change point.
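A minimal sketch of how the average run length (ARL) of such a monitoring scheme can be estimated by simulation is shown below. The detector here is a generic scikit-learn MLP stand-in for a trivariate correlated process, not the author's multi-task model; the covariance values and shift size are illustrative.

```python
# Illustrative ARL-by-simulation for a neural-network monitor of a correlated trivariate process.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
mu0 = np.zeros(3)
cov = np.array([[1.0, 0.5, 0.3],
                [0.5, 1.0, 0.4],
                [0.3, 0.4, 1.0]])

# Training data: in-control samples (label 0) and mean-shifted samples (label 1).
x_ic = rng.multivariate_normal(mu0, cov, 5000)
x_oc = rng.multivariate_normal(mu0 + [1.0, 0.0, 0.0], cov, 5000)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500, random_state=0)
clf.fit(np.vstack([x_ic, x_oc]), np.r_[np.zeros(5000), np.ones(5000)])

def run_length(shift, max_n=5000):
    """Number of observations until the detector first signals, for a given mean shift."""
    for n in range(1, max_n + 1):
        x = rng.multivariate_normal(mu0 + shift, cov, 1)
        if clf.predict(x)[0] == 1:
            return n
    return max_n

arl = np.mean([run_length(np.array([1.0, 0.0, 0.0])) for _ in range(200)])
print(f"Estimated out-of-control ARL for a 1-sigma shift in X1: {arl:.1f}")
```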
Saliva Cortisol and Yawning as a Predictor of Neurological Disease
Authors: Simon B. N. Thompson
Abstract:
Cortisol is important to our immune system, regulates our stress response, and is a factor in maintaining brain temperature. Saliva cortisol is a practical and useful non-invasive measurement that signifies the presence of this important hormone. Electrical activity in the jaw muscles typically rises when the muscles are moved during yawning, and the electrical level is found to be correlated with the cortisol level. In two studies using identical paradigms, a total of 108 healthy subjects were exposed to yawning-provoking stimuli so that their cortisol levels and the electrical nerve impulses from their jaw muscles were recorded. Electrical activity is highly correlated with cortisol levels in healthy people. The Hospital Anxiety and Depression Scale, Yawning Susceptibility Scale, General Health Questionnaire, and demographic and health details were collected, and exclusion criteria were applied for voluntary recruitment: chronic fatigue, diabetes, fibromyalgia, heart condition, high blood pressure, hormone replacement therapy, multiple sclerosis, and stroke. Significant differences were found between the saliva cortisol samples of the yawners as compared with the non-yawners between rest and post-stimuli. Significant evidence supports the Thompson Cortisol Hypothesis, which suggests that rises in cortisol levels are associated with yawning. Ethics approval was granted, and professional code of conduct, confidentiality, and safety issues were addressed therein.
Keywords: Cortisol, Diagnosis, Neurological Disease, Thompson Cortisol Hypothesis, Yawning.
Effective Traffic Lights Recognition Method for Real Time Driving Assistance System in the Daytime
Authors: Hyun-Koo Kim, Ju H. Park, Ho-Youl Jung
Abstract:
This paper presents an effective traffic lights recognition method for the daytime. First, the Potential Traffic Lights Detector (PTLD) uses the full color information of the YCbCr channel image and produces binary images of the green and red traffic lights. After the PTLD step, a Shape Filter (SF) is used to remove noise such as traffic signs, street trees, vehicles, and buildings. The noise removal relies on blob properties of the binary image: length, area, bounding-box area, etc. Finally, after an intermediate association step whose goal is to define relevant candidate regions from the previously detected traffic lights, an Adaptive Multi-class Classifier (AMC) is executed. The classification method uses Haar-like features and the Adaboost algorithm. For the experiments, the method was implemented on an Intel Core CPU at 2.80 GHz with 4 GB RAM and tested on urban and rural roads. In these tests, our method was compared with standard object-recognition learning processes and reached a detection rate of up to 94%, which is better than the results achieved with cascade classifiers. The computation time of our proposed method is 15 ms.
Keywords: Traffic Light Detection, Multi-class Classification, Driving Assistance System, Haar-like Feature, Color Segmentation Method, Shape Filter.
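A rough sketch of the PTLD and shape-filter steps is given below: threshold the YCrCb image for red and green candidate blobs, then reject blobs whose area or aspect ratio is implausible for a traffic light. The thresholds and file name are illustrative guesses, not the authors' values.

```python
# Hedged sketch of color thresholding (PTLD) plus blob-based shape filtering (SF) with OpenCV.
import cv2
import numpy as np

frame = cv2.imread("frame.jpg")                         # hypothetical daytime road frame
ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
y, cr, cb = cv2.split(ycrcb)

red_mask = ((cr > 160) & (y > 80)).astype(np.uint8)     # illustrative red-light thresholds
green_mask = ((cr < 110) & (cb < 110) & (y > 80)).astype(np.uint8)  # illustrative green thresholds

def shape_filter(mask, min_area=20, max_area=2000, max_aspect=1.5):
    """Keep only roughly square, reasonably sized blobs (the shape-filter step)."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    keep = np.zeros_like(mask)
    for i in range(1, n):
        x0, y0, w, h, area = stats[i]
        if min_area <= area <= max_area and max(w, h) / max(min(w, h), 1) <= max_aspect:
            keep[labels == i] = 1
    return keep

red_lights = shape_filter(red_mask)
green_lights = shape_filter(green_mask)
print("red candidates:", int(red_lights.max()), "green candidates:", int(green_lights.max()))
```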
Fuzzy Relatives of the CLARANS Algorithm With Application to Text Clustering
Authors: Mohamed A. Mahfouz, M. A. Ismail
Abstract:
This paper introduces new algorithms (a fuzzy relative of the CLARANS algorithm, FCLARANS, and Fuzzy c-Medoids based on randomized search, FCMRANS) for fuzzy clustering of relational data. Unlike the existing fuzzy c-medoids algorithm (FCMdd), in which the within-cluster dissimilarity of each cluster is minimized in each iteration by recomputing new medoids given the current memberships, FCLARANS minimizes the same objective function minimized by FCMdd by changing the current medoids in such a way that the sum of the within-cluster dissimilarities is minimized. Computing new medoids may be affected by noise, because outliers may join the computation of medoids, while the choice of medoids in FCLARANS is dictated by the location of a predominant fraction of points inside a cluster and is therefore less sensitive to the presence of outliers. In FCMRANS, the step of computing new medoids in FCMdd is modified to be based on randomized search. Furthermore, a new initialization procedure is developed that adds randomness to the initialization procedure used with FCMdd. Both FCLARANS and FCMRANS are compared with the robust and linearized version of fuzzy c-medoids (RFCMdd). Experimental results with different samples of the Reuters-21578 and Newsgroups (20NG) collections and generated datasets with noise show that FCLARANS is more robust than both RFCMdd and FCMRANS. Finally, both FCMRANS and FCLARANS are more efficient, and their outputs are almost the same as that of RFCMdd in terms of classification rate.
Keywords: Data Mining, Fuzzy Clustering, Relational Clustering, Medoid-Based Clustering, Cluster Analysis, Unsupervised Learning.
Performance Analysis of Genetic Algorithm with kNN and SVM for Feature Selection in Tumor Classification
Authors: C. Gunavathi, K. Premalatha
Abstract:
Tumor classification is a key area of research in the field of bioinformatics. Microarray technology is commonly used in the study of disease diagnosis using gene expression levels. The main drawback of gene expression data is that it contains thousands of genes and very few samples. Feature selection methods are used to select the informative genes from the microarray and considerably improve the classification accuracy. In the proposed method, a Genetic Algorithm (GA) is used for effective feature selection. Informative genes are identified based on the T-statistics, Signal-to-Noise Ratio (SNR) and F-test values. The initial candidate solutions of the GA are obtained from the top-m informative genes. The classification accuracy of the k-Nearest Neighbor (kNN) method is used as the fitness function for the GA. In this work, kNN and Support Vector Machine (SVM) are used as the classifiers. The experimental results show that the proposed work is suitable for effective feature selection. With the help of the selected genes, the GA-kNN method achieves 100% accuracy in 4 datasets and the GA-SVM method in 5 out of the 10 datasets. The GA with kNN and SVM is thus demonstrated to be an accurate method for microarray-based tumor classification.
Keywords: F-Test, Gene Expression, Genetic Algorithm, k-Nearest Neighbor, Microarray, Signal-to-Noise Ratio, Support Vector Machine, T-statistics, Tumor Classification.
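The sketch below illustrates the GA-kNN idea on synthetic data: genes are pre-ranked by a signal-to-noise ratio, chromosomes select subsets of the top-m genes, and fitness is the cross-validated kNN accuracy. The data, population size and GA operators are stand-ins, not the paper's setup.

```python
# Toy sketch of GA-based gene selection with kNN cross-validation accuracy as fitness.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 200))                 # 60 samples, 200 synthetic "genes"
y = rng.integers(0, 2, 60)
X[y == 1, :10] += 1.5                          # make the first 10 genes informative

# Rank genes by a signal-to-noise ratio and keep the top-m as the GA gene pool.
snr = np.abs(X[y == 1].mean(0) - X[y == 0].mean(0)) / (X[y == 1].std(0) + X[y == 0].std(0))
pool = np.argsort(snr)[::-1][:30]

def fitness(mask):
    """Cross-validated kNN accuracy using only the genes selected by the binary mask."""
    genes = pool[mask.astype(bool)]
    if genes.size == 0:
        return 0.0
    return cross_val_score(KNeighborsClassifier(5), X[:, genes], y, cv=3).mean()

# One GA generation: keep the fittest half, uniform crossover, bit-flip mutation.
popu = rng.integers(0, 2, size=(20, pool.size))
scores = np.array([fitness(c) for c in popu])
parents = popu[np.argsort(scores)[-10:]]
cross = np.where(rng.random(parents.shape) < 0.5, parents, np.roll(parents, 1, axis=0))
next_gen = np.where(rng.random(cross.shape) < 0.02, 1 - cross, cross)
print("best fitness in the initial population:", scores.max())
```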
Markov Game Controller Design Algorithms
Authors: Rajneesh Sharma, M. Gopal
Abstract:
Markov games are a generalization of the Markov decision process to a multi-agent setting. The two-player zero-sum Markov game framework offers an effective platform for designing robust controllers. This paper presents two novel controller design algorithms that use ideas from the game-theory literature to produce reliable controllers that are able to maintain performance in the presence of noise and parameter variations. A more widely used approach for controller design is H∞ optimal control, which suffers from high computational demand and, at times, may be infeasible. Our approach generates an optimal control policy for the agent (controller) via a simple linear program, enabling the controller to learn about the unknown environment. The controller faces an unknown environment, and in our formulation this environment corresponds to the behavior rules of the noise, modeled as the opponent. The proposed controller architectures attempt to improve controller reliability by a gradual mixing of algorithmic approaches drawn from the game-theory literature and the Minimax-Q Markov game solution approach, in a reinforcement-learning framework. We test the proposed algorithms on a simulated inverted-pendulum swing-up task and compare their performance against standard Q-learning.
Keywords: Reinforcement Learning, Markov Decision Process, Matrix Games, Markov Games, Smooth Fictitious Play, Controller, Inverted Pendulum.
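The core computational step mentioned above, obtaining the controller's mixed policy by solving a zero-sum matrix game with a simple linear program (the same LP used in Minimax-Q), can be sketched as follows; SciPy's linprog is assumed and the payoff matrix is a toy example.

```python
# Hedged sketch: value and mixed strategy of a zero-sum matrix game via linear programming.
import numpy as np
from scipy.optimize import linprog

def solve_matrix_game(A):
    """Return (value, mixed strategy) for the row player (controller) of payoff matrix A."""
    m, n = A.shape
    c = np.r_[np.zeros(m), -1.0]                     # variables [x_1..x_m, v]; maximize v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])        # v - sum_i x_i A[i, j] <= 0 for all j
    b_ub = np.zeros(n)
    A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)     # probabilities sum to 1
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * m + [(None, None)]
    res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=bounds, method="highs")
    return res.x[-1], res.x[:m]

# Toy game: controller actions (rows) against the noise/opponent actions (columns).
payoff = np.array([[ 1.0, -2.0],
                   [-1.0,  3.0]])
value, policy = solve_matrix_game(payoff)
print(f"game value {value:.3f}, controller policy {policy}")
```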
Liquid-Liquid Equilibrium for the Binary Mixtures of α-Pinene + Water and α-Terpineol + Water
Authors: Herti Utami, Sutijan, Roto, Wahyudi Budi Sediawan
Abstract:
α-Pinene is the main component of most turpentine oils. The hydration of α-pinene with acid catalysts leads to a complex mixture of monoterpenes. In order to obtain more valuable products, the α-pinene in the turpentine can be hydrated in dilute mineral acid solutions to produce α-terpineol. The design of separation processes requires information on phase equilibrium and related thermodynamic properties. This paper reports the results of a study on the liquid-liquid equilibrium (LLE) of the systems α-pinene + water and α-terpineol + water. Binary LLE for the α-pinene + water and α-terpineol + water systems were determined experimentally at 301 K and atmospheric pressure. The two-component mixture was stirred for about 30 min, then left for about 2 h for complete phase separation. The composition of both phases was analyzed using a gas chromatograph. The experimental data were correlated by considering both the NRTL and UNIQUAC activity coefficient models. The LLE data for the α-pinene + water and α-terpineol + water systems were correlated successfully by the NRTL model, whereas the experimental data were not satisfactorily fitted by the UNIQUAC model. For the α-pinene + water system at 301 K, the NRTL model correlates the LLE data with an RMSD between the experimental and calculated mole fractions of 0.0404% for α = 0.3 and 0.0058% for α = 0.61. For the α-terpineol + water system at 301 K, the corresponding RMSD is 0.1487% for α = 0.3 and 0.0032% for α = 0.6.
Keywords: α-Pinene, α-Terpineol, Liquid-Liquid Equilibrium, NRTL model, UNIQUAC model.
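For reference, a minimal sketch of the binary NRTL activity-coefficient model used for the correlation is given below, together with the isoactivity check that underlies an LLE calculation. The interaction parameters and phase compositions are placeholders, not the fitted values of this work.

```python
# Minimal binary NRTL activity-coefficient sketch with an LLE isoactivity check.
import numpy as np

def nrtl_gamma(x1, tau12, tau21, alpha=0.3):
    """Binary NRTL activity coefficients (gamma1, gamma2) at mole fraction x1 of component 1."""
    x2 = 1.0 - x1
    G12, G21 = np.exp(-alpha * tau12), np.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                     + tau12 * G12 / (x2 + x1 * G12)**2)
    ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                     + tau21 * G21 / (x1 + x2 * G21)**2)
    return np.exp(ln_g1), np.exp(ln_g2)

# LLE isoactivity condition: x1'*gamma1' = x1''*gamma1'' (and likewise for component 2).
tau12, tau21 = 3.5, 2.0                    # placeholder interaction parameters, not the fitted ones
for x1 in (0.02, 0.95):                    # hypothetical water-rich and organic-rich phases
    g1, g2 = nrtl_gamma(x1, tau12, tau21)
    print(f"x1 = {x1:.2f}: a1 = {x1 * g1:.4f}, a2 = {(1 - x1) * g2:.4f}")
```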
Full Potential Study of Electronic and Optical Properties of NdF3
Authors: Sapan Mohan Saini
Abstract:
We report the electronic structure and optical properties of the NdF3 compound. Our calculations are based on density functional theory (DFT) using the full potential linearized augmented plane wave (FPLAPW) method with the inclusion of spin-orbit coupling. We employed the local spin density approximation (LSDA) and the Coulomb-corrected local spin density approximation (LSDA + U). We find that the standard LSDA approach is incapable of correctly describing the electronic properties of such materials, since it positions the f-bands incorrectly, resulting in an incorrect metallic ground state. On the other hand, the LSDA + U approximation, known for treating the highly correlated 4f electrons properly, is able to reproduce the correct insulating ground state. Interestingly, however, we do not find any significant differences between the optical properties calculated using LSDA and LSDA + U, suggesting that the 4f electrons do not play a decisive role in the optical properties of these compounds. The reflectivity of the NdF3 compound stays low up to 7 eV, which is consistent with its large energy gap. The calculated energy gaps are in good agreement with experiments. Our calculated reflectivity compares well with the experimental data, and the results are analyzed in the light of band-to-band transitions.
Keywords: FPLAPW Method, Optical Properties, Rare Earth Trifluorides, LSDA+U.
Locating Center Points for Radial Basis Function Networks Using Instance Reduction Techniques
Authors: Rana Yousef, Khalil el Hindi
Abstract:
The behavior of Radial Basis Function (RBF) networks greatly depends on how the center points of the basis functions are selected. In this work we investigate the use of instance reduction techniques, originally developed to reduce the storage requirements of instance-based learners, for this purpose. Five instance-based reduction techniques were used to determine the sets of center points, and RBF networks were trained using these sets of centers. The performance of the RBF networks is studied in terms of classification accuracy and training time. The results obtained were compared with two other Radial Basis Function networks: RBF networks that use all instances of the training set as center points (RBF-ALL) and Probabilistic Neural Networks (PNN). The former achieves high classification accuracies and the latter requires shorter training time. Results showed that RBF networks trained using sets of centers located by noise-filtering techniques (ALLKNN and ENN), rather than pure reduction techniques, produce the best results in terms of classification accuracy. These networks require shorter training time than RBF-ALL and achieve higher classification accuracy than PNN. Thus, using ALLKNN and ENN to select center points gives a better combination of classification accuracy and training time. Our experiments also show that using the reduced sets to train the networks is beneficial, especially in the presence of noise in the original training sets.
Keywords: Radial basis function networks, Instance-based reduction, PNN.
Taguchi-Based Six Sigma Approach to Optimize Surface Roughness for Milling Processes
Authors: Sky Chou, Joseph C. Chen
Abstract:
This paper focuses on using Six Sigma methodologies to improve the surface roughness of a manufactured part produced by a CNC milling machine. It presents a case study where the surface roughness of milled aluminum is required to reduce or eliminate defects and to improve the process capability indices Cp and Cpk for a CNC milling process. The Six Sigma methodology, the DMAIC (define, measure, analyze, improve, and control) approach, was applied in this study to improve the process, reduce defects, and ultimately reduce costs. The Taguchi-based Six Sigma approach was applied to identify the optimized processing parameters that led to the targeted surface roughness specified by our customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors identified consist of feed rate, depth of cut, spindle speed, and surface roughness. The noise factor is the difference between the old cutting tool and the new cutting tool. The confirmation run with the optimal parameters confirmed that the new parameter settings are correct, and the new settings also improved the process capability index. The purpose of this study is to show that the Taguchi-based Six Sigma approach can be efficiently used to phase out defects and improve the process capability index of the CNC milling process.
Keywords: CNC machining, Six Sigma, Surface roughness, Taguchi methodology.
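The Taguchi analysis step can be illustrated as below: smaller-the-better S/N ratios for an L9(3^4) array with two noise-factor replicates per run, followed by main-effect averages per factor level. The roughness readings are made-up numbers, not the study's measurements.

```python
# Illustrative Taguchi analysis: smaller-the-better S/N ratios and main effects for an L9 array.
import numpy as np

L9 = np.array([[1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
               [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
               [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1]])   # standard L9(3^4) orthogonal array

# Two roughness replicates per run (e.g., old tool / new tool as the noise factor) - made-up data.
y = np.array([[0.82, 0.90], [0.75, 0.80], [0.70, 0.78],
              [0.66, 0.71], [0.73, 0.77], [0.61, 0.68],
              [0.58, 0.64], [0.69, 0.74], [0.55, 0.60]])

sn = -10.0 * np.log10((y ** 2).mean(axis=1))                # smaller-the-better S/N ratio

for f in range(4):                                          # main effect of each controllable factor
    means = [sn[L9[:, f] == level].mean() for level in (1, 2, 3)]
    best = int(np.argmax(means)) + 1
    print(f"factor {f + 1}: level S/N means {np.round(means, 2)}, best level {best}")
```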
A Novel SVM-Based OOK Detector in Low SNR Infrared Channels
Authors: J. P. Dubois, O. M. Abdul-Latif
Abstract:
The Support Vector Machine (SVM) is a recent class of statistical classification and regression techniques playing an increasing role in applications to detection problems in various engineering fields, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, SVM is applied to an infrared (IR) binary communication system with different types of channel models, including Ricean multipath fading and a partially developed scattering channel, with additive white Gaussian noise (AWGN) at the receiver. The structure and performance of the SVM in terms of the bit error rate (BER) metric are derived and simulated for these stochastic channel models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of the SVM is then compared to classical binary maximum likelihood detection using a matched filter driven by On-Off keying (OOK) modulation. We found that the performance of the SVM is superior to that of the traditional optimal detection schemes used in statistical communication, especially for very low signal-to-noise ratio (SNR) ranges. For large SNR, the performance of the SVM is similar to that of the classical detectors. The implication of these results is that SVM can prove very beneficial to IR communication systems that notoriously suffer from low SNR, at the cost of increased computational complexity.
Keywords: Least square-support vector machine, on-off keying, matched filter, maximum likelihood detector, wireless infrared communication.
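A small sketch of the detection experiment over an AWGN-only channel is shown below, comparing an SVM classifier with a matched-filter threshold detector on OOK symbols by BER. The SNR definition, training sizes, and the omission of the fading channels are simplifications of the paper's setup.

```python
# Hedged sketch: SVM vs. matched-filter detection of OOK symbols in AWGN, compared by BER.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
sps, snr_db, amp = 8, 2.0, 1.0                       # samples per symbol, low SNR, pulse amplitude
noise_std = amp / (10 ** (snr_db / 20))

def make_symbols(n):
    """Generate n OOK symbols (amp or 0), oversampled and corrupted by AWGN."""
    bits = rng.integers(0, 2, n)
    x = np.repeat(bits * amp, sps).reshape(n, sps)
    return bits, x + rng.normal(0, noise_std, x.shape)

bits_tr, rx_tr = make_symbols(4000)
bits_te, rx_te = make_symbols(20000)

svm = SVC(kernel="rbf", C=1.0).fit(rx_tr, bits_tr)
ber_svm = np.mean(svm.predict(rx_te) != bits_te)

# Classical detection: correlate with the rectangular pulse (sample mean), threshold at midpoint.
stat = rx_te.mean(axis=1)
ber_mf = np.mean((stat > amp / 2).astype(int) != bits_te)
print(f"BER  SVM: {ber_svm:.4f}   matched filter: {ber_mf:.4f}")
```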
High Specific Speed in Circulating Water Pump Can Cause Cavitation, Noise and Vibration
Authors: Chandra Gupt Porwal
Abstract:
Excessive vibration means increased wear, increased repair effort, poor product selection and quality, and high energy consumption. It may sometimes be caused by cavitation or suction/discharge recirculation, which can occur only when the net positive suction head available (NPSHA) drops below the net positive suction head required (NPSHR). Cavitation can cause axial surging and, if excessive, will frequently damage mechanical seals, bearings and possibly other pump components, and shorten the life of the impeller. Efforts have been made in this paper to explain Suction Energy (SE), Specific Speed (Ns), Suction Specific Speed (Nss), NPSHA, NPSHR and their significance, possible reasons for cavitation/internal recirculation, its diagnostics, and remedial measures to arrest and prevent cavitation. A case study is presented by the author highlighting that the root cause of unwanted noise and vibration was cavitation, caused by high specific speeds or inadequate net positive suction head available, which resulted in damage to the material surfaces of the impeller and suction bells and degradation of machine performance, capacity and efficiency. The author strongly recommends revisiting the technical specifications of CW pumps to provide sufficient NPSH margin ratios > 1.5 for future projects, and that Nss be limited to 8500-9000 for cavitation-free operation.
Keywords: Best efficiency point (BEP), Net positive suction head NPSHA, NPSHR, Specific Speed NS, Suction Specific Speed Nss.
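The two quantities on which the recommendation rests can be computed in a few lines, as sketched below; the pump figures are example numbers chosen to illustrate a marginal machine, not the case-study data.

```python
# Back-of-envelope check of suction specific speed (US units) and NPSH margin ratio.
def suction_specific_speed(rpm, flow_gpm, npshr_ft):
    """Nss = N * sqrt(Q) / NPSHR^0.75 (US units; per-eye flow for double-suction impellers)."""
    return rpm * flow_gpm ** 0.5 / npshr_ft ** 0.75

def npsh_margin_ratio(npsha_ft, npshr_ft):
    return npsha_ft / npshr_ft

rpm, flow_gpm, npshr_ft, npsha_ft = 420, 100_000, 30.0, 42.0   # example CW pump figures
nss = suction_specific_speed(rpm, flow_gpm, npshr_ft)
ratio = npsh_margin_ratio(npsha_ft, npshr_ft)
print(f"Nss = {nss:.0f}   (recommended limit 8500-9000)")
print(f"NPSH margin ratio = {ratio:.2f}   (recommended > 1.5)")
```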
Evaluation of Bone and Body Mineral Profile in Association with Protein Content, Fat, Fat-Free, Skeletal Muscle Tissues According to Obesity Classification among Adult Men
Authors: Orkide Donma, Mustafa M. Donma
Abstract:
Obesity is associated with increased fat mass as well as fat percentage. Minerals are elements of vital importance. In this study, the relationships between the body as well as bone mineral profile and the percentage as well as mass values of fat, fat-free portion, protein and skeletal muscle were evaluated in adult men with normal body mass index (N-BMI) and in those classified according to different stages of obesity. A total of 103 adult men classified into five groups participated in this study. Ages were within the 19-79 years range. The groups were N-BMI (Group 1), overweight (OW) (Group 2), first level of obesity (FLO) (Group 3), second level of obesity (SLO) (Group 4) and third level of obesity (TLO) (Group 5). Anthropometric measurements were performed and BMI values were calculated. Obesity degree, total body fat mass, fat percentage, basal metabolic rate (BMR), visceral adiposity, body mineral mass, body mineral percentage, bone mineral mass, bone mineral percentage, fat-free mass, fat-free percentage, protein mass, protein percentage, skeletal muscle mass and skeletal muscle percentage were determined by a TANITA body composition monitor using bioelectrical impedance analysis technology. The statistical package SPSS for Windows Version 16.0 was used for statistical evaluations. p values below 0.05 were accepted as statistically significant. All the groups were matched based upon age (p > 0.05). BMI values were calculated as 22.6 ± 1.7 kg/m2, 27.1 ± 1.4 kg/m2, 32.0 ± 1.2 kg/m2, 37.2 ± 1.8 kg/m2, and 47.1 ± 6.1 kg/m2 for groups 1, 2, 3, 4, and 5, respectively. Visceral adiposity and BMR values also showed an increasing trend. Percentage values of mineral, protein, fat-free portion and skeletal muscle masses decreased going from N-BMI to TLO. Upon evaluation of the percentages of protein, fat-free portion and skeletal muscle, statistically significant differences were noted between N-BMI and OW as well as between OW and FLO (p < 0.05). However, such differences were not observed for body and bone mineral percentages. The correlation between visceral adiposity and BMI was stronger than that between visceral adiposity and obesity degree. The correlation between visceral adiposity and BMR was significant at the 0.05 level. Visceral adiposity was not correlated with body mineral mass but was correlated with bone mineral mass, whereas significant negative correlations were observed with the percentages of these parameters (p < 0.001). BMR was not correlated with body mineral percentage, whereas a negative correlation was found between BMR and bone mineral percentage (p < 0.01). It is interesting to note that the mineral percentages of both body and bone are highly affected by visceral adiposity. Bone mineral percentage was also associated with BMR. From these findings, it is plausible to state that minerals are highly associated with the critical stages of obesity as prominent parameters.
Keywords: Bone, men, minerals, obesity.
Six Sigma-Based Optimization of Shrinkage Accuracy in Injection Molding Processes
Authors: Sky Chou, Joseph C. Chen
Abstract:
This paper focuses on using Six Sigma methodologies to reach the desired shrinkage of a manufactured high-density polyethylene (HDPE) part produced by an injection molding machine. It presents a case study where the correct shrinkage is required to reduce or eliminate defects and to improve the process capability indices Cp and Cpk for an injection molding process. To improve this process and keep the product within specifications, the Six Sigma methodology, the define, measure, analyze, improve, and control (DMAIC) approach, was implemented in this study. The Six Sigma approach was paired with the Taguchi methodology to identify the optimized processing parameters that keep the shrinkage rate within the specifications set by our customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors identified consist of the cooling time, melt temperature, holding time, and metering stroke. The noise factor is the difference between material brand 1 and material brand 2. After the confirmation run was completed, measurements verified that the new parameter settings are optimal. With the new settings, the process capability index improved dramatically. The purpose of this study is to show that the Six Sigma and Taguchi methodologies can be efficiently used to determine important factors that will improve the process capability index of the injection molding process.
Keywords: Injection molding, shrinkage, six sigma, Taguchi parameter design.
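For reference, the process-capability computation behind Cp and Cpk is sketched below with made-up shrinkage measurements and hypothetical specification limits.

```python
# Quick sketch of the Cp / Cpk computation on illustrative shrinkage data.
import numpy as np

shrinkage = np.array([1.92, 1.98, 2.05, 2.01, 1.95, 2.03, 1.99, 2.02, 1.97, 2.04])  # % shrinkage
lsl, usl = 1.80, 2.20                          # hypothetical specification limits (%)

mu, sigma = shrinkage.mean(), shrinkage.std(ddof=1)
cp = (usl - lsl) / (6 * sigma)                 # potential capability (spread only)
cpk = min(usl - mu, mu - lsl) / (3 * sigma)    # actual capability (spread and centering)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```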
ALDH1A1 as a Cancer Stem Cell Marker: Value of Immunohistochemical Expression in Benign Prostatic Hyperplasia, Prostatic Intraepithelial Neoplasia, and Prostatic Adenocarcinoma
Authors: H. M. Abdelmoneim, N. A. Babtain, A. S. Barhamain, A. Z. Kufiah, A. S. Malibari, S. F. Munassar, R. S. Rawa
Abstract:
Introduction: Prostate cancer is one of the most common causes of morbidity and mortality in men in developed countries. Cancer Stem Cells (CSCs) could be responsible for the progression and relapse of cancer. Therefore, CSCs markers could provide a prognostic strategy for human malignancies. Aldehyde dehydrogenase 1A1 (ALDH1A1) activity has been shown to be associated with tumorigenesis and proposed to represent a functional marker for tumor initiating cells in various tumor types including prostate cancer. Material & Methods: We analyzed the immunohistochemical expression of ALDH1A1 in benign prostatic hyperplasia (BPH), prostatic intraepithelial neoplasia (PIN) and prostatic adenocarcinoma and assessed their significant correlations in 50 TURP sections. They were microscopically interpreted and the results were correlated with histopathological types and tumor grade. Results: In different prostatic histopathological lesions we found that ALDH1A1 expression was low in BPH (13.3%) and PIN (6.7%) and then its expression increased with prostatic adenocarcinoma (40%), and this was statistically highly significant (P value = 0.02). However, in different grades of prostatic adenocarcinoma we found that the higher the Gleason grade the higher the expression for ALDH1A1 and this was statistically significant (P value = 0.02). We compared the expression of ALDH1A1 in PIN and prostatic adenocarcinoma. ALDH1A1 expression was decreased in PIN and highly expressed in prostatic adenocarcinoma and this was statistically significant (P value = 0.04). Conclusion: Increasing ALDH1A1 expression is correlated with aggressive behavior of the tumor. Immunohistochemical expression of ALDH1A1 might provide a potential approach to study tumorigenesis and progression of primary prostate carcinoma.
Keywords: ALDH1A1, BPH, PIN, prostatic adenocarcinoma.
Vibroacoustic Modulation of Wideband Vibrations and Its Possible Application for Windmill Blade Diagnostics
Authors: Abdullah Alnutayfat, Alexander Sutin, Dong Liu
Abstract:
Wind turbines have become one of the most popular means of energy production. However, blade failures and maintenance costs have evolved into significant issues in the wind power industry, so it is essential to detect initial blade defects to avoid the collapse of the blades and the structure. This paper aims to apply modulation of high-frequency blade vibrations by the low-frequency blade rotation, which is close to the known Vibro-Acoustic Modulation (VAM) method. The high-frequency wideband blade vibration is produced by the interaction of the blade surface with environmental air turbulence, and the low-frequency modulation is produced by the alternating bending stress due to gravity. The low-frequency load of rotating wind turbine blades ranges between 0.2-0.4 Hz and can reach up to 2 Hz for strong wind. The main difference between this study and previous ones on VAM methods is the use of a wideband vibration signal from the blade's natural vibrations. Different features of the VAM are considered using a simple model of a breathing crack. This model considers a simple mechanical oscillator whose parameters are varied by the low-frequency blade rotation. During the blade's operation, the internal stress caused by the weight of the blade modifies the crack's elasticity and damping. A laboratory experiment using steel samples demonstrates the possibility of VAM using a wideband probe noise signal. A cyclic load with a small amplitude was applied as the pump wave to the damaged test sample, and a small transducer generated a wideband probe wave. The received signal was demodulated using the Detection of Envelope Modulation on Noise (DEMON) approach. In addition, the experimental results were compared with the modulation index (MI) technique based on a harmonic pump wave. The wideband and traditional VAM methods demonstrated similar sensitivity for early detection of invisible cracks. Importantly, employing a wideband probe signal with the DEMON approach speeds up and simplifies testing, since it eliminates the need to conduct tests repeatedly for various harmonic probe frequencies and to adjust the probe frequency.
Keywords: Damage detection, turbine blades, Vibro-Acoustic Structural Health Monitoring, SHM, Detecting of Envelope Modulation on Noise.
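A condensed sketch of the DEMON processing chain for the wideband probe signal is given below: band-pass the high-frequency band, take the Hilbert envelope, remove its mean, and look for spectral lines at the low-frequency pump. The sampling rate, band edges and synthetic signal are placeholders, not the experimental settings.

```python
# Hedged sketch of DEMON demodulation of a wideband, low-frequency-modulated vibration signal.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 50_000.0                                              # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
pump_f = 0.4                                               # blade-rotation load frequency (Hz)
carrier = np.random.default_rng(0).normal(size=t.size)     # wideband probe (noise-like)
vib = (1 + 0.05 * np.sin(2 * np.pi * pump_f * t)) * carrier  # synthetic modulated vibration

sos = butter(4, [5_000, 20_000], btype="bandpass", fs=fs, output="sos")
band = sosfiltfilt(sos, vib)                               # keep only the high-frequency band

envelope = np.abs(hilbert(band))                           # envelope detection
envelope -= envelope.mean()                                # remove the DC component

spec = np.abs(np.fft.rfft(envelope))                       # envelope (DEMON) spectrum
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
peak = freqs[np.argmax(spec[freqs < 5.0])]                 # strongest modulation line below 5 Hz
print(f"dominant envelope modulation frequency ~ {peak:.2f} Hz")
```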
A Study of Dose Distribution and Image Quality under an Automatic Tube Current Modulation (ATCM) System for a Toshiba Aquilion 64 CT Scanner Using a New Design of Phantom
Authors: S. Sookpeng, C. J. Martin, D. J. Gentle
Abstract:
Automatic tube current modulation (ATCM) systems are available from all CT manufacturers and are used for the majority of patients. Understanding how the systems work and their influence on patient dose and image quality is important for CT users in order to gain the most effective use of the systems. In the present study, a new phantom was used for evaluating dose distribution and image quality under ATCM operation for the Toshiba Aquilion 64 CT scanner, using different ATCM options and a fixed mAs technique. A routine chest, abdomen and pelvis (CAP) protocol was selected for the study, and Gafchromic film was used to measure the entrance surface dose (ESD), peripheral dose and central axis dose in the phantom. The results show the dose reductions achievable with various ATCM options in relation to the target noise. The dose and image noise distributions were more uniform when the ATCM system was employed compared with the fixed mAs technique. The lower limit set for the tube current affects the modulation, especially for the lower dose options. This limit prevented the tube current from being reduced further, and therefore the lower dose ATCM setting resembled a fixed mAs technique. Selection of a lower tube current limit is likely to reduce doses for smaller patients in scans of the chest and neck regions.
Keywords: Computed Tomography (CT), Automatic Tube Current Modulation (ATCM), Automatic Exposure Control (AEC).
New Wavelet Indices to Assess Muscle Fatigue during Dynamic Contractions
Authors: González-Izal M., Rodríguez-Carreño I, Mallor-Giménez F, Malanda A, Izquierdo M
Abstract:
The purpose of this study was to evaluate and compare new indices based on the discrete wavelet transform with other spectral parameters proposed in the literature, such as mean average voltage, median frequency and ratios between spectral moments, applied to estimate acute exercise-induced changes in power output, i.e., to assess peripheral muscle fatigue during a dynamic fatiguing protocol. Fifteen trained subjects performed 5 sets consisting of 10 leg presses, with 2 minutes of rest between sets. Surface electromyography was recorded from the vastus medialis (VM) muscle. Several surface electromyographic parameters were compared for detecting peripheral muscle fatigue. These were: mean average voltage (MAV), median spectral frequency (Fmed), the Dimitrov spectral index of muscle fatigue (FInsm5), as well as five other parameters obtained from the discrete wavelet transform (DWT) as ratios between different scales. The new wavelet indices achieved the best Pearson correlation coefficients with power output changes during acute dynamic contractions, and their regressions were significantly different from those of MAV and Fmed. On the other hand, they showed the highest robustness in the presence of additive white Gaussian noise for different signal-to-noise ratios (SNRs). Therefore, peripheral impairments assessed by sEMG wavelet indices may be a relevant factor involved in the loss of power output after a dynamic high-loading fatiguing task.
Keywords: Median Frequency, EMG, Wavelet Transform, Muscle Fatigue.
Evaluation of the Weight-Based and Fat-Based Indices in Relation to Basal Metabolic Rate-to-Weight Ratio
Authors: Orkide Donma, Mustafa M. Donma
Abstract:
Basal metabolic rate is questioned as a risk factor for weight gain. The relations between basal metabolic rate and body composition have not been clarified yet. The impact of fat mass on basal metabolic rate is also uncertain. Within this context, indices based upon total body mass as well as total body fat mass are available. In this study, the aim is to investigate the potential clinical utility of these indices in the adult population. 287 individuals, aged from 18 to 79 years, were included in the study. Based upon body mass index values, 10 underweight, 88 normal, 88 overweight, 81 obese, and 20 morbidly obese individuals participated. Anthropometric measurements including height (m) and weight (kg) were performed. Body mass index, diagnostic obesity notation model assessment index I, diagnostic obesity notation model assessment index II, and basal metabolic rate-to-weight ratio were calculated. Total body fat mass (kg), fat percent (%), basal metabolic rate, metabolic age, visceral adiposity, fat mass of the upper and lower extremities and the trunk, and obesity degree were measured by a TANITA body composition monitor using bioelectrical impedance analysis technology. Statistical evaluations were performed with the statistical package SPSS for Windows Version 16.0. Scatterplots of individual measurements were drawn for the parameters concerning correlations, and linear regression lines were displayed. The statistical significance degree was accepted as p < 0.05. Strong correlations between body mass index and diagnostic obesity notation model assessment index I as well as diagnostic obesity notation model assessment index II were obtained (p < 0.001). A much stronger correlation was detected between basal metabolic rate and diagnostic obesity notation model assessment index I in comparison with that calculated for basal metabolic rate and body mass index (p < 0.001). Upon consideration of the associations between basal metabolic rate-to-weight ratio and these three indices, the best association was observed between the basal metabolic rate-to-weight ratio and diagnostic obesity notation model assessment index II. In a similar manner, this index was highly correlated with fat percent (p < 0.001). Independently of the indices, a strong correlation was found between fat percent and basal metabolic rate-to-weight ratio (p < 0.001). Visceral adiposity was much more strongly correlated with metabolic age than with chronological age (p < 0.001). In conclusion, all three indices were associated with metabolic age, but not with chronological age. Diagnostic obesity notation model assessment index II values were highly correlated with body mass index values throughout all ranges, starting from underweight and going towards morbid obesity. This index is the best in terms of its association with the basal metabolic rate-to-weight ratio, which can be interpreted as a basal metabolic rate unit.
Keywords: Basal metabolic rate, body mass index, children, diagnostic obesity notation model assessment index, obesity.
Image Transmission via Iterative Cellular-Turbo System
Authors: Ersin Gose, Kenan Buyukatak, Onur Osman, Osman N. Ucan
Abstract:
To compress 2D images and improve their bit error performance and quality, a new scheme, called the Iterative Cellular-Turbo System (IC-TS), is introduced. In IC-TS, the original image is partitioned into 2^N quantization levels, where N denotes the number of bit planes. Each of the N bit planes is then coded by a Turbo encoder and transmitted over an Additive White Gaussian Noise (AWGN) channel. At the receiver side, the bit planes are re-assembled taking into consideration the neighborhood relationship of pixels in 2-D images. Each of the noisy bit-plane values of the image is evaluated iteratively using the IC-TS structure, which is composed of an equalization block, the Iterative Cellular Image Processing Algorithm (ICIPA), and a Turbo decoder. In IC-TS, there is an iterative feedback link between ICIPA and the Turbo decoder. ICIPA uses the mean and standard deviation of the estimated values of each pixel neighborhood. The scheme yields remarkably satisfactory results in terms of both Bit Error Rate (BER) and image enhancement performance for Signal-to-Noise Ratio (SNR) values below -1 dB, compared to the traditional turbo coding scheme and 2-D filtering applied separately. Compression can also be achieved with IC-TS: less memory storage is used and the data rate is increased by up to N-1 times by simply choosing any number of bit slices, sacrificing resolution. Hence, it is concluded that the IC-TS system will be a promising approach for 2-D image transmission, recovery of noisy signals, and image compression.
Keywords: Iterative Cellular Image Processing Algorithm (ICIPA), Turbo Coding, Iterative Cellular Turbo System (IC-TS), Image Compression.
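The bit-plane partitioning step can be illustrated as follows: an 8-bit image is split into N = 8 binary planes (each of which would then be Turbo-encoded) and can be reassembled from any subset of the most significant planes, trading resolution for data rate as described above. The Turbo coding and ICIPA stages are omitted from this sketch.

```python
# Tiny sketch of bit-plane decomposition and partial reassembly of an 8-bit image.
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)       # stand-in 8-bit image

planes = [((image >> b) & 1).astype(np.uint8) for b in range(8)]  # planes, LSB (b=0) to MSB (b=7)

def reassemble(planes, keep_msb=8):
    """Rebuild the image from only the `keep_msb` most significant bit planes."""
    out = np.zeros_like(planes[0], dtype=np.uint16)
    for b in range(8 - keep_msb, 8):
        out += planes[b].astype(np.uint16) << b
    return out.astype(np.uint8)

full = reassemble(planes, keep_msb=8)
coarse = reassemble(planes, keep_msb=4)                           # compressed: 4 LSB planes dropped
assert np.array_equal(full, image)
print("max error with only 4 planes kept:", int(np.abs(image.astype(int) - coarse).max()))
```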
Statistical Modeling of Accelerated Pavement Failure Using Response Surface Methodology
Authors: Anshu Manik, Kasthurirangan Gopalakrishnan, Siddhartha K. Khaitan
Abstract:
Rutting is one of the major load-related distresses in airport flexible pavements. Rutting in paving materials develops gradually with an increasing number of load applications, usually appearing as longitudinal depressions in the wheel paths, and it may be accompanied by small upheavals to the sides. Significant research has been conducted to determine the factors which affect rutting and how they can be controlled. Using experimental design concepts, a series of tests can be conducted while varying the levels of different parameters which could be the cause of rutting in airport flexible pavements. If proper experimental design is done, the results obtained from these tests can give a better insight into the causes of rutting and the presence of interactions and synergisms among the system variables which have an influence on rutting. Although traditionally laboratory experiments are conducted in a controlled fashion to understand the statistical interaction of variables in such situations, this study is an attempt to identify the critical system variables influencing airport flexible pavement rut depth from a statistical DoE perspective, using real field data from a full-scale test facility. The test results strongly indicate that the response (rut depth) has too much noise in it to allow determination of a good model. From a statistical DoE perspective, two major changes are proposed for this experiment: (1) actual replication of the tests is definitely required, and (2) nuisance variables need to be identified and blocked properly. Further investigation is necessary to determine possible sources of noise in the experiment.
Keywords: Airport Pavement, Design of Experiments, Rutting, NAPTF.
Study Forecast Indoor Acoustics. A Case Study: The Auditorium of Theatre-Hotel “Casa Tra Noi”
Authors: D. Germanò, D. Plutino, G. Cannistraro
Abstract:
The theatre-auditorium under investigation appears to be acoustically inadequate, owing to the highly reflective materials used in it (marble, painted wood, smooth plaster, etc.), its architectural and structural features, and its highly multifunctional intended use (auditorium, theatre, cinema, musicals, conference room). This assessment follows from the analysis of the existing state carried out with the acoustic simulation software Ramsete, supported by data obtained through an on-site campaign of acoustic measurements of the existing state performed with a Svantek SVAN 957 sound level meter. After the 3D model was completed according to the specifications required by the forecasting software, three simulations were carried out: an acoustic simulation of the existing state and acoustic simulations of two design solutions. The improvement found in the first design solution, compared with the existing state, consists mainly in lowering the reverberation time towards the most desirable value, while the clarity indicators, the barycentric time, the lateral efficiency, the bass ratio (BR) and the speech intelligibility improve significantly. The improvement found in the second design solution, compared with the first design solution, lies mostly in a more uniform distribution of Leq and in lowering the reverberation time towards the optimum values. The clarity indicators and the lateral efficiency improve further, but at the expense of a slightly worse BR. The remaining indices vary slightly.
Keywords: Indoor, Acoustics, Acoustic Simulation.
Design and Performance Improvement of Three-Dimensional Optical Code Division Multiple Access Networks with NAND Detection Technique
Authors: Satyasen Panda, Urmila Bhanja
Abstract:
In this paper, we present and analyze three-dimensional (3-D) matrices of wavelength/time/space codes for optical code division multiple access (OCDMA) networks with the NAND subtraction detection technique. The 3-D codes are constructed by integrating a two-dimensional modified quadratic congruence (MQC) code with a one-dimensional modified prime (MP) code. The respective encoders and decoders were designed using fiber Bragg gratings and optical delay lines to minimize the bit error rate (BER). The performance analysis of the 3-D OCDMA system is based on measurement of the signal-to-noise ratio (SNR), BER and eye diagram for different numbers of simultaneous users. Various types of noise and multiple access interference (MAI) effects were also considered in the analysis. The results obtained with the NAND detection technique were compared with those obtained with the OR and AND subtraction techniques. The comparison proved that the NAND detection technique with the 3-D MQC/MP code can accommodate a larger number of simultaneous users over longer fiber distances with minimum BER, as compared to the OR and AND subtraction techniques. The received optical power is also measured at various levels of BER to analyze the effect of attenuation.
Keywords: Cross Correlation, Three-Dimensional Optical Code Division Multiple Access, Spectral Amplitude Coding Optical Code Division Multiple Access, Multiple Access Interference, Phase Induced Intensity Noise, Three-Dimensional Modified Quadratic Congruence/Modified Prime Code.
Quadrotor Black-Box System Identification
Authors: Ionel Stanculeanu, Theodor Borangiu
Abstract:
This paper presents a new approach to the identification of the quadrotor dynamic model using black-box system identification. The paper also considers the problems which appear during identification in the closed loop and offers a technical solution for overcoming the correlation between the input and the noise present in the output.
Keywords: System identification, UAV, prediction error method, quadrotor.
Preferences of Electric Buses in Public Transport; Conclusions from Real Life Testing in Eight Swedish Municipalities
Authors: Sven Borén, Lisiana Nurhadi, Henrik Ny
Abstract:
From a theoretical perspective, electric buses can be more sustainable and can be cheaper than fossil-fuelled buses in city traffic. The authors have not found other studies based on actual urban public transport in a Swedish winter climate. Furthermore, published noise measurements from buses for the European market were found to be outdated. The aim of this follow-up study was therefore to test and possibly verify, in a real-life environment, how energy efficient and silent electric buses are, and then to conclude whether electric buses are preferable for use in public transport. The Ebusco 2.0 electric bus, fitted with a 311 kWh battery pack, was used, and the tests were carried out during November 2014-April 2015 in eight municipalities in the south of Sweden. Six tests took place in urban traffic and two took place in more of a rural traffic setting. The energy use for propulsion was measured via logging of the internal system in the bus and via an external charging meter. The average energy use turned out to be 8% less (0.96 kWh/km) than assumed in the earlier theoretical study. This rate allows for a 320 km range in public urban traffic. The interior of the bus was kept warm by a diesel heater (biodiesel will probably be used in a future operational traffic situation), which used 0.67 kWh/km in January. This verified that electric buses can be up to 25% cheaper when used in public transport in cities for about eight years. The noise was found to be lower, primarily during acceleration, than for buses with combustion engines in urban bus traffic. According to our surveys, most passengers and drivers appreciated the silent and comfortable ride and preferred electric buses to combustion engine buses. Bus operators and passenger transport executives were also positive about starting to use electric buses for public transport. The operators did, however, point out that procurement processes need to account for eventual risks regarding this new technology, along with personnel education. The study revealed that it is possible to establish a charging infrastructure for almost all the studied bus lines. However, the design of a charging infrastructure for each municipality requires further investigation, including electric grid capacity analysis, smart location of charging points, and tailored schedules to allow fast charging. In conclusion, electric buses proved to be a preferable alternative for all stakeholders involved in public bus transport in the studied municipalities. However, in order for electric buses to be a prominent support for sustainable development, they need to be charged either by stand-alone units or via an expansion of the electric grid, and the electricity should be made from new renewable sources.
Keywords: Sustainability, Electric, Bus, Noise, GreenCharge.
Performance Analysis of Reconstruction Algorithms in Diffuse Optical Tomography
Authors: K. Uma Maheswari, S. Sathiyamoorthy, G. Lakshmi
Abstract:
Diffuse Optical Tomography (DOT) is a non-invasive imaging modality used in clinical diagnosis for earlier detection of carcinoma cells in brain tissue. It is a form of optical tomography which produces a reconstructed image of human soft tissue by using near-infrared light. It comprises two steps, called the forward model and the inverse model. The forward model describes the light propagation in a biological medium. The inverse model uses the scattered light to recover the optical parameters of human tissue. DOT suffers from severe ill-posedness due to its incomplete measurement data, so the accurate analysis of this modality is very complicated. To overcome this problem, the optical properties of the soft tissue, such as the absorption coefficient, scattering coefficient and optical flux, are processed by the standard regularization technique called Levenberg-Marquardt regularization. Reconstruction algorithms such as the Split Bregman and Gradient Projection for Sparse Reconstruction (GPSR) methods are used to reconstruct the image of human soft tissue for tumour detection. Among these algorithms, the Split Bregman method provides better performance than the GPSR algorithm. Parameters such as the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), relative error (RE) and CPU time for reconstructing the images are analyzed to assess performance.
Keywords: Diffuse optical tomography, ill-posedness, Levenberg-Marquardt method, Split Bregman, Gradient Projection for Sparse Reconstruction.
DWT-SATS Based Detection of Image Region Cloning
Authors: Michael Zimba
Abstract:
A duplicated image region may be subjected to a number of attacks such as noise addition, compression, reflection, rotation, and scaling with the intention of either merely matching it to its targeted neighborhood or preventing its detection. In this paper, we present an effective and robust method of detecting duplicated regions, inclusive of those affected by the various attacks. In order to reduce the dimension of the image, the proposed algorithm first performs the discrete wavelet transform (DWT) of a suspicious image. However, unlike most existing copy-move image forgery (CMIF) detection algorithms operating in the DWT domain, which extract only the low-frequency subband of the DWT of the suspicious image, thereby leaving valuable information in the other three subbands, the proposed algorithm simultaneously extracts features from all four subbands. The extracted features are not only a more accurate representation of image regions but are also robust to additive noise, JPEG compression, and affine transformation. Furthermore, principal component analysis-eigenvalue decomposition (PCA-EVD) is applied to reduce the dimension of the features. The extracted features are then sorted using the more computationally efficient Radix Sort algorithm. Finally, same affine transformation selection (SATS), a duplication verification method, is applied to detect duplicated regions. The proposed algorithm is not only fast but also more robust to attacks compared to related CMIF detection algorithms. The experimental results show high detection rates.
Keywords: Affine Transformation, Discrete Wavelet Transform, Radix Sort, SATS.
Digital Automatic Gain Control Integrated on WLAN Platform
Authors: Emilija Miletic, Milos Krstic, Maxim Piz, Michael Methfessel
Abstract:
In this work we present a solution for DAGC (Digital Automatic Gain Control) in WLAN receivers compatible with the IEEE 802.11a/g standards. Those standards define communication in the 5/2.4 GHz bands using the Orthogonal Frequency Division Multiplexing (OFDM) modulation scheme. The WLAN transceiver that we have used enables gain control over a Low Noise Amplifier (LNA) and a Variable Gain Amplifier (VGA). The control over those signals is performed in our digital baseband processor using a dedicated hardware block, the DAGC. The DAGC in this process is used to automatically control the VGA and LNA in order to achieve a better signal-to-noise ratio, decrease the FER (Frame Error Rate) and hold the average power of the baseband signal close to the desired set point. The DAGC function in the baseband processor is performed in a few steps: measuring the power levels of the baseband samples of an RF signal, accumulating the differences between the measured power level and the actual gain setting, adjusting a gain factor of the accumulation, and applying the adjusted gain factor to the baseband values. Based on the measurement results of the RSSI dependence on input power, we have concluded that this digital AGC can be implemented by applying a simple linearization of the RSSI. This solution is very simple but also effective, and it reduces the complexity and power consumption of the DAGC. This DAGC is implemented and tested both in an FPGA and in an ASIC as a part of our WLAN baseband processor. Finally, we have integrated this circuit in a compact WLAN PCMCIA board based on MAC and baseband ASIC chips designed by us.
Keywords: WLAN, AGC, RSSI, Baseband Processor.
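A behavioural sketch of the DAGC loop described above (measure baseband power, accumulate the error against a set point, scale by a loop gain, apply the gain word to a VGA/LNA model) is given below; the set point, loop gain and input level are assumptions, and this is not the ASIC implementation.

```python
# Behavioural sketch of a digital AGC loop converging on a power set point.
import numpy as np

rng = np.random.default_rng(0)
set_point_db = 0.0            # desired average baseband power (dB), assumed
loop_gain = 0.1               # gain-adjustment step, assumed
gain_db, acc = 0.0, 0.0

input_power_db = -23.0        # unknown RF input level reaching the receiver
for block in range(50):
    # Receive one block of OFDM-like baseband samples through the current gain setting.
    samples = rng.normal(scale=10 ** ((input_power_db + gain_db) / 20), size=256)
    measured_db = 10 * np.log10(np.mean(samples ** 2))
    acc += set_point_db - measured_db               # accumulate the power error
    gain_db = np.clip(loop_gain * acc, -10, 60)     # adjusted gain word applied to the VGA/LNA

print(f"converged gain ~ {gain_db:.1f} dB for a {input_power_db} dB input")
```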