Search results for: noise constraint.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1100

140 Firm Performance of Thai Cuisines in Bangkok, Thailand: Contribution to the Tourism Industry

Authors: Prateep Wajeetongratana

Abstract:

This study is a descriptive-normative research project. It investigates the firm performance of restaurants in terms of the degree of satisfaction of customers and restaurant personnel. A total of 12 restaurants in Bangkok, Thailand that offer Thai cuisine were included in the study, involving 24 stockholders/managers, 120 subordinates and 360 customers; the general manager and stockholders, 10 staff members, and 30 customers of each restaurant were chosen by random sampling. The study found that respondents are slightly satisfied with their work environment but are generally satisfied with accessibility to transportation and malls, convenience, safety, recreation, freedom from noise, and attractions; customers find the quality of food in most Thai cuisine restaurants, as well as service, food prices, sales promotion, and capital and length of service, satisfactory. Therefore, the hypotheses that stockholder-related and personnel-related factors are influenced by restaurant, personnel, and customer-related factors are partially accepted, whereas the hypothesis that customer-related factors are influenced by restaurant, personnel and customer-related factors is rejected.

Keywords: Firm performance, Thai Cuisine, Tourism industry.

139 Corporate Governance and Corporate Social Responsibility: Research on the Interconnection of Both Concepts and Its Impact on Non-Profit Organizations

Authors: Helene Eller

Abstract:

The aim of non-profit organizations (NPO) is to provide services and goods for their clientele, with profit being a minor objective. Given this definition of the basic purpose of doing business, it is obvious that the goal of such an organisation is to serve several bottom lines and not only the financial one. This approach is underpinned by the non-distribution constraint, which means that NPO are allowed to make profits to a certain extent, but not to distribute them. The advantage is that there are no individual shareholders who might have an interest in the prosperity of the organisation: there is no pie to divide. The gained profits remain within the organisation and are reinvested in purposeful projects. Good governance is mandatory to support the aims of NPOs. Looking for a measure of good governance, the principles of corporate governance (CG) come to mind. The purpose of CG is direction and control, and in the field of NPOs, CG is enlarged to consider the relationship to all important stakeholders who have an impact on the organisation. The recognition of more relevant parties than the shareholder is the link to corporate social responsibility (CSR). It supports a broader view of the bottom line: it is no longer enough to know how profits are used; it also matters how they are made. In addition, CSR addresses the responsibility of organisations for their impact on society. When the concept of CSR is transferred to the non-profit area, it becomes obvious that CSR, with its distinctive features, matches the aims of NPOs. As a consequence, NPOs that apply CG also apply CSR to a certain extent. The research is designed as a comprehensive theoretical and empirical analysis. First, the investigation focuses on the theoretical basis of both concepts. Second, the similarities and differences are outlined, and as a result the interconnection of the two concepts emerges. The contribution of this research is manifold: the interconnection of both concepts when applied to NPOs has not yet received attention in the literature. CSR and governance as an integrated concept provide many advantages for NPOs compared to for-profit organisations, which are under constant pressure to justify the impact they have on society. NPOs, however, integrate economic and social aspects as their starting point. For NPOs, CG is not a mere compliance concept but rather an enhanced concept integrating many aspects of CSR; there is no "either-or" between the two concepts for NPOs.

Keywords: Business ethics, corporate governance, corporate social responsibility, non-profit organisations, stakeholder theory.

138 A Multi-Feature Deep Learning Algorithm for Urban Traffic Classification with Limited Labeled Data

Authors: Rohan Putatunda, Aryya Gangopadhyay

Abstract:

Acoustic sensors embedded in smart street lights can help capture the activities (car honking, sirens, events, traffic, etc.) in cities. The acoustic data from such scenarios are complex because multiple audio streams originate from different events, and when these are decomposed into independent signals, the volume of retrieved data is small, which is inadequate to train deep neural networks. In this paper, we therefore address two challenges: a) separating the mixed signals, and b) developing an efficient acoustic classifier under data paucity. To address these challenges, we propose a supervised deep learning architecture in which the captured mixed acoustic data are first analyzed with the Fast Fourier Transform (FFT), the noise is then filtered from the signal, and the result is decomposed into independent signals by fast independent component analysis (FastICA). To address the challenge of data paucity, we propose a multi-feature-based deep neural network whose high performance is reflected in our experiments when compared to a conventional convolutional neural network (CNN) and multi-layer perceptron (MLP).
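
As a rough sketch of the separation stage described above (not the authors' implementation), the following snippet mixes two synthetic sources, inspects one mixture with the FFT, and unmixes with FastICA from scikit-learn; the waveforms, sampling rate and mixing matrix are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two synthetic acoustic sources (a siren-like chirp and a horn-like tone);
# the waveforms are illustrative stand-ins, not the paper's data.
fs = 8000
t = np.arange(0, 1.0, 1.0 / fs)
siren = np.sin(2 * np.pi * (400 + 200 * np.sin(2 * np.pi * 0.5 * t)) * t)
horn = np.sign(np.sin(2 * np.pi * 220 * t))
sources = np.c_[siren, horn]

# Linear instantaneous mixture observed at two microphones, plus a little noise.
mixing = np.array([[0.7, 0.3], [0.4, 0.6]])
mixed = sources @ mixing.T + 0.01 * np.random.randn(len(t), 2)

# Analysis step: inspect the spectrum of one mixture with the FFT.
spectrum = np.abs(np.fft.rfft(mixed[:, 0]))

# Separation step: FastICA recovers statistically independent components.
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(mixed)
print(recovered.shape)  # (n_samples, 2) estimated independent signals
```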

Keywords: FFT, ICA, vehicle classification, multi-feature DNN, CNN, MLP.

137 Detection of Ultrasonic Images in the Presence of a Random Number of Scatterers: A Statistical Learning Approach

Authors: J. P. Dubois, O. M. Abdul-Latif

Abstract:

Support Vector Machine (SVM) is a statistical learning tool initially developed by Vapnik in 1979 and later extended into the more general framework of structural risk minimization (SRM). SVMs play an increasing role in detection problems across engineering, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, the SVM is applied to the detection of medical ultrasound images in the presence of partially developed speckle noise. The simulation was carried out for single-look and multi-look speckle models to give a complete overview of, and insight into, the proposed SVM-based detector. The structure of the SVM was derived, applied to clinical ultrasound images, and its performance was evaluated in terms of the mean square error (MSE) metric. We show that the SVM-detected ultrasound images have a very low MSE and are of good quality, and that the quality of the processed speckled images improves for the multi-look model. Furthermore, the contrast of the SVM-detected images was higher than that of the original noise-free images, indicating that the SVM approach increases the distance between the pixel reflectivity levels (the detection hypotheses) of the original images.
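
A schematic of the detection idea, with a plain SVC standing in for the LS-SVM formulation and fully developed multi-look speckle standing in for the partially developed model studied, might look as follows; the phantom, noise model and parameters are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def speckled(reflectivity, looks=1):
    # Multi-look fully developed speckle: average of exponentially distributed
    # intensities; a crude stand-in for the partially developed model.
    return np.mean(rng.exponential(reflectivity,
                                   size=(looks,) + reflectivity.shape), axis=0)

# Two-level reflectivity "phantom" and its speckled observation.
truth = np.ones((64, 64)); truth[16:48, 16:48] = 4.0
noisy = speckled(truth, looks=4)

# Train on part of the pixels (feature = pixel intensity), detect the rest.
X = noisy.reshape(-1, 1)
y = (truth.reshape(-1) > 2.0).astype(int)
idx = rng.permutation(len(y)); tr = idx[:2048]
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X[tr], y[tr])

# Map detected hypotheses back to reflectivity levels and score with the MSE.
detected = np.where(clf.predict(X) == 1, 4.0, 1.0).reshape(truth.shape)
print("MSE:", np.mean((detected - truth) ** 2))
```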

Keywords: LS-SVM, medical ultrasound imaging, partially developed speckle, multi-look model.

136 In Vitro Study of Coded Transmission in Synthetic Aperture Ultrasound Imaging Systems

Authors: Ihor Trots, Yuriy Tasinkevych, Andrzej Nowicki, Marcin Lewandowski

Abstract:

In this paper, a study of the synthetic transmit aperture method applying Golay coded transmission to medical ultrasound imaging is presented. Longer coded excitation makes it possible to increase the total energy of the transmitted signal without increasing the peak pressure; moreover, the signal-to-noise ratio and penetration depth are improved while high ultrasound image resolution is maintained. In this work, a 128-element linear transducer array with 0.3 mm inter-element spacing, excited by a one-cycle pulse and by 8- and 16-bit Golay coded sequences at a nominal frequency of 4 MHz, was used. To generate a spherical wave covering the full image region, a single-element transmission aperture was used, and all elements received the echo signals. A comparison of 2D ultrasound images of a tissue-mimicking phantom and in vitro measurements of beef liver is presented to illustrate the benefits of the coded transmission. The results were obtained using the synthetic aperture algorithm with transmit and receive signal correction based on a single-element directivity function.
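
The property exploited by Golay coded excitation, namely that the autocorrelations of a complementary pair sum to a delta, can be checked with the short sketch below; it uses the textbook recursive construction and is not the authors' imaging code.

```python
import numpy as np

def golay_pair(n_bits):
    # Recursive construction of a complementary Golay pair of length n_bits
    # (n_bits must be a power of two for this simple scheme).
    a, b = np.array([1.0]), np.array([1.0])
    while len(a) < n_bits:
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(8)  # the 8-bit pair mentioned in the abstract
ra = np.correlate(a, a, mode="full")
rb = np.correlate(b, b, mode="full")

# Sidelobes cancel: the sum is 2*N = 16 at zero lag and 0 at all other lags.
print(ra + rb)
```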

Keywords: Golay coded sequences, radiation pattern, signal processing, synthetic aperture, ultrasound imaging.

135 An Automatic Pipeline Monitoring System Based on PCA and SVM

Authors: C. Wan, A. Mita

Abstract:

This paper proposes a novel system for monitoring the health of underground pipelines. Some of these pipelines transport dangerous contents, and any damage to them might have catastrophic consequences. However, most of this damage is unintentional and usually results from surrounding construction activities. In order to prevent such potential damage, monitoring systems are indispensable. This paper focuses on acoustically recognizing road cutters, since they precede most construction activities in modern cities. Acoustic recognition can be achieved by installing a distributed computing sensor network along the pipelines and using smart sensors to "listen" for potential threats and, if a real threat is detected, raise an alarm. For efficient pipeline monitoring, a novel monitoring approach is proposed. Principal Component Analysis (PCA) was studied and applied: the eigenvalues are regarded as a signature that characterizes a sound sample and are therefore used as the feature vector for sound recognition. The denoising ability of PCA makes the method robust to noise interference. A one-class SVM is used as the classifier. On-site experimental results show that the proposed PCA- and SVM-based acoustic recognition system is effective and has a low tendency to raise false alarms.
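
A minimal sketch of the recognition pipeline outlined above, with covariance eigenvalues as the feature vector and a one-class SVM as the classifier, is given below; the synthetic sound frames and parameter values are placeholders, not the authors' data or settings.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def eigen_features(frames):
    # frames: (n_frames, frame_length) short sound segments of one clip.
    # The eigenvalues of the frame covariance matrix serve as the signature.
    cov = np.cov(frames, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)
    return np.sort(eigvals)[::-1][:10]  # keep the 10 largest eigenvalues

# Illustrative "road cutter" training clips vs. background noise (random stand-ins).
rng = np.random.default_rng(0)
cutter_clips = [rng.normal(size=(50, 256)) * np.hanning(256) for _ in range(20)]
background_clips = [rng.normal(size=(50, 256)) for _ in range(20)]

X_train = np.array([eigen_features(c) for c in cutter_clips])
clf = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(X_train)

X_test = np.array([eigen_features(c) for c in background_clips])
print(clf.predict(X_test))  # +1 = recognised as cutter-like, -1 = outlier
```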

Keywords: One class SVM, pipeline monitoring system, principal component analysis, sound recognition, third party damage.

134 Fuzzy Mathematical Morphology Approach in Image Processing

Authors: Yee Yee Htun, Dr. Khaing Khaing Aye

Abstract:

Morphological operators transform an original image into another image through its interaction with a second image of a certain shape and size, known as the structuring element. Mathematical morphology provides a systematic approach to analyzing the geometric characteristics of signals or images and has been applied widely to many applications such as edge detection, object segmentation, noise suppression and so on. Fuzzy mathematical morphology aims to extend the binary morphological operators to grey-level images. In order to define the basic morphological operations of fuzzy erosion, dilation, opening and closing, a general method based upon fuzzy implication and inclusion grade operators is introduced. The fuzzy morphological operations extend the ordinary morphological operations by using fuzzy sets, where the union operation is replaced by a maximum operation and the intersection operation is replaced by a minimum operation. This work consists of two parts. In the first, fuzzy set theory, fuzzy mathematical morphology (which is based on fuzzy logic and fuzzy set theory), fuzzy mathematical operations and their properties are studied in detail. In the second part, the application of fuzziness in mathematical morphology to practical work such as image processing is discussed with illustrative problems.
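
A toy sketch of greyscale (fuzzy-set style) dilation and erosion, where union and intersection become maximum and minimum over the structuring element, is given below; it uses a flat 3x3 structuring element and is only a schematic of the operators discussed, not the paper's implication-based definitions.

```python
import numpy as np

def fuzzy_dilate(img, size=3):
    # Greyscale dilation: maximum over the window (the max plays the role
    # of the fuzzy union).
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].max()
    return out

def fuzzy_erode(img, size=3):
    # Greyscale erosion: minimum over the window (the fuzzy intersection).
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].min()
    return out

img = np.random.rand(8, 8)                 # membership degrees in [0, 1]
opened = fuzzy_dilate(fuzzy_erode(img))    # opening = erosion then dilation
closed = fuzzy_erode(fuzzy_dilate(img))    # closing = dilation then erosion
```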

Keywords: Binary morphology, Fuzzy sets, Grayscale morphology, Image processing, Mathematical morphology.

133 Optimization of Surface Roughness in Turning Process Utilizing Live Tooling via Taguchi Methodology

Authors: Weinian Wang, Joseph C. Chen

Abstract:

The objective of this research is to optimize the process of cutting cylindrical workpieces utilizing live tooling on a HAAS ST-20 lathe. Surface roughness (Ra) was investigated as the indicator of quality characteristics for the machining process. An aluminum alloy was used to conduct the experiments because of its wide range of uses in engineering structures and components where light weight or corrosion resistance is required. In this study, the Taguchi methodology is utilized to determine the effect that each parameter has on surface roughness (Ra). A total of 18 experiments for each process were designed according to Taguchi's L9 orthogonal array (OA) with four control factors at three levels each, and signal-to-noise (S/N) ratios were computed with the smaller-the-better formulation to minimize the response. The optimal parameters identified for the surface roughness of the turning operation utilizing live tooling were a feed rate of 3 inches/min (A3), a spindle speed of 1300 rpm (B3), a 2-flute titanium nitride coated 3/8" endmill (C1), and a depth of cut of 0.025 inches (D2). The mean surface roughness of the confirmation runs in the turning operation was 8.22 micro-inches. The final results demonstrate that the Taguchi methodology is an effective way of improving surface roughness in the turning process.
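
For reference, the smaller-the-better signal-to-noise ratio used here is S/N = -10 log10((1/n) sum(y_i^2)); the sketch below evaluates it on made-up roughness readings, not the experimental data.

```python
import numpy as np

def sn_smaller_the_better(y):
    # Taguchi smaller-the-better S/N ratio in dB: higher is better,
    # i.e. lower and more consistent surface roughness.
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical Ra readings (micro-inches) for two replicated runs of one trial.
print(sn_smaller_the_better([8.5, 7.9]))   # approximately -18.3 dB
```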

Keywords: Live tooling, surface roughness, Taguchi Parameter Design, CNC turning operation.

132 Co-tier and Co-channel Interference Avoidance Algorithm for Femtocell Networks

Authors: S. Padmapriya, M. Tamilarasi

Abstract:

Femtocells are regarded as a milestone for next-generation cellular networks. As femtocells are deployed in an unplanned manner, there is a chance of assigning the same resource to neighboring femtocells. This scenario may induce co-channel interference and may seriously affect the service quality of neighboring femtocells. In addition, the dominant transmit power of a femtocell will induce co-tier interference to neighboring femtocells. Thus, to jointly handle co-tier and co-channel interference, we propose an interference-free power and resource block allocation (IFPRBA) algorithm for closely located, closed-access femtocells. Based on the neighbor list, the inter-femto-base-station distance, and the uplink noise power, the IFPRBA algorithm assigns non-interfering power and resources to femtocells. The IFPRBA algorithm also guarantees quality of service to femtocell users based on knowledge of the resource requirement, connection type, and tolerable delay budget. Simulation results show that the interference power experienced under the IFPRBA algorithm is below the tolerable interference power, and hence the overall service success ratio, PRB efficiency, and network throughput are the highest when compared to the conventional resource allocation framework for femtocells (RAFF) algorithm.

Keywords: Co-channel interference, co-tier interference, femtocells, guaranteed QoS, power optimization, resource assignment.

131 Simultaneous Optimization of Machining Parameters and Tool Geometry Specifications in Turning Operation of AISI1045 Steel

Authors: Farhad Kolahan, Mohsen Manoochehri, Abbas Hosseini

Abstract:

Machining is an important manufacturing process used to produce a wide variety of metallic parts. Among the various machining processes, turning is one of the most important and is employed to shape cylindrical parts. In turning, the quality of the finished product is measured in terms of surface roughness, which in turn is determined by the machining parameters and tool geometry specifications. The main objective of this study is to simultaneously model and optimize machining parameters and tool geometry in order to improve the surface roughness for AISI 1045 steel. Several levels of machining parameters and tool geometry specifications are considered as input parameters, and the surface roughness is selected as the output measure of process performance. A Taguchi approach is employed to gather experimental data. Then, based on the signal-to-noise (S/N) ratio, the best sets of cutting parameters and tool geometry specifications are determined. Using these parameter values, the surface roughness of AISI 1045 steel parts may be minimized. Experimental results are provided to illustrate the effectiveness of the proposed approach.

Keywords: Taguchi method, turning parameters, tool geometry specifications, S/N ratio, statistical analysis.

130 Inverse Heat Conduction Analysis of Cooling on Run Out Tables

Authors: M. S. Gadala, Khaled Ahmed, Elasadig Mahdi

Abstract:

In this paper, we introduce a gradient-based inverse solver to obtain the missing boundary conditions based on the readings of internal thermocouples. The results show that the method is very sensitive to measurement errors and becomes unstable when small time steps are used. Artificial neural networks are shown to be capable of capturing the whole thermal history on the run-out table, but are not very effective in restoring the detailed behavior of the boundary conditions; they also behave poorly in nonlinear cases and when the boundary condition profile is different. GA and PSO are more effective in finding a detailed representation of the time-varying boundary conditions, as well as in nonlinear cases, although their convergence takes longer. A variation of the basic PSO, called CRPSO, showed the best performance among the three versions. PSO also proved to be effective in handling noisy data, especially when its performance parameters were tuned, and an increase in the self-confidence parameter was found to be effective as it increased the global search capability of the algorithm. RPSO was the most effective variation in dealing with noise, closely followed by CRPSO. The latter variation is recommended for inverse heat conduction problems, as it combines the efficiency and effectiveness required by these problems.
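
As an illustration of the particle swarm component mentioned above, the sketch below implements a generic textbook PSO (not the authors' RPSO/CRPSO variants) and minimises a placeholder objective standing in for the inverse-conduction misfit.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0), seed=0):
    # Basic particle swarm optimisation; w is the inertia weight, c1/c2 are the
    # cognitive ("self-confidence") and social acceleration coefficients.
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Placeholder objective standing in for the boundary-condition misfit.
best, val = pso(lambda q: np.sum((q - 1.2) ** 2), dim=4)
print(best, val)
```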

Keywords: Inverse Analysis, Function Specification, Neural Networks, Particle Swarm, Run Out Table.

129 Theoretical Analysis of Capacities in Dynamic Spatial Multiplexing MIMO Systems

Authors: Imen Sfaihi, Noureddine Hamdi

Abstract:

In this paper, we investigate techniques for scheduling users for resource allocation in multiple input and multiple output (MIMO) packet transmission systems. In these systems, transmit antennas are assigned to a single user or dynamically to different users using spatial multiplexing. The allocation of all transmit antennas to one user cannot take full advantage of multi-user diversity; we therefore consider the case in which resources are allocated dynamically. At each time slot, users have to feed back their channel information on an uplink feedback channel. The channel information assumed to be available to the schedulers is the zero forcing (ZF) post-detection signal to interference plus noise ratio. Our analysis covers the round robin and the opportunistic schemes, and we present an overview and a complete capacity analysis of both. The main result of our study is an analytical form of the system capacity using the ZF receiver at the user terminal. Simulations have been carried out to validate the proposed analytical solutions and to compare the performance of the two schemes.
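
The zero-forcing post-detection SNR assumed to be fed back to the schedulers can be computed per stream as in the sketch below; the random channel and SNR value are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def zf_post_detection_snr(H, snr_linear):
    # For y = H x + n with ZF detection, the post-detection SNR of stream k is
    # snr / [(H^H H)^{-1}]_{kk}, i.e. the per-stream SNR divided by the noise
    # enhancement on the corresponding diagonal entry.
    G = np.linalg.inv(H.conj().T @ H)
    return snr_linear / np.real(np.diag(G))

rng = np.random.default_rng(1)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
print(zf_post_detection_snr(H, snr_linear=10.0))  # per-stream post-detection SNRs
```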

Keywords: MIMO, scheduling, ZF receiver, spatial multiplexing, round robin scheduling, opportunistic.

128 Investigating the Effective Parameters in Determining the Type of Traffic Congestion Pricing Schemes in Urban Streets

Authors: Saeed Sayyad Hagh Shomar

Abstract:

Traffic congestion pricing, as a travel demand management strategy in urban areas for reducing traffic congestion, air pollution and noise pollution, has drawn considerable attention. Despite the encouraging findings for this method, there are still problems in determining the best functional congestion pricing scheme for a given situation; these problems can result in further complications and even failure of the scheme. That is why proper knowledge of the significance of congestion pricing schemes and of the factors that affect their selection can lead to the success of this strategy. In this study, first, a variety of traffic congestion pricing schemes and their components are introduced; then, their functional usage is discussed. Next, by analyzing and comparing the barriers, limitations and advantages, the selection criteria for pricing schemes are described. The results show that the selection of the best scheme depends on various parameters. Finally, based on an examination of the effective parameters, it is concluded that the implementation of area-based schemes (cordon and zonal) has been more successful in avoiding traffic diversion. Considering the topology of cities and the fact that traffic congestion is often created in city centers, area-based schemes are notably functional and appropriate.

Keywords: Congestion pricing, demand management, flat toll, variable toll.

127 NDENet: End-to-End Nighttime Dehazing and Enhancement

Authors: H. Baskar, A. S. Chakravarthy, P. Garg, D. Goel, A. S. Raj, K. Kumar, Lakshya, R. Parvatham, V. Sushant, B. Kumar Rout

Abstract:

In this paper, we present a computer vision task called nighttime dehaze-enhancement, which aims to jointly perform dehazing and lightness enhancement. Our task fundamentally differs from nighttime dehazing: our goal is to jointly dehaze and enhance scenes, while nighttime dehazing only aims to dehaze scenes captured in a nighttime setting. In order to facilitate further research on this task, we release a benchmark dataset called the Reside-β Night dataset, consisting of 4122 nighttime hazy images from 2061 scenes and 2061 ground-truth images. Moreover, we propose a network called NDENet (Nighttime Dehaze-Enhancement Network), which jointly performs dehazing and low-light enhancement in an end-to-end manner. We evaluate our method on the proposed benchmark and achieve a Structural Similarity Index (SSIM) of 0.8962 and a Peak Signal-to-Noise Ratio (PSNR) of 26.25. We also compare our network with baseline networks on our benchmark to demonstrate the effectiveness of our approach. We believe that nighttime dehaze-enhancement is an essential task, particularly for autonomous navigation applications, and hope that our work will open up new frontiers in research. The code for our network is publicly available.
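
The two evaluation metrics quoted above can be reproduced on any restored/ground-truth image pair with scikit-image, as in this small sketch; the random images are placeholders for actual dataset samples.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Placeholder ground-truth and restored images in [0, 1]; in practice these
# would be a benchmark ground truth and the corresponding network output.
rng = np.random.default_rng(0)
gt = rng.random((256, 256))
restored = np.clip(gt + 0.05 * rng.standard_normal(gt.shape), 0.0, 1.0)

print("PSNR:", peak_signal_noise_ratio(gt, restored, data_range=1.0))
print("SSIM:", structural_similarity(gt, restored, data_range=1.0))
```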

Keywords: Dehazing, image enhancement, nighttime, computer vision.

126 Performance Analysis of a Combined Ordered Successive and Interference Cancellation Using Zero-Forcing Detection over Rayleigh Fading Channels in MIMO Systems

Authors: Jamal R. Elbergali

Abstract:

Multiple Input Multiple Output (MIMO) systems are wireless systems with multiple antenna elements at both ends of the link. Wireless communication systems demand high data rates and spectral efficiency with increased reliability, and MIMO systems have become popular techniques for achieving these goals because increased data rates are possible through spatial multiplexing, together with diversity. Spatial Multiplexing (SM) is used to achieve higher throughput than diversity alone. In this paper, we propose a Zero-Forcing (ZF) detection scheme that combines Ordered Successive Interference Cancellation (OSIC) and Zero Forcing with Interference Cancellation (ZF-IC). The proposed method uses OSIC based on Signal to Noise Ratio (SNR) ordering to obtain the estimate of the last symbol; the estimated last symbol is then used as an input to the ZF-IC stage. We analyze the Bit Error Rate (BER) performance of the proposed MIMO system over a Rayleigh fading channel, using the Binary Phase Shift Keying (BPSK) modulation scheme. The results show better performance than previous methods.
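
A bare-bones sketch of SNR-ordered successive interference cancellation with ZF nulling for BPSK is shown below as a generic reference, not the exact OSIC/ZF-IC combination proposed; the channel, noise level and symbols are illustrative.

```python
import numpy as np

def zf_osic_bpsk(H, y):
    # Repeatedly null with the ZF pseudo-inverse, pick the stream with the best
    # post-detection SNR (least noise enhancement), slice it, and cancel it.
    H = H.astype(complex).copy()
    y = y.astype(complex).copy()
    remaining = list(range(H.shape[1]))
    x_hat = np.zeros(H.shape[1])
    while remaining:
        W = np.linalg.pinv(H[:, remaining])
        k_local = int(np.argmin(np.sum(np.abs(W) ** 2, axis=1)))
        k = remaining[k_local]
        s = np.sign(np.real(W[k_local] @ y))      # BPSK slicer
        x_hat[k] = s if s != 0 else 1.0
        y -= H[:, k] * x_hat[k]                   # cancel the detected stream
        remaining.pop(k_local)
    return x_hat

rng = np.random.default_rng(2)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
x = rng.choice([-1.0, 1.0], size=4)
y = H @ x + 0.05 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
print(x, zf_osic_bpsk(H, y))  # transmitted vs. detected BPSK symbols
```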

Keywords: SNR, BER, BPSK, MIMO, Modulation, Zero forcing (ZF), OSIC, ZF-IC, Spatial Multiplexing (SM).

125 Analysis of Reflectance Photoplethysmograph Sensors

Authors: Fu-Hsuan Huang, Po-Jung Yuan, Kang-Ping Lin, Hen-Hong Chang, Cheng-Lun Tsai

Abstract:

Photoplethysmography is a simple measurement of the variation in blood volume in tissue. It detects the pulse signal of the heartbeat as well as the low-frequency signal of vasoconstriction and vasodilation. The transmission-type measurement is limited to a few specific positions, for example the index finger, that have a short path length for light, whereas the reflectance-type measurement can be conveniently applied to most parts of the body surface. This study analyzed the factors that determine the quality of the reflectance photoplethysmograph signal, including the emitter-detector distance, wavelength, light intensity, and the optical properties of skin tissue. Light emitting diodes (LEDs) with four different visible wavelengths were used as the light emitters, and a phototransistor was used as the light detector. A micro translation stage adjusted the emitter-detector distance from 2 mm to 15 mm, and the reflective photoplethysmograph signals were measured at different sites. The optimal emitter-detector distance was chosen to give a large dynamic range for low-frequency drifting without signal saturation and a high perfusion index. Among the four wavelengths, yellowish-green light (571 nm) with a proper emitter-detector distance of 2 mm is the most suitable for obtaining a steady and reliable reflectance photoplethysmograph signal.
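
The perfusion index used above to compare emitter-detector distances is essentially the ratio of the pulsatile (AC) to the static (DC) component of the photoplethysmograph signal; a toy calculation on a synthetic waveform (not measured data) is shown below.

```python
import numpy as np

fs = 100.0
t = np.arange(0, 10, 1 / fs)
# Synthetic reflectance PPG: a large DC level with a small cardiac AC ripple at 1.2 Hz.
ppg = 2.0 + 0.04 * np.sin(2 * np.pi * 1.2 * t)

ac = np.ptp(ppg)    # peak-to-peak pulsatile component
dc = np.mean(ppg)   # static (baseline) component
print("perfusion index (%):", 100.0 * ac / dc)   # about 4% for these values
```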

Keywords: Reflectance photoplethysmograph, Perfusion index, Signal-to-noise ratio.

124 An Image Enhancement Method Based on Curvelet Transform for CBCT-Images

Authors: Shahriar Farzam, Maryam Rastgarpour

Abstract:

Image denoising plays an extremely important role in digital image processing, and the enhancement of clinical images based on the curvelet transform has developed rapidly in recent years. In this paper, we present a contrast enhancement method for cone beam CT (CBCT) images based on the fast discrete curvelet transform (FDCT) implemented via the Unequally Spaced Fast Fourier Transform (USFFT). This transform returns a table of curvelet coefficients indexed by a scale parameter, an orientation and a spatial location; accordingly, the coefficients obtained from FDCT-USFFT can be modified in order to enhance the contrast of an image. Our proposed method first applies the FDCT, computed through the unequally spaced fast Fourier transform, to the input image and then applies thresholding to the curvelet coefficients to enhance the CBCT images. Applying the unequally spaced fast Fourier transform leads to an accurate reconstruction of the image with high resolution. The experimental results indicate that the performance of the proposed method is superior to existing methods in terms of Peak Signal to Noise Ratio (PSNR) and Effective Measure of Enhancement (EME).

Keywords: Curvelet transform, image enhancement, CBCT, image denoising.

123 Channel Estimation for Orthogonal Frequency Division Multiplexing Systems over Doubly Selective Channels Based on the DCS-DCSOMP Algorithm

Authors: Linyu Wang, Furui Huo, Jianhong Xiang

Abstract:

The Doppler shift generated by high-speed movement and multipath effects in the channel are the main causes of time-frequency doubly-selective (DS) channels, in which severe inter-carrier interference (ICI) occurs. Channel estimation for an orthogonal frequency division multiplexing (OFDM) system over a DS channel is therefore very difficult. The simultaneous orthogonal matching pursuit algorithm under distributed compressive sensing theory (DCS-SOMP) has been used for channel estimation in OFDM systems over DS channels; however, its reconstruction accuracy is not high enough in the low signal-to-noise ratio (SNR) regime. To solve this problem, we propose in this paper an improved DCS-SOMP algorithm based on an inner-product difference comparison operation (DCS-DCSOMP). The reconstruction accuracy is improved by increasing the number of candidate indexes and designing comparison conditions on the inner-product differences. We combine the DCS-DCSOMP algorithm with the basis expansion model (BEM) to reduce the complexity of channel estimation. Simulation results show the effectiveness of the proposed algorithm and its advantages over other algorithms.
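
For orientation, a compact version of the simultaneous OMP step that DCS-SOMP builds on, without the proposed inner-product-difference refinement or the BEM, might read as follows; the dictionary and sparse coefficients are randomly generated for illustration.

```python
import numpy as np

def somp(Phi, Y, n_nonzero):
    # Simultaneous OMP: Y holds several measurement vectors that share a common
    # sparse support; atoms are chosen by the summed correlation magnitude.
    residual = Y.astype(complex).copy()
    support, coeffs = [], None
    for _ in range(n_nonzero):
        corr = np.sum(np.abs(Phi.conj().T @ residual), axis=1)
        corr[support] = 0.0                       # do not pick an atom twice
        support.append(int(np.argmax(corr)))
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], Y, rcond=None)
        residual = Y - Phi[:, support] @ coeffs
    X = np.zeros((Phi.shape[1], Y.shape[1]), dtype=complex)
    X[support] = coeffs
    return X, support

rng = np.random.default_rng(3)
Phi = rng.standard_normal((32, 128))
X_true = np.zeros((128, 4)); X_true[[5, 40, 77]] = rng.standard_normal((3, 4))
Y = Phi @ X_true
X_hat, support = somp(Phi, Y, n_nonzero=3)
print(sorted(support))  # should recover the shared support {5, 40, 77}
```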

Keywords: OFDM, doubly selective, channel estimation, compressed sensing.

122 Human Face Detection and Segmentation using Eigenvalues of Covariance Matrix, Hough Transform and Raster Scan Algorithms

Authors: J. Prakash, K. Rajesh

Abstract:

In this paper, we propose a novel method for human face segmentation using the elliptical structure of the human head. It makes use of the information present in the edge map of the image. In this approach, we use the fact that the eigenvalues of a covariance matrix represent the elliptical structure: the large and small eigenvalues of the covariance matrix are associated with the major and minor axial lengths of an ellipse, and the other elliptical parameters are used to identify the centre and orientation of the face. Since an Elliptical Hough Transform requires a 5D Hough space, the Circular Hough Transform (CHT) is used to evaluate the elliptical parameters. A sparse matrix technique is used to perform the CHT, since it squeezes out zero elements and stores only a small number of non-zero elements, thereby offering the advantages of less storage space and lower computational time. A neighborhood suppression scheme is used to identify the valid Hough peaks. The accurate positions of the circumference pixels for occluded and distorted ellipses are identified using Bresenham's Raster Scan Algorithm, which uses geometrical symmetry properties. This method does not require the evaluation of tangents for curvature contours, which are very sensitive to noise. The method has been evaluated on several images with different face orientations.
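
The link between covariance eigenvalues and elliptical structure used above can be illustrated in a few lines of NumPy; the synthetic edge points below stand in for a real edge map, and the ellipse parameters are arbitrary choices.

```python
import numpy as np

# Synthetic edge points of an ellipse with semi-axes (a, b) = (40, 15),
# rotated by 30 degrees and shifted; in the paper these come from the edge map.
theta = np.linspace(0, 2 * np.pi, 400)
a, b, rot = 40.0, 15.0, np.deg2rad(30.0)
pts = np.c_[a * np.cos(theta), b * np.sin(theta)]
R = np.array([[np.cos(rot), -np.sin(rot)], [np.sin(rot), np.cos(rot)]])
pts = pts @ R.T + np.array([100.0, 80.0])

centre = pts.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(pts - centre, rowvar=False))

# The large/small eigenvalues relate to the major/minor axial lengths, and the
# eigenvector of the largest eigenvalue gives the orientation.
major, minor = 2 * np.sqrt(2 * eigvals[::-1])   # about (80, 30) here
orientation = np.degrees(np.arctan2(eigvecs[1, -1], eigvecs[0, -1]))  # ~30 deg (mod 180)
print(centre, major, minor, orientation)
```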

Keywords: Circular Hough Transform, Covariance matrix, Eigenvalues, Elliptical Hough Transform, Face segmentation, Raster Scan Algorithm.

121 Investigation of the Flow Characteristics in a Catalytic Muffler with Perforated Inlet Cone

Authors: Gyo Woo Lee, Man Young Kim

Abstract:

Emission regulations for diesel engines are being strengthened, and it is impossible to meet the standards without exhaust after-treatment systems. Lack of space in many diesel vehicles, however, makes it difficult to design and install stand-alone catalytic converters such as the DOC, DPF, and SCR in the vehicle exhaust system. Accordingly, these devices have been installed inside the muffler to save space, giving what is referred to as the catalytic muffler. Such a muffler, however, has a complex internal structure, with a perforated plate and pipe for noise control and a monolithic catalyst for emission reduction. For this reason, flow uniformity and pressure drop, which affect the efficiency of the catalyst and the engine performance, respectively, should be examined when the catalytic muffler is designed. In this work, therefore, the flow uniformity and pressure drop have been numerically investigated to improve the performance of the catalytic converter and the engine, by changing various design parameters such as the inlet shape, porosity, and outlet shape of the muffler, using three-dimensional, incompressible, non-reacting, steady-state turbulent flow simulations inside the catalytic muffler. It is found that the configuration with a perforated pipe inside the inlet part of the muffler has a higher uniformity index and a lower pressure drop than the others considered in this work.
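
The flow uniformity index referred to above is commonly evaluated as gamma = 1 - sum(A_i * |u_i - u_mean|) / (2 * u_mean * sum(A_i)); a small sketch with made-up velocities on the catalyst inlet face is given below.

```python
import numpy as np

def uniformity_index(u, areas=None):
    # Velocity uniformity index on the monolith face: 1.0 means perfectly
    # uniform flow, lower values indicate maldistribution.
    u = np.asarray(u, dtype=float)
    areas = np.ones_like(u) if areas is None else np.asarray(areas, dtype=float)
    u_mean = np.sum(areas * u) / np.sum(areas)
    return 1.0 - np.sum(areas * np.abs(u - u_mean)) / (2.0 * u_mean * np.sum(areas))

# Hypothetical axial velocities (m/s) sampled on the catalyst inlet plane.
print(uniformity_index([3.2, 3.0, 2.8, 3.5, 2.5, 3.1]))
```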

Keywords: Catalytic muffler, Perforated inlet cone, Catalysts, Perforated pipe, Flow uniformity, Pressure drop.

120 Retrospective Synthetic Focusing with Correlation Weighting for Very High Frame Rate Ultrasound

Authors: Chang-Lin Hu, Yao-You Cheng, Meng-Lin Li

Abstract:

The need for high frame-rate imaging has been triggered by new applications of ultrasound imaging such as transient elastography and real-time 3D ultrasound. Plane wave excitation (PWE) is one way to achieve very high frame rates, since an image can be formed from a single insonification. However, owing to the lack of transmit focusing, the image quality with PWE is lower than with conventional focused transmission. To solve this problem, we propose a filter-retrieved transmit focusing (FRF) technique combined with cross-correlation weighting (FRF+CC weighting) for high frame-rate imaging with PWE. A retrospective focusing filter is designed to simultaneously minimize the predefined sidelobe energy associated with single PWE and the filter energy related to the signal-to-noise ratio (SNR). This filter attempts to maintain the mainlobe signals and to reduce the sidelobe ones, which yields similar mainlobe signals but different sidelobes between the original PWE and the FRF baseband data. The normalized cross-correlation coefficient at zero lag is calculated to quantify the degree of similarity at each imaging point and is used as a weighting matrix on the FRF baseband data to further suppress sidelobes, thus improving the filter-retrieved focusing quality.

Keywords: retrospective synthetic focusing, high frame rate, correlation weighting.

119 Coding Structures for Seated Row Simulation of an Active Controlled Vibration Isolation and Stabilization System for Astronaut’s Exercise Platform

Authors: Ziraguen O. Williams, Shield B. Lin, Fouad N. Matari, Leslie J. Quiocho

Abstract:

Simulation of a seated row exercise was a continuation of work to assist NASA in analyzing a one-dimensional vibration isolation and stabilization system for an astronaut exercise platform. Feedback delay and signal noise were added to the simulation model. Simulation runs for this study were conducted in two software simulation tools, Trick and MBDyn, simulation environments developed at the NASA Johnson Space Center. The exciter force in the simulation was calculated from motion capture of an exerciser during a seated aerobic row exercise. The simulation runs include passive control, active control using a Proportional, Integral, Derivative (PID) controller, and active control using a Piecewise Linear Integral Derivative (PWLID) controller. Output parameters include the displacements of the exercise platform, the exerciser, and the counterweight; the force transmitted to the wall of the spacecraft; and the actuator force applied to the platform. The simulation results showed excellent force reduction in the actively controlled system compared to the passively controlled system, which showed less force reduction.
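
A minimal discrete PID loop of the kind used in the active control case is sketched below with a toy one-degree-of-freedom platform; the gains, mass and exciter force are made-up values for illustration, not the study's model.

```python
import numpy as np

def simulate_pid(kp=400.0, ki=50.0, kd=80.0, dt=0.001, t_end=5.0):
    # One-DOF platform: m*x'' = f_exciter + f_actuator; the PID drives the
    # platform displacement back to zero. All numbers are illustrative.
    m = 100.0                                      # assumed platform mass, kg
    x, v, integ, prev_err = 0.0, 0.0, 0.0, 0.0
    history = []
    for k in range(int(t_end / dt)):
        t = k * dt
        f_exciter = 200.0 * np.sin(2 * np.pi * 1.0 * t)   # rowing-like force
        err = 0.0 - x
        integ += err * dt
        deriv = (err - prev_err) / dt
        f_actuator = kp * err + ki * integ + kd * deriv    # PID control force
        prev_err = err
        a = (f_exciter + f_actuator) / m                   # explicit Euler step
        v += a * dt
        x += v * dt
        history.append(x)
    return np.array(history)

disp = simulate_pid()
print("peak platform displacement (m):", np.max(np.abs(disp)))
```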

Keywords: Simulation, counterweight, exercise, vibration.

118 Using HMM-based Classifier Adapted to Background Noises with Improved Sound Features for Audio Surveillance Application

Authors: Asma Rabaoui, Zied Lachiri, Noureddine Ellouze

Abstract:

Discrimination between different classes of environmental sounds is the goal of our work, as a sound recognition system offers concrete potential for surveillance and security applications. The first contribution of this paper to the field is a thorough investigation of the applicability of state-of-the-art audio features to environmental sound recognition; additionally, a set of novel features obtained by combining the basic parameters is introduced. The quality of the investigated features is evaluated by an HMM-based classifier, to which particular attention is paid. In fact, we propose a multi-style training system based on HMMs: one recognizer is trained on a database including different levels of background noise and is used as a universal recognizer for every environment. In order to enhance the robustness of the system by reducing the environmental variability, we explore different adaptation algorithms, including Maximum Likelihood Linear Regression (MLLR), Maximum A Posteriori (MAP), and the MAP/MLLR algorithm that combines MAP and MLLR. Experimental evaluation shows that a rather good recognition rate can be reached, even under severe noise degradation, when the system is fed with the appropriate set of features.
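
A minimal sketch of the per-class HMM classifier could look like the following, assuming the third-party hmmlearn package; the feature sequences are random stand-ins for MFCC-like frames, and multi-style training would simply include noisy variants of each class in its training set.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # third-party package, assumed available

def train_class_models(features_by_class, n_states=5):
    # One Gaussian HMM per sound class, trained on concatenated feature sequences.
    models = {}
    for label, sequences in features_by_class.items():
        X = np.vstack(sequences)
        lengths = [len(s) for s in sequences]
        models[label] = GaussianHMM(n_components=n_states, covariance_type="diag",
                                    n_iter=20).fit(X, lengths)
    return models

def classify(models, sequence):
    # Pick the class whose HMM assigns the highest log-likelihood.
    return max(models, key=lambda label: models[label].score(sequence))

# Random stand-ins for feature sequences (frames x 13 coefficients per clip).
rng = np.random.default_rng(0)
data = {"glass_break": [rng.standard_normal((60, 13)) + 2.0 for _ in range(5)],
        "door_slam":   [rng.standard_normal((60, 13)) - 2.0 for _ in range(5)]}
models = train_class_models(data)
print(classify(models, rng.standard_normal((60, 13)) + 2.0))  # -> "glass_break"
```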

Keywords: Sounds recognition, HMM classifier, Multi-style training, Environmental Adaptation, Feature combinations.

117 Analysis of Translational Ship Oscillations in a Realistic Environment

Authors: Chen Zhang, Bernhard Schwarz-Röhr, Alexander Härting

Abstract:

To acquire accurate ship motions at the center of gravity, a single low-cost inertial sensor is utilized on board to measure the ship's oscillating motions. The three-axis accelerations and three-axis rotational rates provided by the sensor are used as observations. The mathematical model for processing the observation data includes the determination of the distance vector between the sensor and the center of gravity in the x, y, and z directions. After setting up the transformation matrix from the sensor's own coordinate system to the ship's body frame, an extended Kalman filter is applied to deal with the nonlinearities between the ship motion in the body frame and the observation information in the sensor frame. As a side effect, the method suppresses sensor noise and other unwanted errors. The results include not only roll and pitch but also linear motions, in particular heave and surge at the center of gravity. For testing, we resort to measurements recorded on a small vessel in a well-defined sea state. With response amplitude operators computed numerically by a commercial software package (Seaway), the motion characteristics are estimated; these agree well with the measurements after processing with the suggested method.

Keywords: Extended Kalman filter, nonlinear estimation, sea trial, ship motion estimation.

116 Discrete Polyphase Matched Filtering-based Soft Timing Estimation for Mobile Wireless Systems

Authors: Thomas O. Olwal, Michael A. van Wyk, Barend J. van Wyk

Abstract:

In this paper, we present a soft timing phase estimation (STPE) method for wireless mobile receivers operating at low signal-to-noise ratios (SNRs). Discrete Polyphase Matched (DPM) filters and a log-maximum a posteriori probability (Log-MAP) and/or Soft-output Viterbi algorithm (SOVA) are combined to derive a new timing recovery (TR) scheme. We apply this scheme to a wireless cellular communication system model that comprises a raised cosine filter (RCF) and a bit-interleaved turbo-coded multi-level modulation (BITMM) scheme, with the channel assumed to be memoryless. Furthermore, no clock signals are transmitted to the receiver, contrary to classical data-aided (DA) models. This new model ensures that both the bandwidth and the power of the communication system are conserved; however, the computational complexity of ideal turbo synchronization is increased by 50%. Several simulation tests of bit error rate (BER) and block error rate (BLER) versus low SNR reveal that the proposed iterative soft timing recovery (ISTR) scheme outperforms conventional schemes.

Keywords: discrete polyphase matched filters, maximum likelihood estimators, soft timing phase estimation, wireless mobile systems.

115 Highly Accurate Target Motion Compensation Using Entropy Function Minimization

Authors: Amin Aghatabar Roodbary, Mohammad Hassan Bastani

Abstract:

One of the drawbacks of stepped frequency radar systems is their sensitivity to target motion. In such systems, target motion causes range cell shift, false peaks, Signal to Noise Ratio (SNR) reduction, and range profile spreading, because the power spectrum of each range cell interferes with adjacent range cells; this distorts the High Resolution Range Profile (HRRP) and disrupts the target recognition process. Compensation for the effects of the Target Motion Parameters (TMPs) should therefore be employed. In this paper, such a method for estimating the TMPs (velocity and acceleration) and consequently eliminating or suppressing their unwanted effects on the HRRP, based on entropy minimization, is proposed. The method is carried out in two major steps: in the first step, a discrete search is performed over the whole acceleration-velocity lattice, within a specific interval, to find a coarse minimum of the entropy function. In the second step, a 1-D search over velocity is carried out in the neighborhood of this minimum, along several constant-acceleration lines, in order to refine the minimum point found in the first step. The provided simulation results demonstrate the effectiveness of the proposed method.

Keywords: ATR, HRRP, motion compensation, SFW, TMP.

114 A New Image Psychovisual Coding Quality Measurement Based on Region of Interest

Authors: M. Nahid, A. Bajit, A. Tamtaoui, E. H. Bouyakhf

Abstract:

To model the human visual system (HVS) in the region of interest, we propose a new objective quality metric adapted to wavelet foveation-based image compression, which exploits a foveation filter implemented in the DWT domain and based on the point and region of fixation of the human eye. This model is used to predict the visible differences between an original and a compressed image with respect to this fixation region, and it yields an adapted, local error measure by removing all peripheral errors. The technique, which we call foveation wavelet visible difference prediction (FWVDP), is demonstrated on a number of noisy images, all of which have the same local peak signal to noise ratio (PSNR) but visibly different errors. We show that the FWVDP reliably predicts the fixation areas of interest where the error is masked, due to high image contrast, and the areas where the error is visible, due to low image contrast. The paper also suggests ways in which the FWVDP can be used to determine a visually optimal quantization strategy for foveation-based wavelet coefficients and to produce a quantitative local measure of image quality.

Keywords: Human Visual System, Image Quality, Image Compression, foveation wavelet, region of interest (ROI).

113 QoS Improvement Using Intelligent Algorithm under Dynamic Tropical Weather for Earth-Space Satellite Applications

Authors: Joseph S. Ojo, Vincent A. Akpan, Oladayo G. Ajileye, Olalekan L. Ojo

Abstract:

In this paper, an intelligent algorithm (IA) that is capable of adapting to dynamic tropical weather conditions is proposed based on fuzzy logic techniques. The IA effectively interacts with the quality of service (QoS) criteria, irrespective of the dynamic tropical weather, to achieve improvement in the satellite links. To achieve this, an adaptive network-based fuzzy inference system (ANFIS) has been adopted. The algorithm is capable of responding to the weather fluctuations to generate appropriate improvements to the satellite QoS for efficient service to customers. Five years (2012-2016) of rainfall-rate time series data with one-minute integration time have been used to derive fading based on the ITU-R P.618-12 propagation model. The data were obtained from measurements undertaken by the Communication Research Group (CRG), Physics Department, Federal University of Technology, Akure, Nigeria. The rain attenuation and signal-to-noise ratio (SNR) were derived for frequencies between Ku- and V-band and for different propagation angles and transmitting powers. The simulated results show a substantial reduction in SNR, especially for applications in the area of digital video broadcast second-generation coding and modulation satellite networks.

Keywords: Fuzzy logic, intelligent algorithm, Nigeria, QoS, satellite applications, tropical weather.

112 An Optimization of Machine Parameters for Modified Horizontal Boring Tool Using Taguchi Method

Authors: Thirasak Panyaphirawat, Pairoj Sapsmarnwong, Teeratas Pornyungyuen

Abstract:

This paper presents the findings of an experimental investigation of important machining parameters for a horizontal boring tool modified to mount on a horizontal lathe machine in order to bore an over-length workpiece. In order to verify the usability of the modified tool, a design of experiments based on the Taguchi method is performed. The parameters investigated are spindle speed, feed rate, depth of cut, and length of workpiece. A Taguchi L9 orthogonal array is selected for the four factors at three levels each, in order to minimize the surface roughness (Ra and Rz) of S45C steel tubes. Signal-to-noise ratio analysis and analysis of variance (ANOVA) are performed to study the effect of these parameters and to optimize the machine settings for the best surface finish. The controlling factors with the greatest effect are, in order, depth of cut, spindle speed, length of workpiece, and feed rate. A confirmation test is performed to verify the optimal settings obtained from the Taguchi method, and the result is satisfactory.

Keywords: Design of Experiment, Taguchi Design, Optimization, Analysis of Variance, Machining Parameters, Horizontal Boring Tool.

111 Optimization of Process Parameters of Pressure Die Casting using Taguchi Methodology

Authors: Satish Kumar, Arun Kumar Gupta, Pankaj Chandna

Abstract:

The present work analyses different parameters of pressure die casting in order to minimize casting defects. Pressure die casting is usually applied for the casting of aluminium alloys. A good surface finish with the required tolerances and dimensional accuracy can be achieved by optimizing controllable process parameters such as solidification time, molten metal temperature, filling time, injection pressure, and plunger velocity. Moreover, by selecting optimum process parameters, pressure die casting defects such as porosity, insufficient spread of molten material, and flash are also minimized. Therefore, a pressure die cast component, a carburetor housing of aluminium alloy (Al2Si2O5), has been considered. The effects of the selected process parameters on casting defects, and the subsequent setting of the parameters and their levels, have been determined using Taguchi's parameter design approach. The experiments were performed according to the combinations of levels of the different process parameters suggested by an L18 orthogonal array. Analyses of variance were performed on the mean and the signal-to-noise ratio to estimate the percentage contribution of the different process parameters. A confidence interval was also estimated for a 95% confidence level, and three confirmation experiments were performed to validate the optimum levels of the different parameters. An overall 2.352% reduction in defects was observed with the suggested optimum process parameters.

Keywords: Aluminium Casting, Pressure Die Casting, Taguchi Methodology, Design of Experiments.
