Search results for: Signal Classification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2214

264 Adaptive Square-Rooting Companding Technique for PAPR Reduction in OFDM Systems

Authors: Wisam F. Al-Azzo, Borhanuddin Mohd. Ali

Abstract:

This paper addresses the problem of peak-to-average power ratio (PAPR) in orthogonal frequency division multiplexing (OFDM) systems. It also introduces a new PAPR reduction technique based on an adaptive square-rooting (SQRT) companding process. The SQRT process of the proposed technique changes the statistical characteristics of the OFDM output signals from a Rayleigh distribution to a Gaussian-like distribution. This change in statistical distribution results in changes to both the peak and average power values of the OFDM signals, and consequently reduces the PAPR significantly. For the 64-QAM OFDM system using 512 subcarriers, up to 6 dB reduction in PAPR was achieved by the square-rooting technique with a fixed degradation in bit error rate (BER) equal to 3 dB. However, the PAPR is reduced at the expense of only -15 dB out-of-band spectral shoulder re-growth below the in-band signal level. The proposed adaptive SQRT technique is superior in terms of BER performance to the original, non-adaptive square-rooting technique when the required reduction in PAPR is no more than 5 dB. It also provides a fixed amount of PAPR reduction, which is not available in the original SQRT technique.
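
As an illustration of the core companding step, the following NumPy sketch applies a plain (non-adaptive) square-rooting operation to the envelope of randomly generated 64-QAM OFDM symbols and measures the resulting PAPR change. The subcarrier count matches the paper's 512, but all other parameters, and the omission of the adaptive gain control, are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 512                                    # subcarriers, as in the paper
num_symbols = 500

# 64-QAM constellation normalized to unit average power
levels = np.array([-7, -5, -3, -1, 1, 3, 5, 7])
const = (levels[:, None] + 1j * levels[None, :]).ravel()
const = const / np.sqrt(np.mean(np.abs(const) ** 2))

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

before, after = [], []
for _ in range(num_symbols):
    X = rng.choice(const, N)               # frequency-domain 64-QAM symbols
    x = np.fft.ifft(X) * np.sqrt(N)        # OFDM time-domain signal (Rayleigh-like envelope)
    # square-rooting companding: compress the envelope, keep the phase
    y = np.sqrt(np.abs(x)) * np.exp(1j * np.angle(x))
    before.append(papr_db(x))
    after.append(papr_db(y))

print(f"mean PAPR before SQRT companding: {np.mean(before):.2f} dB")
print(f"mean PAPR after  SQRT companding: {np.mean(after):.2f} dB")
```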

Keywords: Complementary cumulative distribution function (CCDF), OFDM, peak-to-average power ratio (PAPR), adaptive square-rooting PAPR reduction technique.

263 Improving Activity Recognition Classification of Repetitious Beginner Swimming Using a 2-Step Peak/Valley Segmentation Method with Smoothing and Resampling for Machine Learning

Authors: Larry Powell, Seth Polsley, Drew Casey, Tracy Hammond

Abstract:

Human activity recognition (HAR) systems have shown positive performance when recognizing repetitive activities like walking, running, and sleeping. Water-based activities are a reasonably new area for activity recognition. However, water-based activity recognition has largely focused on supporting elite and competitive swimmers, who already have excellent coordination and proper form. Beginner swimmers are not perfect, and activity recognition needs to support their individual motions to help them improve. Activity recognition algorithms are traditionally built around short segments of timed sensor data. Using a time-window input can cause performance issues in the machine learning model: the window's size can be too small or too large, requiring careful tuning and precise data segmentation. In this work, we present a method that uses a time window as the initial segmentation and then separates the data based on the change in the sensor value. Our system uses a multi-phase segmentation method that extracts all peaks and valleys for each axis of an accelerometer placed on the swimmer's lower back. This results in high recognition performance using leave-one-subject-out validation on our study with 20 beginner swimmers; the model optimized on our final dataset achieves an F-score of 0.95.
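
A minimal sketch of the two-step idea, coarse time windows followed by peak/valley cuts on one accelerometer axis, is given below using SciPy's find_peaks. The window length, prominence threshold, and synthetic signal are illustrative assumptions, and the paper's smoothing and resampling steps are omitted.

```python
import numpy as np
from scipy.signal import find_peaks

def peak_valley_segments(axis_signal, window=128, prominence=0.1):
    """Two-step segmentation sketch: cut the stream into coarse time windows,
    then split each window at the peaks and valleys of the accelerometer axis."""
    segments = []
    for start in range(0, len(axis_signal) - window + 1, window):
        chunk = axis_signal[start:start + window]
        peaks, _ = find_peaks(chunk, prominence=prominence)
        valleys, _ = find_peaks(-chunk, prominence=prominence)
        cuts = np.sort(np.concatenate(([0], peaks, valleys, [len(chunk)])))
        for a, b in zip(cuts[:-1], cuts[1:]):
            if b > a:
                segments.append(chunk[a:b])
    return segments

# synthetic "swimming" signal: slow repetitive strokes plus sensor noise
t = np.linspace(0, 20, 2000)
accel_z = np.sin(2 * np.pi * 0.8 * t) + 0.1 * np.random.randn(t.size)
print(f"{len(peak_valley_segments(accel_z))} variable-length segments extracted")
```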

Keywords: Time window, peak/valley segmentation, feature extraction, beginner swimming, activity recognition.

262 Design Neural Network Controller for Mechatronic System

Authors: Ismail Algelli Sassi Ehtiwesh, Mohamed Ali Elhaj

Abstract:

The main goal of this study is to analyze all relevant properties of electro-hydraulic systems and, based on that, to make a proper choice of the neural network control strategy that may be used for the control of the mechatronic system. A combination of electronic and hydraulic systems is widely used since it combines the advantages of both. Hydraulic systems are widespread because of properties such as accuracy, flexibility, a high horsepower-to-weight ratio, fast starting, stopping and reversal with smoothness and precision, and simplicity of operation. On the other hand, the modern control of hydraulic systems is based on controlling the current fed to the inductive solenoid that sets the position of the hydraulic valve. Since this current may be easily handled by a PWM (Pulse Width Modulation) signal with a proper frequency, the combination of electrical and hydraulic systems has become very fruitful and usable in specific areas such as the aircraft and military industries. The study shows and discusses the experimental results obtained with the neural network control strategy using MATLAB and SIMULINK [1]. Finally, special attention was paid to the possibility of neuro-controller design, its application to the control of electro-hydraulic systems, and a comparison with other kinds of control.

Keywords: Neural network controller, mechatronics, electro-hydraulic.

261 An Overview of Electronic Waste as Aggregate in Concrete

Authors: S. R. Shamili, C. Natarajan, J. Karthikeyan

Abstract:

Rapid growth of the world population and widespread urbanization have remarkably increased the development of the construction industry, which has caused a huge demand for sand and gravel. Environmental problems occur when the rate of extraction of sand, gravel, and other materials exceeds the rate of generation of natural resources; therefore, an alternative source is essential to replace the materials used in concrete. Nowadays, electronic products have become an integral part of daily life, providing more comfort, security, and ease of exchange of information. These electronic waste (E-Waste) materials raise serious human health concerns and require extreme care in their disposal to avoid any adverse impacts. Disposal or dumping of these E-Wastes also causes major issues because they are highly complex to handle and often contain highly toxic chemicals such as lead, cadmium, mercury, beryllium, brominated flame retardants (BFRs), polyvinyl chloride (PVC), and phosphorus compounds. Hence, E-Waste can be incorporated in concrete to support a more sustainable environment. This paper deals with the composition, preparation, properties, and classification of E-Waste. All these processes avoid dumping to landfills while conserving natural aggregate resources and providing a better environmental option. This paper also provides a detailed literature review on the behaviour of concrete incorporating E-Waste. Much research shows the strong possibility of using E-Waste as a substitute for aggregates, which would eventually reduce the use of natural aggregates in concrete.

Keywords: Disposal, electronic waste, landfill, toxic chemicals.

260 Optimized Brain Computer Interface System for Unspoken Speech Recognition: Role of Wernicke Area

Authors: Nassib Abdallah, Pierre Chauvet, Abd El Salam Hajjar, Bassam Daya

Abstract:

In this paper, we propose an optimized brain-computer interface (BCI) system for unspoken speech recognition, based on the fact that the construction of unspoken words relies strongly on the Wernicke area, situated in the temporal lobe. Our BCI system has four modules: (i) the EEG acquisition module, based on a non-invasive headset with 14 electrodes; (ii) the preprocessing module, which removes noise and artifacts using the Common Average Reference method; (iii) the feature extraction module, using the Wavelet Packet Transform (WPT); and (iv) the classification module, based on a one-hidden-layer artificial neural network. The present study consists of comparing the recognition accuracy of 5 Arabic words when using all the headset electrodes or only the 4 electrodes situated near the Wernicke area, as well as the effect of selecting the subbands produced by the WPT module. After applying the artificial neural network to the produced database, we obtain, on the test dataset, an accuracy of 83.4% with all the electrodes and all the subbands of the 8-level WPT decomposition. However, by using only the 4 electrodes near the Wernicke area and the 6 middle subbands of the WPT, we obtain a large reduction of the dataset size, to approximately 19% of the total dataset, with an accuracy of 67.5%. This reduction appears particularly important for the design of a low-cost and simple-to-use BCI trained for several words.

Keywords: Brain-computer interface, speech recognition, electroencephalography EEG, Wernicke area, artificial neural network.

259 Theoretical Analysis of Capacities in Dynamic Spatial Multiplexing MIMO Systems

Authors: Imen Sfaihi, Noureddine Hamdi

Abstract:

In this paper, we investigate techniques for scheduling users for resource allocation in multiple-input multiple-output (MIMO) packet transmission systems. In these systems, the transmit antennas are assigned to one user or dynamically to different users using spatial multiplexing. Allocating all transmit antennas to one user cannot take full advantage of multi-user diversity; therefore, we consider the case where resources are allocated dynamically. At each time slot, users feed back their channel information on an uplink feedback channel. The channel information assumed available to the schedulers is the zero-forcing (ZF) post-detection signal-to-interference-plus-noise ratio. Our analysis concerns the round robin and the opportunistic schemes. In this paper, we present an overview and a complete capacity analysis of these schemes. The main result of our study is an analytical form of the system capacity using the ZF receiver at the user terminal. Simulations have been carried out to validate all proposed analytical solutions and to compare the performance of these schemes.
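
The following NumPy sketch illustrates the quantities involved: the ZF post-detection SNR of each spatial stream, the resulting per-user capacity, and the difference between round robin and opportunistic (max-capacity) scheduling over i.i.d. Rayleigh-fading slots. Antenna counts, user count, and SNR are assumed values for illustration, not the paper's analytical setup.

```python
import numpy as np

rng = np.random.default_rng(1)
Nt, Nr, num_users, slots, snr_lin = 2, 2, 8, 1000, 10.0  # illustrative values

def zf_post_snrs(H, rho):
    """Zero-forcing post-detection SNR of each spatial stream."""
    G = np.linalg.inv(H.conj().T @ H)            # (H^H H)^-1
    return rho / np.real(np.diag(G))

def user_capacity(H, rho):
    return np.sum(np.log2(1 + zf_post_snrs(H, rho)))

rr_cap, opp_cap = 0.0, 0.0
for t in range(slots):
    # i.i.d. Rayleigh-fading channel for every user in this slot
    Hs = [(rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
          for _ in range(num_users)]
    caps = [user_capacity(H, snr_lin / Nt) for H in Hs]
    rr_cap += caps[t % num_users]                # round robin: fixed cyclic order
    opp_cap += max(caps)                         # opportunistic: best user this slot

print(f"round robin   : {rr_cap / slots:.2f} bit/s/Hz")
print(f"opportunistic : {opp_cap / slots:.2f} bit/s/Hz")
```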

Keywords: MIMO, scheduling, ZF receiver, spatial multiplexing, round robin scheduling, opportunistic.

258 A Static Android Malware Detection Based on Actual Used Permissions Combination and API Calls

Authors: Xiaoqing Wang, Junfeng Wang, Xiaolan Zhu

Abstract:

The Android operating system has been embraced by most application developers because of its openness and compatibility, which greatly enriches the categories of available applications. However, it has become the target of malware attackers due to the lack of strict security supervision mechanisms, which has led to the rapid growth of malware and brought serious safety hazards to users. Therefore, it is critical to detect Android malware effectively. Generally, the permissions declared in AndroidManifest.xml reflect the function and behavior of an application to a large extent. Since the current Android system places no restriction on the number of permissions an application can request, developers tend to apply for more permissions than actually needed in order to ensure the successful running of the application, which results in the abuse of permissions. However, some traditional detection methods only consider the requested permissions and ignore whether they are actually used, which leads to incorrect identification of some malware. Therefore, a machine learning detection method based on the actually used permission combinations and API calls is put forward in this paper. Several experiments are conducted to evaluate our methodology. The results show that it can detect unknown malware effectively, with a higher true positive rate and accuracy while maintaining a low false positive rate. The AdaBoostM1 (J48) classification algorithm combined with an information-gain feature selection algorithm yields the best detection result, achieving an accuracy of 99.8%, a true positive rate of 99.6%, and a lowest false positive rate of 0.
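
A hedged scikit-learn sketch of such a detection pipeline is shown below: binary features standing in for actually used permissions and API calls, information-gain-style feature selection, and boosted decision trees loosely analogous to AdaBoostM1 over J48. The feature matrix and labels are random placeholders, and the `estimator` keyword assumes scikit-learn 1.2 or later (older releases use `base_estimator`).

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# hypothetical binary feature matrix: each column marks whether a permission is
# actually used (not merely declared) or whether a sensitive API is called
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(2000, 150))
y = rng.integers(0, 2, size=2000)          # 1 = malware, 0 = benign (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# information-gain-style feature selection followed by boosted decision trees
selector = SelectKBest(mutual_info_classif, k=50).fit(X_tr, y_tr)
clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=3), n_estimators=100)
clf.fit(selector.transform(X_tr), y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(selector.transform(X_te))))
```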

Keywords: Android, permissions combination, API calls, machine learning.

257 NDENet: End-to-End Nighttime Dehazing and Enhancement

Authors: H. Baskar, A. S. Chakravarthy, P. Garg, D. Goel, A. S. Raj, K. Kumar, Lakshya, R. Parvatham, V. Sushant, B. Kumar Rout

Abstract:

In this paper, we present a computer vision task called nighttime dehaze-enhancement. This task aims to jointly perform dehazing and lightness enhancement. Our task fundamentally differs from nighttime dehazing: our goal is to jointly dehaze and enhance scenes, while nighttime dehazing aims only to dehaze scenes under a nighttime setting. In order to facilitate further research on this task, we release a benchmark dataset called the Reside-β Night dataset, consisting of 4122 nighttime hazed images from 2061 scenes and 2061 ground truth images. Moreover, we also propose a network called NDENet (Nighttime Dehaze-Enhancement Network), which jointly performs dehazing and low-light enhancement in an end-to-end manner. We evaluate our method on the proposed benchmark and achieve a Structural Similarity Index (SSIM) of 0.8962 and a Peak Signal-to-Noise Ratio (PSNR) of 26.25. We also compare our network with other baseline networks on our benchmark to demonstrate the effectiveness of our approach. We believe that nighttime dehaze-enhancement is an essential task, particularly for autonomous navigation applications, and hope that our work will open up new frontiers in research. The code for our network is made publicly available.

Keywords: Dehazing, image enhancement, nighttime, computer vision.

256 Performance Analysis of a Combined Ordered Successive and Interference Cancellation Using Zero-Forcing Detection over Rayleigh Fading Channels in MIMO Systems

Authors: Jamal R. Elbergali

Abstract:

Multiple-Input Multiple-Output (MIMO) systems are wireless systems with multiple antenna elements at both ends of the link. Wireless communication systems demand high data rates and spectral efficiency with increased reliability. MIMO systems have become popular techniques to achieve these goals because an increased data rate is possible through spatial multiplexing, together with diversity. Spatial Multiplexing (SM) is used to achieve higher throughput than diversity alone. In this paper, we propose Zero-Forcing (ZF) detection using a combination of Ordered Successive Interference Cancellation (OSIC) and Zero-Forcing with Interference Cancellation (ZF-IC). The proposed method uses OSIC based on Signal-to-Noise Ratio (SNR) ordering to obtain an estimate of the last symbol; the estimated last symbol is then used as an input to the ZF-IC stage. We analyze the Bit Error Rate (BER) performance of the proposed MIMO system over a Rayleigh fading channel, using the Binary Phase Shift Keying (BPSK) modulation scheme. The results show better performance than the previous methods.
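
The sketch below shows a plain SNR-ordered ZF-OSIC detector in NumPy, i.e. the OSIC stage only; the paper's additional ZF-IC step that reuses the last-symbol estimate is not reproduced, and the 4x4 BPSK example with its noise level is an assumption for illustration.

```python
import numpy as np

def zf_osic_detect(H, y, constellation):
    """SNR-ordered successive interference cancellation with a ZF front end.
    At each step the stream with the smallest ZF noise amplification (i.e. the
    highest post-detection SNR) is detected, sliced, and cancelled."""
    H = H.astype(complex)
    y = y.astype(complex).copy()
    remaining = list(range(H.shape[1]))
    x_hat = np.zeros(H.shape[1], dtype=complex)
    while remaining:
        W = np.linalg.pinv(H[:, remaining])                  # ZF equalizer
        k_local = np.argmin(np.sum(np.abs(W) ** 2, axis=1))  # best-SNR stream
        z = W[k_local] @ y
        s = constellation[np.argmin(np.abs(constellation - z))]  # hard slicing
        k = remaining[k_local]
        x_hat[k] = s
        y -= H[:, k] * s                                     # cancel its contribution
        remaining.pop(k_local)
    return x_hat

# toy 4x4 BPSK example with i.i.d. Rayleigh fading (illustrative only)
rng = np.random.default_rng(2)
bpsk = np.array([-1.0 + 0j, 1.0 + 0j])
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
x = rng.choice(bpsk, 4)
y = H @ x + 0.05 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
print("sent:", x.real, " detected:", zf_osic_detect(H, y, bpsk).real)
```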

Keywords: SNR, BER, BPSK, MIMO, Modulation, Zero forcing (ZF), OSIC, ZF-IC, Spatial Multiplexing (SM).

255 Quad Tree Decomposition Based Analysis of Compressed Image Data Communication for Lossy and Lossless Using WSN

Authors: N. Muthukumaran, R. Ravi

Abstract:

A Quad Tree Decomposition (QTD) based performance analysis of compressed image data communication, for lossy and lossless modes, over a wireless sensor network is presented. Images have a considerably higher storage requirement than text. While transmitting multimedia content, there is a chance of packets being dropped due to noise and interference. At the receiver end, the packets that carry valuable information might be damaged or lost due to noise, interference, and congestion. In order to prevent the valuable information from being dropped, various retransmission schemes have been proposed. The proposed scheme uses QTD, an image segmentation method that divides the image into homogeneous areas. The proposed scheme involves analysis of parameters such as compression ratio, peak signal-to-noise ratio, mean square error, and bits per pixel in the compressed image, as well as analysis of the difficulties encountered during data packet communication in wireless sensor networks. Considering the above, this paper uses QTD to improve the compression ratio as well as the visual quality, and implements the algorithm in MATLAB 7.1 and the NS2 simulator.
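
A minimal recursive quad tree decomposition is sketched below: a block is kept as a single homogeneous leaf when its intensity range falls under a threshold, otherwise it is split into four quadrants. The homogeneity criterion, threshold, and minimum block size are assumptions, not the paper's settings.

```python
import numpy as np

def quadtree(block, threshold=10.0, min_size=8, offset=(0, 0)):
    """Recursively split an image block until each leaf is homogeneous
    (its intensity range falls below `threshold`) or reaches `min_size`."""
    h, w = block.shape
    if (block.max() - block.min() <= threshold) or h <= min_size or w <= min_size:
        return [(offset, (h, w), float(block.mean()))]       # one homogeneous leaf
    h2, w2 = h // 2, w // 2
    y0, x0 = offset
    leaves = []
    leaves += quadtree(block[:h2, :w2], threshold, min_size, (y0, x0))
    leaves += quadtree(block[:h2, w2:], threshold, min_size, (y0, x0 + w2))
    leaves += quadtree(block[h2:, :w2], threshold, min_size, (y0 + h2, x0))
    leaves += quadtree(block[h2:, w2:], threshold, min_size, (y0 + h2, x0 + w2))
    return leaves

# synthetic 128x128 image: smooth gradient with one bright square
img = np.tile(np.linspace(0, 40, 128), (128, 1))
img[32:64, 32:64] += 120
leaves = quadtree(img)
print(f"{len(leaves)} homogeneous blocks (storing only offsets, sizes and means)")
```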

Keywords: Image compression, Compression Ratio, Quad tree decomposition, Wireless sensor networks, NS2 simulator.

254 Evolutionary Approach for Automated Discovery of Censored Production Rules

Authors: Kamal K. Bharadwaj, Basheer M. Al-Maqaleh

Abstract:

In the recent past, there has been an increasing interest in applying evolutionary methods to Knowledge Discovery in Databases (KDD), and a number of successful applications of Genetic Algorithms (GA) and Genetic Programming (GP) to KDD have been demonstrated. The most predominant representation of the discovered knowledge is the standard Production Rule (PR) in the form If P Then D. The PRs, however, are unable to handle exceptions and do not exhibit variable precision. The Censored Production Rules (CPRs), an extension of PRs proposed by Michalski and Winston, exhibit variable precision and support an efficient mechanism for handling exceptions. A CPR is an augmented production rule of the form If P Then D Unless C, where C (the Censor) is an exception to the rule. Such rules are employed in situations in which the conditional statement 'If P Then D' holds frequently and the assertion C holds rarely. By using a rule of this type, we are free to ignore the exception condition when the resources needed to establish its presence are scarce or there is simply no information available as to whether it holds or not. Thus, the 'If P Then D' part of the CPR expresses important information, while the Unless C part acts only as a switch that changes the polarity of D to ~D. This paper presents a classification algorithm based on an evolutionary approach that discovers comprehensible rules with exceptions in the form of CPRs. The proposed approach has a flexible chromosome encoding, where each chromosome corresponds to a CPR. Appropriate genetic operators are suggested, and a fitness function is proposed that incorporates the basic constraints on CPRs. Experimental results are presented to demonstrate the performance of the proposed algorithm.
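
As a concrete illustration of how a CPR "If P Then D Unless C" can be evaluated and scored, the sketch below applies a hypothetical rule to attribute-value records and computes a fitness that rewards high accuracy on covered records together with a rarely-holding censor; the weighting and the toy data are assumptions, not the paper's fitness function.

```python
# A censored production rule "If P Then D Unless C" evaluated over a dataset of
# attribute-value records; the fitness rewards rules whose premise holds often
# while the censor holds rarely (illustrative weights).
def matches(conds, record):
    return all(record.get(attr) == val for attr, val in conds.items())

def cpr_fitness(rule, data):
    premise, decision, censor = rule["if"], rule["then"], rule["unless"]
    covered = [r for r in data if matches(premise, r)]
    if not covered:
        return 0.0
    correct = 0
    for r in covered:
        predicted = "not_" + decision if matches(censor, r) else decision
        correct += (r["class"] == predicted)
    censored = sum(matches(censor, r) for r in covered)
    accuracy = correct / len(covered)
    rarity = 1.0 - censored / len(covered)          # the censor should hold rarely
    return 0.8 * accuracy + 0.2 * rarity

data = [
    {"outlook": "sunny", "windy": False, "class": "play"},
    {"outlook": "sunny", "windy": True,  "class": "not_play"},
    {"outlook": "rainy", "windy": False, "class": "not_play"},
]
rule = {"if": {"outlook": "sunny"}, "then": "play", "unless": {"windy": True}}
print("fitness:", cpr_fitness(rule, data))
```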

Keywords: Censored Production Rule, Data Mining, Machine Learning, Evolutionary Algorithms.

253 3D Locomotion and Fractal Analysis of Goldfish for Acute Toxicity Bioassay

Authors: Kittiwann Nimkerdphol, Masahiro Nakagawa

Abstract:

Biological reactions of individuals of a test animal to a toxic substance are unique and can be used as an indication of the presence of the toxic substance. However, distinguishing such phenomena requires a very complicated system, and it is even more complicated to analyze the data in three dimensions. In this paper, a system to evaluate in vitro biological responses to acute toxicity from the stochastic, self-affine, non-stationary signal of 3D goldfish swimming, using fractal analysis, is introduced. Regular digital camcorders are utilized by the proposed algorithm 3DCCPC to effectively capture and reconstruct the 3D movements of the fish. The Critical Exponent Method (CEM) has been adopted as a fractal estimator. The hypothesis was that the swimming of goldfish under acute toxicity would show fractal properties related to the toxic concentration. The experimental results supported the hypothesis by showing that the swimming of goldfish under different toxic concentrations has fractal properties. They also show that the fractal dimension of the swimming is related to the pH value as FD ≈ 0.26 pH + 0.05. With the proposed system, the fish is allowed to swim freely in all directions as it reacts to the toxicant. In addition, the trajectories are precisely evaluated by fractal analysis with the critical exponent method, and hence the results are obtained with a much higher degree of confidence.

Keywords: 3D locomotion, bioassay, critical exponent method, CEM, fractal analysis, goldfish.

252 Multi-Temporal Mapping of Built-up Areas Using Daytime and Nighttime Satellite Images Based on Google Earth Engine Platform

Authors: S. Hutasavi, D. Chen

Abstract:

The built-up area is a significant proxy for measuring regional economic growth and reflects the Gross Provincial Product (GPP). However, an up-to-date and reliable database of built-up areas is not always available, especially in developing countries. A cloud-based geospatial analysis platform such as Google Earth Engine (GEE) provides those countries with the accessibility and computational power to generate built-up data. Therefore, this study aims to extract the built-up areas in the Eastern Economic Corridor (EEC), Thailand, using daytime and nighttime satellite imagery based on GEE facilities. Normalized indices were generated from the Landsat 8 surface reflectance dataset, including the Normalized Difference Built-up Index (NDBI), the Built-up Index (BUI), and the Modified Built-up Index (MBUI). These indices were applied to identify built-up areas in the EEC. The results show that MBUI performs better than BUI and NDBI, with the highest accuracy of 0.85 and a Kappa of 0.82. Moreover, the overall classification accuracy improved from 79% to 90%, and the error in the total built-up area decreased from 29% to 0.7%, after incorporating night-time light data from the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB). The results suggest that MBUI combined with night-time light imagery is appropriate for built-up area extraction and can be utilized for further study of the socioeconomic impacts of regional development policy over the EEC region.
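
The index computation behind such an extraction can be sketched in NumPy as below: NDBI from the SWIR1 and NIR bands, and a BUI taken as NDBI minus NDVI (one common definition). The MBUI and the adaptive thresholding used in the paper are not reproduced, the threshold and synthetic reflectance tiles are assumptions, and in practice the computation would run through the Google Earth Engine API rather than on local arrays.

```python
import numpy as np

def normalized_diff(a, b):
    """(a - b) / (a + b) with protection against division by zero."""
    return (a - b) / np.where((a + b) == 0, 1e-6, a + b)

def builtup_mask(nir, swir1, red, threshold=0.0):
    """Index-based built-up extraction sketch from Landsat 8 surface reflectance.
    NDBI = (SWIR1 - NIR)/(SWIR1 + NIR); BUI is taken here as NDBI - NDVI; pixels
    above `threshold` are flagged as built-up."""
    ndbi = normalized_diff(swir1, nir)
    ndvi = normalized_diff(nir, red)
    bui = ndbi - ndvi
    return ndbi, bui, bui > threshold

# tiny synthetic reflectance tiles (values in [0, 1]) standing in for B5, B6, B4
rng = np.random.default_rng(3)
nir, swir1, red = (rng.uniform(0.05, 0.6, (4, 4)) for _ in range(3))
ndbi, bui, mask = builtup_mask(nir, swir1, red)
print("built-up pixels:", int(mask.sum()), "of", mask.size)
```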

Keywords: Built-up area extraction, Google Earth Engine, adaptive thresholding method, rapid mapping.

251 Effects of Double Delta Doping on Millimeter and Sub-millimeter Wave Response of Two-Dimensional Hot Electrons in GaAs Nanostructures

Authors: N. Basanta Singh, Sanjoy Deb, G. P Mishra, Subir Kumar Sarkar

Abstract:

Carrier mobility has become the most important characteristic of high-speed, low-dimensional devices. Due to the development of very fast switching semiconductor devices, the speed of computer and communication equipment has been increasing day by day and will continue to do so in the future. As the response of any device depends on the carrier motion within it, extensive studies of carrier mobility have become essential for growth in the field of low-dimensional devices. Small-signal ac transport of degenerate two-dimensional hot electrons in GaAs quantum wells is studied here, incorporating deformation-potential acoustic, polar optic, and ionized impurity scattering in the framework of a heated, drifted Fermi-Dirac carrier distribution. Delta doping is considered in the calculations to investigate the effects of double delta doping on the millimeter and submillimeter wave response of two-dimensional hot electrons in GaAs nanostructures. The inclusion of delta doping is found to considerably enhance the two-dimensional electron density, which in turn improves the carrier mobility (both ac and dc) values in the GaAs quantum wells, thereby providing scope for obtaining higher-speed devices in the future.

Keywords: Carrier mobility, Delta doping, Hot carriers, Quantum wells.

250 Review and Evaluation of Trending Canonical Correlation Analyses-Based Brain-Computer Interface Methods

Authors: Bayar Shahab

Abstract:

The fast development of technology that has advanced neuroscience and human interaction with computers has enabled solutions to various problems of this new era. The Brain-Computer Interface (BCI) has opened the door to several new research areas and has been able to provide solutions to critical and vital issues, such as supporting a paralyzed patient in interacting with the outside world, controlling a robot arm, playing games in VR with the brain, and driving a wheelchair. This review presents the state-of-the-art methods and improvements of canonical correlation analysis (CCA), an SSVEP-based BCI method. These are the methods used to extract EEG signal features, or, put differently, the features of interest that we are looking for in EEG analyses. Each of the methods, from oldest to newest, is discussed while comparing their advantages and disadvantages. This should create useful context and help researchers understand the most state-of-the-art methods available in this field, their pros and cons, and their mathematical representations and usage. This work makes a vital contribution to the existing field of study. It differs from other similar, recently published works by providing the following: (1) stating most of the main methods used in this field in a hierarchical way, (2) explaining the pros and cons of each method and their performance, and (3) presenting the gaps that remain at the end of each method that can improve understanding and open doors to new research or improvements.
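
The baseline CCA scoring that the reviewed methods build on can be sketched as follows: multi-channel EEG is correlated against sine/cosine reference signals at each candidate stimulus frequency and its harmonics, and the largest canonical correlation picks the target. The sampling rate, harmonic count, and synthetic EEG below are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def ssvep_cca_score(eeg, stim_freq, fs, n_harmonics=2):
    """Standard CCA scoring for SSVEP: correlate multi-channel EEG with sine/cosine
    references at the stimulus frequency and its harmonics; the largest canonical
    correlation is the score for that frequency."""
    n = eeg.shape[0]
    t = np.arange(n) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs += [np.sin(2 * np.pi * h * stim_freq * t), np.cos(2 * np.pi * h * stim_freq * t)]
    Y = np.column_stack(refs)
    u, v = CCA(n_components=1).fit_transform(eeg, Y)
    return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])

# synthetic 8-channel EEG with an embedded 12 Hz SSVEP component
fs, dur = 250, 2.0
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(4)
eeg = rng.standard_normal((t.size, 8)) + 0.6 * np.sin(2 * np.pi * 12 * t)[:, None]
for f in (8.0, 10.0, 12.0, 15.0):
    print(f"{f:>5.1f} Hz -> CCA score {ssvep_cca_score(eeg, f, fs):.3f}")
```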

Keywords: BCI, CCA, SSVEP, EEG

249 An Image Enhancement Method Based on Curvelet Transform for CBCT-Images

Authors: Shahriar Farzam, Maryam Rastgarpour

Abstract:

Image denoising plays an extremely important role in digital image processing. Enhancement of clinical images based on the curvelet transform has developed rapidly in recent years. In this paper, we present a method of image contrast enhancement for cone beam CT (CBCT) images based on the fast discrete curvelet transform (FDCT) computed via the Unequally Spaced Fast Fourier Transform (USFFT). This transform returns a table of curvelet coefficients indexed by a scale parameter, an orientation, and a spatial location. Accordingly, the coefficients obtained from FDCT-USFFT can be modified in order to enhance contrast in an image. Our proposed method first applies this two-dimensional mathematical transform, namely the FDCT via the unequally spaced fast Fourier transform, to the input image and then applies thresholding to the curvelet coefficients to enhance the CBCT images. Consequently, applying the unequally spaced fast Fourier transform leads to an accurate reconstruction of the image with high resolution. The experimental results indicate that the performance of the proposed method is superior to existing ones in terms of Peak Signal-to-Noise Ratio (PSNR) and Effective Measure of Enhancement (EME).

Keywords: Curvelet transform, image enhancement, CBCT, image denoising.

248 A Study of Student Satisfaction of the Suan Sunandha Rajabhat University Radio Station

Authors: Prapoj Na Bangchang

Abstract:

The research aimed to study the satisfaction of Suan Sunandha Rajabhat University students with the university radio station, which broadcasts both in analog on FM 97.25 MHz and online via the university website. The sample used in this study consists of 200 undergraduate students from year 1 to year 4 across six faculties, i.e. the Faculty of Education, the Faculty of Humanities and Social Sciences, the Faculty of Management Science, the Faculty of Industrial Technology, and the Faculty of Fine and Applied Arts. The tool used for data collection was a survey. Data analysis applied statistics including percentage, mean, and standard deviation. The results showed that Suan Sunandha Rajabhat University students were most satisfied with the place of the listening service, followed by the broadcasting channels, which cover both the analog signal on FM 97.25 MHz and online listening via the Internet. However, the satisfaction level with the content offered was very low, and most of the students want the station to improve the content; entertainment content was requested the most, followed by sports content. The lowest satisfaction level was with the broadcasting quality through the analog signal, and most students asked the station to improve on this issue. Overall, however, Suan Sunandha Rajabhat University students were satisfied with the university radio station broadcast online via the university website.

Keywords: Satisfaction, students, radio station, Suan Sunandha Rajabhat University.

247 Channel Estimation for Orthogonal Frequency Division Multiplexing Systems over Doubly Selective Channels Based on the DCS-DCSOMP Algorithm

Authors: Linyu Wang, Furui Huo, Jianhong Xiang

Abstract:

The Doppler shift generated by high-speed movement, together with multipath effects in the channel, is the main cause of a time-frequency doubly-selective (DS) channel, in which there is severe inter-carrier interference (ICI). Channel estimation for an orthogonal frequency division multiplexing (OFDM) system over a DS channel is therefore very difficult. The simultaneous orthogonal matching pursuit algorithm under distributed compressive sensing theory (DCS-SOMP) has been used for channel estimation in OFDM systems over DS channels. However, the reconstruction accuracy of the DCS-SOMP algorithm is not high enough in the low signal-to-noise ratio (SNR) regime. To solve this problem, in this paper we propose an improved DCS-SOMP algorithm based on an inner-product difference comparison operation (DCS-DCSOMP). The reconstruction accuracy is improved by increasing the number of candidate indexes and designing the comparison conditions on the inner-product differences. We combine the DCS-DCSOMP algorithm with the basis expansion model (BEM) to reduce the complexity of channel estimation. Simulation results show the effectiveness of the proposed algorithm and its advantages over other algorithms.
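
For orientation, a textbook SOMP routine (the joint-sparse recovery step that DCS-SOMP and the proposed DCS-DCSOMP refine) is sketched below on a toy problem standing in for the BEM coefficients; the inner-product difference comparison and the enlarged candidate set of the proposed algorithm are not implemented here.

```python
import numpy as np

def somp(A, Y, sparsity):
    """Simultaneous Orthogonal Matching Pursuit: the columns of Y share one support.
    At each iteration the atom with the largest summed correlation against all
    residual columns is added, then all signals are re-fit by least squares."""
    support, R = [], Y.copy()
    for _ in range(sparsity):
        corr = np.sum(np.abs(A.conj().T @ R), axis=1)
        corr[support] = 0                       # do not pick an atom twice
        support.append(int(np.argmax(corr)))
        X_s, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        R = Y - A[:, support] @ X_s
    X = np.zeros((A.shape[1], Y.shape[1]), dtype=A.dtype)
    X[support] = X_s
    return X, sorted(support)

# toy joint-sparse recovery problem (stand-in for the BEM coefficients of a DS channel)
rng = np.random.default_rng(5)
m, n, k, sigs = 32, 128, 5, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
true_support = rng.choice(n, k, replace=False)
X_true = np.zeros((n, sigs))
X_true[true_support] = rng.standard_normal((k, sigs))
X_rec, sup = somp(A, A @ X_true, k)
print("recovered support matches:", sorted(true_support.tolist()) == sup)
```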

Keywords: OFDM, doubly selective, channel estimation, compressed sensing

246 Analytical Comparison of Conventional Algorithms with Vedic Algorithm for Digital Multiplier

Authors: Akhilesh G. Naik, Dipankar Pal

Abstract:

In today's scenario, the complexity of digital signal processing (DSP) applications and of various microcontroller architectures has been increasing to such an extent that the traditional approaches to multiplier design in most processors are becoming outdated for being comparatively slow. Modern processing applications require suitable pipelined approaches and, therefore, algorithms that are friendlier to pipelined architectures. Traditional algorithms like the Wallace tree, Radix-4 Booth, Radix-8 Booth, and Dadda architectures have proven to be comparatively slow for pipelined architectures. These architectures therefore need to be optimized, or combined with one another, to enhance their performance and make them suitable for pipelined hardware/architectures. Recently, the Vedic algorithm has mathematically proven to be efficient, being less complex and requiring fewer steps to establish its output, and has assumed renewed importance. This paper describes and shows how the Vedic algorithm can be better suited for pipelined architectures and can also be combined with traditional architectures and algorithms to enhance its ability even further. In this paper, we also establish that for complex applications on DSP and other microcontroller architectures, using the Vedic approach for multiplication proves to be the best available and most efficient option.
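
A small sketch of the Vedic "Urdhva Tiryagbhyam" (vertically and crosswise) scheme on decimal digit lists is given below; each output column is an independent sum of cross-products, which is the property that maps well onto parallel and pipelined hardware. The digit-list representation and base are illustrative choices.

```python
def urdhva_tiryagbhyam(a_digits, b_digits, base=10):
    """Vedic 'vertically and crosswise' multiplication of two equal-length digit
    lists (most-significant digit first). Every column of cross-products can be
    formed independently, then the carries are rippled once at the end."""
    n = len(a_digits)
    a, b = a_digits[::-1], b_digits[::-1]            # least-significant digit first
    cols = [0] * (2 * n - 1)
    for k in range(2 * n - 1):                       # one column per output position
        for i in range(n):
            j = k - i
            if 0 <= j < n:
                cols[k] += a[i] * b[j]               # crosswise partial products
    result, carry = [], 0
    for c in cols:                                   # ripple the carries
        total = c + carry
        result.append(total % base)
        carry = total // base
    while carry:
        result.append(carry % base)
        carry //= base
    return result[::-1]

digits = urdhva_tiryagbhyam([3, 2, 1], [4, 5, 6])    # 321 * 456
print("".join(map(str, digits)), "==", 321 * 456)
```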

Keywords: Wallace tree, Radix-4 Booth, Radix-8 Booth, Dadda, Vedic, Single-Stage Karatsuba, Looped Karatsuba.

245 Intrinsic Electromagnetic Fields and Atom-Field Coupling in Living Cells

Authors: Masroor H. S. Bukhari, Z. H. Shah

Abstract:

The possibility of intrinsic electromagnetic fields within living cells, and of their resonant self-interaction and interaction with ambient electromagnetic fields, is suggested on the basis of a theoretical and experimental study. It is reported that intrinsic electromagnetic fields are produced in the form of radio-frequency and infra-red photons within atoms (which may be coupled or uncoupled) in cellular structures, such as the cell cytoskeleton and plasma membrane. A model is presented for the interaction of these photons among themselves or with atoms under a dipole-dipole coupling, induced by single-photon or two-photon processes. This resonance is manifested by conspicuous field amplification, and it is argued that these resonant photons can undergo tunnelling in the form of evanescent waves over a short range (a few nanometres to micrometres). This effect, suggested as a resonant photon tunnelling mechanism in this report, may enable these fields to act as intracellular signal communication devices and as bridges between macromolecules or cellular structures in the cell cytoskeleton, organelles, or membrane. A brief overview of an experimental technique and a review of some preliminary results are presented, concerning the detection of these fields produced in living cell membranes under physiological conditions.

Keywords: Bioelectromagnetism, cell membrane, evanescent waves, photon tunnelling, resonance.

244 Retrospective Synthetic Focusing with Correlation Weighting for Very High Frame Rate Ultrasound

Authors: Chang-Lin Hu, Yao-You Cheng, Meng-Lin Li

Abstract:

The need for high frame-rate imaging has been triggered by new applications of ultrasound imaging such as transient elastography and real-time 3D ultrasound. Plane wave excitation (PWE) is one method to achieve very high frame-rate imaging, since an image can be formed with a single insonification. However, due to the lack of transmit focusing, the image quality with PWE is lower than that obtained with conventional focused transmission. To solve this problem, we propose a filter-retrieved transmit focusing (FRF) technique combined with cross-correlation weighting (FRF+CC weighting) for high frame-rate imaging with PWE. A retrospective focusing filter is designed to simultaneously minimize the predefined sidelobe energy associated with single PWE and the filter energy related to the signal-to-noise ratio (SNR). This filter attempts to maintain the mainlobe signals and to reduce the sidelobe ones, which gives similar mainlobe signals but different sidelobes between the original PWE and the FRF baseband data. The normalized cross-correlation coefficient at zero lag is calculated to quantify the degree of similarity at each imaging point and is used as a weighting matrix on the FRF baseband data to further suppress sidelobes, thus improving the filter-retrieved focusing quality.
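
The zero-lag normalized cross-correlation weighting itself is simple to sketch: for each imaging point (here, each short axial window of each line) a correlation coefficient between the original PWE data and the FRF data is computed and used as a multiplicative weight. The window length and the synthetic beamformed data are assumptions; the FRF filter design is not reproduced.

```python
import numpy as np

def zero_lag_ncc(a, b, axis=1, eps=1e-12):
    """Normalized cross-correlation coefficient at zero lag along `axis`."""
    a0 = a - a.mean(axis=axis, keepdims=True)
    b0 = b - b.mean(axis=axis, keepdims=True)
    num = np.sum(a0 * b0, axis=axis)
    den = np.sqrt(np.sum(a0 ** 2, axis=axis) * np.sum(b0 ** 2, axis=axis)) + eps
    return num / den

# toy beamformed data: "pwe" is the original plane-wave data, "frf" a filtered copy
rng = np.random.default_rng(6)
depth, lines, win = 256, 64, 16
pwe = rng.standard_normal((depth, lines))
frf = pwe + 0.5 * rng.standard_normal((depth, lines))

# one coefficient per (axial window, line), then weight the FRF data with it so
# that regions where the two datasets disagree (sidelobe-dominated) are suppressed
pwe_w = pwe.reshape(depth // win, win, lines)
frf_w = frf.reshape(depth // win, win, lines)
weights = zero_lag_ncc(pwe_w, frf_w, axis=1)          # shape: (windows, lines)
weighted = frf_w * weights[:, None, :]
print("weights range:", float(weights.min()), "to", float(weights.max()))
```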

Keywords: retrospective synthetic focusing, high frame rate, correlation weighting.

243 A Watermarking Scheme for MP3 Audio Files

Authors: Dimitrios Koukopoulos, Yiannis Stamatiou

Abstract:

In this work, we present, to the best of our knowledge for the first time, an efficient digital watermarking scheme for MPEG Audio Layer 3 files that operates directly in the compressed data domain while manipulating the time and subband/channel domains. In addition, it does not need the original signal to detect the watermark. Our scheme was implemented with special care for the efficient use of the two limited resources of computer systems: time and space. It offers the industrial user the capability of watermark embedding and detection in time comparable to the real playing time of the original audio file, depending on the MPEG compression, while the end user/audience does not face any artifacts or delays when hearing the watermarked audio file. Furthermore, it overcomes the vulnerability of algorithms operating in the PCM data domain to compression/recompression attacks, as it places the watermark in the scale-factor domain and not in the digitized audio data. The strength of our scheme, which allows it to be used with success in both authentication and copyright protection, relies on the fact that ownership of the audio file is not established simply by detecting the bit pattern that comprises the watermark itself, but by showing that the legal owner knows a hard-to-compute property of the watermark.

Keywords: Audio watermarking, MPEG Audio Layer 3, hard-instance generation, NP-completeness.

242 Coding Structures for Seated Row Simulation of an Active Controlled Vibration Isolation and Stabilization System for Astronaut’s Exercise Platform

Authors: Ziraguen O. Williams, Shield B. Lin, Fouad N. Matari, Leslie J. Quiocho

Abstract:

Simulation of the seated row exercise was a continued task to assist NASA in analyzing a one-dimensional vibration isolation and stabilization system for an astronaut exercise platform. Feedback delay and signal noise were added to the simulation model. Simulation runs for this study were conducted in two software simulation tools, Trick and MBDyn, software simulation environments developed at the NASA Johnson Space Center. The exciter force in the simulation was calculated from motion capture of an exerciser during a seated aerobic row exercise. The simulation runs include passive control, active control using a Proportional, Integral, Derivative (PID) controller, and active control using a Piecewise Linear Integral Derivative (PWLID) controller. Output parameters include the displacements of the exercise platform, the exerciser, and the counterweight; the force transmitted to the wall of the spacecraft; and the actuator force applied to the platform. The simulation results showed excellent force reduction in the actively controlled system compared to the passively controlled system.

Keywords: Simulation, counterweight, exercise, vibration.

241 A Preliminary Literature Review of Digital Transformation Case Studies

Authors: Vesna Bosilj Vukšić, Lucija Ivančić, Dalia Suša Vugec

Abstract:

While struggling to succeed in today's complex market environment and to provide better customer experience and services, enterprises embrace digital transformation as a means of reaching competitiveness and fostering value creation. A digital transformation process consists of information technology implementation projects as well as organizational factors such as top management support, a digital transformation strategy, and organizational changes. However, to the best of our knowledge, there is little evidence about digital transformation endeavors in organizations and how they perceive it: is it only about adopting digital technologies, or is a true organizational shift needed? In order to address this issue, and as the first step in our research project, a literature review is conducted. The analysis includes case study papers from the Scopus and Web of Science databases. The following attributes are considered for the classification and analysis of papers: time component; country of case origin; case industry; and digital transformation concept comprehension, i.e., focus. The research showed that organizations, public as well as private ones, are aware of the necessity of change and undertake digital transformation projects. Also, the changes concerning digital transformation affect both manufacturing and service-based industries. Furthermore, we discovered that organizations understand that, besides technology implementation, organizational changes must also be adopted. However, with only 29 relevant papers identified, the research positions digital transformation as an unexplored and emerging phenomenon in information systems research. The scarcity of evidence-based papers calls for further examination of this topic through cases from practice.

Keywords: Digital strategy, digital technologies, digital transformation, literature review.

240 A Finite Precision Block Floating Point Treatment to Direct Form, Cascaded and Parallel FIR Digital Filters

Authors: Abhijit Mitra

Abstract:

This paper proposes an efficient finite precision block floating point (BFP) treatment of the fixed-coefficient finite impulse response (FIR) digital filter. The treatment includes effective implementation of all three forms of the conventional FIR filter, namely direct form, cascaded, and parallel, and a roundoff error analysis of them in the BFP format. An effective block formatting algorithm together with an adaptive scaling factor is proposed to make the realizations simpler from a hardware point of view. To this end, a generic relation between the tap weight vector length and the input block length is deduced. The implementation scheme also emphasizes a simple block exponent update technique to prevent overflow, even during the block-to-block transition phase. The roundoff noise is also investigated along analogous lines, taking these implementation issues into consideration. The simulation results show that the BFP roundoff errors depend on the signal level in almost the same way as floating point roundoff noise, resulting in an approximately constant signal-to-noise ratio over a relatively large dynamic range.
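
A hedged sketch of block formatting is given below: each block of samples shares an exponent derived from the block peak plus one guard bit of headroom (standing in for the adaptive scaling factor idea), samples are rounded to a fixed-point mantissa, and the resulting roundoff SNR is measured. Block length, mantissa width, and the test signal are assumptions, not the paper's design.

```python
import numpy as np

def to_block_floating_point(x, block_len=64, mantissa_bits=15):
    """Block floating point sketch: each block of samples shares one exponent chosen
    from the block maximum, and every sample is quantized to a fixed-point mantissa.
    One extra guard bit leaves headroom against overflow at block transitions."""
    scale = 2 ** (mantissa_bits - 1)
    out = np.empty_like(x)
    exponents = []
    for start in range(0, len(x), block_len):
        block = x[start:start + block_len]
        peak = max(np.max(np.abs(block)), 1e-12)
        exp = int(np.ceil(np.log2(peak))) + 1                  # +1 guard bit
        mant = np.round(block / 2.0 ** exp * scale) / scale    # quantized mantissas
        out[start:start + block_len] = mant * 2.0 ** exp
        exponents.append(exp)
    return out, exponents

x = np.sin(2 * np.pi * 0.01 * np.arange(512)) * np.linspace(0.1, 4.0, 512)
y, exps = to_block_floating_point(x)
snr = 10 * np.log10(np.sum(x ** 2) / np.sum((x - y) ** 2))
print("block exponents:", exps, " roundoff SNR:", round(snr, 1), "dB")
```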

Keywords: Finite impulse response digital filters, Cascade structure, Parallel structure, Block floating point arithmetic, Roundoff error.

239 Computer Countenanced Diagnosis of Skin Nodule Detection and Histogram Augmentation: Extracting System for Skin Cancer

Authors: S. Zith Dey Babu, S. Kour, S. Verma, C. Verma, V. Pathania, A. Agrawal, V. Chaudhary, A. Manoj Puthur, R. Goyal, A. Pal, T. Danti Dey, A. Kumar, K. Wadhwa, O. Ved

Abstract:

Background: Skin cancer has become a pressing topic in the field of medical science, and its spread is drastically affecting the health and well-being of people worldwide. Methods: The extracted image of a skin tumor cannot be used directly for diagnosis, since the stored image contains irregularities such as those around its center. The proposed approach locates the region of interest in the extracted skin image, and an image partitioning model is presented to sort out the disturbance in the picture. Results: After partitioning, feature extraction is performed using a genetic algorithm (GA), and finally, classification is performed between the trained and test data to evaluate images at a large scale, which helps doctors make the right prediction. To improve on the existing system, we set our objectives with an analysis: the efficiency of the natural selection process and the enriched histogram are essential in that respect, and GA is applied to reduce the false-positive rate of the output while maintaining accuracy. Conclusions: The objective of this work is to improve effectiveness, and GA accomplishes its task of bringing down the false-positive rate. The combination of deep learning and medical image processing provides superior accuracy, and the proposed handling supports reusability without errors.

Keywords: Computer-aided system, detection, image segmentation, morphology.

238 Delay Preserving Substructures in Wireless Networks Using Edge Difference between a Graph and its Square Graph

Authors: T. N. Janakiraman, J. Janet Lourds Rani

Abstract:

In practice, wireless networks have the property that the signal strength attenuates with distance from the base station, so quality of service can be improved if nodes two hops away are also considered. In this paper, we propose a procedure to identify delay-preserving substructures for a given wireless ad-hoc network using a new graph operation G² − E(G) = G* (the edge difference between the square graph of a given graph and the original graph). This operation helps to analyze some induced substructures which preserve delay in communication among them. The operation on a given graph G induces a graph G* in which the 1-hop neighbors of any node are at 2-hop distance in the original network. In this paper, we also identify some delay-preserving substructures in G*: (i) a set of nodes which are mutually at 2-hop distance in G forms a clique in G*; (ii) a set of nodes which forms an odd cycle C2k+1 in G forms an odd cycle in G*, while a set of nodes which forms an even cycle C2k in G forms two disjoint companion cycles (of the same parity, odd/even) of length k in G*; (iii) every path of length 2k+1 or 2k in G induces two disjoint paths of length k in G*; and (iv) a set of nodes in G* which induces a maximal connected subgraph with radius 1 identifies a substructure with radius equal to 2 and diameter at most 4 in G. These delay-preserving substructures behave as good clusters in the original network.
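
The operation itself is easy to state in code: two nodes are adjacent in G* exactly when they are non-adjacent in G but share a common neighbour. The sketch below computes G* from an adjacency-set representation and checks it on a 5-cycle and on a path, matching the cycle and path behaviour described above; the graphs are illustrative examples.

```python
from itertools import combinations

def square_minus_edges(adj):
    """Compute G* = G^2 - E(G): connect two nodes in G* exactly when they are at
    distance two in G (i.e. they share a neighbour but are not adjacent in G)."""
    g_star = {v: set() for v in adj}
    for u, v in combinations(adj, 2):
        if v not in adj[u] and adj[u] & adj[v]:    # non-adjacent but common neighbour
            g_star[u].add(v)
            g_star[v].add(u)
    return g_star

# example: a 5-cycle C5 (odd cycle) -- its G* is again a 5-cycle, consistent with (ii)
c5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
print(square_minus_edges(c5))

# example: a path on 5 nodes splits into two disjoint shorter paths in G*, as in (iii)
p5 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(square_minus_edges(p5))
```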

Keywords: Clique, cycles, delay preserving substructures, maximal connected sub graph.

237 Applications of Support Vector Machines on Smart Phone Systems for Emotional Speech Recognition

Authors: Wernhuar Tarng, Yuan-Yuan Chen, Chien-Lung Li, Kun-Rong Hsie, Mingteh Chen

Abstract:

An emotional speech recognition system for applications on smart phones is proposed in this study, to be combined with 3G mobile communications and social networks to provide users and their groups with more interaction and care. This study developed a mechanism using support vector machines (SVM) to recognize speech emotions such as happiness, anger, sadness, and a normal state. The mechanism uses a hierarchical classifier to adjust the weights of the acoustic features and divides the various parameters into energy and frequency categories for training. In this study, 28 commonly used acoustic features, including pitch and volume, were proposed for training. In addition, a time-frequency parameter obtained by continuous wavelet transforms was also used to identify the accent and intonation in a sentence during the recognition process. The Berlin Database of Emotional Speech was used, with the speech divided into male and female data sets for training. According to the experimental results, the accuracies of the male and female test sets increased by 4.6% and 5.2%, respectively, after using the time-frequency parameter for classifying happy and angry emotions. For the classification of all emotions, the average accuracy, including male and female data, was 63.5% for the test set and 90.9% for the whole data set.
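
A minimal scikit-learn sketch of the classification stage is shown below, training one multi-class RBF-kernel SVM on a placeholder 28-dimensional acoustic feature matrix; the hierarchical energy/frequency split, the actual feature extraction (pitch, MFCCs, the wavelet-based time-frequency parameter), and the Berlin database itself are not reproduced.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# hypothetical feature matrix: one row per utterance, columns = 28 acoustic features
# (pitch, energy, MFCC statistics, a time-frequency parameter, ...) already extracted
rng = np.random.default_rng(7)
X = rng.standard_normal((400, 28))
y = rng.integers(0, 4, 400)        # 0=happy, 1=anger, 2=sadness, 3=normal (placeholder)

# one multi-class RBF-kernel SVM on the full, standardized feature set
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```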

Keywords: Smart phones, emotional speech recognition, social networks, support vector machines, time-frequency parameter, Mel-scale frequency cepstral coefficients (MFCC).

236 Determining G-γ Degradation Curve in Cohesive Soils by Dilatometer and in situ Seismic Tests

Authors: Ivandic Kreso, Spiranec Miljenko, Kavur Boris, Strelec Stjepan

Abstract:

This article discusses the possibility of using dilatometer tests (DMT) together with in situ seismic tests (MASW) in order to obtain the shape of the G-γ degradation curve in cohesive soils (clay, silty clay, silt, clayey silt, and sandy silt). The MASW test provides the small-strain soil stiffness (G0, from vs) at very small strains, and the DMT provides the stiffness of the soil at 'work strains' (MDMT). At different test locations, the dilatometer shear stiffness of the soil has been determined using the theory of elasticity. The dilatometer shear stiffness has been compared with the theoretical G-γ degradation curve in order to determine the typical range of shear deformation for different types of cohesive soil. The analysis also includes factors that influence the shape of the degradation curve (G-γ) and the dilatometer modulus (MDMT), such as the overconsolidation ratio (OCR), the plasticity index (IP), and the vertical effective stress in the soil (σ'v0). A parametric study in this article defines the range of shear strain γDMT and the GDMT/G0 relation depending on the classification of the cohesive soil (clay, silty clay, clayey silt, silt, and sandy silt), its density (loose, medium dense, and dense), and its stiffness (soft, medium hard, and hard). The article illustrates the potential of using MASW and DMT to obtain the G-γ degradation curve in cohesive soils.
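
To make the workflow concrete, the sketch below combines the two measurements: G0 from the MASW shear-wave velocity, a shear modulus at work strains derived from the DMT constrained modulus via linear elasticity, and the shear strain read off an assumed hyperbolic degradation model. The soil density, Poisson's ratio, reference strain, and input values are illustrative assumptions, not the paper's correlations.

```python
import numpy as np

def g0_from_masw(vs, rho=1900.0):
    """Small-strain shear modulus from MASW shear-wave velocity: G0 = rho * vs^2 (Pa)."""
    return rho * vs ** 2

def g_from_dmt(M_dmt, nu=0.3):
    """Shear modulus at DMT 'work strains' from the constrained modulus M_DMT,
    using linear elasticity: G = M (1 - 2*nu) / (2 * (1 - nu))."""
    return M_dmt * (1 - 2 * nu) / (2 * (1 - nu))

def strain_on_hyperbolic_curve(g_ratio, gamma_ref=5e-4):
    """Invert an assumed hyperbolic degradation model G/G0 = 1 / (1 + gamma/gamma_ref)
    to estimate the shear strain associated with the DMT stiffness."""
    return gamma_ref * (1.0 / g_ratio - 1.0)

vs, M_dmt = 180.0, 45e6          # illustrative inputs: vs in m/s, M_DMT in Pa
G0 = g0_from_masw(vs)
G_dmt = g_from_dmt(M_dmt)
ratio = G_dmt / G0
print(f"G0 = {G0/1e6:.1f} MPa, G_DMT = {G_dmt/1e6:.1f} MPa, G_DMT/G0 = {ratio:.2f}")
print(f"estimated DMT shear strain ~ {strain_on_hyperbolic_curve(ratio):.2e}")
```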

Keywords: Dilatometer testing, MASW testing, shear wave, soil stiffness, stiffness reduction, shear strain.

235 Dynamic Threshold Adjustment Approach For Neural Networks

Authors: Hamza A. Ali, Waleed A. J. Rasheed

Abstract:

The use of neural networks for recognition applications is generally constrained by the inflexibility of their parameters after the training phase: no adaptation is accommodated for input variations that have any influence on the network parameters. Attempts were made in this work to design a neural network that includes an additional mechanism for adjusting the threshold values according to input pattern variations. The new approach is based on splitting the whole network into two subnets: a main traditional net and a supportive net. The first deals with the required output of trained patterns with predefined settings, while the second tolerates output generation dynamically, with tuning capability for any newly applied input. This tuning comes in the form of an adjustment to the threshold values. Two levels of supportive net were studied: one implements an extended additional layer with an adjustable neuronal threshold-setting mechanism, while the second implements an auxiliary net with a traditional architecture that performs dynamic adjustment to the threshold value of the main net, which is constructed in a dual-layer architecture. Experimental results and analysis of the proposed designs have been quite satisfactory. The supportive-layer approach achieved over a 90% recognition rate, while the multiple-network technique shows a more effective and acceptable level of recognition; however, this is achieved at the price of network complexity and computation time. Recognition generalization may also be improved by accommodating capabilities involving all the innate structures, in conjunction with further advanced learning phases.

Keywords: Classification, Recognition, Neural Networks, Pattern Recognition, Generalization.
