Search results for: wavelet domain
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1902

1812 Morphology Operation and Discrete Wavelet Transform for Blood Vessels Segmentation in Retina Fundus

Authors: Rita Magdalena, N. K. Caecar Pratiwi, Yunendah Nur Fuadah, Sofia Saidah, Bima Sakti

Abstract:

Vessel segmentation of the retinal fundus is important in the biomedical sciences for diagnosing ailments related to the eye, since segmentation makes it easier for medical experts to assess the state of a retinal fundus image. Therefore, in this study, we designed MATLAB software that segments the retinal blood vessels in retinal fundus images. There are two main steps in the segmentation process. The first step is image preprocessing, which aims to improve the quality of the image so that it can be segmented optimally. The second step is image segmentation, which extracts the retinal blood vessels from the eye fundus image. The segmentation methods analyzed in this study are the morphology operation, the discrete wavelet transform, and a combination of both. The data used in this project consist of 40 retinal images and 40 manually segmented reference images. After several testing scenarios, the average accuracy for the morphology operation method is 88.46%, while for the discrete wavelet transform it is 89.28%. By combining the two methods, the average accuracy increases to 89.53%. The result of this study is an image processing system that can segment the blood vessels in the retinal fundus with high accuracy and low computation time.
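
For illustration only (the original work was done in MATLAB, and the abstract gives no implementation details), the Python sketch below combines a morphological black-hat operation with DWT detail sub-bands to enhance vessels before thresholding; the structuring-element size, the 'haar' wavelet, and the mixing weights are assumptions, not values from the paper.

# Hypothetical morphology + DWT vessel enhancement sketch (parameters are assumptions).
import cv2
import numpy as np
import pywt

def segment_vessels(fundus_bgr):
    green = fundus_bgr[:, :, 1]                    # green channel has the best vessel contrast
    green = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(green)

    # Morphology operation: black-hat highlights dark vessels on a brighter background
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    morph = cv2.morphologyEx(green, cv2.MORPH_BLACKHAT, se)

    # Discrete wavelet transform: vessels show up in the detail sub-bands
    cA, (cH, cV, cD) = pywt.dwt2(green.astype(float), 'haar')
    detail = np.abs(cH) + np.abs(cV) + np.abs(cD)
    detail = cv2.resize(detail, (green.shape[1], green.shape[0]))
    detail = cv2.normalize(detail, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Combine both responses and threshold (Otsu) to obtain the vessel mask
    combined = cv2.addWeighted(morph, 0.5, detail, 0.5, 0)
    _, mask = cv2.threshold(combined, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask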

Keywords: discrete wavelet transform, fundus retina, morphology operation, segmentation, vessel

Procedia PDF Downloads 168
1811 Robust Medical Image Watermarking based on Contourlet and Extraction Using ICA

Authors: S. Saju, G. Thirugnanam

Abstract:

In this paper, a medical image watermarking algorithm based on the contourlet transform is proposed. Medical image watermarking is a special subcategory of image watermarking in the sense that the images have special requirements: watermarked medical images should not differ perceptually from their original counterparts, because the clinical reading of the images must not be affected. Watermarking techniques based on the wavelet transform are reported in many papers, but robustness and security are better with the contourlet transform than with the wavelet transform. The main challenge in exploiting geometry in images comes from the discrete nature of the data. In this paper, the original image is decomposed to two levels using the contourlet transform, and the watermark is embedded in the resulting sub-bands. Sub-band selection is based on the value of the peak signal-to-noise ratio (PSNR) calculated between the watermarked and original images. To extract the watermark, kernel ICA is used; a novel characteristic of this approach is that it does not require the transformation process to extract the watermark. Simulation results show that the proposed scheme is robust against attacks such as salt-and-pepper noise, median filtering, and rotation. The performance measures, PSNR and a similarity measure, are evaluated and compared with the discrete wavelet transform (DWT) to prove the robustness of the scheme. Simulations are carried out using MATLAB.

Keywords: digital watermarking, independent component analysis, wavelet transform, contourlet

Procedia PDF Downloads 503
1810 Quality Assurance in Software Design Patterns

Authors: Rabbia Tariq, Hannan Sajjad, Mehreen Sirshar

Abstract:

Design patterns are widely used to make the development process easier, as they greatly help developers build software. Many design patterns have been introduced, but the behavior of the same design pattern may differ across domains, which can lead to the wrong pattern being selected. This paper aims to discover the design patterns that suit each domain best, thereby helping developers choose an effective design pattern. It presents a comprehensive analysis of design patterns based on different methodologies, including simulation, case studies, and the comparison of various algorithms. Because of differences between domains, the methodology used in one domain may be inapplicable in another. The paper draws conclusions based on the strengths and limitations of each design pattern in its respective domain.

Keywords: design patterns, evaluation, quality assurance, software domains

Procedia PDF Downloads 489
1809 Stator Short-Circuit Fault Diagnosis in Induction Motors

Authors: K. Yahia, M. Sahraoui, A. Guettaf

Abstract:

This paper deals with the problem of stator fault diagnosis in induction motors. Using the discrete wavelet transform (DWT) to analyze the current Park's vector modulus (CPVM), inter-turn short-circuit faults can be diagnosed. The method is based on the decomposition of the CPVM signal, from which the wavelet approximation and detail coefficients are extracted. Evaluating the energy of a detail coefficient in a known bandwidth allows a fault severity factor (FSF) to be defined. The method has been tested through the simulation of an induction motor using a mathematical model based on the winding-function approach. Simulation as well as experimental results show the effectiveness of the method.
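
As a hedged illustration of the signal chain described above (the wavelet, decomposition level, and monitored detail band are assumptions, not values taken from the paper), a minimal Python sketch:

import numpy as np
import pywt

def fault_severity_factor(ia, ib, ic, wavelet='db8', level=6, detail_idx=2):
    """Current Park's vector modulus (CPVM) followed by the DWT energy of one detail band."""
    # Park (Concordia) transform of the three stator currents
    i_alpha = np.sqrt(2.0 / 3.0) * ia - (1.0 / np.sqrt(6.0)) * (ib + ic)
    i_beta = (1.0 / np.sqrt(2.0)) * (ib - ic)
    cpvm = np.sqrt(i_alpha ** 2 + i_beta ** 2)

    # Multilevel DWT: coeffs[0] is the approximation, coeffs[1:] are the detail bands
    coeffs = pywt.wavedec(cpvm, wavelet, level=level)

    # Energy of the chosen detail band, normalised by the total signal energy,
    # used here as a simple FSF-like severity indicator (assumption)
    band_energy = np.sum(coeffs[detail_idx] ** 2)
    return band_energy / np.sum(cpvm ** 2)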

Keywords: induction motors (IMs), inter-turn short-circuits diagnosis, discrete wavelet transform (DWT), Current Park’s Vector Modulus (CPVM)

Procedia PDF Downloads 429
1808 Sampling Two-Channel Nonseparable Wavelets and Its Applications in Multispectral Image Fusion

Authors: Bin Liu, Weijie Liu, Bin Sun, Yihui Luo

Abstract:

In order to solve the problems of low spatial resolution and block effects that appear in fusion results based on the separable wavelet transform, a new sampling mode based on the multi-resolution analysis of a two-channel nonseparable wavelet transform, whose dilation matrix is [1,1;1,-1], is presented, and a multispectral image fusion method based on this sampling mode is proposed. Filter banks related to this kind of wavelet are constructed, and multiresolution decompositions of the intensity of the MS image and of the panchromatic image are performed in the sampled mode using the constructed filter bank. The low- and high-frequency coefficients are fused with different fusion rules. The experimental results show that the method gives a good visual effect. The fusion performance outperforms the IHS fusion method, as well as the fusion methods based on DWT, IHS-DWT, IHS-contourlet transform, and IHS-curvelet transform, in preserving both spectral quality and high spatial resolution information. Furthermore, compared with the fusion method based on the nonsubsampled two-channel nonseparable wavelet, the proposed method has higher spatial resolution and good global spectral information.

Keywords: image fusion, two-channel sampled nonseparable wavelets, multispectral image, panchromatic image

Procedia PDF Downloads 403
1807 Fake News Detection Based on Fusion of Domain Knowledge and Expert Knowledge

Authors: Yulan Wu

Abstract:

The spread of fake news on social media has caused significant harm to the public and the nation, with its threats spanning various domains, including politics, economics, health, and more. News on social media often covers multiple domains, and the existing models studied by researchers and relevant organizations often perform well on datasets from a single domain. However, when these methods are applied to social platforms whose news spans multiple domains, their performance deteriorates significantly. Existing research has attempted to enhance detection performance on multi-domain datasets by adding single-domain labels to the data. However, these methods overlook the fact that a news article typically belongs to multiple domains, leading to the loss of the domain knowledge contained within the news text. Research has also found that news records in different domains often use different vocabularies to describe their content. To address this issue, we propose in this paper a fake news detection framework that combines domain knowledge and expert knowledge. Firstly, it utilizes an unsupervised domain discovery module to generate a low-dimensional vector for each news article, representing a domain embedding that retains the multi-domain knowledge of the news content. Then, a feature extraction module uses the domain embeddings discovered through unsupervised domain knowledge to guide multiple experts in extracting news knowledge for the total feature representation. Finally, a classifier is used to determine whether the news is fake or not. Experiments show that this approach can improve multi-domain fake news detection performance while reducing the cost of manually labeling domain labels.
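
The abstract describes the architecture only at a high level; the PyTorch sketch below is a loose, hypothetical reading of it (layer sizes, the soft-clustering stand-in for domain discovery, and the gating scheme are all assumptions): an unsupervised module produces a domain embedding per article, which gates several expert encoders whose fused output feeds a binary classifier.

import torch
import torch.nn as nn

class DomainGatedExperts(nn.Module):
    """Mixture of expert encoders whose mixing weights come from a domain embedding."""
    def __init__(self, text_dim=768, domain_dim=8, hidden=128, n_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU()) for _ in range(n_experts)])
        self.gate = nn.Sequential(nn.Linear(domain_dim, n_experts), nn.Softmax(dim=-1))
        self.classifier = nn.Linear(hidden, 2)          # fake vs. real

    def forward(self, text_feat, domain_emb):
        expert_out = torch.stack([e(text_feat) for e in self.experts], dim=1)   # (B, E, H)
        weights = self.gate(domain_emb).unsqueeze(-1)                           # (B, E, 1)
        fused = (weights * expert_out).sum(dim=1)                               # (B, H)
        return self.classifier(fused)

# The domain embedding could come from an unsupervised module such as soft k-means
# memberships over TF-IDF vectors (a stand-in for the paper's domain discovery module).
model = DomainGatedExperts()
logits = model(torch.randn(16, 768), torch.softmax(torch.randn(16, 8), dim=-1))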

Keywords: fake news, deep learning, natural language processing, multiple domains

Procedia PDF Downloads 37
1806 Effect of Removing Hub Domain on Human CaMKII Isoforms Sensitivity to Calcium/Calmodulin

Authors: Ravid Inbar

Abstract:

CaMKII (calcium/calmodulin-dependent protein kinase II) makes up 2% of the protein in our brain and has a critical role in memory formation and long-term potentiation of neurons. Despite this, research has yet to uncover the role that one of its domains plays in the activation of this kinase. The following proposes to express the protein without the hub domain in E. coli, leaving only the kinase domain and the regulatory segment of the protein. Next, a series of kinase assays will be conducted to elucidate the role the hub domain plays in CaMKII sensitivity to calcium/calmodulin activation. The hub domain may be important for activation; however, it may also be a variety of domains working together, rather than the hub alone, that influences protein activation. Characterization of a protein is critical to the future understanding of its function, as well as for producing pharmacological targets for patients with disease.

Keywords: CaMKII, hub domain, kinase assays, kinase + reg seg

Procedia PDF Downloads 56
1805 A Passive Digital Video Authentication Technique Using Wavelet Based Optical Flow Variation Thresholding

Authors: R. S. Remya, U. S. Sethulekshmi

Abstract:

Detecting the authenticity of a video is an important issue in digital forensics, as video is used as silent evidence in court, for instance in child pornography and movie piracy cases, insurance claims, cases involving scientific fraud, and traffic monitoring. The biggest threat to video data is the availability of modern open video editing tools, which enable easy editing of videos without leaving any trace of tampering. In this paper, we propose an efficient passive method for detecting inter-frame video tampering, its type, and its location by estimating the optical flow of the wavelet features of adjacent frames and thresholding the variation in the estimated feature. The performance of the algorithm is compared with z-score thresholding, and it achieved an efficiency above 95% on all the tested databases. The proposed method works well for videos with dynamic (forensic) as well as static (surveillance) backgrounds.
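
A hedged sketch of the feature pipeline described above (the wavelet, the Farnebäck optical-flow parameters, and the simple variation statistic are assumptions; the paper's exact thresholding rule is not reproduced):

import cv2
import numpy as np
import pywt

def wavelet_approximation(gray):
    """Low-frequency DWT sub-band of a frame, rescaled to 8-bit for optical-flow input."""
    cA, _ = pywt.dwt2(gray.astype(float), 'haar')
    return cv2.normalize(cA, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def flow_variation(frames):
    """Mean optical-flow magnitude between consecutive wavelet features of a video."""
    feats = [wavelet_approximation(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)) for f in frames]
    magnitudes = []
    for prev, nxt in zip(feats[:-1], feats[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitudes.append(np.mean(np.linalg.norm(flow, axis=2)))
    return np.array(magnitudes)

# Frames whose flow variation deviates strongly from the local median can be flagged as
# candidate insertion/deletion points; the threshold itself must be tuned per dataset.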

Keywords: discrete wavelet transform, optical flow, optical flow variation, video tampering

Procedia PDF Downloads 332
1804 A Boundary Backstepping Control Design for 2-D, 3-D and N-D Heat Equation

Authors: Aziz Sezgin

Abstract:

We consider the problem of stabilization of an unstable heat equation in a 2-D, 3-D and, generally, n-D domain by deriving a generalized backstepping boundary control design methodology. To stabilize the systems, we design boundary backstepping controllers inspired by the 1-D unstable heat equation stabilization procedure. We assume that one side of the boundary is hinged and the other side is controlled for each direction of the domain. Thus, the controllers act on two boundaries for a 2-D domain, three boundaries for a 3-D domain, and n boundaries for an n-D domain. The main idea of the design is to derive n controllers, one for each of the dimensions, by using n kernel functions. Thus, we obtain n controllers for the n-dimensional case. We use a transformation to change the system into an exponentially stable n-dimensional heat equation. The transformation used in this paper is of generalized Volterra/Fredholm type with n kernel functions for the n-D domain, instead of the single kernel function of the 1-D design.
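
For reference, the 1-D backstepping design that the abstract cites as its inspiration is the well-known result for the unstable heat equation u_t = u_xx + lambda*u with u(0,t) = 0 and Dirichlet actuation at x = 1 (this is textbook material, not the paper's n-D construction, which uses one such kernel per coordinate direction):

\begin{align}
w(x,t) &= u(x,t) - \int_0^x k(x,y)\, u(y,t)\, dy, \\
k(x,y) &= -\lambda y\, \frac{I_1\!\left(\sqrt{\lambda\,(x^2 - y^2)}\right)}{\sqrt{\lambda\,(x^2 - y^2)}}, \\
u(1,t) &= \int_0^1 k(1,y)\, u(y,t)\, dy,
\end{align}

which maps the plant into the exponentially stable target system w_t = w_xx with w(0,t) = w(1,t) = 0.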

Keywords: backstepping, boundary control, 2-D, 3-D, n-D heat equation, distributed parameter systems

Procedia PDF Downloads 375
1803 Image Compression Based on Regression SVM and Biorthogonal Wavelets

Authors: Zikiou Nadia, Lahdir Mourad, Ameur Soltane

Abstract:

In this paper, we propose an effective method for image compression based on support vector regression (SVR), with three different kernels, and the biorthogonal 2D discrete wavelet transform. SVM regression can learn the dependency in the training data and represent the original data with fewer training points (the support vectors), eliminating redundancy. A biorthogonal wavelet is used to transform the image, and the coefficients obtained are then trained with SVMs using different kernels (Gaussian, polynomial, and linear). Run-length and arithmetic coders are used to encode the support vectors and their corresponding weights obtained from the SVM regression. The peak signal-to-noise ratios (PSNR) and compression ratios of several test images compressed with our algorithm, using the different kernels, are presented. Compared with the other kernels, the Gaussian kernel achieves better image quality. Experimental results show that the compression performance of our method gains much improvement.
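
A minimal Python sketch of the core idea, under assumptions not stated in the abstract (the 'bior4.4' wavelet, a single decomposition level, fitting one SVR per sub-band row, and an 8-bit image are illustrative choices; the run-length/arithmetic coding stage is omitted):

import numpy as np
import pywt
from sklearn.svm import SVR

def compress_subband(band, kernel='rbf', epsilon=5.0):
    """Fit an SVR to each row of a wavelet sub-band; only the support vectors need storing."""
    rows, n_sv = [], 0
    x = np.arange(band.shape[1]).reshape(-1, 1).astype(float)
    for row in band:
        svr = SVR(kernel=kernel, epsilon=epsilon).fit(x, row)
        rows.append(svr.predict(x))          # reconstruction from the fitted model
        n_sv += len(svr.support_)            # proxy for the compressed size
    return np.vstack(rows), n_sv

def svr_dwt_compress(image, wavelet='bior4.4', kernel='rbf'):
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), wavelet)
    cH_r, nH = compress_subband(cH, kernel)
    cV_r, nV = compress_subband(cV, kernel)
    cD_r, nD = compress_subband(cD, kernel)
    recon = pywt.idwt2((cA, (cH_r, cV_r, cD_r)), wavelet)     # approximation kept exactly
    mse = np.mean((recon[:image.shape[0], :image.shape[1]] - image) ** 2)
    psnr = 10 * np.log10(255.0 ** 2 / mse)                    # assumes an 8-bit image
    return recon, psnr, nH + nV + nD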

Keywords: image compression, 2D discrete wavelet transform (DWT-2D), support vector regression (SVR), SVM Kernels, run-length, arithmetic coding

Procedia PDF Downloads 352
1802 Speech Intelligibility Improvement Using Variable Level Decomposition DWT

Authors: Samba Raju Chiluveru, Manoj Tripathy

Abstract:

Intelligibility is an essential characteristic of a speech signal; it describes how well the information in the signal can be understood. Background noise in the environment can deteriorate the intelligibility of recorded speech. In this paper, we present a simple variance-subtracted, variable-level discrete wavelet transform that improves the intelligibility of speech. The proposed algorithm does not require an explicit estimation of the noise, i.e., prior knowledge of the noise; hence, it is easy to implement and reduces the computational burden. The proposed algorithm decides a separate decomposition level for each frame based on signal-dominant and noise-dominant criteria. The performance of the proposed algorithm is evaluated with the short-time objective intelligibility (STOI) measure, and the results obtained are compared with universal discrete wavelet transform (DWT) thresholding and minimum mean square error (MMSE) methods. The experimental results reveal that the proposed scheme outperforms the competing methods.
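
The abstract does not spell out the per-frame level-selection rule; the sketch below is one hedged interpretation in Python (the frame length, the 'db4' wavelet, the variance-based level choice, and the per-band soft thresholding are all assumptions, not the paper's exact rules):

import numpy as np
import pywt

def enhance_frame(frame, wavelet='db4', max_level=5):
    """Pick a decomposition level per frame, then shrink the detail coefficients."""
    # Placeholder heuristic: frames with larger variance get deeper decompositions.
    level = int(np.clip(1 + np.log2(1.0 + np.var(frame)), 1, max_level))
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    # "Variance-subtracted" style shrinkage: each detail band is soft-thresholded at its own
    # standard deviation, so no separate noise estimate is needed (assumed interpretation).
    coeffs = [coeffs[0]] + [pywt.threshold(c, np.std(c), mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(frame)]

def enhance_speech(signal, frame_len=512):
    frames = [signal[i:i + frame_len] for i in range(0, len(signal) - frame_len + 1, frame_len)]
    return np.concatenate([enhance_frame(f) for f in frames])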

Keywords: discrete wavelet transform, speech intelligibility, STOI, standard deviation

Procedia PDF Downloads 114
1801 Spectral Domain Fast Multipole Method for Solving Integral Equations of One and Two Dimensional Wave Scattering

Authors: Mohammad Ahmad, Dayalan Kasilingam

Abstract:

In this paper, a spectral domain implementation of the fast multipole method is presented. It is shown that the aggregation, translation, and disaggregation stages of the fast multipole method (FMM) can be performed using spectral domain (SD) analysis. The spectral domain fast multipole method (SD-FMM) has the advantage of eliminating the near-field/far-field classification used in the conventional FMM formulation. The study focuses on the application of the SD-FMM to the one-dimensional (1D) and two-dimensional (2D) electric field integral equation (EFIE). The cases of a perfectly conducting strip and of circular and square cylinders are numerically analyzed and compared with the results from the standard method of moments (MoM).

Keywords: electric field integral equation, fast multipole method, method of moments, wave scattering, spectral domain

Procedia PDF Downloads 371
1800 Basic Calibration and Normalization Techniques for Time Domain Reflectometry Measurements

Authors: Shagufta Tabassum

Abstract:

The study of the dielectric properties of a binary mixture of liquids is very useful for understanding the liquid structure, molecular interactions, dynamics, and kinematics of the mixture. Time-domain reflectometry (TDR) is a powerful tool for studying the cooperative and molecular dynamics of H-bonded systems. In this paper, we discuss the basic calibration and normalization procedures for time-domain reflectometry measurements. Our approach is to explain the different types of errors that occur during TDR measurements and how these errors can be eliminated or minimized.

Keywords: time domain reflectometry measurement technique, cable and connector loss, oscilloscope loss, normalization technique

Procedia PDF Downloads 179
1799 The OQAM-OFDM System Using WPT/IWPT Replaced FFT/IFFT

Authors: Alaa H. Thabet, Ehab F. Badran, Moustafa H. Aly

Abstract:

With the rapid expansion of wireless digital communications, the demand for wireless systems that are reliable and have high spectral efficiency has also increased. The filter bank multicarrier (FBMC) scheme based on OFDM/OQAM has been recognized for its good performance in achieving high data rates. The fast Fourier transform (FFT) has been used to produce the orthogonal sub-carriers, but the FFT-based OFDM system suffers from a high peak-to-average power ratio (PAPR) and from synchronization problems. In this paper, the wavelet packet transform (WPT) is used in place of the FFT and shows better performance.
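
As a rough, hedged illustration of the idea (the Haar wavelet, the 64-leaf packet tree, and the use of real PAM symbols as the OQAM halves are assumptions; the paper's exact transceiver structure is not reproduced), the sketch below synthesizes one multicarrier symbol with the inverse wavelet packet transform and compares its PAPR with an IFFT-based symbol:

import numpy as np
import pywt
from itertools import product

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def wpt_synthesis(real_symbols, wavelet='haar'):
    """IWPT used as the multicarrier synthesis stage: each real (OQAM-style) symbol is
    placed on one leaf of the wavelet packet tree instead of on one IFFT bin."""
    level = int(np.log2(len(real_symbols)))
    wp = pywt.WaveletPacket(data=None, wavelet=wavelet, mode='periodization')
    for path, s in zip((''.join(p) for p in product('ad', repeat=level)), real_symbols):
        wp[path] = np.array([float(s)])              # one coefficient per leaf node
    return wp.reconstruct(update=False)

rng = np.random.default_rng(0)
symbols = rng.choice([-3.0, -1.0, 1.0, 3.0], size=64)    # real PAM halves of OQAM symbols
print('PAPR WPT : %.2f dB' % papr_db(wpt_synthesis(symbols)))
print('PAPR IFFT: %.2f dB' % papr_db(np.fft.ifft(symbols)))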

Keywords: OQAM-OFDM, wavelet packet transform, PAPR, FFT

Procedia PDF Downloads 425
1798 Domain-Specific Languages Evaluation: A Literature Review and Experience Report

Authors: Sofia Meacham

Abstract:

In this abstract paper, Domain-Specific Language (DSL) evaluation is presented based on the existing literature and on years of experience developing DSLs for several domains. The domains we worked on ranged from AI, business applications, and finance/accounting to health. In general, DSLs have been utilised in many domains to provide tailored and efficient solutions to specific problems. Although they are a reputable method among highly technical circles and have also been used by non-technical experts with success, to our knowledge there isn't a commonly accepted method for evaluating them. There are some methods that define criteria adapted from general software engineering quality criteria. Other literature focuses on the usability aspect of DSL evaluation and applies methods such as Human-Computer Interaction (HCI) and goal modeling. All these approaches are either hard to introduce, such as goal modeling, or seem to ignore the domain-specific focus of DSLs. In our experience, DSLs have domain-specificity at their core, and consequently, the methods used to evaluate them should also include domain-specific criteria at their core. Such domain-specific criteria require synergy between the domain experts and the DSL developers, in the same way that DSLs cannot be developed without the domain experts' involvement. Methods from agile and other software engineering practices, such as co-creation workshops, should be further emphasised and explored to facilitate this direction. Concluding, our latest experience and plans for DSL evaluation will be presented and opened for discussion.

Keywords: domain-specific languages, DSL evaluation, DSL usability, DSL quality metrics

Procedia PDF Downloads 74
1797 High-Capacity Image Steganography using Wavelet-based Fusion on Deep Convolutional Neural Networks

Authors: Amal Khalifa, Nicolas Vana Santos

Abstract:

Steganography has been known for centuries as an efficient approach to covert communication. Due to its popularity and ease of access, image steganography has attracted researchers to find secure techniques for hiding information within an innocent-looking cover image. In this research, we propose a novel deep-learning approach to digital image steganography. The proposed method, DeepWaveletFusion, uses convolutional neural networks (CNN) to hide a secret image inside a cover image of the same size. Two CNNs are trained back-to-back to merge the discrete wavelet transforms (DWT) of both colored images and, eventually, to blindly extract the hidden image. Based on two different image similarity metrics, a weighted gain function is used to guide the learning process and maximize the quality of the retrieved secret image while maintaining acceptable imperceptibility. Experimental results verified the high recoverability of DeepWaveletFusion, which outperformed similar deep-learning-based methods.
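
The abstract gives only the outline of DeepWaveletFusion; the PyTorch sketch below is a rough, hypothetical reading of it (network sizes, the 12-channel coefficient layout, and the loss weights are assumptions, and the networks here operate purely in the wavelet-coefficient domain for simplicity). A hide network merges the DWT coefficients of cover and secret, and a reveal network blindly recovers the secret's coefficients, trained back-to-back with a weighted loss:

import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

class HideNet(nn.Module):
    """Merges DWT coefficients of cover and secret into stego coefficients."""
    def __init__(self, ch=12):                      # 3 colours x 4 sub-bands (assumption)
        super().__init__()
        self.net = nn.Sequential(conv_block(2 * ch, 64), conv_block(64, 64),
                                 nn.Conv2d(64, ch, 3, padding=1))
    def forward(self, cover_dwt, secret_dwt):
        return self.net(torch.cat([cover_dwt, secret_dwt], dim=1))

class RevealNet(nn.Module):
    """Blindly recovers the secret's DWT coefficients from the stego coefficients."""
    def __init__(self, ch=12):
        super().__init__()
        self.net = nn.Sequential(conv_block(ch, 64), conv_block(64, 64),
                                 nn.Conv2d(64, ch, 3, padding=1))
    def forward(self, stego_dwt):
        return self.net(stego_dwt)

hide, reveal = HideNet(), RevealNet()
opt = torch.optim.Adam(list(hide.parameters()) + list(reveal.parameters()), lr=1e-3)
w_cover, w_secret = 0.7, 0.3                        # weighted gain between the two terms

cover_dwt = torch.randn(8, 12, 64, 64)              # stand-ins for real DWT coefficients
secret_dwt = torch.randn(8, 12, 64, 64)
stego_dwt = hide(cover_dwt, secret_dwt)
loss = w_cover * nn.functional.mse_loss(stego_dwt, cover_dwt) \
     + w_secret * nn.functional.mse_loss(reveal(stego_dwt), secret_dwt)
opt.zero_grad()
loss.backward()
opt.step()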

Keywords: deep learning, steganography, image, discrete wavelet transform, fusion

Procedia PDF Downloads 43
1796 Classifications of Sleep Apnea (Obstructive, Central, Mixed) and Hypopnea Events Using Wavelet Packet Transform and Support Vector Machines (SVM)

Authors: Benghenia Hadj Abd El Kader

Abstract:

Sleep apnea events, whether obstructive, central, mixed, or hypopnea, are characterized by frequent breathing cessations or reductions in upper airflow during sleep. An advanced method for analyzing the patterning of biomedical signals to recognize obstructive sleep apnea and hypopnea is presented. With the aim of extracting characteristic parameters, which are then used for classifying the above-stated (obstructive, central, mixed) sleep apnea and hypopnea events, the proposed method is based first on the analysis of polysomnography signals such as the electrocardiogram (ECG) and the electromyogram (EMG), and then on the classification of the (obstructive, central, mixed) sleep apnea and hypopnea events. The analysis is carried out using the wavelet transform technique in order to extract the characteristic parameters, whereas the classification is carried out by applying the SVM (support vector machine) technique. The obtained results show good recognition rates using the characteristic parameters.
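
A hedged sketch of such a wavelet-feature/SVM pipeline in Python (the wavelet, the decomposition depth, and the use of relative band energies as the characteristic parameters are assumptions; the paper's exact feature set is not specified in the abstract):

import numpy as np
import pywt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def wavelet_energy_features(segment, wavelet='db4', level=5):
    """Relative energy of each wavelet sub-band of an ECG/EMG segment."""
    coeffs = pywt.wavedec(segment, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

def train_classifier(segments, labels):
    """labels: 0=obstructive, 1=central, 2=mixed, 3=hypopnea (multi-class SVM)."""
    X = np.vstack([wavelet_energy_features(s) for s in segments])
    clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=10.0))
    return clf.fit(X, labels)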

Keywords: obstructive, central, mixed, sleep apnea, hypopnea, ECG, EMG, wavelet transform, SVM classifier

Procedia PDF Downloads 344
1795 Periodicity Analysis of Long-Term Water Quality Data Series of the Hungarian Section of the River Tisza Using Morlet Wavelet Spectrum Estimation

Authors: Péter Tanos, József Kovács, Angéla Anda, Gábor Várbíró, Sándor Molnár, István Gábor Hatvani

Abstract:

The River Tisza is the second largest river in Central Europe. In this study, Morlet wavelet spectrum (periodicity) analysis was applied to chemical, biological, and physical water quality data for the Hungarian section of the River Tisza. In the research, 15 water quality parameters measured at 14 sampling sites on the River Tisza and 4 sampling sites on the main artificial channels were assessed for the period 1993-2005. Results show that annual periodicity was not always present in the water quality parameters, at least at certain sampling sites. Periodicity was found to vary over space and time but, in general, an increase was observed alongside the higher trophic states of the river heading downstream.
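
For illustration, a minimal Python sketch of a Morlet wavelet spectrum for one monthly water quality series (the scale range and the use of PyWavelets' 'morl' wavelet are assumptions; the paper's exact spectral estimation settings are not given in the abstract):

import numpy as np
import pywt

def morlet_spectrum(series, dt_years=1.0 / 12.0):
    """Continuous Morlet wavelet transform of a monthly water-quality series."""
    scales = np.arange(1, 128)
    coefs, freqs = pywt.cwt(series, scales, 'morl', sampling_period=dt_years)
    power = np.abs(coefs) ** 2                       # wavelet power spectrum
    periods = 1.0 / freqs                            # in years
    # Power at the ~1-year period as a function of time; a pronounced ridge there
    # indicates annual periodicity in the parameter.
    annual = power[np.argmin(np.abs(periods - 1.0))]
    return periods, power, annual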

Keywords: annual periodicity of water quality, spatiotemporal variability of periodic behavior, Morlet wavelet spectrum analysis, River Tisza

Procedia PDF Downloads 309
1794 DWT-SATS Based Detection of Image Region Cloning

Authors: Michael Zimba

Abstract:

A duplicated image region may be subjected to a number of attacks, such as noise addition, compression, reflection, rotation, and scaling, with the intention of either merely matching it to its targeted neighborhood or preventing its detection. In this paper, we present an effective and robust method of detecting duplicated regions, including those affected by the various attacks. In order to reduce the dimension of the image, the proposed algorithm first performs a discrete wavelet transform (DWT) of the suspicious image. However, unlike most existing copy-move image forgery (CMIF) detection algorithms operating in the DWT domain, which extract only the low-frequency sub-band of the DWT of the suspicious image and thereby leave valuable information in the other three sub-bands, the proposed algorithm simultaneously extracts features from all four sub-bands. The extracted features are not only a more accurate representation of the image regions but are also robust to additive noise, JPEG compression, and affine transformation. Furthermore, principal component analysis-eigenvalue decomposition (PCA-EVD) is applied to reduce the dimension of the features. The extracted features are then sorted using the more computationally efficient radix sort algorithm. Finally, same affine transformation selection (SATS), a duplication verification method, is applied to detect the duplicated regions. The proposed algorithm is not only fast but also more robust to attacks than related CMIF detection algorithms. The experimental results show high detection rates.

Keywords: affine transformation, discrete wavelet transform, radix sort, SATS

Procedia PDF Downloads 201
1793 Single Carrier Frequency Domain Equalization Design to Cope with Narrow Band Jammer

Authors: So-Young Ju, Sung-Mi Jo, Eui-Rim Jeong

Abstract:

In this paper, based on the conventional single carrier frequency domain equalization (SC-FDE) structure, we propose a new SC-FDE structure to cope with a narrowband jammer. In the conventional SC-FDE structure, channel estimation is performed in the time domain. When a narrowband jammer exists, time-domain channel estimation is very difficult due to the high-power jamming interference, which degrades receiver performance. To relieve this problem, a new SC-FDE frame is proposed that enables channel estimation under narrowband jamming environments. In this paper, we propose a modified SC-FDE structure that can perform channel estimation in the frequency domain, and we verify its performance via computer simulation.
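
A hedged numpy sketch of the frequency-domain estimation idea (the pilot design, the jammer-detection rule based on excess bin power, and the zero-forcing equalizer are illustrative assumptions, not the paper's exact frame structure):

import numpy as np

def fd_channel_estimate(rx_pilot, tx_pilot, jam_thresh_db=10.0):
    """Per-bin LS channel estimate in the frequency domain, skipping jammed bins."""
    R = np.fft.fft(rx_pilot)
    P = np.fft.fft(tx_pilot)
    H = R / P                                         # least-squares estimate per bin
    power_db = 10 * np.log10(np.abs(R) ** 2 + 1e-12)
    jammed = power_db > np.median(power_db) + jam_thresh_db   # narrowband spike detection
    # Interpolate the estimate across the jammed bins instead of trusting them
    bins = np.arange(len(H))
    H_real = np.interp(bins, bins[~jammed], H.real[~jammed])
    H_imag = np.interp(bins, bins[~jammed], H.imag[~jammed])
    return H_real + 1j * H_imag, jammed

def equalize(rx_block, H, jammed):
    Y = np.fft.fft(rx_block)
    Y[jammed] = 0.0                                   # null the jammed sub-carriers
    return np.fft.ifft(Y / H)                         # zero-forcing SC-FDE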

Keywords: channel estimation, jammer, pilot, SC-FDE

Procedia PDF Downloads 447
1792 Video Compression Using Contourlet Transform

Authors: Delara Kazempour, Mashallah Abasi Dezfuli, Reza Javidan

Abstract:

Video compression is used for channels with limited bandwidth and for storage devices with limited capacity. One of the most popular approaches in video compression is the use of different transforms. The discrete cosine transform is one video compression method, but it has problems such as blocking, noise, and high distortion, which adversely affect the compression ratio. The wavelet transform is another approach and is better than cosine transforms at balancing compression and quality, but its ability to represent curve curvature is limited. Because of the importance of compression and the problems of the cosine and wavelet transforms, the contourlet transform has become popular in video compression. In the proposed method, we use the contourlet transform for video image compression. The contourlet transform can preserve details of the image better than the previous transforms because it is multi-scale and oriented, and it can represent discontinuities such as edges. With this approach, we lose less data than with previous approaches. The contourlet transform captures the discrete space structure and is useful for representing two-dimensional smooth images. It produces compressed images with a high compression ratio along with texture and edge preservation. Finally, the results show that for the majority of the images, the mean square error and peak signal-to-noise ratio of the new contourlet-based method are improved compared to the wavelet transform, but for most of the images, the mean square error and peak signal-to-noise ratio of the cosine transform are better than those of the contourlet-based method.

Keywords: video compression, contourlet transform, discrete cosine transform, wavelet transform

Procedia PDF Downloads 414
1791 An Overview of Domain Models of Urban Quantitative Analysis

Authors: Mohan Li

Abstract:

Nowadays, intelligent research technology is becoming more important than traditional research methods in urban research work, and this proportion will greatly increase in the next few decades. Frequently, such analysis work cannot be carried out without some software engineering knowledge, and domain models of urban research become necessary when applying software engineering knowledge to urban work. In many urban planning practice projects, making rational models, feeding in reliable data, and providing enough computation all provide indispensable assistance in producing good urban planning. During the whole work process, domain models can optimize workflow design. At present, human beings have entered the era of big data. The amount of digital data generated by cities every day will increase at an exponential rate, and new data forms are constantly emerging. How to select a suitable data set from the massive amount of data and how to manage and process it have become abilities that more and more planners and urban researchers need to possess. This paper summarizes and predicts the emergence of technologies and technological iterations that may affect urban research in the future, helping to discover urban problems and implement targeted sustainable urban strategies. These are summarized into seven major domain models: the urban and rural regional domain model, the urban ecological domain model, the urban industry domain model, the development dynamics domain model, the urban social and cultural domain model, the urban traffic domain model, and the urban space domain model. These seven domain models can be used to guide the construction of systematic urban research topics and help researchers organize a series of intelligent analytical tools, such as Python, R, GIS, etc. They make full use of quantitative spatial analysis, machine learning, and other technologies to achieve higher efficiency and accuracy in urban research, assisting people in making reasonable decisions.

Keywords: big data, domain model, urban planning, urban quantitative analysis, machine learning, workflow design

Procedia PDF Downloads 153
1790 Detection of Parkinsonian Freezing of Gait

Authors: Sang-Hoon Park, Yeji Ho, Gwang-Moon Eom

Abstract:

Fast and accurate detection of freezing of gait (FOG) is desirable for the appropriate application of cueing, which has been shown to ameliorate FOG. Using the frequency spectrum of leg acceleration to derive the freeze index requires much calculation, and this would lead to delayed cueing. We hypothesized that FOG can be reasonably detected from the time-domain amplitude of foot acceleration. A time instant was recognized as FOG if the mean amplitude of the acceleration in the time window surrounding that instant was in a specific FOG range. The parameters required for FOG detection were optimized by simulated annealing. The suggested time-domain method showed performance comparable to that of frequency-domain methods.
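
A minimal Python sketch of the time-domain rule described above (the window length and the FOG amplitude range are placeholders; the paper optimizes these parameters with simulated annealing):

import numpy as np

def detect_fog(acc, fs, win_s=2.0, amp_low=0.3, amp_high=2.0):
    """Flag a sample as FOG when the mean acceleration amplitude in the surrounding
    window falls inside the FOG range [amp_low, amp_high]."""
    half = int(win_s * fs / 2)
    amp = np.abs(acc - np.mean(acc))                  # de-trended amplitude
    # Sliding-window mean via cumulative sum (cheap enough for real-time cueing)
    csum = np.cumsum(np.insert(amp, 0, 0.0))
    fog = np.zeros(len(acc), dtype=bool)
    for i in range(half, len(acc) - half):
        mean_amp = (csum[i + half] - csum[i - half]) / (2 * half)
        fog[i] = amp_low <= mean_amp <= amp_high
    return fog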

Keywords: freezing of gait, detection, Parkinson's disease, time-domain method

Procedia PDF Downloads 415
1789 Insomnia and Depression in Outpatients of Dementia Center

Authors: Jun Hong Lee

Abstract:

Background: Many dementia patients complain of insomnia and depressive mood, and hypnotics and antidepressants are being prescribed. As the prevalence of dementia increases, insomnia and depressive mood are becoming more important. Objective: We evaluated insomnia and depression in the outpatients of a dementia center. Patients and Methods: We reviewed the medical records of patients who visited the outpatient clinic of the NHIS Ilsan Hospital Dementia Center during 2016. Results: A total of 716 patients were included: subjective memory impairment (SMI), 143 patients (20%); non-amnestic mild cognitive impairment (MCI), single domain, 70 (10%), multiple domain, 34 (5%); amnestic MCI, single domain, 74 (10%), multiple domain, 159 (22%); early-onset Alzheimer's disease (AD), 9 (1%); AD, 121 (17%); vascular dementia, 62 (9%); mixed dementia, 44 (6%). Hypnotics and antidepressants were prescribed as follows: SMI, hypnotics 14 patients (10%), antidepressants 27 (19%); non-amnestic MCI, single domain, hypnotics 9 (13%), antidepressants 12 (17%), multiple domain, hypnotics 4 (12%), antidepressants 6 (18%); amnestic MCI, single domain, hypnotics 10 (14%), antidepressants 16 (22%), multiple domain, hypnotics 22 (14%), antidepressants 24 (15%); early-onset AD, hypnotics 1 (11%), antidepressants 2 (22%); AD, hypnotics 10 (8%), antidepressants 36 (30%); vascular dementia, hypnotics 8 (13%), antidepressants 20 (32%); mixed dementia, hypnotics 4 (9%), antidepressants 17 (39%). Conclusion: Among the outpatients of the dementia center, MCI and SMI form the majority, and MCI patients account for almost half. Depression is more prevalent in AD and vascular dementia than in MCI and SMI, and about 22% of patients are prescribed antidepressants and 11% hypnotics.

Keywords: insomnia, depression, dementia, antidepressants, hypnotics

Procedia PDF Downloads 141
1788 PH.WQT as a Web Quality Model for Websites of Government Domain

Authors: Rupinder Pal Kaur, Vishal Goyal

Abstract:

In this research, a systematic and quantitative engineering-based approach is followed, applying well-known international standards and guidelines, to develop a web quality model (PH.WQT - Punjabi and Hindi Website Quality Tester) that measures the external quality of government-domain websites developed in Punjabi and Hindi. Correspondingly, the model can also be used for websites developed in other languages. The research is valuable to researchers and practitioners interested in designing, implementing, and managing websites of the government domain. Also, by implementing PH.WQT, analysis and comparisons among websites of the government domain can be performed in a consistent way.

Keywords: external quality, PH.WQT, Indian languages, Punjabi and Hindi, quality model, websites of government

Procedia PDF Downloads 274
1787 Optimization of Shear Frame Structures Applying Various Forms of Wavelet Transforms

Authors: Seyed Sadegh Naseralavi, Sohrab Nemati, Ehsan Khojastehfar, Sadegh Balaghi

Abstract:

In the present research, various formulations of the wavelet transform are applied to the acceleration time history of an earthquake. The mentioned transforms decompose the strong ground motion into low- and high-frequency parts. Since the high-frequency portion of the strong ground motion has a minor effect on the dynamic response of structures, the structure is excited by the low-frequency part only. Consequently, the seismic response of the structure is predicted in one half of the computational time required by conventional time history analysis. To reduce the computational effort needed for the seismic optimization of structures, the seismic optimization of a shear frame structure is conducted by applying various forms of the mentioned transformation within a genetic algorithm.
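
A small Python sketch of the preprocessing step described above (a single-level 'db4' decomposition and the amplitude rescaling are assumptions; the paper compares several wavelet formulations):

import numpy as np
import pywt

def low_frequency_motion(acc, dt, wavelet='db4'):
    """Single-level DWT of a ground-motion record: the approximation coefficients form a
    half-length, low-frequency record (time step 2*dt) used to excite the structure,
    roughly halving the cost of each time-history analysis inside the optimizer."""
    cA, cD = pywt.dwt(acc, wavelet)
    # Divide by sqrt(2) so the downsampled record keeps an amplitude comparable to the
    # original accelerogram (approximate rescaling for an orthonormal wavelet).
    return cA / np.sqrt(2.0), 2.0 * dt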

Keywords: time history analysis, wavelet transform, optimization, earthquake

Procedia PDF Downloads 203
1786 Blind Watermarking Using Discrete Wavelet Transform Algorithm with Patchwork

Authors: Toni Maristela C. Estabillo, Michaela V. Matienzo, Mikaela L. Sabangan, Rosette M. Tienzo, Justine L. Bahinting

Abstract:

This study is about blind watermarking of images with different categories and properties using two algorithms, namely the discrete wavelet transform and the patchwork algorithm. A program was created to perform watermark embedding, extraction, and evaluation. The evaluation is based on three watermarking criteria: image quality degradation, perceptual transparency, and security. Image quality is measured by comparing the original properties with those of the processed image. Perceptual transparency is measured by visual inspection in a survey. Security is measured by implementing geometric and non-geometric attacks through pass-or-fail testing. The values used to measure these criteria are mostly based on the mean squared error (MSE) and the peak signal-to-noise ratio (PSNR). The results are based on statistical methods used to interpret and collect data, such as averaging, the z-test, and the survey. The study concluded that the combined DWT and patchwork algorithm was less efficient and less capable of watermarking than the DWT algorithm alone.
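
A hedged sketch of how a patchwork watermark can be embedded in DWT coefficients (the sub-band choice, number of pairs, and embedding strength delta are assumptions; detection uses the classic patchwork mean-difference statistic and needs no original image, i.e., it is blind):

import numpy as np
import pywt

def embed_patchwork_dwt(image, key=0, delta=2.0, n_pairs=2000):
    """Embed a 1-bit patchwork watermark in the cH sub-band of a one-level DWT.
    Assumes the image is large enough that the sub-band has at least 2*n_pairs pixels."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), 'haar')
    rng = np.random.default_rng(key)
    idx = rng.choice(cH.size, size=2 * n_pairs, replace=False)
    a, b = idx[:n_pairs], idx[n_pairs:]
    flat = cH.ravel().copy()
    flat[a] += delta                                  # patchwork: raise set A ...
    flat[b] -= delta                                  # ... and lower set B
    return pywt.idwt2((cA, (flat.reshape(cH.shape), cV, cD)), 'haar')

def detect_patchwork_dwt(image, key=0, n_pairs=2000):
    _, (cH, _, _) = pywt.dwt2(image.astype(float), 'haar')
    rng = np.random.default_rng(key)
    idx = rng.choice(cH.size, size=2 * n_pairs, replace=False)
    a, b = idx[:n_pairs], idx[n_pairs:]
    # Mean difference is about 2*delta if the image is watermarked, about 0 otherwise
    return cH.ravel()[a].mean() - cH.ravel()[b].mean()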

Keywords: blind watermarking, discrete wavelet transform algorithm, patchwork algorithm, digital watermark

Procedia PDF Downloads 242
1785 The Bernstein Expansion for Exponentials in Taylor Functions: Approximation of Fixed Points

Authors: Tareq Hamadneh, Jochen Merker, Hassan Al-Zoubi

Abstract:

Bernstein's expansion for exponentials in Taylor functions provides lower and upper optimization values for the range of the original function. These values converge to the original function if the degree is elevated or the domain is subdivided. A Taylor polynomial can be applied so that the exponential is approximated by a polynomial of finite degree over a given domain. The Bernstein basis has two main properties: its sum equals 1, and it is positive for all x in (0, 1). In this work, we prove the existence of fixed points for exponential functions in a given domain using the Bernstein optimization values. The Bernstein basis of finite degree T over a domain D is defined non-negatively. Any polynomial p of degree t can be expanded into the Bernstein form of maximum degree t ≤ T, where we only need to compute the Bernstein coefficients in order to bound the original polynomial. The main property is that p(x) is bounded by the minimum and maximum Bernstein coefficients (the Bernstein bound). If this bound is contained in the given domain, then we say that p(x) has fixed points in the same domain.
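
An illustrative Python sketch of the enclosure test described above, under assumptions not stated in the abstract (the domain [0,1], the example function exp(x-1), and the standard monomial-to-Bernstein conversion b_i = sum over j <= i of C(i,j)/C(n,j) * a_j):

import numpy as np
from math import comb, factorial

def bernstein_coefficients(a):
    """Bernstein coefficients on [0,1] of a polynomial given by power coefficients a[j]."""
    n = len(a) - 1
    return [sum(comb(i, j) / comb(n, j) * a[j] for j in range(i + 1)) for i in range(n + 1)]

def has_fixed_point_enclosure(a):
    """Brouwer-style test: if the min/max Bernstein coefficients lie in [0,1], then the
    polynomial maps [0,1] into itself and therefore has a fixed point there."""
    b = bernstein_coefficients(a)
    return 0.0 <= min(b) and max(b) <= 1.0

# Degree-6 Taylor polynomial of f(x) = exp(x - 1): coefficients a_k = e^(-1)/k!
a = [np.exp(-1.0) / factorial(k) for k in range(7)]
print(bernstein_coefficients(a))        # enclosure of the polynomial's range on [0,1]
print(has_fixed_point_enclosure(a))     # True: the enclosure lies in [0,1]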

Keywords: Bernstein polynomials, stability of control functions, numerical optimization, Taylor function

Procedia PDF Downloads 106
1784 Visualization Tool for EEG Signal Segmentation

Authors: Sweeti, Anoop Kant Godiyal, Neha Singh, Sneh Anand, B. K. Panigrahi, Jayasree Santhosh

Abstract:

This work is about developing a tool for the visualization and segmentation of electroencephalograph (EEG) signals based on frequency-domain features. Changes in the frequency-domain characteristics are correlated with changes in the mental state of the subject under study. The proposed algorithm provides a way to represent changes in mental state using the different frequency band powers, in the form of a segmented EEG signal. Many segmentation algorithms with applications in brain-computer interfaces, epilepsy, and cognition studies have been suggested in the literature and used for data classification, but the proposed method focuses mainly on a better presentation of the signal, which is why it could be a good utilization tool for the clinician. The algorithm performs basic filtering using band-pass and notch filters in the range 0.1-45 Hz. Advanced filtering is then performed by principal component analysis and a wavelet-transform-based de-noising method. Frequency-domain features are used for segmentation, considering the fact that the spectral power of the different frequency bands describes the mental state of the subject. Two sliding windows are further used for segmentation; one provides the time scale and the other assigns the segmentation rule. The segmented data are displayed second by second, successively, with different color codes. The segment length can be selected according to the needs of the objective. The proposed algorithm has been tested on an EEG data set obtained from the University of California, San Diego's online data repository. The proposed tool gives a better visualization of the signal in the form of segmented epochs of desired length representing the power spectrum variation in the data. The algorithm is designed in such a way that it takes the data points with respect to the sampling frequency for each time frame, so it can be extended to real-time visualization with the desired epoch length.
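
A hedged Python sketch of the filtering and frequency-domain feature extraction (the band definitions, filter order, Welch settings, and the 50 Hz notch frequency are assumptions; the PCA/wavelet de-noising stage and the color-coded display are omitted):

import numpy as np
from scipy.signal import butter, filtfilt, iirnotch, welch

BANDS = {'delta': (0.5, 4), 'theta': (4, 8), 'alpha': (8, 13), 'beta': (13, 30), 'gamma': (30, 45)}

def preprocess(eeg, fs):
    """Basic filtering stage: 0.1-45 Hz band-pass plus an assumed 50 Hz mains notch."""
    b, a = butter(4, [0.1, 45.0], btype='bandpass', fs=fs)
    x = filtfilt(b, a, eeg)
    b, a = iirnotch(50.0, 30.0, fs=fs)
    return filtfilt(b, a, x)

def band_powers(epoch, fs):
    """Relative spectral power per EEG band; one feature vector per epoch for color coding."""
    f, pxx = welch(epoch, fs=fs, nperseg=min(len(epoch), int(fs)))
    total = pxx.sum()
    return {name: pxx[(f >= lo) & (f < hi)].sum() / total for name, (lo, hi) in BANDS.items()}

def segment(eeg, fs, epoch_s=1.0):
    """Split the cleaned signal into successive epochs and compute their band powers."""
    step = int(epoch_s * fs)
    return [band_powers(eeg[i:i + step], fs) for i in range(0, len(eeg) - step + 1, step)]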

Keywords: de-noising, multi-channel data, PCA, power spectra, segmentation

Procedia PDF Downloads 366
1783 Effects of Tool State on the Output Parameters of Front Milling Using Discrete Wavelet Transform

Authors: Bruno S. Soria, Mauricio R. Policena, Andre J. Souza

Abstract:

The state of the cutting tool is an important factor to consider during machining in order to achieve good surface quality. The vibration generated during material cutting can also directly affect the surface quality and the life of the cutting tool. In this work, the effect of mechanical broken failure (MBF) of carbide insert tools during face milling of AISI 304 stainless steel was evaluated using three levels of feed rate and two spindle speeds for each tool condition: three carbide inserts with perfect geometry and three carbide inserts with MBF. The axial and radial depths of cut remained constant. The cutting forces were determined through a sensory system consisting of a piezoelectric dynamometer and a data acquisition system. The discrete wavelet transform was used to separate the static part of the force and vibration signals. The roughness of the machined surface was analyzed for each machining condition. The MBF of the tool increased the intensity of the vibration and of the cutting force and worsened the roughness factors.
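
A small Python sketch of the DWT step described above (the wavelet and decomposition depth are assumptions): the deepest approximation is taken as the static (mean) part of the cutting-force signal and the remainder as the dynamic part associated with vibration.

import numpy as np
import pywt

def split_static_dynamic(force, wavelet='db6', level=6):
    """Separate a cutting-force signal into a static (low-frequency) component and a
    dynamic (vibration) component using a multilevel DWT."""
    coeffs = pywt.wavedec(force, wavelet, level=level)
    static_coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    static = pywt.waverec(static_coeffs, wavelet)[: len(force)]
    dynamic = force - static
    return static, dynamic

# The RMS of the dynamic part is one simple indicator that grows when the insert has MBF.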

Keywords: face milling, stainless steel, tool condition monitoring, discrete wavelet transform

Procedia PDF Downloads 117