Search results for: 9/7 Wavelets Error Sensitivity WES
1470 An Efficient Fall Detection Method for Elderly Care System
Authors: S. Sowmyayani, P. Arockia Jansi Rani
Abstract:
Fall detection is one of the challenging problems in elderly care systems. The objective of this paper is to identify falls in an elderly care system. An efficient fall detection method is proposed that identifies falls using a correlation factor and the Motion History Image (MHI). The proposed method is tested on the URF (University of Rzeszow Fall detection) dataset and evaluated with measures such as sensitivity, specificity, precision and classification accuracy, and it is compared with other recent methods. The experimental results show that the proposed method achieves 1.5% higher sensitivity than the other methods.
Keywords: Pearson correlation coefficient, motion history image, human shape identification.
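A minimal sketch (not from the paper) of the two ingredients named in the abstract: a motion history image built from frame differences and a Pearson correlation between consecutive frames. The decay constant, thresholds and the toy decision rule in the comments are illustrative assumptions.

```python
import numpy as np

def update_mhi(mhi, prev_frame, frame, tau=1.0, delta=0.05, diff_thresh=30):
    """Update a motion history image from two consecutive grayscale frames.
    tau is the maximum history value, delta the per-frame decay (assumed values)."""
    motion = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)) > diff_thresh
    return np.where(motion, tau, np.maximum(mhi - delta, 0.0))

def pearson_correlation(a, b):
    """Pearson correlation coefficient between two flattened images."""
    a, b = a.ravel().astype(float), b.ravel().astype(float)
    return np.corrcoef(a, b)[0, 1]

# Toy usage: a sudden drop in the correlation together with strong motion in the
# MHI would be flagged as a candidate fall (illustrative rule only, not the paper's).
prev = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
curr = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
mhi = np.zeros((240, 320))
mhi = update_mhi(mhi, prev, curr)
print(pearson_correlation(prev, curr))
```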
1469 Joint Design of MIMO Relay Networks Based on MMSE Criterion
Authors: Seungwon Choi, Seungri Jin, Ayoung Heo, Jung-Hyun Park, Dong-Jo Park
Abstract:
This paper deals with wireless relay communication systems in which multiple sources transmit information to the destination node with the help of multiple relays. We consider a signal forwarding technique based on the minimum mean-square error (MMSE) approach with multiple antennas at each relay. A source-relay-destination joint design strategy is proposed with power constraints at the destination and the source nodes. Simulation results confirm that the proposed joint design method improves the average MSE performance compared with conventional MMSE relaying schemes.
Keywords: Minimum mean square error (MMSE), multiple relay, MIMO.
1468 Integrating Blogging into Peer Assessment on College Students’ English Writing
Authors: Su-Lien Liao
Abstract:
Most college students in Taiwan do not have sufficient English proficiency to express themselves in written English. Teachers spend a lot of time correcting the errors in students’ English writing, but the results are not satisfactory. This study aims to use blogs as a teaching and learning tool for written English. Before applying peer assessment, students should be trained to be good reviewers. The teacher starts the course by posting an error analysis of the students’ first English composition on the blogs as a comment model for the students. The students then go through the process of drafting, composing, peer response and final revision on the blogs. Evaluation questionnaires and interviews will be conducted at the end of the course to assess the impact of the course and the students’ perception of it.
Keywords: Blog, Peer assessment, English writing, Error analysis.
1467 A New Image Psychovisual Coding Quality Measurement based Region of Interest
Authors: M. Nahid, A. Bajit, A. Tamtaoui, E. H. Bouyakhf
Abstract:
To model the human visual system (HVS) in the region of interest, we propose a new objective metric adapted to wavelet foveation-based image compression quality measurement, which exploits a foveation setup filter implemented in the DWT domain and based in particular on the point and region of fixation of the human eye. This model is then used to predict the visible differences between an original and a compressed image with respect to this fixation region, and it yields an adapted, local error measure by removing all peripheral errors. The technique, which we call foveation wavelet visible difference prediction (FWVDP), is demonstrated on a number of noisy images, all of which have the same local peak signal-to-noise ratio (PSNR) but visibly different errors. We show that the FWVDP reliably predicts the fixation areas of interest where error is masked, due to high image contrast, and the areas where the error is visible, due to low image contrast. The paper also suggests ways in which the FWVDP can be used to determine a visually optimal quantization strategy for foveation-based wavelet coefficients and to produce a quantitative local measure of image quality.
Keywords: Human Visual System, Image Quality, Image Compression, foveation wavelet, region of interest (ROI).
1466 Profitability Assessment of Granite Aggregate Production and the Development of a Profit Assessment Model
Authors: Melodi Mbuyi Mata, Blessing Olamide Taiwo, Afolabi Ayodele David
Abstract:
The purpose of this research is to create empirical models for assessing the profitability of granite aggregate production in Akure, Ondo State aggregate quarries. In addition, an Artificial Neural Network (ANN) model and multivariate regression models for granite profitability were developed in the study. A formal survey questionnaire was used to collect data for the study. The data extracted from the case study mine include granite marketing operations, royalty, production costs, and mine production information. The following tools were used to achieve the goal of this study: descriptive statistics, MATLAB 2017, and SPSS 16.0 software, for analyzing and modeling the data collected from granite traders in the study areas. The prediction accuracies of the ANN and multivariate regression models were compared using the coefficient of determination (R2), Root Mean Square Error (RMSE), and Mean Square Error (MSE). Owing to the regression model's higher prediction error, the model evaluation indices showed that the ANN model was the more suitable for predicting the profit generated in a typical quarry. More quarries in Nigeria's southwest region and other geopolitical zones should be considered to improve ANN prediction accuracy.
Keywords: National development, granite, profitability assessment, ANN models.
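A small sketch of the model-comparison step described above, computed with scikit-learn metrics; the arrays are placeholders, not the quarry data, and the two prediction vectors stand in for a hypothetical ANN and regression model.

```python
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error

# Placeholder observed profits and predictions from two hypothetical models.
y_true = np.array([120.5, 98.3, 143.2, 110.7, 131.9])
y_ann  = np.array([118.9, 101.2, 140.8, 112.3, 129.5])   # "ANN" predictions
y_mvr  = np.array([111.0, 108.5, 135.1, 118.9, 124.2])   # "regression" predictions

for name, y_pred in [("ANN", y_ann), ("Multivariate regression", y_mvr)]:
    mse = mean_squared_error(y_true, y_pred)
    rmse = np.sqrt(mse)
    r2 = r2_score(y_true, y_pred)
    print(f"{name}: R2={r2:.3f}, RMSE={rmse:.3f}, MSE={mse:.3f}")
```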
1465 A Robust Image Watermarking Scheme using Image Moment Normalization
Authors: Latha Parameswaran, K. Anbumani
Abstract:
Multimedia security is an area of significant concern. A number of papers on robust digital watermarking have been presented, but no standards have been defined so far, and multimedia security therefore remains an open problem. The aim of this paper is to design a robust image-watermarking scheme that can withstand a diverse set of attacks. The proposed scheme provides a robust solution integrating image moment normalization, a content-dependent watermark and the discrete wavelet transform. Moment normalization is useful for recovering the watermark even in the case of geometric attacks. Content-dependent watermarks are a powerful means of authentication, as the data is watermarked with its own features. Discrete wavelet transforms are used because they describe image features well. The proposed scheme finds its place in validating identification cards and financial instruments.
Keywords: Watermarking, moments, wavelets, content-based, benchmarking.
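A generic additive wavelet-domain embedding step, sketched with PyWavelets, to illustrate where a watermark sits after a one-level DWT; the moment normalization and the content-dependent watermark generation of the proposed scheme are not reproduced, and the embedding strength alpha is an assumed value.

```python
import numpy as np
import pywt

def embed_watermark(image, watermark_bits, alpha=5.0, wavelet="haar"):
    """Additively embed a binary watermark into the approximation band of a one-level DWT."""
    LL, (LH, HL, HH) = pywt.dwt2(image.astype(float), wavelet)
    wm = np.resize(np.where(np.asarray(watermark_bits) > 0, 1.0, -1.0), LL.shape)
    LL_marked = LL + alpha * wm
    return pywt.idwt2((LL_marked, (LH, HL, HH)), wavelet)

image = np.random.randint(0, 256, (128, 128))
bits = np.random.randint(0, 2, 64)
marked = embed_watermark(image, bits)
print("max pixel change:", np.max(np.abs(marked - image)))
```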
1464 Improved Approximation to the Derivative of a Digital Signal Using Wavelet Transforms for Crosstalk Analysis
Authors: S. P. Kozaitis, R. L. Kriner
Abstract:
The information revealed by derivatives can help to better characterize digital near-end crosstalk signatures with the ultimate goal of identifying the specific aggressor signal. Unfortunately, derivatives tend to be very sensitive to even low levels of noise. In this work we approximated the derivatives of both quiet and noisy digital signals using a wavelet-based technique. The results are presented for Gaussian digital edges, IBIS Model digital edges, and digital edges in oscilloscope data captured from an actual printed circuit board. Tradeoffs between accuracy and noise immunity are presented. The results show that the wavelet technique can produce first derivative approximations that are accurate to within 5% or better, even under noisy conditions. The wavelet technique can be used to calculate the derivative of a digital signal edge when conventional methods fail.
Keywords: Digital signals, electronics, IBIS model, printed circuit board, wavelets.
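A sketch of one common way to realize a noise-robust derivative: wavelet shrinkage of the edge signal followed by a finite difference. The wavelet, level and threshold rule are assumed choices for illustration, not the authors' exact procedure.

```python
import numpy as np
import pywt

def wavelet_denoised_derivative(x, dt=1.0, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients, reconstruct, then differentiate."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise estimate from finest scale
    thresh = sigma * np.sqrt(2.0 * np.log(len(x)))            # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    smooth = pywt.waverec(coeffs, wavelet)[: len(x)]
    return np.gradient(smooth, dt)

# Noisy sigmoid edge as a rough stand-in for a digital signal edge.
t = np.linspace(-1, 1, 512)
edge = 0.5 * (1 + np.tanh(t / 0.05)) + 0.02 * np.random.randn(t.size)
deriv = wavelet_denoised_derivative(edge, dt=t[1] - t[0])
print("peak derivative estimate:", deriv.max())
```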
1463 Stochastic Resonance in Nonlinear Signal Detection
Authors: Youguo Wang, Lenan Wu
Abstract:
Stochastic resonance (SR) is a phenomenon whereby signal transmission or signal processing through certain nonlinear systems can be improved by adding noise. This paper discusses SR in nonlinear signal detection by a simple test statistic, which can be computed from multiple noisy data in a binary decision problem based on a maximum a posteriori probability criterion. The performance of detection is assessed by the probability of detection error Per. When the input signal is a subthreshold signal, we establish that a benefit from noise can be gained for different noises, and we confirm further that subthreshold SR exists in nonlinear signal detection. The efficacy of SR is significantly improved, and the minimum of Per can approach zero dramatically as the sample number increases. These results show the robustness of SR in signal detection and extend the applicability of SR in signal processing.
Keywords: Probability of detection error, signal detection, stochastic resonance.
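A Monte Carlo sketch of the subthreshold effect described above: a binary signal below a hard threshold is detected better at an intermediate added-noise level than with no noise at all. The threshold device, decision rule and parameter values are illustrative assumptions, not the paper's test statistic.

```python
import numpy as np

rng = np.random.default_rng(0)
A, theta, n_trials = 0.5, 1.0, 200_000      # subthreshold amplitude (A < theta)

def detection_error(sigma):
    """Error probability of deciding +A when the 1-bit threshold sensor fires, -A otherwise."""
    s = rng.choice([-A, A], size=n_trials)
    y = (s + sigma * rng.standard_normal(n_trials)) > theta
    s_hat = np.where(y, A, -A)
    return np.mean(s_hat != s)

for sigma in [0.01, 0.2, 0.5, 1.0, 2.0, 5.0]:
    print(f"sigma={sigma:4.2f}  P_er={detection_error(sigma):.3f}")
# P_er starts near 0.5 (the sensor never fires), drops at moderate noise, and climbs
# back toward 0.5 as the noise dominates -- the stochastic resonance signature.
```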
1462 High-Power Amplifier Pre-distorter Based on Neural Networks for 5G Satellite Communications
Authors: Abdelhamid Louliej, Younes Jabrane
Abstract:
Satellites are becoming indispensable assets of the fifth-generation (5G) new radio architecture, complementing wireless and terrestrial communication links. The combination of satellites and the 5G architecture allows consumers to access all next-generation services anytime, anywhere, including scenarios such as traveling to remote areas without coverage. Nevertheless, this solution faces several challenges, such as a significant propagation delay, Doppler frequency shift, and a high Peak-to-Average Power Ratio (PAPR), which causes signal distortion due to the non-linear saturation of the High-Power Amplifier (HPA). To compensate for HPA non-linearity in 5G satellite transmission, an efficient pre-distorter scheme using Neural Networks (NN) is proposed. To assess the proposed NN pre-distorter, two types of HPA were investigated: the Travelling Wave Tube Amplifier (TWTA) and the Solid-State Power Amplifier (SSPA). The results show that the NN pre-distorter design provides an Error Vector Magnitude (EVM) improvement of 95.26%. The Normalized Mean Square Error (NMSE) and Adjacent Channel Power Ratio (ACPR) were reduced by 43.66 dB and 24.56 dBm, respectively. Moreover, the system suffers no degradation of the Bit Error Rate (BER) for the TWTA and SSPA amplifiers.
Keywords: Satellites, 5G, Neural Networks, High-Power Amplifier, Travelling Wave Tube Amplifier, Solid-State Power Amplifier, EVM, NMSE, ACPR.
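A toy sketch of the idea behind an NN pre-distorter: a small MLP is trained to invert the AM/AM curve of a memoryless TWTA described by the widely used Saleh model. The Saleh coefficients are standard textbook values, while the network size and training setup are assumptions; the paper's actual pre-distorter design is not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def saleh_am_am(r, alpha_a=2.1587, beta_a=1.1517):
    """AM/AM characteristic of a TWTA (Saleh model, commonly cited coefficients)."""
    return alpha_a * r / (1.0 + beta_a * r ** 2)

# Train a small MLP to map a desired output amplitude back to the input amplitude
# that produces it, i.e. an approximate inverse of the AM/AM curve.
r_in = np.linspace(0.0, 0.8, 2000)            # operate below strong saturation
r_out = saleh_am_am(r_in)
predistorter = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
predistorter.fit(r_out.reshape(-1, 1), r_in)

# Cascading pre-distorter and amplifier should give a nearly linear response.
desired = np.array([0.1, 0.3, 0.5, 0.7])
linearized = saleh_am_am(predistorter.predict(desired.reshape(-1, 1)))
print(np.round(linearized, 3), "vs desired", desired)
```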
1461 Automatic Generation Control of an Interconnected Power System with Capacitive Energy Storage
Authors: Rajesh Joseph Abraham, D. Das, Amit Patra
Abstract:
This paper is concerned with the application of small-rating Capacitive Energy Storage (CES) units for the improvement of Automatic Generation Control of a multi-unit, multi-area power system. Generation Rate Constraints are also considered in the investigations. The Integral Squared Error technique is used to obtain the optimal integral gain settings by minimizing a quadratic performance index. Simulation studies reveal that with CES units, the deviations in area frequencies and inter-area tie-power are considerably improved in terms of peak deviations and settling time as compared to those obtained without CES units.
Keywords: Automatic Generation Control, Capacitive Energy Storage, Integral Squared Error.
1460 Determination of Cd, Zn, K, pH, TNV, Organic Material and Electrical Conductivity (EC) Distribution in Agricultural Soils using Geostatistics and GIS (Case Study: South-Western of Natanz, Iran)
Authors: Abbas Hani, Seyed Ali Hoseini Abari
Abstract:
Soil chemical and physical properties play important roles in the environment, agricultural sustainability and human health. The objective of this research is to determine the spatial distribution patterns of Cd, Zn, K, pH, TNV, organic material and electrical conductivity (EC) in agricultural soils of the Natanz region in Esfehan province. In this study, geostatistical and non-geostatistical methods were used for prediction of the spatial distribution of these parameters. 64 composite soil samples were taken at 0-20 cm depth. The study area is located in the south of the Natanz agricultural lands with an area of 21660 hectares. The spatial distribution of Cd, Zn, K, pH, TNV, organic material and electrical conductivity (EC) was determined using geostatistics and a geographic information system. Results showed that the Cd, pH, TNV and K data had a normal distribution, while the Zn, OC and EC data did not. Kriging, Inverse Distance Weighting (IDW), Local Polynomial Interpolation (LPI) and Radial Basis Function (RBF) methods were used for interpolation. Trend analysis showed that organic carbon had no trend in the north-south and east-west directions, while K and TNV had second-degree trends. We used several error measures, including the mean absolute error (MAE), mean squared error (MSE) and mean biased error (MBE). Ordinary kriging (exponential model), LPI, RBF and IDW were chosen as the best methods for interpolating the soil parameters. Prediction maps by disjunctive kriging showed that the whole study area had an intensive shortage of organic matter and that more than 63.4 percent of the study area had a shortage of K.
Keywords: Electrical conductivity, Geostatistics, Geographical Information System, TNV.
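A compact sketch of inverse distance weighting and the three error measures mentioned (MAE, MSE, MBE), written with NumPy; the coordinates and values are placeholders rather than the Natanz soil data, and the leave-one-out check is just one simple way to compare interpolators.

```python
import numpy as np

def idw_interpolate(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: weights are 1/d^power to the known samples."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w @ z_known) / w.sum(axis=1)

def error_measures(z_obs, z_pred):
    e = z_pred - z_obs
    return {"MAE": np.mean(np.abs(e)), "MSE": np.mean(e ** 2), "MBE": np.mean(e)}

# Placeholder soil-sample locations (x, y in meters) and a measured property (e.g. EC).
xy = np.random.default_rng(1).uniform(0, 1000, (64, 2))
z = 2.0 + 0.002 * xy[:, 0] + np.random.default_rng(2).normal(0, 0.1, 64)

# Leave-one-out style check of the interpolator.
z_hat = np.array([idw_interpolate(np.delete(xy, i, 0), np.delete(z, i), xy[i:i + 1])[0]
                  for i in range(len(z))])
print(error_measures(z, z_hat))
```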
1459 An Adaptive Least-squares Mixed Finite Element Method for Pseudo-parabolic Integro-differential Equations
Authors: Zilong Feng, Hong Li, Yang Liu, Siriguleng He
Abstract:
In this article, an adaptive least-squares mixed finite element method is studied for pseudo-parabolic integro-differential equations. The solvability of the least-squares mixed weak formulation and of the mixed finite element scheme is proved. An a posteriori error estimator is constructed based on the least-squares functional, and a posteriori error estimates are obtained.
Keywords: Pseudo-parabolic integro-differential equation, least squares mixed finite element method, adaptive method, a posteriori error estimates.
1458 Evaluation of Haar Cascade Classifiers Designed for Face Detection
Authors: R. Padilla, C. F. F. Costa Filho, M. G. F. Costa
Abstract:
In the past years a lot of effort has been made in the field of face detection. The human face contains important features that can be used by vision-based automated systems in order to identify and recognize individuals. Face location, the primary step of vision-based automated systems, finds the face area in the input image. An accurate location of the face is still a challenging task. The Viola-Jones framework has been widely used by researchers in order to detect the location of faces and objects in a given image. Face detection classifiers are shared by public communities, such as OpenCV. An evaluation of these classifiers will help researchers to choose the best classifier for their particular need. This work focuses on the evaluation of face detection classifiers with respect to facial landmarks.
Keywords: Face datasets, face detection, facial landmarking, Haar wavelets, Viola-Jones detectors.
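For reference, a minimal example of running one of the publicly shared OpenCV Viola-Jones classifiers mentioned above; the detection parameters are common defaults rather than values from the evaluation, and the input file name is a hypothetical placeholder.

```python
import cv2

# Load a frontal-face Haar cascade shipped with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("group_photo.jpg")          # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Typical detection parameters: 10% scale steps, at least 5 neighboring hits.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                      minSize=(30, 30))
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces_detected.jpg", image)
```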
1457 Harmful Effect of Ambient Ozone on Growth and Productivity of Two Legume Crops, Vicia faba and Pisum sativum, in Riyadh City, K.S.A.
Authors: Ibrahim A. Al-Muhaisen, Mohammad N. Al Ymemeni
Abstract:
Ozone (O3) is considered one of the most phytotoxic pollutants, with deleterious effects on living and non-living components of ecosystems. It reduces the growth and yield of many crops as well as altering their physiology and quality. The present study describes a series of experiments investigating the effects of ambient O3 at locations with different ambient O3 levels, depending on proximity to pollutant sources, ranging from 17 ppb/h in the control experiment to 112 ppb/h in the industrial area. The ambient levels at the other three locations (King Saud University botanical garden, King Fahd Rd, and Almanakh Garden) were 61, 61 and 77 ppb/h, respectively. Two legume crop species (Vicia faba L. and Pisum sativum), differing in their phenology and sensitivity, were used. The results showed a significant negative effect of ozone on morphology, number of injured leaves, growth and productivity, with the degree of response depending on the plant type. Vicia faba showed sensitivity to ozone in leaf number and leaf area, with a leaf injury degree of 3, while Pisum sativum showed higher sensitivity to the gas, with an injury degree of 1, in relative growth rate and seed weight; there was no significant difference between the two plants in plant height and number of seeds.
Keywords: Ozone, Legume crops, growth and production, Resistance, Riyadh city.
1456 A Comparative Study between Discrete Wavelet Transform and Maximal Overlap Discrete Wavelet Transform for Testing Stationarity
Authors: Amel Abdoullah Ahmed Dghais, Mohd Tahir Ismail
Abstract:
In this paper the core objective is to apply discrete wavelet transform and maximal overlap discrete wavelet transform functions, namely Haar, Daubechies2, Symmlet4, Coiflet2 and the discrete approximation of the Meyer wavelets, to non-stationary financial time series data from the Dow Jones index (DJIA30) of the US stock market. The data consist of 2048 daily closing index values from December 17, 2004 to October 23, 2012. A unit root test affirms that the data are non-stationary in level. A comparison of the results of transforming the non-stationary data to stationary data using the aforesaid transforms is given, which clearly shows that decomposition of the stock market index by the discrete wavelet transform is better than by the maximal overlap discrete wavelet transform for the original data.
Keywords: Discrete wavelet transform, maximal overlap discrete wavelet transform, stationarity, autocorrelation function.
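A sketch of the kind of check described above: decompose a series with the discrete wavelet transform and apply an augmented Dickey-Fuller unit-root test to the original data and to the detail coefficients. PyWavelets has no MODWT routine, so the stationary (undecimated) wavelet transform is shown as a related stand-in; the series here is a synthetic random walk, not the DJIA data.

```python
import numpy as np
import pywt
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 2048)) + 100.0   # random-walk stand-in for an index

def adf_pvalue(series):
    return adfuller(series, autolag="AIC")[1]

# One-level DWT (Haar): detail coefficients behave like differenced data.
cA, cD = pywt.dwt(prices, "haar")
print("original p-value:  ", round(adf_pvalue(prices), 4))
print("DWT detail p-value:", round(adf_pvalue(cD), 4))

# Undecimated (stationary) wavelet transform as a stand-in for MODWT.
swt_cA, swt_cD = pywt.swt(prices, "haar", level=1)[0]
print("SWT detail p-value:", round(adf_pvalue(swt_cD), 4))
```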
1455 Process Optimization Regarding Geometrical Variation and Sensitivity Involving Dental Drill- and Implant-Guided Surgeries
Authors: T. Kero, R. Söderberg, M. Andersson, L. Lindkvist
Abstract:
Within dental-guided surgery, there has been a lack of analytical methods for optimizing the treatment of the rehabilitation concepts with regard to geometrical variation. The purpose of this study is to find the greatest contributor to geometrical variation and the greatest sensitivity contributor in a dental drill- and implant-guided surgery process, with the help of virtual variation simulation and a methodical approach. It is believed that lower geometrical variation will lead to better patient security and higher quality of dental drill- and implant-guided surgeries. It was found that the origin of the greatest contribution to variation, and hence where the focus should be set in order to minimize geometrical variation, was in the assembly category (surgery). This was also the category that was the most sensitive to geometrical variation.
Keywords: Variation Simulation, Process Optimization, Guided Surgeries, Dental Prosthesis.
1454 Improvement of Parallel Compressor Model in Dealing Outlet Unequal Pressure Distribution
Authors: Kewei Xu, Jens Friedrich, Kevin Dwinger, Wei Fan, Xijin Zhang
Abstract:
The Parallel Compressor Model (PCM) is a simplified approach to predicting compressor performance with inlet distortions. In PCM calculations, it is assumed that the sub-compressors’ outlet static pressure is uniform, which simplifies the PCM calculation procedure. However, if the compressor’s outlet duct is not long and straight, this assumption frequently induces errors ranging from 10% to 15%. This paper provides a revised calculation method of PCM that can correct this error. The revised method employs the energy equation, momentum equation and continuity equation to acquire the needed parameters and replace the equal static pressure assumption. Based on the revised method, PCM is applied to two compression systems with different blade types. Their performance in non-uniform inlet conditions is predicted with the revised calculation method and used to evaluate the method’s efficiency. Validating the results against experimental data, it is found that, although small deviations occur, the calculated results agree well with the experimental data, with errors ranging from 0.1% to 3%. This proves that the revised calculation method of PCM possesses great advantages in predicting the performance of a distorted compressor with a limited exhaust duct.
Keywords: Parallel Compressor Model (PCM), Revised Calculation Method, Inlet Distortion, Outlet Unequal Pressure Distribution.
1453 Experimental Investigation and Sensitivity Analysis for the Effects of Fracture Parameters to the Conductance Properties of Laterite
Authors: Bai Wei, Kong Ling-Wei, Guo Ai-Guo
Abstract:
This experiment investigates the effects of fracture parameters such as depth, length, width, angle and the number of fractures on the conductance properties of laterite, using the DUK-2B digital electrical measurement system combined with a method of simulating the fractures. The results show that changes in the fracture parameters affect the conductance properties of laterite. There is a clear period of decline in the conductivity of laterite as the depth, length, width, angle or number of fractures is gradually increased. When the depth of a fracture exceeds half the thickness of the soil body, the conductivity of laterite shows a clearly non-linear diminishing pattern and the amplitude of the decrease tends to grow. The length of a fracture has less effect on the conductivity than the depth. When the width of a fracture reaches certain fixed values, the conductivity becomes less sensitive to changes in width, and the conductivity of laterite stays at a stable level. When the angle of a fracture is less than 45°, the decrease in conductivity is more pronounced as the angle increases; when the angle is more than 45°, the change in conductivity is relatively gentle as the angle increases. An increasing number of fractures causes the other fracture parameters to have a greater impact on the change in conductivity. When moisture content and temperature are unchanged, the depth and angle of fractures are the major factors affecting the conductivity of laterite soil; quantity, length and width are minor influencing factors. The sensitivity with which the fracture parameters affect the conductivity of laterite soil is: depth > angle > quantity > length > width.
Keywords: Laterite, fracture parameters, conductance properties, conductivity, uniform design, sensitivity analysis.
1452 Comparison between Separable and Irreducible Goppa Code in McEliece Cryptosystem
Authors: Thuraya M. Qaradaghi, Newroz N. Abdulrazaq
Abstract:
The McEliece cryptosystem is an asymmetric type of cryptography based on error-correcting codes. The classical McEliece scheme used an irreducible binary Goppa code, which is considered unbreakable to date, especially with the parameters [1024, 524, 101]; however, it suffers from a large public key matrix, which makes it difficult to use in practice. In this work, irreducible and separable Goppa codes are introduced, with flexible parameters and dynamic error vectors, and a comparison between separable and irreducible Goppa codes in the McEliece cryptosystem is carried out. For the encryption stage, to obtain better results for the comparison, two types of test were chosen: in the first, the random message is kept constant while the parameters of the Goppa code are changed; in the second, the parameters of the Goppa code are kept constant (m=8 and t=10) while the random message is changed. The results show that the time needed to calculate the parity check matrix in the separable case is higher than that for the irreducible McEliece cryptosystem, which is expected because an extra parity check matrix for g2(z) has to be calculated in the decryption process for the separable type, while the time needed to execute the error locator in the decryption stage in the separable type is better than the time needed to calculate it in the irreducible type. The proposed implementation was done in Visual Studio using C#.
Keywords: McEliece cryptosystem, Goppa code, separable, irreducible.
1451 Performance Evaluation of ROI Extraction Models from Stationary Images
Authors: K.V. Sridhar, Varun Gunnala, K.S.R Krishna Prasad
Abstract:
In this paper three basic approaches, and different methods under each of them, for extracting regions of interest (ROI) from stationary images are explored. The results obtained for each of the proposed methods are shown, and it is demonstrated where each method outperforms the others. Two main problems in ROI extraction, the channel selection problem and the saliency reversal problem, are discussed, and it is shown how each of these is best addressed by the various methods. The basic approaches are: 1) the saliency-based approach, 2) the wavelet-based approach, and 3) the clustering-based approach. The saliency approach performs well on images containing objects of high saturation and brightness. The wavelet-based approach performs well on natural scene images that contain regions of distinct textures. The mean shift clustering approach partitions the image into regions according to the density distribution of pixel intensities. The experimental results of the various methodologies show that each technique performs at a different acceptable level for various types of images.
Keywords: clustering, ROI, saliency, wavelets.
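A short sketch of the clustering-based approach named last: mean shift applied to pixel features (position plus intensity) to partition an image into regions. The feature scaling, subsampling and bandwidth estimation are generic choices, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def mean_shift_regions(gray_image, spatial_weight=0.5, subsample=2000):
    """Cluster (row, col, intensity) features with mean shift and return a label map."""
    h, w = gray_image.shape
    rows, cols = np.mgrid[0:h, 0:w]
    features = np.column_stack([spatial_weight * rows.ravel(),
                                spatial_weight * cols.ravel(),
                                gray_image.ravel().astype(float)])
    idx = np.random.default_rng(0).choice(len(features),
                                          size=min(subsample, len(features)), replace=False)
    bandwidth = estimate_bandwidth(features[idx], quantile=0.1)
    ms = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit(features[idx])
    labels = ms.predict(features)
    return labels.reshape(h, w), ms.cluster_centers_

# Toy image: a bright square (candidate ROI) on a dark background.
img = np.zeros((64, 64))
img[20:44, 20:44] = 200.0
label_map, centers = mean_shift_regions(img)
print("regions found:", len(centers))
```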
1450 Corruption, Economic Growth, and Income Inequality: Evidence from Ten Countries in Asia
Authors: Chiung-Ju Huang
Abstract:
This study utilizes the panel vector error correction model (PVECM) to examine the relationship among corruption, economic growth, and income inequality experienced within ten Asian countries over the 1995 to 2010 period. According to the empirical results, we do not support the common perception that corruption decreases economic growth. On the contrary, we found that corruption increases economic growth. Meanwhile, an increase in economic growth will cause an increase in income inequality, although the effect is insignificant. Similarly, an increase in income inequality will cause an increase in economic growth but a decrease in corruption, although the effect is also insignificant.
Keywords: Corruption, economic growth, income inequality, panel vector error correction model.
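statsmodels does not ship a panel VECM estimator, so the sketch below fits an ordinary VECM to one country's three series as an illustration of the error-correction machinery; the data frame, its column names and the quarterly placeholder values are hypothetical, not the study's panel.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

# Hypothetical placeholder series for a single country (64 quarterly observations).
rng = np.random.default_rng(0)
n = 64
data = pd.DataFrame({
    "corruption_index": 5 + 0.1 * rng.normal(size=n).cumsum(),
    "gdp_growth":       4 + 0.3 * rng.normal(size=n).cumsum(),
    "gini":            38 + 0.2 * rng.normal(size=n).cumsum(),
})

# Choose the cointegration rank via a Johansen-type test, then fit the VECM
# with one lagged difference and a constant inside the cointegration relation.
rank = select_coint_rank(data, det_order=0, k_ar_diff=1, signif=0.05).rank
model = VECM(data, k_ar_diff=1, coint_rank=max(rank, 1), deterministic="ci")
result = model.fit()
print(result.summary())
```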
1449 Heat Stress Monitor by Using Low-Cost Temperature and Humidity Sensors
Authors: Kiattisak Batsungnoen, Thanatchai Kulworawanichpong
Abstract:
The aim of this study is to develop a cost-effective WBGT heat stress monitor that provides precise heat stress measurement. The proposed device employs the SHT15 and DS18B20 as temperature and humidity sensors, together with an ATmega328 microcontroller. The developed heat stress monitor was calibrated and adjusted against standard temperature and humidity sensors in the laboratory. The results of this study illustrate that the mean percentage error and the standard deviation of the globe temperature measurement were 2.33 and 2.71, respectively; the corresponding values were 0.94 and 1.02 for the dry bulb temperature, 0.79 and 0.48 for the wet bulb temperature, and 4.46 and 1.60 for the relative humidity sensor. The device is relatively low-cost and the measurement error is acceptable.
Keywords: Heat stress monitor, WBGT, Temperature and Humidity Sensors.
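For context, the standard WBGT combinations of the three temperatures such a monitor measures, plus the mean-percentage-error figure used in the calibration; the sample readings are made up.

```python
def wbgt_outdoor(t_nwb, t_g, t_db):
    """WBGT with solar load: 0.7*natural wet bulb + 0.2*globe + 0.1*dry bulb (ISO 7243)."""
    return 0.7 * t_nwb + 0.2 * t_g + 0.1 * t_db

def wbgt_indoor(t_nwb, t_g):
    """WBGT without solar load: 0.7*natural wet bulb + 0.3*globe."""
    return 0.7 * t_nwb + 0.3 * t_g

def mean_percentage_error(reference, measured):
    """Mean absolute percentage deviation of the device from the reference sensor."""
    return 100.0 * sum(abs(m - r) / r for r, m in zip(reference, measured)) / len(reference)

# Made-up readings (deg C): natural wet bulb, globe, dry bulb.
print(round(wbgt_outdoor(25.0, 40.0, 32.0), 2))   # 28.7
print(round(wbgt_indoor(25.0, 40.0), 2))          # 29.5
print(round(mean_percentage_error([40.0, 41.0], [39.2, 41.9]), 2))
```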
1448 Identifying New Sequence Features for Exon-Intron Discrimination by Rescaled-Range Frameshift Analysis
Authors: Sing-Wu Liou, Yin-Fu Huang
Abstract:
For identifying the discriminative sequence features between exons and introns, a new paradigm, rescaled-range frameshift analysis (RRFA), is proposed. By RRFA, two new sequence features, the frameshift sensitivity (FS) and the accumulative penta-mer complexity (APC), were discovered and further integrated into a new feature of larger scale, the persistency in anti-mutation (PAM). The feature-validation experiments were performed on six model organisms to test the power of discrimination. All the experimental results strongly support that FS, APC and PAM are distinguishing features between exons and introns. These newly identified sequence features provide new insights into the sequence composition of genes, and they have great potential to form a new basis for recognizing the exon-intron boundaries in gene sequences.
Keywords: Exon-Intron Discrimination, Rescaled-Range Frameshift Analysis, Frameshift Sensitivity, Accumulative Sequence Complexity.
1447 Selection of Rayleigh Damping Coefficients for Seismic Response Analysis of Soil Layers
Authors: Huai-Feng Wang, Meng-Lin Lou, Ru-Lin Zhang
Abstract:
One good analysis method in seismic response analysis is direct time integration, which widely adopts Rayleigh damping. An approach is presented for the selection of Rayleigh damping coefficients to be used in seismic analyses to produce a response that is consistent with the modal damping response. In the presented approach, an expression for the error of the peak response, obtained through the complete quadratic combination method, is set up in terms of the Rayleigh damping coefficients, and the coefficients are then produced by minimizing the error. Two finite element models of soil layers, excited by 28 seismic waves, were used to demonstrate the feasibility and validity of the approach.
Keywords: Rayleigh damping, modal damping, damping coefficients, seismic response analysis.
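The conventional starting point that such a selection procedure refines: with Rayleigh damping C = aM + bK, matching a target damping ratio at two chosen modal frequencies fixes the two coefficients. A small sketch with illustrative frequencies and a 5% damping ratio (assumed values).

```python
import numpy as np

def rayleigh_coefficients(omega_i, omega_j, zeta_i, zeta_j):
    """Solve zeta_k = a/(2*omega_k) + b*omega_k/2 for the Rayleigh coefficients a, b."""
    A = np.array([[1.0 / (2.0 * omega_i), omega_i / 2.0],
                  [1.0 / (2.0 * omega_j), omega_j / 2.0]])
    a, b = np.linalg.solve(A, np.array([zeta_i, zeta_j]))
    return a, b

# Illustrative soil-layer anchor frequencies (Hz -> rad/s) and 5% modal damping.
w1, w2 = 2 * np.pi * 1.5, 2 * np.pi * 8.0
a, b = rayleigh_coefficients(w1, w2, 0.05, 0.05)
print(f"a (mass-proportional) = {a:.4f}, b (stiffness-proportional) = {b:.6f}")

# Resulting damping ratio at other frequencies: below target between the anchors,
# above target outside them -- the mismatch a selection procedure tries to minimize.
for f in [1.5, 3.0, 8.0, 15.0]:
    w = 2 * np.pi * f
    print(f"f = {f:4.1f} Hz -> zeta = {a / (2 * w) + b * w / 2:.4f}")
```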
1446 Predicting Oil Content of Fresh Palm Fruit Using Transmission-Mode Ultrasonic Technique
Authors: Sutthawee Suwannarat, Thanate Khaorapapong, Mitchai Chongcheawchamnan
Abstract:
In this paper, an ultrasonic technique is proposed to predict the oil content of a fresh palm fruit. This is accomplished by measuring the attenuation based on the ultrasonic transmission mode. Several palm fruit samples with oil content determined by Soxhlet extraction (ISO 9001:2008) were tested with our ultrasonic measurement, and the amplitude attenuation data for all palm samples were collected. Feedforward Neural Networks (FNNs) are applied to predict the oil content of the samples. The Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) of the FNN model for predicting the oil content percentage are 7.6186 and 5.2287, with a correlation coefficient (R) of 0.9193.
Keywords: Non-destructive, ultrasonic testing, oil content, fresh palm fruit, neural network.
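A minimal feed-forward-network regression sketch in the spirit of the abstract, using scikit-learn rather than the authors' toolchain; the attenuation features, oil-content targets and the assumed linear relation between them are synthetic placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Synthetic stand-in: one attenuation feature roughly linear in oil content plus noise.
rng = np.random.default_rng(0)
oil_content = rng.uniform(10, 60, 200)                      # percent
attenuation = 0.8 * oil_content + rng.normal(0, 2.0, 200)   # dB, placeholder relation
X, y = attenuation.reshape(-1, 1), oil_content

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
fnn = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0).fit(X_tr, y_tr)

y_hat = fnn.predict(X_te)
rmse = np.sqrt(mean_squared_error(y_te, y_hat))
mae = mean_absolute_error(y_te, y_hat)
r = np.corrcoef(y_te, y_hat)[0, 1]
print(f"RMSE={rmse:.2f}  MAE={mae:.2f}  R={r:.3f}")
```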
1445 Structural Design Strategy of Double-Eccentric Butterfly Valve using Topology Optimization Techniques
Authors: Jun-Oh Kim, Seol-Min Yang, Seok-Heum Baek, Sangmo Kang
Abstract:
In this paper, the shape design process is briefly discussed, emphasizing the use of topology optimization in the conceptual design stage. The basic idea is to view feasible domains in terms of sensitivity region concepts. In this method, the main process consists of two steps: as the design moves further inside the feasible domain using the Taguchi method, the topology optimization becomes more successful and the sensitivity region becomes larger. In designing a double-eccentric butterfly valve, issues related to hydrodynamic performance and disc structure are discussed, where the use of topology optimization has proven to dramatically improve an existing design and significantly decrease the development time of a shape design. Computational Fluid Dynamics (CFD) analysis results demonstrate the validity of this approach.
Keywords: Double-eccentric butterfly valve, CFD, Topology optimization
1444 A C1-Conforming Finite Element Method for Nonlinear Fourth-Order Hyperbolic Equation
Authors: Yang Liu, Hong Li, Siriguleng He, Wei Gao, Zhichao Fang
Abstract:
In this paper, the C1-conforming finite element method is analyzed for a class of nonlinear fourth-order hyperbolic partial differential equations. Some a priori bounds are derived using a Lyapunov functional, and the existence, uniqueness and regularity of the weak solutions are proved. Optimal error estimates are derived for both semidiscrete and fully discrete schemes.
Keywords: Nonlinear fourth-order hyperbolic equation, Lyapunov functional, existence, uniqueness and regularity, conforming finite element method, optimal error estimates.
1443 A Medical Vulnerability Scoring System Incorporating Health and Data Sensitivity Metrics
Authors: Nadir A. Carreón, Christa Sonderer, Aakarsh Rao, Roman Lysecky
Abstract:
With the advent of complex software and increased connectivity, the security of life-critical medical devices is becoming an increasing concern, particularly given their direct impact on human safety. Security is essential, but it is impossible to develop completely secure and impenetrable systems at design time. Therefore, it is important to assess the potential impact on security and safety of exploiting a vulnerability in such critical medical systems. The Common Vulnerability Scoring System (CVSS) calculates the severity of exploitable vulnerabilities. However, for medical devices, it does not consider the unique challenges of impacts to human health and privacy. Thus, a vulnerability in a medical device on which a human life depends (e.g., pacemakers, insulin pumps) can score very low, while one in a system on which a human life does not depend (e.g., hospital archiving systems) might score very high. In this paper, we present a Medical Vulnerability Scoring System (MVSS) that extends CVSS to address the health and privacy concerns of medical devices. We propose incorporating two new parameters, namely health impact and sensitivity impact. Sensitivity refers to the type of information that can be stolen from the device, and health represents the impact on the safety of the patient if the vulnerability is exploited (e.g., potential harm, life threatening). We evaluate 15 different known vulnerabilities in medical devices and compare MVSS against two state-of-the-art medical-device-oriented vulnerability scoring systems and the foundational CVSS.
Keywords: Common vulnerability system, medical devices, medical device security, vulnerabilities.
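A purely hypothetical sketch of how health and sensitivity factors could be folded into a CVSS-like base score; the level names, weights and formula below are illustrative assumptions and are not the MVSS equations from the paper.

```python
# Hypothetical impact multipliers; the paper's actual MVSS parameters differ.
HEALTH_IMPACT = {"none": 0.0, "minor_harm": 0.3, "major_harm": 0.7, "life_threatening": 1.0}
SENSITIVITY_IMPACT = {"none": 0.0, "device_telemetry": 0.4, "personal_health_data": 1.0}

def mvss_like_score(cvss_base, health, sensitivity, weight=4.0):
    """Raise a CVSS-like base score (0-10) according to health and data-sensitivity impact."""
    boost = weight * max(HEALTH_IMPACT[health], SENSITIVITY_IMPACT[sensitivity])
    return round(min(cvss_base + boost, 10.0), 1)

# An insulin-pump flaw with a modest base score rises once patient harm is weighed;
# an archiving-system flaw with no health or data impact keeps its base score.
print(mvss_like_score(5.0, "life_threatening", "none"))   # 9.0
print(mvss_like_score(7.0, "none", "none"))               # 7.0
```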
1442 Study on the Evaluation of the Chaotic Cipher System Using the Improved Volterra Filters and the RBFN Mapping
Authors: Hirotaka Watanabe, Takaaki Kondo, Daiki Yoshida, Ariyoshi Nakayama, Taichi Sato, Shuhei Kuriyama, Hiroyuki Kamata
Abstract:
In this paper, we propose a chaotic cipher system consisting of improved Volterra filters and a mapping created from actual voice data by using a Radial Basis Function Network (RBFN). In order to achieve a practical system, the system is assumed to use a digital communication line, such as the Internet, to maintain the parameter matching between the transmitter and receiver sides. Therefore, in order to withstand attacks from outside, it is necessary to complicate the internal state and improve the coefficient sensitivity. In this paper, we validate the robustness of the proposed method from three perspectives: chaotic properties, randomness, and coefficient sensitivity.
Keywords: Chaos cipher, 16-bit-length fixed point arithmetic, Volterra filter, secret communications, RBF Network.
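For orientation, a plain second-order Volterra filter (floating-point rather than the 16-bit fixed-point arithmetic the paper works with, and without the "improvements" it proposes), which is the building block the cipher is based on; the kernel values are arbitrary examples.

```python
import numpy as np

def volterra_filter(x, h1, h2):
    """Second-order Volterra filter:
    y[n] = sum_i h1[i] x[n-i] + sum_i sum_j h2[i, j] x[n-i] x[n-j]."""
    M = len(h1)
    x_pad = np.concatenate([np.zeros(M - 1), x])
    y = np.zeros(len(x))
    for n in range(len(x)):
        window = x_pad[n:n + M][::-1]          # x[n], x[n-1], ..., x[n-M+1]
        y[n] = h1 @ window + window @ h2 @ window
    return y

# Arbitrary example kernels (memory length 3) and a short input sequence.
h1 = np.array([0.5, -0.2, 0.1])
h2 = 0.05 * np.array([[1.0, 0.2, 0.0],
                      [0.2, 0.5, 0.1],
                      [0.0, 0.1, 0.3]])
print(volterra_filter(np.array([1.0, 0.5, -0.3, 0.8]), h1, h2))
```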
1441 ROC Analysis of PVC Detection Algorithm using ECG and Vector-ECG Characteristics
Authors: J. S. Nah, A. Y. Jeon, J. H. Ro, G. R. Jeon
Abstract:
An ECG analysis method was developed using ROC analysis of a PVC detection algorithm. ECG signals from the MIT-BIH arrhythmia database were analyzed in MATLAB. First, the baseline was removed with a median filter to preprocess the ECG signal. R peaks were detected for the ECG analysis method, and the normal VCG was extracted for the VCG analysis method. Four PVC detection parameters were analyzed by ROC curves: the maximum amplitude of the QRS complex, the width of the QRS complex, the R-R interval, and the geometric mean of the VCG. To set the cut-off value of each parameter, the ROC curve was estimated from the true-positive rate (sensitivity) and the false-positive rate; the sensitivity and specificity were calculated, and the ECG was analyzed using the cut-off value estimated from the ROC curve. As a result, the PVC detection algorithm based on the VCG geometric mean showed high availability, and PVCs could be detected more accurately together with the amplitude and width of the QRS complex.
Keywords: Vectorcardiogram (VCG), Premature Ventricular Contraction (PVC), ROC (receiver operating characteristic) curve, ECG.
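A generic sketch of the cut-off selection step: compute an ROC curve for one detection parameter and pick the threshold maximizing Youden's J (sensitivity + specificity - 1). The scores and labels are synthetic, not MIT-BIH beats, and Youden's J is one common choice rather than necessarily the paper's criterion.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Synthetic detection parameter (e.g. QRS width): PVC beats tend to score higher.
labels = np.concatenate([np.zeros(300, dtype=int), np.ones(60, dtype=int)])   # 1 = PVC
scores = np.concatenate([rng.normal(0.10, 0.02, 300), rng.normal(0.16, 0.03, 60)])

fpr, tpr, thresholds = roc_curve(labels, scores)
youden_j = tpr - fpr
best = np.argmax(youden_j)

print(f"AUC = {roc_auc_score(labels, scores):.3f}")
print(f"cut-off = {thresholds[best]:.3f}, sensitivity = {tpr[best]:.3f}, "
      f"specificity = {1 - fpr[best]:.3f}")
```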