Search results for: Signal Classification
1149 An Optimal Subclass Detection Method for Credit Scoring
Authors: Luciano Nieddu, Giuseppe Manfredi, Salvatore D'Acunto, Katia La Regina
Abstract:
In this paper, a non-parametric statistical pattern recognition algorithm for the problem of credit scoring is presented. The proposed algorithm is based on a k-means clustering algorithm and allows for the determination of subclasses of homogeneous elements in the data. The algorithm is tested on two benchmark datasets and its performance compared with other well-known pattern recognition algorithms for credit scoring.
Keywords: Constrained clustering, Credit scoring, Statistical pattern recognition, Supervised classification.
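A minimal sketch of the subclass idea described above, assuming k-means is run separately within each labelled class and new applicants are assigned to the class of the nearest subclass centroid; the subclass count is a placeholder, not the authors' setting.

```python
# Minimal sketch: class-wise subclass detection with k-means, followed by
# nearest-subclass-centroid classification. The subclass count is a placeholder.
import numpy as np
from sklearn.cluster import KMeans

def fit_subclass_centroids(X, y, n_subclasses=3):
    """Cluster each labelled class separately; keep (centroid, class) pairs."""
    centroids, centroid_labels = [], []
    for c in np.unique(y):
        km = KMeans(n_clusters=n_subclasses, n_init=10).fit(X[y == c])
        centroids.append(km.cluster_centers_)
        centroid_labels.extend([c] * n_subclasses)
    return np.vstack(centroids), np.array(centroid_labels)

def predict(X, centroids, centroid_labels):
    """Assign each sample the class of its nearest subclass centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return centroid_labels[np.argmin(d, axis=1)]
```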
1148 A Supply Chain Perspective of RFID Systems
Authors: A. N. Nambiar
Abstract:
Radio Frequency Identification (RFID), initially introduced during World War II, has revolutionized the world with its numerous benefits and plethora of implementations in diverse areas ranging from manufacturing to agriculture to healthcare to hotel management. This work reviews the current research in this area with emphasis on applications for supply chain management, and develops a taxonomic framework to classify the literature, which will enable swift and easy content analysis and also help identify areas for future research.
Keywords: RFID, supply chain, applications, classification framework.
1147 Advanced Technologies and Algorithms for Efficient Portfolio Selection
Authors: Konstantinos Liagkouras, Konstantinos Metaxiotis
Abstract:
In this paper we present a classification of the various technologies applied to the solution of the portfolio selection problem, according to the discipline and the methodological framework followed. We provide a concise presentation of the emerged categories and try to identify which methods are considered obsolete and which lie at the heart of the debate. On top of that, we provide a comparative study of the different technologies applied for efficient portfolio construction and suggest potential paths for future work that lie at the intersection of the presented techniques.
Keywords: Portfolio selection, optimization techniques, financial models, stochastics, heuristics.
1146 Analysis of Sonographic Images of Breast
Authors: M. Bastanfard, S. Jafari, B. Jalaeian
Abstract:
Ultrasound images are a very useful diagnostic tool to distinguish benign from malignant masses of the breast. However, there is considerable overlap between benignity and malignancy in ultrasonic images, which makes interpretation difficult. In this paper, a new noise removal algorithm was used to improve the images and the classification process. The masses are classified using wavelet transform coefficients, morphological and textural features as a novel feature set for this goal. Bayesian estimation theory is used to classify the tissues into three classes according to their features.
Keywords: Bayesian estimation theory, breast, ultrasound, wavelet.
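As a rough sketch of such a pipeline (wavelet-coefficient features plus a Bayesian classifier), the snippet below uses 2-D wavelet sub-band statistics and a Gaussian naive Bayes model; the wavelet choice, feature set and three-class labelling are assumptions, not the authors' exact design.

```python
# Rough sketch: 2-D wavelet sub-band statistics of an ultrasound ROI fed to a
# Gaussian naive Bayes classifier. Wavelet, features and labels are assumptions.
import numpy as np
import pywt
from sklearn.naive_bayes import GaussianNB

def wavelet_features(roi):
    """Mean and standard deviation of each wavelet sub-band of a grayscale ROI."""
    coeffs = pywt.wavedec2(roi, "db2", level=2)
    feats = [np.mean(coeffs[0]), np.std(coeffs[0])]
    for cH, cV, cD in coeffs[1:]:
        for band in (cH, cV, cD):
            feats += [np.mean(np.abs(band)), np.std(band)]
    return np.array(feats)

def train_tissue_classifier(rois, labels):
    """rois: list of 2-D arrays; labels: e.g. 0 = benign, 1 = malignant, 2 = other."""
    X = np.vstack([wavelet_features(r) for r in rois])
    return GaussianNB().fit(X, labels)
```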
1145 Improvement of the Q-System Using the Rock Engineering System: A Case Study of Water Conveyor Tunnel of Azad Dam
Authors: S. Golmohammadi, M. Noorian Bidgoli
Abstract:
Because the status and mechanical parameters of discontinuities in the rock mass are included in the calculations, various methods of rock engineering classification are often used as a starting point for the design of different types of structures. The Q-system is one of the most frequently used methods for stability analysis and determination of support systems of underground structures in rock, including tunnels. In this method, six main parameters of the rock mass are required, namely the Rock Quality Designation (RQD), joint set number (Jn), joint roughness number (Jr), joint alteration number (Ja), joint water parameter (Jw) and Stress Reduction Factor (SRF). In this regard, in order to achieve a reasonable and optimal design, identifying the parameters that govern the stability of such structures is one of the most important goals and most necessary actions in rock engineering. It is therefore necessary to study the relationships between the parameters of a system, how they interact with each other and, ultimately, how they affect the system as a whole. In this research, an attempt has been made to determine the most effective (key) parameters among the six rock mass parameters of the Q-system, using the Rock Engineering System (RES) method, in order to improve the relationships between the parameters in the calculation of the Q value. The RES is a method by which the degree of cause and effect of a system's parameters can be determined by constructing an interaction matrix. In this research, geomechanical data collected from the water conveyor tunnel of Azad Dam were used to build the interaction matrix of the Q-system. For this purpose, instead of using the conventional coding methods, which are always accompanied by defects such as uncertainty, the Q-system interaction matrix is coded using a technique that is, in effect, a statistical analysis of the data, determining the correlation coefficients between the parameters, so that the effect of each parameter on the system is evaluated with greater certainty. The results of this study show that the resulting interaction matrix provides a reasonable estimate of the effective parameters in the Q-system. Among the six parameters, SRF and Jr have the maximum and minimum influence on the system (cause), respectively, while RQD and Jw are the most and least influenced by the system (effect), respectively. Therefore, by developing this method, a more accurate rock mass classification relation can be obtained by weighting the required parameters of the Q-system.
Keywords: Q-system, Rock Engineering System, statistical analysis, rock mass, tunnel.
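A minimal sketch of the statistical coding idea, assuming the interaction matrix is filled with absolute correlation coefficients between the six Q-system parameters and that cause and effect values are taken as the usual RES row and column sums; this is an interpretation of the abstract, not the authors' exact procedure.

```python
# Sketch: correlation-based coding of an RES interaction matrix for the six
# Q-system parameters, with cause (row sum) and effect (column sum) values.
import numpy as np

PARAMS = ["RQD", "Jn", "Jr", "Ja", "Jw", "SRF"]

def res_interaction_matrix(data):
    """data: (n_samples, 6) array of Q-system parameters measured along the tunnel."""
    corr = np.abs(np.corrcoef(data, rowvar=False))
    np.fill_diagonal(corr, 0.0)            # a parameter does not interact with itself
    cause = corr.sum(axis=1)               # influence of each parameter on the system
    effect = corr.sum(axis=0)              # influence of the system on each parameter
    return corr, dict(zip(PARAMS, cause)), dict(zip(PARAMS, effect))
```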
1144 Research on Software Security Testing
Authors: Gu Tian-yang, Shi Yin-sheng, Fang You-yuan
Abstract:
Software security testing is an important means of ensuring software security and trustworthiness. This paper first discusses the definition and classification of software security testing and surveys the methods and tools of software security testing. It then analyzes the advantages, disadvantages and scope of application of the various methods, and presents a taxonomy of security testing tools. Finally, the paper points out the future focus and development directions of software security testing technology.
Keywords: security testing, security functional testing, security vulnerability testing, testing method, testing tool
1143 Rapid Monitoring of Earthquake Damages Using Optical and SAR Data
Authors: Saeid Gharechelou, Ryutaro Tateishi
Abstract:
An earthquake is an inevitable catastrophic natural disaster. Damage to buildings and other man-made structures, where most human activities take place, is the major cause of earthquake casualties. A comparison of optical and SAR data is presented for the case of the Kathmandu valley, which was severely shaken by the 2015 Nepal earthquake. Although many existing studies are based on optical data, or suggest the combined use of optical and SAR data for improved accuracy, cloud-free optical images are not assured when they are urgently needed. Therefore, this research focuses on developing a SAR-based technique targeted at rapid and accurate geospatial reporting. Considering the limited time available in a post-disaster situation, it offers quick computation based exclusively on two pairs of pre-seismic and co-seismic single look complex (SLC) images. Pre-seismic, co-seismic and post-seismic InSAR coherence was used to detect changes in the damaged area. In addition, ground truth data from the field were applied to the optical data through random forest classification for detection of the damaged area, and were also used to assess the accuracy of the supervised classification approach. A higher accuracy was obtained from the optical data than from the integrated optical-SAR data. Since cloud-free images cannot be assured when urgently needed after an earthquake event, further research on improving SAR-based damage detection is suggested. It is expected that quick reporting of the post-disaster damage situation, quantified by the rapid earthquake assessment, will assist in channelling the rescue and emergency operations and in informing the public about the scale of damage.
Keywords: Sentinel-1A data, Landsat-8, earthquake damage, InSAR, rapid monitoring, 2015-Nepal earthquake.
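A simplified sketch of coherence-based damage mapping of the kind described above; co-registration, filtering and geocoding are omitted, and the thresholds are placeholders rather than values from the study.

```python
# Simplified sketch: flag likely damage where interferometric coherence drops
# sharply between the pre-seismic and co-seismic pairs. Thresholds are assumptions.
import numpy as np

def coherence_change_map(coh_pre, coh_co, drop_threshold=0.3, min_pre=0.5):
    """coh_pre, coh_co: 2-D coherence images in [0, 1] on the same grid."""
    drop = coh_pre - coh_co
    # Consider only pixels that were coherent before the event (e.g. buildings)
    candidate = coh_pre >= min_pre
    return candidate & (drop >= drop_threshold)   # boolean damage-proxy mask
```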
1142 Clustered Signatures for Modeling and Recognizing 3D Rigid Objects
Authors: H. B. Darbandi, M. R. Ito, J. Little
Abstract:
This paper describes a probabilistic method for three-dimensional object recognition using a shared pool of surface signatures. The technique uses flatness, orientation, and convexity signatures that encode the surface of a free-form object into three discriminative vectors, and then creates a shared pool of data by clustering the signatures using a distance function. The method applies Bayes' rule for the recognition process, and it is extensible to a large collection of three-dimensional objects.
Keywords: Object recognition, modeling, classification, computer vision.
1141 Person Identification using Gait by Combined Features of Width and Shape of the Binary Silhouette
Authors: M. K. Bhuyan, Aragala Jagan
Abstract:
Current image-based individual human recognition methods, such as fingerprint, face, or iris biometric modalities, generally require a cooperative subject, views from certain aspects, and physical contact or close proximity. These methods cannot reliably recognize non-cooperating individuals at a distance in the real world under changing environmental conditions. Gait, which concerns recognizing individuals by the way they walk, is a relatively new biometric without these disadvantages. The inherent gait characteristic of an individual makes it irreplaceable and useful in visual surveillance. In this paper, an efficient gait recognition system for human identification is proposed, based on two extracted features: the width vector of the binary silhouette and the MPEG-7 region-based shape descriptors. In the proposed method, foreground objects, i.e., humans and other moving objects, are extracted by estimating background information with a Gaussian Mixture Model (GMM); subsequently, a median filtering operation is performed to remove noise from the background-subtracted image. A moving target classification algorithm is used to separate human beings (i.e., pedestrians) from other foreground objects (viz., vehicles); shape and boundary information is used in this classification algorithm. Subsequently, the width vector of the outer contour of the binary silhouette and the MPEG-7 Angular Radial Transform coefficients are taken as the feature vector. Next, Principal Component Analysis (PCA) is applied to the selected feature vector to reduce its dimensionality. These extracted feature vectors are used to train a Hidden Markov Model (HMM) for the identification of individuals. The proposed system is evaluated using several gait sequences, and the experimental results show the efficacy of the proposed algorithm.
Keywords: Gait Recognition, Gaussian Mixture Model, Principal Component Analysis, MPEG-7 Angular Radial Transform.
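A condensed sketch of the front end of such a pipeline, using OpenCV's Gaussian-mixture background subtractor, median filtering and PCA on per-frame width vectors; the parameter values are assumptions, and the ART descriptors and HMM stages are omitted.

```python
# Condensed sketch: GMM background subtraction and median filtering to obtain
# binary silhouettes, then PCA on per-frame width-vector features.
import cv2
import numpy as np
from sklearn.decomposition import PCA

def silhouette_width_vectors(frames, size=128):
    """Return one width vector (row-wise silhouette width) per video frame."""
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    vectors = []
    for frame in frames:
        mask = subtractor.apply(frame)
        mask = cv2.medianBlur(mask, 5)                 # suppress salt-and-pepper noise
        mask = cv2.resize(mask, (size, size)) > 0
        vectors.append(mask.sum(axis=1))               # silhouette width per row
    return np.array(vectors, dtype=float)

def reduce_features(width_vectors, n_components=20):
    """Project width vectors onto their first principal components."""
    return PCA(n_components=n_components).fit_transform(width_vectors)
```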
1140 Balancing Tourism and Environment: The ETM Model
Authors: U. V. Jose, Muhammed Nahar, Vijayakumar S., Sonia Jose
Abstract:
The environment, both natural and built, is essential for tourism. However, tourism and the environment maintain a complex relationship, in which the environment is, in most cases, at the receiving end. Many tourism development activities have adverse environmental effects, mainly emanating from the construction of general infrastructure and tourism facilities. These negative impacts of tourism can lead to the destruction of the precious natural resources on which it depends. The effects vary between locations, and their impact on a hill destination is highly critical. This study aims at developing a sustainable tourism planning model for an environmentally sensitive tourism destination in Kerala, India. Being part of the Nilgiri mountain ranges, Munnar falls in the Western Ghats, one of the biodiversity hotspots of the world. Endowed with a unique high-altitude environment, Munnar inherits highly significant ecological wealth. Giving prime importance to the protection of this ecological heritage, the study proposes a tourism planning model with resource conservation and sustainability as the paramount focus. Conceiving a novel approach towards sustainable tourism planning, the study proposes to assess tourism attractions using an Ecological Sensitivity Index (ESI) and a Tourism Attractiveness Index (TAI). Integration of these two indices forms the Ecology-Tourism Matrix (ETM), outlining the base for tourism planning in an environmentally sensitive destination. The ETM matrix leads to a classification of tourism nodes according to their conservation significance and tourism significance. The spatial integration of such nodes based on the hub-and-spoke principle constitutes sub-regions within the STZ. The ensuing analyses lead to specific guidelines for the STZ as a whole, for specific tourism nodes, and for hubs and sub-regions. The study results in a multi-dimensional output, viz., (1) a classification system for tourism nodes in an environmentally sensitive region or destination, (2) conservation and tourism development strategies and guidelines for the micro and macro regions, and (3) a sustainable tourism planning tool, particularly for ecologically sensitive destinations, which can be adapted for other destinations as well.
Keywords: Tourism, Environment, Spatial Planning, Model.
1139 Compression and Filtering of Random Signals under Constraint of Variable Memory
Authors: Anatoli Torokhti, Stan Miklavcic
Abstract:
We study a new technique for optimal data compression subject to conditions of causality and different types of memory. The technique is based on the assumption that some information about the compressed data can be obtained from a solution of the associated problem without the constraints of causality and memory. This allows us to consider two separate problems related to compression and decompression subject to those constraints. Their solutions are given and an analysis of the associated errors is provided.
Keywords: stochastic signals, optimization problems in signal processing.
1138 CdS Quantum Dots as Fluorescent Probes for Detection of Naphthalene
Authors: Zhengyu Yan, Yan Yu, Jianqiu Chen
Abstract:
A novel sensing system has been designed for naphthalene detection based on the quenched fluorescence signal of CdS quantum dots. The fluorescence intensity of the system was reduced significantly after adding the CdS quantum dots to the water pollution model, because of the static fluorescence quenching mechanism. Herein, we have demonstrated that this facile methodology can offer convenient and low-cost analysis, with a recovery rate of 97.43%-103.2%, and that it has promising potential for practical application.
Keywords: CdS quantum dots, modification, detection, naphthalene.
1137 Solving of the Fourth Order Differential Equations with the Neumann Problem
Authors: Marziyeh Halimi, Roushanak Lotfikar, Simin Mansouri Borojeni
Abstract:
In this paper we consider the Neumann problem for a fourth order differential equation. First we define the weighted Sobolev space $W^2_\alpha$ and the generalized solution for this equation. Then we consider the existence and uniqueness of the generalized solution, and give a description of the spectrum and of the domain of definition of the corresponding operator.
Keywords: Neumann problem, weighted Sobolev spaces, generalized solution, spectrum of linear operators.
2000 Mathematics Subject Classification: 34A05, 34A30.
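For orientation, a representative problem of this type can be written as below; the exact equation, weight and boundary operators treated in the paper may differ from this illustrative form.

```latex
% Representative fourth-order Neumann-type problem (illustrative form only;
% the specific equation and weight studied in the paper may differ):
\begin{align*}
  & u^{(4)}(x) + q(x)\,u(x) = f(x), \qquad x \in (0,1),\\
  & u'(0) = u'''(0) = 0, \qquad u'(1) = u'''(1) = 0,
\end{align*}
% with generalized solutions sought in the weighted Sobolev space W^2_\alpha,
% normed, for a weight x^\alpha, by
\[
  \|u\|_{W^2_\alpha}^2 = \int_0^1 x^{\alpha}\bigl(|u''(x)|^2 + |u(x)|^2\bigr)\,dx .
\]
```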
1136 Fault-Tolerant Control Study and Classification: Case Study of a Hydraulic-Press Model Simulated in Real-Time
Authors: Jorge Rodriguez-Guerra, Carlos Calleja, Aron Pujana, Iker Elorza, Ana Maria Macarulla
Abstract:
Society demands more reliable manufacturing processes capable of producing high quality products in shorter production cycles. New control algorithms have been studied to satisfy this paradigm, in which Fault-Tolerant Control (FTC) plays a significant role: it is able to detect, isolate and adapt a system when a harmful or faulty situation appears. In this paper, a general overview of FTC characteristics is given, highlighting the properties a system must ensure to be considered fault-tolerant. In addition, the main FTC techniques are identified and classified, on the basis of their characteristics, into two main groups: Active Fault-Tolerant Controllers (AFTCs) and Passive Fault-Tolerant Controllers (PFTCs). AFTC encompasses the techniques capable of re-configuring the process control algorithm after the fault has been detected, while PFTC comprises the algorithms robust enough to bypass the fault without further modifications. The mentioned re-configuration requires two stages, one focused on the detection, isolation and identification of the fault source and the other in charge of re-designing the control algorithm by one of two approaches: fault accommodation or control re-design. From the algorithms studied, one has been selected and applied to a case study based on an industrial hydraulic press. The developed model has been embedded in a real-time validation platform, which allows testing the FTC algorithms and analysing how the system responds when a fault arises, in conditions similar to those a machine would experience on the factory floor. One AFTC approach has been selected as the methodology the system follows in the fault recovery process. In a first instance, the fault is detected, isolated and identified by means of a neural network. In a second instance, the control algorithm is re-configured to overcome the fault and continue working without human interaction.
Keywords: Fault-tolerant control, electro-hydraulic actuator, fault detection and isolation, control re-design, real-time.
1135 Taguchi Robust Design for Optimal Setting of Process Wastes Parameters in an Automotive Parts Manufacturing Company
Authors: Charles Chikwendu Okpala, Christopher Chukwutoo Ihueze
Abstract:
As a technique that reduces variation in a product by lessening the sensitivity of the design to sources of variation, rather than by controlling those sources, Taguchi Robust Design entails designing ideal goods by developing a product that has minimal variance in its characteristics and also meets the desired performance exactly. This paper examines the concept of this manufacturing approach and its application to the brake pad product of an automotive parts manufacturing company. Although the firm claimed that defects, excess inventory, and over-production were the only wastes that grossly affect its productivity and profitability, a careful study and analysis of its manufacturing processes, with the application of the Single Minute Exchange of Dies (SMED) tool, showed that the waste of waiting is a fourth waste that bedevils the firm. The selection of the Taguchi L9 orthogonal array, which is based on the four parameters and three levels of variation for each parameter, revealed that, with a range of 2.17, waiting is the major waste that the company must reduce in order to remain viable. Also, to enhance the company's throughput and profitability, the wastes of over-production, excess inventory, and defects, with ranges of 2.01, 1.46, and 0.82, ranking second, third, and fourth respectively, must also be reduced to the barest minimum. After proposing -33.84 as the highest optimum signal-to-noise ratio to be maintained for the waste of waiting, the paper advocates the adoption of all the tools and techniques of the Lean Production System (LPS) and Continuous Improvement (CI), and concludes by recommending SMED in order to drastically reduce set-up time, which otherwise leads to unnecessary waiting.
Keywords: Taguchi Robust Design, signal to noise ratio, Single Minute Exchange of Dies, lean production system, waste.
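For reference, the smaller-the-better signal-to-noise ratio commonly used in Taguchi analysis, S/N = -10 log10(mean(y^2)), can be computed per experimental run as sketched below; the observations shown are placeholders, not the company's data.

```python
# Sketch: Taguchi smaller-the-better signal-to-noise ratio,
# S/N = -10 * log10( mean(y_i^2) ), computed for one experimental run.
import numpy as np

def sn_smaller_the_better(observations):
    y = np.asarray(observations, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Placeholder waiting-time observations for one L9 run
print(round(sn_smaller_the_better([8.0, 10.0, 12.0]), 2))   # -> about -20.11 dB
```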
1134 Classifying Turbomachinery Blade Mode Shapes Using Artificial Neural Networks
Authors: Ismail Abubakar, Hamid Mehrabi, Reg Morton
Abstract:
Currently, extensive signal analysis is performed in order to evaluate the structural health of turbomachinery blades. This approach is constrained by time and by the availability of qualified personnel. Thus, new approaches to blade dynamics identification that provide faster and more accurate results are sought after. Generally, modal analysis is employed in acquiring the dynamic properties of a vibrating turbomachinery blade and is widely adopted in condition monitoring of blades. The analysis provides useful information on the different modes of vibration and the natural frequencies by exploring the different shapes that can be taken up during vibration, since all mode shapes have their corresponding natural frequencies. Experimental modal testing and finite element analysis are the traditional methods used to evaluate mode shapes, but they have limited applicability to real-life scenarios and so cannot easily support a robust condition monitoring scheme. For real-time mode shape evaluation, rapid evaluation and low computational cost are required, and the traditional techniques are unsuitable. In this study, an artificial neural network is developed to evaluate the mode shape of a lab-scale rotating blade assembly by using results from finite element modal analysis as training data. The network performance evaluation shows that the artificial neural network (ANN) is capable of mapping the correlation between natural frequencies and mode shapes. This is achieved without the need for extensive signal analysis. The approach offers the advantages that the network is able to classify mode shapes and can be employed in real time, together with simplicity of implementation and accuracy of prediction. The work paves the way for further development of a robust condition monitoring system that incorporates real-time mode shape evaluation.
Keywords: Modal analysis, artificial neural network, mode shape, natural frequencies, pattern recognition.
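A small sketch of the mapping described above, assuming natural-frequency vectors from finite element modal analysis as inputs and mode-shape class labels as outputs; the network size and data split are placeholders.

```python
# Sketch: classify blade mode shapes from natural-frequency features with a
# small multilayer perceptron trained on finite-element modal results.
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def train_mode_shape_classifier(freq_features, mode_labels):
    """freq_features: (n_samples, n_modes) natural frequencies; mode_labels: class ids."""
    X_train, X_test, y_train, y_test = train_test_split(
        freq_features, mode_labels, test_size=0.2, random_state=0)
    net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
    net.fit(X_train, y_train)
    print("hold-out accuracy:", net.score(X_test, y_test))
    return net
```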
1133 Temperature-Based Detection of Initial Yielding Point in Loading of Tensile Specimens Made of Structural Steel
Authors: Aqsa Jamil, Hiroshi Tamura, Hiroshi Katsuchi, Jiaqi Wang
Abstract:
The yield point represents the upper limit of the forces which can be applied to a specimen without causing any permanent deformation. After yielding, the behavior of the specimen suddenly changes, including the possibility of cracking or buckling, so the accumulation of damage or the type of fracture changes depending on this condition. As it is difficult to accurately detect the yield points of the several stress concentration points in structural steel specimens, an effort has been made in this research work to develop a convenient technique using thermography (temperature-based detection) during tensile tests for the precise detection of yield point initiation. To verify the applicability of the thermography camera, tests were conducted under different loading conditions, measuring the deformation with various strain gauges and monitoring the surface temperature with the help of a thermography camera. The yield point of the specimens was estimated with the help of the temperature dip which occurs, due to the thermoelastic effect, at the onset of plastic deformation. The scattering of the data was checked by performing a repeatability analysis. The effects of temperature imperfection and of the light source were checked by carrying out the tests in the daytime as well as at midnight; by calculating the signal-to-noise ratio (SNR) of the noisy data from the infrared thermography camera, it can be concluded that the camera is independent of the testing time and of the presence of a visible light source. Furthermore, a fully coupled thermal-stress analysis was performed using the Abaqus/Standard exact implementation technique to validate the temperature profiles obtained from the thermography camera and to check the feasibility of numerical simulation for the prediction of the results extracted with the help of the thermographic technique.
Keywords: Signal to noise ratio, thermoelastic effect, thermography, yield point.
1132 Analysing and Classifying VLF Transients
Authors: Ernst D. Schmitter
Abstract:
Monitoring lightning electromagnetic pulses (sferics) and other terrestrial as well as extraterrestrial transient radiation signals is of considerable interest for practical and theoretical purposes in astro- and geophysics as well as meteorology. To manage a continuous flow of data, automation of the analysis and classification process is important. Features based on a combination of wavelet and statistical methods proved efficient for this task and serve as input to a radial basis function network that is trained to discriminate transient shapes ranging from pulse-like to wave-like. We concentrate on signals in the Very Low Frequency (VLF, 3-30 kHz) range in this paper, but the developed methods are independent of this specific choice.
Keywords: Transient signals, statistics, wavelets, neural networks
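A minimal sketch of this kind of classifier, combining wavelet-energy and simple statistical features with a small radial basis function network; the feature choices, class labels and network sizes are assumptions, not the author's settings.

```python
# Minimal sketch: wavelet + statistical features of a transient feeding a small
# radial basis function (RBF) network (k-means centres, ridge read-out).
import numpy as np
import pywt
from sklearn.cluster import KMeans

def features(sig):
    """Wavelet energies per level plus simple statistics of one transient."""
    coeffs = pywt.wavedec(sig, "db4", level=4)
    energies = [np.sum(c ** 2) for c in coeffs]
    stats = [np.mean(sig), np.std(sig), np.max(np.abs(sig))]
    return np.array(energies + stats)

class RBFNet:
    """Minimal RBF network with Gaussian units and one-hot ridge regression output."""
    def __init__(self, n_centres=10, width=1.0, ridge=1e-3):
        self.n_centres, self.width, self.ridge = n_centres, width, ridge

    def _phi(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centres[None, :, :], axis=2)
        return np.exp(-(d ** 2) / (2 * self.width ** 2))

    def fit(self, X, y):
        """y: integer class labels starting at 0 (e.g. 0 = pulse-like, 1 = wave-like)."""
        self.centres = KMeans(self.n_centres, n_init=10).fit(X).cluster_centers_
        Phi = self._phi(X)
        Y = np.eye(int(y.max()) + 1)[y]                 # one-hot targets
        A = Phi.T @ Phi + self.ridge * np.eye(Phi.shape[1])
        self.W = np.linalg.solve(A, Phi.T @ Y)          # ridge-regression read-out
        return self

    def predict(self, X):
        return np.argmax(self._phi(X) @ self.W, axis=1)
```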
1131 Comparison of Performance between Different SVM Kernels for the Identification of Adult Video
Authors: Hajar Bouirouga, Sanaa El Fkihi, Abdeilah Jilbab, Driss Aboutajdine
Abstract:
In this paper we propose a method for the recognition of adult video based on support vector machines (SVM). Different kernel functions are evaluated for classifying adult videos. SVM has the advantage that it is insensitive to the relative number of training examples in the positive (adult video) and negative (non-adult video) classes. This advantage is illustrated by comparing the performance of different SVM kernels for the identification of adult video.
Keywords: Skin detection, Support vector machine, Pornographic videos, Feature extraction, Video filtering, Classification.
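A brief sketch of such a kernel comparison, assuming pre-extracted feature vectors (for example skin-detection statistics per video) with labels 1 for adult and 0 for non-adult content; the kernels and settings below are generic, not the authors' configuration.

```python
# Brief sketch: compare SVM kernels on pre-extracted video feature vectors
# (1 = adult, 0 = non-adult) using cross-validated accuracy.
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def compare_kernels(X, y):
    for kernel in ("linear", "poly", "rbf", "sigmoid"):
        clf = SVC(kernel=kernel, C=1.0, gamma="scale")
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"{kernel:8s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```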
1130 Javanese Character Recognition Using Hidden Markov Model
Authors: Anastasia Rita Widiarti, Phalita Nari Wastu
Abstract:
The Hidden Markov Model (HMM) is a stochastic method which has been used in various signal processing and character recognition applications. This study proposes to use HMMs to recognize Javanese characters from a number of different handwritings, whereby the HMM is used to optimize the number of states and the feature extraction. An accuracy of 85.7% is obtained as the best result, with a 16-state vertical model using a pure HMM. This initial result is satisfactory and prompts further research.
Keywords: Character recognition, off-line handwriting recognition, Hidden Markov Model.
1129 Semantic Web as an Enabling Technology for Better e-Services Adoption
Authors: Luka Pavlič, Marjan Heričko
Abstract:
E-services have significantly changed the way of doing business in recent years. We can, however, observe poor use of these services: there is a large gap between supply and actual e-services usage. This is why we started a project to provide an environment that will encourage the use of e-services. We believe that merely providing an e-service does not automatically mean consumers will use it. This paper shows the origins of our project and its current position. We discuss the decision to use semantic web technologies and their potential to improve e-services usage. We also present the current knowledge base and its real-world classification. In the paper, we discuss further work to be done in the project. The current state of the project is promising.
Keywords: E-Services, E-Services Repository, Ontologies, Semantic Web
1128 Novel Linear Autozeroing Floating-gate Amplifier for Ultra Low-voltage Applications
Authors: Yngvar Berg, Mehdi Azadmehr
Abstract:
In this paper we present a linear autozeroing ultra low-voltage amplifier. The autozeroing performed by all ULV circuits is important to reduce the impact of noise, and especially to avoid power supply noise in mixed-signal low-voltage CMOS circuits. The simulated data presented are relevant for a 90 nm TSMC CMOS process.
Keywords: Low-voltage, transconductance amplifier, linearity, floating-gate.
1127 A Comparison of Signal Processing Techniques for the Extraction of Breathing Rate from the Photoplethysmogram
Authors: Susannah G. Fleming, Lionel Tarassenko
Abstract:
The photoplethysmogram (PPG) is the pulsatile waveform produced by the pulse oximeter, which is widely used for monitoring arterial oxygen saturation in patients. Various methods for extracting the breathing rate from the PPG waveform have been compared using a consistent data set, and a novel technique using autoregressive modelling is presented. This novel technique is shown to outperform the existing techniques, with a mean error in breathing rate of 0.04 breaths per minute.
Keywords: Autoregressive modelling, breathing rate, photoplethysmogram, pulse oximetry.
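A minimal sketch of the autoregressive idea, assuming a respiratory-induced PPG envelope sampled at a known rate; the model order, respiratory band and pole-based read-out below are assumptions rather than the paper's exact method.

```python
# Minimal sketch: estimate breathing rate from a respiratory PPG envelope by
# fitting an autoregressive (AR) model with least squares and reading the
# frequency of the dominant in-band pole.
import numpy as np

def ar_breathing_rate(resp_signal, fs, order=10, band=(0.1, 0.7)):
    """Return breathing rate in breaths/min, or None if no in-band pole is found."""
    x = np.asarray(resp_signal, dtype=float)
    x = x - x.mean()
    # Least-squares AR fit: x[n] ~= sum_k a_k * x[n-k]
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    # Poles of the AR model: roots of z^p - a_1 z^(p-1) - ... - a_p
    poles = np.roots(np.concatenate(([1.0], -a)))
    freqs = np.angle(poles) * fs / (2 * np.pi)
    in_band = [(f, abs(p)) for f, p in zip(freqs, poles) if band[0] <= f <= band[1]]
    if not in_band:
        return None
    f_resp = max(in_band, key=lambda t: t[1])[0]   # pole closest to the unit circle
    return 60.0 * f_resp
```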
1126 Classification of Causes and Effects of Uploading and Downloading of Pirated Film Products
Authors: Pavel Janak
Abstract:
This paper covers various aspects of Internet film piracy. In order to deal successfully with this matter, it is necessary to recognize and explain the various motivational factors related to film piracy. Thus, this study proposes groups of economic, socio-psychological and other factors that could motivate individuals to engage in pirate activities. The paper also studies the interactions between downloaders and uploaders and describes the causality between the motivational factors and their effects on the film industry. Moreover, the study also focuses on a proposed scheme of relations between downloading movies and the possible effect on box office revenues.
Keywords: Download, Film piracy, Internet, Upload
1125 Measures and Influence of a BAW Filter on Digital Radio-Communications Signals
Authors: A. Diet, M. Villegas, G. Baudoin
Abstract:
This work concerns the measurement of the S-parameters of a Bulk Acoustic Wave (BAW) emission filter and their comparison with simulated prototypes. Using HP-ADS, a co-simulation of the filter characteristics in a digital radio-communication chain is performed. Four cases of modulation schemes are studied in order to illustrate the impact of the spectral occupation of the modulated signal. Results of simulation and co-simulation are given in terms of Error Vector Measurements, useful for a general sensitivity analysis of 3rd/4th generation emitters (wideband QAM and OFDM signals).
Keywords: RF architectures, BAW filters.
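For reference, a root-mean-square error vector magnitude figure of the sort reported in such co-simulations can be computed as sketched below; the QPSK example is purely illustrative and not taken from the paper.

```python
# Illustrative sketch: RMS error vector magnitude (EVM) between reference and
# measured complex baseband symbols, a common modulation-quality summary.
import numpy as np

def evm_percent(reference, measured):
    """RMS EVM in percent between reference and received constellation points."""
    ref = np.asarray(reference, dtype=complex)
    rx = np.asarray(measured, dtype=complex)
    error_power = np.mean(np.abs(rx - ref) ** 2)
    ref_power = np.mean(np.abs(ref) ** 2)
    return 100.0 * np.sqrt(error_power / ref_power)

# Example: ideal QPSK symbols distorted by additive noise
rng = np.random.default_rng(0)
bits = rng.integers(0, 4, 1000)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))
noisy = qpsk + 0.05 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
print(f"EVM = {evm_percent(qpsk, noisy):.2f} %")
```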
1124 An Improved Design of Area Efficient Two Bit Comparator
Authors: Shashank Gautam, Pramod Sharma
Abstract:
In the present era of development of digital circuits, signal processors and other integrated circuits, magnitude comparators are challenged by large area and high power consumption. The comparator is a basic circuit that performs comparison. This paper presents a technique to design a two-bit comparator which consumes less area and power. DSCH and MICROWIND version 3 are used, respectively, to design the schematic and to lay out the schematic and observe its performance parameters at different nanometer technologies.
Keywords: Chip design, consumed power, layout area, two bit comparator.
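As a behavioural reference for such a design, the Boolean function of a two-bit magnitude comparator can be modelled as sketched below; this is a functional model for verification, not the circuit-level design of the paper.

```python
# Behavioural sketch: the truth function of a two-bit magnitude comparator,
# usable as a golden reference when verifying a gate-level or layout design.
def two_bit_comparator(a1, a0, b1, b0):
    """Return (A_gt_B, A_eq_B, A_lt_B) for 2-bit numbers A = a1a0 and B = b1b0."""
    a = (a1 << 1) | a0
    b = (b1 << 1) | b0
    return int(a > b), int(a == b), int(a < b)

# Exhaustive truth table over all 16 input combinations
for a in range(4):
    for b in range(4):
        gt, eq, lt = two_bit_comparator(a >> 1, a & 1, b >> 1, b & 1)
        print(f"A={a:02b} B={b:02b} -> GT={gt} EQ={eq} LT={lt}")
```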
1123 EEG Spikes Detection, Sorting, and Localization
Authors: Mazin Z. Othman, Maan M. Shaker, Mohammed F. Abdullah
Abstract:
This study introduces a new method for detecting, sorting, and localizing spikes from multiunit EEG recordings. The method combines the wavelet transform, which localizes distinctive spike features, with the Super-Paramagnetic Clustering (SPC) algorithm, which allows automatic classification of the data without assumptions such as low variance or Gaussian distributions. Moreover, the method is capable of setting amplitude thresholds for spike detection. The method was applied to several real EEG data sets, and the spikes were detected, clustered, and their occurrence times identified.
Keywords: EEG time localizations, EEG spike detection, superparamagnetic algorithm, wavelet transform.
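A minimal sketch of amplitude-threshold spike detection followed by wavelet-coefficient features for later clustering; the threshold rule (based on the median absolute deviation), window length and wavelet are assumptions, and the superparamagnetic clustering step is omitted.

```python
# Minimal sketch: MAD-based amplitude threshold for spike detection, then
# wavelet coefficients of each spike snippet as features for clustering.
import numpy as np
import pywt

def detect_spikes(signal, fs, window_ms=2.0, k=4.0):
    """Return spike times (s) and per-spike wavelet feature vectors."""
    x = np.asarray(signal, dtype=float)
    sigma = np.median(np.abs(x)) / 0.6745              # robust noise estimate
    threshold = k * sigma
    half = max(1, int(window_ms * 1e-3 * fs / 2))
    above = np.abs(x) >= threshold
    crossings = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    times, feats = [], []
    for n in crossings:
        if half <= n < len(x) - half:
            snippet = x[n - half:n + half]
            level = min(3, pywt.dwt_max_level(len(snippet), "haar"))
            coeffs = np.concatenate(pywt.wavedec(snippet, "haar", level=level))
            times.append(n / fs)
            feats.append(coeffs)
    return np.array(times), np.array(feats)
```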
1122 Self Organizing Analysis Platform for Wear Particle
Authors: Qurban A. Memon, Mohammad S. Laghari
Abstract:
The integration of system process information obtained through an image processing system with an evolving knowledge database, to improve the accuracy and predictability of wear particle analysis, is the main focus of this paper. The objective is to intelligently automate the wear particle analysis process using classification via self-organizing maps. This is achieved using relationship measurements among corresponding attributes of various measurements of wear particles. Finally, a visualization technique is proposed that helps the viewer understand and utilize these relationships, enabling accurate diagnostics.
Keywords: Neural Network, Relationship Measurement, Self-organizing Clusters, Wear Particle Analysis.
1121 H.263 Based Video Transceiver for Wireless Camera System
Authors: Won-Ho Kim
Abstract:
In this paper, a design of an H.263-based wireless video transceiver is presented for a wireless camera system. It uses a standard Wi-Fi transceiver, and the coverage area is up to 100 m. Furthermore, the standard H.263 video encoding technique is used for video compression, since the wireless video transmitter is unable to transmit high-capacity raw data in real time; the implemented system is capable of streaming NTSC 720x480 video at a rate of less than 1 Mbps.
Keywords: Digital signal processing, H.263 video encoder, surveillance camera, wireless video transceiver.
1120 Estimating the Traffic Impacts of Green Light Optimal Speed Advisory Systems Using Microsimulation
Authors: C. B. Masera, M. Imprialou, L. Budd, C. Morton
Abstract:
Even though signalised intersections are necessary for urban road traffic management, they can act as bottlenecks and disrupt traffic operations. Interrupted traffic flow causes congestion, delays, stop-and-go conditions (i.e. excessive acceleration/deceleration) and longer journey times. Vehicle and infrastructure connectivity offers the potential to provide improved new services with additional driver-assistance functions. This paper focuses on one application of vehicle-to-infrastructure communication, namely Green Light Optimal Speed Advisory (GLOSA). To assess the effectiveness of GLOSA in the urban road network, an integrated microscopic traffic simulation framework is built in VISSIM software. Vehicle movements and vehicle-infrastructure communications are simulated through the External Driver Model interface. A control algorithm is developed for recommending an optimal speed that is continuously updated at every time step for all vehicles approaching a signal-controlled point. This algorithm allows vehicles to pass a traffic signal without stopping, or to minimise stopping times at a red phase. This study is performed with connected vehicles at a 100% penetration rate. Conventional vehicles are also simulated in the same network as a reference. A straight road segment composed of two opposite directions with two traffic lights per lane is studied. The simulation is implemented under traffic volumes of 150 and 200 vehicles per hour per lane to identify how different traffic densities influence the benefits of GLOSA. The results indicate that traffic flow is improved by the application of GLOSA. According to this study, vehicles passed through the traffic lights more smoothly, and waiting times were reduced by up to 28 seconds. Average delays decreased for the entire network by 86.46% and 83.84% under traffic densities of 150 and 200 vehicles per hour per lane, respectively.
Keywords: Connected vehicles, GLOSA, intelligent transportation systems, infrastructure-to-vehicle communication.
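A minimal sketch of a GLOSA advisory rule of the kind described above: advise a speed that lets the vehicle reach the stop line during the next green window, clamped to the road's speed limits. The signal-timing inputs and speed bounds are assumptions, not the study's VISSIM algorithm.

```python
# Minimal sketch of a GLOSA rule: advise a speed that brings the vehicle to the
# stop line within the next green window, within [v_min, v_max].
def glosa_advice(distance_m, t_green_start, t_green_end, v_min=5.0, v_max=13.9):
    """Return an advised speed in m/s, or None if the window cannot be reached.

    distance_m     -- distance to the stop line
    t_green_start  -- seconds until the next green phase begins (0 if green now)
    t_green_end    -- seconds until that green phase ends
    """
    if t_green_end <= 0:
        return None
    # Fastest permissible arrival: just as (or after) the green starts
    v_upper = distance_m / t_green_start if t_green_start > 0 else v_max
    # Slowest useful arrival: just before the green ends
    v_lower = distance_m / t_green_end
    lo, hi = max(v_lower, v_min), min(v_upper, v_max)
    if lo > hi:
        return None          # cannot make this window within the speed limits
    return hi                 # advise the highest feasible speed (least delay)

# Example: 150 m from the stop line, green starts in 12 s and ends at 35 s
print(glosa_advice(150.0, 12.0, 35.0))   # -> 12.5 m/s (45 km/h)
```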