Search results for: signal detection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2524

334 Spatial Variability of Brahmaputra River Flow Characteristics

Authors: Hemant Kumar

Abstract:

In Hindu mythology, the Brahmaputra River is known as the son of Lord Brahma. True to this name, the Brahmaputra causes mass destruction during the monsoon season in Assam, a state situated in the north-eastern part of India. Assam is one of the essential states among the seven states of eastern India, and it carries almost the entire flow of the Brahmaputra, while the other states carry its tributaries. In the present case study, a spatial analysis was performed using a number of acquired MODIS scenes. The change detection method revealed elevated spray (aerosol) content during heavy rainfall and in the flooded monsoon season, and the analysis of the Brahmaputra outflow identified the flood season. The charged-particle-associated aerosol content indicates high water content below the ground surface, which was validated by trend analysis of rainfall spectrum data. This was further confirmed by in-situ sampled data from different positions along the Brahmaputra River. In addition, Hyperion hyperspectral data at 30 m resolution were used to map the sediment deposits, which was also confirmed by in-situ sampled data from different positions.

Keywords: Spatial analysis, change detection, aerosol, trend analysis.

333 Thermal Effect on Wave Interaction in Composite Structures

Authors: R. K. Apalowo, D. Chronopoulos, V. Thierry

Abstract:

A wide range of failure modes exists in composite structures because of their increasing use, especially in the aerospace industry. Moreover, the temperature dependent wave response of composite and layered structures has been studied continuously, though still to a limited extent, over the last decade, mainly because of the broad operating temperature range of aerospace structures. A wave finite element (WFE) and finite element (FE) based computational method is presented by which the temperature dependent wave dispersion characteristics and interaction phenomena in composite structures can be predicted. Initially, the temperature dependent mechanical properties of the panel in the range of −100 °C to 150 °C are measured experimentally using Thermal Mechanical Analysis (TMA). The temperature dependent wave dispersion characteristics of each waveguide of the structural system, which is discretized as a number of waveguides coupled by a coupling element, are calculated using the WFE approach. The wave scattering properties, as a function of temperature, are determined by coupling the WFE wave characteristic models of the waveguides with a full FE model of the coupling element in which the defect is included. Numerical case studies are presented for two waveguides coupled through a coupling element.

Keywords: Temperature dependent mechanical characteristics, wave propagation properties, damage detection, wave finite element, composite structure.

332 Investigating the Vehicle-Bicyclists Conflicts Using LIDAR Sensor Technology at Signalized Intersections

Authors: Alireza Ansariyar, Mansoureh Jeihani

Abstract:

A Light Detection and Ranging (LiDAR) sensor is capable of recording traffic data, including the number of passing vehicles and bicyclists, the speeds of vehicles and bicyclists, and the number of conflicts between the two road-user groups. In order to collect real-time traffic data and investigate the safety of different road users, a LiDAR sensor was installed at the Cold Spring Ln – Hillen Rd intersection in Baltimore City. The frequency and severity of the collected real-time conflicts were analyzed, and the results show that 122 conflicts were recorded over a 10-month interval from May 2022 to February 2023. An image-processing algorithm and a safety Measure of Effectiveness (MOE) were employed to identify critical zones for bicyclists entering each respective zone at the signalized intersection. Considering the trajectories of the conflicts, the analysis demonstrated that conflicts in the northern approach (zone N) are the most frequent and severe. Additionally, severe vehicle-bike conflicts are more likely to occur in sunny weather.
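
A hedged sketch of the Post Encroachment Time (PET) screening suggested by the keywords: PET is taken as the gap between the first road user leaving a conflict zone and the second entering it, and an event is flagged as a conflict when PET falls below a threshold. The timestamps and the 5 s threshold are illustrative assumptions, not values from the study.

```python
from dataclasses import dataclass

@dataclass
class ZoneEvent:
    user: str          # "vehicle" or "bike"
    t_enter: float     # seconds
    t_exit: float      # seconds

def post_encroachment_time(first: ZoneEvent, second: ZoneEvent) -> float:
    """PET = time the second user enters the zone minus time the first user leaves it."""
    return second.t_enter - first.t_exit

def is_conflict(first: ZoneEvent, second: ZoneEvent, threshold_s: float = 5.0) -> bool:
    pet = post_encroachment_time(first, second)
    return 0 <= pet < threshold_s

veh = ZoneEvent("vehicle", t_enter=12.0, t_exit=13.1)
bike = ZoneEvent("bike", t_enter=14.6, t_exit=15.8)
print(post_encroachment_time(veh, bike), is_conflict(veh, bike))
```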

Keywords: LiDAR sensor, Post Encroachment Time threshold, vehicle-bike conflicts, measure of effectiveness, weather condition.

331 New Features for Specific JPEG Steganalysis

Authors: Johann Barbier, Eric Filiol, Kichenakoumar Mayoura

Abstract:

We present in this paper a new approach to specific JPEG steganalysis and propose studying the statistics of the compressed DCT coefficients. Traditionally, steganographic algorithms try to preserve the statistics of the DCT and of the spatial domain, but they cannot preserve both while also controlling the alteration of the compressed data. We have noticed a deviation of the entropy of the compressed data after a first embedding. This deviation is greater when the image is a cover medium than when it is a stego image. To observe this deviation, we introduce new statistical features and combine them with the Multiple Embedding Method. This approach is motivated by the avalanche criterion of the JPEG lossless compression step, which makes it possible to design detectors whose detection rates are independent of the payload. Finally, we designed a Fisher discriminant based classifier for the well-known steganographic algorithms Outguess, F5 and Hide and Seek. The experimental results we obtained show the efficiency of our classifier for these algorithms. Moreover, it is also designed to work at low embedding rates (below 10⁻⁵) and, according to the avalanche criterion of the RLE and Huffman compression steps, its efficiency is independent of the amount of hidden information.
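
A minimal sketch of the classification stage described above: a Fisher (linear) discriminant trained on entropy-deviation features. The feature extraction shown (Shannon entropy of a compressed byte stream before and after a test re-embedding) is a simplified stand-in for the paper's statistics, and the training data below are random placeholders, not real extracted features.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def byte_entropy(data: bytes) -> float:
    """Shannon entropy (bits per byte) of a compressed byte stream."""
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

def entropy_deviation_features(original: bytes, re_embedded: bytes) -> list:
    """Entropy before and after a test re-embedding, plus the deviation."""
    h0, h1 = byte_entropy(original), byte_entropy(re_embedded)
    return [h0, h1, h1 - h0]

# Training data: feature vectors for labelled cover/stego images.
# Random placeholders stand in for features extracted with the functions above.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 3)), rng.normal(0.5, 1.0, (100, 3))])
y = np.array([0] * 100 + [1] * 100)          # 0 = cover, 1 = stego

clf = LinearDiscriminantAnalysis()           # Fisher linear discriminant
clf.fit(X, y)
print(clf.score(X, y))                       # in-sample accuracy of the toy example
```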

Keywords: Compressed frequency domain, Fisher discriminant, specific JPEG steganalysis.

330 Study on the Presence of Protozoal Coinfections among Patients with Pneumocystis jirovecii Pneumonia in Bulgaria

Authors: N. Tsvetkova, R. Harizanov A. Ivanova, I. Rainova, N. Yancheva-Petrova, D. Strashimirov, R. Enikova, M. Videnova, E. Kaneva, I. Kaftandjiev, V. Levterova, I. Simeonovski, N. Yanev, G. Hinkov

Abstract:

Pneumocystis jirovecii (P. jirovecii) and protozoa of the genera Acanthamoeba and Cryptosporidium, as well as Toxoplasma gondii, are opportunistic pathogens that can cause life-threatening infections in immunocompromised patients. The aim of the study was to evaluate the rate of coinfection with opportunistic protozoal agents among Bulgarian patients diagnosed with P. jirovecii pneumonia. Thirty-eight pulmonary samples were collected from 38 patients (28 HIV-infected) with P. jirovecii infection. P. jirovecii DNA was detected by real-time PCR targeting the large mitochondrial subunit ribosomal RNA gene. Acanthamoeba was detected by a genus-specific conventional PCR assay, and real-time PCR was used for the detection of Toxoplasma gondii and Cryptosporidium DNA fragments. Pneumocystis DNA was detected in all 38 specimens; 28 (73.7%) were from HIV-infected patients. Three of these (10.7%) were coinfected with T. gondii and 1 (3.6%) with Cryptosporidium. In the group of non-HIV-infected patients (n = 10), Cryptosporidium DNA was detected in an infant (10%). Acanthamoeba DNA was not found in the tested samples. The current study showed a relatively low rate of coinfection with Cryptosporidium spp./T. gondii among the Bulgarian P. jirovecii patients studied.

Keywords: Coinfection, opportunistic protozoal agents, Pneumocystis jirovecii, pulmonary infections.

329 Method Development and Validation for the Determination of Cefixime in Pure and Commercial Dosage Forms by Spectrophotometry

Authors: S. N. H. Azmi, B. Iqbal, J. K. Al Mamari, K. A. Al Hattali, W. N. Al Hadhrami

Abstract:

A simple, accurate and precise direct spectrophotometric method has been developed for the determination of cefixime in tablets and capsules. The method is based on the reaction of cefixime with a mixture of potassium iodide and potassium iodate to form a yellow coloured product in an ethanol-distilled water medium at room temperature, which absorbs maximally at 352 nm. The factors affecting the reaction product were carefully studied and optimized. The validation parameters were assessed following the International Conference on Harmonisation (ICH) guidelines. The effect of common excipients used as additives was tested and the tolerance limit was calculated for the determination of cefixime. Beer's law is obeyed in the concentration range of 4-24 µg mL⁻¹, with an apparent molar absorptivity of 1.52 × 10⁴ L mol⁻¹ cm⁻¹ and a Sandell's sensitivity of 0.033 µg cm⁻² per 0.001 absorbance unit. The limits of detection and quantitation for the proposed method are 0.32 and 1.06 µg mL⁻¹, respectively. The proposed method has been successfully applied to the determination of cefixime in pharmaceutical formulations. The results obtained by the proposed method were statistically compared with those of the reference method using t- and F-tests, and no significant difference was found between the two methods. The proposed method can be used as an alternative method for routine quality control analysis of cefixime in pharmaceutical formulations.
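
A small sketch of the calibration arithmetic implied above: a Beer's-law calibration line fitted over the stated 4-24 µg/mL range, with ICH-style detection and quantitation limits (3.3·σ/slope and 10·σ/slope). The absorbance values are illustrative placeholders, not the paper's data.

```python
import numpy as np

conc = np.array([4, 8, 12, 16, 20, 24], dtype=float)         # µg/mL standards
absorbance = np.array([0.13, 0.27, 0.40, 0.54, 0.67, 0.81])  # at 352 nm (made up)

slope, intercept = np.polyfit(conc, absorbance, 1)            # calibration line
residual_sd = np.std(absorbance - (slope * conc + intercept), ddof=2)

lod = 3.3 * residual_sd / slope    # limit of detection, µg/mL
loq = 10.0 * residual_sd / slope   # limit of quantitation, µg/mL

def concentration(a_sample: float) -> float:
    """Back-calculate the concentration of an unknown from its absorbance."""
    return (a_sample - intercept) / slope

print(f"LOD ~ {lod:.2f} µg/mL, LOQ ~ {loq:.2f} µg/mL, A = 0.50 -> {concentration(0.50):.1f} µg/mL")
```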

Keywords: Spectrophotometry, cefixime, validation, pharmaceutical formulations.

328 Discrete Polyphase Matched Filtering-based Soft Timing Estimation for Mobile Wireless Systems

Authors: Thomas O. Olwal, Michael A. van Wyk, Barend J. van Wyk

Abstract:

In this paper we present a soft timing phase estimation (STPE) method for wireless mobile receivers operating at low signal-to-noise ratios (SNRs). Discrete Polyphase Matched (DPM) filters, a Log-maximum a posteriori probability (MAP) algorithm and/or a Soft-Output Viterbi Algorithm (SOVA) are combined to derive a new timing recovery (TR) scheme. We apply this scheme to a wireless cellular communication system model that comprises a raised cosine filter (RCF) and a bit-interleaved turbo-coded multi-level modulation (BITMM) scheme, and the channel is assumed to be memoryless. Furthermore, no clock signals are transmitted to the receiver, contrary to classical data aided (DA) models. This new model ensures that both the bandwidth and the power of the communication system are conserved. However, the computational complexity of ideal turbo synchronization is increased by 50%. Several simulation tests of bit error rate (BER) and block error rate (BLER) versus low SNR reveal that the proposed iterative soft timing recovery (ISTR) scheme outperforms the conventional schemes.

Keywords: discrete polyphase matched filters, maximum likelihood estimators, soft timing phase estimation, wireless mobile systems.

327 Highly Accurate Target Motion Compensation Using Entropy Function Minimization

Authors: Amin Aghatabar Roodbary, Mohammad Hassan Bastani

Abstract:

One of the shortcomings of stepped frequency radar systems is their sensitivity to target motion. In such systems, target motion causes range cell shift, false peaks, Signal to Noise Ratio (SNR) reduction and range profile spreading, because the power spectrum of each range cell interferes with adjacent range cells; this distorts the High Resolution Range Profile (HRRP) and disrupts the target recognition process. Therefore, the effects of the Target Motion Parameters (TMPs) should be compensated. In this paper, such a method for estimating the TMPs (velocity and acceleration), and consequently eliminating or suppressing their unwanted effects on the HRRP, based on entropy minimization is proposed. The method is carried out in two major steps: in the first step, a discrete search is performed over the whole acceleration-velocity lattice, within a specified interval, to find a coarse minimum of the entropy function. In the second step, a 1-D search over velocity is performed in the vicinity of this minimum along several constant-acceleration lines, in order to refine the accuracy of the minimum found in the first step. The simulation results provided demonstrate the effectiveness of the proposed method.
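
A schematic of the two-step search described above: a coarse scan over an acceleration-velocity grid followed by a finer 1-D velocity search along a few constant-acceleration lines, both minimizing the entropy of the compensated range profile. The compensation model (a quadratic phase in the pulse index) is a common stepped-frequency assumption, not necessarily the authors' exact formulation, and the echo data are placeholders.

```python
import numpy as np

def profile_entropy(profile: np.ndarray) -> float:
    p = np.abs(profile) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def compensated_entropy(echoes, v, a, fc=10e9, prf=1e3, c=3e8):
    t = np.arange(len(echoes)) / prf
    phase = 4 * np.pi * fc / c * (v * t + 0.5 * a * t ** 2)   # motion-induced phase
    hrrp = np.fft.ifft(echoes * np.exp(-1j * phase))          # compensated profile
    return profile_entropy(hrrp)

def two_step_search(echoes, v_grid, a_grid):
    # Step 1: coarse minimum over the whole acceleration-velocity lattice.
    coarse = [(compensated_entropy(echoes, v, a), v, a) for v in v_grid for a in a_grid]
    _, v0, a0 = min(coarse)
    # Step 2: fine 1-D velocity search on a few constant-acceleration lines near a0.
    v_fine = np.linspace(v0 - 1.0, v0 + 1.0, 201)
    fine = [(compensated_entropy(echoes, v, a), v, a)
            for a in (a0 - 0.5, a0, a0 + 0.5) for v in v_fine]
    return min(fine)[1:]   # (v_hat, a_hat)

echoes = np.exp(1j * 2 * np.pi * np.random.rand(64))           # placeholder echo data
print(two_step_search(echoes, np.arange(-50, 51, 5.0), np.arange(-5, 6, 1.0)))
```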

Keywords: ATR, HRRP, motion compensation, SFW, TMP.

326 Design and Simulation of Heartbeat Measurement System Using Arduino Microcontroller in Proteus

Authors: Muhibul H. Bhuyan, Mafujul Hasan

Abstract:

If a person monitors his or her heart rate regularly, heart disease can be detected early and a longer life span can be enjoyed; the condition should therefore be taken seriously, and many health care devices and monitoring systems are being designed to keep track of heart disease. This work reports the design and simulation of an Arduino microcontroller based heart rate measurement and monitoring system in the Proteus environment. A clip sensor is used to sense the heart rate from the fingertip. It is a digital device consisting mainly of an infrared (IR) transmitter (an IR LED) and an IR receiver (an IR photo-transistor or photo-detector). As the heart pumps blood and circulates it through the blood vessels of the body, the changing blood volume modulates the transmitted IR light, which is reflected back to the receiver accordingly. The reflected signals are processed in the microcontroller by software written in assembly language, which determines the heart rate (HR) in beats per minute (bpm) from the signal detected over a duration of 10 seconds and displays it in digital format on the LCD screen. The designed system was simulated for persons of varying ages, for example infants, adults and active athletes. The simulation results were found to be very satisfactory.
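
A simple sketch of the rate computation the abstract describes: pulses from the IR sensor are counted over a 10-second window and scaled to beats per minute (count × 6). The threshold-based edge detection shown here is a generic stand-in for the microcontroller's assembly routine.

```python
import numpy as np

def beats_per_minute(samples: np.ndarray, fs: float, threshold: float, window_s: float = 10.0) -> float:
    """Count rising edges through `threshold` in the first `window_s` seconds of the signal."""
    n = int(window_s * fs)
    above = samples[:n] > threshold
    rising_edges = np.count_nonzero(~above[:-1] & above[1:])
    return rising_edges * (60.0 / window_s)

# Synthetic 72-bpm pulse waveform sampled at 100 Hz for testing.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
pulse = 0.5 + 0.5 * np.sin(2 * np.pi * (72 / 60.0) * t)
print(beats_per_minute(pulse, fs, threshold=0.75))   # 72.0 for this synthetic signal
```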

Keywords: Heart rate measurement, design, simulation, Proteus, Arduino Uno microcontroller.

325 Towards Growing Self-Organizing Neural Networks with Fixed Dimensionality

Authors: Guojian Cheng, Tianshi Liu, Jiaxin Han, Zheng Wang

Abstract:

Competitive learning is an adaptive process in which the neurons of a neural network gradually become sensitive to different input pattern clusters. Competitive learning is the basic idea behind Kohonen's Self-Organizing Feature Maps (SOFM). SOFM can generate mappings from high-dimensional signal spaces to lower-dimensional topological structures. The main features of such mappings are topology preservation, feature mapping and approximation of the probability distribution of the input patterns. To overcome some limitations of SOFM, e.g., a fixed number of neural units and a topology of fixed dimensionality, Growing Self-Organizing Neural Networks (GSONN) can be used. A GSONN can change its topological structure during learning: it grows by learning and shrinks by forgetting. To speed up training and convergence, a new variant of GSONN, Twin Growing Cell Structures (TGCS), is presented here. This paper first gives an introduction to competitive learning, SOFM and its variants. Then, we discuss GSONNs with fixed dimensionality, including growing cell structures, their variants and the authors' model, TGCS. The paper ends with a comparison of test results and conclusions.
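
A minimal sketch of the competitive-learning step that underlies SOFM/GCS-style networks: the best-matching unit is pulled toward each input. Neighbourhood updates and the growth/pruning of the cell structure are deliberately omitted; the data and unit count are illustrative.

```python
import numpy as np

def competitive_learning(data: np.ndarray, n_units: int = 4, lr: float = 0.1, epochs: int = 20):
    rng = np.random.default_rng(0)
    w = data[rng.choice(len(data), n_units, replace=False)].astype(float)  # init from samples
    for _ in range(epochs):
        for x in data:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))   # best-matching unit
            w[bmu] += lr * (x - w[bmu])                       # move the winner toward the input
    return w

# Four well-separated clusters; the units should settle near the cluster centres.
data = np.vstack([np.random.randn(50, 2) + c for c in ([0, 0], [5, 5], [0, 5], [5, 0])])
print(np.round(competitive_learning(data), 2))
```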

Keywords: Artificial neural networks, Competitive learning, Growing cell structures, Self-organizing feature maps.

324 Improving Detection of Illegitimate Scores and Assessment in Most Advantageous Tenders

Authors: Hao-Hsi Tseng, Hsin-Yun Lee

Abstract:

Adopting the Most Advantageous Tender (MAT) for government procurement projects has become popular in Taiwan. Over time, the problems of MAT have gradually appeared. Two points are commonly criticized: the result can be manipulated by a single committee member's partiality, and it is unclear how to reach a fair decision when there are two or more winners. Arrow's Impossibility Theorem states that an acceptable scoring method should satisfy four reasonable criteria. Based on these four criteria, this paper constructs an "Illegitimate Scores Checking Scheme" for scoring methods and uses the scheme to expose the illegitimacy of the current MAT evaluation method. This paper also proposes a new scoring method, called the "Standardizing Overall Evaluated Score Method", which makes each committee member's influence essentially identical. Thus, committee members can score freely according to their own preferences without compromising fairness. Finally, the method was examined by a large-scale simulation, and the experiment revealed that it mitigates the problem of dictatorship and completely avoids cyclical majorities. This result verifies that the Standardizing Overall Evaluated Score Method is better than the current MAT evaluation method.
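
A hedged sketch of one plausible reading of the "Standardizing Overall Evaluated Score Method": each committee member's raw scores are converted to standard (z) scores across the tenderers, so that every member carries the same weight, and tenderers are ranked by the mean standardized score. The exact formula used in the paper may differ; the scores below are placeholders.

```python
import numpy as np

def standardized_overall_scores(raw: np.ndarray) -> np.ndarray:
    """raw[i, j] = score given by committee member i to tenderer j."""
    member_mean = raw.mean(axis=1, keepdims=True)
    member_std = raw.std(axis=1, keepdims=True)
    z = (raw - member_mean) / member_std     # per-member standardization
    return z.mean(axis=0)                    # overall standardized score per tenderer

raw_scores = np.array([[90, 70, 80],   # member 1 (generous scorer)
                       [60, 50, 55],   # member 2 (strict scorer)
                       [85, 84, 83]])  # member 3 (narrow scoring range)
overall = standardized_overall_scores(raw_scores)
print(overall, "-> winner: tenderer", int(np.argmax(overall)))
```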

Keywords: Arrow’s impossibility theorem, most advantageous tender, illegitimate scores checking scheme, standard score.

323 Fast Wavelet Image Denoising Based on Local Variance and Edge Analysis

Authors: Gaoyong Luo

Abstract:

The approach based on the wavelet transform has been widely used for image denoising due to its multi-resolution nature, its ability to produce high levels of noise reduction and the low level of distortion introduced. However, by removing noise, high frequency components belonging to edges are also removed, which leads to blurring the signal features. This paper proposes a new method of image noise reduction based on local variance and edge analysis. The analysis is performed by dividing an image into 32 x 32 pixel blocks, and transforming the data into wavelet domain. Fast lifting wavelet spatial-frequency decomposition and reconstruction is developed with the advantages of being computationally efficient and boundary effects minimized. The adaptive thresholding by local variance estimation and edge strength measurement can effectively reduce image noise while preserve the features of the original image corresponding to the boundaries of the objects. Experimental results demonstrate that the method performs well for images contaminated by natural and artificial noise, and is suitable to be adapted for different class of images and type of noises. The proposed algorithm provides a potential solution with parallel computation for real time or embedded system application.
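
A compact sketch of the block-wise adaptive thresholding idea: the image is split into 32 x 32 blocks, each block is transformed with a (standard, not lifting) 2-D wavelet, and detail coefficients are soft-thresholded with a threshold derived from the estimated noise level and the block's local variance. This illustrates the principle only; the paper's lifting scheme and edge-strength term are not reproduced here.

```python
import numpy as np
import pywt

def denoise_block(tile: np.ndarray, wavelet: str = "db2", level: int = 2) -> np.ndarray:
    coeffs = pywt.wavedec2(tile, wavelet, level=level)
    # Robust noise estimate from the finest diagonal subband (median / 0.6745).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    out = [coeffs[0]]
    for cH, cV, cD in coeffs[1:]:
        new = []
        for c in (cH, cV, cD):
            local_var = max(c.var() - sigma ** 2, 1e-12)   # local signal-variance estimate
            thr = sigma ** 2 / np.sqrt(local_var)          # BayesShrink-style threshold
            new.append(pywt.threshold(c, thr, mode="soft"))
        out.append(tuple(new))
    rec = pywt.waverec2(out, wavelet)
    return rec[: tile.shape[0], : tile.shape[1]]

def denoise_image(img: np.ndarray, block: int = 32) -> np.ndarray:
    out = img.astype(float).copy()
    for r in range(0, img.shape[0] - block + 1, block):
        for c in range(0, img.shape[1] - block + 1, block):
            out[r:r + block, c:c + block] = denoise_block(out[r:r + block, c:c + block])
    return out

noisy = np.random.rand(128, 128) + 0.1 * np.random.randn(128, 128)
print(denoise_image(noisy).shape)
```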

Keywords: Edge strength, Fast lifting wavelet, Image denoising, Local variance.

322 Fuzzy Mathematical Morphology approach in Image Processing

Authors: Yee Yee Htun, Dr. Khaing Khaing Aye

Abstract:

Morphological operators transform an original image into another image through interaction with another image of a certain shape and size, known as the structuring element. Mathematical morphology provides a systematic approach for analyzing the geometric characteristics of signals or images, and it has been applied widely to many applications such as edge detection, object segmentation and noise suppression. Fuzzy mathematical morphology aims to extend the binary morphological operators to grey-level images. In order to define the basic morphological operations such as fuzzy erosion, dilation, opening and closing, a general method based upon fuzzy implication and inclusion grade operators is introduced. The fuzzy morphological operations extend the ordinary morphological operations by using fuzzy sets, where the union operation is replaced by a maximum operation and the intersection operation is replaced by a minimum operation. This work consists of two parts. In the first, fuzzy set theory, fuzzy mathematical morphology (which is based on fuzzy logic and fuzzy set theory) and fuzzy mathematical operations and their properties are studied in detail. In the second part, the application of fuzziness in mathematical morphology to practical work such as image processing is discussed with illustrative problems.
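
A small sketch of fuzzy dilation and erosion in the max/min form mentioned above, with membership values in [0, 1]. The erosion below uses the Kleene-Dienes implication (max(1 - s, f)); the paper's general implication/inclusion-grade framework admits other choices, so this is only one instance.

```python
import numpy as np

def fuzzy_dilate(f: np.ndarray, s: np.ndarray) -> np.ndarray:
    k = s.shape[0] // 2
    pad = np.pad(f, k, mode="constant", constant_values=0.0)
    out = np.zeros_like(f)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            window = pad[i:i + s.shape[0], j:j + s.shape[1]]
            out[i, j] = np.max(np.minimum(window, s))        # sup of the min t-norm
    return out

def fuzzy_erode(f: np.ndarray, s: np.ndarray) -> np.ndarray:
    k = s.shape[0] // 2
    pad = np.pad(f, k, mode="constant", constant_values=1.0)
    out = np.zeros_like(f)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            window = pad[i:i + s.shape[0], j:j + s.shape[1]]
            out[i, j] = np.min(np.maximum(window, 1.0 - s))  # inf of the implication
    return out

img = np.random.rand(8, 8)     # fuzzy membership image
se = np.ones((3, 3))           # flat structuring element
opened = fuzzy_dilate(fuzzy_erode(img, se), se)   # fuzzy opening
print(opened.shape)
```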

Keywords: Binary morphology, Fuzzy sets, Grayscale morphology, Image processing, Mathematical morphology.

321 A New Image Psychovisual Coding Quality Measurement based Region of Interest

Authors: M. Nahid, A. Bajit, A. Tamtaoui, E. H. Bouyakhf

Abstract:

To model the human visual system (HVS) in the region of interest, we propose a new objective metric adapted to wavelet foveation-based image compression quality measurement, which exploits a foveation filter implemented in the DWT domain, based especially on the point and region of fixation of the human eye. This model is then used to predict the visible differences between an original and a compressed image with respect to this fixation region, and it yields an adapted, local error measure by removing all peripheral errors. The technique, which we call foveation wavelet visible difference prediction (FWVDP), is demonstrated on a number of noisy images, all of which have the same local peak signal-to-noise ratio (PSNR) but visibly different errors. We show that the FWVDP reliably predicts the fixation areas of interest where the error is masked, due to high image contrast, and the areas where the error is visible, due to low image contrast. The paper also suggests ways in which the FWVDP can be used to determine a visually optimal quantization strategy for foveation-based wavelet coefficients and to produce a quantitative local measure of image quality.
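
A tiny helper illustrating the "local PSNR" quantity referred to above: PSNR computed only over a region of interest (e.g. around the fixation point) rather than over the whole image. The rectangular ROI and the test images are illustrative.

```python
import numpy as np

def local_psnr(ref: np.ndarray, test: np.ndarray, roi: tuple, peak: float = 255.0) -> float:
    """PSNR restricted to roi = (row0, row1, col0, col1)."""
    r0, r1, c0, c1 = roi
    mse = np.mean((ref[r0:r1, c0:c1].astype(float) - test[r0:r1, c0:c1].astype(float)) ** 2)
    return float(10 * np.log10(peak ** 2 / mse))

ref = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
test = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(round(local_psnr(ref, test, (32, 96, 32, 96)), 2))
```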

Keywords: Human Visual System, Image Quality, Image Compression, foveation wavelet, region of interest (ROI).

320 Performance of Compound Enhancement Algorithms on Dental Radiograph Images

Authors: S.A.Ahmad, M.N.Taib, N.E.A.Khalid, R.Ahmad, H.Taib

Abstract:

The purpose of this research is to compare original intra-oral digital dental radiograph images with images enhanced using a combination of image processing algorithms. Intra-oral digital dental radiographs are often noisy, have blurred edges, and are low in contrast. A combination of sharpening and enhancement methods is used to overcome these problems. The three proposed compound algorithms are Sharp Adaptive Histogram Equalization (SAHE), Sharp Median Adaptive Histogram Equalization (SMAHE) and Sharp Contrast-Limited Adaptive Histogram Equalization (SCLAHE). This paper presents an initial study of the perception of six dentists regarding the details of abnormal pathologies and the improvement of image quality in ten intra-oral radiographs. The research focuses on the detection of only three types of pathology: periapical radiolucency, widened periodontal ligament space and loss of lamina dura. The overall result shows that SCLAHE slightly improves the appearance of dental abnormalities over the original image and also outperforms the other two proposed compound algorithms.
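
A hedged sketch of an SCLAHE-style pipeline (sharpen, then contrast-limited adaptive histogram equalization) using OpenCV. The unsharp-mask weights and the CLAHE clip limit and tile size are illustrative choices, not the values used in the study.

```python
import cv2
import numpy as np

def sclahe(gray: np.ndarray, clip: float = 2.0, tiles: int = 8) -> np.ndarray:
    # Unsharp masking: original plus a weighted high-frequency residual.
    blurred = cv2.GaussianBlur(gray, (0, 0), sigmaX=2.0)
    sharpened = cv2.addWeighted(gray, 1.5, blurred, -0.5, 0)
    # Contrast-limited adaptive histogram equalization on the sharpened image.
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(tiles, tiles))
    return clahe.apply(sharpened)

radiograph = (np.random.rand(256, 256) * 255).astype(np.uint8)  # placeholder image
enhanced = sclahe(radiograph)
print(enhanced.dtype, enhanced.shape)
```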

Keywords: intra-oral dental radiograph, histogram equalization, sharpening, CLAHE.

319 QoS Improvement Using Intelligent Algorithm under Dynamic Tropical Weather for Earth-Space Satellite Applications

Authors: Joseph S. Ojo, Vincent A. Akpan, Oladayo G. Ajileye, Olalekan L, Ojo

Abstract:

In this paper, an intelligent algorithm (IA) capable of adapting to dynamic tropical weather conditions is proposed based on fuzzy logic techniques. The IA effectively interacts with the quality of service (QoS) criteria, irrespective of the dynamic tropical weather, to achieve improvement in the satellite links. To achieve this, an adaptive network-based fuzzy inference system (ANFIS) has been adopted. The algorithm interacts with the weather fluctuations to generate appropriate improvements to the satellite QoS for efficient service to customers. Five years (2012-2016) of rainfall-rate time series data with one-minute integration time were used to derive fading based on the ITU-R P.618-12 propagation model. The data were obtained from measurements undertaken by the Communication Research Group (CRG), Physics Department, Federal University of Technology, Akure, Nigeria. The rain attenuation and signal-to-noise ratio (SNR) were derived for frequencies between the Ku and V bands and for different propagation angles and transmitting powers. The simulated results show a substantial reduction in SNR, especially for applications in digital video broadcast second-generation coding and modulation satellite networks.

Keywords: Fuzzy logic, intelligent algorithm, Nigeria, QoS, satellite applications, tropical weather.

318 A Novel Nucleus-Based Classifier for Discrimination of Osteoclasts and Mesenchymal Precursor Cells in Mouse Bone Marrow Cultures

Authors: Andreas Heindl, Alexander K. Seewald, Martin Schepelmann, Radu Rogojanu, Giovanna Bises, Theresia Thalhammer, Isabella Ellinger

Abstract:

Bone remodeling occurs through the balanced action of bone-resorbing osteoclasts (OC) and bone-building osteoblasts. Increased bone resorption due to excessive OC activity contributes to malignant and non-malignant diseases, including osteoporosis. To study OC differentiation and function, OC formed in in vitro cultures are currently counted manually, a tedious procedure which is prone to inter-observer differences. Aiming for an automated OC-quantification system, classification of OC and precursor cells was performed on fluorescence microscope images based on the distinct appearance of the fluorescent nuclei. Following ellipse fitting to the nuclei, a combination of eight features enabled clustering of OC and precursor cell nuclei. After evaluating different machine-learning techniques, LOGREG achieved 74% correctly classified OC and precursor cell nuclei, outperforming human experts (best expert: 55%). In combination with automated detection of total cell areas, this system allows various cell parameters to be measured and, most importantly, proteins involved in osteoclastogenesis to be quantified.

Keywords: osteoclasts, machine learning, ellipse fitting.

317 An Optimization of Machine Parameters for Modified Horizontal Boring Tool Using Taguchi Method

Authors: Thirasak Panyaphirawat, Pairoj Sapsmarnwong, Teeratas Pornyungyuen

Abstract:

This paper presents the findings of an experimental investigation of the important machining parameters for a horizontal boring tool modified to mount on a horizontal lathe machine in order to bore an over-length workpiece. In order to verify the usability of the modified tool, a design of experiment based on the Taguchi method is performed. The parameters investigated are spindle speed, feed rate, depth of cut and length of workpiece. A Taguchi L9 orthogonal array is selected for four factors at three levels in order to minimize the surface roughness (Ra and Rz) of S45C steel tubes. Signal-to-noise ratio analysis and analysis of variance (ANOVA) are performed to study the effect of these parameters and to optimize the machine settings for the best surface finish. The controlling factors with the most effect are, in order, depth of cut, spindle speed, length of workpiece and feed rate. A confirmation test is performed to verify the optimal settings obtained from the Taguchi method, and the result is satisfactory.
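
A short sketch of the Taguchi "smaller-the-better" signal-to-noise ratio and the level-average (main effect) table used to rank factor influence. The L9 layout is the standard orthogonal array; the roughness values below are placeholders, not the measured data.

```python
import numpy as np

# Standard L9 orthogonal array: 9 runs x 4 factors, levels coded 0/1/2.
L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
               [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
               [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])
ra = np.array([3.2, 2.8, 2.5, 3.0, 2.6, 2.9, 2.7, 3.1, 2.4])  # placeholder Ra values (µm)

# Smaller-the-better S/N = -10·log10(mean(y^2)); one replicate per run here.
sn = -10 * np.log10(ra ** 2)

# Mean S/N at each level of each factor; a larger spread (delta) means a stronger factor.
for f, name in enumerate(["spindle speed", "feed rate", "depth of cut", "workpiece length"]):
    level_means = [sn[L9[:, f] == lvl].mean() for lvl in range(3)]
    print(name, np.round(level_means, 2), "delta =", round(max(level_means) - min(level_means), 2))
```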

Keywords: Design of Experiment, Taguchi Design, Optimization, Analysis of Variance, Machining Parameters, Horizontal Boring Tool.

316 Optimization of Process Parameters of Pressure Die Casting using Taguchi Methodology

Authors: Satish Kumar, Arun Kumar Gupta, Pankaj Chandna

Abstract:

The present work analyses different parameters of pressure die casting to minimize casting defects. Pressure die casting is usually applied for casting aluminium alloys. A good surface finish with the required tolerances and dimensional accuracy can be achieved by optimizing controllable process parameters such as solidification time, molten-metal temperature, filling time, injection pressure and plunger velocity. Moreover, by selecting optimum process parameters, pressure die casting defects such as porosity, insufficient spread of molten material and flash are also minimized. Therefore, a pressure die cast component, a carburetor housing of aluminium alloy (Al2Si2O5), has been considered. The effects of the selected process parameters on casting defects, and the subsequent setting of the parameter levels, have been determined using Taguchi's parameter design approach. The experiments have been performed according to the combinations of levels of the different process parameters suggested by an L18 orthogonal array. Analyses of variance have been performed on the mean and the signal-to-noise ratio to estimate the percentage contribution of the different process parameters. The confidence interval has also been estimated at the 95% confidence level, and three confirmation experiments have been performed to validate the optimum levels of the different parameters. Overall, a 2.352% reduction in defects has been observed with the suggested optimum process parameters.

Keywords: Aluminium Casting, Pressure Die Casting, Taguchi Methodology, Design of Experiments

315 Sonic Localization Cues for Classrooms: A Structural Model Proposal

Authors: Abhijit Mitra, C. Ardil

Abstract:

We investigate sonic cues for binaural sound localization within classrooms and present a structural model for them. Two of the primary cues for localization, the interaural time difference (ITD) and the interaural level difference (ILD) created between the two ears by sounds from a particular point in space, are used. Although these cues do not convey any information about the elevation of a sound source, the torso, head and outer ear carry out elevation-dependent spectral filtering of sounds before they reach the inner ear. This effect is commonly captured in the head related transfer function (HRTF), which aids in resolving the ambiguity left by the ITDs and ILDs alone and helps localize sounds in free space. The proposed structural model of the HRTF produces well controlled horizontal as well as vertical effects. The implemented HRTF is a signal processing model which mimics the physical effects of sounds interacting with different parts of the body. The effectiveness of the method is tested by synthesizing spatial audio, in MATLAB, for use in listening tests with human subjects, and it is found to yield satisfactory results in comparison with existing models.
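
A bare-bones sketch of the two primary cues the model builds on: a mono source is "placed" by delaying one ear's signal (ITD) and attenuating it (ILD). A real structural HRTF model adds head-shadow and pinna filters; this shows only the cue arithmetic. The head radius, the Woodworth ITD approximation and the 6 dB maximum level difference are assumptions for illustration.

```python
import numpy as np

def binaural_from_mono(x: np.ndarray, fs: int, azimuth_deg: float,
                       head_radius: float = 0.0875, c: float = 343.0) -> np.ndarray:
    az = np.deg2rad(azimuth_deg)                        # 0 = front, +90 = right
    itd = head_radius / c * (az + np.sin(az))           # Woodworth ITD approximation
    delay = int(round(abs(itd) * fs))                   # far-ear delay in samples
    ild_gain = 10 ** (-6.0 * abs(np.sin(az)) / 20)      # crude ~6 dB max level difference
    near = x
    far = ild_gain * np.concatenate([np.zeros(delay), x])[: len(x)]
    left, right = (far, near) if azimuth_deg >= 0 else (near, far)
    return np.stack([left, right], axis=1)              # shape (samples, 2)

fs = 16000
tone = 0.5 * np.sin(2 * np.pi * 500 * np.arange(fs) / fs)
stereo = binaural_from_mono(tone, fs, azimuth_deg=45)
print(stereo.shape)
```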

Keywords: Auditory localization, Binaural sound, Head related impulse response, Head related transfer function, Interaural level difference, Interaural time difference, Localization cues.

314 Power System Stability Improvement by Simultaneous Tuning of PSS and SVC Based Damping Controllers Employing Differential Evolution Algorithm

Authors: Sangram Keshori Mohapatra, Sidhartha Panda, Prasant Kumar Satpathy

Abstract:

Power-system stability improvement by the simultaneous tuning of a power system stabilizer (PSS) and a Static Var Compensator (SVC) based damping controller is thoroughly investigated in this paper. Both local and remote signals, with their associated time delays, are considered in the present study. The design problem of the proposed controller is formulated as an optimization problem, and the differential evolution (DE) algorithm is employed to search for the optimal controller parameters. The performance of the proposed controllers is evaluated under different disturbances for both a single-machine infinite-bus power system and a multi-machine power system. The performance of the proposed controllers with variations in the signal transmission delays has also been investigated. The proposed stabilizers are tested on a weakly connected power system subjected to different disturbances. Nonlinear simulation results are presented to show the effectiveness and robustness of the proposed control schemes over a wide range of loading conditions and disturbances. Further, the proposed design approach is found to be robust and to improve stability effectively even under small-disturbance conditions.
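
A skeletal sketch of the tuning loop only: differential evolution searches the controller-parameter space to minimize a time-domain performance index. The objective below is an ITAE-style placeholder computed from a dummy damped response, standing in for the paper's nonlinear power-system simulation; the parameter bounds are likewise illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution

def itae_objective(params: np.ndarray) -> float:
    """ITAE of a placeholder closed-loop response whose damping depends on the gains."""
    k, t1, t2 = params
    t = np.linspace(0.0, 10.0, 1000)
    damping = 0.05 + 0.1 * k * t1 / (t2 + 1e-3)          # dummy damping model
    error = np.exp(-damping * t) * np.cos(2 * np.pi * 0.8 * t)
    return float(np.sum(t * np.abs(error)) * (t[1] - t[0]))  # integral of t·|e(t)|

bounds = [(0.1, 50.0), (0.01, 1.0), (0.01, 1.0)]   # stabilizer gain and lead-lag time constants
result = differential_evolution(itae_objective, bounds, maxiter=50, seed=0, polish=False)
print(result.x, result.fun)
```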

Keywords: Differential Evolution Algorithm, Power System Stability, Power System Stabilizer, Static Var Compensator

313 Automatic Extraction of Roads from High Resolution Aerial and Satellite Images with Heavy Noise

Authors: Yan Li, Ronald Briggs

Abstract:

Aerial and satellite images are information rich. They are also complex to analyze. For GIS systems, many features require the fast and reliable extraction of roads and intersections. In this paper, we study efficient and reliable automatic extraction algorithms that address some difficult issues commonly seen in high resolution aerial and satellite images yet not well addressed by existing solutions, such as blurring, broken or missing road boundaries, lack of road profiles, heavy shadows, and interfering surrounding objects. The new scheme is based on a new construct, the reference circle, to properly identify the pixels that belong to the same road and to use this information to recover the whole road network. This feature is invariant to the shape and direction of roads and tolerates heavy noise and disturbances. Road extraction based on reference circles is much more noise tolerant and flexible than previous edge-detection based algorithms. The scheme is able to extract roads reliably from images with complex contents and heavy obstructions, such as the high resolution aerial/satellite images available from Google Maps.

Keywords: Automatic road extraction, Image processing, Feature extraction, GIS update, Remote sensing, Geo-referencing

312 En-Face Optical Coherence Tomography Combined with Fluorescence in Material Defects Investigations for Ceramic Fixed Partial Dentures

Authors: C. Sinescu, M. Negrutiu, M. Romînu, C. Haiduc, E. Petrescu, M. Leretter, A.G. Podoleanu

Abstract:

Optical Coherence Tomography (OCT) combined with confocal microscopy, as a noninvasive method, permits the detection of material defects deep within the ceramic layers. For this study, 256 anterior and posterior metal-ceramic and integral ceramic fixed partial dentures were used, made with Empress (Ivoclar), Wollceram and CAD/CAM (Wieland) technology. For each investigated area, 350 slices were obtained and a 3D reconstruction was performed from each stack. Optical Coherence Tomography, as a noninvasive method, can be used as a quality-control technique in integral ceramic technology before placing the fixed partial dentures in the oral cavity. The purpose of this study is to evaluate the capability of en-face Optical Coherence Tomography (OCT) combined with a fluorescence method to detect and analyse possible material defects in metal-ceramic and integral ceramic fixed partial dentures. In conclusion, it is important to have a noninvasive method to investigate fixed partial prostheses before their insertion in the oral cavity in order to satisfy the high stress requirements and the esthetic function.

Keywords: Ceramic Fixed Partial Dentures, Material Defects, En face Optical Coherence Tomography, Fluorescence.

311 Protein Profiling in Alanine Aminotransferase Induced Patient cohort using Acetaminophen

Authors: Gry M, Bergström J, Lengquist J, Lindberg J, Drobin K, Schwenk J, Nilsson P, Schuppe-Koistinen I.

Abstract:

Sensitive and predictive DILI (Drug-Induced Liver Injury) biomarkers are needed in drug R&D to improve the early detection of hepatotoxicity. The discovery of DILI biomarkers with the predictive power to identify individuals at risk of DILI would represent a major advance in the development of personalized healthcare approaches. In this healthy-volunteer acetaminophen study (4 g/day for 7 days, with 3 monitored non-treatment days before and 4 after), 450 serum samples from 32 subjects were analyzed by protein profiling with antibody suspension bead arrays. Multiparallel protein profiles were generated using a DILI target protein array with 300 antibodies, selected on the basis of previous literature findings on putative DILI biomarkers and a screening process using pre-dose samples from the same cohort. Of the 32 subjects, 16 were found to develop an elevated ALT value (twice baseline; responders). Using the plasma profiling approach together with multivariate statistical analysis, some novel findings linked to lipid metabolism were made and, more importantly, endogenous protein profiles in baseline samples (prior to treatment) with predictive power for ALT elevations were identified.

Keywords: DILI, Plasma profiling, PLS-DA, Random forest.

310 Subjective Evaluation of Spectral and Time Domain Cascading Algorithm for Speech Enhancement for Mobile Communication

Authors: Harish Chander, Balwinder Singh, Ravinder Khanna

Abstract:

In this paper, we present a comparative subjective analysis of the Improved Minima Controlled Recursive Averaging (IMCRA) algorithm, the Kalman filter, and the cascade of the IMCRA and Kalman filter algorithms. The performance of speech enhancement algorithms can be evaluated in two different ways. One is the objective method of evaluation, in which the speech quality parameters are predicted computationally. The second is a subjective listening test, in which the processed speech signal is presented to listeners who judge the quality of the speech against certain parameters. The comparative objective evaluation of these algorithms was analyzed in terms of global SNR, segmental SNR and Perceptual Evaluation of Speech Quality (PESQ) by the authors, who reported that the cascaded algorithms give a substantial increase in the objective parameters. Since subjective evaluation is the real test of the quality of speech enhancement algorithms, the claimed superiority of the cascaded algorithms over the individual IMCRA and Kalman algorithms is tested through subjective analysis in this paper. The results of the subjective listening tests confirm that the cascaded algorithms perform better under all types of noise conditions.

Keywords: Speech enhancement, spectral domain, time domain, PESQ, subjective analysis, objective analysis.

309 ECG-Based Heartbeat Classification Using Convolutional Neural Networks

Authors: Jacqueline R. T. Alipo-on, Francesca I. F. Escobar, Myles J. T. Tan, Hezerul Abdul Karim, Nouar AlDahoul

Abstract:

Electrocardiogram (ECG) signal analysis and processing are crucial in the diagnosis of cardiovascular diseases, which are considered one of the leading causes of mortality worldwide. However, the traditional rule-based analysis of large volumes of ECG data is time-consuming, labor-intensive, and prone to human error. With the advancement of the programming paradigm, algorithms such as machine learning have been increasingly used to analyze ECG signals. In this paper, various deep learning algorithms were adapted to classify five classes of heartbeat types. The dataset used in this work is the synthetic MIT-Beth Israel Hospital (MIT-BIH) Arrhythmia dataset produced from generative adversarial networks (GANs). Various deep learning models such as the ResNet-50 convolutional neural network (CNN), a 1-D CNN, and long short-term memory (LSTM) were evaluated and compared. ResNet-50 was found to outperform the other models in terms of recall and F1 score, with five-fold average scores of 98.88% and 98.87%, respectively. The 1-D CNN, on the other hand, was found to have the highest average precision of 98.93%.
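
A minimal PyTorch sketch of a 1-D CNN of the kind evaluated above, mapping a single-lead beat segment to one of five classes. The layer sizes and the 187-sample input length are illustrative assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class BeatCNN(nn.Module):
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                      # x: (batch, 1, samples)
        return self.classifier(self.features(x).squeeze(-1))

model = BeatCNN()
beats = torch.randn(8, 1, 187)                 # 8 synthetic beat segments
print(model(beats).shape)                      # torch.Size([8, 5])
```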

Keywords: Heartbeat classification, convolutional neural network, electrocardiogram signals, ECG signals, generative adversarial networks, long short-term memory, LSTM, ResNet-50.

308 Enhancing the Performance of H.264/AVC in Adaptive Group of Pictures Mode Using Octagon and Square Search Pattern

Authors: S. Sowmyayani, P. Arockia Jansi Rani

Abstract:

This paper integrates the Octagon and Square Search pattern (OCTSS) motion estimation algorithm into the H.264/AVC (Advanced Video Coding) video codec in Adaptive Group of Pictures (AGOP) mode. The AGOP structure is computed based on scene changes in the video sequence. The octagon and square search pattern block-based motion estimation method is implemented in the inter-prediction process of H.264/AVC. These two methods reduce the bit rate and the computational complexity, respectively, while maintaining the quality of the video sequence. Experiments were conducted for different types of video sequence. The results substantially prove that the bit rate, computation time and PSNR gain achieved by the proposed method are better than those of the existing H.264/AVC with fixed GOP and with AGOP. With a marginal gain in quality of 0.28 dB and an average gain in bit rate of 132.87 kbps, the proposed method reduces the average computation time by 27.31 minutes when compared to the existing state-of-the-art H.264/AVC video codec.

Keywords: Block Distortion Measure, Block Matching Algorithms, H.264/AVC, Motion estimation, Search patterns, Shot cut detection.

307 On Analysis of Boundedness Property for ECATNets by Using Rewriting Logic

Authors: Noura Boudiaf, Allaoua Chaoui

Abstract:

To analyze the behavior of Petri nets, the accessibility graph and model checking are widely used. However, if the analyzed Petri net is unbounded, then the accessibility graph becomes infinite and model checking cannot be used, even for small Petri nets. ECATNets [2] are a category of algebraic Petri nets. The main feature of ECATNets is their sound and complete semantics based on rewriting logic [8] and its language Maude [9]. ECATNet analysis may be done using the accessibility analysis and model checking techniques defined in Maude. But these two techniques supported by Maude also do not work with infinite-state systems. As a category of Petri nets, ECATNets can be unbounded and hence infinite systems. In order to know whether we can apply Maude's accessibility analysis and model checking to an ECATNet, we propose in this paper an algorithm that detects whether the ECATNet is bounded or not. Moreover, we propose a rewriting logic based tool implementing this algorithm. We show that the development of this tool using the Maude system is facilitated by the reflectivity of rewriting logic. Indeed, the self-interpretation of this logic allows us both to model an ECATNet and to act on it.

Keywords: ECATNets, Rewriting Logic, Maude, Finite-state Systems, Infinite-state Systems, Boundedness Property Checking.

306 Evaluating Portfolio Performance by Highlighting Network Property and the Sharpe Ratio in the Stock Market

Authors: Zahra Hatami, Hesham Ali, David Volkman

Abstract:

Selecting a portfolio for investment is a crucial decision for individuals and legal entities. In the last two decades, with economic globalization, a stream of financial innovations has rushed to the aid of financial institutions. Selecting stocks for a portfolio remains a challenging task for investors. This study aims to create a financial network to identify optimal portfolios using network centrality metrics. This research presents a community detection technique for identifying superior stocks that can be combined into an optimal stock portfolio for investors. By using the advantages of the network and the properties of the extracted communities, a group of stocks was selected for each of several time periods. The performance of the optimal portfolios was compared to a well-known index, and their Sharpe ratios were calculated over time to evaluate their profitability for decision making. The analysis shows that the selected portfolios of stocks with low centrality measurements can outperform the market; however, they have a lower Sharpe ratio than stocks with high centrality scores. In other words, stocks with low centralities could outperform the S&P 500 yet have a lower Sharpe ratio than highly central stocks.
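
A quick sketch of the Sharpe-ratio comparison described above: annualized excess return over volatility for a candidate portfolio versus a benchmark. The return series, the zero risk-free rate and the daily frequency are placeholders for illustration.

```python
import numpy as np

def sharpe_ratio(returns: np.ndarray, rf_per_period: float = 0.0, periods_per_year: int = 252) -> float:
    """Annualized Sharpe ratio of a per-period return series."""
    excess = returns - rf_per_period
    return float(np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1))

rng = np.random.default_rng(1)
portfolio = rng.normal(0.0006, 0.010, 252)   # daily returns of the selected stocks (synthetic)
benchmark = rng.normal(0.0004, 0.012, 252)   # e.g. an S&P 500 proxy (synthetic)
print(sharpe_ratio(portfolio), sharpe_ratio(benchmark))
```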

Keywords: Portfolio management performance, network analysis, centrality measurements, Sharpe ratio.

305 Intelligent Temperature Controller for Water-Bath System

Authors: Om Prakash Verma, Rajesh Singla, Rajesh Kumar

Abstract:

Conventional controllers usually require prior knowledge of a mathematical model of the process. Inaccuracy of the mathematical model degrades the performance of the process, especially for non-linear and complex control problems. The process used here is the water-bath system, which is very widely used and non-linear to some extent. For the water-bath system, it is necessary to attain the desired temperature within a specified period of time while avoiding overshoot and absolute error and providing good temperature tracking capability; otherwise the process is disturbed.

To overcome the above difficulties, intelligent controllers, namely Fuzzy Logic (FL) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), are proposed in this paper. The fuzzy controller is designed to work with knowledge in the form of linguistic control rules. However, the translation of these linguistic rules into the framework of fuzzy set theory depends on the choice of certain parameters, for which no formal method is known. To design the ANFIS, a fuzzy inference system is combined with the learning capability of a neural network.

The analysis shows that ANFIS is best suited for adaptive temperature control of the above system. Compared to PID and FLC, ANFIS produces a stable control signal and has much better temperature tracking capability, with almost zero overshoot and minimum absolute error.

Keywords: PID Controller, FLC, ANFIS, Non-Linear Control System, Water-Bath System, MATLAB-7.
