Search results for: Traffic Signal Timing Optimization.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3595

385 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model

Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin

Abstract:

Early detection of anomalies in data centers is important to reduce downtime and the cost of periodic maintenance. However, there is little research on this topic and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers by combining sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. The model's performance is assessed a posteriori through the F1-score by comparing detected anomalies with the data center's history. The proposed model outperforms the state-of-the-art reconstruction method, which uses a single autoencoder taking multivariate sequences and detects anomalies with a threshold on the reconstruction error, achieving an F1-score of 83.60% compared to 24.16%.

Keywords: Anomaly detection, autoencoder, data centers, deep learning.
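
The reconstruction pipeline described above lends itself to a short sketch: one LSTM autoencoder per sensor trained on normal windows only, with a random forest classifying features of the reconstruction-error signal. This is a minimal illustration under assumed layer sizes, window length and input layout, not the authors' implementation.

```python
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier

def build_lstm_autoencoder(window_len: int) -> tf.keras.Model:
    """Small LSTM autoencoder for univariate windows of shape (window_len, 1)."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window_len, 1)),
        tf.keras.layers.LSTM(32),                       # encoder
        tf.keras.layers.RepeatVector(window_len),       # latent vector -> sequence
        tf.keras.layers.LSTM(32, return_sequences=True),
        tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1)),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

def error_features(x, x_hat):
    """Simple statistics of the reconstruction-error signal per window."""
    err = np.abs(x - x_hat).squeeze(-1)
    return np.stack([err.mean(axis=1), err.std(axis=1), err.max(axis=1)], axis=1)

# Hypothetical input layout: sensors maps a name to (normal_windows, all_windows, labels),
# each window array of shape (N, window_len, 1). One autoencoder is fit per sensor on
# normal data, and a random forest classifies the fused error features.
def fit_and_score(sensors, window_len=60):
    feats, labels = [], None
    for name, (normal, windows, y) in sensors.items():
        ae = build_lstm_autoencoder(window_len)
        ae.fit(normal, normal, epochs=10, batch_size=64, verbose=0)
        feats.append(error_features(windows, ae.predict(windows, verbose=0)))
        labels = y
    X = np.hstack(feats)                                # fuse per-sensor features
    return RandomForestClassifier(n_estimators=200).fit(X, labels)
```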

384 Process and Supply-Chain Optimization for Testing and Verification of Formation Tester/Pressure-While-Drilling Tools

Authors: Vivek V, Hafeez Syed, Darren W Terrell, Harit Naik, Halliburton

Abstract:

Applying a rigorous process to optimize the elements of a supply-chain network resulted in a reduction of waiting time for both the service provider and the customer. Different sources of downtime of the hydraulic pressure controller/calibrator (HPC) were causing interruptions in operations. The process examined all the issues to drive greater efficiencies. The issues included inherent design issues with the HPC pump, contamination of the HPC with impurities, and the lead time required for annual calibration in the USA. The HPC is used for mandatory testing/verification of formation tester/pressure measurement/logging-while-drilling tools by oilfield service providers, including Halliburton. After market study and analysis, it was concluded that the current HPC model is best suited to the oilfield industry. To use the existing HPC model effectively, design and contamination issues were addressed through design and process improvements. An optimum network is proposed after comparing different supply-chain models for calibration lead-time reduction.

Keywords: Hydraulic Pressure Controller/Calibrator, M/LWD, Pressure, FTWD

383 High Speed Video Transmission for Telemedicine using ATM Technology

Authors: J. P. Dubois, H. M. Chiu

Abstract:

In this paper, we study statistical multiplexing of VBR video in ATM networks. ATM promises to provide high-speed, real-time, multi-point to central video transmission for telemedicine applications in rural hospitals and in emergency medical services. Video coders are known to produce variable bit rate (VBR) signals, and the effects of aggregating these VBR signals need to be determined in order to design a telemedicine network infrastructure capable of carrying them. We first model the VBR video signal and simulate it using a generic continuous-data autoregressive (AR) scheme. We carry out the queueing analysis with the Fluid Approximation Model (FAM) and the Markov Modulated Poisson Process (MMPP). The study has shown a trade-off: multiplexing VBR signals reduces burstiness and improves resource utilization; however, the buffer size needs to be increased, with an associated economic cost. We also show that the MMPP model and the Fluid Approximation Model best fit the cell region and the burst region, respectively. Therefore, a hybrid of MMPP and FAM completely characterizes the overall performance of the ATM statistical multiplexer. The ramifications of this technology are clear: speed, reliability (lower loss rate and jitter), and increased capacity in video transmission for telemedicine. With migration to full IP-based networks still a long way from achieving both high speed and high quality of service, the proposed ATM architecture will remain of significant use for telemedicine.

Keywords: ATM, multiplexing, queueing, telemedicine, VBR.
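
A common way to simulate a continuous-state autoregressive VBR source is a first-order AR recursion whose coefficients are set from a target mean and variance. The sketch below is illustrative only; the coefficient values are placeholders rather than the parameters fitted in the paper.

```python
import numpy as np

def ar1_vbr_source(n_frames, a=0.878, mean_rate=0.52, std_rate=0.23, seed=0):
    """First-order AR model of a VBR video source (bit rate per frame).

    lambda[n] = a * lambda[n-1] + b * w[n], with w[n] ~ N(eta, 1);
    b and eta are derived so the process has the requested mean and std.
    """
    rng = np.random.default_rng(seed)
    b = std_rate * np.sqrt(1.0 - a**2)          # stationary std of the AR(1) process
    eta = mean_rate * (1.0 - a) / b             # drift giving the requested mean
    lam = np.empty(n_frames)
    lam[0] = mean_rate
    for n in range(1, n_frames):
        lam[n] = a * lam[n - 1] + b * rng.normal(eta, 1.0)
    return np.clip(lam, 0.0, None)              # bit rates cannot be negative

# aggregate several independent sources to study statistical multiplexing
sources = np.array([ar1_vbr_source(10_000, seed=s) for s in range(10)])
aggregate = sources.sum(axis=0)
print("aggregate mean:", aggregate.mean(),
      "relative std:", aggregate.std() / aggregate.mean())  # burstiness drops with multiplexing
```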

382 Prioritization Method in the Fuzzy Analytic Network Process by Fuzzy Preferences Programming Method

Authors: Tarifa S. Almulhim, Ludmil Mikhailov, Dong-Ling Xu

Abstract:

In this paper, a method for deriving a group priority vector in the Fuzzy Analytic Network Process (FANP) is proposed. By introducing importance weights of multiple decision makers (DMs) based on their experience, the Fuzzy Preferences Programming Method (FPP) is extended to a fuzzy group prioritization problem in the FANP. Additionally, fuzzy pairwise comparison judgments are used rather than exact numerical assessments in order to model the uncertainty and imprecision in the DMs' judgments, and the fuzzy group prioritization problem is then transformed into a fuzzy non-linear programming optimization problem which maximizes group satisfaction. Unlike known fuzzy prioritization techniques, the new method proposed in this paper can easily derive crisp weights from an incomplete and inconsistent fuzzy set of comparison judgments and does not require additional aggregation procedures. Detailed numerical examples are used to illustrate the implementation of our approach and to compare it with the latest fuzzy prioritization methods.

Keywords: Fuzzy Analytic Network Process (FANP), Fuzzy Non-linear Programming, Fuzzy Preferences Programming Method (FPP), Multiple Criteria Decision-Making (MCDM), Triangular Fuzzy Number.
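
The core fuzzy preference programming idea, maximising a satisfaction level lambda subject to triangular fuzzy pairwise judgments, can be sketched as a small non-linear program. The example below is a single-decision-maker toy with invented judgment values; it omits the group importance weights introduced in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Triangular fuzzy judgments a_ij ~ (l, m, u) meaning "w_i / w_j is about m".
# Illustrative values for 3 criteria: tuples of (i, j, l, m, u).
judgments = [(0, 1, 1.5, 2.0, 2.5),
             (0, 2, 2.5, 3.0, 3.5),
             (1, 2, 1.0, 1.5, 2.0)]
n = 3

def neg_lambda(x):
    return -x[-1]                                  # maximise the satisfaction level lambda

cons = [{"type": "eq", "fun": lambda x: x[:n].sum() - 1.0}]
for i, j, l, m, u in judgments:
    # membership of w_i / w_j in (l, m, u) must be at least lambda:
    #   w_i >= (l + lam*(m - l)) * w_j   and   w_i <= (u - lam*(u - m)) * w_j
    cons.append({"type": "ineq",
                 "fun": lambda x, i=i, j=j, l=l, m=m: x[i] - (l + x[-1] * (m - l)) * x[j]})
    cons.append({"type": "ineq",
                 "fun": lambda x, i=i, j=j, u=u, m=m: (u - x[-1] * (u - m)) * x[j] - x[i]})

x0 = np.append(np.full(n, 1.0 / n), 0.5)           # initial weights and lambda
bounds = [(1e-6, 1.0)] * n + [(None, 1.0)]         # lambda may be negative if judgments are inconsistent
res = minimize(neg_lambda, x0, method="SLSQP", bounds=bounds, constraints=cons)
print("crisp weights:", res.x[:n], "satisfaction:", res.x[-1])
```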

381 EEG Analysis of Brain Dynamics in Children with Language Disorders

Authors: Hamed Alizadeh Dashagholi, Hossein Yousefi-Banaem, Mina Naeimi

Abstract:

The current study was established for EEG signal analysis in patients with language disorders. A language disorder can be defined as a meaningful delay in the use or understanding of spoken or written language. The disorder can involve the content or meaning of language, its form, or its use. Here we applied Z-score, power spectrum, and coherence methods to discriminate the language disorder data from healthy data. The power spectrum of each channel in the alpha, beta, gamma, delta, and theta frequency bands was measured. In addition, the intra-hemispheric Z-score was obtained with a scoring algorithm. The obtained results showed a high Z-score and power spectrum in posterior regions. Therefore, we can conclude that people with language disorders have higher brain activity in the frontal region of the brain in comparison with healthy people. The results showed that high coherence correlates with irregularities in the ERP and is often found during complex tasks, whereas low coherence is often found in pathological conditions. The Z-score analysis of the brain dynamics showed a higher Z-score peak frequency in the delta, theta and beta sub-bands of language disorder patients. In this analysis there were signs of activity in both hemispheres, and the left-dominant hemisphere was more active than the right.

Keywords: EEG, electroencephalography, coherence methods, language disorder, power spectrum, z-score.
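
Band power via Welch's method and magnitude-squared coherence between channel pairs can be computed along the following lines; the band limits, window length and synthetic signals are illustrative assumptions, not the study's settings.

```python
import numpy as np
from scipy.signal import welch, coherence
from scipy.integrate import trapezoid

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(x, fs):
    """Absolute power of one EEG channel in each classical frequency band."""
    f, pxx = welch(x, fs=fs, nperseg=2 * fs)
    return {name: trapezoid(pxx[(f >= lo) & (f < hi)], f[(f >= lo) & (f < hi)])
            for name, (lo, hi) in BANDS.items()}

def band_coherence(x, y, fs, band=(8, 13)):
    """Mean magnitude-squared coherence between two channels within a band."""
    f, cxy = coherence(x, y, fs=fs, nperseg=2 * fs)
    lo, hi = band
    return cxy[(f >= lo) & (f < hi)].mean()

# toy usage with synthetic signals standing in for two EEG channels
fs = 256
t = np.arange(0, 30, 1 / fs)
ch1 = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
ch2 = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * np.random.randn(t.size)
print(band_powers(ch1, fs))
print("alpha coherence:", band_coherence(ch1, ch2, fs))
```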

380 Development of a Neural Network based Algorithm for Multi-Scale Roughness Parameters and Soil Moisture Retrieval

Authors: L. Bennaceur Farah, I. R. Farah, R. Bennaceur, Z. Belhadj, M. R. Boussema

Abstract:

The overall objective of this paper is to retrieve soil surface parameters, namely roughness and soil moisture related to the dielectric constant, by inverting the radar signal backscattered from natural soil surfaces. Because the classical description of roughness using statistical parameters like the correlation length does not lead to satisfactory results in predicting radar backscattering, we used a multi-scale roughness description based on the wavelet transform and the Mallat algorithm. In this description, the surface is considered as a superposition of a finite number of one-dimensional Gaussian processes, each having a spatial scale. A second step in this study consisted in adapting a direct model simulating radar backscattering, namely the small perturbation model, to this multi-scale surface description. We investigated the impact of this description on radar backscattering through a sensitivity analysis of the backscattering coefficient to the multi-scale roughness parameters. To perform the inversion of the multi-scale small perturbation scattering model (MLS SPM), we used a multi-layer neural network architecture trained with the backpropagation learning rule. The inversion leads to satisfactory results with a relative uncertainty of 8%.

Keywords: Remote sensing, rough surfaces, inverse problems, SAR, radar scattering, Neural networks and Fractals.
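
The inversion step, training a multi-layer network to map backscatter back to surface parameters, can be illustrated with a toy forward model standing in for the multi-scale SPM; the parameter ranges, the made-up forward relation and the scikit-learn regressor are assumptions for the sketch only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def toy_forward(params):
    """Stand-in forward model: (rms height s, correlation length l, moisture mv) -> (HH, VV) in dB."""
    s, l, mv = params.T
    hh = -20 + 8 * np.log10(s) - 2.0 * np.log10(l) + 0.30 * mv
    vv = -18 + 7 * np.log10(s) - 1.5 * np.log10(l) + 0.35 * mv
    return np.stack([hh, vv], axis=1)

# sample surface parameters, simulate backscatter, and train a network to invert it
params = np.stack([rng.uniform(0.3, 3.0, 20_000),     # rms height (cm)
                   rng.uniform(5.0, 30.0, 20_000),    # correlation length (cm)
                   rng.uniform(5.0, 40.0, 20_000)],   # volumetric soil moisture (%)
                  axis=1)
sigma0 = toy_forward(params) + rng.normal(0, 0.5, (20_000, 2))   # add sensor noise

X_train, X_test, y_train, y_test = train_test_split(sigma0, params, test_size=0.2)
inverse_net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(X_train, y_train)
rel_err = np.abs(inverse_net.predict(X_test) - y_test) / y_test
print("mean relative uncertainty per parameter:", rel_err.mean(axis=0))
```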

379 Satellite Imagery Classification Based on Deep Convolution Network

Authors: Zhong Ma, Zhuping Wang, Congxin Liu, Xiangzeng Liu

Abstract:

Satellite imagery classification is a challenging problem with many practical applications. In this paper, we designed a deep convolution neural network (DCNN) to classify satellite imagery. The contributions of this paper are twofold. First, to cope with the large-scale variance in satellite images, we introduced the inception module, which has multiple filters with different sizes at the same level, as the building block of our DCNN model. Second, we proposed a genetic algorithm based method to efficiently search for the best hyper-parameters of the DCNN in a large search space. The proposed method is evaluated on a benchmark database. The results of the proposed hyper-parameter search method show that it guides the search towards better regions of the parameter space. Based on the hyper-parameters found, we built our DCNN models and evaluated their performance on satellite imagery classification; the results show that the classification accuracy of the proposed models outperforms the state-of-the-art method.

Keywords: Satellite imagery classification, deep convolution network, genetic algorithm, hyper-parameter optimization.
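
A genetic search over a discrete hyper-parameter space of the kind described can be sketched as follows; the search space, selection scheme and the placeholder fitness function (which would in practice train and validate the DCNN) are all illustrative assumptions.

```python
import random

# Hypothetical hyper-parameter search space for the DCNN (values are illustrative).
SPACE = {
    "learning_rate": [1e-1, 1e-2, 1e-3, 1e-4],
    "batch_size": [16, 32, 64, 128],
    "n_inception_blocks": [2, 3, 4, 5],
    "dropout": [0.0, 0.25, 0.5],
}
KEYS = list(SPACE)

def evaluate(individual):
    """Placeholder: in the real system this would train the DCNN with these
    hyper-parameters and return validation accuracy."""
    return random.random()

def random_individual():
    return {k: random.choice(v) for k, v in SPACE.items()}

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in KEYS}       # uniform crossover

def mutate(ind, rate=0.2):
    return {k: (random.choice(SPACE[k]) if random.random() < rate else v)
            for k, v in ind.items()}

def genetic_search(pop_size=20, generations=10, elite=2):
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=evaluate, reverse=True)  # rank by fitness
        parents = scored[:pop_size // 2]                         # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - elite)]
        population = scored[:elite] + children                   # elitism
    return max(population, key=evaluate)

print(genetic_search())
```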

378 Understanding Walkability in the Libyan Urban Space: Policies, Perceptions and Smart Design for Sustainable Tripoli

Authors: A. Abdulla Khairi Mohamed, Mohamed Gamal Abdelmonem, Gehan Selim

Abstract:

Walkability in civic and public spaces in Libyan cities is challenging due to the lack of accessibility design, informal merging into car traffic, and the general absence of adequate urban and space planning. The lack of accessible and pedestrian-friendly public spaces in Libyan cities has emerged as a major concern for the government if it is to develop smart and sustainable spaces for the 21st century. A walkable urban space has become a driver for urban development and the redistribution of land use to ensure pedestrian and walkable routes between sites of living and workplaces. The characteristics of urban open space in the city centre play a main role in attracting people to walk when attending to their daily needs, recreation and daily sports. There is a significant gap in the understanding of the perceptions, feasibility and capability of Libyan urban space to accommodate, enhance or support the smart design of a walkable, pedestrian-friendly environment that is safe and accessible to everyone. The paper aims to undertake observations of walkability and walkable space in the city of Tripoli as a benchmark for Libyan cities; to assess the validity and consistency of the seven principal aspects of smart design, safety and accessibility and the 51 factors that affect walkability in open urban space in Tripoli, through the analysis of input from 10 local urban space experts (town planners, architects, transport engineers and urban designers); and to explore user groups' perceptions of accessibility in walkable spaces in Libyan cities through questionnaires. The study sampled 200 respondents in 2015-16. The results of this study are useful for urban planning, to classify the walkable urban space elements that affect the level of walkability in Libyan cities and to create sustainable and liveable urban spaces.

Keywords: Walkability, sustainability, liveability, accessibility, safety.

377 Statistical Optimization of Process Variables for Direct Fermentation of 226 White Rose Tapioca Stem to Ethanol by Fusarium oxysporum

Authors: A. Magesh, B. Preetha, T. Viruthagiri

Abstract:

Direct fermentation of 226 white rose tapioca stem to ethanol by Fusarium oxysporum was studied in a batch reactor. Fermentation to ethanol can be achieved by sequential pretreatment using dilute acid and dilute alkali solutions on 100-mesh tapioca stem particles. The quantitative effects of substrate concentration, pH and temperature on ethanol concentration were optimized using a full factorial central composite design experiment. The optimum process conditions were then obtained using response surface methodology. The quadratic model indicated that a substrate concentration of 33 g/l, pH 5.52 and a temperature of 30.13 °C were optimum for a maximum ethanol concentration of 8.64 g/l. The predicted optimum process conditions obtained using response surface methodology were verified through confirmatory experiments. The Luedeking-Piret model was used to study the product formation kinetics of ethanol production, and the model parameters were evaluated using experimental data.

Keywords: Fusarium oxysporum, Lignocellulosic biomass, Product formation kinetics, Statistical experimental design
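
Fitting a full second-order response surface and locating its stationary point can be done with ordinary least squares on coded factors, roughly as below; the design points and responses here are random placeholders, not the experimental data of the study.

```python
import numpy as np

# x columns: coded levels of substrate concentration, pH, temperature;
# y: ethanol concentration (g/l). Values are placeholders, not the paper's data.
x = np.random.uniform(-1, 1, (20, 3))
y = 8 - (x**2).sum(axis=1) + 0.5 * x[:, 0] + np.random.normal(0, 0.1, 20)

def quadratic_design(x):
    """Full second-order model: intercept, linear, interaction and squared terms."""
    x1, x2, x3 = x.T
    return np.column_stack([np.ones(len(x)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3, x1**2, x2**2, x3**2])

beta, *_ = np.linalg.lstsq(quadratic_design(x), y, rcond=None)
b0, b_lin, b_int, b_sq = beta[0], beta[1:4], beta[4:7], beta[7:10]

# Stationary point: solve grad = 0 for y = b0 + b'x + x'Bx with symmetric B.
B = np.array([[b_sq[0],      b_int[0] / 2, b_int[1] / 2],
              [b_int[0] / 2, b_sq[1],      b_int[2] / 2],
              [b_int[1] / 2, b_int[2] / 2, b_sq[2]]])
x_star = np.linalg.solve(-2 * B, b_lin)
y_star = b0 + b_lin @ x_star + x_star @ B @ x_star
print("stationary point (coded units):", x_star, "predicted ethanol:", y_star)
```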

376 Progressive Watershed Management Approaches in Iran

Authors: S. H. R. Sadeghi, A. Sadoddin, A. Najafinejad

Abstract:

Expansionism and an ever-increasing population threaten all resources worldwide. The issue is therefore critical in developing countries like Iran, where new technologies proliferate rapidly and are unguardedly applied, resulting in unexpected outcomes. However, uncommon and comprehensive approaches have been introduced to take all the different aspects involved into consideration. In the last decade, approaches such as community-based, stakeholder-oriented, adaptive and, ultimately, integrated management have emerged and are being developed for efficient co-management or best management and for the economic and sustainable development and management of watershed resources in Iran. In the present paper, an attempt has been made to focus on the state-of-the-art approaches to watershed resource management applied in Iran. The study is supported by reports of case studies conducted throughout the country involving the previously mentioned approaches. Scrutiny of the results of this research verified a progressive tendency in the managerial approaches within watershed management strategies, leading towards a generally balanced situation. The approaches are firmly rooted in the educational, research, executive, legal and policy-making sectors, leading to some recovery at different levels. However, there is a long way to go to neutralize the detrimental effects of unscientific, illegal and excessive exploitation of watershed resources in Iran.

Keywords: Comprehensive management, ecosystem balance, integrated watershed management, land resources optimization.

375 Aggregation Scheduling Algorithms in Wireless Sensor Networks

Authors: Min Kyung An

Abstract:

In Wireless Sensor Networks, which consist of tiny wireless sensor nodes with limited battery power, one of the most fundamental applications is data aggregation, which collects nearby environmental conditions and aggregates the data to a designated destination, called a sink node. Important issues concerning data aggregation are time efficiency and energy consumption due to the nodes' limited energy, and therefore the related problem, named Minimum Latency Aggregation Scheduling (MLAS), has been the focus of many researchers. Its objective is to compute the minimum latency schedule, that is, a schedule with the minimum number of timeslots, such that the sink node can receive the aggregated data from all the other nodes without any collision or interference. For this problem, two interference models, the graph model and the more realistic physical interference model known as the Signal-to-Interference-plus-Noise Ratio (SINR) model, have been adopted with different power models, uniform power and non-uniform power (with or without power control), and different antenna models, omni-directional and directional. In this survey article, as the problem has been proven to be NP-hard, we present and compare several state-of-the-art approximation algorithms in the various models on the basis of latency as the performance measure.

Keywords: Data aggregation, convergecast, gathering, approximation, interference, omni-directional, directional.
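
To give a concrete feel for what an aggregation schedule looks like, the sketch below builds a BFS aggregation tree and greedily packs non-conflicting child-to-parent transmissions into timeslots under a conservative graph interference model; it is a toy heuristic, not one of the surveyed approximation algorithms.

```python
from collections import deque

def greedy_aggregation_schedule(adj, sink):
    """Toy greedy MLAS heuristic under a conservative graph interference model.

    adj : dict mapping node -> set of neighbouring nodes; sink : destination node.
    A node transmits only after all of its own children have transmitted, each node
    takes part in at most one transmission per slot, and no scheduled sender may be
    a neighbour of another scheduled receiver.
    """
    parent = {sink: None}
    q = deque([sink])
    while q:                                    # BFS tree rooted at the sink
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    children = {u: [v for v in parent if parent[v] == u] for u in parent}

    pending = {v for v in parent if v != sink}  # nodes that still have to transmit
    done, schedule = set(), []
    while pending:
        senders, receivers, slot = set(), set(), []
        for v in sorted(pending):
            if any(c not in done for c in children[v]):
                continue                        # aggregate the own subtree first
            p = parent[v]
            free = (v not in senders | receivers) and (p not in senders | receivers)
            no_collision = not (adj[p] & senders) and not (adj[v] & receivers)
            if free and no_collision:
                slot.append((v, p))
                senders.add(v)
                receivers.add(p)
        for v, _ in slot:
            pending.discard(v)
            done.add(v)
        schedule.append(slot)
    return schedule

# tiny example: a 6-node network with node 0 as the sink
adj = {0: {1, 2}, 1: {0, 3, 4}, 2: {0, 5}, 3: {1}, 4: {1}, 5: {2}}
for t, slot in enumerate(greedy_aggregation_schedule(adj, sink=0), start=1):
    print(f"timeslot {t}: {slot}")
```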

374 Truck Routing Problem Considering Platooning and Drivers’ Breaks

Authors: Xiaoyuan Yan, Min Xu

Abstract:

Truck platooning refers to a convoy of digitally connected automated trucks traveling safely with a small inter-vehicle gap. It has been identified as one of the most promising and applicable technologies towards automated and sustainable freight transportation. Although truck platooning delivers significant energy-saving benefits, it cannot be realized without good coordination of drivers' shifts to lead the platoons subject to their mandatory breaks. Therefore, this study aims to route a fleet of trucks to their destinations using the least amount of fuel by maximizing platooning opportunities under the regulations on drivers' mandatory breaks. We formulate this platoon coordination problem as a mixed-integer linear programming problem and solve it with CPLEX. Numerical experiments are conducted to demonstrate the effectiveness and efficiency of the proposed model. In addition, we explore the impact of drivers' compulsory breaks on fuel-saving performance. The results show only a slight increase in total fuel costs in the presence of drivers' compulsory breaks, thanks to the driving-while-resting benefit provided to the trailing trucks. This study may serve as a guide for operators of automated freight transportation.

Keywords: Truck platooning, route optimization, compulsory breaks, energy saving.

373 Comparison of Number of Waves Surfed and Duration Using Global Positioning System and Inertial Sensors

Authors: J. Madureira, R. Lagido, I. Sousa

Abstract:

Surfing is an increasingly popular sport and its performance evaluation is often qualitative. This work aims at using a smartphone to collect and analyze GPS and inertial sensor data in order to obtain quantitative metrics of surfing performance. Two approaches are compared for the detection of wave rides, computing the number of waves ridden in a surfing session, the starting time of each wave and its duration. The first approach is based on computing the velocity from the Global Positioning System (GPS) signal and finding the velocity thresholds that allow identifying the start and end of each wave ride. The second approach adds information from the Inertial Measurement Unit (IMU) of the smartphone to the velocity thresholds obtained from the GPS unit to determine the start and end of each wave ride. The two methods were evaluated using GPS and IMU data from two surfing sessions and validated with similar metrics extracted from video data collected from the beach. The second method, combining GPS and IMU data, was found to be more accurate in determining the number of waves, their start times and durations. This paper shows that it is feasible to use smartphones for the quantification of performance metrics during surfing. In particular, the waves ridden and their durations can be accurately determined using the smartphone GPS and IMU.

Keywords: Inertial Measurement Unit (IMU), Global Positioning System (GPS), smartphone, surfing performance.
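
The GPS-only part of the first approach, speed from consecutive fixes plus start/stop velocity thresholds, can be sketched as follows; the haversine distance, threshold values and minimum ride duration are assumptions for illustration, not the calibrated values from the study.

```python
import numpy as np

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between consecutive GPS fixes."""
    r = 6371000.0
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi, dlmb = np.radians(lat2 - lat1), np.radians(lon2 - lon1)
    a = np.sin(dphi / 2)**2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2)**2
    return 2 * r * np.arcsin(np.sqrt(a))

def detect_wave_rides(t, lat, lon, v_start=2.5, v_stop=1.5, min_duration=3.0):
    """Detect wave rides from GPS speed with start/stop thresholds (m/s).

    Returns a list of (start_time, duration) tuples. Threshold values are
    illustrative; in practice they would be tuned against video ground truth.
    """
    dt = np.diff(t)
    speed = haversine_m(lat[:-1], lon[:-1], lat[1:], lon[1:]) / dt
    rides, riding, t0 = [], False, 0.0
    for i, v in enumerate(speed):
        if not riding and v >= v_start:
            riding, t0 = True, t[i]
        elif riding and v < v_stop:
            riding = False
            if t[i + 1] - t0 >= min_duration:
                rides.append((t0, t[i + 1] - t0))
    if riding and t[-1] - t0 >= min_duration:   # session ended while still riding
        rides.append((t0, t[-1] - t0))
    return rides
```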

372 A Few Descriptive and Optimization Issues on the Material Flow at a Research-Academic Institution: The Role of Simulation

Authors: D. R. Delgado Sobrino, P. Košťál, J. Oravcová

Abstract:

Lately, significant work in the area of Intelligent Manufacturing has become public and has mainly been applied within the frame of industrial purposes. Special efforts have been made in the implementation of new technologies and management and control systems, among many others, which have all advanced the field. Aware of all this, and due to the scope of new projects and the need to turn the existing flexible ideas into more autonomous and intelligent ones, i.e. Intelligent Manufacturing, the present paper emerges with the main aim of contributing to the design and analysis of the material flow in systems, cells or workstations under this new "intelligent" denomination. For this, besides offering a conceptual basis on some of the key points to be taken into account and some general principles to consider in the design and analysis of the material flow, some tips on how to define other possible alternative material flow scenarios and a classification of the states a system, cell or workstation can be in are offered as well. All this is done with the intention of relating it to the use of simulation tools, which have been briefly addressed with a special focus on the Witness simulation package. For better comprehension, the previous elements are supported by a detailed layout, other figures and a few expressions that could help in obtaining the necessary data. Such data and others will be used in the future, when simulating the scenarios in the search for the best material flow configurations.

Keywords: Flexible/Intelligent Manufacturing System/Cell (F/IMS/C), material flow/design/configuration (MF/D/C), workstation.

371 Protein Production by Bacillus subtilis ATCC 21332 in the Presence of Cymbopogon Essential Oils

Authors: Hanina M. N., Hairul Shahril M., Mohd Fazrullah Innsan M. F., Ismatul Nurul Asyikin I., Abdul Jalil A. K, Salina M. R., Ahmad I.B.

Abstract:

Protein levels produced by bacteria may be increased in stressful surroundings, such as in the presence of antibiotics. It appears that many antimicrobial agents or antibiotics, when used at low concentrations, have in common the ability to activate or repress gene transcription, which is distinct from their inhibitory effect. There have been comparatively few studies on the potential of antibiotics or natural compounds as specific chemical signals that can trigger a variety of biological functions. Therefore, this study focused on the effect of essential oils from Cymbopogon flexuosus and C. nardus in regulating protein production by Bacillus subtilis ATCC 21332. The Minimum Inhibitory Concentrations (MICs) of both essential oils against B. subtilis were determined using a microdilution assay, yielding 0.2% and 1.56% for C. flexuosus and C. nardus, respectively. The bacteria were further exposed to each essential oil at a concentration of 0.01×MIC for 2 days. The proteins were then isolated and analyzed by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE). The protein profile showed that a band with an approximate size of 250 kDa appeared for the bacteria treated with essential oils. Thus, Bacillus subtilis ATCC 21332 under stressful conditions, in the presence of essential oils at low concentration, could induce protein production.

Keywords: Bacillus subtilis ATCC 21332, Cymbopogon essential oils, protein

370 Fuzzy Population-Based Meta-Heuristic Approaches for Attribute Reduction in Rough Set Theory

Authors: Mafarja Majdi, Salwani Abdullah, Najmeh S. Jaddi

Abstract:

One of the global combinatorial optimization problems in machine learning is feature selection. It is concerned with removing irrelevant, noisy, and redundant data while keeping the original meaning of the data. Attribute reduction in rough set theory is an important feature selection method. Since attribute reduction is an NP-hard problem, it is necessary to investigate fast and effective approximate algorithms. In this paper, we propose two feature selection mechanisms based on memetic algorithms (MAs) which combine the genetic algorithm with a fuzzy record-to-record travel algorithm and a fuzzy controlled great deluge algorithm, to identify a good balance between local search and genetic search. In order to verify the proposed approaches, numerical experiments are carried out on thirteen datasets. The results show that the MA approaches are efficient in solving attribute reduction problems when compared with other meta-heuristic approaches.

Keywords: Rough Set Theory, Attribute Reduction, Fuzzy Logic, Memetic Algorithms, Record to Record Algorithm, Great Deluge Algorithm.

369 Generational Pipelined Genetic Algorithm (PLGA) Using Stochastic Selection

Authors: Malay K. Pakhira, Rajat K. De

Abstract:

In this paper, a pipelined version of the genetic algorithm, called PLGA, and a corresponding hardware platform are described. The basic operations of the conventional GA (CGA) are pipelined using an appropriate selection scheme. The selection operator used here is stochastic in nature and is called SA-selection. This helps maintain the basic generational nature of the proposed pipelined GA (PLGA). A number of benchmark problems are used to compare the performance of conventional roulette-wheel selection and SA-selection. These include unimodal and multimodal functions with dimensionality varying from very small to very large. It is seen that the SA-selection scheme gives performance comparable to the classical roulette-wheel selection scheme for all instances when solution quality and rate of convergence are considered. The speedups obtained by PLGA for the different benchmarks are found to be significant. It is shown that a complete hardware pipeline can be developed using the proposed scheme if parallel evaluation of the fitness expression is possible. In this connection, a low-cost but very fast hardware evaluation unit is described. Results of simulation experiments show that in a pipelined hardware environment, PLGA will be much faster than CGA. In terms of efficiency, PLGA is also found to outperform parallel GA (PGA).

Keywords: Hardware evaluation, Hardware pipeline, Optimization, Pipelined genetic algorithm, SA-selection.
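
A stochastic, simulated-annealing-style selection operator of the general kind referred to as SA-selection can be sketched as below: the current individual is compared with a randomly drawn rival, and a worse rival is still accepted with a Boltzmann probability. This is an illustrative operator on a OneMax toy problem, not the authors' exact scheme or its hardware pipeline.

```python
import math
import random

def sa_select(population, fitness, current_idx, temperature):
    """Simulated-annealing-style selection: compare the current individual with a
    randomly drawn rival; a worse rival is still accepted with Boltzmann probability.
    Decisions are made slot-by-slot, which is what makes the scheme pipeline-friendly."""
    rival_idx = random.randrange(len(population))
    delta = fitness[rival_idx] - fitness[current_idx]      # maximisation problem
    if delta >= 0 or random.random() < math.exp(delta / temperature):
        return rival_idx
    return current_idx

# toy usage: select mates for a population of bit-strings at a fixed temperature
population = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]
fitness = [sum(ind) for ind in population]                 # OneMax as the benchmark
temperature = 5.0
mates = [sa_select(population, fitness, i, temperature) for i in range(len(population))]
print(mates)
```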

368 Broadband PowerLine Communications: Performance Analysis

Authors: Justinian Anatory, Nelson Theethayi, M. M. Kissaka, N. H. Mvungi

Abstract:

The power line channel has been proposed as an alternative for broadband data transmission, especially in developing countries like Tanzania [1]. However, the channel is affected by stochastic attenuation and deep notches which can limit the channel capacity and achievable data rate. Various studies have characterized the channel without giving the exact maximum performance and the limitations on data transfer rate, perhaps because of the complexity of the channel models used. In this paper, the channel performance of medium-voltage, low-voltage and indoor power line channels is presented. In the investigations, orthogonal frequency division multiplexing (OFDM) with phase shift keying (PSK) as the carrier modulation scheme is considered for indoor, medium-voltage and low-voltage channels with a typical configuration of ten branches, and Golay coding is additionally applied to the medium-voltage channel. In the channels' frequency responses, deep notches are observed at various frequencies, which can reduce the achievable data rate. However, it is observed that data rates up to 240 Mbps are realized for a signal-to-noise ratio of about 50 dB for indoor and low-voltage channels, whereas for medium voltage a typical link with ten branches is affected by strong multipath and coding is required for feasible broadband data transfer.

Keywords: Powerline Communications, branched network, channel model, modulation, channel performance, OFDM.
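
A bare-bones OFDM modulator with QPSK subcarrier mapping and a cyclic prefix, the carrier modulation scheme considered above, might look like the following; the subcarrier count and prefix length are assumptions, and the power line channel model and Golay coding are omitted.

```python
import numpy as np

def qpsk_map(bits):
    """Map bit pairs to unit-energy QPSK symbols."""
    b = bits.reshape(-1, 2)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

def ofdm_modulate(bits, n_subcarriers=64, cp_len=16):
    """Serial-to-parallel QPSK mapping, IFFT per OFDM symbol, cyclic prefix."""
    symbols = qpsk_map(bits).reshape(-1, n_subcarriers)    # one row per OFDM symbol
    time = np.fft.ifft(symbols, axis=1)
    cp = time[:, -cp_len:]                                 # cyclic prefix
    return np.hstack([cp, time]).ravel()

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 2 * 64 * 10)                     # 10 OFDM symbols of QPSK
tx = ofdm_modulate(bits)
print(tx.shape)   # (10 * (64 + 16),) complex baseband samples
```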

367 Comparison of Compression Ability Using DCT and Fractal Technique on Different Imaging Modalities

Authors: Sumathi Poobal, G. Ravindran

Abstract:

Image compression is one of the most important applications of Digital Image Processing. Advanced medical imaging requires the storage of large quantities of digitized clinical data. Due to constrained bandwidth and storage capacity, however, a medical image must be compressed before transmission and storage. There are two types of compression methods, lossless and lossy. In lossless compression, the original image is retrieved without any distortion. In lossy compression, the reconstructed images contain some distortion. The Discrete Cosine Transform (DCT) and Fractal Image Compression (FIC) are lossy compression methods. This work shows that lossy compression methods can be chosen for medical image compression without significant degradation of image quality. In this work, DCT and fractal compression using Partitioned Iterated Function Systems (PIFS) are applied to different modalities of images such as CT scan, ultrasound, angiogram, X-ray and mammogram. Approximately 20 images are considered in each modality, and the average values of compression ratio and Peak Signal to Noise Ratio (PSNR) are computed and studied. The quality of the reconstructed image is judged by the PSNR values. Based on the results, it can be concluded that DCT has higher PSNR values and FIC has a higher compression ratio. Hence, in medical image compression, DCT can be used wherever picture quality is preferred, and FIC is used wherever compression of images for storage and transmission is the priority, without losing diagnostic picture quality.

Keywords: DCT, FIC, PIFS, PSNR.
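
A minimal DCT-based lossy compression experiment, keeping only the largest-magnitude coefficients and measuring PSNR, can be written as below; the whole-image transform, retention ratio and synthetic test image are simplifications for illustration rather than the codecs compared in the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_compress(image, keep_ratio=0.1):
    """Keep only the largest-magnitude DCT coefficients and reconstruct."""
    coeffs = dctn(image, norm="ortho")
    thresh = np.quantile(np.abs(coeffs), 1.0 - keep_ratio)
    compressed = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
    return idctn(compressed, norm="ortho")

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal to Noise Ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(float) - reconstructed) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

# toy usage with a synthetic 8-bit "image" standing in for a CT/ultrasound slice
img = (127 + 60 * np.sin(np.linspace(0, 8 * np.pi, 256))[:, None]
       * np.cos(np.linspace(0, 8 * np.pi, 256))[None, :]).astype(np.uint8)
rec = dct_compress(img, keep_ratio=0.05)
print("coefficients kept: 5%   PSNR:", round(psnr(img, rec), 2), "dB")
```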

366 Diagnosing Dangerous Arrhythmia of Patients by Automatic Detecting of QRS Complexes in ECG

Authors: Jia-Rong Yeh, Ai-Hsien Li, Jiann-Shing Shieh, Yen-An Su, Chi-Yu Yang

Abstract:

In this paper, an automatic QRS-complex detection algorithm was applied to analyze ECG recordings, and five criteria for diagnosing dangerous arrhythmias were applied in a protocol-type automatic arrhythmia diagnosis system. The detection algorithm applied in this paper identified the distribution of QRS complexes in the ECG recordings and related information, such as heart rate and RR interval. In this investigation, twenty sampled ECG recordings of patients with different pathological conditions were collected for off-line analysis. A combined application of four digital filters for improving ECG signal quality and promoting the QRS detection rate was proposed as pre-processing. Both hardware filters and digital filters were applied to eliminate different types of noise mixed with the ECG recordings. Then, the automatic QRS-complex detection algorithm was applied to verify the distribution of QRS complexes. Finally, the quantitative clinical criteria for diagnosing arrhythmia were programmed into a practical application for automatic arrhythmia diagnosis as a post-processor. The results of the automatic dangerous-arrhythmia diagnoses were compared with the results of off-line diagnoses by experienced clinical physicians. The comparison showed that the automatic dangerous-arrhythmia diagnosis achieved a matching rate of 95% compared with an experienced physician's diagnoses.

Keywords: Signal processing, electrocardiography (ECG), QRS complex, arrhythmia.
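
One common way to implement the kind of QRS detection described is a simplified Pan-Tompkins-style chain (band-pass, differentiate, square, integrate, threshold) followed by RR-interval and heart-rate extraction; the sketch below uses that approach with assumed filter settings and thresholds, not the authors' specific four-filter pipeline or clinical criteria.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_qrs(ecg, fs):
    """Simplified Pan-Tompkins-style QRS detection on a single-lead ECG."""
    # 1) band-pass 5-15 Hz to emphasise the QRS complex and suppress baseline wander
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    # 2) differentiate, square, and integrate over a ~150 ms moving window
    squared = np.diff(filtered) ** 2
    window = int(0.15 * fs)
    integrated = np.convolve(squared, np.ones(window) / window, mode="same")
    # 3) simple threshold and a 200 ms refractory period between beats
    peaks, _ = find_peaks(integrated,
                          height=0.5 * integrated.max(),
                          distance=int(0.2 * fs))
    return peaks

def heart_rate(peaks, fs):
    rr = np.diff(peaks) / fs                    # RR intervals in seconds
    return 60.0 / rr.mean(), rr

# usage sketch: peaks = detect_qrs(ecg_samples, fs=360); bpm, rr = heart_rate(peaks, fs=360)
```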

365 Mathematical Analysis of EEG of Patients with Non-fatal Nonspecific Diffuse Encephalitis

Authors: Mukesh Doble, Sunil K Narayan

Abstract:

Diffuse viral encephalitis may lack fever and other cardinal signs of infection, and hence its distinction from other acute encephalopathic illnesses is challenging. Often, the EEG changes seen routinely are nonspecific and reflect diffuse encephalopathic changes only. The aim of this study was to use nonlinear dynamic mathematical techniques for analyzing the EEG data in order to look for any characteristic diagnostic patterns in diffuse forms of encephalitis. The condition was diagnosed on clinical, imaging and cerebrospinal fluid criteria in three young male patients. Metabolic and toxic encephalopathies were ruled out through appropriate investigations. Digital EEGs were recorded on the 3rd to 5th day of onset. The digital EEGs of 5 male and 5 female age- and sex-matched healthy volunteers served as controls. A two-sample t-test indicated that there was no statistically significant difference in the average amplitude values between the two groups. However, the standard deviation (or variance) of the EEG signals at FP1-F7 and FP2-F8 is significantly higher for the patients than for the normal subjects. The regularisation dimension is significantly lower for the patients (average between 1.24-1.43) when compared to the normal persons (average between 1.41-1.63) for the EEG signals from all locations except the Fz-Cz signal. Similarly, the wavelet dimension is significantly lower (P = 0.05*) for the patients (1.122) when compared to the normal persons (1.458). The EEGs of the patients are subdued, with uniform patterns manifested in the values of the regularisation and wavelet dimensions when compared to the normal persons, indicating a decrease in chaotic nature.

Keywords: Chaos, Diffuse encephalitis, Electroencephalogram, Fractal dimension, Fourier spectrum.

364 A Two-Stage Adaptation towards Automatic Speech Recognition System for Malay-Speaking Children

Authors: Mumtaz Begum Mustafa, Siti Salwah Salim, Feizal Dani Rahman

Abstract:

Recently, Automatic Speech Recognition (ASR) systems have been used to assist children in language acquisition, as they have the ability to detect the human speech signal. Despite the benefits offered by ASR systems, there is a lack of ASR systems for Malay-speaking children. One of the contributing factors is the lack of a continuous speech database for the target users. Though cross-lingual adaptation is a common solution for developing ASR systems for under-resourced languages, it is not viable for children, as there are very limited speech databases to serve as a source model. In this research, we propose a two-stage adaptation for the development of an ASR system for Malay-speaking children using a very limited database. The two-stage adaptation comprises cross-lingual adaptation (first stage) and cross-age adaptation. For the first stage, a well-known speech database that is phonetically rich and balanced is adapted to a medium-sized database of Malay adults using supervised MLLR. The second-stage adaptation uses the speech acoustic model generated from the first adaptation, and the target database is a small database of the target users. We have measured the performance of the proposed technique using the word error rate and compared it with the conventional benchmark adaptation. The two-stage adaptation proposed in this research has better recognition accuracy than the benchmark adaptation in recognizing children's speech.

Keywords: Automatic speech recognition system, children speech, adaptation, Malay.

363 Automatic Sleep Stage Scoring with Wavelet Packets Based on Single EEG Recording

Authors: Luay A. Fraiwan, Natheer Y. Khaswaneh, Khaldon Y. Lweesy

Abstract:

Sleep stage scoring is the process of classifying the stage of sleep in which the subject is. Sleep is classified into two states based on the constellation of physiological parameters: non-rapid eye movement (NREM) and rapid eye movement (REM). NREM sleep is further classified into four stages (1-4). These states and the state of wakefulness are distinguished from each other based on brain activity. In this work, a classification method for automated sleep stage scoring based on a single EEG recording using wavelet packet decomposition was implemented. Thirty-two polysomnographic recordings from the MIT-BIH database were used for training and validation of the proposed method. A single EEG recording was extracted and smoothed using a Savitzky-Golay filter. Wavelet packet decomposition up to the fourth level based on a 20th-order Daubechies filter was used to extract features from the EEG signal. A feature vector of 54 features was formed. It was reduced to a size of 25 using the gain ratio method and fed into a classifier of regression trees. The regression trees were trained using 67% of the available records, which were selected based on cross-validation. The remaining records were used for testing the classifier. The overall correct rate of the proposed method was found to be around 75%, which is acceptable compared to techniques in the literature.

Keywords: Feature selection, regression trees, sleep stage scoring, wavelet packets.
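
Extracting wavelet packet energies from EEG epochs with a db20 wavelet at four levels, and feeding them to a tree classifier, can be sketched as follows; the decision tree here stands in for the regression trees, and the epoching, 54-feature vector and gain-ratio selection step are omitted.

```python
import numpy as np
import pywt
from sklearn.tree import DecisionTreeClassifier

def wp_energy_features(epoch, wavelet="db20", level=4):
    """Energy of every terminal node of a level-4 wavelet packet decomposition."""
    wp = pywt.WaveletPacket(data=epoch, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    return np.array([np.sum(node.data**2) for node in nodes])

# epochs: (n_epochs, n_samples) single-channel EEG; stages: integer sleep-stage labels
def train_stager(epochs, stages):
    X = np.array([wp_energy_features(e) for e in epochs])
    clf = DecisionTreeClassifier()               # stand-in for the regression trees
    return clf.fit(X, stages)
```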

362 Multi-Agent System for Irrigation Using Fuzzy Logic Algorithm and Open Platform Communication Data Access

Authors: T. Wanyama, B. Far

Abstract:

Automatic irrigation systems conveniently protect landscape investments. While conventional irrigation systems are known to be inefficient, automated ones have the potential to optimize water usage. In fact, there is a new generation of irrigation systems that are smart in the sense that they monitor the weather, soil conditions, evaporation and plant water use, and automatically adjust the irrigation schedule. In this paper, we present an agent-based smart irrigation system. The agents are built using a mix of commercial off-the-shelf software, including MATLAB, Microsoft Excel and the KEPServer Ex5 OPC server, and custom written code. The Irrigation Scheduler Agent uses fuzzy logic to integrate the information that affects the irrigation schedule. In addition, the multi-agent system uses Open Platform Communications (OPC) technology to share data. OPC technology enables the Irrigation Scheduler Agent to communicate over the Internet, making the system scalable to a municipal or regional agent-based water monitoring, management, and optimization system. Finally, this paper presents simulation and pilot installation test results that show the operational effectiveness of our system.

Keywords: Community water usage, fuzzy logic, irrigation, multi-agent system.
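
A tiny hand-rolled Mamdani-style inference, of the general kind an Irrigation Scheduler Agent might use, is sketched below for two inputs and one output; the membership functions and rules are invented for illustration, and the OPC communication layer is not shown.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with vertices a <= b <= c."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def irrigation_minutes(soil_moisture_pct, temp_c):
    """Mamdani-style inference with weighted-average defuzzification (illustrative rules)."""
    # fuzzify the inputs
    dry = tri(soil_moisture_pct, 0, 10, 25)
    moist = tri(soil_moisture_pct, 15, 30, 45)
    wet = tri(soil_moisture_pct, 35, 60, 100)
    cool = tri(temp_c, 0, 15, 25)
    hot = tri(temp_c, 20, 32, 45)
    # rule base: firing strength = min of antecedents; consequent = irrigation minutes
    rules = [
        (min(dry, hot), 45),     # dry soil and hot weather -> irrigate long
        (min(dry, cool), 30),
        (min(moist, hot), 20),
        (min(moist, cool), 10),
        (wet, 0),                # wet soil -> no irrigation
    ]
    strengths = np.array([s for s, _ in rules])
    outputs = np.array([o for _, o in rules])
    return float((strengths * outputs).sum() / (strengths.sum() + 1e-9))

print(irrigation_minutes(soil_moisture_pct=12, temp_c=34))   # e.g. a dry, hot afternoon
```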

361 On Adaptive Optimization of Filter Performance Based on Markov Representation for Output Prediction Error

Authors: Hong Son Hoang, Remy Baraille

Abstract:

This paper addresses the problem of how one can improve the performance of a non-optimal filter. First, the theoretical question of a dynamical representation for a given time-correlated random process is studied. It is demonstrated that for a wide class of random processes having a canonical form, there exists an equivalent dynamical system in the sense that its output has the same covariance function. It is shown that the dynamical approach is more effective for simulating and estimating Markov and non-Markovian random processes and is computationally less demanding, especially as the dimension of the simulated processes increases. Numerical examples and estimation problems in low-dimensional systems are given to illustrate the advantages of the approach. A very useful application of the proposed approach is shown for the problem of state estimation in very high dimensional systems. Here, a modified filter for data assimilation in an oceanic numerical model is presented, which proves to be very efficient due to the introduction of a simple Markovian structure for the output prediction error process and the adaptive tuning of some parameters of the Markov equation.

Keywords: Statistical simulation, canonical form, dynamical system, Markov and non-Markovian processes, data assimilation.

360 Q-Net: A Novel QoS Aware Routing Algorithm for Future Data Networks

Authors: Maassoumeh Javadi Baygi, Abdul Rahman B Ramli, Borhanuddin Mohd Ali, Syamsiah Mashohor

Abstract:

Expectations of network performance have changed significantly since the early days of ARPANET. Every day, new advances in technological infrastructure open the doors to better quality of service, and accordingly the level of perceived quality of network services has increased over time. Nowadays, for many applications, late information has no value or may even result in financial or catastrophic loss; on the other hand, demands for some level of guarantee in providing and maintaining quality of service are ever increasing. Given this history, having a QoS-aware routing system which is able to provide today's required level of quality of service in the network and effectively adapt to future needs seems a key requirement for the future Internet. In this work we have extended the traditional AntNet routing system to support QoS with multiple metrics, such as bandwidth and delay, in a system named Q-Net. This novel scalable QoS routing system aims to provide different types of services in the network simultaneously. Each type of service can be provided for a period of time in the network, and network nodes do not need to have any previous knowledge about it. When a type of quality of service is requested, Q-Net will allocate the required resources for the service and will guarantee the QoS requirements of the service, based on target objectives.

Keywords: Quality of Service, Routing, Ant Colony Optimization, Ant-based algorithms.

359 Multi-Robotic Partial Disassembly Line Balancing with Robotic Efficiency Difference via HNSGA-II

Authors: Tao Yin, Zeqiang Zhang, Wei Liang, Yanqing Zeng, Yu Zhang

Abstract:

To accelerate the remanufacturing of electronic waste products, this study designs a partial disassembly line with multi-robotic stations to effectively dispose of excessive waste. The multi-robotic partial disassembly line is a technical upgrade to the existing manual disassembly line. Balancing optimization can make the disassembly line smoother and more efficient. For partial disassembly line balancing with multi-robotic stations (PDLBMRS), a mixed-integer programming model (MIPM) considering robotic efficiency differences is established to minimize cycle time, energy consumption and hazard index and to calculate their optimal global values. In addition, an enhanced NSGA-II algorithm (HNSGA-II) is proposed to optimize PDLBMRS efficiently. Finally, MIPM and HNSGA-II are applied to an actual mixed disassembly case of two types of computers. The comparison of the results obtained by GUROBI and HNSGA-II verifies the correctness of the model and the excellent performance of the algorithm, and the obtained Pareto solution set provides multiple options for decision-makers.

Keywords: Waste disposal, disassembly line balancing, multi-robot station, robotic efficiency difference, HNSGA-II.

358 A Model to Study the Effect of Excess Buffers and Na+ Ions on Ca2+ Diffusion in Neuron Cell

Authors: Vikas Tewari, Shivendra Tewari, K. R. Pardasani

Abstract:

Calcium is a vital second messenger used in signal transduction. Calcium controls secretion, cell movement, muscular contraction, cell differentiation, ciliary beating and so on. Two theories have been used to simplify the system of reaction-diffusion equations for calcium into a single equation. One is the excess buffer approximation (EBA), which assumes that the mobile buffer is present in excess and cannot be saturated. The other is the rapid buffer approximation (RBA), which assumes that calcium binding to the buffer is rapid compared to the calcium diffusion rate. In the present work, an attempt has been made to develop a model for calcium diffusion under the excess buffer approximation in neuron cells. This model incorporates the effect of [Na+] influx on [Ca2+] diffusion, variable calcium and sodium sources, the sodium-calcium exchange protein, the sarcolemmal calcium ATPase pump, and sodium and calcium channels. The proposed mathematical model leads to a system of partial differential equations which have been solved numerically using the Forward Time Centered Space (FTCS) approach. The numerical results have been used to study the relationships among different types of parameters such as buffer concentration, association rate and calcium permeability.

Keywords: Excess buffer approximation, Na+ influx, sodium-calcium exchange protein, sarcolemmal calcium ATPase pump, forward time centered space.
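
A one-dimensional FTCS sketch of calcium diffusion under the excess buffer approximation, with an influx boundary standing in for a channel, is given below; all parameter values are illustrative, and the Na+ coupling, exchanger and pump terms of the full model are omitted.

```python
import numpy as np

# Illustrative parameters (not the paper's values)
D = 250.0          # Ca2+ diffusion coefficient, um^2/s
k_on = 1.5         # buffer association rate, 1/(uM*s)
B_total = 50.0     # excess (unsaturated) buffer concentration, uM
c_inf = 0.1        # resting calcium, uM
L, nx = 50.0, 201  # domain length (um) and grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D                 # FTCS stability requires dt <= dx^2 / (2*D)
source_flux = 20.0                   # boundary influx, uM*um/s (stand-in for a channel)

c = np.full(nx, c_inf)
for step in range(20000):
    lap = (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    reaction = -k_on * B_total * (c - c_inf)        # excess-buffer approximation term
    c_new = c + dt * (D * lap + reaction)
    c_new[0] = c_new[1] + source_flux * dx / D      # influx boundary at x = 0
    c_new[-1] = c_inf                               # far boundary clamped at rest
    c = c_new

print("steady-state [Ca2+] near the source:", c[:5])
```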

357 Hydrogen Sulphide Removal Using a Novel Biofilter Media

Authors: Z. M. Shareefdeen, A. Aidan, W.Ahmed, M. B. Khatri, M. Islam, R. Lecheheb, F. Shams

Abstract:

Air emissions from waste treatment plants often consist of a combination of Volatile Organic Compounds (VOCs) and odors. Hydrogen sulfide is one of the major odorous gases present in emissions from municipal wastewater treatment facilities. Hydrogen sulfide (H2S) is odorous, highly toxic and flammable. Exposure to lower concentrations can result in eye irritation, a sore throat and cough, shortness of breath, and fluid in the lungs. Biofiltration has become a widely accepted technology for treating air streams containing H2S. When compared with other non-biological technologies, a biofilter is more cost-effective for treating large volumes of air containing low concentrations of biodegradable compounds. Optimization of the biofilter media is essential for many reasons, such as providing a higher surface area for biofilm growth, a low pressure drop, physical stability, and good moisture retention. In this work, a novel biofilter media is developed and tested at a pumping station of a municipality located in the United Arab Emirates (UAE). The media is found to be very effective (>99%) in removing the H2S concentrations that are expected in pumping stations under steady-state and shock loading conditions.

Keywords: biofilter media, hydrogen sulphide, pumping station, biofiltration

356 Promoting Authenticity in Employer Brands to Address the Global-Local Problem in Complex Organisations: The Case of a Developing Country

Authors: Saud A. Taj

Abstract:

Employer branding is considered a useful tool for addressing the global-local problem facing complex organisations that have operations scattered across the globe and face the challenge of dealing with the local environment alongside. Despite being an established field of study in the developed Western world, there is little empirical evidence concerning the relevance of employer branding to global companies that operate in under-developed economies. This paper fills this gap by gaining rich insight into the implementation of employer branding programs in a foreign multinational operating in Pakistan and dealing with the global-local problem. The study is qualitative in nature and employs semi-structured and focus group interviews with senior/middle managers and local frontline employees to examine the phenomenon in depth in the case organisation. Findings suggest that authenticity is required in employer brands to enable them to respond to local needs, thereby leading to the resolution of the global-local problem. However, the role of signaling theory is key to the development of authentic employer brands, as it stresses the need to establish an efficient and effective signaling environment wherein signals travel in both directions (from signal designers to receivers and back) and help firms facing the global-local problem. The paper also identifies future avenues of research for the employer branding field.

Keywords: Authenticity, Counter-signals, Employer Branding, Global-Local Problem, Signaling Theory.
