Search results for: generalized maximal ratio combining
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2646

1296 A Novel Multiplex Real-Time PCR Assay Using TaqMan MGB Probes for Rapid Detection of Trisomy 21

Authors: Mehrdad Hashemi, Mitra Behrooz Aghdam, Reza Mahdian, Ahmad Reza Kamyab

Abstract:

Cytogenetic analysis remains the gold standard for prenatal diagnosis of trisomy 21 (Down syndrome, DS). However, conventional cytogenetic analysis requires live cultured cells and is too time-consuming for clinical application. In contrast, molecular methods such as FISH, QF-PCR, MLPA and quantitative real-time PCR are rapid assays with results available within 24 hours. In the present study, we successfully used a novel MGB TaqMan probe-based real-time PCR assay for rapid diagnosis of trisomy 21 status in Down syndrome samples, and compared the results of this molecular method with the corresponding results obtained by cytogenetic analysis. Blood samples obtained from DS patients (n=25) and normal controls (n=20) were tested by quantitative real-time PCR in parallel with standard G-banding analysis. Genomic DNA was extracted from peripheral blood lymphocytes. A high-precision TaqMan probe quantitative real-time PCR assay was developed to determine the gene dosage of DSCAM (target gene on 21q22.2) relative to PMP22 (reference gene on 17p11.2). The DSCAM/PMP22 ratio was calculated according to the formula ratio = 2^(-ΔΔCt). The quantitative real-time PCR assay was able to distinguish between trisomy 21 samples and normal controls, with gene ratios of 1.49±0.13 and 1.03±0.04 respectively (p < 0.001). These results reflect the presence of three copies of the target gene in DS samples versus two copies in normal controls. The results of quantitative real-time PCR were in complete agreement with those of cytogenetic analysis. This study confirms previous reports of successful implementation of quantitative real-time PCR for detection of trisomy 21; the assay has been improved here by using MGB probes and more accurate data analysis. In particular, when performed in combination with another molecular assay such as QF-PCR or MLPA, this assay can be used as a reliable technique for rapid prenatal diagnosis of trisomy 21.
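As a purely illustrative sketch of the 2^(-ΔΔCt) gene dosage calculation mentioned above (the Ct values below are hypothetical and not taken from the study), the ratio could be computed as follows:

```python
def ddct_ratio(ct_target_sample, ct_ref_sample, ct_target_calibrator, ct_ref_calibrator):
    """Relative gene dosage by the 2^-ddCt method.

    dCt(sample)     = Ct(target) - Ct(reference) in the test sample
    dCt(calibrator) = Ct(target) - Ct(reference) in a normal calibrator
    ratio           = 2^-(dCt(sample) - dCt(calibrator))
    """
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_calibrator = ct_target_calibrator - ct_ref_calibrator
    return 2.0 ** -(d_ct_sample - d_ct_calibrator)

# Illustrative Ct values only: a trisomic sample should give a DSCAM/PMP22 ratio
# near 1.5, a disomic sample a ratio near 1.0.
print(ddct_ratio(24.1, 25.0, 24.6, 24.9))   # ~1.5 -> three copies of the target gene
print(ddct_ratio(24.7, 25.0, 24.6, 24.9))   # ~1.0 -> two copies
```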

Keywords: Trisomy 21, Real-time PCR, MGB-TaqMan Probes, Gene Dosage.

Downloads: 2538
1295 Multi-Layer Multi-Feature Background Subtraction Using Codebook Model Framework

Authors: Yun-Tao Zhang, Jong-Yeop Bae, Whoi-Yul Kim

Abstract:

Background modeling and subtraction in video analysis has been widely used as an effective method for moving object detection in many computer vision applications. Recently, a large number of approaches have been developed to tackle different types of challenges in this field. However, dynamic backgrounds and illumination variations are the problems most frequently encountered in practice. This paper presents a two-layer model based on the codebook algorithm, incorporating the local binary pattern (LBP) texture measure and targeting dynamic background and illumination variation problems. More specifically, the first layer uses a block-based codebook combined with an LBP histogram and the mean value of each RGB color channel. Because LBP features are invariant to monotonic gray-scale changes, this layer produces block-wise detection results with considerable tolerance of illumination variations. A pixel-based codebook is then employed to refine the output of the first layer and further eliminate false positives. As a result, the proposed approach greatly improves accuracy under dynamic background and illumination changes. Experimental results on several popular background subtraction datasets demonstrate very competitive performance compared to previous models.
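A minimal sketch of the kind of block-level feature the first layer relies on (an LBP histogram plus per-channel RGB means); this is a generic illustration, not the authors' implementation, and the block size is an assumption:

```python
import numpy as np

def lbp_codes(gray):
    """Basic 8-neighbour LBP: each pixel is compared with its 8 neighbours and the
    comparison results are packed into an 8-bit code (border pixels are skipped)."""
    g = gray.astype(np.int32)
    center = g[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= ((neighbour >= center).astype(np.uint8) << bit)
    return codes

def block_features(gray, rgb, block=16):
    """Per-block LBP histogram plus per-channel RGB means, as used by the first layer."""
    feats = []
    for y in range(0, gray.shape[0] - block + 1, block):
        for x in range(0, gray.shape[1] - block + 1, block):
            codes = lbp_codes(gray[y:y + block, x:x + block])
            hist = np.bincount(codes.ravel(), minlength=256) / codes.size
            rgb_mean = rgb[y:y + block, x:x + block].reshape(-1, 3).mean(axis=0)
            feats.append(np.concatenate([hist, rgb_mean]))
    return np.array(feats)
```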

Keywords: Background subtraction, codebook model, local binary pattern, dynamic background, illumination changes.

Downloads: 1965
1294 Ranking Genes from DNA Microarray Data of Cervical Cancer by a Local Tree Comparison

Authors: Frank Emmert-Streib, Matthias Dehmer, Jing Liu, Max Muhlhauser

Abstract:

The major objective of this paper is to introduce a new method for selecting genes from DNA microarray data. As a selection criterion, we propose measuring the local changes in the correlation graph of each gene and selecting those genes whose local changes are largest. More precisely, we calculate correlation networks from DNA microarray data of cervical cancer, where each network represents a tissue of a certain tumor stage and each node represents a gene. From these networks we extract one tree per gene by a local decomposition of the correlation network. A tree represents the n nearest-neighbor genes on its n-th level, measured by the Dijkstra distance, and hence gives the local embedding of a gene within the correlation network. For the obtained trees we measure the pairwise similarity between trees rooted at the same gene from normal to cancerous tissues. This evaluates the modification of the tree topology due to tumor progression. Finally, we rank the obtained similarity values from all tissue comparisons and select the top-ranked genes. For these genes, the local neighborhood in the correlation networks changes most between normal and cancerous tissues; as a result, the top-ranked genes are candidates suspected to be involved in tumor growth. This indicates that our method captures essential information from the underlying DNA microarray data of cervical cancer.
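A rough illustration of the local-decomposition idea using networkx; the per-level Jaccard overlap below is only a crude stand-in for the paper's tree similarity measure, and an unweighted (thresholded) correlation graph is assumed so that Dijkstra distances reduce to hop counts:

```python
import networkx as nx

def local_levels(graph, root, max_level=3):
    """Group genes by Dijkstra distance from `root`: level n holds the n-nearest-neighbour
    genes, i.e. the local embedding of the root gene in the correlation network."""
    dist = nx.single_source_dijkstra_path_length(graph, root, cutoff=max_level)
    levels = {}
    for gene, d in dist.items():
        if gene != root:
            levels.setdefault(int(d), set()).add(gene)
    return levels

def level_similarity(levels_a, levels_b, max_level=3):
    """Crude per-level overlap between two local trees rooted at the same gene;
    genes with the lowest similarity (largest change) would be ranked highest."""
    scores = []
    for n in range(1, max_level + 1):
        a, b = levels_a.get(n, set()), levels_b.get(n, set())
        if a or b:
            scores.append(len(a & b) / len(a | b))
    return sum(scores) / len(scores) if scores else 1.0
```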

Keywords: Graph similarity, generalized trees, graph alignment, DNA microarray data, cervical cancer.

Downloads: 1753
1293 Mitigation of Radiation Levels for Base Transceiver Stations Based on ITU-T Recommendation K.70

Authors: Reyes C., Ramos B.

Abstract:

This paper presents practical methods, based on ITU-T Recommendation K.70, for reducing human exposure levels in the area around base transceiver stations in an environment with multiple sources. An example is presented to illustrate the mitigation techniques and their results, and to show how they can be applied, especially in developing countries where there is little research on non-ionizing radiation.

Keywords: Electromagnetic fields (EMF), human exposure limits, intentional radiator, cumulative exposure ratio, base transceiver station (BTS), radiation levels.

Downloads: 2701
1292 Relation of Optimal Pilot Offsets in the Shifted Constellation-Based Method for the Detection of Pilot Contamination Attacks

Authors: Dimitriya A. Mihaylova, Zlatka V. Valkova-Jarvis, Georgi L. Iliev

Abstract:

One possible approach for maintaining the security of communication systems relies on Physical Layer Security mechanisms. However, in wireless time division duplex systems, where uplink and downlink channels are reciprocal, the channel estimate procedure is exposed to attacks known as pilot contamination, with the aim of having an enhanced data signal sent to the malicious user. The Shifted 2-N-PSK method involves two random legitimate pilots in the training phase, each of which belongs to a constellation, shifted from the original N-PSK symbols by certain degrees. In this paper, legitimate pilots’ offset values and their influence on the detection capabilities of the Shifted 2-N-PSK method are investigated. As the implementation of the technique depends on the relation between the shift angles rather than their specific values, the optimal interconnection between the two legitimate constellations is investigated. The results show that no regularity exists in the relation between the pilot contamination attacks (PCA) detection probability and the choice of offset values. Therefore, an adversary who aims to obtain the exact offset values can only employ a brute-force attack but the large number of possible combinations for the shifted constellations makes such a type of attack difficult to successfully mount. For this reason, the number of optimal shift value pairs is also studied for both 100% and 98% probabilities of detecting pilot contamination attacks. Although the Shifted 2-N-PSK method has been broadly studied in different signal-to-noise ratio scenarios, in multi-cell systems the interference from the signals in other cells should be also taken into account. Therefore, the inter-cell interference impact on the performance of the method is investigated by means of a large number of simulations. The results show that the detection probability of the Shifted 2-N-PSK decreases inversely to the signal-to-interference-plus-noise ratio.
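For orientation, a shifted N-PSK constellation as described above can be generated in a few lines; the offsets below are arbitrary examples, not values from the paper:

```python
import numpy as np

def shifted_npsk(n, offset_deg):
    """N-PSK constellation rotated by `offset_deg` degrees from the original symbols."""
    phases = 2 * np.pi * np.arange(n) / n + np.deg2rad(offset_deg)
    return np.exp(1j * phases)

# Two legitimate pilot constellations, each shifted from standard 8-PSK by its own
# offset; only the relation between the two offsets matters for detection.
pilots_a = shifted_npsk(8, 11.0)   # illustrative offsets, not from the study
pilots_b = shifted_npsk(8, 29.0)
```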

Keywords: Channel estimation, inter-cell interference, pilot contamination attacks, wireless communications.

Downloads: 677
1291 Shear Strength Characteristics of Sand-Particulate Rubber Mixture

Authors: Firas Daghistani, Hossam Abuel Naga

Abstract:

Waste tyres are an ongoing global problem with a negative effect on the environment. They are discarded in stockpiles, where they harm the environment in many ways. Finding applications for these materials can help reduce this global problem. One such application is recycling the waste material for use in geotechnical engineering. Recycled waste tyre particulates can be mixed with sand to form a lightweight material with varying shear strength characteristics. This research investigates whether the inclusion of particulate rubber in sand increases or decreases the shear strength characteristics of the mixture. For the experiment, a series of direct shear tests was performed on a poorly graded sand with a mean particle size of 0.32 mm mixed with recycled poorly graded particulate rubber with a mean particle size of 0.51 mm. The shear tests were performed at four normal stresses (30, 55, 105 and 200 kPa) at a shear rate of 1 mm/minute. Different percentages of particulate rubber content were used in the mixture, i.e., 10%, 20%, 30% and 50% of the sand dry weight, at three density states, namely loose, slightly dense, and dense. The size ratio of the mixture, defined as the mean particle size of the particulate rubber divided by the mean particle size of the sand, was 1.59. The results identified several parameters that can influence the shear strength of the mixture: normal stress, particulate rubber content, mixture gradation, mixture size ratio, and the mixture's density. The inclusion of particulate rubber in sand showed a decrease in the internal friction angle and an increase in the apparent cohesion. Overall, the inclusion of particulate rubber did not have a significant influence on the shear strength of the mixture. For all dense states at the low normal stresses of 30 and 55 kPa, the inclusion of particulate rubber showed a slight increase in shear strength, with the peak at 20-30% rubber content of the sand's dry weight. On the other hand, at the high normal stresses of 105 and 200 kPa, there was a slight decrease in shear strength.

Keywords: Direct shear, granular material, sand-rubber mixture, shear strength, waste material.

Downloads: 360
1290 An Investigation of Performance versus Security in Cognitive Radio Networks with Supporting Cloud Platforms

Authors: Kurniawan D. Irianto, Demetres D. Kouvatsos

Abstract:

The growth of wireless devices affects the availability of the limited frequency spectrum, since spectrum bands are a natural resource that cannot be expanded, while licensed frequencies are idle most of the time. Cognitive radio is one solution to these problems. Cognitive radio is a promising technology that allows unlicensed users, known as secondary users (SUs), to access licensed bands without causing interference to licensed users, or primary users (PUs). As cloud computing has become popular in recent years, cognitive radio networks (CRNs) can be integrated with cloud platforms. One important issue in CRNs is security. It is a concern because CRNs use radio frequencies as the transmission medium and therefore share the same vulnerabilities as other wireless communication systems. Another critical issue in CRNs is performance. Security has an adverse effect on performance, and there are trade-offs between the two. The goal of this paper is to investigate the performance-security trade-off in CRNs with supporting cloud platforms. Furthermore, queuing network models with preemptive resume and preemptive repeat identical priority are applied in this project to measure the impact of security on performance in CRNs with and without a cloud platform. The generalized exponential (GE) type distribution is used to reflect the bursty inter-arrival and service times at the servers. The results show that the best performance is obtained when security is disabled and the cloud platform is enabled.
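For context, a GE-type (generalized exponential) variate with a given mean and squared coefficient of variation can be sampled as a zero-inflated exponential; this is a generic construction from the queueing literature, not code from the study:

```python
import numpy as np

def ge_intervals(mean, scv, size, rng=None):
    """Sample inter-arrival (or service) times from a GE-type distribution with the
    given mean and squared coefficient of variation (SCV >= 1): with probability
    1 - tau the interval is zero (a batch), otherwise it is exponential."""
    rng = rng or np.random.default_rng(0)
    tau = 2.0 / (scv + 1.0)
    times = rng.exponential(mean / tau, size=size)
    times[rng.random(size) >= tau] = 0.0      # with probability 1 - tau -> zero interval
    return times

arrivals = ge_intervals(mean=1.0, scv=4.0, size=10000)
print(arrivals.mean(), arrivals.var() / arrivals.mean() ** 2)   # ~1.0 and ~4.0
```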

Keywords: Cloud Platforms, Cognitive Radio Networks, GE-type Distribution, Performance vs. Security.

Downloads: 2521
1289 Union is Strength in Lossy Image Compression

Authors: Mario Mastriani

Abstract:

In this work, we present a comparison between different techniques of image compression. First, the image is divided into blocks, which are organized according to a certain scan order. Then, several compression techniques are applied, combined or alone. These techniques include wavelets (Haar's basis), the Karhunen-Loève Transform, etc. Simulations show that the combined versions are the best, with lower Mean Squared Error (MSE), higher Peak Signal-to-Noise Ratio (PSNR) and better image quality, even in the presence of noise.
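For reference, the two quality metrics quoted above can be computed as follows (generic definitions, assuming 8-bit images):

```python
import numpy as np

def mse(original, reconstructed):
    """Mean Squared Error between two images of equal shape."""
    return np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means better reconstruction."""
    err = mse(original, reconstructed)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)
```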

Keywords: Haar's basis, Image compression, Karhunen-Loève Transform, Morton's scan, row-rafter scan.

Downloads: 1746
1288 Computer Models of the Vestibular Head Tilt Response, and Their Relationship to EVestG and Meniere's Disease

Authors: Daniel Heibert, Brian Lithgow, Kerry Hourigan

Abstract:

This paper attempts to explain response components of Electrovestibulography (EVestG) using a computer simulation of a three-canal model of the vestibular system. EVestG is a potentially new diagnostic method for Meniere's disease. EVestG is a variant of Electrocochleography (ECOG), which has been used as a standard method for diagnosing Meniere's disease: it can be used to measure the SP/AP ratio, where an SP/AP ratio greater than 0.4-0.5 is indicative of Meniere's disease. In EVestG, an applied head tilt replaces the acoustic stimulus of ECOG. The EVestG output is also an SP/AP-type plot, where SP is the summing potential and AP is the action potential amplitude. AP is thought of as being proportional to the size of a population of afferents in an excitatory neural firing state. A simulation of the fluid volume displacement in the vestibular labyrinth in response to various types of head tilts (ipsilateral, backwards and horizontal rotation) was performed, and a simple neural model based on these simulations was developed. The simple neural model shows that the change in firing rate of the utricle is much larger in magnitude than the change in firing rates of all three semicircular canals following a head tilt (except in a horizontal rotation). The data suggest that the change in utricular firing rate is at least 2-3 orders of magnitude larger than the changes in firing rates of the canals during ipsilateral/backward tilts. Based on these results, the neural response recorded by the electrode in our EVestG recordings is expected to be dominated by the utricle in ipsilateral/backward tilts (note that the effects of the saccule and of efferent signals were not taken into account in this model). If the utricle response dominates the EVestG recordings as the modeling results suggest, then EVestG has the potential to diagnose utricular hair cell damage due to a viral infection, which has been cited as one possible cause of Meniere's disease.

Keywords: Diagnostic, endolymph hydrops, Meniere's disease, modeling.

Downloads: 1517
1287 Effects of Free-Hanging Horizontal Sound Absorbers on the Cooling Performance of Thermally Activated Building Systems

Authors: L. Marcos Domínguez, Nils Rage, Ongun B. Kazanci, Bjarne W. Olesen

Abstract:

Thermally Activated Building Systems (TABS) have proven to be an energy-efficient solution for providing buildings with an optimal indoor thermal environment. This solution uses the structure of the building to store heat, reduce the peak loads, and decrease the primary energy demand. TABS require the heated or cooled surfaces to be as exposed as possible to the indoor space, but exposing bare concrete surfaces diminishes the acoustic quality of the spaces in a building. Acoustic solutions capable of providing optimal acoustic comfort while allowing heat exchange between the TABS and the room are therefore desirable. In this study, the effects of free-hanging units on the cooling performance of TABS and on the occupants' thermal comfort were measured in a full-scale TABS laboratory. The investigations demonstrate that the use of free-hanging sound absorbers is compatible with the performance of TABS and with the occupants' thermal comfort, but an appropriate acoustic design is needed to find the most suitable solution for each case. The results show a reduction of 11% in the cooling performance of the TABS when 43% of the ceiling area is covered with free-hanging horizontal sound absorbers, of 23% for a 60% ceiling coverage ratio, and of 36% for 80% coverage. Measurements in actual buildings showed an increase of the room operative temperature of 0.3 K when 50% of the ceiling surface is covered with horizontal panels, and of 0.8 to 1 K for a 70% coverage ratio. According to numerical simulations using a new TRNSYS Type, the use of comfort ventilation has a considerable influence on the thermal conditions in the room; if the ventilation is removed, the operative temperature increases by 1.8 K for a 60%-covered ceiling.

Keywords: Acoustic comfort, concrete core activation, full-scale measurements, thermally activated building systems, TRNSYS.

Downloads: 1423
1286 Analysis of Combustion, Performance and Emission Characteristics of Turbocharged LHR Extended Expansion DI Diesel Engine

Authors: Mohd.F.Shabir, P. Tamilporai, B. Rajendra Prasath

Abstract:

The fundamental aim of extended expansion concept is to achieve higher work done which in turn leads to higher thermal efficiency. This concept is compatible with the application of turbocharger and LHR engine. The Low Heat Rejection engine was developed by coating the piston crown, cylinder head inside with valves and cylinder liner with partially stabilized zirconia coating of 0.5 mm thickness. Extended expansion in diesel engines is termed as Miller cycle in which the expansion ratio is increased by reducing the compression ratio by modifying the inlet cam for late inlet valve closing. The specific fuel consumption reduces to an appreciable level and the thermal efficiency of the extended expansion turbocharged LHR engine is improved. In this work, a thermodynamic model was formulated and developed to simulate the LHR based extended expansion turbocharged direct injection diesel engine. It includes a gas flow model, a heat transfer model, and a two zone combustion model. Gas exchange model is modified by incorporating the Miller cycle, by delaying inlet valve closing timing which had resulted in considerable improvement in thermal efficiency of turbocharged LHR engines. The heat transfer model, calculates the convective and radiative heat transfer between the gas and wall by taking into account of the combustion chamber surface temperature swings. Using the two-zone combustion model, the combustion parameters and the chemical equilibrium compositions were determined. The chemical equilibrium compositions were used to calculate the Nitric oxide formation rate by assuming a modified Zeldovich mechanism. The accuracy of this model is scrutinized against actual test results from the engine. The factors which affect thermal efficiency and exhaust emissions were deduced and their influences were discussed. In the final analysis it is seen that there is an excellent agreement in all of these evaluations.

Keywords: Low Heat Rejection, Miller cycle.

Downloads: 2093
1285 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model

Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin

Abstract:

Early detection of anomalies in data centers is important to reduce downtime and the cost of periodic maintenance. However, there is little research on this topic and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers by combining sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. The performance of the model is assessed a posteriori through the F1-score by comparing detected anomalies with the data center's history. The proposed model outperforms the state-of-the-art reconstruction method, which uses only one autoencoder taking multivariate sequences and detects an anomaly with a threshold on the reconstruction error, with an F1-score of 83.60% compared to 24.16%.
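A minimal sketch of a per-sensor LSTM autoencoder of the kind described above, written in PyTorch; the layer sizes and window length are assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """One autoencoder per sensor: an LSTM encoder compresses a window of readings,
    an LSTM decoder reconstructs it; the residual feeds the downstream classifier."""
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                      # x: (batch, time, n_features)
        _, (h, _) = self.encoder(x)            # h: (1, batch, hidden)
        latent = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)   # repeat over time
        decoded, _ = self.decoder(latent)
        return self.out(decoded)

model = LSTMAutoencoder()
window = torch.randn(8, 60, 1)                 # 8 windows of 60 readings from one sensor
residual = (model(window) - window).abs()      # reconstruction error used downstream
```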

Keywords: Anomaly detection, autoencoder, data centers, deep learning.

Downloads: 742
1284 A Trainable Neural Network Ensemble for ECG Beat Classification

Authors: Atena Sajedin, Shokoufeh Zakernejad, Soheil Faridi, Mehrdad Javadi, Reza Ebrahimpour

Abstract:

This paper illustrates the use of a combined neural network model for the classification of electrocardiogram (ECG) beats. We present a trainable neural network ensemble approach to developing a customized ECG beat classifier, in an effort to further improve the performance of ECG processing and to offer individualized health care. We propose a three-stage technique for the detection of premature ventricular contractions (PVC) among normal beats and other heart diseases, consisting of denoising, feature extraction and classification. First, we investigate the application of the stationary wavelet transform (SWT) for noise reduction of the ECG signals. The feature extraction module then extracts 10 ECG morphological features and one timing interval feature. Next, a number of multilayer perceptron (MLP) neural networks with different topologies are designed. The performance of the different combination methods, as well as the efficiency of the whole system, is presented. Among them, Stacked Generalization, as the proposed trainable combined neural network model, possesses the highest recognition rate of around 95%. This network therefore proves to be a suitable candidate for ECG signal diagnosis systems. ECG samples corresponding to the different ECG beat types were extracted from the MIT-BIH arrhythmia database for the study.
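A compact stacked-generalization sketch using scikit-learn, with MLP base learners of different topologies and a logistic-regression meta-learner; the topologies and feature set are placeholders, not the paper's exact design:

```python
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Three MLP base learners with different topologies; a logistic-regression
# meta-learner is trained on their outputs (stacked generalization).
base_learners = [
    ("mlp_small", MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000)),
    ("mlp_medium", MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000)),
    ("mlp_deep", MLPClassifier(hidden_layer_sizes=(20, 10), max_iter=1000)),
]
ensemble = StackingClassifier(estimators=base_learners,
                              final_estimator=LogisticRegression(),
                              cv=5)
# ensemble.fit(X_train, y_train); ensemble.predict(X_test)
# X_train would hold the 11 morphological/timing features per beat.
```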

Keywords: ECG beat classification; combining classifiers; premature ventricular contraction (PVC); multilayer perceptrons; wavelet transform.

Downloads: 2216
1283 Ammonia Adsorption Properties of Composite Ammonia Carriers Obtained by Supporting Metal Chloride on Porous Materials

Authors: Cheng Shen, LaiHong Shen

Abstract:

Ammonia is an important carrier of hydrogen energy, with the characteristics of high hydrogen content density and no carbon dioxide emission. Safe and efficient ammonia capture for ammonia synthesis from biomass is an important way to alleviate the energy crisis and solve the energy problem. Metal chloride has a chemical adsorption effect on ammonia and can be desorbed at high temperatures to obtain high-concentration ammonia after combining with ammonia, which has a good development prospect in ammonia capture and separation technology. In this paper, the ammonia adsorption properties of CuCl2 were measured, and the composite adsorbents were prepared by using silicon and multi-walled carbon nanotubes, respectively to support CuCl2, and the ammonia adsorption properties of the composite adsorbents were studied. The study found that the ammonia adsorption capacity of the three adsorbents decreased with the increase in temperature, so metal chlorides were more suitable for the low-temperature adsorption of ammonia. Silicon and multi-walled carbon nanotubes have an enhanced effect on the ammonia adsorption of CuCl2. The reason is that the porous material itself has a physical adsorption effect on ammonia, and silicon can play the role of skeleton support in cupric chloride particles, which enhances the pore structure of the adsorbent, thereby alleviating sintering.

Keywords: Ammonia, adsorption properties, metal chloride, MWCNTs, silicon.

Downloads: 170
1282 Statistical Modeling of Local Area Fading Channels Based on Triply Stochastic Filtered Marked Poisson Point Processes

Authors: Jihad S. Daba, J. P. Dubois

Abstract:

Fading noise degrades the performance of cellular communication, most notably in femto- and pico-cells in 3G and 4G systems. When the wireless channel consists of a small number of scattering paths, the statistics of fading noise are not analytically tractable and pose a serious challenge to developing closed canonical forms that can be analysed and used in the design of efficient and optimal receivers. In this context, noise is multiplicative and is referred to as stochastically local fading. In many analytical investigations of multiplicative noise, exponential or Gamma statistics are invoked. More recent advances by the author of this paper utilized Poisson-modulated weighted generalized Laguerre polynomials with controlling parameters and uncorrelated noise assumptions. In this paper, we investigate the statistics of a multidiversity, stochastically local area fading channel when the channel consists of randomly distributed Rayleigh and Rician scattering centers with a coherent Nakagami-distributed line-of-sight component and an underlying doubly stochastic Poisson process driven by a lognormal intensity. These combined statistics form a unifying triply stochastic filtered marked Poisson point process model.

Keywords: Cellular communication, femto- and pico-cells, stochastically local area fading channel, triply stochastic filtered marked Poisson point process.

Downloads: 1345
1281 Dynamic Web-Based 2D Medical Image Visualization and Processing Software

Authors: Abdelhalim. N. Mohammed, Mohammed. Y. Esmail

Abstract:

In recent decades, medical imaging has been dominated by the use of costly film media for the review and archival of medical investigations; however, due to developments in network technologies and the common acceptance of the Digital Imaging and Communications in Medicine (DICOM) standard, another approach based on the World Wide Web has emerged. Web technologies have been used successfully in telemedicine applications, and here they are combined with DICOM to design a web-based, open-source DICOM viewer. The web server allows the query and retrieval of images, and the images are viewed and manipulated inside a web browser without the need to preinstall any software. The dynamic page for medical image visualization and processing was created using JavaScript and HTML5. The XAMPP Apache server is used to create a local web server for testing and deployment of the dynamic site. The web-based viewer is connected to multiple devices through a local area network (LAN) to distribute the images inside healthcare facilities. The system offers several advantages over conventional picture archiving and communication systems (PACS): it is easy to install and maintain, it is platform independent, it displays and manipulates images efficiently, and it is user-friendly and easy to integrate with existing systems that already make use of web technologies. A wavelet-based image compression technique is used, in which the 2-D discrete wavelet transform decomposes the image and the thresholded wavelet coefficients are transmitted after entropy encoding, in order to decrease transmission time, storage cost and capacity. The performance of the compression was estimated using image quality metrics such as the mean square error (MSE), peak signal-to-noise ratio (PSNR) and compression ratio (CR), which reached 83.86% when the 'coif3' wavelet filter was used.
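A rough sketch of the wavelet-thresholding compression step described above, using PyWavelets with the 'coif3' filter; the threshold value, and the use of the zero-coefficient fraction as a compression-ratio proxy, are assumptions rather than the paper's exact procedure:

```python
import numpy as np
import pywt

def compress(image, wavelet="coif3", level=2, threshold=20.0):
    """2-D DWT decomposition, hard-thresholding of the coefficients (the approximation
    band is also thresholded here for simplicity), reconstruction, and quality metrics."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    arr_t = pywt.threshold(arr, threshold, mode="hard")
    rec = pywt.waverec2(pywt.array_to_coeffs(arr_t, slices, output_format="wavedec2"), wavelet)
    rec = rec[:image.shape[0], :image.shape[1]]
    mse = np.mean((image.astype(float) - rec) ** 2)
    psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
    zero_fraction = np.mean(arr_t == 0)       # proxy for the achievable compression ratio
    return rec, psnr, zero_fraction
```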

Keywords: DICOM, discrete wavelet transform, PACS, HIS, LAN.

Downloads: 795
1280 Comparison of Number of Waves Surfed and Duration Using Global Positioning System and Inertial Sensors

Authors: J. Madureira, R. Lagido, I. Sousa

Abstract:

Surfing is an increasingly popular sport, and its performance evaluation is often qualitative. This work aims at using a smartphone to collect and analyze GPS and inertial sensor data in order to obtain quantitative metrics of surfing performance. Two approaches are compared for the detection of wave rides, computing the number of waves ridden in a surfing session, the starting time of each wave and its duration. The first approach is based on computing the velocity from the Global Positioning System (GPS) signal and finding the velocity thresholds that allow identifying the start and end of each wave ride. The second approach adds information from the Inertial Measurement Unit (IMU) of the smartphone to the velocity thresholds obtained from the GPS unit to determine the start and end of each wave ride. The two methods were evaluated using GPS and IMU data from two surfing sessions and validated against similar metrics extracted from video data collected from the beach. The second method, combining GPS and IMU data, was found to be more accurate in determining the number of waves, their start times and durations. This paper shows that it is feasible to use smartphones for the quantification of performance metrics during surfing. In particular, the waves ridden and their duration can be accurately determined using the smartphone GPS and IMU.
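A bare-bones sketch of the GPS-only velocity-threshold approach (the first method); the threshold values are placeholders rather than the calibrated values used in the paper:

```python
def detect_wave_rides(timestamps, speeds, start_speed=3.0, end_speed=1.5, min_duration=3.0):
    """Detect wave rides from GPS speed (m/s) using start/end velocity thresholds.
    Thresholds and minimum duration here are illustrative placeholders."""
    rides, start = [], None
    for t, v in zip(timestamps, speeds):
        if start is None and v >= start_speed:
            start = t                                  # ride starts when speed exceeds threshold
        elif start is not None and v <= end_speed:
            if t - start >= min_duration:
                rides.append((start, t - start))       # (start time, duration)
            start = None
    return rides
```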

Keywords: Inertial Measurement Unit (IMU), Global Positioning System (GPS), smartphone, surfing performance.

Downloads: 1655
1279 A Comparison of Experimental Data with Monte Carlo Calculations for Optimisation of the Source-to-Detector Distance in Determining the Efficiency of a LaBr3:Ce (5%) Detector

Authors: H. Aldousari, T. Buchacher, N. M. Spyrou

Abstract:

Cerium-doped lanthanum bromide LaBr3:Ce(5%) crystals are considered to be one of the most advanced scintillator materials used in PET scanning, combining a high light yield, fast decay time and excellent energy resolution. Apart from the correct choice of scintillator, it is also important to optimise the detector geometry, not least in terms of source-to-detector distance, in order to obtain reliable measurements and efficiency. In this study, a commercially available 25 mm x 25 mm BrilLanCeTM 380 LaBr3:Ce (5%) detector was characterised in terms of its efficiency at varying source-to-detector distances. Gamma-ray spectra of 22Na, 60Co, and 137Cs were separately acquired at distances of 5, 10, 15, and 20 cm. As a result of the change in solid angle subtended by the detector, the geometric efficiency decreased with increasing distance. High efficiencies at short distances can cause pulse pile-up, when subsequent photons are detected before previously detected events have decayed. To reduce this systematic error, the source-to-detector distance should balance efficiency against pulse pile-up suppression, as otherwise pile-up corrections would be necessary at short distances. In addition to the experimental measurements, Monte Carlo simulations have been carried out for the same setup, allowing a comparison of results. The advantages and disadvantages of each approach are highlighted.
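The distance dependence of the geometric efficiency follows the textbook solid-angle relation for an on-axis point source and a circular detector face of radius a (here a would be about 12.5 mm), which is consistent with the trend reported above:

```latex
\varepsilon_{\mathrm{geo}} \;=\; \frac{\Omega}{4\pi}
\;=\; \frac{1}{2}\left(1 - \frac{d}{\sqrt{d^{2} + a^{2}}}\right)
```

At large source-to-detector distance d this falls off roughly as a^2/(4d^2), so doubling the distance roughly quarters the geometric efficiency.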

Keywords: BrilLanCeTM 380 LaBr3:Ce(5%), Coincidence summing, GATE simulation, Geometric efficiency.

Downloads: 1891
1278 Alleviation of Adverse Effects of Salt Stress on Soybean (Glycine max. L.) by Using Osmoprotectants and Organic Nutrients

Authors: Ayman El Sabagh, Sobhy Sorour, Abd Elhamid Omar, Adel Ragab, Mohammad Sohidul Islam, Celaleddin Barutçular, Akihiro Ueda, Hirofumi Saneoka

Abstract:

Salinity is one of the major factors limiting crop production in arid environments. Despite its global importance, soybean production suffers from salinity stress, which damages plant development, so it is imperative to search for ways to enhance the salinity tolerance of soybean plants. In the current study, we therefore try to clarify the mechanisms that might be involved in the ameliorating effects of osmoprotectants, such as proline and glycine betaine, as well as compost application, on soybean plants grown under salinity stress. The experiment was conducted under greenhouse conditions at the Graduate School of Biosphere Science Laboratory of Hiroshima University, Japan, in 2011. The experiment was designed as a split-split plot based on a randomized complete block design with four replications. The treatments can be summarized as follows: (i) salinity concentrations (0 and 15 mM), (ii) compost treatments (0 and 24 t ha-1), and (iii) exogenous proline and glycine betaine concentrations (0 mM and 25 mM each). Results indicated that salinity stress reduced growth and physiological attributes (dry weight per plant, chlorophyll content, N and K+ content) of soybean plants compared with unstressed plants. On the other hand, salinity stress led to increases in the electrolyte leakage ratio and in Na+ and proline contents. Improved tolerance against salt stress was observed: the improvement in salt tolerance resulting from proline, glycine betaine and compost was accompanied by improved K+ and proline accumulation, and by a significantly decreased electrolyte leakage ratio and Na+ content. These results clearly demonstrate that the harmful effects of salinity on the growth of soybean can be reduced. Consequently, exogenous osmoprotectants combined with compost can effectively address seasonal salinity stress and are a good strategy for increasing the salinity resistance of soybean in drylands.

Keywords: Compost, glycine betaine, growth, proline, salinity tolerance, soybean.

Downloads: 3229
1277 Improving Topic Quality of Scripts by Using Scene Similarity Based Word Co-Occurrence

Authors: Yunseok Noh, Chang-Uk Kwak, Sun-Joong Kim, Seong-Bae Park

Abstract:

Scripts are one of the basic text resources for understanding broadcast content. Topic modeling is a method for obtaining a summary of broadcast content from its scripts. Generally, scripts describe content with directions and speech, and provide scene segments that can be seen as semantic units. Therefore, a script can be topic modeled by treating a scene segment as a document. However, because scene segments consist mainly of speech, relatively few co-occurrences among words are observed in them, which inevitably degrades the quality of topics learned by statistical methods. To tackle this problem, we propose a method to improve topic quality with additional word co-occurrence information obtained using scene similarities. The main idea is that knowing that two or more texts are topically related can be useful for learning high-quality topics; in addition, more accurate topical representations in turn provide more accurate information about whether two texts are related. In this paper, we regard two scene segments as related if their topical similarity is high enough, and we consider words to co-occur if they appear together in topically related scene segments. By iteratively inferring topics and determining semantically neighboring scene segments, we obtain a topic space that represents broadcast content well. In the experiments, we show that the proposed method generates higher-quality topics from Korean drama scripts than the baselines.

Keywords: Broadcasting contents, generalized Pólya urn model, scripts, text similarity, topic model.

Downloads: 1817
1276 Locating Cultural Centers in Shiraz (Iran) Applying Geographic Information System (GIS)

Authors: R. Mokhtari Malekabadi, S. Ghaed Rahmati, S. Aram

Abstract:

Optimal cultural site selection is one way to promote citizenship culture while ensuring the health and leisure of city residents. This study examines the social and cultural needs of the community and optimal cultural site allocation and, after identifying the problems and shortcomings, provides a suitable model for finding the locations for these centers that have the greatest impact on the promotion of citizenship culture. Non-scientific site-selection methods cause irreversible impacts on the urban environment and citizens, whereas modern, efficient methods can reduce these impacts. One of these methods is the use of geographical information systems (GIS). In this study, the Analytical Hierarchy Process (AHP) method was used to locate the optimal cultural sites. AHP rests on three principles: decomposition, comparative analysis, and the combination of preferences. The objectives of this research include providing suitable settings in which Shiraz residents can spend leisure time and perform cultural activities, and proposing the construction of cultural sites in different areas of the city. The results of this study show the correct positioning of cultural sites based on the social needs of citizens. Thus, considering the population parameters and access radii, a GIS and AHP model for locating cultural centers can meet the social needs of citizens.
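As a small illustration of the AHP step, criteria weights can be derived from a pairwise comparison matrix via its principal eigenvector; the judgements below are invented for illustration only, not taken from the study:

```python
import numpy as np

def ahp_weights(pairwise):
    """Criteria weights from an AHP pairwise comparison matrix
    (principal eigenvector, normalised to sum to 1)."""
    values, vectors = np.linalg.eig(np.asarray(pairwise, dtype=float))
    principal = np.real(vectors[:, np.argmax(np.real(values))])
    return principal / principal.sum()

# Hypothetical judgements for three criteria (e.g. population density, access radius, land cost).
matrix = [[1,   3,   5],
          [1/3, 1,   2],
          [1/5, 1/2, 1]]
print(ahp_weights(matrix))   # -> roughly [0.65, 0.23, 0.12]
```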

Keywords: Analytical Hierarchy Process (AHP), geographical information systems (GIS), Cultural site, locating, Shiraz.

Downloads: 1600
1275 Algorithm of Measurement of Noise Signal Power in the Presence of Narrowband Interference

Authors: Alexey V. Klyuev, Valery P. Samarin, Viktor F. Klyuev

Abstract:

An algorithm is considered for measuring the power of the components of an input mixture of a noise signal and narrowband interference, using functional transformations of the input mixture in the post-detection processing channel. The efficiency of the algorithm has been analysed for different interference-to-signal ratios, and its performance features have been explored through numerical experiments.

Keywords: Noise signal, continuous narrowband interference, signal power, spectrum width, detection.

Downloads: 1397
1274 Assisted Prediction of Hypertension Based on Heart Rate Variability and Improved Residual Networks

Authors: Yong Zhao, Jian He, Cheng Zhang

Abstract:

Cardiovascular disease resulting from hypertension poses a significant threat to human health, and early detection of hypertension can potentially save numerous lives. Traditional methods for detecting hypertension require specialized equipment and are often incapable of capturing continuous blood pressure fluctuations. To address this issue, this study starts by analyzing the principles of heart rate variability (HRV) and introduces sliding window and power spectral density (PSD) techniques to analyze both the temporal and frequency domain features of HRV. Subsequently, a hypertension prediction network that relies on HRV is proposed, combining ResNet, attention mechanisms, and a multi-layer perceptron. The network leverages a modified ResNet18 to extract frequency domain features, while employing an attention mechanism to integrate temporal domain features, thus enabling auxiliary hypertension prediction through the multi-layer perceptron. The proposed network is trained and tested using the publicly available SHAREE dataset from PhysioNet. The results demonstrate that the network achieves a high prediction accuracy of 92.06% for hypertension, surpassing traditional models such as K-Nearest Neighbors (KNN), Bayes, logistic regression, and a traditional Convolutional Neural Network (CNN).
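A simplified sketch of the sliding-window HRV feature extraction with a Welch PSD; the band limits are the conventional LF/HF definitions, while the resampling rate, window and step sizes are assumptions rather than the paper's settings:

```python
import numpy as np
from scipy.signal import welch

def hrv_features(rr_ms, fs=4.0, window_s=300, step_s=60):
    """Slide a window over evenly resampled RR intervals and compute, per window,
    simple time-domain statistics and the LF/HF power ratio from a Welch PSD."""
    t = np.cumsum(rr_ms) / 1000.0
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    rr_even = np.interp(grid, t, rr_ms)                 # resample to a uniform rate
    feats = []
    w, s = int(window_s * fs), int(step_s * fs)
    for start in range(0, len(rr_even) - w + 1, s):
        seg = rr_even[start:start + w]
        f, psd = welch(seg - seg.mean(), fs=fs, nperseg=min(256, w))
        lf = np.trapz(psd[(f >= 0.04) & (f < 0.15)], f[(f >= 0.04) & (f < 0.15)])
        hf = np.trapz(psd[(f >= 0.15) & (f < 0.40)], f[(f >= 0.15) & (f < 0.40)])
        feats.append([seg.mean(), seg.std(), lf / hf if hf > 0 else np.nan])
    return np.array(feats)
```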

Keywords: Feature extraction, heart rate variability, hypertension, residual networks.

Downloads: 195
1273 Spatiotemporal Analysis of Visual Evoked Responses Using Dense EEG

Authors: Rima Hleiss, Elie Bitar, Mahmoud Hassan, Mohamad Khalil

Abstract:

A comprehensive study of object recognition in the human brain requires combining both spatial and temporal analysis of brain activity. Here, we are mainly interested in three issues: the time perception of visual objects, the ability of discrimination between two particular categories (objects vs. animals), and the possibility to identify a particular spatial representation of visual objects. Our experiment consisted of acquiring dense electroencephalographic (EEG) signals during a picture-naming task comprising a set of objects and animals’ images. These EEG responses were recorded from nine participants. In order to determine the time perception of the presented visual stimulus, we analyzed the Event Related Potentials (ERPs) derived from the recorded EEG signals. The analysis of these signals showed that the brain perceives animals and objects with different time instants. Concerning the discrimination of the two categories, the support vector machine (SVM) was applied on the instantaneous EEG (excellent temporal resolution: on the order of millisecond) to categorize the visual stimuli into two different classes. The spatial differences between the evoked responses of the two categories were also investigated. The results showed a variation of the neural activity with the properties of the visual input. Results showed also the existence of a spatial pattern of electrodes over particular regions of the scalp in correspondence to their responses to the visual inputs.

Keywords: Brain activity, dense EEG, evoked responses, spatiotemporal analysis, SVM, perception.

Downloads: 1071
1272 Aircraft Selection Using Multiple Criteria Decision Making Analysis Method with Different Data Normalization Techniques

Authors: C. Ardil

Abstract:

This paper presents an original application of multiple criteria decision making analysis theory to the evaluation of aircraft selection problem. The selection of an optimal, efficient and reliable fleet, network and operations planning policy is one of the most important factors in aircraft selection problem. Given that decision making in aircraft selection involves the consideration of a number of opposite criteria and possible solutions, such a selection can be considered as a multiple criteria decision making analysis problem. This study presents a new integrated approach to decision making by considering the multiple criteria utility theory and the maximal regret minimization theory methods as well as aircraft technical, economical, and environmental aspects. Multiple criteria decision making analysis method uses different normalization techniques to allow criteria to be aggregated with qualitative and quantitative data of the decision problem. Therefore, selecting a suitable normalization technique for the model is also a challenge to provide data aggregation for the aircraft selection problem. To compare the impact of different normalization techniques on the decision problem, the vector, linear (sum), linear (max), and linear (max-min) data normalization techniques were identified to evaluate aircraft selection problem. As a logical implication of the proposed approach, it enhances the decision making process through enabling the decision maker to: (i) use higher level knowledge regarding the selection of criteria weights and the proposed technique, (ii) estimate the ranking of an alternative, under different data normalization techniques and integrated criteria weights after a posteriori analysis of the final rankings of alternatives. A set of commercial passenger aircraft were considered in order to illustrate the proposed approach. The obtained results of the proposed approach were compared using Spearman's rho tests. An analysis of the final rank stability with respect to the changes in criteria weights was also performed so as to assess the sensitivity of the alternative rankings obtained by the application of different data normalization techniques and the proposed approach.
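The four data normalization techniques compared in the study can be sketched as follows for a single benefit-type criterion column (illustrative values only):

```python
import numpy as np

def normalize(column, method, benefit=True):
    """Common MCDM normalisations of one criterion column (benefit criterion by default)."""
    x = np.asarray(column, dtype=float)
    if method == "vector":
        return x / np.sqrt((x ** 2).sum())
    if method == "linear_sum":
        return x / x.sum()
    if method == "linear_max":
        return x / x.max() if benefit else x.min() / x
    if method == "linear_max_min":
        return (x - x.min()) / (x.max() - x.min())
    raise ValueError(method)

scores = [420, 365, 510]          # illustrative values for one criterion, three aircraft
for m in ("vector", "linear_sum", "linear_max", "linear_max_min"):
    print(m, normalize(scores, m))
```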

Keywords: Normalization Techniques, Aircraft Selection, Multiple Criteria Decision Making, Multiple Criteria Decision Making Analysis, MCDMA

Downloads: 587
1271 On Combining Support Vector Machines and Fuzzy K-Means in Vision-based Precision Agriculture

Authors: A. Tellaeche, X. P. Burgos-Artizzu, G. Pajares, A. Ribeiro

Abstract:

One important objective in precision agriculture is to minimize the volume of herbicides applied to the fields through the use of site-specific weed management systems. In order to reach this goal, two major factors need to be considered: 1) the similar spectral signature, shape and texture of weeds and crops; 2) the irregular distribution of the weeds within the crop field. This paper outlines an automatic computer vision system for the detection and differential spraying of Avena sterilis, a noxious weed growing in cereal crops. The proposed system involves two processes: image segmentation and decision making. Image segmentation combines basic, suitable image processing techniques in order to extract cells from the image as the low-level units. Each cell is described by two area-based attributes measuring the relations between the crops and the weeds. From these attributes, a hybrid decision-making approach determines whether or not a cell must be sprayed. The hybrid approach uses the Support Vector Machines and Fuzzy k-Means methods, combined through fuzzy aggregation theory; this constitutes the main contribution of the paper. The performance of the method is compared against other available strategies.

Keywords: Fuzzy k-Means, Precision agriculture, Support Vector Machines, Weed detection.

Downloads: 1779
1270 Expectation-Confirmation Model of Information System Continuance: A Meta-Analysis

Authors: Hui-Min Lai, Chin-Pin Chen, Yung-Fu Chang

Abstract:

The expectation-confirmation model (ECM) is one of the most widely used models for evaluating information system continuance, and this model has been extended to other study backgrounds, or expanded with other theoretical perspectives. However, combining ECM with other theories or investigating the background problem may produce some disparities, thus generating inaccurate conclusions. Habit is considered to be an important factor that influences the user’s continuance behavior. This paper thus critically examines seven pairs of relationships from the original ECM and the habit variable. A meta-analysis was used to tackle the development of ECM research over the last 10 years from a range of journals and conference papers published in 2005–2014. Forty-six journal articles and 19 conference papers were selected for analysis. The results confirm our prediction that a high effect size for the seven pairs of relationships was obtained (ranging from r=0.386 to r=0.588). Furthermore, a meta-analytic structural equation modeling was performed to simultaneously test all relationships. The results show that habit had a significant positive effect on continuance intention at p<=0.05 and that the six other pairs of relationships were significant at p<0.10. Based on the findings, we refined our original research model and an alternative model was proposed for understanding and predicting information system continuance. Some theoretical implications are also discussed.

Keywords: Expectation-confirmation theory, expectation-confirmation model, meta-analysis, meta-analytic structural equation modeling.

Downloads: 2730
1269 T-Wave Detection Based on an Adjusted Wavelet Transform Modulus Maxima

Authors: Samar Krimi, Kaïs Ouni, Noureddine Ellouze

Abstract:

The method described in this paper deals with the problem of T-wave detection in an ECG. Determining the position of a T-wave is complicated due to its low amplitude and the ambiguous, changing form of the complex. A wavelet transform approach handles these complications, so a detection method based on this concept was developed. The resulting method is able to detect T-waves with a sensitivity of 93% and a correct-detection ratio of 93%, even in the presence of a serious amount of baseline drift and noise.

Keywords: ECG, Modulus Maxima Wavelet Transform, Performance, T-wave detection

Downloads: 1853
1268 Design Criteria for Achieving Acceptable Indoor Radon Concentration

Authors: T. Valdbjørn Rasmussen

Abstract:

Design criteria for achieving an acceptable indoor radon concentration are presented in this paper. The paper suggests three design criteria. These criteria have to be considered at the early stage of the building design phase to meet the latest recommendations from the World Health Organization in most countries. The three design criteria are; first, establishing a radon barrier facing the ground; second, lowering the air pressure in the lower zone of the slab on ground facing downwards; third, diluting the indoor air with outdoor air. The first two criteria can prevent radon from infiltrating from the ground, and the third criteria can dilute the indoor air. By combining these three criteria, the indoor radon concentration can be lowered achieving an acceptable level. In addition, a cheap and reliable method for measuring the radon concentration in the indoor air is described. The provision on radon in the Danish Building Regulations complies with the latest recommendations from the World Health Organization. Radon can cause lung cancer and it is not known whether there is a lower limit for when it is not harmful to human beings. Therefore, it is important to reduce the radon concentration as much as possible in buildings. Airtightness is an important factor when dealing with buildings. It is important to avoid air leakages in the building envelope both facing the atmosphere, e.g. in compliance with energy requirements, but also facing the ground, to meet the requirements to ensure and control the indoor environment. Infiltration of air from the ground underneath a building is the main providing source of radon to the indoor air.

Keywords: Radon, natural radiation, barrier, pressure lowering, ventilation.

Downloads: 1190
1267 A Combined Conventional and Differential Evolution Method for Model Order Reduction

Authors: J. S. Yadav, N. P. Patidar, J. Singhai, S. Panda, C. Ardil

Abstract:

In this paper, a mixed method combining an evolutionary and a conventional technique is proposed for the reduction of Single Input Single Output (SISO) continuous systems into a Reduced Order Model (ROM). In the conventional technique, the combined advantages of the Mihailov stability criterion and the Continued Fraction Expansion (CFE) technique are employed: the reduced denominator polynomial is derived using the Mihailov stability criterion, and the numerator is obtained by matching the quotients of the Cauer second form of the continued fraction expansion. Then, retaining the numerator polynomial, the denominator polynomial is recalculated by an evolutionary technique. In the evolutionary method, the recently proposed Differential Evolution (DE) optimization technique is employed. The DE method is based on the minimization of the Integral Squared Error (ISE) between the transient responses of the original higher-order model and the reduced-order model for a unit step input. The proposed method is illustrated through a numerical example and compared with a ROM in which both the numerator and denominator polynomials are obtained by the conventional method, to show its superiority.
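A toy sketch of the evolutionary step, using SciPy's differential evolution to search a second-order denominator that minimizes the ISE between unit-step responses; the original system and the fixed ROM numerator below are hypothetical placeholders, and the Mihailov/CFE stage is not reproduced:

```python
import numpy as np
from scipy.signal import lti, step
from scipy.optimize import differential_evolution

# Hypothetical 4th-order original system; the 2nd-order ROM numerator is assumed
# fixed (e.g. from the Cauer CFE step) and only the denominator is searched.
num_full, den_full = [1, 10], [1, 9, 26, 24, 8]
num_rom = [0.083, 0.417]                      # placeholder numerator coefficients

t = np.linspace(0, 10, 500)
_, y_full = step(lti(num_full, den_full), T=t)

def ise(d):
    # Denominator s^2 + d[0]*s + d[1]; ISE between unit-step responses over [0, 10] s.
    _, y_rom = step(lti(num_rom, [1, d[0], d[1]]), T=t)
    return np.trapz((y_full - y_rom) ** 2, t)

result = differential_evolution(ise, bounds=[(0.1, 10), (0.1, 10)], seed=0)
print(result.x, result.fun)                   # optimised denominator and final ISE
```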

Keywords: Reduced Order Modeling, Stability, Mihailov Stability Criterion, Continued Fraction Expansions, Differential Evolution, Integral Squared Error.

Downloads: 2163