Search results for: interference checking
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 382

82 Door Fan Test in Data Processing Center at Portopalo Test Site

Authors: F. Noto, M. Castro, R. Garraffo, An. Mirabella, A. Rizzo, G. Cuttone

Abstract:

The door fan test is a verification procedure for the airtightness of a room; it is required after the installation of a saturation (total-flooding) extinguishing system and is made mandatory by the UNI 15004-1:2019 standard whenever a gas extinguishing system is designed and installed. The door fan test was carried out at the Portopalo di Capo Passero headquarters of the Southern National Laboratories and showed that the Data Processing Center (CED) fully meets the standard, passing the door fan test with excellent results. The Southern National Laboratories are a well-established research institution in the international scientific community. The CED at the Portopalo site has been expanded, so the extinguishing system was extended according to a detailed design. After verifying the correctness of the design and the absence of air leaks, we carried out the door fan test. The activities of the Laboratori Nazionali del Sud (LNS) are mainly devoted to basic research in nuclear physics and in nuclear and particle astrophysics. The Portopalo site will host some of the largest cabled submarine research infrastructures built in Europe and worldwide, such as KM3NeT and EMSO ERIC; in particular, the research laboratory in Portopalo will host the power supply and data acquisition systems of the underwater infrastructures, and a technological backbone will be created, unique in the Mediterranean, capable of connecting, at abyssal depths, dozens of real-time monitoring and research structures of the deep marine environment.

Keywords: KM3Net, fire protection, door fan test, CED.

81 A Power-Controlled Scheduling Scheme Using a Directional Antenna in Smart Home

Authors: Yongsun Kim, Hoyong Kang

Abstract:

This paper proposes a power-controlled scheduling scheme for devices using directional antennas in a smart home. In a home network using directional antennas, devices can transmit data concurrently in the same frequency band. Accordingly, the throughput increases, compared to that of devices using omni-directional antennas, in proportion to the number of concurrent transmissions. The number of concurrent transmissions depends on the antenna beamwidth, the number of devices operating in the network, the transmission power, interference, and so on. In particular, the lower the transmission power, the more concurrent transmissions can occur because of the smaller transmission range. In this paper, we consider a sub-optimal scheduling scheme for throughput maximization and power consumption minimization. In the scheme, each device is equipped with a directional antenna. Various beamwidths, path loss exponents, and antenna radiation efficiencies are considered. Numerical results show that the proposed scheme outperforms a scheduling scheme that uses directional antennas without power control.

Keywords: Mmwave WPANs, directional scheduling, power-controlled scheduling scheme, smart home.

80 Design and Implementation of Reed Solomon Encoder on FPGA

Authors: Amandeep Singh, Mandeep Kaur

Abstract:

Error correcting codes are used for the detection and correction of errors in digital communication systems. Error correcting coding is based on appending redundancy to the information message according to a prescribed algorithm. Reed Solomon codes are a form of channel coding that withstands the effects of noise, interference and fading. Galois field arithmetic is used for encoding and decoding Reed Solomon codes. Galois field multipliers and linear feedback shift registers (LFSRs) are used for encoding the information data block. The design of a Reed Solomon encoder is complex because of the use of an LFSR and Galois field arithmetic. The purpose of this paper is to design and implement a Reed Solomon (255, 239) encoder with an optimized, reduced number of Galois field multipliers. A symmetric generator polynomial is used to reduce the number of GF multipliers. To increase the error correction capability, convolutional interleaving will be used with the RS encoder. The design will be implemented on a Xilinx Spartan-II FPGA.
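As a rough software illustration of the arithmetic involved (not the paper's FPGA architecture), the sketch below builds GF(2^8) antilog/log tables, derives the degree-16 generator polynomial of RS(255, 239), and appends the 16 parity symbols by polynomial division, which is what the LFSR implements in hardware. The primitive polynomial 0x11D and the first generator root alpha^0 are common conventions assumed here.

# Minimal software sketch of RS(255, 239) systematic encoding over GF(2^8).
PRIM = 0x11D                      # assumed primitive polynomial x^8+x^4+x^3+x^2+1
EXP = [0] * 512
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= PRIM
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    """Multiplication in GF(2^8) via the antilog/log tables."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] ^= gf_mul(pi, qj)
    return r

def generator_poly(nsym=16):
    """g(x) = product of (x + alpha^i) for i = 0..nsym-1."""
    g = [1]
    for i in range(nsym):
        g = poly_mul(g, [1, EXP[i]])
    return g

def rs_encode(msg, nsym=16):
    """Systematic encoding: append nsym parity symbols (remainder of msg(x)*x^nsym / g(x))."""
    gen = generator_poly(nsym)
    rem = list(msg) + [0] * nsym
    for i in range(len(msg)):
        coef = rem[i]
        if coef != 0:
            for j in range(1, len(gen)):
                rem[i + j] ^= gf_mul(gen[j], coef)
    return list(msg) + rem[len(msg):]

codeword = rs_encode(list(range(239)))   # 239 data bytes -> 255-byte codeword
print(len(codeword), codeword[-16:])     # 255, followed by the 16 parity bytes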

Keywords: Galois Field, Generator polynomial, LFSR, Reed Solomon.

79 Analysis of the Interference from Risk-Determining Factors of Cooperative and Conventional Construction Contracts

Authors: E. Harrer, M. Mauerhofer, T. Werginz

Abstract:

The building sector suffers from intense competition and a high degree of rivalry. Furthermore, an unbalanced distribution of project risks can be observed: clients aim to shift their own risks into the sphere of the constructors or planners. As a consequence, the number of conflicts between the involved parties is inordinately high and even increasing. An alternative approach to counter these developments is the use of cooperative project forms in the construction sector. This research compares conventional contract models with models based on partnering agreements in order to examine how an early integration of the involved parties influences project risks. The goal is to identify deviations across the project stages, from the design phase to the project transfer phase. These deviations are evaluated through a survey of experts from three spheres: clients, contractors and planners. By rating the influence of the participants on specific risk factors, it is possible to identify the factors that are relevant for a smooth project execution.

Keywords: Collaborative work, construction industry, contract-models, influence, partnering, project management, risk.

78 A Cost Function for Joint Blind Equalization and Phase Recovery

Authors: Reza Berangi, Morteza Babaee, Majid Soleimanipour

Abstract:

In this paper, a new cost function for blind equalization is proposed. The proposed cost function, referred to as the modified maximum normalized cumulant criterion (MMNC), is an extension of the previously proposed maximum normalized cumulant criterion (MNC). While the MNC requires a separate phase recovery system after blind equalization, the MMNC performs joint blind equalization and phase recovery. To achieve this, the proposed algorithm maximizes a cost function that considers both the amplitude and the phase of the equalizer output. The simulation results show that the proposed algorithm achieves better channel equalization than the MNC algorithm and can simultaneously correct the phase error, which the MNC algorithm is unable to do. The simulation results also show that the MMNC algorithm has lower complexity than the MNC algorithm. Moreover, the MMNC algorithm outperforms the MNC algorithm particularly when the symbol block size is small.

Keywords: Blind equalization, maximum normalized cumulant criterion (MNC), intersymbol interference (ISI), modified MNC criterion (MMNC), phase recovery.

77 Search for Flavour Changing Neutral Current Couplings of Higgs-up Sector Quarks at Future Circular Collider (FCC-eh)

Authors: I. Turk Cakir, B. Hacisahinoglu, S. Kartal, A. Yilmaz, A. Yilmaz, Z. Uysal, O. Cakir

Abstract:

In the search for new physics beyond the Standard Model, Flavour Changing Neutral Current (FCNC) processes are a promising research field in terms of observability at future colliders. Increased Higgs production at colliders with higher energy and luminosity is essential for verifying or falsifying our knowledge of physics and its predictions, and for the search for new physics. FCC-eh is the prospective electron-proton collider constituent of the Future Circular Collider project; it offers great sensitivity due to its high luminosity and low interference. In this work, the thq FCNC interaction vertex with off-shell top quark decay at electron-proton colliders is studied. Using the MadGraph5_aMC@NLO multi-purpose event generator, the observability of the tuh and tch couplings is obtained in an equal-coupling scenario. The upper limit on the branching ratio of the tree-level top quark FCNC decay is determined as 0.012% at FCC-eh with 1 ab^-1 of integrated luminosity.

Keywords: FCC, FCNC, Higgs Boson, Top Quark.

76 Despiking of Turbulent Flow Data in Gravel Bed Stream

Authors: Ratul Das

Abstract:

The present experimental study provides insight into the decontamination of instantaneous velocity fluctuations captured by an Acoustic Doppler Velocimeter (ADV) in gravel-bed streams in order to ascertain near-bed turbulence at low Reynolds numbers. The interference between incident and reflected pulses produces spikes in the ADV data, especially in the near-bed flow zone, and therefore filtering the data is essential. A Nortek Vectrino four-receiver ADV probe was used to capture the instantaneous three-dimensional velocity fluctuations over a non-cohesive bed. A spike removal algorithm based on the acceleration threshold method was applied to assess the bed roughness and its influence on velocity fluctuations and velocity power spectra in the carrier fluid. The velocity power spectra of despiked signals with the best combination of velocity threshold (VT) and acceleration threshold (AT) are proposed, which yields velocity power spectra that fit the Kolmogorov "-5/3 scaling law" satisfactorily in the inertial sub-range. Also, velocity distributions below the roughness crest level fairly follow a third-degree polynomial series.
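A minimal sketch of acceleration-threshold despiking is shown below; the threshold multiplier, sampling rate, and the replacement of flagged samples by linear interpolation are illustrative assumptions, not the study's exact settings.

import numpy as np

def despike_accel(u, fs, lam=1.5, g=9.81):
    """Flag samples whose point-to-point acceleration exceeds lam*g and
    replace them by linear interpolation (illustrative settings only)."""
    u = np.asarray(u, dtype=float)
    a = np.zeros_like(u)
    a[1:] = np.diff(u) * fs                 # du/dt estimated from successive samples
    spikes = np.abs(a) > lam * g            # acceleration-threshold criterion
    good = ~spikes
    u_clean = u.copy()
    u_clean[spikes] = np.interp(np.flatnonzero(spikes),
                                np.flatnonzero(good), u[good])
    return u_clean, spikes

# Example with a synthetic 200 Hz ADV record containing two artificial spikes.
fs = 200.0
t = np.arange(0, 2, 1 / fs)
u = 0.3 + 0.02 * np.sin(2 * np.pi * 5 * t)
u[150] += 0.5
u[700] -= 0.4
clean, flags = despike_accel(u, fs)
print("samples flagged as spikes:", int(flags.sum()))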

Keywords: Acoustic Doppler Velocimeter, gravel-bed, spike removal, Reynolds shear stress, near-bed turbulence, velocity power spectra.

75 Template-Based Object Detection through Partial Shape Matching and Boundary Verification

Authors: Feng Ge, Tiecheng Liu, Song Wang, Joachim Stahl

Abstract:

This paper presents a novel template-based method to detect objects of interest in real images by shape matching. To locate a target object that has a shape similar to a given template boundary, the proposed method integrates three components: contour grouping, partial shape matching, and boundary verification. In the first component, low-level image features, including edges and corners, are grouped into a set of perceptually salient closed contours using an extended ratio-contour algorithm. In the second component, we develop a partial shape matching algorithm to identify the fractions of detected contours that partly match given template boundaries. Specifically, we represent template boundaries and detected contours using landmarks, and apply a greedy algorithm to search for matched landmark subsequences. For each matched fraction between a template and a detected contour, we estimate an affine transform that maps the whole template onto a hypothesized boundary. In the third component, we provide an efficient algorithm based on oriented edge lists to determine the target boundary from the hypothesized boundaries by checking each of them against the image edges. We evaluate the proposed method on recognizing and localizing 12 template leaves in a data set of real images with cluttered backgrounds, illumination variations, occlusions, and image noise. The experiments demonstrate the high performance of the proposed method.

Keywords: Object detection, shape matching, contour grouping.

74 An Evaluation of Carbon Dioxide Emissions Trading among Enterprises -The Tokyo Cap and Trade Program-

Authors: Hiroki Satou, Kayoko Yamamoto

Abstract:

This study aims to propose three evaluation methods to evaluate the Tokyo Cap and Trade Program when emissions trading is performed virtually among enterprises, focusing on carbon dioxide (CO2), which is the only emitted greenhouse gas that tends to increase. The first method clarifies the optimum reduction rate for the highest cost benefit, the second discusses emissions trading among enterprises through market trading, and the third verifies long-term emissions trading during the term of the plan (2010-2019), checking the validity of emissions trading partly by using Geographic Information Systems (GIS). The findings of this study can be summarized in the following three points. 1. Since the total cost benefit is greatest at a 44% reduction rate, the reduction rate can be set higher than that of the Tokyo Cap and Trade Program to obtain a greater total cost benefit. 2. At a 44% reduction rate, among 320 enterprises, 8 purchasing enterprises and 245 selling enterprises gain profits from emissions trading, and 67 enterprises perform voluntary reduction without conducting emissions trading. Therefore, to further promote emissions trading, it is necessary to increase the sales volumes of emissions trading, in addition to the number of selling enterprises, by increasing the number of purchasing enterprises. 3. Compared to short-term emissions trading, few enterprises benefit in each year of the long-term emissions trading of the Tokyo Cap and Trade Program; only 81 enterprises at most can gain profits from emissions trading in FY 2019. Therefore, by setting a higher reduction rate, it is necessary to increase the number of enterprises that participate in emissions trading and benefit from the restraint of CO2 emissions.

Keywords: Emissions Trading, Tokyo Cap and Trade Program, Carbon Dioxide (CO2), Global Warming, Geographic Information Systems (GIS)

73 Hypothesis of a Holistic Treatment of Cancer: Crab Method

Authors: Devasis Ghosh

Abstract:

The main hindrances to the total cure of cancer are a) the failure to control the continued production of cancer cells, b) their sustenance, and c) their metastasis. This review study addresses the issue of total cancer cure in a more innovative way. A 10-pronged "CRAB METHOD", a novel holistic scientific approach to cancer treatment, is hypothesized in this paper. Apart from the available chemotherapy, radiotherapy and oncosurgery (which shall not be discussed here), seven other points of interference and treatment are suggested: 1. Efficient stress management. 2. Dampening of ATF3 expression. 3. Selective inhibition of platelet activity. 4. Modulation of serotonin production and metabolism, and 5-HT receptor antagonism. 5. Auxin, its anti-proliferative potential and its modulation. 6. Melatonin supplementation because of its oncostatic properties. 7. HDAC inhibitors, especially valproic acid, due to their apoptotic role in many cancers. If all the above seven steps are thoroughly taken care of at the time of the initial diagnosis of cancer, along with the available treatment modalities of chemotherapy, radiotherapy and oncosurgery, then perhaps the morbidity and mortality rates of cancer may be greatly reduced.

Keywords: ATF3 dampening, auxin modulation, cancer, platelet activation, serotonin, stress, valproic acid.

72 Localizing and Recognizing Integral Pitches of Cheque Document Images

Authors: Bremananth R., Veerabadran C. S., Andy W. H. Khong

Abstract:

Automatic reading of handwritten cheques is a computationally complex process, and it plays an important role in financial risk management. Machine vision and learning provide a viable solution to this problem. Research effort has mostly been focused on recognizing diverse pitches of cheques and demand drafts with an identical outline. However, most of these methods employ template matching to localize the pitches, and such schemes could potentially fail when applied to the different types of outline maintained by each bank. In this paper, the so-called outline problem is resolved by a cheque information tree (CIT), which generalizes the localization method to extract active regions of entities. In addition, a weight-based density plot (WBDP) is performed to isolate text entities and read complete pitches. Recognition is based on texture features using neural classifiers. The legal amount is subsequently recognized by both texture and perceptual features. A post-processing phase is invoked to detect incorrect readings by a Type-2 grammar using a Turing machine. The performance of the proposed system was evaluated using cheques and demand drafts of 22 different banks. The test data consist of a collection of 1540 leaves obtained from 10 different account holders from each bank. Results show that this approach can easily be deployed without significant design amendments.

Keywords: Cheque reading, Connectivity checking, Text localization, Texture analysis, Turing machine, Signature verification.

71 Automated Video Surveillance System for Detection of Suspicious Activities during Academic Offline Examination

Authors: G. Sandhya Devi, G. Suvarna Kumar, S. Chandini

Abstract:

This research work aims to develop a system that analyzes and identifies students who indulge in malpractices or suspicious activities during an academic offline examination. Automated video surveillance provides an optimal solution that helps in monitoring the students and identifying a malpractice event immediately. This work is organized into three modules. The first module performs an impersonation check using a PCA-based face recognition method by cross-checking each student's profile with the database. The presence or absence of the student is also determined in this module by implementing an image registration technique, wherein a grid is formed by considering all the images registered using the frontal camera at the determined positions. The second module detects facial malpractices, such as a student engaging in conversation with another or trying to obtain unauthorized information, based on a threshold range evaluated by considering whether his or her mouth is open or closed. The third module deals with the identification of unauthorized material or gadgets used in the examination hall by training positive samples of the object through various stages. Here, a top-view camera feed is analyzed to detect the suspicious activities. The system automatically alerts the administration when any suspicious activity is identified, thereby reducing the error rate caused by manual monitoring. This work is an improvement over our previously published work on identifying suspicious activities by examinees in an offline examination.

Keywords: Impersonation, image registration, incrimination, object detection, threshold evaluation.

70 Effects of Dopant Concentrations on Radiative Properties of Nanoscale Multilayer with Coherent Formulation for Visible Wavelengths

Authors: S. A. A. Oloomi, M. Omidpanah

Abstract:

Semiconductor materials with coatings have a wide range of applications in MEMS and NEMS. This work uses the transfer-matrix method for calculating the radiative properties. Doped silicon is used and the coherent formulation is applied. The Drude model for the optical constants of doped silicon is employed. Results showed that, for visible wavelengths, more emittance occurs at greater dopant concentrations and the reflectance decreases as the concentration increases. At these wavelengths, transmittance is negligible. Donors and acceptors behave similarly at visible wavelengths. The effect of wave interference can be understood by plotting spectral properties such as the reflectance or transmittance of a thin dielectric film versus the film thickness and analyzing the oscillations of the properties due to constructive and destructive interference. However, this effect is not observed at visible wavelengths. At room temperature, the scattering process is dominated by lattice scattering for lightly doped silicon, while impurity scattering becomes important for heavily doped silicon when the dopant concentration exceeds 10^18 cm^-3.
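For reference, a commonly used Drude form for the dielectric function of doped silicon (a standard textbook expression; the donor/acceptor parameter values used in the paper are not reproduced here) is

\varepsilon(\omega) = \varepsilon_{bl}(\omega) - \frac{\omega_p^2}{\omega^2 + i\,\omega/\tau}, \qquad \omega_p^2 = \frac{N e^2}{\varepsilon_0 m^*},

where \varepsilon_{bl} accounts for bound electrons and the lattice, N is the carrier (dopant) concentration, m^* the carrier effective mass, and \tau the total scattering time combining lattice and impurity scattering; the growing importance of impurity scattering above 10^18 cm^-3 enters through \tau.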

Keywords: Dopant Concentrations, Radiative Properties, Nanoscale Multilayer, Coherent Formulation, Visible Wavelengths

69 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models

Authors: I. V. Pinto, M. R. Sooriyarachchi

Abstract:

It can frequently be observed that data arising in our environment have a hierarchical or nested structure. Multilevel modelling is a modern approach to handling this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex in nature, and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood of order 1 and order 2 (MQL1, MQL2) and penalized quasi-likelihood of order 1 and order 2 (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset. Therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to usage, it is equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary-response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v2.19) with varying numbers of clusters, cluster sizes and intra-cluster correlations. The test maintained the desirable Type-I error for models estimated using PQL2, and it failed for almost all combinations of MQL. The power of the test was adequate for most of the combinations under all estimation methods except MQL1. Moreover, models were fitted to a real-life dataset using the four methods, and the performance of the test was compared for each model.

Keywords: Goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, type-I error, penalized quasi-likelihood, power, quasi-likelihood.

68 Error Detection and Correction for Onboard Satellite Computers Using Hamming Code

Authors: Rafsan Al Mamun, Md. Motaharul Islam, Rabana Tajrin, Nabiha Noor, Shafinaz Qader

Abstract:

In an attempt to enrich the lives of billions of people by providing proper information, security, and a way of communicating with others, the need for efficient and improved satellites is constantly growing. Thus, there is an increasing demand for better error detection and correction (EDAC) schemes capable of protecting the data onboard satellites. This paper is aimed at detecting and correcting such errors using a special algorithm called the Hamming code, which uses the concept of parity and parity bits to protect against single-bit errors onboard a satellite in Low Earth Orbit. This paper focuses on the study of Low Earth Orbit satellites and the process of generating the Hamming code matrix to be used for EDAC using computer programs. The most effective version of the Hamming code generated was the Hamming (16, 11, 4) version using MATLAB, and the paper compares this particular scheme with other EDAC mechanisms, including other versions of Hamming codes and the Cyclic Redundancy Check (CRC), and discusses the limitations of this scheme. This version of the Hamming code guarantees single-bit error correction as well as double-bit error detection. Furthermore, it has proved to be fast, with a checking time of 5.669 nanoseconds, has a relatively higher code rate and lower bit overhead compared to the other versions, and can detect a greater percentage of errors per code length than other EDAC schemes with similar capabilities. In conclusion, with proper implementation of the system, it is quite possible to ensure a relatively uncorrupted satellite storage system.
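As an illustrative software model (not the authors' MATLAB matrix construction), the sketch below encodes 11 data bits into a 16-bit extended Hamming word, with parity bits at the power-of-two positions plus one overall parity bit, and decodes it with single-error correction and double-error detection (SECDED).

# Extended Hamming (16, 11) SECDED sketch: positions 1..15 carry the Hamming(15, 11)
# word (parity at positions 1, 2, 4, 8); position 0 holds the overall parity bit.

def hamming16_encode(data_bits):
    assert len(data_bits) == 11
    word = [0] * 16
    data_pos = [p for p in range(1, 16) if p not in (1, 2, 4, 8)]
    for p, bit in zip(data_pos, data_bits):
        word[p] = bit
    for p in (1, 2, 4, 8):                       # each parity bit covers positions whose index has that bit set
        word[p] = sum(word[i] for i in range(1, 16) if i & p) % 2
    word[0] = sum(word[1:]) % 2                  # overall parity enables double-error detection
    return word

def hamming16_decode(word):
    syndrome = 0
    for i in range(1, 16):
        if word[i]:
            syndrome ^= i                        # XOR of indices of set bits
    overall_ok = sum(word) % 2 == 0
    if syndrome == 0 and overall_ok:
        return word, "no error"
    if not overall_ok:                           # odd number of flipped bits -> correct the single error
        word = word.copy()
        word[syndrome] ^= 1                      # syndrome 0 here means the overall-parity bit itself flipped
        return word, "single error corrected"
    return word, "double error detected"

data = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
cw = hamming16_encode(data)
corrupted = cw.copy()
corrupted[9] ^= 1                                # flip one bit in transit
fixed, status = hamming16_decode(corrupted)
print(status, fixed == cw)                       # "single error corrected True"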

Keywords: Bit-flips, Hamming code, low earth orbit, parity bits, satellite, single error upset.

67 The Impact of Regulatory Changes on the Development of Mobile Medical Apps

Authors: M. McHugh, D. Lillis

Abstract:

Mobile applications are being used to perform a wide variety of tasks in day-to-day life, ranging from checking email to controlling home heating. Application developers have recognized the potential to transform a smart device, i.e. a mobile phone or a tablet, into a medical device by using a mobile medical application. When initially conceived, these mobile medical applications performed basic functions, e.g. BMI calculation or accessing reference material; however, increasing complexity now offers clinicians and patients a wide range of functionality. As this complexity and functionality increases, so too does the potential risk associated with using such an application. Examples include applications that provide the ability to inflate and deflate blood pressure cuffs, as well as applications that use patient-specific parameters to calculate dosages or create a dosage plan for radiation therapy. If an unapproved mobile medical application is marketed by a medical device organization, it faces significant penalties, such as receiving an FDA warning letter to cease the prohibited activity, fines, and the possibility of a criminal conviction. Regulatory bodies have finalized guidance intended to help mobile application developers establish whether their applications are subject to regulatory scrutiny. However, regulatory controls appear to contradict the approaches taken by mobile application developers, who generally work with short development cycles and very little documentation, and as such there is the potential for these regulations to stifle further improvements. The research presented in this paper details how, by adopting development techniques such as agile software development, mobile medical application developers can meet regulatory requirements whilst still fostering innovation.

Keywords: Medical, mobile, applications, software Engineering, FDA, standards, regulations, agile.

66 Integrated Grey Rational Analysis-Standard Deviation Method for Handover in Heterogeneous Networks

Authors: Mohanad Alhabo, Naveed Nawaz, Mahmoud Al-Faris

Abstract:

The dense deployment of small cells is a promising solution to enhance the coverage and capacity of heterogeneous networks (HetNets). However, unplanned deployment can bring new challenges to the network, including interference, unnecessary handovers and handover failures. This causes a degradation in the quality of service (QoS) delivered to the end user. In this paper, we propose an integrated Grey Rational Analysis Standard Deviation based handover method (GRA-SD) for HetNets. The proposed method integrates the Standard Deviation (SD) technique to acquire the weights of the handover metrics and the GRA method to select the best handover base station. The performance of the GRA-SD method is evaluated and compared with traditional Multiple Attribute Decision Making (MADM) methods, including the Simple Additive Weighting (SAW) and VIKOR methods. Results reveal that the proposed method outperforms the other methods in terms of minimizing the number of frequent unnecessary handovers and handover failures, in addition to improving the energy efficiency.
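A minimal numerical sketch of the idea is shown below: standard-deviation weighting of the handover metrics followed by grey relational ranking of the candidate base stations. The metric names, normalization directions, and the distinguishing coefficient of 0.5 are illustrative assumptions, not the paper's exact formulation.

import numpy as np

# Rows: candidate base stations; columns: hypothetical handover metrics
# (RSRP in dBm, SINR in dB, load in %; RSRP/SINR are benefit attributes, load is a cost attribute).
X = np.array([[-85.0, 12.0, 40.0],
              [-92.0, 18.0, 70.0],
              [-80.0,  9.0, 55.0]])
benefit = np.array([True, True, False])

# Min-max normalization (flip the cost attribute so larger is always better).
Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
Xn[:, ~benefit] = 1.0 - Xn[:, ~benefit]

# Standard-deviation objective weighting of the metrics.
sd = Xn.std(axis=0)
w = sd / sd.sum()

# Grey relational coefficients against the ideal sequence (all ones), zeta = 0.5.
zeta = 0.5
delta = np.abs(1.0 - Xn)
xi = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Grey relational grade: weighted sum; the candidate with the largest grade is chosen.
grade = xi @ w
print("weights:", np.round(w, 3))
print("grades:", np.round(grade, 3), "-> choose BS", int(grade.argmax()))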

Keywords: Energy efficiency, handover, HetNets, MADM, small cells.

65 Ultra-Wideband Slot Antenna with Notched Band for World Interoperability for Microwave Access

Authors: Rezaul Azim, A. Toaha Mobashsher, M. Tariqul Islam

Abstract:

In this paper, a novel ultra-wideband (UWB) slot antenna with band-notch characteristics for worldwide interoperability for microwave access (WiMAX) is proposed. The designed antenna consists of a rectangular radiating patch and a ground plane with a tapered slot. To realize a notch band, a curved parasitic element has been etched out along with the radiating patch. It is observed that, by adjusting the length, thickness and position of the parasitic element, the proposed antenna can achieve an impedance bandwidth of 8.01 GHz (2.84 to 10.85 GHz) with a notched band of 3.28-3.85 GHz. Compared to recently reported band-notch antennas, the proposed antenna has a simple configuration to realize band-notch characteristics in order to mitigate the potential interference between WiMAX and UWB systems. Furthermore, a stable radiation pattern and moderate gain, except at the notched band, make the proposed antenna suitable for various UWB applications.

Keywords: Band notch, Filter element, Ultra-wideband (UWB), WiMAX.

64 Sidelobe Reduction in Cognitive Radio Systems Using Hybrid Technique

Authors: Atif Elahi, Ijaz Mansoor Qureshi, Mehreen Atif, Noor Gul

Abstract:

Orthogonal frequency division multiplexing (OFDM) is one of the best candidates for dynamic spectrum access due to its flexibility in spectrum shaping. However, the high sidelobes of the OFDM signal, which result in high out-of-band radiation, introduce significant interference to users operating in its vicinity. This problem becomes more critical in a cognitive radio (CR) system that enables secondary users (SUs) to access the spectrum holes not used by the primary users (PUs) at that time. In this paper, we present a generalized OFDM framework that is capable of describing any sidelobe suppression technique, regardless of whether one or several techniques are used. Based on that framework, we propose the cancellation carrier (CC) technique in conjunction with the generalized sidelobe canceller (GSC) to reduce the out-of-band radiation in the region where the licensed users are operating. Simulation results show that the proposed technique reduces the out-of-band radiation better than the existing techniques found in the literature.

Keywords: Cognitive radio, cancellation carriers, generalized sidelobe canceller, out-of-band radiation, orthogonal frequency division multiplexing.

63 Design and Simulation of CCM Boost Converter for Power Factor Correction Using Variable Duty Cycle Control

Authors: M. Nirmala

Abstract:

Power quality in terms of power factor, THD and a precisely regulated output voltage are the major key factors for the efficient operation of power electronic converters. This paper presents an easy and effective active wave-shaping control scheme for the pulsed input current drawn by an uncontrolled diode bridge rectifier, thereby achieving a power factor near unity and also satisfying the THD specifications. It also regulates the output DC-bus voltage. CCM boost power factor correction with constant-frequency operation features a smaller inductor current ripple, resulting in low RMS currents in the inductor and switch and thus leading to low electromagnetic interference. The objective of this work is to develop an active PFC control circuit using a CCM boost converter implementing variable duty cycle control. The proposed scheme eliminates the inductor current sensing requirement, yet offers good performance and satisfactory results for maintaining the power quality. Simulation results are presented, which also cover load changes.
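The variable duty cycle idea can be summarized with the ideal steady-state CCM boost relation (a textbook relation, not the paper's exact control law):

V_o = \frac{v_{in}(t)}{1 - d(t)} \quad\Rightarrow\quad d(t) = 1 - \frac{|V_m \sin(\omega t)|}{V_o},

so the duty cycle is largest near the zero crossings of the rectified line voltage and smallest near its peak; following this profile over the line cycle shapes the average inductor (input) current toward a sinusoid without sensing it directly.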

Keywords: CCM Boost converter, Power factor Correction, Total harmonic distortion, Variable Duty Cycle.

62 A Comparison of Adaline and MLP Neural Network based Predictors in SIR Estimation in Mobile DS/CDMA Systems

Authors: Nahid Ardalani, Ahmadreza Khoogar, H. Roohi

Abstract:

In this paper, we compare the responses of linear and nonlinear neural-network-based prediction schemes for the prediction of the received Signal-to-Interference power Ratio (SIR) in Direct Sequence Code Division Multiple Access (DS/CDMA) systems. The nonlinear predictor is a multilayer perceptron (MLP), and the linear predictor is an adaptive linear (Adaline) predictor. We address the complexity problem by using the Minimum Mean Squared Error (MMSE) principle to select the optimal predictors. The optimized Adaline predictor is compared to the optimized MLP by employing noisy Rayleigh fading signals with a 1.8 GHz carrier frequency in an urban environment. The results show that the Adaline predictor estimates the SIR with the same error as the MLP when the user velocity is 5 km/h and 60 km/h, but when the velocity increases to 120 km/h the mean squared error of the MLP is twice that of the Adaline predictor. This makes the Adaline predictor (with its lower complexity) more suitable than the MLP for closed-loop power control, where efficient and accurate identification of the time-varying inverse dynamics of the multipath fading channel is required.
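A minimal one-step-ahead Adaline predictor over a window of past SIR samples is sketched below, using a normalized LMS update for stable adaptation; the window length, step size, and the synthetic slowly varying SIR trace are illustrative assumptions, not the paper's settings.

import numpy as np

def adaline_predict(sir, order=4, mu=0.5):
    """One-step-ahead prediction: the weight vector is adapted from the error
    between the actual sample and the prediction formed from the previous
    `order` samples (normalized LMS update)."""
    w = np.zeros(order)
    preds = np.zeros_like(sir)
    for n in range(order, len(sir)):
        x = sir[n - order:n][::-1]                  # most recent sample first
        preds[n] = w @ x
        e = sir[n] - preds[n]
        w += (mu / (x @ x + 1e-9)) * e * x          # normalized LMS weight update
    return preds, w

# Synthetic noisy, slowly varying SIR trace (dB) standing in for measured data.
rng = np.random.default_rng(1)
t = np.arange(2000)
sir = 10 + 3 * np.sin(2 * np.pi * t / 400) + rng.normal(0, 0.3, t.size)
preds, w = adaline_predict(sir)
mse = np.mean((sir[4:] - preds[4:]) ** 2)
print("prediction MSE:", round(float(mse), 3))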

Keywords: Power control, neural networks, DS/CDMA mobile communication systems.

61 Optimal Channel Equalization for MIMO Time-Varying Channels

Authors: Ehab F. Badran, Guoxiang Gu

Abstract:

We consider optimal channel equalization for MIMO (multi-input/multi-output) time-varying channels in the sense of MMSE (minimum mean-squared error), where the observation noise can be non-stationary. We show that all ZF (zero-forcing) receivers can be parameterized in an affine form which completely eliminates the ISI (inter-symbol interference), and optimal channel equalizers can be designed through minimization of the MSE (mean-squared error) between the detected signals and the transmitted signals, among all ZF receivers. We demonstrate that the optimal channel equalizer is a modified Kalman filter, and show that under the AWGN (additive white Gaussian noise) assumption, the proposed optimal channel equalizer minimizes the BER (bit error rate) among all possible ZF receivers. Our results are applicable to optimal channel equalization for DWMT (discrete wavelet multitone), multirate transmultiplexers, OFDM (orthogonal frequency division multiplexing), and DS-CDMA (direct sequence code division multiple access) wireless data communication systems. A design algorithm for optimal channel equalization is developed, and several simulation examples are worked out to illustrate the proposed design algorithm.

Keywords: Channel equalization, Kalman filtering, Time-varying systems.

60 Capacity Optimization for Local and Cooperative Spectrum Sensing in Cognitive Radio Networks

Authors: Ayman A. El-Saleh, Mahamod Ismail, Mohd. A. M. Ali, Ahmed N. H. Alnuaimy

Abstract:

Dynamic spectrum allocation solutions such as cognitive radio networks have been proposed as a key technology to exploit frequency segments that are spectrally underutilized. Cognitive radio users work as secondary users who need to constantly and rapidly sense the presence of primary users, or licensees, in order to utilize their frequency bands when they are inactive. Short sensing cycles should be run by the secondary users to achieve higher throughput rates as well as to keep the interference to the primary users low by immediately vacating their channels once they have been detected. In this paper, the throughput-sensing time relationship in local and cooperative spectrum sensing is investigated under two distinct scenarios, namely the constant primary user protection (CPUP) and constant secondary user spectrum usability (CSUSU) scenarios. The simulation results show that the design of the sensing slot duration is very critical and depends on the number of cooperating users under the CPUP scenario, whereas under CSUSU, cooperating more users has no effect if the sensing time used exceeds 5% of the total frame duration.
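For context, a commonly used formulation of the sensing-throughput trade-off (a standard expression from the literature, not necessarily the exact one optimized here) is

R(\tau) = \frac{T-\tau}{T}\left[ C_0 \left(1 - P_f(\tau)\right) P(H_0) + C_1 \left(1 - P_d(\tau)\right) P(H_1) \right],

where T is the frame duration, \tau the sensing time, C_0 and C_1 the secondary-user throughputs when the primary user is absent or present, P_f and P_d the false-alarm and detection probabilities, and P(H_0), P(H_1) the probabilities that the primary user is inactive or active. In this framing, constant primary user protection corresponds to fixing P_d, while constant secondary-user spectrum usability corresponds to fixing P_f.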

Keywords: Capacity, cognitive radio, optimization, spectrum sensing.

59 Targeting the Pulmonary Delivery via Optimizing Physicochemical Characteristics of Instilled Liquid and Exploring Distribution of Produced Liquids by Bench-Top Models and Scintigraphy of Rabbits' Lungs

Authors: Mohammad Nasri, Hossein Mirshekarpour

Abstract:

We aimed to investigate how the distribution of pulmonary delivery can be targeted and optimized by changing the physicochemical characteristics of the instilled liquid. Therefore, we created a new group of liquids that are: a) eligible for the desired distribution within the lung because of assorted physicochemical characteristics; b) capable of being augmented with a broad range of chemicals inertly; c) without interference with respiratory function; and d) compatible with the airway surface liquid. We developed forty types of new liquid composed of carboxymethylcellulose sodium, glycerin and different types of polysorbates. Viscosity was measured using a programmable rheometer and surface tension with a KRUSS tensiometer. We subsequently examined the liquids and delivery protocols with simple and branched glass capillary tube models of the airways. Eventually, we explored the pulmonary distribution of liquids augmented with technetium-99m in mechanically ventilated rabbits, using a single-head large-field-of-view gamma camera. Kinematic viscosity between 0.265 and 0.289 Stokes, density between 1 g/cm3 and 1.5 g/cm3, and surface tension between 25 dyn/cm and 35 dyn/cm were the most acceptable.

Keywords: Pulmonary delivery, Liquid instillation into airway, Physicochemical characteristics, Optimal distribution.

58 Computational Model for Predicting Effective siRNA Sequences Using Whole Stacking Energy (% G) for Gene Silencing

Authors: Reena Murali, David Peter S.

Abstract:

Small interfering RNA (siRNA) alters the regulatory role of mRNA during gene expression by translational inhibition. Recent studies show that upregulation of mRNA causes serious diseases such as cancer, so designing effective siRNAs with good knockdown effects plays an important role in gene silencing. Various siRNA design tools have been developed earlier. In this work, we analyze the existing well-scoring second-generation siRNA prediction tools and try to optimize the efficiency of siRNA prediction by designing a computational model using an Artificial Neural Network and the whole stacking energy (%G), which may help in gene silencing and drug design for cancer therapy. Our model is trained and tested against a large data set of siRNA sequences. Validation of our results is done by finding the correlation coefficient between the experimental and predicted inhibition efficacies of the siRNAs. We achieved a correlation coefficient of 0.727 in our previous computational model, and we could improve the correlation coefficient up to 0.753 when the threshold of the whole stacking energy is greater than or equal to -32.5 kcal/mol.
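A minimal sketch of the validation step is shown below: a small neural network regressor is trained on hypothetical feature vectors (sequence-derived features plus the whole stacking energy) and the Pearson correlation between predicted and experimental inhibition efficacies is reported. All data here are synthetic placeholders, not the siRNA dataset used in the paper.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from scipy.stats import pearsonr

# Hypothetical data: each row is a feature vector for one siRNA
# (e.g. position-wise nucleotide encodings plus whole stacking energy, %G);
# y is the measured inhibition efficacy.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = 0.6 * X[:, -1] + 0.2 * X[:, 0] + rng.normal(0, 0.3, 500)   # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)

r, _ = pearsonr(y_te, model.predict(X_te))      # correlation of observed vs. predicted efficacy
print("Pearson correlation coefficient:", round(float(r), 3))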

Keywords: Artificial Neural Network, Double Stranded RNA, RNA Interference, Short Interfering RNA.

57 Image Features Comparison-Based Position Estimation Method Using a Camera Sensor

Authors: Jinseon Song, Yongwan Park

Abstract:

In this paper, we propose a method that estimates a user's position based on an image database built from a single camera. Previous positioning approaches calculate distance from the arrival time of signals, as in GPS (Global Positioning System) or RF (Radio Frequency) systems. However, these methods have a weakness: they suffer from a large error range due to signal interference. A camera sensor offers a way around this problem, but a single camera makes it difficult to obtain relative position data, and a stereo camera makes it difficult to provide real-time position data because of the large amount of image data. First, in this research we build an image database of the space in which the positioning service is to be provided, using a single camera. Next, we judge similarity through image matching between the database images and the image transmitted by the user. Finally, we decide the position of the user from the position of the most similar database image. To verify the proposed method, we experimented in real environments, both indoor and outdoor. The proposed method has a wide positioning range and can determine not only the position of the user but also the direction.
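The keywords mention SURF, which ships in the opencv-contrib package; the sketch below uses ORB from stock OpenCV as a stand-in to illustrate the database-matching step. The file names, stored positions and the "most good matches wins" rule are illustrative assumptions, not the paper's exact pipeline.

import cv2

# Hypothetical database: image file -> known camera position (x, y) and facing direction.
database = {
    "db_lobby.jpg":  ((0.0, 0.0), "north"),
    "db_hall_a.jpg": ((5.0, 2.0), "east"),
    "db_hall_b.jpg": ((9.0, 7.5), "south"),
}

def estimate_position(query_path, ratio=0.75):
    orb = cv2.ORB_create(nfeatures=1000)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING)
    q_img = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    _, q_des = orb.detectAndCompute(q_img, None)

    best_name, best_score = None, -1
    for name in database:
        d_img = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
        _, d_des = orb.detectAndCompute(d_img, None)
        if q_des is None or d_des is None:
            continue
        matches = bf.knnMatch(q_des, d_des, k=2)
        # Lowe's ratio test: keep only matches clearly better than their runner-up.
        good = [p[0] for p in matches if len(p) == 2 and p[0].distance < ratio * p[1].distance]
        if len(good) > best_score:
            best_name, best_score = name, len(good)

    position, direction = database[best_name]   # position of the most similar database image
    return position, direction, best_name

# Example (requires the image files above): print(estimate_position("user_photo.jpg"))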

Keywords: Positioning, Distance, Camera, Features, SURF (Speed-Up Robust Features), Database, Estimation.

56 An Automatic Pipeline Monitoring System Based on PCA and SVM

Authors: C. Wan, A. Mita

Abstract:

This paper proposes a novel system for monitoring the health of underground pipelines. Some of these pipelines transport dangerous contents, and any damage incurred might have catastrophic consequences. However, most of this damage is unintentional and is usually a result of surrounding construction activities. In order to prevent such potential damage, monitoring systems are indispensable. This paper focuses on acoustically recognizing road cutters, since they precede most construction activities in modern cities. Acoustic recognition can be achieved by installing a distributed computing sensor network along the pipelines and using smart sensors to "listen" for potential threats and, if there is a real threat, raise some form of alarm. For efficient pipeline monitoring, a novel monitoring approach is proposed. Principal Component Analysis (PCA) was studied and applied: eigenvalues were regarded as the special signature characterizing a sound sample and were thus used as the feature vector for sound recognition. The denoising ability of PCA makes it robust to noise interference. A one-class SVM was used as the classifier. On-site experiment results show that the proposed PCA and SVM based acoustic recognition system is very effective, with a low tendency to raise false alarms.
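A compact scikit-learn sketch of the recognition pipeline is given below: PCA fitted on framed recordings of the target sound, a one-class SVM trained on the PCA features, and new frames scored as inliers (threat-like) or outliers. The frame length, number of components, the nu parameter and the synthetic signals are illustrative assumptions, and the projection-based features stand in for the eigenvalue signature described above.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import OneClassSVM

FRAME = 1024  # samples per analysis frame (illustrative)

def frame_signal(x, frame=FRAME):
    n = len(x) // frame
    return np.reshape(x[:n * frame], (n, frame))

# Hypothetical training audio of the target sound (road cutter); in practice this
# would be recorded by the smart sensors deployed along the pipeline.
rng = np.random.default_rng(0)
cutter = np.sin(2 * np.pi * 0.11 * np.arange(60_000)) + 0.1 * rng.normal(size=60_000)
X_train = frame_signal(cutter)

# PCA acts as feature extractor and denoiser: each frame is described by its
# projection onto the leading principal directions of the training frames.
pca = PCA(n_components=10).fit(X_train)
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(pca.transform(X_train))

# New observation: pure background noise should mostly fall outside the learned class.
noise = rng.normal(size=20_000)
scores = clf.predict(pca.transform(frame_signal(noise)))   # +1 inlier, -1 outlier
print("threat-like frames:", int(np.sum(scores == 1)), "of", len(scores))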

Keywords: One class SVM, pipeline monitoring system, principal component analysis, sound recognition, third party damage.

55 Impulse Response Shortening for Discrete Multitone Transceivers using Convex Optimization Approach

Authors: Ejaz Khan, Conor Heneghan

Abstract:

In this paper, we propose a new criterion for solving the problem of channel shortening in multi-carrier systems. In a discrete multitone receiver, a time-domain equalizer (TEQ) reduces intersymbol interference (ISI) by shortening the effective duration of the channel impulse response. The minimum mean square error (MMSE) method for TEQ design does not give satisfactory results. In [1], a new criterion was introduced for partially equalizing severe ISI channels to reduce the cyclic prefix overhead of the discrete multitone transceiver (DMT), assuming a fixed transmission bandwidth. Due to a specific constraint in that method (a unit-norm constraint on the target impulse response (TIR)), the freedom to choose the optimum TIR vector is reduced. Better results can be obtained by avoiding the unit-norm constraint on the TIR. In this paper, we change the cost function proposed in [1] to one of maximizing a determinant subject to a linear matrix inequality (LMI) and a quadratic constraint, and we solve the resulting optimization problem. The usefulness of the proposed method is shown with the help of simulations.
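The class of problems involved, maximizing a determinant (equivalently its logarithm) subject to an LMI and additional convex constraints, can be prototyped in a few lines with CVXPY; the data below are random placeholders, not the DMT channel-shortening matrices from the paper.

import cvxpy as cp
import numpy as np

# Hypothetical problem data: a symmetric positive definite matrix A defining
# a trace constraint that stands in for the paper's quadratic/LMI constraints.
np.random.seed(0)
n = 4
A = np.random.randn(n, n)
A = A @ A.T + np.eye(n)

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0,                   # LMI: X must be positive semidefinite
               cp.trace(A @ X) <= 1]     # placeholder linear constraint

# Maximizing log det(X) is the standard convex reformulation of maximizing det(X).
problem = cp.Problem(cp.Maximize(cp.log_det(X)), constraints)
problem.solve()
print("optimal value:", problem.value)
print("optimal X:\n", np.round(X.value, 4))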

Keywords: Equalizer, target impulse response, convex optimization, matrix inequality.

54 Theoretical Analysis of Capacities in Dynamic Spatial Multiplexing MIMO Systems

Authors: Imen Sfaihi, Noureddine Hamdi

Abstract:

In this paper, we study techniques for scheduling users for resource allocation in multiple-input multiple-output (MIMO) packet transmission systems. In these systems, the transmit antennas are assigned to one user or dynamically to different users using spatial multiplexing. Allocating all transmit antennas to one user cannot take full advantage of multi-user diversity; therefore, we consider the case in which resources are allocated dynamically. In each time slot, users have to feed back their channel information on an uplink feedback channel. The channel information assumed available to the schedulers is the zero-forcing (ZF) post-detection signal-to-interference-plus-noise ratio. Our analysis concerns the round robin and the opportunistic schemes, for which we present an overview and a complete capacity analysis. The main result of our study is an analytical form of the system capacity using the ZF receiver at the user terminal. Simulations have been carried out to validate all the proposed analytical solutions and to compare the performance of the schemes.
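As background for the capacity analysis, the post-detection signal-to-interference-plus-noise ratio of the k-th spatial stream after ZF detection of a channel matrix H, with per-stream transmit power P and noise variance \sigma^2, takes the standard form

\gamma_k^{ZF} = \frac{P}{\sigma^2 \left[(H^{H} H)^{-1}\right]_{kk}},

which is the per-stream quantity the users feed back on the uplink for both the round robin and the opportunistic schedulers.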

Keywords: MIMO, scheduling, ZF receiver, spatial multiplexing, round robin scheduling, opportunistic.

53 Attribution Theory and Perceived Reliability of Cellphones for Teaching and Learning

Authors: Mayowa A. Sofowora, Seraphim D. Eyono Obono

Abstract:

The use of information and communication technologies (ICTs), such as computers, mobile phones and the Internet, is becoming prevalent in today's world, and it is facilitating access to a vast amount of data, services and applications for the improvement of people's lives. However, this prevalence of ICTs is hampered by the problem of low income levels in developing countries, to the point where people cannot timeously replace or repair their ICT devices when they are damaged or lost. This problem serves as the motivation for this study, whose aim is to examine the perceptions of teachers on the reliability of cellphones when used for teaching and learning purposes. The research objectives unfolding this aim are of two types: objectives on the selection and design of theories and models, and objectives on the empirical testing of these theories and models. The first type of objective is achieved using content analysis in an extensive literature survey, and the second type is achieved through a survey of high school teachers from the ILembe and UMgungundlovu districts in the KwaZulu-Natal province of South Africa. Data collected from this questionnaire-based survey are analysed in SPSS using descriptive statistics and Pearson correlations, after checking the reliability and validity of the questionnaires. The main hypothesis driving this study is that there is a relationship between the demographics and the attribution identity of teachers on one hand, and their perceptions of the reliability of cellphones on the other hand, as suggested by the existing literature, except that attribution identities are considered in this study from three angles: intention, knowledge and ability, and action. The results of this study confirm that the perceptions of teachers on the reliability of cellphones for teaching and learning are affected by the school location of these teachers, and by their perceptions of learners' cellphone usage intentions and actual use.

Keywords: Attribution, Cellphones, E-learning, Reliability
