Search results for: distributed dislocation technique
7775 Imputation Technique for Feature Selection in Microarray Data Set
Authors: Younies Saeed Hassan Mahmoud, Mai Mabrouk, Elsayed Sallam
Abstract:
Analysing DNA microarray data sets is a great challenge for bioinformaticians due to the complexity of applying statistical and machine learning techniques. The challenge is doubled when the microarray data sets contain missing data, which happens regularly, because these techniques cannot deal with missing values. One of the most important analysis processes performed on a microarray data set is feature selection, which identifies the most important genes affecting a certain disease. In this paper, we introduce a technique for imputing the missing data in microarray data sets while performing feature selection.
Keywords: DNA microarray, feature selection, missing data, bioinformatics
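The abstract does not detail the imputation rule or the selection criterion; as a minimal sketch, assuming per-gene mean imputation followed by a crude variance-based gene ranking (both functions and the ranking choice are ours, not the authors'):

```python
def impute_column_means(X):
    """Replace missing entries (None) with the mean of the observed
    values in the same column (gene)."""
    n_rows, n_cols = len(X), len(X[0])
    X_imp = [row[:] for row in X]
    for j in range(n_cols):
        observed = [X[i][j] for i in range(n_rows) if X[i][j] is not None]
        mean_j = sum(observed) / len(observed)
        for i in range(n_rows):
            if X_imp[i][j] is None:
                X_imp[i][j] = mean_j
    return X_imp

def top_k_by_variance(X, k):
    """Rank genes (columns) by sample variance and keep the k most
    variable ones -- a crude stand-in for feature selection."""
    n = len(X)
    variances = []
    for j in range(len(X[0])):
        col = [X[i][j] for i in range(n)]
        mu = sum(col) / n
        variances.append((sum((v - mu) ** 2 for v in col) / n, j))
    return sorted(j for _, j in sorted(variances, reverse=True)[:k])
```

In practice the imputation and the gene-relevance score would be the paper's own (e.g. disease-class-aware statistics); the sketch only shows the two-stage shape of impute-then-select.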
Procedia PDF Downloads 572
7774 Monitoring Soil Moisture Dynamic in Root Zone System of Argania spinosa Using Electrical Resistivity Imaging
Authors: F. Ainlhout, S. Boutaleb, M. C. Diaz-Barradas, M. Zunzunegui
Abstract:
Argania spinosa is an endemic tree of the southwest of Morocco, occupying 828,000 ha distributed mainly between Mediterranean vegetation and the desert. This tree can grow in extremely arid regions of Morocco, where annual rainfall ranges between 100-300 mm and where no other tree species can live. It has been designated a UNESCO Biosphere Reserve since 1998. The Argan tree is of great importance for human and animal feeding of the rural population as well as for oil production; it is considered a multi-usage tree. Admine forest, located in the suburbs of Agadir city, 5 km inland, was selected to conduct this work. The aim of the study was to investigate the temporal variation in root-zone moisture dynamics in response to variations in climatic conditions and vegetation water uptake, using a geophysical technique called electrical resistivity imaging (ERI). This technique discriminates resistive woody roots from dry and moist soil. Time-dependent measurements (from April till July) of resistivity sections were performed along a 94 m surface transect at a fixed 2 m electrode spacing. The transect included eight Argan trees. The interactions between the trees and soil moisture were estimated by following the variations in tree water status accompanying the soil moisture deficit. For that purpose, we measured midday leaf water potential and relative water content during each sampling day for the eight trees. The first results showed that ERI can be used to accurately quantify the spatiotemporal distribution of root-zone moisture content and woody roots. The section obtained shows three different layers: a moderately resistive layer on top, corresponding to relatively dry soil (calcareous formation with intercalation of marly strata), interspersed with a very resistive layer corresponding to woody roots; a conductive (moist) middle layer; and, below the conductive layer, another moderately resistive layer.
We note that throughout the experiment there was a continuous decrease in soil moisture in the different layers. With ERI, we can clearly estimate the depth of the woody roots, which does not exceed 4 meters. In previous work on the same species, analyzing the δ18O of xylem water and of the range of possible water sources, we argued that rain is the main water source in winter and spring, but not in summer; the trees are not exploiting deep water from the aquifer, as popularly assumed, but instead are using soil water at a few meters' depth. The results of the present work confirm the idea that the roots of Argania spinosa do not grow very deep.
Keywords: Argania spinosa, electrical resistivity imaging, root system, soil moisture
Procedia PDF Downloads 326
7773 Application of XRF and Other Principal Component Analysis for Counterfeited Gold Coin Characterization in Forensic Science
Authors: Somayeh Khanjani, Hamideh Abolghasemi, Hadi Shirzad, Samaneh Nabavi
Abstract:
On the world market one can currently encounter a wide range of gemological objects that are incorrectly declared or treated, or that are made of completely different materials attempting, more or less successfully, to copy precious objects. Counterfeiting of precious commodities is a problem faced by governments in most countries. Police have seized many counterfeit coins that looked like real coins; because their feel to the touch and their weight were very similar to those of real coins, most people were fooled and believed that the counterfeit coins were genuine. These counterfeit coins may have been made by large criminal organizations. To elucidate the manufacturing process, not only quantitative analysis of the coins but also comparison of their morphological characteristics was necessary. Several modern techniques have been applied to prevent counterfeiting of coins. The objective of this study was to demonstrate the potential of the X-ray fluorescence (XRF) technique and of other analytical techniques, for example SEM/EDX/WDX, FT-IR/ATR, and Raman spectroscopy. Using four elements (Cu, Ag, Au and Zn) and obtaining XRF spectra for several samples, the coins could be discriminated. The XRF technique and SEM/EDX/WDX are used to study chemical composition. XRF analyzers provide a fast, accurate, non-destructive method to test the purity and chemistry of all precious metals. XRF is a very promising technique for rapid and non-destructive counterfeit coin identification in forensic science.
Keywords: counterfeit coins, X-ray fluorescence, forensic, FT-IR
Procedia PDF Downloads 492
7772 The Impact of Climate Change and Land Use/Land Cover Change (LUCC) on Carbon Storage in Arid and Semi-Arid Regions of China
Authors: Xia Fang
Abstract:
Arid and semiarid areas of China (ASAC) have experienced significant land-use/cover change (LUCC), along with intensified climate change. However, LUCC and climate change, and their individual and interactive effects on carbon stocks, have not yet been fully understood in the ASAC. This study analyses the carbon stocks in the ASAC during 1980-2020 using a specific arid ecosystem model (AEM) and investigates the effects of LUCC and climate change on carbon stock trends. The results indicate that over the past 41 years, the ASAC carbon pool showed an overall growth trend, with an increase of 182.03 g C/m2. Climatic factors (+291.99 g C/m2), especially the increase in precipitation, were the main drivers of the carbon pool increase. LUCC decreased the carbon pool (-112.27 g C/m2), mainly due to the decrease in grassland area (-2.77%). The climate-induced carbon sinks were distributed in northern Xinjiang, on the Ordos Plateau, and in Northeast China, while the LUCC-induced carbon sinks mainly occurred on the Ordos Plateau and the North China Plain, resulting in a net decrease in carbon sequestration in these regions according to carbon pool measurements. The study revealed that the combination of climate variability, LUCC, and increasing atmospheric CO2 concentration resulted in an increase of approximately 182.03 g C/m2, mainly distributed in eastern Inner Mongolia and the western Qinghai-Tibet Plateau. Our findings are essential for improving the theoretical guidance to protect the ecological environment, rationally plan land use, and understand the sustainable development of arid and semiarid zones.
Keywords: AEM, climate change, LUCC, carbon stocks
Procedia PDF Downloads 78
7771 Ankle Arthroscopy: Indications, Patterns of Admissions, Surgical Outcomes, and Associated Complications Among Saudi Patients at King Abdul-Aziz Medical City in Riyadh
Authors: Mohammad Abdullah Almalki
Abstract:
Background: Despite the frequent use of ankle arthroscopy, there is limited medical literature regarding its indications, patterns of admission, surgical outcomes, and associated complications in Saudi Arabia. Hence, this study highlights the surgical outcomes of this surgical approach, to assist orthopedic surgeons in deciding which surgical procedure needs to be done and to help them with their diagnostic workups. Methods: At the Orthopedic Division of King Abdul-Aziz Medical City in Riyadh, through a cross-sectional design and convenience sampling, the present study recruited 20 subjects who fulfilled the inclusion and exclusion criteria between 2016 and 2018. Data collection was carried out with a questionnaire designed and revised by an expert panel of health professionals. Results: Twenty patients were reviewed (11 M and 9 F) with an average age of 40.1 ± 12.2 years. Only 30% of the patients (5 M, 1 F) had no comorbidity, while 70% of patients (7 M, 8 F) had at least one comorbidity. The most common indications were osteochondritis dissecans (n = 7, 35%), ankle fracture without dislocation (n = 4, 20%), and tibiotalar impingement (n = 3, 15%). All patients reported pain (100%). The top four symptoms after pain were instability (30%, n = 6), muscle weakness (15%, n = 3), swelling (15%, n = 3), and stiffness (5%, n = 1). Two-thirds of the cases returned to full health, and toe-touch weight-bearing was seen in two patients (10%). Conclusion: Ankle arthroscopy improved the rehabilitation rates in our tertiary care center. In addition, the surgical outcomes in our hospital are favorable, with a very short length of stay, low surgical expenditure, and few physiotherapy sessions.
Keywords: ankle, arthroscopy, indications, patterns
Procedia PDF Downloads 86
7770 An Efficient Clustering Technique for Copy-Paste Attack Detection
Authors: N. Chaitawittanun, M. Munlin
Abstract:
Due to the rapid advancement of powerful image processing software, digital images are easy for ordinary people to manipulate and modify. Many digital images are edited for a specific purpose, making them more difficult to distinguish from their originals. We propose a clustering method to detect copy-move image forgery in JPEG, BMP, TIFF, and PNG files. The process starts with reducing the color of the photos. Then, we use a clustering technique to divide the measured information using the Hausdorff distance. The results show that the proposed method is capable of inspecting an image file and correctly identifying the forgery.
Keywords: image detection, forgery image, copy-paste, attack detection
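The abstract leaves the clustering details open, but the distance it relies on has a standard definition. As a minimal sketch, the symmetric Hausdorff distance between two 2-D point sets (e.g., pixel coordinates of two candidate regions; function names are ours):

```python
import math

def _directed(a, b):
    # h(A, B) = max over points of A of the distance to the nearest point of B
    return max(min(math.dist(p, q) for q in b) for p in a)

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets: small values
    mean the sets nearly coincide, as expected for copy-moved regions."""
    return max(_directed(a, b), _directed(b, a))
```

A copy-move detector would compare such distances between candidate clusters and flag near-zero values as duplicated regions.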
Procedia PDF Downloads 336
7769 Time and Cost Prediction Models for Language Classification Over a Large Corpus on Spark
Authors: Jairson Barbosa Rodrigues, Paulo Romero Martins Maciel, Germano Crispim Vasconcelos
Abstract:
This paper presents an investigation of the performance impacts of varying five factors (input data size, node number, cores, memory, and disks) when applying a distributed implementation of Naïve Bayes for text classification of a large corpus on the Spark big data processing framework. Problem: The algorithm's performance depends on multiple factors, and knowing beforehand the effect of each factor becomes especially critical as hardware is priced by time slice in cloud environments. Objectives: To explain the functional relationship between the factors and performance, and to develop linear predictor models for time and cost. Methods: The solid statistical principles of Design of Experiments (DoE), particularly a randomized two-level fractional factorial design with replications. This research involved 48 real clusters with different hardware arrangements. The metrics were analyzed using linear models for screening, ranking, and measuring each factor's impact. Results: Our findings include prediction models and show some non-intuitive results: the small influence of cores, the neutrality of memory and disks with respect to total execution time, and the non-significant impact of input data scale on cost, although it notably impacts execution time.
Keywords: big data, design of experiments, distributed machine learning, natural language processing, spark
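In a two-level design with effect coding (-1/+1), the least-squares coefficient of each factor reduces to a signed average of the responses because the design columns are orthogonal. A minimal sketch with a full (not fractional) factorial and synthetic responses; the factor count and coefficients are illustrative, not the paper's:

```python
from itertools import product

def factorial_design(k):
    """All 2**k runs of a two-level full factorial design, effect-coded as -1/+1."""
    return [list(levels) for levels in product((-1, 1), repeat=k)]

def estimate_effects(design, y):
    """For an orthogonal +-1 design, the least-squares coefficient of each
    factor is simply the signed average of the responses; the intercept
    is the grand mean."""
    n = len(design)
    intercept = sum(y) / n
    betas = [sum(design[i][j] * y[i] for i in range(n)) / n
             for j in range(len(design[0]))]
    return intercept, betas
```

A fractional design would use a subset of these runs chosen by a generator relation; the coefficient formula is the same as long as the retained columns stay orthogonal.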
Procedia PDF Downloads 118
7768 Application of the Bionic Wavelet Transform and Psycho-Acoustic Model for Speech Compression
Authors: Chafik Barnoussi, Mourad Talbi, Adnane Cherif
Abstract:
In this paper we propose a new speech compression system based on the application of the Bionic Wavelet Transform (BWT) combined with a psychoacoustic model. This compression system is a modified version of a compression system using MDCT (Modified Discrete Cosine Transform) filter banks of 32 filters each and the psychoacoustic model. The modification consists in replacing the outputs of the MDCT filter banks by the bionic wavelet coefficients obtained from applying the BWT to the speech signal to be compressed. The two methods are evaluated and compared with each other by computing the number of bits before and after compression. They are tested on different speech signals, and the obtained simulation results show that the proposed technique outperforms the second technique in terms of compressed file size. In terms of SNR, PSNR and NRMSE, the output speech signals of the proposed compression system are of acceptable quality. In terms of PESQ and speech signal intelligibility, the proposed speech compression technique yields reconstructed speech signals with good quality.
Keywords: speech compression, bionic wavelet transform, filterbanks, psychoacoustic model
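The quality metrics named above have standard closed forms; a small sketch of two of them (SNR in dB and a range-normalized RMSE), assuming real-valued sample sequences of equal length (function names are ours):

```python
import math

def snr_db(x, x_hat):
    """Signal-to-noise ratio of a reconstruction, in dB:
    10 * log10(signal energy / error energy)."""
    signal = sum(v * v for v in x)
    noise = sum((v - w) ** 2 for v, w in zip(x, x_hat))
    return 10 * math.log10(signal / noise)

def nrmse(x, x_hat):
    """Root-mean-square error normalized by the signal's dynamic range."""
    mse = sum((v - w) ** 2 for v, w in zip(x, x_hat)) / len(x)
    return math.sqrt(mse) / (max(x) - min(x))
```

PSNR follows the same pattern with the peak amplitude in the numerator, and PESQ is a standardized perceptual measure that cannot be reduced to a one-liner.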
Procedia PDF Downloads 381
7767 Iterative Segmentation and Application of Hausdorff Dilation Distance in Defect Detection
Authors: S. Shankar Bharathi
Abstract:
Inspection of surface defects on metallic components has always been challenging due to their specular property. Defects such as scratches, rust, and pitting occur very commonly on metallic surfaces during the manufacturing process. If unchecked, these defects can hamper the performance and reduce the lifetime of such components. Many conventional image processing algorithms for detecting surface defects involve segmentation techniques based on thresholding, edge detection, watershed segmentation, and textural segmentation, and later employ other suitable algorithms based on morphology, region growing, shape analysis, or neural networks for classification. In this paper the work is focused only on detecting scratches. Global and other thresholding techniques were used to extract the defects, but they proved inaccurate in extracting the defects alone. However, this paper does not focus on comparing different segmentation techniques; rather, it describes a novel approach to segmentation combined with the Hausdorff dilation distance. The proposed algorithm is based on the distribution of the intensity levels, that is, whether a certain gray level is concentrated or evenly distributed, and it works by extracting such concentrated pixels. Defective images showed a high concentration of some gray level, whereas in non-defective images the gray levels seemed evenly distributed, with no concentration. This formed the basis for detecting the defects in the proposed algorithm. The Hausdorff dilation distance, based on mathematical morphology, was used to strengthen the segmentation of the defects.
Keywords: metallic surface, scratches, segmentation, Hausdorff dilation distance, machine vision
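The paper does not state which concentration statistic it uses; as a minimal sketch under the assumption of a normalized histogram-peak measure (our own choice), the defective/non-defective contrast described above could be scored like this:

```python
from collections import Counter

def gray_level_concentration(pixels):
    """Fraction of pixels falling in the single most frequent gray level.
    A value near 1 signals a concentrated histogram (candidate defect);
    a value near 1/n_levels signals an evenly distributed one."""
    counts = Counter(pixels)
    return max(counts.values()) / len(pixels)
```

A threshold on this score would separate candidate defect images from clean ones before the Hausdorff-dilation-distance refinement step.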
Procedia PDF Downloads 427
7766 A Validation Technique for Integrated Ontologies
Authors: Neli P. Zlatareva
Abstract:
Ontology validation is an important part of web applications’ development, where knowledge integration and ontological reasoning play a fundamental role. It aims to ensure the consistency and correctness of ontological knowledge and to guarantee that ontological reasoning is carried out in a meaningful way. Existing approaches to ontology validation address more or less specific validation issues, but the overall process of validating web ontologies has not been formally established yet. As the size and the number of web ontologies continue to grow, the necessity to validate and ensure their consistency and interoperability is becoming increasingly important. This paper presents a validation technique intended to test the consistency of independent ontologies utilized by a common application.Keywords: knowledge engineering, ontological reasoning, ontology validation, semantic web
Procedia PDF Downloads 320
7765 Pseudo Modal Operating Deflection Shape Based Estimation Technique of Mode Shape Using Time History Modal Assurance Criterion
Authors: Doyoung Kim, Hyo Seon Park
Abstract:
Studies of system identification (SI) based on structural health monitoring (SHM) have been actively conducted for structural safety. Recently, SI techniques have developed rapidly within the output-only SI paradigm for estimating modal parameters. Representative output-only SI methods such as Frequency Domain Decomposition (FDD) and Stochastic Subspace Identification (SSI) use algorithms based on orthogonal decomposition, such as singular value decomposition (SVD). But the SVD leads to a high level of computational complexity when estimating modal parameters. This paper proposes a technique to estimate the mode shape at lower computational cost. The technique derives pseudo modal Operating Deflection Shapes (ODS) through a bandpass filter and suggests a time history Modal Assurance Criterion (MAC). Finally, the mode shape can be estimated from the pseudo modal ODS and the time history MAC. Analytical simulations of vibration measurement were performed, and the resulting mode shapes and computation times of a representative SI method and the proposed method were compared.
Keywords: modal assurance criterion, mode shape, operating deflection shape, system identification
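The MAC itself has a standard closed form, MAC = |φ₁ᵀφ₂|² / ((φ₁ᵀφ₁)(φ₂ᵀφ₂)); a minimal sketch for real-valued mode-shape vectors (the time-history variant in the paper applies this correlation across measured response histories rather than identified shapes):

```python
def mac(phi_1, phi_2):
    """Modal Assurance Criterion between two mode-shape vectors.
    Returns 1.0 for perfectly correlated (scaled) shapes and 0.0 for
    orthogonal ones."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return dot(phi_1, phi_2) ** 2 / (dot(phi_1, phi_1) * dot(phi_2, phi_2))
```

Because MAC is scale-invariant, a candidate pseudo-modal ODS can be matched against a reference shape without normalizing either vector first.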
Procedia PDF Downloads 407
7764 Efficient Energy Management: A Novel Technique for Prolonged and Persistent Automotive Engine
Authors: Chakshu Baweja, Ishaan Prakash, Deepak Giri, Prithwish Mukherjee, Herambraj Ashok Nalawade
Abstract:
The need to prevent and control rampant and indiscriminate usage of energy in the present day has motivated active research efforts aimed at understanding the controlling mechanisms leading to sustained energy. Although much has been done, the complexity of the problem has prevented a complete understanding, due to the nonlinear interaction between flow, heat, and mass transfer in the terrestrial environment. Therefore, there is a need for a systematic study to clearly understand the mechanisms controlling energy-spreading phenomena and to increase a system's efficiency. The present work addresses the issue of sustaining energy and proposes a dedicated technique for optimizing energy in the automotive domain. The proposed method focuses on utilizing the mechanical and thermal energy of an automotive IC engine by converting the energy due to the motion of a piston into electrical energy and storing it. The suggested technique utilizes the piston motion of the engine to generate a high potential difference capable of working as a secondary power source. This is achieved by the use of a gear mechanism and a flywheel.
Keywords: internal combustion engine, energy, electromagnetic induction, efficiency, gear ratio, hybrid vehicle, engine shaft
Procedia PDF Downloads 473
7763 Biomechanics of Atlantoaxial Complex for Various Posterior Fixation Techniques
Authors: Arun C. O., Shrijith M. B., Thakur Rajesh Singh
Abstract:
The study aims to analyze and understand the biomechanical stability of the atlantoaxial complex under different posterior fixation techniques using the finite element method in the Indian context. Conventional cadaveric studies show heterogeneity in biomechanical properties. The finite element method, being a versatile numerical tool, is widely used for biomechanical analysis of the atlantoaxial complex. However, the biomechanics of posterior fixation techniques for Indian subjects is missing in the literature. It is essential to study this context, as the bone density and geometry of the vertebrae vary from region to region, thereby requiring different screw lengths, which can affect the range of motion (ROM) and the stresses generated. The current study uses CT images to develop a 3D finite element model of the C1-C2 geometry without ligaments. Instrumentation is added to this geometry to develop four models for four fixation techniques, namely C1-C2 TA, C1LM-C2PS, C1LM-C2Pars, and C1LM-C2TL. To simulate flexion, extension, lateral bending, and axial rotation, a 1.5 N·m moment is applied to C1 while the bottom nodes of C2 are fixed. The range of motion (ROM) is then compared with the unstable model (without ligaments). All the fixation techniques showed more than a 97 percent reduction in the ROM. The von Mises stresses developed in the screw constructs were obtained. From the studies, it is observed that the transarticular (TA) technique is the most stable in lateral bending, while C1LM-C2 Translaminar is the most stable in flexion/extension. The von Mises stresses developed were minimal for the transarticular technique in lateral bending and axial rotation, whereas the stress developed in the C2 pars construct was minimal in flexion/extension. On average, the TA technique is stable in all motions, and the stresses in its constructs are low.
The transarticular technique is thus found to be the best fixation technique for Indian subjects among the four methods.
Keywords: biomechanics, cervical spine, finite element model, posterior fixation
Procedia PDF Downloads 142
7762 About Multi-Resolution Techniques for Large Eddy Simulation of Reactive Multi-Phase Flows
Authors: Giacomo Rossi, Bernardo Favini, Eugenio Giacomazzi, Franca Rita Picchia, Nunzio Maria Salvatore Arcidiacono
Abstract:
A numerical technique for mesh refinement in the HeaRT (Heat Release and Transfer) numerical code is presented. In the CFD framework, the Large Eddy Simulation (LES) approach is gaining importance as a tool for simulating turbulent combustion processes, even though this approach has a high computational cost due to the complexity of the turbulence modeling and the high number of grid points necessary to obtain a good numerical solution. In particular, when a numerical simulation of a big domain is performed with a structured grid, the number of grid points can increase so much that the simulation becomes impossible: this problem can be overcome with a mesh refinement technique. The mesh refinement technique developed for the HeaRT numerical code (a staggered finite difference code) is based on a high order reconstruction of the variables at the grid interfaces by means of a least-squares quasi-ENO interpolation. The numerical code is written in modern Fortran (2003 standard or newer) and is parallelized using domain decomposition and the Message Passing Interface (MPI) standard.
Keywords: LES, multi-resolution, ENO, fortran
Procedia PDF Downloads 364
7761 Long-Term Results of Coronary Bifurcation Stenting with Drug Eluting Stents
Authors: Piotr Muzyk, Beata Morawiec, Mariusz Opara, Andrzej Tomasik, Brygida Przywara-Chowaniec, Wojciech Jachec, Ewa Nowalany-Kozielska, Damian Kawecki
Abstract:
Background: Coronary bifurcation is one of the most complex lesions in patients with coronary artery disease. Provisional T-stenting is currently one of the recommended techniques. The aim was to assess optimal methods of treatment in the era of drug-eluting stents (DES). Methods: The registry consisted of data from 1916 patients treated with percutaneous coronary interventions (PCI) using either first- or second-generation DES. Patients with bifurcation lesions entered the analysis. Major adverse cardiac and cardiovascular events (MACCE) were assessed at one year of follow-up and comprised death, acute myocardial infarction (AMI), repeated PCI (re-PCI) of the target vessel, and stroke. Results: Of 1916 registry patients, 204 patients (11%) were diagnosed with a bifurcation lesion >50% and entered the analysis. The most commonly used technique was provisional T-stenting (141 patients, 69%). Optimization with the kissing-balloon technique was performed in 45 patients (22%). In 59 patients (29%) a second-generation DES was implanted, while in 112 patients (55%) a first-generation DES was used. In 33 patients (16%) both types of DES were used. The procedure success rate (TIMI 3 flow) was achieved in 98% of patients. In one-year follow-up, there were 39 MACCE (19%) (9 deaths, 17 AMI, 16 re-PCI and 5 strokes). Provisional T-stenting resulted in a similar rate of MACCE to other techniques (16% vs. 5%, p=0.27) and a similar occurrence of re-PCI (6% vs. 2%, p=0.78). The post-PCI kissing-balloon technique gave equal outcomes, with 3% vs. 16% of MACCE in patients in whom no optimization technique was used (p=0.39). The type of implanted DES (second- vs. first-generation) had no influence on MACCE (4% vs. 14%, respectively, p=0.12) or re-PCI (1.7% vs. 51% patients, respectively, p=0.28). Conclusions: The treatment of bifurcation lesions with PCI represents a high-risk procedure with a high rate of MACCE.
The stenting technique, the optimization of PCI, and the generation of the implanted stent should be personalized for each case to balance the risk of the procedure. In this setting, operator experience might be a factor for better outcome, which should be further investigated.
Keywords: coronary bifurcation, drug eluting stents, long-term follow-up, percutaneous coronary interventions
Procedia PDF Downloads 202
7760 Acceleration of DNA Hybridization Using Electroosmotic Flow
Authors: Yun-Hsiang Wang, Huai-Yi Chen, Kin Fong Lei
Abstract:
Deoxyribonucleic acid (DNA) hybridization is a common technique widely used in genetic assays. However, the hybridization ratio and rate are usually limited by the diffusion effect. Here, a microfluidic electrode platform producing electroosmosis generated by an alternating current signal is proposed to enhance the hybridization ratio and rate. The electrode was made of gold fabricated by a microfabrication technique. A thiol-modified oligo probe was immobilized on the electrode for specific capture of the target, which is labeled with a fluorescent tag. Alternating electroosmosis can induce local microfluidic vortexes to accelerate DNA hybridization. This study provides a strategy to enhance the rate of DNA hybridization in genetic assays.
Keywords: DNA hybridization, electroosmosis, electrical enhancement, hybridization ratio
Procedia PDF Downloads 382
7759 Effect of Spatially Correlated Disorder on Electronic Transport Properties of Aperiodic Superlattices (GaAs/AlxGa1-xAs)
Authors: F. Bendahma, S. Bentata, S. Cherid, A. Zitouni, S. Terkhi, T. Lantri, Y. Sefir, Z. F. Meghoufel
Abstract:
We examine the electronic transport properties of AlxGa1-xAs/GaAs superlattices. Using the transfer-matrix technique and the exact Airy function formalism, we investigate theoretically the effect of structural parameters on the electronic energy spectra of trimer thickness barriers (TTB). Our numerical calculations showed that the localization length of the states becomes more extended when the disorder is correlated (trimer case). We have also found that the resonant tunneling time (RTT) is of the order of several femtoseconds.
Keywords: electronic transport properties, structural parameters, superlattices, transfer-matrix technique
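The paper uses the exact Airy-function formalism for the superlattice potential; as a simplified illustration of the transfer-matrix idea alone, the sketch below computes the transmission through piecewise-constant barriers in units where ħ²/2m = 1 (a textbook toy, not the authors' model):

```python
import cmath

def _matmul(a, b):
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def transmission(E, layers):
    """Transmission coefficient through piecewise-constant potential layers
    [(V, width), ...] embedded between V = 0 leads, via 2x2 transfer
    matrices acting on (right-, left-going) amplitudes.  E must differ
    from every layer potential V."""
    k = lambda V: cmath.sqrt(E - V)        # imaginary inside a barrier (E < V)
    M = [[1, 0], [0, 1]]
    k_prev = k(0.0)
    for V, d in layers + [(0.0, 0.0)]:     # final zero-width step exits to the lead
        k_new = k(V)
        r = k_prev / k_new
        step = [[(1 + r) / 2, (1 - r) / 2],      # matching psi, psi' at the interface
                [(1 - r) / 2, (1 + r) / 2]]
        prop = [[cmath.exp(1j * k_new * d), 0],  # free propagation across the layer
                [0, cmath.exp(-1j * k_new * d)]]
        M = _matmul(prop, _matmul(step, M))
        k_prev = k_new
    return abs(1 / M[1][1]) ** 2           # det(M) = 1 for identical leads
```

For a superlattice, `layers` would hold the barrier/well sequence (with trimer-correlated barrier thicknesses in the TTB case); the real calculation replaces the plane-wave propagation by Airy functions when the potential is piecewise linear.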
Procedia PDF Downloads 283
7758 Genome-Wide Analysis of Long Terminal Repeat (LTR) Retrotransposons in Rabbit (Oryctolagus cuniculus)
Authors: Zeeshan Khan, Faisal Nouroz, Shumaila Noureen
Abstract:
The European or common rabbit (Oryctolagus cuniculus) belongs to class Mammalia, order Lagomorpha, family Leporidae. Rabbits are distributed worldwide and are native to Europe (France, Spain and Portugal) and Africa (Morocco and Algeria). LTR retrotransposons are major Class I mobile genetic elements of eukaryotic genomes and play a crucial role in genome expansion, evolution, and diversification. They have mostly been annotated in various genomes by conventional homology-search approaches, which restricted the annotation of novel elements. The present work involved de novo identification of LTR retrotransposons by LTR_FINDER in the haploid genome of the rabbit (2247.74 Mb), distributed across 22 chromosomes, in which 7,933 putative full-length or partial copies were identified, containing 69.38 Mb of elements and accounting for 3.08% of the genome. The highest copy number (731) was found on chromosome 7, followed by chromosome 12 (705), while the lowest copy number (27) was detected on chromosome 19; no elements were identified on chromosome 21 due to the partially sequenced chromosome, unidentified nucleotides (N), and repeated simple sequence repeats (SSRs). The identified elements ranged in size from 1.2 to 25.8 Kb, with average sizes between 2 and 10 Kb. The highest percentage of elements (4.77%) was found on chromosome 15, the lowest (0.55%) on chromosome 19. The most frequent tRNA type was arginine, present in the majority of the elements. Based on the results gained, it was estimated that the rabbit exhibits 15,866 copies comprising 137.73 Mb of elements, accounting for 6.16% of the diploid genome (44 chromosomes). Further molecular analyses will be helpful for the chromosomal localization and distribution of these elements on chromosomes.
Keywords: rabbit, LTR retrotransposons, genome, chromosome
Procedia PDF Downloads 146
7757 A Comparative Analysis of Various Companding Techniques Used to Reduce PAPR in VLC Systems
Authors: Arushi Singh, Anjana Jain, Prakash Vyavahare
Abstract:
Recently, Li-Fi (light fidelity), based on the VLC (visible light communication) technique, has been launched, 100 times faster than WiFi. The 5G mobile communication system is now proposed to use VLC-OFDM as the transmission technique. The VLC system, operating on visible rays, is considered for efficient spectrum use and easy intensity modulation through LEDs. The reason for the high speed of VLC is the LED, as LEDs flicker incredibly fast (on the order of MHz). Another advantage of employing LEDs is that they act as low-pass filters, resulting in no out-of-band emission. The VLC system falls under the category of 'green technology' for utilizing LEDs. In the present scenario, OFDM is used for high data rates, interference immunity, and high spectral efficiency. In spite of these advantages, OFDM suffers from large PAPR, ICI among carriers, and frequency offset errors. Since the data transmission technique used in the VLC system is OFDM, the system suffers the drawbacks of both OFDM and VLC: the non-linearity due to the non-linear characteristics of the LED, and the PAPR of OFDM, due to which the high power amplifier enters its non-linear region. The proposed paper focuses on the reduction of PAPR in VLC-OFDM systems. Many techniques are applied to reduce PAPR, such as clipping, which introduces distortion in the carrier; the selective mapping technique, which suffers from wastage of bandwidth; and the partial transmit sequence, which is very complex due to the exponentially increasing number of sub-blocks. The paper discusses three companding techniques, namely the µ-law, A-law, and advanced A-law companding techniques. The analysis shows that the advanced A-law companding technique reduces the PAPR of the signal by adjusting the companding parameter within its range. VLC-OFDM systems are the future of wireless communication, but non-linearity in VLC-OFDM is a severe issue. The proposed paper discusses techniques to reduce PAPR, one of the non-linearities of the system.
The companding techniques mentioned in this paper provide better results without increasing the complexity of the system.
Keywords: non-linear companding techniques, peak to average power ratio (PAPR), visible light communication (VLC), VLC-OFDM
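As a rough illustration of why companding lowers PAPR, the sketch below applies standard µ-law compression to a synthetic multicarrier (OFDM-like) waveform and compares PAPR before and after. The test-signal construction and µ = 255 are our own choices, not the paper's parameters:

```python
import math

def papr_db(x):
    """Peak-to-average power ratio of a real signal, in dB."""
    peak = max(v * v for v in x)
    mean = sum(v * v for v in x) / len(x)
    return 10 * math.log10(peak / mean)

def mu_law(x, mu=255.0):
    """Standard mu-law compressor applied sample-wise, normalized to the
    peak amplitude; it boosts small samples relative to peaks, which
    raises the average power while leaving the peak unchanged."""
    v = max(abs(s) for s in x)
    return [math.copysign(v * math.log(1 + mu * abs(s) / v) / math.log(1 + mu), s)
            for s in x]

# A peaky multicarrier test signal: 16 cosines that align constructively at n = 0.
N = 256
signal = [sum(math.cos(2 * math.pi * k * n / N) for k in range(1, 17))
          for n in range(N)]
```

A-law companding follows the same pattern with a piecewise linear/logarithmic compressor, and the receiver applies the inverse expander before demodulation.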
Procedia PDF Downloads 284
7756 A Grid Synchronization Method Based On Adaptive Notch Filter for SPV System with Modified MPPT
Authors: Priyanka Chaudhary, M. Rizwan
Abstract:
This paper presents a grid synchronization technique based on adaptive notch filter for SPV (Solar Photovoltaic) system along with MPPT (Maximum Power Point Tracking) techniques. An efficient grid synchronization technique offers proficient detection of various components of grid signal like phase and frequency. It also acts as a barrier for harmonics and other disturbances in grid signal. A reference phase signal synchronized with the grid voltage is provided by the grid synchronization technique to standardize the system with grid codes and power quality standards. Hence, grid synchronization unit plays important role for grid connected SPV systems. As the output of the PV array is fluctuating in nature with the meteorological parameters like irradiance, temperature, wind etc. In order to maintain a constant DC voltage at VSC (Voltage Source Converter) input, MPPT control is required to track the maximum power point from PV array. In this work, a variable step size P & O (Perturb and Observe) MPPT technique with DC/DC boost converter has been used at first stage of the system. This algorithm divides the dPpv/dVpv curve of PV panel into three separate zones i.e. zone 0, zone 1 and zone 2. A fine value of tracking step size is used in zone 0 while zone 1 and zone 2 requires a large value of step size in order to obtain a high tracking speed. Further, adaptive notch filter based control technique is proposed for VSC in PV generation system. Adaptive notch filter (ANF) approach is used to synchronize the interfaced PV system with grid to maintain the amplitude, phase and frequency parameters as well as power quality improvement. This technique offers the compensation of harmonics current and reactive power with both linear and nonlinear loads. To maintain constant DC link voltage a PI controller is also implemented and presented in this paper. The complete system has been designed, developed and simulated using SimPower System and Simulink toolbox of MATLAB. 
The performance analysis of the three-phase grid-connected solar photovoltaic system has been carried out on the basis of various parameters such as PV output power, PV voltage, PV current, DC link voltage, PCC (Point of Common Coupling) voltage, grid voltage, grid current, voltage source converter current, and power supplied by the voltage source converter. The results obtained from the proposed system are satisfactory. Keywords: solar photovoltaic systems, MPPT, voltage source converter, grid synchronization technique
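The zone-based step-size selection described above can be sketched as follows. This is a minimal Python illustration rather than the paper's Simulink model; the zone threshold and the fine/coarse step sizes are assumed values, not the authors' tuned parameters.

```python
def po_step(p_now, p_prev, v_now, v_prev, zone0=5.0, fine=0.2, coarse=2.0):
    """One iteration of a variable step-size Perturb & Observe MPPT.

    Sketch only: `zone0` (the |dP/dV| boundary of zone 0) and the fine/coarse
    step sizes (in volts) are illustrative assumptions.
    Returns the next voltage perturbation to apply."""
    dv = v_now - v_prev
    if dv == 0:
        return fine                      # re-perturb to probe the curve
    slope = (p_now - p_prev) / dv        # dPpv/dVpv
    # zone 0: near the MPP, use a fine step; zones 1 and 2: far from the
    # MPP, use a large step for high tracking speed
    step = fine if abs(slope) < zone0 else coarse
    return step if slope > 0 else -step  # perturb toward higher power


# usage on a hypothetical parabolic P-V curve with its MPP at 30 V
f = lambda V: 900 - (V - 30) ** 2
v_prev, v = 20.0, 21.0
for _ in range(60):
    v_prev, v = v, v + po_step(f(v), f(v_prev), v, v_prev)
# v now oscillates in a small band around the 30 V maximum power point
```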
Procedia PDF Downloads 593
7755 Development of Loop Mediated Isothermal Amplification (LAMP) Assay for the Diagnosis of Ovine Theileriosis
Authors: Muhammad Fiaz Qamar, Uzma Mehreen, Muhammad Arfan Zaman, Kazim Ali
Abstract:
Ovine theileriosis is a worldwide concern, especially in tropical and subtropical areas, owing to the abundance of ticks; it has received little attention in developed and developing regions alike because of the low economic value of sheep and the low-to-middle levels of infection in small ruminant herds. Across Asia, prevalence studies have been conducted to provide comparable estimates of flock- and animal-level prevalence of theileriosis. Timely diagnosis and control of theileriosis are a challenge for veterinarians and farmers because of the nature of the organism and the inadequacy of existing control plans. Most of this work is therefore directed at developing a technique that is farmer-friendly, inexpensive, and easy to perform in the field; timely diagnosis of this disease will reduce the irrational use of drugs. A further aim was to determine the prevalence of theileriosis in District Jhang using the conventional Giemsa staining method, PCR, qPCR, and LAMP. We quantified the molecular epidemiology of T. lestoquardi in sheep from Jhang district, Punjab, Pakistan. In this study, we concluded that the overall prevalence of theileriosis was 9.1% (32/350) by the Giemsa staining technique, 13.7% (48/350) by PCR, 16% (56/350) by qPCR, and 17.1% (60/350) by LAMP. The specificity and sensitivity were also calculated by comparing PCR and LAMP: more samples tested positive when the diagnosis was made with LAMP, there was little difference between the positive results of PCR and qPCR, and the fewest positive animals were detected by the conventional Giemsa staining technique.
Regarding the specificity and sensitivity of LAMP compared with PCR, the cross-tabulation shows that the sensitivity of LAMP was 94.4% and its specificity was 78%. Advances in science must rest on reality-based ideas that can close the gaps and remove the hurdles in the way of research; LAMP is one such technique and has done wonders in adding value and helping people at large. It is a powerful biological diagnostic tool that has greatly aided the proper diagnosis and treatment of certain diseases. Other diagnostic methods, such as culture and serological techniques, have exposed operators to considerable risk, whereas molecular diagnostic techniques like LAMP avoid such exposure to pathogens. A prompt, presumptive diagnosis can be made using LAMP. Compared with LAMP, PCR has several disadvantages: it is relatively expensive, time-consuming, and complicated, while LAMP is relatively cheap, easy to perform, less time-consuming, and more accurate. The LAMP technique has removed hurdles in the way of scientific research and molecular diagnostics, making them accessible to poor and developing countries. Keywords: distribution, theileria, LAMP, primer sequences, PCR
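The sensitivity and specificity quoted above follow from a standard 2x2 cross-tabulation of LAMP results against the PCR reference; a minimal sketch (the counts in the usage example are illustrative, not the study's data):

```python
def diagnostic_performance(tp, fp, fn, tn):
    """Sensitivity and specificity of an index test (e.g. LAMP) against a
    reference test (e.g. PCR) from a 2x2 cross-tabulation.

    tp/fp/fn/tn: true positive, false positive, false negative,
    true negative counts relative to the reference test."""
    sensitivity = tp / (tp + fn)   # index-positive among all reference-positives
    specificity = tn / (tn + fp)   # index-negative among all reference-negatives
    return sensitivity, specificity


# usage with hypothetical counts (not the study's data)
sens, spec = diagnostic_performance(tp=9, fp=2, fn=1, tn=8)
# sens = 0.9, spec = 0.8
```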
Procedia PDF Downloads 101
7754 Investigation on the Thermal Properties of Magnesium Oxychloride Cement Prepared with Glass Powder
Authors: Rim Zgueb, Noureddine Yacoubi
Abstract:
The objective of this study was to investigate the thermal properties of magnesium oxychloride cement (MOC) prepared with glass powder as a partial substitute. Glass powder was added to the specimens at 0%, 5%, 10%, 15%, and 20% of the cement's weight. At the end of a 28-day drying period, the thermal properties, compressive strength, and bulk density of the samples were determined. The thermal properties were measured by the photothermal deflection technique, comparing the experimental normalized amplitude and phase curves of the photothermal signal with the corresponding theoretical ones. The findings indicate that the incorporation of glass powder decreases the thermal properties of MOC. Keywords: magnesium oxychloride cement (MOC), photothermal deflection technique, thermal properties, density
Procedia PDF Downloads 351
7753 Reimagine and Redesign: Augmented Reality Digital Technologies and 21st Century Education
Authors: Jasmin Cowin
Abstract:
Augmented reality digital technologies, big data, and the need for a teacher workforce able to meet the demands of a knowledge-based society are poised to bring major changes to the field of education. This paper explores applications and educational use cases of augmented reality digital technologies for educational organizations during the Fourth Industrial Revolution. The Fourth Industrial Revolution requires vision, flexibility, and innovative educational conduits from governments and educational institutions to remain competitive in a global economy. Educational organizations will need to focus on teaching in and for a digital age to continue offering academic knowledge relevant to 21st-century markets and changing labor force needs. Contemporary disciplines will need to be implemented through learners' active knowledge-making experiences while embracing ubiquitous accessibility. The power of distributed ledger technology promises major streamlining of educational record-keeping, degree conferrals, and authenticity guarantees. Augmented reality digital technologies hold the potential to restructure educational philosophies and their underpinning pedagogies, thereby transforming modes of delivery. Structural changes in education and governmental planning are already increasing through intelligent systems and big data. Reimagining and redesigning education on a broad scale is required to plan and implement governmental and institutional changes that harness innovative technologies while moving away from the big schooling machine. Keywords: fourth industrial revolution, artificial intelligence, big data, education, augmented reality digital technologies, distributed ledger technology
Procedia PDF Downloads 276
7752 The Benefits of a Totally Autologous Breast Reconstruction Technique Using Extended Latissimus Dorsi Flap with Lipo-Modelling: A Seven Years United Kingdom Tertiary Breast Unit Results
Authors: Wisam Ismail, Brendan Wooler, Penelope McManus
Abstract:
Introduction: The public perception of implants has been damaged in the wake of recent negative publicity, and increasingly we find patients wanting to avoid them. Planned lipo-modelling to enhance the volume of a latissimus dorsi flap is a viable alternative to silicone implants and maintains a totally autologous technique (TAT). Here we demonstrate that, compared to an implant-assisted technique (IAT), a TAT offers patients many benefits that offset the requirement for more operations initially, with reduced short- and long-term complications, reduced symmetrisation surgery, and reduced revision rates. Methods: Data were collected prospectively over 7 years, with a minimum follow-up of 3 years. The technique was generally standardized in the hands of one surgeon. All flaps were extended LD (ELD) flaps. Lipo-modelling was performed using standard techniques. Outcome measures were unplanned secondary procedures, complication rates, and contralateral symmetrisation surgery rates. Key results were: lower complication rates in the TAT group (18.5% vs. 33.3%), despite higher radiotherapy rates (TAT = 49%, IAT = 36.8%); TAT was associated with lower subsequent symmetrisation rates (30.6% vs. 50.9%); IAT had a relative risk of 3.1 for a subsequent unplanned procedure; autologous patients required an average of 1.76 sessions of lipo-modelling. Conclusions: Using lipo-modelling to enable totally autologous LD reconstruction offers significant advantages over an implant-assisted technique. We have shown a lower rate of subsequent unplanned procedures, less revision surgery, and less contralateral symmetrisation surgery. We anticipate that a TAT will be supported by patient satisfaction surveys and long-term patient-reported cosmetic outcome data, and we intend to study this. Keywords: breast, latissimus dorsi, lipomodelling, reconstruction
Procedia PDF Downloads 334
7751 A Quinary Coding and Matrix Structure Based Channel Hopping Algorithm for Blind Rendezvous in Cognitive Radio Networks
Authors: Qinglin Liu, Zhiyong Lin, Zongheng Wei, Jianfeng Wen, Congming Yi, Hai Liu
Abstract:
The multi-channel blind rendezvous problem in distributed cognitive radio networks (DCRNs) concerns how users in the network can hop to the same channel at the same time slot without any prior knowledge (i.e., each user is unaware of the other users' information). The channel hopping (CH) technique is a typical solution to this blind rendezvous problem. In this paper, we propose a quinary coding and matrix structure-based CH algorithm called QCMS-CH. The QCMS-CH algorithm guarantees rendezvous for users equipped with only one cognitive radio under asynchronous clocks (i.e., arbitrary time drift between users), heterogeneous channels (i.e., the users' available channel sets are distinct), and symmetric roles (i.e., all users play the same role). The QCMS-CH algorithm first represents a randomly selected channel (denoted by R) as a fixed-length quaternary number. It then encodes the quaternary number into a quinary bootstrapping sequence, with the prefix "R00", according to a carefully designed quaternary-quinary coding table. Finally, it builds a CH matrix column by column according to the bootstrapping sequence and six types of elaborately generated subsequences. A user accesses the CH matrix row by row and performs its channel hopping accordingly to attempt rendezvous with other users. We prove the correctness of QCMS-CH and derive an upper bound on its Maximum Time-to-Rendezvous (MTTR). Simulation results show that the QCMS-CH algorithm outperforms the state of the art in terms of MTTR and Expected Time-to-Rendezvous (ETTR). Keywords: channel hopping, blind rendezvous, cognitive radio networks, quaternary-quinary coding
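The first step of QCMS-CH, representing the selected channel R as a fixed-length quaternary number, can be sketched as follows; the quinary coding table, the "R00" prefix handling, and the subsequence construction are specific to the paper and are not reproduced here.

```python
def to_quaternary(channel, length):
    """Represent a channel index as a fixed-length list of base-4 digits,
    most significant digit first (a sketch of only the first step of
    QCMS-CH; `length` would be chosen so 4**length covers the channel set)."""
    if channel >= 4 ** length:
        raise ValueError("channel index does not fit in the given length")
    digits = []
    for _ in range(length):
        digits.append(channel % 4)   # extract least significant base-4 digit
        channel //= 4
    return digits[::-1]              # most significant digit first


# usage: channel 11 over a 3-digit quaternary representation
# 11 = 0*16 + 2*4 + 3  ->  [0, 2, 3]
print(to_quaternary(11, 3))
```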
Procedia PDF Downloads 91
7750 Iterative Reconstruction Techniques as a Dose Reduction Tool in Pediatric Computed Tomography Imaging: A Phantom Study
Authors: Ajit Brindhaban
Abstract:
Background and Purpose: Computed Tomography (CT) scans have become the largest source of radiation in radiological imaging. The purpose of this study was to compare the quality of pediatric CT images reconstructed using Filtered Back Projection (FBP) with images reconstructed using different strengths of the Iterative Reconstruction (IR) technique, and to perform a feasibility study assessing the use of IR techniques as a dose reduction tool. Materials and Methods: An anthropomorphic phantom representing a 5-year-old child was scanned, in two stages, using a Siemens Somatom CT unit. In stage one, scans of the head, chest, and abdomen were performed using the standard protocols recommended by the scanner manufacturer. Images were reconstructed using FBP and 5 different strengths of IR. Contrast-to-Noise Ratios (CNR) were calculated from the average CT number and its standard deviation, measured in regions of interest created in the lung, bone, and soft tissue regions of the phantom. The paired t-test and one-way ANOVA were used to compare the CNR of FBP images with that of IR images, at the p = 0.05 level. The lowest IR strength that produced the highest CNR was identified. In the second stage, scans of the head were performed with mA(s) values decreased in proportion to the increase in CNR relative to the standard FBP protocol. CNR values in this stage were compared using the paired t-test at the p = 0.05 level. Results: Images reconstructed using the IR technique had higher CNR values (p < 0.01) in all regions compared to the FBP images, at all IR strengths. The CNR increased with increasing IR strength up to 3 in the head and chest images; increases beyond this strength were insignificant. In the abdomen images, CNR continued to increase up to strength 5. The results also indicated that IR techniques improve CNR by up to a factor of 1.5. Based on the CNR values of IR images at strength 3 and the CNR values of FBP images, a reduction in mA(s) of about 20% was identified.
The head images acquired at 20% reduced mA(s) and reconstructed using IR at strength 3 had CNR similar to that of FBP images at standard mA(s). In the head scans of the phantom used in this study, it was demonstrated that a similar CNR can be achieved even when the mA(s) is reduced by about 20%, provided the IR technique with strength 3 is used for reconstruction. Conclusions: The IR technique produced better image quality than FBP at all IR strengths. The IR technique can provide approximately 20% dose reduction in pediatric head CT while maintaining the same image quality as the FBP technique. Keywords: filtered back projection, image quality, iterative reconstruction, pediatric computed tomography imaging
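CNR as used above is computed from ROI statistics; a minimal sketch, assuming the common |mean difference| / background standard deviation definition (the abstract states that CNR is derived from mean CT numbers and their standard deviation but does not spell out the exact denominator):

```python
import statistics

def cnr(roi, background):
    """Contrast-to-noise ratio between two regions of interest.

    `roi` and `background` are lists of CT numbers (HU) sampled from the
    two regions. Uses |mean difference| / background SD, a common CNR
    definition (assumed here; the study's exact formula is not given)."""
    contrast = abs(statistics.mean(roi) - statistics.mean(background))
    return contrast / statistics.stdev(background)


# usage with toy CT-number samples
print(cnr([10, 10, 10], [0, 2, 4]))  # contrast 8, background SD 2 -> 4.0
```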
Procedia PDF Downloads 147
7749 Evolution of Multimodulus Algorithm Blind Equalization Based on Recursive Least Square Algorithm
Authors: Sardar Ameer Akram Khan, Shahzad Amin Sheikh
Abstract:
Blind equalization is an important technique within the equalization family. Multimodulus algorithms for blind equalization remove the undesirable effects of ISI and correct phase errors, saving the cost of a rotator at the receiver end. In this paper, a new algorithm named RLSMMA, combining the recursive least square and multimodulus algorithms, is proposed; under a few assumptions, fast convergence and minimum Mean Square Error (MSE) are achieved. The excellence of this technique is shown in simulations presenting MSE plots and the resulting filter outputs. Keywords: blind equalization, constant modulus algorithm, multi-modulus algorithm, recursive least square algorithm, quadrature amplitude modulation (QAM)
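The multimodulus error term that distinguishes MMA from CMA, driving the real and imaginary parts of the equalizer output toward separate dispersion constants (which is what corrects the carrier phase without a rotator), can be sketched as follows; the RLS weight update that RLSMMA combines with this error is omitted.

```python
def mma_error(z, R2):
    """Multimodulus algorithm (MMA) error for equalizer output z.

    R2 = E[a_R**4] / E[a_R**2] over the real part of the QAM alphabet
    (equal to the imaginary-part constant for square QAM). Real and
    imaginary parts get independent modulus errors, unlike CMA."""
    eR = z.real * (R2 - z.real ** 2)   # drive Re(z) toward the dispersion constant
    eI = z.imag * (R2 - z.imag ** 2)   # drive Im(z) toward the dispersion constant
    return complex(eR, eI)


# usage: for 4-QAM the per-rail symbols are +/-1, so R2 = 1 and a perfectly
# equalized symbol such as 1+1j yields zero error
print(mma_error(complex(1, 1), 1.0))
```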
Procedia PDF Downloads 642
7748 Analysis of Two Phase Hydrodynamics in a Column Flotation by Particle Image Velocimetry
Authors: Balraju Vadlakonda, Narasimha Mangadoddy
Abstract:
The hydrodynamic behavior in a laboratory flotation column was analyzed using particle image velocimetry (PIV). For complete characterization of column flotation, it is necessary to determine the flow velocity induced by bubbles in the liquid phase, the bubble velocity, and the bubble characteristics: diameter, shape, and bubble size distribution. An experimental procedure for analyzing simultaneous, phase-separated velocity measurements in two-phase flows is introduced. The non-invasive PIV technique was used to quantify the instantaneous flow field, as well as the time-averaged flow patterns, in selected planes of the column. By combining fluorescent tracer particles, shadowgraphy, and digital phase separation with a masking technique, PIV measured the bubble velocity as well as the Reynolds stresses in the column. Axial and radial mean velocities, as well as the fluctuating components, were determined for both phases by averaging a sufficient number of image pairs. The bubble size distribution was cross-validated with a high-speed video camera, and the average turbulent kinetic energy of the bubbles was analyzed. Different air flow rates were considered in the experiments. Keywords: particle image velocimetry (PIV), bubble velocity, bubble diameter, turbulent kinetic energy
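The mean, fluctuating, and turbulent kinetic energy quantities reported above follow from a Reynolds decomposition of the PIV velocity samples; a minimal sketch for a 2-D, per-unit-mass estimate (illustrative only, not the authors' processing pipeline):

```python
import statistics

def turbulent_kinetic_energy(u, v):
    """Reynolds decomposition of PIV velocity samples.

    `u` and `v` are lists of axial and radial velocity samples at one
    point over time. Returns the mean velocities and the 2-D per-unit-mass
    turbulent kinetic energy k = 0.5 * (<u'^2> + <v'^2>), where the primed
    quantities are fluctuations about the time average."""
    u_mean, v_mean = statistics.fmean(u), statistics.fmean(v)
    uf = [ui - u_mean for ui in u]   # fluctuating axial component u'
    vf = [vi - v_mean for vi in v]   # fluctuating radial component v'
    k = 0.5 * (statistics.fmean([x * x for x in uf])
               + statistics.fmean([x * x for x in vf]))
    return u_mean, v_mean, k
```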
Procedia PDF Downloads 510
7747 Very Large Scale Integration Architecture of Finite Impulse Response Filter Implementation Using Retiming Technique
Authors: S. Jalaja, A. M. Vijaya Prakash
Abstract:
Recursive combination of an algorithm based on Karatsuba multiplication is exploited to design a generalized transpose and parallel Finite Impulse Response (FIR) filter. Mid-range Karatsuba multiplication, and a carry-save adder based on Karatsuba multiplication, reduce the time complexity of higher-order multiplication implemented up to n bits. As a result, we design a modified N-tap transpose and parallel symmetric FIR filter structure using the Karatsuba algorithm, and the mathematical formulation of the FFA filter is derived. The proposed architecture involves a significantly smaller area-delay product (ADP) than the existing block implementation, and by adopting the retiming technique, the hardware cost is reduced further. The filter architecture is designed using a 90 nm technology library and implemented using the Cadence EDA tool. The synthesized results show better performance for different word lengths and block sizes. The design achieves reduced switching activity and low power consumption, with and without retiming, for different combinations of the circuit. The proposed structure achieves more than half the power reduction, with and without retiming, compared to the earlier design structure. As a proof of concept, for block size 16 and filter length 64, the CKA method achieves 51% and up to 70% less power by applying the retiming technique, and the CSA method achieves 57% and up to 77% less power by applying the retiming technique, compared to the previously proposed design. Keywords: carry save adder Karatsuba multiplication, mid range Karatsuba multiplication, modified FFA and transposed filter, retiming
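The Karatsuba recursion that the filter's multipliers build on replaces four half-size products with three; a minimal software sketch of the classic algorithm (not the paper's mid-range or carry-save hardware variants):

```python
def karatsuba(x, y):
    """Karatsuba multiplication of non-negative integers.

    Splits each operand at n bits and uses three recursive half-size
    products (z0, z1, z2) instead of four, giving roughly O(n^1.585)
    digit operations versus O(n^2) for schoolbook multiplication."""
    if x < 16 or y < 16:                       # base case: small operands
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> n, x & ((1 << n) - 1)    # x = hi_x * 2^n + lo_x
    hi_y, lo_y = y >> n, y & ((1 << n) - 1)    # y = hi_y * 2^n + lo_y
    z0 = karatsuba(lo_x, lo_y)
    z2 = karatsuba(hi_x, hi_y)
    z1 = karatsuba(lo_x + hi_x, lo_y + hi_y) - z0 - z2  # cross terms
    return (z2 << (2 * n)) + (z1 << n) + z0


print(karatsuba(1234, 5678))  # 7006652, same as 1234 * 5678
```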
Procedia PDF Downloads 233
7746 Secure Optimized Ingress Filtering in Future Internet Communication
Authors: Bander Alzahrani, Mohammed Alreshoodi
Abstract:
Information-centric networking (ICN), using architectures such as the Publish-Subscribe Internet Technology (PURSUIT), has been proposed as a new networking model that aims to replace the current end-centric networking model of the Internet. This emerging model focuses on what is being exchanged rather than on which network entities are exchanging information, which allows control-plane functions such as routing and host location to be specified according to the content items. The forwarding plane of the PURSUIT ICN architecture uses a simple and lightweight mechanism based on Bloom filter technology to forward packets. Although this forwarding scheme solves many problems of today's Internet, such as routing table growth and scalability issues, it is vulnerable to brute-force attacks, which are a starting point for distributed denial-of-service (DDoS) attacks. In this work, we design and analyze a novel source-routing and information delivery technique that keeps the simplicity of Bloom filter-based forwarding while being able to deter attacks, such as denial-of-service attacks, at the ingress of the network. To achieve this, special forwarding nodes called Edge-FW are attached directly to end-user nodes and used to perform a security test on maliciously injected random packets at the ingress of the path, preventing possible brute-force attacks at an early stage. In this technique, a core entity of the PURSUIT ICN architecture called the topology manager, which is responsible for finding the shortest path and creating forwarding identifiers (FIds), uses a cryptographically secure hash function to compute a 64-bit hash h over the formed FId, included in the packet for authentication purposes. Our proposal restricts the attacker from injecting packets carrying random FIds with a high filling factor ρ by optimizing and reducing the maximum allowed filling factor ρm in the network.
We optimize the FId to the minimum possible filling factor, with ρ ≤ ρm, while still supporting longer delivery trees, so network scalability is not affected by the chosen ρm. With this scheme, the filling factor of any legitimate FId never exceeds ρm, while the filling factor of illegitimate FIds cannot exceed the chosen small value of ρm. Therefore, injecting a packet containing an FId with a large filling factor, to achieve a higher attack probability, is no longer possible. The preliminary analysis of this proposal indicates that, with the designed scheme, the forwarding function can detect and prevent malicious activities such as DDoS attacks at an early stage and with very high probability. Keywords: forwarding identifier, filling factor, information centric network, topology manager
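The ingress filling-factor check described above can be sketched as follows; the FId width and the ρm threshold below are illustrative assumptions, and the paper's 64-bit hash authentication of the FId is omitted.

```python
def filling_factor(fid, width=256):
    """Filling factor rho of a Bloom-filter forwarding identifier:
    the fraction of set bits in the `width`-bit FId (width assumed)."""
    return bin(fid & ((1 << width) - 1)).count("1") / width

def ingress_accept(fid, rho_max=0.5, width=256):
    """Edge ingress test: drop packets whose FId exceeds the maximum
    allowed filling factor rho_m (threshold value illustrative). A random
    FId dense enough to match many links fails this check."""
    return filling_factor(fid, width) <= rho_max


# usage: a legitimate sparse FId passes, an over-filled one is dropped
legit = (1 << 128) - 1        # 128 of 256 bits set -> rho = 0.5
attack = (1 << 200) - 1       # 200 of 256 bits set -> rho ~ 0.78
print(ingress_accept(legit), ingress_accept(attack))
```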
Procedia PDF Downloads 153