Search results for: sound processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4266

936 Non-Invasive Data Extraction from Machine Display Units Using Video Analytics

Authors: Ravneet Kaur, Joydeep Acharya, Sudhanshu Gaur

Abstract:

Artificial Intelligence (AI) has the potential to transform manufacturing by improving shop floor processes such as production, maintenance and quality. However, industrial datasets are notoriously difficult to extract in a real-time, streaming fashion, which negates many potential AI benefits. A prime example is specialized industrial controllers operated by custom software, which complicates connecting them to an Information Technology (IT) based data acquisition network. Security concerns may also limit direct physical access to these controllers for data acquisition. To connect the Operational Technology (OT) data stored in these controllers to an AI application in a secure, reliable and available way, we propose a novel Industrial IoT (IIoT) solution in this paper. In this solution, we demonstrate how video cameras can be installed on a factory shop floor to continuously capture images of the controller HMIs. We propose image pre-processing to segment the HMI into regions of streaming data and regions of fixed meta-data. We then evaluate the performance of multiple Optical Character Recognition (OCR) technologies, such as Tesseract and Google Vision, in recognizing the streaming data, and test them on typical factory HMIs under realistic lighting conditions. Finally, we use the meta-data to match the OCR output with the temporal, domain-dependent context of the data to improve the accuracy of the output. Our IIoT solution enables reliable and efficient data extraction, which will improve the performance of subsequent AI applications.
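The segmentation step described in the abstract, splitting the HMI image into streaming-data and fixed meta-data regions, could be sketched by thresholding the per-pixel variance across successive captures of the same display; the grayscale input and the variance threshold are illustrative assumptions, not the authors' exact pre-processing.

```python
import numpy as np

def segment_hmi_regions(frames, var_threshold=1.0):
    """Split an HMI display into streaming-data and fixed meta-data masks.

    frames: array of shape (T, H, W) holding T grayscale captures of the
    same display. Pixels whose value changes over time are assumed to
    belong to streaming-data regions; static pixels to fixed meta-data.
    """
    frames = np.asarray(frames, dtype=float)
    temporal_var = frames.var(axis=0)           # (H, W) per-pixel variance
    streaming_mask = temporal_var > var_threshold
    metadata_mask = ~streaming_mask
    return streaming_mask, metadata_mask
```

Each mask could then be cropped and sent separately to the OCR engine, so that meta-data labels are recognized once while streaming values are re-read every frame.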

Keywords: human machine interface, industrial internet of things, internet of things, optical character recognition, video analytics

Procedia PDF Downloads 82
935 Ultrasonic Treatment of Baker’s Yeast Effluent

Authors: Emine Yılmaz, Serap Fındık

Abstract:

The baker’s yeast industry uses molasses, an end product of the sugar industry, as a raw material. Wastewater from molasses processing contains large amounts of coloured substances that give a dark brown colour and a high organic load to the effluents. The main coloured compounds are known as melanoidins, products of the Maillard reaction between amino acid and carbonyl groups in molasses. The dark colour prevents sunlight penetration and reduces the photosynthetic activity and dissolved oxygen level of surface waters. Various methods, such as biological processes (aerobic and anaerobic), ozonation, wet air oxidation and coagulation/flocculation, are used for the treatment of baker’s yeast effluent. Adequate treatment before the effluent is discharged is imperative. In addition, increasingly stringent environmental regulations are forcing distilleries to improve existing treatment and to find alternative methods of effluent management or combinations of treatment methods. Sonochemical oxidation, which employs ultrasound to induce cavitation phenomena, is one such alternative. In this study, the decolorization of baker’s yeast effluent using ultrasound was investigated. The effluent was supplied by a factory located in the north of Turkey. An ultrasonic homogenizer with an operating frequency of 20 kHz was used, with a TiO2-ZnO composite as sonocatalyst. The effects of the TiO2-ZnO molar proportion, calcination temperature and time, and catalyst amount on the decolorization of the effluent were investigated. The results showed that the composite TiO2-ZnO prepared with a 4:1 molar proportion and treated at 700°C for 90 min gave the best result: the initial decolorization rate at 15 min was 3% without catalyst and 14.5% with this catalyst.
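The reported decolorization percentages can be computed from absorbance readings with the standard colour-removal formula, 100·(A₀ − Aₜ)/A₀; taking absorbance at the melanoidin absorbance maximum as the colour measure is an assumption here, not a detail given in the abstract.

```python
def decolorization_percent(abs_initial, abs_treated):
    """Percent colour removal from absorbance readings taken before and
    after treatment (e.g. at the melanoidin absorbance maximum).
    Standard definition, assumed here: 100 * (A0 - At) / A0."""
    if abs_initial <= 0:
        raise ValueError("initial absorbance must be positive")
    return 100.0 * (abs_initial - abs_treated) / abs_initial
```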

Keywords: baker’s yeast effluent, decolorization, sonocatalyst, ultrasound

Procedia PDF Downloads 441
934 Analyzing Use of Figurativeness, Visual Elements, Allegory, Scenic Imagery as Support System in Punjabi Contemporary Theatre for Escaping Censorship

Authors: Shazia Anwer

Abstract:

This paper discusses an unusual form of resistance in theatre against the censorship board in Pakistan. An atypical approach to dramaturgy has created considerable space for performers and audiences to integrate and communicate. Social and religious absolutes create suffocation in Pakistani society, and strict control over all fine and performing arts has made art political; contemporary dramatists have therefore developed an amalgamated theatre to avoid censorship. Contemporary Punjabi theatre techniques depend directly on human cognition. The idea of indirect thought processing is not unique, but here it depends on the spectators. The paper gives an account of these techniques and their specific use for conveying particular messages to audiences. For the dramaturge of today, theatre space is an expression representing a linguistic formulation that includes qualities of experimental and non-traditional use of classical theatrical space, fulfilling the concept of open theatre. The paper explains the transformation of the theatrical experience into an event where actor and audience co-exist and co-experience the drama; the denial of the fourth wall makes two-way communication possible. It further elaborates how previously marginalized genres such as naach, jugat and miras are extensively included to counter the censorship board. Figurativeness, visual elements, allegory and scenic imagery form the basic support system of contemporary Punjabi theatre. The body of the actor is used as a source of non-verbal communication and as an escape from traditional theatrical space, which by every means contains every element that could be controlled and reprimanded by the controlling authority.

Keywords: communication, Punjabi theatre, figurativeness, censorship

Procedia PDF Downloads 107
933 Airborne SAR Data Analysis for Impact of Doppler Centroid on Image Quality and Registration Accuracy

Authors: Chhabi Nigam, S. Ramakrishnan

Abstract:

This paper presents an analysis of airborne Synthetic Aperture Radar (SAR) data to study the impact of the Doppler centroid on image quality and geocoding accuracy from the perspective of Stripmap mode data acquisition. Although in Stripmap mode the radar beam points at 90 degrees broadside (side looking), a shift in the Doppler centroid is inevitable due to platform motion. Inaccurate estimation of the Doppler centroid leads to poor image quality and image misregistration. The effect of the Doppler centroid is analyzed using multiple data sets collected from an airborne platform. Occurrences of ghost (ambiguous) targets and their power levels have been analyzed, as they influence the appropriate choice of PRF. The effect of aircraft attitude (roll, pitch and yaw) on the Doppler centroid is also analyzed with the collected data sets. The various stages of the Range Doppler Algorithm (RDA) used for image formation in Stripmap mode, namely range compression, Doppler centroid estimation, azimuth compression and range cell migration correction, are analyzed to find the performance limits and the dependence of the imaging geometry on the final image. The ability of Doppler centroid estimation to enhance the imaging accuracy for registration is also illustrated. The paper further addresses the processing of low-squint SAR data, and the challenges and performance limits imposed by the imaging geometry and platform dynamics on the final image quality metrics. Finally, the effect on various terrain types, including land, water and bright scatterers, is presented.
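The Doppler centroid estimation stage mentioned in the abstract is classically done with the average cross-correlation coefficient (ACCC) method: the mean pulse-to-pulse phase increment of the azimuth signal is 2π·f_dc/PRF, so the centroid follows from the angle of the lag-one correlation. A minimal sketch, assuming this standard method and sign convention rather than the authors' specific estimator:

```python
import numpy as np

def doppler_centroid_accc(azimuth_samples, prf):
    """Estimate the (aliased) Doppler centroid of an azimuth line of
    complex SAR samples via the average cross-correlation coefficient:
    fdc = PRF/(2*pi) * angle( sum s*(n) s(n+1) ).
    The result lies in (-PRF/2, PRF/2]; the sign convention is an
    assumption and depends on the sensor's phase definition."""
    s = np.asarray(azimuth_samples)
    accc = np.sum(np.conj(s[:-1]) * s[1:])   # lag-one correlation
    return prf * np.angle(accc) / (2.0 * np.pi)
```

With real data the estimate is usually averaged over many range bins before being fed to azimuth compression and range cell migration correction.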

Keywords: ambiguous target, Doppler Centroid, image registration, Airborne SAR

Procedia PDF Downloads 186
932 Meanings and Concepts of Standardization in Systems Medicine

Authors: Imme Petersen, Wiebke Sick, Regine Kollek

Abstract:

In systems medicine, high-throughput technologies produce large amounts of data on different biological and pathological processes, including (disturbed) gene expression, metabolic pathways and signaling. The large volume of data of different types, stored in separate databases and often located at different geographical sites, has posed new challenges for data handling and processing. Tools based on bioinformatics have been developed to resolve the resulting problems of systematizing, standardizing and integrating the various data. However, the heterogeneity of data gathered at different levels of biological complexity is still a major challenge in data analysis. To build multilayer disease modules, large and heterogeneous bodies of disease-related information (e.g., genotype, phenotype, environmental factors) are correlated. Therefore, a great deal of attention in systems medicine has been paid to data standardization, primarily to retrieve and combine large, heterogeneous datasets into standardized, integrated forms and structures. However, this data-centred concept of standardization in systems medicine runs contrary to the debate on standardization in science and technology studies (STS), which rather emphasizes the dynamics, contexts and negotiations of standard operating procedures. Based on empirical work on research consortia in Germany that explore the molecular profiles of diseases to establish systems medical approaches in the clinic, we trace how standardized data are processed and shaped by bioinformatics tools, how scientists using such data in research perceive such standard operating procedures, and what consequences for knowledge production (e.g., modeling) arise from them. Different concepts and meanings of standardization are thus explored to gain deeper insight into standard operating procedures, not only in systems medicine but also beyond.

Keywords: data, science and technology studies (STS), standardization, systems medicine

Procedia PDF Downloads 310
931 Marker Assisted Breeding for Grain Quality Improvement in Durum Wheat

Authors: Özlem Ateş Sönmezoğlu, Begüm Terzi, Ahmet Yıldırım, Leyla Gündüz

Abstract:

Durum wheat quality is defined as its suitability for pasta processing, that is, its pasta-making quality. Another factor that determines the quality of durum wheat is the nutritional value of the wheat or its final products. Wheat is a basic source of calories, proteins and minerals for humans in many countries of the world. For this reason, improvement of wheat's nutritional value is of great importance. In recent years, deficiencies in protein and micronutrients, particularly iron and zinc, have increased seriously; therefore, basic foods such as wheat must be improved for micronutrient content. The effects of some major genes on grain quality have been established. The Gpc-B1 locus is one of the genes that increase protein and micronutrient content, and it is used in studies to improve the nutritional value of durum wheat. The aim of this study was to increase the protein content and the micronutrient (Fe, Zn and Mn) contents of an advanced durum wheat line (TMB1) that had previously been improved for its protein quality. For this purpose, the TMB1 advanced durum wheat line was used as the recurrent parent, and the UC1113-Gpc-B1 line containing the Gpc-B1 gene was used as the gene source. In all generations, backcrossed plants carrying the targeted gene region were selected by marker assisted selection (MAS). In BC4F1 plants, MAS was employed in combination with embryo culture and rapid plant growth under controlled greenhouse conditions in order to shorten the time between generations in backcross breeding. The Gpc-B1 gene was selected using gene-specific molecular markers. Since the Yr-36 gene is associated with the Gpc-B1 allele, it was also transferred to the Gpc-B1 lines; thus, the backcrossed plants selected by MAS are resistant to yellow rust disease. This research was financially supported by TÜBİTAK (112T910).

Keywords: Durum wheat, Gpc-B1, MAS, Triticum durum, Yr-36

Procedia PDF Downloads 249
930 Design and Creation of a BCI Videogame for Training and Measure of Sustained Attention in Children with ADHD

Authors: John E. Muñoz, Jose F. Lopez, David S. Lopez

Abstract:

Attention Deficit Hyperactivity Disorder (ADHD) is a disorder that affects 1 out of 5 Colombian children, making it a real public health problem in the country. Conventional treatments such as medication and neuropsychological therapy have proved insufficient to decrease the high incidence of ADHD in the principal Colombian cities. This work presents the design and development of a videogame that uses a brain-computer interface not only as an input device but also as a tool to monitor neurophysiological signals. The videogame, named “The Harvest Challenge”, is set in the cultural context of a Colombian coffee grower, where the player uses an avatar in three mini-games created to reinforce four fundamental abilities: i) waiting, ii) planning, iii) following instructions and iv) achieving objectives. The details of the collaborative process of designing the multimedia tool according to exact clinical requirements, and a description of the interaction proposals, are presented through the mental stages of attention and relaxation. The final videogame is presented as a tool for sustained attention training in children with ADHD, using as its action mechanism the neuromodulation of beta and theta waves through an electrode located over the central part of the frontal lobe. The electroencephalographic signal is processed automatically inside the videogame, allowing the generation of a report of the evolution of the theta/beta ratio, a biological marker that has been demonstrated to be sufficient to discriminate between children with and without the deficit.
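The theta/beta ratio monitored by the game can be computed from a single-channel EEG segment as the ratio of band powers. A minimal sketch using an FFT periodogram; the band edges (theta 4–8 Hz, beta 13–30 Hz) are typical literature values and an assumption here, not figures from the abstract:

```python
import numpy as np

def theta_beta_ratio(eeg, fs, theta=(4.0, 8.0), beta=(13.0, 30.0)):
    """Theta/beta power ratio of a single-channel EEG segment.

    eeg: 1-D samples; fs: sampling rate in Hz. Band power is summed
    from the FFT periodogram over each (assumed) frequency band."""
    eeg = np.asarray(eeg, dtype=float)
    freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2

    def band_power(lo, hi):
        return psd[(freqs >= lo) & (freqs < hi)].sum()

    return band_power(*theta) / band_power(*beta)
```

In a neurofeedback loop this would run on short sliding windows, with the game rewarding the player when the ratio drops below a calibrated threshold.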

Keywords: BCI, neuromodulation, ADHD, videogame, neurofeedback, theta/beta ratio

Procedia PDF Downloads 340
929 Decision Support System for the Management of the Shandong Peninsula, China

Authors: Natacha Fery, Guilherme L. Dalledonne, Xiangyang Zheng, Cheng Tang, Roberto Mayerle

Abstract:

A Decision Support System (DSS) for supporting decision makers in the management of the Shandong Peninsula has been developed, with emphasis on coastal protection, coastal cage aquaculture and harbors. The investigations were carried out in the framework of a joint research project funded by the German Ministry of Education and Research (BMBF) and the Chinese Academy of Sciences (CAS). In this paper, a description of the DSS, the development of its components, and results of its application are presented. The system integrates in-situ measurements, process-based models and a database management system. Numerical models for the simulation of flow, waves, sediment transport and morphodynamics covering the entire Bohai Sea were set up based on the Delft3D modelling suite (Deltares). Calibration and validation of the models were carried out using measurements from moored Acoustic Doppler Current Profilers (ADCP) and High Frequency (HF) radars. In order to enable cost-effective and scalable applications, a database management system was developed; it enhances information processing and data evaluation, and supports the generation of data products. Results of the application of the DSS to the management of coastal protection, coastal cage aquaculture and harbors are presented here. Model simulations covering the most severe storms observed during recent decades were carried out, leading to an improved understanding of hydrodynamics and morphodynamics. The results helped to identify coastal stretches subjected to higher levels of energy and improved the support for coastal protection measures.

Keywords: coastal protection, decision support system, in-situ measurements, numerical modelling

Procedia PDF Downloads 166
928 Investigation on Reducing the Bandgap in Nanocomposite Polymers by Doping

Authors: Sharvare Palwai, Padmaja Guggilla

Abstract:

Smart materials, also called responsive materials, undergo reversible physical or chemical changes in their properties in response to small environmental variations. They can respond to single or multiple stimuli such as stress, temperature, moisture, electric or magnetic fields, light, or chemical compounds. Smart materials are therefore the basis of many applications, including biosensors and transducers, particularly electroactive polymers. As polymers exhibit good flexibility, high transparency, easy processing and low cost, they are promising sensor materials. Polyvinylidene Fluoride (PVDF), a ferroelectric polymer, exhibits piezoelectric and pyroelectric properties: pyroelectric materials convert heat directly into electricity, while piezoelectric materials convert mechanical energy into electricity. These characteristics make PVDF useful in biosensor devices and batteries. The influence of nanoparticle fillers such as Lithium Tantalate (LiTaO₃/LT), Potassium Niobate (KNbO₃/PN) and Zinc Titanate (ZnTiO₃/ZT) in polymer films will be studied comprehensively. Developing advanced and cost-effective biosensors is pivotal to realizing the full potential of polymer-based wireless sensor networks, which will in turn enable new types of self-powered applications. Finally, using the nanocomposite films with the best set of properties, the sensory elements will be designed and tested as electric generators under laboratory conditions. By characterizing the materials optically and investigating the effects of doping on the bandgap energies, the science of next-generation biosensor technologies can be advanced.
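Bandgap energies of such films are commonly extracted from optical absorption spectra by Tauc analysis: plot (αhν)^(1/n) against photon energy hν and extrapolate the linear region to the energy axis. The sketch below assumes this standard method (it is not named in the abstract), uses absorbance as a proxy for the absorption coefficient, and takes the upper half of the curve as the linear region; all three are simplifying assumptions.

```python
import numpy as np

def tauc_bandgap(photon_energy_ev, absorbance, n=0.5):
    """Estimate the optical bandgap (eV) by Tauc analysis.

    Fits a line to the steep part of (A * h*nu)^(1/n) versus h*nu and
    returns its x-intercept. n = 0.5 corresponds to a direct allowed
    transition; absorbance A stands in for the absorption coefficient."""
    e = np.asarray(photon_energy_ev, dtype=float)
    y = (np.asarray(absorbance, dtype=float) * e) ** (1.0 / n)
    mask = y >= 0.5 * y.max()            # crude pick of the linear region
    slope, intercept = np.polyfit(e[mask], y[mask], 1)
    return -intercept / slope            # x-intercept = bandgap energy
```

Comparing the intercepts of doped and undoped films then quantifies the bandgap reduction the study is after.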

Keywords: polyvinylidene fluoride, PVDF, lithium tantalate, potassium niobate, zinc titanate

Procedia PDF Downloads 102
927 Development of Electronic Services in Georgia: Analysis of Current Situation

Authors: Dato Surmanidze, Dato Antadze, Tornike Partenadze

Abstract:

Public online services in Georgia are concentrated on the main target segments: public administration, business, the population, and non-governmental and other interested organizations. Accordingly, the digital Georgia strategy focuses on providing G2C, G2B/B2G, G2NGO and G2G services. Within the G2C framework, sophisticated, high-technology online services have been developed for issuing passports, identity cards, residence documents and civil acts (birth, marriage, divorce, child adoption, change of name and surname, death, etc.), as well as other services. Websites such as my.gov.ge and sda.gov.ge offer remote services, including electronic application, processing and decision making. In line with international standards, automated services such as electronic tenders, product catalogues, invoices and payments have been developed; this creates a better investment climate for foreign companies in Georgia within the framework of G2B policy, while the website mybusiness.gov.ge creates better conditions for local business. Among the electronic services is e-NRMS (the electronic system for national resource management), introduced by the Ministry of Finance of Georgia to ensure the management of national resources by state and business organizations; it is integrated with bank services and provides G2C, G2B and B2G representatives with electronic services. A portal, meteo.gov.ge, was also created to provide electronic services concerning air, geological, environmental and pollution issues. Worknet.gov.ge, an electronic hub of information management for employers and employees, should also be mentioned: this labor-market information portal is intended to facilitate the receipt, analysis and delivery of information to interested parties such as employers and employees. However, two years on, only the employees' portal has been activated, which undermines awareness of the portal, its competitiveness and its success.

Keywords: electronic services, public administration, information technology, information society

Procedia PDF Downloads 240
926 Preparation and Properties of Gelatin-Bamboo Fibres Foams for Packaging Applications

Authors: Luo Guidong, Song Hang, Jim Song, Virginia Martin Torrejon

Abstract:

Due to their excellent properties, polymer packaging foams have become increasingly essential in our current lifestyles. They are cost-effective and lightweight, with excellent mechanical and thermal insulation properties. However, they constitute a major environmental and health concern due to litter generation, ocean pollution and microplastic contamination of the food chain. In recent years, considerable efforts have been made to develop more sustainable alternatives to conventional polymer packaging foams. As a result, biobased and compostable foams, such as starch-based loose-fill or PLA trays, are increasingly becoming commercially available. However, there is still a need for bulk manufacturing of bio-foam planks for packaging applications as a viable alternative to their fossil fuel counterparts (i.e., polystyrene, polyethylene and polyurethane). Gelatin is a promising biopolymer for packaging applications due to its biodegradability, availability and biocompatibility, but its mechanical properties are poor compared to conventional plastics. However, as widely reported for other biopolymers such as starch, the mechanical properties of gelatin-based bioplastics can be enhanced by formulation optimization, for example by incorporating fibres from crops such as bamboo. This research aimed to produce gelatin-bamboo fibre foams by mechanical foaming and to study the effect of fibre content on the foams' properties and structure. Foams with virtually no shrinkage, low density (<40 kg/m³), low thermal conductivity (<0.044 W/m•K) and mechanical properties comparable to conventional plastics were produced. Further work should focus on developing formulations suitable for packaging water-sensitive products and on processing optimization, especially reduction of the drying time.

Keywords: biobased and compostable foam, sustainable packaging, natural polymer hydrogel, cold chain packaging

Procedia PDF Downloads 73
925 Enhancement of Interface Properties of Thermoplastic Composite Materials

Authors: Reyhan Ozbask, Emek Moroydor Derin, Mustafa Dogu

Abstract:

There are a limited number of companies in the world that manufacture and commercially offer thermoplastic composite prepregs in accordance with aerospace requirements. The high-performance thermoplastics supplied for aerospace structural applications are PEEK (polyetheretherketone), PPS (polyphenylene sulfide), PEI (polyetherimide) and PEKK (polyetherketoneketone). Among these, PEEK was the raw material used in the first applications and has started to become widespread. However, the use of these thermoplastics in composite production is very difficult due to their high processing temperatures and impregnation difficulties. This study aims to develop carbon fiber-reinforced thermoplastic PEEK composites that comply with the requirements of the aviation industry, combining superior mechanical properties with light weight. The goal is to obtain high-performance thermoplastic composites with improved interface properties by using a sizing method (suspension development through chemical synthesis and functionalization) and to optimize the production process. The use of surface-modified boron nitride nanotubes as a bonding agent constitutes the original aspect of the study, as they have not yet been used in composite production with high-performance thermoplastics. For this purpose, laboratory-scale studies on the application of thermoplastic-compatible sizing will be carried out in order to increase fiber-matrix interfacial adhesion. The method consists, respectively, of the selection of an appropriate sizing type, laboratory-scale carbon fiber (CF)/polyetheretherketone (PEEK) interface enhancement studies, manufacturing of laboratory-scale BNNT-coated CF/PEEK woven prepreg composites, and their testing.

Keywords: carbon fiber reinforced composite, interface enhancement, boron nitride nanotube, thermoplastic composite

Procedia PDF Downloads 191
924 Communication Infrastructure Required for a Driver Behaviour Monitoring System, ‘SiaMOTO’ IT Platform

Authors: Dogaru-Ulieru Valentin, Sălișteanu Ioan Corneliu, Ardeleanu Mihăiță Nicolae, Broscăreanu Ștefan, Sălișteanu Bogdan, Mihai Mihail

Abstract:

The SiaMOTO system is a communications and data processing platform for vehicle traffic. The human factor is the most important source of these data, as the driver dictates the trajectory of the vehicle. As with any trajectory, the specific parameters are position, speed and acceleration, and constant knowledge of these parameters allows complex analyses. Roadways allow many vehicles to travel through their confined space, and the overlapping trajectories of several vehicles increase the likelihood of collision events, known as road accidents. Any such event has causes that lead to its occurrence, so the conditions for its occurrence are known. The human factor is predominant in deciding the trajectory parameters of the vehicle on the road, so monitoring it through the events reported by the DiaMOTO device over time will generate a guide to target any potentially high-risk driving behavior and reward those who control the driving phenomenon well. In this paper, we have focused on detailing the communication infrastructure between the DiaMOTO device and the traffic data collection server, the infrastructure through which the database to be used for complex AI/DLM analysis is built. The central element of this description is the data string in Codec-8 format sent by the DiaMOTO device to the SiaMOTO collection server database. The data presented are specific to a functional infrastructure implemented at the experimental model stage, in which DiaMOTO devices with unique codes, integrating ADAS and GPS functions, were installed on 50 vehicles, allowing vehicle trajectories to be monitored 24 hours a day.
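The Codec-8 data string mentioned above follows the published Codec-8 frame layout used by common GPS/AVL trackers: a four-byte zero preamble, a four-byte data field length, a one-byte codec ID (0x08), a record count, the AVL records, the record count repeated for consistency checking, and a CRC-16 field. A minimal sketch of parsing just this envelope; equating it with the exact DiaMOTO string is an assumption, and the AVL record bodies are left opaque here.

```python
import struct

def parse_codec8_envelope(frame: bytes) -> dict:
    """Parse the envelope of a Codec-8 frame (assumed layout):

      bytes 0-3      zero preamble
      bytes 4-7      data field length L, big-endian
      byte  8        codec ID (0x08 for Codec-8)
      byte  9        number of AVL records
      ...            AVL records (timestamp, priority, GPS, IO elements)
      byte  8+L-1    number of records, repeated
      bytes 8+L..+3  CRC-16 of the data field, zero-padded to 4 bytes
    """
    preamble, length = struct.unpack_from(">II", frame, 0)
    if preamble != 0:
        raise ValueError("bad preamble, expected four zero bytes")
    codec_id, n_records = frame[8], frame[9]
    if codec_id != 0x08:
        raise ValueError("not a Codec-8 frame")
    n_records_tail = frame[8 + length - 1]   # trailing record count
    (crc,) = struct.unpack_from(">I", frame, 8 + length)
    return {"records": n_records, "records_tail": n_records_tail, "crc": crc}
```

On the server side, a mismatch between the two record counts or a failed CRC check would cause the frame to be rejected before it reaches the database.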

Keywords: DiaMOTO, Codec-8, ADAS, GPS, driver monitoring

Procedia PDF Downloads 42
923 VR in the Middle School Classroom-An Experimental Study on Spatial Relations and Immersive Virtual Reality

Authors: Danielle Schneider, Ying Xie

Abstract:

Middle school science, technology, engineering, and math (STEM) teachers face an exceptional challenge in the expectation to incorporate curricula that build strong spatial reasoning skills on rudimentary geometry concepts. Because spatial ability is so closely tied to STEM students' success, researchers are tasked with determining effective instructional practices that create an authentic learning environment within the immersive virtual reality learning environment (IVRLE). This study investigated the effect of the IVRLE on middle school STEM students' spatial reasoning skills. The experimental study comprised thirty 7th-grade STEM students divided into a treatment group, which built an object in an immersive VR platform by applying spatial processing and visualizing its dimensions, and a control group, which built the identical object using a desktop computer-based, computer-aided design (CAD) program. Before and after the students participated in their respective "3D modeling" environments, their spatial reasoning abilities were assessed using the Middle Grades Mathematics Project Spatial Visualization Test (MGMP-SVT). Additionally, both groups created a physical 3D model as a secondary measure of the effectiveness of the IVRLE. The results of a one-way ANOVA identified a negative effect for those in the IVRLE. These findings suggest that, with middle school students, virtual reality (VR) proved an inadequate tool for improving spatial relation skills compared to desktop-based CAD.
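The one-way ANOVA used to compare the VR treatment group with the CAD control group reduces to the F statistic: between-group mean square over within-group mean square. A minimal sketch of that computation (the study's actual scores are not reproduced here):

```python
import numpy as np

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA over any number of groups:
    (SS_between / df_between) / (SS_within / df_within)."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n_total = sum(g.size for g in groups)
    grand_mean = np.concatenate(groups).mean()
    ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = n_total - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)
```

With two groups, as here, F equals the square of the independent-samples t statistic, and the p-value follows from the F distribution with (df_between, df_within) degrees of freedom.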

Keywords: virtual reality, spatial reasoning, CAD, middle school STEM

Procedia PDF Downloads 52
922 A Comprehensive Study and Evaluation on Image Fashion Features Extraction

Authors: Yuanchao Sang, Zhihao Gong, Longsheng Chen, Long Chen

Abstract:

Clothing fashion represents a human's aesthetic appreciation of everyday outfits and appetite for fashion, and it reflects developments in society, humanity and economics. However, modelling fashion by machine is extremely challenging because fashion is too abstract to be efficiently described by machines; even human beings can hardly reach a consensus about fashion. In this paper, we are dedicated to answering a fundamental fashion-related question: what image feature best describes clothing fashion? To address this issue, we designed and evaluated various image features, ranging from traditional low-level hand-crafted features, through mid-level style awareness features, to various currently popular deep neural network-based features that have shown state-of-the-art performance in vision tasks. In summary, we tested the following nine feature representations: color, texture, shape, style, convolutional neural networks (CNNs), CNNs with distance metric learning (CNNs&DML), AutoEncoder, CNNs with multiple layer combination (CNNs&MLC) and CNNs with dynamic feature clustering (CNNs&DFC). Finally, we validated the performance of these features on two publicly available datasets. Quantitative and qualitative experimental results on both intra-domain and inter-domain fashion clothing image retrieval showed that deep learning based feature representations far outperform traditional hand-crafted ones. Additionally, among all deep learning based methods, CNNs with explicit feature clustering perform best, which shows that feature clustering is essential for discriminative fashion feature representation.
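The retrieval setup behind such comparisons pairs a feature extractor with a similarity ranking. A minimal sketch of the simplest hand-crafted baseline in the list, a per-channel color histogram, plus cosine-similarity retrieval; the bin count and normalisation are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def color_histogram(image, bins=8):
    """Per-channel color histogram feature (assumed bin count).

    image: (H, W, 3) array with values in [0, 256). Returns an
    L1-normalised vector of length 3 * bins."""
    feats = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    v = np.concatenate(feats).astype(float)
    return v / v.sum()

def cosine_retrieval(query_feat, gallery_feats):
    """Rank gallery images by cosine similarity to the query feature."""
    g = np.asarray(gallery_feats, dtype=float)
    q = np.asarray(query_feat, dtype=float)
    sims = g @ q / (np.linalg.norm(g, axis=1) * np.linalg.norm(q))
    return np.argsort(-sims)                 # best match first
```

Swapping `color_histogram` for a CNN embedding while keeping `cosine_retrieval` fixed is exactly how the nine representations can be compared on equal footing.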

Keywords: convolutional neural network, feature representation, image processing, machine modelling

Procedia PDF Downloads 110
921 Assets Integrity Management in Oil and Gas Production Facilities through Corrosion Mitigation and Inspection Strategy: A Case Study of Sarir Oilfield

Authors: Iftikhar Ahmad, Youssef Elkezza

Abstract:

The Sarir oilfield, located in North Africa, has facilities for oil and gas production. Its assets can be divided into the following five categories: (i) wellbores and wellheads; (ii) vessels such as separators, desalters and gas processing facilities; (iii) pipelines, including all flow lines, trunk lines and shipping lines; (iv) storage tanks; and (v) other assets such as turbines and compressors. The petroleum industry recognizes the potential human, environmental and financial consequences that can result from failing to maintain the integrity of wellheads, vessels, tanks, pipelines and other assets, and the importance of effective asset integrity management increases as the industry infrastructure continues to age. The primary objective of asset integrity management (AIM) is to maintain assets in a fit-for-service condition while extending their remaining life in the most reliable, safe and cost-effective manner. Corrosion management is one of the important aspects of successful asset integrity management; it covers corrosion mitigation, monitoring, inspection and risk evaluation. External corrosion on pipelines, wellbores, buried assets and tank bottoms is controlled with a combination of coatings and cathodic protection, while external corrosion on surface equipment, wellheads and storage tanks is controlled by coatings. Periodic cleaning of pipelines by pigging helps prevent internal corrosion, which is further controlled by chemical treatment and controlled operations. This paper describes the integrity management system used in the Sarir oilfield for its oil and gas production facilities, based on standard practices of corrosion mitigation and inspection.

Keywords: assets integrity management, corrosion prevention in oilfield assets, corrosion management in oilfield, corrosion prevention, inspection activities

Procedia PDF Downloads 50
920 Study of Information Technology Support to Knowledge Sharing in Social Enterprises

Authors: Maria Granados

Abstract:

Information technology (IT) facilitates the management of knowledge in organisations by effectively leveraging the collective experience and knowledge of employees. It supports information-processing needs and enables the sense-making activities of knowledge workers. The study of IT support for knowledge management (KM) has been carried out mainly in larger organisations, where resources and competitive conditions can trigger the use of KM. There is still a lack of understanding of how IT can support the management of knowledge under different organisational settings influenced by constant tensions between social and economic objectives, a stronger focus on sustainability than on competitiveness, limited resources, and high levels of democratic participation and intrinsic motivation among employees. All of these conditions are present in Social Enterprises (SEs), which are normally micro and small businesses that trade to tackle social problems and to improve communities, people's life chances, and the environment. Their importance to society and to economies is thus increasing, yet there is still a need for more understanding of how these organisations operate, perform, innovate and scale up. This knowledge is crucial for designing accurate strategies to enhance the sector and increase its impact and coverage. To obtain a conceptual and empirical understanding of how IT can facilitate KM under the particular organisational conditions of SEs, a quantitative study was conducted with 432 owners and senior members of SEs in the UK, underpinned by 21 interviews. The findings demonstrated that IT supported mainly the retrieval and storage of necessary information in SEs, and to a lesser extent collaborative work and communication among enterprise members. It was also established that SEs were using cloud solutions, Web 2.0 tools, Skype and centralised shared servers to manage their knowledge informally. The main impediments preventing SEs from relying more on IT solutions were economic and human constraints. These findings offer new perspectives that can contribute not only to SEs and SE supporters but also to other businesses.

Keywords: social enterprises, knowledge management, information technology, collaboration, small firms

Procedia PDF Downloads 245
919 Recovery of Au and Other Metals from Old Electronic Components by Leaching and Liquid Extraction Process

Authors: Tomasz Smolinski, Irena Herdzik-Koniecko, Marta Pyszynska, M. Rogowski

Abstract:

Old electronic components are easy to find nowadays. Significant quantities of valuable metals such as gold, silver and copper are used in the production of advanced electronic devices, and old, useless electronic devices are slowly becoming a new source of precious metals, often a more efficient one than natural ores. For example, it is possible to recover more gold from one ton of personal computers than from seventeen tons of gold ore. This makes the urban mining industry profitable and necessary for sustainable development. Various treatment options for recovering metals from waste electronic equipment are available, based on conventional physical, hydrometallurgical and pyrometallurgical processes. Among these, hydrometallurgical processes are very promising options thanks to their relatively low capital cost, low environmental impact, potential for high metal recoveries and suitability for small-scale applications. The Institute of Nuclear Chemistry and Technology has extensive experience in hydrometallurgical processes, especially in the recovery of metals from industrial and agricultural wastes, and is currently carrying out an urban mining project. A method for the effective recovery of valuable metals from central processing unit (CPU) components has been developed. The principal processes, acidic leaching and solvent extraction, were used for precious metal recovery from old processors and graphics cards. Electronic components were treated with acidic solutions under various conditions, and the optimal acid concentration, process time and temperature were selected. Precious metals were extracted into the aqueous phase. In the next step, the metals were selectively extracted with organic solvents such as oximes or tributyl phosphate (TBP) using multistage mixer-settler equipment, and the process was optimized.

Keywords: electronic waste, leaching, hydrometallurgy, metal recovery, solvent extraction

Procedia PDF Downloads 111
918 Simultaneous Saccharification and Fermentation for D-Lactic Acid Production from Dried Distillers Grains with Solubles

Authors: Nurul Aqilah Mohd Zaini, Afroditi Chatzifragkou, Dimitris Charalampopoulos

Abstract:

D-Lactic acid production is gaining increasing attention due to the thermostable properties of its polymer, polylactic acid (PLA). In this study, D-lactic acid was produced in microbial cultures using Lactobacillus coryniformis subsp. torquens as the D-lactic acid producer and hydrolysates of Dried Distillers Grains with Solubles (DDGS) as the fermentation substrate. Prior to fermentation, DDGS was alkaline pretreated with 5% (w/v) NaOH for 15 minutes (121°C/~16 psi). This generated DDGS solid residues rich in carbohydrates, especially cellulose (~52%). The carbohydrate-rich solids were then subjected to enzymatic hydrolysis with Accellerase® 1500. For Separate Hydrolysis and Fermentation (SHF), enzymatic hydrolysis was carried out at 50°C for 24 hours, followed by D-lactic acid fermentation at 37°C at a controlled pH of 6. The obtained hydrolysate contained 24 g/l glucose, 5.4 g/l xylose and 0.6 g/l arabinose. In the case of Simultaneous Saccharification and Fermentation (SSF), hydrolysis and fermentation were conducted in a single-step process at 37°C and pH 5. The enzymatic hydrolysis of the pretreated DDGS solids took place mostly during the lag phase of the L. coryniformis fermentation, with only a small amount of glucose consumed during the first 6 h. Once the exponential phase started, glucose generation slowed as the microorganism began consuming glucose for D-lactic acid production. Higher concentrations of D-lactic acid were produced with the SSF approach: 28 g/l after 24 h of fermentation (84.5% yield), compared with 21.2 g/l with SHF. The optical purity of the D-lactic acid produced in both experiments was 99.9%. In addition, approximately 2 g/l of acetic acid was generated in SHF due to lactic acid degradation after glucose depletion. SSF thus proved efficient for DDGS utilisation and D-lactic acid production, reducing the overall processing time and yielding sufficient D-lactic acid concentrations without the generation of fermentation by-products.

Keywords: DDGS, alkaline pretreatment, SSF, D-lactic acid

Procedia PDF Downloads 310
917 Effect of a Polyherbal Gut Therapy Protocol in Changes of Gut and Behavioral Symptoms of Antibiotic Induced Dysbiosis of Autistic Babies

Authors: Dinesh K. S., D. R. C. V. Jayadevan

Abstract:

Autism is the most prevalent of the disorders organized under the umbrella of pervasive developmental disorders. Since the publication of Andrew Wakefield's paper in The Lancet, many critics have denied the connection between the gut and autism without looking into the matter; the British Medical Journal even published an editorial on the issue (BMJ 2010;340:c1807). Ayurveda, however, offers ample evidence for this connection: dysbiosis, yeast overgrowth of the gut, nutritional deficiencies, enzyme deficiencies, essential fatty acid deficiencies, gastro-esophageal reflux disease, indigestion, inflammatory bowel disease, and chronic constipation and its cascade, to note a few. The purpose of this paper is to present the observed changes in the behavioural symptoms of autistic babies after a gut management protocol, which is a usual component of our autism treatment plan, especially after dysbiotic changes following antibiotic administration. We ask whether there is any correlation between changes (if significant) in the gut symptoms and the behavioural problems of autistic babies, especially after a dysbiosis induced by antibiotics. A retrospective analysis was performed on the case sheets of autistic patients admitted to Vaidyaratnam P. S. Varier Ayurveda College Hospital, Kottakkal, Kerala, India, from September 2010 onwards; autistic patients routinely come to this hospital as part of their usual course of treatment. We investigated 40 cases diagnosed as autistic by clinical psychologists from different institutions who had dysbiosis induced by antibiotics. There was a significant change in gut symptoms before and after treatment (p < 0.05 for most components) and a significant change in behavioural symptoms before and after treatment (p < 0.05 for most components). The correlation between the change in gut symptoms and the change in behavioural symptoms after treatment was +0.86.
Conclusion: the selected polyherbal Ayurveda treatment has a significant role to play in changing abnormal behaviours in autistic babies, and this change correlates positively with changes in gut symptoms induced by dysbiosis from antibiotic intake.

Keywords: ayurveda, autism, dysbiosis, antibiotic

Procedia PDF Downloads 599
916 A Sentence-to-Sentence Relation Network for Recognizing Textual Entailment

Authors: Isaac K. E. Ampomah, Seong-Bae Park, Sang-Jo Lee

Abstract:

Over the past decade, there have been promising developments in Natural Language Processing (NLP), with several investigations of approaches to Recognizing Textual Entailment (RTE). These include models based on lexical similarities, models based on formal reasoning, and, most recently, deep neural models. In this paper, we present a sentence encoding model that exploits sentence-to-sentence relation information for RTE. In terms of sentence modeling, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) adopt different approaches: RNNs are well suited for sequence modeling, whilst CNNs are suited to extracting n-gram features through their filters and can learn ranges of relations via the pooling mechanism. We combine these strengths of RNNs and CNNs in a unified model for the RTE task. Our model combines relation vectors computed from the phrasal representations of each sentence with the final encoded sentence representations. First, we pass each sentence through a convolutional layer to extract a sequence of higher-level phrase representations, from which the first relation vector is computed. Second, the phrasal representation of each sentence from the convolutional layer is fed into a Bidirectional Long Short-Term Memory (Bi-LSTM) network to obtain the final sentence representations, from which a second relation vector is computed. The relation vectors are combined and then used as an attention mechanism over the Bi-LSTM outputs to yield the final sentence representations for classification. Experiments on the Stanford Natural Language Inference (SNLI) corpus suggest that this is a promising technique for RTE.
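The attention step described above can be sketched with toy arrays. The dimensions, the mean pooling, and the projection matrix W are illustrative assumptions, not the authors' architecture; in the real model the phrase features and hidden states come from trained CNN and Bi-LSTM layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def relation_vector(a, b):
    # Interaction features between two sentence vectors:
    # concatenation of the elementwise product and the absolute difference.
    return np.concatenate([a * b, np.abs(a - b)])

# Hypothetical dimensions: 7 phrase positions, 16-dim phrase features.
premise_phrases = rng.standard_normal((7, 16))      # stand-in for conv-layer output
hypothesis_phrases = rng.standard_normal((7, 16))

# Mean-pooled phrase representations give a relation vector.
r1 = relation_vector(premise_phrases.mean(axis=0), hypothesis_phrases.mean(axis=0))

# Stand-in for Bi-LSTM outputs over the premise phrases.
premise_states = rng.standard_normal((7, 16))

# Use the relation vector as an attention query over the states
# (projected to the state dimension by a hypothetical weight matrix W).
W = rng.standard_normal((r1.size, 16))
query = r1 @ W                       # shape (16,)
scores = premise_states @ query      # one score per phrase position, shape (7,)
weights = np.exp(scores - scores.max())
weights /= weights.sum()             # softmax attention weights
sentence_repr = weights @ premise_states  # attention-weighted sentence summary

print(weights.sum())
print(sentence_repr.shape)
```

This only shows the mechanics of using a relation vector as an attention query; in the paper both relation vectors are combined before attending, and the attended representations feed a classifier.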

Keywords: deep neural models, natural language inference, recognizing textual entailment (RTE), sentence-to-sentence relation

Procedia PDF Downloads 323
915 Electrochemical Growth and Properties of Cu2O Nanostructures

Authors: A. Azizi, S. Laidoudi, G. Schmerber, A. Dinia

Abstract:

Cuprous oxide (Cu2O) is a well-known oxide semiconductor with a band gap of 2.1 eV and natural p-type conductivity, which makes it an attractive material for device applications given its abundant availability, non-toxicity, and low production cost. It has a high absorption coefficient in the visible region, and its minority carrier diffusion length is also suitable for use as a solar cell absorber layer; it has been explored in junctions with n-type ZnO for photovoltaic applications. Cu2O nanostructures have been made by a variety of techniques, among which electrodeposition has emerged as one of the most promising processing routes, as it offers low cost, low temperature, and a high level of purity in the products. In this work, Cu2O nanostructures prepared by electrodeposition from an aqueous cupric sulfate solution with citric acid at 65°C onto fluorine-doped tin oxide (FTO) coated glass substrates were investigated. The effects of the deposition potential on the electrochemical, surface morphological, structural and optical properties of the Cu2O thin films were studied. Cyclic voltammetry experiments established the potential interval within which the electrodeposition of Cu2O is carried out. Mott–Schottky (M-S) plots demonstrate that all the films are p-type semiconductors, and the flat-band potential and acceptor density of the Cu2O thin films were determined from them. AFM images reveal that the applied potential has a very significant influence on the surface morphology and crystallite size of the Cu2O films. XRD measurements indicated that all the obtained films display the cubic Cu2O structure with a strong preferential orientation along the (111) direction. Optical transmission spectra in the UV-visible domain revealed a highest transmission of 75%, and the calculated gap values increased from 1.93 to 2.24 eV with increasing deposition potential.
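The Mott–Schottky analysis mentioned above extracts the acceptor density and the flat-band potential from a linear fit of 1/C² versus applied potential; for a p-type film the slope is negative. A minimal sketch on synthetic data follows, with the permittivity, electrode area and doping chosen as illustrative values, not the paper's measurements.

```python
import numpy as np

# Physical constants and assumed parameters (illustrative, not from the paper)
e = 1.602e-19          # elementary charge, C
eps0 = 8.854e-12       # vacuum permittivity, F/m
eps_r = 7.6            # assumed relative permittivity of Cu2O
A = 1e-4               # electrode area, m^2 (1 cm^2)
kT_e = 0.0257          # thermal voltage at room temperature, V

# Synthetic measurement obeying the p-type Mott-Schottky relation:
# 1/C^2 = (2 / (e*eps_r*eps0*N_A*A^2)) * (V_fb - V - kT/e)
N_A_true = 1e24        # acceptor density, m^-3
V_fb_true = 0.45       # flat-band potential, V
V = np.linspace(-0.2, 0.3, 20)
inv_C2 = (2 / (e * eps_r * eps0 * N_A_true * A**2)) * (V_fb_true - V - kT_e)

# Linear fit of 1/C^2 vs V; the negative slope confirms p-type conductivity
slope, intercept = np.polyfit(V, inv_C2, 1)
N_A_fit = -2 / (e * eps_r * eps0 * A**2 * slope)   # acceptor density from the slope
V_fb_fit = -intercept / slope + kT_e               # flat-band potential from the x-intercept

print(N_A_fit)
print(V_fb_fit)
```

On real impedance data the same fit is applied to the linear portion of the measured 1/C² curve.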

Keywords: Cu2O, electrodeposition, Mott–Schottky plot, nanostructure, optical properties, XRD

Procedia PDF Downloads 333
914 Multi-Impairment Compensation Based Deep Neural Networks for 16-QAM Coherent Optical Orthogonal Frequency Division Multiplexing System

Authors: Ying Han, Yuanxiang Chen, Yongtao Huang, Jia Fu, Kaile Li, Shangjing Lin, Jianguo Yu

Abstract:

In long-haul, high-speed optical transmission systems, the orthogonal frequency division multiplexing (OFDM) signal suffers various linear and non-linear impairments. In recent years, researchers have proposed compensation schemes for specific impairments, and the effects are remarkable; however, running a separate compensation algorithm per impairment increases transmission delay. With the widespread application of deep neural networks (DNNs) in communication, multi-impairment compensation based on a DNN is a promising scheme. In this paper, we propose and apply a DNN to compensate for multiple impairments of a 16-QAM coherent optical OFDM signal, thereby improving the performance of the transmission system. The trained DNN models are applied in the offline digital signal processing (DSP) module of the transmission system. The models optimize the constellation mapping at the transmitter and compensate for multiple impairments of the decoded OFDM signal at the receiver. Furthermore, the models reduce the peak-to-average power ratio (PAPR) of the transmitted OFDM signal and the bit error rate (BER) of the received signal. We verify the effectiveness of the proposed scheme for the 16-QAM coherent optical OFDM signal and demonstrate and analyze transmission performance in different transmission scenarios. The experimental results show that the PAPR and BER of the transmission system are significantly reduced after using the trained DNN, showing that a DNN with a specific loss function and network structure can optimize the transmitted signal, learn the channel features, and effectively compensate for multiple impairments in fiber transmission.
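The PAPR the models aim to reduce is the ratio of peak to mean instantaneous power of the time-domain OFDM signal, usually quoted in dB. A minimal sketch follows; the subcarrier count and the random 16-QAM symbols are chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random 16-QAM symbols on 256 subcarriers (illustrative parameters)
n_sc = 256
levels = np.array([-3, -1, 1, 3])
symbols = rng.choice(levels, n_sc) + 1j * rng.choice(levels, n_sc)

# OFDM modulation: the IFFT turns frequency-domain symbols into a time-domain signal
x = np.fft.ifft(symbols)

# PAPR = peak instantaneous power / mean power, in dB
power = np.abs(x) ** 2
papr_db = 10 * np.log10(power.max() / power.mean())
print(round(papr_db, 2))
```

For 256 subcarriers a random draw typically lands around 8-12 dB; PAPR-reduction schemes (including the DNN-optimized mapping described above) aim to push this figure down.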

Keywords: coherent optical OFDM, deep neural network, multi-impairment compensation, optical transmission

Procedia PDF Downloads 106
913 The Effect of Porous Alkali Activated Material Composition on Buffer Capacity in Bioreactors

Authors: Girts Bumanis, Diana Bajare

Abstract:

With demand for primary energy continuously growing, the search for renewable and efficient energy sources has been high on the agenda of our society, and one of the most promising sources is biogas technology. Residues from the dairy industry and milk processing could be used in biogas production; however, low efficiency and high cost impede wide application of this technology. One of the main problems is the management and conversion of organic residues through the anaerobic digestion process, which is characterized by an acidic environment due to the low pH of whey (<6), so that an additional pH control system is required. The low buffering capacity of whey is responsible for rapid acidification in biological treatments; an alkali activated material is therefore a promising solution to this problem. Alkali activated materials are formed from SiO2- and Al2O3-rich materials under a highly alkaline solution. After the structure-forming process is complete, free alkalis remain in the structure of the material; these are available for leaching and can provide buffer capacity. In this research, a porous alkali activated material was investigated. The highly porous structure ensures gradual leaching of alkalis over time, which is important in the biogas digestion process. The mixture composition and the SiO2/Na2O and SiO2/Al2O3 ratios were studied to test the buffer capacity potential of the alkali activated material. This research proved that by changing the molar ratio of the components it is possible to obtain materials with different buffer capacities, and this novel material was seen to have considerable potential for use in processes where buffer capacity and pH control are vitally important.

Keywords: alkaline material, buffer capacity, biogas production, bioreactors

Procedia PDF Downloads 219
912 Defect Correlation of Computed Tomography and Serial Sectioning in Additively Manufactured Ti-6Al-4V

Authors: Bryce R. Jolley, Michael Uchic

Abstract:

This study presents initial results toward the correlative characterization of inherent defects in additively manufactured (AM) Ti-6Al-4V. X-ray Computed Tomography (CT) defect data are compared and correlated with microscopic photographs obtained via automated serial sectioning. The metal AM specimen was manufactured from virgin Ti-6Al-4V powder to specified dimensions. A post-contour was applied during the fabrication process with a speed of 1050 mm/s, a power of 260 W, and a width of 140 µm. The specimen was stress-relief heat treated at 16°F for 3 hours. Microfocus CT imaging was performed on a predetermined region of the build, with parameters optimized for additively manufactured Ti-6Al-4V. After CT imaging, a modified RoboMet.3D version 2 was employed for serial sectioning and optical microscopy characterization of the same predetermined region. Automated montage capture of sub-micron resolution, bright-field reflection, 12-bit monochrome optical images was performed. These optical images were post-processed, including thresholding and segmentation to improve the visualization of defect features, to produce 2D and 3D data sets. The defects observed from optical imaging were compared and correlated with the defects observed from CT imaging over the same predetermined region of the specimen. Quantitative results for area fraction and equivalent pore diameter obtained via each method are presented for this correlation. It is shown that microfocus CT imaging does not capture all inherent defects within this Ti-6Al-4V AM sample. Best practices for this correlative effort are also presented, as well as the future direction of research resulting from this study.
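The two quantitative measures above are straightforward to compute from a segmented section: area fraction is the share of defect pixels, and the equivalent pore diameter is the diameter of the circle with the same area as the pore, d = 2·sqrt(A/π). A minimal sketch on a synthetic binary mask follows; the pixel size and the hand-labelled pore areas are illustrative assumptions.

```python
import numpy as np

# Binary defect mask of one serial section (1 = pore), as produced by thresholding.
# A small synthetic example; real masks come from the segmented optical images.
mask = np.zeros((100, 100), dtype=int)
mask[10:14, 10:14] = 1            # a 16-pixel pore
mask[50:52, 60:65] = 1            # a 10-pixel pore

pixel_size_um = 0.5               # assumed pixel pitch, micrometres

# Area fraction = defect pixels / total pixels
area_fraction = mask.sum() / mask.size

# Equivalent diameter of each pore: d = 2 * sqrt(A / pi)
areas_px = [16, 10]               # per-pore pixel counts (labelled by hand here)
areas_um2 = [a * pixel_size_um**2 for a in areas_px]
eq_diameters = [2 * np.sqrt(a / np.pi) for a in areas_um2]

print(area_fraction)
print([round(d, 2) for d in eq_diameters])
```

The same formulas apply to CT data, with voxel cross-sections replacing pixels, which is what makes the two modalities directly comparable.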

Keywords: additive manufacture, automated serial sectioning, computed tomography, nondestructive evaluation

Procedia PDF Downloads 111
911 Patient-Specific Design Optimization of Cardiovascular Grafts

Authors: Pegah Ebrahimi, Farshad Oveissi, Iman Manavi-Tehrani, Sina Naficy, David F. Fletcher, Fariba Dehghani, David S. Winlaw

Abstract:

Despite advances in modern surgery, congenital heart disease remains a medical challenge and a major cause of infant mortality. Cardiovascular prostheses are routinely used in surgical procedures to address congenital malformations, for example to establish a pathway from the right ventricle to the pulmonary arteries in pulmonary valvar atresia. Current off-the-shelf options, including human and adult products, have limited biocompatibility and durability, and their fixed size necessitates multiple subsequent operations to upsize the conduit to match the patient's growth over their lifetime. Non-physiological blood flow is another major problem, reducing the longevity of these prostheses. These limitations call for better designs that take into account the hemodynamic and anatomical characteristics of different patients. We have integrated tissue engineering techniques with modern medical imaging and image processing tools, along with mathematical modeling, to optimize the design of cardiovascular grafts in a patient-specific manner. Computational Fluid Dynamics (CFD) analyses are performed on models constructed from each individual patient's data, which allows for improved geometrical design and better hemodynamic performance. Tissue engineering strives to provide a material that grows with the patient and mimics the durability and elasticity of the native tissue. Simulations also give insight into the performance of the tissues produced in our lab and reduce the need for costly and time-consuming methods of graft evaluation. We are also developing a methodology for the fabrication of the optimized designs.

Keywords: computational fluid dynamics, cardiovascular grafts, design optimization, tissue engineering

Procedia PDF Downloads 213
910 The Quality Assessment of Seismic Reflection Survey Data Using Statistical Analysis: A Case Study of Fort Abbas Area, Cholistan Desert, Pakistan

Authors: U. Waqas, M. F. Ahmed, A. Mehmood, M. A. Rashid

Abstract:

In geophysical exploration surveys, the quality of the acquired data is critically important before executing the data processing and interpretation phases. In this study, 2D seismic reflection survey data from the Fort Abbas area, Cholistan Desert, Pakistan were taken as a test case in order to assess their quality on a statistical basis using the normalized root mean square error (NRMSE), Cronbach's alpha test (α) and null hypothesis tests (t-test and F-test). The analysis challenged the quality of the acquired data and highlighted significant errors in the acquired database. The study area is known to be flat, tectonically little affected and rich in oil and gas reserves; however, subsurface 3D modeling and contouring using the acquired database revealed high degrees of structural complexity and intense folding. The NRMSE showed the highest percentage of residuals between the estimated and predicted cases. The outcomes of the hypothesis testing also demonstrated the bias and erratic nature of the acquired database, and the low estimated value of alpha (α) in Cronbach's alpha test confirmed its poor reliability. Such a low-quality database needs extensive static correction or, in some cases, reacquisition of the data, which is usually not feasible on economic grounds. The outcomes of this study could be used to assess the quality of large databases and could further serve as a guideline for establishing database quality assessment models, enabling much more informed decisions in hydrocarbon exploration.
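The two main statistics above can be computed directly. A minimal sketch on made-up numbers (not the Fort Abbas data) follows; note that the RMSE is normalised here by the observed range, which is one of several common NRMSE conventions.

```python
import numpy as np

def nrmse(observed, predicted):
    """Root mean square error normalised by the observed range."""
    observed = np.asarray(observed, float)
    predicted = np.asarray(predicted, float)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return rmse / (observed.max() - observed.min())

def cronbach_alpha(items):
    """Cronbach's alpha: rows = observations, columns = items (e.g. repeated traces)."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Illustrative data only
obs = [10.0, 12.5, 11.0, 14.0, 13.5]
pred = [10.5, 12.0, 11.5, 13.0, 14.0]
print(round(nrmse(obs, pred), 3))

scores = np.array([[2, 3, 3], [4, 4, 5], [3, 3, 4], [5, 5, 5]])
print(round(cronbach_alpha(scores), 3))
```

A high alpha (conventionally above ~0.7) indicates internally consistent measurements; the low alpha reported in the study is what flags the database as unreliable.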

Keywords: data quality, null hypothesis, seismic lines, seismic reflection survey

Procedia PDF Downloads 114
909 Kinetics of Phytochemicals and Antioxidant Activity during Thermal Treatment of Cape Gooseberry (Physalis peruviana L)

Authors: Mary-Luz Olivares-Tenorio, Ruud Verkerk, Matthijs Dekker, Martinus A. J. S. van Boekel

Abstract:

Cape gooseberry, the fruit of the plant Physalis peruviana L., has gained research interest given its content of promising health-promoting compounds such as carotenoids, ascorbic acid, minerals, polyphenols, vitamins and antioxidants. This project aims to study the thermal stability of β-carotene, ascorbic acid, catechin and epicatechin, and of the antioxidant activity, in the matrix of the cape gooseberry. Fruits were obtained from a Colombian field in Cundinamarca and were at ripeness stage 4 (according to NTC 4580, corresponding to the mature stage) at the time of the experiment. The fruits were subjected to temperatures of 40, 60, 80, 100 and 120°C for several durations. β-Carotene, ascorbic acid, catechin and epicatechin contents were assessed with HPLC, and antioxidant activity with the DPPH method. β-Carotene was stable up to 100°C and showed some degradation at 120°C; the same behavior was observed for epicatechin. Catechin increased during treatment at 40°C, remained stable at 60°C, and showed degradation at 80°C, 100°C and 120°C that could be described by a second-order kinetic model. Ascorbic acid was the most heat-sensitive of the analyzed compounds: it degraded at all studied temperatures, following a first-order model. The activation energy for ascorbic acid degradation in cape gooseberry was 46.0 kJ/mol, and its degradation rate coefficient at 100°C was 6.53 × 10⁻³ s⁻¹. The antioxidant activity declined at all studied temperatures. The results show that cape gooseberry is an important source of different health-promoting compounds, some of which are heat stable. This makes the fruit a suitable raw material for processed products such as jams, juices and dehydrated fruit, giving the consumer a good intake of these compounds.
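The reported kinetic parameters (Ea = 46.0 kJ/mol and k = 6.53 × 10⁻³ s⁻¹ at 100°C) can be combined with the first-order model C(t) = C0·exp(-kt) and the Arrhenius equation to estimate degradation at other temperatures. A sketch using the paper's values follows; the 80°C extrapolation and the 60 s holding time are illustrative choices.

```python
import numpy as np

R = 8.314            # gas constant, J/(mol*K)
Ea = 46.0e3          # activation energy for ascorbic acid degradation, J/mol (from the study)
k_100 = 6.53e-3      # first-order rate coefficient at 100 C, 1/s (from the study)

def arrhenius_k(T_c, T_ref_c=100.0, k_ref=k_100):
    """Rate coefficient at T_c (Celsius), extrapolated from a reference point:
    k(T) = k_ref * exp(-Ea/R * (1/T - 1/T_ref)), temperatures in kelvin."""
    T, T_ref = T_c + 273.15, T_ref_c + 273.15
    return k_ref * np.exp(-Ea / R * (1 / T - 1 / T_ref))

def first_order_retention(k, t_s):
    """Fraction C(t)/C0 remaining after t_s seconds under first-order decay."""
    return np.exp(-k * t_s)

k_80 = arrhenius_k(80.0)
print(k_80 < k_100)                                   # degradation is slower at 80 C
print(round(first_order_retention(k_100, 60.0), 3))   # retention after 1 min at 100 C
```

The same two-parameter description (k at a reference temperature plus Ea) is what makes kinetic data like this directly usable for process design, e.g. choosing pasteurisation times that preserve ascorbic acid.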

Keywords: goldenberry, health-promoting compounds, phytochemical, processing, heat treatment

Procedia PDF Downloads 423
908 Conducting Quality Planning, Assurance and Control According to GMP (Good Manufacturing Practices) Standards and Benchmarking Data for Kuwait Food Industries

Authors: Alaa Alateeqi, Sara Aldhulaiee, Sara Alibraheem, Noura Alsaleh

Abstract:

Over the past few decades, Kuwait's local food industry has grown remarkably due to the increased demand for processed and semi-processed food products in the market. It is important that the ever-increasing number of food manufacturing and processing units maintain the required quality standards as per regional and, to some extent, international quality requirements. All Kuwaiti food manufacturing units should understand and follow international standard practices, and a set of quality assurance guidelines must be established so that any new business in this area is aware of the minimum requirements. The current study was undertaken to identify the gaps in Kuwaiti food industries in following Good Manufacturing Practices (GMP) in terms of quality planning, control and quality assurance. GMP refers to a set of rules, laws and regulations that ensure products are manufactured within quality standards and are safe, pure and effective. The present study therefore reports a case study in a reputed food manufacturing unit in Kuwait, starting from an assessment of current practices, followed by a diagnosis, a report of the diagnosis, and a road map and corrective measures for GMP implementation in the unit. The case study was also able to identify best practices and establish benchmarking data for other companies to follow, by measuring the selected company's quality, policies, products and strategies and comparing them with the established benchmarking data. A set of questionnaires and an assessment mechanism were established for companies to identify their 'benchmarking score' in relation to the number of non-conformities and conformities with the GMP standard requirements.
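A benchmarking score of the kind described can be sketched as the weighted share of conforming checklist items. The formula and the heavier weighting of critical non-conformities below are hypothetical, not the mechanism established in the study.

```python
# Hypothetical scoring sketch: share of (weighted) audited items found conforming.
def benchmarking_score(conformities, non_conformities,
                       critical_non_conformities=0, critical_weight=3):
    """Return a score in [0, 100]; critical findings count critical_weight times."""
    weighted_nc = non_conformities + critical_weight * critical_non_conformities
    total = conformities + weighted_nc
    if total == 0:
        raise ValueError("no audited items")
    return 100.0 * conformities / total

# Example audit: 85 conforming items, 12 minor and 1 critical non-conformity
print(round(benchmarking_score(85, 12, 1), 1))  # 85.0
```

Companies completing the questionnaire could then compare their score against the benchmark established by the case-study company.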

Keywords: good manufacturing practices, GMP, benchmarking, Kuwait Food Industries, food quality

Procedia PDF Downloads 438
907 Relative Entropy Used to Determine the Divergence of Cells in Single Cell RNA Sequence Data Analysis

Authors: An Chengrui, Yin Zi, Wu Bingbing, Ma Yuanzhu, Jin Kaixiu, Chen Xiao, Ouyang Hongwei

Abstract:

Single cell RNA sequencing (scRNA-seq) is one of the most effective tools for studying the transcriptomics of biological processes. Currently, the similarity between cells is usually measured with Euclidean distance or its derivatives. However, the scRNA-seq process follows a multivariate Bernoulli event model, so we hypothesized that valuing the divergence between cells with relative entropy would be more efficient than Euclidean distance. In this study, we compared the performance of Euclidean distance, Spearman correlation distance and relative entropy using scRNA-seq data from the early, medial and late stages of limb development generated in our lab. Relative entropy outperformed the other methods according to a cluster potential test. Furthermore, we developed KL-SNE, an algorithm that modifies t-SNE by replacing Euclidean distance with Kullback–Leibler divergence as the measure of divergence between cells. The results showed that KL-SNE was more effective than t-SNE at dissecting cell heterogeneity, indicating the better performance of relative entropy over Euclidean distance. Specifically, the chondrocytes expressing Comp were clustered together by KL-SNE but not by t-SNE. Surprisingly, early-stage cells were surrounded by medial-stage cells in the KL-SNE embedding, while medial-stage cells neighbored late-stage cells in the t-SNE embedding; this parallels the heatmap analysis, which showed that cells in the medial stage were more heterogeneous than cells in the other stages. In addition, we found that the results of KL-SNE tend to follow a Gaussian distribution, in contrast to those of t-SNE, which we also verified with scRNA-seq data from another study on human embryo development. KL-SNE is therefore also an effective way to convert a non-Gaussian distribution to a Gaussian one and to facilitate subsequent statistical processing. In conclusion, relative entropy is potentially a better way to determine the divergence between cells in scRNA-seq data analysis.
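The relative entropy (Kullback–Leibler divergence) used here in place of Euclidean distance can be sketched as follows; the toy read counts and the pseudocount used to avoid log(0) are illustrative assumptions, not the lab's data or pipeline.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Relative entropy D_KL(p || q) between two expression profiles,
    each normalised to a probability distribution; eps avoids log(0)."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Toy read counts over 5 genes for three cells (illustrative, not real scRNA-seq data)
cell_a = [100, 80, 5, 0, 15]
cell_b = [90, 85, 8, 1, 16]     # similar expression programme to cell_a
cell_c = [5, 2, 90, 80, 23]     # a different programme

d_ab = kl_divergence(cell_a, cell_b)
d_ac = kl_divergence(cell_a, cell_c)
print(d_ab < d_ac)   # similar cells diverge less
```

Note that D_KL is asymmetric (D_KL(p||q) ≠ D_KL(q||p) in general), so an algorithm like KL-SNE must fix a convention for which cell plays the role of p.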

Keywords: single cell RNA sequencing, similarity measurement, relative entropy, KL-SNE, t-SNE

Procedia PDF Downloads 316