Search results for: query processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3741

3021 Disassociating Preferences from Evaluations Towards Pseudo Drink Brands

Authors: Micah Amd

Abstract:

Preferences towards unfamiliar drink brands can be predictably influenced following correlations of subliminally-presented brands (CS) with positively valenced attributes (US). By contrast, evaluations towards subliminally-presented CS may be more variable, suggesting that CS-evoked evaluations may disassociate from CS-associated preferences following subliminal CS-US conditioning. We assessed this hypothesis over three experiments (Ex1, Ex2, Ex3). In each experiment, participants first provided preferences and evaluations towards meaningless trigrams (CS) as a baseline, followed by conditioning and a final round of preference and evaluation measurements. During conditioning, four pairs of subliminal and supraliminal/visible CS were respectively correlated with four US categories varying in aggregate valence (e.g., 100% positive, 80% positive, 40% positive, 0% positive for Ex1 and Ex2). In Ex1 and Ex2, presentation durations for subliminal CS were 34 and 17 milliseconds, respectively. In Ex3, the aggregate valences of the four US categories were altered (75% positive, 55% positive, 45% positive, 25% positive). Valence across US categories was manipulated to address a supplemental query of whether US-to-CS valence transfer was summative or integrative. During analysis, we computed two sets of difference scores reflecting pre-post preference and evaluation performances, respectively, and subjected them to Bayes tests. Across all experiments, results illustrated that US-to-CS valence transfer was most likely to shift evaluations for visible CS, but least likely to shift evaluations for subliminal CS. By contrast, preferences were likely to shift following correlations with single-valence categories (e.g., 100% positive, 100% negative) across both visible and subliminal CS. Our results suggest that CS preferences can be influenced through subliminal conditioning even as CS evaluations remain unchanged, supporting our central hypothesis. As for whether transfer effects are summative or integrative, our results were more mixed; a comparison of relative likelihoods revealed that preferences are more likely to reflect summative effects whereas evaluations reflect integration, independent of visibility condition.

Keywords: subliminal conditioning, evaluations, preferences, valence transfer

Procedia PDF Downloads 140
3020 Verification and Proposal of Information Processing Model Using EEG-Based Brain Activity Monitoring

Authors: Toshitaka Higashino, Naoki Wakamiya

Abstract:

Human beings perform a task by perceiving information from the outside world, recognizing it, and responding to it. There have been various attempts to analyze and understand the internal processes behind the reaction to a given stimulus by conducting psychological experiments and analyses from multiple perspectives. Among these, we focused on the Model Human Processor (MHP). However, the MHP was built on psychological experiments alone, so its relation to brain activity has so far been unclear. To verify the validity of the MHP and to propose our own model from a neuroscience viewpoint, EEG (electroencephalography) measurements were performed during the experiments in this study. More specifically, experiments were first conducted in which Latin alphabet characters were used as visual stimuli. In addition to response time, ERPs (event-related potentials) such as the N100 and P300 were measured using EEG. By comparing the cycle time predicted by the MHP with the latency of the ERPs, it was found that the N100, related to the perception of stimuli, appeared at the end of the perceptual processor. Furthermore, an additional experiment revealed that the P300, related to decision making, appeared during the response decision process, not at its end. Second, these findings were confirmed by experiments using Japanese Hiragana characters, i.e., Japan's own phonetic symbols. Finally, Japanese Kanji characters were used as more complicated visual stimuli. A Kanji character usually has several readings and several meanings. Despite this difference, a reading-related task and a meaning-related task exhibited similar results, meaning that they involved similar information-processing in the brain. Based on those results, our model was proposed, reflecting response time and ERP latency. It consists of three processors: the perception processor, from the input of a stimulus to the appearance of the N100; the cognitive processor, from the N100 to the P300; and the decision-action processor, from the P300 to the response. Using our model, application systems which reflect brain activity can be established.
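
As a worked illustration, the proposed three-processor model amounts to splitting a trial's timeline at the ERP latencies. The following minimal Python sketch uses hypothetical millisecond values, not data from the study:

```python
# Illustrative decomposition of one trial's response time into the three
# proposed processor stages; the millisecond values are made-up examples.

def stage_durations(t_n100_ms, t_p300_ms, response_ms):
    """Durations of the three processors of the proposed model."""
    return {
        "perception (stimulus -> N100)": t_n100_ms,
        "cognition (N100 -> P300)": t_p300_ms - t_n100_ms,
        "decision-action (P300 -> response)": response_ms - t_p300_ms,
    }

print(stage_durations(t_n100_ms=100, t_p300_ms=300, response_ms=450))
# {'perception (stimulus -> N100)': 100,
#  'cognition (N100 -> P300)': 200,
#  'decision-action (P300 -> response)': 150}
```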

Keywords: brain activity, EEG, information processing model, model human processor

Procedia PDF Downloads 86
3019 Comparative Efficacy of Gas Phase Sanitizers for Inactivating Salmonella, Escherichia coli O157:H7 and Listeria monocytogenes on Intact Lettuce Heads

Authors: Kayla Murray, Andrew Green, Gopi Paliyath, Keith Warriner

Abstract:

Introduction: It is now acknowledged that control of human pathogens associated with fresh produce requires an integrated approach of several interventions, as opposed to relying on post-harvest washes to remove field-acquired contamination. To this end, current research is directed towards identifying interventions that can be applied at different points in leafy green processing. Purpose: In the following, the efficacy of different gas phase treatments for decontaminating whole lettuce heads during pre-processing storage was evaluated. Methods: Whole Cos lettuce heads were spot-inoculated with L. monocytogenes, E. coli O157:H7 or Salmonella spp. The inoculated lettuce heads were then placed in a treatment chamber and exposed to ozone, chlorine dioxide or hydroxyl radicals for different time periods under a range of relative humidities. Survivors of the treatments were enumerated, and sensory analysis was performed on the treated lettuce. Results: Ozone gas reduced L. monocytogenes by 2 log10 after ten minutes of exposure, with Salmonella and E. coli O157:H7 being decreased by 0.66 and 0.56 log cfu, respectively. Chlorine dioxide gas treatment reduced L. monocytogenes and Salmonella on lettuce heads by 4 log cfu but only supported a 0.8 log cfu reduction in E. coli O157:H7 numbers. In comparison, hydroxyl radicals supported a 2.9-4.8 log cfu reduction of the model human pathogens inoculated onto lettuce heads but required extended exposure times and relative humidity < 0.8. Significance: Of the gas phase sanitizers tested, chlorine dioxide and hydroxyl radicals are the most effective. The latter process holds the most promise based on ease of delivery, worker safety and preservation of lettuce sensory characteristics. Although the exposure time for hydroxyl radicals was relatively long (24 h), this should not be considered a limitation, given that the intervention is applied in storerooms or in transport containers during transit.
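
For readers less used to log-cfu notation, the reductions reported above follow directly from the ratio of survivor counts; a minimal sketch with made-up counts:

```python
import math

def log_reduction(cfu_before, cfu_after):
    """Log10 reduction in colony-forming units after a sanitizer treatment."""
    return math.log10(cfu_before / cfu_after)

# A 2-log10 reduction corresponds to a 100-fold drop in survivors:
print(log_reduction(1e6, 1e4))  # 2.0
```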

Keywords: gas phase sanitizers, iceberg lettuce heads, leafy green processing

Procedia PDF Downloads 388
3018 Using Bidirectional Encoder Representations from Transformers to Extract Topic-Independent Sentiment Features for Social Media Bot Detection

Authors: Maryam Heidari, James H. Jones Jr.

Abstract:

Millions of online posts about different topics and products are shared on popular social media platforms. One use of this content is to provide crowd-sourced information about a specific topic, event or product. However, this use raises an important question: what percentage of the information available through these services is trustworthy? In particular, might some of this information be generated by a machine, i.e., a bot, instead of a human? Bots can be, and often are, purposely designed to generate enough volume to skew an apparent trend or position on a topic, yet the consumer of such content cannot easily distinguish a bot post from a human post. In this paper, we introduce a model for social media bot detection which uses Bidirectional Encoder Representations from Transformers (Google BERT) for sentiment classification of tweets to identify topic-independent features. Our use of a natural language processing approach to derive topic-independent features for our new bot detection model distinguishes this work from previous bot detection models. We achieve 94% accuracy in classifying content as generated by a bot or a human, where the most accurate prior work achieved an accuracy of 92%.
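
The authors' pipeline is not reproduced here, but the general idea, deriving topic-independent sentiment features from a pre-trained transformer and feeding them to a conventional classifier, can be sketched as follows. This is a minimal illustration assuming the HuggingFace transformers and scikit-learn packages; the default sentiment model and the toy labels are assumptions:

```python
# Minimal sketch of the general idea (not the paper's code): use a
# pre-trained BERT-family sentiment classifier to produce topic-independent
# features, then train a simple bot/human classifier on them.
from transformers import pipeline
from sklearn.linear_model import LogisticRegression

sentiment = pipeline("sentiment-analysis")  # downloads a default model

def sentiment_features(tweets):
    # One feature per tweet: signed sentiment score, independent of topic words.
    feats = []
    for out in sentiment(tweets):
        score = out["score"] if out["label"] == "POSITIVE" else -out["score"]
        feats.append([score])
    return feats

tweets = ["That's just what I need, great! Terrific!", "Buy now! Best deal ever!!"]
labels = [0, 1]  # 0 = human, 1 = bot (toy labels for illustration)
clf = LogisticRegression().fit(sentiment_features(tweets), labels)
```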

Keywords: bot detection, natural language processing, neural network, social media

Procedia PDF Downloads 100
3017 Physical Activity and Cognitive Functioning Relationship in Children

Authors: Comfort Mokgothu

Abstract:

This study investigated the relation between information processing and fitness level in active (fit) and sedentary (unfit) children drawn from rural and urban areas in Botswana. It was hypothesized that fit children would display faster simple reaction times (SRT), choice reaction times (CRT) and simple movement times (SMT). Sixty third-grade children (7.0-9.0 years) were initially selected and, based upon fitness testing, 45 participated in the study (15 each of fit urban, unfit urban, and fit rural). All children completed anthropometric measures, skinfold testing and submaximal cycle ergometer testing. The cognitive testing included SRT, CRT, SMT and choice movement time (CMT), as well as memory sequence length. Results indicated that the rural fit group exhibited faster SMT than the urban fit and unfit groups. For CRT, both fit groups were faster than the unfit group. Collectively, the study shows that the relationship that exists between physical fitness and cognitive function amongst the elderly can tentatively be extended to the pediatric population. Physical fitness could be a factor in the speed at which we process information, including decision making, even in children.

Keywords: decision making, fitness, information processing, reaction time, cognition, movement time

Procedia PDF Downloads 129
3016 An Intelligent Nondestructive Testing System of Ultrasonic Infrared Thermal Imaging Based on Embedded Linux

Authors: Hao Mi, Ming Yang, Tian-yue Yang

Abstract:

Ultrasonic infrared nondestructive testing is a testing method offering high speed, accuracy and good localization. However, some problems remain: detection requires manual real-time judgment in the field, and the methods for storing and viewing results are still primitive. An intelligent non-destructive detection system based on embedded Linux is put forward in this paper. The hardware part of the detection system is based on an ARM (Advanced RISC Machine) core, and an embedded Linux system is built to realize image processing and defect detection on thermal images. The CLAHE algorithm and a Butterworth filter are used to process the thermal image, and then the Boa server and CGI (Common Gateway Interface) technology are used to transmit the test results to the display terminal through the network for real-time and remote monitoring. The system also reduces labor by removing the need for manual judgment. According to the experimental results, the system provides a convenient and quick solution for industrial non-destructive testing.
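
As a rough illustration of the two preprocessing steps named above, a minimal Python sketch using OpenCV for CLAHE and a frequency-domain Butterworth low-pass written with NumPy is shown below; the file name and filter parameters are assumptions, not the paper's settings:

```python
import cv2
import numpy as np

img = cv2.imread("thermal.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# CLAHE: contrast-limited adaptive histogram equalization
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)

# Butterworth low-pass filter applied in the frequency domain
def butterworth_lowpass(image, cutoff=30, order=2):
    rows, cols = image.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)   # distance from centre
    H = 1.0 / (1.0 + (D / cutoff) ** (2 * order))    # Butterworth transfer function
    F = np.fft.fftshift(np.fft.fft2(image))
    return np.abs(np.fft.ifft2(np.fft.ifftshift(F * H)))

smoothed = butterworth_lowpass(enhanced.astype(float))
```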

Keywords: remote monitoring, non-destructive testing, embedded Linux system, image processing

Procedia PDF Downloads 203
3015 Digital Forensics Compute Cluster: A High Speed Distributed Computing Capability for Digital Forensics

Authors: Daniel Gonzales, Zev Winkelman, Trung Tran, Ricardo Sanchez, Dulani Woods, John Hollywood

Abstract:

We have developed a distributed computing capability, the Digital Forensics Compute Cluster (DFORC2), to speed up the ingestion and processing of digital evidence that is resident on computer hard drives. DFORC2 parallelizes evidence ingestion and file processing steps. It can be run on a standalone computer cluster or in the Amazon Web Services (AWS) cloud. When running in a virtualized computing environment, its cluster resources can be dynamically scaled up or down using Kubernetes. DFORC2 is an open source project that uses Autopsy, Apache Spark and Kafka, and other open source software packages. It extends the proven open source digital forensics capabilities of Autopsy to compute clusters and cloud architectures, so digital forensics tasks can be accomplished efficiently by a scalable array of cluster compute nodes. In this paper, we describe DFORC2 and compare it with a standalone version of Autopsy when both are used to process evidence from hard drives of different sizes.
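
DFORC2 itself builds on Autopsy, Spark and Kafka; the parallelization idea can be sketched independently with a few lines of PySpark. The sketch below, with hypothetical file paths, distributes a per-file task (here, SHA-256 hashing) across cluster workers and assumes the paths are visible to every worker (e.g., shared storage):

```python
# Minimal PySpark sketch of the parallelization idea (not the project's
# actual code): distribute per-file processing across cluster workers.
import hashlib
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("forensics-demo").getOrCreate()
sc = spark.sparkContext

def sha256_of(path):                      # runs on whichever worker gets the task
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return path, h.hexdigest()

evidence_files = ["/evidence/img001.dd", "/evidence/img002.dd"]  # hypothetical
digests = sc.parallelize(evidence_files).map(sha256_of).collect()
```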

Keywords: digital forensics, cloud computing, cyber security, spark, Kubernetes, Kafka

Procedia PDF Downloads 375
3014 Using Electrical Impedance Tomography to Control a Robot

Authors: Shayan Rezvanigilkolaei, Shayesteh Vefaghnematollahi

Abstract:

Electrical impedance tomography (EIT) is a non-invasive imaging technique suitable for medical applications. This paper describes an electrical impedance tomography device with the ability to navigate a robotic arm to manipulate a target object. The design of the device includes various hardware and software sections to perform medical imaging and control the robotic arm. In the hardware section, an image is formed by 16 electrodes located around a container. This image is used to navigate a 3-DOF robotic arm to reach the exact location of the target object. The data set used to form the impedance image is obtained by repeated current injections and voltage measurements between all electrode pairs. After performing the necessary calculations to obtain the impedance, the information is transmitted to the computer. This data is then processed in MATLAB, which is interfaced with EIDORS (Electrical Impedance Tomography Reconstruction Software) to reconstruct the image from the acquired data. In the next step, the coordinates of the center of the target object are calculated using the MATLAB Image Processing Toolbox (IPT). Finally, these coordinates are used to calculate the angles of each joint of the robotic arm. The robotic arm moves to the desired tissue on the user's command.
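
The final step, turning target coordinates into joint angles, is a standard inverse kinematics computation. A minimal Python sketch for a base rotation followed by a two-link planar arm (an assumed geometry with illustrative link lengths, not the paper's arm) is:

```python
import numpy as np

def ik_3dof(x, y, z, l1=0.2, l2=0.2):
    """Joint angles (base yaw, shoulder, elbow) placing the tip at (x, y, z).

    Assumes a base rotation plus a two-link planar arm; l1, l2 in metres
    are illustrative values.
    """
    theta_base = np.arctan2(y, x)               # rotate base toward the target
    r = np.hypot(x, y)                          # radial distance in base plane
    # Elbow angle from the law of cosines
    c2 = (r**2 + z**2 - l1**2 - l2**2) / (2 * l1 * l2)
    theta_elbow = np.arccos(np.clip(c2, -1.0, 1.0))
    # Shoulder angle: target direction minus the elbow's contribution
    theta_shoulder = np.arctan2(z, r) - np.arctan2(
        l2 * np.sin(theta_elbow), l1 + l2 * np.cos(theta_elbow))
    return theta_base, theta_shoulder, theta_elbow

print(ik_3dof(0.25, 0.10, 0.05))
```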

Keywords: electrical impedance tomography, EIT, surgeon robot, image processing of electrical impedance tomography

Procedia PDF Downloads 260
3013 An Efficient FPGA Realization of FIR Filter Using Distributed Arithmetic

Authors: M. Iruleswari, A. Jeyapaul Murugan

Abstract:

The most fundamental building block in many Digital Signal Processing (DSP) applications is the Finite Impulse Response (FIR) filter, because of its linear phase, stability and regular structure. Designing a high-speed and hardware-efficient FIR filter is a challenging task, as the complexity increases with the filter order. Most applications require higher-order filters, but the memory usage of the filter increases exponentially with its order. Multipliers occupy a large chip area and need long computation times. Multiplier-less, memory-based techniques have gained popularity over the past two decades due to their high-throughput processing capability and reduced dynamic power consumption. This paper describes the design and implementation of a highly efficient Look-Up Table (LUT) based circuit for the implementation of a FIR filter using the distributed arithmetic algorithm; it is a multiplier-less FIR filter. The LUT can be subdivided into a number of smaller LUTs to reduce the memory usage for higher-order filters. The performance of various filter orders with different address lengths is analysed using the Xilinx 14.5 synthesis tool. The proposed design provides lower latency, lower memory usage and high throughput.
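
The distributed arithmetic idea itself, replacing multiplications with a LUT indexed by one bit of every input sample and a bit-serial shift-accumulate, can be illustrated behaviourally. The following Python sketch (a software model under the simplifying assumption of unsigned inputs, not the paper's FPGA design) verifies against direct convolution:

```python
# Behavioural sketch of LUT-based distributed arithmetic (DA) for a FIR
# filter with unsigned B-bit inputs; real FPGA designs also handle two's
# complement and pipeline the shift-accumulate.

def build_lut(coeffs):
    # LUT[a] = sum of coefficients whose input's current bit is 1
    n = len(coeffs)
    return [sum(c for k, c in enumerate(coeffs) if a >> k & 1)
            for a in range(1 << n)]

def da_fir_output(coeffs, samples, bits=8):
    lut = build_lut(coeffs)
    acc = 0
    for b in range(bits):                       # bit-serial shift-accumulate
        addr = sum(((x >> b) & 1) << k for k, x in enumerate(samples))
        acc += lut[addr] << b
    return acc

coeffs, window = [1, 2, 3, 4], [10, 20, 30, 40]
assert da_fir_output(coeffs, window) == sum(c * x for c, x in zip(coeffs, window))
```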

Keywords: finite impulse response, distributed arithmetic, field programmable gate array, look-up table

Procedia PDF Downloads 442
3012 Distributed Processing for Content Based Lecture Video Retrieval on Hadoop Framework

Authors: U. S. N. Raju, Kothuri Sai Kiran, Meena G. Kamal, Vinay Nikhil Pabba, Suresh Kanaparthi

Abstract:

There is a huge amount of lecture video data available for public use, and many more lecture videos are being created and uploaded every day. Searching for videos on required topics in this huge database is a challenging task; therefore, an efficient method for video retrieval is needed. An approach for automated video indexing and video search in large lecture video archives is presented. As the amount of video lecture data is huge, it is very inefficient to do the processing in a centralized computation framework; hence, the Hadoop framework for distributed computing on big video data is used. The first step in the process is automatic video segmentation and key-frame detection, to offer a visual guideline for navigating the video content. In the next step, we extract textual metadata by applying video Optical Character Recognition (OCR) technology to the key-frames. The OCR output and the detected slide text line types are used for keyword extraction, by which both video-level and segment-level keywords are extracted for content-based video browsing and search. The performance of the indexing process can be improved for a large database by using distributed computing on the Hadoop framework.
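
A minimal single-machine sketch of the first two steps, key-frame detection via histogram change followed by OCR on the key-frames, is shown below. It assumes OpenCV and pytesseract (with the Tesseract binary installed); the threshold and the choice to compare each frame against the last key-frame are assumptions:

```python
# Sketch of key-frame detection + OCR (not the paper's Hadoop pipeline).
import cv2
import pytesseract

def keyframes_and_text(video_path, threshold=0.4):
    cap = cv2.VideoCapture(video_path)
    prev_hist, results, idx = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        # Large change vs. the last key-frame suggests a slide transition
        if prev_hist is None or cv2.compareHist(
                prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > threshold:
            results.append((idx, pytesseract.image_to_string(gray)))
            prev_hist = hist
        idx += 1
    cap.release()
    return results
```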

Keywords: video lectures, big video data, video retrieval, hadoop

Procedia PDF Downloads 510
3011 Effect of Temperature and Deformation Mode on Texture Evolution of AA6061

Authors: M. Ghosh, A. Miroux, L. A. I. Kestens

Abstract:

At the molecular or micrometre scale, practically all materials are neither homogeneous nor isotropic. The concept of texture is used to identify the structural features that cause the properties of a material to be anisotropic. For metallic materials, the anisotropy of mechanical behaviour originates from the crystallographic nature of plastic deformation and is therefore controlled by the crystallographic texture. Anisotropy in mechanical properties often constitutes a disadvantage in the application of materials, as is illustrated by the earing phenomenon during drawing. However, advantages may also be attained for other properties (e.g., optimization of magnetic behaviour along a specific direction) by controlling texture through thermo-mechanical processing. Nevertheless, in order to have better control over the final properties, it is essential to relate texture to the materials processing route and subsequently optimise performance. To date, however, few studies have been reported on the evolution of texture in 6061 aluminium alloy during warm processing (from room temperature to 250ºC). In the present investigation, recrystallized 6061 aluminium alloy samples were subjected to tensile testing and plane strain compression (PSC) at room and warm temperatures. The gradual change of texture following both deformation modes was measured and discussed. Tensile tests demonstrate the mechanism at low strain, while PSC does the same at high strain and eventually simulates the conditions of rolling. The Cube-dominated texture of the initial rolled and recrystallized AA6061 sheets was replaced by dominant S and R components after PSC at room temperature; PSC at warm temperature (250ºC) did not show any noticeable deviation from the room-temperature observation. It was also noticed that temperature has no significant effect on the evolution of grain morphology during PSC. The band contrast map revealed that after 30% deformation the substructure inside the grains is mainly made of a series of parallel bands. A tendency for Cube to decrease and Goss to increase was noticed after tensile deformation compared to the as-received material. As with PSC, the texture does not change appreciably after tensile deformation at warm temperature. An η-fibre running from Goss to Cube was noticed in all three textures.

Keywords: AA 6061, deformation, temperature, tensile, PSC, texture

Procedia PDF Downloads 472
3010 Selecting the Best Sub-Region for Indexing Images in the Case of Weak Segmentation Based on Local Color Histograms

Authors: Mawloud Mosbah, Bachir Boucheham

Abstract:

The color histogram is considered the oldest method used by CBIR (Content-Based Image Retrieval) systems for indexing images. However, global histograms do not capture spatial information, which is why later techniques have attempted to overcome this limitation by introducing segmentation as a preprocessing step. Weak segmentation is employed by local histograms, while other methods, such as the Color Coherence Vector (CCV), are based on strong segmentation. Indexation based on local histograms consists of splitting the image into N overlapping blocks or sub-regions and then computing the histogram of each block. The dissimilarity between two images is consequently reduced to computing the distances between the N local histograms of both images, yielding N*N values; generally, the lowest value is used to rank images, meaning that the lowest value designates which sub-region is used to index the images of the collection being queried. In this paper, we examine the local histogram indexation method and compare the results obtained against those given by the global histogram. We also address another noteworthy issue arising when relying on local histograms, namely which of the N*N values to trust when comparing images, in other words, which of the block comparisons to use as the basis for indexing. Based on the results achieved here, relying on local histograms, which imposes extra overhead on the system through an additional preprocessing step, namely segmentation, does not necessarily produce better results. In addition, we propose some ideas for selecting the local histogram used to encode the image, rather than simply relying on the local histogram with the lowest distance to the query histograms.
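
The core computation described above can be sketched compactly in Python with OpenCV and NumPy. For simplicity, the sketch below uses non-overlapping blocks (the paper uses overlapping ones) and an assumed 4x4 grid with 8 bins per channel:

```python
# Sketch of local-histogram indexing: split each image into N blocks,
# compute a colour histogram per block, and reduce image dissimilarity
# to the minimum of the N*N block-to-block Euclidean distances.
import cv2
import numpy as np

def block_histograms(img, grid=4, bins=8):
    h, w = img.shape[:2]
    hists = []
    for i in range(grid):
        for j in range(grid):
            block = img[i * h // grid:(i + 1) * h // grid,
                        j * w // grid:(j + 1) * w // grid]
            hist = cv2.calcHist([block], [0, 1, 2], None,
                                [bins] * 3, [0, 256] * 3)
            hists.append(cv2.normalize(hist, hist).flatten())
    return np.array(hists)              # shape (N, bins**3), N = grid**2

def dissimilarity(hists_a, hists_b):
    # Euclidean distance between every pair of local histograms -> N*N values;
    # the lowest value is then used to rank the images.
    d = np.linalg.norm(hists_a[:, None, :] - hists_b[None, :, :], axis=2)
    return d.min()
```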

Keywords: CBIR, color global histogram, color local histogram, weak segmentation, Euclidean distance

Procedia PDF Downloads 346
3009 Experimental Modeling of Spray and Water Sheet Formation Due to Wave Interactions with Vertical and Slant Bow-Shaped Model

Authors: Armin Bodaghkhani, Bruce Colbourne, Yuri S. Muzychka

Abstract:

The process of spray-cloud formation and the flow kinematics produced by breaking-wave impact on vertical and slanted lab-scale bow-shaped models were experimentally investigated. Bubble Image Velocimetry (BIV) and Image Processing (IP) techniques were applied to study the various types of wave-model impacts. Different waves were generated in a tow tank to investigate the effects of wave characteristics, such as wave phase velocity and wave steepness, on droplet velocities and on the process of spray-cloud formation. The phase ensemble-averaged vertical velocity and turbulent intensity were computed. A high-speed camera and diffused LED backlights were utilized to capture images for further post-processing. Various pressure sensors and capacitive wave probes were used to measure the wave impact pressure and the free-surface profile at different locations on the model and in the wave tank, respectively. Droplet sizes and velocities were measured using the BIV and IP techniques, tracing bubbles and droplets by correlating the texture in these images. The impact pressure and droplet size distributions were compared to several previous experimental models, and satisfactory agreement was achieved. The distribution of droplets in front of both models is presented. Due to the highly transient nature of spray formation, the drag coefficient for several stages of this transient displacement, for various droplet size ranges and different Reynolds numbers, was calculated based on the ensemble-average method. From the experimental results, the slanted model produces less spray than the vertical model, and the droplets generated from the wave impact with the slanted model have lower velocities compared with those from the vertical model.

Keywords: spray characteristics, droplet size and velocity, wave-body interactions, bubble image velocimetry, image processing

Procedia PDF Downloads 286
3008 Data Clustering in Wireless Sensor Network Implemented on Self-Organizing Feature Map (SOFM) Neural Network

Authors: Krishan Kumar, Mohit Mittal, Pramod Kumar

Abstract:

The wireless sensor network is one of the most promising communication networks for monitoring remote environmental areas. In this network, all the sensor nodes communicate with each other via radio signals. The sensor nodes have the capability of sensing, data storage and processing. Information is collected at a particular node through its neighboring nodes, and data collection and processing are done by data aggregation techniques. For data aggregation in the sensor network, a clustering technique is implemented by means of a self-organizing feature map (SOFM) neural network. Some of the sensor nodes are selected as cluster head nodes. Information is aggregated at the cluster head nodes from the non-cluster-head nodes and then transferred to the base station (or sink node). The aim of this paper is to manage the huge amount of data with the help of the SOM neural network. Clustered data is selected for transfer to the base station instead of the whole of the information aggregated at the cluster head nodes. This reduces the battery consumption involved in managing the huge amount of data, and the network lifetime is enhanced to a great extent.
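
A minimal sketch of SOM-based clustering of sensor readings is shown below, assuming the third-party minisom package; the synthetic data, the 2x2 map size and the training parameters are illustrative assumptions, not the paper's configuration:

```python
# Sketch of SOM-based clustering of sensor readings (assumes `minisom`).
import numpy as np
from minisom import MiniSom

readings = np.random.rand(100, 3)          # 100 nodes x (temp, humidity, light)

som = MiniSom(2, 2, input_len=3, sigma=0.5, learning_rate=0.5)
som.random_weights_init(readings)
som.train_random(readings, num_iteration=500)

# Each node is assigned to the cluster of its best-matching SOM unit;
# one representative per cluster could then act as the cluster head.
clusters = [som.winner(r) for r in readings]
```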

Keywords: artificial neural network, data clustering, self organization feature map, wireless sensor network

Procedia PDF Downloads 492
3007 Machine Learning Approach for Mutation Testing

Authors: Michael Stewart

Abstract:

Mutation testing is a type of software testing proposed in the 1970s where program statements are deliberately changed to introduce simple errors, so that test cases can be validated by checking whether they detect the errors. Test cases are executed against the mutant code to determine whether any of them fails, detects the error, and thereby helps ensure the program is correct. One major issue with this type of testing is that generating and testing all possible mutations of complex programs is computationally intensive. This paper used reinforcement learning and parallel processing within the context of mutation testing to select mutation operators and test cases in a way that reduced the computational cost of testing and improved test suite effectiveness. Experiments were conducted using sample programs to determine how well the reinforcement-learning-based algorithm performed with one live mutation, multiple live mutations and no live mutations. The experiments, measured by mutation score, were used to update the algorithm and improve the accuracy of its predictions. The performance was then evaluated on multi-processor computers. With reinforcement learning, the number of mutation operators utilized was reduced by 50-100%.
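
The paper's algorithm is not public here, but the underlying idea, learning which mutation operators tend to produce mutants that the test suite kills and favouring them, can be sketched as a toy epsilon-greedy bandit. Operator names and the simulated kill rate below are illustrative assumptions:

```python
# Toy epsilon-greedy sketch of operator selection (not the paper's method).
import random

operators = ["AOR", "ROR", "COR", "SDL"]     # illustrative operator names
value = {op: 0.0 for op in operators}        # running estimate of kill rate
count = {op: 0 for op in operators}

def choose(eps=0.1):
    if random.random() < eps:                # explore
        return random.choice(operators)
    return max(operators, key=value.get)     # exploit best-so-far

def update(op, killed):
    count[op] += 1
    # incremental mean of the reward (1 = mutant killed, 0 = survived)
    value[op] += (killed - value[op]) / count[op]

for _ in range(1000):
    op = choose()
    killed = random.random() < 0.7           # stand-in for running the tests
    update(op, int(killed))
```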

Keywords: automated-testing, machine learning, mutation testing, parallel processing, reinforcement learning, software engineering, software testing

Procedia PDF Downloads 175
3006 Query in Grammatical Forms and Corpus Error Analysis

Authors: Katerina Florou

Abstract:

Two decades after the term "learner corpora" was coined for collections of texts created by foreign or second language learners across various language contexts, and some years after the suggestion to incorporate "focusing on form" within a Task-Based Learning framework, this study aims to explore how learner corpora, whether annotated with errors or not, can facilitate a focus on form in an educational setting. We argue that analyzing linguistic form serves the purpose of enabling students to delve into the language and gain an understanding of different facets of the foreign language. The same objective applies when analyzing learner corpora marked with errors or in their raw state, but in this scenario the emphasis lies on identifying incorrect forms. Teachers should aim to address errors or gaps in the students' second language knowledge while the students engage in a task. Building on this recommendation, we compared the written output of two student groups: the first group (G1) carried out the focusing-on-form phase by studying a specific aspect of the Italian language, namely the past participle, through examples from native speakers and grammar rules; the second group (G2) focused on form by scrutinizing their own errors and comparing them with analogous examples from a native speaker corpus. In order to test our hypothesis, we created four learner corpora. The first two were generated during the task phase, one for each group of students, while the remaining two were produced as a follow-up activity at the end of the lesson. The results of the first comparison indicated that students' exposure to their own errors can enhance their grasp of a grammatical element. The study is in its second stage, and more results are to be announced.

Keywords: corpus interlanguage analysis, task-based learning, Italian as a foreign language, learner corpora

Procedia PDF Downloads 29
3005 Image Fusion Based Eye Tumor Detection

Authors: Ahmed Ashit

Abstract:

Image fusion is a significant and efficient image processing method used for detecting different types of tumors. It has been used as an effective combination technique for obtaining high-quality images that combine the anatomy and physiology of an organ, and it is a key element of large biomedical machines for diagnosing cancer, such as PET-CT scanners. This thesis aims to develop an image analysis system for the detection of eye tumors. Different image processing methods are used to extract the tumor and then mark it on the original image. The images are first smoothed using median filtering. The background of the image is subtracted and then added back to the original, which results in a brighter area of interest, i.e., the tumor area. The images are adjusted in order to increase the intensity of their pixels, which leads to clearer and brighter images. Once the images are enhanced, the edges of the images are detected using Canny operators, resulting in a segmented image comprising only the pupil and the tumor for the abnormal images, and the pupil only for the normal images that have no tumor. The normal and abnormal images were collected from two sources: "Miles Research" and "Eye Cancer". The computerized experimental results show that the developed image-fusion-based eye tumor detection system is capable of detecting the eye tumor and segmenting it so that it can be superimposed on the original image.
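
The described chain of steps maps directly onto standard OpenCV calls. A minimal sketch follows; the file name, the Gaussian-blur background estimate, and all numeric parameters are assumptions for illustration, not the thesis's exact settings:

```python
# Sketch of the described pipeline: median smoothing, background
# subtraction added back to brighten the region of interest, intensity
# adjustment, then Canny edge detection.
import cv2

img = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)    # hypothetical input

smoothed = cv2.medianBlur(img, 5)
background = cv2.GaussianBlur(smoothed, (51, 51), 0)  # coarse background estimate
foreground = cv2.subtract(smoothed, background)       # background subtraction
brightened = cv2.add(smoothed, foreground)            # brighter area of interest
adjusted = cv2.convertScaleAbs(brightened, alpha=1.2, beta=10)  # intensity boost
edges = cv2.Canny(adjusted, 50, 150)                  # pupil/tumor boundaries
```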

Keywords: image fusion, eye tumor, canny operators, superimposed

Procedia PDF Downloads 339
3004 Identification of EEG Attention Level Using Empirical Mode Decomposition for BCI Applications

Authors: Chia-Ju Peng, Shih-Jui Chen

Abstract:

This paper proposes a method to discriminate between electroencephalogram (EEG) signals from different concentration states using empirical mode decomposition (EMD). A brain-computer interface (BCI), also called a brain-machine interface, is a direct communication pathway between the brain and an external device that bypasses the usual pathways such as the peripheral nervous system and skeletal muscles. Attention level is a common index used as a control signal for BCI systems. EEG signals acquired from people paying attention or in relaxation, respectively, are decomposed into a set of intrinsic mode functions (IMF) by EMD. Fast Fourier transform (FFT) analysis is then applied to each IMF to obtain its frequency spectrum. By observing the power spectrums of the IMFs, the proposed method identifies EEG attention level between different concentration states better than the original EEG signals do. The band power of IMF3 is the most discriminative, especially in the β band, which corresponds to being fully awake and generally alert. The signal processing method and the results of this experiment pave a new way for BCI robotic systems using an attention-level control strategy. The integrated signal processing method reveals appropriate information for discriminating between attention and relaxation, contributing to enhanced BCI performance.
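
A minimal sketch of the analysis chain, EMD followed by per-IMF FFT band power, is shown below. It assumes the third-party PyEMD package, and the synthetic signal, sampling rate and β band limits (13-30 Hz) are illustrative assumptions:

```python
# Sketch of EMD + per-IMF beta-band power (assumes the PyEMD package).
import numpy as np
from PyEMD import EMD

fs = 256                                     # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 20 * t) + 0.5 * np.random.randn(t.size)  # synthetic

imfs = EMD()(eeg)                            # intrinsic mode functions

def beta_power(x):
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    band = (freqs >= 13) & (freqs <= 30)
    return spec[band].sum() / spec.sum()

for k, imf in enumerate(imfs, start=1):
    print(f"IMF{k}: relative beta power = {beta_power(imf):.3f}")
```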

Keywords: biomedical engineering, brain computer interface, electroencephalography, rehabilitation

Procedia PDF Downloads 377
3003 Process Optimization and Microbial Quality of Provitamin A-Biofortified Amahewu, a Non-Alcoholic Maize Based Beverage

Authors: Temitope D. Awobusuyi, Eric O. Amonsou, Muthulisi Siwela, Oluwatosin A. Ijabadeniyi

Abstract:

Provitamin A-biofortified maize has been developed to alleviate vitamin A deficiency, a major public health problem in developing countries. Amahewu, a non-alcoholic fermented maize-based beverage, is produced using white maize, which is deficient in vitamin A. In this study, suitable processing conditions for the production of amahewu using provitamin A-biofortified maize, and the microbial quality of the processed products, were evaluated. Provitamin A-biofortified amahewu was produced with reference to the traditional processing method. Processing variables were inoculum type (malted provitamin A maize, wheat bran, and a Lactobacillus mixed starter culture with either malted provitamin A maize or wheat bran) and concentration (0.5%, 1% and 2%). A total of four provitamin A-biofortified amahewu products were, after fermentation, subjected to different storage conditions: 4°C, 25°C and 37°C. pH and titratable acidity (TTA) were monitored throughout the storage period. Samples of provitamin A-biofortified amahewu were plated and observed every day for 5 days to assess the presence of aerobic and anaerobic spore formers, E. coli, Lactobacillus and mould. The addition of starter culture substantially reduced the fermentation time (6 hours, pH 3.3) compared to fermentation without added starter culture (24 hours, pH 3.5). It was observed that Lactobacillus was present from day 0 at all the storage temperatures. The presence of aerobic spore formers and mould was observed on day 3; E. coli and anaerobic spore formers were not present throughout the storage period. Microbial growth was minimal at 4°C, while 25°C had higher counts and 37°C had the highest colony counts. Throughout the storage period, the pH of provitamin A-biofortified amahewu was stable. Provitamin A-biofortified amahewu stored under refrigerated conditions (4°C) had better storability than at 25°C or 37°C. The production and microbial quality of provitamin A-biofortified amahewu might be important in combating vitamin A deficiency.

Keywords: biofortification, fermentation, maize, vitamin A deficiency

Procedia PDF Downloads 415
3002 Predicting Polyethylene Processing Properties Based on Reaction Conditions via a Coupled Kinetic, Stochastic and Rheological Modelling Approach

Authors: Kristina Pflug, Markus Busch

Abstract:

Being able to predict polymer properties and processing behavior based on the applied reaction conditions is one of the key challenges in modern polymer reaction engineering. Especially for cost-intensive processes with high safety requirements, such as the high-pressure polymerization of low-density polyethylene (LDPE), the need for simulation-based process optimization and product design is high. A multi-scale modelling approach was set up and validated via a series of high-pressure mini-plant autoclave reactor experiments. The approach starts with the numerical modelling of the complex reaction network of the LDPE polymerization, taking into consideration the actual reaction conditions. While this gives average product properties, the complex polymeric microstructure, including random short- and long-chain branching, is calculated via a hybrid Monte Carlo approach. Finally, the processing behavior of LDPE, i.e., its melt flow behavior, is determined as a function of the previously determined polymeric microstructure using the branch-on-branch algorithm for randomly branched polymer systems. All three steps of the multi-scale modelling approach can be independently validated against analytical data. A triple-detector GPC containing an IR, a viscosimetry and a multi-angle light scattering detector is applied. It serves to determine molecular weight distributions as well as chain-length-dependent short- and long-chain branching frequencies. 13C-NMR measurements give average branching frequencies, and rheological measurements in shear and extension serve to characterize the polymeric flow behavior. The agreement between experimental and modelled results was found to be extraordinary, especially considering that the applied multi-scale modelling approach does not involve fitting parameters to the data. This validates the suggested approach and at the same time demonstrates its universality. In a next step, the modelling approach can be applied to other reactor types, such as tubular reactors, or to industrial scale. Moreover, sensitivity analysis for systematically varied process conditions is easily feasible. The developed multi-scale modelling approach finally offers the opportunity to predict and design LDPE processing behavior simply on the basis of process conditions such as feed streams and inlet temperatures and pressures.
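
The stochastic microstructure step can be illustrated with a heavily simplified toy, not the authors' hybrid model: chains grown monomer by monomer with a fixed propagation probability (yielding a most-probable length distribution) and a rare per-monomer long-chain-branching event. Both probabilities below are arbitrary illustrative values:

```python
# Toy Monte Carlo sketch of a chain-growth microstructure calculation.
import random

def sample_chain(p_propagation=0.999, p_lcb=1e-4):
    length, branches = 0, 0
    while random.random() < p_propagation:   # grow until termination
        length += 1
        if random.random() < p_lcb:          # rare long-chain branching event
            branches += 1
    return length, branches

chains = [sample_chain() for _ in range(10000)]
avg_len = sum(l for l, _ in chains) / len(chains)
avg_lcb = sum(b for _, b in chains) / len(chains)
print(f"number-average length ~ {avg_len:.0f}, LCB per chain ~ {avg_lcb:.3f}")
```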

Keywords: low-density polyethylene, multi-scale modelling, polymer properties, reaction engineering, rheology

Procedia PDF Downloads 111
3001 The Diagnostic Utility and Sensitivity of the Xpert® MTB/RIF Assay in Diagnosing Mycobacterium tuberculosis in Bone Marrow Aspirate Specimens

Authors: Nadhiya N. Subramony, Jenifer Vaughan, Lesley E. Scott

Abstract:

In South Africa, the World Health Organisation estimated 454,000 new cases of Mycobacterium tuberculosis (M.tb) infection (MTB) in 2015. Disseminated tuberculosis arises from the haematogenous spread and seeding of the bacilli in extrapulmonary sites. The gold standard for the detection of MTB in bone marrow is TB culture, which has an average turnaround time of 6 weeks. Histological examination of trephine biopsies to diagnose MTB also involves a time delay, owing mainly to the 5-7 day processing period prior to microscopic examination. Adding to the diagnostic delay is the non-specific nature of granulomatous inflammation, which is the hallmark of MTB involvement of the bone marrow. A Ziehl-Neelsen stain (which highlights acid-fast bacilli) is therefore mandatory to confirm the diagnosis but can take up to 3 days for processing and evaluation. Owing to this delay in diagnosis, many patients are lost to follow-up or remain untreated whilst results are awaited, thus encouraging the spread of undiagnosed TB. The Xpert® MTB/RIF (Cepheid, Sunnyvale, CA) is the molecular test used in the South African national TB program as the initial diagnostic test for pulmonary TB. This study investigates the optimisation and performance of the Xpert® MTB/RIF on bone marrow aspirate (BMA) specimens, a first since the introduction of the assay in the diagnosis of extrapulmonary TB. BMA received for immunophenotypic analysis, as part of the investigation into disseminated MTB or in the evaluation of cytopenias in immunocompromised patients, were used. Processing of BMA on the Xpert® MTB/RIF was optimised to ensure that bone marrow in EDTA and heparin did not inhibit the PCR reaction. Inactivated M.tb was spiked into clinical bone marrow specimens and distilled water (as a control). A volume of 500 µl and an incubation time of 15 minutes with sample reagent were investigated as the processing protocol. A total of 135 BMA specimens had sufficient residual volume for Xpert® MTB/RIF testing; however, 22 specimens (16.3%) were not included in the final statistical analysis, as an adequate trephine biopsy and/or TB culture was not available. Xpert® MTB/RIF testing was not affected by BMA material in the presence of heparin or EDTA, but the overall detection of MTB in BMA was low compared to histology and culture. The sensitivity of the Xpert® MTB/RIF compared to both histology and culture was 8.7% (95% confidence interval (CI): 1.07-28.04%), and the sensitivity compared to histology only was 11.1% (95% CI: 1.38-34.7%). The specificity of the Xpert® MTB/RIF was 98.9% (95% CI: 93.9-99.7%). Although the Xpert® MTB/RIF generates a faster result than histology and TB culture and is less expensive than culture and drug susceptibility testing, the low sensitivity of the Xpert® MTB/RIF precludes its use for the diagnosis of MTB in bone marrow aspirate specimens and warrants alternative/additional testing to optimise the assay.
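
For readers checking the diagnostic figures, sensitivity and specificity follow directly from a 2x2 table. The counts below are one consistent illustrative reconstruction (2/23 gives approximately 8.7%, 89/90 approximately 98.9%), not necessarily the study's exact table:

```python
# How the reported diagnostic figures follow from a 2x2 table
# (counts are illustrative, not taken from the study).
def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

print(f"sensitivity = {sensitivity(tp=2, fn=21):.1%}")   # 8.7%
print(f"specificity = {specificity(tn=89, fp=1):.1%}")   # 98.9%
```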

Keywords: bone marrow aspirate, extrapulmonary TB, low sensitivity, Xpert® MTB/RIF

Procedia PDF Downloads 152
3000 Production and Evaluation of Mango Pulp by Using Ohmic Heating Process

Authors: Sobhy M. Mohsen, Mohamed M. El-Nikeety, Tarek G. Mohamed, Michael Murkovic

Abstract:

The present work aimed to study the use of ohmic heating in the processing of mango pulp compared to the conventional method. Mango pulp was processed using ohmic heating under the suitable conditions identified in the study, and the physical, chemical and microbiological properties of the mango pulp were examined. The results showed that processing mango pulp by either ohmic heating or the conventional method caused a decrease in the contents of TSS, total carbohydrates, total acidity and total sugars (reducing and non-reducing sugars), with ohmic heating giving an increase in phenol content, ascorbic acid and carotenoids compared to the conventional process. The increase in the electrical conductivity of mango pulp during ohmic heating was due to the addition of some electrolytes (salts) to increase the ion content and enhance the process. The results also indicate that mango pulp processed by ohmic heating contained more phenols, carbohydrates and vitamin C and less HMF compared to that produced by the conventional method. Total pectin and its fractions were slightly reduced by ohmic heating compared to the conventional method. Enzyme assays showed a reduction in polyphenol oxidase (PPO) and polygalacturonase (PG) activity in mango pulp processed by the conventional method, whereas ohmic heating completely inhibited PPO and PG activities.

Keywords: ohmic heating, mango pulp, phenolics, carotenoids

Procedia PDF Downloads 439
2999 Use of the Gas Chromatography Method for Hydrocarbons' Quality Evaluation in the Offshore Fields of the Baltic Sea

Authors: Pavel Shcherban, Vlad Golovanov

Abstract:

Currently, there is active geological exploration and development of the shelf subsoil of the Kaliningrad region. To carry out a comprehensive and accurate assessment of the volumes and recoverability of hydrocarbons in discovered deposits, it is necessary not only to establish a number of geological and lithological characteristics of the structures under study, but also to determine the oil quality, i.e., its viscosity, density and fractional composition, as accurately as possible. For the work considered here, gas chromatography is one of the most productive methods, allowing the rapid generation of a significant amount of initial data. The article considers aspects of the application of the gas chromatography method for determining the chemical characteristics of the hydrocarbons of the Kaliningrad shelf fields, as well as a correlation-regression analysis of these parameters in comparison with the previously obtained chemical characteristics of hydrocarbon deposits located onshore in the region. In the course of the research, a number of methods of mathematical statistics and computer processing of large data sets were applied, which makes it possible to evaluate the similarity of the deposits, to refine the volume of reserves and to make a number of assumptions about the genesis of the hydrocarbons under analysis.
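
The correlation-regression step can be sketched in a few lines with SciPy; the density arrays below are synthetic placeholders standing in for a GC-derived parameter measured offshore and its onshore reference values:

```python
# Sketch of the correlation-regression analysis (synthetic data).
import numpy as np
from scipy.stats import linregress

onshore_density = np.array([0.826, 0.831, 0.838, 0.842, 0.850])   # g/cm3
offshore_density = np.array([0.824, 0.833, 0.836, 0.845, 0.848])  # g/cm3

fit = linregress(onshore_density, offshore_density)
print(f"slope={fit.slope:.3f}, r^2={fit.rvalue**2:.3f}, p={fit.pvalue:.4f}")
```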

Keywords: computer processing of large databases, correlation-regression analysis, hydrocarbon deposits, method of gas chromatography

Procedia PDF Downloads 139
2998 Virtual 3D Environments for Image-Based Navigation Algorithms

Authors: V. B. Bastos, M. P. Lima, P. R. G. Kurka

Abstract:

This paper addresses the creation of virtual 3D environments for the study and development of mobile robot image-based navigation algorithms and techniques, which need to operate robustly and efficiently. These algorithms can be tested physically, by conducting experiments on a prototype, or by numerical simulation. Current simulation platforms for robotic applications do not have flexible and up-to-date models for image rendering and are unable to reproduce complex light effects and materials. Thus, it is necessary to create a test platform that integrates sophisticated simulated replications of real environments for navigation with data and image processing. This work proposes the development of a high-level platform for building 3D model environments and testing image-based navigation algorithms for mobile robots. Techniques were used for applying texture and lighting effects so that the rendered images accurately represent their real-world counterparts. The application integrates image processing scripts, trajectory control, dynamic modeling and simulation techniques for physics representation and picture rendering with the open-source 3D creation suite Blender.

Keywords: simulation, visual navigation, mobile robot, data visualization

Procedia PDF Downloads 233
2997 Omni-Modeler: Dynamic Learning for Pedestrian Redetection

Authors: Michael Karnes, Alper Yilmaz

Abstract:

This paper presents the application of the Omni-Modeler to pedestrian redetection. The pedestrian redetection task creates several challenges for deep neural networks (DNN): the variety of pedestrian appearance with camera position, the variety of environmental conditions, and the specificity required to recognize one pedestrian from another. DNNs require significant training sets and are not easily adapted to changes in class appearance or changes in the set of classes held in their knowledge domain. Pedestrian redetection requires an algorithm that can actively manage its knowledge domain as individuals move in and out of the scene, as well as learn individual appearances from a few frames of a video. The Omni-Modeler is a dynamically learning few-shot visual recognition algorithm developed for tasks with limited training data availability. The Omni-Modeler adapts the knowledge domain of pre-trained deep neural networks to novel concepts with a calculated localized language encoder. The Omni-Modeler knowledge domain is generated by creating a dynamic dictionary of concept definitions, which is directly updatable as new information becomes available. Query images are identified through nearest-neighbor comparison to the learned object definitions. The study presented in this paper evaluates its performance in re-identifying individuals as they move through a scene in both single-camera and multi-camera tracking applications. The results demonstrate that the Omni-Modeler shows potential for cross-camera pedestrian redetection and is highly effective for single-camera redetection, with 93% accuracy across 30 individuals using 64 example images for each individual.
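
The dynamic-dictionary idea, per-identity prototypes that are updatable online with nearest-neighbour matching of query embeddings, can be sketched generically. This is not the Omni-Modeler itself; the embed function below is a hypothetical stand-in for any pre-trained feature extractor:

```python
# Minimal sketch of a dynamic dictionary with nearest-neighbour matching.
import numpy as np

def embed(image):
    # hypothetical placeholder for a deep feature extractor
    return np.asarray(image, dtype=float).ravel()

class DynamicDictionary:
    def __init__(self):
        self.prototypes = {}                  # identity -> running mean feature
        self.counts = {}

    def update(self, identity, image):        # learn from a few example frames
        f = embed(image)
        n = self.counts.get(identity, 0)
        old = self.prototypes.get(identity, np.zeros_like(f))
        self.prototypes[identity] = (old * n + f) / (n + 1)
        self.counts[identity] = n + 1

    def query(self, image):                   # nearest-neighbour redetection
        f = embed(image)
        return min(self.prototypes,
                   key=lambda k: np.linalg.norm(self.prototypes[k] - f))
```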

Keywords: dynamic learning, few-shot learning, pedestrian redetection, visual recognition

Procedia PDF Downloads 56
2996 The Essence of Culture and Religion in Creating Disaster Resilient Societies through Corporate Social Responsibility

Authors: Repaul Kanji, Rajat Agrawal

Abstract:

In this era, where issues like climate change and disasters are topics of discussion at national and international forums, humanity very often questions the causative role of corporates in such events. It is beyond any doubt that rapid industrialisation and development have taken a toll in the form of climate change and, in some cases, even disasters. The demand that corporates fulfil their responsibilities, in the form of rescue and relief in times of disaster, rehabilitation, and even mitigation and preparedness to adapt to the oncoming changes, is therefore an obvious one. But how can the responsibilities of corporates be channelised to ensure all this, i.e., to develop a resilient society? More than that, which factors, when emphasised, can lead to the holistic development of society? To answer this query, an extensive literature review was done to identify several enablers, such as the legislation of a nation, the role of brand and reputation, the ease of doing Corporate Social Responsibility, the mission and vision of an organisation, and religion and culture, as tools for building disaster resilience. A questionnaire survey and interviews with experts and academicians, followed by interpretive structural modelling (ISM), were used to construct a multi-hierarchy model depicting the contextual relationships among the identified enablers. The study revealed that culture and religion are the most powerful drivers, affecting the other enablers either directly or indirectly. Taking cognisance of the fact that the idea of a separation between religion and the workplace (business) resides subconsciously within society, the study interprets the outcome of the ISM through the lens of past research (The Integrating Box) and explores how it can be leveraged to build a resilient society.
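
The core ISM computation behind such a multi-hierarchy model is the transitive closure of an initial reachability matrix, from which driving and dependence powers are read off. A minimal sketch follows; the 3x3 adjacency matrix is an illustrative placeholder, not the study's enabler data:

```python
# Sketch of the core ISM step: transitive closure of the reachability
# matrix, then driving power (row sums) and dependence (column sums).
import numpy as np

# adjacency[i, j] = 1 if enabler i influences enabler j (illustrative)
adjacency = np.array([[1, 1, 0],
                      [0, 1, 1],
                      [0, 0, 1]])

reach = adjacency.copy()
while True:                                  # fixpoint: keep adding i->k->j paths
    nxt = ((reach + reach @ reach) > 0).astype(int)
    if np.array_equal(nxt, reach):
        break
    reach = nxt

driving_power = reach.sum(axis=1)            # how many enablers each one drives
dependence = reach.sum(axis=0)               # how many enablers drive each one
print(driving_power, dependence)
```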

Keywords: corporate social responsibility, interpretive structural modelling, disaster resilience and risk reduction, the integration box (TIB)

Procedia PDF Downloads 187
2995 Effect of the Deposition Time of Hydrogenated Nanocrystalline Si Grown on Porous Alumina Film on Glass Substrate by Plasma Processing Chemical Vapor Deposition

Authors: F. Laatar, S. Ktifa, H. Ezzaouia

Abstract:

The Plasma Enhanced Chemical Vapor Deposition (PECVD) method is used to deposit hydrogenated nanocrystalline silicon films (nc-Si:H) on Porous Anodic Alumina Films (PAF) on glass substrates for different deposition durations. The influence of the deposition time (DT) on the physical properties of nc-Si:H grown on PAF was investigated through an extensive correlation between the micro-structural and optical properties of these films. In this paper, we present an extensive study of the morphological, structural and optical properties of these films by Atomic Force Microscopy (AFM) and X-Ray Diffraction (XRD) techniques and a UV-Vis-NIR spectrometer. It was found that changes in DT can modify the film thickness and the surface roughness and eventually improve the optical properties of the composite. The optical properties (optical thicknesses, refractive indices (n), absorption coefficients (α), extinction coefficients (k), and the values of the optical transitions EG) of these samples were obtained from the transmittance (T) and reflectance (R) spectra recorded by the UV-Vis-NIR spectrometer. We used the Cauchy and Wemple-DiDomenico models for the analysis of the dispersion of the refractive index and the determination of the optical properties of these films.
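
For reference, the Cauchy dispersion model is n(λ) = A + B/λ² + C/λ⁴, and the Wemple-DiDomenico single-oscillator relation is n²(E) - 1 = EdE0/(E0² - E²). A minimal SciPy sketch of fitting the Cauchy model to refractive-index data follows; the wavelength and n values are synthetic placeholders, not the paper's measurements:

```python
# Sketch of fitting the Cauchy dispersion model to n(lambda) data.
import numpy as np
from scipy.optimize import curve_fit

def cauchy(lam_um, A, B, C):
    return A + B / lam_um**2 + C / lam_um**4

lam = np.array([0.4, 0.5, 0.6, 0.8, 1.0])          # wavelength in micrometres
n_meas = np.array([3.95, 3.78, 3.70, 3.62, 3.58])  # synthetic n values

params, _ = curve_fit(cauchy, lam, n_meas, p0=(3.5, 0.05, 0.001))
print("A, B, C =", params)
```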

Keywords: hydrogenated nanocrystalline silicon, plasma processing chemical vapor deposition, X-ray diffraction, optical properties

Procedia PDF Downloads 361
2994 Nano-Texturing of Single Crystalline Silicon via Cu-Catalyzed Chemical Etching

Authors: A. A. Abaker Omer, H. B. Mohamed Balh, W. Liu, A. Abas, J. Yu, S. Li, W. Ma, W. El Kolaly, Y. Y. Ahmed Abuker

Abstract:

We have discovered an important technical solution that could open new approaches in wet silicon etching, especially in the production of photovoltaic cells. Owing to its superior light-trapping and structural properties, the inverted pyramid structure outperforms conventional pyramid textures and black silicon, yet such structures have previously been accomplished only with more advanced techniques such as lithography or laser processing. Importantly, our data demonstrate the feasibility of an inverted pyramidal structure on silicon via one-step Cu-catalyzed chemical etching (CCCE) in Cu(NO3)2/HF/H2O2/H2O solutions. The effects of etching time and reaction temperature on surface geometry and light trapping were systematically investigated. The results show that the inverted pyramid structure has an ultra-low reflectivity of ~4.2% in the 300-1000 nm wavelength range, and that the introduction of Cu particles can significantly accelerate the dissolution of the silicon wafer. The etching and inverted-pyramid formation mechanisms are discussed. The inverted pyramid structure, with its outstanding anti-reflectivity, has useful applications in the manufacture of industry-compatible solar cells and can have a significant impact on the industry.

Keywords: Cu-catalyzed chemical etching, inverted pyramid nanostructured, reflection, solar cells

Procedia PDF Downloads 139
2993 Sarcasm Recognition System Using Hybrid Tone-Word Spotting Audio Mining Technique

Authors: Sandhya Baskaran, Hari Kumar Nagabushanam

Abstract:

Sarcasm recognition is an area of natural language processing that has been actively probed in recent times. Even with the advancements in NLP, typical interpretations of words and sentences in context fail to capture the exact sentiment or emotion of a user. For example, if something bad happens, the statement 'That's just what I need, great! Terrific!' is expressed in a sarcastic tone that could be misread as a positive sign by any text-based analyzer. In this paper, we present a unique real-time 'word with its tone' spotting technique that provides sentiment analysis for the tone or pitch of a voice in combination with the words being expressed. This hybrid approach increases the probability of identifying a special sentiment like sarcasm, coming much closer to the real world than mining text or speech individually. The system uses a tone analyzer, such as YIN-FFT, which extracts pitch segment-wise and is used in parallel with a speech recognition system. The clustered data is classified for sentiment, and a sarcasm score is determined for each segment. Our simulations demonstrate an improvement in F-measure of around 12% compared to existing detection techniques, with increased precision and recall.
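
The pitch-extraction side of such a system can be sketched with the YIN estimator available in librosa (version 0.8 or later); the audio file name and pitch range below are assumptions, and the word-alignment step would come from a separate speech recognizer:

```python
# Sketch of segment-wise pitch extraction with YIN (assumes librosa >= 0.8).
import librosa

y, sr = librosa.load("utterance.wav", sr=16000)   # hypothetical audio file
f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)     # per-frame pitch in Hz
times = librosa.times_like(f0, sr=sr)             # frame timestamps

# A rising pitch on a positive word like "great" in a negative context is
# the kind of tone/word disagreement the hybrid detector looks for.
```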

Keywords: sarcasm recognition, tone-word spotting, natural language processing, pitch analyzer

Procedia PDF Downloads 275
2992 Building an Ontology for Researchers: An Application of Topic Maps and Social Information

Authors: Yu Hung Chiang, Hei Chia Wang

Abstract:

In academia, it is important for researchers to find a proper research domain. Many researchers refer to conference issues to find interesting or new topics. Furthermore, conference issues can help researchers realize current research trends in their field and learn about cutting-edge developments in their specialty. However, conference information published online is widely distributed and is not easy to summarize. Many researchers use the search engines of journals or conference issues to filter information in order to get what they want; however, such search engines have their limitations. Some issues should still be considered, e.g., researchers cannot find associated topics which may be useful information for them. Hence, Knowledge Management (KM) could be a way to resolve these issues. In KM, ontology is widely adopted, but most existing ontology construction methods do not consider the social information between target users. To be effective in academic KM, this study proposes a method of constructing research Topic Maps using the Open Directory Project (ODP) and Social Information Processing (SIP). By capturing social information from conference websites, i.e., co-authorship and collaborator information, research topics can be associated among related researchers. Finally, the experiments show that the Topic Maps successfully help researchers find the information they need more easily and quickly, as well as construct associations between research topics.

Keywords: knowledge management, topic map, social information processing, ontology extraction

Procedia PDF Downloads 274