Search results for: images processing
4060 Noninvasive Brain-Machine Interface to Control Both Mecha TE Robotic Hands Using Emotiv EEG Neuroheadset
Authors: Adrienne Kline, Jaydip Desai
Abstract:
Electroencephalography (EEG) is a noninvasive technique that registers signals originating from the firing of neurons in the brain. The Emotiv EEG Neuroheadset is a consumer product comprising 14 EEG channels and was used to record the reactions of the neurons within the brain to two forms of stimuli in 10 participants. These stimuli consisted of auditory and visual formats that provided directions of ‘right’ or ‘left.’ Participants were instructed to raise their right or left arm in accordance with the instruction given. A scenario in OpenViBE was generated to both stimulate the participants and record their data. In OpenViBE, the Graz Motor BCI Stimulator algorithm was configured to govern the duration and number of visual stimuli. EEGLAB, running under the cross-platform MATLAB®, was used to identify the electrodes most stimulated during the study. Data outputs from EEGLAB were analyzed using IBM SPSS Statistics® Version 20. This aided in determining the electrodes to use in the development of a brain-machine interface (BMI) using real-time EEG signals from the Emotiv EEG Neuroheadset. Signal processing and feature extraction were accomplished via the Simulink® signal processing toolbox. An Arduino™ Duemilanove microcontroller was used to link the Emotiv EEG Neuroheadset and the right and left Mecha TE™ Hands.
Keywords: brain-machine interface, EEGLAB, Emotiv EEG Neuroheadset, OpenViBE, Simulink
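The Simulink® pipeline itself is not shown in the abstract; as an illustration of the kind of band-power feature extraction such a BMI typically relies on, here is a minimal Python sketch (an assumption, not the authors' model). The 128 Hz sampling rate, the 8-12 Hz band, and the channel indices used for the left/right decision are all assumed values.

```python
import numpy as np
from scipy.signal import welch

FS = 128  # Hz; assumed sampling rate of the headset

def band_power(eeg, fs=FS, band=(8.0, 12.0)):
    """Mean power of each channel in a frequency band (e.g., the mu rhythm)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2, axis=-1)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[..., mask].mean(axis=-1)

# Toy example: 14 channels x 4 s of synthetic EEG
rng = np.random.default_rng(0)
eeg = rng.standard_normal((14, FS * 4))
mu_power = band_power(eeg)                    # one feature per channel
# Hypothetical channel indices standing in for the left/right motor electrodes
decision = "left" if mu_power[3] > mu_power[10] else "right"
print(mu_power.shape, decision)
```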
Procedia PDF Downloads 502
4059 New Formulation of FFS3 Layered Blown Films Containing Toughened Polypropylene and Plastomer with Superior Properties
Authors: S. Talebnezhad, S. Pourmahdian, D. Soudbar, M. Khosravani, J. Merasi
Abstract:
Adding toughened polypropylene and plastomer to an FFS 3-layered blown film formulation resulted in superior dart impact and MD tear resistance along with acceptable tensile properties in TD and MD. The optimum loading of toughened polypropylene and plastomer in each layer depends on the miscibility of polypropylene in the polyethylene medium, mechanical properties, welding characteristics at bag tops and bottoms, and the friction coefficient of the film surfaces. Film property tests and the efficiency of FFS machinery during processing at industrial scale showed that a loading of about 4% plastomer and 16% toughened polypropylene (reactor grade) in the middle layer, and 0-1% plastomer and 5-19% toughened polypropylene in the other layers, yields optimum characteristics in a formulation based on a 1-butene LLDPE grade with an MFR of 0.9 and an LDPE grade with an MFI of 0.3. Both the plastomer and the toughened polypropylene had an MFI below 1, and the loading of the TiO2 and processing aid masterbatches was 2%. The friction coefficient test results also showed that the anti-block masterbatch could be omitted from the formulation when toughened polypropylene is added, due to the partial miscibility of PP in PE, which makes the film surfaces somewhat bristly.
Keywords: FFS 3 layered blown film, toughened polypropylene, plastomer, dart impact, tear resistance
Procedia PDF Downloads 410
4058 Preparation and Characterization of Nickel-Tungsten Nanoparticles Using Microemulsion Mediated Synthesis
Authors: S. Pal, R. Singh, S. Sivakumar, D. Kunzru
Abstract:
AOT-stabilized reverse micelles of deionized water, dispersed in isooctane, have been used to synthesize bimetallic nickel-tungsten nanoparticles. The prepared nanoparticles were supported on γ-Al2O3, followed by calcination at 500 °C. Characterization of the nanoparticles was done by TEM, XRD, FTIR, XRF, TGA and BET. XRF results showed that this method gave good composition control, with a W/Ni weight ratio equal to 3.2. TEM images showed a particle size of 5-10 nm. Removal of the surfactant after calcination was confirmed by TGA and FTIR.
Keywords: nanoparticles, reverse micelles, nickel, tungsten
Procedia PDF Downloads 592
4057 Selecting Answers for Questions with Multiple Answer Choices in Arabic Question Answering Based on Textual Entailment Recognition
Authors: Anes Enakoa, Yawei Liang
Abstract:
Question answering (QA) is one of the most important and demanding tasks in the field of Natural Language Processing (NLP). In QA systems, the answer generation task produces a list of candidate answers to the user's question, of which only one answer is correct. Answer selection is one of the main components of QA, concerned with selecting the best answer choice from the candidate answers suggested by the system. However, the selection process can be very challenging, especially in Arabic, due to its particularities. To address this challenge, an approach is proposed to answer questions with multiple answer choices for Arabic QA systems based on Textual Entailment (TE) recognition. The developed approach employs a Support Vector Machine that considers lexical, semantic and syntactic features in order to recognize the entailment between the generated hypotheses (H) and the text (T). A set of experiments has been conducted for performance evaluation, and the overall performance of the proposed method reached an accuracy of 67.5% with a C@1 score of 80.46%. The obtained results are promising and demonstrate that the proposed method is effective for the TE recognition task.
Keywords: information retrieval, machine learning, natural language processing, question answering, textual entailment
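The exact feature set is not given beyond "lexical, semantic and syntactic"; the sketch below is a hypothetical simplification that trains a scikit-learn SVM on a few lexical-overlap features over toy English text-hypothesis pairs, only to illustrate the entailment-classification setup, not the Arabic system itself.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def lexical_features(text, hypothesis):
    """Simple lexical-overlap features between text T and hypothesis H."""
    t, h = set(text.lower().split()), set(hypothesis.lower().split())
    overlap = len(t & h)
    return [overlap / max(len(h), 1),      # coverage of H by T
            overlap / max(len(t | h), 1),  # Jaccard similarity
            abs(len(t) - len(h))]          # length difference

# Toy training pairs: (T, H, label) with 1 = entailment, 0 = no entailment
pairs = [
    ("the cat sat on the mat", "a cat is on the mat", 1),
    ("the cat sat on the mat", "the dog barked loudly", 0),
    ("rain fell all night long", "it rained during the night", 1),
    ("rain fell all night long", "the sun was shining", 0),
]
X = np.array([lexical_features(t, h) for t, h, _ in pairs])
y = np.array([label for _, _, label in pairs])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict([lexical_features("the cat sat on the mat", "a cat is sitting")]))
```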
Procedia PDF Downloads 145
4056 Hand Gesture Detection via EmguCV Canny Pruning
Authors: N. N. Mosola, S. J. Molete, L. S. Masoebe, M. Letsae
Abstract:
Hand gesture recognition is a technique used to locate, detect, and recognize a hand gesture. Detection and recognition are concepts of Artificial Intelligence (AI). AI concepts are applicable in Human-Computer Interaction (HCI), Expert Systems (ES), etc. Hand gesture recognition can be used in sign language interpretation. Sign language is a visual communication tool. This tool is used mostly by deaf communities and people with speech disorders. Communication barriers exist when people with speech disorders interact with others. This research aims to build a hand recognition system for Lesotho’s Sesotho and English language interpretation. The system will help to bridge the communication problems encountered by the mentioned communities. The system has various processing modules. The modules consist of a hand detection engine, an image processing engine, feature extraction, and sign recognition. Detection is a process of identifying an object. The proposed system uses Canny pruning with Haar cascade detection algorithms. Canny pruning implements Canny edge detection, an optimal image processing algorithm used to detect the edges of an object. The system employs a skin detection algorithm. The skin detection performs background subtraction and computes the convex hull and the centroid to assist in the detection process. Recognition is a process of gesture classification. Template matching classifies each hand gesture in real time. The system was tested using various experiments. The results obtained show that time, distance, and light are factors that affect the rate of detection and, ultimately, recognition. The detection rate is directly proportional to the distance of the hand from the camera. Different lighting conditions were considered; the higher the light intensity, the faster the detection. Based on the results obtained from this research, the applied methodologies are efficient and provide a plausible solution towards a lightweight, inexpensive system which can be used for sign language interpretation.
Keywords: canny pruning, hand recognition, machine learning, skin tracking
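The system described uses EmguCV (C#); the following OpenCV-Python sketch only illustrates the skin-segmentation, Canny edge, convex hull and centroid steps mentioned above. The HSV skin thresholds and morphology parameters are assumptions that would need tuning for lighting and skin tone.

```python
import cv2
import numpy as np

def detect_hand(frame):
    """Segment skin, find the largest contour, and return its hull, centroid and edge map."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Assumed HSV skin range; must be tuned for lighting and skin tone
    skin = cv2.inRange(hsv, (0, 30, 60), (25, 180, 255))
    skin = cv2.morphologyEx(skin, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    edges = cv2.Canny(skin, 100, 200)            # edge map of the hand silhouette
    contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)    # assume the largest skin blob is the hand
    hull = cv2.convexHull(hand)
    m = cv2.moments(hand)
    centroid = (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])) if m["m00"] else None
    return hull, centroid, edges

# Usage with a webcam frame (hypothetical capture device index):
# ok, frame = cv2.VideoCapture(0).read()
# result = detect_hand(frame)
```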
Procedia PDF Downloads 185
4055 Effect of Biostimulants to Control the Phelipanche ramosa L. Pomel in Processing Tomato Crop
Authors: G. Disciglio, G. Gatta, F. Lops, A. Libutti, A. Tarantino, E. Tarantino
Abstract:
The experimental trial was carried out in the open field in the Foggia district (Apulia Region, Southern Italy) during the spring-summer season 2014, in order to evaluate the effect of four biostimulant products (Radicon®, Viormon plus®, Lysodin® and Siapton® 10L), compared with a control (no biostimulant), on the infestation of a processing tomato crop (cv Dres) by the chlorophyll-lacking root parasite Phelipanche ramosa. Biostimulants consist of different categories of products (microbial inoculants, humic and fulvic acids, hydrolyzed proteins and amino acids, seaweed extracts) which play various roles in plant growth, including the improvement of crop resistance and of the quali-quantitative characteristics of yield. The experimental trial was arranged according to a complete randomized block design with five treatments, each one replicated three times. The processing tomato seedlings were transplanted on 5 May 2014. Throughout the crop cycle, P. ramosa infestation was assessed according to the number of emerged shoots (branched plants) counted in each plot at 66, 78 and 92 days after transplanting. The tomato fruits were harvested at the full stage of maturity on 8 August 2014. From each plot, the marketable yield was measured and the quali-quantitative yield parameters (mean weight, dry matter content, colour coordinate, colour index and soluble solids content of the fruits) were determined. The whole dataset was tested according to the basic assumptions for the analysis of variance (ANOVA), and the differences between the means were determined using Tukey’s test at the 5% probability level. The results of the study showed that none of the applied biostimulants provided complete control of Phelipanche, although some positive effects were obtained from their application. In this respect, Radicon® appeared to be the most effective in reducing the infestation of this root parasite in the tomato crop. This treatment also gave the highest tomato yield.
Keywords: biostimulant, control methods, Phelipanche ramosa, tomato crop
Procedia PDF Downloads 301
4054 Study of Electro-Chemical Properties of ZnO Nanowires for Various Application
Authors: Meera A. Albloushi, Adel B. Gougam
Abstract:
The development in the field of piezoelectrics has led to a renewed interest in ZnO nanowires (NWs) as a promising material in the nanogenerator device category. It can be used as a power source for self-powered electronic systems with higher density, higher efficiency, longer lifetime, as well as lower cost of fabrication. Highly aligned ZnO nanowires seem to exhibit a higher performance compared with non-aligned ones. The purpose of this study was to develop ZnO nanowires and to investigate their electrical and chemical properties for various applications. They were grown on silicon (100) and glass substrates. We have used a low-temperature and non-hazardous method: aqueous chemical growth (ACG). ZnO (non-doped) and AZO (aluminum-doped) seed layers were deposited using RF magnetron sputtering under an argon pressure of 3 mTorr and a deposition power of 180 W; the deposition times were selected to obtain thicknesses in the range of 30 to 125 nm. Some of the films were subsequently annealed. The substrates were immersed, tilted, in an equimolar solution composed of zinc nitrate and hexamine (HMTA) of 0.02 M and 0.05 M in the temperature range of 80 to 90 °C for 1.5 to 2 hours. The X-ray diffractometer shows strong peaks at 2θ = 34.2° for the ZnO films, which indicates that the films have a preferred c-axis wurtzite hexagonal (002) orientation. The surface morphology of the films was investigated by atomic force microscopy (AFM), which proved the uniformity of the films, since the roughness is within the 5 nm range. Scanning electron microscopes (SEM) (Quanta FEG 250, Quanta 3D FEG, Nova NanoSEM 650) were used to characterize both the ZnO films and the NWs. SEM images show a forest of ZnO NWs grown vertically, with lengths of up to 2000 nm and diameters of 20-300 nm. The SEM images prove that the role of the seed layer is to enhance the vertical alignment of the ZnO NWs at a solution pH of 5-6. The electrical and optical properties of the NWs were also examined using Electrical Force Microscopy (EFM). After growing the ZnO NWs, developing the nanogenerator is the second step of this study, in order to determine the energy conversion efficiency and the power output.
Keywords: ZnO nanowires (NWs), aqueous chemical growth (ACG), piezoelectric NWs, harvesting energy
Procedia PDF Downloads 322
4053 Modeling Atmospheric Correction for Global Navigation Satellite System Signal to Improve Urban Cadastre 3D Positional Accuracy Case of: TANA and ADIS IGS Stations
Authors: Asmamaw Yehun
Abstract:
“TANA” is the name of an International GNSS Service (IGS) Global Positioning System (GPS) station located at the Institute of Land Administration of Bahir Dar University. The station name is taken from Lake Tana, one of the largest lakes in Africa. The Institute of Land Administration (ILA) is part of Bahir Dar University, located in Bahir Dar, the capital of the Amhara National Regional State. The institute is the first of its kind in East Africa. The station was installed through the cooperation of ILA and the Swedish International Development Agency (SIDA), with fund support. A Continuously Operating Reference Station (CORS) is a network of stations that provide global navigation satellite system data to support three-dimensional positioning, meteorology, space weather, and geophysical applications throughout the globe. TANA has operated as a CORS since 2013; sites are independently owned and operated by governments, research and education facilities, and others. The data collected by the reference station are downloadable through the Internet for post-processing by interested parties who carry out GNSS measurements and want to achieve higher accuracy. We made a first observation at the TANA monitoring station on 29 May 2013. We used Leica 1200 receivers and AX1202GG antennas and made observations from 11:30 until 15:20, for about 3 h 50 min. Processing of the data was done in the automatic post-processing service CSRS-PPP by Natural Resources Canada (NRCan). Post-processing was done on 27 June 2013, so the precise ephemeris was used 30 days after observation. We found Latitude (ITRF08): 11 34 08.6573 (dms) / 0.008 (m), Longitude (ITRF08): 37 19 44.7811 (dms) / 0.018 (m) and Ellipsoidal Height (ITRF08): 1850.958 (m) / 0.037 (m). We compared this result with GAMIT/GLOBK-processed data, and it was very close and accurate. TANA has been the second IGS station for Ethiopia since 2015. It provides data for civilian users, researchers, and governmental and non-governmental users. The TANA station is equipped with a very advanced choke-ring antenna and a Leica GR25 receiver, and the site offers very good satellite accessibility. In order to test the hydrostatic and wet zenith delays for positional data quality, we used GAMIT/GLOBK and found that TANA is the most accurate IGS station in East Africa. Due to lower tropospheric zenith and ionospheric delays, the TANA and ADIS IGS stations have 3D positional accuracies of 2 and 1.9 meters, respectively.
Keywords: atmosphere, GNSS, neutral atmosphere, precipitable water vapour
Procedia PDF Downloads 70
4052 Influence of Silicon Carbide Particle Size and Thermo-Mechanical Processing on Dimensional Stability of Al 2124SiC Nanocomposite
Authors: Mohamed M. Emara, Heba Ashraf
Abstract:
This study investigates the effect of silicon carbide (SiC) particle size and thermo-mechanical processing on the dimensional stability of aluminum alloy 2124. Three SiC weight fractions were investigated, 2.5, 5, and 10 wt. %, with different SiC particle sizes (25 μm, 5 μm, and 100 nm), produced using a mechanical ball mill. The standard testing samples were fabricated using a powder metallurgy technique. Both sets of samples, prior to and after extrusion, were heated from room temperature up to 400 °C in a dilatometer at different heating rates, that is, 10, 20, and 40 °C/min. The analysis showed that, for all materials, there was an increase in length change as temperature increased, and the temperature sensitivity of the aluminum alloy decreased in the presence of both micro- and nano-sized silicon carbide. For all conditions, the nanocomposites showed better dimensional stability compared to conventional Al 2124/SiC composites. The after-extrusion samples showed better thermal stability and lower temperature sensitivity of the aluminum alloy for both micro- and nano-sized silicon carbide.
Keywords: aluminum 2124 metal matrix composite, SiC nano-sized reinforcements, powder metallurgy, extrusion, mechanical ball mill, dimensional stability
Procedia PDF Downloads 526
4051 Toward Indoor and Outdoor Surveillance using an Improved Fast Background Subtraction Algorithm
Authors: El Harraj Abdeslam, Raissouni Naoufal
Abstract:
The detection of moving objects from video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most widely used approach for moving object detection/tracking is background subtraction. Many approaches have been suggested for background subtraction, but these are sensitive to illumination changes, and the solutions proposed to bypass this problem are time-consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach and mainly focus on the ability to detect moving objects in dynamic scenes, for possible applications in monitoring complex and restricted-access areas, where moving and motionless persons must be reliably detected. It consists of three main phases: establishing invariance to illumination changes, background/foreground modeling, and morphological analysis for noise removal. We handle illumination changes using Contrast Limited Adaptive Histogram Equalization (CLAHE), which limits the intensity of each pixel to a user-determined maximum. Thus, it mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence. Then, the background and foreground images are separately enhanced by applying CLAHE. In order to form multi-modal backgrounds, we model each channel of a pixel as a mixture of K Gaussians (K=5) using a Gaussian Mixture Model (GMM). Finally, we post-process the resulting binary foreground mask using morphological erosion and dilation transformations to remove possible noise. For the experimental test, we used a standard dataset to challenge the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes.
Keywords: video surveillance, background subtraction, contrast limited histogram equalization, illumination invariance, object tracking, object detection, behavior understanding, dynamic scenes
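A minimal sketch of this pipeline, using OpenCV's built-in CLAHE and the MOG2 mixture-of-Gaussians background subtractor in place of a hand-coded K=5 GMM; the clip limit, tile size, history, and the file name "scene.avi" are assumed values, not the paper's settings.

```python
import cv2
import numpy as np

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
# MOG2 models each pixel as a mixture of Gaussians, standing in for the K=5 GMM described above
bg_model = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

def foreground_mask(frame_bgr):
    """CLAHE-normalized frame -> GMM foreground mask -> morphological cleanup."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    normalized = clahe.apply(gray)                           # mitigates illumination changes
    mask = bg_model.apply(normalized)                        # per-pixel mixture-of-Gaussians test
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # erosion then dilation removes speckle
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small holes in the foreground
    return mask

cap = cv2.VideoCapture("scene.avi")   # hypothetical test sequence
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = foreground_mask(frame)
```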
Procedia PDF Downloads 256
4050 Sepiolite as a Processing Aid in Fibre Reinforced Cement Produced in Hatschek Machine
Authors: R. Pérez Castells, J. M. Carbajo
Abstract:
Sepiolite has been used as a processing aid in the manufacture of fibre cement since the start of the replacement of asbestos in the 1980s. Sepiolite increases the inter-laminar bond between cement layers and improves the homogeneity of the slurries. A new type of processed sepiolite product, Wollatrop TF/C, has been evaluated as a retention agent for fine particles in the production of fibre cement in a Hatschek machine. The effect of Wollatrop TF/C on filtering and fine particle losses was studied, as well as its interaction with anionic polyacrylamide (APAM) and microsilica. The design of the experiments was factorial, and the VDT equipment used for measuring retention and drainage was a modified Rapid Köthen laboratory sheet former. Wollatrop TF/C increased fine particle retention, improving the economy of the process and reducing the accumulation of solids in recycled process water. At the same time, drainage time increased sharply at high concentration; however, drainage time can be improved by adjusting the APAM concentration. Wollatrop TF/C and microsilica have very little interaction with each other. Microsilica does not control fine particle losses, while Wollatrop TF/C does so efficiently. Further research on APAM type (molecular weight and anionic character) is advisable to improve drainage.
Keywords: drainage, fibre-reinforced cement, fine particle losses, flocculation, microsilica, sepiolite
Procedia PDF Downloads 327
4049 Analytical Comparison of Conventional Algorithms with Vedic Algorithm for Digital Multiplier
Authors: Akhilesh G. Naik, Dipankar Pal
Abstract:
In today’s scenario, the complexity of digital signal processing (DSP) applications and various microcontroller architectures has been increasing to such an extent that the traditional approaches to multiplier design in most processors are becoming outdated for being comparatively slow. Modern processing applications require suitable pipelined approaches, and therefore, algorithms that are friendlier to pipelined architectures. Traditional algorithms like the Wallace Tree, Radix-4 Booth, Radix-8 Booth and Dadda architectures have been proven to be comparatively slow for pipelined architectures. These architectures, therefore, need to be optimized or combined with other architectures to enhance their performance and to be made suitable for pipelined hardware/architectures. Recently, the Vedic algorithm has been shown mathematically to be efficient, being less complex and requiring fewer steps to establish its output, and has assumed renewed importance. This paper describes and shows how the Vedic algorithm can be better suited for pipelined architectures and can also be combined with traditional architectures and algorithms to enhance its ability even further. In this paper, we also establish that for complex applications on DSP and other microcontroller architectures, using the Vedic approach for multiplication proves to be the best available and most efficient option.
Keywords: Wallace Tree, Radix-4 Booth, Radix-8 Booth, Dadda, Vedic, Single-Stage Karatsuba (SSK), Looped Karatsuba (LK)
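As a reference for the structure the paper exploits, here is a small software model of the Urdhva Tiryagbhyam ("vertically and crosswise") pattern used in Vedic multipliers: every output column sums the cross products whose index weights add up to that column, which is what makes the partial-product tree regular and pipeline-friendly. This is an illustrative bit-level model, not the pipelined FPGA design discussed in the paper.

```python
def vedic_multiply(a_bits, b_bits):
    """Urdhva Tiryagbhyam: column-wise cross products with carry propagation.

    a_bits, b_bits: equal-length lists of bits, least-significant first.
    Returns the product bits, least-significant first.
    """
    n = len(a_bits)
    assert len(b_bits) == n
    result, carry = [], 0
    # Each output column k sums all cross products a_i * b_j with i + j == k
    for k in range(2 * n - 1):
        column = carry
        for i in range(n):
            j = k - i
            if 0 <= j < n:
                column += a_bits[i] & b_bits[j]
        result.append(column & 1)
        carry = column >> 1
    while carry:                      # flush any remaining carry bits
        result.append(carry & 1)
        carry >>= 1
    return result

def to_bits(x, n):                    # helper: integer -> n-bit LSB-first list
    return [(x >> i) & 1 for i in range(n)]

p = vedic_multiply(to_bits(13, 4), to_bits(11, 4))
print(sum(b << i for i, b in enumerate(p)))   # 143
```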
Procedia PDF Downloads 169
4048 Application of Medium High Hydrostatic Pressure in Preserving Textural Quality and Safety of Pineapple Compote
Authors: Nazim Uddin, Yohiko Nakaura, Kazutaka Yamamoto
Abstract:
Compote (fruit in syrup) of pineapple (Ananas comosus L. Merrill) is expected to have high market potential as a convenient ready-to-eat (RTE) food worldwide. High hydrostatic pressure (HHP) in combination with low temperature (LT) was applied to the processing of pineapple compote, as well as medium HHP (MHHP) in combination with medium-high temperature (MHT), since both processes can enhance liquid impregnation and inactivate microbes. The MHHP+MHT (55 or 65 °C) process, as well as the HHP+LT process, successfully inactivated the microbes in the compote to a non-detectable level. Although the compotes processed by MHHP+MHT or HHP+LT lost the fresh texture in a similar manner to those processed solely by heat, it was indicated that the texture degradation by heat was suppressed under MHHP. The degassing process reduced the hardness, while calcium (Ca) contributed to retained hardness in the MHT and MHHP+MHT processes. Electrical impedance measurements supported the damage due to degassing and heat. The color, Brix, and appearance were not affected significantly by the processing methods. The MHHP+MHT and HHP+LT processes may be applicable to produce high-quality, safe RTE pineapple compotes. Further studies on the optimization of packaging and storage conditions will be indispensable for commercialization.
Keywords: compote of pineapple, RTE, medium high hydrostatic pressure, postharvest loss, texture
Procedia PDF Downloads 137
4047 Generative Pre-Trained Transformers (GPT-3) and Their Impact on Higher Education
Authors: Sheelagh Heugh, Michael Upton, Kriya Kalidas, Stephen Breen
Abstract:
This article aims to create awareness of the opportunities and issues the artificial intelligence (AI) tool GPT-3 (Generative Pre-trained Transformer-3) brings to higher education. Technological disruptors have featured in higher education (HE) since Konrad Zuse developed the first functional programmable automatic digital computer. The flurry of technological advances, such as personal computers, smartphones, the world wide web, search engines, and artificial intelligence (AI), has regularly caused disruption and discourse across the educational landscape around harnessing the change for good. Accepting that AI influences are inevitable, we took a mixed-methods approach through participatory action research and evaluation. Joining HE communities, reviewing the literature, and conducting our own research around Chat GPT-3, we reviewed our institutional approach to changing our current practices and developing policy linked to assessments and the use of Chat GPT-3. We review the impact on HE of GPT-3, a high-powered natural language processing (NLP) system first seen in 2020. Historically, HE has flexed and adapted with each technological advancement, and the latest debates for educationalists are focusing on the issues around this version of AI, which creates natural human-language text from prompts and other forms of input and can also generate code and images. This paper explores how Chat GPT-3 affects the current educational landscape: we debate current views around plagiarism, research misconduct, and the credibility of assessment, and determine the tool's value in developing skills for the workplace and enhancing critical analysis skills. These questions led us to review our institutional policy and explore the effects on our current assessments and the development of new assessments. Conclusions: After exploring the pros and cons of Chat GPT-3, it is evident that this form of AI cannot be un-invented. Technology needs to be harnessed for positive outcomes in higher education. We have observed the materials developed through AI and their potential effects on our development of future assessments and teaching methods. Materials developed through Chat GPT-3 can still aid student learning, but they have led us to redevelop our institutional policy around plagiarism and academic integrity.
Keywords: artificial intelligence, Chat GPT-3, intellectual property, plagiarism, research misconduct
Procedia PDF Downloads 89
4046 Design and Realization of Double-Delay Line Canceller (DDLC) Using Fpga
Authors: A. E. El-Henawey, A. A. El-Kouny, M. M. Abd-El-Halim
Abstract:
Moving target indication (MTI) is an anti-clutter technique that limits the display of clutter echoes. It uses the received radar information primarily to display moving targets only. The purpose of MTI is to discriminate moving targets from a background of clutter or slowly moving chaff particles, as shown in this paper. The processing system in these radars is massive and complex, since it is supposed to perform a great amount of processing in a very short time. In most radar applications, the response of a single canceler is not acceptable, since it does not have a wide notch in the stop-band. A double-delay canceler is an MTI delay-line canceler employing the two-delay-line configuration to improve performance by widening the clutter-rejection notches, as compared with single-delay cancelers. This canceler is also called a double canceler, dual-delay canceler, or three-pulse canceler. In this paper, a double delay-line canceler is chosen for study due to its simplicity in both concept and implementation. We discuss the implementation of a simple digital moving target indicator (DMTI) using an FPGA, which has distinct advantages compared to an application-specific integrated circuit (ASIC) for the purposes of this work. The FPGA provides flexibility and stability, which are important factors in the radar application.
Keywords: FPGA, MTI, double delay line canceler, Doppler shift
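The double delay-line canceler has the transfer function H(z) = (1 − z⁻¹)², i.e. y[n] = x[n] − 2x[n−1] + x[n−2], whose magnitude response 4·sin²(πf/PRF) gives the widened stop-band notch at zero Doppler mentioned above. The floating-point sketch below is only a behavioural model (the PRF and Doppler values are assumptions); the actual design is fixed-point pipelined logic on the FPGA.

```python
import numpy as np

def ddlc(x):
    """Double delay-line canceler: y[n] = x[n] - 2*x[n-1] + x[n-2]."""
    y = np.zeros_like(x, dtype=float)
    y[2:] = x[2:] - 2 * x[1:-1] + x[:-2]
    return y

# Pulse-to-pulse samples at one range bin: stationary clutter plus a moving target
prf = 1000.0                                     # pulse repetition frequency in Hz (assumed)
n = np.arange(64)
clutter = 5.0 * np.ones(64)                      # zero-Doppler clutter
target = np.cos(2 * np.pi * 200.0 / prf * n)     # target with a 200 Hz Doppler shift (assumed)
y = ddlc(clutter + target)
print(np.abs(y[2:]).max())                       # clutter is canceled; only the target response remains

# Magnitude response |H(f)| = 4*sin^2(pi*f/PRF), showing the widened notch at zero Doppler
f = np.linspace(0, prf, 256)
H = 4 * np.sin(np.pi * f / prf) ** 2
```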
Procedia PDF Downloads 644
4045 Questions of Subjectivity in Establishing Plurality in Indian Women’s Autobiographies
Authors: Angkayarkan Vinayakaselvi
Abstract:
This paper aims at unpacking the questions of subjectivity and their role in altering and redefining the constructed images of self and community as represented in chosen Indian women’s autobiographies. India is a country of plurality, and this plurality is further extended by diasporic explorations. As third-world feminism questioned the Euro-American view that homogenizes the socio-cultural condition of women all over the world, Indian feminism needs to critique the view that all Indian women are one and the same. Similar to the plural nature of the nation, the nature and condition of women, too, are plural in India. Indian women are differentiated by caste, class, and region. A critical scrutiny of autobiographies written by Indian women belonging to different socio-cultural groups – Northeast Indian, Dalit and diasporic categories – will assess the impact of education, profession and socio-cultural and economic status on Indian women. Such a critique would highlight the heterogeneous subjectivity of Indian women. The images/selves of women as represented through these autobiographies are chosen with an aim to unmask and challenge, through ordering and positioning, the capitalist politics of literary representations of Indian women’s formation of 'her-self'. Methodologies and subjects associated with literature are considered essential for understanding and combating women’s oppression and for their empowerment. The representation of self in personal autobiographical history could be treated as the history of the entire nation, as the personal is always political in feminist writings. The chosen narrators, who are well-educated, well-settled, professional women of letters, are capable of assessing, critiquing and re/articulating the shifting paradigms of women’s lives. Despite these factors, the textual spaces possess evidence to establish that these women undergo suffering, and that they counter-design culture-specific strategies for their empowerment. These metafictional, self-conscious synecdoches extend to include the world of all women. Thus, these autobiographical texts could be reinterpreted as a searing critique of Indian society based on women’s personal lives.
Keywords: ethnicity and diversity, gender studies, Indian women’s autobiographies, subjectivity
Procedia PDF Downloads 220
4044 ROSgeoregistration: Aerial Multi-Spectral Image Simulator for the Robot Operating System
Authors: Andrew R. Willis, Kevin Brink, Kathleen Dipple
Abstract:
This article describes a software package called ROSgeoregistration, intended for use with the Robot Operating System (ROS) and the Gazebo 3D simulation environment. ROSgeoregistration provides tools for the simulation, test, and deployment of aerial georegistration algorithms and is available at github.com/uncc-visionlab/rosgeoregistration. A model creation package is provided which downloads multi-spectral images from the Google Earth Engine database and, if necessary, incorporates these images into a single, possibly very large, reference image. Additionally, a Gazebo plugin which uses the real-time sensor pose and image formation model to generate simulated imagery using the specified reference image is provided, along with related plugins for UAV-relevant data. The novelty of this work is threefold: (1) this is the first system to link the massive multi-spectral imaging database of Google’s Earth Engine to the Gazebo simulator, (2) this is the first example of a system that can simulate geospatially and radiometrically accurate imagery from multiple sensor views of the same terrain region, and (3) integration with other UAS tools creates a new holistic UAS simulation environment to support UAS system and subsystem development where real-world testing would generally be prohibitive. Sensed imagery and ground-truth registration information are published to client applications, which can receive imagery synchronously with telemetry from other payload sensors, e.g., IMU, GPS/GNSS, barometer, and windspeed sensor data. To highlight functionality, we demonstrate ROSgeoregistration for simulating Electro-Optical (EO) and Synthetic Aperture Radar (SAR) image sensors and an example use case for developing and evaluating image-based UAS position feedback, i.e., pose for image-based Guidance, Navigation and Control (GNC) applications.
Keywords: EO-to-EO, EO-to-SAR, flight simulation, georegistration, image generation, robot operating system, vision-based navigation
Procedia PDF Downloads 104
4043 Offline Signature Verification Using Minutiae and Curvature Orientation
Authors: Khaled Nagaty, Heba Nagaty, Gerard McKee
Abstract:
A signature is a behavioral biometric that is used for authenticating users in most financial and legal transactions. Signatures can be easily forged by skilled forgers. Therefore, it is essential to verify whether a signature is genuine or forged. The aim of any signature verification algorithm is to accommodate the differences between signatures of the same person and increase the ability to discriminate between signatures of different persons. The work presented in this paper proposes an automatic signature verification system to indicate whether a signature is genuine or not. The system comprises four phases: (1) the pre-processing phase, in which image scaling, binarization, image rotation, dilation, thinning, and connecting of ridge breaks are applied; (2) the feature extraction phase, in which global and local features are extracted; the local features are minutiae points, curvature orientation, and curve plateau, while the global features are signature area, signature aspect ratio, and Hu moments; (3) the post-processing phase, in which false minutiae are removed; and (4) the classification phase, in which features are enhanced before being fed into the classifier. k-nearest neighbors and support vector machines are used. The classifier was trained on a benchmark dataset to compare the performance of the proposed offline signature verification system against the state of the art. The accuracy of the proposed system is 92.3%.
Keywords: signature, ridge breaks, minutiae, orientation
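The minutiae and curvature-orientation extraction is the paper's own contribution and is not reproduced here; the sketch below is a much-reduced illustration of the global-feature path (binarization, then area, aspect ratio and Hu moments) feeding a k-NN classifier. The file names are hypothetical placeholders.

```python
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def global_features(path):
    """Binarize a signature image and extract global features (area, aspect ratio, Hu moments)."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (300, 150))                                   # image scaling
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    ys, xs = np.nonzero(binary)
    area = len(xs) / binary.size                                        # normalized signature area
    aspect = (xs.ptp() + 1) / (ys.ptp() + 1) if len(xs) else 0.0        # bounding-box aspect ratio
    hu = cv2.HuMoments(cv2.moments(binary)).flatten()
    return np.concatenate(([area, aspect], hu))

# Hypothetical file lists of genuine and forged samples for one signer
genuine = [f"genuine_{i}.png" for i in range(5)]
forged = [f"forged_{i}.png" for i in range(5)]
X = np.array([global_features(p) for p in genuine + forged])
y = np.array([1] * len(genuine) + [0] * len(forged))

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(knn.predict([global_features("questioned.png")]))                 # 1 = genuine, 0 = forged
```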
Procedia PDF Downloads 146
4042 Correlation Analysis to Quantify Learning Outcomes for Different Teaching Pedagogies
Authors: Kanika Sood, Sijie Shang
Abstract:
A fundamental goal of education includes preparing students to become part of the global workforce by making beneficial contributions to society. In this paper, we analyze student performance for multiple courses that involve different teaching pedagogies: a cooperative learning technique and an inquiry-based learning strategy. Student performance includes student engagement, grades, and attendance records. We perform this study in the Computer Science department for online and in-person courses with 450 students. We perform correlation analysis to study the relationship between student scores and other parameters such as gender and mode of learning. We use natural language processing and machine learning to analyze student feedback data and performance data. We assess the learning outcomes of two teaching pedagogies for undergraduate and graduate courses to showcase the impact of pedagogical adoption and learning outcome as determinants of academic achievement. Early findings suggest that when using the specified pedagogies, students become experts on their topics and demonstrate enhanced engagement with peers.
Keywords: bag-of-words, cooperative learning, education, inquiry-based learning, in-person learning, natural language processing, online learning, sentiment analysis, teaching pedagogy
Procedia PDF Downloads 77
4041 A Multi-Role Oriented Collaboration Platform for Distributed Disaster Reduction in China
Authors: Linyao Qiu, Zhiqiang Du
Abstract:
With the rapid development of urbanization, economic development, and steady population growth in China, the widespread devastation, economic damage, and loss of human lives caused by numerous forms of natural disasters are becoming increasingly serious every year. Disaster management requires the available and effective cooperation of different roles and organizations in the whole process, including mitigation, preparedness, response and recovery. Due to the imbalance of regional development in China, the disaster management capabilities of national and provincial disaster reduction centers are uneven. When an undeveloped area suffers from a disaster, the local reduction department can neither independently obtain first-hand information, such as high-resolution remote sensing images from satellites and aircraft, nor is a sharing mechanism provided for the department to directly access data resources deployed elsewhere. Most existing disaster management systems operate in a typical passive, data-centric mode and work for a single department, where resources cannot be fully shared. This impediment blocks local departments and groups from quick emergency response and decision-making. In this paper, we introduce a collaborative platform for distributed disaster reduction. To address the issues of imbalance in sharing data sources and technology in the process of disaster reduction, we propose a multi-role-oriented collaboration business mechanism, which is capable of scheduling and allocating multiple resources for their optimum utilization, to link various roles for collaborative reduction business in different places. The platform fully considers the differences in equipment conditions in different provinces and provides several service modes to satisfy technology needs in disaster reduction. An integrated collaboration system based on a focusing-services mechanism is designed and implemented for resource scheduling, functional integration, data processing, task management, collaborative mapping, and visualization. Actual applications illustrate that the platform can well support data sharing and business collaboration between national and provincial departments. It could significantly improve the capability of disaster reduction in China.
Keywords: business collaboration, data sharing, distributed disaster reduction, focusing service
Procedia PDF Downloads 295
4040 Brand Tips of Thai Halal Products
Authors: Pibool Waijittragum
Abstract:
The purpose of this research is to analyze the marketing strategies of Thai Halal products, which are related to the way of life of Thai Muslims. The expected benefit is a marketing strategy for the brand-building process for Halal products in Thailand. The research framework consists of four elements of marketing strategy necessary for brand identity creation: attributes, benefits, values and personality. The research methodology applied both qualitative and quantitative approaches; 19 marketing experts with dynamic roles in Thai consumer products were interviewed. In addition, a field survey of 122 Thai Muslims, selected from 175 Muslim communities in Bangkok, was studied. Data analysis was carried out according to five categories of Thai Halal products: 1) meat; 2) vegetables and fruits; 3) instant foods and garnishing ingredients; 4) beverages, desserts and snacks; 5) hygienic daily products, such as soap, shampoo and body lotion. The results explain some suitable representations in the marketing strategies of Thai Halal products, as follows: 1) Benefit: the characteristics of the product together with its benefit. Consumers purchase this product because it provides beneficial nutrients, has no toxic or chemical residues, and uses fresh and clean materials. 2) Attribute: the exterior images that attract the consumer. Consumers purchase this product because it carries a standard proof mark, a food and drug safety proof mark and a Halal products mark; packaging and its materials should draw attention, using attractive graphics and outstanding images of the product, materials or ingredients. 3) Value: the value of the product that affects consumers' perception; it is a healthy product that improves quality of life, a product of expertise manufactured from research results; consumers are important, and it is sincere, honest and reliable to all. 4) Personality: the reflection of consumers' thoughts, the personality fed back to them after consuming this product; they are health-conscious, rational, moral, just and thoughtful persons with progressive thinking.
Keywords: marketing strategies, product identity, branding, Thai Halal products
Procedia PDF Downloads 386
4039 Detection and Classification Strabismus Using Convolutional Neural Network and Spatial Image Processing
Authors: Anoop T. R., Otman Basir, Robert F. Hess, Eileen E. Birch, Brooke A. Koritala, Reed M. Jost, Becky Luu, David Stager, Ben Thompson
Abstract:
Strabismus refers to a misalignment of the eyes. Early detection and treatment of strabismus in childhood can prevent the development of permanent vision loss due to abnormal development of visual brain areas. We developed a two-stage method for strabismus detection and classification based on photographs of the face. The first stage detects the presence or absence of strabismus, and the second stage classifies the type of strabismus. The first stage comprises face detection using a Haar cascade, facial landmark estimation, face alignment, aligned-face landmark detection, segmentation of the eye region, and detection of strabismus using a VGG 16 convolutional neural network. Face alignment transforms the face to a canonical pose to ensure consistency in subsequent analysis. Using facial landmarks, the eye region is segmented from the aligned face and fed into a VGG 16 CNN model, which has been trained to classify strabismus. The CNN determines whether strabismus is present and classifies the type of strabismus (exotropia, esotropia, and vertical deviation). If stage 1 detects strabismus, the eye region image is fed into stage 2, which starts with the estimation of pupil center coordinates using Mask R-CNN deep neural networks. Then, the distance between the pupil coordinates and the eye landmarks is calculated, along with the angles that the pupil coordinates make with the horizontal and vertical axes. The distance and angle information is used to characterize the degree and direction of the strabismic eye misalignment. This model was tested on 100 clinically labeled images of children with (n = 50) and without (n = 50) strabismus. The True Positive Rate (TPR) and False Positive Rate (FPR) of the first stage were 94% and 6%, respectively. The classification stage produced a TPR of 94.73%, 94.44%, and 100% for esotropia, exotropia, and vertical deviations, respectively. This method also had an FPR of 5.26%, 5.55%, and 0% for esotropia, exotropia, and vertical deviation, respectively. The addition of one more feature related to the location of corneal light reflections may reduce the FPR, which was primarily due to children with pseudo-strabismus (the appearance of strabismus due to a wide nasal bridge or skin folds on the nasal side of the eyes).
Keywords: strabismus, deep neural networks, face detection, facial landmarks, face alignment, segmentation, VGG 16, mask R-CNN, pupil coordinates, angle deviation, horizontal and vertical deviation
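A toy sketch of the stage-2 geometry described above: given a pupil centre (e.g., from Mask R-CNN) and the eye-corner landmarks, compute the offset distance and its angles with the horizontal and vertical axes. The coordinates are invented values, and the simple midpoint/complement geometry is an assumption, not the exact formulation used in the study.

```python
import numpy as np

def deviation_metrics(pupil, inner_corner, outer_corner):
    """Distance and angles of the pupil centre relative to the eye-corner midpoint."""
    pupil, inner, outer = map(np.asarray, (pupil, inner_corner, outer_corner))
    eye_center = (inner + outer) / 2.0          # assumed reference point between the corners
    offset = pupil - eye_center
    distance = np.linalg.norm(offset)
    angle_h = np.degrees(np.arctan2(offset[1], offset[0]))   # angle with the horizontal axis
    angle_v = 90.0 - abs(angle_h)                             # complement: angle with the vertical axis
    return distance, angle_h, angle_v

# Toy pixel coordinates (pupil shifted toward the nose for a right eye)
d, ah, av = deviation_metrics(pupil=(112, 80), inner_corner=(130, 82), outer_corner=(90, 80))
print(round(d, 1), round(ah, 1), round(av, 1))
```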
Procedia PDF Downloads 94
4038 Analysis of Translational Ship Oscillations in a Realistic Environment
Authors: Chen Zhang, Bernhard Schwarz-Röhr, Alexander Härting
Abstract:
To acquire accurate ship motions at the center of gravity, a single low-cost inertial sensor is utilized and applied on board to measure the ship's oscillating motions. As observations, the three-axis accelerations and three-axis rotational rates provided by the sensor are used. The mathematical model for processing the observation data includes determination of the distance vector between the sensor and the center of gravity in the x, y, and z directions. After setting up the transfer matrix from the sensor's own coordinate system to the ship's body frame, an extended Kalman filter is applied to deal with the nonlinearities between the ship motion in the body frame and the observation information in the sensor's frame. As a side effect, the method eliminates sensor noise and other unwanted errors. The results are not only roll and pitch, but also linear motions, in particular heave and surge at the center of gravity. For testing, we resort to measurements recorded on a small vessel in a well-defined sea state. With response amplitude operators computed numerically by commercial software (Seaway), the motion characteristics are estimated. These agree well with the measurements after processing with the suggested method.
Keywords: extended Kalman filter, nonlinear estimation, sea trial, ship motion estimation
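A generic extended Kalman filter predict/update skeleton of the kind referred to above; the state-transition and measurement models here are trivial placeholders, whereas the actual filter maps sensor-frame accelerations and rotation rates through the lever-arm vector and transfer matrix to motions at the centre of gravity.

```python
import numpy as np

class ExtendedKalmanFilter:
    """Generic EKF with nonlinear state transition f and measurement model h."""

    def __init__(self, x0, P0, Q, R, f, F_jac, h, H_jac):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R
        self.f, self.F_jac, self.h, self.H_jac = f, F_jac, h, H_jac

    def predict(self, dt):
        F = self.F_jac(self.x, dt)
        self.x = self.f(self.x, dt)
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z):
        H = self.H_jac(self.x)
        y = z - self.h(self.x)                    # innovation
        S = H @ self.P @ H.T + self.R             # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P

# Placeholder models: the real filter relates IMU accelerations/rates at the sensor
# location to roll, pitch, heave and surge at the centre of gravity via the lever arm.
f = lambda x, dt: x                               # identity dynamics (placeholder)
F_jac = lambda x, dt: np.eye(len(x))
h = lambda x: x[:2]                               # measure the first two states (placeholder)
H_jac = lambda x: np.eye(2, len(x))

ekf = ExtendedKalmanFilter(np.zeros(4), np.eye(4), 0.01 * np.eye(4), 0.1 * np.eye(2),
                           f, F_jac, h, H_jac)
ekf.predict(dt=0.01)
ekf.update(np.array([0.05, -0.02]))
print(ekf.x)
```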
Procedia PDF Downloads 523
4037 Computational Linguistic Implications of Gender Bias: Machines Reflect Misogyny in Society
Authors: Irene Yi
Abstract:
Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in the fields of technology and linguistics today. Training data for machines are, at best, large corpora of human literature and, at worst, a reflection of the ugliness in society. Computational linguistics is a growing field dealing with such issues of data collection for technological development. Machines have been trained on millions of human books, only to find that, over the course of human history, derogatory and sexist adjectives are used significantly more frequently when describing females in history and literature than when describing males. This is extremely problematic, both as training data and as the outcome of natural language processing. As machines start to handle more responsibilities, it is crucial to ensure that they do not take with them historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language dealing with syntax, semantics, sociolinguistics, and text classification. Computational analysis of such linguistic data is used to find patterns of misogyny. The results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text to be more mindful and reflect gender equality. Further, this paper deals with the idea of non-binary gender pronouns and how machines can process these pronouns correctly, given their semantic and syntactic context. This paper also delves into the implications of gendered grammar and its effect, cross-linguistically, on natural language processing. Languages such as French or Spanish not only have rigid gendered grammar rules but also historically patriarchal societies. The progression of society comes hand in hand not only with its language but also with how machines process those natural languages. These ideas are all extremely vital to the development of natural language models in technology, and they must be taken into account immediately.
Keywords: computational analysis, gendered grammar, misogynistic language, neural networks
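One simple form of the computational analysis described is sketched below with toy data: counting which adjectives co-occur with female versus male reference words within a token window. The word lists, window size and example sentence are illustrative assumptions; a real study would run a POS tagger or parser over large corpora.

```python
from collections import Counter

FEMALE = {"she", "her", "hers", "woman", "women"}
MALE = {"he", "him", "his", "man", "men"}
# Tiny illustrative adjective list; a real study would use a POS tagger instead
ADJECTIVES = {"hysterical", "shrill", "brilliant", "rational", "beautiful", "strong"}

def adjective_counts(tokens, window=3):
    """Count adjectives appearing within `window` tokens of gendered reference words."""
    female, male = Counter(), Counter()
    for i, tok in enumerate(tokens):
        if tok not in ADJECTIVES:
            continue
        context = tokens[max(0, i - window): i + window + 1]
        if FEMALE & set(context):
            female[tok] += 1
        if MALE & set(context):
            male[tok] += 1
    return female, male

text = "she was hysterical while he remained rational and brilliant".split()
print(adjective_counts(text))
```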
Procedia PDF Downloads 119
4036 Atomic Decomposition Audio Data Compression and Denoising Using Sparse Dictionary Feature Learning
Authors: T. Bryan , V. Kepuska, I. Kostnaic
Abstract:
A method of data compression and denoising is introduced that is based on atomic decomposition of audio data using “basis vectors” that are learned from the audio data itself. The basis vectors are shown to provide higher data compression and better signal-to-noise enhancement than the Gabor and gammatone “seed atoms” that were used to generate them. The basis vectors are the input weights of a Sparse AutoEncoder (SAE) that is trained using “envelope samples” of windowed segments of the audio data. The envelope samples are extracted from the audio data by performing atomic decomposition with Gabor or gammatone seed atoms; this process identifies segments of audio data that are locally coherent with the seed atoms, found by matching pursuit. The envelope samples are formed by taking the Kronecker products of the atomic envelopes with the locally coherent data segments. Oracle signal-to-noise ratio (SNR) versus data compression curves are generated for the seed atoms as well as for the basis vectors learned from Gabor and gammatone seed atoms. SNR data compression curves are generated for speech signals as well as early American music recordings. The basis vectors are shown to have higher denoising capability for data compression rates ranging from 90% to 99.84% for speech as well as music. Envelope samples are displayed as images by folding the time series into column vectors. This display method is used to compare the output of the SAE with the envelope samples that produced them. The basis vectors are also displayed as images. Sparsity is shown to play an important role in producing the highest-denoising basis vectors.
Keywords: sparse dictionary learning, autoencoder, sparse autoencoder, basis vectors, atomic decomposition, envelope sampling, envelope samples, Gabor, gammatone, matching pursuit
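A minimal matching-pursuit sketch over a small dictionary of Gabor-like atoms, illustrating the greedy atomic-decomposition step that identifies locally coherent segments; the atom parameters, dictionary size and iteration count are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def gabor_atom(n, center, width, freq):
    """Unit-norm Gabor-like atom: Gaussian envelope times a cosine carrier."""
    t = np.arange(n)
    atom = np.exp(-0.5 * ((t - center) / width) ** 2) * np.cos(2 * np.pi * freq * t)
    return atom / np.linalg.norm(atom)

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy atomic decomposition: repeatedly pick the atom most correlated with the residual."""
    residual = signal.astype(float).copy()
    decomposition = []
    for _ in range(n_iter):
        correlations = dictionary @ residual
        k = int(np.argmax(np.abs(correlations)))
        coeff = correlations[k]
        residual -= coeff * dictionary[k]
        decomposition.append((k, coeff))
    return decomposition, residual

n = 256
dictionary = np.array([gabor_atom(n, c, w, f)
                       for c in (64, 128, 192)
                       for w in (8, 16)
                       for f in (0.05, 0.1, 0.2)])
signal = 2.0 * dictionary[4] + 0.5 * dictionary[10] + 0.05 * np.random.randn(n)
atoms, residual = matching_pursuit(signal, dictionary, n_iter=5)
print(atoms[:2], np.linalg.norm(residual))
```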
Procedia PDF Downloads 253
4035 Extracellular Enzymes from Halophilic Bacteria with Potential in Agricultural Secondary Flow Recovery Products
Authors: Madalin Enache, Simona Neagu, Roxana Cojoc, Ioana Gomoiu, Delia Ionela Dobre, Ancuta Roxana Trifoi
Abstract:
Various types of halophilic and halotolerant microorganisms that can be cultivated in the laboratory on culture media with a wide range of sodium chloride content have been isolated from several saline environments. The extracellular enzymes of these microorganisms show enzymatic activity across this spectrum of salinity, thus being attractive for several biotechnological processes carried out at high ionic strength. In the present work, amylase, protease, esterase, lipase, cellulase, pectinase, xylanase and inulinase activities were assayed for more than 50 bacterial strains isolated from water samples and sapropelic mud from four saline and hypersaline lakes located in the Romanian Plain. On the other hand, cellulase and pectinase activities were also detected in some halotolerant microorganisms isolated from a secondary agricultural flow of grape processing. The preliminary data revealed that, of all tested strains, seven harbor protease activity, eight amylase, four esterase, another four lipase, and three pectinase, and in one strain either cellulase or pectinase activity was identified. No enzymes able to hydrolyze inulin added to the culture media were identified. Several strains isolated from sapropelic mud showed multiple extracellular enzymatic activities, namely three strains harbor three activities and another seven harbor two activities. The data revealed that amylase and protease activities were frequently detected compared with the other tested enzymes. In the case of the pectinases, their ability to increase resveratrol recovery from the material resulting from grape processing was investigated. In this way, the material resulting from grape processing was treated with microbial supernatant for several durations (two, four and 24 hours), and the resveratrol content was determined by high-performance liquid chromatography (HPLC). The preliminary data revealed some positive results of this treatment.
Keywords: halophilic microorganisms, enzymes, pectinase, salinity
Procedia PDF Downloads 194
4034 Adapting Grain Crop Cleaning Equipment for Sesame and Other Emerging Spice Crops
Authors: Ramadas Narayanan, Surya Bhattrai, Vu Hoan
Abstract:
Threshing and cleaning are crucial post-harvest procedures carried out to separate the grain or seed from the harvested plant and eliminate any potential contaminants or foreign debris. After harvesting, threshing and cleaning are necessary to obtain clean seeds of guaranteed high quality, acceptable for consumption or further processing. For mechanised production, threshing can be conducted in a thresher. Afterwards, the seeds are cleaned in dedicated seed-cleaning facilities. This research investigates the effectiveness of the Kimseed MK3 cleaning equipment, designed for grain crops, in processing new crops such as sesame, fennel and kalonji. Subsequently, systematic trials were conducted to adapt the equipment to applications in sesame and spice crops. This was done to develop methods for mechanising harvest and post-harvest operations. For sesame, it is recommended to use a two-step process in the cleaning machine to remove large and small contaminants: the first step removes the large contaminants, and the second removes the smaller ones. The optimal parameters for cleaning fennel are a shaker frequency of 6.0 to 6.5 Hz and an airflow of 1.0 to 1.5 m/s. The optimal parameters for cleaning kalonji are a shaker frequency of 5.5 to 6.0 Hz and an airflow of 1.0 to under 1.5 m/s.
Keywords: sustainable mechanisation, seed cleaning process, optimal setting, shaker frequency
Procedia PDF Downloads 73
4033 Obtaining Nutritive Powder from Peel of Mangifera Indica L. (Mango) as a Food Additive
Authors: Chajira Garrote, Laura Arango, Lourdes Merino
Abstract:
This research explains how to obtain a nutritive powder from the peels of the ripe mango variety Hilacha (Mangifera indica L.) for use as a food additive. This study also intends to make efficient use of the by-products resulting from the operations of the mango pulp manufacturing process carried out by processing companies, with the aim of giving them added value. The physical and chemical characteristics of the mango peels and the benefits that may help humans were studied. The unit operations for the processing of mango peels and the production of the nutritive powder as a food additive are explained. Emphasis is placed on the preliminary operations applied to the raw material and on the drying method, which is very important in this project to obtain the suitable characteristics of the nutritive powder. Once the powder was obtained, it was subjected to laboratory tests to determine its functional properties: water retention capacity (WRC) and oil retention capacity (ORC); a sensory analysis was also performed to determine the product profile. The nutritive powder from the ripe mango peels showed excellent WRC and ORC values: 7.236 g of water/g B.S. and 1.796 g of water/g B.S., respectively, and the sensory analysis defined a complete profile of the color, odor and texture of the nutritive powder, which makes it suitable for use in the food industry.
Keywords: mango, peel, powder, nutritive, functional properties, sensory analysis
Procedia PDF Downloads 356
4032 Effective Solvents for Proteins Recovery from Microalgae
Authors: Win Nee Phong, Tau Chuan Ling, Pau Loke Show
Abstract:
From an industrial perspective, the exploitation of microalgae as a protein source is of great economic and commercial interest due to numerous attractive characteristics. Nonetheless, the release of protein from microalgae is limited by the multiple layers of the rigid, thick cell wall, which generally contain a large proportion of cellulose. Thus, an efficient cell disruption process is required to rupture the cell wall. The conventional downstream processing methods, which typically involve several unit operation steps such as disruption, isolation, extraction, concentration and purification, are energy-intensive and costly. To reduce the overall cost and establish a feasible technology for successful large-scale production, the microalgal industry today demands a more cost-effective and eco-friendly technique for downstream processing. One of the main challenges in extracting the proteins from microalgae is the presence of the rigid cell wall. This study aims to provide some guidance on the selection of an efficient solvent to facilitate the release of proteins during the cell disruption process. The effects of solvent types such as methanol, ethanol, 1-propanol and water in rupturing the microalgae cell wall were studied. It is interesting to note that water is the most effective solvent for recovering proteins from microalgae, and it is also the cheapest among all the solvents tested.
Keywords: green, microalgae, protein, solvents
Procedia PDF Downloads 258
4031 Unsupervised Part-of-Speech Tagging for Amharic Using K-Means Clustering
Authors: Zelalem Fantahun
Abstract:
Part-of-speech tagging is the process of assigning a part-of-speech or other lexical class marker to each word in naturally occurring text. Part-of-speech tagging is one of the most fundamental and basic tasks in almost all natural language processing. In natural language processing, the problem of providing large amounts of manually annotated data is a knowledge acquisition bottleneck. Since Amharic is an under-resourced language, the availability of a tagged corpus is the bottleneck problem for natural language processing, especially for POS tagging. A promising direction to tackle this problem is to provide a system that does not require manually tagged data. In unsupervised learning, the learner is not provided with classifications. Unsupervised algorithms seek out similarity between pieces of data in order to determine whether they can be characterized as forming a group. This paper explicates the development of an unsupervised part-of-speech tagger using K-means clustering for the Amharic language, since large amounts of data are produced in day-to-day activities. In the development of the tagger, the following procedures were followed. First, the unlabeled data (raw text) is divided into 10 folds and the tokenization phase takes place; at this level, the raw text is chunked at the sentence level and then into words. The second phase is feature extraction, which includes the word frequency and the syntactic and morphological features of a word. The third phase is clustering. Among different clustering algorithms, K-means is selected and implemented in this study, bringing groups of similar words together. The fourth phase is mapping, in which each cluster is examined carefully and the most common tag is assigned to the group. This study finds two features that are capable of distinguishing one part of speech from others, namely morphological features and positional information, and shows that it is possible to use unsupervised learning for Amharic POS tagging. In order to increase the performance of the unsupervised part-of-speech tagger, there is a need to incorporate other features that are not included in this study, such as semantics-related information. Finally, based on the experimental results, the system achieves a maximum accuracy of 81%.
Keywords: POS tagging, Amharic, unsupervised learning, k-means
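A compact illustration of the clustering phase using scikit-learn: each word type is given simple positional and morphological (suffix) features and grouped with K-means, after which each cluster would be mapped to its most common tag. The English toy sentences and the suffix features are assumptions standing in for the Amharic feature set.

```python
import numpy as np
from collections import defaultdict
from sklearn.cluster import KMeans

sentences = [
    "the dog chased the cat".split(),
    "a cat quickly chased a mouse".split(),
    "the mouse ran quickly".split(),
]

# Per-word-type statistics: frequency and mean relative position in the sentence
stats = defaultdict(lambda: {"count": 0, "pos_sum": 0.0})
for sent in sentences:
    for i, w in enumerate(sent):
        stats[w]["count"] += 1
        stats[w]["pos_sum"] += i / max(len(sent) - 1, 1)

words = sorted(stats)

def features(w):
    s = stats[w]
    return [s["count"],
            s["pos_sum"] / s["count"],   # positional information
            int(w.endswith("ed")),       # morphological cue (suffix)
            int(w.endswith("ly"))]       # morphological cue (suffix)

X = np.array([features(w) for w in words], dtype=float)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

clusters = defaultdict(list)
for w, c in zip(words, labels):
    clusters[c].append(w)
print(dict(clusters))   # each cluster is then mapped to its most common gold tag
```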
Procedia PDF Downloads 451