Search results for: image processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2729

149 Application of Interferometric Techniques for Quality Control of Oils Used in the Food Industry

Authors: Andres Piña, Amy Meléndez, Pablo Cano, Tomas Cahuich

Abstract:

The purpose of this project is to propose a quick and environmentally friendly alternative for measuring the quality of oils used in the food industry. There is evidence that repeated and indiscriminate use of oils in food processing causes physicochemical changes, with the formation of potentially toxic compounds that can affect the health of consumers and cause organoleptic changes. For assessing the quality of oils, non-destructive optical techniques such as interferometry offer a rapid alternative to the use of reagents, requiring only the interaction of light with the oil. In this project, we used interferograms of oil samples placed under different heating conditions to establish the changes in their quality. The interferograms were obtained by means of a Mach-Zehnder interferometer using a beam from a 10 mW HeNe laser at 632.8 nm. Each interferogram was captured with the AmCap software, and its full width at half maximum (FWHM) was analyzed and measured using ImageJ. The FWHM values were organized into three groups. The average of the FWHMs of group A shows almost linear behavior, so the exposure time is probably not relevant when the oil is kept at constant temperature. Group B exhibits slightly exponential behavior when the temperature rises between 373 K and 393 K. Results of Student's t-test indicate, at the 95% confidence level (α = 0.05), a variation in the molecular composition of both samples. Furthermore, we found a correlation between the iodine indices (physicochemical analysis) and the interferograms (optical analysis) of group C. Based on these results, this project highlights the importance of the quality of the oils used in the food industry and shows how interferometry can be a useful tool for this purpose.
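
As a rough illustration of the FWHM measurement step, the following sketch computes the full width at half maximum of a one-dimensional fringe intensity profile. It is a minimal stand-in for the ImageJ measurement; the profile and the linear-interpolation details are assumptions, not the authors' exact procedure:

```python
import numpy as np

def fwhm(profile):
    """Full width at half maximum of a 1-D intensity profile,
    using linear interpolation at the half-maximum crossings."""
    profile = np.asarray(profile, dtype=float)
    profile = profile - profile.min()          # remove baseline offset
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]       # indices above half maximum
    left, right = above[0], above[-1]

    def cross(i, j):
        # fractional index where the profile crosses the half-maximum level
        return i + (half - profile[i]) / (profile[j] - profile[i]) * (j - i)

    x_left = cross(left - 1, left) if left > 0 else float(left)
    x_right = cross(right, right + 1) if right < len(profile) - 1 else float(right)
    return x_right - x_left

# Example: Gaussian fringe with known FWHM = 2*sqrt(2*ln 2)*sigma ~ 28.3
x = np.arange(200)
sigma = 12.0
print(fwhm(np.exp(-0.5 * ((x - 100) / sigma) ** 2)))
```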

Keywords: Food industry, interferometric, oils, quality control.

148 Fast Return Path Planning for Agricultural Autonomous Terrestrial Robot in a Known Field

Authors: Carlo Cernicchiaro, Pedro D. Gaspar, Martim L. Aguiar

Abstract:

The agricultural sector is becoming more critical than ever in view of the expected overpopulation of the Earth. The introduction of robotic solutions in this field is an increasingly researched topic, aimed at making the most of the Earth's resources, avoiding the wear on the human body caused by harsh agricultural work, and opening the possibility of careful, constant processing 24 hours a day. This project concerns a terrestrial autonomous robot designed to navigate in an orchard, collecting fallen peaches below the trees. When it receives a low-battery signal, it has to return to the docking station, where it replaces its battery, and then return to the last work point and resume its routine. The robot iteratively follows a preset path through orchard tree rows of variable length, planned with the D* algorithm. In case of low battery, the D* algorithm is again used to determine the fastest return path to the docking station, as well as the path back from the docking station to the last work point. MATLAB simulations were performed to analyze the flexibility and adaptability of the developed algorithm. The simulation results show an enormous potential for adaptability, particularly in view of the irregularity of the orchard field, which is not flat and undergoes modifications over time from fallen branches as well as from other obstacles and constraints. The D* algorithm determines the best route in spite of the irregularity of the terrain. Moreover, this work presents a possible solution to improve initial-point tracking and reduce the time between movements.
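
The return-path computation can be pictured with a plain shortest-path search on an occupancy grid. The sketch below uses Dijkstra's algorithm as a simplified stand-in for the incremental D* planner used in the paper; the grid, start and goal are invented for illustration:

```python
import heapq

def shortest_path(grid, start, goal):
    """Dijkstra search on a 4-connected occupancy grid (0 = free, 1 = obstacle).
    A simplified stand-in for the incremental D* planner described above."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:                      # reconstruct route to the dock
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return path[::-1]
        if d > dist.get(node, float("inf")):
            continue
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None  # no path exists

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(shortest_path(grid, (0, 0), (2, 0)))  # routes around the obstacle row
```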

Keywords: Path planning, fastest return path, agricultural terrestrial robot, autonomous, docking station.

147 Effect of Injection Moulding Process Parameter on Tensile Strength Using Taguchi Method

Authors: Gurjeet Singh, M. K. Pradhan, Ajay Verma

Abstract:

The plastic industry plays a very important role in the economy of any country, generally accounting for a leading share of it. Since metals and their alloys are scarce resources, producing plastic products and components, which find application in many industrial as well as household consumer products, is beneficial. About 50% of plastic products are manufactured by the injection moulding process. For the production of better quality products, the quality characteristics and performance of the product have to be controlled. The process parameters play a significant role in the production of plastics; hence, control of the process parameters is essential. This paper describes the effect of parameter selection on the injection moulding process, with the aim of defining suitable parameters for producing the plastic product. Selecting the process parameters by trial and error is neither desirable nor acceptable, as it tends to increase cost and time. Hence, optimization of the processing parameters of the injection moulding process is essential. The experiments were designed with Taguchi's orthogonal array to achieve the result with the least number of experiments. The plastic material polypropylene was studied. Tensile strength tests of specimens produced by the injection moulding machine were performed on a universal testing machine. Using the Taguchi technique with the help of Minitab 14 software, the best values of injection pressure, melt temperature, packing pressure and packing time were obtained. We found that the process parameter packing pressure contributes the most to the production of a plastic product with good tensile strength.
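
To make the Taguchi analysis concrete, the sketch below computes larger-the-better signal-to-noise ratios and per-factor level means for a hypothetical L9 orthogonal array. The run layout and tensile-strength values are invented for illustration; the paper's actual design and measurements live in Minitab:

```python
import numpy as np

# Hypothetical L9 orthogonal array: each row gives the level (0, 1, 2) of the
# four factors (injection pressure, melt temperature, packing pressure,
# packing time); y holds illustrative tensile-strength results per run.
L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
               [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
               [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])
y = np.array([30.1, 32.4, 33.0, 31.2, 34.1, 30.8, 32.9, 31.5, 33.8])

# Larger-the-better signal-to-noise ratio: S/N = -10 log10(mean(1 / y^2))
sn = -10 * np.log10(1.0 / y**2)   # one replicate per run

for f, name in enumerate(["pressure", "melt_temp", "pack_pressure", "pack_time"]):
    means = [sn[L9[:, f] == lvl].mean() for lvl in range(3)]
    print(name, "level S/N means:", np.round(means, 2),
          "-> best level:", int(np.argmax(means)))
```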

Keywords: Injection moulding, tensile strength, Taguchi method, polypropylene.

146 ORank: An Ontology Based System for Ranking Documents

Authors: Mehrnoush Shamsfard, Azadeh Nematzadeh, Sarah Motiee

Abstract:

The growing volume of information on the internet creates an increasing need to develop new (semi)automatic methods for retrieving documents and ranking them according to their relevance to the user query. In this paper, after a brief review of ranking models, a new ontology-based approach for ranking HTML documents is proposed and evaluated in various circumstances. Our approach is a combination of conceptual, statistical and linguistic methods. This combination preserves ranking precision without losing speed. Our approach exploits natural language processing techniques to extract phrases and stem words. An ontology-based conceptual method is then used to annotate documents and expand the query. To expand a query, the spread activation algorithm is improved so that the expansion can be done in various aspects. The annotated documents and the expanded query are processed to compute the relevance degree using statistical methods. The outstanding features of our approach are (1) combining conceptual, statistical and linguistic features of documents, (2) expanding the query with its related concepts before comparing it to documents, (3) extracting and using both words and phrases to compute the relevance degree, (4) improving the spread activation algorithm to perform the expansion based on a weighted combination of different conceptual relationships, and (5) allowing variable document vector dimensions. A ranking system called ORank was developed to implement and test the proposed model. Test results are included at the end of the paper.
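
The weighted spreading-activation idea in feature (4) can be sketched as follows. The mini-ontology, relation weights, decay and threshold are invented for illustration and are not ORank's actual parameters:

```python
def spread_activation(graph, seeds, decay=0.5, threshold=0.1, iterations=2):
    """Weighted spreading activation over an ontology graph for query expansion.
    graph: {concept: [(neighbor, relation_weight), ...]} -- weights let different
    conceptual relationships (e.g., is-a vs. part-of) contribute differently."""
    activation = dict(seeds)  # initial activation of the query concepts
    for _ in range(iterations):
        new = dict(activation)
        for concept, level in activation.items():
            for neighbor, weight in graph.get(concept, []):
                spread = level * weight * decay
                if spread >= threshold:
                    new[neighbor] = max(new.get(neighbor, 0.0), spread)
        activation = new
    return activation

# Hypothetical mini-ontology around a query on "car"
graph = {
    "car": [("vehicle", 0.9), ("engine", 0.7), ("wheel", 0.6)],
    "vehicle": [("truck", 0.8)],
}
print(spread_activation(graph, {"car": 1.0}))
# {'car': 1.0, 'vehicle': 0.45, 'engine': 0.35, 'wheel': 0.3, 'truck': 0.18}
```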

Keywords: Document ranking, Ontology, Spread activation algorithm, Annotation.

145 A Novel Multiple Valued Logic OHRNS Modulo rn Adder Circuit

Authors: Mehdi Hosseinzadeh, Somayyeh Jafarali Jassbi, Keivan Navi

Abstract:

The Residue Number System (RNS) is a modular representation and has proved to be an instrumental tool in many digital signal processing (DSP) applications that require high-speed computations. RNS is an integer, non-weighted number system; it can support parallel, carry-free, high-speed and low-power arithmetic. A very interesting correspondence exists between the concepts of Multiple Valued Logic (MVL) and residue number arithmetic. If the number of levels used to represent MVL signals is chosen to be consistent with the moduli which create the finite rings in the RNS, MVL becomes a very natural representation for the RNS. Two concerns arise in applying this number system: achieving the highest possible speed and the largest dynamic range. These goals conflict, since enlarging the dynamic range reduces the speed. For the best performance, a method named the "One-Hot Residue Number System" (OHRNS) is considered, in which the propagation delay equals only one transistor delay. The problem with this method is the huge increase in the number of transistors, which grows in the order of m²; in real applications this is practically impossible. In this paper, combining Multiple Valued Logic and the One-Hot Residue Number System, we present a new method that resolves both of these problems, along with a novel design of an OHRNS-based adder circuit. The circuit is usable for Multiple Valued Logic moduli and, in comparison to other RNS designs, considerably improves the number of transistors and the power consumption.
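
The carry-free property that the adder exploits is easy to demonstrate in software: each residue channel adds independently modulo its own modulus, so no carry propagates between channels. A minimal sketch, with arbitrary pairwise-coprime example moduli (not the paper's moduli set):

```python
from math import prod

MODULI = (7, 8, 9)  # pairwise coprime; dynamic range M = 504

def to_rns(x, moduli=MODULI):
    """Residue representation of an integer."""
    return tuple(x % m for m in moduli)

def rns_add(a, b, moduli=MODULI):
    # Each channel adds independently modulo its own modulus: no carry
    # crosses channels, which is what enables the parallel, one-transistor-
    # delay one-hot implementation described above.
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, moduli))

x, y = 123, 305
print(rns_add(to_rns(x), to_rns(y)) == to_rns((x + y) % prod(MODULI)))  # True
```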

Keywords: Computer Arithmetic, Residue Number System, Multiple Valued Logic, One-Hot, VLSI.

144 Stochastic Optimization of a Vendor-Managed Inventory Problem in a Two-Echelon Supply Chain

Authors: Bita Payami-Shabestari, Dariush Eslami

Abstract:

The purpose of this paper is to develop a multi-product economic production quantity model under a vendor-managed inventory policy, with restrictions including limited warehouse space, budget, number of orders, average shortage time and maximum permissible shortage. Since the costs cannot be predicted with certainty, the data are assumed to behave under an uncertain environment. The problem is first formulated in the framework of a bi-objective multi-product economic production quantity model. The problem is then solved with three multi-objective decision-making (MODM) methods, which are compared on the optimal values of the two objective functions and on central processing unit (CPU) time, using statistical analysis and multi-attribute decision-making (MADM). The results of the study demonstrate that the augmented ε-constraint method performs better than the global criteria and goal programming methods in terms of the optimal values of the two objective functions and CPU time. Sensitivity analysis is carried out to illustrate the effect of parameter variations on the optimal solution. The contribution of this research is the use of random cost data in developing a multi-product economic production quantity model under a vendor-managed inventory policy with several constraints.

Keywords: Economic production quantity, random cost, supply chain management, vendor-managed inventory.

143 Methods and Algorithms of Ensuring Data Privacy in AI-Based Healthcare Systems and Technologies

Authors: Omar Farshad Jeelani, Makaire Njie, Viktoriia M. Korzhuk

Abstract:

The application of AI-powered algorithms in healthcare continues to flourish. In particular, access to healthcare information, including patient health history, diagnostic data and Personally Identifiable Information (PII), is paramount in the delivery of efficient patient outcomes. However, as the exchange of healthcare information between patients and healthcare providers through AI-powered solutions increases, protecting a person's information and privacy has become even more important. The increased adoption of healthcare AI has arguably resulted in a significant concentration on the security risks to, and protection measures for, healthcare data, leading to escalated analyses and enforcement. Since these challenges are brought about by the use of AI-based healthcare solutions to manage healthcare data, AI-based data protection measures are used to resolve the underlying problems. Consequently, this project proposes AI-powered safeguards and policies/laws to protect the privacy of healthcare data. The project presents the best-in-class techniques used to preserve the data privacy of AI-powered healthcare applications. Popular privacy-protecting methods such as federated learning, cryptography techniques, differential privacy methods and hybrid methods are discussed together with potential cyber threats, data security concerns and prospects. The project also discusses some of the relevant data security acts/laws that govern the collection, storage and processing of healthcare data to guarantee that owners' privacy is preserved. This inquiry discusses various gaps and uncertainties associated with healthcare AI data collection procedures and identifies potential correction/mitigation measures.
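
Of the methods listed, differential privacy is the simplest to illustrate in a few lines. The sketch below applies the standard Laplace mechanism to a counting query; the patient count and privacy budget are invented for illustration:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """epsilon-differentially private release of a numeric query result:
    noise drawn from Laplace(sensitivity / epsilon) masks any one record."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Hypothetical example: count of patients with a given diagnosis.
# A counting query changes by at most 1 when one record is added or
# removed, so its sensitivity is 1.
true_count = 42
print(laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5))
```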

Keywords: Data privacy, artificial intelligence, healthcare AI, data sharing, healthcare organizations.

142 Natural Antioxidant Changes in Fresh and Dried Spices and Vegetables

Authors: Liga Priecina, Daina Karklina

Abstract:

Antioxidants have become among the most analyzed substances in recent decades. Antioxidants act as inactivators of free radicals. Spices and vegetables are among the major antioxidant sources. The most common antioxidants in vegetables and spices are vitamins C and E, phenolic compounds and carotenoids. Therefore, it is important to gain some insight into antioxidant changes in spices and vegetables during processing. In this article, nine fresh and dried spices and vegetables grown in Latvia in 2013 were analyzed: celery (Apium graveolens), parsley (Petroselinum crispum), dill (Anethum graveolens), leek (Allium ampeloprasum L.), garlic (Allium sativum L.), onion (Allium cepa), celery root (Apium graveolens var. rapaceum), pumpkin (Cucurbita maxima) and carrot (Daucus carota). Total carotenoids, total phenolic compounds and their antiradical scavenging activity were determined for all samples. Dry matter content was calculated from moisture content. After the drying process, carotenoid content decreases significantly in all analyzed samples except one: carotenoid content increases in parsley. The phenolic composition differed depending on whether the sample was fresh or dried. Total phenolic, flavonoid and phenolic acid content increases in dried spices. Flavan-3-ol content was not detected in fresh spice samples. For dried vegetables, phenolic acid content decreases significantly, while flavan-3-ol content increases. The highest antiradical scavenging activity was observed in samples with higher flavonoid and phenolic acid content.

Keywords: Antiradical scavenging activity, carotenoids, phenolic compounds, spices, vegetables.

141 The Mechanism Study of Degradative Solvent Extraction of Biomass by Liquid Membrane-Fourier Transform Infrared Spectroscopy

Authors: W. Ketren, J. Wannapeera, Z. Heishun, A. Ryuichi, K. Toshiteru, M. Kouichi, O. Hideaki

Abstract:

Degradative solvent extraction is a method developed for biomass upgrading by dewatering and fractionation of biomass under mild conditions. However, the conversion mechanism of the degradative solvent extraction method has not been fully understood so far. Rice straw was treated in 1-methylnaphthalene (1-MN) at solvent-treatment temperatures varied from 250 to 350 °C with a residence time of 60 min. The liquid membrane-Fourier Transform Infrared Spectroscopy (FTIR) technique was applied to study the processing mechanism in depth without separation of the solvent. It was found that the strength of the oxygen-hydrogen stretching band (3600-3100 cm⁻¹) decreased slightly with increasing temperature in the range of 300-350 °C. The decrease of the hydroxyl group in the solvent-soluble fraction suggests a dehydration reaction taking place between 300 and 350 °C. FTIR spectra in the carbonyl stretching region (1800-1600 cm⁻¹) revealed the presence of ester, carboxylic acid and ketonic groups in the solvent-soluble fraction of the biomass. The carboxylic acid content increased in the range of 200 to 250 °C and then decreased. The prevalence of aromatic groups showed that aromatization took place during extraction above 250 °C. From 300 to 350 °C, the carbonyl functional groups in the solvent-soluble fraction noticeably decreased. The removal of the carboxylic acid groups and the conversion of esters into carbon dioxide indicate that a decarboxylation reaction occurred during the extraction process.

Keywords: Biomass upgrading, liquid membrane-Fourier transform infrared spectroscopy, FTIR, degradative solvent extraction, mechanism.

140 Image-Based UAV Vertical Distance and Velocity Estimation Algorithm during the Vertical Landing Phase Using Low-Resolution Images

Authors: Seyed-Yaser Nabavi-Chashmi, Davood Asadi, Karim Ahmadi, Eren Demir

Abstract:

The landing phase of a UAV is very critical, as there are many uncertainties in this phase which can easily entail a hard landing or even a crash. In this paper, the estimation of relative distance and velocity to the ground, one of the most important processes during the landing phase, is studied. Accurate measurement sensors, as an alternative approach, can be very expensive (e.g., LIDAR) or have a limited operational range (e.g., ultrasonic sensors). Additionally, absolute positioning systems like GPS or IMU cannot provide distance to the ground independently. The focus of this paper is to determine whether the relative distance and velocity between the UAV and the ground can be measured in the landing phase using just low-resolution images taken by a monocular camera. The Lucas-Kanade feature detection technique is employed to extract the most suitable feature in a series of images taken during the UAV landing. Two different approaches based on Extended Kalman Filters (EKF) are proposed, and their performance in estimating the relative distance and velocity is compared. The first approach uses the kinematics of the UAV as the process model and the calculated optical flow as the measurement. The second approach uses the feature's projection on the camera plane (pixel position) as the measurement, while employing both the kinematics of the UAV and the dynamics of the variation of the projected point as the process model, to estimate both relative distance and relative velocity. To verify the results, a sequence of low-quality images taken by a camera moving on a specifically developed testbed was used to compare the performance of the proposed algorithms. The case studies show that the quality of the images results in considerable noise, which reduces the performance of the first approach. Using the projected feature position, on the other hand, is much less sensitive to the noise and estimates the distance and velocity with relatively high accuracy. This approach can also be used to predict the future projected feature position, which can drastically decrease the computational workload, an important criterion for real-time applications.
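
The structure of such an estimator can be pictured with a minimal EKF. The model below is an illustration under assumed parameters (a pinhole camera with focal length f, a ground feature at a known horizontal offset d, constant-velocity kinematics), not the authors' exact formulation:

```python
import numpy as np

# State x = [h, v]: height above ground and vertical velocity. The camera
# measures the pixel position of a ground feature at known offset d from
# the flight axis: z = f * d / h (assumed pinhole model).
dt, f, d = 0.05, 800.0, 1.0           # assumed step [s], focal length [px], offset [m]
F = np.array([[1.0, dt], [0.0, 1.0]]) # constant-velocity kinematic model
Q = np.diag([1e-5, 1e-4])             # process noise
R = np.array([[4.0]])                 # pixel noise (2 px standard deviation)

def ekf_step(x, P, z):
    x = F @ x                          # predict with the kinematic model
    P = F @ P @ F.T + Q
    z_pred = np.array([f * d / x[0]])  # nonlinear measurement model
    H = np.array([[-f * d / x[0]**2, 0.0]])  # Jacobian of the measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - z_pred)).ravel() # correct with the pixel measurement
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(1)
x_true, x_est, P = np.array([10.0, -0.5]), np.array([8.0, 0.0]), np.eye(2)
for _ in range(200):                   # simulated descent from 10 m at -0.5 m/s
    x_true = F @ x_true
    z = np.array([f * d / x_true[0]]) + rng.normal(0.0, 2.0, 1)
    x_est, P = ekf_step(x_est, P, z)
print(np.round(x_est, 2))              # converges towards the true [height, velocity]
```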

Keywords: Automatic landing, multirotor, nonlinear control, parameters estimation, optical flow.

139 A Mixing Matrix Estimation Algorithm for Speech Signals under the Under-Determined Blind Source Separation Model

Authors: Jing Wu, Wei Lv, Yibing Li, Yuanfan You

Abstract:

The separation of speech signals has become a research hotspot in the field of signal processing in recent years. It has many applications and influences in teleconferencing, hearing aids, speech recognition by machines and so on. The sounds received are usually noisy. The issue of identifying the sounds of interest and obtaining clear sounds in such an environment becomes a problem worth exploring, that is, the problem of blind source separation. This paper focuses on under-determined blind source separation (UBSS). Sparse component analysis is generally used for the problem of under-determined blind source separation. The method is mainly divided into two parts: first, a clustering algorithm is used to estimate the mixing matrix from the observed signals; then the signal is separated based on the known mixing matrix. In this paper, the problem of mixing matrix estimation is studied, and an improved algorithm to estimate the mixing matrix for speech signals in the UBSS model is proposed. The traditional potential function algorithm is not accurate in estimating the mixing matrix, especially at low signal-to-noise ratio (SNR). In response to this problem, this paper develops an improved potential function method to estimate the mixing matrix. The algorithm not only avoids the influence of insufficient prior information in traditional clustering algorithms, but also improves the estimation accuracy of the mixing matrix. This paper takes the mixing of four speech signals into two channels as an example. The simulation results show that the approach in this paper not only improves the accuracy of the estimation, but also applies to any mixing matrix.
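
The geometric idea behind such clustering can be sketched as follows: with sparse sources, two-channel observations concentrate along the directions of the mixing-matrix columns, and a potential (kernel density) function over the sample angles peaks at those directions. This is an illustrative toy, not the paper's improved algorithm; all signals and parameters are invented:

```python
import numpy as np

def estimate_mixing_directions(X, n_sources, bandwidth=0.05):
    """Estimate mixing-matrix columns for a 2-channel UBSS problem.
    Sparse sources make observation vectors cluster along the mixing
    columns; a potential (kernel density) function over sample angles
    reveals those directions as peaks."""
    x1, x2 = X
    norms = np.hypot(x1, x2)
    mask = norms > 0.1 * norms.max()                      # drop weak samples
    theta = np.mod(np.arctan2(x2[mask], x1[mask]), np.pi)  # fold to [0, pi)
    grid = np.linspace(0, np.pi, 720)
    # potential function: sum of Gaussian kernels centred at sample angles
    pot = np.exp(-((grid[:, None] - theta[None, :]) / bandwidth) ** 2).sum(axis=1)
    peaks = [i for i in range(1, len(grid) - 1) if pot[i - 1] < pot[i] > pot[i + 1]]
    peaks = sorted(peaks, key=lambda i: pot[i], reverse=True)[:n_sources]
    angles = np.sort(grid[peaks])
    return np.vstack([np.cos(angles), np.sin(angles)])     # unit-norm columns

# Four sparse sources mixed into two channels at known directions
rng = np.random.default_rng(2)
angles_true = np.array([0.3, 1.0, 1.8, 2.6])
A = np.vstack([np.cos(angles_true), np.sin(angles_true)])
S = rng.laplace(size=(4, 5000)) * (rng.random((4, 5000)) > 0.9)  # sparse sources
print(np.round(estimate_mixing_directions(A @ S, 4), 2))  # columns near A's
```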

Keywords: Clustering algorithm, potential function, speech signal, the UBSS model.

138 Robot Technology Impact on Dyslexic Students’ English Learning

Authors: Khaled Hamdan, Abid Amorri, Fatima Hamdan

Abstract:

Involving students in the English language learning process and achieving adequate proficiency in the target language can be a great challenge for both teachers and students. This can prove an even greater challenge with special needs students (dyslexia) if they have a physical impairment and inadequate mastery of basic communicative language competence/proficiency in the target language. From this perspective, technology like robots can probably be used to enhance the learning process for special needs students who have extensive communication needs, who face a continuous struggle to interact with their peers and teachers, and who must meet academic requirements. Robots, specifically NAO, can probably provide them with the perfect opportunity to practice social and communication skills and meet their English academic requirements. This research paper aims to identify to what extent robots can be used to improve students' social interaction and communication skills, and to understand the potential of robotics-based education in motivating and engaging UAEU dyslexic students to meet university requirements. To this end, the paper explores several factors that come into play: the Motion Level, involving cognitive activities; the Interaction Level, involving language processing; the Behavior Level, establishing a close relationship with the robot; and the Appraisal Level, focusing on dyslexic students' achievement in the target language.

Keywords: Dyslexia, robot technology, motion, interaction, behavior and appraisal levels, social and communication skills.

137 Developing Well-Being Indicators and Measurement Methods as Illustrated by Projects Aimed at Preventing Obesity in Children

Authors: E. Grochowska-Niedworok, K. Brukało, M. Hadasik, M. Kardas

Abstract:

Consumption of vegetables by school children and adolescents is essential for their normal growth, development and health, yet only a minority of the world's population consumes the right amount of these products. The aim of the study was to evaluate the preferences and frequency of vegetable consumption by school children and adolescents. It was assumed that effectively implemented nutrition education programs should increase the frequency of vegetable consumption among the recipients. The study covered 514 students, aged 9 to 22 years, from five schools in the Opole Voivodeship. The research tool was an authors' questionnaire consisting of closed questions on the frequency of vegetable consumption and the use of 10 methods of culinary preparation. Preferences and frequencies are shown in percentages, while correlations were estimated on the basis of Cramér's V and gamma coefficients. In each of the examined age groups, the relationship between sex and vegetable consumption was determined (Cramér's V values ranged from 0.06 to 0.38), as was the use of the various methods of culinary processing (Cramér's V from 0.08 to 0.34). For both sexes, relationships between age and frequency of vegetable consumption (gamma values from ~0.00 to 0.39) and between age and cooking methods (gamma values from 0.01 to 0.22) were shown. The most important determinants of nutritional choices are the taste and availability of products; the positive effect on health ranks only third. As has been shown, obesity prevention programs can not only address nutrition education but also teach about new flavors and increase the availability of healthy foods. In addition, the frequency of vegetable consumption can be a good indicator of the healthy behaviors of children and adolescents.
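
As a reminder of how the association measure is computed, the sketch below derives Cramér's V from a contingency table using scipy; the table values are invented for illustration, not the study's data:

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table):
    """Cramér's V for a contingency table:
    V = sqrt(chi2 / (n * (min(r, c) - 1)))."""
    chi2, _, _, _ = chi2_contingency(table)
    n = table.sum()
    r, c = table.shape
    return np.sqrt(chi2 / (n * (min(r, c) - 1)))

# Hypothetical sex x vegetable-consumption-frequency table
# (rows: girls, boys; columns: daily, weekly, rarely)
table = np.array([[35, 50, 15],
                  [25, 45, 30]])
print(round(cramers_v(table), 2))
```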

Keywords: Children and adolescents, frequency, welfare rate, vegetables.

136 Co-Disposal of Coal Ash with Mine Tailings in Surface Paste Disposal Practices: A Gold Mining Case Study

Authors: M. L. Dinis, M. C. Vila, A. Fiúza, A. Futuro, C. Nunes

Abstract:

The present paper describes a study of paste tailings prepared in the laboratory using gold tailings, produced in a Finnish gold mine, with the incorporation of coal ash. Natural leaching tests were conducted with the original materials (tailings, fly and bottom ashes) and also with paste mixtures prepared with different percentages of tailings and ashes. After leaching, the solid wastes were physically and chemically characterized, and the results were compared with those selected as blanks, the unleached samples. The tailings and the coal ash, as well as the prepared mixtures, were characterized, in addition to the textural parameters, by the following measurements: grain size distribution, chemical composition and pH. The mixtures were also tested to characterize their mechanical behavior by measuring flexural strength, compressive strength and consistency. The original tailing samples presented an alkaline pH because, during their processing, they had been submitted to pressure oxidation with destruction of the sulfides. Therefore, it was not possible to ascertain the effect of the coal ashes on acid mine drainage. However, it was possible to verify that the paste reactivity was affected mostly by the bottom ash, and that tailings blended with bottom ash present lower mechanical strength than when blended with a combination of fly and bottom ash. Surface paste disposal offers an attractive alternative to traditional methods, in addition to the environmental benefits of incorporating large-volume wastes (e.g. bottom ash). However, a comprehensive characterization of the paste mixtures is crucial to optimize paste design in order to enhance engineering and environmental properties.

Keywords: Coal ash, gold tailings, paste, surface disposal.

135 Frictional Effects on the Dynamics of a Truncated Double-Cone Gravitational Motor

Authors: Barenten Suciu

Abstract:

In this work, the effects of friction and truncation on the dynamics of a double-cone gravitational motor, self-propelled on a straight V-shaped horizontal rail, are evaluated. Such a mechanism has a variable radius of contact and, on one hand, is similar to a pulley mechanism that changes potential energy into the kinetic energy of rotation, but on the other hand, is similar to a pendulum mechanism that converts the potential energy of the suspended body into the kinetic energy of translation along a circular path. Movies of the self-propelled double-cones, made of S45C carbon steel and wood, along rails made of aluminum alloy, were shot for various opening angles of the rails. Kinematical features of the double-cones were estimated through slow-motion processing of the recorded movies. Then, a kinematical model was derived under the assumption that the distance traveled by the contact points on the rectilinear rails is identical to the distance traveled by the contact points on the truncated conical surface. Additionally, a dynamic model for this particular contact problem was proposed and validated against the experimental results. Based on this model, the traction force and the traction torque acting on the double-cone are identified. It was proved that the rolling traction force is always smaller than the sliding friction force; i.e., the double-cone rolls without slipping. Results obtained in this work can be used to achieve the proper design of such gravitational motors.

Keywords: Truncated double-cone, friction, rolling and sliding, dynamic model, gravitational motor.

134 A Survey of Field Programmable Gate Array-Based Convolutional Neural Network Accelerators

Authors: Wei Zhang

Abstract:

With the rapid development of deep learning, neural networks and deep learning algorithms play a significant role in various practical applications. Due to their high accuracy and good performance, Convolutional Neural Networks (CNNs) in particular have become a research hotspot in the past few years. However, the size of the networks is becoming increasingly large due to the demands of practical applications, which poses a significant challenge for constructing high-performance implementations of deep learning neural networks. Meanwhile, many of these application scenarios also place strict requirements on the performance and power consumption of hardware devices. Therefore, it is particularly critical to choose a suitable computing platform for the hardware acceleration of CNNs. This article surveys recent advances in Field Programmable Gate Array (FPGA)-based acceleration of CNNs. Various designs and implementations of FPGA-based accelerators across different devices and network models are reviewed, and comparisons with Graphics Processing Units (GPUs), Application Specific Integrated Circuits (ASICs) and Digital Signal Processors (DSPs) are made to present our own critical analysis and comments. Finally, we discuss different perspectives on these acceleration and optimization methods on FPGA platforms to further explore the opportunities and challenges for future research, and we offer a prospect for the future development of FPGA-based accelerators.

Keywords: Deep learning, field programmable gate array, FPGA, hardware acceleration, convolutional neural networks, CNN.

133 Experimental Investigation of the Impact of Biosurfactants on Residual-Oil Recovery

Authors: S. V. Ukwungwu, A. J. Abbas, G. G. Nasr

Abstract:

The rising prices of natural gas and oil, with the attendant increase in energy demand on world markets in recent years, have stimulated interest in recovering residual oil saturation across the globe. To meet energy security needs, efforts have been made to develop new technologies for enhancing the recovery of oil and gas, utilizing techniques like CO2 flooding, water injection, hydraulic fracturing and surfactant flooding. Surfactant flooding optimizes production but poses a risk to the environment due to the toxic nature of the surfactants. Building on proven records of utilizing various types of bacteria to produce biosurfactants for enhancing oil recovery, this research uses a technique that combines biosurfactants to achieve enhanced oil recovery (EOR) through lowering the interfacial tension and contact angle. In this study, three biosurfactants were produced from three Bacillus species from freeze-dried cultures, using sucrose 3% (w/v) as the carbon source. Two of the produced biosurfactants were screened with the TEMCO Pendant Drop Image Analysis system for reduction in interfacial tension (IFT) and contact angle. Interfacial tension was greatly reduced, from 56.95 mN/m to 1.41 mN/m, when biosurfactants in cell-free culture of Bacillus licheniformis were used, compared with 4.83 mN/m for the cell-free culture of Bacillus subtilis. As a result, the cell-free culture of Bacillus licheniformis changed the wettability in the contact angle measurement to more water-wet, as the angle decreased from 130.75° to 65.17°. The influence of microbial treatment on crushed rock samples was also observed by qualitative wettability experiments. Samples treated with biosurfactants remained in the aqueous phase, indicating a water-wet system. These results could prove that biosurfactants can effectively change the chemistry of the wetting conditions against diverse surfaces, providing a desirable condition for efficient oil transport and in this way serving as a mechanism for EOR. The environmentally friendly character of biosurfactant applications for industrial purposes offers important advantages over chemically synthesized surfactants: various possible structures, low toxicity and biodegradability.

Keywords: Bacillus, biosurfactant, enhanced oil recovery, residual oil, wettability.

132 Automation of Heat Exchanger using Neural Network

Authors: Sudhir Agashe, Ashok Ghatol, Sujata Agashe

Abstract:

In this paper, the development of a heat exchanger as a pilot plant for educational purposes is discussed, and the use of a neural network for controlling the process is presented. The aim of the study is to highlight the need for a specific Pseudo Random Binary Sequence (PRBS) to excite a process under control. As the neural network is a data-driven technique, the method of data generation plays an important role; in light of this, a careful experimentation procedure for data generation was a crucial task. Heat exchange is a complex process, which has a capacity and a time lag as process elements. The proposed system is a typical pipe-in-pipe heat exchanger. The complexity of the system demands careful selection, proper installation and commissioning. The temperature, flow and pressure sensors play a vital role in the control performance. The final control element used is a pneumatically operated control valve. While carrying out the experimentation on the heat exchanger, a well-drafted procedure was followed, giving utmost attention to the safety of the system. The results obtained are encouraging and reveal that if the process details are known completely, as far as process parameters are concerned, and the utilities are well stabilized, then feedback systems are suitable, whereas the neural network control paradigm is useful for processes with nonlinearity and limited knowledge about the process. The implementation of NN control reinforces the concepts of process control and the NN control paradigm. The results also underline the importance of the excitation signal for the particular process. Data acquisition, processing and presentation in a typical format are the most important aspects of validating the results.
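
A PRBS excitation of the kind the study calls for is conventionally generated with a linear-feedback shift register. The sketch below produces a maximal-length PRBS-7 sequence; the register length and tap positions are standard textbook choices (polynomial x⁷ + x⁶ + 1), not necessarily those used on the pilot plant:

```python
import numpy as np

def prbs(n_bits=7, taps=(7, 6), length=None):
    """Pseudo Random Binary Sequence from a linear-feedback shift register.
    With n_bits = 7 and taps (7, 6) this is a maximal-length PRBS-7 with
    period 127. Returned levels are +/-1, suitable as a valve excitation."""
    state = [1] * n_bits                 # any nonzero seed works
    length = length or 2**n_bits - 1
    out = []
    for _ in range(length):
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]   # XOR feedback
        out.append(2 * state[-1] - 1)                  # map {0,1} -> {-1,+1}
        state = [fb] + state[:-1]                      # shift register
    return np.array(out)

signal = prbs()
print(len(signal), signal[:16])   # 127 samples, pseudo-random +/-1 levels
```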

Keywords: Process identification, neural network, heat exchanger.

131 Discovery of Quantified Hierarchical Production Rules from Large Set of Discovered Rules

Authors: Tamanna Siddiqui, M. Afshar Alam

Abstract:

Automated rule discovery is, due to its applicability, one of the most fundamental and important methods in KDD, and it has been an active research area in the recent past. Hierarchical representation allows us to easily manage the complexity of knowledge, to view the knowledge at different levels of detail, and to focus our attention on the interesting aspects only. One such efficient and easy-to-understand system is the Hierarchical Production Rule (HPR) system. An HPR, a standard production rule augmented with generality and specificity information, is of the form: Decision If <condition>, together with its Generality and Specificity information. HPR systems are capable of handling the taxonomical structures inherent in knowledge about the real world. This paper focuses on the issue of mining quantified rules with a crisp hierarchical structure using a Genetic Programming (GP) approach to knowledge discovery. The post-processing scheme presented in this work uses quantified production rules as the initial individuals of GP and discovers the hierarchical structure. In the proposed approach, rules are quantified using Dempster-Shafer theory. Suitable genetic operators are proposed for the suggested encoding. Based on the Subsumption Matrix (SM), an appropriate fitness function is suggested. Finally, Quantified Hierarchical Production Rules (HPRs) are generated from the discovered hierarchy using Dempster-Shafer theory. Experimental results are presented to demonstrate the performance of the proposed algorithm.
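
The quantification step rests on Dempster-Shafer evidence combination. A minimal sketch of Dempster's rule over two mass functions follows; the hypotheses and mass values are invented for illustration:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets; used here in the spirit of quantifying rules."""
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2          # mass assigned to contradiction
    k = 1.0 - conflict                       # normalisation constant
    return {s: w / k for s, w in combined.items()}

# Hypothetical masses from two evidence sources over {rule_ok, rule_bad}
A, B = frozenset({"rule_ok"}), frozenset({"rule_bad"})
theta = A | B                                # the frame of discernment
m1 = {A: 0.6, theta: 0.4}
m2 = {A: 0.5, B: 0.3, theta: 0.2}
print(dempster_combine(m1, m2))              # belief concentrates on rule_ok
```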

Keywords: Knowledge discovery in databases, quantification, Dempster-Shafer theory, genetic programming, hierarchy, subsumption matrix.

130 Occurrence of Foreign Matter in Food: Applied Identification Method - Association of Official Agricultural Chemists (AOAC) and Food and Drug Administration (FDA)

Authors: E. C. Mattos, V. S. M. G. Daros, R. Dal Col, A. L. Nascimento

Abstract:

The aim of this study is to present the results of a retrospective survey of the foreign matter found in foods analyzed at the Adolfo Lutz Institute from July 2001 to July 2015. All the analyses were conducted according to the official methods described by the Association of Official Agricultural Chemists (AOAC) for the microanalytical procedures and by the Food and Drug Administration (FDA) for the macroanalytical procedures. The results showed that flours, cereals and derivatives such as baking and pasta products were the types of food in which foreign matter was found most frequently, followed by condiments and teas. Fragments of stored-grain insects, their larvae, webbing and excrement, dead mites and rodent excrement were the foreign matter most often found in food. Foreign matter that can pose a physical risk to the consumer's health, such as metal, stones, glass and wood, was found only rarely. Miscellaneous matter (shell, sand, dirt and seeds) was also reported. Many extraneous materials are considered unavoidable, since they are inherent to the product itself, such as insect fragments in grains. In contrast, avoidable extraneous materials are less tolerated because they are preventable with Good Manufacturing Practice. The conclusion of this work is that although most extraneous materials found in food are considered unavoidable, it is necessary to maintain Good Manufacturing Practice throughout food processing, as well as constant surveillance of the production process, in order to avoid accidents that may lead to the occurrence of these extraneous materials in food.

Keywords: Food contamination, extraneous materials, foreign matter, surveillance.

129 Issues in Spectral Source Separation Techniques for Plant-wide Oscillation Detection and Diagnosis

Authors: A.K. Tangirala, S. Babji

Abstract:

In the last few years, three multivariate spectral analysis techniques, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-negative Matrix Factorization (NMF), have emerged as effective tools for oscillation detection and isolation. While the first method is used to determine the number of oscillatory sources, the latter two are used to identify source signatures by formulating the detection problem as a source identification problem in the spectral domain. In this paper, we present a critical drawback of the underlying linear (mixing) model which strongly limits the ability of the associated source separation methods to determine the number of sources and/or identify the physical source signatures. It is shown that the assumed mixing model is only valid if each unit of the process gives equal weighting (all-pass filter) to all oscillatory components in its inputs. This is in contrast to the fact that each unit, in general, acts as a filter with a non-uniform frequency response. Thus, the model can only facilitate correct identification of a source with a single frequency component, which is again unrealistic. To overcome this deficiency, an iterative post-processing algorithm that correctly identifies the physical source(s) is developed. An additional issue with the existing methods is that they lack a procedure to pre-screen non-oscillatory/noisy measurements, which obscure the identification of oscillatory sources. In this regard, a pre-screening procedure based on the notion of a sparseness index is prescribed to eliminate the noisy and non-oscillatory measurements from the data set used for analysis.
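
One common formalization of such a sparseness index is Hoyer's measure, sketched below on power spectra. Using Hoyer's definition here is an assumption for illustration; the paper defines its own index:

```python
import numpy as np

def sparseness(x):
    """Hoyer's sparseness index: 1 for a single spectral spike, 0 for a
    perfectly flat spectrum -- a pre-screening measure of how oscillatory
    a measurement is."""
    x = np.abs(np.asarray(x, dtype=float))
    n = x.size
    l1, l2 = x.sum(), np.sqrt((x**2).sum())
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

t = np.arange(1024)
oscillatory = np.sin(2 * np.pi * 0.05 * t)                 # single tone
noisy = np.random.default_rng(0).normal(size=1024)         # broadband noise
for name, sig in [("oscillatory", oscillatory), ("noisy", noisy)]:
    spectrum = np.abs(np.fft.rfft(sig))**2
    print(name, round(sparseness(spectrum), 2))            # oscillatory >> noisy
```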

Keywords: Non-negative matrix factorization, PCA, source separation, plant-wide diagnosis.

128 Data Compression in Ultrasonic Network Communication via Sparse Signal Processing

Authors: Beata Zima, Octavio A. Márquez Reyes, Masoud Mohammadgholiha, Jochen Moll, Luca De Marchi

Abstract:

This paper presents an approach to using compressed sensing for signal encoding and information transfer within a guided wave sensor network composed of specially designed frequency steerable acoustic transducers (FSATs). Wave propagation in a damaged plate was simulated using the commercial FEM-based software COMSOL. Guided waves were excited by means of FSATs, characterized by the special shape of their electrodes and modeled using PIC255 piezoelectric material. The special shape of the FSAT allows wave energy to be focused in a certain direction according to the frequency components of its actuation signal, which makes a larger monitored area available. The process begins when an FSAT detects and records a reflection from damage in the structure; this signal is then encoded and prepared for transmission using a combined approach based on Compressed Sensing Matching Pursuit and Quadrature Amplitude Modulation (QAM). After the signal is encoded in binary, the information is transmitted between the nodes in the network. The message reaches the last node, where it is finally decoded and processed to be used for damage detection and localization purposes. The main aim of the investigation is to determine the location of detected damage using the reconstructed signals. The study demonstrates that the special steering capabilities of FSATs not only facilitate the detection of damage but also permit transmitting the damage information to a chosen area in a specific direction of the investigated structure.
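
The compression step can be pictured with plain matching pursuit: the recorded signal is greedily approximated by a few dictionary atoms, and only the (index, coefficient) pairs need to be modulated and transmitted. A minimal sketch with an invented random dictionary (the actual dictionary and stopping rule in the paper differ):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy matching pursuit: encode a signal as a few (index, coefficient)
    pairs against an overcomplete dictionary with unit-norm columns."""
    residual = signal.astype(float)
    code = []
    for _ in range(n_atoms):
        corr = dictionary.T @ residual
        k = int(np.argmax(np.abs(corr)))          # best-matching atom
        code.append((k, corr[k]))
        residual = residual - corr[k] * dictionary[:, k]
    return code, residual

rng = np.random.default_rng(3)
D = rng.normal(size=(256, 512))
D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms
x = 2.0 * D[:, 10] - 1.5 * D[:, 300]              # signal sparse in D
code, res = matching_pursuit(x, D, 4)
print([(k, round(c, 2)) for k, c in code], round(np.linalg.norm(res), 3))
```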

Keywords: Data compression, ultrasonic communication, guided waves, FEM analysis.

127 Privacy Concerns and Law Enforcement Data Collection to Tackle Domestic and Sexual Violence

Authors: Francesca Radice

Abstract:

It has been observed that violent or coercive behaviour can be apparent from initial conversations on dating apps like Tinder. Child pornography, stalking and coercive control are among the criminal offences arising from dating apps, and women have been murdered after finding partners through Tinder. Police databases and predictive policing are novel approaches taken to prevent crime before harm is done. This research will investigate how police databases can be used in a privacy-preserving way to characterise users in terms of their potential for violent crime. Using the COPS database of the NSW Police, we will explore how a past criminal record can be interpreted to yield a category of potential danger for each dating app user. It is then up to each subscriber's judgement what degree of potential danger they are prepared to accept. Sentiment analysis is an area where research into natural language processing has made great progress over the last decade. This research will investigate how sentiment analysis can be used to interpret exchanges between dating app users to detect manipulative or coercive sentiments, which can be used to alert law enforcement if they continue for a defined number of communications. One potential problem of this approach is the prejudice a categorisation can cause; another drawback is the possibility of misinterpreting communications and involving law enforcement without reason. The approach will be thoroughly tested with cross-checks by human readers, who verify both the level of danger predicted by the interpretation of the criminal record and the sentiment detected from personal messages. Even if only a few violent crimes can be prevented, the approach will have a tangible value for real people.

Keywords: Sentiment analysis, data mining, predictive policing, virtual manipulation.

126 The Effect of Main Factors on Forces during FSJ Processing of AA2024 Aluminum

Authors: Dunwen Zuo, Yongfang Deng, Bo Song

Abstract:

An attempt is made here to measure the forces in three directions in the AA2024 aluminum FSJ (Friction Stir Joining) process, under conditions of different feed speeds and different tool tilt angles, and with and without a pin on the tool, using an octagonal ring dynamometer, and to investigate how four main factors influence the forces in the FSJ process. It is found that a high feed speed leads to a small feed force and a small lateral force, but to a large feed force in the stable joining stage of the process. As the rotational speed increases, the time required for the axial force to drop from its maximum to its minimum in the push-up process increases. In the stable joining stage, the rotational speed has little effect on the feed force; a large rotational speed leads to a small lateral force and a small axial force. The maximum axial force increases as the tilt angle of the tool increases in the downward movement stage. At the moment feeding starts, as the tilt angle of the tool increases, the amplitude of the axial force increase becomes larger. In the stable joining stage, with increasing tool tilt angle, the axial force increases, the lateral force decreases, and the feed force remains almost unchanged. A tool with a pin decreases the axial force in the downward movement stage. Compared with the tool without a pin, the tool with a pin increases the feed force and lateral force but reduces the axial force in the stable joining stage.

Keywords: FSJ, force factor, AA2024, friction stir joining.

125 Design and Modeling of Human Middle Ear for Harmonic Response Analysis

Authors: Shende Suraj Balu, A. B. Deoghare, K. M. Pandey

Abstract:

The human middle ear (ME) is a delicate and vital organ. It has a complex structure that performs various functions, such as receiving sound pressure, producing vibrations of the eardrum and propagating them to the inner ear. It consists of the Tympanic Membrane (TM), three auditory ossicles, various ligament structures and muscles. Incidents such as traumata, infections, ossification of the ossicular structures and other pathologies may damage the ME organs. These conditions can be surgically treated by employing a prosthesis. However, the suitability of the prosthesis needs to be examined in advance, prior to the surgery. A few decades ago, this issue was addressed and analyzed by developing an equivalent representation, either in the form of a spring-mass system, an electrical system using an R-L-C circuit, or an approximated CAD model. Nowadays, however, a three-dimensional ME model can be constructed from micro X-Ray Computed Tomography (μCT) scan data; moreover, concerns about patient-specific integrity pertaining to the disease can be examined well in advance. The current research work develops the ME model from stacks of μCT images, which are used as the input to the MIMICS Research 19.0 (Materialise Interactive Medical Image Control System) software. A stack of CT images is converted into a geometrical surface model to build an accurate morphology of the ME. The work is further extended to understand the dynamic behaviour of the harmonic response of the stapes footplate and umbo for different sound pressure levels applied at the lateral side of the eardrum, using a finite element approach. The pathological condition cholesteatoma of the ME is investigated to obtain the peak-to-peak displacement of the stapes footplate and umbo. Apart from this condition, other pathologies, mainly changes in the stiffness of the stapedial ligament, changes in TM thickness, and ossicular chain separation and fixation, are also explored. The developed ME model for pathologies is validated by comparing the results with those available in the literature, and also with the results of a normal ME, to calculate the percentage loss in hearing capability.

Keywords: Computed tomography, human middle ear, harmonic response, pathologies, tympanic membrane.

124 Innovation in “Low-Tech” Industries: Portuguese Footwear Industry

Authors: António Marques, Graça Guedes

Abstract:

The Portuguese footwear industry has shown remarkable performance over the last five years in export values, trade balance and other economic indicators. After a long period of difficulties, with a strong reduction in companies and employees from 1994 until 2009, the Portuguese footwear industry changed its strategy and is now a success case among the international players of footwear. Only the Italian industry sells footwear at a higher value than the Portuguese, and the distance between them is decreasing year by year. This paper analyses how Portuguese footwear companies innovate, according to the classification proposed by the Oslo Manual. It also analyses the strategy followed in the innovation process and shows the linkage between the type of innovation and the innovation strategy. The research methodology was qualitative, and the strategy for data collection was the case study; the qualitative data were analyzed with the MAXQDA software. The economic results of the footwear companies studied show differences between them, and these differences are related to the innovation strategy adopted. The companies focused on product and marketing innovation, oriented to their target market, have higher turnover-per-worker ratios than the companies focused on process innovation. Nevertheless, all the footwear companies in this "low-tech" industry create value and contributed to a positive foreign trade balance of 1,310 million euros in 2013. The growth strategies implemented involve the participation of sectorial organizations in several innovative projects, and cooperation between all of them is clearly a critical element in the performance achieved by the companies and in the innovation observed. The Portuguese footwear sector has shown excellent performance in recent years (economic results, export values, trade balance, brands and international image), and this performance is strongly related to the innovation strategy followed, the type of innovation and the networks in the cluster. A simplified model, called the "Ace of Diamonds", is proposed by the authors; it explains how this performance was reached by the seven companies that participated in the study (two of them leaders in the sector), and considers whether the model can be used in other traditional and "low-tech" industries.

Keywords: Footwear industry, innovation strategy, low-tech industry, Oslo Manual.

123 Evaluation of Video Quality Metrics and Performance Comparison on Contents Taken from Most Commonly Used Devices

Authors: Pratik Dhabal Deo, Manoj P.

Abstract:

With the increasing number of social media users, the amount of video content available has also significantly increased. The number of smartphone users is currently at its peak, and many increasingly use their smartphones as their main photography and recording devices. There have been many developments in the field of video quality assessment in the past years, and more research on various other aspects of video and images is being done. Datasets that contain a huge number of videos from different high-end devices make it difficult to analyze the performance of the metrics on content from the most commonly used devices, even when they contain content taken in poor lighting conditions using lower-end devices. These devices face many distortions due to various factors, since the spectrum of content recorded on them is huge. In this paper, we present an analysis of objective Video Quality Assessment (VQA) metrics on content taken only from the most commonly used devices, focusing on full-reference metrics. To carry out this research, we created a custom dataset containing a total of 90 videos taken from the three most commonly used devices: an Android smartphone, an iOS smartphone and a Digital Single-Lens Reflex (DSLR) camera. To the videos taken on each of these devices, the six most common types of distortion that users face were applied, in addition to the already existing H.264 compression, based on four reference videos; each of the six applied distortions has three levels of degradation. The five most popular VQA metrics were evaluated on this dataset, and the highest and lowest values of each metric on the distortions were recorded. It was found that blur is the artifact on which most of the metrics did not perform well. Thus, in order to understand the results better, the amount of blur in the dataset was calculated, and an additional evaluation of the metrics was done using the High Efficiency Video Coding (HEVC) codec, the successor to H.264 compression, on the camera that proved to be the sharpest among the devices. The results show that as the resolution increases, the performance of the metrics tends to become more accurate. The best-performing metric is VQM, with very few inconsistencies and inaccurate results when the compression applied is H.264; when HEVC compression is applied, the Structural Similarity (SSIM) metric and Video Multimethod Assessment Fusion (VMAF) perform significantly better.
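
For readers unfamiliar with the full-reference setting, the sketch below scores a synthetically blurred frame against its reference with SSIM using scikit-image. The frames are synthetic; video metrics typically average such per-frame scores over a clip:

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(4)
reference = rng.random((240, 320))        # synthetic reference frame in [0, 1]

# crude 3x3 box blur stands in for the blur distortion
kernel = np.ones((3, 3)) / 9.0
padded = np.pad(reference, 1, mode="edge")
blurred = sum(padded[i:i + 240, j:j + 320] * kernel[i, j]
              for i in range(3) for j in range(3))

score = ssim(reference, blurred, data_range=1.0)
print(round(score, 3))  # < 1.0; lower means stronger degradation
```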

Keywords: Distortion, metrics, recording, frame rate, video quality assessment.

122 LIDAR Obstacle Warning and Avoidance System for Unmanned Aircraft

Authors: Roberto Sabatini, Alessandro Gardi, Mark A. Richardson

Abstract:

The availability of powerful eye-safe laser sources and the recent advancements in electro-optical and mechanical beam-steering components have allowed laser-based Light Detection and Ranging (LIDAR) to become a promising technology for obstacle warning and avoidance in a variety of manned and unmanned aircraft applications. LIDAR's outstanding angular resolution and accuracy are coupled with good detection performance over a wide range of incidence angles and weather conditions, providing an ideal obstacle avoidance solution, which is especially attractive for low-level flying platforms such as helicopters and small-to-medium size Unmanned Aircraft (UA). The Laser Obstacle Avoidance Marconi (LOAM) system is one such system, jointly developed and tested by SELEX-ES and the Italian Air Force Research and Flight Test Centre. The system was originally conceived for military rotorcraft platforms and, in this paper, we briefly review the previous work and discuss in more detail some of the key development activities required for integrating LOAM on UA platforms. The main hardware and software design features of this LOAM variant are presented, including a brief description of the system interfaces and sensor characteristics, together with the system performance models and data processing algorithms for obstacle detection, classification and avoidance. In particular, the paper focuses on the algorithm proposed for optimal avoidance trajectory generation in UA applications.

Keywords: LIDAR, Low-Level Flight, Nap-of-the-Earth Flight, Near Infra-Red, Obstacle Avoidance, Obstacle Detection, Obstacle Warning System, Sense and Avoid, Trajectory Optimisation, Unmanned Aircraft.

121 Determination of Potential Agricultural Lands Using Landsat 8 OLI Images and GIS: Case Study of Gokceada (Imroz) Turkey

Authors: Rahmi Kafadar, Levent Genc

Abstract:

In the present study, the aim was to determine potential agricultural lands (PALs) on Gokceada (Imroz) Island in Canakkale province, Turkey. Seven-band Landsat 8 OLI images acquired on July 12 and August 13, 2013, and their 14-band combination image were used to identify the current Land Use Land Cover (LULC) status. Principal Component Analysis (PCA) was applied to the three Landsat datasets in order to reduce the correlation between the bands. A total of six original and PCA images were classified using a supervised classification method to obtain LULC maps comprising six main classes ("Forest", "Agriculture", "Water Surface", "Residential Area-Bare Soil", "Reforestation" and "Other"). Accuracy assessment was performed by checking the accuracy of 120 randomized points for each LULC map. The best overall accuracy and Kappa statistic values (90.83% and 0.8791, respectively) were found for the PCA image generated from the 14-band combined image, called 3-B/JA. A Digital Elevation Model (DEM) with 15 m spatial resolution (ASTER) was used to take topographical characteristics into account. Soil properties were obtained by digitizing the 1:25,000-scale soil maps of the Rural Services Directorate General. The PALs were determined using Geographic Information Systems (GIS). The procedure was applied considering that the "Other" class of the LULC map may be used for agricultural purposes in the future. Overlay analysis was conducted using Slope (S), Land Use Capability Class (LUCC), Other Soil Properties (OSP) and Land Use Capability Sub-Class (SUBC) layers. A total of 901.62 ha within the "Other" class (15,798.2 ha) of the LULC map was determined as PALs. These lands were ranked as "Very Suitable", "Suitable", "Moderate Suitable" and "Low Suitable": 8.03 ha were classified as "Very Suitable", 18.59 ha as "Suitable" and 11.44 ha as "Moderate Suitable", while 756.56 ha were found to be "Low Suitable". The results obtained from this preliminary study can serve as a basis for further studies.
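
The band-decorrelation step works as sketched below: the band stack is reshaped so that pixels are samples and bands are features, and PCA concentrates the shared variance into the leading components. The synthetic 14-band stack is invented for illustration, not the Landsat data themselves:

```python
import numpy as np
from sklearn.decomposition import PCA

# PCA on a multi-band image stack: reshape (bands, rows, cols) into an
# (n_pixels, n_bands) table, decorrelate the bands, and keep the leading
# components as input to supervised classification.
bands, rows, cols = 14, 200, 200              # e.g., the 14-band combined image
rng = np.random.default_rng(5)
base = rng.random((rows, cols))               # shared scene structure
stack = np.stack([base + 0.05 * rng.random((rows, cols)) for _ in range(bands)])

X = stack.reshape(bands, -1).T                # pixels as samples, bands as features
pca = PCA(n_components=3)
components = pca.fit_transform(X)             # (n_pixels, 3)
pc_images = components.T.reshape(3, rows, cols)
print(np.round(pca.explained_variance_ratio_, 3))  # PC1 carries most variance
```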

Keywords: Digital Elevation Model (DEM), Geographic Information Systems (GIS), LANDSAT 8 OLI-TIRS, Land Use Land Cover (LULC).

120 Applying Biosensors’ Electromyography Signals through an Artificial Neural Network to Control a Small Unmanned Aerial Vehicle

Authors: Mylena McCoggle, Shyra Wilson, Andrea Rivera, Rocio Alba-Flores, Valentin Soloiu

Abstract:

This work describes a system that uses electromyography (EMG) signals obtained from muscle sensors and an Artificial Neural Network (ANN) for signal classification and pattern recognition, used to control a small unmanned aerial vehicle through specific arm movements. The main objective of this endeavor is the development of an intelligent interface that allows the user to control the flight of a drone beyond direct manual control. The sensors used were MyoWare Muscle sensors, which contain two EMG electrodes each and were used to collect signals from the posterior (extensor) and anterior (flexor) forearm muscles and the bicep. The raw signals from each sensor were collected using an Arduino Uno. Data processing algorithms were developed to classify the signals generated by the arm's muscles when performing specific movements, namely flexing, resting and motion of the arm; with these arm motions, roll control of the drone was achieved. MATLAB software was utilized to condition the signals and prepare them for classification. To generate the input vector for the ANN and perform the classification, the root mean square and the standard deviation of the signals from each electrode were computed. The neuromuscular information was used to train an ANN with a single hidden layer of 10 neurons to categorize the four targets. The classification results show that an accuracy of 97.5% was obtained. Afterwards, the classification results are used to generate the appropriate control signals from the computer to the drone through a Wi-Fi network connection. These procedures were successfully tested, with the drone responding successfully in real time to the commanded inputs.
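
The feature extraction and classification pipeline can be sketched as follows. The sketch uses scikit-learn rather than MATLAB, and the synthetic EMG windows and class structure are invented for illustration; only the two features (RMS and standard deviation per channel) and the single 10-neuron hidden layer follow the description above:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def emg_features(window):
    """Root mean square and standard deviation per electrode channel,
    matching the two features described for the input vector."""
    rms = np.sqrt(np.mean(window**2, axis=1))
    return np.concatenate([rms, np.std(window, axis=1)])

# Hypothetical training data: 3 electrode channels, 200 windows per gesture
rng = np.random.default_rng(6)
X, y = [], []
for label, gain in enumerate([0.2, 0.6, 1.0, 1.4]):   # 4 target classes
    for _ in range(200):
        window = gain * rng.normal(size=(3, 100))      # 100-sample EMG window
        X.append(emg_features(window))
        y.append(label)
X, y = np.array(X), np.array(y)

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0)
clf.fit(X[::2], y[::2])                                 # train on half the windows
print(round(clf.score(X[1::2], y[1::2]), 3))            # held-out accuracy
```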

Keywords: Biosensors, electromyography, Artificial Neural Network, Arduino, drone flight control, machine learning.
