Search results for: machine failures
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3305

1745 Neural Networks Models for Measuring Hotel Users Satisfaction

Authors: Asma Ameur, Dhafer Malouche

Abstract:

Nowadays, user comments on the Internet have an important impact on hotel bookings, which confirms that e-reputation can influence customer loyalty to a hotel. In this way, e-reputation has become a real differentiator between hotels. For this reason, opinion mining offers a unique opportunity to analyze such comments, since it allows information related to the polarity of user reviews to be extracted. This sentiment analysis (opinion mining) represents a new line of research for analyzing unstructured textual data. Knowing the e-reputation score helps the hotelier to better manage his marketing strategy, and the score obtained translates into the image of the hotel and differentiates it from others. This research therefore highlights the importance of scoring hotel satisfaction. To calculate the satisfaction score, sentiment analysis can be carried out with several machine learning techniques; this study treats the extracted textual data using the Artificial Neural Networks (ANNs) approach. In this context, we adopt this technique to extract information from the comments available on the TripAdvisor website. The present paper details the description and modeling of the ANN approach for scoring online hotel reviews. The validation of the method yields a significant model for hotel sentiment analysis, making it possible to determine precisely the polarity of hotel users' reviews. The empirical results show that ANNs are an accurate approach for sentiment analysis, and that the proposed approach also serves dimensionality reduction for the clustering of textual data. Thus, this study provides researchers with a useful exploration of this technique. Finally, we outline guidelines for future research in the hotel e-reputation field, such as comparing ANNs with other techniques.
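
As a rough illustration of the scoring idea described above, the sketch below trains a small feed-forward neural network on TF-IDF features of review texts and averages the predicted positive-class probabilities into a satisfaction score. The reviews, labels, and network size are hypothetical placeholders, not the authors' TripAdvisor corpus or model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical annotated reviews: 1 = positive, 0 = negative.
reviews = ["The room was spotless and the staff were friendly",
           "Dirty bathroom and rude reception, never again",
           "Great location, breakfast could be better",
           "Terrible noise all night, awful experience"]
labels = [1, 0, 1, 0]

# TF-IDF features feed a small feed-forward neural network (one hidden layer).
model = make_pipeline(TfidfVectorizer(),
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0))
model.fit(reviews, labels)

# A hotel's satisfaction score could then be the mean predicted probability of
# the positive class over all of its reviews.
new_reviews = ["Lovely pool and very helpful concierge"]
print(model.predict_proba(new_reviews)[:, 1].mean())
```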

Keywords: clustering, consumer behavior, data mining, e-reputation, machine learning, neural network, online hotel reviews, opinion mining, scoring

Procedia PDF Downloads 136
1744 Artificial Intelligence Created Inventions

Authors: John Goodhue, Xiaonan Wei

Abstract:

Current legal decisions and policies regarding the naming of artificial intelligence as an inventor are reviewed, with emphasis on the recent decisions by the European Patent Office regarding the DABUS inventions, which hold that an artificial intelligence machine cannot be an inventor. Next, a set of hypotheticals is introduced and examined to better understand how artificial intelligence might be used to create, or assist in creating, new inventions, and how the application of existing law or proposed changes to it would affect the ability to protect these inventions, including restrictions on artificial intelligence being named as an inventor, ownership of inventions made by artificial intelligence, and the effects on legal standards for inventiveness or obviousness.

Keywords: Artificial intelligence, innovation, invention, patent

Procedia PDF Downloads 173
1743 Multi-Agent System Based Solution for Operating Agile and Customizable Micro Manufacturing Systems

Authors: Dylan Santos De Pinho, Arnaud Gay De Combes, Matthieu Steuhlet, Claude Jeannerat, Nabil Ouerhani

Abstract:

The Industry 4.0 initiative has been launched to address huge challenges related to ever-smaller batch sizes. The end-user need for highly customized products requires highly adaptive production systems in order to keep the same efficiency on shop floors. Most of the classical software solutions that operate the manufacturing processes on a shop floor are based on rigid Manufacturing Execution Systems (MES), which are not capable of adapting the production order on the fly depending on changing demands and/or conditions. In this paper, we present a highly modular and flexible solution to orchestrate a set of production systems composed of a micro-milling machine tool, a polishing station, a cleaning station, a part inspection station, and a rough material store. The different stations are installed according to a novel matrix configuration of a 3x3 vertical shelf. The different cells of the shelf are connected through horizontal and vertical rails on which a set of shuttles circulate to transport the machined parts from one station to another. Our software solution for orchestrating the tasks of each station is based on a multi-agent system. Each station and each shuttle is operated by an autonomous agent. All agents communicate with a central agent that holds all the information about the manufacturing order. The core innovation of this paper lies in the path planning of the different shuttles, with two major objectives: 1) reduce the waiting time of stations and thus reduce the cycle time of the entire part, and 2) reduce disturbances such as vibration generated by the shuttles, which highly impact the manufacturing process and thus the quality of the final part. Simulation results show that the cycle time of the parts is reduced by up to 50% compared with MES-operated linear production lines, while the disturbance is systematically avoided for critical stations like the milling machine tool.
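
A minimal sketch of the dispatching idea, not the authors' actual multi-agent implementation: a central order agent assigns idle shuttles to the stations that have waited longest and defers moves to a vibration-sensitive station while it is machining. The station names and attributes are assumptions for illustration.

```python
import heapq

# Hypothetical station states: how long each has waited for a shuttle, whether it
# is vibration-sensitive, and whether it is currently machining.
stations = {"milling": {"waiting_since": 3, "sensitive": True, "busy": True},
            "polishing": {"waiting_since": 1, "sensitive": False, "busy": False},
            "cleaning": {"waiting_since": 5, "sensitive": False, "busy": False}}
idle_shuttles = ["shuttle_1", "shuttle_2"]

def dispatch(stations, idle_shuttles):
    # Rank requests by waiting time (longest first); skip a sensitive station while
    # it is machining so shuttle-induced vibration does not disturb the process.
    requests = [(-info["waiting_since"], name) for name, info in stations.items()
                if not (info["sensitive"] and info["busy"])]
    heapq.heapify(requests)
    plan = []
    while requests and idle_shuttles:
        _, station = heapq.heappop(requests)
        plan.append((idle_shuttles.pop(0), station))
    return plan

print(dispatch(stations, idle_shuttles))  # e.g. [('shuttle_1', 'cleaning'), ('shuttle_2', 'polishing')]
```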

Keywords: multi-agent systems, micro-manufacturing, flexible manufacturing, transfer systems

Procedia PDF Downloads 130
1742 Effect of the Workpiece Position on the Manufacturing Tolerances

Authors: Rahou Mohamed, Sebaa Fethi, Cheikh Abdelmadjid

Abstract:

Manufacturing tolerancing is intended to determine the intermediate geometrical and dimensional states of the part during its manufacturing process. These manufacturing dimensions serve to satisfy not only the functional requirements given in the definition drawing but also the manufacturing constraints, for example geometrical defects of the machine, vibration, and the wear of the cutting tool. The choice of positioning has an important influence on the cost and quality of manufacture. To address this problem, a two-step approach has been developed. The first step is dedicated to the determination of the optimum position. In the second step, a study was carried out on the effect of tightening on the tolerance interval.

Keywords: dispersion, tolerance, manufacturing, position

Procedia PDF Downloads 336
1741 Optimizing the Use of Google Translate in Translation Teaching: A Case Study at Prince Sultan University

Authors: Saadia Elamin

Abstract:

The quasi-universal use of smartphones with an internet connection available at all times makes it a reflex action for translation undergraduates, once they encounter the least translation problem, to turn to the freely available web resource: Google Translate. As with other translator resources and aids, the use of Google Translate needs to be moderated in such a way that it contributes to developing translation competence. Here, instead of interfering with students' learning by providing ready-made solutions which might not always fit into the contexts of use, it can help to consolidate the skills of analysis and transfer which students have already acquired. One way to do so is by training students to adhere to the basic principles of translation work. The most important of these is that analyzing the source text for comprehension comes first and foremost, before jumping into the search for target language equivalents. Another basic principle is that certain translator aids and tools can be used for comprehension, while others are to be confined to the phase of re-expressing the meaning in the target language. The present paper reports on the experience of making a measured and reasonable use of Google Translate in translation teaching at Prince Sultan University (PSU), Riyadh. First, it traces the development that has taken place in the field of translation in this age of information technology, be it in translation teaching and translator training, or in the real-world practice of the profession. Second, it describes how, with the aim of reflecting this development in the way translation is taught, senior students, after being trained in post-editing machine translation output, are authorized to use Google Translate in classwork and assignments. Third, the paper elaborates on the findings of this case study, which has demonstrated that Google Translate, if used at the appropriate levels of training, can help to enhance students' ability to perform different translation tasks. This help extends from the search for terms and expressions to the tasks of drafting the target text, revising its content and finally editing it. In addition, using Google Translate in this way fosters a reflexive and critical attitude towards web resources in general, thus maximizing the benefit gained from them in preparing students to meet the requirements of the modern translation job market.

Keywords: Google Translate, post-editing machine translation output, principles of translation work, translation competence, translation teaching, translator aids and tools

Procedia PDF Downloads 473
1740 Modeling Biomass and Biodiversity across Environmental and Management Gradients in Temperate Grasslands with Deep Learning and Sentinel-1 and -2

Authors: Javier Muro, Anja Linstadter, Florian Manner, Lisa Schwarz, Stephan Wollauer, Paul Magdon, Gohar Ghazaryan, Olena Dubovyk

Abstract:

Monitoring the trade-off between biomass production and biodiversity in grasslands is critical to evaluate the effects of management practices across environmental gradients. New generations of remote sensing sensors and machine learning approaches can model grasslands' characteristics with varying accuracies. However, studies often fail to cover a sufficiently broad range of environmental conditions, and evidence suggests that prediction models might be case-specific. In this study, biomass production and biodiversity indices (species richness and Fisher's α) are modeled in 150 grassland plots for three sites across Germany. These sites represent a north-south gradient and are characterized by distinct soil types, topographic properties, climatic conditions, and management intensities. Predictors used are derived from Sentinel-1 and -2 and a set of topo-edaphic variables. The transferability of the models is tested by training and validating at different sites. The performance of feed-forward deep neural networks (DNN) is compared to a random forest algorithm. While biomass predictions across gradients and sites were acceptable (r² ≈ 0.5), predictions of biodiversity indices were poor (r² ≈ 0.14). DNN showed higher generalization capacity than random forest when predicting biomass across gradients and sites (relative root mean squared error of 0.5 for DNN vs. 0.85 for random forest). DNN also achieved high performance when using the Sentinel-2 surface reflectance data rather than different combinations of spectral indices, Sentinel-1 data, or topo-edaphic variables, simplifying dimensionality. This study demonstrates the necessity of training biomass and biodiversity models using a broad range of environmental conditions and ensuring spatial independence to obtain realistic and transferable models in which plot-level information can be upscaled to the landscape scale.
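
The model comparison described above can be sketched as follows, with synthetic plot data standing in for the actual Sentinel-1/2 and topo-edaphic predictors; the feed-forward network and random forest are scored by relative RMSE on a held-out subset, and all sizes and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 12))                                 # 150 plots, 12 spectral/topo-edaphic predictors
y = X[:, 0] * 2 + X[:, 1] + rng.normal(scale=0.5, size=150)    # synthetic biomass response

X_train, X_test, y_train, y_test = X[:100], X[100:], y[:100], y[100:]
for name, model in [("DNN", MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)),
                    ("RF", RandomForestRegressor(n_estimators=200, random_state=0))]:
    model.fit(X_train, y_train)
    rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
    print(name, "relative RMSE:", rmse / y_test.std())
```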

Keywords: ecosystem services, grassland management, machine learning, remote sensing

Procedia PDF Downloads 218
1739 Waste-Based Surface Modification to Enhance Corrosion Resistance of Aluminium Bronze Alloy

Authors: Wilson Handoko, Farshid Pahlevani, Isha Singla, Himanish Kumar, Veena Sahajwalla

Abstract:

Aluminium bronze alloys are well known for their superior abrasion resistance, tensile strength, and non-magnetic properties, due to the co-presence of iron (Fe) and aluminium (Al) as alloying elements, and have been commonly used in many industrial applications. However, continuous exposure to the marine environment increases the risk of failure of Al bronze alloy parts. Although a higher level of corrosion resistance can be achieved by modifying the elemental composition, this comes at a price: a more complex manufacturing process and an increased risk of reducing the ductility of the Al bronze alloy. In this research, the use of ironmaking slag and waste plastic as the input source for surface modification of Al bronze alloy was implemented. Microstructural analysis was conducted using polarised light microscopy and scanning electron microscopy (SEM) equipped with energy dispersive spectroscopy (EDS). An electrochemical corrosion test was carried out through the Tafel polarisation method, and the protection efficiency relative to the base material was calculated. Results indicate that the uniform modified surface, which is the result of a selective diffusion process, enhanced the corrosion resistance properties by up to 12.67%. This approach has opened a new opportunity for various industrial utilisations at commercial scale by minimising the dependency on natural resources and transforming waste sources into protective coatings in environmentally friendly and cost-effective ways.

Keywords: aluminium bronze, waste-based surface modification, Tafel polarisation, corrosion resistance

Procedia PDF Downloads 236
1738 Using Textual Pre-Processing and Text Mining to Create Semantic Links

Authors: Ricardo Avila, Gabriel Lopes, Vania Vidal, Jose Macedo

Abstract:

This article offers an approach to the automatic discovery of semantic concepts and links in the domain of Oil Exploration and Production (E&P). Machine learning methods combined with textual pre-processing techniques were used to detect local patterns in texts and, thus, generate new concepts and new semantic links. Even using more specific vocabularies within the oil domain, our approach has achieved satisfactory results, suggesting that the proposal can be applied in other domains and languages, requiring only minor adjustments.
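
A deliberately simplified illustration of deriving candidate semantic links from term co-occurrence; the actual pipeline, vocabulary, and thresholds used for the E&P domain are not detailed in the abstract, so the documents and the SKOS-style link below are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

docs = ["drilling of the exploration well in the offshore reservoir",
        "reservoir pressure measured during well testing",
        "seismic survey of the offshore exploration block"]

vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(docs)
terms = vec.get_feature_names_out()

# Term-term co-occurrence counts across documents; strong co-occurrence is used
# here as a naive signal for proposing a "related-to" (e.g. skos:related) link.
cooc = (counts.T @ counts).toarray()
np.fill_diagonal(cooc, 0)
i, j = np.unravel_index(cooc.argmax(), cooc.shape)
print("candidate semantic link:", terms[i], "<->", terms[j])
```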

Keywords: semantic links, data mining, linked data, SKOS

Procedia PDF Downloads 179
1737 A Novel Machine Learning Approach to Aid Agrammatism in Non-fluent Aphasia

Authors: Rohan Bhasin

Abstract:

Agrammatism in non-fluent aphasia can be defined as a language disorder wherein a patient can only use content words (nouns, verbs, and adjectives) for communication, and their speech is devoid of functional word types like conjunctions and articles, generating speech with extremely rudimentary grammar. Past approaches involve speech therapy of some order, with conversation analysis used to analyse pre-therapy speech patterns and qualitative changes in conversational behaviour after therapy. We describe a novel method to generate functional words (prepositions, articles, etc.) around content words (nouns, verbs, and adjectives) using a combination of natural language processing and deep learning algorithms, which can be applied to assist communication. The approach the paper investigates is LSTMs or seq2seq: a sequence-to-sequence (seq2seq) or LSTM model takes in a sequence of inputs and outputs a sequence. This approach needs a significant amount of training data, with each training example being a pair such as (content words, complete sentence). We generate such data by starting with complete sentences from a text source and removing the functional words to obtain just the content words. However, this approach requires a lot of training data to produce coherent output. The assumption of this approach is that the content words received in the input are to be preserved, i.e., they will not be altered after the functional grammar is slotted in. This is a potential limitation in cases of severe agrammatism, where such order might not be inherently correct. The applications of this approach can be used to assist communication in cases of mild agrammatism in non-fluent aphasia. By generating these function words around the content words, we can provide meaningful sentence options to the patient for articulate conversations. Our project thus translates the task of generating sentences from content-specific words into an assistive technology for non-fluent aphasia patients.
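
The training-pair generation step described above could look roughly like the sketch below: complete sentences are stripped of function words to form (content words, complete sentence) pairs on which a seq2seq/LSTM model would later be trained. The function-word list and sentences are illustrative only.

```python
# Hypothetical function-word list; a real system would use a fuller closed-class lexicon.
FUNCTION_WORDS = {"the", "a", "an", "and", "but", "or", "in", "on", "at", "to",
                  "of", "is", "are", "was", "were", "with", "for"}

def make_pair(sentence):
    # Keep only content words as the model input; the full sentence is the target.
    content = [w for w in sentence.lower().split() if w not in FUNCTION_WORDS]
    return (" ".join(content), sentence)

corpus = ["The dog is sleeping on the sofa",
          "She went to the market in the morning"]
pairs = [make_pair(s) for s in corpus]
for src, tgt in pairs:
    print(f"input: '{src}'  ->  target: '{tgt}'")

# These pairs would then feed an encoder-decoder (seq2seq) model that learns to
# re-insert the missing function words around the content words.
```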

Keywords: aphasia, expressive aphasia, assistive algorithms, neurology, machine learning, natural language processing, language disorder, behaviour disorder, sequence to sequence, LSTM

Procedia PDF Downloads 164
1736 Determination of Slope of Hilly Terrain by Using Proposed Method of Resolution of Forces

Authors: Reshma Raskar-Phule, Makarand Landge, Saurabh Singh, Vijay Singh, Jash Saparia, Shivam Tripathi

Abstract:

For any construction project, slope calculations are necessary in order to evaluate constructability on the site, such as the slope of parking lots, sidewalks, and ramps, the slope of sanitary sewer lines, and the slope of roads and highways. When slopes and grades are to be determined, designers are concerned with establishing proper slopes and grades for their projects to assess cut-and-fill volume calculations and determine the inverts of pipes. There are several established instruments commonly used to determine slopes, such as the Dumpy level, Abney level or hand level, inclinometer, tacheometer, Henry method, etc., and surveyors are very familiar with their use to calculate slopes. However, these have some drawbacks that cannot be neglected in major surveying works. Firstly, they require expert surveyors and skilled staff. Accessibility, visibility, and accommodation in remote hilly terrain with these instruments and surveying teams are difficult. Determination of gentle slopes for road and sewer drainage construction in congested urban places with these instruments is also not easy. This paper aims to develop a method that requires minimum field work, minimum instruments, no high-end technology, instruments, or software, and low cost. It requires only basic and handy surveying accessories (a plane table with a fixed weighing machine, standard weights, an alidade, a tripod, and ranging rods) and should be able to determine the terrain slope in congested areas as well as in remote hilly terrain. Also, being simple and easy to understand and perform, local people of the rural area can easily be trained in the proposed method. The idea of the proposed method is based on the principle of resolution of weight components: when an object of standard weight 'W' is placed on an inclined surface with a weighing machine below it, the weighing machine measures only the cosine component of its weight, so the slope can be determined from the relation between the true (actual) weight and the apparent weight. A proper procedure is followed, which includes site location, centering and sighting work, fixing the whole set at the identified station, and finally taking the readings. A set of experiments for slope determination, for mild and moderate slopes, was carried out by the proposed method and by a theodolite, in a controlled environment on the college campus and in an uncontrolled environment on an actual site. The slopes determined by the proposed method were compared with those determined by the established instruments. For example, it was observed that, for the same distances on a mild slope, the difference between the slope obtained by the proposed method and by the established method ranges from 4′ for a distance of 8 m to 2°15′20″ for a distance of 16 m in an uncontrolled environment. Thus, for mild slopes, the proposed method is suitable for distances of 8 m to 10 m. The correlation between the proposed method and the established method is good, 0.91 to 0.99, for the various combinations of mild and moderate slope with controlled and uncontrolled environments.
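
A worked illustration of the weight-resolution principle: on an incline, the weighing machine reads only the normal (cosine) component of the standard weight, so the slope angle follows from the ratio of apparent to true weight. The numbers below are hypothetical, not measurements from the study.

```python
import math

true_weight = 10.0        # kg, standard calibration weight placed on the plane table
apparent_weight = 9.93    # kg, reading of the weighing machine on the inclined surface

# W_apparent = W * cos(theta)  =>  theta = arccos(W_apparent / W)
theta = math.degrees(math.acos(apparent_weight / true_weight))
print(f"slope angle = {theta:.2f} degrees")   # about 6.8 degrees for these numbers
```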

Keywords: surveying, plane table, weight component, slope determination, hilly terrain, construction

Procedia PDF Downloads 96
1735 Automated Manual Handling Risk Assessments: Practitioner Experienced Determinants of Automated Risk Analysis and Reporting Being a Benefit or Distraction

Authors: S. Cowley, M. Lawrance, D. Bick, R. McCord

Abstract:

Technology that automates manual handling (musculoskeletal disorder or MSD) risk assessments is increasingly available to ergonomists, engineers, and generalist health and safety practitioners alike. The risk assessment process is generally based on the use of wearable motion sensors that capture information about worker movements for real-time or post-hoc analysis. Traditionally, MSD risk assessment is undertaken with the assistance of a checklist, such as that from the SafeWork Australia code of practice, with the expert assessor observing the task and ideally engaging with the worker in a discussion about the detail. Automation enables the non-expert to complete assessments and does not always require the assessor to be there. This clearly has cost and time benefits for the practitioner, but is it an improvement on the assessment by a human? Human risk assessments draw on the knowledge and expertise of the assessor but, like all risk assessments, are highly subjective. The complexity of the checklists and models used in the process can be off-putting and will sometimes lead to the assessment becoming the focus and the end rather than a means to an end; the focus on risk control is lost. Automated risk assessment handles the complexity of the assessment for the assessor and delivers a simple risk score that enables decision-making regarding risk control. Being machine-based, automated assessments are objective and will deliver the same result each time they assess an identical task. However, the WHS professional needs to know whether this emergent technology asks the right questions and delivers the right answers, and whether it improves the risk assessment process and results or simply distances the professional from the task and the worker. They need clarity as to whether automation of manual task risk analysis and reporting leads to risk control or to a focus on the worker. Critically, they need evidence as to whether automation in this area of hazard management leads to better risk control or just a bigger collection of assessments. This study of the practitioner-experienced determinants of automated manual task risk analysis and reporting being a benefit or a distraction will address an understanding of emergent risk assessment technology, its use, and the things to consider when making decisions about adopting and applying these technologies.

Keywords: automated, manual-handling, risk-assessment, machine-based

Procedia PDF Downloads 119
1734 Risk Management Approach for a Secure and Performant Integration of Automated Drug Dispensing Systems in Hospitals

Authors: Hind Bouami, Patrick Millot

Abstract:

The medication dispensing system is a life-critical system whose failure may result in preventable adverse events leading to longer patient stays in hospitals or patient death. Automation has led to great improvements in life-critical systems, as it has increased safety, efficiency, and comfort. However, critical risks related to the complexity of the medical organization and to the integration of automated solutions can threaten drug dispensing security and performance. Knowledge about the system's complexity aspects and the human-machine parameters to control for the security and performance of automated equipment will help operators to secure their automation process and to optimize their system's reliability. In this context, this study aims to document the operator's situation awareness of automation risks and of the parameters involved in automation security and performance. Our risk management approach has been deployed in the North Luxembourg hospital center's pharmacy, which has been equipped with automated drug dispensing systems since 2009. With more than 4 million euros of gains generated, the North Luxembourg hospital center's success story was enabled by management commitment, the pharmacy's involvement in the implementation and improvement of the automation project, and the close collaboration between the pharmacy and the firm Sinteco to implement the innovation and organizational actions necessary for the security and performance of the integration of automated solutions. An analysis of the actions implemented by the hospital and of the parameters involved in the security and performance of automated equipment integration has been made. The parameters to control are human aspects (6.25%), technical aspects (50%), and human-machine interaction (43.75%). The implementation of an anthropocentric analysis system before automation would have prevented and optimized the control of risks related to automation.

Keywords: automated drug delivery systems, hospitals, human-centered automated system, risk management

Procedia PDF Downloads 137
1733 Early Diagnosis of Myocardial Ischemia Based on Support Vector Machine and Gaussian Mixture Model by Using Features of ECG Recordings

Authors: Merve Begum Terzi, Orhan Arikan, Adnan Abaci, Mustafa Candemir

Abstract:

Acute myocardial infarction is a major cause of death in the world; therefore, its fast and reliable diagnosis is a major clinical need. ECG is the most important diagnostic methodology used to make decisions about the management of cardiovascular diseases. In patients with acute myocardial ischemia, temporary chest pain, together with changes in the ST segment and T wave of the ECG, occurs shortly before the start of myocardial infarction. In this study, a technique that detects changes in the ST/T sections of the ECG is developed for the early diagnosis of acute myocardial ischemia. For this purpose, a database of real ECG recordings was constituted, containing records from 75 patients presenting symptoms of chest pain who underwent elective percutaneous coronary intervention (PCI). 12-lead ECGs of the patients were recorded before and during the PCI procedure. Two ECG epochs are analyzed for each patient: the pre-inflation ECG, acquired before any catheter insertion, and the occlusion ECG, acquired during balloon inflation. By using the pre-inflation and occlusion recordings, ECG features that are critical in the detection of acute myocardial ischemia are identified, and the most discriminative features are extracted. A classification technique based on the support vector machine (SVM) approach, operating with linear and radial basis function (RBF) kernels, is developed to detect ischemic events using ST-T derived joint features from the non-ischemic and ischemic states of the patients. The dataset is randomly divided into training and testing sets, and the training set is used to optimize the SVM hyperparameters by a grid-search method with 10-fold cross-validation. SVMs are designed specifically for each patient by tuning the kernel parameters in order to obtain optimal classification performance. As a result of applying the developed classification technique to real ECG recordings, it is shown that the proposed technique provides highly reliable detection of the anomalies in ECG signals. Furthermore, to develop a detection technique that can be used in the absence of an ECG recording obtained during the healthy stage, the detection of acute myocardial ischemia based only on ECG recordings obtained during ischemia is also investigated. For this purpose, a Gaussian mixture model (GMM) is used to represent the joint pdf of the most discriminating ECG features of myocardial ischemia. Then, a Neyman-Pearson type of approach is developed to detect outliers that would correspond to acute myocardial ischemia. The Neyman-Pearson decision strategy is applied by computing the average log-likelihood values of ECG segments and comparing them with a range of threshold values. For different discrimination threshold values and numbers of ECG segments, probability of detection and probability of false alarm values are computed, and the corresponding ROC curves are obtained. The results indicate that an increasing number of ECG segments provides higher performance for the GMM-based classification. Moreover, the comparison between the performances of the SVM- and GMM-based classification showed that the SVM provides higher classification performance over the ECG recordings of a considerable number of patients.
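
A hedged sketch of the two classifiers described above, using synthetic feature vectors rather than the actual ST/T-derived ECG features: an RBF-kernel SVM tuned by grid search with 10-fold cross-validation, and a Gaussian mixture model whose average log-likelihood is thresholded, Neyman-Pearson style, to flag outlier segments. All data, parameter grids, and thresholds are placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X_normal = rng.normal(0, 1, size=(200, 5))        # stand-in pre-inflation (non-ischemic) features
X_ischemic = rng.normal(1.5, 1, size=(200, 5))    # stand-in occlusion (ischemic) features
X = np.vstack([X_normal, X_ischemic])
y = np.array([0] * 200 + [1] * 200)

# SVM with grid-searched hyperparameters, analogous to the per-patient tuning.
grid = GridSearchCV(SVC(kernel="rbf"), {"C": [0.1, 1, 10], "gamma": ["scale", 0.1]}, cv=10)
grid.fit(X, y)
print("best SVM params:", grid.best_params_)

# GMM fitted on ischemic-state features; segments whose log-likelihood falls below
# a chosen threshold are flagged as outliers (Neyman-Pearson style decision).
gmm = GaussianMixture(n_components=2, random_state=0).fit(X_ischemic)
scores = gmm.score_samples(X)                     # log-likelihood per sample
threshold = np.percentile(scores, 10)             # illustrative decision threshold
print("flagged segments:", int((scores < threshold).sum()))
```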

Keywords: ECG classification, Gaussian mixture model, Neyman–Pearson approach, support vector machine

Procedia PDF Downloads 162
1732 Approaching In vivo Dosimetry for Kilovoltage X-Ray Radiotherapy

Authors: Rodolfo Alfonso, David Alonso, Albin Garcia, Jose Luis Alonso

Abstract:

Recently, a new kilovoltage radiotherapy unit, model Xstrahl 200, donated to INOR's Department of Radiotherapy (DR-INOR) in the framework of an IAEA technical cooperation project, has been commissioned. This unit is able to treat shallow and low-deep-lying lesions, as it provides 8 discrete beam qualities, from 40 to 200 kV. As part of the patient-specific quality assurance program established at DR-INOR for external beam radiotherapy, it has been recommended to implement in vivo dose measurements (IVD), as they allow eventual errors or failures in the radiotherapy process to be discovered effectively. For that purpose, a radio-photoluminescence (RPL) dosimetry system, model XXX, also donated to DR-INOR by the same IAEA project, has been studied and commissioned. The main dosimetric parameters of the RPL system, such as reproducibility, linearity, and field size influence, were assessed. In a similar way, the response of EBT3-type radiochromic film was investigated for purposes of IVD. Both systems were calibrated in terms of entrance surface dose. Results of the dosimetric commissioning of RPL and EBT3 for IVD, and their pre-clinical implementation through end-to-end test cases, are presented. The RPL dosimetry seems more advisable for hyper-fractionated schemes with larger fields and curved patient contours, such as chest wall irradiations, where the use of more than one dosimeter could be required. The radiochromic system involves smaller corrections with field size, but its sensitivity is lower; hence it is more adequate for hypo-fractionated treatments with smaller fields.

Keywords: glass dosimetry, in vivo dosimetry, kilovoltage radiotherapy, radiochromic dosimetry

Procedia PDF Downloads 398
1731 Recurrent Neural Networks for Classifying Outliers in Electronic Health Record Clinical Text

Authors: Duncan Wallace, M-Tahar Kechadi

Abstract:

In recent years, Machine Learning (ML) approaches have been successfully applied to the analysis of patient symptom data in the context of disease diagnosis, at least where such data is well codified. However, much of the data present in Electronic Health Records (EHR) is unlikely to prove suitable for classic ML approaches. Furthermore, as scores of data are widely spread across both hospitals and individuals, a decentralized, computationally scalable methodology is a priority. The focus of this paper is to develop a method to predict outliers in an out-of-hours healthcare provision center (OOHC). In particular, our research is based upon the early identification of patients who have underlying conditions that will cause them to repeatedly require medical attention. An OOHC acts as an ad-hoc delivery point for triage and treatment, where interactions occur without recourse to a full medical history of the patient in question. Medical histories relating to patients contacting an OOHC may reside in several distinct EHR systems in multiple hospitals or surgeries, which are unavailable to the OOHC in question. As such, although a local solution is optimal for this problem, it follows that the data under investigation is incomplete, heterogeneous, and comprised mostly of noisy textual notes compiled during routine OOHC activities. Through the use of deep learning methodologies, the aim of this paper is to provide the means to identify patient cases, upon initial contact, which are likely to relate to such outliers. To this end, we compare the performance of Long Short-Term Memory, Gated Recurrent Units, and combinations of both with Convolutional Neural Networks. A further aim of this paper is to elucidate the discovery of such outliers by examining the exact terms which provide a strong indication of positive and negative case entries. While free text is the principal data extracted from EHRs for classification, EHRs also contain normalized features. Although the specific demographic features treated within our corpus are relatively limited in scope, we examine whether it is beneficial to include such features among the inputs to our neural network, or whether these features are more successfully exploited in conjunction with a different form of classifier. In this regard, we compare the performance of randomly generated regression trees and support vector machines and determine the extent to which our classification program can be improved by using either of these machine learning approaches in conjunction with the output of our recurrent neural network application. The output of our neural network is also used to help determine the most significant lexemes present within the corpus for identifying high-risk patients. By combining the confidence of our classification program in relation to lexemes within true positive and true negative cases with an inverse document frequency of the lexemes related to these cases, we can determine which features act as the primary indicators of frequent-attender and non-frequent-attender cases, providing a human-interpretable appreciation of how our program classifies cases.
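
A minimal sketch, not the authors' model, of an LSTM classifier over free-text triage notes; the vocabulary, notes, and labels are hypothetical placeholders, and a GRU can be swapped in for the LSTM layer.

```python
import torch
import torch.nn as nn

# Hypothetical tokenized notes and labels: 1 = likely frequent attender (outlier).
notes = [["patient", "reports", "recurring", "chest", "pain"],
         ["minor", "cut", "on", "finger"]]
labels = torch.tensor([1.0, 0.0])

vocab = {w: i + 1 for i, w in enumerate(sorted({w for n in notes for w in n}))}  # index 0 = padding
max_len = max(len(n) for n in notes)
X = torch.tensor([[vocab[w] for w in n] + [0] * (max_len - len(n)) for n in notes])

class NoteClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=32, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.rnn = nn.LSTM(embed_dim, hidden, batch_first=True)   # nn.GRU is a drop-in swap
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):
        _, (h, _) = self.rnn(self.embed(x))
        return torch.sigmoid(self.out(h[-1])).squeeze(-1)

model = NoteClassifier(len(vocab) + 1)
loss_fn, opt = nn.BCELoss(), torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(20):                      # tiny illustrative training loop
    opt.zero_grad()
    loss = loss_fn(model(X), labels)
    loss.backward()
    opt.step()
print(model(X))                          # per-note probability of being an outlier case
```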

Keywords: artificial neural networks, data-mining, machine learning, medical informatics

Procedia PDF Downloads 131
1730 Creating Energy Sustainability in an Enterprise

Authors: John Lamb, Robert Epstein, Vasundhara L. Bhupathi, Sanjeev Kumar Marimekala

Abstract:

As we enter the new era of Artificial Intelligence (AI) and cloud computing, we rely on the machine and natural language processing capabilities of AI and on energy-efficient hardware and software devices in almost every industry sector. In these industry sectors, much emphasis is on developing new and innovative methods for producing and conserving energy and limiting the depletion of natural resources. The core pillars of sustainability are economic, environmental, and social, which are also informally referred to as the 3 P's (People, Planet and Profits). The 3 P's play a vital role in creating a core sustainability model in the enterprise. Natural resources are continually being depleted, so there is more focus on and a growing demand for renewable energy. With this growing demand, there is also a growing concern in many industries about how to reduce carbon emissions and conserve natural resources while adopting sustainability in corporate business models and policies. In our paper, we discuss the driving forces, such as climate change, natural disasters, pandemics, disruptive technologies, corporate policies, scaled business models, and emerging social media and AI platforms, that influence the three main pillars of sustainability (3 P's). Through this paper, we would like to bring an overall perspective on enterprise strategies, with a primary focus on bringing about the cultural shifts needed to adopt energy-efficient operational models. Overall, many industries across the globe are incorporating core sustainability principles such as reducing energy costs, reducing greenhouse gas (GHG) emissions, reducing waste and increasing recycling, adopting advanced monitoring and metering infrastructure, and reducing server footprint and compute resources (shared IT services, cloud computing, and application modernization), with the vision of a sustainable environment.

Keywords: climate change, pandemic, disruptive technology, government policies, business model, machine learning and natural language processing, AI, social media platform, cloud computing, advanced monitoring, metering infrastructure

Procedia PDF Downloads 111
1729 Sterilization Incident Analysis by the Association of Litigation and Risk Management Method

Authors: Souhir Chelly, Asma Ben Cheikh, Hela Ghali, Salwa Khefacha, Lamine Dhidah, Mohamed Ben Rejeb, Houyem Said Latiri

Abstract:

The hospital risk management department is primarily involved in the methodological analysis of grade-zero sterilization incidents. The system is based on a subsequent analysis process in compliance with the ongoing requirements of the Haute Autorité de Santé (HAS) for a reactive approach to risk, allowing failures to be identified and the appropriate preventive and corrective measures to be started. The use of the Association of Litigation and Risk Management (ALARM) method makes the grade-zero analysis easier and brings to light the team, institutional, organizational, temporal, and individual factors behind undesirable events. Two main factors emerged from this analysis. The pre-disinfection step in the emergency operating block was poorly performed by an unsupervised instrumentalist intern, who did not remove the battery from the micro air motor. At the sterilization unit, the worker, who was not supervised by the nurse, conditioned the motor without checking whether it still contained the battery. The main cause is that the management of human resources was inadequate at both levels: the instrumentalist intern in the block was not supervised by her supervisor, and the worker in the sterilization unit was not supervised by the responsible nurse. There is a lack of help with research, advice, and collaboration. The difficulties encountered during this type of analysis are multiple. The first is its necessary acceptance by the various care actors involved, who should not perceive it as a tool leading to individual punishment, but rather as a means to improve their practices.

Keywords: ALARM (Association of Litigation and Risk Management Method), incident, risk management, sterilization

Procedia PDF Downloads 213
1728 Dependence of the Photoelectric Exponent on the Source Spectrum of the CT

Authors: Rezvan Ravanfar Haghighi, V. C. Vani, Suresh Perumal, Sabyasachi Chatterjee, Pratik Kumar

Abstract:

The X-ray attenuation coefficient µ(E) of any substance, for energy E, is the sum of the contributions from Compton scattering, µ_Com(E), and the photoelectric effect, µ_Ph(E). In terms of the electron density (ρ_e) and the effective atomic number (Z_eff), µ_Com(E) is proportional to ρ_e·f_KN(E), while µ_Ph(E) is proportional to ρ_e·Z_eff^x / E^y, with f_KN(E) being the Klein-Nishina formula and x and y the exponents for the photoelectric effect. By taking the sample's HU at two different excitation voltages (V = V1, V2) of the CT machine, we can solve for X = ρ_e and Y = ρ_e·Z_eff^x from these two independent equations, as is attempted in DECT inversion. Since µ_Com(E) and µ_Ph(E) are both energy dependent, the coefficients of inversion also depend on (a) the source spectrum S(E,V) and (b) the detector efficiency D(E) of the CT machine. In the present paper we tabulate these coefficients of inversion for different practical manifestations of S(E,V) and D(E). The HU(V) values from the CT follow <µ(V)> = <µ_w(V)>[1 + HU(V)/1000], where the subscript 'w' refers to water and the averaging process <...> accounts for the source spectrum S(E,V) and the detector efficiency D(E). Linearity of µ(E) with respect to X and Y implies that (a) <µ(V)> is a linear combination of X and Y and (b) for inversion, X and Y can be written as linear combinations of two independent observations <µ(V1)> and <µ(V2)> with V1 ≠ V2. These coefficients of inversion naturally depend upon S(E,V) and D(E). We numerically investigate this dependence for some practical cases, taking V = 100 and 140 kVp, as used for cardiological investigations. The S(E,V) are generated using the Boone-Seibert source spectrum superposed on aluminium filters of different thickness l_Al, with 7 mm ≤ l_Al ≤ 12 mm, and D(E) is taken to be that of a typical Si[Li] solid-state detector and of a GdOS scintillator detector. In the values of X and Y found by using the calculated inversion coefficients, errors are below 2% for data with solutions of glycerol, sucrose and glucose. For low-Z_eff materials like propionic acid, Z_eff^x is overestimated by 20%, with X within 1%. For high-Z_eff materials like KOH, the value of Z_eff^x is underestimated by 22%, while the error in X is +15%. These results imply that the source may have additional filtering beyond the aluminium filter specified by the manufacturer. It is also found that the difference between the inversion coefficients for the two types of detectors is negligible; the type of detector does not affect the DECT inversion algorithm used to find the unknown chemical characteristics of the scanned materials. The effect of the source, however, should be considered an important factor in calculating the coefficients of inversion.
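
The two-voltage inversion can be sketched as a 2x2 linear solve, as below. The water attenuation values and the (a, b) coefficients are hypothetical placeholders; in the paper they are the spectrum- and detector-weighted Compton and photoelectric terms obtained from S(E,V) and D(E).

```python
import numpy as np

mu_water = {100: 0.171, 140: 0.154}                # hypothetical spectrum-averaged values (1/cm)
coeff = {100: (0.95, 0.05), 140: (0.98, 0.02)}     # hypothetical (a, b) weights per kVp

def invert(hu_100, hu_140):
    # <mu(V)> = <mu_w(V)> * (1 + HU(V)/1000) = a(V)*X + b(V)*Y
    mu = np.array([mu_water[100] * (1 + hu_100 / 1000),
                   mu_water[140] * (1 + hu_140 / 1000)])
    A = np.array([coeff[100], coeff[140]])
    X, Y = np.linalg.solve(A, mu)                  # X ~ rho_e, Y ~ rho_e * Zeff^x
    return X, Y

print(invert(hu_100=50.0, hu_140=30.0))
```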

Keywords: attenuation coefficient, computed tomography, photoelectric effect, source spectrum

Procedia PDF Downloads 400
1727 Smartphone-Based Human Activity Recognition by Machine Learning Methods

Authors: Yanting Cao, Kazumitsu Nawata

Abstract:

As smartphones are upgraded, their software and hardware become smarter, so smartphone-based human activity recognition can be described in more refined, complex, and detailed ways. In this context, we analyzed a set of experimental data obtained by observing and measuring 30 volunteers performing six activities of daily living (ADL). Due to the large sample size, and especially the 561-feature vector of time- and frequency-domain variables, cleaning these intractable features and training a proper model becomes extremely challenging. After a series of feature selection and parameter adjustment steps, a well-performing SVM classifier has been trained.
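
A hedged sketch of this workflow, with random data standing in for the 561-feature recordings: univariate feature selection reduces the feature vector and an SVM is tuned by grid search. The exact selection method and hyperparameters used in the study are not specified in the abstract, so those below are assumptions.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 561))              # 300 windows, 561 time/frequency-domain features
y = rng.integers(0, 6, size=300)             # six activities of daily living (ADL)

pipe = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=100), SVC())
search = GridSearchCV(pipe, {"svc__C": [1, 10], "svc__gamma": ["scale", 0.01]}, cv=5)
search.fit(X, y)
print("best parameters:", search.best_params_)
```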

Keywords: smart sensors, human activity recognition, artificial intelligence, SVM

Procedia PDF Downloads 144
1726 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models

Authors: V. Mantey, N. Findlay, I. Maddox

Abstract:

The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves become more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of data available is impossible to manually examine. Therefore, the question becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources to solve this problem, and models have been developed to detect buildings in optical satellite imagery. However, by and large, most models focus on affluent regions where buildings are generally larger and constructed further apart. This work is focused on the more difficult problem of detection in populated regions. The primary challenge with detecting small buildings in densely populated regions is both the spatial and spectral resolution of the optical sensor. Densely packed buildings with similar construction materials will be difficult to separate due to a similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that training models until they are overfitting the input sample can perform better in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires fewer input data. The model developed for this study has also been fine-tuned using existing, open-source, building vector datasets. This is particularly valuable in the context of disaster relief, where information is required in a very short time span. Leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest. The training period itself is also shorter for smaller datasets. Requiring less data means that only a few quality areas are necessary, and so any weaknesses or underpopulated regions in the data can be skipped over in favor of areas with higher quality vectors. In this study, a landcover classification model was developed in conjunction with the building detection tool to provide a secondary source to quality check the detected buildings. This has greatly reduced the false positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and have been employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are being used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data is scarce, overfitted models can be judiciously applied to solve time-sensitive problems.
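
A sketch of how a generalized pre-trained instance-segmentation model might be fine-tuned for building footprints, consistent with the mask-RCNN keyword below but not a description of the authors' production pipeline; the dataset loading and training loop are placeholders.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 2                                   # background + building
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box and mask heads so the generalized pre-trained model can be
# fine-tuned (deliberately past the usual early-stopping point) on a small,
# region-specific set of building footprint vectors.
in_feats = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, num_classes)
in_feats_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feats_mask, 256, num_classes)

optimizer = torch.optim.SGD([p for p in model.parameters() if p.requires_grad],
                            lr=0.005, momentum=0.9)
# for images, targets in dataloader:             # region-specific building footprint chips
#     losses = model(images, targets)
#     sum(losses.values()).backward(); optimizer.step(); optimizer.zero_grad()
```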

Keywords: building detection, disaster relief, mask-RCNN, satellite mapping

Procedia PDF Downloads 169
1725 Dissolution of South African Limestone for Wet Flue Gas Desulphurization

Authors: Lawrence Koech, Ray Everson, Hein Neomagus, Hilary Rutto

Abstract:

Wet flue gas desulphurization (FGD) systems are commonly used to remove sulphur dioxide from flue gas by contacting it with limestone in the aqueous phase, which is obtained by dissolution. Dissolution is important as it affects the overall performance of a wet FGD system. In the present study, the effects of pH, stirring speed, solid-to-liquid ratio, and acid concentration on the dissolution of limestone using an organic acid (adipic acid) were investigated with a pH-stat apparatus. Calcium ions were analyzed at the end of each experiment using an atomic absorption spectrometer (AAS).

Keywords: desulphurization, limestone, dissolution, pH stat apparatus

Procedia PDF Downloads 461
1724 DQN for Navigation in Gazebo Simulator

Authors: Xabier Olaz Moratinos

Abstract:

Drone navigation is critical, particularly during the initial phases, such as the initial ascent, where pilots may fail due to strong external interferences that could potentially lead to a crash. In this ongoing work, a drone has been successfully trained to perform an ascent of up to 6 meters while external disturbances push it at speeds of up to 24 mph, with the DQN algorithm managing the external forces affecting the system. It has been demonstrated that the system can control its height, position, and stability in all three axes (roll, pitch, and yaw) throughout the process. The learning process is carried out in the Gazebo simulator, which emulates the interferences, while ROS is used to communicate with the agent.
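
A compact sketch of the DQN ingredients mentioned above (Q-network, epsilon-greedy action selection, temporal-difference update); the state and action dimensions are assumptions, and the Gazebo/ROS interface is omitted.

```python
import random
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 9, 4, 0.99   # e.g. pose + velocities; discrete thrust/attitude commands

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def select_action(state, epsilon=0.1):
    if random.random() < epsilon:                       # explore
        return random.randrange(N_ACTIONS)
    with torch.no_grad():                               # exploit the learned Q-values
        return int(q_net(state).argmax())

def td_update(state, action, reward, next_state, done):
    # One-step temporal-difference target: r + gamma * max_a Q(s', a)
    target = reward + (0.0 if done else GAMMA * q_net(next_state).max().item())
    loss = (q_net(state)[action] - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```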

Keywords: machine learning, DQN, gazebo, navigation

Procedia PDF Downloads 113
1723 Using Optical Character Recognition to Manage the Unstructured Disaster Data into Smart Disaster Management System

Authors: Dong Seop Lee, Byung Sik Kim

Abstract:

In the 4th Industrial Revolution, various intelligent technologies have been developed in many fields. These artificial intelligence technologies are applied in various services, including disaster management. Disaster information management does not just support disaster work; it is also the foundation of smart disaster management, and it draws on historical disaster information using artificial intelligence technology. Disaster information is one of the important elements of the entire disaster cycle. Disaster information management refers to the act of managing and processing electronic data about the disaster cycle, from its occurrence to progress, response, and planning. However, information about status control, response, recovery from natural and social disaster events, etc., is mainly managed in structured and unstructured reports, which exist as handouts or hard copies. Such unstructured data is often lost or destroyed due to inefficient management, so it is necessary to manage unstructured data as disaster information. In this paper, an Optical Character Recognition (OCR) approach is used to convert handouts, hard copies, images, or reports, printed or generated by scanners, into electronic documents. The converted disaster data is then organized into the disaster code system as disaster information and stored in the disaster database system. Gathering and creating disaster information based on Optical Character Recognition for unstructured data is an important element of smart disaster management. In this paper, a character recognition rate of over 90% for Korean characters was achieved by using an upgraded OCR; the recognition rate depends on the font, size, and special symbols of the characters, and we improved it through a machine learning algorithm. The converted structured data is managed in a standardized disaster information form connected with the disaster code system. The disaster code system ensures that the structured information can be stored and retrieved across the entire disaster cycle, covering historical disaster progress, damages, response, and recovery. The expected effect of this research is that it can be applied to smart disaster management and decision making by combining artificial intelligence technologies and historical big data.
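
An illustrative sketch of the digitization step only: a scanned report image is converted to text with Tesseract OCR and mapped to a disaster code. The Korean language pack, the code table, and the file name are assumptions, not the system described by the authors.

```python
from PIL import Image
import pytesseract

# Hypothetical keyword-to-code table ("호우" = heavy rain, "지진" = earthquake).
DISASTER_CODES = {"호우": "NAT-FLOOD", "지진": "NAT-QUAKE"}

def digitize_report(image_path):
    # OCR the scanned hard-copy (requires the Tesseract 'kor' language data).
    text = pytesseract.image_to_string(Image.open(image_path), lang="kor")
    codes = [code for keyword, code in DISASTER_CODES.items() if keyword in text]
    return {"raw_text": text, "disaster_codes": codes}

record = digitize_report("scanned_report.png")   # hypothetical scanned report image
print(record["disaster_codes"])
```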

Keywords: disaster information management, unstructured data, optical character recognition, machine learning

Procedia PDF Downloads 129
1722 Monitorization of Junction Temperature Using a Thermal-Test-Device

Authors: B. Arzhanov, A. Correia, P. Delgado, J. Meireles

Abstract:

Due to the higher power loss levels in electronic components, the thermal design of the PCBs (Printed Circuit Boards) of an assembled device becomes one of the most important quality factors in electronics. Some of the leading causes of microelectronic component failure are higher temperatures, leakage, and thermo-mechanical stress, so the reliability of microelectronic packages is a concern. This article presents an experimental approach to measuring the junction temperature of exposed-pad packages. The implemented solution is in a prototype phase, using a temperature-sensitive parameter (TSP) to measure temperature directly on the die, validating the numerical results provided by Mechanical APDL (Ansys Parametric Design Language) under the same conditions. The physical device under test is composed of a Thermal Test Chip (TTC-1002) assembled in a QFN cavity and soldered to a test board according to JEDEC standards. Monitoring the voltage drop across a forward-biased diode is an indirect but accurate method to obtain the junction temperature of the QFN component over an applied power range of 0.3 W to 1.5 W. The temperature distributions on the PCB test board and the QFN cavity surface were monitored by an infrared thermal camera (Goby-384), controlled, and its images processed, by the Xeneth software. The article provides a set-up to monitor in real time the junction temperature of ICs, namely devices with an exposed-pad package (i.e., QFN), presents the PCB layout parameters that the designer should use to improve thermal performance, and evaluates the impact of voids in the solder interface on the device junction temperature.
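
A hedged illustration of the temperature-sensitive-parameter principle: the forward voltage of the diode drops roughly linearly with temperature (the k-factor), so the junction temperature can be inferred from the measured drop. The calibration values below are typical textbook numbers, not the measured k-factor of the TTC-1002.

```python
def junction_temperature(v_measured, v_cal=0.700, t_cal=25.0, k_factor=0.002):
    """Infer junction temperature from the forward diode drop.

    k_factor is in V/degC: the drop decreases roughly 2 mV per degree of heating.
    """
    return t_cal + (v_cal - v_measured) / k_factor

print(junction_temperature(0.640))   # 0.640 V measured -> ~55 degC junction temperature
```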

Keywords: quad flat no-lead packages, exposed pads, junction temperature, thermal management and measurements

Procedia PDF Downloads 286
1721 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach

Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini

Abstract:

Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment, and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from the whole genome sequence data of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay that exists amongst accuracy, computing resources, and explainability of classification results. However, the analysis provides a new way to elucidate genetic information from genomic data and to identify phenotype relationships, which is important especially in explaining complex biological mechanisms.
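
A hedged sketch of the k-mer representation on toy sequences rather than MTB whole genomes: each sequence becomes a vector of k-mer counts and a simple classifier predicts the phenotype label. The classifier family and labels here are illustrative, not those used in the study.

```python
from collections import Counter
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def kmer_counts(sequence, k=10):
    # Slide a window of length k over the sequence and count each k-mer.
    return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

genomes = ["ATGCGTACGTTAGCCGTACGATCG", "ATGCGTACGTTAGCCGTACGATCC",
           "TTGCAAACGGTAGCCATACGTTCG", "TTGCAAACGGTAGCCATACGTTCA"]
phenotypes = [1, 1, 0, 0]              # e.g. resistant vs. susceptible (toy labels)

X = DictVectorizer().fit_transform([kmer_counts(g) for g in genomes])
clf = LogisticRegression(max_iter=1000).fit(X, phenotypes)
print("training accuracy:", clf.score(X, phenotypes))
```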

Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing

Procedia PDF Downloads 167
1720 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach

Authors: Darlington Mapiye, Mpho Mokoatle, James Mashiyane, Stephanie Muller, Gciniwe Dlamini

Abstract:

Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment, and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from the whole genome sequence data of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay that exists amongst accuracy, computing resources, and explainability of classification results. However, the analysis provides a new way to elucidate genetic information from genomic data and to identify phenotype relationships, which is important especially in explaining complex biological mechanisms.

Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing

Procedia PDF Downloads 159
1719 Matrix-Based Linear Analysis of Switched Reluctance Generator with Optimum Pole Angles Determination

Authors: Walid A. M. Ghoneim, Hamdy A. Ashour, Asmaa E. Abdo

Abstract:

In this paper, a linear analysis of a Switched Reluctance Generator (SRG) model is applied to the most common configurations (4/2, 6/4 and 8/6), for both conventional short-pitched and fully-pitched designs, in order to determine the optimum stator/rotor pole angles at which the maximum output voltage is generated per unit excitation current. The study is focused on SRG analysis and design as a proposed solution for renewable energy applications, such as wind energy conversion systems. The world's potential to develop renewable energy technologies through dedicated scientific research was the motive behind this study, due to its positive impact on the economy and the environment. In addition, the scarcity of rare earth metals (permanent magnets) caused by mining limitations, export bans by top producers, and environmental restrictions leads to the unavailability of materials used for manufacturing rotating machines. This challenge gave the authors the opportunity to study, analyze and determine the optimum design of the SRG, which has the benefit of being free of permanent magnets and rotor windings, offers a flexible control system, and is compatible with any application that requires variable-speed operation. In addition, the SRG has proved to be very efficient and reliable in both low-speed and high-speed applications. Linear analysis was performed using MATLAB simulations based on the modified generalized matrix approach for the Switched Reluctance Machine (SRM). About 90 different pole-angle combinations and excitation patterns were simulated in this study, and the optimum output results for each case were recorded and presented in detail. The procedure has been shown to be applicable to any SRG configuration, dimension and excitation pattern. The results of this study provide evidence for using the 4-phase 8/6 fully-pitched SRG as the optimum configuration for the same machine dimensions at the same angular speed.
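The abstract does not reproduce the matrix formulation itself; the Python sketch below only illustrates the flavor of a linear (unsaturated) pole-angle sweep, scoring each stator/rotor pole-arc combination by the back-EMF per unit excitation current, e/i = ω·dL/dθ, under an idealized triangular inductance profile. The inductance values, speed, and feasibility rules are assumptions for illustration, not the authors' MATLAB model.

```python
# Minimal sketch of a linear (unsaturated) pole-angle sweep for an 8/6 SRG.
# Inductance values, speed and feasibility rules are illustrative assumptions.
import numpy as np

N_PHASES, N_ROTOR = 4, 6                    # 4-phase 8/6 machine
STROKE = 2 * np.pi / (N_PHASES * N_ROTOR)   # rotor angle per stroke (15 deg)
L_MIN, L_MAX = 8e-3, 60e-3                  # unaligned / aligned inductance [H] (assumed)
OMEGA = 2 * np.pi * 1500 / 60               # mechanical speed, 1500 rpm [rad/s]

def emf_per_amp(beta_s: float, beta_r: float) -> float:
    """Back-EMF per unit excitation current, e/i = omega * dL/dtheta,
    for an idealized triangular inductance profile."""
    overlap = min(beta_s, beta_r)            # rising-inductance span [rad]
    dL_dtheta = (L_MAX - L_MIN) / overlap
    return OMEGA * dL_dtheta

best = None
for beta_s in np.radians(np.arange(15, 31)):       # stator pole arc, 15..30 deg
    for beta_r in np.radians(np.arange(15, 31)):   # rotor pole arc, 15..30 deg
        # crude feasibility rules: self-starting and non-overlapping poles
        if min(beta_s, beta_r) < STROKE or beta_s + beta_r > 2 * np.pi / N_ROTOR:
            continue
        e = emf_per_amp(beta_s, beta_r)
        if best is None or e > best[0]:
            best = (e, np.degrees(beta_s), np.degrees(beta_r))

print(f"best EMF/A = {best[0]:.1f} V/A at beta_s = {best[1]:.0f} deg, beta_r = {best[2]:.0f} deg")
```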

Keywords: generalized matrix approach, linear analysis, renewable applications, switched reluctance generator

Procedia PDF Downloads 198
1718 Automatic Furrow Detection for Precision Agriculture

Authors: Manpreet Kaur, Cheol-Hong Min

Abstract:

The increasing advancement of robotics equipped with machine vision sensors applied to precision agriculture offers a promising solution for various problems on agricultural farms. An important issue related to the machine vision system concerns crop row and weed detection. This paper proposes an automatic furrow detection system based on real-time processing for identifying crop rows in maize fields in the presence of weeds. The vision system is designed to be installed on farming vehicles and is therefore subject to gyration, vibration and other undesired movements. The images are captured under perspective and are affected by these undesired effects. The goal is to identify crop rows for vehicle navigation, which includes weed removal, where weeds are identified as plants outside the crop rows. Image quality is affected by different lighting conditions and by gaps along the crop rows due to lack of germination and incorrect planting. The proposed image processing method consists of four steps. First, image segmentation is performed using an HSV (hue, saturation, value) decision tree: the algorithm uses the HSV color space to discriminate between crops, weeds and soil, and the region of interest is defined by filtering each of the HSV channels between maximum and minimum threshold values. Second, noise in the images is eliminated by means of a hybrid median filter. Third, mathematical morphological operations, i.e., erosion to remove smaller objects followed by dilation to gradually enlarge the boundaries of foreground regions, are applied to enhance the image contrast. Fourth, to accurately detect the position of the crop rows, the region of interest is defined by creating a binary mask, and edge detection and the Hough transform are applied to detect lines represented in polar coordinates, with furrow directions appearing as accumulations along the angle axis in Hough space. The experimental results show that the method is effective.
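A minimal OpenCV sketch of the pipeline described above follows: HSV thresholding, noise filtering, morphology, edge detection and the Hough transform. The HSV thresholds, kernel sizes and Hough parameters are assumed values, and a standard median blur stands in for the hybrid median filter, which OpenCV does not provide out of the box.

```python
# Minimal sketch of the described furrow-detection pipeline using OpenCV.
# HSV thresholds, kernel sizes and Hough parameters are assumed values;
# a plain median blur stands in for the paper's hybrid median filter.
import cv2
import numpy as np

def detect_furrows(bgr_image: np.ndarray):
    # 1) HSV segmentation: keep green vegetation, reject soil
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))  # assumed green range

    # 2) Noise removal (median blur approximates the hybrid median filter)
    mask = cv2.medianBlur(mask, 5)

    # 3) Morphology: erosion removes small blobs, dilation restores row boundaries
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    mask = cv2.erode(mask, kernel, iterations=1)
    mask = cv2.dilate(mask, kernel, iterations=2)

    # 4) Edges + Hough transform: lines in polar form (rho, theta);
    #    clusters along the theta axis indicate the furrow direction
    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)
    return lines  # each entry is [[rho, theta]], or None if no line is found

if __name__ == "__main__":
    img = cv2.imread("maize_field.jpg")  # hypothetical input frame
    if img is not None:
        print(detect_furrows(img))
```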

Keywords: furrow detection, morphological, HSV, Hough transform

Procedia PDF Downloads 231
1717 Prediction of Formation Pressure Using Artificial Intelligence Techniques

Authors: Abdulmalek Ahmed

Abstract:

Formation pressure is the main factor that affects how economically and efficiently a drilling operation runs. Knowing the pore pressure and the parameters that affect it helps to reduce the cost of the drilling process. Many empirical models reported in the literature have been used to calculate formation pressure from different parameters. Some of these models use only drilling parameters to estimate pore pressure; others predict formation pressure from log data. All of these models require an assumed pressure trend, normal or abnormal, to predict the pore pressure. Few researchers have applied artificial intelligence (AI) techniques to predict formation pressure, and then with only one method or at most two. The objective of this research is to predict pore pressure from both drilling parameters and log data, namely: weight on bit, rotary speed, rate of penetration, mud weight, bulk density, porosity, and delta sonic time. Real field data are used to predict formation pressure with five different artificial intelligence (AI) methods: artificial neural networks (ANN), radial basis functions (RBF), fuzzy logic (FL), support vector machines (SVM) and functional networks (FN). All AI tools were compared with different empirical models. The AI methods estimated formation pressure with high accuracy (high correlation coefficient and low average absolute percentage error) and outperformed all previous models. The advantage of the new technique is its simplicity: it estimates pore pressure without requiring an assumed trend, unlike other models which must distinguish between normal and abnormal pressure trends. Moreover, comparing the AI tools with each other indicates that SVM has the advantages of fast processing speed and high performance (a correlation coefficient of 0.997 and an average absolute percentage error of 0.14%). Finally, a new empirical correlation for formation pressure was developed using the ANN method that can estimate pore pressure with high precision (correlation coefficient of 0.998 and average absolute percentage error of 0.17%).
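As a rough sketch of the SVM variant, the Python example below fits a support vector regressor to synthetic stand-ins for the seven inputs listed above and reports the correlation coefficient and average absolute percentage error. The synthetic data and hyperparameters are assumptions and do not reproduce the field data or tuning used in the study.

```python
# Minimal sketch of pore-pressure regression from the seven inputs named in the
# abstract, using an SVM as the best-performing method reported there.
# The synthetic data and hyperparameters are illustrative assumptions only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

FEATURES = ["weight_on_bit", "rotary_speed", "rate_of_penetration",
            "mud_weight", "bulk_density", "porosity", "delta_sonic_time"]

# Synthetic stand-in for the field data set.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, len(FEATURES)))
y = 9.0 + X @ rng.normal(size=len(FEATURES)) + 0.1 * rng.normal(size=500)  # "pore pressure"

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Scale the inputs, then fit an RBF-kernel support vector regressor.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X_train, y_train)

pred = model.predict(X_test)
aape = np.mean(np.abs((pred - y_test) / y_test)) * 100   # average absolute percentage error
r = np.corrcoef(pred, y_test)[0, 1]                      # correlation coefficient
print(f"correlation coefficient: {r:.3f}, AAPE: {aape:.2f}%")
```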

Keywords: artificial intelligence (AI), formation pressure, artificial neural networks (ANN), fuzzy logic (FL), support vector machine (SVM), functional networks (FN), radial basis function (RBF)

Procedia PDF Downloads 149
1716 A Three-modal Authentication Method for Industrial Robots

Authors: Luo Jiaoyang, Yu Hongyang

Abstract:

In this paper, we explore a method that can be used in the working scene of intelligent industrial robots to confirm the identity of operators, ensuring that the robot executes instructions only in a sufficiently safe environment. The approach uses three information modalities, namely visible light, depth, and sound. We explored a variety of fusion modes for the three modalities and finally adopted a joint feature learning method, which improves the performance of the model under noise compared with the single-modal case; even at the maximum noise level in the experiment, it maintains an accuracy rate of more than 90%.
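The following PyTorch sketch illustrates one common form of joint feature learning over three modalities: a separate encoder per modality followed by a shared head over the concatenated features. The encoder architectures, feature sizes and operator count are assumptions for illustration, not the authors' network.

```python
# Minimal sketch of joint feature learning over three modalities (RGB, depth,
# audio). Encoder shapes and fusion by concatenation are illustrative
# assumptions, not the authors' architecture.
import torch
import torch.nn as nn

class TriModalAuth(nn.Module):
    def __init__(self, n_operators: int, emb: int = 128):
        super().__init__()
        # one lightweight encoder per modality
        self.rgb = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, emb))
        self.depth = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                                   nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, emb))
        self.audio = nn.Sequential(nn.Linear(64, emb), nn.ReLU())  # e.g. 64 mel-band features
        # joint head learns a shared representation from the concatenated features
        self.head = nn.Sequential(nn.Linear(3 * emb, emb), nn.ReLU(),
                                  nn.Linear(emb, n_operators))

    def forward(self, rgb, depth, audio):
        z = torch.cat([self.rgb(rgb), self.depth(depth), self.audio(audio)], dim=1)
        return self.head(z)

model = TriModalAuth(n_operators=10)
logits = model(torch.randn(4, 3, 64, 64),   # RGB frames
               torch.randn(4, 1, 64, 64),   # depth maps (e.g. from a Kinect)
               torch.randn(4, 64))          # audio features
print(logits.shape)  # torch.Size([4, 10])
```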

Keywords: multimodal, Kinect, machine learning, distance image

Procedia PDF Downloads 79