Search results for: automated feature engineering
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5132

4742 A New Approach to Image Stitching of Radiographic Images

Authors: Somaya Adwan, Rasha Majed, Lamya'a Majed, Hamzah Arof

Abstract:

In order to produce images showing whole body parts, X-rays of different portions of the body are assembled using image stitching methods. This paper presents a new method for image stitching that combines a feature-based method and a direct method to identify and merge pairs of X-ray medical images. The performance of the proposed hybrid approach is investigated, and its ability to stitch and merge overlapping pairs of images is demonstrated. The proposed method displays comparable, if not superior, performance to other feature-based methods reported in the literature on standard databases. These results are promising and demonstrate the potential of the method for further development to tackle more advanced stitching problems.
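
The abstract does not give implementation details; as a rough illustration of the feature-based half of such a hybrid pipeline, the following Python sketch stitches one overlapping pair with OpenCV ORB features and a RANSAC homography. File names, feature counts, and thresholds are illustrative assumptions, not the authors' code.

```python
# Illustrative feature-based stitching of two overlapping X-ray images.
import cv2
import numpy as np

left = cv2.imread("xray_left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical files
right = cv2.imread("xray_right.png", cv2.IMREAD_GRAYSCALE)

# Detect and describe keypoints in both images.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(left, None)
kp2, des2 = orb.detectAndCompute(right, None)

# Match descriptors and keep the strongest correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:200]

# Estimate a homography mapping the right image into the left image's frame.
src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the right image and paste the left one to obtain the stitched result.
h, w = left.shape
canvas = cv2.warpPerspective(right, H, (w * 2, h))
canvas[:h, :w] = np.maximum(canvas[:h, :w], left)
cv2.imwrite("stitched.png", canvas)
```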

Keywords: image stitching, direct based method, panoramic image, X-ray

Procedia PDF Downloads 516
4741 An Ensemble-based Method for Vehicle Color Recognition

Authors: Saeedeh Barzegar Khalilsaraei, Manoocheher Kelarestaghi, Farshad Eshghi

Abstract:

The vehicle color, as a prominent and stable feature, helps to identify a vehicle more accurately. As a result, vehicle color recognition is of great importance in intelligent transportation systems. Unlike conventional methods which use only a single Convolutional Neural Network (CNN) for feature extraction or classification, in this paper, four CNNs, with different architectures well-performing in different classes, are trained to extract various features from the input image. To take advantage of the distinct capability of each network, the multiple outputs are combined using a stack generalization algorithm as an ensemble technique. As a result, the final model performs better than each CNN individually in vehicle color identification. The evaluation results in terms of overall average accuracy and accuracy variance show the proposed method’s outperformance compared to the state-of-the-art rivals.
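
As a minimal sketch of the stack generalization step described above, the snippet below combines the class-probability outputs of four base classifiers with a logistic-regression meta-learner. The random probabilities stand in for the four CNNs' softmax outputs; the meta-learner choice and the number of color classes are assumptions, not the paper's configuration.

```python
# Stacked generalization over four base classifiers (sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_val, n_classes = 500, 8          # e.g., 8 vehicle colors (assumed)

# Stand-ins for the four CNNs' softmax outputs on a held-out set;
# in practice these would come from each trained network's predictions.
base_probs = [rng.dirichlet(np.ones(n_classes), size=n_val) for _ in range(4)]
y_val = rng.integers(0, n_classes, size=n_val)

# Level-1 features: concatenate the four probability vectors per sample.
X_meta = np.hstack(base_probs)     # shape (n_val, 4 * n_classes)

# The meta-learner learns how to weigh the base models' opinions.
# (In practice it is trained on held-out predictions, not training data.)
meta = LogisticRegression(max_iter=1000).fit(X_meta, y_val)
final_pred = meta.predict(X_meta)
```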

Keywords: vehicle color recognition, ensemble algorithm, stack generalization, convolutional neural network

Procedia PDF Downloads 56
4740 Knowledge Diffusion via Automated Organizational Cartography (Autocart)

Authors: Mounir Kehal

Abstract:

The post-globalization epoch has placed businesses everywhere in new and different competitive situations, where knowledgeable, effective and efficient behavior has come to provide the competitive and comparative edge. Enterprises have turned to explicit - and even conceptualizing on tacit - knowledge management to elaborate a systematic approach to develop and sustain the intellectual capital needed to succeed. To be able to do that, one has to be able to visualize the organization as consisting of nothing but knowledge and knowledge flows, presented in a graphical and visual framework referred to as automated organizational cartography. This creates the ability to further classify, algorithmically, existing organizational content evolving from and within data feeds, potentially giving insightful schemes and dynamics by which organizational know-how is visualized. The most recent and applicable definitions and classifications of knowledge management are discussed and elaborated on, representing a wide range of views, from the mechanistic (systematic, data-driven) to the more socially (psychologically, cognitive/metadata-driven) orientated. More elaborate continuum models, for knowledge acquisition and reasoning purposes, are used to effectively represent the domain of information that an end user may draw on in their decision-making process for the utilization of available organizational intellectual resources (i.e., Autocart). In this paper, we present an empirical research study conducted previously to explore knowledge diffusion in a specialist knowledge domain.

Keywords: knowledge management, knowledge maps, knowledge diffusion, organizational cartography

Procedia PDF Downloads 282
4739 Development of an Automatic Monitoring System Based on the Open Architecture Concept

Authors: Andrii Biloshchytskyi, Serik Omirbayev, Alexandr Neftissov, Sapar Toxanov, Svitlana Biloshchytska, Adil Faizullin

Abstract:

Kazakhstan has adopted a carbon neutrality strategy to be achieved by 2060. In accordance with this strategy, various tools must be introduced to maintain environmental safety. The use of IoT, in combination with the characteristics and requirements of Kazakhstan's environmental legislation, makes it possible to develop a modern environmental monitoring system. The article proposes a solution for developing an example of an automated system for the continuous collection of data on the concentration of pollutants in the atmosphere, based on an open architecture. An Arduino-based device acts as the microcontroller, and the transmission of measured values is carried out via an open wireless communication protocol. The architecture of the system, which was used to build a prototype based on sensors, an Arduino microcontroller, and a wireless data transmission module, is presented. The selection of components may change depending on the requirements of the system; the introduction of new units is limited only by the number of ports. The openness of the solution allows the configuration to be changed depending on the conditions. Its advantages are openness, low cost, versatility, and mobility. However, the working processes of the proposed solution have not yet been compared with traditional ones.
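
The paper does not specify the host-side software; as a hypothetical illustration of the continuous-collection idea, the sketch below logs pollutant readings that the microcontroller prints over a serial link. The port name, baud rate, and the "sensor,value" line format are assumptions made for this example.

```python
# Hypothetical host-side logger for the open-architecture prototype.
import csv
import time
import serial  # pyserial

with serial.Serial("/dev/ttyUSB0", 9600, timeout=2) as port, \
        open("readings.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        line = port.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue                          # timeout with no data
        sensor, value = line.split(",")       # assumed format, e.g. "CO2,412.5"
        writer.writerow([time.time(), sensor, float(value)])
        f.flush()                             # persist each sample immediately
```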

Keywords: environmental monitoring, greenhouse gas emissions, environmental pollution, Industry 4.0, IoT, microcontroller, automated monitoring system

Procedia PDF Downloads 15
4738 Automated Human Balance Assessment Using Contactless Sensors

Authors: Justin Tang

Abstract:

Balance tests are frequently used to diagnose concussions on the sidelines of sporting events. Manual scoring, however, is labor-intensive and subjective, and many concussions go undetected. This study introduces a novel approach to conducting the Balance Error Scoring System (BESS) more quantitatively using Microsoft's gaming system Kinect, which uses a contactless sensor and several cameras to receive data and estimate body limb positions. Using a machine learning approach, Visual Gesture Builder (VGB), and a deterministic approach, MATLAB, we tested whether the Kinect can differentiate between “correct” and erroneous stances of the BESS. We created the two separate solutions by recording test videos to teach the Kinect correct stances and by developing code in Java. Twenty-two subjects were asked to perform a series of BESS tests while the Kinect was collecting data. The Kinect recorded the subjects and mapped key joints onto their bodies to obtain angles and measurements that are interpreted by the software. Through VGB and MATLAB, the videos are analyzed to enumerate the number of errors committed during testing. The resulting statistics demonstrate a high correlation between manual scoring and the Kinect approaches, indicating the viability of remote tracking devices for conducting concussion tests.
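
The deterministic half of the pipeline reduces to geometry on tracked joint positions. The following Python sketch shows the kind of joint-angle computation involved; the joint coordinates are made up, and the error rule in the comment is an illustrative assumption rather than the study's scoring logic.

```python
# Joint angle from three tracked 3D joint positions (illustrative).
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by segments b->a and b->c."""
    u, v = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hip, knee, and ankle positions from one Kinect skeleton frame (made up).
# A knee angle far from 180 degrees while the subject should stand straight
# could, for example, be counted as one BESS error.
hip, knee, ankle = (0.0, 1.0, 2.5), (0.05, 0.55, 2.5), (0.02, 0.08, 2.55)
print(round(joint_angle(hip, knee, ankle), 1))   # ~168 deg: nearly straight
```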

Keywords: automated, concussion detection, contactless sensors, microsoft kinect

Procedia PDF Downloads 298
4737 Automated Multisensory Data Collection System for Continuous Monitoring of Refrigerating Appliances Recycling Plants

Authors: Georgii Emelianov, Mikhail Polikarpov, Fabian Hübner, Jochen Deuse, Jochen Schiemann

Abstract:

Recycling refrigerating appliances plays a major role in protecting the Earth's atmosphere from ozone depletion and emissions of greenhouse gases. The performance of refrigerator recycling plants in terms of material retention is the subject of strict environmental certifications and is reviewed periodically through specialized audits. The continuous collection of refrigerator data required for the input-output analysis is still mostly manual, error-prone, and not digitalized. In this paper, we propose an automated data collection system for recycling plants in order to deduce expected material contents in individual end-of-life refrigerating appliances. The system utilizes laser scanner measurements and optical data to extract attributes of individual refrigerators by applying transfer learning with pre-trained vision models and optical character recognition. Based on the recognized features, the system automatically provides material categories and target values of contained material masses, especially foaming and cooling agents. The presented data collection system paves the way for continuous performance monitoring and efficient control of refrigerator recycling plants.
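
As a sketch of the two named ingredients, pre-trained vision features (transfer learning) and optical character recognition, the snippet below extracts a feature vector with a torchvision backbone and OCRs the appliance photo with pytesseract. The model choice, the weights argument (torchvision 0.13+), and the file name are assumptions, not the authors' setup.

```python
# Transfer-learning features plus OCR on an appliance photo (illustrative).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
import pytesseract  # requires the tesseract binary to be installed

image = Image.open("appliance_photo.jpg").convert("RGB")  # hypothetical file

# 1) Pre-trained backbone used as a fixed feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()      # drop classifier, keep 512-d features
backbone.eval()
prep = T.Compose([T.Resize((224, 224)), T.ToTensor()])
with torch.no_grad():
    features = backbone(prep(image).unsqueeze(0))   # shape (1, 512)

# 2) OCR on the (ideally cropped) type plate to read model/refrigerant codes.
plate_text = pytesseract.image_to_string(image)
print(features.shape, plate_text[:80])
```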

Keywords: automation, data collection, performance monitoring, recycling, refrigerators

Procedia PDF Downloads 135
4736 Generation of Photo-Mosaic Images through Block Matching and Color Adjustment

Authors: Hae-Yeoun Lee

Abstract:

Mosaic refers to a technique that makes an image by gathering many small materials in various colours. This paper presents an automatic algorithm that makes a photomosaic image using photos. The algorithm is composed of four steps: partition and feature extraction, block matching, redundancy removal, and colour adjustment. The input image is partitioned into small blocks to extract features. Each block is matched to a similar photo in the database by comparing the Euclidean distance between blocks. The intensity of each block is adjusted to enhance the similarity of the image by replacing its light and dark values with those of the relevant block. Further, the quality of the image is improved by minimizing the redundancy of tiles in adjacent blocks. Experimental results support that the proposed algorithm is excellent in both quantitative and qualitative analysis.
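
A minimal sketch of the block-matching step follows: each block of the input image is replaced by the database photo whose mean colour is closest in Euclidean distance. The block size and the random "database" are assumptions for illustration; the paper's redundancy removal and intensity adjustment steps are omitted.

```python
# Block matching by mean-colour Euclidean distance (sketch).
import numpy as np

rng = np.random.default_rng(1)
image = rng.integers(0, 256, (256, 256, 3)).astype(float)     # input image
tiles = rng.integers(0, 256, (500, 16, 16, 3)).astype(float)  # photo database

tile_means = tiles.reshape(500, -1, 3).mean(axis=1)  # mean colour per photo
B = 16
mosaic = np.empty_like(image)
for i in range(0, 256, B):
    for j in range(0, 256, B):
        block_mean = image[i:i+B, j:j+B].reshape(-1, 3).mean(axis=0)
        # Euclidean distance between block colour and every tile colour.
        d = np.linalg.norm(tile_means - block_mean, axis=1)
        mosaic[i:i+B, j:j+B] = tiles[np.argmin(d)]   # paste the best tile
```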

Keywords: photomosaic, Euclidean distance, block matching, intensity adjustment

Procedia PDF Downloads 256
4735 The Role of Twitter Bots in Political Discussion on 2019 European Elections

Authors: Thomai Voulgari, Vasilis Vasilopoulos, Antonis Skamnakis

Abstract:

The aim of this study is to investigate what was achieved on Twitter during the European election campaigns (May 23-26, 2019) with artificial intelligence tools such as troll factories and automated inauthentic accounts. Our research focuses on the last European Parliamentary elections, which took place between 23 and 26 May 2019, specifically in Italy, Greece, Germany, and France. It is difficult to estimate how many Twitter users are actually bots (Echeverría, 2017), and detecting fake accounts is becoming even more complicated as AI bots are made more advanced. A political bot can be programmed to post comments on a Twitter account for a political candidate, target journalists with manipulated content, or engage with politicians and artificially increase their impact and popularity. We analyze variables related to 1) the scope of activity of automated bot accounts, 2) their degree of coherence, and 3) their degree of interaction, taking into account different factors, such as the type of content of Twitter messages and their intentions, as well as their spread to the general public. For this purpose, we collected large volumes of tweets from the accounts of party leaders and MEP candidates between the 10th of May and the 26th of July, based on content analysis of hashtags, using an innovative network analysis tool known as MediaWatch.io (https://mediawatch.io/). According to our findings, one of the highest percentages (64.6%) of automated “bot” accounts during the 2019 European election campaigns was found in Greece. In general terms, political bots aim at the proliferation of misinformation on social media, and targeting voters is one way this contributes to social media manipulation. We found that political parties and individual politicians create and promote purposeful content on Twitter using algorithmic tools. Based on this analysis, online political advertising plays an important role in the process of spreading misinformation during election campaigns. Overall, inauthentic accounts and social media algorithms are being used to manipulate political behavior and public opinion.

Keywords: artificial intelligence tools, human-bot interactions, political manipulation, social networking, troll factories

Procedia PDF Downloads 116
4734 Accelerating Quantum Chemistry Calculations: Machine Learning for Efficient Evaluation of Electron-Repulsion Integrals

Authors: Nishant Rodrigues, Nicole Spanedda, Chilukuri K. Mohan, Arindam Chakraborty

Abstract:

A crucial objective in quantum chemistry is the computation of the energy levels of chemical systems. This task requires electron-repulsion integrals as inputs, and the steep computational cost of evaluating these integrals poses a major numerical challenge for the efficient implementation of quantum chemical software. This work presents a moment-based machine-learning approach for the efficient evaluation of electron-repulsion integrals. These integrals were approximated using linear combinations of a small number of moments. Machine learning algorithms were applied to estimate the coefficients in the linear combination. A random forest was used to identify promising features via recursive feature elimination, which performed best for learning the sign of each coefficient but not the magnitude. A neural network with two hidden layers was then used to learn the coefficient magnitudes, along with an iterative feature-masking approach that performs input vector compression, identifying a small subset of orbitals whose coefficients are sufficient for the quantum state energy computation. Finally, a small ensemble of neural networks (with a median rule for decision fusion) was shown to improve results when compared to a single network.
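
The final step, a small ensemble fused with a median rule, is sketched below on synthetic data. The network size matches the two-hidden-layer description, but the data, ensemble size, and regression target are stand-ins, not the paper's moment features or coefficients.

```python
# Median-rule fusion over a small ensemble of neural networks (sketch).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                        # stand-in features
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=1000)  # toy target

# Train several two-hidden-layer networks that differ only by seed.
nets = [MLPRegressor(hidden_layer_sizes=(64, 64), random_state=s,
                     max_iter=2000).fit(X, y) for s in range(5)]

# Median-rule decision fusion: robust to any single network's outliers.
preds = np.stack([net.predict(X) for net in nets])
fused = np.median(preds, axis=0)
```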

Keywords: quantum energy calculations, atomic orbitals, electron-repulsion integrals, ensemble machine learning, random forests, neural networks, feature extraction

Procedia PDF Downloads 82
4733 Viability of Irrigation Water Conservation Practices in the Low Desert of California

Authors: Ali Montazar

Abstract:

California and the Colorado River Basin are facing increasing uncertainty concerning water supplies. The Colorado River is the main source of irrigation water in the low desert of California. Currently, due to increasing water-use competition and long-term drought in the Colorado River Basin, efficient use of irrigation water is one of the highest conservation priorities in the region. This study aims to present some of the current irrigation technologies and management approaches in the low desert and to assess the viability and potential of these water management practices. The results of several field experiments are used to assess five water conservation practices: sub-surface drip irrigation, automated surface irrigation, sprinkler irrigation, tail-water recovery systems, and deficit irrigation strategies. The preliminary results of several ongoing studies at commercial fields are presented, particularly research in alfalfa, sugar beet, kleingrass, sunflower, and spinach fields. The findings indicate that all these practices have significant potential to conserve water (an average of 1 ac-ft/ac) and enhance the efficiency of water use (15-25%). Further work is needed to better understand the feasibility of each of these applications and to help maintain a profitable and sustainable agricultural production system in the low desert as water and labor costs and environmental pressures increase.

Keywords: automated surface irrigation, deficit irrigation, low desert of California, sprinkler irrigation, sub-surface drip irrigation, tail-water recovery system

Procedia PDF Downloads 130
4732 ACBM: Attention-Based CNN and Bi-LSTM Model for Continuous Identity Authentication

Authors: Rui Mao, Heming Ji, Xiaoyu Wang

Abstract:

Keystroke dynamics are widely used in identity recognition, with the advantage that an individual's typing rhythm is difficult to imitate. They also support continuous authentication through the keyboard without extra devices. Existing keystroke dynamics authentication methods based on machine learning have drawbacks in supporting relatively complex scenarios with massive data, both in feature extraction and in model optimization. To overcome these weaknesses, an authentication model of keystroke dynamics based on deep learning is proposed. The model uses feature vectors formed from keystroke content and keystroke timing, and ensures efficient continuous authentication by combining attention mechanisms with CNN and Bi-LSTM layers. The model has been tested on the Open Data Buffalo dataset; the results show an FRR of 3.09%, a FAR of 3.03%, and an EER of 4.23%. This demonstrates that the model is efficient and accurate for continuous authentication.
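
A rough PyTorch sketch of this model family follows: a 1D convolution over keystroke feature vectors, a Bi-LSTM, and additive attention pooling. All layer sizes, the feature layout, and the number of users are assumptions, not the paper's exact ACBM architecture.

```python
# CNN + Bi-LSTM with attention pooling over keystroke sequences (sketch).
import torch
import torch.nn as nn

class ACBMSketch(nn.Module):
    def __init__(self, feat_dim=8, conv_ch=32, hidden=64, n_users=10):
        super().__init__()
        self.conv = nn.Sequential(            # local patterns in time
            nn.Conv1d(feat_dim, conv_ch, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(conv_ch, hidden, batch_first=True,
                            bidirectional=True)   # long-range typing rhythm
        self.att = nn.Linear(2 * hidden, 1)       # attention scores
        self.out = nn.Linear(2 * hidden, n_users)

    def forward(self, x):                 # x: (batch, seq_len, feat_dim)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)
        h, _ = self.lstm(h)               # (batch, seq_len, 2*hidden)
        w = torch.softmax(self.att(h), dim=1)   # weights over timesteps
        ctx = (w * h).sum(dim=1)          # attention-pooled representation
        return self.out(ctx)              # per-user logits

logits = ACBMSketch()(torch.randn(4, 50, 8))  # 4 sequences of 50 keystrokes
print(logits.shape)                           # torch.Size([4, 10])
```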

Keywords: keystroke dynamics, identity authentication, deep learning, CNN, LSTM

Procedia PDF Downloads 127
4731 Automatic Classification of Lung Diseases from CT Images

Authors: Abobaker Mohammed Qasem Farhan, Shangming Yang, Mohammed Al-Nehari

Abstract:

Pneumonia is a kind of lung disease that creates congestion in the chest, and severe congestion can lead to loss of life. Pneumonic lung disease is caused by viral pneumonia, bacterial pneumonia, or COVID-19-induced pneumonia. The early prediction and classification of such lung diseases help to reduce the mortality rate. We propose an automatic Computer-Aided Diagnosis (CAD) system in this paper using a deep learning approach. The proposed CAD system takes raw computerized tomography (CT) scans of the patient's chest as input and automatically predicts the disease class. We designed a Hybrid Deep Learning Algorithm (HDLA) to improve accuracy and reduce processing requirements. The raw CT scans are pre-processed first to enhance their quality for further analysis. We then applied a hybrid model that consists of automatic feature extraction and classification. We propose a robust 2D Convolutional Neural Network (CNN) model to extract automatic features from the pre-processed CT image. This CNN model assures feature learning with extremely effective 1D feature extraction for each input CT image. The outcome of the 2D CNN model is then normalized using the Min-Max technique. The second step of the proposed hybrid model concerns training and classification using different classifiers. The simulation outcomes using a publicly available dataset prove the robustness and efficiency of the proposed model compared to state-of-the-art algorithms.
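
The second half of such a hybrid model, Min-Max normalization of CNN feature vectors followed by conventional classifiers, is easy to sketch. The feature vectors below are random stand-ins for the 2D CNN output, and the classifier choices are assumptions.

```python
# Min-Max normalized CNN features fed to conventional classifiers (sketch).
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
cnn_features = rng.normal(size=(300, 128))   # one 128-d vector per CT scan
labels = rng.integers(0, 3, 300)             # e.g., viral/bacterial/COVID-19

X = MinMaxScaler().fit_transform(cnn_features)   # Min-Max normalization

for clf in (SVC(), RandomForestClassifier()):
    clf.fit(X[:250], labels[:250])               # train on first 250 scans
    print(type(clf).__name__, clf.score(X[250:], labels[250:]))
```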

Keywords: CT scan, Covid-19, deep learning, image processing, lung disease classification

Procedia PDF Downloads 120
4730 INRAM-3DCNN: Multi-Scale Convolutional Neural Network Based on Residual and Attention Module Combined with Multilayer Perceptron for Hyperspectral Image Classification

Authors: Jianhong Xiang, Rui Sun, Linyu Wang

Abstract:

In recent years, due to the continuous improvement of deep learning theory, the Convolutional Neural Network (CNN) has shown superior performance in Hyperspectral Image (HSI) classification research. Since HSI contains rich spatial-spectral information, utilizing only a single-dimensional or single-size convolutional kernel limits the detailed feature information received by the CNN, which in turn limits the classification accuracy of HSI. In this paper, we design a multi-scale CNN with MLP based on residual and attention modules (INRAM-3DCNN) for the HSI classification task. We propose to use multiple 3D convolutional kernels to extract detailed feature information and fully learn the spatial-spectral features of HSI, while designing residual 3D convolutional branches to avoid the decline in classification accuracy due to network degradation. Secondly, we design a 2D Inception module with a joint channel attention mechanism to quickly extract key spatial feature information at different scales of HSI and reduce the complexity of the 3D model. Due to the high parallel processing capability and nonlinear global action of the Multilayer Perceptron (MLP), we use it in combination with the preceding CNN structure for the final classification step. The experimental results on two HSI datasets show that the proposed INRAM-3DCNN method has superior classification performance.
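
A toy PyTorch sketch of two of the named ideas, parallel 3D convolutions at several kernel sizes plus a residual connection, is shown below. Channel counts and kernel sizes are illustrative; this is not the INRAM-3DCNN configuration.

```python
# Multi-scale 3D convolution with a residual connection (sketch).
import torch
import torch.nn as nn

class MultiScaleRes3D(nn.Module):
    def __init__(self, ch=8):
        super().__init__()
        # Three branches with different receptive fields over the
        # spatial-spectral cube; padding keeps output shapes equal.
        self.branches = nn.ModuleList([
            nn.Conv3d(ch, ch, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5)
        ])
        self.act = nn.ReLU()

    def forward(self, x):                       # x: (B, ch, bands, H, W)
        out = sum(b(x) for b in self.branches)  # multi-scale fusion
        return self.act(out + x)                # residual connection

patch = torch.randn(2, 8, 30, 9, 9)            # two small HSI patches
print(MultiScaleRes3D()(patch).shape)          # same shape as the input
```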

Keywords: INRAM-3DCNN, residual, channel attention, hyperspectral image classification

Procedia PDF Downloads 45
4729 Design and Evaluation of a Fully-Automated Fluidized Bed Dryer for Complete Drying of Paddy

Authors: R. J. Pontawe, R. C. Martinez, N. T. Asuncion, R. V. Villacorte

Abstract:

Drying of high-moisture paddy remains a major problem in the Philippines, especially during inclement weather. To alleviate the problem, mechanical dryers such as flat-bed and recirculating batch-type dryers are used. However, drying to 14% (wet basis) final moisture content takes 10-12 hours and is tedious, which is not ideal for handling high-moisture paddy. A fully automated pilot-scale fluidized bed drying system with a capacity of 500 kilograms per hour was evaluated using high-moisture paddy. The developed fluidized bed dryer was evaluated using four drying temperatures and two variations in fluidization time at constant airflow, static pressure, and tempering period. Complete drying of paddy with ≥28% (w.b.) initial moisture content was attained after two passes of fluidized-bed drying at 2 minutes of exposure to a 70 °C drying temperature and 4.9 m/s superficial air velocity, followed by a 60-minute ambient-air tempering period (30 minutes without ventilation and 30 minutes with air ventilation), for a total drying time of 2.07 h. Around 82% of the normal mechanical drying time was saved at the 70 °C drying temperature. The drying cost was calculated to be P0.63 per kilogram of wet paddy, and the specific heat energy consumption was only 2.84 MJ/kg of water removed. The head rice yield recovery of the dried paddy passed the Philippine Agricultural Engineering Standards. Sensory evaluation showed that the color and taste of the samples dried in the fluidized bed dryer were comparable to air-dried paddy. The optimum drying parameters for the fluidized bed dryer are a 70 °C drying temperature, 2 min fluidization time, 4.9 m/s superficial air velocity, 10.16 cm grain depth, and a 60 min ambient-air tempering period.

Keywords: drying, fluidized bed dryer, head rice yield, paddy

Procedia PDF Downloads 297
4728 Optimization of a Convolutional Neural Network for the Automated Diagnosis of Melanoma

Authors: Kemka C. Ihemelandu, Chukwuemeka U. Ihemelandu

Abstract:

The incidence of melanoma has been increasing rapidly over the past two decades, making melanoma a current public health crisis. Unfortunately, even as screening efforts continue to expand in an effort to ameliorate the death rate from melanoma, there is a need to improve diagnostic accuracy to decrease misdiagnosis. Artificial intelligence (AI), a new frontier in patient care, has the ability to improve the accuracy of melanoma diagnosis. The convolutional neural network (CNN), a form of deep neural network most commonly applied to analyzing visual imagery, has been shown to outperform the human brain in pattern recognition. However, there are noted limitations in the accuracy of CNN models. Our aim in this study was the optimization of convolutional neural network algorithms for the automated diagnosis of melanoma. We hypothesized that optimal selection of the momentum and batch hyperparameters increases model accuracy. The most successful model developed during this study showed that selecting a momentum of 0.25 and a batch size of 2 led to superior performance and a faster model training time, with an accuracy of ~83% after nine hours of training. We did notice a lack of diversity in the dataset used, with a class imbalance favoring lighter vs. darker skin tones. Training-set image transformations did not result in superior model performance in our study.
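
For concreteness, the sketch below shows how the two winning hyperparameters from the study (momentum 0.25, batch size 2) would be set in a PyTorch training loop. The model, learning rate, and data are placeholders, not the authors' melanoma classifier.

```python
# Setting momentum = 0.25 and batch size = 2 in a training loop (sketch).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))  # placeholder
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.25)

data = TensorDataset(torch.randn(16, 3, 64, 64),
                     torch.randint(0, 2, (16,)))   # benign vs. melanoma
loader = DataLoader(data, batch_size=2, shuffle=True)  # batch size of 2

loss_fn = nn.CrossEntropyLoss()
for images, labels in loader:
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```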

Keywords: melanoma, convolutional neural network, momentum, batch hyperparameter

Procedia PDF Downloads 80
4727 Field Production Data Collection, Analysis and Reporting Using Automated System

Authors: Amir AlAmeeri, Mohamed Ibrahim

Abstract:

Various data points are constantly being measured in the production system, and due to the nature of the wells, these data points, such as pressure, temperature, and water cut, fluctuate constantly, which requires high-frequency monitoring and collection. Analyzing these parameters manually using spreadsheets and email is a very difficult task. An automated system greatly enhances efficiency, reduces errors and the need for constant emails that take up disk space, and frees up time for the operator to perform other critical tasks. A huge volume of production data is recorded in an oil field, and it can seem irrelevant to some, especially when viewed on its own with no context. In order to fully utilize all this information, it needs to be properly collected, verified, stored in one common place, and analyzed for surveillance and monitoring purposes. This paper describes how data is recorded by different parties and departments in the field and verified numerous times as it is loaded into a repository. Once it is loaded, a final check is done before it is entered into the production monitoring system. Once all this is collected, various calculations are performed to report allocated production. The calculated production data is used to report field production automatically and to monitor well and surface facility performance. Engineers can use it for their studies and analyses to ensure the field is performing as it should, to predict and forecast production, and to monitor any changes in wells that could affect field performance.

Keywords: automation, oil production, Cheleken, exploration and production (E&P), Caspian Sea, allocation, forecast

Procedia PDF Downloads 133
4726 The Acquisition of Case in Biological Domain Based on Text Mining

Authors: Shen Jian, Hu Jie, Qi Jin, Liu Wei Jie, Chen Ji Yi, Peng Ying Hong

Abstract:

In order to address the problem of acquiring cases in the biological domain related to design problems, a biological case acquisition method based on text mining is presented. Through the construction of a corpus text vector space and knowledge mining, the feature selection, similarity measurement, and case retrieval of texts in the field of biology are studied. First, we establish a vector space model of the corpus in the biological field and complete the preprocessing steps. Then, the corpus is retrieved using the vector space model combined with functional keywords to obtain the biological domain cases related to the design problems. Finally, we verify the validity of this method with a text example.
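
A minimal sketch of the retrieval step is shown below: a TF-IDF vector space over a tiny biology corpus, queried with functional keywords via cosine similarity. The corpus strings are invented placeholders, and TF-IDF weighting is our assumption for the vector space model.

```python
# Vector-space retrieval of biological cases by functional keywords (sketch).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "gecko feet adhere to smooth surfaces via setae",
    "lotus leaves repel water through micro-structured wax",
    "burdock burrs hook onto animal fur for seed dispersal",
]
vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(corpus)   # corpus vector space

# Functional keywords describing the design problem.
query = vectorizer.transform(["attach reversibly to a smooth surface"])
scores = cosine_similarity(query, doc_vectors).ravel()
best = scores.argmax()
print(corpus[best], scores[best])                # most similar biological case
```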

Keywords: text mining, vector space model, feature selection, biologically inspired design

Procedia PDF Downloads 232
4725 Knowledge Diffusion via Automated Organizational Cartography: Autocart

Authors: Mounir Kehal, Adel Al Araifi

Abstract:

The post-globalisation epoch has placed businesses everywhere in new and different competitive situations, where knowledgeable, effective and efficient behaviour has come to provide the competitive and comparative edge. Enterprises have turned to explicit - and even conceptualising on tacit - Knowledge Management to elaborate a systematic approach to develop and sustain the Intellectual Capital needed to succeed. To be able to do that, one has to be able to visualize the organization as consisting of nothing but knowledge and knowledge flows, presented in a graphical and visual framework referred to as automated organizational cartography. This creates the ability to further classify, algorithmically, existing organizational content evolving from and within data feeds, potentially giving insightful schemes and dynamics by which organizational know-how is visualised. The most recent and applicable definitions and classifications of knowledge management are discussed and elaborated on, representing a wide range of views, from the mechanistic (systematic, data driven) to the more socially (psychologically, cognitive/metadata driven) orientated. More elaborate continuum models, for knowledge acquisition and reasoning purposes, are used to effectively represent the domain of information that an end user may draw on in their decision-making process for the utilization of available organizational intellectual resources (i.e., Autocart). In this paper, we likewise present an empirical research study conducted previously to explore knowledge diffusion in a specialist knowledge domain.

Keywords: knowledge management, knowledge maps, knowledge diffusion, organizational cartography

Procedia PDF Downloads 397
4724 Application of Federated Learning in the Health Care Sector for Malware Detection and Mitigation Using Software-Defined Networking Approach

Authors: A. Dinelka Panagoda, Bathiya Bandara, Chamod Wijetunga, Chathura Malinda, Lakmal Rupasinghe, Chethana Liyanapathirana

Abstract:

This research builds on the concepts of Federated Learning and Software-Defined Networking (SDN) to introduce an efficient malware detection technique and a mitigation mechanism, yielding a resilient and automated healthcare sector network system with the added feature of extended privacy preservation. Due to the daily emergence of new malware attacks on hospital Integrated Clinical Environments (ICEs), the healthcare industry faces continual uncertainty about its operational continuity. The risks hidden among the array of indispensable opportunities that new medical device inventions and their connected coordination offer daily are not yet entirely understood by most healthcare operators and patients, and deserve focused attention. This solution involves four clients, in the form of hospital networks with different geographical participation, to build up the federated learning experimental architecture and reach a reasonable accuracy rate with privacy preservation. While logistic regression with a cross-entropy loss performs the detection, SDN comes in handy in the second half of the research to complete the system's development with malware mitigation based on policy implementation. The overall evaluation results in a system that proves the accuracy with the added privacy. It is no longer necessary to continue with traditional centralized systems that offer almost everything but privacy.
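
As a conceptual sketch of the federated setup described above, the snippet below has four hospital clients train local logistic-regression detectors and a server average their parameters (FedAvg-style). The data is synthetic, and the aggregation schedule, feature set, and SDN hooks of the real system are not modeled.

```python
# Federated averaging of four locally trained logistic regressions (sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
true_w = rng.normal(size=10)                     # shared ground truth (toy)

def local_update(seed):
    r = np.random.default_rng(seed)
    X = r.normal(size=(200, 10))                 # local traffic features
    y = (X @ true_w + 0.5 * r.normal(size=200)) > 0   # benign/malware label
    clf = LogisticRegression(max_iter=500).fit(X, y)
    return clf.coef_.ravel(), clf.intercept_[0]

# Each client trains on its own data; only model parameters leave the
# hospital, which is the privacy-preservation point of the approach.
updates = [local_update(s) for s in range(4)]
global_w = np.mean([w for w, _ in updates], axis=0)
global_b = np.mean([b for _, b in updates])
print(global_w[:3], global_b)
```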

Keywords: software-defined network, federated learning, privacy, integrated clinical environment, decentralized learning, malware detection, malware mitigation

Procedia PDF Downloads 150
4723 Metropolis-Hastings Sampling Approach for High Dimensional Testing Methods of Autonomous Vehicles

Authors: Nacer Eddine Chelbi, Ayet Bagane, Annie Saleh, Claude Sauvageau, Denis Gingras

Abstract:

As recently stated by the National Highway Traffic Safety Administration (NHTSA), to demonstrate the expected performance of a highly automated vehicle system, test approaches should include a combination of simulation, test track, and on-road testing. In this paper, we propose a new validation method for autonomous vehicles involving on-road tests (Field Operational Tests), test track (Test Matrix), and simulation (Worst Case Scenarios). We concentrate our discussion on the simulation aspects; in particular, we extend recent work based on Importance Sampling by using a Metropolis-Hastings algorithm (MHS) to sample collected data from the Safety Pilot Model Deployment (SPMD) in lane-change scenarios. Our proposed MH sampling method is compared to the Importance Sampling method, which does not perform well in high-dimensional problems. The importance of this study is to obtain a sampler that could be applied to high-dimensional simulation problems in order to reduce and optimize the number of test scenarios necessary for the validation and certification of autonomous vehicles.
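
A generic random-walk Metropolis-Hastings sampler of the kind applied here is sketched below. The 1-D bimodal target is a toy stand-in for the empirical SPMD lane-change parameter distribution; the step size and sample count are arbitrary.

```python
# Random-walk Metropolis-Hastings sampling of a toy target density.
import numpy as np

def target(x):
    """Unnormalized density (toy bimodal example)."""
    return np.exp(-0.5 * (x - 2) ** 2) + 0.6 * np.exp(-0.5 * (x + 2) ** 2)

def metropolis_hastings(n_samples, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, samples = 0.0, []
    for _ in range(n_samples):
        proposal = x + step * rng.normal()        # symmetric random walk
        # Accept with probability min(1, target(proposal) / target(x)).
        if rng.random() < target(proposal) / target(x):
            x = proposal
        samples.append(x)
    return np.array(samples)

draws = metropolis_hastings(10_000)
print(draws.mean(), draws.std())
```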

Keywords: automated driving, autonomous emergency braking (AEB), autonomous vehicles, certification, evaluation, importance sampling, metropolis-hastings sampling, tests

Procedia PDF Downloads 259
4722 Single-Camera Basketball Tracker through Pose and Semantic Feature Fusion

Authors: Adrià Arbués-Sangüesa, Coloma Ballester, Gloria Haro

Abstract:

Tracking sports players is a widely challenging scenario, especially in single-feed videos recorded in tight courts, where cluttering and occlusions cannot be avoided. This paper presents an analysis of several geometric and semantic visual features for detecting and tracking basketball players. An ablation study is carried out and then used to show that a robust tracker can be built with deep learning features, without the need to extract contextual ones, such as proximity or color similarity, or to apply camera stabilization techniques. The presented tracker consists of (1) a detection step, which uses a pretrained deep learning model to estimate the players' poses, followed by (2) a tracking step, which leverages pose and semantic information from the output of a convolutional layer in a VGG network. Its performance is analyzed in terms of MOTA over a basketball dataset with more than 10k instances.
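
The core of such a tracking step can be framed as an assignment problem: match current detections to existing tracks by similarity of their deep feature vectors. In the sketch below, random vectors stand in for the VGG-layer embeddings, and cosine similarity with the Hungarian algorithm is our illustrative choice.

```python
# Track-to-detection assignment by deep-feature cosine similarity (sketch).
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
track_feats = rng.normal(size=(10, 256))   # 10 tracked players
det_feats = rng.normal(size=(10, 256))     # 10 detections in the new frame

def normalize(a):
    return a / np.linalg.norm(a, axis=1, keepdims=True)

# Cost = 1 - cosine similarity; the Hungarian algorithm finds the
# globally best one-to-one pairing of tracks and detections.
cost = 1.0 - normalize(track_feats) @ normalize(det_feats).T
rows, cols = linear_sum_assignment(cost)
for t, d in zip(rows, cols):
    print(f"track {t} -> detection {d} (cost {cost[t, d]:.3f})")
```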

Keywords: basketball, deep learning, feature extraction, single-camera, tracking

Procedia PDF Downloads 113
4721 An Automated Optimal Robotic Assembly Sequence Planning Using Artificial Bee Colony Algorithm

Authors: Balamurali Gunji, B. B. V. L. Deepak, B. B. Biswal, Amrutha Rout, Golak Bihari Mohanta

Abstract:

Robots play an important role in operations like pick and place, assembly, spot welding, and much more in manufacturing industries. Among these, assembly is a very important process, accounting for 20% of total manufacturing cost. To perform the assembly task effectively, Assembly Sequence Planning (ASP) is required. ASP is a multi-objective, non-deterministic optimization problem: achieving the optimal assembly sequence involves a huge search space and is highly complex in nature. Many researchers have applied different algorithms to the ASP problem, but these have several limitations, such as convergence to locally optimal solutions, huge search spaces, long execution times, and complexity in applying the algorithm. Keeping these limitations in mind, this paper proposes a new automated optimal robotic assembly sequence planning method using the Artificial Bee Colony (ABC) algorithm. In this approach, assembly predicates are extracted automatically through a Computer-Aided Design (CAD) interface instead of manually, reducing the time needed to obtain feasible assembly sequences. The fitness of each obtained feasible sequence is then evaluated using the ABC algorithm to generate the optimal assembly sequence. The proposed methodology is applied to different industrial products, and the results are compared with past literature.
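
A highly simplified artificial bee colony over assembly sequences (permutations) is sketched below, with employed/onlooker behaviour collapsed into one improvement phase and a scout phase that abandons stale sources. The toy fitness merely counts matches to a pretend ideal order; the real method derives feasibility and fitness from CAD-extracted assembly predicates, which are not modeled here.

```python
# Simplified artificial bee colony over assembly-sequence permutations.
import numpy as np

rng = np.random.default_rng(0)
n_parts, n_bees, limit = 8, 10, 20
ideal = np.arange(n_parts)                    # pretend optimum (toy)

def fitness(seq):
    return float((seq == ideal).sum())        # toy: positions matching ideal

def neighbor(seq):
    s = seq.copy()
    i, j = rng.choice(n_parts, 2, replace=False)
    s[i], s[j] = s[j], s[i]                   # swap two assembly operations
    return s

food = [rng.permutation(n_parts) for _ in range(n_bees)]  # food sources
trials = np.zeros(n_bees)
for _ in range(200):
    for k in range(n_bees):
        cand = neighbor(food[k])              # local search around a source
        if fitness(cand) > fitness(food[k]):
            food[k], trials[k] = cand, 0
        else:
            trials[k] += 1
        if trials[k] > limit:                 # scout phase: abandon source
            food[k], trials[k] = rng.permutation(n_parts), 0

best = max(food, key=fitness)
print(best, fitness(best))
```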

Keywords: assembly sequence planning, CAD, artificial Bee colony algorithm, assembly predicates

Procedia PDF Downloads 217
4720 Feature Extraction and Impact Analysis for Solid Mechanics Using Supervised Finite Element Analysis

Authors: Edward Schwalb, Matthias Dehmer, Michael Schlenkrich, Farzaneh Taslimi, Ketron Mitchell-Wynne, Horen Kuecuekyan

Abstract:

We present a generalized feature extraction approach for supporting Machine Learning (ML) algorithms which perform tasks similar to Finite-Element Analysis (FEA). We report results for estimating the Head Injury Categorization (HIC) of vehicle engine compartments across various impact scenarios. Our experiments demonstrate that models learned using features derived with a simple discretization approach provide a reasonable approximation of a full simulation. We observe that decision trees can be as effective as neural networks for the HIC task. The simplicity and performance of the learned decision trees offer a trade-off: a multiple-order-of-magnitude improvement in speed and cost over full simulation in exchange for a reasonable approximation. When used as a complement to full simulation, the approach enables rapid approximate feedback to engineering teams before submission for full analysis. The approach produces mesh-independent features and is further agnostic of the assembly structure.
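
The surrogate idea reduces to supervised regression: learn a cheap model that maps discretized design features to the simulated response. The sketch below uses a decision tree on synthetic data; the features, the toy HIC-like response, and the tree depth are all stand-ins for the paper's setup.

```python
# Decision-tree surrogate for an FEA-style response (sketch).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(2000, 6))               # discretized geometry features
hic = 500 + 800 * X[:, 0] * X[:, 1] + 50 * rng.normal(size=2000)  # toy target

tree = DecisionTreeRegressor(max_depth=6).fit(X[:1500], hic[:1500])
print("held-out R^2:", tree.score(X[1500:], hic[1500:]))
# Millisecond predictions instead of hours of simulation: suitable for
# rapid, approximate feedback before submitting a design for full analysis.
```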

Keywords: mechanical design validation, FEA, supervised decision tree, convolutional neural network

Procedia PDF Downloads 110
4719 Image Instance Segmentation Using Modified Mask R-CNN

Authors: Avatharam Ganivada, Krishna Shah

Abstract:

Mask R-CNN was recently introduced by the Facebook AI Research (FAIR) team and is mainly concerned with instance segmentation in images. Mask R-CNN is based on a ResNet backbone and a Feature Pyramid Network (FPN), and employs a single dropout method. This paper provides a modified Mask R-CNN that adds multiple dropout methods to the network. The proposed model also utilizes the concepts of ResNet and FPN to extract stage-wise network feature maps, wherein a top-down network path with lateral connections is used to obtain semantically strong features. The proposed model produces three outputs for each object in the image: a class label, bounding box coordinates, and an object mask. The performance of the proposed network is evaluated on the segmentation of every instance in images using the COCO and Cityscapes datasets. The proposed model achieves better performance than state-of-the-art networks on these datasets.
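
One plausible way to add extra dropout to a standard Mask R-CNN is sketched below by wrapping the two fully connected layers of torchvision's box head; whether this matches the paper's exact placement of its multiple dropout methods is an assumption, as are the dropout rates.

```python
# Adding extra dropout to torchvision's Mask R-CNN box head (sketch;
# the weights= argument requires torchvision >= 0.13).
import torch.nn as nn
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights=None)

head = model.roi_heads.box_head          # TwoMLPHead with fc6 and fc7
head.fc6 = nn.Sequential(head.fc6, nn.Dropout(p=0.3))
head.fc7 = nn.Sequential(head.fc7, nn.Dropout(p=0.3))
# The model still returns, per detected object: class label, bounding box,
# and mask; training proceeds exactly as with the stock Mask R-CNN.
```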

Keywords: instance segmentation, object detection, convolutional neural networks, deep learning, computer vision

Procedia PDF Downloads 52
4718 Investigating Software Engineering Challenges in Game Development

Authors: Fawad Zaidi

Abstract:

This paper discusses a variety of challenges and solutions involved in creating computer games and the issues faced by the software engineers working in this field. The review further investigates the articles' coverage of project scope and the problem of feature creep, which appears to be inherent in game development. The paper tries to answer the following question: Is this a problem caused by a shortage, by bad software engineering practices, or is it outside the control of the software engineering component of the game production process?

Keywords: software engineering, computer games, software applications, development

Procedia PDF Downloads 453
4717 Recognition of Grocery Products in Images Captured by Cellular Phones

Authors: Farshideh Einsele, Hassan Foroosh

Abstract:

In this paper, we present a robust algorithm to recognize text extracted from grocery product images captured by mobile phone cameras. Recognition of such text is challenging since text in grocery product images varies in size, orientation, style, and illumination, and can suffer from perspective distortion. Pre-processing is performed to make the characters scale- and rotation-invariant. Since text degradations cannot be appropriately described using well-known geometric transformations such as translation, rotation, affine transformation, and shearing, we use the character's entire set of black pixels as our feature vector. Classification is performed with a minimum distance classifier using the maximum likelihood criterion, which delivers a very promising Character Recognition Rate (CRR) of 89%. We achieve a considerably higher Word Recognition Rate (WRR) of 99% when using lower-level linguistic knowledge about product words during the recognition process.
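
A minimal sketch of such a minimum distance classifier follows: each character is represented by its full binarized pixel vector and assigned to the class whose stored prototype is nearest in Euclidean distance. The templates here are random stand-ins, and the 36-class alphabet is an assumption.

```python
# Minimum distance classification over whole-character pixel vectors.
import numpy as np

rng = np.random.default_rng(0)
n_classes, h, w = 36, 16, 16                 # e.g., A-Z plus 0-9 (assumed)
templates = rng.integers(0, 2, (n_classes, h * w)).astype(float)

def classify(glyph):
    """glyph: binarized h x w image; returns index of nearest prototype."""
    d = np.linalg.norm(templates - glyph.ravel(), axis=1)
    return int(np.argmin(d))                 # minimum distance decision

sample = templates[7].reshape(h, w).copy()
sample[0, :4] = 1 - sample[0, :4]            # flip a few pixels (degradation)
print(classify(sample))                      # still recognized as class 7
```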

Keywords: camera-based OCR, feature extraction, document, image processing, grocery products

Procedia PDF Downloads 384
4716 Applying Kinect on the Development of a Customized 3D Mannequin

Authors: Shih-Wen Hsiao, Rong-Qi Chen

Abstract:

In the field of fashion design, the 3D mannequin is an assisting tool that can rapidly realize design concepts. When the concept of the 3D mannequin is applied to computer-aided fashion design, it connects with the development and application of design platforms and systems. It is therefore critical to develop a 3D mannequin module that corresponds to the necessities of fashion design. This research proposes a concrete plan for developing and constructing a 3D mannequin system with Kinect. In this system, ergonomic measurements of a subject's body features are attained in real time through the Kinect depth camera, and mesh morphing is then implemented by transforming the locations of the control points on the model using those ergonomic data, yielding an exclusive 3D mannequin model. In the proposed methodology, after the scanned points from the Kinect are revised for accuracy and smoothed, a complete human shape is reconstructed by the ICP algorithm together with image processing methods. The reconstructed body features can then be analyzed to obtain real measurements. Furthermore, the ergonomic measurements can be applied to shape morphing of the 3D mannequin, reconstructed by feature curves through subdivision. Because a standardized and customer-oriented 3D mannequin can be generated this way, the research can be applied to fashion design or to the presentation and display of 3D virtual clothes. In order to examine the practicality of the research structure, a 3D mannequin system was constructed with a JAVA program in this study, and the experiments confirm the practicability of the research result.
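
A compact sketch of the ICP step (nearest-neighbour correspondences plus a Kabsch rigid alignment) is given below; the full pipeline's smoothing and morphing stages are not shown, and the point clouds are synthetic.

```python
# Iterative closest point with SVD-based rigid alignment (sketch).
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=20):
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        # 1) Correspondences: nearest target point for each source point.
        _, idx = tree.query(src)
        tgt = target[idx]
        # 2) Best rigid transform via SVD (Kabsch algorithm).
        sc, tc = src.mean(0), tgt.mean(0)
        U, _, Vt = np.linalg.svd((src - sc).T @ (tgt - tc))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src = (src - sc) @ R.T + tc           # apply the transform
    return src

cloud = np.random.default_rng(0).normal(size=(500, 3))
shifted = cloud + np.array([0.2, 0.1, -0.1])   # displaced copy of the scan
print(np.abs(icp(shifted, cloud) - cloud).mean())  # residual shrinks to ~0
```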

Keywords: 3D mannequin, kinect scanner, iterative closest point, shape morphing, subdivision

Procedia PDF Downloads 284
4715 Information Communication Technologies and Renewable Technologies' Impact on Irish People's Lifestyle: A Constructivist Grounded Theory Study

Authors: Hamilton V. Niculescu

Abstract:

This paper discusses findings relating to people's engagement with mobile communication technologies and remote automated systems. This interdisciplinary study employs a constructivist grounded theory methodology, with qualitative data generated from in-depth semi-structured interviews with 18 people living in Ireland, corroborated with participant observations and quantitative data. Additional data was collected following participants' remote interaction with six custom-built automated enclosures located at six different sites around Dublin, Republic of Ireland. This paper argues that ownership and education play a vital role in people's engagement with and adoption of new technologies. Analysis of participants' behavior and attitudes towards Information Communication Technologies (ICT) suggests that innovations do not always improve people's social inclusion; technological innovations are sometimes perceived as destroying communities and creating a dysfunctional society. Moreover, the findings indicate that a lack of public information and support from Irish governmental institutions, as well as limited off-the-shelf availability, has led to low trust in and adoption of renewable technologies. A limited variation in participants' behavior and interaction patterns with the technologies was observed during the study. This suggests that people will eventually adopt new technologies according to their needs and experience, even if they initially rejected the idea of changing their lifestyle.

Keywords: automation, communication, ICT, renewables

Procedia PDF Downloads 90
4714 MRI Quality Control Using Texture Analysis and Spatial Metrics

Authors: Kumar Kanudkuri, A. Sandhya

Abstract:

Typically, in an MRI clinical setting, several protocols are run, each indicated for a specific anatomy and disease condition. However, these protocols, or parameters within them, can change over time due to changes in the recommendations of physician groups, updates in the software, or the availability of new technologies. Most of the time, the changes are made by the MRI technologist to account for time, coverage, physiological, or Specific Absorption Rate (SAR) reasons. It is therefore important to give proper guidelines to MRI technologists so that they do not change parameters in ways that negatively impact image quality. Typically, a standard American College of Radiology (ACR) MRI phantom is used for Quality Control (QC) in order to guarantee that the primary objectives of MRI are met. Visual evaluation of quality depends on the operator/reviewer and might change between operators, as well as for the same operator at different times. Overcoming these constraints is essential for a more impartial evaluation of quality, which makes quantitative estimation of image quality (IQ) metrics very important for MRI quality control. To solve this problem, we propose a robust, open-source, and automated MRI image quality control tool. The designed and developed automatic analysis tool measures MRI IQ metrics (Signal-to-Noise Ratio (SNR), SNR Uniformity (SNRU), Visual Information Fidelity (VIF), Feature Similarity (FSIM), Gray-Level Co-occurrence Matrix (GLCM), slice thickness accuracy, slice position accuracy, and high-contrast spatial resolution) and provided good accuracy assessment. A standardized quality report is generated that incorporates metrics that impact diagnostic quality.
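
As a sketch of one of the listed metrics, the snippet below computes SNR from a uniform phantom slice as the mean signal in a central ROI over the standard deviation of a background (air) ROI. The synthetic image and the ROI placements are illustrative assumptions, not the tool's implementation.

```python
# Phantom SNR: central-ROI mean over background-ROI standard deviation.
import numpy as np

rng = np.random.default_rng(0)
img = np.full((256, 256), 100.0)               # uniform phantom signal
img += rng.normal(0, 5, img.shape)             # acquisition noise
img[:32, :32] = rng.normal(0, 5, (32, 32))     # background (air) corner

signal_roi = img[112:144, 112:144]             # central ROI in the phantom
noise_roi = img[:32, :32]                      # background ROI
snr = signal_roi.mean() / noise_roi.std()
print(f"SNR = {snr:.1f}")
```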

Keywords: ACR MRI phantom, MRI image quality metrics, SNRU, VIF, FSIM, GLCM, slice thickness accuracy, slice position accuracy

Procedia PDF Downloads 136
4713 Switching to the Latin Alphabet in Kazakhstan: A Brief Overview of Character Recognition Methods

Authors: Ainagul Yermekova, Liudmila Goncharenko, Ali Baghirzade, Sergey Sybachin

Abstract:

In this article, we address the problem of Kazakhstan's transition to the Latin alphabet. The transition process started in 2017 and is scheduled to be completed in 2025. In connection with these events, the problem of recognizing the characters of the new alphabet arises. Well-known character recognition programs such as ABBYY FineReader, FormReader, and MyScript Stylus did not recognize the specific Kazakh letters that were used in Cyrillic. The authors assess the well-known character recognition methods that could be in demand as part of the country's transition to the Latin alphabet. Three character recognition methods are considered through their algorithms of operation: template, structured, and feature-based. At the end of the article, a general conclusion is drawn about the applicability of each method to a particular recognition task: for example, population census processing, recognition of typographic text in Latin script, or recognition of photos of car number plates, store signs, etc.
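
As a toy illustration of the template method from this comparison, the snippet below recognizes a glyph by correlating it against stored templates and picking the best match. The templates are random placeholders for rendered letters of the new Latin alphabet, and correlation is one of several plausible matching scores.

```python
# Template-method character recognition by correlation (toy sketch).
import numpy as np

rng = np.random.default_rng(2)
alphabet = [chr(c) for c in range(ord("A"), ord("Z") + 1)]
templates = {ch: rng.random((12, 12)) for ch in alphabet}  # placeholder glyphs

def recognize(glyph):
    """Return the template letter with the highest correlation."""
    scores = {ch: float(np.corrcoef(glyph.ravel(), t.ravel())[0, 1])
              for ch, t in templates.items()}
    return max(scores, key=scores.get)

noisy = templates["Q"] + 0.1 * rng.random((12, 12))   # distorted input glyph
print(recognize(noisy))                               # -> "Q"
```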

Keywords: text detection, template method, recognition algorithm, structured method, feature method

Procedia PDF Downloads 161