Search results for: machine vision
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3625

3445 Assessing the Effectiveness of Machine Learning Algorithms for Cyber Threat Intelligence Discovery from the Darknet

Authors: Azene Zenebe

Abstract:

Deep learning is a subset of machine learning that builds artificial neural networks and has proven useful for modeling complex problems with large datasets. It requires very high computational power and long training times. By aggregating computing power, high-performance computing (HPC) has emerged as an approach to solving advanced problems and supporting data-driven research activities. Cyber threat intelligence (CTI) is actionable information or insight that an organization or individual uses to understand the threats that have targeted, will target, or are currently targeting the organization. Results of a literature review are presented along with the results of an experimental study that compares the performance of tree-based and function-based machine learning algorithms, including deep learning, on a secondary dataset collected from the darknet.
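
As a rough illustration of the tree-based versus function-based comparison described above, the following sketch trains a random forest and a multilayer perceptron on a synthetic stand-in for the darknet dataset (the data, features and hyperparameters are assumptions for illustration, not those of the study):

```python
# Hedged sketch: compare a tree-based and a function-based classifier
# on synthetic data standing in for darknet-derived features.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the secondary darknet dataset (assumption).
X, y = make_classification(n_samples=2000, n_features=30, n_informative=12,
                           random_state=0)

models = {
    "tree-based (Random Forest)": RandomForestClassifier(n_estimators=200, random_state=0),
    "function-based (MLP)": MLPClassifier(hidden_layer_sizes=(64, 32),
                                           max_iter=500, random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```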

Keywords: deep-learning, cyber security, cyber threat modeling, tree-based machine learning, function-based machine learning, data science

Procedia PDF Downloads 123
3444 Traumatic Chiasmal Syndrome Following Traumatic Brain Injury

Authors: Jiping Cai, Ningzhi Wangyang, Jun Shao

Abstract:

Traumatic brain injury (TBI) is one of the major causes of morbidity and mortality and leads to structural and functional damage in several parts of the brain, such as the cranial nerves, the optic nerve tract or other circuitry involved in vision, and the occipital lobe, depending on its location and severity. As a result, the functions associated with vision processing and perception are significantly affected, causing blurred vision, double vision, decreased peripheral vision and blindness. Here, two cases presenting with monocular vision loss (actually temporal hemianopia) due to traumatic chiasmal syndrome after frontal head injury are reported, and their findings are compared with individual case reports published in the literature. Reported cases of traumatic chiasmal syndrome appear to share some common features, such as injury to the frontal bone and fracture of the anterior skull base. The degree of bitemporal hemianopia and visual acuity loss is variable and not necessarily related to the severity of the craniocerebral trauma. Chiasmal injury may occur even in the absence of bony chip impingement. Isolated bitemporal hemianopia is rare, and clinical improvement usually does not occur. Mechanisms of damage to the optic chiasm after trauma include direct tearing, contusion haemorrhage and contusion necrosis, as well as secondary mechanisms such as cell death, inflammation, edema, impaired neurogenesis and axonal damage associated with TBI. Besides visual field testing, MRI evaluation of the optic pathways appears to provide strong objective evidence of impaired integrity of the visual system following TBI. Therefore, traumatic chiasmal syndrome should be considered as a differential diagnosis by both neurosurgeons and ophthalmologists in patients presenting with visual impairment, especially bitemporal hemianopia, after a head injury causing frontal and anterior skull base fracture.

Keywords: bitemporal hemianopia, brain injury, optic chiasma, traumatic chiasmal syndrome

Procedia PDF Downloads 48
3443 Auto Classification of Multiple ECG Arrhythmic Detection via Machine Learning Techniques: A Review

Authors: Ng Liang Shen, Hau Yuan Wen

Abstract:

Arrhythmia analysis of the ECG signal plays a major role in diagnosing most cardiac diseases. Arrhythmia detection in an electrocardiographic (ECG) record can therefore be approached by matching each ECG beat against multiple patterns using supervised machine learning. The studies reviewed here used different features and classification methods to classify different arrhythmia types. A major problem in these studies is that the symptoms of the disease do not appear in the ECG record at all times; hence, a successful diagnosis might require the manual investigation of several hours of ECG recordings. This paper reviews investigations of cardiovascular ailments in ECG signals for cardiac arrhythmia, examining irregular wave frames on a per-beat basis and matching them to the corresponding arrhythmia with machine learning pattern recognition.

Keywords: electrocardiogram, ECG, classification, machine learning, pattern recognition, detection, QRS

Procedia PDF Downloads 342
3442 Laser Corneoplastique™: A Refractive Surgery for Corneal Scars

Authors: Arun C. Gulani, Aaishwariya A. Gulani, Amanda Southall

Abstract:

Background: Laser Corneoplastique™, a minimally interventional, visually promising technique for patients with vision disability from corneal scars of varied causes, has been retrospectively reviewed and represents a paradigm shift in the mindset and approach towards corneal scars, treating them with refractive surgery aiming for emmetropic, unaided vision of 20/20 in most cases. Three decades of work on this technique have been compiled in this 15-year study. Subject and Methods: The objective of this study was to determine the success of Laser Corneoplastique™ surgery as a treatment for corneal scar cases. Corneal scar cases of various medical histories that had undergone Laser Corneoplastique™ surgery over the past twenty years by a single surgeon (Arun C. Gulani, M.D.) were retrospectively reviewed. The details of each case were retrieved from the medical records and analyzed. Each patient had been examined thoroughly at the preoperative appointment for stability of refraction and vision, depth of scar, pachymetry, topography, pattern of the scar, and uncorrected and best corrected vision potential, all of which were taken into account in the patient's treatment plan. Results: 64 eyes of 53 patients were investigated for scar etiology, keratometry, visual acuity, and complications. There were 25 different etiologies seen, the most common being a herpetic scar. The average post-operative visual acuity was 20/23.55 (±7.05). The laser parameters recorded were ablation depth and number of pulses. Overall, the mean laser ablation depth was 30.67 µm (±19.05), ranging from 2 to 73 µm, and the number of laser pulses averaged 191.85 (±112.02). Conclusion: Refractive Laser Corneoplastique™ surgery, when practiced as an art, can address all levels of ametropia while reversing complex corneas and scars from refractive surgery complications back to 20/20 vision.

Keywords: corneal scar, refractive surgery, corneal transplant, laser corneoplastique

Procedia PDF Downloads 150
3441 Usability Evaluation of a Mobile Application to Enhance the Use of Smartphone, by Visually Impaired Users in Indonesia

Authors: Johanna Renny Octavia, Kamila Okta Saarah

Abstract:

Smartphones are nowadays widely used by people all over the world. However, people with vision impairment may experience difficulties that interfere with the proper usage of a smartphone. In Indonesia, the visually impaired population is about 13 million people (out of an estimated 285 million worldwide). A number of mobile applications have been developed to enhance smartphone use by the visually impaired. This paper discusses the usability evaluation of a mobile application, namely Ray Vision, designed to help the visually impaired use a smartphone. A series of usability tests with a number of visually impaired Indonesian users revealed 28 usability problems in the mobile application, which led to 14 design recommendations. The redesigned application was then re-evaluated through another series of usability tests. The results showed that all five usability criteria assessed improved (usefulness by 13%, effectiveness by 27%, efficiency by 27%, satisfaction by 23%, and learnability by 12%). The System Usability Scale (SUS) score also increased by 14.92%.

Keywords: mobile application, smartphone, usability evaluation, vision impaired

Procedia PDF Downloads 287
3440 Air Quality Analysis Using Machine Learning Models Under Python Environment

Authors: Salahaeddine Sbai

Abstract:

Air quality analysis using machine learning models is a method employed to assess and predict air pollution levels. This approach leverages the capabilities of machine learning algorithms to analyze vast amounts of air quality data and extract valuable insights. By training these models on historical air quality data, they can learn patterns and relationships between factors such as weather conditions, pollutant emissions, and geographical features. The trained models can then be used to predict air quality levels in real time or forecast future pollution levels. This application of machine learning in air quality analysis enables policymakers, environmental agencies, and the general public to make informed decisions regarding health, environmental impact, and mitigation strategies. By understanding the factors influencing air quality, interventions can be implemented to reduce pollution levels, mitigate health risks, and enhance overall air quality management. Climate change is having significant impacts on Morocco, affecting various aspects of the country's environment, economy, and society. In this study, we use several machine learning models in a Python environment to predict and analyze air quality changes over northern Morocco and to evaluate the impact of climate change on agriculture.
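
As a minimal sketch of the workflow described above, the following Python snippet trains a model on placeholder weather and emission features to predict a pollutant concentration; the feature names, data and model choice are illustrative assumptions rather than the study's dataset or models:

```python
# Hedged sketch: predict a pollutant concentration from weather features.
# The synthetic data below is a placeholder for historical air quality records.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 1000
temperature = rng.uniform(5, 40, n)      # degrees C (placeholder)
wind_speed = rng.uniform(0, 15, n)       # m/s (placeholder)
traffic_index = rng.uniform(0, 1, n)     # relative emissions proxy (placeholder)
pm10 = 20 + 75 * traffic_index - 1.2 * wind_speed + rng.normal(0, 5, n)

X = np.column_stack([temperature, wind_speed, traffic_index])
X_train, X_test, y_train, y_test = train_test_split(X, pm10, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```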

Keywords: air quality, machine learning models, pollution, pollutant emissions

Procedia PDF Downloads 59
3439 Review of Different Machine Learning Algorithms

Authors: Syed Romat Ali Shah, Bilal Shoaib, Saleem Akhtar, Munib Ahmad, Shahan Sadiqui

Abstract:

Classification is a data mining technique based on machine learning (ML) algorithms. It is used to classify individual items in a known body of information into a set of predefined modules or groups. Web mining is also part of this family of data mining methods. The main purpose of this paper is to analyze and compare the performance of the Naïve Bayes algorithm, Decision Tree, K-Nearest Neighbor (KNN), Artificial Neural Network (ANN), and Support Vector Machine (SVM). The paper covers these ML algorithms, their advantages and disadvantages, and also defines open research issues.
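
A minimal sketch of this kind of comparison, using scikit-learn implementations of the five algorithms named above on a small benchmark dataset; the dataset and hyperparameters are assumptions chosen only for illustration:

```python
# Hedged sketch: cross-validated comparison of the five reviewed classifiers.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

classifiers = {
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "ANN (MLP)": MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
    "SVM": SVC(kernel="rbf"),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} ± {scores.std():.3f}")
```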

Keywords: data mining, web mining, classification, ML algorithms

Procedia PDF Downloads 260
3438 Flood-Prone Urban Area Mapping Using Machine Learning, a Case Study of M'sila City (Algeria)

Authors: Medjadj Tarek, Ghribi Hayet

Abstract:

This study aims to develop a flood sensitivity assessment tool using machine learning (ML) techniques and a geographic information system (GIS). Its importance lies in integrating GIS and ML techniques for mapping flood risks, which helps decision-makers identify the most vulnerable areas and take the necessary precautions against this type of natural disaster. To reach this goal, we study the case of the city of M'sila, which is among the areas most vulnerable to flooding. A map of flood-prone areas was drawn by comparing three machine learning algorithms: XGBoost, Random Forest, and K-Nearest Neighbour, which achieved accuracies of 97.92%, 95%, and 93.75%, respectively. The flood-prone area map was produced with the most accurate of the three models, XGBoost.

Keywords: Geographic information systems (GIS), machine learning (ML), emergency mapping, flood disaster management

Procedia PDF Downloads 65
3437 The Role of Optimization and Machine Learning in e-Commerce Logistics in 2030

Authors: Vincenzo Capalbo, Gianpaolo Ghiani, Emanuele Manni

Abstract:

Global e-commerce sales have reached unprecedented levels in the past few years. As this trend is only predicted to go up as we continue into the ’20s, new challenges will be faced by companies when planning and controlling e-commerce logistics. In this paper, we survey the related literature on Optimization and Machine Learning as well as on combined methodologies. We also identify the distinctive features of next-generation planning algorithms - namely scalability, model-and-run features and learning capabilities - that will be fundamental to cope with the scale and complexity of logistics in the next decade.

Keywords: e-commerce, hardware acceleration, logistics, machine learning, mixed integer programming, optimization

Procedia PDF Downloads 208
3436 Comparison of Machine Learning Models for the Prediction of System Marginal Price of Greek Energy Market

Authors: Ioannis P. Panapakidis, Marios N. Moschakis

Abstract:

The Greek Energy Market is structured as a mandatory pool where the producers make their bid offers on a day-ahead basis. The System Operator solves an optimization routine aiming at the minimization of the cost of produced electricity. The solution of the optimization problem leads to the calculation of the System Marginal Price (SMP). Accurate forecasts of the SMP can lead to increased profits and more efficient portfolio management from the producer's perspective. The aim of this study is to provide a comparative analysis of various machine learning models, such as artificial neural networks and neuro-fuzzy models, for the prediction of the SMP of the Greek market. Machine learning algorithms are favored in prediction problems since they can capture and simulate the volatilities of complex time series.
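
A minimal sketch of forecasting the SMP from lagged values with a neural network; the synthetic price series, lag structure and model size are illustrative assumptions, not the models compared in the study:

```python
# Hedged sketch: forecast the next System Marginal Price from lagged values
# using an MLP; the sine-plus-noise series is a stand-in for real SMP data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
t = np.arange(1000)
smp = 50 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)  # EUR/MWh (placeholder)

lags = 24  # use the previous 24 hours as features (assumption)
X = np.array([smp[i - lags:i] for i in range(lags, smp.size)])
y = smp[lags:]

split = int(0.8 * len(y))
model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
print("MAE on held-out hours:", mean_absolute_error(y[split:], model.predict(X[split:])))
```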

Keywords: deregulated energy market, forecasting, machine learning, system marginal price

Procedia PDF Downloads 182
3435 Volume Density of Power of Multivector Electric Machine

Authors: Aldan A. Sapargaliyev, Yerbol A. Sapargaliyev

Abstract:

Since its invention, the electric machine (EM) can be defined as an oEM, a one-vector electric machine, as it works through one-vector inductive coupling using a one-vector electromagnet. The disadvantages of the oEM are its large size and limited efficiency in low and medium power applications. This paper describes the multi-vector electric machine (mEM), based on multi-vector inductive coupling, which is characterized by an increased surface area of the inductive coupling per EM volume, with a reduced share of the inefficient and energy-consuming part of the winding, in comparison with oEMs. In particular, the performance and power of three different electrical motors are considered, calculated and compared at the same volumes and rotor frequencies. The result of a calculation of the correlation between power density and volume for the oEM and mEM is also presented. The method of multi-vector inductive coupling enables the mEM to achieve 1.5 to 4.0 times greater power density per volume and significantly higher efficiency than today's oEMs, especially in low and medium power applications. The mEM has distinct advantages when used in transport vehicles such as electric cars and aircraft.

Keywords: electric machine, electric motor, electromagnet, efficiency of electric motor

Procedia PDF Downloads 315
3434 Use Cloud-Based Watson Deep Learning Platform to Train Models Faster and More Accurate

Authors: Susan Diamond

Abstract:

Machine learning workloads have traditionally been run in high-performance computing (HPC) environments, where users log in to dedicated machines and utilize the attached GPUs to run training jobs on huge datasets. Training large neural network models is very resource intensive, and even after exploiting parallelism and accelerators such as GPUs, a single training job can still take days. Consequently, the cost of hardware is a barrier to entry. Even when the upfront cost is not a concern, the lead time to set up such an HPC environment takes months, from acquiring the hardware to configuring it with the right firmware and software. Furthermore, scalability is hard to achieve in a rigid traditional lab environment, which makes it slow to react to dynamic change in the artificial intelligence industry. Watson Deep Learning as a Service is a cloud-based deep learning platform that mitigates the long lead time and high upfront investment in hardware. It enables robust and scalable sharing of resources among the teams in an organization and is designed for on-demand cloud environments. Providing a similar user experience in a multi-tenant cloud environment comes with its own unique challenges regarding fault tolerance, performance, and security. Watson Deep Learning as a Service tackles these challenges and presents a deep learning stack for cloud environments in a secure, scalable and fault-tolerant manner. It supports a wide range of deep-learning frameworks, such as TensorFlow, PyTorch, Caffe, Torch, Theano, and MXNet. These frameworks reduce the effort and skill set required to design, train, and use deep learning models. Deep Learning as a Service is used at IBM by AI researchers in areas including machine translation, computer vision, and healthcare.

Keywords: deep learning, machine learning, cognitive computing, model training

Procedia PDF Downloads 184
3433 A Deep Learning Approach to Subsection Identification in Electronic Health Records

Authors: Nitin Shravan, Sudarsun Santhiappan, B. Sivaselvan

Abstract:

Subsection identification, in the context of Electronic Health Records (EHRs), is the identification of the sections that are important for downstream tasks such as auto-coding. In this work, we classify the text present in EHRs according to the information it carries, using machine learning and deep learning techniques. We first briefly describe the problem and formulate it as a text classification task. We then discuss the relevant methods from the literature and try two approaches: traditional feature-extraction-based machine learning methods and deep learning methods. Through experiments on a private dataset, we establish that the deep learning methods perform better than the feature-extraction-based machine learning models.
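
A minimal sketch of the traditional feature-extraction baseline mentioned above, using TF-IDF features and a linear classifier; the snippet texts and section labels are invented placeholders, since the study's EHR dataset is private:

```python
# Hedged sketch: classify EHR text snippets into sections with TF-IDF features.
# The tiny toy corpus below is a placeholder for the private dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "patient reports chest pain for two days",   # history (placeholder)
    "prescribed 5 mg amlodipine once daily",     # medications (placeholder)
    "blood pressure 140/90, heart rate 82",      # examination (placeholder)
    "no known drug allergies reported",          # history (placeholder)
]
labels = ["history", "medications", "examination", "history"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(texts, labels)
print(model.predict(["started metformin 500 mg twice daily"]))
```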

Keywords: deep learning, machine learning, semantic clinical classification, subsection identification, text classification

Procedia PDF Downloads 183
3432 Multi-Level Air Quality Classification in China Using Information Gain and Support Vector Machine

Authors: Bingchun Liu, Pei-Chann Chang, Natasha Huang, Dun Li

Abstract:

Machine learning and data mining are two important tools for extracting useful information and knowledge from large datasets. In machine learning, classification is a widely used technique to predict qualitative variables and is generally preferred over regression from an operational point of view. Due to the enormous increase in air pollution in various countries, especially China, air quality classification has become one of the most important topics in air quality research and modelling. This study introduces a hybrid classification model based on information theory and the Support Vector Machine (SVM), using the air quality data of four cities in China, namely Beijing, Guangzhou, Shanghai and Tianjin, from January 1, 2014 to April 30, 2016. China's Ministry of Environmental Protection classifies daily air quality into six levels, namely Serious Pollution, Severe Pollution, Moderate Pollution, Light Pollution, Good and Excellent, based on their respective Air Quality Index (AQI) values. Using information theory, the information gain (IG) is calculated and feature selection is performed for both categorical features and continuous numeric features. The SVM machine learning algorithm is then applied to the selected features with cross-validation. The final evaluation reveals that the IG and SVM hybrid model performs better than SVM alone, Artificial Neural Network (ANN) and K-Nearest Neighbours (KNN) models in terms of accuracy as well as complexity.
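
A minimal sketch of the information-gain-plus-SVM pipeline described above, using mutual information as the information-gain score; the synthetic data and the number of selected features are illustrative assumptions:

```python
# Hedged sketch: information-gain feature selection followed by an SVM,
# evaluated with cross-validation on synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Placeholder for daily pollutant/meteorological features and AQI level labels.
X, y = make_classification(n_samples=800, n_features=20, n_informative=8,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

model = make_pipeline(
    SelectKBest(mutual_info_classif, k=8),   # information-gain-style selection
    StandardScaler(),
    SVC(kernel="rbf"),
)
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```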

Keywords: machine learning, air quality classification, air quality index, information gain, support vector machine, cross-validation

Procedia PDF Downloads 203
3431 Intelligent Production Machine

Authors: A. Şahinoğlu, R. Gürbüz, A. Güllü, M. Karhan

Abstract:

In this study of production machines, the aim is for the machine to automatically perceive cutting data and alter the cutting parameters. The two most important parameters to be checked in the machine control unit are feed rate and speed, and the goal is to control these parameters through the sound of the machine. The features of the optimum sound are introduced to the computer. During the process, real-time data is received and converted into numerical values by MATLAB software; according to these values, the feed and speed are decreased or increased at a certain rate until the optimum sound is obtained, and cutting then proceeds with the optimum cutting parameters. During chip removal, the features of the cutting tools, the kind of material being cut, the cutting parameters and the machine used all affect various process quantities. Instead of measuring the parameters that emerge during cutting, such as temperature, vibration and tool wear, a detailed analysis of the sound emitted during cutting provides a much easier and more economical way of detecting the various data involved in the cutting process. The relation between the cutting parameters and the sound is being identified.

Keywords: cutting process, sound processing, intelligent lathe, sound analysis

Procedia PDF Downloads 306
3430 Optimized Preprocessing for Accurate and Efficient Bioassay Prediction with Machine Learning Algorithms

Authors: Jeff Clarine, Chang-Shyh Peng, Daisy Sang

Abstract:

Bioassay is the measurement of the potency of a chemical substance by its effect on a living animal or plant tissue. Bioassay data and chemical structures from pharmacokinetic and drug metabolism screening are mined from and housed in multiple databases. Bioassay predictions are calculated accordingly to determine further advancement. This paper proposes a four-step preprocessing of datasets for improving bioassay predictions. The first step is instance selection, in which the dataset is categorized into training, testing, and validation sets. The second step is discretization, which partitions the data with accuracy versus precision in mind. The third step is normalization, where data are normalized between 0 and 1 for subsequent machine learning processing. The fourth step is feature selection, where key chemical properties and attributes are generated. The streamlined results are then analyzed for the prediction of effectiveness by various machine learning tools, including Pipeline Pilot, R, Weka, and Excel. Experiments and evaluations reveal the effectiveness of various combinations of preprocessing steps and machine learning algorithms in producing more consistent and accurate predictions.
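
A minimal sketch of the four preprocessing steps ahead of a classifier, expressed as a scikit-learn pipeline; the synthetic data and the settings of each step are assumptions for illustration only:

```python
# Hedged sketch: the four-step preprocessing described above, as a pipeline.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split               # step 1: instance selection
from sklearn.preprocessing import KBinsDiscretizer, MinMaxScaler   # steps 2-3: discretize, normalize
from sklearn.feature_selection import SelectKBest, f_classif       # step 4: feature selection
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier

# Placeholder for bioassay activity data (features = chemical descriptors).
X, y = make_classification(n_samples=1000, n_features=40, n_informative=10,
                           random_state=0)

# Step 1: split instances into training and test sets (validation set omitted here).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(
    KBinsDiscretizer(n_bins=10, encode="ordinal", strategy="uniform"),  # step 2
    MinMaxScaler(),                                                     # step 3: scale to [0, 1]
    SelectKBest(f_classif, k=15),                                       # step 4
    RandomForestClassifier(n_estimators=200, random_state=0),
)
model.fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))
```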

Keywords: bioassay, machine learning, preprocessing, virtual screen

Procedia PDF Downloads 253
3429 The Accuracy of Parkinson's Disease Diagnosis Using [123I]-FP-CIT Brain SPECT Data with Machine Learning Techniques: A Survey

Authors: Lavanya Madhuri Bollipo, K. V. Kadambari

Abstract:

Objective: To discuss key issues in the diagnosis of Parkinson's disease (PD), the features influencing PD progression, the importance of brain SPECT data in PD diagnosis, and the essential role of machine learning techniques in the early diagnosis of PD. An accurate and early diagnosis of PD is currently a challenge, as clinical symptoms arise only when there is more than 60% loss of dopaminergic neurons. So far there are no laboratory tests for the diagnosis of PD, causing a high rate of misdiagnosis, especially when the disease is in its early stages. Recent neuroimaging studies of brain SPECT using 123I-ioflupane (DaTSCAN) as the radiotracer have shown it to be widely used to assist the diagnosis of PD even in its early stages. Machine learning techniques can be used in combination with image analysis procedures to develop computer-aided diagnosis (CAD) systems for PD. This paper surveys recent studies on the diagnosis of PD in its early stages using brain SPECT data with machine learning techniques.

Keywords: Parkinson disease (PD), dopamine transporter, single-photon emission computed tomography (SPECT), support vector machine (SVM)

Procedia PDF Downloads 366
3428 Discussing Embedded versus Central Machine Learning in Wireless Sensor Networks

Authors: Anne-Lena Kampen, Øivind Kure

Abstract:

Machine learning (ML) can be implemented in Wireless Sensor Networks (WSNs) as a central solution or as a distributed solution where the ML is embedded in the nodes. Embedding improves privacy and may reduce prediction delay; in addition, the number of transmissions is reduced. However, quality factors such as prediction accuracy, fault detection efficiency and coordinated control of the overall system suffer. Here, we discuss and highlight the trade-offs that should be considered when choosing between embedded and centralized ML, especially for multihop networks. In addition, we present estimations that demonstrate the energy trade-offs between embedded and centralized ML. Although the total network energy consumption is lower with central prediction, it makes the network more prone to partitioning due to the high forwarding load on the one-hop nodes. Moreover, continuous improvements in the number of operations per joule for embedded devices will move the energy balance toward embedded prediction.
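
A back-of-the-envelope sketch of the kind of energy trade-off estimation referred to above, comparing the cost of forwarding a raw sample over several hops against computing a prediction locally; every constant is an illustrative assumption, and which side comes out ahead depends entirely on these values:

```python
# Hedged sketch: transmission energy (central ML) vs computation energy
# (embedded ML) for one sensor reading. All constants are assumptions.
E_TX_PER_BIT = 50e-9      # J per bit transmitted (assumption)
E_RX_PER_BIT = 50e-9      # J per bit received at each relay (assumption)
E_PER_OP = 1e-9           # J per arithmetic operation on the node (assumption)

sample_bits = 16 * 10     # one reading: 10 features of 16 bits (assumption)
hops = 5                  # multihop path length to the sink (assumption)
ops_local_model = 200_000 # operations for one embedded inference (assumption)

# Central ML: the raw sample is transmitted at every hop and received at every relay.
central = sample_bits * (hops * E_TX_PER_BIT + (hops - 1) * E_RX_PER_BIT)
# Embedded ML: compute locally, then forward only a 16-bit prediction.
embedded = (ops_local_model * E_PER_OP
            + 16 * (hops * E_TX_PER_BIT + (hops - 1) * E_RX_PER_BIT))

print(f"central ML (forward raw sample): {central * 1e6:.1f} uJ per reading")
print(f"embedded ML (local inference):   {embedded * 1e6:.1f} uJ per reading")
```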

Keywords: central machine learning, embedded machine learning, energy consumption, local machine learning, wireless sensor networks, WSN

Procedia PDF Downloads 123
3427 Fundamental Research Dissension between Hot and Cold Chamber High Pressure Die Casting

Authors: Sahil Kumar, Surinder Pal, Rahul Kapoor

Abstract:

This paper focuses on defining the basic differences between the hot and cold chamber high pressure die casting processes, which were not fully defined in the research papers we have studied. Pressure die casting is basically divided into two types: (1) hot chamber die casting and (2) cold chamber die casting. Cold chamber die casting is used for casting alloys that require high pressure and have a high melting temperature, such as brass, aluminum, magnesium, copper-based alloys and other high melting point nonferrous alloys. Hot chamber die casting is suitable for casting zinc, tin, lead, and other low melting point alloys. In a hot chamber die casting machine, the molten metal is an integral part of the machine. It mainly consists of a hot chamber and a gooseneck-type metal container made of cast iron. This machine is mainly used for low melting alloys and alloys of metals such as zinc and lead. Metals and alloys with a high melting point, and those with an affinity for iron, cannot be cast on this machine, as they could attack the shot sleeve and damage the machine.

Keywords: hot chamber die casting, cold chamber die casting, metals and alloys, casting technology

Procedia PDF Downloads 592
3426 Depth Estimation in DNN Using Stereo Thermal Image Pairs

Authors: Ahmet Faruk Akyuz, Hasan Sakir Bilge

Abstract:

Depth estimation using stereo images is a challenging problem in computer vision, and many different studies have been carried out to solve it. With advances in machine learning, the problem is often tackled with neural network-based solutions. The images used in these studies are mostly in the visible spectrum. However, the need to use the infrared (IR) spectrum for depth estimation has emerged because it gives better results than the visible spectrum in some conditions. At this point, we recommend using thermal-thermal (IR) image pairs for depth estimation. In this study, we used two well-known networks (PSMNet, FADNet) with minor modifications to demonstrate the viability of this idea.
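
As a reminder of the geometry such stereo networks ultimately estimate, a minimal sketch of converting a predicted disparity map to metric depth; the focal length and baseline are placeholder values, not the cameras used in the study:

```python
# Hedged sketch: depth from disparity for a rectified stereo (thermal) pair.
# depth = focal_length_px * baseline_m / disparity_px
import numpy as np

focal_length_px = 700.0   # focal length in pixels (placeholder)
baseline_m = 0.12         # distance between the two cameras in metres (placeholder)

disparity_px = np.array([[10.0, 20.0], [35.0, 70.0]])  # e.g. network output (placeholder)
depth_m = focal_length_px * baseline_m / np.maximum(disparity_px, 1e-6)
print(depth_m)  # larger disparity -> closer object
```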

Keywords: thermal stereo matching, deep neural networks, CNN, depth estimation

Procedia PDF Downloads 240
3425 Spectral Clustering for Manufacturing Cell Formation

Authors: Yessica Nataliani, Miin-Shen Yang

Abstract:

Cell formation (CF) is an important step in group technology. It is used in designing cellular manufacturing systems by exploiting similarities between parts in relation to machines, so that part families and machine groups can be identified. There are many CF methods in the literature, but spectral clustering has seen little use in CF. In this paper, we propose a spectral clustering algorithm for machine-part CF. Some experimental examples are used to illustrate its efficiency. Overall, the spectral clustering algorithm can be used in CF with a wide variety of machine/part matrices.
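
A minimal sketch of spectral clustering applied to a small machine-part incidence matrix; the matrix and the number of cells are toy assumptions, not the experimental examples of the paper:

```python
# Hedged sketch: group machines into cells by spectral clustering on the
# similarity between rows of a machine-part incidence matrix.
import numpy as np
from sklearn.cluster import SpectralClustering

# Rows = machines, columns = parts (1 = part visits machine). Toy example.
A = np.array([
    [1, 1, 0, 0, 0, 1],
    [1, 1, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 1, 0],
    [0, 0, 0, 1, 1, 0],
])

# Similarity between machines: number of parts they share.
similarity = A @ A.T

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(similarity)
print("machine cell assignments:", labels)
```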

Keywords: group technology, cell formation, spectral clustering, grouping efficiency

Procedia PDF Downloads 375
3424 Design and Performance Analysis of a Hydro-Power Rim-Driven Superconducting Synchronous Generator

Authors: A. Hassannia, S. Ramezani

Abstract:

The technology of superconductivity has been adopted in many power system devices, such as transmission cables, transformers, current limiters, motors and generators. Superconducting wires can carry high current densities without loss, a capability that is used to design compact, lightweight and more efficient electrical machines. Superconducting motors have found applications in marine and air propulsion systems, while superconducting generators are being considered for low-power hydraulic and wind generators. This paper presents a rim-driven superconducting synchronous generator for a hydraulic power plant. The rim-driven concept improves the performance of the hydro turbine. Furthermore, the high magnetic field produced by the superconducting windings allows the rotor core to be removed. As a consequence, the volume and weight of the machine are decreased significantly. In this paper, a 1 MW coreless rim-driven superconducting synchronous generator is designed. The main performance characteristics of the proposed machine are then evaluated using the finite element method and compared to an ordinary synchronous generator of similar size.

Keywords: coreless machine, electrical machine design, hydraulic generator, rim-driven machine, superconducting generator

Procedia PDF Downloads 142
3423 Stereo Motion Tracking

Authors: Yudhajit Datta, Hamsi Iyer, Jonathan Bandi, Ankit Sethia

Abstract:

Motion tracking and stereo vision are complicated, albeit well-understood, problems in computer vision. Existing software that combines the two approaches to perform stereo motion tracking typically employs complicated and computationally expensive procedures. The purpose of this study is to create a simple and effective solution capable of combining the two approaches. The study explores a strategy for combining two techniques: two-dimensional motion tracking using a Kalman filter, and depth detection of objects using stereo vision. In conventional approaches, objects in the scene of interest are observed using a single camera. For stereo motion tracking, however, the scene of interest is observed using video feeds from two calibrated cameras. Using two simultaneous measurements from the two cameras, the depth of the object from the plane containing the cameras is calculated. The approach attempts to capture the entire three-dimensional spatial information of each object in the scene and represent it through a software estimator object. At discrete intervals, the estimator tracks object motion in the plane parallel to the plane containing the cameras and updates the perpendicular distance of the object from that plane as depth. The ability to efficiently track the motion of objects in three-dimensional space using a simplified approach could prove to be an indispensable tool in a variety of surveillance scenarios. The approach may find application in high-security surveillance scenes, such as the premises of bank vaults, prisons or other detention facilities, as well as in low-cost applications in supermarkets and car parking lots.
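
A minimal sketch of the in-plane tracking step described above, a constant-velocity Kalman filter updated with noisy 2-D position measurements (the stereo depth update is handled separately); the matrices and noise levels are illustrative assumptions:

```python
# Hedged sketch: constant-velocity Kalman filter for 2-D object tracking.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],    # state transition for [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],     # only position (x, y) is measured
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-3            # process noise (assumption)
R = np.eye(2) * 0.5             # measurement noise (assumption)

x = np.zeros(4)                 # initial state
P = np.eye(4)

rng = np.random.default_rng(0)
for k in range(1, 20):
    z = np.array([k * 1.0, k * 0.5]) + rng.normal(0, 0.7, 2)  # noisy detection
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P

print("final estimated state [x, y, vx, vy]:", np.round(x, 2))
```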

Keywords: kalman filter, stereo vision, motion tracking, matlab, object tracking, camera calibration, computer vision system toolbox

Procedia PDF Downloads 299
3422 Automatic Generating CNC-Code for Milling Machine

Authors: Chalakorn Chitsaart, Suchada Rianmora, Mann Rattana-Areeyagon, Wutichai Namjaiprasert

Abstract:

G-code is the main means by which a computer numerical control (CNC) machine controls tool paths and generates the profile of an object's features. To obtain high accuracy of the surface finish, non-stop operation of the CNC machine is required. Recently, a strategy for designing new products has been introduced that favors changes with low impact on the business that do not consume a lot of resources. The cost and time of designing minor changes can be reduced, since the traditional geometric details of the existing models are reused. In order to support this strategy as an alternative channel for machining operations, this research proposes automatic code generation for CNC milling operations. This technique can help the manufacturer easily change the size and geometric shape of the product during operation, reducing the time spent setting up or processing the machine. The algorithm, implemented on the MATLAB platform, is developed by analyzing and evaluating the geometric information of the part. Codes are created rapidly to control the operations of the machine. Compared to the codes obtained from CAM software, the developed algorithm can quickly generate and simulate the cutting profile of the part.
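
A minimal sketch of generating milling G-code from geometric parameters, here for a single pass around a rectangular contour; the coordinates, depth and feed rate are placeholders, and the snippet is written in Python rather than the MATLAB implementation described above:

```python
# Hedged sketch: emit G-code for one pass around a rectangular contour.
def rectangle_contour_gcode(x0, y0, width, height, depth, feed=200.0, safe_z=5.0):
    corners = [(x0, y0), (x0 + width, y0), (x0 + width, y0 + height),
               (x0, y0 + height), (x0, y0)]
    lines = ["G21 ; millimetre units", "G90 ; absolute positioning",
             f"G0 Z{safe_z:.3f}", f"G0 X{x0:.3f} Y{y0:.3f}",
             f"G1 Z{-depth:.3f} F{feed / 2:.1f} ; plunge"]
    for x, y in corners[1:]:
        lines.append(f"G1 X{x:.3f} Y{y:.3f} F{feed:.1f}")
    lines.append(f"G0 Z{safe_z:.3f} ; retract")
    return "\n".join(lines)

print(rectangle_contour_gcode(x0=10, y0=10, width=40, height=25, depth=2))
```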

Keywords: geometric shapes, milling operation, minor changes, CNC Machine, G-code, cutting parameters

Procedia PDF Downloads 326
3421 An Analysis of Classification of Imbalanced Datasets by Using Synthetic Minority Over-Sampling Technique

Authors: Ghada A. Alfattni

Abstract:

Analysing imbalanced datasets is one of the challenges that practitioners in the machine learning field face, and much research has been carried out to determine the effectiveness of the synthetic minority over-sampling technique (SMOTE) in addressing this issue. The aim of this study was therefore to compare the effectiveness of SMOTE across different models on imbalanced datasets. Three classification models (Logistic Regression, Support Vector Machine and Nearest Neighbour) were tested on multiple datasets; the same datasets were then oversampled using SMOTE and fed to the three models again to compare the differences in performance. The results of the experiments show that the highest number of nearest neighbours gives the lowest error rates.
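
A minimal sketch of the before-and-after SMOTE comparison, using the imbalanced-learn implementation of SMOTE and one of the three classifiers named above; the imbalance ratio and data are synthetic assumptions:

```python
# Hedged sketch: compare a classifier trained with and without SMOTE oversampling.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from imblearn.over_sampling import SMOTE

# Synthetic imbalanced dataset (5% minority class) as a stand-in.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

plain = LogisticRegression(max_iter=1000).fit(X_train, y_train)
X_res, y_res = SMOTE(k_neighbors=5, random_state=0).fit_resample(X_train, y_train)
smoted = LogisticRegression(max_iter=1000).fit(X_res, y_res)

print("F1 without SMOTE:", f1_score(y_test, plain.predict(X_test)))
print("F1 with SMOTE:   ", f1_score(y_test, smoted.predict(X_test)))
```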

Keywords: imbalanced datasets, SMOTE, machine learning, logistic regression, support vector machine, nearest neighbour

Procedia PDF Downloads 318
3420 Managing Subretinal Bleeds with Intravitreal Aflibercept

Authors: Prachi Abhishek Dave, Abhishek Dave

Abstract:

Purpose: The purpose of this study is to elucidate the role of intravitreal Aflibercept injection in managing complex cases of wet age-related macular degeneration (ARMD) and the gratifying visual recovery experienced with a minimally invasive procedure. Methods: A 73-year-old gentleman presented with a drop in vision in the left eye for 25 days. On examination, his best corrected visual acuity (BCVA) was 6/60 in the right eye (OD) and finger counting close to face in the left eye (OS). On multimodal imaging, he was diagnosed with scarred wet ARMD in OD and active wet ARMD with a large subretinal bleed in OS. Treatment options included monotherapy with intravitreal Aflibercept or an intravitreal gas injection with tPA followed by Aflibercept. Considering his one-eyed status, the patient decided to go for Aflibercept monotherapy. Results: After three monthly Aflibercept injections, the subretinal bleed reduced, the subretinal fluid resolved, and his vision in OS improved to 6/9. He is on regular follow-up, has not needed any further injections in OS, and maintains 6/9 vision. Conclusions: Conventional treatment guidelines for a large subretinal bleed dictate the use of gas followed by intravitreal Aflibercept. However, gas has its own limitations of causing a rise in intraocular pressure and a transient loss of vision, which is particularly troublesome in one-eyed patients. Aflibercept alone offers a much safer, less invasive, and elegant treatment option for such patients, with equally good or even better visual outcomes.

Keywords: wet ARMD, subretinal bleed, intravitreal injections, aflibercept, Eylea, intravitreal gas

Procedia PDF Downloads 15
3419 Use of Fractal Geometry in Machine Learning

Authors: Fuad M. Alkoot

Abstract:

The main component of a machine learning system is the classifier. Classifiers are mathematical models that perform classification tasks for a specific application area. In addition, multiple classifiers are often combined, using any of the available methods, to reduce the classifier error rate. The benefits gained from combining multiple classifier designs have motivated the development of diverse approaches to multiple classifiers. We aim to investigate the use of fractal geometry to develop an improved classifier combiner. Initially, we experiment with measuring the fractal dimension of data and use the results in the development of a combiner strategy.
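
A minimal sketch of estimating the box-counting (fractal) dimension of a two-dimensional dataset, the kind of measurement the abstract proposes feeding into a combiner; the data and box sizes are illustrative assumptions:

```python
# Hedged sketch: box-counting estimate of the fractal dimension of 2-D data.
import numpy as np

rng = np.random.default_rng(0)
points = rng.random((5000, 2))          # placeholder data in the unit square

def box_count(points, n_boxes):
    # Count non-empty boxes on an n_boxes x n_boxes grid over [0, 1)^2.
    idx = np.floor(points * n_boxes).astype(int)
    idx = np.clip(idx, 0, n_boxes - 1)
    return len({tuple(i) for i in idx})

sizes = np.array([2, 4, 8, 16, 32, 64])
counts = np.array([box_count(points, n) for n in sizes])

# Slope of log(count) vs log(1/epsilon) estimates the box-counting dimension.
dimension = np.polyfit(np.log(sizes), np.log(counts), 1)[0]
print(f"estimated box-counting dimension: {dimension:.2f}")  # close to 2 for a filled square
```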

Keywords: fractal geometry, machine learning, classifier, fractal dimension

Procedia PDF Downloads 186
3418 English Grammatical Errors of Arabic Sentence Translations Done by Machine Translations

Authors: Muhammad Fathurridho

Abstract:

Grammar, as the set of rules a language uses to be understood, is always related to syntax and morphology. Arabic grammar differs from that of other languages; it has more rules and difficulties. This paper aims to investigate and describe the English grammatical errors made by machine translation systems in translating Arabic sentences, including declarative, exclamatory, imperative, and interrogative sentences, specifically in 2018, when such systems are increasingly supported by artificial intelligence. The Arabic sample sentences, divided into verbal and nominal sentences and drawn from several published Arabic texts, are examined as the source language samples. The sentences translated by several popular online machine translation systems, including Google Translate, Microsoft Bing, Babylon, Facebook, Hellotalk, Worldlingo, Yandex Translate, and Tradukka Translate, are the material objects of this research. A descriptive method is used to identify and classify the grammatical errors in the English target language. The conclusion shows that the grammatical errors in machine translation output are varied and can generally be classified into morphological, syntactical, and semantic errors across all types of Arabic words (noun, verb, and particle); correcting them is one way for machine translation providers to improve the comprehensibility of their results.

Keywords: Arabic, Arabic-English translation, machine translation, grammatical errors

Procedia PDF Downloads 133
3417 General Architecture for Automation of Machine Learning Practices

Authors: U. Borasi, Amit Kr. Jain, Rakesh, Piyush Jain

Abstract:

Data collection, data preparation, model training, model evaluation, and deployment are all stages of a typical machine learning workflow. Training data needs to be gathered and organised; this often entails collecting a sizable dataset and cleaning it to remove or correct inaccurate or missing information. Preparing the data for use in the machine learning model requires pre-processing after it has been acquired, which often entails actions such as scaling or normalising the data, handling outliers, selecting appropriate features, and reducing dimensionality. This pre-processed data is then used to train a model with some machine learning algorithm. After the model has been trained, it needs to be assessed by computing metrics such as accuracy, precision, and recall on a test dataset. Every time a new model is built, both data pre-processing and model training, two crucial processes in the machine learning (ML) workflow, must be carried out. There are many machine learning algorithms that can be employed with every single approach to data pre-processing, generating a large set of combinations to choose from. For example, for every method of handling missing values (dropping records, replacing with the mean, etc.), for every scaling technique, and for every combination of selected features, a different algorithm can be used. As a result, in order to obtain the optimum outcome, these tasks are frequently repeated in different combinations. This paper suggests a simple architecture for organizing this large combination set of pre-processing steps and algorithms into an automated workflow that simplifies the task of evaluating all possibilities.
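
A minimal sketch of the combination idea described above: a pipeline whose pre-processing operators and learning algorithm are swapped automatically by a grid search; the candidate operators and models are illustrative assumptions, not the proposed operator pool:

```python
# Hedged sketch: automatically evaluate combinations of pre-processing
# choices and learning algorithms with one grid search over a pipeline.
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.impute import SimpleImputer
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, n_features=15, random_state=0)

pipe = Pipeline([
    ("impute", SimpleImputer()),
    ("scale", StandardScaler()),
    ("model", SVC()),
])

# Each list entry is one interchangeable operator; the grid enumerates all combinations.
param_grid = {
    "impute__strategy": ["mean", "median"],
    "scale": [StandardScaler(), MinMaxScaler()],
    "model": [SVC(), RandomForestClassifier(random_state=0)],
}

search = GridSearchCV(pipe, param_grid, cv=5).fit(X, y)
print("best combination:", search.best_params_)
print("best CV accuracy:", round(search.best_score_, 3))
```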

Keywords: machine learning, automation, AUTOML, architecture, operator pool, configuration, scheduler

Procedia PDF Downloads 29
3416 Machine Installation and Maintenance Management

Authors: Mohammed Benmostefa

Abstract:

In the industrial production of large or even medium series, vibration problems arise. In continuous operation, technical devices induce vibrations in solid bodies and machine components, which generate structure-borne noise and/or airborne noise, since vibrations are the mechanical oscillations of an object about its equilibrium point. In response to the problems resulting from these vibrations, a number of remedial measures and solutions have been put forward. These include insulation of machines, insulation of concrete masses, insulation under screeds, insulation of sensitive equipment, point insulation of machines, linear insulation of machines, full surface insulation of machines, and the like. Following this, the researcher sought not only to raise awareness of the possibility of lowering the vibration frequency in industrial machines but also to stress the significance of procedures in the pre-installation phase of machinery: setting appropriate installation and start-up methods for the machine, allocating and updating an imprint folder for each machine, and scheduling year-round maintenance for each machine, so as to obtain reliable equipment, reduce costs, improve maintenance efficiency and ultimately ensure the overall economic performance of the company.

Keywords: maintenance, vibration, efficiency, production, machinery

Procedia PDF Downloads 50