Search results for: k-nearest neighbor algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3743

1763 State of the Art on the Recommendation Techniques of Mobile Learning Activities

Authors: Nassim Dennouni, Yvan Peter, Luigi Lancieri, Zohra Slama

Abstract:

The objective of this article is to provide a bibliographic study on the recommendation of mobile learning activities used as part of field trip scenarios. Recommendation systems are widely used in mobile contexts because they can deliver learning activities adapted to the learner. These systems should take into account the history of visits and the teacher's pedagogy in order to provide learning adapted to the learner's instantaneous position. To achieve this objective, we review the existing literature on field trip scenarios for recommending mobile learning activities.

Keywords: mobile learning, field trip, mobile learning activities, collaborative filtering, recommendation system, point of interest, ACO algorithm

Procedia PDF Downloads 446
1762 Implementation of a Predictive DTC-SVM of an Induction Motor

Authors: Chebaani Mohamed, Gplea Amar, Benchouia Mohamed Toufik

Abstract:

Direct torque control is characterized by fast response, simple structure, and strong robustness to motor parameter variations. This paper proposes the implementation of DTC-SVM for an induction motor drive using a predictive controller. The principle of the method is explained and the mathematical description of the system is provided. The derived control algorithm is implemented both in the simulation software MATLAB/Simulink and on a real induction motor drive with a dSPACE control system. Simulated and measured results in steady-state and transient conditions are presented.

Keywords: induction motor, DTC-SVM, predictive controller, implementation, dSPACE, Matlab, Simulink

Procedia PDF Downloads 518
1761 Detailed Observations on Numerically Invariant Signatures

Authors: Reza Aghayan

Abstract:

Numerically invariant signatures were introduced as a new paradigm for the invariant recognition of visual objects modulo a certain group of transformations. This paper shows that the current formulation suffers from noise and indeterminacy in the resulting joint group-signatures and applies the n-difference technique and the m-mean signature method to minimize their effects. In our experiments applying the proposed numerical scheme to generate joint group-invariant signatures, the sensitivity to parameters such as the regularity and mesh resolution used in the algorithm is also examined. Finally, several interesting observations are made.

Keywords: Euclidean and affine geometry, differential invariant G-signature curves, numerically invariant joint G-signatures, object recognition, noise, indeterminacy

Procedia PDF Downloads 398
1760 Network Based Speed Synchronization Control for Multi-Motor via Consensus Theory

Authors: Liqin Zhang, Liang Yan

Abstract:

This paper addresses the speed synchronization control problem for a network-based multi-motor system from the perspective of cluster consensus theory. Each motor is considered as a single agent connected through a fixed and undirected network. This paper presents an improved control protocol from three aspects. First, for the purpose of improving both tracking and synchronization performance, a distributed leader-following method is presented. The improved control protocol takes the importance of each motor's speed into consideration, and all motors are divided into different groups according to speed weights. Specifically, by optimizing the control parameters, the synchronization error and tracking error can be regulated and decoupled to some extent. The simulation results demonstrate the effectiveness and superiority of the proposed strategy. In practical engineering, simplified models such as single-integrator and double-integrator models are unrealistic, and previous algorithms require the leader's acceleration to be available to all followers when the leader has a varying velocity, which is also difficult to realize. Therefore, the second aspect focuses on an observer-based variable-structure algorithm for consensus tracking that does not require the leader's acceleration. The presented scheme optimizes synchronization performance and provides satisfactory robustness. Furthermore, existing algorithms can obtain a stable synchronous system; however, the obtained system may encounter disturbances that destroy the synchronization. To address this challenging problem, a state-dependent switching approach is introduced. In the presence of unmeasured angular speed and unknown failures, this paper investigates a distributed fault-tolerant consensus tracking algorithm for a group of non-identical motors. The failures are modeled by nonlinear functions, and a sliding mode observer is designed to estimate the angular speed and the nonlinear failures. The convergence and stability of the given multi-motor system are proved. Simulation results show that all followers asymptotically converge to a consistent state even when one follower fails to follow the virtual leader during a sufficiently large disturbance, which illustrates the good synchronization control accuracy.
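
A minimal sketch of the basic leader-following consensus idea underlying this abstract is given below. It is not the authors' full protocol (no observer, no fault tolerance, no switching); the adjacency matrix, pinning gains, and speed weights are hypothetical values chosen only for demonstration.

```python
import numpy as np

# Illustrative leader-following consensus update for motor speeds on a fixed,
# undirected network. All numbers below are placeholders, not values from the paper.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # undirected communication graph (ring of 4)
b = np.array([1.0, 0.0, 0.0, 0.0])          # only motor 0 sees the virtual leader
w = np.array([1.0, 0.8, 0.8, 0.6])          # per-motor speed-importance weights

leader_speed = 100.0                           # rad/s, constant virtual leader
speeds = np.array([90.0, 105.0, 97.0, 110.0])  # initial motor speeds
gain, dt = 0.5, 0.01

for _ in range(10000):
    # neighbor coupling sum_j a_ij (x_j - x_i), plus pinning toward the leader
    coupling = A @ speeds - A.sum(axis=1) * speeds
    pinning = b * (leader_speed - speeds)
    speeds = speeds + dt * gain * w * (coupling + pinning)

print(np.round(speeds, 3))  # all motor speeds converge toward the leader speed
```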

Keywords: consensus control, distributed leader-following, fault-tolerant control, multi-motor system, speed synchronization

Procedia PDF Downloads 125
1759 OCR/ICR Text Recognition Using ABBYY FineReader as an Example Text

Authors: A. R. Bagirzade, A. Sh. Najafova, S. M. Yessirkepova, E. S. Albert

Abstract:

This article describes a text recognition method based on Optical Character Recognition (OCR). The features of the OCR method were examined using the ABBYY FineReader program, which performs automatic text recognition in images. OCR is necessary because optical input devices can deliver only raster graphics as output. Text recognition is the task of recognizing the letters shown in an image, identifying them, and assigning each a numerical value in accordance with a standard text encoding (ASCII, Unicode). Using ABBYY FineReader as an example, the study confirms and demonstrates in practice the improvement of digital text recognition platforms developed for electronic publication.
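
ABBYY FineReader is a proprietary product, so as an illustration of the basic OCR step the abstract describes, the sketch below uses the open-source Tesseract engine through the pytesseract wrapper instead. The image path is a placeholder, and this is not ABBYY's API.

```python
# Minimal open-source analogue of the OCR step described above, using Tesseract
# via pytesseract rather than ABBYY FineReader. 'scanned_page.png' is a placeholder.
from PIL import Image
import pytesseract

image = Image.open("scanned_page.png")      # raster input from a scanner or camera
text = pytesseract.image_to_string(image)   # recognized characters as a Unicode string
print(text)
```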

Keywords: ABBYY FineReader system, algorithm symbol recognition, OCR/ICR techniques, recognition technologies

Procedia PDF Downloads 168
1758 The Different Ways to Describe Regular Languages by Using Finite Automata and the Changing Algorithm Implementation

Authors: Abdulmajid Mukhtar Afat

Abstract:

This paper aims at introducing finite automata theory, the different ways to describe regular languages, and a program that implements the subset construction algorithm to convert a nondeterministic finite automaton (NFA) to a deterministic finite automaton (DFA). The program is written in the C++ programming language. It reads the automaton's 5-tuple from a text file and classifies it as either a DFA or an NFA. For a DFA, the program reads a string w and decides whether it is accepted or not; if accepted, the program saves the accepting path and reports it. When the automaton is an NFA, the program converts it to a DFA so that it is easy to track and the program can decide whether w belongs to the regular language or not.
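
The paper's program is written in C++; the short Python sketch below only illustrates the core subset construction (without epsilon-transitions, for brevity) and a membership check on the resulting DFA. The example NFA is a hypothetical one accepting strings ending in "01".

```python
from collections import deque

def nfa_to_dfa(states, alphabet, delta, start, accept):
    """Subset construction for an NFA without epsilon-transitions.
    delta maps (state, symbol) -> set of next states."""
    start_set = frozenset([start])
    dfa_delta, queue, seen = {}, deque([start_set]), {start_set}
    while queue:
        current = queue.popleft()
        for symbol in alphabet:
            nxt = frozenset(s for q in current for s in delta.get((q, symbol), set()))
            dfa_delta[(current, symbol)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    dfa_accept = {S for S in seen if S & accept}
    return seen, dfa_delta, start_set, dfa_accept

# Example NFA over {0,1} accepting strings that end in "01"
delta = {("q0", "0"): {"q0", "q1"}, ("q0", "1"): {"q0"}, ("q1", "1"): {"q2"}}
dfa_states, dfa_delta, dfa_start, dfa_accept = nfa_to_dfa(
    {"q0", "q1", "q2"}, {"0", "1"}, delta, "q0", {"q2"})

def accepts(w):
    S = dfa_start
    for ch in w:
        S = dfa_delta[(S, ch)]
    return S in dfa_accept

print(accepts("10101"))  # True
print(accepts("110"))    # False
```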

Keywords: finite automata, subset construction, DFA, NFA

Procedia PDF Downloads 426
1757 A Novel Combination Method for Computing the Importance Map of Image

Authors: Ahmad Absetan, Mahdi Nooshyar

Abstract:

The importance map is an image-based measure and a core part of content-aware resizing algorithms. Importance measures include image gradients, saliency, and entropy, as well as high-level cues such as face detectors, motion detectors, and more. In this work we propose a new method to calculate the importance map: the map is generated automatically using a novel combination of image edge density and Harel saliency measurement. Experiments on different types of images demonstrate that our method effectively detects prominent areas and can be used in image resizing applications to preserve important areas while maintaining image quality.
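
A minimal sketch of the kind of combination described above appears below. It mixes a local edge-density map with a precomputed saliency map; Harel's graph-based saliency itself is not implemented here, so a placeholder saliency array stands in for it, and the mixing weight and window size are assumptions.

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def importance_map(gray, saliency, alpha=0.5, window=15):
    """Illustrative combination of edge density and a precomputed saliency map.
    'saliency' stands in for the Harel (GBVS) saliency used in the paper."""
    gx, gy = sobel(gray, axis=1), sobel(gray, axis=0)
    edges = np.hypot(gx, gy)
    edge_density = uniform_filter(edges, size=window)   # local edge density
    def norm(m):                                        # rescale each cue to [0, 1]
        return (m - m.min()) / (m.max() - m.min() + 1e-8)
    return alpha * norm(edge_density) + (1 - alpha) * norm(saliency)

# toy usage: random arrays stand in for a grayscale image and its saliency map
rng = np.random.default_rng(0)
gray = rng.random((120, 160))
saliency = rng.random((120, 160))
imp = importance_map(gray, saliency)
print(imp.shape, float(imp.min()), float(imp.max()))
```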

Keywords: content-aware image resizing, visual saliency, edge density, image warping

Procedia PDF Downloads 582
1756 Independent Encryption Technique for Mobile Voice Calls

Authors: Nael Hirzalla

Abstract:

The legality of some countries' or agencies' acts of spying on the public's personal phone calls has become a hot topic in many social groups. Such spying is widely considered an invasion of privacy. It may be justified when it singles out specific cases, but spying without limits is unacceptable. This paper discusses the need for a technique to secure mobile voice calls that is not only simple and lightweight but also independent of any encryption standard or library. It then presents and tests an encryption algorithm based on a frequency scrambling technique to show a fair and delay-free process that can be used to protect phone calls from such spying.
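
A minimal sketch of frame-wise frequency scrambling is shown below: each audio frame is transformed with an FFT, its interior frequency bins are permuted with a key-derived permutation, and the frame is transformed back. The frame size, sample rate, and key are illustrative, and the paper's real-time audio capture, framing, and key exchange are omitted.

```python
import numpy as np

def scramble_frame(frame, key, descramble=False):
    """Permute the interior FFT bins of one audio frame with a key-derived permutation.
    DC and Nyquist bins stay in place so the operation is exactly invertible.
    Illustrative only: real-time I/O, framing, and key exchange are omitted."""
    spectrum = np.fft.rfft(frame)
    perm = np.random.default_rng(key).permutation(len(spectrum) - 2) + 1  # bins 1..N/2-1
    out = spectrum.copy()
    if descramble:
        out[perm] = spectrum[1:-1]      # invert the permutation
    else:
        out[1:-1] = spectrum[perm]      # apply the permutation
    return np.fft.irfft(out, n=len(frame))

key = 20240117                                              # shared secret (placeholder)
frame = np.sin(2 * np.pi * 440 * np.arange(512) / 8000)    # 512-sample frame at 8 kHz
encrypted = scramble_frame(frame, key)
restored = scramble_frame(encrypted, key, descramble=True)
print(np.allclose(frame, restored))                         # True: exactly invertible
```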

Keywords: frequency scrambling, mobile applications, real-time voice encryption, spying on calls

Procedia PDF Downloads 479
1755 Artificial Intelligence Ethics: What Business Leaders Need to Consider for the Future

Authors: Kylie Leonard

Abstract:

Investment in artificial intelligence (AI) can be an attractive opportunity for business leaders, as there are many easy-to-see benefits. These benefits include improved task completion rates, lower overall costs, and better forecasting. Business leaders are often unaware of the challenges that can accompany AI, such as data center costs, access to data, employee acceptance, and privacy concerns. In addition to the benefits and challenges of AI, it is important to practice AI ethics to ensure the safe creation of AI. AI ethics includes aspects such as algorithmic bias, limits in transparency, and surveillance. To be a good business leader, it is critical to address all the considerations involving the challenges of AI and AI ethics.

Keywords: artificial intelligence, artificial intelligence ethics, business leaders, business concerns

Procedia PDF Downloads 147
1754 Grid Pattern Recognition and Suppression in Computed Radiographic Images

Authors: Igor Belykh

Abstract:

Anti-scatter grids used in radiographic imaging for contrast enhancement leave specific artifacts. These artifacts may be visible directly or may cause a Moiré effect when a digital image is resized on a diagnostic monitor. In this paper, we propose an automated grid artifact detection and suppression algorithm, addressing a problem that remains open. Grid artifact detection is based on a statistical approach in the spatial domain. Grid artifact suppression is based on the design and application of a Kaiser bandstop filter transfer function that avoids ringing artifacts. Experimental results are discussed, and we conclude with a description of the advantages over existing approaches.
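
A minimal sketch of a Kaiser-window FIR bandstop design with SciPy is shown below. The sampling frequency, detected grid frequency, stop-band edges, and ripple specification are placeholders, and the paper's statistical grid-frequency detection step is not reproduced; zero-phase filtering is used here simply to avoid phase distortion.

```python
import numpy as np
from scipy.signal import kaiserord, firwin, filtfilt

fs = 100.0                  # sampling frequency in cycles/mm (placeholder)
grid_freq = 35.0            # detected grid frequency (placeholder)
width = 4.0                 # transition width (placeholder)
ripple_db = 60.0            # stop-band attenuation (placeholder)

numtaps, beta = kaiserord(ripple_db, width / (0.5 * fs))
numtaps |= 1                # odd tap count: required for a filter passing Nyquist
taps = firwin(numtaps, [grid_freq - 3.0, grid_freq + 3.0],
              window=("kaiser", beta), fs=fs)   # two cutoffs + pass_zero default = bandstop

# apply along image rows with zero-phase filtering
image = np.random.rand(64, 512)                 # stand-in for a CR image
filtered = filtfilt(taps, [1.0], image, axis=1)
print(filtered.shape)
```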

Keywords: grid, computed radiography, pattern recognition, image processing, filtering

Procedia PDF Downloads 283
1753 Similarity Based Retrieval in Case Based Reasoning for Analysis of Medical Images

Authors: M. Dasgupta, S. Banerjee

Abstract:

Content-Based Image Retrieval (CBIR) coupled with Case-Based Reasoning (CBR) is a paradigm that is becoming increasingly popular in the diagnosis and therapy planning of medical ailments using the digital content of medical images. This paper presents a survey of some of the promising approaches used in the detection of abnormalities in retina images, as well as in mammographic screening and the detection of regions of interest in MRI scans of the brain. We also describe our proposed algorithm to detect hard exudates in fundus images of the retina of Diabetic Retinopathy patients.

Keywords: case based reasoning, exudates, retina image, similarity based retrieval

Procedia PDF Downloads 348
1752 An Online 3D Modeling Method Based on a Lossless Compression Algorithm

Authors: Jiankang Wang, Hongyang Yu

Abstract:

This paper proposes a portable online 3D modeling method. The method first utilizes a depth camera to collect data and compresses the depth data using a frame-by-frame lossless data compression method. The color image is encoded using the H.264 format. After the cloud receives the color and depth images, a 3D modeling method based on BundleFusion is used to complete the 3D modeling. The results of this study indicate that the method is portable, online, and highly efficient, and it has a wide range of application prospects.
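
The abstract does not name the lossless codec, so the sketch below uses zlib (DEFLATE) as a stand-in to show the frame-by-frame idea on 16-bit depth frames; the frame size and depth range are placeholders.

```python
import numpy as np
import zlib

def compress_depth_frame(depth):
    """Losslessly compress one 16-bit depth frame; zlib (DEFLATE) stands in for the
    unspecified lossless codec used in the paper."""
    return zlib.compress(depth.astype(np.uint16).tobytes(), level=6)

def decompress_depth_frame(blob, shape):
    return np.frombuffer(zlib.decompress(blob), dtype=np.uint16).reshape(shape)

depth = (np.random.default_rng(1).random((480, 640)) * 4000).astype(np.uint16)  # mm
blob = compress_depth_frame(depth)
restored = decompress_depth_frame(blob, depth.shape)
# reconstruction is exact; the compression ratio depends on the frame content
print(np.array_equal(depth, restored), len(blob) / depth.nbytes)
```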

Keywords: 3D reconstruction, bundlefusion, lossless compression, depth image

Procedia PDF Downloads 82
1751 Descent Algorithms for Optimization Algorithms Using q-Derivative

Authors: Geetanjali Panda, Suvrakanti Chakraborty

Abstract:

In this paper, Newton-like descent methods are proposed for unconstrained optimization problems, using q-derivatives of the gradient of the objective function. First, a local scheme is developed with an alternative sufficient optimality condition, and then the method is extended to a global scheme. Moreover, a variant of the practical Newton scheme is also developed by introducing a real sequence. Global convergence of these schemes is proved under mild conditions. Numerical experiments and graphical illustrations are provided. Finally, performance profiles on a test set show that the proposed schemes are competitive with existing first-order schemes for optimization problems.
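
For readers unfamiliar with q-calculus, the sketch below shows the Jackson q-derivative, D_q f(x) = (f(qx) - f(x)) / ((q - 1)x) for x != 0, and a plain q-gradient descent step on a one-dimensional quadratic. This is a deliberate simplification for illustration, not the Newton-like schemes proposed in the paper; q, the step size, and the test function are arbitrary choices.

```python
# Jackson q-derivative and a simple q-gradient descent step (illustrative only).
def q_derivative(f, x, q=0.9):
    return (f(q * x) - f(x)) / ((q - 1) * x)

def f(x):
    return (x - 2.0) ** 2 + 1.0

x, step = 5.0, 0.1
for _ in range(200):
    x -= step * q_derivative(f, x, q=0.9)
# converges to 4 / (1 + q) ~ 2.105 here; tends to the true minimizer 2 as q -> 1
print(round(x, 4))
```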

Keywords: descent algorithm, line search method, q-calculus, quasi-Newton method

Procedia PDF Downloads 398
1750 A Nonlinear Parabolic Partial Differential Equation Model for Image Enhancement

Authors: Tudor Barbu

Abstract:

We present a robust nonlinear parabolic partial differential equation (PDE)-based denoising scheme in this article. Our approach is based on a second-order anisotropic diffusion model that is described first. Then, a consistent and explicit numerical approximation algorithm is constructed for this continuous model by using the finite-difference method. Finally, our restoration experiments and method comparison, which prove the effectiveness of this proposed technique, are discussed in this paper.
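
A minimal sketch of an explicit finite-difference anisotropic diffusion step is given below. It is a Perona-Malik-type scheme standing in for, and simpler than, the second-order model described in the paper; the diffusivity function, time step, and iteration count are assumptions.

```python
import numpy as np

def anisotropic_diffusion(u, n_iter=20, kappa=15.0, dt=0.2):
    """Explicit finite-difference anisotropic diffusion (Perona-Malik type).
    Illustrative stand-in for the paper's second-order model."""
    u = u.astype(float).copy()
    for _ in range(n_iter):
        # differences toward the four neighbors (periodic borders via np.roll, for brevity)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        # edge-stopping (diffusivity) function g(|grad u|) = exp(-(|grad u| / kappa)^2)
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

noisy = np.random.default_rng(0).normal(0.0, 20.0, (64, 64)) + 128.0
denoised = anisotropic_diffusion(noisy)
print(noisy.std().round(2), denoised.std().round(2))  # the noise variance is reduced
```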

Keywords: anisotropic diffusion, finite differences, image denoising and restoration, nonlinear PDE model, numerical approximation schemes

Procedia PDF Downloads 313
1749 Algorithm for Modelling Land Surface Temperature and Land Cover Classification and Their Interaction

Authors: Jigg Pelayo, Ricardo Villar, Einstine Opiso

Abstract:

The rampant and unintended spread of urban areas has increased the share of artificial features in the land cover of the countryside and brought forth the urban heat island (UHI) effect. This leads to a wide range of negative influences on human health and the environment, commonly related to air pollution, drought, higher energy demand, and water shortage. Land cover type also plays a relevant role in understanding the interaction between ground surfaces and local temperature. At the moment, the depiction of land surface temperature (LST) at the city/municipality scale, particularly in certain areas of Misamis Oriental, Philippines, is inadequate to support efficient mitigation of and adaptation to the surface urban heat island (SUHI). Thus, this study applies Landsat 8 satellite data and low-density Light Detection and Ranging (LiDAR) products to map out a quality automated LST model and a crop-level land cover classification at the local scale, through a theoretical and algorithm-based approach utilizing the principles of data analysis applied to a multi-dimensional image object model. The paper also explores the relationship between the derived LST and the land cover classification. The results of the presented model show the ability of comprehensive data analysis and GIS functionalities, integrated with an object-based image analysis (OBIA) approach, to automate complex map production processes with considerable efficiency and high accuracy. The findings may potentially lead to expanded investigation of the temporal dynamics of land surface UHI. It is worthwhile to note that the environmental significance of these interactions, examined through the combined application of remote sensing, geographic information tools, mathematical morphology, and data analysis, can provide microclimate perception, awareness, and improved decision-making for land use planning and characterization at the local and neighborhood scale. As a result, it can aid in problem identification and support mitigations and adaptations more efficiently.
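
As background for the LST modelling mentioned above, the sketch below shows the standard Landsat 8 TIRS brightness-temperature step that LST retrieval typically starts from. The radiometric rescaling and thermal constants shown are placeholders of the kind normally read from a scene's MTL metadata file, and the paper's full LST model (emissivity correction, OBIA integration) is not reproduced.

```python
import numpy as np

# Brightness temperature from a Landsat 8 TIRS Band 10 digital-number array:
#   L = ML * DN + AL                (top-of-atmosphere spectral radiance)
#   T_B = K2 / ln(K1 / L + 1)       (brightness temperature, in Kelvin)
# ML, AL, K1, K2 normally come from the scene's MTL metadata file; the numbers below
# are placeholders with typical magnitudes, not values from an actual scene.
ML, AL = 3.342e-4, 0.1
K1, K2 = 774.8853, 1321.0789

dn = np.random.default_rng(0).integers(20000, 30000, size=(100, 100)).astype(float)
radiance = ML * dn + AL
bt_kelvin = K2 / np.log(K1 / radiance + 1.0)
print(bt_kelvin.mean().round(2))   # brightness temperature before emissivity correction
```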

Keywords: LiDAR, OBIA, remote sensing, local scale

Procedia PDF Downloads 282
1748 A Double Ended AC Series Arc Fault Location Algorithm Based on Currents Estimation and a Fault Map Trace Generation

Authors: Edwin Calderon-Mendoza, Patrick Schweitzer, Serge Weber

Abstract:

Series arc faults appear frequently and unpredictably in low-voltage distribution systems. Many methods have been developed to detect this type of fault, and commercial protection devices such as AFCIs (arc fault circuit interrupters) have been used successfully in electrical networks to prevent damage and catastrophic incidents like fires. However, these devices do not allow series arc faults to be located on the line while it is operating. This paper presents a location algorithm for series arc faults in a low-voltage indoor power line in a 230 V, 50 Hz AC home network. The method is validated through simulations in MATLAB. The fault location method uses the electrical parameters (resistance, inductance, capacitance, and conductance) of a 49 m indoor power line. The mathematical model of a series arc fault is based on the analysis of the V-I characteristics of the arc and consists basically of two antiparallel diodes and DC voltage sources. In a first step, the arc fault model is inserted at different positions along the line, which is modeled using lumped parameters. At both ends of the line, currents and voltages are recorded for each arc fault generated at a different distance. In a second step, a fault map trace is created by using signature coefficients obtained from Kirchhoff's equations, which allow a virtual decoupling of the line's mutual capacitance. Each signature coefficient, obtained from the subtraction of estimated currents, is calculated taking into account the discrete fast Fourier transform of the currents and voltages and the fault distance value; these parameters are then substituted into Kirchhoff's equations. In a third step, the same procedure used to calculate the signature coefficients is employed, but this time considering hypothetical fault distances at which the fault could appear; in this step the fault distance is unknown. The iterative calculation of Kirchhoff's equations for stepped variations of the fault distance yields a curve with a linear trend. Finally, the fault distance is estimated at the intersection of the two curves obtained in steps 2 and 3. The series arc fault model is validated by comparing currents registered in simulation with real recorded currents. The model of the complete circuit is obtained for a 49 m line with a resistive load, and 11 different arc fault positions are considered for the map trace generation. The complete simulation demonstrates the performance of the method, and perspectives for future work are presented.
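
The sketch below illustrates only the final location step described above: two signature-coefficient curves with linear trends (the step-2 fault map trace and the step-3 hypothetical-distance sweep) are fitted and intersected. The curves are synthetic placeholders, not coefficients actually derived from the Kirchhoff equations of the 49 m line.

```python
import numpy as np

distances = np.linspace(0.0, 49.0, 50)                        # candidate distances in metres
rng = np.random.default_rng(3)
curve_map = 0.8 * distances + 2.0 + rng.normal(0, 0.2, 50)    # step-2 trace (placeholder)
curve_hyp = -0.5 * distances + 38.0 + rng.normal(0, 0.2, 50)  # step-3 sweep (placeholder)

# fit a linear trend to each curve and locate their intersection
a1, b1 = np.polyfit(distances, curve_map, 1)
a2, b2 = np.polyfit(distances, curve_hyp, 1)
fault_distance = (b2 - b1) / (a1 - a2)
print(round(fault_distance, 2), "m")   # ~27.7 m for these placeholder curves
```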

Keywords: indoor power line, fault location, fault map trace, series arc fault

Procedia PDF Downloads 137
1747 Capacity Optimization in Cooperative Cognitive Radio Networks

Authors: Mahdi Pirmoradian, Olayinka Adigun, Christos Politis

Abstract:

Cooperative spectrum sensing is a crucial challenge in cognitive radio networks. Cooperative sensing can increase the reliability of spectrum hole detection, optimize sensing time, and reduce delay in cooperative networks. In this paper, an efficient central capacity optimization algorithm is proposed to minimize cooperative sensing time in a homogeneous sensor network using the OR decision rule, subject to detection and false alarm probability constraints. The evaluation results reveal significant improvement in the sensing time and normalized capacity of the cognitive sensors.
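
The sketch below shows only the OR fusion rule at the heart of such schemes, where the fusion centre declares the band occupied if any sensor detects it: Q_d = 1 - (1 - P_d)^N and Q_f = 1 - (1 - P_f)^N. The per-sensor probabilities and targets are placeholders, and the paper's optimization of the sensing time itself is not modelled here.

```python
# OR-rule fusion of N identical sensors (illustrative numbers only).
p_d, p_f = 0.7, 0.05            # per-sensor detection / false-alarm probability
target_qd, max_qf = 0.99, 0.30  # global constraints

for n in range(1, 21):
    qd = 1 - (1 - p_d) ** n     # global detection probability
    qf = 1 - (1 - p_f) ** n     # global false-alarm probability
    if qd >= target_qd and qf <= max_qf:
        print(f"N = {n}: Q_d = {qd:.3f}, Q_f = {qf:.3f}")
        break
```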

Keywords: cooperative networks, normalized capacity, sensing time

Procedia PDF Downloads 633
1746 On Improving Breast Cancer Prediction Using GRNN-CP

Authors: Kefaya Qaddoum

Abstract:

The aim of this study is to predict breast cancer and to construct a supportive model that enables more reliable prediction, a factor that is fundamental for public health. In this study, we utilize general regression neural networks (GRNN) and replace point predictions with prediction intervals in order to achieve a reasonable level of confidence. The mechanism employed here utilises a machine learning framework called conformal prediction (CP), which assigns consistent confidence measures to predictions and is combined with GRNN. We apply the resulting algorithm to the problem of breast cancer diagnosis. The results show that the predictions constructed by this method are reasonable and could be useful in practice.
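
A minimal sketch of the two ingredients is shown below: a GRNN (kernel-weighted regression) prediction and a split (inductive) conformal step that turns point predictions into intervals at a chosen confidence. The data, bandwidth, and split sizes are synthetic placeholders, and the authors' exact CP variant may differ.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """General regression neural network: Gaussian-kernel-weighted average of targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = X[:, 0] * 2.0 + rng.normal(scale=0.3, size=300)   # synthetic placeholder data

# split-conformal step: calibrate the interval half-width on held-out residuals
X_tr, y_tr, X_cal, y_cal, X_new = X[:200], y[:200], X[200:280], y[200:280], X[280:]
residuals = np.abs(y_cal - grnn_predict(X_tr, y_tr, X_cal))
q = np.quantile(residuals, 0.95)                      # 95% confidence half-width
pred = grnn_predict(X_tr, y_tr, X_new)
intervals = np.stack([pred - q, pred + q], axis=1)
print(intervals[:3].round(2))
```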

Keywords: neural network, conformal prediction, cancer classification, regression

Procedia PDF Downloads 291
1745 Market Solvency Capital Requirement Minimization: How Non-linear Solvers Provide Portfolios Complying with Solvency II Regulation

Authors: Abraham Castellanos, Christophe Durville, Sophie Echenim

Abstract:

In this article, a portfolio optimization problem is considered in a Solvency II context: it illustrates how advanced optimization techniques can help tackle complex operational pain points around the monitoring, control, and stability of the Solvency Capital Requirement (SCR). The market SCR of a portfolio is calculated as a combination of SCR sub-modules. These sub-modules are the results of stress tests on interest rate, equity, property, credit, and FX factors, as well as concentration on counterparties. The market SCR is non-convex and non-differentiable, which does not make it a natural candidate as an optimization criterion. In the SCR formulation, the correlations between sub-modules are fixed, whereas risk-driven portfolio allocation is usually driven by the dynamics of the actual correlations. Implementing a portfolio construction approach that is efficient from both a regulatory and an economic standpoint is not straightforward. Moreover, the challenge for insurance portfolio managers is not only to achieve a minimal SCR to reduce non-invested capital but also to ensure the stability of the SCR. Some optimizations have already been performed in the literature by simplifying the standard formula into a quadratic function, but to our knowledge, this is the first time that the standard formula of the market SCR is used in an optimization problem. Two solvers are combined: a bundle algorithm for convex non-differentiable problems, and a BFGS (Broyden-Fletcher-Goldfarb-Shanno)-SQP (Sequential Quadratic Programming) algorithm to cope with non-convex cases. A market SCR minimization is then performed with historical data. This approach results in a significant reduction of the capital requirement compared to a classical Markowitz approach based on historical volatility. A comparative analysis of different optimization models (equi-risk-contribution portfolio, minimum-volatility portfolio, and minimum value-at-risk portfolio) is performed, and the impact of these strategies on risk measures, including the market SCR and its sub-modules, is evaluated. A lack of diversification of the market SCR is observed, especially for equities; this was expected, since the market SCR strongly penalizes this type of financial instrument. It is shown that this direct effect of the regulation can be attenuated by implementing constraints in the optimization process or by minimizing the market SCR together with the historical volatility, proving the interest of a portfolio construction approach that can incorporate such features. The present results are further explained by the market SCR modelling.
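
The sketch below illustrates the aggregation at the heart of the problem, the market SCR as a correlation-weighted aggregation of sub-module SCRs, sqrt(s' C s), and a simple numerical minimization over portfolio weights. The correlation matrix, the linear stress loadings, and the solver (SLSQP rather than the paper's bundle / BFGS-SQP combination) are all placeholder simplifications, not the Solvency II standard formula parameters.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative simplification: each sub-module SCR is a linear function of the portfolio
# weights (s = S w), and the market SCR is aggregated with a fixed correlation matrix C.
C = np.array([[1.0, 0.5, 0.5],
              [0.5, 1.0, 0.75],
              [0.5, 0.75, 1.0]])          # interest / equity / property, toy values
S = np.array([[0.10, 0.02, 0.03, 0.00],  # per-asset stress losses for each sub-module
              [0.00, 0.39, 0.00, 0.22],
              [0.00, 0.00, 0.25, 0.05]])

def market_scr(w):
    s = S @ w
    return float(np.sqrt(s @ C @ s))

n = S.shape[1]
cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)   # fully invested portfolio
res = minimize(market_scr, x0=np.full(n, 1.0 / n), bounds=[(0.0, 1.0)] * n,
               constraints=cons, method="SLSQP")
print(res.x.round(3), round(res.fun, 4))   # SCR-minimizing weights and the resulting SCR
```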

Keywords: financial risk, numerical optimization, portfolio management, solvency capital requirement

Procedia PDF Downloads 117
1744 Machine Learning Model to Predict TB Bacteria-Resistant Drugs from TB Isolates

Authors: Rosa Tsegaye Aga, Xuan Jiang, Pavel Vazquez Faci, Siqing Liu, Simon Rayner, Endalkachew Alemu, Markos Abebe

Abstract:

Tuberculosis (TB) is a major cause of disease globally. In most cases, TB is treatable and curable, but only with proper treatment. Drug-resistant TB occurs when the bacteria become resistant to the drugs used to treat TB. Current strategies to identify drug-resistant TB bacteria are laboratory based, and it takes a long time to identify the resistant bacteria and treat the patient accordingly. Machine learning (ML) and data science approaches can offer new solutions to this problem. In this study, we propose to develop an ML-based model that predicts the antibiotic resistance phenotypes of TB isolates in minutes, so that the right treatment can be given to the patient immediately. The study uses the whole genome sequences (WGS) of TB isolates, extracted from the NCBI repository and containing samples from different countries, as training data to build the ML models. Samples from different countries were included in order to generalize over the large group of TB isolates from different regions of the world; this helps the model learn the different behaviors of the TB bacteria and makes it robust. Model training considers three pieces of information extracted from the WGS data: all variants found within the candidate genes (F1), predetermined resistance-associated variants (F2), and the resistance-associated gene information for the particular drug. Two major datasets were constructed from these: F1 and F2 are treated as two independent datasets, and the third piece of information is used as the class label for both. Five machine learning algorithms were considered to train the model: support vector machine (SVM), random forest (RF), logistic regression (LR), gradient boosting, and AdaBoost. The models were trained on the datasets F1, F2, and F1F2, the latter being F1 and F2 merged. Additionally, an ensemble approach was used: the F1 and F2 datasets are each run through the gradient boosting algorithm, the outputs are combined into one dataset called the F1F2 ensemble dataset, and a model is trained on this dataset with each of the five algorithms. As the experiments show, the ensemble approach model trained with the gradient boosting algorithm outperformed the rest of the models. In conclusion, this study suggests the ensemble approach, that is, the RF + gradient boosting model, to predict the antibiotic resistance phenotypes of TB isolates.
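
A minimal sketch of the two-stage ensemble described above is given below, using scikit-learn: gradient boosting is fit separately on the F1 and F2 feature sets, their out-of-fold predicted probabilities form the "F1F2 ensemble" features, and a second-stage classifier (a random forest here, one of the five candidate learners) is trained on top. Synthetic arrays stand in for the WGS-derived features and resistance labels, and the exact pairing of first- and second-stage learners in the paper may differ.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_predict, cross_val_score

rng = np.random.default_rng(0)
n = 400
X_f1 = rng.normal(size=(n, 30))   # stand-in for candidate-gene variant features (F1)
X_f2 = rng.normal(size=(n, 10))   # stand-in for known resistance-associated variants (F2)
y = (X_f1[:, 0] + X_f2[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)  # phenotype

# stage 1: gradient boosting on each feature set; out-of-fold probabilities avoid leakage
p_f1 = cross_val_predict(GradientBoostingClassifier(), X_f1, y, cv=5, method="predict_proba")
p_f2 = cross_val_predict(GradientBoostingClassifier(), X_f2, y, cv=5, method="predict_proba")
X_ens = np.hstack([p_f1, p_f2])   # the "F1F2 ensemble" dataset

# stage 2: a random forest trained on the ensemble features
score = cross_val_score(RandomForestClassifier(n_estimators=200), X_ens, y, cv=5).mean()
print(round(score, 3))
```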

Keywords: machine learning, MTB, WGS, drug resistant TB

Procedia PDF Downloads 52
1743 Survey on Big Data Stream Classification by Decision Tree

Authors: Mansoureh Ghiasabadi Farahani, Samira Kalantary, Sara Taghi-Pour, Mahboubeh Shamsi

Abstract:

Nowadays, the development of computer technology and its recent applications provide access to new types of data that have not been considered by traditional data analysts. Two particularly interesting characteristics of such data sets are their huge size and streaming nature. Incremental learning techniques have been used extensively to address the data stream classification problem. This paper presents a concise survey of the obstacles and requirements in classifying data streams using decision trees. The most important issue is to maintain a balance between accuracy and efficiency: the algorithm should provide good classification performance with a reasonable response time.
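
The accuracy-versus-efficiency balance mentioned above is usually handled through the Hoeffding bound, which streaming decision-tree learners such as VFDT use to decide when enough examples have been seen to commit to a split; a minimal sketch follows.

```python
import math

def hoeffding_bound(value_range, delta, n):
    """epsilon = sqrt(R^2 ln(1/delta) / (2 n)): with probability 1 - delta, the observed
    mean of n samples lies within epsilon of the true mean (range R). Streaming decision
    trees such as VFDT split once the gap between the two best attributes exceeds epsilon."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

# example: information gain lies in [0, 1] (R = 1); require 1 - delta = 0.999 confidence
for n in (100, 1000, 10000):
    print(n, round(hoeffding_bound(1.0, 1e-3, n), 4))
# the bound shrinks as more stream examples arrive, so splits become safe to make
```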

Keywords: big data, data streams, classification, decision tree

Procedia PDF Downloads 521
1742 Model Predictive Control of Turbocharged Diesel Engine with Exhaust Gas Recirculation

Authors: U. Yavas, M. Gokasan

Abstract:

Control of a diesel engine's air path has drawn a lot of attention due to its multi-input multi-output, closely coupled, nonlinear nature. Today, precise control of the amount of air to be combusted is a must in order to meet tight emission limits and performance targets. In this study, a passenger-car-size diesel engine is modeled in AVL Boost RT and then simulated with standard, industry-level PID controllers. Finally, a linear model predictive controller is designed and simulated. This study shows the importance of modeling and control of diesel engines with flexible algorithm development in computer-based systems.

Keywords: predictive control, engine control, engine modeling, PID control, feedforward compensation

Procedia PDF Downloads 636
1741 Axisymmetric Nonlinear Analysis of Point Supported Shallow Spherical Shells

Authors: M. Altekin, R. F. Yükseler

Abstract:

Geometrically nonlinear axisymmetric bending of a shallow spherical shell with a point support at the apex under a linearly varying axisymmetric load was investigated numerically. The edge of the shell was assumed to be either simply supported or clamped. The solution was obtained by the finite difference and Newton-Raphson methods. The thickness of the shell was considered uniform, and the material was assumed to be homogeneous and isotropic. A sensitivity analysis was performed for two geometrical parameters. The accuracy of the algorithm was checked by comparing the deflection with the solution for point-supported circular plates, and good agreement was obtained.

Keywords: bending, nonlinear, plate, point support, shell

Procedia PDF Downloads 264
1740 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images

Authors: Elham Bagheri, Yalda Mohsenzadeh

Abstract:

Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, creating a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. The reconstruction error of each image, the error reduction, and its distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate strong correlations between both the reconstruction error and the distinctiveness of images and their memorability scores. This suggests that images with more unique, distinct features that challenge the autoencoder's compressive capacities are inherently more memorable. There is also a negative correlation between memorability and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably due to having features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
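
A minimal sketch of the two image-level quantities the analysis relies on, per-image reconstruction error and distinctiveness as the Euclidean distance to the nearest neighbor in latent space, is shown below. Placeholder arrays stand in for the MemCat images, the autoencoder outputs, the latent codes, and the behavioral memorability scores; the VGG-based autoencoder itself is not reproduced.

```python
import numpy as np

def reconstruction_error(images, reconstructions):
    """Mean squared error per image (one of several possible reconstruction losses)."""
    return ((images - reconstructions) ** 2).reshape(len(images), -1).mean(axis=1)

def latent_distinctiveness(latents):
    """Euclidean distance from each latent vector to its nearest neighbor."""
    d = np.linalg.norm(latents[:, None, :] - latents[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1)

# placeholder arrays standing in for images, autoencoder outputs, latents, and scores
rng = np.random.default_rng(0)
images = rng.random((50, 64, 64, 3))
recons = images + rng.normal(scale=0.05, size=images.shape)
latents = rng.normal(size=(50, 128))
memorability = rng.random(50)

err = reconstruction_error(images, recons)
dist = latent_distinctiveness(latents)
print(np.corrcoef(err, memorability)[0, 1].round(3),
      np.corrcoef(dist, memorability)[0, 1].round(3))
```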

Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception

Procedia PDF Downloads 90
1739 Detecting Elderly Abuse in US Nursing Homes Using Machine Learning and Text Analytics

Authors: Minh Huynh, Aaron Heuser, Luke Patterson, Chris Zhang, Mason Miller, Daniel Wang, Sandeep Shetty, Mike Trinh, Abigail Miller, Adaeze Enekwechi, Tenille Daniels, Lu Huynh

Abstract:

Machine learning and text analytics have been used to analyze child abuse, cyberbullying, domestic abuse and domestic violence, and hate speech. However, to the authors' knowledge, no research to date has used these methods to study elder abuse in nursing homes or skilled nursing facilities from field inspection reports. We used machine learning and text analytics methods to analyze 356,000 inspection reports extracted from CMS Form-2567 field inspections of US nursing homes and skilled nursing facilities between 2016 and 2021. Our algorithm detected occurrences of the various types of abuse, including physical abuse, psychological abuse, verbal abuse, sexual abuse, and passive and active neglect. For example, to detect physical abuse, our algorithms search for combinations of phrases and words suggesting willful infliction of damage (hitting, pinching or burning, tethering, tying), or consciously ignoring an emergency. To detect occurrences of elder neglect, our algorithm looks for combinations of phrases and words suggesting both passive neglect (neglecting vital needs, allowing malnutrition and dehydration, allowing decubiti, deprivation of information, limitation of freedom, negligence toward safety precautions) and active neglect (intimidation and name-calling, tying the victim up to prevent falls without consent, consciously ignoring an emergency, not calling a physician in spite of indication, stopping important treatments, failure to provide essential care, deprivation of nourishment, leaving a person alone for an inappropriate amount of time, excessive demands in a situation of care). We further compare the prevalence of abuse before and after Covid-19-related restrictions on nursing home visits. We also identified the facilities with the highest number of abuse cases, with no abuse facilities within a 25-mile radius, as the most likely candidates for additional inspections. We also built an interactive display to visualize the locations of these facilities.
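
A minimal sketch of the rule/keyword layer described above, scanning inspection-report text for phrases indicative of a given abuse type, is shown below. The phrase lists are abbreviated placeholders, not the study's lexicons, and the study also applies ML models beyond this matching step.

```python
import re

# Abbreviated, illustrative phrase lists only; the study's lexicons are far richer.
ABUSE_PATTERNS = {
    "physical_abuse": [r"\bhit(ting)?\b", r"\bpinch(ing)?\b", r"\bburn(ing)?\b",
                       r"\btether(ing|ed)?\b", r"\bt(ying|ied) (up|down)\b"],
    "passive_neglect": [r"\bmalnutrition\b", r"\bdehydration\b", r"\bdecubit(i|us)\b",
                        r"\bignored? (an|the) emergency\b"],
}

def detect_abuse(report_text):
    """Return the abuse categories whose patterns appear in an inspection report."""
    text = report_text.lower()
    return [cat for cat, patterns in ABUSE_PATTERNS.items()
            if any(re.search(p, text) for p in patterns)]

example = ("Staff acknowledged the resident was tied down at night and an emergency "
           "call light was ignored; signs of dehydration were documented.")
print(detect_abuse(example))   # ['physical_abuse', 'passive_neglect']
```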

Keywords: machine learning, text analytics, elder abuse, elder neglect, nursing home abuse

Procedia PDF Downloads 145
1738 Improving Fingerprinting-Based Localization System Using Generative AI

Authors: Getaneh Berie Tarekegn, Li-Chia Tai

Abstract:

With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people's lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning systems, particularly indoors, in dense urban and suburban cities enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the presence of many surrounding objects; (2) reflection within a building is highly dependent on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals cannot reach indoor environments, as GNSS signals are not strong enough to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. Due to these challenges, IoT applications are limited. Consequently, precise, seamless, and ubiquitous positioning, navigation, and timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. These applications include traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of the site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, SRCLoc improves positioning performance and significantly reduces radio map construction costs compared to traditional methods.

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 42
1737 Simulation of the Visco-Elasto-Plastic Deformation Behaviour of Short Glass Fibre Reinforced Polyphthalamides

Authors: V. Keim, J. Spachtholz, J. Hammer

Abstract:

The importance of fibre-reinforced plastics continually increases due to their excellent mechanical properties and low material and manufacturing costs combined with significant weight reduction. Today, components are usually designed and calculated numerically using finite element methods (FEM) to avoid expensive laboratory tests. These programs are based on material models that include material-specific deformation characteristics. In this research project, material models for short glass fibre reinforced plastics are presented to simulate the visco-elasto-plastic deformation behaviour. Prior to modelling, specimens of the material EMS Grivory HTV-5H1, consisting of a polyphthalamide matrix reinforced by 50 wt.-% of short glass fibres, are characterized experimentally in terms of the highly time-dependent deformation behaviour of the matrix material. To minimize the experimental effort, the cyclic deformation behaviour under tensile and compressive loading (R = −1) is characterized by isothermal complex low cycle fatigue (CLCF) tests. By combining cycles at two strain amplitudes and strain rates spanning three orders of magnitude, together with relaxation intervals, into one experiment, the visco-elastic deformation is characterized. To identify the visco-plastic deformation, monotonic tensile tests, either displacement-controlled or strain-controlled (CERT), are compared. All modelling parameters relevant to this complex superposition of simultaneously varying mechanical loadings are quantified by these experiments. Subsequently, two different material models are compared with respect to their accuracy in describing the visco-elasto-plastic deformation behaviour. First, an extended 12-parameter model (EVP-KV2) based on Chaboche is used to model cyclic visco-elasto-plasticity on two time scales. The parameters of the model, which includes a total separation of elastic and plastic deformation, are obtained by computational optimization using an evolutionary algorithm (a genetic algorithm) with a suitable fitness function. Second, the 12-parameter visco-elasto-plastic material model by Launay is used. In detail, this model contains a different type of flow function based on the definition of the visco-plastic deformation as a part of the overall deformation. The accuracy of the models is verified by corresponding experimental LCF testing.

Keywords: complex low cycle fatigue, material modelling, short glass fibre reinforced polyphthalamides, visco-elasto-plastic deformation

Procedia PDF Downloads 215
1736 Wadjda, a Film That Quietly Sets the Stage for a Cultural Revolution in Saudi Arabia

Authors: Anouar El Younssi

Abstract:

This study seeks to shed some light on the political and social ramifications and implications of Haifaa al-Mansour's 2012 film Wadjda. The film made international headlines following its release and was touted as the first film ever to be shot in its entirety inside the Kingdom of Saudi Arabia, and also the first to be directed by a woman (Haifaa al-Mansour). Wadjda revolves around a simple storyline: a teenage Saudi girl named Wadjda, living in the capital city of Riyadh, wants to have a bicycle just like her male teenage neighbor and friend Abdullah, but her ultra-conservative Saudi society places many constraints on its female population, including not allowing girls and women to ride bicycles. Wadjda, who displays a rebellious spirit, takes concrete steps to save money in order to realize her dream of buying a bicycle. For example, she starts making and selling sports bracelets to her schoolmates, and she decides to participate in a Qur'an competition in hopes of winning the sum of money that comes with the first prize. In the end, Wadjda cannot beat the system on her own, but the film reverses course, and the audience gets a happy ending: Wadjda's mother, whose husband has decided to take a second wife, defies the system and buys her daughter the very bicycle Wadjda has been dreaming of. It is quite significant that the mother takes her daughter's side on the subject of the bicycle at the end of the film, for this shows that she has finally come to the realization that she and her daughter are both oppressed by the cultural norms prevalent in Saudi society. It is no coincidence that this change of heart and action on the part of the mother takes place immediately after the wedding night celebrating her husband's second marriage. Gender inequality is thus placed front and center in the film. Nevertheless, a major finding of this study is that the film carries out its social critique in a soft and almost covert manner. The female actors in the film never issue a direct criticism of Saudi society or government; the criticism is consistently implied and subtle throughout. It is a criticism that relies more on showing than telling. The film shows us, rather than tells us directly, what is wrong, and lets us, the audience, decide and make a judgment. In fact, showing could arguably be more powerful and impactful than telling. Regarding methodology, this study focuses on and analyzes the visuals and a number of key utterances by the main character Wadjda in order to corroborate the study's argument about the film's bent toward critiquing patriarchy. This research attempts to establish a link between the film as an art object and as a social text. Ultimately, Wadjda sends a message of hope: that change is possible and that it is already happening slowly inside the Kingdom. It also sends the message that an insurrectional approach regarding women's rights in Saudi Arabia is perhaps not the right one, at least at this historical juncture.

Keywords: bicycle, gender inequality, social critique, Wadjda, women’s rights

Procedia PDF Downloads 128
1735 Mixed Integer Programming-Based One-Class Classification Method for Process Monitoring

Authors: Younghoon Kim, Seoung Bum Kim

Abstract:

One-class classification plays an important role in detecting outliers and abnormalities among normal observations. In previous research, several attempts were made to extend the scope of application of one-class classification techniques to statistical process control problems. For most previous approaches, such as the support vector data description (SVDD) control chart, the design of the control limits is commonly based on the assumption that the proportion of abnormal observations is approximately equal to an expected Type I error rate in the Phase I process. Because of the limitation of one-class classification techniques based on convex optimization, the proportion of abnormal observations cannot be made exactly equal to the expected Type I error rate: controlling the Type I error rate requires optimizing constraints with integer decision variables, which convex optimization cannot satisfy. This limitation is undesirable, from both a theoretical and a practical perspective, for constructing effective control charts. In this work, to address the limitation of previous approaches, we propose a one-class classification algorithm based on mixed integer programming, which can solve problems formulated with continuous and integer decision variables. The proposed method minimizes the radius of a spherically shaped boundary subject to the constraint that the number of normal data points equals a constant value specified by the user. By modifying this constant value, users can exactly control the proportion of normal data described by the spherically shaped boundary. Thus, the proportion of abnormal observations can be made theoretically equal to an expected Type I error rate in the Phase I process. Moreover, analogously to SVDD, the boundary can be made to describe complex structures by using kernel functions. A new multivariate control chart applying the effectiveness of the algorithm is proposed. This chart uses a monitoring statistic to characterize the degree to which a point is abnormal, as obtained through the proposed one-class classification. The control limit of the proposed chart is established by the radius of the boundary. The usefulness of the proposed method was demonstrated through experiments with simulated data and real process data from a thin-film transistor liquid-crystal display process.
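
A hedged sketch of the kind of mixed-integer formulation the abstract describes, written out for a spherical boundary that covers exactly k of the n Phase I observations, is given below; the big-M constraint form is a common simplification and may differ from the authors' exact model.

```latex
% Illustrative mixed-integer formulation of a spherical one-class boundary covering
% exactly k of the n Phase I observations x_i (a simplification of the authors' model):
\begin{aligned}
\min_{R,\;a,\;z}\quad & R^{2} \\
\text{s.t.}\quad & \lVert x_i - a \rVert^{2} \;\le\; R^{2} + M z_i, && i = 1,\dots,n,\\
& \textstyle\sum_{i=1}^{n} (1 - z_i) \;=\; k, \\
& z_i \in \{0,1\}, && i = 1,\dots,n,
\end{aligned}
```

Here z_i = 1 marks observation i as left outside the boundary and M is a sufficiently large constant; choosing k as the integer closest to (1 - alpha) n makes the proportion of excluded points equal to the target Type I error rate alpha.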

Keywords: control chart, mixed integer programming, one-class classification, support vector data description

Procedia PDF Downloads 174
1734 A Machine Learning Decision Support Framework for Industrial Engineering Purposes

Authors: Anli Du Preez, James Bekker

Abstract:

Data is currently one of the most critical and influential emerging technologies. However, the true potential of data is yet to be exploited, since currently only about 1% of generated data is ever actually analyzed for value creation. There is a data gap where data is not explored due to the lack of data analytics infrastructure and the required data analytics skills. This study developed a decision support framework for data analytics by following Jabareen's framework development methodology. The study focused on machine learning algorithms, which are a subset of data analytics. The developed framework is designed to assist data analysts with little experience in choosing the appropriate machine learning algorithm, given the purpose of their application.

Keywords: data analytics, industrial engineering, machine learning, value creation

Procedia PDF Downloads 168