Search results for: human detection and identification
12297 Focus-Latent Dirichlet Allocation for Aspect-Level Opinion Mining
Authors: Mohsen Farhadloo, Majid Farhadloo
Abstract:
Aspect-level opinion mining, which aims at discovering aspects (aspect identification) and their corresponding ratings (sentiment identification) from customer reviews, has increasingly attracted the attention of researchers and practitioners, as it provides valuable insights about products/services from the customers' point of view. Instead of addressing aspect identification and sentiment identification in two separate steps, it is possible to identify both simultaneously. In recent years many graphical models based on Latent Dirichlet Allocation (LDA) have been proposed to solve both aspect and sentiment identification in a single step. Although LDA models have been effective tools for the statistical analysis of document collections, they also have shortcomings in addressing some unique characteristics of opinion mining. Our goal in this paper is to address one of the limitations of topic models to date: they fail to directly model the associations among topics. Indeed, in many text corpora it is natural to expect that subsets of the latent topics have higher probabilities. We propose a probabilistic graphical model called focus-LDA to better capture the associations among topics when applied to aspect-level opinion mining. Our experiments on real-life data sets demonstrate the improved effectiveness of the focus-LDA model in terms of the accuracy of the predictive distributions over held-out documents. Furthermore, we demonstrate qualitatively that the focus-LDA topic model provides a natural way of visualizing and exploring unstructured collections of textual data.
Keywords: aspect-level opinion mining, document modeling, Latent Dirichlet Allocation, LDA, sentiment analysis
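Focus-LDA itself is not publicly available, but the plain LDA baseline it extends is easy to reproduce. A minimal sketch with gensim on a toy review corpus (the corpus, topic count, and preprocessing are illustrative assumptions, not the paper's setup):

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# Toy customer-review corpus (illustrative only)
reviews = [
    "the battery life is great but the screen is dim",
    "screen quality is excellent and very bright",
    "battery drains fast and charging is slow",
    "customer service was helpful and friendly",
    "terrible service they never answered my emails",
]
texts = [r.split() for r in reviews]

dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]       # bag-of-words vectors

# Plain LDA baseline; focus-LDA additionally models which subsets of
# topics tend to co-occur ("associations among topics").
lda = LdaModel(corpus, id2word=dictionary, num_topics=3, passes=50, random_state=0)
for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)

# Predictive quality over held-out documents is usually compared via a
# per-word likelihood bound (related to perplexity):
print("per-word bound:", lda.log_perplexity(corpus))
```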
12296 Optimization of Hate Speech and Abusive Language Detection on Indonesian-language Twitter using Genetic Algorithms
Authors: Rikson Gultom
Abstract:
Hate speech and abusive language on social media are difficult to detect; usually they are detected only after going viral in cyberspace, which is too late for prevention. An early detection system with fairly good accuracy is needed so that it can reduce conflicts in society caused by postings on social media that attack individuals, groups, and governments in Indonesia. The purpose of this study is to find an early detection model for the Twitter social media platform, using the machine learning method with the highest accuracy among several methods studied. In this study, the support vector machine (SVM), Naïve Bayes (NB), and Random Forest Decision Tree (RFDT) methods were compared with the Support Vector Machine with genetic algorithm (SVM-GA), Naïve Bayes with genetic algorithm (NB-GA), and Random Forest Decision Tree with genetic algorithm (RFDT-GA). The study produced a comparison table for the accuracy of the hate speech and abusive language detection models, presented it in the form of a graph of the accuracy of the six algorithms developed on the Indonesian-language Twitter dataset, and concluded with the best model with the highest accuracy.
Keywords: abusive language, hate speech, machine learning, optimization, social media
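One common way to pair a genetic algorithm with a classifier, as the SVM-GA variant does, is to evolve the classifier's hyperparameters against cross-validated accuracy. A minimal hand-rolled sketch with scikit-learn (synthetic features stand in for the paper's Indonesian-language Twitter data; population size, mutation scale, and search ranges are assumptions):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in for TF-IDF features of labelled tweets
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

def fitness(ind):
    C, gamma = ind
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

# Initial population: log-uniform samples of (C, gamma)
pop = [(10 ** rng.uniform(-2, 3), 10 ** rng.uniform(-4, 1)) for _ in range(16)]

for generation in range(8):
    pop = sorted(pop, key=fitness, reverse=True)
    parents = pop[:8]                               # selection (elitism)
    children = []
    for _ in range(8):                              # crossover + mutation
        i, j = rng.choice(len(parents), size=2, replace=False)
        child = [parents[i][0], parents[j][1]]      # swap genes between parents
        child[0] *= 10 ** rng.normal(0.0, 0.2)      # multiplicative mutation
        child[1] *= 10 ** rng.normal(0.0, 0.2)
        children.append(tuple(child))
    pop = parents + children

best = max(pop, key=fitness)
print("best (C, gamma):", best, "CV accuracy:", round(fitness(best), 3))
```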
12295 A Resource Based View: Perspective on Acquired Human Resource towards Competitive Advantage
Authors: Monia Hassan Abdulrahman
Abstract:
The resource-based view is built on many theories and diverse perspectives. We extend this view, placing emphasis on human resources and addressing the tools required to sustain competitive advantage. Drawing on several theories and judgments, assumptions were established to determine whether resource possession alone suffices for the sustainability of competitive advantage, or whether further accommodations are required for better performance. New practices were indicated in terms of the resources used in firms; these practices were applied to human resources in particular, and results were developed in accordance with the stated assumptions. Such results drew attention to the significance of practices that enhance the human resources that have a core responsibility for maintaining the resource-based view in an organization, leading the way to gaining competitive advantage.
Keywords: competitive advantage, resource-based view, human resources, strategic management
12294 The Journey of a Malicious HTTP Request
Authors: M. Mansouri, P. Jaklitsch, E. Teiniker
Abstract:
SQL injection on web applications is a very popular kind of attack. There are mechanisms such as intrusion detection systems to detect this attack. These strategies often rely on techniques implemented at the higher layers of the application but do not consider the low level of system calls. The problem with considering only the high-level perspective is that an attacker can circumvent the detection tools using techniques such as URL encoding. One technique currently used for detecting low-level attacks on privileged processes is the tracing of system calls. System calls act as a single gate to the Operating System (OS) kernel; they allow catching the critical data at an appropriate level of detail. Our basic assumption is that any type of application, be it a system service, utility program or web application, "speaks" the language of system calls when having a conversation with the OS kernel. At this level we can see the actual attack while it is happening. We conduct an experiment to demonstrate the suitability of system call analysis for detecting SQL injection, and we are able to detect the attack. Therefore we conclude that system calls are not only powerful in detecting low-level attacks but also enable us to detect high-level attacks such as SQL injection.
Keywords: Linux system calls, web attack detection, interception, SQL
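The paper's trace analysis is more involved, but the data-collection side can be reproduced with standard tools: trace a server process with strace and scan the payload-carrying syscalls for injection signatures. A simplified sketch (the signature list and trace parsing are assumptions; the strace invocation in the comment is standard usage):

```python
import re

# Collect a trace first, e.g.:
#   strace -f -e trace=read,recvfrom -s 4096 -o web.trace <server command>
# Request payloads then appear as string arguments of read()/recvfrom() lines.

SQLI = re.compile(r"(union\s+select|or\s+1\s*=\s*1|';\s*--|sleep\s*\()", re.IGNORECASE)

def scan_trace(trace_path):
    """Flag syscall lines whose buffer contents look like SQL injection."""
    hits = []
    with open(trace_path) as f:
        for lineno, line in enumerate(f, start=1):
            call = line.split("(", 1)[0].rsplit(" ", 1)[-1]  # strip pid prefix
            if call in ("read", "recv", "recvfrom") and SQLI.search(line):
                hits.append((lineno, line.strip()))
    return hits

if __name__ == "__main__":
    for lineno, line in scan_trace("web.trace"):
        print(f"possible injection at trace line {lineno}: {line[:120]}")
```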
12293 Credit Card Fraud Detection with Ensemble Model: A Meta-Heuristic Approach
Authors: Gong Zhilin, Jing Yang, Jian Yin
Abstract:
The purpose of this paper is to develop a novel system for credit card fraud detection based on sequential modeling of data using hybrid deep learning models. The projected model encapsulates five major phases: pre-processing, imbalanced-data handling, feature extraction, optimal feature selection, and fraud detection with an ensemble classifier. The collected raw data (input) is pre-processed to enhance its quality through alleviation of missing data, noisy data and null values. The pre-processed data are class-imbalanced in nature, and therefore they are handled effectively with the K-means clustering-based SMOTE model. From the balanced class data, the most relevant features are extracted: improved Principal Component Analysis (PCA) features, statistical features (mean, median, standard deviation) and higher-order statistical features (skewness and kurtosis). Among the extracted features, the most optimal features are selected with the Self-improved Arithmetic Optimization Algorithm (SI-AOA), a conceptual improvement of the standard Arithmetic Optimization Algorithm. The detection stage uses deep learning models: Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and an optimized Quantum Deep Neural Network (QDNN). The LSTM and CNN are trained with the selected optimal features, and their outcomes enter as input to the optimized QDNN, which provides the final detection outcome. Since the QDNN is the ultimate detector, its weight function is fine-tuned with the SI-AOA.
Keywords: credit card, data mining, fraud detection, money transactions
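The imbalance-handling and feature-extraction steps can be sketched with standard libraries: imbalanced-learn ships a K-means-based SMOTE, and the statistical features are one-liners. A minimal sketch (synthetic data stands in for real transactions; the cluster count and balance threshold are assumptions, and the SI-AOA selection and QDNN ensemble are not reproduced here):

```python
import numpy as np
from scipy import stats
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from imblearn.over_sampling import KMeansSMOTE

# Synthetic stand-in for transaction data: 5% fraud
X, y = make_classification(n_samples=2000, n_features=30, weights=[0.95, 0.05],
                           random_state=0)

# K-means clustering-based SMOTE: oversample the minority class
# inside minority-dense clusters only.
X_bal, y_bal = KMeansSMOTE(kmeans_estimator=8, cluster_balance_threshold=0.1,
                           random_state=0).fit_resample(X, y)

# Feature extraction: PCA components plus (higher-order) statistics per sample
pca_feats = PCA(n_components=10, random_state=0).fit_transform(X_bal)
stat_feats = np.column_stack([
    X_bal.mean(axis=1), np.median(X_bal, axis=1), X_bal.std(axis=1),
    stats.skew(X_bal, axis=1), stats.kurtosis(X_bal, axis=1),
])
features = np.hstack([pca_feats, stat_feats])
print(features.shape, "balanced classes:", np.bincount(y_bal))
```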
12292 Bioaccumulation and Forensic Relevance of Gunshot Residue in Forensically Relevant Blowflies
Authors: Michaela Storen, Michelle Harvey, Xavier Conlan
Abstract:
Gun violence internationally is increasing at an unprecedented level, becoming a favoured means for executing violence against another individual. Not only is this putting a strain on forensic scientists who attempt to determine the cause of death in circumstances where firearms have been involved, but it also highlights the need for an alternative technique for identifying a gunshot wound when other established techniques have been exhausted. A corpse may be colonized by necrophagous insects following death, and this close association between the time of death and insect colonization makes entomological samples valuable evidence when remains become decomposed beyond toxicological utility. Entomotoxicology provides the potential for the identification of toxins in a decomposing corpse, with recent research uncovering its capability to detect gunshot residue (GSR). However, shortcomings of the limited literature on this topic have not been addressed: bioaccumulation, detection limits, and sensitivity to gunshots have not been considered thus far, leaving questions as to the applicability of this new technique in the forensic context. Larvae were placed on meat contaminated with GSR at different concentrations and compared to a control meat sample to establish the uptake of GSR by the larvae, with bioaccumulation established by placing the larvae on fresh, uncontaminated meat for a period of time before analysis using ICP-MS. The findings on Pb, Ba, and Sb at each stage of the lifecycle, and on bioaccumulation in the larvae, will be presented. In addition, throughout the experiments mentioned above, larvae were washed once, twice and three times to evaluate the effectiveness of existing entomological practices in removing external toxins from specimens prior to entomotoxicological analysis. Analysis of these larval washes will be presented. By addressing these points, this research extends the utility of entomotoxicology in cause-of-death investigations and provides an additional source of evidence for forensic scientists in circumstances involving a gunshot wound on a corpse, in addition to advising on the effectiveness of current entomology collection protocols.
Keywords: bioaccumulation, chemistry, entomology, gunshot residue, toxicology
12291 Analysis of the Unmanned Aerial Vehicles’ Incidents and Accidents: The Role of Human Factors
Authors: Jacob J. Shila, Xiaoyu O. Wu
Abstract:
As the applications of unmanned aerial vehicles (UAV) continue to increase across the world, it is critical to understand the factors that contribute to incidents and accidents associated with these systems. Given the variety of daily applications that could utilize the operations of the UAV (e.g., medical, security operations, construction activities, landscape activities), the main discussion has been how to safely incorporate the UAV into the national airspace system. The types of UAV incidents being reported range from near sightings by other pilots to actual collisions with aircraft or UAV. These incidents have the potential to impact the rest of aviation operations in a variety of ways, including human lives, liability costs, and delay costs. One of the largest causes of these incidents cited is the human factor; other causes cited include maintenance, aircraft, and others. This work investigates the key human factors associated with UAV incidents. To that end, the data related to UAV incidents that have occurred in the United States is both reviewed and analyzed to identify key human factors related to UAV incidents. The data utilized in this work is gathered from the Federal Aviation Administration (FAA) drone database. This study adopts the human factor analysis and classification system (HFACS) to identify key human factors that have contributed to some of the UAV failures to date. The uniqueness of this work is the incorporation of UAV incident data from a variety of applications and not just military data. In addition, identifying the specific human factors is crucial towards developing safety operational models and human factor guidelines for the UAV. The findings of these common human factors are also compared to similar studies in other countries to determine whether these factors are common internationally.
Keywords: human factors, incidents and accidents, safety, UAS, UAV
12290 Fault Detection and Isolation of a Three-Tank System using Analytical Temporal Redundancy, Parity Space/Relation Based Residual Generation
Authors: A. T. Kuda, J. J. Dayya, A. Jimoh
Abstract:
This paper investigates a fault detection and isolation technique for measurement data sets from a three-tank system, using analytical model-based temporal redundancy based on residual generation with the parity equations/space approach. It further briefly outlines other approaches to model-based residual generation. The basic idea of parity-space residual generation in temporal redundancy is the dynamic relationship between sensor outputs and actuator inputs (the input-output model). These residuals were then used to detect whether or not the system is faulty and to indicate the location of the fault when one occurs. The method obtains good results, detecting and isolating faults from the considered data sets of measurements generated from the system.
Keywords: fault detection, fault isolation, disturbing influences, system failure, parity equation/relation, structured parity equations
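The input-output idea can be illustrated on a single tank (a reduced stand-in for the three-tank rig): predict the next level from the previous measurement and the actuator input, and treat the prediction error as the parity residual. A minimal sketch with assumed tank parameters, noise level, and fault size:

```python
import numpy as np

# Discrete single-tank model: h[k+1] = h[k] + Ts/A * (q_in - c*sqrt(h[k]))
Ts, A, c = 1.0, 0.015, 1.0e-4        # sample time, tank area, outflow coeff (assumed)

def step(h, q_in):
    return h + Ts / A * (q_in - c * np.sqrt(max(h, 0.0)))

rng = np.random.default_rng(1)
N, sigma, q = 200, 5e-4, 1.2e-4
h_true = 0.1
h_meas = np.zeros(N)
for k in range(N):
    h_meas[k] = h_true + rng.normal(0.0, sigma)     # noisy level sensor
    h_true = step(h_true, q)
    if k >= 120:
        h_true -= 0.005                              # leak fault from sample 120 on

# Temporal-redundancy (parity) residual: measurement minus one-step prediction
r = np.zeros(N)
for k in range(1, N):
    r[k] = h_meas[k] - step(h_meas[k - 1], q)

fault = np.abs(r) > 6 * sigma                        # simple threshold test
print("first fault alarm at sample:", int(np.argmax(fault)))
```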
12289 Parameters Identification of Granular Soils around PMT Test by Inverse Analysis
Authors: Younes Abed
Abstract:
The successful application of in-situ testing of soils heavily depends on the development of methods for interpreting the tests. The pressuremeter test simulates the expansion of a cylindrical cavity, and because it has well-defined boundary conditions, it is more amenable to rigorous theoretical analysis (i.e., cavity expansion theory) than most other in-situ tests. In this article, in order to make the identification process more convenient, we propose a relatively simple procedure for the numerical identification of some mechanical parameters of a granular soil, especially the elastic modulus and the friction angle, from a pressuremeter curve. The procedure, applied here to identify the parameters of the generalised Prager model associated with the Drucker-Prager criterion from a pressuremeter curve, is based on an inverse analysis approach, which consists of minimizing the function representing the difference between the experimental curve and the curve obtained by integrating the model along the loading path of the in-situ test. The numerical process implemented here is based on an established finite element program. We present a validation of the proposed approach using a database of cylindrical cavity expansion tests. This database consists of four types of tests: thick-cylinder tests carried out on Hostun RF sand, pressuremeter tests carried out on Hostun sand, in-situ pressuremeter tests carried out at the site of Fos with a marine self-boring pressuremeter, and in-situ pressuremeter tests carried out at the site of Labenne with a Menard pressuremeter.
Keywords: granular soils, cavity expansion, pressuremeter test, finite element method, identification procedure
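The inverse-analysis loop itself is generic: wrap the forward model in a misfit function and minimize it. A minimal sketch with SciPy, where a closed-form curve stands in for the paper's finite element forward model and all parameter values are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical forward model of a pressuremeter curve: cavity pressure vs
# cavity strain, parameterized by an elastic modulus E (kPa) and a friction
# angle phi (degrees).  In the paper the forward model is a finite element
# integration of the generalised Prager model; this closed form is a stand-in.
def model_curve(params, strain):
    E, phi = params
    p_lim = 100.0 * np.tan(np.radians(phi))           # pseudo limit pressure
    return p_lim * (1.0 - np.exp(-E * strain / p_lim))

strain = np.linspace(0.0, 0.10, 30)
rng = np.random.default_rng(0)
p_exp = model_curve((45e3, 35.0), strain) + rng.normal(0.0, 1.0, strain.size)

def misfit(params):                                   # inverse-analysis objective
    return np.sum((p_exp - model_curve(params, strain)) ** 2)

res = minimize(misfit, x0=(20e3, 25.0), method="Nelder-Mead")
print("identified E (kPa), phi (deg):", res.x)
```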
12288 An Improved Convolution Deep Learning Model for Predicting Trip Mode Scheduling
Authors: Amin Nezarat, Naeime Seifadini
Abstract:
Trip mode selection is a behavioral characteristic of passengers with immense importance for travel demand analysis, transportation planning, and traffic management. Identification of the trip mode distribution will allow transportation authorities to adopt appropriate strategies to reduce travel time, traffic and air pollution. The majority of existing trip mode inference models operate based on human-selected features and traditional machine learning algorithms. However, human-selected features are sensitive to changes in traffic and environmental conditions and susceptible to personal biases, which can make them inefficient. One way to overcome these problems is to use neural networks capable of extracting high-level features from raw input. In this study, the convolutional neural network (CNN) architecture is used to predict the trip mode distribution based on raw GPS trajectory data. The key innovation of this paper is the design of the layout of the CNN's input layer, as well as the normalization operation, in a way that is not only compatible with the CNN architecture but can also represent the fundamental features of motion, including speed, acceleration, jerk, and bearing rate. The highest prediction accuracy achieved with the proposed configuration for the convolutional neural network with batch normalization is 85.26%.
Keywords: predicting, deep learning, neural network, urban trip
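The four motion channels named here can be derived from raw GPS fixes with standard geodesy formulas. A minimal NumPy sketch (haversine distance and initial bearing; the sample track is invented):

```python
import numpy as np

R = 6_371_000.0  # mean Earth radius in metres

def haversine(lat1, lon1, lat2, lon2):
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dl = np.radians(lon2 - lon1)
    a = np.sin((p2 - p1) / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
    return 2 * R * np.arcsin(np.sqrt(a))

def initial_bearing(lat1, lon1, lat2, lon2):
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dl = np.radians(lon2 - lon1)
    x = np.sin(dl) * np.cos(p2)
    y = np.cos(p1) * np.sin(p2) - np.sin(p1) * np.cos(p2) * np.cos(dl)
    return np.degrees(np.arctan2(x, y))

# Invented sample track: (lat, lon) fixes at times t (seconds)
lat = np.array([45.0000, 45.0003, 45.0007, 45.0012, 45.0018])
lon = np.array([7.0000, 7.0004, 7.0009, 7.0015, 7.0022])
t = np.array([0.0, 10.0, 20.0, 30.0, 40.0])

dt = np.diff(t)
speed = haversine(lat[:-1], lon[:-1], lat[1:], lon[1:]) / dt   # m/s
accel = np.diff(speed) / dt[1:]                                # m/s^2
jerk = np.diff(accel) / dt[2:]                                 # m/s^3
brg = initial_bearing(lat[:-1], lon[:-1], lat[1:], lon[1:])
bearing_rate = np.abs(np.diff(brg)) / dt[1:]                   # deg/s
print(speed, accel, jerk, bearing_rate)
```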
12287 Encoded Nanospheres for the Fast Ratiometric Detection of Cystic Fibrosis
Authors: Iván Castelló, Georgiana Stoica, Emilio Palomares, Fernando Bravo
Abstract:
We present herein two-colour encoded silica nanospheres (2nanoSi) for the quantitative ratiometric fluorescence determination of trypsin in humans. The system proved to be a faster (minutes) method, with twice the sensitivity of state-of-the-art biomarker-based sensors for cystic fibrosis (CF), allowing the quantification of trypsin concentrations over a wide range (0-350 mg/L). Furthermore, as trypsin is directly related to the development of cystic fibrosis, different human genotypes, i.e. healthy homozygotic (> 80 mg/L), CF homozygotic (< 50 mg/L), and heterozygotic (> 50 mg/L), respectively, can be determined using our 2nanoSi nanospheres.
Keywords: cystic fibrosis, trypsin, quantum dots, biomarker, homozygote, heterozygote
12286 Human Lens Metabolome: A Combined LC-MS and NMR Study
Authors: Vadim V. Yanshole, Lyudmila V. Yanshole, Alexey S. Kiryutin, Timofey D. Verkhovod, Yuri P. Tsentalovich
Abstract:
Cataract, or clouding of the eye lens, is the leading cause of vision impairment in the world. The lens tissue has a very specific structure: it does not have a vascular system, and the lens proteins (crystallins) do not turn over throughout the lifespan. The protection of lens proteins is provided by metabolites that diffuse into the lens from the aqueous humor or are synthesized in the lens epithelial layer. Therefore, the study of changes in the metabolite composition of a cataractous lens as compared to a normal lens may elucidate the possible mechanisms of cataract formation. Quantitative metabolomic profiles of normal and cataractous human lenses were obtained with the combined use of high-frequency nuclear magnetic resonance (NMR) and ion-pairing high-performance liquid chromatography with high-resolution mass-spectrometric detection (LC-MS). The quantitative content of more than fifty metabolites has been determined in this work for normal aged and cataractous human lenses. The most abundant metabolites in the normal lens are myo-inositol, lactate, creatine, glutathione, glutamate, and glucose. For the majority of metabolites, their levels in the lens cortex and nucleus are similar, with a few exceptions, including antioxidants and UV filters: the concentrations of glutathione, ascorbate and NAD decrease in the lens nucleus as compared to the cortex, while the levels of the secondary UV filters formed from primary UV filters in redox processes increase. This confirms that the lens core is metabolically inert and that the metabolic activity in the lens nucleus is mostly restricted to protection from the oxidative stress caused by UV irradiation, spontaneous UV filter decomposition, or other factors. It was found that the metabolomic compositions of normal and age-matched cataractous human lenses differ significantly. The content of the most important metabolites (antioxidants, UV filters, and osmolytes) in the cataractous nucleus is at least tenfold lower than in the normal nucleus. One may suppose that the majority of these metabolites are synthesized in the lens epithelial layer and that age-related cataractogenesis might originate from the dysfunction of the lens epithelial cells. Comprehensive quantitative metabolic profiles of the human eye lens have been acquired for the first time. The obtained data can be used for the analysis of changes in the lens chemical composition occurring with age and with cataract development.
Keywords: cataract, lens, NMR, LC-MS, metabolome
12285 Comparative Analysis of Feature Extraction and Classification Techniques
Authors: R. L. Ujjwal, Abhishek Jain
Abstract:
In the field of computer vision, most facial variations such as identity, expression, emotion and gender have been extensively studied. Automatic age estimation has been rarely explored. With the age progression of a human, the features of the face change. This paper provides a new comparative study of different algorithms for feature extraction (hybrid features using a HAAR cascade and HOG features) and classification (KNN and SVM) on a training dataset. By using these algorithms we try to find the best-performing classification algorithm, and we do the same on the feature extraction side, where features are extracted using a HAAR cascade and HOG. This work is carried out in the context of an age-group classification model.
Keywords: computer vision, age group, face detection
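The hybrid pipeline described here, HAAR-cascade face localization followed by HOG descriptors on the face crop and a KNN or SVM classifier, can be assembled from OpenCV and scikit-learn. A minimal sketch (the image paths and age-group labels are placeholders; the 64x128 crop size simply matches OpenCV's default HOG window):

```python
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
hog = cv2.HOGDescriptor()  # default 64x128 detection window

def face_hog_features(image_path):
    """HAAR-detect the face, then describe the crop with HOG."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    crop = cv2.resize(gray[y:y + h, x:x + w], (64, 128))
    return hog.compute(crop).ravel()

# Placeholder dataset: image paths and age-group labels (0=child, 1=adult, 2=senior)
paths, labels = ["p1.jpg", "p2.jpg"], [0, 1]
feats = [f for f in map(face_hog_features, paths) if f is not None]
X, y = np.array(feats), np.array(labels[:len(feats)])

# Compare the two classifiers on held-out data in practice
for clf in (KNeighborsClassifier(n_neighbors=1), SVC(kernel="rbf")):
    clf.fit(X, y)
```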
12284 Silicon-Photonic-Sensor System for Botulinum Toxin Detection in Water
Authors: Binh T. T. Nguyen, Zhenyu Li, Eric Yap, Yi Zhang, Ai-Qun Liu
Abstract:
Silicon photonic sensor systems are an emerging class of analytical technologies that use the evanescent field to sensitively measure slight differences in the surrounding environment. The wavelength shift induced by a local refractive index change is used as the indicator in the system. These devices can serve as sensors for a wide variety of chemical or biomolecular detection tasks in clinical and environmental fields. In our study, a system including a silicon-based micro-ring resonator, a microfluidic channel, and optical processing is designed and fabricated for biomolecule detection. The system is demonstrated to detect Clostridium botulinum type A neurotoxin (BoNT) in different water sources. BoNT is one of the most toxic substances known and is relatively easily obtained from a cultured bacterial source. The toxin is extremely lethal, with an LD50 of about 0.1 µg/70 kg intravenously, 1 µg/70 kg by inhalation, and 70 µg/kg orally. These factors make botulinum neurotoxins primary candidates as bioterrorism or biothreat agents. A sensing system that can detect BoNT quickly, with high sensitivity and automatically is therefore required. For BoNT detection, the silicon-based micro-ring resonator is modified with a linker for the immobilization of the anti-botulinum capture antibody. An enzymatic reaction is employed to amplify the signal and hence gain sensitivity. As a result, a detection limit of 30 pg/mL is achieved by our silicon photonic sensor within a short period of 80 min. The sensor also shows high specificity versus other botulinum types. In the future, by designing a multifunctional waveguide array with a fully automatic control system, it will be simple to simultaneously detect multiple biomolecules at low concentrations within a short period. The system has great potential for online, real-time and highly sensitive label-free biomolecular detection.
Keywords: biotoxin, photonic, ring resonator, sensor
12283 Unidentified Remains with Extensive Bone Disease without a Clear Diagnosis
Authors: Patricia Shirley Almeida Prado, Selma Paixão Argollo, Maria De Fátima Teixeira Guimarães, Leticia Matos Sobrinho
Abstract:
Skeletal differential diagnosis is essential in forensic anthropology in order to differentiate skeletal trauma from normal osseous variation and pathological processes. Thus, part of the forensic anthropological task is to differentiate criminal skeletal injuries from normal skeletal variation (bone fusion or nonunion, transitional vertebrae and other non-metric traits), and non-traumatic skeletal pathology (myositis ossificans, arthritis, bone metastasis, osteomyelitis) from traumatic skeletal pathology (myositis ossificans traumatica), avoiding misdiagnosis. This case shows the importance of effective pathological diagnosis in order to accelerate the identification of skeletonized human remains. THE CASE: Unidentified skeletal remains at the medico-legal institute Nina Rodrigues-Salvador, of a young adult male (estimated 29 to 40 years), show a massive heterotopic ossification of the right tibia at the upper epiphysis and the adjacent femoral articular surface; an extensive ossification of the right clavicle (at the sternal extremity); and heterotopic ossification of the right scapula (upper third of the lateral margin and infraglenoid tubercle) and of the head of the right humerus at the shoulder joint. Curiously, this case also shows unusual porosity in certain vertebral bodies and in some tarsal and carpal bones. Likewise, the fifth metacarpal bones (right and left) show healed fractures that have left both bones distorted. Based on the literature and protocols for identifying pathological conditions in human skeletal remains, these alterations can be misdiagnosed, and this skeleton may present more than one pathological process. The anthropological forensic lab at the Medico-legal Institute Nina Rodrigues in Salvador (Brazil) adopts international protocols for ancestry, sex, age and stature estimation, and has also implemented well-established conventions to identify pathological disease and skeletal alterations. The diagnosis most compatible with this case is hematogenous osteomyelitis, based on the following findings: 1: the healed fracture pattern of the clavicle shows a cloaca, which is pathognomonic for osteomyelitis; 2: the healed metacarpal fractures do not present a cloaca, although they developed a periosteal formation; 3: the superior articular surface of the right tibia shows an extensive inflammatory healing process that extends to the adjacent femoral articular surface, with several cloacae in the diseased tibia; 4: the uncommon porosities may result from a hematogenous infectious process. The fractures probably occurred at different times, based on the healing process; the tibial injury is more extensive and has not been remodeled, while the metacarpal and clavicle fractures are properly healed. We suggest that the clavicle and tibia fractures were infected by an existing infectious disease (syphilis, tuberculosis, brucellosis) or an existing syndrome (Gorham's disease), which led to the development of osteomyelitis. This hypothesis is supported by the fact that different bones are affected to different degrees. The metacarpals, for example, show no cloaca but do show periosteal new bone formation, while the unusual porosities lack classical osteoarthritic findings such as marginal osteophytes, pitting and new bone formation; they show an erosive process without bone formation or osteophytes. To confirm and prove our hypothesis, we are working on different clinical approaches, such as DNA, histopathology and other imaging examinations, to reach the correct diagnosis.
Keywords: bone disease, forensic anthropology, hematogenous osteomyelitis, human identification, human remains
12282 A Spatio-Temporal Analysis and Change Detection of Wetlands in Diamond Harbour, West Bengal, India Using Normalized Difference Water Index
Authors: Lopita Pal, Suresh V. Madha
Abstract:
Wetlands are areas of marsh, fen, peatland or water, whether natural or artificial, permanent or temporary, with water that is static or flowing, fresh, brackish or salt, including areas of marine water the depth of which at low tide does not exceed six metres. The rapidly expanding human population, large-scale changes in land use/land cover, burgeoning development projects and improper use of watersheds have all caused a substantial decline of wetland resources in the world. Major degradation has resulted from agricultural, industrial and urban development, leading to various types of pollution and hydrological perturbation. Regular fishing activities and unsustainable grazing of animals are degrading the wetlands at a slow pace. The paper focuses on the spatio-temporal change detection of the area of the water body and the main cause of this depletion. The total area under study (22°19’87’’ N, 88°20’23’’ E) is a wetland region in West Bengal of 213 sq. km. The procedure used is the Normalized Difference Water Index (NDWI), derived from Landsat multi-spectral imagery to detect the presence of surface water, and datasets for the years 2016, 2006 and 1996 have been compared. The result shows a sharp decline in the area of the water body due to a rapid increase in agricultural practices and growing urbanization.
Keywords: spatio-temporal change, NDWI, urbanization, wetland
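NDWI (McFeeters) is computed per pixel as (Green − NIR) / (Green + NIR), with values above roughly zero conventionally treated as open water. A minimal NumPy sketch of the index and a two-date change map (band arrays would come from the co-registered Landsat scenes; the zero threshold is a common but adjustable assumption):

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI = (Green - NIR) / (Green + NIR), safe against /0."""
    green = green.astype(np.float64)
    nir = nir.astype(np.float64)
    return (green - nir) / np.maximum(green + nir, 1e-9)

def water_mask(green, nir, threshold=0.0):
    return ndwi(green, nir) > threshold

# green_1996, nir_1996, green_2016, nir_2016: 2-D band arrays read from the
# co-registered Landsat scenes (e.g., with rasterio).  Random stand-ins here:
green_1996, nir_1996 = np.random.rand(4, 4), np.random.rand(4, 4)
green_2016, nir_2016 = np.random.rand(4, 4), np.random.rand(4, 4)

w96 = water_mask(green_1996, nir_1996)
w16 = water_mask(green_2016, nir_2016)
lost = w96 & ~w16      # water pixels that dried up or were converted
gained = ~w96 & w16    # newly inundated pixels
print("water area change (pixels):", int(w16.sum()) - int(w96.sum()))
```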
12281 Diversity Indices as a Tool for Evaluating Quality of Water Ways
Authors: Khadra Ahmed, Khaled Kheireldin
Abstract:
In this paper, we present a pedestrian detection descriptor called Fused Structure and Texture (FST) features, based on the combination of local phase information with texture features. Since the phase of a signal conveys more structural information than the magnitude, the phase congruency concept is used to capture the structural features. On the other hand, the Center-Symmetric Local Binary Pattern (CSLBP) approach is used to capture the texture information of the image. The dimensionless quantity of the phase congruency and the robustness of the CSLBP operator on flat images, as well as to blur and illumination changes, make the proposed descriptor more robust and less sensitive to light variations. The proposed descriptor is formed by extracting the phase congruency and the CSLBP value of each pixel of the image with respect to its neighborhood. The histogram of the oriented phase and the histogram of the CSLBP values for the local regions in the image are computed and concatenated to construct the FST descriptor. Several experiments were conducted on the INRIA and the low-resolution DaimlerChrysler datasets to evaluate the detection performance of the pedestrian detection system based on the FST descriptor. A linear Support Vector Machine (SVM) is used to train the pedestrian classifier. These experiments showed that the proposed FST descriptor has better detection performance than a set of state-of-the-art feature extraction methodologies.
Keywords: planktons, diversity indices, water quality index, water ways
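The texture half of the descriptor is straightforward to reproduce: CSLBP compares the four centre-symmetric pixel pairs of each 8-neighbourhood, yielding a 4-bit code whose 16-bin regional histograms feed the descriptor. A minimal NumPy sketch (the comparison threshold is an assumed small constant; the phase congruency half is not reproduced here):

```python
import numpy as np

def cslbp(image, threshold=0.01):
    """Center-Symmetric LBP: compare the 4 opposite pixel pairs of the
    8-neighbourhood, giving a 4-bit (0-15) code per pixel."""
    p = np.pad(image.astype(np.float64), 1, mode="edge")
    # the 8 neighbours, clockwise from top-left
    n = [p[0:-2, 0:-2], p[0:-2, 1:-1], p[0:-2, 2:], p[1:-1, 2:],
         p[2:, 2:], p[2:, 1:-1], p[2:, 0:-2], p[1:-1, 0:-2]]
    code = np.zeros(image.shape, dtype=np.uint8)
    for i in range(4):  # centre-symmetric pairs (0,4), (1,5), (2,6), (3,7)
        code |= ((n[i] - n[i + 4]) > threshold).astype(np.uint8) << i
    return code

# A 16-bin histogram over a local region forms the texture part of FST
img = np.random.rand(32, 32)
hist = np.bincount(cslbp(img).ravel(), minlength=16)
print(hist)
```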
12280 Screening for Hit Identification against Mycobacterium abscessus
Authors: Jichan Jang
Abstract:
Mycobacterium abscessus is a rapidly growing, life-threatening mycobacterium with multiple drug-resistance mechanisms. In this study, we screened a compound library to identify molecules active against Mycobacterium abscessus using resazurin live/dead assays. In this screening assay, the Z-factor was 0.7, an indication of the statistical confidence of the assay. A cut-off of 80% growth inhibition in the screen resulted in the identification of four different compounds at a single concentration (20 μM). Dose-response curves identified three different hit candidates, which generated good inhibitory curves. All hit candidates are expected to have different molecular targets. Thus, we found that the identified compound X may be a promising candidate in the M. abscessus drug discovery pipeline.
Keywords: Mycobacterium abscessus, antibiotics, drug discovery, emerging pathogen
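The Z-factor quoted here is the standard plate-quality statistic Z = 1 − 3(σ_pos + σ_neg) / |μ_pos − μ_neg|; values above 0.5 are conventionally considered excellent for screening. A minimal sketch of that calculation and the 80% inhibition cut-off (the control and well readings are invented):

```python
import numpy as np

def z_factor(pos, neg):
    """Z = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg| (Zhang et al., 1999)."""
    return 1.0 - (3.0 * (np.std(pos, ddof=1) + np.std(neg, ddof=1))
                  / abs(np.mean(pos) - np.mean(neg)))

def percent_inhibition(sample, pos, neg):
    # pos = untreated growth control, neg = sterile control (resazurin fluorescence)
    return 100.0 * (np.mean(pos) - np.asarray(sample)) / (np.mean(pos) - np.mean(neg))

rng = np.random.default_rng(0)
growth_ctrl = rng.normal(10000.0, 400.0, 32)   # invented fluorescence readings
sterile_ctrl = rng.normal(800.0, 150.0, 32)
wells = rng.normal(2000.0, 400.0, 4)           # four candidate wells at 20 uM

print("Z-factor:", round(z_factor(growth_ctrl, sterile_ctrl), 2))
print("hits (>80% inhibition):",
      percent_inhibition(wells, growth_ctrl, sterile_ctrl) > 80.0)
```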
12279 Sensor Registration in Multi-Static Sonar Fusion Detection
Authors: Longxiang Guo, Haoyan Hao, Xueli Sheng, Hanjun Yu, Jingwei Yin
Abstract:
In order to prevent target splitting and ensure the accuracy of fusion, system error registration is an important step in a multi-static sonar fusion detection system. To eliminate the inherent system errors of each sonar in detection, including distance error and angle error, this paper uses an offline estimation method for error registration. Suppose several sonars from different platforms work together to detect a target; the target position detected by each sonar is expressed in that sonar's own reference coordinate system. Based on the two-dimensional stereo projection method, this paper uses the real-time quality control (RTQC) method and the least squares (LS) method to estimate sensor biases. The RTQC method takes the average value of each sonar's data as the observation value, while the LS method applies least-squares processing to each sonar's data to obtain the observation value. A MATLAB simulation of the underwater acoustic environment is carried out, and the simulation results show that both algorithms can estimate the distance and angle errors of the sonar system. The performance of the two algorithms is also compared through the root mean square error, and the influence of measurement noise on registration accuracy is explored by simulation. The system error convergence of the RTQC method is rapid, but the distribution of targets has a serious impact on its performance. The LS method is not affected by the target distribution, but increasing random noise slows down its convergence rate. The LS method is an improvement of the RTQC method and is widely used in two-dimensional registration. The improved method can be used for registration in underwater multi-target detection.
Keywords: data fusion, multi-static sonar detection, offline estimation, sensor registration problem
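The bias-estimation step can be illustrated with a toy two-sonar setup: both sensors observe the same targets, one carries constant range and bearing offsets, and the offsets are recovered from the measurement differences. A minimal NumPy sketch (geometry, noise levels and bias values are invented; for a constant offset the batch LS solution coincides with the running-average RTQC estimate in the limit):

```python
import numpy as np

rng = np.random.default_rng(3)
K = 200                                      # common detections of the same targets
true_range = rng.uniform(500.0, 3000.0, K)   # metres
true_bearing = rng.uniform(0.0, 2 * np.pi, K)

bias_r, bias_b = 25.0, np.radians(1.5)       # sonar B's systematic errors (invented)
rA = true_range + rng.normal(0.0, 5.0, K)    # sonar A: noise only
bA = true_bearing + rng.normal(0.0, np.radians(0.2), K)
rB = true_range + bias_r + rng.normal(0.0, 5.0, K)
bB = true_bearing + bias_b + rng.normal(0.0, np.radians(0.2), K)

# RTQC-flavoured estimate: running average of pairwise differences
rtqc_r = np.cumsum(rB - rA) / np.arange(1, K + 1)

# LS estimate: stack the difference equations  d_k = bias + noise  and solve
d = np.column_stack([rB - rA, bB - bA])
ls_bias, *_ = np.linalg.lstsq(np.ones((K, 1)), d, rcond=None)

print("RTQC final range-bias estimate:", rtqc_r[-1])
print("LS range/bearing bias estimates:", ls_bias.ravel())
```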
12278 Vehicular Speed Detection Camera System Using Video Stream
Authors: C. A. Anser Pasha
Abstract:
In this paper, a new Vehicular Speed Detection Camera System (SDCS), applicable as an alternative to traditional radar with the same or even better accuracy, is presented. The real-time measurement and analysis of various traffic parameters, such as speed and number of vehicles, are increasingly required in traffic control and management. Image processing techniques are now considered an attractive and flexible method for automatic analysis and data collection in traffic engineering. Various algorithms based on image processing techniques have been applied to detect and track multiple vehicles. The SDCS processing can be divided into three successive phases. The first phase is object detection, which uses a hybrid algorithm combining an adaptive background subtraction technique with a three-frame differencing algorithm, rectifying the major drawback of using adaptive background subtraction alone. The second phase is object tracking, which consists of three successive operations: object segmentation, object labeling, and object center extraction. The tracking operation takes into consideration the different possible scenarios for the moving object: simple tracking, the object leaving the scene, the object entering the scene, the object being crossed by another object, and one object leaving while another enters the scene. The third phase is speed calculation, in which speed is computed from the number of frames the object takes to pass through the scene.
Keywords: radar, image processing, detection, tracking, segmentation
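The hybrid detection phase maps directly onto OpenCV primitives: an adaptive background subtractor (MOG2 here, as a stand-in for the paper's unspecified method) fused with three-frame differencing. A minimal sketch (the video path, thresholds and per-pixel OR fusion are assumptions):

```python
import cv2

cap = cv2.VideoCapture("traffic.mp4")                    # assumed input clip
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
mog2 = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def to_gray(frame):
    return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

ok1, f1 = cap.read()
ok2, f2 = cap.read()
g2, g1 = to_gray(f1), to_gray(f2)          # the two previous frames
while True:
    ok, frame = cap.read()
    if not ok:
        break
    g0 = to_gray(frame)
    # Three-frame differencing: a moving pixel differs from both neighbours.
    d1 = cv2.threshold(cv2.absdiff(g0, g1), 25, 255, cv2.THRESH_BINARY)[1]
    d2 = cv2.threshold(cv2.absdiff(g1, g2), 25, 255, cv2.THRESH_BINARY)[1]
    motion = cv2.bitwise_and(d1, d2)
    fg = mog2.apply(frame)                  # adaptive background subtraction
    mask = cv2.bitwise_or(motion, fg)       # hybrid object mask
    # Tracking/labeling would follow, e.g. cv2.connectedComponentsWithStats(mask);
    # speed then follows from frame counts: metres_in_scene / (n_frames / fps).
    g2, g1 = g1, g0
cap.release()
```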
12277 Gaussian Probability Density for Forest Fire Detection Using Satellite Imagery
Authors: S. Benkraouda, Z. Djelloul-Khedda, B. Yagoubi
Abstract:
We present a method for early detection of forest fires from a thermal infrared satellite image, using an image matrix of the probability of belonging. The principle of the method is to compare a theoretical mathematical model to an experimental model. We treated each line of the image matrix as a realization of a non-stationary random process. Since the distribution of pixels in the satellite image is statistically dependent, we divided these lines into small stationary and ergodic intervals in order to characterize the image by an adequate mathematical model. A standard deviation was chosen to generate random variables, so that each interval naturally behaves like white Gaussian noise. The latter was selected as the mathematical model representing the large majority of pixels, which can be considered the image background. Before modeling the image, we applied a few pre-processing steps; the parameters of the theoretical Gaussian model were then extracted from the modeled image, and these parameters were used to calculate the probability of each interval of the modeled image belonging to the theoretical Gaussian model. High-intensity pixels are regarded as foreign elements to this model, so they have a low probability, while pixels that belong to the image background have a high probability. Finally, we present the inverse of the probability matrix of these intervals for better fire detection.
Keywords: forest fire, forest fire detection, satellite image, normal distribution, theoretical Gaussian model, thermal infrared matrix image
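A stripped-down version of this scheme: fit a Gaussian background model to an image line, split the line into short intervals, and assign each interval a probability of belonging to the background; hot (fire-like) intervals score low. A sketch with invented data (the robust median/MAD background fit and interval length are assumptions; the paper's estimation procedure is more elaborate):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
row = rng.normal(300.0, 2.0, 512)          # one thermal-IR image line (invented)
row[320:336] += 25.0                        # simulated hot spot

def interval_probabilities(row, win=16):
    # Robust background mean/std (median and MAD) so the hot spot does not
    # bias the Gaussian background model.
    mu = np.median(row)
    sigma = 1.4826 * np.median(np.abs(row - mu))
    probs = []
    for i in range(0, len(row) - win + 1, win):
        z = (row[i:i + win].mean() - mu) / (sigma / np.sqrt(win))
        probs.append(2.0 * norm.sf(abs(z)))  # two-sided tail probability
    return np.array(probs)

p = interval_probabilities(row)
print("suspect intervals (prob < 1e-6):", np.where(p < 1e-6)[0])
```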
12276 Training a Neural Network to Segment, Detect and Recognize Numbers
Authors: Abhisek Dash
Abstract:
This study uses three neural networks, one for number segmentation, one for number detection and one for number recognition, all coupled to one another. All networks were trained on the MNIST dataset and were convolutional. It was assumed that the images had a lighter background and a darker foreground. The segmentation network took 28x28 images as input and had sixteen outputs. Segmentation training starts when a dark pixel is encountered. Taking a window (7x7) over that pixel as the focus, the eight-neighborhood of the focus was checked for further dark pixels. The segmentation network was then trained to move in those directions which had dark pixels; to this end, its 16 outputs were arranged as "go east", "don't go east", "go south east", "don't go south east", "go south", "don't go south" and so on with respect to the focus window. The focus window was resized into a 28x28 image, and the network was trained to consider those neighborhoods which had dark pixels. The neighborhoods which had dark pixels were pushed into a queue in a particular order. The neighborhoods were then popped one at a time and stitched to the existing partial image of the number, and the network was trained on which neighborhoods to consider when the new partial image was presented. The above process was repeated until the image was fully covered by the 7x7 neighborhoods and there were no more uncovered dark pixels. During testing, the network scans and looks for the first dark pixel; from there on, the network predicts which neighborhoods to consider and segments the image. After this step, the group of neighborhoods is passed into the detection network. The detection network took 28x28 images as input and had two outputs denoting whether a number was detected or not. Since the ground truth of the bounds of a number was known during training, the detection network was trained to output "number not found" until the bounds were met, and vice versa. The recognition network was a standard CNN that also took 28x28 images and had 10 outputs for recognition of the numbers 0 to 9. This network was activated only when the detection network voted in favor of a detected number. The above methodology could segment connected and overlapping numbers. Additionally, the recognition unit was invoked only when a number was detected, which minimized false positives. It also eliminated the need for rules of thumb, as segmentation is learned. The strategy can also be extended to other characters.
Keywords: convolutional neural networks, OCR, text detection, text segmentation
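Of the three coupled networks, the recognition stage is the most conventional: a small CNN over 28x28 MNIST digits with 10 outputs. A minimal Keras sketch of that stage alone (the architecture is an assumption; the segmentation and detection networks described above are not reproduced here):

```python
import tensorflow as tf

# Load MNIST and scale to [0, 1]; shape (n, 28, 28, 1)
(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.mnist.load_data()
x_tr = x_tr[..., None] / 255.0
x_te = x_te[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),   # digits 0-9
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_tr, y_tr, epochs=3, batch_size=128, validation_data=(x_te, y_te))
# In the full pipeline this network runs only on crops that the detection
# network has voted "number detected" on.
```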
12275 Detection of COVID-19 Cases From X-Ray Images Using Capsule-Based Network
Authors: Donya Ashtiani Haghighi, Amirali Baniasadi
Abstract:
Coronavirus (COVID-19) disease has spread abruptly all over the world since the end of 2019. Computed tomography (CT) scans and X-ray images are used to detect this disease. Different Deep Neural Network (DNN)-based diagnosis solutions have been developed, mainly based on Convolutional Neural Networks (CNNs), to accelerate the identification of COVID-19 cases. However, CNNs lose important information in intermediate layers and require large datasets. In this paper, Capsule Network (CapsNet) is used. Capsule Network performs better than CNNs for small datasets. Accuracy of 0.9885, f1-score of 0.9883, precision of 0.9859, recall of 0.9908, and Area Under the Curve (AUC) of 0.9948 are achieved on the Capsule-based framework with hyperparameter tuning. Moreover, different dropout rates are investigated to decrease overfitting. Accordingly, a dropout rate of 0.1 shows the best results. Finally, we remove one convolution layer and decrease the number of trainable parameters to 146,752, which is a promising result.
Keywords: capsule network, dropout, hyperparameter tuning, classification
12274 The Development Status of Terahertz Wave and Its Prospect in Wireless Communication
Authors: Yiquan Liao, Quanhong Jiang
Abstract:
Since terahertz radiation was first observed by German scientists, it has been generated through a range of broadband and narrowband technologies. With the development of semiconductor and other technologies, terahertz imaging technology has steadily matured. From its earliest application in non-destructive testing in aviation to its present applications in information transmission and human safety detection, terahertz will play a prominent role in various fields. Weapons produced with terahertz technology would be epoch-making, constituting a crushing deterrent against technologically backward countries. At the same time, terahertz technology in the fields of imaging, medicine and livelihood, and communication serves the well-being of the country and the people.
Keywords: terahertz, imaging, communication, medical treatment
12273 Street Begging: A Loss of Human Resource in Nigeria
Authors: Sulaiman Kassim Ibrahim
Abstract:
Human resource is one of the most important elements in any country. It is vital to actualizing the potential of every sector in the country, i.e., agriculture, education, finance, the judiciary, and all other formal and informal sectors. The purpose of this study is to investigate the loss of human resource in Nigeria through street begging. The study used an intensive literature review. Findings from the review indicate that a significant number of human resources in the country are engaged in street begging, undeveloped and untapped. The paper recommends that policy should be initiated to discourage street begging, develop this resource through education and empowerment, stem rural-urban migration by providing infrastructure in rural areas, and abolish informal (Almajiri, or beggars') schools and transform them into formal schools.
Keywords: human resource, street begging, Nigeria, Almajiri
12272 One-Step Synthesis of Fluorescent Carbon Dots in a Green Way as Effective Fluorescent Probes for Detection of Iron Ions and pH Value
Authors: Mostafa Ghasemi, Andrew Urquhart
Abstract:
In this study, fluorescent carbon dots (CDs) were synthesized in a green way using a one-step hydrothermal method. Carbon dots are carbon-based nanomaterials with a size of less than 10 nm, a unique structure, and excellent properties such as low toxicity, good biocompatibility, tunable fluorescence, excellent photostability, and easy functionalization. These properties make them good candidates for use in different fields such as biological sensing, photocatalysis, photodynamics, and drug delivery. Fourier-transform infrared (FTIR) spectra confirmed OH/NH groups on the surface of the as-synthesized CDs, and UV-vis spectra showed an excellent fluorescence quenching effect of Fe(III) ions on the as-synthesized CDs, with high detection selectivity compared with other metal ions. The probe showed a linear response to Fe(III) over the concentration range 0-2.0 mM, and the limit of detection was calculated to be about 0.50 μM. In addition, the CDs also showed good sensitivity to pH in the range from 2 to 14, indicating great potential as a pH sensor.
Keywords: carbon dots, fluorescence, pH sensing, metal ions sensor
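A detection limit like the 0.50 μM quoted here is typically computed from the calibration slope and the blank's noise as LOD = 3σ_blank / |slope|. A minimal sketch with invented calibration numbers (the 3σ convention is standard, but the data points below are purely illustrative and do not reproduce the study's values):

```python
import numpy as np

# Invented calibration: normalized CD fluorescence vs Fe(III) concentration (mM)
conc = np.array([0.0, 0.25, 0.50, 1.0, 1.5, 2.0])
signal = np.array([1.00, 0.92, 0.84, 0.67, 0.51, 0.35])  # quenching -> negative slope

slope, intercept = np.polyfit(conc, signal, 1)  # linear response region
sd_blank = 0.004                                # std dev of blank readings (assumed)
lod_mM = 3.0 * sd_blank / abs(slope)            # 3-sigma/slope convention
print(f"slope = {slope:.3f} per mM, LOD = {lod_mM * 1000:.1f} uM")
```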
12271 Humans, Social Robots, and Mutual Love: An Application of Aristotle’s Nicomachean Ethics
Authors: Ruby Jean Hornsby
Abstract:
In our rapidly advancing techno-moral world, human-robot relationships are increasingly becoming a part of intimate human life. Indeed, social robots - that is, autonomous or semi-autonomous embodied artificial agents that generally possess human or animal-like qualities (such as responding to environmental stimuli, communicating, learning, performing human tasks, and making autonomous decisions) - have been designed to function as human friends. In light of such advances, immediate philosophical scrutiny is imperative in order to examine the extent to which human-robot interactions constitute genuine friendship and therefore contribute towards the good human life. Aristotle's conception of friendship is philosophically illuminating and sufficiently broad in scope to guide such analysis. On his account, it is necessary (though not sufficient) that for a friendship to exist between two agents - A and B - both agents must have a mutual love for one another. Aristotle claims that A loves B if: Condition 1: A desires those apparent good (qua pleasant, useful, or virtuous) properties attributable to B, and Condition 2: A has goodwill (wishes what is best) for B. This paper argues that human-robot interaction can (and does) successfully meet both conditions; as such, it demonstrates that robots and humans can reciprocally love one another. It will argue for this position by first justifying the claim that a human can desire apparent good features attributable to a robot (i.e., by taking them to be pleasant and/or useful) and outlining how it is that a human can wish a robot well in light of that robot's (quasi-) interests. Next, the paper will argue that a robot can (quasi-)desire certain properties that are attributable to a human before elucidating how it is possible for a robot to act in the interests of a human. Accordingly, this paper will conclude that it is already the case that humans can formulate relationships with robots that involve reciprocated love. This is significant because it suggests that social robots are candidates for human friendship and can therefore contribute toward flourishing human futures.
Keywords: ancient philosophy, friendship, inter-disciplinary applied ethics, love, social robotics
12270 Game Structure and Spatio-Temporal Action Detection in Soccer Using Graphs and 3D Convolutional Networks
Authors: Jérémie Ochin
Abstract:
Soccer analytics are built on two data sources: the frame-by-frame position of each player on the pitch, and the sequences of events, such as ball drive, pass, cross, shot, throw-in... With more than 2000 ball events per soccer game, their precise and exhaustive annotation, based on a monocular video stream such as a TV broadcast, remains a tedious and costly manual task. State-of-the-art methods for spatio-temporal action detection from a monocular video stream, often based on 3D convolutional neural networks, are close to reaching levels of performance in mean Average Precision (mAP) compatible with automating this task. Nevertheless, to meet the expectation of exhaustiveness in the context of data analytics, such methods must be applied in a high-recall, low-precision regime, using low confidence-score thresholds. This setting unavoidably leads to false positives that are the product of the well-documented overconfidence behaviour of neural networks and, in this case, of their limited access to contextual information and understanding of the game: their predictions are highly unstructured. Based on the assumption that professional soccer players' behaviour, pose, positions and velocities are highly interrelated and locally driven by the player performing a ball action, it is hypothesized that adding information about surrounding players' appearance, positions and velocities to the prediction methods can improve their metrics. Several methods of building a proper representation of the game around a player are compared, from handcrafted features of the local graph, based on domain knowledge, to Graph Neural Networks trained end-to-end together with existing state-of-the-art 3D convolutional neural networks. It is shown that including information about surrounding players helps reach higher metrics.
Keywords: fine-grained action recognition, human action recognition, convolutional neural networks, graph neural networks, spatio-temporal action recognition
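The handcrafted end of the spectrum can be sketched simply: build a local graph around the player performing the ball action, with nodes for nearby players and relative position/velocity as node features. A minimal NumPy sketch (the radius and feature layout are invented conventions; a GNN variant would consume the same structure):

```python
import numpy as np

def local_graph(positions, velocities, actor, radius=15.0):
    """Star graph centred on the player performing the ball action.

    positions, velocities: (n_players, 2) arrays in pitch metres.
    Returns per-neighbour features (relative position and velocity)
    plus the edge list (actor, neighbour)."""
    dist = np.linalg.norm(positions - positions[actor], axis=1)
    neighbours = np.flatnonzero((dist < radius) & (dist > 0.0))
    edges = [(actor, int(j)) for j in neighbours]
    feats = np.hstack([positions[neighbours] - positions[actor],
                       velocities[neighbours] - velocities[actor]])
    return feats, edges

# Invented frame: 6 players, actor = index 0
pos = np.array([[50.0, 30.0], [52.0, 31.0], [47.0, 28.0],
                [60.0, 40.0], [51.0, 35.0], [90.0, 10.0]])
vel = np.array([[2.0, 0.5], [1.5, 0.0], [0.0, 1.0],
                [3.0, 2.0], [-1.0, 0.0], [0.0, 0.0]])
feats, edges = local_graph(pos, vel, actor=0)
print(edges)    # players within 15 m of the actor
print(feats)    # their relative position/velocity features
```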
12269 A Study on the Role of Human Rights in the Aid Allocations of China and the United States
Authors: Shazmeen Maroof
Abstract:
The study is motivated by a desire to investigate whether there is substance to claims that, relative to traditional donors, China disregards human rights considerations when allocating overseas aid. While the stated policy of the U.S. is that consideration of potential aid recipients' respect for human rights is mandatory, some quantitative studies have cast doubt on whether this is reflected in actual allocations. There is a lack of academic literature that formally assesses the extent to which the two countries' aid allocations differ, which is essential to test whether the criticisms of China's aid policy in comparison to that of the U.S. are justified. Using data on two standard human rights measures, the 'Political Terror Scale' and 'Civil Liberties', the study analyses the two donors' aid allocations among 125 countries over the period 2000 to 2014. The bivariate analysis demonstrated that a significant share of China's aid flows to countries with poor human rights records. At the same time, the U.S. seems little different in providing aid to these countries. The empirical results obtained from the Fractional Logit model also provided some support for the general pessimism regarding China's provision of aid to countries with poor human rights records, yet challenged the optimists expecting better-targeted aid from the U.S. These findings held both for the split between humanitarian and non-humanitarian aid and in the sample of countries whose human rights record is below some threshold level.
Keywords: China's aid policy, foreign aid allocation, human rights, United States Foreign Assistance Act
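A Fractional Logit model of the kind used here (in the Papke-Wooldridge tradition) can be estimated as a binomial GLM with a logit link and robust standard errors, since the dependent variable is an aid share in [0, 1]. A minimal statsmodels sketch on synthetic data (the covariates and coefficients are invented placeholders, not the study's specification):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 125                                   # one observation per recipient country

# Invented covariates standing in for the study's regressors
pts = rng.integers(1, 6, n).astype(float)       # Political Terror Scale (1-5)
log_gdp = rng.normal(8.0, 1.0, n)               # log GDP per capita proxy
X = sm.add_constant(np.column_stack([pts, log_gdp]))

# Invented aid shares in [0, 1] with a mild human-rights gradient
eta = -1.0 + 0.10 * pts - 0.05 * log_gdp
share = np.clip(1.0 / (1.0 + np.exp(-eta)) + rng.normal(0.0, 0.03, n), 0.0, 1.0)

# Papke-Wooldridge fractional logit: binomial GLM with logit link,
# robust (sandwich) standard errors
result = sm.GLM(share, X, family=sm.families.Binomial()).fit(cov_type="HC1")
print(result.params)        # [const, PTS effect, income effect]
print(result.bse)
```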
12268 Alternator Fault Detection Using Wigner-Ville Distribution
Authors: Amin Ranjbar, Amir Arsalan Jalili Zolfaghari, Amir Abolfazl Suratgar, Mehrdad Khajavi
Abstract:
This paper describes a two-stage learning-based fault detection procedure for alternators. The procedure covers three machine conditions, namely a shortened brush, a high-impedance relay, and a healthy alternator. The fault detection algorithm uses the Wigner-Ville distribution as a feature extractor, together with an appropriate feature classifier. In this work, an ANN (Artificial Neural Network) and an SVM (support vector machine) were compared to determine the more suitable classifier, with performance evaluated by the mean-squared-error criterion. The modules work together to detect possible fault conditions of operating machines. To test the performance of the method, a signal database was prepared by creating the different conditions on a laboratory setup. It therefore seems that, by implementing this method, satisfactory results are achieved.
Keywords: alternator, artificial neural network, support vector machine, time-frequency analysis, Wigner-Ville distribution
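The Wigner-Ville distribution of a signal x(t) is W(t, f) = ∫ x(t+τ/2) x*(t−τ/2) e^{−j2πfτ} dτ; discretely, it is an FFT over the lag variable of the instantaneous autocorrelation of the analytic signal. A minimal NumPy/SciPy sketch (a direct O(N²) implementation for illustration; dedicated libraries such as tftb provide optimized versions, and the chirp test signal is invented):

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of a real 1-D signal.

    Returns an (n_freq, n_time) array: for each time index t, the FFT over
    the lag tau of z[t + tau] * conj(z[t - tau]), z being the analytic signal.
    """
    z = hilbert(np.asarray(x, dtype=float))   # analytic signal reduces aliasing
    n = len(z)
    acf = np.zeros((n, n), dtype=complex)     # instantaneous autocorrelation
    for t in range(n):
        taumax = min(t, n - 1 - t)
        tau = np.arange(-taumax, taumax + 1)
        acf[(n + tau) % n, t] = z[t + tau] * np.conj(z[t - tau])
    return np.real(np.fft.fft(acf, axis=0))

# Example: a linear chirp, whose WVD concentrates along its instantaneous frequency
fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
sig = np.cos(2 * np.pi * (50.0 * t + 100.0 * t ** 2))   # 50 -> 250 Hz chirp
wvd = wigner_ville(sig)
features = [wvd.mean(), wvd.std(), np.abs(wvd).max()]   # toy classifier inputs
print(features)
```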