Search results for: radar detection

2257 Wireless Sensor Network for Forest Fire Detection and Localization

Authors: Tarek Dandashi

Abstract:

WSNs may provide a fast and reliable solution for the early detection of environmental events such as forest fires, which is crucial for alerting and calling for fire brigade intervention. Sensor nodes communicate sensor data to a host station, which enables a global analysis and the generation of a reliable decision on a potential fire and its location. We present a WSN based on TinyOS and nesC that captures and transmits a variety of sensor information with controlled sources, data rates, and durations, and that records and displays activity traces. We propose a similarity distance (SD) between the distribution of currently sensed data and that of a reference. At any given time, a fire causes divergence in the reported data, which alters the usual data distribution. Essentially, SD is a metric on the Cumulative Distribution Function (CDF). SD is designed to be invariant to day-to-day changes of temperature, changes due to the surrounding environment, and normal changes in weather, all of which preserve the data locality. Evaluation shows that SD sensitivity grows quadratically with an increase in sensor node temperature for groups of sensors of different sizes and neighborhoods. Simulation of fire spreading, with ignition placed at random locations under some wind, shows that SD takes a few minutes to reliably detect fires and locate them. We also discuss false negatives and false positives and their impact on decision reliability.
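
As a rough illustration of the CDF-based similarity distance described above, the sketch below compares the empirical CDF of a current window of temperature readings against that of a reference window using a Kolmogorov-Smirnov-style maximum gap. The exact metric, window lengths, and reference construction used by the authors are not specified here, so every numeric value and function name is an assumption.

```python
import numpy as np

def empirical_cdf(samples, grid):
    """Empirical CDF of `samples` evaluated on `grid`."""
    samples = np.sort(np.asarray(samples, dtype=float))
    return np.searchsorted(samples, grid, side="right") / samples.size

def similarity_distance(current, reference, n_grid=200):
    """Hypothetical similarity distance (SD): maximum gap between the CDF of
    currently sensed data and the CDF of a reference window (KS-style metric)."""
    lo = min(np.min(current), np.min(reference))
    hi = max(np.max(current), np.max(reference))
    grid = np.linspace(lo, hi, n_grid)
    return np.max(np.abs(empirical_cdf(current, grid) - empirical_cdf(reference, grid)))

# Toy usage: a localized temperature rise alters the distribution and raises SD.
rng = np.random.default_rng(0)
reference = rng.normal(25.0, 1.5, size=500)               # normal-day readings (deg C)
no_fire = rng.normal(25.4, 1.5, size=500)                 # ordinary day-to-day drift
fire = np.concatenate([rng.normal(25.0, 1.5, size=400),
                       rng.normal(45.0, 3.0, size=100)])  # a few overheated nodes
print(similarity_distance(no_fire, reference))  # small
print(similarity_distance(fire, reference))     # noticeably larger
```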

Keywords: forest fire, WSN, wireless sensor network, algorithm

Procedia PDF Downloads 264
2256 Effect of Birks Constant and Defocusing Parameter on Triple-to-Double Coincidence Ratio Parameter in Monte Carlo Simulation-GEANT4

Authors: Farmesk Abubaker, Francesco Tortorici, Marco Capogni, Concetta Sutera, Vincenzo Bellini

Abstract:

This project concerns the detection efficiency of the portable triple-to-double coincidence ratio (TDCR) system at the National Institute of Metrology of Ionizing Radiation (INMRI-ENEA), which allows direct activity measurement and radionuclide standardization for pure beta-emitting or pure electron-capture radionuclides. The dependence of the simulated TDCR detection efficiency, computed with the Geant4 Monte Carlo code, on the Birks factor (kB) and the defocusing parameter has been examined, especially for low-energy beta-emitting radionuclides such as 3H and 14C, for which this dependence is relevant. The results of this analysis can be used for selecting the best kB factor and defocusing parameter for computing the theoretical TDCR parameter value. The theoretical results were compared with the available ones, measured by the ENEA portable TDCR detector, for some pure beta-emitting radionuclides. This analysis improved the knowledge of the characteristics of the ENEA TDCR detector, which can be used as a traveling instrument for in-situ measurements, with particular benefits for many applications in nuclear medicine and the nuclear energy industry.
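
For context, the Birks factor kB referred to above enters the simulated scintillation response through Birks' law, which relates the light yield per unit path length to the electron stopping power; the choice of kB therefore shifts the computed efficiencies most strongly for low-energy beta emitters such as 3H and 14C. The generic form (not the paper's specific GEANT4 implementation) is:

```latex
% Birks' law: scintillation light yield per unit path length, with scintillation
% efficiency S, Birks factor kB, and electron stopping power dE/dx.
\[
\frac{dL}{dx} \;=\; \frac{S\,\dfrac{dE}{dx}}{1 + kB\,\dfrac{dE}{dx}},
\qquad
L(E) \;=\; \int_{0}^{E} \frac{S}{1 + kB\,\dfrac{dE'}{dx}}\, dE' .
\]
```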

Keywords: Birks constant, defocusing parameter, GEANT4 code, TDCR parameter

Procedia PDF Downloads 149
2255 Remote Assessment and Change Detection of GreenLAI of Cotton Crop Using Different Vegetation Indices

Authors: Ganesh B. Shinde, Vijaya B. Musande

Abstract:

Timely identification of the cotton crop has significant advantages for food, economic, and environmental concerns. Owing to these advantages, the accurate detection of cotton crop regions using a supervised learning procedure is a challenging problem in remote sensing. Classifiers applied directly to the image play a major role, but their results are not fully satisfactory. To further improve the effectiveness, a variety of vegetation indices have been proposed in the literature, and the main challenge is to find the vegetation indices best suited to cotton crop identification within the proposed methodology. Accordingly, fuzzy c-means clustering is combined with a neural network trained by the Levenberg-Marquardt algorithm for cotton crop classification. To evaluate the proposed method, five LISS-III satellite images were taken, and the experimentation was carried out with six vegetation indices: Simple Ratio, Normalized Difference Vegetation Index, Enhanced Vegetation Index, Green Atmospherically Resistant Vegetation Index, Wide-Dynamic Range Vegetation Index, and Green Chlorophyll Index. Along with these indices, the Green Leaf Area Index was also considered for investigation. From the research outcome, the Green Atmospherically Resistant Vegetation Index outperformed all other indices, reaching an average accuracy of 95.21%.
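
The band-ratio definitions behind several of the indices listed above can be computed directly from the red, green, blue, and near-infrared reflectance bands. The snippet below is a hedged sketch: the Simple Ratio and NDVI forms are standard, while the Green Atmospherically Resistant index is written with an atmospheric-correction weight gamma that varies between publications and is assumed to be 1.0 here; the reflectance values are invented for illustration.

```python
import numpy as np

def simple_ratio(nir, red):
    """Simple Ratio (SR)."""
    return nir / red

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def gari(nir, green, blue, red, gamma=1.0):
    """Green Atmospherically Resistant (Vegetation) Index; gamma is an
    atmospheric-correction weight that differs between publications."""
    corrected_green = green - gamma * (blue - red)
    return (nir - corrected_green) / (nir + corrected_green)

# Toy reflectance values for a single vegetated pixel.
blue, green, red, nir = 0.05, 0.09, 0.06, 0.45
print(round(simple_ratio(nir, red), 2), round(ndvi(nir, red), 3),
      round(gari(nir, green, blue, red), 3))
```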

Keywords: Fuzzy C-Means clustering (FCM), neural network, Levenberg-Marquardt (LM) algorithm, vegetation indices

Procedia PDF Downloads 320
2254 Automatic Detection of Sugarcane Diseases: A Computer Vision-Based Approach

Authors: Himanshu Sharma, Karthik Kumar, Harish Kumar

Abstract:

The major problem in crop cultivation is the occurrence of multiple crop diseases. During the growth stage, timely identification of crop diseases is paramount to ensure high crop yield, lower production costs, and minimize pesticide usage. In most cases, crop diseases produce observable characteristics and symptoms. Surveyors usually diagnose crop diseases as they walk through the fields. However, surveyor inspections tend to be biased and error-prone due to the monotonous nature of the task and the subjectivity of individuals. In addition, visual inspection of each leaf or plant is costly, time-consuming, and labour-intensive. Furthermore, plant pathologists and experts who can identify a disease from its symptoms at an early stage are not readily available in remote regions. Therefore, this study specifically addressed early detection of leaf scald, red rot, and eyespot diseases in sugarcane plants. The study proposes a computer vision-based approach using a convolutional neural network (CNN) for automatic identification of crop diseases. To facilitate this, images of sugarcane diseases were first taken from Google without modifying the scene or background or controlling the illumination to build the training dataset. The testing dataset was then developed from images collected in real time from sugarcane fields in India. The image dataset was pre-processed for feature extraction and selection. Finally, the CNN-based Visual Geometry Group (VGG) model was deployed on the training and testing datasets to classify the images into diseased and healthy sugarcane plants, and the model's performance was measured using various parameters, i.e., accuracy, sensitivity, specificity, and F1-score. The promising results of the proposed model lay the groundwork for automatic early detection of sugarcane disease. The proposed research directly supports an increase in crop yield.
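
A minimal transfer-learning sketch of the kind of VGG-based diseased/healthy classifier described above is given below. The image size, optimizer, directory layout ("sugarcane/train", "sugarcane/test"), and the use of a frozen VGG16 backbone are assumptions for illustration, not details taken from the paper.

```python
import tensorflow as tf

# Pre-trained VGG16 backbone; only a small classification head is trained here.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze the convolutional features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # diseased vs. healthy
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Recall(name="sensitivity")])

# Assumed directory layout: one sub-folder per class (diseased, healthy).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "sugarcane/train", image_size=(224, 224), batch_size=32, label_mode="binary")
test_ds = tf.keras.utils.image_dataset_from_directory(
    "sugarcane/test", image_size=(224, 224), batch_size=32, label_mode="binary")

model.fit(train_ds, validation_data=test_ds, epochs=5)
model.evaluate(test_ds)
```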

Keywords: automatic classification, computer vision, convolutional neural network, image processing, sugarcane disease, visual geometry group

Procedia PDF Downloads 117
2253 Study on the Pavement Structural Performance of Highways in the North China Region Based on Pavement Distress and Ground Penetrating Radar

Authors: Mingwei Yi, Liujie Guo, Zongjun Pan, Xiang Lin, Xiaoming Yi

Abstract:

With the rapid expansion of road construction mileage in China, the scale of road maintenance needs has escalated concurrently. As the service life of roads extends, the design of pavement repair and maintenance emerges as a crucial component in preserving the excellent performance of the pavement. The remaining service life of the asphalt pavement structure is a vital parameter in the lifecycle maintenance design of asphalt pavements. Based on an analysis of pavement structural integrity, this study introduces a characterization and assessment of the remaining life of existing asphalt pavement structures. It proposes indicators such as the transverse crack spacing and the length of longitudinal cracks. The transverse crack spacing decreases with an increase in maintenance interval and with the extended use of semi-rigid base layer structures, although this trend becomes less pronounced once maintenance intervals exceed four years. The length of longitudinal cracks increases with longer maintenance intervals, but this trend weakens after five years. These indicators can support the enhancement of standardization and scientific design in highway maintenance decision-making processes.

Keywords: structural integrity, highways, pavement evaluation, asphalt concrete pavement

Procedia PDF Downloads 71
2252 Detection of Hepatitis B by the Use of Artificial Intelligence

Authors: Shizra Waris, Bilal Shoaib, Munib Ahmad

Abstract:

Background: The use of clinical decision support systems (CDSSs) may improve chronic disease management, which requires regular visits to multiple health professionals, treatment monitoring, disease control, and patient behavior modification. The objective of this survey is to determine whether these CDSSs improve the processes of chronic care, including diagnosis, treatment, and monitoring of diseases. Though artificial intelligence is not a new idea, it has been widely documented as a new technology in computer science, and numerous areas, such as education, business, and medicine, have made use of it. Methods: The survey covers articles extracted from relevant databases, using search terms related to information technology and viral hepatitis published between 2000 and 2016. Results: Overall, 80% of studies asserted the benefit provided by information technology (IT); 75% of studies asserted benefits in the medical domain; 25% of studies did not clearly define the added benefits due to IT. The current state of CDSSs requires many improvements to support the management of liver diseases such as HCV, liver fibrosis, and cirrhosis. Conclusion: We concluded that the proposed model gives an earlier and more accurate prediction of hepatitis B and works as a promising tool for predicting hepatitis B from clinical laboratory data.

Keywords: detection, hepatitis, observation, disease

Procedia PDF Downloads 159
2251 Electrohydrodynamic Patterning for Surface Enhanced Raman Scattering for Point-of-Care Diagnostics

Authors: J. J. Rickard, A. Belli, P. Goldberg Oppenheimer

Abstract:

Medical diagnostics, environmental monitoring, homeland security and forensics increasingly demand specific and field-deployable analytical technologies for quick point-of-care diagnostics. Although technological advancements have made optical methods well-suited for miniaturization, a highly-sensitive detection technique for minute sample volumes is required. Raman spectroscopy is a well-known analytical tool, but has very weak signals and hence is unsuitable for trace level analysis. Enhancement via localized optical fields (surface plasmon resonances) on nanoscale metallic materials generates huge signals in surface-enhanced Raman scattering (SERS), enabling single molecule detection. This enhancement can be tuned by manipulation of the surface roughness and architecture at the sub-micron level. Nevertheless, the development and application of SERS has been inhibited by the irreproducibility and complexity of fabrication routes. The ability to generate straightforward, cost-effective, multiplex-able and addressable SERS substrates with high enhancements is of profound interest for SERS-based sensing devices. While most SERS substrates are manufactured by conventional lithographic methods, the development of a cost-effective approach to create nanostructured surfaces is a much sought-after goal in the SERS community. Here, a method is established to create controlled, self-organized, hierarchical nanostructures using hierarchical electrohydrodynamic (HEHD) instabilities. The created structures are readily fine-tuned, which is an important requirement for optimizing SERS to obtain the highest enhancements. HEHD pattern formation enables the fabrication of multiscale 3D structured arrays as SERS-active platforms. Importantly, each of the HEHD-patterned individual structural units yields a considerable SERS enhancement. This enables each single unit to function as an isolated sensor. Each of the formed structures can be effectively tuned and tailored to provide high SERS enhancement, while arising from different HEHD morphologies. The HEHD fabrication of sub-micrometer architectures is straightforward and robust, providing an elegant route for high-throughput biological and chemical sensing. The superior detection properties and the ability to fabricate SERS substrates on the miniaturized scale will facilitate the development of advanced and novel opto-fluidic devices, such as portable detection systems, and will offer numerous applications in biomedical diagnostics, forensics, ecological warfare and homeland security.

Keywords: hierarchical electrohydrodynamic patterning, medical diagnostics, point-of care devices, SERS

Procedia PDF Downloads 347
2250 Rapid Detection and Differentiation of Camel Pox, Contagious Ecthyma and Papilloma Viruses in Clinical Samples of Camels Using a Multiplex PCR

Authors: A. I. Khalafalla, K. A. Al-Busada, I. M. El-Sabagh

Abstract:

Pox and pox-like diseases of camels are a group of exanthematous skin conditions that have become increasingly important economically. They may be caused by three distinct viruses: camelpox virus (CMPV), camel contagious ecthyma virus (CCEV) and camel papillomavirus (CAPV). These diseases are difficult to differentiate based on clinical presentation during disease outbreaks. Molecular methods such as PCR targeting species-specific genes have been developed and used to identify CMPV and CCEV, but not simultaneously in a single tube. Recently, multiplex PCR has gained a reputation as a convenient diagnostic method with cost- and time-saving benefits. In the present communication, we describe the development, optimization and validation of a multiplex PCR assay able to detect the genomes of the three viruses simultaneously in one single test, allowing for rapid and efficient molecular diagnosis. The assay was developed based on the evaluation and combination of published and new primer sets, and was applied to 110 tissue samples. The method showed high sensitivity, and the specificity was confirmed by PCR-product sequencing. In conclusion, this rapid, sensitive and specific assay is considered a useful method for identifying three important viruses in specimens from camels and as part of a molecular diagnostic regime.

Keywords: multiplex PCR, diagnosis, pox and pox-like diseases, camels

Procedia PDF Downloads 471
2249 An Approach for Detection Efficiency Determination of High Purity Germanium Detector Using Cesium-137

Authors: Abdulsalam M. Alhawsawi

Abstract:

Estimation of a radiation detector's efficiency plays a significant role in calculating the activity of radioactive samples. Detector efficiency is measured using sources that emit a variety of energies, from low- to high-energy photons, along the energy spectrum. Some photon energies are hard to find in lab settings, either because check sources are hard to obtain or because the sources have short half-lives. This work aims to develop a method to determine the efficiency of a High Purity Germanium (HPGe) detector based on the 662 keV gamma-ray photon emitted by Cs-137. Cesium-137 is readily available in most labs with radiation detection and health physics applications and has a long half-life of about 30 years. Several photon efficiencies were calculated using the MCNP5 simulation code. The simulated efficiency of the 662 keV photon was used as a base to calculate other photon efficiencies for a point source and a Marinelli beaker geometry. In the case of a Marinelli beaker filled with water, the efficiency of the 59 keV low-energy photons from Am-241 was estimated with a 9% error compared to the MCNP5 simulated efficiency. The 1.17 and 1.33 MeV high-energy photons emitted by Co-60 had errors of 4% and 5%, respectively. The estimated errors are considered acceptable for calculating the activity of unknown samples as they fall within the 95% confidence level.
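
One hedged reading of the approach above is that the efficiency measured with the Cs-137 662 keV line anchors the MCNP5-simulated efficiency curve, so efficiencies at other energies are estimated by simple ratio scaling. The helper names and all numeric values below are illustrative assumptions, not data from the study.

```python
# Hedged sketch: anchor a simulated HPGe efficiency curve on the measured
# 662 keV (Cs-137) point and scale the simulated values at other energies.

def absolute_efficiency(net_counts, activity_bq, live_time_s, gamma_yield):
    """Standard definition of the absolute full-energy-peak efficiency."""
    return net_counts / (activity_bq * live_time_s * gamma_yield)

def scaled_efficiency(eff_sim_at_E, eff_sim_662, eff_meas_662):
    """Estimate the efficiency at another energy from the simulated curve,
    anchored on the measured 662 keV efficiency."""
    return eff_sim_at_E * (eff_meas_662 / eff_sim_662)

# Measured 662 keV efficiency from a calibrated Cs-137 source (85.1% gamma yield).
eff_meas_662 = absolute_efficiency(net_counts=1.2e5, activity_bq=3.0e4,
                                   live_time_s=600, gamma_yield=0.851)
eff_sim_662 = 7.5e-3   # illustrative MCNP5 value at 662 keV
eff_sim_59 = 1.1e-2    # illustrative MCNP5 value at 59 keV (Am-241)
print(scaled_efficiency(eff_sim_59, eff_sim_662, eff_meas_662))
```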

Keywords: MCNP5, Monte Carlo simulations, efficiency calculation, absolute efficiency, activity estimation, Cs-137

Procedia PDF Downloads 118
2248 Fraud Detection in Credit Cards with Machine Learning

Authors: Anjali Chouksey, Riya Nimje, Jahanvi Saraf

Abstract:

Online transactions have increased dramatically in this new ‘social-distancing’ era, and with them, fraud in online payments has also increased significantly. Fraud is a significant problem in various industries such as insurance and banking. These frauds include leaking sensitive information related to the credit card, which can easily be misused. With governments also pushing online transactions, e-commerce is booming. However, due to increasing fraud in online payments, e-commerce businesses are suffering a great loss of trust from their customers, and they find credit card fraud to be a big problem. People have started using online payment options and thus have become easy targets of credit card fraud. In this research paper, we discuss machine learning algorithms. We have used a decision tree, XGBOOST, k-nearest neighbour, logistic regression, random forest, and SVM on a dataset of online credit card transactions. We test all these algorithms for detecting fraud cases using the confusion matrix and F1 score, and calculate the accuracy score for each model to identify which algorithm can be used for detecting fraud.
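
A compact scikit-learn version of the comparison described above might look like the sketch below. The CSV file name, the 'Class' label column, and the train/test split are assumptions; XGBoost is omitted because it requires the separate xgboost package, but it would slot into the same loop.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, f1_score, confusion_matrix

# Assumed CSV with numeric features and a binary 'Class' column (1 = fraud).
df = pd.read_csv("creditcard.csv")
X, y = df.drop(columns=["Class"]), df["Class"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=42)

models = {
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "decision_tree": DecisionTreeClassifier(random_state=42),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=42),
    "knn": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "svm": make_pipeline(StandardScaler(), SVC()),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name, "accuracy=%.4f" % accuracy_score(y_te, pred),
          "f1=%.4f" % f1_score(y_te, pred))
    print(confusion_matrix(y_te, pred))
```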

Keywords: machine learning, fraud detection, artificial intelligence, decision tree, k nearest neighbour, random forest, XGBOOST, logistic regression, support vector machine

Procedia PDF Downloads 150
2247 Facial Recognition and Landmark Detection in Fitness Assessment and Performance Improvement

Authors: Brittany Richardson, Ying Wang

Abstract:

For physical therapy, exercise prescription, athlete training, and regular fitness training, it is crucial to perform health or fitness assessments periodically. An accurate assessment is propitious for tracking recovery progress, preventing potential injury, and making long-range training plans. Assessments include basic measurements (height, weight, blood pressure, heart rate, body fat, etc.) and advanced evaluations (muscle group strength, stability-mobility, movement evaluation, etc.). In current standard assessment procedures, the accuracy of assessments, especially advanced evaluations, largely depends on the experience of physicians, coaches, and personal trainers, and it is challenging to track clients' progress. Unlike traditional assessment, in this paper we present a deep-learning-based face recognition algorithm for accurate, comprehensive, and trackable assessment. Based on the results from our assessment, physicians, coaches, and personal trainers are able to adjust the training targets and methods. The system categorizes the difficulty level of the current activity for the client or user and, furthermore, makes more comprehensive assessments by tracking muscle groups over time using a designed landmark detection method. The system also includes the function of grading and correcting the clients' form during exercise. Experienced coaches and personal trainers can tell clients' limits based on their facial expressions and muscle group movements, even during the first several sessions. Similarly, using a convolutional neural network, the system is trained on people's facial expressions to differentiate challenge levels for clients. It uses landmark detection for subtle changes in muscle group movements. It measures the proximal mobility of the hips and thoracic spine, the proximal stability of the scapulothoracic region and distal mobility of the glenohumeral joint, as well as distal mobility and its effect on the kinetic chain. The system integrates data from other fitness assistant devices, including but not limited to the Apple Watch, Fitbit, etc., for improved training and testing performance. The system itself does not require historical data for an individual client, but a client's history can be used to create a more effective exercise plan. In order to validate the performance of the proposed work, an experimental design is presented. The results show that the proposed work contributes towards improving the quality of exercise plans, execution, progress tracking, and performance.

Keywords: exercise prescription, facial recognition, landmark detection, fitness assessments

Procedia PDF Downloads 135
2246 Fault Tolerant Control System Using a Multiple Time Scale SMC Technique and a Geometric Approach

Authors: Ghodbane Azeddine, Saad Maarouf, Boland Jean-Francois, Thibeault Claude

Abstract:

This paper proposes a new design of an active fault-tolerant flight control system against abrupt actuator faults. The overall system combines a multiple time scale sliding mode controller for fault compensation with a geometric approach for fault detection and diagnosis. The proposed control system is able to accommodate several kinds of partial and total actuator failures by using the available healthy redundant actuators. The overall system first estimates the correct fault information using the geometric approach. Based on that, a new reconfigurable control law is then designed using the multiple time scale sliding mode technique to compensate online for the effect of such faults. This approach takes advantage of the fact that there is a significant difference between the time scales of aircraft states with slow dynamics and those with fast dynamics. The closed-loop stability of the overall system is proved using the Lyapunov technique. A case study of the nonlinear model of the F16 fighter, subject to total loss of rudder control, confirms the effectiveness of the proposed approach.

Keywords: actuator faults, fault detection and diagnosis, fault tolerant flight control, sliding mode control, multiple time scale approximation, geometric approach for fault reconstruction, Lyapunov stability

Procedia PDF Downloads 373
2245 Serological and Molecular Detection of Alfalfa Mosaic Virus in the Major Potato Growing Areas of Saudi Arabia

Authors: Khalid Alhudaib

Abstract:

Potato is considered one of the most important and promising vegetable crops in Saudi Arabia. Alfalfa mosaic virus (AMV), genus Alfamovirus, family Bromoviridae, is among the widely spread viruses in potato. During the spring and fall potato growing seasons of 2015 and 2016, several field visits were conducted in the four major areas of potato cultivation (Riyadh, Qaseem, Hail, and Hard). The presence of AMV was detected in samples using ELISA, dot blot hybridization and/or RT-PCR. The highest occurrence of AMV was observed in Qaseem (18.6%), followed by Riyadh (15.2%), while the lowest infection rates were recorded in Hard and Hail (8.3% and 10.4%, respectively). The sequences of seven AMV isolates obtained in this study were determined and aligned with the other sequences available in the GenBank database. The analyses confirmed the low variability among the AMV isolates obtained in this study, which means that all AMV isolates may originate from the same source. Due to the high incidence of AMV in potato crops, other economically important susceptible crops may also become affected by this virus. This requires accurate examination of potato seed tubers to prevent the spread of the virus in Saudi Arabia. The obtained results indicated that hybridization and ELISA are suitable techniques for routine detection of AMV in large numbers of samples, while RT-PCR is more sensitive and essential for the molecular characterization of AMV.

Keywords: Alfamovirus, AMV, Alfalfa mosaic virus, PCR, potato

Procedia PDF Downloads 180
2244 Co-Seismic Surface Deformation Induced By 24 September 2019 Mirpur, Pakistan Earthquake Along an Active Blind Fault Estimated Using Sentinel-1 TOPS Interferometry

Authors: Muhammad Ali, Zeeshan Afzal, Giampaolo Ferraioli, Gilda Schirinzi, Muhammad Saleem Mughal, Vito Pascazio

Abstract:

On 24 September 2019, an earthquake of Mw 5.6 and 10 km depth struck Mirpur. The Mirpur area was highly affected by this earthquake, resulting in the deaths of 34 people. This study aims to estimate the surface deformation associated with this earthquake. The interferometric synthetic aperture radar (InSAR) technique is applied to study earthquake-induced surface motion. InSAR data from 9 Sentinel-1A SAR images acquired between 11 August 2019 and 22 October 2019 are used to investigate the pre-, co-, and post-seismic deformation trends. Time series investigation reveals no significant deformation in the pre-seismic period. In the co-seismic period, strong displacement was observed, and in the post-seismic results, small displacements are seen due to aftershocks. Our results show the existence of a previously unpublished blind fault in Mirpur and help to locate the fault line. This fault line was previously triggered during the 2005 earthquake and was reactivated on 24 September 2019. The study area already faces many problems due to natural hazards, and additional surface deformation, particularly from an earthquake on an activated blind fault, has increased its vulnerability.
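
For context, the line-of-sight displacement recovered from an unwrapped Sentinel-1 interferogram follows the standard phase-to-displacement relation for C-band (wavelength roughly 5.55 cm). The sketch below is generic, with a sign convention that varies between processors, and is not tied to the specific processing chain used in the study.

```python
import numpy as np

def los_displacement(unwrapped_phase_rad, wavelength_m=0.05546):
    """Line-of-sight displacement from unwrapped interferometric phase:
    d = -(lambda / (4*pi)) * delta_phi (sign convention varies)."""
    return -(wavelength_m / (4.0 * np.pi)) * unwrapped_phase_rad

# One full fringe (2*pi of phase) corresponds to about half a wavelength (~2.8 cm).
print(abs(los_displacement(2.0 * np.pi)) * 100.0, "cm")
```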

Keywords: surface deformation, InSAR, earthquake, sentinel-1, mirpur

Procedia PDF Downloads 131
2243 Enhancing Early Detection of Coronary Heart Disease Through Cloud-Based AI and Novel Simulation Techniques

Authors: Md. Abu Sufian, Robiqul Islam, Imam Hossain Shajid, Mahesh Hanumanthu, Jarasree Varadarajan, Md. Sipon Miah, Mingbo Niu

Abstract:

Coronary Heart Disease (CHD) remains a principal cause of global morbidity and mortality, characterized by atherosclerosis, the build-up of fatty deposits inside the arteries. The study introduces an innovative methodology that leverages cloud-based platforms like AWS Live Streaming and Artificial Intelligence (AI) to detect and prevent CHD symptoms early in web applications. By employing novel simulation processes and AI algorithms, this research aims to significantly mitigate the health and societal impacts of CHD. Methodology: This study introduces a novel simulation process alongside a multi-phased model development strategy. Initially, health-related data, including heart rate variability, blood pressure, lipid profiles, and ECG readings, were collected through user interactions with web-based applications as well as API integration. The novel simulation process involved creating synthetic datasets that mimic early-stage CHD symptoms, allowing for the refinement and training of AI algorithms under controlled conditions without compromising patient privacy. AWS Live Streaming was utilized to capture real-time health data, which was then processed and analysed using advanced AI techniques. The novel aspect of our methodology lies in the simulation of CHD symptom progression, which provides a dynamic training environment for our AI models, enhancing their predictive accuracy and robustness. Model Development: We developed a machine learning model trained on both real and simulated datasets, incorporating a variety of algorithms, including neural networks and ensemble learning models, to identify early signs of CHD. The model's continuous learning mechanism allows it to evolve, adapting to new data inputs and improving its predictive performance over time. Results and Findings: The deployment of our model yielded promising results. In the validation phase, it achieved an accuracy of 92% in predicting early CHD symptoms, surpassing existing models. The precision and recall metrics stood at 89% and 91%, respectively, indicating a high level of reliability in identifying at-risk individuals. These results underscore the effectiveness of combining live data streaming with AI in the early detection of CHD. Societal Implications: The implementation of cloud-based AI for CHD symptom detection represents a significant step forward in preventive healthcare. By facilitating early intervention, this approach has the potential to reduce the incidence of CHD-related complications, decrease healthcare costs, and improve patient outcomes. Moreover, the accessibility and scalability of cloud-based solutions democratize advanced health monitoring, making it available to a broader population. This study illustrates the transformative potential of integrating technology and healthcare, setting a new standard for the early detection and management of chronic diseases.

Keywords: coronary heart disease, cloud-based ai, machine learning, novel simulation techniques, early detection, preventive healthcare

Procedia PDF Downloads 67
2242 Analysis of Operating Speed on Four-Lane Divided Highways under Mixed Traffic Conditions

Authors: Chaitanya Varma, Arpan Mehar

Abstract:

The present study demonstrates the procedure to analyse speed data collected on various four-lane divided sections in India. Field data for the study were collected at different straight and curved sections on rural highways with the help of a radar speed gun and a video camera. The data collected at the sections were analysed, and parameters pertaining to the speed distributions were estimated. Different statistical distributions were fitted to the vehicle-type speed data and to the mixed traffic speed data. It was found that vehicle-type speed data follow either the normal distribution or the log-normal distribution, whereas the mixed traffic speed data follow more than one type of statistical distribution. The most common fits observed for mixed traffic speed data were the Beta distribution and the Weibull distribution. Separate operating speed models based on traffic and roadway geometric parameters are proposed in the present study. Operating speed models with traffic parameters and curve geometry parameters were established. Two different operating speed models were proposed with the variables 1/R and ln(R) and were found to be realistic over different ranges of curve radius. The models developed in the present study are simple and realistic and can be used for forecasting operating speed on four-lane highways.
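
A hedged sketch of fitting the two curve-geometry forms mentioned above, V85 = a + b*(1/R) and V85 = a + b*ln(R), by ordinary least squares is shown below; the radii and 85th-percentile speeds are invented for illustration and are not the field data of the study.

```python
import numpy as np

# Illustrative 85th-percentile operating speeds (km/h) at curves of radius R (m).
R = np.array([120.0, 200.0, 350.0, 500.0, 800.0, 1200.0])
V85 = np.array([58.0, 66.0, 74.0, 79.0, 84.0, 87.0])

# Model 1: V85 = a + b * (1/R)
A1 = np.column_stack([np.ones_like(R), 1.0 / R])
(a1, b1), *_ = np.linalg.lstsq(A1, V85, rcond=None)

# Model 2: V85 = a + b * ln(R)
A2 = np.column_stack([np.ones_like(R), np.log(R)])
(a2, b2), *_ = np.linalg.lstsq(A2, V85, rcond=None)

print(f"V85 = {a1:.1f} + {b1:.1f} * (1/R)")
print(f"V85 = {a2:.1f} + {b2:.1f} * ln(R)")
```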

Keywords: highway, mixed traffic flow, modeling, operating speed

Procedia PDF Downloads 460
2241 Detecting Geographically Dispersed Overlay Communities Using Community Networks

Authors: Madhushi Bandara, Dharshana Kasthurirathna, Danaja Maldeniya, Mahendra Piraveenan

Abstract:

Community detection is an extremely useful technique for understanding the structure and function of a social network. The Louvain algorithm, which is based on the Newman-Girvan modularity optimization technique, is extensively used as a computationally efficient method to extract communities in social networks. It has been suggested that nodes that are in close geographical proximity have a higher tendency to form communities. Variants of the Newman-Girvan modularity measure, such as dist-modularity, try to normalize the effect of geographical proximity to extract geographically dispersed communities, at the expense of losing the information about the geographically proximate communities. In this work, we propose a method to extract geographically dispersed communities while preserving the information about the geographically proximate communities, by analyzing the ‘community network’, where the centroids of communities are considered as network nodes. We suggest that the inter-community link strengths, which are normalized over the community sizes, may be used to identify and extract the ‘overlay communities’. The overlay communities would have relatively higher link strengths, despite being relatively far apart in their spatial distribution. We apply this method to the Gowalla online social network, which contains the geographical signatures of its users, and identify the overlay communities within it.
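
A hedged sketch of the ‘community network’ idea is given below, assuming networkx version 2.8 or later (which provides louvain_communities). Normalizing each inter-community link strength by the product of the two community sizes is one plausible reading of the size normalization described in the abstract, not the authors' exact formulation, and the karate club graph merely stands in for a geo-tagged network such as Gowalla.

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities

def community_network(G, seed=0):
    """Collapse a graph into a 'community network': one node per Louvain
    community, edges weighted by the inter-community link count normalized
    by the product of the community sizes (one plausible normalization)."""
    communities = louvain_communities(G, seed=seed)
    label = {v: i for i, c in enumerate(communities) for v in c}
    C = nx.Graph()
    C.add_nodes_from(range(len(communities)))
    for u, v in G.edges():
        cu, cv = label[u], label[v]
        if cu != cv:
            prev = C[cu][cv]["weight"] if C.has_edge(cu, cv) else 0.0
            C.add_edge(cu, cv, weight=prev + 1.0)
    for cu, cv, data in C.edges(data=True):
        data["weight"] /= len(communities[cu]) * len(communities[cv])
    return communities, C

G = nx.karate_club_graph()  # stand-in for a geo-tagged social network
communities, C = community_network(G)
# Community pairs with unusually strong normalized ties are candidate overlay links.
print(sorted(C.edges(data="weight"), key=lambda e: -e[2])[:3])
```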

Keywords: social networks, community detection, modularity optimization, geographically dispersed communities

Procedia PDF Downloads 236
2240 Human Identification and Detection of Suspicious Incidents Based on Outfit Colors: Image Processing Approach in CCTV Videos

Authors: Thilini M. Yatanwala

Abstract:

CCTV (closed-circuit television) surveillance systems have been used in public places for decades, and a large variety of data is produced every moment. However, most of the CCTV data is stored in isolation without being integrated. As a result, identifying the behavior of suspicious people along with their location has become strenuous. This research was conducted to acquire more accurate and reliable timely information from CCTV video records. The implemented system can identify human objects in public places based on outfit colors. Inter-process communication technologies were used to implement the CCTV camera network to track people on the premises. The research was conducted in three stages: in the first stage, human objects were filtered from other movable objects in public places; in the second stage, people were uniquely identified based on their outfit colors; and in the third stage, an individual was continuously tracked across the CCTV network. A face detection algorithm was implemented using a cascade classifier based on the training model to detect human objects. A HAAR-feature-based two-dimensional convolution operator was introduced to identify features of the human face, such as the eye region, the nose region, and the bridge of the nose, based on the darkness and lightness of the facial area. In the second stage, the outfit colors of human objects were analyzed by dividing the body area into upper left, upper right, lower left, and lower right regions. The mean color, mode color, and standard deviation of each area were extracted as crucial features to uniquely identify a human object using a histogram-based approach. Color-based measurements were written into XML files, and separate directories were maintained to store the XML files related to each camera according to the time stamp. As the third stage of the approach, inter-process communication techniques were used to implement an acknowledgement-based CCTV camera network to continuously track individuals across a network of cameras. Real-time analysis of the XML files generated by each camera can determine the path of an individual and monitor the full activity sequence. Higher efficiency was achieved by sending and receiving acknowledgments only among adjacent cameras. Suspicious incidents, such as a person staying in a sensitive area for a long period or a person disappearing from camera coverage, can be detected with this approach. The system was tested on 150 people with an accuracy of 82%. However, this approach was unable to produce the expected results in the presence of groups of people wearing similar outfits. The approach can be applied to any existing camera network without changing the physical arrangement of the CCTV cameras. Human identification and suspicious incident detection using outfit color analysis can achieve a higher level of accuracy, and the project will be continued by integrating motion and gait feature analysis techniques to derive more information from CCTV videos.
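
A hedged sketch of the quadrant-based color features mentioned above (mean, mode, and standard deviation per body quadrant) is given below; the quadrant split, the 256-bin histograms, and the random "person crop" are assumptions standing in for details not fully specified in the abstract.

```python
import numpy as np

def quadrant_color_features(person_bgr):
    """Split a detected person's bounding-box image into upper-left, upper-right,
    lower-left and lower-right quadrants and return per-quadrant, per-channel
    mean, mode (histogram peak) and standard deviation."""
    h, w, _ = person_bgr.shape
    quadrants = {
        "upper_left":  person_bgr[: h // 2, : w // 2],
        "upper_right": person_bgr[: h // 2, w // 2:],
        "lower_left":  person_bgr[h // 2:, : w // 2],
        "lower_right": person_bgr[h // 2:, w // 2:],
    }
    features = {}
    for name, quad in quadrants.items():
        pixels = quad.reshape(-1, 3).astype(float)
        mode = [int(np.argmax(np.bincount(quad[..., c].ravel(), minlength=256)))
                for c in range(3)]
        features[name] = {"mean": pixels.mean(axis=0),
                          "mode": np.array(mode, dtype=float),
                          "std": pixels.std(axis=0)}
    return features

# Toy usage with a random uint8 crop; in practice the crop would come from a
# person/face detector applied to a CCTV frame, and the result written to XML.
person = np.random.default_rng(1).integers(0, 256, size=(128, 64, 3), dtype=np.uint8)
print(quadrant_color_features(person)["upper_left"]["mean"])
```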

Keywords: CCTV surveillance, human detection and identification, image processing, inter-process communication, security, suspicious detection

Procedia PDF Downloads 184
2239 Fast Detection of Local Fiber Shifts by X-Ray Scattering

Authors: Peter Modregger, Özgül Öztürk

Abstract:

Glass fabric reinforced thermoplastics (GFRT) are composite materials that combine low weight with resilient mechanical properties, rendering them especially suitable for automobile construction. However, defects in the glass fabric as well as in the polymer matrix can occur during manufacturing, which may compromise component lifetime or even safety. One type of defect is local fiber shifts, which can be difficult to detect. Recently, we have experimentally demonstrated the reliable detection of local fiber shifts by X-ray scattering based on the edge-illumination (EI) principle. EI constitutes a novel X-ray imaging technique that utilizes two slit masks, one in front of the sample and one in front of the detector, in order to simultaneously provide absorption, phase, and scattering contrast. The principle of contrast formation is as follows. The incident X-ray beam is split by the sample mask into smaller beamlets. These are distorted by the interaction with the sample, and the distortions are scaled up by the detector mask, rendering them visible to a pixelated detector. In the experiment, the sample mask is laterally scanned, resulting in a Gaussian-like intensity distribution in each pixel. The area under the curve represents absorption, the peak offset represents refraction, and the width of the curve represents the scattering occurring in the sample. Here, scattering is caused by the numerous glass fiber/polymer matrix interfaces. In our recent publication, we have shown that the standard deviation of the absorption and scattering values over a selected field of view can be used to distinguish between intact samples and samples with local fiber shift defects. The defect detection performance was quantified using p-values (p=0.002 for absorption and p=0.009 for scattering) and contrast-to-noise ratios (CNR=3.0 for absorption and CNR=2.1 for scattering) between the two groups of samples. This was further improved for the scattering contrast, to p=0.0004 and CNR=4.2, by utilizing a harmonic decomposition analysis of the images. Thus, we concluded that local fiber shifts can be reliably detected by the X-ray scattering contrast provided by EI. However, a potential application in, for example, production monitoring requires fast data acquisition times. For the results above, the scanning of the sample mask was performed over 50 individual steps, which resulted in long total scan times. In this paper, we demonstrate that reliable detection of local fiber shift defects is also possible using single images, which implies a speed-up of the total scan time by a factor of 50. Additional performance improvements will also be discussed, which opens the possibility for real-time acquisition. This contributes a vital step towards the translation of EI to industrial applications for a wide variety of materials consisting of numerous interfaces on the micrometer scale.
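
A hedged sketch of the per-pixel retrieval outlined above: a Gaussian is fitted to the intensity recorded while the sample mask is scanned, and absorption is read from the loss of area, refraction from the peak offset, and scattering from the broadening of the curve. The Gaussian model, the variance-difference definition of scattering, and all numbers are illustrative assumptions; the study's actual retrieval may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, area, mu, sigma, offset):
    return area / (sigma * np.sqrt(2.0 * np.pi)) * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2)) + offset

def retrieve_pixel(mask_positions, flat_counts, sample_counts):
    """Compare Gaussian fits of the illumination curve with and without the sample."""
    dx = float(mask_positions[1] - mask_positions[0])
    p0 = [float(np.sum(flat_counts - flat_counts.min())) * dx,
          float(mask_positions[np.argmax(flat_counts)]), 1.0, float(flat_counts.min())]
    pf, _ = curve_fit(gaussian, mask_positions, flat_counts, p0=p0)
    ps, _ = curve_fit(gaussian, mask_positions, sample_counts, p0=pf)
    absorption = 1.0 - ps[0] / pf[0]      # loss of area under the curve
    refraction = ps[1] - pf[1]            # lateral peak shift
    scattering = ps[2] ** 2 - pf[2] ** 2  # broadening (variance difference)
    return absorption, refraction, scattering

# Toy illumination curves over a 50-step mask scan (arbitrary position units).
x = np.linspace(-5.0, 5.0, 50)
rng = np.random.default_rng(2)
flat = gaussian(x, 1000.0, 0.0, 1.0, 20.0) + rng.normal(0.0, 5.0, x.size)
sample = gaussian(x, 700.0, 0.3, 1.4, 20.0) + rng.normal(0.0, 5.0, x.size)
print(retrieve_pixel(x, flat, sample))
```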

Keywords: defects in composites, X-ray scattering, local fiber shifts, X-ray edge Illumination

Procedia PDF Downloads 64
2238 Study on Acoustic Source Detection Performance Improvement of Microphone Array Installed on Drones Using Blind Source Separation

Authors: Youngsun Moon, Yeong-Ju Go, Jong-Soo Choi

Abstract:

Most drones that currently carry out surveillance/reconnaissance missions are equipped with optical equipment, but a microphone array can also be used to estimate the location of an acoustic source, providing additional information in the absence of optical equipment. The purpose of this study is to estimate the Direction of Arrival (DOA), based on Time Difference of Arrival (TDOA) estimation, of the acoustic source at the drone. The problem is that the target acoustic source cannot be measured cleanly because of the drone noise. To overcome this problem, we separate the drone noise and the target acoustic source using Blind Source Separation (BSS) based on Independent Component Analysis (ICA). ICA can be performed assuming that the drone noise and the target acoustic source are independent and that each signal is non-Gaussian. To maximize the non-Gaussianity of each signal, we use negentropy and kurtosis based on probability theory. As a result, we can improve the TDOA and DOA estimation of the target source in a noisy environment. We simulated the performance of the DOA algorithm with the BSS algorithm applied and validated the simulation through experiments in an anechoic wind tunnel.
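
A hedged sketch of the processing chain described above: FastICA (scikit-learn's negentropy-based implementation) separates the non-Gaussian sources from a two-microphone mixture, and a cross-correlation peak then gives the TDOA. The mixing matrix, sample rate, and the artificial delay are assumptions for illustration; a real array would cross-correlate the separated target channel between physically separated microphones.

```python
import numpy as np
from sklearn.decomposition import FastICA

fs = 16000
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)

# Toy two-microphone mixture of a target tone and broadband, non-Gaussian drone noise.
target = np.sin(2.0 * np.pi * 440.0 * t)
drone = rng.laplace(size=t.size)
A = np.array([[1.0, 0.8], [0.6, 1.0]])       # unknown mixing matrix (illustrative)
mics = np.stack([target, drone], axis=1) @ A.T

# Blind source separation by maximizing non-Gaussianity (FastICA).
sources = FastICA(n_components=2, random_state=0).fit_transform(mics)

def tdoa(sig_a, sig_b, fs):
    """Time difference of arrival from the peak of the cross-correlation."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)
    return lag / fs

# Illustrate the TDOA estimator on an artificially delayed copy of one source.
delay_samples = 25
delayed = np.roll(sources[:, 0], delay_samples)
print(tdoa(delayed, sources[:, 0], fs))  # approximately delay_samples / fs seconds
```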

Keywords: aeroacoustics, acoustic source detection, time difference of arrival, direction of arrival, blind source separation, independent component analysis, drone

Procedia PDF Downloads 165
2237 The Involvement of Visual and Verbal Representations Within a Quantitative and Qualitative Visual Change Detection Paradigm

Authors: Laura Jenkins, Tim Eschle, Joanne Ciafone, Colin Hamilton

Abstract:

An original working memory model suggested the separation of visual and verbal systems in the working memory architecture, in which only visual working memory components were used during visual working memory tasks. It was later suggested that the visuospatial sketchpad was the only memory component in use during visual working memory tasks, and components such as the phonological loop were not considered. In more recent years, a contrasting approach has been developed in which an executive resource is used to incorporate both visual and verbal representations in visual working memory paradigms. This was supported by research demonstrating the use of verbal representations and an executive resource in a visual matrix patterns task. The aim of the current research is to investigate the working memory architecture during both a quantitative and a qualitative visual working memory task. A dual-task method will be used, with three secondary tasks designed to tap specific components within the working memory architecture: Dynamic Visual Noise (visual components), Visual Attention (spatial components), and Verbal Attention (verbal components). A comparison of the visual working memory tasks will be made to discover whether verbal representations are in use, as the previous literature suggested. This direct comparison has not yet been made in the literature. Consideration will be given to whether a domain-specific approach should be employed when discussing visual working memory tasks, or whether a more domain-general approach could be used instead.

Keywords: semantic organisation, visual memory, change detection

Procedia PDF Downloads 596
2236 Concept for Determining the Focus of Technology Monitoring Activities

Authors: Guenther Schuh, Christina Koenig, Nico Schoen, Markus Wellensiek

Abstract:

Identification and selection of appropriate product and manufacturing technologies are key factors for the competitiveness and market success of technology-based companies. Therefore, many companies perform technology intelligence (TI) activities to ensure the identification of evolving technologies at the right time. Technology monitoring is one of the three base activities of TI, besides scanning and scouting. As technological progress accelerates, more and more technologies are being developed. Against the background of limited resources, it is therefore necessary to focus TI activities. In this paper, we propose a concept for defining appropriate search fields for technology monitoring. This limitation of the search space leads to more concentrated monitoring activities. The concept is introduced and demonstrated through an anonymized case study conducted within an industry project at the Fraunhofer Institute for Production Technology. The described concept provides a customized monitoring approach, which is suitable for use in technology-oriented companies, especially those that have not yet defined an explicit technology strategy. It is shown in this paper that defining search fields and search tasks is a suitable method to define topics of interest and thus to direct monitoring activities. Current as well as planned product, production, and material technologies, together with existing skills, capabilities, and resources, form the basis of the described derivation of relevant search areas. To further improve the concept of technology monitoring, the proposed concept should be extended in future research, e.g. by defining relevant monitoring parameters.

Keywords: monitoring radar, search field, technology intelligence, technology monitoring

Procedia PDF Downloads 476
2235 Development of the Analysis and Pretreatment of Brown HT in Foods

Authors: Hee-Jae Suh, Mi-Na Hong, Min-Ji Kim, Yeon-Seong Jeong, Ok-Hwan Lee, Jae-Wook Shin, Hyang-Sook Chun, Chan Lee

Abstract:

Brown HT is a bis-azo dye which is permitted in the EU as a food colorant. So far, many studies have focused on HPLC with diode array detection (DAD) for the analysis of this food colorant with different columns and mobile phases. Even though these methods make it possible to detect Brown HT, low recovery, reproducibility, and linearity are still the major limitations for their application to foods. The purpose of this study was to compare various methods for the analysis of Brown HT and to develop improved analytical methods, including pretreatment. Among the tested analysis methods, the best resolution of Brown HT was observed with the following eluent: solvent A of the mobile phase consisted of 0.575 g NH4H2PO4 and 0.7 g Na2HPO4 in 500 mL of water mixed with 500 mL of methanol, with the pH adjusted to 6.9 using phosphoric acid, and solvent B was methanol. The major peak for Brown HT appeared at the end of the separation, 13.4 min after injection. This method exhibited relatively high recovery and reproducibility compared with other methods. The LOD (0.284 ppm), LOQ (0.861 ppm), resolution (6.143), and selectivity (1.3) of this method were better than those of the most frequently used ammonium acetate method. Precision and accuracy were verified through inter-day and intra-day tests. Various sample pretreatment methods were developed for different foods, and relatively high recoveries over 80% were observed in all cases. This method exhibited high resolution and reproducibility for Brown HT compared with other previously reported official methods from the FSA and EU regulations.

Keywords: analytic method, Brown HT, food colorants, pretreatment method

Procedia PDF Downloads 480
2234 Using Deep Learning Real-Time Object Detection Convolution Neural Networks for Fast Fruit Recognition in the Tree

Authors: K. Bresilla, L. Manfrini, B. Morandi, A. Boini, G. Perulli, L. C. Grappadelli

Abstract:

Image/video processing for fruit in the tree using hard-coded feature extraction algorithms has shown high accuracy in recent years. While accurate, these approaches, even with high-end hardware, are computationally intensive and too slow for real-time systems. This paper details the use of deep convolutional neural networks (CNNs), specifically the YOLO (You Only Look Once) algorithm with 24+2 convolution layers. Using deep-learning techniques eliminated the need to hard-code specific features for specific fruit shapes, colors, and/or other attributes. This CNN was trained on more than 5000 images of apple and pear fruits on a 960-core GPU (graphical processing unit). The testing set showed an accuracy of 90%. After this, the trained model was transferred to an embedded device (Raspberry Pi gen. 3) with a camera for more portability. Based on the correlation between the number of visible or detected fruits in one frame and the real number of fruits on one tree, a model was created to accommodate this error rate. The processing and detection speed of the whole platform was higher than 40 frames per second. This speed is fast enough for any grasping/harvesting robotic arm or other real-time applications.
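
The correction step mentioned above, mapping the number of fruits detected in a single frame to the true number on the tree, can be as simple as a linear regression; the calibration counts below are invented for illustration, and the YOLO detector itself is not reproduced here.

```python
import numpy as np

# Illustrative calibration data: fruits detected by the CNN in one frame vs. the
# manually counted number of fruits on each tree.
detected = np.array([35, 52, 61, 78, 90, 104], dtype=float)
true_count = np.array([58, 85, 101, 130, 148, 171], dtype=float)

# Fit true = a * detected + b to compensate for occluded and out-of-frame fruit.
a, b = np.polyfit(detected, true_count, deg=1)
print(f"estimated = {a:.2f} * detected + {b:.1f}")

def estimate_tree_count(n_detected):
    """Estimate the total number of fruits on the tree from one frame's detections."""
    return a * n_detected + b

print(round(estimate_tree_count(70)))
```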

Keywords: artificial intelligence, computer vision, deep learning, fruit recognition, harvesting robot, precision agriculture

Procedia PDF Downloads 423
2233 Adaptive Analysis of Housing Policies in Development Programming After 1970s (Case Study: Kermanshah City in the Western Iran)

Authors: Zeinab. Shahrokhifar, Abolfazl Meshkini, Seyed Ali. Alavi

Abstract:

Considering the different dimensions of deprivation, housing supply has been noted as a basic requirement in Iran since 1979, when the new government came to power. The government was obliged by the constitution to meet this need in the form of five-year development programs in Iran's provinces. This study focuses on an adaptive analysis of the housing policies in these five development programs in Kermanshah province, located in western Iran. Our research is divided into two analytical sections. In the first section, we collected documentary information using approved plans and field studies. In the second section, a questionnaire was prepared and designed for an elite community (n = 30) to support the documentary analysis. The results showed that various projects were adopted in the form of strategic plans, and the implemented policies covered both quantitative and qualitative aspects of housing in Kermanshah province after 1979. From the first to the fifth development plan, the quality of housing has improved, as reflected in the housing indicators. Housing units for households have also been provided through various policies, with the desired results. The sequences of housing policies and plans do not fully overlap across the five development programs. According to the radar graph, the development programs overlapped in some policies, which shows the continuation of previous policies, but this overlap is not perfect.

Keywords: law enforcement policy, housing policy, development programs, housing indicators, the city of Kermanshah

Procedia PDF Downloads 73
2232 Use of Nanosensors in Detection and Treatment of HIV

Authors: Sayed Obeidullah Abrar

Abstract:

Nanosensor is a combination of the two terms nanoparticle and sensor. Nanosensors are chemical or physical sensors constructed using nanoscale components, usually microscopic or submicroscopic in size. These sensors are very sensitive and can detect a single virus particle or even very low concentrations of substances that could be potentially harmful. Nanosensors have a large scope for research, especially in the fields of medical science, military applications, and pharmaceuticals.

Keywords: HIV/AIDS, nanosensors, DNA, RNA

Procedia PDF Downloads 300
2231 Comparison of Number of Waves Surfed and Duration Using Global Positioning System and Inertial Sensors

Authors: João Madureira, Ricardo Lagido, Inês Sousa, Fraunhofer Portugal

Abstract:

Surfing is an increasingly popular sport, and its performance evaluation is often qualitative. This work aims to use a smartphone to collect and analyze GPS and inertial sensor data in order to obtain quantitative metrics of surfing performance. Two approaches are compared for the detection of wave rides, computing the number of waves ridden in a surfing session, the starting time of each wave, and its duration. The first approach is based on computing the velocity from the Global Positioning System (GPS) signal and finding the velocity thresholds that allow identifying the start and end of each wave ride. The second approach adds information from the Inertial Measurement Unit (IMU) of the smartphone to the velocity thresholds obtained from the GPS unit to determine the start and end of each wave ride. The two methods were evaluated using GPS and IMU data from two surfing sessions and validated with similar metrics extracted from video data collected from the beach. The second method, combining GPS and IMU data, was found to be more accurate in determining the number of waves, start time, and duration. This paper shows that it is feasible to use smartphones for the quantification of performance metrics during surfing. In particular, the waves ridden and their durations can be accurately determined using the smartphone GPS and IMU.
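
A hedged sketch of the first (GPS-only) approach: speed is computed from consecutive fixes and wave rides are segmented wherever the speed exceeds a start threshold and stays up for a minimum duration. The thresholds and the toy track are placeholders, not the values tuned and validated against video in the study.

```python
import numpy as np

def speed_from_gps(lat_deg, lon_deg, t_s):
    """Approximate speed (m/s) between consecutive GPS fixes (equirectangular)."""
    R = 6371000.0
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    dx = R * np.cos(lat[:-1]) * np.diff(lon)
    dy = R * np.diff(lat)
    return np.hypot(dx, dy) / np.diff(t_s)

def detect_waves(speed, t_s, start_thresh=3.0, end_thresh=1.5, min_duration=3.0):
    """Segment wave rides: start when speed exceeds start_thresh, end when it
    drops below end_thresh; keep rides longer than min_duration seconds."""
    waves, start = [], None
    for i, v in enumerate(speed):
        if start is None and v > start_thresh:
            start = t_s[i]
        elif start is not None and v < end_thresh:
            if t_s[i] - start >= min_duration:
                waves.append((start, t_s[i] - start))  # (start time, duration)
            start = None
    return waves

# Toy speed trace: paddling (~1 m/s) with two faster segments representing rides;
# in practice the speed would come from speed_from_gps on the recorded track.
t = np.arange(0.0, 120.0, 1.0)
speed = np.full(t.size - 1, 1.0)
speed[20:30] = 5.0
speed[70:85] = 6.0
print(detect_waves(speed, t[1:]))
```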

Keywords: inertial measurement unit (IMU), global positioning system (GPS), smartphone, surfing performance

Procedia PDF Downloads 403
2230 Information Requirements for Vessel Traffic Service Operations

Authors: Fan Li, Chun-Hsien Chen, Li Pheng Khoo

Abstract:

Operators of a vessel traffic service (VTS) center provide three different types of services to vessels, namely information service, navigational assistance, and traffic organization. To provide these services, operators monitor vessel traffic through a computer interface and provide navigational advice based on information integrated from multiple sources, including the automatic identification system (AIS), radar, and closed-circuit television (CCTV) systems. This information is therefore crucial to VTS operation. However, what information VTS operators actually need to offer these services efficiently and properly is unclear. The aim of this study is to investigate the information requirements for VTS operation. To achieve this aim, field observation was carried out to elicit the information requirements for VTS operation. The study revealed that the most frequent and important tasks were handling arrival vessel reports, potential conflict control, and abeam vessel reports. Current location and vessel name were used in all tasks. Hazardous cargo information was particularly required when operators handled arrival vessel reports. The speed, course, and distance of two or more vessels were used only in potential conflict control. The information requirements identified in this study can be utilized in designing a human-computer interface that takes into consideration what information should be displayed and when, and might further be used to build the foundation of a decision support system for VTS.

Keywords: vessel traffic service, information requirements, hierarchy task analysis, field observation

Procedia PDF Downloads 252
2229 Data Recording for Remote Monitoring of Autonomous Vehicles

Authors: Rong-Terng Juang

Abstract:

Autonomous vehicles offer the possibility of significant benefits to social welfare. However, fully automated cars might not arrive in the near future. To speed up the adoption of self-driving technologies, many governments worldwide are passing laws requiring data recorders for the testing of autonomous vehicles. Currently, a self-driving vehicle (e.g., a shuttle bus) has to be monitored from a remote control center. When an autonomous vehicle encounters an unexpected driving environment, such as road construction or an obstruction, it should request assistance from a remote operator. Nevertheless, large amounts of data, including images, radar and lidar data, etc., have to be transmitted from the vehicle to the remote center. Therefore, this paper proposes a data compression method for in-vehicle networks for the remote monitoring of autonomous vehicles. Firstly, the time-series data are rearranged into a multi-dimensional signal space. Upon arrival, for controller area network (CAN) data, the new data are mapped onto a two-dimensional time-data space associated with the specific CAN identity. Secondly, the data are sampled based on differential sampling. Finally, the whole data set is encoded using existing algorithms such as Huffman, arithmetic, and codebook encoding. To evaluate system performance, the proposed method was deployed on an in-house-built autonomous vehicle. The testing results show that the amount of data can be reduced to as little as 1/7 of the raw data.
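
A hedged sketch of the differential-sampling-plus-entropy-coding idea, applied to a single slowly varying CAN signal: values are delta-encoded and the deltas are then Huffman-coded. The 16-bit raw width and the toy signal are assumptions, and the paper's codebook and arithmetic coding variants are not reproduced here.

```python
import heapq
from collections import Counter
from itertools import count

def delta_encode(values):
    """Differential sampling: keep the first value and the successive differences."""
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

def huffman_code(symbols):
    """Build a Huffman code {symbol: bitstring} for a list of symbols."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): "0"}
    tiebreak = count()  # unique counter so the heap never compares tree nodes
    heap = [(f, next(tiebreak), sym) for sym, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tiebreak), (a, b)))
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):   # internal node: (left, right)
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                         # leaf: original symbol
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

# Toy slowly varying CAN signal (e.g., a wheel-speed value): deltas are small and repetitive.
signal = [1000, 1001, 1001, 1002, 1002, 1003, 1003, 1003, 1004, 1005]
deltas = delta_encode(signal)
codes = huffman_code(deltas)
compressed_bits = sum(len(codes[d]) for d in deltas)
print(deltas, compressed_bits, "bits vs", 16 * len(signal), "raw bits (assumed 16-bit samples)")
```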

Keywords: autonomous vehicle, data compression, remote monitoring, controller area networks (CAN), Lidar

Procedia PDF Downloads 164
2228 Design and Implementation of a 94 GHz CMOS Double-Balanced Up-Conversion Mixer for 94 GHz Imaging Radar Sensors

Authors: Yo-Sheng Lin, Run-Chi Liu, Chien-Chu Ji, Chih-Chung Chen, Chien-Chin Wang

Abstract:

A W-band double-balanced mixer for direct up-conversion using standard 90 nm CMOS technology is reported. The mixer comprises an enhanced double-balanced Gilbert cell with PMOS negative resistance compensation for conversion gain (CG) enhancement and current injection for power consumption reduction and linearity improvement, a Marchand balun for converting the single LO input signal to differential signal, another Marchand balun for converting the differential RF output signal to single signal, and an output buffer amplifier for loading effect suppression, power consumption reduction and CG enhancement. The mixer consumes low power of 6.9 mW and achieves LO-port input reflection coefficient of -17.8~ -38.7 dB and RF-port input reflection coefficient of -16.8~ -27.9 dB for frequencies of 90~100 GHz. The mixer achieves maximum CG of 3.6 dB at 95 GHz, and CG of 2.1±1.5 dB for frequencies of 91.9~99.4 GHz. That is, the corresponding 3 dB CG bandwidth is 7.5 GHz. In addition, the mixer achieves LO-RF isolation of 36.8 dB at 94 GHz. To the authors’ knowledge, the CG, LO-RF isolation and power dissipation results are the best data ever reported for a 94 GHz CMOS/BiCMOS up-conversion mixer.

Keywords: CMOS, W-band, up-conversion mixer, conversion gain, negative resistance compensation, output buffer amplifier

Procedia PDF Downloads 533