Search results for: median filtering
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 852

762 PDDA: Priority-Based, Dynamic Data Aggregation Approach for Sensor-Based Big Data Framework

Authors: Lutful Karim, Mohammed S. Al-kahtani

Abstract:

Sensors are used in various applications such as agriculture, health monitoring, air and water pollution monitoring, and traffic monitoring and control, and hence play a vital role in the growth of big data. However, sensors collect redundant data. Thus, aggregating and filtering sensor data is significantly important in designing an efficient big data framework. Current research does not focus on aggregating and filtering data at multiple layers of a sensor-based big data framework. Thus, this paper introduces (i) a three-layer data aggregation framework for big data and (ii) a priority-based, dynamic data aggregation scheme (PDDA) for the lowest layer, at the sensors. Simulation results show that PDDA outperforms existing tree- and cluster-based data aggregation schemes in terms of overall network energy consumption and end-to-end data transmission delay.
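As an illustration only (the actual PDDA scheme is defined in the full paper; every name and the aggregation rule below are hypothetical), a minimal sketch of priority-based aggregation at the sensor layer could forward urgent readings immediately and collapse redundant low-priority readings into one aggregate before transmission:

```python
import heapq

def aggregate(readings, threshold):
    """Hypothetical priority-based aggregation sketch.
    readings: list of (priority, value); higher priority = more urgent.
    High-priority readings are forwarded as-is; low-priority readings are
    buffered and replaced by a single average to cut redundant transmissions."""
    forwarded = []   # values sent without aggregation
    buffer = []      # heap of low-priority (priority, value) pairs
    for priority, value in readings:
        if priority >= threshold:
            forwarded.append(value)
        else:
            heapq.heappush(buffer, (priority, value))
    if buffer:
        # one aggregate message instead of many redundant ones
        avg = sum(v for _, v in buffer) / len(buffer)
        forwarded.append(avg)
    return forwarded
```

Under this toy rule, three readings where only the first is urgent produce two transmissions instead of three.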

Keywords: big data, clustering, tree topology, data aggregation, sensor networks

Procedia PDF Downloads 302
761 Capacitated Multiple Allocation P-Hub Median Problem on a Cluster Based Network under Congestion

Authors: Çağrı Özgün Kibiroğlu, Zeynep Turgut

Abstract:

This paper considers a hub location problem in which the network service area is partitioned into predetermined zones (represented by given node clusters) and the capacity levels of potential hub nodes are determined a priori as a hub selection criterion, in order to investigate the effect of congestion on the network. The objective is to design the hub network by determining all required hub locations in the node clusters and allocating non-hub nodes to hubs such that the total cost, including transportation cost, the opening cost of hubs, and a penalty cost for exceeding the capacity level at hubs, is minimized. A mixed integer linear programming model is developed by introducing additional constraints to the traditional model of the capacitated multiple allocation hub location problem, and is tested empirically.

Keywords: hub location problem, p-hub median problem, clustering, congestion

Procedia PDF Downloads 457
760 Platelet Indices among the Cases of Vivax Malaria

Authors: Mirza Sultan Ahmad, Mubashra Ahmad, Ramlah Mehmood, Nazia Mahboob, Waqar Nasir

Abstract:

Objective: To ascertain the prevalence of thrombocytopenia and study changes in MPV and PDW among cases of vivax malaria. Design: Descriptive analytic study. Place and duration of study: Department of Pediatrics, Fazle Omar Hospital, from January to December 2012. Methodology: All patients from birth to 16 years of age who presented at Fazle Omar Hospital, Rabwah, from January to December 2012 were included in this study. One hundred patients with other febrile illnesses were taken as controls. Full blood counts were checked by a Madonic CA 620 analyzer. Name, age, sex, weight, platelet count, MPV, PDW, any evidence of bleeding, and the outcome of cases and controls were recorded on data sheets. Results: One hundred and forty-two patients were included in this study. There was no incidence of death or active bleeding. The median platelet count was 109,000/mm3. Thrombocytopenia was present in 108 (76.1%) patients and severe thrombocytopenia in 10 (7%) patients. The minimum count was 27,000/mm3 and the maximum was 341,000/mm3. Platelet counts of the control group were significantly higher than those of the study group (p<0.001). Median MPV was 8.70, with a minimum of 6.40 and a maximum of 11.90; MPV of the study group was significantly higher than that of the control group (p<0.001). Median PDW was 11.30, with a minimum of 8.5 and a maximum of 16.70; there was no difference between the PDW of the study and control groups (p=0.246). Conclusions: Thrombocytopenia is a common complication among pediatric cases of vivax malaria. MPV of cases of vivax malaria is higher than that of the control group.

Keywords: malaria vivax, platelet, mean platelet volume, thrombocytopenia

Procedia PDF Downloads 365
759 Bipolar Impulse Noise Removal and Edge Preservation in Color Images and Video Using Improved Kuwahara Filter

Authors: Reji Thankachan, Varsha PS

Abstract:

Both image capturing devices and the human visual system are nonlinear. Hence, nonlinear filtering methods outperform their linear counterparts in many applications. Linear methods are unable to remove impulsive noise from images while preserving edges and fine details. In addition, linear algorithms are unable to remove signal-dependent or multiplicative noise. This paper presents an approach to denoise and smooth images and videos corrupted by bipolar impulse noise using an improved Kuwahara filter. It involves a two-stage algorithm: noise detection followed by filtering. Numerous simulations demonstrate that the proposed method outperforms the existing method by eliminating the painting-like flattening effect along the local feature direction while preserving edges, with improvements in PSNR and MSE.
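The paper's improved variant is not specified in the abstract, but the classic Kuwahara filter it builds on is well known: for each pixel, four overlapping sub-windows are examined and the mean of the least-variant one replaces the pixel, which smooths noise while keeping edges sharp. A dependency-free sketch of the classic filter (grayscale images as lists of lists; the function name is ours):

```python
def kuwahara(img, r=1):
    """Classic Kuwahara filter sketch for a 2D grayscale image.
    Each interior pixel becomes the mean of the least-variant of four
    overlapping (r+1)x(r+1) quadrants around it; borders are left as-is."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(r, h - r):
        for x in range(r, w - r):
            # four overlapping quadrants: top-left, top-right, bottom-left, bottom-right
            quads = [
                [(y + dy, x + dx) for dy in range(-r, 1) for dx in range(-r, 1)],
                [(y + dy, x + dx) for dy in range(-r, 1) for dx in range(0, r + 1)],
                [(y + dy, x + dx) for dy in range(0, r + 1) for dx in range(-r, 1)],
                [(y + dy, x + dx) for dy in range(0, r + 1) for dx in range(0, r + 1)],
            ]
            best_mean, best_var = None, None
            for q in quads:
                vals = [img[j][i] for j, i in q]
                m = sum(vals) / len(vals)
                v = sum((p - m) ** 2 for p in vals) / len(vals)
                if best_var is None or v < best_var:
                    best_mean, best_var = m, v
            out[y][x] = best_mean  # mean of the most homogeneous quadrant
    return out
```

Because a quadrant lying entirely on one side of an edge has near-zero variance, the filter averages within regions rather than across edges, which is exactly the edge-preserving behavior the abstract refers to.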

Keywords: bipolar impulse noise, Kuwahara, PSNR, MSE, PDF

Procedia PDF Downloads 466
758 Predictor Factors for Treatment Failure among Patients on Second Line Antiretroviral Therapy

Authors: Mohd. A. M. Rahim, Yahaya Hassan, Mathumalar L. Fahrni

Abstract:

Second line antiretroviral therapy (ART) regimens are used when patients fail their first line regimen. Many factors, such as non-adherence, drug resistance, and virological and immunological failure, lead to failure of second line highly active antiretroviral therapy (HAART) regimens. This study was aimed at determining predictor factors for treatment failure with second line HAART and analyzing median survival time. An observational, retrospective study was conducted in Sungai Buloh Hospital (HSB) to assess the current status of HIV patients treated with a second line HAART regimen. Convenience sampling was used, and 104 patients were included based on the study's inclusion and exclusion criteria. Data were collected for six months, i.e., from July until December 2013, and then analysed using SPSS version 18. Kaplan-Meier and Cox regression analyses were used to measure median survival times and predictor factors for treatment failure. The study population consisted mainly of male subjects, aged 30-45 years, who were heterosexual and had had HIV infection for less than 6 years. The most common second line HAART regimen given was a lopinavir/ritonavir (LPV/r)-based combination. Kaplan-Meier analysis showed that patients on LPV/r demonstrated longer median survival times than patients on indinavir/ritonavir (IDV/r)-based combinations (p<0.001). The commonest reason for treatment failure with second line HAART was non-adherence. Based on Cox regression analysis, other predictor factors for treatment failure with a second line HAART regimen were age and mode of HIV transmission.

Keywords: adherence, antiretroviral therapy, second line, treatment failure

Procedia PDF Downloads 236
757 A Palmprint Identification System Based on Multi-Layer Perceptron

Authors: David P. Tantua, Abdulkader Helwan

Abstract:

Biometrics has recently been used in human identification systems based on biological traits such as fingerprints and iris scans. Biometrics-based identification systems show great efficiency and accuracy in such applications. However, these systems have so far been based on image processing techniques only, which may decrease their efficiency. Thus, this paper aims to develop a human palmprint identification system using a multi-layer perceptron neural network, which has the capability to learn using a backpropagation learning algorithm. The developed system uses images obtained from a public database available on the internet (CASIA). The processing pipeline is as follows: image filtering using a median filter, image adjustment, image skeletonizing, edge detection using the Canny operator to extract features, and clearing unwanted components of the image. The second phase is to feed the processed images into a neural network classifier, which adaptively learns and creates a class for each different image. 100 different images are used for training the system. Since this is an identification system, it is tested with the same images: the same 100 images are used for testing, and any image outside the training set should be unrecognized. The experimental results show that the developed system achieves 100% accuracy and can be implemented in real-life applications.
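The first step of the pipeline above, median filtering, is easy to sketch without libraries (in practice one would use something like `scipy.ndimage.median_filter`; the pure-Python version below, with our own naming, just makes the operation explicit): each pixel is replaced by the median of its 3x3 neighborhood, which removes impulse noise while keeping edges.

```python
def median_filter3(img):
    """3x3 median filter sketch for a 2D grayscale image (list of lists).
    Border pixels are left unchanged for simplicity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # middle of the 9 sorted neighborhood values
    return out
```

A single salt-noise pixel surrounded by background is entirely removed, which is why the median filter precedes skeletonizing and Canny edge detection in pipelines like this one.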

Keywords: biometrics, biological traits, multi-layer perceptron neural network, image skeletonizing, edge detection using canny operator

Procedia PDF Downloads 336
756 The Use of Image Processing Responses Tools Applied to Analysing Bouguer Gravity Anomaly Map (Tangier-Tetuan's Area-Morocco)

Authors: Saad Bakkali

Abstract:

Image processing is a powerful tool for the enhancement of edges in images used in the interpretation of geophysical potential field data. Aerial and terrestrial gravimetric surveys were carried out in the region of Tangier-Tetuan. From the observed and measured gravity data, a Bouguer gravity anomaly map was prepared. This paper reports the results and interpretations of the transformed Bouguer gravity anomaly maps of the Tangier-Tetuan area using image processing. Filtering analysis based on classical image processing was applied, using point operators such as logarithmic and gamma correction. This paper also presents the results obtained from this image processing analysis of the edge enhancement of the Bouguer gravity anomaly map of the Tangier-Tetuan zone.
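The two point operators named in the abstract have standard per-pixel formulas; a minimal sketch (function names are ours, and the choice of constants assumes 8-bit pixel values):

```python
import math

def gamma_correct(v, gamma, vmax=255.0):
    """Power-law (gamma) point operator: out = vmax * (v/vmax)**gamma.
    gamma < 1 brightens dark values; gamma > 1 darkens them."""
    return vmax * (v / vmax) ** gamma

def log_enhance(v, vmax=255.0):
    """Logarithmic point operator: expands dark values, compresses bright
    ones. The constant c maps the full input range back onto [0, vmax]."""
    c = vmax / math.log(1.0 + vmax)
    return c * math.log(1.0 + v)
```

Applied to a gravity anomaly raster rescaled to [0, 255], both operators stretch contrast in the low-amplitude range, which is what makes subtle anomaly edges easier to see.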

Keywords: bouguer, tangier, filtering, gamma correction, logarithmic enhancement edges

Procedia PDF Downloads 398
755 Time to CT in Major Trauma in Coffs Harbour Health Campus - The Australian Rural Centre Experience

Authors: Thampi Rawther, Jack Cecire, Andrew Sutherland

Abstract:

Introduction: CT facilitates the diagnosis of potentially life-threatening injuries and enables early management. There is evidence that reduced CT acquisition time reduces mortality and length of hospital stay. Currently, there are variable recommendations for ideal timing. Indeed, the NHS standard contract for a major trauma service and STAG both recommend immediate access to CT within a maximum time of 60 min and appropriate reporting within 60 min of the scan. At Coffs Harbour Health Campus (CHHC), a CT radiographer is on site between 8am and 11pm. Aim: To investigate the average time to CT at CHHC and assess for any significant relationship between time to CT and injury severity score (ISS) or time of triage. Method: All major trauma calls between Jan 2021 and Oct 2021 were audited (N=87). Patients were excluded if they went from the ED straight to theatre. Time to CT is defined as the time between triage and the timestamp on the first CT image. Median and interquartile range were used as measures of central tendency, as the data were not normally distributed, and the Chi-square test was used to determine association. Results: The median time to CT was 51.5 min (IQR 40-74). We found no relationship between time to CT and ISS (p=0.18), or between time of triage and time to CT (p=0.35). We compared this to other centres, such as John Hunter Hospital and Gold Coast Hospital, where the median CT acquisition times were 76 min (IQR 52-115) and 43 min, respectively. Conclusion: This shows an avenue for improvement, given that 35% of CTs took more than 30 min. Furthermore, being proactive and aware of time to CT as an important factor in trauma management can be another avenue for improvement. Based on this, we will re-audit in 12-24 months to assess whether any improvement has been made.
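For readers less familiar with the summary statistics used here, the median/IQR pair reported above can be computed directly with the standard library (the numbers below are illustrative only, not the study's data set):

```python
import statistics

def median_iqr(times):
    """Median and interquartile bounds (Q1, Q3) for a sample of times.
    Used instead of mean/SD when the data are not normally distributed."""
    q1, q2, q3 = statistics.quantiles(times, n=4)  # quartile cut points
    return q2, (q1, q3)
```

Reporting median (IQR) rather than mean (SD), as this audit does, keeps a few very slow scans from dominating the summary.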

Keywords: imaging, rural surgery, trauma surgery, improvement

Procedia PDF Downloads 68
754 Direct Cost of Anesthesia in Traumatic Patients with Massive Bleeding: A Prospective Micro-Costing Study

Authors: Asamaporn Puetpaiboon, Sunisa Chatmongkolchart, Nalinee Kovitwanawong, Osaree Akaraborworn

Abstract:

Traumatic patients with massive bleeding require intensive resuscitation. The actual cost of anesthesia per case has never been clarified, so our study aimed to quantify the direct cost and cost-to-charge ratio of anesthetic care in traumatic patients with intraoperative massive bleeding. This was a prospective, observational cost analysis study conducted in Prince of Songkla University Hospital, Thailand, with traumatic patients of any injury mechanism being recruited. Massive bleeding was defined as an estimated blood loss of at least one blood volume in 24 hours, or half a blood volume in 3 hours. The cost components were identified by the micro-costing method and valued by the bottom-up approach. The direct cost was divided into 4 categories: the labor cost, the capital cost, the material cost, and the cost of drugs. From September 2017 to August 2018, 10 patients with multiple injuries were included. Seven patients had motorcycle accidents, two patients fell from a height, and another one was in a minibus accident. Two patients died on the operating table, and another two died within 48 hours. The median Sequential Organ Failure Assessment (SOFA) score was 8. The median intraoperative blood loss was 3,500 ml. The median direct cost per case was 250 United States Dollars (2017 exchange rate), and the cost-to-charge ratio was 0.53. In summary, the direct cost was nearly half of the hospital charge for these traumatic patients with massive bleeding. However, our study did not analyze the indirect cost.

Keywords: cost, cost-to-charge ratio, micro-costing, trauma

Procedia PDF Downloads 117
753 A Background Subtraction Based Moving Object Detection Around the Host Vehicle

Authors: Hyojin Lim, Cuong Nguyen Khac, Ho-Youl Jung

Abstract:

In this paper, we propose a moving object detection method which helps the driver to safely take his or her car out of a parking lot. When moving objects such as motorbikes, pedestrians, other cars, and obstacles are detected at the rear side of the host vehicle, the proposed algorithm provides a warning to the driver. We assume that the host vehicle is just before departure. Gaussian Mixture Model (GMM)-based background subtraction is applied, with pre-processing (smoothing) and post-processing (morphological filtering) added. We examine which color space has better performance for the detection of moving objects: three color spaces, RGB, YCbCr, and Y, are applied and compared in terms of detection rate. Through simulation, we show that the RGB space is more suitable for moving object detection based on background subtraction.
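In practice the GMM step would come from a library (e.g., OpenCV's `createBackgroundSubtractorMOG2`). As a dependency-free sketch of the underlying idea only, a single running-average background model with thresholding stands in for the per-pixel mixture of Gaussians (all names and parameters below are ours):

```python
def detect_moving(frames, alpha=0.1, thresh=30):
    """Simplified background-subtraction sketch on grayscale frames
    (lists of lists). A per-pixel exponential running average stands in
    for the GMM background model; pixels far from the model are flagged
    as moving (1), others as background (0)."""
    background = [[float(v) for v in row] for row in frames[0]]
    masks = []
    for frame in frames[1:]:
        mask = []
        for y, row in enumerate(frame):
            mrow = []
            for x, v in enumerate(row):
                mrow.append(1 if abs(v - background[y][x]) > thresh else 0)
                # slowly adapt the background model toward the current frame
                background[y][x] = (1 - alpha) * background[y][x] + alpha * v
            mask.append(mrow)
        masks.append(mask)
    return masks
```

The binary masks produced here are exactly what the abstract's morphological post-processing would then clean up (removing speckles, filling holes) before raising a warning.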

Keywords: gaussian mixture model, background subtraction, moving object detection, color space, morphological filtering

Procedia PDF Downloads 583
752 Filtering and Reconstruction System for Grey-Level Forensic Images

Authors: Ahd Aljarf, Saad Amin

Abstract:

Images are an important source of information used as evidence during any investigation process; their clarity and accuracy are essential and of the utmost importance. Images are vulnerable to losing blocks and having noise added to them, either after alteration or when the image was taken initially; therefore, having a high-performance image processing system, and implementing it, is very important from a forensic point of view. This paper focuses on improving the quality of forensic images. For different reasons, packets that store image data can be affected, harmed, or even lost because of noise; for example, sending an image through a wireless channel can cause loss of bits. These types of errors generally degrade the visual display quality of forensic images. Two image problems are covered: noise and block loss. Information transmitted through any means of communication may suffer alteration from its original state or even lose important data due to channel noise. Therefore, a system is introduced to improve the quality and clarity of forensic images.

Keywords: image filtering, image reconstruction, image processing, forensic images

Procedia PDF Downloads 336
751 Diagnostic Evaluation of Micro Rna (miRNA-21, miRNA-215 and miRNA-378) in Patients with Colorectal Cancer

Authors: Ossama Abdelmotaal, Olfat Shaker, Tarek Salman, Lamiaa Nabeel, Mostafa Shabayek

Abstract:

Colorectal cancer (CRC) is an important worldwide health problem. Colonoscopy, used in detecting CRC, suffers from the drawback of being an invasive method. This study validates easier and less time-consuming techniques by evaluating the usefulness of detecting miRNA-21, miRNA-215, and miRNA-378 in the sera of colorectal cancer patients as new diagnostic tools. The study includes a malignant group (colorectal cancer patients, n=64) and a healthy group (n=27). The studied groups were subjected to colonoscopic examination and estimation of miRNA-21, miRNA-215, and miRNA-378 in sera by RT-PCR. miRNA-21 showed the statistically significantly highest median fold change, and miRNA-378 a statistically significantly lower value (both showed over-expression). miRNA-215 showed the statistically significantly lowest median fold change (it showed down-regulation). Overall, the miRNAs (21, 215, and 378) appear to be a promising means of detecting CRC and evaluating its stages.

Keywords: colorectal cancer, miRNA-21, miRNA-215, miRNA-378

Procedia PDF Downloads 274
750 Evaluation of Diagnosis Performance Based on Pairwise Model Construction and Filtered Data

Authors: Hyun-Woo Cho

Abstract:

It is quite important to employ timely and intelligent production monitoring and diagnosis of industrial processes in terms of quality and safety issues. Compared with the monitoring task, fault diagnosis is the task of finding the process variables responsible for causing a specific fault in the process. It can help process operators investigate and eliminate root causes more effectively and efficiently. This work focused on combining a nonlinear statistical technique with a preprocessing method in order to implement practical real-time fault identification schemes for data-rich cases. To compare its performance to existing identification schemes, a case study on a benchmark process was performed in several scenarios. The results showed that the proposed fault identification scheme produced more reliable diagnosis results than linear methods. In addition, the filtering step improved the identification results for complicated processes with massive data sets.

Keywords: diagnosis, filtering, nonlinear statistical techniques, process monitoring

Procedia PDF Downloads 213
749 Semantic-Based Collaborative Filtering to Improve Visitor Cold Start in Recommender Systems

Authors: Baba Mbaye

Abstract:

In collaborative filtering recommendation systems, a user receives suggested items based on the opinions and evaluations of a community of users. This type of recommendation system uses only the information (ratings as numerical values) contained in a usage matrix as input data. This matrix can be constructed from users' behaviors or by asking users to declare their opinions on the items they know. The cold start problem leads to very poor performance for new users: it occurs at the beginning of use, when the system lacks the data to make recommendations. There are three types of cold start problems: for a new item, a new system, and a new user. In this article, we are interested in the cold start for a new user. When the system welcomes a new user, the profile exists but does not have enough data, and its connections with other user profiles are still unknown. This leads to recommendations not adapted to the new user's profile. In this paper, we propose an approach that improves cold start by using the notions of similarity and semantic proximity between user profiles during cold start. We use the available cold metadata (metadata extracted from the new user's data), which is useful for positioning the new user within a community. The aim is to look for similarities and semantic proximities with the old and current user profiles of the system. Proximity is represented by close concepts considered to belong to the same group, while similarity groups together elements that appear alike; similarity and proximity are two close but distinct notions. The similarity measure is based on: (a) concepts (properties, terms, instances) independent of the ontology structure, and (b) the simultaneous representation of the two concepts (relations, presence of terms in a document, simultaneous presence of the authorities).
We propose an ontology, OIVCSRS (Ontology of Improvement Visitor Cold Start in Recommender Systems), in order to structure the terms and concepts representing the meaning of an information field, whether through the metadata of a namespace or the elements of a knowledge domain. This approach allows us to automatically attach the new user to a user community, partially compensate for the data that was not initially provided, and ultimately associate a better first profile with the cold start. Thus, the aim of this paper is to propose an approach to improving cold start using semantic technologies.
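The attachment step can be illustrated with a much simpler stand-in for the ontology-based measure: encode the cold metadata and each community profile as feature vectors and attach the new user to the most similar community by cosine similarity (everything below, including the feature encoding, is a hypothetical sketch, not the paper's OIVCSRS measure):

```python
import math

def cosine(u, v):
    """Cosine similarity between two metadata feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def attach_to_community(new_user, communities):
    """Assign a new user's metadata vector to the most similar community
    centroid, so first recommendations can borrow that community's ratings."""
    return max(communities, key=lambda name: cosine(new_user, communities[name]))
```

Once attached, the new user's empty rating row can be seeded from the chosen community's aggregate ratings, which is the "better first profile" the approach aims for.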

Keywords: visitor cold start, recommender systems, collaborative filtering, semantic filtering

Procedia PDF Downloads 195
748 Analyzing On-Line Process Data for Industrial Production Quality Control

Authors: Hyun-Woo Cho

Abstract:

The monitoring of industrial production quality has to be implemented to provide early warnings of unusual operating conditions. Furthermore, identification of their assignable causes is necessary for quality control purposes. For such tasks, many multivariate statistical techniques have been applied and shown to be quite effective tools. This work presents a process data-based monitoring scheme for production processes. For more reliable results, additional steps of noise filtering and preprocessing are considered; these may enhance performance by eliminating unwanted variation in the data. The performance evaluation is executed using data sets from test processes. The proposed method is shown to provide reliable quality control results, and is thus more effective for quality monitoring in the example. For practical implementation of the method, an on-line data system must be available to gather historical and on-line data. Large amounts of data are now collected on-line in most processes, so implementation of the current scheme is feasible and imposes no additional burden on users.

Keywords: detection, filtering, monitoring, process data

Procedia PDF Downloads 518
747 Email Phishing Detection Using Natural Language Processing and Convolutional Neural Network

Authors: M. Hilani, B. Nassih

Abstract:

Phishing is one of the oldest and best-known scams on the Internet. It can be defined as any type of telecommunications fraud that uses social engineering tricks to obtain confidential data from its victims; it is a cybercrime aimed at stealing sensitive information. Phishing is generally done via private email, where scammers impersonate large companies or other trusted entities to encourage victims to voluntarily provide information such as login credentials or, worse yet, credit card numbers. The COVID-19 theme has been used by cybercriminals in multiple malicious campaigns, including phishing. In this environment, message filtering solutions have become essential to protect devices that are now used outside the secure perimeter. Despite constantly updated methods to counter these cyberattacks, current results are insufficient, and many researchers are still looking for optimal solutions to filter phishing emails. In this work, we concentrated on the problem of detecting phishing emails using the different steps of NLP preprocessing, and we proposed and trained a model using a one-dimensional CNN. Our results show that the model obtained an accuracy of 99.99%, which demonstrates how well it performs.
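The abstract does not enumerate its exact preprocessing steps, but a typical NLP front end for such a model (sketched below with our own function name and masking tokens) lower-cases the text, masks URLs and numbers, and tokenizes; the tokens would then be mapped to integer ids and padded before entering the 1-D CNN:

```python
import re

def preprocess(email_text):
    """Illustrative NLP preprocessing for email classification:
    lower-casing, URL/number masking, and simple tokenization."""
    text = email_text.lower()
    text = re.sub(r"https?://\S+", "<url>", text)  # mask links
    text = re.sub(r"\d+", "<num>", text)           # mask numbers
    # keep word tokens plus the special mask tokens
    return re.findall(r"<url>|<num>|[a-z']+", text)
```

Masking rather than deleting URLs and numbers keeps the positional signal ("click <url> within <num> hours") that convolutional filters can pick up on.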

Keywords: phishing, e-mail, NLP preprocessing, CNN, e-mail filtering

Procedia PDF Downloads 83
746 Low Enrollment in Antiretroviral Treatment among Pregnant Women Screened HIV Infected in Informal Health Centers in Cameroon

Authors: Lydie Audrey Amboua Schouame, Sylvie Kwedi Nolna, Antoine Socpa, Alexandre Benjamin Nkoum

Abstract:

Background: Despite the struggle of the Cameroonian Ministry of Public Health against informal health centers (IHCs) because of their illegality, IHCs are booming in Cameroon, and a large part of the population uses them. In 2017, more than 3,000 IHCs were counted across the country. Most of these IHCs have antenatal clinics, and they screen pregnant women for HIV. However, there are no data on the prevention of mother-to-child transmission of HIV (PMTCT) in this informal health sector in Cameroon. This study aimed to investigate the initiation of antiretroviral treatment (ART) in pregnant women screened HIV positive in IHCs, and the associated factors. Methods: From January 01, 2018, to June 30, 2020, we carried out a cohort study of pregnant women attending their first antenatal visit and screened HIV positive in informal health centers in the cities of Douala and Ebolowa in Cameroon. Consenting participants were interviewed at two points: at least one week after delivery of the HIV result, and three months later. The collected data were entered into Kobo Collect and analyzed in SPSS v23.0. Results: A total of 182 HIV-infected pregnant women were enrolled in the study. The median age at enrollment was 30 years (IQR, 24-34), and the median gestational age at first ANC was 25 weeks (IQR, 19-31). Overall, 61% (111/182) had a secondary level of education, 65% (118/182) were married or in a common-law relationship, and 69% (126/182) had no income activity. At their first ANC, 91% (166/182) were naïve to ARV treatment. Among them, only 45% (74/166) initiated ART. The median delay in initiating ARV treatment was 5 days (IQR, 0-25). Of those who started ART, only 64% (48/74) remained on treatment 3 months later. Conclusion: In order to eliminate mother-to-child transmission of HIV, attention should be paid to IHCs.

Keywords: informal health centers, human immunodeficiency, antiretroviral treatment, pregnant women

Procedia PDF Downloads 116
745 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface

Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto

Abstract:

Motor imagery (MI) based brain-computer interfaces (BCIs) use event-related (de)synchronization (ERD/ERS), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and measurement noise in EEG signals, methods based on band-pass filters defined over a specific frequency band (e.g., 8-30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques, such as Common Spatial Patterns (CSP), are also used to estimate the variance of the filtered signal and extract features that characterize the imagined motion. The effectiveness of CSP depends on the subject's discriminative frequency, and approaches based on decomposing the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCIs that implement SBCSP. The goal is to apply the FFT algorithm to reduce the computational cost of the processing step of these systems and make them more efficient without compromising classification accuracy. The proposal is based on representing EEG signals as a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The SBCSP structure divides the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, which are processed in parallel, each by a CSP filter and an LDA classifier. A Bayesian meta-classifier then represents the LDA outputs of each sub-band as scores and organizes them into a single vector, which is used as the training vector of a global SVM classifier.
Initially, the public EEG data set IIa of BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact (its dimension is 68% smaller than the original signal), the resulting FFT matrix retains the signal information relevant to class discrimination. In addition, the results showed an average reduction of 31.6% in computational cost relative to filtering methods based on IIR filters, suggesting the efficiency of the FFT when applied in the filtering step. Finally, the frequency decomposition approach improves the overall system classification rate significantly compared to the commonly used filtering, going from 73.7% using IIR to 84.2% using FFT. The accuracy improvement above 10% and the reduction in computational cost denote the potential of the FFT in EEG signal filtering applied to MI-based BCIs implementing SBCSP. Tests with other data sets are currently being performed to reinforce these conclusions.
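The core idea of replacing per-band IIR filtering with one spectral decomposition can be sketched as follows. In practice one would use `numpy.fft.rfft`; the naive DFT below keeps the sketch dependency-free, and the band layout is illustrative rather than the paper's 33-band configuration:

```python
import cmath, math

def dft(signal):
    """Naive one-sided DFT (numpy.fft.rfft would be used in practice)."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n // 2 + 1)]

def subband_coeffs(signal, fs, bands):
    """Compute the spectrum once, then split it into sub-bands: for each
    (lo, hi) band in Hz, keep the coefficients whose bin frequency
    k*fs/n falls inside the band. Each coefficient group would then feed
    one CSP+LDA branch of the SBCSP structure."""
    n = len(signal)
    spectrum = dft(signal)
    return [[c for k, c in enumerate(spectrum) if lo <= k * fs / n < hi]
            for lo, hi in bands]
```

Because the FFT is computed once per window and the sub-bands are just index slices of the coefficient matrix, the per-band filtering cost disappears, which is the source of the computational saving the abstract reports.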

Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns

Procedia PDF Downloads 94
744 Geomatic Techniques to Filter Vegetation from Point Clouds

Authors: M. Amparo Núñez-Andrés, Felipe Buill, Albert Prades

Abstract:

More and more frequently, geomatics techniques such as terrestrial laser scanning or digital photogrammetry, either terrestrial or from drones, are being used to obtain digital terrain models (DTM) used for the monitoring of geological phenomena that cause natural disasters, such as landslides, rockfalls, debris-flow. One of the main multitemporal analyses developed from these models is the quantification of volume changes in the slopes and hillsides, either caused by erosion, fall, or land movement in the source area or sedimentation in the deposition zone. To carry out this task, it is necessary to filter the point clouds of all those elements that do not belong to the slopes. Among these elements, vegetation stands out as it is the one we find with the greatest presence and its constant change, both seasonal and daily, as it is affected by factors such as wind. One of the best-known indexes to detect vegetation on the image is the NVDI (Normalized Difference Vegetation Index), which is obtained from the combination of the infrared and red channels. Therefore it is necessary to have a multispectral camera. These cameras are generally of lower resolution than conventional RGB cameras, while their cost is much higher. Therefore we have to look for alternative indices based on RGB. In this communication, we present the results obtained in Georisk project (PID2019‐103974RB‐I00/MCIN/AEI/10.13039/501100011033) by using the GLI (Green Leaf Index) and ExG (Excessive Greenness), as well as the change to the Hue-Saturation-Value (HSV) color space being the H coordinate the one that gives us the most information for vegetation filtering. These filters are applied both to the images, creating binary masks to be used when applying the SfM algorithms, and to the point cloud obtained directly by the photogrammetric process without any previous filter or the one obtained by TLS (Terrestrial Laser Scanning). 
In this last case, we have also worked with a Riegl VZ400i sensor that, as in aerial LiDAR, can receive several returns of the signal, information that can be used for classification of the point cloud. After applying all the techniques in different locations, the results show that the color-based filters allow correct filtering in those areas where the presence of shadows is not excessive and there is contrast between the color of the slope lithology and the vegetation. As noted above, when using the HSV color space, it is the H coordinate that responds best for this filtering. Finally, the use of the various returns of the TLS signal allows filtering with some limitations.
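As a minimal illustration of RGB-only vegetation indices of the kind used above, the following Python sketch computes GLI and ExG for individual pixels and builds a binary vegetation mask. The 0.1 threshold is an illustrative assumption, not a value from the study:

```python
def gli(r, g, b):
    """Green Leaf Index: (2G - R - B) / (2G + R + B)."""
    denom = 2 * g + r + b
    return (2 * g - r - b) / denom if denom else 0.0

def exg(r, g, b):
    """Excess Green index on chromaticity-normalised channels."""
    total = r + g + b
    if not total:
        return 0.0
    rn, gn, bn = r / total, g / total, b / total
    return 2 * gn - rn - bn

def vegetation_mask(pixels, threshold=0.1):
    """True where a pixel is likely vegetation (threshold is illustrative)."""
    return [gli(*px) > threshold for px in pixels]
```

Applied to a leaf-green pixel, a grey rock pixel, and a brown soil pixel, only the first exceeds the threshold.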

Keywords: RGB index, TLS, photogrammetry, multispectral camera, point cloud

Procedia PDF Downloads 95
743 Effect of Foot Posture and Fatigue on Static Balance and Electromyographic Activity of Selected Lower Limb Muscles in School Children Aged 12 to 14 Years

Authors: Riza Adriyani, Tommy Apriantono, Suprijanto

Abstract:

Objective: Several studies have revealed that flatfoot posture alters lower limb muscle function in comparison to normal foot posture, but studies examining the effect of fatigue on flatfoot posture in children remain limited. Therefore, this study aimed to determine the effect of jumping-induced fatigue on static balance and to compare lower limb muscle function between flatfoot and normal foot in school children. Methods: Thirty junior high school children aged 12 to 14 years took part in this study. Of these children, 15 had normal feet (8 males and 7 females) and 15 had flatfeet (6 males and 9 females). Foot posture was classified based on the arch index of the footprint, obtained with a foot scanner and calculated using AutoCAD 2013 software. Surface electromyography (EMG) activity was recorded from the tibialis anterior, gastrocnemius medialis, and peroneus longus muscles while participants stood barefoot on one leg with eyes open. All participants completed the entire protocol (pre-fatigue data collection, fatigue protocol, and post-fatigue data collection) in a single session; static balance and electromyographic data were collected before and after a functional fatigue protocol. Results: Children with normal feet had an arch index of 0.25±0.01, whereas those with flatfeet had 0.36±0.01; there were no significant differences in anthropometric characteristics between the two groups. The statistical analysis showed that fatigue influenced static balance in flatfoot school children (p < 0.05), but not in normal-foot school children. Based on the electromyographic data, the decrease in median frequency on the tibialis anterior after fatigue was significantly greater (p < 0.05) in flatfoot than in normal-foot school children.
However, there were no significant differences in the median frequency of the gastrocnemius medialis and peroneus longus between the two groups. After fatigue, median frequency timing was significantly different (p < 0.05) on the tibialis anterior in flatfoot compared to normal-foot children, and tended to appear earlier on the tibialis anterior, gastrocnemius medialis, and peroneus longus in flatfoot (at 7 s, 8 s, and 9 s) compared to normal foot (at 15 s, 11 s, and 12 s). Conclusion: Fatigue influenced static balance, and its effects tended to appear earlier in selected lower limb muscles during static balance in flatfoot school children. After fatigue, the decrease in median frequency, indicative of muscle tremor, was significantly more pronounced on the tibialis anterior in flatfoot than in normal-foot school children.
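The median frequency statistic used above is the frequency at which the cumulative power spectrum reaches half the total power. A minimal Python sketch, with a naive DFT standing in for whatever spectral estimator the study actually used:

```python
import cmath
import math

def power_spectrum(x):
    """Naive one-sided DFT power spectrum (bins 0..n//2)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) ** 2
            for k in range(n // 2 + 1)]

def median_frequency(x, fs):
    """Frequency at which cumulative spectral power reaches half the total."""
    psd = power_spectrum(x)
    half = sum(psd) / 2
    cum = 0.0
    for k, p in enumerate(psd):
        cum += p
        if cum >= half:
            return k * fs / len(x)
    return (len(psd) - 1) * fs / len(x)

# demo: a pure 10 Hz sine sampled at 128 Hz has all its power at 10 Hz
fs = 128
emg_like = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
mdf = median_frequency(emg_like, fs)
```

For the pure-tone demo the median frequency coincides with the tone frequency; a fatigue-related leftward shift of the EMG spectrum would lower this value.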

Keywords: fatigue, foot postures, median frequency, static balance

Procedia PDF Downloads 472
742 A Comparative Study in Acute Pancreatitis to Find out the Effectiveness of Early Addition of Ulinastatin to Current Standard Care in Indian Subjects

Authors: Dr. Jenit Gandhi, Dr. Manojith SS, Dr. Nakul GV, Dr. Sharath Honnani, Dr. Shaurav Ghosh, Dr. Neel Shetty, Dr. Nagabhushan JS, Dr. Manish Joshi

Abstract:

Introduction: Acute pancreatitis is an inflammatory condition of the pancreas that begins in pancreatic acinar cells and triggers local inflammation, which may progress to a systemic inflammatory response (SIRS), causing distant organ involvement and dysfunction and ending in multiple organ dysfunction syndrome (MODS). Aim: A comparative study in acute pancreatitis to find out the effectiveness of early addition of Ulinastatin to current standard care in Indian subjects. Methodology: A prospective observational study was conducted over a one-year period (Dec 2018 – Dec 2019) to evaluate the effect of early addition of Ulinastatin to the current standard treatment and its efficacy in reducing early complications, analgesic requirement, and duration of hospital stay in patients with acute pancreatitis. Results: In the control group, 25 patients were male and 5 were female; in the test group, 18 were male and 12 female. The majority were in the age group between 30 and 70 years, with more than 50% aged 30-50 years in both groups. The VAS was a median of grade 3 in the control group compared to a median of grade 2 in the test group; pain persisted for the initial 2 days in the test group compared to 4 days in the control group, and analgesics were required for longer in the control group (median 6 days) than in the test group (median 3 days). On follow-up after 5 days, for a period of 2 weeks, none of the patients in the test group developed any new complication. In the control group, 8 patients developed pleural effusion, 4 a pseudopancreatic cyst, 2 portal vein and splenic vein thrombosis, and 2 required ventilation for ARDS, all treated symptomatically; in the test group, 2 patients developed pleural effusions, 1 a pseudopancreatic cyst with splenic artery aneurysm, and 1 AKI with MODS, treated symptomatically.
The duration of hospital stay was a median of 4 days (2-7 days) in the test group and 7 days (4-10 days) in the control group. All patients in the test group were able to return to normal work after an average of 5 days, compared to 8 days in the control group; the difference was significant. Conclusion: The study concluded that the early addition of Ulinastatin to the current standard treatment of acute pancreatitis is effective in reducing pain, early complications, and duration of hospital stay in Indian subjects.

Keywords: Ulinastatin, VAS – visual analogue score, AKI – acute kidney injury, ARDS – acute respiratory distress syndrome

Procedia PDF Downloads 94
741 Power Quality Modeling Using Recognition Learning Methods for Waveform Disturbances

Authors: Sang-Keun Moon, Hong-Rok Lim, Jin-O Kim

Abstract:

This paper presents Power Quality (PQ) modeling and filtering processes for distribution system disturbances using recognition learning methods. Typical PQ waveforms with mathematical applications, together with gathered field data, are applied to the proposed models. The objective of this paper is to analyze PQ data with respect to monitoring, discriminating, and evaluating the waveforms of power disturbances, in order to support preventive protection against system failures and the estimation of complex system problems. Signal filtering techniques are examined for removing noise from field waveforms and for feature extraction. Using extraction and learning classification techniques, the efficiency of recognizing PQ disturbances was verified, with a focus on interactive modeling methods. The waveforms of 8 selected disturbances are modeled with randomized parameters within the IEEE 1159 PQ ranges; the ranges, parameters, and weights are updated according to the field waveforms obtained. Currents undergo the same process as voltages to obtain waveform features, apart from some ratings and filters. Changing loads cause distortion in the voltage waveform by drawing different patterns of current variation. In conclusion, PQ disturbances in the voltage and current waveforms exhibit different patterns of variation, and a modified technique based on symmetrical components in the time domain is proposed in this paper for PQ disturbance detection and subsequent classification. Our method is based on the fact that waveforms obtained from the suggested trigger conditions contain potential information for abnormality detection. The extracted features are sequentially applied to estimation and recognition learning modules for further studies.
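The symmetrical-component decomposition mentioned in the conclusion can be sketched in its classical phasor form. The paper proposes a time-domain variant; this minimal Python example shows only the underlying Fortescue transform on which such methods build:

```python
import cmath

def symmetrical_components(va, vb, vc):
    """Zero-, positive-, and negative-sequence phasors (Fortescue transform)."""
    a = cmath.exp(2j * cmath.pi / 3)   # 120-degree rotation operator
    v0 = (va + vb + vc) / 3
    v1 = (va + a * vb + a * a * vc) / 3
    v2 = (va + a * a * vb + a * vc) / 3
    return v0, v1, v2
```

A balanced three-phase set maps entirely onto the positive sequence; an unbalanced disturbance (for example, a dropped phase) produces nonzero zero- and negative-sequence components, which is what makes the decomposition useful for disturbance detection.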

Keywords: power quality recognition, PQ modeling, waveform feature extraction, disturbance trigger condition, PQ signal filtering

Procedia PDF Downloads 162
740 Optimization of Multiplier Extraction Digital Filter On FPGA

Authors: Shiksha Jain, Ramesh Mishra

Abstract:

Filtering is one of the most widely used complex signal processing operations. FIR digital filters, among the most important, are widely used in DSP to alter the spectrum according to given specifications. Power consumption and area complexity in Finite Impulse Response (FIR) filter algorithms are mainly caused by multipliers, so we present a multiplier-less technique based on Distributed Arithmetic (DA). In this technique, precomputed values of the inner product are stored in a look-up table (LUT) and are then added and shifted, with the number of iterations equal to the precision of the input sample. However, the exponential growth of the LUT with the order of the FIR filter makes this basic structure prohibitive for many applications. A significant area and power reduction over the traditional Distributed Arithmetic (DA) structure is presented in this paper, achieved by slicing the LUT to the desired length. An architecture of a 16-tap FIR filter is presented with different LUT slice lengths. The results of the FIR filter implementation with the Xilinx ISE synthesis tool (XST) on a Virtex-4 FPGA using the proposed method show an increase in maximum frequency, a decrease in resource usage (area savings) with a greater number of slices, and a reduction in dynamic power.
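The DA shift-add scheme with LUT slicing described above can be modeled in software. This is a minimal Python sketch of the arithmetic only, assuming unsigned inputs; a hardware implementation would additionally handle two's-complement samples and pipeline the shift-add loop:

```python
def build_luts(coeffs, slice_len):
    """One LUT per slice of taps: entry `pattern` holds the sum of the
    slice's coefficients whose corresponding input bit is 1."""
    luts = []
    for start in range(0, len(coeffs), slice_len):
        group = coeffs[start:start + slice_len]
        luts.append([sum(c for i, c in enumerate(group) if pattern >> i & 1)
                     for pattern in range(1 << len(group))])
    return luts

def da_inner_product(samples, coeffs, slice_len, bits):
    """Multiplier-less DA inner product for unsigned `bits`-wide samples:
    one shift-add iteration per bit of input precision."""
    luts = build_luts(coeffs, slice_len)
    acc = 0
    for b in range(bits):
        partial = 0
        for s, lut in enumerate(luts):
            pattern = 0                      # bit b of each sample in slice s
            for i in range(slice_len):
                k = s * slice_len + i
                if k < len(samples) and samples[k] >> b & 1:
                    pattern |= 1 << i
            partial += lut[pattern]
        acc += partial << b                  # weight by bit significance
    return acc
```

Shorter slices trade LUT size (2^N entries for an N-tap slice) against extra adders, which is exactly the area/speed trade-off the paper explores on the FPGA.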

Keywords: multiplier less technique, linear phase symmetric FIR filter, FPGA tool, look up table

Procedia PDF Downloads 364
739 User Modeling from the Perspective of Improvement in Search Results: A Survey of the State of the Art

Authors: Samira Karimi-Mansoub, Rahem Abri

Abstract:

Currently, users expect high-quality and personalized information from search results. To satisfy users' needs, personalized approaches to web search have been proposed. These approaches can provide the most appropriate answer for a user's needs by using user context and incorporating information about the query, combining search technologies. Carrying out personalized web search requires applying different techniques across the whole of the user search process. Personalized approaches can be deployed in a number of forms, such as personalized web search, personalized recommendation, personalized summarization, and filtering systems, but the common feature of all approaches across these domains is that user modeling is utilized to provide personalized information from the Web. The most important task in personalized approaches is therefore user model mining. User modeling applications and technologies can be used in various domains depending on how the collected user information may be extracted; in addition, the techniques used to create the user model differ across these applications. Since previous studies have not provided a complete survey of this field, our purpose is to present a survey of the applications and techniques of user modeling from the viewpoint of improvement in search results, considering the existing literature and research.

Keywords: filtering systems, personalized web search, user modeling, user search behavior

Procedia PDF Downloads 247
738 A Near-Optimal Domain Independent Approach for Detecting Approximate Duplicates

Authors: Abdelaziz Fellah, Allaoua Maamir

Abstract:

We propose a domain-independent merging-cluster filter approach, complemented with a set of algorithms, for identifying approximate duplicate entities efficiently and accurately within a single data source and across multiple data sources. The near-optimal merging-cluster filter (MCF) approach is based on the well-tuned Monge-Elkan algorithm, extended with an affine variant of the Smith-Waterman similarity measure. We then present constant, variable, and function threshold algorithms that work conceptually in a divide-merge filtering fashion, detecting near duplicates as hierarchical clusters along with their corresponding representatives. The algorithms take recursive refinement approaches in the spirit of filtering, merging, and updating cluster representatives to detect approximate duplicates at each level of the cluster tree. Experiments show the high effectiveness and accuracy of the MCF approach in detecting approximate duplicates, outperforming the seminal Monge-Elkan algorithm on several real-world benchmarks and generated datasets.
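A minimal sketch of the Monge-Elkan measure underlying the MCF approach, here with a normalised edit-distance token similarity standing in for the affine Smith-Waterman variant actually used in the paper:

```python
def levenshtein(a, b):
    """Classic edit distance via row-by-row dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def token_sim(a, b):
    """Normalised similarity in [0, 1] derived from edit distance."""
    m = max(len(a), len(b), 1)
    return 1 - levenshtein(a, b) / m

def monge_elkan(tokens_a, tokens_b, sim=token_sim):
    """Average, over tokens of A, of the best-matching similarity in B."""
    return sum(max(sim(a, b) for b in tokens_b)
               for a in tokens_a) / len(tokens_a)
```

Token reordering does not hurt the score ("john smith" vs "smith john" scores 1.0), which is one reason Monge-Elkan works well for entity names.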

Keywords: data mining, data cleaning, approximate duplicates, near-duplicates detection, data mining applications and discovery

Procedia PDF Downloads 358
737 An Improved Tracking Approach Using Particle Filter and Background Subtraction

Authors: Amir Mukhtar, Dr. Likun Xia

Abstract:

An improved, robust, and efficient visual target tracking algorithm using particle filtering is proposed. Particle filtering has proven very successful for estimating non-Gaussian and non-linear problems. In this paper, the particle filter is used with a color feature to estimate the target state over time. Color distributions are applied because this feature is scale and rotation invariant, robust to partial occlusion, and computationally efficient. Since color-based particle filter tracking often leads to inaccurate results when light intensity changes during a video stream, the performance is made more robust by choosing the YIQ color scheme. Tracking is performed by comparing chrominance histograms of the target and candidate positions (particles). Furthermore, a background subtraction technique is used for size estimation of the target. A qualitative evaluation of the proposed algorithm is performed on several real-world videos. The experimental results demonstrate that the improved algorithm can track moving objects very well under illumination changes, occlusion, and moving backgrounds.
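The histogram-weighting step at the core of color-based particle tracking can be sketched on a 1-D toy scene. This minimal Python example uses quantised colour labels and evenly spread deterministic particles (no resampling, motion model, or YIQ conversion), so it illustrates only the idea of weighting candidates by histogram similarity:

```python
def color_histogram(region, n_colors=4):
    """Normalised histogram of quantised colour labels in a window."""
    hist = [0.0] * n_colors
    for c in region:
        hist[c] += 1
    total = sum(hist)
    return [v / total for v in hist] if total else hist

def bhattacharyya(p, q):
    """Similarity between two normalised histograms (1.0 = identical)."""
    return sum((a * b) ** 0.5 for a, b in zip(p, q))

def estimate_position(scene, ref_hist, particles, half_win=2):
    """Weight each particle by histogram similarity to the reference;
    return the weighted-mean position estimate."""
    weights = [bhattacharyya(
                   color_histogram(scene[p - half_win:p + half_win + 1]),
                   ref_hist)
               for p in particles]
    total = sum(weights)
    return sum(p * w for p, w in zip(particles, weights)) / total

# toy 1-D "frame": background colour 0, distinctive target patch at 68..72
scene = [0] * 100
scene[68:73] = [1, 2, 3, 2, 1]
ref_hist = color_histogram(scene[68:73])
particles = list(range(2, 98))            # evenly spread candidate positions
estimate = estimate_position(scene, ref_hist, particles)
```

Only particles whose windows overlap the target receive nonzero weight, so the weighted mean lands on the target centre.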

Keywords: tracking, particle filter, histogram, corner points, occlusion, illumination

Procedia PDF Downloads 354
736 A Novel RLS Based Adaptive Filtering Method for Speech Enhancement

Authors: Pogula Rakesh, T. Kishore Kumar

Abstract:

Speech enhancement is a long-standing problem with numerous applications, including teleconferencing, VoIP, hearing aids, and speech recognition. The motivation behind this research work is to obtain a clean speech signal of higher quality by applying an optimal noise cancellation technique. Real-time adaptive filtering algorithms are among the best candidates across all categories of speech enhancement methods. In this paper, we propose a speech enhancement method based on a Recursive Least Squares (RLS) adaptive filter for speech signals. Experiments were performed on noisy data prepared by adding AWGN, babble, and pink noise to clean speech samples at -5 dB, 0 dB, 5 dB, and 10 dB SNR levels. We then compare the noise cancellation performance of the proposed RLS algorithm with the existing NLMS algorithm in terms of Mean Squared Error (MSE), Signal-to-Noise Ratio (SNR), and SNR loss. Based on the performance evaluation, the proposed RLS algorithm was found to be the better optimal noise cancellation technique for speech signals.
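A minimal sketch of the RLS adaptation loop (exponentially weighted, with the standard inverse-correlation-matrix update). The forgetting factor and regularisation values are illustrative assumptions, and the demo identifies a known 2-tap system from a synthetic signal rather than enhancing real speech:

```python
import math

def rls(x, d, order=2, lam=0.99, delta=100.0):
    """Recursive Least Squares: adapt weights w so that w'u(n) tracks d(n)."""
    w = [0.0] * order
    P = [[delta if i == j else 0.0 for j in range(order)]  # inverse correlation
         for i in range(order)]
    u = [0.0] * order
    for xn, dn in zip(x, d):
        u = [xn] + u[:-1]                                  # regression vector
        Pu = [sum(P[i][j] * u[j] for j in range(order)) for i in range(order)]
        denom = lam + sum(u[i] * Pu[i] for i in range(order))
        k = [v / denom for v in Pu]                        # gain vector
        e = dn - sum(w[i] * u[i] for i in range(order))    # a priori error
        w = [w[i] + k[i] * e for i in range(order)]
        uP = [sum(u[i] * P[i][j] for i in range(order)) for j in range(order)]
        P = [[(P[i][j] - k[i] * uP[j]) / lam for j in range(order)]
             for i in range(order)]
    return w

# demo: recover the taps of d(n) = 0.5 x(n) - 0.3 x(n-1) from noise-free data
x = [math.sin(0.3 * n) + 0.5 * math.sin(1.1 * n) for n in range(300)]
d = [0.5 * x[n] - 0.3 * (x[n - 1] if n else 0.0) for n in range(300)]
w = rls(x, d)
```

The matrix update is what gives RLS its fast convergence relative to NLMS, at the cost of O(order²) work per sample.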

Keywords: adaptive filter, adaptive noise canceller, mean squared error, noise reduction, NLMS, RLS, SNR, SNR loss

Procedia PDF Downloads 448
735 Static Application Security Testing Approach for Non-Standard Smart Contracts

Authors: Antonio Horta, Renato Marinho, Raimir Holanda

Abstract:

Considered an evolution of the blockchain, the Ethereum platform, besides allowing transactions of its cryptocurrency, named Ether, allows the programming of decentralised applications (DApps) and smart contracts. However, adding this functionality to blockchains has raised other types of threats, and the exploitation of smart contract vulnerabilities has caused companies to experience big losses. This research intends to determine the number of contracts that are at risk of being drained. Through a deep investigation, more than two hundred thousand smart contracts currently available on the Ethereum platform were scanned to estimate how much money is at risk. The experiment was based on a query run on Google BigQuery in July 2022, which returned 50,707,133 contracts published on the Ethereum platform. After applying the filtering criteria, the experiment got 430,584 smart contracts to download and analyse. The filtering criteria consisted of filtering out ERC20 and ERC721 contracts, contracts without transactions, and contracts without balance. Of the 430,584 smart contracts selected, only 268,103 had source codes published on Etherscan; however, using a hashing process, we discovered that there were duplicated contracts. After removing the duplicates, the process ended up with 20,417 source codes, which were analysed using the open-source SAST tool SmartBugs with the Oyente and Securify algorithms. In the end, there was nearly $100,000 at risk of being drained from the potentially vulnerable smart contracts. It is important to note that the tools used in this study may generate false positives, which may affect the number of vulnerable contracts. To address this point, our next step in this research is to develop an application to test each contract in a parallel environment to verify the vulnerability.
Finally, this study aims to alert users and companies to the risk of not properly creating and analysing their smart contracts before publishing them on the platform. Like any other application, smart contracts are at risk of having vulnerabilities which, in this case, may result in direct financial losses.
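The abstract does not specify the hashing procedure used to collapse the 268,103 source codes to 20,417; one plausible sketch, assuming SHA-256 over whitespace-normalised source text, is:

```python
import hashlib

def dedupe_contracts(sources):
    """Keep one entry per distinct source code, hashing whitespace-normalised
    text so trivially reformatted duplicates collapse to the same digest.
    (The exact normalisation is an assumption, not taken from the paper.)"""
    seen, unique = set(), []
    for name, code in sources:
        digest = hashlib.sha256(" ".join(code.split()).encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(name)
    return unique
```

Two byte-identical or merely reformatted copies hash the same, so only the first survives.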

Keywords: blockchain, reentrancy, static application security testing, smart contracts

Procedia PDF Downloads 57
734 Application of Envelope Spectrum Analysis and Spectral Kurtosis to Diagnose Debris Fault in Bearing Using Acoustic Signals

Authors: Henry Ogbemudia Omoregbee, Mabel Usunobun Olanipekun

Abstract:

Acoustic signals from rolling element bearings running at low speed and high radial load tend to have low amplitudes, particularly in the case of debris faults, whose signals necessitate high-sensitivity analysis. As the rollers in the bearing roll over debris trapped in the grease used to lubricate the bearing, the envelope signal created by amplitude demodulation carries additional diagnostic information that is not available through ordinary spectrum analysis of the raw signal. The kurtosis values obtained for three different scenarios (debris-induced fault, outer crack, and a normal good bearing) could not, on their own, be used to easily identify whether the used bearings were defective. This work establishes that envelope spectrum analysis detects the fault signature and its harmonics induced in the debris bearings when the raw signal is bandpass filtered in the frequency band specified by the kurtogram and spectral kurtosis.
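Envelope analysis as described above can be sketched with the Hilbert-transform (analytic signal) method. This minimal Python example uses a naive DFT and an even-length synthetic amplitude-modulated signal standing in for a bearing resonance excited by periodic debris impacts; the modulation rate plays the role of the fault frequency:

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

def envelope(x):
    """Amplitude envelope via the analytic signal: zero the negative-frequency
    half of the spectrum, double the positive half, take the magnitude."""
    X = dft(x)
    n = len(X)
    H = ([X[0]] + [2 * X[k] for k in range(1, n // 2)] + [X[n // 2]]
         + [0j] * (n // 2 - 1))
    return [abs(z) for z in idft(H)]

# synthetic signal: 64 Hz "resonance" amplitude-modulated at an 8 Hz
# "fault rate" (values chosen for illustration)
n, fs = 256, 256
signal = [(1 + 0.8 * math.cos(2 * math.pi * 8 * t / fs))
          * math.sin(2 * math.pi * 64 * t / fs) for t in range(n)]
env_spectrum = [abs(v) for v in dft(envelope(signal))]
fault_bin = max(range(1, n // 2), key=env_spectrum.__getitem__)
```

The raw spectrum shows energy only around the carrier and its sidebands, whereas the envelope spectrum peaks directly at the 8 Hz modulation rate, which is why demodulation exposes fault signatures that ordinary spectrum analysis misses.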

Keywords: rolling bearings, rolling element bearing noise, bandpass filtering, harmonics, envelope spectrum analysis, spectral kurtosis

Procedia PDF Downloads 48
733 Shoulder-Arm Mobility and Upper and Lower Extremity Muscle Function Are Impaired in Patients with Systemic Sclerosis

Authors: F. Bringby, A. Nordin, L. Björnådal, E. Svenungsson, C. Boström, H. Alexanderson

Abstract:

Patients with systemic sclerosis (SSc) have reduced hand function and self-reported limitations in daily activities. Few studies have explored limitations in shoulder-arm mobility and muscle function, or whether physical function differs between diffuse cutaneous (dcSSc) and limited cutaneous (lcSSc) SSc. The purpose of this study was to describe objectively assessed shoulder-arm mobility, lower extremity muscle function, and muscle endurance in SSc, and to evaluate possible differences between lcSSc and dcSSc. 121 patients with SSc were included in this cross-sectional study. Shoulder-arm mobility was examined using the Shoulder Function Assessment Scale (SFA), which includes 5 tasks; lower extremity muscle function was measured by the Timed Stands Test (TST); and muscle endurance in the shoulder and hip flexors was assessed by the Functional Index 2 (FI-2). Patients with dcSSc had a median SFA “hand to back” score of 5 (4-6) and a median “hand to seat” score of 5 (4-6), compared to corresponding median values of 6 (4-6) and 6 (5-6), respectively, in patients with lcSSc (p<0.01-p<0.05). 50% of both patient groups had lower muscle function assessed by the TST compared to age- and gender-matched reference values, but there was no difference in TST between the two patient groups. There was also no difference in FI-2 scores between dcSSc and lcSSc. The whole group achieved 40 (28-83)% and 38 (32-72)% of the maximal FI-2 shoulder flexion score on the right and left sides, and 40 (23-63)% and 37 (23-62)% of the maximal FI-2 hip flexion score on the right and left sides. Reference values for the FI-2 indicate that healthy individuals perform at a mean of 100% of the maximal score. Patients with dcSSc were more limited than patients with lcSSc. Patients with SSc have reduced muscle function compared to reference values.
These results highlight the importance of assessing shoulder-arm mobility and muscle function, as well as the need for further research to identify exercise interventions targeting these limitations.

Keywords: diffuse, limited, mobility, muscle function, physical therapy, systemic sclerosis

Procedia PDF Downloads 362