Search results for: Foveation Filtering
31 A Pairwise-Gaussian-Merging Approach: Towards Genome Segmentation for Copy Number Analysis
Authors: Chih-Hao Chen, Hsing-Chung Lee, Qingdong Ling, Hsiao-Jung Chen, Sun-Chong Wang, Li-Ching Wu, H.C. Lee
Abstract:
Segmentation, filtering out of measurement errors and identification of breakpoints are integral parts of any analysis of microarray data for the detection of copy number variation (CNV). Existing algorithms designed for these tasks have had some successes in the past, but they tend to be O(N^2) in either computation time or memory requirement, or both, and the rapid advance of microarray resolution has practically rendered such algorithms useless. Here we propose an algorithm, SAD, that is much faster and far less demanding of memory, being O(N) in both computation time and memory requirement, and offers higher accuracy. The two key ingredients of SAD are the fundamental assumption in statistics that measurement errors are normally distributed and the mathematical relation that the product of two Gaussians is another Gaussian (function). We have produced a computer program for analyzing CNV based on SAD. In addition to being fast and small, it offers two important features: quantitative statistics for predictions and, with only two user-decided parameters, ease of use. Its speed shows little dependence on genomic profile. Running on an average modern computer, it completes CNV analyses for a 262 thousand-probe array in ~1 second and a 1.8 million-probe array in 9 seconds.
Keywords: Cancer, pathogenesis, chromosomal aberration, copy number variation, segmentation analysis.
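As a minimal illustration of the Gaussian-product relation the abstract builds on (not the authors' SAD implementation), the product of two Gaussian densities N(m1, s1^2) and N(m2, s2^2) is, up to normalization, another Gaussian whose parameters follow in closed form; the values below are illustrative:

    import numpy as np

    def merge_gaussians(m1, s1, m2, s2):
        # Product of two Gaussian densities is (up to normalization) another Gaussian:
        # the variance is the harmonic combination, the mean is the precision-weighted average.
        var = 1.0 / (1.0 / s1**2 + 1.0 / s2**2)
        mean = var * (m1 / s1**2 + m2 / s2**2)
        return mean, np.sqrt(var)

    # Example: two noisy probe measurements of the same copy-number level.
    print(merge_gaussians(0.98, 0.10, 1.02, 0.10))  # -> (~1.00, ~0.071)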
30 An Enhanced SAR-Based Tsunami Detection System
Authors: Jean-Pierre Dubois, Jihad S. Daba, H. Karam, J. Abdallah
Abstract:
Tsunami early detection and warning systems have proved to be of ultimate importance, especially after the destructive tsunami that hit Japan in March 2011. Such systems are crucial to inform the authorities of any risk of a tsunami and of the degree of its danger in order to make the right decision and notify the public of the actions they need to take to save their lives. The purpose of this research is to enhance existing tsunami detection and warning systems. We first propose an automated and miniaturized model of an early tsunami detection and warning system. The operation of the warning system is simulated using the data acquisition toolbox of Matlab and measurements acquired from specified internet pages, owing to the lack of the required real-life seismic and hydrologic sensors, and a graphical user interface is built for the system. In the second phase of this work, we implement various satellite image filtering schemes to enhance the acquired synthetic aperture radar images of the tsunami-affected region that are masked by speckle noise. This enables us to conduct a post-tsunami damage extent study and calculate the percentage damage. We conclude by proposing improvements to the telecommunication infrastructure of existing tsunami warning systems through a migration to IP-based networks and fiber optic links.
Keywords: Detection, GIS, GSN, GTS, GPS, speckle noise, synthetic aperture radar, tsunami, wiener filter.
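A minimal sketch of the speckle-suppression step named in the keywords, assuming an adaptive Wiener filter applied to a SAR intensity image; the 5x5 window and the synthetic gamma-distributed speckle are illustrative assumptions, not the study's actual processing chain:

    import numpy as np
    from scipy.signal import wiener

    rng = np.random.default_rng(1)
    clean = np.ones((128, 128)); clean[40:90, 40:90] = 4.0                       # simple two-level scene
    speckled = clean * rng.gamma(shape=4.0, scale=1.0 / 4.0, size=clean.shape)   # multiplicative speckle
    despeckled = wiener(speckled, mysize=(5, 5))                                 # adaptive Wiener filtering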
29 Embedding a Large Amount of Information Using High Secure Neural Based Steganography Algorithm
Authors: Nameer N. EL-Emam
Abstract:
In this paper, we construct and implement a new Steganography algorithm based on a learning system to hide a large amount of information in a color BMP image. We have used adaptive image filtering and adaptive non-uniform image segmentation with bit replacement on the appropriate pixels. These pixels are selected randomly rather than sequentially, using a new concept defined by main cases with sub-cases for each byte in one pixel. According to the steps of the design, we have identified 16 main cases with their sub-cases that cover all aspects of embedding the input information into a color bitmap image. High security has been proposed through four layers of security to make it difficult to break the encryption of the input information and to confuse steganalysis too. A learning system has been introduced at the fourth layer of security through a neural network. This layer is used to increase the difficulty of statistical attacks. Our results against statistical and visual attacks are discussed before and after using the learning system, and we make a comparison with a previous Steganography algorithm. We show that our algorithm can efficiently embed a large amount of information, reaching 75% of the image size (replacing up to 18 bits per pixel), with high quality of the output.
Keywords: Adaptive image segmentation, hiding with high capacity, hiding with high security, neural networks, Steganography.
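For orientation, a minimal fixed-rate least-significant-bit embedding sketch; the paper's algorithm is adaptive, selects pixels pseudo-randomly and adds encryption plus a neural layer, none of which is shown here, and the 2-bits-per-channel rate is an illustrative assumption:

    import numpy as np

    def embed_lsb(pixels, payload_bits, bits_per_channel=2):
        # Minimal fixed-rate LSB embedding: write payload bits into the lowest
        # bits of consecutive channels (the paper's method is adaptive and random).
        flat = pixels.reshape(-1).copy()
        mask = np.uint8((0xFF << bits_per_channel) & 0xFF)
        for i in range(0, len(payload_bits), bits_per_channel):
            chunk = payload_bits[i:i + bits_per_channel]
            value = int("".join(map(str, chunk)).ljust(bits_per_channel, "0"), 2)
            flat[i // bits_per_channel] = (flat[i // bits_per_channel] & mask) | value
        return flat.reshape(pixels.shape)

    # Example: embed 8 payload bits into a tiny 2x2 RGB image.
    img = np.zeros((2, 2, 3), dtype=np.uint8)
    stego = embed_lsb(img, [1, 0, 1, 1, 0, 0, 1, 1])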
28 Low Resolution Single Neural Network Based Face Recognition
Authors: Jahan Zeb, Muhammad Younus Javed, Usman Qayyum
Abstract:
This research paper deals with the implementation of face recognition using a neural network (recognition classifier) on low-resolution images. The proposed system contains two parts, preprocessing and face classification. The preprocessing part converts original images into blurry images using an average filter and equalizes the histogram of those images (lighting normalization). A bi-cubic interpolation function is applied to each equalized image to obtain a resized image. The resized image is a low-resolution image, providing faster processing for training and testing. The preprocessed image becomes the input to the neural network classifier, which uses the back-propagation algorithm to recognize familiar faces. The crux of the proposed algorithm is its use of a single neural network as the classifier, which yields a straightforward approach to face recognition. The single neural network consists of three layers with log-sigmoid, hyperbolic tangent sigmoid and linear transfer functions, respectively. The training function incorporated in our work is gradient descent with momentum (adaptive learning rate) back-propagation. The proposed algorithm was trained on the ORL (Olivetti Research Laboratory) database with 5 training images. The empirical results provide accuracies of 94.50%, 93.00% and 90.25% for 20, 30 and 40 subjects respectively, with a time delay of 0.0934 sec per image.
Keywords: Average filtering, Bicubic Interpolation, Neurons, vectorization.
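A minimal sketch of the described preprocessing chain using OpenCV; the 3x3 averaging kernel, the 32x32 output resolution, the grayscale handling and the file path are assumptions rather than the paper's exact settings:

    import cv2

    def preprocess_face(path, out_size=(32, 32)):
        # Average filter -> histogram equalization -> bicubic resize -> vectorize.
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        blurred = cv2.blur(gray, (3, 3))                    # average filter (blurry image)
        equalized = cv2.equalizeHist(blurred)               # lighting normalization
        resized = cv2.resize(equalized, out_size,
                             interpolation=cv2.INTER_CUBIC) # bicubic interpolation
        return resized.flatten() / 255.0                    # input vector for the classifier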
27 Locating Center Points for Radial Basis Function Networks Using Instance Reduction Techniques
Authors: Rana Yousef, Khalil el Hindi
Abstract:
The behavior of Radial Basis Function (RBF) Networks greatly depends on how the center points of the basis functions are selected. In this work we investigate the use of instance reduction techniques, originally developed to reduce the storage requirements of instance-based learners, for this purpose. Five instance-based reduction techniques were used to determine the set of center points, and RBF networks were trained using these sets of centers. The performance of the RBF networks is studied in terms of classification accuracy and training time. The results obtained were compared with two Radial Basis Function Networks: RBF networks that use all instances of the training set as center points (RBF-ALL) and Probabilistic Neural Networks (PNN). The former achieves high classification accuracies and the latter requires smaller training time. Results showed that RBF networks trained using sets of centers located by noise-filtering techniques (ALLKNN and ENN), rather than pure reduction techniques, produce the best results in terms of classification accuracy. The results show that these networks require smaller training time than RBF-ALL and achieve higher classification accuracy than PNN. Thus, using ALLKNN and ENN to select center points gives a better combination of classification accuracy and training time. Our experiments also show that using the reduced sets to train the networks is beneficial especially in the presence of noise in the original training sets.
Keywords: Radial basis function networks, Instance-based reduction, PNN.
26 Objects Extraction by Cooperating Optical Flow, Edge Detection and Region Growing Procedures
Abstract:
The image segmentation method described in this paper has been developed as a pre-processing stage to be used in methodologies and tools for video/image indexing and retrieval by content. This method solves the problem of extracting whole objects from the background, and it produces images of single complete objects from videos or photos. The extracted images are used for calculating the object visual features necessary for both indexing and retrieval processes. The segmentation algorithm is based on the cooperation among an optical flow evaluation method, edge detection and region growing procedures. The optical flow estimator belongs to the class of differential methods. It can detect motions ranging from a fraction of a pixel to a few pixels per frame, achieving good results in the presence of noise without the need for a filtering pre-processing stage, and it includes a specialised model for moving object detection. The first task of the presented method exploits the cues from motion analysis for moving area detection. Objects and background are then refined using edge detection and seeded region growing procedures, respectively. All the tasks are performed iteratively until objects and background are completely resolved. The method has been applied to a variety of indoor and outdoor scenes where objects of different type and shape are represented on variously textured backgrounds.
Keywords: Image Segmentation, Motion Detection, Object Extraction, Optical Flow
25 A New Model for Question Answering Systems
Authors: Mohammad Reza Kangavari, Samira Ghandchi, Manak Golpour
Abstract:
Most Question Answering systems are composed of three main modules: question processing, document processing and answer processing. The question processing module plays an important role in QA systems; if this module does not work properly, it will cause problems for the other sections. Moreover, the answer processing module is an emerging topic in Question Answering, where these systems are often required to rank and validate candidate answers. These techniques, aiming at finding short and precise answers, are often based on semantic classification. This paper discusses a new model for question answering which improves two main modules, question processing and answer processing. There are two important components which form the basis of question processing. The first component is question classification, which specifies the types of question and answer. The second is reformulation, which converts the user's question into a question understandable by the QA system in a specific domain. The answer processing module consists of candidate answer filtering and candidate answer ordering components, and it also has a validation section for interacting with the user. This module makes it more suitable to find the exact answer. In this paper we have described the question and answer processing modules with modeling, implementation and evaluation of the system. The system was implemented in two versions. Results show that 'Version No. 1' gave correct answers to 70% of questions (30 correct answers to 50 asked questions) and 'Version No. 2' gave correct answers to 94% of questions (47 correct answers to 50 asked questions).
Keywords: Answer Processing, Classification, Question Answering and Query Reformulation.
24 Effect of Wood Vinegar for Controlling on Housefly (Musca domestica L.)
Authors: U. Pangnakorn, S. Kanlaya, C. Kuntha
Abstract:
Raw wood vinegar was purified by both standing and filtering methods. Toxicity tests were conducted under laboratory conditions by the topical application method (contact poison) and the feeding method (stomach poison). Larvicidal activities of wood vinegar at different concentrations (10, 15, 20, 25 and 30%) were studied against second instar larvae of the housefly (Musca domestica L.). Four replicates were maintained for all treatments and controls. Larval mortality was recorded up to 96 hours and compared with larval survivability for the two methods of larvicidal bioassay. Percent pupation and percent adult emergence were observed in treated M. domestica. The study revealed that the feeding method gave higher efficiency compared with the topical application method. Larval mortality increased with increasing concentration of wood vinegar and duration of exposure. No mortality was found in treated M. domestica larvae at the minimum 10% concentration of wood vinegar throughout the experiments. The treated larvae were maintained up to pupation and adult emergence. At the 30% maximum concentration, larval duration was extended to 11 days in M. domestica for the topical application method and 9 days for the feeding method. Similarly, the pupal durations also increased with increasing concentrations of the treatments (16 and 24 days for the topical application and feeding methods respectively at 30% concentration).
Keywords: Housefly (Musca domestica L.), wood vinegar, mortality, topical application, feeding
23 Hybrid Collaborative-Context Based Recommendations for Civil Affairs Operations
Authors: Patrick Cummings, Laura Cassani, Deirdre Kelliher
Abstract:
In this paper we present findings from a research effort to apply a hybrid collaborative-context approach for a system focused on Marine Corps civil affairs data collection, aggregation, and analysis called the Marine Civil Information Management System (MARCIMS). The goal of this effort is to provide operators with information to make sense of the interconnectedness of entities and relationships in their area of operation and discover existing data to support civil military operations. Our approach to building a recommendation engine was designed to overcome several technical challenges, including 1) ensuring models were robust to the relatively small amount of data collected by the Marine Corps civil affairs community; 2) finding methods to recommend novel data for which there are no interactions captured; and 3) overcoming confirmation bias by ensuring content was recommended that was relevant for the mission despite being obscure or less well known. We address these challenges by implementing a combination of collective matrix factorization (CMF) and graph-based random walks to provide recommendations to civil military operations users. We also present a method to resolve the challenge of computational complexity inherent in highly connected nodes through a precomputed process.
Keywords: Recommendation engine, collaborative filtering, context based recommendation, graph analysis, coverage, civil affairs operations, Marine Corps.
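As a hedged sketch of the graph-based half of the hybrid approach (the collective matrix factorization part and any MARCIMS-specific precomputation are not shown), a generic random-walk-with-restart scoring over an entity adjacency matrix could look like this; the restart probability and the toy graph are illustrative:

    import numpy as np

    def random_walk_with_restart(adj, seed, restart=0.15, iters=50):
        # Score entities by how often a restarting random walk from the seed visits them.
        P = adj / adj.sum(axis=0, keepdims=True)       # column-stochastic transition matrix
        r = np.zeros(adj.shape[0]); r[seed] = 1.0      # restart distribution at the seed entity
        scores = r.copy()
        for _ in range(iters):
            scores = (1 - restart) * P @ scores + restart * r
        return scores                                   # higher score = stronger recommendation

    # Example: 4 entities with a small undirected adjacency matrix.
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    print(random_walk_with_restart(A, seed=0))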
22 Info-participation of the Disabled Using the Mixed Preference Data in Improving Their Travel Quality
Authors: Y. Duvarci, S. Mizokami
Abstract:
Today, the preferences and participation of TD groups such as the elderly and disabled are still lacking in the decision-making of transportation planning, and their reactions to certain types of policies are not well known. Thus, a clear methodology is needed. This study aimed to develop a method to extract the preferences of the disabled to be used in the policy-making stage, which can also guide future estimations. The method utilizes a combination of cluster analysis and data filtering using data from Arao city (Japan). The method proceeds as follows: the TD group is defined by the cluster analysis tool; their travel preferences are tabulated from the household surveys by policy variable-impact pairs, zones, and trip purposes; and the final outcome is the preference probabilities of the disabled. The preferences vary by trip purpose: for work trips, accessibility and transit system quality policies, with the accompanying impacts of modal shifts towards public mode use as well as decreasing travel costs and a trip rate increase; for social trips, the same accessibility and transit system policies leading to the same mode shift impact, together with the travel quality policy area leading to a trip rate increase. These results indicate which policies to focus on and can be used for scenario generation in models, or for any other planning purpose, as a decision support tool.
Keywords: Transportation Disadvantaged, Disabled, Mixed Preference, Stated Preference Data.
21 Improving the Shunt Active Power Filter Performance Using Synchronous Reference Frame PI Based Controller with Anti-Windup Scheme
Authors: Consalva J. Msigwa, Beda J. Kundy, Bakari M. M. Mwinyiwiwa
Abstract:
In this paper the reference current for the Voltage Source Converter (VSC) of the Shunt Active Power Filter (SAPF) is generated using the Synchronous Reference Frame method, incorporating a PI controller with an anti-windup scheme. The proposed method improves the harmonic filtering by compensating for the windup phenomenon caused by the integral term of the PI controller. Using the Reference Frame Transformation, the current is transformed from the a-b-c stationary frame to the rotating 0-d-q frame. Using the PI controller, the current in the 0-d-q frame is controlled to obtain the desired reference signal. A controller with integral action combined with an actuator that becomes saturated can give some undesirable effects. If the control error is so large that the integrator saturates the actuator, the feedback path becomes ineffective because the actuator will remain saturated even if the process output changes. The integrator, being an unstable system, may then integrate up to a very large value, a phenomenon known as integrator windup. Implementing the integrator anti-windup circuit turns off the integrator action when the actuator saturates, hence improving the performance of the SAPF and dynamically compensating harmonics in the power network. In this paper the system performance is examined with a Shunt Active Power Filter simulation model.
Keywords: Phase Locked Loop (PLL), Voltage Source Converter (VSC), Shunt Active Power Filter (SAPF), PI, Pulse Width Modulation (PWM).
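A minimal sketch of a PI step with conditional-integration anti-windup, one common way to realize the scheme described above; the gains, limits and the toy unity-gain plant in the example are placeholders, not values from the paper:

    def pi_antiwindup_step(error, state, kp, ki, dt, u_min, u_max):
        # One PI step with conditional-integration anti-windup: the integrator is
        # frozen whenever the output saturates and the error would wind it up further.
        u_unsat = kp * error + ki * state
        u = min(max(u_unsat, u_min), u_max)
        saturated = (u != u_unsat)
        if not (saturated and (error * u_unsat > 0)):
            state += error * dt        # integrate only when it will not wind up
        return u, state

    # Example: track a constant reference of 1.0 with a tightly limited actuator.
    state, u = 0.0, 0.0
    for _ in range(100):
        u, state = pi_antiwindup_step(error=1.0 - u, state=state, kp=0.5, ki=2.0,
                                      dt=0.01, u_min=-1.0, u_max=1.0)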
20 Fast Approximate Bayesian Contextual Cold Start Learning (FAB-COST)
Authors: Jack R. McKenzie, Peter A. Appleby, Thomas House, Neil Walton
Abstract:
Cold-start is a notoriously difficult problem which can occur in recommendation systems, and arises when there is insufficient information to draw inferences for users or items. To address this challenge, a contextual bandit algorithm, the Fast Approximate Bayesian Contextual Cold Start Learning algorithm (FAB-COST), is proposed, which is designed to provide improved accuracy compared to the traditionally used Laplace approximation in the logistic contextual bandit, while controlling both algorithmic complexity and computational cost. To this end, FAB-COST uses a combination of two moment projection variational methods: Expectation Propagation (EP), which performs well at the cold start, but becomes slow as the amount of data increases; and Assumed Density Filtering (ADF), which has slower growth of computational cost with data size but requires more data to obtain an acceptable level of accuracy. By switching from EP to ADF when the dataset becomes large, it is able to exploit their complementary strengths. The empirical justification for FAB-COST is presented, and it is systematically compared to other approaches on simulated data. In a benchmark against the Laplace approximation on real data consisting of over 670,000 impressions from autotrader.co.uk, FAB-COST demonstrates at one point an increase of over 16% in user clicks. On the basis of these results, it is argued that FAB-COST is likely to be an attractive approach to cold-start recommendation systems in a variety of contexts.
Keywords: Cold-start, expectation propagation, multi-armed bandits, Thompson sampling, variational inference.
19 Image Transmission via Iterative Cellular-Turbo System
Authors: Ersin Gose, Kenan Buyukatak, Onur Osman, Osman N. Ucan
Abstract:
To compress, improve bit error performance and also enhance 2D images, a new scheme, called the Iterative Cellular-Turbo System (IC-TS), is introduced. In IC-TS, the original image is partitioned into 2^N quantization levels, where N is the number of bit planes. Each of the N bit-planes is then coded by a Turbo encoder and transmitted over an Additive White Gaussian Noise (AWGN) channel. At the receiver side, bit-planes are re-assembled taking into consideration the neighborhood relationship of pixels in 2-D images. Each of the noisy bit-plane values of the image is evaluated iteratively using the IC-TS structure, which is composed of an equalization block, the Iterative Cellular Image Processing Algorithm (ICIPA) and a Turbo decoder. In IC-TS, there is an iterative feedback link between ICIPA and the Turbo decoder. ICIPA uses the mean and standard deviation of the estimated values of each pixel neighborhood. It yields highly satisfactory results in both Bit Error Rate (BER) and image enhancement performance for Signal-to-Noise Ratio (SNR) values below -1 dB, compared to the traditional turbo coding scheme and 2-D filtering applied separately. Compression can also be achieved with IC-TS systems: less memory storage is used and the data rate is increased by up to N-1 times by simply choosing a smaller number of bit slices, sacrificing resolution. Hence, it is concluded that the IC-TS system will be a promising approach for 2-D image transmission, recovery of noisy signals and image compression.
Keywords: Iterative Cellular Image Processing Algorithm (ICIPA), Turbo Coding, Iterative Cellular Turbo System (IC-TS), Image Compression.
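A small sketch of the bit-plane decomposition and re-assembly that IC-TS relies on (Turbo coding, the AWGN channel and ICIPA itself are not shown); the 8-bit depth is assumed:

    import numpy as np

    def to_bit_planes(img, n_bits=8):
        # Split an 8-bit grayscale image into N binary bit-planes (MSB first),
        # the representation each Turbo-coded stream would carry in IC-TS.
        planes = [(img >> b) & 1 for b in range(n_bits - 1, -1, -1)]
        return np.stack(planes)

    def from_bit_planes(planes, total_bits=8):
        # Re-assemble the image from the most significant bit-planes; dropping
        # low-order planes trades resolution for a higher effective data rate.
        n = planes.shape[0]
        weights = 2 ** np.arange(total_bits - 1, total_bits - 1 - n, -1, dtype=np.uint16)
        return (planes.astype(np.uint16) * weights[:, None, None]).sum(axis=0).astype(np.uint8)

    img = np.arange(16, dtype=np.uint8).reshape(4, 4) * 16
    planes = to_bit_planes(img)
    assert np.array_equal(from_bit_planes(planes), img)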
18 Web Content Mining: A Solution to Consumer's Product Hunt
Authors: Syed Salman Ahmed, Zahid Halim, Rauf Baig, Shariq Bashir
Abstract:
With the rapid growth in business size, today's businesses orient towards electronic technologies. Amazon.com and eBay.com are some of the major stakeholders in this regard. Unfortunately, the enormous size and hugely unstructured data on the web, even for a single commodity, has become a cause of ambiguity for consumers. Extracting valuable information from such ever-increasing data is an extremely tedious task and is fast becoming critical to the success of businesses. Web content mining can play a major role in solving these issues. It involves using efficient algorithmic techniques to search and retrieve the desired information from seemingly impossible-to-search unstructured data on the Internet. Application of web content mining can be very encouraging in the areas of Customer Relations Modeling, billing records, logistics investigations, product cataloguing and quality management. In this paper we present a review of some very interesting, efficient yet implementable techniques from the field of web content mining and study their impact in areas specific to business user needs, focusing on both the customer and the producer. The techniques we review include mining by developing a knowledge-base repository of the domain, iterative refinement of user queries for personalized search, using a graph-based approach for the development of a web-crawler, and filtering information for personalized search using website captions. These techniques have been analyzed and compared on the basis of their execution time and the relevance of the results they produced for a particular search.
Keywords: Data mining, web mining, search engines, knowledge discovery.
17 Sources of Water Supply and Water Quality for Local Consumption: The Case Study of Eco-Tourism Village, Suan Luang Sub-District Municipality, Ampawa District, Samut Songkram Province, Thailand
Authors: Paiboon Jeamponk, Tasanee Ponglaa, Patchapon Srisanguan
Abstract:
The aim of this research paper was an examination of sources of water supply and water quality for local consumption, conducted at eco-tourism villages of Suan Luang Sub-District Municipality of Amphawa District, Samut Songkram Province. The study incorporated both a questionnaire and field work of water testing as the research tools and methods. The sample size of 288 households was based on the population of the district, whereas the selected sample water sources were from 60 households: 30 samples were ground water and another 30 were surface water. The degree of heavy metal contamination in the water, including copper, iron, manganese, zinc, cadmium and lead, was investigated utilizing the Atomic Absorption-Direct Aspiration method. The findings revealed that 96.0 percent of household water consumption was based on the water supply, while the rest relied on canal, river and rain water. The household consumption behavior revealed that 47.2 percent of people routinely consumed water without boiling or filtering prior to consumption. The investigation of water supply quality found that the degree of heavy metal contamination, including lead, iron, copper, manganese and cadmium, met the standards of the Department of Health.
Keywords: Sources of water supply, water quality, water supply.
16 Assessment of the Number of Damaged Buildings from a Flood Event Using Remote Sensing Technique
Authors: Jaturong Som-ard
Abstract:
The heavy rainfall from 3rd to 22nd January 2017 swamped much of the area of Ranot district in southern Thailand. Due to the heavy rainfall, the district was flooded, which caused considerable economic and social losses. The major objective of this study is to detect the flooding extent using Sentinel-1A data and identify the number of damaged buildings there. The data were collected at two stages: pre-flooding and during the flood event. Calibration, speckle filtering, geometric correction, and histogram thresholding were performed on the data, based on intensity spectral values, to classify thematic maps. The maps were used to identify the flooding extent using change detection, along with the buildings digitized and collected on JOSM desktop. The number of damaged buildings was counted within the flooding extent with respect to the building data. The total flooded area was observed to be 181.45 sq.km. These areas occurred mostly in the Ban khao, Ranot, Takhria, and Phang Yang sub-districts, respectively. The Ban khao sub-district had more occurrences than the others because this area is located at a lower altitude and closer to the Thale Noi and Thale Luang lakes than the others. The numbers of damaged buildings were high in Khlong Daen (726 features), Tha Bon (645 features), and Ranot sub-district (604 features), respectively. The final flood extent map might be very useful for the planning, prevention and management of flood occurrence areas. The map of building damage can be used for quick response, recovery and mitigation in the affected areas by the different organizations concerned.
Keywords: Flooding extent, Sentinel-1A data, JOSM desktop, damaged buildings.
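A toy change-detection sketch in the spirit of the described thresholding step; the -15 dB threshold and the use of calibrated dB backscatter are assumptions, not the study's actual histogram-thresholding values:

    import numpy as np

    def flood_extent(pre_db, during_db, threshold_db=-15.0):
        # Flag pixels whose backscatter drops below the threshold only in the
        # during-flood image (open water appears dark in SAR intensity).
        return (during_db < threshold_db) & (pre_db >= threshold_db)

    pre = np.array([[-8.0, -9.0], [-7.5, -20.0]])        # calibrated, speckle-filtered dB values
    during = np.array([[-8.2, -18.0], [-7.9, -21.0]])
    print(flood_extent(pre, during))                     # True where newly flooded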
15 A Hybridized Competency-Based Teacher Candidate Selection System
Authors: R. Ramli, M. I. Ghazali, H. Ibrahim, M. M. Kasim, F. M. Kamal, S. Vikneswari
Abstract:
Teachers form the backbone of any educational system, hence selecting qualified candidates is very crucial. In Malaysia, decision making in the selection process involves a few stages: initial filtering through academic achievement, taking an entry examination and going through an interview session. The last stage is the most challenging since it highly depends on human judgment. Therefore, this study sought to identify the selection criteria for teacher candidates that form the basis of an efficient multi-criteria teacher-candidate selection model for that last stage. The relevant criteria were determined from the literature and also based on input from experts, that is, those who were involved in interviewing teacher candidates at a public university offering the formal training program. Three main competency criteria were identified: content knowledge, communication skills and personality. Further, each main criterion was divided into a few sub-criteria. The Analytic Hierarchy Process (AHP) technique was employed to allocate weights for the criteria, and a Simple Weighted Average (SWA) scoring approach was later integrated to develop the selection model. Subsequently, a web-based Decision Support System was developed to assist in the process of selecting qualified teacher candidates. The Teacher-Candidate Selection (TeCaS) system is able to assist the panel of interviewers during the selection process, which involves a large amount of complex qualitative judgments.
Keywords: Analytic Hierarchy Process, Simple Weighted Average, Decision Support System, Multi-criteria decision making problem.
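A hedged sketch of how AHP-derived weights can feed a Simple Weighted Average score; the pairwise comparison values and the candidate ratings are illustrative, not those elicited in the study:

    import numpy as np

    def ahp_weights(pairwise):
        # Principal-eigenvector weights from an AHP pairwise comparison matrix.
        vals, vecs = np.linalg.eig(pairwise)
        w = np.real(vecs[:, np.argmax(np.real(vals))])
        return w / w.sum()

    # Pairwise comparisons of content knowledge, communication skills, personality.
    P = np.array([[1.0, 2.0, 3.0],
                  [1/2, 1.0, 2.0],
                  [1/3, 1/2, 1.0]])
    weights = ahp_weights(P)

    # Simple Weighted Average score for one candidate's interview ratings (scale 1-5).
    ratings = np.array([4.0, 3.5, 5.0])
    print(float(weights @ ratings))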
14 Information Overload, Information Literacy and Use of Technology by Students
Authors: Elena Krelja Kurelović, Jasminka Tomljanović, Vlatka Davidović
Abstract:
The development of web technologies and mobile devices makes creating, accessing, using and sharing information, or communicating with each other, simpler every day. However, while the amount of information is constantly increasing, it is becoming harder to effectively organize and find quality information despite the availability of web search engines, filtering and indexing tools. Although digital technologies have an overall positive impact on students’ lives, frequent use of these technologies and digital media enriched with dynamic hypertext and hypermedia content, as well as multitasking and distractions caused by notifications, calls or messages, can decrease the attention span and make thinking, memorizing and learning more difficult, which can lead to stress and mental exhaustion. This is referred to as “information overload”, “information glut” or “information anxiety”. The objective of this study is to determine whether students show signs of information overload and to identify the possible predictors. The research was conducted using a questionnaire developed for the purpose of this study. The results show that students frequently use technology (computers, gadgets and digital media), while they show a moderate level of information literacy. They have sometimes experienced symptoms of information overload. According to the statistical analysis, a higher frequency of technology use and a lower level of information literacy are correlated with greater information overload. The multiple regression analysis has confirmed that the combination of these two independent variables has statistically significant predictive capacity for information overload. Therefore, information science teachers should pay attention to improving the level of students’ information literacy and educate them about the risks of excessive technology use.
Keywords: Information overload, technology use, digital media, information literacy, students.
13 Scatterer Density in Edge and Coherence Enhancing Nonlinear Anisotropic Diffusion for Medical Ultrasound Speckle Reduction
Authors: Ahmed Badawi, J. Michael Johnson, Mohamed Mahfouz
Abstract:
This paper proposes new enhancement models for nonlinear anisotropic diffusion methods to greatly reduce speckle and preserve image features in medical ultrasound images. By incorporating local physical characteristics of the image, in this case scatterer density, in addition to the gradient, into existing tensor-based image diffusion methods, we were able to greatly improve the performance of the existing filtering methods, namely edge enhancing (EE) and coherence enhancing (CE) diffusion. The new enhancement methods were tested using various ultrasound images, including phantom and some clinical images, to determine the amount of speckle reduction, edge enhancement, and coherence enhancement. Scatterer density weighted nonlinear anisotropic diffusion (SDWNAD) for ultrasound images consistently outperformed its traditional tensor-based counterparts that use the gradient only to weight the diffusivity function. SDWNAD is shown to greatly reduce speckle noise while preserving image features such as edges, orientation coherence, and scatterer density. SDWNAD's superior performance over nonlinear coherent diffusion (NCD), speckle reducing anisotropic diffusion (SRAD), the adaptive weighted median filter (AWMF), wavelet shrinkage (WS), and wavelet shrinkage with contrast enhancement (WSCE) makes these methods ideal preprocessing steps for automatic segmentation in ultrasound imaging.
Keywords: Nonlinear anisotropic diffusion, ultrasound imaging, speckle reduction, scatterer density estimation, edge based enhancement, coherence enhancement.
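As a rough, scalar (non-tensor) stand-in for the SDWNAD idea, a Perona-Malik-style diffusion whose diffusivity is damped by a local scatterer-density map might look as follows; kappa, lam and the density weighting are illustrative assumptions rather than the paper's formulation:

    import numpy as np

    def diffuse(img, density, n_iter=20, kappa=0.1, lam=0.2):
        # density holds values in [0, 1]; diffusion is reduced where density is high
        # so that scatterer-density features are preserved.
        def c(g):
            # Edge-stopping diffusivity, further damped by the density map.
            return np.exp(-(g / kappa) ** 2) * (1.0 - density)
        u = img.astype(float).copy()
        for _ in range(n_iter):
            gn = np.roll(u, -1, axis=0) - u    # neighbour differences (N, S, E, W)
            gs = np.roll(u, 1, axis=0) - u
            ge = np.roll(u, -1, axis=1) - u
            gw = np.roll(u, 1, axis=1) - u
            u = u + lam * (c(gn) * gn + c(gs) * gs + c(ge) * ge + c(gw) * gw)
        return u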
12 Identification of Spam Keywords Using Hierarchical Category in C2C E-commerce
Authors: Shao Bo Cheng, Yong-Jin Han, Se Young Park, Seong-Bae Park
Abstract:
Consumer-to-Consumer (C2C) E-commerce has been growing at a very high speed in recent years. Since identical or nearly identical kinds of products compete with one another through keyword search in C2C E-commerce, some sellers describe their products with spam keywords that are popular but are not related to their products. Though such products get more chances to be retrieved and selected by consumers than those without spam keywords, the spam keywords mislead consumers and waste their time. This problem has been reported in many commercial services like eBay and Taobao, but there has been little research to solve it. As a solution to this problem, this paper proposes a method to classify whether the keywords of a product are spam or not. The proposed method assumes that a keyword for a given product is more reliable if the keyword is observed commonly in the specifications of products which are the same as, or the same kind as, the given product. This is because the hierarchical category of a product is, in general, determined precisely by the seller of the product, as is the specification of the product. Since higher layers of the hierarchical category represent more general kinds of products, a reliability degree is determined differently according to the layers. Hence, reliability degrees from different layers of a hierarchical category become features for keywords, and they are used together with the features derived from specifications for classification of the keywords. Support Vector Machines are adopted as the basic classifier using these features, since they are powerful and widely used in many classification tasks. In the experiments, the proposed method is evaluated with a gold standard dataset from Yi-han-wang, a Chinese C2C E-commerce site, and is compared with a baseline method that does not consider the hierarchical category. The experimental results show that the proposed method outperforms the baseline in F1-measure, which proves that spam keywords are effectively identified by a hierarchical category in C2C E-commerce.
Keywords: Spam Keyword, E-commerce, keyword features, spam filtering.
11 Reducing CO2 Emission Using EDA and Weighted Sum Model in Smart Parking System
Authors: Rahman Ali, Muhammad Sajjad, Farkhund Iqbal, Muhammad Sadiq Hassan Zada, Mohammed Hussain
Abstract:
Emission of Carbon Dioxide (CO2) has adversely affected the environment. One of the major sources of CO2 emission is transportation. In the last few decades, the increase in the mobility of people using vehicles has enormously increased the emission of CO2 into the environment. To reduce CO2 emission, a sustainable transportation system is required, in which smart parking is one of the important measures that needs to be established. To contribute to reducing the amount of CO2 emission, this research proposes a smart parking system. A cloud-based solution is provided to drivers which automatically searches for and recommends the most preferred parking slots. To determine preferences among the parking areas, this methodology exploits a number of unique parking features, which ultimately results in the selection of a parking area that leads to the minimum level of CO2 emission from the current position of the vehicle. To realize the methodology, a scenario-based implementation is considered. During the implementation, a mobile application with GPS signals, vehicles with a number of vehicle features, and a list of parking areas with parking features are used, with sorting, multi-level filtering, exploratory data analysis (EDA), the Analytic Hierarchy Process (AHP) and the weighted sum model (WSM) applied to rank the parking areas and recommend to drivers the top-k most preferred parking areas. In the EDA process, “2020testcar-2020-03-03”, a freely available dataset, is used to estimate the CO2 emission of a particular vehicle. To evaluate the system, results of the proposed system are compared with the conventional approach, which reveals that the proposed methodology outperforms the conventional one in reducing the emission of CO2 into the atmosphere.
Keywords: CO2 emission, IoT, EDA, Weighted Sum Model, WSM, regression, smart parking system.
10 Information Retrieval: Improving Question Answering Systems by Query Reformulation and Answer Validation
Authors: Mohammad Reza Kangavari, Samira Ghandchi, Manak Golpour
Abstract:
Question answering (QA) aims at retrieving precise information from a large collection of documents. Most Question Answering systems are composed of three main modules: question processing, document processing and answer processing. The question processing module plays an important role in QA systems in reformulating questions. Moreover, the answer processing module is an emerging topic in QA systems, where these systems are often required to rank and validate candidate answers. These techniques, aiming at finding short and precise answers, are often based on semantic relations and co-occurring keywords. This paper discusses a new model for question answering which improves two main modules, question processing and answer processing, both of which affect the evaluation of the system's operation. There are two important components which form the basis of question processing. The first component is question classification, which specifies the types of question and answer. The second is reformulation, which converts the user's question into a question understandable by the QA system in a specific domain. The objective of an Answer Validation task is thus to judge the correctness of an answer returned by a QA system, according to the text snippet given to support it. For validating answers, we apply candidate answer filtering and candidate answer ranking, and there is also a final validation section based on user voting. This paper also describes a new architecture for the question and answer processing modules, with modeling, implementation and evaluation of the system. The system differs from most question answering systems in its answer validation model, which makes it more suitable for finding the exact answer. Results show that, from a total of 50 asked questions, the evaluation of the model shows a 92% improvement in the decisions of the system.
Keywords: Answer processing, answer validation, classification, question answering, query reformulation.
9 A Structured Mechanism for Identifying Political Influencers on Social Media Platforms: Top 10 Saudi Political Twitter Users
Authors: Ahmad Alsolami, Darren Mundy, Manuel Hernandez-Perez
Abstract:
Social media networks, such as Twitter, offer the perfect opportunity to either positively or negatively affect political attitudes among large audiences. The existence of influential users who have developed a reputation for their knowledge and experience of specific topics is a major factor contributing to this impact. Therefore, knowledge of the mechanisms for identifying influential users on social media is vital for understanding their effect on their audience. The concept of the influential user is related to the concept of 'opinion leaders', indicating that ideas first flow from mass media to opinion leaders and then to the rest of the population. Hence, the objective of this research was to provide reliable and accurate structural mechanisms to identify influential users, which could be applied to different platforms, places, and subjects. Twitter was selected as the platform of interest, and Saudi Arabia as the context for the investigation. These were selected because Saudi Arabia has a large number of Twitter users, some of whom are considerably active in setting agendas and disseminating ideas. The study considered the scientific methods that have previously been used to identify public opinion leaders, utilizing metrics software on Twitter. The key findings propose multiple novel metrics to compare Twitter influencers, including the number of followers, social authority and the use of political hashtags, along with four secondary filtering measures. Thus, using ratio and percentage calculations to classify the most influential users, Twitter accounts were filtered, analyzed and included. The structured approach is used as a mechanism to explore the top ten influencers on Twitter from the political domain in Saudi Arabia.
Keywords: Twitter, influencers, structured mechanism, Saudi Arabia.
8 Data Privacy and Safety with Large Language Models
Authors: Ashly Joseph, Jithu Paulose
Abstract:
Large language models (LLMs) have revolutionized natural language processing capabilities, enabling applications such as chatbots, dialogue agents, and image and video generators. Nevertheless, their training on extensive datasets comprising personal information poses notable privacy and safety hazards. This study examines methods for addressing these challenges, specifically focusing on approaches to enhance the security of LLM outputs, safeguard user privacy, and adhere to data protection rules. We explore several methods including post-processing detection algorithms, content filtering, reinforcement learning from human and AI inputs, and the difficulties in maintaining a balance between model safety and performance. The study also emphasizes the dangers of unintentional data leakage, privacy issues related to user prompts, and the possibility of data breaches. We highlight the significance of corporate data governance rules and optimal methods for engaging with chatbots. In addition, we analyze the development of data protection frameworks, evaluate the adherence of LLMs to the General Data Protection Regulation (GDPR), and examine privacy legislation in academic and business policies. We demonstrate the difficulties and remedies involved in preserving data privacy and security in the age of sophisticated artificial intelligence by employing case studies and real-life instances. This article seeks to educate stakeholders on practical strategies for improving the security and privacy of LLMs, while also ensuring their responsible and ethical implementation.
Keywords: Data privacy, large language models, artificial intelligence, machine learning, cybersecurity, general data protection regulation, data safety.
7 Application of Single Tuned Passive Filters in Distribution Networks at the Point of Common Coupling
Authors: M. Almutairi, S. Hadjiloucas
Abstract:
The harmonic distortion of voltage is important in relation to power quality due to the interaction between the large diffusion of non-linear and time-varying single-phase and three-phase loads and power supply systems. However, harmonic distortion levels can be reduced by improving the design of polluting loads or by applying arrangements and adding filters. The application of passive filters is an effective solution that can be used to achieve harmonic mitigation, mainly because filters offer high efficiency, simplicity, and are economical. Additionally, their different possible frequency response characteristics can be used to achieve the required harmonic filtering targets. With these ideas in mind, the objective of this paper is to determine what size of single tuned passive filter works best in distribution networks, in order to economically limit violations caused at a given point of common coupling (PCC). This article suggests that a single tuned passive filter could be employed in typical industrial power systems. Furthermore, constrained optimization can be used to find the optimal sizing of the passive filter in order to reduce both harmonic voltages and harmonic currents in the power system to an acceptable level, and thus improve the load power factor. The optimization technique works to minimize voltage total harmonic distortion (VTHD) and current total harmonic distortion (ITHD), while maintaining a given power factor within a specified range. According to IEEE Standard 519, both indices are viewed as constraints for the optimal passive filter design problem. The performance of this technique is discussed using numerical examples taken from previous publications.
Keywords: Harmonics, passive filter, power factor, power quality.
6 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring
Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti
Abstract:
Autonomous structural health monitoring (SHM) of many structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and detection of anomalies in a bridge from vibrational data, and compares different feature extraction schemes to increase the accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (intelligent multiplexer), which tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., entropy, variance, kurtosis), and feature extraction (auto-associative neural network (ANN)), which combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classification (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with normal and anomalous ones. In particular, principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN) are presented and their performances are compared. It is also shown that, by evaluating the correct features, the anomaly can be detected with an accuracy and an F1 score greater than 95%.
Keywords: Anomaly detection, dimensionality reduction, frequencies selection, modal analysis, neural network, structural health monitoring, vibration measurement.
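A minimal sketch of the PCA-based one-class step: train on fundamental frequencies from the standard condition and flag test points whose reconstruction error exceeds a threshold; the frequency values, number of components and 95th-percentile threshold are illustrative assumptions, not those used for the Z-24 bridge:

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    train = rng.normal([3.9, 5.0, 9.8, 10.3], 0.02, size=(500, 4))   # healthy fundamental frequencies (Hz)
    test = np.vstack([rng.normal([3.9, 5.0, 9.8, 10.3], 0.02, size=(10, 4)),
                      rng.normal([3.7, 4.8, 9.5, 10.0], 0.02, size=(10, 4))])  # last 10 simulate damage

    pca = PCA(n_components=2).fit(train)

    def recon_error(X):
        # Distance between a point and its reconstruction from the retained components.
        return np.linalg.norm(X - pca.inverse_transform(pca.transform(X)), axis=1)

    threshold = np.percentile(recon_error(train), 95)
    print(recon_error(test) > threshold)    # True flags an anomaly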
5 Validation on 3D Surface Roughness Algorithm for Measuring Roughness of Psoriasis Lesion
Authors: M.H. Ahmad Fadzil, Esa Prakasa, Hurriyatul Fitriyah, Hermawan Nugroho, Azura Mohd Affandi, S.H. Hussein
Abstract:
Psoriasis is a widespread skin disease affecting up to 2% of the population, with plaque psoriasis accounting for about 80%. It can be identified as a red lesion, and for higher severity the lesion is usually covered with rough scale. Psoriasis Area Severity Index (PASI) scoring is the gold standard method for measuring psoriasis severity. Scaliness is one of the PASI parameters that needs to be quantified in PASI scoring. The surface roughness of a lesion can be used as a scaliness feature, since the scale on the lesion surface makes the lesion rougher. The dermatologist usually assesses severity through their tactile sense, so direct contact between doctor and patient is required. The problem is that the doctor may not assess the lesion objectively. In this paper, a digital image analysis technique is developed to objectively determine the scaliness of the psoriasis lesion and provide the PASI scaliness score. The psoriasis lesion is modelled by a rough surface. The rough surface is created by superimposing a smooth average (curve) surface with a triangular waveform. For roughness determination, a polynomial surface fitting is used to estimate the average surface, followed by a subtraction between the rough and average surfaces to give the elevation surface (surface deviations). The roughness index is calculated by applying the average roughness equation to the height map matrix. The roughness algorithm has been tested on 444 lesion models. From the roughness validation results, only 6 models could not be accepted (percentage error greater than 10%). These errors occur due to the scanned image quality. The roughness algorithm was validated for roughness measurement on abrasive papers with flat surfaces. The Pearson's correlation coefficient between the grade value (G) of the abrasive paper and Ra is -0.9488, which shows a strong relation between G and Ra. The algorithm needs to be improved by surface filtering, especially to overcome problems with noisy data.
Keywords: psoriasis, roughness algorithm, polynomial surface fitting.
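A short sketch of the roughness computation described above: fit a low-order polynomial surface by least squares, subtract it to obtain the elevation surface, and take the mean absolute deviation as Ra; the polynomial order and the synthetic surface are illustrative choices, not the paper's settings:

    import numpy as np

    def roughness_ra(z, order=2):
        # Fit a polynomial surface to the height map z, subtract it, and return
        # the average roughness Ra = mean(|deviation|).
        h, w = z.shape
        y, x = np.mgrid[0:h, 0:w]
        cols = [x.ravel()**i * y.ravel()**j
                for i in range(order + 1) for j in range(order + 1 - i)]
        A = np.column_stack(cols)
        coeffs, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
        deviation = z.ravel() - A @ coeffs
        return np.abs(deviation).mean()

    # Example: smooth quadratic "average surface" plus a triangular scale pattern.
    yy, xx = np.mgrid[0:64, 0:64]
    z = 0.001 * (xx - 32)**2 + 0.2 * np.abs((xx % 8) - 4)
    print(roughness_ra(z))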
4 Calibration of 2D and 3D Optical Measuring Instruments in Industrial Environments at Submillimeter Range
Authors: A. Mínguez-Martínez, J. de Vicente
Abstract:
Modern manufacturing processes have led to the miniaturization of systems and, as a result, parts at the micro and nanoscale are produced. This trend seems set to become increasingly important in the near future. Besides, as a requirement of Industry 4.0, the digitalization of models of production and processes makes it very important to ensure that the dimensions of newly manufactured parts meet the specifications of the models. It then becomes possible to reduce scrap and the cost of non-conformities while ensuring the stability of production. To ensure the quality of manufactured parts, it becomes necessary to carry out traceable measurements at scales lower than one millimeter. Providing adequate traceability to the SI unit of length (the meter) for 2D and 3D measurements at this scale is a problem that does not have a unique solution in industrial environments. Researchers in the field of dimensional metrology all around the world are working on this issue. A solution for industrial environments, even if it is not complete, will enable working with some traceability. At this point, we believe that the study of surfaces could provide us with a first approximation to a solution. In this paper, we propose a calibration procedure for the scales of optical measuring instruments, particularized for a confocal microscope, using material standards that are easy to find and calibrate in metrology and quality laboratories in industrial environments. Confocal microscopes are measuring instruments capable of filtering out the out-of-focus reflected light so that, when the light reaches the detector, it is possible to take pictures of the part of the surface that is in focus. By varying the focus and taking pictures at different Z levels, specialized software interpolates between the different planes and can reconstruct the surface geometry as a 3D model. As is easy to deduce, it is necessary to give traceability to each axis. As a complementary result, the roughness Ra parameter will be traced to the reference. Although the solution is designed for a confocal microscope, it may be used for the calibration of other optical measuring instruments by applying minor changes.
Keywords: Industrial environment, confocal microscope, optical measuring instrument, traceability.
3 Remote Vital Signs Monitoring in Neonatal Intensive Care Unit Using a Digital Camera
Authors: Fatema-Tuz-Zohra Khanam, Ali Al-Naji, Asanka G. Perera, Kim Gibson, Javaan Chahl
Abstract:
Conventional contact-based vital signs monitoring sensors such as pulse oximeters or electrocardiogram (ECG) sensors may cause discomfort, skin damage, and infections, particularly in neonates with fragile, sensitive skin. Therefore, remote monitoring of vital signs is desired in both clinical and non-clinical settings to overcome these issues. Camera-based vital signs monitoring is a recent technology for these applications with many positive attributes. However, there are still limited camera-based studies on neonates in a clinical setting. In this study, the heart rate (HR) and respiratory rate (RR) of eight infants at the Neonatal Intensive Care Unit (NICU) in Flinders Medical Centre were remotely monitored using a digital camera, applying color- and motion-based computational methods. The region-of-interest (ROI) was efficiently selected by incorporating an image decomposition method. Furthermore, spatial averaging, spectral analysis, band-pass filtering, and peak detection were used to extract both HR and RR. The experimental results were validated against ground truth data obtained from an ECG monitor and showed a strong correlation, with Pearson correlation coefficients (PCC) of 0.9794 and 0.9412 for HR and RR, respectively. The root mean square errors (RMSE) between the camera-based data and the ECG data for HR and RR were 2.84 beats/min and 2.91 breaths/min, respectively. A Bland-Altman analysis of the data also showed close agreement between the two data sets, with a mean bias of 0.60 beats/min and 1 breath/min, and lower and upper limits of agreement of -4.9 to +6.1 beats/min and -4.4 to +6.4 breaths/min for HR and RR, respectively. Therefore, video camera imaging may replace conventional contact-based monitoring in the NICU and has potential applications in other contexts such as home health monitoring.
Keywords: Neonates, NICU, digital camera, heart rate, respiratory rate, image decomposition.
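A minimal sketch of the band-pass filtering and peak detection stage for HR, assuming a spatially averaged ROI intensity trace; the 1.5-4 Hz band (90-240 beats/min) and the peak-spacing constraint are illustrative assumptions, not the study's exact parameters:

    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks

    def estimate_heart_rate(signal, fs, low=1.5, high=4.0):
        # Band-pass the intensity trace to the neonatal cardiac band, count peaks.
        b, a = butter(3, [low / (fs / 2), high / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, signal)
        peaks, _ = find_peaks(filtered, distance=int(fs / high))
        duration_min = len(signal) / fs / 60.0
        return len(peaks) / duration_min          # beats per minute

    # Example: synthetic 2.5 Hz (150 bpm) intensity trace sampled at 30 fps.
    fs = 30.0
    t = np.arange(0, 20, 1 / fs)
    trace = 0.02 * np.sin(2 * np.pi * 2.5 * t) + 0.005 * np.random.randn(t.size)
    print(estimate_heart_rate(trace, fs))         # ~150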
2 Person Identification using Gait by Combined Features of Width and Shape of the Binary Silhouette
Authors: M.K. Bhuyan, Aragala Jagan.
Abstract:
Current image-based individual human recognition methods, such as fingerprint, face, or iris biometric modalities, generally require a cooperative subject, views from certain aspects, and physical contact or close proximity. These methods cannot reliably recognize non-cooperating individuals at a distance in the real world under changing environmental conditions. Gait, which concerns recognizing individuals by the way they walk, is a relatively new biometric without these disadvantages. The inherent gait characteristic of an individual makes it irreplaceable and useful in visual surveillance. In this paper, an efficient gait recognition system for human identification is proposed, based on extracting two features, namely the width vector of the binary silhouette and the MPEG-7 region-based shape descriptors. In the proposed method, foreground objects, i.e., humans and other moving objects, are extracted by estimating background information with a Gaussian Mixture Model (GMM), and subsequently a median filtering operation is performed to remove noise in the background-subtracted image. A moving target classification algorithm is used to separate human beings (i.e., pedestrians) from other foreground objects (viz., vehicles). Shape and boundary information is used in the moving target classification algorithm. Subsequently, the width vector of the outer contour of the binary silhouette and the MPEG-7 Angular Radial Transform coefficients are taken as the feature vector. Next, Principal Component Analysis (PCA) is applied to the selected feature vector to reduce its dimensionality. These extracted feature vectors are used to train a Hidden Markov Model (HMM) for the identification of individuals. The proposed system is evaluated using several gait sequences, and the experimental results show the efficacy of the proposed algorithm.
Keywords: Gait Recognition, Gaussian Mixture Model, Principal Component Analysis, MPEG-7 Angular Radial Transform.
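A minimal OpenCV sketch of the silhouette-extraction front end (GMM background subtraction followed by median filtering); the MOG2 parameters and the video path are placeholders, and the moving-target classification, width-vector and Angular Radial Transform stages are not shown:

    import cv2

    cap = cv2.VideoCapture("walking_sequence.avi")     # hypothetical input video
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                  # GMM foreground mask
        mask = cv2.medianBlur(mask, 5)                  # remove salt-and-pepper noise
        _, silhouette = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
        # silhouette is the binary input to the width-vector / shape-descriptor stages.
    cap.release()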