Search results for: random features
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5642

5192 Innovativeness of the Furniture Enterprises in Bulgaria

Authors: Radostina Popova

Abstract:

The paper presents an analysis of the innovation performance of small and medium-sized furniture enterprises in Bulgaria, which account for over 97% of the companies in the sector. It outlines the features of innovation in enterprises, the specific features of the furniture industry in Bulgaria, and an analysis of the results of studies on the topic. The results come from studies of three successive periods - 2006-2008, 2008-2010, and 2010-2012 - during which 594 small and medium-sized furniture enterprises were surveyed. Definitions and indicators commonly used in the EU (European Commission, OECD, Oslo Manual) are applied, which allows for comparability of the results.

Keywords: innovation activity, competitiveness of innovation, furniture enterprises in Bulgaria

Procedia PDF Downloads 254
5191 An Ensemble Deep Learning Architecture for Imbalanced Classification of Thoracic Surgery Patients

Authors: Saba Ebrahimi, Saeed Ahmadian, Hedie Ashrafi

Abstract:

Selecting appropriate patients for surgery is one of the main issues in thoracic surgery (TS). Both the short-term and long-term risks and benefits of surgery must be considered in the patient selection criteria. Existing datasets of TS patients have limitations because of missing attribute values and an imbalanced distribution of survival classes. In this study, a novel ensemble architecture of deep learning networks is proposed, based on stacking different linear and non-linear layers, to deal with imbalanced datasets. The categorical and numerical features are split across different layers with the ability to shrink unnecessary features. Then, after extracting insight from the raw features, a novel biased-kernel layer is applied to reinforce the gradient of the minority class, allowing the network to be trained better than with current methods. Finally, the performance and advantages of the proposed model over existing models are examined for predicting patient survival after thoracic surgery, using real-life clinical data for lung cancer patients.
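
The abstract does not specify the exact form of the biased-kernel layer; one common way to reinforce the gradient of the minority class is to weight the loss inversely to class frequency, as in the minimal PyTorch sketch below. The network layout, the split of numerical and categorical branches, and the class labels are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch: inverse-frequency class weighting to strengthen the
# minority-class gradient in an imbalanced survival classifier (illustrative).
import torch
import torch.nn as nn

class SplitFeatureNet(nn.Module):
    """Toy network that processes numerical and categorical features in
    separate branches before a shared classifier head (an assumption here)."""
    def __init__(self, n_num, n_cat, n_classes=2):
        super().__init__()
        self.num_branch = nn.Sequential(nn.Linear(n_num, 16), nn.ReLU())
        self.cat_branch = nn.Sequential(nn.Linear(n_cat, 8), nn.ReLU())
        self.head = nn.Linear(16 + 8, n_classes)

    def forward(self, x_num, x_cat):
        return self.head(torch.cat([self.num_branch(x_num),
                                    self.cat_branch(x_cat)], dim=1))

# Fake imbalanced data: 90% class 0 (survived), 10% class 1 (did not survive).
x_num, x_cat = torch.randn(200, 10), torch.randn(200, 5)
y = torch.zeros(200, dtype=torch.long)
y[:20] = 1

# Weight each class by the inverse of its frequency so that minority-class
# errors contribute larger gradients during training.
counts = torch.bincount(y, minlength=2).float()
loss_fn = nn.CrossEntropyLoss(weight=counts.sum() / (2 * counts))

model = SplitFeatureNet(10, 5)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):
    opt.zero_grad()
    loss = loss_fn(model(x_num, x_cat), y)
    loss.backward()
    opt.step()
```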

Keywords: deep learning, ensemble models, imbalanced classification, lung cancer, TS patient selection

Procedia PDF Downloads 124
5190 Emotional Intelligence and Age in Open Distance Learning

Authors: Naila Naseer

Abstract:

The concept of Emotional Intelligence (EI) is not new, yet it is unique and interesting. EI is a person's ability to be aware of his/her own emotions and to manage, handle and communicate emotions with others effectively. The present study was conducted to assess the relationship between emotional intelligence and age of graduate-level students at Allama Iqbal Open University (AIOU). The population consisted of Allama Iqbal Open University students (B.Ed 3rd semester, Autumn 2007) from the Rawalpindi and Islamabad regions. A sample of 469 participants was randomly drawn using a table of random numbers. The Bar-On EQ-i was administered to the participants through personal contact. The instrument was also validated through a pilot study on a random sample of 50 participants (B.Ed students, Spring 2006) who had completed their B.Ed degree successfully. Data were analyzed and tabulated using percentages, frequencies, means, standard deviations, correlations, and scattergrams in SPSS (version 16.0 for Windows). The results revealed that students in the older age groups scored lower on the scale (Bar-On EQ-i), while students in the younger age groups exhibited higher levels of EI compared with older students.

Keywords: emotional intelligence, age level, learning, emotion-related feelings

Procedia PDF Downloads 313
5189 Product Feature Modelling for Integrating Product Design and Assembly Process Planning

Authors: Baha Hasan, Jan Wikander

Abstract:

This paper describes part of the work on integrating the assembly design and assembly process planning (APP) domains. The work is based, in its first stage, on modelling assembly features to support APP. A multi-layer architecture, based on feature-based modelling, is proposed to establish a dynamic and adaptable link between product design using CAD tools and APP. The proposed approach is based on deriving "specific function" features from the "generic" assembly and form features extracted from the CAD tools. A hierarchical structure from "generic" to "specific" and from "high-level geometrical entities" to "low-level geometrical entities" is proposed in order to map the geometrical and assembly data extracted from geometrical and assembly modelers to the required processes and resources in APP. The feature concept, feature-based modelling, and feature recognition techniques are reviewed.
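
As a rough illustration of the generic-to-specific feature hierarchy described above, the sketch below models assembly features as a small Python class tree. The class names, attributes, and the example feature are invented for illustration and do not come from the authors' ontology.

```python
# Illustrative sketch of a generic-to-specific assembly-feature hierarchy
# (names and attributes are hypothetical, not the authors' ontology).
from dataclasses import dataclass, field

@dataclass
class FormFeature:                               # generic, high-level geometric entity
    name: str
    faces: list = field(default_factory=list)    # low-level geometric entities

@dataclass
class AssemblyFeature(FormFeature):              # generic assembly semantics
    mating_part: str = ""

@dataclass
class PegInHoleFeature(AssemblyFeature):         # "specific function" feature
    clearance_mm: float = 0.1
    insertion_direction: tuple = (0.0, 0.0, -1.0)

# A specific feature derived from CAD data, ready to be mapped to an
# assembly process and a resource in APP.
peg = PegInHoleFeature(name="peg_1", faces=["F12", "F13"],
                       mating_part="housing", clearance_mm=0.05)
print(peg)
```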

Keywords: assembly feature, assembly process planning, feature, feature-based modelling, form feature, ontology

Procedia PDF Downloads 293
5188 The Impact of Inpatient New Boarding Policy on Emergency Department Overcrowding: A Discrete Event Simulation Study

Authors: Wheyming Tina Song, Chi-Hao Hong

Abstract:

In this study, we investigate the effect of a new boarding policy - short stay - on overcrowding in the emergency department (ED). The decision variable is the number of short-stay beds allocated to the lowest-acuity ED patients. The performance measures used are the national emergency department overcrowding score (NEDOCS) and the ED retention rate (the percentage of patients who stay in the ED for more than 48 hours in one month). Discrete event simulation (DES) is used as the analysis tool to evaluate the strategy, and the common random number (CRN) technique is applied to enhance the simulation precision. The DES model was based on a census of six months of patients who were treated in the ED of the National Taiwan University Hospital Yunlin Branch. Our results show that the new short-stay boarding policy significantly impacts both the NEDOCS and the ED retention rate when the number of short-stay beds is more than three.
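
The ED model in the study is far richer than can be shown here; the sketch below only illustrates the common-random-number idea on a toy single-queue discrete event simulation, where several bed-count policies are compared under identical random streams. All distributions and parameter values are invented for illustration.

```python
# Toy illustration of common random numbers (CRN): several bed-count policies
# are simulated with the same random arrival/service streams, so the observed
# differences are due to the policy rather than to sampling noise.
import numpy as np

def simulate_mean_wait(n_beds, interarrivals, services):
    """Simple multi-server queue: each patient takes the earliest-free bed."""
    free_at = np.zeros(n_beds)            # time at which each bed becomes free
    waits, t = [], 0.0
    for inter, service in zip(interarrivals, services):
        t += inter                        # patient arrival time
        bed = np.argmin(free_at)          # earliest available bed
        start = max(t, free_at[bed])
        waits.append(start - t)
        free_at[bed] = start + service
    return np.mean(waits)

rng = np.random.default_rng(2024)         # one shared stream = CRN
interarrivals = rng.exponential(10.0, size=5000)   # minutes between arrivals
services = rng.exponential(45.0, size=5000)        # minutes of bed occupancy

for beds in (3, 4, 5, 6):
    mean_wait = simulate_mean_wait(beds, interarrivals, services)
    print(beds, "beds -> mean wait", round(mean_wait, 1), "min")
```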

Keywords: emergency department (ED), common random number (CRN), national emergency department overcrowding score (NEDOCS), discrete event simulation (DES)

Procedia PDF Downloads 333
5187 Does the Presence of Psychotic Features Itself Carry a Risk for Metabolic Syndrome?

Authors: Rady A., Elsheshai A., Elsawy M., Nagui R.

Abstract:

Background and Aim: Metabolic syndrome affects around 20% of the general population, and antipsychotics have been incriminated as a serious risk factor that may provoke such derangement. The aim of our study is to assess metabolic syndrome in patients presenting with psychotic features (delusions and hallucinations), whether due to schizophrenia or a mood disorder, and to compare results across drug-naïve patients, patients on medication, and healthy controls. Subjects and Methods: The study recruited 40 schizophrenic patients, half of them drug-naïve and the other half on antipsychotics; 40 patients with mood disorder with psychotic features, half of them drug-naïve and the other half on medication; and 20 healthy controls. Exclusion criteria were applied to rule out patients with pre-existing endocrine or metabolic disorders that might interfere with the results, in order to minimize confounding bias. Metabolic syndrome was assessed by measuring parameters including weight, body mass index, waist circumference, triglyceride level, HDL, fasting glucose, fasting insulin, and insulin resistance. Results: No difference was found when comparing drug-naïve patients to those on medication in either the schizophrenia or the psychotic mood disorder arm. Schizophrenic patients, whether on medication or drug-naïve, showed a difference from the control group in fasting glucose, and schizophrenic patients on medication also showed a difference in insulin resistance compared to the control group. On the other hand, patients with psychotic mood disorder, whether drug-naïve or on medication, differed from the control group in fasting insulin level; those on medication also differed from controls in insulin resistance. Conclusion: Our study did not reveal a difference in metabolic syndrome between patients with psychotic features on medication and those who were drug-naïve. Only patients with psychotic features on medication showed insulin resistance. Schizophrenic patients, drug-naïve or on medication, tended to show higher fasting glucose, while patients with psychotic mood disorder, drug-naïve or on medication, tended to show higher fasting insulin. This study suggests that the presence of psychotic features itself, regardless of medication status, carries a risk for insulin resistance and metabolic syndrome. Limitation: This study is limited by its number of participants; larger samples should be included in future studies in order to extrapolate the results. Longitudinal cohort studies are needed to evaluate this hypothesis.

Keywords: schizophrenia, metabolic syndrome, psychosis, insulin resistance

Procedia PDF Downloads 521
5186 High Performance Computing Enhancement of Agent-Based Economic Models

Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna

Abstract:

This research presents the details of the implementation of a high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach to studying the economy as a dynamic system of interacting heterogeneous agents, and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, like major disasters, changes in policies, exogenous shocks, etc., on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions, and eventually affect the macroeconomic parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. In order to address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using the message passing interface (MPI). A balanced distribution of computational load among MPI processes (i.e. CPU cores) of computer clusters, while taking all the interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g. credit networks), whereas others are dense with random links (e.g. consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions like the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI process are adopted. Efficient communication among MPI processes is achieved by combining MPI derived data types with the new features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e. about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro-zone (i.e. 322 million agents).
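
A minimal mpi4py sketch of the communication pattern described above - posting non-blocking exchanges of boundary data and doing local agent updates before waiting - is given below. The buffer contents, sizes, and neighbour topology are placeholders, not the actual ABEM data structures or the authors' derived-datatype layout.

```python
# Minimal sketch (mpi4py) of overlapping communication with computation:
# post non-blocking exchanges of boundary agent data, update local agents,
# then wait for completion. Run with: mpiexec -n 4 python overlap_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Placeholder "agent" state owned by this MPI process.
local_agents = np.random.rand(100_000)
send_buf = local_agents[:1000].copy()          # data needed by the neighbour
recv_buf = np.empty_like(send_buf)

neighbour = (rank + 1) % size
source = (rank - 1) % size

# Post the non-blocking communication first ...
reqs = [comm.Isend(send_buf, dest=neighbour, tag=0),
        comm.Irecv(recv_buf, source=source, tag=0)]

# ... then do purely local computation while the messages are in flight.
local_agents *= 0.99                           # stand-in for local agent updates

# Finally, wait for completion and use the received remote data.
MPI.Request.Waitall(reqs)
local_agents[:1000] += 0.01 * recv_buf
```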

Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process

Procedia PDF Downloads 113
5185 Identification of Spam Keywords Using Hierarchical Category in C2C E-Commerce

Authors: Shao Bo Cheng, Yong-Jin Han, Se Young Park, Seong-Bae Park

Abstract:

Consumer-to-Consumer (C2C) e-commerce has been growing at a very high speed in recent years. Since identical or nearly identical kinds of products compete with one another through keyword search in C2C e-commerce, some sellers describe their products with spam keywords that are popular but not related to their products. Though such products get more chances to be retrieved and selected by consumers than those without spam keywords, the spam keywords mislead the consumers and waste their time. This problem has been reported in many commercial services like eBay and Taobao, but there has been little research to solve it. As a solution to this problem, this paper proposes a method to classify whether the keywords of a product are spam or not. The proposed method assumes that a keyword for a given product is more reliable if the keyword is commonly observed in the specifications of products that are the same as, or of the same kind as, the given product. This is because the hierarchical category of a product is, in general, determined precisely by the seller of the product, and so is the specification of the product. Since higher layers of the hierarchical category represent more general kinds of products, a reliability degree is determined separately for each layer. Hence, reliability degrees from the different layers of the hierarchical category become features for keywords, and they are used together with features derived only from specifications for classification of the keywords. A Support Vector Machine is adopted as the base classifier using these features, since it is powerful and widely used in many classification tasks. In the experiments, the proposed method is evaluated with a gold-standard dataset from Yi-han-wang, a Chinese C2C e-commerce site, and is compared with a baseline method that does not consider the hierarchical category. The experimental results show that the proposed method outperforms the baseline in F1-measure, which demonstrates that spam keywords are effectively identified using the hierarchical category in C2C e-commerce.
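
A hedged sketch of the classification step is given below: each keyword is represented by reliability degrees computed at several category layers plus specification-based features, and a linear SVM decides spam vs. non-spam. The feature construction, labelling rule, and data are invented for illustration; the paper's exact reliability formula is not reproduced.

```python
# Sketch: classify keywords as spam/non-spam from per-layer "reliability"
# features plus specification features, using a linear SVM (illustrative data).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features: reliability of the keyword at 3 category layers
# (leaf, middle, root) plus 2 specification-derived features.
reliability = rng.random((n, 3))
spec_feats = rng.random((n, 2))
X = np.hstack([reliability, spec_feats])
# Toy labelling rule: keywords that are unreliable at the leaf layer are spam.
y = (reliability[:, 0] < 0.3).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LinearSVC(C=1.0).fit(X_tr, y_tr)
print("F1:", round(f1_score(y_te, clf.predict(X_te)), 3))
```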

Keywords: spam keyword, e-commerce, keyword features, spam filtering

Procedia PDF Downloads 278
5184 Food Insecurity Assessment, Consumption Pattern and Implications of Integrated Food Security Phase Classification: Evidence from Sudan

Authors: Ahmed A. A. Fadol, Guangji Tong, Wlaa Mohamed

Abstract:

This paper provides a comprehensive analysis of food insecurity in Sudan, focusing on consumption patterns and their implications, employing the Integrated Food Security Phase Classification (IPC) assessment framework. Years of conflict and economic instability have driven large segments of the population in Sudan into crisis levels of acute food insecurity according to the IPC. A substantial number of people are estimated to currently face emergency conditions, with an additional sizeable portion categorized under less severe but still extreme hunger levels. In this study, we explore the multifaceted nature of food insecurity in Sudan, considering its historical, political, economic, and social dimensions. An analysis of consumption patterns and trends was conducted, taking into account cultural influences, dietary shifts, and demographic changes. Furthermore, we employ logistic regression and random forest analysis to identify significant independent variables influencing food security status in Sudan. Random forest clearly outperforms logistic regression in terms of area under the curve (AUC), accuracy, precision, and recall. Forward projections of the IPC for Sudan estimate that 15 million individuals are anticipated to face Crisis level (IPC Phase 3) or worse acute food insecurity conditions between October 2023 and February 2024. Of this total, 60% are concentrated in Greater Darfur, Greater Kordofan, and Khartoum State, with Greater Darfur alone representing 29%. These findings emphasize the urgent need for both short-term humanitarian aid and long-term strategies to address Sudan's deepening food insecurity crisis.
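
The model comparison reported above can be reproduced in outline as follows. The household-survey data are not available here, so synthetic placeholder data stand in for the food-security predictors; the metric list mirrors the abstract.

```python
# Sketch: compare logistic regression and random forest on AUC, accuracy,
# precision and recall (placeholder data instead of the Sudan household survey).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, accuracy_score, precision_score, recall_score

X, y = make_classification(n_samples=2000, n_features=15, weights=[0.7, 0.3], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

models = {"logistic": LogisticRegression(max_iter=1000),
          "random_forest": RandomForestClassifier(n_estimators=300, random_state=1)}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    pred = model.predict(X_te)
    print(name,
          "AUC=%.3f" % roc_auc_score(y_te, proba),
          "acc=%.3f" % accuracy_score(y_te, pred),
          "prec=%.3f" % precision_score(y_te, pred),
          "rec=%.3f" % recall_score(y_te, pred))
```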

Keywords: food insecurity, consumption patterns, logistic regression, random forest analysis

Procedia PDF Downloads 45
5183 Automatic Adult Age Estimation Using Deep Learning of the ResNeXt Model Based on CT Reconstruction Images of the Costal Cartilage

Authors: Ting Lu, Ya-Ru Diao, Fei Fan, Ye Xue, Lei Shi, Xian-e Tang, Meng-jun Zhan, Zhen-hua Deng

Abstract:

Accurate adult age estimation (AAE) is a significant and challenging task in the forensic and archaeological fields. Attempts have been made to explore optimal adult age metrics, and the rib is considered a potential age marker. The traditional way is to extract age-related features designed by experts from macroscopic or radiological images, followed by classification or regression analysis. Those results still have not met the high-level requirements for practice, and a limitation of feature design and manual extraction methods is loss of information, since the features are not necessarily designed explicitly for extracting information relevant to age. Deep learning (DL) has recently garnered much interest in image learning and computer vision. It enables learning features that are important without a prior bias or hypothesis and could be supportive of AAE. This study aimed to develop DL models for AAE based on CT images and compare their performance to the manual visual scoring method. Chest CT data were reconstructed using volume rendering (VR). Retrospective data of 2500 patients aged 20.00-69.99 years were obtained between December 2019 and September 2021. Five-fold cross-validation was performed, and datasets were randomly split into training and validation sets in a 4:1 ratio for each fold. Before feeding the inputs into the networks, all images were augmented with random rotation and vertical flip, normalized, and resized to 224×224 pixels. ResNeXt was chosen as the DL baseline due to its advantages of higher efficiency and accuracy in image classification. Mean absolute error (MAE) was the primary parameter. Independent data from 100 patients acquired between March and April 2022 were used as a test set. The manual method completely followed the prior study, which reported the lowest MAEs (5.31 in males and 6.72 in females) among similar studies. CT data and VR images were used. The radiation density of the first costal cartilage was recorded using CT data on the workstation. The osseous and calcified projections of the first to seventh costal cartilages were scored based on VR images using an eight-stage staging technique. According to the results of the prior study, the optimal models were the decision tree regression model in males and the stepwise multiple linear regression equation in females. Predicted ages of the test set were calculated separately using different models by sex. A total of 2600 patients (training and validation sets, mean age = 45.19 years ± 14.20 [SD]; test set, mean age = 46.57 ± 9.66) were evaluated in this study. In ResNeXt model training, MAEs of 3.95 in males and 3.65 in females were obtained. On the test set, DL achieved MAEs of 4.05 in males and 4.54 in females, which were far better than the MAEs of 8.90 and 6.42, respectively, for the manual method. These results show that the DL ResNeXt model outperformed the manual method in AAE based on CT reconstruction of the costal cartilage, and the developed system may be a supportive tool for AAE.
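
A minimal PyTorch/torchvision sketch of the setup described above - a ResNeXt backbone with a single-output regression head, MAE (L1) loss, and the stated augmentations - is shown below. The hyperparameters, the fake batch, and the data pipeline are placeholders, not the study's training configuration.

```python
# Sketch: ResNeXt-based age regression trained with MAE (L1) loss.
# Dataset, augmentation values, and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torchvision import models, transforms

augment = transforms.Compose([
    transforms.RandomRotation(10),          # random rotation
    transforms.RandomVerticalFlip(),        # vertical flip
    transforms.Resize((224, 224)),          # resize to 224x224
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
])

model = models.resnext50_32x4d(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)   # single continuous output: age
criterion = nn.L1Loss()                          # mean absolute error
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a fake batch of VR images.
images = torch.randn(8, 3, 224, 224)
ages = torch.rand(8, 1) * 50 + 20               # ages 20-70
optimizer.zero_grad()
loss = criterion(model(images), ages)
loss.backward()
optimizer.step()
print("batch MAE:", loss.item())
```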

Keywords: forensic anthropology, age determination by the skeleton, costal cartilage, CT, deep learning

Procedia PDF Downloads 58
5182 Ensemble Methods in Machine Learning: An Algorithmic Approach to Derive Distinctive Behaviors of Criminal Activity Applied to the Poaching Domain

Authors: Zachary Blanks, Solomon Sonya

Abstract:

Poaching presents a serious threat to endangered animal species, environmental conservation, and human life. Additionally, some poaching activity has even been linked to supplying funds to support terrorist networks elsewhere around the world. Consequently, agencies dedicated to protecting wildlife habitats have a near-intractable task of adequately patrolling an entire area (spanning several thousand kilometers) given the limited resources, funds, and personnel at their disposal. Thus, agencies need predictive tools that are both high-performing and easily implementable by the user to help in learning how the significant features (e.g. animal population densities, topography, behavior patterns of the criminals within the area, etc.) interact with each other, in hopes of abating poaching. This research develops a classification model using machine learning algorithms to aid in forecasting future attacks that is both easy to train and performs well when compared to other models. In this research, we demonstrate how data imputation methods (specifically predictive mean matching, gradient boosting, and random forest multiple imputation) can be applied to analyze data and create significant predictions across a varied data set. Specifically, we apply these methods to improve the accuracy of adopted prediction models (logistic regression, support vector machine, etc.). Finally, we assess the performance of the model and the accuracy of our data imputation methods by learning on a real-world data set constituting four years of imputed data and testing on one year of non-imputed data. This paper provides three main contributions. First, we extend work done by the Teamcore and CREATE (Center for Risk and Economic Analysis of Terrorism Events) research group at the University of Southern California (USC), working in conjunction with the Department of Homeland Security, to apply game theory and machine learning algorithms to develop more efficient ways of reducing poaching. This research introduces ensemble methods (random forests and stochastic gradient boosting) and applies them to real-world poaching data gathered from the Ugandan rain forest park rangers. Next, we consider the effect of data imputation on both the performance of various algorithms and the general accuracy of the method itself when applied to a dependent variable where a large number of observations are missing. Third, we provide an alternate approach to predict the probability of observing poaching both by season and by month. The results from this research are very promising. We conclude that by using stochastic gradient boosting to predict observations for non-commercial poaching by season, we are able to produce statistically equivalent results while being orders of magnitude faster in computation time and complexity. Additionally, when predicting potential poaching incidents by individual month rather than by entire seasons, boosting techniques produce a mean area under the curve increase of approximately 3% relative to previous prediction schedules by entire seasons.
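
To make the imputation-then-boosting pipeline concrete, the sketch below imputes missing predictor values with a random-forest-based iterative imputer and then fits a stochastic gradient boosting classifier. It uses synthetic data and scikit-learn stand-ins for the methods named in the abstract; predictive mean matching is not reproduced.

```python
# Sketch: impute missing values (random-forest-based iterative imputation),
# then classify poaching incidents with stochastic gradient boosting.
# Synthetic data; scikit-learn stand-ins for the methods named in the abstract.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor, GradientBoostingClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1500, n_features=10, random_state=7)
# Knock out ~20% of the entries to mimic missing field observations.
rng = np.random.default_rng(7)
X[rng.random(X.shape) < 0.2] = np.nan

imputer = IterativeImputer(estimator=RandomForestRegressor(n_estimators=50, random_state=7),
                           max_iter=5, random_state=7)
X_imp = imputer.fit_transform(X)

# subsample < 1.0 is what makes the boosting "stochastic".
gbm = GradientBoostingClassifier(n_estimators=200, subsample=0.8, random_state=7)
print("CV AUC:", cross_val_score(gbm, X_imp, y, cv=5, scoring="roc_auc").mean())
```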

Keywords: ensemble methods, imputation, machine learning, random forests, statistical analysis, stochastic gradient boosting, wildlife protection

Procedia PDF Downloads 271
5181 Performance Degradation of GLR Test Statistics for Spatial Signal Detection

Authors: Olesya Bolkhovskaya, Alexander Maltsev

Abstract:

Antenna arrays are widely used in modern radio systems, in sonar, and in communications. Detection of a useful signal against a background of noise is commonly based on the GLRT method. There are a large number of such problems, differing in the a priori information that is assumed known. In this work, in contrast to the majority of previously solved problems, only differences in the spatial properties of the signal and noise are used for detection. We analyze the influence of the degree of signal non-coherence and of noise inhomogeneity on the performance characteristics of different GLRT statistics. The signal and noise are described by means of spatial covariance matrices C for different amounts of known a priori information. The partially coherent signal is simulated as a plane wave with a random angle of incidence with respect to the array normal. Background noise is simulated as a random process with a uniform distribution in each element. Results on the degradation of the performance characteristics for the different cases are presented in this work.
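
As an illustration of a GLRT-type spatial statistic, the sketch below simulates array snapshots with a plane-wave signal in white noise and computes one common GLR statistic: the ratio of the largest eigenvalue of the sample spatial covariance matrix to the mean of the remaining eigenvalues. This is only one member of the family of statistics analysed in the paper, and the array geometry, SNR, and angle are illustrative values.

```python
# Sketch: one GLRT-type statistic for an M-element array - the ratio of the
# largest eigenvalue of the sample spatial covariance matrix to the mean of
# the remaining eigenvalues (only one of the statistics the paper analyses).
import numpy as np

rng = np.random.default_rng(3)
M, N = 8, 200                       # array elements, snapshots
d = 0.5                             # element spacing in wavelengths

def snapshots(signal_present, snr_db=0.0, angle_deg=12.0):
    noise = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
    if not signal_present:
        return noise
    steering = np.exp(2j * np.pi * d * np.arange(M) * np.sin(np.radians(angle_deg)))
    amp = 10 ** (snr_db / 20) * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    return noise + np.outer(steering, amp)

def glr_statistic(X):
    C = X @ X.conj().T / X.shape[1]             # sample spatial covariance matrix
    eig = np.sort(np.linalg.eigvalsh(C))        # ascending, real eigenvalues
    return eig[-1] / eig[:-1].mean()            # largest vs. mean of the rest

print("noise only  :", round(glr_statistic(snapshots(False)), 2))
print("signal+noise:", round(glr_statistic(snapshots(True)), 2))
```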

Keywords: GLRT, Neyman-Pearson criterion, test statistics, degradation, spatial processing, multielement antenna array

Procedia PDF Downloads 372
5180 Determining the Most Efficient Test Available in Software Testing

Authors: Qasim Zafar, Matthew Anderson, Esteban Garcia, Steven Drager

Abstract:

Software failures can present an enormous detriment to people's lives and cost millions of dollars to repair when they are unexpectedly encountered in the wild. Although a significant portion of the software development lifecycle and its resources is dedicated to testing, software failures remain a relatively frequent occurrence. Nevertheless, the evaluation of testing effectiveness remains at the forefront of ensuring high-quality software, and software metrics play a critical role in providing valuable insights into quantifiable objectives to assess the level of assurance and confidence in the system. As the selection of appropriate metrics can be an arduous process, the goal of this paper is to shed light on the significance of software metrics by examining a range of testing techniques and metrics as well as identifying key areas for improvement. Additionally, through this investigation, readers will gain a deeper understanding of how metrics can help to drive informed decision-making on delivering high-quality software and facilitate continuous improvement in testing practices.

Keywords: software testing, software metrics, testing effectiveness, black box testing, random testing, adaptive random testing, combinatorial testing, fuzz testing, equivalence partition, boundary value analysis, white box testing

Procedia PDF Downloads 64
5179 The Influence of Microscopic Features on the Self-Cleaning Ability of Developed 3D Printed Fabric-Like Structures Using Different Printing Parameters

Authors: Ayat Adnan Atwah, Muhammad A. Khan

Abstract:

Self-cleaning surfaces are receiving significant attention in industrial fields. For textile fabrics in particular, self-cleaning surfaces are typically created by manipulating surface features with the help of coatings and nanoparticles, which are costly and far more complicated. However, controlling the fabrication parameters of textile fabrics at the microscopic level to explore their potential for self-cleaning has not been addressed. This study aimed to establish the context of self-cleaning textile fabrics by controlling the fabrication parameters of the textile fabric at the microscopic level. Therefore, 3D-printed textile fabrics were fabricated using the low-cost fused filament fabrication (FFF) technique. The printing parameters, such as orientation angle (O), layer height (LH), and extruder width (EW), were used to control the microscopic features of the printed fabrics. Combinations of the three printing parameters were created to provide the best self-cleaning textile fabric surface: LH (0.15, 0.13, 0.10 mm) and EW (0.5, 0.4, 0.3 mm), along with two different O values (45º and 90º). Three different thermoplastic flexible filament materials were used: TPU 98A, TPE felaflex, and TPC flex45. The printing parameters were optimised to obtain the optimum self-cleaning ability of the printed specimens. Furthermore, the impact of these characteristics on mechanical strength at the fabric-woven-structure level was investigated. The study revealed that the printing parameters significantly affect the self-cleaning properties after adjusting the selected combination of layer height, extruder width, and printing orientation. A linear regression model was developed to demonstrate the association between the 3D printing parameters (layer height, extruder width, and orientation) and the self-cleaning ability. According to the experimental results, TPE felaflex has a better self-cleaning ability than the other two materials.
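
A minimal sketch of the regression step mentioned above: fitting a linear model of a self-cleaning response on layer height, extruder width, and orientation. The response variable (a hypothetical score such as a water contact angle) and all numerical values are placeholders, not the study's measurements.

```python
# Sketch: linear regression of a hypothetical self-cleaning score on the
# three printing parameters (layer height, extruder width, orientation).
# All values below are placeholders, not measurements from the study.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: layer height (mm), extruder width (mm), orientation (deg)
X = np.array([[lh, ew, o]
              for lh in (0.15, 0.13, 0.10)
              for ew in (0.5, 0.4, 0.3)
              for o in (45, 90)], dtype=float)
# Hypothetical measured response (e.g. water contact angle in degrees).
rng = np.random.default_rng(5)
y = 120 - 100 * X[:, 0] + 20 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 1, len(X))

model = LinearRegression().fit(X, y)
print("coefficients (LH, EW, O):", model.coef_.round(2),
      "intercept:", round(model.intercept_, 2))
```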

Keywords: 3D printing, self-cleaning fabric, microscopic features, printing parameters, fabrication

Procedia PDF Downloads 66
5178 A New Approach to Predicting Physical Biometrics from Behavioural Biometrics

Authors: Raid R. O. Al-Nima, S. S. Dlay, W. L. Woo

Abstract:

A relationship between face and signature biometrics is established in this paper. A new approach is developed to predict faces from signatures by using artificial intelligence. A multilayer perceptron (MLP) neural network is used to generate face details from features extracted from signatures; here, the face is the physical biometric and the signature is the behavioural biometric. The new method establishes a relationship between the two biometrics and regenerates a visible face image from the signature features. Furthermore, the performance of our new technique is demonstrated in terms of minimum error rates compared to published work.
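
In outline, the signature-to-face regression described above can be sketched with a multilayer perceptron that maps signature feature vectors to face feature vectors. The feature dimensions and the synthetic data below are placeholders, not the authors' datasets or network configuration.

```python
# Sketch: an MLP that regresses face feature vectors from signature feature
# vectors (placeholder dimensions and random data for illustration).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_samples, sig_dim, face_dim = 500, 32, 64
signatures = rng.random((n_samples, sig_dim))
# Pretend the face features depend (noisily) on the signature features.
W = rng.random((sig_dim, face_dim))
faces = signatures @ W + 0.05 * rng.standard_normal((n_samples, face_dim))

X_tr, X_te, y_tr, y_te = train_test_split(signatures, faces, test_size=0.2, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)
print("R^2 on held-out pairs:", round(mlp.score(X_te, y_te), 3))
```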

Keywords: behavioural biometric, face biometric, neural network, physical biometric, signature biometric

Procedia PDF Downloads 464
5177 Longitudinal Study of the Phenomenon of Acting White in Hungarian Elementary Schools Analysed by Fixed and Random Effects Models

Authors: Lilla Dorina Habsz, Marta Rado

Abstract:

Popularity in primary school is affected by a variety of factors, such as academic achievement and ethnicity. The main goal of our study was to analyse whether acting white exists in Hungarian elementary schools; in other words, we observed whether Roma students penalize those in-group members who achieve high academic results. Furthermore, we show how popularity is influenced by changes in academic achievement in inter-ethnic relations. The empirical basis of our research was the 'competition and negative networks' longitudinal dataset, which was collected by the MTA TK 'Lendület' RECENS research group. This research followed 11- and 12-year-old students over a two-year period. The survey was analysed using fixed and random effects models. Overall, we found a positive correlation between grades and popularity, but no evidence for the acting white effect. However, better grades were more positively evaluated within the majority group than within the minority group, which may further increase inequalities.
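
A hedged sketch of the panel analysis: with a long-format dataset of student-by-wave observations, a random-effects (mixed) model and a within/fixed-effects specification can be fitted as below. The variable names and the simulated data are illustrative, not the RECENS survey's.

```python
# Sketch: fixed- and random-effects models of popularity on grades for panel
# data (student x wave). Variable names and data are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n_students, n_waves = 200, 4
df = pd.DataFrame({
    "student": np.repeat(np.arange(n_students), n_waves),
    "wave": np.tile(np.arange(n_waves), n_students),
})
student_effect = rng.normal(0, 1, n_students)[df["student"]]
df["grade"] = rng.normal(3.5, 0.7, len(df))
df["popularity"] = 0.4 * df["grade"] + student_effect + rng.normal(0, 1, len(df))

# Random effects: random intercept for each student.
re_model = smf.mixedlm("popularity ~ grade", df, groups=df["student"]).fit()
print("random-effects grade coefficient:", round(re_model.params["grade"], 3))

# Fixed effects: student dummies absorb all time-invariant characteristics.
fe_model = smf.ols("popularity ~ grade + C(student)", df).fit()
print("fixed-effects grade coefficient: ", round(fe_model.params["grade"], 3))
```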

Keywords: academic achievement, elementary school, ethnicity, popularity

Procedia PDF Downloads 179
5176 Probabilistic Gathering of Agents with Simple Sensors: Distributed Algorithm for Aggregation of Robots Equipped with Binary On-Board Detectors

Authors: Ariel Barel, Rotem Manor, Alfred M. Bruckstein

Abstract:

We present a probabilistic gathering algorithm for agents that can only detect the presence of other agents in front of or behind them. The agents act in the plane and are identical and indistinguishable, oblivious, and lack any means of direct communication. They do not have a common frame of reference in the plane and choose their orientation (direction of possible motion) at random. The analysis of the gathering process assumes that the agents act synchronously in selecting random orientations that remain fixed during each unit time-interval. Two algorithms are discussed. The first one assumes discrete jumps based on the sensing results given the randomly selected motion direction, and in this case, extensive experimental results exhibit probabilistic clustering into a circular region with radius equal to the step-size in time proportional to the number of agents. The second algorithm assumes agents with continuous sensing and motion, and in this case, we can prove gathering into a very small circular region in finite expected time.
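
A toy numerical illustration of the first (discrete-jump) algorithm is sketched below. The exact jump rule of the paper is not reproduced; here each agent draws a random heading and steps forward only if it senses at least one agent in front of it and none behind it, which is one plausible reading of "jumps based on the sensing results" and is an assumption of this sketch.

```python
# Toy sketch of probabilistic gathering with binary front/behind sensing.
# The jump rule below is an assumption for illustration, not the paper's exact
# rule: an agent draws a random heading and jumps one step forward only if it
# senses at least one agent in front of it and none behind it.
import numpy as np

rng = np.random.default_rng(1)
n_agents, step, n_rounds = 50, 1.0, 2000
pos = rng.uniform(-25, 25, size=(n_agents, 2))

def swarm_radius(p):
    return np.max(np.linalg.norm(p - p.mean(axis=0), axis=1))

print("initial radius:", round(swarm_radius(pos), 2))
for _ in range(n_rounds):
    theta = rng.uniform(0, 2 * np.pi, n_agents)
    headings = np.column_stack([np.cos(theta), np.sin(theta)])
    new_pos = pos.copy()                      # agents act synchronously
    for i in range(n_agents):
        proj = (pos - pos[i]) @ headings[i]   # signed distances along the heading
        proj[i] = 0.0
        if np.any(proj > 0) and not np.any(proj < 0):
            new_pos[i] = pos[i] + step * headings[i]
    pos = new_pos
print("final radius:  ", round(swarm_radius(pos), 2))
```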

Keywords: control, decentralized, gathering, multi-agent, simple sensors

Procedia PDF Downloads 151
5175 K-Means Based Matching Algorithm for Multi-Resolution Feature Descriptors

Authors: Shao-Tzu Huang, Chen-Chien Hsu, Wei-Yen Wang

Abstract:

Matching high-dimensional features between images is computationally expensive for exhaustive search approaches in computer vision. Although the feature dimension can be reduced by simplifying assumptions about the homography, matching accuracy may degrade as a tradeoff. In this paper, we present a feature matching method based on the k-means algorithm that reduces the matching cost and matches the features between images without relying on a simplified geometric assumption. Experimental results show that the proposed method outperforms previous linear exhaustive search approaches in terms of the inlier ratio of matched pairs.
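
A hedged sketch of the idea: cluster one image's descriptors with k-means, then match each query descriptor only against the members of its nearest cluster instead of searching exhaustively. Random vectors stand in for SIFT descriptors, and the cluster count and distance measure are illustrative choices, not the paper's settings.

```python
# Sketch: k-means-accelerated descriptor matching. Query descriptors are
# compared only with the members of their nearest cluster rather than with
# every reference descriptor (random vectors stand in for SIFT descriptors).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
ref = rng.random((2000, 128)).astype(np.float32)     # reference image descriptors
qry = ref[rng.choice(2000, 300)] + 0.01 * rng.standard_normal((300, 128))

k = 32
km = KMeans(n_clusters=k, n_init=5, random_state=0).fit(ref)
buckets = [np.where(km.labels_ == c)[0] for c in range(k)]

matches = []
for q in qry:
    cluster = km.predict(q[np.newaxis, :])[0]        # nearest cluster centre
    cand = buckets[cluster]                          # candidate reference descriptors
    d = np.linalg.norm(ref[cand] - q, axis=1)
    matches.append(cand[np.argmin(d)])
print("matched", len(matches), "query descriptors against", k, "buckets")
```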

Keywords: feature matching, k-means clustering, SIFT, RANSAC

Procedia PDF Downloads 340
5174 Hybrid Approach for Face Recognition Combining Gabor Wavelet and Linear Discriminant Analysis

Authors: A. Annis Fathima, V. Vaidehi, S. Ajitha

Abstract:

Face recognition systems find many applications in surveillance and human-computer interaction. As these applications are of much importance and demand high accuracy, more robustness is expected of the face recognition system, along with less computation time. In this paper, a hybrid approach for face recognition combining Gabor Wavelet and Linear Discriminant Analysis (HGWLDA) is proposed. The normalized input grayscale image is approximated and reduced in dimension to lower the processing overhead of the Gabor filters. This image is convolved with a bank of Gabor filters with varying scales and orientations. LDA, a subspace analysis technique, is used to reduce the intra-class scatter and maximize the inter-class scatter. The techniques used are 2-dimensional Linear Discriminant Analysis (2D-LDA), 2-dimensional bidirectional LDA ((2D)2LDA), and weighted 2-dimensional bidirectional Linear Discriminant Analysis (Wt (2D)2 LDA). LDA reduces the feature dimension by extracting the features with greater variance. A k-Nearest Neighbour (k-NN) classifier is used to classify and recognize the test image by comparing its features with each of the training set features. The HGWLDA approach is robust against illumination conditions, as the Gabor features are illumination invariant. This approach also aims at a better recognition rate using fewer features for varying expressions. The performance of the proposed HGWLDA approach is evaluated using the AT&T database, the MIT-India face database, and the faces94 database. It is found that the proposed HGWLDA approach provides better results than the existing Gabor approach.
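
In outline, the HGWLDA pipeline can be sketched as below: a bank of Gabor filters at several scales and orientations, dimensionality reduction with ordinary one-dimensional LDA standing in for the 2D-LDA variants, and a k-NN classifier. The images are random placeholders and the filter parameters are illustrative, not the paper's configuration.

```python
# Sketch of the pipeline: Gabor filter bank -> LDA subspace -> k-NN classifier.
# Ordinary LDA stands in for the 2D-LDA variants; images are random placeholders.
import numpy as np
import cv2
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

def gabor_features(img):
    feats = []
    for sigma in (2.0, 4.0):                         # scales
        for theta in np.arange(0, np.pi, np.pi / 4): # orientations
            kern = cv2.getGaborKernel((15, 15), sigma, theta, 8.0, 0.5)
            resp = cv2.filter2D(img, cv2.CV_32F, kern)
            feats.extend([resp.mean(), resp.std()])  # simple pooled statistics
    return np.array(feats)

rng = np.random.default_rng(0)
faces = rng.random((120, 32, 32)).astype(np.float32)  # placeholder face images
labels = np.repeat(np.arange(12), 10)                 # 12 subjects, 10 images each

X = np.array([gabor_features(f) for f in faces])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25,
                                          random_state=0, stratify=labels)

lda = LinearDiscriminantAnalysis(n_components=8).fit(X_tr, y_tr)
knn = KNeighborsClassifier(n_neighbors=1).fit(lda.transform(X_tr), y_tr)
print("accuracy on placeholder data:", knn.score(lda.transform(X_te), y_te))
```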

Keywords: face recognition, Gabor wavelet, LDA, k-NN classifier

Procedia PDF Downloads 454
5173 Design and Implementation of an Image Based System to Enhance the Security of ATM

Authors: Seyed Nima Tayarani Bathaie

Abstract:

In this paper, an image-acquisition system was designed and implemented by optimizing object detection algorithms that use Haar features. The optimized algorithm performs face detection and eye detection separately; cascading the two yields a clear image of the user. Utilizing this feature brings higher security by preventing fraud, since services are provided to the user only on the condition that a clear image of his or her face has already been captured, which excludes inappropriate persons. In order to expedite processing and eliminate unnecessary computations, the input image was compressed, a motion detection function was included in the program, and the detection window size was constrained.
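
A hedged OpenCV sketch of the face-then-eye cascade step described above is given below; the ATM integration, image compression, and motion detection are omitted, and the minimum window size and thresholds are invented values rather than the paper's settings.

```python
# Sketch: cascaded Haar detection - find a face first, then search for eyes
# only inside the face region; a frame with a face and two eyes would be
# accepted for the ATM transaction (thresholds and sizes are illustrative).
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def clear_face_captured(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Confined detection window: ignore faces smaller than 80x80 pixels.
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=5, minSize=(80, 80))
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]          # search for eyes inside the face only
        eyes = eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
        if len(eyes) >= 2:
            return True
    return False

# Example usage with a single frame from the default camera (if one is present).
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    print("clear face captured:", clear_face_captured(frame))
```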

Keywords: face detection algorithm, Haar features, security of ATM

Procedia PDF Downloads 399
5172 Combined Optical Coherence Microscopy and Spectrally Resolved Multiphoton Microscopy

Authors: Bjorn-Ole Meyer, Dominik Marti, Peter E. Andersen

Abstract:

A multimodal imaging system combining spectrally resolved multiphoton microscopy (MPM) and optical coherence microscopy (OCM) is demonstrated. MPM and OCM are commonly integrated into multimodal imaging platforms to combine functional and morphological information. The MPM signals, such as two-photon fluorescence emission (TPFE) and signals created by second harmonic generation (SHG), are biomarkers which provide information on functional biological features, such as the ratio of pyridine nucleotide (NAD(P)H) to flavin adenine dinucleotide (FAD) used in the classification of cancerous tissue. While spectrally resolved imaging allows for the study of biomarkers, using a spectrometer as a detector limits the imaging speed of the system significantly. To overcome these limitations, an OCM setup was added to the system, which allows for fast acquisition of structural information. Thus, after rapid imaging of larger specimens, navigation within the sample is possible. Subsequently, distinct features can be selected for further investigation using MPM. Additionally, by probing a different contrast, complementary information is obtained, and different biomarkers can be investigated. OCM images of tissue and cell samples are obtained, and distinctive features are evaluated using MPM to illustrate the benefits of the system.

Keywords: optical coherence microscopy, multiphoton microscopy, multimodal imaging, two-photon fluorescence emission

Procedia PDF Downloads 492
5171 Analytical Slope Stability Analysis Based on the Statistical Characterization of Soil Shear Strength

Authors: Bernardo C. P. Albuquerque, Darym J. F. Campos

Abstract:

Increasing our ability to solve complex engineering problems is directly related to the processing capacity of computers. By means of such equipment, one is able to run numerical algorithms quickly and accurately. Besides the increasing interest in numerical simulations, probabilistic approaches are also of great importance. In this way, statistical tools have shown their relevance to the modelling of practical engineering problems. In general, statistical approaches to such problems assume that the random variables involved follow a normal distribution. This assumption tends to provide incorrect results when skewed data are present, since normal distributions are symmetric about their means. Thus, in order to visualize and quantify this aspect, 9 statistical distributions (symmetric and skewed) have been considered to model a hypothetical slope stability problem. The data modelled are the friction angle of a superficial soil in Brasilia, Brazil. Despite its apparent universality, the normal distribution did not qualify as the best fit. In the present effort, data obtained in consolidated-drained triaxial tests and saturated direct shear tests have been modelled and used to analytically derive the probability density function (PDF) of the safety factor of a hypothetical slope based on the Mohr-Coulomb rupture criterion. Therefore, based on this analysis, it is possible to explicitly derive the failure probability considering the friction angle as a random variable. Furthermore, it is possible to compare the stability analysis when the friction angle is modelled as a Dagum distribution (the distribution that presented the best fit to the histogram) and as a normal distribution. This comparison leads to relevant differences when analyzed in light of risk management.
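
To make the probabilistic step concrete, the sketch below fits two candidate distributions to synthetic friction-angle data and estimates a failure probability for a dry, cohesionless infinite slope (FS = tan(phi)/tan(beta)) by Monte Carlo. The data, the use of a lognormal distribution in place of the Dagum family, the slope angle, and the simple slope model are all assumptions for illustration, not the paper's analytical derivation.

```python
# Sketch: fit candidate distributions to friction-angle data and estimate the
# failure probability of a dry, cohesionless infinite slope, FS = tan(phi)/tan(beta).
# Data, the lognormal stand-in for the Dagum family, and the slope angle are
# illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
phi_deg = rng.normal(30, 3, 300) + rng.exponential(1.5, 300)   # skewed synthetic data

candidates = {"normal": stats.norm, "lognormal": stats.lognorm}
fits = {}
for name, dist in candidates.items():
    params = dist.fit(phi_deg)
    ll = np.sum(dist.logpdf(phi_deg, *params))       # log-likelihood of the fit
    fits[name] = (dist, params)
    print(name, "log-likelihood:", round(ll, 1))

beta = np.radians(28.0)                              # slope inclination (assumed)
for name, (dist, params) in fits.items():
    phi_samples = np.radians(dist.rvs(*params, size=200_000, random_state=8))
    fs = np.tan(phi_samples) / np.tan(beta)          # factor of safety samples
    print(name, "P(FS < 1) =", round(np.mean(fs < 1.0), 4))
```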

Keywords: statistical slope stability analysis, skew distributions, probability of failure, functions of random variables

Procedia PDF Downloads 322
5170 Using Bidirectional Encoder Representations from Transformers to Extract Topic-Independent Sentiment Features for Social Media Bot Detection

Authors: Maryam Heidari, James H. Jones Jr.

Abstract:

Millions of online posts about different topics and products are shared on popular social media platforms. One use of this content is to provide crowd-sourced information about a specific topic, event or product. However, this use raises an important question: what percentage of information available through these services is trustworthy? In particular, might some of this information be generated by a machine, i.e., a bot, instead of a human? Bots can be, and often are, purposely designed to generate enough volume to skew an apparent trend or position on a topic, yet the consumer of such content cannot easily distinguish a bot post from a human post. In this paper, we introduce a model for social media bot detection which uses Bidirectional Encoder Representations from Transformers (Google Bert) for sentiment classification of tweets to identify topic-independent features. Our use of a Natural Language Processing approach to derive topic-independent features for our new bot detection model distinguishes this work from previous bot detection models. We achieve 94% accuracy classifying the contents of data as generated by a bot or a human, where the most accurate prior work achieved accuracy of 92%.
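
In outline, the feature-extraction step can be sketched as below: a pre-trained BERT-family sentiment pipeline scores each tweet, and the resulting topic-independent sentiment scores feed a downstream bot/human classifier. The model choice, the example tweets, and the labels are placeholders, not the paper's fine-tuned model or dataset.

```python
# Sketch: use a pre-trained BERT-family sentiment pipeline to turn tweets into
# topic-independent sentiment features, then train a bot/human classifier on
# those features (placeholder tweets, labels, and model choice).
import numpy as np
from transformers import pipeline
from sklearn.linear_model import LogisticRegression

sentiment = pipeline("sentiment-analysis")   # downloads a default BERT-family model

tweets = ["Great product, totally recommend it!!!",
          "I hate waiting in line at the bank.",
          "BUY NOW amazing deal click here",
          "Lovely weather for a walk today."]
labels = np.array([1, 0, 1, 0])              # 1 = bot, 0 = human (made-up labels)

def sentiment_features(texts):
    out = sentiment(texts)
    # Signed confidence: positive score if POSITIVE, negative if NEGATIVE.
    return np.array([[r["score"] if r["label"] == "POSITIVE" else -r["score"]]
                     for r in out])

X = sentiment_features(tweets)
clf = LogisticRegression().fit(X, labels)
print("training accuracy on the toy set:", clf.score(X, labels))
```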

Keywords: bot detection, natural language processing, neural network, social media

Procedia PDF Downloads 101
5169 Moving Object Detection Using Histogram of Uniformly Oriented Gradient

Authors: Wei-Jong Yang, Yu-Siang Su, Pau-Choo Chung, Jar-Ferr Yang

Abstract:

Moving object detection (MOD) is an important issue in advanced driver assistance systems (ADAS). Two important classes of moving objects in ADAS are pedestrians and scooters. In real-world systems, there are two important challenges for MOD: computational complexity and detection accuracy. Histogram of oriented gradient (HOG) features can easily capture the contour of an object while being robust to changes in illumination and shadowing. However, to reduce the execution time for real-time systems, the image must be downsampled, which increases the influence of outliers. For this reason, we propose histogram of uniformly-oriented gradient (HUG) features to obtain a more accurate description of the contour of the human body. In the testing phase, a support vector machine (SVM) with a linear kernel function is employed. Experimental results show the correctness and effectiveness of the proposed method. With SVM classifiers, real test results show that the proposed HUG features achieve better classification performance than the HOG ones.
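
A hedged sketch of the HOG-plus-linear-SVM baseline mentioned above (the proposed HUG variant itself is not reproduced): HOG descriptors are extracted with scikit-image and classified with a linear SVM on placeholder image patches and labels.

```python
# Sketch of the HOG + linear SVM baseline (the HUG variant is not reproduced):
# extract HOG descriptors and train a linear SVM on placeholder
# pedestrian/background patches.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
patches = rng.random((200, 64, 32))              # placeholder 64x32 image patches
labels = rng.integers(0, 2, 200)                 # 1 = pedestrian/scooter, 0 = background

X = np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2)) for p in patches])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = LinearSVC().fit(X_tr, y_tr)
print("accuracy on placeholder data:", clf.score(X_te, y_te))
```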

Keywords: moving object detection, histogram of oriented gradient, histogram of uniformly-oriented gradient, linear support vector machine

Procedia PDF Downloads 575
5168 Pareidolia and Perception of Anger in Vehicle Styles: Survey Results

Authors: Alan S. Hoback

Abstract:

Most people see human faces in car front and back ends because of the process of pareidolia. 96 people were surveyed to see how many of them saw a face in the vehicle styling. Participants were aged 18 to 72 years. 94% of the participants saw faces in the front-end design of production models. All participants who recognized faces indicated that most styles showed some degree of an angry expression. It was found that women were more likely to see faces in inanimate objects. However, with respect to whether women were more likely to perceive anger in the vehicle design, the results need further clarification. Survey responses were correlated with the design features of the vehicles to determine what cues the respondents were likely looking at when responding. Whether the features looked anthropomorphic was key to anger perception. Features such as the headlights, which could represent eyes, and the air intake, which could represent a mouth, had high correlations with trends in scores. Results are compared among models and makers, by groupings of body style classifications for the top 12 brands sold in the US, and by year for the top 20 models sold in the US in 2016. All of the top-selling models increased in perceived anger over the last 20 years or since the model was introduced, but the relative change varied by body style grouping.

Keywords: aggressive driving, face recognition, road rage, vehicle styling

Procedia PDF Downloads 128
5167 New Standardized Framework for Developing Mobile Applications (Based On Real Case Studies and CMMI)

Authors: Ammar Khader Almasri

Abstract:

Software processes play a vital role in delivering a high-quality software system that meets the user's needs. There are many software development models used by system developers, which can be categorized into two groups (traditional and new methodologies). Mobile applications, like desktop applications, need an appropriate and well-functioning software development process. Nevertheless, mobile applications have particular characteristics that limit their performance and efficiency, such as application size and mobile hardware features. This research aims to help developers by providing a standardized model for developing mobile applications.

Keywords: software development process, agile methods, mobile application development, traditional methods

Procedia PDF Downloads 370
5166 Assessment of Carbon Dioxide Separation by Amine Solutions Using Electrolyte Non-Random Two-Liquid and Peng-Robinson Models: Carbon Dioxide Absorption Efficiency

Authors: Arash Esmaeili, Zhibang Liu, Yang Xiang, Jimmy Yun, Lei Shao

Abstract:

High-pressure carbon dioxide (CO2) absorption from a specific gas in a conventional column has been evaluated with the Aspen HYSYS simulator using a wide range of single absorbents and blended solutions to estimate the outlet CO2 concentration, the absorption efficiency, and the CO2 loading, in order to choose the most suitable solution for CO2 capture with regard to environmental concerns. The property package (Acid Gas-Chemical Solvent), which is compatible with all of the solutions applied in this simulation study, estimates the properties based on an electrolyte non-random two-liquid (E-NRTL) model for electrolyte thermodynamics and the Peng-Robinson equation of state for the vapor and liquid hydrocarbon phases. Among all the investigated single amines as well as blended solutions, piperazine (PZ) and the mixture of piperazine and monoethanolamine (MEA) have been found to be the most effective absorbents for CO2 absorption, with high reactivity under the simulated operational conditions.
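
As a small illustration of the vapor-phase model named above, the sketch below solves the Peng-Robinson cubic for the compressibility factor of pure CO2 at one temperature and pressure. The critical constants and acentric factor are approximate textbook values, and this is a stand-alone sketch, not the Aspen HYSYS property package itself.

```python
# Sketch: Peng-Robinson equation of state for pure CO2 - solve the cubic in Z
# and report the compressibility-factor roots. Critical constants and the
# acentric factor are approximate textbook values (not the HYSYS package).
import numpy as np

R = 8.314                                 # J/(mol K)
Tc, Pc, omega = 304.13, 7.377e6, 0.224    # CO2: K, Pa, acentric factor (approx.)

def pr_z_factors(T, P):
    a = 0.45724 * R**2 * Tc**2 / Pc
    b = 0.07780 * R * Tc / Pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1 + kappa * (1 - np.sqrt(T / Tc)))**2
    A = a * alpha * P / (R * T)**2
    B = b * P / (R * T)
    # Peng-Robinson cubic: Z^3 - (1-B)Z^2 + (A - 3B^2 - 2B)Z - (AB - B^2 - B^3) = 0
    coeffs = [1.0, -(1.0 - B), A - 3.0 * B**2 - 2.0 * B, -(A * B - B**2 - B**3)]
    roots = np.roots(coeffs)
    return sorted(z.real for z in roots if abs(z.imag) < 1e-9 and z.real > 0)

T, P = 313.15, 3.0e6      # 40 degC, 30 bar
z = pr_z_factors(T, P)
print("real positive Z roots:", [round(v, 4) for v in z], "(largest = vapor phase)")
```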

Keywords: absorption, amine solutions, Aspen HYSYS, carbon dioxide, simulation

Procedia PDF Downloads 164
5165 Perusing the Influence of a Visual Editor in Enabling PostgreSQL Query Learn-Ability

Authors: Manuela Nayantara Jeyaraj

Abstract:

PostgreSQL is an Object-Relational Database Management System (ORDBMS) with an architecture that ensures optimal-quality data management. But due to the overshadowing growth of similar ORDBMSs, PostgreSQL has not become widely renowned among the database user community. Despite having its features and built-in functionalities overshadowed, PostgreSQL offers a vast range of utilities for data manipulation and therefore deserves to be promoted more among users. Introducing PostgreSQL in a way that highlights its advantageous features requires treating learn-ability as an essential add-on, since the target groups considered consist of both amateur and professional PostgreSQL users. The scope of this paper is to make query formulation and query flow easy to grasp through a visual editor designed according to user interface principles, which supports every aspect of making PostgreSQL learn-able through self-guided operation and creation of queries within the visual editor. This paper examines the importance of choosing PostgreSQL as the working database environment, the visual aspects that influence human behaviour and ultimately learning, the ways in which learn-ability can be provided via visualization, and the advantages gained by implementing the proposed system features.

Keywords: database, learn-ability, PostgreSQL, query, visual-editor

Procedia PDF Downloads 160
5164 Image Features Comparison-Based Position Estimation Method Using a Camera Sensor

Authors: Jinseon Song, Yongwan Park

Abstract:

In this paper, we propose a method that can estimate a user's position based on an image database built from a single camera. Previous positioning approaches calculate distance from the arrival time of signals, as in GPS (Global Positioning System) or RF (Radio Frequency) systems. However, these methods have a weakness: they have a large error range due to signal interference. One solution is to estimate position with a camera sensor; however, a single camera makes it difficult to obtain relative position data, and a stereo camera makes it difficult to provide real-time position data because of the large amount of image data involved. First of all, in this research we build an image database of a space in which a positioning service can be provided with a single camera. Next, we judge similarity through image matching between the database images and the image transmitted by the user. Finally, we determine the position of the user from the position of the most similar database image. To verify the proposed method, we experimented in real environments, both indoor and outdoor. The proposed method has a wide positioning range and can determine not only the position of the user but also the direction.
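
A hedged sketch of the matching step follows: descriptors are extracted for each database image, and the user's photo is assigned the position of the database image with the most good matches. ORB is used here instead of SURF (which requires the non-free opencv-contrib build), and the file names, positions, and distance threshold are placeholders.

```python
# Sketch: estimate a user's position by matching their photo against a database
# of location-tagged images. ORB is used in place of SURF (which needs the
# non-free opencv-contrib build); paths and positions are placeholders.
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Placeholder database: image file -> known position (x, y) and facing direction.
database = {"corridor_a.jpg": ((12.0, 3.5), "north"),
            "lobby.jpg": ((2.0, 8.0), "east")}

def descriptors(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return orb.detectAndCompute(img, None)[1]

def estimate_position(query_path):
    q_des = descriptors(query_path)
    best, best_score = None, -1
    for db_path, (pos, direction) in database.items():
        matches = matcher.match(q_des, descriptors(db_path))
        good = [m for m in matches if m.distance < 40]   # simple distance gate
        if len(good) > best_score:
            best, best_score = (pos, direction), len(good)
    return best, best_score

# Example usage (with real files in place of the placeholders above):
# position, score = estimate_position("user_photo.jpg")
```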

Keywords: positioning, distance, camera, features, SURF (Speeded-Up Robust Features), database, estimation

Procedia PDF Downloads 333
5163 Kuehne + Nagel's PharmaChain: IoT-Enabled Product Monitoring Using Radio Frequency Identification

Authors: Rebecca Angeles

Abstract:

This case study features the Kuehne + Nagel PharmaChain solution for ‘cold chain’ pharmaceutical and biologic product shipments with IOT-enabled features for shipment temperature and location tracking. Using the case study method and content analysis, this research project investigates the application of the structurational model of technology theory introduced by Orlikowski in order to interpret the firm’s entry and participation in the IOT-impelled marketplace.

Keywords: Internet of Things (IOT), radio frequency identification (RFID), structurational model of technology (Orlikowski), supply chain management

Procedia PDF Downloads 218