Search results for: efficient features selection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10241


10061 Web Service Architectural Style Selection in Multi-Criteria Requirements

Authors: Ahmad Mohsin, Syda Fatima, Falak Nawaz, Aman Ullah Khan

Abstract:

Selection of an appropriate architectural style is vital to the success of a target web service under development. The nature of architecture design and selection for service-oriented computing applications differs considerably from that of traditional software. Web services offer a complex and rigorous set of architectural styles to choose from, which makes selecting the right style a difficult decision for architects. Architectural style selection is a multi-criteria decision that demands extensive experience in service-oriented computing. Decision support systems (DSS) are a good way to simplify the selection of a particular architectural style. Our research suggests a new DSS-based approach for selecting architectural styles while developing a web service, catering to both functional requirements (FRs) and non-functional requirements (NFRs). The proposed DSS helps architects select the right web service architectural pattern according to the domain and non-functional requirements. In this paper, a rule-based DSS has been developed using CLIPS (C Language Integrated Production System) to support decisions over multi-criteria requirements. The DSS takes architectural characteristics, domain requirements, and the software architect's preferences for NFRs as input for the architectural styles in use today in service-oriented computing. A weighted sum model is applied to prioritize quality attributes and domain requirements, and scores are calculated over multiple criteria to choose the final architectural style.
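
As a rough illustration of the scoring step only, the sketch below applies a weighted sum model to a few candidate styles; the style names, criteria, weights, and scores are hypothetical placeholders, not the rule base or values used in the paper's CLIPS system.

```python
# Minimal weighted sum model (WSM) sketch for ranking architectural styles.
# All styles, criteria, scores and weights below are illustrative placeholders.
criteria = ["scalability", "interoperability", "security", "latency"]
weights = {"scalability": 0.3, "interoperability": 0.3, "security": 0.25, "latency": 0.15}

# Normalized scores (0..1) each candidate style receives per criterion.
styles = {
    "REST":        {"scalability": 0.9, "interoperability": 0.8, "security": 0.6, "latency": 0.9},
    "SOAP/WS-*":   {"scalability": 0.6, "interoperability": 0.9, "security": 0.9, "latency": 0.5},
    "Message bus": {"scalability": 0.8, "interoperability": 0.7, "security": 0.7, "latency": 0.6},
}

def wsm_score(scores):
    return sum(weights[c] * scores[c] for c in criteria)

ranking = sorted(styles, key=lambda s: wsm_score(styles[s]), reverse=True)
for s in ranking:
    print(f"{s}: {wsm_score(styles[s]):.3f}")
```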

Keywords: software architecture, web-service, rule-based, DSS, multi-criteria requirements, quality attributes

Procedia PDF Downloads 347
10060 Effective Parameter Selection for Audio-Based Music Mood Classification for Christian Kokborok Song: A Regression-Based Approach

Authors: Sanchali Das, Swapan Debbarma

Abstract:

Music mood classification is developing in both music information retrieval (MIR) and natural language processing (NLP). Languages such as Hindi and English have considerable exposure in MIR, but research on mood classification in regional languages is scarce. In this paper, powerful audio-based features for Christian Kokborok songs are identified and a mood classification task is performed. Kokborok is an Indo-Burman language spoken mainly in the northeastern part of India and in other countries such as Bangladesh and Myanmar. For the audio-based classification task, useful audio features are extracted with the jMIR software. Standard audio parameters exist for such tasks, but every language has its own characteristics, so the features that best fit the database of Kokborok songs are analysed here. A regression-based model is used to identify the independent parameters that act as predictors, to estimate the dependencies among parameters, and to show how they impact the overall classification result. Classification is carried out in WEKA 3.5, where the selected parameters form one classification model; a second model is built using all the standard audio features employed by most researchers. The experiment analyses the parameters that are essential for effective audio-based mood classification, as well as those that do not change significantly across Christian Kokborok songs, and a comparison between the two models is presented.
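
A minimal sketch of a regression-based screening of audio features is given below, assuming the features have already been exported (e.g., from jMIR) into a numeric matrix; the data and feature names are synthetic placeholders rather than the Kokborok song database.

```python
# Univariate regression-based ranking of audio features against a mood score.
# Data and feature names are synthetic placeholders.
import numpy as np
from sklearn.feature_selection import f_regression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                 # 200 songs x 6 audio features
y = 0.8 * X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.3, size=200)
names = ["spectral_centroid", "rms_energy", "zero_crossings",
         "tempo", "mfcc_mean", "spectral_rolloff"]

F, p = f_regression(X, y)                     # univariate linear regression tests
for name, f_val, p_val in sorted(zip(names, F, p), key=lambda t: -t[1]):
    print(f"{name:18s} F={f_val:7.2f}  p={p_val:.4f}")
```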

Keywords: Christian Kokborok song, mood classification, music information retrieval, regression

Procedia PDF Downloads 207
10059 A Deep Learning Approach to Online Social Network Account Compromisation

Authors: Edward K. Boahen, Brunel E. Bouya-Moko, Changda Wang

Abstract:

The major threat to online social network (OSN) users is account compromisation. Spammers now spread malicious messages by exploiting the trust relationship established between account owners and their friends. The challenge for service providers in detecting a compromised account is validating the trusted relationship established between the account owners, their friends, and the spammers. Another challenge is the increased human interaction required by feature selection. Existing research on supervised (machine) learning has limitations in feature selection and with accounts that cannot be profiled, such as application programming interfaces (APIs). This paper therefore discusses the various behaviours of OSN users and the current approaches to detecting a compromised OSN account, emphasizing their limitations and challenges. We propose a deep learning approach that addresses and resolves the constraints faced by previous schemes. We detail our proposed optimized nonsymmetric deep auto-encoder (OPT_NDAE) for unsupervised feature learning, which reduces the level of human interaction required in the selection and extraction of features. We evaluated the proposed classifier using the NSL-KDD and KDDCUP'99 datasets in a graphical-user-interface-enabled Weka application. The results indicate that the proposed approach outperforms most traditional schemes in compromised OSN account detection, with an accuracy rate of 99.86%.
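
The OPT_NDAE architecture is not reproduced here; as a hedged sketch, the code below builds a generic nonsymmetric auto-encoder (deep encoder, single-layer decoder) in Keras for unsupervised feature learning, with synthetic data standing in for the NSL-KDD feature vectors.

```python
# Generic nonsymmetric auto-encoder sketch: the encoder is deeper than the
# decoder. Layer sizes and data are illustrative, not the paper's OPT_NDAE.
import numpy as np
from tensorflow.keras import layers, Model

X = np.random.rand(1000, 41).astype("float32")   # stand-in for 41 NSL-KDD features

inp = layers.Input(shape=(41,))
h = layers.Dense(32, activation="relu")(inp)      # deep encoder
h = layers.Dense(16, activation="relu")(h)
code = layers.Dense(8, activation="relu")(h)      # learned feature code
out = layers.Dense(41, activation="sigmoid")(code)  # single-layer decoder -> nonsymmetric

autoencoder = Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=5, batch_size=64, verbose=0)

encoder = Model(inp, code)
features = encoder.predict(X)   # reduced features for a downstream classifier
print(features.shape)
```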

Keywords: computer security, network security, online social network, account compromisation

Procedia PDF Downloads 102
10058 Comparison between XGBoost, LightGBM and CatBoost Using a Home Credit Dataset

Authors: Essam Al Daoud

Abstract:

Gradient boosting methods have proven to be a very effective strategy, and many successful machine learning solutions have been developed using XGBoost and its derivatives. The aim of this study is to investigate and compare the efficiency of three gradient boosting methods. The Home Credit dataset, which contains 219 features and 356,251 records, is used in this work. New features are also generated, and several techniques are used to rank and select the best features. The implementation indicates that LightGBM is faster and more accurate than CatBoost and XGBoost across varying numbers of features and records.
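
A hedged sketch of the comparison set-up is shown below on synthetic data; the Home Credit feature engineering and ranking techniques from the paper are not reproduced, and the hyperparameters are illustrative defaults.

```python
# Compare the three boosting libraries on synthetic data by AUC and fit time.
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier
from catboost import CatBoostClassifier

X, y = make_classification(n_samples=20000, n_features=50, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "XGBoost": XGBClassifier(n_estimators=200),
    "LightGBM": LGBMClassifier(n_estimators=200),
    "CatBoost": CatBoostClassifier(n_estimators=200, verbose=0),
}
for name, model in models.items():
    start = time.time()
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name:9s} AUC={auc:.4f}  train_time={time.time() - start:.1f}s")
```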

Keywords: gradient boosting, XGBoost, LightGBM, CatBoost, home credit

Procedia PDF Downloads 153
10057 Improvement of Low Delta-9 Tetrahydrocannabinol (THC) Hemp Cultivars for High Fiber Content

Authors: Sarita Pinmanee, Saipan Krapbia, Rataya Yanaphan

Abstract:

Hemp (Cannabis sativa L.) is a multi-purpose crop delivering fibers, shives, and seed. The fiber is used today for specialty paper, insulation material, and biocomposites. This research aimed to improve low delta-9 tetrahydrocannabinol (THC) hemp varieties for high fiber content. Mass selection for increased fiber content in four low-THC Thai cultivars (RPF1, RPF2, RPF3, and RPF4) was carried out in highland areas of northern Thailand. The work was conducted over three consecutive growing seasons from 2012 to 2014 at the Pangda Royal Agricultural Station, Samoeng District, Chiang Mai Province, Thailand. After three successive generations of selection, the average fiber content of the four low-THC Thai cultivars increased by 28-36%. Specifically, the fiber content of RPF1, RPF2, RPF3, and RPF4 increased to 20.6, 19.1, 19.9, and 22.8%, respectively, while the THC contents of the four varieties were 0.07, 0.138, 0.08, and 0.072%, respectively. Mass selection was therefore considered an effective and suitable method for improving fiber content.

Keywords: hemp, mass selection, fiber content, low THC content

Procedia PDF Downloads 400
10056 Choosing the Right Projects with Multi-Criteria Decision Making to Ensure the Sustainability of the Projects

Authors: Saniye Çeşmecioğlu

Abstract:

The importance of project sustainability and success has become increasingly significant due to the proliferation of external environmental factors that have decreased project resistance in contemporary times. The primary way to forestall the failure of projects is to ensure their long-term viability through strategic project selection, by creating a judicious project selection framework within the organization. Decision-makers require precise decision contexts (models) that conform to the company's business objectives and sustainability expectations during the project selection process. Establishing a rational model for project selection enables organizations to create a distinctive and objective framework for the selection process. Additionally, for the optimal implementation of this decision-making model, it is crucial to establish a Project Management Office (PMO) team and a Project Steering Committee within the organizational structure to oversee the framework. These teams make it possible to update project selection criteria and weights in response to changing conditions, ensure alignment with the company's business goals, and facilitate the selection of potentially viable projects. This paper presents a multi-criteria decision model for selecting project sustainability and project success criteria that ensures timely project completion and retention. The model was developed using MACBETH (Measuring Attractiveness by a Categorical Based Evaluation Technique) and was based on broadcasting companies' expectations. The results of this study provide a model that supports the objective selection of the appropriate project by means of project selection and sustainability criteria, along with their respective weights, for organizations. The study also offers suggestions that may prove helpful in future endeavors.

Keywords: project portfolio management, project selection, multi-criteria decision making, project sustainability and success criteria, MACBETH

Procedia PDF Downloads 48
10055 An Automatic Bayesian Classification System for File Format Selection

Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan

Abstract:

This paper presents an approach to classifying unstructured format descriptions for the identification of file formats. The main contribution of this work is the use of data mining techniques to support file format selection from just an unstructured text description comprising the most important format features for a particular organisation. The file format identification method then employs a file format classifier and associated configurations to support digital preservation experts with an estimate of the required file format. Our goal is to make use of a format specification knowledge base aggregated from different Web sources in order to select a file format for a particular institution. Using the naive Bayes method, the decision support system recommends to an expert the file format for their institution. The proposed methods facilitate file format selection and improve the quality of the digital preservation process. The presented approach is meant to support decision making for the preservation of digital content in libraries and archives using domain expert knowledge and file format specifications. To facilitate decision making, the aggregated information about the file formats is presented as a file format vocabulary comprising the most common terms that are characteristic of all researched formats. The goal is to suggest a particular file format based on this vocabulary for analysis by an expert. A sample file format calculation and the calculation results, including probabilities, are presented in the evaluation section.
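
A minimal sketch of the naive Bayes recommendation step is shown below; the tiny format descriptions and labels are invented placeholders, not the knowledge base aggregated by the authors.

```python
# Naive Bayes over unstructured format descriptions; descriptions and labels
# below are invented placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

descriptions = [
    "lossless raster image with transparency support",
    "compressed raster image widely used for photos",
    "page layout document with embedded fonts",
    "plain text markup for structured documents",
]
formats = ["PNG", "JPEG", "PDF", "XML"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(descriptions, formats)

query = ["archival document format preserving layout and fonts"]
print(model.predict(query))          # most probable format
print(model.predict_proba(query))    # class probabilities shown to the expert
```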

Keywords: data mining, digital libraries, digital preservation, file format

Procedia PDF Downloads 486
10054 Determinants of Self-Reported Hunger: An Ordered Probit Model with Sample Selection Approach

Authors: Brian W. Mandikiana

Abstract:

Homestead food production has the potential to alleviate hunger and improve health and nutrition for children and adults. This article examines the relationship between self-reported hunger and homestead food production using an ordered probit model. A sample of households participating in homestead food production was drawn from the first wave of the South African National Income Dynamics Survey, a nationally representative cross-section. The sample selection problem was corrected using an ordered probit model with a sample selection approach. The findings show that homestead food production exerts a positive and significant impact on children's and adults' ability to cope with hunger and malnutrition. On the contrary, however, the potential gains of homestead food production are threatened by shocks such as crop failure.
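
As a hedged sketch, the code below fits a plain ordered probit with statsmodels on synthetic data; the sample selection correction used in the article is omitted for brevity, and the variables are illustrative stand-ins for the survey data.

```python
# Plain ordered probit sketch (no selection correction) on synthetic data.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 500
produce = rng.integers(0, 2, n)            # homestead food production indicator
income = rng.normal(size=n)
latent = -0.6 * produce + 0.4 * income + rng.normal(size=n)
# 3-level self-reported hunger: 0 = never, 1 = sometimes, 2 = often
hunger = pd.Categorical(np.digitize(latent, [-0.5, 0.7]), ordered=True)

X = pd.DataFrame({"produce": produce, "income": income})
model = OrderedModel(hunger, X, distr="probit")
res = model.fit(method="bfgs", disp=False)
print(res.summary())
```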

Keywords: agriculture, hunger, nutrition, sample selection

Procedia PDF Downloads 317
10053 HD-WSComp: Hypergraph Decomposition for Web Services Composition Based on QoS

Authors: Samah Benmerbi, Kamal Amroun, Abdelkamel Tari

Abstract:

The increasing number of Web service (WS) providers throughout the globe has produced numerous Web services offering the same or similar functionality. There is therefore a need for tools that derive the best answer to a query by selecting and composing services with total transparency. This paper reviews various QoS-based Web service selection mechanisms and architectures that facilitate qualitatively optimal selection. Web service composition is required when a request cannot be fulfilled by a single web service; in such cases, it is preferable to integrate existing web services to satisfy the user's request. We introduce an automatic Web service composition method based on hypergraph decomposition using the hypertree decomposition method. The problem of selecting and composing web services is transformed into a resolution over a hypertree by exploring the dependency relations between web services, in order to obtain a composite web service via an execution order of the WSs that satisfies the global request.
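
The sketch below illustrates only the QoS-driven selection of one concrete service per abstract task via a weighted score; the hypergraph/hypertree decomposition at the core of the paper is not reproduced, and the services, attributes, and weights are invented.

```python
# Pick the best candidate service per abstract task by weighted QoS score.
# Services, attributes and weights are invented placeholders.
weights = {"availability": 0.4, "reliability": 0.4, "response_time": 0.2}

# response_time is pre-inverted here so that higher always means better.
candidates = {
    "payment":  [("payA", {"availability": 0.99, "reliability": 0.97, "response_time": 0.80}),
                 ("payB", {"availability": 0.95, "reliability": 0.99, "response_time": 0.90})],
    "shipping": [("shipA", {"availability": 0.98, "reliability": 0.95, "response_time": 0.70}),
                 ("shipB", {"availability": 0.97, "reliability": 0.98, "response_time": 0.95})],
}

def score(qos):
    return sum(weights[k] * qos[k] for k in weights)

composition = {task: max(services, key=lambda s: score(s[1]))[0]
               for task, services in candidates.items()}
print(composition)   # one concrete service chosen per abstract task
```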

Keywords: web service, web service selection, web service composition, QoS, hypergraph decomposition, BE hypergraph decomposition, hypertree resolution

Procedia PDF Downloads 499
10052 Exploring Alignability Effects and the Role of Information Structure in Promoting Uptake of Energy Efficient Technologies

Authors: Rebecca Hafner, David Elmes, Daniel Read

Abstract:

The current research applies decision-making theory to the problem of increasing uptake of energy-efficient technologies in the marketplace, where uptake is currently slower than one might predict from rational choice models. We apply the alignable/non-alignable features effect and explore the impact of varying information structure on consumers' preference for standard versus energy-efficient technologies. In two studies, we present participants with a choice between similar (boiler vs. boiler) or dissimilar (boiler vs. heat pump) technologies, described by a list of alignable and non-alignable attributes. In Study One, there is a preference for alignability when options are similar, an effect mediated by an increased tendency to infer that missing information is the same; no effects of alignability on preference are found when options differ. One explanation for this shift in attentional focus is a change in construal level, potentially induced by the added consideration of environmental concern. Study Two was designed to explore the interplay between alignability and construal level in greater detail. We manipulated construal level via a thought-prime task prior to the same heating-system choice task, and find a general preference for non-alignability, regardless of option type. We draw theoretical and applied implications for the type of information structure best suited to the promotion of energy-efficient technologies.

Keywords: alignability effects, decision making, energy-efficient technologies, sustainable behaviour change

Procedia PDF Downloads 298
10051 Sentiment Analysis: An Enhancement of Ontological-Based Features Extraction Techniques and Word Equations

Authors: Mohd Ridzwan Yaakub, Muhammad Iqbal Abu Latiffi

Abstract:

Online business has become popular recently due to the massive amount of information and media available on the Internet. This has resulted in a huge number of reviews in which consumers share their opinions, criticisms, and satisfaction regarding the products they have purchased on websites or on social media such as Facebook and Twitter. Analyzing customer behaviour has therefore become very important for organizations seeking new market trends and insights. The reviews from websites or social media consist of structured and unstructured data that require a sentiment analysis approach to analyse customer reviews. In this article, the techniques used in sentiment analysis are defined, along with a definition of ontology and a description of its possible use in sentiment analysis. This leads to empirical research related to the mobile phone reviews used in the study and the ontology used in the experiment. The researchers also explore the role of data preprocessing and feature selection methodology. As a result, an ontology-based approach to sentiment analysis can help achieve high accuracy in the classification task.
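
A minimal sketch of the preprocessing and feature selection stages for review classification is shown below; the ontology-based features central to the paper are not modelled, and the example reviews are invented placeholders.

```python
# Preprocessing + feature selection + classification for review sentiment.
# Reviews and labels are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reviews = ["battery life is excellent", "screen cracked after a week",
           "camera quality is amazing", "terrible customer service"]
labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(
    TfidfVectorizer(stop_words="english"),   # preprocessing: tokenise + weight terms
    SelectKBest(chi2, k=5),                  # keep the most discriminative features
    MultinomialNB(),
)
model.fit(reviews, labels)
print(model.predict(["the camera is excellent"]))
```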

Keywords: feature selection, ontology, opinion, preprocessing data, sentiment analysis

Procedia PDF Downloads 188
10050 Native Language Identification with Cross-Corpus Evaluation Using Social Media Data: 'Reddit'

Authors: Yasmeen Bassas, Sandra Kuebler, Allen Riddell

Abstract:

Native language identification (NLI) is one of the growing subfields in natural language processing (NLP). The task is mainly concerned with predicting the native language of an author writing in a second language. In this paper, we investigate the performance of two types of features, content-based features and content-independent features, when they are evaluated on a different corpus (social media data from Reddit). In this NLI task, the models are trained on one corpus (TOEFL), and the trained models are then evaluated on different data from an external corpus (Reddit). Three classifiers are used: a baseline, a linear SVM, and logistic regression. Results show that content-based features are more accurate and robust than content-independent ones when tested both within-corpus and across corpora.
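
A hedged sketch of the cross-corpus protocol (train on one corpus, evaluate on another) is shown below; the tiny texts and L1 labels are invented placeholders, not the TOEFL or Reddit data, and character n-grams are used here as one plausible content-independent feature type.

```python
# Cross-corpus NLI sketch: fit on one corpus, score on a different one.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

train_texts = ["I am agree with this idea", "he explained me the rules",
               "this is a very interesting topic", "we discussed about the plan"]
train_l1    = ["es", "fr", "de", "es"]          # authors' native languages (L1)

test_texts  = ["I am agree with you totally", "she explained me everything"]
test_l1     = ["es", "fr"]

# Content-independent style features: character n-grams rather than topic words.
model = make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
                      LinearSVC())
model.fit(train_texts, train_l1)
print(accuracy_score(test_l1, model.predict(test_texts)))
```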

Keywords: NLI, NLP, content-based features, content independent features, social media corpus, ML

Procedia PDF Downloads 121
10049 Trial Version of a Systematic Material Selection Tool in Building Element Design

Authors: Mine Koyaz, M. Cem Altun

Abstract:

Selecting materials that satisfy the expected performance is significantly important for any design. Today, with constantly evolving and developing technologies, the range of material options is so wide that support tools are becoming necessary in the selection process. Therefore, as a sub-process of building element design, a systematic material selection tool has been developed that defines four main steps of material selection: definition, research, comparison, and decision. The main purpose of the tool is to serve as an educational instrument showing architecture students a methodical way of selecting materials in architectural detailing. The tool predefines the possible uses of various material databases and other sources of information on material properties, and is thus intended as guidance for designers, especially those with limited material knowledge and experience. The material selection tool embraces not only the technical properties of materials related to building elements' functional requirements, but also their sensory properties related to the identity of the design and their environmental impacts with respect to the sustainability of the design. The method followed in developing the tool has two main parts: first, the examination and application of existing methods, and second, the development of trial versions and their application. Within the scope of existing methods, design support tools, methodical approaches to building element design and the material selection process, material properties, material databases, and methodical approaches to decision making are examined. The existing methods were applied by architecture students and newly graduated architects to different design problems, and based on the results of these applications, the strong and weak sides of existing material selection tools are presented. A main flow chart of the material selection tool was then developed with the objective of applying the strong aspects of the existing methods and improving on their weak sides. Through different stages, different aspects of the material selection process were investigated and the tool took its final form. The systematic material selection tool, within the building element design process, guides users with minimal background information to determine practically and accurately the ideal material that satisfies the needs of their design. The tool has a flexible structure that answers the different needs of different designs and designers. The trial version presented in this paper shows one of the paths that could be followed and illustrates its application to a design problem.

Keywords: architectural education, building element design, material selection tool, systematic approach

Procedia PDF Downloads 335
10048 A Proposed Optimized and Efficient Intrusion Detection System for Wireless Sensor Network

Authors: Abdulaziz Alsadhan, Naveed Khan

Abstract:

In recent years, intrusions on computer networks have become the major security threat, so it is important to impede them. Preventing such intrusions relies entirely on their detection, which is the primary concern of any security tool such as an Intrusion Detection System (IDS). It is therefore imperative to detect network attacks accurately. Numerous intrusion detection techniques are available, but the main issue is their performance, which can be improved by increasing the accurate detection rate and reducing false positives. Existing intrusion detection techniques are limited by their use of the raw dataset for classification: the classifier may become confused by redundancy, resulting in incorrect classification. To minimize this problem, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Local Binary Patterns (LBP) can be applied to transform raw features into a principal feature space and to select features based on their sensitivity, where eigenvalues can be used to determine sensitivity. To refine the selected features further, greedy search, backward elimination, and Particle Swarm Optimization (PSO) can be used to obtain a subset of features with optimal sensitivity and the highest discriminatory power. This optimal feature subset is then used for classification. For classification, a Support Vector Machine (SVM) and a Multilayer Perceptron (MLP) are used due to their proven ability in classification. The Knowledge Discovery and Data Mining (KDD'99) Cup dataset is considered a benchmark for evaluating security detection mechanisms. The proposed approach can provide an optimal intrusion detection mechanism that outperforms existing approaches and has the capability to minimize the number of features and maximize the detection rate.
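
As a hedged sketch of one branch of the proposed pipeline, the code below chains PCA feature reduction with an SVM classifier on synthetic data; the LDA/LBP transforms and the PSO-based subset search are omitted.

```python
# PCA feature reduction followed by an RBF-SVM, evaluated by cross-validation.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for KDD'99-style records (41 features, binary label).
X, y = make_classification(n_samples=2000, n_features=41, n_informative=10,
                           random_state=0)

pipeline = make_pipeline(
    StandardScaler(),
    PCA(n_components=10),        # project raw features onto principal components
    SVC(kernel="rbf"),
)
scores = cross_val_score(pipeline, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.3f}")
```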

Keywords: Particle Swarm Optimization (PSO), Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Local Binary Pattern (LBP), Support Vector Machine (SVM), Multilayer Perceptron (MLP)

Procedia PDF Downloads 353
10047 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data

Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone

Abstract:

The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms could support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore the ability of mean signals extracted from ICA components corresponding to 15 well-known networks to distinguish between controls and patients. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald and Polman criteria and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. Estimated total lesion load (ml) and number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR images. All rsFMRI data were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-time correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance or the mean Euler angle. WM and CSF signals, together with 6 motion parameters, were regressed out from the time series. We applied independent component analysis (ICA) with the GIFT toolbox using the Infomax approach with 21 components. Fifteen mean components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset, composed of 37 rows (subjects) and 15 features (mean signal per network), with the R language. The dataset was randomly split into training (75%) and test sets, and two classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (RFE) for the SVM to obtain a ranking of the most predictive variables. We then built two new classifiers on only the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and RFE-SVM was performed, the most important variable was the sensorimotor network I in both cases; with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the best network for discriminating between controls and early MS was sensorimotor network I. Similar importance values were obtained for the sensorimotor II, cerebellum, and working memory networks. These findings, in accordance with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
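
A hedged sketch of the two feature-ranking routes is given below on synthetic data standing in for the 15 per-network mean signals; note that a linear-kernel SVM is used for RFE here, since RFE needs coefficients, whereas the paper trains an RBF-SVM.

```python
# RF Gini importance vs. recursive feature elimination with a (linear) SVM.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(37, 15))                 # 37 subjects x 15 network signals
y = rng.integers(0, 2, size=37)               # 0 = control, 1 = early MS
networks = [f"network_{i+1}" for i in range(15)]

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
rf_rank = sorted(zip(networks, rf.feature_importances_), key=lambda t: -t[1])

rfe = RFE(SVC(kernel="linear"), n_features_to_select=1).fit(X, y)
svm_rank = sorted(zip(networks, rfe.ranking_), key=lambda t: t[1])  # 1 = best

print("RF top feature:", rf_rank[0])
print("RFE-SVM top feature:", svm_rank[0])
```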

Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine

Procedia PDF Downloads 228
10046 Mobility-Aware Relay Selection in Two Hop Unmanned Aerial Vehicles Network

Authors: Tayyaba Hussain, Sobia Jangsher, Saqib Ali, Saqib Ejaz

Abstract:

Unmanned aerial vehicles (UAVs) have gained great popularity due to their remote operation, ease of deployment, and high maneuverability in applications such as real-time surveillance, image capture, weather and atmospheric studies, disaster site monitoring, and mapping. These applications can involve real-time communication with the ground station. However, altitude and mobility pose challenges for this communication: UAVs at high altitude usually require more transmit power. One possible solution is the use of multiple hops (UAVs acting as relays) while exploiting the mobility pattern of the UAVs. In this paper, we study relay selection (UAVs acting as relays) for reliable transmission to a destination UAV. We exploit the mobility information of the UAVs to propose a Mobility-Aware Relay Selection (MARS) algorithm with the objective of improving data rates. The results are compared with a non-mobility-aware relay selection scheme and with optimal values. Numerical results show that the proposed MARS algorithm gives 6% better achievable data rates for mobile UAVs compared with the non-mobility-aware relay selection scheme, while on average a 20.2% decrease in data rate is observed with MARS compared with the SDP solver in YALMIP.
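
The sketch below illustrates a naive relay choice that maximizes the bottleneck of the two Shannon link rates under a simple free-space path-loss model; the positions and parameters are invented, and this is not the MARS algorithm itself.

```python
# Choose the relay UAV maximizing min(source->relay, relay->destination) rate.
import numpy as np

def link_rate(p1, p2, tx_power=1.0, noise=1e-9, bandwidth=1e6):
    d = np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float))
    gain = 1.0 / max(d, 1.0) ** 2                 # free-space path loss
    snr = tx_power * gain / noise
    return bandwidth * np.log2(1 + snr)           # bits per second

source, destination = (0, 0, 100), (1000, 0, 100)
relays = {"uav1": (300, 50, 120), "uav2": (500, -20, 110), "uav3": (800, 10, 130)}

best = max(relays, key=lambda r: min(link_rate(source, relays[r]),
                                     link_rate(relays[r], destination)))
print("selected relay:", best)
```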

Keywords: mobility aware, relay selection, time division multiple access, unmanned aerial vehicle

Procedia PDF Downloads 227
10045 Development of Graph-Theoretic Model for Ranking Top of Rail Lubricants

Authors: Subhash Chandra Sharma, Mohammad Soleimani

Abstract:

Selection of the correct lubricant for the top-of-rail application is a complex process. In this paper, a method for selecting the proper lubricant for a Top-Of-Rail (TOR) lubrication system, based on graph theory and a matrix approach, has been developed. The attributes influencing the selection process, and their influence on one another, are represented through a digraph and an equivalent matrix. A matrix function called the permanent function is derived. By qualitatively substituting the inherent contribution levels of the influencing parameters and their mutual influence, a criterion called the suitability index is obtained. Based on these indices, lubricants can be ranked for their suitability. The proposed model can be useful for maintenance engineers in selecting the best lubricant for a TOR application. The methodology is illustrated step by step through an example.
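
A minimal sketch of the permanent-function step is shown below; the attribute scores (diagonal) and pairwise influences (off-diagonal) are made-up values, and the permanent is computed by brute force, which is adequate for the small matrices typical of such models.

```python
# Suitability index of one candidate lubricant via the matrix permanent.
from itertools import permutations
import numpy as np

def permanent(M):
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

# Example attribute matrix for one candidate (4 attributes): the diagonal holds
# inherent attribute scores, off-diagonal entries hold relative influences.
A = np.array([
    [7, 3, 2, 4],
    [2, 6, 3, 2],
    [3, 2, 8, 3],
    [2, 3, 2, 5],
], dtype=float)

suitability_index = permanent(A)
print(f"suitability index: {suitability_index:.0f}")
```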

Keywords: lubricant selection, top of rail lubrication, graph theory, ranking of lubricants

Procedia PDF Downloads 281
10044 Systematic Analysis of Logistics Location Search Methods under Aspects of Sustainability

Authors: Markus Pajones, Theresa Steiner, Matthias Neubauer

Abstract:

Selecting a logistics location is vital for logistics providers, food retailers and other trading companies, since the selection is an essential factor for economic success. Various location search methods, such as cost-benefit analysis, are therefore well known and in use. The development of a logistics location can have considerable negative effects on the ecosystem, such as surface sealing, loss of biodiversity, or CO2 and noise emissions generated by freight and commuting traffic. The increasing importance of sustainability demands an informed decision when selecting a logistics location for the future. Sustainability covers economic, ecological and social aspects, which should be integrated equally into the location search process. The objectives of this paper are to define methods that support the selection of sustainable logistics locations and to generate knowledge about the suitability, strengths and limitations of these methods within the selection process. The paper investigates the role of economic, ecological and social aspects when searching for new logistics locations; related work on location search is analyzed with respect to the sustainability aspects it encodes. In addition, this research aims to provide guidance on how to include aspects of sustainability and take an informed decision when searching for a logistics location. As a result, decomposing the various location search methods into their components leads to a comparative analysis in the form of a matrix. This matrix-based comparison gives a transparent overview of the strengths and limitations of the methods and their suitability for selecting sustainable logistics locations. A further result is knowledge on how to combine the separate methods into a new method for a more efficient selection of logistics locations in the context of sustainability. Future work will investigate this combination of location search methods in particular, with the objective of developing an innovative instrument that supports the search for logistics locations with a focus on balanced sustainability (economy, ecology, social). Through an ideal selection of logistics locations, induced traffic should be reduced and a modal shift to rail and public transport facilitated.

Keywords: commuting traffic, freight traffic, logistics location search, location search method

Procedia PDF Downloads 310
10043 Object-Scene: Deep Convolutional Representation for Scene Classification

Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang

Abstract:

Traditional image classification is based on an encoding scheme (e.g., Fisher Vector, Vector of Locally Aggregated Descriptors) over low-level image features (e.g., SIFT, HoG). Compared to these low-level local features, deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNNs) carry richer information but lack geometric invariance. For scene classification, scenes contain scattered objects of different sizes, categories, layouts, and numbers, so it is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while taking object-centric and scene-centric information into consideration. First, to exploit object-centric and scene-centric information, two CNNs trained on the ImageNet and Places datasets, respectively, are used as pre-trained models to extract deep convolutional features at multiple scales, producing dense local activations. By analyzing the performance of the different CNNs at multiple scales, we find that each CNN works better in a different scale range; a scale-wise CNN adaptation is reasonable since objects in a scene appear at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and these are merged into a single vector using a post-processing method called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences; hence, scale-wise normalization followed by average pooling balances the influence of each scale, since different numbers of features are extracted at each scale. Third, the Fisher Vector representation based on the deep convolutional features is followed by a linear Support Vector Machine, which is a simple yet efficient way to classify the scene categories. Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the results from 74.03% up to 79.43% on MIT Indoor67 when only two scales are used (compared to results at a single scale). The result is comparable to state-of-the-art performance, which shows that the representation can be applied to other visual recognition tasks.
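
As a hedged sketch of the scale-wise normalization and pooling step only, the code below aggregates per-scale descriptors, L2-normalizes each scale independently, and averages across scales; the CNN feature extraction and Fisher Vector encoding are not reproduced.

```python
# Scale-wise normalization followed by average pooling over scales.
import numpy as np

rng = np.random.default_rng(0)
# One image: dense local activations at 3 scales, 512-D each (synthetic).
per_scale_descriptors = [rng.normal(size=(n, 512)) for n in (36, 100, 196)]

def aggregate(descriptors):
    v = descriptors.mean(axis=0)              # stand-in for the FV aggregation
    return v / (np.linalg.norm(v) + 1e-12)    # scale-wise L2 normalization

image_repr = np.mean([aggregate(d) for d in per_scale_descriptors], axis=0)
print(image_repr.shape)                       # single vector fed to a linear SVM
```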

Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization

Procedia PDF Downloads 319
10042 An Adaptive Hybrid Surrogate-Assisted Particle Swarm Optimization Algorithm for Expensive Structural Optimization

Authors: Xiongxiong You, Zhanwen Niu

Abstract:

Choosing an appropriate surrogate model plays an important role in surrogate-assisted evolutionary algorithms (SAEAs), since there are many model types and different kernel functions to choose from. In this paper, an adaptive method for selecting the most suitable surrogate model is proposed to solve different kinds of expensive optimization problems. First, according to the prediction residual error sum of squares (PRESS) and different model selection strategies, the best individual surrogate models are integrated into multiple ensemble models in each generation. Then, based on the minimum root mean square error (RMSE), the most suitable surrogate model is selected dynamically. Second, two methods with dynamic numbers of models and selection strategies are designed to show the influence of the number of individual models and of the selection strategy. Finally, comparative studies are carried out on several commonly used benchmark problems, as well as a rotor system optimization problem. The results demonstrate the accuracy and robustness of the proposed method.
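
A hedged sketch of PRESS-style surrogate screening is shown below: leave-one-out residuals are computed for a few candidate models and the one with the lowest RMSE would be retained; the candidate set and test function are illustrative choices, not those of the paper.

```python
# Leave-one-out PRESS / RMSE for a few candidate surrogates of a cheap function.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(30, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.05, size=30)

candidates = {
    "GP-RBF":    GaussianProcessRegressor(kernel=RBF()),
    "GP-Matern": GaussianProcessRegressor(kernel=Matern(nu=1.5)),
    "Ridge":     Ridge(alpha=1.0),
}
for name, model in candidates.items():
    pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
    press = np.sum((y - pred) ** 2)
    rmse = np.sqrt(press / len(y))
    print(f"{name:9s} PRESS={press:.4f}  LOO-RMSE={rmse:.4f}")
```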

Keywords: adaptive selection, expensive optimization, rotor system, surrogate-assisted evolutionary algorithms

Procedia PDF Downloads 131
10041 Board Nomination and Selection Process in Indonesian State-Owned Enterprises

Authors: Synthia A. Sari

Abstract:

A transparent nomination and selection process is the first step toward obtaining qualified board members. As the representative (agent) of the owners, the board must consist of competent and professional people. However, the development of a transparent and ideal nomination and selection process in Indonesian state-owned enterprises (SOEs) has been based on relatively little research. Considering the importance boards attach to conducting their roles in their principal's interest across a variety of governance tasks in state-owned enterprises, the primary aim of this paper is to shed light on the extent to which the nomination and selection process affects the board's performance in implementing good corporate governance in Indonesian SOEs. The exploratory nature of this study led to the adoption of a qualitative research methodology, using semi-structured interviews and publicly available documents to collect data on board nomination and selection and the work of boards. Interviews were conducted with four informants from three Indonesian SOEs and the Ministry of SOEs. The findings demonstrate that unclear job descriptions and expectations of board members, resulting from unclear board functions in Indonesian SOEs, make a transparent and accountable nomination and selection process hard to conduct. The situation is vulnerable to political interests, and the process itself can degenerate into political interference; in the end, this leads to choosing the wrong people for board membership. This study makes a significant contribution to several fields, including human resource management, corporate governance, and Southeast Asian studies, by addressing the research gap around board selection processes in Indonesian SOEs. The gap is addressed by providing a more coherent framework for an effective nomination and selection system that reflects more clearly the real experiences of those actually involved at board level.

Keywords: board selection and nomination process, Indonesian state-owned enterprises, good corporate governance, political influence

Procedia PDF Downloads 257
10040 Energy Management Techniques in Mobile Robots

Authors: G. Gurguze, I. Turkoglu

Abstract:

Today, the developing capabilities of technological tools with limited energy resources have made it necessary to use energy efficiently, and energy management techniques have emerged for this purpose. As in every field, energy management is vital for robots, which are used in many areas from industry to daily life and are expected to occupy even more areas in the future. In particular, effective power management in autonomous and multi-robot systems, which are becoming more complex and more numerous day by day, will improve performance and success. In this study, robot management algorithms, the use of renewable and hybrid energy sources, robot motion patterns, robot designs, workload-sharing strategies among multiple robots, and path and mission planning algorithms are discussed with a view to the efficient use of energy resources by mobile robots. These techniques are evaluated in terms of the efficient use of existing energy resources and energy management in robots.

Keywords: energy management, mobile robot, robot administration, robot management, robot planning

Procedia PDF Downloads 256
10039 Optimization of Personnel Selection Problems via Unconstrained Geometric Programming

Authors: Vildan Kistik, Tuncay Can

Abstract:

From a business perspective, cost and profit are two key factors. The intent of most businesses is to minimize cost in order to maximize or stabilize profit, so as to provide the greatest benefit to the business itself. However, the physical system is very complicated because of technological constraints, the rapid growth of competitive environments, and similar factors; in such a system it is not easy to maximize profit or minimize cost. Businesses must decide on the competence and suitability of the personnel to be recruited, taking many criteria into consideration. There are many criteria for determining the competence of a staff member: the level of education, experience, psychological and sociological position, and the human relationships present in the field are just some of the important factors in selecting staff for a firm. Personnel selection is a very important and costly process for businesses in today's competitive market. Although many mathematical methods have been developed for personnel selection, their use is rarely encountered in real life. In this study, unlike other methods, an exponential programming model is established based on the probability of failure once the selected personnel start work. With the necessary transformations, the problem is converted into an unconstrained geometric programming problem, and the personnel selection problem is approached with the geometric programming technique. Personnel selection scenarios for a classroom were constructed with the help of the normal distribution, and optimum solutions were obtained; in the most suitable solutions, the personnel selection process for the classroom was achieved at minimum cost.
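
As a hedged sketch, the code below solves a small unconstrained geometric program via the usual log-variable transformation; the posynomial is a made-up cost function, not the personnel selection model developed in the paper.

```python
# Unconstrained GP: minimize a posynomial by switching to log variables, where
# the objective becomes convex, and applying a standard unconstrained solver.
import numpy as np
from scipy.optimize import minimize

# minimize f(x1, x2) = 40/(x1*x2) + 20*x1 + 10*x1*x2   over x1, x2 > 0
coeffs = np.array([40.0, 20.0, 10.0])
expons = np.array([[-1.0, -1.0],
                   [ 1.0,  0.0],
                   [ 1.0,  1.0]])

def objective(y):                          # y = log(x); convex in y
    return np.sum(coeffs * np.exp(expons @ y))

res = minimize(objective, x0=np.zeros(2), method="BFGS")
x_opt = np.exp(res.x)
print("optimal x:", x_opt, "minimum cost:", objective(res.x))
```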

Keywords: geometric programming, personnel selection, non-linear programming, operations research

Procedia PDF Downloads 258
10038 Automatic Classification of the Stand-to-Sit Phase in the TUG Test Using Machine Learning

Authors: Yasmine Abu Adla, Racha Soubra, Milana Kasab, Mohamad O. Diab, Aly Chkeir

Abstract:

Over the past several years, researchers have shown great interest in assessing the mobility of elderly people in order to measure their functional status. Usually, such an assessment is done by conducting tests that require the subject to walk a certain distance, turn around, and finally sit back down. This study therefore aims to provide an at-home monitoring system to assess the patient's status continuously. We propose a technique to automatically detect when a subject sits down while walking at home. A Doppler radar system is used to capture the motion of the subjects. More than 20 features were extracted from the radar signals, out of which 11 were chosen based on their intraclass correlation coefficient (ICC > 0.75). The sequential floating forward selection wrapper was then applied to further narrow down the final feature vector. Finally, 5 features were fed to a linear discriminant analysis classifier, achieving an accuracy of 93.75%, with a precision and recall of 95% and 90%, respectively.
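
A hedged sketch of the wrapper selection and LDA stages is shown below on synthetic data; note that scikit-learn provides plain sequential forward selection rather than the floating (SFFS) variant used in the study.

```python
# Sequential forward selection wrapped around LDA, then cross-validated accuracy.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 11))              # 11 ICC-screened radar features
y = rng.integers(0, 2, size=120)            # 1 = stand-to-sit segment

lda = LinearDiscriminantAnalysis()
sfs = SequentialFeatureSelector(lda, n_features_to_select=5, direction="forward")
sfs.fit(X, y)

X_sel = sfs.transform(X)
acc = cross_val_score(lda, X_sel, y, cv=5).mean()
print("selected feature indices:", np.flatnonzero(sfs.get_support()))
print(f"cross-validated accuracy: {acc:.3f}")
```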

Keywords: Doppler radar system, stand-to-sit phase, TUG test, machine learning, classification

Procedia PDF Downloads 148
10037 Design and Fabrication of a Scaffold with Appropriate Features for Cartilage Tissue Engineering

Authors: S. S. Salehi, A. Shamloo

Abstract:

The poor regenerative ability of cartilage tissue when damaged has led scientists to use tissue engineering as a reliable and effective method for regenerating or replacing damaged tissue. An artificial tissue should have features such as biocompatibility and biodegradability, and mechanical properties comparable to the original tissue. In this work, a composite hydrogel with high porosity is prepared using natural and synthetic materials. Mechanical properties, such as the modulus of elasticity, of different polymer combinations were tested, and a hydrogel with good mechanical properties was selected. Bone-marrow-derived mesenchymal stem cells were seeded into the pores of the sponge, and the results showed adhesion and proliferation of cells within the hydrogel after one month. In comparison with previous work, this study offers a new and efficient procedure for the fabrication of cartilage-like tissue and further cartilage repair.

Keywords: cartilage tissue engineering, hydrogel, mechanical strength, mesenchymal stem cell

Procedia PDF Downloads 286
10036 Analysis of Initial Entry-Level Technology Course Impacts on STEM Major Selection

Authors: Ethan Shafer, Timothy Graziano

Abstract:

This research seeks to answer whether first-year courses at institutions of higher learning can impact STEM major selection. Unlike at many universities, an entry-level technology course (often referred to as CS0) is required for all United States Military Academy (USMA) students, regardless of major, in their first year of attendance. Students at the academy choose their major at the end of their first year of studies. Through student responses to a multi-semester survey, this paper identifies a number of factors that potentially influence STEM major selection. Student demographic data, pre-existing exposure and access to technology, perceptions of STEM subjects, and initial desire for a STEM major are captured before and after taking a CS0 course. An analysis of the factors that contribute to student perception of STEM and major selection is presented.

Keywords: education, STEM, pedagogy, digital literacy

Procedia PDF Downloads 110
10035 Task Distraction vs. Visual Enhancement: Which Is More Effective?

Authors: Huangmei Liu, Si Liu, Jia’nan Liu

Abstract:

The present experiment investigated and compared the effectiveness of two methods of attention control: task distraction and visual enhancement. In the study, the effectiveness of task distraction applied to explicit features and of visual enhancement applied to implicit features of the same group of Chinese characters was compared based on their effects on participants' reaction time, subjective confidence ratings, and verbal reports. We found support that visual enhancement of implicit features did overcome the contrary effect of training distraction and led to awareness of those implicit features, at least to some extent.

Keywords: task distraction, visual enhancement, attention, awareness, learning

Procedia PDF Downloads 421
10034 Optimized Preprocessing for Accurate and Efficient Bioassay Prediction with Machine Learning Algorithms

Authors: Jeff Clarine, Chang-Shyh Peng, Daisy Sang

Abstract:

Bioassay is the measurement of the potency of a chemical substance by its effect on living animal or plant tissue. Bioassay data and chemical structures from pharmacokinetic and drug metabolism screening are mined from, and housed in, multiple databases, and bioassay predictions are calculated accordingly to determine further advancement. This paper proposes a four-step preprocessing of datasets for improving bioassay predictions. The first step is instance selection, in which the dataset is categorized into training, testing, and validation sets. The second step is discretization, which partitions the data with accuracy vs. precision in mind. The third step is normalization, where data are scaled between 0 and 1 for subsequent machine learning processing. The fourth step is feature selection, where key chemical properties and attributes are generated. The streamlined results are then analyzed for prediction effectiveness by various machine learning tools, including Pipeline Pilot, R, Weka, and Excel. Experiments and evaluations reveal the effectiveness of various combinations of preprocessing steps and machine learning algorithms in producing more consistent and accurate predictions.
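
A minimal sketch of the four preprocessing steps is shown below on synthetic data; the actual chemical descriptors and the tools named in the paper (Pipeline Pilot, R, Weka, Excel) are not reproduced.

```python
# Four-step preprocessing: instance selection, discretization, normalization,
# feature selection, followed by a simple classifier.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import KBinsDiscretizer, MinMaxScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=30, random_state=0)

# Step 1: instance selection -- split into training, validation and test sets.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

model = make_pipeline(
    KBinsDiscretizer(n_bins=5, encode="ordinal"),  # step 2: discretization
    MinMaxScaler(),                                # step 3: normalization to [0, 1]
    SelectKBest(f_classif, k=10),                  # step 4: feature selection
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print("validation accuracy:", model.score(X_val, y_val))
print("test accuracy:", model.score(X_test, y_test))
```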

Keywords: bioassay, machine learning, preprocessing, virtual screen

Procedia PDF Downloads 263
10033 Logistics Information and Customer Service

Authors: Š. Čemerková, M. Wilczková

Abstract:

The paper deals with the importance of information flow for providing a defined level of customer service in firms. Setting the criteria for the selection and implementation of a logistics information system is a prerequisite for ensuring the flow of information in firms. The decision on the selection and implementation of a logistics information system is linked to the investment and operating costs, which are included in the total logistics costs. The article also presents the conclusions of research focused on logistics information system selection in companies in the Czech Republic.

Keywords: customer service, information system, logistics, research

Procedia PDF Downloads 345
10032 Security Features for Remote Healthcare System: A Feasibility Study

Authors: Tamil Chelvi Vadivelu, Nurazean Maarop, Rasimah Che Yusoff, Farhana Aini Saludin

Abstract:

Implementing a remote healthcare system requires consideration of many security features. Therefore, before any deployment of a remote healthcare system, a feasibility study from the security perspective is crucial. Remote healthcare systems using WBAN technology have been used in other countries for medical purposes, but in Malaysia such projects have not yet been implemented. This study was conducted qualitatively, and the results of interviews with five healthcare practitioners are elaborated. The study addresses four important security features needed to incorporate a remote healthcare system using WBAN in Malaysian government hospitals.

Keywords: remote healthcare, IT security, security features, wireless sensor application

Procedia PDF Downloads 294