Search results for: informative theoretic similarity metrics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1447


1387 Active Features Determination: A Unified Framework

Authors: Meenal Badki

Abstract:

We address the issue of active feature determination, where the objective is to determine the set of examples on which additional data (such as lab tests) needs to be gathered, given a large number of examples with some features (such as demographics) and some examples with all the features (such as the complete Electronic Health Record). We note that certain features may be more costly, unique, or laborious to gather. Our proposal is a general active learning approach that is independent of classifiers and similarity metrics. It allows us to identify examples that differ from the full data set and to obtain all the features for the examples that match. Our comprehensive evaluation, driven by four authentic clinical tasks, shows the efficacy of this approach.
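
The abstract does not give implementation details, so the sketch below shows only one plausible, classifier- and metric-agnostic acquisition heuristic in the spirit described: rank partially observed examples by how far they fall from the fully observed cohort and flag the most dissimilar ones for full data collection. The feature matrices and the cosine-distance choice are illustrative assumptions, not the authors' algorithm.

```python
# Hedged sketch: pick partially observed examples that look least like the
# fully observed cohort, so their remaining (costly) features are gathered first.
import numpy as np
from sklearn.metrics.pairwise import cosine_distances

rng = np.random.default_rng(0)
X_full = rng.normal(size=(200, 5))      # examples with all features (e.g. full EHR)
X_partial = rng.normal(size=(1000, 5))  # examples with only the shared/cheap features

# Distance of each partial example to its nearest fully observed neighbour.
d = cosine_distances(X_partial, X_full).min(axis=1)

# Acquire the additional data for the k most dissimilar examples.
k = 25
acquire_idx = np.argsort(d)[-k:]
print("examples selected for additional data collection:", acquire_idx[:10])
```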

Keywords: feature determination, classification, active learning, sample-efficiency

Procedia PDF Downloads 38
1386 Resume Ranking Using Custom Word2vec and Rule-Based Natural Language Processing Techniques

Authors: Subodh Chandra Shakya, Rajendra Sapkota, Aakash Tamang, Shushant Pudasaini, Sujan Adhikari, Sajjan Adhikari

Abstract:

Many efforts have been made to measure the semantic similarity between text corpora, and techniques have evolved to measure the similarity of two documents. One such state-of-the-art technique in the field of Natural Language Processing (NLP) is the word-to-vector model, which converts words into word embeddings and measures the similarity between the resulting vectors. We found this to be quite useful for the task of resume ranking. This research paper therefore implements the word2vec model, along with other Natural Language Processing techniques, to rank resumes against a particular job description and so automate the hiring process. The paper describes the proposed system and the findings made while building it.
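
As a minimal sketch of the core ranking step (the corpus, chunking and rule-based extraction described in the paper are omitted; the resumes, job description and training parameters below are illustrative), a custom word2vec model can be trained and each resume scored against the job description by cosine similarity of averaged word vectors:

```python
# Minimal sketch: rank resumes against a job description with a custom word2vec
# model. Tokenisation is deliberately naive; the real pipeline adds chunking and
# rule-based information extraction before ranking.
import numpy as np
from gensim.models import Word2Vec

resumes = ["python developer with nlp and machine learning experience",
           "accountant skilled in bookkeeping and payroll"]
job_desc = "looking for an nlp engineer with python experience"

corpus = [doc.split() for doc in resumes + [job_desc]]
model = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, epochs=50)

def doc_vector(tokens):
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

jd = doc_vector(job_desc.split())
ranked = sorted(((cosine(doc_vector(r.split()), jd), r) for r in resumes), reverse=True)
for score, resume in ranked:
    print(f"{score:.3f}  {resume}")
```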

Keywords: chunking, document similarity, information extraction, natural language processing, word2vec, word embedding

Procedia PDF Downloads 131
1385 Unsteady Similarity Solution for a Slender Dry Patch in a Thin Newtonian Fluid Film

Authors: S. S. Abas, Y. M. Yatim

Abstract:

In this paper, the unsteady, slender, symmetric dry patch in an infinitely wide and thin liquid film of Newtonian fluid draining under gravity down an inclined plane in the presence of a strong surface-tension effect is considered. A similarity transformation, named a travelling-wave similarity solution, is used to reduce the governing partial differential equation to an ordinary differential equation, which is then solved numerically using a shooting method. The introduction of the surface-tension effect on the flow leads to a fourth-order ordinary differential equation. The solution obtained predicts that the dry patch has a quartic shape and that the free surface has a capillary ridge near the contact line which decays in an oscillatory manner far from it.

Keywords: dry patch, Newtonian fluid, similarity solution, surface-tension effect, travelling-wave, unsteady thin-film flow

Procedia PDF Downloads 284
1384 Enhancing Word Meaning Retrieval Using FastText and Natural Language Processing Techniques

Authors: Sankalp Devanand, Prateek Agasimani, Shamith V. S., Rohith Neeraje

Abstract:

Machine translation has witnessed significant advancements in recent years, but the translation of languages with distinct linguistic characteristics, such as English and Sanskrit, remains a challenging task. This research presents the development of a dedicated English-to-Sanskrit machine translation model, aiming to bridge the linguistic and cultural gap between these two languages. Using a variety of natural language processing (NLP) approaches, including FastText embeddings, this research proposes a thorough method to improve word meaning retrieval. Data preparation, part-of-speech tagging, dictionary searches, and transliteration are all included in the methodology. The study also addresses the implementation of an interpreter pattern and uses a word similarity task to assess the quality of word embeddings. The experimental outcomes show how the suggested approach may be used to enhance word meaning retrieval tasks with greater efficacy, accuracy, and adaptability. Evaluation of the model's performance is conducted through rigorous testing, comparing its output against existing machine translation systems. The assessment includes quantitative metrics such as BLEU scores, METEOR scores, Jaccard Similarity, etc.
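
As a small illustration of the word-similarity assessment mentioned above (the toy corpus and parameters are placeholders; the paper trains on English-Sanskrit data not reproduced here), FastText embeddings can be trained and queried with gensim, including for unseen inflections thanks to subword information:

```python
# Sketch of the word-similarity check used to assess embedding quality.
from gensim.models import FastText

corpus = [["fire", "agni", "burns"],
          ["water", "jala", "flows"],
          ["fire", "heat", "flame"]]

model = FastText(sentences=corpus, vector_size=50, window=3, min_count=1, epochs=100)

# Similarity between in-vocabulary words.
print(model.wv.similarity("fire", "flame"))

# FastText composes vectors from character n-grams, so an unseen inflection
# such as "fires" still receives an embedding (useful for transliterated forms).
print(model.wv.similarity("fires", "flame"))
```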

Keywords: machine translation, English to Sanskrit, natural language processing, word meaning retrieval, fastText embeddings

Procedia PDF Downloads 7
1383 Aligning Cultural Practices through Information Exchange: A Taxonomy in Global Manufacturing Industry

Authors: Hung Nguyen

Abstract:

With the rise of global supply chain networks, the choice of supply chain orientation is critical. The alignment between cultural similarity and supply chain information exchange could help identify appropriate supply chain orientations, which would differentiate the stronger competitors and performers from the weaker ones. Through developing a taxonomy, this study examined whether the choices of action programs and manufacturing performance differ depending on the levels of attainment of cultural similarity and information exchange. The study employed statistical tests on a large-scale dataset consisting of 680 manufacturing plants from various cultures and industries. Firms need to align cultural practices with the level of information exchange in order to achieve good overall business performance. Three major orientations consistently emerged: the Proactive, the Initiative and the Reactive. Firms experiencing higher payoffs from various improvements are the ones that successfully align both information exchange and cultural similarity. The findings provide step-by-step decision-making guidance for supply chain information exchange, especially for global supply chain managers. By including both cultural similarity and information exchange, this paper adds greater comprehensiveness and richness to the supply chain literature.

Keywords: culture, information exchange, supply chain orientation, similarity

Procedia PDF Downloads 335
1382 From Abraham to Average Man: Game Theoretic Analysis of Divine Social Relationships

Authors: Elizabeth Latham

Abstract:

Billions of people worldwide profess some feeling of psychological or spiritual connection with the divine. The majority of them attribute this personal connection to the God of the Christian Bible. The objective of this research was to discover what could be known about the exact social nature of these relationships and to see if they mimic the interactions recounted in the bible; if a worldwide majority believes that the Christian Bible is a true account of God’s interactions with mankind, it is reasonable to assume that the interactions between God and the aforementioned people would be similar to the ones in the bible. This analysis required the employment of an unusual method of biblical analysis: Game Theory. Because the research focused on documented social interaction between God and man in scripture, it was important to go beyond text-analysis methods. We used stories from the New Revised Standard Version of the bible to set up “games” using economics-style matrices featuring each player’s motivations and possible courses of action, modeled after interactions in the Old and New Testaments between the Judeo-Christian God and some mortal person. We examined all relevant interactions for the objectives held by each party and their strategies for obtaining them. These findings were then compared to similar “games” created based on interviews with people subscribing to different levels of Christianity, who ranged from barely-practicing to clergymen. The range was broad so as to look for a correlation between scriptural knowledge and game-similarity to the bible. Each interview described a personal experience someone believed they had with God, and matrices were developed to describe each one as social interaction: a “game” to be analyzed quantitatively. The data showed that in most cases, the social features of God-man interactions in the modern lives of people were like those present in the “games” between God and man in the bible. This similarity was referred to in the study as “biblical faith”, and it alone was a fascinating finding with many implications. The even more notable finding, however, was that the amount of game-similarity present did not correlate with the amount of scriptural knowledge. Each participant was also surveyed on family background, political stances, general education, and scriptural knowledge; those who had biblical faith were not necessarily the ones who knew the bible best. Instead, there was a high degree of correlation between biblical faith and family religious observance. It seems that to have a biblical psychological relationship with God, it is more important to have a religious family than to have studied scripture, a surprising insight with massive implications for the practice and preservation of religion.

Keywords: bible, Christianity, game theory, social psychology

Procedia PDF Downloads 123
1381 The Influence of Audio on Perceived Quality of Segmentation

Authors: Silvio Ricardo Rodrigues Sanches, Bianca Cogo Barbosa, Beatriz Regina Brum, Cléber Gimenez Corrêa

Abstract:

To evaluate the quality of a segmentation algorithm, researchers use subjective or objective metrics. Although subjective metrics are more accurate than objective ones, objective metrics do not require user feedback to test an algorithm. Objective metrics require subjective experiments only during their development. Subjective experiments typically display to users some videos (generated from frames with segmentation errors) that simulate the environment of an application domain. This user feedback is crucial information for metric definition. In the subjective experiments applied to develop some state-of-the-art metrics used to test segmentation algorithms, the videos displayed during the experiments did not contain audio. Audio is an essential component in applications such as videoconferencing and augmented reality. If audio influences the user’s perception, using only videos without audio in subjective experiments can compromise the efficiency of an objective metric generated using data from these experiments. This work aims to identify whether audio influences the user’s perception of segmentation quality in background substitution applications with audio. The proposed approach used a subjective method based on formal video quality assessment methods. The results showed that audio influences the quality of segmentation perceived by a user.

Keywords: background substitution, influence of audio, segmentation evaluation, segmentation quality

Procedia PDF Downloads 92
1380 A Relational Case-Based Reasoning Framework for Project Delivery System Selection

Authors: Yang Cui, Yong Qiang Chen

Abstract:

An appropriate project delivery system (PDS) is crucial to the success of a construction project. Case-based reasoning (CBR) is a useful support for PDS selection. However, the traditional CBR approach represents cases as attribute-value vectors without taking relations among attributes into consideration, and it cannot calculate the similarity when the structures of cases are not strictly the same. Therefore, this paper solves this problem by adopting the relational case-based reasoning (RCBR) approach for PDS selection, considering both structural similarity and feature similarity. To develop the feature terms of the construction projects, the criteria and factors governing the PDS selection process are first identified. Then, feature terms for the construction projects are developed. Finally, the mechanism of similarity calculation and a case study indicate how RCBR works for PDS selection. The adoption of RCBR in PDS selection expands the scope of application of the traditional CBR method and improves the accuracy of the PDS selection system.

Keywords: relational case-based reasoning, case-based reasoning, project delivery system, PDS selection

Procedia PDF Downloads 400
1379 Integration of Fuzzy Logic in the Representation of Knowledge: Application in the Building Domain

Authors: Hafida Bouarfa, Mohamed Abed

Abstract:

The main objective of our work is the development and validation of a system called Fuzzy Vulnerability. Fuzzy Vulnerability uses a fuzzy representation in order to tolerate imprecision in the description of constructions. In the second phase, we evaluate the similarity between the vulnerability of a new construction and those of the whole set of historical cases. This similarity is evaluated on two levels: 1) individual similarity, based on fuzzy aggregation techniques; and 2) global similarity, which uses regular increasing monotone (RIM) linguistic quantifiers to combine the various individual similarities between two constructions. The third phase of the Fuzzy Vulnerability process consists in using the vulnerabilities of historical constructions closely similar to the current construction to deduce its estimated vulnerability. We validated our system using 50 cases. We evaluated the performance of Fuzzy Vulnerability on the basis of two basic criteria: the precision of the estimates and the tolerance of imprecision throughout the estimation process. The comparison was done with estimates made by more laborious and time-consuming models. The results are satisfactory.
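
A small sketch of the global-similarity step is given below, assuming the standard ordered weighted averaging (OWA) operator with a RIM quantifier Q(r) = r^alpha; the individual similarity values and the value of alpha are made up for illustration and are not taken from the paper.

```python
# Global similarity via OWA weights derived from a RIM quantifier Q(r) = r**alpha.
# Individual similarities between the two constructions are illustrative values.
import numpy as np

def rim_owa(similarities, alpha=2.0):
    s = np.sort(np.asarray(similarities, dtype=float))[::-1]  # descending order
    n = len(s)
    q = lambda r: r ** alpha                                   # RIM quantifier
    weights = np.array([q((i + 1) / n) - q(i / n) for i in range(n)])
    return float(np.dot(weights, s))

individual_sims = [0.9, 0.7, 0.4, 0.8]   # one value per compared attribute
print(rim_owa(individual_sims, alpha=2.0))
```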

Keywords: case based reasoning, fuzzy logic, fuzzy case based reasoning, seismic vulnerability

Procedia PDF Downloads 262
1378 From Responses of Macroinvertebrate Metrics to the Definition of Reference Thresholds

Authors: Hounyèmè Romuald, Mama Daouda, Argillier Christine

Abstract:

The present study focused on the use of benthic macrofauna to define the reference state of an anthropized lagoon (Nokoué, Benin) from the responses of relevant metrics to pressure proxies. The approach used is a combination of a joint species distribution model and Bayesian networks. The joint species distribution model was used to select the relevant metrics and generate posterior probabilities that were then converted into posterior response probabilities for each of the quality classes (pressure levels); these constitute the conditional probability tables allowing the establishment of the probabilistic graph representing the different causal relationships between metrics and pressure proxies. For the definition of the reference thresholds, the predicted responses for low-pressure levels were read via probability density diagrams. Observations collected during high- and low-water periods spanning three consecutive years (2004-2006), covering 33 macroinvertebrate taxa present in all seasons and at all sampling points, and measurements of 14 environmental parameters were used as application data. The study demonstrated reliable inferences, the selection of seven relevant metrics and the definition of quality thresholds for each environmental parameter. The relevance of the metrics, as well as of the reference thresholds for ecological assessment despite the small sample size, suggests the potential for wider applicability of the approach in aquatic ecosystem monitoring and assessment programs in developing countries, which are generally characterized by a lack of monitoring data.

Keywords: pressure proxies, Bayesian inference, bioindicators, acadjas, functional traits

Procedia PDF Downloads 59
1377 The Impact of ESG Factors on Performance Measures in European Business

Authors: Raquel Pérez Estébanez

Abstract:

This research proposal seeks to delve into the intricate relationship between performance indicators and sustainability metrics within the realm of corporate entities. As businesses grapple with the imperative of sustainable practices, understanding how traditional performance metrics intersect with sustainability indicators becomes paramount. This study endeavours to unravel the dynamics of this relationship, aiming to illuminate ways in which these two sets of metrics can be harmoniously integrated to offer a comprehensive evaluation of a company's success while considering its environmental and societal impact. The integration of performance measures and sustainability metrics has become a focal point in contemporary business literature as companies strive to balance economic success with environmental and social responsibility. Performance indicators traditionally focus on financial metrics such as return on assets, return on equity and profitability. Sustainability metrics, on the other hand, encompass environmental, social, and governance (ESG) factors. The challenge lies in aligning these diverse metrics for a comprehensive assessment. Research indicates a growing trend among corporations to incorporate sustainability metrics into their performance evaluations. However, challenges persist, with companies often struggling to integrate non-financial indicators seamlessly. The GRI (Global Reporting Initiative) and SASB (Sustainability Accounting Standards Board) propose frameworks for harmonizing financial and sustainability reporting. These frameworks emphasize the need for companies to disclose material sustainability information alongside traditional financial metrics. Several studies suggest that integrating sustainability metrics positively influences decision-making; companies that consider sustainability factors in decision-making exhibit improved long-term performance and risk management. Other research highlights the increasing importance of sustainability metrics in shaping stakeholder perceptions. Investors, in particular, are placing greater emphasis on companies' environmental and social performance when making investment decisions. Industry-specific studies underscore the need for customized approaches to integration due to sector-specific challenges and opportunities, suggesting that a one-size-fits-all solution may not be applicable across diverse industries. While progress is evident, challenges persist, necessitating further research to refine integration frameworks, address industry-specific nuances, and assess the long-term impact on organizational performance and societal contributions.

Keywords: ESG, ROE, ROA, performance measures

Procedia PDF Downloads 6
1376 Similarity of the Disposition of the Electrostatic Potential of Tetrazole and Carboxylic Group to Investigate Their Bioisosteric Relationship

Authors: Alya A. Arabi

Abstract:

Bioisosteres are functional groups that can be used interchangeably without affecting the potency of the drug. Bioisosteres have similar pharmacological properties. Bioisosterism is useful for modifying the physicochemical properties of a drug while obeying Lipinski’s rules. Bioisosteres are key in optimizing the pharmacokinetic and pharmacodynamic properties of a drug. Tetrazole and carboxylate anions are non-classical bioisosteres. Density functional theory was used to obtain the wavefunctions of the molecules and the optimized geometries. The quantum theory of atoms in molecules (QTAIM) was used to uncover the similarity of the average electron density in tetrazole and carboxylate anions. This similarity between the bioisosteres capped by a methyl group held despite the fact that the groups have different volumes, charges, energies, and electron populations. The biochemical correspondence of tetrazole and carboxylic acid was also determined to be a result of the similarity of the topography of the electrostatic potential (ESP). The ESP demonstrates the pharmacological and biochemical resemblance for a matching “key-and-lock” interaction.

Keywords: bioisosteres, carboxylic acid, density functional theory, electrostatic potential, tetrazole

Procedia PDF Downloads 403
1375 Using Equipment Telemetry Data for Condition-Based Maintenance Decisions

Authors: John Q. Todd

Abstract:

Given that modern equipment can provide comprehensive health, status, and error condition data via built-in sensors, maintenance organizations have a new and valuable source of insight to take advantage of. This presentation will show what these data payloads might look like and how they can be filtered, visualized, turned into metrics, used for machine learning, and used to generate alerts for further action.
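
A rough sketch of that filter, metric and alert flow is shown below; the field names, sample values and the threshold are assumptions for illustration, not material from the presentation.

```python
# Hedged sketch of the filter -> metric -> alert flow over equipment telemetry.
import pandas as pd

telemetry = pd.DataFrame({
    "timestamp": pd.date_range("2024-01-01", periods=8, freq="h"),
    "oil_temp_c": [78, 80, 81, 95, 97, 99, 84, 82],
    "error_code": [0, 0, 0, 3, 3, 0, 0, 0],
})

# Filter: keep only readings without an active error condition.
clean = telemetry[telemetry["error_code"] == 0].copy()

# Metric: rolling mean of oil temperature as a simple health indicator.
clean["oil_temp_roll"] = clean["oil_temp_c"].rolling(window=3, min_periods=1).mean()

# Alert: flag readings where the metric breaches an assumed threshold.
alerts = clean[clean["oil_temp_roll"] > 90]
print(alerts[["timestamp", "oil_temp_roll"]])
```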

Keywords: condition based maintenance, equipment data, metrics, alerts

Procedia PDF Downloads 161
1374 Genetic Diversity Analysis in Triticum aestivum Using Microsatellite Markers

Authors: Prachi Sharma, Mukesh Kumar Rana

Abstract:

In the present study, simple sequence repeat (SSR) markers were used to analyse the genetic diversity of 37 genotypes of Triticum aestivum. The DNA was extracted using the CTAB method and quantified with a fluorimeter. The annealing temperatures for 27 primer pairs were standardized using gradient PCR, out of which 16 primers gave satisfactory amplification at temperatures ranging from 50 to 62 °C. Of the 16 polymorphic SSR markers, only 10 primer pairs were used in the study, generating 34 reproducible amplicons among the 37 genotypes, of which 30 were polymorphic. Primer pairs Xgwm533, Xgwm160, Xgwm408, Xgwm120, Xgwm186 and Xgwm261 produced the maximum percentage of polymorphic bands (100%). The primers produced an average of 3.4 bands each. The genetic relationship was determined using the Jaccard pairwise similarity coefficient and UPGMA cluster analysis with NTSYS Pc.2 software. The values of the similarity index range from 0 to 1; the similarity coefficient ranged from 0.13 to 0.97. A minimum genetic similarity (0.13) was observed between VL 804 and HPW 288, meaning they are only 13% similar. A larger number of SSR markers would be useful for further supporting genetic diversity analysis in these wheat genotypes.
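
The similarity and clustering step can be sketched as follows, assuming a binary band-scoring matrix (rows = genotypes, columns = amplicons); the matrix below is illustrative and is not the study's data.

```python
# Sketch: Jaccard similarity and UPGMA clustering on a binary band matrix.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, dendrogram

bands = np.array([[1, 0, 1, 1, 0],
                  [1, 1, 1, 0, 0],
                  [0, 1, 0, 1, 1],
                  [1, 0, 1, 1, 1]])
genotypes = ["G1", "G2", "G3", "G4"]

dist = pdist(bands, metric="jaccard")          # 1 - Jaccard similarity
print(1 - squareform(dist))                    # pairwise similarity matrix (0-1)

tree = linkage(dist, method="average")         # UPGMA = average linkage
dendrogram(tree, labels=genotypes, no_plot=True)
```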

Keywords: wheat, genetic diversity, microsatellite, polymorphism

Procedia PDF Downloads 585
1373 Evaluating Classification with Efficacy Metrics

Authors: Guofan Shao, Lina Tang, Hao Zhang

Abstract:

The values of image classification accuracy are affected by class size distributions and classification schemes, making it difficult to compare the performance of classification algorithms across different remote sensing data sources and classification systems. Based on the term efficacy from medicine and pharmacology, we have developed the metrics of image classification efficacy at the map and class levels. The novelty of this approach is that a baseline classification is involved in computing image classification efficacies so that the effects of class statistics are reduced. Furthermore, the image classification efficacies are interpretable and comparable, and thus, strengthen the assessment of image data classification methods. We use real-world and hypothetical examples to explain the use of image classification efficacies. The metrics of image classification efficacy meet the critical need to rectify the strategy for the assessment of image classification performance as image classification methods are becoming more diversified.
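
The abstract does not give the exact formula, so the sketch below only illustrates the general idea of normalising accuracy against a baseline classification derived from class statistics; the specific normalisation used here is our assumption, not necessarily the metric defined in the paper.

```python
# Hedged sketch: map-level efficacy as accuracy gain over a class-proportion
# baseline. The normalisation formula is an illustrative assumption.
import numpy as np

def map_efficacy(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    acc = np.mean(y_true == y_pred)
    # Baseline: expected accuracy of a classifier that follows class proportions.
    _, counts = np.unique(y_true, return_counts=True)
    p = counts / counts.sum()
    baseline = np.sum(p ** 2)
    return (acc - baseline) / (1 - baseline)

y_true = [0, 0, 0, 1, 1, 2, 2, 2, 2, 2]
y_pred = [0, 0, 1, 1, 1, 2, 2, 2, 0, 2]
print(round(map_efficacy(y_true, y_pred), 3))
```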

Keywords: accuracy assessment, efficacy, image classification, machine learning, uncertainty

Procedia PDF Downloads 180
1372 Cross-Dialect Sentence Transformation: A Comparative Analysis of Language Models for Adapting Sentences to British English

Authors: Shashwat Mookherjee, Shruti Dutta

Abstract:

This study explores linguistic distinctions among American, Indian, and Irish English dialects and assesses various Large Language Models (LLMs) in their ability to generate British English translations from these dialects. Using cosine similarity analysis, the study measures the linguistic proximity between original British English translations and those produced by LLMs for each dialect. The findings reveal that Indian and Irish English translations maintain notably high similarity scores, suggesting strong linguistic alignment with British English. In contrast, American English exhibits slightly lower similarity, reflecting its distinct linguistic traits. Additionally, the choice of LLM significantly impacts translation quality, with Llama-2-70b consistently demonstrating superior performance. The study underscores the importance of selecting the right model for dialect translation, emphasizing the role of linguistic expertise and contextual understanding in achieving accurate translations.
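
A minimal sketch of the cosine-similarity comparison is shown below; the embedding model and the sentence pair are illustrative assumptions, and the study itself compares reference translations against outputs from several LLMs.

```python
# Sketch: cosine similarity between a reference British English sentence and a
# candidate translation, using sentence embeddings.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative embedding model

reference = "I was queuing for the lift when my mobile rang."
candidate = "I was waiting in line for the elevator when my cell phone rang."

emb = model.encode([reference, candidate], convert_to_tensor=True)
print(float(util.cos_sim(emb[0], emb[1])))
```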

Keywords: cross-dialect translation, language models, linguistic similarity, multilingual NLP

Procedia PDF Downloads 27
1371 A Bayesian Model with Improved Prior in Extreme Value Problems

Authors: Eva L. Sanjuán, Jacinto Martín, M. Isabel Parra, Mario M. Pizarro

Abstract:

In Extreme Value Theory, inference for the parameters of the distribution is made using only a small part of the observed values. When block maxima values are taken, many data are discarded. We developed a new Bayesian inference model to exploit all the information provided by the data, introducing informative priors and using the relations between baseline and limit parameters. Firstly, we studied the accuracy of the new model for three baseline distributions that lead to a Gumbel extreme distribution: Exponential, Normal and Gumbel. Secondly, we considered mixtures of Normal variables, to simulate practical situations in which data do not fit pure distributions because of perturbations (noise).
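
For orientation, a minimal Bayesian sketch of the Gumbel case is given below: a Metropolis sampler with an informative Normal prior on the location parameter. The priors, data and tuning are illustrative only and do not reproduce the authors' model, which additionally relates baseline and limit parameters.

```python
# Minimal Metropolis sketch: Gumbel likelihood for block maxima with an
# informative prior on the location parameter mu.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
maxima = stats.gumbel_r.rvs(loc=10.0, scale=2.0, size=50, random_state=rng)

def log_post(mu, log_sigma):
    sigma = np.exp(log_sigma)
    lp = stats.norm.logpdf(mu, loc=9.0, scale=1.0)           # informative prior on mu
    lp += stats.norm.logpdf(log_sigma, loc=0.0, scale=1.0)   # weak prior on log sigma
    return lp + stats.gumbel_r.logpdf(maxima, loc=mu, scale=sigma).sum()

theta = np.array([9.0, 0.0])
samples = []
for _ in range(5000):
    prop = theta + rng.normal(scale=0.1, size=2)
    if np.log(rng.uniform()) < log_post(*prop) - log_post(*theta):
        theta = prop
    samples.append(theta.copy())

post = np.array(samples[1000:])   # discard burn-in
print("posterior mean mu, sigma:", post[:, 0].mean(), np.exp(post[:, 1]).mean())
```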

Keywords: Bayesian inference, extreme value theory, Gumbel distribution, highly informative prior

Procedia PDF Downloads 169
1370 Benchmarking Bert-Based Low-Resource Language: Case Uzbek NLP Models

Authors: Jamshid Qodirov, Sirojiddin Komolov, Ravilov Mirahmad, Olimjon Mirzayev

Abstract:

Nowadays, natural language processing tools play a crucial role in our daily lives, including various text-processing techniques. Very advanced models exist for widely used languages such as English and Russian, but for some languages, such as Uzbek, NLP models have been developed only recently; thus, there are only a few NLP models for the Uzbek language. Moreover, no existing work shows how the Uzbek NLP models behave in different situations and when to use them. This work tries to close this gap and compares the Uzbek NLP models existing as of the time this article was written. The authors compare the NLP models in two different scenarios: sentiment analysis and sentence similarity, which are implementations of the two most common problems in industry: classification and similarity. Another outcome of this work is two datasets for classification and sentence similarity in the Uzbek language, which we generated ourselves and which can be useful in both industry and academia.
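
The sentence-similarity scenario can be sketched as below with mean-pooled BERT embeddings and cosine similarity; a multilingual checkpoint stands in for the Uzbek-specific models benchmarked in the paper, and the sentences are illustrative.

```python
# Sketch of BERT-based sentence similarity: mean pooling + cosine similarity.
import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-multilingual-cased"   # placeholder for an Uzbek BERT model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

sentences = ["Bugun havo juda issiq.", "Bugun kun juda qaynoq."]
enc = tok(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**enc).last_hidden_state            # (batch, tokens, dim)

mask = enc["attention_mask"].unsqueeze(-1)
emb = (hidden * mask).sum(dim=1) / mask.sum(dim=1)     # mean pooling over real tokens
sim = torch.nn.functional.cosine_similarity(emb[0:1], emb[1:2])
print(float(sim))
```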

Keywords: NLP, benchmak, bert, vectorization

Procedia PDF Downloads 27
1369 Optimality Theoretic Account of Indian Loanwords in Hadhrami Arabic

Authors: Mohammed Saleh Lahmdi, Hassan Obeid Alfadly

Abstract:

This study explores an optimality-theoretic account of Indian loanwords in Hadhrami Arabic (henceforth HA), a variety of Arabic spoken in the coastal areas and the valley of Hadhramout Province. The purpose of this paper is to find out how the phonological forms of Indian loanwords can be accounted for from an OT standpoint. To achieve this purpose, two main instruments were implemented: participant observation and interviews. The sample of this study, consisting of eleven informants, was selected carefully by judgment sampling according to certain characteristics. An ethnographic qualitative approach was employed to examine the phonological articulations that the researcher encountered during the implementation of these instruments. Many phonological processes are used, and several markedness and faithfulness constraints interact in conflict in order to choose the optimal form of the Hadhrami realisations. The findings of the study confirm that the Hadhrami syllable structure prevails over that of the donor language, i.e., the Indian (mainly Urdu) language. Specifically, markedness constraints dominate faithfulness ones when most of the Indian loanwords are incorporated into HA.

Keywords: linguistic borrowing, optimality theory, Hadhrami Arabic, loanword, phonological processes

Procedia PDF Downloads 20
1368 Bottleneck Modeling in Information Technology Service Management

Authors: Abhinay Puvvala, Veerendra Kumar Rai

Abstract:

A bottleneck situation arises when the outflow is less than the inflow in a pipe-like setup. A more practical interpretation of bottlenecks emphasizes the realization of Service Level Objectives (SLOs) at given workloads. Our approach detects two key aspects of bottlenecks – when and where. To identify ‘when’, we continuously poll certain key metrics such as resource utilization, processing time, request backlog and throughput at a system level. Further, as the workload is gradually increased in discrete steps, a bottleneck situation arises when the slope of the expected sojourn time at a workload step is greater than ‘K’ times the slope of the expected sojourn time at the previous step. ‘K’ defines the threshold condition and is computed based on the system’s service level objectives. The second aspect of our approach is to identify the location of the bottleneck. In multi-tier systems with a complex network of layers, it is a challenging problem to locate the bottleneck that affects overall system performance. We stage the system by varying the workload incrementally to draw a correlation between load increase and system performance up to the point where Service Level Objectives are violated. During the staging process, multiple metrics are monitored at the hardware and application levels. Correlations are drawn between the metrics and the overall system performance. These correlations, along with the Service Level Objectives, are used to arrive at the threshold conditions for each of these metrics. Subsequently, the same method used to identify when a bottleneck occurs is applied to the metrics data with these threshold conditions to locate bottlenecks.
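
The 'when' rule can be illustrated as below: flag a bottleneck at the first workload step whose sojourn-time slope exceeds K times the slope at the previous step. The workloads, sojourn times and value of K are illustrative values, not data from the paper.

```python
# Sketch of the 'when' detection rule over discrete workload steps.
import numpy as np

workload = np.array([10, 20, 30, 40, 50, 60])              # requests/sec, discrete steps
sojourn = np.array([0.20, 0.22, 0.25, 0.30, 0.55, 1.40])   # expected sojourn time (s)
K = 2.0                                                     # threshold derived from SLOs

slopes = np.diff(sojourn) / np.diff(workload)
for i in range(1, len(slopes)):
    if slopes[i] > K * slopes[i - 1]:
        print(f"bottleneck onset near workload {workload[i + 1]} req/s")
        break
```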

Keywords: bottleneck, workload, service level objectives (SLOs), throughput, system performance

Procedia PDF Downloads 206
1367 Examining the Importance of the Structure Based on Grid Computing Service and Virtual Organizations

Authors: Sajjad Baghernezhad, Saeideh Baghernezhad

Abstract:

Vast changes and developments achieved in the information technology field in recent decades have made the review of different issues, such as organizational structures, unavoidable. Applying information technologies such as the Internet, together with the widespread use of computers and related networks, has led to new organizational formations with a nature completely different from the traditional, large and bureaucratic ones; some common characteristics of such organizations are the transfer of affairs outside the organization, benefiting from information and communication networks, and science-centered workers. Such communication requirements have led to the development of network technologies, including grid computing. At first, grid computing was intended only to connect some sites for a short time and use their resources simultaneously, but it has now gone beyond this idea. In this article, grid computing technology is examined, and at the same time, the virtual organization concept is discussed.

Keywords: grid computing, virtual organizations, software engineering, organization

Procedia PDF Downloads 306
1366 Towards the Use of Software Product Metrics as an Indicator for Measuring Mobile Applications Power Consumption

Authors: Ching Kin Keong, Koh Tieng Wei, Abdul Azim Abd. Ghani, Khaironi Yatim Sharif

Abstract:

Maintaining the factory-default battery endurance rate over time while supporting a huge number of running applications on energy-restricted mobile devices has created a new challenge for mobile application developers. While delivering on customers’ unlimited expectations, developers are barely aware of the efficient use of energy by the application itself. Thus, developers need a set of valid energy consumption indicators to assist them in developing energy-saving applications. In this paper, we present a few software product metrics that can be used as indicators to measure the energy consumption of Android-based mobile applications in the early design stage. In particular, Trepn Profiler (a power profiling tool for Qualcomm processors) was used to collect the mobile application power consumption data, which was then analyzed against the 23 software metrics in this preliminary study. The results show that McCabe cyclomatic complexity, number of parameters, nested block depth, number of methods, weighted methods per class, number of classes, total lines of code and method lines have a direct relationship with the power consumption of a mobile application.

Keywords: battery endurance, software metrics, mobile application, power consumption

Procedia PDF Downloads 371
1365 Cost Sensitive Feature Selection in Decision-Theoretic Rough Set Models for Customer Churn Prediction: The Case of Telecommunication Sector Customers

Authors: Emel Kızılkaya Aydogan, Mihrimah Ozmen, Yılmaz Delice

Abstract:

In recent years, the telecommunications sector has been changing and developing continuously in the global market. In this sector, churn analysis techniques are commonly used for analysing why some customers terminate their service subscriptions prematurely. Customer churn is of utmost significance in this sector since it causes considerable business loss. Many companies carry out various studies in order to prevent losses while increasing customer loyalty. Although a large quantity of accumulated data is available in this sector, its usefulness is limited by data quality and relevance. In this paper, a cost-sensitive feature selection framework is developed, aiming to obtain the feature reducts for predicting customer churn. The framework is a cost-based optional pre-processing stage to remove redundant features for churn management. In addition, this cost-based feature selection algorithm is applied in a telecommunication company in Turkey, and the results obtained with the algorithm are reported.

Keywords: churn prediction, data mining, decision-theoretic rough set, feature selection

Procedia PDF Downloads 422
1364 Assessing the Effects of Land Use Spatial Structure on Urban Heat Island Using New Launched Remote Sensing in Shenzhen, China

Authors: Kai Liu, Hongbo Su, Weimin Wang, Hong Liang

Abstract:

Urban heat islands (UHIs) have attracted attention around the world since they profoundly affect human life and climatological conditions. A better understanding of the effects of landscape pattern on UHI is crucial for improving the ecological security and sustainability of cities. This study aims to investigate how landscape composition and configuration affect UHI in Shenzhen, China, based on the analysis of land surface temperature (LST) in relation to landscape metrics, mainly with the aid of three new satellite sensors launched by China. The HJ-1B satellite system was utilized to estimate surface temperature and comprehensively explore the urban thermal spatial pattern. The landscape metrics derived from the high-spatial-resolution remote sensing satellites (GF-1 and ZY-3) were compared and analyzed to validate the performance of the newly launched satellite sensors. Results show that the mean LST is correlated with the main landscape metrics, involving both class-based and landscape-based metrics, suggesting that landscape composition and spatial configuration both influence UHI. These relationships also reveal that urban green space has a significant effect in mitigating UHI in Shenzhen due to its homogeneous spatial distribution and large spatial extent. Overall, our study not only confirms the applicability and effectiveness of the HJ-1B, GF-1 and ZY-3 satellite systems for studying UHI but also reveals the impacts of urban spatial structure on UHI, which is meaningful for the planning and management of the urban environment.

Keywords: urban heat island, Shenzhen, new remote sensing sensor, remote sensing satellites

Procedia PDF Downloads 384
1363 A Self-Coexistence Strategy for Spectrum Allocation Using Selfish and Unselfish Game Models in Cognitive Radio Networks

Authors: Noel Jeygar Robert, V. K. Vidya

Abstract:

Cognitive radio is a software-defined radio technology that allows cognitive users to operate on the vacant bands of spectrum allocated to licensed users. Cognitive radio plays a vital role in the efficient utilization of the wireless radio spectrum shared between cognitive users and licensed users without causing any interference to the licensed users. Spectrum allocation followed by spectrum sharing is done in such a fashion that a cognitive user has to wait until spectrum holes are identified and allocated when the licensed user moves out of its own allocated spectrum. In this paper, we propose a self-coexistence strategy using bargaining and Cournot game models for achieving spectrum allocation in cognitive radio networks. The game-theoretic model analyses the behaviour of cognitive users in both cooperative and non-cooperative scenarios and provides an equilibrium level of spectrum allocation. Game-theoretic models such as the bargaining game model and the Cournot game model produce a balanced distribution of spectrum resources and energy consumption. Simulation results show that both game models achieve better performance compared to other popular techniques.
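
As a small illustration of the Cournot-style equilibrium idea (the demand and cost parameters below are illustrative and are not the paper's spectrum model), two players' best responses can be iterated until they converge to the Nash equilibrium:

```python
# Sketch: iterated best responses in a two-player Cournot game with linear
# inverse demand P = a - b*(q1 + q2) and constant unit costs.
a, b = 10.0, 1.0
c1, c2 = 2.0, 3.0

def best_response(qj, ci):
    return max(0.0, (a - ci - b * qj) / (2 * b))

q1, q2 = 1.0, 1.0
for _ in range(100):
    q1, q2 = best_response(q2, c1), best_response(q1, c2)

print(f"Cournot equilibrium: q1={q1:.3f}, q2={q2:.3f}")
# Analytic check: q1* = (a - 2*c1 + c2)/(3*b) = 3.0, q2* = (a - 2*c2 + c1)/(3*b) = 2.0
```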

Keywords: cognitive radio, game theory, bargaining game, Cournot game

Procedia PDF Downloads 263
1362 Using Multi-Arm Bandits to Optimize Game Play Metrics and Effective Game Design

Authors: Kenny Raharjo, Ramon Lawrence

Abstract:

Game designers have the challenging task of building games that engage players to spend their time and money on the game. There are an infinite number of game variations and design choices, and it is hard to systematically determine the design choices that will produce positive experiences for players. In this work, we demonstrate how multi-arm bandits can be used to automatically explore game design variations to achieve improved player metrics. The advantage of multi-arm bandits is that they allow for continuous experimentation and variation, intrinsically converge to the best solution, and require no special infrastructure to use beyond allowing minor game variations to be deployed to users for evaluation. A user study confirms that applying multi-arm bandits was successful in determining the preferred game variation with the highest play time metrics and can be a useful technique in a game designer’s toolkit.
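
A minimal bandit sketch is shown below, using epsilon-greedy selection over three hypothetical game-design variants with simulated play-time rewards; the reward means, epsilon and round count are illustrative and are not the authors' setup, where a "pull" corresponds to deploying a variant to a real player.

```python
# Minimal epsilon-greedy bandit over three game-design variants.
import numpy as np

rng = np.random.default_rng(42)
true_mean_play_time = [9.0, 12.0, 10.5]     # minutes per session, per variant
n_arms, epsilon, rounds = 3, 0.1, 5000

counts = np.zeros(n_arms)
values = np.zeros(n_arms)                   # running mean reward per variant

for _ in range(rounds):
    if rng.random() < epsilon:
        arm = int(rng.integers(n_arms))     # explore a random variant
    else:
        arm = int(np.argmax(values))        # exploit the current best variant
    reward = rng.normal(true_mean_play_time[arm], 2.0)
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

print("plays per variant:", counts, "estimated play time:", values.round(2))
```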

Keywords: game design, multi-arm bandit, design exploration and data mining, player metric optimization and analytics

Procedia PDF Downloads 487
1361 The Processing of Context-Dependent and Context-Independent Scalar Implicatures

Authors: Liu Jia’nan

Abstract:

The default accounts hold the view that there exists a kind of scalar implicature which can be processed without context and which enjoys a psychological privilege over other scalar implicatures that depend on context. In contrast, Relevance Theorists regard context as a must, because all scalar implicatures have to meet the need of relevance in discourse. However, Katsos’ experimental results showed that although, quantitatively, adults rejected under-informative utterances with lexical scales (context-independent) and with ad hoc scales (context-dependent) at almost the same rate, they still regarded the violation of utterances with lexical scales as much more severe than that of utterances with ad hoc scales. Neither the default accounts nor Relevance Theory can fully explain this result. Thus, there are two questionable points about this result: (1) Is it possible that the strange discrepancy is due to other factors instead of the generation of scalar implicature? (2) Are the ad hoc scales truly formed under the possible influence of mental context? Do the participants generate scalar implicatures with ad hoc scales, instead of just comparing semantic differences among target objects in the under-informative utterance? In our Experiment 1, question (1) will be answered by a repetition of Katsos’ Experiment 1. Test materials will be shown as pictures in PowerPoint, and each procedure will be done under the guidance of a tester in a quiet room. Our Experiment 2 is intended to answer question (2). The picture materials will be transformed into written words in DMDX, and the target sentence will be shown word by word to participants in the soundproof room in our lab. The reading time of the target parts, i.e. the words containing scalar implicatures, will be recorded. We presume that in the group with the lexical scale, a standardized pragmatic mental context would help generate the scalar implicature once the scalar word occurs, which will make the participants expect the upcoming words to be informative. Thus, if the new input after the scalar word is under-informative, more time will be needed for the extra semantic processing. However, in the group with the ad hoc scale, the scalar implicature may hardly be generated without the support of a fixed mental context of the scale. Thus, whether the new input is informative or not does not matter at all, and the reading time of the target parts will be the same in informative and under-informative utterances. The human mind may be a dynamic system in which many factors co-occur. If Katsos’ experimental result is reliable, will it shed light on the interplay of default accounts and context factors in scalar implicature processing? We might be able to assume, based on our experiments, that one single dominant processing paradigm may not be plausible. Furthermore, in the processing of scalar implicature, the semantic interpretation and the pragmatic interpretation may be made in a dynamic interplay in the mind. As to the lexical scale, the pragmatic reading may prevail over the semantic reading because of its greater exposure in daily language use, which may also lead the possible default or standardized paradigm to override the role of context. However, the objects in an ad hoc scale are not usually treated as scalar members in mental context, and thus the lexical-semantic association of the objects may prevent their pragmatic reading from generating the scalar implicature. Only when sufficient contextual factors are highlighted can the pragmatic reading gain privilege and generate the scalar implicature.

Keywords: scalar implicature, ad hoc scale, dynamic interplay, default account, Mandarin Chinese processing

Procedia PDF Downloads 293
1360 Reconstructability Analysis for Landslide Prediction

Authors: David Percy

Abstract:

Landslides are a geologic phenomenon that affects a large number of inhabited places and are constantly being monitored and studied for the prediction of future occurrences. Reconstructability analysis (RA) is a methodology for extracting informative models from large volumes of data; it works exclusively with discrete data. While RA has been used extensively in medical applications and social science, we are introducing it to the spatial sciences through applications like landslide prediction. Since RA works exclusively with discrete data, such as soil classification or bedrock type, working with continuous data, such as porosity, requires that these data be binned for inclusion in the model. RA constructs models of the data which pick out the most informative elements, independent variables (IVs), from each layer that predict the dependent variable (DV), landslide occurrence. Each layer included in the model retains its classification data as a primary encoding of the data. Unlike other machine learning algorithms that force the data into one-hot encoding schemes, RA works directly with the data as it is encoded, with the exception of continuous data, which must be binned. The usual physical and derived layers are included in the model, and testing our results against other published methodologies, such as neural networks, yields similar accuracy but with the advantage of a completely transparent model. The results of an RA session with a data set are a report on every combination of variables and their probability of landslide events occurring. In this way, every informative combination of variable states can be examined.
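
The data-preparation and variable-screening idea can be sketched as below: bin a continuous layer, then score discrete layers by how informative they are about landslide occurrence. Mutual information is used here only as a simple stand-in for RA's full model search, and the layers and data are illustrative.

```python
# Sketch: binning a continuous layer and scoring layers against the DV.
import numpy as np
import pandas as pd
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(7)
n = 500
df = pd.DataFrame({
    "bedrock": rng.integers(0, 4, n),          # already categorical
    "porosity": rng.uniform(0.05, 0.45, n),    # continuous -> must be binned
})
df["landslide"] = ((df["bedrock"] == 2) & (df["porosity"] > 0.3)).astype(int)

# Bin the continuous layer into discrete classes, as RA requires.
df["porosity_bin"] = pd.cut(df["porosity"], bins=4, labels=False)

for iv in ["bedrock", "porosity_bin"]:
    print(iv, round(mutual_info_score(df[iv], df["landslide"]), 4))
```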

Keywords: reconstructability analysis, machine learning, landslides, raster analysis

Procedia PDF Downloads 35
1359 Handover for Dense Small Cells Heterogeneous Networks: A Power-Efficient Game Theoretical Approach

Authors: Mohanad Alhabo, Li Zhang, Naveed Nawaz

Abstract:

In this paper, a non-cooperative game method is formulated in which all players compete to transmit at higher power. Every base station represents a player in the game. The game is solved by obtaining the Nash equilibrium (NE), where the game converges to optimality. The proposed method, named the Power Efficient Handover Game Theoretic (PEHO-GT) approach, aims to control the handover in dense small cell networks. Players optimize their payoff by adjusting the transmission power to improve performance in terms of throughput, handover, power consumption and load balancing. To select the desired transmission power for a player, the payoff function considers the gain of increasing the transmission power. Then, cell selection takes place by deploying the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS). A game-theoretical method is implemented for heterogeneous networks to validate the improvement obtained. Results reveal that the proposed method gives a throughput improvement while reducing power consumption and minimizing frequent handovers.
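
The TOPSIS-based cell-selection step can be sketched as follows; the decision matrix, criteria and weights are illustrative assumptions rather than values from the paper.

```python
# Minimal TOPSIS sketch for cell selection among candidate base stations.
import numpy as np

# rows = candidate cells, columns = [SINR (dB), available capacity, required power]
X = np.array([[18.0, 0.6, 23.0],
              [12.0, 0.9, 18.0],
              [20.0, 0.3, 30.0]])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, True, False])      # power is a cost criterion

R = X / np.linalg.norm(X, axis=0)            # vector-normalise each criterion
V = R * weights
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)
print("closeness:", closeness.round(3), "-> select cell", int(np.argmax(closeness)))
```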

Keywords: energy efficiency, game theory, handover, HetNets, small cells

Procedia PDF Downloads 96
1358 Code Refactoring Using Slice-Based Cohesion Metrics and AOP

Authors: Jagannath Singh, Durga Prasad Mohapatra

Abstract:

Software refactoring is essential for maintaining software quality. It is a usual practice that we first design the software and then go for coding, but after coding is completed, if the requirements change slightly or the expected output is not achieved, we change the code. For each small code change, we cannot change the design, and over time, due to these small changes made to the code, the software design decays. Software refactoring is used to restructure the code in order to improve the design and quality of the software. In this paper, we propose an approach for performing code refactoring. We use slice-based cohesion metrics to identify the target methods which require refactoring. After identifying the target methods, we use program slicing to divide each target method into two parts. Finally, we use the concepts of Aspects to adjust the code structure so that the external behaviour of the original module does not change.
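
As a rough sketch of how slice-based cohesion can be computed once slices are available (the slicing itself is out of scope here, the slices below are for a hypothetical ten-statement method, and the tightness/coverage definitions follow the usual slice-based cohesion literature rather than necessarily the exact metrics in the paper):

```python
# Sketch: slice-based cohesion metrics for one method, given precomputed slices
# (sets of statement indices, one slice per output variable).
def tightness(slices, method_length):
    common = set.intersection(*slices)          # statements that appear in every slice
    return len(common) / method_length

def coverage(slices, method_length):
    return sum(len(s) for s in slices) / (len(slices) * method_length)

# Hypothetical method of 10 statements with two output-variable slices.
slices = [{1, 2, 3, 4, 7, 9}, {1, 2, 5, 6, 9}]
length = 10

t, c = tightness(slices, length), coverage(slices, length)
print(f"tightness={t:.2f} coverage={c:.2f}")
# Low values would flag the method as a refactoring target to be split by slices.
```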

Keywords: software refactoring, program slicing, AOP, cohesion metrics, code restructure, AspectJ

Procedia PDF Downloads 480