Search results for: Location Selection
1292 How Social Network Structure Affects the Dynamics of Evolution of Cooperation?
Authors: Mohammad Akbarpour, Reza Nasiri Mahalati, Caro Lucas
Abstract:
The existence of many biological systems, especially human societies, is based on cooperative behavior [1, 2]. If natural selection favors selfish individuals, what mechanism is at work that lets us observe so many cooperative behaviors? One answer is the effect of network structure. On a graph, cooperators can evolve by forming network clusters [2, 3, 4]. In one study, Ohtsuki et al. used the idea of the iterated prisoner's dilemma on a graph to model an evolutionary game. They showed that the average number of neighbors plays an important role in determining whether or not cooperation is the evolutionarily stable strategy (ESS) of the system [3]. In this paper, we study the dynamics of the evolution of cooperation in a social network. We show that during evolution, the ratio of cooperators among individuals with fewer neighbors to cooperators among other individuals is greater than unity. The extent to which the fitness function depends on the payoff of the game determines this ratio.
Keywords: Evolution of cooperation, Iterated prisoner's dilemma, Model dynamics, Social network structure, Intensity of selection.
1291 The Effect of Program Type on Mutation Testing: Comparative Study
Authors: B. Falah, N. E. Abakouy
Abstract:
Due to its high computational cost, mutation testing has been neglected by researchers. Recently, many cost and mutant reduction techniques have been developed, improved, and experimented with, but few of them have considered the possibility of reducing the cost of mutation testing based on the program type of the application under test. This paper is a comparative study of four operator selection techniques (mutant sampling, class-level operators, method-level operators, and all-operators selection) based on the program code type of each application under test. It aims at finding an alternative approach to reveal the effect of code type on the mutation testing score. The result of our experiment shows that the program code type can affect the mutation score and that programs using polymorphism are best suited to be tested with mutation testing.
Keywords: Equivalent mutant, killed mutant, mutation score, mutation testing, program code type.
1290 Using Machine Learning Techniques for Autism Spectrum Disorder Analysis and Detection in Children
Authors: Norah Alshahrani, Abdulaziz Almaleh
Abstract:
Autism Spectrum Disorder (ASD) is a condition related to issues with brain development that affects how a person perceives and communicates with others, resulting in difficulties with social interaction and communication, and its prevalence is constantly growing. Early recognition of ASD allows children to lead safe and healthy lives and helps doctors with accurate diagnosis and management of the condition. Therefore, it is crucial to develop a method that will achieve good results with high accuracy for the measurement of ASD in children. In this paper, ASD datasets of toddlers and children have been analyzed. We employed the following machine learning techniques to explore ASD: Random Forest (RF), Decision Tree (DT), Naïve Bayes (NB) and Support Vector Machine (SVM). Then feature selection was used to provide fewer attributes from the ASD datasets while preserving model performance. As a result, we found that the best result was provided by SVM, achieving an accuracy of 0.98 on the toddler dataset and 0.99 on the children dataset.
Keywords: Autism Spectrum Disorder, ASD, Machine Learning, ML, Feature Selection, Support Vector Machine, SVM.
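As a rough illustration of the pipeline described in the abstract above (feature selection followed by an SVM classifier), the following minimal sketch uses scikit-learn on synthetic data; the dataset, the number of selected features, and the kernel choice are placeholders, not the authors' actual setup.

```python
# Minimal sketch: feature selection + SVM classification, on synthetic data
# standing in for the toddler/children ASD datasets (hypothetical parameters).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=10)),  # keep fewer attributes, as in the paper
    ("svm", SVC(kernel="rbf")),
])
model.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```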
1289 Design of a CMOS Highly Linear Front-end IC with Auto Gain Controller for a Magnetic Field Transceiver
Authors: Yeon-kug Moon, Kang-Yoon Lee, Yun-Jae Won, Seung-Ok Lim
Abstract:
This paper describes a low-voltage and low-power channel selection analog front end with continuous-time low-pass filters and a highly linear programmable gain amplifier (PGA). The filters were realized as balanced Gm-C biquadratic filters to achieve low current consumption. High linearity and a constant wide bandwidth are achieved by using a new transconductance (Gm) cell. The PGA has a voltage gain varying from 0 to 65 dB while maintaining a constant bandwidth. A filter tuning circuit that requires an accurate time base but no external components is presented. With a 1-Vrms differential input and output, the filter achieves -85 dB THD and a 78 dB signal-to-noise ratio. Both the filter and the PGA were implemented in a 0.18 µm 1P6M n-well CMOS process. They consume 3.2 mW from a 1.8 V power supply and occupy an area of 0.19 mm².
Keywords: Channel selection filters, DC offset, programmable gain amplifier, tuning circuit.
1288 A Model of a Heat Radiation on a Mould Surface in the Car Industry
Abstract:
This article is focused on the calculation of heat radiation intensity and its optimization on an aluminum mould surface. The inside of the mould is sprinkled with a special powder and its outside is heated by infra heaters located above the mould surface, up to a temperature of 250°C. In this way, artificial leathers for the car industry are produced (e.g. the artificial leather on a car dashboard). A mathematical model of the heat radiation of infra heaters on a mould surface is described in this paper. This model allows us to calculate the heat radiation intensity on the mould surface for a concrete location of the infra heaters above the mould surface. It is necessary to ensure approximately the same heat radiation intensity on the mould surface by finding a suitable location for the infra heaters, and in this way the same material structure and color of the artificial leather. In the model we have used a genetic algorithm to optimize the radiation intensity on the mould surface. Experimentally measured values of the heat radiation intensity, obtained by a sensor in the surroundings of an infra heater, are used in the calculation procedures. The computational procedure was programmed in Matlab.
Keywords: Genetic algorithm, mathematical model of heat radiation, optimization of radiation intensity, software implementation.
1287 Technology Identification, Evaluation and Selection Methodology for Industrial Process Water and Waste Water Treatment Plant of 3x150 MWe Tufanbeyli Lignite-Fired Power Plant
Authors: Cigdem Safak Saglam
Abstract:
Most thermal power plants use steam as the working fluid in their power cycle. Therefore, in addition to fuel, water is the other main input for thermal plants. Water and steam must be highly pure in order to protect the systems from corrosion, scaling and biofouling. Pure process water is produced in water treatment plants using several treatment methods. The treatment plant design is selected depending on the raw water source and the required water quality. Although the working principle of fossil-fuel-fired thermal power plants is the same, there is no standard design and equipment arrangement valid for all thermal power plant utility systems. Besides that, there are many other technology evaluation and selection criteria for designing the optimal water systems meeting the requirements, such as local conditions, environmental restrictions, availability and transport of electricity and other consumables, process water sources and scarcity, land use constraints, etc. The aim of this study is to explain the adopted methodology for technology selection for the process water preparation and industrial waste water treatment plant in a thermal power plant project located in Tufanbeyli, Adana Province, Turkey. The thermal power plant is fired with indigenous lignite coal extracted from adjacent lignite reserves. This paper addresses all the above-mentioned factors affecting the design of the thermal power plant water treatment facilities (demineralization + waste water treatment) and describes the ultimate design of the Tufanbeyli Thermal Power Plant Water Treatment Plant.
Keywords: Thermal power plant, lignite coal, pre-treatment, demineralization, electrodialysis, recycling, waste water, process water.
1286 Input Variable Selection for RBFN-based Electric Utility's CO2 Emissions Forecasting
Authors: I. Falconett, K. Nagasaka
Abstract:
This study investigates the performance of radial basis function networks (RBFN) in forecasting the monthly CO2 emissions of an electric power utility. We also propose a method for input variable selection. This method is based on identifying the general relationships between groups of input candidates and the output. The effect that each input has on the forecasting error is examined by removing all inputs except the variable to be investigated from its group, recalculating the network's parameters and performing the forecast. Finally, the new forecasting error is compared with that of the reference model. Eight input variables were identified as the most relevant, which is significantly fewer than the 30 input variables of our reference model. The simulation results demonstrate that the model with the 8 inputs selected using the method introduced in this study performs as accurately as the reference model, while also being the most parsimonious.
Keywords: Correlation analysis, CO2 emissions forecasting, electric power utility, radial basis function networks.
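A simplified sketch of the kind of comparison the abstract above describes is shown below: a small RBF network (k-means centers, least-squares output weights) is fitted with only one candidate input at a time and its forecasting error is compared against a reference model using all inputs. The data, the variable grouping (here reduced to single variables), and the network size are illustrative assumptions, not the utility's CO2 data or the authors' exact procedure.

```python
# Minimal RBF-network sketch used to compare forecasting error for input subsets.
import numpy as np
from sklearn.cluster import KMeans

def rbfn_fit_predict(X_tr, y_tr, X_te, n_centers=10, width=1.0):
    centers = KMeans(n_clusters=n_centers, n_init=10, random_state=0).fit(X_tr).cluster_centers_
    def design(X):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2 * width ** 2))          # Gaussian basis functions
    w, *_ = np.linalg.lstsq(design(X_tr), y_tr, rcond=None)
    return design(X_te) @ w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                           # 8 candidate input variables
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)
X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]

ref_err = np.mean((rbfn_fit_predict(X_tr, y_tr, X_te) - y_te) ** 2)
for i in range(X.shape[1]):                             # effect of keeping only variable i
    err = np.mean((rbfn_fit_predict(X_tr[:, [i]], y_tr, X_te[:, [i]]) - y_te) ** 2)
    print(f"input {i}: MSE {err:.3f} (reference {ref_err:.3f})")
```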
1285 The Study of the Mutual Effect of Genotype in Environment by Percent of Oil Criterion in Sunflower
Authors: Seyed Mohammad Nasir Mousavi, Pasha Hejazi, Maryam Ebrahimian Dehkordi
Abstract:
In order to study the genotype × environment interaction for the oil percentage index in sunflower genotypes, an experiment was carried out as a randomized complete block design with four replications at four different research stations: Esfahan, Birjand, Sari, and Karaj. Combined analysis of variance showed that there is significant diversity among the genotypes under investigation. Regarding the coefficient of variation, the genotypes Azargol and Vidoc had the lowest coefficients of variation, respectively. According to the results of Shukla's stability variance, the genotypes Brocar, Allison and Fabiola are among the stable genotypes for oil percentage, respectively. In the GGE biplot, the locations under investigation were divided into two super-environments: the first comprised Esfahan, Karaj, and Birjand, and the second comprised Sari. From this point of view, Fabiola was among the best genotypes in the first super-environment and Almanzor in the second.
Keywords: Sunflower, Stability, GGE biplot, Super-Environment.
1284 A Fuzzy Swarm Optimized Approach for Piece Selection in Bit Torrent Like Peer to Peer Network
Authors: M. Padmavathi, R. M. Suresh
Abstract:
Every machine plays the roles of client and server simultaneously in a peer-to-peer (P2P) network. Though a P2P network has many advantages over traditional client-server models regarding efficiency and fault-tolerance, it also faces additional security threats. Users and IT administrators should be aware of risks from malicious code propagation, the legality of downloaded content, and vulnerabilities in P2P software. Security and preventative measures are a must to protect networks from potential sensitive information leakage and security breaches. Bit Torrent is a popular and scalable P2P file distribution mechanism which successfully distributes large files quickly and efficiently without problems for the origin server. Bit Torrent achieved excellent upload utilization according to measurement studies, but it also raised many questions regarding utilization in settings other than those measured, fairness, and the choice of Bit Torrent's mechanisms. This work proposes a piece (block) selection technique using fuzzy ACO, with optimal rules selected using ACO.
Keywords: Ant Colony Optimization (ACO), Bit Torrent, Download time, Peer-to-Peer (P2P) network, Performance.
1283 Customer Churn Prediction Using Four Machine Learning Algorithms Integrating Feature Selection and Normalization in the Telecom Sector
Authors: Alanoud Moraya Aldalan, Abdulaziz Almaleh
Abstract:
A crucial part of maintaining a customer-oriented business in the telecommunications industry is understanding the reasons and factors that lead to customer churn. Competition between telecom companies has greatly increased in recent years, which has made it more important to understand customers' needs in this strong market. Understanding the needs of customers who are looking to change their service providers is especially important. Churn prediction is now a mandatory requirement for retaining customers in the telecommunications industry, and machine learning can be used to accomplish this. Churn prediction has become a very important topic of machine learning classification in the telecommunications industry. Understanding the factors of customer churn and how customers behave is very important to building an effective churn prediction model. This paper aims to predict churn and identify factors of customers' churn based on their past service usage history. Aiming at this objective, the study makes use of feature selection, normalization, and feature engineering. Then, this study compared the performance of four different machine learning algorithms on the Orange dataset: Logistic Regression, Random Forest, Decision Tree, and Gradient Boosting. Evaluation of the performance was conducted by using the F1 score and ROC-AUC. Comparing the results of this study with existing models has proven to produce better results. The results showed that Gradient Boosting with the feature selection technique outperformed the others in this study, achieving a 99% F1-score and 99% AUC, and all other experiments achieved good results as well.
Keywords: Machine Learning, Gradient Boosting, Logistic Regression, Churn, Random Forest, Decision Tree, ROC, AUC, F1-score.
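A minimal sketch of the comparison described above (normalization, feature selection, four classifiers, F1 and ROC-AUC evaluation) is given below using scikit-learn; the synthetic imbalanced data, the number of selected features, and the default hyperparameters are placeholders, not the Orange churn dataset or the study's exact configuration.

```python
# Minimal sketch: four classifiers with normalization + feature selection,
# evaluated by F1 score and ROC-AUC on synthetic, churn-like imbalanced data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=30, weights=[0.85], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

classifiers = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(random_state=1),
    "DecisionTree": DecisionTreeClassifier(random_state=1),
    "GradientBoosting": GradientBoostingClassifier(random_state=1),
}
for name, clf in classifiers.items():
    pipe = Pipeline([("norm", MinMaxScaler()),
                     ("select", SelectKBest(mutual_info_classif, k=15)),
                     ("clf", clf)])
    pipe.fit(X_tr, y_tr)
    proba = pipe.predict_proba(X_te)[:, 1]
    print(name, "F1:", round(f1_score(y_te, pipe.predict(X_te)), 3),
          "AUC:", round(roc_auc_score(y_te, proba), 3))
```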
1282 GSM Position Tracking using a Kalman Filter
Authors: Jean-Pierre Dubois, Jihad S. Daba, M. Nader, C. El Ferkh
Abstract:
GSM has undoubtedly become the most widespread cellular technology and has established itself as one of the most promising technologies in wireless communication. The next generation of mobile telephones has also become more powerful and innovative, so that new services related to the user's location will arise. Other than the 911 requirements for emergency location initiated by the Federal Communications Commission (FCC) of the United States, GSM positioning can be highly integrated in cellular communication technology for commercial use. However, GSM positioning is facing many challenges. Issues like accuracy, availability, reliability and suitable cost render the development and implementation of GSM positioning a challenging task. In this paper, we investigate the optimal mobile position tracking means. We employ an innovative scheme by integrating the Kalman filter in the localization process, especially as it has great tracking characteristics. When tracking in two dimensions, the Kalman filter is very powerful due to its reliable performance, as it supports estimation of past, present, and future states, even when performing in unknown environments. We show that enhanced position tracking results are achieved when implementing the Kalman filter for GSM tracking.
Keywords: Cellular communication, estimation, GSM, Kalman filter, positioning.
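Below is a minimal sketch of two-dimensional position tracking with a constant-velocity Kalman filter, the kind of estimator the abstract above refers to. The simulated straight-line trajectory and the noise covariances are illustrative assumptions, not GSM field measurements.

```python
# Minimal 2-D constant-velocity Kalman filter: predict/update over noisy position fixes.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],      # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],       # only position is measured
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)              # process noise
R = 25.0 * np.eye(2)              # measurement noise (coarse fixes, assumed)

x = np.zeros(4)                   # state estimate
P = 100.0 * np.eye(4)             # estimate covariance

rng = np.random.default_rng(0)
true_pos = np.cumsum(np.ones((50, 2)), axis=0)               # straight-line motion
measurements = true_pos + rng.normal(scale=5.0, size=(50, 2))

for z in measurements:
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P

print("final position estimate:", x[:2], "true:", true_pos[-1])
```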
1281 Zero Inflated Models for Overdispersed Count Data
Authors: Y. N. Phang, E. F. Loh
Abstract:
Zero inflated models are usually used in modeling count data with excess zeros, where the excess zeros could be structural zeros or zeros which occur by chance. These types of data are commonly found in various disciplines such as finance, insurance, biomedicine, econometrics, ecology, and health sciences such as dental epidemiology. The most popular zero inflated models used by many researchers are the zero inflated Poisson and zero inflated negative binomial models. In addition, zero inflated generalized Poisson and zero inflated double Poisson models are also discussed and found in some literature. Recently, the zero inflated inverse trinomial model and the zero inflated strict arcsine model have been advocated and proven to serve as alternative models in modeling overdispersed count data caused by excessive zeros and unobserved heterogeneity. The purpose of this paper is to review some related literature and provide a variety of examples from different disciplines in the application of zero inflated models. Different model selection methods used in model comparison are discussed.
Keywords: Overdispersed count data, model selection methods, likelihood ratio, AIC, BIC.
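As a small worked illustration of the simplest member of this family, the sketch below fits an intercept-only zero-inflated Poisson by maximum likelihood and compares it to a plain Poisson via AIC, one of the selection criteria listed in the keywords. The synthetic counts and the intercept-only form are assumptions for brevity; real applications would include covariates.

```python
# Minimal zero-inflated Poisson (ZIP) fit by maximum likelihood, compared to Poisson by AIC.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

rng = np.random.default_rng(0)
n = 1000
structural_zero = rng.random(n) < 0.3               # 30% excess (structural) zeros
y = np.where(structural_zero, 0, rng.poisson(2.5, size=n))

def zip_negloglik(params, y):
    logit_pi, log_lam = params
    pi, lam = expit(logit_pi), np.exp(log_lam)
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))              # P(Y = 0)
    ll_pos = np.log(1 - pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

def poisson_negloglik(log_lam, y):
    lam = np.exp(log_lam[0])
    return -np.sum(-lam + y * np.log(lam) - gammaln(y + 1))

zip_fit = minimize(zip_negloglik, x0=[0.0, 0.0], args=(y,))
poi_fit = minimize(poisson_negloglik, x0=[0.0], args=(y,))
aic_zip = 2 * 2 + 2 * zip_fit.fun                   # 2 parameters
aic_poi = 2 * 1 + 2 * poi_fit.fun                   # 1 parameter
print("AIC Poisson:", round(aic_poi, 1), " AIC ZIP:", round(aic_zip, 1))
```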
1280 A Genetic Algorithm with Priority Selection for the Traveling Salesman Problem
Authors: Cha-Hwa Lin, Je-Wei Hu
Abstract:
A conventional GA combined with a local search algorithm, such as 2-OPT, forms a hybrid genetic algorithm (HGA) for the traveling salesman problem (TSP). However, geometric properties, which are problem-specific knowledge, can be used to improve the search process of the HGA. Some tour segments (edges) of TSPs are fine, while some may be too long to appear in a short tour. This knowledge can constrain the GA to work with fine tour segments while considering long tour segments less often. Consequently, a new algorithm is proposed, called the intelligent-OPT hybrid genetic algorithm (IOHGA), to improve the GA and the 2-OPT algorithm in order to reduce the search time for the optimal solution. Based on the geometric properties, all tour segments are assigned 2-level priorities to distinguish between good and bad genes. A simulation study was conducted to evaluate the performance of the IOHGA. The experimental results indicate that in general the IOHGA can obtain near-optimal solutions in less time and with better accuracy than the hybrid genetic algorithm with simulated annealing (HGA(SA)).
Keywords: Traveling salesman problem, hybrid genetic algorithm, priority selection, 2-OPT.
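The sketch below illustrates the core idea of prioritizing "bad genes": a 2-OPT local search that tries moves on the longest tour edges first. Treating edge length as the priority rule is an illustrative interpretation of the 2-level priorities described above, not the authors' exact operator, and the random cities are placeholder data.

```python
# Minimal 2-OPT local search that prioritizes the longest ("bad") edges of the tour.
import numpy as np

rng = np.random.default_rng(0)
pts = rng.random((40, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)

def tour_length(tour):
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_prioritized(tour):
    n = len(tour)
    improved = True
    while improved:
        improved = False
        # consider the longest edges first as the edge to be removed
        order = sorted(range(n - 1), key=lambda i: -dist[tour[i], tour[i + 1]])
        for i in order[: n // 2]:
            for j in range(i + 2, n):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                if a == d:                                    # edges share a city; skip
                    continue
                if dist[a, c] + dist[b, d] < dist[a, b] + dist[c, d] - 1e-12:
                    tour[i + 1:j + 1] = tour[i + 1:j + 1][::-1]   # apply the 2-OPT move
                    improved = True
    return tour

tour = list(range(40))
print("before:", round(tour_length(tour), 3))
print("after :", round(tour_length(two_opt_prioritized(tour)), 3))
```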
1279 A Novel Prediction Method for Tag SNP Selection using Genetic Algorithm based on KNN
Authors: Li-Yeh Chuang, Yu-Jen Hou, Jr., Cheng-Hong Yang
Abstract:
Single nucleotide polymorphisms (SNPs) hold much promise as a basis for disease-gene association. However, research is limited by the cost of genotyping the tremendous number of SNPs. Therefore, it is important to identify a small subset of informative SNPs, the so-called tag SNPs. This subset consists of selected SNPs of the genotypes and accurately represents the rest of the SNPs. Furthermore, an effective evaluation method is needed to evaluate the prediction accuracy of a set of tag SNPs. In this paper, a genetic algorithm (GA) is applied to tag SNP problems, and the K-nearest neighbor (K-NN) serves as a prediction method for tag SNP selection. The experimental data used were taken from the HapMap project; they consist of genotype data rather than haplotype data. The proposed method consistently identified tag SNPs with considerably better prediction accuracy than methods from the literature. At the same time, the number of tag SNPs identified was smaller than with the other methods. The run time of the proposed method was much shorter than the run time of the SVM/STSA method when the same accuracy was reached.
Keywords: Genetic Algorithm (GA), Genotype, Single nucleotide polymorphism (SNP), tag SNPs.
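A minimal sketch of the evaluation step such a GA relies on is given below: for a candidate set of tag SNPs, every remaining SNP is predicted with K-NN from the tags and the mean cross-validated prediction accuracy serves as the fitness. The random genotype matrix, the candidate tag set, and k are placeholders, not HapMap data or the authors' settings.

```python
# Minimal sketch: K-NN based prediction accuracy of a candidate tag-SNP set.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
genotypes = rng.integers(0, 3, size=(120, 30))      # 120 individuals x 30 SNPs (0/1/2)

def tag_set_accuracy(genotypes, tag_idx, k=5):
    rest = [j for j in range(genotypes.shape[1]) if j not in tag_idx]
    X = genotypes[:, tag_idx]
    scores = []
    for j in rest:                                  # predict each non-tag SNP from the tags
        knn = KNeighborsClassifier(n_neighbors=k)
        scores.append(cross_val_score(knn, X, genotypes[:, j], cv=5).mean())
    return float(np.mean(scores))

candidate_tags = [0, 5, 9, 14, 22]                  # one candidate subset a GA might propose
print("mean prediction accuracy:", round(tag_set_accuracy(genotypes, candidate_tags), 3))
```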
1278 Multiple-Points Fault Signature's Dynamics Modeling for Bearing Defect Frequencies
Authors: Muhammad F. Yaqub, Iqbal Gondal, Joarder Kamruzzaman
Abstract:
Occurrence of a multiple-points fault in machine operations can result in complex fault signatures, which can lower fault diagnosis accuracy. In this study, a multiple-points defect model (MPDM) is proposed which can simulate the fault signature's dynamics for n-points bearing faults. Furthermore, this study identifies that, in the case of a multiple-points fault in a rotary machine, the location of the dominant component of the defect frequency shifts depending upon the relative location of the fault points, which can mislead the fault diagnostic model into inaccurate detections. Analytical and experimental results are presented to characterize and validate the variation in the dominant component of the defect frequency. Based on envelope detection analysis, a modification is recommended to the existing fault diagnostic models to consider the multiples of the defect frequency, rather than only the frequency spectrum at the defect frequency, in order to incorporate the impact of multiple-points faults.
Keywords: Envelope detection, machine defect frequency, multiple faults, machine health monitoring.
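For readers unfamiliar with the envelope detection analysis mentioned above, the sketch below extracts a signal envelope with the Hilbert transform and inspects its spectrum at the defect frequency and its multiples. The simulated impact train, resonance, and frequency values are illustrative assumptions, not the paper's measured data.

```python
# Minimal envelope detection sketch: Hilbert envelope + spectrum at defect-frequency multiples.
import numpy as np
from scipy.signal import hilbert

fs = 10_000                                    # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
f_defect, f_resonance = 87.0, 3_000.0          # assumed defect and resonance frequencies

period = int(fs / f_defect)                    # samples between impacts
impacts = np.zeros_like(t)
impacts[::period] = 1.0
ringing = np.exp(-t[:200] * 800) * np.sin(2 * np.pi * f_resonance * t[:200])
signal = np.convolve(impacts, ringing, mode="same")
signal += 0.05 * np.random.default_rng(0).normal(size=t.size)

envelope = np.abs(hilbert(signal))             # demodulated envelope
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)

for m in (1, 2, 3):                            # defect frequency and its multiples
    idx = np.argmin(np.abs(freqs - m * f_defect))
    print(f"amplitude near {m}x defect frequency: {spectrum[idx]:.2f}")
```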
1277 Network Coding-based ARQ scheme with Overlapping Selection for Resource Limited Multicast/Broadcast Services
Authors: Jung-Hyun Kim, Jihyung Kim, Kwangjae Lim, Dong Seung Kwon
Abstract:
Network coding has recently attracted attention as an efficient technique in multicast/broadcast services. The problem of finding the optimal network coding mechanism that maximizes the bandwidth efficiency is hard to solve and hard to approximate. Many network coding-based schemes have been suggested in the literature to improve the bandwidth efficiency, especially network coding-based automatic repeat request (NCARQ) schemes. However, existing schemes have several limitations which cause performance degradation in resource-limited systems. To improve the performance in resource-limited systems, we propose the NCARQ with overlapping selection (OS-NCARQ) scheme. The advantages of the OS-NCARQ scheme over the traditional ARQ scheme and existing NCARQ schemes are shown through analysis and simulations.
Keywords: ARQ, Network coding, Multicast/Broadcast services, Packet-based systems.
1276 Limitations of the Analytic Hierarchy Process Technique with Respect to Geographically Distributed Stakeholders
Authors: Azeem Ahmad, Magnus Goransson, Aamir Shahzad
Abstract:
The selection of appropriate requirements for product releases can make a big difference in a product's success. The selection of requirements is done using different requirements prioritization techniques. These techniques are based on pre-defined and systematic steps to calculate the requirements' relative weights. Prioritization is complicated by new development settings, shifting from traditional co-located development to geographically distributed development. Stakeholders connected to a project are distributed all over the world. This geographical distribution of stakeholders makes it hard to prioritize requirements, as each stakeholder has their own perception and expectations of the requirements in a software project. This paper discusses limitations of the Analytic Hierarchy Process (AHP) with respect to geographically distributed stakeholders' (GDS) prioritization of requirements. This paper also provides a solution, in the form of a modified AHP, in order to prioritize requirements for GDS. We conduct two experiments and analyze the results in order to discuss the limitations of AHP with respect to GDS. The modified AHP variant is also validated in this paper.
Keywords: Requirements Prioritization, Geographically Distributed Stakeholders, AHP, Modified AHP.
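For context, the sketch below shows the standard AHP computation such techniques build on: deriving requirement weights from a pairwise comparison matrix via the principal eigenvector and checking the consistency ratio. The 4x4 comparison matrix is a made-up example, not data from the paper's experiments.

```python
# Minimal AHP sketch: priority weights from a pairwise comparison matrix + consistency ratio.
import numpy as np

A = np.array([[1,   3,   5,   7],
              [1/3, 1,   3,   5],
              [1/5, 1/3, 1,   3],
              [1/7, 1/5, 1/3, 1]], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                        # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                           # normalized priority vector

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)               # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]                # Saaty's random index
print("weights:", np.round(weights, 3), " CR:", round(ci / ri, 3))
```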
1275 Genetic Content-Based MP3 Audio Watermarking in MDCT Domain
Authors: N. Moghadam, H. Sadeghi
Abstract:
In this paper a novel scheme for watermarking digital audio during its compression to MPEG-1 Layer III format is proposed. For this purpose we slightly modify some of the selected MDCT coefficients, which are used during the MPEG audio compression procedure. Due to the possibility of modifying different MDCT coefficients, there are different choices for embedding the watermark into the audio data, considering robustness and transparency factors. Our proposed method uses a genetic algorithm to select the best coefficients in which to embed the watermark. This genetic selection is done according to parameters that are extracted from the perceptual content of the audio, to optimize the robustness and transparency of the watermark. On the other hand, the watermark security is increased due to the random nature of the genetic selection. The information about the selected MDCT coefficients that carry the watermark bits is saved in a database for future extraction of the watermark. The proposed method is suitable for online MP3 stores to pursue illegal copies of musical artworks. Experimental results show that the detection ratio of the watermarks at a bitrate of 128 kbps remains above 90% while the inaudibility of the watermark is preserved.
Keywords: Content-Based Audio Watermarking, Genetic Audio Watermarking.
1274 Selection of Intensity Measure in Probabilistic Seismic Risk Assessment of a Turkish Railway Bridge
Authors: M. F. Yilmaz, B. Ö. Çağlayan
Abstract:
The fragility curve is an effective and commonly used tool to determine the earthquake performance of structural and non-structural components; it is also used to determine the nonlinear behavior of bridges. There are many historical bridges in the Turkish railway network, and the earthquake performances of these bridges need to be investigated. To derive a fragility curve, intensity measures (IMs) and engineering demand parameters (EDPs) need to be determined, and the relation between IMs and EDPs needs to be derived. In this study, a typical simply supported steel girder riveted railway bridge is studied. Fragility curves of this bridge are derived using a two-parameter lognormal distribution. Time history analyses are carried out for 60 selected real earthquake records to determine the relation between IMs and EDPs. Moreover, the efficiency, practicality, and sufficiency of three different IMs are discussed. PGA, Sa(0.2s) and Sa(1s), the most commonly used IM parameters for fragility curves in the literature, are taken into consideration in terms of efficiency, practicality and sufficiency.
Keywords: Railway bridges, earthquake performance, fragility analyses, selection of intensity measures.
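The sketch below shows the basic fit behind a two-parameter lognormal fragility curve: given IM values (e.g. PGA) and binary damage outcomes from time-history analyses, the median and dispersion are estimated by maximum likelihood. The 60 synthetic records and the true parameter values are placeholders, not the bridge study's data.

```python
# Minimal two-parameter lognormal fragility fit by maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
im = rng.uniform(0.05, 1.0, size=60)                       # e.g. PGA of 60 records (g)
true_median, true_beta = 0.4, 0.5
damaged = (rng.random(60) < norm.cdf(np.log(im / true_median) / true_beta)).astype(int)

def negloglik(params):
    median, beta = np.exp(params)                          # keep both positive
    p = norm.cdf(np.log(im / median) / beta)               # probability of exceeding the damage state
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(damaged * np.log(p) + (1 - damaged) * np.log(1 - p))

res = minimize(negloglik, x0=np.log([0.3, 0.4]))
median, beta = np.exp(res.x)
print(f"fitted median IM: {median:.3f} g, dispersion beta: {beta:.3f}")
```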
1273 Selecting an Advanced Creep Model or a Sophisticated Time-Integration? A New Approach by Means of Sensitivity Analysis
Authors: Holger Keitel
Abstract:
The prediction of long-term deformations of concrete and reinforced concrete structures has been a field of extensive research, and several different creep models have been developed so far. Most of the models were developed for constant concrete stresses; thus, in the case of varying stresses a specific superposition principle or time-integration, respectively, is necessary. Nowadays, when modeling concrete creep, the engineering focus is more on the application of sophisticated time-integration methods than on choosing the more appropriate creep model. For this reason, this paper presents a method to quantify the uncertainties of creep prediction originating from the selection of creep models or from the time-integration methods. By adapting variance-based global sensitivity analysis, a methodology is developed to quantify the influence of creep model selection or the choice of time-integration method. Applying the developed method, general recommendations on how to model creep behavior for varying stresses are given.
Keywords: Concrete creep models, time-integration methods, sensitivity analysis, prediction uncertainty.
1272 A New Framework for Evaluation and Prioritization of Suppliers using a Hierarchical Fuzzy TOPSIS
Authors: Mohammad Taghi Taghavifard, Danial Mirheydari
Abstract:
This paper suggests an algorithm for the evaluation and selection of suppliers. At the beginning, all the materials and services used by the organization were identified and categorized with regard to their nature by the ABC method. Afterwards, in order to reduce risk factors and maximize the organization's profit, purchase strategies were determined. Then, appropriate criteria were identified for the primary evaluation of suppliers applying to the organization. The output of this stage was a list of suppliers qualified by the organization to participate in its tenders. Subsequently, considering a particular material, appropriate criteria for ordering that material were determined, taking into account the material's specifications as well as the organization's needs. Finally, for the purpose of validation and verification of the proposed model, it was applied to Mobarakeh Steel Company (MSC), and the qualified suppliers of this company were ranked by means of a hierarchical fuzzy TOPSIS method. The obtained results show that the proposed algorithm is quite effective, efficient and easy to apply.
Keywords: ABC analysis, Hierarchical Fuzzy TOPSIS, Primary supplier evaluation, Purchasing strategy, Supplier selection.
1271 Secure Resource Selection in Computational Grid Based on Quantitative Execution Trust
Authors: G. Kavitha, V. Sankaranarayanan
Abstract:
Grid computing provides a virtual framework for controlled sharing of resources across institutional boundaries. Recently, trust has been recognised as an important factor for the selection of optimal resources in a grid. We introduce a new method that provides a quantitative trust value, based on past interactions and present environment characteristics. This quantitative trust value is used to select a suitable resource for a job and eliminates run-time failures arising from incompatible user-resource pairs. The proposed work acts as a tool to calculate the trust values of the various components of the grid and thereby improves the success rate of the jobs submitted to resources on the grid. Access to a resource depends not only on the identity and behaviour of the resource but also on its transaction context, time of transaction, connectivity bandwidth, availability of the resource and load on the resource. The quality of the recommender is also evaluated based on the accuracy of the feedback provided about a resource. The jobs are submitted for execution to the selected resource after finding the overall trust value of the resource. The overall trust value is computed with respect to the subjective and objective parameters.
Keywords: Access control, feedback, grid computing, reputation, security, trust, trust parameter.
1270 Fault Classification of Double Circuit Transmission Line Using Artificial Neural Network
Authors: Anamika Jain, A. S. Thoke, R. N. Patel
Abstract:
This paper addresses the problems encountered by conventional distance relays when protecting double-circuit transmission lines. The problems arise principally as a result of the mutual coupling between the two circuits under different fault conditions; this mutual coupling is highly nonlinear in nature. An adaptive protection scheme is proposed for such lines based on the application of an artificial neural network (ANN). An ANN has the ability to classify the nonlinear relationship between measured signals by identifying different patterns of the associated signals. One of the key points of the present work is that only current signals measured at the local end have been used to detect and classify faults in the double-circuit transmission line with double-end infeed. The adaptive protection scheme is tested under specific fault types with varying fault location, fault resistance, fault inception angle, and remote-end infeed. Improved performance is obtained once the neural network is trained adequately, performing precisely when faced with different system parameters and conditions. The test results clearly show that the fault is detected and classified within a quarter cycle; thus the proposed adaptive protection technique is well suited for double-circuit transmission line fault detection and classification. Results of performance studies show that the proposed neural network-based module can improve the performance of conventional fault selection algorithms.
Keywords: Double circuit transmission line, Fault detection and classification, High impedance fault and Artificial Neural Network.
1269 Mining Image Features in an Automatic Two-Dimensional Shape Recognition System
Authors: R. A. Salam, M.A. Rodrigues
Abstract:
The number of features required to represent an image can be very large. Using all available features to recognize objects can suffer from the curse of dimensionality. Feature selection and extraction is the pre-processing step of image mining. The main issues in analyzing images are the effective identification of features and their extraction. The mining problem focused on here is the grouping of features for different shapes. Experiments have been conducted using the shape outline as the features. Shape outline readings are put through a normalization and dimensionality reduction process using an eigenvector-based method to produce a new set of readings. After this pre-processing step, the data are grouped by their shapes. Through statistical analysis of these readings together with peak measures, a robust classification and recognition process is achieved. Tests showed that the suggested methods are able to automatically recognize objects through their shapes. Finally, experiments also demonstrate the system's invariance to rotation, translation, scale, reflection and a small degree of distortion.
Keywords: Image mining, feature selection, shape recognition, peak measures.
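Below is a minimal sketch of the pre-processing pipeline described above: shape-outline readings are normalized, reduced with an eigenvector-based method (PCA here), and then grouped. The synthetic circle/ellipse outlines and the clustering step stand in for the paper's image data and statistical grouping; they are assumptions for illustration.

```python
# Minimal sketch: normalize shape-outline readings, reduce with PCA, group by shape.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)

def outline(a, b):
    # radial outline readings of an ellipse with semi-axes a, b, plus noise
    r = (a * b) / np.sqrt((b * np.cos(theta)) ** 2 + (a * np.sin(theta)) ** 2)
    return r + 0.02 * rng.normal(size=theta.size)

shapes = np.array([outline(1.0, 1.0) for _ in range(20)] +      # circles
                  [outline(1.0, 0.5) for _ in range(20)])        # ellipses

shapes = shapes / shapes.max(axis=1, keepdims=True)              # scale normalization
reduced = PCA(n_components=3).fit_transform(shapes)              # eigenvector-based reduction
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(reduced)
print("cluster assignments:", labels)
```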
1268 Knowledge Flows and Innovative Performances of NTBFs in Gauteng, South Africa: An Attempt to Explain Mixed Findings in Science Park Research
Authors: Kai-Ying A. Chan, Leon A.G. Oerlemans, Marthinus W. Pretorius
Abstract:
Science parks are often established to drive regional economic growth, especially in countries with emerging economies. However, mixed findings regarding the performances of science park firms are found in the literature. This study tries to explain these mixed findings by taking a relational approach and exploring (un)intended knowledge transfers between new technology-based firms (NTBFs) in the emerging South African economy. Moreover, the innovation outcomes of these NTBFs are examined by using a multi-dimensional construct. Results show that science park location plays a significant role in explaining innovative sales, but is insignificant when a different indicator of innovation outcomes is used. Furthermore, only for innovations that are new to the firms do both science park location and intended knowledge transfer via informal business relationships have a positive impact, whereas social relationships have a negative impact.
Keywords: Knowledge flows, innovative performances, science parks, new technology-based firms.
1267 Pharmacology Applied Learning Program in Preclinical Years – Student Perspectives
Authors: Amudha Kadirvelu, Sunil Gurtu, Sivalal Sadasivan
Abstract:
The pharmacology curriculum plays an integral role in medical education. Learning pharmacology in order to choose and prescribe drugs is a major challenge encountered by students. We developed pharmacology applied learning activities for first-year medical students that included realistic clinical situations with escalating complications, which required the students to analyze the situation and think critically to choose a safe drug. Tutor feedback was provided at the end of each session. Evaluation was done to assess the students' level of interest and the usefulness of the sessions in the rational selection of drugs. The majority (98%) of the students agreed that the session was an extremely useful learning exercise and that similar sessions would help in the rational selection of drugs. Applied learning sessions in the early years of a medical program may promote deep learning and bridge the gap between pharmacology theory and clinical practice. Besides, they may also enhance safe prescribing skills.
Keywords: Medical education, pharmacology curriculum, applied learning, safe prescribing.
1266 A New Fuzzy DSS/ES for Stock Portfolio Selection using Technical and Fundamental Approaches in Parallel
Authors: H. Zarei, M. H. Fazel Zarandi, M. Karbasian
Abstract:
A Decision Support System/Expert System for stock portfolio selection is presented in which, in the first phase, both technical and fundamental data are used to estimate technical and fundamental return and risk; then, the estimated values are aggregated with the investor's preferences (second phase) to produce a suitable stock portfolio. In the first phase, there are two expert systems, each of which is responsible for technical or fundamental estimation. In the technical expert system, for each stock, twenty-seven candidates are identified and, using a rough-set-based clustering method (RC), the effective variables are selected. Next, for each stock, two fuzzy rule-bases are developed with the fuzzy C-means method and the Takagi-Sugeno-Kang (TSK) approach: one for return estimation and the other for risk. Thereafter, the parameters of the rule-bases are tuned with the backpropagation method. In parallel, for the fundamental expert systems, fuzzy rule-bases have been identified in the form of "IF-THEN" rules through brainstorming with stock market experts, and the input data have been derived from financial statements; as a result, two fuzzy rule-bases have been generated for all the stocks, one for return and the other for risk. In the second phase, user preferences are represented by four criteria and are obtained by questionnaire. Using an expert system, the four estimated values of return and risk are aggregated with the respective values of user preference. At last, a fuzzy rule-base having four rules processes these values and produces a ranking score for each stock, which leads to a satisfactory portfolio for the user. The stocks of six manufacturing companies and the period 2003-2006 were selected for data gathering.
Keywords: Stock Portfolio Selection, Fuzzy Rule-Base Expert Systems, Financial Decision Support Systems, Technical Analysis, Fundamental Analysis.
1265 Aircraft Selection Problem Using Decision Uncertainty Distance in Fuzzy Multiple Criteria Decision Making Analysis
Authors: C. Ardil
Abstract:
Aircraft have different capabilities and specifications according to the required strategic goals and objectives in operations. With various types on the market with different aircraft characteristics, it becomes difficult to select a suitable aircraft for certain operations and requirements. The entropy weighting method (EWM) is a useful, highly consistent, and reliable method for obtaining the weights of the criteria and is worth integrating with the decision uncertainty distance (DUD) method, which is more applicable and requires less computation than other methods. An illustrative example is presented to demonstrate the validity and usability of the proposed methodology. The ranking results match those of the distance-based technique for order preference by similarity to ideal solution (TOPSIS), which shows the robustness of the entropy-DUD hybrid method. Validity analysis shows that the proposed hybrid multiple criteria decision-making analysis (MCDMA) methodology is quantitatively stable and reliable.
Keywords: aircraft selection, decision uncertainty distance (DUD), multiple criteria decision making analysis, MCDMA, TOPSIS
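The sketch below illustrates the entropy weighting method combined with the distance-based TOPSIS ranking used above as the comparison baseline; the DUD method itself is not reproduced. The 4x4 decision matrix and the assumption that all criteria are benefit-type are made up for illustration, not the paper's data.

```python
# Minimal sketch: entropy weights (EWM) + TOPSIS ranking for an aircraft selection matrix.
import numpy as np

X = np.array([[75., 420., 0.80, 9.0],     # rows: candidate aircraft
              [60., 510., 0.70, 8.5],     # columns: benefit-type criteria (assumed)
              [82., 390., 0.90, 7.8],
              [68., 470., 0.85, 8.8]])

# Entropy weights
P = X / X.sum(axis=0)
E = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
w = (1 - E) / (1 - E).sum()

# TOPSIS ranking with the entropy weights
V = w * X / np.sqrt((X ** 2).sum(axis=0))           # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)           # ideal and anti-ideal solutions
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)
print("closeness scores:", np.round(closeness, 3), " best alternative:", int(np.argmax(closeness)))
```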
1264 Centre Of Mass Selection Operator Based Meta-Heuristic For Unbounded Knapsack Problem
Authors: D. Venkatesan, K. Kannan, S. Raja Balachandar
Abstract:
In this paper a new genetic algorithm based on a heuristic operator and a centre-of-mass selection operator (CMGA) is designed for the unbounded knapsack problem (UKP), which is an NP-hard combinatorial optimization problem. The proposed genetic algorithm is based on a heuristic operator which utilizes problem-specific knowledge. This centre-of-mass operator, when combined with other genetic operators, forms an algorithm competitive with existing ones. Computational results show that the proposed algorithm is capable of obtaining high-quality solutions for standard randomly generated knapsack instances. A comparative study of CMGA with a simple GA on unbounded knapsack instances of size up to 200 shows the superiority of CMGA. Thus, CMGA is an efficient tool for solving the UKP and is also competitive with other genetic algorithms.
Keywords: Genetic Algorithm, Unbounded Knapsack Problem, Combinatorial Optimization, Meta-Heuristic, Center of Mass
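As background, the sketch below shows a plain GA for the unbounded knapsack problem. For brevity it uses ordinary tournament selection and a greedy repair heuristic; the paper's centre-of-mass selection operator is not reproduced here, and the item data are randomly generated.

```python
# Minimal GA sketch for the unbounded knapsack problem (UKP).
import numpy as np

rng = np.random.default_rng(0)
n_items, capacity = 10, 100
values = rng.integers(5, 30, n_items)
weights = rng.integers(3, 20, n_items)

def repair(counts):                      # drop items until the weight limit is met
    while counts @ weights > capacity:
        counts[rng.choice(np.flatnonzero(counts))] -= 1
    return counts

def fitness(counts):
    return counts @ values

pop = np.array([repair(rng.integers(0, 5, n_items)) for _ in range(60)])
for _ in range(200):
    new_pop = []
    for _ in range(len(pop)):
        a, b = pop[rng.integers(len(pop), size=2)]            # tournament selection
        parent = a if fitness(a) >= fitness(b) else b
        child = parent.copy()
        child[rng.integers(n_items)] += rng.integers(-1, 2)   # small mutation
        child = repair(np.clip(child, 0, None))
        new_pop.append(child)
    pop = np.array(new_pop)

best = max(pop, key=fitness)
print("best value:", fitness(best), "weight:", best @ weights, "counts:", best)
```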
1263 Unmanned Combat Aircraft Selection using Fuzzy Proximity Measure Method in Multiple Criteria Group Decision Making
Authors: C. Ardil
Abstract:
The decision to select an unmanned combat aircraft is complicated, since several options and conflicting criteria must be considered simultaneously. When making a multiple criteria decision, it is important to consider the selected evaluation criteria, including priceability, payloadability, stealthability, speedability, and survivability. The fundamental goal of the study is to select the best unmanned combat aircraft by taking these evaluation criteria into account. The optimal aircraft was chosen using the fuzzy proximity measure method, which enables decision-makers to express preferences as standard fuzzy set numbers during the multiple criteria decision-making process. To assess the applicability of the proposed approach, a numerical example is provided. Finally, by comparing the candidate unmanned combat aircraft, the proposed method produced a successful application, and the best aircraft was selected.
Keywords: standard fuzzy sets (SFS), unmanned combat aircraft selection, multiple criteria decision making (MCDM), multiple criteria group decision making (MCGDM), proximity measure method (PMM)