Search results for: Resource Selection.
1249 Customer Churn Prediction Using Four Machine Learning Algorithms Integrating Feature Selection and Normalization in the Telecom Sector
Authors: Alanoud Moraya Aldalan, Abdulaziz Almaleh
Abstract:
A crucial part of maintaining a customer-oriented business in the telecommunications industry is understanding the reasons and factors that lead to customer churn. Competition between telecom companies has greatly increased in recent years, which has made it more important to understand customers’ needs in this strong market. Understanding the needs of customers who are looking to switch service providers is especially important. Churn prediction is now a mandatory requirement for retaining customers in the telecommunications industry, and machine learning can be used to accomplish it. Churn prediction has become a very important machine learning classification topic in the industry, and understanding the factors behind customer churn and how customers behave is essential to building an effective churn prediction model. This paper aims to predict churn and identify the factors behind customers’ churn based on their past service usage history. Toward this objective, the study makes use of feature selection, normalization, and feature engineering. The study then compares the performance of four machine learning algorithms on the Orange dataset: Logistic Regression, Random Forest, Decision Tree, and Gradient Boosting. Performance was evaluated using the F1 score and ROC-AUC, and comparison with existing models shows that this study produces better results. Gradient Boosting with the feature selection technique performed best, achieving a 99% F1-score and 99% AUC, and all other experiments achieved good results as well.
Keywords: Machine Learning, Gradient Boosting, Logistic Regression, Churn, Random Forest, Decision Tree, ROC, AUC, F1-score.
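For orientation, the sketch below shows the kind of pipeline the abstract describes – normalization, univariate feature selection, and a gradient boosting classifier evaluated with the F1 score and ROC-AUC – using scikit-learn. It is an assumed illustration, not the paper's code; the split ratio and the number of selected features (k=10) are placeholders.

```python
# Minimal sketch of the described pipeline: normalization, feature selection,
# gradient boosting, and F1 / ROC-AUC evaluation. Parameters are illustrative.
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score, roc_auc_score

def evaluate_churn_model(X, y):
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    model = make_pipeline(
        StandardScaler(),                       # normalization
        SelectKBest(f_classif, k=10),           # feature selection (k assumed)
        GradientBoostingClassifier(random_state=0))
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    y_prob = model.predict_proba(X_test)[:, 1]
    return f1_score(y_test, y_pred), roc_auc_score(y_test, y_prob)
```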
1248 A Fitted Random Sampling Scheme for Load Distribution in Grid Networks
Authors: O. A. Rahmeh, P. Johnson, S. Lehmann
Abstract:
Grid networks provide the ability to perform higher-throughput computing by taking advantage of many networked computers’ resources to solve large-scale computation problems. As the popularity of Grid networks has increased, there is a need to efficiently distribute the load among the resources accessible on the network. In this paper, we present a stochastic network system that gives a distributed load-balancing scheme by generating almost regular networks. This network system is self-organized and depends only on local information for load distribution and resource discovery. The in-degree of each node refers to its free resources, and the job assignment and resource discovery processes required for load balancing are accomplished by using fitted random sampling. Simulation results show that the generated network system provides an effective, scalable, and reliable load-balancing scheme for the distributed resources accessible on Grid networks.
Keywords: Complex networks, grid networks, load-balancing, random sampling.
1247 The Optimal Equilibrium Capacity of Information Hiding Based on Game Theory
Authors: Ziquan Hu, Kun She, Shahzad Ali, Kai Yan
Abstract:
Game theory can be used to analyze conflicting issues in the field of information hiding. In this paper, a 2-phase game is used to build an embedder-attacker system to analyze the limits on the hiding capacity of embedding algorithms: the embedder minimizes the expected damage and the attacker maximizes it. In the system, the embedder first consumes its resource to build embedded units (EU) and inserts the secret information into the EU. The attacker then distributes its resource evenly over the attacked EU. The expected equilibrium damage – the maximum damage from the attacker's point of view and the minimum from the embedder's – is evaluated for the case when the attacker attacks a subset of all the EU. Furthermore, the optimal equilibrium capacity of hiding information is calculated through the optimal number of EU carrying the embedded secret information. Finally, illustrative examples of the optimal equilibrium capacity are presented.
Keywords: 2-Phase Game, Expected Equilibrium Damage, Information Hiding, Optimal Equilibrium Capacity.
1246 Optimal Water Allocation: Sustainable Management of Dam Reservoir
Authors: Afshin Jahangirzadeh, Shatirah Akib, Babak Kamali, Sadia Rahman
Abstract:
Scarcity of water resources and the huge costs of establishing new hydraulic installations necessitate optimal exploitation of existing reservoirs. Sustainable management and efficient exploitation of existing finite water resources are important factors in water resource management, particularly in periods of water insufficiency and in dry regions, and on account of competitive allocations from the viewpoint of exploitation management. This study aims to minimize reservoir water release subject to a specified rate of demand. A numerical model for optimal water exploitation has been developed using GAMS, introduced by the World Bank, and applied to the case of the Meijaran dam, northern Iran. The results indicate that this model can optimize reservoir exploitation while supplying the water required by the lower parts of the region. Further, by allocating the reservoir water optimally, the optimal rate of water allocated to each group of users was specified to increase the benefits of dam exploitation.
Keywords: Water resource management, water reservoirs, water allocation, GAMS, Meijaran dam.
1245 Zero Inflated Models for Overdispersed Count Data
Authors: Y. N. Phang, E. F. Loh
Abstract:
Zero inflated models are usually used in modeling count data with excess zeros, where the excess zeros can be structural zeros or zeros which occur by chance. These types of data are commonly found in disciplines such as finance, insurance, biomedicine, econometrics, ecology, and health sciences, including dental epidemiology. The most popular zero inflated models used by many researchers are the zero inflated Poisson and zero inflated negative binomial models. In addition, zero inflated generalized Poisson and zero inflated double Poisson models are also discussed and found in some literature. Recently, the zero inflated inverse trinomial and zero inflated strict arcsine models have been advocated and shown to serve as alternative models for overdispersed count data caused by excessive zeros and unobserved heterogeneity. The purpose of this paper is to review some related literature and provide a variety of examples from different disciplines in the application of zero inflated models. Different model selection methods used in model comparison are discussed.
Keywords: Overdispersed count data, model selection methods, likelihood ratio, AIC, BIC.
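As a minimal illustration of the model class and the selection criteria mentioned above (not the paper's analysis), the sketch below fits a zero inflated Poisson model to synthetic overdispersed counts with statsmodels and compares it with a plain Poisson model via AIC and BIC.

```python
# Illustrative sketch: zero inflated Poisson vs. plain Poisson on synthetic
# counts with structural excess zeros; model selection by AIC/BIC.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(0)
n = 1000
x = sm.add_constant(rng.normal(size=n))            # intercept + one covariate
counts = rng.poisson(lam=np.exp(0.5 + 0.3 * x[:, 1]))
counts[rng.random(n) < 0.3] = 0                    # inject structural zeros

poisson_fit = sm.Poisson(counts, x).fit(disp=False)
zip_fit = ZeroInflatedPoisson(counts, x).fit(method="bfgs", maxiter=200,
                                             disp=False)  # constant inflation

# Lower AIC/BIC indicates the preferred model for these data.
print("Poisson AIC/BIC:", poisson_fit.aic, poisson_fit.bic)
print("ZIP     AIC/BIC:", zip_fit.aic, zip_fit.bic)
```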
1244 Satellite Data Classification Accuracy Assessment Based from Reference Dataset
Authors: Mohd Hasmadi Ismail, Kamaruzaman Jusoff
Abstract:
In order to develop forest management strategies for tropical forest in Malaysia, surveying the forest resources and monitoring the forest area affected by logging activities is essential. Tremendous effort has been put into the classification of land cover related to forest resource management in this country, as it is a priority in all aspects of forest mapping using remote sensing and related technology such as GIS. In fact, classification is a compulsory step in any remote sensing research. The main objective of this paper is therefore to assess the classification accuracy of a classified forest map on Landsat TM data derived from different numbers of reference data (200 and 388 reference points). This comparison was made through observation (200 reference points), and through interpretation and observation approaches (388 reference points). Five land cover classes, namely primary forest, logged over forest, water bodies, bare land, and agricultural crop/mixed horticulture, can be identified by differences in spectral wavelength. Results showed that the overall accuracy from 200 reference points was 83.5% (kappa value 0.7502459; kappa variance 0.002871), which is considered acceptable or good for optical data. When the 200 reference points were increased to 388 in the confusion matrix, the accuracy improved slightly from 83.5% to 89.17%, with the kappa statistic increasing from 0.7502459 to 0.8026135. The accuracy of this classification suggests that the strategy for the selection of training areas, the interpretation approaches, and the number of reference data used are important for producing a better classification result.
Keywords: Image Classification, Reference Data, Accuracy Assessment, Kappa Statistic, Forest Land Cover.
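The accuracy figures above come from a confusion matrix of reference versus classified labels. A small sketch of that computation with scikit-learn follows; the seven labels are made up for illustration, not taken from the study.

```python
# Sketch of the accuracy assessment: overall accuracy and kappa statistic
# from reference vs. classified labels (hypothetical labels, five classes).
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

reference  = ["primary", "logged", "water", "bare", "crop", "primary", "logged"]
classified = ["primary", "logged", "water", "crop", "crop", "primary", "bare"]

print(confusion_matrix(reference, classified))
print("Overall accuracy:", accuracy_score(reference, classified))
print("Kappa:", cohen_kappa_score(reference, classified))
```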
1243 EEG Indices to Time-On-Task Effects and to a Workload Manipulation (Cueing)
Authors: A. T. Kamzanova, G. Matthews, A. M. Kustubayeva, S. M. Jakupov
Abstract:
The aim of this study was to evaluate the sensitivity of a range of EEG indices to time-on-task effects and to a workload manipulation (cueing), during performance of a resource-limited vigilance task. Effects of task period and cueing on performance and subjective state response were consistent with previous vigilance studies and with resource theory. Two EEG indices – the Task Load Index (TLI) and global lower frequency (LF) alpha power – showed effects of task period and cueing similar to those seen with correct detections. Across four successive task periods, the TLI declined and LF alpha power increased. Cueing increased TLI and decreased LF alpha. Other indices – the Engagement Index (EI), frontal theta, and upper frequency (UF) alpha – failed to show these effects. However, EI and frontal theta were sensitive to interactive effects of task period and cueing, which may correspond to a stronger anxiety response to the uncued task.
Keywords: brain activity, EEG, task engagement, vigilance task.
1242 A Genetic Algorithm with Priority Selection for the Traveling Salesman Problem
Authors: Cha-Hwa Lin, Je-Wei Hu
Abstract:
A conventional GA combined with a local search algorithm, such as 2-OPT, forms a hybrid genetic algorithm (HGA) for the traveling salesman problem (TSP). However, geometric properties, which are problem-specific knowledge, can be used to improve the search process of the HGA. Some tour segments (edges) of TSPs are fine while some may be too long to appear in a short tour. This knowledge can constrain the GA to work with fine tour segments and to consider long tour segments less often. Consequently, a new algorithm, called the intelligent-OPT hybrid genetic algorithm (IOHGA), is proposed to improve the GA and the 2-OPT algorithm in order to reduce the search time for the optimal solution. Based on the geometric properties, all tour segments are assigned 2-level priorities to distinguish between good and bad genes. A simulation study was conducted to evaluate the performance of the IOHGA. The experimental results indicate that in general the IOHGA obtains near-optimal solutions in less time and with better accuracy than the hybrid genetic algorithm with simulated annealing (HGA(SA)).
Keywords: Traveling salesman problem, hybrid genetic algorithm, priority selection, 2-OPT.
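For context, the sketch below shows the plain 2-OPT local search that the HGA builds on: reverse a tour segment whenever the reversal shortens the tour. The paper's priority selection (favoring fine edges) is not reproduced here, and the tour/distance representation is illustrative.

```python
# Plain 2-OPT local search for the TSP: keep reversing segments while the
# tour gets shorter. dist is a square distance matrix; tour a list of cities.
def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                new_tour = tour[:i] + tour[i:j][::-1] + tour[j:]  # reverse segment
                if tour_length(new_tour, dist) < tour_length(tour, dist):
                    tour, improved = new_tour, True
    return tour
```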
1241 A Novel Prediction Method for Tag SNP Selection using Genetic Algorithm based on KNN
Authors: Li-Yeh Chuang, Yu-Jen Hou, Jr., Cheng-Hong Yang
Abstract:
Single nucleotide polymorphisms (SNPs) hold much promise as a basis for disease-gene association. However, research is limited by the cost of genotyping the tremendous number of SNPs. Therefore, it is important to identify a small subset of informative SNPs, the so-called tag SNPs. This subset consists of selected SNPs of the genotypes, and accurately represents the rest of the SNPs. Furthermore, an effective evaluation method is needed to evaluate prediction accuracy of a set of tag SNPs. In this paper, a genetic algorithm (GA) is applied to tag SNP problems, and the K-nearest neighbor (K-NN) serves as a prediction method of tag SNP selection. The experimental data used was taken from the HapMap project; it consists of genotype data rather than haplotype data. The proposed method consistently identified tag SNPs with considerably better prediction accuracy than methods from the literature. At the same time, the number of tag SNPs identified was smaller than the number of tag SNPs in the other methods. The run time of the proposed method was much shorter than the run time of the SVM/STSA method when the same accuracy was reached.
Keywords: Genetic Algorithm (GA), Genotype, Single nucleotide polymorphism (SNP), tag SNPs.
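A sketch of the prediction step follows, under assumed synthetic genotype data: a candidate tag SNP set is scored by how accurately K-NN predicts each remaining SNP from the tags. In a GA-based method such as the one above, a score of this kind would serve as the fitness function; the matrix, k, and tag indices here are hypothetical.

```python
# Illustrative K-NN scoring of a candidate tag SNP set on synthetic genotypes.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def tag_snp_accuracy(genotypes, tag_idx, k=5):
    """genotypes: (samples, snps) array of 0/1/2 codes; tag_idx: tag SNP columns."""
    tags = genotypes[:, tag_idx]
    rest = [j for j in range(genotypes.shape[1]) if j not in set(tag_idx)]
    scores = [cross_val_score(KNeighborsClassifier(n_neighbors=k),
                              tags, genotypes[:, j], cv=5).mean()
              for j in rest]
    return float(np.mean(scores))

rng = np.random.default_rng(0)
data = rng.integers(0, 3, size=(100, 12))      # hypothetical genotype matrix
print(tag_snp_accuracy(data, tag_idx=[0, 3, 7]))
```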
1240 Factors Influencing Knowledge Management Process Model: A Case Study of Manufacturing Industry in Thailand
Authors: Daranee Pimchangthong, Supaporn Tinprapa
Abstract:
The objectives of this research were to explore the factors influencing the knowledge management process in the manufacturing industry and to develop a model to support knowledge management processes. The studied factors were technology infrastructure, human resources, knowledge sharing, and the culture of the organization. The knowledge management processes included discovery, capture, sharing, and application. Data were collected through questionnaires and analyzed using multiple linear regression and multiple correlation. The results showed that technology infrastructure, human resources, knowledge sharing, and the culture of the organization influenced the discovery and capture processes. However, knowledge sharing had no influence on the sharing and application processes. A model to support knowledge management processes was developed, which indicated that knowledge sharing needs further improvement in the organization.
Keywords: knowledge management, knowledge management process, tacit knowledge.
1239 Impact of Moderating Role of e-Administration on Training, Performance Appraisal and Organizational Performance
Authors: Ejaz Ali, Muhammad Younas, Tahir Saeed
Abstract:
In this age of information technology, organizations are revisiting their approach to a great degree, and e-administration is the most popular area in which to proceed. In order to excel over their competitors, organizations spend a substantial share of their resources on e-administration, as it is the most effective, transparent, and efficient way to achieve their short-term as well as long-term organizational goals. E-administration, as an ICT tool, plays a significant role in the effective management of HR practices, resulting in optimal performance of an organization. The present research analyzes the moderating role of e-administration in the relationships between training, performance appraisal, and perceived organizational performance. The study is based on the RBV and AMO theories, which advocate that the use of the latest technology in the execution of human resource (HR) functions enables an organization to achieve and sustain competitive advantage, which leads to optimal firm performance.
Keywords: Human resource management, HR function, e-administration, performance appraisal, training, organizational performance.
1238 Research on Load Balancing Technology for Web Service Mobile Host
Authors: Yao Lu, Xiuguo Zhang, Zhiying Cao
Abstract:
In this paper, the load balancing idea is applied to the Web service mobile host. The main idea of load balancing is to establish a one-to-many mapping mechanism: an entrance maps a request to a plurality of processing nodes in order to divide and assign the processing. Because the mobile host is a resource-constrained environment, some Web services cannot be completed on the mobile host alone. When the mobile host's resources are not enough to complete a request, the load balancing scheduler divides the request into a plurality of sub-requests and transfers them to different auxiliary mobile hosts. The auxiliary mobile hosts execute the sub-requests and return their results to the mobile host. The service request integrator receives the results of the sub-requests from the auxiliary mobile hosts and integrates them. In the end, the complete request is returned to the client. Experimental results show that the technology adopted in this paper can complete requests with higher efficiency.
Keywords: Dinic, load balancing, mobile host, web service.
1237 Scheduling Multiple Workflow Using De-De Dodging Algorithm and PBD Algorithm in Cloud: Detailed Study
Authors: B. Arun Kumar, T. Ravichandran
Abstract:
Workflow scheduling is an important part of cloud computing; based on different criteria, it decides cost, execution time, and performance. A cloud workflow system is a platform service facilitating the automation of distributed applications based on the new cloud infrastructure. An aspect which differentiates a cloud workflow system from others is its market-oriented business model, an innovation which challenges conventional workflow scheduling strategies. The Time and Cost optimization algorithm for scheduling Hybrid Clouds (TCHC), which decides which resources should be chartered from public providers, is combined with a new De-De algorithm so that every instance of single and multiple workflows works without deadlocks. To this end, two new concepts – the De-De Dodging Algorithm and the Priority Based Decisive Algorithm – address conventional deadlock avoidance issues in one algorithm that maximizes active (not just allocated) resource use and reduces makespan.
Keywords: Workflow Scheduling, cloud workflow, TCHC algorithm, De-De Dodging Algorithm, Priority Based Decisive Algorithm (PBD), Makespan.
1236 Limitations of the Analytic Hierarchy Process Technique with Respect to Geographically Distributed Stakeholders
Authors: Azeem Ahmad, Magnus Goransson, Aamir Shahzad
Abstract:
The selection of appropriate requirements for product releases can make a big difference in a product's success. The selection of requirements is done with different requirements prioritization techniques, which are based on pre-defined and systematic steps to calculate the requirements' relative weights. Prioritization is complicated by new development settings, shifting from traditional co-located development to geographically distributed development, where the stakeholders connected to a project are distributed all over the world. This geographical distribution of stakeholders makes it hard to prioritize requirements, as each stakeholder has their own perception of and expectations for the requirements in a software project. This paper discusses the limitations of the Analytical Hierarchy Process with respect to geographically distributed stakeholders' (GDS) prioritization of requirements. It also provides a solution, in the form of a modified AHP, for prioritizing requirements for GDS. We conduct two experiments and analyze the results in order to discuss the AHP limitations with respect to GDS. The modified AHP variant is also validated in this paper.
Keywords: Requirements Prioritization, Geographically Distributed Stakeholders, AHP, Modified AHP.
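For reference, a minimal sketch of the standard AHP weighting step under discussion: criterion weights from the principal eigenvector of a pairwise comparison matrix, plus Saaty's consistency ratio. The 3x3 judgment matrix is illustrative of a single stakeholder; the paper's modified AHP for GDS is not reproduced here.

```python
# Standard AHP: weights from the principal eigenvector of a reciprocal
# pairwise comparison matrix, with Saaty's consistency ratio (CR).
import numpy as np

def ahp_weights(A):
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)       # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # Saaty's random index
    return w, ci / ri                          # weights, consistency ratio

A = np.array([[1, 3, 5],
              [1/3, 1, 2],
              [1/5, 1/2, 1]], dtype=float)     # one stakeholder's judgments
weights, cr = ahp_weights(A)
print(weights, "CR =", cr)                     # CR < 0.1 is usually acceptable
```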
1235 Genetic Content-Based MP3 Audio Watermarking in MDCT Domain
Authors: N. Moghadam, H. Sadeghi
Abstract:
In this paper, a novel scheme for watermarking digital audio during its compression to the MPEG-1 Layer III format is proposed. For this purpose, we slightly modify some of the selected MDCT coefficients used during the MPEG audio compression procedure. Because different MDCT coefficients can be modified, there are different choices for embedding the watermark into the audio data with respect to robustness and transparency. Our proposed method uses a genetic algorithm to select the best coefficients in which to embed the watermark. This genetic selection is done according to parameters extracted from the perceptual content of the audio, to optimize the robustness and transparency of the watermark. On the other hand, watermark security is increased by the random nature of the genetic selection. The information about the selected MDCT coefficients that carry the watermark bits is saved in a database for future extraction of the watermark. The proposed method is suitable for online MP3 stores pursuing illegal copies of musical artworks. Experimental results show that the detection ratio of the watermarks at a bitrate of 128 kbps remains above 90% while the inaudibility of the watermark is preserved.
Keywords: Content-Based Audio Watermarking, Genetic Audio Watermarking.
1234 Selection of Intensity Measure in Probabilistic Seismic Risk Assessment of a Turkish Railway Bridge
Authors: M. F. Yilmaz, B. Ö. Çağlayan
Abstract:
The fragility curve is an effective, commonly used tool for determining the earthquake performance of structural and nonstructural components, and it is also used to determine the nonlinear behavior of bridges. There are many historical bridges in the Turkish railway network whose earthquake performance needs to be investigated. To derive a fragility curve, intensity measures (IMs) and engineering demand parameters (EDPs) must be determined, and the relation between IMs and EDPs derived. In this study, a typical simply supported steel girder riveted railway bridge is studied. Fragility curves of this bridge are derived using a two-parameter lognormal distribution. Time history analyses are performed on 60 selected real earthquake records to determine the relation between IMs and EDPs. Moreover, the efficiency, practicality, and sufficiency of three different IMs – PGA, Sa(0.2s), and Sa(1s), the most commonly used IM parameters for fragility curves in the literature – are discussed.
Keywords: Railway bridges, earthquake performance, fragility analyses, selection of intensity measures.
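A two-parameter lognormal fragility curve of the kind derived above has the closed form P(damage | IM = x) = Φ(ln(x/θ)/β), with median capacity θ and lognormal dispersion β. The sketch below evaluates it with SciPy; the parameter values are illustrative, not the study's estimates.

```python
# Two-parameter lognormal fragility curve: P(damage | IM=x) = Phi(ln(x/theta)/beta).
# theta (median capacity) and beta (dispersion) below are illustrative values.
import numpy as np
from scipy.stats import norm

def fragility(im, theta, beta):
    """Probability of exceeding a damage state at intensity measure value im."""
    return norm.cdf(np.log(im / theta) / beta)

pga = np.array([0.1, 0.2, 0.4, 0.8])     # example PGA values in g
print(fragility(pga, theta=0.35, beta=0.6))
```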
1233 Selecting an Advanced Creep Model or a Sophisticated Time-Integration? A New Approach by Means of Sensitivity Analysis
Authors: Holger Keitel
Abstract:
The prediction of long-term deformations of concrete and reinforced concrete structures has been a field of extensive research, and several different creep models have been developed so far. Most of the models were developed for constant concrete stresses; thus, in the case of varying stresses, a specific superposition principle or time-integration is necessary. Nowadays, when modeling concrete creep, the engineering focus is more on the application of sophisticated time-integration methods than on choosing the more appropriate creep model. For this reason, this paper presents a method to quantify the uncertainties of creep prediction originating from the selection of creep models or from the time-integration methods. By adapting variance-based global sensitivity analysis, a methodology is developed to quantify the influence of creep model selection or the choice of time-integration method. Applying the developed method, general recommendations on how to model creep behavior under varying stresses are given.
Keywords: Concrete creep models, time-integration methods, sensitivity analysis, prediction uncertainty.
1232 A New Framework for Evaluation and Prioritization of Suppliers using a Hierarchical Fuzzy TOPSIS
Authors: Mohammad Taghi Taghavifard, Danial Mirheydari
Abstract:
This paper suggests an algorithm for the evaluation and selection of suppliers. At the beginning, all the materials and services used by the organization were identified and categorized with regard to their nature by the ABC method. Afterwards, in order to reduce risk factors and maximize the organization's profit, purchasing strategies were determined. Then, appropriate criteria were identified for the primary evaluation of suppliers applying to the organization. The output of this stage was a list of suppliers qualified by the organization to participate in its tenders. Subsequently, considering a particular material, appropriate criteria for its ordering were determined, taking into account the material's specifications as well as the organization's needs. Finally, for the purpose of validation and verification, the proposed model was applied to Mobarakeh Steel Company (MSC), whose qualified suppliers were ranked by means of a hierarchical fuzzy TOPSIS method. The obtained results show that the proposed algorithm is effective, efficient, and easy to apply.
Keywords: ABC analysis, Hierarchical Fuzzy TOPSIS, Primary supplier evaluation, Purchasing strategy, Supplier selection.
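For orientation, the sketch below ranks suppliers with crisp TOPSIS, the core of the method; the paper's hierarchical fuzzy variant extends this with fuzzy numbers and a criteria hierarchy. The decision matrix, weights, and the assumption that all criteria are benefit-type are illustrative.

```python
# Crisp TOPSIS sketch: rank alternatives by closeness to the ideal solution.
# Rows are suppliers, columns are benefit criteria; data are illustrative.
import numpy as np

def topsis(X, w):
    R = X / np.linalg.norm(X, axis=0)           # vector-normalize each criterion
    V = R * w                                   # weighted normalized matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)  # ideal / anti-ideal points
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)              # closeness: higher is better

X = np.array([[7, 9, 8], [8, 7, 6], [6, 8, 9.0]])
w = np.array([0.5, 0.3, 0.2])
print(topsis(X, w))                             # rank suppliers by these scores
```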
1231 Mining Image Features in an Automatic Two-Dimensional Shape Recognition System
Authors: R. A. Salam, M. A. Rodrigues
Abstract:
The number of features required to represent an image can be very large, and using all available features to recognize objects can suffer from the curse of dimensionality. Feature selection and extraction is the pre-processing step of image mining. The main issues in analyzing images are the effective identification of features and their extraction. The mining problem focused on here is the grouping of features for different shapes. Experiments were conducted using the shape outline as the feature. Shape outline readings are put through a normalization and dimensionality reduction process using an eigenvector-based method to produce a new set of readings. After this pre-processing step, the data are grouped by their shapes. Through statistical analysis of these readings together with peak measures, a robust classification and recognition process is achieved. Tests showed that the suggested methods are able to automatically recognize objects through their shapes. Finally, experiments also demonstrate the system's invariance to rotation, translation, scale, reflection, and a small degree of distortion.
Keywords: Image mining, feature selection, shape recognition, peak measures.
1230 A Study of Priority Evaluation and Resource Allocation for Revitalization of Cultural Heritages in the Urban Development
Authors: Wann-Ming Wey, Yi-Chih Huang
Abstract:
Proper maintenance and preservation of significant cultural heritages or historic buildings is necessary. It can not only enhance environmental benefits and a sense of community, but also preserve a city's history and people's memory, allowing the next generation a glimpse of the past and achieving the goal of sustainably preserved cultural assets. However, the management of maintenance work has so far not been appropriate for many designated heritages or historic buildings, and the planning and implementation of reuse has yet to achieve a breakthrough specification. This leaves the heritages merely "reserved" in name, rather than "conserved" in the real sense. The restoration and preservation of cultural heritages is an important study issue because of its historical significance, symbolism, and economic benefits. However, decision makers such as public-sector officials often face the question of which heritage should be prioritized for restoration under limited available budgets. Very few techniques are available today to determine appropriate restoration priorities for diverse historical heritages, perhaps because of a lack of systematized decision-making aids. In the past, discussions of the management and maintenance of cultural assets were limited to the selection of reuse alternatives rather than the allocation of resources. In view of this, this research adopts integrated research methods to solve the problems that decision makers may encounter when allocating resources for the management and maintenance of heritages and historic buildings.
The purpose of this study is to develop a sustainable decision-making model for local governments to resolve these problems. We propose an alternative decision support model to prioritize restoration needs within limited budgets. The model is constructed based on the fuzzy Delphi, fuzzy analytic network process (FANP), and goal programming (GP) methods. In order to avoid misallocating resources, this research proposes a precise procedure that takes multi-stakeholder views, limited costs, and resources into consideration. The combination of many factors and goals is also taken into account to find the highest-priority, feasible solution. To illustrate the proposed approach, seven cultural heritages in Taipei City are used as an empirical study, and the results are analyzed in depth to explain the application of our proposed approach.
Keywords: Cultural Heritage, Historic Buildings, Priority Evaluation, Multi-Criteria Decision Making, Goal Programming, Fuzzy Analytic Network Process, Resource Allocation.
1229 Pharmacology Applied Learning Program in Preclinical Years – Student Perspectives
Authors: Amudha Kadirvelu, Sunil Gurtu, Sivalal Sadasivan
Abstract:
The pharmacology curriculum plays an integral role in medical education, and learning pharmacology in order to choose and prescribe drugs is a major challenge encountered by students. We developed pharmacology applied learning activities for first-year medical students that included realistic clinical situations with escalating complications, which required the students to analyze the situation and think critically to choose a safe drug. Tutor feedback was provided at the end of each session. An evaluation was done to assess the students' level of interest and the usefulness of the sessions for the rational selection of drugs. The majority (98%) of the students agreed that the session was an extremely useful learning exercise and that similar sessions would help in the rational selection of drugs. Applied learning sessions in the early years of a medical program may promote deep learning and bridge the gap between pharmacology theory and clinical practice. Besides, they may also enhance safe prescribing skills.
Keywords: Medical education, pharmacology curriculum, applied learning, safe prescribing.
1228 The Resource Description Framework (RDF) as a Modern Structure for Medical Data
Authors: Gabriela Lindemann, Danilo Schmidt, Thomas Schrader, Dietmar Keune
Abstract:
The amount and heterogeneity of data in biomedical research, notably in interdisciplinary fields, requires new methods for the collection, presentation, and analysis of information. Important data from laboratory experiments as well as patient trials are available but come from distributed resources. The Charité University Hospital Berlin, together with the German Research Foundation (DFG), has established a new information service centre for kidney diseases and transplantation (Open European Nephrology Science Centre – OpEN.SC). Besides the collaborative aspect of creating new research groups, every partner or institution of this science information centre that makes its own data available is allowed to search the whole data pool of the various involved centres. A core task is the implementation of a non-restricting, open data structure for the various different data sources. We decided to use a modern RDF model, and in a first phase we transformed original data coming from the web-based Electronic Patient Record database TBase©.
Keywords: Medical databases, Resource Description Framework (RDF), metadata repository.
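As a small, assumed illustration of the RDF structure (the OpEN.SC schema itself is not given in the abstract), the sketch below builds a few triples for a patient record fragment with rdflib; the namespace and property names are hypothetical.

```python
# Hypothetical patient record fragment as RDF triples, using rdflib.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/opensc/")   # hypothetical namespace
g = Graph()
patient = URIRef(EX["patient/42"])
g.add((patient, RDF.type, EX.Patient))
g.add((patient, EX.diagnosis, Literal("chronic kidney disease")))
g.add((patient, EX.creatinine, Literal(2.4, datatype=XSD.decimal)))

print(g.serialize(format="turtle"))            # human-readable Turtle output
```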
1227 Performance Evaluation of a Neural Network based General Purpose Space Vector Modulator
Authors: A. Muthuramalingam, S. Himavathi
Abstract:
Space Vector Modulation (SVM) is an optimum Pulse Width Modulation (PWM) technique for inverters used in variable frequency drive applications. It is computationally rigorous and hence limits the inverter switching frequency. An increase in switching frequency can be achieved using Neural Network (NN) based SVM implemented on application-specific chips. This paper proposes a neural network based SVM technique for a Voltage Source Inverter (VSI). The proposed network is independent of switching frequency. Different architectures are investigated keeping the total number of neurons constant. The performance of the inverter is compared at various switching frequencies for the different architectures of NN based SVM. From the results obtained, the network with minimum resources and an appropriate word length is identified, along with the bit precision required for this application. The network with 8-bit precision is implemented in the IC XCV 400 and the results are presented. The performance of NN based general purpose SVM with higher bit precision is also discussed.
Keywords: NN based SVM, FPGA Implementation, Layer Multiplexing, NN structure and Resource Reduction, Performance Evaluation.
1226 A New Fuzzy DSS/ES for Stock Portfolio Selection using Technical and Fundamental Approaches in Parallel
Authors: H. Zarei, M. H. Fazel Zarandi, M. Karbasian
Abstract:
A Decision Support System/Expert System for stock portfolio selection is presented in which, in the first step, both technical and fundamental data are used to estimate technical and fundamental return and risk (1st phase); then, the estimated values are aggregated with the investor's preferences (2nd phase) to produce a convenient stock portfolio. In the 1st phase, there are two expert systems, each responsible for either technical or fundamental estimation. In the technical expert system, twenty-seven candidate variables are identified for each stock, and the effective variables are selected using a rough-sets-based clustering method (RC). Next, for each stock, two fuzzy rule-bases are developed with the fuzzy C-means method and the Takagi-Sugeno-Kang (TSK) approach: one for return estimation and the other for risk. Thereafter, the parameters of the rule-bases are tuned with the backpropagation method. In parallel, for the fundamental expert systems, fuzzy rule-bases are identified in the form of "IF-THEN" rules through brainstorming with stock market experts, with input data derived from financial statements; as a result, two fuzzy rule-bases are generated for all the stocks, one for return and the other for risk. In the 2nd phase, user preferences are represented by four criteria obtained by questionnaire. Using an expert system, the four estimated values of return and risk are aggregated with the respective values of user preference. Finally, a fuzzy rule-base of four rules treats these values and produces a ranking score for each stock, which leads to a satisfactory portfolio for the user. The stocks of six manufacturing companies and the period 2003-2006 were selected for data gathering.
Keywords: Stock Portfolio Selection, Fuzzy Rule-Base Expert Systems, Financial Decision Support Systems, Technical Analysis, Fundamental Analysis.
1225 Aircraft Selection Problem Using Decision Uncertainty Distance in Fuzzy Multiple Criteria Decision Making Analysis
Authors: C. Ardil
Abstract:
Aircraft have different capabilities and specifications according to the required strategic goals and objectives in operations. With various types on the market, each with different characteristics, it becomes difficult to select a suitable aircraft for certain operations and requirements. The entropy weighting method (EWM) is a useful, highly consistent, and reliable method for obtaining the weights of the criteria, and it is worth integrating with the decision uncertainty distance (DUD) method, which is more applicable and requires less computation than other methods. An illustrative example is presented to demonstrate the validity and usability of the proposed methodology. The ranking results match those of the distance-based technique for order preference by similarity to ideal solution (TOPSIS), which shows the robustness of the entropy-DUD hybrid method. Validity analysis shows that the proposed hybrid multiple criteria decision-making analysis (MCDMA) methodology is quantitatively stable and reliable.
Keywords: aircraft selection, decision uncertainty distance (DUD), multiple criteria decision making analysis, MCDMA, TOPSIS
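A minimal sketch of the entropy weighting step follows: criterion weights are derived from the information entropy of the normalized decision matrix, so criteria whose values are more dispersed across alternatives receive larger weights. The aircraft-by-criteria matrix is illustrative, and strictly positive entries are assumed.

```python
# Entropy weighting method (EWM): criterion weights from column entropy.
import numpy as np

def entropy_weights(X):
    """Weights from the entropy of each criterion's column (X > 0 assumed)."""
    P = X / X.sum(axis=0)                        # proportion per alternative
    k = 1.0 / np.log(X.shape[0])
    entropy = -k * (P * np.log(P)).sum(axis=0)   # column entropy in [0, 1]
    d = 1.0 - entropy                            # degree of diversification
    return d / d.sum()                           # normalized criterion weights

X = np.array([[3.0, 7.0, 5.0],                   # rows: aircraft, cols: criteria
              [4.0, 6.0, 8.0],
              [5.0, 5.0, 6.0]])
print(entropy_weights(X))
```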
1224 Centre Of Mass Selection Operator Based Meta-Heuristic For Unbounded Knapsack Problem
Authors: D. Venkatesan, K. Kannan, S. Raja Balachandar
Abstract:
In this paper, a new genetic algorithm based on a heuristic operator and a centre-of-mass selection operator (CMGA) is designed for the unbounded knapsack problem (UKP), an NP-hard combinatorial optimization problem. The proposed genetic algorithm is based on a heuristic operator which utilizes problem-specific knowledge. This centre-of-mass operator, when combined with other genetic operators, forms an algorithm competitive with existing ones. Computational results show that the proposed algorithm is capable of obtaining high-quality solutions for standard randomly generated knapsack instances. A comparative study of CMGA and a simple GA on unbounded knapsack instances of size up to 200 shows the superiority of CMGA. Thus, CMGA is an efficient tool for solving the UKP and is competitive with other genetic algorithms.
Keywords: Genetic Algorithm, Unbounded Knapsack Problem, Combinatorial Optimization, Meta-Heuristic, Center of Mass
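For scale, the UKP admits an exact dynamic program in pseudo-polynomial time, which is a useful baseline for checking a meta-heuristic's solutions on small instances. A sketch follows; the instance is illustrative.

```python
# Exact DP for the unbounded knapsack problem: best[c] is the maximum value
# achievable with total weight at most c, items reusable any number of times.
def unbounded_knapsack(capacity, weights, values):
    best = [0] * (capacity + 1)
    for c in range(1, capacity + 1):
        for w, v in zip(weights, values):
            if w <= c:
                best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(unbounded_knapsack(10, weights=[2, 3, 5], values=[3, 5, 9]))  # 18 (5+5)
```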
1223 MFCA: An Environmental Management Accounting Technique for Optimal Resource Efficiency in Production Processes
Authors: Omolola A. Tajelawi, Hari L. Garbharran
Abstract:
Revenue leakage is one of the major challenges manufacturers face in production processes, as much of the input material that should emerge from the lines as product is lost as waste. Rather than generating income from material input which is meant to end up as product, further losses are incurred as costs in order to manage the waste generated. In addition, due to the lack of a clear view of the flow of resources on the lines from the input to the output stage, acquiring information on the true cost of the waste generated has become a challenge. This has given rise to the conceptualization and implementation of waste minimization strategies by several manufacturing industries. This paper reviews the principles and applications of three environmental management accounting tools, namely Activity-Based Costing (ABC), Life-Cycle Assessment (LCA), and Material Flow Cost Accounting (MFCA), in the manufacturing industry, and their effectiveness in curbing revenue leakage. The paper unveils the strengths and limitations of each of the tools, highlighting the tool that could allow for optimal resource utilization, transparency in the production process, and improved cost efficiency. Findings from this review reveal that MFCA may offer superior advantages with regard to the provision of more detailed information (in both physical and monetary terms) on the flow of material inputs throughout the production process compared with the other environmental accounting tools. This paper therefore makes a case for the adoption of MFCA as a viable technique for the identification and reduction of waste in production processes, and for effective decision making by production managers, financial advisors, and other relevant stakeholders.
Keywords: MFCA, environmental management accounting, resource efficiency, waste reduction, revenue losses.
1222 The Potential of Roof Top Rain Water Harvesting as a Water Resource in Jordan: Featuring Two Application Case Studies
Authors: Zain M. Al-Houri, Oday K. Abu-Hadba, Khaled A. Hamdan
Abstract:
Roof top rainwater harvesting (RWH) has been carried out worldwide to provide an inexpensive source of water for many people. This research aims at evaluating the potential of roof top rainwater harvesting as a water resource in Jordan. For this work, two case studies in the Al-Jubiha and Shafa-Badran districts of Amman city were selected. All existing rooftops in both districts were identified by digitizing 2012 satellite images of the two districts using Google Earth and ArcGIS tools. The rational method was used to estimate the potential volume of rainwater that can be harvested from the digitized rooftops. Results indicated that 1.17 and 0.526 MCM/yr can be harvested in the Al-Jubiha and Shafa-Badran districts, respectively. This study should draw attention to the importance of implementing the RWH technique in Jordanian residences as a viable alternative for ensuring a continued source of non-potable water.
Keywords: Amman districts, ArcGIS, Rational method, Roof top rain water harvesting.
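The rational-method estimate referred to above reduces, for annual volumes, to harvestable volume = runoff coefficient × rainfall depth × roof area. A sketch follows; the runoff coefficient and rainfall depth are illustrative assumptions, not the study's values.

```python
# Rational-method estimate of annual rooftop harvest:
# volume (m^3) = runoff coefficient * rainfall depth (m) * roof area (m^2).
def harvestable_volume_m3(roof_area_m2, annual_rainfall_mm, runoff_coeff=0.8):
    return runoff_coeff * (annual_rainfall_mm / 1000.0) * roof_area_m2

# e.g. a 250 m^2 roof and 500 mm/yr rainfall (hypothetical Amman-like values)
print(harvestable_volume_m3(250, 500))   # -> 100.0 cubic metres per year
```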
1221 A System Dynamic Based DSS for Ecological Urban Management in Alexandria, Egypt
Authors: Mona M. Salem, Khaled S. Al-Hagla, Hany M. Ayad
Abstract:
The concept of urban metabolism has increasingly been employed in a diverse range of disciplines as a means to analyze and theorize the city. Urban ecology has a particular focus on the implications of applying the metabolism concept to the urban realm. This approach has been developed by a few researchers, though it has rarely, if ever, been used in policy development for city planning. The aim of this research is to use ecologically informed urban planning interventions to increase the sustainability of urban metabolism, with a special focus on land stock as a most important city resource, by developing a system-dynamics-based DSS. This model identifies two critical management-strategy variables for the Strategic Urban Plan Alexandria SUP 2032. The result is a comprehensive and precise quantitative approach to monitoring, measuring, evaluating, and observing dynamic urban changes, working as a decision support system (DSS) for policy making.
Keywords: Alexandria SUP 2032, DSS, ecology, land resource, LULCC, management, metabolism, model, scenarios, System dynamics, urban development.
1220 Unmanned Combat Aircraft Selection using Fuzzy Proximity Measure Method in Multiple Criteria Group Decision Making
Authors: C. Ardil
Abstract:
The decision to select an unmanned combat aircraft is complicated, since several options and conflicting criteria must be considered simultaneously. When making a multiple criteria decision, it is important to consider the selected evaluation criteria, including priceability, payloadability, stealthability, speedability, and survivability. The fundamental goal of the study is to select the best unmanned combat aircraft by taking these evaluation criteria into account. The optimal aircraft was chosen using the fuzzy proximity measure method, which enables decision makers to express preferences as standard fuzzy set numbers during the multiple criteria decision-making process. To assess the applicability of the proposed approach, a numerical example is provided. Finally, by comparing the candidate unmanned combat aircraft, the proposed method produced a successful application, and the best aircraft was selected.
Keywords: standard fuzzy sets (SFS), unmanned combat aircraft selection, multiple criteria decision making (MCDM), multiple criteria group decision making (MCGDM), proximity measure method (PMM)