Search results for: rank -principal certificate
1230 The Role of Academic Leaders at Jerash University in Crisis Management 'Corona Virus as a Model'
Authors: Khaled M. Hama, Mohammed Al Magableh, Zaid Al Kuri, Ahmad Qayam
Abstract:
The study aimed to identify the role of academic leaders at Jerash University in crisis management from the faculty members' point of view, 'the emerging Corona pandemic as a model', as well as to identify differences in that role at the significance level (α ≤ 0.05) according to the study variables (gender, academic rank, and years of experience), and to identify proposals that contribute to developing the performance of academic leaders at Jerash University in crisis management, 'the Corona pandemic as a model'. The study was applied to a randomly selected sample of (72) faculty members at Jerash University. The researcher designed a questionnaire as the study tool, comprising two parts: the first part covered the personal data of the study sample members, and the second part was divided into five areas and (34) items to reveal the role of academic leaders at Jerash University in crisis management, with the Corona pandemic as a model. The validity and reliability of the tool were confirmed, and the study used the descriptive analytical method. The study reached the following results: the role of academic leaders at Jerash University in crisis management from the point of view of faculty members, 'the emerging Corona pandemic as a model', came to a high degree, and there were no statistically significant differences at the level of statistical significance (α = 0.05) between the arithmetic means of the study sample's estimates of the role of academic leaders at Jerash University in crisis management attributable to the study variables (gender, academic rank, and years of experience).
Keywords: academic leaders, crisis management, corona pandemic, Jerash University
Procedia PDF Downloads 54
1229 Understanding the Information in Principal Component Analysis of Raman Spectroscopic Data during Healing of Subcritical Calvarial Defects
Authors: Rafay Ahmed, Condon Lau
Abstract:
Bone healing is a complex and sequential process involving changes at the molecular level. Raman spectroscopy is a promising technique for studying bone mineral and matrix environments simultaneously. In this study, subcritical calvarial defects were used to study bone composition during healing without disturbing the fracture. The model allowed monitoring of the natural healing of bone while avoiding mechanical harm to the callus. Calvarial defects were created using a 1 mm burr drill in the parietal bones of Sprague-Dawley rats (n=8) and served as in vivo defects. After 7 days, the skulls were harvested following euthanasia. One additional defect per sample was created on the opposite parietal bone using the same calvarial defect procedure, to serve as a control defect. Raman spectroscopy (785 nm) was used to investigate bone parameters of three different skull surfaces: in vivo defects, control defects, and normal surface. Principal component analysis (PCA) was utilized for the data analysis and interpretation of the Raman spectra and helped in the classification of groups. PCA was able to distinguish in vivo defects from the normal surface and control defects. PC1 shows the major variation at 958 cm⁻¹, which corresponds to the ʋ1 phosphate mineral band. PC2 shows the major variation at 1448 cm⁻¹, which is the characteristic band of CH2 deformation and corresponds to collagens. Raman parameters, namely the mineral-to-matrix ratio and crystallinity, were found to be significantly decreased in the in vivo defects compared to the normal surface and controls. Scanning electron microscope and optical microscope images show the formation of a newly generated matrix by means of bony bridges of collagens. An optical profiler shows that surface roughness increased by 30% from controls to in vivo defects after 7 days. These results agree with the Raman assessment parameters and confirm new collagen formation during healing.
Keywords: Raman spectroscopy, principal component analysis, calvarial defects, tissue characterization
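A minimal sketch of the PCA loading inspection described above, assuming spectra are stored as rows of a matrix on a known wavenumber grid (the grid, sample count, and data are hypothetical placeholders, not the study's measurements):

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: rows = spectra, columns = intensities on a common wavenumber grid
wavenumbers = np.linspace(400, 1800, 1401)      # cm^-1 axis (assumed)
spectra = np.random.rand(24, wavenumbers.size)  # placeholder for measured spectra

pca = PCA(n_components=2)
scores = pca.fit_transform(spectra)             # per-spectrum scores used to separate groups

# Inspect which bands drive each component, e.g. ~958 cm^-1 (phosphate), ~1448 cm^-1 (CH2)
for i, loading in enumerate(pca.components_, start=1):
    peak = wavenumbers[np.argmax(np.abs(loading))]
    print(f"PC{i}: largest loading near {peak:.0f} cm^-1")
```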
Procedia PDF Downloads 223
1228 On the Existence of Homotopic Mapping Between Knowledge Graphs and Graph Embeddings
Authors: Jude K. Safo
Abstract:
Knowledge Graphs (KG) and their relation to Graph Embeddings (GE) represent a unique data structure in the landscape of machine learning (relative to image, text, and acoustic data). Unlike the latter, GEs are the only data structure sufficient for representing the hierarchically dense, semantic information needed for use-cases like supply chain data and protein folding, where the search space exceeds the limits of traditional search methods (e.g., PageRank, Dijkstra, etc.). While GEs are effective for compressing low-rank tensor data, at scale they begin to introduce a new problem of 'data retrieval', which we observe in Large Language Models. Notable attempts by TransE, TransR, and other prominent industry standards have shown peak performance just north of 57% on the WN18 and FB15K benchmarks, insufficient for practical industry applications. They are also limited, in scope, to next node/link predictions. Traditional linear methods like Tucker, CP, PARAFAC, and CANDECOMP quickly hit memory limits on tensors exceeding 6.4 million nodes. This paper outlines a topological framework for linear mapping between concepts in KG space and GE space that preserves cardinality. Most importantly, we introduce a traceable framework for composing dense linguistic structures, and we demonstrate the performance this model achieves on the WN18 benchmark. The model does not rely on Large Language Models (LLM), though the applications are certainly relevant there as well.
Keywords: representation theory, large language models, graph embeddings, applied algebraic topology, applied knot theory, combinatorics
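For context, a minimal sketch of the TransE-style scoring that the benchmarks above refer to (a translation model scoring a triple (head, relation, tail); the embeddings here are untrained random placeholders, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_entities, n_relations = 50, 1000, 20

# Placeholder embeddings; in practice these are trained by gradient descent on triples
entity_emb = rng.normal(size=(n_entities, dim))
relation_emb = rng.normal(size=(n_relations, dim))

def transe_score(h: int, r: int, t: int) -> float:
    """TransE scores a triple by how well head + relation translates to tail."""
    return -np.linalg.norm(entity_emb[h] + relation_emb[r] - entity_emb[t])

# Link prediction ranks all candidate tails by score for a given (head, relation)
scores = [transe_score(0, 3, t) for t in range(n_entities)]
print("best candidate tail:", int(np.argmax(scores)))
```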
Procedia PDF Downloads 68
1227 Principal Component Analysis Combined Machine Learning Techniques on Pharmaceutical Samples by Laser Induced Breakdown Spectroscopy
Authors: Kemal Efe Eseller, Göktuğ Yazici
Abstract:
Laser-induced breakdown spectroscopy (LIBS) is a rapid optical atomic emission spectroscopy used for material identification and analysis, with the advantages of in-situ analysis, elimination of intensive sample preparation, and micro-destructive properties for the material to be tested. LIBS delivers short laser pulses onto the material in order to create plasma by exciting the material beyond a certain threshold. The plasma characteristics, which consist of wavelength values and intensity amplitudes, depend on the material and the experiment's environment. In the present work, spectrum profiles of medicine samples were obtained via LIBS. The datasets include two different concentrations for each of two paracetamol-based medicines, namely Aferin and Parafon. The spectrum data of the samples were preprocessed by filling outliers based on quartiles, smoothing spectra to eliminate noise, and normalizing both the wavelength and intensity axes. Statistical information was obtained, and principal component analysis (PCA) was applied to both the preprocessed and raw datasets. The machine learning models were set up with two different train-test splits: 70% training - 30% test and 80% training - 20% test. Cross-validation was preferred to protect the models against overfitting, since the sample size is small. The machine learning results for the preprocessed and raw datasets were compared for both splits. This is the first time that all supervised machine learning classification algorithms, consisting of Decision Trees, Discriminant analysis, naïve Bayes, Support Vector Machines (SVM), k-NN (k-Nearest Neighbor), Ensemble Learning, and Neural Network algorithms, have been applied to LIBS data of paracetamol-based pharmaceutical samples, and to their different concentrations, on preprocessed and raw datasets in order to observe the effect of preprocessing.
Keywords: machine learning, laser-induced breakdown spectroscopy, medicines, principal component analysis, preprocessing
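A hedged sketch of that preprocessing-plus-classification pipeline (the smoothing window, component count, class layout, and data are assumptions for illustration, not the authors' settings):

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

spectra = np.random.rand(40, 2048)      # placeholder LIBS spectra
labels = np.repeat([0, 1, 2, 3], 10)    # e.g., two medicines x two concentrations

# Preprocessing: clip outliers to quartile fences, smooth, normalize intensities
q1, q3 = np.percentile(spectra, [25, 75])
iqr = q3 - q1
clipped = np.clip(spectra, q1 - 1.5 * iqr, q3 + 1.5 * iqr)
smoothed = savgol_filter(clipped, window_length=11, polyorder=3, axis=1)
normalized = smoothed / np.linalg.norm(smoothed, axis=1, keepdims=True)

features = PCA(n_components=10).fit_transform(normalized)

# Cross-validation guards against overfitting on a small sample
print(cross_val_score(SVC(), features, labels, cv=5).mean())
```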
Procedia PDF Downloads 87
1226 Exploratory Study of the Influencing Factors for Hotels' Competitors
Authors: Asma Ameur, Dhafer Malouche
Abstract:
Hotel competitiveness research is an essential phase of the marketing strategy for any hotel. Certainly, knowing a hotel's competitors helps the hotelier grasp its position in the market and helps customers make the right choice when picking a hotel. Thus, competitiveness is an important indicator that can be influenced by various factors. In fact, the issue of competitiveness, i.e., the ability to cope with competition, remains a difficult and complex concept to define and to exploit. Therefore, the purpose of this article is to conduct an exploratory study to calculate a competitiveness indicator for hotels. Further on, this paper makes it possible to determine the criteria with a direct or indirect effect on the image and perception of a hotel. The research looks into the right model for hotel competitiveness. For this reason, we exploit different theoretical contributions in the field of machine learning. Thus, we use statistical techniques such as Principal Component Analysis (PCA) to reduce the dimensions, as well as other statistical modeling techniques. This paper presents a survey covering the techniques and methods used in hotel competitiveness research. Furthermore, this study allows us to deduce the significant variables that influence the determination of a hotel's competitors. Lastly, the experiences discussed in this article found that a hotel's competitors are influenced by several factors with different rates.
Keywords: competitiveness, e-reputation, hotels' competitors, online hotel review, principal component analysis, statistical modeling
Procedia PDF Downloads 119
1225 Rural Households' Resilience to Food Insecurity in Niger
Authors: Aboubakr Gambo, Adama Diaw, Tobias Wunscher
Abstract:
This study attempts to identify factors affecting rural households' resilience to food insecurity in Niger. For this, we first create a resilience index by applying Principal Component Analysis to the following five variables at the household level: income, food expenditure, duration of grain held in stock, livestock in Tropical Livestock Units, and number of farms exploited; second, we apply Structural Equation Modelling to identify the determinants. Data from the 2010 National Survey on Households' Vulnerability to Food Insecurity, done by the National Institute of Statistics, is used. The study shows that asset and social safety net indicators are significant and have a positive impact on households' resilience. Climate change, approximated by long-term mean rainfall, has a negative and significant effect on households' resilience to food insecurity. The results indicate that to strengthen households' resilience to food insecurity, there is a need to increase assistance to households through social safety nets and to help them gather more resources in order to acquire more assets. Furthermore, early warning of climatic events could alert households, especially farmers, to be prepared and avoid the important losses that they experience whenever an adverse climatic event occurs.
Keywords: food insecurity, principal component analysis, structural equation modelling, resilience
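A minimal sketch of building such a PCA-based composite index from household variables (column names and data are hypothetical; the paper's exact weighting may differ):

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical household-level table with the five indicator variables
df = pd.DataFrame(np.random.rand(200, 5), columns=[
    "income", "food_expenditure", "grain_stock_days", "tlu_livestock", "farms"])

# Standardize, then use the first principal component as the resilience index
z = StandardScaler().fit_transform(df)
pca = PCA(n_components=1)
df["resilience_index"] = pca.fit_transform(z)[:, 0]

print("variance explained by PC1:", pca.explained_variance_ratio_[0])
print(df["resilience_index"].describe())
```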
Procedia PDF Downloads 361
1224 Freight Forwarders' Liability: A Need for Revival of UNIDROIT Draft Convention after Six Decades
Authors: Mojtaba Eshraghi Arani
Abstract:
Freight forwarders, who are known as the 'architects of transportation', play a vital role in supply chain management. The package of various services they provide has made the legal nature of freight forwarders very controversial, so that they might be qualified on one occasion as principal or carrier and, on other occasions, as agent of the shipper, as the case may be. They can even be involved in the transportation process as the agent of the shipping line, which makes the situation much more complicated. The courts in all countries have long had trouble distinguishing the 'forwarder as agent' from the 'forwarder as principal' (as is outstanding in the prominent case of 'Vastfame Camera Ltd v Birkart Globistics Ltd And Others', 2005, Hong Kong). It is not fully known, in the case of a claim against a forwarder, which particular parameter a judge would use among the multiple, and sometimes contradictory, tests for determining the scope of the forwarder's liability. In particular, every country has its own legal parameters for qualifying freight forwarders, completely different from the others, as is the case in France in comparison with Germany and England. The unpredictability of the courts' decisions in this regard has provided freight forwarders with the opportunity to impose any limitation or exception of liability while pretending to play the role of a principal, consequently making the cargo interests incur ever-increasing damage. The transportation industry needs to remove such uncertainty by unifying the national laws governing freight forwarders' liability. A long time ago, in 1967, the International Institute for the Unification of Private Law (UNIDROIT) prepared a draft convention called the 'Draft Convention on Contract of Agency for Forwarding Agents Relating to International Carriage of Goods' (hereinafter the 'UNIDROIT draft convention'). The UNIDROIT draft convention provided a clear and certain framework for the liability of the freight forwarder in each capacity, as agent or carrier, but it failed to become a convention and was eventually consigned to oblivion. Today, nearly six decades from that era, the necessity of such a convention can be felt clearly. However, one might reason that the same grounds, in particular the resistance by the forwarders' association, FIATA, still exist, and thus it is not logical to revive a forgotten draft convention after such a long period of time. It is argued in this article that the main reason for resisting the UNIDROIT draft convention in the past was the pending efforts to develop the 1980 United Nations Convention on International Multimodal Transport of Goods. However, the latter convention failed to come into force in due time, with no new accession since 1996, as a result of which the UNIDROIT draft convention must be strongly revived and immediately submitted to the relevant diplomatic conference. A qualitative method with the concept of interpretation of data collection has been used in this manuscript. The source of the data is the analysis of international conventions and cases.
Keywords: freight forwarder, revival, agent, principal, UNIDROIT, draft convention
Procedia PDF Downloads 74
1223 Monitoring Blood Pressure Using Regression Techniques
Authors: Qasem Qananwah, Ahmad Dagamseh, Hiam AlQuran, Khalid Shaker Ibrahim
Abstract:
Blood pressure gives physicians deep insight into the cardiovascular system. Determining an individual's blood pressure is a standard clinical procedure for cardiovascular system problems. The conventional techniques to measure blood pressure (e.g., the cuff method) allow only a limited number of readings in a given period (e.g., every 5-10 minutes). Additionally, these systems cause turbulence in blood flow, impeding continuous blood pressure monitoring, especially in emergency cases or critically ill persons. In this paper, the most important statistical features in photoplethysmogram (PPG) signals were extracted to estimate blood pressure noninvasively. PPG signals from more than 40 subjects were measured and analyzed, and 12 features were extracted. The features were fed to principal component analysis (PCA) to find the most important independent features that have the highest correlation with blood pressure. The results show that the mean stiffness index and the standard deviation of the beat-to-beat heart rate were the most important features. A model representing both features for Systolic Blood Pressure (SBP) and Diastolic Blood Pressure (DBP) was obtained using a statistical regression technique. Surface fitting was used to best fit the series of data, and the results show that the error in estimating the SBP is 4.95% and in estimating the DBP is 3.99%.
Keywords: blood pressure, noninvasive optical system, principal component analysis, PCA, continuous monitoring
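A rough sketch of that feature-to-pressure regression, assuming a precomputed feature table (the feature matrix, model order, and target values are illustrative assumptions, not the paper's data):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

X = np.random.rand(40, 12)          # 12 statistical PPG features per subject (placeholder)
sbp = 90 + 60 * np.random.rand(40)  # placeholder systolic pressures

# PCA picks out the dominant independent feature directions
pcs = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))

# A second-order surface fit of SBP over the two retained components
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(pcs, sbp)
pred = model.predict(pcs)
print("mean absolute percentage error:", np.mean(np.abs(pred - sbp) / sbp) * 100)
```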
Procedia PDF Downloads 161
1222 Variation among East Wollega Coffee (Coffea arabica L.) Landraces for Quality Attributes
Authors: Getachew Weldemichael, Sentayehu Alamerew, Leta Tulu, Gezahegn Berecha
Abstract:
The coffee quality improvement program is becoming the focus of coffee research, as the world coffee consumption pattern has shifted to high-quality coffee. However, there is limited information on the genetic variation of C. arabica for quality improvement in potential specialty coffee growing areas of Ethiopia. Therefore, this experiment was conducted with the objectives of determining the magnitude of variation among 105 coffee accessions collected from east Wollega coffee growing areas and assessing correlations between the different coffee quality attributes. It was conducted in an RCRD with three replications. Data on green bean physical characters (shape and make, bean color, and odor) and organoleptic cup quality traits (aromatic intensity, aromatic quality, acidity, astringency, bitterness, body, flavor, and overall standard of the liquor) were recorded. Analysis of variance, clustering, genetic divergence, principal component, and correlation analyses were performed using SAS software. The results revealed highly significant differences (P<0.01) among the accessions for all quality attributes except odor and bitterness. Among the tested accessions, EW104/09, EW101/09, EW58/09, EW77/09, EW35/09, EW71/09, EW68/09, EW96/09, EW83/09, and EW72/09 had the highest total coffee quality values (the sum of bean physical and cup quality attributes). These genotypes could serve as a source of genes for green bean physical characters and cup quality improvement in Arabica coffee. Furthermore, cluster analysis grouped the coffee accessions into five clusters with significant inter-cluster distances, implying that there is moderate diversity among the accessions and that crossing accessions from divergent clusters would result in heterosis and recombinants in segregating generations. The principal component analysis revealed that the first three principal components, with eigenvalues greater than unity, accounted for 83.1% of the total variability due to the variation of the nine quality attributes considered for PC analysis, indicating that all quality attributes contribute equally to the grouping of the accessions into different clusters. Organoleptic cup quality attributes showed positive and significant correlations at both the genotypic and phenotypic levels, demonstrating the possibility of simultaneous improvement of the traits. Path coefficient analysis revealed that acidity, flavor, and body had a high positive direct effect on overall cup quality, implying that these traits can be used as indirect criteria to improve overall coffee quality. Therefore, it was concluded that there is considerable variation among the accessions, which needs to be properly conserved for future improvement of coffee quality. However, the variability observed for quality attributes must be further verified using biochemical and molecular analysis.
Keywords: accessions, Coffea arabica, cluster analysis, correlation, principal component
Procedia PDF Downloads 166
1221 Infrastructure Investment Law Formulation to Ensure Low Transaction Cost at Policy Level: Case Study of Public Private Partnership Project at the Ministry of Public Works and Housing of the Republic of Indonesia
Authors: Yolanda Indah Permatasari, Sudarsono Hardjosoekarto
Abstract:
The public private partnership (PPP) scheme is considered an alternative source of funding for infrastructure provision. However, the performance of the PPP scheme and the interest of the private sector in participating in the provision of infrastructure are still practically low. This phenomenon motivated the research to reconstruct the form of collaborative governance at the policy level from the perspective of the transaction costs of the PPP scheme. Soft-system methodology (SSM)-based action research was used as the research methodology. The result of this study concludes that the emergence of transaction cost sources at the policy level is caused by the absence of a law that governs infrastructure investment, especially the implementation of the PPP scheme. This absence causes an imbalance in risk allocation and risk mitigation between the public and private sectors. Thus, this research recommends the formulation of an infrastructure investment law that aims to minimize information asymmetry, to anticipate principal-principal problems, and to provide a legal basis that ensures risk certainty and guarantees fair risk allocation between the public and private sectors.
Keywords: public governance, public private partnership, soft system methodology, transaction cost
Procedia PDF Downloads 140
1220 Modeling Karachi Dengue Outbreak and Exploration of Climate Structure
Authors: Syed Afrozuddin Ahmed, Junaid Saghir Siddiqi, Sabah Quaiser
Abstract:
Various studies have reported that global warming causes an unstable climate and many serious impacts on the physical environment and public health. The increasing incidence of dengue is now a priority health issue and has become a health burden for Pakistan. This study investigated whether the spatial pattern of the environment causes the emergence or an increasing rate of dengue fever incidence affecting the population and its health. The climatic (environmental) structure data and the Dengue Fever (DF) data were processed by coding, editing, tabulating, recoding, and restructuring in terms of re-tabulation, and finally different statistical methods, techniques, and procedures were applied for the evaluation. The five climatic variables studied are precipitation (P), maximum temperature (Mx), minimum temperature (Mn), humidity (H), and wind speed (W), collected from 1980-2012. The dengue cases in Karachi from 2010 to 2012 are reported on a weekly basis. Principal component analysis is applied to explore the climatic variables and/or the climatic structure which may influence the increase or decrease in the number of dengue fever cases in Karachi. PC1 for the whole period represents the general atmospheric condition. PC2 for the dengue period is a contrast between precipitation and wind speed. PC3 is the weighted difference between maximum temperature and wind speed. PC4 for the dengue period is a contrast between maximum temperature and wind speed. Negative binomial and Poisson regression models are used to relate dengue fever incidence to the climatic variables and principal component scores. Relative humidity is estimated to increase the chances of dengue occurrence by 1.71%. Maximum temperature increases the chances of dengue occurrence by 19.48%. Minimum temperature increases the chances of dengue occurrence by 11.51%. Wind speed affects the weekly occurrence of dengue fever negatively, by 7.41%.
Keywords: principal component analysis, dengue fever, negative binomial regression model, Poisson regression model
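A hedged sketch of such a count-regression setup with statsmodels (the weekly counts and covariates are synthetic placeholders, not the Karachi data):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_weeks = 156  # 2010-2012, weekly

# Placeholder climate covariates (or PCA scores) and weekly dengue case counts
X = sm.add_constant(np.column_stack([
    rng.normal(size=n_weeks),   # humidity
    rng.normal(size=n_weeks),   # maximum temperature
    rng.normal(size=n_weeks),   # wind speed
]))
cases = rng.poisson(lam=20, size=n_weeks)

poisson_fit = sm.GLM(cases, X, family=sm.families.Poisson()).fit()
negbin_fit = sm.GLM(cases, X, family=sm.families.NegativeBinomial()).fit()

# exp(coefficient) gives the multiplicative effect on expected weekly cases
print(np.exp(poisson_fit.params))
print(negbin_fit.summary())
```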
Procedia PDF Downloads 445
1219 Verifiable Secure Computation of Large Scale Two-Point Boundary Value Problems Using Certificate Validation
Authors: Yogita M. Ahire, Nedal M. Mohammed, Ahmed A. Hamoud
Abstract:
Scientific computation outsourcing is gaining popularity because it allows customers with limited computing resources and storage devices to outsource complex computation workloads to more powerful service providers. However, it raises security and privacy concerns and challenges, such as customer input and output privacy, as well as cloud cheating behaviors. This study was motivated by these concerns and focused on privacy-preserving Two-Point Boundary Value Problems (BVP) as a common and realistic instance of verifiable safe multiparty computing. We look at a secure and verifiable scheme with correctness guarantees, utilizing standard multiparty approaches to compute the result of a computation and then solely using verifiable methods to check that the result is right.
Keywords: verifiable computing, cloud computing, secure and privacy BVP, secure computation outsourcing
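As an illustration of the underlying computation, a minimal two-point BVP solve with a cheap client-side check of the returned result (the specific equation u'' = -u with u(0)=0, u(1)=1 is an arbitrary example, not the paper's instance, and the residual check is only a plain stand-in for a cryptographic certificate):

```python
import numpy as np
from scipy.integrate import solve_bvp

# Example BVP: u'' = -u on [0, 1] with u(0) = 0, u(1) = 1
def ode(x, y):
    return np.vstack([y[1], -y[0]])        # y[0] = u, y[1] = u'

def bc(ya, yb):
    return np.array([ya[0], yb[0] - 1.0])

x = np.linspace(0.0, 1.0, 50)
sol = solve_bvp(ode, bc, x, np.zeros((2, x.size)))   # the outsourced "server" solve

# Cheap client-side checks: solver status, residual certificate, boundary values
assert sol.status == 0
assert np.max(sol.rms_residuals) < 1e-3              # residuals certify the solution quality
u0, u1 = sol.sol(0.0)[0], sol.sol(1.0)[0]
assert abs(u0) < 1e-8 and abs(u1 - 1.0) < 1e-8
print("verified; u(0.5) =", sol.sol(0.5)[0])
```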
Procedia PDF Downloads 97
1218 Evaluation of Yield and Yield Components of Malaysian Palm Oil Board-Senegal Oil Palm Germplasm Using Multivariate Tools
Authors: Khin Aye Myint, Mohd Rafii Yusop, Mohd Yusoff Abd Samad, Shairul Izan Ramlee, Mohd Din Amiruddin, Zulkifli Yaakub
Abstract:
The narrow genetic base is the main obstacle to breeding and genetic improvement in the oil palm industry. In order to broaden the genetic base, the Malaysian Palm Oil Board has extensively collected wild germplasm from 11 African countries: Nigeria, Senegal, Gambia, Guinea, Sierra Leone, Ghana, Cameroon, Zaire, Angola, Madagascar, and Tanzania. The germplasm collections were established and maintained as a field gene bank at the Malaysian Palm Oil Board (MPOB) Research Station in Kluang, Johor, Malaysia, to conserve a wide range of oil palm genetic resources for the genetic improvement of the Malaysian oil palm industry. Therefore, assessing the performance and genetic diversity of the wild materials is very important for understanding the genetic structure of the natural oil palm population and for exploring genetic resources. Principal component analysis (PCA) and cluster analysis are very efficient multivariate tools for evaluating the genetic variation of germplasm and have been applied to many crops. In this study, eight populations of MPOB-Senegal oil palm germplasm were studied to explore the pattern of genetic variation using PCA and cluster analysis. A total of 20 yield and yield component traits were used for PCA and Ward's clustering using SAS version 9.4 software. The first four principal components, which have eigenvalues >1, accounted for 93% of the total variation, with values of 44%, 19%, 18%, and 12%, respectively. PC1 showed the highest positive correlations with fresh fruit bunch (0.315), bunch number (0.321), oil yield (0.317), kernel yield (0.326), total economic product (0.324), and total oil (0.324), while PC2 had the largest positive associations with oil to wet mesocarp (0.397) and oil to fruit (0.458). The oil palm populations were grouped into four distinct clusters based on the 20 evaluated traits, implying that high genetic variation existed among the germplasm. Cluster 1 contains two populations, SEN 12 and SEN 10, while cluster 2 has only one population, SEN 3. Cluster 3 consists of three populations, SEN 4, SEN 6, and SEN 7, while SEN 2 and SEN 5 were grouped in cluster 4. Cluster 4 showed the highest mean values of fresh fruit bunch, bunch number, oil yield, kernel yield, total economic product, and total oil, and cluster 1 was characterized by high oil to wet mesocarp and oil to fruit. The desired traits that have the largest positive correlation with the extracted PCs could be utilized for the improvement of the oil palm breeding program. The populations from different clusters with the highest cluster means could be used for hybridization. The information from this study can be utilized for effective conservation and selection of the MPOB-Senegal oil palm germplasm for future breeding programs.
Keywords: cluster analysis, genetic variability, germplasm, oil palm, principal component analysis
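A short sketch of that PCA-plus-Ward workflow (the trait matrix and the cut into four clusters are illustrative assumptions, not the MPOB data):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

traits = np.random.rand(8, 20)        # 8 populations x 20 yield traits (placeholder)
z = StandardScaler().fit_transform(traits)

pca = PCA().fit(z)
keep = pca.explained_variance_ > 1    # Kaiser criterion: eigenvalues > 1
print("components retained:", keep.sum(),
      "variance explained:", pca.explained_variance_ratio_[keep].sum())

# Ward's hierarchical clustering of populations on the standardized traits
clusters = fcluster(linkage(z, method="ward"), t=4, criterion="maxclust")
print("cluster assignment per population:", clusters)
```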
Procedia PDF Downloads 164
1217 Image Multi-Feature Analysis by Principal Component Analysis for Visual Surface Roughness Measurement
Authors: Wei Zhang, Yan He, Yan Wang, Yufeng Li, Chuanpeng Hao
Abstract:
Surface roughness is an important index for evaluating surface quality and needs to be accurately measured to ensure the performance of the workpiece. Roughness measurement based on machine vision involves various image features, some of which are redundant. These redundant features affect the accuracy and speed of the visual approach. Previous research used correlation analysis methods to select the appropriate features. However, such feature analysis treats features independently and cannot fully utilize the information in the data. Besides, blindly reducing features loses a lot of useful information, resulting in unreliable results. Therefore, the focus of this paper is on providing a redundant-feature removal approach for visual roughness measurement. In this paper, statistical methods and the gray-level co-occurrence matrix (GLCM) are employed to effectively extract the texture features of machined images. Then, principal component analysis (PCA) is used to fuse all extracted features into a new one, which reduces the feature dimension and maintains the integrity of the original information. Finally, the relationship between the new features and roughness is established by a support vector machine (SVM). The experimental results show that the approach can effectively solve the multi-feature information redundancy of machined surface images and provides a new idea for the visual evaluation of surface roughness.
Keywords: feature analysis, machine vision, PCA, surface roughness, SVM
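A rough sketch of that GLCM-feature, PCA-fusion, SVM-regression chain using scikit-image and scikit-learn (the image data, GLCM settings, and roughness targets are placeholders, not the authors' setup):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA
from sklearn.svm import SVR

def glcm_features(img):
    """Texture descriptors from a gray-level co-occurrence matrix."""
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

images = (np.random.rand(30, 64, 64) * 255).astype(np.uint8)  # placeholder surface images
ra = np.random.rand(30)                                       # placeholder roughness values

features = np.array([glcm_features(im) for im in images])
fused = PCA(n_components=3).fit_transform(features)           # fuse redundant features

svm = SVR().fit(fused, ra)
print("fitted; example prediction:", svm.predict(fused[:1]))
```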
Procedia PDF Downloads 212
1216 The Inherent Flaw in the NBA Playoff Structure
Authors: Larry Turkish
Abstract:
Introduction: The NBA is an example of mediocrity, and this will be evident in the following paper. The study examines and evaluates the characteristics of NBA champions. As divisions and playoff teams increase, there is an increase in the probability that the champion originates from the mediocre category. Since its inception in 1947, the league has been mediocre and continues to be to this day. Why does a professional league allow any team with a less than 50% winning percentage into the playoffs? As long as the finances flow into the league, owners will not change the current algorithm. The objective of this paper is to determine if the regular season has meaning in finding an NBA champion. Statistical Analysis: The data originate from the NBA website. The following variables are part of the statistical analysis: rank, the rank of a team relative to other teams in the league based on the regular season win-loss record; winning percentage of a team based on the regular season; divisions, the number of divisions within the league; and playoff teams, the number of playoff teams relative to a particular season. The following statistical applications are applied to the data: Pearson product-moment correlation, analysis of variance, factor analysis, and regression analysis. Conclusion: The results indicate that the divisional structure and the number of playoff teams have a negative effect on the winning percentage of playoff teams. The structure also prevents teams with higher winning percentages from accessing the playoffs. Recommendations: 1. Teams that have a winning percentage greater than 1 standard deviation above the regular season mean will have access to the playoffs (eliminates mediocre teams). 2. Eliminate divisions (eliminates weaker teams' access to the playoffs). 3. Eliminate conferences (eliminates weaker teams' access to the playoffs). 4. Adopt a balanced regular season schedule (reduces the number of regular season games, creates equilibrium, and reduces bias), which will also reduce the need for load management.
Keywords: alignment, mediocrity, regression, z-score
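A tiny sketch of the z-score qualification rule in recommendation 1 (the team names and records are made up):

```python
import numpy as np

teams = ["A", "B", "C", "D", "E", "F"]
win_pct = np.array([0.70, 0.62, 0.55, 0.49, 0.45, 0.38])  # hypothetical season records

# Standardize each record, then keep only teams more than 1 SD above the mean
z = (win_pct - win_pct.mean()) / win_pct.std()
qualified = [t for t, score in zip(teams, z) if score > 1.0]
print("playoff-eligible under the z > 1 rule:", qualified)
```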
Procedia PDF Downloads 130
1215 Quantifying Stakeholders' Values of Technical and Vocational Education and Training Provision in Nigeria
Authors: Lidimma Benjamin, Nimmyel Gwakzing, Wuyep Nanyi
Abstract:
Technical and Vocational Education and Training (TVET) has many stakeholders, each with their own values and interests. This study focuses on the diversity of values and interests within and across groups of stakeholders by quantifying the value that stakeholders attach to several quality attributes of TVET, and also finds out to what extent TVET stakeholders differ in their values. The quality of TVET, therefore, depends on how well it aligns with the values and interests of these stakeholders. The five stakeholder groups are parents, students, teachers, policy makers, and workplace training supervisors. The 9 attributes are employer appreciation of students, graduation rate, obtained computer skills of students, mentoring hours in workplace learning/Students Industrial Work Experience Scheme (SIWES), challenge, structure, students' appreciation of teachers, schooling hours, and attention to civic education. 346 respondents (comprising parents, students, teachers, policy makers, and workplace training supervisors) were repeatedly asked to rank a set of 4 programs, each with a specific value on the nine quality indicators. Conjoint analysis was used to obtain the values that the stakeholders assigned to the 9 attributes when evaluating the quality of TVET programs. Rank-ordered logistic regression was the statistical tool used to estimate the values respondents assigned to the attributes. The similarities and diversity in the values and interests of the different stakeholders will be of use to both the Nigerian government and TVET colleges to improve the overall quality of education and the match between vocational programs and their stakeholders. A vignette study allows simultaneous evaluation and combination of information on product attributes: such an approach models the decision environment by confronting a respondent with choices that are close to real-life choices, and it is therefore more realistic than traditional survey methods.
Keywords: TVET, vignette study, conjoint analysis, quality perception, educational stakeholders
Procedia PDF Downloads 83
1214 Factors for Entry Timing Choices Using Principal Axis Factorial Analysis and Logistic Regression Model
Authors: C. M. Mat Isa, H. Mohd Saman, S. R. Mohd Nasir, A. Jaapar
Abstract:
International market expansion involves a strategic process of market entry decisions through which a firm expands its operation from the domestic to the international domain. Hence, entry timing choices require balancing early entry risks against the problem of losing opportunities as a result of late entry into a new market. Questionnaire surveys administered to 115 Malaysian construction firms operating in 51 countries worldwide resulted in a 39.1 percent response rate. Factor analysis was used to determine the most significant factors affecting the entry timing choices of the firms penetrating the international market. A logistic regression analysis used to examine the firms' entry timing choices indicates that the model correctly classified 89.5 percent of cases as late movers. The findings reveal that the most significant factor influencing the construction firms' choices as late movers was the firm factor related to the firm's international experience, resources, competencies, and financing capacity. The study also offers valuable information to construction firms with the intention to internationalize their businesses.
Keywords: factors, early movers, entry timing choices, late movers, logistic regression model, principal axis factorial analysis, Malaysian construction firms
Procedia PDF Downloads 378
1213 Study Habits and Level of Difficulty Encountered by Maltese Students Studying Biology Advanced Level Topics
Authors: Marthese Azzopardi, Liberato Camilleri
Abstract:
This research was performed to investigate the study habits and the level of difficulty perceived by post-secondary students in Biology Advanced-level topics after completing their first year of study. At the end of a two-year 'sixth form' course, Maltese students sit for the Matriculation and Secondary Education Certificate (MATSEC) Advanced-level biology exam as a requirement to pursue science-related studies at the University of Malta. The sample was composed of 23 students (16 taking Chemistry and seven taking some 'other' subject at the Advanced Level). The cohort comprised seven males and 16 females. A questionnaire constructed by the authors was answered anonymously during the last lecture at the end of the first year of study, in May 2016. The chi-square test revealed that gender has no effect on the various study habits (χ²(6) = 5.873, p = 0.438). 'Reading both notes and textbooks' was the most common method adopted by males (71.4%), whereas 'writing notes on each topic' was that mostly used by females (81.3%). The Mann-Whitney U test showed no significant relationship between the study habits of students and the mean assessment mark obtained at the end of the first-year course (p = 0.231). A statistical difference was found with the one-way ANOVA test when comparing the mean assessment mark obtained at the end of the first-year course when students are clustered by their Secondary Education Certificate (SEC) grade (p < 0.001). Those obtaining a SEC grade of 2 or 3 got the highest mean assessments of 68.33% and 66.9%, respectively [SEC grading is 1-7, where 1 is the highest]. The Friedman test was used to compare the mean difficulty rating scores provided for the difficulty of each topic. The mean difficulty rating score ranges from 1 to 4, where the larger the mean rating score, the higher the difficulty. When considering the whole group of students, nine topics out of 21 were perceived as significantly more difficult than the other topics. Protein Synthesis, DNA Replication, and Biomolecules were the most difficult, in that order. The Mann-Whitney U test revealed that the perceived level of difficulty in comprehending Biomolecules is significantly lower for students taking Chemistry compared to those not taking the subject (p = 0.018). Protein Synthesis was claimed as the most difficult by Chemistry students and Biomolecules by those not studying Chemistry. DNA Replication was the second most difficult topic perceived by both groups. The Mann-Whitney U test was used to examine the effect of gender on the perceived level of difficulty in comprehending various topics. It was found that females have significantly more difficulty in comprehending Biomolecules than males (p = 0.039). Protein Synthesis was perceived as the most difficult topic by males (mean difficulty rating score = 3.14), while Biomolecules, DNA Replication, and Protein Synthesis were of equal difficulty for females (mean difficulty rating score = 3.00). Males and females perceived DNA Replication as equally difficult (mean difficulty rating score = 3.00). Discovering students' study habits and the perceived level of difficulty of specific topics is vital for the lecturer to offer guidance that leads to higher academic achievement.
Keywords: biology, perceived difficulty, post-secondary, study habits
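A brief sketch of the three statistical tests used above, on synthetic stand-in data (the contingency counts and ratings are invented, not the survey responses):

```python
import numpy as np
from scipy.stats import chi2_contingency, friedmanchisquare, mannwhitneyu

rng = np.random.default_rng(2)

# Chi-square: gender (rows) vs. study-habit category (columns), hypothetical counts
table = np.array([[3, 2, 5, 2, 1, 2, 1],
                  [2, 13, 4, 3, 2, 1, 1]])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.3f}")

# Mann-Whitney U: difficulty ratings of two independent groups
print(mannwhitneyu(rng.integers(1, 5, 16), rng.integers(1, 5, 7)))

# Friedman: repeated difficulty ratings of the same students across three topics
print(friedmanchisquare(rng.integers(1, 5, 23), rng.integers(1, 5, 23),
                        rng.integers(1, 5, 23)))
```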
Procedia PDF Downloads 188
1212 Dietary Pattern and Risk of Breast Cancer Among Women: A Case Control Study
Authors: Huma Naqeeb
Abstract:
Epidemiological studies have shown a robust link between breast cancer and dietary pattern. No previous study conducted in Pakistan has specifically focused on dietary patterns among women with breast cancer. This study aims to examine the association of breast cancer with dietary patterns among Pakistani women. This case-control research was carried out in multiple tertiary care facilities. Newly diagnosed primary breast cancer patients were recruited as cases (n = 408); age-matched controls (n = 408) were randomly selected from the general population. Data on the required parameters were systematically collected using subjective and objective tools. Factor analysis and Principal Component Analysis (PCA) techniques were used to extract the women's dietary patterns. Four dietary patterns were identified based on eigenvalues >1: (i) veg-ovo-fish, (ii) meat-fat-sweet, (iii) mixed (milk and its products, and gourd vegetables), and (iv) lentils-spices. The results of the multiple regressions are displayed as adjusted odds ratios (Adj. OR) with their respective confidence intervals (95% CI). After adjusting for potential confounders, the veg-ovo-fish dietary pattern was found to be robustly associated with a lower risk of breast cancer among women (Adj. OR: 0.68, 95% CI: 0.46-0.99, p<0.01). The study findings concluded that adherence to diets composed mainly of fresh vegetables and high-quality protein sources may contribute to lowering the risk of breast cancer among women.
Keywords: breast cancer, dietary pattern, women, principal component analysis
Procedia PDF Downloads 123
1211 A Quantitative Survey Research on the Development and Assessment of Attitude toward Mathematics Instrument
Authors: Soofia Malik
Abstract:
The purpose of this study is to develop an instrument to measure undergraduate students' attitudes toward mathematics (MAT) and to assess the data collected from the instrument for validity and reliability. The instrument is developed using five subscales: anxiety, enjoyment, self-confidence, value, and technology. The technology dimension is added as the fifth subscale of attitude toward mathematics because of the recent trend of incorporating online homework in mathematics courses, as well as the heavy reliance of higher education on online learning management systems, such as Blackboard and Moodle. The sample consists of 163 (M = 82, F = 81) undergraduates enrolled in a College Algebra course in the summer 2017 semester at a university in the USA. The data are analyzed to answer the research question: if and how do undergraduate students' attitudes toward mathematics load using Principal Components Analysis (PCA)? As a result of PCA, three subscales emerged: an anxiety/self-confidence scale, an enjoyment scale, and a value scale. After deleting the last five items, or the last two subscales, from the initial MAT scale, Cronbach's alpha was recalculated using the scores from the 20 remaining items and was found to be α = .95. It is important to note that the reliability of the initial MAT form was α = .93. This means that employing the final MAT survey form would yield consistent results in repeated uses. The final MAT form is, therefore, more reliable than the initial MAT form.
Keywords: college algebra, Cronbach's alpha reliability coefficient, Principal Components Analysis, PCA, technology in mathematics
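A small sketch of the Cronbach's alpha computation referenced above (the item responses are random placeholders, not the survey data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

responses = np.random.randint(1, 6, size=(163, 20))  # 163 students, 20 Likert items
print(f"alpha = {cronbach_alpha(responses):.2f}")
```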
Procedia PDF Downloads 123
1210 The Analysis of Changes in Urban Hierarchy of Isfahan Province in the Fifty-Year Period (1956-2006)
Authors: Hamidreza Joudaki, Yousefali Ziari
Abstract:
The appearance of cities and urbanism is one of the important processes affecting social communities. Industrialization and urbanism have developed alongside each other throughout history, and the two have had a close relationship for more than six thousand years, that is, since the appearance of the first cities. In the 18th century, with the emergence of industrial capitalism, progressive development took place in urbanism across the world. In Iran, each regional city made its decisions by itself, and the regional capital (downtown) was the only central place, controlling its realm without any hierarchy. However, over the last three decades, changes in political, social, and economic conditions have caused changes in rural-urban relationships. Moreover, they have changed the variety of functions of cities and the systematic urban network in Iran. Today, the urban system shows a vast imbalance in space and performance. In Isfahan, the trend of urbanism is like that in the other parts of Iran, and the systematic urban hierarchy is neither suitable nor normal. This article is quantitative and analytical. The statistical population consists of the cities of Isfahan Province, and the changes in the urban network and its hierarchy over a period of fifty years (1956-2006) have been surveyed. The data have been analyzed using the rank-size model and the entropy index. In this article, the cities of Iran, the entropy factor of the primate city, and the urban hierarchy of Isfahan Province are presented. The urban share of the province's population rose from 55 percent to 83% (2006). The analytical data reflect a mismatch and imbalance between cities, as the entropy index decreased from .91 in 1956 to .63 in 2006. Isfahan city is the primate city over the whole of this period. Moreover, the second and third cities show a population gap with respect to the other cities, and, finally, they do not follow the rank-size system.
Keywords: urban network, urban hierarchy, primate city, Isfahan province, urbanism, first cities
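A compact sketch of the two measures used here, the rank-size rule fit and a normalized entropy index, on made-up city populations:

```python
import numpy as np

populations = np.array([1_900_000, 420_000, 210_000, 150_000, 95_000, 60_000])
ranks = np.arange(1, populations.size + 1)

# Rank-size rule: log(population) ~ a - q*log(rank); q near 1 means a balanced hierarchy
slope, intercept = np.polyfit(np.log(ranks), np.log(populations), 1)
print("rank-size exponent:", -slope)

# Entropy index: 1 = perfectly even sizes, values near 0 = primacy of one city
shares = populations / populations.sum()
entropy = -(shares * np.log(shares)).sum() / np.log(populations.size)
print("normalized entropy index:", round(entropy, 2))
```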
Procedia PDF Downloads 258
1209 Competitor Integration with Voice of Customer Ratings in QFD Studies Using Geometric Mean Based on AHP
Authors: Zafar Iqbal, Nigel P. Grigg, K. Govindaraju, Nicola M. Campbell-Allen
Abstract:
Quality Function Deployment (QFD) is a structured approach that has been used to improve the quality of products and processes in a wide range of fields. Using this systematic tool, practitioners normally rank Voice of Customer ratings (VoCs) in order to produce Improvement Ratios (IRs), which become the basis for prioritising process/product design or improvement activities. In one matrix of the House of Quality (HOQ), competitors are rated. The method of obtaining Improvement Ratios (IRs) does not always integrate the competitors' ratings in a systematic way that fully utilises competitor rating information. This can have the effect of diverting QFD practitioners' attention from a potentially important VoC to a less important VoC. In order to enhance QFD analysis, we present a more systematic method for integrating competitor ratings, utilising the geometric mean of the customer rating matrix. In this paper, we develop a new approach, based on the Analytic Hierarchy Process (AHP), in which we generate a matrix of multiple comparisons of all competitors and derive a geometric mean for each competitor. For each VoC, an improved IR is derived which, we argue herein, enhances the initial VoC importance ratings by integrating more information about competitor performance. In this way, our method can help overcome one of the possible shortcomings of QFD. We then use a published QFD example from the literature as a case study to demonstrate the use of the new AHP-based IRs, and show how these can be used to re-rank existing VoCs to, arguably, better achieve the goal of customer satisfaction in relation to VoC ratings and competitors' rankings. We demonstrate how the two-dimensional AHP-based geometric mean derived from the multiple competitor comparisons matrix can be useful for analysing competitors' rankings. Our method utilises an established methodology (AHP) applied within an established application (QFD), but in an original way (through the competitor analysis matrix), to achieve a novel improvement.
Keywords: quality function deployment, geometric mean, improvement ratio, AHP, competitors ratings
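A minimal sketch of deriving competitor weights as row geometric means of an AHP pairwise comparison matrix (the matrix values and the final ratio are an invented example, not the paper's case study):

```python
import numpy as np

# Pairwise comparisons of three competitors on one VoC (Saaty-style scale, invented)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Row geometric means, normalized, approximate the AHP priority vector
gm = np.prod(A, axis=1) ** (1.0 / A.shape[0])
weights = gm / gm.sum()
print("competitor weights:", np.round(weights, 3))

# An improvement ratio could then scale a VoC's importance by target/current weight
target, current = weights.max(), weights[1]
print("illustrative improvement ratio:", round(target / current, 2))
```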
Procedia PDF Downloads 369
1208 Atmospheric Polycyclic Aromatic Hydrocarbons (PAHs) in Rural and Urban Areas of Central Taiwan
Authors: Shih Yu Pan, Pao Chen Hung, Chuan Yao Lin, Charles C.-K. Chou, Yu Chi Lin, Kai Hsien Chi
Abstract:
This study analyzed 16 atmospheric PAH species regulated by the USEPA and IARC. To measure the concentrations of PAHs, four rural sampling sites and two urban sampling sites were selected in central Taiwan during spring and summer. The rural sampling stations were located downstream of the Da-An River, Da-Jang River, Wu River, and Chuo-shui River. On the other hand, the urban sampling sites were located in the Taichung district, close to the roadside. Ambient air samples of both the vapor phase and the particle phase of PAH compounds were collected using high-volume sampling trains (Analitica). The sampling media were polyurethane foam (PUF) with XAD2 and quartz fiber filters. Diagnostic ratios, Principal Component Analysis (PCA), and Positive Matrix Factorization (PMF) models were used to evaluate the apportionment of PAHs in the atmosphere and to infer the relative contributions of various emission sources. Because of the high temperatures and low wind speeds, high PAH concentrations were observed in the atmosphere. The total PAH concentration, especially in the vapor phase, changed significantly during summer. During the sampling periods, the total PAH concentrations at the four rural and two urban sampling sites in spring and summer were 3.70±0.40 ng/m3, 3.40±0.63 ng/m3, 5.22±1.24 ng/m3, 7.23±0.37 ng/m3, 7.46±2.36 ng/m3, 6.21±0.55 ng/m3; and 15.0±0.14 ng/m3, 18.8±8.05 ng/m3, 20.2±8.58 ng/m3, 16.1±3.75 ng/m3, 29.8±10.4 ng/m3, 35.3±11.8 ng/m3, respectively. In order to identify PAH sources, we used diagnostic ratios to classify the emission sources. The potential sources were diesel combustion in spring and gasoline combustion in summer. According to the principal component analysis (PCA), PC1 and PC2 explained 23.8% and 20.4% of the variance in spring, and 21.3% and 17.1% in summer, respectively. High molecular weight PAHs (BaP, IND, BghiP, Flu, Phe, Flt, Pyr) dominated in spring, while low molecular weight PAHs (AcPy, Ant, Acp, Flu) dominated in summer because of the prevailing high temperatures. Analysis using the PMF model found that the sources of PAHs in spring were stationary sources (34%), vehicle emissions (24%), coal combustion (23%), and petrochemical fuel gas (19%), while in summer the emission sources were petrochemical fuel gas (34%), volatile organic compounds from the natural environment (29%), coal combustion (19%), and stationary sources (18%).
Keywords: PAHs, source identification, diagnostic ratio, principal component analysis, positive matrix factorization
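A hedged sketch of factor-based source apportionment in the spirit of PMF, using scikit-learn's non-negative matrix factorization as a stand-in (real PMF additionally weights by measurement uncertainty; the data here are synthetic):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(3)
X = rng.random((60, 16))    # samples x 16 PAH species concentrations (placeholder)

# Factorize into 4 non-negative source profiles and their contributions
model = NMF(n_components=4, init="nndsvda", max_iter=1000, random_state=0)
contributions = model.fit_transform(X)   # samples x sources
profiles = model.components_             # sources x species

# Average share of each source across all samples
share = contributions.mean(axis=0)
print("source contribution shares:", np.round(share / share.sum(), 2))
```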
Procedia PDF Downloads 267
1207 The Attitude and Intention to Purchase Halal Cosmetic Products: A Study of Muslim Consumers in Saudi Arabia
Authors: Abdulwahab S. Shmailan
Abstract:
The links between the halalan tayyiban dimensions and their impact on the propensity to purchase halal cosmetics in Muslim culture are investigated in this study. The information was gathered through a self-administered questionnaire survey of 207 Saudi Muslim customers using purposive sampling. The suggested model was tested using Pearson correlation coefficients and an ANOVA test. Significant and positive connections were found between the halalan tayyiban dimensions, attitudes, and purchasing intent. There were also substantial differences in the study parameters depending on the respondent's job title. This is one of the first empirical tests of the halalan tayyiban, attitudes, and intention-to-purchase model among Saudi Muslim customers. The study offers helpful recommendations for cosmetics sector marketers as well as for strategy formulation.
Keywords: cosmetics, halal cosmetics, halalan tayyiban, halal certificate, customers attitude, intention to purchase
Procedia PDF Downloads 178
1206 Non-Fungible Token (NFT) - Used in the Music Industry for Independent Artists without a Music Recording Label
Authors: Bartholomew Badar
Abstract:
An NFT is a digital certificate conferring ownership rights over an asset, including various valuable digital goods such as art pieces, music items, collectibles, etc. The market for NFTs started developing in 2017 and has lately seen increased growth as crypto-currencies and the blockchain market continue to gain popularity. This study aims to understand potential uses for NFTs in the music industry and for record labels. Independent artists struggle to distribute and sell their music without the help of a record label. The NFT marketplace could be a great tool to eliminate this problem. The research objective is to identify possibilities for independent artists to own their music rights and share value with an audience. We see a trend of new-school music artists trying to enter the music NFT market by creating visualizers, beats, cover art, etc. The main focus of this scholarly paper is to analyze various existing music NFT assets and determine whether or not independent artists could monetize their music without a record label.
Keywords: blockchain, crypto-currency, music, artist, NFT
Procedia PDF Downloads 177
1205 Dietary Pattern Derived by Reduced Rank Regression is Associated with Reduced Cognitive Impairment Risk in Singaporean Older Adults
Authors: Kaisy Xinhong Ye, Su Lin Lim, Jialiang Li, Lei Feng
Abstract:
Background: Multiple healthful dietary patterns have been linked with dementia, but limited studies have looked at the role of diet in cognitive health in Asians, whose eating habits are very different from those of their counterparts in the West. This study aimed to derive a dietary pattern that is associated with the risk of cognitive impairment (CI) in the Singaporean population. Method: The analysis was based on 719 community older adults aged 60 and above. Dietary intake was measured using a validated semi-quantitative food-frequency questionnaire (FFQ). Reduced rank regression (RRR) was used to extract dietary patterns from 45 food groups, specifying sugar, dietary fiber, vitamin A, calcium, and the ratio of polyunsaturated fat to saturated fat intake (P:S ratio) as response variables. The RRR-derived dietary patterns were subsequently investigated using multivariate logistic regression models to look for associations with the risk of CI. Results: A dietary pattern characterized by greater intakes of green leafy vegetables, red-orange vegetables, wholegrains, tofu, and nuts, and lower intakes of biscuits, pastries, local sweets, coffee, poultry with skin, sugar added to beverages, malt beverages, roti, butter, and fast food, was associated with a reduced risk of CI [multivariable-adjusted OR comparing extreme quintiles, 0.29 (95% CI: 0.11, 0.77); P-trend = 0.03]. This pattern was positively correlated with the P:S ratio, vitamin A, and dietary fiber, and negatively correlated with sugar. Conclusion: A dietary pattern providing a high P:S ratio, vitamin A, and dietary fiber, and a low level of sugar, may reduce the risk of cognitive impairment in old age. The findings have significance in guiding local Singaporeans toward dementia prevention through food-based dietary approaches.
Keywords: dementia, cognitive impairment, diet, nutrient, elderly
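A bare-bones sketch of reduced rank regression as used above: regress the nutrient responses on food groups by least squares, then truncate via SVD to extract pattern scores (the dimensions and data are placeholders, not the cohort's intakes):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, q = 719, 45, 5                 # subjects, food groups, nutrient responses
X = rng.normal(size=(n, p))          # standardized food-group intakes (placeholder)
Y = rng.normal(size=(n, q))          # standardized nutrient responses (placeholder)

# Full-rank OLS coefficients, then rank-1 truncation of the fitted values' structure
B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
U, s, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
rank = 1
B_rrr = B_ols @ Vt[:rank].T @ Vt[:rank]   # reduced-rank coefficient matrix

# Food-group weights of the first pattern, and each subject's pattern score
pattern_weights = B_ols @ Vt[0]           # direction in food-group space
scores = X @ pattern_weights
print("pattern score mean/sd:", scores.mean().round(3), scores.std().round(3))
```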
Procedia PDF Downloads 82
1204 Marine Phytoplankton and Zooplankton from the North-Eastern Bay of Bengal, Bangladesh
Authors: Mahmudur Rahman Khan, Saima Sharif Nilla, Kawser Ahmed, Abdul Aziz
Abstract:
The marine phytoplankton and zooplankton of the extreme north-eastern part of the Bay of Bengal, off the coast of Bangladesh, have been studied. The relative occurrence of phytoplankton and zooplankton, their relationship with the physico-chemical conditions of the water (e.g., temperature, salinity, dissolved oxygen, carbonate, phosphate, and sulphate), and Shannon-Weaver diversity indices were also studied. The phytoplankton communities were represented by 25 genera with 69 species of Bacillariophyceae, 5 genera with 12 species of Dinophyceae, and 6 genera with 16 species of Chlorophyceae. A total of 24 genera with 25 species belonging to Protozoa, Coelenterata, Chaetognatha, Nematoda, Cladocera, Copepoda, and Decapoda have been recorded. In addition, phytoplankton made up on average 80% of all collections, whereas zooplankton made up 20%, a phytoplankton-to-zooplankton ratio of about 4:1. The total numbers of plankton individuals per liter were generally higher during low tide than during high tide. Shannon-Weaver diversity indices were highest (3.675 for phytoplankton and 3.021 for zooplankton) in the north-east part and lowest (1.516 for phytoplankton and 1.302 for zooplankton) in the south-east part of the study area. Principal Component Analysis (PCA) showed the relationship between pH and some species of phytoplankton and zooplankton, where all diatoms and copepods showed a positive correlation and dinoflagellates showed a negative correlation with pH.
Keywords: plankton presence, Shannon-Weaver diversity index, principal component analysis, Bay of Bengal
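For reference, a tiny sketch of the Shannon-Weaver diversity index H' = -Σ p_i ln(p_i) on hypothetical species counts:

```python
import numpy as np

def shannon_weaver(counts):
    """H' = -sum(p_i * ln(p_i)) over observed species proportions."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum()

station_counts = [120, 80, 45, 30, 22, 15, 9, 4]   # individuals per species (invented)
print(f"H' = {shannon_weaver(station_counts):.3f}")
```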
Procedia PDF Downloads 660
1203 A Convergent Interacting Particle Method for Computing KPP Front Speeds in Random Flows
Authors: Tan Zhang, Zhongjian Wang, Jack Xin, Zhiwen Zhang
Abstract:
We aim to efficiently compute the spreading speeds of reaction-diffusion-advection (RDA) fronts in divergence-free random flows under the Kolmogorov-Petrovsky-Piskunov (KPP) nonlinearity. We study a stochastic interacting particle method (IPM) for the reduced principal eigenvalue (Lyapunov exponent) problem of an associated linear advection-diffusion operator with spatially random coefficients. The Fourier representation of the random advection field and the Feynman-Kac (FK) formula of the principal eigenvalue (Lyapunov exponent) form the foundation of our method, implemented as a genetic evolution algorithm. The particles undergo advection-diffusion and mutation/selection through a fitness function originating in the FK semigroup. We analyze the convergence of the algorithm based on operator splitting and present numerical results on representative flows such as 2D cellular flow and 3D Arnold-Beltrami-Childress (ABC) flow under random perturbations. The 2D examples serve as a consistency check with semi-Lagrangian computation. The 3D results demonstrate that IPM, being mesh-free and self-adaptive, is simple to implement and efficient for computing front spreading speeds in the advection-dominated regime for high-dimensional random flows on unbounded domains where no truncation is needed.
Keywords: KPP front speeds, random flows, Feynman-Kac semigroups, interacting particle method, convergence analysis
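A heavily simplified sketch of such a genetic (mutation/selection) particle estimator for a principal eigenvalue, here for the 1D operator κ∂²ₓ + c(x) on a torus with the advection term dropped for brevity; all coefficients are example choices, not the paper's flows:

```python
import numpy as np

rng = np.random.default_rng(5)
kappa, dt, T, n = 1.0, 1e-2, 20.0, 5000
c = lambda x: np.cos(x)                 # zeroth-order (fitness) coefficient, example choice

x = rng.uniform(0, 2 * np.pi, n)        # particle positions on the torus
log_growth = 0.0
for _ in range(int(T / dt)):
    # Mutation: diffusion move (an advection term would be added here)
    x = (x + np.sqrt(2 * kappa * dt) * rng.normal(size=n)) % (2 * np.pi)
    # Selection: reweight by the Feynman-Kac potential and resample by fitness
    w = np.exp(c(x) * dt)
    log_growth += np.log(w.mean())
    x = rng.choice(x, size=n, p=w / w.sum())

# lambda = lim (1/T) log E[exp(integral of c along paths)]
print("principal eigenvalue estimate:", log_growth / T)
```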
Procedia PDF Downloads 46
1202 Employers' Preferences when Employing Solo Self-Employed: A Vignette Study in the Netherlands
Authors: Lian Kösters, Wendy Smits, Raymond Montizaan
Abstract:
The number of solo self-employed workers in the Netherlands has been increasing for years, and the relative increase is among the largest in the EU. To explain this increase, most studies have focused on the supply side: workers who offer themselves as solo self-employed. The number of studies that focus on the demand side, the employers who hire the solo self-employed, is still scarce. Studies of employer behaviour conducted until now show that employers mainly choose self-employed workers when they have a temporary need for specialist knowledge, but also during projects or production peaks. These studies do not provide insight into employers' considerations regarding different contract types. In this study, interviews with employers were conducted, and the available literature was consulted, to provide an overview of the factors employers use to compare different contract types. That input was used to set up a vignette study, carried out at the end of 2021 among almost 1,000 business owners, HR managers, and business leaders of Dutch companies. Each respondent was given two sets of five fictitious candidates for two possible positions in their organization and was asked to rank these candidates. The positions varied with regard to the type of tasks (core tasks or support tasks) and the time it takes to train new people for the position. The respondents were asked additional questions about the positions, such as the required level of education, the duration, and the degree of predictability of the tasks. The fictitious candidates varied, among other things, in the type of contract under which they would come to work for the organization. The results were analyzed using a rank-ordered logit analysis. This vignette setup makes it possible to see which factors matter most when employers choose between hiring a solo self-employed person and workers on other contracts. The results show no indications that employers would want to hire solo self-employed workers en masse; they prefer regular employee contracts. The probability of being chosen with a solo self-employed contract over someone who comes to work as a temporary employee is 32 percent. This probability is even lower than for on-call and temporary agency workers. For a permanent contract, this probability is 46 percent. The results provide indications that employers consider knowledge and skills more important than the solo self-employed contract, and that these can compensate: a solo self-employed candidate with 10 years of work experience has a 63 percent probability of being found attractive by an employer compared to a temporary employee without work experience. This suggests that employers are willing to give someone a contract that is less attractive for the employer if the worker so wishes. The results also show that the probability that a solo self-employed person is preferred over a candidate with a temporary employee contract is somewhat higher in business economics, administrative, and technical professions. No significant results were found for factors where it was expected that solo self-employed workers would be preferred more often, such as unpredictable or temporary work.
Keywords: employer behaviour, rank-ordered logit analysis, solo self-employment, temporary contract, vignette study
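A minimal sketch of rank-ordered ("exploded") logit estimation, which treats a full ranking as a sequence of best-of-remaining choices (the attribute coding, single ranking, and coefficients are invented for illustration):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical vignette data: each row is one candidate's dummy-coded attributes
X = np.array([[1, 0, 0],    # e.g., [solo self-employed, temp agency, 10y experience]
              [0, 1, 1],
              [0, 0, 1],
              [0, 0, 0],
              [1, 0, 1]], dtype=float)
ranking = [4, 2, 1, 0, 3]   # one respondent's order, best first (invented)

def neg_log_likelihood(beta):
    """Exploded logit: multiply conditional-logit probabilities stage by stage."""
    ll, remaining = 0.0, list(ranking)
    while len(remaining) > 1:
        u = X[remaining] @ beta
        ll += u[0] - np.log(np.exp(u).sum())   # chosen alternative is listed first
        remaining.pop(0)
    return -ll

fit = minimize(neg_log_likelihood, np.zeros(X.shape[1]), method="BFGS")
print("attribute coefficients:", np.round(fit.x, 3))
```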
Procedia PDF Downloads 73
1201 Uplift Segmentation Approach for Targeting Customers in a Churn Prediction Model
Authors: Shivahari Revathi Venkateswaran
Abstract:
Segmenting customers plays a significant role in churn prediction. It helps the marketing team with proactive and reactive customer retention. For reactive retention, the retention team reaches out to customers who have already shown intent to disconnect by giving them special offers. For proactive retention, the marketing team uses a churn prediction model, which ranks each customer from rank 1 to 100, where rank 1 indicates the highest risk of churning/disconnecting (high ranks have a high propensity to churn). The churn prediction model is built using XGBoost. However, with the churn ranks alone, the marketing team can only reach out to customers based on their individual ranks; profiling different groups of customers and framing different marketing strategies for targeted groups of customers are not possible. For this, the customers must be grouped into different segments based on their profiles, like demographics and other non-controllable attributes. This helps the marketing team frame different offer groups for the targeted audience and prevent them from disconnecting (proactive retention). For segmentation, machine learning approaches like k-means clustering will not form unique customer segments that have customers with the same attributes. This paper presents an alternate approach that finds all the combinations of unique segments that can be formed from the user attributes and then finds the segments that have uplift (a churn rate higher than the baseline churn rate). For this, search algorithms like fast search and recursive search are used. Further, within each segment, all customers can be targeted using their individual churn ranks from the churn prediction model. Finally, a UI (User Interface) was developed for the marketing team to interactively search for the meaningful segments that are formed, target the right set of audiences for future marketing campaigns, and prevent them from disconnecting.
Keywords: churn prediction modeling, XGBoost model, uplift segments, proactive marketing, search algorithms, retention, k-means clustering
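A small sketch of enumerating attribute-value segments and keeping those with uplift over the baseline churn rate (the customer table, attributes, and size guard are hypothetical stand-ins for the paper's search algorithms):

```python
import itertools
import pandas as pd

# Hypothetical customer table with churn labels and categorical attributes
df = pd.DataFrame({
    "region":  ["N", "N", "S", "S", "S", "N", "S", "N"],
    "plan":    ["a", "b", "a", "a", "b", "b", "a", "a"],
    "churned": [1, 0, 1, 1, 0, 0, 1, 0],
})
attributes = ["region", "plan"]
baseline = df["churned"].mean()

# Enumerate every attribute-value combination; keep segments whose churn exceeds baseline
uplift_segments = []
for r in range(1, len(attributes) + 1):
    for cols in itertools.combinations(attributes, r):
        grouped = df.groupby(list(cols))["churned"].agg(["mean", "size"])
        for key, row in grouped.iterrows():
            if row["mean"] > baseline and row["size"] >= 2:   # minimum size guard
                key = key if isinstance(key, tuple) else (key,)
                uplift_segments.append((dict(zip(cols, key)), round(row["mean"], 2)))

print("baseline churn:", round(baseline, 2))
for seg, rate in uplift_segments:
    print(seg, "->", rate)
```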
Procedia PDF Downloads 71