Search results for: metric grouping
148 Synthesis of Amine Functionalized MOF-74 for Carbon Dioxide Capture
Authors: Ghulam Murshid, Samil Ullah
Abstract:
Scientific studies suggest that the increased greenhouse gas concentration in the atmosphere, particularly of carbon dioxide (CO2), is one of the major factors in global warming. The concentration of CO2 in the atmosphere has crossed the milestone level of 400 parts per million (ppm), a record in human history. A report by 49 researchers from 10 countries stated, 'Global CO2 emissions from burning fossil fuels will rise to a record 36 billion metric tons (39.683 billion tons) this year.' The main contributors of CO2 to the atmosphere are fossil fuel use, the transportation sector and power generation plants. The available capture technologies in practice around the globe include chemical absorption, membrane separation, cryogenic separation and adsorption. Among these, adsorption of CO2 using metal organic frameworks (MOFs) is attracting the interest of researchers worldwide. In the current work, MOF-74 as well as MOF-74 modified with a sterically hindered amine (AMP) was synthesized and characterized. The modification was carried out with the sterically hindered amine in order to study its effect on adsorption capacity. The resulting samples were characterized using Fourier Transform Infrared Spectroscopy (FTIR), Field Emission Scanning Electron Microscopy (FESEM), Thermal Gravimetric Analysis (TGA) and Brunauer-Emmett-Teller (BET) analysis. The FTIR results clearly confirmed the formation of the MOF-74 structure and the presence of AMP. FESEM and TEM revealed the topography and morphology of both MOF-74 and the amine-modified MOF. The BET isotherm results show that the addition of AMP into the structure led to a significant enhancement of CO2 adsorption.
Keywords: adsorbents, amine, CO2, global warming
Procedia PDF Downloads 422
147 Spino-Pelvic Alignment with SpineCor Brace Use in Adolescent Idiopathic Scoliosis
Authors: Reham H. Diab, Amira A. A. Abdallah, Eman A. Embaby
Abstract:
Background: The effectiveness of bracing in preventing spino-pelvic alignment deterioration in idiopathic scoliosis has been extensively studied, especially in the frontal plane. Yet, there is a lack of knowledge regarding the effect of soft braces on spino-pelvic alignment in the sagittal plane. Achieving harmonious sagittal-plane spino-pelvic balance is critical for the preservation of physiologic posture and spinal health. Purpose: This study examined the kyphotic angle, lordotic angle and pelvic inclination in the sagittal plane and trunk imbalance in the frontal plane before and after a six-month rehabilitation period. Methods: Nineteen patients with idiopathic scoliosis participated in the study. They were divided into two groups: experimental and control. The experimental group (group I) used the SpineCor brace in addition to a rehabilitation exercise program, while the control group (group II) had the exercise program only. The mean±SD age, weight and height were 16.89±2.15 vs. 15.3±2.5 years, 59.78±6.85 vs. 62.5±8.33 kg, and 162.78±5.76 vs. 159±5.72 cm for group I vs. group II. Data were collected using the Formetric II system. Results: Mixed-design MANOVA showed significant (p < 0.05) decreases in all the tested variables after the six-month period compared with 'before' in both groups. Moreover, there was a significant decrease in the kyphotic angle in group I compared with group II after the six-month period. Interpretation and conclusion: The SpineCor brace is beneficial in reducing spino-pelvic alignment deterioration in both the sagittal and frontal planes.
Keywords: adolescent idiopathic scoliosis, SpineCor, spino-pelvic alignment, biomechanics
Procedia PDF Downloads 340
146 Tomato-Weed Classification by RetinaNet One-Step Neural Network
Authors: Dionisio Andujar, Juan lópez-Correa, Hugo Moreno, Angela Ri
Abstract:
The increased number of weeds in tomato crops sharply lowers yields. Weed identification with the aid of machine learning is important for carrying out site-specific control. The latest advances in computer vision are a powerful tool to address the problem. The analysis of RGB (Red, Green, Blue) images through artificial neural networks has developed rapidly in the past few years, providing new methods for weed classification. The development of algorithms for crop and weed species classification aims at a real-time classification system using object detection algorithms based on convolutional neural networks. The study site was located in commercial corn fields. The classification system has been tested, and the procedure can detect and classify weed seedlings in tomato fields. The input to the neural network was a set of 10,000 RGB images with a natural infestation of Cyperus rotundus L., Echinochloa crus-galli L., Setaria italica L., Portulaca oleracea L., and Solanum nigrum L. The validation process was done with a random selection of RGB images containing the aforementioned species. The mean average precision (mAP) was established as the metric for object detection. The results showed agreement higher than 95%. The system will provide the input for an online spraying system. Thus, this work plays an important role in site-specific weed management by reducing herbicide use in a single step.
Keywords: deep learning, object detection, cnn, tomato, weeds
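For readers unfamiliar with the evaluation metric mentioned above, the following is a minimal, illustrative sketch of how average precision could be computed for a single weed class from detections and ground-truth boxes, assuming an IoU threshold of 0.5; the box format, variable names and sample values are hypothetical and not taken from the study.

```python
# Minimal sketch: average precision (AP) for one class at IoU >= 0.5.
# Boxes are (xmin, ymin, xmax, ymax); detections carry a confidence score.
# All names and sample values are illustrative, not from the study.

def iou(a, b):
    """Intersection-over-union of two boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def average_precision(detections, ground_truth, iou_thr=0.5):
    """detections: list of (image_id, score, box); ground_truth: dict image_id -> list of boxes."""
    n_gt = sum(len(v) for v in ground_truth.values())
    matched = {img: [False] * len(boxes) for img, boxes in ground_truth.items()}
    tp, fp = [], []
    for img, score, box in sorted(detections, key=lambda d: -d[1]):
        best, best_j = 0.0, -1
        for j, gt_box in enumerate(ground_truth.get(img, [])):
            o = iou(box, gt_box)
            if o > best:
                best, best_j = o, j
        if best >= iou_thr and not matched[img][best_j]:
            matched[img][best_j] = True
            tp.append(1); fp.append(0)
        else:
            tp.append(0); fp.append(1)
    # Precision-recall curve and AP as the area under it (step-wise sum).
    ap, cum_tp, cum_fp, prev_recall = 0.0, 0, 0, 0.0
    for t, f in zip(tp, fp):
        cum_tp += t; cum_fp += f
        recall = cum_tp / n_gt
        precision = cum_tp / (cum_tp + cum_fp)
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap

# Toy example: one image, two ground-truth weeds, three detections.
gt = {"img1": [(10, 10, 50, 50), (60, 60, 100, 100)]}
dets = [("img1", 0.9, (12, 11, 49, 52)), ("img1", 0.8, (61, 58, 99, 101)), ("img1", 0.3, (0, 0, 20, 20))]
print("AP:", round(average_precision(dets, gt), 3))
```

In a multi-class setting, the mAP quoted in the abstract would simply be the mean of such per-class AP values.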
Procedia PDF Downloads 103
145 Genetic Diversity Analysis of Pearl Millet (Pennisetum glaucum [L. R. Rr.]) Accessions from Northwestern Nigeria
Authors: Sa’adu Mafara Abubakar, Muhammad Nuraddeen Danjuma, Adewole Tomiwa Adetunji, Richard Mundembe, Salisu Mohammed, Francis Bayo Lewu, Joseph I. Kiok
Abstract:
Pearl millet, the most drought-tolerant of all domesticated cereals, is cultivated extensively to feed millions of people who mainly live in harsh agroclimatic zones. It serves as a major source of food for more than 40 million smallholder farmers living on the marginal agricultural lands of Northern Nigeria. Pearl millet grain is more nutritious than other cereals such as maize and is also a principal source of energy, protein, vitamins, and minerals for millions of the poorest people in the regions where it is cultivated. Pearl millet has received relatively little research attention compared with other crops, and no sufficient work has analyzed its genetic diversity in north-western Nigeria. Therefore, this study was undertaken with the objectives of analyzing the genetic diversity of pearl millet accessions using SSR markers and analyzing the extent of the evolutionary relationship among pearl millet accessions at the molecular level. The results of the present study confirmed diversity among accessions of pearl millet in the study area. Simple Sequence Repeat (SSR) markers were used for the genetic analysis and evolutionary relationship of the pearl millet accessions. To analyze the level of genetic diversity, 8 polymorphic SSR markers were used to screen 69 accessions collected based on three maturity periods. The SSR marker results reveal relationships among the accessions in terms of genetic similarity and evolutionary and ancestral origin. A total of 53 alleles were recorded with the 8 microsatellites, with an average of 6.875 alleles per microsatellite and a range from 3 to 9 alleles in PSMP2248 and PSMP2080, respectively. Moreover, the factorial analysis, the dendrogram of the phylogenetic tree and the cluster analysis were almost in agreement that the diversity does not cluster according to geographical patterns but according to similarity; the results showed maximum similarity among clusters with few accessions. It is recommended that other molecular markers be tested in the same study area.
Keywords: pearl millet, genetic diversity, simple sequence repeat (SSR)
Procedia PDF Downloads 269
144 Regression Approach for Optimal Purchase of Hosts Cluster in Fixed Fund for Hadoop Big Data Platform
Authors: Haitao Yang, Jianming Lv, Fei Xu, Xintong Wang, Yilin Huang, Lanting Xia, Xuewu Zhu
Abstract:
Given a fixed fund, purchasing fewer hosts of higher capability or, inversely, more hosts of lower capability is an unavoidable trade-off in practice when building a Hadoop big data platform. An exploratory study is presented for a Housing Big Data Platform project (HBDP), where typical big data computing consists of SQL queries with aggregate, join, and space-time condition selections executed on massive data from more than 10 million housing units. In HBDP, an empirical formula was introduced to predict the performance of candidate host clusters for the intended typical big data computing, and it was shaped via a regression approach. With this empirical formula, it is easy to suggest an optimal cluster configuration. The investigation was based on a typical Hadoop computing ecosystem of HDFS+Hive+Spark. A proper metric was proposed to measure the performance of Hadoop clusters in HBDP, which was tested and compared with its predicted counterpart on executing three kinds of typical SQL query tasks. Tests were conducted with respect to the factors of CPU benchmark, memory size, virtual host division, and the number of physical hosts in the cluster. The research has been applied to practical cluster procurement for housing big data computing.
Keywords: Hadoop platform planning, optimal cluster scheme at fixed-fund, performance predicting formula, typical SQL query tasks
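As an illustration of the regression idea described above, the sketch below fits a simple linear model of query runtime against a few cluster factors and then picks the cheapest configuration that meets a runtime target; the factor names, costs and figures are invented for the example and do not reproduce the paper's empirical formula.

```python
# Illustrative sketch only: fit runtime ~ cluster factors with least squares,
# then choose the lowest-cost configuration meeting a runtime target.
# The factors, trial values and costs are hypothetical, not the paper's formula.
import numpy as np

# Observed trials: [CPU benchmark score, memory (GB), number of physical hosts]
X = np.array([
    [120, 64, 4],
    [150, 64, 4],
    [150, 128, 6],
    [200, 128, 8],
    [200, 256, 8],
], dtype=float)
runtime = np.array([310.0, 280.0, 220.0, 170.0, 150.0])  # seconds for a reference SQL workload

# Linear model with an intercept term: runtime = b0 + b1*cpu + b2*mem + b3*hosts
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, runtime, rcond=None)

def predicted_runtime(cpu, mem, hosts):
    return coef @ np.array([1.0, cpu, mem, hosts])

# Candidate purchase plans within a fixed fund (cost figures are made up).
candidates = [
    {"cpu": 150, "mem": 128, "hosts": 6, "cost": 90_000},
    {"cpu": 200, "mem": 128, "hosts": 8, "cost": 115_000},
    {"cpu": 200, "mem": 256, "hosts": 8, "cost": 140_000},
]
budget, target = 120_000, 200.0  # fund limit and runtime goal (seconds)

feasible = [c for c in candidates
            if c["cost"] <= budget and predicted_runtime(c["cpu"], c["mem"], c["hosts"]) <= target]
best = min(feasible, key=lambda c: c["cost"]) if feasible else None
print("model coefficients:", np.round(coef, 3))
print("selected configuration:", best)
```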
Procedia PDF Downloads 232
143 Improving Similarity Search Using Clustered Data
Authors: Deokho Kim, Wonwoo Lee, Jaewoong Lee, Teresa Ng, Gun-Ill Lee, Jiwon Jeong
Abstract:
This paper presents a method for improving object search accuracy using a deep learning model. A major limitation in providing accurate similarity with deep learning is the requirement of a huge amount of data for training pairwise similarity scores (metrics), which is impractical to collect. Thus, similarity scores are usually trained with a relatively small dataset that comes from a different domain, resulting in limited accuracy in measuring similarity. For this reason, this paper proposes a deep learning model that can be trained with a significantly small amount of data: clustered data in which each cluster contains a set of visually similar images. To measure similarity distance with the proposed method, visual features of two images are extracted from intermediate layers of a convolutional neural network with various pooling methods, and the network is trained with pairwise similarity scores defined as zero for images in the same cluster. The proposed method outperforms the state-of-the-art object similarity scoring techniques in evaluation for finding exact items. The proposed method achieves 86.5% accuracy, compared to 59.9% for the state-of-the-art technique. That is, an exact item can be found among four retrieved images with an accuracy of 86.5%, and the remaining retrievals are likely to be similar products. Therefore, the proposed method can reduce the amount of training data by an order of magnitude while providing a reliable similarity metric.
Keywords: visual search, deep learning, convolutional neural network, machine learning
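To make the cluster-supervised pairing concrete, here is a minimal sketch that builds pairwise labels from cluster membership (target distance zero for images in the same cluster) and compares pooled feature vectors with a cosine distance; the feature vectors are random stand-ins for activations taken from an intermediate CNN layer, and all names are hypothetical.

```python
# Sketch of the pairwise supervision described above: images in the same cluster
# get a target distance of 0, images from different clusters a target of 1.
# Random vectors stand in for pooled intermediate-layer CNN features.
import numpy as np

rng = np.random.default_rng(0)
n_images, feat_dim = 8, 16
features = rng.normal(size=(n_images, feat_dim))      # stand-in for CNN activations
cluster_id = np.array([0, 0, 0, 1, 1, 2, 2, 2])        # cluster of visually similar images

def cosine_distance(a, b):
    return 1.0 - float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Build training pairs (i, j, target): 0 if same cluster, 1 otherwise.
pairs = [(i, j, 0.0 if cluster_id[i] == cluster_id[j] else 1.0)
         for i in range(n_images) for j in range(i + 1, n_images)]

# A training loop would push cosine_distance(features[i], features[j]) toward the target;
# here we just report the current average distance for each label as a sanity check.
same = [cosine_distance(features[i], features[j]) for i, j, t in pairs if t == 0.0]
diff = [cosine_distance(features[i], features[j]) for i, j, t in pairs if t == 1.0]
print("pairs:", len(pairs))
print("mean distance, same cluster:", round(float(np.mean(same)), 3))
print("mean distance, different cluster:", round(float(np.mean(diff)), 3))
```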
Procedia PDF Downloads 215
142 Back to Basics: Redefining Quality Measurement for Hybrid Software Development Organizations
Authors: Satya Pradhan, Venky Nanniyur
Abstract:
As the software industry transitions from a license-based model to a subscription-based Software-as-a-Service (SaaS) model, many software development groups are using a hybrid development model that incorporates Agile and Waterfall methodologies in different parts of the organization. The traditional metrics used for measuring software quality in Waterfall or Agile paradigms do not apply to this new hybrid methodology. In addition, to respond to higher quality demands from customers and to gain a competitive advantage in the market, many companies are starting to prioritize quality as a strategic differentiator. As a result, quality metrics are included in the decision-making activities all the way up to the executive level, including board of director reviews. This paper presents key challenges associated with measuring software quality in organizations using the hybrid development model. We introduce a framework called Prevention-Inspection-Evaluation-Removal (PIER) to provide a comprehensive metric definition for hybrid organizations. The framework includes quality measurements, quality enforcement, and quality decision points at different organizational levels and project milestones. The metrics framework defined in this paper is being used for all Cisco systems products used in customer premises. We present several field metrics for one product portfolio (enterprise networking) to show the effectiveness of the proposed measurement system. As the results show, this metrics framework has significantly improved in-process defect management as well as field quality.Keywords: quality management system, quality metrics framework, quality metrics, agile, waterfall, hybrid development system
Procedia PDF Downloads 174
141 Relay-Augmented Bottleneck Throughput Maximization for Correlated Data Routing: A Game Theoretic Perspective
Authors: Isra Elfatih Salih Edrees, Mehmet Serdar Ufuk Türeli
Abstract:
In this paper, an energy-aware method is presented, integrating energy-efficient relay-augmented techniques for correlated data routing with the goal of optimizing bottleneck throughput in wireless sensor networks. The system tackles the dual challenge of throughput optimization while considering sensor network energy consumption. A unique routing metric has been developed to enable throughput maximization while minimizing energy consumption by utilizing data correlation patterns. The paper introduces a game theoretic framework to address the NP-complete optimization problem inherent in throughput-maximizing correlation-aware routing with energy limitations. By creating an algorithm that blends energy-aware route selection strategies with the best reaction dynamics, this framework provides a local solution. The suggested technique considerably raises the bottleneck throughput for each source in the network while reducing energy consumption by choosing the best routes that strike a compromise between throughput enhancement and energy efficiency. Extensive numerical analyses verify the efficiency of the method. The outcomes demonstrate the significant decrease in energy consumption attained by the energy-efficient relay-augmented bottleneck throughput maximization technique, in addition to confirming the anticipated throughput benefits.Keywords: correlated data aggregation, energy efficiency, game theory, relay-augmented routing, throughput maximization, wireless sensor networks
Procedia PDF Downloads 82
140 Measuring Corporate Brand Loyalties in Business Markets: A Case for Caution
Authors: Niklas Bondesson
Abstract:
Purpose: This paper attempts to examine how different facets of attitudinal brand loyalty are determined by different brand image elements in business markets. Design/Methodology/Approach: Statistical analysis is employed to data from a web survey, covering 226 professional packaging buyers in eight countries. Findings: The results reveal that different brand loyalty facets have different antecedents. Affective brand loyalties (or loyalty 'feelings') are mainly driven by customer associations to service relationships, whereas customers’ loyalty intentions (to purchase and recommend a brand) are triggered by associations to the general reputation of the company. The findings also indicate that willingness to pay a price premium is a distinct form of loyalty, with unique determinants. Research implications: Theoretically, the paper suggests that corporate B2B brand loyalty needs to be conceptualised with more refinement than has been done in extant B2B branding work. Methodologically, the paper highlights that single-item approaches can be fruitful when measuring B2B brand loyalty, and that multi-item scales can conceal important nuances in terms of understanding why customers are loyal. Practical implications: The idea of a loyalty 'silver metric' is an attractive idea, but this study indicates that firms who rely too much on one single type of brand loyalty risk to miss important building blocks. Originality/Value/Contribution: The major contribution is a more multi-faceted conceptualisation, and measurement, of corporate B2B brand loyalty and its brand image determinants than extant work has provided.Keywords: brand equity, business-to-business branding, industrial marketing, buying behaviour
Procedia PDF Downloads 414
139 Deep Learning Approach for Chronic Kidney Disease Complications
Authors: Mario Isaza-Ruget, Claudia C. Colmenares-Mejia, Nancy Yomayusa, Camilo A. González, Andres Cely, Jossie Murcia
Abstract:
Quantification of the risks associated with the development of complications from chronic kidney disease (CKD) through accurate survival models can help with patient management. A retrospective cohort study was carried out that included patients diagnosed with CKD in a primary care program and followed up between 2013 and 2018. Time-dependent and static covariates associated with demographic, clinical, and laboratory factors were included. Deep Learning (DL) survival analyses were developed for three CKD outcomes: CKD stage progression, >25% decrease in Estimated Glomerular Filtration Rate (eGFR), and Renal Replacement Therapy (RRT). Models were evaluated and compared with Random Survival Forest (RSF) based on the concordance index (C-index) metric. In total, 2,143 patients were included. Two models were developed for each outcome; the Deep Neural Network (DNN) model reported C-index=0.9867 for CKD stage progression, C-index=0.9905 for reduction in eGFR, and C-index=0.9867 for RRT. Regarding the RSF model, C-index=0.6650 was reached for CKD stage progression, C-index=0.6759 for decreased eGFR, and C-index=0.8926 for RRT. DNN models applied in a survival analysis context with consideration of longitudinal covariates at the start of follow-up can predict renal stage progression, a significant decrease in eGFR, and RRT. The success of these survival models lies in the appropriate definition of survival times and the analysis of covariates, especially those that vary over time.
Keywords: artificial intelligence, chronic kidney disease, deep neural networks, survival analysis
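The concordance index used to compare the models above can be computed directly from predicted risk scores, event indicators and follow-up times; the short sketch below is a plain implementation of Harrell's C-index on made-up numbers, not the study's data or model.

```python
# Plain implementation of Harrell's concordance index (C-index) for survival models.
# A pair (i, j) is comparable when subject i had the event before j's follow-up time;
# it is concordant when the model assigned i the higher risk. Values are illustrative.

def concordance_index(times, events, risk):
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        if events[i] != 1:          # censored subjects cannot anchor a comparable pair
            continue
        for j in range(n):
            if times[i] < times[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy cohort: follow-up time (months), event indicator (1 = progression), model risk score.
times  = [6, 14, 20, 25, 32, 40]
events = [1, 1, 0, 1, 0, 0]
risk   = [0.92, 0.80, 0.35, 0.60, 0.20, 0.10]
print("C-index:", round(concordance_index(times, events, risk), 3))
```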
Procedia PDF Downloads 134
138 Spatial Interpolation of Aerosol Optical Depth Pollution: Comparison of Methods for the Development of Aerosol Distribution
Authors: Sahabeh Safarpour, Khiruddin Abdullah, Hwee San Lim, Mohsen Dadras
Abstract:
Air pollution is a growing problem arising from domestic heating, high density of vehicle traffic, electricity production, and expanding commercial and industrial activities, all increasing in parallel with urban population. Monitoring and forecasting of air quality parameters are important due to health impact. One widely available metric of aerosol abundance is the aerosol optical depth (AOD). The AOD is the integrated light extinction coefficient over a vertical atmospheric column of unit cross section, which represents the extent to which the aerosols in that vertical profile prevent the transmission of light by absorption or scattering. Seasonal aerosol optical depth (AOD) values at 550 nm derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor onboard NASA’s Terra satellites, for the 10 years period of 2000-2010 were used to test 7 different spatial interpolation methods in the present study. The accuracy of estimations was assessed through visual analysis as well as independent validation based on basic statistics, such as root mean square error (RMSE) and correlation coefficient. Based on the RMSE and R values of predictions made using measured values from 2000 to 2010, Radial Basis Functions (RBFs) yielded the best results for spring, summer, and winter and ordinary kriging yielded the best results for fall.Keywords: aerosol optical depth, MODIS, spatial interpolation techniques, Radial Basis Functions
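The accuracy assessment described above comes down to comparing interpolated AOD values with held-out observations; the sketch below leaves one station out at a time, predicts it with simple inverse-distance weighting as a stand-in for the RBF or kriging interpolators of the study, and reports RMSE and the correlation coefficient. The coordinates and AOD values are fabricated for illustration.

```python
# Leave-one-out validation of a spatial interpolator, scored with RMSE and Pearson r.
# Inverse-distance weighting stands in here for the RBF/kriging methods of the study;
# the station coordinates and AOD values below are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(20, 2))                 # station locations (km)
aod = 0.3 + 0.002 * coords[:, 0] + rng.normal(0, 0.02, 20) # synthetic AOD at 550 nm

def idw_predict(train_xy, train_val, target_xy, power=2.0):
    d = np.linalg.norm(train_xy - target_xy, axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return float(np.sum(w * train_val) / np.sum(w))

predicted = np.empty_like(aod)
for i in range(len(aod)):                                  # leave-one-out cross-validation
    mask = np.arange(len(aod)) != i
    predicted[i] = idw_predict(coords[mask], aod[mask], coords[i])

rmse = float(np.sqrt(np.mean((predicted - aod) ** 2)))
r = float(np.corrcoef(predicted, aod)[0, 1])
print(f"RMSE = {rmse:.4f}, r = {r:.3f}")
```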
Procedia PDF Downloads 408
137 Description of Anthracotheriidae Remains from the Middle and Upper Siwaliks of Punjab, Pakistan
Authors: Abdul M. Khan, Ayesha Iqbal
Abstract:
In this paper, new dental remains of Merycopotamus (Anthracotheriidae) are described. The specimens were collected during field work by the authors from the well dated fossiliferous locality 'Hasnot' belonging to the Dhok Pathan Formation, and from 'Tatrot' village belonging to Tatrot Formation of the Potwar Plateau, Pakistan. The stratigraphic age of the Neogene deposits around Hasnot is 7 - 5 Ma; whereas the age of the Tatrot Formation is from 3.4 - 2.6 Ma. The newly discovered material when compared with the previous records of the genus Merycopotamus from the Siwaliks led us to identify all the three reported species of this genus from the Siwaliks of Pakistan. As the sample comprises only the dental remains so the identification of the specimens is solely based upon the morpho-metric analysis. The occlusal pattern of the upper molar in Merycopotamus dissimilis is different from Merycopotamus medioximus and Merycopotamus nanus in having a mesostyle fully divided, forming two prominent cusps, while mesostyle in M. medioximus is partly divided and small lateral crests are present on the mesostyle. A continuous loop like mesostyle is present in Merycopotamus nanus. The entoconid fold is present in Merycopotamus dissimilis on the lower molars whereas it is absent in Merycopotamus medioximus and Merycopotamus nanus. The hypoconulid in M. dissimilis is relatively simple but a loop like hypoconulid is present in M. medioximus and M. nanus. The results of the present findings are in line with the previous records of the genus Merycopotamus, with M. nanus, M. medioximus and M. dissimilis in the Late Miocene – Early Pliocene Dhok Pathan Formation, and M. dissimilis in the Late Pliocene Tatrot sediments of Pakistan.Keywords: Dhok Pathan, late miocene, merycopotamus, pliocene, Tatrot
Procedia PDF Downloads 242
136 Quantifying Meaning in Biological Systems
Authors: Richard L. Summers
Abstract:
The advanced computational analysis of biological systems is becoming increasingly dependent upon an understanding of the information-theoretic structure of the materials, energy and interactive processes that comprise those systems. The stability and survival of these living systems are fundamentally contingent upon their ability to acquire and process the meaning of information concerning the physical state of its biological continuum (biocontinuum). The drive for adaptive system reconciliation of a divergence from steady-state within this biocontinuum can be described by an information metric-based formulation of the process for actionable knowledge acquisition that incorporates the axiomatic inference of Kullback-Leibler information minimization driven by survival replicator dynamics. If the mathematical expression of this process is the Lagrangian integrand for any change within the biocontinuum then it can also be considered as an action functional for the living system. In the direct method of Lyapunov, such a summarizing mathematical formulation of global system behavior based on the driving forces of energy currents and constraints within the system can serve as a platform for the analysis of stability. As the system evolves in time in response to biocontinuum perturbations, the summarizing function then conveys information about its overall stability. This stability information portends survival and therefore has absolute existential meaning for the living system. The first derivative of the Lyapunov energy information function will have a negative trajectory toward a system's steady state if the driving force is dissipating. By contrast, system instability leading to system dissolution will have a positive trajectory. The direction and magnitude of the vector for the trajectory then serves as a quantifiable signature of the meaning associated with the living system’s stability information, homeostasis and survival potential.Keywords: meaning, information, Lyapunov, living systems
Procedia PDF Downloads 131
135 A Comparative Study Mechanical Properties of Polytetrafluoroethylene Materials Synthesized by Non-Conventional and Conventional Techniques
Authors: H. Lahlali F. El Haouzi, A.M.Al-Baradi, I. El Aboudi, M. El Azhari, A. Mdarhri
Abstract:
Polytetrafluoroethylene (PTFE) is a high-performance thermoplastic polymer with exceptional physical and chemical properties, such as a high melting temperature, high thermal stability, and very good chemical resistance. Nevertheless, manufacturing PTFE is problematic due to its high melt viscosity (10^12 Pa·s). In practice, it is by now well established that this property presents a serious problem when classical methods, in particular hot pressing and high-temperature extrusion, are used to synthesize dense PTFE materials. In this framework, we use here a new process, namely spark plasma sintering (SPS), to elaborate PTFE samples from micrometric powder particles. It consists in applying an electric current and pressure simultaneously and directly to the sample powder. By controlling the processing parameters of this technique, a series of PTFE samples is easily obtained in remarkably short times, as reported in an earlier work. Our central goal in the present study is to understand how the non-conventional SPS process affects the mechanical properties at room temperature. To this end, a second, commercial series of PTFE samples synthesized by the extrusion method is investigated. The tensile mechanical properties are found to be superior for the first set of samples (SPS). However, this trend is not observed in the results obtained from compression testing. The observed macro-behaviors are correlated with physical properties of the two series of samples, such as their crystallinity and density. Upon close examination of these properties, we believe the SPS technique can be seen as a promising way to elaborate this high-molecular-mass polymer without compromising its mechanical properties.
Keywords: PTFE, extrusion, Spark Plasma Sintering, physical properties, mechanical behavior
Procedia PDF Downloads 308
134 Distance Learning and Modern Challenges of Education Management in Georgia
Authors: Giorgi Gaganidze, Eter Kharaishvili
Abstract:
The atypical crisis has created new challenges in the education system. Globally, including in Georgia, traditional methods of managing the education system have appeared particularly vulnerable. In addition, new opportunities for the introduction of innovative management of learning processes have emerged. The aim of the research is to identify the main challenges in the field of education management in the distance learning process in Georgia and to develop recommendations on the opportunities for the introduction of innovative management. The paper substantiates the relevance of the research, in particular, it notes that in Georgia, as in many countries, distance learning in higher education institutions became particularly crucial during the Covid-19 pandemic. What is more, theoretical and practical aspects of distance learning are less proven, and a number of problems have been identified in the field of education management in Georgia. The article justifies the need to study the challenges of distance learning for the formation of a sustainable education management system. Within the bibliographic research, there are grouped the opinions of researchers on the modern problems of distance learning and education management in the article. Based on scientific papers, the expectations formed about distance learning are studied, and the main focus is on the existing problems of education management during the atypical crisis. The article discusses the forms and opportunities of distance learning in different countries, evaluates different approaches and challenges to distance learning, and justifies the role of education management in effective distance learning. The paper uses various theoretical-methodological tools of research, including desk research on the research topic; Data selection-grouping, problem identification is carried out by analysis, synthesis, sampling, induction, and other methods;SWOT analysis is used to assess the strengths, weaknesses, opportunities, and threats of distance education and management; The level of student satisfaction with distance learning is determined through the Population-based / Census-based approach; The results of the research are processed by SPSS program. Quantitative research and semi-structured interviews with relevant focus groups were conducted to identify working directions for innovative management of distance learning and education. Research has shown that the demand for distance education is growing in Georgia, but the need to introduce innovative education management remains a particular challenge. Conclusions have been made on the introduction of innovative education management, and the relevant recommendations have been developed.Keywords: distance learning, management challenges, education management, innovative management
Procedia PDF Downloads 125
133 Implementing a Strategy of Reliability Centred Maintenance (RCM) in the Libyan Cement Industry
Authors: Khalid M. Albarkoly, Kenneth S. Park
Abstract:
The substantial development of the construction industry has forced the cement industry, its major support, to focus on achieving maximum productivity to meet the growing demand for this material. Statistics indicate that the demand for cement rose from 1.6 billion metric tons (bmt) in 2000 to 4 bmt in 2013. This means that the reliability of a production system needs to be at the highest level that can be achieved by good maintenance. This paper studies the extent to which the implementation of reliability centred maintenance (RCM) is needed as a strategy for increasing the reliability of production system components and thus ensuring continuous productivity. In a case study of four Libyan cement factories, 80 employees were surveyed and 12 top and middle managers were interviewed. It is evident that these factories usually break down more often than once per month, which has led to a decline in productivity; they cannot produce more than 50% of their designed capacity. This has resulted from the poor reliability of their production systems as a consequence of poor or insufficient maintenance. It has been found that most of the factories' employees misunderstand maintenance and its importance. The main cause of this problem is the lack of qualified and trained staff; in addition, most employees are not motivated as a result of a lack of management support and interest. In response to these findings, it is suggested that the RCM strategy should be implemented in the four factories. The paper shows the importance of developing maintenance strategies through the implementation of RCM in these factories, the purpose being to overcome the problems that reduce the reliability of the production systems. This study could be a useful source of information for academic researchers and industrial organisations which are still experiencing problems in maintenance practices.
Keywords: Libyan cement industry, reliability centred maintenance, maintenance, production, reliability
Procedia PDF Downloads 390
132 Barriers to Public Innovation in Colombia: Case Study in Central Administrative Region
Authors: Yessenia Parrado, Ana Barbosa, Daniela Mahe, Sebastian Toro, Jhon Garcia
Abstract:
Public innovation has gained strength in recent years in response to the need to find new strategies or mechanisms to interact between government entities and citizens. In this way, the Colombian government has been promoting policies aimed at strengthening innovation as a fundamental aspect in the work of public entities. However, in order to potentiate the capacities of public servants and therefore of the institutions and organizations to which they belong, it is necessary to be able to understand the context under which they operate in their daily work. This article aims to compile the work developed by the laboratory of innovation, creativity, and new technologies LAB101 of the National University of Colombia for the National Department of Planning. A case study was developed in the central region of Colombia made up of five departments, through the construction of instruments based on quantitative techniques in response to the item combined with qualitative analysis through semi-structured interviews to understand the perception of possible barriers to innovation and the obstacles that have prevented the acceleration of transformation within public organizations. From the information collected, different analyzes are carried out that allows a more robust explanation to be given to the results obtained, and a set of categories are established to group different characteristics associated with possible difficulties that officials perceive to innovate and that are later conceived as barriers. Finally, a proposal for an indicator was built to measure the degree of innovation within public entities in order to be able to carry a metric in future opportunities. The main findings of this study show three key components to be strengthened in public entities and organizations: governance, knowledge management, and the promotion of collaborative workspaces.Keywords: barriers, enablers, management, public innovation
Procedia PDF Downloads 115
131 Genetic Diversity of Sugar Beet Pollinators
Authors: Ksenija Taški-Ajdukovic, Nevena Nagl, Živko Ćurčić, Dario Danojević
Abstract:
Information about the genetic diversity of sugar beet parental populations is of great importance for hybrid breeding programs. The aim of this research was to evaluate genetic diversity among and within populations and lines of diploid sugar beet pollinators by using SSR markers. The plant material consisted of eight pollinators originating from three USDA-ARS breeding programs and four pollinators from the Institute of Field and Vegetable Crops, Novi Sad. Depending on the presence of the self-fertility gene, the pollinators were divided into three groups: autofertile (inbred lines), autosterile (open-pollinating populations), and a group with partial presence of the autofertility gene. A total of 40 SSR primers were screened, out of which 34 were selected for the analysis of genetic diversity. A total of 129 different alleles were obtained, with a mean value of 3.2 alleles per SSR primer. According to the results of the genetic variability assessment, the number and percentage of polymorphic loci were maximal in pollinator NS1 and tester cms2, while the effective number of alleles, expected heterozygosity and Shannon's index were highest in pollinator EL0204. Analysis of molecular variance (AMOVA) showed that 77.34% of the total genetic variation was attributed to intra-varietal variance. The correspondence analysis results were very similar to the grouping by the neighbor-joining algorithm. The number of groups was smaller by one, because the correspondence analysis merged the IFVCNS pollinators with CZ25 into one group. Pollinators FC220, FC221 and C 51 were in the next group, while self-fertile pollinators CR10 and C930-35 from USDA-Salinas were separated. On another branch were the self-sterile pollinators EL0204 and EL53 from USDA-East Lansing. The sterile testers cms1 and cms2 formed a separate group. The presented results confirmed that SSR analysis can be successfully used in the estimation of genetic diversity within and among sugar beet populations. Since the tested pollinators differed in the presence of the self-fertility gene, their heterozygosity differed as well; it was lower in genotypes with fixed self-fertility genes. Since most of the tested populations were open-pollinated, which rarely self-pollinate, high variability within the populations was expected. Cluster analysis grouped the populations according to their origin.
Keywords: auto fertility, genetic diversity, pollinator, SSR, sugar beet
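The per-locus statistics reported above (effective number of alleles, expected heterozygosity, Shannon's index) follow directly from allele frequencies; here is a small sketch computing them for one hypothetical SSR locus, with allele counts invented for the example.

```python
# Per-locus diversity statistics from allele counts at one SSR locus.
# The allele counts are invented; formulas: He = 1 - sum(p^2), Ne = 1 / sum(p^2),
# Shannon's index I = -sum(p * ln p).
import numpy as np

allele_counts = np.array([34, 22, 18, 10, 5], dtype=float)  # hypothetical counts of 5 alleles
p = allele_counts / allele_counts.sum()                      # allele frequencies

sum_p2 = float(np.sum(p ** 2))
expected_heterozygosity = 1.0 - sum_p2
effective_alleles = 1.0 / sum_p2
shannon_index = float(-np.sum(p * np.log(p)))

print("observed alleles:", len(p))
print("expected heterozygosity He:", round(expected_heterozygosity, 3))
print("effective number of alleles Ne:", round(effective_alleles, 3))
print("Shannon's index I:", round(shannon_index, 3))
```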
Procedia PDF Downloads 461
130 Performance Comparison of Microcontroller-Based Optimum Controller for Fruit Drying System
Authors: Umar Salisu
Abstract:
This research presents the development of a hot-air tomato drying system. To provide more efficient and continuous temperature control, a microcontroller-based optimal controller was developed. The system is based on a power control principle to achieve smooth power variations depending on a feedback temperature signal from the process. An LM35 temperature sensor and an LM399 differential comparator were used to measure the temperature. The mathematical model of the system was developed, and the optimal controller was designed, simulated and compared with the transient response of a PID controller. A controlled environment suitable for fruit drying is developed within a closed chamber through a three-step process. First, infrared light is used internally to preheat the fruit and speedily remove the water content inside the fruit for fast drying. Second, hot air of a specified temperature is blown inside the chamber to maintain the humidity below a specified level and exhaust the humid air from the chamber. Third, the microcontroller disconnects the power to the chamber after the moisture content of the fruit has been reduced to a minimum. Experiments were conducted with 1 kg of fresh tomatoes at three different temperatures (40, 50 and 60 °C) at a constant relative humidity of 30% RH. The results obtained indicate that the system significantly reduces the drying time without affecting the quality of the fruit. In the context of temperature control, the results showed that the response of the optimal controller has zero overshoot, whereas the PID controller response overshoots by about 30% of the set-point. Another performance metric used is the rise time; the optimal controller rose without any delay, while the PID controller was delayed by more than 50 s. It can be argued that the performance of the optimal controller is preferable to that of the PID controller, since it does not overshoot and it starts in good time.
Keywords: drying, microcontroller, optimum controller, PID controller
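To illustrate the two performance metrics quoted above (overshoot and rise time), the sketch below simulates a discrete PID loop on a crude first-order thermal model of the chamber and extracts both figures from the step response; the plant constants and gains are arbitrary placeholders and not those of the actual dryer.

```python
# Discrete PID loop on a first-order thermal model, reporting overshoot and rise time.
# Plant constants and PID gains are arbitrary placeholders, not the dryer's values.

def simulate_pid(kp, ki, kd, setpoint=50.0, t_amb=25.0, dt=1.0, steps=600):
    temp, integral, prev_err = t_amb, 0.0, setpoint - t_amb
    history = []
    for _ in range(steps):
        err = setpoint - temp
        integral += err * dt
        derivative = (err - prev_err) / dt
        u = max(0.0, min(1.0, kp * err + ki * integral + kd * derivative))  # heater duty 0..1
        # First-order plant: gain 60 K at full power, time constant 120 s.
        temp += dt * (-(temp - t_amb) + 60.0 * u) / 120.0
        prev_err = err
        history.append(temp)
    return history

def overshoot_and_rise_time(history, setpoint=50.0, t_amb=25.0, dt=1.0):
    peak = max(history)
    overshoot_pct = max(0.0, peak - setpoint) / (setpoint - t_amb) * 100.0
    target = t_amb + 0.9 * (setpoint - t_amb)               # 90% rise criterion
    rise = next((i * dt for i, t in enumerate(history) if t >= target), float("inf"))
    return overshoot_pct, rise

resp = simulate_pid(kp=0.8, ki=0.02, kd=0.5)
ov, tr = overshoot_and_rise_time(resp)
print(f"overshoot = {ov:.1f} % of the step, rise time (90%) = {tr:.0f} s")
```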
Procedia PDF Downloads 301
129 X-Ray Diffraction, Microstructure, and Mössbauer Studies of Nanostructured Materials Obtained by High-Energy Ball Milling
Authors: N. Boudinar, A. Djekoun, A. Otmani, B. Bouzabata, J. M. Greneche
Abstract:
High-energy ball milling is a solid-state powder processing technique that allows synthesizing a variety of equilibrium and non-equilibrium alloy phases starting from elemental powders. The advantage of this process technology is that the powder can be produced in large quantities and the processing parameters can be easily controlled; thus it is a suitable method for commercial applications. It can also be used to produce amorphous and nanocrystalline materials in commercially relevant amounts and is amenable to the production of a variety of alloy compositions. Mechanical alloying (high-energy ball milling) provides an inter-dispersion of elements through repeated cold welding and fracture of free powder particles; the grain size decreases to the nanometric scale and the elements mix together. Progressively, the concentration gradients disappear and eventually the elements are mixed at the atomic scale. The end products depend on many parameters, such as the milling conditions and the thermodynamic properties of the milled system. Here, the mechanical alloying technique has been used to prepare nanocrystalline Fe-50 wt.% Ni and Fe-64 wt.% Ni alloys from powder mixtures. Scanning electron microscopy (SEM) with energy-dispersive X-ray analysis and Mössbauer spectroscopy were used to study the mixing at the nanometric scale. Mössbauer spectroscopy confirmed the ferromagnetic ordering and was used to calculate the hyperfine field distribution. The Mössbauer spectra of both alloys show the existence of a ferromagnetic phase attributed to a γ-Fe-Ni solid solution.
Keywords: nanocrystalline, mechanical alloying, X-ray diffraction, Mössbauer spectroscopy, phase transformations
Procedia PDF Downloads 437
128 Environmental Protection by Optimum Utilization of Car Air Conditioners
Authors: Sanchita Abrol, Kunal Rana, Ankit Dhir, S. K. Gupta
Abstract:
According to N.R.E.L.’s findings, 700 crore gallons of petrol is used annually to run the air conditioners of passenger vehicles (nearly 6% of total fuel consumption in the USA). Beyond fuel use, the Environmental Protection Agency reported that refrigerant leaks from auto air conditioning units add an additional 5 crore metric tons of carbon emissions to the atmosphere each year. The objective of our project is to deal with this vital issue by carefully modifying the interiors of a car thereby increasing its mileage and the efficiency of its engine. This would consequently result in a decrease in tail emission and generated pollution along with improved car performance. An automatic mechanism, deployed between the front and the rear seats, consisting of transparent thermal insulating sheet/curtain, would roll down as per the requirement of the driver in order to optimize the volume for effective air conditioning, when travelling alone or with a person. The reduction in effective volume will yield favourable results. Even on a mild sunny day, the temperature inside a parked car can quickly spike to life-threatening levels. For a stationary parked car, insulation would be provided beneath its metal body so as to reduce the rate of heat transfer and increase the transmissivity. As a result, the car would not require a large amount of air conditioning for maintaining lower temperature, which would provide us similar benefits. Authors established the feasibility studies, system engineering and primarily theoretical and experimental results confirming the idea and motivation to fabricate and test the actual product.Keywords: automation, car, cooling insulating curtains, heat optimization, insulation, reduction in tail emission, mileage
Procedia PDF Downloads 277
127 Cluster Analysis and Benchmarking for Performance Optimization of a Pyrochlore Processing Unit
Authors: Ana C. R. P. Ferreira, Adriano H. P. Pereira
Abstract:
Given the frequent variation of mineral properties throughout the Araxá pyrochlore deposit, an operation with high variability in quality and performance is expected, even if good homogenization work has been carried out before feeding the processing plants. These results could be improved and standardized if the blend composition parameters that most influence the processing route were determined and the types of raw material were then grouped by them, finally providing a reference with operational settings for each group. Associating the physical and chemical parameters of a unit operation with a benchmark, or even an optimal reference, of metallurgical recovery and product quality results in reduced production costs, optimization of the mineral resource, and greater stability in the subsequent processes of the production chain that uses the mineral of interest. Conducting a comprehensive exploratory data analysis to identify which characteristics of the ore are most relevant to the process route, combined with the use of machine learning algorithms for grouping the raw material (ore) and associating these groups with reference variables in the process benchmark, is a reasonable alternative for the standardization and improvement of mineral processing units. Clustering methods based on Decision Trees and K-Means were employed, together with algorithms based on benchmarking theory, with criteria defined by the process team in order to reference the best adjustments for processing the ore piles of each cluster. A clean user interface was created to obtain the outputs of the developed algorithm. The results were measured through the average time for adjustment and stabilization of the process after a new pile of homogenized ore enters the plant, as well as the average time needed to achieve the best processing result. Direct gains in the metallurgical recovery of the process were also measured. The results were promising, with a reduction in the adjustment and stabilization time when starting the processing of a new ore pile, as well as attainment of the benchmark. Also noteworthy are the gains in metallurgical recovery, which reflect a significant saving in ore consumption and a consequent reduction in production costs, hence a more rational use of the tailings dams and an optimized life of the mineral deposit.
Keywords: mineral clustering, machine learning, process optimization, pyrochlore processing
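A compact sketch of the grouping-plus-benchmark idea discussed above: ore piles described by a few blend parameters are clustered with K-Means, and the best observed metallurgical recovery inside each cluster is stored as that cluster's benchmark; the feature names, values and use of scikit-learn are assumptions made for the illustration only.

```python
# Sketch: cluster ore piles by blend composition, then keep the best observed
# metallurgical recovery per cluster as its benchmark. Data are fabricated and
# scikit-learn's KMeans is assumed to be available.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
n_piles = 60
# Hypothetical blend parameters: Nb2O5 grade (%), P2O5 (%), moisture (%)
X = np.column_stack([
    rng.normal(2.5, 0.4, n_piles),
    rng.normal(4.0, 0.8, n_piles),
    rng.normal(8.0, 1.5, n_piles),
])
recovery = rng.normal(62.0, 5.0, n_piles)          # observed metallurgical recovery (%)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

benchmarks = {}
for c in range(4):
    in_cluster = labels == c
    best = recovery[in_cluster].max()
    benchmarks[c] = best
    print(f"cluster {c}: {in_cluster.sum():2d} piles, benchmark recovery = {best:.1f} %")

# A new homogenized pile is assigned to its cluster and compared against that benchmark.
new_pile = np.array([[2.7, 3.8, 7.5]])
c_new = int(kmeans.predict(new_pile)[0])
print("new pile -> cluster", c_new, "benchmark", round(benchmarks[c_new], 1), "%")
```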
Procedia PDF Downloads 143
126 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things
Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker
Abstract:
Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect a change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracy and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties, and sometimes they are conflicting. In this study, we present a framework that takes advantage of the capabilities of Evidence Theory (a.k.a. Dempster-Shafer and Dezert-Smarandache Theories) for representing and managing uncertainty and conflict, in order to achieve fast change detection and effectively deal with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Then mass functions are calculated, and the related combination rules are applied to combine the mass values among all sensors. Furthermore, we applied the method to estimate the minimum number of sensors that need to be combined, so that computational efficiency could be improved. A cumulative sum test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
Keywords: CUSUM, evidence theory, kl divergence, quickest change detection, time series data
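For concreteness, here is a minimal sketch of two of the ingredients named above: Dempster's rule combining two sensors' mass functions over a simple change/no-change frame, and a CUSUM statistic driven by the log-likelihood ratio between pre- and post-change Gaussian models; the distribution parameters, masses and threshold are invented for the example and are not from the study.

```python
# Sketch of two ingredients: Dempster's combination of two sensors' masses over
# the frame {change, no_change} (with 'either' the ignorance mass on the whole
# frame), and a CUSUM test on the log-likelihood ratio of pre-/post-change
# Gaussians. All parameters here are invented.
import math
import numpy as np

def dempster_combine(m1, m2):
    """Combine two mass functions with focal sets 'change', 'no_change', 'either'."""
    sets = {"change": {"change"}, "no_change": {"no_change"},
            "either": {"change", "no_change"}}
    combined, conflict = {k: 0.0 for k in sets}, 0.0
    for a, sa in sets.items():
        for b, sb in sets.items():
            inter = sa & sb
            mass = m1[a] * m2[b]
            if not inter:
                conflict += mass
            else:
                key = "change" if inter == {"change"} else ("no_change" if inter == {"no_change"} else "either")
                combined[key] += mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

def gaussian_loglik(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

# Two sensors' beliefs about a change, combined with Dempster's rule.
m_sensor1 = {"change": 0.6, "no_change": 0.2, "either": 0.2}
m_sensor2 = {"change": 0.5, "no_change": 0.3, "either": 0.2}
print("combined masses:", {k: round(v, 3) for k, v in dempster_combine(m_sensor1, m_sensor2).items()})

# CUSUM on the log-likelihood ratio; the mean shifts from 0 to 1 at sample 50.
rng = np.random.default_rng(7)
data = np.concatenate([rng.normal(0, 1, 50), rng.normal(1, 1, 50)])
threshold, s = 5.0, 0.0
for t, x in enumerate(data):
    s = max(0.0, s + gaussian_loglik(x, 1, 1) - gaussian_loglik(x, 0, 1))
    if s > threshold:
        print("change declared at sample", t)
        break
```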
Procedia PDF Downloads 334
125 Causes for the Precession of the Perihelion in the Planetary Orbits
Authors: Kwan U. Kim, Jin Sim, Ryong Jin Jang, Sung Duk Kim
Abstract:
Leverrier was the first to discover the precession of the perihelion in the planetary orbits, while Einstein was the first to explain this astronomical phenomenon. The amount of the precession of the perihelion in Einstein's theory of gravitation has been explained by means of the inverse fourth power force (inverse third power potential) introduced into the theory of gravitation through the Schwarzschild metric. However, this methodology has a serious shortcoming: it cannot explain the cause of the precession of the perihelion in the planetary orbits. According to our study, without identifying the cause of the precession of the perihelion, six methods can explain the amount of precession discovered by Leverrier. Therefore, the problem of what causes the perihelion to precess in the planetary orbits must be solved for physics, because it is a profound scientific and technological problem for a basic test in the construction of a relativistic theory of gravitation. The scientific solution to the problem shows that Einstein's explanation of the planetary orbits is an artifact of the numerical expressions obtained from a fictitious gravitation introduced into the theory of gravitation and from a wrong definition of proper time. The problem of the precession of the perihelion seems to have been solved already by means of the general theory of relativity, but, in essence, the cause of the astronomical phenomenon has not yet been successfully explained for astronomy. The correct solution to the problem comes from a generalized theory of gravitation. Therefore, in this paper it is shown that, by means of the Schwarzschild field and the physical quantities of the relativistic Lagrangian reflected in it, fictitious gravitation is not the main factor that can cause the perihelion to precess in the planetary orbits. In addition, it is shown that the main factor that causes the perihelion to precess in the planetary orbits is the inverse third power force that really exists in the relativistic region of the Solar system.
Keywords: inverse third power force, precession of the perihelion, fictitious gravitation, planetary orbits
Procedia PDF Downloads 13
124 An In-Situ Integrated Micromachining System for Intricate Micro-Parts Machining
Authors: Shun-Tong Chen, Wei-Ping Huang, Hong-Ye Yang, Ming-Chieh Yeh, Chih-Wei Du
Abstract:
This study presents a novel versatile high-precision integrated micromachining system that combines contact and non-contact micromachining techniques to machine intricate micro-parts precisely. Two broad methods of micro fabrication-1) volume additive (micro co-deposition), and 2) volume subtractive (nanometric flycutting, ultrafine w-EDM (wire Electrical Discharge Machining), and micro honing) - are integrated in the developed micromachining system, and their effectiveness is verified. A multidirectional headstock that supports various machining orientations is designed to evaluate the feasibility of multifunctional micromachining. An exchangeable working-tank that allows for various machining mechanisms is also incorporated into the system. Hence, the micro tool and workpiece need not be unloaded or repositioned until all the planned tasks have been completed. By using the designed servo rotary mechanism, a nanometric flycutting approach with a concentric rotary accuracy of 5-nm is constructed and utilized with the system to machine a diffraction-grating element with a nano-metric scale V-groove array. To improve the wear resistance of the micro tool, the micro co-deposition function is used to provide a micro-abrasive coating by an electrochemical method. The construction of ultrafine w-EDM facilitates the fabrication of micro slots with a width of less than 20-µm on a hardened tool. The hardened tool can thus be employed as a micro honing-tool to hone a micro hole with an internal diameter of 200 µm on SKD-11 molded steel. Experimental results prove that intricate micro-parts can be in-situ manufactured with high-precision by the developed integrated micromachining system.Keywords: integrated micromachining system, in-situ micromachining, nanometric flycutting, ultrafine w-EDM, micro honing
Procedia PDF Downloads 410
123 Potassium-Phosphorus-Nitrogen Detection and Spectral Segmentation Analysis Using Polarized Hyperspectral Imagery and Machine Learning
Authors: Nicholas V. Scott, Jack McCarthy
Abstract:
Military, law enforcement, and counter terrorism organizations are often tasked with target detection and image characterization of scenes containing explosive materials in various types of environments where light scattering intensity is high. Mitigation of this photonic noise using classical digital filtration and signal processing can be difficult. This is partially due to the lack of robust image processing methods for photonic noise removal, which strongly influence high resolution target detection and machine learning-based pattern recognition. Such analysis is crucial to the delivery of reliable intelligence. Polarization filters are a possible method for ambient glare reduction by allowing only certain modes of the electromagnetic field to be captured, providing strong scene contrast. An experiment was carried out utilizing a polarization lens attached to a hyperspectral imagery camera for the purpose of exploring the degree to which an imaged polarized scene of potassium, phosphorus, and nitrogen mixture allows for improved target detection and image segmentation. Preliminary imagery results based on the application of machine learning algorithms, including competitive leaky learning and distance metric analysis, to polarized hyperspectral imagery, suggest that polarization filters provide a slight advantage in image segmentation. The results of this work have implications for understanding the presence of explosive material in dry, desert areas where reflective glare is a significant impediment to scene characterization.Keywords: explosive material, hyperspectral imagery, image segmentation, machine learning, polarization
Procedia PDF Downloads 142
122 The Potential of Sown Pastures as Feedstock for Biofuels in Brazil
Authors: Danilo G. De Quadros
Abstract:
Biofuels are a priority in the renewable energy agenda. The utilization of tropical grasses for ethanol production is a real opportunity for Brazil to reach world leadership in biofuel production, because there are 100 million hectares of sown pastures, which represent 20% of all land and 80% of the agricultural area. At present, tropical grasses are basically used to raise livestock. The results obtained in this research could bring tremendous advances not only to national technology and the economy but also to social and environmental aspects. Thus, the objective of this work was to estimate, through well-established international models, the potential of biofuel production using sown tropical pastures as feedstock and to compare the results with sugarcane ethanol, considering the state of the art of conversion technology and the advantages and limiting factors. Data from the national and international literature on forage yield and biochemical conversion yield were used. Several scenarios were studied to evaluate the potential advantages and limitations of cellulosic ethanol production, since non-food feedstocks appeal to conversion strategies, covering harvest, densification, logistics, environmental impacts (carbon and water cycles, nutrient recycling and biodiversity), and social aspects. If Brazil used only 1% of its sown pastures for ethanol production by the biochemical pathway, with an average dry matter yield of 15 metric tons per hectare per year (yields of 40 tons have been reported), the result would be 721 billion liters annually, which represents 10 times more than the sugarcane ethanol projected by the Government for 2030. However, more research is necessary to take the results to commercial scale at competitive costs, considering the many strategies and methods applied in ethanol production from cellulosic feedstock.
Keywords: biofuels, biochemical pathway, cellulosic ethanol, sustainability
Procedia PDF Downloads 263
121 Phytochemical Screening and Anti-Hypothyroidism Activity of Lepidium sativum Ethanolic Extract
Authors: Reham Hajomer, Ikram Elsiddig, Amna Hamad
Abstract:
Lepidium sativum (garden cress), belonging to the Brassicaceae family, is an annual herb locally known as El-rshad. In Ayurveda, it is an important medicinal plant, traditionally used for the treatment of jaundice, liver problems, spleen diseases, gastrointestinal disorders, menstrual problems, fracture, arthritis, inflammatory conditions and hypothyroidism. Hypothyroidism is a condition in which the thyroid gland does not produce enough thyroid hormones (triiodothyronine, T3, and thyroxine, T4), which is commonly caused by iodine deficiency. It is divided into primary and secondary hypothyroidism, the primary form caused by failure of thyroid function and the secondary form by inadequate secretion of thyroid-stimulating hormone (TSH) from the pituitary gland or of thyrotropin-releasing hormone (TRH) from the hypothalamus. The disease is most common in women over age 60. The objective of this study was to determine whether Lepidium sativum affects the level of thyroid hormones. The extract was prepared with 96% ethanol using a Soxhlet apparatus. The anti-hypothyroidism activity was tested using thirty male Wistar rats weighing 100-140 g. They were divided into five groups. Group 1: normal group, administered only distilled water. Then 10 mg/kg propylthiouracil was added to the drinking water of all other groups to induce hypothyroidism. Group 2: negative control without any treatment; Group 3: test group, treated with oral administration of 500 mg/kg of the extract; Group 4: treated with oral administration of 250 mg/kg of the extract; Group 5: standard group (positive control), treated with intraperitoneal levothyroxine. All rats were kept for 20 days in the animal house at room temperature with proper ventilation and provided with a standard diet. The results show that the Lepidium sativum extract increased T3 and T4 in the propylthiouracil-induced rats, with values of 0.29 ng/dl T3 and 0.57 U T4 for the 500 mg/kg dose and 0.27 ng/dl T3 and 0.517 U T4 for the 250 mg/kg dose, compared with the standard values of 0.241 ng/dl T3 and 0.516 U T4, indicating that Lepidium sativum can stimulate thyroid function and possesses a significant anti-hypothyroidism effect, with p-values ranging from 0.000006 to 0.893472. In conclusion, from the results obtained, the Lepidium sativum plant extract was found to possess anti-hypothyroidism effects, acting as an agent that stimulates thyroid hormone secretion.
Keywords: anti-hypothyroidism, extract, lepidium, sativum
Procedia PDF Downloads 205
120 The Aesthetics of Time in Thus Spoke Zarathustra: A Reappraisal of the Eternal Recurrence of the Same
Authors: Melanie Tang
Abstract:
According to Nietzsche, the eternal recurrence is his most important idea. However, it is perhaps his most cryptic and difficult to interpret. Early readings considered it as a cosmological hypothesis about the cyclic nature of time. However, following Nehamas’s ‘Life as Literature’ (1985), it has become a widespread interpretation that the eternal recurrence never really had any theoretical dimensions, and is not actually a philosophy of time, but a practical thought experiment intended to measure the extent to which we have mastered and perfected our lives. This paper endeavours to challenge this line of thought becoming scholarly consensus, and to carry out a more complex analysis of the eternal recurrence as it is presented in Thus Spoke Zarathustra. In its wider scope, this research proposes that Thus Spoke Zarathustra — as opposed to The Birth of Tragedy — be taken as the primary source for a study of Nietzsche’s Aesthetics, due to its more intrinsic aesthetic qualities and expressive devices. The eternal recurrence is the central philosophy of a work that communicates its ideas in unprecedentedly experimental and aesthetic terms, and a more in-depth understanding of why Nietzsche chooses to present his conception of time in aesthetic terms is warranted. Through hermeneutical analysis of Thus Spoke Zarathustra and engagement with secondary sources such as those by Nehamas, Karl Löwith, and Jill Marsden, the present analysis challenges the ethics of self-perfection upon which current interpretations of the recurrence are based, as well as their reliance upon a linear conception of time. Instead, it finds the recurrence to be a cyclic interplay between the self and the world, rather than a metric pertaining solely to the self. In this interpretation, time is found to be composed of an intertemporal rather than linear multitude of will to power, which structures itself through tensional cycles into an experience of circular time that can be seen to have aesthetic dimensions. In putting forth this understanding of the eternal recurrence, this research hopes to reopen debate on this key concept in the field of Nietzsche studies.Keywords: Nietzsche, eternal recurrence, Zarathustra, aesthetics, time
Procedia PDF Downloads 150
119 Determination of the Effective Economic and/or Demographic Indicators in Classification of European Union Member and Candidate Countries Using Partial Least Squares Discriminant Analysis
Authors: Esra Polat
Abstract:
Partial Least Squares Discriminant Analysis (PLSDA) is a statistical method for classification that consists of a classical Partial Least Squares Regression (PLSR) in which the dependent variable is a categorical one expressing the class membership of each observation. PLSDA can be applied in many cases where classical discriminant analysis cannot be applied, for example, when the number of observations is low and the number of independent variables is high. When there are missing values, PLSDA can be applied to the data that are available. Finally, it is well suited when multicollinearity between the independent variables is high. The aim of this study is to determine the economic and/or demographic indicators which are effective in grouping the 28 European Union (EU) member countries and 7 candidate countries (including the potential candidates Bosnia and Herzegovina (BiH) and Kosova), using the data set obtained from the World Bank database for 2014. Leaving political issues aside, the analysis is only concerned with the economic and demographic variables that potentially influence a country's eligibility for EU entrance. Hence, in this study, both the performance of the PLSDA method in classifying the countries correctly into their pre-defined groups (candidate or member) and the differences between the EU countries and candidate countries in terms of these indicators are analyzed. As a result of the PLSDA, the percentage correctness value of 100% indicates that all 35 countries are classified correctly. Moreover, the most important variables that determine the status of member and candidate countries in terms of economic indicators are identified as 'external balance on goods and services (% GDP)', 'gross domestic savings (% GDP)' and 'gross national expenditure (% GDP)', which means that the 2014 economic structure of the countries is the most important determinant of EU membership. Subsequently, the model was validated to prove its predictive ability by using the data set for 2015. For the prediction sample, 97.14% of the countries are correctly classified. An interesting result is obtained only for BiH, which is still a potential candidate for the EU but was predicted as an EU member when the 2015 indicator data set was used as the prediction sample. Although BiH has made a significant transformation from a war-torn country to a semi-functional state, ethnic tensions, nationalistic rhetoric and political disagreements are still evident, which inhibit Bosnian progress towards the EU.
Keywords: classification, demographic indicators, economic indicators, European Union, partial least squares discriminant analysis
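To show how a PLSDA classification like the one above can be set up in practice, the following sketch encodes class membership as dummy variables, fits a PLS regression and assigns each observation to the class with the largest predicted response; the data are random placeholders and scikit-learn's PLSRegression is an assumed stand-in for the software actually used in the study.

```python
# PLSDA sketch: dummy-code the class membership, fit PLS regression, classify by
# the largest predicted response. Data are random placeholders; scikit-learn's
# PLSRegression is an assumed stand-in for the software actually used in the study.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
n_member, n_candidate, n_indicators = 28, 7, 10

# Synthetic indicator matrix: candidates get a small shift on a few indicators.
X_member = rng.normal(0.0, 1.0, size=(n_member, n_indicators))
X_candidate = rng.normal(0.0, 1.0, size=(n_candidate, n_indicators))
X_candidate[:, :3] += 1.5
X = np.vstack([X_member, X_candidate])

labels = np.array([0] * n_member + [1] * n_candidate)      # 0 = member, 1 = candidate
Y = np.zeros((len(labels), 2))
Y[np.arange(len(labels)), labels] = 1.0                     # one-hot class membership

pls = PLSRegression(n_components=2)
pls.fit(X, Y)

predicted = np.argmax(pls.predict(X), axis=1)               # class with largest response
accuracy = float(np.mean(predicted == labels))
print(f"training classification accuracy: {accuracy:.2%}")

# Indicators with the largest absolute weight on the first PLS component are the
# most influential in separating the two groups (analogous to the study's ranking).
top = np.argsort(-np.abs(pls.x_weights_[:, 0]))[:3]
print("most influential indicator indices:", top.tolist())
```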
Procedia PDF Downloads 280