Search results for: variable clustering
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2711

2021 Capacitated Multiple Allocation P-Hub Median Problem on a Cluster Based Network under Congestion

Authors: Çağrı Özgün Kibiroğlu, Zeynep Turgut

Abstract:

This paper considers a hub location problem in which the network service area is partitioned into predetermined zones (represented by given node clusters) and the capacity levels of potential hub nodes are determined a priori as a hub selection criterion, in order to investigate the effect of congestion on the network. The objective is to design the hub network by determining all required hub locations in the node clusters and allocating non-hub nodes to hubs such that the total cost, comprising transportation cost, the opening cost of hubs and a penalty cost for exceeding the capacity level at hubs, is minimized. A mixed-integer linear programming model is developed by introducing additional constraints to the traditional model of the capacitated multiple allocation hub location problem, and the model is tested empirically.
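The routing structure behind such models can be illustrated with a tiny brute-force sketch (not the paper's MILP): for each candidate set of p hubs, every origin-destination flow is routed via its cheapest hub pair, with the inter-hub leg discounted by a factor alpha, as in classical p-hub median formulations. All data below are made up for illustration.

```python
from itertools import combinations

# Toy instance: 4 nodes, symmetric distances, origin-destination demands.
dist = [[0, 4, 6, 8],
        [4, 0, 5, 7],
        [6, 5, 0, 3],
        [8, 7, 3, 0]]
flow = {(0, 3): 10, (1, 2): 5, (0, 2): 2}   # flow volumes between node pairs
alpha = 0.6                                 # inter-hub economies-of-scale discount
p = 2                                       # number of hubs to open

def routing_cost(hubs):
    """Total cost of routing every flow via its cheapest open hub pair (k, m)."""
    return sum(w * min(dist[i][k] + alpha * dist[k][m] + dist[m][j]
                       for k in hubs for m in hubs)
               for (i, j), w in flow.items())

# Multiple allocation: enumerate all p-subsets of nodes as candidate hub sets
# (fine for toy sizes; the MILP replaces this enumeration at scale).
best = min(combinations(range(len(dist)), p), key=routing_cost)
```

For this toy instance, opening hubs at nodes 0 and 2 is optimal; congestion penalties and opening costs from the paper's model would simply add terms to the objective.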

Keywords: hub location problem, p-hub median problem, clustering, congestion

Procedia PDF Downloads 479
2020 Diagnose of the Future of Family Businesses Based on the Study of Spanish Family Businesses Founders

Authors: Fernando Doral

Abstract:

Family businesses are a key phenomenon within the business landscape. Nevertheless, the term involves two concepts ("family" and "business") that are nowadays evolving rapidly. Consequently, it is not easy to diagnose whether the family business will be a growing or a declining phenomenon, which is the objective of this study. For that purpose, a sample of 50 established Spanish companies from various sectors was taken. Different factors were identified for each enterprise, related to the profile of the founders, such as age, the number of sons and daughters, or the support received from the family at the moment of starting the business. That information was used as input for a clustering method to identify groups, which could help define the founders' profiles. That characterization was used as a basis to identify three factors whose evolution should be analyzed: family structures, the business landscape and entrepreneurs' motivations. The analysis of the evolution of these three factors seems to indicate a negative tendency for family businesses. Therefore, the consequent diagnosis of this study is to consider family businesses a declining phenomenon.
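The clustering step described above can be sketched with a minimal k-means loop; the paper does not name its algorithm, so this is a generic stand-in on hypothetical founder profiles (age, number of children), with deterministic initialization for reproducibility.

```python
# Hypothetical founder profiles: (age at founding, number of children).
data = [(32, 1), (35, 2), (34, 1), (58, 4), (61, 5), (60, 3)]

def kmeans(points, k, iters=20):
    centers = [points[i] for i in range(k)]            # init: first k points
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:                               # assign to nearest center
            idx = min(range(k),
                      key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[idx].append(p)
        centers = [tuple(sum(col) / len(g) for col in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]      # recompute means
    return centers, groups

centers, groups = kmeans(data, 2)
```

On this toy data the loop separates a "younger founder" profile from an "older founder" profile, which is the kind of grouping the study uses to characterize founders.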

Keywords: business diagnose, business trends, family business, family business founders

Procedia PDF Downloads 190
2019 Data Mining Techniques for Anti-Money Laundering

Authors: M. Sai Veerendra

Abstract:

Today, money laundering (ML) poses a serious threat not only to financial institutions but also to the nation. This criminal activity is becoming more and more sophisticated and seems to have moved from the cliché of drug trafficking to financing terrorism, not forgetting personal gain. Most financial institutions internationally have been implementing anti-money laundering (AML) solutions to fight investment fraud. However, traditional investigative techniques consume numerous man-hours. Recently, data mining approaches have been developed and are considered well-suited techniques for detecting ML activities. Within the scope of a collaboration project on developing a new data mining solution for AML units in an international investment bank in Ireland, we survey recent data mining approaches for AML. In this paper, we not only present these approaches but also give an overview of the important factors in building data mining solutions for AML activities.
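One elementary data mining idea used in AML screening can be sketched directly: flag transactions that deviate strongly from an account's typical behaviour via a z-score rule. This is a toy illustration on made-up amounts, not a production AML method.

```python
import statistics

# Hypothetical transaction amounts for one account; one is wildly atypical.
amounts = [120, 95, 130, 110, 105, 98, 5000, 115]

mean = statistics.mean(amounts)
sd = statistics.stdev(amounts)

# Flag anything more than 2 standard deviations from the account mean.
flagged = [a for a in amounts if abs(a - mean) / sd > 2]
```

Real AML pipelines layer clustering, link analysis and rule engines on top of such primitives, but the screening principle is the same: model "normal" behaviour, surface deviations for human review.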

Keywords: data mining, clustering, money laundering, anti-money laundering solutions

Procedia PDF Downloads 524
2018 Simulation Approach for a Comparison of Linked Cluster Algorithm and Clusterhead Size Algorithm in Ad Hoc Networks

Authors: Ameen Jameel Alawneh

Abstract:

A mobile ad hoc network (MANET) is a collection of wireless mobile hosts that dynamically form a temporary network without the aid of a system administrator, relying on neither fixed infrastructure nor pre-established sessions. A single transmission inherently reaches several nodes, and each node functions as both a host and a router. The network may be represented as a set of clusters, each managed by a clusterhead. The cluster size is not fixed; it depends on the movement of nodes. We propose a clusterhead size algorithm (CHSize). This clustering algorithm can be used by several routing algorithms for ad hoc networks. An elected clusterhead is assigned for communication with all other clusters. Analysis and simulation of the algorithm, implemented using the GloMoSim network simulator, MATLAB and Maple 11, show that the proposed algorithm achieves its goals.
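A minimal sketch in the spirit of the lowest-ID clusterhead election used by the Linked Cluster Algorithm may help fix ideas (the topology and election details below are illustrative assumptions, not the paper's CHSize algorithm): nodes decide in increasing ID order, joining the lowest-ID neighbouring head or becoming a head themselves.

```python
# Hypothetical static snapshot of a MANET as an adjacency map (node ID -> neighbours).
neighbors = {
    1: {2, 3},
    2: {1, 3, 4},
    3: {1, 2},
    4: {2, 5},
    5: {4},
}

heads = []
head_of = {}
for n in sorted(neighbors):                      # nodes decide in ID order
    head_nbrs = [h for h in heads if h in neighbors[n]]
    if head_nbrs:
        head_of[n] = min(head_nbrs)              # join lowest-ID neighbouring head
    else:
        heads.append(n)                          # no head within range: become one
        head_of[n] = n
```

In a real MANET this election reruns as nodes move, which is exactly why cluster sizes fluctuate and why bounding clusterhead load (as CHSize does) matters.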

Keywords: simulation, MANET, Ad-hoc, cluster head size, linked cluster algorithm, loss and dropped packets

Procedia PDF Downloads 376
2017 Financial Fraud Prediction for Russian Non-Public Firms Using Relational Data

Authors: Natalia Feruleva

Abstract:

The goal of this paper is to develop a fraud risk assessment model based on both relational and financial data and to test the impact of relationships between Russian non-public companies on the likelihood of financial fraud. Relationships here mean various linkages between companies, such as parent-subsidiary relationships and person-related relationships; these linkages may provide additional opportunities for committing fraud. Person-related relationships appear when firms share a director, or the director owns another firm. To measure the relationships, the number of companies belonging to or managed by the CEO and the number of subsidiaries were calculated. Moreover, a dummy variable describing the existence of a parent company was included in the model. Control variables such as financial leverage and return on assets were also included because they describe the motivating factors of fraud. To check the hypotheses about the influence of the chosen parameters on the likelihood of financial fraud, information was collected about person-related relationships between companies, the existence of a parent company and subsidiaries, profitability and the level of debt. The resulting sample consists of 160 Russian non-public firms: 80 fraudsters and 80 non-fraudsters operating in 2006-2017. The dependent variable is dichotomous, taking the value 1 if the firm is engaged in financial crime and 0 otherwise. Employing a probit model, it was revealed that the number of companies belonging to or managed by the firm's CEO has a significant impact on the likelihood of financial fraud: the more companies are affiliated with the CEO, the higher the likelihood that the company will be involved in financial crime. The forecast accuracy of the model is about 80%. Thus, the model based on both relational and financial data gives a high level of forecast accuracy.
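The probit specification itself is compact: the predicted fraud probability is Φ(β'x), with Φ the standard normal CDF. The coefficients below are hypothetical, chosen only to illustrate the reported direction of the CEO-affiliation effect, not the paper's estimates.

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Hypothetical coefficients: intercept, CEO-affiliated firm count, leverage.
b0, b_ceo_firms, b_leverage = -1.2, 0.35, 0.8

def fraud_probability(ceo_firms, leverage):
    """Probit prediction P(fraud = 1 | x) = Phi(b0 + b1*x1 + b2*x2)."""
    return phi(b0 + b_ceo_firms * ceo_firms + b_leverage * leverage)
```

With a positive coefficient on CEO-affiliated firms, the predicted probability rises monotonically with the number of affiliated companies, matching the paper's finding.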

Keywords: financial fraud, fraud prediction, non-public companies, regression analysis, relational data

Procedia PDF Downloads 105
2016 The Data Quality Model for the IoT based Real-time Water Quality Monitoring Sensors

Authors: Rabbia Idrees, Ananda Maiti, Saurabh Garg, Muhammad Bilal Amin

Abstract:

IoT devices are the basic building blocks of an IoT network; they generate enormous volumes of real-time, high-speed data that help organizations and companies take intelligent decisions. Integrating this enormous data from multiple sources and transferring it to the appropriate client is fundamental to IoT development, and handling this huge quantity of devices along with the huge volume of data is very challenging. IoT devices are battery-powered and resource-constrained; to provide energy-efficient communication, they sleep and wake up periodically or aperiodically depending on traffic loads, in order to reduce energy consumption. Sometimes these devices get disconnected due to battery depletion. If a node is not available in the network, the IoT network provides incomplete, missing, and inaccurate data. Moreover, many IoT applications, like vehicle tracking and patient tracking, require the IoT devices to be mobile; if the distance of a device from the sink node becomes too great, the connection is lost, and other devices join the network to replace the broken-down and departed devices. This makes IoT devices dynamic in nature, which brings uncertainty and unreliability into the IoT network, produces poor-quality data, and obscures the actual cause of abnormal data. If data are of poor quality, decisions are likely to be unsound. It is therefore highly important to process data and estimate data quality before using it in IoT applications. In the past, many researchers tried to estimate data quality and provided several machine learning (ML), stochastic and statistical methods to analyze stored data in the data processing layer, without focusing on the challenges and issues arising from the dynamic nature of IoT devices and how they impact data quality.
This research conducts a comprehensive review of the impact of the dynamic nature of IoT devices on data quality and presents a data quality model that can deal with this challenge and produce good-quality data. The model is built for sensors monitoring water quality, using DBSCAN clustering and weather sensors. An extensive study was carried out on the relationship between the data of weather sensors and of sensors monitoring the water quality of lakes and beaches, and a detailed theoretical analysis is presented of the correlation between the independent data streams of the two sets of sensors. With the help of this analysis and DBSCAN, a data quality model is prepared. The model encompasses five dimensions of data quality: it performs outlier detection and removal, assesses completeness and patterns of missing values, and checks the accuracy of the data with the help of each cluster's position. Finally, statistical analysis is performed on the clusters formed by DBSCAN, and consistency is evaluated through the coefficient of variation (CoV).
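The DBSCAN idea at the core of the model can be sketched compactly: readings in dense regions form clusters, isolated readings are labelled noise (-1). The 1-D readings below are made up for brevity; the paper works with multidimensional sensor streams.

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN on scalar readings with |a - b| as the distance."""
    labels = {}
    cluster = -1
    for p in points:
        if p in labels:
            continue
        nbrs = [q for q in points if abs(q - p) <= eps]
        if len(nbrs) < min_pts:
            labels[p] = -1                 # provisionally noise
            continue
        cluster += 1                       # p is a core point: start a cluster
        labels[p] = cluster
        seeds = [q for q in nbrs if q != p]
        while seeds:
            q = seeds.pop()
            if q in labels:
                if labels[q] == -1:
                    labels[q] = cluster    # noise reached from a core: border point
                continue
            labels[q] = cluster
            q_nbrs = [r for r in points if abs(r - q) <= eps]
            if len(q_nbrs) >= min_pts:     # q is itself a core point: expand
                seeds.extend(r for r in q_nbrs if r not in labels)
    return labels

readings = [7.0, 7.1, 7.2, 7.3, 9.5, 9.6, 9.7, 14.0]   # e.g. pH-like values
labels = dbscan(readings, eps=0.25, min_pts=2)
```

The outlying reading gets label -1, which is exactly the "outlier detection and removal" dimension of the model; the positions of the remaining clusters support the accuracy check.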

Keywords: clustering, data quality, DBSCAN, Internet of Things (IoT)

Procedia PDF Downloads 123
2015 Modeling the Demand for the Healthcare Services Using Data Analysis Techniques

Authors: Elizaveta S. Prokofyeva, Svetlana V. Maltseva, Roman D. Zaitsev

Abstract:

Rapidly evolving modern data analysis technologies play a large role in understanding the operation of the healthcare system and its characteristics. Nowadays, one of the key tasks in urban healthcare is to optimize resource allocation; thus, the application of data analysis in medical institutions to solve optimization problems determines the significance of this study. The purpose of this research was to establish the dependence between indicators of the effectiveness of a medical institution and its resources. Hospital discharges by diagnosis, hospital days of in-patients and in-patient average length of stay were selected as the performance indicators and the demand of the medical facility. Hospital beds by type of care, medical technology (magnetic resonance tomographs, gamma cameras, angiographic complexes and lithotripters) and physicians characterized the resource provision of medical institutions for the developed models. The data source for the research was the open database of the statistical service Eurostat. This source was chosen because its databases contain the complete and open information necessary for research tasks in the field of public health; in addition, the statistical database has a user-friendly interface that allows analytical reports to be built quickly. The study provides information on 28 European countries for the period from 2007 to 2016. For all countries included in the study, with the most accurate and complete data for the period under review, predictive models were developed based on historical panel data. An attempt to improve the quality and interpretability of the models was made by cluster analysis of the investigated set of countries. The main idea was to assess the similarity of the joint behavior of the variables throughout the time period under consideration, in order to identify groups of similar countries and to construct separate regression models for them.
Therefore, the original time series were used as the objects of clustering. The k-medoids clustering algorithm was used; the sampled objects themselves served as the cluster centers, since determining a centroid when working with time series involves additional difficulties. The number of clusters was chosen using the silhouette coefficient. After the cluster analysis, it was possible to significantly improve the predictive power of the models: for example, in one of the clusters, the MAPE was only 0.82%, which makes it possible to conclude that this forecast is highly reliable in the short term. The predicted values of the developed models have a relatively low level of error and can be used to make decisions on the provision of the hospital with medical personnel and other resources. The research displays strong dependencies between the demand for medical services and the modern medical equipment variable, which highlights the importance of the technological component for the successful development of a medical facility. Data analysis currently has huge potential to significantly improve health services, and medical institutions that are the first to introduce these technologies will certainly have a competitive advantage.
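The k-medoids step can be sketched with a compact Lloyd-style alternation (not full PAM): assign each object to its nearest medoid, then pick within each cluster the object that minimizes total distance. Scalar toy values stand in here for the per-country time series, whose pairwise distances the real method would use.

```python
def k_medoids(points, k, iters=10):
    """Alternate nearest-medoid assignment and in-cluster medoid updates."""
    medoids = points[:k]                          # deterministic init
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                          # assign to nearest medoid
            idx = min(range(k), key=lambda i: abs(p - medoids[i]))
            clusters[idx].append(p)
        # medoid = the cluster member minimizing total distance to the cluster
        medoids = [min(c, key=lambda m: sum(abs(m - q) for q in c)) if c else medoids[i]
                   for i, c in enumerate(clusters)]
    return medoids, clusters

series_summary = [1.0, 1.2, 0.9, 5.0, 5.3, 5.1]   # stand-ins for country series
medoids, clusters = k_medoids(series_summary, 2)
```

Requiring centers to be sampled objects is what makes the method attractive for time series, exactly as the abstract notes: no centroid of curves ever has to be defined.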

Keywords: data analysis, demand modeling, healthcare, medical facilities

Procedia PDF Downloads 130
2014 Performance Prediction Methodology of Slow Aging Assets

Authors: M. Ben Slimene, M.-S. Ouali

Abstract:

Asset management of urban infrastructures faces a multitude of challenges that must be overcome to obtain a reliable measurement of performance. Predicting the performance of slowly aging systems is one of those challenges; doing so helps the asset manager investigate specific failure modes and undertake the appropriate maintenance and rehabilitation interventions to avoid catastrophic failures, as well as to optimize maintenance costs. This article presents a methodology for modeling the deterioration of slowly degrading assets based on their operating history. It consists of extracting degradation profiles by grouping together assets that exhibit similar degradation sequences, using an unsupervised classification technique derived from artificial intelligence. The obtained clusters are used to build the performance prediction models. The methodology is applied to a sample from a stormwater drainage culvert dataset.
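The second stage of the methodology — fitting a prediction model per degradation cluster — can be sketched with an ordinary least-squares line. The condition scores and ages below are made up for illustration; the paper's regression models are fitted per cluster on real inspection histories.

```python
# Hypothetical inspection history for one cluster of similar culverts.
ages =   [5, 10, 20, 30, 40]        # years in service
scores = [4.9, 4.7, 4.2, 3.8, 3.3]  # condition rating (5 = new, 1 = failed)

n = len(ages)
mean_x = sum(ages) / n
mean_y = sum(scores) / n

# Closed-form simple linear regression: slope = Sxy / Sxx.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(ages, scores))
         / sum((x - mean_x) ** 2 for x in ages))
intercept = mean_y - slope * mean_x

def predict(age):
    """Predicted condition score at a given age for this cluster."""
    return intercept + slope * age
```

A negative slope quantifies the cluster's deterioration rate, and the fitted line can then be inverted to estimate when an asset will cross an intervention threshold.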

Keywords: artificial intelligence, clustering, culvert, regression model, slow degradation

Procedia PDF Downloads 89
2013 Item-Trait Pattern Recognition of Replenished Items in Multidimensional Computerized Adaptive Testing

Authors: Jianan Sun, Ziwen Ye

Abstract:

Multidimensional computerized adaptive testing (MCAT) is a popular research topic in psychometrics. It is important for practitioners to know clearly the item-trait patterns of administered items when a test like MCAT is operated. Item-trait pattern recognition refers to detecting which latent traits in a psychological test are measured by each of the specified items. If the item-trait patterns of the replenished items in an MCAT item pool are well detected, the interpretability of the items can be improved, which in turn allows the abilities of the examinees taking the MCAT to be estimated accurately. This research seeks to solve the item-trait pattern recognition problem for replenished items in the MCAT item pool from the perspective of statistical variable selection. The popular multidimensional item response theory model, the multidimensional two-parameter logistic model, is assumed to fit the response data of MCAT. The proposed method uses the least absolute shrinkage and selection operator (LASSO) to detect item-trait patterns of replenished items based on the essential information of item responses and ability estimates of examinees collected from a designed MCAT procedure. Several advantages of the proposed method are outlined. First, the method does not strictly depend on the relative order between the replenished items and the selected operational items, so it allows the replenished items to be mixed into the operational items in any reasonable order, for example to satisfy content constraints or other test requirements. Second, the LASSO improves the interpretability of the multidimensional replenished items in MCAT. Third, the method exploits the advantages of shrinkage for variable selection, so it can help to check item quality and the key dimensional features of replenished items, and it saves more time and labor in response data collection than the traditional factor analysis method.
Moreover, the proposed method ensures that the dimensions of replenished items are recognized consistently with the dimensions of operational items in the MCAT item pool. Simulation studies are conducted to investigate the performance of the proposed method under different conditions, varying the dimensionality of the item pool, latent trait correlation, item discrimination, test length and item selection criteria in MCAT. Results show that the proposed method can accurately detect the item-trait patterns of replenished items in two-dimensional and three-dimensional item pools. Selecting enough operational items from an item pool of highly discriminating items by Bayesian A-optimality in MCAT improves the recognition accuracy of item-trait patterns of replenished items for the proposed method. The pattern recognition accuracy under conditions with correlated traits is better than under those with independent traits, especially for item pools consisting of comparatively low-discriminating items. In summary, the proposed data-driven method based on the LASSO can accurately and efficiently detect the item-trait patterns of replenished items in MCAT.
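The mechanism that makes the LASSO suitable for pattern recognition — soft-thresholding drives small coefficients exactly to zero, so surviving nonzero loadings indicate which traits an item measures — can be sketched with coordinate descent on synthetic data (features assumed standardized; this is the generic LASSO, not the paper's full MCAT procedure).

```python
def soft_threshold(rho, lam):
    """Shrink toward zero; return exactly 0 inside [-lam, lam]."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso(X, y, lam, iters=100):
    """Coordinate descent for min 0.5*||y - X b||^2 + lam*||b||_1."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # correlation of feature j with the partial residual
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * beta[k]
                                            for k in range(p) if k != j))
                      for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / z
    return beta

# y depends on column 0 only; the irrelevant column 1 should be zeroed out.
X = [[1, 0.1], [2, -0.2], [3, 0.1], [4, -0.1]]
y = [2.0, 4.1, 5.9, 8.2]
beta = lasso(X, y, lam=1.0)
```

The exact zero on the irrelevant coordinate is the selection behaviour the paper relies on: an item's zeroed loadings mark the traits it does not measure.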

Keywords: item-trait pattern recognition, least absolute shrinkage and selection operator, multidimensional computerized adaptive testing, variable selection

Procedia PDF Downloads 113
2012 Unsupervised Learning with Self-Organizing Maps for Named Entity Recognition in the CONLL2003 Dataset

Authors: Assel Jaxylykova, Alexander Pak

Abstract:

This study utilized a Self-Organizing Map (SOM) for unsupervised learning on the CONLL-2003 dataset for Named Entity Recognition (NER). The process involved encoding words into 300-dimensional vectors using FastText. These vectors were input into a SOM grid, where training adjusted node weights to minimize distances. The SOM provided a topological representation for identifying and clustering named entities, demonstrating its efficacy without labeled examples. Results showed an F1-measure of 0.86, highlighting SOM's viability. Although some methods achieve higher F1 measures, SOM eliminates the need for labeled data, offering a scalable and efficient alternative. The SOM's ability to uncover hidden patterns provides insights that could enhance existing supervised methods. Further investigation into potential limitations and optimization strategies is suggested to maximize benefits.
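The SOM mechanics described above can be sketched in a few lines: a grid of nodes with weight vectors, where each input pulls its best-matching unit (BMU) and the BMU's grid neighbours toward it, producing a topology-preserving map. The 2-D points below are synthetic stand-ins for the 300-dimensional FastText vectors used in the study.

```python
import random

random.seed(0)

grid_size = 5   # a tiny 1-D SOM grid; real maps are usually 2-D and larger
weights = [[random.random(), random.random()] for _ in range(grid_size)]

def bmu(x):
    """Index of the best-matching unit: the node with the closest weight vector."""
    return min(range(grid_size),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(weights[i], x)))

# Two well-separated synthetic "word vector" clusters.
data = [[0.0, 0.0], [0.1, 0.05], [1.0, 1.0], [0.9, 0.95]]

for epoch in range(50):
    lr = 0.5 * (1 - epoch / 50)              # decaying learning rate
    for x in data:
        b = bmu(x)
        for i in range(grid_size):
            if abs(i - b) <= 1:              # update BMU and immediate neighbours
                for d in range(len(x)):
                    weights[i][d] += lr * (x[d] - weights[i][d])
```

After training, inputs from different clusters activate different regions of the grid, which is the property that lets the study cluster named entities without labels.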

Keywords: named entity recognition, natural language processing, self-organizing map, CONLL-2003, semantics

Procedia PDF Downloads 0
2011 Exploring the Nature and Meaning of Theory in the Field of Neuroeducation Studies

Authors: Ali Nouri

Abstract:

Neuroeducation is one of the most exciting research fields and is continually evolving. However, there is a need to develop its theoretical bases in connection with practice. The present paper is a starting attempt in this regard, aiming to provide a space from which to think about neuroeducational theory and to invite further investigation in this area. Accordingly, a comprehensive theory of neuroeducation could be defined as a grouping or clustering of concepts and propositions that describe and explain the nature of human learning, providing valid interpretations and implications useful for educational practice in relation to philosophical aspects or values. While such a theory should originate from the philosophical foundations of the field and explain its normative significance, it also needs to be testable against rigorous evidence if it is to fundamentally advance contemporary educational policy and practice. There is thus a pragmatic need to include a course on neuroeducational theory in the curriculum of the field, and a need to articulate and disseminate substantial discussion of the subject within professional journals and academic societies.

Keywords: neuroeducation studies, neuroeducational theory, theory building, neuroeducation research

Procedia PDF Downloads 436
2010 Young Female’s Heart Was Bitten by Unknown Ghost (Isolated Cardiac Sarcoidosis): A Case Report

Authors: Heru Al Amin

Abstract:

Sarcoidosis is a granulomatous inflammatory disorder of unclear etiology that can affect multiple organ systems. Isolated cardiac sarcoidosis is a very rare condition that causes lethal arrhythmia and heart failure. A definite diagnosis of cardiac sarcoidosis remains challenging, and multimodality imaging plays a pivotal role in diagnosing this entity. Case summary: In this report, we discuss the case of a 50-year-old woman who presented with recurrent palpitation, dizziness, vertigo and presyncope. Electrocardiogram revealed variable heart blocks, including first-degree AV block, second-degree AV block, high-degree AV block, complete AV block, trifascicular block and occasionally supraventricular arrhythmia. Twenty-four-hour Holter monitoring showed atrial bigeminy, first-degree AV block and trifascicular block. Transthoracic echocardiography showed thinning of the basal anteroseptal and inferior septum with LV dilatation and a reduction of global longitudinal strain. A dual-chamber pacemaker was implanted. CT coronary angiography showed no coronary artery disease. Cardiac magnetic resonance revealed basal anteroseptal and inferior septum thinning with focal edema and LGE suggestive of sarcoidosis. Computed tomography of the chest showed no lymphadenopathy or pulmonary infiltration. 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) of the whole body was also performed. We started steroids and followed up with the patient. Conclusion: This case highlights the challenges in identifying and managing isolated CS in a young patient with recurrent syncope and variable heart block. Early, or even late, initiation of steroids can improve arrhythmia as well as left ventricular function.

Keywords: cardiac sarcoidosis, conduction abnormality, syncope, cardiac MRI

Procedia PDF Downloads 73
2009 Physical Activity and Nutrition Intervention for Singaporean Women Aged 50 Years and Above: A Study Protocol for a Community Based Randomised Controlled Trial

Authors: Elaine Yee Sing Wong, Jonine Jancey, Andy H. Lee, Anthony P. James

Abstract:

Singapore has a rapidly aging population, and the majority of older women aged 50 years and above are physically inactive and have unhealthy dietary habits, placing them at high risk of non-communicable diseases. Given the multiplicity of less-than-optimal dietary habits and the high levels of physical inactivity among Singaporean women, it is imperative to develop appropriate lifestyle interventions at recreational centres to enhance both their physical activity and nutrition knowledge, as well as provide them with the opportunity to develop skills to support behaviour change. To the best of our knowledge, this proposed study is the first physical activity and nutrition cluster randomised controlled trial conducted in Singapore for older women. Findings from this study may provide insights and recommendations for policy makers and key stakeholders to create new healthy-living recreational centres with supportive environments. This 6-month community-based cluster randomised controlled trial will involve the implementation and evaluation of a physical activity and nutrition program for community-dwelling Singaporean women who currently attend recreational centres that promote social leisure activities in their local neighbourhood. The intervention will include dietary education and counselling sessions, physical activity classes, and telephone contact by certified fitness instructors and qualified nutritionists. Social Cognitive Theory with Motivational Interviewing will inform the development of strategies to support health behaviour change. Sixty recreational centres located in Singapore will be randomly selected from five major geographical districts and randomly allocated to the intervention (n=30) or control (n=30) cluster. A sample of 600 (intervention n=300; control n=300) women aged 50 years and above will then be recruited from these recreational centres. The control clusters will only undergo pre and post data collection and will not receive the intervention.
It is hypothesised that by the end of the intervention, the intervention group participants (n = 300), compared to the control group (n = 300), will show significant improvements in the following variables: lipid profile, body mass index, physical activity and dietary behaviour, anthropometry, and mental and physical health. Data will be analysed using the Statistical Package for the Social Sciences (SPSS) version 23. Descriptive and summary statistics will be used to quantify participants' characteristics and outcome variables. Multivariable mixed regression analyses will be used to confirm the effects of the proposed health intervention, taking into account the repeated measures and the clustering of the observations. The research protocol was approved by the Curtin University Human Research Ethics Committee (approval number: HRE2016-0366). The study has been registered with the Australian and New Zealand Clinical Trial Registry (12617001022358).
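The cluster randomisation step described above — allocation at the centre level so that all women attending one centre share an arm — can be sketched as follows (centre IDs and the seed are illustrative, not the trial's actual allocation procedure):

```python
import random

random.seed(2016)   # any fixed seed makes the allocation reproducible/auditable

# 60 hypothetical recreational centres; allocation happens per centre, not per woman.
centres = [f"centre_{i:02d}" for i in range(60)]
intervention = set(random.sample(centres, 30))
control = [c for c in centres if c not in intervention]
```

Because the unit of randomisation is the centre, the analysis must account for within-centre correlation, which is why the protocol specifies mixed regression models with clustering.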

Keywords: community based, healthy aging, intervention, nutrition, older women, physical activity

Procedia PDF Downloads 158
2008 Transformations between Bivariate Polynomial Bases

Authors: Dimitris Varsamis, Nicholas Karampetakis

Abstract:

It is well known that any interpolating polynomial P(x,y) in the vector space Pn,m of two-variable polynomials, with degree less than n in x and less than m in y, has various representations that depend on the basis of Pn,m that we select, i.e., the monomial, Newton or Lagrange basis, etc. The aim of this paper is twofold: a) to present transformations between the coordinates of the polynomial P(x,y) in the aforementioned bases, and b) to present transformations between these bases.
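One concrete instance of such a coordinate transformation (a sketch in our notation, for the tensor-product grid case): the Lagrange coordinates of P(x,y) are its values at the grid nodes, so the monomial-to-Lagrange map is a Kronecker product of univariate Vandermonde matrices.

```latex
P(x,y) = \sum_{i=0}^{n-1}\sum_{j=0}^{m-1} a_{ij}\, x^i y^j,
\qquad
q_{kl} = P(x_k, y_l) = \sum_{i=0}^{n-1}\sum_{j=0}^{m-1} a_{ij}\, x_k^{\,i} y_l^{\,j}.

% With Vandermonde matrices (V_x)_{ki} = x_k^i and (V_y)_{lj} = y_l^j,
% so that Q = V_x A V_y^{T}, the coordinate transformation vectorizes to
\operatorname{vec}(Q) = (V_y \otimes V_x)\,\operatorname{vec}(A).
```

Inverting this relation (node sets assumed distinct, so the Vandermonde factors are nonsingular) recovers the monomial coefficients from interpolation values; transformations involving the Newton basis have the same matrix-pair structure with triangular factors.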

Keywords: bivariate interpolation polynomial, polynomial basis, transformations, interpolating polynomial

Procedia PDF Downloads 385
2007 Migration in Times of Uncertainty

Authors: Harman Jaggi, David Steinsaltz, Shripad Tuljapurkar

Abstract:

Understanding the effect of fluctuations on populations is crucial in the context of increasing habitat fragmentation, climate change, and biological invasions, among others. Migration in response to environmental disturbances enables populations to escape unfavorable conditions, benefit from new environments and thereby ride out fluctuations in variable environments. Would populations disperse if there is no uncertainty? Karlin showed in 1982 that when sub-populations experience distinct but fixed growth rates at different sites, greater mixing of populations will lower the overall growth rate relative to the most favorable site. Here we ask if and when environmental variability favors migration over no-migration. Specifically, in random environments, would a small amount of migration increase the overall long-run growth rate relative to the zero migration case? We use analysis and simulations to show how long-run growth rate changes with migration rate. Our results show that when fitness (dis)advantages fluctuate over time across sites, migration may allow populations to benefit from variability. When there is one best site with highest growth rate, the effect of migration on long-run growth rate depends on the difference in expected growth between sites, scaled by the variance of the difference. When variance is large, there is a substantial probability of an inferior site experiencing higher growth rate than its average. Thus, a high variance can compensate for a difference in average growth rates between sites. Positive correlations in growth rates across sites favor less migration. With multiple sites and large fluctuations, the length of shortest cycle (excursion) from the best site (on average) matters, and we explore the interplay between excursion length, average differences between sites and the size of fluctuations. 
Our findings have implications for conservation biology: even when there are superior sites in a sea of poor habitats, variability and habitat quality across space may be key to determining the importance of migration.
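The central question above — can a little migration raise the long-run growth rate in a fluctuating environment? — invites a toy simulation. The growth rates (1.6 good, 0.5 poor, perfectly anti-correlated across two sites) and the migration fraction below are illustrative choices, not the paper's parameters.

```python
import math
import random

random.seed(1)

def long_run_growth(m, steps=20000):
    """Average log growth rate of the total population with migration fraction m."""
    n = [0.5, 0.5]                        # population shares at the two sites
    log_growth = 0.0
    for _ in range(steps):
        # each step, one site is good and the other poor, at random
        r = (1.6, 0.5) if random.random() < 0.5 else (0.5, 1.6)
        n = [n[0] * r[0], n[1] * r[1]]    # local growth
        n = [(1 - m) * n[0] + m * n[1],   # symmetric migration between sites
             (1 - m) * n[1] + m * n[0]]
        total = n[0] + n[1]
        log_growth += math.log(total)     # per-step log growth of the total
        n = [x / total for x in n]        # renormalize to shares
    return log_growth / steps

no_migration = long_run_growth(0.0)
some_migration = long_run_growth(0.1)
```

With anti-correlated fluctuations, spreading the population across sites hedges against bad draws, so a positive migration fraction raises the long-run growth rate relative to zero migration, consistent with the bet-hedging intuition in the abstract.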

Keywords: migration, variable environments, randomness, dispersal, fluctuations, habitat quality

Procedia PDF Downloads 125
2006 Forecast Financial Bubbles: Multidimensional Phenomenon

Authors: Zouari Ezzeddine, Ghraieb Ikram

Abstract:

Building on results from the academic literature that highlight the limitations of previous studies, this article presents the reasons why the prediction of financial bubbles is a multidimensional problem. A new modeling framework for predicting financial bubbles is proposed that links a set of variables spanning several dimensions, which dictates the multidimensional character of the problem, and takes into account the preferences of financial actors. A multicriteria anticipation of the appearance of bubbles in international financial markets helps guard against a possible crisis.

Keywords: classical measures, predictions, financial bubbles, multidimensional, artificial neural networks

Procedia PDF Downloads 555
2005 Advocating for and Implementing the Use of Advance Top Bar (ATB) for a More Than 100% Increase in Honey Yield in Top Bar Hives Owing to Honey Harvesting Without Comb Destruction

Authors: Perry Ayi Mankattah

Abstract:

Introduction: Africa, which should lead the world in honey production, imports three times the honey it produces, even though it has a healthy, industrious and large population of bees. This is due to honey-harvesting mechanisms that destroy the combs, thereby reducing honey production and the rate of harvesting. For Africa to take its place in the world of honey production, it should adopt a method that enables a higher rate of honey harvesting. The Advance Top Bar is a simplified framework that provides that answer. It can be made of wood, plastic or metal and can be fabricated by tin/metal smiths, welders and carpenters at the village level without any sophisticated machines. Material and Methods: The ATB is a top-bar-like hollow framework of dimensions 3.2 x 48 cm that can be made of wood, plastic or metal. It is made up of three parts: a constant hollow top bar and a variable grooved bottom bar, joined through synchronized holes (that align the top and bottom bars) by metal or plastic rods of length 22 cm and diameter 5 mm with rounded balls at both ends. It can be used with or without foundation combs, and with other accessories it provides about ten (10) functions, which include commercial propolis harvesting, queen rearing, etc. The length of the variable bottom bar depends on the width of the hive, as most African beehives are not standardized. Results: Foundation combs are placed within the Advance Top Bar so that the bees form their combs over its mesh, which prevents comb breakage during honey harvesting. Similarly, honeycombs on top bars will produce natural foundation combs when placed in the Advance Top Bar system, just as they are re-used in Langstroth frames. Discussion and Conclusions: Any modification that promotes non-destructive honey harvesting in top bars will enable Africa to increase honey production by over 100% as beekeepers adopt the mechanism.
Honey-laden combs from the current normal top bars could be placed in the Advance Top Bar to harvest without comb destruction; hence the same system could be used as a transition to the adoption of the Advance Top Bar with less cost.

Keywords: honey, harvest, increase, production

Procedia PDF Downloads 52
2004 A Literature Review on the Role of Local Potential for Creative Industries

Authors: Maya Irjayanti

Abstract:

Local creativity utilization has become a strategic investment to be expanded into creative industries, due to its significant contribution to national gross domestic product. Many developed and developing countries look toward creative industries as an agenda for economic growth. This study aims to identify the role of local potential for creative industries across various empirical studies. The method involves a review of peer-reviewed journal articles and conference papers addressing local potential and creative industries. The literature review analysis includes several steps: material collection, descriptive analysis, category selection, and material evaluation. The expected outcome is a clustering of creative industries based on the local potential of various nations. In addition, the findings of this study will serve as a reference for future research exploring particular areas with well-known aspects of local potential for creative industry products.

Keywords: business, creativity, local potential, local wisdom

Procedia PDF Downloads 360
2003 Globally Convergent Sequential Linear Programming for Multi-Material Topology Optimization Using Ordered Solid Isotropic Material with Penalization Interpolation

Authors: Darwin Castillo Huamaní, Francisco A. M. Gomes

Abstract:

The aim of multi-material topology optimization (MTO) is to obtain the optimal topology of structures composed of several materials, according to a given set of constraints and cost criteria. In this work, we seek the optimal distribution of materials in a domain such that the flexibility of the structure is minimized, under certain boundary conditions and the action of external forces. When there is only one material, each element of the discretized domain is represented by a binary value: 1 if the element belongs to the structure, 0 if it is empty. A common way to avoid the high computational cost of solving integer optimization problems is to adopt the Solid Isotropic Material with Penalization (SIMP) method. This method relies on a continuous power-function interpolation, where the base variable represents a pseudo-density at each point of the domain. For proper exponent values, the SIMP method penalizes intermediate densities, since values other than 0 or 1 usually do not have a physical meaning for the problem. Several extensions of the SIMP method have been proposed for the multi-material case. The one we explore here is the ordered SIMP method, which has the advantage of not adding variables to represent material selection, so the computational cost is independent of the number of materials considered. Although the number of variables is not increased by this algorithm, the optimization subproblems generated at each iteration cannot be solved by methods that rely on second derivatives, due to the cost of computing them. To overcome this, we apply a globally convergent version of the sequential linear programming method, which solves a sequence of linear approximations of the optimization problem.
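The piecewise power-law interpolation behind ordered SIMP can be sketched as follows. This is an illustrative implementation under our own naming (the function `ordered_simp` and the candidate density/stiffness lists are hypothetical, not the paper's code): a single density variable per element selects among all candidate materials by interpolating stiffness between consecutive materials in sorted density order.

```python
def ordered_simp(rho, densities, stiffness, p=3.0):
    """Ordered SIMP interpolation for one element.

    densities: sorted normalized material densities, e.g. [0.0, 0.4, 1.0]
    stiffness: corresponding stiffness values,      e.g. [1e-9, 0.6, 1.0]
    Returns the interpolated stiffness for pseudo-density `rho`.
    """
    for i in range(len(densities) - 1):
        dl, du = densities[i], densities[i + 1]
        if dl <= rho <= du:
            # power-law segment A * rho^p + B chosen so the curve passes
            # through both interval endpoints (dl, E_i) and (du, E_{i+1})
            A = (stiffness[i] - stiffness[i + 1]) / (dl ** p - du ** p)
            B = stiffness[i] - A * dl ** p
            return A * rho ** p + B
    raise ValueError("rho outside the candidate density range")
```

Because no extra variables are added per material, the number of design variables stays one per element regardless of how many materials are considered, as the abstract notes.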

Keywords: global convergence, multi-material design, ordered SIMP, sequential linear programming, topology optimization

Procedia PDF Downloads 297
2002 Fast Short-Term Electrical Load Forecasting under High Meteorological Variability with a Multiple Equation Time Series Approach

Authors: Charline David, Alexandre Blondin Massé, Arnaud Zinflou

Abstract:

In 2016, Clements, Hurn, and Li proposed a multiple equation time series approach for short-term load forecasting, reporting an average mean absolute percentage error (MAPE) of 1.36% on an 11-year dataset for the Queensland region in Australia. We present an adaptation of their model to the electrical power load consumption of the whole province of Quebec, Canada. More precisely, we take into account two additional meteorological variables, namely cloudiness and wind speed, on top of temperature, as well as multiple meteorological measurements taken at different locations on the territory. We also consider other minor improvements. Our final model achieves an average MAPE of 1.79% over an 8-year dataset.
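For reference, the MAPE metric used above can be computed as follows; this is a generic sketch (function name and toy data are illustrative, not from the paper):

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent.
    Assumes actual values are nonzero (true of load data in MW)."""
    assert len(actual) == len(forecast) and all(a != 0 for a in actual)
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(actual)
```

A score of 1.79% thus means forecasts deviate from the observed load by 1.79% on average, in relative terms.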

Keywords: short-term load forecasting, special days, time series, multiple equations, parallelization, clustering

Procedia PDF Downloads 82
2001 Liver Lesion Extraction with Fuzzy Thresholding in Contrast Enhanced Ultrasound Images

Authors: Abder-Rahman Ali, Adélaïde Albouy-Kissi, Manuel Grand-Brochier, Viviane Ladan-Marcus, Christine Hoeffl, Claude Marcus, Antoine Vacavant, Jean-Yves Boire

Abstract:

In this paper, we present a new segmentation approach for focal liver lesions in contrast enhanced ultrasound imaging. This approach, based on a two-cluster Fuzzy C-Means methodology, considers type-II fuzzy sets to handle uncertainty due to the image modality (presence of speckle noise, low contrast, etc.), and to calculate the optimum inter-cluster threshold. Fine boundaries are detected by a local recursive merging of ambiguous pixels. The method has been tested on a representative database. Compared to both Otsu and type-I Fuzzy C-Means techniques, the proposed method significantly reduces the segmentation errors.
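The inter-cluster thresholding idea can be illustrated with plain (type-I) two-cluster fuzzy c-means on 1-D intensities; this is a minimal sketch under hypothetical names, not the authors' type-II formulation:

```python
def fcm_threshold(pixels, m=2.0, iters=50):
    """Two-cluster fuzzy c-means on 1-D intensities; the midpoint of the
    two converged cluster centers serves as a segmentation threshold."""
    c = [min(pixels), max(pixels)]          # initial cluster centers
    n = len(pixels)
    for _ in range(iters):
        # fuzzy membership of each pixel in each cluster (fuzzifier m)
        u = []
        for x in pixels:
            d = [abs(x - ck) or 1e-12 for ck in c]
            u.append([1.0 / sum((d[k] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(2)) for k in range(2)])
        # recompute centers as membership-weighted means
        c = [sum(u[i][k] ** m * pixels[i] for i in range(n)) /
             sum(u[i][k] ** m for i in range(n)) for k in range(2)]
    return (c[0] + c[1]) / 2.0
```

The paper's type-II extension additionally models uncertainty in the memberships themselves, which matters under speckle noise and low contrast.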

Keywords: defuzzification, fuzzy clustering, image segmentation, type-II fuzzy sets

Procedia PDF Downloads 465
2000 Analytical Study of Data Mining Techniques for Software Quality Assurance

Authors: Mariam Bibi, Rubab Mehboob, Mehreen Sirshar

Abstract:

Satisfying customer requirements is the ultimate goal of producing or developing any product, and the quality of a product is judged by the level of customer satisfaction it achieves. The survey reports different techniques that enhance product quality through software defect prediction and by locating missing software requirements. Some mining techniques have been proposed to assess individual performance indicators in collaborative environments, in order to reduce errors at the individual level. The basic intention is to produce a product with zero or few defects, and thereby the best possible quality. In this survey, techniques such as genetic algorithms, artificial neural networks, classification and clustering techniques, and decision trees are analyzed. The analysis shows that these techniques have contributed much to the improvement and enhancement of product quality.

Keywords: data mining, defect prediction, missing requirements, software quality

Procedia PDF Downloads 450
1999 The Effectiveness of Teaching Emotional Intelligence on Reducing Marital Conflicts and Marital Adjustment in Married Students of Tehran University

Authors: Elham Jafari

Abstract:

The aim of this study was to evaluate the effectiveness of emotional intelligence training in reducing marital conflict and improving marital adjustment in married students of the University of Tehran. The research is applied in purpose and, in terms of data collection, uses a semi-experimental pre-test/post-test design with a control group and a follow-up test. The statistical population consisted of all married students of the University of Tehran, from whom 30 were selected by convenience sampling; 15 were randomly assigned to the experimental group and 15 to the control group. Data were collected through field and library methods; in the field, two questionnaires were used, one on marital conflict and one on marital adjustment. At the descriptive level, the demographic characteristics of the sample were described using statistical indicators in SPSS. At the inferential level, analysis of covariance was used. The results showed that the effect of the independent variable, emotional intelligence, on the reduction of marital conflict is statistically significant: emotional intelligence training reduced the marital conflicts of the experimental group compared with the control group. The effect of emotional intelligence on marital adjustment was also statistically significant: emotional intelligence training improved the marital adjustment of the experimental group compared with the control group.

Keywords: emotional intelligence, marital conflicts, marital compatibility, married students

Procedia PDF Downloads 238
1998 Learning Grammars for Detection of Disaster-Related Micro Events

Authors: Josef Steinberger, Vanni Zavarella, Hristo Tanev

Abstract:

Natural disasters cause tens of thousands of victims and massive material damage. We refer to all those events caused by natural disasters, such as damage to people, infrastructure, vehicles, services and resource supply, as micro-events. This paper addresses the problem of micro-event detection in online media sources. We present a natural language grammar learning algorithm and apply it to online news. The algorithm is based on distributional clustering and the detection of word collocations. We also explore the extraction of micro-events from social media and describe a Twitter mining robot, which uses combinations of keywords to detect tweets that talk about the effects of disasters.
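Collocation detection of the kind the grammar learner relies on can be illustrated with a pointwise mutual information (PMI) score over adjacent word pairs. This is a generic sketch, not the authors' algorithm; the function name and corpus are hypothetical:

```python
import math
from collections import Counter

def collocations(sentences, min_count=2):
    """Score adjacent word pairs by PMI; high-scoring pairs (e.g. 'flash
    flood') occur together far more often than their individual word
    frequencies would predict, marking them as candidate collocations."""
    words, pairs, total = Counter(), Counter(), 0
    for s in sentences:
        toks = s.lower().split()
        words.update(toks)
        pairs.update(zip(toks, toks[1:]))   # adjacent bigrams
        total += len(toks)
    return {(a, b): math.log((n / total) /
                             ((words[a] / total) * (words[b] / total)))
            for (a, b), n in pairs.items() if n >= min_count}
```

Such collocation scores, combined with distributional clustering of the surrounding contexts, are the kind of statistics from which surface grammars for event mentions can be learned.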

Keywords: online news, natural language processing, machine learning, event extraction, crisis computing, disaster effects, Twitter

Procedia PDF Downloads 466
1997 Collaborative Planning and Forecasting

Authors: Neha Asthana, Vishal Krishna Prasad

Abstract:

Collaborative planning and forecasting are innovative and systematic approaches to the productive integration and assimilation of data synthesized into information. Changing and variable market dynamics have persuaded global business chains to incorporate collaborative planning and forecasting as an imperative tool. It is therefore essential for supply chains to constantly improve, update their nature, and mould themselves to the changing global environment.

Keywords: information transfer, forecasting, optimization, supply chain management

Procedia PDF Downloads 416
1996 An Embarrassingly Simple Semi-supervised Approach to Increase Recall in Online Shopping Domain to Match Structured Data with Unstructured Data

Authors: Sachin Nagargoje

Abstract:

Completely labeled data is often difficult to obtain in practical scenarios, and even when one manages to obtain the data, its quality is always in question. In the shopping vertical, the input data are offers, which advertisers supply with or without good-quality information. In this paper, the author investigates the possibility of using a very simple semi-supervised learning approach to increase the recall of unhealthy offers (those with badly written offer titles or partial product details) in the shopping vertical domain. The semi-supervised learning method improved recall in the smartphone category by 30% in A/B testing on 10% of traffic and increased the year-over-year (YoY) number of impressions per month by 33% in production. It also produced a significant increase in revenue, though the figure cannot be publicly disclosed.
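The simplest semi-supervised recipe of this kind is self-training: fit a classifier on the small labeled set, pseudo-label the unlabeled pool where the model is confident, and refit. The sketch below is a generic toy (nearest-centroid base classifier, hypothetical names), not the production system:

```python
def centroid(points):
    """Component-wise mean of a list of equal-length tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def self_train(labeled, unlabeled, threshold=0.75, rounds=3):
    """Self-training: pseudo-label confident unlabeled points with a
    nearest-centroid classifier and refit on the growing labeled set."""
    labeled, pool = list(labeled), list(unlabeled)
    for _ in range(rounds):
        c0 = centroid([p for p, y in labeled if y == 0])
        c1 = centroid([p for p, y in labeled if y == 1])
        still_unlabeled = []
        for p in pool:
            d0, d1 = dist(p, c0), dist(p, c1)
            # confidence: how much closer the point is to one centroid
            conf = max(d0, d1) / (d0 + d1 + 1e-12)
            if conf >= threshold:
                labeled.append((p, 0 if d0 < d1 else 1))
            else:
                still_unlabeled.append(p)
        pool = still_unlabeled
    return labeled
```

Because pseudo-labels expand the training set with borderline offers, recall on the minority "unhealthy" class can rise without any new human labeling, which is the effect the abstract reports.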

Keywords: semi-supervised learning, clustering, recall, coverage

Procedia PDF Downloads 106
1995 The Mechanisms of Peer-Effects in Education: A Frame-Factor Analysis of Instruction

Authors: Pontus Backstrom

Abstract:

In the educational literature on peer effects, attention has been drawn to the fact that the mechanisms creating peer effects remain, to a large extent, hidden in obscurity. The hypothesis of this study is that the frame factor theory can be used to explain these mechanisms. At the heart of the theory is the concept of the "time needed" for students to learn a certain curriculum unit. The relation between class-aggregated time needed and the actual time available steers and limits the actions possible for the teacher. Further, the theory predicts that the timing and pacing of the teacher's instruction is governed by a "criterion steering group" (CSG), namely the pupils in the 10th-25th percentile of the aptitude distribution in class. The class composition thereby sets the possibilities and limitations for instruction, creating peer effects on individual outcomes. To test whether the theory can be applied to the issue of peer effects, the study employs multilevel structural equation modelling (M-SEM) on Swedish TIMSS 2015 data (Trends in International Mathematics and Science Study; students N=4090, teachers N=200). Using confirmatory factor analysis (CFA) within the SEM framework in MPLUS, latent variables such as "limitations of instruction" are specified from TIMSS survey items according to the theory. The results indicate a good fit of the measurement model to the data. Research is still in progress, but preliminary results from initial M-SEM models verify a strong relation between the mean level of the CSG and the latent variable of limitations on instruction, a variable which in turn has a great impact on individual students' test results. Further analysis is required, but so far the analysis confirms the predictions derived from the frame factor theory and reveals that one of the important mechanisms creating peer effects in student outcomes is the effect of class composition on the teacher's instruction.
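The CSG is operationally simple: the pupils between the 10th and 25th percentile of the class aptitude distribution. A minimal sketch of computing its mean level per class (hypothetical function name, and a crude index-based percentile cut rather than any particular percentile convention used by the authors):

```python
def csg_mean(scores, lo=0.10, hi=0.25):
    """Mean aptitude of the criterion steering group (CSG): the pupils
    falling between the `lo` and `hi` quantiles of the class scores."""
    s = sorted(scores)
    group = s[int(lo * len(s)):int(hi * len(s))]
    return sum(group) / len(group)
```

A class-level variable of this kind is what the M-SEM models relate to the latent "limitations of instruction" factor.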

Keywords: compositional effects, frame factor theory, peer effects, structural equation modelling

Procedia PDF Downloads 122
1994 Computing Customer Lifetime Value in E-Commerce Websites with Regard to Returned Orders and Payment Method

Authors: Morteza Giti

Abstract:

As online shopping becomes increasingly popular, computing customer lifetime value in order to know customers better is also gaining importance. Two distinct factors that can affect the value of a customer in the context of online shopping are the number of returned orders and the payment method. Returned orders are those which have been shipped but not collected by the customer and are returned to the store. Payment method refers to the way customers choose to pay for an order, of which there are usually two: pre-pay and cash-on-delivery. In this paper, a novel model called RFMSP is presented to calculate customer lifetime value taking these two parameters into account. The RFMSP model is based on the common RFM model, adding two extra parameters: S represents the order status and P indicates the payment method. As a case study for this model, the purchase history of customers in an online shop is used to compute customer lifetime value over a period of twenty months.
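A per-customer RFMSP feature vector can be sketched as below. This is our own illustrative reading of the model; the field layout and the simple ratio definitions of S and P are assumptions, not the paper's exact scoring (which feeds AHP weighting and k-means clustering):

```python
def rfmsp_features(orders, today):
    """RFMSP features for one customer from an order history.
    Each order is a tuple: (day, amount, returned, prepaid)."""
    n = len(orders)
    recency = today - max(day for day, *_ in orders)   # days since last order
    frequency = n                                      # number of orders
    monetary = sum(amount for _, amount, returned, _ in orders
                   if not returned)                    # spend on kept orders
    s = sum(1 for o in orders if o[2]) / n             # share of returned orders
    p = sum(1 for o in orders if o[3]) / n             # share of pre-paid orders
    return recency, frequency, monetary, s, p
```

Customers with a high return share (S) or a heavy cash-on-delivery share (P) thus receive a feature profile that can lower their computed lifetime value relative to plain RFM.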

Keywords: RFMSP model, AHP, customer lifetime value, k-means clustering, e-commerce

Procedia PDF Downloads 306
1993 An Exploratory Research on Awareness towards Human Rights among Public Representatives of Bihar, India

Authors: Saba Farheen, Uday Shankar

Abstract:

Background: Attaining equality among all humans and eliminating all forms of discrimination against them are fundamental human rights. These rights are based on the belief that all human beings are born free, with equal dignity, esteem, and honour. In India, more than 30 percent of politicians have a criminal background. Many are also poorly educated, which hinders them in governing the system: they do not know the basic human rights and therefore cannot decide what to do for the sake of the nation. Bihar is the third most populated state of India and is characterized by corrupt politicians and a poor literacy rate. If politicians can be made aware of human rights, they will show a positive attitude towards them. Aim: The main goal of the present research was to study the subjects' knowledge or awareness of their human rights. It was an attempt to identify the social-psychological conditions that inhibit or facilitate awareness of human rights among public representatives, in the special context of Bihar, India. Awareness of human rights was treated as the main dependent variable; the other two variables, socioeconomic status and educational status, were treated as independent variables. Method: The subjects were 400 public representatives in the age group of 35 to 50 years, of high socioeconomic status (N=150), middle socioeconomic status (N=150), and low socioeconomic status (N=100). The subjects were either educated (N=200) or uneducated (N=200) and were selected randomly from different districts of Bihar, India. The "Human Rights Awareness Scale" by Dr. Iftekhar Hossain, Dr. Saba Farheen, and Dr. Uday Shankar was applied in this study. Results: The results show that the public representatives have a very low level of awareness of human rights. Subjects of middle SES showed the highest awareness compared with subjects of high and low SES, and uneducated public representatives were less aware of human rights than educated ones. Conclusion: Human rights awareness among the public representatives of India is very low and is affected by their socioeconomic status and literacy level.

Keywords: human rights, awareness, public representatives, Bihar, India

Procedia PDF Downloads 114
1992 PDDA: Priority-Based, Dynamic Data Aggregation Approach for Sensor-Based Big Data Framework

Authors: Lutful Karim, Mohammed S. Al-kahtani

Abstract:

Sensors are used in various applications, such as agriculture, health monitoring, air and water pollution monitoring, and traffic monitoring and control, and hence play a vital role in the growth of big data. However, sensors collect redundant data; aggregating and filtering sensor data is therefore significantly important for designing an efficient big data framework. Current research does not focus on aggregating and filtering data at multiple layers of a sensor-based big data framework. This paper thus introduces (i) a three-layer data aggregation framework for big data and (ii) a priority-based, dynamic data aggregation scheme (PDDA) for the lowest layer, at the sensors. Simulation results show that PDDA outperforms existing tree- and cluster-based data aggregation schemes in terms of overall network energy consumption and end-to-end data transmission delay.
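A toy sketch of priority-based suppression at the sensor layer (our illustration under assumed names, not the paper's protocol): high-priority sensors always forward their readings, while low-priority sensors only forward readings that differ from the last forwarded value by at least a tolerance, filtering out redundant data before it reaches the upper layers.

```python
def aggregate(readings, priority, tolerance=0.5):
    """Priority-based, dynamic aggregation at the sensor layer: forward
    every high-priority reading; for low-priority sensors, drop readings
    close to the last value already forwarded (redundancy filtering)."""
    out, last = [], {}
    for sensor, value in readings:
        if priority.get(sensor, 0) >= 1:
            out.append((sensor, value))       # high priority: always forward
        elif sensor not in last or abs(value - last[sensor]) >= tolerance:
            out.append((sensor, value))       # changed enough: forward
            last[sensor] = value
    return out
```

Suppressing near-duplicate low-priority readings is what saves transmission energy and reduces end-to-end delay in the simulations described above.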

Keywords: big data, clustering, tree topology, data aggregation, sensor networks

Procedia PDF Downloads 322