Search results for: multi criteria inventory classification models
14410 A Statistical Approach to Predict and Classify the Commercial Hatchability of Chickens Using Extrinsic Parameters of Breeders and Eggs
Authors: M. S. Wickramarachchi, L. S. Nawarathna, C. M. B. Dematawewa
Abstract:
Hatchery performance is critical for the profitability of poultry breeder operations. Some extrinsic parameters of eggs and breeders increase or decrease hatchability. This study aims to identify the extrinsic parameters affecting the commercial hatchability of local chickens' eggs and to determine the most efficient classification model with a hatchability rate greater than 90%. In this study, seven extrinsic parameters were considered: egg weight, moisture loss, breeders' age, number of fertilised eggs, shell width, shell length, and shell thickness. Multiple linear regression was performed to determine the most influential variable on hatchability. First, the correlation between each parameter and hatchability was checked. Then a multiple regression model was developed, and the accuracy of the fitted model was evaluated. Linear Discriminant Analysis (LDA), Classification and Regression Trees (CART), k-Nearest Neighbors (kNN), Support Vector Machines (SVM) with a linear kernel, and Random Forest (RF) algorithms were applied to classify the hatchability. This grouping process was conducted using binary classification techniques. Hatchability was negatively correlated with egg weight, breeders' age, shell width and shell length, while positive correlations were identified with moisture loss, number of fertilised eggs, and shell thickness. The multiple linear regression model was more accurate than single linear models, with the highest coefficient of determination (R²) of 94% and the minimum AIC and BIC values. According to the classification results, RF, CART, and kNN achieved the highest accuracy values of 0.99, 0.975, and 0.972, respectively, for the commercial hatchery process. Therefore, RF is the most appropriate machine learning algorithm for classifying breeder outcomes as economically profitable or not in a commercial hatchery.
Keywords: classification models, egg weight, fertilised eggs, multiple linear regression
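A minimal sketch of the classifier comparison described in the abstract above, assuming a tabular dataset with the seven extrinsic parameters and a binary label derived from the 90% hatchability threshold; the synthetic data, feature names and model settings are illustrative assumptions, not the authors' exact setup.

```python
# Sketch: compare the RF, CART and kNN classifiers named in the abstract on a binary
# hatchability label. Synthetic data stands in for the real egg/breeder measurements.
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n = 300
X = pd.DataFrame({
    "egg_weight": rng.normal(60, 5, n),
    "moisture_loss": rng.normal(12, 2, n),
    "breeder_age": rng.normal(45, 10, n),
    "fertilised_eggs": rng.integers(80, 120, n),
    "shell_width": rng.normal(42, 2, n),
    "shell_length": rng.normal(55, 3, n),
    "shell_thickness": rng.normal(0.35, 0.03, n),
})
# Placeholder binary label: 1 = hatchability above the 90% threshold, 0 = below.
y = (X["moisture_loss"] - 0.1 * X["egg_weight"] + rng.normal(0, 1, n) > 5).astype(int)

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "CART": DecisionTreeClassifier(random_state=0),
    "kNN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean cross-validated accuracy = {acc:.3f}")
```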
Procedia PDF Downloads 88
14409 Aberrant Consumer Behavior in Seller’s and Consumer’s Eyes: Newly Developed Classification
Authors: Amal Abdelhadi
Abstract:
Consumer misbehavior evaluation can differ markedly based on a number of variables and from one environment to another. Using three aberrant consumer behavior (ACB) scenarios (shoplifting, stealing from hotel rooms and software piracy), this study aimed to explore Libyan sellers' and consumers' evaluations of ACB. Data were collected using a multi-method approach (qualitative and quantitative) in two fieldwork phases. In the first phase, qualitative data were collected from 26 Libyan sellers by face-to-face interviews. In the second phase, a consumer survey was used to collect quantitative data from 679 Libyan consumers. This study found that consumers' and sellers' evaluations of ACB are not always consistent. Further, ACB evaluations differed based on the form of ACB. Furthermore, the study found that not all consumer behaviors that were considered bad behavior in other countries have the same evaluation in Libya; for example, software piracy. Therefore, this study suggests a newly developed classification of ACB based on marketers' and consumers' views. This classification provides 9 ACB types within two dimensions (marketers' and consumers' views) and three degrees of behavior evaluation (good, acceptable and misbehavior).
Keywords: aberrant consumer behavior, Libya, multi-method approach, planned behavior theory
Procedia PDF Downloads 575
14408 Research on Ultrafine Particles Classification Using Hydrocyclone with Annular Rinse Water
Authors: Tao Youjun, Zhao Younan
Abstract:
The separation effect of fine coal can be improved by the process of pre-desliming. It was significantly enhanced when the fine coal was processed using a Falcon concentrator with the removal of -45 μm coal slime. Ultrafine classification tests using a Krebs classification cyclone with annular rinse water showed that increasing the feeding pressure can effectively avoid the phenomena of heavy particles passing into the overflow and light particles slipping into the underflow. The increase of rinse water pressure could reduce the content of fine-grained particles while increasing the classification size. The increase in feeding concentration had a negative effect on the efficiency of classification, and meanwhile increased the classification size due to the enhanced hindered settling caused by the high underflow concentration. Optimization experiments based on an orthogonal design in Design-Expert software, with classification efficiency as the response indicator, showed that the optimal classification efficiency reached 91.32% at a feeding pressure of 0.03 MPa, a rinse water pressure of 0.02 MPa and a feeding concentration of 12.5%. Meanwhile, the classification size was 49.99 μm, which agreed well with the predicted value.
Keywords: hydrocyclone, ultrafine classification, slime, classification efficiency, classification size
Procedia PDF Downloads 168
14407 An Information-Based Approach for Preference Method in Multi-Attribute Decision Making
Authors: Serhat Tuzun, Tufan Demirel
Abstract:
Multi-Criteria Decision Making (MCDM) is the modelling of real-life situations to solve the problems we encounter. It is a discipline that aids decision makers who are faced with conflicting alternatives to make an optimal decision. MCDM problems can be classified into two main categories, Multi-Attribute Decision Making (MADM) and Multi-Objective Decision Making (MODM), based on their different purposes and different data types. Although various MADM techniques have been developed for the problems encountered, their methodology is limited in modelling real life. Moreover, objective results are hard to obtain, and the findings are generally derived from subjective data. Although new and modified techniques have been developed using approaches such as fuzzy logic, these more comprehensive techniques, even though they model real life better, have not found a place in real-world applications because their complex structure makes them hard to apply. These constraints restrict the development of MADM. This study aims to conduct a comprehensive analysis of preference methods in MADM and to propose an approach based on information. For this purpose, a detailed literature review has been conducted, and current approaches with their advantages and disadvantages have been analyzed. Then, the approach is introduced. In this approach, performance values of the criteria are calculated in two steps: first by determining the distribution of each attribute and standardizing them, then calculating the information of each attribute as informational energy.
Keywords: literature review, multi-attribute decision making, operations research, preference method, informational energy
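A minimal sketch of the two-step calculation described above: each attribute is standardized and its informational energy is then estimated (read here as Onicescu's informational energy, the sum of squared empirical probabilities). The decision matrix, bin count and the final conversion of energies into weights are illustrative assumptions.

```python
# Sketch: standardize each attribute column, then estimate its informational energy
# E = sum_i p_i^2 from the empirical distribution of the standardized values.
import numpy as np

def informational_energy(values, bins=10):
    """Estimate the informational energy of one attribute from a histogram of its values."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()                        # empirical probabilities per bin
    return float(np.sum(p ** 2))                 # E = sum of squared probabilities

# Decision matrix: rows = alternatives, columns = attributes (illustrative numbers).
X = np.array([[7.0, 0.4, 120.0],
              [5.5, 0.9,  80.0],
              [6.2, 0.7, 100.0],
              [8.1, 0.2, 140.0]])

X_std = (X - X.mean(axis=0)) / X.std(axis=0)     # step 1: standardize each attribute
energies = [informational_energy(X_std[:, j]) for j in range(X_std.shape[1])]
weights = np.array(energies) / np.sum(energies)  # one possible way to turn energies into weights
print("informational energies:", np.round(energies, 3))
print("derived weights:", np.round(weights, 3))
```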
Procedia PDF Downloads 225
14406 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution
Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone
Abstract:
The susceptibility of deep neural networks (DNNs) to adversarial manipulations is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies have been proposed to safeguard DNNs against such attacks, stemming from diverse research hypotheses. Building upon prior work, our approach involves the utilization of autoencoder models. Autoencoders, a type of neural network, are trained to learn representations of training data and reconstruct inputs from these representations, typically minimizing reconstruction errors like the mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibited high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation. We considered various image sizes, constructing models differently for 256x256 and 512x512 images. Moreover, the choice of the computer vision model is crucial, as most adversarial attacks are designed with specific AI structures in mind. To mitigate this, we proposed a method to replace image-specific dimensions with a structure independent of both dimensions and neural network models, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments using diverse datasets and subjected them to adversarial attacks using models such as ResNet50 and ViT_L_16 from the torchvision library. The autoencoder extracted features used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.
Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder
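A minimal sketch of the reconstruction-error idea described above: an autoencoder trained only on benign images flags inputs whose MSE exceeds a threshold. The convolutional architecture, 256x256 input size and threshold value are illustrative assumptions, not the authors' multi-modal spectral autoencoder, and the model here is left untrained for brevity.

```python
# Sketch: flag suspect inputs by their autoencoder reconstruction error (per-image MSE).
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),            # 256 -> 128
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),           # 128 -> 64
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 128
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid() # 128 -> 256
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def reconstruction_error(model, batch):
    """Per-image MSE between the input and its reconstruction."""
    with torch.no_grad():
        recon = model(batch)
    return ((batch - recon) ** 2).mean(dim=(1, 2, 3))

model = ConvAutoencoder().eval()              # would be trained on benign images only
images = torch.rand(8, 3, 256, 256)           # placeholder batch of RGB images
errors = reconstruction_error(model, images)
THRESHOLD = 0.02                              # would be calibrated on benign validation data
is_adversarial = errors > THRESHOLD
print(errors, is_adversarial)
```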
Procedia PDF Downloads 114
14405 Multi-Criteria Decision-Making Evaluations for Oily Waste Management of Marine Oil Spill
Authors: Naznin Sultana Daisy, Mohammad Hesam Hafezi, Lei Liu
Abstract:
Nowadays, oily solid waste management has become an important issue for many countries due to frequent oil spill accidents and the increase of industrial oily wastewater. Historical oil spill data show that marine oil spills that affect the shoreline can, in extreme cases, produce up to 30 or 40 times more waste than the volume of oil initially released. Hence, responsive authorities aim to develop the most effective oily waste management solution in a timely manner to manage and minimize the waste generated. In this study, we initially developed a roadmap of oily waste management for three-tiered spill scenarios in Atlantic Canada. For that purpose, three oily waste disposal scenarios are evaluated via six criteria determined according to the opinions of experts in the field. Consequently, through sustainable response strategies, the most appropriate and feasible scenario is determined. The results of this study will assist in developing an integrated oily waste management system for identifying the optimal waste-generation-allocation-disposal schemes and generating the optimal management alternatives based on the holistic consideration of environmental, technological, economic, social, and regulatory factors.
Keywords: oily waste management, marine oil spill, multi-criteria decision making, oil spill response
Procedia PDF Downloads 137
14404 Developing and Evaluating Clinical Risk Prediction Models for Coronary Artery Bypass Graft Surgery
Authors: Mohammadreza Mohebbi, Masoumeh Sanagou
Abstract:
The ability to predict clinical outcomes is of great importance to physicians and clinicians. A number of different methods have been used in an effort to accurately predict these outcomes. These methods include the development of scoring systems based on multivariate statistical modelling and models involving the use of classification and regression trees. The process usually consists of two consecutive phases, namely model development and external validation. The model development phase consists of building a multivariate model and evaluating its predictive performance by examining calibration and discrimination, and internal validation. External validation tests the predictive performance of a model by assessing its calibration and discrimination in different but plausibly related patients. A motivating example of prediction modelling using a sample of patients who underwent coronary artery bypass graft (CABG) surgery is used for illustrative purposes, and a set of primary considerations for evaluating prediction model studies, using specific quality indicators as criteria to help stakeholders evaluate the quality of a prediction model study, is proposed.
Keywords: clinical prediction models, clinical decision rule, prognosis, external validation, model calibration, biostatistics
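A minimal sketch of the two validation checks named above, discrimination (c-statistic/AUC) and calibration, for a fitted risk model evaluated on a held-out sample; the synthetic data and logistic model are illustrative assumptions rather than the authors' CABG model.

```python
# Sketch: evaluate discrimination and calibration of a risk model on a validation sample.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
X_dev, X_val = rng.normal(size=(500, 5)), rng.normal(size=(300, 5))   # placeholder predictors
y_dev = (X_dev[:, 0] + rng.normal(size=500) > 0).astype(int)          # placeholder outcomes
y_val = (X_val[:, 0] + rng.normal(size=300) > 0).astype(int)

model = LogisticRegression().fit(X_dev, y_dev)          # development phase
p_val = model.predict_proba(X_val)[:, 1]                 # validation phase

print("discrimination (c-statistic):", round(roc_auc_score(y_val, p_val), 3))
obs, pred = calibration_curve(y_val, p_val, n_bins=10)   # observed vs. predicted risk per bin
print("calibration (predicted -> observed):")
for o, p in zip(obs, pred):
    print(f"  {p:.2f} -> {o:.2f}")
```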
Procedia PDF Downloads 298
14403 Impact of VARK Learning Model at Tertiary Level Education
Authors: Munazza A. Mirza, Khawar Khurshid
Abstract:
Individuals are generally associated with different learning styles, which have been explored extensively in the recent past. Learning styles refer to the potential of an individual by which s/he can easily comprehend and retain information. Among various learning style models, VARK is the most widely accepted model, which categorizes learners with respect to their sensory characteristics. Based on the number of preferred learning modes, learners can be categorized as uni-modal, bi-modal, tri-modal, or quad/multi-modal. Although there is a prevalent belief in learning styles, the model is not frequently and effectively utilized in higher education. This research describes an identification model to validate the linkage of teachers' didactic practices and students' performance with learning styles. The identification model is recommended to check the effective application and evaluation of the various learning styles. The proposed model is a guideline for effectively implementing a learning styles inventory in order to ensure that it will validate the performance linkage with learning styles. If performance is linked with learning styles, this may help eradicate the distrust of learning style theory. For this purpose, a comprehensive study was conducted to compare and understand how the VARK inventory model is being used to identify learning preferences and their correlation with learners' performance. A comparative analysis of the findings of these studies is presented to understand the learning styles of tertiary students in various disciplines. It is concluded with confidence that the learning styles of students cannot be associated with any specific discipline. Furthermore, there is not enough empirical proof to link performance with learning styles.
Keywords: learning style, VARK, sensory preferences, identification model, didactic practices
Procedia PDF Downloads 281
14402 A Dataset of Program Educational Objectives Mapped to ABET Outcomes: Data Cleansing, Exploratory Data Analysis and Modeling
Authors: Addin Osman, Anwar Ali Yahya, Mohammed Basit Kamal
Abstract:
Datasets or collections are becoming important assets by themselves, and they can now be accepted as a primary intellectual output of research. The quality and usage of datasets depend mainly on the context under which they have been collected, processed, analyzed, validated, and interpreted. This paper aims to present a collection of program educational objectives mapped to student outcomes, collected from self-study reports prepared by 32 engineering programs accredited by ABET. The manual mapping (classification) of these data is a notoriously tedious, time-consuming process. In addition, it requires experts in the area, who are mostly not available. The operational settings under which the collection was produced are described. The collection has been cleansed and preprocessed, some features have been selected, and preliminary exploratory data analysis has been performed so as to illustrate the properties and usefulness of the collection. At the end, the collection has been benchmarked using nine of the most widely used supervised multi-label classification techniques (Binary Relevance, Label Powerset, Classifier Chains, Pruned Sets, Random k-label sets, Ensemble of Classifier Chains, Ensemble of Pruned Sets, Multi-Label k-Nearest Neighbors and Back-Propagation Multi-Label Learning). The techniques have been compared to each other using five well-known measurements (Accuracy, Hamming Loss, Micro-F, Macro-F, and Macro-F). The Ensemble of Classifier Chains and Ensemble of Pruned Sets achieved encouraging performance compared to the other experimented multi-label classification methods. The Classifier Chains method showed the worst performance. To recap, the benchmark has achieved promising results by utilizing the preliminary exploratory data analysis performed on the collection, proposing new trends for research and providing a baseline for future studies.
Keywords: ABET, accreditation, benchmark collection, machine learning, program educational objectives, student outcomes, supervised multi-class classification, text mining
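A minimal sketch benchmarking two of the listed strategies, Binary Relevance and Classifier Chains, with Hamming loss and micro/macro F1; synthetic multi-label data stands in for the PEO-to-outcomes collection, and the base learner and label count are illustrative assumptions.

```python
# Sketch: benchmark two multi-label strategies with the measures named in the abstract.
from sklearn.datasets import make_multilabel_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier, ClassifierChain
from sklearn.metrics import accuracy_score, hamming_loss, f1_score

X, Y = make_multilabel_classification(n_samples=500, n_features=50, n_classes=11, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)

methods = {
    "Binary Relevance": MultiOutputClassifier(LogisticRegression(max_iter=1000)),
    "Classifier Chains": ClassifierChain(LogisticRegression(max_iter=1000), random_state=0),
}
for name, clf in methods.items():
    Y_pred = clf.fit(X_tr, Y_tr).predict(X_te)
    print(name,
          "| exact-match accuracy:", round(accuracy_score(Y_te, Y_pred), 3),
          "| Hamming loss:", round(hamming_loss(Y_te, Y_pred), 3),
          "| micro-F1:", round(f1_score(Y_te, Y_pred, average="micro"), 3),
          "| macro-F1:", round(f1_score(Y_te, Y_pred, average="macro"), 3))
```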
Procedia PDF Downloads 173
14401 Supply Chain Control and Inventory Management in Garment Industry
Authors: Nisa Nur Duman, Sümeyya Kiliç
Abstract:
In global competition conditions, the survival of plants by obtaining competitive advantage relies on the effective usage of existing resources. In this way, plants can minimize their costs without losing quality. They also gain an advantage over their competitors and enlarge their customer portfolio by increasing profit margins. The changing structure of the market and customer demands also changes the structure of competition between companies. Furthermore, competition is no longer only between companies; in this manner, supply chains and supply chain management gain importance when company performance is considered. Companies that want to survive search for ways of decreasing costs and meeting customer expectations. One of the important tools for reaching these goals is inventory management. The best inventory management system meets demand while considering plant goals.
Keywords: supply chain, inventory management, apparel sector, garment industry
Procedia PDF Downloads 370
14400 Radical Web Text Classification Using a Composite-Based Approach
Authors: Kolade Olawande Owoeye, George R. S. Weir
Abstract:
The spread of terrorism and extremism activities on the internet has become a major threat to governments and national security due to their potential dangers, which has necessitated intelligence gathering via the web and real-time monitoring of potential websites for extremist activities. However, the manual classification of such content is practically difficult and time-consuming. In response to this challenge, an automated classification system called the composite technique was developed. This is a computational framework that explores the combination of both semantic and syntactic features of the textual content of a webpage. We implemented the framework on a dataset of extremist webpages that had been subjected to a manual classification process. Therein, we developed a classification model on the data using the J48 decision tree algorithm to generate a measure of how well each page can be classified into its appropriate class. The classification result obtained from our method, when compared with other state-of-the-art methods, indicated a 96% success rate in classifying the overall webpages when matched against the manual classification.
Keywords: extremist, web pages, classification, semantics, posit
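A minimal sketch of decision-tree classification over lexical text features, in the spirit of the pipeline above; J48 is Weka's C4.5 implementation, so scikit-learn's CART-style DecisionTreeClassifier is used here only as a stand-in, and the example pages, labels and n-gram features are illustrative assumptions.

```python
# Sketch: decision-tree classification of webpage text from simple lexical features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline

pages = ["sample extremist page text ...", "sample benign page text ...",
         "another benign page ...", "another extremist page ..."]
labels = ["extremist", "benign", "benign", "extremist"]    # from the manual classification

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),   # word and bigram features
                    DecisionTreeClassifier(random_state=0))
clf.fit(pages, labels)
print(clf.predict(["unseen page text ..."]))
```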
Procedia PDF Downloads 146
14399 Analytic Hierarchy Process and Multi-Criteria Decision-Making Approach for Selecting the Most Effective Soil Erosion Zone in Gomati River Basin
Authors: Rajesh Chakraborty, Dibyendu Das, Rabindra Nath Barman, Uttam Kumar Mandal
Abstract:
In the present study, the objective is to find the most effective zone causing soil erosion in the Gomati river basin, located in the state of Tripura, a north-eastern state of India, using the analytic hierarchy process (AHP) and multi-objective optimization on the basis of ratio analysis (MOORA). The watershed is segmented into 20 zones based on area. The watershed is delineated by identifying the maximum elevation above sea level from Google Earth. The soil erosion is determined using the universal soil loss equation. The different independent variables of the soil loss equation bear different weightages for different soil zones. Therefore, to find the weightage factors for all the variables of the soil loss equation, such as the rainfall-runoff erosivity index and the soil erodibility factor, the analytic hierarchy process (AHP) is used. Thereafter, the multi-objective optimization on the basis of ratio analysis (MOORA) approach is used to select the most effective zone causing soil erosion. The MCDM technique concludes that the maximum soil erosion is occurring in zone 14.
Keywords: soil erosion, analytic hierarchy process (AHP), multi criteria decision making (MCDM), universal soil loss equation (USLE), multi-objective optimization on the basis of ratio analysis (MOORA)
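A minimal sketch of the two methods combined above: AHP weights taken from the principal eigenvector of a pairwise comparison matrix, followed by a MOORA ranking of zones. The pairwise judgements and the small 3-zone decision matrix are illustrative assumptions, not values from the Gomati basin.

```python
# Sketch: AHP weights from a pairwise comparison matrix, then a MOORA ranking of zones.
import numpy as np

# --- AHP: weights = principal eigenvector of the pairwise comparison matrix ---
pairwise = np.array([[1.0, 3.0, 5.0],
                     [1/3, 1.0, 2.0],
                     [1/5, 1/2, 1.0]])          # e.g. erosivity vs. erodibility vs. slope factor
eigvals, eigvecs = np.linalg.eig(pairwise)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = w / w.sum()

# --- MOORA: vector-normalize the decision matrix, apply weights, rank by score ---
decision = np.array([[420.0, 0.32, 9.0],         # rows = zones, columns = USLE-related criteria
                     [610.0, 0.28, 14.0],
                     [380.0, 0.41, 7.0]])
norm = decision / np.sqrt((decision ** 2).sum(axis=0))
scores = (norm * weights).sum(axis=1)            # all criteria treated as contributing to erosion
ranking = np.argsort(-scores) + 1                # 1-based zone indices, highest score first
print("AHP weights:", np.round(weights, 3))
print("MOORA scores:", np.round(scores, 3), "-> most erosion-prone zone:", ranking[0])
```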
Procedia PDF Downloads 539
14398 Using Machine Learning to Predict Answers to Big-Five Personality Questions
Authors: Aadityaa Singla
Abstract:
The big five personality traits are as follows: openness, conscientiousness, extraversion, agreeableness, and neuroticism. In order to get an insight into their personality, many people flock to these categories, each of which has different meanings/characteristics. This information is important not only to individuals but also to career professionals and psychologists, who can use it for candidate assessment or job recruitment. The links between AI and psychology have been well studied in cognitive science, but this is still a rather novel development. It is possible for various AI classification models to accurately predict a personality question via ten input questions. This contrasts with the hundred questions that people normally have to answer to gain a complete picture of their five personality traits. In order to approach this problem, various AI classification models were used on a dataset to predict what a user may answer. From there, each model's prediction was compared to the actual response. Normally, there are five answer choices (a 20% chance of a correct guess), and the models exceed that value to different degrees, proving their significance. By utilizing an MLP classifier, a decision tree, a linear model, and k-nearest neighbors, test accuracies of 86.643%, 54.625%, 47.875%, and 52.125%, respectively, were obtained. These approaches show that there is potential in the future for more nuanced predictions to be made regarding personality.
Keywords: machine learning, personality, big five personality traits, cognitive science
Procedia PDF Downloads 147
14397 Diagonal Vector Autoregressive Models and Their Properties
Authors: Usoro Anthony E., Udoh Emediong
Abstract:
Diagonal Vector Autoregressive Models are special classes of the general vector autoregressive models identified under certain conditions, where parameters are restricted to the diagonal elements in the coefficient matrices. Variance, autocovariance, and autocorrelation properties of the upper and lower diagonal VAR models are derived. The new set of VAR models is verified with empirical data and is found to perform favourably compared with the general VAR models. The advantage of the diagonal models over the existing models is that the new models are parsimonious, given the reduction in the interactive coefficients of the general VAR models.
Keywords: VAR models, diagonal VAR models, variance, autocovariance, autocorrelations
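As an illustration of the restriction described above, a bivariate diagonal VAR(1) is written out below in LaTeX: the off-diagonal coefficients are set to zero, so each series follows its own AR(1) dynamics while the innovations may remain contemporaneously correlated. The bivariate case and the stationary-variance expression are an illustrative sketch, not the paper's general derivation.

```latex
% Bivariate diagonal VAR(1): off-diagonal coefficients restricted to zero.
\begin{equation*}
\begin{pmatrix} y_{1t} \\ y_{2t} \end{pmatrix}
=
\begin{pmatrix} \phi_{11} & 0 \\ 0 & \phi_{22} \end{pmatrix}
\begin{pmatrix} y_{1,t-1} \\ y_{2,t-1} \end{pmatrix}
+
\begin{pmatrix} \varepsilon_{1t} \\ \varepsilon_{2t} \end{pmatrix},
\qquad
\operatorname{Var}(\varepsilon_t) = \Sigma .
\end{equation*}
% Under stationarity, each component has the familiar AR(1) variance:
\begin{equation*}
\operatorname{Var}(y_{it}) = \frac{\sigma_{ii}}{1 - \phi_{ii}^{2}}, \qquad |\phi_{ii}| < 1 .
\end{equation*}
```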
Procedia PDF Downloads 116
14396 Application of Fuzzy TOPSIS in Evaluating Green Transportation Options for Dhaka Megacity
Authors: Md. Moniruzzaman, Thirayoot Limanond
Abstract:
Being the most visible indicator, the transport system of a city shows how developed the city is. Dhaka megacity has a mixed composition of motorized and non-motorized modes of transport, and the number of vehicles is escalating over time. This obviously poses associated environmental costs, such as air pollution and noise, which are degrading the quality of life in the city. Consequently, sustainable transport, or more importantly green transport from an environmental point of view, has become a prime choice for transport professionals in order to cope with the crisis. Currently, the city authority is planning to implement sustainable transport systems that could serve the pressing demand of the present and meet future needs effectively. This study focuses on the selection and evaluation of green transportation systems among potential alternatives on a priority basis. In this paper, fuzzy TOPSIS, a multi-criteria decision method, is presented to find the most prioritized alternative. In the first step, twenty-one individual specific criteria for sustainability assessment are selected. In the following step, experts provide linguistic ratings to the potential alternatives with respect to the selected criteria. The approach is used to generate aggregate scores for sustainability assessment and selection of the best alternative. In the third step, a sensitivity analysis is performed to understand the influence of criteria weights on the decision-making process. The key strength of the fuzzy TOPSIS approach is its practical applicability, generating good-quality solutions even under uncertainty.
Keywords: green transport, multi-criteria decision approach, urban transportation system, sustainability assessment, fuzzy theory, uncertainty
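A minimal sketch of one common fuzzy TOPSIS variant with triangular fuzzy numbers: linguistic ratings are mapped to fuzzy numbers, normalized and weighted, and alternatives are ranked by their closeness coefficient to the fuzzy ideal solutions. The linguistic scale, crisp weights and three example alternatives are illustrative assumptions, not the paper's 21-criteria setup.

```python
# Sketch: compact fuzzy TOPSIS ranking with triangular fuzzy numbers (all benefit criteria).
import numpy as np

TFN = {"poor": (1, 1, 3), "fair": (3, 5, 7), "good": (7, 9, 9)}   # linguistic scale (l, m, u)

# Aggregated expert ratings: rows = alternatives, columns = criteria (placeholders).
ratings = [[TFN["good"], TFN["fair"]],
           [TFN["fair"], TFN["good"]],
           [TFN["poor"], TFN["fair"]]]
weights = np.array([0.6, 0.4])

def d(a, b):
    """Vertex distance between two triangular fuzzy numbers."""
    return np.sqrt(np.mean((np.array(a) - np.array(b)) ** 2))

n_alt, n_crit = len(ratings), len(weights)
u_max = [max(ratings[i][j][2] for i in range(n_alt)) for j in range(n_crit)]

cc = []
for i in range(n_alt):
    d_pos = d_neg = 0.0
    for j in range(n_crit):
        l, m, u = ratings[i][j]
        v = tuple(weights[j] * x / u_max[j] for x in (l, m, u))   # normalized, weighted TFN
        d_pos += d(v, (weights[j],) * 3)                           # distance to fuzzy positive ideal
        d_neg += d(v, (0.0, 0.0, 0.0))                             # distance to fuzzy negative ideal
    cc.append(d_neg / (d_pos + d_neg))                             # closeness coefficient

print("closeness coefficients:", np.round(cc, 3), "-> best alternative index:", int(np.argmax(cc)))
```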
Procedia PDF Downloads 292
14395 Neural Network Based Decision Trees Using Machine Learning for Alzheimer's Diagnosis
Authors: P. S. Jagadeesh Kumar, Tracy Lin Huan, S. Meenakshi Sundaram
Abstract:
Alzheimer’s disease is one of the most prevalent kinds of ailment, for which no effective therapy has yet been accredited. A probable explosion in the number of patients in the upcoming years has consequently created an enormous deal of interest in early detection of the disorder, which will conceivably lead to enhanced healing outcomes. Complex changes in the brain are an observable sign of the disease, together with a unique genetic signature of the disease. Machine learning, alongside deep learning and decision trees, reinforces the ability to learn characteristics from multi-dimensional data and thus simplifies the automatic classification of Alzheimer’s disease. Testing was designed and realized to train and assess the prospect of Alzheimer’s disease classification built on machine learning advances. It was observed that decision trees trained with a deep neural network produced excellent results, parallel to related pattern classification methods.
Keywords: Alzheimer's diagnosis, decision trees, deep neural network, machine learning, pattern classification
Procedia PDF Downloads 297
14394 Local Interpretable Model-agnostic Explanations (LIME) Approach to Email Spam Detection
Authors: Rohini Hariharan, Yazhini R., Blessy Maria Mathew
Abstract:
The task of detecting email spam is a very important one in the era of digital technology, which needs effective ways of curbing unwanted messages. This paper presents an approach aimed at making email spam categorization algorithms transparent, reliable and more trustworthy by incorporating Local Interpretable Model-agnostic Explanations (LIME). Our technique assists in providing interpretable explanations for specific classifications of emails to help users understand the decision-making process of the model. In this study, we developed a complete pipeline that incorporates LIME into the spam classification framework and allows the creation of simplified, interpretable models tailored to individual emails. LIME identifies influential terms, pointing out key elements that drive classification results, thus reducing the opacity inherent in conventional machine learning models. Additionally, we suggest a visualization scheme for displaying keywords that will improve users' understanding of categorization decisions. We test our method on a diverse email dataset and compare its performance with various baseline models, such as Gaussian Naive Bayes, Multinomial Naive Bayes, Bernoulli Naive Bayes, Support Vector Classifier, K-Nearest Neighbors, Decision Tree, and Logistic Regression. Our testing results show that our model surpasses all other models, achieving an accuracy of 96.59% and a precision of 99.12%.
Keywords: text classification, LIME (local interpretable model-agnostic explanations), stemming, tokenization, logistic regression
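A minimal sketch of explaining one spam/ham prediction with LIME's text explainer on top of a simple TF-IDF plus logistic regression pipeline; it assumes the third-party lime package is installed, and the training emails and feature count are illustrative placeholders rather than the paper's dataset.

```python
# Sketch: explain a single spam/ham prediction with LIME.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

emails = ["win a free prize now", "meeting agenda for tomorrow",
          "cheap loans click here", "project report attached"]
labels = [1, 0, 1, 0]                     # 1 = spam, 0 = ham

pipe = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipe.fit(emails, labels)

explainer = LimeTextExplainer(class_names=["ham", "spam"])
exp = explainer.explain_instance("click here to win a free prize",
                                 pipe.predict_proba,   # LIME perturbs the text and queries this
                                 num_features=5)
print(exp.as_list())                      # (term, weight) pairs driving the prediction
```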
Procedia PDF Downloads 48
14393 Applying Genetic Algorithm in Exchange Rate Models Determination
Authors: Mehdi Rostamzadeh
Abstract:
Genetic Algorithms (GAs) are an adaptive heuristic search algorithm premised on the evolutionary ideas of natural selection and genetics. In this study, we apply GAs to fundamental and technical models of exchange rate determination in the exchange rate market. In this framework, we estimated absolute and relative purchasing power parity, Mundell-Fleming, sticky and flexible prices (monetary models), equilibrium exchange rate and portfolio balance models as fundamental models, and Auto Regressive (AR), Moving Average (MA), Auto-Regressive with Moving Average (ARMA) and Mean Reversion (MR) as technical models for the Iranian Rial against the European Union’s Euro, using monthly data from January 1992 to December 2014. Then, we put these models into the genetic algorithm system to measure the optimal weight for each model. These optimal weights have been measured according to four criteria, i.e., R-squared (R²), mean square error (MSE), mean absolute percentage error (MAPE) and root mean square error (RMSE). Based on the obtained results, it seems that for explaining the behavior of the Iranian Rial against the EU Euro exchange rate, fundamental models are better than technical models.
Keywords: exchange rate, genetic algorithm, fundamental models, technical models
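A minimal sketch of the weighting idea described above: a small genetic algorithm searches for non-negative weights (summing to one) that combine several candidate model forecasts so as to minimise RMSE against the observed series. The random forecasts, population size and GA operators are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch: GA search for combination weights over candidate exchange-rate model forecasts.
import numpy as np

rng = np.random.default_rng(0)
T, n_models = 120, 4                                    # months, candidate models
observed = rng.normal(size=T)                            # placeholder observed series
forecasts = observed[:, None] + rng.normal(scale=[0.2, 0.5, 0.8, 1.0], size=(T, n_models))

def rmse(weights):
    return np.sqrt(np.mean((observed - forecasts @ weights) ** 2))

def normalise(pop):
    pop = np.abs(pop)
    return pop / pop.sum(axis=1, keepdims=True)          # non-negative weights summing to 1

pop = normalise(rng.random((50, n_models)))              # initial population of weight vectors
for _ in range(200):                                     # generations
    fitness = np.array([rmse(w) for w in pop])
    parents = pop[np.argsort(fitness)[:25]]              # selection: keep the better half
    children = 0.5 * parents[rng.integers(0, 25, 25)] + 0.5 * parents[rng.integers(0, 25, 25)]  # crossover
    children += rng.normal(scale=0.05, size=children.shape)                                     # mutation
    pop = normalise(np.vstack([parents, children]))

best = pop[np.argmin([rmse(w) for w in pop])]
print("best weights:", np.round(best, 3), "RMSE:", round(rmse(best), 4))
```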
Procedia PDF Downloads 273
14392 Measuring Sustainable Interior Design
Authors: Iman Ibrahim
Abstract:
The interest of this paper is to review the sustainability measuring tools in interior design in the UAE. It examines the ability to create sustainable interior-designed buildings that satisfy the community's social and cultural needs in relation to the world's ecosystems and how much these are affected by humans; the research focuses on sustainability as a multi-dimensional concept including environmental, social and economic dimensions. The aim of this research is to reach the most suitable sustainable rating method criteria for buildings in the UAE, in an attempt to develop it to match the community culture. Developing such criteria is gaining significance in the UAE as a result of increased awareness of environmental, economic and social issues. This will allow an exploration of the suitable criteria for developing a sustainable rating method for buildings in the UAE. The final research findings will be presented as suitable criteria for developing a sustainable building assessment method for the UAE in terms of environmental, economic, social and cultural perspectives.
Keywords: rating methods, sustainability tools, UAE, local conditions
Procedia PDF Downloads 420
14391 The Layout Analysis of Handwriting Characters and the Fusion of Multi-style Ancient Books’ Background
Authors: Yaolin Tian, Shanxiong Chen, Fujia Zhao, Xiaoyu Lin, Hailing Xiong
Abstract:
Ancient books are significant cultural inheritances, and their background textures convey potential historical information. However, multi-style texture recovery of ancient books has received little attention. Restricted by insufficient ancient textures and a complex handling process, the generation of ancient textures confronts new challenges. For instance, training without sufficient data usually brings about overfitting or mode collapse, so some of the outputs are prone to be fake. Recently, image generation and style transfer based on deep learning have been widely applied in computer vision. Breakthroughs within the field make it possible to conduct research on multi-style texture recovery of ancient books. Under these circumstances, we propose a layout analysis and image fusion system. Firstly, we trained models using Deep Convolutional Generative Adversarial Networks (DCGAN) to synthesize multi-style ancient textures; then, we analyzed layouts based on the Position Rearrangement (PR) algorithm that we proposed to adjust the layout structure of foreground content; at last, we realized our goal by fusing the rearranged foreground texts and the generated background. In the experiments, diversified samples such as ancient Yi, Jurchen and seal scripts were selected as our training sets. Then, the performance of different fine-tuned models was gradually improved by adjusting the DCGAN model's parameters as well as its structure. In order to evaluate the results scientifically, the cross-entropy loss function and the Fréchet Inception Distance (FID) were selected as our assessment criteria. Eventually, we obtained model M8 with the lowest FID score. Compared with the DCGAN model proposed by Radford et al., the FID score of M8 improved by 19.26%, profoundly enhancing the quality of the synthetic images.
Keywords: deep learning, image fusion, image generation, layout analysis
Procedia PDF Downloads 159
14390 Application of Argumentation for Improving the Classification Accuracy in Inductive Concept Formation
Authors: Vadim Vagin, Marina Fomina, Oleg Morosin
Abstract:
This paper contains the description of an argumentation approach for the problem of inductive concept formation. It is proposed to use argumentation, based on defeasible reasoning with justification degrees, to improve the quality of classification models obtained by generalization algorithms. The experimental results on both clean and noisy data are also presented.
Keywords: argumentation, justification degrees, inductive concept formation, noise, generalization
Procedia PDF Downloads 442
14389 Hyperspectral Image Classification Using Tree Search Algorithm
Authors: Shreya Pare, Parvin Akhter
Abstract:
Remote sensing image classification becomes a very challenging task owing to the high dimensionality of hyperspectral images. Pixel-wise classification methods fail to take into account the spatial structure information of an image. Therefore, to improve the performance of classification, spatial information can be integrated into the classification process. In this paper, a multilevel thresholding algorithm based on a modified fuzzy entropy (MFE) function is used to perform the segmentation of hyperspectral images. The fuzzy parameters of the MFE function have been optimized by using a new meta-heuristic based on the tree search algorithm. The segmented image is classified by a large distribution machine (LDM) classifier. Experimental results are shown on a hyperspectral image dataset. The experimental outputs indicate that the proposed technique (MFE-TSA-LDM) achieves much higher classification accuracy for hyperspectral images when compared to state-of-the-art classification techniques. The proposed algorithm provides accurate segmentation and classification maps, thus becoming more suitable for image classification with large spatial structures.
Keywords: classification, hyperspectral images, large distribution margin, modified fuzzy entropy function, multilevel thresholding, tree search algorithm, hyperspectral image classification using tree search algorithm
Procedia PDF Downloads 178
14388 Multi-Model Super Ensemble Based Advanced Approaches for Monsoon Rainfall Prediction
Authors: Swati Bhomia, C. M. Kishtawal, Neeru Jaiswal
Abstract:
Traditionally, monsoon forecasts have encountered many difficulties that stem from numerous issues, such as a lack of adequate upper-air observations, the mesoscale nature of convection, proper resolution, radiative interactions, planetary boundary layer physics, mesoscale air-sea fluxes, representation of orography, etc. Uncertainties in any of these areas lead to large systematic errors. Global circulation models (GCMs), which are developed independently at different institutes and each of which carries a somewhat different representation of the above processes, can be combined to reduce the collective local biases in space and time and for different variables. This is the basic concept behind the multi-model superensemble, which comprises a training and a forecast phase. The training phase learns from the recent past performance of the models and is used to determine statistical weights from a least-squares minimization via a simple multiple regression. These weights are then used in the forecast phase. The superensemble forecasts carry the highest skill compared to the simple ensemble mean, the bias-corrected ensemble mean and the best model among the participating member models. This approach is a powerful post-processing method for the estimation of weather forecast parameters, reducing the direct model output errors. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed, mean sea level pressure, etc., in this paper this approach is applied to rainfall, a parameter quite difficult to handle with standard post-processing methods due to its high temporal and spatial variability. The present study aims at the development of advanced superensemble schemes comprising 1-5 day daily precipitation forecasts from five state-of-the-art global circulation models (GCMs), i.e., the European Centre for Medium Range Weather Forecasts (Europe), the National Center for Environmental Prediction (USA), the China Meteorological Administration (China), the Canadian Meteorological Centre (Canada) and the U.K. Meteorological Office (U.K.), obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE), which is one of the most complete data sets available. The novel approaches include the dynamical model selection approach, in which the superior models are selected from the participating member models at each grid point and for each forecast step in the training period. A multi-model superensemble based on training using similar conditions is also discussed in the present study; it is based on the assumption that training with similar types of conditions may provide better forecasts than the sequential training used in the conventional multi-model ensemble (MME) approaches. Further, a variety of methods available in the literature that incorporate a 'neighborhood' around each grid point to allow for spatial error or uncertainty have also been experimented with using the above-mentioned approaches. The comparison of these schemes with respect to the observations verifies that the newly developed approaches provide a more unified and skillful prediction of the summer monsoon (viz. June to September) rainfall compared to the conventional multi-model approach and the member models.
Keywords: multi-model superensemble, dynamical model selection, similarity criteria, neighborhood technique, rainfall prediction
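A minimal sketch of the training/forecast split described above: superensemble weights are obtained by a least-squares regression of observed anomalies on member-model anomalies over a training period, then applied in the forecast phase. The synthetic "rainfall" series and five-member setup are illustrative placeholders, not the TIGGE data.

```python
# Sketch: multi-model superensemble weights from least-squares training, then forecasting.
import numpy as np

rng = np.random.default_rng(1)
n_train, n_fcst, n_models = 200, 50, 5
obs = rng.gamma(shape=2.0, scale=5.0, size=n_train + n_fcst)                 # "rainfall"
members = obs[:, None] + rng.normal(scale=np.linspace(2, 6, n_models), size=(n_train + n_fcst, n_models))

# Training phase: observed anomalies regressed on member anomalies (simple multiple regression).
obs_bar, mem_bar = obs[:n_train].mean(), members[:n_train].mean(axis=0)
A = members[:n_train] - mem_bar
b = obs[:n_train] - obs_bar
weights, *_ = np.linalg.lstsq(A, b, rcond=None)

# Forecast phase: superensemble = observed climatology + weighted member anomalies.
superensemble = obs_bar + (members[n_train:] - mem_bar) @ weights
ensemble_mean = members[n_train:].mean(axis=1)

def rmse(pred):
    return np.sqrt(np.mean((obs[n_train:] - pred) ** 2))

print("weights:", np.round(weights, 3))
print("RMSE ensemble mean:", round(rmse(ensemble_mean), 3), "| superensemble:", round(rmse(superensemble), 3))
```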
Procedia PDF Downloads 139
14387 Pose Normalization Network for Object Classification
Authors: Bingquan Shen
Abstract:
Convolutional Neural Networks (CNN) have demonstrated their effectiveness in synthesizing 3D views of object instances at various viewpoints. Given the problem where one has limited viewpoints of a particular object for classification, we present a pose normalization architecture to transform the object to existing viewpoints in the training dataset before classification, to yield better classification performance. We have demonstrated that this Pose Normalization Network (PNN) can capture the style of the target object and is able to re-render it to a desired viewpoint. Moreover, we have shown that the PNN improves the classification result for the 3D chairs dataset and the ShapeNet airplanes dataset when given only images at limited viewpoints, as compared to a CNN baseline.
Keywords: convolutional neural networks, object classification, pose normalization, viewpoint invariant
Procedia PDF Downloads 355
14386 A Hybrid Tabu Search Algorithm for the Multi-Objective Job Shop Scheduling Problems
Authors: Aydin Teymourifar, Gurkan Ozturk
Abstract:
In this paper, a hybrid Tabu Search (TS) algorithm is suggested for the multi-objective job shop scheduling problems (MO-JSSPs). The algorithm integrates several shifting-bottleneck-based neighborhood structures with the Giffler & Thompson algorithm, which improves the efficiency of the search. Diversification and intensification are provided by applying local and global left-shift algorithms and also by creating new semi-active, active, and non-delay schedules. The suggested algorithm is tested on MO-JSSP benchmarks from the literature based on the Pareto optimality concept. Different performance criteria are used for the multi-objective algorithm evaluation. The proposed algorithm is able to find the Pareto solutions of the test problems in a shorter time than other algorithms from the literature.
Keywords: tabu search, heuristics, job shop scheduling, multi-objective optimization, Pareto optimality
Procedia PDF Downloads 443
14385 Infilling Strategies for Surrogate Model Based Multi-disciplinary Analysis and Applications to Velocity Prediction Programs
Authors: Malo Pocheau-Lesteven, Olivier Le Maître
Abstract:
Engineering and optimisation of complex systems is often achieved through multi-disciplinary analysis of the system, where each subsystem is modeled and interacts with other subsystems to model the complete system. The coherence of the outputs of the different sub-systems is achieved through the use of compatibility constraints, which enforce the coupling between the different subsystems. Due to the complexity of some sub-systems and the computational cost of evaluating their respective models, it is often necessary to build surrogate models of these subsystems to allow repeated evaluation of these subsystems at a relatively low computational cost. In this paper, Gaussian processes are used, as their probabilistic nature can be leveraged to evaluate the likelihood of satisfying the compatibility constraints. This paper presents infilling strategies to build accurate surrogate models of the subsystems in areas where they are likely to meet the compatibility constraint. It is shown that these infilling strategies can reduce the computational cost of building surrogate models for a given level of accuracy. An application of these methods to velocity prediction programs used in offshore racing naval architecture further demonstrates these methods' applicability in a real engineering context. Also, some examples of the application of uncertainty quantification to the field of naval architecture are presented.
Keywords: infilling strategy, gaussian process, multi disciplinary analysis, velocity prediction program
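A minimal sketch of the constraint-aware infilling idea: a Gaussian-process surrogate of a coupling residual is queried for its predictive mean and standard deviation, and the next sample is placed where the probability that the compatibility constraint is satisfied (the residual lying within a tolerance of zero) is highest. The toy residual function, kernel and tolerance are illustrative assumptions, not the paper's VPP models.

```python
# Sketch: pick the next infill point where the GP surrogate is most likely to satisfy
# the compatibility constraint |residual| <= tol.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def coupling_residual(x):
    """Toy stand-in for the expensive compatibility residual between two subsystems."""
    return np.sin(3 * x) + 0.5 * x - 0.2

X_train = np.linspace(0, 2, 6).reshape(-1, 1)            # initial design of experiments
y_train = coupling_residual(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(X_train, y_train)

X_cand = np.linspace(0, 2, 200).reshape(-1, 1)            # candidate infill locations
mu, sigma = gp.predict(X_cand, return_std=True)

tol = 0.05                                                # |residual| <= tol counts as compatible
p_feasible = norm.cdf((tol - mu) / sigma) - norm.cdf((-tol - mu) / sigma)

x_next = X_cand[np.argmax(p_feasible)]                    # infill where compatibility is most likely
print("next infill point:", x_next, "P(compatible):", round(p_feasible.max(), 3))
```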
Procedia PDF Downloads 157
14384 The Use of Layered Neural Networks for Classifying Hierarchical Scientific Fields of Study
Authors: Colin Smith, Linsey S Passarella
Abstract:
Due to the proliferation and decentralized nature of academic publication, no widely accepted scheme exists, to the authors' best knowledge, for organizing papers by their scientific field of study (FoS). While many academic journals require author-provided keywords for papers, these keywords range widely in scope and are not consistent across papers, journals, or field domains, necessitating alternative approaches to paper classification. Past attempts to perform field-of-study (FoS) classification on scientific texts have largely used a-hierarchical FoS schemas or ignored the schema's inherently hierarchical structure, e.g., by compressing the structure into a single layer for multi-label classification. In this paper, we introduce an application of a Layered Neural Network (LNN) to the problem of performing supervised hierarchical classification of scientific fields of study (FoS) on research papers. In this approach, paper embeddings from a pretrained language model are fed into a top-down LNN. Beginning with a single neural network (NN) for the highest layer of the class hierarchy, each node uses a separate local NN to classify the subsequent subfield child node(s) for an input embedding of concatenated paper titles and abstracts. We compare our LNN-FOS method to other recent machine learning methods using the Microsoft Academic Graph (MAG) FoS hierarchy and find that LNN-FOS offers increased classification accuracy at each FoS hierarchical level.
Keywords: hierarchical classification, layered neural network, scientific field of study, scientific taxonomy
Procedia PDF Downloads 134
14383 Fine-Scale Modeling the Influencing Factors of Multi-Time Dimensions of Transit Ridership at Station Level: The Study of Guangzhou City
Authors: Dijiang Lyu, Shaoying Li, Zhangzhi Tan, Zhifeng Wu, Feng Gao
Abstract:
Nowadays, China is experiencing some of the most rapid urban rail transit expansion in the world. The purpose of this study is to finely model the factors influencing transit ridership over multiple time dimensions within transit stations' pedestrian catchment areas (PCAs) in Guangzhou, China. This study was based on multi-source spatial data, including smart card data, high-spatial-resolution images, points of interest (POIs), real-estate online data and building height data. Eight multiple linear regression models using the backward stepwise method and a Geographic Information System (GIS) were created at the station level. According to the Chinese code for classification of urban land use and planning standards of development land, residential land use was divided into three categories: first-level (e.g. villas), second-level (e.g. communities) and third-level (e.g. urban villages). Finally, it was concluded that: (1) four factors (a CBD dummy, the number of feeder bus routes, the number of entrances or exits, and the years of station operation) proved to be positively correlated with transit ridership, while the areas of green land use and water land use were negatively correlated instead. (2) The areas of education land use and of second-level and third-level residential land use were found to be highly connected to the average morning-peak boarding and evening-peak alighting ridership, whereas the area of commercial land use and the average height of buildings were significantly positively associated with the average morning-peak alighting and evening-peak boarding ridership. (3) The area of second-level residential land use was rarely correlated with ridership in the other regression models, because private car ownership is still high in Guangzhou: some residents living in communities around the stations commute by transit at peak times, but others are much more willing to drive their own cars at non-peak times. The area of third-level residential land use, such as urban villages, was highly positively correlated with ridership in all models, indicating that residents who live in third-level residential land use are the main passenger source of the Guangzhou Metro. (4) The diversity of land use was found to have a significant impact on passenger flow at the weekend but was not related to weekday ridership. The findings can be useful for station planning, management and policymaking.
Keywords: fine-scale modeling, Guangzhou city, multi-time dimensions, multi-sources spatial data, transit ridership
Procedia PDF Downloads 142
14382 A Metaheuristic Approach for Optimizing Perishable Goods Distribution
Authors: Bahare Askarian, Suchithra Rajendran
Abstract:
Maintaining the freshness and quality of perishable goods during distribution is a critical challenge for logistics companies. This study presents a comprehensive framework aimed at optimizing the distribution of perishable goods through a mathematical model of the Transportation Inventory Location Routing Problem (TILRP). The model incorporates the impact of product age on customer demand, addressing the complexities associated with inventory management and routing. To tackle this problem, we develop both simple and hybrid metaheuristic algorithms designed for small- and medium-scale scenarios. The hybrid algorithm combines Biogeography-Based Optimization (BBO) with local search techniques to enhance performance in small- and medium-scale scenarios, extending our approach to larger-scale challenges. Through extensive numerical simulations and sensitivity analyses across various scenarios, the performance of the proposed algorithms is evaluated, assessing their effectiveness in achieving optimal solutions. The results demonstrate that our algorithms significantly enhance distribution efficiency, offering valuable insights for logistics companies striving to improve their perishable goods supply chains.
Keywords: perishable goods, meta-heuristic algorithm, vehicle problem, inventory models
Procedia PDF Downloads 23
14381 Urban Transport Demand Management Multi-Criteria Decision Using AHP and SERVQUAL Models: Case Study of Nigerian Cities
Authors: Suleiman Hassan Otuoze, Dexter Vernon Lloyd Hunt, Ian Jefferson
Abstract:
Urbanization has continued to widen the gap between demand and the resources available to provide resilient and sustainable transport services in many fast-growing developing countries' cities. Transport demand management is a decision-based optimization concept for both benchmarking and ensuring efficient use of transport resources. This study assesses the service quality of infrastructure and mobility services in the Nigerian cities of Kano and Lagos through five dimensions of quality (i.e., tangibility, reliability, responsibility, safety assurance and empathy). The methodology adopts a hybrid AHP-SERVQUAL model applied to questionnaire surveys to gauge the quality of satisfaction and the views of experts in the field. The AHP results prioritize tangibility, which defines the state of transportation infrastructure and services, in terms of satisfaction qualities and intervention decision weights in the two cities. The results recorded 'unsatisfactory' indices of quality of performance, with satisfaction rating values of 48% and 49% for Kano and Lagos, respectively. The satisfaction indices are identified as indicators of the low performance of transportation demand management (TDM) measures and of the necessity to re-order priorities and take proactive steps towards infrastructure. The findings pilot a framework for the comparative assessment of recognizable standards in transport services, best management ethics and the necessity of quality infrastructure to guarantee both resilient and sustainable urban mobility.
Keywords: transportation demand management, multi-criteria decision support, transport infrastructure, service quality, sustainable transport
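A minimal sketch of how SERVQUAL gap scores and AHP-derived dimension weights could be combined into a single service-quality index for a city; the survey means, pairwise judgements and the way the final index is formed are illustrative assumptions, not the study's actual survey data.

```python
# Sketch: weight SERVQUAL gaps (perception minus expectation) by AHP dimension weights.
import numpy as np

dimensions = ["tangibility", "reliability", "responsibility", "safety assurance", "empathy"]

# Mean survey scores per dimension on a 1-5 Likert scale (placeholders).
expectation = np.array([4.6, 4.5, 4.3, 4.7, 4.2])
perception  = np.array([2.4, 2.9, 2.7, 3.0, 2.8])
gap = perception - expectation                         # negative = service falls short

# AHP: weights from the principal eigenvector of an expert pairwise comparison matrix.
pairwise = np.array([[1,   2,   3,   1,   4],
                     [1/2, 1,   2,   1/2, 3],
                     [1/3, 1/2, 1,   1/3, 2],
                     [1,   2,   3,   1,   4],
                     [1/4, 1/3, 1/2, 1/4, 1]], dtype=float)
vals, vecs = np.linalg.eig(pairwise)
w = np.real(vecs[:, np.argmax(np.real(vals))])
weights = w / w.sum()

weighted_gap = float(weights @ gap)
satisfaction_index = float(weights @ (perception / expectation))   # one way to express % satisfaction
for dim, wt, g in zip(dimensions, weights, gap):
    print(f"{dim:17s} weight={wt:.2f} gap={g:+.2f}")
print("weighted gap:", round(weighted_gap, 2), "| satisfaction index:", round(100 * satisfaction_index, 1), "%")
```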
Procedia PDF Downloads 225