Search results for: multi criteria inventory classification models
14267 Classification of Barley Varieties by Artificial Neural Networks
Authors: Alper Taner, Yesim Benal Oztekin, Huseyin Duran
Abstract:
In this study, an Artificial Neural Network (ANN) was developed to classify barley varieties. For this purpose, physical properties of barley varieties were determined and ANN techniques were used. The physical properties of 8 barley varieties grown in Turkey, namely thousand kernel weight, geometric mean diameter, sphericity, kernel volume, surface area, bulk density, true density, porosity and colour parameters of the grain, were determined, and these properties were found to be statistically significant with respect to variety. Three ANN models, N-1, N-2 and N-3, were constructed and their performances compared. The best-fit model was found to be N-1, whose structure was designed with an input layer of 11 nodes, 2 hidden layers and 1 output layer. Thousand kernel weight, geometric mean diameter, sphericity, kernel volume, surface area, bulk density, true density, porosity and colour parameters of the grain were used as input parameters, and variety as the output parameter. R², Root Mean Square Error and Mean Error for the N-1 model were found to be 99.99%, 0.00074 and 0.009%, respectively. All results obtained by the N-1 model were quite consistent with the real data. With this model, it would be possible to construct automation systems for classification and cleaning in flour mills.
Keywords: physical properties, artificial neural networks, barley, classification
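The abstract describes a feed-forward network with 11 inputs, two hidden layers and one output. A minimal scikit-learn sketch of that kind of classifier is shown below; the placeholder data, hidden-layer widths and training settings are assumptions, not the authors' N-1 configuration.

```python
# Hypothetical sketch: classifying barley varieties from 11 physical/colour
# features with a two-hidden-layer neural network (scikit-learn).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 11))          # 11 features per kernel sample (placeholder data)
y = rng.integers(0, 8, size=400)        # 8 barley varieties (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Two hidden layers, mirroring the N-1 topology described in the abstract;
# the layer widths (16, 8) are assumptions, not the authors' values.
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0))
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```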
Procedia PDF Downloads 180
14266 Optimal Emergency Shipment Policy for a Single-Echelon Periodic Review Inventory System
Authors: Saeed Poormoaied, Zumbul Atan
Abstract:
Emergency shipments provide a powerful mechanism to alleviate the risk of imminent stock-outs and can result in substantial benefits in an inventory system. Customer satisfaction and a high service level are immediate consequences of utilizing emergency shipments. In this paper, we consider a single-echelon periodic review inventory system consisting of a single local warehouse, replenished from a central warehouse with ample capacity in an infinite-horizon setting. Since the structure of the optimal policy appears to be complicated, we analyze this problem under an order-up-to-S inventory control policy framework, the (S, T) policy, with the emergency shipment consideration. In each period of the periodic review policy, there is a single opportunity at any point in time for an emergency shipment, so that in case of stock-outs an emergency shipment is requested. The goal is to determine the timing and amount of the emergency shipment during a period (emergency shipment policy) as well as the base stock periodic review policy parameters (replenishment policy). We show how taking advantage of an emergency shipment during periods improves the performance of the classical (S, T) policy, especially when the fixed and unit emergency shipment costs are small. Investigating the structure of the objective function, we develop an exact algorithm for finding the optimal solution. We also provide a heuristic and an approximation algorithm for the periodic review inventory system problem. The experimental analyses indicate that the heuristic algorithm is computationally more efficient than the approximation algorithm, but in terms of solution quality, the approximation algorithm performs very well. We achieve up to 13% cost savings in the (S, T) policy if we apply the proposed emergency shipment policy. Moreover, our computational results reveal that the approximated solution is often within 0.21% of the globally optimal solution.
Keywords: emergency shipment, inventory, periodic review policy, approximation algorithm
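A rough simulation sketch of the idea, assuming Poisson daily demand and a simple "ship back up to S on stock-out" emergency rule; the cost parameters and trigger rule are illustrative only and do not reproduce the paper's optimal policy.

```python
# Illustrative simulation of an (S, T) policy with at most one emergency
# shipment per period. All parameters are assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(1)
S, T, periods = 50, 10, 2000           # order-up-to level, review period length, horizon
h, b, K_e, c_e = 1.0, 20.0, 30.0, 2.0  # holding, backorder, fixed/unit emergency cost

def simulate(use_emergency):
    inv, cost = S, 0.0
    for _ in range(periods):
        used = False                              # single emergency opportunity per period
        for _day in range(T):
            inv -= rng.poisson(4.0)               # daily demand
            if inv < 0 and use_emergency and not used:
                qty = S - inv                     # emergency shipment back up to S
                cost += K_e + c_e * qty
                inv, used = S, True
            cost += h * max(inv, 0) + b * max(-inv, 0)
        inv = S                                   # regular replenishment at the review epoch
    return cost / periods

print("average period cost without emergency:", round(simulate(False), 1))
print("average period cost with emergency   :", round(simulate(True), 1))
```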
Procedia PDF Downloads 141
14265 Identification of High-Rise Buildings Using Object Based Classification and Shadow Extraction Techniques
Authors: Subham Kharel, Sudha Ravindranath, A. Vidya, B. Chandrasekaran, K. Ganesha Raj, T. Shesadri
Abstract:
Digitization of urban features is a tedious and time-consuming process when done manually. In addition, Indian cities have complex habitat patterns and convoluted clustering patterns, which make it even more difficult to map features. This paper attempts to classify urban objects in a satellite image using object-oriented classification techniques, in which classes such as vegetation, water bodies, buildings, and shadows adjacent to the buildings were mapped semi-automatically. The building layer obtained from object-oriented classification was used along with already available building layers. The main focus, however, lay in the extraction of high-rise buildings using spatial technology, digital image processing, and modeling, which would otherwise be a very difficult task to carry out manually. Results indicated a considerable rise in the total number of buildings in the city. High-rise buildings were successfully mapped using satellite imagery and spatial technology along with logical reasoning and mathematical considerations. The results clearly depict the ability of Remote Sensing and GIS to solve complex problems in urban scenarios, such as studying urban sprawl and identifying more complex features like high-rise buildings and multi-dwelling units. The object-oriented technique has proven to be effective and yielded an overall efficiency of 80 percent in the classification of high-rise buildings.
Keywords: object oriented classification, shadow extraction, high-rise buildings, satellite imagery, spatial technology
Procedia PDF Downloads 156
14264 A Review of Different Studies on Hidden Markov Models for Multi-Temporal Satellite Images: Stationarity and Non-Stationarity Issues
Authors: Ali Ben Abbes, Imed Riadh Farah
Abstract:
Due to considerable advances in Multi-Temporal Satellite Images (MTSI), remote sensing applications have become more accurate. Recently, many advances in modeling MTSI have been made using various models. The purpose of this article is to present an overview of studies using the Hidden Markov Model (HMM). First, we provide background on HMMs and their applications in this context. A comparison of the different works is discussed, and possible areas and challenges are highlighted. Second, we discuss differences between applications such as vegetation monitoring and urban growth. Nevertheless, most research efforts have used only stationary data. From another point of view, we describe a new non-stationary HMM that is defined on a set of components of the time series, e.g., seasonal, trend and random components. This approach gives more accurate results and improves the applicability of the HMM in modeling non-stationary data series. In order to assess the performance of the HMM, different experiments are carried out using Moderate Resolution Imaging Spectroradiometer (MODIS) NDVI time series of the northwestern region of Tunisia and Landsat time series of Tres Cantos, Madrid, in Spain.
Keywords: multi-temporal satellite image, HMM, non-stationarity, vegetation, urban
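A minimal sketch of the basic building block discussed here, fitting a Gaussian HMM to an NDVI-like series with the hmmlearn package; the synthetic series and the choice of three hidden states are assumptions.

```python
# Hypothetical sketch: fitting a Gaussian HMM to a (synthetic) NDVI time series
# with hmmlearn; three hidden states are assumed for illustration.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
t = np.arange(0, 10 * 23)                       # ~10 years of 16-day composites
ndvi = 0.4 + 0.25 * np.sin(2 * np.pi * t / 23) + 0.05 * rng.normal(size=t.size)
X = ndvi.reshape(-1, 1)                         # (n_samples, n_features)

model = hmm.GaussianHMM(n_components=3, covariance_type="full", n_iter=200, random_state=0)
model.fit(X)
states = model.predict(X)                       # most likely hidden state sequence
print("state means:", model.means_.ravel())
```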
Procedia PDF Downloads 354
14263 Multi-Sectoral Prioritization of Zoonotic Diseases in Uganda, 2017: The Perspective of One Health Experts
Authors: Musa Sekamatte
Abstract:
Background: Zoonotic diseases continue to be a public health burden in countries around the world. Uganda is especially vulnerable due to its location, biodiversity, and population. Given these concerns, the Ugandan government, in collaboration with the Global Health Security Agenda, conducted a zoonotic disease prioritization workshop to identify zoonotic diseases of concern to multiple Ugandan ministries. Materials and Methods: The One Health Zoonotic Disease Prioritization tool, developed by the U.S. Centers for Disease Control and Prevention (CDC), was used for prioritization of zoonotic diseases in Uganda. Workshop participants included voting members representing human, animal, and environmental health ministries as well as key partners who observed the workshop. Over 100 articles describing characteristics of these zoonotic diseases were reviewed for the workshop. During the workshop, criteria for prioritization were selected, and questions and weights relevant to each criterion were determined. Next steps for multi-sectoral engagement for the prioritized zoonoses were then discussed. Results: Forty-eight zoonotic diseases were considered during the workshop. The criteria selected to prioritize zoonotic diseases, in order of importance, were (1) severity of disease in humans in Uganda, (2) availability of effective control strategies, (3) potential to cause an epidemic or pandemic in humans or animals, (4) social and economic impacts, and (5) bioterrorism potential. Seven zoonotic diseases were identified as priorities for Uganda: anthrax, zoonotic influenza viruses, viral hemorrhagic fevers, brucellosis, African trypanosomiasis, plague, and rabies. Discussion: One Health approaches and multi-sectoral collaborations are crucial in the surveillance, prevention, and control strategies for zoonotic diseases. Uganda used such an approach to identify zoonotic diseases of national concern. Identifying these priority diseases enables the National One Health Platform and the Zoonotic Disease Coordinating Office to address these diseases in the future.
Keywords: national one health platform, zoonotic diseases, multi-sectoral, severity
Procedia PDF Downloads 196
14262 A t-SNE and UMAP Based Neural Network Image Classification Algorithm
Authors: Shelby Simpson, William Stanley, Namir Naba, Xiaodi Wang
Abstract:
Both t-SNE and UMAP are state-of-the-art tools that predominantly preserve local structure, that is, they group neighboring data points together, which provides a very informative visualization of heterogeneity in data. In this research, we develop a t-SNE and UMAP based neural network image classification algorithm that embeds the original dataset into a corresponding low-dimensional dataset as a preprocessing step, then uses this embedded dataset as input to a specially designed neural network classifier for image classification. In our experiments, we use the Fashion-MNIST data set, a labeled data set of images of clothing objects. t-SNE and UMAP are used for dimensionality reduction of the data set and thus produce low-dimensional embeddings. Furthermore, we feed the embeddings from t-SNE and UMAP into two neural networks. The accuracy of the models from the two neural networks is then compared to a dense neural network that does not use embedding as an input, to show which model can classify the images of clothing objects more accurately.
Keywords: t-SNE, UMAP, fashion MNIST, neural networks
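A hedged sketch of the embed-then-classify pipeline on Fashion-MNIST; UMAP is used for the out-of-sample embedding step because scikit-learn's t-SNE offers no transform for unseen data, and the embedding dimension and network size are assumptions.

```python
# Hypothetical sketch of the embed-then-classify pipeline: UMAP reduces the
# images to a low-dimensional embedding, which feeds a small neural network.
import numpy as np
import umap
from tensorflow.keras.datasets import fashion_mnist
from sklearn.neural_network import MLPClassifier

(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
x_train = x_train.reshape(len(x_train), -1)[:10000] / 255.0   # subsample to keep it fast
y_train = y_train[:10000]
x_test = x_test.reshape(len(x_test), -1)[:2000] / 255.0
y_test = y_test[:2000]

reducer = umap.UMAP(n_components=10, random_state=0)           # 10-D embedding (assumed)
z_train = reducer.fit_transform(x_train)
z_test = reducer.transform(x_test)                             # embed unseen test images

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(z_train, y_train)
print("accuracy on embedded test images:", clf.score(z_test, y_test))
```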
Procedia PDF Downloads 199
14261 A Machine Learning Approach for Classification of Directional Valve Leakage in the Hydraulic Final Test
Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter
Abstract:
Due to increasing cost pressure in global markets, artificial intelligence is becoming a technology that is decisive for competition. Predictive quality enables machinery and plant manufacturers to ensure product quality by using data-driven forecasts via machine learning models as a decision-making basis for test results. The use of cross-process Bosch production data along the value chain of hydraulic valves is a promising approach to classifying the quality characteristics of workpieces.
Keywords: predictive quality, hydraulics, machine learning, classification, supervised learning
Procedia PDF Downloads 232
14260 Control-Oriented Enhanced Zero-Dimensional Two-Zone Combustion Modelling of Internal Combustion Engines
Authors: Razieh Arian, Hadi Adibi-Asl
Abstract:
This paper investigates efficient combustion modeling for cycle simulation in internal combustion engine (ICE) studies. The term "efficient model" means that the model must generate the desired simulation results while having a fast simulation time; in other words, the efficient model is defined based on the application of the model. The objective of this study is to develop math-based models for control applications, or shortly, control-oriented models. The study compares different modeling approaches used to model ICEs, such as mean-value models and zero-dimensional, quasi-dimensional, and multi-dimensional models, for control applications. Mean-value models have been widely used for model-based control applications, but recently, with the development of advanced simulation tools (e.g., Maple/MapleSim), higher-order (more complex) models can also be considered as control-oriented models. This paper presents enhanced zero-dimensional cycle-by-cycle modeling and simulation of a spark ignition engine with a two-zone combustion model. The simulation results are cross-validated against the simulation results from the GT-Power package and show good agreement in terms of trends and values.
Keywords: two-zone combustion, control-oriented model, Wiebe function, internal combustion engine
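The keywords mention the Wiebe function, the usual mass-fraction-burned law in zero-dimensional combustion models; a short sketch with typical (assumed) parameter values follows.

```python
# Minimal sketch of the Wiebe mass-fraction-burned curve used in
# zero-dimensional combustion models. Parameter values are typical
# textbook assumptions, not the paper's calibrated values.
import numpy as np

def wiebe(theta, theta0=-10.0, dtheta=60.0, a=5.0, m=2.0):
    """Mass fraction burned: x_b = 1 - exp(-a * ((theta - theta0)/dtheta)**(m+1))."""
    x = np.clip((theta - theta0) / dtheta, 0.0, None)   # no burning before theta0
    return 1.0 - np.exp(-a * x ** (m + 1))

crank_angle = np.linspace(-30, 90, 7)                    # degrees after top dead centre
for ca, xb in zip(crank_angle, wiebe(crank_angle)):
    print(f"theta = {ca:6.1f} deg  ->  x_b = {xb:.3f}")
```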
Procedia PDF Downloads 341
14259 Identifying Large-Scale Photovoltaic and Concentrated Solar Power Hot Spots: Multi-Criteria Decision-Making Framework
Authors: Ayat-Allah Bouramdane
Abstract:
Solar Photovoltaic (PV) and Concentrated Solar Power (CSP) do not burn fossil fuels and do not release greenhouse gases into the atmosphere as they generate electricity; they could therefore meet the world's needs for low-carbon power generation. The power output of the solar PV module and the CSP collector is proportional to the temperature and the amount of solar radiation received by their surface. Hence, the determination of the most convenient locations for PV and CSP systems is crucial to maximizing their output power. This study aims to provide a hands-on and plausible approach to the multi-criteria evaluation of site suitability of PV and CSP plants using a combination of Geographic Referenced Information (GRI) and the Analytic Hierarchy Process (AHP). The GRI-based AHP approach is applied to specify the criteria and sub-criteria; to identify the unsuitable areas and the low-, moderate-, high- and very highly suitable areas for each GRI layer; to build the pairwise comparison matrix at each level of the hierarchy structure based on experts' knowledge; and to calculate the weights using AHP to create the final suitability map of solar PV and CSP plants in Morocco, with a particular focus on the city of Dakhla. The results confirm that solar irradiation is the main decision factor for integrating these technologies into Morocco's energy policy goals, but they explicitly account for other factors that can not only limit the potential of certain locations but even exclude areas of Dakhla classified as unsuitable. We discuss the sensitivity of the PV and CSP site suitability to different aspects, such as the methodology, the climate conditions, and the technology used for each source, and provide final recommendations for the Moroccan energy strategy by analyzing whether Morocco's actual PV and CSP installations are located within areas deemed suitable and by discussing several cases that provide mutual benefits across the Food-Energy-Water nexus. The adapted methodology and resulting suitability map could be used by researchers or engineers to provide helpful information for decision-makers in terms of site selection, design, and planning of future solar plants, especially in areas suffering from energy shortages, such as Dakhla, which is now one of Africa's most promising investment hubs and is especially attractive to investors looking to root their operations in Africa and export to European markets.
Keywords: analytic hierarchy process, concentrated solar power, Dakhla, geographic referenced information, Morocco, multi-criteria decision-making, photovoltaic, site suitability
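A small sketch of the AHP step described above: deriving criteria weights from a pairwise comparison matrix via its principal eigenvector and checking consistency. The example matrix is made up and does not reflect the experts' judgments.

```python
# Illustrative AHP step: weights from a pairwise comparison matrix via the
# principal eigenvector, plus the consistency ratio. The 4x4 matrix below is
# a made-up example, not the experts' actual judgments.
import numpy as np

A = np.array([[1,   3,   5,   7],
              [1/3, 1,   3,   5],
              [1/5, 1/3, 1,   3],
              [1/7, 1/5, 1/3, 1]])            # e.g. irradiation, slope, roads, grid distance

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                   # normalised criteria weights

n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)           # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]            # Saaty's random index
print("weights:", np.round(w, 3), " CR:", round(CI / RI, 3))   # CR < 0.1 is acceptable
```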
Procedia PDF Downloads 180
14258 Effect of Storey Number on Vierendeel Action in Progressive Collapse of RC Frames
Authors: Qian Huiya, Feng Lin
Abstract:
The progressive collapse of reinforced concrete (RC) structures can cause huge casualties and property losses. Therefore, it is necessary to evaluate the ability of structures to resist progressive collapse accurately. This paper numerically investigated the effect of storey number on the mechanism and quantitative contribution of the Vierendeel action (VA) in progressive collapse under a corner column removal scenario. First, finite element (FE) models of multi-storey RC frame structures were developed using LS-DYNA. Then, the accuracy of the modeling technique was validated against test results obtained by the authors. Last, the validated FE models were applied to investigate the structural behavior of RC frames with different storey numbers, from one to six storeys. Results show that the multi-storey substructures formed additional plastic hinges at the beam ends near the corner column in the second to top storeys, and at the lower end of the corner column in the first storey. The average ultimate resistance of each storey of the multi-storey substructures was increased by 14.0% to 18.5% compared with that of the single-storey substructure, which experiences no VA. The contribution of VA to the ultimate resistance decreased with the increase of the storey number.
Keywords: progressive collapse, reinforced concrete structure, storey number, Vierendeel action
Procedia PDF Downloads 66
14257 Image Captioning with Vision-Language Models
Authors: Promise Ekpo Osaine, Daniel Melesse
Abstract:
Image captioning is an active area of research in the multi-modal artificial intelligence (AI) community, as it connects vision and language understanding, especially in settings where a model must understand the content shown in an image and generate semantically and grammatically correct descriptions. In this project, we followed a standard approach to a deep learning-based image captioning model with an injection architecture for the encoder-decoder setup, where the encoder extracts image features and the decoder generates a sequence of words that represents the image content. We investigated the image encoders ResNet101, InceptionResNetV2, EfficientNetB7, EfficientNetV2M, and CLIP. As the caption generation structure, we explored long short-term memory (LSTM). The CLIP-LSTM model demonstrated superior performance compared to the other encoder-decoder models, achieving a BLEU-1 score of 0.904 and a BLEU-4 score of 0.640. Additionally, among the CNN-LSTM models, EfficientNetV2M-LSTM exhibited the highest performance, with a BLEU-1 score of 0.896 and a BLEU-4 score of 0.586 while using a single-layer LSTM.
Keywords: multi-modal AI systems, image captioning, encoder, decoder, BLEU score
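A brief sketch of how the reported BLEU-1 and BLEU-4 scores are typically computed with NLTK; the captions below are invented examples.

```python
# Hypothetical sketch: scoring a generated caption against references with
# BLEU-1 and BLEU-4 (the metrics reported in the abstract), using NLTK.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [["a", "dog", "runs", "across", "the", "grass"],
              ["a", "dog", "is", "running", "on", "grass"]]
candidate = ["a", "dog", "runs", "on", "the", "grass"]

smooth = SmoothingFunction().method1
bleu1 = sentence_bleu(references, candidate, weights=(1, 0, 0, 0), smoothing_function=smooth)
bleu4 = sentence_bleu(references, candidate, weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=smooth)
print(f"BLEU-1 = {bleu1:.3f}, BLEU-4 = {bleu4:.3f}")
```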
Procedia PDF Downloads 77
14256 Classification of Germinatable Mung Bean by Near Infrared Hyperspectral Imaging
Authors: Kaewkarn Phuangsombat, Arthit Phuangsombat, Anupun Terdwongworakul
Abstract:
Hard seeds will not grow and can cause mold in the sprouting process. Thus, the hard seeds need to be separated from the normal seeds. Near infrared hyperspectral imaging in the range of 900 to 1700 nm was implemented to develop a model, by partial least squares discriminant analysis, to discriminate the hard seeds from the normal seeds. The orientation of the seeds was also studied to compare the performance of the models. The model based on the hilum-up orientation achieved the best result, giving a coefficient of determination of 0.98 and a root mean square error of prediction of 0.07, with a classification accuracy of 100%.
Keywords: mung bean, near infrared, germinatability, hard seed
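A hedged sketch of the PLS-DA idea, assuming synthetic spectra: regress a 0/1 hard/normal label on the NIR spectrum with PLS and threshold the prediction.

```python
# Hypothetical sketch of PLS-DA on NIR spectra: fit a PLS regression against a
# 0/1 label (hard vs. normal seed) and threshold the prediction at 0.5.
# The spectra below are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_bands = 300, 256                         # samples x wavelengths (900-1700 nm grid)
y = rng.integers(0, 2, size=n)                # 1 = hard seed, 0 = normal seed
X = rng.normal(size=(n, n_bands)) + y[:, None] * 0.3   # class-dependent offset

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=8)           # number of latent variables (assumed)
pls.fit(X_tr, y_tr)
y_hat = (pls.predict(X_te).ravel() > 0.5).astype(int)
print("classification accuracy:", (y_hat == y_te).mean())
```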
Procedia PDF Downloads 305
14255 Multi-Criteria Decision Making Tool for Assessment of Biorefinery Strategies
Authors: Marzouk Benali, Jawad Jeaidi, Behrang Mansoornejad, Olumoye Ajao, Banafsheh Gilani, Nima Ghavidel Mehr
Abstract:
The Canadian forest industry is seeking to identify and implement transformational strategies for enhanced financial performance through the emerging bioeconomy or, more specifically, through the concept of the biorefinery. For example, processing forest residues or the surplus of biomass available on mill sites for the production of biofuels, biochemicals and/or biomaterials is one of the attractive strategies, along with traditional wood and paper products and cogenerated energy. There are many possible process-product biorefinery pathways, each associated with specific product portfolios with different levels of risk. Thus, it is not obvious which unique strategy the forest industry should select and implement. Therefore, there is a need for analytical and design tools that enable evaluating biorefinery strategies based on a set of criteria from a sustainability perspective over the short and long terms, while selecting the existing core products as well as selecting the new product portfolio. In addition, it is critical to assess the manufacturing flexibility to internalize the risk from market price volatility of each targeted bio-based product in the product portfolio prior to investing heavily in any biorefinery strategy. This paper will focus on introducing a systematic methodology for designing integrated biorefineries using process systems engineering tools as well as a multi-criteria decision making framework to put forward the most effective biorefinery strategies that fulfill the needs of the forest industry. Topics to be covered will include market analysis, techno-economic assessment, cost accounting, energy integration analysis, life cycle assessment and supply chain analysis. This will be followed by a description of the vision as well as the key features and functionalities of the I-BIOREF software platform, developed by CanmetENERGY of Natural Resources Canada. Two industrial case studies will be presented to support the robustness and flexibility of the I-BIOREF software platform: i) an integrated Canadian Kraft pulp mill with a lignin recovery process (namely, LignoBoost™); ii) a standalone biorefinery based on an ethanol-organosolv process.
Keywords: biorefinery strategies, bioproducts, co-production, multi-criteria decision making, tool
Procedia PDF Downloads 232
14254 Comparative Analysis of Feature Extraction and Classification Techniques
Authors: R. L. Ujjwal, Abhishek Jain
Abstract:
In the field of computer vision, most facial variations such as identity, expression, emotion and gender have been extensively studied. Automatic age estimation has rarely been explored. With the age progression of a human, the features of the face change. This paper provides a new comparative study of different algorithms for feature extraction (hybrid features using Haar cascade and HOG features) and classification (KNN and SVM) on a training dataset. By using these algorithms, we are trying to identify the best classification algorithm. The same is done for the feature extraction part, where features are extracted using Haar cascade and HOG. This work is carried out in the context of an age group classification model.
Keywords: computer vision, age group, face detection
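A rough sketch of such a pipeline, assuming OpenCV's bundled frontal-face Haar cascade and a 64x64 HOG window; image paths, labels and classifier settings are placeholders.

```python
# Hypothetical sketch of the pipeline: Haar-cascade face detection, HOG feature
# extraction on the cropped face, then KNN vs. SVM for age-group classification.
import cv2
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)  # winSize, blockSize, stride, cell, bins

def face_hog_features(image_path):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    faces = face_cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = cv2.resize(img[y:y + h, x:x + w], (64, 64))
    return hog.compute(face).ravel()

# Hypothetical usage on a labelled image set (paths and labels are placeholders):
# X = np.array([face_hog_features(p) for p in image_paths])
# y = np.array(age_group_labels)
# KNeighborsClassifier(n_neighbors=5).fit(X, y)
# SVC(kernel="rbf", C=1.0).fit(X, y)
```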
Procedia PDF Downloads 370
14253 A Comparative Analysis of ARIMA and Threshold Autoregressive Models on Exchange Rate
Authors: Diteboho Xaba, Kolentino Mpeta, Tlotliso Qejoe
Abstract:
This paper assesses the in-sample forecasting of South African exchange rates by comparing a linear ARIMA model and a SETAR model. The study uses monthly adjusted data of South African exchange rates with 420 observations. The Akaike information criterion (AIC) and the Schwarz information criterion (SIC) are used for model selection. Mean absolute error (MAE), root mean squared error (RMSE) and mean absolute percentage error (MAPE) are the error metrics used to evaluate the forecast capability of the models. The Diebold-Mariano (DM) test is employed to check forecast accuracy in order to distinguish the forecasting performance of the two models (ARIMA and SETAR). The results indicate that both models perform well when modelling and forecasting the exchange rates, but SETAR seems to outperform ARIMA.
Keywords: ARIMA, error metrics, model selection, SETAR
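A minimal sketch of the ARIMA side of such a comparison with statsmodels, using a synthetic series in place of the exchange-rate data; the SETAR fit would require a separate package and is not shown.

```python
# Hypothetical sketch: fit an ARIMA model, report AIC for model selection, and
# compute MAE/RMSE/MAPE on one-step in-sample predictions. The series is a
# synthetic random walk standing in for the exchange-rate data.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = 10 + np.cumsum(rng.normal(0, 0.1, size=420))     # 420 monthly observations (placeholder)

res = ARIMA(y, order=(1, 1, 1)).fit()
pred = res.predict(start=1, end=len(y) - 1)          # one-step in-sample predictions
err = y[1:] - pred

mae = np.mean(np.abs(err))
rmse = np.sqrt(np.mean(err ** 2))
mape = np.mean(np.abs(err / y[1:])) * 100
print(f"AIC={res.aic:.1f}  MAE={mae:.4f}  RMSE={rmse:.4f}  MAPE={mape:.2f}%")
```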
Procedia PDF Downloads 244
14252 Suitable Models and Methods for the Steady-State Analysis of Multi-Energy Networks
Authors: Juan José Mesas, Luis Sainz
Abstract:
The motivation for this paper lies in the need for energy networks to reduce losses, improve performance, optimize their operation and benefit from the interconnection capacity with other networks enabled for other energy carriers. These interconnections generate interdependencies between some energy networks and others, which requires suitable models and methods for their analysis. Traditionally, the modeling and study of energy networks have been carried out independently for each energy carrier. Thus, there are well-established models and methods for the steady-state analysis of electrical networks, gas networks, and thermal networks separately. The aim is to extend and combine them adequately so that the steady-state analysis of networks with multiple energy carriers can be addressed in an integrated way. Firstly, the added value of multi-energy networks, their operation, and the basic principles that characterize them are explained. In addition, two current aspects of great relevance are presented: the storage technologies and the coupling elements used to interconnect one energy network with another. Secondly, the characteristic equations of the different energy networks necessary to carry out the steady-state analysis are detailed. The electrical network, the natural gas network, and the thermal network of heat and cold are considered in this paper. After the presentation of the equations, a particular case of the steady-state analysis of a specific multi-energy network is studied. This network is represented graphically, the interconnections between the different energy carriers are described, their technical data are given, and the equations that have previously been presented theoretically are formulated and developed. Finally, the two iterative numerical resolution methods considered in this paper are presented, as well as the resolution procedure and the results obtained. The pros and cons of the application of both methods are explained. It is verified that the results obtained for the electrical network (voltage magnitudes and angles), the natural gas network (pressures), and the thermal network (mass flows and temperatures) are correct, since they comply with the distribution, operation, consumption and technical characteristics of the multi-energy network under study.
Keywords: coupling elements, energy carriers, multi-energy networks, steady-state analysis
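As an illustration of the kind of nodal equations solved iteratively here, the sketch below balances flows in a made-up three-node gas network with a simplified quadratic pipe law and a Newton-type root finder.

```python
# Illustrative sketch of a gas-network steady-state solve: nodal flow balances
# with a quadratic pipe law, solved for unknown pressures with a Newton-type
# root finder. The 3-node network and constants are made up for demonstration.
import numpy as np
from scipy.optimize import fsolve

p_slack = 60.0             # known pressure at the source node (bar)
C12, C23 = 0.9, 0.7        # pipe flow constants (assumed)
load2, load3 = 15.0, 10.0  # gas withdrawals at nodes 2 and 3

def pipe_flow(C, pa, pb):
    """Simplified Weymouth-type law: q = C * sign(pa^2 - pb^2) * sqrt(|pa^2 - pb^2|)."""
    d = pa**2 - pb**2
    return C * np.sign(d) * np.sqrt(abs(d))

def balances(p):
    p2, p3 = p
    q12 = pipe_flow(C12, p_slack, p2)
    q23 = pipe_flow(C23, p2, p3)
    return [q12 - q23 - load2,     # node 2: inflow - outflow - demand = 0
            q23 - load3]           # node 3: inflow - demand = 0

p2, p3 = fsolve(balances, x0=[55.0, 50.0])
print(f"p2 = {p2:.2f} bar, p3 = {p3:.2f} bar")
```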
Procedia PDF Downloads 81
14251 Selection of Appropriate Classification Technique for Lithological Mapping of Gali Jagir Area, Pakistan
Authors: Khunsa Fatima, Umar K. Khattak, Allah Bakhsh Kausar
Abstract:
Satellite image interpretation and analysis assist geologists by providing valuable information about the geology and minerals of an area to be surveyed. A test site in Fatejang, district Attock, has been studied using Landsat ETM+ and ASTER satellite images for lithological mapping. Five different supervised image classification techniques, namely maximum likelihood, parallelepiped, minimum distance to mean, Mahalanobis distance and spectral angle mapper, were applied to both satellite images to find the most suitable classification technique for lithological mapping in the study area. The results of these five image classification techniques were compared with the geological map produced by the Geological Survey of Pakistan. The result of the maximum likelihood classification technique applied to the ASTER satellite image has the highest correlation of 0.66 with the geological map. Field observations and XRD spectra of field samples also verified the results. A lithological map was then prepared based on the maximum likelihood classification of the ASTER satellite image.
Keywords: ASTER, Landsat-ETM+, satellite, image classification
Procedia PDF Downloads 395
14250 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods
Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer
Abstract:
Model assessment, in the Bayesian context, involves evaluation of the goodness-of-fit and the comparison of several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, the data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the Deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice (once for model estimation and once for testing), a bias correction that penalises the model complexity is incorporated in these criteria. Cross-validation (CV) is another method used for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive CV variant, as it fits as many models as there are observations. Importance sampling (IS), truncated importance sampling (TIS) and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to the exact LOO-CV and utilise the existing MCMC results, avoiding expensive computation. The reciprocals of the predictive densities calculated over posterior draws for each observation are treated as the raw importance weights. These are in turn used to calculate the approximate LOO-CV of the observation as a weighted average of posterior densities. In IS-LOO, the raw weights are used directly. In contrast, the larger weights are replaced by their modified truncated weights in calculating TIS-LOO and PSIS-LOO. Although information criteria and LOO-CV are unable to reflect the goodness-of-fit in an absolute sense, the differences can be used to measure the relative performance of the models of interest. However, the use of these measures is only valid under specific circumstances. This study developed 11 models using normal, log-normal, gamma, and Student's t distributions to improve PCR stutter prediction with forensic data. These models comprise four with profile-wide variances, four with locus-specific variances, and three two-component mixture models. The mean stutter ratio in each model is modeled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations. Even though IS-LOO, TIS-LOO, and PSIS-LOO are considered to be approximations of the exact LOO-CV, the study observed some drastic deviations in the results. However, there are some interesting relationships among the logarithms of the pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles were observed for the models conditional on equal posterior variances in lppds. This study illustrates the limitations of the information criteria in practical model comparison problems. In addition, the relationships among the LOO-CV approximation methods and WAIC, together with their limitations, are discussed. Finally, useful recommendations that may help in practical model comparisons with these methods are provided.
Keywords: cross-validation, importance sampling, information criteria, predictive accuracy
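A hedged sketch of how WAIC and PSIS-LOO are commonly computed from posterior draws with ArviZ; a bundled example posterior stands in for the stutter-ratio models, which are not reproduced here.

```python
# Hypothetical sketch: computing WAIC and PSIS-LOO from posterior draws with
# ArviZ. A bundled example dataset stands in for the stutter-ratio models.
import arviz as az

idata = az.load_arviz_data("centered_eight")      # example posterior with a log_likelihood group
print(az.waic(idata, pointwise=True))             # elpd_waic with pointwise lppd terms
print(az.loo(idata, pointwise=True))              # PSIS-LOO with Pareto-k diagnostics

# Relative comparison of candidate models would look like (names are hypothetical):
# az.compare({"normal": idata_normal, "lognormal": idata_lognormal}, ic="loo")
```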
Procedia PDF Downloads 393
14249 Residual Life Estimation Based on Multi-Phase Nonlinear Wiener Process
Authors: Hao Chen, Bo Guo, Ping Jiang
Abstract:
Residual life (RL) estimation based on a multi-phase nonlinear Wiener process was studied in this paper, which is significant for complicated products with small samples. Firstly, a nonlinear Wiener model with random parameters was introduced, and a multi-phase nonlinear Wiener model was proposed to model the degradation process of products that are nonlinear and separated into different phases. Then the multi-phase RL probability density function based on the presented model was derived approximately in a closed form, and parameter estimation was achieved with the method of maximum likelihood estimation (MLE). Finally, the method was applied to estimate the RL of a high voltage pulse capacitor. Compared with three other models by log-likelihood function (Log-LF) and Akaike information criterion (AIC), the results show that the proposed degradation model can capture the degradation process of high voltage pulse capacitors in a better way and provide a more reliable result.
Keywords: multi-phase nonlinear Wiener process, residual life estimation, maximum likelihood estimation, high voltage pulse capacitor
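A minimal sketch of the single-phase building block, assuming a linear Wiener process: simulate a degradation path and recover drift and diffusion by MLE from the increments. The multi-phase and random-parameter extensions of the paper are not reproduced.

```python
# Hypothetical sketch: simulate a Wiener degradation path X(t) = mu*t + sigma*B(t)
# and recover (mu, sigma) by MLE from the increments.
import numpy as np

rng = np.random.default_rng(0)
mu_true, sigma_true, dt, n = 0.5, 0.2, 0.1, 500
increments = mu_true * dt + sigma_true * np.sqrt(dt) * rng.normal(size=n)

# MLE for independent Gaussian increments dX ~ N(mu*dt, sigma^2*dt):
mu_hat = increments.sum() / (n * dt)
sigma2_hat = np.mean((increments - mu_hat * dt) ** 2) / dt
print(f"mu_hat = {mu_hat:.3f}, sigma_hat = {np.sqrt(sigma2_hat):.3f}")

# For a failure threshold D, the residual life follows an inverse-Gaussian
# first-passage-time distribution; its mean is (D - current level) / mu_hat.
D, x_current = 40.0, increments.cumsum()[-1]     # threshold D is an assumed value
print("mean residual life estimate:", (D - x_current) / mu_hat)
```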
Procedia PDF Downloads 453
14248 Virtual Container Yard: A Paradigm Shift in Container Inventory Management
Authors: Lalith Edirisinghe, Zhihong Jin, A.W. Wijeratne, Hansa Edirisinghe, Lakshmi Ranwala Rashika Mudunkotuwa
Abstract:
A paradigm shift in container inventory management (CIM) is a long-awaited industry need. The virtual container yard (VCY) is a concept developed in 2013, and its primary objective is to minimize shipping transport cost by implementing container exchange between carriers. Shipping lines always try to maintain low container idle time and provide high customer satisfaction. However, it is disappointing to note that carriers turn a blind eye to the escalating cost resulting from the present inefficient CIM mechanism. The cost of empty container management is simply transferred to the importers and exporters as freight adjustments. It also creates an environmental hazard and has therefore become a problem for society. A paradigm shift may thus be required, as the present CIM system is not working for the common interests of human beings as it should.
Keywords: collaboration, inventory, shipping, virtual container yard
Procedia PDF Downloads 263
14247 Extraction of Forest Plantation Resources in Selected Forest of San Manuel, Pangasinan, Philippines Using LiDAR Data for Forest Status Assessment
Authors: Mark Joseph Quinto, Roan Beronilla, Guiller Damian, Eliza Camaso, Ronaldo Alberto
Abstract:
Forest inventories are essential to assess the composition, structure and distribution of forest vegetation, which can be used as baseline information for management decisions. Classical forest inventory is labor intensive, time-consuming and sometimes even dangerous. The use of Light Detection and Ranging (LiDAR) in forest inventory would improve on and overcome these restrictions. This study was conducted to determine the possibility of using LiDAR-derived data for extracting high-accuracy forest biophysical parameters and as a non-destructive method for forest status analysis of San Manuel, Pangasinan. Forest resources extraction was carried out using LAS tools, GIS, ENVI and .bat scripts with the available LiDAR data. The process includes the generation of derivatives such as the Digital Terrain Model (DTM), Canopy Height Model (CHM) and Canopy Cover Model (CCM) in .bat scripts, followed by the generation of 17 composite bands to be used in the extraction of forest classification covers using ENVI 4.8 and GIS software. The Diameter at Breast Height (DBH), Above Ground Biomass (AGB) and Carbon Stock (CS) were estimated for each classified forest cover, and tree count extraction was carried out using GIS. Subsequently, field validation was conducted for accuracy assessment. Results showed that the forest of San Manuel has 73% forest cover, which is much higher than the 10% canopy cover requirement. For the extracted canopy height, 80% of the trees' heights range from 12 m to 17 m. The CS of the three forest covers based on the AGB were: 20819.59 kg/20x20 m for closed broadleaf, 8609.82 kg/20x20 m for broadleaf plantation and 15545.57 kg/20x20 m for open broadleaf. The average tree count for the forest plantation was 413 trees/ha. As such, the forest of San Manuel has a high percentage of forest cover and high CS.
Keywords: carbon stock, forest inventory, LiDAR, tree count
Procedia PDF Downloads 390
14246 A Distinct Method Based on Mamba-Unet for Brain Tumor Image Segmentation
Authors: Djallel Bouamama, Yasser R. Haddadi
Abstract:
Accurate brain tumor segmentation is crucial for diagnosis and treatment planning, yet it remains a challenging task due to the variability in tumor shapes and intensities. This paper introduces a distinct approach to brain tumor image segmentation by leveraging an advanced architecture known as Mamba-Unet. Building on the well-established U-Net framework, Mamba-Unet incorporates distinct design enhancements to improve segmentation performance. Our proposed method integrates a multi-scale attention mechanism and a hybrid loss function to effectively capture fine-grained details and contextual information in brain MRI scans. Using a comprehensive dataset of annotated brain MRI scans, we demonstrate that Mamba-Unet significantly enhances segmentation accuracy compared to conventional U-Net models. Quantitative evaluations reveal that Mamba-Unet surpasses traditional U-Net architectures and other contemporary segmentation models in terms of Dice coefficient, sensitivity, and specificity. The improvements are attributed to the method's ability to better manage class imbalance and resolve complex tumor boundaries. This work advances the state of the art in brain tumor segmentation and holds promise for improving clinical workflows and patient outcomes through more precise and reliable tumor detection.
Keywords: brain tumor classification, image segmentation, CNN, U-Net
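A short sketch of the Dice coefficient used in the evaluation and of a simple hybrid Dice plus cross-entropy loss of the kind the abstract mentions; the Mamba-Unet architecture itself is not reproduced.

```python
# Minimal sketch of the evaluation metric and a simple hybrid loss
# (Dice + binary cross-entropy) for binary segmentation masks.
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2*|A intersect B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def hybrid_loss(prob, target, alpha=0.5, eps=1e-7):
    """Weighted sum of soft-Dice loss and binary cross-entropy on probabilities."""
    inter = (prob * target).sum()
    dice_loss = 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)
    bce = -np.mean(target * np.log(prob + eps) + (1 - target) * np.log(1 - prob + eps))
    return alpha * dice_loss + (1 - alpha) * bce

pred = np.array([[0, 1, 1], [0, 1, 0]])
mask = np.array([[0, 1, 0], [0, 1, 0]])
print("Dice:", round(dice_coefficient(pred, mask), 3))
```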
Procedia PDF Downloads 39
14245 Using Analytic Hierarchy Process as a Decision-Making Tool in Project Portfolio Management
Authors: Darius Danesh, Michael J. Ryan, Alireza Abbasi
Abstract:
Project Portfolio Management (PPM) is an essential component of an organisation's strategic procedures and requires attention to several factors in order to envisage a range of long-term outcomes that support strategic project portfolio decisions. To evaluate overall efficiency at the portfolio level, it is essential to identify the functionality of specific projects as well as to aggregate those findings in a mathematically meaningful manner that indicates the strategic significance of the associated projects at a number of levels of abstraction. PPM success is directly associated with the quality of the decisions made, and poor judgment increases portfolio costs. Hence, various Multi-Criteria Decision Making (MCDM) techniques have been designed and employed to support the decision-making functions. This paper reviews possible options to improve decision-making outcomes in organisational portfolio management processes using the Analytic Hierarchy Process (AHP), from both academic and practical perspectives, and examines the usability, certainty and quality of the technique. The results of the study also provide insight into the technical risk associated with the current decision-making model to underpin initiative tracking and strategic portfolio management.
Keywords: analytic hierarchy process, decision support systems, multi-criteria decision making, project portfolio management
Procedia PDF Downloads 321
14244 A Model to Assess Sustainability Using Multi-Criteria Analysis and Geographic Information Systems: A Case Study
Authors: Antonio Boggia, Luisa Paolotti, Gianluca Massei, Lucia Rocchi, Elaine Pace, Maria Attard
Abstract:
The aim of this paper is to present a methodology and a computer model for sustainability assessment based on the integration of Multi-criteria Decision Analysis (MCDA) with a Geographic Information System (GIS). It presents the results of a study on the implementation of a model for measuring sustainability, aimed at addressing policy actions for the improvement of sustainability at the territorial level. The aim is to rank areas in order to understand the specific technical and/or financial support that is required to develop sustainable growth. Assessing sustainable development is a multidimensional problem: economic, social and environmental aspects have to be taken into account at the same time. The tool for a multidimensional representation is a proper set of indicators. The set of indicators must be integrated into a model, that is, an assessment methodology, to be used for measuring sustainability. The model, developed by the Environmental Laboratory of the University of Perugia, is called GeoUmbriaSUIT. It is a calculation procedure developed as a plugin working in the open-source GIS software QuantumGIS. The multi-criteria method used within GeoUmbriaSUIT is the TOPSIS algorithm (Technique for Order Preference by Similarity to Ideal Solution), which defines a ranking based on the distance from the worst point and the closeness to an ideal point, for each of the criteria used. For the sustainability assessment procedure, GeoUmbriaSUIT uses a geographic vector file where the graphic data represent the study area and the single evaluation units within it (the alternatives, e.g. the regions of a country, or the municipalities of a region), while the alphanumeric data (attribute table) describe the environmental, economic and social aspects related to the evaluation units by means of a set of indicators (criteria). The algorithm available in the plugin allows the indicators representing the three dimensions of sustainability to be treated individually and three different indices to be computed: an environmental index, an economic index and a social index. The graphic output of the model allows for an integrated assessment of the three dimensions, avoiding aggregation. The presence of separate indices and graphic output makes GeoUmbriaSUIT a readable and transparent tool, since it does not produce an aggregate index of sustainability as the final result of the calculations, which is often cryptic and difficult to interpret. In addition, it is possible to develop a "back analysis", able to explain the positions obtained by the alternatives in the ranking, based on the criteria used. The case study presented is an assessment of the level of sustainability in the six regions of Malta, an island state in the middle of the Mediterranean Sea and the southernmost member of the European Union. The results show that the integration of MCDA and GIS is an adequate approach for sustainability assessment. In particular, the implemented model is able to provide easy-to-understand results. This is a very important condition for a sound decision support tool, since most of the time decision makers are not experts and need understandable output. In addition, the evaluation path is traceable and transparent.
Keywords: GIS, multi-criteria analysis, sustainability assessment, sustainable development
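A minimal NumPy sketch of the TOPSIS ranking step described above, with a made-up decision matrix and weights.

```python
# Minimal TOPSIS sketch matching the description in the abstract: rank
# alternatives by closeness to the ideal and distance from the worst point.
# The decision matrix and weights are made-up illustrative values.
import numpy as np

X = np.array([[0.7, 0.5, 0.9],     # rows: evaluation units (e.g. regions)
              [0.4, 0.8, 0.6],     # cols: environmental, economic, social indicators
              [0.9, 0.3, 0.5]])
w = np.array([0.4, 0.3, 0.3])      # criteria weights
benefit = np.array([True, True, True])    # all criteria are "larger is better" here

R = X / np.sqrt((X ** 2).sum(axis=0))     # vector-normalised matrix
V = R * w                                 # weighted normalised matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
worst = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_plus = np.sqrt(((V - ideal) ** 2).sum(axis=1))
d_minus = np.sqrt(((V - worst) ** 2).sum(axis=1))
closeness = d_minus / (d_plus + d_minus)  # 1 = ideal point, 0 = worst point
print("closeness scores:", np.round(closeness, 3), " ranking:", np.argsort(-closeness) + 1)
```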
Procedia PDF Downloads 291
14243 Artificial Intelligence Based Comparative Analysis for Supplier Selection in Multi-Echelon Automotive Supply Chains via GEP and ANN Models
Authors: Seyed Esmail Seyedi Bariran, Laysheng Ewe, Amy Ling
Abstract:
Since supplier selection is a vital decision, selecting suppliers in the best and most accurate way is of great importance for enterprises. In this study, a new artificial intelligence approach is applied to address the weaknesses of supplier selection. The paper has three parts. The first is choosing the appropriate criteria for assessing the suppliers' performance. The next is collecting the data set based on expert input. Afterwards, the data set is divided into two parts, the training data set and the testing data set. The training data set is used to select the best structures of the GEP and ANN models, and the testing data set is used to evaluate the power of these methods. The results obtained show that the accuracy of GEP is higher than that of ANN. Moreover, unlike ANN, a mathematical equation is provided by GEP for the supplier selection.
Keywords: supplier selection, automotive supply chains, ANN, GEP
Procedia PDF Downloads 632
14242 Students' Perception of Using Dental E-Models in an Inquiry-Based Curriculum
Authors: Yanqi Yang, Chongshan Liao, Cheuk Hin Ho, Susan Bridges
Abstract:
Aim: To investigate students' perceptions of using e-models in an inquiry-based curriculum. Approach: 52 second-year dental students completed a pre- and post-test questionnaire relating to their perceptions of e-models and their use in inquiry-based learning. The pre-test occurred prior to any learning with e-models. The follow-up survey was conducted after one year's experience of using e-models. Results: There was no significant difference between the two sets of questionnaires regarding students' perceptions of the usefulness of e-models and their willingness to use e-models in future inquiry-based learning. Most of the students preferred using both plaster models and e-models in tandem. Conclusion: Students did not change their attitude towards e-models, and most of them agreed or were neutral that e-models are useful in inquiry-based learning. Whilst recognizing the utility of 3D models for learning, students' preference for combining these with solid models has implications for the development of haptic sensibility in an operative discipline.
Keywords: e-models, inquiry-based curriculum, education, questionnaire
Procedia PDF Downloads 433
14241 Track Initiation Method Based on Multi-Algorithm Fusion Learning of 1DCNN and Bi-LSTM
Abstract:
Aiming at the problem of high-density clutter and interference affecting radar target track initiation in ECM and complex radar missions, traditional radar track initiation methods have difficulty adapting. To this end, we propose a multi-algorithm fusion learning track initiation algorithm, which transforms the track initiation problem into a true/false track discrimination problem and designs an algorithm based on 1DCNN (One-Dimensional CNN) combined with Bi-LSTM (Bi-Directional Long Short-Term Memory) for fusion classification. The experimental dataset consists of real trajectories obtained from measurements of a certain type of three-coordinate radar, and the experiments are compared with traditional track initiation methods such as the rule-based, logic-based and Hough-transform-based methods. The simulation results show that the overall performance of the multi-algorithm fusion learning track initiation algorithm is significantly better than that of the traditional methods, and the true track initiation rate can be effectively improved under high clutter density, with an average initiation time similar to that of the logic-based method.
Keywords: track initiation, multi-algorithm fusion, 1DCNN, Bi-LSTM
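A hedged Keras sketch of the fusion classifier described here: 1-D convolutions over a candidate-track feature sequence followed by a bidirectional LSTM and a true/false-track output; the sequence length and feature set are assumptions, not the paper's configuration.

```python
# Hypothetical Keras sketch: 1DCNN feature extraction over a candidate-track
# sequence, fused with a Bi-LSTM, ending in a true/false-track output.
import tensorflow as tf
from tensorflow.keras import layers, models

seq_len, n_features = 8, 4            # e.g. 8 scans x (range, azimuth, elevation, amplitude)

model = models.Sequential([
    layers.Input(shape=(seq_len, n_features)),
    layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
    layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),
    layers.Bidirectional(layers.LSTM(32)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # P(true track)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```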
Procedia PDF Downloads 95
14240 Taxonomic Classification for Living Organisms Using Convolutional Neural Networks
Authors: Saed Khawaldeh, Mohamed Elsharnouby, Alaa Eddin Alchalabi, Usama Pervaiz, Tajwar Aleef, Vu Hoang Minh
Abstract:
Taxonomic classification has a wide range of applications, such as finding out more about the evolutionary history of organisms, which can be done by making a comparison between species living now and species that lived in the past. This comparison can be made using different kinds of extracted species data, including DNA sequences. Compared to the estimated number of organisms that nature harbours, humanity does not have a thorough comprehension of which specific species they all belong to, in spite of the significant development of science and scientific knowledge over many years. One of the methods that can be applied to extract information from the study of organisms in this regard is to use the DNA sequence of a living organism as a marker, thus making it possible to classify it into a taxonomy. The classification of living organisms can be done with many machine learning techniques, including Neural Networks (NNs). In this study, DNA sequence classification is performed using Convolutional Neural Networks (CNNs), which are a special type of NN.
Keywords: deep networks, convolutional neural networks, taxonomic classification, DNA sequences classification
Procedia PDF Downloads 443
14239 A Fuzzy Hybrid Decision Support System for Naval Base Place Selection in a Foreign Country
Authors: Latif Yanar, Muharrem Kaçan
Abstract:
In this study, an Analytic Hierarchy Process and Analytic Network Process Decision Support System (DSS) model for the determination of a naval base location in another country is proposed, together with decision support software (DESTEC 1.0) developed using the C Sharp programming language. The proposed software also has the ability to run the fuzzy versions (Fuzzy AHP and Fuzzy ANP) of the proposed DSS to cope with the ambiguous and linguistic nature of the model. The AHP and ANP model for selecting the best place among the alternatives, including the criteria and alternatives, is developed and solved by experts from the Turkish Navy and Turkish academics from international relations departments of universities in Turkey. The questionnaires used for weighting the criteria and the alternatives are also filled in by these experts. Some of our criteria are: the economic and political stability of the third country, the influence of another superpower in that country, historical relations, security in that country, social facilities in the city in which the base will be built, and the security and difficulty of transportation from a major city with an airport to the city where the base will be built. Over 20 criteria like these are determined and categorized into social, political, economic and military aspects. All the criteria and the three alternatives are then evaluated by different people with the background and experience to weight the criteria and alternatives, as required in the AHP and ANP evaluation system. The alternatives receive scores between 0 and 1, with the total equal to 1. In the end, the DSS advises one of the alternatives to the decision maker as the best one, according to the developed model and the evaluations of the experts.
Keywords: analytic hierarchy process, analytic network process, fuzzy logic, naval base place selection, multiple criteria decision making
Procedia PDF Downloads 391
14238 High Resolution Satellite Imagery and Lidar Data for Object-Based Tree Species Classification in Quebec, Canada
Authors: Bilel Chalghaf, Mathieu Varin
Abstract:
Forest characterization in Quebec, Canada, is usually assessed based on photo-interpretation at the stand level. For species identification, this often results in a lack of precision. Very high spatial resolution imagery, such as DigitalGlobe, and Light Detection and Ranging (LiDAR) have the potential to overcome the limitations of aerial imagery. To date, few studies have used such data to map a large number of species at the tree level using machine learning techniques. The main objective of this study is to map 11 individual tall tree species (> 17 m) at the tree level using an object-based approach in the broadleaf forest of Kenauk Nature, Quebec. For individual tree crown segmentation, three canopy-height models (CHMs) from LiDAR data were assessed: 1) the original, 2) a filtered, and 3) a corrected model. The corrected CHM gave the best accuracy and was then coupled with imagery to refine tree species crown identification. When compared with photo-interpretation, 90% of the objects represented a single species. For modeling, 313 variables were derived from 16-band WorldView-3 imagery and LiDAR data, using radiance, reflectance, pixel, and object-based calculation techniques. Variable selection procedures were employed to reduce their number from 313 to 16, using only 11 bands to aid reproducibility. For classification, a global approach using all 11 species was compared to a semi-hierarchical hybrid classification approach at two levels: (1) tree type (broadleaf/conifer) and (2) individual broadleaf (five) and conifer (six) species. Five different model techniques were used: (1) support vector machine (SVM), (2) classification and regression tree (CART), (3) random forest (RF), (4) k-nearest neighbors (k-NN), and (5) linear discriminant analysis (LDA). Each model was tuned separately for all approaches and levels. For the global approach, the best model was the SVM using eight variables (overall accuracy (OA): 80%, Kappa: 0.77). With the semi-hierarchical hybrid approach, at the tree type level, the best model was the k-NN using six variables (OA: 100% and Kappa: 1.00). At the level of identifying broadleaf and conifer species, the best model was the SVM, with OA of 80% and 97% and Kappa values of 0.74 and 0.97, respectively, using seven variables for both models. This paper demonstrates that a hybrid classification approach gives better results and that using 16-band WorldView-3 with LiDAR data leads to more precise predictions for tree segmentation and classification, especially when the number of tree species is large.
Keywords: tree species, object-based, classification, multispectral, machine learning, WorldView-3, LiDAR
Procedia PDF Downloads 136