Search results for: estimation after selection
3284 Modeling Default Probabilities of the Chosen Czech Banks in the Time of the Financial Crisis
Authors: Petr Gurný
Abstract:
One of the most important tasks in risk management is the correct determination of the probability of default (PD) of particular financial subjects. In this paper, the possibility of determining a financial institution's PD with credit-scoring models is discussed. The paper is divided into two parts. The first part is devoted to the estimation of three different models (based on linear discriminant analysis, logit regression and probit regression) from a sample of almost three hundred US commercial banks. Afterwards, these models are compared and verified on a control sample in order to choose the best one. The second part of the paper applies the chosen model to a portfolio of three key Czech banks to estimate their present financial stability. However, it is no less important to be able to estimate the evolution of PD in the future. For this reason, the second task in this paper is to estimate the probability distribution of the future PD for the Czech banks. Values of the particular indicators are therefore sampled randomly and the distribution of the PDs is estimated, under the assumption that the indicators are distributed according to a multidimensional subordinated Lévy model (the Variance Gamma model and the Normal Inverse Gaussian model, in particular). Although the obtained results show that all banks are relatively healthy, there is still a high chance that “a financial crisis” will occur, at least in terms of probability. This is indicated by estimates of various quantiles of the estimated distributions. Finally, it should be noted that the applicability of the estimated model (with respect to the data used) is limited to the recessionary phase of the financial market.
Keywords: credit-scoring models, multidimensional subordinated Lévy model, probability of default
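The logit form of a credit-scoring model maps a bank's financial indicators to a PD through the logistic function. A minimal sketch follows; the indicator names and coefficient values are illustrative assumptions, not the estimates from the paper:

```python
import math

# Hypothetical logit credit-scoring model: PD = 1 / (1 + exp(-score)).
# Coefficients below are illustrative only, not the paper's estimated model.
coefficients = {"intercept": -2.0, "capital_ratio": -8.0, "npl_ratio": 12.0}

def probability_of_default(capital_ratio, npl_ratio):
    """Map a bank's financial indicators to a probability of default via the logit link."""
    score = (coefficients["intercept"]
             + coefficients["capital_ratio"] * capital_ratio
             + coefficients["npl_ratio"] * npl_ratio)
    return 1.0 / (1.0 + math.exp(-score))

# A well-capitalised bank with few non-performing loans gets a lower PD.
pd_healthy = probability_of_default(capital_ratio=0.15, npl_ratio=0.02)
pd_weak = probability_of_default(capital_ratio=0.05, npl_ratio=0.20)
```

Estimating the PD distribution then amounts to sampling the indicators (e.g. from the fitted Lévy model) and pushing each draw through this function.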
Procedia PDF Downloads 456
3283 Timescape-Based Panoramic View for Historic Landmarks
Authors: H. Ali, A. Whitehead
Abstract:
Providing a panoramic view of famous landmarks around the world offers artistic and historic value for historians, tourists, and researchers. Exploring the history of famous landmarks by presenting a comprehensive view of a temporal panorama, merged with geographical and historical information, presents the unique challenge of dealing with images that span a long period, from the 1800s up to the present. This work presents the concept of a temporal panorama through a timeline display of aligned historic and modern images for many famous landmarks. Building this panorama requires a collection of hundreds of thousands of landmark images from the Internet, comprising both historic images and modern images of the digital age. These images have to be classified, by subset selection, to keep the images most suitable for chronologically documenting a landmark's history. Processing historic images, captured with older analog technology under various capturing conditions, represents a big challenge when they have to be combined with modern digital images. Successful processing of historic images, to prepare them for the subsequent steps of temporal panorama creation, represents an active contribution to cultural heritage preservation and fulfils one of UNESCO's goals of preserving and displaying famous worldwide landmarks.
Keywords: cultural heritage, image registration, image subset selection, registered image similarity, temporal panorama, timescapes
Procedia PDF Downloads 165
3282 The Hotel Lodging Behavior and Factors of Tourists in Bangkontee District, Samut Songkhram Province, Thailand
Authors: Aticha Kwaengsopha
Abstract:
The purpose of this research was to study the behaviour, and related factors, that tourists rely on when choosing their accommodation at a tourist destination: Bangkontee district, Samut Songkhram Province, Thailand. The independent variables included gender, age, income, occupation, and region, while the three important dependent variables included selection behaviour, factors related to the selection process, and satisfaction with the accommodation service. A total of 400 Thai and international tourists were interviewed at the tourist destination of Bangkontee. A questionnaire was used as the tool for collecting data. Descriptive statistics in this research included percentage, mean, and standard deviation. The findings revealed that the majority of respondents were single, female, and between 23-30 years old. Most of the international tourists were from Asia and planned to stay in Thailand for about 1-6 days. In addition, the majority of tourists preferred to travel in small groups of 3 persons. The majority of respondents used the internet and word of mouth as the main tools to search for information. The majority of respondents spent most of their budget on food and drink, accommodation, and travelling. Even though the majority of tourists were satisfied with the quality, price range, image, and facilities of the accommodation, they indicated that they were not likely to re-visit Thailand in the near future.
Keywords: behaviour, decision factors, tourists, media engineering
Procedia PDF Downloads 275
3281 Characterization of Complex Gold Ores for Preliminary Process Selection: The Case of Kapanda, Ibindi, Mawemeru, and Itumbi in Tanzania
Authors: Sospeter P. Maganga, Alphonce Wikedzi, Mussa D. Budeba, Samwel V. Manyele
Abstract:
This study characterizes complex gold ores (elemental and mineralogical composition, gold distribution, ore grindability, and mineral liberation) for preliminary process selection. About 200 kg of ore samples were collected from each location using systematic sampling by mass interval. The ores were dried, crushed, milled, and split into representative sub-samples (about 1 kg) for elemental and mineralogical composition analyses using X-ray fluorescence (XRF), fire assay finished with an Atomic Absorption Spectrometer (AAS), and X-ray Diffraction (XRD) methods, respectively. The gold distribution was studied on size-by-size fractions, while ore grindability was determined using the standard Bond test. The mineral liberation analysis was conducted using a ThermoFisher Scientific Mineral Liberation Analyzer (MLA) 650, where unsieved polished grain mounts (80% passing 700 µm) were used as MLA feed. Two MLA measurement modes, X-ray modal analysis (XMOD) and sparse phase liberation-grain X-ray mapping analysis (SPL-GXMAP), were employed. At least two cyanide consumers (Cu, Fe, Pb, and Zn) and kinetics impeders (Mn, S, As, and Bi) were present in all locations investigated. The copper content at Kapanda (0.77% Cu) and Ibindi (7.48% Cu) exceeded the recommended threshold of 0.5% Cu for direct cyanidation. The gold ore at Ibindi ground at a higher rate than the ores from the other locations, which could be explained by its having the highest grindability (2.119 g/rev.) and the lowest Bond work index (10.213 kWh/t). Pyrite-marcasite, chalcopyrite, galena, and siderite were identified as the major gold-, copper-, lead-, and iron-bearing minerals, respectively, with potential for economic extraction. However, only gold and copper can be recovered under conventional milling because of grain size issues (galena is exposed by only 10%) and process complexity (it is difficult to concentrate and smelt iron from siderite).
Therefore, the preliminary process selection is copper flotation followed by gold cyanidation for the Kapanda and Ibindi ores, whereas gold cyanidation with additives such as glycine or ammonia is selected for the Mawemeru and Itumbi ores because of their low concentrations of Cu, Pb, Fe, and Zn minerals.
Keywords: complex gold ores, mineral liberation, ore characterization, ore grindability
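The Bond work index reported above (10.213 kWh/t for Ibindi) feeds directly into Bond's law, which estimates the specific grinding energy needed to reduce an ore from feed size F80 to product size P80. A sketch of that calculation; the feed and product sizes here are illustrative assumptions, not values from the study:

```python
import math

def bond_energy_kwh_per_t(work_index, f80_um, p80_um):
    """Bond's law: specific grinding energy (kWh/t) to reduce ore from F80 to P80 (micrometres)."""
    return 10.0 * work_index * (1.0 / math.sqrt(p80_um) - 1.0 / math.sqrt(f80_um))

# Ibindi ore: Bond work index 10.213 kWh/t (from the abstract).
# F80 = 700 um and P80 = 75 um are assumed sizes for illustration only.
energy = bond_energy_kwh_per_t(10.213, f80_um=700, p80_um=75)
```

A lower work index, as at Ibindi, translates directly into lower milling energy for the same size reduction.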
Procedia PDF Downloads 73
3280 Full-Field Estimation of Cyclic Threshold Shear Strain
Authors: E. E. S. Uy, T. Noda, K. Nakai, J. R. Dungca
Abstract:
Cyclic threshold shear strain is the cyclic shear strain amplitude that serves as the indicator of the development of pore water pressure. The parameter can be obtained by performing a cyclic triaxial test, a shaking table test, a cyclic simple shear test, or a resonant column test. In a cyclic triaxial test, other researchers install measuring devices in close proximity to the soil to measure the parameter. In this study, an attempt was made to estimate the cyclic threshold shear strain parameter using a full-field measurement technique. The technique uses a camera to monitor and measure the movement of the soil. For this study, the technique was incorporated in a strain-controlled consolidated undrained cyclic triaxial test. Calibration of the camera was first performed to ensure that the camera can properly measure the deformation under cyclic loading. Its capacity to measure deformation was also investigated using a cylindrical rubber dummy. Two-dimensional image processing was implemented, and the Lucas-Kanade optical flow algorithm was applied to track the movement of the soil particles. Results from the full-field measurement technique were compared with the results from a linear variable displacement transducer. A range of values was determined from the estimation, owing to the nonhomogeneous deformation of the soil observed during the cyclic loading. The minimum values were in the order of 10^-2% in some areas of the specimen.
Keywords: cyclic loading, cyclic threshold shear strain, full-field measurement, optical flow
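Once the camera has tracked points on the specimen, shear strain can be derived from their displacements. For saturated undrained triaxial conditions, the shear strain is commonly taken as γ = (1 + ν)εₐ with Poisson's ratio ν = 0.5, i.e. γ = 1.5 εₐ. A minimal sketch with synthetic tracked positions (not measurements from the study):

```python
# Estimate cyclic shear strain from the vertical positions of two tracked points on
# the specimen surface, as a full-field camera technique would provide.
# Positions below are synthetic illustration data.
gauge_top0, gauge_bot0 = 100.0, 20.0     # initial vertical positions (mm)
gauge_top1, gauge_bot1 = 99.995, 20.0    # positions at peak cyclic load (mm)

initial_length = gauge_top0 - gauge_bot0
axial_strain = (initial_length - (gauge_top1 - gauge_bot1)) / initial_length

# Saturated undrained loading (nu = 0.5): gamma = (1 + nu) * eps_a = 1.5 * eps_a
shear_strain_percent = 1.5 * axial_strain * 100.0
```

With these numbers the result lands in the 10^-2% range reported as the minimum threshold values in the abstract.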
Procedia PDF Downloads 235
3279 Differences in Factors of Distributor Selection for Food and Non-Food OTOP Entrepreneurs in Thailand
Authors: Phutthiwat Waiyawuththanapoom
Abstract:
This study has only one objective, which is to identify the differences in the factors used in choosing a distributor between food and non-food OTOP entrepreneurs in Thailand. In this research, OTOP products are divided into two groups: food and non-food. The sample for the food-type OTOP product was processed fruit and vegetables from Nakorn Pathom province, and the sample for the non-food-type OTOP product was the court doll from Ang Thong province. The research was divided into 3 parts: a study of the distribution pattern and distributor choice for the food-type OTOP product, a study of the distribution pattern and distributor choice for the non-food-type OTOP product, and a comparison between the 2 types of products to find the differences in the factors of choosing a distributor. The data and information were collected by interviews. The populations in the research were 5 producers of processed fruit and vegetables from Nakorn Pathom province and 5 producers of the court doll from Ang Thong province. The significant factors in choosing the distributor of the food-type OTOP product are material handling efficiency and on-time delivery, while for the non-food-type OTOP product the focus is on the channel of distribution and the cost of the distributor.
Keywords: distributor, OTOP, food and non-food, selection
Procedia PDF Downloads 355
3278 The Non-Stationary BINARMA(1,1) Process with Poisson Innovations: An Application on Accident Data
Authors: Y. Sunecher, N. Mamode Khan, V. Jowaheer
Abstract:
This paper considers the modelling of a non-stationary bivariate integer-valued autoregressive moving average process of order one (BINARMA(1,1)) with correlated Poisson innovations. The BINARMA(1,1) model is specified using the binomial thinning operator and by assuming that the cross-correlation between the two series is induced by the innovation terms only. Based on these assumptions, the non-stationary marginal and joint moments of the BINARMA(1,1) are derived iteratively by using some initial stationary moments. As regards the estimation of the parameters of the proposed model, the conditional maximum likelihood (CML) estimation method is derived based on thinning and convolution properties. The forecasting equations of the BINARMA(1,1) model are also derived. A simulation study is also proposed, where BINARMA(1,1) count data are generated using a multivariate Poisson R code for the innovation terms. The performance of the BINARMA(1,1) model is then assessed through a simulation experiment, and the mean estimates of the model parameters obtained are all efficient, based on their standard errors. The proposed model is then used to analyse real-life accident data from the motorway in Mauritius, based on some covariates: policemen, daily patrol, speed cameras, traffic lights and roundabouts. The BINARMA(1,1) model is applied to the accident data, and the CML estimates clearly indicate a significant impact of the covariates on the number of accidents on the motorway in Mauritius. The forecasting equations also provide reliable one-step-ahead forecasts.
Keywords: non-stationary, BINARMA(1,1) model, Poisson innovations, conditional maximum likelihood, CML
Procedia PDF Downloads 129
3277 River Stage-Discharge Forecasting Based on Multiple-Gauge Strategy Using EEMD-DWT-LSSVM Approach
Authors: Farhad Alizadeh, Alireza Faregh Gharamaleki, Mojtaba Jalilzadeh, Houshang Gholami, Ali Akhoundzadeh
Abstract:
This study presented a hybrid pre-processing approach along with a conceptual model to enhance the accuracy of river discharge prediction. In order to achieve this goal, the Ensemble Empirical Mode Decomposition algorithm (EEMD), the Discrete Wavelet Transform (DWT) and Mutual Information (MI) were employed as a hybrid pre-processing approach conjugated to a Least Square Support Vector Machine (LSSVM). A conceptual strategy, namely the multi-station model, was developed to forecast the Souris River discharge more accurately. The strategy used herein was capable of covering the uncertainties and complexities of river discharge modeling. DWT and EEMD were coupled, and feature selection was performed on the decomposed sub-series using MI before they were employed in the multi-station model. In the proposed feature selection method, some useless sub-series were omitted to achieve better performance. Results confirmed the efficiency of the proposed DWT-EEMD-MI approach in improving the accuracy of multi-station modeling strategies.
Keywords: river stage-discharge process, LSSVM, discrete wavelet transform, ensemble empirical mode decomposition, multi-station modeling
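The MI-based feature selection step scores each decomposed sub-series by how much information it shares with the target discharge series and discards low-scoring ones. A minimal sketch of empirical mutual information on discretized series; the toy data are illustrative, not the Souris River records:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (bits) between two discrete series of equal length."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), count in pxy.items():
        p_joint = count / n
        mi += p_joint * math.log2(p_joint * n * n / (px[x] * py[y]))
    return mi

# Toy illustration: a sub-series identical to the target carries maximal information,
# while a constant (useless) sub-series carries none and would be omitted.
target = [0, 0, 1, 1]
mi_informative = mutual_information(target, target)    # equals H(target) = 1 bit
mi_useless = mutual_information(target, [1, 1, 1, 1])  # 0 bits
```

In the multi-station model, only sub-series whose MI with the target exceeds a chosen threshold are passed on to the LSSVM.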
Procedia PDF Downloads 175
3276 Deliberation of Daily Evapotranspiration and Evaporative Fraction Based on Remote Sensing Data
Authors: J. Bahrawi, M. Elhag
Abstract:
Estimation of evapotranspiration is always a major component in water resources management. Traditional techniques for calculating daily evapotranspiration based on field measurements are valid only at local scales. Earth observation satellite sensors are thus used to overcome the difficulties of obtaining daily evapotranspiration measurements at regional scale. The Surface Energy Balance System (SEBS) model was adopted to estimate daily evapotranspiration and relative evaporation along with other land surface energy fluxes. The model requires agro-climatic data that improve the model outputs. Advanced Along-Track Scanning Radiometer (AATSR) and Medium Resolution Imaging Spectrometer (MERIS) imagery was used to estimate the daily evapotranspiration and relative evaporation over the entire Nile Delta region in Egypt, supported by meteorological data collected from six different weather stations located within the study area. Daily evapotranspiration maps derived from the SEBS model show a strong agreement with actual ground-truth data taken from 92 points uniformly distributed all over the study area. Moreover, daily evapotranspiration and relative evaporation are strongly correlated. The reliable estimation of daily evapotranspiration supports the decision makers in reviewing the current land use practices in terms of water management, while enabling them to propose proper land use changes.
Keywords: daily evapotranspiration, relative evaporation, SEBS, AATSR, MERIS, Nile Delta
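SEBS-type approaches partition the available surface energy through the evaporative fraction Λ = LE / (LE + H), and daily ET is obtained by scaling the (near-constant) daytime Λ by the daily available energy. A sketch of that arithmetic; the flux values are illustrative assumptions, not the Nile Delta retrievals:

```python
# Evaporative fraction and daily ET, as in energy-balance partitioning.
# Flux values below are illustrative, not the study's retrievals.
LATENT_HEAT_VAPORIZATION = 2.45e6   # J/kg, latent heat of vaporization of water
SECONDS_PER_DAY = 86400.0

def evaporative_fraction(latent_heat_flux, sensible_heat_flux):
    """Lambda = LE / (LE + H): share of available energy used for evaporation."""
    return latent_heat_flux / (latent_heat_flux + sensible_heat_flux)

def daily_et_mm(ef, daily_available_energy_wm2):
    """Daily evapotranspiration in mm/day; 1 kg of water per m^2 equals 1 mm."""
    return ef * daily_available_energy_wm2 * SECONDS_PER_DAY / LATENT_HEAT_VAPORIZATION

ef = evaporative_fraction(latent_heat_flux=280.0, sensible_heat_flux=120.0)
et = daily_et_mm(ef, daily_available_energy_wm2=150.0)   # Rn - G, daily average W/m^2
```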
Procedia PDF Downloads 259
3275 Linear Regression Estimation of Tactile Comfort for Denim Fabrics Based on In-Plane Shear Behavior
Authors: Nazli Uren, Ayse Okur
Abstract:
The tactile comfort of a textile product is an essential property and a major concern when it comes to customer perceptions and preferences. The subjective nature of comfort and the difficulties of simulating human hand sensory feelings make it hard to establish a well-accepted link between tactile comfort and objective evaluations. On the other hand, the shear behavior of a fabric is a mechanical parameter which can be measured by various objective test methods. The principal aim of this study is to determine the tactile comfort of commercially available denim fabrics by subjective measurements, create a tactile score database for denim fabrics and investigate the relations between tactile comfort and shear behavior. The in-plane shear behaviors of 17 different commercially available denim fabrics, with a variety of raw materials and weave structures, were measured with a custom-designed shear frame and the conventional bias extension method in the two corresponding diagonal directions. The tactile comfort of the denim fabrics was determined via subjective customer evaluations as well. The aforesaid relations were statistically investigated and introduced as regression equations. The analyses of the relations between tactile comfort and shear behavior showed considerably high correlation coefficients, and the suggested regression equations were likewise found to be statistically significant. Accordingly, it was concluded that the tactile comfort of denim fabrics can be estimated with high precision, based on the results of in-plane shear behavior measurements.
Keywords: denim fabrics, in-plane shear behavior, linear regression estimation, tactile comfort
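The regression equations referred to above are ordinary least-squares fits of the subjective comfort scores against the shear measurements. A minimal sketch; the data points are synthetic illustrations, not the 17 measured fabrics:

```python
import math

# Ordinary least-squares fit of a tactile comfort score against an in-plane shear
# measurement. Data points below are synthetic, for illustration only.
shear = [2.1, 3.0, 3.8, 4.5, 5.2]      # in-plane shear rigidity (arbitrary units)
comfort = [8.0, 6.9, 6.1, 5.0, 4.2]    # subjective tactile comfort score

n = len(shear)
mean_x = sum(shear) / n
mean_y = sum(comfort) / n
sxx = sum((x - mean_x) ** 2 for x in shear)
syy = sum((y - mean_y) ** 2 for y in comfort)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(shear, comfort))

slope = sxy / sxx                      # regression coefficient
intercept = mean_y - slope * mean_x
r = sxy / math.sqrt(sxx * syy)         # correlation coefficient
```

A correlation coefficient close to -1 or +1, as reported in the study, indicates that the linear estimate of comfort from shear behavior is precise.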
Procedia PDF Downloads 302
3274 Fuzzy Decision Making in Construction Project Management: Glass Facade Selection
Authors: Katarina Rogulj, Ivana Racetin, Jelena Kilic
Abstract:
In this study, a fuzzy logic approach (FLA) was developed for construction project management (CPM) under uncertainty and duality. The focus was on decision making in selecting the type of glass facade for a residential-commercial building in the main design. The adoption of fuzzy sets was capable of reflecting construction managers' reliability level over subjective judgments, and thus the robustness of the system can be achieved. An α-cuts method was utilized for discretizing the fuzzy sets in the FLA. This method can communicate all uncertain information in the optimization process, taking into account the values of this information. Furthermore, the FLA provides in-depth analyses of diverse policy scenarios that are related to various levels of economic aspects when it comes to valid decision making on construction projects. The developed approach is applied to CPM to demonstrate its applicability. By analyzing the materials of glass facades, variants were defined. The development of the FLA for CPM included the relevant construction project stakeholders, who were involved in the criteria definition to evaluate each variant. Using the fuzzy Decision-Making Trial and Evaluation Laboratory (DEMATEL) method, a comparison of the glass facade variants was conducted. In this way, a ranking of the variants according to their priorities for inclusion in the main design is obtained. The concept was tested on a residential-commercial building in the city of Rijeka, Croatia. The newly developed methodology was then compared with the existing one. The aim of the research was to define an approach that will improve current judgments and decisions when it comes to the material selection of a building's facade, one of the most important architectural and engineering tasks in the main design. The advantage of the new methodology compared to the old one is that it includes the subjective side of the managers' decisions, an inevitable factor in every decision-making process.
The proposed approach can help construction project managers to identify the desired type of glass facade according to their preferences and practical conditions, as well as facilitate in-depth analyses of the tradeoffs between economic efficiency and architectural design.
Keywords: construction project management, DEMATEL, fuzzy logic approach, glass façade selection
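The α-cuts method mentioned above discretizes a fuzzy set into crisp intervals, one per membership level α. For the common triangular fuzzy number (a, b, c), the α-cut is the interval [a + α(b − a), c − α(c − b)]. A sketch; the judgment values are illustrative, not the study's stakeholder ratings:

```python
# Alpha-cut of a triangular fuzzy number (a, b, c), as used to discretize fuzzy
# sets in a fuzzy logic approach. The judgment (3, 5, 7) is illustrative only.
def alpha_cut(tri, alpha):
    """Return the crisp interval (lower, upper) of a triangular fuzzy number at level alpha."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

judgment = (3.0, 5.0, 7.0)   # a manager's fuzzy rating of one facade variant

full_support = alpha_cut(judgment, 0.0)   # all plausible values
half_cut = alpha_cut(judgment, 0.5)       # values with membership >= 0.5
core = alpha_cut(judgment, 1.0)           # the crisp peak of the judgment
```

Sweeping α from 0 to 1 carries the full uncertainty of each subjective judgment through the DEMATEL comparison instead of collapsing it to a single number up front.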
Procedia PDF Downloads 137
3273 Correlation Analysis between the Corporate Governance and Financial Performance of Banking Sectors Using Parameter Estimation
Authors: Vishwa Nath Maurya, Rama Shanker Sharma, Saad Talib Hasson Aljebori, Avadhesh Kumar Maurya, Diwinder Kaur Arora
Abstract:
The present paper deals with the problem of determining the relationship between corporate governance variables and the financial performance of Islamic banks. We deal with corporate governance in the banking sector, where its importance is increasing due to the banks' special nature: the bankruptcy of a bank affects not only the relevant parties (customers, depositors and lenders) but also financial stability and then the economy as a whole. We then turn to the specificity of governance in Islamic banks, which face a double governance system: the Anglo-Saxon governance system and the Islamic governance system. In addition, we focus our attention on measuring the impact of corporate governance variables on financial performance through an empirical study of a sample of Islamic banks in the GCC region during the period 2005-2012. Our present study implies that there is a very strong relationship between the governance variables and the financial performance of Islamic banks: there is a positive relationship between return on assets and the composition of the Board of Directors, the size of the Board of Directors, the number of committees in the Board, and the number of members of the Sharia Supervisory Board, while there is a negative relationship between return on assets and ownership concentration.
Keywords: correlation analysis, parametric estimation, corporate governance, financial performance, financial stability, conventional banks, bankruptcy, Islamic governance system
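The relationships described above are assessed with correlation coefficients between each governance variable and a performance measure such as return on assets. A minimal sketch of the Pearson correlation; the panel values are hypothetical illustrations, not the study's GCC sample:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative panel: return on assets (%) vs. board size for five hypothetical banks.
roa = [1.2, 1.5, 1.9, 2.1, 2.4]
board_size = [7, 8, 9, 11, 12]
r = pearson_r(roa, board_size)   # positive r indicates a positive relationship
```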
Procedia PDF Downloads 516
3272 A Hybrid Genetic Algorithm and Neural Network for Wind Profile Estimation
Authors: M. Saiful Islam, M. Mohandes, S. Rehman, S. Badran
Abstract:
The increasing necessity of wind power directs us to acquire precise knowledge of wind resources. Methodical investigation of potential locations is required for wind power deployment. High penetration of wind energy into the grid is leading to multi-megawatt installations with huge investment costs. This fact calls for determining appropriate places for wind farm operation. For accurate assessment, detailed examination of the wind speed profile, relative humidity, temperature and other geological or atmospheric parameters is required. Among all of the uncertainty factors influencing wind power estimation, vertical extrapolation of wind speed is perhaps the most difficult and critical one. Different approaches have been used for the extrapolation of wind speed to hub height, mainly based on the log law, the power law and various modifications of the two. This paper proposes an Artificial Neural Network (ANN) and Genetic Algorithm (GA) based hybrid model, namely GA-NN, for vertical extrapolation of wind speed. This model is very simple in the sense that it does not require any parametric estimations, such as the wind shear coefficient, roughness length or atmospheric stability, and is also reliable compared to other methods. The model uses available measured wind speeds at 10 m, 20 m and 30 m heights to estimate wind speeds up to 100 m. A good comparison is found between measured and estimated wind speeds at 30 m and 40 m, with approximately 3% mean absolute percentage error. Comparisons with an ANN and the power law further prove the feasibility of the proposed method.
Keywords: wind profile, vertical extrapolation of wind, genetic algorithm, artificial neural network, hybrid machine learning
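For contrast with the GA-NN model, the conventional power-law benchmark it is compared against fits a shear exponent α from two measured heights and extrapolates with v(z) = v_ref (z/z_ref)^α. A sketch; the wind speeds are illustrative, not the paper's site measurements:

```python
import math

# Power-law extrapolation benchmark: v(z) = v_ref * (z / z_ref) ** alpha.
# Measured speeds below are illustrative assumptions.
v10, v30 = 5.0, 6.2   # measured wind speeds (m/s) at 10 m and 30 m

# Shear exponent fitted from the two measured heights.
alpha = math.log(v30 / v10) / math.log(30.0 / 10.0)

def extrapolate(v_ref, z_ref, z):
    """Extrapolate wind speed from reference height z_ref to height z (metres)."""
    return v_ref * (z / z_ref) ** alpha

v100 = extrapolate(v30, 30.0, 100.0)   # hub-height estimate at 100 m
```

The GA-NN approach sidesteps exactly this step: it learns the height-speed mapping from the 10/20/30 m data without assuming a shear exponent.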
Procedia PDF Downloads 490
3271 Georgia Case: Tourism Expenses of International Visitors on the Basis of Growing Attractiveness
Authors: Nino Abesadze, Marine Mindorashvili, Nino Paresashvili
Abstract:
At present, actual tourism indicators cannot be calculated in Georgia, making it impossible to perform their quantitative analysis. Therefore, the study conducted by us is highly important from a theoretical as well as a practical standpoint. The main purpose of the article is to perform a complex statistical analysis of the tourist expenses of foreign visitors and to calculate statistical attractiveness indices of the tourism potential of Georgia. During the research, a method involving random and proportional selection was applied. The computer software SPSS was used to compute the statistical data for the corresponding analysis. The corresponding methodology of tourism statistics was implemented according to international standards. Important information was collected and grouped from the major Georgian airports, and a representative population of foreign visitors and a rule for the selection of respondents were determined. The results show a trend of growth in tourist numbers, and the share of tourists from post-Soviet countries is constantly increasing. The level of satisfaction with tourist facilities and quality of service has improved, but there is still a problem of disparity between service quality and prices. The structure of the tourist expenses of foreign visitors is diverse, and the competitiveness of the tourist products of Georgian tourist companies is higher. The attractiveness of popular cities of Georgia has increased by 43%.
Keywords: tourist, expenses, indexes, statistics, analysis
Procedia PDF Downloads 333
3270 Optimization of a Convolutional Neural Network for the Automated Diagnosis of Melanoma
Authors: Kemka C. Ihemelandu, Chukwuemeka U. Ihemelandu
Abstract:
The incidence of melanoma has been increasing rapidly over the past two decades, making melanoma a current public health crisis. Unfortunately, even as screening efforts continue to expand in an effort to ameliorate the death rate from melanoma, there is a need to improve diagnostic accuracy to decrease misdiagnosis. Artificial intelligence (AI), a new frontier in patient care, has the ability to improve the accuracy of melanoma diagnosis. The convolutional neural network (CNN), a form of deep neural network most commonly applied to analyze visual imagery, has been shown to outperform the human brain in pattern recognition. However, there are noted limitations in the accuracy of CNN models. Our aim in this study was the optimization of convolutional neural network algorithms for the automated diagnosis of melanoma. We hypothesized that optimal selection of the momentum and batch hyperparameters increases model accuracy. Our most successful model developed during this study showed that optimal selection of a momentum of 0.25 and a batch size of 2 led to superior performance and a faster model training time, with an accuracy of ~83% after nine hours of training. We did notice a lack of diversity in the dataset used, with a noted class imbalance favoring lighter vs. darker skin tones. Training set image transformations did not result in a superior model performance in our study.
Keywords: melanoma, convolutional neural network, momentum, batch hyperparameter
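The momentum hyperparameter tuned above controls the classical momentum update of the optimizer: a velocity term accumulates past gradients and damps oscillations. A minimal sketch where a one-parameter quadratic loss stands in for the network; the momentum value 0.25 mirrors the study's best setting, everything else is illustrative:

```python
# Classical (heavy-ball) momentum update: v <- m*v - lr*grad; w <- w + v.
# A quadratic loss(w) = w**2 stands in for the CNN; learning rate and step
# count are illustrative assumptions.
def train(momentum, lr=0.1, steps=50):
    w, velocity = 5.0, 0.0            # optimum of loss(w) = w**2 is at w = 0
    for _ in range(steps):
        grad = 2.0 * w                # d(w**2)/dw
        velocity = momentum * velocity - lr * grad
        w += velocity
    return w

w_final = train(momentum=0.25)        # converges toward the optimum at 0
```

Too little momentum slows convergence on ill-conditioned losses; too much overshoots, which is why the value is tuned rather than fixed.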
Procedia PDF Downloads 101
3269 An Energy Holes Avoidance Routing Protocol for Underwater Wireless Sensor Networks
Authors: A. Khan, H. Mahmood
Abstract:
In Underwater Wireless Sensor Networks (UWSNs), sensor nodes close to the water surface (the final destination) are often preferred for selection as forwarders. However, their frequent selection depletes their limited battery power. In consequence, these nodes die during the early stage of network operation and create energy holes where forwarders are not available for packet forwarding. These holes severely affect network throughput, and as a result, system performance significantly degrades. In this paper, a routing protocol is proposed to avoid energy holes during packet forwarding. The proposed protocol does not require the conventional position information (localization) of holes to avoid them. Localization is cumbersome, energy-inefficient and difficult to achieve in the underwater environment, where sensor nodes change their positions with the water currents. Forwarders with the lowest water pressure level and the maximum number of neighbors are preferred to forward packets. Together, these two parameters minimize packet drop by following the paths where the maximum number of forwarders is available. To avoid interference along the paths with the maximum number of forwarders, a packet holding time is defined for each forwarder. Simulation results reveal the superior performance of the proposed scheme over the counterpart technique.
Keywords: energy holes, interference, routing, underwater
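The two selection criteria above translate directly into a lexicographic preference: lowest pressure first (closest to the surface), then most neighbors to break ties. A minimal sketch; the node records and field names are illustrative assumptions, not the protocol's message format:

```python
# Forwarder selection following the abstract's two criteria: prefer the candidate
# with the lowest water-pressure level, and among equals, the most neighbors.
# Node records below are illustrative.
def select_forwarder(candidates):
    """candidates: list of dicts with 'id', 'pressure' (depth proxy), 'neighbors'."""
    return min(candidates, key=lambda n: (n["pressure"], -n["neighbors"]))

nodes = [
    {"id": "A", "pressure": 40.0, "neighbors": 3},
    {"id": "B", "pressure": 25.0, "neighbors": 2},
    {"id": "C", "pressure": 25.0, "neighbors": 5},
]
best = select_forwarder(nodes)   # C: as shallow as B, but better connected
```

Note that no coordinates appear anywhere: pressure and neighbor counts are locally measurable, which is exactly why the protocol needs no localization.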
Procedia PDF Downloads 409
3268 Atmospheric CO2 Capture via Temperature/Vacuum Swing Adsorption in SIFSIX-3-Ni
Authors: Eleni Tsalaporta, Sebastien Vaesen, James M. D. MacElroy, Wolfgang Schmitt
Abstract:
Carbon dioxide capture has attracted the attention of many governments, industries and scientists over the last few decades, due to the rapid increase in atmospheric CO2 concentration, with several studies being conducted in this area over the last few years. In many of these studies, CO2 capture in complex Pressure Swing Adsorption (PSA) cycles has been associated with high energy consumption, despite the promising capture performance of such processes. The purpose of this study is the economic capture of atmospheric carbon dioxide for its transformation into a clean type of energy. A single-column Temperature/Vacuum Swing Adsorption (TSA/VSA) process is proposed as an alternative to multi-column Pressure Swing Adsorption (PSA) processes. The proposed adsorbent is SIFSIX-3-Ni, a newly developed MOF (Metal-Organic Framework) with extended CO2 selectivity and capacity. There are three stages involved in this paper: (i) SIFSIX-3-Ni is synthesized and pelletized, and its physical and chemical properties are examined before and after the pelletization process; (ii) experiments are designed and undertaken for the estimation of the diffusion and adsorption parameters and limitations for CO2 undergoing capture from the air; and (iii) the CO2 adsorption capacity and dynamical characteristics of SIFSIX-3-Ni are investigated both experimentally and mathematically by employing a single-column TSA/VSA for the capture of atmospheric CO2. This work is further supported by a techno-economic study for the estimation of the investment cost and the energy consumption of the single-column TSA/VSA process. The simulations are performed using gPROMS.
Keywords: carbon dioxide capture, temperature/vacuum swing adsorption, metal organic frameworks, SIFSIX-3-Ni
Procedia PDF Downloads 263
3267 Estimation of World Steel Production by Process
Authors: Reina Kawase
Abstract:
World GHG emissions should be reduced by 50% by 2050 compared with the 1990 level. CO2 emission reduction from the steel sector, an energy-intensive sector, is essential. To estimate CO2 emissions from the steel sector worldwide, an estimation of steel production is required. Here, world steel production by process is estimated for the period 2005-2050, with the world divided into 35 aggregated regions. Two kinds of steel making processes are considered: the basic oxygen furnace (BOF) and the electric arc furnace (EAF). Steel production by process in each region is decided based on current production capacity, the supply-demand balance of steel and scrap, technology innovation in steel making, steel consumption projections, and goods trade. World steel production under the moderate countermeasure scenario in 2050 increases by 1.3 times compared with that in 2012. When domestic scrap recycling is promoted, steel production in developed regions increases by about 1.5 times, and the share of developed regions changes from 34% (2012) to about 40% (2050). This is because developed regions are the main suppliers of scrap. 48-57% of world steel production is produced by EAF. Under the scenario which emphasizes the supply-demand balance of steel, steel production in developing regions increases by 1.4 times and is larger than that in developed regions; the share of developing regions, however, is not so different from the current level. The increase in steel production by EAF is the largest under the scenario in which the supply-demand balance of steel is an important factor, where the EAF share reaches 65%.
Keywords: global steel production, production distribution scenario, steel making process, supply-demand balance
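The scenario figures above combine straightforwardly. A back-of-envelope sketch of that arithmetic; the 2012 world production figure (~1550 Mt) is an approximate public statistic used here only for illustration, while the multipliers and shares come from the abstract:

```python
# Scenario arithmetic from the abstract; the 2012 baseline (~1550 Mt of crude
# steel) is an approximate figure assumed for illustration.
production_2012_mt = 1550.0
production_2050_mt = production_2012_mt * 1.3        # grows to 1.3x by 2050

eaf_share_low, eaf_share_high = 0.48, 0.57           # 48-57% produced by EAF
eaf_2050_low = production_2050_mt * eaf_share_low    # Mt produced by EAF, low end
eaf_2050_high = production_2050_mt * eaf_share_high  # Mt produced by EAF, high end
```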
Procedia PDF Downloads 450
3266 Factors Influencing Site Overhead Cost of Construction Projects in Egypt: A Comparative Analysis
Authors: Aya Effat, Ossama A. Hosny, Elkhayam M. Dorra
Abstract:
Estimating costs is a crucial step in construction management and should be completed at the beginning of every project to establish the project's budget. The precision of the cost estimate plays a significant role in the success of construction projects, as it allows project managers to manage costs effectively. Site overhead costs constitute a significant portion of construction project budgets, necessitating accurate prediction and management. These costs are influenced by a multitude of factors, requiring thorough examination and analysis to understand their relative importance and impact. Thus, the main aim of this research is to enhance the contractor's ability to predict and manage site overheads by identifying and analyzing the main factors influencing site overhead costs in the Egyptian construction industry. Through a comprehensive literature review, key factors were first identified and subsequently validated using a thorough comparative analysis of data from 55 real-life construction projects. Through this comparative analysis, the relationship of each factor to the site overheads percentage, as well as of each site overheads subcategory to each project construction phase, was identified and examined. Furthermore, correlation analysis was performed to check for multicollinearity and to identify the factors with the highest impact. The findings of this research offer valuable insights into the key drivers of site overhead costs in the Egyptian construction industry. By understanding these factors, construction professionals can make informed decisions regarding the estimation and management of site overhead costs.
Keywords: comparative analysis, cost estimation, construction management, site overheads
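The correlation screening described above can be sketched as follows; the factor names, data, and coefficients below are illustrative assumptions, not the study's actual dataset.

```python
import numpy as np

# Illustrative sketch of the correlation step: rank candidate factors by the
# strength of their linear relationship with the site overheads percentage.
# Factor names and data are synthetic assumptions, not the study's records.
rng = np.random.default_rng(0)
n_projects = 55  # matches the 55 projects analyzed in the study

duration = rng.uniform(6, 48, n_projects)         # project duration, months (assumed factor)
contract_value = rng.uniform(1, 100, n_projects)  # million EGP (assumed factor)
noise = rng.normal(0, 1.0, n_projects)

# Assume overheads % rises with duration and falls with contract value.
overheads_pct = 5.0 + 0.15 * duration - 0.03 * contract_value + noise

factors = {"duration": duration, "contract_value": contract_value}
ranking = sorted(
    factors.items(),
    key=lambda kv: abs(np.corrcoef(kv[1], overheads_pct)[0, 1]),
    reverse=True,
)
for name, values in ranking:
    r = np.corrcoef(values, overheads_pct)[0, 1]
    print(f"{name}: r = {r:+.2f}")
```

A full multicollinearity check would additionally inspect the correlations among the factors themselves, not only factor-to-target.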
Procedia PDF Downloads 18
3265 Design and Test a Robust Bearing-Only Target Motion Analysis Algorithm Based on Modified Gain Extended Kalman Filter
Authors: Mohammad Tarek Al Muallim, Ozhan Duzenli, Ceyhun Ilguy
Abstract:
Passive sonar is a method for detecting acoustic signals in the ocean: it detects the acoustic signals emanating from external sources. With passive sonar we can determine only the bearing of the target, with no information about its range. Target Motion Analysis (TMA) is a process to estimate the position and speed of a target using passive sonar information. Since bearing is the only available information, the technique is called bearing-only TMA. Many TMA techniques have been developed; until now, however, there has been no very effective method that can always track an unknown target and extract its moving trace. In this work, an effective bearing-only TMA algorithm is designed. The measured bearing angles are very noisy; moreover, for multi-beam sonar, the measurements are quantized due to the sonar beam width. To deal with this, a modified gain extended Kalman filter algorithm is used. The algorithm is fine-tuned, and many modules are added to improve the performance. A special validation gate module is used to ensure the stability of the algorithm. Many indicators of performance and confidence level measurement are designed and tested. A new method to detect whether the target is maneuvering is proposed. Moreover, a reactive optimal observer maneuver based on bearing measurements is proposed, which ensures convergence to the right solution every time. To test the performance of the proposed TMA algorithm, a simulation is done with a MATLAB program. The simulator models a discrete scenario for an observer and a target, taking into consideration all the practical aspects of the problem, such as smooth transitions in speed, circular turns of the ship, noisy measurements, and the quantized bearing measurements that come from multi-beam sonar. The tests are done for many given test scenarios. For all the tests, full tracking is achieved within 10 minutes with very little error.
The range estimation error was less than 5%, the speed error less than 5%, and the heading error less than 2 degrees. The online performance estimator is mostly aligned with the real performance; the range estimation confidence level reaches 90% when the range error is less than 10%. The experiments show that the proposed TMA algorithm is very robust and has low estimation error. However, the convergence time of the algorithm still needs to be improved.
Keywords: target motion analysis, Kalman filter, passive sonar, bearing-only tracking
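A single bearing-only measurement update of a standard EKF can be sketched as below. This is not the paper's modified-gain variant (which alters how the gain is computed to improve observability); all states, covariances, and noise values are assumptions for illustration.

```python
import numpy as np

# Minimal sketch of one bearing-only EKF measurement update (plain EKF
# linearization; the paper's modified-gain variant is not reproduced here).
# Target state, covariance, and noise levels below are assumptions.
x = np.array([1000.0, 2000.0, -5.0, 2.0])    # target state: x, y, vx, vy (m, m/s)
P = np.diag([500.0**2, 500.0**2, 5.0**2, 5.0**2])
obs = np.array([0.0, 0.0])                   # observer position (m)
sigma_b = np.deg2rad(1.0)                    # bearing noise std (assumed 1 degree)

dx, dy = x[0] - obs[0], x[1] - obs[1]
r2 = dx**2 + dy**2
z_pred = np.arctan2(dx, dy)                  # predicted bearing, measured from north
H = np.array([dy / r2, -dx / r2, 0.0, 0.0])  # Jacobian of bearing w.r.t. state

z = z_pred + 0.005                           # simulated noisy measurement (rad)
S = H @ P @ H + sigma_b**2                   # innovation variance (scalar)
K = P @ H / S                                # Kalman gain
innov = np.arctan2(np.sin(z - z_pred), np.cos(z - z_pred))  # wrap to [-pi, pi]
x_new = x + K * innov
P_new = P - np.outer(K, H @ P)               # covariance shrinks along the bearing

print("trace before:", np.trace(P), "after:", np.trace(P_new))
```

Note that a single bearing update reduces uncertainty only transverse to the line of sight; range observability is what the observer maneuver described above provides.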
Procedia PDF Downloads 402
3264 Automatic Detection of Defects in Ornamental Limestone Using Wavelets
Authors: Maria C. Proença, Marco Aniceto, Pedro N. Santos, José C. Freitas
Abstract:
A methodology based on wavelets is proposed for the automatic location and delimitation of defects in limestone plates. Natural defects include dark colored spots, crystal zones trapped in the stone, areas of abnormal contrast colors, cracks or fracture lines, and fossil patterns. Although some of these may or may not be considered defects according to the intended use of the plate, the goal is to pair each stone with a map of defects that can be overlaid on a computer display. These layers of defects constitute a database that allows the preliminary selection of matching tiles of a particular variety, with specific dimensions, for a requirement of N square meters, to be done on a desktop computer rather than by a two-hour search in the storage park, with human operators manipulating stone plates as large as 3 m x 2 m, weighing about one ton. Accident risks and work times are reduced, with a consequent increase in productivity. The basis of the algorithm is wavelet decomposition, executed on two instances of the original image to detect both hypotheses: dark and clear defects. The existence and/or size of these defects is the criterion for classifying the quality grade of the stone products. The tuning of parameters that is possible in the wavelet framework corresponds to different levels of accuracy in the drawing of the contours and in the selection of defect size, which allows the map of defects to be used to cut a selected stone into tiles with minimum waste, according to the dimension of defects allowed.
Keywords: automatic detection, defects, fracture lines, wavelets
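The core idea, locating defects where wavelet detail coefficients carry energy, can be sketched with a single-level 2-D Haar transform. The wavelet family, threshold, and plate image below are assumptions; the study's actual tuning is not specified here.

```python
import numpy as np

# Sketch of defect detection via a single-level 2-D Haar wavelet transform,
# implemented directly with numpy. The synthetic "plate" and the energy
# threshold are illustrative assumptions.
def haar2d(img):
    """Single-level 2-D Haar decomposition: approximation + 3 detail bands."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return a, h, v, d

# Synthetic 64x64 plate: uniform stone with one dark colored spot ("defect").
plate = np.full((64, 64), 200.0)
plate[21:28, 31:38] = 60.0                  # dark spot, edges off block boundaries

a, h, v, d = haar2d(plate)
energy = h**2 + v**2 + d**2                 # detail energy per 2x2 block
defect_mask = energy > 10.0                 # threshold is a tuning parameter
print("flagged blocks:", int(defect_mask.sum()))
```

Uniform regions yield zero detail energy, so only blocks crossing the defect contour are flagged, which is exactly the delimitation behavior described above.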
Procedia PDF Downloads 248
3263 Estimating of Groundwater Recharge Value for Al-Najaf City, Iraq
Authors: Hayder H. Kareem
Abstract:
Groundwater recharge is a crucial parameter for any groundwater management system. The variability of recharge rates and the difficulty of observing this factor directly make the recharge value complex to estimate. Various methods exist to estimate groundwater recharge, each with limitations on its applicability. This paper focuses on a real study area, Al-Najaf City, Iraq. The city overlies a few groundwater aquifers, but the aquifer considered in this study is the one closest to the ground surface, the Dibdibba aquifer. According to the Aridity Index estimated in the paper, Al-Najaf City is classified as a region with an arid climate, which indicates that the most appropriate method to estimate groundwater recharge is Thornthwaite's formula, or Thornthwaite's method. From the calculations, the estimated average groundwater recharge over the period 1980-2014 for Al-Najaf City is 40.32 mm/year. Groundwater recharge directly affects the groundwater table level (groundwater head). Therefore, to verify this recharge value, the MODFLOW program has been used: a groundwater model of the Al-Najaf City study area was built in MODFLOW to simulate the area for different purposes, one of which is to simulate groundwater recharge, by examining the relationship between the calculated and observed heads. The MODFLOW results show that this recharge value is extremely high and needs to be reduced.
Therefore, a further sensitivity test was carried out for the Al-Najaf City study area in MODFLOW by changing the recharge value. The best estimate of the groundwater recharge value for the city was found to be 16.5 mm/year, the value that gives the best fit between the calculated and observed heads, with minimum values of RMSE (13.175%) and RSS (1454 m²).
Keywords: Al-Najaf City, groundwater modelling, recharge estimation, visual MODFLOW
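Thornthwaite's method, named above as the appropriate choice for an arid climate, computes potential evapotranspiration (PET) from mean monthly temperature. A minimal sketch follows; the monthly temperatures are illustrative assumptions for an arid site, not Al-Najaf's actual record.

```python
import math

# Sketch of Thornthwaite's uncorrected monthly potential evapotranspiration
# (PET), the basis of the recharge estimate discussed above. Temperatures
# below are assumed values for an arid site, not measured data.
def thornthwaite_pet(monthly_temp_c):
    """Return uncorrected monthly PET (mm) from mean monthly temperatures (degC)."""
    I = sum((t / 5.0) ** 1.514 for t in monthly_temp_c if t > 0)  # heat index
    a = (6.75e-7 * I**3) - (7.71e-5 * I**2) + (1.792e-2 * I) + 0.49239
    return [16.0 * ((10.0 * max(t, 0.0) / I) ** a) for t in monthly_temp_c]

temps = [10, 13, 18, 24, 30, 35, 37, 36, 32, 26, 18, 12]   # assumed degC, Jan-Dec
pet = thornthwaite_pet(temps)
print(f"annual PET ~= {sum(pet):.0f} mm")
# In a simple water balance, recharge ~= precipitation - PET - runoff, which
# is why arid-climate recharge values on the order of 16.5 mm/year are small.
```

In practice each monthly value is also multiplied by a daylight-hours correction factor for the site latitude, omitted here for brevity.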
Procedia PDF Downloads 135
3262 Stature Prediction from Anthropometry of Extremities among Jordanians
Authors: Amal A. Mashali, Omar Eltaweel, Elerian Ekladious
Abstract:
The stature of an individual plays an important role in identification, which is often required in medico-legal practice. The estimation of stature is an important step in the identification of dismembered remains, or when only part of a skeleton is available, as in major disasters or cases of mutilation. There are no published anthropometric data for the Jordanian population. The present study was designed to find the relationship of stature to several anthropometric measures in a sample of the Jordanian population and to determine the most accurate and reliable one for predicting the stature of an individual. A cross-sectional study was conducted on 336 healthy adult volunteers, free of bone diseases, nutritional diseases and abnormalities of the extremities, after taking their consent. Students of the Faculty of Medicine, Mutah University helped in collecting the data. The anthropometric measurements (anatomically defined) were stature, humerus length, hand length and breadth, foot length and breadth, foot index, and knee height on both the right and left sides of the body. The measurements were similar on both sides of the body in the studied sample. All the anthropometric data showed a significant relationship with age except knee height. There was a significant difference between male and female measurements except for the foot index, where F = 0.269. There was a significant positive correlation between the different measures and the stature of the individuals. Three equations were developed for the estimation of stature. The most sensitive measure for predicting stature was found to be the humerus length.
Keywords: foot index, foot length, hand length, humerus length, stature
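A stature-prediction equation of the kind described above is typically derived by least-squares regression. The sketch below fits stature on humerus length; the coefficients and synthetic sample are assumptions, not the study's fitted equation.

```python
import numpy as np

# Sketch of deriving a stature-prediction equation by ordinary least squares,
# as in a regression of stature on humerus length. The "true" coefficients
# and the synthetic sample below are assumptions for illustration only.
rng = np.random.default_rng(1)
n = 336                                     # matches the study's sample size
humerus = rng.uniform(28, 38, n)            # humerus length, cm (assumed range)
stature = 60.0 + 3.0 * humerus + rng.normal(0, 2.5, n)  # assumed relation + noise

slope, intercept = np.polyfit(humerus, stature, 1)
print(f"stature ~= {intercept:.1f} + {slope:.2f} * humerus_length")

# The standard error of the estimate (SEE) gauges prediction accuracy.
residuals = stature - (intercept + slope * humerus)
see = np.sqrt(np.sum(residuals**2) / (n - 2))
print(f"SEE = {see:.2f} cm")
```

The "most sensitive" predictor in a study like this is the one that yields the smallest SEE (equivalently, the highest correlation with stature).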
Procedia PDF Downloads 306
3261 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays
Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín
Abstract:
Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating the grip force in slip control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Efficient hardware alternatives are now used more frequently in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile detection systems except the force reconstruction process, a stage in which they have been less applied. This work presents a hardware implementation of a model-driven method reported in the literature for the contact force reconstruction of flat and rigid tactile sensor arrays from normal stress data. Starting from the analysis of a software implementation of the model, this implementation proposes the parallelization of tasks that facilitate the execution of matrix operations and of a two-dimensional optimization function to obtain a force vector for each taxel in the array. This work seeks to exploit the parallel hardware characteristics of Field Programmable Gate Arrays (FPGAs) and to apply appropriate algorithm parallelization techniques, guided by the rules of generalization, efficiency, and scalability in the tactile decoding process, with low latency, low power consumption, and real-time execution as the main design parameters.
The results show a maximum estimation error of 32% in the tangential forces and 22% in the normal forces with respect to simulation by the Finite Element Modeling (FEM) technique of Hertzian and non-Hertzian contact events, over sensor arrays of 10×10 taxels of different sizes. The hardware implementation was carried out on a Xilinx® MPSoC XCZU9EG-2FFVB1156 platform that allows the reconstruction of force vectors following a scalable approach, from the information captured by tactile sensor arrays composed of up to 48×48 taxels using various transduction technologies. The proposed implementation demonstrates a 180-fold reduction in estimation time compared to software implementations. Despite the relatively high estimation errors, the information this implementation provides on the tangential and normal tractions and on the triaxial reconstruction of forces allows the tactile properties of the touched object to be adequately reconstructed, with results similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be reduced, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic skin applications in robotic and biomedical contexts.
Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation
Procedia PDF Downloads 195
3260 Assessment of the Egyptian Agricultural Foreign Trade with Common Market for Eastern and Southern Africa Countries
Authors: Doaa H. I. Mahmoud, El-Said M. Elsharkawy, Saad Z. Soliman, Soher E. Mustfa
Abstract:
The opening of new promising foreign markets is one of the objectives of Egypt's foreign trade policies, especially for agricultural exports. This study aims to examine the commodity structure of Egyptian agricultural imports and exports with the COMESA countries. In addition, the surplus/deficit of the Egyptian commodity and agricultural balance with these countries is estimated. Time series data covering the period 2004-2016 are used. Growth functions are estimated and annual growth rates derived for the study's variables. Results for the study period include the following: (1) The average total Egyptian exports to the COMESA (Common Market for Eastern and Southern Africa) countries are estimated at 1,491 million dollars, with an annual growth rate of 14.4% (214.7 million dollars). (2) The average annual Egyptian agricultural exports to these economies are estimated at 555 million dollars, with an annual growth rate of 19.4% (107.7 million dollars). (3) The average annual value of agricultural imports from the COMESA countries is estimated at 289 million dollars, with an annual growth rate of 14.4% (41.6 million dollars). (4) The study shows a continuous surplus in the agricultural balance with these economies, alongside a deficit in the raw-materials agricultural balance as well as in the balance of input requirements with these countries.
Keywords: COMESA, Egypt, growth rates, trade balance
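Annual growth rates like those quoted above are commonly obtained by fitting an exponential growth function to the series. The sketch below uses a synthetic series built to grow at 14.4% per year; the series itself is an assumption, not the study's data.

```python
import numpy as np

# Sketch of growth-rate estimation: fit y = a * e^(b t) by log-linear least
# squares; the annual growth rate is then e^b - 1. The export series below
# is synthetic, constructed to grow ~14.4% per year as reported.
years = np.arange(2004, 2017)                 # study period 2004-2016
t = years - years[0]
exports = 500.0 * (1.144 ** t)                # assumed smooth series, million $

b, log_a = np.polyfit(t, np.log(exports), 1)  # regress log(y) on t
growth_rate = np.exp(b) - 1
print(f"estimated annual growth rate: {growth_rate:.1%}")
```

On real, noisy data the same log-linear fit gives the average compound growth rate over the period rather than an exact year-to-year figure.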
Procedia PDF Downloads 209
3259 Feature Selection Approach for the Classification of Hydraulic Leakages in Hydraulic Final Inspection using Machine Learning
Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter
Abstract:
Manufacturing companies are facing global competition and enormous cost pressure. The use of machine learning applications can help reduce production costs and create added value. Predictive quality enables product quality to be secured through data-supported predictions, using machine learning models as a basis for decisions on test results. Furthermore, machine learning methods are able to process large amounts of data, deal with unfavourable row-column ratios, detect dependencies between the covariates and the given target, and assess the multidimensional influence of all input variables on the target. Real production data are often subject to highly fluctuating boundary conditions and unbalanced data sets. Changes in production data manifest themselves in trends, systematic shifts, and seasonal effects. Thus, machine learning applications require intensive pre-processing and feature selection. Data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets. Within the real data set of Bosch hydraulic valves used here, the comparability of production conditions within certain time periods can be identified by applying a concept drift method. Furthermore, a classification model is developed to evaluate the feature importance in different subsets within the identified time periods. By selecting comparable and stable features, the number of features used can be significantly reduced without a strong decrease in predictive power. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predict the quality characteristics of workpieces. In this research, the AdaBoost classifier is used to predict the leakage of hydraulic valves based on geometric gauge blocks from machining, mating data from the assembly, and hydraulic measurement data from end-of-line testing.
In addition, the most suitable methods are selected and accurate quality predictions are achieved.
Keywords: classification, machine learning, predictive quality, feature selection
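A simple filter-style variant of the feature selection described above can be sketched as follows. The study ranks features by model-based importances from an AdaBoost classifier; this sketch substitutes absolute correlation with the target, and all data are synthetic assumptions.

```python
import numpy as np

# Sketch of filter-style feature selection: rank features by their absolute
# correlation with the leakage target and keep the top k. The study itself
# uses AdaBoost feature importances; data here are synthetic assumptions.
rng = np.random.default_rng(2)
n, p = 500, 20
X = rng.normal(size=(n, p))

# Assume only features 0 and 3 actually drive leakage.
latent = 0.9 * X[:, 0] - 0.7 * X[:, 3] + rng.normal(0, 0.5, n)
y = (latent > 0).astype(float)              # 1 = leaking valve (assumed label)

scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
k = 5
selected = np.argsort(scores)[::-1][:k]     # indices of the top-k features
print("selected features:", sorted(selected.tolist()))
```

In the paper's setting the ranking would be recomputed per concept-drift window, keeping only features that are important and stable across windows.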
Procedia PDF Downloads 162
3258 Feasibility Study of Constructed Wetlands for Wastewater Treatment and Reuse in Asmara, Eritrea
Authors: Hagos Gebrehiwet Bahta
Abstract:
Asmara, the capital city of Eritrea, is facing a sanitation challenge because the city discharges its wastewater to the environment without any kind of treatment. The aim of this research is to conduct a pre-feasibility study of using constructed wetlands in the peri-urban areas of Asmara for wastewater treatment and reuse. It was found that around 15,000 m³ of wastewater is used daily for agricultural activities, and the products are sold in the city's markets, which is claimed to cause some health effects. In this study, three potential sites were investigated around Mai-Bela, and an optimum location was selected on the basis of land availability, topography, and geotechnical information. Some types of local macrophytes that can be used in constructed wetlands have been identified and documented for further studies. It was found that subsurface constructed wetlands can provide sufficient pollutant removal with careful planning and design. Following the feasibility study, a preliminary design of the screening, grit chamber and subsurface constructed wetland was prepared, and a cost estimation was done. The filter media was found to be the most expensive component, accounting for around 30% of the overall cost. The city's wastewater drainage runs in two directions, and the selected site is located in the southern sub-system, which carries only sewage (a separate system). The wastewater analysis conducted around this area (Sembel) indicates high heavy metal levels and organic concentrations, which reveals a high level of industrial pollution in addition to the domestic sewage.
Keywords: agriculture, constructed wetland, Mai-Bela, wastewater reuse
Procedia PDF Downloads 217
3257 Estimation of State of Charge, State of Health and Power Status for the Li-Ion Battery On-Board Vehicle
Authors: S. Sabatino, V. Calderaro, V. Galdi, G. Graber, L. Ippolito
Abstract:
Climate change is a rapidly growing global threat caused mainly by increased emissions of carbon dioxide (CO₂) into the atmosphere. These emissions come from multiple sources, including industry, power generation, and the transport sector. The need to tackle climate change and reduce CO₂ emissions is indisputable. A crucial solution for achieving decarbonization in the transport sector is the adoption of electric vehicles (EVs). These vehicles use lithium-ion (Li-Ion) batteries as an energy source, making them extremely efficient and with low direct emissions. However, Li-Ion batteries are not without problems, including the risk of overheating and performance degradation. To ensure their safety and longevity, it is essential to use a battery management system (BMS). The BMS constantly monitors battery status and adjusts temperature and cell balance, ensuring optimal performance and preventing dangerous situations; from this monitoring, it is also able to manage the battery optimally to increase its life. Among the parameters monitored by the BMS, the main ones are State of Charge (SoC), State of Health (SoH), and State of Power (SoP). The evaluation of these parameters can be carried out in two ways: offline, using benchtop batteries tested in the laboratory, or online, using batteries installed in moving vehicles. Online estimation is the preferred approach, as it relies on capturing real-time data from batteries operating in real-life situations, such as everyday EV use. Actual battery usage conditions are highly variable. Moving vehicles are exposed to a wide range of factors, including temperature variations, different driving styles, and complex charge/discharge cycles. This variability is difficult to replicate in a controlled laboratory environment and can greatly affect performance and battery life. Online estimation captures this variety of conditions, providing a more accurate assessment of battery behavior in real-world situations.
In this article, a hybrid approach based on a neural network and a statistical method is proposed for real-time estimation of the SoC, SoH, and SoP parameters of interest. These parameters are estimated from the analysis of a one-day driving profile of an electric vehicle, assumed to be divided into the following four phases: (i) partial discharge (SoC 100% - SoC 50%); (ii) partial charge (SoC 50% - SoC 80%); (iii) deep discharge (SoC 80% - SoC 30%); (iv) full charge (SoC 30% - SoC 100%). The neural network predicts the values of ohmic resistance and incremental capacity, while the statistical method is used to estimate the parameters of interest. This reduces the complexity of the model and improves its prediction accuracy. The effectiveness of the proposed model is evaluated by analyzing its performance in terms of root mean square error (RMSE) and mean absolute percentage error (MAPE) and comparing it with the reference method found in the literature.
Keywords: electric vehicle, Li-Ion battery, BMS, state-of-charge, state-of-health, state-of-power, artificial neural networks
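The simplest online SoC estimator, Coulomb counting, can be sketched over a discharge phase like phase (iii) above (SoC 80% to 30%). The paper's hybrid neural-network/statistical estimator is more elaborate; the pack capacity and current profile below are assumptions.

```python
import numpy as np

# Sketch of Coulomb counting over a deep-discharge phase (SoC 80% -> 30%).
# Capacity and current profile are assumed; a real BMS would correct this
# integration for drift using voltage-based or model-based estimates.
capacity_ah = 60.0                    # assumed usable pack capacity, Ah
soc = 0.80                            # SoC at the start of the phase
dt_h = 1.0 / 3600.0                   # 1-second time steps, in hours

# Constant 30 A discharge for one hour removes 30 Ah = 50% of 60 Ah.
current_a = np.full(3600, -30.0)      # negative sign = discharging
for i_a in current_a:
    soc += i_a * dt_h / capacity_ah   # integrate current over time

print(f"SoC after discharge: {soc:.2%}")
```

Pure Coulomb counting accumulates sensor bias over time, which is one motivation for hybrid estimators like the one proposed in the paper.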
Procedia PDF Downloads 67
3256 Extended Kalman Filter and Markov Chain Monte Carlo Method for Uncertainty Estimation: Application to X-Ray Fluorescence Machine Calibration and Metal Testing
Authors: S. Bouhouche, R. Drai, J. Bast
Abstract:
This paper is concerned with a method for uncertainty evaluation of steel sample content using the X-ray fluorescence method. The considered method of analysis is a comparative technique based on X-ray fluorescence; the calibration step assumes an adequate chemical composition of the analyzed metallic sample. This work proposes a new combined approach using the Kalman filter and Markov Chain Monte Carlo (MCMC) for uncertainty estimation of steel content analysis. The Kalman filter algorithm is extended to the model identification of the chemical analysis process using the main factors affecting the analysis results; in this case, the estimated states are reduced to the model parameters. MCMC is a stochastic method that computes the statistical properties of the considered states, such as the probability distribution function (PDF), according to the initial state and the target distribution, using a Monte Carlo simulation algorithm. The conventional approach is based on linear correlation; the uncertainty budget is established for steel Mn (wt%), Cr (wt%), Ni (wt%) and Mo (wt%) content, respectively. A comparative study between the conventional procedure and the proposed method is given. This kind of approach can be applied to construct an accurate computing procedure for uncertainty measurement.
Keywords: Kalman filter, Markov chain Monte Carlo, x-ray fluorescence calibration and testing, steel content measurement, uncertainty measurement
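The MCMC step can be sketched with a random-walk Metropolis sampler that draws the distribution of a content estimate. The Gaussian target below (mean 1.2 wt%, std 0.05 wt%, standing in for e.g. Mn content) is an assumption for illustration, not the paper's actual posterior.

```python
import numpy as np

# Sketch of the MCMC step: random-walk Metropolis sampling of an assumed
# Gaussian target for a steel content estimate (e.g. Mn wt%). From the
# samples, statistical properties such as mean, std, and quantiles follow.
rng = np.random.default_rng(3)
mu, sigma = 1.2, 0.05                     # assumed target mean/std, wt%

def log_target(x):
    return -0.5 * ((x - mu) / sigma) ** 2  # unnormalized Gaussian log-density

samples = []
x = 1.0                                   # arbitrary starting state
for _ in range(20000):
    prop = x + rng.normal(0, 0.05)        # random-walk proposal
    if np.log(rng.uniform()) < log_target(prop) - log_target(x):
        x = prop                          # accept with Metropolis probability
    samples.append(x)

post = np.array(samples[5000:])           # discard burn-in
print(f"posterior mean ~= {post.mean():.3f}, std ~= {post.std():.3f}")
```

The sample std is exactly the kind of quantity that enters an uncertainty budget; for non-Gaussian targets the same sampler also recovers quantiles directly.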
Procedia PDF Downloads 283
3255 Random Forest Classification for Population Segmentation
Authors: Regina Chua
Abstract:
To reduce the costs of re-fielding a large survey, a Random Forest classifier was applied to measure the accuracy of classifying individuals into their assigned segments with the fewest possible questions. Given a long survey, one needed to determine the ten or fewer most predictive questions that would accurately assign new individuals to custom segments. Furthermore, the solution needed to be quick in its classification and usable in non-Python environments. In this paper, a supervised Random Forest classifier was modeled on a dataset with 7,000 individuals, 60 questions, and 254 features. The Random Forest consisted of an ensemble of individual decision trees that together yield a predicted segment with robust precision and recall scores compared to a single tree. A random stratified 70-30 split was used for training the algorithm, and accuracy trade-offs at different depths were identified for each segment. Ultimately, the Random Forest classifier performed at 87% accuracy at a depth of 10, with 20 instead of 254 features and 10 instead of 60 questions. With acceptable accuracy from the prioritized feature selection, new tools were developed for non-Python environments: a worksheet with a formulaic version of the algorithm and an embedded function to predict the segment of an individual in real time. Random Forest was determined to be an optimal classification model by its feature selection, performance, processing speed, and flexible application in other environments.
Keywords: machine learning, supervised learning, data science, random forest, classification, prediction, predictive modeling
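The bagging idea at the heart of a Random Forest can be sketched with a toy ensemble: bootstrap-sample the data, fit a weak one-split "stump" per sample, and predict by majority vote. A real Random Forest additionally subsamples features at each split and grows full trees; the one-feature dataset below is a synthetic assumption for brevity.

```python
import numpy as np

# Toy sketch of the bagging step of a Random Forest: many decision stumps,
# each fit on a bootstrap resample, combined by majority vote. Data are
# synthetic (one feature, segment boundary at x = 5).
rng = np.random.default_rng(4)
x = rng.uniform(0, 10, 300)
y = (x > 5).astype(int)                      # assumed segment label

def fit_stump(xs, ys):
    """Pick the split threshold that minimizes misclassifications."""
    best_t, best_err = 0.0, len(ys) + 1
    for t in np.unique(xs):
        err = np.sum((xs > t).astype(int) != ys)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

stumps = []
for _ in range(25):                          # 25 bootstrap rounds
    idx = rng.integers(0, len(x), len(x))    # sample with replacement
    stumps.append(fit_stump(x[idx], y[idx]))

votes = np.mean([(x > t).astype(int) for t in stumps], axis=0)
pred = (votes >= 0.5).astype(int)            # majority vote over the ensemble
print(f"ensemble accuracy: {np.mean(pred == y):.2%}")
```

The vote fraction per individual also serves as a rough class probability, which is what makes a worksheet "formulaic version" of the model feasible.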
Procedia PDF Downloads 94