Search results for: Fuzzy Logic estimation
2258 A Case Study on the Value of Corporate Social Responsibility Systems
Authors: José M. Brotons, Manuel E. Sansalvador
Abstract:
The relationship between Corporate Social Responsibility (CSR) and financial performance (FP) is a subject of great interest that has not yet been resolved. In this work, we have developed a new and original tool to measure this relationship. The tool quantifies the value contributed to companies that are committed to CSR. The theoretical model used is the fuzzy discounted cash flow method. Two assumptions have been considered: first, that the company has implemented the IQNet SR10 certification, and second, that it has not. For the first, the growth rate used over the time horizon is the rate maintained by the company after obtaining the IQNet SR10 certificate. For the second, both the company's growth rates prior to the implementation of the certification and the evolution of the sector are taken into account. By using triangular fuzzy numbers, it is possible to deal adequately with each company's forecasts as well as with the information corresponding to the sector. Once the annual growth rate of sales is obtained, the profit and loss accounts are generated from the estimated annual sales. The remaining elements of these accounts are derived from their regression against net sales. The difference between these two valuations, made in a fuzzy environment, yields the value of the IQNet SR10 certification. Although this study presents an innovative methodology to quantify the relation between CSR and FP, the authors are aware that only one company has been analyzed. This is precisely the main limitation of this study, which in turn opens up an interesting line for future research: to broaden the sample of companies.
Keywords: corporate social responsibility, case study, financial performance, company valuation
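As a rough sketch of the fuzzy discounted cash flow idea described above, the following assumes triangular fuzzy numbers stored as (low, mode, high) tuples; all cash-flow figures, the two-year horizon and the 8% rate are hypothetical, not values from the study:

```python
def tfn_add(x, y):
    return tuple(a + b for a, b in zip(x, y))

def tfn_scale(x, k):
    # k > 0, so the (low, mode, high) ordering of the triangle is preserved
    return tuple(a * k for a in x)

def fuzzy_npv(cash_flows, rate):
    """Sum of discounted triangular fuzzy cash flows (low, mode, high)."""
    npv = (0.0, 0.0, 0.0)
    for t, cf in enumerate(cash_flows, start=1):
        npv = tfn_add(npv, tfn_scale(cf, (1 + rate) ** -t))
    return npv

# hypothetical two-year forecasts (in millions), with and without certification
with_cert = fuzzy_npv([(90, 100, 115), (95, 108, 125)], rate=0.08)
without_cert = fuzzy_npv([(85, 95, 105), (88, 99, 112)], rate=0.08)

# fuzzy subtraction: (low1 - high2, mode1 - mode2, high1 - low2)
cert_value = (with_cert[0] - without_cert[2],
              with_cert[1] - without_cert[1],
              with_cert[2] - without_cert[0])
```

The resulting triangle represents the value attributed to the certification, with its spread reflecting the combined forecast uncertainty of both scenarios.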
Procedia PDF Downloads 188
2257 Estimation of Time Loss and Costs of Traffic Congestion: The Contingent Valuation Method
Authors: Amira Mabrouk, Chokri Abdennadher
Abstract:
The reduction of road congestion, which is inherent to the use of vehicles, is an obvious priority for public authorities. Assessing an individual's willingness to pay to save trip time is therefore akin to estimating the change in price that would result from setting up a new transport policy to increase network fluidity and improve social welfare. This study takes an innovative perspective: it initiates an economic calculation aimed at estimating the monetized value of time for trips made in Sfax. The aim of this study is threefold: i) to estimate the monetized value of an hour dedicated to trips, ii) to determine whether or not consumers consider the environmental variables significant, and iii) to analyze the impact of public management of congestion via the taxation of city tolls on urban dwellers. This article is built upon a rich field survey conducted in the city of Sfax. Using the contingent valuation method, we analyze the 'declared time preferences' of 450 drivers during rush hours. Giving due consideration to the biases attributed to the applied method, we highlight the delicacy of this approach with regard to the revelation mode and the interrogation techniques, following the NOAA panel recommendations, with the exception of the valuation point, and other similar studies on the estimation of transportation externalities.
Keywords: willingness to pay, contingent valuation, time value, city toll
Procedia PDF Downloads 438
2256 Estimation of Synchronous Machine Synchronizing and Damping Torque Coefficients
Authors: Khaled M. EL-Naggar
Abstract:
Synchronizing and damping torque coefficients of a synchronous machine can give a quite clear picture of machine behavior during transients. These coefficients are used as a power system transient stability measurement. In this paper, a crow search optimization algorithm is presented and implemented to study power system stability during transients. The algorithm makes use of the machine responses to perform the stability study in the time domain. The problem is formulated as a dynamic estimation problem, with an objective function that minimizes the squared error in the estimated coefficients. The method is tested on a practical system with different study cases. Results are reported and a thorough discussion is presented. The study illustrates that the proposed method can estimate the stability coefficients for critical stable cases where other methods may fail. The tests proved that the proposed tool is an accurate and reliable tool for estimating the machine coefficients for assessment of power system stability.
Keywords: optimization, estimation, synchronous, machine, crow search
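A minimal sketch of the crow search metaheuristic used in the abstract, applied to a stand-in squared-error objective rather than the machine response model (the target coefficients, bounds and algorithm parameters below are illustrative assumptions):

```python
import random

def crow_search(obj, dim, bounds, n_crows=20, iters=200, ap=0.1, fl=2.0, seed=1):
    """Minimal crow search algorithm (CSA) minimising obj over a box."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_crows)]
    mem = [p[:] for p in pos]              # hiding places (best found so far)
    mem_f = [obj(p) for p in mem]
    for _ in range(iters):
        for i in range(n_crows):
            j = rng.randrange(n_crows)     # crow i tails crow j
            if rng.random() >= ap:         # j unaware: fly toward j's hiding place
                new = [pos[i][k] + rng.random() * fl * (mem[j][k] - pos[i][k])
                       for k in range(dim)]
            else:                          # j aware: fly to a random position
                new = [rng.uniform(lo, hi) for _ in range(dim)]
            if all(lo <= x <= hi for x in new):   # keep feasible moves only
                pos[i] = new
                f = obj(new)
                if f < mem_f[i]:
                    mem[i], mem_f[i] = new[:], f
    best = min(range(n_crows), key=lambda i: mem_f[i])
    return mem[best], mem_f[best]

# stand-in objective: squared error between trial coefficients and "true" ones
true_coeffs = [1.2, 0.4]
sq_err = lambda x: sum((xi - ti) ** 2 for xi, ti in zip(x, true_coeffs))
coeffs, best_err = crow_search(sq_err, dim=2, bounds=(-5.0, 5.0))
```

In the paper's setting, the objective would instead compare simulated machine responses against measured ones.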
Procedia PDF Downloads 140
2255 Modelling Mode Choice Behaviour Using Cloud Theory
Authors: Leah Wright, Trevor Townsend
Abstract:
Mode choice models are crucial instruments in the analysis of travel behaviour. These models show the relationship between an individual's choice of transportation mode for a given O-D pair and the individual's socioeconomic characteristics, such as household size, income level, age and/or gender, and the features of the transportation system. The most popular functional forms of these models are based on Utility-Based Choice Theory, which addresses the uncertainty in the decision-making process with the use of an error term. However, with the development of artificial intelligence, many researchers have started to take a different approach to travel demand modelling. In recent times, researchers have looked at using neural networks, fuzzy logic and rough set theory to develop improved mode choice formulas. The concept of cloud theory has recently been introduced to model decision-making under uncertainty. Unlike the previously mentioned theories, cloud theory recognises a relationship between randomness and fuzziness, two of the most common types of uncertainty. This research aims to investigate the use of cloud theory in mode choice models and highlights the conceptual framework of a mode choice model using cloud theory. Merging decision-making under uncertainty with mode choice models is state of the art. The cloud theory model is expected to address the issues and concerns with the nested logit model and to improve the design of mode choice models and their use in travel demand modelling.
Keywords: Cloud theory, decision-making, mode choice models, travel behaviour, uncertainty
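The core primitive of cloud theory is the forward normal cloud generator, which couples fuzziness and randomness through three numerical characteristics. A small sketch follows; the travel-time interpretation and the parameter values are illustrative assumptions, not the paper's model:

```python
import math
import random

def normal_cloud(ex, en, he, n, seed=0):
    """Forward normal cloud generator: n drops of (value, membership degree).
    Ex = expectation, En = entropy (fuzziness), He = hyper-entropy
    (randomness of the fuzziness itself)."""
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        en_i = rng.gauss(en, he)              # per-drop entropy
        x = rng.gauss(ex, abs(en_i))          # a cloud drop
        mu = math.exp(-(x - ex) ** 2 / (2 * en_i ** 2)) if en_i else 1.0
        drops.append((x, mu))
    return drops

# e.g. a traveller's fuzzy-random perception of a 30-minute acceptable trip time
drops = normal_cloud(ex=30.0, en=5.0, he=0.5, n=2000)
```

Because He > 0, the membership of a given value varies from drop to drop, which is precisely the randomness-within-fuzziness that distinguishes cloud models from ordinary fuzzy membership functions.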
Procedia PDF Downloads 389
2254 A Cognitive Approach to the Optimization of Power Distribution across an Educational Campus
Authors: Mrinmoy Majumder, Apu Kumar Saha
Abstract:
The ever-increasing human population and its demand for energy are placing stress upon conventional energy sources; as demand for power continues to outstrip supply, the need to optimize energy distribution and utilization is emerging as an important focus for various stakeholders. The distribution of available energy must be achieved in such a way that the needs of the consumer are satisfied. However, if the availability of resources is not sufficient to satisfy consumer demand, it is necessary to find a method to select consumers based on factors such as their socio-economic or environmental impacts. Weighting consumer types in this way can help separate them based on their relative importance, and cognitive optimization of the allocation process can then be carried out so that, even on days of particularly scarce supply, the socio-economic impacts of not satisfying the needs of consumers are minimized. In this context, the present study used fuzzy logic to assign weights to different types of consumers at an educational campus in India, and then established the optimal allocation by applying the non-linear mapping capability of neuro-genetic algorithms. The outputs of the algorithms were compared with similar outputs from particle swarm optimization and differential evolution algorithms. The results of the study demonstrate an option for the optimal utilization of available energy based on the socio-economic importance of consumers.
Keywords: power allocation, optimization problem, neural networks, environmental and ecological engineering
Procedia PDF Downloads 480
2253 Anticipation of Bending Reinforcement Based on Iranian Concrete Code Using Meta-Heuristic Tools
Authors: Seyed Sadegh Naseralavi, Najmeh Bemani
Abstract:
In this paper, the concrete design codes of the United States, New Zealand, Mexico, Italy, India, Canada, Hong Kong, Britain and the Euro Code are compared with the Iranian concrete design code. First, using an Adaptive Neuro-Fuzzy Inference System (ANFIS), the codes having the most correlation with the ninth issue of the Iranian national regulation are determined. Subsequently, two prediction methods are used for comparing the codes: an Artificial Neural Network (ANN) and multi-variable regression. The results show that the ANN performs better. Prediction is carried out using only the tensile steel ratio, ignoring the compression steel ratio.
Keywords: adaptive neuro fuzzy inference system, anticipate method, artificial neural network, concrete design code, multi-variable regression
Procedia PDF Downloads 286
2252 Empirical Model for the Estimation of Global Solar Radiation on Horizontal Surface in Algeria
Authors: Malika Fekih, Abdenour Bourabaa, Rafika Hariti, Mohamed Saighi
Abstract:
In Algeria, global solar radiation and its components are not available for all locations, so models that use the climatological parameters of the locations are required for their estimation. Empirical constants for these models have been estimated, and the results obtained have been tested statistically. The results show encouraging agreement between estimated and measured values.
Keywords: global solar radiation, empirical model, semi arid areas, climatological parameters
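The abstract does not name the functional form, but a common empirical model of this kind is the Angstrom-Prescott linear relation between clearness index H/H0 and sunshine fraction S/S0, whose constants are fitted to local data. A sketch with made-up monthly values follows:

```python
def fit_angstrom(sunshine_ratio, clearness):
    """Least-squares fit of H/H0 = a + b*(S/S0) (Angstrom-Prescott form)."""
    n = len(sunshine_ratio)
    mx = sum(sunshine_ratio) / n
    my = sum(clearness) / n
    b = sum((x - mx) * (y - my) for x, y in zip(sunshine_ratio, clearness)) / \
        sum((x - mx) ** 2 for x in sunshine_ratio)
    a = my - b * mx
    return a, b

# illustrative monthly values (sunshine fraction S/S0, clearness index H/H0)
s = [0.55, 0.60, 0.70, 0.80, 0.85, 0.90]
k = [0.48, 0.51, 0.56, 0.62, 0.64, 0.67]
a, b = fit_angstrom(s, k)
estimate = a + b * 0.75   # estimated clearness index for S/S0 = 0.75
```

Once a and b are calibrated for a location, multiplying the estimated clearness index by the extraterrestrial radiation H0 gives the global radiation on a horizontal surface.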
Procedia PDF Downloads 503
2251 Tenants Use Less Input on Rented Plots: Evidence from Northern Ethiopia
Authors: Desta Brhanu Gebrehiwot
Abstract:
The study aims to investigate the impact of land tenure arrangements on fertilizer use per hectare in Northern Ethiopia. Household- and plot-level data are used for the analysis. Land tenure contracts such as sharecropping and fixed-rent arrangements are endogenous, since different unobservable characteristics may affect renting-out decisions. Thus, the appropriate method of analysis is instrumental variable estimation. The family of instrumental variable estimators used comprises two-stage least squares (2SLS), the generalized method of moments (GMM), limited information maximum likelihood (LIML), and instrumental variable Tobit (IV-Tobit). In addition, a method to handle a binary endogenous variable is applied, which uses a two-step estimation: the first step is a probit model including the instruments, and the second step is maximum likelihood estimation (MLE) (the 'etregress' command in Stata 14). Fertilizer use per hectare was lower on sharecropped and fixed-rented plots relative to owner-operated ones, a result that supports the Marshallian inefficiency principle in sharecropping. The difference in fertilizer use per hectare could be explained by a lack of incentivized, detailed contract forms. For instance, giving a larger share of the output to the tenant under a sharecropping contract would motivate the use of more fertilizer on rented plots to maximize production, whereas most sharecropping arrangements share output equally between tenants and landlords.
Keywords: tenure-contracts, endogeneity, plot-level data, Ethiopia, fertilizer
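To illustrate why the authors turn to instrumental variables, here is a self-contained 2SLS sketch on synthetic data, where an unobserved confounder u drives both the (continuous, for simplicity) tenure variable d and the outcome y; the data-generating process and the true effect of -2 are entirely made up:

```python
import random

def ols(x, y):
    """Simple OLS of y on x; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def two_sls(y, endog, instrument):
    """Two-stage least squares, one endogenous regressor, one instrument."""
    a1, b1 = ols(instrument, endog)            # first stage
    fitted = [a1 + b1 * z for z in instrument]
    return ols(fitted, y)                      # second stage

# synthetic data: d is confounded by u but shifted by the instrument z
rng = random.Random(3)
n = 4000
z = [rng.uniform(0.0, 2.0) for _ in range(n)]
u = [rng.gauss(0.0, 1.0) for _ in range(n)]    # unobserved confounder
d = [zi + 0.5 * ui + rng.gauss(0.0, 0.1) for zi, ui in zip(z, u)]
y = [5.0 - 2.0 * di + ui + rng.gauss(0.0, 0.1) for di, ui in zip(d, u)]

a_ols, b_ols = ols(d, y)       # biased: u enters both d and y
a_iv, b_iv = two_sls(y, d, z)  # consistent for the true effect of -2
```

The naive OLS slope is pulled toward zero by the confounder, while the 2SLS estimate recovers the true coefficient; the paper's binary-treatment case uses a probit first stage instead.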
Procedia PDF Downloads 86
2250 Runoff Estimation in the Khiyav River Basin by Using the SCS_ CN Model
Authors: F. Esfandyari Darabad, Z. Samadi
Abstract:
The volume of runoff caused by rainfall in a river basin has attracted researchers in the field of water resources management. In this study, hydrological data such as rainfall and discharge for the Khiyav river basin of Meshkin city, in the northwest of Iran, were first collected and then analyzed and reconstructed. The Soil Conservation Service (SCS) has developed a method for calculating runoff that is based on the curve number (CN). This research implemented the SCS-CN model in the Khiyav river basin using GIS techniques. A weighted model was used to calculate the curve numbers, which makes it possible to take into account all of the factors contributing to runoff generation, such as the geometric characteristics of the basin, the basin's soil characteristics, vegetation, geology, climate and human factors, so that an accurate estimation of runoff from precipitation is achieved. The findings also exposed the flood-prone areas at the outlet of the Khiyav river basin, revealing that the basin has a high potential for flood generation.
Keywords: curve number, khiyav river basin, runoff estimation, SCS
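The SCS-CN method itself reduces to two standard equations: potential maximum retention from the curve number, and direct runoff from effective rainfall. The sketch below uses the standard formulas with a made-up storm depth and a hypothetical area-weighted CN mosaic (the study's actual CN values are not reproduced here):

```python
def weighted_cn(pairs):
    """Area-weighted composite curve number from [(CN, area), ...] pairs."""
    total_area = sum(area for _, area in pairs)
    return sum(cn * area for cn, area in pairs) / total_area

def scs_runoff(p_mm, cn, lam=0.2):
    """SCS-CN direct runoff depth (mm) for a storm depth p_mm (mm)."""
    s = 25400.0 / cn - 254.0       # potential maximum retention (mm)
    ia = lam * s                   # initial abstraction (commonly 0.2 * S)
    if p_mm <= ia:
        return 0.0                 # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# hypothetical land-use mosaic: (CN, area in km^2)
cn_composite = weighted_cn([(70, 2.0), (85, 1.0), (90, 0.5)])
q = scs_runoff(p_mm=60.0, cn=78)
```

In a GIS workflow, `weighted_cn` would be evaluated per sub-basin from soil, land-use and slope layers before applying the runoff equation cell by cell or basin by basin.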
Procedia PDF Downloads 622
2249 Comparative Study on Daily Discharge Estimation of Soolegan River
Authors: Redvan Ghasemlounia, Elham Ansari, Hikmet Kerem Cigizoglu
Abstract:
Hydrological modeling in arid and semi-arid regions is very important. Iran has many regions with such climate conditions, including Chaharmahal and Bakhtiari province, which need attention and appropriate management. Forecasting hydrological parameters and estimating the hydrological events of catchments provide important information that is widely used in the design, management and operation of water resources such as river systems and dams. Discharge in rivers is one of these parameters. This study presents the application and comparison of several estimation methods, namely a Feed-Forward Back Propagation Neural Network (FFBPNN), Multiple Linear Regression (MLR), Gene Expression Programming (GEP) and a Bayesian Network (BN), to predict the daily flow discharge of the Soolegan River, located in Chaharmahal and Bakhtiari province, Iran. The Soolegan station was considered, which is located on the Soolegan River at latitude 31° 38′, longitude 51° 14′, in the North Karoon basin, 2086 meters above sea level. The data used in this study are the daily discharge and daily precipitation of the Soolegan station. The FFBPNN, MLR, GEP and BN models were developed using the same input parameters for estimating the Soolegan's daily discharge. The results of the estimation models were compared with observed discharge values to evaluate the performance of the developed models; results of all methods are compared and shown in tables and charts.
Keywords: ANN, multi linear regression, Bayesian network, forecasting, discharge, gene expression programming
Procedia PDF Downloads 561
2248 Parameter Estimation in Dynamical Systems Based on Latent Variables
Authors: Arcady Ponosov
Abstract:
A novel mathematical approach is suggested that facilitates a compressed representation and efficient validation of parameter-rich ordinary differential equation models describing the dynamics of complex, especially biology-related, systems. The approach is based on identification of the system's latent variables. In particular, an efficient parameter estimation method for the compressed non-linear dynamical systems is developed. The method is applied to so-called 'power-law systems', non-linear differential equations typically used in Biochemical Systems Theory.
Keywords: generalized law of mass action, metamodels, principal components, synergetic systems
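For readers unfamiliar with power-law (S-system) models, each state variable has one power-law production term and one power-law degradation term. A minimal simulation of a hypothetical two-variable cascade follows; the rate constants and kinetic orders are illustrative, not from the paper:

```python
def s_system_rhs(x, alpha, g, beta, h):
    """S-system right-hand side: dx_i/dt = a_i*prod_j x_j^g_ij - b_i*prod_j x_j^h_ij."""
    def term(coef, exps):
        t = coef
        for xj, e in zip(x, exps):
            t *= xj ** e
        return t
    return [term(alpha[i], g[i]) - term(beta[i], h[i]) for i in range(len(x))]

def simulate(x0, alpha, g, beta, h, dt=0.01, steps=2000):
    """Forward-Euler integration; crude but adequate for this illustration."""
    x = list(x0)
    for _ in range(steps):
        dx = s_system_rhs(x, alpha, g, beta, h)
        x = [xi + dt * di for xi, di in zip(x, dx)]
    return x

# hypothetical cascade: X1 produced at a constant rate and degraded linearly;
# X2 produced from X1 with kinetic order 0.5 and degraded linearly
alpha, g = [2.0, 1.0], [[0.0, 0.0], [0.5, 0.0]]
beta, h = [1.0, 1.0], [[1.0, 0.0], [0.0, 1.0]]
x_ss = simulate([0.5, 0.5], alpha, g, beta, h)
```

At steady state this example settles at X1 = 2 and X2 = sqrt(2); parameter estimation for such systems means recovering the rate constants (alpha, beta) and kinetic orders (g, h) from observed trajectories.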
Procedia PDF Downloads 357
2247 Foil Bearing Stiffness Estimation with Pseudospectral Scheme
Authors: Balaji Sankar, Sadanand Kulkarni
Abstract:
Compliant foil gas-lubricated bearings are used to support light loads, on the order of a few kilograms, at high speeds, on the order of 50,000 RPM. The stiffness of foil bearings depends both on the stiffness of the compliant foil and on the lubricating gas film. The stiffness of the bearings plays a crucial role in the stable operation of the supported rotor over a range of speeds. This paper describes a numerical approach to estimating the stiffness of the bearings using a pseudospectral scheme. A methodology to obtain the stiffness of the foil bearing as a function of the weight of the shaft is given, and the results are presented.
Keywords: foil bearing, simulation, numerical, stiffness estimation
Procedia PDF Downloads 342
2246 Mobile Smart Application Proposal for Predicting Calories in Food
Authors: Marcos Valdez Alexander Junior, Igor Aguilar-Alonso
Abstract:
Poor nutrition is at the root of diseases that universally affect everyone, such as obesity and malnutrition. The objective of this research is to predict the calories of the food to be eaten, developing a smart mobile application that shows the user whether a meal is balanced. The present work is motivated by the large percentage of obesity and malnutrition in Peru. The development of the intelligent application is proposed with a three-layer architecture, and for the prediction of the nutritional value of the food, the use of pre-trained models based on convolutional neural networks is proposed.
Keywords: volume estimation, calorie estimation, artificial vision, food nutrition
Procedia PDF Downloads 101
2245 An Application of Fuzzy Analytical Network Process to Select a New Production Base: An AEC Perspective
Authors: Walailak Atthirawong
Abstract:
By the end of 2015, the Association of Southeast Asian Nations (ASEAN) countries proclaimed their transformation into the next stage of an economic era, with a single market and production base called the ASEAN Economic Community (AEC). One objective of the AEC is to establish ASEAN as a single market and production base, making ASEAN a highly competitive economic region with new mechanisms. As a result, it will open up more opportunities to enterprises in both trade and investment, offering a competitive market of US$ 2.6 trillion and over 622 million people. Location decisions play a key role in achieving corporate competitiveness. Hence, it may be necessary for enterprises to redesign their supply chains by adding a new production base with low labor costs, high labor skills and an abundant labor supply. This strategy will help companies, especially in the apparel industry, to maintain a competitive position in the global market. Therefore, this paper proposes a generic model for location selection decisions for the Thai apparel industry using the Fuzzy Analytical Network Process (FANP). Myanmar, Vietnam and Cambodia are considered as alternative locations, based on interviews with experts in this industry who have planned to expand their businesses into AEC countries. The contribution of this paper lies in proposing a model that is more practical and trustworthy for top management in making location selection decisions.
Keywords: apparel industry, ASEAN Economic Community (AEC), Fuzzy Analytical Network Process (FANP), location decision
Procedia PDF Downloads 237
2244 Evaluation of Dual Polarization Rainfall Estimation Algorithm Applicability in Korea: A Case Study on Biseulsan Radar
Authors: Chulsang Yoo, Gildo Kim
Abstract:
Dual polarization radar provides comprehensive information about rainfall by measuring multiple parameters. In Korea, the JPOLE and CSU-HIDRO algorithms are generally used for rainfall estimation. This study evaluated the local applicability of the JPOLE and CSU-HIDRO algorithms in Korea using rainfall data observed in August 2014 by the Biseulsan dual polarization radar and the KMA AWS. A total of 11,372 pairs of radar-ground rain rate data were classified, according to the thresholds of the synthetic algorithms, into suitable and unsuitable data. Evaluation criteria were then derived by comparing radar rain rates and ground rain rates for the entire, suitable and unsuitable data sets, respectively. The results are as follows: (1) The radar rain rate equations including KDP were found to perform better than the other equations for both the JPOLE and CSU-HIDRO algorithms, and the thresholds were found to be adequately applied for both algorithms where the specific differential phase is included. (2) The radar rain rate equations including only horizontal reflectivity and differential reflectivity performed poorly compared to the others, and the result was not improved even when only the suitable data were applied. Acknowledgments: This work was supported by the Basic Science Research Program through the National Research Foundation of Korea, funded by the Ministry of Education (NRF-2013R1A1A2011012).
Keywords: CSU-HIDRO algorithm, dual polarization radar, JPOLE algorithm, radar rainfall estimation algorithm
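Both algorithm families combine power-law rain rate relations with threshold-based estimator selection. The sketch below shows the general shape only: the R(KDP) coefficients are illustrative JPOLE-style values and the selector thresholds are placeholders, not the coefficients or thresholds evaluated in this study:

```python
def rain_rate_kdp(kdp, a=44.0, b=0.822):
    """Power-law R(KDP) in mm/h; a and b are illustrative JPOLE-style
    values, not the exact coefficients evaluated in the study."""
    sign = 1.0 if kdp >= 0 else -1.0
    return sign * a * abs(kdp) ** b

def choose_estimator(z_h, zdr, kdp, kdp_min=0.3, zdr_min=0.5):
    """Toy CSU-HIDRO-style selector: prefer KDP-based estimators once KDP
    is reliable; thresholds here are placeholders."""
    if kdp >= kdp_min:
        return "R(KDP)" if zdr < zdr_min else "R(KDP, ZDR)"
    return "R(Z)" if zdr < zdr_min else "R(Z, ZDR)"
```

The study's finding that KDP-based equations perform best corresponds to the first branch of such a selector being triggered for heavy rain, where the differential phase signal is strong.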
Procedia PDF Downloads 215
2243 Fuzzy Optimization Multi-Objective Clustering Ensemble Model for Multi-Source Data Analysis
Authors: C. B. Le, V. N. Pham
Abstract:
In modern data analysis, multi-source data appears more and more in real applications, and multi-source data clustering has emerged as an important issue in the data mining and machine learning community. Different data sources provide information about different aspects of the data, so linking multiple sources is essential to improve clustering performance. However, in practice, multi-source data is often heterogeneous, uncertain, and large, which is considered a major challenge. The ensemble is a versatile machine learning model in which learning techniques can work in parallel on big data, and clustering ensembles have been shown to outperform any standard clustering algorithm in terms of accuracy and robustness. However, most traditional clustering ensemble approaches are based on a single-objective function and single-source data. This paper proposes a new clustering ensemble method for multi-source data analysis: the fuzzy optimized multi-objective clustering ensemble method, called FOMOCE. Firstly, a clustering ensemble mathematical model based on the structure of the multi-objective clustering function, multi-source data, and dark knowledge is introduced. Then, rules for extracting dark knowledge from the input data, clustering algorithms, and base clusterings are designed and applied. Finally, a clustering ensemble algorithm is proposed for multi-source data analysis. The experiments were performed on standard sample data sets. The experimental results demonstrate the superior performance of the FOMOCE method compared to existing clustering ensemble methods and multi-source clustering methods.
Keywords: clustering ensemble, multi-source, multi-objective, fuzzy clustering
Procedia PDF Downloads 191
2242 Service Business Model Canvas: A Boundary Object Operating as a Business Development Tool
Authors: Taru Hakanen, Mervi Murtonen
Abstract:
This study aims to increase understanding of the transition of business models in servitization. The significance of service in all business has increased dramatically during the past decades. Service-dominant logic (SDL) describes this change in the economy and questions the goods-dominant logic on which business has primarily been based in the past. The business model canvas is one of the most cited and used tools for defining and developing business models. The starting point of this paper lies in the notion that the traditional business model canvas is inherently goods-oriented and best suited to product-based business. However, the basic differences between goods and services necessitate changes in business model representations as servitization proceeds. Therefore, new knowledge is needed on how the conception of the business model, and the business model canvas as its representation, should be altered in servitized firms in order to better serve business developers and inter-firm co-creation. Compared to products, services are intangible and are co-produced between the supplier and the customer. Value is always co-created in interaction between a supplier and a customer, and customer experience primarily depends on how well that interaction succeeds. The role of service experience is even stronger in service business than in product business, as services are co-produced with the customer. This paper provides business model developers with a service business model canvas, which takes into account the intangible, interactive, and relational nature of service. The study employs a design science approach that contributes to theory development via design artifacts, utilizing qualitative data gathered in workshops with ten companies from various industries.
In particular, key differences between goods-dominant logic (GDL) and SDL-based business models are identified as an industrial firm proceeds in servitization. As a result of the study, an updated version of the business model canvas, based on service-dominant logic, is provided. The service business model canvas ensures a stronger customer focus and includes aspects salient for services, such as interaction between companies, service co-production, and customer experience. It can be used for the analysis and development of a company's current service business model or for designing a new business model. It facilitates customer-focused new service design and service development, aids in the identification of development needs, and facilitates the creation of a common view of the business model. The service business model canvas can therefore be regarded as a boundary object, which facilitates the creation of a common understanding of the business model between the several actors involved. The study contributes to the business model and service business development disciplines by providing a managerial tool for practitioners in service development, and provides research insight into how servitization challenges companies' business models.
Keywords: boundary object, business model canvas, managerial tool, service-dominant logic
Procedia PDF Downloads 369
2241 Improved Distance Estimation in Dynamic Environments through Multi-Sensor Fusion with Extended Kalman Filter
Authors: Iffat Ara Ebu, Fahmida Islam, Mohammad Abdus Shahid Rafi, Mahfuzur Rahman, Umar Iqbal, John Ball
Abstract:
The application of multi-sensor fusion for enhanced distance estimation accuracy in dynamic environments is crucial for advanced driver assistance systems (ADAS) and autonomous vehicles. Limitations of single sensors such as cameras or radar in adverse conditions motivate the use of combined camera and radar data to improve reliability, adaptability, and object recognition. A multi-sensor fusion approach using an extended Kalman filter (EKF) is proposed to combine sensor measurements with a dynamic system model, achieving robust and accurate distance estimation. The research utilizes the Mississippi State University Autonomous Vehicular Simulator (MAVS) to create a controlled environment for data collection. Data analysis is performed using MATLAB. Qualitative (visualization of fused data vs ground truth) and quantitative metrics (RMSE, MAE) are employed for performance assessment. Initial results with simulated data demonstrate accurate distance estimation compared to individual sensors. The optimal sensor measurement noise variance and plant noise variance parameters within the EKF are identified, and the algorithm is validated with real-world data from a Chevrolet Blazer. In summary, this research demonstrates that multi-sensor fusion with an EKF significantly improves distance estimation accuracy in dynamic environments. This is supported by comprehensive evaluation metrics, with validation transitioning from simulated to real-world data, paving the way for safer and more reliable autonomous vehicle control.
Keywords: sensor fusion, EKF, MATLAB, MAVS, autonomous vehicle, ADAS
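The fusion idea can be shown in a linear special case of the EKF: a 1-D constant-velocity filter that absorbs one radar and one camera range measurement per step through sequential scalar updates. The motion model, noise variances and test scenario below are illustrative assumptions, not the paper's MAVS/MATLAB setup:

```python
def kf_track(radar, camera, dt=0.1, r_radar=0.25, r_cam=1.0, q=0.05):
    """1-D constant-velocity Kalman filter fusing two range sensors per
    step via sequential scalar updates with H = [1, 0]."""
    d, v = radar[0], 0.0                       # state: distance, closing rate
    P = [[1.0, 0.0], [0.0, 1.0]]               # state covariance
    estimates = []
    for zr, zc in zip(radar, camera):
        # predict with F = [[1, dt], [0, 1]] and Q = diag(q, q)
        d = d + dt * v
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # one scalar measurement update per sensor
        for z, r in ((zr, r_radar), (zc, r_cam)):
            s = P[0][0] + r                    # innovation variance
            k0, k1 = P[0][0] / s, P[1][0] / s  # Kalman gain
            innov = z - d
            d, v = d + k0 * innov, v + k1 * innov
            P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
                 [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        estimates.append(d)
    return estimates

# noise-free sanity check: a lead vehicle closing at 2 m/s from 100 m
truth = [100.0 - 0.2 * k for k in range(300)]
est = kf_track(truth, truth)
```

Because each sensor contributes its own update weighted by its noise variance (r_radar vs r_cam), the fused estimate automatically leans on whichever sensor is modeled as more precise; the EKF generalizes this by linearizing a nonlinear measurement model at each step.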
Procedia PDF Downloads 46
2240 Biomass Carbon Credit Estimation for Sustainable Urban Planning and Micro-climate Assessment
Authors: R. Niranchana, K. Meena Alias Jeyanthi
Abstract:
As a result of the present climate change dilemma, strategies for balancing energy to construct a sustainable environment have become a top concern for researchers worldwide; the environment itself has always been a solution from the earliest days of human evolution. Carbon capture begins with accurate estimation and monitoring of credit inventories and with their efficient use. Sustainable urban planning with deliverables of re-use energy models might benefit from assessment methods such as biomass carbon credit ranking. The term 'biomass energy' refers to the various ways in which living organisms can potentially be converted into a source of energy. The approaches that can be applied to biomass, and an algorithm for evaluating carbon credits, are presented in this paper. A micro-climate evaluation using computational fluid dynamics was carried out across a 1 km x 1 km location at Dindigul, India (10°24'58.68" North, 77°54.1.80 East). Sustainable urban design must be carried out considering environmental and physiological convection, conduction, radiation and evaporative heat exchange arising from the prevailing solar access and wind intensities.
Keywords: biomass, climate assessment, urban planning, multi-regression, carbon estimation algorithm
Procedia PDF Downloads 97
2239 Switched System Diagnosis Based on Intelligent State Filtering with Unknown Models
Authors: Nada Slimane, Foued Theljani, Faouzi Bouani
Abstract:
The paper addresses the problem of fault diagnosis for systems operating in several modes (normal or faulty) based on state assessment. For this purpose, we use a methodology consisting of three main processes: 1) sequential data clustering, 2) linear model regression and 3) state filtering. The Kalman filter (KF) is an algorithm that provides estimates of unknown states using a sequence of I/O measurements. Although it is an efficient technique for state estimation, it presents two main weaknesses. First, it merely predicts states without being able to isolate/classify them according to their operating modes, whether normal or faulty. To deal with this, the KF is endowed with an extra clustering step based fully on a sequential version of the k-means algorithm. Second, the KF requires state space models to provide state estimates, and these models can be unknown; a linear regularized regression is used to identify the required models. To prove its effectiveness, the proposed approach is assessed on a simulated benchmark.
Keywords: clustering, diagnosis, Kalman Filtering, k-means, regularized regression
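The sequential k-means step can be sketched as follows: centroids are seeded from the first k samples and then nudged by each incoming sample with a decaying learning rate, so mode labels are produced online. The 2-D "normal vs faulty mode" data below are synthetic and purely illustrative:

```python
import random

def sequential_kmeans(points, k):
    """Sequential (online) k-means: the first k points seed the centroids,
    then each sample nudges its nearest centroid by a decaying rate."""
    cents = [list(p) for p in points[:k]]
    counts = [1] * k
    labels = []
    for p in points:
        i = min(range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, cents[c])))
        counts[i] += 1
        eta = 1.0 / counts[i]                 # decaying learning rate
        cents[i] = [c + eta * (a - c) for a, c in zip(p, cents[i])]
        labels.append(i)
    return cents, labels

# interleaved samples from two operating modes (e.g. normal vs faulty)
rng = random.Random(0)
points = []
for _ in range(200):
    points.append((rng.gauss(0.0, 0.3), rng.gauss(0.0, 0.3)))    # mode A
    points.append((rng.gauss(10.0, 0.3), rng.gauss(10.0, 0.3)))  # mode B
cents, labels = sequential_kmeans(points, k=2)
```

In the paper's pipeline, each resulting cluster of I/O data would then feed a regularized regression to identify one per-mode state space model before Kalman filtering.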
Procedia PDF Downloads 184
2238 Modeling of Age Hardening Process Using Adaptive Neuro-Fuzzy Inference System: Results from Aluminum Alloy A356/Cow Horn Particulate Composite
Authors: Chidozie C. Nwobi-Okoye, Basil Q. Ochieze, Stanley Okiy
Abstract:
This research reports on the modeling of the age hardening process using an adaptive neuro-fuzzy inference system (ANFIS). The age hardening output (hardness) was predicted using ANFIS, with ageing time, temperature and the percentage composition of cow horn particles (CHp%) as input parameters. The correlation coefficient (R) of the predicted hardness values versus the measured values was 0.9985. Subsequently, values outside the experimental data points were predicted. When the temperature was kept constant and the other input parameters were varied, the average relative error of the predicted values was 0.0931%. When the temperature was varied and the other input parameters kept constant, the average relative error of the hardness predictions was 80%. The results show that ANFIS trained on coarse experimental data points is not very effective in predicting process outputs in the age hardening operation of the A356 alloy/CHp particulate composite. The fine experimental data required by ANFIS makes it more expensive for modeling and optimizing age hardening operations of the A356 alloy/CHp particulate composite.
Keywords: adaptive neuro-fuzzy inference system (ANFIS), age hardening, aluminum alloy, metal matrix composite
Procedia PDF Downloads 155
2237 How to Enhance Performance of Universities by Implementing Balanced Scorecard with Using FDM and ANP
Authors: Neda Jalaliyoon, Nooh Abu Bakar, Hamed Taherdoost
Abstract:
The present research recommends a balanced scorecard (BSC) framework to appraise the performance of universities. As the original balanced scorecard model has four perspectives, the same model, with the perspectives "financial", "customer", "internal process", and "learning and growth", is used here as well. By applying the fuzzy Delphi method (FDM) and a questionnaire, sixteen performance measures were identified. Moreover, the weights of the selected indicators were determined using the analytic network process (ANP). Results indicated that the most important BSC aspects were internal process (0.3149), customer (0.2769), learning and growth (0.2049), and financial (0.2033), respectively. The proposed BSC framework can help universities enhance their efficiency in a competitive environment.Keywords: balanced scorecard, higher education, fuzzy Delphi method, analytic network process (ANP)
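A hedged sketch of the weighting step: priority weights derived from a pairwise comparison matrix with the row geometric-mean approximation familiar from AHP (ANP extends this with a supermatrix capturing interdependencies). The comparison matrix below is illustrative, chosen only to roughly reproduce the ordering reported above.

```python
def prod(xs):
    out = 1.0
    for x in xs:
        out *= x
    return out

def priority_weights(pairwise):
    """Row geometric-mean priority vector, normalized to sum to 1."""
    n = len(pairwise)
    geo = [prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(geo)
    return [g / total for g in geo]

# Rows/cols: Internal Process, Customer, Learning & Growth, Financial
# (entry [i][j] = how much perspective i dominates perspective j)
A = [
    [1.0,     1.5,     2.0, 2.0],
    [1 / 1.5, 1.0,     1.5, 1.5],
    [1 / 2.0, 1 / 1.5, 1.0, 1.0],
    [1 / 2.0, 1 / 1.5, 1.0, 1.0],
]
w = priority_weights(A)   # w[0] > w[1] > w[2] = w[3]
```

In the full ANP, such local priority vectors are assembled into a supermatrix and raised to a limiting power; the sketch shows only the single-matrix weighting.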
Procedia PDF Downloads 428
2236 Risk Assessment of Building Information Modelling Adoption in Construction Projects
Authors: Amirhossein Karamoozian, Desheng Wu, Behzad Abbasnejad
Abstract:
Building information modelling (BIM) is a new technology for enhancing the efficiency of project management in the construction industry. Alongside the potential benefits of this useful technology, there are various risks and obstacles to applying it in construction projects. In this study, a decision-making approach is presented for risk assessment of BIM adoption in construction projects. Various risk factors of applying BIM during different phases of the project lifecycle are identified with the help of the Delphi method, experts' opinions, and the related literature. Afterward, Shannon's entropy and fuzzy TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) are applied to derive priorities of the identified risk factors. Results indicated that lack of knowledge among professional engineers about workflows in BIM and conflict of opinions between different stakeholders are the risk factors with the highest priority.Keywords: risk, BIM, fuzzy TOPSIS, construction projects
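A hedged sketch of the crisp backbone of this pipeline: Shannon's entropy derives criterion weights from a rating matrix, and TOPSIS ranks the alternatives by closeness to the ideal solution. The study's fuzzy variant replaces the crisp ratings with fuzzy numbers; the risk ratings below are illustrative, not the study's survey data.

```python
import math

def entropy_weights(matrix):
    """Criterion weights from Shannon entropy: low-entropy (more
    discriminating) columns get higher weight."""
    m = len(matrix)
    weights = []
    for col in zip(*matrix):
        s = sum(col)
        p = [x / s for x in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        weights.append(1.0 - e)          # degree of divergence
    total = sum(weights)
    return [w / total for w in weights]

def topsis(matrix, weights):
    """Closeness coefficients, treating all criteria as benefit-type."""
    norms = [math.sqrt(sum(x * x for x in col)) for col in zip(*matrix)]
    V = [[w * x / nrm for x, w, nrm in zip(row, weights, norms)]
         for row in matrix]
    ideal = [max(col) for col in zip(*V)]
    anti = [min(col) for col in zip(*V)]
    scores = []
    for row in V:
        d_pos = math.sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Rows: three hypothetical BIM risk factors; columns: expert ratings of
# probability and impact on a 1-9 scale.
ratings = [[8, 7], [5, 4], [2, 3]]
w = entropy_weights(ratings)
cc = topsis(ratings, w)      # higher closeness = higher-priority risk
```

Ranking the risk factors by `cc` reproduces the prioritization step described in the abstract.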
Procedia PDF Downloads 231
2235 The Generalized Pareto Distribution as a Model for Sequential Order Statistics
Authors: Mahdy Esmailian, Mahdi Doostparast, Ahmad Parsian
Abstract:
In this article, type-II censored samples of sequential order statistics (SOS) coming from the generalized Pareto distribution are considered. Maximum likelihood (ML) estimators of the unknown parameters are derived on the basis of the available multiple SOS data. Necessary conditions for the existence and uniqueness of the derived ML estimates are given. Due to the complexity of the proposed likelihood function, a useful re-parametrization is suggested. For illustrative purposes, a Monte Carlo simulation study is conducted and an illustrative example is analysed.Keywords: Bayesian estimation, generalized Pareto distribution, maximum likelihood estimation, sequential order statistics
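A hedged sketch of the building block involved: the generalized Pareto negative log-likelihood (shape xi > 0) and a crude grid search for its minimizers on a simulated i.i.d. sample. The paper's actual likelihood for multiple SOS data is considerably more involved; this only illustrates ML estimation for the underlying distribution.

```python
import math, random

def gpd_neg_loglik(xs, sigma, xi):
    """Negative log-likelihood of GPD(sigma, xi), xi > 0, support x >= 0."""
    if sigma <= 0:
        return float("inf")
    total = 0.0
    for x in xs:
        z = 1.0 + xi * x / sigma
        if z <= 0:
            return float("inf")          # observation outside the support
        total += math.log(z)
    return len(xs) * math.log(sigma) + (1.0 + 1.0 / xi) * total

def gpd_mle_grid(xs, sigmas, xis):
    """Brute-force ML estimate over a (sigma, xi) grid."""
    return min(((s, x) for s in sigmas for x in xis),
               key=lambda p: gpd_neg_loglik(xs, p[0], p[1]))

random.seed(7)
# Simulate GPD(sigma=2, xi=0.5) by inverse transform:
#   X = sigma * ((1 - U)**(-xi) - 1) / xi
data = [2.0 * ((1.0 - random.random()) ** -0.5 - 1.0) / 0.5
        for _ in range(2000)]
sig_hat, xi_hat = gpd_mle_grid(
    data,
    sigmas=[1.0 + 0.1 * i for i in range(31)],   # 1.0 .. 4.0
    xis=[0.1 + 0.05 * i for i in range(19)],     # 0.1 .. 1.0
)
```

In practice one would maximize numerically rather than by grid, which is exactly where the re-parametrization mentioned in the abstract pays off.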
Procedia PDF Downloads 513
2234 Information Management Approach in the Prediction of Acute Appendicitis
Authors: Ahmad Shahin, Walid Moudani, Ali Bekraki
Abstract:
This research aims at presenting a predictive data mining model for accurate diagnosis of acute appendicitis, with the purpose of maximizing health service quality, minimizing morbidity/mortality, and reducing cost. Acute appendicitis is a very common disease that requires timely, accurate diagnosis and surgical intervention. Although the treatment of acute appendicitis is simple and straightforward, its diagnosis is still difficult because no single sign, symptom, laboratory test, or imaging examination accurately confirms the diagnosis in all cases, which contributes to increased morbidity and negative appendectomy. In this study, the authors propose to generate an accurate model for predicting patients with acute appendicitis based, firstly, on a segmentation technique associated with the ABC algorithm to segment the patients; secondly, on applying fuzzy logic to process the massive volume of heterogeneous and noisy data (age, sex, fever, white blood cell count, neutrophilia, CRP, urine, ultrasound, CT, appendectomy, etc.) in order to express knowledge and analyze the relationships among the data in a comprehensive manner; and thirdly, on applying a dynamic programming technique to reduce the number of data attributes. The proposed model is evaluated against a set of benchmark techniques and on a set of benchmark classification problems of osteoporosis, diabetes, and heart disease obtained from the UCI repository and other data sources.Keywords: healthcare management, acute appendicitis, data mining, classification, decision tree
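A hedged illustration of the fuzzy-logic step applied to one noisy clinical variable: a triangular membership function grading a white blood cell count into a degree of "elevated". The breakpoints are illustrative placeholders, not clinically validated thresholds from the study.

```python
def tri_membership(x, a, b, c):
    """Membership in the triangular fuzzy number (a, b, c):
    0 outside [a, c], rising to a peak of 1 at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# WBC count in x10^9/L graded against an illustrative "elevated" fuzzy set
elevated_wbc = tri_membership(13.0, a=10.0, b=15.0, c=25.0)   # partial membership
```

Graded memberships like this let heterogeneous, noisy measurements enter the model as degrees rather than brittle yes/no cut-offs, which is the point of the fuzzy processing stage described above.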
Procedia PDF Downloads 352
2233 Construction Unit Rate Factor Modelling Using Neural Networks
Authors: Balimu Mwiya, Mundia Muya, Chabota Kaliba, Peter Mukalula
Abstract:
Factors affecting construction unit cost vary depending on a country's political, economic, social, and technological inclinations, and have been studied from various perspectives. Analysis of cost factors requires an appreciation of a country's practices, and the identified factors provide an indication of a country's construction economic strata. The purpose of this paper is to identify the essential factors that affect unit cost estimation and their breakdown using artificial neural networks. Twenty-five (25) identified cost factors in road construction were subjected to a questionnaire survey, and through SPSS factor analysis the factors were reduced to eight. The eight factors were analysed using a neural network (NN) to determine the proportionate breakdown of the cost factors in a given construction unit rate. The NN predicted that the political environment accounted for 44% of the unit rate, followed by contractor capacity at 22%, and financial delays, project feasibility, and overhead and profit each at 11%. Project location, material availability, and corruption perception index had minimal impact on the unit cost from the training data provided. Quantified cost factors can be incorporated into unit cost estimation models (UCEM) to produce more accurate estimates. This can improve the cost estimation of infrastructure projects and establish a benchmark standard to assist the alignment of work practices and the training of new staff, permitting the ongoing development of best practices in cost estimation.Keywords: construction cost factors, neural networks, roadworks, Zambian construction industry
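A small sketch grounded in the proportions reported above: applying the NN-derived factor shares to a hypothetical unit rate to obtain a monetary breakdown (the published shares sum to 99%, presumably from rounding). The base rate of 100.0 is illustrative.

```python
# NN-predicted share of each factor in the unit rate, per the abstract
FACTOR_SHARE = {
    "political environment": 0.44,
    "contractor capacity": 0.22,
    "financial delays": 0.11,
    "project feasibility": 0.11,
    "overhead and profit": 0.11,
}

def breakdown(unit_rate, shares=FACTOR_SHARE):
    """Allocate a unit rate across cost factors by their NN-predicted shares."""
    return {factor: round(unit_rate * share, 2)
            for factor, share in shares.items()}

cost = breakdown(100.0)   # breakdown of a hypothetical 100-unit rate
```

This is the form in which quantified factors would feed a unit cost estimation model (UCEM), as the abstract suggests.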
Procedia PDF Downloads 366
2232 Robust Heart Rate Estimation from Multiple Cardiovascular and Non-Cardiovascular Physiological Signals Using Signal Quality Indices and Kalman Filter
Authors: Shalini Rankawat, Mansi Rankawat, Rahul Dubey, Mazad Zaveri
Abstract:
Physiological signals such as the electrocardiogram (ECG) and arterial blood pressure (ABP) in the intensive care unit (ICU) are often seriously corrupted by noise, artifacts, and missing data, which lead to errors in the estimation of heart rate (HR) and incidences of false alarms from ICU monitors. Clinical support in the ICU requires the most reliable heart rate estimation possible. Cardiac activity, because of its relatively high electrical energy, may introduce artifacts into electroencephalogram (EEG), electrooculogram (EOG), and electromyogram (EMG) recordings. This paper presents a robust heart rate estimation method based on detecting the R-peaks of ECG artifacts in EEG, EMG, and EOG signals, using an energy-based function and a novel signal quality index (SQI) assessment technique. SQIs of the physiological signals (EEG, EMG, and EOG) were obtained by correlating the nonlinear energy operator (Teager energy) of these signals with either the ECG or ABP signal. HR is estimated from the ECG, ABP, EEG, EMG, and EOG signals by separate Kalman filters based upon the individual SQIs. Data fusion of the HR estimates was then performed by weighting each estimate by the Kalman filters' SQI-modified innovations. The fused HR estimate is more accurate and robust than any of the individual estimates. The method was evaluated on the MIMIC II database from PhysioNet, recorded from bedside monitors of ICU patients, and provides an accurate HR estimate even in the presence of noise and artifacts.Keywords: ECG, ABP, EEG, EMG, EOG, ECG artifacts, Teager-Kaiser energy, heart rate, signal quality index, Kalman filter, data fusion
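A hedged sketch of the final fusion step only: combining per-signal HR estimates with weights derived from their signal quality indices. The actual method weights by the Kalman filters' SQI-modified innovations; a plain SQI-weighted average is shown here as a simplification, with made-up numbers.

```python
def fuse_hr(estimates, sqis):
    """Fuse per-signal HR estimates (bpm) weighted by their SQIs in [0, 1]."""
    total = sum(sqis)
    if total == 0:
        raise ValueError("all signals unusable: every SQI is zero")
    return sum(hr * q for hr, q in zip(estimates, sqis)) / total

# Hypothetical scenario: ECG channel corrupted (low SQI, spurious 140 bpm),
# while the ABP- and EEG-artifact-derived estimates are clean.
hr = fuse_hr([140.0, 72.0, 74.0], sqis=[0.1, 0.9, 0.8])
```

Because the corrupted channel carries little weight, the fused estimate stays near the clean channels' values instead of triggering a false tachycardia alarm.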
Procedia PDF Downloads 696
2231 Extended Kalman Filter Based Direct Torque Control of Permanent Magnet Synchronous Motor
Authors: Liang Qin, Hanan M. D. Habbi
Abstract:
A robust sensorless speed estimation scheme for the permanent magnet synchronous motor (PMSM) is presented, estimating the stator flux components and rotor speed with the extended Kalman filter (EKF). The PMSM and its EKF models are built in the Matlab/Simulink environment. The proposed EKF speed estimation method is also shown to be insensitive to variations in the PMSM parameters. Simulation results demonstrate good performance and robustness.Keywords: DTC, extended Kalman filter (EKF), PMSM, sensorless control, anti-windup PI
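A hedged toy sketch of the EKF idea in one dimension, far simpler than the PMSM case: a constant state observed through a nonlinear measurement z = x² + v, linearized at each step via the Jacobian H = 2x. The same predict/linearize/update cycle is what the EKF applies to the motor's nonlinear flux and speed equations; all values below are illustrative.

```python
def ekf_scalar(zs, x0, p0, q, r):
    """Scalar EKF: identity dynamics, nonlinear measurement h(x) = x**2."""
    x, p = x0, p0
    for z in zs:
        p = p + q                     # predict (state assumed constant)
        H = 2.0 * x                   # Jacobian of h at the current estimate
        k = p * H / (H * p * H + r)   # gain from the linearized model
        x = x + k * (z - x * x)       # update with the nonlinear residual
        p = (1.0 - k * H) * p
    return x

# True state 3.0, observed as x**2 with small noise; initial guess 2.5
x_hat = ekf_scalar([9.1, 8.9, 9.05, 8.95, 9.0], x0=2.5, p0=1.0, q=1e-4, r=0.1)
```

The EKF converges toward the true state even though the measurement never observes x directly, which is the essence of sensorless estimation.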
Procedia PDF Downloads 666
2230 DNA PLA: A Nano-Biotechnological Programmable Device
Authors: Hafiz Md. HasanBabu, Khandaker Mohammad Mohi Uddin, Md. IstiakJaman Ami, Rahat Hossain Faisal
Abstract:
Computing in biomolecular programming is performed through different types of reactions, with proteins and nucleic acids used to store the information the programs generate. DNA (deoxyribonucleic acid) can be used to build a molecular computing system and operating system because of its predictable molecular behavior. DNA devices have clear advantages over conventional devices when applied to problems that can be divided into separate, non-sequential tasks: DNA strands can hold a great deal of data in memory and conduct multiple operations at once, thus solving decomposable problems much faster. A programmable logic array (PLA) is a programmable device providing configurable AND operations and OR operations. In this paper, a DNA PLA is designed from different molecular operations on DNA molecules using the proposed algorithms. The molecular PLA can take advantage of DNA's physical properties to store information and perform calculations, including extremely dense information storage, enormous parallelism, and extraordinary energy efficiency.Keywords: biological systems, DNA computing, parallel computing, programmable logic array, PLA, DNA
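A sketch of conventional PLA semantics (not the paper's DNA encoding): an AND plane of product terms feeding an OR plane, here programmed as a half adder. The variable names and programming are illustrative.

```python
def pla(inputs, and_plane, or_plane):
    """Evaluate a PLA. inputs: dict of variable values (0/1);
    and_plane: product terms, each a dict var -> required value;
    or_plane: per-output lists of product-term indices to OR together."""
    terms = [all(inputs[v] == val for v, val in term.items())
             for term in and_plane]
    return [any(terms[i] for i in idxs) for idxs in or_plane]

AND = [
    {"a": 1, "b": 0},   # term 0: a AND NOT b
    {"a": 0, "b": 1},   # term 1: NOT a AND b
    {"a": 1, "b": 1},   # term 2: a AND b
]
OR = [
    [0, 1],             # sum   = term0 OR term1  (a XOR b)
    [2],                # carry = term2           (a AND b)
]
outs = pla({"a": 1, "b": 1}, AND, OR)
```

The DNA design described in the abstract realizes these same AND/OR planes with molecular operations on strands rather than transistor arrays.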
Procedia PDF Downloads 130
2229 Key Performance Indicators and the Model for Achieving Digital Inclusion for Smart Cities
Authors: Khalid Obaed Mahmod, Mesut Cevik
Abstract:
The term smart city has appeared recently, accompanied by many definitions and concepts. As a simplified and clear definition, a smart city is a geographical location that has gained efficiency and flexibility in providing public services to citizens through its use of information and communication technologies, which is what distinguishes it from other cities. Smart cities connect the various components of the city through main and sub-networks, in addition to a set of applications, and are thus able to collect data that form the basis for providing technological solutions to manage resources and provide services. The operation of a smart city rests on artificial intelligence and Internet of Things technology. The work presents the concept of smart cities; the pillars, standards, and evaluation indicators on which smart cities depend; and the reasons that prompted the world to move towards establishing them. It also provides a simplified hypothetical way to measure the ideal smart city model by defining some indicators and key pillars, simulating them with logic circuits, and testing them to determine whether the city can be considered an ideal smart city.Keywords: factors, indicators, logic gates, pillars, smart city
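A hedged sketch of the measurement idea described above: each pillar's indicator score is thresholded to a logic level, and the "ideal smart city" output is the AND of all pillar levels. The pillar names and thresholds here are illustrative, not the paper's exact set.

```python
# Illustrative pillars and minimum indicator scores (0-100 scale)
PILLARS = {
    "connectivity": 70,
    "e-services": 60,
    "sustainability": 65,
    "data platform": 75,
}

def is_ideal_smart_city(scores, thresholds=PILLARS):
    """Threshold each pillar score to a logic level, then AND them all."""
    levels = {p: scores.get(p, 0) >= t for p, t in thresholds.items()}
    return all(levels.values()), levels

ok, levels = is_ideal_smart_city({
    "connectivity": 82, "e-services": 64,
    "sustainability": 71, "data platform": 78,
})
```

A single failing pillar drives the AND gate low, mirroring the logic-circuit simulation the abstract describes: a city qualifies as "ideal" only when every key pillar clears its indicator threshold.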
Procedia PDF Downloads 153