Search results for: cost estimating
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2357

1547 Logistical Optimization of Nuclear Waste Flows during Decommissioning

Authors: G. Dottavio, M. F. Andrade, F. Renard, V. Cheutet, A.-L. L. S. Vercraene, P. Hoang, S. Briet, R. Dachicourt, Y. Baizet

Abstract:

A large amount of technological equipment and many highly skilled workers have to be mobilized over long periods during nuclear decommissioning processes. The related operations generate complex waste flows and high inventory levels, associated with information flows of heterogeneous types. Given that more than 10 decommissioning operations are ongoing in France and about 50 are expected by 2025, a major challenge must be addressed today. The management of decommissioning and dismantling of nuclear installations represents an important part of the nuclear energy lifecycle, since it has an environmental impact as well as a strong influence on the cost of electricity and therefore on the price paid by end-users. Bringing new technologies and new solutions into decommissioning methodologies is thus essential to improve the quality, cost, and schedule efficiency of these operations. The purpose of our project is to improve decommissioning management efficiency by developing a decision-support framework dedicated to planning nuclear facility decommissioning operations and to optimizing waste evacuation through a logistics approach. The target is an easy-to-handle tool capable of i) predicting waste flows and proposing the best decommissioning logistics scenario and ii) managing information during all steps of the process and tracking progress: planning, resources, delays, authorizations, saturation zones, waste volumes, etc. In this article we present our results from the simulation of nuclear waste flows during the decommissioning process, based on discrete-event simulation supported by the FLEXSIM 3-D software. This approach was successfully tested, and our work confirms its ability to improve this type of industrial process by identifying the critical points of the chain and the corresponding improvement actions. Such a simulation, executed before the start of operations on the basis of a first design, allows 'what-if' process evaluation and helps ensure the quality of the process in an uncertain context. Simulating nuclear waste flows before evacuation from the site will help reduce the cost and duration of the decommissioning process by optimizing planning and the use of resources, transitional storage, and expensive radioactive waste containers. Additional benefits are expected for the governance of waste evacuation, since the tool will enable shared responsibility for the waste flows.
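
As a rough, illustrative companion to the discrete-event approach described above (the study itself used the commercial FLEXSIM 3-D package), the following minimal sketch models a waste buffer and a single evacuation route with the open-source SimPy library; the capacity, arrival and evacuation parameters are invented placeholders, not values from the project.

```python
import random
import simpy

# Minimal sketch of a decommissioning waste-flow model (hypothetical parameters).
BUFFER_CAPACITY = 20      # transitional storage slots (placeholder)
EVACUATION_TIME = 5.0     # days to fill, check and ship one container (placeholder)
MEAN_ARRIVAL = 1.0        # mean days between waste-package arrivals (placeholder)

def waste_generator(env, buffer):
    """Dismantling operations produce waste packages at random intervals."""
    pkg_id = 0
    while True:
        yield env.timeout(random.expovariate(1.0 / MEAN_ARRIVAL))
        pkg_id += 1
        yield buffer.put(1)  # package enters transitional storage
        print(f"day {env.now:6.1f}: package {pkg_id} stored, level={buffer.level}")

def evacuation(env, buffer):
    """A single evacuation route ships packages one container at a time."""
    while True:
        yield buffer.get(1)               # wait for a package
        yield env.timeout(EVACUATION_TIME)
        print(f"day {env.now:6.1f}: one container evacuated, level={buffer.level}")

random.seed(42)
env = simpy.Environment()
storage = simpy.Container(env, capacity=BUFFER_CAPACITY, init=0)
env.process(waste_generator(env, storage))
env.process(evacuation(env, storage))
env.run(until=60)  # simulate 60 days and watch for storage saturation
```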

Keywords: Nuclear decommissioning, logistical optimization, decision-support framework, waste management.

1546 Convective Hot Air Drying of Different Varieties of Blanched Sweet Potato Slices

Authors: M. O. Oke, T. S. Workneh

Abstract:

The drying behavior of blanched sweet potato in a cabinet dryer was investigated using five air temperatures (40-80°C) and ten sweet potato varieties sliced to 5 mm thickness. The drying data were fitted to eight thin-layer models. The Modified Henderson and Pabis model gave the best fit to the experimental moisture ratio data for all varieties, while the Newton (Lewis) and Wang and Singh models gave the poorest fit. Drying time decreased considerably with increasing hot air temperature. The effective diffusivity (Deff) obtained for the Bophelo variety (1.27 × 10⁻⁹ to 1.77 × 10⁻⁹ m²/s) was the lowest, while that of S191 (1.93 × 10⁻⁹ to 2.47 × 10⁻⁹ m²/s) was the highest, indicating that moisture diffusivity in sweet potato is affected by genetic factors. Activation energy values ranged from 0.27 to 6.54 kJ/mol. This low activation energy indicates that drying of sweet potato slices requires little energy and is hence a cost- and energy-saving method.
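
For readers who want to reproduce this style of thin-layer drying analysis, the sketch below fits the Modified Henderson and Pabis model, MR = a·exp(-kt) + b·exp(-gt) + c·exp(-ht), and derives an Arrhenius-type activation energy from effective diffusivities; all data arrays are invented placeholders, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

# Modified Henderson and Pabis thin-layer drying model.
def mhp(t, a, k, b, g, c, h):
    return a * np.exp(-k * t) + b * np.exp(-g * t) + c * np.exp(-h * t)

# Placeholder drying data (time in minutes, dimensionless moisture ratio).
t = np.array([0, 30, 60, 120, 180, 240, 300, 360], dtype=float)
mr = np.array([1.00, 0.82, 0.67, 0.45, 0.31, 0.21, 0.15, 0.10])

p0 = [0.4, 0.01, 0.3, 0.01, 0.3, 0.01]            # crude initial guess
params, _ = curve_fit(mhp, t, mr, p0=p0, maxfev=20000)
print("fitted MHP parameters:", np.round(params, 4))

# Arrhenius plot: ln(Deff) versus 1/T has slope = -Ea/R.
T = np.array([40, 50, 60, 70, 80]) + 273.15        # K (placeholder temperatures)
Deff = np.array([1.3, 1.5, 1.7, 2.0, 2.4]) * 1e-9  # m^2/s (placeholder values)
slope, intercept, r, *_ = linregress(1.0 / T, np.log(Deff))
Ea = -slope * 8.314 / 1000.0                       # kJ/mol
print(f"activation energy ~ {Ea:.2f} kJ/mol (r^2 = {r**2:.3f})")
```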

Keywords: Sweet Potato Slice, Drying Models, Moisture Ratio, Moisture Diffusivity, Activation Energy.

1545 Comparing Machine Learning Estimation of Fuel Consumption of Heavy-Duty Vehicles

Authors: Victor Bodell, Lukas Ekstrom, Somayeh Aghanavesi

Abstract:

Fuel consumption (FC) is one of the key factors determining the expense of operating a heavy-duty vehicle. A customer may therefore request an estimate of the FC of a desired vehicle. The modular design of heavy-duty vehicles allows their construction by specifying the building blocks, such as gearbox, engine and chassis type. If the combination of building blocks is unprecedented, it is unfeasible to measure the FC, since this would first require the construction of the vehicle. This paper proposes a machine learning approach to predict FC. The study uses data on around 40,000 vehicles, including vehicle specifications and operational environmental conditions such as road slopes and driver profiles. All vehicles have diesel engines and a mileage of more than 20,000 km. The data are used to investigate the accuracy of the machine learning algorithms linear regression (LR), K-nearest neighbor (KNN) and artificial neural networks (ANN) in predicting fuel consumption for heavy-duty vehicles. Performance of the algorithms is evaluated by reporting the prediction error on both simulated data and operational measurements, and the algorithms are compared using nested cross-validation and statistical hypothesis testing. The statistical evaluation finds that ANNs have the lowest prediction error compared to LR and KNN on both simulated and operational data, with a mean relative prediction error of 0.3% on simulated data and 4.2% on operational data.
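
The comparison described above can be approximated with standard scikit-learn tooling; the sketch below runs a nested cross-validation over linear regression, KNN and a small neural network on synthetic data (the features, sample size and network sizes are placeholders, not the authors' 40,000-vehicle dataset).

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for vehicle-specification + operating-condition features.
X, y = make_regression(n_samples=1000, n_features=15, noise=10.0, random_state=0)
y = y + 500.0   # shift targets to positive values so a relative error metric is meaningful

models = {
    "LR": (make_pipeline(StandardScaler(), LinearRegression()), {}),
    "KNN": (make_pipeline(StandardScaler(), KNeighborsRegressor()),
            {"kneighborsregressor__n_neighbors": [3, 5, 10]}),
    "ANN": (make_pipeline(StandardScaler(), MLPRegressor(max_iter=2000, random_state=0)),
            {"mlpregressor__hidden_layer_sizes": [(32,), (64, 32)]}),
}

outer = KFold(n_splits=5, shuffle=True, random_state=1)
inner = KFold(n_splits=3, shuffle=True, random_state=2)

for name, (pipe, grid) in models.items():
    # Inner loop tunes hyper-parameters, outer loop estimates prediction error.
    search = GridSearchCV(pipe, grid, cv=inner,
                          scoring="neg_mean_absolute_percentage_error")
    scores = cross_val_score(search, X, y, cv=outer,
                             scoring="neg_mean_absolute_percentage_error")
    print(f"{name}: mean relative error = {-scores.mean():.3%}")
```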

Keywords: Artificial neural networks, fuel consumption, machine learning, regression, statistical tests.

1544 Unsteady Transonic Aerodynamic Analysis for Oscillatory Airfoils using Time Spectral Method

Authors: Mohamad Reza Mohaghegh, Majid Malek Jafarian

Abstract:

This research proposes an algorithm for the simulation of time-periodic unsteady problems via the solution of the unsteady Euler and Navier-Stokes equations. The algorithm, called the Time Spectral method, uses a Fourier representation in time and hence solves for the periodic state directly, without resolving the transients that consume most of the resources in a time-accurate scheme. The mathematical tool used here is the discrete Fourier transform. By enforcing periodicity and using a Fourier representation in time, the method achieves spectral accuracy and shows great potential for reducing the computational cost compared to conventional time-accurate methods. The accuracy and efficiency of the technique are verified by Euler and Navier-Stokes calculations for pitching airfoils. Because of the turbulent nature of the flow, the Baldwin-Lomax turbulence model is used in the viscous flow analysis. The results obtained with the Time Spectral method are compared with experimental data and confirm that only a small number of time intervals per pitching cycle is required to capture the flow physics.
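
The core ingredient of the time-spectral idea, replacing the time derivative of a periodic solution with a spectral operator built from the discrete Fourier transform, can be illustrated in a few lines (a generic sketch, not the authors' Euler/Navier-Stokes solver):

```python
import numpy as np

# Spectral time derivative of a T-periodic signal sampled at N instants.
def spectral_time_derivative(u, T):
    N = len(u)
    k = np.fft.fftfreq(N, d=1.0 / N)       # integer wavenumbers ..., -1, 0, 1, ...
    u_hat = np.fft.fft(u)
    dudt_hat = (2j * np.pi / T) * k * u_hat
    return np.real(np.fft.ifft(dudt_hat))

T = 2.0 * np.pi                            # period of the pitching motion (illustrative)
N = 16                                     # time instances per cycle
t = np.linspace(0.0, T, N, endpoint=False)
u = np.sin(3.0 * t)                        # a periodic "flow variable"

dudt = spectral_time_derivative(u, T)
print(np.allclose(dudt, 3.0 * np.cos(3.0 * t)))   # True: spectral accuracy
```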

Keywords: Time Spectral Method, time-periodic unsteady flow, discrete Fourier transform, pitching airfoil, turbulent flow

1543 Reasons for the Slow Uptake of Embodied Carbon Estimation in the Sri Lankan Building Sector

Authors: Amalka Nawarathna, Nirodha Fernando, Zaid Alwan

Abstract:

Global carbon reduction is not merely a responsibility of environmentally advanced developed countries; it is also a responsibility of developing countries, regardless of their smaller contribution to global carbon emissions. In recognition of this, Sri Lanka, as a developing country, has started promoting green building construction as one reduction strategy. However, notwithstanding the increasing attention to Embodied Carbon (EC) reduction in the global building sector, such initiatives still focus mostly on Operational Carbon (OC) reduction (through improving operational energy). Adequate attention has not yet been given to EC estimation and reduction. Therefore, this study aims to identify the reasons for the slow uptake of EC estimation in the Sri Lankan building sector. To achieve this aim, 16 global barriers to estimating EC were identified in the existing literature. They were then subjected to a pilot survey to identify the significant reasons for the slow uptake of EC estimation in the Sri Lankan building sector. A questionnaire with a three-point Likert scale was used to this end, and the collected data were analysed using descriptive statistics. The findings revealed that 11 of the 16 challenges/barriers are highly relevant reasons for the slow uptake of EC estimation in buildings in Sri Lanka, while the other five remain moderately relevant; no reason was found to be of low relevance. The paper concludes that all the known reasons are significant for the Sri Lankan building sector and that it is necessary to address them in order to increase attention to EC reduction.

Keywords: Embodied carbon emissions, embodied carbon estimation, global carbon reduction, Sri Lankan building sector.

1542 Automated Textile Defect Recognition System Using Computer Vision and Artificial Neural Networks

Authors: Atiqul Islam, Shamim Akhter, Tumnun E. Mursalin

Abstract:

Least Developed Countries (LDCs) such as Bangladesh, which earns 25% of its revenue from textile exports, need to produce less defective textiles to minimize production cost and time. Inspection processes in these industries are mostly manual and time consuming. Reducing errors in identifying fabric defects requires a more automated and accurate inspection process. Considering this gap, this research implements a Textile Defect Recognizer which uses computer vision methodology combined with multi-layer neural networks to identify four classes of textile defects. The recognizer, suitable for LDCs, identifies fabric defects at an economical cost and provides a less error-prone inspection system in real time. In order to generate the input set for the neural network, the recognizer first captures digital fabric images with an image acquisition device and converts the RGB images into binary images by restoration and local threshold techniques. The outputs of the processed image, namely the area of the faulty portion, the number of objects in the image, and the sharpness factor of the image, are then fed to the input layer of the neural network, which uses the backpropagation algorithm to compute the weights and generates the desired classification of defects as output.
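
A minimal sketch of the same pipeline idea (threshold-based binarisation, three scalar features, multi-layer perceptron trained by backpropagation) is shown below; the feature extraction uses generic scikit-image calls and the training data are random placeholders, so it illustrates the structure rather than the authors' implementation.

```python
import numpy as np
from skimage.filters import threshold_local
from skimage.measure import label
from sklearn.neural_network import MLPClassifier

def fabric_features(gray_image):
    """Binarise a grayscale fabric image and extract the three scalar features
    the abstract mentions: faulty area, object count and a sharpness factor."""
    thresh = threshold_local(gray_image, block_size=35)
    binary = gray_image < thresh                       # defect pixels as foreground
    faulty_area = binary.sum() / binary.size           # fraction of defective area
    n_objects = label(binary).max()                    # connected defect regions
    gy, gx = np.gradient(gray_image.astype(float))
    sharpness = np.mean(np.hypot(gx, gy))              # crude "sharp factor"
    return [faulty_area, n_objects, sharpness]

# Placeholder training set: feature vectors and one of four defect classes.
rng = np.random.default_rng(0)
X_train = rng.random((200, 3))
y_train = rng.integers(0, 4, size=200)                 # 4 defect classes

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)                              # back-propagation training
print(clf.predict(X_train[:5]))
```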

Keywords: Computer vision, image acquisition device, machine vision, multi-layer neural networks.

1541 Further Development in Predicting Post-Earthquake Fire Ignition Hazard

Authors: Pegah Farshadmanesh, Jamshid Mohammadi, Mehdi Modares

Abstract:

In nearly all earthquakes of the past century that resulted in moderate to significant damage, the occurrence of post-earthquake fire ignition (PEFI) has imposed a serious hazard and caused severe damage, especially in urban areas. In order to reduce the loss of life and property caused by post-earthquake fires, there is a crucial need for predictive models to estimate the PEFI risk. The parameters affecting PEFI risk can be categorized as: 1) factors influencing fire ignition under normal (non-earthquake) conditions, including floor area, building category, ignitability, type of appliance, and prevention devices, and 2) earthquake-related factors contributing to the PEFI risk, including building vulnerability and earthquake characteristics such as intensity, peak ground acceleration, and peak ground velocity. State-of-the-art statistical PEFI risk models are based solely on limited available earthquake data, and therefore they cannot predict the PEFI risk for areas with insufficient earthquake records, since such records are needed to estimate the PEFI model parameters. In this paper, the correlation between normal-condition ignition risk, peak ground acceleration, and PEFI risk is examined in an effort to offer a means for predicting post-earthquake ignition events. An illustrative example is presented to demonstrate how such correlation can be employed in a seismic area to predict PEFI hazard.

Keywords: Fire risk, post-earthquake fire ignition (PEFI), risk management, seismicity.

1540 Using Field Indices of Rill and Gully in order to Erosion Estimating and Sediment Analysis (Case Study: Menderjan Watershed in Isfahan Province, Iran)

Authors: Masoud Nasri, Sadat Feiznia, Mohammad Jafari, Hasan Ahmadi

Abstract:

Today, incorrect land use and land use change, excessive grazing, improper use of agricultural land, plowing on steep slopes, road construction, building construction, mine excavation, etc. have increased soil erosion and sediment yield. For erosion and sediment estimation one can use statistical and empirical methods, which require a land unit map and maps of the effective factors. However, these empirical methods are usually time consuming and do not give accurate estimates of erosion. In this study, we applied GIS techniques to estimate erosion and sediment in the Menderjan watershed, upstream of the Zayandehrud river in central Iran. Erosion faces in each land unit were defined on the basis of the land use, geology and land unit maps using GIS. The UTM coordinates of each erosion indicator showing larger erosion amounts, such as rills and gullies, were recorded in GIS using GPS data. The frequency of erosion indicators in each land unit and land use, and their sediment yields, were calculated. In addition, using trend analysis of sediment yield changes at the watershed outlet (the Menderjan hydrometric gauge station), the related parameters and estimation errors were calculated. The results of this study, in view of the implemented watershed management projects, can be used for more rapid and more accurate estimation of erosion than traditional methods. These results can also be used for regional erosion assessment and for remote sensing image processing.

Keywords: Erosion and sedimentation, Gully, Rill, GIS, GPS, Menderjan Watershed

1539 Optimal Duty-Cycle Modulation Scheme for Analog-To-Digital Conversion Systems

Authors: G. Sonfack, J. Mbihi, B. Lonla Moffo

Abstract:

This paper presents an optimal duty-cycle modulation (ODCM) scheme for analog-to-digital conversion (ADC) systems. The overall ODCM-based ADC problem is decoupled into optimal DCM and digital filtering sub-problems, while taking into account the constraints of mutual design parameters between the two. Using a set of three lemmas and four morphological theorems, the ODCM sub-problem is modelled as a nonlinear cost function with nonlinear constraints. Then, a weighted least pth norm of the error between the ideal and predicted frequency responses is used as the cost function for the digital filtering sub-problem. In addition, the MATLAB fmincon and MATLAB iirlnorm tools are used as the optimal DCM and least pth norm solvers, respectively. Furthermore, the virtual simulation scheme of an overall prototype ODCM-based ADC system is implemented and tested with the help of the Simulink tool according to a relevant set of design data, i.e., 3 kHz modulating bandwidth, 172 kHz maximum modulation frequency and 25 MHz sampling frequency. Finally, the results obtained show that the ODCM-based ADC achieves, over the 3 kHz modulating bandwidth: 57 dBc of SINAD (signal-to-noise and distortion ratio), 58 dB of SFDR (spurious-free dynamic range), -80 dBc of THD (total harmonic distortion), and 10 bits of minimum resolution. These performance levels appear to be a great challenge within the class of oversampling ADC topologies with a 2nd-order IIR (infinite impulse response) decimation filter.

Keywords: Digital IIR filter, morphological lemmas and theorems, optimal DCM-based ADC, virtual simulation, weighted least pth norm.

1538 Development of a Robust Supply Chain for Dynamic Operating Environment

Authors: Shilan Li, Ivan Arokiam, Peter Jarvis, Wendy Garner, Gazelleh Moradi, Stuart Wakefield

Abstract:

As we move further into the twenty-first century, organisations are under increasing pressure to deliver high product variation at a reasonable cost without compromising quality. In a number of cases this takes the form of a customised or high-variety, low-volume manufacturing system that requires prudent management of resources, among a number of functions, to achieve competitive advantage. Purchasing and supply chain management is one such function and, due to its substantial interaction with external elements, needs to be strategically managed. This requires a number of primary and supporting tools that enable appropriate decisions to be made rapidly. This capability is especially vital in a dynamic environment, as it plays a pivotal role in increasing the profit margin of the product. The management of this function can be challenging in itself, and even more so for Small and Medium Enterprises (SMEs) due to the limited resources and expertise at their disposal. This paper discusses the development of tools and concepts for effectively managing the purchasing and supply chain function. The developed tools and concepts provide a cost-effective way of managing this function within SMEs. The paper further shows the use of these tools within Contechs, a manufacturer of luxury boat interiors, and the associated benefits achieved as a result of this implementation. Finally, a generic framework for use in such environments is presented.

Keywords: Lean, Supply Chain, High variety Low volume, Small and Medium Enterprises.

1537 A Prediction Model for Dynamic Responses of Building from Earthquake Based on Evolutionary Learning

Authors: Kyu Jin Kim, Byung Kwan Oh, Hyo Seon Park

Abstract:

Structural health monitoring systems based on seismic responses have been used to prevent seismic damage. Structural seismic damage to a building is caused by instantaneous stress concentrations, which are related to the dynamic characteristics of the earthquake. Meanwhile, seismic response analysis to estimate the dynamic responses of a building demands significantly high computational cost. To prevent the failure of structural members under earthquake loading, and to avoid the significantly high computational cost of seismic response analysis, this paper presents an artificial neural network (ANN) based prediction model for the dynamic responses of a building over a specific time length. From the measured dynamic responses, the input and output nodes of the ANN are formed by segments of this specific time length and used for training. In the model, an evolutionary radial basis function neural network (ERBFNN), in which a radial basis function network (RBFN) is integrated with an evolutionary optimization algorithm to find the RBF variables, is implemented. The effectiveness of the proposed model is verified through an analytical study in which responses from dynamic analysis of a multi-degree-of-freedom system are used as training data for the ERBFNN.

Keywords: Structural health monitoring, dynamic response, artificial neural network, radial basis function network, genetic algorithm.

1536 Rapid Finite-Element Based Airport Pavement Moduli Solutions using Neural Networks

Authors: Kasthurirangan Gopalakrishnan, Marshall R. Thompson, Anshu Manik

Abstract:

This paper describes the use of artificial neural networks (ANN) for predicting non-linear layer moduli of flexible airfield pavements subjected to new generation aircraft (NGA) loading, based on the deflection profiles obtained from Heavy Weight Deflectometer (HWD) test data. The HWD test is one of the most widely used tests for routinely assessing the structural integrity of airport pavements in a non-destructive manner. The elastic moduli of the individual pavement layers backcalculated from the HWD deflection profiles are effective indicators of layer condition and are used for estimating the remaining pavement life. HWD tests were periodically conducted at the Federal Aviation Administration's (FAA's) National Airport Pavement Test Facility (NAPTF) to monitor the effect of Boeing 777 (B777) and Boeing 747 (B747) test gear trafficking on the structural condition of flexible pavement sections. In this study, a multi-layer, feed-forward network which uses an error-backpropagation algorithm was trained to approximate the HWD backcalculation function. A synthetic database generated using an advanced non-linear pavement finite-element program was used to train the ANN to overcome the limitations associated with conventional pavement moduli backcalculation. The changes in ANN-based backcalculated pavement moduli with trafficking were used to compare the relative severity effects of the aircraft landing gears on the NAPTF test pavements.

Keywords: Airfield pavements, ANN, backcalculation, new generation aircraft

1535 Tailoring of ECSS Standard for Space Qualification Test of CubeSat Nano-Satellite

Authors: B. Tiseo, V. Quaranta, G. Bruno, G. Sisinni

Abstract:

There is an increasing demand for nano-satellite development among universities, small companies and emerging countries. Low cost and fast delivery are the main advantages of this class of satellites, achieved by the extensive use of commercial off-the-shelf components. On the other hand, the loss of reliability and the poor success rate are limiting the use of nano-satellites to educational and technology-demonstration missions rather than commercial purposes. Standardization of nano-satellite environmental testing, by tailoring the existing test standards for medium/large satellites, is therefore a crucial step for their market growth. It is thus fundamental to find the right trade-off between improving reliability and keeping the low-cost/fast-delivery advantages. This is even more essential for satellites of the CubeSat family. Such miniaturized and standardized satellites have a 10 cm cubic form and a mass of no more than 1.33 kilograms per unit (1U). For this class of nano-satellites, the qualification process is mandatory to reduce the risk of failure during a space mission. This paper reports the description and results of the space qualification test campaign performed on Endurosat's CubeSat nano-satellite and modules. Mechanical and environmental tests have been carried out step by step, from the testing of single subsystems up to the assembled CubeSat nano-satellite. Functional tests have been performed throughout the test campaign to verify the functionality of the systems. The test durations and levels have been selected by tailoring the European Space Agency standard ECSS-E-ST-10-03C and GEVS: GSFC-STD-7000A.

Keywords: CubeSat, Nano-satellite, shock, testing, vibration.

1534 Budget Optimization for Maintenance of Bridges in Egypt

Authors: Hesham Abd Elkhalek, Sherif M. Hafez, Yasser M. El Fahham

Abstract:

Allocating a limited budget to maintain bridge networks and selecting effective maintenance strategies for each bridge are challenging tasks for maintenance managers and decision makers. In Egypt, bridges are continuously deteriorating, and in many cases maintenance works are performed only in response to user complaints. The objective of this paper is to develop a practical and reliable framework to manage the maintenance, repair, and rehabilitation (MR&R) activities of a bridge network considering performance and budget limits. The model solves an optimization problem that maximizes the average condition of the entire network given the limited available budget using a Genetic Algorithm (GA). The framework contains bridge inventory, condition assessment, repair cost calculation, deterioration prediction, and maintenance optimization. The developed model takes into account multiple parameters, including serviceability requirements, budget allocation, element importance for structural safety and serviceability, bridge impact on the network, and traffic. A questionnaire survey was conducted to complete the research scope. The proposed model is implemented in software with a user-friendly interface. The framework provides a multi-year maintenance plan for the entire network for up to five years. A case study of ten bridges is presented to validate and test the proposed model with data collected from transportation authorities in Egypt. Different scenarios are presented, and the results are reasonable, feasible and within an acceptable domain.
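
The deterioration-prediction component of such a framework is typically a Markov chain over discrete condition states; a minimal sketch with an illustrative (uncalibrated) transition matrix is:

```python
import numpy as np

# Condition states 1 (best) .. 5 (worst); annual do-nothing transition matrix.
# The probabilities below are illustrative placeholders, not calibrated values.
P = np.array([
    [0.85, 0.15, 0.00, 0.00, 0.00],
    [0.00, 0.80, 0.20, 0.00, 0.00],
    [0.00, 0.00, 0.75, 0.25, 0.00],
    [0.00, 0.00, 0.00, 0.70, 0.30],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])

state = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # a bridge element currently in state 1
condition_index = np.arange(1, 6)

for year in range(1, 6):                       # five-year planning horizon
    state = state @ P                          # Markov deterioration step
    expected_condition = state @ condition_index
    print(f"year {year}: expected condition = {expected_condition:.2f}")
```

A budget-allocation GA, as described in the abstract, would then search over maintenance actions that modify these transition probabilities so as to maximise the expected network condition within the budget.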

Keywords: Bridge Management Systems (BMS), cost optimization, condition assessment, fund allocation, Markov chain.

1533 Mix Proportioning and Strength Prediction of High Performance Concrete Including Waste Using Artificial Neural Network

Authors: D. G. Badagha, C. D. Modhera, S. A. Vasanwala

Abstract:

The civil engineering field faces a great challenge in contributing to environmental protection by finding alternatives to cement and natural aggregates. Cement utilization in concrete contributes to global warming, so it is necessary to offer sustainable solutions for producing concrete containing waste. It is very difficult to produce a designated grade of concrete containing different ingredients and water-cement ratios, including waste, while achieving the desired fresh and hardened properties as per requirements and specifications. To achieve the desired grade of concrete, a number of trials have to be carried out, and only after evaluating the different parameters over long-term performance can the mix be finalized for different purposes. This research work addresses the resulting problems of time, cost and serviceability in the construction field. In this work, an artificial neural network is introduced to fix the proportions of concrete ingredients with 50% waste replacement for the M20, M25, M30, M35, M40, M45, M50, M55 and M60 grades of concrete. Using the neural network, the mix design of high performance concrete was finalized, and the main basic mechanical properties were predicted at 3 days, 7 days and 28 days. The predicted strengths were compared with the actual experimental mix design and concrete cube strengths after 3 days, 7 days and 28 days. This experimental and neural network based mix design can be used practically in the field to give cost-effective, time-saving, feasible and sustainable high performance concrete for different types of structures.

Keywords: Artificial neural network, ANN, high performance concrete, rebound hammer, strength prediction.

1532 Empirical Evidence on Equity Valuation of Thai Firms

Authors: Somchai Supattarakul, Anya Khanthavit

Abstract:

This study aims at providing empirical evidence on a comparison of two equity valuation models: (1) the dividend discount model (DDM) and (2) the residual income model (RIM), in estimating equity values of Thai firms during 1995-2004. Results suggest that DDM and RIM underestimate equity values of Thai firms and that RIM outperforms DDM in predicting cross-sectional stock prices. Results on regression of cross-sectional stock prices on the decomposed DDM and RIM equity values indicate that book value of equity provides the greatest incremental explanatory power, relative to other components in DDM and RIM terminal values, suggesting that book value distortions resulting from accounting procedures and choices are less severe than forecast and measurement errors in discount rates and growth rates. We also document that the incremental explanatory power of book value of equity during 1998-2004, representing the information environment under Thai Accounting Standards reformed after the 1997 economic crisis to conform to International Accounting Standards, is significantly greater than that during 1995-1996, representing the information environment under the pre-reformed Thai Accounting Standards. This implies that the book value distortions are less severe under the 1997 Reformed Thai Accounting Standards than the pre-reformed Thai Accounting Standards.
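
For reference, the two valuation models compared in the study can be written in their standard textbook forms (price as discounted expected dividends for DDM, and book value plus discounted expected residual income for RIM):

```latex
% Dividend discount model (DDM)
P_0 = \sum_{t=1}^{\infty} \frac{E[d_t]}{(1+r)^t}

% Residual income model (RIM): book value plus discounted residual income,
% i.e., earnings in excess of a capital charge on beginning book value
P_0 = B_0 + \sum_{t=1}^{\infty} \frac{E[NI_t - r\,B_{t-1}]}{(1+r)^t}
```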

Keywords: Dividend Discount Model, Equity Valuation Model, Residual Income Model, Thai Stock Market

1531 Hardiness vs Alienation Personality Construct Essentially Explains Burnout Proclivity and Erroneous Computer Entry Problems in Rural Hellenic Hospital Labs

Authors: Angela–M. Paleologou, Aphrodite Dellaporta

Abstract:

Erroneous computer entry problems [here: 'e-errors'] in hospital labs threaten the patient-health carer relationship, undermining the credibility of the health system. Are e-errors random, made by lab professionals accidentally, or can they be traced to meaningful determinants? Theories on the internal causality of mistakes compel us to seek specific causal ascriptions of hospital lab e-errors instead of accepting some inescapability. Undeniably, 'To Err is Human'. But in view of rapid global changes in health organizations, e-errors are too expensive to leave without in-depth consideration. Yet, whether that e-function is supposedly entrenched in the health carers' job description remains under dispute, at least for Hellenic labs, where e-use falls behind generalized(able) appreciation and application. In this study: i) an empirical basis of a truly high annual cost of e-errors, at about €498,000.00 per rural Hellenic hospital, was established, hence interest in exploring the issue was sufficiently substantiated; ii) a sample of 270 lab-expert nurses, technicians and doctors was assessed on several personality, burnout and e-error measures; and iii) the hypothesis that the Hardiness vs Alienation personality construct disposition explains resistance vs proclivity to e-errors was tested and verified: Hardiness operates as a source of resilience in the encounter of the high pressures experienced in the hospital lab, whereas its 'opposite', Alienation, functions as a predictor not only of making e-errors but also of burnout. Implications for apt interventions are discussed.

Keywords: Hospital lab, personality hardiness/alienation, e-errors' cost, burnout.

1530 Development of Better Quality Low-Cost Activated Carbon from South African Pine Tree (Pinus patula) Sawdust: Characterization and Comparative Phenol Adsorption

Authors: L. Mukosha, M. S. Onyango, A. Ochieng, H. Kasaini

Abstract:

The remediation of water resource pollution in developing countries requires the application of alternative, sustainable, cheaper and efficient end-of-pipe wastewater treatment technologies. The feasibility of using South Africa's cheap and abundant pine tree (Pinus patula) sawdust to develop low-cost activated carbon (AC) of comparable quality to expensive commercial ACs for the abatement of water pollution was investigated. AC was developed under optimized two-stage N2-superheated steam activation conditions in a fixed-bed reactor and characterized for proximate and ultimate properties, N2-BET surface area, pore size distribution, SEM, pHPZC and FTIR. The sawdust pyrolysis activation energy was evaluated by TGA. Results indicated that chars prepared at 800°C and 2 h were suitable for development of better quality AC at 800°C and 47% burn-off, having a BET surface area (1086 m²/g), micropore volume (0.26 cm³/g) and mesopore volume (0.43 cm³/g) comparable to expensive commercial ACs and suitable for removing water contaminants. The developed AC showed basic surface functionality with a pHPZC of 10.3 and a phenol adsorption capacity higher than that of the commercial Norit (RO 0.8) AC. Thus, it is feasible to develop better quality low-cost AC from Pinus patula sawdust using two-stage N2-steam activation in a fixed-bed reactor.

Keywords: Activated carbon, phenol adsorption, sawdust integrated utilization, economical wastewater treatment.

1529 Estimation of Time Loss and Costs of Traffic Congestion: The Contingent Valuation Method

Authors: Amira Mabrouk, Chokri Abdennadher

Abstract:

The reduction of road congestion, which is inherent to the use of vehicles, is an obvious priority for public authorities. Assessing an individual's willingness to pay to save trip time is therefore akin to estimating the price change resulting from a new transport policy designed to increase network fluidity and improve the level of social welfare. This study takes an innovative perspective: it initiates an economic calculation whose objective is to estimate the monetized value of time for trips made in Sfax. The aims of this research are to i) estimate the monetized value of an hour dedicated to trips, ii) determine whether or not consumers consider the environmental variables to be significant, and iii) analyze the impact of public management of congestion through a city-toll tax on urban dwellers. The article is built upon a rich field survey conducted in the city of Sfax. Using the contingent valuation method, we analyze the 'declared time preferences' of 450 drivers during rush hours. Taking due account of the biases attributed to the method, we highlight the delicacy of this approach with regard to the revelation mode and the interrogation techniques, following the NOAA panel recommendations, with the exception of the valorization point, and drawing on other similar studies on the estimation of transportation externalities.

Keywords: Willingness to pay, value of time, contingent valuation, time value, city toll, transport.

1528 The Coverage of the Object-Oriented Framework Application Class-Based Test Cases

Authors: Jehad Al Dallal, Paul Sorenson

Abstract:

An application framework provides a reusable design and implementation for a family of software systems. Frameworks are introduced to reduce the cost of a product line (i.e., family of products that share the common features). Software testing is a time consuming and costly ongoing activity during the application software development process. Generating reusable test cases for the framework applications at the framework development stage, and providing and using the test cases to test part of the framework application whenever the framework is used reduces the application development time and cost considerably. Framework Interface Classes (FICs) are classes introduced by the framework hooks to be implemented at the application development stage. They can have reusable test cases generated at the framework development stage and provided with the framework to test the implementations of the FICs at the application development stage. In this paper, we conduct a case study using thirteen applications developed using three frameworks; one domain oriented and two application oriented. The results show that, in general, the percentage of the number of FICs in the applications developed using domain frameworks is, on average, greater than the percentage of the number of FICs in the applications developed using application frameworks. Consequently, the reduction of the application unit testing time using the reusable test cases generated for domain frameworks is, in general, greater than the reduction of the application unit testing time using the reusable test cases generated for application frameworks.

Keywords: FICs, object-oriented framework, object-oriented framework application, software testing.

1527 IntelligentLogger: A Heavy-Duty Vehicles Fleet Management System Based on IoT and Smart Prediction Techniques

Authors: D. Goustouridis, A. Sideris, I. Sdrolias, G. Loizos, N.-Alexander Tatlas, S. M. Potirakis

Abstract:

Both the daily and the long-term management of a heavy-duty vehicle and construction machinery fleet is an extremely complicated and hard-to-solve issue. This is mainly due to the diversity of the fleet vehicles and machinery, which concerns not only the vehicle types but also their age/efficiency, as well as the fleet volume, which is often of the order of hundreds or even thousands of vehicles/machineries. In the present paper we present "IntelligentLogger", a holistic heavy-duty fleet management system covering a wide range of diverse fleet vehicles. It is based on specifically designed hardware and software for automated vehicle health status and operational cost monitoring, for smart maintenance. IntelligentLogger is characterized by high adaptability that allows it to be tailored to practically any heavy-duty vehicle/machinery (of different technologies, modern or legacy, and of dissimilar uses). Contrary to conventional logistic systems, which are characterized by raised operational costs and frequent errors, IntelligentLogger provides a cost-effective and reliable integrated solution for the e-management and e-maintenance of the fleet members. The IntelligentLogger system offers the following unique features that guarantee successful heavy-duty vehicle/machinery fleet management: (a) recording and storage of operating data of motorized construction machinery, in a reliable way and in real time, using specifically designed Internet of Things (IoT) sensor nodes that communicate through the available network infrastructures, e.g., 3G/LTE; (b) use on any machine, regardless of its age, in a universal way; (c) flexibility and complete customization both in terms of data collection and integration with 3rd party systems, as well as in terms of processing and drawing conclusions; (d) validation, error reporting and correction, as well as update of the system's database; (e) artificial intelligence (AI) software for processing information in real time, identifying out-of-normal behavior and generating alerts; (f) a MicroStrategy-based enterprise BI for modeling information and producing reports, dashboards, and alerts focusing on optimal vehicle/machinery usage, as well as maintenance and scrapping policies; (g) a modular structure that allows low implementation costs in the basic, fully functional version, but offers scalability without requiring a complete system upgrade.

Keywords: E-maintenance, predictive maintenance, IoT sensor nodes, cost optimization, artificial intelligence, heavy-duty vehicles.

1526 Hybrid Heat Pump for Micro Heat Network

Authors: J. M. Counsell, Y. Khalid, M. J. Stewart

Abstract:

Achieving nearly zero carbon heating continues to be identified by UK government analysis as an important feature of any lowest-cost pathway to reducing greenhouse gas emissions. Heat currently accounts for 48% of UK energy consumption and approximately one third of the UK's greenhouse gas emissions. Heat networks are being promoted by UK investment policies as one means of supporting hybrid heat pump based solutions. To this effect, the RISE (Renewable Integrated and Sustainable Electric) heating system project is investigating how an all-electric, hybrid heating-sources configuration could play a key role in the long-term decarbonisation of heat. For the purposes of this study, hybrid systems are defined as systems combining an electrically driven air source heat pump, electrically powered thermal storage, a thermal vessel and a micro heat network into an integrated system. This hybrid strategy allows the system to store energy during periods of low electricity demand on the national grid, turning it into a dynamic supply of low-cost heat which is utilized only when required. Currently, a prototype of such a system is being tested in a modern house integrated with advanced controls and sensors. This paper presents the virtual performance analysis of the system and its design for a micro heat network with multiple dwelling units. The results show that the RISE system is controllable and can reduce carbon emissions whilst being competitive in running costs with a conventional gas boiler heating system.

Keywords: Gas boilers, heat pumps, hybrid heating and thermal storage, renewable integrated & sustainable electric.

1525 A Propagator Method like Algorithm for Estimation of Multiple Real-Valued Sinusoidal Signal Frequencies

Authors: Sambit Prasad Kar, P. Palanisamy

Abstract:

In this paper, a novel method for estimating the frequencies of multiple one-dimensional real-valued sinusoidal signals in the presence of additive Gaussian noise is postulated. A computationally simple frequency estimation method with efficient statistical performance is attractive in many array signal processing applications. The prime focus of this paper is to combine a subspace-based technique with a simple peak search approach. The paper presents a variant of the Propagator Method (PM), in which SUMWE and the propagator method are applied jointly in order to estimate multiple real-valued sine wave frequencies. A new data model is proposed in which the dimension of the signal subspace is equal to the number of frequencies present in the observation, whereas in the conventional MUSIC method for estimating the frequencies of real-valued sinusoidal signals the signal subspace dimension is twice the number of frequencies. The statistical analysis of the proposed method is studied, and an explicit expression for the asymptotic (large-sample) mean-squared error (MSE), or variance of the estimation error, is derived. The performance of the method is demonstrated, and the theoretical analysis is substantiated through numerical examples. The proposed method achieves high estimation accuracy and frequency resolution at lower SNR, as verified by simulations comparing it with the conventional MUSIC, ESPRIT and propagator methods.
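
As a generic illustration of the propagator idea (estimating the noise subspace with linear operations on the sample covariance, without eigendecomposition), the sketch below estimates the frequencies of complex exponentials in noise and locates them by a peak search; it is not the paper's SUMWE-based real-valued variant.

```python
import numpy as np

def propagator_freqs(y, p, m=20, grid=4096):
    """Estimate p frequencies (cycles/sample) of complex exponentials in noise
    using the propagator method (no eigendecomposition)."""
    N = len(y)
    # Snapshot matrix of m-sample windows and its sample covariance.
    X = np.array([y[i:i + m] for i in range(N - m + 1)]).T        # m x K
    R = X @ X.conj().T / X.shape[1]
    G, H = R[:, :p], R[:, p:]                                     # column partition
    P, *_ = np.linalg.lstsq(G, H, rcond=None)                     # propagator, p x (m-p)
    f_grid = np.linspace(0.0, 0.5, grid, endpoint=False)
    spectrum = np.empty(grid)
    for i, f in enumerate(f_grid):
        a = np.exp(2j * np.pi * f * np.arange(m))                 # steering vector
        q = P.conj().T @ a[:p] - a[p:]                            # Q^H a(f)
        spectrum[i] = 1.0 / np.real(q.conj() @ q)                 # pseudospectrum
    # Peak search: take the p largest local maxima of the pseudospectrum.
    peaks = [i for i in range(1, grid - 1)
             if spectrum[i] > spectrum[i - 1] and spectrum[i] > spectrum[i + 1]]
    peaks = sorted(peaks, key=lambda i: spectrum[i], reverse=True)[:p]
    return np.sort(f_grid[peaks])

rng = np.random.default_rng(0)
n = np.arange(400)
true_f = [0.12, 0.21]
y = sum(np.exp(2j * np.pi * f * n) for f in true_f)
y = y + 0.1 * (rng.standard_normal(400) + 1j * rng.standard_normal(400))
print(propagator_freqs(y, p=2))          # close to [0.12, 0.21]
```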

Keywords: Frequency estimation, peak search, subspace-based method without eigen decomposition, quadratic convex function.

1524 Reconstitute Information about Discontinued Water Quality Variables in the Nile Delta Monitoring Network Using Two Record Extension Techniques

Authors: Bahaa Khalil, Taha B. M. J. Ouarda, André St-Hilaire

Abstract:

The world economic crises and budget constraints have caused authorities, especially those in developing countries, to rationalize water quality monitoring activities. Rationalization consists of reducing the number of monitoring sites, the number of samples, and/or the number of water quality variables measured. The reduction in water quality variables is usually based on correlation. If two variables exhibit high correlation, it is an indication that some of the information produced may be redundant. Consequently, one variable can be discontinued, and the other continues to be measured. Later, the ordinary least squares (OLS) regression technique is employed to reconstitute information about discontinued variable by using the continuously measured one as an explanatory variable. In this paper, two record extension techniques are employed to reconstitute information about discontinued water quality variables, the OLS and the Line of Organic Correlation (LOC). An empirical experiment is conducted using water quality records from the Nile Delta water quality monitoring network in Egypt. The record extension techniques are compared for their ability to predict different statistical parameters of the discontinued variables. Results show that the OLS is better at estimating individual water quality records. However, results indicate an underestimation of the variance in the extended records. The LOC technique is superior in preserving characteristics of the entire distribution and avoids underestimation of the variance. It is concluded from this study that the OLS can be used for the substitution of missing values, while LOC is preferable for inferring statements about the probability distribution.
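
The only difference between the two techniques is the slope of the fitted line, which is what drives the variance behaviour reported above; a minimal sketch with synthetic data in place of the Nile Delta records:

```python
import numpy as np

def ols_line(x, y):
    """Ordinary least squares: slope = r * (sy/sx)."""
    r = np.corrcoef(x, y)[0, 1]
    slope = r * y.std(ddof=1) / x.std(ddof=1)
    return slope, y.mean() - slope * x.mean()

def loc_line(x, y):
    """Line of organic correlation: slope = sign(r) * (sy/sx), which preserves
    the variance of the extended records instead of shrinking it by r^2."""
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
    return slope, y.mean() - slope * x.mean()

rng = np.random.default_rng(1)
x = rng.normal(10.0, 2.0, 200)                 # continued water-quality variable
y = 0.8 * x + rng.normal(0.0, 1.5, 200)        # discontinued variable (overlap period)

for name, (b, a) in {"OLS": ols_line(x, y), "LOC": loc_line(x, y)}.items():
    y_hat = a + b * x
    print(f"{name}: slope={b:.3f}  var(y)={y.var(ddof=1):.2f}  "
          f"var(y_hat)={y_hat.var(ddof=1):.2f}")
```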

Keywords: Record extension, record augmentation, monitoringnetworks, water quality indicators.

1523 Evaluating Efficiency of Nina Distribution Company Using Window Data Envelopment Analysis and Malmquist Index

Authors: Hossein Taherian Far, Ali Bazaee

Abstract:

Achieving continuous, sustained economic growth and the economic development that follows is a target for every country that seeks it. In this regard, the distribution industry plays an important role in the growth and development of any nation, so estimating the efficiency and productivity of this industry and identifying the factors influencing it is necessary. The objective of the present study is to measure the efficiency and productivity of seven branches of Nina Distribution Company using window data envelopment analysis and the Malmquist productivity index from spring 2013 to summer 2015. In this study, fixed assets, payroll personnel, operating costs and duration of collection of receivables were selected as inputs, and net sales, gross profit and percentage of coverage to customers were selected as outputs. The window data envelopment analysis was then carried out, and productivity change was measured using the Malmquist index. The results indicate that the average technical efficiency in the window Data Envelopment Analysis (DEA) model is sustained, with a fluctuating trend, whereas the average management efficiency in the window DEA model shows negative growth (a decline) of about 13%. The mean scale efficiency in all windows, except the second one, which faced a decline of 8%, shows growth of 18% compared to the first window. Moreover, the mean change in total factor productivity across all branches shows an average negative growth (decrease) of 12%, which is the result of a negative change in technology.
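
The building block that window DEA re-evaluates over moving windows of periods is a per-DMU efficiency score obtained from a small linear program; the sketch below computes input-oriented CCR efficiencies with SciPy on made-up branch data (seven branches, placeholder inputs and outputs).

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.
    X: inputs (m x n), Y: outputs (s x n); columns are DMUs."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # minimise theta; variables = [theta, lambdas]
    # theta * x_o - X @ lam >= 0   ->   [-x_o | X] z <= 0
    A_in = np.hstack([-X[:, [o]], X])
    # Y @ lam >= y_o               ->   [0 | -Y] z <= -y_o
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

# Placeholder data: 4 inputs and 3 outputs for 7 branches (columns).
rng = np.random.default_rng(3)
X = rng.uniform(5, 15, size=(4, 7))   # fixed assets, personnel, costs, receivables period
Y = rng.uniform(5, 15, size=(3, 7))   # net sales, gross profit, customer coverage
for o in range(7):
    print(f"branch {o + 1}: CCR efficiency = {ccr_efficiency(X, Y, o):.3f}")
```

Window DEA simply repeats this calculation with each branch-period treated as a separate DMU within a moving window, and the Malmquist index compares the resulting frontiers between consecutive periods.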

Keywords: Nina Distribution Company branches, window data envelopment analysis, Malmquist productivity index.

1522 Preliminary Roadway Alignment Design: A Spatial-Data Optimization Approach

Authors: Y. Abdelrazig, R. Moses

Abstract:

Roadway planning and design is a very complex process involving five key phases before a project is completed: planning, project development, final design, right-of-way, and construction. The planning phase for a new roadway transportation project is critical, as it greatly affects all later phases of the project. A location study is usually performed during the preliminary planning phase of a new roadway project. The objective of the location study is to develop alignment alternatives that are cost efficient with respect to land acquisition and construction costs. This paper describes a methodology for developing optimal preliminary roadway alignments using spatial data. Four optimization criteria are taken into consideration: roadway length, land cost, land slope, and environmental impacts. The basic concept of the methodology is to convert the proposed project area into a grid, which represents the search space for an optimal alignment. The aforementioned optimization criteria are represented in each of the grid's cells. A spatial-data optimization technique is used to find the optimal alignment in the search space based on the four optimization criteria. Two case studies of new roadway projects in Duval County in the State of Florida are presented to illustrate the methodology. The optimized alignments are compared to the alignments proposed by the Florida Department of Transportation (FDOT) on the basis of right-of-way costs. For both case studies, the right-of-way costs for the developed optimal alignments were found to be significantly lower than those of the FDOT alignments.
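
Searching a cost grid of this kind is commonly posed as a weighted shortest-path problem; the sketch below runs Dijkstra's algorithm over a small synthetic grid whose cell weights stand in for a combined length/land-cost/slope/environmental score (the weighting itself is a placeholder, not the paper's formulation).

```python
import heapq
import numpy as np

def optimal_alignment(cost, start, end):
    """Dijkstra over a cost grid (8-connected): returns the minimum-cost cell path."""
    rows, cols = cost.shape
    dist = np.full((rows, cols), np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == end:
            break
        if d > dist[r, c]:
            continue
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                    nd = d + cost[nr, nc]
                    if nd < dist[nr, nc]:
                        dist[nr, nc] = nd
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], end
    while node != start:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1], dist[end]

# Placeholder composite-cost grid (length + land cost + slope + environmental penalty).
rng = np.random.default_rng(7)
grid = rng.uniform(1.0, 10.0, size=(8, 12))
path, total = optimal_alignment(grid, (0, 0), (7, 11))
print(f"alignment cost = {total:.1f}, path length = {len(path)} cells")
```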

Keywords: Optimization, planning, roadway alignment, FDOT.

1521 Investigation of Physical Properties of Asphalt Binder Modified by Recycled Polyethylene and Ground Tire Rubber

Authors: Sajjad H. Kasanagh, Perviz Ahmedzade, Alexander Fainleib, Taylan Gunay

Abstract:

Asphalt modification is a widely used approach around the world, mainly aimed at providing more durable pavements and thereby reducing repair costs over the lifetime of highways. Various polymers, such as styrene-butadiene-styrene (SBS) and ethylene vinyl acetate (EVA), make up the greater part of all asphalt modifiers, generally providing better physical properties by decreasing temperature dependency, which in turn diminishes permanent deformation on highways such as rutting. However, waste and low-cost materials such as recycled plastics and ground tire rubber have also been tried as asphalt modifiers in place of manufactured polymers in order to decrease the overall highway cost. At the same time, recycling plastics has become a worldwide requirement and concern in order to decrease the pollution caused by waste plastics, and finding applications in which recycled plastics can be utilized has been targeted by many research teams so as to reduce polymer manufacturing and plastic pollution. To this end, in this paper, a thermoplastic dynamic vulcanizate (TDV) obtained from recycled post-consumer polyethylene and ground tire rubber (GTR) was used to provide an efficient asphalt modifier that decreases production cost and might offer an ecological solution by reducing polymer disposal problems. The TDV was synthesized by the chemists in the research group from the above-mentioned components, which are considered physically compatible with asphalt materials. TDV-modified asphalt samples with modifier contents of 3, 4, 5, 6 and 7 wt.% were prepared. Conventional tests, such as penetration, softening point and rolling thin film oven (RTFO) tests, were performed to obtain the fundamental physical and aging properties of the base and modified binders. The high-temperature performance grade (PG) of the binders was determined by Superpave tests conducted on original and aged binders. The multiple stress creep and recovery (MSCR) test, a relatively recent method for classifying asphalts that takes account of their elastic response, was carried out to evaluate the PG-plus grades of the binders. The results obtained from the performance grading and MSCR tests were evaluated together so as to compare the two methods, both of which aim to determine the rheological parameters of asphalt. The test results revealed that TDV modification leads to a decrease in penetration and an increase in softening point, which indicates increased stiffness of the asphalt. DSR results indicate an improvement in PG for the modified binders compared to the base asphalt. MSCR results, which are consistent with the DSR results, also indicate an enhancement of the rheological properties of the asphalt; however, the improvement is not as distinct as in the DSR results, since elastic properties are fundamental in MSCR. At the end of the testing program, it can be concluded that TDV can be used as a modifier that provides better rheological properties for asphalt and might diminish plastic waste pollution, since the material is 100% recycled.
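
The MSCR parameters referred to above are simple to compute from the strain record of each creep-recovery cycle; the sketch below shows the standard percent-recovery and non-recoverable creep compliance (Jnr) calculations on placeholder strain values for a single 3.2 kPa cycle.

```python
# Percent recovery and non-recoverable creep compliance (Jnr) for one MSCR cycle.
def mscr_cycle(strain_initial, strain_peak, strain_end, stress_kpa):
    """strain_peak: strain at the end of the 1 s creep phase,
    strain_end: strain at the end of the 9 s recovery phase."""
    creep = strain_peak - strain_initial            # strain accumulated under load
    unrecovered = strain_end - strain_initial       # strain left after recovery
    recovery_pct = 100.0 * (creep - unrecovered) / creep
    jnr = unrecovered / stress_kpa                  # 1/kPa
    return recovery_pct, jnr

# Placeholder numbers for a single 3.2 kPa cycle (not measured values).
r, jnr = mscr_cycle(strain_initial=0.00, strain_peak=0.030, strain_end=0.012,
                    stress_kpa=3.2)
print(f"recovery = {r:.1f}%  Jnr = {jnr:.4f} 1/kPa")
```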

Keywords: Asphalt, ground tire rubber, recycled polymer, thermoplastic dynamic vulcanizate.

1520 Battery Energy Storage System Economic Benefits Assessment on a Network Frequency Control

Authors: Kréhi Serge Agbli, Samuel Portebos, Michaël Salomon

Abstract:

A methodology is presented for evaluating the economic benefit of providing primary frequency control with a Battery Energy Storage System (BESS). In this methodology, two control types (basic and hysteresis) are implemented, and the minimum energy storage system power that keeps the frequency drop within a given threshold under a given contingency is identified and compared using DIgSILENT PowerFactory software. Following this step, the corresponding energy storage capacity (in MWh) is calculated. As PowerFactory is dedicated to dynamic simulation for transient analysis, a first-order model of the IEEE 9-bus grid used for the PowerFactory analysis is characterized and implemented in MATLAB-Simulink. Primary frequency control with the two control types is then simulated on this Simulink model over one month of grid frequency-deviation data, which yields the energy throughput of both the basic and hysteresis BESSs. It emerges that a 15-minute operating band of the battery capacity allocated to frequency control is sufficient under the considered disturbances. A sensitivity analysis on the width of the control deadband is then performed for the two control types. Varying the deadband width leads to identical sizing, with the hysteresis control providing better frequency control at the cost of a higher delivered throughput than the basic control. Finally, an economic analysis comparing the cost of the sized BESS to the potential revenues is performed.
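
The basic versus hysteresis dead-band logic compared in the study can be illustrated with a few lines of control code run over a frequency-deviation series; the thresholds, power rating and synthetic frequency trace below are invented placeholders, but the comparison reproduces the qualitative point that hysteresis control delivers a higher energy throughput.

```python
import numpy as np

DEADBAND = 0.05        # Hz, placeholder dead-band half-width
HYST_RESET = 0.02      # Hz, placeholder reset threshold for the hysteresis control
P_MAX = 1.0            # MW, placeholder BESS power rating

def basic_control(df):
    """Inject/absorb full power whenever |deviation| exceeds the dead-band."""
    return -np.sign(df) * P_MAX if abs(df) > DEADBAND else 0.0

def hysteresis_control(df, active):
    """Once triggered, keep responding until |deviation| falls below a lower threshold."""
    if abs(df) > DEADBAND:
        active = True
    elif abs(df) < HYST_RESET:
        active = False
    return (-np.sign(df) * P_MAX if active else 0.0), active

rng = np.random.default_rng(5)
freq_dev = 0.08 * np.sin(np.linspace(0, 20, 3600)) + 0.01 * rng.standard_normal(3600)

dt_h = 1.0 / 3600.0                      # 1 s samples expressed in hours
throughput_basic = sum(abs(basic_control(df)) for df in freq_dev) * dt_h
active, throughput_hyst = False, 0.0
for df in freq_dev:
    p, active = hysteresis_control(df, active)
    throughput_hyst += abs(p) * dt_h
print(f"energy throughput: basic = {throughput_basic:.2f} MWh, "
      f"hysteresis = {throughput_hyst:.2f} MWh")
```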

Keywords: Battery Energy Storage System, electrical network frequency stability, frequency control unit, PowerFactory.

1519 Development of Nondestructive Imaging Analysis Method Using Muonic X-Ray with a Double-Sided Silicon Strip Detector

Authors: I-Huan Chiu, Kazuhiko Ninomiya, Shin’ichiro Takeda, Meito Kajino, Miho Katsuragawa, Shunsaku Nagasawa, Atsushi Shinohara, Tadayuki Takahashi, Ryota Tomaru, Shin Watanabe, Goro Yabu

Abstract:

In recent years, a nondestructive elemental analysis method based on muonic X-ray measurements has been developed and applied to various samples. Muonic X-rays are emitted after the formation of a muonic atom, which occurs when a negatively charged muon is captured into a muonic atomic orbit around a nucleus. Because muonic X-rays have a higher energy than electronic X-rays due to the muon mass, they can be measured without being absorbed by the material. Thus, estimating the two-dimensional (2D) elemental distribution of a sample becomes possible using an X-ray imaging detector. In this work, we report a non-destructive imaging experiment using muonic X-rays at the Japan Proton Accelerator Research Complex. The irradiated target consisted of a polypropylene material, and a double-sided silicon strip detector, which was developed as an imaging detector for astronomical observation, was employed. A peak corresponding to muonic X-rays from the carbon atoms in the target was clearly observed in the energy spectrum at an energy of 14 keV, and 2D visualizations were successfully reconstructed to reveal the projection image of the target. This result demonstrates the potential of a nondestructive elemental imaging method based on muonic X-ray measurement. To obtain a higher position resolution for imaging smaller targets, a new detector system will be developed to improve the statistical analysis in further research.

Keywords: DSSD, muon, muonic X-ray, imaging, non-destructive analysis

1518 Appraisal of Methods for Identifying, Mapping, and Modelling of Fluvial Erosion in a Mining Environment

Authors: F. F. Howard, I. Yakubu, C. B. Boye, J. S. Y. Kuma

Abstract:

Natural and human activities, such as mining operations, expose the natural soil to adverse environmental conditions, leading to contamination of soil, groundwater, and surface water, which has negative effects on humans, flora, and fauna. Bare or partly exposed soil is the most liable to fluvial erosion. This paper reviews the methods used to identify, map, and model fluvial erosion in a mining environment, covering classical, Artificial Intelligence (AI), and GIS approaches. One of the most widely used classical methods for estimating fluvial erosion is the Revised Universal Soil Loss Equation (RUSLE) model. The RUSLE model is easy to use, but its reliance on empirical relationships that may not always be applicable to specific circumstances or locations is a flaw. Other classical models for estimating fluvial erosion are the Soil and Water Assessment Tool (SWAT) and the Universal Soil Loss Equation (USLE). These models offer a more complete representation of the underlying physical processes and encompass a wider range of situations; although more difficult to use, their accuracy depends on the availability and reliability of input data. AI can help deal with multivariate and complex problems, can predict soil loss with higher accuracy than traditional methods, and can be used to build dedicated models for identifying degraded areas. AI techniques have therefore become popular as alternative predictors for degraded environments. Building on this, the research proposes a hybrid of classical, AI, and GIS methods for efficient and effective modelling of fluvial erosion.
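
The RUSLE relation named above is a simple product of factors, which makes it easy to script once the factor maps are available; a minimal sketch with illustrative (not site-calibrated) values:

```python
# RUSLE: A = R * K * LS * C * P  (annual soil loss, t/ha/yr)
def rusle(R, K, LS, C, P):
    """R: rainfall erosivity, K: soil erodibility, LS: slope length/steepness,
    C: cover management, P: support practice."""
    return R * K * LS * C * P

# Illustrative factor values for a bare mine-site cell (placeholders).
print(f"predicted soil loss = {rusle(R=550, K=0.30, LS=2.5, C=0.45, P=1.0):.1f} t/ha/yr")
```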

Keywords: Fluvial erosion, classical methods, Artificial Intelligence, Geographic Information System.
