Search results for: Friction estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1357

67 Improved Fuzzy Neural Modeling for Underwater Vehicles

Authors: O. Hassanein, Sreenatha G. Anavatti, Tapabrata Ray

Abstract:

The dynamics of Autonomous Underwater Vehicles (AUVs) are highly nonlinear and time-varying, and the hydrodynamic coefficients of these vehicles are difficult to estimate accurately because they vary with navigation conditions and external disturbances. This study presents on-line system identification of AUV dynamics to obtain the coupled nonlinear dynamic model of the AUV as a black box. This black box has an input-output relationship based upon on-line adaptive fuzzy model and adaptive neural fuzzy network (ANFN) model techniques, which overcome uncertain external disturbances and the difficulty of modelling the hydrodynamic forces of AUVs, instead of using a mathematical model with hydrodynamic parameter estimation. The models' parameters are adapted by the backpropagation algorithm based upon the error between the identified model and the actual output of the plant. The proposed ANFN model adopts a functional link neural network (FLNN) as the consequent part of the fuzzy rules; thus, the consequent part of the ANFN model is a nonlinear combination of the input variables. A fuzzy control system is applied to guide and control the AUV using both the adaptive models and the mathematical model. Simulation results show the superiority of the proposed adaptive neural fuzzy network (ANFN) model in accurately tracking the behavior of the AUV, even in the presence of noise and disturbance.
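
A minimal sketch of the core idea, in Python: fuzzy rules whose consequent part is a functional-link (trigonometric) expansion of the inputs, with the consequent weights adapted by a gradient (backpropagation) step on the identification error. The Gaussian memberships, single output, and all names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical ANFN structure: 4 rules, 2 inputs, one output.
rng = np.random.default_rng(0)
n_rules, n_inputs = 4, 2
centers = rng.uniform(-1.0, 1.0, (n_rules, n_inputs))  # Gaussian MF centers
sigmas = np.full((n_rules, n_inputs), 0.5)              # Gaussian MF widths
W = rng.normal(0.0, 0.1, (n_rules, 3 * n_inputs + 1))   # FLNN consequent weights

def flnn_basis(x):
    """Trigonometric functional-link expansion of the input vector."""
    return np.concatenate([x, np.sin(np.pi * x), np.cos(np.pi * x), [1.0]])

def forward(x, W):
    mu = np.exp(-((x - centers) ** 2) / (2.0 * sigmas ** 2))
    w = mu.prod(axis=1)              # rule firing strengths (product t-norm)
    wn = w / w.sum()                 # normalized firing strengths
    phi = flnn_basis(x)
    return wn @ (W @ phi), wn, phi   # output: firing-weighted FLNN consequents

def adapt(x, y_target, W, lr=0.05):
    """One backpropagation step on the consequent weights."""
    y, wn, phi = forward(x, W)
    e = y_target - y                 # error between identified model and plant
    return W + lr * e * np.outer(wn, phi), e
```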

Keywords: AUV, AUV dynamic model, fuzzy control, fuzzy modelling, adaptive fuzzy control, back propagation, system identification, neural fuzzy model, FLNN.

66 Comparative Analysis of the Third Generation of Reanalysis Data for Evaluation of Solar Energy Potential

Authors: Claudineia Brazil, Elison Eduardo Jardim Bierhals, Luciane Teresa Salvi, Rafael Haag

Abstract:

Renewable energy sources depend on climatic variability, so adequate energy planning requires observations of the meteorological variables, preferably as long-period series. Despite the scientific and technological advances that meteorological measurement systems have undergone in recent decades, there is still a considerable lack of meteorological observations forming long-period series. Reanalysis is a data assimilation system built on general atmospheric circulation models, which combines data collected at surface stations, ocean buoys, satellites and radiosondes, allowing the production of long-period data for a wide range of variables. The third generation of reanalysis data emerged in 2010; among them is the Climate Forecast System Reanalysis (CFSR) developed by the National Centers for Environmental Prediction (NCEP), whose data have a spatial resolution of 0.5° x 0.5°. To overcome these difficulties, this study evaluates the performance of solar radiation estimation from alternative databases, such as reanalysis data and meteorological satellite data, which can satisfactorily compensate for the absence of solar radiation observations at the global and/or regional level. The analysis of the solar radiation data indicated that the reanalysis data of the CFSR model performed well relative to the observed data, with a coefficient of determination around 0.90. Therefore, it is concluded that these data have the potential to be used as an alternative source at locations with no stations or without long series of solar radiation observations, which is important for the evaluation of solar energy potential.
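
As a worked illustration of the reported agreement, the coefficient of determination between observed station data and reanalysis estimates can be computed as below; the array names are hypothetical.

```python
import numpy as np

def r_squared(observed, estimated):
    """Coefficient of determination between observations and CFSR estimates."""
    ss_res = np.sum((observed - estimated) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# e.g. daily global solar radiation series (hypothetical arrays):
# r2 = r_squared(station_series, cfsr_series)   # around 0.90 in this study
```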

Keywords: Climate, reanalysis, renewable energy, solar radiation.

65 An Approach to Correlate the Statistical-Based Lorenz Method, as a Way of Measuring Heterogeneity, with Kozeny-Carman Equation

Authors: H. Khanfari, M. Johari Fard

Abstract:

Dealing with carbonate reservoirs can be challenging for reservoir engineers due to the various diagenetic processes that produce a variety of properties throughout the reservoir. A good estimate of reservoir heterogeneity, defined as the variation in rock properties with location in a reservoir or formation, helps in modeling the reservoir and thus offers a better understanding of its behavior. Most reservoirs are heterogeneous formations whose mineralogy, organic content, natural fractures, and other properties vary from place to place. Over the years, reservoir engineers have tried to establish methods to describe this heterogeneity, because heterogeneity is important in modeling reservoir flow and in well testing. Geological methods are used to describe the variations in rock properties based on the similarities of the environments in which different beds were deposited. To illustrate the vertical heterogeneity of a reservoir, two methods are generally used in petroleum work: the Dykstra-Parsons permeability variation (V) and the Lorenz coefficient (L), both reviewed briefly in this paper. The Lorenz concept is based on statistics and has been used in petroleum from that point of view. In this paper, we correlate the statistics-based Lorenz method to a petroleum concept, i.e., the Kozeny-Carman equation, and derive the straight-line Lorenz plot for a homogeneous system. Finally, we apply the two methods to a heterogeneous field in southern Iran and discuss each, separately, with numbers and figures. As expected, these methods show great departure from homogeneity. Therefore, for future investment, the reservoir needs to be treated carefully.
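
For reference, the Lorenz coefficient is obtained by ordering the layers by flow capacity and doubling the area between the flow-capacity versus storage-capacity curve and the homogeneous diagonal. A minimal sketch under that standard definition, with hypothetical inputs:

```python
import numpy as np

def lorenz_coefficient(k, phi, h):
    """Lorenz coefficient from layer permeability k, porosity phi, thickness h."""
    order = np.argsort(k / phi)[::-1]        # best flow units first
    flow = np.cumsum((k * h)[order])         # cumulative flow capacity
    stor = np.cumsum((phi * h)[order])       # cumulative storage capacity
    flow = np.insert(flow / flow[-1], 0, 0.0)
    stor = np.insert(stor / stor[-1], 0, 0.0)
    area = np.sum((flow[1:] + flow[:-1]) / 2.0 * np.diff(stor))  # under the curve
    return 2.0 * (area - 0.5)    # 0 = homogeneous, toward 1 = strongly heterogeneous
```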

Keywords: Carbonate reservoirs, heterogeneity, homogeneous system, Dykstra-Parsons permeability variations (V), Lorenz coefficient (L).

64 Estimation of Hysteretic Damping in Steel Dual Systems with Buckling Restrained Brace and Moment Resisting Frame

Authors: Seyed Saeid Tabaee, Omid Bahar

Abstract:

Nowadays, energy dissipation devices are commonly used in structures. Their benefit is a high rate of energy absorption during earthquakes, which reduces damage to structural elements, specifically columns. The hysteretic damping capacity of energy dissipation devices is the key point, yet it can complicate the analysis and design process. This effect is generally represented by Equivalent Viscous Damping (EVD). The equivalent viscous damping can be obtained from the expected hysteretic behavior at the design or maximum considered displacement of a structure. In this paper, the hysteretic damping coefficient of a steel Moment Resisting Frame (MRF) whose performance is enhanced by a Buckling Restrained Brace (BRB) system is evaluated. Knowing the damping fraction between the BRB and the MRF in advance is indispensable for seismic design procedures such as the Direct Displacement-Based Design (DDBD) method. This paper presents an approach to calculate the damping fraction for such systems by carrying out nonlinear time history analysis (NTHA) under harmonic loading tuned to the natural frequency of the system. Two MRF structures, one equipped with a BRB and the other without, are studied simultaneously. Extensive analysis shows that the damping fraction of each system may be calculated from its share of the story shear. In this way, the contribution of each BRB in the floors, and their overall contribution to the structural performance, may be clearly recognized in advance.
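
The standard relation behind EVD is xi_eq = E_D / (4 pi E_So), with E_D the energy dissipated in one cycle (the hysteresis loop area) and E_So the strain energy at peak response. A sketch that evaluates this from sampled force-displacement histories of one harmonic cycle (inputs hypothetical, not the authors' code):

```python
import numpy as np

def equivalent_viscous_damping(disp, force):
    """EVD ratio from one closed hysteresis loop given as sampled arrays."""
    # E_D: loop area by the shoelace formula (loop closure via np.roll)
    e_d = 0.5 * abs(np.sum(disp * np.roll(force, -1) - force * np.roll(disp, -1)))
    # E_So: elastic strain energy at the peak response
    e_so = 0.5 * np.max(np.abs(force)) * np.max(np.abs(disp))
    return e_d / (4.0 * np.pi * e_so)
```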

Keywords: Buckling restrained brace, Direct displacement based design, Dual systems, Hysteretic damping, Moment resisting frames.

63 A Stochastic Diffusion Process Based on the Two-Parameters Weibull Density Function

Authors: Meriem Bahij, Ahmed Nafidi, Boujemâa Achchab, Sílvio M. A. Gama, José A. O. Matos

Abstract:

Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present. The purpose of stochastic modeling is therefore to estimate the probability of outcomes within a forecast, i.e., to predict what might happen under different conditions or decisions. In the present study, we present a model of a stochastic diffusion process based on the two-parameter Weibull distribution function (its trend is proportional to the two-parameter Weibull probability density function). In general, the Weibull distribution can assume the characteristics of many different types of distributions. This has made it very popular among engineers and quality practitioners, who consider it the most commonly used distribution for studying problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. In this work, we start by obtaining the probabilistic characteristics of this model, such as the explicit expression of the process, its trends, and its distribution, by transforming the diffusion process into a Wiener process as shown by the Ricciardi theorem. Then, we develop the statistical inference of this model using the maximum likelihood methodology. Finally, we analyse with simulated data the computational problems associated with the parameters, an issue of great importance for application to real data, using convergence analysis methods. Overall, the use of a stochastic model reflects only a pragmatic decision on the part of the modeler: given the available data and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.
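
The maximum-likelihood step for the two-parameter Weibull itself can be sketched with SciPy by pinning the location parameter at zero; this is illustrative only, since the paper's likelihood is written for the discretely sampled diffusion process.

```python
from scipy import stats

# Simulated two-parameter Weibull data (true shape 1.8, scale 2.0).
data = stats.weibull_min.rvs(c=1.8, scale=2.0, size=500, random_state=0)

# Two-parameter MLE: fix the location at zero (floc=0).
shape, loc, scale = stats.weibull_min.fit(data, floc=0)
print(f"shape={shape:.3f}, scale={scale:.3f}")   # close to (1.8, 2.0)
```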

Keywords: Diffusion process, discrete sampling, likelihood estimation method, simulation, stochastic diffusion equation, trends functions, bi-parameters Weibull density function.

62 Retail Strategy to Reduce Waste Keeping High Profit Utilizing Taylor's Law in Point-of-Sales Data

Authors: Gen Sakoda, Hideki Takayasu, Misako Takayasu

Abstract:

Waste reduction is a fundamental problem for sustainability. Methods for waste reduction with point-of-sales (POS) data are proposed, utilizing the knowledge from a recent econophysics study on a statistical property of POS data. Concretely, a non-stationary time series analysis method based on the particle filter is developed which accounts for the anomalous fluctuation scaling known as Taylor's law. The method is extended to handle sales data that are incomplete because of stock-outs by introducing maximum likelihood estimation for censored data. A way of determining the optimal stock that prices the cost of waste reduction is also proposed. This study focuses on examining the methods for large sales numbers, where Taylor's law is evident. Numerical analysis using aggregated POS data shows the effectiveness of the methods in reducing food waste while maintaining a high profit for large sales numbers. Moreover, pricing the cost of waste reduction reveals that a small profit loss achieves substantial waste reduction, especially when the proportionality constant of Taylor's law is small. Specifically, around 1% profit loss achieves a halving of disposal when the constant equals 0.12, which is the actual value for the processed food items used in this research. The methods provide practical and effective solutions for waste reduction that keep a high profit, especially with large sales numbers.
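
Taylor's law links the variance of sales fluctuations to the mean, Var = A * mean^beta, and its constants can be estimated from aggregated POS series by a log-log fit. A sketch with hypothetical per-item aggregates (the paper's full method embeds this scaling inside a particle filter with censored-data MLE):

```python
import numpy as np

# Mean and variance of daily sales for several items (hypothetical values).
mean_sales = np.array([2.0, 8.0, 30.0, 120.0, 500.0])
var_sales = np.array([3.1, 14.0, 75.0, 420.0, 2600.0])

# Taylor's law: var = A * mean**beta  =>  log var = log A + beta * log mean
beta, log_a = np.polyfit(np.log(mean_sales), np.log(var_sales), 1)
print(f"exponent beta={beta:.2f}, constant A={np.exp(log_a):.3f}")
```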

Keywords: Food waste reduction, particle filter, point of sales, sustainable development goals, Taylor's Law, time series analysis.

61 Quantification of Soft Tissue Artefacts Using Motion Capture Data and Ultrasound Depth Measurements

Authors: Azadeh Rouhandeh, Chris Joslin, Zhen Qu, Yuu Ono

Abstract:

The centre of rotation of the hip joint is needed for accurate simulation of joint performance in many applications, such as pre-operative planning, human gait analysis, and assessment of hip joint disorders. In human movement analysis, the hip joint centre can be estimated using a functional method based on the relative motion of the femur to the pelvis, measured using reflective markers attached to the skin surface. The principal source of error in estimating the hip joint centre location with functional methods is soft tissue artefact, the relative motion between the markers and the bone. A main objective in human movement analysis is the assessment of soft tissue artefact, since the accuracy of functional methods depends upon it. Various studies have measured soft tissue artefact invasively, using intra-cortical pins, external fixators, percutaneous skeletal trackers, and Roentgen photogrammetry. The goal of this study is to present a non-invasive method to assess the displacements of the markers relative to the underlying bone, using optical motion capture data and tissue thickness from ultrasound measurements during flexion, extension, and abduction (all with the knee extended) of the hip joint. Results show that the skin marker displacements are non-linear and larger in areas closer to the hip joint. Marker displacements also depend on the movement type and are relatively larger in abduction. The quantification of soft tissue artefacts can serve as a basis for a correction procedure for hip joint kinematics.
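
A common functional method estimates the hip joint centre as the centre of the sphere that best fits a femoral marker trajectory expressed in the pelvis frame; the fit reduces to linear least squares. A minimal sketch with a hypothetical input array:

```python
import numpy as np

def fit_sphere(p):
    """Least-squares sphere fit to points p of shape (N, 3).

    |p - c|^2 = r^2 rearranges to 2 p.c + (r^2 - c.c) = p.p,
    which is linear in the unknowns [c, r^2 - c.c]."""
    A = np.c_[2.0 * p, np.ones(len(p))]
    b = (p ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = x[:3]
    radius = np.sqrt(x[3] + centre @ centre)
    return centre, radius

# centre, _ = fit_sphere(femur_marker_in_pelvis_frame)  # hypothetical (N, 3) array
```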

Keywords: Hip joint centre, motion capture, soft tissue artefact, ultrasound depth measurement.

60 A Preliminary Study on the Suitability of Data Driven Approach for Continuous Water Level Modeling

Authors: Muhammad Aqil, Ichiro Kita, Moses Macalinao

Abstract:

Reliable water level forecasts are particularly important for warning against dangerous floods and inundation. The current study investigates the suitability of the adaptive network-based fuzzy inference system for continuous water level modeling. A hybrid learning algorithm, which combines the least squares method and the backpropagation algorithm, is used to identify the parameters of the network. For this study, water level data are available for the 2002 hydrological year at a sampling interval of 1 hour. The number of antecedent water levels to include among the input variables is determined by two statistical methods, the autocorrelation function and the partial autocorrelation function between the variables. Forecasting was done for 1 hour up to 12 hours ahead in order to compare the models' generalization at longer horizons. The results demonstrate that the adaptive network-based fuzzy inference system model can be applied successfully and provides high accuracy and reliability for river water level estimation. In general, the adaptive network-based fuzzy inference system provides accurate and reliable water level prediction 1 hour ahead, achieving MAPE = 1.15% and correlation = 0.98. Up to 12 hours ahead, the model still shows relatively good performance, with a prediction error below 9.65%. The information gathered from these preliminary results provides useful guidance for flood early warning system design, in which the magnitude and timing of a potential extreme flood are indicated.
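
The lag-selection step can be sketched with standard tools: antecedent lags whose partial autocorrelation exceeds the approximate 95% confidence bound are kept as inputs. Function and series names are hypothetical.

```python
import numpy as np
from statsmodels.tsa.stattools import pacf

def select_input_lags(levels, max_lag=12):
    """Keep lags whose partial autocorrelation is significant at ~95%."""
    p = pacf(levels, nlags=max_lag)
    bound = 1.96 / np.sqrt(len(levels))      # approximate confidence bound
    return [lag for lag in range(1, max_lag + 1) if abs(p[lag]) > bound]

# lags = select_input_lags(hourly_water_level)   # hypothetical 1-hour series
```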

Keywords: Neural Network, Fuzzy, River, Forecasting

59 Identifying Areas on the Pavement Where Rain Water Runoff Affects Motorcycle Behavior

Authors: Panagiotis Lemonakis, Theodoros Αlimonakis, George Kaliabetsos, Nikos Eliou

Abstract:

It is very well known that certain vertical and longitudinal slopes have to be ensured in order to achieve adequate rainwater runoff from the pavement. Selecting longitudinal slopes between the turning points of the vertical curves that meet the aforementioned requirement does not ensure adequate drainage, because the same condition must also be applied along the transition curves; in this way, no pavement edge slope (nor any other spot lying on the pavement) will be opposite to the longitudinal slope of the rotation axis. Horizontal and vertical alignment must be properly combined to form a road whose resultant slope does not take small values; hence, checks must be performed at every cross section and every chainage of the road. The present research investigates rainwater runoff from the road surface in order to identify the conditions under which areas of inadequate drainage are created, to analyze the rainwater behavior in such areas, to provide design examples of good and bad drainage zones, and to identify motorcycle types that might encounter hazardous situations due to the presence of a water film between the pavement and both of their tires, resulting in loss of traction. Moreover, it investigates the combination of longitudinal and cross slope values in critical pavement areas. It should be pointed out that the drainage gradient is calculated analytically for the whole road width and not just as an oblique slope per chainage (the combination of longitudinal grade and cross slope). Lastly, various combinations of horizontal and vertical design are presented, indicating the crucial zones of poor pavement drainage. The key conclusion of the study is that any type of motorcycle will travel through the area of improper runoff for a time frame that depends on the speed and the trajectory the rider chooses along the transition curve. Considering that on this section the rider has to lean the motorcycle, and hence reduce the contact area of the tire with the pavement, it is apparent that any variation in the friction value due to the presence of a water film may seriously compromise safety. Runoff from the road pavement improves when, between reverse longitudinal slopes, a crest rather than a sag curve is chosen, particularly when its edges coincide with the edges of the horizontal curve. Lastly, the results of the investigation show that varying the longitudinal slope shifts the center of the poor-runoff area vertically, and that the magnitude of this area increases as the length of the transition curve increases.
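
The drainage (resultant) gradient at any point combines the longitudinal grade with the cross slope; flagging spots where it falls below a minimum can be sketched as follows (threshold and values hypothetical):

```python
import numpy as np

def drainage_gradient(long_grade, cross_slope):
    """Resultant slope (%) from longitudinal grade and cross slope (%)."""
    return np.hypot(long_grade, cross_slope)

# Along a transition curve the cross slope rotates through zero, so the
# resultant slope can drop below a drainage minimum (e.g. 0.5%):
long_grade = 0.8                                   # % (hypothetical)
cross_slope = np.linspace(-2.5, 2.5, 11)           # % during superelevation runoff
poor_runoff = drainage_gradient(long_grade, cross_slope) < 0.5
```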

Keywords: Drainage, motorcycle safety, superelevation, transition curves, vertical grade.

58 Feature Point Reduction for Video Stabilization

Authors: Theerawat Songyot, Tham Manjing, Bunyarit Uyyanonvara, Chanjira Sinthanayothin

Abstract:

Corner detection and optical flow are common techniques for feature-based video stabilization. However, these algorithms are computationally expensive and must run at a reasonable rate. This paper presents an algorithm for discarding irrelevant feature points and maintaining the rest for future use, so as to reduce the computational cost. The algorithm starts by initializing a maintained set. The feature points in the maintained set are examined for their modeling accuracy. Corner detection is required only when the feature points are insufficiently accurate for future modeling. Then, optical flows are computed from the maintained feature points toward the consecutive frame. After that, a motion model is estimated based on the simplified affine motion model and the least squares method, with outliers belonging to moving objects present. Studentized residuals are used to eliminate such outliers. The model estimation and elimination processes repeat until no more outliers are identified. Finally, the entire algorithm repeats along the video sequence, with the points remaining from the previous iteration used as the maintained set. As a practical application, efficient video stabilization can be achieved by exploiting the computed motion models. Our study shows that the number of times corner detection needs to be performed is greatly reduced, thus significantly lowering the computational cost. Moreover, optical flow vectors are computed only for the maintained feature points, not for outliers, further reducing the computational cost. In addition, the reduced set of feature points is still sufficient for background object tracking, as demonstrated in the simple video stabilizer based on our proposed algorithm.
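
The estimation-elimination loop can be sketched as iterative least squares on a simplified affine model, discarding points whose residuals exceed a threshold relative to the inlier scale; this scaled-residual test is a simplified stand-in for exact studentization, and all names are hypothetical.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine fit: dst ~ [src | 1] @ X, with X of shape (3, 2)."""
    A = np.c_[src, np.ones(len(src))]
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return X

def robust_affine(src, dst, t_crit=2.5):
    """Repeat fit/eliminate until no more outliers (moving objects) remain."""
    keep = np.ones(len(src), dtype=bool)
    while True:
        X = fit_affine(src[keep], dst[keep])
        r = np.linalg.norm(np.c_[src, np.ones(len(src))] @ X - dst, axis=1)
        s = r[keep].std(ddof=3)              # residual scale from current inliers
        new_out = keep & (r > t_crit * s)    # simplified studentized-residual test
        if not new_out.any():
            return X, keep                   # keep marks background feature points
        keep &= ~new_out
```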

Keywords: background object tracking, feature point reduction, low cost tracking, video stabilization.

57 Creating the Color Panoramic View using Medley of Grayscale and Color Partial Images

Authors: Dr. H. B. Kekre, Sudeep D. Thepade

Abstract:

Panoramic view generation has always offered novel and distinct challenges in the field of image processing. Panoramic view generation is the construction of a larger mosaic image from a set of partial images of the desired view. The paper presents a solution to one of the problems of panorama formation, in which some of the partial images are color and others are grayscale. The simplest solution would be to convert all image parts into grayscale images and fuse them into a grayscale panorama. But in a multihued world, a colored panorama will always be preferred. This can be achieved by picking colors from the color parts and applying them to the grayscale parts of the panorama. So the grayscale image parts are first colored with the help of the color image parts, and then all parts are fused to construct the panoramic image. The problem of coloring grayscale images has no exact solution. In the proposed technique of panoramic view generation, the job of transferring color traits from a reference color image to a grayscale image is done by a palette-based method. In this technique, a color palette is prepared using pixel windows of a given size taken from the color image parts. The grayscale image part is then divided into pixel windows of the same size. For every window of the grayscale image part, the palette is searched for the equivalent color values, which are used to color the grayscale window. For palette preparation we have used the RGB color space and Kekre's LUV color space; Kekre's LUV color space gives better coloring quality. The search time through the color palette is improved over exhaustive search using Kekre's fast search technique. After coloring the grayscale image pieces, the next job is fusing all the pieces to obtain the panoramic view. For similarity estimation between partial images, the correlation coefficient is used.
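
A bare-bones RGB-space version of the palette method can be sketched as below: windows from the color parts form the palette, and each grayscale window is colored by its nearest palette entry found by exhaustive search (the paper additionally uses Kekre's LUV space and Kekre's fast search; all names here are hypothetical).

```python
import numpy as np

def build_palette(color_parts, win=2):
    """Palette of (luminance-window key -> color window) from color parts."""
    keys, vals = [], []
    for img in color_parts:                          # img: (H, W, 3) float RGB
        luma = img.mean(axis=2)
        for y in range(0, img.shape[0] - win + 1, win):
            for x in range(0, img.shape[1] - win + 1, win):
                keys.append(luma[y:y + win, x:x + win].ravel())
                vals.append(img[y:y + win, x:x + win].copy())
    return np.array(keys), vals

def colorize(gray, keys, vals, win=2):
    """Color each grayscale window with its nearest palette entry."""
    out = np.zeros(gray.shape + (3,))
    for y in range(0, gray.shape[0] - win + 1, win):
        for x in range(0, gray.shape[1] - win + 1, win):
            q = gray[y:y + win, x:x + win].ravel()
            idx = np.argmin(((keys - q) ** 2).sum(axis=1))   # exhaustive search
            out[y:y + win, x:x + win] = vals[idx]
    return out
```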

Keywords: Panoramic View, Similarity Estimate, Color Transfer, Color Palette, Kekre's Fast Search, Kekre's LUV

56 A Three-Dimensional TLM Simulation Method for Thermal Effect in PV-Solar Cells

Authors: R. Hocine, A. Boudjemai, A. Amrani, K. Belkacemi

Abstract:

Temperature rise is a negative factor in almost all systems. It can be caused by self-heating or by the ambient temperature. In solar photovoltaic cells, this temperature rise affects the behavior of the cells. The ability of a PV module to withstand the effects of periodic hot-spot heating, which occurs when cells are operated under reverse-biased conditions, is closely related to the properties of the cell's semiconductor material.

In addition, the thermal effect also influences the estimation of the maximum power point (MPP) and the electrical parameters of PV modules, such as maximum output power, maximum conversion efficiency, internal efficiency, reliability, and lifetime. The cell junction temperature is a critical parameter that significantly affects the electrical characteristics of PV modules. For practical applications of PV modules, it is very important to accurately estimate the junction temperature and to analyze the thermal characteristics of the modules. Once the temperature variation is taken into account, a more accurate MPP can be acquired for the PV modules, and their maximum utilization efficiency can be further achieved.

In this paper, the three-dimensional Transmission Line Matrix (3D-TLM) method was used to map the surface temperature distribution of solar cells in the reverse-bias mode. It was observed that some cells exhibited an inhomogeneous surface temperature, resulting in localized heating (hot spots). This hot-spot heating causes irreversible destruction of the solar cell structure. Hot spots can have a deleterious impact on the whole solar module if individual solar cells are heated. The results show clearly that solar cells are capable of self-generating considerable amounts of heat, which should be dissipated very quickly to increase the PV module's lifetime.

Keywords: Thermal effect, Conduction, Heat dissipation, Thermal conductivity, Solar cell, PV module, Nodes, 3D-TLM.

55 Quality Classification and Monitoring Using Adaptive Metric Distance and Neural Networks: Application in Pickling Process

Authors: S. Bouhouche, M. Lahreche, S. Ziani, J. Bast

Abstract:

Modern manufacturing facilities are large-scale, highly complex, and operate with a large number of variables under closed-loop control. Early and accurate fault detection and diagnosis for these plants can minimise downtime, increase the safety of plant operations, and reduce manufacturing costs. Fault detection and isolation is more complex, particularly in the case of faulty analog control systems. Analog control systems are not equipped with a monitoring function in which the process parameters are continually visualised. In this situation, it is very difficult to find the relationship between the importance of a fault and its consequences for product failure. In this paper we consider an approach to fault detection, and analysis of its effect on production quality, using adaptive centring and scaling in the pickling process in cold rolling. The fault appeared in one of the power units driving a rotary machine; this machine cannot track a reference speed given by another machine. The length of the metal loop then oscillates continuously, which affects the product quality. Using a computerised data acquisition system, the main machine parameters were monitored. The fault was detected and isolated on the basis of an analysis of the monitored data. Normal and faulty situations were reproduced by an artificial neural network (ANN) model implemented to simulate the normal and faulty status of the rotary machine. Correlation between the product quality, defined by an index, and the residual is used for quality classification.
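
One generic form of adaptive centring and scaling uses exponential forgetting, so that the monitored signal is normalized against slowly drifting operating conditions before residual analysis. A minimal sketch under that assumption (not the authors' exact scheme):

```python
import numpy as np

def adaptive_center_scale(x, lam=0.01):
    """Adaptively centred and scaled signal via exponential forgetting."""
    mu, var = x[0], 1.0
    z = np.empty_like(x, dtype=float)
    for i, xi in enumerate(x):
        mu = (1.0 - lam) * mu + lam * xi                # adaptive mean
        var = (1.0 - lam) * var + lam * (xi - mu) ** 2  # adaptive variance
        z[i] = (xi - mu) / np.sqrt(var + 1e-12)
    return z

# z = adaptive_center_scale(loop_length_signal)   # hypothetical monitored signal
```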

Keywords: Modeling, fault detection and diagnosis (FDD), parameter estimation, neural networks, pickling process.

54 Reducing the Imbalance Penalty through Artificial Intelligence Methods in Geothermal Production Forecasting: A Case Study for Turkey

Authors: H. Anıl, G. Kar

Abstract:

In addition to being rich in renewable energy resources, Turkey is among the countries with promising geothermal energy production potential, given its high installed capacity, low cost, and sustainability. Increasing imbalance penalties become an economic burden for organizations, since geothermal generation plants cannot maintain the balance of supply and demand due to inadequate production forecasts in the day-ahead market. A better production forecast reduces the imbalance penalties of market participants and provides a better balance in the day-ahead market. In this study, using machine learning, deep learning, and time series methods, the total generation of the power plants belonging to Zorlu Doğal Electricity Generation, which has a high installed geothermal capacity, was predicted for the first week and the first two weeks of March; the imbalance penalties were then calculated with these estimates and compared with the real values. These modeling operations were carried out on two datasets: the basic dataset, and a dataset created by extracting new features from it through feature engineering. According to the results, Support Vector Regression outperformed the other traditional machine learning models and exhibited the best performance. In addition, the estimates on the feature-engineered dataset showed lower error rates than those on the basic dataset. The imbalance penalty estimated for the selected organization was lower than the actual imbalance penalty, making the approach both optimal and profitable.
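
The best-performing model is standard Support Vector Regression; a minimal scikit-learn sketch is given below. Feature matrices, targets, and hyperparameter values are hypothetical placeholders.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# X_train: lagged generation plus engineered calendar features (hypothetical);
# y_train: plant output for the corresponding settlement periods.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
# model.fit(X_train, y_train)
# y_pred = model.predict(X_test)     # feeds the imbalance-penalty calculation
```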

Keywords: Machine learning, deep learning, time series models, feature engineering, geothermal energy production forecasting.

53 FT-NIR Method to Determine Moisture in Gluten Free Rice Based Pasta during Drying

Authors: Navneet Singh Deora, Aastha Deswal, H. N. Mishra

Abstract:

Pasta is one of the most widely consumed food products around the world. Rapid determination of the moisture content in pasta will assist food processors in providing on-line quality control during large-scale production. A rapid Fourier transform near-infrared (FT-NIR) method was developed for determining the moisture content in pasta. A calibration set of 150 samples, a validation set of 30 samples, and a prediction set of 25 samples of pasta were used. The diffuse reflection spectra of different types of pasta were measured by an FT-NIR analyzer in the 4,000-12,000 cm⁻¹ spectral range. The calibration and validation sets were designed for the conception and evaluation of the method's adequacy in the moisture content range of 10 to 15 percent (w.b.) of the pasta. The prediction models, based on partial least squares (PLS) regression, were developed in the near-infrared. Conventional criteria such as R², the root mean square error of cross-validation (RMSECV), the root mean square error of estimation (RMSEE), and the number of PLS factors were considered for the selection among three pre-processing methods (vector normalization, minimum-maximum normalization, and multiplicative scatter correction). The spectra of the pasta samples were treated with the different mathematical pre-treatments before being used to build models between the spectral information and the moisture content. The moisture content in pasta predicted by the FT-NIR method correlated very well with the values determined via traditional methods (R² = 0.983), which clearly indicates that FT-NIR methods can be used as an effective tool for the rapid determination of moisture content in pasta. The best calibration model was developed with min-max normalization (MMN) spectral pre-processing (R² = 0.9775). The MMN pre-processing method was found most suitable, and a maximum coefficient of determination (R²) of 0.9875 was obtained for the calibration model developed.
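
The calibration step pairs min-max normalization of each spectrum with PLS regression; a scikit-learn sketch follows (array names, component count, and fold count are hypothetical):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def min_max_normalize(spectra):
    """MMN pre-processing: scale each spectrum to the [0, 1] range."""
    lo = spectra.min(axis=1, keepdims=True)
    hi = spectra.max(axis=1, keepdims=True)
    return (spectra - lo) / (hi - lo)

# X: (n_samples, n_wavenumbers) reflectance spectra; y: moisture % (w.b.)
# X_mmn = min_max_normalize(X)
# pls = PLSRegression(n_components=6)            # factors chosen via RMSECV
# y_cv = cross_val_predict(pls, X_mmn, y, cv=10)
```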

Keywords: FT-NIR, Pasta, moisture determination.

52 Optimization of Assembly and Welding of Complex 3D Structures on the Base of Modeling with Use of Finite Elements Method

Authors: M. N. Zelenin, V. S. Mikhailov, R. P. Zhivotovsky

Abstract:

It is known that residual welding deformations have a negative effect on the processability and operational quality of welded structures, complicating their assembly and reducing their strength. Therefore, the selection of an optimal technology ensuring minimum welding deformations is one of the main goals in developing a manufacturing technology for welded structures. Over the years, JSC SSTC has been developing a theory for the estimation of welding deformations and practical measures for reducing and compensating such deformations during the welding process. For a long time, a methodology based on analytic dependences was used. This methodology allowed defining the volumetric changes of the metal due to welding heating and subsequent cooling. However, the dependences for determining the deformations of structures arising from these volumetric changes in the weld area allowed calculations only for simple structures, such as units, flat sections and sections with small curvature. For complex 3D structures, estimates based on analytic dependences gave significant errors. To eliminate this shortcoming, it was suggested to use the finite element method for solving the deformation problem. Here, one first calculates the longitudinal and transverse shortenings of the welding joints using the method of analytic dependences and then, from the obtained shortenings, calculates forces whose action is equivalent to that of the active welding stresses. A finite-element model of the structure is then developed and the equivalent forces are applied to it. From the results of the calculations, an optimal sequence of assembly and welding is selected, and special measures to reduce and compensate welding deformations are developed and taken.

Keywords: Finite elements method, modeling, expected welding deformations, welding, assembling.

51 Dengue Disease Mapping with Standardized Morbidity Ratio and Poisson-gamma Model: An Analysis of Dengue Disease in Perak, Malaysia

Authors: N. A. Samat, S. H. Mohd Imam Ma’arof

Abstract:

Dengue disease is an infectious vector-borne viral disease commonly found in tropical and sub-tropical regions around the world, especially in urban and semi-urban areas, including in Malaysia. There is currently no vaccine or chemotherapy available for the prevention or treatment of dengue disease; prevention and treatment therefore depend on vector surveillance and control measures. Disease risk mapping has been recognized as an important tool in prevention and control strategies for diseases. The choice of statistical model used for relative risk estimation is important, as a good model will produce a good disease risk map. The aim of this study is therefore to estimate the relative risk for dengue disease based initially on the most common statistic used in disease mapping, the Standardized Morbidity Ratio (SMR), and on one of the earliest applications of Bayesian methodology, the Poisson-gamma model. This paper begins by providing a review of the SMR method, which we then apply to dengue data from Perak, Malaysia. We then fit an extension of the SMR method, the Poisson-gamma model. Both results are displayed and compared using graphs, tables and maps. The results of the analysis show that the latter method gives better relative risk estimates than the SMR. The Poisson-gamma model has been demonstrated to overcome the problem the SMR faces when there are no observed dengue cases in certain regions. However, covariate adjustment in this model is difficult, and there is no possibility of allowing for spatial correlation between risks in adjacent areas. The drawbacks of this model have motivated many researchers to propose other alternative methods for estimating the risk.
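
The two estimators compared here have simple closed forms: the SMR is the ratio of observed to expected counts, and the Poisson-gamma posterior mean shrinks that ratio by a Gamma(a, b) prior, so it stays finite even where no cases were observed. An illustrative sketch with hypothetical counts:

```python
import numpy as np

def smr(observed, expected):
    """Standardized Morbidity Ratio per region."""
    return observed / expected

def poisson_gamma_rr(observed, expected, a, b):
    """Posterior mean relative risk under a Gamma(a, b) prior on the risk."""
    return (observed + a) / (expected + b)

obs = np.array([0, 4, 12])        # hypothetical dengue counts per region
exp = np.array([2.0, 3.5, 9.0])   # expected counts from reference rates
print(smr(obs, exp))                              # 0 where no cases observed
print(poisson_gamma_rr(obs, exp, a=1.0, b=1.0))   # shrunk, non-zero estimates
```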

Keywords: Dengue disease, Disease mapping, Standardized Morbidity Ratio, Poisson-gamma model, Relative risk.

50 Pyrethroid Resistance and Its Mechanism in Field Populations of the Sand Termite, Psammotermes hypostoma Desneux

Authors: Mai. M. Toughan, Ahmed A. A. Sallam, Ashraf O. Abd El-Latif

Abstract:

Termites are eusocial insects found on all continents except Antarctica. Termites have a seriously destructive impact, damaging local huts and subsistence crops. The annual cost of termite damage and its control runs into the billions globally. In Egypt, most of this damage is due to subterranean termite species, especially the sand termite, P. hypostoma. Pyrethroids became the primary weapon for subterranean termite control after the use of chlorpyrifos as a soil termiticide was banned. Despite the important role of pyrethroids in termite control, their extensive use in pest control has led to the rise of insecticide resistance, which may render many of the pyrethroids ineffective. The ability to diagnose the precise mechanism of pyrethroid resistance in any insect species is the key component of its management at a specified location for a specific population. In the present study, detailed toxicological and biochemical studies were conducted on the mechanism of pyrethroid resistance in P. hypostoma. The susceptibility of field populations of P. hypostoma to deltamethrin, α-cypermethrin and λ-cyhalothrin was evaluated. The results revealed that workers of P. hypostoma have developed high resistance levels against the tested pyrethroids. Studies carried out through estimation of detoxification enzyme activity indicated that enhanced esterase and cytochrome P450 activities are probably important mechanisms of pyrethroid resistance in field populations. Elevated esterase activity, as well as an additional esterase isozyme, was observed in the pyrethroid-resistant populations compared to the susceptible populations. A strong positive correlation between cytochrome P450 activity and pyrethroid resistance was also reported. Deltamethrin could be recommended as a resistance-breaking pyrethroid that is active against resistant populations of P. hypostoma.

Keywords: Psammotermes hypostoma, pyrethroid resistance, esterase, cytochrome P450.

49 Dependence of Densification, Hardness and Wear Behaviors of Ti6Al4V Powders on Sintering Temperature

Authors: Adewale O. Adegbenjo, Elsie Nsiah-Baafi, Mxolisi B. Shongwe, Mercy Ramakokovhu, Peter A. Olubambi

Abstract:

The sintering step in powder metallurgy (P/M) processes is very sensitive, as it determines to a large extent the properties of the final component produced. Spark plasma sintering has, over the past decade, been used extensively in consolidating a wide range of materials, including metallic alloy powders. This novel, non-conventional sintering method has proven advantageous, offering full densification of materials, high heating rates, low sintering temperatures, and short sintering cycles compared with conventional sintering methods. Ti6Al4V is regarded as the most widely used α+β alloy due to its impressive mechanical performance in service environments, especially in the aerospace and automobile industries, being a light metal alloy with the capacity for the fuel efficiency needed in these industries. The P/M route has been a promising method for the fabrication of parts made from Ti6Al4V alloy due to its cost and material-loss reductions and its ability to produce near-net and intricate shapes. However, the use of this alloy has been largely limited owing to its relatively poor hardness and wear properties. The effect of sintering temperature on the densification, hardness, and wear behavior of spark plasma sintered Ti6Al4V powders was investigated in the present study. Sintering of the alloy powders was performed in the 650-850°C temperature range at a constant heating rate, applied pressure and holding time of 100°C/min, 50 MPa and 5 min, respectively. Density measurements were carried out according to Archimedes' principle, and microhardness tests were performed on sectioned, as-polished surfaces at a load of 100 gf and a dwell time of 15 s. Dry sliding wear tests were performed at sliding loads of 5, 15, 25 and 35 N using a ball-on-disc tribometer with WC as the counterface material. Microstructural characterization of the sintered samples and wear tracks was carried out using SEM and EDX techniques. The density and hardness of the sintered samples increased with increasing sintering temperature. Near-full densification (99.6% of the theoretical density) and a Vickers micro-indentation hardness of 360 HV were attained at 850°C. The coefficient of friction (COF) and wear depth improved significantly with increased sintering temperature under all the loading conditions examined, except at 25 N, indicating better mechanical properties at higher sintering temperatures. Worn surface analyses showed that the wear mechanism was a synergy of adhesive and abrasive wear, although the former was prevalent.

Keywords: Hardness, powder metallurgy, Spark plasma sintering, wear.

48 Life Cycle Assessment of Residential Buildings: A Case Study in Canada

Authors: Venkatesh Kumar, Kasun Hewage, Rehan Sadiq

Abstract:

Residential buildings consume significant amounts of energy and produce large amounts of emissions and waste. However, there is substantial potential for energy savings in this sector, which needs to be evaluated over the life cycle of residential buildings. Life Cycle Assessment (LCA) methodology has been employed to study the primary energy use and associated environmental impacts of the different phases (product, construction, use, end of life, and beyond building life) of residential buildings. Four alternatives of residential buildings in Vancouver (BC, Canada) with a 50-year lifespan have been evaluated: a High Rise Apartment (HRA), a Low Rise Apartment (LRA), a Single family Attached House (SAH), and a Single family Detached House (SDH). The life cycle performance of the buildings is evaluated for embodied energy, embodied environmental impacts, operational energy, operational environmental impacts, total life cycle energy, and total life cycle environmental impacts. Estimation of operational energy and the LCA are performed using DesignBuilder software and Athena Impact Estimator software, respectively. The study results revealed that, over the life span of the buildings, energy use and environmental impacts follow an identical pattern. LRA was found to be the best alternative in terms of embodied energy use and embodied environmental impacts, while HRA showed the best life cycle performance in terms of minimum energy use and environmental impacts. A sensitivity analysis was also carried out to study the influence of building service lifespans of 50, 75, and 100 years on the relative significance of embodied energy and total life cycle energy. The life cycle energy requirement of SDH was found to be the most significant among the four types of residential buildings. Overall, the results disclose that the operational phase of these buildings accounts for 90% of the total life cycle energy, which far outweighs the minor differences in embodied effects between the buildings.

Keywords: Building simulation, environmental impacts, life cycle assessment, life cycle energy analysis, residential buildings.

47 Sensor and Actuator Fault Detection in Connected Vehicles under a Packet Dropping Network

Authors: Z. Abdollahi Biron, P. Pisu

Abstract:

Connected vehicles are one of the promising technologies for future Intelligent Transportation Systems (ITS). A connected vehicle system is essentially a set of vehicles communicating through a network to exchange information with each other and with the infrastructure. Although this interconnection of vehicles can be beneficial in creating an efficient, sustainable, and green transportation system, a set of safety and reliability challenges comes with this technology. The first challenge arises from information loss due to an unreliable communication network, which affects the control/management systems of the individual vehicles and of the overall system. Such a scenario may lead to degraded or even unsafe operation, which could be potentially catastrophic. Secondly, faulty sensors and actuators can affect an individual vehicle's safe operation and in turn create a potentially unsafe node in the vehicular network. Further, sending faulty sensor information to other vehicles, and failures in actuators, may significantly affect the safe operation of the overall vehicular network. It is therefore of utmost importance to take these issues into consideration while designing the control/management algorithms of individual vehicles as part of a connected vehicle system. In this paper, we consider a connected vehicle system under Co-operative Adaptive Cruise Control (CACC) and propose a fault diagnosis scheme that deals with the aforementioned challenges. Specifically, the conventional CACC algorithm is modified by adding a Kalman filter-based estimation algorithm to suppress the effect of lost information under an unreliable network. Further, a sliding mode observer-based algorithm is used to improve sensor reliability under faults. The effectiveness of the overall diagnostic scheme is verified via simulation studies.
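
Packet-drop handling in a Kalman filter amounts to skipping the measurement update whenever no packet arrives, so the estimate coasts on the prediction. A generic sketch under linear system assumptions (matrices hypothetical, not the authors' vehicle model):

```python
import numpy as np

def kf_step(x, P, z, A, C, Q, R, received):
    """One Kalman filter step; on packet loss, hold the time update only."""
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    if not received:                 # packet dropped: no measurement update
        return x_pred, P_pred
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```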

Keywords: Fault diagnostics, communication network, connected vehicles, packet drop out, platoon.

46 Estimation of Seismic Ground Motion and Shaking Parameters Based On Microtremor Measurements at Palu City, Central Sulawesi Province, Indonesia

Authors: P. S. Thein, S. Pramumijoyo, K. S. Brotopuspito, J. Kiyono, W. Wilopo, A. Furukawa, A. Setianto

Abstract:

In this study, we estimated seismic ground motion parameters based on microtremor measurements at Palu City. Several earthquakes have struck along the Palu-Koro Fault in recent years; the magnitude Mw 6.3 event of January 23, 2005 (USGS epicenter) caused several casualties. We conducted a microtremor survey to estimate the strong ground motion distribution during such earthquakes, and from this survey we produced maps of peak ground acceleration, peak ground velocity, seismic vulnerability index and ground shear strain for Palu City. We performed single-station microtremor observations at 151 sites in Palu City, and also conducted an 8-site microtremor array investigation to obtain a representative determination of the subsurface soil structure. From the array observations, Palu City corresponds to relatively soft soil conditions with Vs ≤ 300 m/s; the predominant periods from the horizontal-to-vertical spectral ratios (HVSRs) are in the range of 0.4 to 1.8 s, and the frequencies are in the range of 0.7 to 3.3 Hz. Strong ground motions of the Palu area were predicted based on the empirical stochastic Green's function method. Peak ground acceleration and velocity exceed 400 gal and 30 kine in some areas, which would with high probability cause severe damage to buildings. The microtremor survey results showed that hilly areas have a low seismic vulnerability index and ground shear strain, whereas the coastal alluvium is composed of material with indications of high seismic vulnerability and ground shear strain.
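
The HVSR step itself divides the geometric mean of the horizontal amplitude spectra by the vertical one; the predominant period follows from the peak. A bare sketch without the smoothing and windowing used in practice (trace names hypothetical):

```python
import numpy as np

def hvsr(ns, ew, ud, fs):
    """Horizontal-to-vertical spectral ratio of one three-component record."""
    win = np.hanning(len(ud))
    amp = lambda x: np.abs(np.fft.rfft(x * win))
    h = np.sqrt(amp(ns) * amp(ew))            # geometric mean of the horizontals
    f = np.fft.rfftfreq(len(ud), d=1.0 / fs)
    return f[1:], (h / amp(ud))[1:]           # skip the DC bin

# f, ratio = hvsr(ns_trace, ew_trace, ud_trace, fs=100.0)  # hypothetical traces
# f0 = f[ratio.argmax()]; T0 = 1.0 / f0      # predominant frequency and period
```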

Keywords: Palu-Koro Fault, Microtremor, Peak Ground Acceleration, Peak Ground Velocity and Seismic Vulnerability Index.

45 Spatial Mapping of Dengue Incidence: A Case Study in Hulu Langat District, Selangor, Malaysia

Authors: Er, A. C., Rosli, M. H., Asmahani A., Mohamad Naim M. R., Harsuzilawati M.

Abstract:

Dengue is a mosquito-borne infection that has risen to an alarming rate in recent decades. It can be found in tropical and sub-tropical climates. In Malaysia, dengue has been declared one of the national health threats to the public. This study aimed to map the spatial distribution of dengue cases in the district of Hulu Langat, Selangor via a combination of Geographic Information System (GIS) and spatial statistics tools. Data related to dengue were gathered from various government health agencies. The locations of dengue cases were geocoded using a handheld Trimble Juno SB GPS. A total of 197 dengue cases occurring in 2003 were used in this study. The data were then aggregated to the sub-district level and converted into GIS format. The study also used population and demographic data as well as the boundary of Hulu Langat. To assess the spatial distribution of the dengue cases, three spatial statistics methods (Moran's I, average nearest neighbor (ANN) analysis, and kernel density estimation) were applied together with spatial analysis in the GIS environment. These three indices were used to analyze the spatial distribution and average distance of dengue incidence and to locate the hot spots of dengue cases. The results indicated that the dengue cases were clustered (p < 0.01) when analyzed using Moran's I, with a z-score of 5.03. The ANN analysis yielded an average nearest neighbor ratio of less than 1, namely 0.518755 (p < 0.0001). From this result, we can expect the pattern of dengue cases in Hulu Langat district to exhibit clustering. The z-score for dengue incidence within the district was -13.0525 (p < 0.0001). It was also found that significant spatial autocorrelation of dengue incidence occurs at an average distance of 380.81 meters (p < 0.0001). Several locations, especially residential areas, were also identified as hot spots of dengue cases in the district.
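
Global Moran's I, the clustering statistic reported above, has the closed form I = (n / sum(W)) * (z' W z) / (z' z) for deviations z and a spatial weight matrix W. A minimal sketch (the weight matrix construction is case-specific and hypothetical here):

```python
import numpy as np

def morans_i(x, W):
    """Global Moran's I for values x and spatial weight matrix W."""
    z = x - x.mean()
    return (len(x) / W.sum()) * (z @ W @ z) / (z @ z)

# cases = ...               # dengue counts per sub-district (hypothetical)
# W = ...                   # contiguity or distance-based weights
# I = morans_i(cases, W)    # above the expectation -1/(n-1) suggests clustering
```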

Keywords: Dengue, geographic information system (GIS), spatial analysis, spatial statistics

44 An Estimation of Rice Output Supply Response in Sierra Leone: A Nerlovian Model Approach

Authors: Alhaji M. H. Conteh, Xiangbin Yan, Issa Fofana, Brima Gegbe, Tamba I. Isaac

Abstract:

Rice is Sierra Leone's staple food, and the nation imports over 120,000 metric tons annually due to a shortfall in its cultivation. The insufficient cultivation of the crop in Sierra Leone is caused by many problems, and it has led to an endlessly widening gap between supply and demand for the crop within the country. Consequently, this has forced the government to spend huge sums on the importation of this grain, which could otherwise have been cultivated domestically at a cheaper cost. Hence, this research explores the supply response of rice in Sierra Leone within the period 1980-2010. The Nerlovian adjustment model was applied to the Sierra Leone rice data set for the period 1980-2010. The estimated trend equations revealed that time had a significant effect on the output, productivity (yield) and area (acreage) of the rice crop within the period 1980-2010, generally at the 1% level of significance. The results showed that almost all of the growth in output is attributable to an increase in the area cultivated to the crop. The time trend variable included for government policy intervention showed an insignificant effect on all the variables considered in this research. Both the short-run and long-run price responses were inelastic, since all their values were less than one. From the findings above, immediate actions that will lead to productivity growth in rice cultivation are required. To achieve this, the responsible agencies should provide extension service schemes to farmers and motivate them to adopt modern rice varieties and technology in their rice cultivation ventures.
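
The Nerlovian partial-adjustment model reduces to a lagged regression; in log form the short-run price elasticity is the coefficient on lagged price, and the long-run elasticity divides it by one minus the coefficient on the lagged dependent variable. A minimal sketch with hypothetical series:

```python
import numpy as np

def nerlove_elasticities(lnA, lnP):
    """ln A_t = a0 + a1 ln P_{t-1} + a2 ln A_{t-1} + e_t (partial adjustment)."""
    X = np.c_[np.ones(len(lnA) - 1), lnP[:-1], lnA[:-1]]
    coef, *_ = np.linalg.lstsq(X, lnA[1:], rcond=None)
    short_run = coef[1]
    long_run = coef[1] / (1.0 - coef[2])     # elasticity after full adjustment
    return short_run, long_run

# sr, lr = nerlove_elasticities(np.log(area_series), np.log(price_series))
```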

Keywords: Nerlovian adjustment model, price elasticities, Sierra Leone, Trend equations.

43 Bio-Estimation of Selected Heavy Metals in Shellfish and Their Surrounding Environmental Media

Authors: Ebeed A. Saleh, Kadry M. Sadek, Safaa H. Ghorbal

Abstract:

Because determining the pollution status of fresh resources in Egyptian territorial waters is very important for public health, this study was carried out to reveal the levels of heavy metals in shellfish and their environment, and their relation to the highly developed industrial activities in those areas. A total of 100 shellfish samples [10 crustaceans (shrimp) and 10 mollusks (oysters) from each coast] were randomly collected from the Rosetta, Edku, El-Maadiya, Abo-Kir and El-Max coasts. Additionally, 10 samples of both water and sediment were collected from each coast. Each collected sample was analyzed for cadmium, chromium, copper, lead and zinc residues using a Perkin Elmer atomic absorption spectrophotometer (AAS). The results showed that the levels of heavy metals were highest in the water and sediment from Abo-Kir and decreased successively for the Rosetta, Edku, El-Maadiya, and El-Max coasts; the concentrations of heavy metals in shellfish, except copper and zinc, exhibited the same pattern. Of the heavy metal concentrations in shellfish tissue, zinc was highest, with concentrations decreasing successively for copper, lead, chromium and cadmium on all coasts except Abo-Kir, where the chromium level was highest and the other metals decreased successively for zinc, copper, lead and cadmium. In Rosetta, chromium was higher only in the mollusks, while the level of this metal was lower in the crustaceans; this trend was observed at the Edku, El-Maadiya and El-Max coasts as well. Herein, we discuss the importance of such contamination for public health and the sources of shellfish contamination with heavy metals. We suggest measures to minimize and prevent these pollutants in the aquatic environment and, furthermore, to protect humans from excessive intake.

Keywords: Atomic absorption, heavy metals, sediment, shellfish, water.

42 A Bayesian Classification System for Facilitating an Institutional Risk Profile Definition

Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan

Abstract:

This paper presents an approach for the easy creation and classification of institutional risk profiles, supporting endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to support the identification of the most important risk factors. Risk profiles then employ a risk factor classifier and associated configurations to support digital preservation experts with a semi-automatic estimation of the endangerment group for file format risk profiles. Our goal is to make use of an expert knowledge base, acquired through a digital preservation survey, in order to detect preservation risks for a particular institution. Another contribution is support for the visualisation of risk factors for a required analysis dimension. Using the naive Bayes method, the decision support system recommends to an expert the matching risk profile group for the previously selected institutional risk profile. The proposed methods improve the visibility of risk factor values and the quality of the digital preservation process. The presented approach is designed to facilitate decision making for the preservation of digital content in libraries and archives, using domain expert knowledge and the values of file format risk profiles. To facilitate decision making, the aggregated information about the risk factors is presented as a multidimensional vector. The goal is to visualise particular dimensions of this vector for analysis by an expert and to define its profile group. A sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section.
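
A Gaussian naive Bayes classifier (one plausible reading of the paper's "naive Bayes method"; the feature values and labels below are hypothetical) can recommend an endangerment group for a new institutional profile as follows:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Rows: institutional risk-factor vectors from the survey (hypothetical values);
# labels: endangerment group assigned by digital preservation experts.
X = np.array([[0.9, 0.2, 0.7], [0.1, 0.8, 0.3], [0.8, 0.3, 0.6], [0.2, 0.9, 0.2]])
y = np.array([1, 0, 1, 0])                    # 1 = endangered, 0 = low risk

clf = GaussianNB().fit(X, y)
profile = np.array([[0.7, 0.4, 0.5]])         # new institutional risk profile
print(clf.predict(profile), clf.predict_proba(profile))
```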

Keywords: linked open data, information integration, digital libraries, data mining.

41 Improving Production Traits for El-Salam and Mandarah Chicken Strains by Crossing II-Estimation of Crossbreeding Effects on Egg Production and Egg Quality Traits

Authors: Ayman E. Taha, Fawzy A. Abd El-Ghany

Abstract:

A crossbreeding experiment was carried out between two Egyptian strains of chickens, namely Mandarah (MM) and El-Salam (SS). The two purebred strains and their reciprocal crosses (MS and SM) were used to estimate the effect of crossing on egg laying and egg quality parameters, direct additive and maternal additive effects, as well as heterosis percentages for the studied traits. Results revealed that the SM cross recorded the highest significant averages for most egg production traits, including body weight at sexual maturity (BW1), egg number at the first 90 days, 42 weeks and 65 weeks of age (EN1, EN2 and EN3, respectively), egg weight at 90 days and 42 weeks of age (EW1 and EW2), egg mass at 90 days, 42 weeks and 65 weeks of age (EM1, EM2 and EM3, respectively), feed conversion ratio for egg production at 90 days, 42 weeks and 65 weeks of age (FCR1, FCR2 and FCR3, respectively), and fertility and commercial hatchability percentages. Moreover, the SM line reached the age of sexual maturity (ASM) and the period to the first ten eggs (Pf10) earlier than the other lines. On the other hand, crossing did not appreciably improve egg quality parameters. Estimates and percentages of the direct additive effect (GI) were negative for most of the studied traits, except for EN1, EN2, EN3, FCR3, and the fertility, scientific and commercial hatchability percentages, which were positive. Estimates and percentages of maternal heterosis (Gm) were positive for all the studied egg production traits, except for BW2, BW3, ASM, Pf10, FCR1, FCR2, FCR3 and scientific hatchability, which were negative. Positive estimates and percentages of heterosis were also recorded for most egg production and egg quality traits. It was concluded that using the SS strain as a sire line and the MM strain as a dam line results in the best new commercial egg line (SM), which is of great interest to poultry breeders in Egypt.
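
The heterosis percentages reported above are conventionally computed relative to the mid-parent mean; a minimal sketch with hypothetical trait means:

```python
def heterosis_percent(f1_mean, parent1_mean, parent2_mean):
    """Heterosis relative to the mid-parent value, in percent."""
    mid_parent = (parent1_mean + parent2_mean) / 2.0
    return 100.0 * (f1_mean - mid_parent) / mid_parent

# e.g. egg number at 42 weeks (hypothetical strain means):
print(heterosis_percent(110.0, 95.0, 100.0))   # -> about 12.8%
```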

Keywords: Mandarah and El-Salam chickens, Crossing, Egg production, Egg quality, Crossbreeding components.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2875
40 Study on Seismic Performance of Reinforced Soil Walls to Modify the Pseudo Static Method

Authors: Majid Yazdandoust

Abstract:

This study suggests a displacement-based design method, using finite difference numerical modelling, for soil retaining walls reinforced with steel strips. Dynamic loading characteristics such as duration, frequency and peak ground acceleration, the geometrical characteristics of the reinforced soil structure, and the type of site are considered in order to correct the pseudo-static method and, finally, to introduce the pseudo-static coefficient as a function of the seismic performance level and the peak ground acceleration. For this purpose, the influence of the dynamic loading characteristics, reinforcement length, height of the reinforced system and type of site on the seismic behaviour of steel-strip reinforced soil retaining walls is investigated. Numerical results illustrate that the seismic response of this type of wall is highly dependent on cumulative absolute velocity, maximum acceleration, wall height and reinforcement length, so that the reinforcement length can be introduced as the main factor governing the shape of failure. Consideration of the loading parameters, the geometric parameters of the wall and the type of site showed that the method used in this study leads to more efficient designs than other methods, which are usually based on the limit-equilibrium concept. The outputs show the over-estimation of limit-equilibrium design methods in comparison with the displacement-based method proposed here.
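
For context, the classical pseudo-static method that the paper sets out to modify replaces the earthquake load with a static inertial force proportional to the wall weight, with the coefficient commonly tied to peak ground acceleration. The sketch below uses that textbook form purely for illustration; the performance-dependent coefficient proposed in the paper is not reproduced here.

```python
# Textbook pseudo-static idealisation: the seismic action on the reinforced
# soil mass is replaced by a horizontal static force F_h = k_h * W.
# The rule k_h = alpha * (PGA / g) is a common simplification, not the
# performance-based coefficient proposed in the paper.
G = 9.81  # gravitational acceleration, m/s^2

def pseudo_static_force(weight_kn: float, pga: float, alpha: float = 0.5) -> float:
    """Horizontal inertial force (kN) on a soil mass of given weight (kN).

    pga   : peak ground acceleration in m/s^2
    alpha : empirical reduction factor (often taken around 0.3-0.5)
    """
    k_h = alpha * pga / G  # dimensionless pseudo-static coefficient
    return k_h * weight_kn

# e.g. a 2000 kN reinforced block under PGA = 3.0 m/s^2:
print(pseudo_static_force(2000.0, 3.0))  # ~305.8 kN
```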

Keywords: Pseudo static coefficient, seismic performance design, numerical modeling, steel strip reinforcement, retaining walls, cumulative absolute velocity, failure shape.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2152
39 Modeling a Multinomial Logit Model of Intercity Travel Mode Choice Behavior for All Trips in Libya

Authors: Manssour A. Abdulsalam Bin Miskeen, Ahmed Mohamed Alhodairi, Riza Atiq Abdullah Bin O. K. Rahmat

Abstract:

From a planning point of view, mode choice modelling is essential because of the massive investments incurred in transportation systems. Intercity travellers in Libya have distinct characteristics compared with travellers from other countries, including cultural and socioeconomic factors. Consequently, the goal of this study is to characterise intercity travel behaviour using disaggregate models for projecting the demand for nation-level intercity travel in Libya. A multinomial logit model for all intercity trips has been formulated to examine national-level intercity transportation in Libya. The multinomial logit model was calibrated using a nationwide revealed preference (RP) and stated preference (SP) survey. The model was developed for different purposes of intercity trips (work, social and recreational). The model parameters were estimated using the maximum likelihood method. The data needed for model development were obtained from all major intercity corridors in Libya. The final sample consisted of 1300 interviews. About two-thirds of these data were used for model calibration, and the remainder was used for model validation. This study, which is the first of its kind in Libya, investigates intercity travellers' mode-choice behaviour. The intercity travel mode-choice model was successfully calibrated and validated. The outcomes indicate that the overall model is effective and yields a high precision of estimation. The proposed model is useful because it is responsive to many variables and can be employed to determine the impact of changes in various characteristics on the demand for different travel modes. Model estimates may also be of value to planners, who can estimate shares for the various modes and determine the impact of specific policy changes on the demand for intercity travel.
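
The multinomial logit model underlying this abstract assigns each mode a systematic utility and converts utilities to choice probabilities with a softmax. The sketch below shows that standard form with made-up modes and coefficients, not the calibrated Libyan model.

```python
# Standard multinomial logit choice probabilities:
#   P(mode i) = exp(V_i) / sum_j exp(V_j)
# Modes, costs, times, and coefficients are hypothetical; the coefficients
# calibrated on the Libyan RP/SP data are not reproduced here.
import math

def mnl_probabilities(utilities: dict[str, float]) -> dict[str, float]:
    m = max(utilities.values())  # shift for numerical stability
    exp_v = {mode: math.exp(v - m) for mode, v in utilities.items()}
    total = sum(exp_v.values())
    return {mode: e / total for mode, e in exp_v.items()}

# V = beta_cost * cost + beta_time * travel_time (illustrative linear utility)
beta_cost, beta_time = -0.05, -0.02
alternatives = {                 # (cost, in-vehicle time in minutes)
    "car":   (30.0, 240.0),
    "bus":   (12.0, 330.0),
    "plane": (90.0, 90.0),
}
V = {mode: beta_cost * c + beta_time * t for mode, (c, t) in alternatives.items()}
print(mnl_probabilities(V))      # predicted mode shares for this traveller
```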

Keywords: Multinomial logit model, improved intercity transport, intercity mode-choice behavior, disaggregate analysis.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 7867
38 Estimating the Costs of Conservation in Multiple Output Agricultural Setting

Authors: T. Chaiechi, N. Stoeckl

Abstract:

The scarcity of resources for biodiversity conservation gives rise to the need for strategic investment, with priority given to the cost of conservation. While the literature provides abundant methodological options for biodiversity conservation, estimating the true cost of conservation remains abstract and simplistic, without recognising the dynamic nature of that cost. Some recent works demonstrate the power of economic theory to inform biodiversity decisions, particularly regarding the costs and benefits of biodiversity; however, the integration of the concept of true cost into biodiversity actions and planning has been very slow to come about, especially at the farm level. Conservation planning studies often use area as a proxy for cost, neglecting differences in land values as well as protected areas. This literature considers only heterogeneous benefits, while land costs are treated as homogeneous. Analysis under the assumption of cost homogeneity results in biased estimation: not only does it fail to address the true total cost of biodiversity actions and plans, but it also fails to screen out lands that are more (or less) expensive and/or more difficult (or more suitable) for biodiversity conservation purposes, hindering the validity and comparability of the results. "Economies of scope" is another of the most neglected aspects in the conservation literature. The concept of economies of scope captures the existence of cost complementarities within a multiple-output production system, and it implies a lower cost when a given farm produces multiple outputs concurrently. If there are, indeed, economies of scope, then a simplistic representation of costs will tend to overestimate the true cost of conservation, leading to suboptimal outcomes. The aim of this paper, therefore, is to provide a first broad review of the various theoretical ways in which economies of scope are likely to occur in conservation. Consequently, the paper addresses gaps that have to be filled in future analyses.
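
To make the cost-complementarity idea concrete: the standard degree-of-scope-economies measure compares the cost of joint production with the summed costs of producing each output alone. The cost figures below are invented; only the SC formula follows the standard definition.

```python
# Degree of economies of scope for two outputs (e.g. agricultural commodity
# production y1 and conservation output y2):
#   SC = [C(y1, 0) + C(0, y2) - C(y1, y2)] / C(y1, y2)
# SC > 0 indicates economies of scope: joint production is cheaper, so
# summing stand-alone costs overstates the true cost of conservation.
# All cost figures are hypothetical.

def scope_economies(c_y1_alone: float, c_y2_alone: float, c_joint: float) -> float:
    return (c_y1_alone + c_y2_alone - c_joint) / c_joint

sc = scope_economies(c_y1_alone=100.0,  # farm output alone
                     c_y2_alone=40.0,   # conservation alone
                     c_joint=120.0)     # joint production on the same farm
print(f"SC = {sc:.2f}")  # 0.17 > 0: cost complementarities are present
```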

Keywords: Cost, Biodiversity conservation, Multi-output production systems, Empirical techniques.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2206