Search results for: modeling
2929 Evaluation of the Dry Compressive Strength of Refractory Bricks Developed from Local Kaolin
Authors: Olanrewaju Rotimi Bodede, Akinlabi Oyetunji
Abstract:
Modeling of the dry compressive strength of sodium silicate bonded kaolin refractory bricks was studied. The materials used for this research work included refractory clay obtained from the Ijero-Ekiti kaolin deposit at coordinates 7º 49´N and 5º 5´E and sodium silicate obtained from the open market in Lagos at coordinates 6°27′11″N 3°23′45″E, both in the South Western part of Nigeria. The mineralogical composition of the kaolin clay was determined using an Energy Dispersive X-Ray Fluorescence Spectrometer (ED-XRF). The clay samples were crushed and sieved using a laboratory pulveriser, ball mill and sieve shaker, respectively, to obtain 100 μm diameter particles. A manual pipe extruder of dimensions 30 mm diameter by 43.30 mm height was used to prepare the samples with varying percentage volumes of sodium silicate (5 %, 7.5 %, 10 %, 12.5 %, 15 %, 17.5 %, 20 % and 22.5 %), while kaolin and water were kept at 50 % and 5 %, respectively, for the compressive test. The samples were left to dry in the open laboratory atmosphere for 24 hours to remove moisture and were then fired in an electrically powered muffle furnace. Firing was done at the following temperatures: 700ºC, 750ºC, 800ºC, 850ºC, 900ºC, 950ºC, 1000ºC and 1100ºC. A compressive strength test was carried out on the dried samples using a Testometric Universal Testing Machine (TUTM) equipped with a computer and printer; an optimum compressive strength of 4.41 kN/mm² was obtained at 12.5 % sodium silicate. The experimental results were modeled with MATLAB and Origin packages using polynomial regression equations that predicted the estimated values of dry compressive strength and were later validated with Pearson's correlation coefficient, thereby obtaining a very high positive correlation value of 0.97.
Keywords: dry compressive strength, kaolin, modeling, sodium silicate
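A minimal sketch of the regression-and-validation step described above, assuming illustrative strength values rather than the paper's measurements; it fits a polynomial to strength versus sodium silicate content and checks the fit with Pearson's correlation coefficient:

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative data (not the paper's measurements): sodium silicate content
# (% vol.) and dry compressive strength, with a peak near 12.5 % as reported.
silicate = np.array([5.0, 7.5, 10.0, 12.5, 15.0, 17.5, 20.0, 22.5])
strength = np.array([2.1, 3.0, 3.9, 4.41, 4.1, 3.6, 3.0, 2.4])  # kN/mm^2

# Second-order polynomial regression, analogous to the MATLAB/Origin fit
coeffs = np.polyfit(silicate, strength, deg=2)
predicted = np.polyval(coeffs, silicate)

# Validate the fit with Pearson's correlation coefficient
r, _ = pearsonr(strength, predicted)
print("polynomial coefficients:", np.round(coeffs, 4))
print(f"Pearson correlation between measured and predicted values: r = {r:.2f}")
```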
Procedia PDF Downloads 455
2928 Numerical Modeling of Geogrid Reinforced Soil Bed under Strip Footings Using Finite Element Analysis
Authors: Ahmed M. Gamal, Adel M. Belal, S. A. Elsoud
Abstract:
This article aims to study the effect of reinforcement inclusions (geogrids) on the bearing capacity of sand dunes under strip footings. In this research, an experimental physical model was carried out to study the effect of the depth of the first geogrid reinforcement (u/B), the spacing between the reinforcement layers (h/B) and their extension relative to the footing length (L/B) on the mobilized bearing capacity. This paper presents numerical modeling using the commercial finite element package PLAXIS (version 8.2) to simulate the laboratory physical model, studying the same parameters previously handled in the experimental work (u/B, L/B & h/B) for the purpose of validation. In this study, the soil, the geogrid, the interface element and the boundary conditions are discussed together with a set of finite element results and the validation. The validated finite element model was then used to study real materials and dimensions of strip foundations. Based on the experimental and numerical investigation results, a significant increase in the bearing capacity of footings occurred due to an appropriate location of the inclusions in the sand. The optimum embedment depth of the first reinforcement layer (u/B) is equal to 0.25. The optimum spacing between successive reinforcement layers (h/B) is equal to 0.75. The optimum length of the reinforcement layer (L/B) is equal to 7.5. The optimum number of reinforcement layers is equal to 4. The study showed a directly proportional relation between the number of reinforcement layers and the bearing capacity ratio (BCR), and an inversely proportional relation between the footing width and the BCR.
Keywords: reinforced soil, geogrid, sand dunes, bearing capacity
Procedia PDF Downloads 421
2927 Computational Fluid Dynamics Modeling of Liquefaction of Wood and Its Model Components Using a Modified Multistage Shrinking-Core Model
Authors: K. G. R. M. Jayathilake, S. Rudra
Abstract:
Wood degradation in hot compressed water is modeled with a computational fluid dynamics (CFD) code using cellulose, xylan, and lignin as model compounds. The model compounds are reacted under catalyst-free conditions in a temperature range from 250 to 370 °C, using a simplified reaction scheme in which water-soluble products, methanol-soluble products, char-like compounds and gas are generated through intermediates for each model compound. A modified multistage shrinking-core model is developed to simulate particle degradation. In the modified shrinking-core model, each model compound is hydrolyzed in a separate stage. Cellulose is decomposed to glucose/oligomers before producing degradation products. Xylan is decomposed through xylose and then to degradation products, while lignin is decomposed into soluble products before producing guaiacol, total organic carbon (TOC) and then char and gas. Hydrolysis of each model compound is used as the main reaction of the process. Diffusion of water monomers to the particle surface to initiate hydrolysis, and dissolution of the products in water, are given importance during the modeling process. In the developed model, the temperature dependence follows the Arrhenius relationship. Kinetic parameters from the literature are used for the mathematical model; meanwhile, the limited initial fast-reaction kinetic data limit the development of more accurate CFD models. The liquefaction results of the CFD model are analyzed and validated using the experimental data available in the literature, where they show reasonable agreement.
Keywords: computational fluid dynamics, liquefaction, shrinking-core, wood
Procedia PDF Downloads 125
2926 Modeling and Numerical Simulation of Heat Transfer and Internal Loads at Insulating Glass Units
Authors: Nina Penkova, Kalin Krumov, Liliana Zashcova, Ivan Kassabov
Abstract:
Insulating glass units (IGU) are widely used in advanced and renovated buildings in order to reduce the energy needed for heating and cooling. Rules for the choice of IGU to ensure energy efficiency and thermal comfort in the indoor space are well known. The existence of internal loads (gauge or vacuum pressure in the hermetically sealed gas space) requires additional attention in the design of the facades. The internal loads appear with variations of altitude, meteorological pressure and gas temperature relative to their values at the time of sealing. The gas temperature depends on the presence of coatings, the coating position in the transparent multi-layer system, the IGU geometry and space orientation, and its fixing on the facades, and it varies with the climate conditions. An algorithm for modeling and numerical simulation of the thermal fields and internal pressure in the gas cavity of insulating glass units as a function of the meteorological conditions is developed. It includes models of the radiation heat transfer at solar and infrared wavelengths, indoor and outdoor convection heat transfer and free convection in the hermetically sealed gas space, treating the gas as compressible. The algorithm allows prediction of the temperature and pressure stratification in the gas domain of the IGU for different fixing systems. The models are validated by comparison of the numerical results with experimental data obtained by hot-box testing. Numerical calculations and estimation of 3D temperature and fluid flow fields, thermal performance and internal loads of IGU in window systems are implemented.
Keywords: insulating glass units, thermal loads, internal pressure, CFD analysis
Procedia PDF Downloads 274
2925 A Two-Week and Six-Month Stability of Cancer Health Literacy Classification Using the CHLT-6
Authors: Levent Dumenci, Laura A. Siminoff
Abstract:
Health literacy has been shown to predict a variety of health outcomes. Reliable identification of persons with limited cancer health literacy (LCHL) has proved questionable with existing instruments that use an arbitrary cut point along a continuum. The CHLT-6, however, uses a latent mixture modeling approach to identify persons with LCHL. The purpose of this study was to estimate the two-week and six-month stability of identifying persons with LCHL using the CHLT-6, with a discrete latent variable approach as the underlying measurement structure. Using a test-retest design, the CHLT-6 was administered to cancer patients at two-week (N=98) and six-month (N=51) intervals. The two-week and six-month latent test-retest agreements were 89% and 88%, respectively. The chance-corrected latent agreements estimated from Dumenci's latent kappa were 0.62 (95% CI: 0.41 – 0.82) and 0.47 (95% CI: 0.14 – 0.80) for the two-week and six-month intervals, respectively. High levels of latent test-retest agreement between the limited and adequate categories of the cancer health literacy construct, coupled with moderate to good levels of chance-corrected latent agreement, indicated that the CHLT-6 classification of limited versus adequate cancer health literacy is relatively stable over time. In conclusion, the measurement structure underlying the instrument allows for estimating classification errors, circumventing limitations due to the arbitrary approaches adopted by all other instruments. The CHLT-6 can be used to identify persons with LCHL in oncology clinics and intervention studies to accurately estimate treatment effectiveness.
Keywords: limited cancer health literacy, the CHLT-6, discrete latent variable modeling, latent agreement
Procedia PDF Downloads 179
2924 Optimization Modeling of the Hybrid Antenna Array for the DoA Estimation
Authors: Somayeh Komeylian
Abstract:
The direction of arrival (DoA) estimation is a crucial aspect of radar technologies for detecting and separating several signal sources. In this scenario, the antenna array output modeling involves numerous parameters, including noise samples, signal waveform, signal directions, signal number, and signal-to-noise ratio (SNR), and thereby the methods of DoA estimation rely heavily on the generalization characteristic of establishing a large number of training data sets. Hence, we have comparatively represented two different optimization models for DoA estimation: (1) the implementation of the decision directed acyclic graph (DDAG) for the multiclass least-squares support vector machine (LS-SVM), and (2) the optimization method of the deep neural network (DNN) with radial basis functions (RBF). We have rigorously verified that the LS-SVM DDAG algorithm is capable of accurately classifying DoAs for three classes. However, the accuracy and robustness of DoA estimation are still highly sensitive to technological imperfections of the antenna arrays, such as non-ideal array design and manufacture, array implementation, mutual coupling effects, and background radiation, and thereby the method may fail to deliver high precision for DoA estimation. Therefore, this work makes a further contribution by developing the DNN-RBF model for DoA estimation in order to overcome the limitations of the non-parametric and data-driven methods in terms of array imperfection and generalization. The numerical results of implementing the DNN-RBF model have confirmed better DoA estimation performance compared with the LS-SVM algorithm. Consequently, we have evaluated the performance of the two aforementioned optimization methods for DoA estimation using the mean squared error (MSE).
Keywords: DoA estimation, adaptive antenna array, deep neural network, LS-SVM optimization model, radial basis function, MSE
Procedia PDF Downloads 100
2923 An Approach to Correlate the Statistical-Based Lorenz Method, as a Way of Measuring Heterogeneity, with Kozeny-Carman Equation
Authors: H. Khanfari, M. Johari Fard
Abstract:
Dealing with carbonate reservoirs can be mind-boggling for reservoir engineers due to the various diagenetic processes that cause a variety of properties through the reservoir. A good estimation of reservoir heterogeneity, which is defined as the quality of variation in rock properties with location in a reservoir or formation, can help in modeling the reservoir and thus offer a better understanding of the behavior of that reservoir. Most reservoirs are heterogeneous formations whose mineralogy, organic content, natural fractures, and other properties vary from place to place. Over the years, reservoir engineers have tried to establish methods to describe this heterogeneity, because heterogeneity is important in modeling reservoir flow and in well testing. Geological methods are used to describe the variations in rock properties because of the similarities of the environments in which different beds were deposited. To illustrate the heterogeneity of a reservoir vertically, two methods are generally used in petroleum work: the Dykstra-Parsons permeability variation (V) and the Lorenz coefficient (L), which are reviewed briefly in this paper. The Lorenz concept is based on statistics and has been used in petroleum from that point of view. In this paper, we correlated the statistics-based Lorenz method to a petroleum concept, i.e., the Kozeny-Carman equation, and derived the straight-line Lorenz plot for a homogeneous system. Finally, we applied the two methods to a heterogeneous field in South Iran and discussed each, separately, with numbers and figures. As expected, these methods show great departure from homogeneity. Therefore, for future investment, the reservoir needs to be treated carefully.
Keywords: carbonate reservoirs, heterogeneity, homogeneous system, Dykstra-Parsons permeability variations (V), Lorenz coefficient (L)
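As a rough illustration of the Lorenz approach mentioned above, the following sketch computes the Lorenz coefficient from hypothetical layer permeability, porosity, and thickness data (not the field data used in the paper); a homogeneous system would plot as the straight diagonal and give L = 0:

```python
import numpy as np

# Hypothetical layer data (not the South Iran field values): permeability k (mD),
# porosity phi (fraction) and thickness h (m) of a layered reservoir.
k   = np.array([250.0, 90.0, 40.0, 15.0, 5.0])
phi = np.array([0.22, 0.19, 0.17, 0.14, 0.10])
h   = np.array([2.0, 3.0, 4.0, 3.0, 2.0])

# Order layers by decreasing k/phi, then build the normalized cumulative
# flow capacity (k*h) versus cumulative storage capacity (phi*h).
order = np.argsort(-(k / phi))
flow  = np.insert(np.cumsum((k * h)[order]) / np.sum(k * h), 0, 0.0)
store = np.insert(np.cumsum((phi * h)[order]) / np.sum(phi * h), 0, 0.0)

# Lorenz coefficient: twice the area between the Lorenz curve and the diagonal.
# L = 0 for a homogeneous system (straight-line plot), L -> 1 for extreme heterogeneity.
area = np.sum(0.5 * (flow[1:] + flow[:-1]) * np.diff(store))
L = 2.0 * (area - 0.5)
print(f"Lorenz coefficient = {L:.3f}")
```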
Procedia PDF Downloads 222
2922 Analysis of Key Factors Influencing Muslim Women’s Buying Intentions of Clothes: A Study of UK’s Ethnic Minorities and Modest Fashion Industry
Authors: Nargis Ali
Abstract:
Although the modest fashion market is growing in the UK, understanding of Muslim consumers remains limited among researchers and marketers. Therefore, the present study is designed to explore the critical factors influencing Muslim women's intention to purchase clothing and to identify differences in purchase intention among ethnic minority groups in the UK. The conceptual framework is designed using the theory of planned behavior and social identity theory. In order to satisfy the research objectives, a structured online questionnaire was published on Facebook from 20 November to 21 March. As a result, 1087 usable questionnaires were received and used to assess the fit of the proposed model through structural equation modeling. Results revealed that social media does influence the purchase intention of Muslim women. Muslim women search for stylish clothes that provide comfort during summer, and they prefer soft and subdued colors. Furthermore, religious knowledge, religious practice, and fashion uniqueness strongly influence their purchase intention, while hybrid identity is negatively related to it. This research contributes to the literature on Muslim consumers at a time when the UK's large retailers are seeking to attract Muslim consumers through modestly designed outfits. In addition, it will be helpful for formulating or revising product and marketing strategies according to the tastes and needs of Muslim women in the UK.
Keywords: fashion uniqueness, hybrid identity, religiosity, social media, social identity theory, structural equation modeling, theory of planned behavior
Procedia PDF Downloads 227
2921 Numerical Investigation of Pressure Drop in Core Annular Horizontal Pipe Flow
Authors: John Abish, Bibin John
Abstract:
Liquid-liquid flow in a horizontal pipe is investigated in order to reveal the flow patterns arising from the co-existing flow of oil and water. The main focus of the study is to assess the feasibility of reducing the pumping power requirements of petroleum transportation lines by having an annular flow of water around the thick oil core. This idea makes oil transportation cheaper and easier. The present study uses computational fluid dynamics techniques to model oil-water flows with liquids of similar density and varying viscosity. The simulation of the flow is conducted using the commercial package Ansys Fluent. Flow domain modeling and grid generation were accomplished through ICEM CFD. The horizontal pipe is modeled with two different inlets and meshed with an O-grid mesh. The standard k-ε turbulence scheme along with the volume of fluid (VOF) multiphase modeling method is used to simulate the oil-water flow. Transient flow simulations carried out for a total period of 30 s showed a significant reduction in pressure drop when employing the core annular flow concept. This study also reveals the effect of the viscosity ratio, the mass flow rates of the individual fluids and the ratio of superficial velocities on the pressure drop across the pipe length. Contours of velocity and volume fraction are employed along with pressure predictions to assess the effectiveness of the proposed concept quantitatively as well as qualitatively. The outcome of the present study is found to be very relevant for the petrochemical industries.
Keywords: computational fluid dynamics, core-annular flows, frictional flow resistance, oil transportation, pressure drop
Procedia PDF Downloads 407
2920 Modeling Breathable Particulate Matter Concentrations over Mexico City Retrieved from Landsat 8 Satellite Imagery
Authors: Rodrigo T. Sepulveda-Hirose, Ana B. Carrera-Aguilar, Magnolia G. Martinez-Rivera, Pablo de J. Angeles-Salto, Carlos Herrera-Ventosa
Abstract:
In order to diminish health risks, it is of major importance to monitor air quality. However, this process is accompanied by high costs in physical and human resources. In this context, this research is carried out with the main objective of developing a predictive model for concentrations of inhalable particles (PM10-2.5) using remote sensing. To develop the model, satellite images, mainly from Landsat 8, of Mexico City's Metropolitan Area were used. Using historical PM10 and PM2.5 measurements of the RAMA (Automatic Environmental Monitoring Network of Mexico City) and through the processing of the available satellite images, a preliminary model was generated in which it was possible to observe critical opportunity areas that will allow the generation of a robust model. Through the preliminary model applied to the scenes of Mexico City, three areas of particular interest were identified due to the presumed high concentration of PM; these zones are those with high plant density, bodies of water, and soil without buildings or vegetation. To date, work continues on this line to improve the preliminary model that has been proposed. In addition, a brief analysis was made of six models presented in articles developed in different parts of the world, in order to identify the optimal bands for the generation of a suitable model for Mexico City. It was found that infrared bands have helped with modeling in other cities, but the effectiveness that these bands could provide for the geographic and climatic conditions of Mexico City is still being evaluated.
Keywords: air quality, modeling pollution, particulate matter, remote sensing
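A hedged sketch of the kind of band-to-PM regression such a preliminary model relies on, using entirely hypothetical reflectance and PM10 values (the actual bands and coefficients are still under evaluation, as noted above):

```python
import numpy as np

# Entirely hypothetical pairs of Landsat 8 band reflectances (blue, green,
# red, NIR) at RAMA station pixels and ground PM10 values (µg/m^3).
X = np.array([
    [0.11, 0.10, 0.09, 0.21],
    [0.14, 0.13, 0.12, 0.19],
    [0.09, 0.08, 0.07, 0.24],
    [0.16, 0.15, 0.14, 0.18],
    [0.12, 0.11, 0.10, 0.22],
    [0.13, 0.12, 0.11, 0.20],
])
pm10 = np.array([48.0, 61.0, 35.0, 70.0, 52.0, 55.0])

# Ordinary least squares: PM10 ~ b0 + b1*B2 + b2*B3 + b3*B4 + b4*B5
A = np.column_stack([np.ones(len(pm10)), X])
coeffs, *_ = np.linalg.lstsq(A, pm10, rcond=None)
predicted = A @ coeffs
print("regression coefficients:", np.round(coeffs, 2))
print("predicted PM10:", np.round(predicted, 1))
```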
Procedia PDF Downloads 156
2919 Identification of Natural Liver X Receptor Agonists as the Treatments or Supplements for the Management of Alzheimer and Metabolic Diseases
Authors: Hsiang-Ru Lin
Abstract:
Cholesterol plays an essential role in the regulation of the progression of numerous important diseases, including atherosclerosis and Alzheimer disease, so suitable cholesterol-lowering reagents urgently need to be developed. Liver X receptor (LXR) is a ligand-activated transcription factor whose natural ligands are cholesterols, oxysterols and glucose. Once activated, LXR can transactivate the transcription of various genes, including CYP7A1, ABCA1, and SREBP1c, involved in lipid metabolism, glucose metabolism and the inflammatory pathway. Essentially, the upregulation of ABCA1 facilitates cholesterol efflux from cells and attenuates the production of beta-amyloid (ABeta) 42 in the brain, so LXR is a promising target for developing cholesterol-lowering reagents and preventative treatments for Alzheimer disease. Engelhardia roxburghiana is a deciduous tree growing in India, China, and Taiwan; however, its chemical constituents have only been reported to exhibit antitubercular and anti-inflammatory effects. In this study, four compounds, engelheptanoxides A and C and engelhardiol A and B, isolated from the root of Engelhardia roxburghiana, were evaluated for their agonistic activity against LXR by transient transfection reporter assays in HepG2 cells. Furthermore, their interaction modes with the LXR ligand-binding pocket were generated by molecular modeling programs. In the cell-based biological assays, engelheptanoxides A, C, engelhardiol A, and B, showing no cytotoxic effect against the proliferation of HepG2 cells, exerted obvious LXR agonistic effects with activity similar to that of T0901317, a novel synthetic LXR agonist. Further modeling studies, including docking and SAR (structure-activity relationship) analyses, showed that these compounds can occupy the LXR ligand-binding pocket in a manner similar to T0901317. Thus, LXR is one of the nuclear receptors targeted by the pharmaceutical industry for developing treatments of Alzheimer and atherosclerosis diseases. Importantly, the cell-based assays, together with molecular modeling studies suggesting a plausible binding mode, demonstrate that engelheptanoxides A, C, engelhardiol A, and B function as LXR agonists. This is the first report to demonstrate that the extract of Engelhardia roxburghiana contains LXR agonists. As such, these active components of Engelhardia roxburghiana or subsequent analogs may show important therapeutic effects through selective modulation of the LXR pathway.
Keywords: Liver X receptor (LXR), Engelhardia roxburghiana, CYP7A1, ABCA1, SREBP1c, HepG2 cells
Procedia PDF Downloads 420
2918 Flood Modeling in Urban Area Using a Well-Balanced Discontinuous Galerkin Scheme on Unstructured Triangular Grids
Authors: Rabih Ghostine, Craig Kapfer, Viswanathan Kannan, Ibrahim Hoteit
Abstract:
Urban flooding resulting from a sudden release of water due to dam-break or excessive rainfall is a serious environmental hazard, which causes loss of human life and large economic losses. Anticipating floods before they occur could minimize human and economic losses through the implementation of appropriate protection, provision, and rescue plans. This work reports on the numerical modelling of flash flood propagation in urban areas after an excessive rainfall event or dam-break. A two-dimensional (2D) depth-averaged shallow water model is used with a refined unstructured grid of triangles for representing the urban area topography. The 2D shallow water equations are solved using a second-order well-balanced discontinuous Galerkin scheme. A theoretical test case and three flood events are described to demonstrate the potential benefits of the scheme: (i) wetting and drying in a parabolic basin; (ii) a flash flood over a physical model of the urbanized Toce River valley in Italy; (iii) wave propagation in the Reyran river valley as a consequence of the Malpasset dam-break in 1959 (France); and (iv) the dam-break flood of October 1982 at the town of Sumacarcel (Spain). The capability of the scheme is also verified against alternative models. Computational results compare well with recorded data and show that the scheme is at least as efficient as comparable second-order finite volume schemes, with a notable efficiency speedup due to parallelization.
Keywords: dam-break, discontinuous Galerkin scheme, flood modeling, shallow water equations
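For reference, the 2D depth-averaged shallow water equations that the discontinuous Galerkin scheme discretizes can be written in the standard conservation form below (textbook notation, not reproduced from the paper), with water depth h, depth-averaged velocities (u, v), gravity g, bed slopes S0 and friction slopes Sf:

```latex
\frac{\partial \mathbf{U}}{\partial t}
+ \frac{\partial \mathbf{F}(\mathbf{U})}{\partial x}
+ \frac{\partial \mathbf{G}(\mathbf{U})}{\partial y}
= \mathbf{S}(\mathbf{U}),
\qquad
\mathbf{U} = \begin{pmatrix} h \\ hu \\ hv \end{pmatrix},\;
\mathbf{F} = \begin{pmatrix} hu \\ hu^{2} + \tfrac{1}{2} g h^{2} \\ huv \end{pmatrix},\;
\mathbf{G} = \begin{pmatrix} hv \\ huv \\ hv^{2} + \tfrac{1}{2} g h^{2} \end{pmatrix},\;
\mathbf{S} = \begin{pmatrix} 0 \\ g h (S_{0x} - S_{fx}) \\ g h (S_{0y} - S_{fy}) \end{pmatrix}
```

Here "well-balanced" refers to a discretization that preserves the still-water equilibrium between the pressure flux gradient and the bed-slope part of the source term.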
Procedia PDF Downloads 175
2917 Numerical Performance Evaluation of a Savonius Wind Turbine Using Resistive Torque Modeling
Authors: Guermache Ahmed Chafik, Khelfellah Ismail, Ait-Ali Takfarines
Abstract:
The Savonius vertical axis wind turbine is characterized by sufficient starting torque at low wind speeds and a simple design, and it does not require orientation to the wind direction; however, the developed power is lower than that of other types of wind turbines such as the Darrieus. To improve its performance, several studies have been carried out, such as optimizing the blade shape, using passive controls and also minimizing sources of power loss such as the resistive torque due to friction. This work aims to estimate the performance of a Savonius wind turbine by introducing into the CFD model a User Defined Function that accounts for the resistive torque. This User Defined Function is developed to simulate the action of the wind speed on the rotor; it receives the moment coefficient as an input to compute the rotational velocity that should be imposed on the rotating regions of the computational domain. The rotational velocity depends on the aerodynamic moment applied on the turbine and the resistive torque, which is considered a linear function. Linking the implemented User Defined Function with the CFD solver allows simulating the real operation of the Savonius turbine exposed to wind. It is noticed that the wind turbine takes a while to reach the stationary regime, where the rotational velocity becomes invariable; at that moment, the tip speed ratio and the moment and power coefficients are computed. To validate this approach, the power coefficient versus tip speed ratio curve is compared with the experimental one. The obtained results are in agreement with the available experimental results.
Keywords: resistant torque modeling, Savonius wind turbine, user-defined function, vertical axis wind turbine performances
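A simplified Python sketch of the rotor dynamics that such a User Defined Function implements, assuming a made-up moment-coefficient curve and resistive-torque constants (in the actual workflow, the moment coefficient comes from the CFD solver at every time step):

```python
import numpy as np

# Hypothetical rotor and flow parameters (illustrative only)
rho, U = 1.225, 7.0          # air density (kg/m^3), wind speed (m/s)
R, H = 0.5, 1.0              # rotor radius and height (m)
A = 2.0 * R * H              # swept area (m^2)
I = 0.8                      # rotor moment of inertia (kg m^2)
c0, c1 = 0.05, 0.02          # linear resistive torque model: M_r = c0 + c1*omega

def aero_torque(omega):
    """Aerodynamic torque from a made-up moment-coefficient curve Cm(TSR);
    in the real workflow Cm is supplied by the CFD solver at each time step."""
    tsr = omega * R / U
    cm = max(0.35 - 0.18 * tsr, 0.0)
    return 0.5 * rho * A * R * U**2 * cm

# Explicit integration of I*domega/dt = M_aero(omega) - M_r(omega) to steady state
omega, dt = 0.1, 1e-3
for _ in range(200_000):
    omega += (aero_torque(omega) - (c0 + c1 * omega)) / I * dt

tsr = omega * R / U
cp = aero_torque(omega) * omega / (0.5 * rho * A * U**3)
print(f"steady rotational speed = {omega:.2f} rad/s, TSR = {tsr:.2f}, Cp = {cp:.3f}")
```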
Procedia PDF Downloads 157
2916 Modeling Route Selection Using Real-Time Information and GPS Data
Authors: William Albeiro Alvarez, Gloria Patricia Jaramillo, Ivan Reinaldo Sarmiento
Abstract:
Understanding the behavior of individuals and the different human factors that influence their choices within a complex system such as transportation is one of the most complicated aspects of route choice modeling, because various behaviors and driving modes directly or indirectly affect the choice. During the last two decades, with the development of information and communications technologies, new data collection techniques have emerged, such as GPS, geolocation with mobile phones, apps for choosing the route between origin and destination, and individual service transport applications, among others. This has generated interest in improving discrete choice models by incorporating these developments as well as the psychological factors that affect decision making. This paper proposes and estimates a hybrid discrete choice model that integrates route choice models and latent variables, based on observation of the routes of a sample of public taxi drivers from the city of Medellín, Colombia, in relation to their behavior, personality, socioeconomic characteristics, and driving mode. The set of choice options includes the routes generated by the individual service transport applications versus the driver's choice. The hybrid model consists of measurement equations that relate latent variables to measurement indicators and utilities to choice indicators, along with structural equations that link the observable characteristics of drivers to latent variables and explanatory variables to utilities.
Keywords: behavior choice model, human factors, hybrid model, real time data
Procedia PDF Downloads 155
2915 Intelligent Control of Bioprocesses: A Software Application
Authors: Mihai Caramihai, Dan Vasilescu
Abstract:
The main research objective of the experimental bioprocess analyzed in this paper was to obtain large biomass quantities. The bioprocess is performed in a 100 L Bioengineering bioreactor with 42 L of cultivation medium made of peptone, meat extract and sodium chloride. The reactor was equipped with pH, temperature, dissolved oxygen, and agitation controllers. The operating parameters were 37 °C, 1.2 atm, 250 rpm and an air flow rate of 15 L/min. The main objective of this paper is to present a case study demonstrating that intelligent control, which describes the complexity of the biological process in a qualitative and subjective manner as perceived by a human operator, is an efficient control strategy for this kind of bioprocess. In order to simulate the bioprocess evolution, an intelligent control structure based on fuzzy logic has been designed. The specific objective is to present a fuzzy control approach based on human expert rules versus a modeling approach of cell growth based on bioprocess experimental data. Kinetic modeling may represent only a small number of bioprocesses in terms of overall biosystem behavior, while a fuzzy control system (FCS) can manipulate incomplete and uncertain information about the process, assuring high control performance, and provides an alternative solution to non-linear control as it is closer to the real world. Due to the high degree of non-linearity and time variance of bioprocesses, the need for a control mechanism arises. BIOSIM, an originally developed software package, implements such a control structure. The simulation study showed that the fuzzy technique is quite appropriate for this non-linear, time-varying system compared with the classical control method based on an a priori model.
Keywords: intelligent, control, fuzzy model, bioprocess optimization
Procedia PDF Downloads 327
2914 Estimation of Source Parameters and Moment Tensor Solution through Waveform Modeling of 2013 Kishtwar Earthquake
Authors: Shveta Puri, Shiv Jyoti Pandey, G. M. Bhat, Neha Raina
Abstract:
The Jammu and Kashmir region of the Northwest Himalaya has witnessed many devastating earthquakes in the recent past but has remained unexplored in terms of seismic investigations, except for scanty records of the earthquakes that occurred in this region. In this study, we used local seismic data of the year 2013 recorded by the network of broadband seismographs in J&K. During this period, our seismic stations recorded about 207 earthquakes, including two moderate events of Mw 5.7 on 1st May 2013 and Mw 5.1 on 2nd August 2013. We analyzed the events of Mw 3-4.6 and the main events only (to minimize the error) for source parameters, b-value and sense of movement through waveform modeling, in order to understand the seismotectonics and seismic hazard of the region. It has been observed that most of the events are bounded between 32.9° N - 33.3° N latitude and 75.4° E - 76.1° E longitude, moment magnitude (Mw) ranges from 3 to 5.7, source radius (r) from 0.21 to 3.5 km, stress drop from 1.90 to 71.1 bars, and corner frequency from 0.39 to 6.06 Hz. The b-value for this region was found to be 0.83±0 from these events, which is lower than the normal value (b=1), indicating that the area is under high stress. The travel time inversion and waveform inversion methods suggest focal depths of up to 10 km, probably above the detachment depth of the Himalayan region. The moment tensor solution of the main event of 2nd August (Mw 5.1, 02:32:47 UTC) suggested that the source fault strikes at 295° with a dip of 33° and a rake value of 85°. It was found that these events form an intense cluster of small to moderate events within a narrow zone between the Panjal Thrust and the Kishtwar Window. Moment tensor solutions of the main events and their aftershocks indicate that thrust-type movement is occurring in this region.
Keywords: b-value, moment tensor, seismotectonics, source parameters
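The source parameters quoted above are typically obtained from Brune-type spectral relations; the following sketch applies the standard formulas with illustrative values chosen to fall inside the reported ranges (the shear-wave velocity and seismic moment are assumptions, not the study's data):

```python
import numpy as np

# Standard Brune-type relations, with illustrative inputs chosen to fall
# inside the ranges quoted above (beta and M0 are assumptions, not data).
beta = 3500.0    # shear-wave velocity near the source (m/s)
f_c = 1.0        # corner frequency (Hz)
M0 = 1.3e15      # seismic moment (N·m)

# Moment magnitude (Hanks & Kanamori): Mw = (2/3) * log10(M0) - 6.07
Mw = (2.0 / 3.0) * np.log10(M0) - 6.07

# Brune source radius and stress drop
r = 2.34 * beta / (2.0 * np.pi * f_c)     # source radius (m)
stress_drop = 7.0 * M0 / (16.0 * r**3)    # stress drop (Pa)

print(f"Mw = {Mw:.2f}, source radius = {r / 1000:.2f} km, "
      f"stress drop = {stress_drop / 1e5:.1f} bar")
```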
Procedia PDF Downloads 313
2913 Manual Wheelchair Propulsion Efficiency on Different Slopes
Authors: A. Boonpratatong, J. Pantong, S. Kiattisaksophon, W. Senavongse
Abstract:
In this study, an integrated sensing and modeling system for manual wheelchair propulsion measurement and propulsion efficiency calculation was used to indicate the level of overuse. Seven subjects participated in the measurement. On the level surface, the propulsion efficiencies did not differ significantly as the riding speed increased. By contrast, the propulsion efficiencies on the 15-degree incline were restricted to around 0.5. The results are supported by previously reported wheeling resistance and propulsion torque relationships, implying a margin of overuse. Upper limb musculoskeletal injuries and syndromes in manual wheelchair riders are common and chronic, and may be caused at different levels by overuse, i.e., repetitive riding on a steep incline. Qualitative analysis, such as of the mechanical effectiveness of manual wheeling to establish the relationship between riding difficulties, mechanical efforts and propulsion outputs, is scarce, possibly due to the challenge of simultaneously measuring those factors in conventional manual wheelchairs and everyday environments. In this study, the integrated sensing and modeling system was used to measure manual wheelchair propulsion efficiency in conventional manual wheelchairs and everyday environments. The sensing unit is comprised of contact pressure and inertia sensors, which are portable and universal. Four healthy male and three healthy female subjects participated in the measurement on level and 15-degree incline surfaces. Subjects were asked to perform manual wheelchair riding at three different self-selected speeds on the level surface and only at the preferred speed on the 15-degree incline. Five trials were performed in each condition. The kinematic data of the subject's dominant hand, a spoke, and the trunk of the wheelchair were collected through the inertia sensors. The compression force applied from the thumb of the dominant hand to the push rim was collected through the contact pressure sensors. The signals from all sensors were recorded synchronously. The subject-selected speeds for slow, preferred and fast riding on the level surface and the subject-preferred speed on the 15-degree incline were recorded. The propulsion efficiency, defined as the ratio between the pushing force in the tangential direction of the push rim and the net force resulting from the three-dimensional riding motion, was derived by inverse dynamics in the modeling unit. The intra-subject variability of the riding speed did not differ significantly as the self-selected speed increased on the level surface. Since the riding speed on the 15-degree incline was difficult to regulate, the intra-subject variability was not applied there. On the level surface, the propulsion efficiencies did not differ significantly as the riding speed increased. However, the propulsion efficiencies on the 15-degree incline were restricted to around 0.5 for all subjects at their preferred speed. The results are supported by the previously reported relationship between wheeling resistance and propulsion torque, in which the wheelchair axle torque increased but the muscle activities did not increase when the resistance was high. This implies that the margin of dynamic effort at relatively high resistance is similar to the margin of overuse indicated by the restricted propulsion efficiency on the 15-degree incline.
Keywords: contact pressure sensor, inertia sensor, integrating sensing and modeling system, manual wheelchair propulsion efficiency, manual wheelchair propulsion measurement, tangential force, resultant force, three-dimensional riding motion
Procedia PDF Downloads 290
2912 Statistical Model of Water Quality in Estero El Macho, Machala-El Oro
Authors: Rafael Zhindon Almeida
Abstract:
Surface water quality is an important concern for the evaluation and prediction of water quality conditions. The objective of this study is to develop a statistical model that can accurately predict the water quality of the El Macho estuary in the city of Machala, El Oro province. The methodology employed in this study is of a basic type that involves a thorough search for theoretical foundations to improve the understanding of statistical modeling for water quality analysis. The research design is correlational, using a multivariate statistical model involving multiple linear regression and principal component analysis. The results indicate that water quality parameters such as fecal coliforms, biochemical oxygen demand, chemical oxygen demand, iron and dissolved oxygen exceed the allowable limits. The water of the El Macho estuary is determined to be below the required water quality criteria. The multiple linear regression model, based on chemical oxygen demand and total dissolved solids, explains 99.9% of the variance of the dependent variable. In addition, principal component analysis shows that the model has an explanatory power of 86.242%. The study successfully developed a statistical model to evaluate the water quality of the El Macho estuary. The estuary did not meet the water quality criteria, with several parameters exceeding the allowable limits. The multiple linear regression model and principal component analysis provide valuable information on the relationship between the various water quality parameters. The findings of the study emphasize the need for immediate action to improve the water quality of the El Macho estuary to ensure the preservation and protection of this valuable natural resource.
Keywords: statistical modeling, water quality, multiple linear regression, principal components, statistical models
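A compact sketch of the two statistical tools used in the study, multiple linear regression and principal component analysis, applied to a purely hypothetical set of monitoring records (parameter values and resulting scores are illustrative only):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.decomposition import PCA

# Hypothetical monitoring records (not the study's data). Columns:
# COD (mg/L), TDS (mg/L), BOD (mg/L), DO (mg/L), fecal coliforms (MPN/100 mL).
X = np.array([
    [210., 880.,  95., 2.1, 1600.],
    [180., 790.,  80., 2.8, 1200.],
    [260., 990., 120., 1.6, 2400.],
    [150., 700.,  65., 3.4,  900.],
    [230., 910., 105., 1.9, 1900.],
])
wqi = np.array([38., 45., 31., 52., 35.])   # illustrative water-quality index

# Multiple linear regression of the index on COD and TDS, as in the abstract
mlr = LinearRegression().fit(X[:, :2], wqi)
print("R^2 of the COD + TDS regression:", round(mlr.score(X[:, :2], wqi), 3))

# Principal component analysis of the standardized parameters
Z = (X - X.mean(axis=0)) / X.std(axis=0)
pca = PCA(n_components=2).fit(Z)
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
```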
Procedia PDF Downloads 99
2911 The Benefits of End-To-End Integrated Planning from the Mine to Client Supply for Minimizing Penalties
Authors: G. Martino, F. Silva, E. Marchal
Abstract:
The control over delivered iron ore blend characteristics is one of the most important aspects of the mining business. The iron ore price is a function of its composition, which is the outcome of the beneficiation process. So, end-to-end integrated planning of mine operations can reduce the risk of penalties on the iron ore price. In a standard iron mining company, the production chain is composed of mining, ore beneficiation, and client supply. When mine planning and client supply decisions are made in an uncoordinated way, the beneficiation plant struggles to deliver the best blend possible. Technological improvements in several fields have allowed bridging the gap between departments and boosting integrated decision-making processes. Clusterization and classification algorithms applied to historical production data generate reasonable predictions for the quality and volume of iron ore produced for each pile of run-of-mine (ROM) processed. Mathematical modeling can use those deterministic relations to propose iron ore blends that better fit specifications within a delivery schedule. Additionally, a model capable of representing the whole production chain can clearly compare the overall impact of different decisions in the process. This study shows how flexibilization combined with a planning optimization model linking the mine and the ore beneficiation processes can reduce the risk of out-of-specification deliveries. The model capabilities are illustrated on a hypothetical iron ore mine with a magnetic separation process. Finally, this study shows ways of reducing cost or increasing profit by optimizing process indicators across the production chain and integrating the different planning stages with the sales decisions.
Keywords: clusterization and classification algorithms, integrated planning, mathematical modeling, optimization, penalty minimization
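A toy version of the blend-optimization idea described above, posed as a linear program over hypothetical pile grades, stocks, and costs (the real integrated model spans the whole mine-to-client chain and is far richer):

```python
import numpy as np
from scipy.optimize import linprog

# Toy blending problem with hypothetical figures: pick tonnages x_i (kt) from
# three beneficiated piles to fill a 100 kt shipment at minimum handling cost,
# keeping the blended Fe grade >= 62 % and silica <= 4.5 %.
fe   = np.array([0.65, 0.61, 0.58])      # Fe fraction of each pile
sio2 = np.array([0.030, 0.045, 0.060])   # silica fraction of each pile
cost = np.array([12.0, 9.0, 7.0])        # relative handling cost per tonne

A_ub = np.vstack([-(fe - 0.62), sio2 - 0.045])   # grade constraints as A_ub @ x <= 0
b_ub = np.zeros(2)
A_eq = np.ones((1, 3))                            # total shipment tonnage
b_eq = [100.0]
bounds = [(0, 60), (0, 60), (0, 60)]              # available tonnage per pile (kt)

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print("optimal tonnages (kt):", np.round(res.x, 1))
print("blend Fe grade: %.3f, SiO2: %.4f" % (fe @ res.x / 100, sio2 @ res.x / 100))
```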
Procedia PDF Downloads 124
2910 Finite Element Modeling of Global Ti-6Al-4V Mechanical Behavior in Relationship with Microstructural Parameters
Authors: Fatna Benmessaoud, Mohammed Cheikh, Vencent Velay, Vanessa Vedal, Farhad Rezai-Aria, Christine Boher
Abstract:
The global mechanical behavior of materials is strongly linked to their microstructure, especially their crystallographic texture and grain morphology. These material aspects determine the character of the mechanical fields (heterogeneous or homogeneous); thus, they give the global behavior a degree of anisotropy according to the initial microstructure. For these reasons, the prediction of the global behavior of materials in relationship with the microstructure must be performed with a multi-scale approach. Therefore, multi-scale modeling in the context of crystal plasticity is widely used. In this contribution, a phenomenological elasto-viscoplastic model developed in the crystal plasticity context and the finite element method are used to investigate the effects of crystallographic texture and grain size on the global behavior of a polycrystalline equiaxed Ti-6Al-4V alloy. The constitutive equations of this model are written at the local scale for each slip system within each grain, while the strain and stress mechanical fields are investigated at the global scale via a finite element scale transition. The beta phase of the modeled Ti-6Al-4V alloy is negligible; its fraction is less than 10%. Three families of slip systems of the alpha phase are considered: the basal and prismatic families with an ⟨a⟩ Burgers vector and the pyramidal family with a ⟨c+a⟩ Burgers vector.
Keywords: microstructural parameters, multi-scale modeling, crystal plasticity, Ti-6Al-4V alloy
Procedia PDF Downloads 126
2909 An Investigation into the Impacts of High-Frequency Electromagnetic Fields Utilized in the 5G Technology on Insects
Authors: Veriko Jeladze, Besarion Partsvania, Levan Shoshiashvili
Abstract:
This paper addresses a very topical issue. The frequency range 2.5-100 GHz contains frequencies that have already been used or will be used in modern 5G technologies. The wavelengths used in 5G systems will be close to the body dimensions of small biological objects, particularly insects. Because the dimensions of insect bodies and body parts are comparable with the wavelength at these frequencies, high absorption of EMF energy in the body tissues can occur (body resonance) and can therefore cause harmful effects, possibly the extinction of some species. An investigation into the impact of the radio-frequency non-ionizing electromagnetic fields (EMF) utilized in future 5G on insects is of great importance, as a very high number of 5G network components will increase the total EMF exposure in the environment. All ecosystems of the earth are interconnected. If one component of an ecosystem is disrupted, the whole system will be affected (which could cause cascading effects). The study of these problems is an important challenge for scientists today because the existing studies are incomplete and insufficient. Consequently, the purpose of this proposed research is to investigate the possible hazardous impact of RF-EMFs (including 5G EMFs) on insects. The project will study the effects of these EMFs on various insects of different body sizes through computer modeling at frequencies from 2.5 to 100 GHz. The selected insects are the honey bee, wasp, and ladybug. For this purpose, detailed 3D discrete models of the insects are created for EM and thermal modeling through FDTD, and whole-body Specific Absorption Rates (SAR) will be evaluated at selected frequencies. All these studies represent a novelty. The proposed study will promote new investigations of the bio-effects of 5G EMFs and will contribute to the harmonization of safe exposure levels and frequencies of 5G EMFs.
Keywords: electromagnetic field, insect, FDTD, specific absorption rate (SAR)
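For orientation, the dosimetric quantity to be evaluated, SAR, follows from the FDTD field solution as shown in this sketch; the tissue conductivity, density, and field values below are placeholders, not reference dosimetric data for any insect:

```python
import numpy as np

# Local SAR from the FDTD field solution: SAR = sigma * |E|^2 / rho.
# Tissue conductivity, density and field values are placeholders.
sigma = 1.8      # tissue conductivity at the exposure frequency (S/m), assumed
rho = 1050.0     # tissue mass density (kg/m^3), assumed
E_rms = 25.0     # local RMS electric field inside the tissue (V/m), assumed

local_sar = sigma * E_rms**2 / rho
print(f"local SAR = {local_sar:.2f} W/kg")

# Whole-body SAR: total absorbed power divided by total body mass,
# i.e. a mass-weighted average of the voxel SAR values.
voxel_sar  = np.array([0.4, 1.1, 0.7, 0.2])            # W/kg, illustrative voxels
voxel_mass = np.array([2e-6, 3e-6, 2.5e-6, 4e-6])      # kg per voxel
wb_sar = np.sum(voxel_sar * voxel_mass) / np.sum(voxel_mass)
print(f"whole-body SAR = {wb_sar:.2f} W/kg")
```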
Procedia PDF Downloads 91
2908 An Agent-Based Model of Innovation Diffusion Using Heterogeneous Social Interaction and Preference
Authors: Jang kyun Cho, Jeong-dong Lee
Abstract:
The advent of the Internet, mobile communications, and social network services has stimulated social interactions among consumers, allowing people to affect one another's innovation adoptions by exchanging information more frequently and more quickly. Previous diffusion models, such as the Bass model, however, face limitations in reflecting such recent phenomena in society. These models are weak in their ability to model interactions between agents; they model aggregated-level behaviors only. The agent-based model, which is an alternative to the aggregate model, is good for individual modeling, but so far it has not been based on an economic perspective of social interactions. This study assumes the presence of social utility from other consumers in the adoption of innovation and investigates the effect of individual interactions on innovation diffusion by developing a new model called the interaction-based diffusion model. By comparing this model with previous diffusion models, the study also examines how the proposed model explains innovation diffusion from the perspective of economics. In addition, the study recommends the use of a small-world network topology instead of cellular automata to describe innovation diffusion. This study develops a model based on individual preference and heterogeneous social interactions using utility specification, which is expandable and thus able to encompass various issues in diffusion research, such as reservation price. Furthermore, the study proposes a new framework to forecast aggregated-level market demand from individual-level modeling. The model also exhibits a good fit to real market data. It is expected that the study will contribute to our understanding of the innovation diffusion process through its microeconomic theoretical approach.
Keywords: innovation diffusion, agent based model, small-world network, demand forecasting
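A minimal sketch of an interaction-based diffusion of this kind on a small-world network, with illustrative utility weights, reservation levels, and seeding (none of these values come from the paper):

```python
import random
import networkx as nx

# Threshold-based adoption on a Watts-Strogatz small-world network.
# Utility weights, reservation levels and seeding are illustrative only.
random.seed(1)
G = nx.watts_strogatz_graph(n=500, k=6, p=0.1)

price, quality = 0.6, 0.8
alpha, beta_social = 1.0, 1.2                     # intrinsic vs. social utility weights
reservation = {i: random.uniform(0.3, 1.0) for i in G.nodes}
adopted = {i: False for i in G.nodes}
for seed in random.sample(list(G.nodes), 10):     # early adopters
    adopted[seed] = True

curve = []
for t in range(30):
    for i in G.nodes:
        if adopted[i]:
            continue
        neighbors = list(G.neighbors(i))
        social = sum(adopted[j] for j in neighbors) / len(neighbors)
        utility = alpha * (quality - price) + beta_social * social
        if utility >= reservation[i]:             # adopt once utility exceeds the reservation level
            adopted[i] = True
    curve.append(sum(adopted.values()))           # aggregated-level diffusion curve

print("cumulative adopters per period:", curve)
```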
Procedia PDF Downloads 341
2907 A Parking Demand Forecasting Method for Making Parking Policy in the Center of Kabul City
Authors: Roien Qiam, Shoshi Mizokami
Abstract:
Parking demand in the Central Business District (CBD) has grown with the increase in the number of private vehicles due to rapid economic growth and the lack of an efficient public transport and traffic management system. This has resulted in low mobility, poor accessibility, serious congestion, high rates of traffic accident fatalities and injuries, and air pollution, mainly because drivers have to circulate slowly to find a vacant spot. With parking pricing and enforcement policies, considerable improvement could be achieved, and on-street parking spaces could be managed efficiently and effectively. To evaluate parking demand and formulate parking policy, it is necessary to understand the current parking conditions and driver behavior, i.e., how drivers choose their parking type and location as well as how they behave when searching for a vacant parking spot under given parking charges and search times. This study presents the results of observational, revealed preference, and stated preference surveys and an experiment. The obtained data show that there is a gap between supply and demand in parking and that it has reached a maximum. For the modeling of the parking decision, a choice model was constructed based on discrete choice modeling theory, and a multinomial logit model was estimated using the SP survey data; the model represents the choice of an alternative among priced on-street, off-street, and illegal parking. Individuals choose a parking type based on their preferences concerning parking charges, search times, access times and waiting times. The parking assignment model was obtained directly from the behavioral model and is used in the parking simulation. The study concludes with an evaluation of parking policy.
Keywords: CBD, parking demand forecast, parking policy, parking choice model
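A minimal multinomial logit sketch for the three parking alternatives described above; the taste parameters, attribute levels, and alternative-specific constants are illustrative assumptions, not the estimated values from the SP data:

```python
import numpy as np

# Multinomial logit over the three parking alternatives; coefficients,
# attribute levels and constants are illustrative assumptions.
alternatives = ["priced on-street", "off-street", "illegal"]
beta_cost, beta_search, beta_access = -0.08, -0.05, -0.04   # taste parameters

cost   = np.array([30.0, 50.0,  0.0])   # parking charge (scaled)
search = np.array([10.0,  2.0, 20.0])   # search time (min)
access = np.array([ 2.0,  6.0,  3.0])   # access time (min)
asc    = np.array([ 0.0, -0.3, -2.5])   # alternative-specific constants

V = asc + beta_cost * cost + beta_search * search + beta_access * access
P = np.exp(V) / np.exp(V).sum()          # logit choice probabilities
for name, p in zip(alternatives, P):
    print(f"P({name}) = {p:.2f}")
```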
Procedia PDF Downloads 198
2906 Modeling of the Biodegradation Performance of a Membrane Bioreactor to Enhance Water Reuse in Agri-food Industry - Poultry Slaughterhouse as an Example
Authors: Masmoudi Jabri Khaoula, Zitouni Hana, Bousselmi Latifa, Akrout Hanen
Abstract:
Mathematical modeling has become an essential tool for sustainable wastewater management, particularly for the simulation and optimization of complex processes involved in activated sludge systems. In this context, the activated sludge model ASM3h was used for the simulation of a membrane bioreactor (MBR), as this system integrates biological wastewater treatment and physical separation by membrane filtration. In this study, an MBR with a useful volume of 12.5 L was fed continuously with poultry slaughterhouse wastewater (PSWW) for 50 days at a feed rate of 2 L/h and a hydraulic retention time (HRT) of 6.25 h. Throughout its operation, high removal efficiency was observed for organic pollutants, with 84% efficiency in terms of COD. Moreover, the MBR generated a treated effluent that complies with the limits for discharge into the public sewer according to the Tunisian standards set in March 2018. For the nitrogenous compounds, average concentrations of nitrate and nitrite in the permeate reached 0.26±0.3 mg L-1 and 2.2±2.53 mg L-1, respectively. The simulation of the MBR process was performed using SIMBA software v 5.0. The state variables employed in the steady-state calibration of ASM3h were determined using physical and respirometric methods. The model calibration was performed using experimental data obtained during the first 20 days of MBR operation. Afterwards, the kinetic parameters of the model were adjusted, and the simulated values of COD, N-NH4+ and N-NOx were compared with those obtained from the experiment. A good prediction was observed for the COD, N-NH4+ and N-NOx concentrations, with 467 g COD/m³, 110.2 g N/m³ and 3.2 g N/m³ compared to the experimental data, which were 436.4 g COD/m³, 114.7 g N/m³ and 3 g N/m³, respectively. For the validation of the model under dynamic simulation, the results of the experiments obtained during the second treatment phase of 30 days were used. It was demonstrated that the model simulated the conditions accurately, yielding a similar pattern in the variation of the COD concentration. On the other hand, an underestimation of the N-NH4+ concentration was observed in the simulation compared to the experimental results, and the measured N-NO3 concentrations were lower than the predicted ones. This difference could be explained by the fact that the ASM models were mainly designed for the simulation of biological processes in activated sludge systems; in addition, more treatment time could be required by the autotrophic bacteria to achieve complete and stable nitrification. Overall, this study demonstrated the effectiveness of mathematical modeling in predicting the performance of MBR systems with respect to organic pollution; the model can be further improved for the simulation of nutrient removal over a longer treatment period.
Keywords: activated sludge model (ASM3h), membrane bioreactor (MBR), poultry slaughter wastewater (PSWW), reuse
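As a hedged illustration of the growth/consumption kinetics embedded in ASM-type models, the sketch below integrates a single-substrate Monod model; the parameters are generic textbook values, not the calibrated ASM3h set used with SIMBA in this study:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Single-substrate Monod growth/consumption, the kind of kinetics embedded in
# ASM-type models; parameters are generic, not the calibrated ASM3h values.
mu_max, K_S, Y, b = 4.0, 20.0, 0.6, 0.2   # 1/d, g COD/m^3, g X/g COD, 1/d

def monod(t, y):
    S, X = y                               # substrate (COD) and biomass
    mu = mu_max * S / (K_S + S)            # Monod growth rate
    return [-(mu / Y) * X,                 # substrate consumption
            (mu - b) * X]                  # growth minus decay

sol = solve_ivp(monod, (0.0, 2.0), [436.4, 50.0], max_step=0.01)
print(f"COD after 2 d: {sol.y[0, -1]:.1f} g/m^3, biomass: {sol.y[1, -1]:.1f} g/m^3")
```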
Procedia PDF Downloads 60
2905 Application of Electrochemical Impedance Spectroscopy to Monitor the Steel/Soil Interface During Cathodic Protection of Steel in Simulated Soil Solution
Authors: Mandlenkosi George Robert Mahlobo, Tumelo Seadira, Major Melusi Mabuza, Peter Apata Olubambi
Abstract:
Cathodic protection (CP) has been widely considered a suitable technique for mitigating corrosion of buried metal structures. Plenty of efforts have been made in developing techniques, in particular non-destructive techniques, for monitoring and quantifying the effectiveness of CP to ensure the sustainability and performance of buried steel structures. The aim of this study was to investigate the evolution of the electrochemical processes at the steel/soil interface during the application of CP on steel in simulated soil. Carbon steel was subjected to electrochemical tests with NS4 solution used as simulated soil conditions for 4 days before applying CP for a further 11 days. A previously modified non-destructive voltammetry technique was applied before and after the application of CP to measure the corrosion rate. Electrochemical impedance spectroscopy (EIS), in combination with mathematical modeling through equivalent electric circuits, was applied to determine the electrochemical behavior at the steel/soil interface. The measured corrosion rate was found to have decreased from 410 µm/yr to 8 µm/yr between days 5 and 14 because of the applied CP. Equivalent electrical circuits were successfully constructed and used to adequately model the EIS results. The modeling of the obtained EIS results revealed the formation of corrosion products via a mixed activation-diffusion mechanism during the first 4 days, while the activation mechanism prevailed in the presence of CP, resulting in a protective film. The x-ray diffraction analysis confirmed the presence of corrosion products and the predominant protective film corresponding to the calcareous deposit.
Keywords: carbon steel, cathodic protection, NS4 solution, voltammetry, EIS
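A small sketch of the equivalent-circuit idea used to interpret EIS spectra: a Randles-type circuit (solution resistance in series with charge-transfer resistance and double-layer capacitance in parallel), with illustrative parameter values rather than the fitted ones from this study:

```python
import numpy as np

# Randles-type equivalent circuit: solution resistance Rs in series with the
# parallel combination of charge-transfer resistance Rct and double-layer
# capacitance Cdl. Values are illustrative, not the fitted ones of this study.
Rs, Rct, Cdl = 50.0, 2.0e3, 5.0e-5        # ohm, ohm, farad

freq = np.logspace(5, -2, 60)             # 100 kHz down to 10 mHz
omega = 2.0 * np.pi * freq
Z = Rs + Rct / (1.0 + 1j * omega * Rct * Cdl)

# On a Nyquist plot (Z' vs. -Z'') the low-frequency limit tends to Rs + Rct
print(f"|Z| at 100 kHz = {abs(Z[0]):.1f} ohm, |Z| at 10 mHz = {abs(Z[-1]):.1f} ohm")
print(f"low-frequency limit ~ Rs + Rct = {Rs + Rct:.0f} ohm")
```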
Procedia PDF Downloads 64
2904 Modeling Depth Averaged Velocity and Boundary Shear Stress Distributions
Authors: Ebissa Gadissa Kedir, C. S. P. Ojha, K. S. Hari Prasad
Abstract:
In the present study, the depth-averaged velocity and boundary shear stress in non-prismatic compound channels with three different converging floodplain angles, ranging from 1.43° to 7.59°, have been studied. The analytical solutions were derived by considering the forces acting on the channel bed and walls. Five key parameters, i.e., the non-dimensional coefficient, secondary flow term, secondary flow coefficient, friction factor, and dimensionless eddy viscosity, were considered and discussed. An expression for the non-dimensional coefficient and the integration constants was derived based on the boundary conditions. The model was applied to different data sets from the present experiments and from experiments reported by other sources to examine and analyse the influence of the floodplain converging angles on the depth-averaged velocity and boundary shear stress distributions. The results show that the non-dimensional parameter plays an important role in portraying the variation of the depth-averaged velocity and boundary shear stress distributions with different floodplain converging angles. Thus, the variation of the non-dimensional coefficient needs attention, since it affects the secondary flow term and the secondary flow coefficient in both the main channel and the floodplains. The analysis shows that the depth-averaged velocities are sensitive to the shear-stress-dependent non-dimensional model coefficient, and the analytical solutions agree well with the experimental data when the five parameters are included. It is inferred that the developed model may be of interest to others engaged in complex flow modeling.
Keywords: depth-average velocity, converging floodplain angles, non-dimensional coefficient, non-prismatic compound channels
Procedia PDF Downloads 74
2903 Nonlinear Finite Element Modeling of Deep Beam Resting on Linear and Nonlinear Random Soil
Authors: M. Seguini, D. Nedjar
Abstract:
An accurate nonlinear analysis of a deep beam resting on elastic perfectly plastic soil is carried out in this study. A nonlinear finite element model for large deflection and moderate rotation of an Euler-Bernoulli beam resting on linear and nonlinear random soil is investigated. The geometric nonlinear analysis of the beam is based on the theory of von Kármán, where the Newton-Raphson incremental iteration method is implemented in a Matlab code to solve the nonlinear equation of the soil-beam interaction system. Two analyses (deterministic and probabilistic) are proposed to verify the accuracy and efficiency of the proposed model, where the theory of the local average based on the Monte Carlo approach is used to analyze the effect of the spatial variability of the soil properties on the nonlinear beam response. The effects of six main parameters are investigated: the external load, the length of the beam, the coefficient of subgrade reaction of the soil, the Young's modulus of the beam, and the coefficient of variation and the correlation length of the soil's coefficient of subgrade reaction. A comparison between the beam resting on the linear and nonlinear soil models is presented for different beam lengths and external loads. Numerical results have been obtained for the combination of the geometric nonlinearity of the beam and the material nonlinearity of the random soil. This comparison highlighted the need to include the material nonlinearity and spatial variability of the soil in the geometric nonlinear analysis when the beam undergoes large deflections.
Keywords: finite element method, geometric nonlinearity, material nonlinearity, soil-structure interaction, spatial variability
Procedia PDF Downloads 414
2902 A Phenomenological Approach to Computational Modeling of Analogy
Authors: José Eduardo García-Mendiola
Abstract:
In this work, a phenomenological approach to the computational modeling of analogy processing is carried out. The paper examines the structure of analogy, considering whether the genesis of its elements can be grounded in Husserl's genetic theory of association. Among the processes involved in drawing analogical inferences, one is crucial for enabling efficient retrieval of base cases from long-term memory, namely analogical transference grounded in familiarity. In general, it has been argued that analogical reasoning is a way by which a conscious agent tries to determine or define a certain scope of objects and the relationships between them using previous knowledge of another, familiar domain of objects and relations. However, a complete description of the analogy process requires a deeper consideration of its phenomenological nature insofar as its simulation by computational programs is the aim. One would also gain an idea of how complex it would be to have a fully computational account of the elements of analogy. In fact, familiarity does not result from a mere chain of repetitions of objects or events; it is generated insofar as the object, attribute, or event in question can be integrated into a context that takes shape as functionalities and functional approaches or perspectives of the object are defined. Familiarity is not generated by identifying the object's parts or objective determinations as if they were isolated from those functionalities and approaches. Rather, at the core of such familiarity between entities of different kinds lies the way they are functionally encoded. Hoping to make deeper inroads into these topics, this essay considers how cognitive-computational perspectives can build on a phenomenological projection of the analogy process, reviewing achievements already obtained and exploring new theoretical-experimental configurations for implementing analogy models in special-purpose as well as general-purpose machines.Keywords: analogy, association, encoding, retrieval
Procedia PDF Downloads 1232901 Experimental Study for Examination of Nature of Diffusion Process during Wine Microoxygenation
Authors: Ilirjan Malollari, Redi Buzo, Lorina Lici
Abstract:
This study characterizes the changes in polyphenols (anthocyanins and flavonoids), color intensity, total polyphenol index, and maturity and oxidation indices during the micro-oxygenation of wine originating from a specific geographic area in the southeastern region of the country. In addition, mathematical modeling of the oxygen distribution within the fermenting wort showed the strong impact of the carbon dioxide present in the liquor. Analytical results show periodic increases in color intensity and tonality, a reduction in free anthocyanins and free flavonoids due to polycondensation reactions between tannins and anthocyanins, an increased total polyphenol index, and a decreased flavonoid-to-anthocyanin ratio, yielding a color-stabilized red wine, as confirmed by sensory tasting for color intensity, tonality, body, tannic perception, taste, and aftertaste, the last of which reflects the specific area and its environmental indications. Micro-oxygenation is a wine-making technique that consists of adding small, controlled amounts of oxygen at different stages of wine production, most effectively after the end of alcoholic fermentation. The objectives of the process include improved mouthfeel (body and texture), enhanced color stability, increased oxidative stability, and decreased vegetative aroma as the polyphenols evolve. A very important factor is the polyphenolic composition of the grape, which is strongly associated with the specific geographical environment in which the grape is grown.Keywords: micro oxygenation, polyphenols, environment, wine stability, diffusion modeling
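As a point of reference for the diffusion modeling mentioned above, a generic one-dimensional formulation for dissolved oxygen transport is written below in LaTeX; the symbols, the consumption term, and the way carbon dioxide enters the model are assumptions for illustration and may differ from the formulation actually used in the study.

```latex
% Generic 1-D diffusion-consumption balance for dissolved oxygen (assumed
% reference form; the study's actual model is not reproduced in the abstract):
\[
\frac{\partial C}{\partial t}
  = D_{\mathrm{eff}}\,\frac{\partial^{2} C}{\partial z^{2}} - r(C)
\]
% C(z,t): dissolved oxygen concentration, z: depth in the vessel,
% D_eff: effective diffusivity, r(C): oxygen consumption by the wine/wort.
% The strong influence of dissolved CO2 reported in the abstract would enter
% through D_eff and/or the boundary condition at the gas-liquid interface.
```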
Procedia PDF Downloads 2112900 Fuzzy Time Series- Markov Chain Method for Corn and Soybean Price Forecasting in North Carolina Markets
Authors: Selin Guney, Andres Riquelme
Abstract:
One of the main purposes of optimal and efficient forecasts of agricultural commodity prices is to guide firms in their economic decision making, such as planning business operations and marketing decisions. Governments are also beneficiaries and suppliers of agricultural price forecasts. They use this information to establish appropriate agricultural policy; hence, the forecasts affect social welfare, and systematic forecast errors could lead to a misallocation of scarce resources. Various empirical approaches, using different methodologies, have been applied to forecast commodity prices. The most commonly used approaches depend on classical time series models, which assume that the values of the response variables are precise, an assumption that quite often does not hold in reality. Recently, this literature has largely evolved toward fuzzy time series models, which relax classical time series assumptions such as stationarity and the large-sample-size requirement. In addition, the fuzzy modeling approach allows decision making with estimated values under incomplete information or uncertainty. A number of fuzzy time series models have been developed and implemented over the last decades; however, most of them are not appropriate for forecasting repeated and nonconsecutive transitions in the data. The modeling scheme used in this paper eliminates this problem by introducing a Markov modeling approach that takes both repeated and nonconsecutive transitions into account. The determination of the interval length is also crucial to forecast accuracy. The problem of setting the interval length arbitrarily is overcome by proposing a methodology that determines the proper interval length from the distribution or mean of the first differences of the series. The specific purpose of this paper is therefore to propose and investigate the potential of a new forecasting model that integrates this interval-length methodology with a Fuzzy Time Series-Markov Chain model. Moreover, the forecasting performance of the proposed integrated model is compared to that of different univariate time series models, and its superiority over competing methods in modeling and forecasting is demonstrated on the basis of forecast evaluation criteria. The application uses daily corn and soybean prices observed at three commercially important North Carolina markets: Candor, Cofield, and Roaring River for corn, and Fayetteville, Cofield, and Greenville City for soybeans. One main conclusion from this paper is that using fuzzy logic improves forecast performance and accuracy; the effectiveness and potential benefits of the proposed model are confirmed by small values of selection criteria such as MAPE. The paper concludes with a discussion of the implications of integrating fuzzy logic and a nonarbitrary determination of the interval length for the reliability and accuracy of price forecasts. The empirical results represent a significant contribution to our understanding of the applicability of fuzzy modeling to commodity price forecasts.Keywords: commodity, forecast, fuzzy, Markov
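The following Python sketch illustrates the general workflow of a fuzzy time series-Markov chain forecast in which the interval length is set from the mean of the absolute first differences; the synthetic price series, the first-order transition structure, and the midpoint-based defuzzification are assumptions for illustration, not the exact scheme of the paper.

```python
import numpy as np

# Illustrative sketch of a fuzzy time series / Markov chain forecast, assuming:
#  - interval length taken as the mean of the absolute first differences,
#  - a first-order Markov transition matrix between fuzzy states,
#  - forecasts formed as transition-weighted averages of interval midpoints.
# The price series below is synthetic and stands in for daily corn/soybean prices.

rng = np.random.default_rng(0)
prices = 4.0 + np.cumsum(rng.normal(0.0, 0.05, size=300))   # synthetic daily prices

# 1. Universe of discourse and interval length from the mean absolute first difference.
diffs = np.abs(np.diff(prices))
length = diffs.mean()                         # heuristic interval length
lo, hi = prices.min() - length, prices.max() + length
n_states = int(np.ceil((hi - lo) / length))
edges = lo + length * np.arange(n_states + 1)
mids = 0.5 * (edges[:-1] + edges[1:])

# 2. Fuzzify: assign each observation to the interval (fuzzy set) containing it.
states = np.clip(np.digitize(prices, edges) - 1, 0, n_states - 1)

# 3. First-order Markov transition matrix between fuzzy states.
P = np.zeros((n_states, n_states))
for s, s_next in zip(states[:-1], states[1:]):
    P[s, s_next] += 1.0
row_sums = P.sum(axis=1, keepdims=True)
P = np.divide(P, row_sums, out=np.zeros_like(P), where=row_sums > 0)

# 4. One-step-ahead forecasts: expected interval midpoint given the current state.
forecasts = P[states[:-1]] @ mids
actual = prices[1:]
mape = 100.0 * np.mean(np.abs((actual - forecasts) / actual))
print(f"interval length = {length:.4f}, states = {n_states}, MAPE = {mape:.2f}%")
```

The MAPE computed at the end corresponds to the kind of selection criterion the abstract uses to compare the fuzzy-Markov forecasts against competing univariate time series models.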
Procedia PDF Downloads 217