Search results for: uncertainty propagation.
Paper Count: 819


69 An Autonomous Collaborative Forecasting System Implementation – The First Step towards Successful CPFR System

Authors: Chi-Fang Huang, Yun-Shiow Chen, Yun-Kung Chung

Abstract:

In the past decade, artificial neural networks (ANNs) have been regarded as an instrument for problem-solving and decision-making; indeed, they have already delivered substantial efficiency and effectiveness improvements in industry and business. In this paper, Back-Propagation neural Networks (BPNs) are combined in a modular fashion to demonstrate the performance of the collaborative forecasting (CF) function of a Collaborative Planning, Forecasting and Replenishment (CPFR®) system. CPFR maintains the balance between sufficient product supply and customer demand in a Supply and Demand Chain (SDC). Several classical standard BPNs are grouped, coordinated and exploited for easy implementation of the proposed modular ANN framework, whose structure follows the topology of an SDC. Each individual BPN serves as a modular tool that forecasts the SKU (Stock-Keeping Unit) levels managed and supervised at a point of sale (POS), a wholesaler, and a manufacturer in the SDC. The proposed modular BPN-based CF system is exemplified and experimentally verified using numerous datasets from a simulated SDC. The experimental results show that a complex CF problem can be divided into a group of simpler sub-problems, one for each independent trading partner distributed over the SDC, and that SKU forecasting accuracy is satisfactory when the forecast values are compared with the original simulated SDC data. The primary task of implementing an autonomous CF is the study of supervised ANN learning methodology, which aims at making "knowledgeable" decisions for the best SKU sales plan and stock management.
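
As an illustration of the kind of modular forecasting unit described above, the following minimal sketch trains a single back-propagation network on lagged SKU levels. The synthetic series, layer sizes, and learning rate are assumptions for illustration, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical SKU demand series for one trading partner (synthetic data).
series = 100 + 10 * np.sin(np.arange(300) / 10) + rng.normal(0, 2, 300)

# Supervised pairs: four lagged values -> next value, scaled to [0, 1].
lo, hi = series.min(), series.max()
s = (series - lo) / (hi - lo)
X = np.stack([s[i:i + 4] for i in range(len(s) - 4)])
y = s[4:]

# One hidden layer of sigmoid units, trained by plain back-propagation.
W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for epoch in range(2000):
    h = sigmoid(X @ W1 + b1)                      # forward pass
    out = (h @ W2 + b2).ravel()
    err = out - y                                 # dLoss/dOut for squared error
    gW2 = h.T @ err[:, None] / len(y)
    gb2 = np.array([err.mean()])
    dh = err[:, None] @ W2.T * h * (1 - h)        # back-propagate through sigmoid
    gW1 = X.T @ dh / len(y)
    gb1 = dh.mean(axis=0)
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= 0.5 * g                              # gradient descent step

one_step = (sigmoid(X @ W1 + b1) @ W2 + b2).ravel() * (hi - lo) + lo
print("next-period SKU forecast:", round(float(one_step[-1]), 2))
```

In the paper's modular framework, one such network would be instantiated per trading partner (POS, wholesaler, manufacturer) and their forecasts combined along the SDC topology.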

Keywords: CPFR, artificial neural networks, global logistics, supply and demand chain.

PDF Downloads: 1947
68 Review of Downscaling Methods in Climate Change and Their Role in Hydrological Studies

Authors: Nishi Bhuvandas, P. V. Timbadiya, P. L. Patel, P. D. Porey

Abstract:

Recent perceived climate variability raises concerns about unprecedented hydrological phenomena and extremes. The distribution and circulation of the Earth's waters become increasingly difficult to determine because of additional uncertainty related to anthropogenic emissions. The worldwide observed changes in the large-scale hydrological cycle have been related to an increase in observed temperature over several decades. Although the effect of climate change on hydrology provides a general picture of possible global hydrological change, new tools and frameworks are required for modelling hydrological series with nonstationary characteristics at finer scales and for assessing climate change impacts. Among downscaling techniques, dynamic downscaling is usually based on Regional Climate Models (RCMs), which generate finer-resolution output based on atmospheric physics over a region using General Circulation Model (GCM) fields as boundary conditions. However, RCMs are not expected to capture the observed spatial precipitation extremes at a fine cell scale or at a basin scale. Statistical downscaling derives a statistical or empirical relationship between the variables simulated by the GCMs, called predictors, and station-scale hydrologic variables, called predictands. The main focus of the paper is on the need for statistical downscaling techniques for the projection of local hydrometeorological variables under climate change scenarios. The projections can then serve as inputs to various hydrologic models to obtain streamflow, evapotranspiration, soil moisture and other hydrological variables of interest.
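
As a concrete illustration of the statistical downscaling idea, the sketch below calibrates a simple linear transfer function between coarse GCM-scale predictors and a station-scale predictand, then validates it on held-out data. The predictors, data, and regression form are illustrative assumptions, since the paper reviews a whole family of such methods.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: coarse GCM predictors (e.g. sea-level pressure, humidity,
# 500 hPa geopotential) and an observed station-scale predictand. All synthetic.
n = 1000
predictors = rng.normal(size=(n, 3))
station_temp = 15 + predictors @ np.array([1.2, -0.5, 0.8]) + rng.normal(0, 0.7, n)

# Calibrate the empirical transfer function on a historical period...
X = np.column_stack([np.ones(n), predictors])
coef, *_ = np.linalg.lstsq(X[:700], station_temp[:700], rcond=None)

# ...then validate on held-out data, as one would before applying it to scenario runs.
pred = X[700:] @ coef
rmse = np.sqrt(np.mean((pred - station_temp[700:]) ** 2))
print(f"validation RMSE: {rmse:.2f}")
```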

Keywords: Climate Change, Downscaling, GCM, RCM.

PDF Downloads: 3321
67 Large Eddy Simulation of Hydrogen Deflagration in Open Space and Vented Enclosure

Authors: T. Nozu, K. Hibi, T. Nishiie

Abstract:

This paper discusses the applicability of a numerical model as a damage prediction method for accidental hydrogen explosions occurring in a hydrogen facility. The numerical model was based on the unstructured finite volume method (FVM) code “NuFD/FrontFlowRed”. For simulating unsteady turbulent combustion of leaked hydrogen gas, a combination of Large Eddy Simulation (LES) and a combustion model was used. The combustion model was based on a two-scalar flamelet approach, in which a G-equation model and a conserved scalar model expressed the propagation of the premixed flame surface and the diffusion combustion process, respectively. For validation of this numerical model, we simulated two previous hydrogen explosion tests. The first is an open-space explosion test, in which the source was a prismatic 5.27 m3 volume with a 30% hydrogen-air mixture; a reinforced concrete wall was set 4 m away from the front surface of the source, which was ignited at the bottom center by a spark. The second is a vented-enclosure explosion test in a 4.6 m × 4.6 m × 3.0 m chamber with a 5.4 m2 vent opening on one side; the mixture was ignited at the center of the wall opposite the vent, and hydrogen-air mixtures with hydrogen concentrations close to 18% vol. were used. The results from the numerical simulations were compared with the previous experimental data to assess the accuracy of the numerical model, and we verified that the simulated overpressures and flame time-of-arrival data were in good agreement with the results of the two explosion tests.

Keywords: Deflagration, Large Eddy Simulation, Turbulent combustion, Vented enclosure.

PDF Downloads: 1440
66 Upgraded Rough Clustering and Outlier Detection Method on Yeast Dataset by Entropy Rough K-Means Method

Authors: P. Ashok, G. M. Kadhar Nawaz

Abstract:

Rough set theory handles uncertainty and incomplete information by means of two approximation sets, the lower approximation and the upper approximation. In this paper, rough clustering is improved by adopting similarity-, dissimilarity–similarity- and entropy-based initial centroid selection: three clustering algorithms, namely Entropy-based Rough K-Means (ERKM), Similarity-based Rough K-Means (SRKM) and Dissimilarity–Similarity-based Rough K-Means (DSRKM), were developed and executed on the yeast dataset. The rough clustering algorithms are validated by the Rand and Adjusted Rand cluster validity indexes. Experimental results show that the ERKM clustering algorithm performs effectively and delivers better results than the other clustering methods. Outlier detection is an important task in data mining; outliers are objects very different from the rest of the objects in the clusters. The Entropy-based Rough Outlier Factor (EROF) method is shown to detect outliers effectively on the yeast dataset. In the rough K-Means method, tuning the epsilon (ε) value from 0.8 to 1.08 detects outliers in the boundary region, and the RKM algorithm delivers better results when epsilon (ε) is chosen in this range. Experimental results show that the EROF method performed very well on the clustering algorithms and is suitable for detecting outliers effectively for all datasets. Further, experimental readings show that the ERKM clustering method outperformed the other methods.
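
A minimal sketch of the rough K-Means idea underlying these variants is given below, following the Lingras-style scheme of lower approximations and boundary regions. The threshold, weights, and random initialization stand in for the paper's similarity-, dissimilarity- and entropy-based initializations, which are not reproduced here.

```python
import numpy as np

def rough_kmeans(X, k, eps=1.2, w_low=0.7, iters=50, seed=0):
    """Rough K-Means: each cluster keeps a lower approximation (certain
    members) and an upper approximation whose extra members form the
    boundary region (uncertain members)."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - C[None], axis=2)
        nearest = d.argmin(axis=1)
        lower = [[] for _ in range(k)]
        upper = [[] for _ in range(k)]
        for i, j in enumerate(nearest):
            # clusters nearly as close as the nearest one share the object
            close = np.where(d[i] <= eps * max(d[i, j], 1e-12))[0]
            if len(close) == 1:
                lower[j].append(i); upper[j].append(i)
            else:
                for c in close:
                    upper[c].append(i)            # boundary membership only
        for c in range(k):
            lo = X[lower[c]].mean(axis=0) if lower[c] else C[c]
            bnd = sorted(set(upper[c]) - set(lower[c]))
            bd = X[bnd].mean(axis=0) if bnd else lo
            C[c] = w_low * lo + (1 - w_low) * bd  # weighted centroid update
    return C, lower, upper

# Three synthetic 2-D clusters as a stand-in for the yeast attributes.
pts = np.random.default_rng(1).normal(0, 0.5, (150, 2))
pts += np.repeat([[0, 0], [4, 0], [2, 4]], 50, axis=0)
C, lower, upper = rough_kmeans(pts, 3)
print("boundary sizes:", [len(set(u) - set(l)) for l, u in zip(lower, upper)])
```

Raising the threshold (the paper's ε) widens the boundary regions, which is how boundary-region outliers become visible in the RKM setting.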

Keywords: Clustering, Entropy, Outlier, Rough K-Means, validity index.

PDF Downloads: 1368
65 Evolving Paradigm of Right to Development in International Human Rights Law and Its Transformation into the National Legal System: Challenges and Responses in Pakistan

Authors: Naeem Ullah Khan, Kalsoom Khan

Abstract:

No state can be progressive and prosperous if a large number of its people are deprived of their basic economic rights and freedoms. In the contemporary world of globalization, the right to development has gained momentum in the domain of International Development Law (IDL) and has been integrated into the National Legal Systems (NLS) of the major developed states. International human rights experts argue that the right to development (RTD) is a third-generation human right which tends to enhance the welfare and prosperity of individuals and is thus a right to a process whose outcomes are human rights, despite the controversy over the implications of RTD. In the Pakistani legal system, RTD is not expressly stated in the Constitution of the Islamic Republic of Pakistan, 1973; however, some implied constitutional provisions reflect the concept of RTD. The jurisprudence on RTD is still an evolving paradigm in the contextual perspective of Pakistan, and the superior courts of diverse jurisdictions act as a catalyst for the protection and enforcement of RTD in the interest of the public at large. The case law shows the positive inclination of the courts in Pakistan towards RTD being incorporated as an express provision in the chapters on fundamental rights; in this scenario, the High Courts of Pakistan under Article 199 and the Supreme Court of Pakistan under Article 184(3) have exercised jurisdiction over the enforcement of RTD. This paper inter alia examines the national dimensions of RTD from the standpoint of state practice in Pakistan and analyzes the experience of the judiciary in the protection and enforcement of RTD. Moreover, the paper highlights the social and cultural challenges Pakistan faces in implementing RTD and possible solutions to improve the condition of human rights in Pakistan. The paper also highlights the steps taken by Pakistan regarding the awareness, incorporation, and propagation of RTD at the national level.

Keywords: Globalization, Pakistan, RTD, third-generation right.

PDF Downloads: 865
64 A Preliminary Study on the Suitability of Data Driven Approach for Continuous Water Level Modeling

Authors: Muhammad Aqil, Ichiro Kita, Moses Macalinao

Abstract:

Reliable water level forecasts are particularly important for warning against dangerous floods and inundation. The current study investigates the suitability of the adaptive network-based fuzzy inference system for continuous water level modeling. A hybrid learning algorithm, which combines the least squares method and the back-propagation algorithm, is used to identify the parameters of the network. For this study, water level data are available for the 2002 hydrological year with a sampling interval of 1 hour. The number of antecedent water levels to include in the input variables is determined by two statistical methods, the autocorrelation function and the partial autocorrelation function of the variables. Forecasting was done for 1 hour up to 12 hours ahead in order to compare the models' generalization at longer horizons. The results demonstrate that the adaptive network-based fuzzy inference system model can be applied successfully and provides high accuracy and reliability for river water level estimation. The model gives accurate and reliable water level predictions 1 hour ahead, with MAPE = 1.15% and correlation = 0.98. Up to 12 hours ahead, the model still shows relatively good performance, with a prediction error of less than 9.65%. The information gathered from these preliminary results provides useful guidance for the design of flood early warning systems in which the magnitude and timing of a potential extreme flood are indicated.
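
The lag-selection step described above can be illustrated with a short sketch: compute sample autocorrelations, keep significant lags as candidate inputs, and score a naive baseline with MAPE. The synthetic hourly series and persistence baseline are assumptions; the ANFIS model itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical hourly water levels for one year (synthetic stand-in data).
t = np.arange(24 * 365)
level = 2 + 0.5 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.05, t.size)

def acf(x, lag):
    """Sample autocorrelation at a given lag."""
    x = x - x.mean()
    return (x[:-lag] * x[lag:]).sum() / (x * x).sum()

# Keep lags whose autocorrelation exceeds the usual 95% significance bound.
bound = 1.96 / np.sqrt(level.size)
lags = [k for k in range(1, 13) if abs(acf(level, k)) > bound]
print("candidate antecedent lags:", lags)

# Naive 1-hour-ahead persistence forecast, scored by MAPE as in the paper.
pred, actual = level[:-1], level[1:]
mape = np.mean(np.abs((actual - pred) / actual)) * 100
print(f"persistence MAPE: {mape:.2f}%")
```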

Keywords: Neural Network, Fuzzy, River, Forecasting

PDF Downloads: 1254
63 Numerical Simulations of Acoustic Imaging in Hydrodynamic Tunnel with Model Adaptation and Boundary Layer Noise Reduction

Authors: Sylvain Amailland, Jean-Hugh Thomas, Charles Pézerat, Romuald Boucheron, Jean-Claude Pascal

Abstract:

The noise requirements for naval and research vessels have seen an increasing demand for quieter ships in order to fulfil current regulations and to reduce the effects on marine life. Hence, new methods dedicated to the characterization of propeller noise, which is the main source of noise in the far field, are needed. The study of cavitating propellers in a closed test section is useful for analyzing hydrodynamic performance but involves significant difficulties for hydroacoustic study, especially due to reverberation and boundary layer noise in the tunnel. The aim of this paper is to present a numerical methodology for the identification of hydroacoustic sources on marine propellers using hydrophone arrays in a large hydrodynamic tunnel. The main difficulties are linked to the reverberation of the tunnel and the boundary layer noise, which strongly reduce the signal-to-noise ratio. It is proposed to estimate the reflection coefficients using an inverse method and reference transfer functions measured in the tunnel; this reduces the uncertainties of the propagation model used in the inverse problem. To reduce the boundary layer noise, a cleaning algorithm is presented that takes advantage of the low-rank and sparse structure of the cross-spectrum matrices of the acoustic and boundary layer noise. This approach allows the acoustic signal to be recovered even well below the boundary layer noise. The improvement brought by this method is visible on acoustic maps resulting from beamforming and DAMAS algorithms.
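
The low-rank-plus-sparse cleaning step can be sketched with a standard principal component pursuit iteration (inexact augmented Lagrangian), shown below on a synthetic real-valued matrix. The paper works with complex cross-spectral matrices and its own algorithm, so this is only a generic stand-in under stated assumptions.

```python
import numpy as np

def rpca(M, lam=None, mu=None, iters=200):
    """Principal component pursuit via an inexact augmented Lagrangian:
    split M into a low-rank part L (coherent acoustic field) and a sparse
    part S (boundary-layer contamination)."""
    m, n = M.shape
    lam = lam or 1 / np.sqrt(max(m, n))
    mu = mu or m * n / (4 * np.abs(M).sum())
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0)
    for _ in range(iters):
        # singular-value thresholding gives the low-rank update
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * shrink(s, 1 / mu)) @ Vt
        # soft thresholding gives the sparse update
        S = shrink(M - L + Y / mu, lam / mu)
        Y += mu * (M - L - S)
        if np.linalg.norm(M - L - S) <= 1e-7 * np.linalg.norm(M):
            break
    return L, S

# Synthetic cross-spectral-like matrix: rank-2 signal plus sparse outliers.
rng = np.random.default_rng(3)
A = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 40))
E = np.where(rng.random((40, 40)) < 0.05, rng.normal(0, 5, (40, 40)), 0)
L, S = rpca(A + E)
print("rank of recovered L:", np.linalg.matrix_rank(L, tol=1e-6))
```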

Keywords: Acoustic imaging, boundary layer noise denoising, inverse problems, model adaptation.

PDF Downloads: 932
62 Long Wavelength Coherent Pulse of Sound Propagating in Granular Media

Authors: Rohit Kumar Shrivastava, Amalia Thomas, Nathalie Vriend, Stefan Luding

Abstract:

A mechanical wave or vibration propagating through granular media exhibits a specific signature in time: a coherent pulse or wavefront arrives first, with multiply scattered waves (coda) arriving later. The coherent pulse is micro-structure independent, i.e. it depends only on the bulk properties of the disordered granular sample: the sound wave velocity and hence the bulk and shear moduli. The coherent wavefront attenuates (decreases in amplitude) and broadens with distance from its source. These attenuation and broadening effects are affected by disorder (polydispersity; contrast in the size of the granules) and have often been attributed to dispersion and scattering. To study the effect of disorder and of the initial amplitude (non-linearity) of the pulse imparted to the system on the coherent wavefront, numerical simulations have been carried out on one-dimensional sets of particles (granular chains). The interaction force between the particles is given by a Hertzian contact model. The particle sizes are selected randomly from a Gaussian distribution, whose standard deviation is the parameter that quantifies the effect of disorder on the coherent wavefront. Since the coherent wavefront is independent of the system configuration, ensemble averaging is used to improve the signal quality of the coherent pulse and remove the multiply scattered waves. The results concerning the width of the coherent wavefront are formulated in terms of scaling laws. An experimental set-up of photoelastic particles constituting a granular chain is proposed to validate the numerical results.
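
A minimal version of such a simulation is sketched below: a 1-D chain of Hertzian spheres integrated with velocity Verlet, where the standard deviation of the radii sets the polydispersity. Stiffness, density, time step, and pulse amplitude are illustrative assumptions rather than the study's parameters, and the ensemble averaging over many configurations is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

# 1-D granular chain; radii drawn from a Gaussian whose standard deviation
# is the disorder (polydispersity) parameter. All values are illustrative.
n, mean_r, sigma_r = 100, 1e-3, 0.05e-3
radii = rng.normal(mean_r, sigma_r, n)
mass = 2700 * (4 / 3) * np.pi * radii ** 3        # aluminium-like density
x = np.cumsum(2 * radii) - radii                  # just-touching positions
v = np.zeros(n)
v[0] = 0.1                                        # impart the initial pulse
k_hertz = 1e10                                    # assumed contact prefactor

def forces(x):
    f = np.zeros(n)
    overlap = (x[:-1] + radii[:-1] + radii[1:]) - x[1:]
    fc = k_hertz * np.where(overlap > 0, overlap, 0.0) ** 1.5  # F ~ delta^(3/2)
    f[:-1] -= fc                                  # contact pushes neighbours apart
    f[1:] += fc
    return f

dt = 1e-8
a = forces(x) / mass
for _ in range(10000):                            # velocity-Verlet integration
    x += v * dt + 0.5 * a * dt ** 2
    a_new = forces(x) / mass
    v += 0.5 * (a + a_new) * dt
    a = a_new

print("pulse front near particle", int(np.argmax(np.abs(v))))
```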

Keywords: Discrete elements, Hertzian Contact, polydispersity, weakly nonlinear, wave propagation.

PDF Downloads: 881
61 The Effect of Socio-Affective Variables in the Relationship between Organizational Trust and Employee Turnover Intention

Authors: Paula A. Cruise, Carvell McLeary

Abstract:

Employee turnover leads to lowered productivity, decreased morale and work quality, and psychological effects associated with employee separation and replacement. Yet, it remains unknown why talented employees willingly withdraw from organizations. This uncertainty is worsened as studies: a) prioritize organizational over individual predictors, resulting in restriction of range in turnover measurement; b) focus on actual rather than intended turnover, thereby limiting conceptual understanding of the turnover construct and its relationship with other variables; and c) produce inconsistent findings across cultures, contexts and industries despite a clear need for a unified perspective. The current study addressed these gaps by adopting the theory of planned behavior (TPB) framework to examine socio-cognitive factors in organizational trust and individual turnover intentions among bankers and energy employees in Jamaica. In a comparative study of n = 369 [n_bank = 264, male = 57 (22.73%); n_energy = 105, male = 45 (42.86%)], it was hypothesized that organizational trust is a predictor of employee turnover intention and that the effect of individual, group, cognitive and socio-affective variables varies across industries. Findings from structural equation modelling confirmed the hypothesis, with a model including both cognitive and socio-affective variables being a better fit [CMIN (χ²) = 800.067, df = 364, p ≤ .000; CFI = 0.950; RMSEA = 0.057 with 90% C.I. (0.052 - 0.062); PCLOSE = 0.016; PNFI = 0.818] in predicting turnover intention. The findings are discussed in relation to socio-cognitive components of trust models and the prediction of negative employee behaviors across cultures and industries.

Keywords: Context-specific organizational trust, cross-cultural psychology, theory of planned behavior, employee turnover intention.

PDF Downloads: 1036
60 Delamination Fracture Toughness Benefits of Inter-Woven Plies in Composite Laminates Produced through Automated Fibre Placement

Authors: Jayden Levy, Garth M. K. Pearce

Abstract:

An automated fibre placement method has been developed to build through-thickness reinforcement into carbon fibre reinforced plastic laminates during their production, with the goal of increasing delamination fracture toughness while circumventing the additional costs and defects imposed by post-layup stitching and z-pinning. Termed ‘inter-weaving’, the method uses custom placement sequences of thermoset prepreg tows to distribute regular fibre link regions in traditionally clean ply interfaces. Inter-weaving’s impact on mode I delamination fracture toughness was evaluated experimentally through double cantilever beam tests (ASTM standard D5528-13) on [±15°]9 laminates made from Park Electrochemical Corp. E-752-LT 1/4” carbon fibre prepreg tape. Unwoven and inter-woven automated fibre placement samples were compared to those of traditional laminates produced from standard uni-directional plies of the same material system. Unwoven automated fibre placement laminates were found to suffer a mostly constant 3.5% decrease in mode I delamination fracture toughness compared to flat uni-directional plies. Inter-weaving caused significant local fracture toughness increases (up to 50%), though these were offset by a matching overall reduction. These positive and negative behaviours of inter-woven laminates were respectively found to be caused by fibre breakage and matrix deformation at inter-weave sites, and the 3D layering of inter-woven ply interfaces providing numerous paths of least resistance for crack propagation.
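
The double cantilever beam data reduction behind these toughness values can be written down compactly. The sketch below evaluates the ASTM D5528 modified beam theory expression G_I = 3Pδ / (2b(a + |Δ|)) on made-up load, displacement, and crack-length readings; the numbers are illustrative, not the measured data.

```python
import numpy as np

def g1_mbt(P, delta, a, b, Delta=0.0):
    """Mode I strain energy release rate, modified beam theory (ASTM D5528):
    G_I = 3*P*delta / (2*b*(a + |Delta|)), with Delta the crack-length
    correction obtained from the compliance fit."""
    return 3 * P * delta / (2 * b * (np.asarray(a) + abs(Delta)))

# Hypothetical DCB readings: load (N), opening displacement (m), crack length (m).
P = np.array([60.0, 55.0, 50.0])
delta = np.array([0.004, 0.006, 0.008])
a = np.array([0.050, 0.060, 0.070])
b = 0.025                      # specimen width (m), an assumed value

print("G_I [J/m^2]:", g1_mbt(P, delta, a, b, Delta=0.002))
```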

Keywords: AFP, automated fibre placement, delamination, fracture toughness, inter-weaving.

PDF Downloads: 631
59 Estimating Saturated Hydraulic Conductivity from Soil Physical Properties using Neural Networks Model

Authors: B. Ghanbarian-Alavijeh, A.M. Liaghat, S. Sohrabi

Abstract:

Saturated hydraulic conductivity is one of the soil hydraulic properties widely used in environmental studies, especially of subsurface groundwater. Since its direct measurement is time consuming and therefore costly, indirect methods such as pedotransfer functions have been developed, based on multiple linear regression equations and neural network models, to estimate saturated hydraulic conductivity from readily available soil properties, e.g. sand, silt, and clay contents, bulk density, and organic matter. The objective of this study was to develop a neural network (NN) model to estimate saturated hydraulic conductivity from available parameters such as sand and clay contents, bulk density, van Genuchten retention model parameters (i.e. θ_r, α, and n), as well as effective porosity. We used two methods to calculate effective porosity: (1) φ_eff = θ_s − θ_FC, and (2) φ_eff = θ_s − θ_inf, in which θ_s is the saturated water content, θ_FC is the water content retained at −33 kPa matric potential, and θ_inf is the water content at the inflection point. A total of 311 soil samples from the UNSODA database were divided into three groups: 187 for training, 62 for validation (to avoid overtraining), and 62 for testing the NN model. A commercial neural network toolbox of MATLAB software with a multi-layer perceptron model and the back-propagation algorithm was used for the training procedure. Statistical parameters such as the correlation coefficient (R²) and the mean square error (MSE) were used to evaluate the developed NN model. The best number of neurons in the middle layer of the NN model for methods (1) and (2) was found to be 44 and 6, respectively. The R² and MSE values of the test phase were 0.94 and 0.0016 for method (1), and 0.98 and 0.00065 for method (2), which shows that method (2) estimates saturated hydraulic conductivity better than method (1).

Keywords: Neural network, Saturated hydraulic conductivity, Soil physical properties.

PDF Downloads: 2515
58 Torsion Behavior of Steel Fibered High Strength Self Compacting Concrete Beams Reinforced by GFRP Bars

Authors: Khaled S. Ragab, Ahmed S. Eisa

Abstract:

This paper investigates experimentally and analytically the torsion behavior of steel fibered high strength self compacting concrete beams reinforced by GFRP bars. Steel fibered high strength self compacting concrete (SFHSSCC) and GFRP bars have become very important materials in structural engineering in recent decades. The use of GFRP bars to replace steel bars has emerged as one of the many techniques put forward to enhance the corrosion resistance of reinforced concrete structures. High strength concrete and GFRP bars attract designers and architects as they improve the durability as well as the aesthetics of a construction. One of the goals in SFHSSCC structures is to ensure ductile behavior; an additional goal is to limit the development and propagation of macro-cracks in the body of SFHSSCC elements. SFHSSCC and GFRP bars are tough, improve workability, enhance the corrosion resistance of reinforced concrete structures, and demonstrate high residual strengths after the appearance of the first crack. Experimental studies were carried out to select effective fiber contents. Three volume fractions of hooked-shape steel fibers were used in this study: 0.0%, 0.75% and 1.5%. The beam shape was chosen to create the required forces (i.e. torsion and bending moments simultaneously) in the test zone. A total of seven beams were tested, classified into three groups. All beams have a length of 200 cm, a cross section of 10×20 cm, and a longitudinal bottom reinforcement of 3

Keywords: Self compacting concrete, torsion behavior, steel fiber, steel fiber reinforced high strength self compacting concrete (SFRHSCC), GFRP bars.

PDF Downloads: 3322
57 Lamb Wave Wireless Communication in Healthy Plates Using Coherent Demodulation

Authors: Rudy Bahouth, Farouk Benmeddour, Emmanuel Moulin, Jamal Assaad

Abstract:

Guided ultrasonic waves are used in Non-Destructive Testing and Structural Health Monitoring for inspection and damage detection. Recently, wireless data transmission using ultrasonic waves in solid metallic channels has gained popularity in industrial applications such as nuclear, aerospace and smart vehicles. The idea is to find a good substitute for electromagnetic waves, since they are highly attenuated near metallic components due to Faraday shielding. The proposed solution is to use ultrasonic guided waves such as Lamb waves as an information carrier, due to their capability of propagating over long distances. In addition, valuable information about the health of the structure can be extracted simultaneously. In this work, the reliable frequency bandwidth for communication is first extracted experimentally from dispersion curves. Then, an experimental platform for wireless communication using Lamb waves is described and built. Next, the coherent demodulation algorithm used in telecommunications is tested for Amplitude Shift Keying, On-Off Keying and Binary Phase Shift Keying modulation techniques. Signal processing parameters such as threshold choice, number of cycles per bit and bit rate are optimized. Experimental results are compared based on the average bit error percentage. The results show high sensitivity to threshold selection for the Amplitude Shift Keying and On-Off Keying techniques, resulting in a decreased bit rate. The Binary Phase Shift Keying technique shows the highest stability and data rate among all tested modulation techniques.
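
The coherent demodulation chain for the BPSK case can be sketched in a few lines: modulate bits onto a carrier, add noise as a crude stand-in for the plate channel, multiply by a synchronized local carrier, integrate over each bit, and threshold. The carrier frequency, sampling rate, and cycles per bit are assumptions, and real Lamb-wave dispersion is not modelled.

```python
import numpy as np

rng = np.random.default_rng(5)

fc, fs, cycles_per_bit = 100e3, 2e6, 20     # assumed carrier, sampling, bit length
bits = rng.integers(0, 2, 64)
spb = int(fs / fc * cycles_per_bit)         # samples per bit
t = np.arange(spb * bits.size) / fs

# BPSK: phase 0 for '1', phase pi for '0'.
symbols = np.repeat(2 * bits - 1, spb)
tx = symbols * np.cos(2 * np.pi * fc * t)
rx = tx + rng.normal(0, 0.5, tx.size)       # channel noise stands in for the plate

# Coherent demodulation: multiply by a synchronized local carrier,
# integrate over each bit, and take the sign of the correlation.
corr = rx * np.cos(2 * np.pi * fc * t)
decisions = corr.reshape(bits.size, spb).sum(axis=1) > 0
ber = np.mean(decisions != bits.astype(bool)) * 100
print(f"bit error percentage: {ber:.2f}%")
```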

Keywords: Lamb Wave Communication, wireless communication, coherent demodulation, bit error percentage.

PDF Downloads: 504
56 Experimental and Theoretical Investigation of Rough Rice Drying in Infrared-assisted Hot Air Dryer Using Artificial Neural Network

Authors: D. Zare, H. Naderi, A. A. Jafari

Abstract:

The drying characteristics of rough rice (lenjan variety) with an initial moisture content of 25% dry basis (db) were studied in a hot air dryer assisted by infrared heating. Three inlet air temperatures (30, 40 and 50°C), four infrared radiation intensities (0, 0.2, 0.4 and 0.6 W/cm²) and three inlet air speeds (0.1, 0.15 and 0.2 m/s) were studied. The bending strength of the brown rice kernel, the percentage of cracked kernels and the drying time were measured and evaluated. The results showed that increasing the inlet air temperature and the infrared radiation intensity decreased the drying time. High bending strength and a low percentage of cracked kernels were obtained when paddy was dried in the infrared-assisted hot air dryer. The effects of these factors and their interactions were significant (p < 0.01). An intensity level of 0.2 W/cm² was found to be optimal for radiation drying. Furthermore, the application of an Artificial Neural Network (ANN) for predicting the moisture content during drying (the output parameter for ANN modeling) was investigated. Infrared radiation intensity, drying air temperature, inlet air speed and drying time were considered as input parameters for the model. An ANN model with two hidden layers of 8 and 14 neurons was selected for studying the influence of transfer functions and training algorithms. The results revealed that a network with the tansig (hyperbolic tangent sigmoid) transfer function and the trainlm (Levenberg-Marquardt) back-propagation algorithm made the most accurate predictions for the paddy drying system. The mean square error (MSE) was calculated, and the random errors were found to be within an acceptable range of ±5% with a coefficient of determination (R²) of 99%.

Keywords: Rough rice, Infrared-hot air, Artificial Neural Network

PDF Downloads: 1792
55 A Stochastic Diffusion Process Based on the Two-Parameters Weibull Density Function

Authors: Meriem Bahij, Ahmed Nafidi, Boujemâa Achchab, Sílvio M. A. Gama, José A. O. Matos

Abstract:

Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present. The purpose of stochastic modeling is therefore to estimate the probability of outcomes within a forecast, i.e. to be able to predict what conditions or decisions might occur under different situations. In the present study, we present a model of a stochastic diffusion process based on the bi-Weibull distribution function (its trend is proportional to the bi-Weibull probability density function). In general, the Weibull distribution has the ability to assume the characteristics of many different types of distributions. This has made it very popular among engineers and quality practitioners, who have considered it the most commonly used distribution for studying problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. In this work, we start by obtaining the probabilistic characteristics of this model: the explicit expression of the process, its trends, and its distribution, by transforming the diffusion process into a Wiener process as shown in the Ricciardi theorem. Then, we develop the statistical inference of this model using the maximum likelihood methodology. Finally, we analyse with simulated data the computational problems associated with the parameters, an issue of great importance for application to real data, using convergence analysis methods. Overall, the use of a stochastic model reflects only a pragmatic decision on the part of the modeler: given the available data and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.
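
To make the construction concrete, the sketch below simulates sample paths of a diffusion whose drift is proportional to the two-parameter Weibull density, via the Euler-Maruyama scheme. The paper's exact drift/diffusion specification is not reproduced here, so the SDE form, its parameters, and the multiplicative noise term are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def weibull_pdf(t, lam, k):
    """Two-parameter Weibull density f(t) = (k/lam)*(t/lam)^(k-1)*exp(-(t/lam)^k)."""
    return (k / lam) * (t / lam) ** (k - 1) * np.exp(-((t / lam) ** k))

# Euler-Maruyama paths of dX = a*f(t)*X dt + sigma*X dW, a toy process whose
# trend is proportional to the Weibull density (all parameters are assumed).
lam, k, a, sigma = 2.0, 1.5, 3.0, 0.2
dt, n_steps, n_paths = 1e-3, 5000, 200
X = np.ones(n_paths)
for i in range(1, n_steps + 1):
    t = i * dt
    dW = rng.normal(0, np.sqrt(dt), n_paths)
    X += a * weibull_pdf(t, lam, k) * X * dt + sigma * X * dW

print("sample mean and std at t=5:", X.mean(), X.std())
```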

Keywords: Diffusion process, discrete sampling, likelihood estimation method, simulation, stochastic diffusion equation, trends functions, bi-parameters Weibull density function.

PDF Downloads: 1936
54 Integrated Approaches to Enhance Aggregate Production Planning with Inventory Uncertainty Based On Improved Harmony Search Algorithm

Authors: P. Luangpaiboon, P. Aungkulanon

Abstract:

This work presents a multiple objective linear programming (MOLP) model, based on the desirability function approach, for solving the aggregate production planning (APP) decision problem built upon Masud and Hwang's model. The proposed model minimises total production costs, carrying or backordering costs, and rates of change in labor levels. An industrial case demonstrates the feasibility of applying the proposed model to APP problems with three scenarios of inventory levels. The proposed model yields an efficient compromise solution and a good overall level of decision-maker satisfaction with the multiple combined response levels. There has been a trend towards solving complex planning problems using various metaheuristics. Therefore, in this paper, the multi-objective APP problem is also solved by hybrid metaheuristics that add hunting search (HuSIHSA) and firefly (FAIHSA) mechanisms to the improved harmony search algorithm, and the results obtained from the two solution mechanisms are compared. It is observed that the FAIHSA can be used as a successful alternative solution mechanism for solving APP problems over the three scenarios. Furthermore, the FAIHSA provides a systematic framework for facilitating the decision-making process, enabling a decision maker to interactively modify the desirability function approach and related model parameters until a satisfactory solution is obtained with proper selection of control parameters.
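
The harmony search core that the HuSIHSA and FAIHSA hybrids build on can be sketched as follows. The cost function is a toy stand-in for the APP objective (production, carrying/backordering, and labor-change costs), and the control parameters are typical textbook values, not the improved algorithm's settings.

```python
import numpy as np

rng = np.random.default_rng(7)

def harmony_search(f, lb, ub, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=5000):
    """Plain harmony search: improvise a new harmony from memory (rate hmcr),
    optionally pitch-adjust it (rate par), and replace the worst member."""
    dim = len(lb)
    hm = rng.uniform(lb, ub, (hms, dim))              # harmony memory
    cost = np.array([f(h) for h in hm])
    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:
                new[j] = hm[rng.integers(hms), j]     # pick from memory
                if rng.random() < par:
                    new[j] += bw * (ub[j] - lb[j]) * rng.uniform(-1, 1)
            else:
                new[j] = rng.uniform(lb[j], ub[j])    # random consideration
        new = np.clip(new, lb, ub)
        c = f(new)
        worst = cost.argmax()
        if c < cost[worst]:
            hm[worst], cost[worst] = new, c
    return hm[cost.argmin()], cost.min()

# Toy stand-in for an APP cost surface (production + carrying + labor-change costs).
cost_fn = lambda x: (x[0] - 3) ** 2 + (x[1] - 1) ** 2 + 0.1 * abs(x[0] * x[1])
best, val = harmony_search(cost_fn, np.array([0.0, 0.0]), np.array([10.0, 10.0]))
print("best plan:", best, "cost:", val)
```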

Keywords: Aggregate Production Planning, Desirability Function Approach, Improved Harmony Search Algorithm, Hunting Search Algorithm and Firefly Algorithm.

PDF Downloads: 1884
53 Information Retrieval in Domain Specific Search Engine with Machine Learning Approaches

Authors: Shilpy Sharma

Abstract:

As the web continues to grow exponentially, the idea of crawling the entire web on a regular basis becomes less and less feasible, so domain-specific search engines, which index information on a specific domain, were proposed. As more information becomes available on the World Wide Web, it becomes more difficult to provide effective search tools for information access. Today, people access web information through two main kinds of search interfaces: browsers (clicking and following hyperlinks) and query engines (queries in the form of a set of keywords showing the topic of interest) [2]. Better support is needed for expressing one's information need and returning high-quality search results by web search tools. There appears to be a need for systems that reason under uncertainty and are flexible enough to recover from the contradictions, inconsistencies, and irregularities that such reasoning involves. In a multi-view problem, the features of the domain can be partitioned into disjoint subsets (views) that are sufficient to learn the target concept. Semi-supervised, multi-view algorithms, which reduce the amount of labeled data required for learning, rely on the assumptions that the views are compatible and uncorrelated. This paper describes the use of a semi-supervised machine learning approach with active learning for domain-specific search engines. A domain-specific search engine is an information access system that allows access to all the information on the web that is relevant to a particular domain. The proposed work shows that, with the help of this approach, relevant data can be extracted with the minimum number of queries fired by the user. It requires a small number of labeled data and a pool of unlabelled data, on which the learning algorithm is applied to extract the required data.
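
A minimal sketch of the active-learning loop described above: start from a small labeled seed, train a classifier, and repeatedly query the pool document whose predicted relevance is most uncertain. The synthetic relevance data and the logistic model are assumptions standing in for a real domain-specific crawl.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)

# Synthetic stand-in for page relevance: feature vectors and relevant/irrelevant labels.
X = rng.normal(size=(2000, 5))
y = (X @ rng.normal(size=5) > 0).astype(int)

# Small labeled seed containing both classes; the rest is the unlabeled pool.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in set(labeled)]

clf = LogisticRegression()
for _ in range(20):
    clf.fit(X[labeled], y[labeled])
    # Uncertainty sampling: query the pool item the model is least sure about.
    proba = clf.predict_proba(X[pool])[:, 1]
    query = pool[int(np.argmin(np.abs(proba - 0.5)))]
    labeled.append(query)                 # oracle (user) supplies the label
    pool.remove(query)

print("accuracy with only", len(labeled), "labels:", round(clf.score(X, y), 3))
```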

Keywords: Search engines, machine learning, information retrieval, active logic.

PDF Downloads: 2051
52 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things

Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker

Abstract:

Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracies and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties and may be conflicting. In this study, we present a framework that exploits the capabilities of Evidence Theory (a.k.a. Dempster-Shafer and Dezert-Smarandache Theories) for representing and managing uncertainty and conflict, in order to achieve fast change detection and deal effectively with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Mass functions are then calculated, and the related combination rules are applied to combine the mass values among all sensors. Furthermore, we apply the method to estimate the minimum number of sensors needed for combination, so that computational efficiency can be improved. A cumulative sum test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
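
The detection core can be sketched without the evidence-theory fusion layer: the snippet below computes the Gaussian KL divergence used as the similarity metric and runs a CUSUM recursion on the log-likelihood ratio of post- versus pre-change models. The mass-function combination across sensors is omitted, and all distributions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

def kl_gauss(mu0, s0, mu1, s1):
    """KL divergence between univariate Gaussians: N(mu0, s0^2) || N(mu1, s1^2)."""
    return np.log(s1 / s0) + (s0 ** 2 + (mu0 - mu1) ** 2) / (2 * s1 ** 2) - 0.5

# Synthetic sensor stream: the mean shifts from 0 to 1 at sample 500.
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(1, 1, 500)])

# CUSUM on the log-likelihood ratio of post- vs. pre-change models.
mu_pre, mu_post, sigma = 0.0, 1.0, 1.0
llr = (x - mu_pre) * (mu_post - mu_pre) / sigma ** 2 \
      - (mu_post - mu_pre) ** 2 / (2 * sigma ** 2)
g, h = 0.0, 8.0                                # decision statistic and threshold
for n, z in enumerate(llr):
    g = max(0.0, g + z)
    if g > h:
        print("change declared at sample", n)
        break

print("KL(pre || post) =", kl_gauss(mu_pre, sigma, mu_post, sigma))
```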

Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data.

PDF Downloads: 952
51 Fuzzy Optimization in Metabolic Systems

Authors: Feng-Sheng Wang, Wu-Hsiung Wu, Kai-Cheng Hsu

Abstract:

The optimization of biological systems, a branch of metabolic engineering, has generated a lot of industrial and academic interest for a long time. In the last decade, metabolic engineering approaches based on mathematical optimization have been used extensively for the analysis and manipulation of metabolic networks. In the practical optimization of metabolic reaction networks, designers have to manage the uncertainty arising from the qualitative characters of metabolic reactions, e.g., the possibility of enzyme effects. A deterministic approach does not give an adequate representation of metabolic reaction networks with uncertain characters; fuzzy optimization formulations can be applied to cope with this problem. A fuzzy multi-objective optimization problem can be introduced for finding the optimal engineering interventions on metabolic network systems, considering the resilience phenomenon and cell viability constraints. The accuracy of optimization results depends heavily on the development of essential kinetic models of metabolic networks. Kinetic models can quantitatively capture the experimentally observed regulation data of metabolic systems and are often used to find the optimal manipulation of external inputs. To address the issue of optimizing the regulatory structure of metabolic networks, it is necessary to consider qualitative effects, e.g., the resilience phenomena and cell viability constraints. Combining qualitative and quantitative descriptions of metabolic networks makes it possible to design a viable strain and to accurately predict the maximum possible flux rates of desired products. Considering the resilience phenomena in metabolic networks can improve the predictions of gene interventions and maximum synthesis rates in metabolic engineering. Two case studies will be presented at the conference to illustrate these phenomena.

Keywords: Fuzzy multi-objective optimization problem, kinetic model, metabolic engineering.

PDF Downloads: 1978
50 Computational Feasibility Study of a Torsional Wave Transducer for Tissue Stiffness Monitoring

Authors: Rafael Muñoz, Juan Melchor, Alicia Valera, Laura Peralta, Guillermo Rus

Abstract:

A torsional piezoelectric ultrasonic transducer design is proposed to measure shear moduli in soft tissue with direct access availability, using the shear wave elastography technique. The measurement of the shear moduli of tissues is a challenging problem, mainly because of a) the difficulty of isolating a pure shear wave, given the interference of multiple waves of different types (P, S, even guided) emitted by the transducers and reflected at geometric boundaries, and b) the highly attenuating nature of soft tissue. An immediate application that overcomes these drawbacks is the measurement of changes in cervix stiffness to estimate the gestational age at delivery. The design has been optimized using a finite element model (FEM) and a semi-analytical estimator of the probability of detection (POD) to determine a suitable geometry, materials and generated waves. The technique is based on the measurement of the time of flight between emitter and receiver, from which the shear wave velocity is inferred. Current research is centered on prototype testing and validation. The geometric optimization of the transducer was able to annihilate the compressional wave emission, generating a quite pure torsional shear wave. Currently, the mechanical and electromagnetic coupling between emitter and receiver signals is the focus of research. In conclusion, the design overcomes the main problems described: the almost pure torsional shear wave along with the short time of flight avoids multiple wave interference, and the short propagation distance reduces the effect of attenuation and allows the emission of very low energies, assuring good biological safety for human use.
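
The measurement principle reduces to a short calculation, sketched below: shear wave speed from the emitter-receiver distance and time of flight, then shear modulus from μ = ρv_s². The numbers are illustrative soft-tissue values, not results from the study.

```python
# Shear wave elastography via time of flight: infer the shear wave speed from
# the emitter-receiver distance, then the shear modulus from mu = rho * v_s^2.
# All values below are illustrative assumptions.
rho = 1000.0            # soft tissue density (kg/m^3)
distance = 0.02         # emitter-receiver separation (m)
tof = 6.7e-3            # measured time of flight (s)

v_s = distance / tof    # shear wave speed (m/s)
mu = rho * v_s ** 2     # shear modulus (Pa)
print(f"v_s = {v_s:.2f} m/s, shear modulus = {mu / 1e3:.1f} kPa")
```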

Keywords: Cervix ripening, preterm birth, shear modulus, shear wave elastography, soft tissue, torsional wave.

PDF Downloads: 1532
49 Combination of Different Classifiers for Cardiac Arrhythmia Recognition

Authors: M. R. Homaeinezhad, E. Tavakkoli, M. Habibi, S. A. Atyabi, A. Ghaffari

Abstract:

This paper describes a new supervised fusion (hybrid) electrocardiogram (ECG) classification solution consisting of a new QRS complex geometrical feature extraction method and a new version of the learning vector quantization (LVQ) classification algorithm aimed at overcoming the stability-plasticity dilemma. Toward this objective, after detection and delineation of the major events of the ECG signal via an appropriate algorithm, each QRS region and its corresponding discrete wavelet transform (DWT) are treated as virtual images, and each of them is divided into eight polar sectors. Then, the curve length of each excerpted segment is calculated and used as an element of the feature space. To increase the robustness of the proposed classification algorithm against noise, artifacts and arrhythmic outliers, a fusion structure was designed and implemented consisting of five different classifiers: a Support Vector Machine (SVM), a Modified Learning Vector Quantization (MLVQ) network, and three Multi-Layer Perceptron-Back-Propagation (MLP-BP) neural networks with different topologies. The new proposed algorithm was applied to all 48 MIT-BIH Arrhythmia Database records (within-record analysis), and the discrimination power of the classifier in isolating the different beat types of each record was assessed; an average accuracy of Acc = 98.51% was obtained. The proposed method was also applied to six arrhythmia classes (Normal, LBBB, RBBB, PVC, APB, PB) belonging to 20 different records of the aforementioned database (between-record analysis), and an average accuracy of Acc = 95.6% was achieved. To evaluate the quality of the new proposed hybrid learning machine, the obtained results were compared with similar peer-reviewed studies in this area.
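
The curve-length feature at the heart of the feature space can be sketched directly. The toy QRS-like excerpt and the equal split into eight segments are simplifications of the paper's polar-sector partition of the virtual QRS image.

```python
import numpy as np

def curve_length(y, dx=1.0):
    """Curve length of a sampled waveform segment: the sum of hypotenuses
    sqrt(dx^2 + dy^2) between consecutive samples."""
    dy = np.diff(y)
    return np.sum(np.sqrt(dx ** 2 + dy ** 2))

# Toy QRS-like excerpt split into eight segments; each segment's curve length
# becomes one element of the feature vector.
rng = np.random.default_rng(10)
qrs = np.sin(np.linspace(0, np.pi, 64)) ** 3 + rng.normal(0, 0.01, 64)
features = [curve_length(seg) for seg in np.array_split(qrs, 8)]
print("curve-length features:", np.round(features, 3))
```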

Keywords: Feature Extraction, Curve Length Method, Support Vector Machine, Learning Vector Quantization, Multi-Layer Perceptron, Fusion (Hybrid) Classification, Arrhythmia Classification, Supervised Learning Machine.

PDF Downloads: 2185
48 Soft Real-Time Fuzzy Task Scheduling for Multiprocessor Systems

Authors: Mahdi Hamzeh, Sied Mehdi Fakhraie, Caro Lucas

Abstract:

All practical real-time scheduling algorithms in multiprocessor systems present a trade-off between computational complexity and performance. In real-time systems, tasks have to be performed correctly and on time. Finding a minimal schedule in multiprocessor systems with real-time constraints is known to be NP-hard. Although optimal algorithms have been employed in uni-processor systems, they fail when applied to multiprocessor systems. Practical scheduling algorithms for real-time systems do not have deterministic response times, yet deterministic timing behavior is an important parameter for system robustness analysis. The intrinsic uncertainty in dynamic real-time systems increases the difficulty of the scheduling problem. To alleviate these difficulties, we propose a fuzzy scheduling approach to arrange real-time periodic and non-periodic tasks in multiprocessor systems. Static and dynamic optimal scheduling algorithms fail under non-critical overload. In contrast, our approach balances the task loads of the processors successfully while providing starvation prevention and fairness, so that higher-priority tasks have a higher running probability. A simulation was conducted to evaluate the performance of the proposed approach. Experimental results show that the proposed fuzzy scheduler creates feasible schedules for homogeneous and heterogeneous tasks. It also considers task priorities, which leads to higher system utilization and shorter deadline miss times. According to the results, it performs very close to the optimal schedule of uni-processor systems.

Keywords: Computational complexity, Deadline, Feasible scheduling, Fuzzy scheduling, Priority, Real-time multiprocessor systems, Robustness, System utilization.

PDF Downloads: 2086
47 Comparing Field Displacement History with Numerical Results to Estimate Geotechnical Parameters: Case Study of Arash-Esfandiar-Niayesh under Passing Tunnel, 2.5 Traffic Lane Tunnel, Tehran, Iran

Authors: A. Golshani, M. Gharizade Varnusefaderani, S. Majidian

Abstract:

Underground structures are structures whose design procedures involve uncertainty, owing to the complexity of the surrounding soil conditions; underpass tunnels are among such structures. Despite geotechnical site investigations, many uncertainties remain in soil properties due to unknown events, and as a result, settlements computed in numerical analyses may conflict with the values recorded in the project. This paper reports a case study of a specific underpass tunnel constructed by the New Austrian Tunnelling Method in Iran. The tunnel has an overburden of about 11.3 m, a height of 12.2 m and a width of 14.4 m, accommodating 2.5 traffic lanes. The numerical model was developed in a 2D finite element program (PLAXIS Version 8). Comparing displacement histories at the ground surface during the entire installation of the initial lining, the estimated surface settlement was about four times the field-recorded one, which indicates that local unknown events affect that value; the displacement ratios also differed considerably between the numerical and field data. Consequently, by running several numerical back analyses using laboratory and field test data, the geotechnical parameters were revised to match the monitoring data. It was found that the values of soil parameters are usually conservatively underestimated by up to 40 percent by typical engineering judgment; additionally, the discrepancy could be attributed to inappropriate constitutive models applied for the specific soil condition.

Keywords: NATM, surface displacement history, soil tests, monitoring data, numerical back-analysis, geotechnical parameters.

PDF Downloads: 738
46 Typical Day Prediction Model for Output Power and Energy Efficiency of a Grid-Connected Solar Photovoltaic System

Authors: Yan Su, L. C. Chan

Abstract:

A novel typical day prediction model has been built and validated with measured data from a grid-connected solar photovoltaic (PV) system in Macau. Unlike the conventional statistical method used in previous studies of PV systems, which obtains results by averaging nearby continuous points, the present typical day statistical method obtains the value at every minute of a typical day by averaging discontinuous points at the same minute in different days. This typical day statistical method, based on discontinuous-point averaging, makes it possible to obtain the Gaussian-shaped dynamical distributions of solar irradiance and output power in a yearly or monthly typical day. Based on the yearly typical day statistical analysis, the maximum possible accumulated output energy in a year under on-site climate conditions and the corresponding optimal PV system running time are obtained. Periodic Gaussian-shaped prediction models for solar irradiance, output energy and system energy efficiency have been built, and their coefficients have been determined from the yearly, maximum and minimum monthly typical day Gaussian distribution parameters, obtained by iterating for the minimum Root Mean Squared Deviation (RMSD). With the present model, the dynamical effects due to the time of day are kept, while the day-to-day uncertainty due to changing weather is smoothed but still included. The periodic Gaussian-shaped correlations for solar irradiance, output power and system energy efficiency compare favorably with the data of the PV system in Macau and prove to be an improvement over previous models.
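
The discontinuous-point averaging that defines the typical day can be sketched in a few lines: average the same minute across different days rather than neighbouring minutes within one day. The synthetic per-minute output profile below is an assumption standing in for the Macau measurements.

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic one-year, per-minute PV output: a daily Gaussian-shaped profile
# with day-to-day weather noise (all values are illustrative).
minutes = np.arange(24 * 60)
profile = np.exp(-((minutes - 12 * 60) ** 2) / (2 * 150.0 ** 2))
days = profile * rng.uniform(0.3, 1.0, (365, 1)) \
       + rng.normal(0, 0.01, (365, minutes.size))

# Typical-day statistic: average the *same minute* across different days,
# i.e. discontinuous points, rather than neighbouring minutes within one day.
typical_day = days.mean(axis=0)

peak = int(minutes[typical_day.argmax()])
print(f"typical-day peak at {peak // 60:02d}:{peak % 60:02d}")
```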

Keywords: Grid Connected, RMSD, Solar PV System, Typical Day.

PDF Downloads: 1643
45 A Post Keynesian Environmental Macroeconomic Model for Agricultural Water Sustainability under Climate Change in the Murray-Darling Basin, Australia

Authors: Ke Zhao, Colin Richardson, Jerry Courvisanos, John Crawford

Abstract:

Climate change has profound consequences for the agriculture of south-eastern Australia and its climate-induced water shortage in the Murray-Darling Basin. Post Keynesian Economics (PKE) macro-dynamics, along with Kaleckian investment and growth theory, are used to develop an ecological-economic system dynamics model of this complex nonlinear river basin system. The Murray-Darling Basin Simulation Model (MDB-SM) uses the principles of PKE to incorporate the fundamental uncertainty of farmers' economic behavior regarding the investments they make and the climate change they face, particularly as regards water ecosystem services. MDB-SM provides a framework for macroeconomic policies, especially for long-term fiscal policy and for policy directed at the sustainability of agricultural water, as measured by socio-economic well-being considerations, which include sustainable consumption and investment in the river basin. The model can also reproduce other ecological and economic aspects and, for certain parameters and initial values, exhibit endogenous business cycles and ecological sustainability with realistic characteristics. Most importantly, MDB-SM provides a platform for the analysis of alternative economic policy scenarios. These results reveal the importance of understanding water ecosystem adaptation under climate change by integrating a PKE macroeconomic analytical framework with the system dynamics modelling approach. Once parameterised and supplied with historical initial values, MDB-SM should prove to be a practical tool for providing alternative long-term policy simulations of agricultural water and socio-economic well-being.

Keywords: Agricultural water, Macroeconomic dynamics, Modeling, Investment dynamics, Sustainability, Unemployment, Economics, Keynesian, Kaleckian.

PDF Downloads: 2133
44 Teaching Translation in Brazilian Universities: A Study about the Possible Impacts of Translators’ Comments on the Cyberspace about Translator Education

Authors: Erica Lima

Abstract:

The objective of this paper is to discuss relevant points about teaching translation in Brazilian universities and the possible impacts of blogs and social networks on translator education today. It analyzes the curricula of Brazilian translation courses, contrasting them with information obtained from two social networking groups of great visibility in the area concerning the characteristics considered essential to become a successful professional. The research therefore has as its main corpus a few undergraduate translation programs' syllabuses, as well as postings in social network groups that specifically share professional opinions on whether a translator needs a degree in translation to practice the profession. To a certain extent, such comments and their corresponding responses lead to the propagation of discourses which influence the ideas that aspiring translators and recent graduates end up having about themselves and their undergraduate courses. The postings also show that many professionals do not have a clear position regarding translator education: while rejecting it, they nevertheless encourage “free” courses. It is thus observed that cyberspace constitutes, on the one hand, a place where people mobilize in defense of similar ideas; on the other hand, it embodies a place of tension and conflict, given that there are many participants and, as in any other situation of interlocution, disagreements may arise. From the postings, aspects related to professionalism were analyzed (including discussions about regulation), as well as questions about the classic dichotomies: theory/practice; art/technique; self-education/academic training. As a partial result, a common interest in the valorization of the profession can be mentioned, although there is no consensus on the characteristics essential to being a good translator. It was also possible to observe that the set of socially constructed representations in the groups reflects the worldwide situation of translation courses (especially in some European countries and in the United States), which, in the first instance, does not accurately reflect the Brazilian idiosyncrasies of the area.

Keywords: Cyberspace, teaching translation, translator education, university.

PDF Downloads: 866
43 Performance Analysis of HSDPA Systems Using Low-Density Parity-Check (LDPC) Coding as Compared to Turbo Coding

Authors: K. Anitha Sheela, J. Tarun Kumar

Abstract:

HSDPA is a feature introduced in the Release 5 specifications of the 3GPP WCDMA/UTRA standard to realize higher data rates together with lower round-trip times. The HSDPA concept offers an outstanding improvement in packet throughput and also significantly reduces the packet call transfer delay as compared to the Release 99 DSCH. Until now, HSDPA systems have used turbo coding, one of the best coding techniques for approaching the Shannon limit. However, the main drawbacks of turbo coding are its high decoding complexity and high latency, which make it unsuitable for some applications like satellite communications, where the transmission distance itself introduces latency due to the limited speed of light. Hence, in this paper it is proposed to use LDPC coding in place of turbo coding for the HSDPA system, which decreases the latency and decoding complexity, although LDPC coding increases the encoding complexity. Though the complexity of the transmitter at the NodeB increases, the end user gains in terms of receiver complexity and bit error rate. In this paper, the LDPC encoder is implemented using a sparse parity-check matrix H to generate codewords, and the belief propagation algorithm is used for LDPC decoding. Simulation results show that with LDPC coding the BER drops sharply as the number of iterations increases, for a small increase in Eb/No, which is not the case with turbo coding. The same BER was also achieved using fewer iterations, so the latency and receiver complexity decrease with LDPC coding. HSDPA increases the downlink data rate within a cell to a theoretical maximum of 14 Mbps, with 2 Mbps on the uplink. The changes that HSDPA enables include better quality and more reliable and more robust data services: while realistic data rates are only a few Mbps, the actual quality and number of users achieved improve significantly.
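
For a flavour of iterative LDPC decoding, the sketch below runs hard-decision bit-flipping on a toy sparse parity-check matrix. This is a simplified relative of the belief propagation (sum-product) decoder the paper uses, chosen here for brevity, and the small (7,4) matrix is an illustrative assumption; practical LDPC matrices are much larger and sparser.

```python
import numpy as np

# Toy sparse parity-check matrix H of a (7,4) code (illustrative only).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(r, H, max_iter=50):
    """Hard-decision bit-flipping: repeatedly flip the bit involved in the
    most unsatisfied parity checks (a simplified relative of belief propagation)."""
    c = r.copy()
    for _ in range(max_iter):
        syndrome = H @ c % 2
        if not syndrome.any():
            return c, True                    # all parity checks satisfied
        # count, per bit, how many failed checks it participates in
        fails = H.T @ syndrome
        c[np.argmax(fails)] ^= 1
    return c, False

codeword = np.zeros(7, dtype=int)             # the all-zero word is a codeword
received = codeword.copy()
received[2] ^= 1                              # single channel bit error
decoded, ok = bit_flip_decode(received, H)
print("decoded:", decoded, "valid:", ok)
```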

Keywords: AMC, HSDPA, LDPC, WCDMA, 3GPP.

PDF Downloads: 2013
42 Seismic Fragility Assessment of Strongback Steel Braced Frames Subjected to Near-Field Earthquakes

Authors: Mohammadreza Salek Faramarzi, Touraj Taghikhany

Abstract:

In this paper, the seismic fragility of a recently developed hybrid structural system, known as the strongback system (SBS), is assessed. In this system, an elastic vertical truss is formed to mitigate the occurrence of the soft-story mechanism and improve the distribution of story drifts over the height of the structure. The strengthened members of the braced span are designed to remain substantially elastic during levels of excitation where soft-story mechanisms are likely to occur, and to impose a nearly uniform story drift distribution. Due to the distinctive characteristics of near-field ground motions, it is necessary to study the effect of these records on the seismic performance of the SBS. To this end, a set of 56 near-field ground motion records suggested by the FEMA P695 methodology is used. For the fragility assessment, nonlinear dynamic analyses are carried out in OpenSEES based on the procedure recommended in the HAZUS technical manual. Four damage states are considered: slight, moderate, extensive, and complete damage (collapse). To evaluate each damage state, the inter-story drift ratio and the floor acceleration are implemented as engineering demand parameters. Further, to extend the evaluation of the collapse state of the system, a different collapse criterion suggested in FEMA P695 is applied. It is concluded that the SBS can significantly increase the collapse capacity and consequently decrease the collapse risk of the structure during its lifetime. Comparing the observed mean annual frequency (MAF) of exceedance of each damage state against the allowable values presented in performance-based design methods, it is found that the elastic vertical truss improves the structural response effectively.
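
A lognormal fragility curve of the kind used to express these damage-state probabilities can be evaluated directly, as sketched below. The medians and dispersions per HAZUS damage state are illustrative assumptions, not values derived from the SBS analyses.

```python
import math

def fragility(im, theta, beta):
    """Lognormal fragility curve: probability of reaching a damage state at
    intensity measure `im`, with median capacity `theta` and dispersion `beta`:
    P(DS | IM = im) = Phi( ln(im / theta) / beta )."""
    z = math.log(im / theta) / beta
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Illustrative medians/dispersions for the four HAZUS damage states (assumed).
states = {"slight": (0.2, 0.4), "moderate": (0.4, 0.4),
          "extensive": (0.8, 0.45), "complete": (1.4, 0.5)}
im = 0.6   # spectral acceleration (g), for example
for name, (theta, beta) in states.items():
    print(f"P({name} | Sa={im} g) = {fragility(im, theta, beta):.3f}")
```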

Keywords: Strongback System, Near-fault, Seismic fragility, Uncertainty, IDA, Probabilistic performance assessment.

PDF Downloads: 517
41 Multiple Criteria Decision Making Analysis for Selecting and Evaluating Fighter Aircraft

Authors: C. Ardil, A. M. Pashaev, R.A. Sadiqov, P. Abdullayev

Abstract:

In this paper, a multiple criteria decision making analysis (MCDMA) technique is presented for ranking and selecting among a set of alternatives (fighter aircraft) evaluated against a set of decision factors. In fighter aircraft design, conflicting decision criteria, disciplines, and technologies are always involved in the design process, and MCDMA techniques can help to deal with such situations effectively and make wise design decisions. MCDMA theory is a systematic mathematical approach for dealing with decision problems that contain uncertainties. The feasibility and contributions of applying MCDMA to fighter aircraft selection are explored. In this study, an integrated framework is established using the entropy objective weighting method, and an improved integrated MCDMA method is utilized to aggregate the multiple decision criteria into one composite figure of merit, which serves as an objective function in the decision process. It is thereby demonstrated that a suitable MCDMA method provides an effective objective function for decision analysis. Considering that the inherent uncertainties and the weighting factors have crucial impacts on the fighter aircraft evaluation, seven fighter aircraft models for the multiple design criteria in terms of the weighting factors are constructed. The proposed MCDMA model is based on an integrated entropy index procedure and additive MCDMA theory, and its applicability to the fighter aircraft selection problem is considered; it can provide an efficient decision analysis approach for uncertainty assessment of the decision problem. Finally, the fighter aircraft alternatives are ranked based on their final evaluation scores, and a sensitivity analysis is conducted.
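
The entropy objective weighting and additive aggregation steps can be sketched concretely. The decision matrix below (alternatives by benefit criteria) is synthetic, and the max-normalization used before aggregation is one common choice rather than the paper's exact procedure.

```python
import numpy as np

# Hypothetical decision matrix: rows = fighter aircraft alternatives,
# columns = benefit criteria (e.g. speed, range, payload, agility). Synthetic data.
D = np.array([[2.0, 1500, 8.0, 7.5],
              [2.2, 1200, 6.5, 8.0],
              [1.8, 1800, 9.0, 6.5],
              [2.1, 1400, 7.0, 7.0]], dtype=float)

# Entropy objective weighting: normalize each criterion, compute its entropy,
# and give more weight to criteria with more discriminating power (lower entropy).
P = D / D.sum(axis=0)
m = len(D)
E = -(P * np.log(P)).sum(axis=0) / np.log(m)
w = (1 - E) / (1 - E).sum()

# Additive aggregation into one composite figure of merit, then rank.
scores = (D / D.max(axis=0)) @ w
order = np.argsort(scores)[::-1]
print("weights:", np.round(w, 3))
print("ranking (best first):", order, "scores:", np.round(scores[order], 3))
```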

Keywords: Fighter Aircraft, Fighter Aircraft Selection, Multiple Criteria Decision Making, Multiple Criteria Decision Making Analysis, MCDMA

PDF Downloads: 575
40 Reinforcement of Calcium Phosphate Cement with E-Glass Fibre

Authors: Sudip Dasgupta, Debosmita Pani, Kanchan Maji

Abstract:

Calcium Phosphate Cement (CPC), due to its high bioactivity and optimum bioresorbability, shows excellent bone regeneration capability. However, it has limited applications as a bone implant because its macro-porous microstructure causes poor mechanical strength. The reinforcement of apatitic CPCs with a biocompatible fibre glass phase is an attractive area of research for improving their mechanical strength. Here, we study the setting behaviour of Si-doped and un-doped alpha tricalcium phosphate (α-TCP) based CPC and its reinforcement with the addition of E-glass fibre. α-TCP powders were prepared by solid-state sintering of CaCO3 and CaHPO4, and Tetra Ethyl Ortho Silicate (TEOS) was used as the silicon source to synthesize Si-doped α-TCP powders. Both the initial and final setting times of the developed cement were delayed by Si addition. Crystalline phases of HA (JCPDS 9-432), α-TCP (JCPDS 29-359) and β-TCP (JCPDS 9-169) were detected in the X-ray diffraction (XRD) pattern after immersion of the CPC in simulated body fluid (SBF) for 0 hours to 10 days. As Si incorporation in the crystal lattice stabilized the TCP phase, Si-doped CPC showed a slightly slower rate of conversion into the HA phase than un-doped CPC. The SEM image of the microstructure of the hardened CPC showed a lower grain size of HA in un-doped CPC because of premature setting and faster hydrolysis of un-doped CPC in SBF compared with Si-doped CPC. Premature setting generated micro- and macro-porosity in the un-doped CPC structure, which resulted in lower mechanical strength than that of Si-doped CPC. It was found that the addition of 10 wt% E-glass fibre into Si-doped α-TCP increased the average DTS of the CPC from 8 MPa to 15 MPa, as the fibres could resist crack propagation by deflecting the crack tip. Our study shows that biocompatible E-glass fibre in optimum proportion in a CPC matrix can enhance the mechanical strength of CPC without affecting its biocompatibility.

Keywords: Calcium phosphate cement, biocompatibility, e-glass fibre, diametral tensile strength.

PDF Downloads: 2169