Search results for: average symbol error rate

1982 Per Flow Packet Scheduling Scheme to Improve the End-to-End Fairness in Mobile Ad Hoc Wireless Network

Authors: K. Sasikala, R. S. D Wahidabanu

Abstract:

Various fairness models and criteria proposed by academia and industry for wired networks can be applied to ad hoc wireless networks. Achieving end-to-end fairness in an ad hoc wireless network is a challenging task compared to wired networks and has not been addressed effectively. Most of the traffic in an ad hoc network consists of transport layer flows, so the fairness of these flows has attracted the interest of researchers. Factors such as the MAC protocol, routing protocol, route length, buffer size, active queue management algorithm and congestion control algorithm affect the fairness of transport layer flows. In this paper, we consider the rate of data transmission, queue management and packet scheduling. The ad hoc network is dynamic in nature: the transmission of control packets, the multihop forwarding of packets, changes in source and destination nodes and changes in the routing path all influence the throughput and fairness among concurrent flows. In addition, the interaction between the protocols in the data link and transport layers also plays a role in determining the rate of data transmission. We maintain a queue for each flow, and the delay information of each flow is maintained accordingly. The pre-processing of a flow is done up to the network layer only: the source and destination address information is used to separate flows, and transport layer information is not used. This minimizes the delay in the network. Each flow is attached to a timer that is updated dynamically. A Finite State Machine (FSM) is proposed for the queue and transmission control mechanism. The performance of the proposed approach is evaluated in the ns-2 simulation environment, using throughput and fairness under mobility for different flows as performance metrics. We have compared the performance of the proposed approach with ATP and MC-MLAS, and the results are encouraging.

Keywords: ATP, End-to-End fairness, FSM, MAC, QoS.
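To make the per-flow queueing idea above concrete, the sketch below (Python, illustrative only) keeps one queue and one dynamically updated timer per flow, identifies flows by network-layer source and destination addresses only, and serves the longest-waiting flow first. The state names and the longest-wait scheduling rule are assumptions for illustration; the paper's actual FSM and transmission control logic are not reproduced here.

import time
from collections import deque

class FlowQueue:
    """Per-flow queue keyed only by network-layer addresses, with a timer
    refreshed on every enqueue. The state names are illustrative, not the
    paper's FSM."""
    def __init__(self):
        self.packets = deque()
        self.timer = time.monotonic()   # updated dynamically per flow
        self.state = "IDLE"             # IDLE -> QUEUED -> TRANSMITTING

    def enqueue(self, packet):
        self.packets.append(packet)
        self.timer = time.monotonic()
        self.state = "QUEUED"

class PerFlowScheduler:
    def __init__(self):
        self.flows = {}                 # (src, dst) -> FlowQueue

    def enqueue(self, src, dst, packet):
        # Flows are separated using network-layer addresses only;
        # transport-layer headers are never inspected.
        self.flows.setdefault((src, dst), FlowQueue()).enqueue(packet)

    def dequeue(self):
        # Serve the non-empty flow that has waited longest (oldest timer),
        # one simple way to equalise delay across concurrent flows.
        candidates = [(fid, fq) for fid, fq in self.flows.items() if fq.packets]
        if not candidates:
            return None
        fid, fq = min(candidates, key=lambda kv: kv[1].timer)
        fq.state = "TRANSMITTING"
        packet = fq.packets.popleft()
        if not fq.packets:
            fq.state = "IDLE"
        return fid, packet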

1981 Effect of Injection Moulding Process Parameter on Tensile Strength Using Taguchi Method

Authors: Gurjeet Singh, M. K. Pradhan, Ajay Verma

Abstract:

The plastic industry plays a very important role in the economy of any country and generally accounts for a leading share of it. Since metals and their alloys are only scarcely available on the earth, producing plastic products and components, which find application in many industrial as well as household consumer products, is beneficial. About 50% of plastic products are manufactured by the injection moulding process. To produce better quality products, the quality characteristics and performance of the product must be controlled. The process parameters play a significant role in the production of plastic, hence the control of process parameters is essential. In this paper, the effect of parameter selection on the injection moulding process is described, with the aim of defining suitable parameters for producing the plastic product. Selecting process parameters by trial and error is neither desirable nor acceptable, as it tends to increase cost and time. Hence, optimization of the processing parameters of the injection moulding process is essential. The experiments were designed with Taguchi's orthogonal array to achieve the result with the least number of experiments. The plastic material studied is polypropylene. Tensile strength tests of the material produced by the injection moulding machine were carried out on a universal testing machine. Using the Taguchi technique with the help of MiniTab-14 software, the best values of injection pressure, melt temperature, packing pressure and packing time were obtained. We found that the process parameter packing pressure contributes most to the production of a plastic product with good tensile strength.

Keywords: Injection moulding, tensile strength, Taguchi method, poly-propylene.
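As a rough illustration of the Taguchi analysis described above, the following Python sketch computes the larger-the-better signal-to-noise ratio for each trial of a standard L9 orthogonal array and ranks the four process parameters by the spread of their mean S/N across levels. The trial strengths and the use of an L9 array are assumptions; the paper's actual design and measured values are not reproduced.

import numpy as np

def sn_larger_is_better(y):
    # Taguchi S/N ratio for a larger-the-better response (tensile strength);
    # y holds the replicate measurements of one trial.
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical L9 orthogonal-array trials: levels (1-3) for injection pressure,
# melt temperature, packing pressure and packing time, plus tensile strength (MPa).
levels = np.array([
    [1,1,1,1],[1,2,2,2],[1,3,3,3],
    [2,1,2,3],[2,2,3,1],[2,3,1,2],
    [3,1,3,2],[3,2,1,3],[3,3,2,1]])
strength = np.array([30.1, 32.4, 33.0, 31.2, 34.1, 32.8, 31.9, 33.5, 34.6])

sn = np.array([sn_larger_is_better([s]) for s in strength])

# Main effect of each factor = mean S/N at each level; the factor with the
# largest spread between levels contributes most to the response.
for f, name in enumerate(["injection pressure", "melt temperature",
                          "packing pressure", "packing time"]):
    means = [sn[levels[:, f] == lv].mean() for lv in (1, 2, 3)]
    print(name, "delta =", round(max(means) - min(means), 3))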

1980 An Efficient Adaptive Thresholding Technique for Wavelet Based Image Denoising

Authors: D. Gnanadurai, V. Sadasivam

Abstract:

This framework describes a computationally more efficient and adaptive threshold estimation method for image denoising in the wavelet domain, based on Generalized Gaussian Distribution (GGD) modeling of subband coefficients. In the proposed method, the threshold estimation is carried out by analysing statistical parameters of the wavelet subband coefficients such as the standard deviation, arithmetic mean and geometric mean. The noisy image is first decomposed into many levels to obtain different frequency bands. Then a soft thresholding method is used to remove the noisy coefficients, with the optimum threshold value fixed by the proposed method. Experimental results on several test images show that this method yields significantly superior image quality and a better Peak Signal to Noise Ratio (PSNR). To prove the efficiency of this method in image denoising, we compare it with various denoising methods such as the Wiener filter, average filter, VisuShrink and BayesShrink.

Keywords: Wavelet transform, Gaussian noise, image denoising, filter banks, thresholding.
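The sketch below shows, with PyWavelets and NumPy, the general soft-thresholding pipeline the abstract builds on: decompose, estimate the noise level from the finest diagonal subband, threshold each detail subband adaptively, and reconstruct. The BayesShrink-style threshold sigma^2/sigma_x is used here as a stand-in; the paper's own threshold rule based on the standard deviation, arithmetic mean and geometric mean of the coefficients is not reproduced.

import numpy as np
import pywt  # PyWavelets

def denoise(image, wavelet="db8", levels=4):
    # Multilevel 2-D decomposition of the noisy image.
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    # Robust noise estimate from the finest diagonal subband.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    out = [coeffs[0]]
    for (cH, cV, cD) in coeffs[1:]:
        bands = []
        for band in (cH, cV, cD):
            # sigma_x estimates the std of the noise-free coefficients;
            # the resulting threshold adapts to each subband.
            sigma_x = np.sqrt(max(band.var() - sigma**2, 1e-12))
            t = sigma**2 / sigma_x
            bands.append(pywt.threshold(band, t, mode="soft"))
        out.append(tuple(bands))
    return pywt.waverec2(out, wavelet)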

1979 CFD Analysis of Two Phase Flow in a Horizontal Pipe – Prediction of Pressure Drop

Authors: P. Bhramara, V. D. Rao, K. V. Sharma, T. K. K. Reddy

Abstract:

In the design of condensers, the prediction of pressure drop is as important as the prediction of the heat transfer coefficient. Modeling of two-phase flow, particularly liquid–vapor flow under diabatic conditions inside a horizontal tube, is difficult with the two-phase models available in FLUENT due to the continuously changing flow patterns. In the present analysis, CFD analysis of two-phase flow of refrigerants inside a horizontal tube of 0.0085 m inner diameter and 1.2 m length is carried out using the homogeneous model under adiabatic conditions. The refrigerants considered are R22, R134a and R407C. The analysis is performed at different saturation temperatures and at different flow rates to evaluate the local frictional pressure drop. Using the homogeneous model, average properties are obtained for each refrigerant, which is then treated as a single-phase pseudo fluid. The pressure drop data so obtained are compared with the separated flow models available in the literature.

Keywords: Adiabatic conditions, CFD analysis, homogeneous model, liquid–vapor flow.
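For readers unfamiliar with the homogeneous model mentioned above, the following sketch computes a frictional pressure gradient by treating the liquid–vapor mixture as a single-phase pseudo fluid with quality-averaged density and viscosity. The McAdams viscosity rule, the Blasius friction factor and the sample R134a property values are assumptions of this illustration, not values taken from the paper.

def homogeneous_dpdz(G, x, rho_l, rho_g, mu_l, mu_g, D=0.0085):
    """Frictional pressure gradient (Pa/m) of a two-phase flow treated as a
    single-phase pseudo fluid. G is the mass flux (kg/m^2 s), x the vapour
    quality, D the tube inner diameter quoted in the abstract."""
    rho_h = 1.0 / (x / rho_g + (1.0 - x) / rho_l)   # homogeneous density
    mu_h = 1.0 / (x / mu_g + (1.0 - x) / mu_l)      # McAdams mixture viscosity
    Re = G * D / mu_h
    f = 0.079 * Re**-0.25                            # Blasius (Fanning) friction factor
    return 2.0 * f * G**2 / (D * rho_h)

# Illustrative (not catalogue-accurate) R134a properties near 40 degC saturation.
print(homogeneous_dpdz(G=300.0, x=0.5,
                       rho_l=1147.0, rho_g=50.0,
                       mu_l=1.6e-4, mu_g=1.2e-5))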

1978 Lattice Boltzmann Simulation of Natural Convection Heat Transfer in an Inclined Open Ended Cavity

Authors: M. Jafari, A. Naysari, K. Bodaghi

Abstract:

In the present study, the lattice Boltzmann Method (LBM) is applied to simulate natural convection in an inclined open-ended cavity. The horizontal walls of the cavity are insulated, while the west wall is maintained at a uniform temperature higher than the ambient. The Prandtl number is fixed at 0.71 (air), while the Rayleigh number and the aspect ratio of the cavity are varied in the ranges of 10^3 to 10^4 and 1-4, respectively. The numerical code is validated against previously published results for open-ended cavities, and then results for an inclined open-ended cavity at various rotation angles are presented. The results show that, as the aspect ratio increases, the average Nusselt number on the hot wall decreases for all rotation angles. When the direction of gravitational acceleration is opposite to the standard gravity direction, convection heat transfer behaves in the same manner as conduction.

Keywords: Lattice Boltzmann Method, Open Ended Cavity, Natural Convection, Inclined Cavity.

1977 Structural and Electronic Characterization of Supported Ni and Au Catalysts Used in Environment Protection Determined by XRD, XAS and XPS Methods

Authors: N. Aldea, V. Rednic, F. Matei, Tiandou Hu, M. Neumann

Abstract:

The nickel and gold nanoclusters as supported catalysts were analyzed by XAS, XRD and XPS in order to determine their local, global and electronic structure. The present study has pointed out a strong deformation of the local structure of the metal, due to its interaction with oxide supports. The average particle size, the mean squares of the microstrain, the particle size distribution and microstrain functions of the supported Ni and Au catalysts were determined by XRD method using Generalized Fermi Function for the X-ray line profiles approximation. Based on EXAFS analysis we consider that the local structure of the investigated systems is strongly distorted concerning the atomic number pairs. Metal-support interaction is confirmed by the shape changes of the probability densities of electron transitions: Ni K edge (1s → continuum and 2p), Au LIII-edge (2p3/2 → continuum, 6s, 6d5/2 and 6d3/2). XPS investigations confirm the metal-support interaction at their interface.

Keywords: local and global structure, metal-support interaction, supported metal catalysts, synchrotron radiation, X-ray absorption spectroscopy, X-ray diffraction, X-ray photoelectron spectroscopy.

1976 Curriculum Based Measurement and Precision Teaching in Writing Empowerment Enhancement: Results from an Italian Learning Center

Authors: I. Pelizzoni, C. Cavallini, I. Salvaderi, F. Cavallini

Abstract:

We present the improvement in writing skills obtained by 94 participants (aged between six and 10 years) with special educational needs through a writing enhancement program based on fluency principles. The study was planned and conducted with a single-subject experimental design for each of the participants, in order to confirm the results in the literature. These results were obtained using precision teaching (PT) methodology to increase the number of written graphemes per minute, measured in the pre- and post-test by curriculum based measurement (CBM). Results indicated an increase in the number of written graphemes for all participants. The average overall duration of the intervention was 144 minutes over five months of treatment. These considerations have been analyzed taking into account the complexity of implementing measurement systems in real operational contexts (an Italian learning center) and important aspects of the replicability and cost-effectiveness of such interventions.

Keywords: Precision teaching, writing skills, CBM, Italian Learning Center.

1975 Dynamic Analysis of Composite Doubly Curved Panels with Variable Thickness

Authors: I. Algul, G. Akgun, H. Kurtaran

Abstract:

Dynamic analysis of composite doubly curved panels with variable thickness subjected to different pulse types, using the Generalized Differential Quadrature (GDQ) method, is presented in this study. Panels with variable thickness are used in the aerospace and marine industries. Giving variable thickness to panels allows the designer to obtain optimum structural efficiency. For this reason, estimating the response of variable-thickness panels is very important for designing more reliable structures under dynamic loads. Dynamic equations for composite panels with variable thickness are obtained using the virtual work principle. Partial derivatives in the equation of motion are expressed with GDQ, and the Newmark average acceleration scheme is used for temporal discretization. Several examples are used to highlight the effectiveness of the proposed method. Results are compared with the finite element method. The effects of taper ratios, boundary conditions and loading type on the response of the composite panel are investigated.

Keywords: Generalized differential quadrature method, doubly curved panels, laminated composite materials, small displacement.
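The Newmark average acceleration scheme mentioned above can be summarized by the short time-stepping routine below for a linear system M a + C v + K d = F(t). This is a generic sketch of the integrator (gamma = 1/2, beta = 1/4), not the paper's GDQ-discretized panel equations.

import numpy as np

def newmark(M, C, K, F, d0, v0, dt, gamma=0.5, beta=0.25):
    """Newmark time integration with the average-acceleration parameters.
    F is an array of load vectors, one per time step; M, C, K are constant
    system matrices. Returns the displacement history."""
    n_steps = len(F)
    d, v = np.array(d0, float), np.array(v0, float)
    a = np.linalg.solve(M, F[0] - C @ v - K @ d)          # initial acceleration
    K_eff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    history = [d.copy()]
    for n in range(1, n_steps):
        rhs = (F[n]
               + M @ (d / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
               + C @ (gamma / (beta * dt) * d + (gamma / beta - 1.0) * v
                      + dt * (0.5 * gamma / beta - 1.0) * a))
        d_new = np.linalg.solve(K_eff, rhs)
        a_new = (d_new - d) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
        v = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        d, a = d_new, a_new
        history.append(d.copy())
    return np.array(history)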

1974 Infrastructure Change Monitoring Using Multitemporal Multispectral Satellite Images

Authors: U. Datta

Abstract:

The main objective of this study is to find a suitable approach to monitor land infrastructure growth over a period of time using multispectral satellite images. Bi-temporal change detection methods are unable to indicate continuous change occurring over a long period of time. To achieve this objective, the approach used here estimates a statistical model from a series of multispectral images acquired over a long period of time, assuming there is no considerable change during that period, and then compares it with a multispectral image obtained at a later time. The change is estimated pixel-wise. A statistical composite hypothesis technique is used for pixel-based change detection in a defined region. The generalized likelihood ratio test (GLRT) is used to detect changed pixels from the probabilistic model estimated for the corresponding pixel. The changed pixels are detected assuming that the images have been co-registered prior to estimation. To minimize error due to co-registration, the 8-neighborhood pixels around the pixel under test are also considered. Multispectral images from Sentinel-2 and Landsat-8 from 2015 to 2018 are used for this purpose. There are different challenges in this method. The first and foremost challenge is to obtain a large enough number of datasets for multivariate distribution modelling, since many images are discarded due to cloud coverage. Due to imperfect modelling there will be a high probability of false alarms. The overall conclusion that can be drawn from this work is that the probabilistic method described in this paper has given some promising results, which need to be pursued further.

Keywords: Co-registration, GLRT, infrastructure growth, multispectral, multitemporal, pixel-based change detection.
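A minimal version of the pixel-wise test described above is sketched below: a Gaussian model is estimated for each pixel from the change-free reference stack, pooling the 8-neighbourhood as in the abstract, and the later image is flagged where the squared Mahalanobis distance (the GLRT statistic for a mean shift with the covariance taken as estimated) exceeds a chi-square threshold. The array shapes, significance level and regularization term are illustrative assumptions.

import numpy as np
from scipy.stats import chi2

def change_map(reference, test, alpha=1e-3):
    """reference: (T, B, H, W) stack of co-registered, change-free multispectral
    images; test: (B, H, W) later acquisition. Returns a boolean change map."""
    T, B, H, W = reference.shape
    changed = np.zeros((H, W), dtype=bool)
    threshold = chi2.ppf(1.0 - alpha, df=B)
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            # Pool the temporal samples of the 3x3 neighbourhood into the model.
            samples = reference[:, :, i - 1:i + 2, j - 1:j + 2]
            samples = samples.transpose(0, 2, 3, 1).reshape(-1, B)
            mu = samples.mean(axis=0)
            cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(B)
            diff = test[:, i, j] - mu
            d2 = diff @ np.linalg.solve(cov, diff)   # squared Mahalanobis distance
            changed[i, j] = d2 > threshold
    return changed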

1973 Exterior Calculus: Economic Profit Dynamics

Authors: Troy L. Story

Abstract:

A mathematical model for the dynamics of economic profit is constructed by proposing a characteristic differential one-form for these dynamics (analogous to the action in Hamiltonian dynamics). After processing this form with exterior calculus, a pair of characteristic differential equations is generated and solved for the rate of change of profit P as a function of revenue R(t) and cost C(t). By contracting the characteristic differential one-form with a vortex vector, the Lagrangian for the dynamics of economic profit is obtained.

Keywords: Differential geometry, exterior calculus, Hamiltonian geometry, mathematical economics, economic functions, and dynamics

1972 The Evaluation of Gravity Anomalies Based on Global Models by Land Gravity Data

Authors: M. Yilmaz, I. Yilmaz, M. Uysal

Abstract:

The Earth system generates different phenomena that are observable at the surface of the Earth, such as mass deformations and displacements leading to plate tectonics, earthquakes, and volcanism. The dynamic processes associated with the interior, surface, and atmosphere of the Earth affect the three pillars of geodesy: the shape of the Earth, its gravity field, and its rotation. Geodesy establishes a characteristic structure in order to define, monitor, and predict the whole Earth system. The traditional and new instruments, observables, and techniques in geodesy are related to the gravity field. Therefore, geodesy monitors the gravity field and its temporal variability in order to transform the geodetic observations made on the physical surface of the Earth into the geometrical surface on which positions are mathematically defined. In this paper, the main components of gravity field modeling, the free-air and Bouguer gravity anomalies, are calculated via recent global models (EGM2008, EIGEN6C4, and GECO) over a selected study area. The model-based gravity anomalies are compared with the corresponding terrestrial gravity data in terms of standard deviation (SD) and root mean square error (RMSE) to determine the best-fitting global model in the study area at a regional scale in Turkey. The smallest SD (13.63 mGal) and RMSE (15.71 mGal) were obtained by EGM2008 for the free-air gravity anomaly residuals. For the Bouguer gravity anomaly residuals, EIGEN6C4 provides the smallest SD (8.05 mGal) and RMSE (8.12 mGal). The results indicate that EIGEN6C4 can be a useful tool for modeling the gravity field of the Earth over the study area.

Keywords: Free-air gravity anomaly, Bouguer gravity anomaly, global model, land gravity.
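For reference, the free-air and simple Bouguer anomalies evaluated in the paper follow the standard reductions sketched below. The 1980 international gravity formula, the 0.3086 mGal/m free-air gradient, the 2.67 g/cm3 crustal density and the sample station values are textbook assumptions, not the paper's data; terrain and atmospheric corrections are omitted.

import numpy as np

def gravity_anomalies(g_obs_mgal, lat_deg, h_m, rho=2.67):
    """Free-air and simple Bouguer anomalies (mGal) from observed gravity,
    latitude and orthometric height (rho in g/cm^3)."""
    phi = np.radians(lat_deg)
    # Normal gravity from the 1980 international gravity formula (mGal).
    gamma = 978032.7 * (1.0 + 0.0053024 * np.sin(phi)**2
                        - 0.0000058 * np.sin(2.0 * phi)**2)
    free_air = g_obs_mgal - gamma + 0.3086 * h_m        # free-air reduction
    bouguer = free_air - 0.0419 * rho * h_m              # Bouguer plate term
    return free_air, bouguer

# Hypothetical station at 39 deg N latitude and 1000 m elevation.
print(gravity_anomalies(g_obs_mgal=979600.0, lat_deg=39.0, h_m=1000.0))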

1971 The Induced Generalized Hybrid Averaging Operator and its Application in Financial Decision Making

Authors: José M. Merigó, Montserrat Casanovas

Abstract:

We present the induced generalized hybrid averaging (IGHA) operator. It is a new aggregation operator that generalizes the hybrid averaging (HA) by using generalized means and order inducing variables. With this formulation, we get a wide range of mean operators such as the induced HA (IHA), the induced hybrid quadratic averaging (IHQA), the HA, etc. The ordered weighted averaging (OWA) operator and the weighted average (WA) are included as special cases of the HA operator. Therefore, with this generalization we can obtain a wide range of aggregation operators such as the induced generalized OWA (IGOWA), the generalized OWA (GOWA), etc. We further generalize the IGHA operator by using quasi-arithmetic means. Then, we get the Quasi-IHA operator. Finally, we also develop an illustrative example of the new approach in a financial decision making problem. The main advantage of the IGHA is that it gives a more complete view of the decision problem to the decision maker because it considers a wide range of situations depending on the operator used.

Keywords: Decision making, Aggregation operators, OWA operator, Generalized means, Selection of investments.
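A compact reading of the IGHA operator described above is sketched below: the arguments are first scaled by their weighted-average weights, reordered by the order-inducing variables, and then aggregated with a generalized (power) mean using the OWA weights. The example weights and payoffs are invented for illustration, and the exact formulation should be checked against the paper.

import numpy as np

def igha(values, inducing, wa_weights, owa_weights, lam=1.0):
    """Induced generalized hybrid averaging sketch. lam=1 recovers the induced
    hybrid average (IHA); lam=2 gives the quadratic (IHQA) case."""
    a = np.asarray(values, dtype=float)
    u = np.asarray(inducing, dtype=float)
    omega = np.asarray(wa_weights, dtype=float)
    w = np.asarray(owa_weights, dtype=float)
    n = len(a)
    b = n * omega * a                      # importance-weighted arguments
    b_ordered = b[np.argsort(-u)]          # reorder by the inducing variables
    return (np.sum(w * b_ordered**lam)) ** (1.0 / lam)

# Example: four investment payoffs, WA weights, OWA weights that emphasise the
# middle of the ordering induced by expert-assigned priorities.
print(igha(values=[60, 40, 70, 50], inducing=[3, 1, 4, 2],
           wa_weights=[0.3, 0.2, 0.3, 0.2], owa_weights=[0.1, 0.4, 0.4, 0.1],
           lam=2.0))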

1970 Green Function and Eshelby Tensor Based on Mindlin’s 2nd Gradient Model: An Explicit Study of Spherical Inclusion Case

Authors: A. Selmi, A. Bisharat

Abstract:

Using the Fourier transform and based on Mindlin's 2nd gradient model, which involves two length scale parameters, the Green's function, the Eshelby tensor, and the Eshelby-like tensor for a spherical inclusion are derived. It is proved that the Eshelby tensor consists of two parts: the classical Eshelby tensor and a gradient part including the length scale parameters, which enables the interpretation of the size effect. When the strain gradient is not taken into account, the obtained Green's function and Eshelby tensor reduce to their analogues based on classical elasticity. The Eshelby tensor in and outside the inclusion, the volume average of the gradient part and the Eshelby-like tensor are explicitly obtained. Unlike the classical Eshelby tensor, the results show that the components of the new Eshelby tensor vary with the position and the inclusion dimensions. It is demonstrated that the contribution of the gradient part should not be neglected.

Keywords: Eshelby tensor, Eshelby-like tensor, Green’s function, Mindlin’s 2nd gradient model, Spherical inclusion.

1969 Numerical Simulation of the Flowing of Ice Slurry in Seawater Pipe of Polar Ships

Authors: Li Xu, Huanbao Jiang, Zhenfei Huang, Lailai Zhang

Abstract:

In recent years, with global warming, the sea-ice extent of the North Arctic has undergone an evident decrease, and the Arctic channel has attracted the attention of the shipping industry. Ice crystals present in the seawater of the Arctic channel, which enter the seawater system of the ship together with the seawater, have been found to block the seawater pipe. In serious cases this can lead to cooler paralysis, auxiliary machine failure and even paralysis of the ship's power system. In order to reduce the effect of high temperature on auxiliary equipment, the seawater system uses external ice-water to participate in the cooling cycle and achieve a flowing state, so the distribution of ice crystals in the seawater pipe must be determined. As the ice slurry system is a solid-liquid two-phase system, the flow process of the ice-water mixture is very complex and diverse. In this paper, the flow of ice slurry in the seawater pipe is simulated with fluid dynamics simulation software based on the k-ε turbulence model. As the ice packing fraction is a key factor affecting the distribution of ice crystals, its influence on the flow of the ice slurry is analyzed. The simulation results show that when the ice packing fraction is relatively large, the distribution of ice crystals is uneven during the flow of the seawater, which has the disadvantage of increasing the possibility of blocking. This provides a scientific forecasting method for the formation of ice blockages in the seawater piping system and has important significance for the operating reliability of polar ships in the future.

Keywords: Ice slurry, seawater pipe, ice packing fraction, numerical simulation.

1968 Estimating Enzyme Kinetic Parameters from Apparent KMs and Vmaxs

Authors: Simon Brown, Noorzaid Muhamad, David C Simcock

Abstract:

The kinetic properties of enzymes are often reported using the apparent KM and Vmax appropriate to the standard Michaelis-Menten enzyme. However, this model is inappropriate to enzymes that have more than one substrate or where the rate expression does not apply for other reasons. Consequently, it is desirable to have a means of estimating the appropriate kinetic parameters from the apparent values of KM and Vmax reported for each substrate. We provide a means of estimating the range within which the parameters should lie and apply the method to data for glutamate dehydrogenase from the nematode parasite of sheep Teladorsagia circumcincta.

Keywords: enzyme kinetics, glutamate dehydrogenase, interval analysis, parameter estimation.
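The apparent KM and Vmax that the paper starts from are typically obtained by fitting the standard Michaelis-Menten expression to initial-rate data at a fixed co-substrate concentration, as in the short sketch below (the rate data are invented). The paper's subsequent interval analysis, which bounds the true multi-substrate parameters from such apparent values, is not reproduced here.

import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(S, Vmax, Km):
    # v = Vmax * S / (Km + S): the single-substrate rate law whose parameters
    # become "apparent" when the co-substrate is held fixed.
    return Vmax * S / (Km + S)

# Hypothetical initial-rate data at one fixed co-substrate concentration.
S = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0])     # substrate, mM
v = np.array([0.9, 1.6, 2.6, 4.2, 5.3, 6.0, 6.5])      # rate, umol/min

popt, pcov = curve_fit(michaelis_menten, S, v, p0=[7.0, 0.5])
perr = np.sqrt(np.diag(pcov))
print("apparent Vmax = %.2f +/- %.2f, apparent KM = %.2f +/- %.2f"
      % (popt[0], perr[0], popt[1], perr[1]))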

1967 Monitoring of Belt-Drive Defects Using the Vibration Signals and Simulation Models

Authors: A. Nabhan, Mohamed R. El-Sharkawy, A. Rashed

Abstract:

The main aim of this paper is to detect belt drive system faults such as missing cogs, misalignment and belt wear using vibration analysis techniques. Experimentally, a belt drive test rig is equipped to measure vibration signals under different operating conditions. A 3D finite element model of the belt drive system is created and its vibration response analyzed using the commercial finite element software ABAQUS/CAE. The root mean square (RMS) and the Crest Factor serve as indicators of the average amplitude of the envelope analysis signals. The vibration signal patterns obtained from the simulation model and the experimental data have the same characteristics. It can be concluded that the RMS is more effective in detecting the defects in the acceleration response, while the Crest Factor responds better to the displacement and velocity vibration signals. It can also be noticed that the model has difficulty in completing the solution when the misalignment angle is higher than 1 degree.

Keywords: Simulation model, misalignment, cogs missing and vibration analysis.
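The two condition indicators named above are simple to compute; the sketch below evaluates them on a synthetic acceleration signal in which one impact per shaft revolution stands in for a missing cog. The sampling rate, shaft speed and impact amplitude are arbitrary illustrative values.

import numpy as np

def rms(signal):
    return np.sqrt(np.mean(np.square(signal)))

def crest_factor(signal):
    # Ratio of peak amplitude to RMS: spiky faults (an impact once per
    # revolution) raise it, while broadband changes show up mainly in the RMS.
    return np.max(np.abs(signal)) / rms(signal)

# Illustrative acceleration signal: a sinusoid plus one impact per revolution.
fs, f_rev = 5000, 25                      # sample rate (Hz), shaft speed (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
accel = np.sin(2 * np.pi * f_rev * t) + 0.1 * np.random.randn(t.size)
accel[::fs // f_rev] += 5.0               # simulated impacts from a missing cog
print("RMS = %.3f, crest factor = %.2f" % (rms(accel), crest_factor(accel)))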

1966 Electricity Price Forecasting: A Comparative Analysis with Shallow-ANN and DNN

Authors: Fazıl Gökgöz, Fahrettin Filiz

Abstract:

Electricity prices have sophisticated features such as high volatility, nonlinearity and high frequency that make forecasting quite difficult. The electricity price has a volatile but non-random character, so it is possible to identify patterns based on historical data. Intelligent decision-making requires accurate price forecasting for market traders, retailers, and generation companies. So far, many shallow-ANN (artificial neural network) models have been published in the literature and have shown adequate forecasting results. In recent years, neural networks with many hidden layers, referred to as DNNs (deep neural networks), have been used in the machine learning community. The goal of this study is to investigate the electricity price forecasting performance of shallow-ANN and DNN models for the Turkish day-ahead electricity market. The forecasting accuracy of the models has been evaluated with publicly available data from the Turkish day-ahead electricity market. Both the shallow-ANN and the DNN approach give successful results in forecasting problems. Historical load, price and weather temperature data are used as the input variables for the models. The data set includes power consumption measurements gathered between January 2016 and December 2017 with one-hour resolution. In this regard, forecasting studies have been carried out comparatively with shallow-ANN and DNN models for the Turkish electricity market in the related time period. The main contribution of this study is the investigation of different shallow-ANN and DNN models in the field of electricity price forecasting. All models are compared regarding their MAE (Mean Absolute Error) and MSE (Mean Squared Error) results. DNN models give better forecasting performance compared to shallow-ANNs. The best five MAE results for the DNN models are 0.346, 0.372, 0.392, 0.402 and 0.409.

Keywords: Deep learning, artificial neural networks, energy price forecasting, Turkey.
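A minimal version of the shallow-versus-deep comparison described above is sketched below with scikit-learn multilayer perceptrons evaluated by MAE and MSE. The synthetic data, the chosen layer sizes and the train/test split are assumptions for illustration; the study's Turkish day-ahead market data and actual network architectures are not reproduced.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Hypothetical hourly rows: [load, previous-day price, temperature] -> price.
rng = np.random.default_rng(0)
X = rng.normal(size=(17520, 3))                      # roughly two years of hours
y = 2.0 * X[:, 0] + np.sin(3 * X[:, 1]) + 0.5 * X[:, 2] + rng.normal(0, 0.1, 17520)
X_train, X_test, y_train, y_test = X[:13000], X[13000:], y[:13000], y[13000:]

models = {
    "shallow-ANN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
    "DNN": MLPRegressor(hidden_layer_sizes=(64, 64, 32), max_iter=500, random_state=0),
}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)
    pipe.fit(X_train, y_train)
    pred = pipe.predict(X_test)
    print(name, "MAE=%.3f" % mean_absolute_error(y_test, pred),
          "MSE=%.3f" % mean_squared_error(y_test, pred))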

1965 Hybrid Adaptive Modeling to Enhance Robustness of Real-Time Optimization

Authors: Hussain Syed Asad, Richard Kwok Kit Yuen, Gongsheng Huang

Abstract:

Real-time optimization has been considered an effective approach for improving the energy-efficient operation of heating, ventilation, and air-conditioning (HVAC) systems. In model-based real-time optimization, model mismatches cannot be avoided. When model mismatches are significant, the performance of the real-time optimization will be impaired and hence the expected energy saving will be reduced. In this paper, the model mismatches of a chiller plant under real-time optimization are considered. In the real-time optimization of the chiller plant, a simplified semi-physical or grey-box model of the chiller is always used, which has to be identified using available operation data. To overcome the model mismatches associated with the chiller model, a hybrid Genetic Algorithms (HGAs) method is used for online real-time training of the chiller model. HGAs combine the Genetic Algorithms (GAs) method (for global search) with a traditional optimization method (faster and more efficient for local search) to avoid the conventional trial-and-error process of GAs. The identification of the model parameters is cast as an optimization problem, and the objective function is the least-square error between the output of the model and the actual output of the chiller plant. A case study is used to illustrate the implementation of the proposed method. It has been shown that the proposed approach is able to provide reliability in decision making, enhance the robustness of the real-time optimization strategy and improve the energy performance.

Keywords: Energy performance, hybrid adaptive modeling, hybrid genetic algorithms, real-time optimization, heating, ventilation, and air-conditioning.
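The hybrid global-plus-local identification idea described above can be sketched as follows: an evolutionary global search (SciPy's differential evolution, standing in for the GA stage) minimizes the least-square error between a grey-box chiller model and measured data, and its result seeds a fast gradient-based local refinement. The quadratic power-versus-load model, the bounds and the operating data are placeholders, not the paper's chiller model.

import numpy as np
from scipy.optimize import differential_evolution, minimize

# Hypothetical operating data: cooling load Q (kW) and measured chiller power (kW).
Q = np.array([150., 200., 250., 300., 350., 400., 450.])
P_meas = np.array([38., 46., 55., 66., 79., 94., 111.])

def chiller_power(theta, Q):
    # Placeholder grey-box form P = a0 + a1*Q + a2*Q^2.
    a0, a1, a2 = theta
    return a0 + a1 * Q + a2 * Q**2

def sse(theta):
    # Least-square error between model output and plant output,
    # the identification objective named in the abstract.
    return np.sum((chiller_power(theta, Q) - P_meas) ** 2)

bounds = [(0.0, 50.0), (0.0, 0.5), (0.0, 1e-3)]
# Global evolutionary search (stand-in for the GA stage) ...
global_fit = differential_evolution(sse, bounds, seed=0, polish=False)
# ... refined by a gradient-based local search, mirroring the hybrid idea.
local_fit = minimize(sse, global_fit.x, method="L-BFGS-B", bounds=bounds)
print(local_fit.x, local_fit.fun)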

1964 Motivation Factors to Influence the Decision to Choose Thai Fabric

Authors: Pisit Potjanajaruwit

Abstract:

The purpose of this research was to study the motivation factors that influence the decision to choose Thai fabric. A multiple-stage sample was utilized to collect 400 samples from working women with diverse occupations all over Thailand. This research was a quantitative analysis, and a questionnaire was used as the tool to collect data. Descriptive statistics used in this research included percentage, average, and standard deviation, and inferential statistics included hypothesis testing with one-way ANOVA. The research findings revealed that demographic factors and social factors had an influence on the positive attitude towards wearing Thai fabric (F = 5.377, p value < 0.05). The respondents aged over 41 years had a more positive attitude towards wearing Thai fabric than the other groups. Moreover, the findings revealed that age influenced the positive attitude towards wearing Thai fabric (F = 3.918, p value < 0.05). The respondents aged over 41 years also believed more strongly than the other groups that wearing Thai fabric to work and social gatherings is socially acceptable.

Keywords: Decision, Motivation, Influence, Thai Fabric.
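The one-way ANOVA used above can be reproduced in a few lines; the sketch below tests whether hypothetical attitude scores differ across three age brackets. The scores are invented solely to show the mechanics of the test, not the survey's data.

import numpy as np
from scipy.stats import f_oneway

# Hypothetical 5-point "positive attitude towards Thai fabric" scores by age group.
under_30  = np.array([3, 4, 3, 4, 3, 4, 3, 3, 4, 3])
age_31_40 = np.array([4, 3, 4, 4, 3, 4, 4, 3, 4, 4])
over_41   = np.array([5, 4, 5, 4, 5, 5, 4, 5, 4, 5])

F, p = f_oneway(under_30, age_31_40, over_41)
print("F = %.3f, p = %.4f" % (F, p))   # p < 0.05 -> the age groups differ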

1963 Comparison of Different Methods to Produce Fuzzy Tolerance Relations for Rainfall Data Classification in the Region of Central Greece

Authors: N. Samarinas, C. Evangelides, C. Vrekos

Abstract:

The aim of this paper is the comparison of three different methods for producing fuzzy tolerance relations for rainfall data classification; specifically, the correlation coefficient, cosine amplitude and max-min methods. The data were obtained from seven rainfall stations in the region of central Greece and refer to 20-year time series of average monthly rainfall height. The three methods were used to express these data as a fuzzy relation. Each fuzzy tolerance relation is then transformed into an equivalence relation by max-min composition. From the equivalence relation, the rainfall stations were categorized and classified according to the degree of confidence. The classification shows the similarities among the rainfall stations. Stations with high similarity can be used interchangeably in water resource management scenarios or to augment data from one to another. Due to the complexity of the calculations, it is important to find out which of the methods is computationally simpler and needs fewer compositions in order to give reliable results.

Keywords: Classification, fuzzy logic, tolerance relations, rainfall data.
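As a small illustration of the workflow described above, the sketch below builds a fuzzy tolerance relation with the cosine amplitude method, converts it to an equivalence relation by repeated max-min composition, and groups stations at a chosen confidence level. The four-station rainfall vectors are invented; the correlation-coefficient and max-min similarity methods compared in the paper would simply replace the first function.

import numpy as np

def cosine_amplitude(X):
    # r_ij = |sum_k x_ik x_jk| / sqrt(sum_k x_ik^2 * sum_k x_jk^2)
    num = np.abs(X @ X.T)
    norms = np.sqrt(np.sum(X**2, axis=1))
    return num / np.outer(norms, norms)

def maxmin_closure(R):
    # Repeat max-min composition until the relation stops changing,
    # turning the tolerance relation into an equivalence relation.
    while True:
        comp = np.max(np.minimum(R[:, :, None], R[None, :, :]), axis=1)
        R_new = np.maximum(R, comp)
        if np.allclose(R_new, R):
            return R_new
        R = R_new

def classify(R, lam):
    # Group stations whose relation value exceeds the confidence level lam.
    n = len(R)
    groups, assigned = [], set()
    for i in range(n):
        if i in assigned:
            continue
        members = [j for j in range(n) if R[i, j] >= lam]
        groups.append(members)
        assigned.update(members)
    return groups

# Hypothetical monthly-average rainfall vectors (mm) for four stations.
X = np.array([[55., 48., 30., 12.], [60., 50., 33., 14.],
              [90., 70., 45., 25.], [20., 18., 10., 5.]])
R = maxmin_closure(cosine_amplitude(X))
print(classify(R, lam=0.99))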

1962 Improvement of Reaction Technology of Decalin Halogenation

Authors: Dmitriy Yu. Korulkin, Ravshan M. Nuraliev, Raissa A. Muzychkina

Abstract:

In this research paper, the main regularities of the radical bromination reaction of decalin were investigated. The effects of temperature, reaction duration, number of process repetitions, the ratio of initial components, and the type and amount of initiator on the degree of decalin bromination were studied. Optimum conditions for the synthesis of perbromodecalin by decalin bromination were specified. The technological flowchart for producing perbromodecalin and the mass balance of the process for the first and subsequent loadings of components were developed. The results of research on the antibacterial and antifungal activity of the synthesized bromoderivatives are presented.

Keywords: Decalin, optimum technology, perbromodecalin, radical bromination.

1961 Effects of Upflow Liquid Velocity on Performance of Expanded Granular Sludge Bed (EGSB) System

Authors: Seni Karnchanawong, Wachara Phajee

Abstract:

The effects of upflow liquid velocity (ULV) on the performance of an expanded granular sludge bed (EGSB) system were investigated. The EGSB reactor, made from galvanized steel pipe of 0.10 m diameter and 5 m height, was used to treat piggery wastewater after it had passed through an acidification tank. It consisted of a 39.3 l working volume in the reaction zone and a 122 l working volume in the sedimentation zone at the upper part. The reactor was seeded with anaerobically digested sludge and operated at ULVs of 4, 8, 12 and 16 m/h, consecutively, corresponding to organic loading rates of 9.6 - 13.0 kg COD/(m3.d). The average COD concentrations in the influent were 9,601 - 13,050 mg/l. The COD removal was not significantly different, i.e. 93.0% - 94.0%, except at a ULV of 12 m/h, where the SS in the influent was exceptionally high so that VSS washout occurred, leading to low COD removal. The FCOD and VFA concentrations in the effluent of all experiments were not much different, indicating the same range of treatment performance. The biogas production decreased at higher ULVs, and a ULV of 4 m/h is suggested as the design criterion for the EGSB system.

Keywords: Expanded granular sludge bed system, piggery wastewater, upflow liquid velocity

1960 Face Recognition using Radial Basis Function Network based on LDA

Authors: Byung-Joo Oh

Abstract:

This paper describes a method to improve the robustness of a face recognition system based on the combination of two compensating classifiers. The face images are preprocessed by appearance-based statistical approaches such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). The LDA features of the face image are taken as the input of the Radial Basis Function Network (RBFN). The proposed approach has been tested on the ORL database. The experimental results show that the LDA+RBFN algorithm achieved a recognition rate of 93.5%.

Keywords: Face recognition, linear discriminant analysis, radial basis function network.

1959 Design of OTA with Common Drain and Folded Cascade Used in ADC

Authors: Gu Wei, Gao Wei

Abstract:

In this report, an OTA used in a fully differential pipelined ADC is described. Using a gain-boost architecture with a differential amplifier, this OTA achieves high gain and high speed. In addition, a CMFB circuit is used, and several methods are applied to improve the performance. Then, by optimizing the layout design, the OTA's mismatch was reduced. The design uses a TSMC 0.18 um CMOS process and was simulated at both schematic and layout level in Cadence. The simulation results show that the OTA has a gain of up to 80 dB, a unity gain bandwidth of about 1.437 GHz for a 2 pF load, a slew rate of about 428 V/μs and an output swing of 0.2 V - 1.35 V; with a power supply of 1.8 V, the power consumption is 88 mW. This amplifier was used in a 10-bit 150 MHz pipelined ADC.

Keywords: OTA, common drain, CMFB, pipelined ADC

1958 Rail Degradation Modelling Using ARMAX: A Case Study Applied to Melbourne Tram System

Authors: M. Karimpour, N. Elkhoury, L. Hitihamillage, S. Moridpour, R. Hesami

Abstract:

There is a necessity among rail transportation authorities for a superior understanding of rail track degradation over time and the factors influencing rail degradation. They need an accurate technique to identify when rail tracks fail or need maintenance. In turn, this will help to increase the level of safety and comfort of the passengers and the vehicles, as well as improve the cost-effectiveness of maintenance activities. An accurate model can play a key role in the prediction of the long-term behaviour of railroad tracks and can decrease the cost of maintenance. In this research, rail track degradation is predicted using an autoregressive moving average model with exogenous input (ARMAX). An ARMAX model has been implemented on Melbourne tram data to estimate the values of tram track degradation. Gauge values and rail usage in Million Gross Tonnes (MGT) are the main parameters used in the model. The developed model can accurately predict the future status of the tram tracks.

Keywords: ARMAX, Dynamic systems, MGT, Prediction, Rail degradation.
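An ARMAX model of the kind described above can be fitted with statsmodels by passing the exogenous input (cumulative MGT) to a SARIMAX specification with no differencing, as sketched below on synthetic gauge data. The model order and the generated series are assumptions; the Melbourne tram measurements are not reproduced.

import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical inspection series: track gauge deviation (mm) with cumulative
# rail usage in MGT as the exogenous input, mimicking the variables named above.
rng = np.random.default_rng(1)
mgt = np.cumsum(rng.uniform(0.5, 1.5, size=120))            # cumulative MGT
gauge = 2.0 + 0.05 * mgt + rng.normal(0, 0.2, size=120)     # degradation proxy
series = pd.Series(gauge)

# ARMAX(p, q) = SARIMAX with order (p, 0, q) and an exogenous regressor.
model = SARIMAX(series, exog=mgt, order=(2, 0, 1))
fit = model.fit(disp=False)

# Forecast 12 future inspections, assuming roughly one extra MGT per period.
future_mgt = mgt[-1] + np.cumsum(np.ones(12))
forecast = fit.forecast(steps=12, exog=future_mgt)
print(fit.params)
print(forecast)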

1957 Transmitting a Distance Training Model to the Community in the Upper Northeastern Region

Authors: Teerawach Khamkorn, Laongtip Mathurasa, Savittree Rochanasmita Arnold, Witthaya Mekhum

Abstract:

The objective of this research is to transmit a distance training model to the community in the upper northeastern region. The sample group consists of 60 community leaders in the municipality of Kumphawapi sub-district, Kumphawapi District, Udonthani Province. The research tools are: 1) an achievement test for the community leaders' training and 2) satisfaction questionnaires for the community leaders. The statistics used in the data analysis are the mean, percentage, standard deviation, and t-test. The findings reveal that: 1) the efficiency of the distance training developed by the researcher was higher than the set criterion, based on the community leaders' average scores during and after the training; 2) both groups of participants in the training achieved higher knowledge than in their pre-training state; 3) the comparison of the achievements between the two groups showed no significant difference; and 4) the community leaders reported high to very high satisfaction.

Keywords: Distance Training, Management, Technology, Transmitting.

1956 The Effect of Repeated Reading on Student Fluency: Does Practice Always Make Perfect?

Authors: Angela R. Roundy, Philip T. Roundy

Abstract:

Fluency is a skill that, unfortunately, many students lack. This deficiency causes students to be frustrated with, and overwhelmed by, the act of reading. However, research suggests that the repeated reading method may help students to improve their fluency. This study examines the effects of repeated reading on student fluency. The study's overarching question is: What effect do increases in repeated reading have on reading fluency among middle school students from diverse backgrounds? More specifically, the authors examine whether repeated reading improves the fluency, reading speed, reading-oriented self-esteem, and confidence of students of diverse academic abilities, socio-economic statuses, and racial and ethnic backgrounds. To examine these questions the authors conducted a study using repeated reading strategies with a sample of students from an urban middle school in the southeastern United States. We found that, on average, the use of repeated reading strategies increased students' fluency, words per minute (wpm) reading scores, reading-oriented self-esteem, and confidence.

Keywords: Comprehension, Diverse Learners, Reading Fluency, Repeated Reading.

1955 Proteins Length and their Phenotypic Potential

Authors: Tom Snir, Eitan Rubin

Abstract:

Mendelian disease genes represent a collection of single points of failure for the various systems they constitute. Such genes have been shown, on average, to encode longer proteins than 'non-disease' proteins. Existing models suggest that this results from the increased likelihood of longer genes undergoing mutations. Here, we show that in saturated mutagenesis experiments performed on model organisms, where the likelihood of each gene mutating is one, a similar relationship between length and the probability of a gene being lethal was observed. We thus suggest an extended model demonstrating that the likelihood of a mutated gene producing a severe phenotype is length-dependent. Using the occurrence of conserved domains, we provide evidence that this dependency results from a correlation between protein length and the number of functions the protein performs. We propose that protein length thus serves as a proxy for protein cardinality in the different networks required for the organism's survival and well-being. We use this example to argue that the collection of Mendelian disease genes can, and should, be used to study the rules governing systems vulnerability in living organisms.

Keywords: Systems Biology, Protein Length

1954 A Linear Regression Model for Estimating Anxiety Index Using Wide Area Frontal Lobe Brain Blood Volume

Authors: Takashi Kaburagi, Masashi Takenaka, Yosuke Kurihara, Takashi Matsumoto

Abstract:

Major depressive disorder (MDD) is one of the most common mental illnesses today. It is believed to be caused by a combination of several factors, including stress. Stress can be quantitatively evaluated using the State-Trait Anxiety Inventory (STAI), one of the best indices for evaluating anxiety. Although STAI scores are widely used in applications ranging from clinical diagnosis to basic research, the scores are calculated from a self-reported questionnaire. An objective evaluation is required because the subject may intentionally change his or her answers if multiple tests are carried out. In this article, we present a modified index called the "multi-channel Laterality Index at Rest (mc-LIR)", computed by recording the brain activity from a wider area of the frontal lobe using multi-channel functional near-infrared spectroscopy (fNIRS). The presented index measures multiple positions near Fpz, as defined by the international 10-20 positioning system. Using 24 subjects, the dependence on the number of measuring points used to calculate the mc-LIR and its correlation coefficients with the STAI scores are reported. Furthermore, a simple linear regression was performed to estimate the STAI scores from the mc-LIR. The cross-validation error is also reported. The experimental results show that using multiple positions near Fpz improves the correlation coefficients and the estimation compared with using only two positions.

Keywords: Stress, functional near-infrared spectroscopy, frontal lobe, state-trait anxiety inventory score.
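A minimal version of the regression and cross-validation step described above is sketched below with scikit-learn, using leave-one-out cross-validation, which suits a 24-subject sample. The mc-LIR values and STAI scores are simulated; the real fNIRS-derived index is not reproduced.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import mean_absolute_error

# Hypothetical data for 24 subjects: mc-LIR index (x) and STAI score (y).
rng = np.random.default_rng(0)
mc_lir = rng.uniform(-1.0, 1.0, size=(24, 1))
stai = 45.0 + 12.0 * mc_lir[:, 0] + rng.normal(0, 3.0, size=24)

model = LinearRegression()
# Leave-one-out cross-validation estimates the prediction error for the
# small-sample setting described in the abstract.
pred = cross_val_predict(model, mc_lir, stai, cv=LeaveOneOut())
print("LOO cross-validation MAE: %.2f STAI points" % mean_absolute_error(stai, pred))
model.fit(mc_lir, stai)
print("slope %.2f, intercept %.2f" % (model.coef_[0], model.intercept_))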

1953 Design and Development of Graphene Oxide Modified by Chitosan Nanosheets Showing pH-Sensitive Surface as a Smart Drug Delivery System for Controlled Release of Doxorubicin

Authors: Parisa Shirzadeh

Abstract:

Traditional drug delivery, in which drugs are taken by patients in multiple doses at specified intervals, does not meet the needs of up-to-date drug delivery. In today's world, we are dealing with a huge number of recombinant peptide and protein drugs and analogues of the body's hormones, most of which are made with genetic engineering techniques. Most of these drugs are used to treat critical diseases such as cancer. Because of the limitations of the traditional method, researchers have sought ways to solve its problems to a large extent. Following these efforts, controlled drug release systems were introduced, which have many advantages. Using controlled release of the drug in the body, the concentration of the drug is kept at a certain level and, over a short time, delivery occurs at a higher rate. Graphene is a natural, biodegradable, non-toxic material, and the wide surfaces of graphene plates make graphene more effective to modify than carbon nanotubes. Graphene oxide is often synthesized using concentrated oxidizers such as sulfuric acid, nitric acid, and potassium permanganate, based on the Hummers method. Graphene oxide is very hydrophilic, dissolves easily in water and forms a stable solution. Graphene oxide (GO) covalently modified by chitosan (CS) has been developed for the controlled release of doxorubicin (DOX). In this study, GO is produced by the Hummers method under acidic conditions. It is then chlorinated by oxalyl chloride to increase its reactivity towards amines. After that, in the presence of CS, the amide coupling reaction is performed to form an amide linkage, and DOX is attached to the carrier surface by π-π interaction in phosphate buffer. GO, GO-CS, and GO-CS-DOX were characterized by FT-IR and TGA to identify the new functional groups that show the bonding of CS to GO, and by Raman spectroscopy and SEM to assess the changes in the size and number of layers. The ability to load and release the drug was determined by UV-Visible spectroscopy. The loading results showed a high DOX absorption capacity (99%), and pH dependence was identified in the release of DOX from the GO-CS nanosheet at pH 5.3 and 7.4, with a fast release rate under acidic conditions.

Keywords: Graphene oxide, chitosan, nanosheet, controlled drug release, doxorubicin.
