Search results for: minimum mean square error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2423


2093 Experimental Demonstration of an Ultra-Low Power Vertical-Cavity Surface-Emitting Laser for Optical Power Generation

Authors: S. Nazhan, Hassan K. Al-Musawi, Khalid A. Humood

Abstract:

This paper reports on an experimental investigation into the influence of current modulation on the properties of a vertical-cavity surface-emitting laser (VCSEL) under direct square-wave modulation. The optical output power response is measured for an 850 nm VCSEL as a function of the pumping current, modulation frequency, and modulation amplitude. We demonstrate that the modulation frequency and amplitude play important roles in reducing the VCSEL's power consumption for optical generation. Indeed, even when the biasing current is below the static threshold, the VCSEL emits optical power under square-wave modulation. With the bias held below the threshold current, the power consumed by the device to generate light is reduced by more than 50%, in response to both the modulation frequency and amplitude. Operating a VCSEL at low power is very desirable because it reduces thermal effects, which is essential for a high-speed modulation bandwidth.

Keywords: VCSELs, optical power generation, power consumption, square wave modulation.

2092 Particle Swarm Optimization with Reduction for Global Optimization Problems

Authors: Michiharu Maeda, Shinya Tsuda

Abstract:

This paper presents a particle swarm optimization algorithm with reduction for global optimization problems. Particle swarm optimization is a multi-point search algorithm inspired by the collective motion of flocks of birds or schools of fish, in which multiple particles cooperate to find the best solution. It is flexible enough to adapt to a wide range of optimization problems. However, when an objective function has many local minima, particles may become trapped in one of them. To avoid this, a large number of particles are initially prepared and their positions are updated by particle swarm optimization; the particles are then sequentially reduced, based on their evaluation values, until a predetermined number remains, and the optimization continues until the termination condition is met. To show the effectiveness of the proposed algorithm, we examine the minima found on test functions in comparison with existing algorithms. Furthermore, the influence of the initial number of particles on the best value obtained by our algorithm is discussed.
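
A minimal sketch of the idea, assuming a standard global-best PSO with common coefficient values (w = 0.7, c1 = c2 = 1.5) and a simplified reduction rule that periodically discards the worst-evaluated particles; the paper's exact reduction schedule and test functions are not reproduced here.

import numpy as np

def pso_with_reduction(f, dim, n_init=60, n_final=10, iters=200, seed=0):
    """Global-best PSO whose swarm is gradually thinned to n_final particles
    by discarding the worst-evaluated ones (a simplified reduction rule)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_init, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                      # common PSO coefficients (assumed)
    for t in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.array([f(p) for p in x])
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[pbest_val.argmin()].copy()
        if len(x) > n_final and t % 20 == 19:      # periodically drop the worst particles
            keep = np.argsort(pbest_val)[:max(n_final, len(x) - 10)]
            x, v, pbest, pbest_val = x[keep], v[keep], pbest[keep], pbest_val[keep]
    return g, pbest_val.min()

sphere = lambda p: float(np.sum(p ** 2))           # a standard test function
print(pso_with_reduction(sphere, dim=5))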

Keywords: Particle swarm optimization, Global optimization, Metaheuristics, Reduction.

2091 Error Correction of Radial Displacement in Grinding Machine Tool Spindle by Optimizing Shape and Bearing Tuning

Authors: Khairul Jauhari, Achmad Widodo, Ismoyo Haryanto

Abstract:

In this article, the correction of the radial displacement error caused by unbalance forces in a high-precision grinding spindle was investigated. The spindle shaft is considered as a flexible rotor mounted on two sets of angular contact ball bearings. The finite element method (FEM) is adopted to obtain the equation of motion of the spindle. First, the natural frequencies, critical frequencies, and amplitude of the unbalance response caused by residual unbalance are determined in order to investigate the spindle behavior. Furthermore, an optimization algorithm is employed to minimize the radial displacement of the spindle; it considers the dimensions of the spindle shaft, the dynamic characteristics of the bearings, the critical frequencies, and the amplitude of the unbalance response, and computes the optimum spindle diameters together with the bearing stiffness and damping. Numerical simulation results show that by optimizing the spindle diameters and the bearing stiffness and damping, the radial displacement of the spindle can be reduced: a radial displacement error of about 4 μm can be compensated to within 2 μm. This can certainly improve the accuracy of the machined product.

Keywords: Error correction, High precision grinding, Optimization, Radial displacement, Spindle.

2090 Iterative Solutions to Some Linear Matrix Equations

Authors: Jiashang Jiang, Hao Liu, Yongxin Yuan

Abstract:

In this paper, gradient-based iterative algorithms are presented to solve the following four types of linear matrix equations: (a) AXB = F; (b) AXB = F, CXD = G; (c) AXB = F s.t. X = X^T; (d) AXB + CYD = F, where X and Y are unknown matrices and A, B, C, D, F, G are given constant matrices. It is proved that if the equation considered has a solution, then the unique minimum norm solution can be obtained by choosing a special kind of initial matrix. The numerical results show that the proposed method is reliable and attractive.
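
A minimal sketch for the simplest case (a), AXB = F, using the gradient iteration X_{k+1} = X_k + mu*A^T(F - A X_k B)B^T with the zero matrix as the initial guess; the step size and the test matrices below are illustrative assumptions, not taken from the paper.

import numpy as np

def gradient_iteration_axb(A, B, F, mu=None, tol=1e-10, max_iter=20000):
    """Iteratively solve AXB = F by gradient descent on ||AXB - F||_F^2."""
    X = np.zeros((A.shape[1], B.shape[0]))        # X0 = 0, a simple initial matrix
    if mu is None:
        # conservative step size based on spectral norms (an assumption, not the paper's exact choice)
        mu = 1.0 / (np.linalg.norm(A, 2) ** 2 * np.linalg.norm(B, 2) ** 2)
    for _ in range(max_iter):
        R = F - A @ X @ B                         # residual
        X = X + mu * A.T @ R @ B.T                # gradient step
        if np.linalg.norm(R) < tol:
            break
    return X

# illustrative example with a known solution
A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0, 0.0], [0.0, 2.0]])
X_true = np.array([[1.0, -1.0], [0.5, 2.0]])
F = A @ X_true @ B
print(np.round(gradient_iteration_axb(A, B, F), 6))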

Keywords: Matrix equation, iterative algorithm, parameter estimation, minimum norm solution.

2089 Binary Mixture of Copper-Cobalt Ions Uptake by Zeolite using Neural Network

Authors: John Kabuba, Antoine Mulaba-Bafubiandi, Kim Battle

Abstract:

In this study, a neural network (NN) was proposed to predict the sorption of a binary mixture of copper-cobalt ions onto clinoptilolite as an ion-exchanger. The backpropagation configuration giving the smallest mean square error was a three-layer NN with a tangent-sigmoid transfer function in a hidden layer of 10 neurons, a linear transfer function at the output layer, and the Levenberg-Marquardt backpropagation training algorithm. Experiments were carried out in a batch reactor to obtain equilibrium data for the individual sorption and for the mixture of copper-cobalt ions. The modeling results show that the neural network fits the equilibrium data of the binary system better than the conventional sorption isotherm models.
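
A minimal sketch of the reported network configuration (one hidden layer of 10 tangent-sigmoid neurons, linear output); scikit-learn offers no Levenberg-Marquardt trainer, so L-BFGS is used here as a stand-in, and the data below are synthetic placeholders for the batch-reactor equilibrium measurements.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# placeholder inputs: equilibrium Cu and Co concentrations and pH (made up)
X = rng.uniform([1.0, 1.0, 2.0], [100.0, 100.0, 8.0], size=(60, 3))
# placeholder target: sorbed amount; a smooth synthetic response, not real data
y = 0.4 * np.log1p(X[:, 0]) + 0.3 * np.log1p(X[:, 1]) + 0.1 * X[:, 2]

model = MLPRegressor(hidden_layer_sizes=(10,),   # one hidden layer of 10 neurons
                     activation='tanh',          # tangent-sigmoid transfer function
                     solver='lbfgs',             # stand-in for Levenberg-Marquardt
                     max_iter=5000, random_state=0)
model.fit(X, y)                                  # output layer is linear for regression
print("training MSE:", mean_squared_error(y, model.predict(X)))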

Keywords: Adsorption isotherm, binary system, neural network, sorption.

2088 Impact of Height of Silicon Pillar on Vertical DG-MOSFET Device

Authors: K. E. Kaharudin, A. H. Hamidon, F. Salehuddin

Abstract:

The vertical Double Gate (DG) Metal Oxide Semiconductor Field Effect Transistor (MOSFET) is believed to suppress various short channel effect problems. The gate-to-channel coupling in a vertical DG-MOSFET is doubled, resulting in higher current density. With two gates, the channel is controlled from both sides, giving better electrostatic control over the channel. In order to ensure that the transistor possesses a superb turn-off characteristic, the sub-threshold swing (SS) must be kept at a minimum value (60-90 mV/dec). Using SILVACO TCAD software, an n-channel vertical DG-MOSFET was successfully designed while keeping the sub-threshold swing (SS) as small as possible. It was observed that the sub-threshold swing (SS) can be varied by adjusting the height of the silicon pillar. The minimum sub-threshold swing (SS) was found to be 64.7 mV/dec, with a threshold voltage (VTH) of 0.895 V, and the ideal height of the vertical DG-MOSFET pillar was found to be 0.265 µm.

Keywords: DG-MOSFET, pillar, SCE, vertical

2087 A New Method for Contour Approximation Using Basic Ramer Idea

Authors: Ali Abdrhman Ukasha

Abstract:

This paper presents two new efficient algorithms for contour approximation. The proposed algorithm is compared with the Ramer (good quality), Triangle (faster), and Trapezoid (fastest) methods, which are briefly described. The Cartesian coordinates of an input contour are processed in such a manner that the contour is finally represented by a set of selected vertices of its edge. The main idea of the analyzed procedures for contour compression is described. For comparison, the mean square error and signal-to-noise ratio criteria are used. The computational time of the analyzed methods is estimated from the number of numerical operations. Experimental results are given in terms of image quality, compression ratio, and speed. The main advantage of the analyzed algorithm is the small number of arithmetic operations compared to existing algorithms.
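
A compact sketch of the basic Ramer split rule used as the quality reference above; the tolerance eps and the test contour are illustrative, and the two new algorithms of the paper are not reproduced here.

import numpy as np

def point_line_distance(pts, start, end):
    """Perpendicular distance of each point in pts to the line through start and end."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    L = np.hypot(dx, dy)
    if L == 0:
        return np.hypot(pts[:, 0] - start[0], pts[:, 1] - start[1])
    return np.abs(dx * (pts[:, 1] - start[1]) - dy * (pts[:, 0] - start[0])) / L

def ramer(points, eps):
    """Basic Ramer split: keep the farthest vertex if it exceeds eps, then recurse."""
    points = np.asarray(points, float)
    if len(points) < 3:
        return points
    dist = point_line_distance(points[1:-1], points[0], points[-1])
    i = np.argmax(dist) + 1
    if dist[i - 1] > eps:
        left = ramer(points[:i + 1], eps)
        right = ramer(points[i:], eps)
        return np.vstack([left[:-1], right])
    return np.vstack([points[0], points[-1]])

# illustrative contour: a quarter circle sampled at 200 points; eps is an arbitrary tolerance
t = np.linspace(0, np.pi / 2, 200)
contour = np.column_stack([np.cos(t), np.sin(t)])
print(len(ramer(contour, eps=0.01)), "vertices kept out of", len(contour))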

Keywords: Polygonal approximation, Ramer, Triangle and Trapezoid methods.

2086 The Hall Coefficient and Magnetoresistance in Rectangular Quantum Wires with Infinitely High Potential under the Influence of a Laser Radiation

Authors: Nguyen Thu Huong, Nguyen Quang Bau

Abstract:

The Hall coefficient (HC) and the magnetoresistance (MR) have been studied in two-dimensional systems. In this work, the HC and the MR in a rectangular quantum wire (RQW) subjected to a crossed DC electric field and magnetic field, in the presence of a strong electromagnetic wave (EMW) characterized by its electric field, are studied. Using the quantum kinetic equation for electrons interacting with optical phonons, we obtain analytic expressions for the HC and the MR that depend on the magnetic field, the EMW frequency, the temperature of the system, and the characteristic lengths of the RQW. These expressions differ from those obtained for bulk semiconductors and cylindrical quantum wires. The analytical results are applied to GaAs/GaAs/Al. For this material, the MR depends on the ratio of the EMW frequency to the cyclotron frequency: it reaches a minimum at a ratio of 5/4 and, as this ratio increases, tends towards a saturation value. The HC can take negative or positive values, and each curve has one maximum and one minimum. As the magnetic field increases, the HC is negative, reaches a minimum value, and then increases suddenly to a maximum with a positive value. This behavior differs from that observed in cylindrical quantum wires, which do not exhibit such maximum and minimum values.

Keywords: Hall coefficient, rectangular quantum wires, electron-optical phonon interaction, quantum kinetic equation.

2085 Multiwavelet and Biological Signal Processing

Authors: Morteza Moazami-Goudarzi, Ali Taheri, Mohammad Pooyan, Reza Mahboobi

Abstract:

In this paper, we seek the optimum multiwavelet for compression of electrocardiogram (ECG) signals and then select it for use with the SPIHT codec. At present, it is not well known which multiwavelet is the best choice for optimum compression of ECG. In this work, we examine different multiwavelets on 24 sets of ECG data with entirely different characteristics, selected from the MIT-BIH database. For assessing the performance of the different multiwavelets in compressing ECG signals, in addition to factors known from the compression literature such as the compression ratio (CR), percent root-mean-square difference (PRD), distortion (D), and root mean square error (RMSE), we also employ the cross correlation (CC) criterion, for studying the morphological relation between the reconstructed and the original ECG signal, and the signal-to-reconstruction-noise ratio (SNR). The simulation results show the cardinal balanced multiwavelet (cardbal2), with the identity (Id) prefiltering method, to be the most effective transformation. After finding the most efficient multiwavelet, we apply the SPIHT coding algorithm to the signal transformed by this multiwavelet.
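
A short sketch of the standard distortion measures named above (RMSE, PRD, SNR, CC), applied to a synthetic signal that stands in for an MIT-BIH record and its reconstruction; the multiwavelet transform and SPIHT coding themselves are not shown.

import numpy as np

def compression_metrics(x, x_rec):
    """Standard distortion measures used in ECG compression studies."""
    err = x - x_rec
    rmse = np.sqrt(np.mean(err ** 2))
    prd = 100.0 * np.sqrt(np.sum(err ** 2) / np.sum(x ** 2))   # percent root-mean-square difference
    snr = 10.0 * np.log10(np.sum(x ** 2) / np.sum(err ** 2))   # signal-to-reconstruction-noise ratio (dB)
    cc = np.corrcoef(x, x_rec)[0, 1]                           # cross correlation of original vs. reconstruction
    return rmse, prd, snr, cc

# placeholder signal standing in for an MIT-BIH record and its reconstruction
t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 1.3 * t) + 0.25 * np.sin(2 * np.pi * 13 * t)
x_rec = x + 0.01 * np.random.default_rng(1).standard_normal(x.size)
print(compression_metrics(x, x_rec))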

Keywords: ECG compression, Prefiltering, Cardinal Balanced Multiwavelet.

2084 Large-Eddy Simulation of Hypersonic Configuration Aerodynamics

Authors: Huang Shengqin, Xiao Hong

Abstract:

LES with a mixed subgrid-scale model has been used to simulate the aerodynamic performance of a hypersonic configuration. The simulation was conducted to replicate the conditions and geometry of a model that had previously been tested. The LES model was successful in predicting the pressure coefficient, with a maximum error of 1.5% except on the afterbody. At the higher Mach number condition, however, its predictive ability is poor and it produces a 12.5% error. The calculation error is mainly caused by the distribution of the swirling flow. The poor performance at high Mach number and in the afterbody region indicates that the mixed subgrid-scale model should be improved for large eddies, especially in hypersonic separated regions. In the attached and sideslip flight conditions, the calculated results exhibit waves, and LES is successful in predicting the pressure wave in hypersonic flow.

Keywords: Hypersonic, LES, mixed Subgrid-scale model, experiment.

2083 A Time-Reducible Approach to Compute Determinant |I-X|

Authors: Wang Xingbo

Abstract:

Computation of determinants of the form |I-X| is primary and fundamental because it can help to compute many other determinants. This article puts forward a time-reducible approach to compute the determinant |I-X|. The approach is derived from Newton's identity, and its time complexity is no more than that of computing the eigenvalues of the square matrix X. Mathematical deductions and a numerical example are presented in detail. By comparison with classical approaches, the new approach is shown to be superior, and it naturally reduces the computational time as the efficiency of computing the eigenvalues of a square matrix improves.
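
A small numerical sketch of the underlying identity: det(I − X) equals the product of (1 − λ_i) over the eigenvalues of X, and can equivalently be recovered from the power sums p_k = tr(X^k) through Newton's identities; the 3×3 matrix is only an illustration, not the paper's worked example.

import numpy as np

def det_I_minus_X_eig(X):
    """det(I - X) as the product of (1 - eigenvalue)."""
    return np.prod(1.0 - np.linalg.eigvals(X)).real

def det_I_minus_X_newton(X):
    """det(I - X) from power sums p_k = tr(X^k) via Newton's identities."""
    n = X.shape[0]
    p, P = [], np.eye(n)
    for k in range(1, n + 1):
        P = P @ X
        p.append(np.trace(P))
    e = [1.0]                                  # elementary symmetric polynomials e_0..e_n
    for k in range(1, n + 1):
        s = sum((-1) ** (i - 1) * e[k - i] * p[i - 1] for i in range(1, k + 1))
        e.append(s / k)
    # det(I - X) = sum_{k=0}^{n} (-1)^k e_k
    return sum((-1) ** k * e[k] for k in range(n + 1))

X = np.array([[0.2, 0.1, 0.0],
              [0.0, 0.3, 0.4],
              [0.1, 0.0, 0.5]])
print(det_I_minus_X_eig(X), det_I_minus_X_newton(X), np.linalg.det(np.eye(3) - X))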

Keywords: Algorithm, determinant, computation, eigenvalue, time complexity.

2082 Error Detection and Correction for Onboard Satellite Computers Using Hamming Code

Authors: Rafsan Al Mamun, Md. Motaharul Islam, Rabana Tajrin, Nabiha Noor, Shafinaz Qader

Abstract:

In an attempt to enrich the lives of billions of people by providing proper information, security, and a way of communicating with others, the need for efficient and improved satellites is constantly growing. Thus, there is an increasing demand for better error detection and correction (EDAC) schemes capable of protecting the data onboard satellites. This paper is aimed at detecting and correcting such errors using a special algorithm, the Hamming code, which uses the concept of parity and parity bits to prevent single-bit errors onboard a satellite in low Earth orbit. It focuses on the study of low Earth orbit satellites and on the process of generating the Hamming code matrix to be used for EDAC using computer programs. The most effective version generated was the Hamming (16, 11, 4) code implemented in MATLAB, and the paper compares this particular scheme, together with its limitations, against other EDAC mechanisms, including other versions of the Hamming code and the cyclic redundancy check (CRC). This version of the Hamming code guarantees single-bit error correction as well as double-bit error detection. Furthermore, it has proved to be fast, with a checking time of 5.669 nanoseconds, has a relatively higher code rate and lower bit overhead than the other versions, and can detect a greater percentage of errors per length of code than other EDAC schemes with similar capabilities. In conclusion, with proper implementation of the system, it is quite possible to ensure a relatively uncorrupted satellite storage system.
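
A minimal Hamming(7,4) illustration of the parity/syndrome principle the scheme relies on; it is a smaller code than the Hamming (16, 11, 4) SEC-DED version generated in MATLAB for the paper, and the injected bit-flip is illustrative.

import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],      # generator matrix [I_4 | P]
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],      # parity-check matrix [P^T | I_3]
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(bits):
    """Append three parity bits to a 4-bit message."""
    return (np.asarray(bits) @ G) % 2

def correct(word):
    """Compute the syndrome and flip the single bit it points to, if any."""
    syndrome = (H @ word) % 2
    if syndrome.any():
        # the syndrome equals the column of H at the error position
        pos = np.flatnonzero((H.T == syndrome).all(axis=1))[0]
        word = word.copy()
        word[pos] ^= 1
    return word

msg = np.array([1, 0, 1, 1])
cw = encode(msg)
received = cw.copy(); received[5] ^= 1    # inject a single bit-flip (an SEU)
print("corrected:", correct(received), "original codeword:", cw)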

Keywords: Bit-flips, Hamming code, low earth orbit, parity bits, satellite, single error upset.

2081 No-Reference Image Quality Assessment using Blur and Noise

Authors: Min Goo Choi, Jung Hoon Jung, Jae Wook Jeon

Abstract:

Assessment of image quality traditionally needs the original image as a reference. Conventional methods such as the Mean Square Error (MSE) or the Peak Signal-to-Noise Ratio (PSNR) are invalid when there is no reference. In this paper, we present a new no-reference (NR) assessment of image quality using blur and noise. Recent camera applications provide high-quality images with the help of a digital image signal processor (ISP). Since images taken by high-performance digital cameras have few blocking and ringing artifacts, we focus only on blur and noise for predicting the objective image quality. The experimental results show that the proposed assessment method correlates well with the subjective Difference Mean Opinion Score (DMOS). Furthermore, the proposed method requires a very low computational load in the spatial domain and extracts characteristics similar to human perceptual assessment.
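
For contrast with the no-reference approach, a short sketch of the full-reference MSE/PSNR mentioned above, together with a generic variance-of-Laplacian sharpness proxy; the proxy is a common stand-in and is not the blur measure proposed in the paper, and the image is a synthetic placeholder.

import numpy as np
from scipy.ndimage import laplace

def psnr(ref, img, peak=255.0):
    """Full-reference PSNR; only computable when a reference image exists."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def blur_proxy(img):
    """Variance of the Laplacian: a generic no-reference sharpness proxy,
    not the specific blur measure proposed in the paper."""
    return laplace(img.astype(float)).var()

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(float)   # placeholder image
noisy = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255)
print("PSNR:", round(psnr(ref, noisy), 2), "dB; blur proxy:", round(blur_proxy(noisy), 1))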

Keywords: No Reference, Image Quality Assessment, blur, noise.

2080 On Constructing a Cubically Convergent Numerical Method for Multiple Roots

Authors: Young Hee Geum

Abstract:

We propose the numerical method defined by

x_{n+1} = x_n − λ·f(x_n − μ·h(x_n)) / f'(x_n), n ∈ N,

and determine the control parameters λ and μ so that the method converges cubically. In addition, we derive the asymptotic error constant. Applying the proposed scheme to various test functions, the numerical results show good agreement with the theory analyzed in this paper and are verified using Mathematica with its high-precision computability.
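
A minimal sketch of the iteration's structure only, assuming h(x) = f(x)/f'(x) (the abstract does not define h); the parameter values below (λ = 2, μ = 0, i.e. the classical modified Newton step for a double root) are illustrative, not the cubically convergent pair derived in the paper.

def iterate(f, fp, x0, lam, mu, tol=1e-12, max_iter=50):
    """x_{n+1} = x_n - lam * f(x_n - mu*h(x_n)) / f'(x_n), with h = f/f' assumed."""
    x = x0
    for _ in range(max_iter):
        h = f(x) / fp(x)
        step = lam * f(x - mu * h) / fp(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# f has a double root at x = 1; lam = 2, mu = 0 reduces the scheme to modified Newton
f = lambda x: (x - 1.0) ** 2 * (x - 3.0)
fp = lambda x: 2.0 * (x - 1.0) * (x - 3.0) + (x - 1.0) ** 2
print(iterate(f, fp, x0=1.5, lam=2.0, mu=0.0))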

Keywords: Asymptotic error constant, iterative method , multiple root, root-finding.

2079 Compton Scattering of Annihilation Photons as a Short Range Quantum Key Distribution Mechanism

Authors: Roman Novak, Matjaz Vencelj

Abstract:

The angular distribution of Compton scattering of two quanta originating in the annihilation of a positron with an electron is investigated as a quantum key distribution (QKD) mechanism in the gamma spectral range. The geometry of coincident Compton scattering is observed on the two sides as a way to obtain partially correlated readings on the quantum channel. We derive the noise probability density function of a conceptually equivalent prepare and measure quantum channel in order to evaluate the limits of the concept in terms of the device secrecy capacity and estimate it at roughly 1.9 bits per 1 000 annihilation events. The high error rate is well above the tolerable error rates of the common reconciliation protocols; therefore, the proposed key agreement protocol by public discussion requires key reconciliation using classical error-correcting codes. We constructed a prototype device based on the readily available monolithic detectors in the least complex setup.

Keywords: Compton scattering, gamma-ray polarization, quantum cryptography, quantum key distribution.

2078 A Note on the Minimum Cardinality of Critical Sets of Inertias for Irreducible Zero-nonzero Patterns of Order 4

Authors: Ber-Lin Yu, Ting-Zhu Huang

Abstract:

If there exists a nonempty, proper subset S of the set of all (n+1)(n+2)/2 inertias such that S ⊆ i(A) is sufficient for any n×n zero-nonzero pattern A to be inertially arbitrary, then S is called a critical set of inertias for zero-nonzero patterns of order n. If no proper subset of S is a critical set, then S is called a minimal critical set of inertias. In [Kim, Olesky and Driessche, Critical sets of inertias for matrix patterns, Linear and Multilinear Algebra, 57 (3) (2009) 293-306], identifying all minimal critical sets of inertias for n×n zero-nonzero patterns with n ≥ 3 and the minimum cardinality of such a set are posed as two open questions by Kim, Olesky and Driessche. In this note, the minimum cardinality of all critical sets of inertias for 4 × 4 irreducible zero-nonzero patterns is identified.

Keywords: Zero-nonzero pattern, inertia, critical set of inertias, inertially arbitrary.

2077 Text Mining Technique for Data Mining Application

Authors: M. Govindarajan

Abstract:

Text mining applies knowledge discovery techniques to unstructured text and is also termed knowledge discovery in text (KDT) or text data mining. The decision tree approach is most useful in classification problems: a tree is constructed to model the classification process, with two basic steps, building the tree and applying it to the database. This paper describes a proposed C5.0 classifier that uses rulesets, cross-validation, and boosting on top of the original C5.0 in order to reduce the error ratio. The feasibility and benefits of the proposed approach are demonstrated on a medical data set, the hypothyroid data. The performance of a classifier on the training cases from which it was constructed gives a poor estimate of its accuracy; by sampling, or by using a separate test file, the classifier is instead evaluated on cases that were not used to build it, which matters when both sets are large. If the cases in hypothyroid.data and hypothyroid.test were shuffled and divided into a new 2772-case training set and a 1000-case test set, C5.0 might construct a different classifier with a lower or higher error rate on the test cases. An important feature of See5 is its ability to generate classifiers called rulesets; the ruleset has an error rate of 0.5% on the test cases. The standard errors of the means provide an estimate of the variability of the results. One way to get a more reliable estimate of predictive accuracy is f-fold cross-validation: the error rate of a classifier produced from all the cases is estimated as the ratio of the total number of errors on the hold-out cases to the total number of cases. The Boost option with x trials instructs See5 to construct up to x classifiers in this manner. Trials over numerous data sets, large and small, show that on average 10-classifier boosting reduces the error rate for test cases by about 25%.
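
C5.0/See5 has no standard Python interface, so the sketch below uses scikit-learn's CART tree and AdaBoost as stand-ins to illustrate the two evaluation ideas relied on above, k-fold cross-validation for a reliable error estimate and 10-classifier boosting; the data are a synthetic placeholder for the hypothyroid set.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# synthetic placeholder for the hypothyroid data (3772 cases in the original split)
X, y = make_classification(n_samples=3772, n_features=20, n_informative=8,
                           weights=[0.92, 0.08], random_state=0)

tree = DecisionTreeClassifier(random_state=0)
boosted = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                             n_estimators=10, random_state=0)   # "10-classifier boosting"

# 10-fold cross-validation: error = 1 - mean accuracy over the held-out folds
for name, clf in [("single tree", tree), ("boosted", boosted)]:
    err = 1.0 - cross_val_score(clf, X, y, cv=10).mean()
    print(f"{name}: estimated error rate = {err:.3%}")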

Keywords: C5.0, Error Ratio, text mining, training data, test data.

2076 High Performance of Direct Torque and Flux Control of a Double Stator Induction Motor Drive with a Fuzzy Stator Resistance Estimator

Authors: K. Kouzi

Abstract:

In order to achieve stable, high-performance direct torque and flux control (DTFC) of a double star induction motor (DSIM) drive, proper on-line adaptation of the stator resistance is very important. This is due to the variation of the stator resistance during operation, which introduces errors in the estimated flux position and the magnitude of the stator flux. Error in the estimated stator flux deteriorates the performance of the DTFC drive, and the effect of the estimation error is especially significant at low speed. Our aim is therefore to overcome the sensitivity of the DTFC to stator resistance variation by proposing an on-line fuzzy estimator of the stator resistance. The fuzzy estimation method corrects the stator resistance on-line from the stator current estimation error and its variation; the fuzzy logic controller gives the future stator resistance increment at its output. The main advantage of the suggested control algorithm is that it avoids the drive instability that may occur in certain situations and ensures tracking of the actual stator resistance. The validity of the technique and the improvement of the whole system performance are confirmed by the results.

Keywords: Direct torque control, dual stator induction motor, fuzzy logic estimation, stator resistance adaptation.

2075 Optimizing Forecasting for Indonesia's Coal and Palm Oil Exports: A Comparative Analysis of ARIMA, ANN, and LSTM Methods

Authors: Mochammad Dewo, Sumarsono Sudarto

Abstract:

The exponential triple smoothing algorithm currently used to forecast the export value of Indonesia's two major commodities, coal and palm oil, has a mean absolute percentage error (MAPE) of 30-50%, which can only be considered a "reasonable" forecasting error. Forecasting errors of more than 30% have a domino effect on industrial output, as extra production adds raw material, manufacturing, and storage expenses. Reaching an "excellent" classification, with an error value of less than 10%, would give new investors and exporters confidence in the commercial development of the related sectors, and industrial growth has a positive impact on economic development. The same approach can be applied to other commodities if the forecast error is less than 10%. The purpose of this work is therefore to create a forecasting technique that produces precise results with an error of less than 10%. This research analyzes forecasting methods such as ARIMA (Autoregressive Integrated Moving Average), ANN (Artificial Neural Network), and LSTM (Long Short-Term Memory). With a MAPE of 1%, this study shows that the ANN is the most successful approach for forecasting the coal and palm oil commodities in Indonesia.
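
A short sketch of the MAPE criterion used to grade the forecasts; the monthly export figures are placeholders, not the actual coal or palm-oil series.

import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, the accuracy measure used above."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# placeholder monthly export values (arbitrary units), not the real series
actual   = [2100, 2250, 2400, 2320, 2500, 2610]
forecast = [2080, 2300, 2350, 2380, 2460, 2650]
print(f"MAPE = {mape(actual, forecast):.2f}%   (< 10% would be 'excellent')")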

Keywords: ANN, Artificial Neural Network, ARIMA, Autoregressive Integrated Moving Average, export value, forecast, LSTM, Long Short Term Memory.

2074 Objective Performance of Compressed Image Quality Assessments

Authors: Ratchakit Sakuldee, Somkait Udomhunsakul

Abstract:

Measurement of the quality of image compression is important for image processing applications. In this paper, we propose an objective image quality assessment to measure the quality of gray scale compressed images, which correlates well with the subjective quality measurement (MOS) and takes the least time. The new objective image quality measurement is developed from a few fundamental objective measurements to evaluate compressed image quality based on JPEG and JPEG2000. The reliability between each fundamental objective measurement and the subjective measurement (MOS) is determined. From the experimental results, we found that the Maximum Difference measurement (MD) and a newly proposed measurement, the Structural Content Laplacian Mean Square Error (SCLMSE), are the suitable measurements for evaluating the quality of JPEG2000 and JPEG compressed images, respectively. In addition, the MD and SCLMSE measurements are scaled to make them equivalent to MOS, rating compressed image quality from 1 to 5 (unacceptable to excellent).

Keywords: JPEG, JPEG2000, objective image quality measurement, subjective image quality measurement, correlation coefficients.

2073 A Minimum Spanning Tree-Based Method for Initializing the K-Means Clustering Algorithm

Authors: J. Yang, Y. Ma, X. Zhang, S. Li, Y. Zhang

Abstract:

The traditional k-means algorithm has been widely used as a simple and efficient clustering method. However, the algorithm often converges to local minima because it is sensitive to the initial cluster centers. In this paper, an algorithm for selecting initial cluster centers on the basis of the minimum spanning tree (MST) is presented. The sets of vertices in the MST with the same degree are regarded as wholes and used to find the skeleton data points. Furthermore, a distance measure between the skeleton data points that takes both degree and Euclidean distance into consideration is presented. Finally, the MST-based initialization method for the k-means algorithm is presented, and its time complexity is analyzed. The presented algorithm is tested on five data sets from the UCI Machine Learning Repository, and the experimental results illustrate its effectiveness compared to three existing initialization methods.
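
A simplified sketch of the MST idea: build the Euclidean minimum spanning tree, rank vertices by degree, and hand the selected points to k-means as initial centers; the paper's degree-based skeleton extraction and its combined degree/Euclidean distance measure are not reproduced, and the data are synthetic.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import KMeans

def mst_degree_seeds(X, k):
    """Pick k seed points with the highest degree in the Euclidean MST."""
    D = squareform(pdist(X))
    mst = minimum_spanning_tree(D).toarray()
    adj = (mst > 0) | (mst.T > 0)              # undirected adjacency of the MST
    degree = adj.sum(axis=1)
    return X[np.argsort(degree)[::-1][:k]]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in ([0, 0], [3, 0], [0, 3])])
seeds = mst_degree_seeds(X, k=3)
km = KMeans(n_clusters=3, init=seeds, n_init=1).fit(X)
print("inertia with MST-based seeds:", round(km.inertia_, 2))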

Keywords: Degree, initial cluster center, k-means, minimum spanning tree.

2072 Levenberg-Marquardt Algorithm for Karachi Stock Exchange Share Rates Forecasting

Authors: Syed Muhammad Aqil Burney, Tahseen Ahmed Jilani, C. Ardil

Abstract:

Financial forecasting is an example of a signal processing problem. A number of ways to train the network are available; we have used the Levenberg-Marquardt algorithm for error back-propagation to adjust the weights. Pre-processing of the data has reduced much of the large-scale variation to a small scale, reducing the variation of the training data.
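
A minimal sketch of the Levenberg-Marquardt update, Δw = (JᵀJ + λI)⁻¹ Jᵀe, shown on a small least-squares fit with a finite-difference Jacobian rather than on a full network, and with made-up data in place of the Karachi Stock Exchange series; the fixed damping value is an illustrative choice.

import numpy as np

def levenberg_marquardt(residual, w0, lam=1e-2, n_iter=50, h=1e-6):
    """w <- w - (J^T J + lam*I)^(-1) J^T e, with a finite-difference Jacobian."""
    w = np.asarray(w0, float)
    for _ in range(n_iter):
        e = residual(w)
        J = np.empty((e.size, w.size))
        for j in range(w.size):                  # numerical Jacobian, column by column
            dw = np.zeros_like(w); dw[j] = h
            J[:, j] = (residual(w + dw) - e) / h
        step = np.linalg.solve(J.T @ J + lam * np.eye(w.size), J.T @ e)
        w = w - step
    return w

# illustrative data: fit y = a*exp(b*t); not actual share-rate data
t = np.linspace(0, 1, 30)
y = 2.0 * np.exp(1.5 * t)
residual = lambda w: w[0] * np.exp(w[1] * t) - y
print(levenberg_marquardt(residual, w0=[1.0, 1.0]))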

Keywords: Gradient descent method, Jacobian matrix, Levenberg-Marquardt algorithm, quadratic error surfaces.

2071 Modeling of Fluid Flow in 2D Triangular, Sinusoidal, and Square Corrugated Channels

Authors: Abdulbasit G. A. Abdulsayid

Abstract:

The main focus of this work is the hydrodynamic and thermal analysis of plate heat exchanger channels with corrugation patterns suggested to be triangular, sinusoidal, and square. The study numerically models and validates a triangular corrugated channel with dimensions/parameters taken from the open literature, and then models and analyzes both sinusoidal and square corrugated channels with reference to the triangular model. Initially, 2D modeling with extensive local analysis of the triangular corrugated channel was carried out. In this way, the local pressure drop, wall shear stress, friction factor, static temperature, heat flux, Nusselt number, and surface heat transfer coefficient were analyzed to interpret the hydrodynamic and thermal phenomena occurring in the flow. Furthermore, to build confidence in the model, the predicted values were compared with experimental results taken from the literature for almost the same case. Moreover, a holistic numerical study of the sinusoidal and square channels, together with global comparisons with the triangular corrugation under the same conditions, was carried out. Finally, a comparison between electric and fluid cooling was made by varying the boundary condition: constant wall temperature and constant wall heat flux boundary conditions were employed, and the different resulting Nusselt numbers were justified. The results obtained can be used to arrive at an optimal design, a 'compromise' between heat transfer and pressure drop.

Keywords: Corrugated Channel, CFD, Heat Exchanger, Heat Enhancement.

2070 Predictive Clustering Hybrid Regression (pCHR) Approach and Its Application to Sucrose-Based Biohydrogen Production

Authors: Nikhil, Ari Visa, Chin-Chao Chen, Chiu-Yue Lin, Jaakko A. Puhakka, Olli Yli-Harja

Abstract:

A predictive clustering hybrid regression (pCHR) approach was developed and evaluated using a dataset from an H2-producing sucrose-based bioreactor operated for 15 months. The aim was to model and predict the H2-production rate using information available about the envirome and metabolome of the bioprocess. Self-organizing maps (SOM) and a Sammon map were used to visualize the dataset and to identify the main metabolic patterns and clusters in the bioprocess data. Three metabolic clusters were detected: acetate coupled with other metabolites, butyrate only, and transition phases. The developed pCHR model combines the principles of k-means clustering, kNN classification, and regression techniques. The model performed well in modeling and predicting the H2-production rate, with mean square error values of 0.0014 and 0.0032, respectively.

Keywords: Biohydrogen, bioprocess modeling, clustering, hybrid regression.

2069 Determining the Best Fitting Distributions for Minimum Flows of Streams in Gediz Basin

Authors: Naci Büyükkaracığan

Abstract:

Today, the need for water resources is increasing swiftly due to population growth, and it is known that some regions will face water shortages and drought because of global warming and climate change. In this context, the evaluation and analysis of hydrological data, such as observed trends, drought and flood prediction, and short-term flows, is of great importance. Selecting the most accurate probability distribution is important for describing low-flow statistics in studies related to drought analysis. As in many basins in Turkey, the Gediz River basin will be affected by drought, which will decrease the amount of usable water. The aim of this study is to derive appropriate probability distributions for the frequency analysis of annual minimum flows at 6 gauging stations in the Gediz Basin. After applying 10 different probability distributions, six different parameter estimation methods, and 3 goodness-of-fit tests, the Pearson type 3 and generalized extreme value distributions were found to give the best results.
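
A small sketch of the distribution-fitting step using maximum likelihood and a Kolmogorov-Smirnov check for the two distributions found best above; the annual minimum flows are synthetic placeholders, and only one of the paper's six estimation methods and three tests is shown.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# placeholder annual minimum flows (m^3/s); the study used six gauging stations' records
flows = stats.genextreme.rvs(c=-0.1, loc=5.0, scale=1.5, size=40, random_state=rng)

for name, dist in [("Pearson 3", stats.pearson3), ("GEV", stats.genextreme)]:
    params = dist.fit(flows)                     # maximum-likelihood parameter estimates
    ks = stats.kstest(flows, dist.cdf, args=params)
    print(f"{name}: params={np.round(params, 3)}, KS p-value={ks.pvalue:.3f}")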

Keywords: Gediz Basin, goodness-of-fit tests, minimum flows, probability distribution.

2068 Analysis of a WDM System for Tanzania

Authors: Shaban Pazi, Chris Chatwin, Rupert Young, Philip Birch

Abstract:

Internet infrastructure in most parts of the world is supported by the advancement of optical fiber technology, most notably wavelength division multiplexing (WDM) systems. Optical technology by means of WDM has revolutionized long-distance data transport and has resulted in high data capacity, cost reductions, extremely low bit error rates, and operational simplification of the overall Internet infrastructure. This paper analyses and compares the system impairments that occur at data transmission rates of 2.5 Gb/s and 10 Gb/s per wavelength channel in our proposed optical WDM system for Internet infrastructure in Tanzania. The results show that the 2.5 Gb/s transmission rate has minimum system impairments compared with 10 Gb/s per wavelength channel and achieves sufficient system performance to provide a good Internet access service.

Keywords: Internet infrastructure, WDM system, standard single mode fibers, system impairments.

2067 The Effect of a Free-Trade Agreement upon Agricultural Imports

Authors: Andres G. Victorio, Montita Rungswang

Abstract:

A free-trade agreement is found to increase Thailand's agricultural imports from New Zealand, despite the short span of time for which the agreement has been operational. The finding is described by autoregressive estimates that correct for possible unit roots in the data. The agreement's effect upon imports is also estimated while considering an error-correction model of imports against gross domestic product.

Keywords: Agricultural imports, free trade, unit roots, cointegration, error correction.

2066 Equalization Algorithms for MIMO System

Authors: Said Elkassimi, Said Safi, B. Manaut

Abstract:

In recent years, multi-antenna techniques have been considered as a potential solution to increase the throughput of future wireless communication systems. The objective of this article is to study the MIMO (Multiple Input Multiple Output) transmission and reception system and to present the different receiver decoding techniques. First, we present the least complex techniques, the linear receivers such as the zero-forcing (ZF) and minimum mean squared error (MMSE) equalizers; then a nonlinear technique called ordered successive interference cancellation (OSIC) and the optimal detector based on the maximum likelihood criterion (ML). Finally, we simulate the associated decoding algorithms for the MIMO system (ZF, MMSE, OSIC, and ML) and compare the performance of these algorithms in the MIMO context.
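
A minimal sketch of the two linear receivers named above, the zero-forcing and MMSE equalizers, for a flat-fading 2×4 MIMO channel with illustrative QPSK symbols and noise variance; OSIC and ML detection are not shown.

import numpy as np

rng = np.random.default_rng(0)
nt, nr, sigma2 = 2, 4, 0.1                      # 2 transmit, 4 receive antennas, noise variance

H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
x = np.array([1 + 1j, -1 - 1j]) / np.sqrt(2)    # illustrative QPSK symbols
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(nr) + 1j * rng.standard_normal(nr))
y = H @ x + n                                   # received vector

Hh = H.conj().T
x_zf = np.linalg.solve(Hh @ H, Hh @ y)                          # zero-forcing estimate
x_mmse = np.linalg.solve(Hh @ H + sigma2 * np.eye(nt), Hh @ y)  # MMSE estimate
print("ZF:  ", np.round(x_zf, 3))
print("MMSE:", np.round(x_mmse, 3))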

Keywords: Multiple Input Multiple Outputs (MIMO), ZF, MMSE, Ordered Interference Successive Cancellation (OSIC), ML, Interference Successive Cancellation (SIC).

2065 Numerical Analysis of Laminar Flow around Square Cylinders with EHD Phenomenon

Authors: M. Salmanpour, O. Nourani Zonouz

Abstract:

In this research, a numerical simulation of the effects of an electrohydrodynamic (EHD) actuator on the flow around a square cylinder, using a finite volume method, has been carried out. This is one of the newest ways of controlling fluid flows. Two plate electrodes are flush-mounted on the surface of the cylinder, and one wire electrode is placed on the line at zero angle of attack relative to the stagnation point and excited with a DC power supply. The discharge produces an electric force and changes the local momentum behavior in the fluid layers. For this purpose, after selecting a proper domain and boundary conditions, the electric field of the problem is analyzed and the results, in the form of an electrical body force, are entered into the governing equations of the fluid field (the Navier-Stokes equations). The effect of the ionic wind produced by the electrohydrodynamic actuator on the velocity, pressure, and the wake behind the cylinder is considered. According to the results, the fluid flow accelerates nearest the wall of the frontal half of the cylinder, and the pressure difference between the front and rear of the cylinder is increased.

Keywords: CFD, corona discharge, electro hydrodynamics, flow around square cylinders.

2064 Sinusoidal Roughness Elements in a Square Cavity

Authors: M. Yousaf, S. Usman

Abstract:

Numerical studies were conducted using the lattice Boltzmann method (LBM) to study natural convection in a square cavity in the presence of roughness. An algorithm based on a single-relaxation-time Bhatnagar-Gross-Krook (BGK) model of the lattice Boltzmann method was developed. Roughness was introduced on both the hot and cold walls in the form of sinusoidal roughness elements. The study was conducted for a Newtonian fluid with a Prandtl number (Pr) of 1.0, and the Rayleigh number (Ra) was explored from 10^3 to 10^6 in the laminar region. The thermal and hydrodynamic behavior of the fluid was analyzed using a differentially heated square cavity with roughness elements present on both the hot and cold walls. Neumann boundary conditions were imposed on the horizontal walls, with the vertical walls isothermal; the roughness elements were at the same boundary condition as the corresponding walls. The computational algorithm was validated against previous benchmark studies performed with different numerical methods, and good agreement was found. Results indicate that the maximum reduction in average heat transfer was 16.66 percent at a Rayleigh number of 10^5.
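
A minimal sketch of the single-relaxation-time BGK collision on one D2Q9 lattice node; the relaxation time, streaming step, thermal distribution, and boundary treatment used for the natural convection problem are omitted, and the values are illustrative.

import numpy as np

# D2Q9 lattice velocities and weights
e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1], [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Second-order D2Q9 equilibrium distribution for density rho and velocity u."""
    eu = e @ u
    usq = u @ u
    return w * rho * (1.0 + 3.0 * eu + 4.5 * eu ** 2 - 1.5 * usq)

def bgk_collide(f, tau):
    """Single-relaxation-time (BGK) collision step on one lattice node."""
    rho = f.sum()                 # macroscopic density
    u = (f @ e) / rho             # macroscopic velocity
    return f - (f - equilibrium(rho, u)) / tau

# illustrative single-node update (tau = 0.8 is an arbitrary choice)
f = equilibrium(1.0, np.array([0.05, 0.0])) + 1e-3
print(np.round(bgk_collide(f, tau=0.8), 5))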

Keywords: Lattice Boltzmann method, natural convection, Nusselt number, Rayleigh number, roughness.
