Search results for: prediction error
212 Sperm Identification Using Elliptic Model and Tail Detection
Authors: Vahid Reza Nafisi, Mohammad Hasan Moradi, Mohammad Hosain Nasr-Esfahani
Abstract:
The conventional assessment of human semen is highly subjective, with considerable intra- and inter-laboratory variability. Computer-Assisted Sperm Analysis (CASA) systems provide a rapid and automated assessment of sperm characteristics, together with improved standardization and quality control. However, the outcome of CASA systems is sensitive to the method of experimentation. While conventional CASA systems use digital microscopes with phase-contrast accessories to produce higher-contrast images, we have used raw semen samples (no staining materials) and a regular light microscope, with a digital camera directly attached to its eyepiece, to reduce cost and simplify assembly of the system. However, since accurately finding sperms in the semen image is the first step in the examination and analysis of the semen, any error in this step can affect the outcome of the analysis. This article introduces and explains an algorithm for finding sperms in low-contrast images: First, an image enhancement algorithm is applied to remove extra particles from the image. Then, the foreground particles (including sperms and round cells) are segmented from the background. Finally, based on certain features and criteria, sperms are separated from other cells.
Keywords: Computer-Assisted Sperm Analysis (CASA), Sperm identification, Tail detection, Elliptic shape model.
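A minimal sketch of the segmentation and elliptic-model filtering steps is given below, assuming OpenCV; the function name and the thresholds on head area and ellipse elongation are illustrative assumptions, not the paper's actual values.

```python
# Hypothetical sketch of the segmentation and elliptic-model filtering steps.
# Area and elongation thresholds are illustrative, not the paper's values.
import cv2
import numpy as np

def sperm_head_candidates(gray: np.ndarray):
    gray = cv2.equalizeHist(gray)                      # enhance the low-contrast image
    _, mask = cv2.threshold(gray, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    heads = []
    for c in contours:
        if len(c) < 5 or not 20 < cv2.contourArea(c) < 400:
            continue                                   # fitEllipse needs >= 5 points
        (cx, cy), axes, angle = cv2.fitEllipse(c)
        major, minor = max(axes), min(axes)
        if 1.2 < major / max(minor, 1e-6) < 3.0:       # elongated ellipse: head-like
            heads.append(((cx, cy), angle))            # angle hints where the tail is
    return heads
```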
211 Finite Volume Method for Flow Prediction Using Unstructured Meshes
Authors: Juhee Lee, Yongjun Lee
Abstract:
In designing low-energy buildings, the heat transfer through large glass areas or walls becomes critical. Multiple layers of window glass and wall material are employed for high insulation. The gravity-driven air flow between window glasses or wall layers is a natural convection phenomenon that is a key part of the heat transfer. As the first step of the natural heat transfer analysis, this study presents the development and application of a finite volume method for the numerical computation of viscous incompressible flows. It will become part of a natural convection analysis with a high-order scheme, multi-grid method, and dual time stepping in the future. A finite volume method based on a fully implicit, second-order scheme is used to discretize and solve the fluid flow on unstructured grids composed of arbitrary-shaped cells. The governing equations are integrated in the finite volume manner using a collocated arrangement of variables. The convergence of the SIMPLE segregated algorithm for the solution of the coupled nonlinear algebraic equations is accelerated by using a sparse matrix solver such as BiCGSTAB. The method used in the present study is verified by applying it to flows for which either the numerical solution is known or the solution can be obtained using another numerical technique available in other studies. The accuracy of the method is assessed through grid refinement.
Keywords: Finite volume method, fluid flow, laminar flow, unstructured grid.
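As a small illustration of the linear-solver step mentioned above, the sketch below solves a sparse system with SciPy's BiCGSTAB; the one-dimensional Laplacian here is only a stand-in for the matrices that arise inside a SIMPLE iteration.

```python
# Illustrative sparse solve of the kind used to accelerate SIMPLE iterations.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab

n = 100
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")  # 1-D Laplacian
b = np.ones(n)                                                        # source term
x, info = bicgstab(A, b)
print(info)                        # 0 means the iteration converged
print(np.abs(A @ x - b).max())     # residual check
```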
210 Dynamic Features Selection for Heart Disease Classification
Authors: Walid MOUDANI
Abstract:
The healthcare environment is generally perceived as being information rich yet knowledge poor, and there is a lack of effective analysis tools to discover hidden relationships and trends in the data. In fact, valuable knowledge can be discovered by applying data mining techniques in a healthcare system. In this study, a proficient methodology is presented for extracting significant patterns from Coronary Heart Disease warehouses for heart attack prediction, which unfortunately continues to be a leading cause of mortality worldwide. For this purpose, we propose to enumerate dynamically the optimal subsets of the reduced features of high interest by using the rough sets technique associated with dynamic programming. We then validate the classification using a Random Forest (RF) decision tree to identify the risky heart disease cases. This work is based on a large amount of data collected from several clinical institutions based on the medical profiles of patients. Moreover, the experts' knowledge in this field has been taken into consideration in order to define the disease and its risk factors, and to establish significant knowledge relationships among the medical factors. A computer-aided system is developed for this purpose based on a population of 525 adults. The performance of the proposed model is analyzed and evaluated against a set of benchmark techniques applied to this classification problem.
Keywords: Multi-Classifier Decision Tree, Features Reduction, Dynamic Programming, Rough Sets.
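A minimal sketch of the validation step follows: a Random Forest classifier evaluated by cross-validation on a reduced feature subset. The data here is synthetic, since the study's 525-patient clinical data set is not public; the feature count is an illustrative assumption.

```python
# Sketch: Random Forest validation on a reduced feature subset (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(525, 8))            # 8 selected risk-factor features (assumed)
y = (X[:, 0] + X[:, 3] + rng.normal(scale=0.5, size=525) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # benchmark-style evaluation
```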
209 The Use of Artificial Neural Network in Option Pricing: The Case of S&P 100 Index Options
Authors: Zeynep İltüzer Samur, Gül Tekin Temur
Abstract:
Due to the increasing and varied risks that economic units face, derivative instruments have gained substantial importance, and trading volumes of derivatives have reached very significant levels. In parallel with these high trading volumes, researchers have developed many different models, some parametric, some nonparametric. In this study, the aim is to analyse how successfully an artificial neural network prices options, using S&P 100 index options data. Previous studies generally cover only European-type call options; this study includes not only European call options but also American call and put options and European put options. Three data sets are used to build three different ANN models. One includes only data that are directly observed in the economic environment, i.e. strike price, spot price, interest rate, maturity, and type of the contract. The others include an extra input that is not an observable quantity but a parameter, i.e. volatility. With these data, the performance of the ANN is analyzed along the put/call, American/European, and moneyness dimensions, and it is examined whether adding volatility as an input to the neural network improves prediction performance. The most striking results of the study are that the ANN performs better when pricing call options than put options, and that using the volatility parameter as an input does not improve performance.
Keywords: Option Pricing, Neural Network, S&P 100 Index, American/European options
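The comparison described above can be sketched as two networks, one with only the observable inputs and one with volatility added. The prices below are a crude synthetic proxy, only to make the script runnable; they are not market data and not a Black-Scholes model.

```python
# Sketch: ANN option pricing with and without a volatility input (synthetic data).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
S, K = rng.uniform(80, 120, n), rng.uniform(80, 120, n)
r, T = rng.uniform(0.01, 0.05, n), rng.uniform(0.1, 1.0, n)
sigma = rng.uniform(0.1, 0.5, n)
price = np.maximum(S - K, 0) + sigma * S * np.sqrt(T)   # crude proxy target

for X in (np.c_[S, K, r, T], np.c_[S, K, r, T, sigma]):
    Xtr, Xte, ytr, yte = train_test_split(X, price, random_state=0)
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    print(net.fit(Xtr, ytr).score(Xte, yte))   # R^2 without / with volatility
```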
208 Flow Discharge Determination in Straight Compound Channels Using ANNs
Authors: A. Zahiri, A. A. Dehghani
Abstract:
Although many researchers have studied the flow hydraulics of compound channels, determining their flow rating curves still poses many complicated problems. Many different methods have been presented for these channels, but extending them to all types of compound channels with different geometrical and hydraulic conditions is certainly difficult. In this study, with the aid of nearly 400 laboratory and field data sets of geometry and flow rating curves from 30 different straight compound sections, and using artificial neural networks (ANNs), flow discharge in compound channels was estimated. Thirteen dimensionless input variables, including relative depth, relative roughness, relative width, aspect ratio, bed slope, main channel side slopes, flood plain side slopes and berm inclination, and one output variable (flow discharge) were used in the ANNs. Comparison of the ANN model and the traditional method (divided channel method, DCM) shows the high accuracy of the ANN model results. Sensitivity analysis showed that relative depth, with a 47.6 percent contribution, is the most effective input parameter for flow discharge prediction. Relative width and relative roughness have 19.3 and 12.2 percent importance, respectively. On the other hand, the shape parameter, main channel side slopes and flood plain side slopes, with 2.1, 3.8 and 3.8 percent contributions, have the least importance.
Keywords: ANN model, compound channels, divided channel method (DCM), flow rating curve.
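A sensitivity analysis of the kind quoted above can be approximated with permutation importance; the sketch below uses synthetic data (the 400 laboratory and field data sets are not reproduced here) and is one possible way to obtain percent contributions, not necessarily the paper's method.

```python
# Sketch: percent contribution of inputs via permutation importance (synthetic).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
X = rng.uniform(size=(400, 13))          # 13 dimensionless inputs (relative depth, ...)
y = 3 * X[:, 0] + X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=400)

net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0).fit(X, y)
imp = permutation_importance(net, X, y, n_repeats=20, random_state=0)
print(100 * imp.importances_mean / imp.importances_mean.sum())  # % contribution
```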
207 Investigating the Demand for Short-shelf Life Food Products for SME Wholesalers
Authors: Yamini Raju, Parminder S. Kang, Adam Moroz, Ross Clement, Ashley Hopwell, Alistair Duffy
Abstract:
Accurate forecasting of fresh produce demand is one of the challenges faced by Small and Medium Enterprise (SME) wholesalers. This paper is an attempt to understand the causes of the high level of variability, such as weather and holidays, in the demand of SME wholesalers; understanding the significance of currently unidentified factors may improve forecasting accuracy. This paper presents the current literature on the factors used to predict demand and the existing forecasting techniques for short-shelf-life products. It then investigates a variety of possible internal and external factors, some of which are not used by other researchers in the demand prediction process. The results presented in this paper are further analysed using a number of techniques to minimize noise in the data. For the analysis, past sales data (January 2009 to May 2014) from a UK-based SME wholesaler are used, and the results presented are limited to the product 'Milk', focused on cafés in Derby. Correlation analysis is carried out to check the dependence of the actual demand on each variability factor. PCA analysis is then performed to understand the significance of the factors identified using correlation. The PCA results suggest that cloud cover, weather summary and temperature are the most significant factors that can be used in forecasting the demand. The correlation of these three factors increases for monthly demand and becomes more stable compared to the weekly and daily demand.
Keywords: Demand Forecasting, Deteriorating Products, Food Wholesalers, Principal Component Analysis, Variability Factors.
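A minimal sketch of the correlation-then-PCA workflow described above follows; the factor columns and their values are synthetic stand-ins for the wholesaler's data.

```python
# Sketch: correlation of candidate factors with demand, then PCA (synthetic data).
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "cloud_cover": rng.uniform(0, 8, 365),
    "temperature": rng.normal(12, 6, 365),
    "is_holiday": rng.integers(0, 2, 365),
})
df["demand"] = 50 - 1.5 * df["temperature"] + rng.normal(scale=5, size=365)

print(df.corr()["demand"])                          # dependence of demand on factors
Z = StandardScaler().fit_transform(df.drop(columns="demand"))
print(PCA().fit(Z).explained_variance_ratio_)       # significance of each component
```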
206 Effect of Transverse Reinforcement on the Behavior of Tension Lap Splice in High-Strength Reinforced Concrete Beams
Authors: Ahmed H. Abdel-Kareem, Hala Abousafa, Omia S. El-Hadidi
Abstract:
The results of an experimental program conducted on seventeen simply supported concrete beams to study the effect of transverse reinforcement on the behavior of lap splices of steel reinforcement in the tension zones of high strength concrete beams are presented. The parameters included in the experimental program were the concrete compressive strength, the lap splice length, the amount of transverse reinforcement provided within the splice region, and the shape of the transverse reinforcement around the spliced bars. The experimental results showed that the displacement ductility increased and the mode of failure changed from splitting bond failure to flexural failure when the amount of transverse reinforcement in the splice region increased and the compressive strength increased up to 100 MPa. The presence of transverse reinforcement around the spliced bars had a pronounced effect on increasing the ultimate load, the ultimate deflection, and the displacement ductility. The maximum steel stresses for spliced bars predicted using the ACI 318-05 building code were compared with the experimental results. The comparison showed that the effect of transverse reinforcement around spliced bars has to be considered in the design equations for lap splice length in high strength concrete beams.
Keywords: Ductility, high strength concrete, tension lap splice, transverse reinforcement, steel stresses.
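For reference, a sketch of the ACI 318-05 tension development length expression (Eq. 12-1) that such comparisons build on is given below, written from memory and with illustrative input values; it should be checked against the code text before any real use, and it is not the paper's calculation.

```python
# Hedged sketch of ACI 318-05 Eq. (12-1), US units (in, psi). Inputs illustrative.
import math

def ld_aci318_05(db, fy, fc, psi_t=1.0, psi_e=1.0, psi_s=1.0, lam=1.0,
                 cb=1.5, Atr=0.22, fyt=60000.0, s=4.0, n=2):
    """Tension development length of a deformed bar (check against the code)."""
    Ktr = Atr * fyt / (1500.0 * s * n)        # transverse reinforcement index
    conf = min((cb + Ktr) / db, 2.5)          # confinement term, capped at 2.5
    sqrt_fc = min(math.sqrt(fc), 100.0)       # the code caps sqrt(f'c) at 100 psi
    return (3.0 / 40.0) * (fy / sqrt_fc) * (psi_t * psi_e * psi_s * lam / conf) * db

print(ld_aci318_05(db=1.0, fy=60000.0, fc=10000.0))  # ~18 in for these inputs
```

The cap on sqrt(f'c) is the reason high-strength concrete (up to 100 MPa here) motivates revisiting these equations.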
205 The Analysis of Defects Prediction in Injection Molding
Authors: Mehdi Moayyedian, Kazem Abhary, Romeo Marian
Abstract:
This paper presents an evaluation of a plastic defect in injection molding, known as the short shot defect, before it occurs in the process. The aim of this paper is to evaluate the different parameters that affect the possibility of the short shot defect. The analysis of short shot possibility is conducted via SolidWorks Plastics and the Taguchi method to determine the most significant parameters. The Finite Element Method (FEM) is employed to analyze two circular flat polypropylene plates of 1 mm thickness. Filling time, part cooling time, pressure holding time, melt temperature and gate type are chosen as the process and geometric parameters, respectively. A methodology is presented herein to predict the possibility of short-shot occurrence. The analysis determined that melt temperature is the most influential parameter affecting the possibility of the short shot defect, with a contribution of 74.25%, followed by filling time with a contribution of 22% and gate type with a contribution of 3.69%. It was also determined that the optimum levels of each parameter leading to a reduction in the possibility of short shot are gate type at level 1, filling time at level 3 and melt temperature at level 3.
Keywords: Injection molding, plastic defects, short shot, Taguchi method.
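Percent contributions like those quoted above come from a standard Taguchi ANOVA decomposition (factor sum of squares over total sum of squares); a small sketch follows, with a made-up L9 design and response values rather than the paper's data.

```python
# Sketch: Taguchi percent contribution from an orthogonal array (made-up data).
import numpy as np

def percent_contribution(levels, response):
    response = np.asarray(response, dtype=float)
    grand = response.mean()
    ss_total = ((response - grand) ** 2).sum()
    out = {}
    for name, col in levels.items():
        col = np.asarray(col)
        ss = sum(((response[col == v].mean() - grand) ** 2) * (col == v).sum()
                 for v in np.unique(col))          # between-level sum of squares
        out[name] = 100 * ss / ss_total
    return out

levels = {"melt_temp": [1, 1, 1, 2, 2, 2, 3, 3, 3],
          "fill_time": [1, 2, 3, 1, 2, 3, 1, 2, 3],
          "gate_type": [1, 2, 3, 2, 3, 1, 3, 1, 2]}
y = [0.82, 0.74, 0.70, 0.66, 0.60, 0.58, 0.45, 0.42, 0.40]  # illustrative scores
print(percent_contribution(levels, y))
```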
204 Breast Cancer Survivability Prediction via Classifier Ensemble
Authors: Mohamed Al-Badrashiny, Abdelghani Bellaachia
Abstract:
This paper presents a classifier ensemble approach for predicting the survivability of breast cancer patients using the latest database version of the Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute. The system consists of two main components: a features selection component and a classifier ensemble component. The features selection component divides the features in the SEER database into four groups and then tries to find the most important features among the four groups that maximize the weighted average F-score of a certain classification algorithm. The ensemble component uses three different classifiers, each of which models a different set of features from SEER selected through the features selection module. On top of them, another classifier is used to give the final decision based on the output decisions and confidence scores from each of the underlying classifiers. Different classification algorithms have been examined; the best setup found uses the decision tree, Bayesian network, and Naïve Bayes algorithms for the underlying classifiers and Naïve Bayes for the classifier ensemble step. The system outperforms all published systems to date when evaluated against the exact same SEER data (period 1973-2002), giving an 87.39% weighted average F-score compared to 85.82% and 81.34% for the other published systems. By increasing the data size to cover the whole database (period 1973-2014), the overall weighted average F-score jumps to 92.4% on the held-out unseen test set.
Keywords: Classifier ensemble, breast cancer survivability, data mining, SEER.
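The two-level ensemble can be sketched with scikit-learn's stacking, as below. Two caveats: scikit-learn has no Bayesian-network classifier, so a second Naïve Bayes stands in for it here, and SEER data requires a data-use agreement, so the inputs are synthetic.

```python
# Sketch: stacked ensemble with a Naïve Bayes combiner over confidence scores.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("dt", DecisionTreeClassifier(max_depth=6)),
                ("nb", GaussianNB()),
                ("nb2", GaussianNB(var_smoothing=1e-6))],  # stand-in for the BN
    final_estimator=GaussianNB(),          # Naïve Bayes combiner, as in the paper
    stack_method="predict_proba")          # the combiner sees confidence scores
print(f1_score(yte, stack.fit(Xtr, ytr).predict(Xte), average="weighted"))
```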
203 Turbo-Coded Mobile Terrestrial Communication Systems in Urban and Suburban Areas for Wireless Multimedia Applications
Authors: F. Mehran
Abstract:
With the rapid popularization of internet services, it is apparent that the next generation of terrestrial communication systems must be capable of supporting various applications like voice, video, and data. This paper presents the performance evaluation of turbo-coded mobile terrestrial communication systems, which are capable of providing high quality services for delay-sensitive (voice or video) and delay-tolerant (text transmission) multimedia applications in urban and suburban areas. Different types of multimedia information require different service qualities, which are generally expressed in terms of a maximum acceptable bit-error-rate (BER) and a maximum tolerable latency. The breakthrough discovery of turbo codes allows us to significantly reduce the probability of bit errors with feasible latency. In a turbo-coded system, a trade-off between latency and BER results from the choice of convolutional component codes, interleaver type and size, decoding algorithm, and the number of decoding iterations. This trade-off can be exploited for multimedia applications by using optimal and suboptimal combinations of performance parameters to achieve different service qualities. The results therefore suggest an adaptive framework for turbo-coded wireless multimedia communications that incorporates sets of performance parameters achieving appropriate service qualities, depending on the application's requirements.
Keywords: Mobile communications, Turbo codes, wireless multimedia communication systems.
202 Optimizing and Evaluating Performance Quality Control of the Production Process of Disposable Essentials Using a Vague Goal Programming Approach
Authors: Hadi Gholizadeh, Ali Tajdin
Abstract:
To have effective production planning, it is necessary to control the quality of processes. This paper aims at improving the performance of the disposable essentials process using statistical quality control and goal programming in a vague environment, which expresses uncertainty, since there is always measurement error in the real world. Therefore, in this study the conditions are examined in a vague, distance-based environment. The disposable essentials process at Kach Company was studied. Statistical control tools were used to characterize the existing process for four factor responses: the averages of the disposable glasses' weights, heights, crater diameters, and volumes. Goal programming was then utilized to find the combination of optimal factor settings in a vague environment, accounting for the uncertainty of the initial information when some of the model parameters are vague; a fuzzy regression model is also used to predict the responses of the four described factors. Optimization results show that the process capability index values for the disposable glasses' average weights, heights, crater diameters and volumes were improved. This increases the quality of the products and reduces waste, which lowers the cost of the finished product and ultimately brings customer satisfaction, which in turn means increased sales.
Keywords: Goal programming, quality control, vague environment, disposable glasses' optimization, fuzzy regression.
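A small sketch of the process capability index computation referenced above follows, with illustrative specification limits for one response (glass weight); the data and limits are not the company's.

```python
# Sketch: process capability index Cpk for one response (illustrative limits).
import numpy as np

def cpk(x, lsl, usl):
    """Cpk = min(USL - mu, mu - LSL) / (3 * sigma)."""
    mu, sigma = np.mean(x), np.std(x, ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sigma)

weights = np.random.default_rng(4).normal(loc=10.0, scale=0.12, size=200)  # grams
print(cpk(weights, lsl=9.5, usl=10.5))  # > 1.33 is conventionally "capable"
```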
201 Stock Price Forecast by Using Neuro-Fuzzy Inference System
Authors: Ebrahim Abbasi, Amir Abouec
Abstract:
In this research, a model was designed to investigate the trend of the stock price of the IRAN KHODRO Corporation on the Tehran Stock Exchange by utilizing an Adaptive Neuro-Fuzzy Inference System. For the long-term period, a neuro-fuzzy model with two triangular membership functions and four independent variables, including trade volume, Dividend Per Share (DPS), Price to Earnings Ratio (P/E) and closing price, with stock price fluctuation as the dependent variable, was selected as the optimal model. For the short-term period, a neuro-fuzzy model with two triangular membership functions for the first quarter of a year, two trapezoidal membership functions for the second quarter, two Gaussian combination membership functions for the third quarter and two trapezoidal membership functions for the fourth quarter was selected as the optimal model for stock price forecasting. In this case, three independent variables, including trade volume, price to earnings ratio and closing stock price, with stock price fluctuation as the dependent variable, were selected as the optimal model. The findings of the research demonstrate that the trend of the stock price can be forecasted with a low level of error.
Keywords: Stock price forecast, membership functions, Adaptive Neuro-Fuzzy Inference System, trade volume, P/E, DPS.
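For concreteness, minimal NumPy definitions of the triangular and trapezoidal membership functions mentioned above are sketched here; the breakpoints are illustrative, not the fitted parameters of the paper's models.

```python
# Sketch: triangular and trapezoidal membership functions (illustrative breakpoints).
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0, 1)

def trapmf(x, a, b, c, d):
    """Trapezoidal membership function with shoulders between b and c."""
    return np.clip(np.minimum.reduce([(x - a) / (b - a),
                                      np.ones_like(x, dtype=float),
                                      (d - x) / (d - c)]), 0, 1)

x = np.linspace(0, 1, 5)
print(trimf(x, 0.0, 0.5, 1.0))
print(trapmf(x, 0.0, 0.2, 0.8, 1.0))
```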
200 Improving the Shunt Active Power Filter Performance Using Synchronous Reference Frame PI Based Controller with Anti-Windup Scheme
Authors: Consalva J. Msigwa, Beda J. Kundy, Bakari M. M. Mwinyiwiwa
Abstract:
In this paper the reference current for the Voltage Source Converter (VSC) of the Shunt Active Power Filter (SAPF) is generated using the synchronous reference frame method, incorporating a PI controller with an anti-windup scheme. The proposed method improves the harmonic filtering by compensating for the windup phenomenon caused by the integral term of the PI controller. Using the reference frame transformation, the current is transformed from the a-b-c stationary frame to the rotating 0-d-q frame, and the PI controller then regulates the current in the 0-d-q frame to obtain the desired reference signal. A controller with integral action combined with an actuator that becomes saturated can produce undesirable effects: if the control error is so large that the integrator saturates the actuator, the feedback path becomes ineffective, because the actuator will remain saturated even if the process output changes. The integrator, being an unstable system, may then integrate up to a very large value, a phenomenon known as integrator windup. Implementing the integrator anti-windup circuit turns off the integrator action when the actuator saturates, hence improving the performance of the SAPF and dynamically compensating harmonics in the power network. In this paper the system performance is examined with a Shunt Active Power Filter simulation model.
Keywords: Phase Locked Loop (PLL), Voltage Source Converter (VSC), Shunt Active Power Filter (SAPF), PI, Pulse Width Modulation (PWM).
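A minimal discrete-time sketch of the conditional-integration (anti-windup) idea described above follows; the gains, limits and class name are illustrative, and the paper's implementation in the 0-d-q frame is not reproduced here.

```python
# Sketch: discrete PI controller with conditional-integration anti-windup.
class PIAntiWindup:
    def __init__(self, kp, ki, dt, u_min, u_max):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0

    def step(self, error):
        u_unsat = self.kp * error + self.ki * self.integral
        u = min(max(u_unsat, self.u_min), self.u_max)     # actuator limits
        if u == u_unsat:                                  # integrate only while
            self.integral += error * self.dt              # unsaturated: no windup
        return u

pi = PIAntiWindup(kp=0.5, ki=20.0, dt=1e-4, u_min=-1.0, u_max=1.0)
print(pi.step(0.3))
```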
199 Computing Entropy for Ortholog Detection
Authors: Hsing-Kuo Pao, John Case
Abstract:
Biological sequences from different species are called orthologs if they evolved from a sequence of a common ancestor species and they have the same biological function. Approximations of the Kolmogorov complexity or entropy of biological sequences are already well known to be useful in extracting similarity information between such sequences, in the interest, for example, of ortholog detection. As is well known, the exact Kolmogorov complexity is not algorithmically computable. In practice one can approximate it by computable compression methods. However, such compression methods do not provide a good approximation to Kolmogorov complexity for short sequences. Herein is suggested a new approach to overcome the problem that compression approximations may not work well on short sequences. This approach is inspired by new, conditional computations of Kolmogorov entropy. A main contribution of the empirical work described shows that the new set of entropy-based machine learning attributes provides good separation between positive (ortholog) and negative (non-ortholog) data, better than with good, previously known alternatives (which do not employ some means to handle short sequences well). Also empirically compared are the new entropy-based attribute set and a number of other, more standard similarity attribute sets commonly used in genomic analysis. The various similarity attributes are evaluated by cross validation, through boosted decision tree induction with C5.0, and by Receiver Operating Characteristic (ROC) analysis. The results point to the conclusion that the new, entropy-based attribute set by itself is not the one giving the best prediction; however, it is the best attribute set for improving the other, standard attribute sets when conjoined with them.
Keywords: compression, decision tree, entropy, ortholog, ROC.
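One common compression-based similarity of the kind discussed above is the normalized compression distance (NCD), sketched below with zlib. As the abstract notes, such off-the-shelf compressors behave poorly on very short sequences, which is exactly what motivates the paper's conditional-entropy attributes; this sketch is the baseline idea, not the paper's method.

```python
# Sketch: normalized compression distance as a computable entropy approximation.
import zlib

def ncd(x: bytes, y: bytes) -> float:
    cx, cy = len(zlib.compress(x, 9)), len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"ATGGCGTACGTTAGC" * 40
b = b"ATGGCGTACGTTAGC" * 39 + b"ATGGCGTACGATAGC"   # near-copy with one mutation
print(ncd(a, b), ncd(a, b"GCCGTTAA" * 75))          # related pair scores lower
```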
198 Rapid Finite-Element Based Airport Pavement Moduli Solutions using Neural Networks
Authors: Kasthurirangan Gopalakrishnan, Marshall R. Thompson, Anshu Manik
Abstract:
This paper describes the use of artificial neural networks (ANN) for predicting the non-linear layer moduli of flexible airfield pavements subjected to new generation aircraft (NGA) loading, based on the deflection profiles obtained from Heavy Weight Deflectometer (HWD) test data. The HWD test is one of the most widely used tests for routinely assessing the structural integrity of airport pavements in a non-destructive manner. The elastic moduli of the individual pavement layers backcalculated from the HWD deflection profiles are effective indicators of layer condition and are used for estimating the pavement's remaining life. HWD tests were periodically conducted at the Federal Aviation Administration's (FAA's) National Airport Pavement Test Facility (NAPTF) to monitor the effect of Boeing 777 (B777) and Boeing 747 (B747) test gear trafficking on the structural condition of flexible pavement sections. In this study, a multi-layer, feed-forward network using an error-backpropagation algorithm was trained to approximate the HWD backcalculation function. A synthetic database generated using an advanced non-linear pavement finite-element program was used to train the ANN, to overcome the limitations associated with conventional pavement moduli backcalculation. The changes in ANN-based backcalculated pavement moduli with trafficking were used to compare the relative severity effects of the aircraft landing gears on the NAPTF test pavements.
Keywords: Airfield pavements, ANN, backcalculation, new generation aircraft.
197 Estimation of Attenuation and Phase Delay in Driving Voltage Waveform of a Digital-Noiseless, Ultra-High-Speed Image Sensor
Authors: V. T. S. Dao, T. G. Etoh, C. Vo Le, H. D. Nguyen, K. Takehara, T. Akino, K. Nishi
Abstract:
Since 2004, we have been developing an in-situ storage image sensor (ISIS) that captures more than 100 consecutive images at a frame rate of 10 Mfps with ultra-high sensitivity, as well as the video camera for use with this ISIS. Currently, basic research is continuing in an attempt to increase the frame rate up to 100 Mfps and above. In order to suppress electro-magnetic noise at such high frequencies, a digital-noiseless image transfer scheme has been developed utilizing solely sinusoidal driving voltages. This paper presents highly efficient yet accurate expressions to estimate the attenuation as well as the phase delay of driving voltages through the RC networks of an ultra-high-speed image sensor. The Elmore metric for a fundamental RC chain is employed as the first-order approximation. By applying dimensional analysis to SPICE data, we found a simple expression that significantly improves the accuracy of the approximation. Similarly, another simple closed-form model to estimate phase delay through fundamental RC networks is also obtained. The estimation error of both expressions is much less than in previous works, below 2% for most cases. The framework of this analysis can be extended to address similar issues in other VLSI structures.
Keywords: Dimensional Analysis, ISIS, Digital-noiseless, RC network, Attenuation, Phase Delay, Elmore model
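A small sketch of the first-order Elmore estimate used as the starting point above: the delay at node i of an RC chain sums each capacitance weighted by the resistance shared between the source-to-i and source-to-k paths. The segment values below are illustrative, not the sensor's.

```python
# Sketch: Elmore delay of a uniform RC ladder (illustrative segment values).
def elmore_delay(R, C, i):
    """Elmore delay at node i; R[k] feeds node k, C[k] loads it."""
    return sum(C[k] * sum(R[:min(i, k) + 1]) for k in range(len(C)))

R = [120.0] * 10    # ohms per segment
C = [15e-15] * 10   # farads per segment
print(elmore_delay(R, C, 9))   # delay at the far end of the chain, in seconds
```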
196 Finite Element Prediction of Multi-Size Particulate Flow through Two-Dimensional Pump Casing
Authors: K. V. Pagalthivarthi, R. J. Visintainer
Abstract:
Two-dimensional Eulerian (volume-averaged) continuity and momentum equations governing multi-size slurry flow through pump casings are solved by applying a penalty finite element formulation. The computational strategy validated for multi-phase flow through rectangular channels is adapted to the present study. The flow fields of the carrier, mixture and each solids species, and the concentration field of each species are determined sequentially in an iterative manner. The eddy viscosity field computed using Spalart-Allmaras model for the pure carrier phase is modified for the presence of particles. Streamline upwind Petrov-Galerkin formulation is used for all the momentum equations for the carrier, mixture and each solids species and the concentration field for each species. After ensuring mesh-independence of solutions, results of multi-size particulate flow simulation are presented to bring out the effect of bulk flow rate, average inlet concentration, and inlet particle size distribution. Mono-size computations using (1) the concentration-weighted mean diameter of the slurry and (2) the D50 size of the slurry are also presented for comparison with multi-size results.
Keywords: Eulerian-Eulerian model, Multi-size particulate flow, Penalty finite elements, Pump casing, Spalart-Allmaras.
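The two mono-size reference diameters mentioned above can be computed directly from a size distribution; a minimal sketch with illustrative size classes follows.

```python
# Sketch: concentration-weighted mean diameter and D50 of a slurry (illustrative).
import numpy as np

d = np.array([50.0, 150.0, 300.0, 600.0])   # particle sizes, microns
c = np.array([0.10, 0.05, 0.03, 0.02])      # volume concentration per size class

d_mean = (c * d).sum() / c.sum()            # concentration-weighted mean diameter
cum = np.cumsum(c) / c.sum()                # cumulative fraction, coarse classes last
d50 = np.interp(0.5, cum, d)                # size below which half the solids lie
print(d_mean, d50)
```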
195 Utilizing Computational Fluid Dynamics in the Analysis of Natural Ventilation in Buildings
Authors: A. W. J. Wong, I. H. Ibrahim
Abstract:
Increasing urbanisation has driven building designers to incorporate natural ventilation in the design of sustainable buildings. This project utilises Computational Fluid Dynamics (CFD) to investigate the natural ventilation of an academic building, SIT@SP, using an assessment criterion based on daily mean temperature and mean velocity. The areas of interest are the pedestrian levels of the first and fourth levels of the building. A reference case recommended by the Architectural Institute of Japan was used to validate the simulation model. The validated simulation model was then used for coupled simulations of SIT@SP and neighbouring geometries, under two wind speeds. Both steady and transient simulations were used to identify differences in results. Steady and transient results agree, with the transient simulation identifying peak velocities during flow development. Under the lower wind speed, the first level was sufficiently ventilated while the fourth level was not; under the higher wind speed, the first level experienced excessive wind velocities while the fourth level was adequately ventilated. The fourth-level flow velocity was consistently lower than that of the first level, which is attributed to either simulation model error or poor building design. SIT@SP is concluded to have a sufficiently ventilated first level and an insufficiently ventilated fourth level. Future work for this project extends to modifying the urban geometry, improving the simulation model, evaluating other assessment metrics and extending the area of interest to the entire building.
Keywords: Buildings, CFD simulation, natural ventilation, urban airflow.
194 Interaction Effect of Feed Rate and Cutting Speed in CNC Turning on Chip Micro-Hardness of 304 Austenitic Stainless Steel
Authors: G. H. Senussi
Abstract:
The present work is concerned with the effect of the turning process parameters (cutting speed, feed rate, and depth of cut) and the distance from the center of the work piece, as input variables, on the chip micro-hardness as the response or output. Three experiments were conducted to investigate the chip micro-hardness behavior at work piece diameters of 30 mm, 40 mm, and 50 mm. Response surface methodology (RSM) is used to determine and present the cause-and-effect relationship between the true mean response and the input control variables as a two- or three-dimensional hyper-surface. RSM has been used for designing a three-factor, five-level central composite rotatable design in order to construct statistical models capable of accurate prediction of responses. The results obtained showed that the application of RSM can predict the effect of machining parameters on chip micro-hardness. The five-level factorial design can be employed easily for developing statistical models that predict chip micro-hardness from the controllable machining parameters. The results also showed that the combined effect of cutting speed at its lower level, feed rate and depth of cut at their higher values, and a larger work piece diameter can result in increased chip micro-hardness.
Keywords: Machining Parameters, Chip Micro-Hardness, CNC Machining, 304 Austenitic Stainless Steel.
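The RSM step amounts to fitting a full quadratic response surface to the designed experiments; a minimal sketch with synthetic coded factors (not the study's measurements) follows.

```python
# Sketch: second-order response surface fit for three coded factors (synthetic).
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, size=(20, 3))        # coded speed, feed, depth of cut
y = (200 - 15 * X[:, 0] + 10 * X[:, 1] + 8 * X[:, 2]
     + 5 * X[:, 1] * X[:, 2] + rng.normal(scale=2, size=20))  # synthetic hardness

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)
print(model.predict([[-1.0, 1.0, 1.0]]))    # low speed, high feed and depth of cut
```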
193 EZW Coding System with Artificial Neural Networks
Authors: Saudagar Abdul Khader Jilani, Syed Abdul Sattar
Abstract:
Image compression plays a vital role in today's communication. The limitation in allocated bandwidth leads to slower communication; to improve the rate of transmission within the limited bandwidth, image data must be compressed before transmission. Basically there are two types of compression: 1) lossy compression and 2) lossless compression. Though lossy compression gives more compression than lossless compression, the retrieval accuracy is lower for lossy compression. The JPEG and JPEG2000 image compression systems use Huffman coding for image compression. The JPEG2000 coding system uses the wavelet transform, which decomposes the image into different levels, where the coefficients in each sub-band are uncorrelated with the coefficients of other sub-bands. Embedded Zerotree Wavelet (EZW) coding exploits the multi-resolution properties of the wavelet transform to give a computationally simple algorithm with better performance compared to existing wavelet transforms. For further improvement of compression applications, other coding methods have recently been suggested; an ANN-based approach is one such method. Artificial neural networks have been applied to many problems in image processing and have demonstrated their superiority over classical methods when dealing with noisy or incomplete data in image compression applications. A performance analysis of different images is proposed, with an analysis of the EZW coding system with the error backpropagation algorithm. The implementation and analysis show approximately 30% more accuracy in the retrieved image compared to the existing EZW coding system.
Keywords: Accuracy, Compression, EZW, JPEG2000, Performance.
192 Concrete Mix Design Using Neural Network
Authors: Rama Shanker, Anil Kumar Sachan
Abstract:
The basic ingredients of concrete are cement, fine aggregate, coarse aggregate and water. To produce a concrete of certain specified properties, optimum proportions of these ingredients are mixed. The important factors which govern the mix design are the grade of concrete, the type of cement, and the size, shape and grading of the aggregates. The concrete mix design method is based on experimentally evolved empirical relationships between the factors in the choice of mix design. The basic drawbacks of this method are that it does not always produce the desired strength, the calculations are cumbersome, and a number of tables have to be consulted to arrive at the trial mix proportions; moreover, the attainment of the desired strength is uncertain, and the mix may even fall below the target strength. To solve this problem, many cubes of standard grades were prepared and their 28-day strengths determined for different combinations of cement, fine aggregate, coarse aggregate and water. An artificial neural network (ANN) was built using these data. The inputs of the ANN were the grade of concrete, the type of cement, and the size, shape and grading of the aggregates, and the outputs were the proportions of the various ingredients. With these inputs and outputs, the ANN was trained using a feed-forward backpropagation model. The trained ANN was then validated, and it was seen that it gave results with a maximum error of 4 to 5%. Hence, a specific type of concrete can be designed from given material properties, and the proportions of these materials can be quickly evaluated using the proposed ANN.
Keywords: Aggregate Proportions, Artificial Neural Network, Concrete Grade, Concrete Mix Design.
191 Financing Decision and Productivity Growth for the Venture Capital Industry Using High-Order Fuzzy Time Series
Authors: Shang-En Yu
Abstract:
Human society faces many uncertainties, such as forecasting the economic growth rate during a financial crisis. Since Song and Chissom introduced the concept of the fuzzy time series in 1993, many scholars have proposed different models to deal with such problems. Previous studies, however, usually neither consider the selection of the relevant variables nor base the fuzzification on anything more than subjective opinion when discretising the fuzzy semantics, and so cannot objectively reflect the characteristics of the data set; in addition, forecasts often treat all fuzzy rules as equally important, failing to consider the importance of each rule. For these reasons, this study performs variable selection (factor selection) through a self-organizing map (SOM) and proposes a high-order weighted multivariate fuzzy time series model based on a fuzzy neural network (Fuzzy-BPN), with the forecasts weighted by the ordered weighted averaging (OWA) operator. To verify the proposed method, the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) of the Taiwan Stock Exchange Corporation is used as the forecasting target, and the appropriate variables are filtered in the experiment. Finally, in comparison with models from other recent studies, the results show that the proposed approach further improves predictive ability.
Keywords: Heterogeneity, residential mortgage loans, foreclosure.
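A minimal sketch of the ordered weighted averaging (OWA) operator used for the weighted forecasts above follows; the rule outputs and weights are illustrative.

```python
# Sketch: OWA aggregation, where weights apply to values sorted in descending order.
import numpy as np

def owa(values, weights):
    values = np.sort(np.asarray(values, dtype=float))[::-1]
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0)
    return float(values @ weights)

# Aggregate three rule outputs, emphasising the strongest ones.
print(owa([7200.0, 7350.0, 7100.0], [0.5, 0.3, 0.2]))
```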
190 Design and Performance Analysis of One Dimensional Zero Cross-Correlation Coding Technique for a Fixed Wavelength Hopping SAC-OCDMA
Authors: Satyasen Panda, Urmila Bhanja
Abstract:
This paper presents a SAC-OCDMA code with the zero cross-correlation property, named the New Zero Cross Correlation code (NZCC), to minimize Multiple Access Interference (MAI); it is found to be more scalable than the other existing SAC-OCDMA codes. The NZCC code is constructed using an address segment and a data segment. In this work, the proposed NZCC code is implemented in an optical system using the OptiSystem software for the spectral amplitude coded optical code-division multiple-access (SAC-OCDMA) scheme. The main contribution of the proposed NZCC code is its zero cross-correlation, which reduces both the MAI and PIIN noises. The proposed NZCC code offers minimum cross-correlation, flexibility in selecting the code parameters and support for a large number of users, combined with a high data rate and longer fiber length. Simulation results reveal that the optical code division multiple access system based on the proposed NZCC code accommodates the maximum number of simultaneous users with higher data rate transmission, lower Bit Error Rates (BER) and longer travelling distance without any signal quality degradation, as compared to the existing SAC-OCDMA codes.
Keywords: Cross Correlation, Optical Code Division Multiple Access, Spectral Amplitude Coding Optical Code Division Multiple Access, Multiple Access Interference, Phase Induced Intensity Noise, New Zero Cross Correlation code.
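A toy illustration of the zero cross-correlation property itself follows (not the paper's address/data-segment construction): in a ZCC code no two users share a spectral chip, so every off-diagonal inner product between code rows vanishes.

```python
# Sketch: verifying the zero cross-correlation property of a toy code matrix.
import numpy as np

codes = np.array([[1, 1, 0, 0, 0, 0],     # one row per user, one column per chip
                  [0, 0, 1, 1, 0, 0],
                  [0, 0, 0, 0, 1, 1]])
corr = codes @ codes.T
print(corr)                                # diagonal entries equal the code weight
assert np.all(corr[~np.eye(len(codes), dtype=bool)] == 0)  # zero cross-correlation
```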
189 Modeling of Material Removal on Machining of Ti-6Al-4V through EDM using Copper Tungsten Electrode and Positive Polarity
Authors: M. M. Rahman, Md. Ashikur Rahman Khan, K. Kadirgama, M. M. Noor, Rosli A. Bakar
Abstract:
This paper presents an optimized model to investigate the effects of peak current, pulse-on time and pulse-off time on EDM performance, in terms of the material removal rate of a titanium alloy, utilizing copper tungsten as the electrode with positive electrode polarity. The experiments are carried out on Ti-6Al-4V by varying the peak current, pulse-on time and pulse-off time. A mathematical model is developed to correlate the influences of these variables with the material removal rate of the workpiece. The design of experiments (DOE) method and response surface methodology (RSM) techniques are implemented. The validity of the fit and the adequacy of the proposed models have been tested through analysis of variance (ANOVA). The obtained results show that the material removal rate increases as the peak current and pulse-on time increase, while the effect of pulse-off time on MRR changes with the peak amperage. The optimum machining conditions in favor of the material removal rate are estimated and verified against the proposed optimized results. It is observed that the developed model is within the limits of acceptable error (about 4%) when compared to the experimental results. This leads to a desirable material removal rate and economical industrial machining with optimized input parameters.
Keywords: Ti-6Al-4V, material removal rate, copper tungsten, positive polarity, RSM.
188 A Robust Reception of IEEE 802.15.4a IR-TH UWB in Dense Multipath and Gaussian Noise
Authors: Farah Haroon, Haroon Rasheed, Kazi M Ahmed
Abstract:
The IEEE 802.15.4a impulse radio-time hopping ultra wide band (IR-TH UWB) physical layer, due to its small duty cycle and very short pulse widths, is robust against multipath propagation. However, scattering and reflections from the large number of obstacles in indoor channel environments give rise to dense multipath fading. This poses a serious problem for optimum Rake receiver architectures, for which a very large number of fingers is needed. The presence of strong noise also affects the reception of fine pulses having extremely low power spectral density. A robust SRake receiver for IEEE 802.15.4a IR-TH UWB in dense multipath and additive white Gaussian noise (AWGN) is proposed to efficiently recover the weak signals with much reduced complexity. It adaptively increases the signal-to-noise ratio (SNR) by suppressing noise through a recursive least squares (RLS) algorithm. For simulation, the dense multipath environment of the IEEE 802.15.4a industrial non-line-of-sight (NLOS) channel is employed, and the power delay profile (PDP) and the cumulative distribution function (CDF) for the respective channel environment are found. Moreover, the error performance of the proposed architecture is evaluated in comparison with conventional SRake and AWGN correlation receivers. The simulation results indicate a substantial performance improvement with a much smaller number of Rake fingers.
Keywords: Adaptive noise cancellation, dense multipath propagation, IEEE 802.15.4a, IR-TH UWB, industrial NLOS environment, SRake receiver.
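A compact NumPy sketch of the recursive least squares (RLS) adaptation used for noise suppression above follows, in a generic noise-cancellation setting; the filter order, forgetting factor and signals are illustrative, not the paper's receiver.

```python
# Sketch: RLS adaptive filter in a noise-cancellation configuration (illustrative).
import numpy as np

def rls_filter(x, d, order=4, lam=0.99, delta=100.0):
    """Adapt w so that w . x_n tracks the desired signal d_n."""
    w = np.zeros(order)
    P = delta * np.eye(order)               # inverse correlation estimate
    y = np.zeros(len(d))
    for n in range(order, len(d)):
        u = x[n - order:n][::-1]            # most recent reference samples first
        k = P @ u / (lam + u @ P @ u)       # gain vector
        y[n] = w @ u
        w = w + k * (d[n] - y[n])           # update on the a-priori error
        P = (P - np.outer(k, u @ P)) / lam
    return y

rng = np.random.default_rng(6)
noise = rng.normal(size=5000)                        # noise reference input
d = np.sin(0.05 * np.arange(5000)) + 0.8 * noise     # weak signal buried in noise
clean = d - rls_filter(noise, d)                     # noise-cancelled estimate
```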
187 Analytical Prediction of Seismic Response of Steel Frames with Superelastic Shape Memory Alloy
Authors: Mohamed Omar
Abstract:
Superelastic Shape Memory Alloy (SMA) is accepted for use as connections in steel structures. The seismic behaviour of steel frames with SMA is assessed in this study. Three eight-storey steel frames with different SMA systems are considered: the first is braced with a diagonal bracing system, the second is braced with a knee bracing system, and in the third the SMA is used as connections at the plastic hinge regions of the beams. Nonlinear time history analyses of the steel frames with SMA subjected to two different ground motion records have been performed using the Seismostruct software. To evaluate the efficiency of the suggested systems, the dynamic responses of the frames were compared. From the comparison results, it can be concluded that using SMA elements is an effective way to improve the dynamic response of structures subjected to earthquake excitations. Implementing the SMA braces can lead to a reduction in residual roof displacement. The shape memory alloy is effective in reducing the maximum displacement at the frame top and provides a large elastic deformation range. SMA connections are very effective in dissipating energy and reducing the total input energy of the whole frame under severe seismic ground motion. The SMA connection system is more effective in controlling the reaction forces at the frame base than the other bracing systems, while using SMA as bracing is more effective in reducing the displacements. The efficiency of the SMA depends on the input wave motions as well as the construction system.
Keywords: Finite element analysis, seismic response, shape memory alloy, steel frame, superelasticity.
186 A Development of the Multiple Intelligences Measurement of Elementary Students
Authors: Chaiwat Waree
Abstract:
This research aims at the development of a Multiple Intelligences Measurement for elementary students. The structural accuracy test and norm establishment are based on the Multiple Intelligences Theory of Gardner, which consists of eight aspects, namely linguistics, logic and mathematics, visual-spatial relations, body and movement, music, human relations, self-realization/self-understanding and nature. The sample used in this research consists of elementary school students aged between 5 and 11 years; the sample size, determined by the Yamane table, is 2,504 students, drawn by multistage sampling. Basic statistical analysis and construct validity testing were done using confirmatory factor analysis. The research can be summarized as follows. (1) The Multiple Intelligences Measurement, consisting of 120 items, is content-accurate. The internal consistency reliability, according to the Kuder-Richardson method, of the whole Multiple Intelligences Measurement equals .91. The difficulty of the test items is between .39 and .83, and the discrimination is between .21 and .85. (2) The Multiple Intelligences Measurement has construct validity in a good range: all 8 components and all 120 test items are statistically significant at the .01 level. The chi-square value equals 4357.7 (p = .00) at 244 degrees of freedom, the Goodness of Fit Index equals 1.00, the Adjusted Goodness of Fit Index equals .92, the Comparative Fit Index (CFI) equals .68, the Root Mean Squared Residual (RMR) equals 0.064 and the Root Mean Square Error of Approximation equals 0.82. (3) The norms of the Multiple Intelligences Measurement are categorized into 3 levels: those with high intelligence are those above the 78th percentile, those with moderate/medium intelligence are those between the 24th and 77.9th percentiles, and those with low intelligence are those from the 23.9th percentile downwards.
Keywords: Multiple Intelligences, Measurement, Elementary Students.
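The internal-consistency figure quoted above is a Kuder-Richardson (KR-20) coefficient; a minimal sketch of that computation on synthetic 0/1 item scores follows (the study's item responses are not public).

```python
# Sketch: KR-20 reliability for dichotomous item scores (synthetic data).
import numpy as np

def kr20(scores):
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                      # number of items
    p = scores.mean(axis=0)                  # proportion correct per item
    var_total = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / var_total)

rng = np.random.default_rng(7)
ability = rng.normal(size=300)
items = (ability[:, None] + rng.normal(size=(300, 120)) > 0).astype(int)
print(kr20(items))   # the paper reports .91 for its 120-item measurement
```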
185 Flow Characteristics around Rectangular Obstacles with the Varying Direction of Obstacles
Authors: Hee-Chang Lim
Abstract:
The study aims to understand the surface pressure distribution around bodies, such as the suction pressure at the leading edge on the top and side faces, when the aspect ratio of the bodies and the wind direction are changed. We carried out wind tunnel measurements and numerical simulations around a series of rectangular bodies (40d×80w×80h, 80d×80w×80h, 160d×80w×80h, 80d×40w×80h and 80d×160w×80h, in mm) placed in a deep turbulent boundary layer. Based on a modern numerical platform, the Navier-Stokes equations have been solved with both the typical two-equation k-ε model and the DES (Detached Eddy Simulation) turbulence model, and both are compared with the measurement data. Regarding the turbulence models, the DES model makes a better prediction than the k-ε model, especially when calculating the separated turbulent flow around a bluff body with sharp-edged corners. In order to observe the effect of wind direction on the pressure variation around the cube (e.g., 80d×80w×80h mm), it is rotated to 0º, 10º, 20º, 30º, and 45º, which represent the salient wind directions in the tunnel. The results show that the surface pressure variation is highly dependent upon the approaching wind direction, especially on the top and side faces of the cube. In addition, the transverse width has a substantial effect on the variation of the surface pressure around the bodies, while the longitudinal length has little or no influence.
Keywords: Rectangular bodies, wind direction, aspect ratio, surface pressure distribution, wind-tunnel measurement, k-ε model, DES model, CFD.
184 Multi-Temporal Mapping of Built-up Areas Using Daytime and Nighttime Satellite Images Based on Google Earth Engine Platform
Authors: S. Hutasavi, D. Chen
Abstract:
The built-up area is a significant proxy for measuring regional economic growth and reflects the Gross Provincial Product (GPP). However, an up-to-date and reliable database of built-up areas is not always available, especially in developing countries. Cloud-based geospatial analysis platforms such as Google Earth Engine (GEE) provide the accessibility and computational power for those countries to generate built-up data. Therefore, this study aims to extract the built-up areas in the Eastern Economic Corridor (EEC), Thailand using daytime and nighttime satellite imagery based on GEE facilities. Normalized indices were generated from the Landsat 8 surface reflectance dataset, including the Normalized Difference Built-up Index (NDBI), the Built-up Index (BUI), and the Modified Built-up Index (MBUI), and applied to identify built-up areas in the EEC. The results show that MBUI performs better than BUI and NDBI, with the highest accuracy of 0.85 and a Kappa of 0.82. Moreover, the overall classification accuracy improved from 79% to 90%, and the error in the total built-up area decreased from 29% to 0.7%, after adding night-time light data from the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB). The results suggest that MBUI with night-time light imagery is appropriate for built-up area extraction and can be utilized for further study of the socioeconomic impacts of regional development policy over the EEC region.
Keywords: Built-up area extraction, Google Earth Engine, adaptive thresholding method, rapid mapping.
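A hedged sketch of the index computation with the Earth Engine Python API follows. The band names assume the Landsat 8 Collection 2 Level-2 product, the coordinates and threshold are illustrative, and the BUI composite shown (NDBI minus NDVI) is one common formulation that may differ from the paper's exact MBUI.

```python
# Sketch: built-up indices on Google Earth Engine (band names and point assumed).
import ee

ee.Initialize()   # may require ee.Authenticate() on first use

image = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
         .filterDate("2020-01-01", "2020-12-31")
         .filterBounds(ee.Geometry.Point([101.0, 13.0]))   # a point in the EEC
         .median())

ndbi = image.normalizedDifference(["SR_B6", "SR_B5"]).rename("NDBI")  # (SWIR-NIR)/(SWIR+NIR)
ndvi = image.normalizedDifference(["SR_B5", "SR_B4"]).rename("NDVI")
bui = ndbi.subtract(ndvi).rename("BUI")   # one common built-up composite
built_up = bui.gt(0)                      # illustrative threshold mask
```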
183 A Neurofuzzy Learning and its Application to Control System
Authors: Seema Chopra, R. Mitra, Vijay Kumar
Abstract:
A neurofuzzy approach for a given set of input-output training data is proposed in two phases. First, the data set is partitioned automatically into a set of clusters, and a fuzzy if-then rule is extracted from each cluster to form a fuzzy rule base. Second, a fuzzy neural network is constructed accordingly and its parameters are tuned to increase the precision of the fuzzy rule base. This network is able to learn and optimize the rule base of a Sugeno-like fuzzy inference system using a hybrid learning algorithm that combines gradient descent and the least mean squares algorithm. The proposed neurofuzzy system has the advantage of determining the number of rules automatically and of reducing the number of rules; it decreases computational time, learns faster and consumes less memory. The authors also investigate how neurofuzzy techniques can be applied in the area of control theory, to design a fuzzy controller for linear and nonlinear dynamic system modelling from a set of input/output data. Simulation analysis is carried out on a wide range of processes, to identify nonlinear components online in a control system, and on a benchmark problem involving the prediction of a chaotic time series. Furthermore, well-known examples of linear and nonlinear systems are also simulated under the Matlab/Simulink environment. The above combination is also illustrated in modeling the relationship between automobile trips and demographic factors.
Keywords: Fuzzy control, neuro-fuzzy techniques, fuzzy subtractive clustering, extraction of rules, and optimization of membership functions.
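The keywords mention fuzzy subtractive clustering for the first, automatic-partitioning phase; a compact sketch of Chiu's subtractive clustering is given below, under the assumption that this is the specific method meant, and with illustrative radii and data.

```python
# Sketch: Chiu's subtractive clustering, used to seed fuzzy rules (illustrative).
import numpy as np

def subtractive_clustering(X, ra=0.5, rb=0.75, n_clusters=3):
    alpha, beta = 4 / ra**2, 4 / rb**2
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    P = np.exp(-alpha * d2).sum(1)                        # initial potentials
    centres = []
    for _ in range(n_clusters):
        i = int(P.argmax())                               # highest remaining potential
        centres.append(X[i])
        P = P - P[i] * np.exp(-beta * d2[:, i])           # suppress nearby potential
    return np.array(centres)

rng = np.random.default_rng(8)
X = np.vstack([rng.normal(m, 0.1, size=(50, 2)) for m in (0.0, 0.5, 1.0)])
print(subtractive_clustering(X))   # one rule centre per recovered cluster
```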