Search results for: prediction method.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8798

5888 Effect of Hartmann Number on Free Convective Flow in a Square Cavity with Different Positions of Heated Square Block

Authors: Abdul Halim Bhuiyan, M. A. Alim, Md. Nasir Uddin

Abstract:

This paper is concerned with the effect of the Hartmann number on free convective flow in a square cavity with different positions of a heated square block. A two-dimensional physical and mathematical model has been developed; the mathematical model comprises the system of governing mass, momentum and energy equations, which are solved by the finite element method. The calculations have been carried out for a Prandtl number Pr = 0.71, a Rayleigh number Ra = 1000 and different values of the Hartmann number. The results are illustrated with streamlines, isotherms, velocity and temperature fields, as well as the local Nusselt number.
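For reference, the three dimensionless groups quoted above have the following standard definitions (a textbook reminder; the symbols B0, σ, μ, etc. are the conventional ones and are not taken from this abstract):

```latex
\mathrm{Pr} = \frac{\nu}{\alpha}, \qquad
\mathrm{Ra} = \frac{g\,\beta\,\Delta T\,L^{3}}{\nu\,\alpha}, \qquad
\mathrm{Ha} = B_{0}\,L\,\sqrt{\frac{\sigma}{\mu}}
```

where ν is the kinematic viscosity, α the thermal diffusivity, g the gravitational acceleration, β the thermal expansion coefficient, ΔT the imposed temperature difference, L the cavity length, B0 the applied magnetic field strength, σ the electrical conductivity and μ the dynamic viscosity.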

Keywords: Finite element method, free convection, Hartmann number, square cavity.

5887 A Model for Estimation of Efforts in Development of Software Systems

Authors: Parvinder S. Sandhu, Manisha Prashar, Pourush Bassi, Atul Bisht

Abstract:

Software effort estimation is the process of predicting the most realistic amount of effort required to develop or maintain software based on incomplete, uncertain and/or noisy input. Effort estimates may be used as input to project plans, iteration plans, and budgets. Various models, such as the Halstead, Walston-Felix, Bailey-Basili, Doty and GA-based models, have already been used to estimate software effort for projects. In this study, statistical models, a Fuzzy-GA hybrid and a Neuro-Fuzzy (NF) inference system are applied to estimate the software effort for projects. The performance of the developed models was tested on NASA software project datasets, and the results were compared with the Halstead, Walston-Felix, Bailey-Basili, Doty and Genetic Algorithm based models reported in the literature. The results show that the NF model achieves the lowest MMRE and RMSE values, outperforming the Fuzzy-GA based hybrid inference system and the other existing models used for effort prediction.
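The two error measures used in the comparison are simple to compute; a minimal sketch with made-up actual and predicted effort values (not the NASA data):

```python
import numpy as np

# Hypothetical actual and predicted effort values (person-months); illustrative only.
actual = np.array([24.0, 62.0, 11.3, 40.5, 8.4])
predicted = np.array([27.1, 55.9, 13.0, 36.2, 9.1])

# Mean Magnitude of Relative Error: average of |actual - predicted| / actual.
mmre = np.mean(np.abs(actual - predicted) / actual)

# Root Mean Squared Error.
rmse = np.sqrt(np.mean((actual - predicted) ** 2))

print(f"MMRE = {mmre:.3f}, RMSE = {rmse:.3f}")
```

Lower values of both measures indicate a better-fitting effort model.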

Keywords: Neuro-Fuzzy Model, Halstead Model, Walston-Felix Model, Bailey-Basili Model, Doty Model, GA Based Model, Genetic Algorithm.

5886 Flow Field Analysis of Submerged Horizontal Plate Type Breakwater

Authors: Ke Wang, Zhi-Qiang Zhang, Z. Chen

Abstract:

A submerged horizontal plate type breakwater has been identified as an efficient wave protection device for cage culture in marine fisheries. In order to reveal the wave elimination principle of this type of breakwater, the boundary element method is used to investigate the problem. The flow field and the trajectories of water particles are studied carefully. The flow field analysis shows that the interaction of the incident wave with the adverse current above the plate disturbs the water domain drastically, which slows down the horizontal and vertical velocities of the water particles.

Keywords: boundary element method, plate type breakwater, flow field analysis

5885 Evaluation of Solid Phase Micro-extraction with Standard Testing Method for Formaldehyde Determination

Authors: Y. L. Yung, Kong Mun Lo

Abstract:

In this study, solid phase micro-extraction (SPME) was optimized to improve the sensitivity and accuracy of formaldehyde determination for plywood panels. Further work was carried out to compare the newly developed technique with the existing method, in which formaldehyde collected in desiccators is reacted with acetyl acetone reagent (DC-AA). In SPME, formaldehyde was first derivatized with O-(2,3,4,5,6-pentafluorobenzyl)-hydroxylamine hydrochloride (PFBHA), and the analysis was then performed by gas chromatography in combination with mass spectrometry (GC-MS). SPME data for various wood species gave satisfactory results, with relative standard deviations (RSDs) in the range of 3.1-10.3%. The results were also well correlated with the DC values, giving a correlation coefficient (RSQ) of 0.959. Quantitative analysis of formaldehyde by SPME is thus an alternative with great potential in the wood industry.

Keywords: Formaldehyde, GC-MS, Plywood, SPME.

5884 Calibration of Parallel Multi-View Cameras

Authors: M. Ali-Bey, N. Manamanni, S. Moughamir

Abstract:

This paper focuses on the calibration problem of a multi-view shooting system designed for the production of 3D content for auto-stereoscopic visualization. The considered multi-view camera is characterized by coplanar image sensors that are decentered with respect to their corresponding optical axes. Based on the Faugeras and Toscani calibration approach, a calibration method is proposed for the case of a multi-view camera with parallel and decentered image sensors. First, the geometrical model of the shooting system is recalled and some industrial prototypes with shooting simulations are presented. Next, the development of the proposed calibration method is detailed. Finally, some simulation results are presented before ending with conclusions about this work.

Keywords: Auto-stereoscopic display, camera calibration, multi-view cameras, visual servoing

5883 Methodology: A Review in Modelling and Predictability of Embankment in Soft Ground

Authors: Bhim Kumar Dahal

Abstract:

Transportation network development in developing countries is proceeding at a rapid pace. The majority of such networks are railways and expressways, which pass through diverse topography, landforms and geological conditions despite the avoidance principle applied during route selection. Construction of these networks demands many low to high embankments, which require improvement of the foundation soil. This paper focuses on the various advanced ground improvement techniques used to improve soft soil, on the modelling approaches and on their predictability for embankment construction. The ground improvement techniques can be broadly classified into three groups, i.e. the densification group, the drainage and consolidation group, and the reinforcement group, which are discussed with some case studies. Various methods have been used to model embankments, from simple one-dimensional to complex three-dimensional models, using a variety of constitutive models. However, the reliability of the predictions is not found to improve systematically with the level of sophistication, and the predictions sometimes deviate by more than 60% from the monitored values despite the same level of sophistication being used. This deviation arises mainly from the choice of constitutive model, the assumptions made at different stages, deviations in the selection of model parameters, and simplifications made when physically modelling the ground conditions. It can be reduced by using optimization processes, optimization tools and sensitivity analysis of the model parameters, which guide the selection of appropriate model parameters.

Keywords: Embankment, ground improvement, modelling, model prediction.

5882 Experimental Design and Performance Analysis in Plasma Arc Surface Hardening

Authors: M.I.S. Ismail, Z. Taha

Abstract:

In this paper, experimental design using the Taguchi method is employed to optimize the processing parameters in the plasma arc surface hardening process. The processing parameters evaluated are arc current, scanning velocity and the carbon content of the steel. In addition, other significant effects, such as the interactions between processing parameters, are also investigated. An orthogonal array, the signal-to-noise (S/N) ratio and analysis of variance (ANOVA) are employed to investigate the effects of these processing parameters. Through this study, not only are the hardened depth increased and the surface roughness improved, but the parameters that significantly affect the hardening performance are also identified. Experimental results are provided to verify the effectiveness of this approach.
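For context, the Taguchi signal-to-noise ratios typically applied to such responses are easy to compute; a minimal sketch with invented replicate measurements (larger-the-better for hardened depth, smaller-the-better for surface roughness), not the paper's data:

```python
import numpy as np

# Hypothetical replicate measurements for one parameter combination; illustrative only.
hardened_depth_mm = np.array([1.10, 1.15, 1.08])    # larger is better
surface_roughness_um = np.array([2.4, 2.1, 2.6])    # smaller is better

# Taguchi signal-to-noise ratios in dB.
sn_depth = -10.0 * np.log10(np.mean(1.0 / hardened_depth_mm ** 2))   # larger-the-better
sn_roughness = -10.0 * np.log10(np.mean(surface_roughness_um ** 2))  # smaller-the-better

print(f"S/N (depth)     = {sn_depth:.2f} dB")
print(f"S/N (roughness) = {sn_roughness:.2f} dB")
```

In a Taguchi study these ratios are averaged per factor level, and the level with the highest S/N is selected.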

Keywords: Plasma arc, hardened depth, surface roughness, Taguchi method, optimization.

5881 The Auto-Tuning PID Controller for Interacting Water Level Process

Authors: Satean Tunyasrirut, Tianchai Suksri, Arjin Numsomran, Supan Gulpanich, Kitti Tirasesth

Abstract:

This paper presents an approach to designing an auto-tuning PID controller for an interacting water level process using the integral step response. The integral step response (ISR) method models a dynamic process easily, conveniently and efficiently, which makes it attractive for designing the auto-tuning PID controller. Our scheme uses the root locus technique to design the PID controller. MATLAB is used for modelling and testing of the control system. The experimental results for the interacting water level process satisfactorily illustrate both the transient response and the steady-state response.
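For orientation only, the controller being tuned is the ordinary PID law; a minimal discrete sketch on a crude first-order tank model (this illustrates the control law itself, not the paper's ISR/root-locus tuning procedure, and all numbers are made up):

```python
def pid_step(error, state, kp, ki, kd, dt):
    """Advance a discrete PID controller by one sample; state = (integral, previous_error)."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

# Regulate a simulated first-order tank level toward a setpoint of 1.0.
level, state, dt = 0.0, (0.0, 0.0), 0.1
for _ in range(200):
    u, state = pid_step(1.0 - level, state, kp=2.0, ki=0.5, kd=0.1, dt=dt)
    level += dt * (-0.2 * level + 0.2 * u)   # crude tank dynamics, illustrative only
print(round(level, 3))
```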

Keywords: Coupled-Tank, Interacting water level process, PID Controller, Auto-tuning.

5880 Evaluation of Heterogeneity of Paint Coating on Metal Substrate Using Laser Infrared Thermography and Eddy Current

Authors: S. Mezghani, E. Perrin, J. L. Bodnar, J. Marthe, B. Cauwe, V. Vrabie

Abstract:

Non-contact evaluation of the thickness of paint coatings can be attempted by different destructive and nondestructive methods such as cross-section microscopy, gravimetric mass measurement, magnetic gauges, Eddy current, ultrasound or terahertz techniques. Infrared thermography is a nondestructive and non-invasive method that can be envisaged as a useful tool to measure surface thickness variations by analyzing the temperature response. In this paper, the thermal quadrupole method for two-layered samples heated by a pulsed excitation is used first. By analyzing the thermal responses as a function of the thermal properties and thicknesses of both layers, optimal parameters for the excitation source can be identified. Simulations show that a pulsed excitation with a duration of ten milliseconds allows a substrate-independent thermal response to be obtained. Based on this result, an experimental setup consisting of a near-infrared laser diode and an infrared camera was then used to evaluate the variation of paint coating thickness between 60 μm and 130 μm on two samples. The results show that the parameters extracted from the thermal images are correlated with the thicknesses estimated by the Eddy current method. Laser pulsed thermography is thus an interesting alternative nondestructive method that can, moreover, be used for nonconductive substrates.

Keywords: Nondestructive, paint coating, thickness, infrared thermography, laser, heterogeneity.

5879 Reversible Watermarking for H.264/AVC Videos

Authors: Yih-Chuan Lin, Jung-Hong Li

Abstract:

In this paper, we propose a reversible watermarking scheme based on histogram shifting (HS) that embeds watermark bits into H.264/AVC standard videos by modifying the last nonzero level in the context adaptive variable length coding (CAVLC) domain. The proposed method collects all of the last nonzero coefficients (also called last-level coefficients) of the 4×4 sub-macroblocks in a macroblock and predicts the current last level from the last levels of neighboring blocks in order to embed watermark bits. The proposed method has low computational cost and allows reversible recovery. The experimental results demonstrate that our scheme causes acceptable degradation in video quality and output bit-rate for most test videos.
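To illustrate the general histogram-shifting idea in isolation (a textbook prediction-error version on a plain list of integers, not the paper's exact CAVLC last-level scheme):

```python
# Prediction errors equal to 0 carry one watermark bit each; positive errors are
# shifted by +1 so that embedding stays reversible. Textbook illustration only.
def hs_embed(errors, bits):
    bit_iter = iter(bits)
    marked = []
    for e in errors:
        if e == 0:
            marked.append(next(bit_iter, 0))   # embed a bit in the peak bin
        elif e > 0:
            marked.append(e + 1)               # shift to make room
        else:
            marked.append(e)
    return marked

def hs_extract(marked):
    bits, errors = [], []
    for e in marked:
        if e in (0, 1):
            bits.append(e)
            errors.append(0)
        elif e > 1:
            errors.append(e - 1)
        else:
            errors.append(e)
    return errors, bits

marked = hs_embed([0, 3, 0, -2, 1, 0], [1, 0, 1])
print(marked, hs_extract(marked))   # recovers both the original errors and the bits
```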

Keywords: Reversible data hiding, H.264/AVC standard, CAVLC, Histogram shifting

5878 Traffic Forecasting for Open Radio Access Networks Virtualized Network Functions in 5G Networks

Authors: Khalid Ali, Manar Jammal

Abstract:

In order to meet the stringent latency and reliability requirements of upcoming 5G networks, Open Radio Access Networks (O-RAN) have been proposed. The virtualization of O-RAN has allowed it to be treated as a Network Function Virtualization (NFV) architecture, while its components are considered Virtualized Network Functions (VNFs). Hence, intelligent Machine Learning (ML) based solutions can be utilized to apply different resource management and allocation techniques to O-RAN. However, intelligently allocating resources for O-RAN VNFs can prove challenging due to the dynamicity of traffic in mobile networks. Network providers need to dynamically scale the allocated resources in response to the incoming traffic. Elastically allocating resources provides a higher level of flexibility in the network, in addition to reducing the OPerational EXpenditure (OPEX) and increasing resource utilization. Most existing elastic solutions are reactive in nature, even though proactive approaches are more agile, since they scale instances ahead of time by predicting the incoming traffic. In this work, we propose and evaluate traffic forecasting models based on ML algorithms that predict future O-RAN traffic from previous traffic data. A detailed analysis of the traffic data was carried out to validate the quality and applicability of the traffic dataset. Two ML models were then proposed and evaluated based on their prediction capabilities.
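As a rough illustration of how such forecasting is usually framed (a generic sliding-window setup with a naive baseline on invented traffic counts; the paper's own ARIMA/LSTM models are not reproduced here):

```python
import numpy as np

# Hypothetical per-interval traffic volumes (e.g. requests per minute); illustrative only.
traffic = np.array([120, 135, 150, 170, 160, 155, 180, 210, 230, 220, 200, 190], float)

def make_windows(series, lookback):
    """Frame a univariate series as (past `lookback` values -> next value) pairs."""
    X = np.array([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X, y

X, y = make_windows(traffic, lookback=3)

# Naive baseline forecaster: predict the mean of the lookback window.
pred = X.mean(axis=1)
print(f"Baseline MAE: {np.mean(np.abs(pred - y)):.1f}")
```

An LSTM-style model would be fitted to the same (X, y) framing, whereas an ARIMA model is fitted on the raw series directly.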

Keywords: O-RAN, traffic forecasting, NFV, ARIMA, LSTM, elasticity.

5877 Fast Document Segmentation Using Contour and X-Y Cut Technique

Authors: Boontee Kruatrachue, Narongchai Moongfangklang, Kritawan Siriboon

Abstract:

This paper describes a fast and efficient method for page segmentation of documents containing non-rectangular blocks. The segmentation is based on an edge-following algorithm using a small window of 16 by 32 pixels. The segmentation is very fast, since only the border pixels of each paragraph are used rather than scanning the whole page. Still, the segmentation may contain errors if the space between blocks is smaller than the window used in edge following. Consequently, this paper reduces this error by first identifying the missed segmentation points using the direction information from edge following, and then applying an X-Y cut at the missed segmentation points to separate the connected columns. The advantage of the proposed method is the fast identification of missed segmentation points. This methodology is faster, with less overhead, than other algorithms that need to access many more pixels of a document.
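For readers unfamiliar with it, a plain recursive X-Y cut over projection profiles can be sketched as follows (a generic version on a binary image, not the paper's combined contour/X-Y method; `min_gap` is an assumed tuning parameter):

```python
import numpy as np

def _widest_gap(profile):
    """Return (start, end) of the widest interior run of zeros in a projection profile."""
    best, start = (0, -1), None
    for i, v in enumerate(profile):
        if v == 0:
            start = i if start is None else start
        else:
            if start is not None and (i - start) > (best[1] - best[0] + 1):
                best = (start, i - 1)
            start = None
    return best

def xy_cut(img, min_gap=5, y0=0, x0=0, boxes=None):
    """Recursive X-Y cut on a binary page image (1 = ink); returns block bounding boxes."""
    if boxes is None:
        boxes = []
    if not img.any():
        return boxes
    for axis in (0, 1):                      # 0: cut between rows, 1: cut between columns
        profile = img.sum(axis=1 - axis)
        s, e = _widest_gap(profile)
        if e - s + 1 >= min_gap and s > 0 and e < len(profile) - 1:
            cut = (s + e) // 2
            if axis == 0:
                xy_cut(img[:cut], min_gap, y0, x0, boxes)
                xy_cut(img[cut:], min_gap, y0 + cut, x0, boxes)
            else:
                xy_cut(img[:, :cut], min_gap, y0, x0, boxes)
                xy_cut(img[:, cut:], min_gap, y0, x0 + cut, boxes)
            return boxes
    boxes.append((y0, x0, y0 + img.shape[0], x0 + img.shape[1]))
    return boxes
```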

Keywords: Contour Direction Technique, Missed Segmentation Points, Page Segmentation, Recursive X-Y Cut Technique

5876 Particle Filter Supported with the Neural Network for Aircraft Tracking Based on Kernel and Active Contour

Authors: Mohammad Izadkhah, Mojtaba Hoseini, Alireza Khalili Tehrani

Abstract:

In this paper we present a new method for tracking flying targets in color video sequences based on contour and kernel information. The aim of this work is to overcome the problem of losing the target under changing light, large displacement, changing speed, and occlusion. The proposed method consists of three steps: estimating the target location with a particle filter, segmenting the target region using a neural network, and finding the exact contour with a greedy snake algorithm. The method uses both region and contour information to create the target candidate model, and this model is dynamically updated during tracking. To avoid the accumulation of errors during updating, the target region is given to a perceptron neural network that separates the target from the background. Its output is then used for the exact calculation of the size and center of the target, and also serves as the initial contour for the greedy snake algorithm to find the exact target edge. The proposed algorithm has been tested on a database containing many challenges such as the high speed and agility of aircraft, background clutter, occlusions, and camera movement. The experimental results show that the use of the neural network increases the accuracy of tracking and segmentation.
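For orientation, the particle filter stage can be sketched generically (a bootstrap filter over 2D positions with a Gaussian likelihood around an invented measurement; the paper's kernel/contour-based weighting is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, measurement, motion_std=5.0, meas_std=10.0):
    """One predict/update/resample cycle of a bootstrap particle filter over 2D positions."""
    # Predict: diffuse particles with a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: weight particles by a Gaussian likelihood of the measurement.
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std ** 2)
    weights /= weights.sum()
    # Resample: multinomial resampling back to uniform weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Toy run: track a target drifting to the right with noisy position measurements.
particles = rng.uniform(0, 100, size=(500, 2))
weights = np.full(500, 1.0 / 500)
for t in range(20):
    z = np.array([50.0 + 2 * t, 40.0]) + rng.normal(0, 3, 2)
    particles, weights = particle_filter_step(particles, weights, z)
print(particles.mean(axis=0).round(1))   # posterior mean as the position estimate
```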

Keywords: Video tracking, particle filter, greedy snake, neural network.

5875 Wet Polymeric Precipitation Synthesis for Monophasic Tricalcium Phosphate

Authors: I. Grigoraviciute-Puroniene, K. Tsuru, E. Garskaite, Z. Stankeviciute, A. Beganskiene, K. Ishikawa, A. Kareiva

Abstract:

Tricalcium phosphate (β-Ca3(PO4)2, β-TCP) powders were synthesized using a wet polymeric precipitation method for, to the best of our knowledge, the first time. The results of X-ray diffraction analysis showed the formation of an almost single Ca-deficient hydroxyapatite (CDHA) phase of poor crystallinity already at room temperature. On increasing the calcination temperature up to 800 °C, crystalline β-TCP was obtained as the main phase. It was demonstrated that infrared spectroscopy is a very effective method to characterize the formation of β-TCP. The SEM results showed that the β-TCP solids were homogeneous with a narrow particle size distribution; the powders consisted of spherical particles varying in size from 100 to 300 nm. Fabricated β-TCP specimens were placed in the bones of rats and maintained for 1-2 months.

Keywords: β-TCP, bone regeneration, wet chemical processing, polymeric precipitation.

5874 Expectation-Confirmation Model of Information System Continuance: A Meta-Analysis

Authors: Hui-Min Lai, Chin-Pin Chen, Yung-Fu Chang

Abstract:

The expectation-confirmation model (ECM) is one of the most widely used models for evaluating information system continuance, and it has been extended to other study contexts or expanded with other theoretical perspectives. However, combining the ECM with other theories, or differences in study context, may produce some disparities and thus generate inaccurate conclusions. Habit is considered to be an important factor that influences a user's continuance behavior. This paper therefore critically examines seven pairs of relationships from the original ECM together with the habit variable. A meta-analysis was used to trace the development of ECM research over the last 10 years, drawing on journal and conference papers published in 2005-2014. Forty-six journal articles and 19 conference papers were selected for analysis. The results confirm our prediction that high effect sizes were obtained for the seven pairs of relationships (ranging from r = 0.386 to r = 0.588). Furthermore, meta-analytic structural equation modeling was performed to test all relationships simultaneously. The results show that habit had a significant positive effect on continuance intention at p ≤ 0.05 and that the six other pairs of relationships were significant at p < 0.10. Based on the findings, we refined our original research model, and an alternative model is proposed for understanding and predicting information system continuance. Some theoretical implications are also discussed.

Keywords: Expectation-confirmation theory, expectation-confirmation model, meta-analysis, meta-analytic structural equation modeling.

5873 Methodology of Estimating Assembly Cost by MODAPTS

Authors: Heung Jae Cho, Jae Il Park

Abstract:

This paper presents the development of a MODAPTS-based cost estimating system to help designers estimate the manufacturing cost of assembled products, information that has traditionally come from workers in the field. Competition on manufacturing cost is becoming harder because of the development of information and telecommunication technologies as well as globalization, so the accuracy of assembly cost estimation is becoming more important. DFA and MODAPTS are useful methods for measuring working hours, but they have been used only as timetables. Therefore, in this paper, we propose a process for measuring working hours by MODAPTS that incorporates accurate information from the working field, and we present a method for estimating an accurate assembly cost from this real information. This research could be useful for designers, who can estimate the assembly cost more accurately, and is also relevant for companies concerned with reducing product cost.

Keywords: Cost estimation, DFA, MODAPTS, Assembly cost

5872 Motion Prediction and Motion Vector Cost Reduction during Fast Block Motion Estimation in MCTF

Authors: Karunakar A K, Manohara Pai M M

Abstract:

In the 3D-wavelet video coding framework, temporal filtering is done along the trajectory of motion using Motion Compensated Temporal Filtering (MCTF). Hence, a computationally efficient motion estimation technique is needed for MCTF. In this paper, a predictive technique is proposed to reduce the computational complexity of the MCTF framework by exploiting the high correlation among the frames in a Group Of Pictures (GOP). The proposed technique applies the coarse and fine searches of any fast block-based motion estimation algorithm only to the first pair of frames in a GOP. The generated motion vectors are supplied to the subsequent frames, and even to subsequent temporal levels, and only a fine search is carried out around those predicted motion vectors. Hence the coarse search is skipped for all motion estimation in a GOP except for the first pair of frames. The technique has been tested with different fast block-based motion estimation algorithms over different standard test sequences using MC-EZBC, a state-of-the-art scalable video coder. The simulation results reveal a substantial reduction (20.75% to 38.24%) in the number of search points during motion estimation, without compromising the quality of the reconstructed video compared to non-predictive techniques. Since the motion vectors of all pairs of frames in a GOP except the first pair differ by about ±1 from the motion vectors of the previous pair of frames, the number of bits required for motion vectors is also reduced by 50%.
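The heart of such a predictive scheme is a small refinement search around the inherited motion vector; a rough sketch (a generic SAD-based ±1 search over greyscale frames, with assumed helper names, not the paper's exact algorithm):

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(block_a.astype(int) - block_b.astype(int)).sum()

def refine_mv(cur, ref, top_left, block=16, predicted_mv=(0, 0), radius=1):
    """Refine an inherited motion vector with a +/-radius search around it (illustrative)."""
    y, x = top_left
    cur_blk = cur[y:y + block, x:x + block]
    best_mv, best_cost = predicted_mv, np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ry, rx = y + predicted_mv[0] + dy, x + predicted_mv[1] + dx
            if 0 <= ry <= ref.shape[0] - block and 0 <= rx <= ref.shape[1] - block:
                cost = sad(cur_blk, ref[ry:ry + block, rx:rx + block])
                if cost < best_cost:
                    best_cost, best_mv = cost, (predicted_mv[0] + dy, predicted_mv[1] + dx)
    return best_mv
```

Only the first frame pair of a GOP would pay for a full coarse-plus-fine search; every later block reuses its predicted vector and calls only this refinement.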

Keywords: Motion Compensated Temporal Filtering, predictive motion estimation, lifted wavelet transform, motion vector

5871 Neutrosophic Multiple Criteria Decision Making Analysis Method for Selecting Stealth Fighter Aircraft

Authors: C. Ardil

Abstract:

In this paper, a neutrosophic multiple criteria decision analysis method is proposed to select stealth fighter aircraft. Neutrosophic multiple criteria decision analysis methods are used to analyze the neutrosophic environment and give results under uncertainty and incompleteness. Neutrosophic numbers are used to evaluate alternatives over a set of evaluation criteria in decision making problems. Finally, the proposed model is applied to a practical decision problem for selecting stealth fighter aircraft.

Keywords: neutrosophic sets, multiple criteria decision making analysis, stealth fighter aircraft, aircraft selection, MCDMA, SVNNs

5870 Fuzzy Wavelet Packet based Feature Extraction Method for Multifunction Myoelectric Control

Authors: Rami N. Khushaba, Adel Al-Jumaily

Abstract:

The myoelectric signal (MES) is one of the biosignals utilized to help humans control equipment. Recent approaches to MES classification for controlling prosthetic devices by employing pattern recognition techniques have revealed two problems: first, the classification performance of the system starts degrading when the number of motion classes to be classified increases; second, solving the first problem has required additional complicated methods which increase the computational cost of a multifunction myoelectric control system. In an effort to solve these problems and to achieve a feasible design for real-time implementation with high overall accuracy, this paper presents a new method for feature extraction in MES recognition systems. The method extracts features using the Wavelet Packet Transform (WPT) applied to the MES from multiple channels, and then employs the Fuzzy c-means (FCM) algorithm to generate a measure that judges the suitability of the features for classification. Finally, Principal Component Analysis (PCA) is utilized to reduce the size of the data before computing the classification accuracy with a multilayer perceptron neural network. The proposed system produces powerful classification results (99% accuracy) using only a small portion of the original feature set.
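As an illustration of the back end of such a pipeline (PCA reduction followed by a multilayer perceptron via scikit-learn, run on random stand-in features rather than real MES data, so the printed accuracy is meaningless; the WPT/FCM feature-scoring stage is omitted):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Stand-in feature matrix: 600 windows x 64 wavelet-packet features, 6 motion classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 64))
y = rng.integers(0, 6, size=600)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# PCA keeps the leading components; the MLP performs the final classification.
clf = make_pipeline(PCA(n_components=10),
                    MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0))
clf.fit(X_tr, y_tr)
print(f"Held-out accuracy: {clf.score(X_te, y_te):.2f}")
```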

Keywords: Biomedical Signal Processing, Data Mining and Information Extraction, Machine Learning, Rehabilitation.

5869 The Investigation of Crack's Parameters on the V-Notch using Photoelasticity Method

Authors: M. Saravani, M. Azizi

Abstract:

V-notches are the most likely locations for the initiation of cracks in parts. The characteristics of cracks at the tip of the notch are influenced by the opening angle, tip radius and depth of the V-notch. In this study, the effect of the V-notch opening angle on the stress intensity factor and T-stress of a crack at the notch has been investigated. The experiments were carried out for different opening angles and various crack lengths under mode I loading using the photoelasticity method. The results illustrate that, for a constant crack length, the SIF and T-stress decrease as the angle increases. Besides, the effect of the V-notch angle is stronger for short cracks than for long cracks; these V-notch effects become negligible as the crack length increases, and the crack behavior can then be considered as that of a single-edge crack specimen. Finally, the results have been compared with numerical finite element analysis and good agreement was observed.

Keywords: Photoelasticity, Stress intensity factor, T-stress, V-notch.

5868 An Integrated DEMATEL-QFD Model for Medical Supplier Selection

Authors: Mehtap Dursun, Zeynep Şener

Abstract:

Supplier selection is considered one of the most critical issues encountered by operations and purchasing managers seeking to sharpen a company's competitive advantage. In this paper, a novel fuzzy multi-criteria group decision making approach integrating quality function deployment (QFD) and the decision making trial and evaluation laboratory (DEMATEL) method is proposed for supplier selection. The proposed methodology makes it possible to consider the impacts of inner dependence among supplier assessment criteria. A house of quality (HOQ), which translates purchased product features into supplier assessment criteria, is built using the weights obtained by the DEMATEL approach to determine the desired levels of the supplier assessment criteria. Supplier alternatives are ranked by a distance-based method.
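For reference, the core DEMATEL step (direct-relation matrix to total-relation matrix, from which criterion weights are derived) is compact; a minimal sketch with a made-up 3-criterion influence matrix, not the paper's data:

```python
import numpy as np

# Hypothetical direct-relation matrix: X[i, j] = rated influence of criterion i on j.
X = np.array([[0, 3, 2],
              [1, 0, 4],
              [2, 1, 0]], dtype=float)

# Normalize by the largest row/column sum, then compute the total-relation matrix
# T = N (I - N)^(-1), as in the standard DEMATEL procedure.
s = max(X.sum(axis=1).max(), X.sum(axis=0).max())
N = X / s
T = N @ np.linalg.inv(np.eye(len(X)) - N)

prominence = T.sum(axis=1) + T.sum(axis=0)   # D + R: overall importance of each criterion
relation = T.sum(axis=1) - T.sum(axis=0)     # D - R: net cause (+) or net effect (-)
weights = prominence / prominence.sum()      # one common way to turn prominence into weights
print(weights.round(3), relation.round(3))
```

Weights of this kind are what would feed the HOQ to set the target levels of the supplier assessment criteria.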

Keywords: DEMATEL, Group decision making, QFD, Supplier selection.

5867 An Improved Construction Method for MIHCs on Cycle Composition Networks

Authors: Hsun Su, Yuan-Kang Shih, Shin-Shin Kao

Abstract:

Many well-known interconnection networks, such as k-ary n-cubes, recursive circulant graphs, generalized recursive circulant graphs, circulant graphs and so on, are known to belong to the family of cycle composition networks. Recently, various studies about mutually independent Hamiltonian cycles, abbreviated as MIHCs, on interconnection networks have been published. In this paper, using an improved construction method, we obtain MIHCs on cycle composition networks under a much weaker condition than the known result. In fact, we establish the existence of MIHCs in cycle composition networks, and the result is optimal in the sense that the number of MIHCs we construct is maximal.

Keywords: Hamiltonian cycle, k-ary n-cube, cycle composition networks, mutually independent.

5866 A P-SPACE Algorithm for Groebner Bases Computation in Boolean Rings

Authors: Quoc-Nam Tran

Abstract:

The theory of Groebner bases, which has recently been honored with the ACM Paris Kanellakis Theory and Practice Award, has become a crucial building block of computer algebra and is widely used in science, engineering, and computer science. It is well known that Groebner bases computation is EXP-SPACE in a general setting. In this paper, we give an algorithm showing that Groebner bases computation is P-SPACE in Boolean rings. We also show that, with this discovery, the Groebner bases method can theoretically be as efficient as other methods for automated verification of hardware and software. Additionally, Groebner bases have many useful and interesting properties, including the ability to efficiently convert bases between different orders of variables, which makes them a promising method in automated verification.

Keywords: Algorithm, Complexity, Groebner basis, Applications of Computer Science.

5865 Robotic Arm Control with Neural Networks Using Genetic Algorithm Optimization Approach

Authors: A. Pajaziti, H. Cana

Abstract:

In this paper, a structural genetic algorithm is used to optimize a neural network that controls the joint movements of a robotic arm. The robotic arm has also been modeled in 3D and simulated in real time in MATLAB. It is found that neural networks provide a simple and effective way to control the robot tasks. Computer simulation examples are given to illustrate the significance of this method. By combining the genetic algorithm optimization method and neural networks for the given robotic arm with 5 D.O.F., the results show that the base joint movement overshooting time without a controller was about 0.5 seconds, while with the neural network controller (optimized with the genetic algorithm) it was about 0.2 seconds, and a population size of 150 gave the best results.
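As a toy illustration of evolving neural network weights with a genetic algorithm (a tiny fixed-topology network fitted to a stand-in trajectory; this is not the paper's structural GA, MATLAB arm model, or controller):

```python
import numpy as np

rng = np.random.default_rng(1)

def nn_forward(weights, x):
    """Tiny 1-4-1 feedforward network whose 13 parameters come from a flat vector."""
    w1, b1 = weights[:4].reshape(4, 1), weights[4:8]
    w2, b2 = weights[8:12], weights[12]
    hidden = np.tanh(w1 @ x[None, :] + b1[:, None])
    return w2 @ hidden + b2

# Stand-in target behaviour (e.g. a desired joint trajectory over a normalized input).
x = np.linspace(-1, 1, 50)
target = np.sin(np.pi * x)

def fitness(weights):
    return -np.mean((nn_forward(weights, x) - target) ** 2)   # higher is better

# A very small genetic algorithm: truncation selection plus Gaussian mutation.
pop = rng.normal(size=(150, 13))          # population size 150, as in the abstract
for _ in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-50:]]                        # keep the fittest third
    children = parents[rng.integers(0, 50, size=100)] + rng.normal(0, 0.1, (100, 13))
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print(f"best MSE after evolution: {-fitness(best):.4f}")
```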

Keywords: Robotic Arm, Neural Network, Genetic Algorithm, Optimization.

5864 Influence of Technology Parameters on Properties of AA6061/SiC Composites Produced By Kobo Method

Authors: J. Wozniak, M. Kostecki, K. Broniszewski, W. Bochniak, A. Olszyna

Abstract:

The influence of extrusion parameters on the surface quality and properties of AA6061 + x vol.% SiC (x = 0, 2.5, 5, 7.5, 10) composites is discussed in this paper. The average particle sizes of the AA6061 and SiC powders were 10.6 μm and 0.42 μm, respectively. Two series of composites (series I: compacts preheated at the extrusion temperature for 0.5 h and water-cooled directly after the process; series II: compacts preheated for 3 hours and not cooled) were consolidated via powder metallurgy processing and extruded by the KoBo method. High density values were achieved for both series of composites. Better surface quality was observed for the series II composites. Moreover, these composites showed lower (compared to series I) but more uniform strength properties over the cross-section of the bar. Microstructure and Young's modulus investigations were also carried out.

Keywords: aluminum alloy, extrusion, metal matrix composites, microstructure

5863 Constructing Distinct Kinds of Solutions for the Time-Dependent Coefficients Coupled Klein-Gordon-Schrödinger Equation

Authors: Anupma Bansal

Abstract:

We seek exact solutions of the coupled Klein-Gordon-Schrödinger equation with variable coefficients with the aid of the Lie classical approach. Using the Lie classical method, we are able to derive symmetries that are used to reduce the coupled system of partial differential equations to ordinary differential equations. From the reduced differential equations we derive some new exact solutions of the coupled Klein-Gordon-Schrödinger equations involving special functions such as Airy wave functions, Bessel functions and Mathieu functions.

Keywords: Klein-Gordon-Schrödinger Equation, Lie Classical Method, Exact Solutions

5862 Adaptive WiFi Fingerprinting for Location Approximation

Authors: Mohd Fikri Azli bin Abdullah, Khairul Anwar bin Kamarul Hatta, Esther Jeganathan

Abstract:

WiFi has become an essential technology that is widely used nowadays, largely because of its convenience for mobile devices, and it is relied upon by Internet users worldwide. Many of today's location-based services use Wireless Fidelity (WiFi) signal fingerprinting; a well-known example that has gained popularity is Foursquare. In this work, the WiFi signal is used to estimate the user's or client's location. As with GPS, the fingerprinting method needs a floor plan to increase the accuracy of the location estimate. Still, the inconsistency of the WiFi signal makes the estimate differ between time intervals, so an adaptive method is needed to obtain the most accurate signal at all times. WiFi signals are heavily distorted by external factors such as physical objects, radio frequency interference, electrical interference, and environmental factors, to name a few. Because of these factors, this work reduces the signal noise and estimates the location using a Nearest Neighbour approach based on the signal's past activity, raising the accuracy to more than 80%. The repository further increases the accuracy by using Artificial Neural Network (ANN) pattern matching; it acts as the server and supports the decisions of the client-side application. Numerous previous works have adopted methods of collecting signal strengths into a repository over the years, but these were mostly static. In this work, we highlight how the adaptive method matches the received signal to the data in the repository so that location estimation can be done more accurately. Adaptive updating allows the latest location fingerprint to be stored in the repository; any redundant location fingerprints are removed and only the updated version of each fingerprint is kept. How the user's location is predicted is described in more detail in the proposed solution. After studying previous works, we found the Artificial Neural Network to be the most feasible method for updating the repository and making it adaptive; its function is to perform pattern matching between the WiFi signal and the existing data in the repository.
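A bare-bones version of the nearest-neighbour fingerprint match is shown below (Euclidean distance over RSSI vectors; the reference positions and signal values are invented, and the ANN repository update is not included):

```python
import numpy as np

# Hypothetical fingerprint database: known (x, y) positions and their RSSI readings
# over the same three access points; values are illustrative only.
positions = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
fingerprints = np.array([[-40, -70, -65],
                         [-70, -42, -68],
                         [-66, -72, -45],
                         [-72, -66, -50]], dtype=float)

def locate(rssi):
    """Return the stored position whose fingerprint is closest in Euclidean distance."""
    distances = np.linalg.norm(fingerprints - np.asarray(rssi, dtype=float), axis=1)
    return positions[np.argmin(distances)]

print(locate([-44, -69, -66]))   # should match the first reference point
```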

Keywords: Adaptive Repository, Artificial Neural Network, Location Estimation, Nearest Neighbour Euclidean Distance, WiFi RSSI Fingerprinting.

5861 Cryogenic Freezing Process Optimization Based On Desirability Function on the Path of Steepest Ascent

Authors: R. Uporn, P. Luangpaiboon

Abstract:

This paper presents a comparative study of statistical methods for the multi-response surface optimization of a cryogenic freezing process. Taguchi design and analysis and the steepest ascent method, based on the desirability function, were used to ascertain the influential factors of the cryogenic freezing process and their optimal levels. The preferred levels of the set point, exhaust fan speed, retention time and flow direction are -90 °C, 20 Hz, 18 minutes and counter-current, respectively. The overall desirability level is 0.7044.
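For context, the overall desirability to which the 0.7044 figure refers is conventionally the geometric mean of the individual desirabilities (a standard textbook definition, not a formula quoted from the paper):

```latex
D = \left( \prod_{i=1}^{n} d_i \right)^{1/n}, \qquad 0 \le d_i \le 1,
```

where each d_i maps the i-th response onto [0, 1] according to whether that response is to be maximized, minimized or held at a target value.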

Keywords: Cryogenic Freezing Process, Taguchi Design and Analysis, Response Surface Method, Steepest Ascent Method and Desirability Function Approach.

5860 Detecting Earnings Management via Statistical and Neural Network Techniques

Authors: Mohammad Namazi, Mohammad Sadeghzadeh Maharluie

Abstract:

Predicting earnings management is vital for capital market participants, financial analysts and managers. The aim of this research is to answer the following question: is there a significant difference between a regression model and neural network models in predicting earnings management, and which leads to a superior prediction? To approach this question, a linear regression (LR) model was compared with two neural networks, a Multi-Layer Perceptron (MLP) and a Generalized Regression Neural Network (GRNN). The population of this study comprises 94 companies listed on the Tehran Stock Exchange (TSE) from 2003 to 2011. After the results of all models were acquired, ANOVA was applied to test the hypotheses. In general, the statistical results showed that the precision of the GRNN did not differ significantly from that of the MLP, while the mean square errors of the MLP and GRNN differed significantly from that of the multivariable LR model. These findings support the notion of nonlinear behavior in earnings management. Therefore, it is more appropriate for capital market participants to analyze earnings management using neural network techniques rather than linear regression models.
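A schematic of such a model comparison (scikit-learn linear regression versus an MLP regressor on synthetic stand-in data, scored by mean squared error; the study's variables, GRNN and ANOVA tests are not reproduced):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in data: a few firm-level predictors and a mildly nonlinear target.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

lr = LinearRegression().fit(X_tr, y_tr)
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)

print(f"LR  MSE: {mean_squared_error(y_te, lr.predict(X_te)):.3f}")
print(f"MLP MSE: {mean_squared_error(y_te, mlp.predict(X_te)):.3f}")
```

On nonlinear data of this kind the MLP's error is typically lower, mirroring the kind of nonlinear advantage the study reports.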

Keywords: Earnings management, generalized regression neural networks, linear regression, multi-layer perceptron, Tehran stock exchange.

5859 Despeckling of Synthetic Aperture Radar Images Using Inner Product Spaces in Undecimated Wavelet Domain

Authors: Syed Musharaf Ali, Muhammad Younus Javed, Naveed Sarfraz Khattak, Athar Mohsin, Umar Farooq

Abstract:

This paper introduces an effective speckle reduction method for synthetic aperture radar (SAR) images using inner product spaces in the undecimated wavelet domain. There are two major areas of the projection-onto-span algorithm where improvement can be made: the first is the use of the undecimated wavelet transform instead of the discrete wavelet transform, and the second is an additional smoothing step, namely a directional smoothing filter. The proposed method does not need any noise estimation or thresholding technique. Moreover, it gives good results on both single-polarimetric and fully polarimetric SAR images.
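As an illustration of the directional smoothing idea (one common formulation that averages along the locally most homogeneous direction; not necessarily the exact filter used in the paper):

```python
import numpy as np

def directional_smooth(img, length=5):
    """Replace each pixel by the mean of its most homogeneous directional neighbourhood."""
    half = length // 2
    pad = np.pad(img.astype(float), half, mode="reflect")
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    # Sample offsets along four directions: horizontal, vertical and both diagonals.
    dirs = [[(0, k) for k in range(-half, half + 1)],
            [(k, 0) for k in range(-half, half + 1)],
            [(k, k) for k in range(-half, half + 1)],
            [(k, -k) for k in range(-half, half + 1)]]
    for i in range(h):
        for j in range(w):
            samples = [np.array([pad[i + half + di, j + half + dj] for di, dj in d])
                       for d in dirs]
            best = min(samples, key=np.var)    # direction with the least variation
            out[i, j] = best.mean()
    return out

# Toy usage on a small noisy image.
noisy = np.random.default_rng(0).gamma(1.0, 1.0, size=(32, 32))
print(directional_smooth(noisy).shape)
```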

Keywords: Directional Smoothing, Inner product, Length of vector, Undecimated wavelet transformation.
