Search results for: error distribution
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3073

2203 Electroencephalography Based Brain-Computer Interface for Cerebellum Impaired Patients

Authors: Young-Seok Choi

Abstract:

In healthy humans, the cortical brain rhythm shows specific mu (~6-14 Hz) and beta (~18-24 Hz) band patterns in the cases of both real and imaginary motor movements. As cerebellar ataxia is associated with impairment of precise motor movement control as well as motor imagery, ataxia is an ideal model system in which to study the role of the cerebellocortical circuit in rhythm control. We hypothesize that the EEG characteristics of ataxic patients differ from those of controls during the performance of a Brain-Computer Interface (BCI) task. Ataxia and control subjects showed a similar distribution of mu power during cued relaxation. During cued motor imagery, however, the ataxia group showed significant spatial distribution of the response, while the control group showed the expected decrease in mu-band power (localized to the motor cortex).
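For readers who want to reproduce the kind of band-power analysis described above, the short Python sketch below estimates mu- and beta-band power from a single EEG channel with a Welch periodogram. It is only an illustration, not the authors' pipeline: the sampling rate, the synthetic test signal and the band edges (taken from the ranges quoted in the abstract) are assumptions.

```python
# Minimal sketch of mu/beta band-power estimation from one EEG channel.
# Assumptions (not from the paper): 256 Hz sampling and a synthetic signal.
import numpy as np
from scipy.signal import welch

fs = 256.0                                    # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)                  # 10 s of data
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # toy signal

def band_power(x, fs, f_lo, f_hi):
    """Integrate the Welch PSD between f_lo and f_hi (in Hz)."""
    f, psd = welch(x, fs=fs, nperseg=512)
    mask = (f >= f_lo) & (f <= f_hi)
    return float(np.sum(psd[mask]) * (f[1] - f[0]))

mu_power = band_power(eeg, fs, 6.0, 14.0)     # mu band as quoted above
beta_power = band_power(eeg, fs, 18.0, 24.0)  # beta band as quoted above
print(f"mu power: {mu_power:.3f}, beta power: {beta_power:.3f}")
```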

Keywords: Brain-computer interface, EEG, modulation, ataxia.

2202 A New Time-Frequency Speech Analysis Approach Based On Adaptive Fourier Decomposition

Authors: Liming Zhang

Abstract:

In this paper, a new time-frequency speech analysis approach based on adaptive Fourier decomposition (AFD) is proposed. Because the fundamental frequency of speech signals often fluctuates, classical spectrogram analysis based on the short-time Fourier transform (STFT) suffers from the difficulty of window-size selection. AFD is a newly developed signal decomposition theory designed for time-varying, non-stationary signals; its distinguishing feature is that it provides an instantaneous frequency for each decomposed component, which makes time-frequency analysis easier. Experiments are conducted on a sample sentence from the TIMIT Acoustic-Phonetic Continuous Speech Corpus. The results show that the AFD-based time-frequency distribution outperforms the STFT-based one.
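AFD itself is not widely available as a packaged routine, but the window-size dilemma that motivates it is easy to reproduce with an ordinary STFT. The sketch below is a baseline illustration, not the authors' AFD implementation; the chirp test signal, sampling rate and window lengths are assumptions.

```python
# STFT baseline illustrating the window-size trade-off that motivates AFD.
import numpy as np
from scipy.signal import stft

fs = 8000.0                                      # sampling rate (assumed)
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * (200 * t + 150 * t**2))   # chirp with a drifting "pitch"

for nperseg in (64, 1024):                       # short vs. long analysis window
    f, tt, Z = stft(x, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
    df, dt = f[1] - f[0], tt[1] - tt[0]
    print(f"window={nperseg:5d}: frequency step {df:7.1f} Hz, time step {dt*1e3:6.2f} ms")

# A short window gives fine time steps but coarse frequency bins, and vice
# versa; AFD sidesteps this fixed trade-off by attaching an instantaneous
# frequency to each decomposed component.
```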

Keywords: Adaptive Fourier decomposition, instantaneous frequency, speech analysis, time-frequency distribution.

2201 Reliable Capacitated Facility Location Problem Considering Maximal Covering

Authors: Mehdi Seifbarghy, Sajjad Jalali, Seyed Habib A. Rahmati

Abstract:

This paper provides a framework that simultaneously incorporates the reliability issue, as a response to disruption in distribution systems, and partial covering theory, as a response to limited coverage radii and economic preferences, into the traditional capacitated facility location problem. We develop a bi-objective, discrete-scenario model that minimizes expected cost and maximizes demand coverage over a three-echelon supply chain network, allowing multiple capacity levels for the provider-side layers and imposing a gradual coverage function on the distribution centers (DCs). In addition to solving the model by objective aggregation in the LINGO software, a variant of the LP-metric method called the min-max approach is proposed, and different aspects of the corresponding model are explored.

Keywords: Reliability Cost, Partial Covering, LP-Metric

2200 Asymmetric and Kind of Bracing Effects on Steel Frames Under Earthquake Loads

Authors: Mahmoud Miri, Soliman Maramaee

Abstract:

Because of architectural conditions and the intended use of a structure, the centers of mass and stiffness sometimes do not coincide, and the structure is irregular. A structure may also be asymmetric because of asymmetric bracing in plan, which leads to an unbalanced distribution of stiffness, or because of an unbalanced distribution of mass. Both conditions lead to eccentricity and torsion in the structure. The inability of ordinary codes to evaluate the performance of steel structures against earthquakes has motivated design based on performance levels or the capacity spectrum. Using these methods, it is possible to design a structure whose behavior under different earthquakes is predictable. In this article, five-story buildings with different degrees of asymmetry, arising from stiffness changes and from the type of bracing (X and chevron bracing), were designed. Nonlinear static and dynamic analyses under three acceleration records were performed, and finally the performance level of each structure was evaluated.

Keywords: Asymmetric, irregular, seismic analysis, torsion.

2199 The Effects of Wind Forcing on Surface Currents on the Continental Shelf Surrounding Rottnest Island

Authors: Jennifer Penton, Charitha Pattiaratchi

Abstract:

Surface currents play a major role in the distribution of contaminants and the connectivity of marine populations, and can influence the vertical and horizontal distribution of nutrients within the water column. This paper aims to determine the effects of sea-breeze wind patterns on the climatology of the surface currents on the continental shelf surrounding Rottnest Island, WA, Australia. The alternating wind patterns allow full cyclic rotations of wind direction, permitting interpretation of the effect of the wind on the surface currents. It was found that the surface currents clearly follow the northbound Capes Current only when the Fremantle Doctor sets in. Surface currents react within an hour to a change in wind direction, allowing southerly currents to dominate during strong northerly sea breezes, often followed by mixed, eddy-dominated currents in the intervening periods.

Keywords: HF radar, surface currents, sea breeze.

2198 A Perceptually Optimized Wavelet Embedded Zero Tree Image Coder

Authors: A. Bajit, M. Nahid, A. Tamtaoui, E. H. Bouyakhf

Abstract:

In this paper, we propose a Perceptually Optimized Embedded ZeroTree Image Coder (POEZIC) that applies perceptual weighting to the wavelet transform coefficients before SPIHT encoding, in order to reach a targeted bit rate with an improvement in perceptual quality relative to the coding quality obtained with the SPIHT algorithm alone. The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the human visual system (HVS) and plays an important role in our POEZIC quality assessment. Our coder is based on a vision model that incorporates various masking effects of HVS perception: it weights the wavelet coefficients according to that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed from 1) luminance masking and contrast masking, 2) the contrast sensitivity function (CSF), used to derive the perceptual decomposition weighting, and 3) the wavelet error sensitivity (WES), used to reduce perceptual quantization errors. The new perceptually optimized codec has the same complexity as the original SPIHT technique, yet the experimental results show that our coder achieves very good performance in terms of quality measurement.

Keywords: DWT, linear-phase 9/7 filter, 9/7 Wavelets Error Sensitivity WES, CSF implementation approaches, JND Just Noticeable Difference, Luminance masking, Contrast masking, standard SPIHT, Objective Quality Measure, Probability Score PS.

2197 Experimental Study of the Extraction of Copper(II) from Sulphuric Acid by Means of Sodium Diethyldithiocarbamate (SDDT)

Authors: S.Touati, A.H. Meniai

Abstract:

The present work examines the extraction of copper(II) from sulphuric acid solutions with sodium diethyldithiocarbamate (SDDT); six organic diluents were tested: dichloromethane, chloroform, carbon tetrachloride, toluene, xylene and cyclohexane. The SDDT/chloroform pair proved the most selective in removing the copper cations and was therefore used throughout the experimental study. The effects of operating parameters such as the initial concentration of the extracting agent, the agitation time, the agitation speed and the acid concentration were considered. For an initial Cu(II) concentration of 63 ppm in a 0.5 M sulphuric acid solution and 20 mg of extracting agent, an extraction percentage of about 97.8% and a distribution coefficient of 44.42 were obtained, confirming the performance of the SDDT/chloroform pair.

Keywords: Copper (II), Distribution coefficient, Extraction, SDDT, Sulphuric acid.

2196 Quadrotor Black-Box System Identification

Authors: Ionel Stanculeanu, Theodor Borangiu

Abstract:

This paper presents a new approach to the identification of the quadrotor dynamic model using black-box system identification. The paper also considers the problems that appear during identification in closed loop and offers a technical solution for overcoming the correlation between the input and the noise present in the output.

Keywords: System identification, UAV, prediction error method, quadrotor.

2195 Role of Oxide Scale Thickness Measurements in Boiler Conditions Assessment

Authors: M. Alardhi, A. Almazrouee, S. Alsaleh

Abstract:

Oxide scale thickness measurements are used to assess the life of components operating in high-temperature environments. Such measurements provide an approximation of the temperature inside components such as reheater and superheater tubes. A number of failures were encountered in one of the boilers of a Kuwaiti power plant. These failures occurred mainly in the first row of the primary superheater tubes; the specialist engineer therefore decided to replace them during the annual shutdown. As a failure-analysis tool, oxide scale thickness measurements were used to investigate the temperature distribution in these tubes. In this paper, the oxide scale thickness of these tubes was measured and used for analysis. The measurements illustrate the distribution of heat transfer across the primary superheater tubes in the boiler system. Remarks on, and analysis of, the boiler design are also provided.

Keywords: Super heater tubes, oxide scale measurements, overheating.

2194 Split-Pipe Design of Water Distribution Networks Using a Combination of Tabu Search and Genetic Algorithm

Authors: J. Tospornsampan, I. Kita, M. Ishii, Y. Kitamura

Abstract:

In this paper a combination of two heuristic algorithms, genetic algorithm and tabu search, is proposed. It was developed to obtain the least-cost split-pipe design of looped water distribution networks. The proposed combination algorithm was applied to three well-known water distribution networks taken from the literature. Combining the two heuristics aims at enhancing their strengths and compensating for their weaknesses: tabu search is rather systematic and deterministic and uses adaptive memory in the search process, while the genetic algorithm is a probabilistic, stochastic optimization technique in which the solution space is explored by generating candidate solutions. Split-pipe design may not be realistic in practice, but for optimization purposes optimal solutions are always achieved with split-pipe design. The solutions obtained in this study show that the least-cost solutions obtained from the split-pipe design are always better than those obtained from the single-pipe design. The results obtained from the combination approach demonstrate its ability and effectiveness in solving combinatorial optimization problems. The solutions obtained are very satisfactory and of high quality; for two of the networks they are the lowest-cost solutions yet presented in the literature. The combination concept proposed in this study is expected to offer useful benefits in diverse problems.

Keywords: GAs, Heuristics, Looped network, Least-cost design, Pipe network, Optimization, TS

2193 A Propagator Method like Algorithm for Estimation of Multiple Real-Valued Sinusoidal Signal Frequencies

Authors: Sambit Prasad Kar, P.Palanisamy

Abstract:

In this paper a novel method for estimating the frequencies of multiple one-dimensional real-valued sinusoidal signals in the presence of additive Gaussian noise is proposed. A computationally simple frequency estimation method with efficient statistical performance is attractive in many array signal processing applications. The prime focus of this paper is to combine a subspace-based technique with a simple peak-search approach. The paper presents a variant of the Propagator Method (PM) in which a collaborative approach combining SUMWE and the propagator method is applied to estimate multiple real-valued sine-wave frequencies. A new data model is proposed in which the dimension of the signal subspace equals the number of frequencies present in the observation, whereas in the conventional MUSIC method for real-valued sinusoids the signal subspace dimension is twice the number of frequencies. The statistical properties of the proposed method are studied, and an explicit expression for the asymptotic (large-sample) mean-squared error (MSE), or variance of the estimation error, is derived. The performance of the method is demonstrated, and the theoretical analysis is substantiated, through numerical examples. The proposed method achieves consistently high estimation accuracy and frequency resolution at lower SNR, which is verified by simulation against conventional MUSIC, ESPRIT and the propagator method.

Keywords: Frequency estimation, peak search, subspace-based method without eigen decomposition, quadratic convex function.

2192 Numerical Solution of the Equations of Salt Diffusion into the Potato Tissues

Authors: Behrouz Mosayebi Dehkordi, Frazaneh Hashemi, Ramin Mostafazadeh

Abstract:

Fick's second law for unsteady-state diffusion of salt into potato tissue was solved numerically. The set of equations resulting from the implicit formulation was solved using the Thomas algorithm to find the salt concentration profiles in the solid phase. The required effective diffusivity and equilibrium distribution coefficient were determined experimentally. Cylindrical potato samples were infused with aqueous NaCl solutions of 1-3% concentration, and variations in the salt concentration of the brine were determined over time. Solute concentration profiles of the samples were determined by measuring the salt uptake of potato slices. For the studied conditions, the equilibrium distribution coefficients were found to depend on salt concentration, whereas the effective diffusivity was only slightly affected by brine concentration.
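The implicit scheme and tridiagonal solve mentioned above follow a standard pattern. The sketch below shows a backward-Euler discretization of Fick's second law solved with the Thomas algorithm; the slab geometry, diffusivity, time step and boundary concentration are illustrative assumptions rather than the paper's measured values.

```python
# Backward-Euler (implicit) finite differences for dC/dt = D d2C/dx2,
# solved with the Thomas (tridiagonal) algorithm.  All values are assumed.
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal, d = rhs."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

D = 1.0e-9          # effective diffusivity, m^2/s (assumed)
L, nx = 0.01, 51    # 1 cm slab, number of grid points (assumed)
dx = L / (nx - 1)
dt = 10.0           # time step, s (assumed)
r = D * dt / dx**2

C = np.zeros(nx)    # initial salt concentration in the tissue
C_surface = 3.0     # brine concentration at the boundary, % w/w (assumed)
C[0] = C[-1] = C_surface

for _ in range(360):                      # march one hour forward in time
    a = np.full(nx - 2, -r)               # sub-diagonal
    b = np.full(nx - 2, 1.0 + 2.0 * r)    # main diagonal
    c = np.full(nx - 2, -r)               # super-diagonal
    d = C[1:-1].copy()
    d[0] += r * C_surface                 # Dirichlet boundary contributions
    d[-1] += r * C_surface
    C[1:-1] = thomas(a, b, c, d)

print("centre concentration after 1 h:", round(C[nx // 2], 4))
```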

Keywords: Brine, Diffusion, Diffusivity, Modeling, Potato

2191 The Modified Eigenface Method using Two Thresholds

Authors: Yan Ma, ShunBao Li

Abstract:

A new approach based on Turk and Pentland's eigenface method is adopted in this paper. It was found that the probability density function of the distance between the projection vector of the input face image and the average projection vector of a subject in the face database follows a Rayleigh distribution. In order to decrease the false acceptance rate and increase the recognition rate, the input face image is recognized using two thresholds: an acceptance threshold and a rejection threshold. We also find that the values of the two thresholds approach each other as the number of trials increases. During training, in order to reduce the number of trials, the projection vectors for each subject are averaged. Recognition experiments using the proposed algorithm show that the recognition rate reaches 92.875% while the average number of judgments is only 2.56.
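A minimal sketch of the two-threshold idea is given below: fit a Rayleigh distribution to the distances between an input projection and a subject's mean projection, then derive an acceptance threshold and a rejection threshold from its quantiles. The distance samples and the quantile levels are illustrative assumptions, not the paper's values.

```python
# Two-threshold decision from a Rayleigh fit of genuine-match distances.
import numpy as np
from scipy.stats import rayleigh

rng = np.random.default_rng(0)
# Stand-in for distances measured on genuine attempts during training.
genuine_distances = rayleigh.rvs(scale=4.0, size=500, random_state=rng)

loc, scale = rayleigh.fit(genuine_distances, floc=0)   # fix location at 0
accept_thr = rayleigh.ppf(0.90, loc=loc, scale=scale)  # accept below this
reject_thr = rayleigh.ppf(0.99, loc=loc, scale=scale)  # reject above this

def decide(distance):
    """Accept, reject, or ask for another trial (the in-between zone)."""
    if distance <= accept_thr:
        return "accept"
    if distance >= reject_thr:
        return "reject"
    return "retry"

for d in (3.0, 8.0, 15.0):
    print(f"distance {d:5.1f} -> {decide(d)}")
```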

Keywords: Eigenface, Face Recognition, Threshold, Rayleigh Distribution, Feature Extraction

2190 Using Artificial Neural Network to Forecast Groundwater Depth in Union County Well

Authors: Zahra Ghadampour, Gholamreza Rakhshandehroo

Abstract:

A concern that researchers usually face in different applications of Artificial Neural Networks (ANNs) is determination of the size of the effective domain in a time series. In this paper, a trial-and-error method was used on a groundwater depth time series to determine the size of the effective domain for an observation well in Union County, New Jersey, U.S. Domains of the 20, 40, 60, 80, 100, and 120 preceding days were examined, and 80 days was taken as the effective length of the domain. Data sets for the different domains were fed to a feed-forward back-propagation ANN with one hidden layer, and the groundwater depths were forecasted. The root mean square error (RMSE) and correlation factor (R2) of estimated and observed groundwater depths were determined for all domains. In general, the groundwater depth forecast improved, as evidenced by lower RMSEs and higher R2 values, as the domain length increased from 20 to 120 days. However, 80 days was selected as the effective domain because the improvement was less than 1% beyond that length. Forecasted groundwater depths utilizing measured daily data (set #1) and data averaged over the effective domain (set #2) were compared. It was postulated that the more accurate nature of the measured daily data was the reason for the better forecast, with a lower RMSE (0.1027 m compared to 0.255 m), in set #1. However, the size of the input data in this set was 80 times that of set #2, a factor that may increase the computational effort unpredictably. It was concluded that data averaged over the 80-day domain may be successfully utilized to reduce the size of the input data sets considerably while maintaining the effective information in the data.
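The windowing experiment described above can be sketched as follows: feed the preceding N daily depths to a small feed-forward network and compare RMSE and R2 across window lengths. The synthetic depth series, network size and train/test split are assumptions; the original study used measured well data.

```python
# Window-length comparison for a feed-forward groundwater-depth forecaster.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(1)
days = np.arange(2000)
# Toy depth series: seasonal cycle plus noise (assumed, not measured data).
depth = 10 + 0.5 * np.sin(2 * np.pi * days / 365) + 0.05 * rng.standard_normal(days.size)

def windowed(series, n):
    """Build inputs from the n preceding days and next-day targets."""
    X = np.array([series[i - n:i] for i in range(n, len(series))])
    return X, series[n:]

for n in (20, 40, 80, 120):
    X, y = windowed(depth, n)
    split = int(0.8 * len(y))                       # simple chronological split
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    model.fit(X[:split], y[:split])
    pred = model.predict(X[split:])
    rmse = mean_squared_error(y[split:], pred) ** 0.5
    print(f"window {n:3d} days: RMSE = {rmse:.4f} m, R2 = {r2_score(y[split:], pred):.3f}")
```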

Keywords: Neural networks, groundwater depth, forecast.

2189 Time and Wavelength Division Multiplexing Passive Optical Network Comparative Analysis: Modulation Formats and Channel Spacings

Authors: A. Fayad, Q. Alqhazaly, T. Cinkler

Abstract:

In light of the substantial increase in end-user requirements and the incessant need of network operators to upgrade the capabilities of access networks, in this paper the performance of different modulation formats on an eight-channel Time and Wavelength Division Multiplexing Passive Optical Network (TWDM-PON) transmission system is examined and compared. The limitations and features of the modulation formats are determined in order to outline the most suitable design for enhancing the data rate and transmission reach and obtaining the best network performance. The considered modulation formats are On-Off Keying Non-Return-to-Zero (NRZ-OOK), Carrier Suppressed Return to Zero (CSRZ), Duo Binary (DB), Modified Duo Binary (MODB), Quadrature Phase Shift Keying (QPSK), and Differential Quadrature Phase Shift Keying (DQPSK). The performance is analyzed by varying the transmission distance and bit rate under different channel spacings. Furthermore, the system is evaluated in terms of minimum Bit Error Rate (BER) and Quality factor (Qf) without applying any dispersion compensation technique or optical amplifier. The Optisystem software was used for the simulations.
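When BER and Q factor are reported together, they are usually related through the Gaussian-noise approximation BER = 0.5 erfc(Q/sqrt(2)). The sketch below applies this standard relation; it is an illustration of the conversion only, not the Optisystem receiver model used in the paper.

```python
# Standard Gaussian-noise relation between Q factor and BER.
import numpy as np
from scipy.special import erfc, erfcinv

def ber_from_q(q):
    """BER for a given Q factor: 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * erfc(q / np.sqrt(2.0))

def q_from_ber(ber):
    """Inverse relation: Q = sqrt(2) * erfcinv(2 * BER)."""
    return np.sqrt(2.0) * erfcinv(2.0 * ber)

for q in (6.0, 7.0, 8.0):
    print(f"Q = {q:.1f}  ->  BER = {ber_from_q(q):.2e}")
print(f"BER = 1e-9  ->  Q = {q_from_ber(1e-9):.2f}")   # approximately 6.0
```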

Keywords: Bit Error Rate, BER, Carrier Suppressed Return to Zero, CSRZ, Duo Binary, DB, Differential Quadrature Phase Shift Keying, DQPSK, Modified Duo Binary, MODB, On-Off Keying Non-Return-to-Zero, NRZ-OOK, Quality factor, Qf, Time and Wavelength Division Multiplexing Passive Optical Network, TWDM-PON.

2188 A Bathtub Curve from Nonparametric Model

Authors: Eduardo C. Guardia, Jose W. M. Lima, Afonso H. M. Santos

Abstract:

This paper presents a nonparametric method to obtain the hazard-rate "bathtub curve" for power system components. The model is a mixture of the three known phases of a component's life, the decreasing failure rate (DFR), the constant failure rate (CFR) and the increasing failure rate (IFR), each represented by a parametric Weibull model. The parameters are obtained by simultaneously fitting the model to the kernel nonparametric hazard rate curve. From the Weibull parameters and the failure rate curves, the useful lifetime and the characteristic lifetime are defined. To demonstrate the model, the historical time-to-failure data of distribution transformers are used as an example. The resulting bathtub curve gives the failure rate over the equipment lifetime, which can be applied in economic and replacement decision models.
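A hazard rate built from three Weibull phases can be sketched directly. The snippet below sums a DFR, a CFR and an IFR Weibull hazard term; the parameter values are illustrative assumptions, not the transformer parameters fitted in the paper.

```python
# Bathtub hazard as the sum of three Weibull hazard terms:
#   h(t) = sum_k (beta_k / eta_k) * (t / eta_k)**(beta_k - 1)
# with beta < 1 (DFR), beta = 1 (CFR) and beta > 1 (IFR).
import numpy as np

def weibull_hazard(t, beta, eta):
    return (beta / eta) * (t / eta) ** (beta - 1.0)

def bathtub_hazard(t, params):
    return sum(weibull_hazard(t, b, e) for b, e in params)

params = [(0.5, 2.0),    # infant mortality (DFR)  -- assumed values
          (1.0, 40.0),   # useful life (CFR)
          (5.0, 35.0)]   # wear-out (IFR)

t = np.linspace(0.1, 45.0, 9)            # years in service
for ti, hi in zip(t, bathtub_hazard(t, params)):
    print(f"t = {ti:5.1f} y   h(t) = {hi:.4f} failures/year")
```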

Keywords: Bathtub curve, failure analysis, lifetime estimation, parameter estimation, Weibull distribution.

2187 Automated Textile Defect Recognition System Using Computer Vision and Artificial Neural Networks

Authors: Atiqul Islam, Shamim Akhter, Tumnun E. Mursalin

Abstract:

Least developed countries (LDCs) such as Bangladesh, which earns 25% of its revenue from textile exports, need to produce less defective textile in order to minimize production cost and time. Inspection processes in these industries are mostly manual and time consuming. Reducing errors in identifying fabric defects requires a more automated and accurate inspection process. Addressing this need, this research implements a textile defect recognizer that uses computer vision methodology combined with multi-layer neural networks to identify four classes of textile defects. The recognizer, suitable for LDCs, identifies fabric defects at economical cost and provides a less error-prone inspection system in real time. To generate the input set for the neural network, the recognizer first captures digital fabric images with an image acquisition device and converts the RGB images into binary images through restoration and local thresholding. The outputs of the processed image, namely the area of the faulty portion, the number of objects in the image and the sharp factor of the image, are then fed as the input layer to the neural network, which uses the back-propagation algorithm to compute the weights and produces the desired defect classifications as output.
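The feature-extraction stage described above (binary image, defect area, object count, sharp factor) can be sketched with standard array tools. The synthetic fabric image, the fixed threshold standing in for the local-thresholding step, and the mean-gradient definition of the sharp factor are assumptions.

```python
# Feature extraction for a fabric-defect classifier: threshold, defect area,
# connected-object count, and a simple sharpness measure.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
img = rng.normal(0.5, 0.05, size=(128, 128))     # plain fabric texture (synthetic)
img[40:60, 30:90] = 0.1                           # a dark streak "defect"
img[90:100, 10:20] = 0.9                          # a bright spot "defect"

threshold = 0.2                                   # stand-in for local thresholding
binary = (img < threshold) | (img > 0.8)          # mark abnormal pixels

labels, n_objects = ndimage.label(binary)         # connected defect regions
defect_area = int(binary.sum())                   # faulty-portion area in pixels
gy, gx = np.gradient(img)
sharpness = float(np.mean(np.hypot(gx, gy)))      # crude "sharp factor" (assumed definition)

features = np.array([defect_area, n_objects, sharpness])
print("feature vector fed to the neural network:", features)
```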

Keywords: Computer vision, image acquisition device, machine vision, multi-layer neural networks.

2186 Nonlinear Thermal Hydraulic Model to Analyze Parallel Channel Density Wave Instabilities in Natural Circulation Boiling Water Reactor with Asymmetric Power Distribution

Authors: Sachin Kumar, Vivek Tiwari, Goutam Dutta

Abstract:

This paper investigates parallel channel instabilities of a natural circulation boiling water reactor (NCBWR). A thermal-hydraulic model is developed to simulate two-phase flow behavior in the NCBWR, incorporating the ex-core components and recirculation loop, namely the steam separator, downcomer, lower horizontal section and upper horizontal section, and a numerical analysis is then carried out for parallel channel instabilities of the reactor undergoing both in-phase and out-of-phase modes of oscillation. To analyze the relative effect of the ex-core components and recirculation loop on reactor stability, the marginal stability point is obtained at a particular core inlet enthalpy, first without and then with these components. Numerical simulations are also conducted to determine the relative dominance of the two modes of oscillation, i.e. in-phase and out-of-phase. Simulations are further carried out for channels subjected to an asymmetric power distribution while keeping the inlet enthalpy the same.

Keywords: Asymmetric power distribution, Density wave oscillations, In-phase and out-of-phase modes of instabilities, Natural circulation boiling water reactor

2185 An Evaluation Method for Two-Dimensional Position Errors and Assembly Errors of a Rotational Table on a 4 Axis Machine Tool

Authors: Jooho Hwang, Chang-Kyu Song, Chun-Hong Park

Abstract:

This paper describes a method to measure and compensate a 4-axis ultra-precision machine tool that generates micro patterns on large surfaces. Such a grooving machine is usually used to make micro molds for many electrical parts such as light guide plates for LCDs and fuel cells. The ultra-precision machine tool has three linear axes and one rotary table, and shaping is usually used to generate the micro patterns. For machining a pyramid pattern with 50 μm pitch and 25 μm height using a bite with a 90° wedge angle, one linear axis is used for the long-stroke motion that provides high cutting speed, and the other linear axes are used for feeding. Triangular patterns can be generated by many long strokes of one axis; a 90° rotation of the workpiece is then needed to form pyramid patterns by superposing two machined triangular patterns. For the two-dimensional positioning error, the out-of-plane straightness of the two axes and the squareness between the axes are important. Positioning errors, straightness and squareness were measured with a laser interferometer system, then compensated and confirmed according to ISO 230-6. One of the error motions that is difficult to measure is the squareness, or parallelism, between the rotary table axis and a linear axis; it was investigated by moving the rotary table and the XY axes simultaneously. This compensation method is introduced in this paper.

Keywords: Ultra-precision machine tool, multi-axis errors, squareness, positioning errors.

2184 Generative Adversarial Network Based Fingerprint Anti-Spoofing Limitations

Authors: Yehjune Heo

Abstract:

Fingerprint anti-spoofing approaches have been actively developed and applied in real-world applications. One of the main problems in fingerprint anti-spoofing is the lack of robustness to unseen samples, especially in real-world scenarios. A possible solution is to generate artificial but realistic fingerprint samples and use them for training in order to achieve good generalization. This paper presents experimental and comparative results with currently popular GAN-based methods, using realistic synthesis of fingerprints in training in order to increase performance. Among the various GAN models, the most popular, StyleGAN, is used for the experiments. The CNN models were first trained with a dataset that did not contain generated fake images, and the accuracy and mean average error rate were recorded. Then, the generated fake images (fake images of live fingerprints and fake images of spoof fingerprints) were each combined with the original images (real images of live fingerprints and real images of spoof fingerprints), and various CNN models were trained. For each CNN model trained with the dataset containing the generated fake images, the best performance, together with the accuracy and the mean average error rate, was recorded. We observe that current GAN-based approaches need significant improvements in anti-spoofing performance, although the overall quality of the synthesized fingerprints seems reasonable. We include an analysis of this performance degradation, especially with a small number of samples. In addition, we suggest several approaches towards improved generalization with a small number of samples, by focusing on what GAN-based approaches should and should not learn.

Keywords: Anti-spoofing, CNN, fingerprint recognition, GAN.

2183 Increase of Energy Efficiency by Means of Application of Active Bearings

Authors: Alexander Babin, Leonid Savin

Abstract:

In the present paper, increasing the energy efficiency of a thrust hybrid bearing with a central feeding chamber is considered. A mathematical model was developed to determine the pressure distribution and the reaction forces, based on the Reynolds equation and the equations for the static characteristics. The boundary problem of calculating the pressure distribution was solved using the method of finite differences. For various types of lubricant, geometries and operating characteristics, the axial gaps that provide the minimum friction coefficient can be determined. The next part of the study considers the application of servovalves in order to maintain the desired position of the rotor. The report features the calculation results and an analysis of the influence of the operating and geometric parameters on the energy efficiency of mechatronic fluid-film bearings.
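The finite-difference treatment of the Reynolds equation can be illustrated on a simpler geometry. The sketch below solves the one-dimensional Reynolds equation for a plane slider with a linearly converging film; it demonstrates the method only and is not the authors' thrust-pad model, and all numerical values are assumptions.

```python
# Finite-difference solution of the 1-D Reynolds equation
#   d/dx( h^3 dp/dx ) = 6 * mu * U * dh/dx ,   p(0) = p(L) = 0,
# for a plane slider with a linearly converging film (illustrative values).
import numpy as np

mu, U, L = 0.05, 1.0, 0.1             # viscosity (Pa s), sliding speed (m/s), length (m)
h1, h2, n = 80e-6, 40e-6, 101         # inlet/outlet film thickness (m), grid nodes
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
h = h1 + (h2 - h1) * x / L            # film thickness profile

A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0             # p = 0 at both ends (ambient)
for i in range(1, n - 1):
    he = 0.5 * (h[i] + h[i + 1])      # film thickness at half nodes
    hw = 0.5 * (h[i] + h[i - 1])
    A[i, i - 1] = hw**3 / dx**2
    A[i, i + 1] = he**3 / dx**2
    A[i, i] = -(he**3 + hw**3) / dx**2
    b[i] = 6.0 * mu * U * (h[i + 1] - h[i - 1]) / (2.0 * dx)

p = np.linalg.solve(A, b)                            # pressure distribution (Pa)
load = float(np.sum(0.5 * (p[1:] + p[:-1])) * dx)    # load capacity per unit width (N/m)
print(f"peak pressure {p.max():.0f} Pa, load capacity {load:.1f} N/m")
```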

Keywords: Active bearings, energy efficiency, mathematical model, mechatronics, thrust multipad bearing.

2182 Surface Elevation Dynamics Assessment Using Digital Elevation Models, Light Detection and Ranging, GPS and Geospatial Information Science Analysis: Ecosystem Modelling Approach

Authors: Ali K. M. Al-Nasrawi, Uday A. Al-Hamdany, Sarah M. Hamylton, Brian G. Jones, Yasir M. Alyazichi

Abstract:

Surface elevation dynamics have always responded to disturbance regimes. Creating digital elevation models (DEMs) to detect surface dynamics has led to the development of several methods, devices and data clouds. DEMs can provide accurate and quick results cost-efficiently in comparison with traditional geomatics survey techniques. Nowadays, remote sensing datasets, including LiDAR point clouds combined with GIS analytic tools, have become a primary source for creating DEMs. However, these data need to be tested for error detection and correction. This paper evaluates DEMs from different data sources over time for Apple Orchard Island, a coastal site in southeastern Australia, in order to detect surface dynamics. Thirty chosen locations were then examined in the field to test the error of the DEM surface detection using high-resolution global positioning systems (GPS). The results show significant surface elevation changes on Apple Orchard Island: accretion occurred on most of the island, while surface elevation loss due to erosion was limited to the northern and southern parts. Concurrently, a differential correction and validation method was applied to identify errors in the dataset. The resulting DEMs demonstrated a small error ratio (≤ 3%) when compared with the fieldwork survey using RTK-GPS. As modern modelling approaches need to become more effective and accurate, applying several tools to create different DEMs on a multi-temporal scale would allow straightforward predictions within given time and cost frames, with more comprehensive coverage and greater accuracy. With a DEM technique in an eco-geomorphic context, such insights into ecosystem dynamics at a coastal intertidal system would be valuable for assessing the accuracy of the predicted eco-geomorphic risk for sustainable conservation management. Demonstrating this framework for evaluating historical and current anthropogenic and environmental stressors on coastal surface elevation dynamics could be profitably applied worldwide.

Keywords: DEMs, eco-geomorphic dynamic processes, geospatial information science, remote sensing, surface elevation changes.

2181 Theoretical Background of Dividend Taxation

Authors: Margareta Ilkova, Petr Teply

Abstract:

The article deals with dividends and their distribution to investors from a theoretical point of view. Some studies have analyzed the market reaction to dividend announcements and found that a change in dividend policy is associated with abnormal returns around the announcement date. Other research has directly questioned investors about their dividend preferences and beliefs. Investors want dividends for many reasons: some explain the dividend preference by the existence of transaction costs; some investors prefer a dividend today because it is less risky; and managers have private information about the firm. The most controversial theory of dividend policy was developed by Modigliani and Miller (1961), who demonstrated that in perfect and complete capital markets dividend policy is irrelevant and the value of the company is independent of its payout policy. Nevertheless, in the real world capital markets are imperfect because of asymmetric information, transaction costs, incomplete contracting possibilities and taxes.

Keywords: dividend distribution, taxation, payout policy, investor, Modigliani and Miller theorem

2180 A Study on Algorithm Fusion for Recognition and Tracking of Moving Robot

Authors: Jungho Choi, Youngwan Cho

Abstract:

This paper presents an algorithm for the recognition and tracking of moving objects; a 1/10-scale model car is used to verify its performance. The proposed algorithm merges the SURF algorithm with the Lucas-Kanade algorithm. SURF is robust to changes in contrast, size and rotation and can recognize objects, but it is slow because of its computational complexity. The Lucas-Kanade algorithm is fast but cannot recognize objects; its optical flow compares the previous and current frames so that the movement of a pixel can be tracked. The fusion algorithm was created to solve the problems that occur when the two algorithms are combined: a Kalman filter is used to estimate the next position and to compensate for the accumulated error. The resolution of the camera (vision sensor) is fixed at 640x480. To verify the performance of the fusion algorithm, tests were compared against the SURF algorithm alone in three situations: driving straight, driving on a curve, and recognizing cars behind obstacles. Situations similar to real driving are possible using a model vehicle. The proposed fusion algorithm showed superior performance and accuracy compared with the existing object recognition and tracking algorithms. We will further improve the performance of the algorithm so that it can be tested on images of actual road environments.
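Of the components mentioned above, the Kalman filter step is the easiest to isolate. The sketch below runs a constant-velocity Kalman filter over noisy position detections, the role it plays between the recognition stage and the tracker; the frame rate, noise levels and simulated track are assumptions, and the SURF and Lucas-Kanade stages are not included.

```python
# Constant-velocity Kalman filter smoothing noisy per-frame detections.
import numpy as np

dt = 1.0 / 30.0                                    # frame period (assumed 30 fps)
F = np.array([[1, 0, dt, 0],                       # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)          # only position is measured
Q = 1e-2 * np.eye(4)                               # process noise (assumed)
R = 4.0 * np.eye(2)                                # measurement noise (assumed)

x = np.zeros(4)                                    # initial state estimate
P = np.eye(4) * 100.0                              # initial covariance

rng = np.random.default_rng(3)
true_pos = np.array([100.0, 200.0])
velocity = np.array([60.0, -20.0])                 # pixels per second

for k in range(90):                                # three seconds of frames
    true_pos = true_pos + velocity * dt
    z = true_pos + rng.normal(0.0, 2.0, size=2)    # noisy detection

    x = F @ x                                      # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                            # update with detection z
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P

print("estimated position:", np.round(x[:2], 1), " true:", np.round(true_pos, 1))
```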

Keywords: SURF, Optical Flow Lucas-Kanade, Kalman Filter, object recognition, object tracking.

2179 Effect of Highly Pressurized Dispersion Arc Nozzle on Breakup of Oil Leakage in Offshore

Authors: N. M. M. Ammar, S. M. Mustaqim, N. M. Nadzir

Abstract:

The most important problem with oil spills in sea water is reducing the size of the spill. This study deals with the development of a high-pressure nozzle using a dispersion method for oil leakage offshore. 3D numerical simulation results were obtained using the ANSYS Fluent 13.0 code and correlated with experimental data for validation. This paper studies the flow speed and pressure produced by two different nozzle geometries, with the aim of generating a spray pattern suitable for dispersant application. The size distribution of droplets generated by the nozzle is calculated for pressures ranging from 2 to 6 bar. The results of both analyses show significant spray patterns and flow distributions, as well as distances. The results also show a significant contribution to mitigating the effect of oil leakage in terms of the diameter of the oil spill break-up.

Keywords: Arc Nozzle, CFD simulation, Droplets, Oil Spills.

2178 A Statistical Model for the Dynamics of Single Cathode Spot in Vacuum Cylindrical Cathode

Authors: Po-Wen Chen, Jin-Yu Wu, Md. Manirul Ali, Yang Peng, Chen-Te Chang, Der-Jun Jan

Abstract:

The dynamics of cathode spots has become a major topic in vacuum arc discharge, with high academic interest and wide application potential. In this article, using a three-dimensional statistical model, we simulate the distribution of the ignition probability of a new cathode spot occurring on the old cathode spot surface under different magnetic pressures and at different arcing times. The model for the ignition probability of a new cathode spot is formulated for two typical situations: a pure isotropic random walk in the absence of an external magnetic field, and retrograde motion in an external magnetic field parallel to the cathode surface. We mainly focus on the relationship between the ignition probability density distribution of a new cathode spot and the external magnetic field.
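The two situations described above can be mimicked with a small Monte Carlo experiment: a purely isotropic random walk of the ignition site, and the same walk with a constant drift standing in for the retrograde motion in a transverse magnetic field. Step sizes, drift magnitude and sample counts are illustrative assumptions, not the paper's fitted model.

```python
# Monte Carlo sketch: isotropic vs. drift-biased random walk of the
# ignition site of a new cathode spot.
import numpy as np

rng = np.random.default_rng(4)
n_spots, n_steps, step = 20000, 50, 1.0            # spots, steps, step length (assumed)

def final_positions(drift):
    """Walk each spot n_steps; a nonzero drift mimics the retrograde case."""
    angles = rng.uniform(0.0, 2.0 * np.pi, size=(n_spots, n_steps))
    steps = step * np.stack([np.cos(angles), np.sin(angles)], axis=-1)
    steps += drift                                  # zero drift -> isotropic walk
    return steps.sum(axis=1)                        # (n_spots, 2) final offsets

iso = final_positions(np.array([0.0, 0.0]))
retro = final_positions(np.array([0.3, 0.0]))       # assumed retrograde drift direction

for name, pos in (("isotropic", iso), ("retrograde", retro)):
    print(f"{name:10s}: mean offset {np.round(pos.mean(axis=0), 2)}, "
          f"spread {np.round(pos.std(axis=0), 2)}")
```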

Keywords: Cathode spot, vacuum arc discharge, transverse magnetic field, random walk.

2177 Closed Form Delay Model for On-Chip VLSI RLCG Interconnects for Ramp Input for Different Damping Conditions

Authors: Susmita Sahoo, Madhumanti Datta, Rajib Kar

Abstract:

Fast delay estimation methods, as opposed to simulation techniques, are needed for incremental performance-driven layout synthesis. On-chip inductive effects are becoming predominant in deep submicron interconnects due to increasing clock speeds and circuit complexity. Inductance causes noise in signal waveforms, which can adversely affect the performance of the circuit and signal integrity. Several approaches have been put forward that consider inductance in on-chip interconnect modelling. However, at even higher frequencies, of the order of a few GHz, the shunt dielectric loss component becomes comparable to the other electrical parameters in high-speed VLSI design. To cope with this effect, the on-chip interconnect has to be modelled as a distributed RLCG line. Elmore delay based methods, although efficient, cannot accurately estimate the delay for an RLCG interconnect line. In this paper, an accurate analytical delay model is derived, based on the first and second moments of RLCG interconnection lines. The proposed model considers the effects of both the inductance and conductance matrices. We have performed simulations in a 0.18 μm technology node, and an error as low as 5% has been achieved with the proposed model when compared to SPICE. The importance of the conductance matrices in interconnect modelling is also discussed, and it is shown that if G is neglected in interconnect line modelling, the result is a delay error of up to 6% compared to SPICE.

Keywords: Delay modelling, on-chip interconnect, RLCG interconnect, ramp input, damping, VLSI.

2176 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models

Authors: I. V. Pinto, M. R. Sooriyarachchi

Abstract:

It is frequently observed that data arising in many settings have a hierarchical or nested structure. Multilevel modelling is a modern approach to handling this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex in nature and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood of order 1 and order 2 (MQL1, MQL2) and penalized quasi-likelihood of order 1 and order 2 (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset; therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to usage, it is equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary-response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v 2.19) with varying numbers of clusters, cluster sizes and intra-cluster correlations. The test maintained the desirable Type-I error for models estimated using PQL2, and it failed for almost all the combinations of MQL. The power of the test was adequate for most of the combinations under all estimation methods except MQL1. Moreover, models were fitted using the four methods to a real-life dataset, and the performance of the test was compared for each model.
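The data-generating step behind such a simulation study can be sketched as a two-level random-intercept logistic model. The snippet below only simulates clustered binary data and reports the latent-scale intra-cluster correlation; it does not reproduce the MQL/PQL estimation or the GOF test, and the cluster counts, sizes and coefficients are assumptions.

```python
# Simulate binary responses from a two-level random-intercept logistic model:
#   logit P(y_ij = 1) = beta0 + beta1 * x_ij + u_j,   u_j ~ N(0, sigma_u^2).
import numpy as np

rng = np.random.default_rng(5)
n_clusters, cluster_size = 50, 20
beta0, beta1, sigma_u = -0.5, 1.0, 0.8             # fixed effects and RE s.d. (assumed)

u = rng.normal(0.0, sigma_u, size=n_clusters)      # cluster random intercepts
x = rng.normal(size=(n_clusters, cluster_size))    # level-1 covariate
eta = beta0 + beta1 * x + u[:, None]               # linear predictor
p = 1.0 / (1.0 + np.exp(-eta))
y = rng.binomial(1, p)                             # binary multilevel responses

# Latent-scale intra-cluster correlation for the logistic link.
icc = sigma_u**2 / (sigma_u**2 + np.pi**2 / 3.0)
print(f"simulated {y.size} observations in {n_clusters} clusters, latent ICC = {icc:.3f}")
```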

Keywords: Goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, type-I error, penalized quasi-likelihood, power, quasi-likelihood.

2175 Particle Concentration Distribution under Idling Conditions in a Residential Underground Garage

Authors: Yu Zhao, Shinsuke Kato, Jianing Zhao

Abstract:

Particles exhausted from cars have adverse impacts on human health. This study developed a three-dimensional particle dispersion numerical model, including particle coagulation, to simulate the particle concentration distribution under idling conditions in a residential underground garage. The simulation results demonstrate that particles disperse much faster in the vertical direction than in the horizontal direction. The enhancement of particle dispersion in the vertical direction caused by an increasing number of cars with their engines running is much stronger than the enhancement in the exhaust direction. In this study, particle dispersion from adjacent cars has little influence on one another. The average particle concentration after 120 seconds of exhaust is 1.8-4.5 times higher than the initial total particle concentration in the ambient environment. Particle pollution in the residential underground garage is severe.

Keywords: Dispersion, Idling conditions, Particle concentration, Residential underground garage.

2174 Strategic Risk Issues for Film Distributors of Hindi Film Industry in Mumbai: A Grounded Theory Approach

Authors: R. Dyondi, S. K. Jha

Abstract:

The purpose of this paper is to address the strategic risk issues surrounding Hindi film distribution in Mumbai for a film distributor, who acts as an entrepreneur when launching a product (a movie) in the market (a film territory). The paper undertakes a fundamental review of films and risk in the Hindi film industry and applies the grounded theory technique to understand the complex phenomenon of risk-taking behavior of film distributors (both independents and studios) in Mumbai. Rich in-depth interviews with distributors are coded to develop core categories through constant comparison, leading to conceptualization of the phenomenon of interest. This paper is a first-of-its-kind attempt to understand the risk behavior of a distributor, which is akin to entrepreneurial risk behavior under conditions of uncertainty.

Keywords: Entrepreneurial Risk Behavior, Film Distribution Strategy, Hindi Film Industry, Risk.
